
Tuesday, August 20, 2019

Definition And Characteristics Of Cluster Computing

The evolution of networks, and especially of the Internet, means that today more resources are needed to process data more quickly. Since a single machine could not meet these requirements, the idea emerged of distributing tasks over several machines running simultaneously. In what follows, we describe the characteristics of a cluster and its various categories. We then look at networks (architecture, topologies, components), and finally discuss how communication works within clusters.

2 Definition and characteristics of a cluster

Clustering (also called a server cluster or server farm) designates technologies that consolidate multiple independent computers, called nodes, to enable comprehensive management and go beyond the limitations of a single computer, in order to:
- increase availability;
- facilitate scalability;
- enable load balancing;
- facilitate the management of resources (CPU, RAM, hard disks, network bandwidth).

Server clusters are an inexpensive approach consisting of setting up multiple computers that appear on the network as a single, more capable computer; they are widely used for parallel computing. This maximizes the use of resources by distributing the different workloads across the nodes. A major advantage of a cluster is that it is no longer necessary to buy expensive multiprocessor servers: smaller systems can be connected to one another as needs change.

There are different types of cluster:

Extended distance cluster: a cluster whose nodes are located in different data centers separated by distance. Extended distance clusters are connected through cables that guarantee high-speed network access between the nodes, provided all the guidelines for a fault-tolerant architecture are followed.
The maximum distance between nodes in an extended distance cluster is defined by the limits of the networking technology and of data replication.

Metropolitan cluster: a cluster geographically distributed within the confines of a metropolitan area, requiring right-of-way authorization for the cabling and for the redundant network components used for data replication.

Continental cluster: a group of clusters that use public carrier networks and public data networks for data replication and cluster communication, to support failover between clusters located in different data centers. Continental clusters are often located in different cities or countries and may extend over hundreds or thousands of kilometers.

3 General architecture of a cluster

A cluster is essentially composed of several machines (PCs, servers), operating systems, interconnection technologies, a parallel programming environment, middleware, and applications.

Fig 1 : General architecture of a cluster

4 Different classes of cluster

4.1 High availability cluster

4.1.1 Architecture

Fig 2 : Architecture of a high availability cluster

4.1.2 Definition

High availability clusters are used to protect one or more sensitive applications. To do this, the application and all the resources it needs are monitored permanently. For robust protection, this monitoring covers the hardware, the network, and the operating system. Generally, several products are used to protect multiple applications on the same node, but some solutions can protect as many applications as desired; with these, it is not necessary to fail over all applications together, and the decision can be made case by case. If the cluster software detects a failure, it first tries to restart the resource locally, on the same node, a configured number of times. If the resource does not restart, the software switches the application over to another node.
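This restart-then-failover behavior can be sketched as follows (a minimal illustration; the node names, retry limit, and the try_restart callback are invented for the example and stand in for a real cluster product's probes and start scripts):

```python
MAX_LOCAL_RESTARTS = 3  # hypothetical retry limit before failing over

def handle_failure(resource, active_node, standby_nodes, try_restart):
    """React to a detected failure: retry locally, then fail over.

    `try_restart(resource, node)` stands in for whatever start script a
    real cluster product would run; it returns True on success.
    """
    for _ in range(MAX_LOCAL_RESTARTS):
        if try_restart(resource, active_node):
            return ("restarted locally", active_node)
    # Local restarts exhausted: switch the application to another node.
    for node in standby_nodes:
        if try_restart(resource, node):
            return ("failed over", node)
    return ("outage", None)

# Toy simulation: node1 refuses to restart the service, node2 accepts it.
broken = {"node1"}
outcome = handle_failure("webapp", "node1", ["node2", "node3"],
                         lambda res, node: node not in broken)
print(outcome)  # ('failed over', 'node2')
```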
In either case, the client will simply notice that the application is now located on another node of the cluster and access it as before. Typical high availability clusters contain only a few nodes, but clusters of 32 or 64 nodes are possible. If a cluster contains more than two nodes, different failover plans can be defined, which helps limit the loss of performance after a failover.

4.2 High performance cluster

4.2.1 Architecture

Fig 3 : Architecture of a high performance cluster

4.2.2 Definition

The main function of a high performance cluster (also called High Performance Technical Clustering, HPC) is to increase computing power beyond that of a single PC. To do this, the task to be carried out is cut into sub-tasks, and the overall result is the combination of the sub-task results. The management unit that coordinates all the sub-tasks, and the node that receives the result, are the only critical points (single points of failure); these components can themselves be protected by a high availability cluster. The crash of one node is not a disaster, because its work can be done by another: the performance of the cluster weakens, but the cluster keeps working.

4.3 Load balancing cluster

Architecture

Fig 4 : Architecture of a load balancing cluster

Definition

A load balancing cluster is a server farm whose nodes all perform the same function. A dispatcher is required to distribute users' requests among the nodes and to ensure that each node carries a similar workload; each request is sent to the node expected to respond to it fastest. This policy can provide the best performance at any time, and the performance of the cluster therefore depends on the dispatcher, which chooses the node able to handle the user's request as quickly as possible. Without any protection, the dispatcher of a load balancing cluster is itself a SPOF (single point of failure), so it is best to add redundancy to it. If one node is no longer in working condition, the cluster keeps working.
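The dispatching policy just described, sending each request to the node expected to answer fastest, can be sketched like this (a toy model; the node names and response times are invented):

```python
def pick_node(nodes, alive):
    """Choose the live node with the lowest expected response time.

    `nodes` maps a node name to its current expected response time in
    milliseconds, as the dispatcher might estimate from recent requests.
    """
    candidates = {name: t for name, t in nodes.items() if name in alive}
    if not candidates:
        raise RuntimeError("no node available to serve the request")
    return min(candidates, key=candidates.get)

nodes = {"web1": 12.0, "web2": 7.5, "web3": 9.1}

print(pick_node(nodes, alive={"web1", "web2", "web3"}))  # web2
# A dead node is simply excluded from the calculation:
print(pick_node(nodes, alive={"web1", "web3"}))          # web3
```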
The dispatcher identifies the dead node and no longer includes it in its calculations; the overall performance of the cluster then decreases. Web server farms (Google, etc.) are an example of load balancing clusters.

5 Interconnection technologies

Today, improved network technologies help build more efficient clusters. They must provide fast interconnects supporting wide bandwidth and low-latency communication between the nodes of the cluster, since these two indicators measure the performance of an interconnect. The selection of a cluster interconnect technology depends on several factors, such as compatibility with the cluster hardware and the operating system, price, and performance. In what follows, we detail some of the most used technologies.

5.1 Myrinet

Myrinet (ANSI/VITA 26-1998) is a high-speed network protocol designed by Myricom to interconnect the machines forming a cluster. Myrinet incurs much less protocol overhead than commonly used protocols such as Ethernet, and therefore offers higher bandwidth, less interference, lower latency, and lighter use of the host processor. Although it can be used as a traditional network protocol, Myrinet is often used directly by programs that know how to exploit it, bypassing system calls. Physically, Myrinet uses two fiber optic cables, one for sending data and one for receiving, each connected to a machine via a single connector. The machines are connected to each other through low-latency routers and switches (the machines are not directly connected to each other). Myrinet also offers features that improve fault tolerance, mostly managed by the switches, including flow control, error control, and status monitoring of each physical connection.
The fourth and final version of Myrinet, named Myri-10G, supports a throughput of 10 Gbit/s and is physically interoperable with the 10 Gbit/s Ethernet standard (cables, connectors, distances, signal types).

5.2 InfiniBand

InfiniBand is a high-speed computer bus intended for both internal and external communications. It is the result of the merger of two competing technologies: Future I/O, developed by Compaq, IBM, and Hewlett-Packard, and Next Generation I/O (NGIO), developed by Intel, Microsoft, and Sun Microsystems. InfiniBand uses a low-cost, low-latency bidirectional bus that nonetheless remains very fast, providing a throughput of 10 Gbit/s in each direction, and allows multiple devices to access the network simultaneously. Data are transmitted as packets, which together form messages. InfiniBand is now widely used in the HPC (High Performance Computing) world in the form of PCI-X or PCI-Express adapters called HCAs (Host Channel Adapters) operating at 10 Gbit/s (SDR, Single Data Rate), 20 Gbit/s (DDR, Double Data Rate), or 40 Gbit/s (QDR, Quad Data Rate). It also requires a dedicated network built from InfiniBand switches, with CX4 copper cables or fiber for long distances (using a CX4-to-fiber adapter). The InfiniBand protocol allows these cards to be used natively through the VERBS interface, or through software overlays:
- IPoIB (IP over InfiniBand), which presents an Ethernet layer on top of InfiniBand and thus the possibility of configuring IP over InfiniBand ports;
- SDP (Sockets Direct Protocol), which presents a socket layer over InfiniBand;
- SRP (SCSI RDMA Protocol), which allows SCSI frames to be encapsulated over InfiniBand; some manufacturers offer InfiniBand-attached storage arrays rather than Fibre Channel.
These overlays offer lower performance than the native protocol, but are easier to use because they do not require applications to be redeveloped in order to use the InfiniBand network.
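The practical point of overlays such as IPoIB and SDP is that ordinary socket code keeps working unchanged. A plain TCP exchange like the following sketch runs on the loopback interface here, but the same code, pointed at an address assigned to an IPoIB port, would carry its traffic over InfiniBand without modification (the message and addresses are invented for the example):

```python
import socket
import threading

def echo_once(server_sock):
    # Accept one connection and echo back whatever it receives.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Loopback for demonstration; binding instead to an IPoIB interface's
# address would send the same bytes over InfiniBand, code unchanged.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.create_connection((host, port))
client.sendall(b"hello over IP")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())
```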
In the HPC world, MPI (Message Passing Interface) libraries generally use the native VERBS layer directly to deliver the best possible performance.

5.3 Gigabit Ethernet

Gigabit Ethernet (GbE) is a term describing a variety of technologies used to implement the Ethernet standard at a data transfer rate of one gigabit per second (1000 megabits per second). These technologies are based on twisted-pair copper cable or optical fiber, and are defined by IEEE 802.3z and 802.3ab. Unlike other Ethernet technologies, Gigabit Ethernet provides flow control, which makes the networks on which it is deployed more reliable. GbE equipment includes FDRs, or Full-Duplex Repeaters, which multiplex lines and use buffers and localized flow control to improve performance. Most of its switches are built as new modules for existing models of compatible Gigabit switches.

5.4 SCI (Scalable Coherent Interface)

SCI, the Scalable Coherent Interface (IEEE Standard 1596-1992), provides a low-latency shared memory system across a cluster. SCI can use a memory space extending over the whole cluster, freeing the programmer from managing that complexity; it can be seen as a kind of processor-memory input/output bus carried over a LAN. The programming facilities it offers, and the fact that SCI is an IEEE standard, have made it a fairly popular choice for interconnecting the machines of a high performance cluster.

6 Comparison of interconnect technologies

This comparison covers the main criteria for judging the performance of a cluster; depending on the needs and resources of each organization, the chosen technology will vary.

                   Gigabit Ethernet   InfiniBand   Myrinet   SCI
Bandwidth (MB/s)   850                230
Latency (µs)       10                 1/2/10
Max nodes          1000               > 1000       1000      1000

Table 1 : Comparison of interconnect technologies

7 Performance tests

Pourreza, Eskicioglu and Graham have evaluated the performance of a number of the technologies presented above.
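Such benchmarks essentially probe the two quantities from Table 1, which combine into a simple model of message transfer time: a fixed latency plus a size-dependent serialization time. The sketch below uses illustrative figures, not measurements, to show why a low-latency interconnect wins on small messages while a high-bandwidth one wins on large ones:

```python
def transfer_time_us(message_bytes, latency_us, bandwidth_mb_s):
    """Approximate one-way transfer time: fixed latency plus serialization.

    At 1 MB/s, one byte takes one microsecond to serialize.
    """
    serialization_us = message_bytes / bandwidth_mb_s
    return latency_us + serialization_us

# Invented figures: interconnect A has 2 us latency and 230 MB/s,
# interconnect B has 10 us latency and 850 MB/s.
small_a = transfer_time_us(64, latency_us=2, bandwidth_mb_s=230)
small_b = transfer_time_us(64, latency_us=10, bandwidth_mb_s=850)
print(small_a < small_b)   # small message: latency dominates

big_a = transfer_time_us(1_000_000, latency_us=2, bandwidth_mb_s=230)
big_b = transfer_time_us(1_000_000, latency_us=10, bandwidth_mb_s=850)
print(big_b < big_a)       # large message: bandwidth dominates
```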
The parameter they took into account is the execution time of the same applications on identical cluster nodes. They tested a number of standard benchmarks, namely the NAS Parallel Benchmarks and the Pallas Benchmark, as well as some real-world parallel computing applications, on first- and second-generation Myrinet and on SCI, but also on Fast Ethernet (100 Mbit/s) and Gigabit Ethernet (1000 Mbit/s). The results obtained are presented below. These tests were performed on an eight-node cluster running RedHat 9.0 with kernel 2.4.18smp and gcc 3.2.2. Each node has a dual Pentium III at 550 MHz with 512 MB of shared SDRAM, and local disks (all I/O activity in the experiments is performed on local disks to eliminate the effects of NFS access). Each node also has first- and second-generation Myrinet, Fast Ethernet, Gigabit Ethernet, and point-to-point SCI (Dolphin WulfKit) network interface cards. All network interface cards are connected to dedicated switches, except the SCI cards, which are connected in a mesh configuration.

7.1 Bandwidth

Fig 6 : Bandwidth of the four interconnects (H. Pourreza, Graham, Eskicioglu)

7.2 Latency

Fig 7 : Latency of the four interconnects (H. Pourreza, Graham, Eskicioglu)

The basic performance of the different interconnect technologies in terms of bandwidth and latency is presented in Figures 6 and 7 respectively. Fast Ethernet is significantly slower than all the others, and Gigabit Ethernet is visibly slower than SCI and Myrinet despite a substantially similar bandwidth. From these results, it is clear that Fast Ethernet is probably only suitable for compute-bound applications.

Conclusion

The competitive nature of business and the progress of research have created a need for scalable, flexible and reliable computer systems. Advanced applications now require large computing power. Clusters provide a solution to these problems.
Clusters represent a promising future, as this concept brings scalability to the world of data processing. Thanks to the different technologies used to implement them, cluster networks are becoming ever more performant, since these technologies offer high bandwidth and low latency. The performance tests carried out demonstrate that some technologies are more efficient than others. When setting up a cluster, an appropriate architecture and network topology should be chosen to avoid reducing network performance excessively. Using a cluster is less expensive than buying a supercomputer, since it uses the resources of several machines across which the tasks are distributed, and most clusters use the Linux operating system, a powerful system thanks to its flexibility, adaptability and low cost.

Sources:
- The Essence of Distributed Systems, Joel M. Crichlow
- Parallel Computing: Theory and Comparisons, G. Jack Lipovski, Miroslaw Malek
- Parallel Computers, R. Hockney, C. Jesshope
- Parallel and Distributed Computation: Numerical Methods, Dimitri P. Bertsekas, John N. Tsitsiklis
- Practical Parallel Processing: An Introduction to Problem Solving in Parallel, Alan Chalmers, Jonathan Tidmus
