High-Performance Clusters (HPC)
Outline: Introduction · History · Why cluster computing? · Architecture · Clustering concept · Applications · Operating systems · Companies that use it
A computer cluster is a set of connected computers that work together so that in many respects they can be viewed as a single system.
A cluster consists of:
Nodes (master + computing)
Network
OS
Cluster middleware: permits compute-clustering programs to be portable to a wide variety of clusters.
[Figure: several CPUs connected through a network to form a cluster]
INTRODUCTION
A cluster consists of many machines of the same or similar type, tightly coupled using dedicated network connections. The components of a cluster are usually connected to each other through fast local area networks, with each node running its own instance of an operating system. All machines share resources and must trust each other, so that access between them does not require a password.
HISTORY
The first commercial clustering product was ARCnet, developed by Datapoint in 1977.
Clusters provide:
Data sharing
Message passing and communication
Task scheduling
Node failure management
Parallel programming
Debugging and monitoring
[Figure: logical view of a cluster]
ARCHITECTURE
A cluster consists of a collection of interconnected stand-alone computers. Each node is a single- or multiprocessor system with memory, I/O facilities, and an OS. Generally two or more computers (nodes) are connected together, either in a single cabinet or physically separated and connected via a LAN. The cluster appears as a single system to users and applications and provides a cost-effective way to gain features and benefits.
Database Replication Clusters
Beowulf cluster
In 1994, Donald Becker of NASA assembled the first such cluster, which gave the Beowulf cluster its name. Applications like data mining, simulations, parallel processing, and weather modeling are distributed across multiple computers to create cheap and powerful supercomputers. A Beowulf cluster in practice is usually a collection of generic computers connected through an internal network.
A Beowulf cluster has two types of computers: a master computer and node computers.
When a large problem or set of data is given to a Beowulf cluster, the
master computer first runs a program that breaks the problem into small discrete pieces; it then sends a piece to each node to compute. As nodes finish their tasks, the master computer continually sends more pieces to them until the entire problem has been computed.
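The master/worker pattern described above can be sketched in a few lines. This is a minimal illustration using Python's standard `multiprocessing` module in place of a real cluster's message-passing layer (on an actual Beowulf cluster this role is played by MPI across physical nodes); the function names and the sum-of-a-chunk workload are invented for the example.

```python
# Minimal sketch of the master/worker pattern: the master splits a large
# problem into small discrete pieces and hands each piece to a worker;
# idle workers keep receiving new pieces until the whole problem is done.
from multiprocessing import Pool

def compute_piece(piece):
    # Stand-in for one node's work; here, summing a chunk of numbers.
    return sum(piece)

def master(data, n_workers=4, piece_size=100):
    # Break the problem into small discrete pieces.
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    with Pool(n_workers) as pool:
        # The pool keeps dispatching pieces to free workers until all finish.
        partial_results = pool.map(compute_piece, pieces)
    # Combine the partial results into the final answer.
    return sum(partial_results)

if __name__ == "__main__":
    print(master(list(range(1000))))  # 499500
```

On a real cluster the workers would be separate machines and the pieces would travel over the interconnect, but the control flow (split, dispatch, collect, combine) is the same.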
Interconnect: Ethernet or Myrinet, plus a message-passing library such as MPI.
Master (also called the service node or front node): used to interact with users and manage the cluster.
Nodes: a group of computing nodes, typically without peripherals such as keyboard, mouse, floppy, or video.
Example configuration:
OS: CentOS 5, managed by Rocks
Master node: 1 (Intel P4, 2.4 GHz)
Compute nodes: 32 (Intel P4, 2.4-2.8 GHz)
Memory: 1 GB per node
Network: Gigabit Ethernet (2 cards per node), Myrinet 2G
Languages: C, C++, Fortran, Java
Compilers: Intel compiler, Sun Java compiler
Scientific computation
Digital biology
Aerospace
Resource exploration
ISSUES TO BE CONSIDERED
Cluster networking
Cluster software
Programming
Timing
Network selection
Speed selection
Windows is mainly used to build a High Availability cluster or an NLB (Network Load Balancing) cluster, providing services such as database, file/print, web, and streaming media. It supports 2-4-way SMP, up to 32 processors. It is hardly ever used to build a scientific-computing cluster.
Red Hat Linux: the most used OS for a Beowulf cluster. It provides high performance and scalability, high reliability, and low cost (it can be obtained freely and uses inexpensive commodity hardware).
Sun Solaris: uses expensive and less widely adopted hardware.
Calculation procedure for peak performance:
Number of compute nodes: 64
Master nodes: 1
RAM: 4 GB
Hard disk capacity per node: 250 GB
Storage capacity: 4 TB
Cluster software: ROCKS version 4.3
Processors and cores per node: 2 x 2 = 4 (dual-core, dual-socket)
CPU speed: 2.6 GHz
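The figures above can be turned into a theoretical peak-performance estimate with the standard formula peak = nodes × cores per node × clock rate × FLOPs per cycle. Note that the slide does not state the FLOPs-per-cycle value; the 2 used below is an assumption, typical for CPUs of that generation, so the result is only an illustration of the procedure.

```python
# Theoretical peak performance for the cluster described above.
nodes = 64              # compute nodes
cores_per_node = 4      # dual-socket x dual-core = 2 x 2
clock_hz = 2.6e9        # 2.6 GHz
flops_per_cycle = 2     # ASSUMED: not stated on the slide

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e9:.1f} GFLOPS")  # 1331.2 GFLOPS
```

With 4 FLOPs per cycle the same procedure would give twice this figure, which is why the per-cycle assumption must always be stated alongside a peak number.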