
Cluster Management (Supercomputer) in Native GNU/Linux

Hary Cahyono
Published 4 April 2015, Jakarta, Indonesia
Abstract
Originally, "cluster" and "high-performance computing" were synonymous. Today, the meaning of the word cluster has expanded beyond high performance to include high-availability (HA) and load-balancing (LB) clusters, so you need the skills to manage your entire cluster. Familiarity with programming software such as MPI is a must, and you will need to keep your cluster running. Cluster management includes both routine system administration tasks and monitoring the health of your cluster. Fortunately, there are several packages that can be used to simplify these tasks.
Keywords: Supercomputer, Native Linux
Preface

There are various tools that support the work of managing a cluster: OpenMosix, OSCAR, ROCKS, Beowulf, or whichever method or cluster kit you have been using [1]. You will still need third-party software to make the job easier. From all the prospects I chose two: C3, a set of commands for administering multiple systems, and PVFS, one of the parallel filesystems that make cluster I/O easier.

Theory

Both C3 and PVFS can be used with federated clusters as well as simple clusters.

Cluster Command and Control (C3) is a set of about a dozen command-line utilities used to execute common management tasks. These commands were designed to provide a look and feel similar to that of issuing commands on a single machine; they are secure and scale reliably. Each command is actually a Python script. The file I used is c3-4.0.1.tar.gz. Once you have met the prerequisites, you can download, unpack, and install C3. It comes with powerful commands such as cexec, cget, ckill, cpush, crm, cshutdown, clist, cname, and cnum [2].

Increasingly, tasks that are computationally expensive also involve a large amount of I/O, frequently accessing either large data sets or large databases. Selecting a filesystem for a cluster is a balancing act. There are a number of characteristics that can be used to compare filesystems, including robustness, failure recovery, journaling, enhanced security, and reduced latency. Unfortunately, NFS is not very efficient. In particular, it has not been optimized


for the types of I/O often needed with many high-performance cluster applications (Figure 1). That is, you must decide which machines will be I/O servers (the directories where the actual pieces of a data file will be stored under the mount point), which will be the metadata server (where all information such as permissions, path, and filesystem type will be stored), and which machines will be clients.

Result and Discussion

The default C3 configuration file recording the hostnames of all nodes is /etc/c3.conf.

Figure 2. The cpush and cexec mkdir commands are used to move a file to each node in the cluster and to create a directory on each machine.

Since the old version of PVFS would not compile correctly under my Red Hat 9.0, you also have to get the suitable patch and the PVFS kernel module as well as the packages. In this PVFS installation, as noted, the kernel module sources need to be patched.

Figure 3. Patch the appropriate kernel and PVFS.

Next, configure the metadata server: create the metadata configuration files and place them in the metadata directory. This creates the two configuration files, .iodtab and .pvfsdir, which contain permission information for the metadata directory. Fortunately, PVFS provides a script to simplify the process (Figures 4, 5, and 6).

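The metadata-server step can be sketched as a short shell session. The hostname and paths here follow the mount command quoted later (head node hary, metadata directory /pvfs-meta); mkmgrconf is the interactive helper script that PVFS 1.x provides for generating the metadata files, so treat its install path as an assumption:

```shell
# On the metadata server (the head node, here assumed to be "hary"):
mkdir /pvfs-meta
cd /pvfs-meta

# PVFS 1.x ships a helper script that interactively generates the two
# metadata files: .iodtab (the ordered list of I/O daemons) and
# .pvfsdir (permission, owner, and path information for the metadata
# directory). The install path is an assumption.
/usr/local/bin/mkmgrconf

# Verify that both configuration files were created:
ls -a /pvfs-meta        # .iodtab and .pvfsdir should be listed
```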
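Day-to-day C3 usage, as in Figure 2, can be sketched as follows. The cluster name and node hostnames in /etc/c3.conf are hypothetical examples; cexec and cpush themselves are the C3 commands named earlier:

```shell
# /etc/c3.conf -- C3's default configuration file, recording the head
# node and the hostnames of all compute nodes (names are examples):
#
#   cluster mycluster {
#       head:           # the head node
#       node[1-4]       # compute nodes node1 .. node4
#   }

# Push a file from the head node to every node in the cluster
# (the cpush operation shown in Figure 2):
cpush /etc/hosts /etc/hosts

# Run the same command on every node, e.g. create a directory
# (the cexec mkdir operation shown in Figure 2):
cexec mkdir -p /mnt/pvfs
```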

I/O server setup. To set up the I/O servers, you need to create a data directory on the appropriate machines, create a configuration file, and then push the configuration file, along with the other I/O server software, to the appropriate machines. In my case, all the nodes in the cluster, including the head node, are I/O servers.

Client setup is a little more involved. For each client, you'll need to create a PVFS device file (using an extra mknod command), copy over the kernel module, create a mount point and a PVFS table, and copy over the appropriate executables along with any other utilities you might need on the client machine. In my case, all nodes, including the head node, are configured as clients (Figures 7, 8, 9, and 10).

Running PVFS. Finally, now that you have everything installed, you can start PVFS. You need to start the appropriate daemons on the appropriate machines and load the kernel module. To load the kernel module, use the insmod command.

Figure 11. PVFS should be up and running.

After running cexec /sbin/mount.pvfs hary:/pvfs-meta /mnt/pvfs from the head node, the mounted PVFS will be included in the listing given by the mount command; this should work on each node (Figure 12).
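Taken together, the startup sequence looks roughly like this. The daemon names (mgr, iod, pvfsd), the module file name pvfs.o, and the device numbers in the mknod step come from the PVFS 1.x documentation rather than from this article, so treat them as assumptions; the mount command is the one quoted above:

```shell
# On the metadata server: start the PVFS manager daemon.
/usr/local/sbin/mgr

# On every I/O server (cexec runs the command on all nodes at once):
cexec /usr/local/sbin/iod

# On each client: device file, kernel module, client daemon, mount point.
cexec mknod /dev/pvfsd c 60 0   # character device used by pvfsd (assumed numbers)
cexec insmod pvfs.o             # load the PVFS kernel module
cexec /usr/local/sbin/pvfsd     # start the client-side daemon
cexec mkdir -p /mnt/pvfs

# Mount the parallel filesystem on every node; "hary" is the
# head/metadata node (as in Figure 12):
cexec /sbin/mount.pvfs hary:/pvfs-meta /mnt/pvfs

# The PVFS mount should now appear in the output of mount on each node.
cexec mount
```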


Conclusion
PVFS, supported by C3's commands, eliminates the bottleneck inherent in a single-file-server approach such as NFS, and this is only a partial listing of what is available. On the other hand, PVFS does not provide redundancy, does not support symbolic or hard links, and does not provide an fsck-like utility. So while these tools may be ideal for some uses, they may be problematic for others.
Acknowledgments
First and foremost, credit goes to Allah SWT, and this proceeding would not exist if not for the GNU/Linux community.
References
[1] Cahyono, Hary. (2015). (UPDATE) Raih Dunia dengan Superkomputer di Linux Native [(UPDATE) Conquer the World with a Supercomputer in Native Linux]. 2015: 1-59. Retrieved 4 April 2015.
[2] Sloan, Joseph D. (2004). High Performance Linux Clusters with OSCAR, Rocks, openMosix, and MPI. United States of America: O'Reilly Media.

Hary Cahyono
Email: h4ry.oop@gmail.com
Phone: 085695042489
Skype: hary_122
BBM: 7943F602

http://tifosilinux.wordpress.com
