




COURSE OBJECTIVE: To expose students to the FOSS environment and introduce them to the use of open source packages.

List of lab exercises:
1. Kernel configuration, compilation and installation: Download / access the latest kernel source code, compile the kernel and install it in the local system. Try to view the source code of the kernel.
2. Virtualisation environment (e.g., Xen, KQEMU or lguest) to test applications and new kernels and to isolate applications. It could also be used to expose students to other alternative OSs like *BSD.
3. Compiling from source: learn about the various build systems used, like the auto* family, CMake, Ant etc., instead of just running the commands. This could involve the full process, like fetching from a CVS repository, and also include autoconf, automake etc.
4. Introduction to package management systems: Given a set of RPM or DEB packages, how to build and maintain them and serve packages over HTTP or FTP, and also how to configure client systems to access the package repository.
5. Installing various software packages: Either the package is yet to be installed or an older version exists; the student can practice installing the latest version. Of course, this might need internet access. Install Samba and share files to Windows. Install the Common Unix Printing System (CUPS).
6. Write userspace drivers using FUSE -- easier to debug and less dangerous to the system (writing full-fledged drivers is difficult at student level).
7. GUI programming: a sample programme using Gambas, since the students have VB knowledge. However, one should try using GTK or Qt.
8. Version Control System setup and usage using RCS, CVS, SVN.
9. Text processing with Perl: simple programs, connecting with a database, e.g., MySQL.
10. Running PHP: simple applications like login forms after setting up a LAMP stack.
11. Running Python: some simple exercises, e.g., connecting with a MySQL database.
12. Set up the complete network interface using the ifconfig command, like setting the gateway, DNS, IP tables, etc.

Resources: An environment like the FOSS Lab Server (developed by NRCFOSS, containing the various packages) OR an equivalent system with a Linux distro supplemented with the relevant packages.

Note: Once the list of experiments is finalised, NRCFOSS can generate full lab manuals complete with exercises, necessary downloads, etc. These could be made available on the NRCFOSS web portal.

TOTAL: 45 PERIODS

LIST OF EQUIPMENT:
Hardware (minimum requirements):
- 700 MHz x86 processor
- 384 MB of system memory (RAM)
- 40 GB of disk space
- Graphics card capable of 1024x768 resolution
- Sound card, network or internet connection
Software: Latest distribution of Linux

Lab exercise 1: Kernel configuration, compilation and installation: Download / access the latest kernel source code, compile the kernel and install it in the local system. Try to view the source code of the kernel.

SOLUTION:
In computing, the kernel is the main component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually, as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.

Operating system tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels execute all the operating system code in the same address space to increase the performance of the system, microkernels run most of the operating system services in user space as servers, aiming to improve maintainability and modularity of the operating system. A range of possibilities exists between these two extremes.

Fig. A kernel connects the application software to the hardware of a computer.

The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use these resources.[1] Typically, the resources consist of:
- The Central Processing Unit (CPU). This is the most central part of a computer system, responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).
- The computer's memory. Memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough is available.
- Any Input/Output (I/O) devices present in the computer, such as keyboards, mice, disk drives, printers and displays. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).

Key aspects necessary in resource managements are the definition of an execution domain (address space) and the protection mechanism used to mediate the accesses to the resources within a domain.[1] Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC). A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of IPC to allow processes to access the facilities provided by each other. Finally, a kernel must provide running programs with a method to make requests to access these facilities.
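The inter-process communication described above can be tried out from user space. The sketch below (names such as child and demo are made up for illustration) passes a message between two processes over a kernel-backed pipe using Python's standard multiprocessing module:

```python
from multiprocessing import Process, Pipe

def child(conn):
    # Write end: the kernel copies the message between the two processes.
    conn.send("hello from child")
    conn.close()

def demo():
    parent_conn, child_conn = Pipe()   # kernel-mediated IPC channel
    p = Process(target=child, args=(child_conn,))
    p.start()
    msg = parent_conn.recv()           # blocks until the child's message arrives
    p.join()
    return msg

if __name__ == "__main__":
    print(demo())
```

The two processes have separate address spaces; the only way the message crosses between them is through the facility the kernel provides.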

Process management
The main task of a kernel is to allow the execution of applications and support them with features such as hardware abstractions. A process defines which memory portions the application can access.[3] (For this introduction, process, application and program are used as synonyms.) Kernel process management must take into account the hardware's built-in equipment for memory protection.[4]

To run an application, a kernel typically sets up an address space for the application, loads the file containing the application's code into memory (perhaps via demand paging), sets up a stack for the program and branches to a given location inside the program, thus starting its execution.[5]

Multi-tasking kernels are able to give the user the illusion that the number of processes being run simultaneously on the computer is higher than the maximum number of processes the computer is physically able to run simultaneously. Typically, the number of processes a system may run simultaneously is equal to the number of CPUs installed (however, this may not be the case if the processors support simultaneous multithreading). In a pre-emptive multitasking system, the kernel will give every program a slice of time and switch from process to process so quickly that it will appear to the user as if these processes were being executed simultaneously. The kernel uses scheduling algorithms to determine which process is running next and how much time it will be given. The algorithm chosen may allow for some processes to have higher priority than others. The kernel generally also provides these processes a way to communicate; this is known as inter-process communication (IPC), and the main approaches are shared memory, message passing and remote procedure calls (see concurrent computing).
Other systems (particularly on smaller, less powerful computers) may provide co-operative multitasking, where each process is allowed to run uninterrupted until it makes a special request that tells the kernel it may switch to another process. Such requests are known as "yielding", and typically occur in response to requests for interprocess communication, or while waiting for an event to occur. Older versions of Windows and Mac OS both used co-operative multitasking but switched to pre-emptive schemes as the power of the computers to which they were targeted grew. The operating system might also support multiprocessing (SMP or Non-Uniform Memory Access); in that case, different programs and threads may run on different processors. A kernel for such a system must be designed to be re-entrant, meaning that it may safely run two different parts of its code simultaneously. This typically means providing synchronization mechanisms (such as spinlocks) to ensure that no two processors attempt to modify the same data at the same time.
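The time-slicing described above can be illustrated with a toy round-robin scheduler. This is a sketch of the scheduling idea only, not how any real kernel implements it; the function name and task representation are invented for the example:

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: dict of name -> remaining CPU time units.
    Returns the order in which tasks receive the CPU."""
    queue = deque(tasks.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)            # this task gets the CPU for one time slice
        remaining -= quantum
        if remaining > 0:             # not finished yet: back of the run queue
            queue.append((name, remaining))
    return order

# Task A needs 3 units, B needs 1; with a quantum of 2, A runs, then B,
# then A again to finish its remaining time.
print(round_robin({"A": 3, "B": 1}, 2))   # ['A', 'B', 'A']
```

A priority scheduler would differ only in how the next task is picked from the queue.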

Memory management
The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address. This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other.[5]

On many systems, a program's virtual address may refer to data which is not currently in memory. The layer of indirection provided by virtual addressing allows the operating system to use other data stores, like a hard drive, to store what would otherwise have to remain in main memory (RAM). As a result, operating systems can allow programs to use more memory than the system has physically available. When a program needs data which is not currently in RAM, the CPU signals to the kernel that this has happened, and the kernel responds by writing the contents of an inactive memory block to disk (if necessary) and replacing it with the data requested by the program. The program can then be resumed from the point where it was stopped. This scheme is generally known as demand paging.

Virtual addressing also allows creation of virtual partitions of memory in two disjointed areas, one being reserved for the kernel (kernel space) and the other for the applications (user space). The applications are not permitted by the processor to address kernel memory, thus preventing an application from damaging the running kernel.
This fundamental partition of memory space has contributed much to current designs of actual general-purpose kernels and is almost universal in such systems, although some research kernels (e.g. Singularity) take other approaches.

Device management
To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel.[5]

A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called plug and play). In a plug and play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers.

As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.
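The routing of application I/O requests through a device table to the right driver can be sketched as follows. The names (NullDriver, Kernel) and the structure are invented for illustration; real kernels dispatch through in-kernel driver interfaces, not Python objects:

```python
class NullDriver:
    """Toy driver: accepts and discards writes, like /dev/null."""
    def write(self, data):
        return len(data)          # report how many bytes were "written"

class Kernel:
    """Toy device table: routes I/O requests to the registered driver."""
    def __init__(self):
        self.devices = {}

    def register(self, name, driver):
        # What a real kernel does at boot or on hotplug (plug and play).
        self.devices[name] = driver

    def write(self, name, data):
        # The application names a device; the kernel picks the driver.
        return self.devices[name].write(data)

k = Kernel()
k.register("null", NullDriver())
written = k.write("null", b"hello")   # application-level request, routed by name
```

The application never touches the driver directly; it only names the device, which is the abstraction the section above describes.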

System calls
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.[6] The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:
- Using a software-simulated interrupt. This method is available on most hardware, and is therefore very common.
- Using a call gate. A call gate is a special address stored by the kernel in a list in kernel memory at a location known to the processor. When the processor detects a call to that address, it instead redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common.
- Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some operating systems for PCs make use of them when available.
- Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.

STEPS TO INSTALL THE KERNEL:
1. Check the version of the currently running kernel:
   $ uname -r
2. Download the kernel source code with wget
3. Extract the kernel source code (through the GUI) or in a terminal:
   tar zxvf linux-
4. Inside the kernel source directory, open the Makefile:
   vi Makefile
5. Look for the line EXTRAVERSION and append a name, ex.
6. make menuconfig (choose the desired kernel options)
7. make bzImage (builds the kernel)
8. su

9. cp arch/x86/boot/bzImage /boot/vmlinuz-
10. chmod 755 /boot/vmlinuz-
11. make modules (builds the kernel modules)
12. make modules_install (installs the kernel modules in the /lib/modules directory)
13. mkinitrd /boot/initramfs-<extra-ver>
Note: check for the newly created directory in /lib/modules and use its name above.

LAB EXERCISE 2: Virtualisation environment (e.g., Xen, KQEMU or lguest) to test applications and new kernels and to isolate applications. It could also be used to expose students to other alternative OSs like *BSD.

SOLUTION:
Operating system-level virtualization is a server virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (often called containers, VEs, VPSs or jails) may look and feel like a real server from the point of view of its owner. On Unix systems, this technology can be thought of as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource management features to limit the impact of one container's activities on the other containers.

USES OF VIRTUALIZATION:
Operating system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources amongst a large number of mutually distrusting users. It is also used, to a lesser extent, for consolidating server hardware by moving services on separate hosts into containers on the one server. Other typical scenarios include separating several applications into separate containers for improved security, hardware independence, and added resource management features. OS-level virtualization implementations that are capable of live migration can be used for dynamic load balancing of containers between nodes in a cluster.

Steps for Virtualization using QEMU and FreeDOS:
1. Download fdbasecd.iso from (or) from the fosslab server. The following steps should be run from the directory where fdbasecd.iso is stored.
2. qemu-img create virtualdisk.img 100M
3. qemu -hda virtualdisk.img -cdrom fdbasecd.iso -boot d

4. After booting through QEMU, select 1 to boot from the CD-ROM.
5. Select Language - English (US).
6. Select the option to boot FreeDOS from the CD-ROM.
7. From X:> run fdisk
8. Enable large disk support (Y).
9. Create a DOS partition -> Primary DOS partition.
10. Make the whole primary DOS partition active (Y).

LAB EXERCISE 3: Compiling from source: learn about the various build systems used, like the auto* family, CMake, Ant etc., instead of just running the commands. This could involve the full process, like fetching from a CVS repository, and also include autoconf, automake etc.

SOLUTION:
Post the webattery source in SVN and try to access it from SVN; finally, follow the steps below to compile the webattery source.

Webattery sample for compiling from source:
1. Download webattery-<version>-src.rpm to the /home/fosslab folder
2. cd /home/fosslab
3. rpm -ivh webattery-<version>-src.rpm
4. The above command creates a directory rpmbuild under the home folder /home/fosslab, ex: /home/fosslab/rpmbuild
5. cd /home/fosslab/rpmbuild/SOURCES/
6. Extract the source file: tar zxvf webattery-1.2.tar.gz
7. cd webattery-1.2/
8. ./configure (checks for the necessary tools/libraries for the build environment and creates the Makefile)
9. make (compiles all the source files and creates the executable binaries)

10. make install
11. which webattery (prints the location of the binary executable)

LAB EXERCISE 4: Introduction to package management systems: Given a set of RPM or DEB packages, how to build and maintain them and serve packages over HTTP or FTP, and also how to configure client systems to access the package repository.

SOLUTION:
We can install packages using RPM.

Install a package:
rpm -ivh packagename
Upgrade a package:
rpm -Uvh packagename
Create a tar file:
tar cvf myfiles.tar mydir/
(add z if you are dealing with or creating .tgz (.tar.gz) files)
Standard install from source:
tar xvzf Apackage.tar.gz
cd Apackage
./configure
make
make install

Further, we have to learn how to install zip packages, so we are going to install a Joomla setup.

Joomla on Fedora
First of all, we need to make sure that PHP and MySQL are installed on our system, including our web server:

[ismail@localhost ~]$ which php mysql httpd
/usr/local/bin/php
/usr/bin/mysql
/usr/sbin/httpd

If PHP, MySQL or the Apache server is not installed, you can install them with the following command:

[ismail@localhost ~]$ sudo yum -y install httpd php mysql mysql-server php-mysql

Now that PHP, MySQL and the Apache (httpd) server are installed, we need to download Joomla from its site:

[ismail@localhost ~]$ wget

The next steps are to be performed by the root user:

[ismail@localhost ~]$ su
Password:
[root@localhost ismail]#

Now create a directory for your project; in my case I chose mysite, but it can be anything:

[root@localhost ismail]# mkdir /var/www/html/mysite

Now we are going to move the downloaded Joomla file into our site directory, where we will extract the files from the zip:

[root@localhost ismail]# mv /var/www/html/mysite/
[root@localhost ismail]# cd /var/www/html/mysite
[root@localhost mysite]# unzip
[root@localhost mysite]# rm

Now we need to create a configuration file and make it readable and writable:

[root@localhost mysite]# vi configuration.php
[root@localhost mysite]# chmod 666 configuration.php
[root@localhost mysite]# firefox localhost/mysite &

Now configure Joomla: open your browser and type the following URL: localhost/mysite

Click Next after selecting your language; the default is English.

Click Next again; it shows the status of your configuration.

Click next to accept the agreement statement

Now first select the database type (MySQL), then enter your host name; most probably yours will also be localhost. Next, enter the username of the MySQL server; I chose root. Then enter the password of your database server's root user. The last box is the database name; I chose mysite_db as the database for my Joomla project. Now click Next.

Now select Yes if you want FTP support; I chose to leave it off, and clicked Next.

Now enter your site configuration, such as the site name, your email address and your password for the administrator (admin) account. Don't bother with the remaining steps; I recommend clicking Next after filling in the site name, email, password and confirm password. A pop-up will appear; click OK to continue.

Alright, now you are done with configuration; you just need to delete the folder named installation inside the mysite folder. The folder you have to delete is at the following location: /var/www/html/mysite/installation

[ismail@localhost ~]$ sudo rm -r /var/www/html/mysite/installation/

After you have deleted the installation folder, just type this URL in your browser: localhost/mysite

What you now see is your home page. To start working, type the following URL in the browser: localhost/mysite/administrator

Enter the user name admin and the password you chose during installation.

Here you are done with the installation and configuration of Joomla, and now you can create and manage your Joomla site.

LAB EXERCISE 5: Installing various software packages. Either the package is yet to be installed or an older version exists; the student can practice installing the latest version. Of course, this might need internet access. Install Samba and share files to Windows. Install the Common Unix Printing System (CUPS).

SOLUTION:
Installing Samba
Connect to your server on the shell and install the Samba packages:

yum install cups-libs samba samba-common

Edit the smb.conf file:

vi /etc/samba/smb.conf

Make sure you see the following lines in the [global] section:
[...]
# ----------------------- Standalone Server Options -----------------------
# security = the mode Samba runs in. This can be set to user, share
# (deprecated), or server (deprecated).
# passdb backend = the backend used to store user information in. New
# installations should use either tdbsam or ldapsam. No additional
# configuration is required for tdbsam. The "smbpasswd" utility is
# available for backwards compatibility.
security = user
passdb backend = tdbsam
[...]

This enables Linux system users to log in to the Samba server. Then create the system startup links for Samba and start it:

chkconfig --levels 235 smb on
/etc/init.d/smb start

3 Adding Samba Shares

Now I will add a share that is accessible by all users. Create the directory for sharing the files and change the group to the users group:

mkdir -p /home/shares/allusers
chown -R root:users /home/shares/allusers/

chmod -R ug+rwx,o+rx-w /home/shares/allusers/

At the end of the file /etc/samba/smb.conf, add the following lines:

vi /etc/samba/smb.conf
[...]
[allusers]
comment = All Users
path = /home/shares/allusers
valid users = @users
force group = users
create mask = 0660
directory mask = 0771
writable = yes
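The create mask = 0660 and directory mask = 0771 values above are octal permission bits. As a quick hypothetical check (not part of the Samba setup itself), Python's standard stat module can decode what they grant:

```python
import stat

create_mask = 0o660     # files: read/write for owner and group, nothing for others
directory_mask = 0o771  # directories: rwx for owner and group, execute only for others

# stat.filemode renders the bits the way ls -l would.
print(stat.filemode(create_mask | stat.S_IFREG))      # -rw-rw----
print(stat.filemode(directory_mask | stat.S_IFDIR))   # drwxrwx--x
```

So files created through this share are readable and writable only by their owner and the users group, matching the chmod ug+rwx,o+rx-w applied to the directory earlier.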

If you want all users to be able to read and write their home directories via Samba, add the following lines to /etc/samba/smb.conf (make sure you comment out or remove any other [homes] section in the smb.conf file!):
[...]
[homes]
comment = Home Directories
browseable = no
valid users = %S
writable = yes
create mask = 0700
directory mask = 0700
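Since smb.conf uses an INI-like syntax, a simple stanza such as the [homes] section above can be sanity-checked with Python's standard configparser before restarting Samba. This is a convenience sketch only: full smb.conf files use Samba-specific features (includes, %-substitutions) that configparser does not understand, which is why interpolation is disabled here:

```python
import configparser

SNIPPET = """
[homes]
comment = Home Directories
browseable = no
valid users = %S
writable = yes
create mask = 0700
directory mask = 0700
"""

# interpolation=None: %S is Samba's own substitution, not Python's.
cfg = configparser.ConfigParser(interpolation=None)
cfg.read_string(SNIPPET)
print(cfg["homes"]["valid users"])    # %S
print(cfg["homes"]["create mask"])    # 0700
```

A typo such as a missing section header or a stray line would raise a parsing error instead of silently misconfiguring the share.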

Now we restart Samba:

/etc/init.d/smb restart

4 Adding And Managing Users

In this example, I will add a user named tom. You can add as many users as you need in the same way; just replace the username tom with the desired username in the commands.

useradd tom -m -G users

Set a password for tom in the Linux system user database. If the user tom should not be able to log into the Linux system, skip this step.

passwd tom
-> Enter the password for the new user.

Now add the user to the Samba user database:

smbpasswd -a tom
-> Enter the password for the new user.

Now you should be able to log in from your Windows workstation with the file explorer (the address is \\ or \\\tom for tom's home directory) using the username tom and the chosen password, and store files on the Linux server either in tom's home directory or in the public shared directory.

INSTALL CUPS (PRINTING SYSTEM)
Open a web browser and go to http://localhost:631/. Then, under "CUPS for Administrators", select "Adding Printers and Classes". The Printers section offers:

Add Printer

Find New Printers

Manage Printers

Click "Add Printer" and select one of the local printers:
- Serial Port #1
- Serial Port #2
- SCSI Printer
- HP Printer (HPLIP)
- HP Fax (HPLIP)
- CUPS-PDF (Virtual PDF Printer)

Local Printers: (as listed above)

Discovered Network Printers:
- AppSocket/HP JetDirect
- Internet Printing Protocol (http)
- Internet Printing Protocol (ipp)
- Internet Printing Protocol (https)

Other Network Printers:
- LPD/LPR Host or Printer
- Backend Error Handler
- Canon network printer
- Windows Printer via SAMBA

Click Continue, follow the instructions on screen, and finish. Thus CUPS is installed.

LAB EXERCISE 6: Write userspace drivers using FUSE -- easier to debug and less dangerous to the system (writing full-fledged drivers is difficult at student level).

Introduction
With FUSE it is possible to implement a fully functional filesystem in a userspace program. Features include:
- Simple library API
- Simple installation (no need to patch or recompile the kernel)
- Secure implementation
- Very efficient userspace - kernel interface
- Usable by non-privileged users
- Runs on Linux kernels 2.4.X and 2.6.X
- Has proven very stable over time

Some projects include the whole FUSE package (for simpler installation). In other cases, or just to try out the examples, FUSE must be installed first. The installation is simple; after unpacking, enter:
> ./configure
> make
> make install

If this produces an error, please read on. The configure script will try to guess the location of the kernel source. In case this fails, it may be specified using the --with-kernel parameter. Building the kernel module needs a configured kernel source tree matching the running kernel. If you build your own kernel, this is no problem. On the other hand, if a precompiled kernel is used, the kernel headers used by the FUSE build process must first be prepared. There are two possibilities:
1. A package containing the kernel headers for the kernel binary is available in the distribution (e.g. on Debian it's the kernel-headers-X.Y.Z package for kernel-image-X.Y.Z)
2. The kernel source must be prepared:
   - Extract the kernel source to some directory
   - Copy the running kernel's config (usually found in /boot/config-X.Y.Z) to .config at the top of the source tree
   - Run make prepare

Implementing a filesystem is simple; a hello world filesystem is less than 100 lines long. Here's a sample session:

~/fuse/example$ mkdir /tmp/fuse
~/fuse/example$ ./hello /tmp/fuse
~/fuse/example$ ls -l /tmp/fuse
total 0
-r--r--r-- 1 root root 13 Jan 1 1970 hello
~/fuse/example$ cat /tmp/fuse/hello
Hello World!
~/fuse/example$ fusermount -u /tmp/fuse
~/fuse/example$

After installation, you can try out the filesystems in the example directory. To see what is happening try adding the -d option. This is the output produced by running cat /tmp/fuse/hello in another shell:
~/fuse/example> ./hello /tmp/fuse -d
unique: 2, opcode: LOOKUP (1), ino: 1, insize: 26
LOOKUP /hello
INO: 2
unique: 2, error: 0 (Success), outsize: 72
unique: 3, opcode: OPEN (14), ino: 2, insize: 24
unique: 3, error: 0 (Success), outsize: 8
unique: 4, opcode: READ (15), ino: 2, insize: 32
READ 4096 bytes from 0
READ 4096 bytes
unique: 4, error: 0 (Success), outsize: 4104
unique: 0, opcode: RELEASE (18), ino: 2, insize: 24

More operations can be tried out with the fusexmp example filesystem. This just mirrors the root directory similarly to mount --bind / /mountpoint. This is not very useful in itself, but can be used as template for creating a new filesystem. By default FUSE filesystems run multi-threaded. This can be verified by entering the mountpoint recursively in the fusexmp filesystem. Multi-threaded operation can be disabled by adding the -s option. Some options can be passed to the FUSE kernel module and the library. See the output of fusexmp -h for the list of these options.

How does it work?

The following figure shows the path of a filesystem call (e.g. stat) in the above hello world example:

The FUSE kernel module and the FUSE library communicate via a special file descriptor which is obtained by opening /dev/fuse. This file can be opened multiple times, and the obtained file descriptor is passed to the mount syscall, to match up the descriptor with the mounted filesystem.

The student can implement the following code using fusepy:

#!/usr/bin/env python
from errno import ENOENT
from stat import S_IFDIR, S_IFREG
from sys import argv, exit
from time import time

from fuse import FUSE, FuseOSError, Operations, LoggingMixIn, fuse_get_context


class Context(LoggingMixIn, Operations):
    """Example filesystem to demonstrate fuse_get_context()"""

    def getattr(self, path, fh=None):
        uid, gid, pid = fuse_get_context()
        if path == '/':
            st = dict(st_mode=(S_IFDIR | 0o755), st_nlink=2)
        elif path == '/uid':
            size = len('%s\n' % uid)
            st = dict(st_mode=(S_IFREG | 0o444), st_size=size)
        elif path == '/gid':
            size = len('%s\n' % gid)
            st = dict(st_mode=(S_IFREG | 0o444), st_size=size)
        elif path == '/pid':
            size = len('%s\n' % pid)
            st = dict(st_mode=(S_IFREG | 0o444), st_size=size)
        else:
            raise FuseOSError(ENOENT)
        st['st_ctime'] = st['st_mtime'] = st['st_atime'] = time()
        return st

    def read(self, path, size, offset, fh):
        uid, gid, pid = fuse_get_context()
        if path == '/uid':
            return '%s\n' % uid
        elif path == '/gid':
            return '%s\n' % gid
        elif path == '/pid':
            return '%s\n' % pid
        return ''

    def readdir(self, path, fh):
        return ['.', '..', 'uid', 'gid', 'pid']

    # Disable unused operations:
    access = None
    flush = None
    getxattr = None
    listxattr = None
    open = None
    opendir = None
    release = None
    releasedir = None
    statfs = None


if __name__ == "__main__":
    if len(argv) != 2:
        print('usage: %s <mountpoint>' % argv[0])
        exit(1)
    fuse = FUSE(Context(), argv[1], foreground=True)

LAB EXERCISE 7: GUI programming: a sample programme using Gambas, since the students have VB knowledge. However, one should try using GTK or Qt.

SOLUTION:
Type gambas in a shell:

# gambas

Then you will see a new dialog; select New project -> Create graphical project and enter the project name. Open the project browser, right-click Forms, then select New Form. Select the button tool from the toolbox and draw it on the form. Also select a drawing area and put it on the form. Double-click Button1 and type the following code.

Draw a Line
This little program will draw a line. You have to start a new graphics project, with a form as your start form. You need a DrawingArea and a CommandButton on the form to get it going.
PUBLIC SUB Button1_Click()
  Draw.Begin(DrawingArea1)
  Draw.Line(1, 130, 500, 400)
  Draw.End
END

Mouse Tracking
You can simplify the program when you work without the drawing area. The code should look like this.
PUBLIC SUB Form1_MouseMove()
  Textbox1.text = Mouse.X
  Textbox2.text = Mouse.Y
END

Database programme.

PRIVATE $hConn AS Connection

PUBLIC SUB btnConnect_Click()
  DIM sName AS String
  TRY $hConn.Close
  sName = txtName.Text
  WITH $hConn
    .Type = cmbType.Text
    .Host = txtHost.Text
    .Login = txtUser.Text
    .Password = txtPassword.Text
  END WITH
  IF chkCreate.Value THEN
    $hConn.Open
    IF NOT $hConn.Databases.Exist(sName) THEN
      $hConn.Databases.Add(sName)
    ENDIF
    $hConn.Close
  ENDIF
  $hConn.Name = sName
  $hConn.Open

  frmDatabase.Enabled = TRUE
  frmRequest.Enabled = TRUE
CATCH
  Message.Error(Error.Text)
END

PUBLIC SUB btnCreate_Click()
  DIM hTable AS Table

  hTable = $hConn.Tables.Add("test")
  hTable.Fields.Add("id", gb.Integer)
  hTable.Fields.Add("firstname", gb.String, 16)
  hTable.Fields.Add("name", gb.String, 32)
  hTable.Fields.Add("birth", gb.Date)
  hTable.Fields.Add("active", gb.Boolean)
  hTable.Fields.Add("salary", gb.Float)
  hTable.PrimaryKey = ["id"]
  hTable.Update
CATCH
  Message.Error(Error.Text)
END

PUBLIC SUB btnDelete_Click()
  $hConn.Tables.Remove("test")
CATCH
  Message.Error(Error.Text)
END

PUBLIC SUB btnFill_Click()
  DIM iInd AS Integer
  DIM rTest AS Result

  INC Application.Busy

  $hConn.Begin
  rTest = $hConn.Create("test")
  FOR iInd = 1 TO 10000
    rTest!id = iInd
    rTest!firstname = ["Paul", "Pierre", "Jacques", "Antoine", "Mathieu"][Int(Rnd(5))]
    rTest!name = "Name #" & iInd
    rTest!birth = CDate("01/01/1970") + Int(Rnd(10000))
    rTest!active = Int(Rnd(2))
    rTest!salary = Rnd(1000, 10000)
    rTest.Update
  NEXT
  $hConn.Commit
FINALLY
  DEC Application.Busy
CATCH
  $hConn.Rollback
  Message.Error(Error.Text)
END

PUBLIC SUB btnRun_Click()
  DIM rData AS Result
  DIM hForm AS FRequest

  rData = $hConn.Exec(txtRequest.Text)
  hForm = NEW FRequest($hConn, rData)
  hForm.Show
CATCH
  Message.Error(Error.Text)
END

PUBLIC SUB Form_Open()
  $hConn = NEW Connection

END

PUBLIC SUB Form_Close()
  $hConn.Close
END

LAB EXERCISE 8: Version Control System setup and usage using RCS, CVS, SVN

SOLUTION: The Concurrent Versions System (CVS), also known as the Concurrent Versioning System, is a free client-server revision control system used in software development. Version control software keeps track of all work and all changes in a set of files, and allows several developers (potentially widely separated in space and/or time) to collaborate.

CVS uses a client-server architecture: a server stores the current version(s) of a project and its history, and clients connect to the server in order to "check out" a complete copy of the project, work on this copy and later "check in" their changes. Typically, the client and server connect over a LAN or over the Internet, but client and server may both run on the same machine if CVS only has to keep track of the version history of a project with local developers. The server software normally runs on Unix (although at least the CVSNT server also supports various flavors of Microsoft Windows), while CVS clients may run on any major operating-system platform.

Several developers may work on the same project concurrently, each one editing files within their own "working copy" of the project, and sending (or checking in) their modifications to the server. To avoid the possibility of people stepping on each other's toes, the server only accepts changes made to the most recent version of a file. Developers are therefore expected to keep their working copy up to date by incorporating other people's changes on a regular basis. This task is mostly handled automatically by the CVS client, requiring manual intervention only when an edit conflict arises between a checked-in modification and the yet-unchecked local version of a file.
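The change-tracking described above is, at its core, line-oriented diffing. Python's standard difflib module can illustrate the idea (a simplified sketch of the concept, not how CVS itself is implemented):

```python
import difflib

# Two revisions of the same file, as lists of lines
old = ["line one\n", "line two\n", "line three\n"]
new = ["line one\n", "line 2\n", "line three\n"]

# unified_diff yields the same kind of +/- hunks a 'cvs diff' would show
diff = list(difflib.unified_diff(old, new, fromfile="file,v", tofile="file"))
for line in diff:
    print(line, end="")
```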
If the check-in operation succeeds, the version numbers of all files involved automatically increment, and the CVS server writes a user-supplied description line, the date and the author's name to its log files. CVS can also run external, user-specified log processing scripts following each commit. These scripts are installed by an entry in CVS's loginfo file, and can trigger email notification or convert the log data into a Web-based format. Clients can also compare versions, request a complete history of changes, or check out a historical snapshot of the project as of a given date or revision number.

STEPS TO INSTALL COLLABNET SVN:
1. Set the JAVA_HOME environment variable, and point it to your Java 6 JRE home. For example: export JAVA_HOME=/usr/java/default

Test the variable:

$ $JAVA_HOME/bin/java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)

2. Switch to the folder where you want to install CollabNet Subversion Edge. You must have write permissions to this folder.

$ cd /opt

3. Untar the file you downloaded from CollabNet.

$ tar zxf CollabNetSubversionEdge-x.y.z_linux-x86.tar.gz

This creates a folder named "csvn" in the current directory. You can rename this folder if desired.

4. Optional. Install the application so that it starts automatically when the server restarts. This command generally requires root/sudo to execute.

$ cd csvn
$ sudo -E bin/csvn install

In addition to configuring your system so that the server is started with the system, this also writes the current JAVA_HOME and the current username into the file data/conf/csvn.conf. You can edit this file if needed, as it controls the startup settings for the application. Setting the JAVA_HOME and RUN_AS_USER variables in this file ensures they are set correctly when the application is run.

5. Optional. Configure proxy settings. CollabNet Subversion Edge needs access to the internet to check for and install updates. If you need to go through a proxy to access the internet, configure it by editing the data/conf/csvn.conf file created by the previous step. Uncomment and edit the HTTP_PROXY variable to configure your proxy server.

6. Start the server. Be sure that you are logged in as your own userid and not running as root.

$ bin/csvn start

This takes a few minutes, and the script loops until it sees that the server is running. If the server does not start, try starting it with this command:

$ bin/csvn console

This starts the server but sends the initial startup messages to the console.
You must log in to the CollabNet Subversion Edge browser-based management console and configure the Apache server before it can be run for the first time. The UI of the management console writes the needed Apache configuration files based on the information you provide.

The default administrator login is:

Address: http://localhost:3343/csvn
Username: admin
Password: admin

Subversion Edge also starts an SSL-protected version using a self-signed SSL certificate. You can access the SSL version at this URL:

Address: https://localhost:4434/csvn

You can force users to use SSL from the Server configuration. This causes attempts to access the site via plain HTTP on port 3343 to be redirected to the

secure port on 4434.

LAB EXERCISE 9: Text processing with Perl: simple programs, connecting with a database, e.g., MySQL

SOLUTION: Text processing using Perl. Perl is powerful for text processing applications, and regular expressions (REs) help a lot here. Let us learn some basic regular expressions.

Matching a pattern: m// (or just //) is used to match a pattern in a string.

Syntax: /pattern/ or m/pattern/

This example uses the default variable $_:

$_ = 'hello how are you';
if (/hello/) {
    print "default variable = $_\n";
    print "found hello\n";
}
# the i modifier ignores case differences
if (m/HELLO/i) {
    print "found HELLO\n";

}
# match with negation
if (! /hallo/) {
    print "not found hallo\n";
}
print "\n\n";
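Since the course also covers Python (Exercise 11), the same three pattern checks can be written for comparison with Python's re module:

```python
import re

s = 'hello how are you'

# /hello/ -- plain match
assert re.search(r'hello', s)
# m/HELLO/i -- the i modifier maps to re.IGNORECASE
assert re.search(r'HELLO', s, re.IGNORECASE)
# ! /hallo/ -- negated match
assert not re.search(r'hallo', s)
print('all three pattern checks behave as in the Perl example')
```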

Read a file and count the number of words in it. To keep it simple, let us define a word as a set of characters followed by a space. Thus 'hello how' is two words and 'hello how .' is three words.

#! /usr/bin/perl
print "Enter File Name ";
$filename = <STDIN>;
chomp $filename;
# First open the file for reading
open ($fh, "<", $filename) or die "cannot open $filename: $!";
# read line by line until end of file
while ($line = <$fh>) {
    @words = split ' ', $line;   # break line into words and store in an array
    $nw = $nw + scalar(@words);  # no. of elements in array = no. of words
}
print "Number of Words in the file : $nw \n";
close ($fh);
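The same word-count logic, with the same whitespace-based definition of a word, carries over to Python almost line for line. In this sketch the file input is replaced by an in-memory list of lines so the example is self-contained:

```python
# Count words, defining a word as whitespace-separated characters,
# mirroring the Perl "split ' ', $line" behaviour.
lines = ["hello how\n", "hello how .\n"]

nw = 0
for line in lines:
    words = line.split()   # like Perl's split ' ', $line
    nw += len(words)       # no. of elements in list = no. of words

print("Number of Words : %d" % nw)
```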

Database interface using Perl DBI.

DBI - the DataBase Interface module (more precisely, a collection of modules) is a feature-rich, efficient and yet simple tool for accessing databases from Perl. Almost all Linux distributions ship it; if not, you can download it from CPAN. The interface to any DBMS requires two sets of tools: one, DBI itself, which is generic; two, DBD::the_database. DBD is the driver component, and you should install the driver for whatever database you are using. DBD drivers are available for almost all standard databases.

Normally the database access workflow is like this:
a. Connect to the database (log in) using username, password etc. Once properly authenticated, a database handle is given.
b. Create the SQL query, use the database handle to send the query to the server and ask it to prepare the query.
c. The server parses the SQL and, if there are no errors, returns a statement handle.
d. Use the statement handle to execute the query.
e. Use the statement handle to fetch data - a single row or multiple rows at a time.
f. Close the statement handle.
g. Repeat steps b to f as long as you want, with new queries.
h. Finally disconnect from the database (log out) using the database handle.

Let us see these by means of sample code.

Assumptions:
- The DBD driver for MySQL is already installed.
- Database server: MySQL running on localhost
- Database user name: test
- Password: test123

Database name: testdb
Table: names
Columns in the table: id, name, age

Example 1: List all the records in the table and find out the average age.

Let us start the code.

Step 1: Connecting to the database

use DBI;
my $dbh = DBI->connect('DBI:mysql:database=testdb;host=localhost', 'test', 'test123');

connect requires three arguments: data source, username, password. The first argument, the data source, gives information about the database server, like the type of DBMS, its location etc. In our example the data source is specified as DBI:mysql:database=testdb;host=localhost. Here DBI:mysql means use the mysql driver; database=testdb means use the database testdb; host=localhost names the host on which the database is running. The other two arguments are the username and password, which need no explanation.

Step 2: Run the select query on the server

First store the SQL in a variable like this:

my $query = 'select * from names';

Then send the SQL to the server for parsing and checking:

my $sth = $dbh->prepare($query) or die "could not prepare $query\n";

In the above statement, $dbh is the database connection handle we got from DBI->connect earlier, $sth is the statement handle returned upon successful preparation, and $query refers to the SQL statement. The query can also be given directly as a string. Here we do some error checking using die. The $sth that is returned is required for any further operation on this query.

Now we run the query on the server:

$sth->execute();

Note that here we simply use $sth to run the query. Once we call execute, the server runs the query and keeps the result set ready for retrieval.

Step 3: Get results from the server one row at a time

fetchrow_array() is a function that returns one row of data as an array. We use a while loop to fetch all rows from the server:

while (($id, $name, $age) = $sth->fetchrow_array()) {
    print "id=$id name=$name age=$age\n";
}

fetchrow_array() returns an empty list when there are no more rows, so the loop runs until then. ($id, $name, $age) = $sth->fetchrow_array() assigns the returned row to a set of variables.

Step 4: Close the statement handle

$sth->finish();

Step 5: Close the database connection

$dbh->disconnect();

Here is the output of running the script:

id=1 name=RAMAN age=45
id=2 name=RAVI age=35

For the sake of convenience, the complete program listing is repeated here:

use DBI;
my $dbh = DBI->connect('DBI:mysql:database=testdb;host=localhost', 'test', 'test123');
my $query = 'select * from names';
my $sth = $dbh->prepare($query) or die "could not prepare $query\n";
$sth->execute();
while (($id, $name, $age) = $sth->fetchrow_array()) {
    print "id=$id name=$name age=$age\n";
}
$sth->finish();
$dbh->disconnect();
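Python's DB-API follows the same connect / prepare+execute / fetch / close workflow as Perl DBI. This sketch uses the standard sqlite3 module instead of MySQL so it runs without a database server; with MySQLdb (as in Exercise 11), essentially only the connect() call differs:

```python
import sqlite3

# Step 1: connect -- sqlite3 needs only a path; ':memory:' gives a throwaway db
dbh = sqlite3.connect(':memory:')
cur = dbh.cursor()

# Create and populate the example table from the text
cur.execute('CREATE TABLE names (id INTEGER, name TEXT, age INTEGER)')
cur.executemany('INSERT INTO names VALUES (?, ?, ?)',
                [(1, 'RAMAN', 45), (2, 'RAVI', 35)])

# Steps 2-3: execute the query and fetch the rows
cur.execute('SELECT * FROM names ORDER BY id')
rows = cur.fetchall()
for id_, name, age in rows:
    print("id=%d name=%s age=%d" % (id_, name, age))

# Steps 4-5: close the cursor (statement handle) and the connection
cur.close()
dbh.close()
```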

LAB EXERCISE 10: Running PHP: simple applications like login forms after setting up a LAMP stack

SOLUTION: PHP is a general-purpose scripting language originally designed for web development to produce dynamic web pages. For this purpose, PHP code is embedded into the HTML source document and interpreted by a web server with a PHP processor module, which generates the web page document. PHP has also evolved to include a command-line interface and can be used in standalone graphical applications. PHP can be deployed on most web servers and as a standalone interpreter, on almost every operating system and platform, free of charge.

SAMPLE PROGRAM TO GENERATE RANDOM NUMBERS

MYINPUT.HTML

<form method="post" action="forms.php">
  <p>Range starting <input type="text" name="begin"/> </p>
  <p>Range end <input type="text" name="end"/> </p>
  <p><input type="submit" value="generate"> </p>
</form>

FORMS.PHP

<?php import_request_variables("pg", "form_"); ?>
<html>

<body>
  <p>From Range <?php echo $form_begin; ?> To <?php echo $form_end; ?></p>
  <p>I have selected Random number <?php echo rand($form_begin, $form_end); ?></p>
</body>
</html>
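The same range-based random pick can be sketched in Python for comparison: PHP's rand($begin, $end) corresponds to random.randint, inclusive at both ends. The begin/end values here stand in for what the HTML form would submit:

```python
import random

begin, end = 1, 100             # the values the HTML form would submit
n = random.randint(begin, end)  # inclusive on both ends, like PHP rand()
print("From Range %d To %d I have selected Random number %d" % (begin, end, n))
```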

LAB EXERCISE 11: Running Python: some simple exercises, e.g. connecting with a MySQL database

SOLUTION: MySQL database access program using Python

#!/usr/bin/python
import MySQLdb

db= MySQLdb.connect(host="localhost", user="root", passwd="Pass12", db="python")

cursor = db.cursor() stmt = "SELECT * from Books" cursor.execute(stmt)

rows = cursor.fetchall()
for row in rows:
    print "Row: "
    for col in row:
        print "Column: %s" % (col)
    print "End of Row"
print "Number of rows returned: %d" % cursor.rowcount
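One habit worth adding to the exercise above: never interpolate user input directly into the SQL string; use the driver's placeholders instead, so the driver quotes values safely. MySQLdb uses %s placeholders; the sketch below uses the standard sqlite3 module (placeholder ?) so it runs without a MySQL server:

```python
import sqlite3

db = sqlite3.connect(':memory:')
cursor = db.cursor()
cursor.execute('CREATE TABLE Books (title TEXT, author TEXT)')

# Parameterized insert: the driver quotes the values safely.
cursor.execute('INSERT INTO Books VALUES (?, ?)', ('FOSS Lab Manual', 'anon'))

title = 'FOSS Lab Manual'   # imagine this came from the user
cursor.execute('SELECT author FROM Books WHERE title = ?', (title,))
row = cursor.fetchone()
print("author: %s" % row[0])
db.close()
```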



LAB EXERCISE 12: Set up the complete network interface using the ifconfig command, like setting the gateway, DNS, IP tables, etc.

SOLUTION:

Setting up Network Interfaces

To see how the network is configured on your computer, use the ifconfig command. Note that this command is in the /sbin directory, which is in root's path but not in a normal user's path. So, as a normal user, you have to type "/sbin/ifconfig" at the command prompt; as root, just "ifconfig" will do. Here is example output of this command run as a normal user:

$> /sbin/ifconfig
eth0  Link encap:Ethernet  HWaddr 00:0F:3D:CA:D3:95
      inet addr:  Bcast:  Mask:
      inet6 addr: fe80::20f:3dff:feca:d395/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:891 errors:0 dropped:0 overruns:0 frame:0
      TX packets:220 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:238795 (233.1 Kb)  TX bytes:101252 (98.8 Kb)
      Interrupt:9 Base address:0xa000

lo    Link encap:Local Loopback
      inet addr:  Mask:
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:10 errors:0 dropped:0 overruns:0 frame:0
      TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:700 (700.0 b)  TX bytes:700 (700.0 b)

In the above output, note that there are two sections. The eth0 section gives information about the network interface card (NIC), sometimes also called the ethernet card. The second section, lo, is the local loopback interface. If you do not have networking set up, it is possible that the eth0 section will not appear in the output, but the lo section must appear. If it does not (which is highly unlikely), then something is really wrong in the Fedora installation. Note that NICs in Linux are named ethx, where x is 0 for the first network card, 1 for the second, and so on.

The most important line, for now, is the second line of each section, which shows the network address, broadcast address and netmask of the interface. This is the same for the lo interface on all computers.
The one in the eth0 section differs depending on the network setup of the first NIC in each computer. Now, if you have two network cards, or controllers (e.g. an on-board NIC), you should have eth0 and eth1 sections. If for some reason an ethx section does not appear for one of your network cards, then that card is either off (down) or has not been recognized by Linux. In this case, try the same command as above but with the "-a" option (for more options, do "man ifconfig" at a command prompt). Here is an example from a computer which has two NICs and works as a router:

$> /sbin/ifconfig -a
eth0  Link encap:Ethernet  HWaddr 00:04:75:8A:D6:DF
      inet addr:  Bcast:  Mask:
      inet6 addr: fe80::204:75ff:fe8a:d6df/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:203651 errors:0 dropped:0

overruns:0 frame:0
      TX packets:215610 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:204439186 (194.9 MiB)  TX bytes:72319551 (68.9 MiB)
      Interrupt:22 Base address:0xd800

eth1  Link encap:Ethernet  HWaddr 00:50:BA:50:03:87
      inet addr:  Bcast:  Mask:
      inet6 addr: fe80::250:baff:fe50:387/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:210939 errors:0 dropped:0 overruns:0 frame:0
      TX packets:194640 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:69994495 (66.7 MiB)  TX bytes:199919032 (190.6 MiB)
      Interrupt:23 Base address:0xd400

lo    Link encap:Local Loopback
      inet addr:  Mask:
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:415911 errors:0 dropped:0 overruns:0 frame:0
      TX packets:415911 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:50202233 (47.8 MiB)  TX bytes:50202233 (47.8 MiB)

Here, the two network cards are shown. Had any of the network cards not been up (on) in our first try of ifconfig, it would have been shown in this output: "-a" tells ifconfig to list all network interfaces the Linux kernel knows about on the computer. If an ethx section is missing even now, that means the NIC has not been recognized by the kernel.

Using lspci to see ethernet controllers

You can also see what network controllers are detected by the kernel using the lspci command, which also resides in the /sbin directory. Here is example output:

$> /sbin/lspci
0000:00:00.0 Host bridge: Intel Corp. 82845 845 (Brookdale) Chipset Host Bridge (rev 03)
0000:00:01.0 PCI bridge: Intel Corp. 82845 845 (Brookdale) Chipset AGP Bridge (rev 03)
0000:00:1e.0 PCI bridge: Intel Corp. 82801 PCI Bridge (rev 12)
0000:00:1f.0 ISA bridge: Intel Corp. 82801BA ISA Bridge (LPC) (rev 12)
0000:00:1f.1 IDE interface: Intel Corp. 82801BA IDE U100 (rev 12)
0000:00:1f.2 USB Controller: Intel Corp. 82801BA/BAM USB (Hub #1) (rev 12)
0000:00:1f.3 SMBus: Intel Corp. 82801BA/BAM SMBus (rev 12)
0000:00:1f.4 USB Controller: Intel Corp. 82801BA/BAM USB (Hub #2) (rev 12)
0000:01:00.0 VGA compatible controller: nVidia Corporation NV11 [GeForce2 MX/MX 400] (rev b2)
0000:02:09.0 FireWire (IEEE 1394): Lucent Microelectronics FW323 (rev 61)
0000:02:0a.0 Ethernet controller: 3Com Corporation 3c905C-TX/TX-M [Tornado] (rev 78)
0000:02:0b.0 Ethernet controller: D-Link System Inc RTL8139 Ethernet (rev 10)
0000:02:0c.0 Multimedia audio controller: Ensoniq 5880 AudioPCI (rev 02)

In the above example output, two ethernet controllers are listed as detected by the kernel: one at PCI address 0000:02:0a.0 and the other at 0000:02:0b.0.
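The interface names the kernel knows about can also be listed from Python (3.3+ on Linux) with socket.if_nameindex(); the loopback interface lo should always appear, just as the lo section must appear in the ifconfig output above:

```python
import socket

# Each entry is an (index, name) pair, e.g. (1, 'lo'), (2, 'eth0')
for idx, name in socket.if_nameindex():
    print("%d: %s" % (idx, name))
```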

Subnets and two NICs in one computer

If you have two NICs in one computer, it is important that they are on different subnets. For example, the first NIC can be on 192.168.1.x (where x is any valid last octet for an interface IP address, i.e. it cannot be 0 or 255) and the other on 192.168.0.x. The subnet mask of both should be the same (255.255.255.0 for these networks).
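The "different subnets" requirement can be checked with Python's ipaddress module. With the /24 (255.255.255.0) netmask, 192.168.1.x and 192.168.0.x are distinct, non-overlapping networks (the .1 host addresses below are just example values):

```python
import ipaddress

# example addresses for the two cards, /24 = netmask 255.255.255.0
nic0 = ipaddress.ip_interface('192.168.1.1/24')
nic1 = ipaddress.ip_interface('192.168.0.1/24')

print("nic0 network:", nic0.network)   # 192.168.1.0/24
print("nic1 network:", nic1.network)   # 192.168.0.0/24

# .0 is the network address and .255 the broadcast address,
# which is why x cannot be 0 or 255 for a host interface
print("nic0 broadcast:", nic0.network.broadcast_address)
```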

Setting up the LAN

If you are connecting to the internet through an ADSL or a T1 connection, you can share that connection with multiple computers by making your own Local Area Network (LAN). This is shown schematically in the figure below.

Note that depending on your personal preferences and needs, you can use a router and a modem to connect to the T1 or ADSL line. You then need a computer with two NICs. By connecting one of the NICs, e.g. NIC0, to the hardware router HR, and connecting the other NIC, NIC1, to a switch, you can share the internet connection with the computers (C1, C2 ... Cn) in your internal LAN, which connect to the switch S. Depending on the network interface you have, you may also be able to connect the external internet connection directly to the network card of the router computer (as shown by the dashed line). Here we will assume you are connected as shown with the solid lines in the figure.

Configuring the NICs of the router Linux machine

We want to configure the two network cards of our router machine. Note that the two network cards must be on different subnets. As an example, we will assume that eth0 is on the 192.168.1.x subnet and eth1 on the 192.168.0.x subnet. What we are looking for is that eth0 should have:

IP address:
Subnet mask:
Broadcast address:
Network address:

and we want that eth1 should have:

IP address:
Subnet mask:
Broadcast address:
Network address:

In all this, we are assuming that the hardware router has an address on the 192.168.1.x subnet. The NICs can be configured either using the GUI method or in text mode by editing the config files.

GUI method
You can get the network configuration GUI via RedHatMenu -> System Settings -> Network. There you can configure your devices based on the above IP information. Note that both of your cards are being configured with static IP addresses.

Text based method

The text-based method is very useful in many cases. It is faster and consumes fewer resources. Moreover, if you do not have a display (you are working remotely through a text console, or the router machine does not have X installed), then you can work in text mode *only*. The configuration of the network cards is saved in the following config files (for NICx):

1. /etc/sysconfig/network-scripts/ifcfg-ethx
2. /etc/sysconfig/networking/devices/ifcfg-ethx

Now, here is something I am not sure about. You probably have to modify the script files in the network-scripts directory, but I guess you should also copy those config files into the devices directory as well. Here is what your eth0 configuration file, /etc/sysconfig/network-scripts/ifcfg-eth0, should look like (the following is the output obtained by the cat command; note that anything following a "#" on a line is considered a comment and ignored):

$> cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Please read /usr/share/doc/initscripts-*/sysconfig.txt
# for the documentation of these parameters.
IPV6INIT=no
BOOTPROTO=static
ONBOOT=yes
USERCTL=yes
TYPE=Ethernet
DEVICE=eth0
IPADDR=
NETWORK=
GATEWAY=
BROADCAST=
NETMASK=

You should probably copy this file into the devices directory also.

Stopping and restarting networking

There is no need to reboot the machine. Once changes have been made in the configuration, you need to restart networking for the changes to take effect. This is done by giving the following commands (as root in a command terminal):

#> /etc/init.d/network stop
#> /etc/init.d/network start

or by just giving the following one command, which first stops the networking service and then starts it:

#> /etc/init.d/network restart

All the scripts in /etc/init.d/ are executables that take the arguments start, stop and restart. All services (httpd, sshd, ftpd, etc.) can be similarly controlled.
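The ifcfg-ethx files are simple KEY=value text, so a few lines of Python can read one back into a dict, which is handy for checking a configuration before restarting networking. parse_ifcfg is a hypothetical helper written for this sketch, not part of any Red Hat tool:

```python
def parse_ifcfg(text):
    """Parse KEY=value lines; '#' starts a comment, blank lines are ignored."""
    conf = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition('=')
        conf[key.strip()] = value.strip()
    return conf

# a fragment in the same format as ifcfg-eth0 above
sample = """\
# example ifcfg-eth0 fragment
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
"""
cfg = parse_ifcfg(sample)
print(cfg)
```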