
Sameer Mohammed Omar Ali, B.Sc.-IT (New) - Semester 5, RN: 540811612, LC: 2531

BT0051 UNIX Operating System (Book ID: B0584) Assignment Set 1


1- Using the following directory tree structure, if IT is your Home directory (directories shown in the tree: Root, IT, MBA, BSc-IT, MCA, Finance, Marketing, UNIX):

Which command do you use to navigate to the BSc-IT directory? cd BSc-IT

Which command is used to display your current directory? pwd

Which command is used to navigate directly to the root directory from the BSc-IT directory? cd /

Which command is used to create a new directory in BSc-IT called UNIX? mkdir UNIX

2- Use the cat command to: display the text Welcome Home on your screen; redirect the text Welcome Home to a file called file-1; append the text Friends to the contents of file-1.

$ cat > file-1
Welcome Home
[Ctrl+d]

$ cat >> file-1
Friends
[Ctrl+d]

3- Which command is used to search following patterns in file by name file-1: TENDULKAR, tendulkar, TeNdUlKar? $ cat file-1|grep TENDULKAR

$ cat file-1|grep tendulkar

$ cat file-1|grep TeNdUlKar

Since the spelling is the same and only the case differs, you can search for all three patterns at once with grep -i, which ignores case: $ grep -i tendulkar file-1. You can also put the patterns in a pattern file and search with grep -f patternfile file-1.

BT0051 UNIX Operating System (Book ID: B0584) Assignment Set 2


1- Define Process. Which command is used to find out the currently executing processes in UNIX? A process is an instance of a running program. If, for example, three people are running the same program simultaneously, there are three processes there, not just one. In fact, we might have more than one process running even with only one person executing the program, because (as you will see later) a program can split into two, making two processes out of one. Keep in mind that all Unix commands, e.g. cc and mail, are programs, and thus contribute processes to the system when they are running. If 10 users are running mail right now, that will be 10 processes. At any given time, a typical Unix system will have many active processes, some of which were set up when the machine was first powered up.

Every time you issue a command, Unix starts a new process and suspends the current process (the C-shell) until the new process completes (except in the case of background processes). Unix identifies every process by a Process Identification Number (pid) which is assigned when the process is initiated. When we want to perform an operation on a process, we usually refer to it by its pid. Unix is a timesharing system, which means that the processes take turns running. Each turn is called a timeslice; on most systems this is set at much less than one second. The reason this turn-taking approach is used is fairness: we don't want a 2-second job to have to wait for a 5-hour job to finish, which is what would happen if a job had the CPU to itself until it completed.

When you execute a program, the scheduler submits your process to a queue called the process queue. At this instant the process is said to be in the submit state. Once submitted, the process waits its turn in the queue for some time. At this stage the process is said to be in the hold state. As the process advances in the queue, at some instant it becomes the next one in the queue to receive CPU attention; at this stage it is in the ready state. Finally the process gets the attention of the CPU and starts getting executed, thereby attaining the run state. In the middle of this execution it might happen that the time slice allotted to the process gets over and the CPU starts running another process. At such times the old process is returned to the ready state and is

placed back in the process queue. As the CPU diverts its attention to the new process, all the necessary parameters of the old process are saved for retrieval when its next time slice arrives. The old process will now be in the ready state, waiting for its next time slice to arrive. Some processes might be required to do disk input/output. Since I/O is a slow operation, the CPU can't lie idle till the I/O is over. Therefore, such processes are put in the wait state until their I/O is over and are then placed in the ready state. A process whose execution comes to an end goes into the complete state and is then removed from the process queue.

The command used to find out the currently executing processes in UNIX is the ps command. For example, to see which processes are running at any instant, just type ps and hit enter.

$ ps
  PID   TTY   TIME   COMMAND
 2269   3a    0:05   sh
 2396   3a    0:00   ps

The meaning of the column titles is as follows:
PID - process identification number
TTY - controlling terminal of the process
TIME - amount of CPU time the process has acquired so far
COMMAND - name of the command that issued the process

Unix assigns a unique number to every process running in memory. This number is called the process ID, or simply PID. The PIDs start with 0 and run up to a maximum of 32767. When the maximum number is reached, it starts counting all over again from 0. The output of ps shows the PIDs for the two processes being run by us when ps was executed. The output also shows the terminal from which the processes were launched, the time that has elapsed since the processes were launched, and the names of the processes.

The first process running at your terminal is sh. This stands for Bourne shell. This process is born the moment you login and dies only when you log out of the system. The other process that is running is ps itself. This process was obviously running when ps took the snapshot of memory to determine which processes were running.

2- What is the output of the $ ps -e and $ ps -a commands?

The output of $ ps -e (-e stands for every process running at that instant) is:

$ ps -e
  PID   TTY   TIME   COMMAND
    0   ?     0:00   sched
    1   ?     0:01   init
    2   ?     0:00   vhand
    3   ?     0:00   bdflush
  487   01    0:01   sh
  288   02    0:01   sh

The output of $ ps -a (-a stands for the processes of all users) is:

$ ps -a
  PID   TTY   TIME   COMMAND
 2269   3a    0:05   sh
 2396   3a    0:00   ps -a
 2100   3b    0:00   sh
 2567   3b    0:00   vi

3- Which command is used to transfer a Foreground process to Background? Give one example.

To run a process in the background, Unix provides the ampersand (&) symbol. While executing a command, if this symbol is placed at the end of the command, then the command will be executed in the background. When you run a process in the background, a number is displayed on the screen. This number is nothing but the PID of the process that you have just executed in the background. For example:

$ sort employee.dat > emp.out &
17653
$

The task of sorting the file employee.dat and storing the output in emp.out has now been assigned to the background, leaving the user free to carry out any other task in the foreground.

4- Say you have three processes P1, P2 and P3 running in the background. You want to assign priorities to them; which command is suitable? Give its syntax. Processes in the UNIX system are usually executed with equal priority. This is not always desirable, since high-priority jobs must be completed at the earliest. UNIX offers the nice command, which is used with the & operator to reduce the priority of jobs. More important jobs can then have greater access to the system resources (being nice to your neighbours). To run a job with a low priority, the command name should be prefixed with nice, for example:

$ nice wc -l uxmanual &

The same applies to the given background processes, e.g. $ nice P1 &, $ nice P2 &, $ nice P3 &.

Nice values range from 0 to 39, and commands run with a nice value of 20 in the Bourne shell; a higher nice value means a lower priority. nice reduces the priority of any process by 10 units, raising its nice value to 30. The amount of reduction can also be specified with the -n option:

$ nice -n 15 wc -l uxmanual &

5- Which command is used to fetch names from a file Name.txt, sort them and remove any duplicate entries?

$ sort -u Name.txt

6- Which commands are used to carry out the following operations in the vi editor?
a. To save a file: :w
b. To start editing a file: :e filename
c. To save and quit a file: :wq
d. To quit without saving (forced quit): :q!

BT0052 Client Server Architecture (Book ID: B0036) Assignment Set 1


1- Distinguish between CISC and RISC.

CISC: Pronounced "sisk", and stands for Complex Instruction Set Computer. Most PCs use CPUs based on this architecture; for instance, Intel and AMD CPUs are based on CISC architectures. Typically, CISC chips have a large number of different and complex instructions. The philosophy behind it is that hardware is always faster than software, therefore one should make a powerful instruction set which provides programmers with assembly instructions that do a lot with short programs. In general, CISC chips are relatively slow (compared to RISC chips) per instruction, but need fewer instructions than RISC.

RISC: Pronounced "risk", and stands for Reduced Instruction Set Computer. RISC chips evolved around the mid-1980s as a reaction to CISC chips. The philosophy behind it is that almost no one uses the complex assembly language instructions provided by CISC, and people mostly use compilers, which never use complex instructions. Apple, for instance, uses RISC chips. Therefore fewer, simpler and faster instructions are better than the large, complex and slower CISC instructions. However, more instructions are needed to accomplish a task. Another advantage of RISC is that, in theory, because of the simpler instructions, RISC chips require fewer transistors, which makes them easier to design and cheaper to produce.

2- Write a short note on Asynchronous Transfer Mode of transmission. ATM is a cell-switching and multiplexing technology that combines the benefits of dedicated circuits (invariant transmission delay and guaranteed capacity) with those of packet switching (flexibility and efficiency for intermittent traffic). The fixed length of ATM's cells (53 bytes: 48 bytes for the payload and 5 bytes for the header) facilitates high-speed implementations that can support isochronous (time-critical) applications such as video and telephony with constant flow rates, in addition to more conventional data communications between computers, where fluctuations in packet arrival rates are typically not problematic (7). ATM standards

define a broad range of bandwidths, from 1.5 Mbps (via T1 or DS1) to 622 Mbps (OC-12) and above, but most commercially available ATM products currently provide 155.52 Mbps (OC-3) or 100 Mbps (TAXI). ATM is currently implemented over fiber connections and various twisted-pair wiring alternatives. All devices in an ATM network attach directly to an ATM switch. Multiple ATM switches can be combined in a fabric, sometimes called an ATM cloud, and virtual circuits can be dynamically created between any two nodes on one or more ATM switches. So long as the switch can handle the aggregate cell transfer rate, additional connections to the switch can be made.

3- Explain various client/server applications using Java. The Unix input/output (I/O) system follows a paradigm usually referred to as Open-Read-Write-Close. Before a user process can perform I/O operations, it calls Open to specify and obtain permissions for the file or device to be used. Once an object has been opened, the user process makes one or more calls to Read or Write data. Read reads data from the object and transfers it to the user process, while Write transfers data from the user process to the object. After all transfer operations are complete, the user process calls Close to inform the operating system that it has finished using that object.

When facilities for Inter Process Communication (IPC) and networking were added to Unix, the idea was to make the interface to IPC similar to that of file I/O. In Unix, a process has a set of I/O descriptors that one reads from and writes to. These descriptors may refer to files, devices, or communication channels (sockets). The lifetime of a descriptor is made up of three phases: creation (open socket), reading and writing (receive and send to socket), and destruction (close socket). The IPC interface in BSD-like versions of Unix is implemented as a layer over the network TCP and UDP protocols. Message destinations are specified as socket addresses; each socket address is a communication identifier that consists of a port number and an Internet address. The IPC operations are based on socket pairs, one socket belonging to each communicating process. IPC is done by transmitting data in a message between a socket in one process and a socket in another process. When messages are sent, they are queued at the sending socket until the underlying network protocol has transmitted them. When they arrive, the messages are queued at the receiving socket until the receiving process makes the necessary calls to receive them.

TCP/IP and UDP/IP communications

There are two communication protocols that one can use for socket programming: datagram communication and stream communication.

Datagram communication: The datagram communication protocol, known as UDP (User Datagram Protocol), is a connectionless protocol, meaning that each time you send datagrams, you also need to send the local socket descriptor and the receiving socket's address. As you can tell, additional data must be sent each time a communication is made.

Stream communication: The stream communication protocol is known as TCP (Transmission Control Protocol). Unlike UDP, TCP is a connection-oriented protocol. In order to communicate over the TCP protocol, a connection must first be established between a pair of sockets. While one of the sockets listens for a connection request (server), the other asks for a connection (client). Once two sockets have been connected, they can be used to transmit data in both (or either one of the) directions. Now, you might ask which protocol you should use, UDP or TCP? This depends on the client/server application you are writing. The following discussion shows the differences between the UDP and TCP protocols; this might help you decide which protocol you should use.
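As a minimal illustration of the datagram style described above, the sketch below sends a single UDP datagram from Java; the host name, port and payload are illustrative assumptions, not values from the text.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpSendExample {
    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes();                       // illustrative payload
        InetAddress host = InetAddress.getByName("localhost");  // assumed receiver
        DatagramSocket socket = new DatagramSocket();           // no connection is established
        // Each datagram carries the destination address and port itself:
        DatagramPacket packet = new DatagramPacket(data, data.length, host, 9876);
        socket.send(packet);
        socket.close();
    }
}

A TCP (stream) client, by contrast, first opens a Socket to the server and then reads and writes over the resulting connection, as the web client in the next question shows.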

4- How will you develop a simple web client in Java using Sockets?

import java.io.*;
import java.net.*;

public class webc {
    public static void main(String[] args) {
        try {
            // Connect to the web server on port 80
            Socket cSock1 = new Socket("cecasum.utc.edu", 80);
            System.out.println("Client1: " + cSock1);
            getPage(cSock1);
        } catch (UnknownHostException e) {
            System.out.println("UnknownHostException: " + e);
        } catch (IOException e) {
            System.err.println("IOException: " + e);
        }
    }

    public static void getPage(Socket cSock) {
        try {
            DataOutputStream outbound = new DataOutputStream(cSock.getOutputStream());
            BufferedReader inbound = new BufferedReader(new InputStreamReader(cSock.getInputStream()));

            // Send an HTTP GET request for the page
            outbound.writeBytes("GET /~cslab/cpsc591/test.htm HTTP/1.0\r\n\r\n");

            // Read and print the response until the closing </HTML> tag
            String responseLine;
            while ((responseLine = inbound.readLine()) != null) {
                System.out.println(responseLine);
                if (responseLine.indexOf("</HTML>") != -1)
                    break;
            }
            outbound.close();
            inbound.close();
            cSock.close();
        } catch (IOException ioe) {
            System.out.println("IOException: " + ioe);
        }
    }
}

5- Explain TCP/IP Protocol in detail. TCP, an acronym for Transmission Control Protocol, corresponds to the fourth layer of OSI reference model. IP corresponds to the third layer of the same model. Each of these protocols has the following features:

TCP: It provides a connection-oriented service; that is, a logical connection must be established prior to communication. Because of this, continuous transmission of large amounts of data is possible. It ensures highly reliable data transmission for the upper layer using the IP protocol. This is possible because TCP uses positive acknowledgement to confirm to the sender that the data was properly received. A negative acknowledgement implies that the failed data segment needs to be retransmitted. The TCP header includes both source and destination port fields for identifying the applications for which the connection is established. The Sequence Number and Acknowledgement Number fields underlie the positive acknowledgement and retransmission techniques. Integrity checks are accommodated using the Checksum field.

IP: It is a connectionless service and operates at the third layer of the OSI reference model. That is, no logical connection is needed prior to the transmission of data. This type of protocol is suitable for the sporadic transmission of data to a number of destinations. It has no functions such as sequence control, error recovery and control, or flow control; each destination is identified simply by its IP address.

6- Explain Remote Procedure Call in detail. Remote Procedure Call (RPC) is a client/server infrastructure that increases the interoperability, portability, and flexibility of an application by allowing the application to be distributed over multiple heterogeneous platforms. It reduces the complexity of developing applications that span multiple operating systems and network protocols by insulating the application developer from the details of

the various operating system and network interfaces; function calls are the programmer's interface when using RPC.

To access the remote server portion of an application, special function calls, RPCs, are embedded within the client portion of the client/server application program. Because they are embedded, RPCs do not stand alone as a discrete middleware layer.

RPC increases the flexibility of an architecture by allowing a client component of an application to employ a function call to access a server on a remote system. RPC allows the remote component to be accessed without knowledge of the network address or any other lower-level information. Most RPCs use a synchronous, request-reply (sometimes referred to as call/wait) protocol, which involves blocking the client until the server fulfils its request; a minimal sketch of this pattern is given below. Asynchronous implementations are available but are currently the exception.
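The following is only an illustration of the call/wait idea using plain Java sockets, not any particular RPC toolkit; the server address, port and request string are assumptions.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class CallWaitClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 5000);                      // assumed server
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("getBalance 1234");      // the "remote procedure" request
            String reply = in.readLine();        // the client blocks here until the server replies
            System.out.println("Reply: " + reply);
        }
    }
}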

RPC can be implemented in two ways:

1. Within a broader, more encompassing proprietary product.
2. By a programmer using a proprietary tool to create client/server RPC stubs.

RPC is appropriate for client/server applications in which the client can issue a request and wait for the server's response before continuing its own processing. Because most RPC implementations do not support peer-to-peer, or asynchronous, client/server interaction, RPC is not well suited for applications involving distributed objects or object-oriented programming.

Asynchronous and synchronous mechanisms each have strengths and weaknesses that should be considered when designing any specific application. In contrast to the asynchronous mechanisms employed by Message-Oriented Middleware, the use of a synchronous request-reply mechanism in RPC requires that the client and server are always available and functioning. In order to allow a client/server application to recover from a blocked condition, an implementation of RPC is required

to provide mechanisms such as error messages, request timers, retransmission, or redirection to an alternate server. The complexity of an application using RPC depends on the sophistication of the specific RPC implementation. RPCs that implement asynchronous mechanisms are very few and are difficult to implement.

BT0052 Client Server Architecture (Book ID: B0036) Assignment Set 2


1- Explain the different types of Client/Server Architectures. The different types of client/server architectures are:
1. Two-tier client/server architecture
2. Three-tier client/server architecture

Two-tier client/server architecture: In a 2-tier architecture, RPCs or SQL are typically used to communicate between the client and the server. The server is likely to have support for stored procedures and triggers. These mean that the server can be programmed to implement business rules that are better suited to run on the server than on the client, resulting in a much more efficient overall system. Since 1992, software vendors have developed and brought to market toolsets to simplify development of applications for the 2-tier client/server architecture. The best known of these tools are Microsoft's Visual Basic, Borland's Delphi, and Sybase's PowerBuilder. These modern, powerful tools, combined with literally millions of developers who know how to use them, mean that the 2-tier client/server approach is a good and economical solution for certain classes of problems. The 2-tier client/server architecture has proven to be very effective in solving workgroup problems. Workgroup, as used here, is loosely defined as a dozen to 100 people interacting on a LAN. For bigger, enterprise-class problems and/or applications that are distributed over a WAN, use of this 2-tier approach has generated some problems.

Three-tier client/server architecture: The client can deliver its request to the middle layer, disengage, and be assured that a proper response will be forthcoming at a later time (the terms middle layer and middle tier are synonymous in this context). There's no free lunch, however, and the price for this added flexibility and performance has been a development environment that is considerably more difficult than that of 2-tier applications.

The most basic type of middle layer (and the oldest, the concept on mainframes dating from the early 1970s) is the transaction-processing monitor, or TP monitor. A TP monitor can be thought of as a kind of message-queuing service. The client connects to the TP monitor instead of to the database server. The transaction is accepted by the monitor, which queues it and then takes responsibility for managing it to correct completion.

The net result of using a 3-tier client/server architecture with a TP monitor is that the resulting environment is far more scalable than a 2-tier approach with a direct client-to-server connection. For really large applications (thousands of users), a TP monitor is one of the most effective solutions.

2- Explain hierarchical vs localized file server deployment.

3- Explain WAN Connectivity in detail. A frequently overlooked aspect of a LAN's topology is its connection to the wide area network. In many cases, WAN connectivity is provided by a single connection from the backbone to the router.

The LAN's connection to the router that provides WAN connectivity is a crucial link in a building's overall LAN topology. Improper technology selection at this critical point can result in unacceptably deteriorated levels of performance for all traffic entering or exiting the building. LAN technologies that use a contention-based access method are highly inappropriate for this function.

Networks that support a high degree of WAN-to-LAN and LAN-to-WAN traffic benefit greatly from having the most robust connection possible in this aspect of their overall topology. The technology selected should be robust in terms of its nominal transmission rate and its access method. Contention-based media, even on a dedicated switched port, may become problematic in high-usage networks, since this link is the bottleneck for all traffic coming into, and trying to get out of, the building.

4- Write a Java Program to implement a simple web server.

The program below uses a server socket to wait for a connection, then opens a socket to the client to return an HTML document with HTTP headers.

import java.io.*;
import java.net.*;

class websrv {
    public static void main(String[] args) {
        ServerSocket srvSock = null;
        Socket cliSock = null;
        int connects = 0;
        try {
            // Listen on port 60337 with a backlog of 5 pending connections
            srvSock = new ServerSocket(60337, 5);
            while (connects < 3) {
                cliSock = srvSock.accept();     // wait for a client connection
                serviceClient(cliSock);
                connects++;
            }
            srvSock.close();
        } catch (IOException e) {
            System.out.println("Error in simple web server: " + e);
        }
    }

    public static void serviceClient(Socket client) throws IOException {
        BufferedReader inbound = null;
        DataOutputStream outbound = null;
        try {
            inbound = new BufferedReader(new InputStreamReader(client.getInputStream()));
            outbound = new DataOutputStream(client.getOutputStream());
            StringBuffer buffer = prepareOutput();

            String inputLine;
            while ((inputLine = inbound.readLine()) != null) {
                // A blank line marks the end of the HTTP request headers
                if (inputLine.equals("")) {
                    outbound.writeBytes(buffer.toString());
                    System.out.println("Wrote buffer to " + client);
                    // sleep(500); for slow Win 95 clients
                    break;
                }
            }
        } finally {
            System.out.println("Cleaning up connection: " + client);
            if (outbound != null) outbound.close();
            if (inbound != null) inbound.close();
            client.close();
        }
    }

    public static StringBuffer prepareOutput() {
        StringBuffer outputBuffer = new StringBuffer();
        outputBuffer.append("<HTML>\n<HEAD>\n<TITLE>Test HTML Document</TITLE>\n");
        outputBuffer.append("</HEAD>\n");
        outputBuffer.append("<BODY>\nThis is a <STRONG>test</STRONG> HTML document!\n");
        outputBuffer.append("</BODY>\n");
        outputBuffer.append("</HTML>\n");

        StringBuffer headerBuffer = new StringBuffer();
        headerBuffer.append("HTTP/1.0 200 OK\r\n");
        headerBuffer.append("Content-Type: text/html\r\n");
        headerBuffer.append("Content-Length: " + outputBuffer.length());
        headerBuffer.append("\r\n\r\n");
        headerBuffer.append(outputBuffer.toString());
        return headerBuffer;
    }
}

5- Explain Booth Multiplication Algorithm with a suitable example. Booth's algorithm gives a procedure for multiplying binary integers in signed 2's-complement representation. It operates on the fact that strings of 0's in the multiplier require no addition but just shifting, and a string of 1's in the multiplier from bit weight 2^k down to weight 2^m can be treated as 2^(k+1) - 2^m. For example, the binary number 001110 (+14) has a string of 1's from 2^3 to 2^1 (k = 3, m = 1).

[Table: numerical example of the binary multiplier]

The number can be represented as 2^(k+1) - 2^m = 2^4 - 2^1 = 16 - 2 = 14. Therefore, the multiplication M * 14, where M is the multiplicand and 14 the multiplier, can be done as M * 2^4 - M * 2^1. Thus, the product can be obtained by shifting the binary multiplicand M four times to the left and subtracting M shifted left once.

As in all multiplication schemes, Booth's algorithm requires examination of the multiplier bits and shifting of the partial product. Prior to the shifting, the multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged, according to the following rules:
1. The multiplicand is subtracted from the partial product upon encountering the first least significant 1 in a string of 1's in the multiplier.
2. The multiplicand is added to the partial product upon encountering the first 0 (provided that there was a previous 1) in a string of 0's in the multiplier.
3. The partial product does not change when the multiplier bit is identical to the previous multiplier bit.

The algorithm works for positive or negative multipliers in 2's-complement representation. This is because a negative multiplier ends with a string of 1's and the last operation will be a subtraction of the appropriate weight. For example, a multiplier equal to -14 is represented in 2's complement as 110010 and is treated as -2^4 + 2^2 - 2^1 = -14.
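To make the register-level steps concrete, here is a small sketch in Java of the textbook A/Q/Q-1 form of Booth's algorithm; the register names, the 6-bit width and the sample operands are illustrative choices, not taken from the text.

public class BoothMultiplier {

    // Multiplies two 'bits'-wide 2's-complement numbers and returns the signed product.
    public static long multiply(int multiplicand, int multiplier, int bits) {
        long mask = (1L << bits) - 1;   // keep each register to 'bits' width
        long m = multiplicand & mask;   // M  : multiplicand
        long a = 0;                     // A  : accumulator (upper half of the partial product)
        long q = multiplier & mask;     // Q  : multiplier (lower half of the partial product)
        int q1 = 0;                     // Q-1: bit most recently shifted out of Q

        for (int i = 0; i < bits; i++) {
            int q0 = (int) (q & 1);
            if (q0 == 1 && q1 == 0) {          // first 1 of a string of 1's: A = A - M
                a = (a - m) & mask;
            } else if (q0 == 0 && q1 == 1) {   // first 0 after a string of 1's: A = A + M
                a = (a + m) & mask;
            }
            // Arithmetic right shift of the combined register A:Q:Q-1
            q1 = q0;
            q = (q >> 1) | ((a & 1) << (bits - 1));
            long signBit = a & (1L << (bits - 1));
            a = (a >> 1) | signBit;            // preserve the sign bit of A
        }
        // Sign-extend the 2*bits-wide product held in A:Q
        long product = (a << bits) | q;
        long signMask = 1L << (2 * bits - 1);
        return (product ^ signMask) - signMask;
    }

    public static void main(String[] args) {
        System.out.println(multiply(7, -14, 6));   // -98, using the multiplier -14 (110010) from the text
        System.out.println(multiply(5, 14, 6));    // 70, using the multiplier +14 (001110) from the text
    }
}

Running it with a multiplier of -14, as in the example above, gives the same result as ordinary multiplication, which is the point of the recoding rules.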

6- What is meant by Remote Method Invocation? Explain with a suitable example. Java Remote Method Invocation (RMI) allows you to write distributed objects using Java. This section describes the benefits of RMI and how you can connect it to existing and legacy systems as well as to components written in Java. RMI provides a simple and direct model for distributed computation with Java objects. These objects can be ordinary Java objects, or can be simple Java wrappers around an existing API. Java embraces the "Write Once, Run Anywhere" model; RMI extends the Java model to be run everywhere. RMI connects to existing and legacy systems using the standard Java Native Interface (JNI). RMI can also connect to existing relational databases using the standard JDBC package. The RMI/JNI and RMI/JDBC combinations let you use RMI to communicate today with existing servers in non-Java languages, and to expand your use of Java to those servers when it makes sense for you to do so; RMI lets you take full advantage of Java when you do expand your use. RMI is Java's remote procedure call (RPC) mechanism. RMI has several

advantages over traditional RPC systems because it is part of Java's object-oriented approach. Traditional RPC systems are language-neutral, and therefore are essentially least-common-denominator systems; they cannot provide functionality that is not available on all possible target platforms. The primary advantages of RMI are: object orientation, mobile behaviour, design patterns, safety and security, ease of writing and use, connection to existing/legacy systems, write once run anywhere, distributed garbage collection, parallel computing, and the Java distributed computing solution. An example follows.
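The question asks for an example; the following is a minimal RMI sketch (the interface, class and binding names are illustrative assumptions): a remote interface, a server object that implements it, and a client call through the RMI registry.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Every remote interface extends Remote, and every remote method declares RemoteException.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

public class GreeterServer extends UnicastRemoteObject implements Greeter {
    protected GreeterServer() throws RemoteException {
        super();
    }

    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);   // default RMI registry port
        registry.rebind("Greeter", new GreeterServer());           // publish the remote object
        System.out.println("Greeter bound in the registry.");
    }
}

A client in another JVM would then look the object up and invoke the method as if it were local:

Registry registry = LocateRegistry.getRegistry("localhost", 1099);
Greeter greeter = (Greeter) registry.lookup("Greeter");
System.out.println(greeter.greet("world"));    // prints "Hello, world"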

BT0053 Java Programming (Book ID: B0831) Assignment Set 1

1- What do you mean by Java Virtual Machine? A Java Virtual Machine is a piece of software that is implemented on non-virtual hardware and on standard operating systems. A JVM provides an environment in which Java bytecode can be executed, enabling such features as automated exception handling, which provides "root-cause" debugging information for every software error (exception), independent of the source code. A JVM is distributed along with a set of standard class libraries that implement the Java application programming interface (API). Appropriate APIs bundled together form the Java Runtime Environment (JRE). JVMs are available for many hardware and software platforms. The use of the same bytecode for all JVMs on all platforms allows Java to be described as a "compile once, run anywhere" programming language, as opposed to "write once, compile anywhere", which describes cross-platform compiled languages. Thus, the JVM is a crucial component of the Java platform. Java bytecode is an intermediate language which is typically compiled from Java, but it can also be compiled from other programming languages. For example, Ada source code can be compiled to Java bytecode and executed on a JVM.

2- Write a simple Java program to display a string message and explain the steps of compilation and execution in the Java environment.

class test {
    public void show() {
        System.out.println("Hello world");
    }

    public static void main(String arg[]) {
        test T = new test();
        T.show();
    }
}

1. Open any text editor, type the above code and save the file with the extension .java (here, test.java).
2. Compile it with javac test.java at the command prompt; the compiler converts the Java program into an intermediate language representation called bytecode (test.class).
3. Run it with java test at the command prompt; control then passes to the JVM (Java Virtual Machine), which interprets the bytecode and runs the program.

3- Write a Java program to code the ApplicantCollection class, which stores and display the personal details of three applicants.

import java.io.*; import java.util.*;

public class QueueImplement{

LinkedList<Integer> list; String str; int num; public static void main(String[] args){

QueueImplement q = new QueueImplement();

} public QueueImplement(){

try{

list = new LinkedList<Integer>(); InputStreamReader ir = new InputStreamReader(System.in); BufferedReader bf = new BufferedReader(ir); System.out.println("Enter number of elements : "); str = bf.readLine(); if((num = Integer.parseInt(str)) == 0){ System.out.println("You have entered either zero/null."); System.exit(0); } else{ System.out.println("Enter elements : "); for(int i = 0; i < num; i++){ str = bf.readLine(); int n = Integer.parseInt(str); list.add(n); }

} System.out.println("First element :" + list.removeFirst()); System.out.println("Last element :" + list.removeLast()); System.out.println("Rest elements in the list :");

while(!list.isEmpty()){ System.out.print(list.remove() + "\t"); } } catch(IOException e){ System.out.println(e.getMessage() + " is not a legal entry."); System.exit(0); } }
}
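For the ApplicantCollection class named in the question, a minimal sketch (the field names, the details stored and the sample data are assumptions, not taken from the text) could look like this:

public class ApplicantCollection {

    static class Applicant {
        String name;
        int age;
        String qualification;

        Applicant(String name, int age, String qualification) {
            this.name = name;
            this.age = age;
            this.qualification = qualification;
        }

        void display() {
            System.out.println(name + ", " + age + ", " + qualification);
        }
    }

    private final Applicant[] applicants = new Applicant[3];   // stores three applicants
    private int count = 0;

    void add(Applicant a) {
        if (count < applicants.length) {
            applicants[count++] = a;
        }
    }

    void displayAll() {
        for (int i = 0; i < count; i++) {
            applicants[i].display();
        }
    }

    public static void main(String[] args) {
        ApplicantCollection collection = new ApplicantCollection();
        collection.add(new Applicant("Asha", 24, "B.Sc.-IT"));
        collection.add(new Applicant("Ravi", 26, "MCA"));
        collection.add(new Applicant("Sameer", 23, "BCA"));
        collection.displayAll();
    }
}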

4- What are the different types of relationships? Kind-Of relationship, Is-A relationship, Part-Of relationship, Has-A relationship.
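As a small illustration of two of these, the sketch below models an "Is-A" relationship with inheritance and a "Has-A" relationship with composition; the class names are illustrative.

class Engine { }

class Vehicle { }

class Car extends Vehicle {                 // Car IS-A Vehicle (inheritance)
    private Engine engine = new Engine();   // Car HAS-A Engine (composition)
}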

5- Define Access Specifier. Explain the following access specifiers with an example for each: a. Public b. Private c. Protected

An access specifier defines from where a class member can be accessed.
Public: data members and methods are accessible inside as well as outside the program, i.e. within the package or outside the package.
Private: data members and methods are accessible only within the class.
Protected: data members and methods are accessible outside the package only through inheritance. An example of each follows.
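A minimal sketch covering the three specifiers (class and member names are illustrative):

public class Account {                    // public class: usable from any package
    public String owner;                  // public: accessible everywhere
    private double balance;               // private: accessible only inside Account
    protected String branch;              // protected: accessible to subclasses, even from other packages

    public double getBalance() {          // a public method exposes the private field in a controlled way
        return balance;
    }
}

class Savings extends Account {
    void show() {
        System.out.println(owner);        // allowed: owner is public
        System.out.println(branch);       // allowed: branch is protected and reached through inheritance
        // System.out.println(balance);   // would not compile: balance is private to Account
    }
}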

6- What are the different methods available under the BufferedInputStream and BufferedOutputStream classes?

Methods of the BufferedInputStream class:
- int available(): Returns the number of bytes that can be read from this input stream without blocking.
- void close(): Closes this input stream and releases any system resources associated with the stream.
- void mark(int readlimit): See the general contract of the mark method of InputStream.
- boolean markSupported(): Tests if this input stream supports the mark and reset methods.
- int read(): See the general contract of the read method of InputStream.
- int read(byte[] b, int off, int len): Reads bytes from this byte-input stream into the specified byte array, starting at the given offset.
- void reset(): See the general contract of the reset method of InputStream.
- long skip(long n): See the general contract of the skip method of InputStream.

Methods of the BufferedOutputStream class:
- void flush(): Flushes this buffered output stream.
- void write(byte[] b, int off, int len): Writes len bytes from the specified byte array starting at offset off to this buffered output stream.
- void write(int b): Writes the specified byte to this buffered output stream.
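As a short usage sketch, the following copies a file byte by byte through both classes; the file names are assumptions.

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class BufferedCopy {
    public static void main(String[] args) throws IOException {
        try (BufferedInputStream in = new BufferedInputStream(new FileInputStream("source.txt"));
             BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("copy.txt"))) {
            int b;
            while ((b = in.read()) != -1) {   // read() returns -1 at the end of the stream
                out.write(b);                 // write(int b) writes one byte into the buffer
            }
            out.flush();                      // flush() pushes any buffered bytes to the file
        }                                     // close() releases the underlying streams
    }
}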

BT0053 Java Programming (Book ID: B0831) Assignment Set 2

1- Define the concept of Java Bytecode and its importance. Bytecode is a highly optimized set of instructions designed to be executed by the Java run-time system, which is called the Java Virtual Machine (JVM). In its standard form, the JVM is an interpreter for bytecode. Translating a Java program into bytecode makes it much easier to run the program in a wide variety of environments. The reason is straightforward: only the JVM needs to be implemented for each platform. Once the run-time package exists for a given system, any Java program can run on it. Although the JVM differs from platform to platform, all of them interpret the same Java bytecode. Interpreting bytecode is the easiest way to create truly portable programs. The fact that a Java program is interpreted also helps to make it secure.

2- How will you compile a Java program? A compiler converts the Java program into an intermediate language representation called bytecode, which is platform independent. A Java file has the extension .java. Suppose I have created a Java file named Dharmendra.java. When this file is compiled with javac Dharmendra.java, we get a file called Dharmendra.class. The program, now compiled into bytecode (the .class file), runs on the Java Virtual Machine, which can interpret and run it on any operating system. This makes Java programs platform-independent.

3- Write a program to perform the basic arithmetic operations: a) Addition b) Subtraction c) Multiplication d) Division. Use 4 different methods for the above and invoke them as necessary from the main() method.

Ans:

import java.io.*;
import java.util.*;

class Arithmetic {
    void add(int a, int b) {
        System.out.println("Sum of " + a + " And " + b + " is " + (a + b));
    }
    void sub(int a, int b) {
        System.out.println("Subtraction of " + a + " And " + b + " is " + (a - b));
    }
    void mul(int a, int b) {
        System.out.println("Multiplication of " + a + " And " + b + " is " + (a * b));
    }
    void div(int a, int b) {
        System.out.println("Division of " + a + " And " + b + " is " + (a / b));
    }
}

class Main {
    public static void main(String arg[]) throws IOException {
        InputStreamReader byteIn = new InputStreamReader(System.in);
        BufferedReader kbin = new BufferedReader(byteIn);
        System.out.println("Enter First Number");
        int n1 = Integer.parseInt(kbin.readLine());
        System.out.println("Enter Second Number");
        int n2 = Integer.parseInt(kbin.readLine());
        Arithmetic a = new Arithmetic();
        System.out.println("1 For Add");
        System.out.println("2 For Subtraction");
        System.out.println("3 For Multiplication");
        System.out.println("4 For Division");
        System.out.println("Enter your choice");
        int ch = Integer.parseInt(kbin.readLine());
        switch (ch) {
            case 1:
                a.add(n1, n2);
                break;
            case 2:
                a.sub(n1, n2);
                break;
            case 3:
                a.mul(n1, n2);
                break;
            case 4:
                if (n1 < n2)
                    System.out.println("Numerator is smaller than denominator");
                else
                    a.div(n1, n2);
                break;
            default:
                System.out.println("This is an invalid choice");
        }
    }
}

4- Explain the concept of interfaces in Java with a suitable example for the same. With the help of an interface you can fully abstract a class from its implementation; that is, using an interface, you can specify what a class must do but not how it does it. Interfaces are syntactically similar to classes, but they lack instance variables, and their methods are declared without any body. By providing the interface keyword, Java allows you to fully utilize the "one interface, multiple methods" aspect of polymorphism. A suitable example is sketched below.
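A minimal sketch (the interface name, method and implementing classes are illustrative assumptions):

// One interface, multiple implementations: the caller depends only on Shape.
interface Shape {
    double area();                       // declared without a body
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Rectangle implements Shape {
    private final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    public double area() { return w * h; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());   // same call, different behaviour (polymorphism)
        }
    }
}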

5- What are the different exception handling techniques available in Java? When an unexpected error occurs in a method, Java creates an object of the appropriate exception class. After creating the exception object, Java passes it to the program by an action called throwing an exception. The exception object contains information about the type of error and the state of the program when the exception occurred; Java then finds an exception handler to process the exception. The following keywords are used in exception handling in Java: a) try b) catch c) finally.

The try block: The statements that may throw an exception are placed in the try block. The following skeletal code illustrates its use:

try {
    // statements that may cause an exception
}

The try block governs the statements that are enclosed within it and defines the scope of the exception handlers associated with it. In other words, if an exception occurs within the try block, the appropriate exception handler that is associated with the try block handles the exception. A try block must have at least one catch block that follows it immediately. try statements can be nested; that is, a try statement can be inside the block of another try. Each time a try statement is entered, the context of that exception is pushed on the stack. If an inner try statement does not have a catch handler for a particular exception, the stack is unwound and the next try statement's catch handlers are inspected for a match. Example:

class NestTry {
    public static void main(String args[]) {
        try {
            int a = args.length;
            int b = 42 / a;                     // throws ArithmeticException when no arguments are given
            System.out.println("a = " + a);
            try {
                if (a == 1) a = a / (a - a);    // division by zero
                if (a == 2) {
                    int c[] = { 1 };
                    c[42] = 99;                 // out-of-bounds access
                }
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Array index out of bounds: " + e);
            }
        } catch (ArithmeticException e) {
            System.out.println("Divide by 0: " + e);
        }
    }
}

6- Write a simple GUI based program to prepare a similar interfaces as shown below:

Ans: import javax.swing.*; import java.awt.*; public class FrameMaker extends JFrame {JPanel FirstPanel,LastPanel,MainPanel; JTextField Num1,Num2,Result; JButton Badd,Bsub,Bmul,Bdiv,Bexit; JLabel Lnum,Lnum1,Rslt; Color c; public FrameMaker(String x) {setTitle(x); Lnum=new JLabel("Enter First Number",JLabel.CENTER); Lnum1=new JLabel("Enter Second Number",JLabel.CENTER); Rslt=new JLabel("The Result",JLabel.CENTER); Num1=new JTextField(20); Num2=new JTextField(20); Result=new JTextField(20); FirstPanel=new JPanel(); LastPanel=new JPanel(); FirstPanel.setLayout(new GridLayout(5,2,40,40)); LastPanel.setLayout(new GridLayout(7,1,40,20)); MainPanel=new JPanel(new BorderLayout()); MainPanel.add(FirstPanel,BorderLayout.CENTER); MainPanel.add(LastPanel,BorderLayout.LINE_END);

setContentPane(MainPanel); FirstPanel.add(new JLabel(""));FirstPanel.add(new JLabel("")); FirstPanel.add(Lnum);FirstPanel.add(Num1);FirstPanel.add(Lnum1); FirstPanel.add(Num2);FirstPanel.add(Rslt);FirstPanel.add(Result); FirstPanel.add(new JLabel(""));FirstPanel.add(new JLabel("")); Badd=new JButton("Add");Bsub=new JButton("Sub"); Bmul=new JButton("Multiply");Bdiv=new JButton("Divide"); Bexit=new JButton("Exit"); LastPanel.add(new JLabel("")); LastPanel.add(Badd);LastPanel.add(Bsub); LastPanel.add(Bmul);LastPanel.add(Bdiv);LastPanel.add(Bexit); LastPanel.add(new JLabel("")); Dimension d=Toolkit.getDefaultToolkit().getScreenSize(); setBounds((d.width-700)/2,(d.height-500)/2,700,500); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setVisible(true); c=new Color(120,120,240); }public static void main(String[] s) {FrameMaker ss=new FrameMaker("Calculator"); } public Insets getInsets() { setBackground(c); return new Insets(100,80,80,80); } }

BT0054 Basics of E-Commerce (Book ID: B0035 & B0045) Assignment Set 1

1- What are the difficulties and/or issues of e-Commerce? Explain. E-commerce has a scope as wide as an ocean, and there is the implementation hurdle. Its difficulties are:

High-profile failures: failure in the fulfilment system, failure in customer service, failure in technology and infrastructure, failure in legal compliance, failure in fraud control.

Hidden complexities: the systems and services that can fail include the web server, database server, Internet service provider (ISP), local loop (connection between the web server and the ISP), commerce software, credit card gateway, credit card processor, and fulfilment system.

Establishing trust: the steps involved in a simple retail transaction between buyer and seller are: information sharing, establishing trust, negotiating the deal, payment and settlement, procurement and delivery, and after-sales service.

Customer's view: does the merchant have the authority to sell the product? Is he charging the right price?

Merchant's view: does the customer have the right to buy? Is there any social restriction on the item asked for?

2- What is Firewall? Explain how it works as a packet filters. A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices which is configured to permit or deny computer applications based upon a set of rules and other criteria. Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria. There are several types of firewall techniques: 1. Packet filter: Packet filtering inspects each packet passing through the network and accepts or

rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. It is susceptible to IP spoofing. 2. Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose performance degradation.

3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking. 4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.

3- What are the methods used for encryption? Two methods are used for encryption:
- Secret-key or symmetric encryption
- Public-key or asymmetric encryption

Secret key: In this type of encryption scheme, both the sender and the recipient possess the same key to encrypt and decrypt the data.

Drawbacks: Both parties must agree upon a shared secret key. If there are n correspondents, one has to keep track of n different secret keys. If the same key is used by more than one correspondent, common key holders can read each other's mail. Symmetric encryption schemes are also subject to authenticity problems: because the sender and recipient have the same secret key, the identity of the originator or recipient cannot be proved; both can encrypt or decrypt the message.
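As a minimal sketch of the shared-key idea described above, using the standard Java cryptography API (the algorithm choice, key size and message are illustrative assumptions):

import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {
    public static void main(String[] args) throws Exception {
        // One shared key does both jobs, so the sender and the recipient must both hold it.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("Payment details".getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}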

Public key or Asymmetric encryption: Public-key cryptography is a cryptographic approach which involves the use of asymmetric key algorithms instead of or in addition to symmetric key algorithms. Unlike symmetric key algorithms, it does not require a secure initial exchange of one or more secret keys to both sender and receiver. The asymmetric key algorithms are used to create a mathematically related key pair: a secret private key and a published public key. Use of these keys allows protection of the authenticity of a message by creating a digital signature of a message using the private key, which can be verified using the public key. It also allows protection of the confidentiality and integrity of a message, by public key

encryption, encrypting the message using the public key, which can only be decrypted using the private key.
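A corresponding minimal sketch of public-key encryption with the standard Java security API (key size and message are illustrative assumptions); anyone may encrypt with the public key, but only the holder of the private key can decrypt:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class AsymmetricDemo {
    public static void main(String[] args) throws Exception {
        // Generate a mathematically related key pair: a published public key and a secret private key.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();

        // Encrypt with the public key...
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = cipher.doFinal("Payment details".getBytes(StandardCharsets.UTF_8));

        // ...and decrypt with the matching private key.
        cipher.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8));
    }
}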

Public key cryptography is a fundamental and widely used technology around the world. It is the approach which is employed by many cryptographic algorithms and cryptosystems. It underlies such Internet standards as Transport Layer Security (TLS) (successor to SSL), PGP, and GPG.

The two main branches of public key cryptography are:

Public-key encryption: a message encrypted with a recipient's public key cannot be decrypted by anyone except a possessor of the matching private key; presumably, this will be the owner of that key and the person associated with the public key used. This is used for confidentiality.

Digital signatures: a message signed with a sender's private key can be verified by anyone who has access to the sender's public key, thereby proving that the sender had access to the private key (and therefore is likely to be the person associated with the public key used) and that the signed part of the message has not been tampered with. On the question of authenticity, see also message digest.

An analogy to public-key encryption is that of a locked mailbox with a mail slot. The mail slot is exposed and accessible to the public; its location (the street address) is in essence the public key. Anyone knowing the street address can go to the door and drop a written message through the slot; however, only the person who possesses the key can open the mailbox and read the message.

An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The message can be opened by anyone, but the presence of the seal authenticates the sender.

A central problem for use of public-key cryptography is confidence (ideally proof) that a public key is correct, belongs to the person or entity claimed (i.e., is 'authentic'), and has not been tampered with or replaced by a malicious third party. The usual approach to this problem is to use a public-key infrastructure (PKI), in which one or more third parties, known as certificate authorities, certify ownership of key pairs. Another approach, used by PGP, is the "web of trust" method to ensure authenticity of key pairs.

4- Explain the role of intermediaries in e-commerce. The significance of using intermediaries in e-commerce transactions cannot be overemphasized; they assist in the following areas:
- Businesses are able to concentrate their resources on their core activities instead of uncoordinated and multiple searches for other businesses to transact with.
- The supply chain is shortened, reducing the weeks or months involved in concluding a transaction; businesses are able to do more in a short period.
- Huge reduction in procurement and sales costs.
- The use of intermediaries improves the online experience of all the parties to a transaction through regularly updated means of collaboration.
- The use of intermediaries helps in organizing suppliers and purchasers for easy and timely occurrence of transactions.

5- Write the importance of EDI. EDI stands for Electronic Data Interchange. EDI comprises many types of messages which, when transmitted between two parties, are designed to substitute for other forms of data transfer. In the olden days, a hard copy of the manifest used to be handed over to the captain of the ship, and a hard copy of the same used to be couriered or posted to the relevant discharge ports. Then, as technology improved, the manifest was sent to the relevant discharge ports by email. The manifests thus received by the discharge port agents were manually captured into their respective computer systems.

With the advent of EDI, the above can now be avoided and precious time can be saved. When sent as an EDI message, the data can be instantly downloaded into the recipient's system, thereby avoiding manual capture, which in turn avoids any typographical errors and also saves a lot of time. EDI messaging is also used to send data to Customs (manifest, bill of entry), the port (container stowage planning, cargo dues, load/discharge list, container moves) and principals (load/discharge list, container moves, bookings). Usually there is software that helps to convert data into an EDI format, and this is then sent by email to the recipient in the same way as a normal message. At the other end, there are systems that can automatically receive these messages and transfer them back into data in their system.

6- Explain the following security attacks a) IP Spoofing

b) Denial of Service Attack.

a) IP Spoofing: IP spoofing hides your IP address by creating IP packets that contain bogus IP addresses in an effort to impersonate other connections and hide your identity when you send information. IP spoofing is a common method used by spammers and scammers to mislead others about the origin of the information they send.

Use of IP spoofing: IP spoofing is used to commit criminal activity online and to breach network security. Hackers use IP spoofing so they do not get caught spamming and to perpetrate denial-of-service attacks. These are attacks that involve massive amounts of information being sent to computers over a network in an effort to crash the entire network. The hacker does not get caught because the origin of the messages cannot be determined, due to the bogus IP address. IP spoofing is also used by hackers to breach network security measures by using a bogus IP address that mirrors one of the addresses on the network. This eliminates the need for the hacker to provide a user name and password to log onto the network.

b) Denial of Service Attack- A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts of a person or people to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers. The term is generally used with regards to computer networks, but is not limited to this field, for example, it is also used in reference to CPU resource management. One common method of attack involves saturating the target (victim) machine with external communications requests, such that it cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable. In general terms, DoS attacks are implemented by either forcing the targeted computer(s) to reset, or consuming its resources so that it can no longer provide its intended service or obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately. Denial-of-service attacks are considered violations of the IAB's Internet proper use policy, and also violate the acceptable use policies of virtually all Internet Service Providers. They also commonly constitute violations of the laws of individual nations.

7- Explain the limitations and weaknesses of e-Commerce security measures.

a. Security flaws in software and hardware: Software is complex. As the size of a piece of software grows, it becomes increasingly difficult to test it all. Complex code will probably have unknown loopholes that an attacker can exploit. These loopholes may be convoluted, but that will not prevent an attacker from trying to exploit them. Some systems, particularly commerce systems, rely on tamper-resistant hardware for security: smart cards, electronic wallets, dongles, etc. These systems may assume that public terminals never fall into the wrong hands, or that those "wrong hands" lack the expertise and equipment to attack the hardware. While hardware security is an important component in many secure systems, it is hard to trust systems

whose security rests solely on assumptions about tamper resistance. One rarely sees tamper-resistance techniques that work, and the tools for defeating tamper resistance are getting better all the time. When designers build systems that use tamper resistance, they should design complementary security mechanisms just in case the tamper resistance fails.

The "timing attack" made a big press splash in 1995: RSA private keys could be recovered by measuring the relative times cryptographic operations took. The attack has been successfully implemented against smart cards and other security tokens, and against electronic commerce servers across the Internet. Counterpane Systems and others have generalized these methods to include attacks on a system by measuring power consumption, radiation emissions, and other "side channels", and have implemented them against a variety of public-key and symmetric algorithms in "secure" tokens. Related research has looked at fault analysis: deliberately introducing faults into cryptographic processors in order to determine the secret keys. The effects of this limitation or attack can be devastating.

b. Firewall and network configurations: Network security is designed to address the problems identified with host security. It focuses on controlling network access to hosts and services rather than on securing the hosts themselves. Network security approaches include building firewalls to protect trusted networks from untrusted networks, utilizing strong authentication techniques, and using encryption to protect the confidentiality and integrity of data as it passes across the network. A firewall is a network device that allows only certain authorized operations or programs to be run between internal networks and the Internet. A firewall configuration can be very simple or extremely complex, depending on the particular requirements of the enterprise. Many enterprises today are connecting their private, internal networks to the Internet to provide access to external resources on the Internet. Although this is an important capability, it is one that should be well planned to avoid possible security risks from exposing the internal network to users outside the enterprise.

c. Human elements and company threats/vulnerabilities:

Security is worthless if somebody can steal your password. Focusing on strong encryption while ignoring the importance of passwords is like building a steel-lined vault but taping the combination to the outside of the door. Since cracking encryption of any sort, even the relatively weak Data Encryption Standard (DES) algorithm, is likely to consume more time, passwords will always remain the weak link in any encryption method. The more secret and hidden the password, the more likely it is to block access only to legitimate users.

A password is only good if it is chosen carefully. Too often users choose obvious passwords like middle names, their birthday, their office phone number, or the name of a favourite pet. These passwords can be guessed at, and WWW servers, unlike the Unix login program, do not prevent an attacker from trying to break in by brute force. You should also be alert to the possibility of remote users sharing their user names and passwords. It is more secure to use a combination of IP address restriction and a password than to use either of them alone. Many systems break because they rely on user-generated passwords. Left to themselves, people don't choose strong passwords. If they're forced to use strong passwords, they can't remember them. If the password becomes a key, it's usually much easier, and faster, to guess the password than it is to brute-force the key; this is one of the ways a security system can fail. Some user interfaces make the problem even worse: limiting the passwords to eight characters, converting everything to lower case, etc. Even pass phrases can be weak: searching through 40-character phrases is often much easier than searching through 64-bit random keys.

Even when a system is secure if used properly, its users can subvert its security by accident, especially if the system isn't designed very well. The classic example of this is the user who gives his password to his co-workers so they can fix some problem when he's out of the office. Users may not report missing smart cards for a few days, in case they are just misplaced. They may not carefully check the name on a digital certificate. They may reuse their secure passwords on other, insecure systems. They may not change their software's default weak security settings. Good system design can't fix all these social problems, but it can help avoid many of them.

d. Weaknesses in cryptographic designs:

A cryptographic system can only be as strong as the encryption algorithms, digital signature algorithms, one-way hash functions, and message authentication codes it relies on. Break any of them, and you've broken the system. And just as it's possible to build a weak structure using strong materials, it's possible to build a weak cryptographic system using strong algorithms and protocols. One often finds systems that "void the warranty" of their cryptography by not using it properly: failing to check the size of values, reusing random parameters that should never be reused, and so on. Encryption algorithms don't necessarily provide data integrity. Key exchange protocols don't necessarily ensure that both parties receive the same key. A recent research project found that some (not all) systems using related cryptographic keys could be broken, even though each individual key was secure. Security is a lot more than plugging in an algorithm and expecting the system to work. Even good engineers, well-known companies, and lots of effort are no guarantee of robust implementation; the U.S. digital cellular encryption algorithm illustrated that.

Random-number generators are another place where cryptographic systems often break. Good random-number generators are hard to design, because their security often depends on the particulars of the hardware and software. Many products use bad ones. The cryptography may be strong, but if the random-number generator produces weak keys, the system is much easier to break. Other products use secure random-number generators, but they don't use enough randomness to make the cryptography secure. A specific random-number generator may be secure for one purpose but insecure for another; generalizing security analyses is dangerous. There are also attacks on the interactions between individually secure cryptographic protocols: given a secure protocol, it is possible to build another secure protocol that will break the first if both are used with the same keys on the same device.

Sometimes, products even get the cryptography wrong. Some rely on proprietary encryption algorithms; invariably, these are very weak. Keeping the algorithm secret isn't much of an impediment to analysis; it only takes a couple of days to reverse-engineer the cryptographic algorithm from executable code. The S/MIME 2 electronic-mail standard took a relatively strong design and implemented it with a weak cryptographic algorithm. The system for DVD encryption took a weak algorithm and made it weaker.

Many other cryptographic weaknesses exist: implementations that repeat "unique" random values, digital signature algorithms that don't properly verify parameters, hash functions altered to defeat the very properties they're being used for, cryptographic protocols used in ways that were not intended by the protocols' designers, and protocols "optimized" in seemingly trivial ways that completely break their security.

e. Weaknesses and Limitations in Implementations

Many systems fail because of mistakes in implementation. Some systems don't ensure that plain text is destroyed after it's encrypted. Other systems use temporary files to protect against data loss during a system crash, or virtual memory to increase the available memory; these features can accidentally leave plain text lying around on the hard drive. In extreme cases, the operating system can leave the keys on the hard drive. One product used a special window for password input; the password remained in the window's memory even after the window was closed. It didn't matter how good that product's cryptography was; it was broken by the user interface.

Other systems fall to more subtle problems. Sometimes the same data is encrypted with two different keys, one strong and one weak. Other systems use master keys and then one-time session keys; these can be broken using partial information about the different keys. Systems that use inadequate protection mechanisms for the master keys, mistakenly relying on the security of the session keys, are also vulnerable. It's vital to secure all possible ways to learn a key, not just the most obvious ones.

E-commerce systems often make implementation trade-offs to enhance usability; subtle vulnerabilities appear when designers don't think through the security implications of those trade-offs. Doing account reconciliation only once per day might be easier, but what kind of damage can an attacker do in a few hours? Can audit mechanisms be flooded to hide the identity of an attacker? Some systems record compromised keys on "hot lists"; attacks against these hot lists can be very fruitful. Other systems can be broken through replay attacks: reusing old messages, or parts of old messages, to fool various parties. Systems that allow old keys to be recovered in an emergency provide another area to attack. Good cryptographic systems are designed so that keys exist for as short a period of time as possible; key recovery often negates any security benefit by forcing keys to exist long after they are useful. Furthermore, key recovery databases become sources of vulnerability themselves, and have to be designed and implemented securely. Flaws in the key recovery database can allow criminals to commit fraud and then frame legitimate users.

f. Limitations Against Trust Models

Many of the interesting limitations are against the underlying trust model of the system: who or what in the system is trusted, in what way, and to what extent. Simple systems, like hard-drive encryption programs or telephone privacy products, have simple trust models. Complex systems, like electronic-commerce systems or multi-user e-mail security programs, have complex (and subtle) trust models. An e-mail program might use uncrackable cryptography for the messages, but unless the keys are certified by a trusted source (and unless that certification can be verified), the system is still vulnerable. Some commerce systems can be broken by a merchant and a customer colluding, or by two different customers colluding. Other systems make implicit assumptions about security infrastructures, but don't bother to check that those assumptions are actually true. If the trust model isn't documented, then an engineer can unknowingly change it in product development and compromise security.

Many software systems make poor trust assumptions about the computers they run on; they assume the desktop is secure. Trojan horse software that sniffs passwords, reads plain text, or otherwise circumvents security measures can often break these programs. Systems working across computer networks have to worry about security flaws resulting from the network protocols. Computers that are attached to the Internet can also be vulnerable. Again, the cryptography may be irrelevant if it can be circumvented through network insecurity. And no software is secure against reverse engineering.

Often, a system will be designed with one trust model in mind and implemented with another. Decisions made in the design process might be completely ignored when it comes time to sell it to customers. A system that is secure when the operators are trusted and the computers are completely under the control of the company using the system may not be secure when the operators are temps hired at just over minimum wage and the computers are untrusted. Good trust models work even if some of the trust assumptions turn out to be wrong.

g. Weaknesses in Failure Recovery

Strong systems are designed to keep small security breaks from becoming big ones. Recovering the key to one file should not allow the attacker to read every file on the hard drive. A hacker who reverse-engineers a smart card should only learn the secrets in that smart card, not information that will help him break other smart cards in the system. In a multi-user system, knowing one person's secrets shouldn't compromise everyone else's.

Many systems "default to insecure mode". If the security feature doesn't work, most people just turn it off and finish their business. If the on-line credit card verification system is down, merchants will default to the less secure paper system. Similarly, it is sometimes possible to mount a "version rollback attack" against a system after it has been revised to fix a security problem: the need for backwards compatibility allows an attacker to force the protocol into an older, insecure version.

Other systems have no ability to recover from disaster. If the security breaks, there's no way to fix it. For electronic commerce systems, which could have millions of users, this can be particularly damaging. Such systems should plan to respond to attacks and to upgrade security without having to shut the system down. Good system design considers what will happen when an attack occurs, and works out ways to contain the damage and recover from the attack.

8- Explain with an example how an ATM fraud takes place.
Many frauds are carried out with some inside knowledge or access, and ATM fraud turns out to be no exception. Banks in the English-speaking world dismiss about one percent of their staff every year for disciplinary reasons, and many of these sackings are for petty thefts in which ATMs can easily be involved.

In a recent case, a housewife from Hastings, England, had money stolen from her account by a bank clerk who issued an extra card for it. The bank's systems not only failed to prevent this, but also had the feature that whenever a cardholder got a statement from an ATM, the items on it would not subsequently appear on the full statements sent to the account address. This enabled the clerk to see to it that she did not get any statement showing the thefts he had made from her account.

Most thefts by staff show up as phantom withdrawals at ATMs in the victim's neighborhood. English banks maintain that a computer security problem would result in a random distribution of transactions around the country, and that since most disputed withdrawals happen near the customer's home or place of work, these must be due to cardholder negligence [BB]. Thus the pattern of complaints which arises from thefts by their own staff only tends to reinforce the banks' complacency about their systems.

9- What are the basic issues in secret key management? Explain in brief.
The data isn't secret unless the key is secret.
The more random your key, the harder it will be to guess.
Randomness really does not come easily, especially to computers.

10- Explain distributed Flat key management approach.
The main concerns with centralized approaches are the danger of implosion and the existence of a single point of failure. It is thus attractive to search for a distributed solution to the key management problem. This solution was found in completely distributing the key database of the Centralized Flat approach, such that all participants are created equal and nobody has complete knowledge. As in the Centralized Flat approach, each participant only holds keys matching his ID, and the collaboration of multiple participants is required to propagate changes to the whole group. There is no dedicated Group Manager; instead, every participant may perform admission control operations. While some participants will be distinguished as Key Holders, performing some authoritative function, this function a) is only needed to improve performance on version changes, b) is assigned naturally to the creator of the newest version of the key, and c) can be taken over at any time by any other participant knowing the key, if that node should seem to have disappeared. The duties of a Key Holder are to heartbeat the key and to perform key translations; they are detailed in the description of the operations below. Since there is no Group Manager knowing about the IDs in use, the IDs need to be generated uniquely in a distributed way. Apparent solutions would be to use the participant's network address directly or to apply a collision-free hash function. This scheme is the most resilient to network or node failures because of its inherent self-healing capability, but it is also more vulnerable to inside attacks than the others. It offers the same security against break-in attacks as the schemes discussed above; thanks to its higher resilience to failures, it can be considered stronger against active attacks.
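As a hedged, illustrative sketch of the distributed ID generation just mentioned (the function name, the 32-bit width, and the use of SHA-256 are assumptions made for this example, not part of the scheme's specification):

const crypto = require('crypto');

// Derive a w-bit participant ID by hashing the participant's network address
// with a collision-resistant hash and keeping the first w bits (w <= 32 here).
function participantId(address, w = 32) {
  const digest = crypto.createHash('sha256').update(address).digest();
  return digest.readUInt32BE(0) >>> (32 - w);
}

console.log(participantId('192.0.2.17:4711'));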

BT0054 Basics of E-Commerce (Book ID: B0035 & B0045) Assignment Set 2
1- Explain two major classifications (method) of encryption process.
There are two types of crypto systems: secret key and public key. In secret key cryptography, also referred to as symmetric cryptography, the same key is used for both encryption and decryption. The most popular secret key crypto system in use today is known as DES, the Data Encryption Standard. In public key cryptography, each user has a public key and a private key. The public key is made public while the private key remains secret. Encryption is performed with the public key while decryption is done with the private key. The RSA public key crypto system is the most popular form of public key cryptography. RSA stands for Rivest, Shamir and Adleman, the inventors of the RSA cryptosystem.

Both the sender and receiver have to know what set of rules (called a cipher) was used to transform the original information into its coded form, the cipher text. A simple cipher might be to add an arbitrary number to every character in the message. Basically, encryption has two parts:
Algorithm: a cryptographic algorithm is a mathematical function.
Key: a string of digits.

A cryptographic algorithm combines the plain text or other intelligible information with a string of digits, called a key, to produce unintelligible cipher text. Some encryption algorithms, however, do not use a key.
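A minimal, hedged sketch of the two classifications using Node.js's built-in crypto module (AES-256-CBC stands in here for a symmetric cipher and RSA for a public key cipher; the algorithm names and key sizes are choices made for the example, not requirements of the text above):

const crypto = require('crypto');

// Secret key (symmetric): the same key is used to encrypt and to decrypt.
const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
const cipherText = Buffer.concat([cipher.update('a short example message', 'utf8'), cipher.final()]);
const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
console.log(Buffer.concat([decipher.update(cipherText), decipher.final()]).toString('utf8'));

// Public key (asymmetric): encrypt with the public key, decrypt with the private key.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });
const sealed = crypto.publicEncrypt(publicKey, Buffer.from('a short example message'));
console.log(crypto.privateDecrypt(privateKey, sealed).toString('utf8'));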

2- Discuss the Application Protocols.

3- Explain how internet payment process is different from traditional payment process
Internet payment is different from traditional payment. Traditional payment methods include cash, debit cards, traveller's cheques, credit cards, money orders, barter, personal cheques, bank drafts, tokens, etc. These modes of payment are used today by customers, while organizations have their own instruments, including purchase orders, lines of credit, etc. The requirements of a financial transaction include confidentiality, privacy, integrity and authentication for both forms of commerce. Established traditional payment schemes are designed to meet these requirements. The task of e-commerce is to provide electronic payment systems that meet all the same requirements, and yet payment on the Internet must look and feel like traditional payment to the user, even though the implementation (media) is totally different, so that users adapt easily. Methods for meeting all these requirements on the Internet are not yet in place.

4- Explain credit card payment schemes on Internet.
The parties in a credit card payment scheme on the Internet are the following.
Cardholder: The holder of the card used to make a purchase; the consumer.

Card-issuing bank: The financial institution or other organization that issued the credit card to the cardholder. This bank bills the consumer for repayment and bears the risk that the card is used fraudulently. American Express and Discover were previously the only card-issuing banks for their respective brands, but as of 2007, this is no longer the case. Cards issued by banks to cardholders in a different country are known as offshore credit cards.

Merchant: The individual or business accepting credit card payments for products or services sold to the cardholder.

Acquiring bank: The financial institution accepting payment for the products or services on behalf of the merchant.

Independent sales organization: Resellers (to merchants) of the services of the acquiring bank.

Merchant account: This could refer to the acquiring bank or the independent sales organization, but in general is the organization that the merchant deals with.

Credit Card association: An association of card-issuing banks such as Visa, MasterCard, Discover, American Express, etc. that set transaction terms for merchants, card-issuing banks, and acquiring banks.

Transaction network: The system that implements the mechanics of the electronic transactions. May be operated by an independent company, and one company may operate multiple networks.

Affinity partner: Some institutions lend their names to an issuer to attract customers that have a strong relationship with that institution, and get paid a fee or a percentage of the balance for each card issued using their name. Examples of typical affinity partners are sports teams, universities, charities, professional organizations, and major retailers.

5- Explain internet v/s private nets.
Protocols are being developed to allow Internet users to reserve bandwidth for applications and for prioritized traffic. For example, the Resource Reservation Protocol (RSVP) has been developed to help reserve bandwidth for multimedia transmissions such as streaming audio, video and video conferencing; the same protocol can be used to prioritize e-mail for EDI messages or FTP for file transfers. Routers supporting RSVP are only now becoming available, and it will be some time before a great deal of the Internet routinely supports RSVP. ISPs are also starting to offer their own end-to-end networks across the United States independently of the Internet's main backbone, but still linked to it as needed. Aimed at businesses, these networks can be used to speed along some Internet traffic. These private commercial networks also make it easier for companies to form virtual private networks (VPNs) with added security; replacing private corporate networks in this way can be less costly than leased-line networks, even with the additional rates incurred. Private networks offer the further advantage that they link to the Internet, allowing communication with other partners and customers without requiring special set-ups.

6- Discuss the properties of Good Crypto Algorithms.
Preferred algorithms generally have the following properties to some degree.

NO RELIANCE ON ALGORITHM SECRECY
While it may, in some cases, increase the attacker's work factor to keep as much secret as possible, keeping a crypto algorithm secret can be a double-edged sword. If we don't know how the algorithm works, we can't tell if it has some easy-to-exploit flaw. Good crypto algorithms rely exclusively on keys to protect the data. Revealing the algorithm should not significantly improve an attacker's likelihood of success.

EXPLICITLY DESIGNED FOR ENCRYPTION
The algorithm should have been designed in the first place to resist cryptanalysis. This is not always true of algorithms used for encryption. For example, some products use simple random number generators to produce a Vernam cipher key stream, and simple notions of statistical randomness do not guarantee strength against cryptanalysis.

AVAILABLE FOR ANALYSIS
Ideally, the algorithm should have been published and subjected to scrutiny by the public cryptographic community. The longer mathematicians and cryptanalysts have had to look at the algorithm, the more likely they are to have found its weaknesses. DES has stood the test of time and is likely to be used for many years to come in some form or other.

SUBJECT TO ANALYSIS
Have recognized cryptanalysts published results regarding the algorithm's strength? Ideally, recognized experts should be openly discussing the algorithm, and other experts should review and publish analyses in refereed professional journals that vet the work. This almost never occurs except in cases where the algorithm itself has been published. It is also important to judge the experts rendering the opinion: are they within their scope of expertise?

NO PRACTICAL WEAKNESSES
The analysis performed should show that there are no serious weaknesses in the algorithm that an attacker can easily exploit. Custom-built algorithms embedded in commercial software tend to have serious weaknesses; if a commercial package claims to encrypt data and does not use a recognized algorithm, do not presume that it protects against any motivated attacker.

7- Explain how ATM encryption works.
Most ATMs operate using some variant of a system developed by IBM. This uses a secret key, called the PIN key, to derive the PIN from the account number by means of a published algorithm known as the Data Encryption Standard, or DES. The result of this operation is called the natural PIN; an offset can be added to it in order to give the PIN which the customer must enter. The offset has no real cryptographic function; it just enables customers to choose their own PIN. Here is an example of the process:

Account number:      88071458700155458715
PIN key:             FEFEFEFEFEFEFEFE
Result of DES:       A2CE126C69AEC82D
Result decimalized:  0224126269042823
Natural PIN:         0224
Offset:              6565
Customer PIN:        6789
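As a hedged sketch only (Node.js assumed, and it assumes the OpenSSL build still exposes the legacy 'des-ecb' cipher; the decimalization table and helper names are illustrative, not the exact IBM specification), the derivation above can be reproduced roughly as follows:

const crypto = require('crypto');

// Encrypt the (16 hex digit) validation data under the PIN key with single DES,
// decimalize the result, and keep the first four digits as the natural PIN.
function naturalPin(accountDigits, pinKeyHex) {
  const key = Buffer.from(pinKeyHex, 'hex');                     // 8-byte DES key
  const block = Buffer.from(accountDigits.slice(-16), 'hex');    // one 8-byte block
  const des = crypto.createCipheriv('des-ecb', key, null);
  des.setAutoPadding(false);
  const result = Buffer.concat([des.update(block), des.final()]).toString('hex').toUpperCase();
  const table = '0123456789012345';                              // hex digit -> decimal digit
  const decimalized = result.split('').map(c => table['0123456789ABCDEF'.indexOf(c)]).join('');
  return decimalized.slice(0, 4);
}

// The offset is added digit by digit modulo 10 to give the customer PIN.
function customerPin(natural, offset) {
  return natural.split('').map((d, i) => (Number(d) + Number(offset[i])) % 10).join('');
}

console.log(customerPin('0224', '6565'));   // "6789", matching the example above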

It is clear that the security of the system depends on keeping the PIN key absolutely secret. The usual strategy is to supply a terminal key to each ATM in the form of two printed components, which are carried to the branch by two separate officials, input at the ATM keyboard, and combined to form the key. The PIN key, encrypted under this terminal key, is then sent to the ATM by the bank's central computer.

These working keys in turn have to be protected, and the usual arrangement is that a bank will share a zone key with other banks or with a network switch, and use this to encrypt fresh working keys which are set up each morning. It may also send a fresh working key every day to each of its ATMs, by encrypting it under the ATM's terminal key.

Keeping keys secret is only part of the problem. They must also be available for use at all times by authorized processes. The PIN key is needed all the time to verify transactions, as are the current working keys; the terminal keys and zone keys are less critical, but are still used once a day to set up new working keys.
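A hedged sketch of the key-wrapping idea described above (AES-256-CBC is used purely as a stand-in block cipher and all names are illustrative; ATM networks of this era actually used DES variants):

const crypto = require('crypto');

const terminalKey = crypto.randomBytes(32);   // installed at the ATM from two printed components
const workingKey = crypto.randomBytes(32);    // fresh key generated for the day

// Send the working key encrypted ("wrapped") under a key the ATM already holds.
function wrapKey(keyToSend, wrappingKey) {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-cbc', wrappingKey, iv);
  return Buffer.concat([iv, cipher.update(keyToSend), cipher.final()]);
}

function unwrapKey(wrapped, wrappingKey) {
  const decipher = crypto.createDecipheriv('aes-256-cbc', wrappingKey, wrapped.subarray(0, 16));
  return Buffer.concat([decipher.update(wrapped.subarray(16)), decipher.final()]);
}

console.log(unwrapKey(wrapKey(workingKey, terminalKey), terminalKey).equals(workingKey));   // true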

8- Explain the methods used in Random Key generation.
A practical and secure crypto system needs keys that cannot be guessed. There should be no way for an outsider to predict what keys are being used, or even to guess approximately which keys might have been used. A good key generator will produce keys that cannot be guessed even if attackers know how the generator works.

Many procedures, called pseudorandom number generators (PRNGs), generate hard-to-predict sequences of numbers, but for true randomness you must seed these procedures with a truly random initial value. A good PRNG is not enough by itself to produce effective keys: the generation process must be seeded by a random number that is sufficiently hard to guess. We therefore need a random technique to generate a seed value from which a series of random numbers can be generated. In practice there are three computer-based approaches for producing truly random data:
1. Monitor hardware that generates random data.
2. Collect random data from user interaction.
3. Collect hard-to-predict data from inside the computer.

We will discuss only two of these methods here. Hardware-based random number generation is the best, though most costly, approach. The generator is usually an electronic circuit that is sensitive to some random physical event, like diode noise or cosmic ray bombardment, and converts the event into an unpredictable sequence of bits. However, the rarity of such circuits makes it expensive to add them to a system.

User interaction is a very good source of random data, though it can be inconvenient. People are notoriously bad at doing the same thing twice, and random data can be collected by tracking interactive human behavior. For example, the PGP e-mail package collects keystrokes from the user and measures the time between keystrokes to produce a random seed value.
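As a small, hedged illustration (Node.js assumed), the third approach, hard-to-predict data collected from inside the computer, is essentially what the operating system's entropy pool already provides, and it spares the programmer from hand-seeding a PRNG:

const crypto = require('crypto');

// Draw key material from the OS entropy source rather than a hand-seeded PRNG.
const seed = crypto.randomBytes(16);           // 128 unpredictable bits
console.log(seed.toString('hex'));             // usable as a key or as a PRNG seed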

9- Write a note on Centralized Flat Key Management scheme.
Instead of organizing the bits of the ID in a hierarchical, tree-based fashion and distributing the keys accordingly, they can also be assigned in a flat fashion. This has the advantage of greatly reducing database requirements, and it relieves the sender of the need to keep information about all participants. It is now possible to exclude participants without knowing whether they were in the group in the first place.

The table contains 2w keys, two keys for each bit, corresponding to the two values that bit can take. The key associated with bit b having value v is referred to as k(b,v), a bit key. While the keys in the table could be used to generate a tree-like keying structure, they can also be used independently of each other.

The result is very similar to the tree-based control, but the key space is much smaller: for an ID length of w bits, only 2w+1 keys are needed, independent of the actual number of participants. The number of participants is limited to 2^w, so a value of w = 32 is considered a good choice. To allow for the separation of participants residing on the same machine, the ID space can be extended to 48 bits, thus including port number information. For IPv6 and calculated IDs, a value of 128 should be chosen to avoid collisions. This still keeps the number of keys and the size of change messages small. Besides reducing the storage and communication needed, this approach has the advantage that nobody needs to keep track of who is currently a member, yet the group manager is still able to expel an unwanted participant.
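A hedged sketch of the bit-key idea (the function and table names are assumptions made for illustration): each participant holds only the w keys matching the bits of its own ID, so excluding it means replacing exactly those keys, which is what keeps change messages small.

// For a w-bit ID, table[b][v] is the key for bit b having value v (2*w keys in all).
// A participant holds exactly the w keys that match the bits of its own ID.
function keysForId(id, w, table) {
  const held = [];
  for (let b = 0; b < w; b++) {
    const v = (id >>> b) & 1;          // value of bit b in this participant's ID
    held.push(table[b][v]);
  }
  return held;                          // w keys out of the 2*w in the table
}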

10- What is Digital signature? Explain how digital signatures are produced
The digital signature is the most novel mechanism provided by modern crypto technology. It is a mechanism that does not involve secrets, but it protects data from undetected change. Moreover, the digital signature associates the data with the owner of a specific private key. If we can verify the signature with Sushma's public key, we can be certain that the data was signed with Sushma's private key. Experts feel this technique will form the bedrock of electronic commerce by providing digital credentials that are hard to forge.

A digital signature uses a private key to produce a crypto checksum. Crypto checksums based on conventional secret key techniques can only be verified by people who are trusted with the secret key, and the technique cannot tell which key holder actually produced the crypto checksum. Digital signatures are tied to a particular private key, so we can safely assume that only the private key holder could have produced the corresponding digital signature. Anyone with the corresponding public key can validate the hash or checksum themselves, tying the message's contents to the holder of the corresponding private key.

We produce an RSA digital signature for a data message by hashing the message contents and then encrypting the hash with the author's private key. We include the hash type in the digital signature for both cryptographic security and compatibility reasons.

The recipient validates the data by checking the encrypted hash value. The recipient decrypts the digital signature with the author's public key, yielding the hash type and value. Then the indicated hash function is applied to the data received, and this hash result is compared to the one protected by the digital signature.
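A hedged Node.js sketch of this produce-and-validate cycle (key generation is shown inline only to keep the example self-contained; SHA-256 and the 2048-bit modulus are choices made for the example, not requirements):

const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });
const message = Buffer.from('an order placed by the key holder');

// Author: hash the message and sign the hash with the private key.
const signature = crypto.sign('sha256', message, privateKey);

// Recipient: recompute the hash and check it against the signature using the public key.
console.log(crypto.verify('sha256', message, publicKey, signature));   // true if the data is unchanged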

BT0055 Internet Technology & Web Designing (Book ID: B0020, B0022 & B0217) Assignment Set 1
1- How will you add styles to a document?
There are three basic ways to include style information in an HTML document. The first is to use an outside style sheet, either by importing it or linking to it. The second is to embed a document-wide style in the <HEAD> element of the document. The third is to provide an inline style, right where the style needs to be applied.
a) Linking to a style sheet: An external style sheet is simply a plain text file containing the style specifications for HTML tags or classes. The common extension indicating a style sheet file is .css for CSS1 style sheets. With this method, one style sheet can be used for multiple pages.
b) Embedding and importing style sheets: The second way to include an external style sheet is to embed it. When you embed a style sheet, you write the style rules directly within the HTML document. A document-wide style is a very easy way to begin using style sheets. It involves the use of the <STYLE> element found within the <HEAD> element of an HTML document: enclose the style rules within the <STYLE> and </STYLE> tag pair and place these within the head section of the HTML document.
c) Using inline styles: Other than using a style sheet for the whole document, it is possible to add style information right down to a single element. The simplest way to add style information, but not necessarily the best, is to add style rules to a particular HTML element. Consider an example: say you want one particular <H1> tag to render in a 48-point, green, Arial font. You could apply that style to all <H1> elements, or to a class of them (discussed later), by applying a document-wide style, or you can apply the style to the tag in question using the STYLE attribute, which can be used within nearly any HTML element.

2- How will you view XML in different ways?
Different ways of viewing XML:
a) Viewing XML using the XML Data Source Object: Data Source Objects are used for what Microsoft calls data binding. Data binding is Microsoft's way of bringing data manipulation to the browser (client) and away from the server. Normally, if you want a new view on the data, you resubmit a query to the server; the server performs the necessary calculations and sends a new HTML page to the client. This doesn't happen with data binding: the server sends an HTML page together with the data to the client, and they are stored locally and can be manipulated locally without reconnecting to the server. To implement data binding, you first need to include a data source object in your page. After the insertion of the data source object, you need to define HTML elements that are able to read data from the data source: they are called data consumers.
b) Using an HTML table: We add the code

<TABLE BORDER="2" CELLPADDING="3" CELLSPACING="2" WIDTH="40%" DATASRC="#xmldso">
  <THEAD>
    <TH>Musician</TH>
    <TH>Instrument</TH>
    <TH>Number of recordings</TH>
  </THEAD>
  <TR>
    <TD><SPAN DATAFLD="name"></SPAN></TD>
    <TD><SPAN DATAFLD="instrument"></SPAN></TD>
    <TD><SPAN DATAFLD="NrOfRecordings"></SPAN></TD>
  </TR>
</TABLE>

The output is a table, rendered in the browser, with one row per record in the XML data.

In the above example we learnt how to view an XML document in the browser.

c) Using Cascading Style Sheets:
Step 1: Create a style sheet named cd_catalog.css as below:

CATALOG { background-color: #ffffff; width: 100%; }
CD { display: block; margin-bottom: 30pt; margin-left: 0; }
TITLE { color: #FF0000; font-size: 20pt; }
ARTIST { color: #0000FF; font-size: 20pt; }
COUNTRY, PRICE, YEAR, COMPANY { display: block; color: #000000; margin-left: 20pt; }

Step 2: We shall consider that the catalog contains 3 CD titles and write the XML document as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="cd_catalog.css"?>
<CATALOG>
  <CD>
    <TITLE>Empire Burlesque</TITLE>
    <ARTIST>Bob Dylan</ARTIST>
    <COUNTRY>USA</COUNTRY>
    <COMPANY>Columbia</COMPANY>
    <PRICE>10.90</PRICE>
    <YEAR>1985</YEAR>
  </CD>
  <CD>
    <TITLE>Hide your heart</TITLE>
    <ARTIST>Bonnie Tyler</ARTIST>
    <COUNTRY>UK</COUNTRY>
    <COMPANY>CBS Records</COMPANY>
    <PRICE>9.90</PRICE>
    <YEAR>1988</YEAR>
  </CD>
  <CD>
    <TITLE>Greatest Hits</TITLE>
    <ARTIST>Dolly Parton</ARTIST>
    <COUNTRY>USA</COUNTRY>
    <COMPANY>RCA</COMPANY>
    <PRICE>9.90</PRICE>
    <YEAR>1982</YEAR>
  </CD>
</CATALOG>

The output shows each CD as a block, with the title in red and the artist in blue, and the remaining fields indented below them, as specified by the style sheet.

3- Explain various file handling mechanism available in PERL.
Various file handling mechanisms in Perl:
a) Opening a file: Before writing to or reading from a file, the file must be opened. To open a file, the library function open is called, whose syntax is as follows:
open(filevariable, filename)
The first parameter, filevariable, represents the name that you want to use in your Perl program to refer to the file; in other words it is the file handle name that may be used in your program. The rules that apply for naming scalar variables apply for the file variable also, except for the absence of the starting $ character. It is a good idea to use all uppercase letters for your file variable names; this makes it easier to distinguish file variable names from reserved words. filename represents the location of the file on your machine. If the file is inside the current working directory, then filename is just the name of the file; if the file is located somewhere else, filename consists of the full path to that file. Example: open(FILE1, "file1");
There are three file access modes: read mode, write mode and append mode. In none of these modes is it possible to simultaneously read from and write to the same file. By default, open assumes read mode. To specify other modes, use the following forms:
Read mode: open(MYFILE, "file1");
Write mode: open(OUTFILE, ">file1");
Append mode: open(OUTFILE, ">>file1");
Write mode clears any existing contents of the file before writing, whereas in append mode the data is appended to the end of the existing contents. On success, open returns a nonzero value; on error it returns zero.
b) Reading from a file: Once a file is opened, you can read information from it as follows:
$line = <MYFILE>;
where MYFILE is the file variable name.
c) Writing to a file: To write to a file, the file must be opened in write or append mode. Writing to a file that you have opened is done by specifying the file variable with the print function, as shown below:
open(OUTFILE, ">outfile");
print OUTFILE "Here is an output line.";
d) Closing a file: When you are finished with a file, you can tell the Perl interpreter that you have completed by calling the close library function. The syntax for the function is:
close(filevariable);
One important point is that you need not call close when you have finished with a file: Perl automatically closes the file when the program terminates or when you open another file using a previously defined file variable. Consider the following example:
open(MYFILE, ">file1");
print MYFILE "Here is a line of output";
open(MYFILE, ">file2");
print MYFILE "Here is another line of output";
e) Determining the status of a file (file test operators): Many jobs involving file operations open a file and test whether the open operation succeeded. If open fails, it might be useful to find out exactly why the file could not be opened. To do this, use one of the file-test operators. The syntax of a file test operator is:
-x expr
Here, x is an alphabetic character representing a file test operator and expr is any expression. The value of expr is assumed to be a string that contains the name of the file to be tested. In the following example, the file test operator -e, which tests for the existence of a file, is illustrated:
$var1 = "file1";
if (-e $var1) { print "The file1 exists"; }
if (-e $var1."a") { print "The file1.a exists"; }

Answer 4)
<script type="text/javascript">
// Write a "Good morning" greeting if the time is less than 10
var d = new Date();
var time = d.getHours();
if (time < 10) {
  document.write("<b>Good morning</b>");
}
</script>

4- Write a javascript procedure to input 5 numbers and display their sum. <script type="text/javascript">

function DoAdd () { var sum = 0;

var messages = [ "first", "second", "third", "fourth", "fifth"];

for (var i = 0; i < 5; i++) { var value = prompt ("Enter " + messages [i] + " value : ", 0); sum += parseInt (value);

if (isNaN (sum)) break; }

if (isNaN (sum)) { alert ("Invalid input or action cancelled."); } else { alert ("Sum = " + sum); } }

</script>

5- Write a VBScript function to sort N numbers stored in an array.
Private Function SortArray(ByVal UnsortedArray)
    ' Comb sort: repeatedly shrink the gap and swap out-of-order pairs.
    Dim J, Temp, Gap, Swapped, ArrSize
    Const Shrink = 1.3
    ArrSize = UBound(UnsortedArray)
    Gap = ArrSize
    Do
        Gap = Int(Gap / Shrink)
        If Gap < 1 Then Gap = 1
        Swapped = False
        For J = 0 To ArrSize - Gap
            If UnsortedArray(J) > UnsortedArray(J + Gap) Then
                ' Swap the pair that is out of order.
                Temp = UnsortedArray(J)
                UnsortedArray(J) = UnsortedArray(J + Gap)
                UnsortedArray(J + Gap) = Temp
                Swapped = True
            End If
        Next
    Loop Until Not Swapped And Gap = 1
    SortArray = UnsortedArray
End Function

6- List 10 common ActiveX controls.

7- Discuss Network Interface using any OS.
TCP/IP, as an internetwork protocol suite, can operate over a vast number of physical networks. The most common and widely used of these is, of course, Ethernet.
Ethernet and IEEE 802 Local Area Networks (LANs)
Two frame formats can be used on the Ethernet coaxial cable:
1. The standard issued in 1978 by Xerox Corporation, Intel Corporation and Digital Equipment Corporation, usually called Ethernet (or DIX Ethernet).
2. The international IEEE 802.3 standard, a more recently defined standard.
The difference between the two standards is in the use of one of the header fields, which contains a protocol-type number for Ethernet and the length of the data in the frame for IEEE 802.3. The type field in Ethernet is used to distinguish between different protocols running on the coaxial cable, and allows their coexistence on the same physical cable. The maximum length of an Ethernet frame is 1526 bytes, which means a data field length of up to 1500 bytes. The length of the 802.3 data field is also limited to 1500 bytes for 10 Mbps networks, but is different for other transmission speeds. In the 802.3 MAC frame, the length of the data field is indicated in the 802.3 header; the type of protocol it carries is then indicated in the 802.2 header (higher protocol level; see Figure 2-1). In practice, however, both frame formats can coexist on the same physical coax. This is done by using protocol type numbers (type field) greater than 1500 in the Ethernet frame. However, different device drivers are needed to handle each of these formats. Thus, for all practical purposes, the Ethernet physical layer and the IEEE 802.3 physical layer are compatible, but the Ethernet data link layer and the IEEE 802.3/802.2 data link layer are incompatible.

8- Explain BGP components and its working
The Border Gateway Protocol (BGP) is an exterior gateway protocol. It was originally developed to provide a loop-free method of exchanging routing information between autonomous systems. BGP has since evolved to support aggregation and summarization of routing information.
BGP concepts and terminology: BGP uses specific terminology to describe the operation of the protocol. Figure 2.21 is used to illustrate this terminology.

Figure 2.21: Components of a BGP network
BGP speaker: A router configured to support BGP.
BGP neighbors (peers): A pair of BGP speakers that exchange routing information. There are two types of BGP neighbors:
= Internal (IBGP) neighbor: A pair of BGP speakers within the same AS.
= External (EBGP) neighbor: A pair of BGP neighbors, each in a different AS. These neighbors typically share a directly connected network.
BGP session: A TCP session connecting two BGP neighbors. The session is used to exchange routing information. The neighbors monitor the state of the session by sending keepalive messages.
Traffic type: BGP defines two types of traffic:
= Local: Traffic local to an AS either originates or terminates within the AS; either the source or the destination IP address resides in the AS.
= Transit: Any traffic that is not local traffic is transit traffic. One of the goals of BGP is to minimize the amount of transit traffic.
AS type: BGP defines three types of autonomous systems:
= Stub: A stub AS has a single connection to one other AS. A stub AS carries only local traffic.
= Multihomed: A multihomed AS has connections to two or more autonomous systems. However, a multihomed AS has been configured so that it does not forward transit traffic.
= Transit: A transit AS has connections to two or more autonomous systems and carries both local and transit traffic. The AS may impose policy restrictions on the types of transit traffic that will be forwarded.
Depending on the configuration of the BGP devices within AS 2 in Figure 2.21, that autonomous system may be either a multihomed AS or a transit AS.
AS number: A 16-bit number uniquely identifying an AS.
AS path: A list of AS numbers describing a route through the network. A BGP neighbor communicates paths to its peers.
Routing policy: A set of rules constraining the flow of data packets through the network. Routing policies are not defined in the BGP protocol; rather, they are used to configure a BGP device. For example, a BGP device may be configured so that:
= A multihomed AS can refuse to act as a transit AS. This is accomplished by advertising only those networks contained within the AS.
= A multihomed AS can perform transit AS routing for a restricted set of adjacent autonomous systems. It does this by tailoring the routing advertisements sent to EBGP peers.
= An AS can optimize traffic to use a specific AS path for certain categories of traffic.
Network layer reachability information (NLRI): NLRI is used by BGP to advertise routes. It consists of a set of networks represented by the tuple <length,prefix>. For example, the tuple <14,220.24.106.0> represents the CIDR route 220.24.106.0/14.
Routes and paths: A route associates a destination with a collection of attributes describing the path to the destination. The destination is specified in NLRI format, and the path is reported as a collection of path attributes. This information is advertised in UPDATE messages.

BT0055 Internet Technology & Web Designing (Book ID: B0020, B0022 & B0217) Assignment Set 2
1- Explain various style sheet properties.
Font Properties: Font Family, Font Style, Font Variant, Font Weight, Font Size, Font
Color and Background Properties: Color, Background Color, Background Image, Background Repeat, Background Attachment, Background Position, Background
Text Properties: Word Spacing, Letter Spacing, Text Decoration, Vertical Alignment, Text Transformation, Text Alignment, Text Indentation, Line Height
Box Properties: Top Margin, Right Margin, Bottom Margin, Left Margin, Margin, Top Padding, Right Padding, Bottom Padding, Left Padding, Padding, Top Border Width, Right Border Width, Bottom Border Width, Left Border Width, Border Width, Border Color, Border Style, Top Border, Right Border, Bottom Border, Left Border, Border, Width, Height, Float, Clear
Classification Properties: Display, Whitespace, List Style Type, List Style Image, List Style Position, List Style
Units: Length Units, Percentage Units, Color Units, URLs

2- What is DTD? Explain
The oldest format for describing the structure of XML is the DTD. The full form of DTD is Document Type Definition, and it is inherited from SGML. It can be said that the inclusion of DTDs has increased the popularity of XML, but DTDs have their share of limitations. The document type definition is an XML description of the content model of a type of documents. The document type declaration is a statement in an XML file that identifies the DTD that belongs to the document; a document type declaration associates a DTD with an XML document. Document type declarations appear in the syntactic fragment doctypedecl near the start of an XML document, and the declaration establishes that the document is an instance of the type defined by the referenced DTD. The syntax of a document type declaration is:
<!DOCTYPE name [internal subset]>
DTDs make two sorts of declaration:
1. an internal subset
2. an external subset
The declarations in the internal subset form part of the document type declaration in the document itself. The declarations in the external subset are located in a separate text file. The external subset may be referenced via a public identifier and/or a system identifier. Programs for reading documents may not be required to read the external subset.

3- What is Pattern Matching? Explain.
Pattern matching is the searching for a sequence of characters within a character string. When doing pattern matching, if the pattern is found then a match is said to have occurred. A pattern is a sequence of characters to be searched for in a character string. In Perl, patterns are normally enclosed in slash characters: /def/ represents the pattern def.
The pattern matching operators: Perl defines two operators for pattern matching. The =~ operator tests whether a pattern matched, and the !~ operator is similar to =~, except that it checks whether a pattern is not matched. For example, if ($line =~ /def/) { print "match\n"; } prints match when $line contains def.
Pattern matching functions: Perl has three main constructs which are used for pattern matching (although pattern matching can also be used in other functions, such as the split() function). They are m//, s///, and tr///.

4- Write a javascript procedure to input the name and address of a visitor and display a greeting message.
<script type="text/javascript">
// Prompt the visitor for a name and an address, then display a greeting.
var name = prompt("What is your name", "Type your name here");
var address = prompt("What is your address", "Type your address here");
document.write("<b>Good morning</b>" + "<br>" + name + "<br>" + address);
</script>

5- Describe various VBScript data types and subtypes
VBScript has only one data type, called a variant. A variant is a special kind of data type that can contain different kinds of information, depending on how it is used. Since variant is the only data type in VBScript, it is also the data type returned by all functions in VBScript. A variant can hold a variety of data ranging from Boolean values to huge floating-point numbers. The different categories of information that can be contained in a variant are called subtypes.
Empty: Variant is not initialized. Value is 0 for numeric variables or a zero-length string ("") for string variables.
Null: Variant intentionally contains no valid data.
Boolean: Contains either True or False.
Byte: Contains an integer in the range 0 to 255.
Integer: Contains an integer in the range -32,768 to 32,767.
Currency: Contains a number in the range -922,337,203,685,477.5808 to 922,337,203,685,477.5807.
Long: Contains an integer in the range -2,147,483,648 to 2,147,483,647.
Single: Contains a single-precision, floating-point number.
Double: Contains a double-precision, floating-point number.
Date (Time): Contains a number that represents a date between January 1, 100 and December 31, 9999.
String: Contains a variable-length string that can be up to approximately 2 billion characters in length.
Object: Contains an object.
Error: Contains an error number.

6- What are ActiveX controls? Explain.
ActiveX defines a new specification for OLE controls, which allows them to be much smaller and more efficient in the Internet environment. ActiveX provides a means for interaction on the World Wide Web. Similar to OLE, ActiveX is also based on COM. ActiveX allows controls to be embedded in web pages and also allows them to be used interactively. ActiveX is designed for speed and size. ActiveX focuses on integrating various objects written in a variety of programming languages, such as C, C++, Java, and Visual Basic. ActiveX is currently supported by the Windows operating system. There are more than one thousand controls available.
An ActiveX control can be embedded in an HTML document by using the <OBJECT> tag. The <OBJECT> tag has several attributes to describe the properties of the object. The two most important attributes required when including ActiveX controls in web pages are CLASSID and ID: CLASSID identifies the type of control and the code needed for its execution, and the ID attribute gives a name to the object, which is used to reference the control within the document. The <OBJECT> tag requires a corresponding </OBJECT> tag.
The attributes of the <OBJECT> tag:
ID: Identifies the object within a script.
CLASSID: A URL that identifies an implementation for the object.
DECLARE: Indicates that the object is to be declared but not instantiated.
CODEBASE: The base URL from which the object is referenced.
DATA: A URL pointing to the object data. In the absence of CLASSID, DATA determines a default value for the CLASSID attribute.
CODETYPE: Specifies the Internet media type of the code referenced by the CLASSID attribute before it is actually retrieved.
STANDBY: Specifies a short text string the browser can show while loading the object.
ALIGN: Determines where to place the object.
LEFT: The object is drawn as a left-flush floating object and text flows around it.
TEXTTOP: Surrounding text is aligned with the top of the object.
MIDDLE: The object is drawn from the center; text flows around it.
TEXTMIDDLE: Surrounding text is aligned with the middle of the object.
BASELINE: The object is aligned with the baseline of the surrounding text.
TEXTBOTTOM: Surrounding text is aligned with the bottom of the object.
CENTER: Surrounding text is aligned with the center of the object.
RIGHT: The object is drawn as a right-flush floating object and text flows around it.
WIDTH: Specifies the suggested width (in pixels).
HEIGHT: Specifies the suggested height (in pixels).
BORDER: Specifies the border of the object when it is part of a hypertext link.
HSPACE: Specifies the suggested width (in pixels) of the area enclosing the object.
VSPACE: Specifies the suggested height (in pixels) of the area enclosing the object.
USEMAP: Specifies a URL for a client-side image map to be used with the object.
SHAPES: Indicates that the object contains anchors with hypertext links.
NAME: Specifies the name of the object when submitted as part of a form.
NOTAB: Excludes the object from the tabbing order.
TABINDEX: Specifies the object's position in the tabbing order.

7- What is DHCP Protocol? Explain.

The Dynamic Host Configuration Protocol (DHCP) provides a framework for passing configuration information to hosts on a TCP/IP network. DHCP is based on the BOOTP protocol, adding the capability of automatic allocation of reusable network addresses and additional configuration options. DHCP messages use UDP port 67, the BOOTP server's well-known port, and UDP port 68, the BOOTP client's well-known port. DHCP participants can interoperate with BOOTP participants.
DHCP consists of two components:
1. A protocol that delivers host-specific configuration parameters from a DHCP server to a host.
2. A mechanism for the allocation of temporary or permanent network addresses to hosts.
IP requires the setting of many parameters within the protocol implementation software. Because IP can be used on many dissimilar kinds of network hardware, values for those parameters cannot be guessed at or assumed to have correct defaults. The use of a distributed address allocation scheme based on a polling/defense mechanism, for discovery of network addresses already in use, cannot guarantee unique network addresses, because hosts may not always be able to defend their network addresses.
DHCP supports three mechanisms for IP address allocation:
1. Automatic allocation: DHCP assigns a permanent IP address to the host.
2. Dynamic allocation: DHCP assigns an IP address for a limited period of time. Such a network address is called a lease. This is the only mechanism that allows automatic reuse of addresses that are no longer needed by the hosts to which they were assigned.
3. Manual allocation: The host's address is assigned by a network administrator.
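As a hedged toy sketch of the dynamic allocation mechanism (all names and the data structure are invented for illustration; this is not how a real DHCP server is implemented): an address is handed out as a lease with an expiry time, after which it can be reused.

// Toy model of dynamic allocation: each address is leased for a limited time.
const leases = new Map();                          // address -> { clientId, expires }

function allocate(pool, clientId, leaseMs) {
  const now = Date.now();
  const free = pool.find(a => !leases.has(a) || leases.get(a).expires <= now);
  if (!free) return null;                          // address pool exhausted
  leases.set(free, { clientId, expires: now + leaseMs });
  return free;                                     // the leased address
}

const pool = ['192.168.1.10', '192.168.1.11'];
console.log(allocate(pool, 'client-a', 60000));    // e.g. "192.168.1.10"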

8- What is OSPF? Explain its working.
The Open Shortest Path First (OSPF) protocol is another example of an interior gateway protocol. It was developed as a non-proprietary routing alternative to address the limitations of RIP. Initial development started in 1988 and was finalized in 1991, and subsequent updates to the protocol continue to be published. The current version of the standard is documented in RFC 2328. OSPF provides a number of features not found in distance vector protocols. Support for these features has made OSPF a widely deployed routing protocol in large networking environments. In fact, RFC 1812, Requirements for IPv4 Routers, lists OSPF as the only required dynamic routing protocol. The following features contribute to the continued acceptance of the OSPF standard:
Equal-cost load balancing: The simultaneous use of multiple paths may provide more efficient utilization of network resources.
Logical partitioning of the network: This reduces the propagation of outage information during adverse conditions. It also provides the ability to aggregate routing announcements, limiting the advertisement of unnecessary subnet information.
Support for authentication: OSPF supports the authentication of any node transmitting route advertisements. This prevents fraudulent sources from corrupting the routing tables.
Faster convergence time: OSPF provides instantaneous propagation of routing changes. This expedites the convergence time required to update network topologies.
Support for CIDR and VLSM: This allows the network administrator to efficiently allocate IP address resources.
OSPF is a link state protocol. As with other link state protocols, each OSPF router executes the Shortest-Path First (SPF) algorithm to process the information stored in the link state database. The algorithm produces a shortest-path tree detailing the preferred routes to each destination network.
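As a hedged sketch of the SPF computation (a plain Dijkstra run over a cost-weighted adjacency map; the graph format and names are assumptions made for this example, not OSPF's actual link state database encoding):

// Dijkstra's shortest-path-first over a cost-weighted adjacency map.
// graph: { node: { neighbour: cost, ... }, ... }
function spf(graph, root) {
  const dist = { [root]: 0 };
  const prev = {};
  const unvisited = new Set(Object.keys(graph));
  while (unvisited.size > 0) {
    // Pick the unvisited node with the smallest known distance.
    let u = null;
    for (const n of unvisited) {
      if (dist[n] !== undefined && (u === null || dist[n] < dist[u])) u = n;
    }
    if (u === null) break;                       // remaining nodes are unreachable
    unvisited.delete(u);
    for (const [v, cost] of Object.entries(graph[u])) {
      const alt = dist[u] + cost;
      if (dist[v] === undefined || alt < dist[v]) {
        dist[v] = alt;                           // better path to v found via u
        prev[v] = u;                             // remember the tree edge
      }
    }
  }
  return { dist, prev };                         // the shortest-path tree rooted at `root`
}

// Example: three routers with link costs used as OSPF metrics.
console.log(spf({ A: { B: 1, C: 5 }, B: { A: 1, C: 2 }, C: { A: 5, B: 2 } }, 'A'));
// dist: { A: 0, B: 1, C: 3 }, prev: { B: 'A', C: 'B' }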
