Secondary storage devices, as the name indicates, hold data after it has passed through the primary storage device, usually referred to as RAM (Random Access Memory). Alternatively referred to as external memory or auxiliary storage, secondary storage is a storage medium that holds information until it is deleted or overwritten, regardless of whether the computer has power. A floppy disk drive and a hard drive are both good examples of secondary storage devices. A computer uses three different kinds of storage; although primary storage is accessed much faster than secondary storage, because of price and size limitations secondary storage is used in today's computers to store all your programs and your personal data.
Finally, although off-line storage could be considered secondary storage, we've separated it into its own category because these media can be removed from the computer and stored elsewhere.
Compact disc (CD) drives as well as digital versatile disc (DVD) drives are also used as secondary storage devices. DVDs can store six times more data than CDs, according to Computer Nature.
1) Floppy Disks
Floppy disks are a storage medium made of a thin magnetic disk. They were widely used from the 1970s to the early 2000s. On the 3½-inch microfloppy, common from the late 1980s onward, storage capacities ranged from the standard 1.44 MB up to 200 MB on some versions.
2) Magnetic Tape
Magnetic tape has been in use for more than 50 years. Modern magnetic tape is packaged in cartridges or cassettes and is used for storing data backups, particularly in corporate settings. A standard-length 2,400-foot reel typically stores between 5 MB and 140 MB.
3) CD-R
A CD-R, a type of recordable CD, is an optical secondary storage device invented by Sony and Philips. It is also known as a WORM -- write once read many -- medium.
4) DVD-R
A DVD-R, a type of recordable DVD, has a storage capacity of 4.7 GB. There is also an 8.5-GB dual-layer version, called DVD-R DL.
Query by Example
Query by Example (QBE) is a database query language for relational databases. It was devised by Moshé M. Zloof at IBM Research during the mid-1970s, in parallel with the development of SQL. It is the first graphical query language, using visual tables in which the user enters commands, example elements, and conditions. Many graphical front-ends for databases use the ideas from QBE today. Originally limited to retrieving data, QBE was later extended to allow other operations, such as inserts, deletes, and updates, as well as the creation of temporary tables.

In the context of information retrieval, QBE has a somewhat different meaning: the user can submit a document, or several documents, and ask for "similar" documents to be retrieved from a document database.

QBE is a feature included with various database applications that provides a user-friendly method of running database queries. Typically, without QBE, a user must write input commands using correct SQL (Structured Query Language) syntax. This is a standard language that nearly all database programs support. However, if the syntax is slightly incorrect, the query may return the wrong results or may not run at all.

The Query by Example feature provides a simple interface for a user to enter queries. Instead of writing an entire SQL command, the user can just fill in blanks or select items to define the query she wants to perform. For example, a user may want to select an entry from a table called "Table1" with an ID of 123. Using SQL, the user would need to input the command "SELECT * FROM Table1 WHERE ID = 123". The QBE interface may allow the user to just click on Table1, type "123" in the ID field, and click "Search."

QBE is offered with most database programs, though the interface often differs between applications. For example, Microsoft Access has a QBE interface known as "Query Design View" that is completely graphical.
As a general technique
The term also refers to a general technique influenced by Zloof's work whereby only items with search values are used to "filter" the results. It provides a way for a software user to perform queries without having to know a query language (such as SQL). The software can automatically generate the queries for the user (usually behind the scenes). Here are some examples:

Example Form B:
Name: Bob
Address:
City:
State: TX
Zipcode:
Resulting SQL:
SELECT * FROM Contacts WHERE Name='Bob' AND State='TX'
Note how blank items do not generate SQL terms. Since "Address" is blank, there is no clause generated for it.

Example Form C:
Name:
Address:
City: Sampleton
State:
Zipcode: 12345
Resulting SQL:
SELECT * FROM Contacts WHERE City='Sampleton' AND Zipcode=12345
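The filtering idea above can be sketched in a few lines of Python. This is an illustrative sketch, not any particular product's implementation; the table and field names follow the example forms.

```python
# Sketch of the QBE "filter" technique: only non-blank form fields
# become terms in the WHERE clause. Table and field names follow the
# example forms above; this is illustrative, not a real product's code.

def qbe_to_sql(table, fields):
    clauses = []
    for name, value in fields.items():
        if value in (None, ""):
            continue  # blank fields generate no SQL term
        # Quote non-numeric values, as in the examples above.
        term = value if str(value).isdigit() else f"'{value}'"
        clauses.append(f"{name}={term}")
    if not clauses:
        return f"SELECT * FROM {table}"
    return f"SELECT * FROM {table} WHERE " + " AND ".join(clauses)

# Example Form B: only Name and State are filled in.
print(qbe_to_sql("Contacts", {"Name": "Bob", "Address": "", "City": "",
                              "State": "TX", "Zipcode": ""}))
# SELECT * FROM Contacts WHERE Name='Bob' AND State='TX'
```

A real implementation would use parameterized queries rather than string interpolation, to avoid SQL injection.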
Example
For example, to find all customers having an account at the SFU branch:
To find all customers having an account at both the SFU and the MetroTown branch:
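Since the original QBE skeleton tables are not reproduced here, a sketch of the equivalent SQL may help. The schema below (account and depositor tables) is assumed for illustration; the "both branches" query intersects the two single-branch results.

```python
import sqlite3

# A hypothetical schema stands in for the missing QBE skeleton tables:
# account(account_number, branch_name) and
# depositor(customer_name, account_number).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE account (account_number TEXT, branch_name TEXT);
CREATE TABLE depositor (customer_name TEXT, account_number TEXT);
INSERT INTO account VALUES ('A-1','SFU'), ('A-2','MetroTown'), ('A-3','SFU');
INSERT INTO depositor VALUES ('Alice','A-1'), ('Alice','A-2'), ('Bob','A-3');
""")

# Customers with an account at the SFU branch:
sfu = """SELECT DISTINCT customer_name
         FROM depositor JOIN account USING (account_number)
         WHERE branch_name = 'SFU'"""

# Customers with accounts at BOTH branches: intersect the two sets.
both = sfu + """ INTERSECT
         SELECT DISTINCT customer_name
         FROM depositor JOIN account USING (account_number)
         WHERE branch_name = 'MetroTown'"""

print(sorted(r[0] for r in con.execute(sfu)))   # ['Alice', 'Bob']
print([r[0] for r in con.execute(both)])        # ['Alice']
```

In QBE the intersection would be expressed by entering the same example element in two skeleton rows, one per branch; the generated SQL is equivalent to the INTERSECT above.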
A network operating system (NOS) is software that runs on a server and enables the server to manage data, users, groups, security, applications, and other networking functions.[2] The network operating system is designed to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN) or a private network, and to connect to other networks. The most popular network operating systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
Characteristics
Network Operating Systems are based on a client/server architecture in which a server enables multiple clients to share resources.[2]
Use in Routers
Network operating systems can also be embedded in a router or hardware firewall that operates at the network layer (layer 3) of the OSI model.[1]
Examples:
o JUNOS, used in routers and switches from Juniper Networks.
o ZyNOS, used in network devices made by ZyXEL.
o ExtremeXOS (also called EXOS), used in network devices made by Extreme Networks.
Peer-to-Peer
In a peer-to-peer network operating system, users are allowed to share resources and files located on their computers and to access shared resources from others. This system does not require a file server or centralized management. A peer-to-peer network treats all connected computers as equals; they all share the same ability to use resources available on the network.[3]
Examples:
o AppleShare, used for networking Apple products.
o Windows for Workgroups, used for networking peer-to-peer Windows computers.
Advantages
No dedicated server is required, so the network is less expensive and simpler to set up.
Disadvantages
No central location for storage. Lack of the security that a client/server type offers.
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984,[1] allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The Network File System is an open standard defined in RFCs, allowing anyone to implement the protocol.

Among the many different file systems that FreeBSD supports is the Network File System, also known as NFS. NFS allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.
Benefits
Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network. There is no need for users to have separate home directories on every network machine. Storage devices such as floppy disks, CD-ROM drives, and Zip drives can be used by other machines on the network, which may reduce the number of removable media drives throughout the network.
mountd - The NFS mount daemon, which carries out the requests that nfsd(8) passes on to it.
rpcbind - This daemon allows NFS clients to discover which port the NFS server is using.
NFS is built from protocols that correspond to the seven layers of the Open Systems Interconnection (OSI) model.

Table 11.1 OSI Layers and NFS Protocols

OSI Layer      NFS Protocol
Application    NFS and NIS
Presentation   XDR
Session        RPC
Transport      TCP, UDP
Network        IP
Data Link      Ethernet
Physical       Ethernet

The Physical layer controls how data is physically transmitted across the network. The Data Link layer provides transfer of data combined into frames. Ethernet is the standard implementation of these two layers.

The Network layer is concerned with getting the data from one host to another on the network. The Internet Protocol (IP) is an implementation of this layer. IP must get the packets to the correct destination; it is not concerned with data reliability or with data order, and it can fragment packets that are too large. The Internet Protocol uses unique IP addresses to identify hosts.

The Transport layer, which is responsible for data flow and data reliability, is implemented using UDP or TCP.
Transmission Control Protocol (TCP) provides reliable, ordered delivery of data packets and is stateful. TCP keeps track of the order of information and resends missing data. This protocol is best for long network connections, such as file transfer. User Datagram Protocol (UDP) is a simple, connectionless protocol that does not ensure the order or the completeness of the datagrams. It is stateless and is best for short connections such as remote procedure calls.

The Session layer is concerned with the exchange of messages between devices. NFS uses the Remote Procedure Call (RPC) protocol.

The Presentation layer is concerned with the exchange of data types between heterogeneous systems. NFS uses the External Data Representation (XDR) protocol. This protocol specifies the format to which the data must be converted before being sent. Once received, the data is then reconverted.
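As an illustration of the Presentation layer's job, here is a minimal sketch of XDR-style encoding in Python, following the RFC 4506 conventions for integers and strings. The helper names are made up for this illustration.

```python
import struct

# Minimal sketch of XDR-style encoding (RFC 4506 conventions):
# integers are 4-byte big-endian words; a string is a 4-byte length
# followed by its bytes, padded with zeros to a 4-byte boundary.

def xdr_int(n):
    return struct.pack(">i", n)

def xdr_string(s):
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

# Encode the integer 3 followed by the string "abc":
msg = xdr_int(3) + xdr_string("abc")
print(msg.hex())  # 000000030000000361626300
```

The receiver reverses the process with struct.unpack on the same formats, which is the "reconversion" step mentioned above.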
Client-server model
A computer network diagram of clients communicating with a server via the Internet. Both the clients and the server are nodes (communication points) on the network. The arrangement of the nodes in a network is called the network topology.
The client-server model is an approach to computer network programming developed at Xerox PARC during the 1970s. It is now prevalent in computer networks: email, the World Wide Web, and network printing all apply the client-server model. The model assigns one of two roles to the computers in a network: client or server. A server is a computer system that selectively shares its resources; a client is a computer or computer program that initiates contact with a server in order to make use of a resource. Data, CPUs, printers, and data storage devices are some examples of resources.

This sharing of computer resources is called time-sharing, because it allows multiple people to use a computer (in this case, the server) at the same time. Because a computer does a limited amount of work at any moment, a time-sharing system must quickly prioritize its tasks to accommodate the clients.

Clients and servers exchange messages in a request-response messaging pattern: the client sends a request, and the server returns a response. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All client-server protocols operate in the application layer.

Whether a computer is a client, a server, or both, it can serve multiple functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software on the same computer.[1] Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.
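The request-response pattern can be demonstrated with a toy exchange over a socket pair. The "protocol" here is invented for illustration: the client sends a line naming a resource, and the server replies with its contents or an error.

```python
import socket
import threading

# A toy request-response exchange over a socket pair. The "protocol"
# is invented for illustration: the client sends a line naming a
# resource, and the server replies with its contents or an error.
RESOURCES = {"time.txt": "12:00", "motd.txt": "hello"}

def serve(conn):
    request = conn.recv(1024).decode().strip()   # read the request
    reply = RESOURCES.get(request, "ERROR: not found")
    conn.sendall(reply.encode())                 # return the response
    conn.close()

client, server = socket.socketpair()
t = threading.Thread(target=serve, args=(server,))
t.start()

client.sendall(b"motd.txt\n")                    # the request
response = client.recv(1024).decode()            # the response
t.join()
client.close()
print(response)  # hello
```

A real protocol such as HTTP follows the same shape, with a much richer request and response format defined in its specification.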
implementations and design alternatives. Based on the assessment of these systems, the paper makes the point that a departure from the approach of extending centralized file systems over a communication network is necessary to accomplish sound distributed file system design.
o You expect to add file servers or modify file locations.
o Users who access targets are distributed across a site or sites.
o Most users require access to multiple targets.
o Server load balancing could be improved by redistributing targets.
o Users require uninterrupted access to targets.
o Your organization has Web sites for either internal or external use.
DFS types
You can implement a distributed file system in either of two ways: as a stand-alone root distributed file system, or as a domain distributed file system.
Consider the call count = read(fd, buf, nbytes), where fd is an integer, buf is an array of characters, and nbytes is another integer. If the call is made from the main program, the stack will be as shown in Fig. 2-17(a) before the call. To make the call, the caller pushes the parameters onto the stack in order, last one first, as shown in Fig. 2-17(b). (The reason that C compilers push the parameters in reverse order has to do with printf: by doing so, printf can always locate its first parameter, the format string.) After read has finished running, it puts the return value in a register, removes the return address, and transfers control back to the caller. The caller then removes the parameters from the stack, returning it to the original state, as shown in Fig. 2-17(c).
Fig. 2-17. (a) The stack before the call to read. (b) The stack while the called procedure is active. (c) The stack after the return to the caller.
Several things are worth noting. For one, in C, parameters can be call-by-value or call-by-reference. A value parameter, such as fd or nbytes, is simply copied to the stack as shown in Fig. 2-17(b). To the called procedure, a value parameter is just an initialized local variable. The called procedure may modify it, but such changes do not affect the original value at the calling side.

A reference parameter in C is a pointer to a variable (i.e., the address of the variable), rather than the value of the variable. In the call to read, the second parameter is a reference parameter because arrays are always passed by reference in C. What is actually pushed onto the stack is the address of the character array. If the called procedure uses this parameter to store something into the character array, it does modify the array in the calling procedure. The difference between call-by-value and call-by-reference is quite important for RPC, as we shall see.

One other parameter passing mechanism also exists, although it is not used in C. It is called call-by-copy/restore. It consists of having the variable copied to the stack by the caller, as in call-by-value, and then copied back after the call, overwriting the caller's original value.

Suppose that a program needs to read some data from a file. The programmer puts a call to read in the code to get the data. In a traditional (single-processor) system, the read routine is extracted from the library by the linker and inserted into the object program. It is a short procedure, usually written in assembly language, that puts the parameters in registers and then issues a READ system call by trapping to the kernel. In essence, the read procedure is a kind of interface between the user code and the operating system.

RPC achieves its transparency in an analogous way. When read is actually a remote procedure (e.g., one that will run on the file server's machine), a different version of read, called a client stub, is put into the library.
Like the original one, it, too, is called using the calling sequence of Fig. 2-17. Also like the original one, it, too, traps to the kernel. But unlike the original one, it does not put the parameters in registers and ask the kernel to give it data. Instead, it packs the parameters into a message and asks the kernel to send the message
to the server as illustrated in Fig. 2-18. Following the call to send, the client stub calls receive, blocking itself until the reply comes back.
Fig. 2-18. Calls and messages in an RPC. Each ellipse represents a single process, with the shaded portion being the stub.
When the message arrives at the server, the kernel passes it up to a server stub that is bound with the actual server. Typically the server stub will have called receive and be blocked waiting for incoming messages. The server stub unpacks the parameters from the message and then calls the server procedure in the usual way (i.e., as in Fig. 2-17). From the server's point of view, it is as though it is being called directly by the client: the parameters and return address are all on the stack where they belong and nothing seems unusual. The server performs its work and then returns the result to the caller in the usual way. For example, in the case of read, the server will fill the buffer, pointed to by the second parameter, with the data. This buffer will be internal to the server stub.

When the server stub gets control back after the call has completed, it packs the result (the buffer) in a message and calls send to return it to the client. Then it goes back to the top of its own loop to call receive, waiting for the next message.

When the message gets back to the client machine, the kernel sees that it is addressed to the client process (to the stub part of that process, but the kernel does not know that). The message is copied to the waiting buffer and the client process is unblocked. The client stub inspects the message, unpacks the result, copies it to its caller, and returns in the usual way. When the caller gets control following the call to read, all it knows is that its data are available. It has no idea that the work was done remotely instead of by the local kernel.
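The stub mechanism described above can be sketched in Python, with the kernel and network replaced by direct function calls so that the marshalling steps stand out. All names here (client_stub, server_stub, and the toy read procedure) are illustrative, not part of any real RPC system.

```python
import pickle

# Sketch of the stub mechanism described above, with the kernel and
# network replaced by direct function calls so the marshalling steps
# stand out. All names here are illustrative.

def server_procedure(fd, nbytes):
    # Stands in for the real server's read(): return up to nbytes
    # bytes of a pretend file.
    return b"file data"[:nbytes]

def server_stub(message):
    proc_name, args = pickle.loads(message)   # unpack the parameters
    result = server_procedure(*args)          # call the server as usual
    return pickle.dumps(result)               # pack result into a reply

def client_stub(fd, nbytes):
    message = pickle.dumps(("read", (fd, nbytes)))  # marshal the call
    reply = server_stub(message)              # "send", block for reply
    return pickle.loads(reply)                # unpack, return to caller

# The caller sees an ordinary procedure call:
print(client_stub(3, 4))  # b'file'
```

In a real RPC system, the pickled message would travel through the kernel and across the network, and the data (rather than the pointer) is what gets copied back, which is why the by-reference buffer semantics require special handling.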
A distributed system is a collection of independent computers that can cooperate, but that appear to the users of the system as a single uniprocessor computer. It has two aspects: hardware and software.
Examples
Users sharing a processor pool, where the system dynamically decides on which machine each process is executed. Distributed banking systems.
#Hardware Concepts
Although all distributed systems consist of multiple CPUs, there are different ways of interconnecting them and different ways for them to communicate. Flynn (1972) identified two essential characteristics for classifying multiple-CPU computer systems: the number of instruction streams and the number of data streams. Uniprocessors are SISD (single instruction stream, single data stream).
Array processors are SIMD (single instruction stream, multiple data streams): the processors cooperate on a single problem. No known computer fits the MISD model. Distributed systems are MIMD (multiple instruction streams, multiple data streams): a group of independent computers, each with its own program counter, program, and data.
MIMD can be split into two classifications:
Multiprocessors - CPUs share a common memory
Multicomputers - each CPU has its own private memory
These can be further subclassified as:
Bus - all machines connected by a single medium (e.g., LAN, bus, backplane, cable)
Switched - single wire from machine to machine, with possibly different wiring patterns (e.g., the Internet)
A further classification is:
Tightly-coupled - short delay in communication between computers, high data rate (e.g., parallel computers working on related computations)
Loosely-coupled - large delay in communication, low data rate (e.g., distributed systems working on unrelated computations)
#Software concepts
Operating systems for multiprocessors and multicomputers can be tightly-coupled or loosely-coupled.

Loosely-coupled
Ex- a LAN where users have their own, independent machines, but are still able to interact in a limited way when necessary
Tightly-coupled
Ex- a multiprocessor in which all CPUs run a single operating system and appear to users as one machine