
DBMS: Secondary storage device

Secondary storage devices, as the name indicates, hold data after it has been saved by the primary storage device, usually referred to as RAM (Random Access Memory). Also known as external memory or auxiliary storage, secondary storage is a storage medium that retains information until it is deleted or overwritten, regardless of whether the computer has power. A floppy disk drive and a hard drive are both good examples of secondary storage devices. As the picture below shows, there are three kinds of storage on a computer. Although primary storage is accessed much faster than secondary storage, price and size limitations mean that today's computers use secondary storage to hold all of your programs and personal data.

Finally, although off-line storage could be considered secondary storage, we've separated these into their own category because this media can be removed from the computer and stored elsewhere.

Types of Secondary Storage Devices


There are a variety of secondary storage devices, either internal or external, that are used to store your computer's data. Internal devices include magnetic disks, optical disks, and magnetic tapes. External devices such as CDs are used primarily for data backups as well as audio and video storage. Memory cards, flash drives, and digital versatile discs (DVDs) are also common secondary storage devices. DVDs can store about six times more data than CDs, according to Computer Nature.

Internal Hard Disk Drive


The internal hard disk drive is the main secondary storage device; it magnetically stores all of your data, including operating system files and folders, documents, music, and video. You can think of the hard disk drive as a stack of disks mounted one on top of the other and placed in a sturdy case. The disks spin at high speed to provide fast access to stored data anywhere on a disk.

1)Floppy Disks
Floppy disks are a storage medium made of a thin magnetic disk. They were widely used from the 1970s to the early 2000s. On the 3½-inch microfloppy, common from the late 1980s onward, storage capacities ranged from the standard 1.44 MB up to 200 MB on some versions.

2)Magnetic Tape
Magnetic tape has been in use for more than 50 years. Modern magnetic tape is packaged in cartridges or cassettes and is used for storing data backups, particularly in corporate settings. The average amount of storage is 5 MB to 140 MB for every standard-length reel, which is 2,400 feet.

External Hard Disk Drive


External hard disk drives are used when the internal drive has no free space and you need to store more data. In addition, it is always recommended that you back up all of your data, and an external hard drive can be very useful here, as it can safely store large amounts of information. External drives connect to a computer by USB or FireWire, and several can even be connected together if you need additional hard drives at the same time.

1)USB Flash Drive


A USB flash drive is also portable and can be carried around on a key chain. This type of secondary storage device has become incredibly popular due to its very small size compared to the amount of data it can store (in most cases, more than CDs or DVDs). Data can be easily read over the USB (Universal Serial Bus) interface that now comes standard on most computers.

2)CD-R
A CD-R, a type of recordable CD, is an optical secondary storage device invented by Sony and Philips. It is also known as a WORM -- write once read many -- medium.

3)DVD-R
A DVD-R, a type of recordable DVD, usually has a storage capacity of 4.7 GB. There is also an 8.5-GB dual-layer version, called DVD-R DL.

Uses of Secondary Storage Device


1)Computer Backup
The majority of secondary storage devices are used for a very simple task: backing up data. Most likely, your computer is filled with all of your music, pictures, videos and other items of value. You may also have expensive applications that you paid for loaded onto it. Backing up your data is always recommended in case of a hard drive crash on your computer.

2)Network Attached Storage


Many businesses will connect secondary storage devices to their network. This acts as an easy way for them to share files. When you connect a secondary storage device to your network, you can set specific permissions for anyone on your network to connect to it.

3)For Travel and Easy Transport


Secondary storage devices are becoming more and more portable as technology progresses, and a very popular use for them is transporting data. Whether you are boarding an airplane and need to bring files to your boss, or just want to take a computer game to a friend, secondary storage devices will do the trick.

Benefits of Secondary Storage


Secondary storage devices offer several distinct benefits for your computer use:
> They can store enormous amounts of information, the equivalent of hundreds or even thousands of books.
> Secondary storage removes the once-enormous costs businesses incurred to store important documents in filing cabinets.
> Secondary storage devices are safe, reliable, and permanent.

Disadvantages of Secondary Storage


> Secondary storage devices are slower because they are electro-mechanical.
> Information on a secondary device has to be located first, then copied and moved to primary memory (RAM).
> Secondary storage simply provides storage for the computer, while primary memory supports ongoing CPU activity by storing the instructions and data of currently running programs.

Query by Example
Query by Example (QBE) is a database query language for relational databases. It was devised by Moshé M. Zloof at IBM Research during the mid-1970s, in parallel with the development of SQL. It is the first graphical query language, using visual tables in which the user enters commands, example elements, and conditions. Many graphical front-ends for databases use ideas from QBE today. Originally limited to retrieving data, QBE was later extended to allow other operations, such as inserts, deletes, and updates, as well as the creation of temporary tables.

In the context of information retrieval, QBE has a somewhat different meaning: the user can submit a document, or several documents, and ask for "similar" documents to be retrieved from a document database.

QBE is a feature included with various database applications that provides a user-friendly method of running database queries. Typically, without QBE, a user must write input commands using correct SQL (Structured Query Language) syntax, a standard language that nearly all database programs support. However, if the syntax is slightly incorrect, the query may return the wrong results or may not run at all. The Query by Example feature provides a simple interface in which, instead of writing an entire SQL command, the user can just fill in blanks or select items to define the query she wants to perform.

For example, a user may want to select an entry from a table called "Table1" with an ID of 123. Using SQL, the user would need to input the command "SELECT * FROM Table1 WHERE ID = 123". A QBE interface may allow the user to just click on Table1, type "123" in the ID field, and click "Search."

QBE is offered with most database programs, though the interface often differs between applications. For example, Microsoft Access has a QBE interface known as "Query Design View" that is completely graphical.

As a general technique
The term also refers to a general technique influenced by Zloof's work whereby only items with search values are used to "filter" the results. It provides a way for a software user to perform queries without having to know a query language (such as SQL). The software can automatically generate the queries for the user (usually behind the scenes). Here are some examples: Example Form B:
Name: Bob
Address:
City:
State: TX
Zipcode:

Resulting SQL:
SELECT * FROM Contacts WHERE Name='Bob' AND State='TX'

Note how blank items do not generate SQL terms. Since "Address" is blank, there is no clause generated for it. Example Form C:
Name:
Address:
City: Sampleton
State:
Zipcode: 12345

Resulting SQL:
SELECT * FROM Contacts WHERE City='Sampleton' AND Zipcode=12345
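The filtering technique shown in Forms B and C can be sketched in a few lines of code. The function name, table, and form fields below are illustrative, not part of any real QBE product; the sketch simply skips blank fields and builds a parameterized WHERE clause.

```python
# Sketch of QBE-style query generation: only non-blank form fields
# become WHERE clauses. Table and field names are illustrative.
def qbe_to_sql(table, form):
    filled = [(field, value) for field, value in form.items() if value]
    clauses = [f"{field}=?" for field, _ in filled]
    params = [value for _, value in filled]
    sql = f"SELECT * FROM {table}"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

# Form B from above: only Name and State are filled in
sql, params = qbe_to_sql(
    "Contacts",
    {"Name": "Bob", "Address": "", "City": "", "State": "TX", "Zipcode": ""},
)
# sql    -> "SELECT * FROM Contacts WHERE Name=? AND State=?"
# params -> ["Bob", "TX"]
```

Placeholders (`?`) are used instead of splicing values into the SQL string, which is how most real QBE front-ends avoid malformed or unsafe queries.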

Example
QBE queries are written by filling in skeleton tables; the filled-in skeletons for the following exercises are not reproduced here. For example, using QBE one can:

find all customers having an account at the SFU branch;

find the names of all branches not located in Burnaby;

find all customers having an account at both the SFU and the MetroTown branches;

find all customers having an account at either branch, or both;

find all customers having an account at the same branch as Jones.

B) Network Operating System and NFS

A network operating system is the software that runs on a server and enables the server to manage data, users, groups, security, applications, and other networking functions.[2] The network operating system is designed to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN), a private network, or other networks. The most popular network operating systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.

Characteristics
Network Operating Systems are based on a client/server architecture in which a server enables multiple clients to share resources.[2]

Use in Routers
Network operating systems (NOSes) are also embedded in routers and hardware firewalls that operate at the network layer (layer 3) of the OSI model.[1]

Examples:
o JUNOS, used in routers and switches from Juniper Networks
o ZyNOS, used in network devices made by ZyXEL
o ExtremeXOS (also called EXOS), used in network devices made by Extreme Networks

Peer-to-Peer
In a peer-to-peer network operating system, users are allowed to share resources and files located on their computers and to access shared resources from others. This system does not rely on a file server or centralized management source. A peer-to-peer network treats all connected computers as equals; they all share the same ability to use resources available on the network.[3]

Examples:
o AppleShare, used for networking Apple products
o Windows for Workgroups, used for networking peer-to-peer Windows computers

Advantages

o Ease of setup
o Less hardware needed; no server needs to be purchased

Disadvantages

o No central location for storage
o Lack of the security that a client/server setup offers

Network File System

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984,[1] allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The Network File System is an open standard defined in RFCs, allowing anyone to implement the protocol.

Among the many different file systems that FreeBSD supports is the Network File System, also known as NFS. NFS allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.

Benefits

Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network. There is no need for users to have separate home directories on every network machine. Storage devices such as floppy disk and CD-ROM drives can be used by other machines on the network, which may reduce the number of removable media drives throughout the network.

How NFS Works


NFS consists of at least two main parts: a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly a few processes have to be configured and running. The server has to be running the following daemons:
Daemon    Description
nfsd      The NFS daemon, which services requests from NFS clients.
mountd    The NFS mount daemon, which carries out the requests that nfsd(8) passes on to it.
rpcbind   This daemon allows NFS clients to discover which port the NFS server is using.

C) NFS Architecture and Protocols

NFS consists of seven layers of protocols that correspond to the layers of the Open System Interconnection (OSI) model.

Table 11.1 OSI Layers and NFS Protocols

OSI Layer      NFS Protocol
Application    NFS and NIS
Presentation   XDR
Session        RPC
Transport      TCP, UDP
Network        IP
Data Link      Ethernet
Physical       Ethernet

The Physical layer controls how data is physically transmitted across the network. The Data Link layer provides transfer of data combined into frames. Ethernet is the standard implementation of these two layers.

The Network layer is concerned with getting data from one host to another on the network. The Internet Protocol (IP) is an implementation of this layer. IP must get packets to the correct destination; it is not concerned with data reliability or data order, and it can fragment packets that are too large. The Internet Protocol uses unique IP addresses to identify hosts.

The Transport layer, which is responsible for data flow and data reliability, is implemented using UDP or TCP.

Transmission Control Protocol (TCP) provides reliable, ordered delivery of data packets and is stateful. TCP keeps track of the order of information and resends missing data. This protocol is best for long network connections, such as file transfer. User Datagram Protocol (UDP) is a simple, connectionless protocol that does not ensure the order or completeness of the datagrams. It is stateless and is best for short connections such as remote procedure calls.

The Session layer is concerned with the exchange of messages between devices. NFS uses the Remote Procedure Call (RPC) protocol at this layer.

The Presentation layer is concerned with the exchange of data types between heterogeneous systems. NFS uses the External Data Representation (XDR) protocol, which specifies the format to which data must be converted before being sent. Once received, the data is reconverted.
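As a small illustration of the connectionless behavior of UDP described above, the following sketch (the messages are made up) sends one datagram over the loopback interface and gets one back, with no connection setup or teardown:

```python
import socket

# A "server" UDP socket bound to a port the OS picks for us
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

# The client simply fires off a datagram: no connection, unlike TCP
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)

# Each datagram arrives (or not) as one self-contained unit
data, peer = server.recvfrom(1024)
server.sendto(b"pong", peer)
reply, _ = client.recvfrom(1024)
# data -> b"ping", reply -> b"pong"

client.close()
server.close()
```

On the loopback interface these datagrams are effectively reliable, but UDP itself makes no ordering or delivery guarantees, which is why NFS traditionally used it only for short remote procedure calls.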

D) Client-Server Model

A computer network diagram of clients communicating with a server via the Internet. Both the clients and the server are nodes (communication points) on the network. The arrangement of the nodes in a network is called the network topology.

The client-server model is an approach to computer network programming developed at Xerox PARC during the 1970s. It is now prevalent in computer networks: email, the World Wide Web, and network printing all apply the client-server model. The model assigns one of two roles to the computers in a network: client or server. A server is a computer system that selectively shares its resources; a client is a computer or computer program that initiates contact with a server in order to make use of a resource. Data, CPUs, printers, and data storage devices are some examples of resources.

This sharing of computer resources is called time-sharing, because it allows multiple people to use a computer (in this case, the server) at the same time. Because a computer does a limited amount of work at any moment, a time-sharing system must quickly prioritize its tasks to accommodate its clients.

Clients and servers exchange messages in a request-response messaging pattern: the client sends a request, and the server returns a response. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All client-server protocols operate in the application layer.

Whether a computer is a client, a server, or both, it can serve multiple functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software on the same computer.[1] Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.
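The request-response pattern just described can be sketched with a plain TCP socket. The message contents and the single-request server below are invented for illustration:

```python
import socket
import threading

# A minimal server: accept one connection, read the request,
# return a response, and exit.
def serve_once(server):
    conn, _ = server.accept()
    request = conn.recv(1024)                 # the client's request
    conn.sendall(b"response to " + request)   # the server's response
    conn.close()

# Server and client here run on the same machine, which the text
# above notes is perfectly legitimate in the client-server model.
server = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"request 1")                  # client initiates contact
reply = client.recv(1024)
# reply -> b"response to request 1"

client.close()
server.close()
```

Real protocols (HTTP, SMTP, NFS) add much more structure to the request and response, but the initiating-client/responding-server shape is the same.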

Comparison with peer-to-peer architecture


In the client-server model, the server is a centralized system: the more simultaneous clients a server has, the more resources it needs. In a peer-to-peer network, two or more computers (called peers) pool their resources and communicate in a decentralized system. Peers are coequal nodes in a non-hierarchical network; collectively, lesser-powered computers can share the load and provide redundancy. Since most peers are personal computers, their shared resources may not be available consistently. Although an individual node may have variable uptime, a resource remains available as long as one or more other nodes offer it, and as the availability of nodes changes, an application-layer protocol reroutes requests.

E) Distributed File System


With Distributed File System (DFS), system administrators can make it easy for users to access and manage files that are physically distributed across a network. With DFS, you can make files distributed across multiple servers appear to users as if they reside in one place on the network. Users no longer need to know and specify the actual physical location of files in order to access them. For example, if you have marketing material scattered across multiple servers in a domain, you can use DFS to make it appear as though all of the material resides on a single server. This eliminates the need for users to go to multiple locations on the network to find the information they need.

The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. A typical configuration for a DFS is a collection of workstations and mainframes connected by a local area network (LAN). A DFS is implemented as part of the operating system of each of the connected computers.

This paper establishes a viewpoint that emphasizes the dispersed structure and decentralization of both data and control in the design of such systems. It defines the concepts of transparency, fault tolerance, and scalability and discusses them in the context of DFSs. The paper claims that the principle of distributed operation is fundamental for a fault-tolerant and scalable DFS design. It also presents alternatives for the semantics of sharing and methods for providing access to remote files. A survey of contemporary UNIX-based systems, namely UNIX United, Locus, Sprite, Sun's Network File System, and ITC's Andrew, illustrates the concepts and demonstrates various implementations and design alternatives. Based on the assessment of these systems, the paper makes the point that a departure from the approach of extending centralized file systems over a communication network is necessary to accomplish sound distributed file system design.

Reasons for using DFS


You should consider implementing DFS if:

You expect to add file servers or modify file locations.
Users who access targets are distributed across a site or sites.
Most users require access to multiple targets.
Server load balancing could be improved by redistributing targets.
Users require uninterrupted access to targets.
Your organization has Web sites for either internal or external use.

DFS types
You can implement a distributed file system in either of two ways: as a stand-alone root distributed file system, or as a domain distributed file system.

F) Basic RPC Operation


To understand how RPC works, it is important first to fully understand how a conventional (i.e., single machine) procedure call works. Consider a call like
count = read(fd, buf, nbytes);

where fd is an integer, buf is an array of characters, and nbytes is another integer. If the call is made from the main program, the stack will be as shown in Fig. 2-17(a) before the call. To make the call, the caller pushes the parameters onto the stack in order, last one first, as shown in Fig. 2-17(b). (The reason that C compilers push the parameters in reverse order has to do with printf: by doing so, printf can always locate its first parameter, the format string.) After read has finished running, it puts the return value in a register, removes the return address, and transfers control back to the caller. The caller then removes the parameters from the stack, returning the stack to its original state, as shown in Fig. 2-17(c).

Fig. 2-17. (a) The stack before the call to read. (b) The stack while the called procedure is active. (c) The stack after the return to the caller.

Several things are worth noting. For one, in C, parameters can be call-by-value or call-by-reference. A value parameter, such as fd or nbytes, is simply copied to the stack as shown in Fig. 2-17(b). To the called procedure, a value parameter is just an initialized local variable. The called procedure may modify it, but such changes do not affect the original value at the calling side.

A reference parameter in C is a pointer to a variable (i.e., the address of the variable), rather than the value of the variable. In the call to read, the second parameter is a reference parameter because arrays are always passed by reference in C. What is actually pushed onto the stack is the address of the character array. If the called procedure uses this parameter to store something into the character array, it does modify the array in the calling procedure. The difference between call-by-value and call-by-reference is quite important for RPC, as we shall see. One other parameter-passing mechanism also exists, although it is not used in C. It is called call-by-copy/restore. It consists of having the variable copied to the stack by the caller, as in call-by-value, and then copied back after the call, overwriting the caller's original value.

Suppose that a program needs to read some data from a file. The programmer puts a call to read in the code to get the data. In a traditional (single-processor) system, the read routine is extracted from the library by the linker and inserted into the object program. It is a short procedure, usually written in assembly language, that puts the parameters in registers and then issues a READ system call by trapping to the kernel. In essence, the read procedure is a kind of interface between the user code and the operating system.

RPC achieves its transparency in an analogous way. When read is actually a remote procedure (e.g., one that will run on the file server's machine), a different version of read, called a client stub, is put into the library.
Like the original one, it too is called using the calling sequence of Fig. 2-17. Also like the original one, it too traps to the kernel. Only unlike the original one, it does not put the parameters in registers and ask the kernel to give it data. Instead, it packs the parameters into a message and asks the kernel to send the message to the server, as illustrated in Fig. 2-18. Following the call to send, the client stub calls receive, blocking itself until the reply comes back.

Fig. 2-18. Calls and messages in an RPC. Each ellipse represents a single process, with the shaded portion being the stub.

When the message arrives at the server, the kernel passes it up to a server stub that is bound with the actual server. Typically the server stub will have called receive and be blocked waiting for incoming messages. The server stub unpacks the parameters from the message and then calls the server procedure in the usual way (i.e., as in Fig. 2-17). From the server's point of view, it is as though it is being called directly by the client: the parameters and return address are all on the stack where they belong, and nothing seems unusual. The server performs its work and then returns the result to the caller in the usual way. For example, in the case of read, the server will fill the buffer, pointed to by the second parameter, with the data. This buffer will be internal to the server stub.

When the server stub gets control back after the call has completed, it packs the result (the buffer) in a message and calls send to return it to the client. Then it goes back to the top of its own loop to call receive, waiting for the next message.

When the message gets back to the client machine, the kernel sees that it is addressed to the client process (to the stub part of that process, but the kernel does not know that). The message is copied to the waiting buffer and the client process is unblocked. The client stub inspects the message, unpacks the result, copies it to its caller, and returns in the usual way. When the caller gets control following the call to read, all it knows is that its data are available. It has no idea that the work was done remotely instead of by the local kernel.
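The parameter-packing (marshalling) done by the stubs can be sketched as follows. The message layout, procedure number, and in-process "server" below are invented for illustration; they are not the real ONC RPC/XDR encoding:

```python
import struct

READ_PROC = 3  # made-up procedure number for the remote read()

def pack_read_request(fd, nbytes):
    # Client stub: marshal procedure number, fd, and nbytes
    # as 32-bit big-endian integers into a flat message.
    return struct.pack(">iii", READ_PROC, fd, nbytes)

def server_stub(msg, fake_file=b"hello world"):
    # Server stub: unmarshal the request, call the "server
    # procedure" (here just a slice of fake file data), and
    # marshal the reply: a byte count followed by the data.
    _proc, _fd, nbytes = struct.unpack(">iii", msg)
    data = fake_file[:nbytes]
    return struct.pack(">i", len(data)) + data

def unpack_read_reply(reply):
    # Client stub again: unmarshal the reply into (count, buffer).
    (count,) = struct.unpack(">i", reply[:4])
    return count, reply[4 : 4 + count]

# The whole round trip, with the "network" elided:
request = pack_read_request(4, 5)
count, buf = unpack_read_reply(server_stub(request))
# count -> 5, buf -> b"hello"
```

The caller only ever sees count and buf, just as the text describes: the packing, sending, and unpacking are hidden inside the stubs.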

A) What Is a Distributed System?


A collection of independent computers which can cooperate, but which appear to users of the system as a uniprocessor computer. A distributed system has two aspects: hardware and software.

Examples

Users sharing a processor pool, where the system dynamically decides where processes are executed.
Distributed banking.

#Hardware Concepts

Although all distributed systems consist of multiple CPUs, there are different ways of interconnecting them and different ways in which they communicate. Flynn (1972) identified two essential characteristics for classifying multiple-CPU computer systems: the number of instruction streams and the number of data streams.

Uniprocessors are SISD (single instruction stream, single data stream).
Array processors are SIMD (single instruction stream, multiple data streams); their processors cooperate on a single problem.
MISD (multiple instruction streams, single data stream): no known computer fits this model.
Distributed systems are MIMD (multiple instruction streams, multiple data streams): a group of independent computers, each with its own program counter, program, and data.

MIMD can be split into two classifications:

Multiprocessors - CPUs share a common memory.

Multicomputers - CPUs have separate memories.

Each can be further subclassified as:

Bus - all machines connected by a single medium (e.g., LAN, bus, backplane, cable).

Switched - a single wire from machine to machine, with possibly different wiring patterns (e.g., the Internet).

A further classification is:

Tightly coupled - short delay in communication between computers, high data rate (e.g., parallel computers working on related computations).

Loosely coupled - large delay in communications, low data rate (e.g., distributed systems working on unrelated computations).

#Software concepts

Operating systems for multiprocessors and multicomputers can be tightly coupled or loosely coupled.

Loosely coupled

Example: a LAN where users have their own independent machines, but are still able to interact in a limited way when necessary.

Tightly coupled

Example: a multiprocessor dedicated to solving a particular problem in parallel.
