
1.What is an operating system?

An operating system is a program that acts as an intermediary between the user and the computer hardware. Its purpose is to provide an environment in which a user can execute programs in a convenient and efficient manner. It is a resource allocator, responsible for allocating system resources, and a control program, which controls the operation of the computer hardware. 2.What are the various components of a computer system?
1. The hardware
2. The operating system
3. The application programs
4. The users

3.What is purpose of different operating systems?
Machine - Purpose:
Workstation - individual usability and resource utilization
Mainframe - optimize utilization of hardware
PC - support complex games and business applications
Hand-held PCs - easy interface and minimum power consumption
4.What are the different operating systems?
1. Batched operating systems
2. Multi-programmed operating systems
3. Timesharing operating systems
4. Distributed operating systems
5. Real-time operating systems

6.What is a boot-strap program?

Bootstrapping is a technique by which a simple computer program activates a more complicated system of programs. It comes from an old expression "to pull oneself up by one's bootstraps."
7.What is BIOS?

A BIOS (Basic Input/Output System) is software that is stored on a computer and allows the user to configure the input and output of the computer. A BIOS is also known as firmware. 8.Explain the concept of the batched operating systems?

In a batched operating system, users give their jobs to an operator, who sorts the programs according to their requirements and executes them. This is time consuming but keeps the CPU busy all the time. 9.Explain the concept of the multi-programmed operating systems? A multi-programmed operating system can execute a number of programs concurrently. The operating system fetches a group of programs from the job pool in secondary storage, which contains all the programs to be executed, and places them in main memory. This process is called job scheduling. It then chooses a program from the ready queue and gives it to the CPU to execute. When an executing program needs some I/O operation, the operating system fetches another program and hands it to the CPU for execution, thus keeping the CPU busy all the time. 10.Explain the concept of the timesharing operating systems?

It is a logical extension of the multi-programmed OS in which the user can interact with the program. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that each user feels as if the operating system is running only his program. 11.Explain the concept of the multi-processor systems or parallel systems?

They contain a number of processors to increase the speed of execution, reliability, and economy. They are of two types: 1. Symmetric multiprocessing 2. Asymmetric multiprocessing. In symmetric multiprocessing each processor runs an identical copy of the OS, and these copies communicate with each other as and when needed. In asymmetric multiprocessing each processor is assigned a specific task. 12.Explain the concept of the Distributed systems?

Distributed systems work over a network. They can share network resources and communicate with each other. 13.Explain the concept of Real-time operating systems?

A real-time operating system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Here sensors bring data to the computer; the computer must analyze the data and possibly adjust controls to modify the sensor input. They are of two types: 1. Hard real-time OS 2. Soft real-time OS. A hard real-time OS has well-defined, fixed time constraints, whereas a soft real-time OS has less stringent timing constraints. 14.Define MULTICS?

MULTICS (Multiplexed Information and Computing Service) was an operating system developed from 1965 to 1970 at the Massachusetts Institute of Technology as a computing utility. Many of the ideas used in MULTICS were subsequently used in UNIX. 15.What is SCSI?

Small computer systems interface. 16.What is a sector?

Smallest addressable portion of a disk. 17.What is cache-coherency?

In a multiprocessor system there exist several caches, each of which may contain a copy of the same variable A. A change in one cache should then immediately be reflected in all the other caches; this process of maintaining the same value of a datum in all the caches is called cache coherency. 18.What are residence monitors?

Early operating systems were called residence monitors. 19.What is dual-mode operation?

In order to protect the operating system and the system programs from malfunctioning programs, two modes of operation evolved: 1. System mode 2. User mode. User programs cannot directly interact with the system resources; instead they request the operating system, which checks the request and does the required task for the user programs. MS-DOS was written for the Intel 8088 and has no dual mode; the Pentium provides dual-mode operation. 20.What are the operating system components?
1. Process management
2. Main memory management
3. File management
4. I/O system management
5. Secondary storage management
6. Networking
7. Protection system
8. Command interpreter system

21.What are operating system services?

1. Program execution
2. I/O operations
3. File system manipulation
4. Communication
5. Error detection
6. Resource allocation
7. Accounting
8. Protection

22.What are system calls?

System calls provide the interface between a process and the operating system. System calls for modern Microsoft Windows platforms are part of the Win32 API, which is available to all compilers written for Microsoft Windows. 23.What is a layered approach and what is its advantage?

The layered approach is a step towards modularizing the system, in which the operating system is broken up into a number of layers (or levels), each built on top of lower layers. The bottom layer is the hardware and the topmost is the user interface. The main advantage of the layered approach is modularity. The layers are selected such that each uses the functions (operations) and services of only lower layers. This approach simplifies debugging and system verification. 24.What is the micro kernel approach and cite its advantages?

The micro kernel approach is a step towards modularizing the operating system in which all nonessential components are removed from the kernel and implemented as system- and user-level programs, making the kernel smaller. The benefits of the micro kernel approach include the ease of extending the operating system: new services are added in user space and consequently do not require modification of the kernel, and as the kernel is smaller it is easier to upgrade. This approach also provides more security and reliability, since most services run as user processes rather than in the kernel, so a failing service leaves the kernel intact.

25.What are virtual machines and cite their advantages?

It is the concept by which an operating system can create the illusion that a process has its own processor with its own (virtual) memory. The operating system implements the virtual machine concept by using CPU scheduling and virtual memory. 1. The basic advantage is that it provides a robust level of security, as each virtual machine is isolated from all other VMs; hence the system resources are completely protected. 2. Another advantage is that system development can be done without disrupting normal operation: system programmers are given their own virtual machine, and system development is done on the virtual machine instead of on the actual physical machine. 3. Another advantage of the virtual machine is that it solves compatibility problems. EX: Java, supplied by Sun Microsystems, provides a specification for the Java virtual machine. 26.What is a process?

A program in execution is called a process. It may also be called a unit of work. A process needs some system resources, such as CPU time, memory, files, and I/O devices, to accomplish its task. Each process is represented in the operating system by a process control block or task control block (PCB). Processes are of two types: 1. Operating system processes 2. User processes. 27.What are the states of a process?
1. New
2. Running
3. Waiting
4. Ready
5. Terminated
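For illustration, a PCB can be pictured as a C structure like the sketch below; the field names and sizes are illustrative, not any real kernel's layout.

/* A minimal sketch of the bookkeeping a PCB might hold (illustrative fields). */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int         pid;              /* process identifier                 */
    proc_state  state;            /* one of the five states above       */
    void       *program_counter;  /* where to resume execution          */
    long        registers[16];    /* saved CPU register contents        */
    void       *page_table;       /* memory-management information      */
    int         open_files[16];   /* descriptors for files and devices  */
    int         priority;         /* scheduling information             */
    struct pcb *next;             /* link used by the scheduling queues */
} pcb;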

28.What are various scheduling queues?
1. Job queue
2. Ready queue
3. Device queue
29.What is a job queue?

When a process enters the system it is placed in the job queue. 30.What is a ready queue?

The processes that are residing in the main memory and are ready and waiting to execute are kept on a list called the ready queue. 31.What is a device queue?

A list of processes waiting for a particular I/O device is called a device queue. 32.What is a long term scheduler & short term schedulers?

Long term schedulers are the job schedulers that select processes from the job queue and load them into memory for execution. Short term schedulers are the CPU schedulers that select a process from the ready queue and allocate the CPU to it. 33.What is context switching?

Transferring control from one process to another requires saving the state of the old process and loading the saved state of the new process. This task is known as context switching. 34.What are the disadvantages of context switching?

The time taken for switching from one process to another is pure overhead, because the system does no useful work while switching. So one of the solutions is to use threading whenever possible. 35.What are co-operating processes?

Co-operating processes are processes that share system resources such as data with each other. These processes can also communicate with each other via an interprocess communication facility, which is generally used in distributed systems. The best example is a chat program used on the web. 36.What is a thread?

A thread is a single path of execution within a program. A thread, sometimes called a light-weight process, is a basic unit of CPU utilization; it comprises a thread id, a program counter, a register set, and a stack. 37.What are the benefits of multithreaded programming? 1. Responsiveness (no need to wait for a lengthy operation to finish) 2. Resource sharing 3. Economy (context switching between threads is cheap) 4. Utilization of multiprocessor architectures. 38.What are types of threads? 1. User threads 2. Kernel threads. User threads are easy to create and use, but the disadvantage is that if one of them performs a blocking system call the kernel blocks the whole process, blocking the other threads. They are created in user space. Kernel threads are supported directly by the operating system; they are slower to create and manage. Most OSs, such as Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64 UNIX, support kernel threading. 39.Which category do Java threads fall in?

Java threads are created and managed by the Java virtual machine; they do not easily fall under the category of either user or kernel threads. 40.What are multithreading models?

Many OSs provide both kernel threading and user threading; the ways the two are combined are called multithreading models. They are of three types: 1. Many-to-one model (many user-level threads mapped to one kernel thread) 2. One-to-one model 3. Many-to-many model. In the first model only one user thread can access the kernel at a time, so no parallelism is possible. Example: Green threads of Solaris. The second model allows multiple threads to run in parallel on multiprocessor systems, but creating a user thread requires creating a corresponding kernel thread (a disadvantage). Example: Windows NT, Windows 2000, OS/2. The third model allows the user to create as many threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Example: Solaris 2, IRIX, HP-UX, and Tru64 UNIX. 41.What is a P-thread?

Pthreads refers to the POSIX standard (IEEE 1003.1c) defining an API for thread creation and synchronization. This is a specification for thread behavior, not an implementation. Windows operating systems have generally not supported Pthreads. 42.What are java threads?

Java is one of a small number of languages that provide language-level support for the creation and management of threads. However, because Java threads are managed by the Java virtual machine (JVM), not by a user-level library or the kernel, it is difficult to classify them as either user- or kernel-level. 43.What is process synchronization?

A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against race conditions we need to ensure that only one process at a time can be manipulating the same data. The technique we use for this is called process synchronization. 44.What is critical section problem?

A critical section is the code segment of a process in which the process may be changing common variables, updating tables, writing a file, and so on. Only one process is allowed to be in its critical section at any given time (mutual exclusion). The critical section problem is to design a protocol that the processes can use to co-operate. The three basic requirements of a solution are: 1. Mutual exclusion 2. Progress 3. Bounded waiting. The bakery algorithm is one of the solutions to the CS problem. 45.What is a semaphore?

It is a synchronization tool used to solve complex critical section problems. A semaphore is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: Wait and Signal. 46.What is bounded-buffer problem?

Here we assume that a pool consists of n buffers, each capable of holding one item. A mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers, respectively: empty is initialized to n, and full is initialized to 0.
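A minimal sketch of this bounded-buffer (producer-consumer) scheme in C, assuming POSIX semaphores and Pthreads; the buffer size and item count are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                       /* number of buffer slots (illustrative) */

static int buffer[N];
static int in = 0, out = 0;       /* next free slot / next full slot */

static sem_t empty, full;         /* counting semaphores: empty and full slots */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);                  /* wait for an empty slot */
        pthread_mutex_lock(&mutex);        /* mutual exclusion on the pool */
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                   /* signal: one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);                   /* wait for a full slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                  /* signal: one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);   /* all slots start empty */
    sem_init(&full, 0, 0);    /* no slot starts full   */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Compile with the -pthread flag; the mutex guards the buffer indices while empty and full do the counting described above.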

47.What is readers-writers problem?

Here we divide the processes into two types: 1. Readers (who want to retrieve the data only) 2. Writers (who want to retrieve as well as manipulate the data). We can give a number of readers permission to read the same data at the same time, but a writer must be given exclusive access. There are two variations of this problem: 1. No reader will be kept waiting unless a writer has already obtained permission to use the shared object; in other words, no reader should wait for other readers to finish simply because a writer is waiting. 2. Once a writer is ready, that writer performs its write as soon as possible; in other words, if a writer is waiting to access the object, no new readers may start reading. 48.What is dining philosophers problem?

Consider 5 philosophers who spend their lives thinking and eating. The philosophers share a common circular table surrounded by 5 chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she doesn't interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her. A philosopher may pick up only one chopstick at a time; obviously she can't pick up a chopstick that is in someone else's hand. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing them. When she is finished eating, she puts down both of her chopsticks and starts thinking again. 49.What is a deadlock?

Suppose a process requests resources; if the resources are not available at that time, the process enters a wait state. Waiting processes may never again change state, because the resources they have requested are held by other waiting processes. This situation is called deadlock.
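The classic way to reproduce such a deadlock is two threads that each hold one lock and wait for the other's. A small Pthreads sketch (the lock and thread names are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg) {
    pthread_mutex_lock(&a);          /* holds a ...            */
    sleep(1);
    pthread_mutex_lock(&b);          /* ... and waits for b    */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

static void *t2(void *arg) {
    pthread_mutex_lock(&b);          /* holds b ...                        */
    sleep(1);
    pthread_mutex_lock(&a);          /* ... and waits for a: circular wait */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    printf("threads started; joining...\n");
    pthread_join(x, NULL);           /* never returns: both threads are stuck */
    pthread_join(y, NULL);
    return 0;
}

Making both threads acquire the locks in the same order removes the circular wait and, with it, the deadlock.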

50.What are necessary conditions for deadlock?
1. Mutual exclusion (at least one resource is non-sharable)
2. Hold and wait (a process holds at least one resource and waits for another resource)
3. No preemption (the resources cannot be preempted)
4. Circular wait (there is a set {P1, P2, ..., Pn} of waiting processes such that Pi waits for a resource held by Pi+1, for i = 1, ..., n-1, and Pn waits for a resource held by P1)
51.What is resource allocation graph?

This is the graphical description of deadlocks. The graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set of all processes in the system, and R = {R1, R2, ..., Rm}, the set of all resource types. A directed edge Pi -> Rj is called a request edge; a directed edge Rj -> Pi is called an assignment edge. Pictorially we represent a process Pi as a circle and each resource type Rj as a square; since resource type Rj may have more than one instance, we represent each such instance as a dot within the square. When a request is fulfilled the request edge is transformed into an assignment edge, and when a process releases a resource the assignment edge is deleted. If a cycle involves a set of resource types, each of which has only a single instance, then a deadlock has occurred, and each process involved in the cycle is deadlocked. 52.What are deadlock prevention techniques? 1. Mutual exclusion: some resources, such as read-only files, need not be mutually exclusive and should be sharable, but other resources, such as printers, must be mutually exclusive. 2. Hold and wait: to avoid this condition we have to ensure that a process requesting a resource does not hold any other resources. 3. No preemption: if a process holding some resources requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all the resources it currently holds are preempted (implicitly released). 4. Circular wait: the way to ensure that this condition never holds is to impose a total ordering of all the resource types and to require that each process requests resources in an increasing order of enumeration. 53.What is a safe state and a safe sequence?

A system is in a safe state only if there exists a safe sequence. A sequence of processes is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
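The safety check used by the Banker's algorithm (see the next answer) simply searches for such a safe sequence. A sketch in C; the process and resource counts and the example matrices are illustrative:

#include <stdbool.h>
#include <stdio.h>

#define P 5   /* number of processes      (illustrative) */
#define R 3   /* number of resource types (illustrative) */

/* Returns true if the current state is safe, i.e. a safe sequence exists. */
bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = { false };
    for (int j = 0; j < R; j++) work[j] = available[j];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                      /* pretend Pi runs to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];     /* ... and releases what it holds */
                finished[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress) return false;            /* no remaining Pi can finish: unsafe */
    }
    return true;
}

int main(void) {
    int available[R] = {3, 3, 2};                                   /* illustrative state */
    int alloc[P][R]  = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[P][R]   = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    printf("state is %s\n", is_safe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}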

54.What are the deadlock avoidance algorithms?

A deadlock avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes. There are two algorithms: 1. Resource-allocation graph algorithm 2. Banker's algorithm, which consists of a. the safety algorithm and b. the resource-request algorithm. 55. What are the basic functions of an operating system?

The operating system controls and coordinates the use of the hardware among the various application programs for various uses. The operating system acts as resource allocator and manager: since there are many, possibly conflicting, requests for resources, the operating system must decide which requests are allocated resources so that the computer system operates efficiently and fairly. The operating system is also a control program which controls the user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

56.Explain briefly about processor, assembler, compiler, loader, linker and the functions executed by them. Processor: A processor is the part of a computer system that executes instructions. It is also called a CPU. Assembler: An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language. Compiler: A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or code that a computer's processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements. Loader: In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, called random access memory), and gives that program control of the computer. Linker: A linker links libraries with the object code to turn the object code into executable machine code. 57. What is a Real-Time System?

A real-time process is a process that must respond to events within a certain time period. A real-time operating system is an operating system that can run real-time processes successfully. 58. What is the difference between Hard and Soft real-time systems?

A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. In a soft real-time system, a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded. 59. What is virtual memory?

Virtual memory is a technique whereby the system appears to have more memory than it actually does. This is done by time-sharing the physical memory and keeping parts of memory on disk when they are not actively being used. 60. What is cache memory?

Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from main memory. 61.Differentiate between Compiler and Interpreter?

An interpreter reads one instruction at a time and carries out the actions implied by that instruction; it does not perform any translation. A compiler, in contrast, translates the entire program. 62.What are different tasks of Lexical Analysis?

The purpose of the lexical analyzer is to partition the input text, delivering a sequence of comments and basic symbols. Comments are character sequences to be ignored, while basic symbols are character sequences that correspond to terminal symbols of the grammar defining the phrase structure of the input. 63. Why paging is used?

Paging is a solution to the external fragmentation problem: it permits the logical address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever the latter is available. 64. What is Context Switch?

Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers which must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). 65. Distributed Systems?

Distributed systems distribute the computation among several physical processors. They are loosely coupled systems: each processor has its own local memory, and processors communicate with one another through various communication lines, such as high-speed buses or telephone lines. Advantages of distributed systems: ->Resource sharing ->Computation speed-up and load sharing ->Reliability ->Communication 66.Difference between Primary storage and secondary storage? Main memory: the only large storage medium that the CPU can access directly. Secondary storage: an extension of main memory that provides large nonvolatile storage capacity. 67. What is CPU Scheduler? ->Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. ->CPU scheduling decisions may take place when a process: 1. Switches from running to waiting state. 2. Switches from running to ready state. 3. Switches from waiting to ready. 4. Terminates. ->Scheduling under 1 and 4 is nonpreemptive. ->All other scheduling is preemptive. 68. What do you mean by deadlock?

Deadlock is a situation where a group of processes are all blocked and none of them can become unblocked until one of the others becomes unblocked. The simplest deadlock is two processes each of which is waiting for a message from the other. 69. What is Dispatcher?

->The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running. 70. What is Throughput, Turnaround time, waiting time and Response time? Throughput: the number of processes that complete their execution per time unit. Turnaround time: the amount of time to execute a particular process. Waiting time: the amount of time a process has been waiting in the ready queue. Response time: the amount of time from when a request was submitted until the first response is produced, not the complete output (for a time-sharing environment). 71. Explain the difference between microkernel and macro kernel?

Micro-Kernel: A micro-kernel is a minimal operating system that performs only the essential functions of an operating system. All other operating system functions are performed by system processes.

Monolithic: A monolithic operating system is one where all operating system code is in a single executable image and all operating system code runs in system mode. 72.What is multi tasking, multi programming, multi threading? Multi programming: Multiprogramming is the technique of running several programs at a time using timesharing. It allows a computer to do several things at the same time and creates logical parallelism. The concept of multiprogramming is that the operating system keeps several jobs in memory simultaneously; it selects a job from the job pool and starts executing it, and when that job needs to wait for any I/O operation the CPU is switched to another job. So the main idea here is that the CPU is never idle. Multi tasking: Multitasking is the logical extension of multiprogramming. The concept of multitasking is quite similar to multiprogramming, but the difference is that the switching between jobs occurs so frequently that the users can interact with each program while it is running. This concept is also known as timesharing. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared system. Multi threading: An application typically is implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have several clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time that a client might have to wait for its request to be serviced could be enormous. So it is efficient to have one process that contains multiple threads serving the same purpose. This approach multithreads the web-server process: rather than creating another process when a request is made, the server creates a separate thread to service the request. So multithreading is used to get advantages such as responsiveness, resource sharing, economy and utilization of multiprocessor architectures. 73. Give a non-computer example of preemptive and non-preemptive scheduling?

Consider any system where people use some kind of resources and compete for them. A non-computer example of preemptive scheduling is traffic on a single-lane road: if there is an emergency, or an ambulance is on the road, the other vehicles give way to the vehicle that is in need. An example of non-preemptive scheduling is people standing in a queue for tickets. 74. What is starvation and aging? Starvation: Starvation is a resource-management problem where a process does not get the resources it needs for a long time because the resources are being allocated to other processes. Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding an aging factor to the priority of each request. The aging factor must increase the request's priority as time passes and must ensure that a request will eventually become the highest-priority request (after it has waited long enough). 75.Different types of Real-Time Scheduling?

Hard real-time systems are required to complete a critical task within a guaranteed amount of time. Soft real-time computing requires that critical processes receive priority over less critical ones. 76. What are the Methods for Handling Deadlocks?

->Ensure that the system will never enter a deadlock state. ->Allow the system to enter a deadlock state and then recover. ->Ignore the problem and pretend that deadlocks never occur in the system; this approach is used by most operating systems, including UNIX. 77. What is a Safe State and its use in deadlock avoidance?

When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state. ->The system is in a safe state if there exists a safe sequence of all processes. ->A sequence is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i. If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished. When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on. ->Deadlock avoidance ensures that a system will never enter an unsafe state. 78. Recovery from Deadlock?

Process Termination: ->Abort all deadlocked processes. ->Abort one process at a time until the deadlock cycle is eliminated. ->In which order should we choose to abort? Consider the priority of the process, how long the process has computed and how much longer it needs to complete, the resources the process has used, the resources the process needs to complete, how many processes will need to be terminated, and whether the process is interactive or batch. Resource Preemption: ->Selecting a victim: minimize cost. ->Rollback: return to some safe state and restart the process from that state. ->Starvation: the same process may always be picked as victim, so include the number of rollbacks in the cost factor. 79.Difference between Logical and Physical Address Space? ->The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. Logical address: generated by the CPU; also referred to as a virtual address. Physical address: the address seen by the memory unit. ->Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme. 80. Binding of Instructions and Data to Memory?

Address binding of instructions and data to memory addresses can happen at three different stages. Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes. Load time: relocatable code must be generated if the memory location is not known at compile time. Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; this needs hardware support for address maps (e.g., base and limit registers). 81. What is Memory-Management Unit (MMU)?

The MMU is a hardware device that maps virtual addresses to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. ->The user program deals with logical addresses; it never sees the real physical addresses. 82. What are Dynamic Loading, Dynamic Linking and Overlays? Dynamic Loading: ->A routine is not loaded until it is called. ->Better memory-space utilization; an unused routine is never loaded. ->Useful when large amounts of code are needed to handle infrequently occurring cases. ->No special support from the operating system is required; it is implemented through program design. Dynamic Linking: ->Linking is postponed until execution time. ->A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. ->The stub replaces itself with the address of the routine and executes the routine. ->The operating system is needed to check whether the routine is in the process's memory address space. ->Dynamic linking is particularly useful for libraries. Overlays: ->Keep in memory only those instructions and data that are needed at any given time. ->Needed when a process is larger than the amount of memory allocated to it. ->Implemented by the user; no special support is needed from the operating system, but the programming design of the overlay structure is complex. 83. What is fragmentation? Different types of fragmentation?

Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request. External Fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced: total memory space exists to satisfy a request, but it is not contiguous. Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used. External fragmentation can be reduced by compaction: ->Shuffle memory contents to place all free memory together in one large block. ->Compaction is possible only if relocation is dynamic and is done at execution time. 84. Define Demand Paging, Page fault interrupt, and Thrashing? Demand Paging: Demand paging is the paging policy whereby a page is not read into memory until it is requested, that is, until there is a page fault on the page. Page fault interrupt: A page fault interrupt occurs when a memory reference is made to a page that is not in memory. The present bit in the page table entry will be found to be off by the virtual memory hardware and it will signal an interrupt.

Thrashing: Thrashing is the problem of many page faults occurring in a short time. 85. Explain Segmentation with paging?

Segments can be of different lengths, so it is harder to find a place for a segment in memory than for a page. With segmented virtual memory we get the benefits of virtual memory, but we still have to do dynamic storage allocation of physical memory. To avoid this, it is possible to combine segmentation and paging into a two-level virtual memory system, in which each segment descriptor points to a page table for that segment. This gives some of the advantages of paging (easy placement) with some of the advantages of segments (logical division of the program). 86. Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs?

A page fault occurs when an access to a page that has not been brought into main memory takes place. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of I/O, the process table and page table are updated and the instruction is restarted 87. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page fault continuously. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. Thrashing can be eliminated by reducing the level of multiprogramming.

Operating System: Interview, Viva and Basic Questions Part 6 Q1.Explain the meaning of Kernel. The kernel is the essential center of a computer operating system, the core that provides basic services for all other parts of the operating system. As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources. The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use resources like the CPU, memory and the I/O devices in the computer. The facilities provided by the kernel are:

Memory management - The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it.
Device management - To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers.
System calls - To actually perform useful work, a process must be able to access the services provided by the kernel (a small sketch of a system call follows after the kernel types below).

Types of Kernel:

Monolithic kernels - Every part which is to be accessed by most programs and which cannot be put in a library is in kernel space:
Device drivers
Scheduler
Memory handling
File systems
Network stacks
Microkernels - In microkernels, only the parts which really require a privileged mode are in kernel space:
- Inter-Process Communication
- Basic scheduling
- Basic memory handling
- Basic I/O primitives

Because of this, critical parts like the following run in user space:

The complete scheduler
Memory handling
File systems
Network stacks
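As noted above, user programs reach these kernel services through system calls. A minimal POSIX sketch in C, where getpid() and write() both trap into the kernel:

#include <sys/types.h>
#include <unistd.h>     /* getpid(), write() */
#include <stdio.h>      /* snprintf()        */

int main(void) {
    /* Both calls below cross into kernel mode: getpid() asks the kernel for
       this process's id, write() asks it to perform I/O on file descriptor 1. */
    pid_t pid = getpid();
    char msg[64];
    int n = snprintf(msg, sizeof msg, "my pid is %d\n", (int)pid);
    write(STDOUT_FILENO, msg, n);
    return 0;
}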

Q2.Explain the meaning of Kernel. The kernel is the core module of an operating system. It is the part that loads first and remains in the main memory of the computer system. It provides all the essential operations and services that are needed by applications. The kernel takes responsibility for managing the memory, tasks, disks and processes. Q3.What is a command interpreter?

The command interpreter is the part of an operating system that interprets commands and carries them out; it understands and executes commands that are entered interactively by a human being or from a program. In some operating systems, the command interpreter is called the shell. In MS-DOS-based versions of Windows, the BIOS looks for the files needed to load the command interpreter, Command.com; the required files are Command.com, IO.sys, and Msdos.sys, and they reside in the root of the C drive.
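A command interpreter is essentially a read-parse-execute loop. A minimal POSIX sketch in C (the prompt string and buffer sizes are illustrative; built-ins, pipes and redirection are omitted):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("mysh> ");
        if (fgets(line, sizeof line, stdin) == NULL) break;   /* EOF: exit */

        char *argv[32];                       /* split the line into words */
        int argc = 0;
        for (char *tok = strtok(line, " \t\n");
             tok != NULL && argc < 31;
             tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0) continue;

        pid_t pid = fork();
        if (pid == 0) {                       /* child: run the command */
            execvp(argv[0], argv);
            perror(argv[0]);                  /* reached only if exec fails */
            _exit(127);
        }
        waitpid(pid, NULL, 0);                /* parent: wait for it to finish */
    }
    return 0;
}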
Q4.What is a daemon?

In Unix and some other operating systems, a daemon is a computer program that runs in the background and is not under the direct control of a user. Daemons are usually initiated as background processes, and their names conventionally end with the letter "d", e.g. syslogd, sshd. In Unix, the parent process of a daemon is usually the init process (PID = 1). Processes usually become daemons by forking a child process and then having the parent process immediately exit, thus causing init to adopt the child process.
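A sketch of this double-fork technique in C for a POSIX system (error handling is trimmed for brevity):

#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

/* Classic daemonization: the grandchild is adopted by init. */
static void daemonize(void) {
    if (fork() > 0) exit(0);        /* parent exits; child continues          */
    setsid();                       /* child becomes a new session leader     */
    if (fork() > 0) exit(0);        /* first child exits; the grandchild can  */
                                    /* never reacquire a controlling terminal */
    chdir("/");
    umask(0);
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);         /* detach standard I/O from the terminal  */
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > 2) close(fd);
}

int main(void) {
    daemonize();
    sleep(60);                      /* the daemon's real work would go here */
    return 0;
}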
Q5.Explain the basic functions of process management.

The basic functions of the OS with respect to process management are:


Allocating resources to processes, enabling processes to share and exchange information, protecting the resources of each process from other processes and enabling synchronisation among processes.
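A small C sketch of the most basic of these services, creating a process and synchronising with it, using fork() and waitpid():

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* ask the OS to create a new process */
    if (pid == 0) {                   /* child's copy of the program        */
        printf("child  %d (parent %d)\n", (int)getpid(), (int)getppid());
        return 0;
    }
    int status;
    waitpid(pid, &status, 0);         /* parent synchronises with the child */
    printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
    return 0;
}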

Q6.What is a named pipe?

A named pipe is a connection used to transfer data between separate processes. It is a pipe that an application opens by name in order to write data into or read data from it. Named pipes appear in the file system as special files (traditionally in a directory such as /dev). Using a named pipe facilitates interprocess communication.
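A minimal writer for a named pipe on a POSIX system; the FIFO path is illustrative, and a second process (for example cat /tmp/demo_fifo) opens the other end by name and reads:

#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";       /* illustrative path       */
    mkfifo(path, 0666);                        /* create the special file */

    int fd = open(path, O_WRONLY);             /* blocks until a reader opens it */
    const char *msg = "hello through a named pipe\n";
    write(fd, msg, strlen(msg));
    close(fd);

    unlink(path);                              /* remove the FIFO when done */
    return 0;
}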
Q7.What is pre-emptive and non-preemptive scheduling?

Tasks are usually assigned priorities. At times it is necessary to run a certain higher-priority task before another task, even though that task is running. The running task is therefore interrupted for some time and resumed later, when the higher-priority task has finished its execution. This is called preemptive scheduling, e.g. round robin. In non-preemptive scheduling, a running task is executed till completion; it cannot be interrupted, e.g. first in first out.
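A toy C simulation of preemptive round-robin time slicing; the burst times and the quantum are illustrative. Under non-preemptive first-in-first-out scheduling the same tasks would simply run to completion one after another:

#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 8};            /* remaining CPU burst per task (illustrative) */
    const int ntasks = 3, quantum = 2;
    int clock = 0, left = ntasks;

    while (left > 0) {                      /* keep cycling through unfinished tasks */
        for (int i = 0; i < ntasks; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                 /* task i is preempted after its slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("task %d finishes at time %d\n", i, clock);
                left--;
            }
        }
    }
    return 0;
}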
Q8.What is a semaphore? A semaphore is a variable. There are 2 types of semaphores:

Binary semaphores and counting semaphores. Binary semaphores have two methods associated with them (up/down, or lock/unlock) and can take only two values (0/1). They are used to acquire locks. When a resource is available, the process in charge sets the semaphore to 1, otherwise to 0.

A counting semaphore may have a value greater than one; it is typically used to allocate resources from a pool of identical resources.
Q9.What is the difference between a binary semaphore and a mutex? The differences between a binary semaphore and a mutex are:

A mutex is used exclusively for mutual exclusion, whereas a binary semaphore can be used for both mutual exclusion and synchronization. Only the task that took a mutex can give it back, and a mutex cannot be given from an ISR. Recursive taking of mutual-exclusion semaphores is possible: a task that holds a mutex can take it more than once before finally releasing it. A mutex can also offer a DELETE_SAFE option to the task that takes it, meaning that the task cannot be deleted while it is holding the mutex.

Q10.Explain the meaning of mutex.

A mutex and a binary semaphore are essentially the same: both can take the values 0 or 1. However, there is a significant difference between them that makes mutexes more efficient than binary semaphores: a mutex can be unlocked only by the thread that locked it. Thus a mutex has an owner concept.
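A short Pthreads sketch of the usual mutex discipline, where the thread that locks is the one that unlocks:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* only this thread may unlock what it locked */
        counter++;                      /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* 200000, since the mutex protects counter */
    return 0;
}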
Q11.Explain the meaning of virtual memory.

Virtual memory is an approach that makes use of secondary storage devices as an extension of the primary storage of the computer. It is the process of increasing the apparent size of a computer's RAM by using a section of hard disk storage as an extension of RAM: logically assigned memory that may or may not exist physically. Through the use of paging and the swap area, more memory can be referenced and allocated than actually exists on the system, thus giving the appearance of a larger main memory than actually exists.

Q12.What is RTOS? A real-time operating system (RTOS) guarantees a certain capability within a specified time constraint. For example, a real-time operating system can ensure that a certain object is available for a robot on an assembly line when it is needed; if the object cannot be made available within the designated time, the operating system terminates with a failure. This is a hard real-time operating system. If instead the assembly line keeps functioning but the production output is lower because the object fails to appear at the designated time, leaving the robot temporarily unproductive, that is a soft real-time operating system. Some general-purpose operating systems, such as Microsoft's Windows 2000 or IBM's OS/390, can satisfy some real-time qualities up to a certain extent; an operating system that does not qualify on every characteristic may still be considered a solution to a particular real-time application problem. Real-time operating systems are required in small embedded systems which are bundled as part of micro-devices. Some kernels can be considered to meet the requirements of a real-time operating system, but since device drivers are also needed for a complete solution, a real-time operating system is usually larger than just a kernel. Q13.What is the difference between hard real-time and soft realtime OS? A hard real-time system guarantees that critical tasks complete on time; all delays in the system are bounded, from the retrieval of stored data to the time the operating system takes to complete any request. In a soft real-time system, a critical task obtains priority over other tasks and maintains that priority until the completion of the task. As in a hard real-time system, kernel delays need to be bounded. Q14.What type of scheduling is there in RTOS? The tasks of a real-time operating system have 3 states: running, ready, and blocked. Only one task per CPU is running at a given point of time. In simpler systems the ready list is usually short, two or three tasks at the most.

The design of the scheduler is the real key. Usually, to minimize the worst-case length of time spent in the scheduler's critical section, the data structure of the ready list in the scheduler is designed so that this critical section is short; during it, preemption is inhibited and in certain cases all interrupts are disabled. The choice of data structure depends on the maximum number of tasks that can be on the ready list. Q15.What is interrupt latency? The time between a device generating an interrupt and the servicing of the device that generated the interrupt is known as interrupt latency. In many operating systems devices are serviced soon after the device's interrupt handler is executed. Interrupt latency may be affected by interrupt controllers, interrupt masking, and the operating system's methods of handling interrupts. Q16.What is priority inheritance? Priority inversion problems are eliminated by using a method called priority inheritance: the priority of a process holding a lock on a resource is raised to the maximum priority of any process that waits for that resource. When one or more high-priority jobs are blocked by a job, the original priority assignment is ignored and the job executes its critical section at the highest priority level of the jobs it blocks; the job returns to its original priority level soon after executing the critical section. This is the basic idea of the priority inheritance protocol. Q17.What is spin lock? A spin lock is a lock where a thread simply waits in a loop (spins), repeatedly checking until the lock becomes available. It is a kind of busy waiting, as the thread remains active without performing a useful task. Spin locks have to be released explicitly, although some locks are released automatically when the thread blocks. 18.What is an operating system? What are the functions of an operating system?

An operating system is an interface between hardware and software. The OS is responsible for managing and coordinating the activities of a computer system. Functions of an operating system: every operating system has two main functions. 1. The operating system makes sure that data is saved in the required place on the storage media, that programs are loaded into memory properly, and that the file system of the OS keeps the files in order. 2. The OS enables the hardware and software to interact and perform functionality like printing, scanning, mouse operations and web-cam operations; the OS allows application software to interact with the hardware. Q19.What is paging? Why paging is used? The OS performs operations for storing and retrieving data from secondary storage devices for use in main memory; paging is one such memory-management scheme. Data is retrieved from the storage media by the OS in same-sized blocks called pages. Paging allows the physical address space of a process to be non-contiguous; without it, the whole program would have to fit into storage contiguously. Paging deals with the external fragmentation problem by allowing the logical address space of a process to be noncontiguous, which makes it possible to allocate physical memory to the process wherever it is available. Q20.Difference between a process and a program - A program is a set of instructions that are to perform a designated task, whereas a process is an operation which takes the given instructions and performs the manipulations as per the code; this is called execution of the instructions. A process is entirely dependent on a program. - A process is a module that executes concurrently with other separately loadable modules, whereas a program performs tasks directly relating to a user operation, like word processing or executing presentation software. Q21.What is the meaning of physical memory and virtual memory?

Physical memory is the only memory that is directly accessible to the CPU. The CPU continuously reads the instructions stored in physical memory and executes them; the data being operated on is also stored in physical memory. Virtual memory is a classification of memory created by using the hard disk to simulate additional RAM, enlarging the addressable space available to the user. Virtual addresses are mapped onto real addresses. Q22.What is the difference between socket and pipe? Sockets: A socket is an endpoint for communication between processes, typically across a network; for example, sockets are used in Secure Socket Layer connections. Communication through sockets is bi-directional. Pipes: A pipe is a channel for passing data between related processes on the same machine. Communication through a pipe is uni-directional. Q23.What are the difference between THREAD, PROCESS and TASK? A program in execution is known as a process. A program can have any number of processes, and every process has its own address space. Threads use the address space of the process they belong to. The difference between a thread and a process is that when the CPU switches from one process to another, the current information needs to be saved in a process descriptor and the information of the new process loaded, whereas switching from one thread to another is simple. A task is simply a set of instructions loaded into memory. Threads can themselves split into two or more simultaneously running tasks.

Q24.Difference between NTFS and FAT32 The differences are as follows:
NTFS:
- Allows local access for Windows 2000, Windows 2003, and Windows NT with Service Pack 4 and later versions.
- Maximum partition size is 2 TB and more.
- Maximum file size is up to 16 TB.
- File and folder encryption is possible.
FAT32:
- Allows local access for Windows 95, Windows 98, Windows ME, Windows 2000 and Windows XP on a local partition.
- Maximum partition size is 2 TB.
- Maximum file size is up to 4 GB.
- File and folder encryption is not possible.
Q25.Differentiate between RAM and ROM
RAM:
- Volatile memory.
- Electricity needs to flow continuously.
- Program information is stored in RAM.
- RAM is read/write memory.
- Cost is high.

ROM:
- Permanent memory.
- Instructions are stored in ROM permanently.
- The BIOS has information to boot the system.
- ROM is read-only memory.
- Access speed is lower than that of RAM.

Q26.What is DRAM? In which form does it store data? DRAM stands for Dynamic Random Access Memory, one of the read/write memory types. DRAM is cheap and does the given task. DRAM stores data in cells made up of a capacitor and a transistor. The capacitors need to be recharged every couple of milliseconds, and this recharging process tends to slow DRAM down compared with speedier RAM types.

Q27.What is cache memory? Explain its functions Cache memory is RAM in which the most recently processed data is stored; the CPU can access this data more quickly than data in main memory. When the microprocessor starts processing data, it first checks the cache memory. The size of each cache block ranges from 1 to 16 bytes. Every location has an index that corresponds to the location of the data to access; this index is known as the address. The locations also have tags, each containing the index of the datum in memory that is to be cached. Q28.What is the function of SMON? The SMON background process performs all system monitoring functions on the Oracle database. Each time Oracle is restarted, SMON performs a warm start and makes sure that the transactions that were left incomplete at the last shutdown are recovered. SMON also performs periodic cleanup of temporary segments that are no longer needed. Q29.Explain different types of segment. There are four types of segments used in Oracle databases: data segments, index segments, rollback segments and temporary segments.

Data Segments: There is a single data segment to hold all the data of every non clustered table in an oracle database. This data segment is created when you create an object with the CREATE TABLE/SNAPSHOT/SNAPSHOT LOG command. Also, a data segment is created for a cluster when a CREATE CLUSTER command is issued. The storage parameters control the way that its data segment's extents are allocated. These affect the efficiency of data retrieval and storage for the data segment associated with the object.

Index Segments: Every index in an Oracle database has a single index segment to hold all of its data. Oracle creates the index segment for the index when you issue the CREATE INDEX command. Setting the storage parameters directly affects the efficiency of data retrieval and storage. Rollback Segments: Rollbacks are required when transactions that affect the database need to be undone, and they are also needed at the time of system failures. The rolled-back data is saved in a rollback segment, while the data needed to redo changes is held in a redo segment. A rollback segment is a portion of the database that records the actions of transactions in case a transaction should be rolled back. Each database contains one or more rollback segments. Rollback segments are used to provide read consistency, to roll back transactions, and to recover the database. Types of rollbacks: - statement level rollback - rollback to a savepoint - rollback of a transaction due to user request - rollback of a transaction due to abnormal process termination - rollback of all outstanding transactions when an instance terminates abnormally - rollback of incomplete transactions during recovery. Temporary Segments: SELECT statements may need temporary storage. When queries are fired, Oracle needs an area to do sorting and other operations, which is why temporary segments are useful. The commands that may use temporary storage when used with SELECT are: GROUP BY, UNION, DISTINCT, etc. Q30. Explain SGA memory structures. SGA (System Global Area) is a dynamic memory area of an Oracle server. In the SGA, allocation is done in granules. The size of the SGA depends on the SGA_MAX_SIZE parameter. The memory structures contained by the SGA are:

Shared Pool: this memory structure is divided into two sub-structures, the Library Cache and the Data Dictionary Cache, for storing recently used PL/SQL statements and recent data definitions. The maximum size of the Shared Pool depends on the SHARED_POOL_SIZE parameter. Database Buffer Cache: this memory structure improves performance while fetching or updating recently used data, as it stores the recently used data blocks. The size of each block is decided by DB_BLOCK_SIZE. Redo Log Buffer: this memory structure is used to store all the changes made to the database and is primarily used for data recovery purposes. Its size is decided by LOG_BUFFER. Java Pool: this memory structure is used when Java is installed on the Oracle server. The size that can be used is stored in the parameter named JAVA_POOL_SIZE. Large Pool: this memory structure is used to reduce the burden on the Shared Pool, as the session memory for the Shared Server, as temporary storage for I/O, and for backup and restore operations (RMAN). The parameter that stores its maximum size is LARGE_POOL_SIZE. Q31.What is SQL Loader? Explain the files used by SQL Loader to load file. SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database. SQL*Loader supports various load formats, selective loading, and multi-table loads. When a control file is fed to SQL*Loader, it writes messages to the log file, bad rows to the bad file and discarded rows to the discard file. Control file: The SQL*Loader control file contains information that describes how the data will be loaded. It contains the table name, column datatypes, field delimiters, etc. controlfile.sql should be used to generate an accurate control file for a given table.

Q31.What is SQL Loader? Explain the files used by SQL Loader to load file.

SQL*Loader is a bulk-loader utility used for moving data from external files into an Oracle database. It supports various load formats, selective loading and multi-table loads. When a control file is fed to SQL*Loader, it writes messages to the log file, rejected rows to the bad file and discarded rows to the discard file.

Control file - The SQL*Loader control file describes how the data is to be loaded. It contains the table name, column datatypes, field delimiters, etc. controlfile.sql can be used to generate an accurate control file for a given table.

Log file - The log file contains information about the SQL*Loader execution. It should be reviewed after each SQL*Loader job completes.

Q32.Explain the methods provided by SQL Loader.

Conventional Path Load and Direct Path Load are the two methods of loading data.

Conventional Path Load is the default method and uses SQL INSERT statements of the form:
INSERT INTO TABLE T PARTITION (P) VALUES ...
The row is loaded if it maps to the specified partition; otherwise an error is written to the log.

Direct Path Load is faster than the conventional method. The data to be loaded is parsed according to the description in the loader control file, each input field is converted to its corresponding datatype, and the result is built up as column/value pairs. SQL*Loader then uses these pairs to build index keys and to format Oracle data blocks directly, which are written straight into the database. This reduces the processing load compared with INSERT-based loading.
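As a hedged sketch of the files and methods described above (the table, columns, file names and credentials are all hypothetical placeholders), a control file named employees.ctl might look like this:

LOAD DATA
INFILE 'employees.dat'
INTO TABLE employees
FIELDS TERMINATED BY ','
(emp_id, emp_name, dept_id)

Invoking the loader: the first command uses the default conventional path and names the log, bad and discard files explicitly, while the second requests a direct path load:

sqlldr userid=scott/tiger control=employees.ctl log=employees.log bad=employees.bad discard=employees.dsc
sqlldr userid=scott/tiger control=employees.ctl direct=true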

Q33.What is the physical and logical structure of oracle?

Logical database structure - The logical structures include tablespaces, schema objects, data blocks, extents and segments.

Tablespaces - The database is logically divided into one or more tablespaces. Each tablespace has one or more datafiles that physically store its data.

Schema objects - Schema objects are the structures that represent the database's data. They include tables, views, sequences, stored procedures, indexes, synonyms, clusters and database links.

Data blocks - A data block corresponds to a specific number of bytes of physical database space on disk.

Extents - An extent is a set of contiguous data blocks allocated to store a specific type of information.

Segments - A segment is a set of extents allocated for a certain logical structure.

Physical database structure - The physical structure comprises datafiles, redo log files and control files.

Datafiles - Datafiles contain the database's data. The data of logical structures such as tables and indexes is physically stored in the datafiles. One or more datafiles form a logical unit of database storage called a tablespace.

Redo log files - These files record all changes made to the data and protect the database against failures.

Control files - Control files contain entries such as the database name, the names and locations of the datafiles and redo log files, and the timestamp of database creation.
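These logical and physical structures can be inspected through the data dictionary and dynamic performance views; a minimal sketch, assuming DBA privileges (the 'SCOTT' schema is a placeholder):

-- logical structures: tablespaces and the segments/extents inside them
SELECT tablespace_name FROM DBA_TABLESPACES;
SELECT segment_name, segment_type, extents, bytes
FROM   DBA_SEGMENTS
WHERE  owner = 'SCOTT';

-- physical structures: datafiles, redo log files and control files
SELECT name   FROM V$DATAFILE;
SELECT member FROM V$LOGFILE;
SELECT name   FROM V$CONTROLFILE;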

Q34.Explain the categories of oracle processes i.e. user, data writing processes, logging processes and monitoring processes.

User process - A user process is created when application software or an Oracle tool is invoked by a user.

Data writing process - The database writer process (DBWn) writes buffer contents to the datafiles; specifically, it writes dirty blocks from the buffer cache to the datafiles.

Logging process - The log writer (LGWR) writes the redo log buffer from the System Global Area to the online redo log file. Only those redo entries that have been copied into the buffer since the last time it wrote are written.

Monitoring process - This can be either the system monitor process (SMON) or the process monitor process (PMON). The system monitor is mainly used for crash recovery and cleaning up temporary segments. The process monitor cleans up all resources acquired by a failed process.
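A hedged way to see these processes on a running instance (requires access to the dynamic performance views):

SELECT name, description FROM V$BGPROCESS WHERE paddr <> '00';   -- running background processes (DBWn, LGWR, SMON, PMON, ...)
SELECT username, program FROM V$SESSION WHERE type = 'USER';     -- sessions serving user processes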

Q35.What is a compiler?

A compiler is a program that takes source code as input and converts it into object code. During compilation, the source code goes through lexical analysis, parsing and intermediate code generation, which is then optimized to produce the final object code.

Q36.Explain loader and linker.

A loader is the part of an operating system that loads programs from secondary storage into main memory so that they can be executed. Large applications are usually written as small modules, which are then compiled into object code. A linker is a program that combines these object modules to form a single executable.

Q37.Explain the types of memory.

The basic types of memory that a computer has are:

- RAM (Random Access Memory)
- ROM (Read Only Memory)
- Cache memory

Other than these, a computer also has secondary storage devices, such as a hard disk, which also contribute towards its memory.

Q38.Define compaction.

Compaction is the process of collecting the scattered free portions of memory into one large contiguous block by relocating the allocated portions; it is used to reduce external fragmentation. (The term is also used more generally for reducing the amount of data needed to store or transmit a given piece of information.)

Q39.What is dirty bit?

A dirty bit is a flag that indicates whether a block of memory has been modified and therefore needs to be written back. It is typically associated with a cache line or a virtual memory page that has been changed by the processor but has not yet been written back to the backing storage.

Q40.What is page fault and when does it occur?

A page is a fixed-length block of memory used as the unit of transfer between physical memory and external storage. A page fault occurs when a program accesses a page that is mapped into its address space but has not been loaded into physical memory.

Q41.Define Thread.

Threads are lightweight units of execution that form parts of a larger process. A thread is contained inside a process, and different threads of the same process share some of its resources.

Q42.What are the advantages of using Thread?

Some advantages of using threads are:
- Switching between threads takes less time than switching between processes.
- Threads can execute in parallel on a multiprocessor.
- Threads can share an address space.

Q43.Compare Thread and process.

Threads:
- Share the address space of their process.
- Have direct access to the data segment of their process.
- Can communicate directly with other threads of the same process.
- Involve very little overhead.
- If the main thread is affected, the other threads of the process can also be affected.

Processes:
- Have their own address space.
- Have their own copy of the data segment of the parent process.
- Must use inter-process communication (IPC) to communicate with sibling processes.
- Involve considerable overhead.
- A change in a parent process does not affect its child processes.

Q44.What are the different types of memory?

The memory types are:
SIMM (Single In-line Memory Module) - Holds a single row of memory chips soldered onto a printed circuit board.
DIMM (Dual In-line Memory Module) - Holds two rows of chips soldered onto a printed circuit board and can therefore hold twice as much memory as a SIMM.
DRAM (Dynamic Random Access Memory) - Holds data only for a short period and must be refreshed periodically.
SRAM (Static Random Access Memory) - Holds data without requiring periodic refresh and is faster than DRAM.

Flash Memory: A non-volatile, rewritable, solid-state memory that combines the roles of RAM and a hard disk: data is retained even when power is lost. It is ideal for devices such as printers, cellular phones, digital cameras and pagers.

Shadow RAM: Allows selected parts of the BIOS code held in slower ROM to be copied into faster RAM, from which it is then executed.
