
Q 10. What is a page fault?
Ans:

A page fault is a trap to software, raised by hardware, when a program accesses a page that is mapped in the virtual address space but not loaded in physical memory. In the typical case, the operating system handles the page fault by making the required page accessible at some location in physical memory, or kills the program in the case of an illegal access. The hardware that detects a page fault is the memory management unit (MMU) in the processor; the exception-handling software that services the fault is generally part of the operating system. Contrary to what the name 'page fault' might suggest, page faults are not errors: they are common and necessary to increase the amount of memory available to programs in any operating system that uses virtual memory, including Microsoft Windows, Unix-like systems (Mac OS X, Linux, *BSD, Solaris, AIX, and HP-UX), and z/OS. Microsoft uses the term 'hard fault' in more recent versions of the Resource Monitor (e.g., Windows Vista) to mean 'page fault'.

Types

Minor page fault
If the page is loaded in memory at the time the fault is generated, but is not marked in the memory management unit as being loaded, it is called a minor or soft page fault. The page fault handler in the operating system merely needs to make the MMU entry for that page point to the page in memory and indicate that the page is loaded; it does not need to read the page into memory. This can happen when memory is shared by different programs and the page has already been brought in for another program, or when the page has been removed from a process's working set but not yet written to disk or erased, as in operating systems that use secondary page caching.
For example, HP OpenVMS may remove a page that does not need to be written to disk (for instance, if it has remained unchanged since it was last read from disk) and place it on a free page list if the working set is deemed too large. The page contents are not overwritten until the page is assigned elsewhere, so it is still available if the original process references it again before it is reallocated. Because these faults do not involve disk latency, they are faster and less expensive than major page faults.

Major page fault
If the page is not loaded in memory at the time the fault is generated, it is called a major or hard page fault. The page fault handler in the operating system must find a free page frame in memory, or choose an occupied frame to reuse (writing out its data if it has been modified since it was last written, and marking it as no longer loaded); read the data for the faulting page into that frame; and finally make the MMU entry for the page point to the frame and indicate that the page is loaded. Major faults are more expensive than minor faults and add disk latency to the interrupted program's execution. This is the mechanism an operating system uses to increase the amount of program memory available on demand: it delays loading parts of the program from disk until the program attempts to use them and a page fault is generated.
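On Unix-like systems the minor and major fault counts described above can be observed per process via getrusage(2). A minimal sketch in Python, assuming a Linux-style system where first touches of fresh anonymous memory are counted as minor faults:

```python
import mmap
import resource

def fault_counts():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_minflt, ru.ru_majflt  # minor (soft) and major (hard) faults

minor_before, major_before = fault_counts()

# Map 8 MiB of fresh anonymous memory: the pages exist in the virtual
# address space but are not yet backed by physical frames.
size = 8 * 1024 * 1024
buf = mmap.mmap(-1, size)

# Touch one byte in every 4 KiB page; each first touch is a minor fault
# (no disk involved), handled by allocating a frame and updating the MMU.
for offset in range(0, size, 4096):
    buf[offset] = 1

minor_after, major_after = fault_counts()
minor_delta = minor_after - minor_before
print(minor_delta)  # typically about one fault per touched page on Linux
buf.close()
```

Because nothing here needs to be read from disk, the major fault count normally does not move; only the minor count grows.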

Invalid page fault
If a page fault occurs for a reference to an address that is not part of the virtual address space, so that there cannot be a page in memory corresponding to it, it is called an invalid page fault. The page fault handler in the operating system then needs to terminate the code that made the reference, or deliver an indication to that code that the reference was invalid. A null pointer is usually represented as a pointer to address 0 in the address space; many operating systems set up the memory management unit to indicate that the page containing that address is not in memory, and do not include that page in the virtual address space, so that attempts to read or write the memory referenced by a null pointer get an invalid page fault.
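The null-pointer case can be demonstrated from user space: dereferencing address 0 raises an invalid page fault, which the OS delivers to the program as a fatal signal (SIGSEGV on Unix). A sketch that provokes the fault in a child process so the parent survives (Python, Unix assumed):

```python
import subprocess
import sys

# Run the faulting access in a child process; ctypes.string_at(0) reads
# memory at address 0, whose page the OS deliberately leaves unmapped.
proc = subprocess.run(
    [sys.executable, "-c", "import ctypes; ctypes.string_at(0)"],
    capture_output=True,
)

# The child does not exit normally: on Linux, subprocess reports death by
# signal as a negative return code (SIGSEGV is signal 11).
print(proc.returncode)
```

This is exactly the "terminate the code that made the reference" outcome described above, seen from the parent's side.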

Q. 2) What is race around condition and how is it eliminated?
Ans: A race condition is a bug in your application that occurs when the result depends on which of two or more threads reaches a shared block of code or shared data first. When scheduling is nondeterministic, for example among threads or processes of equal priority, the application's output can change each time it is executed. A related hazard in such systems is a circular wait for resources: process A needs a resource held by process B while B waits on a resource tied to A, so the resources are never shared and the system stays blocked; that situation is a deadlock, and it is sometimes confused with a race condition.

As an example, assume we have a shared integer object called x and two threads, 1 and 2. Thread 1 attempts to increment x by one, and in the middle of the increment its time slice ends. Thread 2's time slice starts, and it attempts to increment the same x; it increments x successfully, and then its time slice ends. Thread 1 starts a new time slice and completes its increment, not knowing that the value of x has already changed, so one of the two increments is lost. This is a race condition, and the output of such code is of course incorrect.

This race condition can be solved by making the increment atomic, using an object like "Interlock" with its "Increment" and "Decrement" methods, or by protecting the shared data with a lock. More generally, race conditions can be avoided by considering each line of code you write that touches shared state and asking yourself: what might happen if a thread finished before executing this line, or during executing this line, and another thread overtook it?
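The lost-update scenario above can be sketched with Python's threading module. The Counter class below is a hypothetical example, not a library class; it shows both the racy read-modify-write and the fix, where a lock makes the increment atomic (the analogue of an interlocked increment):

```python
import threading

class Counter:
    """Shared counter with an unsafe and a lock-protected increment."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def unsafe_increment(self):
        # Read-modify-write in separate steps: a thread can be preempted
        # between the read and the write, losing another thread's update.
        v = self.value
        self.value = v + 1

    def safe_increment(self):
        # Holding the lock makes the whole read-modify-write atomic.
        with self._lock:
            self.value += 1

def run(method_name, n_threads=4, n_iters=50_000):
    c = Counter()
    def worker():
        m = getattr(c, method_name)
        for _ in range(n_iters):
            m()
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c.value

expected = 4 * 50_000
safe = run("safe_increment")      # always equals expected
unsafe = run("unsafe_increment")  # may be less: increments can be lost
print(safe, unsafe, expected)
```

The locked version always produces the expected total; the unsafe version may or may not lose updates on a given run, which is precisely why race conditions are hard to reproduce.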

Q. 3) Give different phases of the instruction cycle.
Ans: An instruction cycle (sometimes called the fetch-and-execute cycle, fetch-decode-execute cycle, or FDX) is the basic operation cycle of a computer. It is the process by which a computer retrieves a program instruction from its memory, determines what actions the instruction requires, and carries out those actions. This cycle is repeated continuously by the central processing unit (CPU), from boot-up until the computer is shut down.

Circuits used
The circuits used in the CPU during the cycle are:
Program Counter (PC) - an incrementing counter that keeps track of the memory address of the instruction to be executed next
Memory Address Register (MAR) - holds the address in memory of the next instruction to be executed
Memory Data Register (MDR) - a two-way register that holds data fetched from memory (and ready for the CPU to process) or data waiting to be stored in memory
Current Instruction Register (CIR) - a temporary holding ground for the instruction that has just been fetched from memory
Control Unit (CU) - decodes the program instruction in the CIR, selecting machine resources such as a data source register and a particular arithmetic operation, and coordinates activation of those resources
Arithmetic Logic Unit (ALU) - performs mathematical and logical operations

The instruction cycle is the time period during which one instruction is fetched from memory and executed when a computer is given an instruction in machine language. There are typically four stages of an instruction cycle that the CPU carries out:
1) Fetch the instruction from memory.
2) Decode the instruction.
3) Read the effective address from memory if the instruction has an indirect address.
4) Execute the instruction.

Instruction cycle
Each computer's CPU can have different cycles based on different instruction sets, but the cycle will be similar to the following:

Decode the instruction
The instruction decoder interprets the instruction. If the instruction has an indirect address, the effective address is read from main memory, and any required data is fetched from main memory to be processed and then placed into data registers. During this phase the instruction inside the IR (instruction register) is decoded.

Execute the instruction
The CU passes the decoded information as a sequence of control signals to the relevant function units of the CPU to perform the actions required by the instruction, such as reading values from registers, passing them to the ALU to perform mathematical or logic functions on them, and writing the result back to a register. If the ALU is involved, it sends a condition signal back to the CU.

Store results
The result generated by the operation is stored in main memory or sent to an output device. Based on the condition of any feedback from the ALU, the Program Counter may be updated to a different address from which the next instruction will be fetched. The cycle is then repeated.

Fetch cycle
Steps 1 and 2 of the instruction cycle are called the fetch cycle. These steps are the same for every instruction. The fetch cycle brings in the instruction word, which contains an opcode and an operand.

Execute cycle
Steps 3 and 4 of the instruction cycle are part of the execute cycle. These steps change with each instruction. The first step of the execute cycle is processor-memory transfer: data is transferred between the CPU and memory or an I/O module. Next, data processing applies mathematical as well as logical operations to the data. Control alteration is the next step, a change in the sequence of operations, for example a jump operation. The last step may be a combined operation from all the other steps.

Initiating the cycle
The cycle starts immediately when power is applied to the system, using an initial PC value that is predefined for the system architecture (in Intel IA-32 CPUs, for instance, the predefined PC value is 0xfffffff0). Typically this address points to instructions in a read-only memory (ROM) which begin the process of loading the operating system. (That loading process is called booting.)[1]

The Fetch-Execute cycle in Transfer Notation
Expressed in register transfer notation, the fetch cycle is:
MAR <- [PC]
MDR <- [[MAR]]
CIR <- [MDR]
PC <- [PC] + 1   (increment the PC for the next cycle)
The registers used above, besides the ones described earlier, are the Memory Address Register (MAR) and the Memory Data Register (MDR), which are used (at least conceptually) in the accessing of memory.
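The fetch-decode-execute loop described above can be sketched as a toy simulator. The four-instruction ISA below (LOAD, ADD, STORE, HALT, with a single accumulator) is invented purely for illustration, not modelled on any real machine:

```python
# Toy ISA: each instruction is (opcode, operand); instructions and data
# share one memory list. ACC plays the role of a data register.
def run(memory):
    pc = 0              # Program Counter: address of the next instruction
    acc = 0             # Accumulator
    while True:
        cir = memory[pc]        # Fetch: CIR <- memory[PC]
        pc += 1                 # Increment the PC for the next cycle
        opcode, operand = cir   # Decode
        if opcode == "LOAD":    # Execute the instruction...
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":  # ...and store results
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

# Program: mem[6] = mem[4] + mem[5]
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
run(mem)
print(mem[6])  # -> 5
```

Each loop iteration performs exactly the phases named above: fetch, PC increment, decode, execute, and (for STORE) storing the result back to memory.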

Q 4. Distinguish between hardwired control and micro-programmed control.
Ans: Hardwired vs. micro-programmed computers: It should be mentioned that most computers today are micro-programmed, basically for reasons of flexibility. Once the control unit of a hard-wired computer is designed and built, it is virtually impossible to alter its architecture and instruction set. In the case of a micro-programmed computer, however, we can change the computer's instruction set simply by altering the microprogram stored in its control memory. Taking our basic computer as an example, we notice that its four-bit opcode permits up to 16 instructions; therefore, we could add seven more instructions to the instruction set simply by expanding its microprogram. To do this with the hard-wired version of our computer would require a complete redesign of the controller circuit hardware.

Another advantage of micro-programmed control is that the task of designing the computer in the first place is simplified: specifying the architecture and instruction set becomes a matter of software (micro-programming) rather than hardware design. Nevertheless, for certain applications hard-wired computers are still used. If speed is a consideration, hard-wiring may be required, since it is faster to have the hardware issue the required control signals than to have a "program" do it.

Hardwired control is a control mechanism that generates control signals using an appropriate finite state machine (FSM). Micro-programmed control is a control mechanism that generates control signals from a memory called the control storage (CS), which contains the control signals. Although micro-programmed control seems advantageous for CISC machines, since CISC requires the systematic development of sophisticated control signals, there is no intrinsic difference between these two control mechanisms. The pair of microinstruction register and control storage address register can be regarded as the "state register" of a hardwired control, and the control storage can be regarded as a kind of combinational logic circuit: we can assign any 0/1 values to each output corresponding to each address, which can be regarded as the input of a combinational logic circuit. This is a truth table.

Hardwired systems are made to perform in a set manner, implemented with logic, switches, etc. between any input and output in the system. Once the manner in which the control is executed is fixed, you cannot change the behavior of the system. Micro-programmed systems are centred around a computer of some sort, often a microcontroller in small systems, that controls the system using a program: input is sent to the computer, and the program determines what should be done with the input to come up with an output. The processor thus sits between the input and the output, rather than there being a direct link between them. The versatility of the micro-programmed system far exceeds that of the hardwired system, and the system can also be considerably smaller: a complex microcontroller can be quite a bit smaller than the equivalent collection of logic and switches.
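The "control storage as a truth table" point can be sketched concretely: below, one set of control signals is produced both by a hardwired combinational function of the opcode bits and by a micro-programmed lookup in a control store, and the two agree on every opcode. The opcodes and signal names are invented for this sketch:

```python
# Hypothetical control signals (mem_read, mem_write, alu_add, reg_write)
# for a 2-bit opcode; opcodes and signals are illustrative only.
LOAD, STORE, ADD, NOP = 0b00, 0b01, 0b10, 0b11

def hardwired_control(op):
    # Hardwired control: each signal is combinational logic on opcode bits.
    b1, b0 = (op >> 1) & 1, op & 1
    mem_read  = (not b1) and (not b0)   # only LOAD reads memory
    mem_write = (not b1) and b0         # only STORE writes memory
    alu_add   = b1 and (not b0)         # only ADD uses the ALU
    reg_write = not b0                  # LOAD and ADD write a register
    return tuple(int(bool(s)) for s in (mem_read, mem_write, alu_add, reg_write))

# Micro-programmed control: the same signals stored in a control storage
# addressed by opcode -- effectively the truth table of the logic above.
CONTROL_STORE = {
    LOAD:  (1, 0, 0, 1),
    STORE: (0, 1, 0, 0),
    ADD:   (0, 0, 1, 1),
    NOP:   (0, 0, 0, 0),
}

def microprogrammed_control(op):
    return CONTROL_STORE[op]

same = all(hardwired_control(op) == microprogrammed_control(op)
           for op in (LOAD, STORE, ADD, NOP))
print(same)  # -> True
```

Changing the machine's behaviour means editing the CONTROL_STORE table in the micro-programmed version, but redesigning the Boolean expressions (i.e., the circuit) in the hardwired version, which is exactly the flexibility argument made above.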

Q 8. Is it possible to have a hardwired command associated with a control memory?
Ans: To execute instructions, a computer's processor must generate the control signals used to perform the processor's actions in the proper sequence. This sequence of actions can either be executed by another processor's software (for example, in software emulation or simulation of a processor) or in hardware. Hardware methods fall into two categories: the processor's control signals are generated either by hardwired control, in which the instruction bits directly generate the signals, or by micro-programmed control, in which a dedicated microcontroller executes a microprogram to generate the signals. Before microprocessors, hardwired control was usually implemented using discrete components, flip-chips, or even rotating discs or drums.

Hardwired control can generally be designed by two methods.
Method 1: The classical method of sequential circuit design. It attempts to minimize the amount of hardware, in particular by using only log2(p) flip-flops to realize a p-state circuit.
Method 2: An approach that uses one flip-flop per state, known as the one-hot method. While expensive in terms of flip-flops, this method simplifies control unit design and debugging.

In practice, processor control units are often so complex that no one design method by itself can yield a satisfactory circuit at an acceptable cost. The most acceptable design may consist of several linked, but independently designed, sequential circuits. Micro-programming made it possible to re-wire, as it were, a computer by simply downloading a new microprogram to it. This required dedicated hardware or an external processor; for example, some of DEC's PDP-10 processors used a PDP-11 as a front end which uploaded a microprogram to the main processor at boot time.

Traditionally, a sewing machine's stitch patterns and a washing machine's wash programs were implemented as hardwired, usually mechanical, controls. In modern machines, these are instead implemented as software running on a computer which controls the machine hardware. This makes it possible, for example, to download additional stitch patterns for a small fee, or to upgrade a machine without having to buy a complete new machine. It also raises intellectual property rights issues.
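The flip-flop counts in the two design methods above can be checked with a quick sketch: binary (classical) encoding needs ceil(log2(p)) flip-flops for a p-state circuit, while one-hot needs p, one per state:

```python
import math

def binary_flip_flops(p):
    # Classical encoding: states numbered 0..p-1 in binary,
    # so ceil(log2(p)) bits (at least one) are enough.
    return max(1, math.ceil(math.log2(p)))

def one_hot_flip_flops(p):
    # One-hot encoding: exactly one flip-flop set per state.
    return p

p = 12  # a hypothetical 12-state control unit
print(binary_flip_flops(p), one_hot_flip_flops(p))  # -> 4 12
```

For a 12-state controller the classical method needs only 4 flip-flops against 12 for one-hot, which is the hardware-cost trade-off the two methods represent.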
