
AMIE(I)

STUDY CIRCLE(REGD.)

A Focused Approach

Additional Course Material


Computer Basics
MEMORY

Memory Hierarchy
Main memory is usually located on chips inside the system unit. There are two kinds of main memory: random-access memory (RAM) and read-only memory (ROM). The instructions that the computer executes and the data it processes are kept in RAM while the computer is working. RAM is not a permanent storage place for information; it is active only while the computer is on, and its contents are lost when the computer is switched off. ROM is memory from which information can only be read. When the computer is turned off, the information in ROM is not lost. Writing information to ROM is done by the vendor. The size of main memory is measured in megabytes.

External memory is the disk. Unlike information stored in RAM, information stored on a disk is not lost when the computer is turned off. Information stored on disk is moved into and out of RAM as needed. The amount of space on a disk is measured in gigabytes. There are two kinds of disks: hard disks and floppy disks. Main memory and the floppy disk have less storage capacity than the hard disk. The hard disk can read and write information to and from main memory much faster than a floppy disk can, and the access speed of main memory is in turn much faster than that of a hard disk.

A computer system contains a wide variety of storage devices, which can be organized in a hierarchy (see figure) according to either speed or cost. The higher levels are expensive but very fast. As we move down the hierarchy, the cost per bit decreases, the access time increases, and the amount of storage at each level increases. This is reasonable: if a given storage system were both faster and cheaper than another, with other properties being equal, there would be no reason to use the slower, more expensive memory.
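To make the speed/cost trade-off concrete, the sketch below lists typical levels of the hierarchy in order. The access times and capacities are rough, assumed order-of-magnitude values chosen only for illustration; they are not measurements taken from the text.

```python
# Illustrative memory hierarchy: levels ordered from fastest/most expensive
# to slowest/cheapest. Access times and capacities are assumed,
# order-of-magnitude values for illustration only.
memory_hierarchy = [
    {"level": "CPU registers", "access_time_ns": 1,          "typical_capacity": "a few hundred bytes"},
    {"level": "CPU cache",     "access_time_ns": 10,         "typical_capacity": "kilobytes to megabytes"},
    {"level": "Main memory",   "access_time_ns": 100,        "typical_capacity": "megabytes to gigabytes"},
    {"level": "Hard disk",     "access_time_ns": 10_000_000, "typical_capacity": "gigabytes"},
]

for level in memory_hierarchy:
    print(f'{level["level"]:<14} ~{level["access_time_ns"]:>10,} ns  {level["typical_capacity"]}')
```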




The design of a computer memory system attempts to balance these factors: using only as much expensive memory as absolutely necessary while trying to provide as much cheap memory as possible. Demand paging is an example of such a system design: a limited amount of expensive, fast main memory is used to provide fast access to a much larger virtual memory which is actually stored on cheaper, slower secondary memory. Main memory can be viewed as a fast cache for secondary memory.

Caching is an important principle of computer systems, both in hardware and software. Information is normally stored in some storage system (such as main memory). As it is used, it is copied into a faster storage system (the cache) on a temporary basis. When a particular piece of information is needed, we first check whether it is in the cache. If it is, we use the information directly from the cache. If not, we use the information from the main storage system, putting a copy in the cache in the hope that it will be needed again.

Extending this view, internal programmable registers, such as index registers and accumulators, are a high-speed cache for main memory. The programmer (or compiler) implements the register allocation and replacement algorithms to decide what information to keep in registers and what to keep in main memory. The movement of information between levels of a storage hierarchy may be either explicit or implicit.
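As a minimal sketch of the caching principle described above, the fragment below checks a small in-memory cache before falling back to a slower backing store, and copies whatever it fetches into the cache for next time. The names (backing_store, cache, read) are illustrative, not taken from the text.

```python
# Minimal cache-aside sketch: check the fast cache first, fall back to the
# slower backing store on a miss, and keep a copy for future requests.
backing_store = {"page42": "data loaded from slow storage"}  # stands in for the slower level
cache = {}                                                   # stands in for the faster level

def read(key):
    if key in cache:                 # cache hit: serve directly from the fast level
        return cache[key]
    value = backing_store[key]       # cache miss: go to the slower level
    cache[key] = value               # keep a copy, hoping it will be needed again
    return value

print(read("page42"))  # miss: fetched from the backing store
print(read("page42"))  # hit: served from the cache
```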

Memory Management
A program's machine language code must be in the computer's main memory in order to execute. Ensuring that at least the portion of code to be executed is in memory when a processor is assigned to a process is the job of the memory manager of the operating system. This task is complicated by two other aspects of modern computing systems. The first is multiprogramming. From its definition, we know that multiprogramming means that several (at least two) processes can be active within the system during any particular time interval. These multiple active processes result from various jobs entering and leaving the system in an unpredictable manner. Pieces, or blocks, of memory are allocated to these processes when they enter the system, and are subsequently freed when the processes leave the system. Therefore, at any given moment, the computer's memory, viewed as a whole, consists of a collection of blocks, some allocated to processes active at that moment, and others free and available to a new process which may, at any time, enter the system.
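The sketch below illustrates this idea of allocating and freeing blocks with a toy first-fit allocator over a free list. First-fit is just one common policy, and the data structures and names here are assumptions made for illustration, not the text's own mechanism.

```python
# Toy first-fit block allocator: memory is tracked as a list of (start, size)
# free blocks; allocation carves a piece out of the first block that fits,
# and freeing returns the block to the free list.
free_blocks = [(0, 1024)]          # one 1 KB region of free memory to start with
allocated = {}                     # process name -> (start, size)

def allocate(process, size):
    for i, (start, block_size) in enumerate(free_blocks):
        if block_size >= size:                         # first block large enough
            allocated[process] = (start, size)
            remaining = block_size - size
            if remaining > 0:
                free_blocks[i] = (start + size, remaining)
            else:
                free_blocks.pop(i)
            return start
    raise MemoryError("no free block large enough")

def free(process):
    start, size = allocated.pop(process)
    free_blocks.append((start, size))                  # a real allocator would also coalesce

a = allocate("job1", 200)
b = allocate("job2", 300)
free("job1")                        # job1 leaves; its block becomes available again
c = allocate("job3", 150)           # job3 can reuse freed space
```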


In general, then, programs designed to execute in this multiprogramming environment must be compiled so that they can execute from any block of storage available at the time of the program's execution. Such programs are called relocatable programs, and the idea of placing them into any currently available block of storage is called relocation. The second aspect of modern computing systems affecting memory management is the need to allow the programmer to use a range of program addresses which may be larger, perhaps significantly larger, than the range of memory locations actually available. That is, we want to provide the programmer with a virtual memory, with characteristics (especially size) different from actual memory, and to provide it in a way that is invisible to the programmer. This is accomplished by extending the actual memory with secondary memory such as disk. Providing an efficiently operating virtual memory is another task for the memory management facility.
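One simple way to picture relocation is base-register addressing, where every program-relative address is added to the base of whatever block the program was loaded into. The text does not prescribe a particular mechanism, so this scheme and the numbers below are assumptions for illustration only.

```python
# Base-and-limit relocation sketch: a relocatable program uses addresses
# relative to 0; at run time they are mapped into whichever block of
# physical memory the program happens to occupy.
def relocate(logical_address, base, limit):
    if logical_address >= limit:                      # address outside the program's block
        raise MemoryError("address beyond allocated block")
    return base + logical_address                     # physical address inside the block

# The same program loaded at two different places in memory:
print(relocate(100, base=4000, limit=2048))   # -> 4100
print(relocate(100, base=9000, limit=2048))   # -> 9100
```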

Virtual Memory
Virtual memory, or virtual memory addressing, is a memory management technique used by multitasking operating systems in which noncontiguous memory is presented to software (that is, to a process) as contiguous memory. This contiguous memory is referred to as the virtual address space. Virtual memory addressing is typically used in paged memory systems. This in turn is often combined with memory swapping (also known as anonymous memory paging), whereby memory pages stored in primary storage are written to secondary storage (often to a swap file or swap partition), thus freeing faster primary storage for other processes to use.

The term virtual memory is often confused with memory swapping, probably due in part to the Microsoft Windows family of operating systems referring to the enabling/disabling of memory swapping as "virtual memory". In fact, Windows uses paged memory and virtual memory addressing even if this so-called "virtual memory" is disabled. In technical terms, virtual memory allows software to run in a memory address space whose size and addressing are not necessarily tied to the computer's physical memory. To properly implement virtual memory, the CPU (or a device attached to it) must provide a way for the operating system to map virtual memory to physical memory, and to detect when an address is referenced that does not currently map to main memory, so that the needed data can be swapped in.


While it would certainly be possible to provide virtual memory without the CPU's assistance, doing so would essentially require emulating a CPU that did provide the needed features.

Background

Most computers possess four kinds of memory: registers in the CPU; CPU caches (generally some kind of static RAM), both inside and adjacent to the CPU; main memory (generally dynamic RAM), which the CPU can read and write directly and reasonably quickly; and disk storage, which is much slower but much larger. CPU register use is generally handled by the compiler, and this is not a huge burden, since data does not generally stay in registers very long. The decision of when to use the cache and when to use main memory is generally dealt with by hardware, so both are usually regarded together by the programmer as simply physical memory.

Many applications require access to more information (code as well as data) than can be stored in physical memory. This is especially true when the operating system allows multiple processes/applications to run seemingly in parallel. The obvious response to the problem of physical memory being smaller than what all running programs require is for each application to keep some of its information on disk and move it back and forth to physical memory as needed, but there are a number of ways to do this.

One option is for the application software itself to be responsible both for deciding which information is to be kept where and for moving it back and forth. The programmer would do this by determining which sections of the program (and of its data) were mutually exclusive, and then arranging for the appropriate sections to be loaded into and unloaded from physical memory as needed. The disadvantage of this approach is that each application's programmer must spend time and effort designing, implementing, and debugging this mechanism instead of focusing on the application itself, which hampers programmers' efficiency. Also, if any programmer could truly choose which of their items of data to store in physical memory at any one time, they could easily conflict with the decisions made by another programmer who also wanted to use all the available physical memory at that point.

Another option is to store some form of handles to data rather than direct pointers, and let the OS deal with swapping the data associated with those handles between the swap file and physical memory as needed. This works, but it has a couple of problems: it complicates application code, it requires applications to play nicely (they generally need the power to lock data into physical memory to actually work on it), and it stops the language's standard library from doing its own suballocations inside large blocks obtained from the OS to improve performance. The best known example of this kind of arrangement is probably the 16-bit versions of Windows.

The modern solution is to use virtual memory, in which a combination of special hardware and operating system software makes use of both kinds of memory to make it look as if the computer has a much larger main memory than it actually does, and to lay that space out differently at will. It does this in a way that is invisible to the rest of the software running on the computer. It usually provides the ability to simulate a main memory of almost any size. (In practice there is a limit imposed by the size of the addresses. For a 32-bit system, the


total size of the virtual memory can be $2^{32}$ bytes, or approximately 4 gigabytes. For newer 64-bit chips and operating systems that use 64-bit or 48-bit addresses, this limit can be much higher.) Many operating systems do not allow the entire address space to be used by applications, in order to simplify kernel access to application memory, but this is not a hard design requirement.

Virtual memory makes the job of the application programmer much simpler. No matter how much memory the application needs, it can act as if it has access to a main memory of that size and can place its data wherever in that virtual space it likes. The programmer can also completely ignore the need to manage the moving of data back and forth between the different kinds of memory. That said, if the programmer cares about performance when working with large volumes of data, they need to minimise the number of blocks being accessed close together in time, to avoid unnecessary swapping.

Paging and virtual memory

Virtual memory is usually (but not necessarily) implemented using paging. In paging, the low-order bits of the binary representation of the virtual address are preserved and used directly as the low-order bits of the actual physical address; the high-order bits are treated as a key to one or more address translation tables, which provide the high-order bits of the actual physical address. For this reason, a range of consecutive addresses in the virtual address space whose size is a power of two will be translated into a corresponding range of consecutive physical addresses. The memory referenced by such a range is called a page. The page size is typically in the range of 512 to 8192 bytes (with 4 KB currently being very common), though page sizes of 4 megabytes or larger may be used for special purposes. (Using the same or a related mechanism, contiguous regions of virtual memory larger than a page are often mappable to contiguous physical memory for purposes other than virtualization, such as setting access and caching control bits.)

The operating system stores the address translation tables, the mappings from virtual to physical page numbers, in a data structure known as a page table. If a page is marked as unavailable (perhaps because it is not present in physical memory but is instead in the swap area), then when the CPU tries to reference a memory location in that page, the MMU responds by raising an exception (commonly called a page fault) with the CPU, which then jumps to a routine in the operating system. If the page is in the swap area, this routine invokes an operation called a page swap to bring in the required page.

The page swap operation involves a series of steps. First it selects a page in memory, for example a page that has not been recently accessed and (preferably) has not been modified since it was last read from disk or the swap area. (See page replacement algorithms for details.) If the page has been modified, the process writes the modified page to the swap area. The next step is to read in the information in the needed page (the page corresponding to the virtual address the original program was trying to reference when the exception occurred) from the swap file. When the page has been read in, the tables for translating virtual addresses to physical addresses are updated to reflect the revised contents of physical memory. Once the page swap completes, it exits, and the program is restarted and continues as if nothing had happened, returning to the point in the program that caused the exception.
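To make the paging mechanism concrete, the sketch below splits a virtual address into a page number and offset, looks the page up in a toy page table, and performs a highly simplified page swap on a fault. The page size, the table contents, and the way a free frame is chosen are all assumptions made for illustration.

```python
# Toy paged address translation with a simplified page-fault path.
PAGE_SIZE = 4096                      # assume 4 KB pages (a common choice)

page_table = {0: 7, 1: None, 2: 3}    # virtual page -> physical frame (None = in swap area)
swap_area = {1: "contents of virtual page 1"}
free_frames = [9]                     # physical frames available for page-in

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)  # high bits = page, low bits = offset
    frame = page_table.get(page)
    if frame is None:                 # page fault: page is in the swap area
        frame = page_swap(page)
    return frame * PAGE_SIZE + offset

def page_swap(page):
    frame = free_frames.pop()         # a real system may first evict (and write back) a victim page
    _ = swap_area.pop(page)           # read the needed page in from the swap area
    page_table[page] = frame          # update the translation table
    return frame

print(hex(translate(0 * PAGE_SIZE + 0x10)))   # mapped page: no fault
print(hex(translate(1 * PAGE_SIZE + 0x10)))   # unmapped page: triggers the simplified page swap
```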


It is also possible that a virtual page was marked as unavailable because it was never previously allocated. In such cases, a page of physical memory is allocated and filled with zeros, the page table is modified to describe it, and the program is restarted as above.

Details

The translation from virtual to physical addresses is implemented by an MMU (memory management unit). This may be either a module of the CPU or an auxiliary, closely coupled chip. The operating system is responsible for deciding which parts of the program's simulated main memory are kept in physical memory. The operating system also maintains the translation tables which provide the mappings between virtual and physical addresses, for use by the MMU. Finally, when a virtual memory exception occurs, the operating system is responsible for allocating an area of physical memory to hold the missing information (possibly pushing something else out to disk in the process), bringing the relevant information in from disk, updating the translation tables, and finally resuming execution of the software that incurred the exception.

In most computers, these translation tables are stored in physical memory. Therefore, a virtual memory reference might actually involve two or more physical memory references: one or more to retrieve the needed address translation from the page tables, and a final one to actually do the memory reference. To minimize the performance penalty of address translation, most modern CPUs include an on-chip MMU and maintain a table of recently used virtual-to-physical translations, called a translation lookaside buffer (TLB). Addresses with entries in the TLB require no additional memory references (and therefore no additional time) to translate. However, the TLB can only hold a fixed number of mappings between virtual and physical addresses; when the needed translation is not resident in the TLB, action must be taken to load it. On some processors this is performed entirely in hardware: the MMU has to make additional memory references to load the required translations from the translation tables, but no other action is needed. On other processors, assistance from the operating system is needed: an exception is raised, and on this exception the operating system replaces one of the entries in the TLB with an entry from the translation table, and the instruction which made the original memory reference is restarted.

The hardware that supports virtual memory almost always supports memory protection mechanisms as well. The MMU may have the ability to vary its operation according to the type of memory reference (read, write, or execute), as well as the privilege mode of the CPU at the time the reference was made. This allows the operating system to protect its own code and data (such as the translation tables used for virtual memory) from corruption by an erroneous application program, and to protect application programs from each other and (to some extent) from themselves (e.g. by preventing writes to areas of memory which contain code).
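The fragment below sketches the TLB idea described above: a small, fixed-size table of recent virtual-to-physical translations consulted before the full page table. The table size and the FIFO-style eviction are assumptions chosen only to keep the example short; real TLBs use hardware-specific replacement schemes.

```python
# Toy TLB in front of a page table: recent translations are kept in a small
# fixed-size table; on a miss, the translation is fetched from the page table
# (an extra memory reference on real hardware) and cached, evicting the
# oldest entry if the TLB is full.
from collections import OrderedDict

TLB_SIZE = 4
tlb = OrderedDict()                            # virtual page -> physical frame
page_table = {p: p + 100 for p in range(16)}   # pretend full page table held in "memory"

def lookup(page):
    if page in tlb:                            # TLB hit: no extra memory reference needed
        return tlb[page], "hit"
    frame = page_table[page]                   # TLB miss: walk the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)                # evict the oldest entry (FIFO for simplicity)
    tlb[page] = frame
    return frame, "miss"

for page in [1, 2, 1, 3, 4, 5, 1]:
    print(page, lookup(page))
```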


Number System & Boolean Algebra


ARITHMETIC CIRCUITS
A logic gate is an electronic circuit which accepts binary inputs and produces a binary output, the two values being 0 and 1. The inverter (NOT) gate has one input and one output, but a logic gate in general accepts one or more inputs and produces one output. Apart from the NOT gate there are six other types of logic gates. Inputs to a gate will be designated by binary variables A, B, C, etc., and the output will be indicated by the binary variable Y. As stated earlier, a binary variable can take on the values 0 and 1, which are electronically represented by LOW and HIGH voltage levels. In terms of Boolean algebra, the function of a logic gate is represented by a Boolean expression.
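As a quick illustration of these gates viewed as Boolean functions of their inputs, the sketch below models the common gates in Python on 0/1 values. This is only a software model of the behaviour, not a description of the electronic circuits themselves.

```python
# Software model of the basic logic gates, operating on 0/1 values.
def NOT(a):      return 1 - a
def AND(a, b):   return a & b
def OR(a, b):    return a | b
def NAND(a, b):  return NOT(AND(a, b))
def NOR(a, b):   return NOT(OR(a, b))
def XOR(a, b):   return a ^ b
def XNOR(a, b):  return NOT(XOR(a, b))

# Truth table of a two-input gate, e.g. XOR:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))
```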

HALF ADDER
This circuit adds two binary variables and yields a carry, but does not accept a carry from another circuit (adder). The truth table of the half adder is given below.

A  B | S  C
0  0 | 0  0
0  1 | 1  0
1  0 | 1  0
1  1 | 0  1

From this table,

$S = \bar{A}B + A\bar{B} = A \oplus B$

$C = AB$
The half adder logic circuit is shown in the given figure.

Circuit using XOR
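A minimal software check of these equations is shown below; it simply confirms that S = A XOR B and C = A AND B reproduce the half adder truth table. The function name half_adder is illustrative.

```python
# Half adder: S = A XOR B (sum), C = A AND B (carry).
def half_adder(a, b):
    s = a ^ b        # sum bit
    c = a & b        # carry bit
    return s, c

# Reproduce the half adder truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, "->", s, c)
```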

A half adder circuit using NOR gates is shown in the given figure.

Half adder circuit using NOR gates



Here $S = \bar{A}B + A\bar{B} = \overline{\overline{(A+B)} + \overline{(\bar{A}+\bar{B})}}$


FULL ADDER
This circuit adds two binary digits, accepts an incoming carry, and yields an outgoing carry. Such a circuit can easily be visualized by means of two half adders (HA) and an OR gate, as shown in the given figure.

Full adder using two half adders
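The construction described above, two half adders plus an OR gate for the carries, can be checked with the short sketch below; the helper names are illustrative, and the half_adder function repeats the earlier sketch so the fragment stands alone.

```python
# Full adder built from two half adders plus an OR gate for the carries.
def half_adder(a, b):
    return a ^ b, a & b                 # (sum, carry)

def full_adder(a, b, c_in):
    s1, c1 = half_adder(a, b)           # first half adder: add A and B
    s, c2 = half_adder(s1, c_in)        # second half adder: add the incoming carry
    c_out = c1 | c2                     # OR gate combines the two carries
    return s, c_out

# Reproduce the full adder truth table.
for a in (0, 1):
    for b in (0, 1):
        for c_in in (0, 1):
            print(a, b, c_in, "->", full_adder(a, b, c_in))
```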

To synthesize the full adder circuit, we proceed from the truth table given below.

A  B  Ci | S  C0
0  0  0  | 0  0
0  0  1  | 1  0
0  1  0  | 1  0
0  1  1  | 0  1
1  0  0  | 1  0
1  0  1  | 0  1
1  1  0  | 0  1
1  1  1  | 1  1

It can be written immediately from this table that

$S = \bar{A}\bar{B}C_i + \bar{A}B\bar{C}_i + A\bar{B}\bar{C}_i + ABC_i$    (1)

and

$C_0 = \bar{A}BC_i + A\bar{B}C_i + AB\bar{C}_i + ABC_i$    (2)

Recognizing that $Y + Y + Y + \dots + Y = Y$ and adding $ABC_i$ twice to the right-hand side of eq. (2), we can write
$C_0 = (\bar{A}BC_i + ABC_i) + (A\bar{B}C_i + ABC_i) + (AB\bar{C}_i + ABC_i)$

Observing that $Y + \bar{Y} = 1$, we get


$C_0 = BC_i + AC_i + AB$    (3)

The following is easily established:

$S = [\overline{(AC_i + BC_i + AB)} + ABC_i](A + B + C_i)$
$\;\;\; = [(\overline{AC_i} \cdot \overline{BC_i} \cdot \overline{AB}) + ABC_i](A + B + C_i)$
$\;\;\; = (\overline{AC_i} \cdot \overline{BC_i} \cdot \overline{AB})(A + B + C_i) + ABC_i$
$\;\;\; = (\bar{A} + \bar{C}_i)(\bar{B} + \bar{C}_i)(\bar{A} + \bar{B})(A + B + C_i) + ABC_i$
$\;\;\; = \bar{A}\bar{B}C_i + \bar{A}B\bar{C}_i + A\bar{B}\bar{C}_i + ABC_i = S$    (4)


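Since several Boolean identities were used above, a brute-force check over all eight input combinations is a convenient sanity test. The sketch below confirms that the simplified forms (3) and (4) agree with the truth table, using the standard facts that S equals A XOR B XOR Ci and that C0 is 1 when at least two of A, B, Ci are 1.

```python
# Brute-force check that the simplified expressions (3) and (4) match the
# full adder truth table for every combination of A, B and Ci.
for A in (0, 1):
    for B in (0, 1):
        for Ci in (0, 1):
            C0 = (B & Ci) | (A & Ci) | (A & B)               # eq. (3)
            S = ((1 - C0) | (A & B & Ci)) & (A | B | Ci)     # first line of the derivation of (4)
            assert S == A ^ B ^ Ci
            assert C0 == (1 if A + B + Ci >= 2 else 0)
print("equations (3) and (4) agree with the full adder truth table")
```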

Using AND-OR-INVERT (AOI) gates, equations (3) and (4) are implemented in the given figure, which yields the outputs S and $C_0$.

