
* -Have you studied buses? What types?
Yes. A bus is a group of lines (wires) shared by several devices for communication. The main types are:

a> Address bus
b> Data bus
c> Control bus

* -Have you studied pipelining? List the 5 stages of a 5-stage pipeline. Assuming 1 clock per stage, what is the latency of an instruction in a 5-stage machine? What is the throughput of this machine?
The 5 stages are: Instruction Fetch, Instruction Decode, Execute, Memory Access (read/write), Write Back. The latency (the time required for the first instruction to produce its result) is 5 cycles; for a long stream of instructions the throughput is 1 instruction per clock cycle.

* -How many bit combinations are there in a byte?
2^8 = 256.

* -For a single-processor computer system, what is the purpose of a processor cache, and describe its operation?
The purpose of a cache is to reduce the average time to access main memory. When the CPU wants to access data, it first checks the cache. If an entry is found with a tag matching that of the desired data (a hit), the CPU gets the data directly from the cache; otherwise (a miss), the data is copied into the cache for subsequent accesses.

* -Explain the operation considering a two-processor computer system with a cache for each processor. (SNOOPING)
Consider multiple processors, each with its own cache, sharing the same memory system. If this memory is read-write and one of the processors writes to it after some computation, all the other processors need to update their cached copies of the now-modified location. This is done with snooping: each cache constantly monitors the bus for writes to memory locations, and when it detects a write to a location it holds, it invalidates its current copy of the cached data and copies the new contents over.

* -What are the main issues associated with multiprocessor caches, and how might you solve them?
The main issue is cache coherency (data coherency): all the processors' caches must hold exactly the same shared data (coherent data), and races are possible between multiprocessors. One possible solution is a single central cache controller that receives all read/write requests from every processor and peripheral, so it can ensure there are no races and that coherency is maintained.
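The write-invalidate snooping behavior described above can be sketched as a toy simulation. The names here (`Bus`, `SnoopingCache`, a plain dict standing in for main memory) are illustrative assumptions, not a real implementation; a real snooping cache works on cache lines and bus signals, not Python objects:

```python
# Toy write-invalidate snooping sketch: each cache observes writes
# broadcast on the shared bus and drops its stale copy, so the next
# read misses and refetches the fresh value from memory.

class Bus:
    def __init__(self):
        self.caches = []                        # all caches attached to the bus

class SnoopingCache:
    def __init__(self, bus, memory):
        self.bus, self.memory = bus, memory
        self.lines = {}                         # address -> value (valid copies only)
        bus.caches.append(self)

    def read(self, addr):
        if addr not in self.lines:              # miss: fetch from memory
            self.lines[addr] = self.memory.get(addr, 0)
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value               # write-through, for simplicity
        for cache in self.bus.caches:           # the write is visible on the bus
            if cache is not self:
                cache.snoop_invalidate(addr)

    def snoop_invalidate(self, addr):
        self.lines.pop(addr, None)              # drop the now-stale copy

bus, mem = Bus(), {}
c0, c1 = SnoopingCache(bus, mem), SnoopingCache(bus, mem)
mem[0x10] = 1
print(c1.read(0x10))    # 1: c1 now holds a cached copy
c0.write(0x10, 7)       # c0's write is snooped; c1 invalidates its copy
print(c1.read(0x10))    # 7: miss, refetched after invalidation
```

Without the `snoop_invalidate` broadcast, `c1` would keep returning the stale value 1, which is exactly the coherency race described above.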

* -Explain the difference between write-through and write-back cache.
Write through: after writing to the cache, main memory is updated immediately as well, to keep the two consistent.
Write back: after writing to the cache, a flag bit called the dirty bit is set. When that line needs to be replaced, the dirty bit is checked; if it is set, the value is written back to main memory.

* -Are you familiar with the term MESI?
MESI is a cache-coherence protocol used in shared-memory multiprocessor systems. The letters stand for the four states a cache line can be in: M - Modified, E - Exclusive, S - Shared, I - Invalid.

* -Are you familiar with the term snooping?
Yes. If shared memory is read-write and one of the processors writes to it after some computation, all the other processors need to update their cached copies of the now-modified location. This is done with snooping: each cache constantly monitors the bus for writes to memory locations, and when it detects a write to a location it holds, it invalidates its current copy of the cached data and copies the new contents over.

* -Virtual memory?
Virtual memory is a way of extending a computer's memory by using a disk file to simulate additional memory space. The OS keeps track of this additional memory on the hard disk in units called pages; an access to a page that is not currently in physical memory triggers a page fault, and the OS brings the page in from disk.

* -What is cache coherency?
When multiple processors with separate caches share a common memory, it is necessary to keep the caches in a state of coherence by ensuring that any shared operand that is changed in any cache is changed throughout the entire system. This is done in either of two ways: through a directory-based or a snooping system.
In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which the processor must ask permission to load an entry from primary memory into its cache. When an entry is changed, the directory either updates or invalidates the other caches holding that entry.
In a snooping system, all caches on the bus monitor (or snoop) the bus to determine whether they have a copy of a block of data that is requested on the bus. Every cache keeps a copy of the sharing status of every block of physical memory it holds.
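The MESI states mentioned above can be summarized as a small transition table. The event names (`local_read`, `remote_write`, etc.) and the specific transitions shown are a simplified sketch of the protocol, not a complete specification; real implementations add details such as write-back timing and whether a fill lands in E or S:

```python
# Simplified MESI transition sketch (per cache line).
# "local_*" are this core's accesses; "remote_*" are accesses by
# another core, observed by snooping the bus.

MESI = {
    ("I", "local_read"):   "S",   # fetch; assume another sharer may exist
    ("I", "local_write"):  "M",   # read-for-ownership, then modify
    ("S", "local_write"):  "M",   # upgrade: other sharers are invalidated
    ("S", "remote_write"): "I",   # another core wrote: our copy is stale
    ("E", "local_write"):  "M",   # silent upgrade, no bus traffic needed
    ("E", "remote_read"):  "S",   # another core now shares the line
    ("M", "remote_read"):  "S",   # write back dirty data, then share
    ("M", "remote_write"): "I",   # write back, then invalidate
}

def next_state(state, event):
    # Events absent from the table leave the state unchanged
    # (e.g. a local read of a Modified line stays in M).
    return MESI.get((state, event), state)

state = "I"
for event in ["local_read", "remote_write", "local_write", "remote_read"]:
    state = next_state(state, event)
    print(event, "->", state)
# local_read -> S, remote_write -> I, local_write -> M, remote_read -> S
```

Note how the two dirty-data transitions (M on a remote access) are exactly where a snooping cache must write its modified copy back before other caches may use the line.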

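The write-back policy described in the write-through vs. write-back answer above can also be sketched in code. The class name, the dict-as-memory, and the simple eviction choice are all illustrative assumptions; the point is only to show the dirty bit deferring the memory update until eviction:

```python
# Toy write-back cache sketch: a write sets the dirty bit and skips
# main memory; the memory update happens only when a dirty line is
# evicted to make room for another address.

class WriteBackCache:
    def __init__(self, memory, capacity=2):
        self.memory = memory            # backing store: address -> value
        self.capacity = capacity
        self.lines = {}                 # address -> (value, dirty_bit)

    def write(self, addr, value):
        self._make_room(addr)
        self.lines[addr] = (value, True)    # set dirty bit; memory untouched

    def read(self, addr):
        if addr not in self.lines:
            self._make_room(addr)
            self.lines[addr] = (self.memory.get(addr, 0), False)
        return self.lines[addr][0]

    def _make_room(self, addr):
        if addr in self.lines or len(self.lines) < self.capacity:
            return
        victim, (value, dirty) = next(iter(self.lines.items()))
        if dirty:                           # dirty bit set: write back now
            self.memory[victim] = value
        del self.lines[victim]

mem = {}
cache = WriteBackCache(mem, capacity=1)
cache.write(0x10, 99)
print(mem.get(0x10))    # None: memory not updated yet, line is just dirty
cache.read(0x20)        # forces eviction of the dirty line at 0x10
print(mem.get(0x10))    # 99: the dirty value was written back on eviction
```

A write-through cache would differ only in `write`: it would update `self.memory[addr]` immediately and never need the dirty bit.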