
CT 014 3 1 Hardware, Software Systems and Networks



Individual Assignment
UC1F0903 ITP
Name: Sim Piin Hor
ID: TP 018704


Table of contents
Introduction
Section 1: Operating System
    Memory Management
    Memory Partitioning
    Fragmentation
    Virtual Memory
Section 2: Computer Systems Architecture
    Central Processing Unit
    Design Goals
Conclusion
Frequently Asked Questions (FAQ)
Appendices
References


Introduction
A computer is a machine that processes data and produces results in electronic form. A complete computer system consists of three major components: hardware, software and data. The hardware of a computer system includes essential components such as the Central Processing Unit (CPU), memory, input/output devices and storage. Storage can be grouped into primary storage and secondary storage. Primary storage, also called Random Access Memory (RAM), is crucial to the computer system and is the most important partner of the CPU. Primary storage is volatile memory, which requires power to maintain the data it stores. Modern computer RAM comes in two forms, Static RAM (SRAM) and Dynamic RAM (DRAM). A memory chip is built from an integrated circuit (IC) consisting of millions of capacitors and transistors that form memory cells storing data as 1s and 0s; each addressable memory location holds 8 bits, or 1 byte. John von Neumann, the famous mathematician, proposed the Stored Program Concept, which greatly influences modern computer architecture: the von Neumann architecture stores both the data and the instructions of a program in memory. The main purpose of memory is to store the programs and instructions that the CPU needs to process, and the number of instructions and the amount of data that can be stored is determined by the size of the RAM in bytes. A mid-range modern computer system nowadays has around 1 GB (gigabyte) of memory, and memory sizes increase rapidly as programs and operating systems grow bigger and more complex. The Central Processing Unit (CPU) acts as the heart of the computer system; its main purpose is to execute and process data and instructions. The three main sub-components of the CPU are the Arithmetic/Logic Unit (ALU), the Control Unit (CU) and the interface unit.


Section 1 : Operating System


Question 2: Research, investigate and document areas relating to memory management of any operating system of your choice. Areas to be discussed in your research documentation include, among others, how memory is managed, the mechanisms and strategies used, the problems faced by these techniques, and solutions to overcome them. (Virtual memory, single partition with overlay, and variable partitioning schemes such as first fit, best fit and worst fit.)

Memory Management
Memory management is the method of scheduling how instructions and data are loaded into memory. Its ultimate purpose is to let programs find space in the simplest manner and to reduce memory wastage as much as possible. A single program may not need all of the memory space, but when many programs must be loaded into memory at once, memory management becomes very important.

Single Tasking with Overlay
When a program needs more memory than is available, the overlay method is used. Overlaying divides a large program into small pieces that are loaded into memory only when the processor or operating system needs them. As Figure 1.1 shows, at the beginning of execution the operating system (OS) and the main part of the program are loaded into memory, together with the first overlay. When that overlay finishes processing, the program loads the next overlay into the same region and processes it, and this loop continues until the program completes. A disadvantage of the overlay method is that it cannot take full advantage of a larger memory, because the memory required by each overlay is fixed in advance. A small sketch of the overlay idea follows.
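To make the idea concrete, the following short Python sketch simulates single tasking with overlay under simplified assumptions; the sizes, overlay names and the fixed overlay region are hypothetical and are not taken from any real operating system.

# Minimal sketch of the overlay idea: a program larger than the free memory
# is split into overlays that share one fixed region and are loaded in turn.
# All sizes (in KB) and names below are illustrative assumptions.

OS_SIZE = 20          # memory permanently taken by the operating system
MAIN_SIZE = 30        # resident main part of the program
OVERLAY_REGION = 40   # fixed space reserved for one overlay at a time
TOTAL_MEMORY = 100    # total physical memory

overlays = {"overlay_A": 35, "overlay_B": 40, "overlay_C": 25}  # pieces of the program

def run_with_overlays():
    used = OS_SIZE + MAIN_SIZE
    assert used + OVERLAY_REGION <= TOTAL_MEMORY, "overlay region does not fit"
    for name, size in overlays.items():
        if size > OVERLAY_REGION:
            raise MemoryError(f"{name} ({size} KB) exceeds the overlay region")
        # Loading a new overlay replaces the previous one in the same region.
        print(f"Loading {name} ({size} KB) into the {OVERLAY_REGION} KB overlay region")
        print(f"  memory in use: {used + size} KB of {TOTAL_MEMORY} KB")
    print("Program finished; every overlay was processed in turn.")

run_with_overlays()

The sketch also shows the drawback mentioned above: even if more physical memory were available, each overlay is limited to the pre-determined overlay region.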


Memory Partitioning
The simplest method of memory management is to divide the memory space into partitions. Memory partitioning comes in two types: fixed partitioning and variable (dynamic) partitioning.

Fixed Partitioning
Fixed partitioning divides memory into partitions of equal, fixed size. Each partition can hold only one process, so the total number of processes that can be resident is determined by the number of partitions. A process is loaded into one partition, and when it finishes processing, that partition becomes available for the next process in the queue.

Variable Partitioning
Variable partitioning is widely used in modern computer architecture because of its flexible memory management. The OS divides memory into partitions of exactly the size needed by specific processes or programs. Every time a new process requests memory space, the OS carves out a partition for that process using an allocation method that depends on the type of OS used. There are three common methods: the First-Fit, Best-Fit and Largest-Fit (Worst-Fit) algorithms.

First-Fit
The first-fit algorithm loads a program into the first available space that fits. The next program or process is loaded into the next free space that fits, and so on. Figure 1.2 shows how the First-Fit algorithm operates.

Best-Fit
Best-fit allocates a program or process into the smallest free space that fits. It scans the whole memory for the most suitable slot to load the program. Figure 1.3 illustrates how the Best-Fit algorithm works.

Largest-Fit (Worst-Fit)
The largest-fit algorithm loads a program or process into the largest available memory space it can find. Like Best-Fit, it scans all of the free memory space, but it then places the program or process into the largest hole. Figure 1.4 explains how the Largest-Fit algorithm works. A short sketch comparing the three placement strategies follows.
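The following Python sketch, written under simplified assumptions (the free list of hole sizes is purely illustrative), shows that the three placement strategies differ only in which free hole they select for a request.

# Illustrative comparison of first-fit, best-fit and worst-fit placement.
# 'holes' is a hypothetical list of free memory blocks (sizes in KB).

def first_fit(holes, request):
    # Take the first hole large enough for the request.
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None  # no hole fits

def best_fit(holes, request):
    # Take the smallest hole that still fits the request.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    # Take the largest hole available (Largest-Fit).
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

holes = [120, 40, 300, 75]   # free spaces left after earlier allocations
request = 60                 # size of the new process

for strategy in (first_fit, best_fit, worst_fit):
    index = strategy(holes, request)
    print(f"{strategy.__name__}: place {request} KB into hole #{index} ({holes[index]} KB)")

In a real allocator the chosen hole would then be split into the allocated partition and a smaller remaining hole; that detail is omitted here to keep the comparison short.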


Fragmentation
Memory partitioning causes fragmentation: small pieces of free memory space into which no process can be loaded. Both fixed and variable partitioning cause fragmentation; fixed partitioning causes internal fragmentation.

Internal Fragmentation
Internal fragmentation occurs when a program or process is loaded into a fixed partition but cannot occupy the whole partition. Internal fragmentation cannot be resolved, because a fixed partition allows only one program to be loaded into it. Figure 2.1 gives a rough picture of internal fragmentation.

External Fragmentation
In variable partitioning, programs and processes are loaded into memory partitions and later terminate. After numerous loads and terminations, many free spaces appear that are too small to fit a program or process. If those small pieces of memory were combined, they might create a space big enough for another process. Figure 2.2 illustrates external fragmentation. To reclaim all the small pieces of free memory, a method called compaction (defragmentation) is used; Figure 2.3 shows how compaction can create a new large memory space for another process, and a short sketch of the idea follows.
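Here is a minimal sketch of compaction under simplified assumptions: memory is modelled as a list of blocks, each either used or free, and the layout is hypothetical. The used blocks are slid together so that all free space merges into one hole at the end.

# Compaction (defragmentation) sketch: relocate the used blocks so the
# scattered free holes merge into one large free region at the end.
# The block list below is an illustrative assumption, not real memory.

memory = [("P1", 50), ("free", 20), ("P2", 30), ("free", 40), ("P3", 10), ("free", 15)]

def compact(blocks):
    used = [b for b in blocks if b[0] != "free"]          # keep allocated blocks in order
    free_total = sum(size for name, size in blocks if name == "free")
    return used + [("free", free_total)]                  # one merged hole at the end

print("before:", memory)
print("after: ", compact(memory))
# after:  [('P1', 50), ('P2', 30), ('P3', 10), ('free', 75)]

In a real system compaction also has to update every relocated process's base address, which is why it is relatively expensive to perform.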


Virtual Memory
Virtual memory is a method of separating logical memory from physical memory. Using virtual memory, a large logical address space can be used on top of a smaller physical memory: the programmer can load a program onto a system with little memory without worrying about memory size or having to apply the overlay method to the program. The two methods of implementing virtual memory are demand paging and demand segmentation.

Demand Paging
Paging breaks a program's memory into fixed-size blocks called pages, and the OS divides physical memory into frames of the same size; a typical frame size is 4 KB. As a program or process starts executing, the OS loads the pages it needs into frames. Figure 3.1 illustrates paging. When a program is called, the OS pages it into memory; however, not the whole program is loaded, only the necessary pages. Demand paging therefore has the advantage of making do with a smaller physical memory, since it does not load unnecessary pages, which also reduces paging time.

Demand Segmentation
Segmentation is similar to paging, except that instead of pages the program is divided into segments, blocks that vary in size. Every segment has its own physical address and a size limit. Demand segmentation is the most efficient virtual memory method but has higher hardware requirements. Figure 3.2 illustrates how a segment is transferred into physical memory.

Page Fault
A page fault occurs when a program or process tries to access a page that is not in memory. When a page fault occurs, the OS first locates the missing page in secondary storage, then finds an available frame in memory and copies the page into physical memory. After the page is copied into memory, the OS updates the page table and resumes the process, repeating this loop whenever another fault occurs. Figure 3.3 displays a simple loop showing how a page fault is handled, and a short sketch of the same loop follows.
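The following Python sketch models the page-fault loop just described under simplified assumptions; the page table, the simulated backing store and the FIFO eviction policy are hypothetical simplifications rather than any real OS mechanism.

# Demand paging sketch: pages are brought into a small set of frames only
# when they are referenced; a missing page triggers a (simulated) page fault.
# Frame count, page numbers and FIFO eviction are illustrative assumptions.

from collections import deque

NUM_FRAMES = 3
frames = deque()            # pages currently resident, oldest first (FIFO)
page_table = {}             # page number -> True if resident in a frame

def access(page):
    if page_table.get(page):
        print(f"page {page}: hit")
        return
    # Page fault: locate the page in secondary storage (simulated),
    # free a frame if necessary, load the page and update the page table.
    print(f"page {page}: PAGE FAULT, loading from secondary storage")
    if len(frames) == NUM_FRAMES:
        victim = frames.popleft()          # evict the oldest resident page
        page_table[victim] = False
        print(f"  evicting page {victim}")
    frames.append(page)
    page_table[page] = True

for p in [1, 2, 3, 1, 4, 2]:               # a hypothetical reference string
    access(p)

Running the sketch shows that the same page can fault again after being evicted, which is why the choice of replacement policy matters for performance.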


Section 2 : Computer Systems Architecture


Question 1: Research, investigate and document the microprocessors of desktop machines, servers and laptops. Areas to be discussed in your research documentation include, among other areas, the major trends affecting microprocessor performance and design in recent years, and the differences between microprocessor design goals for laptops, servers, desktops and embedded systems.

Central Processing Unit


This case study was made on Intel-based processors and concentrates on the Pentium architecture. Pentium-based processors were among the most successful processor architectures ever developed; computer systems with Intel Pentium processors were popular in desktop, server and laptop systems from the Pentium I generation up to the more recent Pentium IV generation.

The Central Processing Unit (CPU) has three major components: the Arithmetic/Logic Unit (ALU), the Control Unit (CU) and the Input/Output interface (I/O interface). The ALU is the component where data is held temporarily and where data and instructions are calculated and processed, while the CU controls the data and instructions that flow inside the CPU, determining and fetching the data or instructions the CPU must execute next. The CPU is designed for one main purpose, to execute instructions, so a processor's performance is determined by the number of instructions it can process in a given time. Many methods can improve instruction-processing performance; one of them is to add multiple CPUs to the system. Multiple processors can, in theory, process more instructions in proportion to the number of processors, but adding more processors to the hardware is costly and the performance increase is never exactly what the theory claims.

Clock Speed
For a single-CPU system, one method of increasing processor performance is to increase the clock speed. Increasing the clock speed increases the number of instructions executed per second.
The unit of clock speed is the gigahertz (GHz), which indicates how many clock cycles the processor completes in one second; by that reasoning, an Intel Pentium IV processor running at 1.8 GHz could perform on the order of 1800 million instructions every second, although the number of instructions actually executed is affected by other factors.

Data Length
Data length is the amount of data the processor can handle at one time. With a larger data width, the processor can process more data per instruction. A modern processor such as the Intel Pentium IV is already equipped with 64-bit processing abilities, even though most operating systems are still 32-bit. In theory, a 64-bit processor with double the data width can perform up to twice as fast as a 32-bit processor.

Cache Memory
Cache is high-speed memory placed between the CPU and primary memory. It is small, typically 64 KB depending on the processor architecture, and is divided into blocks of only 8 or 16 bytes. Cache memory is accessed only by the processor, which fetches data and instructions from it. Every time the CPU requests data or an instruction, it checks the cache first; if the cache holds the required data, the data is fetched into the processor. When the requested data is in the cache it is called a hit; when it is not, it is called a miss, and the ratio of hits to the total number of CPU requests is known as the hit ratio. Increasing the cache size may increase the hit ratio, since more data and instructions can be held in the cache for the CPU to access rapidly. If the requested data is not in the cache, the CPU goes to main memory and copies the required data into the cache; if the cache is full, a block of cache memory is removed and replaced with the new data. There are many algorithms for choosing which block to replace; one popular algorithm is Least-Recently Used (LRU), in which the new block replaces the block that has not been used for the longest time. Modern processors such as the Intel Pentium IV already include cache memory inside the CPU, often with two or three levels; built-in cache enjoys extremely low latency and fast access times thanks to the high-speed internal bus. The first-level cache (L1 cache) is divided into a data cache and an instruction cache; it stores the most frequently used data and instructions for the fastest read rate, and it greatly affects the CPU's performance because it is the closest memory the CPU accesses. The second-level cache (L2 cache) is larger, ranging from 512 KB up to a few MB. A short Python sketch of LRU replacement follows.
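To illustrate the replacement policy, here is a minimal Python sketch of an LRU cache under simplified assumptions; the capacity, the block addresses and the dictionary-based model are illustrative and do not correspond to real cache hardware.

# Least-Recently Used (LRU) replacement sketch: when the cache is full,
# the block that has gone unused for the longest time is evicted.
# Capacity and addresses below are illustrative assumptions.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()           # address -> data, least recently used first

    def access(self, address):
        if address in self.blocks:
            self.blocks.move_to_end(address)  # mark as most recently used
            return "hit"
        if len(self.blocks) == self.capacity:
            self.blocks.popitem(last=False)   # evict the least recently used block
        self.blocks[address] = f"data@{address}"  # copy block in from main memory
        return "miss"

cache = LRUCache(capacity=2)
for addr in [0x10, 0x20, 0x10, 0x30, 0x20]:
    print(hex(addr), cache.access(addr))
# prints: miss, miss, hit, miss (evicts 0x20), miss (evicts 0x10)

In this tiny run, 1 hit out of 5 requests gives a hit ratio of 0.2; a larger capacity would raise the ratio, which is exactly the effect of increasing cache size described above.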


Instruction Set
An instruction set is the pack of instructions commonly and frequently used by the CPU that is built into the processor. Instruction sets were introduced in early processors, and the number of instructions has grown rapidly. Some of these instructions can only be accessed by the CPU or the OS itself, for example security or memory-control instructions. Modern processor architectures already include many frequently used instruction-set extensions in their CPU families. While early instructions were mainly concerned with processor performance, modern CPUs implement more multimedia and graphics-processing instruction sets; examples of the latest instruction-set extensions in the Intel Pentium family are SSSE3 and EM64T.

Pipelining
Pipelining is a method that increases execution speed by overlapping instructions, meaning that more than one instruction is in progress at a time instead of the traditional single instruction. Within the fetch-execute cycle of the CPU, several instructions can in fact be worked on together, and almost all modern processors can perform pipelining. The idea is that the next instruction is already fetched and ready while the CPU is waiting for the current instruction to finish executing. Pipelining does not shorten the execution time of an individual instruction, but it increases the number of instructions that can be executed in a given time. The Intel Pentium IV family further enhanced pipelining by introducing Hyper-Threading technology, which allows the processor to work on two instruction streams together. A small sketch of the pipeline overlap follows.
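The overlap can be visualised with a tiny Python sketch, assuming a simplified three-stage Fetch-Decode-Execute pipeline and a handful of hypothetical instructions; real Pentium pipelines have many more stages.

# Pipelining sketch: with a 3-stage pipeline, instruction i enters the
# Fetch stage on cycle i, so one instruction can complete every cycle
# once the pipeline is full. Stages and instructions are illustrative.

STAGES = ["Fetch", "Decode", "Execute"]
instructions = ["I1", "I2", "I3", "I4"]

total_cycles = len(instructions) + len(STAGES) - 1
for cycle in range(total_cycles):
    active = []
    for i, instr in enumerate(instructions):
        stage_index = cycle - i                      # instruction i starts at cycle i
        if 0 <= stage_index < len(STAGES):
            active.append(f"{instr}:{STAGES[stage_index]}")
    print(f"cycle {cycle + 1}: " + ", ".join(active))

# Without pipelining the 4 instructions would need 4 * 3 = 12 cycles;
# with the pipeline they finish in 4 + 3 - 1 = 6 cycles.

The printout shows each instruction still taking three cycles on its own, while the overall throughput approaches one completed instruction per cycle, which is the point made in the paragraph above.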


Design Goals
Desktop Processor
Compared with earlier processors, modern desktop processors show a significant increase in processing power; a mid-range Intel Pentium processor has the processing power of an earlier high-end server-class processor. After a few decades of processor development, however, performance can hardly be increased significantly any more because of a few limitations. One of them is that the clock speed can no longer be raised dramatically: an Intel Pentium IV processor can reach a maximum clock speed of more than 3 GHz, but its successor, the Intel Core family, only reaches a slightly higher maximum clock speed. Processor manufacturers have therefore turned their attention to making instruction execution more efficient; enhancements to the CPU allow instructions to finish executing faster. One example in modern processors is faster video and graphics processing, and the trend of adding extended multimedia instruction sets to processors gives clear proof of the manufacturers' intention. Yet another trend for increasing the processing power of computer systems is multi-core processing. Unlike server-class processors, desktop machines have only recently adopted multi-core processing. One factor behind its introduction is that the maximum clock speed was limited by hardware constraints in manufacturing; as manufacturing techniques improved, it became possible to fit more than one processor core onto a single processor chip. The degree of multi-core processing will only rise, because this method offers a simpler way of increasing a machine's processing power than raising the clock speed.

Server Processor
Server-class processors have not shown dramatic changes in recent years; recent server processors do not differ much from those of previous years other than further enhancing their processing power. The multi-core concept has been introduced to server machines as well, and today's servers differ noticeably in how the processors are counted in one system: previous server machines increased the physical number of processors in the system, while the recent trend is to put more cores in one processor chip, with the latest server processors already having up to eight cores on a single chip.


Another trend in server processors is the enhancement of virtual machine support. A virtual machine means running more than one operating system on the same physical machine. The virtual machine concept was introduced to the server market quite some time ago but had not been fully exploited because of hardware limitations; recently developed server processors provide much better virtual machine capabilities, allowing more efficient use of a server's powerful hardware.

Laptop Processor
Laptops, or notebooks, have been gaining users in recent years and show a trend of replacing the desktop as the daily-use personal computer. Their popularity is related to the new laptop-class processors, and there are two different trends in how laptop processors are developing. Earlier generations of laptops gave the impression of being large (compared to handheld devices such as the Personal Digital Assistant, PDA) and of having slow processors (compared to desktop processors). By taking advantage of new manufacturing techniques, recent laptop processors are smaller and more powerful: a machine with the latest Intel Core architecture processor on the Intel Centrino 2 platform weighs only around 2 to 3 kg but has processing power similar to a desktop processor. While laptop processors now have sufficient processing power, users tend to ask for more mobility. As the words laptop and notebook suggest, such a machine should be light enough to use on the user's lap or small enough to carry like a notebook. The famous word of recent years is Netbook, a smaller version of the notebook weighing around 1 to 2 kg with a screen of only about 10 inches. Thanks to new technology, processor manufacturers are able to develop processors that are small and consume less battery power while still providing sufficient processing power for normal usage: the latest Intel Core architecture processors manufactured with 45 nm technology are tiny, and their voltage is relatively low compared to desktop processors, which greatly increases a laptop's battery life.

Embedded System Processor
An embedded system is a machine whose hardware components are built in a compact size; these systems usually take up less space by using smaller hardware packed tightly together. Embedded systems are used virtually everywhere; examples include the computerized POS machine in a shop, the digital home theatre system in a living room and the machine-control system in a factory. Embedded systems have the
advantages of small size and convenient setup, which helps when the installation environment has limited space. Since most embedded systems do not require powerful hardware, a tiny machine is possible: using the Intel Atom architecture, with its very small processor, combined with the nVidia ION platform, a machine about half the size of an A4 sheet of paper can be built. Because an embedded system must be small, its processor must not only be small in size but also generate little heat; with a proper hardware setup, an Intel Atom processor may not even need a fan for cooling.


Conclusion
Throughout this module I have learned basic computer knowledge covering both hardware and software, and I have gained more understanding of how a computer system works. Through this assignment I gained a thorough understanding of how memory is scheduled and of the operation of the CPU.


Frequently Asked Questions (FAQ)


Section 1:
Why is memory management important?
Memory management is important because it maximizes memory usage by scheduling and arranging the programs loaded into the memory space, using different algorithms.

What are the advantages and disadvantages of a single partition with overlay?
The advantage of overlay is that a larger program can be fitted into a smaller memory. The disadvantages are that it requires the programmer to design the memory management within the program, and that overlay cannot take advantage of additional free memory, because the memory size for the program has been pre-determined.

Can internal fragmentation be eliminated?
Internal fragmentation cannot be eliminated. It occurs inside fixed partitions, where each partition has a designated size, so the unused space within a partition cannot be reclaimed.

What are the differences between paging and segmentation?
One difference is that all pages are equal in size while segment blocks have different sizes. Another difference is that segmentation needs a base address and a limit for each segment to show where it starts and how large it is.

Section 2:
How can cache memory size improve a processor's performance?
The size of the cache memory determines how much data can be kept in the cache for the processor to fetch rapidly; the cache size therefore also affects the processor's hit ratio. For example, if 950 out of 1000 CPU requests are served from the cache, the hit ratio is 0.95.

Appendices

Figure 1.1 : Single partition with overlay.

Figure 1.2 : First-fit


Figure 1.3 : Best-fit

Figure 1.4 : Largest-fit ( Worst-fit )


Figure 2.1 : Internal fragmentation

Figure 2.2 : External Fragmentation


Figure 2.3 : Compaction

Figure 3.1 : Demand Paging


Figure 3.2 : Demand Segmentation

Figure 3.3 : Page fault

References

Englander, Irv., 2000, The Architecture of Computer Hardware and Systems Software, 2nd Edition, United States of America, John Wiley & Sons Inc.

Chandana Prasad, Ravi, Ashutosh Kumar Singh, Sharmila Kanna, 2003, Computer System Organization and Architecture, Malaysia, Prentice Hall.

Assessment Criteria:


Research and Investigation: 20%
Referencing: 20%
Analysis: 20%
Reflection: 20%
Documentation: 20%

Marking Criteria:
Student Name:
Research and Investigation: 20%
Referencing: 20%
Analysis: 20%
Reflection: 20%
Documentation: 20%
Total:

Distinction: Demonstrated comprehensive research with detailed evidence. High level of analysis performed; exceptional and thorough knowledge and understanding displayed with regard to the facilities and services of the Operating System. Documentation presented in a professional manner, without any spelling or grammar mistakes. Displayed evidence of critical appraisal.

Credit: Adequate research conducted with a fair level of evidence presented. Moderate level of understanding, analysis and knowledge displayed. Good level of documentation presented. Some level of reflection was evident in the documentation. Moderate level of critical appraisal.

Pass: Low level of research conducted; some evidence of research displayed. Basic level of understanding and knowledge analysis displayed. Satisfactory level of documentation. Satisfactory or low level of reflection displayed. No critical appraisal demonstrated.
