
Operating Systems Lecture Notes, Lecture 2: Processes and Threads

Martin C. Rinard

A process is an execution stream in the context of a particular process state.
o An execution stream is a sequence of instructions.
o Process state determines the effect of the instructions. It usually includes (but is not restricted to):
  - Registers
  - Stack
  - Memory (global variables and dynamically allocated memory)
  - Open file tables
  - Signal management information
Key concept: processes are separated: no process can directly affect the state of another process.
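This separation can be seen directly on a Unix-like system. The following is a minimal sketch (assuming POSIX fork; the variable x is just an illustration): a write in the child process does not affect the parent's copy of the same variable.

    /* Illustrates process separation: after fork(), parent and child have
       separate copies of x; the child's write does not affect the parent. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int x = 1;
        pid_t pid = fork();              /* create a second process */
        if (pid == 0) {                  /* child */
            x = 42;                      /* modifies only the child's copy */
            printf("child:  x = %d\n", x);
            return 0;
        }
        wait(NULL);                      /* parent waits for the child */
        printf("parent: x = %d\n", x);   /* still 1: the child could not change it */
        return 0;
    }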

Process is a key OS abstraction that users see - the environment you interact with when you use a computer is built up out of processes.
o The shell you type commands into is a process.
o When you execute a program you have just compiled, the OS generates a process to run the program.
o Your WWW browser is a process.
Organizing system activities around processes has proved to be a useful way of separating out different activities into coherent units.
Two concepts: uniprogramming and multiprogramming.
o Uniprogramming: only one process at a time. Typical example: DOS. Problem: users often wish to perform more than one activity at a time (load a remote file while editing a program, for example), and uniprogramming does not allow this. So DOS and other uniprogrammed systems added things like memory-resident programs that are invoked asynchronously, but these still have separation problems. One key problem with DOS is that there is no memory protection - one program may write the memory of another program, causing weird bugs.
o Multiprogramming: multiple processes at a time. Typical of Unix plus all currently envisioned new operating systems. Allows the system to separate out activities cleanly.
Multiprogramming introduces the resource sharing problem - which processes get to use the physical resources of the machine, and when? One crucial resource: the CPU. The standard solution is preemptive multitasking - the OS runs one process for a while, then takes the CPU away from that process and lets another process run. It must save and restore process state. Key issue: fairness. Must ensure that all processes get their fair share of the CPU.
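To make the save-and-restore and time-slicing ideas concrete, here is a user-space sketch of round-robin scheduling over a toy process table. The PCB fields and the "work" done per slice are illustrative placeholders, not a real kernel interface.

    /* User-space sketch of preemptive round-robin time slicing.
       The PCB fields and per-slice "work" are toy placeholders;
       a real kernel saves the full register set on a timer interrupt. */
    #include <stdio.h>
    #include <stdint.h>

    #define NPROC 3
    #define SLICES 6

    struct pcb {                  /* illustrative Process Control Block */
        int      pid;
        uint64_t regs[4];         /* saved general-purpose registers (toy subset) */
        uint64_t pc;              /* saved program counter */
        uint64_t sp;              /* saved stack pointer */
    };

    int main(void) {
        struct pcb proc[NPROC] = { {.pid = 1}, {.pid = 2}, {.pid = 3} };
        int current = 0;

        for (int tick = 0; tick < SLICES; tick++) {
            struct pcb *p = &proc[current];
            p->pc += 10;                          /* "run" the process for one slice */
            printf("tick %d: ran pid %d, saved pc=%llu\n",
                   tick, p->pid, (unsigned long long)p->pc);
            current = (current + 1) % NPROC;      /* preempt: next process gets the CPU */
        }
        return 0;
    }

In a real kernel the rotation is triggered by a timer interrupt, and the saved state is the full hardware context described in the next section.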

How does the OS implement the process abstraction? It uses a context switch to switch from running one process to running another process.
How does the machine implement a context switch? A processor has a limited amount of physical resources. For example, it has only one register set. But every process on the machine has its own set of registers. Solution: save and restore hardware state on a context switch. Save the state in a Process Control Block (PCB). What is in a PCB? It depends on the hardware.
o Registers - almost all machines save registers in the PCB.
o Processor Status Word.
o What about memory? Most machines allow memory from multiple processes to coexist in the physical memory of the machine. Some may require Memory Management Unit (MMU) changes on a context switch. But some early personal computers switched all of a process's memory out to disk (!!!).
Operating systems are fundamentally event-driven systems - they wait for an event to happen, respond appropriately to the event, then wait for the next event. Examples:
o A user hits a key. The keystroke is echoed on the screen.
o A user program issues a system call to read a file. The operating system figures out which disk blocks to bring in, and generates a request to the disk controller to read the disk blocks into memory.
o The disk controller finishes reading in the disk block and generates an interrupt. The OS moves the read data into the user program and restarts the user program.
o A Mosaic or Netscape user asks for a URL to be retrieved. This eventually generates requests to the OS to send request packets out over the network to a remote WWW server. The OS sends the packets.
o The response packets come back from the WWW server, interrupting the processor. The OS figures out which process should get the packets, then routes the packets to that process.
o The time-slice timer goes off. The OS must save the state of the current process, choose another process to run, then give the CPU to that process.
When building an event-driven system with several distinct serial activities, threads are a key structuring mechanism of the OS.
A thread is again an execution stream in the context of a thread state. The key difference between processes and threads is that multiple threads share parts of their state. Typically, multiple threads are allowed to read and write the same memory. (Recall that no process could directly access the memory of another process.) But each thread still has its own registers. It also has its own stack, but other threads can read and write the stack memory.
What is in a thread control block? Typically just registers. Nothing needs to be done to the MMU when switching threads, because all threads can access the same memory.
Typically, an OS will have a separate thread for each distinct activity. In particular, the OS will have a separate thread for each process, and that thread will perform OS activities on behalf of the process. In this case we say that each user process is backed by a kernel thread.
o When a process issues a system call to read a file, the process's thread will take over, figure out which disk accesses to generate, and issue the low-level instructions required to start the transfer. It then suspends until the disk finishes reading in the data.

o When a process starts up a remote TCP connection, its thread handles the low-level details of sending out network packets.
Having a separate thread for each activity allows the programmer to program the actions associated with that activity as a single serial stream of actions and events. The programmer does not have to deal with the complexity of interleaving multiple activities on the same thread.
Why allow threads to access the same memory? Because inside the OS, threads must coordinate their activities very closely.
o If two processes issue read file system calls at close to the same time, the OS must make sure that it serializes the disk requests appropriately.
o When one process allocates memory, its thread must find some free memory and give it to the process. It must be ensured that multiple threads allocate disjoint pieces of memory.

Having threads share the same address space makes it much easier to coordinate activities - the OS can build data structures that represent system state and have threads read and write those data structures to figure out what to do when they need to process a request.
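A minimal POSIX-threads sketch of this idea (the "request queue" is a toy stand-in for shared system state, and all names here are invented for illustration): two threads read and write one shared structure directly, which separate processes could not do without explicit inter-process communication.

    /* Two threads read and write one shared data structure directly.
       The "request queue" is a toy stand-in for shared system state. */
    #include <pthread.h>
    #include <stdio.h>

    #define MAXREQ 16

    int queue[MAXREQ];
    int nreq = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *producer(void *arg) {
        int id = *(int *)arg;
        for (int i = 0; i < 4; i++) {
            pthread_mutex_lock(&lock);       /* serialize access to the shared queue */
            if (nreq < MAXREQ)
                queue[nreq++] = id * 100 + i;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, producer, &id1);
        pthread_create(&t2, NULL, producer, &id2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        for (int i = 0; i < nreq; i++) printf("request %d\n", queue[i]);
        return 0;
    }

Compile with -pthread. The mutex is doing exactly the kind of coordination work discussed next.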

One complication that threads must deal with: asynchrony. Asynchronous events happen arbitrarily as the thread is executing, and may interfere with the thread's activities unless the programmer does something to limit the asynchrony. Examples:
o An interrupt occurs, transferring control away from one thread to an interrupt handler.
o A time-slice switch occurs, transferring control from one thread to another.
o Two threads running on different processors read and write the same memory.
Asynchronous events, if not properly controlled, can lead to incorrect behavior. Examples:
o Two threads need to issue disk requests. The first thread starts to program the disk controller (assume it is memory-mapped, and must issue multiple writes to specify a disk operation). In the meantime, the second thread runs on a different processor and also issues the memory-mapped writes to program the disk controller. The disk controller gets horribly confused and reads the wrong disk block.
o Two threads need to write to the display. The first thread starts to build its request, but before it finishes a time-slice switch occurs and the second thread starts its request. The combination of the two threads issues a forbidden request sequence, and smoke starts pouring out of the display.
o For accounting reasons the operating system keeps track of how much time is spent in each user program. It also keeps a running sum of the total amount of time spent in all user programs. Two threads increment their local counters for their processes, then concurrently increment the global counter. Their increments interfere, and the recorded total time spent in all user processes is less than the sum of the local times.
So, programmers need to coordinate the activities of the multiple threads so that these bad things don't happen. Key mechanism: synchronization operations. These operations allow threads to control the timing of their events relative to events in other threads. Appropriate use allows programmers to avoid problems like the ones outlined above.
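The accounting example above can be reproduced directly with POSIX threads. In this sketch two threads increment a shared total with no synchronization, and the final value is usually less than the expected sum; uncommenting the mutex calls (one of the synchronization operations just mentioned) makes the result correct.

    /* Demonstrates the "lost update" interference described above. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    long total = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        for (long i = 0; i < N; i++) {
            /* pthread_mutex_lock(&lock); */
            total++;                       /* read-modify-write: not atomic */
            /* pthread_mutex_unlock(&lock); */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("expected %d, got %ld\n", 2 * N, total);
        return 0;
    }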

Fragmentation (computer)
From Wikipedia, the free encyclopedia

In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing storage capacity and, in most cases, performance. The term is also used to denote the wasted space itself. There are three different but related forms of fragmentation: external fragmentation, internal fragmentation, and data fragmentation. Various storage allocation schemes exhibit one or more of these weaknesses. Fragmentation is often accepted in return for an increase in speed or simplicity.


Internal fragmentation


Internal fragmentation occurs when storage is allocated without intention to use it.[1] This space is wasted. While this seems foolish, it is often accepted in return for increased efficiency or simplicity. The term "internal" refers to the fact that the unusable storage is inside the allocated region but is not being used: space wasted within an allocated partition of main memory is said to be internal fragmentation.

For example, in many file systems, each file always starts at the beginning of a cluster, because this simplifies organization and makes it easier to grow files. Any space left over between the last byte of the file and the first byte of the next cluster is a form of internal fragmentation called file slack or slack space.[2][3] Slack space is a very important source of evidence in computer forensic investigation. Similarly, a program which allocates a single byte of data is often allocated many additional bytes for metadata and alignment. This extra space is also internal fragmentation.

Another common example: English text is often stored with one character in each 8-bit byte even though in standard ASCII encoding the most significant bit of each byte is always zero. The unused bits are a form of internal fragmentation.

Similar problems with leaving reserved resources unused appear in many other areas. For example, IP addresses can only be reserved in blocks of certain sizes, resulting in many IPs that are reserved but not actively used. This is contributing to the IPv4 address shortage.

Unlike other types of fragmentation, internal fragmentation is difficult to reclaim; usually the best way to remove it is with a design change. For example, in dynamic memory allocation, memory pools drastically cut internal fragmentation by spreading the space overhead over a larger number of objects.
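The slack-space example above reduces to a small calculation. This sketch (cluster and file sizes are arbitrary example values) rounds a file up to whole clusters and reports the internal fragmentation in the last cluster.

    /* Slack space (internal fragmentation) for a file stored in fixed-size
       clusters: the file occupies whole clusters, so the tail of the last
       cluster is wasted. Sizes are arbitrary example values. */
    #include <stdio.h>

    int main(void) {
        long cluster = 4096;                                  /* cluster size in bytes */
        long file_size = 10000;                               /* file size in bytes */
        long clusters = (file_size + cluster - 1) / cluster;  /* round up to whole clusters */
        long slack = clusters * cluster - file_size;          /* unused tail of the last cluster */
        printf("%ld clusters allocated, %ld bytes of slack\n", clusters, slack);
        return 0;
    }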

External fragmentation


External fragmentation is the phenomenon in which free storage becomes divided into many small pieces over time.[1] It is a weakness of certain storage allocation algorithms, occurring when an application allocates and deallocates ("frees") regions of storage of varying sizes, and the allocation algorithm responds by leaving the allocated and deallocated regions interspersed. The result is that although free storage is available, it is effectively unusable because it is divided into pieces that are too small to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions: in fixed-partition memory schemes, the wastage of an entire partition of main memory is said to be external fragmentation.

For example, in dynamic memory allocation, a block of 1000 bytes might be requested, but the largest contiguous block of free space has only 300 bytes. Even if there are ten blocks of 300 bytes of free space, separated by allocated regions, one still cannot allocate the requested block of 1000 bytes, and the allocation request will fail.

External fragmentation also occurs in file systems as many files of different sizes are created, change size, and are deleted. The effect is even worse if a file which is divided into many small pieces is deleted, because this leaves similarly small regions of free space.
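The 1000-byte example above can be checked with a short sketch: the free holes total 3000 bytes, yet no single contiguous hole can satisfy the request (the hole sizes here are the example values from the text).

    /* External fragmentation: plenty of free space in total, but no single
       hole is large enough for the request. */
    #include <stdio.h>

    int main(void) {
        int holes[] = {300, 300, 300, 300, 300, 300, 300, 300, 300, 300};
        int nholes = sizeof holes / sizeof holes[0];
        int request = 1000, total_free = 0, found = 0;

        for (int i = 0; i < nholes; i++) {
            total_free += holes[i];
            if (holes[i] >= request) found = 1;    /* need one contiguous hole */
        }
        printf("total free = %d, request = %d, satisfiable = %s\n",
               total_free, request, found ? "yes" : "no");
        return 0;
    }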

Data fragmentation


Data fragmentation occurs when a piece of data in memory is broken up into many pieces that are not close together. It is typically the result of attempting to insert a large object into storage that has already suffered external fragmentation.

For example, files in a file system are usually managed in units called blocks or clusters. When a file system is created, there is free space to store file blocks together contiguously. This allows for rapid sequential file reads and writes. However, as files are added, removed, and changed in size, the free space becomes externally fragmented, leaving only small holes in which to place new data. When a new file is written, or when an existing file is extended, the operating system puts the new data in new non-contiguous data blocks to fit into the available holes. The new data blocks are necessarily scattered, slowing access due to seek time and rotational delay of the read/write head, and incurring additional overhead to manage additional locations. This is called file system fragmentation.

When writing a new file of a known size, if there are any empty holes that are larger than that file, the operating system can avoid data fragmentation by putting the file into any one of those holes. There are a variety of algorithms for selecting which of those potential holes to put the file into; each of them is a heuristic approximate solution to the bin packing problem. The "best fit" algorithm chooses the smallest hole that is big enough. The "worst fit" algorithm chooses the largest hole. The "first fit" algorithm chooses the first hole that is big enough. The "next fit" algorithm keeps track of where the last file was written and begins its search for a hole from there.

As another example, if the nodes of a linked list are allocated consecutively in memory, this improves locality of reference and enhances data cache performance during traversal of the list. If the memory pool's free space is fragmented, new nodes will be spread throughout memory, increasing the number of cache misses.

Just as compaction can eliminate external fragmentation, data fragmentation can be eliminated by rearranging data storage so that related pieces are close together. For example, the primary job of a defragmentation tool is to rearrange blocks on disk so that the blocks of each file are contiguous. Most defragmenting utilities also attempt to reduce or eliminate free space fragmentation. Some moving garbage collectors will also move related objects close together (this is called compacting) to improve cache performance.
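The following is a toy sketch of two of the hole-selection policies named above, first fit and best fit, over a list of free-hole sizes (the hole sizes are invented example values; this is not a complete allocator).

    /* First-fit vs. best-fit hole selection over a list of free-hole sizes. */
    #include <stdio.h>

    int first_fit(const int *holes, int n, int request) {
        for (int i = 0; i < n; i++)
            if (holes[i] >= request) return i;          /* first hole big enough */
        return -1;
    }

    int best_fit(const int *holes, int n, int request) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (holes[i] >= request &&
                (best < 0 || holes[i] < holes[best]))
                best = i;                               /* smallest hole big enough */
        return best;
    }

    int main(void) {
        int holes[] = {500, 1200, 300, 900};
        int n = sizeof holes / sizeof holes[0];
        printf("first fit for 800 bytes: hole %d\n", first_fit(holes, n, 800));
        printf("best fit for 800 bytes:  hole %d\n", best_fit(holes, n, 800));
        return 0;
    }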
