
VIRTUAL MEMORY

WHAT IS VIRTUAL MEMORY?

Background
Code needs to be in memory to execute, but the entire program is rarely used
  Error code, unusual routines, large data structures
  The entire program is not needed at the same time
Consider the ability to execute a partially-loaded program
  Program no longer constrained by the limits of physical memory
  Programs can be larger than physical memory

VIRTUAL MEMORY
A technique that allows the execution of processes that are not completely in main memory.
A separation of user logical memory from physical memory.

Without virtual memory

[Figure: a 32-bit program address space (4 GB) holds Programs 0-5, but the 30-bit RAM address space (1 GB) only fits Programs 0, 1, and 2. Loading anything more is a "????": the system crashes if we try to access more RAM than we have.]

With virtual memory

[Figure: the same 32-bit program address space is MAPped onto the 30-bit RAM address space (1 GB) plus the DISK. Programs 0-2 load into RAM; when more memory is needed, the OS moves the oldest data (Program 0) to disk, so Programs 3-5 are backed by disk and the system keeps running.]

[Figure: virtual address space of a process, from address 0 to MAX: Code, Data, heap growing upward, Stack growing downward.]

Shared Library Using Virtual Memory

[Figure: two processes' virtual address spaces (Code, Data, heap, Stack) each map a shared library onto the same shared pages in physical memory.]

Demand Paging
Pages are loaded only when they are demanded during program execution.
Uses a lazy swapper (pager is the correct term, not swapper):
  A swapper manipulates the entire process
  A pager is concerned with the individual pages of a process
Decreases the swap time and the amount of physical memory needed.

Demand Paging
The hardware to support demand paging is the same as the hardware for paging and swapping:
  Page table. This table has the ability to mark an entry invalid through a valid-invalid bit or a special value of protection bits.
  Secondary memory. This memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of disk used for this purpose is known as swap space.

Demand Paging
[Figure: transfer of a paged memory to contiguous disk space; Program A is swapped out of main memory to disk and Program B is swapped in.]

Demand Paging
Benefits
Less I/O needed
Less memory needed
More users

Demand Paging
Valid-Invalid Bit Scheme
  When the bit is set to valid (v), the associated page is both legal and in memory.
  When the bit is set to invalid (i), the page is not in memory.

[Figure: page table when some pages are not in main memory. Logical memory holds pages a-h; the page table maps memory-resident pages to frames in physical memory (e.g., page a to frame 4) with the valid-invalid bit set to v, while pages that are only on disk are marked i.]

What happens if the process tries to access a page that is not brought into memory?

Access to a page marked invalid causes a page fault. The paging hardware, in translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system. This trap is the result of the operating system's failure to bring the desired page into memory.

Page fault
1. OS looks at another table to decide:
   Invalid reference => abort
   Just not in memory
2. Get empty frame
3. Swap page into frame
4. Reset tables
5. Set validation bit = v
6. Restart the instruction that caused the fault

[Figure: steps in handling a page fault. 1. reference (load M); 2. trap to the OS; 3. the page is found on the backing store; 4. the missing page is brought into a free frame; 5. the page table is reset (bit i becomes v); 6. the instruction is restarted.]
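These steps can be mimicked from user space. The sketch below is a user-space analogy (an addition, assuming Linux/POSIX; calling mprotect() from a signal handler is not strictly portable): a page starts out inaccessible, the first access traps via SIGSEGV, the handler "brings the page in" by making it accessible, and the kernel restarts the faulting instruction.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static char *page;
static size_t page_size;

/* The "page-fault handler": make the page valid and return, so the
   kernel restarts the instruction that caused the fault. */
static void handler(int sig, siginfo_t *info, void *ctx)
{
    char *addr = (char *)info->si_addr;
    (void)sig; (void)ctx;
    if (addr >= page && addr < page + page_size)
        mprotect(page, page_size, PROT_READ | PROT_WRITE); /* "bring page in" */
    else
        _exit(1);                                          /* invalid reference: abort */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    page_size = (size_t)sysconf(_SC_PAGESIZE);
    page = mmap(NULL, page_size, PROT_NONE,        /* page starts "not in memory" */
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    page[0] = 'x';    /* traps; handler marks the page valid; instruction restarts */
    printf("after the fault: %c\n", page[0]);
    return 0;
}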

Copy on Write
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory.
If either process modifies a shared page, only then is the page copied.
COW allows more efficient process creation, as only modified pages are copied.
Free pages are allocated from a pool using a technique called zero-fill-on-demand. Zero-fill-on-demand pages have been zeroed out before being allocated, thus erasing the previous contents.

[Figure: before Process 1 modifies Page C, Process 1 and Process 2 both point to the same Pages A, B, and C in physical memory.]

[Figure: after Process 1 modifies Page C, Process 1 points to a new Copy of Page C in physical memory, while Process 2 still points to the original Page C.]
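A minimal POSIX sketch (an added illustration, assuming a Unix-like system) makes this visible: after fork(), parent and child share the heap page until the child writes to it, at which point the child gets a private copy and the parent's data is untouched.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *buf = malloc(4096);         /* heap page shared copy-on-write after fork() */
    strcpy(buf, "original");

    pid_t pid = fork();
    if (pid == 0) {                   /* child */
        strcpy(buf, "modified");      /* first write: the kernel copies the page */
        printf("child sees:  %s\n", buf);
        exit(0);
    }
    wait(NULL);
    printf("parent sees: %s\n", buf); /* still "original": the copy was private */
    free(buf);
    return 0;
}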

PAGE REPLACEMENT
Basic to demand paging
  Completes the separation between logical memory and physical memory
  A condition for the existence of demand paging
No demand paging:
  User addresses are mapped into physical addresses
  All pages of a process still must be in physical memory
With demand paging:
  The size of the logical address space is no longer constrained by physical memory
Two major problems to solve in order to implement demand paging:
  Frame-allocation algorithm
    Must decide how many frames to allocate to each process
    Must select the frames that are to be replaced
  Page-replacement algorithm
    Want the lowest page-fault rate on both first access and re-access
    Evaluated by running it on a reference string

BASIC PAGE REPLACEMENT
If no frame is free, we find one that is not currently being used and free it.
Steps:
  Find the location of the desired page on the disk
  Find a free frame:
    If there is a free frame, use it
    If there is no free frame, use a page-replacement algorithm to select a victim frame
  Write the victim frame to the disk; change the page and frame tables
  Read the desired page into the newly freed frame; change the page and frame tables
  Continue the user process from where the page fault occurred
If the referenced page is already in a frame, no fault occurs for that reference.

MODIFY BIT (DIRTY BIT)
The modify bit for a page is set by the hardware whenever any byte in the page is written into, indicating that the page has been modified.
When selecting a page for replacement, the modify bit is examined:
  If the bit is set:
    Page has been modified since it was read from the disk
    We must write the page to the disk
  If the bit is not set:
    Page has not been modified since it was read into memory
    Need not write the memory page to the disk

FIFO PAGE REPLACEMENT ALGORITHM
First In, First Out
Recording the time when each page is brought in is not strictly necessary; a FIFO queue holds the pages in memory
  Replace the page at the head of the queue
  When a page is brought into memory, insert it at the tail of the queue
Looking backward in time
Performance is not always good
A bad replacement choice increases the page-fault rate and slows process execution; it does not cause incorrect execution
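Below is a minimal C simulation of FIFO replacement (an illustration added here; the reference string and the 3-frame memory are assumed, not from the slides). It counts page faults while replacing the page at the head of the queue.

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2}; /* reference string */
    int n = sizeof ref / sizeof ref[0];
    int frames[NFRAMES];
    int head = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < NFRAMES) {
                frames[used++] = ref[i];   /* free frame available */
            } else {
                frames[head] = ref[i];     /* replace page at the queue head */
                head = (head + 1) % NFRAMES;
            }
        }
    }
    printf("FIFO page faults: %d\n", faults);
    return 0;
}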

BELADY'S ANOMALY
For some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases

OPTIMAL PAGE REPLACEMENT ALGORITHM
Lowest page-fault rate of all algorithms for a fixed number of frames
Never suffers from Belady's anomaly
Simply, replace the page that will not be used for the longest period of time
Looking forward in time

LRU PAGE REPLACEMENT ALGORITHM
Least Recently Used
Often used and considered to be good
Associates with each page the time of that page's last use
Chooses the page that has not been used for the longest period of time

MAJOR PROBLEM: HOW TO IMPLEMENT LRU?
Counters
  Associate with each page-table entry a time-of-use field
  Add to the CPU a logical clock or counter
  The clock is incremented for every memory reference
  Whenever a reference to a page is made, the contents of the clock register are copied to the time-of-use field in the page-table entry for that page
  Overflow of the clock must be considered
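A minimal C sketch of the counter scheme (illustrative; the reference string and frame count are assumed). The loop index serves as the logical clock, each reference copies the clock into the page's time-of-use field, and the victim is the page with the smallest value.

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof ref / sizeof ref[0];
    int page[NFRAMES], last_use[NFRAMES];  /* time-of-use field per frame */
    int used = 0, faults = 0;

    for (int clock = 0; clock < n; clock++) { /* clock ticks once per reference */
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == ref[clock]) { hit = j; break; }
        if (hit >= 0) {
            last_use[hit] = clock;            /* copy clock into time-of-use */
        } else {
            faults++;
            int victim = 0;
            if (used < NFRAMES) {
                victim = used++;
            } else {
                for (int j = 1; j < NFRAMES; j++)  /* smallest time = LRU page */
                    if (last_use[j] < last_use[victim]) victim = j;
            }
            page[victim] = ref[clock];
            last_use[victim] = clock;
        }
    }
    printf("LRU page faults: %d\n", faults);
    return 0;
}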

MAJOR PROBLEM: HOW TO IMPLEMENT LRU?
Stack
  Whenever a page is referenced, it is removed from the stack and put on the top
  Entries must be removed from the middle of the stack
  It is best implemented using a doubly linked list with a head pointer and a tail pointer

LRU-APPROXIMATION PAGE REPLACEMENT ALGORITHM

Reference bit
  Associated with each entry in the page table
  Initially, all bits are 0
  As the user process executes, the bit associated with each page referenced is set to 1

ADDITIONAL-REFERENCE-BITS ALGORITHM
At regular intervals, a timer interrupt transfers control to the operating system
The operating system shifts the reference bit for each page into the high-order bit of its 8-bit byte
  Shifting the other bits right by 1 and discarding the low-order bit

SECOND-CHANCE ALGORITHM
When a page has been selected, inspect its reference bit
If the value is 0:
  Proceed to replace the page
If the value is 1:
  Give the page a second chance and move on to the next FIFO page
When a page gets a second chance:
  Its reference bit is cleared
  Its arrival time is set to the current time

CLOCK ALGORITHM
Second-Chance algorithm implemented using a circular queue
A pointer indicates which page is to be replaced next
When a frame is needed, the pointer advances until it finds a page with a 0 reference bit
  Clears reference bits as it advances
Once a victim page is found, the page is replaced, and the new page is inserted in the circular queue in that position
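A minimal C sketch of the clock algorithm (illustrative; the reference string and frame count are assumed, as above). The hand clears reference bits as it advances and stops at the first page whose bit is 0.

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof ref / sizeof ref[0];
    int page[NFRAMES], refbit[NFRAMES];
    int hand = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (page[j] == ref[i]) { refbit[j] = 1; hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < NFRAMES) {                 /* free frame available */
            page[used] = ref[i]; refbit[used] = 1; used++;
            continue;
        }
        /* advance the hand until a page with reference bit 0 is found,
           clearing bits (granting second chances) along the way */
        while (refbit[hand]) { refbit[hand] = 0; hand = (hand + 1) % NFRAMES; }
        page[hand] = ref[i];
        refbit[hand] = 1;
        hand = (hand + 1) % NFRAMES;
    }
    printf("Clock page faults: %d\n", faults);
    return 0;
}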

ENHANCED SECOND-CHANCE ALGORITHM
Considers the reference bit and the modify bit as an ordered pair:
(0, 0) neither recently used nor modified: best page to replace
(0, 1) not recently used but modified: not quite as good, because the page will need to be written out before replacement
(1, 0) recently used but clean: probably will be used again soon
(1, 1) recently used and modified: probably will be used again soon, and the page will need to be written out to disk before it can be replaced

ENHANCED SECOND-CHANCE ALGORITHM
When page replacement is called for:
  Use the same scheme as in the clock algorithm, but instead examine the class to which each page belongs
  Replace the first page encountered in the lowest nonempty class
  Give preference to those pages that have been modified, in order to reduce the number of I/Os required

COUNTING-BASED PAGE REPLACEMENT
Keep a counter of the number of references that have been made to each page:
  Least Frequently Used
  Most Frequently Used

LEAST FREQUENTLY USED (LFU)
Requires that the page with the smallest count be replaced
The reasoning is that an actively used page should have a large reference count
A problem arises when a page is used heavily during the initial phase of a process but then is never used again
Solution:
  Shift the counts right by 1 bit at regular intervals, forming an exponentially decaying average usage count

MOST FREQUENTLY USED (MFU)

Based on the argument that the page with the smallest count was probably just
brought in and has yet to be used

PAGE-BUFFERING ALGORITHMS
Maintain a list of modified pages
  A modified page is selected and written to the disk
  Its modify bit is then reset
A modification is to keep a pool of free frames but remember which page was in each frame
  The old page can be reused directly from the free-frame pool if it is needed before the frame is reused
  No I/O needed
When a page fault occurs:
  First check whether the desired page is in the free-frame pool
  Otherwise, select a free frame and read the page into it

VAX/VMS SYSTEM ALONG WITH FIFO ALGORITHM
When FIFO mistakenly replaces a page that is still in active use:
  The page is quickly retrieved from the free-frame pool, and no I/O is necessary
The free-frame pool provides protection against the relatively poor, but simple, FIFO replacement algorithm

APPLICATIONS AND PAGE REPLACEMENT
Databases
  Understand their memory use and disk use better than does an operating system implementing algorithms for general-purpose use
Data Warehouses
  An LRU algorithm would be removing the old pages and preserving new ones
  But the application would more likely be reading the older pages than the newer ones
  MFU is actually more efficient than LRU here

APPLICATIONS AND PAGE REPLACEMENT
Raw Disk
  A disk partition used as a large sequential array of logical blocks, without any file-system data structures
Raw I/O
  I/O to a raw disk
  Bypasses all the file-allocation services, file names, and directories

ALLOCATION OF FRAMES
The simplest solution on a single-user system is to place the free frames in a free-frame list; a process that generates a page fault is then given a frame from this list.
When the free-frame list is exhausted, a page-replacement algorithm is used to select one of the in-memory pages to be replaced with the next one, and so on. When the process terminates, its frames are put back on the free-frame list.
There are variations on this strategy. One is keeping a certain number of frames on the free-frame list so that there is always a free frame available when a page fault is generated.

ALLOCATION OF FRAMES
Each process needs a minimum number of frames allocated to it.
However, there is only a limited number of total frames in the system.
There are two major allocation schemes:
  fixed allocation
  priority allocation

FIXED ALLOCATION
Equal Allocation: If there are n processes and m free frames, then each process gets m/n frames; keep the extra frames as a free-frame buffer pool.
Proportional allocation: Allocate according to the size of the process.
  s_i = size of process p_i
  S = sum of all s_i
  m = total number of frames
  a_i = allocation for p_i = (s_i / S) x m
Dynamic, as the degree of multiprogramming and the process sizes change.
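Worked example (illustrative numbers, not from these slides): with m = 62 free frames, s_1 = 10 and s_2 = 127, we get S = 137; then a_1 = (10/137) x 62 ≈ 4 frames and a_2 = (127/137) x 62 ≈ 57 frames.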

PRIORITY ALLOCATION
Uses the same equation as proportional allocation, but instead of s_i = size of p_i, it uses s_i = priority of p_i.
It also enables a higher-priority process to take frames from a lower-priority process when it generates a page fault.

PAGE REPLACEMENT
There are two general types of page-replacement algorithms: global allocation and local allocation.
Global allocation allows a process to select a replacement frame from anywhere in the system.
  Allows more throughput overall but is less consistent in the completion times of individual processes.
  More commonly used.
Local allocation only allows a process to replace pages from its own page pool.
  Reliable and consistent completion times for individual processes.
  Less throughput overall because some memory might be underutilized.

NUMA (NON-UNIFORM MEMORY ACCESS)
Main memory cannot be accessed by all parts of the system in the same way; some processors can access some parts of it faster than other parts.
Optimal performance can only be achieved by allocating the memory that a processor can access the fastest.
It can also be done by scheduling a thread on the same system board as its required memory.
Solved by Solaris by creating lgroups:
  A structure to track CPU/memory low-latency groups
  Used by the scheduler and the pager
  When possible, schedule all threads of a process and allocate all memory for that process within the lgroup

THRASHING
When there is a lack of frames for a process, its page-fault rate increases.
If the process then spends more time page faulting than executing, this activity is called thrashing.
This happens when a process page faults to get a page, so it starts to take pages away from other processes; but those processes need their pages, so they also take pages away from other processes, and so on.
This leads to low CPU utilization, and the OS, thinking it needs to increase the degree of multiprogramming, adds another process, which continues the thrashing.

THRASHING
One can limit the effects of thrashing by using local allocation, since processes then cannot take pages away from other processes.
  However, the processes that are page faulting will still continue to do so until they complete, so they will tie up the page-fault queues.
One can also use the locality model, which states that as a process executes it moves from one group of pages that are used together, known as a locality, to another. Localities may overlap.

WORKING SET MODEL
Δ = working-set window = a fixed number of page references
WSSi (working set size of process Pi) = total number of distinct pages referenced in the most recent Δ (varies in time)
D = Σ WSSi = total demand for frames
If D > total number of frames, then thrashing occurs, so suspend or swap out one of the processes.
You can monitor and approximate the working set by setting a fixed-interval timer interrupt and checking reference bits: if a page's bit = 1, that page is in the working set.
This can be made more accurate by increasing the frequency of the interrupt, but it will then also be more costly to maintain.
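As a hedged illustration (the reference string and Δ = 4 are assumed, not from the slides), WSS at time t can be computed as the number of distinct pages among the most recent Δ references:

#include <stdio.h>

#define DELTA 4   /* working-set window: the last DELTA page references */

/* WSS = number of distinct pages referenced in the most recent window */
static int wss(const int *ref, int t, int delta)
{
    int start = (t - delta + 1 < 0) ? 0 : t - delta + 1;
    int count = 0;
    for (int i = start; i <= t; i++) {
        int seen = 0;
        for (int j = start; j < i; j++)
            if (ref[j] == ref[i]) { seen = 1; break; }
        if (!seen) count++;   /* first occurrence inside the window */
    }
    return count;
}

int main(void)
{
    int ref[] = {1, 2, 1, 3, 4, 4, 4, 3, 3, 2};
    int n = sizeof ref / sizeof ref[0];
    for (int t = 0; t < n; t++)
        printf("t=%d page=%d WSS=%d\n", t, ref[t], wss(ref, t, DELTA));
    return 0;
}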

PAGE FAULT FREQUENCY
A more direct approach to solving thrashing.
It uses the page-fault rate of a process; an allowable range for the page-fault rate is chosen.
If a process has a higher page-fault rate than allowable, a frame is allocated to it.
If a process has a lower page-fault rate than allowable, a frame is taken from it.

MEMORY MAPPED FILES
Memory mapping a file allows it to be treated as routine memory access by allowing a part of the virtual address space to be logically associated with the file.
This is done by mapping a disk block to a page or pages in memory.
Initial access proceeds through ordinary demand paging, resulting in a page fault; a page-sized portion of the file is then read from the file system into a physical page, which enables subsequent access to the file to be handled as routine memory access.
The data on the disk is not necessarily updated synchronously; depending on the OS, it is updated when the system periodically checks for changes made.
On file close, the data is written back to the disk and removed from the virtual memory of the process.

MEMORY MAPPED FILES
Some OSes use memory-mapped files for standard I/O, e.g. Solaris
Processes can explicitly request memory mapping of a file using the mmap() system call
In Solaris, even without using mmap(), files are memory mapped, but into kernel address space instead of user address space
mmap() can also be used to implement shared memory, by memory mapping the same file in multiple processes
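For comparison with the Windows example that follows, here is a minimal POSIX mmap() sketch (an added illustration; it assumes temp.txt exists and is non-empty). A store through the mapping modifies the file via routine memory access:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("temp.txt", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* map the whole file; access is now routine memory access,
       faulted in page by page through demand paging */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = '#';                       /* modifies the file through memory */
    printf("first bytes: %.16s\n", p);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}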

SHARED MEMORY IN THE WINDOWS API
To share a file in memory using file mapping, you need to first create a file mapping for the file and then establish a view of the file in the process's virtual address space. The mapped file then represents the shared memory to be used by other processes.
A simple illustration: use the CreateFile() function to open the file; it returns a HANDLE to the opened file.
The process can then use CreateFileMapping() to create a mapping of this file HANDLE.
The process can then establish a view of the mapped file in its virtual address space with the MapViewOfFile() function.
The view of the mapped file represents the portion of the file being mapped into the virtual address space of the process; the entire file or only a portion of it may be mapped.

EXAMPLE PROGRAM
#include <windows.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    HANDLE hFile, hMapFile;
    LPVOID lpMapAddress;

    hFile = CreateFile("temp.txt",        /* file name */
        GENERIC_READ | GENERIC_WRITE,     /* read/write access */
        0,                                /* no sharing of the file */
        NULL,                             /* default security */
        OPEN_ALWAYS,                      /* open new or existing file */
        FILE_ATTRIBUTE_NORMAL,            /* routine file attributes */
        NULL);                            /* no file template */

    hMapFile = CreateFileMapping(hFile,   /* file handle */
        NULL,                             /* default security */
        PAGE_READWRITE,                   /* read/write access to mapped pages */
        0,                                /* map entire file */
        0,
        TEXT("SharedObject"));            /* named shared memory object */

    lpMapAddress = MapViewOfFile(hMapFile, /* mapped object handle */
        FILE_MAP_ALL_ACCESS,              /* read/write access */
        0,                                /* mapped view of entire file */
        0,
        0);

    /* write to shared memory */
    sprintf((char *)lpMapAddress, "Shared memory message");

    UnmapViewOfFile(lpMapAddress);
    CloseHandle(hFile);
    CloseHandle(hMapFile);
    return 0;
}

MEMORY-MAPPED I/O
On OSes that use memory-mapped I/O, instead of mapping the file into the user address space, it is mapped into the kernel address space.
A range of memory addresses is usually set aside just for this, usually contiguous, especially for device I/O.
Usually implemented for devices with fast response times, like video controllers.

ALLOCATING KERNEL MEMORY

KERNEL MEMORY
Often allocated from a free-memory pool different from the list used to satisfy ordinary user-mode processes.

PRIMARY REASONS
The kernel requests memory for structures of varying sizes, some of which are less than a page in size.
Some kernel memory needs to be contiguous.

BUDDY SYSTEM
First Strategy

BUDDY SYSTEM
Allocates memory from a fixed-size segment consisting of physically contiguous pages.
Memory is allocated using a power-of-2 allocator:
  Satisfies requests in units sized as a power of 2
  A request not appropriately sized is rounded up to the next higher power of 2

BUDDY SYSTEM
Initial memory segment size = 256 KB
Kernel request = 21 KB of memory
[Figure: buddy system allocation. The 256 KB segment is split into two 128 KB buddies AL and AR; AL is split into two 64 KB buddies BL and BR; BL is split into two 32 KB buddies CL and CR. The 21 KB request is satisfied from the 32 KB segment CL.]
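The power-of-2 rounding step is easy to sketch in C (illustrative, not actual kernel code); a 21 KB request rounds up to the 32 KB segment CL from the figure:

#include <stdio.h>

/* round a request up to the next power of 2, as the buddy allocator does */
static unsigned long next_pow2(unsigned long n)
{
    unsigned long p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main(void)
{
    unsigned long request = 21 * 1024;    /* 21 KB, as in the figure */
    printf("request %lu bytes -> %lu-byte segment\n",
           request, next_pow2(request));  /* 32 KB: segment CL */
    return 0;
}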

SLAB ALLOCATION
Second Strategy

SLAB ALLOCATION
A slab is made up of one or more physically contiguous pages.
A cache consists of one or more slabs.
  There is a single cache for each unique kernel data structure.
Each cache is populated with objects: instantiations of the kernel data structure the cache represents.

SLAB ALLOCATION
free: The objects, which are initially marked as free, are allocated to the cache. When a new object for a kernel data structure is needed, the slab allocator can assign any free object from the cache to satisfy the request.
used: The object assigned from the cache is marked as used.

SLAB ALLOCATION IN LINUX
Scenario: the kernel requests memory from the slab allocator for an object representing a process descriptor.
  struct task_struct requires approximately 1.7 KB of memory
When the Linux kernel creates a new task, it requests the necessary memory for the struct task_struct object from its cache.
The cache fulfills the request using a struct task_struct object that has already been allocated in a slab and is marked as free.

SLAB ALLOCATION IN LINUX
A slab may be in one of three possible states:
1. Full: All objects in the slab are marked as used.
2. Empty: All objects in the slab are marked as free.
3. Partial: The slab consists of both used and free objects.

SLAB ALLOCATION IN LINUX


The slab allocator first attempts to satisfy the request
with a free object in a partial slab.
If none exists, a free object is assigned from an empty
slab.
If no empty slabs are available, a new slab is allocated
from contiguous physical pages and assigned to a
cache; memory for the object is allocated from this
slab.

SLAB ALLOCATOR MAIN BENEFITS
1. No memory is wasted due to fragmentation.
  Each unique kernel data structure has an associated cache.
  Each cache is made up of one or more slabs that are divided into chunks the size of the objects being represented.
2. Memory requests can be satisfied quickly.
  Objects are created in advance and thus can be quickly allocated from the cache.
  When the kernel has finished with an object and releases it, it is marked as free and returned to its cache.

SLAB ALLOCATOR
Slab allocation started in Solaris and is now widespread for both kernel-mode and user memory in various OSes.
Linux 2.2 had SLAB; Linux now has both SLOB and SLUB allocators.
SLOB (Simple List of Blocks): designed for systems with limited memory
  Works by maintaining 3 lists of objects: small, medium, and large
SLUB: a performance-optimized SLAB that removes the per-CPU queues
  Moves the metadata that was stored with each SLAB allocation into the page structure

OTHER CONSIDERATIONS: PREPAGING
When a swapped-out process is restarted, all its pages are on the disk, and each must be brought in by its own page fault.
Prepaging is an attempt to prevent this high level of initial paging.
Strategy:
  Bring into memory at one time all the pages that will be needed, before they are referenced.
Disadvantage:
  If prepaged pages go unused, I/O and memory are wasted.
  If the fraction of prepaged pages actually used is close to 0, prepaging loses.

OTHER CONSIDERATIONS: PAGE SIZE
There is no single best page size; a set of factors supports various sizes.
Powers of 2, typically 4,096 (2^12) to 4,194,304 (2^22) bytes
Consider some factors in page size selection:
  Internal fragmentation & locality favor a small page size
  Table size, I/O time & page faults favor a large page size
With a smaller page size, we have better resolution, allowing us to isolate only the memory that is actually needed. With a larger page size, we must allocate and transfer not only what is needed but also anything else that happens to be in the page, whether it is needed or not.

OTHER CONSIDERATIONS: TLB REACH
The amount of memory accessible from the TLB; simply:
  (TLB Size) x (Page Size)
Ideally, the working set for a process is stored in the TLB. If it is not, there is a high degree of page faults.
For increasing the TLB reach:
  Increase the size of the page
    This may lead to an increase in fragmentation, as not all applications require a large page size
  Provide multiple page sizes
    This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation
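Worked example (illustrative numbers, not from the slides): a 64-entry TLB with 4 KB pages gives a TLB reach of 64 x 4 KB = 256 KB; a working set larger than that cannot be covered by the TLB.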

OTHER CONSIDERATIONS: PROGRAM STRUCTURE
Program structure matters. Consider int data[128][128];
The array is stored row major; that is, it is stored data[0][0], data[0][1], ..., data[0][127], data[1][0], data[1][1], ..., data[127][127]
For pages of 128 words, each row is stored in one page.

Program 1
for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;
Each row takes one page, so this code zeros one word in each page before moving on. If the operating system allocates fewer than 128 frames to the entire program, its execution will result in 128 x 128 = 16,384 page faults.

Program 2
for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
This code zeros all the words on one page before starting the next page, reducing the count to 128 page faults.

OTHER CONSIDERATIONS: I/O INTERLOCK
When demand paging is used, we sometimes need to allow some of the pages to be locked in memory.
Consider I/O: pages that are used for copying a file from a device. Data are always copied between system memory and user memory.
Allow pages to be locked into memory.
[Figure: to write a block on tape, the block is copied from disk into system memory and then written (system write) to tape; the pages containing the block are locked in memory for the duration of the I/O.]
Pinning of pages into memory is fairly common, and most operating systems have a system call allowing an application to request that a region of its logical address space be pinned.
The system can then continue as usual. When the I/O is complete, the pages are unlocked.

OPERATING-SYSTEM EXAMPLES: WINDOWS
Implements virtual memory using demand paging with clustering.
  Clustering handles page faults by bringing in not only the faulting page but also several pages following the faulting page.
Working-set minimum
  The minimum number of pages the process is guaranteed to have in memory.
Working-set maximum
  If sufficient memory is available, a process may be assigned as many pages as its working-set maximum.
When the amount of free memory falls below a threshold, the virtual memory manager uses a tactic known as automatic working-set trimming to restore the value above the threshold.
  Removes pages from processes that have pages in excess of their working-set minimum.

OPERATING-SYSTEM EXAMPLES: SOLARIS
Maintains a list of free pages to assign to faulting processes.
lotsfree: a parameter representing the threshold (amount of free memory) at which to begin paging.
pageout: starts up if the number of free pages falls below lotsfree.
  Scans pages using a modified clock algorithm.
  Called more frequently depending on the amount of free memory available.
scanrate: the rate at which pages are scanned; ranges from slowscan to fastscan.
desfree: a threshold parameter for increasing paging.
priority paging: distinguishes pages that have been allocated to processes from pages allocated to regular files.

HOMEWORK
1. When is a process considered to be thrashing?
2. How does memory mapping speed up the system?
3. Consider having m = 90 free frames, s0 = 5, s1 = 15, and s2 = 35. If you are using the proportional allocation algorithm, how many frames will be allocated to p0, p1, and p2?
4. Given the reference string 1, 3, 7, 5, 2, 1, 0, 2, 4, 3, 0, 6, 7, 1, 3, 1, 2, 5, 6, 3, 7, 5, 1, 3 and memory with a size of 4 frames, show how many page faults occur using the clock (LRU-approximation) algorithm. (Show your solution.)
