Memory Management Unit (MMU): A memory management unit enables individual threads of software to run in hardware-protected address spaces.
Real Time Operating System (RTOS): A real-time operating system is a computing environment that reacts to input within a specific time period.
Rate Monotonic Analysis (RMA): Rate monotonic analysis is frequently used by system designers to analyze and predict the timing behavior of systems.
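As a worked illustration (not part of the original glossary), the classic Liu and Layland utilization bound gives a sufficient schedulability test for n periodic tasks with computation times C_i and periods T_i under rate-monotonic priorities:

    % LaTeX: Liu & Layland sufficient schedulability bound for RMA
    U \;=\; \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\,(2^{1/n} - 1)
    % Example: three tasks with C_i/T_i of 1/4, 1/5 and 2/10 give
    % U = 0.25 + 0.20 + 0.20 = 0.65 \le 3(2^{1/3} - 1) \approx 0.780,
    % so the task set passes this sufficient test.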
Deadlock: A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.
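A minimal C sketch of how such a deadlock can arise, assuming two POSIX mutexes acquired in opposite order by two threads (the names res_a, res_b and the sleep calls are only there to make the hazard visible):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

    /* Thread 1 holds resource A and then waits for B. */
    void *worker1(void *arg) {
        pthread_mutex_lock(&res_a);
        sleep(1);                    /* widen the window so the deadlock shows */
        pthread_mutex_lock(&res_b);  /* blocks forever once worker2 holds B   */
        pthread_mutex_unlock(&res_b);
        pthread_mutex_unlock(&res_a);
        return NULL;
    }

    /* Thread 2 holds resource B and then waits for A: a circular wait. */
    void *worker2(void *arg) {
        pthread_mutex_lock(&res_b);
        sleep(1);
        pthread_mutex_lock(&res_a);
        pthread_mutex_unlock(&res_a);
        pthread_mutex_unlock(&res_b);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker1, NULL);
        pthread_create(&t2, NULL, worker2, NULL);
        pthread_join(t1, NULL);      /* never returns: both threads are blocked */
        pthread_join(t2, NULL);
        return 0;
    }

Acquiring both locks in the same global order in every thread removes the circular wait and hence the deadlock.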
Resource Allocation Graph: Deadlocks can be described more precisely in terms of a directed
graph called a system resource-allocation graph.
Deadlock Avoidance: A deadlock-avoidance algorithm dynamically examines the resource
allocation state to ensure that a circular wait condition can never exist.
Claim Edge: A resource-allocation graph can also be used for deadlock avoidance. In addition to the request and assignment edges, we introduce a new type of edge, called a claim edge, which indicates that a process may request a resource at some time in the future.
File Server System: File-server systems provide a file-system interface where clients can create, update, read, and delete files.
Kernel: The kernel is a program that constitutes the central core of a computer operating system.
MISD Multiprocessing: Multiple Instruction, Single Data is a type of parallel computing architecture where many functional units perform different operations on the same data.
Multitasking: An operating system that utilizes multitasking is one that allows more than one
program to run simultaneously.
Operating System: An operating system (OS) is a software program that manages the hardware and software resources of a computer.
Peer-to-Peer System: Peer-to-peer (P2P) computing or networking is a distributed application
architecture that partitions tasks or workloads between peers.
Real Time Operating System (RTOS): Real-time operating systems are used to control machinery, scientific instruments, and industrial systems such as embedded systems.
SIMD Multiprocessing: In a single instruction stream, multiple data stream computer, one processor handles a stream of instructions, each of which can perform calculations in parallel on multiple data locations.
Symmetric Multiprocessing: SMP involves a multiprocessor computer architecture where two
or more identical processors can connect to a single shared main memory.
System Calls: A system call is the mechanism used by an application program to request service from the operating system.
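For example, on a POSIX system a program can ask the kernel to perform I/O through the write() system call; a minimal sketch:

    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello from a system call\n";
        /* write() traps into the kernel, which performs the I/O on behalf of
           the process; file descriptor 1 is standard output. */
        write(1, msg, strlen(msg));
        return 0;
    }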
Unix-like: The Unix-like family is a diverse group of operating systems, with several major subcategories including System V, BSD, and Linux.
Buffering: A buffer is a temporary storage location for data while the data is being transferred.
Context Switch: A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another.
Cooperating Processes: Processes can cooperate with each other to accomplish a single task. Cooperating processes can:
- Improve performance by overlapping activities or performing work in parallel.
- Enable an application to achieve a better program structure as a set of cooperating processes, where each is smaller than a single monolithic program.
CPU Registers: The central processing unit (CPU) contains a number of memory locations which are individually addressable and reserved for specific purposes. These memory locations are called registers.
Inter-process Communication (IPC): In computing, Inter-process communication (IPC) is
a set of techniques for the exchange of data among multiple threads in one or more processes.
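A minimal sketch of one IPC technique, a POSIX pipe carrying a message from a parent process to its child (the message text is illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) return 1;

        if (fork() == 0) {               /* child: reads what the parent wrote  */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            close(fd[0]);
            return 0;
        }
        close(fd[0]);                    /* parent: writes into the pipe */
        const char *msg = "data via pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }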
Message-Passing System: Message passing in computer science is a form of communication
used in parallel computing, object-oriented programming, and interprocess communication.
Process Control Block (PCB): The PCB is a data structure that allows the operating system to locate key information about a process.
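The exact layout is operating-system specific, but a hypothetical PCB can be sketched as a C structure (all field names here are illustrative, not taken from any real kernel):

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Hypothetical per-process record kept by the kernel. */
    struct pcb {
        int32_t         pid;              /* process identifier                 */
        enum proc_state state;            /* current process state              */
        uint64_t        program_counter;  /* where to resume execution          */
        uint64_t        registers[16];    /* saved CPU register contents        */
        uint64_t        page_table_base;  /* memory-management information      */
        int32_t         open_files[16];   /* I/O status: open file descriptors  */
        int32_t         priority;         /* CPU-scheduling information         */
    };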
Program Counter (PC): Program instructions are uniquely identified by their program counters (PCs), which provide a convenient and accurate means of recording the context of program execution; PC-based prediction techniques have been widely used for performance optimizations at the architectural level.
Process Management: The operating system manages many kinds of activities ranging from
user programs to system programs like printer spooler, name servers, file server, etc. Each of
these activities is encapsulated in a process.
Process Scheduling: The problem of determining when processors should be assigned and to
which processes is called processor scheduling or CPU scheduling.
Process State: The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily.
Synchronization: In computer science, especially parallel computing, synchronization means
the coordination of simultaneous threads or processes to complete a task in order to get correct
runtime order and avoid unexpected race conditions.
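A minimal sketch of such coordination using a POSIX mutex: two threads increment a shared counter, and the lock makes the final value deterministic (without it, a race condition could lose updates):

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* only one thread updates at a time */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 with the mutex */
        return 0;
    }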
Thread: A thread is a single sequence stream within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution.
Process Management: Process management is a series of techniques, skills, tools, and methods used to control and manage a business process within a large system or organization.
Threads: A thread is a single sequence stream within a process.
Process: A process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
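On a POSIX system a new process instance is created with the fork() system call; a minimal sketch:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* duplicate the calling process */
        if (pid == 0) {
            printf("child  process, pid %d\n", getpid());
        } else if (pid > 0) {
            printf("parent process, pid %d, child %d\n", getpid(), pid);
            wait(NULL);                  /* wait for the child to finish */
        } else {
            perror("fork");
        }
        return 0;
    }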
Kernel: The kernel is the central component of most computer operating systems; it is a bridge
between applications and the actual data processing done at the hardware level.
Context Switch: A context switch is the computing process of storing and restoring the state (context) of a CPU so that execution can be resumed from the same point at a later time.
Multitasking: Multitasking is the ability of an operating system to execute more than one
program simultaneously.
The Cost of Context Switching: Context switching represents a substantial cost to the system
in terms of CPU time and can, in fact, be the most costly operation on an operating system.
BIOS: The BIOS software is built into the PC, and is the first code run by a PC when powered
on (boot firmware). The primary function of the BIOS is to load and start an operating
system.
CPU Scheduling: CPU scheduling algorithms have different properties, and the choice of a
particular algorithm may favor one class of processes over another.
Scheduling Algorithm: A scheduling algorithm is the method by which threads, processes, or data flows are given access to system resources (e.g. processor time, communications bandwidth).
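As an illustration (the burst times are hypothetical), this small sketch computes the average waiting time under first-come, first-served (FCFS) scheduling:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical CPU burst times, in milliseconds, in arrival order. */
        int burst[] = {24, 3, 3};
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += wait;          /* each process waits for all earlier bursts */
            wait += burst[i];
        }
        /* Waits are 0, 24 and 27 ms, so the average is 51 / 3 = 17.00 ms.
           Running the short bursts first would cut this considerably. */
        printf("average waiting time: %.2f ms\n", (double)total_wait / n);
        return 0;
    }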
Compile Time: It refers to either the operations performed by a compiler (the compile-time
operations), programming language requirements that must be met by source code for it to
be successfully compiled (the compile-time requirements), or properties of the program that
can be reasoned about at compile time.
Fragmentation: A multiprogrammed system will generally perform more efficiently if it
has a higher level of multiprogramming. For a given set of processes, we can increase the
multiprogramming level only by packing more processes into memory. To accomplish this task,
we must reduce memory waste or fragmentation. Systems with fixed-sized allocation units,
such as the single-partition scheme and paging, suffer from internal fragmentation. Systems
with variable-sized allocation units, such as the multiple-partition scheme and segmentation,
suffer from external fragmentation.
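A small worked example of internal fragmentation under paging, assuming a hypothetical 4 KB page size and process size:

    #include <stdio.h>

    int main(void) {
        const long page_size = 4096;      /* assumed page size in bytes */
        const long process_size = 72766;  /* hypothetical process size  */

        long pages = (process_size + page_size - 1) / page_size;  /* round up */
        long internal = pages * page_size - process_size;

        /* 72766 bytes needs 18 pages (73728 bytes), so 962 bytes of the last
           page are wasted as internal fragmentation. */
        printf("pages: %ld, internal fragmentation: %ld bytes\n", pages, internal);
        return 0;
    }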
Global Descriptor Table (GDT): The GDT is specific to the IA32 architecture. It contains entries telling the CPU about memory segments. A similar Interrupt Descriptor Table exists, containing task and interrupt descriptors. Read the GDT Tutorial.
Hashed Page Tables: A common approach for handling address spaces larger than 32 bits.
Local Descriptor Table (LDT): The LDT is a memory table used in the x86 architecture in protected mode, containing memory segment descriptors: start in linear memory, size, executability, writability, access privilege, and actual presence in memory.

Memory-Management Unit (MMU): The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit.
Relocation: One solution to the external-fragmentation problem is compaction. Compaction involves shifting a program in memory without the program noticing the change. This consideration requires that logical addresses be relocated dynamically, at execution time. If addresses are relocated only at load time, we cannot compact storage.
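A minimal sketch of dynamic relocation with a base (relocation) register and a limit register, as the MMU performs it at execution time (the register values are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical MMU state: a logical address is checked against the limit
       register and then relocated by adding the base register. */
    struct mmu { unsigned long base, limit; };

    unsigned long translate(struct mmu m, unsigned long logical) {
        if (logical >= m.limit) {
            fprintf(stderr, "trap: address %lu out of bounds\n", logical);
            exit(1);
        }
        return m.base + logical;         /* resulting physical address */
    }

    int main(void) {
        struct mmu m = { .base = 140000, .limit = 12000 };
        printf("logical 346 -> physical %lu\n", translate(m, 346));  /* 140346 */
        return 0;
    }

Because the mapping is applied on every reference, the operating system can move (compact) a program and only has to update the base register.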
Secondary Memory: This memory holds those pages that are not present in main
memory.
Translation Look-Aside Buffer (TLB): The standard solution to the cost of consulting an in-memory page table on every reference is to use a special, small, fast-lookup hardware cache, the TLB.
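The benefit is usually quantified as an effective access time. With a TLB hit ratio \alpha, a TLB search time t, and a memory access time m, a common form of the calculation (numbers below are illustrative) is:

    % LaTeX: effective access time with a TLB
    \text{EAT} = \alpha\,(t + m) + (1 - \alpha)\,(t + 2m)
    % Example: \alpha = 0.80, t = 20\,\text{ns}, m = 100\,\text{ns} gives
    % EAT = 0.8 \times 120 + 0.2 \times 220 = 140\,\text{ns},
    % versus 2m = 200\,\text{ns} if every reference had to walk the page table.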
Domain Name System (DNS): A system for converting host names and domain names into IP addresses on the Internet or on local networks that use the TCP/IP protocol. For example, when a Web site address is given to the DNS, either by typing a URL in a browser or behind the scenes from one application to another, DNS servers return the IP address of the server associated with that name.
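A minimal client-side sketch of a DNS lookup through the standard getaddrinfo() resolver interface ("example.com" is just a placeholder host name):

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints, *res, *p;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_INET;        /* IPv4 only, for brevity */
        hints.ai_socktype = SOCK_STREAM;

        int rc = getaddrinfo("example.com", NULL, &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }
        for (p = res; p != NULL; p = p->ai_next) {
            char ip[INET_ADDRSTRLEN];
            struct sockaddr_in *a = (struct sockaddr_in *)p->ai_addr;
            inet_ntop(AF_INET, &a->sin_addr, ip, sizeof ip);
            printf("%s\n", ip);           /* each address the resolver returned */
        }
        freeaddrinfo(res);
        return 0;
    }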
Network Information Service (NIS): A naming service from Sun that allows resources to be easily added, deleted, or relocated. Formerly known as Yellow Pages, NIS is a de facto Unix standard. NIS+ is a redesigned NIS for Solaris 2.0 products. The combination of TCP/IP, NFS, and NIS comprises the primary networking components of Unix.
Distributed File System (DFS): A set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a distributed file system. DFS provides location transparency and redundancy to improve data availability in the face of failure or heavy load by allowing shares in multiple different locations to be logically grouped under one folder, or DFS root.
Anonymous Access: The most common Web site access control method; it allows anyone to visit the public areas of your Web sites.
Free-Space List: A list of unoccupied areas of memory in main or backing store. It is a special case of an available list.
Bit-level striping: Data striping consists of splitting the bits of each byte across multiple disks; such striping is called bit-level striping.
Constant linear velocity (CLV): Constant linear velocity (CLV) is a qualifier for the rated speed of an optical disc drive, and may also be applied to the writing speed of recordable discs.
Data striping: The distribution of a unit of data over two or more hard disks, enabling the data to be read more quickly, is known as data striping.
Error correcting code (ECC): An error-correcting code is a coding system that incorporates extra parity bits in order to detect and correct errors.
Logical blocks: The logical block is the smallest unit of transfer. The size of a logical block is usually 512 bytes.
Logical formatting: Logical formatting is the process of placing a file system on a hard disk partition so that an operating system can use the available platter space to store and retrieve files.
Low-level formatting: The sector identification on a disk that the drive uses to locate sectors for reading and writing is called low-level formatting.
