
BCA 2050 COMPUTER ORGANIZATION

Q1 What are the input and output devices?


ANS. 1) Input Devices: The computer accepts coded information through the input unit, which reads the instructions and data to be processed. The most commonly used input device is the keyboard of a video terminal, which is electronically connected to the processing part of the computer. The keyboard is wired so that whenever a key is pressed, the corresponding letter or digit is automatically translated into its code and sent directly either to memory or to the processor.
2) Output Devices: The output unit displays the processed results. Examples are video terminals and graphic displays. I/O devices do not alter the information content or the meaning of the data. Some devices can be used for output only, e.g. graphic displays. The following are the Input/Output techniques:

Programmed I/O
Interrupt-driven I/O
Direct Memory Access (DMA)

Q2 List the fundamental design issues in designing an instruction set.


Ans. Instruction Set Design: The design of an instruction set is very complex, since it affects many different aspects of the computer system. The instruction set defines many of the functions performed by the CPU and is the programmer's means of controlling the CPU.
The fundamental design issues in designing an instruction set are:
Operation Repertoire: How many and which operations to provide, and how complex those operations should be.
Data Types: The various types of data upon which operations are performed.
Instruction Format: Instruction length (in bits), number of addresses, sizes of various
fields, and so on.
Registers: Number of CPU registers that can be referenced by instructions and their use.
Addressing: The mode or modes by which the address of an operand is specified.

These issues are highly interrelated and must be considered together in designing an instruction
set.
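
As an illustration of the instruction format issue above, here is a minimal sketch of a hypothetical 16-bit single-address instruction word with a 4-bit opcode, a 2-bit addressing-mode field and a 10-bit address field. The field widths are assumptions chosen only for this example and do not describe any real machine.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical 16-bit instruction word:
     *   bits 15..12  opcode   (4 bits  -> up to 16 operations)
     *   bits 11..10  mode     (2 bits  -> up to 4 addressing modes)
     *   bits  9..0   address  (10 bits -> 1K directly addressable words) */
    static uint16_t encode(uint16_t opcode, uint16_t mode, uint16_t address)
    {
        return (uint16_t)(((opcode & 0xFu) << 12) | ((mode & 0x3u) << 10) | (address & 0x3FFu));
    }

    int main(void)
    {
        uint16_t word = encode(0x3 /* e.g. ADD */, 0x1 /* e.g. indirect */, 0x1A5);

        /* Decode the fields back out of the 16-bit word. */
        printf("opcode=%u mode=%u address=0x%03X\n",
               (word >> 12) & 0xFu, (word >> 10) & 0x3u, word & 0x3FFu);
        return 0;
    }

Widening the address field shrinks the opcode or mode fields, which is exactly the kind of trade-off among operation repertoire, addressing and instruction length referred to above.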
Q3 What is the meaning of arithmetic associativity? Explain in brief.
Ans. Computer arithmetic associativity: Programs have typically been written first to run sequentially before being rewritten to run concurrently, so a natural question is: do the two versions get the same answer? If the answer is no, you presume there is a bug in the parallel version that you need to track down. This approach assumes that computer arithmetic does not affect the results when going from sequential to parallel. This assumption holds for two's complement integers even if the computation overflows; in other words, integer addition is associative. But because floating point numbers are approximations of real numbers and because their arithmetic has limited precision, floating point addition is not associative. A more confusing version of this pitfall occurs on a parallel computer where the operating system scheduler may use a different number of processors depending on what other programs are running. The unaware parallel programmer may be confused when a program gets slightly different answers on each run despite identical code and input, because the varying number of processors causes the floating point sums to be calculated in different orders. Given this quandary, programmers who write parallel code with floating point numbers need to verify whether the results are credible even if they do not give exactly the same answers as the sequential code. The field that deals with such issues is called numerical analysis.
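
A minimal sketch of the point above: wraparound (two's complement style) integer addition gives the same answer in any order even when it overflows, while IEEE 754 floating point addition can give different answers in different orders because a large value absorbs a small one. The values below are chosen only to make the effect visible in single precision.

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* Wraparound addition is associative, even though both
         * groupings overflow the 32-bit range. */
        uint32_t i = 4000000000u, j = 500000000u, k = 1000000000u;
        printf("integers: %" PRIu32 " vs %" PRIu32 "\n", (i + j) + k, i + (j + k));

        /* Floating point addition is not associative: 1.0e8f absorbs 1.0f. */
        float a = 1.0f, b = 1.0e8f, c = -1.0e8f;
        printf("floats:   %.1f vs %.1f\n", (a + b) + c, a + (b + c));
        return 0;
    }

The two integer sums print the same value, but the two floating point sums print 0.0 and 1.0, which is the kind of order-dependent difference a parallel run with a varying number of processors can produce.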
Q4 Briefly explain the read and write cycles of an 8086 processor.
Ans. Read and Write Cycles: Having looked at the instruction cycle of the processor, let us now examine the read and write cycles. A read cycle transfers data from memory or input/output (I/O) to the CPU; a write cycle transfers data from the CPU to memory or I/O.
Read Cycle: The read cycle consists of the following steps:
1. The processor starts a read bus cycle by floating the address of the memory location on the address lines.
2. After the address lines are steady, the processor asserts the address strobe signal on the bus. The address strobe indicates the validity of the address lines.
3. The processor then sets the Read/Write signal high, i.e. read.
4. Subsequently, the processor asserts the data strobe signal. The data strobe signals to the memory that the processor is ready to read data.
5. The memory subsystem decodes the address and puts the data on the data lines.
6. The memory subsystem then asserts the data acknowledge signal. Data acknowledge signals to the processor that valid data can now be latched in.
7. The processor latches in the data and negates the data strobe. This signals to the memory that the data has been latched by the processor.
8. The processor also negates the address strobe signal.
9. The memory subsystem now negates the data acknowledge signal. This marks the end of the read bus cycle.
Write Cycle: The steps of the write cycle are as follows:
1. The processor starts a write bus cycle by floating the address of the memory location on the address lines.
2. After the address lines are steady, the processor asserts the address strobe signal on the bus. The address strobe indicates the validity of the address lines.
3. The processor then sets the Read/Write signal low, i.e. write.
4. After this, the processor puts the data on the data lines.
5. At this point, the processor asserts the data strobe signal. This signals to the memory that the processor has valid data for the memory write operation.
6. The memory subsystem decodes the address and writes the data into the addressed memory location.
7. The memory subsystem then asserts the data acknowledge signal. This signals to the processor that the data has been written to memory.
8. The processor then negates the data strobe, signalling that the data is no longer valid.
9. The processor also negates the address strobe signal.
10. The memory subsystem now negates the data acknowledge signal, marking the end of the write bus cycle.
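
The read and write cycles share the same strobe/acknowledge handshake, differing only in the level of the Read/Write signal and the direction of the data transfer. The following is a rough simulation of the steps listed above, using plain variables for the bus signals; the structure and names are simplifications for illustration, not 8086 driver code.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <inttypes.h>

    /* Simplified bus lines: true = asserted. */
    struct bus {
        uint32_t address, data;
        bool addr_strobe, data_strobe, data_ack;
        bool read_write;                 /* high = read, low = write */
    };

    static uint32_t memory[16];          /* tiny simulated memory subsystem */

    /* One bus cycle; is_write selects the write sequence. */
    static uint32_t bus_cycle(struct bus *b, uint32_t addr, uint32_t wdata, bool is_write)
    {
        b->address = addr;               /* 1. CPU floats the address            */
        b->addr_strobe = true;           /* 2. address strobe marks it valid     */
        b->read_write = !is_write;       /* 3. R/W high for read, low for write  */
        if (is_write)
            b->data = wdata;             /* 4. (write) CPU drives the data lines */
        b->data_strobe = true;           /* 4/5. data strobe: CPU is ready       */

        if (is_write)                    /* memory decodes the address ...       */
            memory[addr % 16] = b->data; /* 6. (write) store into the location   */
        else
            b->data = memory[addr % 16]; /* 5. (read) put data on the data lines */
        b->data_ack = true;              /* 6/7. data acknowledge                */

        uint32_t latched = b->data;      /* 7. (read) CPU latches the data       */
        b->data_strobe = false;          /* 7/8. CPU negates the data strobe     */
        b->addr_strobe = false;          /* 8/9. CPU negates the address strobe  */
        b->data_ack = false;             /* 9/10. memory negates the acknowledge */
        return latched;
    }

    int main(void)
    {
        struct bus b = {0};
        bus_cycle(&b, 5, 0xABCD, true);                                   /* write cycle */
        printf("read back: 0x%" PRIX32 "\n", bus_cycle(&b, 5, 0, false)); /* read cycle  */
        return 0;
    }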

Q5 Briefly explain Interrupt Driven I/O.


ANS. Interrupt Driven I/O

Using program-controlled I/O requires continuous involvement of the processor in the I/O activities, and it is desirable to avoid wasting processor execution time. An alternative is for the CPU to issue an I/O command to a module and then go on with other work. The I/O module will then interrupt the CPU to request service when it is ready to exchange data with the CPU. The CPU then executes the data transfer and resumes its former processing. Based on the use of interrupts, this technique improves the utilization of the processor. An interrupt is more than a simple mechanism for coordinating I/O transfers. In a general sense, interrupts enable the transfer of control from one program to another to be initiated by an event external to the computer. Execution of the interrupted program resumes after the interrupt service routine completes. The concept of interrupts is useful in operating systems and in many control applications where the processing of certain routines has to be accurately timed relative to external events. The latter type of application is generally referred to as real-time processing.
These operations can be better understood with an example: the input of a block of data using interrupt-driven I/O. The CPU issues a read command, and the I/O module gets data from the peripheral while the CPU does other work. The I/O module then interrupts the CPU, which checks the status; if there is no error, that is, the device is ready, the CPU requests the data and the I/O module transfers it. The CPU reads the data and stores it in main memory.
Thus Interrupt Driven I/O:
Overcomes CPU waiting.
Requires no repeated CPU checking of the device.
The I/O module interrupts when it is ready.
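
A minimal sketch of this flow, with the device modelled by a ready flag that an interrupt handler sets. The handler, flag and block size below are illustrative names for the idea only, not a real device driver interface.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define BLOCK_SIZE 8

    /* Set by the (simulated) interrupt service routine when the device has a
     * byte ready; volatile because an ISR would change it asynchronously. */
    static volatile bool device_ready = false;
    static volatile uint8_t device_data;

    /* Stand-in for the interrupt service routine invoked by the I/O module. */
    static void io_interrupt_handler(uint8_t byte)
    {
        device_data = byte;
        device_ready = true;
    }

    /* Stand-in for the CPU doing other work instead of polling the status. */
    static void do_other_work(void) { }

    int main(void)
    {
        uint8_t buffer[BLOCK_SIZE];

        for (int i = 0; i < BLOCK_SIZE; i++) {
            /* CPU issues the read command to the I/O module here, then carries
             * on with other work; no busy-wait on the status register.        */
            do_other_work();

            io_interrupt_handler((uint8_t)(0x40 + i));   /* simulated interrupt */

            if (device_ready) {            /* status OK: device ready, no error */
                buffer[i] = device_data;   /* CPU reads the data word           */
                device_ready = false;
            }
        }

        printf("received %d bytes, first = 0x%X\n", BLOCK_SIZE, (unsigned)buffer[0]);
        return 0;
    }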
Q6 What is the difference between synchronous and asynchronous data transfer?
Ans. Synchronous and Asynchronous Data Transfer
You will find it interesting that asynchronous and synchronous communication represent methods through which signals are transferred in computing technology. Such signals permit computers to move data between components inside the computer or between the computer and an external network. The majority of the operations and actions which occur in computers are carefully controlled and happen at particular times and intervals. The data transfer on the system bus may be synchronous or asynchronous.

Synchronous data transfer


In synchronous data transmission, data is transmitted as a bit stream that carries a group of characters in a single stream. In this type of data transfer, the transmission speed is synchronized at both the sender and the receiver with the help of a clock signal at the time of transfer. A typical activity that might use a synchronous protocol is the transmission of files from one point to another. As each transmission is received, a response is returned indicating success or the need to resend.
Asynchronous data transfer
The word asynchronous is normally used to describe communications in which data can be transmitted irregularly instead of in a steady stream. Asynchronous systems do not send separate information to indicate the encoding or clocking; the receiver must recover the clocking of the signal itself. The difficulty with asynchronous communications is that the receiver must have a way to differentiate between noise and valid data. In computer communications, this is normally achieved via a special start bit and stop bit at the beginning and end of every piece of data. For this reason, asynchronous communication is sometimes known as start-stop transmission.
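
A small sketch of the start-stop framing just described, assuming the common 8N1 format (one start bit, eight data bits sent least significant bit first, no parity, one stop bit). The framing layout is a standard convention; the function itself is only illustrative.

    #include <stdio.h>
    #include <stdint.h>

    /* Frame one byte for asynchronous (start-stop) transmission, 8N1:
     * start bit (0), eight data bits LSB first, stop bit (1).
     * The 10 frame bits are packed into the low bits of a 16-bit word. */
    static uint16_t frame_8n1(uint8_t data)
    {
        uint16_t frame = 0;              /* bit 0: start bit = 0        */
        frame |= (uint16_t)data << 1;    /* bits 1..8: data, LSB first  */
        frame |= (uint16_t)1 << 9;       /* bit 9: stop bit = 1         */
        return frame;
    }

    int main(void)
    {
        uint16_t f = frame_8n1('A');     /* 'A' = 0x41 */

        /* Print the line states in the order they would appear on the wire. */
        for (int bit = 0; bit < 10; bit++)
            printf("%d", (f >> bit) & 1);
        printf("  <- start bit, 8 data bits (LSB first), stop bit\n");
        return 0;
    }

A synchronous link would omit the start and stop bits and rely on the shared clock to delimit each bit, which is why it carries less per-character overhead.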
