(1) Flip-flop
In digital circuits, a flip-flop is an electronic circuit that has two stable states and can therefore serve as one bit of memory. A flip-flop is usually controlled by one or two control signals and/or a gate or clock signal. The output often includes the complement as well as the normal output.
Uses:
A single flip-flop can be used to store one bit, or binary digit, of data. Any one of the flip-flop types can be used to build any of the others. Many logic synthesis tools will not use any type other than the D flip-flop and D latch. Level-sensitive latches cause problems with Static Timing Analysis (STA) tools and Design For Test (DFT), so their usage is often discouraged. Many FPGA devices contain only edge-triggered D flip-flops.
(2) Adder
In electronics, an adder or summer is a digital circuit that performs addition of numbers. In modern computers adders reside in the arithmetic logic unit (ALU) where other operations are performed. Although adders can be constructed for many numerical representations, such as Binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or one's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtractor. Other signed number representations require a more complex adder.
Inputs    Outputs
A  B   |  C  S
0  0   |  0  0
0  1   |  0  1
1  0   |  0  1
1  1   |  1  0
A half adder is a logical circuit that performs an addition operation on two one-bit binary numbers, often written as A and B. The half adder output is the sum of the two inputs, usually represented with the signals Cout and S, where S = A XOR B and Cout = A AND B.
As an example (see the circuit sketch below), a half adder can be built with an XOR gate producing S and an AND gate producing Cout.

        ___________
A ------|         |------ S
        |  Half   |
        |  Adder  |------ Cout
B ------|_________|
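To make the gate equations concrete, here is a short simulation sketch in Python (the function and variable names are illustrative, not from the original text):

    def half_adder(a, b):
        """Half adder: the sum bit is XOR, the carry bit is AND."""
        s = a ^ b       # S = A xor B (XOR gate)
        cout = a & b    # Cout = A and B (AND gate)
        return cout, s

    # Reproduce the truth table above
    for a in (0, 1):
        for b in (0, 1):
            cout, s = half_adder(a, b)
            print(a, b, "->", cout, s)

Running it prints the same four rows as the truth table above.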
Schematic symbol for a 1-bit full adder, with Cin and Cout drawn on the sides of the block to emphasize their use in a multi-bit adder.

Inputs        Outputs
A  B  Ci  |  Co  S
0  0  0   |  0   0
1  0  0   |  0   1
0  1  0   |  0   1
1  1  0   |  1   0
0  0  1   |  0   1
1  0  1   |  1   0
0  1  1   |  1   0
1  1  1   |  1   1
A full adder is a logical circuit that performs an addition operation on three one-bit binary numbers, often written as A, B, and Cin. The full adder produces a two-bit output sum, typically represented with the signals Cout and S, where S = A XOR B XOR Cin and Cout = A·B + Cin·(A XOR B), so that sum = 2·Cout + S.
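The same idea extends to three inputs; here is a minimal Python sketch of the full adder equations (again, the names are illustrative):

    def full_adder(a, b, cin):
        """One-bit full adder: S = A xor B xor Cin; Cout = A.B + Cin.(A xor B)."""
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return cout, s

    # The two-bit output encodes the arithmetic sum: a + b + cin == 2*Cout + S
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                cout, s = full_adder(a, b, cin)
                assert a + b + cin == 2 * cout + s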
(3) Subtractor
In electronics, a subtractor can be designed using the same approach as that of an adder. The binary subtraction process is summarized below. As with an adder, in the general case of calculations on multi-bit numbers, three bits are involved in performing the subtraction for each bit of the difference: the minuend (Xi), the subtrahend (Yi), and a borrow in from the previous (less significant) bit position (Bi). The outputs are the difference bit (Di) and the borrow bit (Bi+1); on the K-map, the borrow output covers minterms 1, 2, 3, and 7. Subtractors are usually implemented within a binary adder for only a small cost when using standard two's complement notation, by providing an addition/subtraction selector to the carry-in and inverting the second operand, since -Y equals NOT(Y) + 1 (the definition of two's complement negation).
(a) Half Subtractor
The half subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, X (minuend) and Y (subtrahend), and two outputs, D (difference) and B (borrow), where D = X XOR Y and B = (NOT X)·Y. From the truth table below one can draw the Karnaugh maps for "difference" and "borrow".
X  Y  |  D  B
0  0  |  0  0
0  1  |  1  1
1  0  |  1  0
1  1  |  0  0
(b) Full Subtractor
Truth table & logic circuit: The truth table and logic circuit for the full subtractor, with inputs X (minuend), Y (subtrahend), and Z (borrow in), are given below.

X  Y  Z  |  D  B
0  0  0  |  0  0
0  0  1  |  1  1
0  1  0  |  1  1
0  1  1  |  0  1
1  0  0  |  1  0
1  0  1  |  0  0
1  1  0  |  0  0
1  1  1  |  1  1
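As a cross-check of the table above, here is a small Python sketch of the full subtractor equations D = X XOR Y XOR Z and B = (NOT X)·Y + (NOT X)·Z + Y·Z (the K-map minterms 1, 2, 3, 7 mentioned earlier):

    def full_subtractor(x, y, z):
        """D = X xor Y xor Z; borrow-out B covers K-map minterms 1, 2, 3, 7."""
        d = x ^ y ^ z
        b = ((1 - x) & y) | ((1 - x) & z) | (y & z)
        return d, b

    # The outputs encode x - y - z as d - 2*b for every input combination
    for x in (0, 1):
        for y in (0, 1):
            for z in (0, 1):
                d, b = full_subtractor(x, y, z)
                assert x - y - z == d - 2 * b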
(4) Opcode
In computer technology, an opcode (operation code) is the portion of a machine language instruction that specifies the operation to be performed. Their specification and format are laid out in the instruction set architecture of the processor in question (which may be a general CPU or a more specialized processing unit). Apart from the opcode itself, an instruction normally also has one or more specifiers for operands (i.e. data) on which the operation should act, although some operations may have implicit operands, or none at all. There are instruction sets with nearly uniform fields for opcode and operand specifiers, as well as others (the x86 architecture, for instance) with a more complicated, varied-length structure. Depending on the architecture, the operands may be register values, values in the stack, other memory values, I/O ports, etc., specified and accessed using more or less complex addressing modes. The types of operations include arithmetic, data copying, logical operations, and program control, as well as special instructions (such as CPUID and others).
(5) Associative Memory
In an associative memory, a word is retrieved not by its address but by its content: among other attributes, certain features of the word itself (for example, the presence of specific codes in its digits), the absolute value of a word, its presence in a preset range, and so on. The operation of an associative memory is based on the representation of all information in the form of a sequence of zones according to properties and characteristic attributes. In this case the retrieval of information is reduced to the determination of the zone according to the preset attributes by means of scanning and comparison of those attributes with the attributes that are stored in the associative memory.

There are two basic methods of realizing the associative memory. The first is the construction of a memory with storage cells that have the capability of performing simultaneously the functions of storage, nondestructive reading, and comparison. Such a method of realizing an associative memory is called network parallel-associative; that is, the required sets of attributes are preserved in all the memory cells, and the information that possesses a given set of attributes is searched for simultaneously and independently over the entire storage capacity. Card indexes for edge-punched cards are prototypes of such an associative memory. Thin-film cryotrons, transfluxors, biaxes, magnetic thin films, and so on are used as storage elements of network-realized associative memories.

The second method of realizing an associative memory is the programmed organization (modeling) of the memory. It consists of the establishment of associative connections between the information contained in the memory by means of ordered arrangement of the information in the form of sequential chains or groups (lists) connected by linkage addresses whose codes are stored in the same memory cells. This procedure is the more suitable for practical realization in dealing with large volumes of information because it provides for the use of conventional accumulators with address reference.

The use of an associative memory considerably facilitates the programming and solution of informational-logical problems and accelerates by hundreds (or thousands) of times the speed of retrieval, analysis, classification, and processing of data.
(6) Direct Memory Access (DMA)
Direct memory access (DMA) allows certain hardware subsystems to transfer data to and from memory independently of the CPU; it is used wherever fast, high-volume data transfer is critical. Another and related application area is various forms of stream processing, where it is essential to have data processing and transfer in parallel in order to achieve sufficient throughput.
Principle
DMA is an essential feature of all modern computers, as it allows devices to transfer data without subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy each piece of data from the source to the destination, making itself unavailable for other tasks. This situation is aggravated because access to I/O devices over a peripheral bus is generally slower than normal system RAM. With DMA, the CPU is freed from this overhead and can do useful tasks during data transfer (though the CPU bus would be partly blocked by DMA). In the same way, a DMA engine in an embedded processor allows its processing element to issue a data transfer and carry on its own task while the data transfer is being performed.

A DMA transfer copies a block of memory from one device to another. While the CPU initiates the transfer by issuing a DMA command, it does not execute it. For so-called "third party" DMA, as is normally used with the ISA bus, the transfer is performed by a DMA controller which is typically part of the motherboard chipset. More advanced bus designs such as PCI typically use bus-mastering DMA, where the device takes control of the bus and performs the transfer itself. In an embedded processor or multiprocessor system-on-chip, it is a DMA engine connected to the on-chip bus that actually administers the transfer of the data, in coordination with the flow control mechanisms of the on-chip bus.

A typical usage of DMA is copying a block of memory from system RAM to or from a buffer on the device. Such an operation usually does not stall the processor, which as a result can be scheduled to perform other tasks unless those tasks include a read from or write to memory. DMA is essential to high-performance embedded systems. It is also essential in providing so-called zero-copy implementations of peripheral device drivers as well as functionalities such as network packet routing, audio playback, and streaming video. Multicore embedded processors (in the form of multiprocessor system-on-chip) often use one or more DMA engines in combination with scratchpad memories for both increased efficiency and lower power consumption. In computer clusters for high-performance computing, DMA among multiple computing nodes is often used under the name of remote DMA.

There are two control signals used to request and acknowledge a DMA transfer in a microprocessor-based system: the HOLD pin is used to request a DMA action, and the HLDA pin is an output that acknowledges the DMA action.
(9) Coprocessors
A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating-point arithmetic, graphics, signal processing, string processing, or encryption. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance need not pay for it.

Coprocessors were first seen on mainframe computers, where they added "optional" functionality such as floating-point math support. A more common use was to control input/output channels, where they were more often called channel controllers.
Intel coprocessors
i80387 microarchitecture.
The original IBM PC included a socket for the Intel 8087 floating-point coprocessor (also known as an FPU), which was a popular option for people using the PC for CAD or mathematics-intensive calculations. In that architecture, the coprocessor sped up floating-point arithmetic on the order of fiftyfold. Users that only used the PC for word processing, for example, saved the high cost of the coprocessor, which would not have accelerated performance of text manipulation operations. The 8087 was tightly integrated with the 8086/8088 and responded to floating-point machine code operation codes inserted in the 8088 instruction stream. An 8088 processor without an 8087 would interpret these instructions as an internal interrupt, which could be directed to trap an error or to trigger emulation of the 8087 instructions in software.
Intel 80386 CPU with 80387 math coprocessor.
Another coprocessor for the 8086/8088 central processor was the 8089 input/output coprocessor. It used the same programming technique as the 8087 for input/output operations, such as transfer of data from memory to a peripheral device, thereby reducing the load on the CPU. But IBM didn't use it in the IBM PC design, and Intel stopped development of this type of coprocessor. During the era of 8- and 16-bit desktop computers, another common source of floating-point coprocessors was Weitek.

The Intel 80386 microprocessor used an optional "math" coprocessor (the 80387) to perform floating-point operations directly in hardware. The Intel 80486DX processor included floating-point hardware on the chip. Intel released a cost-reduced processor, the 80486SX, that had no FP hardware, and also sold an 80487SX coprocessor that essentially disabled the main processor when installed, since the 80487SX was a complete 80486DX with a different set of pin connections. Intel processors later than the 80486 integrated floating-point hardware on the main processor chip; the advances in integration eliminated the cost advantage of selling the floating-point processor as an optional element. It would be very difficult to adapt circuit-board techniques adequate at 75 MHz processor speed to meet the time-delay, power-consumption, and radio-frequency interference standards required at gigahertz-range clock speeds. These on-chip floating-point processors are still referred to as coprocessors because they operate in parallel with the main CPU.
Motorola coprocessors
The Motorola 68000 family had the 68881/68882 coprocessors, which provided similar floating-point speed acceleration as for the Intel processors. Computers using the 68000 family but not equipped with the hardware floating-point processor could trap and emulate the floating-point instructions in software, which, although slower, allowed one binary version of the program to be distributed for both cases.
(10) Universal Set
For "universal set" in the sense of a universe of discourse with respect to which the absolute set complement is taken.
In set theory, a universal set is a set which contains all objects, including itself.[1] In set theory as usually formulated, the conception of a set of all sets leads to a paradox. The reason for this lies with the parameters of Zermelo's axiom of separation: for any formula phi(x) and set A, the set {x in A : phi(x)}, which contains exactly those elements x of A that satisfy phi, exists. If the universal set V existed, then Russell's paradox could be revived by considering {x in V : x not-in x}. More generally, for any set A we can prove that {x in A : x not-in x} is not an element of A.
A second issue is that the power set of the set of all sets would be a subset of the set of all sets, provided that both exist. This conflicts with Cantor's theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself. The idea of a universal set seems intuitively desirable in Zermelo-Fraenkel set theory, particularly because most versions of this theory do allow the use of quantifiers over all sets (see universal quantifier). This is handled by allowing carefully circumscribed mention of V and similar large collections as proper classes. In theories with proper classes the statement V in V is not true because proper classes cannot be elements.
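As a compact restatement of the argument above, the separation schema and the Russell construction can be written symbolically (a sketch in LaTeX notation):

    % Axiom schema of separation: for any set A and formula phi,
    % the subset of A whose elements satisfy phi exists.
    \forall A \,\exists B \,\forall x \,\bigl(x \in B \leftrightarrow x \in A \wedge \varphi(x)\bigr)

    % Taking phi(x) to be x \notin x gives the Russell set relative to A:
    R_A = \{\, x \in A : x \notin x \,\}

    % If R_A \in A, then R_A \in R_A \leftrightarrow R_A \notin R_A, a contradiction,
    % so R_A \notin A: no set A can contain every set.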
(12) Paging
In computer operating systems, paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be noncontiguous. Before paging, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.[1]
Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical random-access memory (RAM). Paging is usually implemented as architecture-specific code built into the kernel of the operating system. The main functions of paging are performed when a program tries to access pages that are not currently mapped to physical memory (RAM). This situation is known as a page fault. The operating system must then take control and handle the page fault, in a manner invisible to the program. Therefore, the operating system must:
1. Determine the location of the data in auxiliary storage.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to show the new data.
5. Return control to the program, transparently retrying the instruction that caused the page fault.
Because RAM is faster than auxiliary storage, paging is avoided until there is not enough RAM to store all the data needed. When this occurs, a page in RAM is moved to auxiliary storage, freeing up space in RAM for use. Thereafter, whenever the page in secondary storage is needed, a page in RAM is saved to auxiliary storage so that the requested page can then be loaded into the space left behind by the old page. Efficient paging systems must determine the page to swap by choosing one that is least likely to be needed within a short time. There are various page replacement algorithms that try to do this.
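As an illustration of the replacement idea, here is a minimal FIFO page-replacement simulation in Python (FIFO is the simplest such algorithm; real kernels use more sophisticated ones, and all names here are illustrative):

    from collections import deque

    def count_page_faults(reference_string, num_frames):
        """Count page faults under FIFO replacement."""
        frames = deque()                # pages currently resident in RAM
        faults = 0
        for page in reference_string:
            if page not in frames:      # page fault: page is not mapped
                faults += 1
                if len(frames) == num_frames:
                    frames.popleft()    # evict the page loaded earliest
                frames.append(page)     # load the requested page
        return faults

    # Classic reference string with 3 frames: 9 faults under FIFO
    print(count_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))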
Verification: The process of evaluating an application to determine whether or not the work products of a stage of a software development lifecycle fulfill the requirements established during the previous stage.
A dependency tree is used, where components that have other components dependent upon them are developed first.
(16) Denormalization
Denormalization is the process of attempting to optimize the read performance of a database by adding redundant data or by grouping data. In some cases, denormalization helps cover up the inefficiencies inherent in relational database software. A relational normalized database imposes a heavy access load over physical storage of data even if it is well tuned for high performance.

A normalized design will often store different but related pieces of information in separate logical tables (called relations). If these relations are stored physically as separate disk files, completing a database query that draws information from several relations (a join operation) can be slow. If many relations are joined, it may be prohibitively slow. There are two strategies for dealing with this.

The preferred method is to keep the logical design normalized, but allow the database management system (DBMS) to store additional redundant information on disk to optimize query response. In this case it is the DBMS software's responsibility to ensure that any redundant copies are kept consistent. This method is often implemented in SQL as indexed views (Microsoft SQL Server) or materialized views (Oracle). A view represents information in a format convenient for querying, and the index ensures that queries against the view are optimized.

The more usual approach is to denormalize the logical data design. With care this can achieve a similar improvement in query response, but at a cost: it is now the database designer's responsibility to ensure that the denormalized database does not become inconsistent. This is done by creating rules in the database called constraints, which specify how the redundant copies of information must be kept synchronized. It is the increase in logical complexity of the database design and the added complexity of the additional constraints that make this approach hazardous. Moreover, constraints introduce a trade-off, speeding up reads (SELECT in SQL) while slowing down writes (INSERT, UPDATE, and DELETE). This means a denormalized database under heavy write load may actually offer worse performance than its functionally equivalent normalized counterpart.

A denormalized data model is not the same as a data model that has not been normalized, and denormalization should only take place after a satisfactory level of normalization has taken place and any required constraints and/or rules have been created to deal with the inherent anomalies in the design. For example, all the relations are in third normal form, and any relations with join and multivalued dependencies are handled appropriately.
Natural language processing
Natural language processing gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar, and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition, and object recognition.
(19) Domain
While the term "domain" is often used synonymously with "domain name," it also has a definition specific to local networks. A domain contains a group of computers that can be accessed and administered with a common set of rules. For example, a company may require all local computers to be networked within the same domain so that each computer can be seen from other computers within the domain or located from a central server. Setting up a domain may also block outside traffic from accessing computers within the network, which adds an extra level of security While domains can be setup using a variety of networking software, including applications from Novell and Oracle, Windows users are most likely familiar with Windows Network Domains. This networking option is built into Windows and allows users to create or join a domain. The domain may or may not be password-protected. Once connected to the domain, a user may view other computers within the domain and can browse the shared files and folders available on the connected systems. 14
Windows XP users can browse Windows Network Domains by selecting the "My Network Places" option on the left side of an open window. You can create a new domain by using the Network Setup Wizard. Mac users running Mac OS X 10.2 or later can also connect to a Windows Network by clicking the "Network" icon on the left side of an open window. This will allow you to browse local Macintosh and Windows networks using the SMB protocol.
(20) Applet
This is a Java program that can be embedded in a Web page. The difference between a standard Java application and a Java applet is that an applet can't access system resources on the local computer. System files and serial devices (modems, printers, scanners, etc.) cannot be called or used by the applet. This is for security reasons -- nobody wants their system wiped out by a malicious applet on some wacko's Web site. Applets have helped make the Web more dynamic and entertaining and have given a helpful boost to the Java programming language.
For example, consider a table where empno is the primary key:

empno  empname  salary
1      firoz    35000
2      basha    34000
3      chintoo  40000

The empno column will not accept duplicate values; for instance, a second row with empno = 1 would be rejected.
Definition: A primary key, also called a primary keyword, is a key in a relational database
that is unique for each record. It is a unique identifier, such as a driver license number, telephone number (including area code), or vehicle identification number (VIN). A relational database must always have one and only one primary key. Primary keys typically appear as columns in relational database tables.
The choice of a primary key in a relational database often depends on the preference of the administrator. It is possible to change the primary key for a given database when the specific needs of the users change. For example, the people in a town might be uniquely identified according to their driver license numbers in one application, but in another situation it might be more convenient to identify them according to their telephone numbers.
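A small sketch of how a database enforces this, using Python's built-in sqlite3 module and the example table from above (the schema itself is illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, empname TEXT, salary INTEGER)")
    conn.execute("INSERT INTO emp VALUES (1, 'firoz', 35000)")
    conn.execute("INSERT INTO emp VALUES (2, 'basha', 34000)")

    try:
        conn.execute("INSERT INTO emp VALUES (1, 'chintoo', 40000)")  # duplicate empno
    except sqlite3.IntegrityError as err:
        print("rejected:", err)  # the primary key forbids a second empno = 1

The duplicate insert is rejected, which is exactly the uniqueness guarantee described above.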
(22) Encryption
Encryption is the coding or scrambling of information so that it can only be decoded and read by someone who has the correct decoding key. Encryption is used in secure Web sites as well as other mediums of data transfer. If a third party were to intercept the information you sent via an encrypted connection, they would not be able to read it. So if you are sending a message over the office network to your co-worker about how much you hate your job, your boss, and the whole dang company, it would be a good idea to make sure that you send it over an encrypted line.
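To show the shape of the idea (and only that), here is a toy XOR scrambler in Python. This is not real encryption -- production systems use vetted algorithms such as AES -- but it illustrates that the same key decodes what it encoded:

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        """Toy cipher: XOR each byte with a repeating key (illustrative only)."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    message = b"this message is private"
    key = b"secret"
    scrambled = xor_cipher(message, key)            # unreadable without the key
    assert xor_cipher(scrambled, key) == message    # the same key restores it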
(23) Two-phase locking
In databases and transaction processing (transaction management), two-phase locking (2PL) is a concurrency control locking protocol, or mechanism, which guarantees serializability (e.g., see Bernstein et al. 1987, Weikum and Vossen 2001). It is also the name of the resulting class (set) of transaction schedules. Because it uses locks that block processes, 2PL may be subject to deadlocks that result from the mutual blocking of two or more transactions.

2PL is a super-class of strong strict two-phase locking (SS2PL), also called rigorousness, which has been widely utilized for concurrency control in general-purpose database systems since their early days in the 1970s. SS2PL implementation has many variants. SS2PL was called strict 2PL in the past, and confusingly it is still called so by some. SS2PL is also a special case (subclass) of commitment ordering (commit ordering; CO), and inherits many of CO's useful properties. 2PL in its general form, as well as when combined with strictness, i.e., strict 2PL (S2PL), is not known to be utilized in practice.

According to the two-phase locking protocol, a transaction handles its locks in two distinct, consecutive phases during the transaction's execution:
1. Expanding phase (the number of locks can only increase): locks are acquired and no locks are released.
2. Shrinking phase: locks are released and no locks are acquired.
16
The serializability property is guaranteed for a schedule with transactions that obey the protocol. The 2PL schedule class is defined as the class of all the schedules comprising transactions with data access orders that could be generated by the 2PL protocol. Typically, without explicit knowledge in a transaction of the end of phase 1, it is safely determined only when a transaction has entered its ready state in all its processes (processing has ended, and it is ready to be committed; no additional locking is possible). In this case phase 2 can end immediately (no additional processing is needed), and actually no phase 2 is needed. Also, if several processes (two or more) are involved, then a synchronization point (similar to atomic commitment) among them is needed to determine the end of phase 1 for all of them (i.e., in the entire distributed transaction), in order to start releasing locks in phase 2 (otherwise it is very likely that both 2PL and serializability are quickly violated). Such a synchronization point is usually too costly (involving a distributed protocol similar to atomic commitment), so the end of phase 1 is usually postponed to be merged with transaction end (the atomic commitment protocol for a multi-process transaction), and again phase 2 is not needed. This turns 2PL into SS2PL (see below). All known implementations of 2PL in products are SS2PL-based.
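A minimal sketch of the two-phase discipline itself (not of any real DBMS lock manager; the class and names are invented for illustration):

    class TwoPhaseViolation(Exception):
        pass

    class Transaction:
        """Enforces 2PL: after the first release, no further acquire is allowed."""
        def __init__(self):
            self.locks = set()
            self.shrinking = False        # False = expanding phase

        def acquire(self, item):
            if self.shrinking:
                raise TwoPhaseViolation("cannot acquire a lock after releasing one")
            self.locks.add(item)          # phase 1: the lock set only grows

        def release(self, item):
            self.shrinking = True         # phase 2 begins at the first release
            self.locks.discard(item)

    t = Transaction()
    t.acquire("x"); t.acquire("y")        # expanding phase
    t.release("x")                        # shrinking phase begins
    # t.acquire("z")                      # would raise TwoPhaseViolation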
(25) Exceptions
An exception is a problem that arises during the execution of a program; for example:
A user has entered invalid data.
A file that needs to be opened cannot be found.
A network connection has been lost in the middle of communications, or the JVM has run out of memory.
Some of these exceptions are caused by user error, others by programmer error, and others by physical resources that have failed in some manner. To understand how exception handling works in Java, you need to understand the three categories of exceptions:
Checked exceptions: A checked exception is an exception that is typically a user error or a problem that cannot be foreseen by the programmer. For example, if a file is to be opened, but the file cannot be found, an exception occurs. These exceptions cannot simply be ignored at the time of compilation.
Runtime exceptions: A runtime exception is an exception that occurs that probably could have been avoided by the programmer. As opposed to checked exceptions, runtime exceptions are ignored at the time of compilation.
Errors: These are not exceptions at all, but problems that arise beyond the control of the user or the programmer. Errors are typically ignored in your code because you can rarely do anything about an error. For example, if a stack overflow occurs, an error will arise. They are also ignored at the time of compilation.
(26) Cluster
A group of sectors on a disk. While a sector is the smallest unit that can be accessed on your hard disk, a cluster is a slightly larger unit that is used to organize and identify files on the disk. Most files take up several clusters of disk space. Each cluster has a unique ID, which enables the hard drive to locate all the clusters on the disk. After reading and writing many files to a disk, some clusters may remain labeled as being used even though they do not contain any data. These are called "lost clusters" and can be fixed using ScanDisk on Windows or the Disk Utility program on the Mac. This is why running a disk utility or defragmentation program may free up space on your hard disk.
(27) OSI
The Open Systems Interconnection (OSI) model is a reference model developed by ISO (the International Organization for Standardization) in 1984 as a conceptual framework of standards for communication in the network across different equipment and applications by different vendors. It is now considered the primary architectural model for inter-computer and internetworking communications. Most of the network communication protocols used today have a structure based on the OSI model. The OSI model divides the communications process into 7 layers, splitting the tasks involved with moving information between networked computers into seven smaller, more manageable task groups. A task or group of tasks is then assigned to each of the seven OSI layers. Each layer is reasonably self-contained so that the tasks assigned to each layer can be implemented independently. This enables the solutions offered by one layer to be updated without adversely affecting the other layers.
OSI 7 Layers Reference Model For Network Communication
Layer 7: Application Layer
Defines the interface to user processes for communication and data transfer in the network
Provides standardized services such as virtual terminal, file and job transfer, and operations

Layer 6: Presentation Layer
Masks the differences of data formats between dissimilar systems
Specifies architecture-independent data transfer format
Encodes and decodes data; encrypts and decrypts data; compresses and decompresses data

Layer 5: Session Layer
Manages user sessions and dialogues
Controls establishment and termination of logic links between users
Reports upper-layer errors

Layer 4: Transport Layer
Manages end-to-end message delivery in the network
Provides reliable and sequential packet delivery through error recovery and flow control mechanisms
Provides connectionless-oriented packet delivery

Layer 3: Network Layer
Determines how data are transferred between network devices
Routes packets according to unique network device addresses
Provides flow and congestion control to prevent network resource depletion

Layer 2: Data Link Layer
Defines procedures for operating the communication links
Frames packets
Detects and corrects packet transmission errors

Layer 1: Physical Layer
Defines the physical means of sending data over network devices
Interfaces between the network medium and devices
Defines optical, electrical, and mechanical characteristics
(28) Extranet
If you know the difference between the Internet and an intranet, you have an above average understanding of computer terminology. If you know what an extranet is, you may be in the top echelon. An extranet actually combines both the Internet and an intranet. It extends an intranet, or internal network, to other users over the Internet. Most extranets can be accessed via a Web interface using a Web browser. Since secure or confidential information is often accessible within an intranet, extranets typically require authentication for users to access them. Extranets are often used by companies that need to share selective information with other businesses or individuals. For example, a supplier may use an extranet to provide inventory data to certain clients, while not making the information available to the general public. The extranet may also include a secure means of communication for the company and its clients, such as a support ticket system or Web-based forum.
(29) COCOMO
COCOMO is an open model, so all of the details are published, including:
The underlying cost estimation equations
Every assumption made in the model (e.g. "the project will enjoy good management")
Every definition (e.g. the precise definition of the Product Design phase of a project)
The costs included in an estimate are explicitly stated (e.g. project managers are included, secretaries aren't)
Because COCOMO is well defined, and because it doesn't rely upon proprietary estimation algorithms, Costar offers these advantages to its users:
COCOMO estimates are more objective and repeatable than estimates made by methods relying on proprietary models
COCOMO can be calibrated to reflect your software development environment, and to produce more accurate estimates
Costar is a faithful implementation of the COCOMO model that is easy to use on small projects, and yet powerful enough to plan and control large projects. Typically, you'll start with only a rough description of the software system that you'll be developing, and you'll use Costar to give you early estimates about the proper schedule and staffing levels. As you refine your knowledge of the problem, and as you design more of the system, you can use Costar to produce more and more refined estimates.

Costar allows you to define a software structure to meet your needs. Your initial estimate might be made on the basis of a system containing 3,000 lines of code. Your second estimate might be more refined so that you now understand that your system will consist of two subsystems (and you'll have a more accurate idea about how many lines of code will be in each of the subsystems). Your next estimate will continue the process -- you can use Costar to define the components of each subsystem. Costar permits you to continue this process until you arrive at the level of detail that suits your needs.

One word of warning: It is so easy to use Costar to make software cost estimates that it's possible to misuse it -- every Costar user should spend the time to learn the underlying COCOMO assumptions and definitions from Software Engineering Economics and Software Cost Estimation with COCOMO II.

Introduction to the COCOMO Model
The most fundamental calculation in the COCOMO model is the use of the Effort Equation to estimate the number of Person-Months required to develop a project. Most of the other COCOMO results, including the estimates for Requirements and Maintenance, are derived from this quantity.

Source Lines of Code
The COCOMO calculations are based on your estimates of a project's size in Source Lines of Code (SLOC). SLOC is defined such that:
Only source lines that are DELIVERED as part of the product are included -- test drivers and other support software is excluded
SOURCE lines are created by the project staff -- code created by applications generators is excluded
One SLOC is one logical line of code
Declarations are counted as SLOC
Comments are not counted as SLOC
The original COCOMO 81 model was defined in terms of Delivered Source Instructions, which are very similar to SLOC. The major difference between DSI and SLOC is that a single Source Line of Code may be several physical lines. For example, an "if-then-else" statement would be counted as one SLOC, but might be counted as several DSI.

The Scale Drivers
In the COCOMO II model, some of the most important factors contributing to a project's duration and cost are the Scale Drivers. You set each Scale Driver to describe your project; these Scale Drivers determine the exponent used in the Effort Equation. The 5 Scale Drivers are:
Precedentedness
Development Flexibility
Architecture / Risk Resolution
Team Cohesion
Process Maturity
Cost Drivers
COCOMO II has 17 cost drivers -- you assess your project, development environment, and team to set each cost driver. The cost drivers are multiplicative factors that determine the effort required to complete your software project. For example, if your project will develop software that controls an airplane's flight, you would set the Required Software Reliability (RELY) cost driver to Very High. That rating corresponds to an effort multiplier of 1.26, meaning that your project will require 26% more effort than a typical software project. COCOMO II defines each of the cost drivers, and the Effort Multiplier associated with each rating. Check the Costar help for details about the definitions and how to set the cost drivers.

COCOMO II Effort Equation
The COCOMO II model makes its estimates of required effort (measured in Person-Months, PM) based primarily on your estimate of the software project's size (as measured in thousands of SLOC, KSLOC):

Effort = 2.94 * EAF * (KSLOC)^E

where EAF is the Effort Adjustment Factor derived from the Cost Drivers, and E is an exponent derived from the five Scale Drivers.

As an example, a project with all Nominal Cost Drivers and Scale Drivers would have an EAF of 1.00 and an exponent, E, of 1.0997. Assuming that the project is projected to consist of 8,000 source lines of code, COCOMO II estimates that 28.9 Person-Months of effort is required to complete it:

Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months
Effort Adjustment Factor
The Effort Adjustment Factor in the effort equation is simply the product of the effort multipliers corresponding to each of the cost drivers for your project. For example, if your project is rated Very High for Complexity (effort multiplier of 1.34), and Low for Language & Tools Experience (effort multiplier of 1.09), and all of the other cost drivers are rated to be Nominal (effort multiplier of 1.00), the EAF is the product of 1.34 and 1.09:

Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46
Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months

COCOMO II Schedule Equation
The COCOMO II schedule equation predicts the number of months required to complete your software project. The duration of a project is based on the effort predicted by the effort equation:

Duration = 3.67 * (Effort)^SE

where Effort is the effort from the COCOMO II effort equation, and SE is the schedule equation exponent derived from the five Scale Drivers.

Continuing the example, and substituting the exponent of 0.3179 that is calculated from the scale drivers, yields an estimate of just over a year, and an average staffing of between 3 and 4 people:

Duration = 3.67 * (42.3)^0.3179 = 12.1 months
Average staffing = (42.3 Person-Months) / (12.1 Months) = 3.5 people

The SCED Cost Driver
The COCOMO cost driver for Required Development Schedule (SCED) is unique, and requires a special explanation. The SCED cost driver is used to account for the observation that a project developed on an accelerated schedule will require more effort than a project developed on its optimum schedule. A SCED rating of Very Low corresponds to an Effort Multiplier of 1.43 (in the COCOMO II.2000 model) and means that you intend to finish your project in 75% of the optimum schedule (as determined by a previous COCOMO estimate). Continuing the example used earlier, but assuming that SCED has a rating of Very Low, COCOMO produces these estimates:

Duration = 75% * 12.1 Months = 9.1 Months
Effort Adjustment Factor = EAF = 1.34 * 1.09 * 1.43 = 2.09
Effort = 2.94 * (2.09) * (8)^1.0997 = 60.4 Person-Months
Average staffing = (60.4 Person-Months) / (9.1 Months) = 6.7 people

Notice that the calculation of duration isn't based directly on the effort (number of Person-Months) -- instead it's based on the schedule that would have been required for the project assuming it had
been developed on the nominal schedule. Remember that the SCED cost driver means "accelerated from the nominal schedule". The Costar command Constraints | Constrain Project displays a dialog box that lets you trade off duration vs. effort (SCED is set for you automatically). You can use the dialog box to constrain your project to have a fixed duration, or a fixed cost.
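The worked examples above can be reproduced with a few lines of Python (the constants 2.94 and 3.67 and the exponents 1.0997 and 0.3179 are taken from the text; everything else is illustrative):

    def cocomo_ii(ksloc, eaf=1.0, e=1.0997, se=0.3179):
        """COCOMO II nominal-schedule estimate: effort in PM, duration in months."""
        effort = 2.94 * eaf * ksloc ** e      # Effort Equation
        duration = 3.67 * effort ** se        # Schedule Equation
        return effort, duration

    print(cocomo_ii(8))             # all-Nominal drivers: about 28.9 Person-Months
    print(cocomo_ii(8, eaf=1.46))   # EAF = 1.34 * 1.09: about 42.3 PM, 12.1 months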
(30) Firmware
Firmware is a software program or set of instructions programmed on a hardware device. It provides the necessary instructions for how the device communicates with the other computer hardware. But how can software be programmed onto hardware? Good question. Firmware is typically stored in the flash ROM of a hardware device. While ROM is "read-only memory," flash ROM can be erased and rewritten because it is actually a type of flash memory. Firmware can be thought of as "semi-permanent" since it remains the same unless it is updated by a firmware updater. You may need to update the firmware of certain devices, such as hard drives and video cards, in order for them to work with a new operating system. CD and DVD drive manufacturers often make firmware updates available that allow the drives to read faster media. Sometimes manufacturers release firmware updates that simply make their devices work more efficiently.
Regression testing -- Testing the application as a whole after the modification of any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for these testing types.
Acceptance testing -- Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.
Load testing -- A performance test to check system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads, to determine at what point the system's response time degrades or fails.
Stress testing -- The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting data beyond storage capacity, complex database queries, or continuous input to the system or database load.
Performance testing -- A term often used interchangeably with stress and load testing: checking whether the system meets performance requirements. Different performance and load tools are used to do this.
Usability testing -- A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user might get stuck? Basically, system navigation is checked in this testing.
Install/uninstall testing -- Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.
Recovery testing -- Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing -- Checks whether the system can be penetrated by hacking. Testing how well the system protects against unauthorized internal or external access, and checking that the system and database are safe from external attacks.
Comparison testing -- Comparison of product strengths and weaknesses with previous versions or other similar products.
Alpha testing -- An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development. Minor design changes may still be made as a result of such testing.
Beta testing -- Testing typically done by end-users or others. Final testing before releasing the application for commercial purposes.
(33) Intranet
Contrary to popular belief, this is not simply a misspelling of "Internet." "Intra" means "internal" or "within," so an Intranet is an internal or private network that can only be accessed within the confines of a company, university, or organization. "Inter" means "between or among," hence the difference between the Internet and an Intranet.
Up until the last few years, most corporations used local networks composed of expensive proprietary hardware and software for their internal communications.
(34) Sockets
Point-to-Point Communication
In a nutshell, a socket represents a single connection between exactly two pieces of software. More than two pieces of software can communicate in client/server or distributed systems (for example, many Web browsers can simultaneously communicate with a single Web server), but multiple sockets are required to do this. Socket-based software usually runs on two separate computers on the network, but sockets can also be used to communicate locally (interprocess) on a single computer. Sockets are bidirectional, meaning that either side of the connection is capable of both sending and receiving data. Sometimes the one application that initiates communication is termed the client and the other application the server, but this terminology leads to confusion in non-client/server systems and should generally be avoided.

Libraries
Programmers access sockets using code libraries packaged with the operating system. Several libraries that implement standard application programming interfaces (APIs) exist. The first mainstream package -- the Berkeley Socket Library -- is still widely in use on UNIX systems. Another very common API is the Windows Sockets (Winsock) library for Microsoft operating systems. Relative to other network programming technologies, socket APIs are quite mature: Winsock has been in use since 1993 and Berkeley sockets since 1982.

Interface Types
Socket interfaces can be divided into three categories. Perhaps the most commonly used type, the stream socket, implements "connection-oriented" semantics. Essentially, a "stream" requires that the two communicating parties first establish a socket connection, after which any data passed through that connection will be guaranteed to arrive in the same order in which it was sent. Datagram sockets offer "connection-less" semantics. With datagrams, connections are implicit rather than explicit as with streams. Either party simply sends datagrams as needed and waits for the other to respond; messages can be lost in transmission or received out of order, but it is the application's responsibility and not the socket's to deal with these problems. Implementing datagram sockets can give some applications a performance boost and additional flexibility compared to using stream sockets, justifying their use in some situations.
The third type of socket -- the so-called raw socket -- bypasses the library's built-in support for standard protocols like TCP and UDP. Raw sockets are used for custom low-level protocol development.

Addresses and Ports
Today, sockets are typically used in conjunction with the Internet protocols -- Internet Protocol (IP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP). Libraries implementing sockets for Internet Protocol use TCP for streams, UDP for datagrams, and IP itself for raw sockets. To communicate over the Internet, IP socket libraries use the IP address to identify specific computers. Many parts of the Internet work with naming services, so that the users and socket programmers can work with computers by name (e.g., "thiscomputer.compnetworking.about.com") instead of by address (e.g., 208.185.127.40). Stream and datagram sockets also use IP port numbers to distinguish multiple applications from each other. For example, Web browsers on the Internet know to use port 80 as the default for socket communications with Web servers.

Socket Programming and You
Traditionally, sockets have been of interest mainly to computer programmers. But as new networking applications emerge, end users are becoming increasingly network-savvy. Many Web surfers, for example, now know that some addresses in the browser look like http://206.35.113.28:8080/ where 8080 is the port number being used by that socket.
The socket APIs are relatively small and simple. Many of the functions are similar to those used in file input/output routines such as read(), write(), and close(). The actual function calls to use depend on the programming language and socket library chosen.
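For a concrete feel, here is a minimal stream-socket exchange using Python's socket library (the port number and messages are arbitrary; a real server would loop and handle errors):

    import socket
    import threading
    import time

    def echo_server(port=50007):
        """Accept one connection and echo back whatever arrives."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", port))
            srv.listen(1)
            conn, _ = srv.accept()              # block until a client connects
            with conn:
                conn.sendall(conn.recv(1024))   # echo up to 1 KB back

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)                             # give the server a moment to listen

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 50007))       # address + port name the endpoint
        cli.sendall(b"hello")                   # either side may send ...
        print(cli.recv(1024))                   # ... and receive: prints b'hello'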
(35)BCNF
Boyce-Codd normal form (BCNF or 3.5NF) is a normal form used in database normalization. It is a slightly stronger version of the third normal form (3NF). A table is in Boyce-Codd normal form if and only if, for every one of its non-trivial functional dependencies X -> Y, X is a superkey -- that is, X is either a candidate key or a superset thereof. BCNF was developed in 1974 by Raymond F. Boyce and Edgar F. Codd to address certain types of anomaly not dealt with by 3NF as originally defined.
(36) Hover
When you roll the cursor over a link on a Web page, it is often referred to as "hovering" over the link. This is somewhat like when your boss hovers over you at work, but not nearly as uncomfortable. In most cases, the cursor will change from a pointer to a small hand when it is hovering over a link. Web developers can also use cascading style sheets (CSS) to modify the color and style of a link when a user hovers over it. For example, the link may become underlined or change color while the cursor is hovering over it. The term hovering implies your computer screen is a three-dimensional space. In this conception, your cursor moves around on a layer above the text and images. When you click the mouse button while the cursor is hovering over a link, it presses down on the link to activate it. Hovering can also be used in a more general sense, such as moving the cursor over icons, windows, or other objects on the screen.
(39) TCP/IP
Network Interface Layer
The Network Interface layer (also called the Network Access layer) handles placing TCP/IP packets on the network medium and receiving TCP/IP packets off the network medium. TCP/IP was designed to be independent of the network access method, frame format, and medium. In this way, TCP/IP can be used to connect differing network types. These include local area network (LAN) media such as Ethernet and Token Ring and WAN technologies such as X.25 and Frame Relay. Independence from any specific network media allows TCP/IP to be adapted to new media such as asynchronous transfer mode (ATM). The Network Interface layer encompasses the Data Link and Physical layers of the OSI model. Note that the Internet layer does not take advantage of sequencing and acknowledgment services that might be present in the Network Interface layer. An unreliable Network Interface layer is assumed, and reliable communication through session establishment and the sequencing and acknowledgment of packets is the function of the Transport layer.

Internet Layer
The Internet layer handles addressing, packaging, and routing functions. The core protocols of the Internet layer are IP, ARP, ICMP, and IGMP.
The Internet Protocol (IP) is a routable protocol that handles IP addressing, routing, and the fragmentation and reassembly of packets.
The Address Resolution Protocol (ARP) handles resolution of an Internet layer address to a Network Interface layer address, such as a hardware address.
The Internet Control Message Protocol (ICMP) handles providing diagnostic functions and reporting errors due to the unsuccessful delivery of IP packets.
The Internet Group Management Protocol (IGMP) handles management of IP multicast group membership.
The Internet layer is analogous to the Network layer of the OSI model.

Transport Layer
The Transport layer (also known as the Host-to-Host Transport layer) handles providing the Application layer with session and datagram communication services. The core protocols of the Transport layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
TCP provides a one-to-one, connection-oriented, reliable communications service. TCP handles the establishment of a TCP connection, the sequencing and acknowledgment of packets sent, and the recovery of packets lost during transmission.
UDP provides a one-to-one or one-to-many, connectionless, unreliable communications service. UDP is used when the amount of data to be transferred is small (such as data that fits into a single packet), when you do not want the overhead of establishing a TCP connection, or when the applications or upper-layer protocols provide reliable delivery.
The TCP/IP Transport layer encompasses the responsibilities of the OSI Transport layer.

Application Layer
The Application layer lets applications access the services of the other layers and defines the protocols that applications use to exchange data. There are many Application layer protocols, and new protocols are always being developed. The most widely known Application layer protocols are those used for the exchange of user information:
The Hypertext Transfer Protocol (HTTP) is used to transfer files that make up the Web pages of the World Wide Web.
The File Transfer Protocol (FTP) is used for interactive file transfer.
The Simple Mail Transfer Protocol (SMTP) is used for the transfer of mail messages and attachments.
Telnet, a terminal emulation protocol, is used for logging on remotely to network hosts.
Additionally, the following Application layer protocols help facilitate the use and management of TCP/IP networks:
The Domain Name System (DNS) is used to resolve a host name to an IP address.
The Routing Information Protocol (RIP) is a routing protocol that routers use to exchange routing information on an IP internetwork.
The Simple Network Management Protocol (SNMP) is used between a network management console and network devices (routers, bridges, intelligent hubs) to collect and exchange network management information.
IP
IP is a connectionless, unreliable datagram protocol primarily responsible for addressing and routing packets between hosts. Connectionless means that a session is not established before exchanging data. Unreliable means that delivery is not guaranteed. IP always makes a best-effort attempt to deliver a packet. An IP packet might be lost, delivered out of sequence, duplicated, or delayed. IP does not attempt to recover from these types of errors. The acknowledgment of packets delivered and the recovery of lost packets is the responsibility of a higher-layer protocol, such as TCP. IP is defined in RFC 791. An IP packet consists of an IP header and an IP payload.
(40) Bandwidth
Bandwidth refers to how much data you can send through a network or modem connection. It is usually measured in bits per second, or "bps." You can think of bandwidth as a highway with cars travelling on it. The highway is the network connection and the cars are the data. The wider the highway, the more cars can travel on it at one time. Therefore more cars can get to their destinations faster. The same principle applies to computer data -- the more bandwidth, the more information that can be transferred within a given amount of time.
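The highway analogy reduces to simple arithmetic: transfer time is size divided by bandwidth. A quick sketch in Python (the figures are made-up examples and ignore protocol overhead):

    def transfer_seconds(size_bytes, bandwidth_bps):
        """Idealized transfer time: bits to send divided by bits per second."""
        return size_bytes * 8 / bandwidth_bps

    # The same 5 MB file over a 10 Mbps link vs. a 100 Mbps link:
    print(transfer_seconds(5_000_000, 10_000_000))    # 4.0 seconds
    print(transfer_seconds(5_000_000, 100_000_000))   # 0.4 seconds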
(41) Bitmap
Most images you see on your computer are composed of bitmaps. A bitmap is a map of dots, or bits (hence the name), that looks like a picture as long as you are sitting a reasonable distance away from the screen. Common bitmap file types include BMP (the raw bitmap format), JPEG, GIF, PICT, PCX, and TIFF. Because bitmap images are made up of a bunch of dots, if you zoom in on a bitmap, it appears to be very blocky. Vector graphics (created in programs such as Freehand, Illustrator, or CorelDraw) can scale larger without getting blocky.
Whenever a branch occurs (for example, at if-statements), the pipeline has to be emptied and filled again, and a number of cycles equal to the pipeline length pass until results are again delivered. To circumvent this, the number of branches should be kept small (by avoiding and/or smartly placing if-statements). Compilers and CPUs also try to minimize this problem by guessing the outcome (branch prediction). The power of a processor can be increased by combining several pipelines; this is then called a superscalar processor. Fixed-point and logical calculations (performed in the ALU -- the Arithmetic/Logical Unit) are usually separated from floating-point math (done by the FPU -- the Floating Point Unit). The FPU is commonly subdivided into a unit for addition and one for multiplication. These units may be present several times, and some processors have additional functional units for division and the computation of square roots.
In a shared memory system, fast connections between each processor and each memory module are desired. This can be achieved by using a crossbar switch. Crossbar switches can be found in high performance computers and some workstations. The problem with crossbar switches is their high complexity when many connections need to be made. This problem can be weakened by using multi-stage crossbar switches, which in turn leads to longer communication times. For this reason, the number of CPUs and memory modules that can be connected by crossbar switches is limited. The big advantage of shared memory systems is that all processors can make use of the whole memory. This makes them easy to program and efficient to use. The limiting factor to their performance is the number of processors and memory modules that can be connected to each other. Due to this, shared memory systems usually consist of rather few processors.
2.5 Distributed Memory
As could be seen in the previous section, the number of processors and memory modules cannot be increased arbitrarily in the case of a shared memory system. Another way to build a MIMD system is distributed memory (DM-MIMD). Each processor has its own local memory. The processors are connected to each other. The demands imposed on the communication network are lower than in the case of a shared memory system, as the communication between processors may be slower than the communication between processor and memory.
2.6 ccNUMA
The two previous sections showed that shared memory systems suffer from a limited system size, while distributed memory systems suffer from the arduous communication between the memories of the processors. A compromise is the ccNUMA (cache coherent non-uniform memory access) architecture. A ccNUMA system basically consists of several SMP systems. These are connected to each other by means of a fast communications network, often crossbar switches. Access to the whole, distributed memory is possible via a common cache. A ccNUMA system is as easy to use as a true shared memory system, and at the same time it is much easier to expand. To achieve optimal performance, it has to be made sure that local memory is used, and not the memory of the other modules, which is only accessible via the slow communications network. The modular structure is another big advantage of this architecture. Most ccNUMA systems consist of modules that can be plugged together to get systems of various sizes.
(44) Thrashing
Most programs reach a steady state in their demand for memory locality both in terms of instructions fetched and data being accessed. This steady state is usually much less than the total memory required by the program. This steady state is sometimes referred to as the working set: the set of memory pages that are most frequently accessed. Virtual memory systems work most efficiently when the ratio of the working set to the total number of pages that can be stored in RAM is low enough that the time spent resolving page faults is not a dominant factor in the workload's performance. A program that works with huge data structures will sometimes require a working set that is too large to be efficiently managed by the page system, resulting in constant page faults that drastically slow down the system. This condition is referred to as thrashing: pages are swapped out and then accessed, causing frequent faults. An interesting characteristic of thrashing is that as the working set grows, there is very little increase in the number of faults until the critical point, when faults go up dramatically and the majority of the system's processing power is spent on handling them.
(45) Database replication
Database replication can be used on many database management systems, usually with a master/slave relationship between the original and the copies. The master logs the updates, which then ripple through to the slaves. The slave outputs a message stating that it has received the update successfully, thus allowing the sending (and potentially re-sending until successfully applied) of subsequent updates. Multi-master replication, where updates can be submitted to any database node and then ripple through to other servers, is often desired, but introduces substantially increased costs and complexity which may make it impractical in some situations. The most common challenge that exists in multi-master replication is transactional conflict prevention or resolution. Most synchronous or eager replication solutions do conflict prevention, while asynchronous solutions have to do conflict resolution. For instance, if a record is changed on two nodes simultaneously, an eager replication system would detect the conflict before confirming the commit and abort one of the transactions. A lazy replication system would allow both transactions to commit and run a conflict resolution during resynchronization. The resolution of such a conflict may be based on a timestamp of the transaction, on the hierarchy of the origin nodes or on much more complex logic, which decides consistently on all nodes.
Database replication becomes difficult when it scales up. Usually, the scale up goes in two dimensions, horizontal and vertical: horizontal scale-up has more data replicas, vertical scale-up has data replicas located further away in distance. Problems raised by horizontal scale-up can be alleviated by a multi-layer multi-view access protocol. Vertical scale-up runs into less trouble, since Internet reliability and performance are improving.
(46) DEADLOCK IN OS
In computer science, Coffman deadlock refers to a specific condition when two or more processes are each waiting for each other to release a resource, or more than two processes are waiting for resources in a circular chain (see Necessary conditions). Deadlock is a common problem in multiprocessing where many processes share a specific type of mutually exclusive resource known as a software lock or soft lock. Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialized access. Deadlocks are particularly troubling because there is no general solution to avoid (soft) deadlocks. This situation may be likened to two people who are drawing diagrams, with only one pencil and one ruler between them. If one person takes the pencil and the other takes the ruler, a deadlock occurs when the person with the pencil needs the ruler and the person with the ruler needs the pencil to finish his work with the ruler. Neither request can be satisfied, so a deadlock occurs. The telecommunications description of deadlock is weaker than Coffman deadlock because processes can wait for messages instead of resources. Deadlock can be the result of corrupted messages or signals rather than merely waiting for resources. For example, a dataflow element that has been directed to receive input on the wrong link will never proceed even though that link is not involved in a Coffman cycle.
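The pencil-and-ruler situation can be reproduced, and avoided, directly in code. The following is a minimal C++ sketch (not from the original text; the worker names are hypothetical): worker_a and worker_b can deadlock by acquiring the two locks in opposite orders, while worker_safe acquires both atomically.

#include <mutex>
#include <thread>

std::mutex pencil; // resource 1
std::mutex ruler;  // resource 2

// Deadlock-prone: thread A takes the pencil then wants the ruler,
// thread B takes the ruler then wants the pencil. If each acquires
// its first resource, both wait forever for the other.
void worker_a() {
    std::lock_guard<std::mutex> lk1(pencil);
    std::lock_guard<std::mutex> lk2(ruler); // may block forever
}

void worker_b() {
    std::lock_guard<std::mutex> lk1(ruler);
    std::lock_guard<std::mutex> lk2(pencil); // may block forever
}

// Fix: acquire both resources together (std::scoped_lock uses a
// deadlock-avoidance algorithm), or always lock in the same order.
void worker_safe() {
    std::scoped_lock lk(pencil, ruler);
}

int main() {
    std::thread t1(worker_safe);
    std::thread t2(worker_safe);
    t1.join();
    t2.join();
}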
(48) Encryption
Encryption is the coding or scrambling of information so that it can only be decoded and read by someone who has the correct decoding key. Encryption is used in secure Web sites as well as other mediums of data transfer. If a third party were to intercept the information you sent via an encrypted connection, they would not be able to read it. So if you are sending a message over the office network to your co-worker about how much you hate your job, your boss, and the whole dang company, it would be a good idea to make sure that you send it over an encrypted line.
The restrictions on a stack imply that if the elements A, B, C, D, E are added to the stack, in that order, then the first element to be removed/deleted must be E. Equivalently we say that the last element to be inserted into the stack will be the first to be removed. For this reason stacks are sometimes referred to as Last In First Out (LIFO) lists. The restrictions on a queue imply that the first element which is inserted into the queue will be the first one to be removed. Thus A is the first letter to be removed, and queues are known as First In First Out (FIFO) lists. Note that the data object queue as defined here need not necessarily correspond to the mathematical concept of queue in which the insert/delete rules may be different.
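A minimal C++ sketch of the A-E example (illustrative, using the standard library containers): the stack pops E first, the queue pops A first.

#include <iostream>
#include <queue>
#include <stack>

int main() {
    std::stack<char> s;
    std::queue<char> q;
    for (char c : {'A', 'B', 'C', 'D', 'E'}) {
        s.push(c); // stack: last in, first out
        q.push(c); // queue: first in, first out
    }
    std::cout << "stack pops: ";
    while (!s.empty()) { std::cout << s.top(); s.pop(); } // EDCBA
    std::cout << "\nqueue pops: ";
    while (!q.empty()) { std::cout << q.front(); q.pop(); } // ABCDE
    std::cout << '\n';
}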
The result is a series of diagrams that represent the business activities in a way that is clear and easy to communicate. A business model comprises one or more data flow diagrams (also known as business process diagrams). Initially a context diagram is drawn, which is a simple representation of the entire system under investigation. This is followed by a level 1 diagram, which provides an overview of the major functional areas of the business. Don't worry about the symbols at this stage; these are explained shortly. Using the context diagram together with additional information from the area of interest, the level 1 diagram can then be drawn. The level 1 diagram identifies the major business processes at a high level and any of these processes can then be analyzed further - giving rise to a corresponding level 2 business process diagram. This process of more detailed analysis can then continue through levels 3, 4 and so on. However, most investigations will stop at level 2 and it is very unusual to go beyond a level 3 diagram. Identifying the existing business processes, using a technique like data flow diagrams, is an essential precursor to business process re-engineering, migration to new technology, or refinement of an existing business process. However, the level of detail required will depend on the type of change being considered. There are only five symbols that are used in the drawing of business process diagrams (data flow diagrams). These are now explained, together with the rules that apply to them.
This diagram represents a banking process, which maintains customer accounts. In this example, customers can withdraw or deposit cash, request information about their account or update their account details. The five different symbols used in this example represent the full set of symbols required to draw any business process diagram.
External Entity
An external entity is a source or destination of a data flow which is outside the area of study. Only those entities which originate or receive data are represented on a business process diagram. The symbol used is an oval containing a meaningful and unique identifier.
Process
A process shows a transformation or manipulation of data flows within the system. The symbol used is a rectangular box which contains 3 descriptive elements: Firstly, an identification number appears in the upper left hand corner. This is allocated arbitrarily at the top level and serves as a unique reference. Secondly, a location appears to the right of the identifier and describes where in the system the process takes place. This may, for example, be a department or a piece of hardware. Finally, a descriptive title is placed in the centre of the box. This should be a simple imperative sentence with a specific verb, for example 'maintain customer records' or 'find driver'.
Data Flow
A data flow shows the flow of information from its source to its destination. A data flow is represented by a line, with arrowheads showing the direction of flow. Information always flows to or from a process and may be written, verbal or electronic. Each data flow may be referenced by the processes or data stores at its head and tail, or by a description of its contents.
Data Store
A data store is a holding place for information within the system: It is represented by an open ended narrow rectangle. Data stores may be long-term files such as sales ledgers, or may be short-term accumulations: for example batches of documents that are waiting to be processed. Each data store should be given a reference followed by an arbitrary number.
Resource Flow
A resource flow shows the flow of any physical material from its source to its destination. For this reason they are sometimes referred to as physical flows. The physical material in question should be given a meaningful name. Resource flows are usually restricted to early, high-level diagrams.
Atomicity - Either the effects of all or none of its operations remain when a transaction is completed (committed or aborted respectively). In other words, to the outside world a committed transaction appears to be indivisible, atomic. A transaction is an indivisible unit of work that is either performed in its entirety or not performed at all ("all or nothing" semantics).
Consistency - Every transaction must leave the database in a consistent state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform a database from one consistent state to another consistent state.
Isolation - Transactions cannot interfere with each other. Moreover, an incomplete transaction is not visible to another transaction. Providing isolation is the main goal of concurrency control.
Durability - Effects of successful (committed) transactions must persist through crashes.
(56)Entity-Relationship Diagram
Definition: An entity-relationship (ER) diagram is a specialized graphic that illustrates the relationships between entities in a database. ER diagrams often use symbols to represent three different types of information. Boxes are commonly used to represent entities. Diamonds are normally used to represent relationships and ovals are used to represent attributes. Also Known As: ER Diagram, E-R Diagram, entity-relationship model
Examples: Consider the example of a database that contains information on the residents of a city. The ER diagram for this example contains two entities -- people and cities. There is a single "Lives In" relationship. In our example, due to space constraints, there is only one attribute associated with each entity. People have names and cities have populations. In a real-world example, each one of these would likely have many different attributes.
(58)Normalization
Normalization is the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table). Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored.
In practical applications, you'll often see 1NF, 2NF, and 3NF along with the occasional 4NF. Fifth normal form is very rarely seen and won't be discussed in this article. Before we begin our discussion of the normal forms, it's important to point out that they are guidelines and guidelines only. Occasionally, it becomes necessary to stray from them to meet practical business requirements. However, when variations take place, it's extremely important to evaluate any possible ramifications they could have on your system and account for possible inconsistencies. That said, let's explore the normal forms.
First Normal Form (1NF): Eliminate duplicative columns from the same table. Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).
For more details, read Putting your Database in First Normal Form
Second Normal Form (2NF): Meet all the requirements of the first normal form. Remove subsets of data that apply to multiple rows of a table and place them in separate tables. Create relationships between these new tables and their predecessors through the use of foreign keys.
For more details, read Putting your Database in Second Normal Form
Third Normal Form (3NF): Meet all the requirements of the second normal form. Remove columns that are not dependent upon the primary key.
For more details, read Putting your Database in Third Normal Form
Boyce-Codd Normal Form (BCNF): Meet all the requirements of the third normal form. Every determinant must be a candidate key.
For more details, read Putting your Database in Boyce Codd Normal Form
Fourth Normal Form (4NF): Meet all the requirements of the third normal form. A relation is in 4NF if it has no multi-valued dependencies.
Remember, these normalization guidelines are cumulative. For a database to be in 2NF, it must first fulfill all the criteria of a 1NF database.
(61) Cluster
A cluster is a group of sectors on a disk. While a sector is the smallest unit that can be accessed on your hard disk, a cluster is a slightly larger unit that is used to organize and identify files on the disk. Most files take up several clusters of disk space. Each cluster has a unique ID, which enables the hard drive to locate all the clusters on the disk. After reading and writing many files to a disk, some clusters may remain labeled as being used even though they do not contain any data. These are called "lost clusters" and can be fixed using Scandisk on Windows or the Disk Utility program on the Mac. This is why running a disk utility or defragmentation program may free up space on your hard disk.
(62) Relational Keys
The word "key" is much used and abused in the context of relational database design. In prerelational databases (hierarchtical, networked) and file systems (ISAM, VSAM, et al) "key" often referred to the specific structure and components of a linked list, chain of pointers, or other physical locator outside of the data. It is thus natural, but unfortunate, that today people often associate "key" with a RDBMS "index". We will explain what a key is and how it differs from an index. There are only three types of relational keys Candidate Key As stated above, a candidate key is any set of one or more columns whose combined values are unique among all occurrences (i.e., tuples or rows). Since a null value is not guaranteed to be unique, no component of a candidate key is allowed to be null. There can be any number of candidate keys in a table (as demonstrated elsewhere). Relational pundits are not in agreement whether zero candidate keys is acceptable, since that would contradict the (debatable) requirement that there must be a primary key. Primary Key The primary key of any table is any candidate key of that table which the database designer arbitrarily designates as "primary". The primary key may be selected for convenience, comprehension, performance, or any other reasons. It is entirely proper (albeit often inconvenient) to change the selection of primary key to another candidate key. Alternate Key The alternate keys of any table are simply those candidate keys which are not currently selected as the primary key. According to {Date95} "... exactly one of those candidate keys [is] chosen as the primary key [and] the remainder, if any, are then called alternate keys." An alternate key is a function of all candidate keys minus the primary key.
(63) Domain
Windows Network Domains. This networking option is built into Windows and allows users to create or join a domain. The domain may or may not be password-protected. Once connected to the domain, a user may view other computers within the domain and can browse the shared files and folders available on the connected systems. Windows XP users can browse Windows Network Domains by selecting the "My Network Places" option on the left side of an open window. You can create a new domain by using the Network Setup Wizard. Mac users using Mac OS X 10.2 or later can also connect to a Windows Network by clicking the "Network" icon on the left side of an open window. This will allow you to browse local Macintosh and Windows networks using the SMB protocol.
Neural Networks
Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex. The main categories of networks are acyclic or feed-forward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feed-forward networks are perceptrons, multi-layer perceptrons and radial basis networks.
Control Theory
Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.
Blackboard Systems
The basic idea of a blackboard system is illustrated in Fig.
Blackboard
-- The shared data structure through which knowledge sources communicate.
Control System
-- Determines the order in which knowledge sources will operate on entries in the blackboard.
Operator overloading is claimed to be useful because it allows the developer to program using notation "closer to the target domain" and allows user-defined types a similar level of syntactic support as types built into the language.
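As a small illustration (not from the original text; the type and its members are hypothetical), here is a C++ example of operator overloading: a two-dimensional vector type whose + reads just like the built-in numeric types, i.e., notation "closer to the target domain".

#include <iostream>

// A tiny 2-D vector type; overloading operator+ lets client code
// use mathematical notation instead of a named function call.
struct Vec2 {
    double x, y;
};

Vec2 operator+(const Vec2& a, const Vec2& b) {
    return {a.x + b.x, a.y + b.y};
}

std::ostream& operator<<(std::ostream& os, const Vec2& v) {
    return os << '(' << v.x << ", " << v.y << ')';
}

int main() {
    Vec2 a{1.0, 2.0}, b{3.0, 4.0};
    std::cout << a + b << '\n'; // prints (4, 6), just like built-in types
}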
(66) GAME PLAYING
Ever since the beginning of AI, there has been a great fascination in pitting the human expert against the computer. Game playing provided a high-visibility platform for this contest. It is important to note, however, that the performance of the human expert and the AI game-playing program reflect qualitatively different processes.
Game Playing
Computers, Games, and The Real World - This Scientific American article by Matthew Ginsberg discusses the current work being done in game-playing AI, such as the now-famous Deep Blue. While it may seem somewhat frivolous to spend all this effort on games, one should remember that games present a controlled world in which a good player must learn to problem solve rapidly and intelligently. Machine Learning in Game Playing - This resource page, assembled by Jay Scott, contains a large number of links to issues related to machine game playing. Report on games, computers and artificial intelligence - An updated and easy to understand summary of the status of game playing and AI.
More specifically, as mentioned earlier, the performance of the human expert utilizes a vast amount of domain specific knowledge and procedures. Such knowledge allows the human expert to generate a few promising moves for each game situation (irrelevant moves are never considered). In contrast, when selecting the best move, the game playing program exploits brute-force computational speed to explore as many alternative moves and consequences as possible. As the computational speed of modern computers increases, the contest of knowledge vs. speed is tilting more and more in the computer's favor, accounting for recent triumphs like Deep Blue's win over Garry Kasparov.
(67) Graphics
A graphic is an image or visual representation of an object. Therefore, computer graphics are simply images displayed on a computer screen. Graphics are often contrasted with text, which is comprised of characters, such as numbers and letters, rather than images. Computer graphics can be either two or three-dimensional. Early computers only supported 2D monochrome graphics, meaning they were black and white (or black and green, depending on the monitor). Eventually, computers began to support color images. While the first machines only supported 16 or 256 colors, most computers can now display graphics in millions of colors. 2D graphics come in two flavors: raster and vector. Raster graphics are the most common and are used for digital photos, Web graphics, icons, and other types of images. They are composed of a simple grid of pixels, which can each be a different color. Vector graphics, on the other hand, are made up of paths, which may be lines, shapes, letters, or other scalable objects. They are often used for creating logos, signs, and other types of drawings. Unlike raster graphics, vector graphics can be scaled to a larger size without losing quality. 3D graphics started to become popular in the 1990s, along with 3D rendering software such as CAD and 3D animation programs. By the year 2000, many video games had begun incorporating 3D graphics, since computers had enough processing power to support them. Most computers now come with a 3D video card that handles all the 3D processing. This allows even basic home systems to support advanced 3D games and applications.
(68) Extranet
If you know the difference between the Internet and an intranet, you have an above average understanding of computer terminology. If you know what an extranet is, you may be in the top echelon. An extranet actually combines both the Internet and an intranet. It extends an intranet, or internal network, to other users over the Internet. Most extranets can be accessed via a Web interface using a Web browser. Since secure or confidential information is often accessible within an intranet, extranets typically require authentication for users to access them. Extranets are often used by companies that need to share selective information with other businesses or individuals. For example, a supplier may use an extranet to provide inventory data to certain clients, while not making the information available to the general public. The extranet may also include a secure means of communication for the company and its clients, such as a support ticket system or Web-based forum.
Expert systems may be adopted to mimic the performance of human professionals; however, elements common to many or all of them are: 1) The creation of a knowledge base that uses some knowledge representation formalism to capture the knowledge of the Subject Matter Expert (SME). 2) A process of gathering that knowledge from the SME and codifying it according to the formalism, which is known as knowledge engineering. 3) Once the system is developed, it is placed in the same real-world problem-solving situation as the human SME, typically as an aid to human workers or as a supplement to some information system. Expert systems may or may not have learning components.
Methods Of Operation:
Certainty Factors
One method of operation of expert systems is a quasi-probabilistic approach using certainty factors.
Chaining
Two well developed methods of reasoning when using inference rules are forward chaining and backward chaining.
Inference Rules
An inference rule is a conditional statement consisting of two clauses: an if clause and a then clause. This form of rule is able to find solutions to diagnostic and prescriptive problems.
(70) KNOWLEDGE REPRESENTATION
Knowledge Representation (KR) research involves analysis of how to reason accurately and effectively and how best to use a set of symbols to represent a set of facts within a knowledge domain. A symbol vocabulary and a system of logic are combined to enable inferences about elements in the KR to create new KR sentences. Logic is used to supply formal semantics of how reasoning functions should be applied to the symbols in the KR system. Logic is also used to define how operators can process and reshape the knowledge. Examples of operators and operations include negation, conjunction, adverbs, adjectives, quantifiers and modal operators. Interpretation theory supplies this logic. A good KR must support both declarative and procedural knowledge. What knowledge representation is can best be understood in terms of five distinct roles it plays, each crucial to the task at hand: A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world? It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences. It is a medium of human expression, i.e., a language in which we say things about the world."
(72) Firmware
Firmware is a software program or set of instructions programmed on a hardware device. It provides the necessary instructions for how the device communicates with the other computer hardware. But how can software be programmed onto hardware? Good question. Firmware is typically stored in the flash ROM of a hardware device. While ROM is "read-only memory," flash ROM can be erased and rewritten because it is actually a type of flash memory. Firmware can be thought of as "semi-permanent" since it remains the same unless it is updated by a firmware updater. You may need to update the firmware of certain devices, such as hard drives and video cards, in order for them to work with a new operating system. CD and DVD drive manufacturers often make firmware updates available that allow the drives to read faster media. Sometimes manufacturers release firmware updates that simply make their devices work more efficiently.
Blind search -- can only move according to position in search.
Heuristic search -- uses domain-specific information to decide where to search next.
Breadth-First Search
1. Set L to be a list of the initial nodes in the problem.
2. If L is empty, fail; otherwise pick the first node n from L.
3. If n is a goal state, quit and return the path from the initial node.
4. Otherwise remove n from L and add to the end of L all of n's children. Label each child with its path from the initial node. Return to 2.
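A sketch of these four steps in C++ (illustrative; a visited set is added so the search also terminates on graphs with cycles, and the graph in main is hypothetical):

#include <iostream>
#include <list>
#include <vector>

// Breadth-first search following the four steps above: L is the list
// of frontier nodes, and each entry carries its path from the start.
std::vector<int> bfs(const std::vector<std::vector<int>>& children,
                     int start, int goal) {
    std::list<std::vector<int>> L{{start}};        // step 1
    std::vector<bool> seen(children.size(), false);
    seen[start] = true;
    while (!L.empty()) {                           // step 2: fail if empty
        std::vector<int> path = L.front();
        L.pop_front();
        int n = path.back();
        if (n == goal) return path;                // step 3: goal reached
        for (int c : children[n]) {                // step 4: expand n
            if (!seen[c]) {
                seen[c] = true;
                std::vector<int> next = path;
                next.push_back(c);
                L.push_back(next);
            }
        }
    }
    return {};                                     // empty path = failure
}

int main() {
    // hypothetical graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}, 3 -> {}
    std::vector<std::vector<int>> g{{1, 2}, {3}, {3}, {}};
    for (int v : bfs(g, 0, 3)) std::cout << v << ' '; // prints 0 1 3
    std::cout << '\n';
}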
Hill Climbing
Here the generate-and-test method is augmented by a heuristic function which measures the closeness of the current state to the goal state.
1. Evaluate the initial state; if it is a goal state, quit; otherwise the current state is the initial state.
2. Select a new operator for this state and generate a new state.
3. Evaluate the new state:
o if it is closer to the goal state than the current state, make it the current state;
o if it is no better, ignore it.
4. If the current state is a goal state or no new operators are available, quit. Otherwise repeat from 2.
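A minimal C++ sketch of hill climbing (illustrative assumptions: a one-dimensional state space, the heuristic f(x) = -(x - 3)^2 whose peak plays the role of the goal, and "step left"/"step right" as the operators):

#include <iostream>

int main() {
    // The heuristic: higher is closer to the goal; single peak at x = 3.
    auto f = [](double x) { return -(x - 3.0) * (x - 3.0); };
    double x = 0.0, step = 0.1;
    while (true) {
        double left = f(x - step), right = f(x + step), here = f(x);
        if (left <= here && right <= here) break; // no better neighbor: stop
        x = (left > right) ? x - step : x + step; // move uphill
    }
    std::cout << "peak near x = " << x << '\n';   // approximately 3
}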
(74) Hover
When you roll the cursor over a link on a Web page, it is often referred to as "hovering" over the link. This is somewhat like when your boss hovers over you at work, but not nearly as uncomfortable. In most cases, the cursor will change from a pointer to a small hand when it is hovering over a link. Web developers can also use cascading style sheets (CSS) to modify the color and style of a link when a user hovers over it. For example, the link may become underlined or change color while the cursor is hovering over it. The term hovering implies your computer screen is a three-dimensional space. In this conception, your cursor moves around on a layer above the text and images. When you click the mouse button while the cursor is hovering over a link, it presses down on the link to activate it. Hovering can also be used in a more general sense, such as moving the cursor over icons, windows, or other objects on the screen.
A heuristic search:
might not always find the best solution, but is guaranteed to find a good solution in reasonable time;
by sacrificing completeness, it increases efficiency;
is useful in solving tough problems which
o could not be solved any other way, or
o whose solutions take an infinite time or very long time to compute.
The classic example of heuristic search methods is the travelling salesman problem.
(76) Intranet
Contrary to popular belief, this is not simply a misspelling of "Internet." "Intra" means "internal" or "within," so an Intranet is an internal or private network that can only be accessed within the confines of a company, university, or organization. "Inter" means "between or among," hence the difference between the Internet and an Intranet. Up until the last few years, most corporations used local networks composed of expensive proprietary hardware and software for their internal communications. Now, using simple Internet technology, intranets have made internal communication much easier and less expensive. Intranets use a TCP/IP connection and support Web browsing, just like a typical Internet connection does. The difference is that Web sites served within the intranet can only be accessed by computers connected through the local network. Now that you know the difference between the Internet and an intranet, you can go around telling people on the street what you know and impress them.
(77) Software
Computer instructions or data. Anything that can be stored electronically is software. The storage devices and display devices are hardware. The terms software and hardware are used as both nouns and adjectives. For example, you can say: "The problem lies in the software," meaning that there is a problem with the program or data, not with the computer itself. You can also say: "It's a software problem." The distinction between software and hardware is sometimes confusing because they are so integrally linked. Clearly, when you purchase a program, you are buying software. But to buy the software, you need to buy the disk (hardware) on which the software is recorded.
Types of Software
System software
This is the software which is actually running your computer. This software handles requests to use the hardware, much housekeeping work, storage of data, etc. A good example of such system software is the Windows program which is running your computer. Linux and UNIX are other common system software available for running computers.
Application software
These types of software are developed to handle a specific task and are developed to cater for the specific requirements of the user. This could be the Payroll system or the Accounting system of your company. The system is developed keeping all the requirements of the user in mind. But for small companies it would not be a good idea to develop software applications specifically for your use. To keep the system running properly, you should have skilled IT expertise available to you, and you might have to spend lots of time and money to run the system properly.
Off-the-shelf software
When software is developed catering for your specific requirements, you would be able to execute all the functions you desire. But development of software would be quite costly and may not be suitable for small enterprises. Off-the-shelf software generally provides some standard functions and does not cost much. But you may find that the software does not provide what you require. Therefore, you might have to change your business practice to be in line with this shrink-wrapped software. Because the software is sold in large quantities, you would get proper support from the software vendor. Also, proper training facilities would be available so that you could extract the maximum benefit of using the software package. These standard off-the-shelf software packages are the best solution for small companies. The packages could be word processing programs, office applications, etc. Sometimes these packages can be modified if they do not provide all the functionality you desire.
(78) INHERITANCE
Inheritance is the process by which new classes called derived classes are created from existing classes called base classes. The derived classes have all the features of the base class and the programmer can choose to add new features specific to the newly created derived class. For example, a programmer can create a base class named fruit and define derived classes as mango, orange, banana, etc. Each of these derived classes, (mango, orange, banana, etc.) has all the features of the base class (fruit) with additional attributes or features specific to these newly created derived classes. Mango would have its own defined features, orange would have its own defined features, banana would have its own defined features, etc. Inheritance helps the code to be reused in many situations. The base class is defined and once it is compiled, it need not be reworked. Using the concept of inheritance, the programmer can create as many derived classes from the base class as needed while adding specific features to each derived class as needed.
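The fruit example translates directly into C++; this is an illustrative sketch (class and member names are hypothetical):

#include <iostream>
#include <string>

// Base class: the features common to every fruit.
class Fruit {
public:
    explicit Fruit(std::string name) : name_(std::move(name)) {}
    virtual ~Fruit() = default;
    const std::string& name() const { return name_; }
    virtual std::string colour() const { return "unknown"; }
private:
    std::string name_;
};

// Derived classes inherit everything from Fruit and add or
// override features specific to themselves.
class Mango : public Fruit {
public:
    Mango() : Fruit("mango") {}
    std::string colour() const override { return "yellow"; }
};

class Orange : public Fruit {
public:
    Orange() : Fruit("orange") {}
    std::string colour() const override { return "orange"; }
};

int main() {
    Mango m;
    Orange o;
    std::cout << m.name() << " is " << m.colour() << '\n';
    std::cout << o.name() << " is " << o.colour() << '\n';
}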
(79) Abstraction
Abstraction is one of the most powerful and vital features provided by the object-oriented C++ programming language. Modularity is very important in any programming language; it provides flexibility to users of the programming language. This aspect is well achieved with high performance by the concept of abstraction in C++. In an object-oriented programming language the programmer can abstract both data and code when needed. The concept of abstraction relates to the idea of hiding data that are not needed for presentation. The main idea behind data abstraction is to give a clear separation between the properties of a data type and the associated implementation details. This separation is achieved so that the properties of the abstract data type are visible to the user interface and the implementation details are hidden. Thus, abstraction forms the basic platform for the creation of user-defined data types called objects. Data abstraction is the process of refining data to its essential form. An Abstract Data Type is defined as a data type that is defined in terms of the operations that it supports and not in terms of its structure or implementation.
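A minimal C++ sketch of an abstract data type (illustrative only; the names IntStack and VectorStack are hypothetical): the type is defined purely by the operations it supports, and the implementation details stay hidden behind the interface.

#include <vector>

// The abstract data type: only the supported operations are named;
// nothing is said about structure or implementation.
class IntStack {
public:
    virtual ~IntStack() = default;
    virtual void push(int value) = 0;
    virtual int pop() = 0;
    virtual bool empty() const = 0;
};

// One possible implementation; users of IntStack never see it.
class VectorStack : public IntStack {
public:
    void push(int value) override { data_.push_back(value); }
    int pop() override {
        int v = data_.back();
        data_.pop_back();
        return v;
    }
    bool empty() const override { return data_.empty(); }
private:
    std::vector<int> data_; // hidden implementation detail
};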
(80) Encapsulation
Encapsulation is the process of combining data and functions into a single unit called a class. Using the method of encapsulation, the programmer cannot directly access the data. Data is only accessible through the functions present inside the class. Data encapsulation led to the important concept of data hiding. Data hiding means the implementation details of a class are hidden from the user. The concept of restricted access led programmers to write specialized functions or methods for performing the operations on hidden members of the class. Attention must be paid to ensure that the class is designed properly. Neither too much access nor too much control must be placed on the operations in order to make the class user friendly. Hiding the implementation details and providing restrictive access leads to the concept of the abstract data type. Encapsulation leads to the concept of data hiding, but the concept of encapsulation must not be restricted to information hiding. Encapsulation clearly represents the ability to bundle related data and functionality within a single, autonomous entity called a class.
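A small illustrative C++ sketch (the BankAccount class is hypothetical): data and the functions that operate on it are combined in one class, and the hidden balance can only be changed through member functions that enforce the rules.

#include <stdexcept>

class BankAccount {
public:
    void deposit(double amount) {
        if (amount <= 0) throw std::invalid_argument("bad amount");
        balance_ += amount;
    }
    void withdraw(double amount) {
        if (amount <= 0 || amount > balance_)
            throw std::invalid_argument("bad amount");
        balance_ -= amount;
    }
    double balance() const { return balance_; }
private:
    double balance_ = 0.0; // hidden: no direct access from outside
};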
1. BOUNDED MEDIA: Bounded media are the physical links through which signals are confined to a narrow path. These are also called guided media. Bounded media are made up of an external conductor (usually copper) bounded by jacket material. Bounded media are great for LANs because they offer high speed, good security and low cost. However, sometimes they cannot be used for long-distance communication. Three common types of bounded media are used for data transmission. These are Coaxial Cable
UNBOUNDED / UNGUIDED MEDIA: Unbounded or unguided (wireless) media don't use any physical connector between the two communicating devices. Usually the transmission is sent through the atmosphere, but sometimes it can be just across the room. Wireless media is used when a physical obstruction or distance blocks the use of normal cable media. The three types of wireless media are: RADIO WAVES
MICROWAVES
INFRARED WAVES
Category 2: These cables can support up to 4 Mbps implementations.
Category 3: These cables support up to 16 Mbps and are mostly used at 10 Mbps.
Category 4: These are used for longer distances and higher speeds. They can support 20 Mbps.
Category 5: This is the highest rating for UTP cable and can support up to 100 Mbps.
UTP cables consist of 2 or 4 pairs of twisted cable. Cable with 2 pairs uses an RJ-11 connector and 4-pair cable uses an RJ-45 connector.
2. Shielded twisted pair (STP): It is similar to UTP but has a mesh shielding that protects it from EMI, which allows for higher transmission rates. IBM has defined categories for STP cable.
Type 1: STP featuring two pairs of 22-AWG wire.
Type 2: This type includes Type 1 with 4 telephone pairs.
Type 6: This type features two pairs of standard shielded 26-AWG wire.
Type 7: This type of STP consists of 1 pair of standard shielded 26-AWG wire.
Type 9: This type consists of shielded 26-AWG wire.
(84) Fiber Optics
Fiber optic cable does not use electrical signals to transmit data. It uses light. In fiber optic cable, light only moves in one direction; for two-way communication to take place, a second connection must be made between the two devices. It is actually two strands of cable, and each strand is responsible for one direction of communication. A laser at one device sends pulses of light through this cable to the other device. These pulses are translated into 1s and 0s at the other end. In the center of the fiber cable is a glass strand, or core. The light from the laser moves through this glass to the other device. Around the internal core is a reflective material known as CLADDING. No light escapes the glass core because of this reflective cladding. Fiber optic cable has bandwidth of more than 2 Gbps (gigabits per second).
Characteristics of Fiber Optic Cable:
Expensive
Very hard to install
Capable of extremely high speed
Extremely low attenuation
No EMI interference
Radio waves include short waves, VHF (Very High Frequency) and UHF (Ultra High Frequency).
SHORT WAVES: There are different types of antennas used for radio waves. Radio wave transmission can be divided into the following categories.
1. LOW POWER, SINGLE FREQUENCY: As the name shows, this system transmits on one frequency and has low power output. The normal operating range of these devices is 20 to 25 meters.
CHARACTERISTICS OF LOW POWER, SINGLE FREQUENCY:
Low cost
Simple installation with pre-configured equipment
1 Mbps to 10 Mbps capacity
High attenuation
Low immunity to EMI
2. HIGH POWER, SINGLE FREQUENCY: This is similar to low power, single frequency, but these devices can communicate over greater distances.
CHARACTERISTICS OF HIGH POWER, SINGLE FREQUENCY:
Moderate cost
Easier to install than low power, single frequency
1 Mbps to 10 Mbps capacity
Low attenuation over long distances
Low immunity to EMI
1. Terrestrial Microwaves: Terrestrial microwaves are used to transmit wireless signals across a few miles. Terrestrial systems require that direct parabolic antennas be pointed at each other. These systems operate in the low gigahertz range.
CHARACTERISTICS of Terrestrial Microwaves:
Moderate to high cost
Moderately difficult installation
1 Mbps to 10 Mbps capacity
Variable attenuation
The main problem with microwave communication is the curvature of the earth: mountains and other structures often block the line of sight. Due to this reason, many repeaters are required for long distances, which increases the cost of data transmission between two points. This problem is overcome by using satellites. Satellite microwave transmission is used to transmit signals throughout the world. These systems use satellites in orbit about 50,000 km above the earth. Satellite dishes are used to send the signals up to the satellite, from where they are sent back down to the receiver's dish. These transmissions also use directional parabolic antennas within line of sight. In satellite communication, a microwave signal at 6 GHz is transmitted from a transmitter on the earth to the satellite positioned in space. By the time the signal reaches the satellite it has become weaker due to the 50,000 km distance. The satellite amplifies the weak signal and transmits it back to the earth at a frequency less than 6 GHz.
Characteristics of Satellite Microwaves:
High cost
Extremely difficult and hard installation
Variable attenuation
Low immunity to EMI
High security is needed, because a signal sent to a satellite is broadcast to all receivers within the satellite's coverage.
(88) Infrared
Infrared frequencies are just below visible light. These high frequencies allow high speed data transmission. This technology is similar to the use of a remote control for a TV. Infrared transmission can be affected by objects obstructing the sender or receiver. These transmissions fall into two categories. 1. Point to point 2. Broadcast (i) Point to Point: Point to point infrared transmissions signal directly between two systems. Many laptop systems use point to point transmission. These systems require direct alignment between the devices.
Characteristics of Point to Point:
Wide range of cost
Moderately easy installation
100 kbps to 16 Mbps capacity
Variable attenuation
High immunity to EMI
(ii) Broadcast: These infrared transmissions use a sprayed signal, one broadcast in all directions instead of a direct beam. This helps to reduce the problems of proper alignment and obstruction. It also allows multiple receivers of the signal.
Characteristics of Broadcast:
(89) ROUTERS
Routers are physical devices that join multiple wired or wireless networks together. Technically, a wired or wireless router is a Layer 3 gateway, meaning that the wired/wireless router connects networks (as gateways do), and that the router operates at the network layer of the OSI model.
Home networkers often use an Internet Protocol (IP) wired or wireless router, IP being the most common OSI network layer protocol. An IP router such as a DSL or cable modem broadband router joins the home's local area network (LAN) to the wide-area network (WAN) of the Internet. By maintaining configuration information in a piece of storage called the routing table, wired or wireless routers also have the ability to filter traffic, either incoming or outgoing, based on the IP addresses of senders and receivers. Some routers allow the home networker to update the routing table from a Web browser interface. Broadband routers combine the functions of a router with those of a network switch and a firewall in a single unit.
In reality, since the software requirements specification is the building block for an entire development, it needs to be absolutely correct, but rarely is. The reason? Either the business requirements documentation or the project management requirements are ambiguous or misunderstood. As soon as this occurs you can kiss your project goodbye, unless of course it is using an Agile methodology, in which case you may have a chance to salvage it. Intrigued? Well, read on, because even if you don't consider yourself a "technical" project manager, you must, at the very least, understand what this documentation needs to deliver.
Interfaces
This will describe how the software will interact with people, hardware and other software. This is particularly important: much software fails because it can't "communicate" with other systems.
Performance
The software requirements specification gives the details of the performance level that the software should be able to deliver. This includes reliability, speed, response time, etc.
Attributes
This includes areas such as security, portability, maintainability and accuracy.
Overall Description
This section includes the product perspective, product functionalities, users description, operating environment of the software, design constraints and assumptions.
System Features
This section contains the details of the features that the system will have, the priority list, actions, results expected and functional requirements.
Objects are accessible to the programmer via an interface and they can send/receive messages to other objects. The behavior of these objects is defined by their methods.
(93) Multithreading
Multitasking is performing two or more tasks at the same time. Nearly all operating systems are capable of multitasking by using one of two multitasking techniques: process-based multitasking and thread-based multitasking. Process-based multitasking is running two programs concurrently. Programmers refer to a program as a process. Therefore, you could say that process-based multitasking is program-based multitasking. Thread-based multitasking is having a program perform two tasks at the same time. For example, a word processing program can check the spelling of words in a document while you write the document. This is thread-based multitasking.
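A minimal C++ sketch of thread-based multitasking (illustrative; the two task functions are hypothetical stand-ins for the word processor example): one program runs two tasks at the same time.

#include <iostream>
#include <thread>

// Two tasks within one program, analogous to a word processor
// spell-checking while you type. Their output may interleave.
void spell_check() { std::cout << "checking spelling...\n"; }
void autosave()    { std::cout << "saving document...\n"; }

int main() {
    std::thread t1(spell_check);
    std::thread t2(autosave);
    t1.join(); // wait for both tasks to finish
    t2.join();
}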
This is the most basic purpose fulfilled by software development cost estimation.
Ensuring the presence of well documented functional requirements
Focusing early on non-functional aspects
Having a clear and manageable change request process
Arranging for high-quality configuration management of releases and environments
Creating a well defined user acceptance testing model while determining the requirements
Conducting unit testing as an integral part of the process of software development will save project time and cost later on
Somebody goes to a trade show
A vendor gloms onto them and, of course, has the answer to all their problems
The vendor pitches to an ad hoc procurement team, which vows to research alternatives and perhaps even issue a Request For Proposal (RFP)
Due to the press of business, the process is short-circuited and the decision comes down to "Can we afford what the vendor is selling?" rather than "Is this the right solution of the many alternatives we've researched?"
The purchase is made and never evaluated to see if it a) solved the problem and b) delivered true ROI
Hardware must support current software as well as software planned for procurement over the next planning interval [year, 18 months, three years]
Hardware must be compatible with existing or planned networks
Hardware must be upgradeable and expandable to meet the needs of the next planning interval
Hardware warranties must be of an appropriate length
Hardware maintenance must be performed by [local/remote vendor, in-house personnel]
Whenever feasible, hardware standards will dictate procurement of like brands and configurations to simplify installation and support
Routine assessments of installed infrastructure will feed an upgrade/replace decision process
Software must be compatible with current and future hardware over the next planning interval
Software maintenance and warranties must be of appropriate length and cost
Software help desk must be maintained by [vendor, third party, in-house personnel]
Software must be standardized throughout the business to improve purchasing power, simplify training, and facilitate support
Software must comply with current standards set by technology leadership
Software must support and enhance business goals
In addition to these hardware and software selection criteria, StratVantage will evaluate the proposed vendors on several criteria, including:
Stability - Vendor's attributes such as length of operations, size of customer base, size of income and revenue, company size, leadership, stock history and more can affect a technology purchasing decision
Proven Track Record - A vendor's experience not only in the broader market but in your business' specific industry can be key
Business Model Fit - If the vendor is offering, for example, software as a service, but your business isn't always Internet-connected, this business model mismatch could rule out the vendor
Mature Technology - You want to see continuity in the vendor's offerings. If the vendor has been through a series of acquisitions and is just now integrating new technology with an old line of business, you may want to obtain assurances on the longevity of the vendor's solution.
Service Level Agreements - Unfortunately, most vendor Service Level Agreements (SLAs) aren't worth the paper they are printed on. We'll help you understand the vendor's SLA and negotiate a service level partnership instead.
(95) Graphs
Graphs are a widely used structure in computer science and different computer applications. We don't say "data structure" here, and there is a difference: graphs are meant to store and analyze metadata, the connections present in data. For instance, consider cities in your country. The road network which connects them can be represented as a graph and then analyzed. We can examine whether one city can be reached from another one, or find the shortest route between two cities. First of all, we introduce some definitions on graphs. Next, we are going to show how graphs are represented inside a computer. Then you can turn to basic graph algorithms. There are two important sets of objects which specify a graph and its structure. The first set is V, which is called the vertex-set. In the example with the road network, cities are vertices. Each vertex can be drawn as a circle with the vertex's number inside.
vertices
The next important set is E, which is called the edge-set. E is a subset of V x V. Simply speaking, each edge connects two vertices, including the case when a vertex is connected to itself (such an edge is called a loop). All graphs are divided into two big groups: directed and undirected graphs. The difference is that edges in directed graphs, called arcs, have a direction. These kinds of graphs have much in common with each other, but significant differences are also present. We will point out which kind of graph is considered in each particular algorithm description. An edge can be drawn as a line. If a graph is directed, each line has an arrow.
directed graph
A sequence of vertices, such that there is an edge from each vertex to the next in the sequence, is called a path. The first vertex in the path is called the start vertex; the last vertex in the path is called the end vertex. If the start and end vertices are the same, the path is called a cycle. A path is called simple if it includes every vertex at most once. A cycle is called simple if it includes every vertex, except the start (end) one, at most once. Let's see examples of a path and a cycle.
path (simple)
cycle (simple)
The last definition we give here is a weighted graph. A graph is called weighted if every edge is associated with a real number, called the edge weight. For instance, in the road network example, the weight of each road may be its length or the minimal time needed to drive along it.
weighted graph
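To show how a graph can be represented inside a computer, here is a minimal C++ sketch of a weighted adjacency list (illustrative; the small graph built in main is hypothetical). For each vertex we store the vertices it connects to, together with the edge weight, e.g., the road length in the road-network example.

#include <iostream>
#include <utility>
#include <vector>

int main() {
    const int n = 4; // vertices 0..3
    std::vector<std::vector<std::pair<int, double>>> adj(n);
    auto add_edge = [&](int u, int v, double w) { // undirected graph
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    };
    add_edge(0, 1, 5.0);
    add_edge(1, 2, 2.5);
    add_edge(0, 3, 7.0);
    for (int u = 0; u < n; ++u) {
        std::cout << u << ':';
        for (auto [v, w] : adj[u]) std::cout << " (" << v << ", " << w << ')';
        std::cout << '\n';
    }
}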
Constructors are used to create, and can initialize, objects of their class type. You cannot declare a constructor as virtual or static, nor can you declare a constructor as const, volatile, or const volatile. You do not specify a return type for a constructor. A return statement in the body of a constructor cannot have a return value.
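A short illustrative C++ sketch of these rules (the Point class is hypothetical): constructors have no return type, and one class may provide several of them.

#include <iostream>

class Point {
public:
    Point() = default;                          // default constructor
    Point(double x, double y) : x_(x), y_(y) {} // creates and initializes
    // Note: no return type, and never virtual, static, or const.
    double x() const { return x_; }
    double y() const { return y_; }
private:
    double x_ = 0.0;
    double y_ = 0.0;
};

int main() {
    Point origin;      // default-constructed
    Point p(3.0, 4.0); // created and initialized in one step
    std::cout << p.x() << ", " << p.y() << '\n';
    std::cout << origin.x() << ", " << origin.y() << '\n';
}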
(97) Bubble Sort
Bubble sort is a simple and well-known sorting algorithm. It is used in practice once in a blue moon and its main application is to make an introduction to sorting algorithms. Bubble sort belongs to the O(n^2) sorting algorithms, which makes it quite inefficient for sorting large data volumes. Bubble sort is stable and adaptive.
Algorithm
1. Compare each pair of adjacent elements from the beginning of an array and, if they are in reversed order, swap them.
2. If at least one swap has been done, repeat step 1.
You can imagine that on every step big bubbles float to the surface and stay there. At the step when no bubble moves, sorting stops. Let us see an example of sorting an array to make the idea of bubble sort clearer. Example. Sort {5, 1, 12, -5, 16} using bubble sort.
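The two steps above translate directly into C++; this sketch sorts the {5, 1, 12, -5, 16} example from the text:

#include <iostream>
#include <vector>

// Bubble sort exactly as described: sweep over adjacent pairs,
// swapping out-of-order ones, and repeat until a sweep makes no swap.
void bubble_sort(std::vector<int>& a) {
    bool swapped = true;
    while (swapped) {           // step 2: repeat while a swap happened
        swapped = false;
        for (std::size_t i = 0; i + 1 < a.size(); ++i) { // step 1
            if (a[i] > a[i + 1]) {
                std::swap(a[i], a[i + 1]);
                swapped = true;
            }
        }
    }
}

int main() {
    std::vector<int> a{5, 1, 12, -5, 16}; // the example from the text
    bubble_sort(a);
    for (int x : a) std::cout << x << ' '; // -5 1 5 12 16
    std::cout << '\n';
}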
(98) Binary heap
There are several types of heaps, but in the current article we are going to discuss the binary heap. For short, let's call it just "heap". It is used to implement the priority queue ADT and in the heapsort algorithm. A heap is a complete binary tree which satisfies the heap property.
Heap property
There are two possible types of binary heaps: the max heap and the min heap. The difference is that the root of a min heap contains the minimal element, while the root of a max heap contains the maximal element. Priority queues often deal with min heaps, whereas the heapsort algorithm, when sorting in ascending order, uses a max heap.
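A minimal C++ sketch (illustrative; std::priority_queue is a standard-library binary heap) showing a max heap and a min heap side by side:

#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    // By default std::priority_queue is a max heap: top is the largest.
    std::priority_queue<int> max_heap;
    // With std::greater it becomes a min heap: top is the smallest.
    std::priority_queue<int, std::vector<int>, std::greater<int>> min_heap;
    for (int x : {5, 1, 12, -5, 16}) {
        max_heap.push(x);
        min_heap.push(x);
    }
    std::cout << "max heap top: " << max_heap.top() << '\n'; // 16
    std::cout << "min heap top: " << min_heap.top() << '\n'; // -5
}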
(99) Array
An array is a very basic data structure representing a group of similar elements, accessed by index. The array data structure can be effectively stored inside the computer and provides fast access to all its elements. Let us see the advantages and drawbacks of arrays.
Advantages
No overhead per element. Any element of an array can be accessed in O(1) time by its index.
Drawbacks
The array data structure is not completely dynamic. Many programming languages provide an opportunity to allocate arrays with arbitrary size (dynamically allocated arrays), but when this space is used up, a new array of greater size must be allocated and the old data copied to it. Insertion and deletion of an element in the array requires shifting O(n) elements on average, where n is the size of the array.
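A small C++ sketch illustrating the reallocate-and-copy behavior described above (illustrative; the exact capacity sequence printed is implementation-dependent):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v; // a dynamically allocated array
    std::cout << "capacity grows as elements are appended:\n";
    for (int i = 0; i < 17; ++i) {
        v.push_back(i); // when capacity is exhausted, a larger block
                        // is allocated and the old data is copied over
        std::cout << "size " << v.size()
                  << ", capacity " << v.capacity() << '\n';
    }
    std::cout << "v[10] = " << v[10] << '\n'; // O(1) access by index
}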
A binary search tree has the following properties:
it is a binary tree;
each node contains a value;
a total order is defined on these values (every two values can be compared with each other);
the left subtree of a node contains only values lesser than the node's value;
the right subtree of a node contains only values greater than the node's value.
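These properties translate into a short C++ sketch (illustrative; the node layout and function names are hypothetical): insertion sends smaller values left and greater values right, and lookup follows the same comparisons.

#include <iostream>
#include <memory>

// A binary search tree node: smaller values go into the left
// subtree, greater values into the right subtree.
struct Node {
    int value;
    std::unique_ptr<Node> left, right;
    explicit Node(int v) : value(v) {}
};

void insert(std::unique_ptr<Node>& root, int v) {
    if (!root)                root = std::make_unique<Node>(v);
    else if (v < root->value) insert(root->left, v);
    else if (v > root->value) insert(root->right, v);
    // equal values: already present, nothing to do
}

bool contains(const Node* root, int v) {
    if (!root) return false;
    if (v == root->value) return true;
    return v < root->value ? contains(root->left.get(), v)
                           : contains(root->right.get(), v);
}

int main() {
    std::unique_ptr<Node> root;
    for (int v : {8, 3, 10, 1, 6}) insert(root, v);
    std::cout << std::boolalpha
              << contains(root.get(), 6) << ' '   // true
              << contains(root.get(), 7) << '\n'; // false
}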