SOFTWARE ENGINEERING

CHAPTER - 1 SOFTWARE ENGINEERING CONCEPTS


Q. 1: - Write down the important characteristics of software.
Ans. 1: - Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware.
1. Software is developed or engineered; it is not manufactured in the classical sense. - In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.
2. Software doesn't wear out. - Software is not susceptible to the environmental maladies (dust, abuse, temperature extremes, etc.) that cause hardware to wear out. Undiscovered defects cause high failure rates early in the life of a program; however, these are corrected and the failure-rate curve flattens, as shown. There are no software spare parts: every software failure indicates an error in design or in the process through which the design was translated into machine-executable code. Therefore, software maintenance involves considerably more complexity than hardware maintenance.
3. Most software continues to be custom built. - A software component should be designed and implemented so that it can be reused in many different programs. Modern reusable components encapsulate both data and the processing that is applied to the data, enabling the software engineer to create new applications from reusable parts. For example, the data structures and processing detail required to build an interface are contained within a library of reusable components for interface construction.

Q. 2: - Write down the main categories of software systems.
Ans. 2: - Today, seven broad categories of software present continuing challenges for software engineers:
System software - System software is a collection of programs written to service other programs. Some system software (e.g., compilers, editors, and file management utilities) processes complex, but determinate, information structures. Other system software (e.g., operating system components, drivers, networking software, telecommunications processors) processes largely indeterminate data. In either case, the system software area is characterized by heavy interaction with computer hardware and heavy usage by multiple users.
Application software - Application software consists of standalone programs that solve a specific business need. Applications in this area process business or technical data in a way that facilitates business operations or management/technical decision-making. In addition to conventional data processing applications, application software is used to control business functions in real time (e.g., point-of-sale transaction processing, real-time manufacturing process control).
Engineering/scientific software - Modern applications within the engineering/scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
Embedded software - Embedded software resides within a product or system and is used to implement and control features and functions for the end user and for the system itself.
Embedded software can perform limited and esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, braking systems, etc.).
Product-line software - Product-line software is designed to provide a specific capability for use by many different customers. Product-line software can focus on a limited and esoteric marketplace (e.g., inventory control products) or address mass consumer markets (e.g., word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business financial applications).
Web applications - Web apps span a wide array of applications. In their simplest form, web apps can be little more than a set of linked hypertext files that present information using text and limited graphics.
Artificial intelligence software - AI software makes use of nonnumeric algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Applications within this area include robotics, expert systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing. There is no computer yet that has common sense.


Q. 3: - Write down the software engineering principles and explain their role in software system design.
Ans. 3: - David Hooker [HOO96] has proposed seven core principles that focus on software engineering practice as a whole.
The First Principle: - The Reason It All Exists. A software system exists for one reason: to provide value to its users. All decisions should be made with this in mind. Before specifying a system requirement, before noting a piece of system functionality, before determining the hardware platforms or development processes, ask yourself questions such as: does this add real value to the system? If the answer is no, don't do it. All other principles support this one.
The Second Principle: - KISS (Keep It Simple, Stupid!). Software design is not a haphazard process. There are many factors to consider in any design effort. All design should be as simple as possible, but no simpler. This facilitates having a more easily understood and easily maintained system. This is not to say that features, even internal features, should be discarded in the name of simplicity. Indeed, the more elegant designs are usually the simple ones. Simple also does not mean "quick and dirty".
The Third Principle: - Maintain the Vision. A clear vision is essential to the success of a software project. Without conceptual integrity, a system threatens to become a patchwork of incompatible designs, held together by the wrong kind of screws. Compromising the architectural vision of a software system weakens and will eventually break even a well-designed system. Having an empowered architect who can hold the vision and enforce compliance helps ensure a very successful software project.
The Fourth Principle: - What You Produce, Others Will Consume. Seldom is an industrial-strength software system constructed and used in a vacuum. In some way or other, someone else will use, maintain, document, or otherwise depend on being able to understand your system. So, always specify, design, and implement knowing someone else will have to understand what you are doing. Design keeping the implementers in mind. Someone may have to debug the code you write, and that makes them a user of your code. Making their job easier adds value to the system.
The Fifth Principle: - Be Open to the Future. A system with a long lifetime has more value. In today's computing environments, hardware platforms become obsolete after just a few months; hence software lifetimes are typically measured in months instead of years. However, true industrial-strength software systems must endure far longer. Never design yourself into a corner. Always ask "what if", and prepare for all possible answers by creating systems that solve the general problem, not just the specific one. This could very possibly lead to the reuse of an entire system.
The Sixth Principle: - Plan Ahead for Reuse. Reuse saves time and effort. Achieving a high level of reuse is arguably the hardest goal to accomplish in developing a software system. There are many techniques to realize reuse at every level of the system development process. Planning ahead for reuse reduces the cost and increases the value of both the reusable components and the systems into which they are incorporated.
The Seventh Principle: - Think! This last principle is probably the most overlooked. Placing clear, complete thought before action almost always produces better results.
If you think about something and still do it wrong, it becomes valuable experience. A side effect of thinking is learning to recognize when you don't know something, at which point you can research the answer. When clear thought has gone into a system, value comes out. Applying the first six principles requires intense thought, for which the potential rewards are enormous.
Principles of s/w engineering:
1. High quality s/w is possible: - Although our industry is saturated with examples of s/w systems that perform poorly (full of bugs or failing to satisfy user needs), large s/w systems can be built with very high quality, but they carry a high price. Techniques that have been demonstrated to increase quality include verifying requirements before development, simplifying the design, conducting inspections, and hiring the best people.


2. Give products to customers early: - No matter how hard you try to learn users' needs during the requirements phase, the most effective way to determine real needs is to give users a product to use.

The conventional waterfall model delivers the first product after 99% of the development resources have been expended; thus, the majority of customer feedback on needs occurs after the resources have been expended.
3. Determine the problem before writing the requirements: - When faced with what they believe is a problem, most engineers rush to offer a solution. If the engineer's perception of the problem is accurate, the solution may work. The obvious response is to offer a solution that increases the speed of development work; for this, the s/w engineer has to estimate the work involved in the project. During selection of an appropriate mechanism, the following are considered: range of costs, risks, time, etc.
4. Evaluate design alternatives: - After the requirements are agreed upon, we must examine a variety of architectures and algorithms. We certainly do not want to use an architecture simply because it was used in the requirements specification; after all, that architecture was selected to optimize the external behavior of the system.
5. Use an appropriate process model: - There are dozens of process models. There is no such thing as a process model that works for every project. Each project must select a process that makes the most sense for that project on the basis of corporate culture, willingness to take risks, application area, requirements, etc.
6. Minimize intellectual distance: - Dijkstra defined intellectual distance as the distance between the real-world problem and the computerized solution to the problem. The smaller the intellectual distance, the easier it is to maintain the s/w. To minimize intellectual distance, the s/w's structure should be as close as possible to the real-world structure.
7. Good management is more important than good technology: - The best technology will not compensate for poor management, and a good manager can produce great results even with fewer resources. Good management motivates people to do their best. Management style must be adapted to the situation.
8. People are the key to success: - Highly skilled people with appropriate experience, talent and training are the key. The right people with insufficient tools, languages and processes will probably still succeed; the wrong people with appropriate tools, languages and processes will probably fail.

CHAPTER 2 SOFTWARE LIFE CYCLE MODELS

Classical life cycle model: - The waterfall model, sometimes called the classic life cycle, suggests a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modeling, construction, and deployment, culminating in ongoing support of the completed software. The waterfall model is the oldest paradigm for software engineering. Among the problems that are sometimes encountered when the waterfall model is applied are: 1. Real projects rarely follow the sequential flow that the model proposes. 2. It is often difficult for the customer to state all requirements explicitly; the waterfall model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects. In an interesting analysis of actual projects, Bradac found that the linear nature of the waterfall model leads to blocking states in which some project team members must wait for other members of the team to complete dependent tasks. Today, software work is fast-paced and subject to a never-ending stream of changes; the waterfall model is inappropriate for such work.
Spiral model: - The spiral model of software development is shown in the figure. The diagrammatic representation of this model appears like a spiral. The exact number of loops in a spiral is not fixed. Each loop of the spiral represents a phase of the software process.

For example, the innermost loop might be concerned with the feasibility study, the next with requirements specification, the next one with design, and so on. This model is much more flexible compared to the other models, since the exact number of phases through which the product is developed is not fixed. Each phase in this model is split up into four sectors (quadrants), as shown in the figure. The first quadrant identifies the objectives of the phase and the alternative solutions possible for the phase under consideration. During the second quadrant, the alternative solutions are evaluated to select the best solution possible. For the chosen solution, the potential risks are identified and dealt with by developing an appropriate prototype. Thus the spiral model provides direct support for coping with project risks. Activities during the third quadrant consist of developing and verifying the next level of the product. Activities during the fourth quadrant concern reviewing the results of the stages traversed so far with the customer and planning the next iteration around the spiral. The radius of the spiral at any point represents the cost incurred in the project till then, and the angular dimension represents the progress made in the current phase. In the spiral model of development, the project team must decide how exactly to structure the project into phases. As opposed to the previously discussed models, the spiral model can be viewed as a meta model, since it subsumes all the previously discussed models of software development.
Iterative waterfall model: - The classical waterfall model cannot be used in practical development projects, since it supports no mechanism to handle the errors committed during any of the phases. This problem is overcome in the iterative waterfall model, which is probably the most widely used software development model evolved so far. This model is simple to understand and use; however, while it is suitable for well-understood problems, it is not suitable for very large projects or for projects that are subject to many risks. In a practical development environment, the engineers do commit a large number of errors in almost every phase of the life cycle. The sources of the defects can be many: oversight, wrong assumptions, use of inappropriate technology, etc. These defects usually get detected much later in the life cycle. For example, a design defect might go unnoticed till we reach the coding or testing phase. Once a defect is detected, the engineers need to go back to the phase where the defect had occurred and redo some of the work done during that phase and the subsequent phases to correct the defect and its effect on the later phases. The figure shows the feedback paths that allow for the correction of errors committed during a phase that are detected in later phases.
Shortcomings of the waterfall model are the following: the waterfall model cannot satisfactorily handle the different types of risk that a real-life software project is subjected to; and, to achieve better efficiency and higher productivity, most real-life projects cannot follow the rigid phase sequence imposed by the waterfall model.

CHAPTER - 3 SOFTWARE REQUIREMENTS ANALYSIS AND DESIGN

Software design: - Software design deals with transforming the customer requirements, as described in the SRS document, into a form that is implementable using a programming language. The following items must be designed during the design phase: the different modules required to implement the design solution; the control relationships among the identified modules; the interfaces among the different modules (the interface among different modules identifies the exact data items exchanged among the modules); the data structures of the individual modules; and the algorithms required to implement the different modules.


Thus the objective of the design phase is to take the SRS document as the input and produce the above-mentioned items before completion of the design phase. We can broadly classify the design activities into two important parts: 1. Preliminary (high-level) design 2. Detailed (low-level) design.
High-level design: - By high-level design, we mean identification of the different modules, the control relationships among them, and the definitions of the interfaces among them. The outcome of high-level design is called the program structure or software architecture. Many different types of notations have been used to represent a high-level design. A popular way is to use a tree-like diagram called the structure chart to represent the control hierarchy in a high-level design. Other notations such as Jackson diagrams or Warnier-Orr diagrams can also be used.
Detailed design: - During detailed design, the data structures and the algorithms of the different modules are designed. The outcome of the detailed design stage is usually known as the module specification document.
Attributes of a software design:
Correctness: - A good design should correctly implement all the functionalities of the system.
Understandability: - A good design should be easily understandable. In order to facilitate understandability, the design should have the following features: it should use consistent and meaningful names for the various design components, and the design should be modular. The term modularity means that it should use a cleanly decomposed set of modules.
Efficiency: - The design should lead to an efficient implementation.
Maintainability: - The design should be easily amenable to change.
Modularization: - This is a fundamental principle governing any good design. Decomposition of a problem into modules facilitates the design by taking advantage of the divide and conquer principle. If different modules are independent of each other, then each module can be understood separately; this reduces the complexity of the design solution greatly. To understand why this is so, remember that it may be very difficult to break sticks which have been tied together, but very easy to break the sticks individually. There are mainly two modularization criteria: 1. Clean decomposition 2. Neat arrangement.
Software design approaches: There are fundamentally two different approaches to software design: 1. Function-oriented design and 2. Object-oriented design.
*Features of function-oriented design: The following are the salient features of a typical function-oriented design approach: A system is viewed as something that performs a set of functions. Starting at this high-level view of the system, each function is successively refined into more detailed functions. The system state is centralized and shared among different functions, e.g., data such as member records are available for reference and update to several functions.
*Features of object-oriented design: In the object-oriented design approach, the system is viewed as a collection of objects (i.e., entities). The system state is decentralized among the objects, and each object manages its own state information.
*Difference between the function-oriented and object-oriented design approaches: The following are the important differences between function-oriented and object-oriented design: Unlike function-oriented design methods, in OOD the basic abstractions are not real-world functions such as sort, display, track, etc., but real-world entities such as employee, picture, machine, radar system, etc.
In OOD, state information is not represented in a centralized shared memory but is distributed among the objects of the system. For example, while developing an employee payroll system, the employee data such as the names of the employees, their code numbers, basic salaries, etc., are usually implemented as global data in a traditional programming system, whereas in an object-oriented system these data are distributed among the different employee objects of the system.


Function-oriented techniques, such as SA/SD, group the functions together if, as a group, they constitute a higher-level function.
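To make this contrast concrete, the following small Python sketch (a made-up illustration, not taken from any particular text) shows the same employee data handled in the two styles: the function-oriented version keeps the records in shared global data operated on by free functions, while the object-oriented version distributes the state among Employee objects.

# Function-oriented style: state is centralized in shared (global) data.
employee_records = []  # global data referenced and updated by several functions

def add_employee(name, code, basic_salary):
    employee_records.append({"name": name, "code": code, "basic": basic_salary})

def compute_gross(code):
    for record in employee_records:
        if record["code"] == code:
            return record["basic"] * 1.5   # illustrative allowance factor (assumed)
    raise KeyError(code)

# Object-oriented style: each Employee object manages its own state information.
class Employee:
    def __init__(self, name, code, basic_salary):
        self.name = name
        self.code = code
        self.basic = basic_salary

    def compute_gross(self):
        return self.basic * 1.5            # same illustrative factor

add_employee("Asha", 101, 30000)
print(compute_gross(101))                              # function-oriented call
print(Employee("Asha", 101, 30000).compute_gross())    # object-oriented call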

CHAPTER - 4 PROGRAMMING TOOLS AND STANDARDS

Coding: - Coding is undertaken once the design phase is complete and the design documents have been successfully reviewed. In the coding phase, every module identified and specified in the design document is independently coded. The input to the coding phase is the design document. During the coding phase, the different modules identified in the design document are coded according to their respective module specifications. Therefore, we can say that the objective of the coding phase is to transform the design of a system into high-level language code.
Coding standards: - Normally, good software development organizations require their programmers to adhere to some well-defined and standard style of coding, called a coding standard. A coding standard gives a uniform appearance to the code written by different engineers, provides sound understanding of the code, and encourages good programming practices. A coding standard lists several rules to be followed during coding, such as the way the code should be laid out, error return conventions, and so forth. The following are some representative coding standards:
Rules for limiting the use of globals: - These rules list what types of data can be declared global and what cannot.
Contents of the header preceding the code of each module: - The information contained in the headers of different modules should be standard for an organization. The following are some standard header data: name of the module; date on which the module was created; author's name; modification history; synopsis of the module; different functions supported; global variables accessed/modified by the module (an illustrative header sketch follows these guidelines).
Naming conventions for global variables, local variables, and constant identifiers: - A possible naming convention is that global variable names start with a capital letter, local variable names are made of small letters, and constant names are always in capital letters.
Error return conventions and exception handling mechanisms: - The way error conditions are reported by different functions in a program and the way common exception conditions are handled should be standard within an organization.
Coding guidelines: - Coding guidelines provide only general suggestions regarding the coding style to be followed and leave the actual implementation to the programmer. The following are some representative coding guidelines recommended by many software development organizations:
Do not use a coding style that is too clever or too difficult to understand; code should be easy to understand.
Avoid obscure side effects: the side effects of a function call include modification of parameters passed by reference, modification of global variables, and I/O operations.
Do not use an identifier for multiple purposes. Some of the problems caused by the use of a variable for multiple purposes are as follows: a. Each variable should be given a descriptive name indicating its purpose; this is not possible if an identifier is used for multiple purposes, and such use can lead to confusion and make it difficult for somebody trying to read and understand the code. b. Use of a variable for multiple purposes usually makes future enhancements more difficult.
The code should be well-documented.
The length of any function should not exceed about 10 source lines. A function that is very lengthy is usually very difficult to understand, as it probably carries out many different functions.
Do not use goto statements. Use of goto statements makes a program unstructured and very difficult to understand.
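As an illustration of the header standard listed above, here is a hypothetical Python module header following those fields; the exact fields and layout are organization-specific, so treat this only as a sketch.

"""
Module name     : payroll_report
Creation date   : 2024-01-15 (illustrative)
Author          : <author name>
Modification history:
    2024-02-01  corrected rounding of deductions
Synopsis        : generates the monthly payroll report.
Functions supported:
    build_report(), print_report()
Global variables accessed/modified:
    REPORT_HEADER (read only)
"""

REPORT_HEADER = "Monthly Payroll Report"

def build_report(rows):
    """Return the report as a list of formatted lines."""
    return [REPORT_HEADER] + [f"{name}: {amount}" for name, amount in rows]

def print_report(rows):
    """Print the formatted report to standard output."""
    for line in build_report(rows):
        print(line)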


*Cohesion: - Cohesion is a measure of the functional strength of a module. A module having high cohesion is said to be functionally independent of the other modules. By the term functional independence, we mean that a cohesive module performs a single task or function.
Classification of cohesion: The different classes of cohesion that a module may possess are depicted in the figure.
Logical cohesion: - A module is said to be logically cohesive if all elements of the module perform similar operations, e.g., error handling, data input, data output, etc.
Temporal cohesion: - A module is said to exhibit temporal cohesion when it contains functions that are related by the fact that all the functions must be executed in the same time span.
Procedural cohesion: - A module is said to possess procedural cohesion if the set of functions of the module are all part of a procedure in which a certain sequence of steps has to be carried out.
Communicational cohesion: - A module is said to have communicational cohesion if all the functions of the module refer to or update the same data structure, e.g., the set of functions defined on an array or a stack.
Sequential cohesion: - A module is said to possess sequential cohesion if the elements of the module form parts of a sequence, where the output from one element of the sequence is input to the next.
Functional cohesion: - Functional cohesion is said to exist if the different elements of a module cooperate to achieve a single function. For example, a module containing all the functions required to manage employee payroll displays functional cohesion.
#Coupling: Coupling is a measure of the degree of interdependence or interaction between two modules. A module having low coupling with other modules is said to be functionally independent of them.
Classification of coupling: Mainly five types of coupling can occur between any two modules:
Data coupling: - Two modules are data coupled if they communicate using an elementary data item that is passed as a parameter between the two, e.g., an integer, a float, a character, etc. This data item should be problem related and not used for control purposes.
Stamp coupling: - Two modules are stamp coupled if they communicate using a composite data item such as a record in PASCAL or a structure in C.
Control coupling: - Control coupling exists between two modules if data from one module is used to direct the order of instruction execution in another.
Common coupling: - Two modules are common coupled if they share some global data items.
Content coupling: - Content coupling exists between two modules if their code is shared, e.g., a branch from one module into another module.
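The following short Python sketch (hypothetical, for illustration only) shows a functionally cohesive payroll module whose functions are data coupled, exchanging only the elementary values they need, together with a common-coupled variant whose functions communicate through a shared global item.

# Functional cohesion: every function here contributes to one task - computing pay.
# Data coupling: the functions exchange only elementary items (floats) as parameters.
def gross_pay(basic, allowance_rate):
    return basic * (1 + allowance_rate)

def tax(gross, tax_rate):
    return gross * tax_rate

def net_pay(basic, allowance_rate, tax_rate):
    gross = gross_pay(basic, allowance_rate)
    return gross - tax(gross, tax_rate)

# Common coupling (best avoided): both functions read/write the same global item,
# so a change to CURRENT_BASIC silently affects every function that uses it.
CURRENT_BASIC = 30000.0

def gross_pay_common(allowance_rate):
    return CURRENT_BASIC * (1 + allowance_rate)

def raise_basic(increment):
    global CURRENT_BASIC
    CURRENT_BASIC += increment

print(net_pay(30000.0, 0.2, 0.1))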
Top-down decomposition: - The function-oriented design approach is still very popular and is currently used in many software organizations. During the design process, high-level functions are successively decomposed into more detailed functions, and finally the different identified functions are mapped to modules. The term top-down decomposition is often used to denote such successive decomposition of a set of high-level functions into more detailed functions.
*Structured analysis and structured design:
Structured analysis: - The aim of the structured analysis activity is to transform a textual problem description into a graphic model. Structured analysis is used to carry out the top-down decomposition of the set of high-level functions depicted in the problem description and to represent them graphically. During structured analysis, functional decomposition of the system is achieved. The purpose of structured analysis is to capture the detailed structure of the system as perceived by the user. During structured analysis, the major processing tasks of the system are analyzed, and the data flows among these processing tasks are represented graphically.


The structured analysis technique is based on the following essential underlying principles: the top-down decomposition approach; the divide and conquer principle; and graphical representation of the analysis results using data flow diagrams (DFDs).
Software documentation: - When we develop a software product, we not only develop the executable files and the source code but also various kinds of documents such as the users' manual, the software requirements specification (SRS) document, the design document, test documents, installation manual, etc., as part of the software engineering process. Good documents are useful and serve the following purposes: good documents enhance understandability and maintainability of a software product; good documents help the users in effectively exploiting the system; good documents help in effectively overcoming the manpower turnover problem; good documents help the manager in effectively tracking the progress of the project. Different types of software documents can be broadly classified into the following: a. Internal documentation b. External documentation.
Internal documentation: - Internal documentation is provided through appropriate module headers and comments embedded in the source code. Internal documentation is also provided through the use of meaningful variable names, module and function headers, code indentation, code structuring, use of enumerated types and constant identifiers, use of user-defined data types, etc.
External documentation: - External documentation is provided through various types of supporting documents such as the users' manual, the software requirements specification document, the design document, the test documents, etc.
1. Coding standards and guidelines: - The principles and concepts that guide the coding task are closely aligned with programming style, programming languages, and programming methods. However, there are a number of fundamental principles that can be stated:
Preparation principles: - Before you write one line of code, be sure you: 1. Understand the problem you're trying to solve. 2. Understand basic design principles and concepts. 3. Pick a programming language that meets the needs of the software to be built and the environment in which it will operate. 4. Select a programming environment that provides tools that will make your work easier. 5. Create a set of unit tests that will be applied once the component you code is completed.
Coding principles: - As you begin writing code, be sure you: 1. Constrain your algorithms by following structured programming practice. 2. Select data structures that will meet the needs of the design. 3. Understand the software architecture and create interfaces that are consistent with it. 4. Keep conditional logic as simple as possible. 5. Create nested loops in a way that makes them easily testable. 6. Select meaningful variable names and follow other local coding standards. 7. Write code that is self-documenting. 8. Create a visual layout (e.g., indentation and blank lines) that aids understanding.
Validation principles: - After you've completed your first coding pass, be sure you: 1. Conduct a code walkthrough when appropriate. 2. Perform unit tests and correct the errors you've uncovered. 3. Refactor the code.
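A small, hypothetical Python example of several of the principles listed above: meaningful names, a named constant instead of a magic number, simple conditional logic, and a unit test created for the component.

import unittest

MINIMUM_PASS_MARK = 40  # named constant instead of a "magic number"

def grade(mark):
    """Return 'fail', 'pass' or 'distinction' for a mark out of 100."""
    if not 0 <= mark <= 100:
        raise ValueError("mark must be between 0 and 100")
    if mark < MINIMUM_PASS_MARK:
        return "fail"
    if mark < 75:
        return "pass"
    return "distinction"

class GradeTest(unittest.TestCase):
    def test_representative_marks(self):
        self.assertEqual(grade(39), "fail")
        self.assertEqual(grade(40), "pass")
        self.assertEqual(grade(75), "distinction")

if __name__ == "__main__":
    unittest.main()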

CHAPTER - 5 TESTING AND MAINTENANCE

Verification: - Verification is the process of determining whether the output of one phase of software development conforms to that of its previous phase.
Validation: - Validation is the process of determining whether a fully developed system conforms to its requirements specification. Verification is concerned with phase containment of errors, whereas the aim of validation is that the final product be error free.


Testing: - The aim of the testing process is to identify all the defects existing in a software product. We can safely conclude that testing provides a practical way of reducing defects in a system and increasing the users' confidence in a developed system.
Testing in the large vs. testing in the small: - Software products are normally tested first at the individual component (or unit) level; this is referred to as testing in the small. After testing all the components individually, the components are slowly integrated and tested at each level of integration. Finally, the fully integrated system is tested; integration and system testing are known as testing in the large. Thus a software product goes through three levels of testing: a. Unit testing b. Integration testing c. System testing.
Unit testing: - Unit testing is undertaken when a module has been coded and successfully reviewed.
Topic: 3: Black-box and white-box testing:
*Black-box testing: - In black-box testing, test cases are designed from an examination of the input/output values only, and no knowledge of design or code is required. The following are the two main approaches to designing black-box test cases: equivalence class partitioning and boundary value analysis.
*Equivalence class partitioning: In this approach, the domain of input values to a program is partitioned into a set of equivalence classes. This partitioning is done such that the behavior of the program is similar for every input data value belonging to the same equivalence class. The main idea behind defining the equivalence classes is that testing the code with any one value belonging to an equivalence class is as good as testing the software with any other value belonging to that equivalence class. The following are some general guidelines for designing the equivalence classes: I. If the input data values to a system can be specified by a range of values, then one valid and two invalid equivalence classes should be defined. II. If the input data assumes values from a set of discrete members of some domain, then one equivalence class for valid input values and another for invalid input values should be defined.
*Boundary value analysis: - Some types of programming errors frequently occur at the boundaries of different equivalence classes of inputs. The reason behind such errors might purely be due to psychological factors: programmers often fail to see the special processing required by the input values that lie at the boundaries of the different equivalence classes. For example, programmers may improperly use < instead of <=, or conversely <= instead of <. Boundary value analysis leads to the selection of test cases at the boundaries of the different equivalence classes.
*White-box testing: White-box testing requires knowledge of the internals of the software. There are several white-box testing strategies: statement coverage, branch coverage, condition coverage, path coverage, data-flow-based testing, and mutation testing. One white-box testing strategy is said to be stronger than another if it detects all the types of errors detected by the other strategy and, in addition, detects some more types of errors.
Statement coverage: - The statement coverage strategy aims to design test cases so that every statement in a program is executed at least once.
Branch coverage: - In the branch-coverage-based testing strategy, test cases are designed to make each branch condition assume true and false values in turn. It is also known as edge testing.
Condition coverage: - In this structural testing strategy, test cases are designed to make each component of a composite conditional expression assume both true and false values.
Path coverage: - The path-coverage-based testing strategy requires us to design test cases such that all linearly independent paths in the program are executed at least once.
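Before moving on to debugging, the black-box techniques described earlier (equivalence class partitioning and boundary value analysis) can be made concrete with a small sketch. Assume a hypothetical routine that accepts an order quantity only in the range 1 to 5000: the range gives one valid and two invalid equivalence classes, and boundary value analysis picks test values at and around the edges of those classes.

def accept_quantity(quantity):
    """Hypothetical routine: valid order quantities lie in the range 1..5000."""
    return 1 <= quantity <= 5000

# Equivalence classes derived from the specification:
#   valid class   : 1 <= quantity <= 5000   -> representative value 2500
#   invalid class : quantity < 1            -> representative value -7
#   invalid class : quantity > 5000         -> representative value 9000
representatives = {2500: True, -7: False, 9000: False}

# Boundary value analysis: test values at (and just beside) the class boundaries.
boundary_cases = {0: False, 1: True, 2: True, 4999: True, 5000: True, 5001: False}

for value, expected in {**representatives, **boundary_cases}.items():
    assert accept_quantity(value) == expected, value
print("all black-box test cases passed")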


Topic: 05.02 *Debugging strategies: - Once errors are identified, it is necessary to first locate the program statements responsible for the errors and then to fix them. This process is known as debugging.
*Debugging approaches: - The following are some of the approaches popularly adopted by programmers for debugging:
i. Brute force method: - This is the most common method of debugging, but it is the least efficient. In this approach, the program is loaded with print statements to print the intermediate values, with the hope that some of the printed values will help to identify the statement in error.
ii. Backtracking: - This is also a fairly common approach. In this approach, beginning from the statement at which an error symptom has been observed, the source code is traced backwards until the error is discovered.
iii. Cause elimination method: - In this approach, a list of causes which could possibly have contributed to the error symptom is developed, and tests are conducted to eliminate each cause.
iv. Program slicing: - This technique is similar to backtracking. However, the search space is reduced by defining slices. A slice of a program for a particular variable at a particular statement is the set of source lines preceding this statement that can influence the value of that variable.
*Debugging guidelines: Debugging requires a thorough understanding of the program design. Trying to debug based on a partial understanding of the system design and implementation may require an inordinate amount of effort. Debugging may sometimes even require a full redesign of the system. One must be aware of the possibility that an error correction may introduce new errors; therefore, after every round of error fixing, regression testing must be carried out.
*Stress testing: Stress testing is also known as endurance testing. Stress tests are black-box tests which are designed to impose a range of abnormal and even illegal input conditions so as to stress the capabilities of the software. Input data volume, input data rate, processing time, utilization of memory, etc. are tested beyond the designed capacity. For example, suppose an operating system is designed to support 15 multiprogrammed jobs; the system is stressed by attempting to run more than 15 jobs simultaneously. A real-time system might be tested to determine the effect of the simultaneous arrival of several high-priority interrupts. Stress testing usually involves an element of time or size, such as the number of records transferred per unit time, input data size, etc.; therefore stress testing may not be applicable to many types of systems.
Error seeding: - Error seeding, as the name implies, seeds the code with some known errors. In other words, some artificial errors are introduced into the program. The number of these seeded errors detected in the course of the standard testing procedure is determined. These values, in conjunction with the number of unseeded errors detected, can be used to predict: the number of errors remaining in the product, and the effectiveness of the testing strategy. Let N be the total number of defects in the system and let n of these defects be found by testing. Let S be the total number of seeded defects, and let s of these defects be found during testing. Then n/N = s/S, so N = S x n/s, and the remaining defects = N - n = n x ((S - s)/s). Error seeding works satisfactorily only if the kind of seeded errors matches closely with the kind of defects that actually exist in the software. To some extent, the distribution of the different categories of errors that remain can be estimated from data on similar past projects. Due to the shortcoming that the types of seeded errors should match closely with the types of errors actually existing in the code, error seeding is useful only to a moderate extent.
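The error-seeding relation above can be turned into a small calculation. The Python sketch below, using made-up counts, estimates the total number of defects and the number still remaining from the quantities defined in the text.

def estimate_defects(seeded_total, seeded_found, unseeded_found):
    """Apply n/N = s/S: return (estimated total N, estimated remaining N - n)."""
    if seeded_found == 0:
        raise ValueError("no seeded defects were found - the estimate is undefined")
    total = seeded_total * unseeded_found / seeded_found                       # N = S * n / s
    remaining = unseeded_found * (seeded_total - seeded_found) / seeded_found  # n * (S - s) / s
    return total, remaining

# Illustrative numbers only: S = 20 seeded, s = 16 of them found, n = 48 unseeded found.
total_estimate, remaining_estimate = estimate_defects(20, 16, 48)
print(total_estimate, remaining_estimate)   # 60.0 defects estimated in all, 12.0 still latent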
*Integration testing: - The primary objective of integration testing is to test the module interfaces in order to ensure that there are no errors in parameter passing when one module invokes another module. During integration testing, different modules of a system are integrated in a planned manner using an integration plan. An important factor that guides the integration plan is the module dependency graph.
Types of integration testing:
a. Big-bang integration testing: - This is the simplest integration testing approach, where all the modules making up a system are integrated in a single step. In simple words, all the modules of the system are simply put together and tested.
b. Top-down integration testing: - Top-down integration testing starts with the main routine and one or two subordinate routines in the system. The top-down integration testing approach requires the use of program stubs to simulate the effect of lower-level routines that are called by the routines under test.


A disadvantage of the top-down integration testing approach is that, in the absence of the lower-level routines, it may often become difficult to exercise the top-level routines.
c. Bottom-up integration testing: - In bottom-up testing, each subsystem is tested separately and then the full system is tested. The primary purpose of testing each subsystem is to test the interfaces among the various modules making up the subsystem. A principal advantage of bottom-up testing is that several disjoint subsystems can be tested simultaneously. A disadvantage of bottom-up testing is the complexity that occurs when the system is made up of a large number of small subsystems.
d. Mixed integration testing: - Mixed (also called sandwiched) integration testing follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested; in the bottom-up approach, testing can start only after the bottom-level modules are ready. The mixed integration testing approach overcomes this shortcoming, since testing can start as and when modules become available. Therefore, this is one of the most commonly used integration testing approaches.
*Phased vs. incremental integration testing: - A comparison of these two strategies is as follows: In incremental integration testing, only one new module is added each time to the partial system under test. In phased integration testing, a group of related modules is added each time to the partial system under test. Phased integration requires a smaller number of integration steps compared to the incremental integration approach.
*Program analysis tools: - A program analysis tool usually means an automated tool that takes the source code or the executable code of a program as input and produces reports regarding several important characteristics of the program. We can classify these tools into two broad categories: static analysis tools and dynamic analysis tools.
Static analysis tools: - Static analysis tools assess and compute the various characteristics of a software product without executing it. Typically, static analysis tools analyze some structural representation of a program to arrive at certain analytical conclusions. A major practical limitation of static analysis tools lies in handling the dynamic evaluation of memory references at run-time. Static analysis tools often summarize the results of the analysis of every function in a polar chart known as a Kiviat chart.
Dynamic analysis tools: - Dynamic analysis tools require the program to be executed and its actual behavior recorded. A dynamic analyzer usually instruments the code. The output of a dynamic analysis tool can be stored and printed easily, and it provides evidence that thorough testing has been done, since it indicates the extent of testing performed in the white-box mode.
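As a toy illustration of what a static analysis tool does, the hypothetical Python sketch below analyzes a structural representation of a program (its abstract syntax tree) without executing it and flags functions that exceed the 10-source-line guideline mentioned earlier; real static analysis tools compute many more characteristics than this.

import ast

MAX_FUNCTION_LINES = 10  # the coding guideline used earlier in these notes

def long_functions(source_code):
    """Return (name, length) pairs for functions longer than the guideline."""
    tree = ast.parse(source_code)          # structural representation; nothing is executed
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                findings.append((node.name, length))
    return findings

sample = """
def tiny():
    return 1

def sprawling():
""" + "\n".join(f"    x{i} = {i}" for i in range(12)) + "\n    return 0\n"

print(long_functions(sample))   # [('sprawling', 14)]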
Topic: 05.04 *Software maintenance: - Software maintenance denotes any changes made to a software product after it has been delivered to the customer. Hardware products need maintenance due to the wear and tear caused by use; software products, on the other hand, do not need maintenance on this count, but need maintenance to correct errors, enhance features, port to new platforms, etc.
*Characteristics of software maintenance: - Software maintenance is becoming an important activity of a large number of organizations. This is no surprise, given the rate of hardware obsolescence, the immortality of a software product per se, and the demand of the user community to see existing software products run on newer platforms, run in newer environments, and/or with enhanced features. Software maintenance is also necessary when the hardware platform changes and the software product performs some low-level functions.
*Types of software maintenance: There are mainly three types of software maintenance: a. Corrective b. Adaptive c. Perfective.
Corrective maintenance: - Correcting errors that were not discovered during the product development phase is called corrective maintenance.


Perfective maintenance: - Improving the implementation of the system and enhancing the functionalities of the system according to the customer's requirements is called perfective maintenance.
Adaptive maintenance: - A software product might need maintenance when the customers need the product to run on new platforms, on new operating systems, or when they need the product to be interfaced with new hardware or software. This type of maintenance is called adaptive maintenance.

*Software maintenance models:
Topic: 05.04 *Software configuration management: - Software configuration management deals with effectively tracking and controlling the configuration of a software product during its life cycle.
*Necessity of software configuration management: - The following are some of the important problems that appear if configuration management is not used:
i. Inconsistency problem when the objects are replicated: - Consider a scenario where every s/w engineer has a personal copy of an object. As each engineer makes changes to his local copy, he is expected to intimate these to the other engineers, so that changes to the interfaces are uniformly carried out. However, an engineer may make changes to the interfaces in his own copy and forget to intimate his teammates about the changes. This makes the different copies of the object inconsistent. Finally, when the product is integrated, it does not work.
ii. Problems associated with concurrent access: - Suppose there is a single copy of a program module and several engineers are working on it. Two engineers may simultaneously carry out changes to different portions of the same module and, while saving, overwrite each other's work. Similar problems may occur for any other deliverable object.
iii. Providing a stable development environment: - When a project is underway, the team members need a stable environment to make progress.
iv. System accounting and maintaining status information: - System accounting keeps track of who made a particular change and when the change was made.
v. Handling variants: - The existence of variants of a s/w product causes some peculiar problems. Suppose we have several variants of the same module and find a bug in one of them. Then it has to be fixed in all versions and revisions. To do this efficiently, it should not be necessary to fix it in each and every version and revision of the s/w separately.
Configuration management activities: - Configuration management is carried out through two principal activities: configuration identification and configuration control.
*Configuration identification: - Configuration identification involves deciding which parts of the system should be kept track of. The project manager normally classifies the objects associated with a s/w development effort into three main categories: controlled, precontrolled, and uncontrolled. Controlled objects are those which are already put under configuration control. Precontrolled objects are not yet under configuration control, but will eventually be under configuration control.


Uncontrolled objects are not and will not be subjected to configuration control.
*Configuration control: - Configuration control is the process of managing changes to controlled objects. Configuration control is the part of a configuration management system that most directly affects the day-to-day operations of developers. The configuration control system prevents unauthorized changes to any controlled object.
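The check-out/check-in discipline that a configuration control system enforces can be sketched in a few lines. This is only a hypothetical illustration of the idea, not a description of any real tool: a controlled object may be modified only by the engineer who has currently reserved it.

class ConfigurationControl:
    """Toy sketch: controlled objects must be checked out before modification."""

    def __init__(self, controlled_objects):
        self.versions = {name: [] for name in controlled_objects}  # revision history
        self.checked_out_by = {}                                   # object name -> engineer

    def check_out(self, obj, engineer):
        if obj in self.checked_out_by:
            raise PermissionError(f"{obj} already reserved by {self.checked_out_by[obj]}")
        self.checked_out_by[obj] = engineer

    def check_in(self, obj, engineer, new_contents):
        if self.checked_out_by.get(obj) != engineer:
            raise PermissionError("unauthorized change rejected")
        self.versions[obj].append(new_contents)   # record a new revision
        del self.checked_out_by[obj]

cc = ConfigurationControl(["pay_module.c"])
cc.check_out("pay_module.c", "engineer_a")
cc.check_in("pay_module.c", "engineer_a", "revision 1 contents")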

CHAPTER 6 SOFTWARE PROJECT MANAGEMENT

*Software project management: - The main goal of s/w project management is to enable a group of s/w engineers to work efficiently towards the successful completion of a project.
*Project planning: - Project planning is undertaken and completed even before any development activity starts. Once a project is found to be feasible, software project managers undertake project planning. Project planning consists of the following essential activities: estimating some basic attributes of the project - cost (how much will it cost to develop the project?), duration (how long will it take to complete the development?), and effort (how much effort will be required?); scheduling manpower and other resources; staff organization and staffing plans; risk identification, analysis and abatement planning; and miscellaneous plans such as the quality assurance plan, configuration management plan, etc. The figure shows the order in which these important planning activities may be undertaken. Size estimation is the first activity; size is also the most fundamental parameter based on which all other planning activities are carried out. Other estimates, such as the estimation of effort, cost, resources, and project duration, are also very important components of project planning.
*Project size estimation: Project size: - The size of a problem is obviously not the number of bytes that the source code occupies, nor is it the byte size of the executable code. The project size is a measure of the problem complexity in terms of the effort and time required to develop the product.
Metrics for project size estimation: - Currently, two metrics are widely used to estimate size: a. Lines of code (LOC) b. Function point (FP).
*Lines of code: - LOC is the simplest among all metrics available to estimate project size. This metric is very popular, being the simplest to use. Using this metric, the project size is estimated by counting the number of source instructions; the lines used for commenting the code and the header lines are ignored. In order to estimate the LOC count at the beginning of a project, project managers usually divide the problem into modules, and each module into submodules, and so on, until the sizes of the different leaf-level modules can be approximately predicted. However, LOC as a measure of problem size has several shortcomings:
LOC gives a numerical value of problem size that can vary widely with individual coding style - different programmers lay out their code in different ways.
A good problem size measure should consider the overall complexity of the problem and the effort needed to solve it. That is, it should consider the total effort needed to specify, design, code, test, etc., and not just the coding effort. LOC, however, focuses on the coding activity alone; it merely computes the number of source lines in the final program.
LOC correlates poorly with the quality and efficiency of the code. A larger code size does not necessarily imply better quality or higher efficiency.
The LOC metric penalizes the use of higher-level programming languages, code reuse, etc. If a programmer consciously uses several library routines, then the LOC count will be lower.
The LOC metric measures the lexical complexity of a program and does not address the more important but subtle issues of logical or structural complexity.
It is difficult to accurately estimate the LOC of the final product from the problem specification.
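A rough LOC counter matching the description above (count source instructions, ignore blank lines, comment lines and header lines) might look like the following Python sketch; what counts as a comment is language-specific, so the prefixes used here are only an assumption for illustration.

def count_loc(source_text, comment_prefixes=("#", "//", "/*", "*")):
    """Count source lines, ignoring blank lines and whole-line comments."""
    loc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                         # blank line
        if stripped.startswith(comment_prefixes):
            continue                         # whole-line comment / header line
        loc += 1
    return loc

sample = """
# header: payroll module
def net_pay(basic, tax_rate):
    # deduct tax from the basic pay
    return basic * (1 - tax_rate)
"""
print(count_loc(sample))   # 2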


*Function point metric: - The function point metric was proposed by Albrecht [1983]. This metric overcomes many of the shortcomings of the LOC metric. One of the important advantages of using the function point metric is that it can be used to easily estimate the size of a software product directly from the problem specification, based on the number of different functions or features it supports. Besides using the number of input and output data values, the function point metric computes the size of a software product using three other characteristics of the product, as shown in the following expression:
UFP = (Number of inputs) x 4 + (Number of outputs) x 5 + (Number of inquiries) x 4 + (Number of files) x 10 + (Number of interfaces) x 10
Once the unadjusted function point (UFP) is computed, the technical complexity factor (TCF) is computed next.
*Feature point metric: A major shortcoming of the function point measure is that it does not take into account the algorithmic complexity of the software. That is, the function point metric implicitly assumes that the effort required to design and develop any two functionalities of the system is the same. But we know that this is normally not true: the effort required to develop any two functionalities may vary widely. To overcome this problem, an extension of the function point metric called the feature point metric has been proposed. The feature point metric incorporates algorithm complexity as an extra parameter.
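Plugging counts into the UFP expression is straightforward. The sketch below uses the same weights as the expression given above (inputs 4, outputs 5, inquiries 4, files 10, interfaces 10) with made-up counts.

UFP_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 10}

def unadjusted_function_points(counts):
    """Compute UFP from the five counts using the weights given in the text."""
    return sum(UFP_WEIGHTS[kind] * counts.get(kind, 0) for kind in UFP_WEIGHTS)

# Illustrative counts only.
counts = {"inputs": 12, "outputs": 8, "inquiries": 4, "files": 3, "interfaces": 2}
print(unadjusted_function_points(counts))   # 12*4 + 8*5 + 4*4 + 3*10 + 2*10 = 154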
* Project estimation techniques: - The estimation of various project parameters is a basic project planning activity. The important project parameters that are estimated include project size, the effort required to develop the software, project duration, and cost. These estimates not only help in quoting the project cost to the customer, but also prove useful in resource planning and scheduling. There are three broad categories of estimation techniques: empirical estimation techniques, heuristic techniques, and analytical estimation techniques.
1. Empirical estimation techniques: - Empirical estimation techniques are based on making an educated guess of the project parameters. While using this technique, prior experience with the development of similar products is helpful. Although empirical estimation techniques are based on common sense, the different activities involved in estimation have been formalized. Two popular empirical techniques are expert judgment and Delphi estimation.
a. Expert judgment technique: - Expert judgment is one of the most widely used estimation techniques. In this approach, an expert makes an educated guess of the problem size after analyzing the problem thoroughly.
b. Delphi cost estimation: - The Delphi cost estimation approach tries to overcome some of the shortcomings of the expert judgment approach. Delphi estimation is carried out by a team comprising a group of experts and a coordinator. In this approach, the coordinator provides each estimator with a copy of the s/w requirements specification (SRS) document and a form for recording his cost estimate.
2. Heuristic techniques: - Heuristic techniques assume that the relationships among the different project parameters can be modeled using suitable mathematical expressions. Once the basic parameters are known, the other parameters can be easily determined by substituting the values of the basic parameters into the mathematical expressions. Different heuristic estimation models can be divided into the following two classes: single variable models and multi variable models.
A single variable model provides a means to estimate the desired characteristics of the s/w product, such as effort, from a single parameter such as its size. Example: the basic COCOMO model. A multi variable estimation model is expected to give more accurate estimates compared to single variable models. Example: the intermediate COCOMO model.
3. Analytical estimation techniques: - Analytical estimation techniques derive the required results starting with certain basic assumptions regarding the project. Thus, unlike empirical and heuristic techniques, analytical techniques do have a scientific basis. Example: Halstead's software science.
*COCOMO: - A heuristic estimation technique.


COCOMO (Constructive Cost Model) was proposed by Boehm (1981). Boehm postulated that any s/w development project can be classified into one of the following three categories based on the development complexity: organic, semidetached, and embedded.
Organic: - A development project is said to be of organic type if the project deals with developing a well understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects.
Semidetached: - A development project can be considered to be of semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems.
Embedded: - A development project is considered to be of embedded type if the s/w being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist.
For the three product categories, Boehm provides different sets of expressions to predict the effort and development time from the size estimate given in KLOC (kilo lines of source code). According to Boehm, s/w cost estimation should be done through three stages: basic COCOMO, intermediate COCOMO, and complete COCOMO.
*Basic COCOMO model: - The basic COCOMO model gives an approximate estimate of the project parameters. The basic COCOMO estimation model is given by the following expressions:
Effort = a1 x (KLOC)^a2 PM
Tdev = b1 x (Effort)^b2 months
where KLOC is the estimated size of the s/w product expressed in kilo lines of code; a1, a2, b1, b2 are constants for each category of s/w product; Tdev is the estimated time to develop the s/w, expressed in months; and Effort is the total effort required to develop the s/w product, expressed in person-months (PM).
*Intermediate COCOMO: - The basic COCOMO model assumes that effort and development time are functions of product size alone. However, a host of other project parameters besides the product size affect the effort required to develop the product as well as the development time. Therefore, in order to obtain an accurate estimate of the effort and project duration, the effect of all relevant parameters must be taken into account. The intermediate COCOMO model recognizes this fact and refines the initial estimate obtained through the basic COCOMO expressions by using a set of 15 cost drivers based on various attributes of software development.
*Complete COCOMO: - A major shortcoming of both the basic and the intermediate COCOMO models is that they consider a software product as a single homogeneous entity. However, most large systems are made up of several smaller subsystems, and these subsystems may have widely different characteristics. For example, some subsystems may be considered organic type, some semidetached, and some embedded. Not only may the inherent development complexity of the subsystems differ, but for some subsystems the reliability requirements may be high, for some the development team might have no previous experience of similar development, and so on. The complete COCOMO model considers these differences in the characteristics of the subsystems and estimates the effort and development time as the sum of the estimates for the individual subsystems.
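The basic COCOMO expressions above can be evaluated directly. The constants used below are Boehm's commonly quoted values for the three project categories; they are not given in the text above, so treat them as an assumption of this sketch.

# (a1, a2, b1, b2) for Effort = a1*(KLOC)^a2 PM and Tdev = b1*(Effort)^b2 months.
# Commonly quoted basic COCOMO constants (assumed here; verify against Boehm 1981).
COCOMO_CONSTANTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category="organic"):
    """Return (effort in person-months, development time in months)."""
    a1, a2, b1, b2 = COCOMO_CONSTANTS[category]
    effort = a1 * (kloc ** a2)
    tdev = b1 * (effort ** b2)
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")   # a hypothetical 32 KLOC organic product
print(round(effort, 1), "PM,", round(tdev, 1), "months")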
Gantt Charts: - Gantt charts are mainly used to allocate resources to activities. The resources allocated to activities include staff, hardware and software. Gantt charts are useful for resource planning. A Gantt chart is a special type of bar chart, where each bar represents an activity. The bars are drawn along a timeline. The length of each bar is proportional to the duration of time planned for the corresponding activity. In the Gantt charts used for software project management, each bar consists of a shaded part and a white part. The shaded part of the bar shows the length of time each task is estimated to take. The white part shows the slack time, that is, the latest time by which a task must be finished.
*Gantt chart representation of the MIS problem:
PERT Charts: - PERT (Project Evaluation and Review Technique) charts consist of a network of boxes and arrows. The boxes represent activities and the arrows represent task dependencies. PERT charts represent the statistical variations in the project estimates, assuming a normal distribution. Thus, in a PERT chart, instead of making a single estimate for each task, pessimistic, likely, and optimistic estimates are made. The boxes of PERT charts are usually annotated with the pessimistic, likely, and optimistic estimates for every task.
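The three estimates per task are commonly combined into a single expected duration using the standard PERT weighting. The formula and the sample figures below are an assumed illustration, not values taken from this text.

# Assumed standard PERT weighting of the three task estimates (not stated explicitly in this text).
def pert_expected_time(optimistic, likely, pessimistic):
    return (optimistic + 4 * likely + pessimistic) / 6.0

print(pert_expected_time(optimistic=2, likely=4, pessimistic=10))  # hypothetical task, in days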


*PERT chart representation of the MIS problem:
Difference between Gantt & PERT chart: - The Gantt chart representation of a project schedule is helpful in planning the utilization of resources, while a PERT chart is useful for monitoring the timely progress of activities. Also, it is easier to identify the parallel activities in a project using a PERT chart.
Q. Who is a good software engineer? A good software engineer should possess the following attributes: exposure to systematic techniques, i.e. familiarity with software engineering principles; good technical knowledge of the project areas; good programming abilities; good oral, written, and interpersonal skills; high motivation; sound knowledge of the fundamentals of computer science; intelligence; the ability to work in a team; discipline; and so forth.
Data flow diagram: - In simple words, a DFD is a hierarchical graphical model of a system that shows the different processing activities or functions that the system performs and the data interchange among these functions. The DFD (also known as the bubble chart) is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system. The main reason why the DFD technique is so popular is probably that a DFD is a very simple formalism; it is simple to understand and use.
Structured design: - The aim of structured design is to transform the results of structured analysis into a structure chart. A structure chart represents the software architecture, i.e. the various modules making up the system, the module dependencies, and the parameters that are passed among the different modules. Hence, the structure chart representation can be easily implemented using some programming language.
Flow chart vs. structure chart: A flow chart is a convenient technique to represent the flow of control in a program. A structure chart differs from a flow chart in three principal ways: 1. It is usually difficult to identify the different modules of the software from its flow chart representation. 2. Data interchange among different modules is not represented in a flow chart. 3. The sequential ordering of tasks inherent in a flow chart is suppressed in a structure chart.
Case tool: - A CASE tool is a generic term used to denote any form of automated support for software engineering. In a more restrictive sense, a CASE tool can mean any tool used to automate some activity associated with software development. Many CASE tools are now available. Some of these tools assist in phase-related tasks such as specification, structured analysis, design, coding, testing, etc., and others are related to non-phase activities such as project management and configuration management. CASE tools help develop better quality products more efficiently. The primary objectives of deploying CASE tools are: to increase productivity, and to produce better quality software at lower cost. Benefits of CASE tools / uses of CASE tools: A key benefit arising out of the use of a CASE environment is cost saving through all developmental phases. Use of CASE tools leads to considerable improvements in quality.
CASE tools help produce high quality and consistent documents. CASE tools reduce the drudgery in a software engineer's work; for example, consistency checks that would otherwise have to be performed laboriously by hand can be automated. CASE tools also lead to cost savings in software maintenance efforts.


The CASE tools should support generation of module skeletons or templates in one or more popular programming languages. Advantages of CASE tools: CASE tools improve speed and reduce the time needed to complete a particular software development task, for example development of DFDs, Gantt charts, PERT charts, etc. When procedures are coded with a CASE tool, they have a consistent look. CASE tools can also ensure completeness. Some CASE tools help in generating program code. Use of CASE tools increases the likelihood of meeting the user requirements. CASE tools provide prototyping support.

CHAPTER 7 SOFTWARE QUALITY ASSURANCE

Important qualities of a software product: Correctness: - A program is functionally correct if it behaves according to the specifications of the functions it should provide. A program must operate correctly, or it provides little value to its users. Robustness: - A program is robust if it behaves reasonably even in circumstances that were not anticipated. A program that assumes perfect input and generates run-time errors as soon as the user types an incorrect command would not be robust. Portability: - A software product is said to be portable if it can easily be made to work in different operating system environments, on different machines, with other software products, etc. Usability: - A software product has good usability if different categories of users can easily invoke the functions of the product. Reusability: - A software product has good reusability if different modules of the product can easily be reused to develop new products. Maintainability: - A software product is maintainable if errors can be easily corrected as and when they show up, new functions can be easily added to the product, and the functionalities of the product can be easily modified.
ISO 9000: ISO (International Standards Organization) is a consortium of 63 countries established to formulate and foster standardization. ISO published its 9000 series of standards in 1987. ISO 9000 certification serves as a reference for contracts between independent parties. The ISO standard specifies the guidelines for maintaining a quality system. The ISO standard mainly addresses operational aspects and organizational aspects such as responsibilities, reporting, etc. In a nutshell, ISO 9000 specifies a set of guidelines for repeatable and high quality product development. It is important to realize that the ISO 9000 standard is a set of guidelines for the production process and is not directly concerned with the product itself. ISO 9000 is a series of three standards: ISO 9001, ISO 9002, and ISO 9003. ISO 9001: - This standard applies to organizations engaged in design, development, production, and servicing of goods. This is the standard applicable to most software development organizations. ISO 9002: - This standard applies to those organizations which do not design products but are only involved in production. ISO 9003: - This standard applies to organizations involved only in the installation and testing of products. The ISO 9000 registration process consists of the following stages: Application: - Once an organization decides to go for ISO 9000 certification, it applies to a registrar for registration. Pre-assessment: - During this stage, the registrar makes a rough assessment of the organization. Document review and adequacy audit: - During this stage, the registrar reviews the documents submitted by the organization and makes suggestions for possible improvements.


Compliance audit: - During this stage, the registrar checks whether the suggestions made by it during the review have been complied with by the organization or not. Registration: - The registrar awards the ISO 9000 certificate after successful completion of all the previous phases. Continued surveillance: - The registrar continues to monitor the organization periodically.
SEI CAPABILITY MATURITY MODEL: The SEI Capability Maturity Model (SEI CMM) was proposed by the Software Engineering Institute of Carnegie Mellon University, USA. SEI CMM was originally developed to assist the U.S. Department of Defense in software acquisition. The SEI CMM model helped organizations to improve the quality of the software they developed, and therefore adoption of the SEI CMM model had significant business benefits. In simple words, CMM is a reference model for appraising software process maturity at different levels. It can be used to predict the most likely outcome to be expected from the next project that the organization undertakes. SEI CMM classifies software development organizations into five maturity levels. The different levels of SEI CMM have been designed so that it is easy for an organization to slowly build its quality system, beginning from scratch.
Comparison between ISO 9000 and SEI CMM: ISO 9000 is awarded by an international standards body. Therefore, ISO 9000 certification can be quoted by an organization in official documents, in communications with external parties, and in tender quotations. SEI CMM was developed specifically for the software industry and therefore addresses many issues which are specific to the software industry alone. SEI CMM goes beyond quality assurance and prepares an organization to ultimately achieve TQM (Total Quality Management). The SEI CMM model provides a list of key process areas on which an organization at any maturity level needs to concentrate to take it from one maturity level to the next. Thus, it provides a way of achieving gradual quality improvement.
2. Software Life Cycle Model
1. REQUIREMENT ANALYSIS results in the specification of the software's operational characteristics, indicates the software's interface with other system elements, and establishes constraints that the software must meet. Requirements analysis allows the software engineer to elaborate on basic requirements established during earlier requirements engineering tasks and build models that depict user scenarios, functional activities, problem classes and their relationships, system and class behavior, and the flow of data as it is transformed. Requirements analysis provides the software designer with a representation of information, function, and behavior that can be translated to architectural, interface, and component-level designs. The software engineer's primary focus is on what, not how.
(i) Overall objectives and philosophy: - The analysis model must achieve three primary objectives: 1. to describe what the customer requires, 2. to establish a basis for the creation of a software design, and 3. to define a set of requirements that can be validated once the software is built. The analysis model bridges the gap between a system-level description, which describes overall system functionality as achieved by applying software, hardware, data, human, and other system elements, and a software design, which describes the software's application architecture, user interface, and component-level structure.
(ii) Analysis rules of thumb: The model should focus on requirements that are visible within the problem or business domain. The level of abstraction should be relatively high.
Don't get bogged down in details that try to explain how the system will work. Each element of the analysis model should add to an overall understanding of software requirements and provide insight into the information domain, function, and behavior of the system. Delay consideration of infrastructure and other non-functional models until design. Minimize coupling throughout the system. It is important to represent relationships between classes and functions. Be certain that the analysis model provides value to all stakeholders.


Keep the model as simple as it can be.
(iii) Domain Analysis: - Software domain analysis is the identification, analysis, and specification of common requirements from a specific application domain, typically for reuse on multiple projects within that application domain; that is, the identification, analysis, and specification of common, reusable capabilities within a specific application domain, in terms of common objects, classes, subassemblies, and frameworks.
2. DESIGN CONCEPTS: - A set of fundamental design concepts has evolved over the history of software engineering. Although the degree of interest in each concept has varied over the years, each has stood the test of time. Each provides the software designer with a foundation from which more sophisticated design methods can be applied.
1. Abstraction: - At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. A procedural abstraction refers to a sequence of instructions that have a specific and limited function. The name of a procedural abstraction implies these functions, but specific details are suppressed.
2. Architecture: - Software architecture alludes to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. Architecture is the structure or organization of program components (modules), the manner in which these components interact, and the structure of the data that are used by the components. Structural models represent architecture as an organized collection of program components. Framework models increase the level of design abstraction by attempting to identify repeatable architectural design frameworks that are encountered in similar types of applications. Dynamic models address the behavioral aspects of the program architecture, indicating how the structure or system configuration may change as a function of external events. Process models focus on the design of the business or technical process that the system must accommodate. Finally, functional models can be used to represent the functional hierarchy of a system.
3. Patterns: - Brad Appleton defines a design pattern in the following manner: a pattern is a named nugget of insight which conveys the essence of a proven solution to a recurring problem within a certain context amidst competing concerns. Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.
4. Modularity: - Software architecture and design patterns embody modularity; that is, software is divided into separately named and addressable components, sometimes called modules, that are integrated to satisfy problem requirements. It might seem possible to conclude that, if we subdivide software indefinitely, the effort required to develop it will become negligibly small; this is not so, because the effort required to integrate the modules also grows. There is a number, M, of modules that would result in minimum development cost, but we do not have the necessary sophistication to predict M with assurance. We should modularize, but care should be taken to stay in the vicinity of M; both under-modularity and over-modularity should be avoided.
5. Information Hiding: - The principle of information hiding suggests that modules be characterized by design decisions that each hides from all the others.
Hiding implies that effective modularity can be achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve software function. The use of information hiding as a design criterion for modular systems provides the greatest benefits when modifications are required during testing and later, during software maintenance.
6. Functional Independence: - The concept of functional independence is a direct outgrowth of modularity and the concepts of abstraction and information hiding. Functional independence is achieved by developing modules with single-minded function and an aversion to excessive interaction with other modules. Software with effective modularity, that is, independent modules, is easier to develop because function can be compartmentalized and interfaces are simplified. Independence is assessed using two qualitative criteria: cohesion and coupling. Cohesion is an indication of the relative functional strength of a module. A cohesive module performs a single task, requiring little interaction with other components in other parts of a program. Coupling is an indication of the relative interdependence among modules, i.e. an indication of the interconnection among modules in a software structure. These ideas are illustrated in the sketch below.
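A minimal illustration (an assumed example, not from the text): the Stack class below hides its internal list representation, so clients interact only through a narrow interface. The module stays cohesive (it does one job) and the coupling of the rest of the program to it remains low.

# Illustrative sketch of information hiding and functional independence (assumed example).
class Stack:
    # A cohesive module: it does one job and hides its internal representation.
    def __init__(self):
        self._items = []          # hidden detail; callers never touch it directly

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

# Client code is coupled only to the public interface, not to the hidden list,
# so the internal representation could change without affecting callers.
s = Stack()
s.push(10)
s.push(20)
print(s.pop())   # 20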


7. Refinement: - Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth. Refinement is actually a process of elaboration. Refinement causes the designer to elaborate on the original statement, providing more and more detail as each successive refinement occurs. Abstraction and refinement are complementary concepts. Abstraction enables a designer to specify procedure and data and yet suppress low-level details. Refinement helps the designer to reveal low-level details as design progresses.
8. Refactoring: - Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. When software is refactored, the existing design is examined for redundancy, unused design elements, inefficient or unnecessary algorithms, poorly constructed or inappropriate data structures, or any other design failure that can be corrected to yield a better design.
9. Design classes: - As the design model evolves, the software team must define a set of design classes that 1. refine the analysis classes by providing design detail that will enable the classes to be implemented, and 2. create a new set of design classes that implement a software infrastructure to support the business solution. Five different types of design classes, each representing a different layer of the design architecture, are suggested: User interface classes define all abstractions that are necessary for human-computer interaction (HCI). In many cases, HCI occurs within the context of a metaphor, and the design classes for the interface may be visual representations of the elements of the metaphor. Business domain classes are often refinements of the analysis classes defined earlier. Process classes implement lower-level business abstractions required to fully manage the business domain classes. Persistent classes implement software data stores that will persist beyond the execution of the software. System classes implement software management and control functions that enable the system to operate and communicate within its computing environment and with the outside world. As the design model evolves, the software team must develop a complete set of attributes and operations for each design class.
3. DESIGN NOTATIONS
1. Graphical Design Notation: The activity diagram allows a designer to represent sequence, condition, and repetition, all elements of structured programming, and is the descendant of an earlier pictorial design representation (still used widely) called a flowchart. A flowchart, like an activity diagram, is quite simple pictorially. A box is used to indicate a processing step. A diamond represents a logical condition, and arrows show the flow of control. A sequence is represented as two processing boxes connected by a line (arrow) of control. Condition, also called if-then-else, is depicted as a decision diamond that, if true, causes then-part processing to occur and, if false, invokes else-part processing. Repetition is represented using two slightly different forms. The do-while tests a condition and executes a loop task repetitively as long as the condition holds true. A repeat-until executes the loop task first, then tests a condition and repeats the task until the condition fails. The selection construct shown in the figure is actually an extension of the if-then-else. The same constructs are sketched in code form below.
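The structured constructs described above can also be written directly in code. The small sketch below is an assumed example, not from the text; it shows sequence, if-then-else, a do-while style loop, and a repeat-until style loop emulated in Python.

# Illustrative sketch of the structured programming constructs described above (assumed example).
def classify(n):
    # Sequence: two processing steps, one after another.
    doubled = n * 2
    total = doubled + 1

    # Condition (if-then-else).
    if total % 2 == 0:
        label = "even"
    else:
        label = "odd"

    # Repetition, do-while style: test first, loop while the condition holds.
    countdown = total
    while countdown > 0:
        countdown -= 1

    # Repetition, repeat-until style: execute the body first, then test.
    attempts = 0
    while True:
        attempts += 1
        if attempts >= 3:   # "until" condition
            break

    return label, attempts

print(classify(5))   # ('odd', 3)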
2. Tabular Design Notation: - In many software applications, a module may be required to evaluate a complex combination of conditions and select appropriate actions based on these conditions. Decision tables provide a notation that translates actions and conditions into a tabular form. The table is difficult to misinterpret and may even be used as machine-readable input to a table-driven algorithm. A decision table is divided into four quadrants. The upper left-hand quadrant contains a list of all conditions. The lower left-hand quadrant contains a list of all actions that are possible based on combinations of conditions. The right-hand quadrants form a matrix that indicates the condition combinations and the corresponding actions that will occur for a specific combination. Therefore, each column of the matrix may be interpreted as a processing rule.


The following steps are applied to develop a decision table: i) List all actions that can be associated with a specific procedure (or module). ii) List all conditions (or decisions made) during execution of the procedure. iii) Associate specific sets of conditions with specific actions, eliminating impossible combinations of conditions; alternatively, develop every possible permutation of conditions. iv) Define rules by indicating what action(s) occur for a set of conditions.

Rule                             1    2    3    4    5    6
Conditions
  Regular customer               T    T
  Silver customer                          T    T
  Gold customer                                      T    T
  Special discount               F    T    F    T    F    T
Actions
  No discount                    X
  Apply 8% discount                        X    X
  Apply 15% discount                                 X    X
  Apply additional x% discount        X         X         X

A regular customer receives normal print rates and delivery. A silver customer gets an 8% discount on all quotes and is placed ahead of all regular customers in the job queue. A gold customer gets a 15% discount on all quotes. A special discount of x%, in addition to the other discounts, can be applied to any customer's quote at the discretion of management.
3. Program Design Language: - Program design language (PDL), also called structured English or pseudocode, is a pidgin language in that it uses the vocabulary of one language (i.e., English) and the overall syntax of another (i.e., a structured programming language). PDL cannot be compiled. However, tools can translate PDL into a programming language skeleton and/or a graphical representation (e.g., a flowchart) of the design. These tools also produce nesting maps, a design operation index, cross-reference tables, and a variety of other information. A program design language may be a simple transposition of a language such as Ada, C, or Java. Basic PDL syntax should include constructs for component definition, interface description, data declaration, block structuring, condition constructs, repetition constructs, and I/O constructs. It should be noted that PDL can be extended to include keywords for multitasking and/or concurrent processing, interrupt handling, interprocess synchronization, and many other features. Recall that PDL is not a programming language; the designer can adapt it as required without worrying about syntax errors. However, the design for the monitoring software would have to be reviewed (do you see any problems?) and further refined before code could be written.
Comparison of design notations: - Design notation should lead to a procedural representation that is easy to understand and review. In addition, the notation should enhance the ability to translate design into code, so that code does, in fact, become a natural by-product of design. A natural question that arises in any discussion of design notation is: what notation is really the best, given the attributes noted above? Any answer to this question is subjective and open to debate. However, it appears that program design language offers the best combination of characteristics. PDL may be embedded directly into source listings, improving documentation and making design maintenance less difficult. Editing can be accomplished with any text editor or word-processing system, automatic processors already exist, and the potential for automatic code generation is good. However, it does not follow that other design notations are necessarily inferior to PDL or are not good in specific attributes.
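As a sketch of how such a table can drive code directly, each column of the discount table above becomes a rule mapping a condition combination to its actions. The rule encoding and the value x = 5 below are assumptions for illustration only.

# Table-driven sketch of the discount decision table above (assumed example; x is hypothetical).
RULES = [
    # (customer_type, special_discount) -> (base_discount_percent, apply_additional_x)
    (("regular", False), (0,  False)),   # rule 1: no discount
    (("regular", True),  (0,  True)),    # rule 2: additional x% only
    (("silver",  False), (8,  False)),   # rule 3
    (("silver",  True),  (8,  True)),    # rule 4
    (("gold",    False), (15, False)),   # rule 5
    (("gold",    True),  (15, True)),    # rule 6
]

def discount_for(customer_type, special_discount, x=5):
    # Look up the matching rule (column) and return the total discount percentage.
    for condition, (base, extra) in RULES:
        if condition == (customer_type, special_discount):
            return base + (x if extra else 0)
    raise ValueError("no rule matches this condition combination")

print(discount_for("silver", True))    # 13, with the hypothetical x = 5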
The pictorial nature of activity diagrams and flowcharts provides a perspective on control flow that many designers prefer. The precise tabular content of decision tables is an excellent tool for table-driven applications.
5. Testing and Maintenance
1. Verification and Validation: - Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of activities that ensure that software correctly implements a specific function.


Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. Boehm [BOE81] states this another way: Verification: Are we building the product right? Validation: Are we building the right product? The definition of V&V encompasses many of the activities that fall under software quality assurance (SQA). Verification and validation encompass a wide array of SQA activities that include formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, and installation testing. Although testing plays an extremely important role in V&V, many other activities are also necessary. Quality is incorporated into software throughout the process of software engineering. Proper application of methods and tools, effective formal technical reviews, and solid management and measurement all lead to quality that is confirmed during testing. Testing is the unavoidable part of any responsible effort to develop a software system.
2. Debugging: - Debugging is a straightforward application of the scientific method that has been developed over 2,500 years. The basis of debugging is to locate the problem's source by binary partitioning, through working hypotheses that predict new values to be examined. In general, three debugging strategies have been proposed: 1. brute force, 2. backtracking, and 3. cause elimination.
Debugging tactics: - The brute force category of debugging is probably the most common and least efficient method for isolating the cause of a software error. We apply brute force debugging methods when all else fails. Using a "let the computer find the error" philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements.
Backtracking: - Backtracking is a fairly common debugging approach that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found. Unfortunately, as the number of source lines increases, the number of potential backward paths may become unmanageably large.
Cause elimination: - Cause elimination is manifested by induction or deduction and introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. A cause hypothesis is devised, and the aforementioned data are used to prove or disprove the hypothesis. Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each.
The people factor: - Any discussion of debugging approaches and tools is incomplete without mention of a powerful ally: other people! A fresh viewpoint, unclouded by hours of frustration, can do wonders. A final maxim for debugging might be: when all else fails, get help!
3. Testing Strategies: Testing is a set of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing, a set of steps into which we can place specific test case design techniques and testing methods, should be defined for the software process. To perform effective testing, a software team should conduct effective formal technical reviews. Testing begins at the component level and works outward toward the integration of the entire computer-based system.
Different testing techniques are appropriate at different points in time. Testing is conducted by the developer of the software and (for large projects) by an independent test group. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy. A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements. A minimal low-level test is sketched below.
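For instance, a low-level (unit) test of a single function might look like the following sketch, using Python's built-in unittest module; the function and the values are assumed purely for illustration.

# Illustrative unit test sketch (assumed example, not from the text).
import unittest

def apply_discount(price, percent):
    # Small source code segment under test: apply a percentage discount to a price.
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_price(self):
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()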


2. Organizing for Software Testing: - The software developer is always responsible for testing the individual units (components) of the program, ensuring that each performs the function or exhibits the behavior for which it was designed. Only after the software architecture is complete does an independent test group (ITG) become involved. The role of an independent test group is to remove the inherent problems associated with letting the builder test the thing that has been built. Independent testing removes the conflict of interest that may otherwise be present. After all, ITG personnel are paid to find errors. The developer and the ITG work closely throughout a software project to ensure that thorough tests will be conducted. While testing is conducted, the developer must be available to correct errors that are uncovered.
3. A Software Testing Strategy for Conventional Software Architectures: - A strategy for software testing may also be viewed in the context of a spiral. Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral, we encounter validation testing (Figure 1, testing strategy), where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally, we arrive at system testing, where the software and other system elements are tested as a whole. To test computer software, we spiral out along streamlines that broaden the scope of testing with each turn. Considering the process from a procedural point of view, testing within the context of software engineering is actually a series of four steps that are implemented sequentially, as shown in Figure 2. Initially, tests focus on each component individually, ensuring that it functions properly as a unit; hence the name unit testing. Unit testing makes heavy use of testing techniques that exercise specific paths in a component's control structure to ensure complete coverage and maximum error detection. Next, components must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with the dual problems of verification and program construction. Test case design techniques that focus on inputs and outputs are more prevalent during integration, although techniques that exercise specific program paths may be used to ensure coverage of major control paths. After the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria (established during requirements analysis) must be evaluated. Validation testing provides final assurance that software meets all functional, behavioral, and performance requirements. The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering. Software, once validated, must be combined with other system elements (e.g., hardware, people, databases). System testing verifies that all elements mesh properly and that overall system function/performance is achieved.
4. A Software Testing Strategy for Object-Oriented Architectures: - Unit testing loses some of its meaning, and integration strategies change significantly. In summary, both testing strategies and testing tactics must account for the unique characteristics of object-oriented software.


The overall strategy for object-oriented software is identical in philosophy to the one applied for conventional architectures, but differs in approach. We begin with testing in the small: the focus changes from an individual module (the conventional view) to a class that encompasses attributes and operations and implies communication and collaboration. As classes are integrated into an object-oriented architecture, a series of regression tests is run to uncover errors due to communication and collaboration between classes (components) and side effects caused by the addition of new classes (components). Finally, the system as a whole is tested to ensure that errors in requirements are uncovered.
5. Criteria for Completion of Testing: - A classic question arises every time software testing is discussed: when are we done testing, and how do we know that we have tested enough? Sadly, there is no definitive answer to this question. One response is: you are never done testing; the burden simply shifts from you to your customer. Another response is: you are done testing when you run out of time or you run out of money. Musa and Ackerman suggest a response that is based on statistical criteria: "No, we cannot be absolutely certain that the software will never fail, but relative to a theoretically sound and experimentally validated statistical model, we have done sufficient testing to say with 95% confidence that the probability of 1000 CPU hours of failure-free operation in a probabilistically defined environment is at least 0.995."
SOFTWARE MAINTENANCE: - In the early 1970s, the maintenance iceberg was big enough to sink an aircraft carrier. Today, it could easily sink the entire navy! The maintenance of existing software can account for over 60 percent of all effort expended by a development organization, and the percentage continues to rise as more software is produced. Software maintenance is, of course, far more than fixing mistakes. We may define maintenance by describing four activities that are undertaken after a program is released for use: corrective maintenance, adaptive maintenance, perfective maintenance or enhancement, and preventive maintenance or reengineering. Only about 20 percent of all maintenance work is spent fixing mistakes. The remaining 80 percent is spent adapting existing systems to changes in their external environment, making enhancements requested by users, and reengineering an application for future use.
SOFTWARE CONFIGURATION MANAGEMENT: - The output of the software process is information that may be divided into three broad categories: (i) computer programs (both source level and executable forms); (ii) work products that describe the computer programs (targeted at both technical practitioners and users); and (iii) data (contained within the program or external to it). The items that comprise all information produced as part of the software process are collectively called a software configuration. Software configuration management (SCM) can be viewed as a software quality assurance activity that is applied throughout the software process. Major SCM tasks and important concepts are as follows: 1.
A SCM Scenario: - A typical CM operational scenario involves a project manager who is in charge of a software group, a configuration manager who is in charge of the CM procedures and policies, the software engineers who are responsible for developing and maintaining the software product, and the customer who uses the product. At the operational level, the scenario involves various roles and tasks. For the project manager, the goal is to ensure that the product is developed within a certain time frame. Hence, the manager monitors the progress of development and recognizes and reacts to problems. The goals of the configuration manager are to ensure that procedures and policies for creating, changing, and testing code are followed, as well as to make information about the project accessible. The configuration manager creates and disseminates task lists for the engineers and, in effect, creates the project context. The configuration manager also collects statistics about components in the software system, such as information determining which components in the system are problematic. For the software engineers, changes are coordinated and propagated across each other's work by merging files; mechanisms exist to ensure that, for components which undergo simultaneous changes, there is some way of resolving conflicts and merging changes. Ideally, a CM system used in this scenario should support all these roles and tasks; that is, the roles determine the functionality required of a CM system. The project manager sees CM as an auditing mechanism; the configuration manager sees it as a controlling, tracking, and policy making mechanism; and the software engineer sees it as a quality assurance mechanism.


2. Elements of a Configuration Management System: - Susan Dart identifies four important elements that should exist when a configuration management system is developed. Component elements: a set of tools coupled within a file management system (e.g., a database) that enable access to and management of each software configuration item. Process elements: a collection of procedures and tasks that define an effective approach to change management (and related activities) for all constituencies involved in the management, engineering, and use of computer software. Construction elements: a set of tools that automate the construction of software by ensuring that the proper set of validated components (i.e., the correct version) has been assembled. Human elements: to implement effective SCM, the software team uses a set of tools and process features (encompassing the other CM elements). These elements are not mutually exclusive. For example, component elements work in conjunction with construction elements as the software process evolves, and process elements guide many human activities that are related to SCM and might therefore be considered human elements as well.
3. Baselines: - A baseline is a software configuration management concept that helps us to control change without seriously impeding justifiable change. Change is a fact of life in software development. As time passes, all constituencies know more (about what they need, which approach would be best, and how to get it done and still make money). This additional knowledge is the driving force behind most changes and leads to a statement of fact that is difficult for many software engineering practitioners to accept: most changes are justified! Before a software configuration item becomes a baseline, changes may be made quickly and informally. However, once a baseline is established, we figuratively pass through a swinging one-way door. In the context of software engineering, a baseline is a milestone in the development of software. A baseline is marked by the delivery of one or more software configuration items that have been approved as a consequence of a formal technical review. For example, once all parts of a design model have been reviewed, corrected, and then approved, the design model becomes a baseline. The most common software baselines are shown in Figure 3.
4. Software Configuration Items: - A software configuration item (SCI) is information that is created as part of the software engineering process. In the extreme, an SCI could be considered to be a single section of a large specification or one test case in a large suite of tests. More realistically, an SCI is a document, an entire suite of test cases, or a named program component (e.g., a C++ function or a Java applet). In addition to the SCIs that are derived from software work products, many software engineering organizations also place software tools under configuration control. That is, specific versions of editors, compilers, browsers, and other automated tools are frozen as part of the software configuration. Because these tools were used to produce documentation, source code, and data, they must be available when changes to the software configuration are to be made. In reality, SCIs are organized to form configuration objects that may be cataloged in the project database with a single name.
A configuration object has a name and attributes, and is connected to other objects by relationships. Referring to Figure 4, the configuration objects DesignSpecification, DataModel, ComponentN, SourceCode, and TestSpecification are each defined separately. However, each of the objects is related to the others, as shown by the arrows. A curved arrow indicates a compositional relation; that is, DataModel and ComponentN are part of the object DesignSpecification. A double-headed straight arrow indicates an interrelationship. If a change were made to the SourceCode object, the interrelationships enable a software engineer to determine what other objects might be affected. A small data-structure sketch of such a configuration follows.
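A minimal sketch of such configuration objects and their relationships (an assumed illustration; the class layout and relationship names are not prescribed by the text):

# Illustrative sketch of configuration objects and their relationships (assumed example).
class ConfigurationObject:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes
        self.parts = []            # compositional relations (curved arrows)
        self.related = []          # interrelationships (double-headed arrows)

    def add_part(self, obj):
        self.parts.append(obj)

    def relate_to(self, obj):
        self.related.append(obj)
        obj.related.append(self)

    def impacted_by_change(self):
        # Objects that might be affected if this object changes.
        return [o.name for o in self.related]

design_spec = ConfigurationObject("DesignSpecification", version="1.0")
data_model = ConfigurationObject("DataModel")
component_n = ConfigurationObject("ComponentN")
source_code = ConfigurationObject("SourceCode")
test_spec = ConfigurationObject("TestSpecification")

design_spec.add_part(data_model)       # DataModel is part of DesignSpecification
design_spec.add_part(component_n)      # ComponentN is part of DesignSpecification
source_code.relate_to(test_spec)       # interrelationship between SourceCode and TestSpecification

print(source_code.impacted_by_change())   # ['TestSpecification']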


Requirements engineering provides the appropriate mechanism for understanding what the customer wants, analyzing need, assessing feasibility, negotiating a reasonable solution, specifying the solution unambiguously, validating the specification, and managing the requirements as they are transformed into an operational system. The requirements engineering process is accomplished through the execution of seven distinct functions: inception, elicitation, elaboration, negotiation, specification, validation, and management.
Inception: - At project inception, software engineers ask a set of context-free questions. The intent is to establish a basic understanding of the problem, the people who want a solution, the nature of the solution that is desired, and the effectiveness of preliminary communication and collaboration between the customer and the developer.
Elicitation: - It certainly seems simple enough: ask the customer, the users, and others what the objectives for the system or product are, what is to be accomplished, and how the system or product is to be used on a day-to-day basis.
SRS (Software Requirements Specification): - This document is generated as the output of requirements analysis. Requirements analysis aims to obtain a clear and thorough understanding of the product to be developed. Thus, the SRS should be a consistent, correct and complete document. The developer of the system can prepare the SRS after detailed communication with the customer. An SRS clearly defines the following: External interfaces of the system: they identify the information which is to flow from and to the system. Functional and non-functional requirements of the system: they capture the run-time requirements of the system. Constraints of the system.
Characteristics of a good SRS document: A good SRS document should possess the following qualities: 1. It should be concise and at the same time unambiguous. 2. It should be complete. 3. It should be consistent. 4. It should be well-structured and easily modifiable. 5. It should specify what the system must do and not state how to do it. 6. It should specify all goals and constraints concerning implementation. 7. It should record references to maintainability and adaptability. 8. It should characterize acceptable responses to undesired events. 9. It should show conceptual integrity so that the users of the system may easily understand it.
