
Submitted By: Shiva Dixit

Roll No.: 571016696

Master of Business Administration - MBA, Semester III
MI0033 Software Engineering
Assignment - Set 1
Q1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.

Answer: Quality focuses on the software's conformance to explicit and implicit requirements. Reliability focuses on the ability of the software to function correctly as a function of time or some other quantity. Safety considers the risks associated with failure of a computer-based system that is controlled by software. In most cases an assessment of quality considers many factors that are qualitative in nature. Assessment of reliability, and to some extent safety, is more quantitative, relying on statistical models of past events that are coupled with software characteristics in an attempt to predict the future operation of a program.

There is no doubt that the reliability of a computer program is an important element of its overall quality. If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable. Software reliability, unlike many other quality factors, can be measured directly and estimated using historical and developmental data. Software reliability is defined in statistical terms as "the probability of failure-free operation of a computer program in a specified environment for a specified time" [MUS87]. To illustrate, suppose program X is estimated to have a reliability of 0.96 over eight elapsed processing hours. In other words, if program X were to be executed 100 times, each run requiring eight hours of elapsed processing (execution) time, it is likely to operate correctly (without failure) 96 times out of 100.

Whenever software reliability is discussed, a pivotal question arises: what is meant by the term failure? In the context of any discussion of software quality and reliability, failure is nonconformance to software requirements. Yet, even within this definition, there are gradations. Failures can be merely annoying or catastrophic. One failure may be corrected within seconds while another requires weeks or even months to correct. Complicating the issue even further, the correction of one failure may in fact result in the introduction of other errors that ultimately result in other failures.
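To make the definition concrete, the sketch below computes the empirical reliability from run counts (the 96-out-of-100 example above) and projects it forward under an exponential failure model, i.e. assuming a constant failure rate. That model is a common statistical simplification introduced here for illustration, not a method prescribed by the text:

```python
import math

def empirical_reliability(successful_runs: int, total_runs: int) -> float:
    """Observed fraction of runs that completed without failure."""
    return successful_runs / total_runs

def projected_reliability(mission_hours: float, mtbf_hours: float) -> float:
    """R(t) = exp(-t / MTBF), assuming a constant failure rate."""
    return math.exp(-mission_hours / mtbf_hours)

# Program X: 96 failure-free runs out of 100, eight hours each.
print(empirical_reliability(96, 100))              # 0.96
# Total up-time 800 h with 4 failures gives MTBF = 200 h, so the
# exponential model projects, for one more eight-hour mission:
print(round(projected_reliability(8, 200), 3))     # ~0.961
```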
Q2. Discuss the Objective & Principles Behind Software Testing.

Answer: Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects). Software testing can be stated as the process of validating and verifying that a software program/application/product:

1. Meets the requirements that guided its design and development;
2. Works as expected; and
3. Can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers.

In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.

The following can be described as testing principles:

1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle applies to testing.
4. Testing should begin "in the small" and progress toward testing "in the large".
5. Exhaustive testing is not possible.
6. To be most effective, testing should be conducted by an independent third party.
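As a minimal illustration of these principles in practice, the sketch below executes a small program unit with the intent of finding defects, and each test is labeled with the requirement it traces back to. The function `apply_discount` and the requirement identifiers R1 and R2 are hypothetical, invented for this example only:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_r1_normal_discount(self):
        # Traceable to hypothetical requirement R1: apply a valid discount.
        self.assertAlmostEqual(apply_discount(100.0, 25.0), 75.0)

    def test_r2_rejects_invalid_percent(self):
        # Traceable to hypothetical requirement R2: reject invalid input.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150.0)

if __name__ == "__main__":
    unittest.main()
```

Note that even these two tests are not exhaustive, which is exactly what the principles above anticipate.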

Q3. Discuss the CMM 5 Levels for Software Process.

Answer: Capability Maturity Model Integration (CMMI) is a process improvement approach whose goal is to help organizations improve their performance. CMMI can be used to guide process improvement across a project, a division, or an entire organization. Currently supported is CMMI Version 1.3. CMMI in software engineering and organizational development is a process improvement approach that provides organizations with the essential elements of effective process improvement. CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.

According to the Software Engineering Institute (SEI, 2008), CMMI helps "integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes."

CMMI currently addresses three areas of interest:

1. Product and service development: CMMI for Development (CMMI-DEV)
2. Service establishment, management, and delivery: CMMI for Services (CMMI-SVC)
3. Product and service acquisition: CMMI for Acquisition (CMMI-ACQ)

CMMI was developed by a group of experts from industry, government, and the Software Engineering Institute (SEI) at Carnegie Mellon University. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization.[1]

CMMI originated in software engineering but has been highly generalized over the years to embrace other areas of interest, such as the development of hardware products, the delivery of all kinds of services, and the acquisition of products and services. The word "software" does not appear in the definitions of CMMI. This generalization of improvement concepts makes CMMI extremely abstract. It is not as specific to software engineering as its predecessor, the Software CMM. The staged representation retains the five maturity levels of the original CMM:

1. Initial: processes are ad hoc, occasionally even chaotic; success depends on individual effort.
2. Repeatable (Managed in CMMI): basic project management processes are established to track cost, schedule and functionality.
3. Defined: the process is documented and standardized across the organization.
4. Managed (Quantitatively Managed in CMMI): the process and products are quantitatively measured and controlled.
5. Optimizing: continuous process improvement is enabled by quantitative feedback and by piloting innovative ideas and technologies.

Q4. Discuss the Water Fall model for Software Development.

Answer: The waterfall model is a sequential design process, often used in software development, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation and Maintenance.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

The first known presentation describing the use of similar phases in software engineering was given by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956.[1] This presentation was about the development of software for SAGE. The paper was republished in 1983[2] with a foreword by Benington pointing out that the process was not in fact performed in a strict top-down fashion, but depended on a prototype.

The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce,[3] though Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[4] This, in fact, is how the term is generally used in writing about software development: to describe a critical view of a commonly used software practice.[5] In Royce's original waterfall model, the following phases are followed in order:

1. Requirements specification
2. Design
3. Construction (also known as implementation or coding)
4. Integration
5. Testing and debugging (also known as validation)
6. Installation
7. Maintenance

Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations on this process.

Q5. Explain the Advantages of Prototype Model, & Spiral Model in Contrast to Water Fall model.

Answer: Advantages of prototyping:

1. May provide the proof of concept necessary to attract funding.
2. Early visibility of the prototype gives users an idea of what the final system will look like.
3. Encourages active participation among users and the producer.
4. Enables higher output from users.
5. Cost effective (development costs are reduced).
6. Increases system development speed.
7. Helps to identify any problems with the efficacy of earlier design, requirements analysis and coding activities.
8. Helps to refine the potential risks associated with the delivery of the system being developed.
9. Various aspects can be tested and quicker feedback can be obtained from the user.
10. Helps to deliver a quality product more easily.
11. User interaction is available during the development cycle of the prototype.

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects.

This should not be confused with the helical model of modern systems architecture, which uses a dynamic programming approach to optimize a system's architecture before coders make design decisions that would cause problems.

The spiral model combines the idea of iterative development (prototyping) with the systematic, controlled aspects of the waterfall model. It allows for incremental releases of the product, or incremental refinement, through each pass around the spiral. The spiral model also explicitly includes risk management within software development. Identifying major risks, both technical and managerial, and determining how to lessen those risks helps keep the software development process under control.

The spiral model is based on continuous refinement of key products for requirements definition and analysis, system and software design, and implementation (the code). At each iteration around the cycle, the products are extensions of an earlier product. This model uses many of the same phases as the waterfall model, in essentially the same order, separated by planning, risk assessment, and the building of prototypes and simulations.

Documents are produced when they are required, and their content reflects the information necessary at that point in the process. All documents are not created at the beginning of the process, nor all at the end (hopefully). Like the product they define, the documents are works in progress. The idea is to have a continuous stream of products produced and available for user review.

The spiral lifecycle model allows for elements of the product to be added in when they become available or known. This assures that there is no conflict with previous requirements and design. This method is consistent with approaches that have multiple software builds and releases and allows an orderly transition to a maintenance activity. Another positive aspect is that the spiral model forces early user involvement in the system development effort. For projects with heavy user interfacing, such as user application programs or instrument interface applications, such involvement is helpful.

Starting at the center, each turn around the spiral goes through several task regions:

1. Determine the objectives, alternatives, and constraints on the new iteration.
2. Evaluate alternatives and identify and resolve risk issues.
3. Develop and verify the product for this iteration.
4. Plan the next iteration.

Note that the requirements activity takes place in multiple sections and in multiple iterations, just as planning and risk analysis occur in multiple places. Final design, implementation, integration, and test occur in iteration 4. The spiral can be repeated multiple times for multiple builds. Using this method of development, some functionality can be delivered to the user faster than with the waterfall method. The spiral method also helps manage risk and uncertainty by allowing multiple decision points and by explicitly admitting that not everything can be known before the subsequent activity starts.

Q6. Explain the COCOMO Model & Software Estimation Technique.

Answer: The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry W. Boehm. The model uses a basic regression formula with parameters that are derived from historical project data and current project characteristics.

COCOMO was first published in Boehm's 1981 book Software Engineering Economics[1] as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace, where Boehm was Director of Software Research and Technology.

The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of software development, which was the prevalent software development process in 1981. References to this model typically call it COCOMO 81.

In 1995 COCOMO II was developed, and it was finally published in 2000 in the book Software Cost Estimation with COCOMO II.[2] COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model came as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components. This article refers to COCOMO 81.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited by its lack of factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.

The ability to accurately estimate the time and/or cost taken for a project to come to its successful conclusion is a serious problem for software engineers. The use of a repeatable, clearly defined and well understood software development process has, in recent years, shown itself to be the most effective method of gaining useful historical data that can be used for statistical estimation. In particular, the act of sampling more frequently, coupled with the loosening of constraints between parts of a project, has allowed more accurate estimation and more rapid development times.

Popular methods for estimation in software engineering include:

1. Analysis Effort method
2. COCOMO (this model is obsolete and should only be used for demonstration purposes)
3. COCOMO II
4. COSYSMO
5. Evidence-based scheduling: a refinement of typical agile estimating techniques using minimal measurement and total time accounting
6. Function Point Analysis
7. Parametric estimating
8. PRICE Systems: founders of commercial parametric models that estimate the scope, cost, effort and schedule for software projects
9. Proxy-based estimating (PROBE), from the Personal Software Process
10. Program Evaluation and Review Technique (PERT)
11. SEER-SEM: parametric estimation of effort, schedule, cost and risk; minimum time and staffing concepts based on Brooks's law
12. SLIM
13. The Planning Game (from Extreme Programming)
14. Weighted Micro Function Points (WMFP)
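As a concrete illustration, the sketch below implements the Basic COCOMO equations, effort E = a(KLOC)^b in person-months and schedule D = c(E)^d in months, with the coefficient values Boehm published in 1981 for the three project modes. The 32 KLOC example project is hypothetical:

```python
# Basic COCOMO (Boehm, 1981): coefficients (a, b, c, d) per project mode.
MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Effort E = a * KLOC**b (person-months); schedule D = c * E**d (months)."""
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

# Hypothetical 32 KLOC project of medium complexity:
effort, duration = basic_cocomo(32, "semi-detached")
print(f"{effort:.0f} person-months over {duration:.1f} months")
```

Intermediate and Detailed COCOMO then multiply this nominal effort by adjustment factors derived from the cost drivers.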

Master of Business Administration - MBA, Semester III
MI0033 Software Engineering
Assignment - Set 2
Q1. Write a note on myths of Software.

Answer: A myth is defined as a "widely held but false notion" by the Oxford dictionary; so, as in other fields, the software arena also has some myths to demystify. Pressman insists that "software myths, beliefs about software and the process used to build it, can be traced to the earliest days of computing. Myths have a number of attributes that have made them insidious." So software myths prevail, and though they are not clearly visible, they have the potential to harm all the parties involved in the software development process, mainly the developer team.

Tom DeMarco expresses, "In the absence of meaningful standards, a new industry like software comes to depend instead on folklore." The given statement points out that the software industry caught pace just some decades back, so it has not matured to a formidable level and there are no strict standards in software development. There does not exist one best method of software development, which ultimately gives rise to the ubiquitous software myths.

Primarily, there are three types of software myths:

1. Management myths
2. Customer myths
3. Practitioner/developer myths

Management Myths: Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a drowning person who grasps at a straw, a software manager often grasps at belief in a software myth, if that belief will lessen the pressure.

Myth: We already have a book that's full of standards and procedures for building software. Won't that provide my people with everything they need to know?

Reality: The book of standards may very well exist, but is it used? Are software practitioners aware of its existence? Does it reflect modern software engineering practice? Is it complete? Is it adaptable? Is it streamlined to improve time to delivery while still maintaining a focus on quality? In many cases, the answer to all these questions is no.

Myth: If we get behind schedule, we can add more programmers and catch up (sometimes called the Mongolian horde concept).

Reality: Software development is not a mechanistic process like manufacturing. In the words of Brooks [BRO75]: "Adding people to a late software project makes it later." At first, this statement may seem counterintuitive. However, as new people are added, people who were working must spend time educating the newcomers, thereby reducing the amount of time spent on productive development effort.

Myth: If we decide to outsource the software project to a third party, I can just relax and let that firm build it.

Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.

Customer Myths: A customer who requests computer software may be a person at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has requested software under contract. In many cases, the customer believes myths about software because software managers and practitioners do little to correct misinformation. Myths lead to false expectations and, ultimately, dissatisfaction with the developers.

Myth: A general statement of objectives is sufficient to begin writing programs; we can fill in the details later.

Reality: Although a comprehensive and stable statement of requirements is not always possible, an ambiguous statement of objectives is a recipe for disaster. Unambiguous requirements are developed only through effective and continuous communication between customer and developer.

Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.

Reality: It is true that software requirements change, but the impact of change varies with the time at which it is introduced. When requirement changes are requested early, the cost impact is relatively small. However, as time passes, the cost impact grows rapidly: resources have been committed, a design framework has been established, and change can cause upheaval that requires additional resources and major design modification.

Q2. Explain Version Control & Change Control.

Answer: Version Control: A version control system (also known as a revision control system) is a repository of files, often the files for the source code of computer programs, with monitored access. Every change made to the source is tracked, along with who made the change, why they made it, and references to problems fixed or enhancements introduced by the change. Version control systems are essential for any form of distributed, collaborative development. Whether it is the history of a wiki page or a large software development project, the ability to track each change as it was made, and to reverse changes when necessary, can make all the difference between a well-managed, controlled process and an uncontrolled "first come, first served" system. It can also serve as a mechanism for due diligence for software projects.

Version control has been closely studied and understood in the software engineering community for a long time. The solutions are stable, robust and well-supported. There are various systems suitable for small local teams and for large distributed teams, making them ideal for coordinating software development and for mitigating differences in culture and time zone. Version control is provided at sites such as SourceForge and Google Code. These sites typically build a suite of services around version control: archiving, release downloads, mailing lists, bug trackers, web hosting and build farms. This range of functionality makes them particularly attractive for those projects that do not have the resources to maintain their own server for version control. CVS used to be the most widely used open source version control system, but these days Subversion has overtaken it and is very commonly used in open source projects.

However, some newer open source version control systems such as Arch and Git have made significant inroads. The basic capabilities of these systems are very similar, but they offer different security, networking and abstraction functionality, and different licenses. There are also many proprietary solutions available from a range of suppliers.

Change Control: Change control is a systematic approach to managing all changes made to a product or system. The purpose is to ensure that no unnecessary changes are made, that all changes are documented, that services are not unnecessarily disrupted and that resources are used efficiently. Within information technology (IT), change control is a component of change management. The change control process is usually conducted as a sequence of steps proceeding from the submission of a change request. Typical IT requests include the addition of features to software applications, the installation of patches, and upgrades to network equipment. Here is an example of a six-step process for a software change request:

1. Documenting the change request: When the client requests the change, that request is categorized and recorded, along with informal assessments of the importance of that change and the difficulty of implementing it.
2. Formal assessment: The justification for the change and the risks and benefits of making/not making the change are evaluated. If the change request is accepted, a development team will be assigned. If the change request is rejected, that fact is documented and communicated to the client.
3. Planning: The team responsible for the change creates a detailed plan for its design and implementation, as well as a plan for rolling back the change should it be deemed unsuccessful.
4. Designing and testing: The team designs the program for the software change and tests it. If the change is deemed successful, the team requests approval and a date for implementation.
5. Implementation and review: The team implements the program and stakeholders review the change.
6. Final assessment: If the client is satisfied that the change was implemented satisfactorily, the change request is closed. If the client is not satisfied, the project is reassessed and steps may be repeated.
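The six steps above amount to a small state machine. The sketch below is one way such a change request might be modeled; the class, states and example request are hypothetical and are not drawn from any particular change management tool:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    DOCUMENTED = auto()   # step 1: request recorded and categorized
    ASSESSED = auto()     # step 2: risks and benefits evaluated
    PLANNED = auto()      # step 3: implementation and rollback plans
    TESTED = auto()       # step 4: change designed and tested
    IMPLEMENTED = auto()  # step 5: change deployed and reviewed
    CLOSED = auto()       # step 6: client satisfied, request closed
    REJECTED = auto()     # request declined at formal assessment

@dataclass
class ChangeRequest:
    summary: str
    state: State = State.DOCUMENTED
    history: list = field(default_factory=list)

    def advance(self, new_state: State, note: str = "") -> None:
        # Every transition is recorded, keeping the request auditable.
        self.history.append((self.state, new_state, note))
        self.state = new_state

cr = ChangeRequest("Add CSV export to the reporting module")
cr.advance(State.ASSESSED, "benefit outweighs risk; team assigned")
cr.advance(State.PLANNED, "rollback plan drafted")
cr.advance(State.TESTED, "passes regression suite")
cr.advance(State.IMPLEMENTED, "stakeholders reviewed the change")
cr.advance(State.CLOSED, "client satisfied")
```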

Q3. Discuss the SCM Process.

Answer: In software engineering, software configuration management (SCM) is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines.

SCM concerns itself with answering the question: "Somebody did something; how can one reproduce it?" Often the problem involves not reproducing "it" identically, but reproducing it with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and of analyzing their differences. Traditional configuration management typically focused on the controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed.

According to another simple definition: software configuration management is how you control the evolution of a software project. The goals of SCM are generally:

1. Configuration identification: identifying configurations, configuration items and baselines.
2. Configuration control: implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
3. Configuration status accounting: recording and reporting all the necessary information on the status of the development process.
4. Configuration auditing: ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
5. Build management: managing the process and tools used for builds.
6. Process management: ensuring adherence to the organization's development process.
7. Environment management: managing the software and hardware that host the system.
8. Teamwork: facilitating team interactions related to the process.
9. Defect tracking: making sure every defect has traceability back to the source.
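As an illustration of configuration identification and configuration auditing, the sketch below fingerprints every file in a source tree so that a baseline can be named precisely and two states of the tree compared. It is a toy, not a standard SCM tool, and the directory name in the usage comment is hypothetical:

```python
import hashlib
from pathlib import Path

def baseline_manifest(root: str) -> dict[str, str]:
    """Map every file under `root` to a SHA-256 digest. Two builds from the
    same baseline must yield identical manifests, which is one way to answer:
    somebody did something, how can one reproduce it?"""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*")) if p.is_file()
    }

def audit(expected: dict[str, str], actual: dict[str, str]) -> list[str]:
    """Configuration audit: items that changed, appeared, or disappeared."""
    return sorted(p for p in expected.keys() | actual.keys()
                  if expected.get(p) != actual.get(p))

# Usage (hypothetical "src" tree): record a baseline, change, then re-audit.
# before = baseline_manifest("src")
# ... apply a controlled change ...
# print(audit(before, baseline_manifest("src")))
```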

Q4. Explain i. Software doesn't Wear Out. ii. Software is engineered & not manufactured.

Answer: i. Software doesn't wear out: Hardware can wear out, whereas software cannot. In the case of hardware we have a "bathtub" curve, which relates failure rate to time. In this curve, there is a relatively high failure rate at the start. After some period of time, defects get corrected and the failure rate drops to a steady state for some time period. But the failure rate rises again due to the effects of rain, dust, temperature extremes and many other environmental effects: the hardware begins to wear out.

Software, however, is not subject to the hardware failure pattern. The failure rate of software can be understood through an "idealized curve". In this type of curve the failure rate in the initial state is very high, but as the errors in the software get corrected, the curve flattens. The implication is that software can "deteriorate", but it does not "wear out". This is explained by the actual curve: as soon as one error gets corrected, the curve encounters another spike, meaning another error in the software. After some time the steady state of the software no longer remains steady, and the failure rate begins to rise. If hardware fails it can be replaced, but there is no replacement in the case of software.

Figure: Relationship between failure rate and time

Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form of the idealized curve: undiscovered defects cause high failure rates early in the life of a program, and the rate drops as those defects are corrected.
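Before turning to the actual curve, the contrast can be made concrete with a toy numerical model. All constants below are invented for illustration and are not calibrated failure data:

```python
import math

def idealized_rate(t: float) -> float:
    # High early failure rate that decays as defects are corrected,
    # then settles at a steady state; there is no wear-out rise.
    return 0.02 + 0.30 * math.exp(-t / 5)

def actual_rate(t: float, changes=(20, 40, 60)) -> float:
    # Each maintenance change adds a fresh spike of new defects, and the
    # minimum level drifts upward: the software "deteriorates".
    rate = idealized_rate(t) + 0.005 * sum(1 for c in changes if c <= t)
    for c in changes:
        if t >= c:
            rate += 0.15 * math.exp(-(t - c) / 3)
    return rate

for t in (0, 10, 25, 45, 65):
    print(t, round(idealized_rate(t), 3), round(actual_rate(t), 3))
```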

Figure: Idealized and actual failure curves for software

The idealized curve is a gross oversimplification of actual failure models for software. However, the implication is clear: software doesn't wear out, but it does deteriorate. This seeming contradiction can best be explained by considering the actual curve shown in the figure. During its life, software will undergo change (maintenance). As changes are made, it is likely that some new defects will be introduced, causing the failure rate curve to spike.

Before the curve can return to the original steady-state failure rate, another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level begins to rise: the software is deteriorating due to change.

Another aspect of wear illustrates the difference between hardware and software. When a hardware component wears out, it is replaced by a spare part. There are no software spare parts. Every software failure indicates an error in design or in the process through which design was translated into machine-executable code. Therefore, software maintenance involves considerably more complexity than hardware maintenance.

ii. Software is engineered & not manufactured: Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities are dependent on people, but the relationship between people applied and work accomplished is entirely different. Both activities require the construction of a "product", but the approaches are different. Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.

When computer software succeeds, when it meets the needs of the people who use it, when it performs flawlessly over a long period of time, when it is easy to modify and even easier to use, it can and does change things for the better. But when software fails, when its users are dissatisfied, when it is error prone, when it is difficult to change and even harder to use, bad things can and do happen. We all want to build software that makes things better, avoiding the bad things that lurk in the shadow of failed efforts. To succeed, we need discipline when software is designed and built. We need an engineering approach.

The role of computer software has undergone significant change over a time span of little more than 50 years. Dramatic improvements in hardware performance, profound changes in computing architectures, vast increases in memory and storage capacity, and a wide variety of exotic input and output options have all precipitated more sophisticated and complex computer-based systems. Sophistication and complexity can produce dazzling results when a system succeeds, but they can also pose huge problems for those who must build complex systems. The lone programmer of an earlier era has been replaced by a team of software specialists, each focusing on one part of the technology required to deliver a complex application. And yet, the same questions asked of the lone programmer are being asked when modern computer-based systems are built:

1. Why does it take so long to get software finished?
2. Why are development costs so high?
3. Why can't we find all the errors before we give the software to customers?
4. Why do we continue to have difficulty in measuring progress as software is being developed?

These, and many other questions, are a manifestation of the concern about software and the manner in which it is developed, a concern that has led to the adoption of software engineering practice.

Software is pervasive, and yet many people in positions of responsibility have little or no real understanding of what it really is, how it is built, or what it means to the institutions that they control. More importantly, they have little appreciation of the dangers and opportunities that software offers.

Q5. Explain the Different types of Software Measurement Techniques.

Answer: Most estimating methodologies are predicated on analogous software programs. Expert opinion is based on experience from similar programs; parametric models stratify internal databases to simulate environments from many analogous programs; engineering builds reference similar experience at the unit level; and cost estimating relationships (like parametric models) regress algorithms from several analogous programs. Deciding which of these methodologies (or combination of methodologies) is the most appropriate for your program usually depends on the availability of data, which, in turn, depends on where you are in the life cycle or your scope definition.

Analogies: Cost and schedule are determined based on data from completed similar efforts. When applying this method, it is often difficult to find analogous efforts at the total system level. It may be possible, however, to find analogous efforts at the subsystem or lower level: computer software configuration item/computer software component/computer software unit (CSCI/CSC/CSU). Furthermore, you may be able to find completed efforts that are more or less similar in complexity. If this is the case, a scaling factor may be applied based on expert opinion (e.g., CSCI-x is 80% as complex). After an analogous effort has been found, the associated data need to be assessed. It is preferable to use effort rather than cost data; however, if only cost data are available, these costs must be normalized to the same base year as your effort using current and appropriate inflation indices. As with all methods, the quality of the estimate is directly proportional to the credibility of the data.

Expert (engineering) opinion: Cost and schedule are estimated by determining the required effort based on input from personnel with extensive experience on similar programs. Due to the inherent subjectivity of this method, it is especially important that input from several independent sources be used. It is also important to request only effort data rather than cost data, as cost estimation is usually out of the realm of engineering expertise (and probably dependent on non-similar contracting situations). This method, with the exception of rough order-of-magnitude estimates, is rarely used as a primary methodology alone. Expert opinion is used to estimate lower-level, low-cost pieces of a larger cost element when a labor-intensive cost estimate is not feasible.

Parametric models: The most commonly used technology for software estimation is parametric models, a variety of which are available from both commercial and government sources. The estimates produced by the models are repeatable, facilitating sensitivity and domain analysis. The models generate estimates through statistical formulas that relate a dependent variable (e.g., cost, schedule, resources) to one or more independent variables. Independent variables are called cost drivers because any change in their value results in a change in the cost, schedule, or resource estimate.
The models also address both the development environment (e.g., development team skills/experience, process maturity, tools, complexity, size, domain, etc.) and the operational environment (how the software will be used), as well as software characteristics. The environmental factors, used to calculate cost (manpower/effort), schedule, and resources (people, hardware, tools, etc.), are often the basis of comparison among historical programs, and can be used to assess ongoing program progress.
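As a sketch of how such a statistical formula is obtained, the code below fits the common power-law form effort = a * size^b to a set of historical projects by least squares on the logarithms. The historical data points are invented for illustration:

```python
import math

# Hypothetical historical projects: (size in KLOC, effort in person-months).
HISTORY = [(10, 26), (25, 70), (50, 155), (80, 250), (120, 400)]

def fit_power_law(history: list[tuple[float, float]]) -> tuple[float, float]:
    """Least-squares fit of log(effort) = log(a) + b*log(size)."""
    xs = [math.log(kloc) for kloc, _ in history]
    ys = [math.log(pm) for _, pm in history]
    n = len(history)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

a, b = fit_power_law(HISTORY)
print(f"effort ~= {a:.2f} * KLOC^{b:.2f}")            # the fitted relationship
print(f"60 KLOC estimate: {a * 60 ** b:.0f} person-months")
```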

Q6. Write a Note on Spiral Model.

Answer: The spiral life cycle model is a type of iterative software development model which is generally implemented in high-risk projects. It was first proposed by Boehm. In this system development method, we combine the features of both the waterfall model and the prototype model. In the spiral model all the activities can be arranged in the form of a spiral. Each loop in the spiral represents a development phase (and we can have any number of loops according to the project). Each loop has four sections, or quadrants:

1. Determine the objectives, alternatives and constraints. We try to understand the product objectives, alternatives in design, and the constraints imposed because of cost, technology, schedule, etc.
2. Risk analysis and evaluation of alternatives. Here we try to find which other approaches can be implemented to satisfy the identified constraints. Operational and technical issues are addressed here. Risk mitigation is the focus of this phase, and the evaluation of all these factors determines future action.
3. Execution of that phase of development. In this phase we develop the planned product; testing is also done here. For the development work, a waterfall or incremental approach can be implemented.
4. Planning the next phase. Here we review the progress and judge it considering all parameters. Issues which need to be resolved are identified in this phase and the necessary steps are taken.

Subsequent loops of the spiral model involve similar phases. Analysis and engineering efforts are applied in this model. Large, expensive or complicated projects use this type of life cycle. If at any point of time one feels that the risk involved in the project is much greater than anticipated, one can abort it. Reviews at different phases can be done by an in-house person or by an external client.

Figure: Spiral Model Diagram
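As a toy illustration of the loop structure described above, the sketch below drives the four quadrants until the loops are exhausted or the assessed risk becomes unacceptable. The `Project` class and its random risk score are hypothetical stand-ins, not part of any real method:

```python
import random

class Project:
    """Hypothetical stand-in supplying the four quadrant activities."""
    def determine_objectives(self, loop):
        return f"objectives for loop {loop}"
    def analyze_risks(self, objectives):
        return random.random()            # stand-in risk score in [0, 1)
    def develop_and_test(self, objectives):
        return f"increment built from {objectives}"
    def plan_next_iteration(self, product):
        pass

def spiral(project, max_loops=4, risk_threshold=0.9):
    """Each loop passes the four quadrants; excessive risk aborts the project."""
    product = None
    for loop in range(1, max_loops + 1):
        objectives = project.determine_objectives(loop)   # quadrant 1
        risk = project.analyze_risks(objectives)          # quadrant 2
        if risk > risk_threshold:
            print(f"loop {loop}: risk {risk:.2f} too high, aborting")
            return product
        product = project.develop_and_test(objectives)    # quadrant 3
        project.plan_next_iteration(product)              # quadrant 4
    return product

print(spiral(Project()))
```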
