
CONTENTS

1. SOFTWARE ENGINEERING
2. DIFFERENCE BETWEEN SOFTWARE AND SOFTWARE ENGINEERING
   Module
   Code
   Design/Architecture
3. HISTORY
4. PROFESSION
5. EMPLOYMENT
6. CERTIFICATION
7. IMPACT OF GLOBALIZATION
8. EDUCATION
9. RELATED DISCIPLINES
   Computer science
   Project management
   Systems engineering
10. SOFTWARE DEVELOPMENT PROCESS
11. Overview
12. Software development activities
    Requirements analysis
    Specification
    Architecture
    Design, implementation and testing
    Deployment and maintenance
13. Models
    Iterative processes
    Agile software development
    XP: Extreme Programming
    Waterfall processes
    Other models
14. Capability Maturity Model Integration
    ISO 9000
    ISO 15504
    Six sigma
15. Test Driven Development
16. Formal methods



SOFTWARE ENGINEERING: Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. The term software engineering first appeared at the 1968 NATO Software Engineering Conference and was meant to provoke thought regarding the software crisis of the time. Since then, it has continued as a profession and field of study dedicated to creating software that is of higher quality, cheaper, maintainable, and quicker to build. Since the field is still relatively young compared to its sister fields of engineering, there is still much work and debate around what software engineering actually is, and whether it deserves the title of engineering. It has grown organically out of the limitations of viewing software as just programming. Software development is a term sometimes preferred by practitioners in the industry who view software engineering as too heavy-handed and constrictive for the malleable process of creating software. Yet, in spite of its youth as a profession, the field's future looks bright: Money Magazine and Salary.com rated software engineering as the best job in America in 2006.

DIFFERENCE BETWEEN SOFTWARE AND SOFTWARE ENGINEERING:-

Software is a set of instructions and code that is read and executed by the computer. Software engineering, in simple terms, means creating software.

The important characteristics of software are: Module, Code, and Design/Architecture.

Module:
A programmer (a person who creates software), or the programming language used, divides a piece of software into several smaller sub-programs. These sub-programs are called MODULES.
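As a minimal illustration (the file and function names below are invented for this example, not taken from the text), a small Python sketch of how a program can be split into modules that cooperate through small interfaces:

    # A sketch: in a real project each section below would live in its own
    # file (module), e.g. billing.py and report.py, and be imported where needed.

    # --- billing "module": price calculations ---------------------------------
    def total_price(unit_price, quantity, tax_rate=0.1):
        """Return the price of an order including tax."""
        return unit_price * quantity * (1 + tax_rate)

    # --- report "module": output formatting -----------------------------------
    def print_invoice(unit_price, quantity):
        print("Total:", round(total_price(unit_price, quantity), 2))

    print_invoice(19.99, 3)   # the two parts cooperate through a small interface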

Code: To create software, code has to be written in a specific programming language (the language used to write software) and then compiled by a compiler. This code carries out all the functions of the software.

Design/Architecture: Before creating any software, the programmer must first design its overall composition. This is known as software design/architecture.

HISTORY: When the modern digital computer first appeared in 1941, the instructions to make it operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture" or von Neumann architecture. Thus the first division between "hardware" and "software" began, with abstraction being used to deal with the complexity of computing. Programming languages started to appear in the 1950s, and this was another major step in abstraction. Major languages such as Fortran, Algol, and Cobol were released in the late 1950s to deal with scientific, algorithmic, and business problems respectively. E. W. Dijkstra wrote his seminal paper, "Go To Statement Considered Harmful" [4], in 1968, and David Parnas introduced the key concepts of modularity and information hiding in 1972 [5] to help programmers deal with the ever-increasing complexity of software systems. A software system for managing the hardware, called an operating system, was also introduced, most notably Unix in 1969. In 1967, the Simula language introduced the object-oriented programming paradigm.

These advances in software were met with more advances in computer hardware. In the mid-1970s, the microcomputer was introduced, making it economical for hobbyists to obtain a computer and write software for it. This in turn led to the now famous Personal Computer (PC) and Microsoft Windows. The Software Development Life Cycle, or SDLC, was also starting to appear as a consensus for centralized construction of software in the mid-1980s. The late 1970s and early 1980s saw the introduction of several new Simula-inspired object-oriented programming languages, including C++, Smalltalk, and Objective-C. Open-source software started to appear in the early 1990s in the form of Linux and other software, introducing the "bazaar" or decentralized style of constructing software [6]. Then the Internet and World Wide Web hit in the mid-1990s, changing the engineering of software once again. Distributed systems gained sway as a way to design systems, and the Java programming language was introduced as another step in abstraction, having its own virtual machine. Programmers collaborated and wrote the Agile Manifesto, which favored more lightweight processes to create cheaper and more timely software. The current definition of software engineering is still being debated by practitioners today as they struggle to come up with ways to produce software that is "cheaper, bigger, quicker".

PROFESSION
While some areas, such as Ontario, Canada [7], license software engineers, most places in the world have no laws regarding the profession of software engineer. Yet there are some guides from the IEEE Computer Society and the ACM, the two main professional organizations of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge - 2004 Version, or SWEBOK, defines the field and describes the knowledge the practicing software engineer should have. There is also an IEEE "Software Engineering Code of Ethics" [8]. In addition, there is a Software and Systems Engineering Vocabulary (SEVOCAB) [9], published online by the IEEE Computer Society. In the UK, the British Computer Society licenses software engineers, and members of the society can also become Chartered Engineers (CEng); however, there is no legal requirement to have these qualifications.

EMPLOYMENT
In 2004, the U.S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; in the same time period there were some 1.4 million practitioners employed in the U.S. in all other engineering disciplines combined.[10] Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and as a result most software engineers hold computer science degrees.[11] Most software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations; some work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process, while others require software engineers to do many or all of them. In large projects, people may specialize in only one role; in small projects, people may fill several or all roles at the same time. Specializations include, in industry, analysts, architects, developers, testers, technical support, and managers, and, in academia, educators and researchers. There is considerable debate over the future employment prospects for software engineers and other IT professionals. For example, an online futures market called the "ITJOBS Future of IT Jobs in America" attempts to answer whether there will be more IT jobs, including software engineers, in 2012 than there were in 2002.

CERTIFICATION

Professional certification of software engineers is a contentious issue. Some see it as a tool to improve professional practice: "The only purpose of licensing software engineers is to protect the public." The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. The ACM examined the possibility of professional certification of software engineers in the late 1990s, but eventually decided that such certification was inappropriate for the professional industrial practice of software engineering. As of 2006, the IEEE had certified over 575 software professionals.[15] In Canada, the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP). The Software Engineering Institute offers certification on specific topics such as security, process improvement, and software architecture. Most certification programs in the IT industry are oriented toward specific technologies and are managed by the vendors of these technologies. These certification programs are tailored to the institutions that would employ people who use these technologies.

IMPACT OF GLOBALIZATION

Many students in the developed world have avoided degrees related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers. Although government statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected. Often one is expected to start out as a computer programmer before being promoted to software engineer, so the career path to software engineering may be rough, especially during recessions. Some career counselors suggest a student also focus on "people skills" and business skills rather than purely technical skills, because such "soft skills" are allegedly more difficult to offshore. It is the quasi-management aspects of software engineering that appear to have kept it from being affected by globalization.

EDUCATION
Knowledge of programming is the main prerequisite to becoming a software engineer, but it is not sufficient. Many software engineers have degrees in computer science due to the lack of software engineering programs in higher education. However, this has started to change with the introduction of new software engineering degrees, especially in post-graduate education. A standard international curriculum for undergraduate software engineering degrees was defined by the CCSE. In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program in software engineering in the world. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers. In 2004, the IEEE Computer Society produced the SWEBOK, which has become an ISO standard describing the body of knowledge covered by a software engineer.

SUB-DISCIPLINES

While Grace Hopper was working on the Harvard Mark II computer at Harvard University, her associates discovered a moth stuck in a relay, impeding its operation, whereupon she remarked that they were "debugging" the system; this popularized the term software bug. Software engineering can be divided into the following subdisciplines:

Software requirements: The elicitation, analysis, specification, and validation of requirements for software.
Software design: The design of software is usually done with Computer-Aided Software Engineering (CASE) tools and uses standards for the format, such as the Unified Modeling Language (UML).
Software development: The construction of software through the use of programming languages.
Software testing.
Software maintenance: Software systems often have problems and need enhancements for a long time after they are first completed. This subfield deals with those problems.
Software configuration management: Since software systems are very complex, their configuration (such as versioning and source control) has to be managed in a standardized and structured way.
Software engineering management: The management of software systems borrows heavily from project management, but there are nuances encountered in software not seen in other management disciplines.
Software development process: The process of building software is hotly debated among practitioners, with the main paradigms being agile and waterfall.
Software engineering tools: see Computer-Aided Software Engineering.
Software quality.
Software localisation: a branch of the language industry.

RELATED DISCIPLINES

Software engineering is related to the disciplines of computer science, project management, and systems engineering.

Computer science:-

Software engineering is considered a subfield of computer science by many academics. Many of the foundations of software engineering come from computer science.

Project management:The building of a software system is usually considered a project and the management of it borrows many principles from the field of Project management.

Systems engineering:Systems engineers have been dealing with the complexity of large systems for many decades and their knowledge is applied to many software engineering problems.

SOFTWARE DEVELOPMENT PROCESS:-

A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.

Overview
A large and growing body of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts. The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO 12207. A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software; others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.

Software development activities

The activities of the software development process are often represented in the waterfall model; there are several other models for representing this process.

Requirements analysis: The most important task in creating a software product is extracting the requirements, or requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not of what the software should do. Incomplete, ambiguous, or even contradictory requirements are recognized by skilled and experienced software engineers at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect. Once the general requirements are gleaned from the client, the scope of the development should be determined and clearly stated. This is often captured in a scope document. Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document, so that if there are ever disputes, any ambiguity about what was promised to the client can be clarified.

Domain analysis is often the first step in attempting to design a new piece of software, whether it be an addition to existing software, a new application, a new subsystem, or a whole new system. Assuming that the developers (including the analysts) are not sufficiently knowledgeable in the subject area of the new software, the first task is to investigate the so-called "domain" of the software. The more knowledgeable they already are about the domain, the less work is required. Another objective of this work is to enable the analysts, who will later try to elicit and gather the requirements from the area experts, to speak with them in the domain's own terminology, facilitating a better understanding of what is being said by these experts. If the analyst does not use the proper terminology, it is likely that they will not be taken seriously, so this phase is an important prelude to extracting and gathering the requirements. If an analyst has not done the appropriate work, confusion may ensue: "I know you believe you understood what you think I said, but I am not sure you realize what you heard is not what I meant."[1]

Specification
Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable. A good way to determine whether the specifications are sufficiently precise is to have a third party review the documents making sure that the requirements and Use Cases are logically sound.
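As a small, hedged illustration (the function name and bounds below are invented, not taken from the text), one common way to make a specification precise is to state an operation's preconditions and postconditions; the Python sketch below expresses such a contract with runtime assertions:

    def withdraw(balance, amount):
        """Specification: withdraw `amount` from an account holding `balance`.

        Precondition:  0 < amount <= balance
        Postcondition: the result equals balance - amount and is never negative
        """
        assert 0 < amount <= balance, "precondition violated"
        result = balance - amount
        assert result == balance - amount and result >= 0, "postcondition violated"
        return result

    print(withdraw(100, 30))   # 70
    # withdraw(100, 200)       # would raise AssertionError: precondition violated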

Architecture
The architecture of a software system or software architecture refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware or the host operating system.
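As an illustrative sketch (the interface and class names here are hypothetical), an architecture often pins down the interfaces between components so that application code does not depend on any one implementation, which is one way future requirements and alternative back ends can be accommodated:

    from abc import ABC, abstractmethod

    class MessageStore(ABC):
        """Architectural interface: the rest of the system depends only on this."""
        @abstractmethod
        def save(self, message: str) -> None: ...

    class InMemoryStore(MessageStore):
        """One concrete component; a database-backed store could replace it later."""
        def __init__(self):
            self.messages = []
        def save(self, message: str) -> None:
            self.messages.append(message)

    def log_event(store: MessageStore, event: str) -> None:
        # Application code is written against the interface, not the implementation.
        store.save(event)

    store = InMemoryStore()
    log_event(store, "system started")
    print(store.messages)      # ['system started']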

Design, implementation and testing


Implementation is the part of the process where software engineers actually program the code for the project. Software testing is an integral and important part of the software development process. This part of the process ensures that bugs are recognized as early as possible. Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the authoring of an API, be it external or internal.
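As a brief, hypothetical example of documenting the internal design of a piece of code while it is being written (the function and its behaviour are invented for illustration only):

    def parse_price(text: str) -> float:
        """Convert a user-supplied price string such as "$1,234.50" to a float.

        Design notes for future maintenance:
        - Currency symbols and thousands separators are stripped before parsing.
        - A ValueError from float() is allowed to propagate so that callers
          decide how to report bad input.
        """
        cleaned = text.replace("$", "").replace(",", "").strip()
        return float(cleaned)

    # A small check run alongside development catches regressions early.
    assert parse_price("$1,234.50") == 1234.50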

Deployment and maintenance

Deployment starts after the code has been appropriately tested, approved for release, and sold or otherwise distributed into a production environment. Training and support are important: a large percentage of software projects fail because developers do not realize that it does not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are often resistant to change and avoid venturing into an unfamiliar area, so as part of the deployment phase it is very important to have training classes for new clients of the software. Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development. It may be necessary to add code that does not fit the original design in order to correct an unforeseen problem, or a customer may request more functionality and code can be added to accommodate the request. It is during this phase that customer calls come in, and you see whether your testing was extensive enough to uncover the problems before customers do.

MODELS
Iterative processes
Iterative development [2] prescribes the construction of initially small but ever larger portions of a software project to help all those involved uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer the potential of reaching the design goals of a customer who does not know how to define what they want.

Agile software development


Agile software development processes are built on the foundation of iterative development. To that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software. Interestingly, surveys have shown the potential for significant efficiency gains over the waterfall method. For example, a survey published in August 2006 by VersionOne and the Agile Alliance, based on polling more than 700 companies, claims significant benefits for an Agile approach; the survey was repeated in August 2007 with about 1,700 respondents.

XP: Extreme Programming

Extreme Programming (XP) is the best-known iterative process. In XP, the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass, and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature - merging design and code - is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of which is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.

Waterfall processes
The waterfall model shows a process where developers are to follow these steps in order:

1. Requirements specification (also known as verification or analysis)
2. Design
3. Construction (also known as implementation or coding)
4. Integration
5. Testing and debugging (also known as validation)
6. Installation (also known as deployment)
7. Maintenance

After each step is finished, the process proceeds to the next step, just as builders don't revise the foundation of a house after the framing has been erected. There is a misconception that the process has no provision for correcting errors in early steps (for example, in the requirements); in fact, this is where the domain of requirements management comes in, which includes change control. The counter-argument, made by critics of the process, is the significantly increased cost of correcting problems through the introduction of iterations. This is also the factor that extends delivery time and makes the process increasingly unpopular even in high-risk projects. Nevertheless, the approach is still used in high-risk projects, particularly large defense contracts. The problems in waterfall do not arise from "immature engineering practices, particularly in requirements analysis and requirements management." Studies of the failure rate of the DOD-STD-2167 specification, which enforced waterfall, have shown that the more closely a project follows its process, specifically in up-front requirements gathering, the more likely the project is to release features that are not used in their current form. Often the supposed stages are part of a review between customer and supplier; the supplier can, in fact, develop at risk and evolve the design, but must sell off the design at a key milestone called Critical Design Review (CDR). This shifts engineering burdens from engineers to customers, who may have other skills.

OTHER MODELS

Capability Maturity Model Integration


The Capability Maturity Model Integration (CMMI) is one of the leading models and is based on best practice. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or of the software produced. CMMI has replaced CMM.

ISO 9000
ISO 9000 describes standards for formally organizing processes with documentation.

ISO 15504
ISO 15504, also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMMI. It models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team.

Six sigma
Six Sigma is a methodology to manage process variations that uses data and statistical analysis to measure and improve a company's operational performance. It works by identifying and eliminating defects in manufacturing and service-related processes. The maximum permissible defect rate is 3.4 defects per million opportunities. However, Six Sigma is manufacturing-oriented, and further research is needed on its relevance to software development.
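As a worked check (not from the original text, and relying on the conventional 1.5-sigma long-term shift assumed in Six Sigma practice), the often-quoted 3.4 defects per million corresponds to the upper tail of a normal distribution at 4.5 standard deviations:

    import math

    def upper_tail(z):
        """Probability that a standard normal variable exceeds z."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    # Six Sigma quality with the conventional 1.5-sigma long-term shift:
    dpmo = upper_tail(6.0 - 1.5) * 1_000_000
    print(round(dpmo, 1))   # about 3.4 defects per million opportunities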

Test Driven Development


Test Driven Development (TDD) is a useful output of the Agile camp, but some suggest that it raises a conundrum. TDD requires that a unit test be written for a class before the class is written. It might be thought, then, that the class first has to be "discovered" and then defined in sufficient detail to allow the write-test-once-and-code-until-class-passes model that TDD actually uses. This would actually run counter to Agile approaches, particularly (so-called) Agile Modeling, where developers are still encouraged to code early, with light design. However, to get the claimed benefits of TDD, a full design down to classes and responsibilities (captured using, for example, Design by Contract) is not necessary. This would count towards iterative development with a design locked down, but not iterative design, as heavy refactoring and re-engineering might negate the usefulness of TDD.
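A minimal sketch of the test-first cycle described above, using Python's unittest module (the Stack class and its methods are hypothetical, chosen only for illustration):

    import unittest

    # Step 1: the test is written first; it fails until Stack exists and behaves.
    class TestStack(unittest.TestCase):
        def test_push_then_pop_returns_last_item(self):
            s = Stack()
            s.push(42)
            self.assertEqual(s.pop(), 42)

    # Step 2: just enough code is written to make the test pass.
    class Stack:
        def __init__(self):
            self._items = []
        def push(self, item):
            self._items.append(item)
        def pop(self):
            return self._items.pop()

    if __name__ == "__main__":
        unittest.main()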

Formal methods
Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Examples of formal methods include the B-Method, Petri nets, automated theorem proving, RAISE, and VDM. Various formal specification notations are available, such as the Z notation. More generally, automata theory can be used to build up and validate application behavior by designing a system of finite state machines. Finite state machine (FSM) based methodologies allow executable software specification and bypassing of conventional coding (see virtual finite state machine or event driven finite state machine).

Formal methods are most likely to be applied in avionics software, particularly where the software is safety critical. Software safety assurance standards, such as DO-178B, demand formal methods at the highest level of categorization (Level A). Formalization of software development is creeping in elsewhere, with the application of the Object Constraint Language (and specializations such as the Java Modeling Language) and especially with Model-driven architecture allowing execution of designs, if not specifications.

Another emerging trend in software development is to write a specification in some form of logic (usually a variation of first-order logic), and then to directly execute the logic as though it were a program. The OWL language, based on Description Logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and executing the logic directly. Examples are Attempto Controlled English and Internet Business Logic, which does not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.

The Government Accountability Office, in a 2003 report on one of the Federal Aviation Administration's air traffic control modernization programs [5], recommends following the agency's guidance for managing major acquisition systems by establishing, maintaining, and controlling an accurate, valid, and current performance measurement baseline, which would include negotiating all authorized, unpriced work within 3 months; conducting an integrated baseline review of any major contract modifications within 6 months; and preparing a rigorous life-cycle cost estimate, including a risk assessment, in accordance with the Acquisition System Toolset's guidance, identifying the level of uncertainty inherent in the estimate.
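As an illustrative sketch of FSM-based executable specification (the states and events below are invented for the example), behaviour is given by a transition table rather than by conventional control-flow code, and anything not listed in the table is rejected:

    # Executable specification of a simple door controller as a finite state machine.
    TRANSITIONS = {
        ("closed", "open_cmd"):   "open",
        ("open",   "close_cmd"):  "closed",
        ("closed", "lock_cmd"):   "locked",
        ("locked", "unlock_cmd"): "closed",
    }

    def run(events, state="closed"):
        """Drive the machine with a sequence of events; reject unspecified behaviour."""
        for event in events:
            if (state, event) not in TRANSITIONS:
                raise ValueError(f"event {event!r} not allowed in state {state!r}")
            state = TRANSITIONS[(state, event)]
        return state

    print(run(["open_cmd", "close_cmd", "lock_cmd"]))   # locked
    # run(["lock_cmd", "open_cmd"]) would raise an error: opening a locked door
    # is not part of the specification.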

THE NECESSITY FOR SOFTWARE ENGINEERING

To understand the necessity for software engineering, we must pause briefly to look back at the recent history of computing. This history will help us to understand the problems that started to become obvious in the late sixties and early seventies, and the solutions that have led to the

creation of the field of software engineering. These problems were referred to by some as "the software crisis", so named for the symptoms of the problem. The situation might also be called "the complexity barrier", so named for the primary cause of the problems. Some refer to the software crisis in the past tense. The crisis is far from over, but thanks to the development of many new techniques that are now included under the title of software engineering, we have made and are continuing to make progress.

In the early days of computing the primary concern was with building or acquiring the hardware. Software was almost expected to take care of itself. The consensus held that hardware is "hard" to change, while software is "soft", or easy to change. Accordingly, most people in the industry carefully planned hardware development but gave considerably less forethought to the software. If the software didn't work, they believed, it would be easy enough to change it until it did work. In that case, why make the effort to plan? The cost of software amounted to such a small fraction of the cost of the hardware that no one considered it very important to manage its development. Everyone, however, saw the importance of producing programs that were efficient and ran fast, because this saved time on the expensive hardware. People's time was assumed to be well spent if it saved machine time; making the people process efficient received little priority.

This approach proved satisfactory in the early days of computing, when the software was simple. However, as computing matured, programs became more complex and projects grew larger. Whereas programs had previously been routinely specified, written, operated, and maintained all by the same person, programs began to be developed by teams of programmers to meet someone else's expectations. Individual effort gave way to team effort. Communication and coordination, which once went on within the head of one person, had to occur between the heads of many persons, making the whole process very much more complicated. As a result, communication, management, planning and documentation became critical.

Consider this analogy: a carpenter might work alone to build a simple house for himself or herself without more than a general concept of a plan. He or she could work things out or make adjustments as the work progressed. That's how early programs were written. But if the home is more elaborate, or if it is built for someone else, the carpenter has to plan more carefully how the house is to be built. Plans need to be reviewed with the future owner before construction starts. And if the house is to be built by many carpenters, the whole project certainly has to be planned before work starts so that as one carpenter builds one part of the house, another is not building the other side of a different house. Scheduling becomes a key element, so that the cement contractors pour the basement walls before the carpenters start the framing. As the house becomes more complex and more people's work has to be coordinated, blueprints and management plans are required.

As programs became more complex, the early methods used to make blueprints (flowcharts) were no longer satisfactory to represent this greater complexity. And thus it became difficult for one person who needed a program written to convey to another person, the programmer, just what was wanted, or for programmers to convey to each other what they were doing. In fact, without better methods of representation it became difficult for even one programmer to keep track of what he or she was doing.
