SDLC stands for Software Development Life Cycle. It is used to develop software systematically.
Information gathering - In this phase, a Business Analyst gathers all the information from the customer
and prepares the Business Requirement Specification (BRS), also called the Customer Requirement
Specification (CRS) or User Requirement Specification (URS) document.
Analysis - The features and functions that need to be included in the project are determined. A Senior
Business Analyst prepares the System Requirement Specification (SRS) document.
Design - This is done by the Chief Architect; the HLD and LLD are prepared.
HLD (High-Level Design): Defines the overall hierarchy of the functions.
LLD (Low-Level Design): Defines the internal logic of the project.
Coding - Based on the design documents, small modules are coded and combined to form an executable build.
Once unit testing and white box testing are completed, the software is handed over for testing to be
verified and validated.
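As a minimal sketch of the unit testing mentioned here (the module and its tests are hypothetical, invented for illustration), a programmer checks one small module in isolation before it is combined into the build:

```python
# Hypothetical module under unit test: a simple discount calculator.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Unit tests: each assertion checks one behaviour of the module alone.
def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0   # normal case
    assert apply_discount(99.99, 0) == 99.99    # boundary: no discount
    try:
        apply_discount(50.0, 150)
        assert False, "expected ValueError for an out-of-range percent"
    except ValueError:
        pass                                    # invalid input rejected

test_apply_discount()
```

Only after such module-level checks pass is the module combined with others into the build that goes to the testing team.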
Testing - Test activity actually starts as early as the information gathering stage. The testing phase
itself is carried out by Test Engineers/the QC team; functional, integration, and system testing are performed.
Implementation & Maintenance - Engineers, in coordination with the developers, install/implement the
developed application. The testing/maintenance team suggests changes to the software, if necessary,
based on the results of User Acceptance Testing (UAT).
To develop software using the SDLC process, CMM standards have to be followed.
The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's
software development process.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the
International Organization for Standardization (ISO).
CMM defines 5 levels of process maturity based on certain Key Process Areas (KPAs).
CMM Levels
Level 5 – Optimizing (< 1%)
-- process change management
-- technology change management
-- defect prevention
Level 4 – Managed (< 5%)
-- software quality management
-- quantitative process management
Level 3 – Defined (< 10%)
-- peer reviews
-- intergroup coordination
-- software product engineering
-- integrated software management
-- training program
-- organization process definition
-- organization process focus
Level 2 – Repeatable (~ 15%)
-- software configuration management
-- software quality assurance
-- software project tracking and oversight
-- software project planning
-- requirements management
Level 1 – Initial (~ 70%)
SDLC Model is a framework that describes the activities performed at each stage of a software development
project.
SDLC has standard phases which may be modified as per the chosen model.
Fish Model
Waterfall Model
V-Model or Y-Model
Prototype Model
Incremental Model
Spiral Model
Hybrid Model
RAD (Rapid Application Development) Model
PET (Process Experts Tools and Techniques) Model, which means every company has its own model.
(Fig: SDLC Phases)
The Fish Model includes all the necessary stages of the SDLC.
"Monitoring and measuring the strength of the development process is called Software Quality
Assurance (SQA)." It is a process-based concept.
After completion of every development stage, organizations conduct testing; this is called QC (Quality Control).
The waterfall model is a sequential development process, in which development is seen as flowing
steadily downwards (like a waterfall) through the phases of requirements & analysis, design, coding,
testing (validation), integration, implementation, and maintenance.
Once a phase of development is completed, the development proceeds to the next phase and there
is no turning back.
A schedule can be set with deadlines for each stage of development, so that a product can proceed
through the stages and, theoretically, be delivered on time.
V-Shaped Strengths
Emphasizes planning for verification and validation of the product in early stages
Each deliverable must be testable
Project management can track progress by milestones
Easy to use
V-Shaped Weaknesses:
Does not easily handle concurrent events
Does not handle iterations or phases
Does not easily handle dynamic changes in requirements
Does not contain risk analysis activities
(Fig: V or Y Model; the figure is self-explanatory.)
Prototyping Weaknesses:
Bad reputation for “quick-and-dirty” methods
Overall maintainability may be overlooked
The customer may want the prototype delivered.
Process may continue forever (scope creep)
When to use:
Requirements are unstable or have to be clarified
Develop user interfaces
Short-lived demonstrations
New, original development.
The incremental model performs the waterfall in overlapping sections attempting to compensate for
the length of waterfall model projects by producing usable functionality earlier. This may involve a
complete upfront set of requirements that are implemented in a series of small projects.
A project using the incremental model may start with general objectives. Then some portion of these
objectives is defined as requirements and is implemented, followed by the next portion of the
objectives until all objectives are implemented.
Because some modules will be completed long before others, well-defined interfaces are required.
Also, formal reviews and audits are more difficult to implement on increments than on a complete
system.
In a pure Top-down SDLC model, high-level requirements are documented, and programs are built to
meet these requirements. Then, the next level is designed and built.
In the Bottom-up SDLC model, the lowest level of functionality is designed and programmed first,
and finally all the pieces are integrated together into the finished application. This means that,
generally, the most complex components are developed and tested first.
The Hybrid SDLC model combines the top-down and bottom-up models.
Generally speaking, most of the time different SDLC models are combined together to create a
hybrid-methodology life cycle.
Even if you don't have the resources of a developer to put together a prototype, you can still model a
system using PowerPoint or even whiteboards. (This is probably where use cases originated.)
Some of the major flavors of RAD include Agile, Extreme Programming (XP), Joint Application
Development (JAD), Lean software development (LD), and SCRUM.
RAD Strengths:
Reduced cycle time and improved productivity
Fewer people means lower costs
Time-box approach mitigates cost and schedule risk
Uses modeling concepts to capture information about business, data, and processes.
RAD Weaknesses:
Accelerated development process must give quick responses to the user.
Risk of never achieving closure
Hard to use with legacy systems
Requires a system that can be modularized
Developers and customers must be committed.
It is a refined form of the V-Model; it defines the mapping between the development and testing processes.
NOTE: White Box Testing is also known as Clear Box Testing or Open Box Testing. "The combination of
WBT and BBT is called Grey Box Testing."
Some of the types of Testing performed in different stages of STLC are as follows:
(These can be part of White Box Testing or Black Box Testing)
1.a. Basis Path Testing: In this, the programmers check all executable areas in the program to confirm
whether the program runs at all. After completion of basis path testing, the programmers concentrate on
the correctness of inputs and outputs using control structure testing.
1.b. Control Structure Testing: In this, the programmers verify every statement, condition, and
loop in terms of completeness and correctness of I/O (for example, by debugging).
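Control structure testing can be sketched as follows: for a small hypothetical function, inputs are chosen so that every statement, both sides of every condition, and every loop path (zero, one, and many iterations) are exercised.

```python
# Illustrative function with one condition and one loop, used to show
# control structure coverage: every branch and loop path gets a test input.
def classify_scores(scores):
    """Count passing (>= 50) and failing scores."""
    passed = failed = 0
    for s in scores:          # loop: test with 0, 1, and many items
        if s >= 50:           # condition: test both true and false
            passed += 1
        else:
            failed += 1
    return passed, failed

# Inputs chosen to cover each control structure:
assert classify_scores([]) == (0, 0)            # loop body never entered
assert classify_scores([80]) == (1, 0)          # condition true branch
assert classify_scores([30]) == (0, 1)          # condition false branch
assert classify_scores([80, 30, 50]) == (2, 1)  # multiple iterations
```

The point is the choice of inputs, not the function itself: each input row exists to force a specific statement, branch, or loop path to execute.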
1.c. Program Technique Testing: During this, the programmer measures the execution time of the
program. If the execution time is not reasonable, the programmer changes the structure of the
program without disturbing its functionality.
1.d. Mutation Testing: Mutation means a change in a program. Programmers make deliberate changes in an
already-tested program to estimate the completeness and correctness of the testing performed on it.
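The idea can be sketched in a few lines (the function, its mutant, and the test cases are all invented for illustration): a deliberate change is introduced, and a complete test suite should "kill" the mutant by failing on it.

```python
# Mutation testing sketch: a mutant is a deliberately altered copy of a
# tested program. A good test suite passes the original and fails the mutant.
def is_adult(age):
    return age >= 18          # original program

def is_adult_mutant(age):
    return age > 18           # mutant: ">=" changed to ">"

def run_suite(fn):
    """Return True if every test case passes for the given implementation."""
    cases = [(17, False), (18, True), (19, True)]
    return all(fn(age) == expected for age, expected in cases)

assert run_suite(is_adult) is True          # original passes the suite
assert run_suite(is_adult_mutant) is False  # suite "kills" the mutant
```

Note that the boundary case (18, True) is what kills this mutant; if that case were missing, the mutant would survive, revealing a gap in the test suite.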
After completion of the dependent programs' development and unit testing, the programmers interconnect
them to form a complete system, and then check the completeness and correctness of the
interconnections. This integration testing is also known as Interface Testing.
There are 4 approaches to interconnecting programs and testing those interconnections:
2.a. Top-Down Approach: In this approach, programmers interconnect the main module and the completed
sub-modules without using the under-construction sub-modules. In place of the under-construction
sub-modules, the programmers use temporary or alternative programs called Stubs. These stubs are
also known as called programs, because the main module calls them.
2.b. Bottom-Up Approach: In this approach, programmers connect the completed sub-modules without the
main module, which is still under construction. In place of the main module, the programmers use a
temporary program called a Driver or calling program.
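A minimal sketch of both ideas, with every module name invented for illustration: a stub stands in for an unfinished sub-module that the main module calls (top-down), while a driver is a throwaway caller that exercises a finished sub-module before the real main module exists (bottom-up).

```python
# Top-down: the main module (place_order) is ready, but the real payment
# sub-module is under construction, so a stub supplies a canned response.
def payment_stub(amount):
    """Stub: returns a fixed 'approved' result instead of real payment logic."""
    return {"status": "approved", "amount": amount}

def place_order(amount, pay=payment_stub):
    """Main module: integrates with whatever payment callable is supplied."""
    result = pay(amount)
    return "order confirmed" if result["status"] == "approved" else "order failed"

# Bottom-up: the sub-module (validate_card) is finished, but the main
# module is not, so a temporary driver calls it directly.
def validate_card(number):
    """Completed sub-module: naive length check (illustrative only)."""
    return len(number) == 16

def driver():
    """Driver: temporary 'calling program' standing in for the main module."""
    return validate_card("4111111111111111")

assert place_order(49.99) == "order confirmed"
assert driver() is True
```

When the real payment module and main module are ready, the stub and driver are discarded and replaced by the actual programs.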
2.c. Hybrid Approach: It is a combination of the Top-Down and Bottom-Up approaches. This approach is
also called the Sandwich Approach.
2.d. System Approach: In this approach, the programmers integrate all modules and their sub-modules
at once, only after the unit testing of each is complete. This approach is known as the Big Bang approach.
After completion of the integration of all required modules, the development team releases a software
build to a separate testing team in the organization. This build is also known as the Application Under
Test (AUT).
3.ii.a. Functionality Testing: During this test, test engineers validate the completeness and
correctness of every functionality. This testing is also known as Requirements Testing.
In this test, a separate testing team validates the correctness of every functionality through the
coverages below:
GUI coverage or Behavioral coverage (valid changes in properties of objects and windows in our
application build).
Error handling coverage (the prevention of wrong operations with meaningful error messages like
displaying a message before closing a file without saving it).
Input Domain coverage (the validity of i/p values in terms of size and type like while giving alphabets
to age field).
Manipulations coverage (the correctness of o/p or outcomes).
Order of functionalities (the existence of functionality w.r.t customer requirements).
Back end coverage (the impact of front end’s screen operation on back end’s table content in
corresponding functionality).
NOTE: The above coverages are applied to every functionality of the application build with the help of
black box testing techniques.
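As a sketch of the input domain and error handling coverages above (the field, its rules, and its messages are hypothetical), a validator for an "age" field must reject values of the wrong type or size with a meaningful message:

```python
# Input domain coverage sketch for a hypothetical "age" field:
# wrong type (alphabets) and wrong size (out of range) must both be caught.
def validate_age(value):
    """Return (ok, message) for an age field (illustrative rules)."""
    if not value.isdigit():
        return False, "age must contain digits only"
    age = int(value)
    if not 1 <= age <= 120:
        return False, "age must be between 1 and 120"
    return True, "ok"

assert validate_age("abc") == (False, "age must contain digits only")   # type
assert validate_age("150") == (False, "age must be between 1 and 120")  # size
assert validate_age("35") == (True, "ok")                               # valid
```

The returned message doubles as the error handling coverage: the wrong operation is prevented and the user is told why.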
3.ii.b. Sanitation Testing: It is also known as Garbage Testing. During this test, a separate testing
team detects extra functionalities present in the software build relative to the customer requirements
(for example, an extra sign-in link on the sign-in page).
3.iii.a. Recovery/Reliability Testing: During this, the test engineers validate whether the software
build can change from an abnormal state back to a normal state.
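A toy sketch of that abnormal-to-normal transition (the service class and its states are invented): the test forces a failure and then verifies that the recovery step restores the normal state.

```python
# Recovery testing sketch: simulate an abnormal state, then verify the
# build returns to normal. Service and its recovery logic are hypothetical.
class Service:
    def __init__(self):
        self.state = "normal"

    def crash(self):
        self.state = "abnormal"       # simulated failure

    def recover(self):
        # e.g. reload the last saved checkpoint (illustrative)
        self.state = "normal"

svc = Service()
svc.crash()
assert svc.state == "abnormal"        # abnormal state reached
svc.recover()
assert svc.state == "normal"          # build recovered to normal state
```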
3.iii.b. Compatibility Testing: It is also known as Portability Testing (think of a friend's game CD not
working on your system; the build should "adjust anywhere"). During this test, the test engineers
validate whether the software build is able to run on the customer's expected platforms. Platform means
the OS, compilers, browsers, and other system software.
3.iii.c. Configuration Testing: It is also known as Hardware Compatibility Testing. During this, the
testing team validates whether the software build supports different technology devices (for example,
different types of printers, different types of networks, etc.).
3.iii.d. Inter-System Testing: It is also known as End-to-End Testing. During this, the testing team
validates whether the software build co-exists with other software in order to share common resources.
EX: E-Server
3.iii.e. Installation Testing: The order is important: initiation of, during, and after installation.
3.iii.f. Data Volume Testing: It is also known as Storage Testing, Memory Testing, or Mass Testing.
During this, the testing team measures the peak volume of data the software build can handle
(e.g., hospital software).
EX: Software builds based on MS Access technology support at most 2 GB of data.
3.iii.g. Load Testing: It is also known as Performance or Scalability Testing. Load (or scale) means the
number of concurrent users operating the software at the same time. Executing the software build under
the customer's expected configuration and the customer's expected load, in order to estimate
performance, is load testing. (The inputs are the customer's expected configuration and load; the output
is performance, i.e., processing speed.)
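The concept can be sketched with threads simulating concurrent users (the operation, user count, and timings are all stand-ins; real load testing uses a tool like LoadRunner or JMeter, as noted later in these notes):

```python
import threading
import time

# Load testing sketch: N concurrent "users" each perform an operation,
# and the slowest observed response time is the performance estimate.
def user_operation(results, i):
    start = time.perf_counter()
    time.sleep(0.01)                 # stand-in for a real request
    results[i] = time.perf_counter() - start

def run_load(concurrent_users=20):
    results = [0.0] * concurrent_users
    threads = [threading.Thread(target=user_operation, args=(results, i))
               for i in range(concurrent_users)]
    for t in threads:                # start all users at (nearly) the same time
        t.start()
    for t in threads:
        t.join()
    return max(results)              # worst response time under this load

worst = run_load()
assert worst > 0                     # every simulated request took some time
```

Raising `concurrent_users` toward the customer's expected load, and watching how the worst response time grows, is the essence of the measurement.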
3.iii.h. Stress Testing: Executing the software build under the customer's expected configuration and
various load levels, in order to estimate stability or continuity, is called Stress Testing or Endurance Testing.
3.iii.i. Security Testing: It is also known as Penetration Testing. During this, the testing team
validates authorization, access control, and encryption/decryption. Authorization indicates the validity
of users of the software, like a student entering a class; access control indicates the authorities of
valid users, i.e., which operations each valid user is permitted to perform.
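The authorization and access control checks can be sketched as follows (the users, roles, and permissions are entirely made up for illustration):

```python
# Security testing sketch: authorization answers "is this a valid user?",
# access control answers "what may this valid user do?".
USERS = {"alice": "admin", "bob": "viewer"}                 # authorized users
PERMISSIONS = {"admin": {"read", "write"}, "viewer": {"read"}}

def is_authorized(user):
    """Authorization: only known users are valid."""
    return user in USERS

def can(user, action):
    """Access control: a valid user may only perform permitted actions."""
    return is_authorized(user) and action in PERMISSIONS[USERS[user]]

assert is_authorized("mallory") is False   # unknown user is rejected
assert can("bob", "read") is True          # valid user, permitted action
assert can("bob", "write") is False        # valid user, forbidden action
```

A security test deliberately tries the rejected rows (unknown users, forbidden actions) to confirm the build refuses them.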
After completion of all reasonable tests, project management concentrates on UAT to gather feedback
from real customers or model customers. Both the developers and the testers are involved in this testing.
After completion of port testing, the responsible release team conducts training sessions for end users
or customer-site people.
Monkey or Chimpanzee Testing: Due to lack of time, the testing team covers only the main activities of
the software's functionalities.
Buddy Testing: Due to lack of time, developers and testers are grouped as buddies; every buddy pair
consists of a developer and a tester, so the two processes can continue in parallel.
Exploratory Testing: In general, the testing team conducts testing with respect to the available
documents. When documentation is lacking, the testing team relies on past experience, discussions with
others, similar projects, and internet research. This style of testing is exploratory testing.
Pair Testing: Due to lack of skills, junior test engineers are paired with senior test engineers to
share their knowledge.
Debugging Testing: To estimate the efficiency of the testing team, the development team releases a
build to the testing team with known (seeded) defects.
The above ad-hoc testing styles are also known as informal testing techniques.
Summary of the case study of each testing phase/level/stage is in the table below:
1. Test Initiation
2. Test Planning
3. Test Design
4. Review Test Cases
5. Test Execution
6. Test Reporting and Closure
7. User Acceptance Testing
8. Sign Off
1. Test Initiation:
In general, the system testing process starts with test initiation or test commencement. In this stage,
the Project Manager or Test Manager selects a reasonable approach or methodology to be followed by the
separate testing team. This approach or methodology is called the Test Strategy.
2. Test Planning
After preparation of the Test Strategy document with the required details, the test-lead-category people
define the test plan in terms of what to test, how to test, when to test, and who will test.
In this stage, the test lead prepares the system test plan and then divides that plan into module test
plans.
In test planning, the test lead follows the approach below to prepare the test plans:
2.a. Testing team formation
2.b. Identify Tactical risks
2.c. Prepare Test plans
2.d. Review Test plan
3. Test Design:
After completion of the required training, the responsible test engineers concentrate on test case
preparation. Every test case defines a unique test condition to be applied to the software build. There
are 3 methods of preparing test cases, after which a review of the test cases is done.
3.a. Functional and system specification based test case design:
Most test engineers prepare test cases depending on the functional and system specifications in the SRS.
3.c. User interface or application based test case design: In general, the test engineers prepare test
cases for functional and non-functional tests depending on one of the previous two methods. To prepare
test cases for usability testing, however, test engineers depend on user-interface-based test case
design. In this method, test engineers identify the interests of the customer-site people and the user
interface conventions in the market.
NOTE: In the above test case format, the test engineers prepare a test procedure when a test case
covers an operation, and a data matrix when a test case covers an object (taking inputs).
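A data matrix for an input object can be sketched as a small table of input rows and expected outcomes (the field, its rules, and the rows are illustrative), which one test case then walks through:

```python
# Data matrix sketch: one test case (a hypothetical "password" object)
# run over a table of inputs and expected results.
def accepts_password(value):
    """Illustrative rule: 8-16 characters with at least one digit."""
    return 8 <= len(value) <= 16 and any(c.isdigit() for c in value)

data_matrix = [
    # (input value,    expected result)
    ("secret9pass",    True),    # valid size and content
    ("short1",         False),   # too short
    ("nodigitshere",   False),   # right size, no digit
]

for value, expected in data_matrix:
    assert accepts_password(value) == expected, f"failed for {value!r}"
```

Adding a row to the matrix adds a test condition without writing a new test case, which is why objects that take inputs are documented this way.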
4. Review Test Cases:
After completion of all reasonable test cases, the testing team conducts a review meeting to check
completeness and correctness. In this review meeting, the test lead relies on the factors below to
review all the test cases developed:
Requirements oriented coverage.
Testing techniques oriented coverage.
5. Test Execution:
After completion of test case design and review, the testing people concentrate on test execution. In
this stage, the testing people communicate with the development team for feature negotiations.
5.a. Build version control: After confirming the required environment, the formal review meeting members
concentrate on build version control. Under this concept, the development people assign a unique version
number to every modified build after solving defects. This version numbering system must be
understandable to the testing team.
5.b. Levels of test execution: After completion of the formal review meeting, the testing people
concentrate on finalizing the test execution levels:
Level -0 testing on Initial build.
Level-1 testing on Stable build or Working build.
Level-2 testing on Modified build.
Level-3 testing on Master build.
6. Test Reporting:
During level-1 and level-2 test execution, test engineers report mismatches to the development team as
defects. A defect is also known as an error, issue, or bug:
A problem detected by a programmer in a program is called an ERROR.
A problem detected by a tester in a build is called a DEFECT or ISSUE.
A reported defect or issue that is accepted to be resolved is called a BUG.
When reporting defects to developers, the test engineers follow a standard defect report format
(IEEE 829) and submit the bug.
If all the critical and major defects are resolved, and only low-severity, low-priority defects are
left, then the Project Owner/Manager takes the call on releasing the build to User Acceptance Testing.
7. UAT (User Acceptance Testing) on release builds:
In this level, the PM concentrates on the feedback of real or model customer-site people. There are
2 types of UAT, as follows:
7.a. Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent
test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form
of internal acceptance testing, before the software goes to beta testing.
7.b. Beta testing
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to
a limited audience outside of the programming team. The software is released to groups of people so
that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are
made available to the open public to increase the feedback field to a maximal number of future
users.
8. Sign Off
After completion of UAT and the resulting modifications, the test lead conducts a sign-off review. In
this review, the test lead gathers all the testing documents from the test engineers: Test Strategy,
system test plan and detailed test plans, test scenarios or titles, test case documents, test log,
defect reports, and final defect summary reports (defect id, description, severity, detected by, and
status (closed or deferred)).
Requirements Traceability Matrix (RTM) (req id, test case, defect, status).
*The RTM is the mapping between requirements and defects via test cases.
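The mapping can be sketched as a simple table keyed by requirement id (all ids here are invented), which makes coverage gaps easy to spot:

```python
# Requirements Traceability Matrix sketch: each requirement maps to its
# test cases and the defects found through them (ids are illustrative).
rtm = {
    "REQ-01": {"test_cases": ["TC-01", "TC-02"], "defects": ["BUG-07"]},
    "REQ-02": {"test_cases": ["TC-03"],          "defects": []},
    "REQ-03": {"test_cases": [],                 "defects": []},
}

# A requirement with no test cases is an untested gap in coverage.
untested = [req for req, row in rtm.items() if not row["test_cases"]]
assert untested == ["REQ-03"]
```

Reading the matrix in the other direction, every defect traces back through a test case to the requirement it violates, which is exactly the mapping the note describes.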
Fig: Case study by process of testing / by test deliverables.
Manual Testing vs. Automation Testing:
To save test execution time and to decrease the complexity of manual testing, engineers use test
automation. Test automation is feasible for two kinds of manual tests:
Functional testing.
Performance testing (of the non-functional tests).
"WinRunner, QTP (QuickTest Professional), Rational Robot, and SilkTest are functional testing tools."
"LoadRunner, Rational Load Test, Silk Performer, and JMeter are performance testing tools used to
automate load and stress testing."
Some organizations also use tools for test management, e.g., TestDirector, Quality Center, and Rational
Test Manager.
Fig: Comparison between the SDLC and STLC processes.