
…by Ravinder Sharma

SDLC stands for Software Development Life Cycle. It is the process used to develop software systematically.

SDLC has 6 stages:

 Information gathering - In this phase, the Business Analyst gathers all the information from the customer
and prepares the Business Requirement Specification (BRS), also called the Customer Requirement
Specification (CRS) or User Requirement Specification (URS) document.

 Analysis - The features and functions that need to be built into the project are determined. The Senior
Business Analyst prepares the System Requirement Specification (SRS) document.

 Design - This is done by the Chief Architect; the High-Level Design (HLD) and Low-Level Design (LLD) documents are prepared.
HLD: Defines the overall hierarchy of the functions.
LLD: Defines the internal logic of the modules of the project (a small sketch follows).
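
To make the HLD/LLD distinction concrete, here is a hedged sketch (a hypothetical login check, not taken from any real design document) of the kind of internal logic an LLD for a login page might specify:

    def login(user_id, password, user_store):
        # LLD-level internal logic for a login page (illustrative only):
        # validate inputs, look up the user, compare credentials.
        if not user_id or not password:
            return "error: user id and password are required"
        stored_password = user_store.get(user_id)
        if stored_password is None:
            return "error: unknown user"
        if stored_password != password:
            return "error: wrong password"
        return "welcome"

    # A hypothetical user store for the example.
    print(login("ravi", "secret", {"ravi": "secret"}))  # -> welcome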

Fig: Sample HLD of a Web Site / Sample LLD of a Login Page

 Coding - Based on the design documents, small modules are developed and combined to form an
executable build. Once unit testing and white box testing are completed, the software is handed over for
testing, to verify and validate it.

 Testing – Test activity actually starts at the information gathering stage itself. The testing phase is carried
out by Test Engineers/the QC team. Functional, Integration and System Testing are performed.

 Implementation & Maintenance - Engineers, in coordination with the developers, install/implement the
developed application. The testing/maintenance team suggests changes to the software, if necessary,
based on User Acceptance Testing (UAT).
 To develop software using the SDLC process, CMM standards have to be followed.
 The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's
software development process.
 The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the
International Organization for Standardization (ISO).
 CMM defines 5 levels of process maturity based on certain Key Process Areas (KPAs).

CMM Levels
Level 5 – Optimizing (< 1%)
-- process change management
-- technology change management
-- defect prevention
Level 4 – Managed (< 5%)
-- software quality management
-- quantitative process management
Level 3 – Defined (< 10%)
-- peer reviews
-- intergroup coordination
-- software product engineering
-- integrated software management
-- training program
-- organization process definition
-- organization process focus
Level 2 – Repeatable (~ 15%)
-- software configuration management
-- software quality assurance
-- software project tracking and oversight
-- software project planning
-- requirements management
Level 1 – Initial (~ 70%)
An SDLC model is a framework that describes the activities performed at each stage of a software
development project.

The choice of model depends on the client, the company and the timeline.

SDLC has standard phases which may be modified as per chosen Model.

There are a few widely recognized models:

 Fish Model
 Waterfall Model
 V-Model or Y-Model
 Prototype Model
 Incremental Model
 Spiral Model
 Hybrid Model
 RAD (Rapid Application Development) Model
 PET (Process Experts Tools and Techniques) Model

(In practice, every company has its own model or its own variant of these.)

Fig: SDLC Phases
 The Fish Model has all the necessary stages of the SDLC.

 "The monitoring and measuring the strength of development process is called as Software Quality
Assurance (SQA)". It is a process based concept.

 After the completion of every development stage, organizations conduct testing; this is called QC.

 QA aims at defect prevention, whereas QC aims at defect detection and correction.

 The Fish Model is shown in the figure below and is self-explanatory.

Fig: Fish Model


 Often considered the classic approach to the systems development life cycle, the waterfall model
describes a development method that is linear and sequential.

 The waterfall model is a sequential development process, in which development is seen as flowing
steadily downwards (like a waterfall) through the phases of Requirements & analysis, design, testing
(validation), integration, implementation, and maintenance.

 Once a phase of development is completed, the development proceeds to the next phase and there
is no turning back.

 A schedule can be set with deadlines for each stage of development, and the product can then proceed
through the development process and, theoretically, be delivered on time.

Fig: Waterfall Model


 This model is also called the Verification and Validation Model.
 The V-Model is the classic software development model. It encapsulates the verification and validation
steps for each phase of the SDLC: for each development phase, the review of that phase's output
becomes the verification (QA) activity, and the corresponding testing phase on the other arm of the V
becomes the validation (testing) activity.
 Testing of the product is planned in parallel with a corresponding phase of development

V-Shaped Strengths
 Emphasizes planning for verification and validation of the product in the early stages
 Each deliverable must be testable
 Project management can track progress by milestones
 Easy to use

V-Shaped Weaknesses:
 Does not easily handle concurrent events
 Does not handle iterations or phases
 Does not easily handle dynamic changes in requirements
 Does not contain risk analysis activities

Fig: V or Y Model (self-explanatory)

When to use the Waterfall Model:


 Requirements are very well known
 Product definition is stable
 Technology is understood
 New version of an existing product
 The prototyping model concentrates less on documentation and more on creating an early working
model of the actual software; this way, a usable version of the software can be shown to the customer
(or even released) in advance.
 Prototyping can also be combined with development in stages, in an effort to combine the advantages
of the top-down and bottom-up concepts; iterated, this combination is what the spiral lifecycle model
(discussed later) formalizes.
 In other words, that style of development combines the features of the prototyping model and the
waterfall model.

Prototyping Strengths:

Fig: Prototype Model


 Customers can “see” the system requirements as they are being gathered
 Developers learn from customers
 A more accurate end product
 Unexpected requirements accommodated
 Allows for flexible design and development
 Steady, visible signs of progress produced

Prototyping Weaknesses:
 Bad reputation for “quick-and-dirty” methods
 Overall maintainability may be overlooked
 The customer may want the prototype delivered.
 Process may continue forever (scope creep)

When to use:
 Requirements are unstable or have to be clarified
 Develop user interfaces
 Short-lived demonstrations
 New, original development.
 The incremental model performs the waterfall in overlapping sections attempting to compensate for
the length of waterfall model projects by producing usable functionality earlier. This may involve a
complete upfront set of requirements that are implemented in a series of small projects.
 A project using the incremental model may start with general objectives. Then some portion of these
objectives is defined as requirements and is implemented, followed by the next portion of the
objectives until all objectives are implemented.
 Because some modules will be completed long before others, well-defined interfaces are required.
Also, formal reviews and audits are more difficult to implement on increments than on a complete
system.

Fig: Incremental Model


 This model combines the features of the prototyping model and the waterfall model.
 In order to overcome the constraints of the Waterfall Model, the Spiral Model was developed.
 The spiral view illustrates that resources can be held constant while the system grows in size: the size
of the spiral corresponds to the system size, and the distance between the coils of the spiral indicates
the resources. Since the distance between the coils does not change, the amount of resources being
used stays constant.
 Because the process needs to be iterated more than once, it demands more time and is a somewhat
expensive approach.
 This Model is used in large, expensive and complicated projects.

Spiral model Strengths:

Fig: Spiral Model


 Users see the system early because of rapid prototyping tools
 Critical high-risk functions are developed first
 The design does not have to be perfect
 Early and frequent feedback from users
 Cumulative costs assessed frequently

Spiral model Weaknesses:


 The model is complex
 Risk assessment expertise is required
 Developers must be reassigned during non-development phase activities
 May be hard to define objective, verifiable milestones that indicate readiness
to proceed through the next iteration

When to use Spiral Model:


 When creation of a prototype is appropriate
 When costs and risk evaluation is important
 For medium to high-risk projects
 Users are unsure of their needs
 Requirements are complex
 New product line
 In reality, many of the SDLC models are variations of the Hybrid Model; the Spiral Model, discussed
previously, is one example.

 In a pure Top-down SDLC model, high-level requirements are documented, and programs are built to
meet these requirements. Then, the next level is designed and built.

 In the Bottom-up SDLC model, the lowest level of functionality is designed and programmed first,
and finally all the pieces are integrated together into the finished application. This means that,
generally, the most complex components are developed and tested first.

 The Hybrid SDLC model combines the top-down and bottom-up models.

 Generally speaking, most of the time different SDLC models are combined together to create a
hybrid-methodology life cycle.

Fig: Hybrid Model


 Rapid Application Development (RAD) Model is a software development methodology, which involves
iterative development and the construction of prototypes.

 Even if you don't have the resources of a developer to put together a prototype, you can still model a
system using PowerPoint or even whiteboards. (This is probably where use cases originated.)

 Some of the major flavors of RAD are Agile, Extreme Programming (XP), Joint Application
Development (JAD), Lean Software Development (LD) and Scrum.

RAD Strengths:
 Reduced cycle time and improved productivity
 Fewer people means lower costs
 Time-box approach mitigates cost and schedule risk
 Uses modeling concepts to capture information about business, data, and processes.

RAD Weaknesses:
 Accelerated development process must give quick responses to the user.
 Risk of never achieving closure
 Hard to use with legacy systems
 Requires a system that can be modularized
 Developers and customers must be committed.

When to use RAD:


 Reasonably well-known requirements
 User involved throughout the life cycle
 Project can be time-boxed
 Functionality delivered in increments
 High performance not required
 Low technical risks
 System can be modularized.

Fig: RAD Model.


 PET (Process Experts Tools and Techniques) Model.

 It is a refined form of the V-Model. It defines the mapping between the development and testing processes.

 It was developed by HCL Technologies, Chennai.


 Absence of system crashes
 Correspondence between the software and the users’ expectations
 Performance to specified requirements
 Quality must be controlled because poor quality lowers production speed, increases maintenance costs
and can adversely affect business

 The plan for quality assurance activities should be in writing


 Decide if a separate group should perform the quality assurance activities
 Some elements that should be considered by the plan are: defect tracking, unit testing, source-code
tracing, technical reviews, integration testing and system testing.
 Defect tracking – keeps track of each defect found, its source, when it was detected, when it was
resolved, how it was resolved, etc
 Unit testing – each individual module is tested (a minimal sketch follows this list)
 Source-code tracing – step through the source code line by line
 Technical reviews – completed work is reviewed by peers
 Integration testing – exercise new code in combination with code that has already been integrated
 System testing – execution of the software for the purpose of finding defects
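
As a hedged illustration of the unit-testing element above (the add function and its tests are hypothetical), each individual module is tested in isolation; in Python this is commonly done with the built-in unittest module:

    import unittest

    def add(a, b):
        # The "module" under test, kept trivial for illustration.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()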
STLC means Software Testing Life Cycle. It is the Process/Model by which Testing is performed.

Few Questions and Answers:

 Why is Testing Performed?

 Hardware/software testing is an empirical investigation conducted to provide the client/customer with
information about the quality of the product or service under test, with respect to the context in
which it is intended to operate. This includes, but is not limited to, the process of executing a
program or application with the intent of finding BUGS.
 Also to:
 Meet customer requirements
 Meet customer expectations
 Cost to purchase
 Time to release

 What are the Types of Testing?

 In general, Testing is defined and differentiated as White Box Testing (WBT) and Black Box Testing (BBT).

 What is the Process of Testing?


 Test Initiation
 Test Planning
 Test Design
 Review Test Cases
 Test Execution
 Test Reporting
 User Acceptance Testing and
 Sign Off
Testing is divided into the following types:
 White Box Testing: a program-based testing technique used to estimate the completeness and
correctness of the internal program structure.
 Black Box Testing: a s/w-level testing technique used to estimate the completeness and
correctness of the external functionality.

NOTE: White Box Testing is also known as Clear Box Testing or Open Box Testing. "The combination of
WBT and BBT is called Grey Box Testing".

Some of the types of Testing performed in different stages of STLC are as follows:
(These can be part of White Box Testing or Black Box Testing)

1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing
5. Release Testing
6. Testing During Maintenance
7. INFORMAL or Ad-Hoc Testing
The analysis and design level reviews are also known as Verification Testing. After completion of
verification testing, the programmers start coding and verify every program's internal structure using
WBT techniques; this is called Unit Testing and is performed in the following ways:

1.a. Basis Path Testing: In this, the programmers check all executable areas in a program to estimate
"whether the program is running or not".

To conduct this testing, the programmers follow the approach below (a sketch follows the list):

Write the program w.r.t. the design logic (HLDs and LLDs).
Prepare a flow graph.
Calculate the number of individual paths in that flow graph, called the Cyclomatic Complexity.
Run the program more than once, to cover all the individual paths.
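
As a hedged sketch (the classify function and its inputs are hypothetical, not from the original material), the following shows basis path testing on a function with one decision point, giving a cyclomatic complexity of V(G) = 1 + 1 = 2 and therefore two independent paths to execute:

    def classify(age):
        # One decision point -> cyclomatic complexity V(G) = 1 + 1 = 2,
        # so there are two independent paths through this function.
        if age >= 18:
            return "adult"    # path 1: condition true
        else:
            return "minor"    # path 2: condition false

    # Basis path testing: run the program once per independent path.
    assert classify(25) == "adult"   # covers path 1
    assert classify(10) == "minor"   # covers path 2
    print("both basis paths executed")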

After completion of basis path testing, the programmers concentrate on the correctness of inputs and
outputs using control structure testing.

1.b. Control Structure Testing: In this, the programmers verify every statement, condition and
loop in terms of the completeness and correctness of inputs and outputs (for example, while debugging).
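
A minimal sketch, assuming a hypothetical total_price function: control structure testing exercises each statement, condition and loop, including the zero-iteration and many-iteration cases, and checks the inputs and outputs at each point.

    def total_price(prices):
        # Loop and condition under test: skip negative (invalid) entries.
        total = 0.0
        for p in prices:
            if p >= 0:          # condition: checked on every loop iteration
                total += p
        return total

    # Exercise the loop zero, one, and many times, plus both condition outcomes.
    assert total_price([]) == 0.0                 # loop body never runs
    assert total_price([5.0]) == 5.0              # single iteration, condition true
    assert total_price([5.0, -2.0, 3.0]) == 8.0   # many iterations, both branches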

1.c. Program Technique Testing: During this, the programmer calculates the execution time of the
program. If the execution time is not reasonable, the programmer changes the structure of the
program without disturbing its functionality.

1.d. Mutation Testing: Mutation means a change in a program. Programmers make changes in an
already-tested program to estimate the completeness and correctness of that program's testing.
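
A minimal sketch, assuming a hypothetical is_even function: a deliberate change (mutant) is planted in the tested program; if the existing tests still pass on the mutant, the earlier testing was incomplete.

    def is_even(n):
        return n % 2 == 0

    def is_even_mutant(n):
        # Mutant: the == operator has been deliberately flipped to !=.
        return n % 2 != 0

    def run_tests(fn):
        # A test suite is adequate only if it "kills" the mutant (fails on it).
        return fn(4) is True and fn(7) is False

    assert run_tests(is_even)             # original program passes the tests
    assert not run_tests(is_even_mutant)  # mutant is killed -> tests are adequate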
After completion of the development and unit testing of the dependent programs, the programmers
interconnect them to form a complete system and then check the completeness and correctness of
those interconnections. This integration testing is also known as Interface Testing.

There are 4 approaches to interconnecting programs and testing those interconnections:

2.a. Top-Down Approach: In this approach, programmers interconnect the main module and the
completed sub-modules, without using the under-construction sub-modules. In place of the
under-construction sub-modules, the programmers use temporary or alternative programs called Stubs.
These stubs are also known as called programs, because they are called by the main module.

2.b. Bottom-Up Approach: In this approach, programmers interconnect the completed sub-modules
without the main module, which is still under construction. In place of the main module, the
programmers use a temporary program called a Driver or calling program. (A combined sketch of
stubs and drivers follows.)
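
As a hedged illustration (all module names are hypothetical), the sketch below shows a stub standing in for an unfinished sub-module during top-down integration, and a driver standing in for an unfinished main module during bottom-up integration:

    # --- Top-down: the main module is ready, a sub-module is not. ---
    def tax_stub(amount):
        # Stub: temporary stand-in for the under-construction tax sub-module.
        return 0.0  # canned answer, just enough to let the main module run

    def main_module(amount, tax_fn):
        return amount + tax_fn(amount)

    assert main_module(100.0, tax_stub) == 100.0  # main module tested via the stub

    # --- Bottom-up: a sub-module is ready, the main module is not. ---
    def discount_submodule(amount):
        return amount * 0.5

    def driver():
        # Driver: temporary "calling program" replacing the unfinished main module.
        return discount_submodule(200.0)

    assert driver() == 100.0  # sub-module tested via the driver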

2.c. Hybrid Approach: It is a combination of the Top-Down and Bottom-Up approaches. This approach is
also called the Sandwich Approach.

2.d. System Approach: In this approach, the programmers integrate all the modules and their sub-modules
only after the completion of their development and unit testing. This approach is known as the Big Bang
approach.

After completion of the integration of all the required modules, the development team releases a s/w build
to a separate testing team in the organization. The s/w build is also known as the Application Under Test
(AUT).

This System Testing is classified into 3 levels:

3.i. Usability Testing
3.ii. Functional Testing (Black Box Testing techniques)
3.iii. Non-Functional Testing (an expensive level of testing)
 
3.i. Usability Testing:
In general a separate Testing team is starting test execution with usability testing to estimate user
friendliness of s/w build. During this, test engineers are applying below sub tests.

3.i.a. User Interface Testing: checks whether every screen in the application build provides:

Ease of use (understandability).
Look and feel (attractiveness).
Speed of interface (short navigations).
3.i.b. Manual Support Testing: Along with the s/w release, the organization releases user manuals as well.
Before the s/w release, the separate testing team validates those user manuals in terms of completeness
and correctness.
3.ii. Functional Testing:
After completion of user interface testing on the relevant screens in the application build, the separate
testing team concentrates on the correctness and completeness of the requirements in that build. In this
testing, the separate testing team uses a set of Black Box Testing techniques, such as Boundary Value
Analysis, Equivalence Class Partitioning and Error Guessing (a sketch of the first two follows below).
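
As a hedged sketch (the age field and its 18-60 valid range are assumptions made for illustration), Equivalence Class Partitioning picks one representative value per class, while Boundary Value Analysis tests the values at and adjacent to each boundary:

    def accept_age(age):
        # Hypothetical rule for illustration: valid ages are 18..60 inclusive.
        return 18 <= age <= 60

    # Equivalence Class Partitioning: one representative value per class.
    assert accept_age(30)          # valid class
    assert not accept_age(5)       # invalid class (below range)
    assert not accept_age(70)      # invalid class (above range)

    # Boundary Value Analysis: values at and adjacent to each boundary.
    for age, expected in [(17, False), (18, True), (19, True),
                          (59, True), (60, True), (61, False)]:
        assert accept_age(age) is expected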
This testing is classified into 2 sub tests as follows:

3.ii.a. Functionality Testing: During this test, test engineers validate the completeness and
correctness of every functionality. This testing is also known as Requirements Testing.

In this test, a separate testing team validates the correctness of every functionality through the
coverages below:

 GUI coverage or Behavioral coverage (valid changes in properties of objects and windows in our
application build).
 Error handling coverage (the prevention of wrong operations with meaningful error messages like
displaying a message before closing a file without saving it).
 Input Domain coverage (the validity of i/p values in terms of size and type like while giving alphabets
to age field).
 Manipulations coverage (the correctness of o/p or outcomes).
 Order of functionalities (the existence of functionality w.r.t customer requirements).
 Back end coverage (the impact of front end’s screen operation on back end’s table content in
corresponding functionality).

NOTE: The above coverages are applied to every functionality of the application build with the help of
Black Box Testing techniques.

3.ii.b. Sanitation Testing: It is also known as Garbage Testing. During this test, a separate testing team
detects extra functionalities in the s/w build w.r.t. the customer requirements (for example, an extra
sign-in link on a sign-in page).

NOTE: Defects in s/w are of 3 types: Mistakes, Missing and Extra.


3.iii. Non-Functional Testing:
It is also a mandatory testing level in the System Testing phase, but it is expensive and complex to conduct.
During this test, the testing team concentrates on the characteristics of the s/w.

3.iii.a. Recovery/Reliability Testing: During this, the test engineers validate whether the s/w build recovers
from an abnormal state back to the normal state.
3.iii.b. Compatibility Testing: It is also known as Portability Testing (for example, a friend's game CD not
working on your system; the s/w should adjust anywhere). During this test, the test engineers validate
whether the s/w build is able to run on the customer's expected platforms. Platform means the OS,
compilers, browsers and other system s/w.
3.iii.c. Configuration Testing: It is also known as H/W Compatibility Testing. During this, the testing team
validates whether the s/w build supports devices of different technologies (for example, different types
of printers, different types of networks, etc.).
3.iii.d. Inter-System Testing: It is also known as End-to-End Testing. During this, the testing team validates
whether the s/w build co-exists with other s/w (to share common resources).
EX: E-Server
3.iii.e. Installation Testing: the order of checks is important - INITIATION, DURING & AFTER installation.
3.iii.f. Data Volume Testing: It is also known as Storage Testing, Memory Testing or Mass Testing. During
this, the testing team calculates the peak limit of data that the s/w build can handle (EX: hospital
software).
EX: MS Access-based s/w builds support a maximum of 2 GB of data.
3.iii.g. Load Testing: It is also known as Performance or Scalability Testing. Load (or scale) means the
number of concurrent users operating the s/w at the same time. The execution of the s/w build under
the customer-expected configuration and the customer-expected load, to estimate performance, is
LOAD TESTING (inputs: customer-expected configuration and load; output: performance). Performance
means the speed of processing. (A small sketch follows.)
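
A minimal sketch, assuming a hypothetical handle_request operation and an invented load of 50 concurrent users: threads simulate the concurrent users and the total response time under that load is measured.

    import threading, time

    def handle_request():
        # Hypothetical operation under test; stands in for a real transaction.
        time.sleep(0.01)

    def load_test(concurrent_users=50):
        threads = [threading.Thread(target=handle_request)
                   for _ in range(concurrent_users)]
        start = time.perf_counter()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.perf_counter() - start

    # Performance = speed of processing under the expected load.
    elapsed = load_test(50)
    print(f"50 concurrent users served in {elapsed:.3f}s")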
3.iii.h. Stress Testing: The execution of the s/w build under the customer-expected configuration and
various load levels, to estimate stability or continuity, is called Stress Testing or Endurance Testing.
3.iii.i. Security Testing: It is also known as Penetration Testing. During this, the testing team validates
authorization, access control, and encryption/decryption. Authorization indicates the validity of users
to the s/w (like a student entering a class); access control indicates the authorities of valid users to use
specific functionalities of the s/w.
After completion of all reasonable tests, the project management concentrates on UAT to garner
feedback from real customers or model customers. Both the developers and the testers are involved
in this testing.

There are 2 ways to conduct UAT:

Alpha Testing
Beta Testing

(The purpose of both is to garner feedback from the customer.)


After completion of UAT and the resulting modifications, the project manager defines a Release or
Delivery team of a few developers, a few testers and a few h/w engineers. This release team goes to the
customer site and conducts Release Testing, also called Port Testing or Green Box Testing. In this
testing, the release team observes the factors below at the customer site.

Compact installation (fully installed or not).
Overall functionality.
Input device handling (keyboard, mouse, etc.).
Output device handling (monitor, printer, etc.).
Secondary storage device handling (CD drive, hard disk, floppy, etc.).
OS error handling (reliability).
Co-existence with other s/w applications.

After completion of port testing, the release team conducts TRAINING SESSIONS for the end users or
customer-site people.

 After completion of Release Testing, the customer-site people use the s/w for their required purposes.
 During this use, the customer-site people send Change Requests to the company. The team
responsible for handling such changes is called the CCB (Change Control Board). This team consists of
the project manager, a few developers, a few testers and a few h/w engineers. It receives 2 types of
change requests: Enhancements and Missed/Latent defects.
 Testing needs to be performed thoroughly at each stage, as Missed/Latent defects reaching the CCB
reflect on the quality of the testing.
Sometimes, organizations are not able to conduct the planned testing. Due to such risks, the testing teams
conduct Ad-Hoc testing instead of planned testing.

There are different styles of Ad-Hoc testing.

Monkey or Chimpanzee Testing: due to lack of time, the testing team covers only the main activities of the
s/w functionalities.
Buddy Testing: due to lack of time, the developers and testers are grouped as buddies; every buddy pair
consists of a developer and a tester, so the two processes can continue in parallel.
Exploratory Testing: in general, the testing team tests w.r.t. the available documents. When documentation
is lacking, the testing team depends on past experience, discussions with others, similar projects and
Internet browsing. This style of testing is exploratory testing.
Pair Testing: due to lack of skills, junior test engineers are grouped with senior test engineers to share
their knowledge.
Debugging Testing: to estimate the efficiency of the testing people, the development team releases a build
to the testing team with known (seeded) defects.

The above Ad-Hoc testing styles are also known as INFORMAL TESTING TECHNIQUES.
A summary of the case study for each testing phase/level/stage is in the table below:

Testing phase/level/stage | Responsible | Testing technique
In analysis | Business Analyst | Walkthroughs, inspections and peer reviews
In design | Designer | Walkthroughs, inspections and peer reviews
Unit testing | Programmer | White box testing
Integration/Interface testing | Programmer | Top-down, bottom-up, hybrid and system approaches
System testing (Usability, Functional and Non-Functional) | Testing team | Black box testing
UAT | Real/model customers | Alpha and Beta testing
Release testing | Release team | Port testing factors
Testing during maintenance | CCB | Test s/w changes (regression testing)
Many organizations maintain a separate testing team only for the System Testing stage.
This stage is the bottleneck stage of s/w development.

Fig: Software System Testing Process


The process of testing involves the steps/stages below:

1. Test Initiation
2. Test Planning
3. Test Design
4. Review Test Cases
5. Test Execution
6. Test Reporting and Closure
7. User Acceptance Testing and
8. Sign Off

1. Test Initiation:
In general, the system testing process starts with Test Initiation (or Test Commencement). In this stage,
the Project Manager or Test Manager selects a reasonable approach or methodology to be followed by
the separate testing team. This approach or methodology is called the Test Strategy.

2. Test Planning:
After preparation of the Test Strategy document with the required details, the test-lead-level people
define the test plan in terms of what to test, how to test, when to test and who will test.
In this stage, the test lead prepares the system test plan and then divides that plan into module test
plans.

In test planning, the test lead follows the approach below to prepare the test plans:
2.a. Testing team formation
2.b. Identify Tactical risks
2.c. Prepare Test plans
2.d. Review Test plan
3. Test Design:
After completion of the required training, the responsible test engineers concentrate on test case
preparation. Every test case defines a unique test condition to be applied to the s/w build. There are
3 methods of preparing test cases, after which the review of the test cases is done.
 
3.a. Functional and system specification based test case design:
Most test engineers prepare test cases depending on the functional and system specifications in the SRS.

3.b. Use-case based test case design: Use cases are more elaborate than the functional and system
specifications in the SRS. In use-case oriented test case design, test engineers do not make their own
assumptions.

3.c. User interface or application based test case design: In general, the test engineers prepare
test cases for the functional and non-functional tests using one of the previous 2 methods. To
prepare test cases for usability testing, test engineers depend on user-interface based test case
design.
In this method, test engineers identify the interests of the customer-site people and the user-interface
conventions in the market.

Test case document format (IEEE 829):

 
NOTE: In the above test case format, the test engineers prepare a test procedure when the test case
covers an operation, and they prepare a data matrix when the test case covers an object (one that
takes inputs).
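
As a hedged illustration of such a test case record (the field names and the login example are illustrative, loosely following the IEEE 829 test case specification rather than quoting it):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        # Fields commonly associated with an IEEE 829-style test case
        # specification; names are illustrative, not verbatim from the standard.
        test_case_id: str
        test_items: str                   # feature/functionality under test
        input_specification: List[str]    # steps or data matrix values
        output_specification: str         # expected result
        environmental_needs: str = ""     # h/w, s/w, platform requirements
        dependencies: List[str] = field(default_factory=list)

    login_tc = TestCase(
        test_case_id="TC_LOGIN_001",
        test_items="Login page - valid credentials",
        input_specification=["enter valid user id", "enter valid password",
                             "click Sign In"],
        output_specification="home page is displayed",
    )
    print(login_tc.test_case_id, "->", login_tc.output_specification)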
 
4. Review Test Cases:
After completion of all reasonable test cases, the testing team conducts a review meeting to check them
for completeness and correctness. In this review meeting, the test lead depends on the factors below to
review all the test cases developed:
 Requirements-oriented coverage.
 Testing-techniques-oriented coverage.

5. Test Execution:
After completion of test case design and review, the testing people concentrate on test execution. In
this stage, the testing people communicate with the development team for feature negotiations.
5.a. Build Version Control: After confirming the required environment, the formal review meeting members
concentrate on build version control. Under this concept, the development people assign a unique
version number to every modified build released after solving defects. This version numbering system
must be understandable to the testing team.
5.b. Levels of Test Execution: After completion of the formal review meeting, the testing people
concentrate on finalizing the test execution levels:
 Level-0: testing on the initial build.
 Level-1: testing on a stable or working build.
 Level-2: testing on a modified build.
 Level-3: testing on the master build.
6. Test Reporting:
 During Level-1 and Level-2 test execution, test engineers report mismatches to the
development team as defects. A defect is also known as an Error, Issue or Bug:
 A problem detected by a programmer in a program is called an ERROR.
 A problem detected by a tester in a build is called a DEFECT or ISSUE.
 A reported defect that is accepted for resolution is called a BUG.
 When reporting defects to developers, the test engineers follow a standard
defect report format (IEEE 829) and submit the bug.

6.a. Defect Life Cycle:

Fig: Test Reporting method


Fig: Defect status diagram
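
As a hedged sketch of the defect status diagram (the status names follow a commonly used new/open/fixed/closed flow; actual names vary by organization):

    # Allowed defect-status transitions (a common flow; names vary by team).
    DEFECT_FLOW = {
        "new":       ["open", "rejected", "deferred"],
        "open":      ["fixed"],
        "fixed":     ["re-tested"],
        "re-tested": ["closed", "reopened"],
        "reopened":  ["fixed"],
        "rejected":  [],
        "deferred":  [],
        "closed":    [],
    }

    def move(status, new_status):
        if new_status not in DEFECT_FLOW[status]:
            raise ValueError(f"illegal transition {status} -> {new_status}")
        return new_status

    s = "new"
    for nxt in ["open", "fixed", "re-tested", "closed"]:
        s = move(s, nxt)
    print("final status:", s)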
6.b. Types of Defects: during usability, functional and non-functional test execution on the application
build, or during UAT, the test engineers detect the categories of defects below:

6.b.i. User interface defects (low severity):

 Spelling mistakes (high priority)
 Invalid label of an object w.r.t. functionality (medium priority)
 Improper right alignment (low priority)
6.b.ii. Error handling defects (medium severity)
 No error message for a wrong operation (high priority)
 Wrong error message for a wrong operation (medium)
 Correct but incomplete error message (low)

6.b.iii. Input domain defects (medium severity)

 Does not take valid input (high)
 Takes invalid input as well as valid input (medium)
 Takes values of valid type and valid size, but the range can be exceeded (low)

6.b.iv. Manipulation defects (high severity)

 Wrong output (high)
 Valid output but without decimal points (medium)
 Valid output but with rounded decimal points (low)
 EX: if the actual answer is 10.96, then 13 is high, 10 is medium and 10.9 is low.

6.b.v. Race condition defects (high severity)

 Hang or deadlock (show-stopper, high priority)
 Invalid order of functionalities (medium)
 Application build runs on only some of the platforms (low)

6.b.vi. H/W related defects (high severity)
 Device does not connect (high)
 Device connects but returns wrong output (medium)
 Device connects and returns correct but incomplete output (low)

6.b.vii. Load condition defects (high severity)

 Does not allow the customer-expected load (high)
 Allows the customer-expected load only on some functionalities (medium)
 Allows the customer-expected load on all functionalities, but with deviations w.r.t. benchmarks (low)

6.b.viii. Source defects (medium severity)

 Wrong help document (high)
 Incomplete help document (medium)
 Correct and complete help document, but complex to understand (low)

6.b.ix. Version control defects (medium severity)

 Unwanted differences between the old build and the modified build

6.b.x. ID control defects (medium severity)

 Logo missing, wrong logo, version number missing, copyright window missing, team member
names missing
6.c. Test Closure:
After completion of all reasonable test cycles, the test lead conducts a review meeting to estimate the
completeness and correctness of the test execution. If the test execution status meets the EXIT
CRITERIA, the testing team stops testing; otherwise, the team continues the remaining test
execution w.r.t. the available time. In this test closure review meeting, the test lead depends on the
factors below:
 Coverage analysis
 Defect density
 Analysis of deferred defects

If all the critical and major defects are resolved, and only low-severity, low-priority defects are left, then
the Project Owner/Manager takes a call on the release for User Acceptance Testing.
7. UAT (User Acceptance Testing) on release builds:
 At this level, the PM concentrates on the feedback of real customer-site people or model
customer-site people. There are 2 ways of performing UAT, as follows:
 
7.a. Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent
test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form
of internal acceptance testing, before the software goes to beta testing.
 
7.b. Beta testing
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to
a limited audience outside of the programming team. The software is released to groups of people so
that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are
made available to the open public to increase the feedback field to a maximal number of future
users.

8. Sign Off
After completion of UAT and the resulting modifications, the test lead conducts a sign-off review. In this
review, the test lead gathers all the testing documents from the test engineers: the Test Strategy, the
system test plan and detailed test plans, test scenarios or titles, test case documents, the test log,
defect reports and the final defect summary report (defect id, description, severity, detected by and
status (closed or deferred)), and the Requirements Traceability Matrix (RTM) (requirement id, test
case, defect, status).

*The RTM maps requirements to defects via test cases (a small sketch follows).
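
A minimal sketch, assuming invented requirement, test case and defect ids: the RTM can be represented as a mapping from each requirement to its test cases and the defects found through them.

    # Requirements Traceability Matrix: requirement -> test cases -> defects.
    rtm = {
        "REQ-01": {"test_cases": ["TC_LOGIN_001", "TC_LOGIN_002"],
                   "defects": ["DEF-104"], "status": "closed"},
        "REQ-02": {"test_cases": ["TC_SEARCH_001"],
                   "defects": [], "status": "closed"},
    }

    # Traceability check: every requirement must have at least one test case.
    untraced = [r for r, row in rtm.items() if not row["test_cases"]]
    assert not untraced, f"requirements without test coverage: {untraced}"
    for req, row in rtm.items():
        print(req, "->", row["test_cases"], "defects:", row["defects"])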
Fig: Case Study by Process of Testing or by Test Deliverables:
Manual Testing vs. Automation Testing:

In general, test engineers execute test cases manually.

To save test execution time and to decrease the complexity of manual testing, engineers use test
automation. Test automation is possible for two of the manual tests:

 Functional testing.
 Performance testing (part of non-functional testing).
 
 “WinRunner, QTP (QuickTest Professional), Rational Robot and SilkTest are functional testing tools”.

 “LoadRunner, Rational Load Test, Silk Performer and JMeter are performance testing tools used to
automate load and stress testing”.

 Some organizations use tools for test management as well. Ex: TestDirector, Quality Center and
Rational TestManager.
Fig: Comparison between the SDLC and STLC processes
…puranamravinder@gmail.com


You might also like