
Software Testing Basics

In this section we will discuss the basics of Software Testing:

what Software Testing is,
what the reasons are for performing Software Testing,
and why Software Testing is necessary.


Firstly, we will come across the different terminology used throughout Software Testing; professional testers are all pretty much agreed on these basic ideas. Secondly, we take a look at the need for proper Software Testing: what errors are and how they get into the software, the life cycle of Software Testing, and the different Software Testing types. We also look at the cost of getting it wrong, and we show why exhaustive Software Testing is neither possible nor practical.

We then describe a fundamental test process for Software Testing, based on industry standards, and underline the importance of planning tests and determining expected results in advance of test execution.

Objectives of this section:

- Understand basic testing terminology in Software Testing.
- Understand why Software Testing is necessary.
- Be able to define error, fault and failure in Software Testing.
- Appreciate why errors occur and how costly they can be in Software Testing.
- Understand that you cannot test everything and that Software Testing is therefore a risk-based activity.
- Understand the fundamental test process of Software Testing.
- Understand that developers and testers have different mindsets.
- Learn how to communicate effectively with both developers and testers.
- Find out why you cannot test your own work.
- Understand the need for regression testing in Software Testing.
- Understand the importance of specifying your expected results in advance.
- Understand how and why tests should be prioritized in Software Testing.

  
What is Software Testing?

Actually, what is Software Testing? This is the most important question to start with.

Even the most carefully planned and designed software cannot possibly be free of defects. Your goal as a quality engineer is to find these defects. This requires creating and executing many tests.

In order for Software Testing to be successful, you should start the Software Testing process as soon as possible. Each new version must be tested in order to ensure that "improvements" do not generate new defects. If you begin Software Testing only shortly before an application is scheduled for release, you will not have time to detect and repair many serious defects. Thus, by testing ahead of time, you can prevent problems for your users and avoid costly delays.

Let us derive the definition of Software Testing step by step.

Step 1: Identifying the Defects

What does "identifying" mean in Software Testing? The main purpose of Software Testing is to identify the defects.
Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function.

Check also the related terms Fault, Failure, Error and Bug:

Fault: A fault is similar to a defect.
Failure: Deviation of the component or system from its expected delivery, service or result.
Error: A human action that produces an incorrect result.
Bug: A bug is similar to a defect.

Step 2: Isolating the Defects

Isolating means separating or dividing the defects. These isolated defects are collected in the Defect Profile.

What is a Defect Profile?

A Defect Profile is a document with many columns used in Software Testing. It is a template provided by the company.

Step 3: Subjecting the Defects for Rectification

The Defect Profile is subjected for rectification, which means it is sent to the developer. After getting it back from the developer, make sure all the defects are rectified before declaring the application a Quality product.

Step 4: Ensuring Quality

What is Quality? Quality is defined as justification of user requirements, or satisfaction of user requirements.

When all 4 steps are completed, we can say that Software Testing is completed.
Now let us write a proper definition for testing:

DEFINITION OF SOFTWARE TESTING:
Software Testing is the process in which the defects are identified and isolated, subjected for rectification, and finally it is made sure that all the defects are rectified, in order to ensure that the product is a Quality product.

Objectives: testing in the software life cycle

- Understand the difference between verification and validation testing activities.
- Understand what benefits the V model offers over other models.
- Be aware of other models in order to compare and contrast.
- Understand that the cost of fixing faults increases as you move the product towards live use.
- Understand what constitutes a master test plan in Software Testing.
- Understand the meaning of each testing stage in Software Testing.

  


Terminology

Here is some of the terminology that has to be learned as part of Software Testing. First of all, let us look at the major difference between a Project and a Product. There are several definitions relating to this:

0%& It means that exact rules i.e the customer requirements must be followed.
1% This is based on the general requirements i.e on our own requirements.
2(       
Estimating the cost of the project.
3$
 c%&  Consider for example:-
If any bank want to ¦utomate its procedures then that would bid or would call for various
IT development companies.
See this picture

45% *   


Instead of repeating the same process again and
again we can keep out of them.

Consider for example:-The login page for different projects would be the same.
6%&     , 7%,8   
This is nothing but a mail to the Company
Director.
  
Organizational Structure

COMPANY STRUCTURE

This is the basic structure that is followed by most of the companies. Now let us see the categories of Human Resources in Software Testing:


QA - Quality Assurance
CEO - Chief Executive Officer
Directors
HTL - Head Team Leader
TM - Technical Manager
QM - Quality Manager
QTL - Quality Team Leader
PM - Project Manager
QL - Quality Leader
TL/PL - Team Leader / Project Leader
TL - Test Leader
SSE - Senior Software Engineer
STE - Senior Test Engineer
SE - Software Engineer
TE - Test Engineer

  
 
Testing Basics

1. The application we are testing is called the Application Under Test (AUT).
2. The application is divided into 2 parts:
   a) Structural
   b) Functional
3. The structural part is tested by the developers.
4. The structural part is called Invisible.
5. The functional part is tested by the Test Engineers.
6. The functional part is called Visible.

The Fundamental Test Process in Software Testing

The fundamental test process in Software Testing comprises five stages: test planning, test specification, test execution, test recording, and checking for test completion. You will find organizations that have slightly different names for each stage of the process, and you may find some processes that have just a few stages, for example. However, you will find that all good test processes adhere to this fundamental structure.

- Test specification in Software Testing (sometimes referred to as test design) involves designing test conditions and test cases using recognized test techniques identified at the planning stage. Here it is usual to produce a separate document or documents that fully describe the tests that you will carry out. It is important to determine the expected results prior to test execution.
- Test execution involves actually running the specified test on a computer system, either manually or by using an automated test tool.
- Test recording involves keeping good records of the test activities that you have carried out. Versions of the software you have tested and the test specifications are recorded, along with the actual outcomes of each test.
- Checking for test completion involves looking at the previously specified test completion criteria to see if they have been met. If not, some tests may need to be re-run, and in some instances it may be appropriate to design some new test cases to meet a particular coverage target.
- Note that the objective of a test should be to detect faults, so a successful test is one that does detect a fault. This is counter-intuitive, because faults delay progress; a successful test is one that may cause delay. But the successful test reveals a fault which, if found later, may be many times more costly to correct, so in the long run it is a good thing.
- Completion or exit criteria are used to determine when testing (at any stage) is complete. These criteria may be defined in terms of cost, time, faults found or coverage criteria.
- Coverage criteria are defined in terms of items that are exercised by test suites, such as branches, user requirements, most frequently used transactions etc.

 
The Importance of Expected Results

The specification of expected results in advance of test execution is perhaps one of the most fundamental principles of testing computer software. If this step is omitted, then the human subconscious desire for tests to pass will be overwhelming, and a tester may perhaps interpret a plausible, yet erroneous, result as the correct outcome.

- As you will see when designing tests using black box and white box techniques in Software Testing, there is ample room within the test specification to write down your expected results, and therefore no real excuse for not doing it. If you are unable to determine the expected results for a particular test that you had in mind, then it is not a good test, as you will not be able to (a) determine whether it has passed or not, and (b) repeat it. (A small sketch follows this list.)
- Even with a quick and dirty ad-hoc test it is advisable to write down beforehand what you expect to happen. This may all sound pretty obvious, but many test efforts have floundered by ignoring this basic principle.

!u " The major difference between a thing that might go wrong and a thing that cannot
possibly go wrong is that when a thing that cannot possibly go wrong does go wrong it
usually turns out to be impossible to get at or repair."
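Returning to the expected-results principle, here is a minimal sketch in Python (the discount rule, function name and values are all invented for illustration): the expected result is computed by hand from the specification before the test is ever run.

# Hypothetical example: the expected result (180.00) is specified in advance,
# not copied from whatever the program happens to produce.
def discounted_price(price, percent):
    """Apply a percentage discount (illustrative implementation)."""
    return round(price * (1 - percent / 100.0), 2)

expected = 180.00                      # worked out by hand: 10% off 200.00
actual = discounted_price(200.00, 10)
assert actual == expected, f"expected {expected}, got {actual}"
print("test passed against the pre-specified expected result")

If you cannot write the assert line before running the code, the test is not well defined.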


  
 
Software Testability Requirements

Software Testing is not an activity to take up when the product is ready. Effective Software Testing begins with a proper plan from the user requirements stage itself. Software testability is the ease with which a computer program can be tested. Metrics can be used to measure the testability of a product. The requirements for effective Software Testing are given in the following sub-sections.

- Operability

1. The better the software works, the more efficiently it can be tested.
2. The system has few bugs (bugs add analysis and reporting overhead to the test process).
3. No bugs block the execution of tests.
4. The product evolves in functional stages (allows simultaneous development and testing).

- Observability

1. What is seen is what is tested.
2. Distinct output is generated for each input.
3. System states and variables are visible or queriable during execution.
4. Past system states and variables are visible or queriable (e.g., transaction logs).
5. All factors affecting the output are visible.
6. Incorrect output is easily identified.
7. Incorrect input is easily identified.
8. Internal errors are automatically detected through self-testing mechanisms.
9. Internal errors are automatically reported.
10. Source code is accessible.

- Controllability

1. The better the software is controlled, the more the testing can be automated and optimised.
2. All possible outputs can be generated through some combination of input.
3. All code is executable through some combination of input.
4. Software and hardware states can be controlled directly by testing.
5. Input and output formats are consistent and structured.
6. Tests can be conveniently specified, automated, and reproduced.

- Decomposability

1. By controlling the scope of testing, problems can be isolated quickly and smarter testing can be performed.
2. The software system is built from independent modules.
3. Software modules can be tested independently.

- Simplicity

1. The less there is to test, the more quickly it can be tested.
2. Functional simplicity.
3. Structural simplicity.
4. Code simplicity.

- Stability

1. The fewer the changes, the fewer the disruptions to testing.
2. Changes to the software are infrequent.
3. Changes to the software are controlled.
4. Changes to the software do not invalidate existing tests.
5. The software recovers well from failures.

- Understandability

1. The more information we have, the smarter we will test.
2. The design is well understood.
3. Dependencies between internal, external and shared components are well understood.
4. Changes to the design are communicated.
5. Technical documentation is instantly accessible.
6. Technical documentation is well organized.
7. Technical documentation is specific and detailed.
8. Technical documentation is accurate.

  
Why is Software Testing Necessary?

In this section we will discuss the necessity of Software Testing.


#"!  and !  are occurred while developing the Software Project.
-->      c  c c   .    
The computer program ¦O¦ spacecraft contained the following statement with the FORTR¦N
programming language.
 )0>>?00>
The programmer's intention was to execute a succeeding statements up to line 100 ten times then
creating a loop where the integer variable I was using the loop counter, starting 1 and ending at
10.
xnfortunately, what this code actually does is writing variable I do to decimal value 1.1 and it
does that once only. Therefore remaining code is executed once and not 10 times within the loop.
¦s a result spacecraft went off course and mission was abort considerable cost!).
The correct syntax for what the programmer intended is:--
 )0>>0?0"0>
So a small mistake make a very big thing.

Why do Errors Occur?

Why do we make errors that cause faults in computer software, leading to potential failure of our systems? Well, firstly, we are all prone to making simple human errors. This is an unavoidable fact of life. However, this is compounded by the fact that we all operate under real-world pressures such as tight deadlines, budget restrictions, conflicting priorities and so on.
The Cost of Errors

The cost of an error can vary from nothing at all to large amounts of money and even loss of life. The aborted Mercury mission was obviously very costly, but surely this is just an isolated example. Or is it? There are hundreds of stories about failures of computer systems that have been attributed to errors in the software. A few examples are shown below:

A nuclear reactor was shut down because a single line of code was coded as X = Y instead of X = ABS(Y), i.e. the absolute value of Y irrespective of whether Y was positive or negative.

Reliability

Reliability is the probability that software will not cause the failure of a system for a specified time under specified conditions. Measures of reliability include MTBF (mean time between failures) and MTTF (mean time to failure), as well as service level agreements and other mechanisms.

Exhaustive Testing: Why not Test Everything?

It is now widely accepted that you cannot test everything in Software Testing. Exhausted testers you will find, but exhaustive testing you will not. Complete Software Testing is neither theoretically nor practically possible. Consider a 10-character string: it has around 2^80 possible input streams and corresponding outputs. If you executed one test per microsecond, it would take approximately 4 times the age of the Universe to test this completely.
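A quick back-of-the-envelope check of that claim, as a Python sketch (the age of the Universe is taken here as roughly ten billion years, an assumption made only for the arithmetic):

# Rough arithmetic behind the exhaustive-testing example.
streams = 2 ** 80                        # possible input streams for a 10-char string
seconds = streams / 1_000_000            # at one test per microsecond
years = seconds / (60 * 60 * 24 * 365.25)
universe_age_years = 1.0e10              # assumed age of the Universe in years
print(f"{years:.2e} years, about {years / universe_age_years:.1f}x "
      f"the age of the Universe")

Running this prints roughly 3.8e10 years, i.e. around four times the assumed age of the Universe, which is where the figure above comes from.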

  
 
Testing and Risk

How much testing would you be willing to perform if the risk of failure were negligible? Alternatively, how much Software Testing would you be willing to perform if a single defect could cost you your life's savings, or, even more significantly, your life?

  
Testing and Quality

Software Testing identifies faults whose removal increases the software quality by increasing the software's potential reliability. Software Testing is the measurement of software quality. We measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability, testability etc.

How Much Testing is Enough?

It is difficult to determine how much Software Testing is enough. Software Testing is always a matter of judging risks against the cost of extra testing effort. Planning the test effort thoroughly before you begin, and setting completion criteria, will go some way towards ensuring the right amount of Software Testing is attempted. Assigning priorities to tests will ensure that the most important tests have been done should you run out of time.

  
Software Testing can be performed in either of two styles:

1. Conventional Testing: here, Software Testing is started after the coding.
2. Unconventional Testing: here, Software Testing is done from the initial phase onwards.

Before that, we need to know a couple of main definitions regarding the terminology used in a company:

PROJECT: exact rules, that is, the customer requirements, must be followed.
PRODUCT: based on general requirements, that is, on our own requirements.

  
 
Software Development Life Cycle

Before going into the Testing Life Cycle in Software Testing, we need to know how the software is developed, i.e. its life cycle. The Software Development Life Cycle (SDLC) is also called the Project Development Life Cycle (PDLC).

The SDLC has 6 phases:

a) Initial Phase
b) Analysis Phase
c) Design Phase
d) Coding Phase
e) Testing Phase
f) Delivery and Maintenance Phase

Now let us discuss each phase in detail:

a) Initial Phase in Software Testing

(i) Collecting the requirements:
The Business Analyst (BA) will gather the information of the company through a template which is predefined, and goes to the client. He would collect all the information about what has to be developed, in how many days, and all the basic requirements of the company. As proof of his collection of information, the Business Analyst (BA) would prepare one document, called either:

BDD: Business Development Document
BRS: Business Requirement Specification
URS: User Requirement Specification
CRS: Customer Requirement Specification

All are the same.

(ii) Financial considerations:
The Engagement Manager (EM) would discuss all the financial matters.

b)* %c   



In this phase the DD document is taken as the input.
In this phase 4 steps are done.
78*  c=     
In this step all the requirements are analysed
and studyed.
78!      
Feasibility means the possibility of the project
developing.
78  
 c 
   
Deciding which techonology has to be used for
example:
Either to use the SxN or Microsoft Technology etc:
7/8#      
Estimating the resources.for example:-Time,Number of
people etc:
During the ¦nalysis Phase the Project Manager prepares the %+)@#9%<*,
The output document for this phase is the Software Requirements Specification(SRS).

¦nd this document is prepared by  *  7+8Go to Top

c) Design Phase in Software Testing

The designing is done at 2 levels:

(i) High Level Designing (HLD): In this level of designing, the project is divided into a number of modules. The High Level Designing is done by the Technical Manager (TM) or the Chief Architect (CA).
(ii) Low Level Designing (LLD): In this level of designing, the modules are further divided into a number of submodules. The Low Level Designing is done by the Team Lead (TL).

In this phase the Chief Architect would prepare the Technical Design Document (TDD).

d) Coding Phase in Software Testing

In this phase the developers write the programs for the project, following the coding standards. In this phase the developers prepare the Source Code Document (SCD).

e) Testing Phase in Software Testing

1. First, when the BDD is prepared, the Test Engineers study the document and send a Review Report to the Business Analyst (BA).
2. A Review Report is nothing but a document prepared by the Test Engineer while studying the BDD document; the points which he cannot understand, or which are not clear, are written in that report and sent to the BA.
3. The Test Engineer then writes the Test Cases for the application.
4. With Manual Testing the product would be up to 50% defect free, whereas with Automation Testing it would be about 93% defect free.
5. In this phase the testing people prepare a document called the Defect Profile.

f) Delivery and Maintenance Phase in Software Testing

1. In this phase, after the project is done, a mail is sent to the client mentioning the completion of the project.
2. This mail is called the Software Delivery Note.
3. The project is tested by the client; this is called User Acceptance Testing.
4. The project is installed in the client's environment, and the testing done there is called Port Testing. While installing, if any problem occurs, the maintenance people write a report to the Project Manager (PM).
5. After some time, if the client wants to have some changes in the software, the software changes are done by the Maintenance team.

  
 
Software Life Cycle Stages

The internal processes in each of the following software lifecycle stage descriptions are: the Stage Kickoff Process, the Informal Iteration Process, the Formal Iteration Process, the In-Stage Assessment Process, and the Stage Exit Process.

Stage Kickoff Process

- Each stage is initiated by a kickoff meeting, which can be conducted either in person or by Web teleconference.
- The purpose of the kickoff meeting is to review the output of the previous stage, go over any additional inputs required by that particular stage, examine the anticipated activities and required outputs of the current stage, review the current project schedule, and review any open issues.
- The Primary Developer Representative is responsible for preparing the agenda and materials to be presented at this meeting.
- All project participants are invited to attend the kickoff meeting for each stage.

Informal Iteration Process

- Most of the creative work for a stage occurs here. Participants work together to gather additional information and refine stage inputs into draft deliverables.
- Activities of this stage may include interviews, meetings, the generation of prototypes, and electronic correspondence.
- All of these communications are deemed informal, and are not recorded as minutes, documents of record, controlled software, or official memoranda.
- The intent here is to encourage, rather than inhibit, the communication process.
- This process concludes when the majority of participants agree that the work is substantially complete and it is time to generate draft deliverables for formal review and comment.

Formal Iteration Process

- In this process, draft deliverables are generated for formal review and comment. Each deliverable was introduced during the kickoff process, and is intended to satisfy one or more outputs for the current stage.
- Each draft deliverable is given a version number and placed under configuration management control.
- As participants review the draft deliverables, they are responsible for reporting errors found, and any concerns they may have, to the Primary Developer Representative via electronic mail.
- The Primary Developer Representative in turn consolidates these reports into a series of issues associated with a specific version of a deliverable.
- The person in charge of developing the deliverable works to resolve these issues, then releases another version of the deliverable for review.
- This process iterates until all issues are resolved for each deliverable. There are no formal check-off / signature forms for this part of the process. The intent here is to encourage review and feedback.
- At the discretion of the Primary Developer Representative and the Primary End-user Representative, certain issues may be reserved for resolution in later stages of the development lifecycle.
- These issues are disassociated from the specific deliverable, and tagged as "open issues." Open issues are reviewed during the kickoff meeting for each subsequent stage.
- Once all issues against a deliverable have been resolved or moved to open status, the final (release) draft of the deliverable is prepared and submitted to the Primary Developer Representative.
- When final drafts of all required stage outputs have been received, the Primary Developer Representative reviews the final suite of deliverables, reviews the amount of labor expended against this stage of the project, and uses this information to update the project plan.
- The project plan update includes a detailed list of tasks, their schedule, and the estimated level of effort for the next stage.
- The stages following the next stage (out stages) in the project plan are updated to include a high level estimate of schedule and level of effort, based on current project experience.
- Out stages are maintained at a high level in the project plan, and are included primarily for informational purposes; direct experience has shown that it is very difficult to accurately plan detailed tasks and activities for out stages in a software development lifecycle.
- The updated project plan and schedule is a standard deliverable for each stage of the project.
- The Primary Developer Representative then circulates the updated project plan and schedule for review and comment, and iterates these documents until all issues have been resolved or moved to open status.
- Once the project plan and schedule have been finalized, all final deliverables for the current stage are made available to all project participants, and the Primary Developer Representative initiates the next process.

  
In-Stage Assessment Process

- This is the formal quality assurance review process for each stage.
- This process is initiated when the Primary Developer Representative schedules an in-stage assessment with the independent Quality Assurance Reviewer (QAR), a selected End-user Reviewer (usually a Subject Matter Expert), and a selected Technical Reviewer.
- These reviewers formally review each deliverable to make judgments as to the quality and validity of the work product, as well as its compliance with the standards defined for deliverables of that class.
- Deliverable class standards are defined in the software quality assurance section of the project plan.
- The End-user Reviewer is tasked with verifying the completeness and accuracy of the deliverable in terms of desired software functionality.
- The Technical Reviewer determines whether the deliverable contains complete and accurate technical information.
- The QA Reviewer is tasked solely with verifying the completeness and compliance of the deliverable against the associated deliverable class standard.
- The QAR may make recommendations, but cannot raise formal issues that do not relate to the deliverable standard.
- Each reviewer follows a formal checklist during their review, indicating their level of concurrence with each review item in the checklist.
- Refer to the software quality assurance plan for this project for deliverable class standards and associated review checklists.
- A deliverable is considered to be acceptable when each reviewer indicates substantial or unconditional concurrence with the content of the deliverable and the review checklist items.
- Any issues raised by the reviewers against a specific deliverable will be logged and relayed to the personnel responsible for generation of the deliverable.
- The revised deliverable will then be released to project participants for another formal review iteration.
- Once all issues for the deliverable have been addressed, the deliverable will be resubmitted to the reviewers for reassessment.
- Once all three reviewers have indicated concurrence with the deliverable, the Primary Developer Representative will release a final in-stage assessment report and initiate the next process.
 
Stage Exit Process

- The stage exit is the vehicle for securing the concurrence of principal project participants to continue with the project and move forward into the next stage of development.
- The purpose of a stage exit is to allow all personnel involved with the project to review the current project plan and stage deliverables, provide a forum to raise issues and concerns, and to ensure an acceptable action plan exists for all open issues.
- The process begins when the Primary Developer Representative notifies all project participants that all deliverables for the current stage have been finalized and approved via the In-Stage Assessment report.
- The Primary Developer Representative then schedules a stage exit review with the project executive sponsor and the Primary End-user Representative as a minimum.
- All interested participants are free to attend the review as well. This meeting may be conducted in person or via Web teleconference.
- The stage exit process ends with the receipt of concurrence from the designated approvers to proceed to the next stage.
- This is generally accomplished by entering the minutes of the exit review as a formal document of record, with either physical or digital signatures of the project executive sponsor, the Primary End-User Representative, and the Primary Developer Representative.

The initial steps to get a project are as follows: one of our organization's Business Analysts goes to the client's place, collects all the requirements, and negotiates with the client regarding the project. Once it is approved, he prepares documents such as the Project Proposal, Statement of Work, User Requirements Document and Business Rules. These are the initial documents for any project.

  
 


Software Testing: Questions and Answers

Q1: What are the different types of software testing?

A: Each of the following represents a different testing approach:
1. Black box testing
2. White box testing
3. Unit testing
4. Incremental testing
5. Integration testing
6. Functional testing
7. System testing
8. End-to-end testing
9. Sanity testing
10. Regression testing
11. Acceptance testing
12. Load testing
13. Performance testing
14. Usability testing
15. Install/uninstall testing
16. Recovery testing
17. Security testing
18. Compatibility testing
19. Exploratory testing
20. Ad-hoc testing
21. User acceptance testing
22. Comparison testing
23. Alpha testing
24. Beta testing
25. Mutation testing

Q2: What is glass box testing?

A: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q3: What is open box testing?

A: Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q4: What is black box testing?

A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

Q5: What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers, and then that of the test engineers. Unit testing is performed after the expected test results are met or differences are explainable/acceptable.
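As an illustration, here is a minimal developer-written unit test, sketched in Python with the standard unittest module (the function under test is invented for the example):

import unittest

def classify_triangle(a, b, c):
    """Code under unit test (illustrative): classify a triangle by side lengths."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")
    def test_isosceles(self):
        self.assertEqual(classify_triangle(3, 3, 5), "isosceles")
    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

if __name__ == "__main__":
    unittest.main()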

Q6: What is system testing?

A: System testing is black box testing performed by the Test Team; at the start of system testing, the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios that occur in a "simulated real life" test environment, and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.

Q7: What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system, to verify that the new system performs the operations correctly.
Another definition: with parallel testing, users can easily choose to run batch tests or asynchronous tests depending on the needs of their test systems. Testing multiple units in parallel increases test throughput and lowers a manufacturer's costs.

Q8: What is functional testing?

A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

Q9: What is usability testing?

A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q10: What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
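A minimal sketch of the idea in Python (both components and their interface are invented for illustration): the test exercises the interface between two separately developed components, rather than either component in isolation.

class UserStore:
    """Component A (illustrative): stores user names by id."""
    def __init__(self):
        self._users = {}
    def add(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

class Greeter:
    """Component B (illustrative): formats greetings via a UserStore."""
    def __init__(self, store):
        self.store = store
    def greet(self, user_id):
        name = self.store.get(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

# Integration test: exercise the interface between Greeter and UserStore.
store = UserStore()
store.add(1, "Ada")
greeter = Greeter(store)
assert greeter.greet(1) == "Hello, Ada!"    # known user crosses the interface
assert greeter.greet(2) == "Hello, guest!"  # missing user handled across it
print("integration tests passed")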

  
 


Q11: What is end-to-end testing?

A: Similar to system testing, the 'macro' end of the test scale is testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.

Q12: What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
Another definition: regression testing is re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
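A toy sketch of the baseline idea in Python (the function and baseline values are invented): expected results captured from a known-good release are re-checked after every change.

def tax(amount):
    """Function under regression test (illustrative)."""
    return round(amount * 0.20, 2)

# Baseline: input -> expected output, captured from the last known-good release.
BASELINE = {0: 0.0, 100: 20.0, 19.99: 4.0}

failures = [(i, tax(i), want) for i, want in BASELINE.items() if tax(i) != want]
for i, got, want in failures:
    print(f"REGRESSION: tax({i}) = {got}, baseline says {want}")
print("regression suite:", "PASS" if not failures else "FAIL")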

Q13: What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
Another definition: sanity testing is typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Q14: What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.
Another definition: a term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in the requirements documentation or in QA or Test Plans.

Q15: What is load testing?

A: Load testing is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
Another definition: load testing simulates the expected usage of a software program by simulating multiple users that access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns in order to test the system's response at peak loads.
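A minimal sketch of the concept in Python (handle_request stands in for a hypothetical service): simulated users call the service concurrently, and the total time is observed as the load grows.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for the service under load (hypothetical)."""
    time.sleep(0.01)                  # simulated processing time
    return f"ok:{user_id}"

def run_load(num_users):
    start = time.time()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        results = list(pool.map(handle_request, range(num_users)))
    elapsed = time.time() - start
    assert all(r.startswith("ok") for r in results)
    print(f"{num_users:4d} concurrent users -> {elapsed:.3f}s total")

for load in (1, 10, 50, 100):         # increase load above normal usage patterns
    run_load(load)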

Q16: What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.

Q17: What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q18: What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q19: What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q20: What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

Q21: What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q22: What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers or software QA engineers.
Another definition: alpha testing is final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (and this is called the second stage of alpha testing), the software is handed over to the software QA staff for additional testing in an environment that is similar to the intended use.

  
 


Q23: What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not by programmers, software engineers, or test engineers.
Another definition: following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

Q24: What is stress testing?

A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is stress tested, testing aims to find out how many users can be on-line at the same time without crashing the server. Stress testing tests the stability of a given system or entity by testing it beyond its normal operational capacity, in order to observe any negative results. For example, a web server may be stress tested using scripts, bots, and various denial of service tools.
Another definition: a term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Q25: What is the difference between load testing and stress testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q26: What is the difference between load testing and performance testing?

A: See the previous answer: 'load testing' is a blanket term that is often used synonymously with performance testing (as well as stress, reliability and volume testing), and it generally stops short of stress testing.

Q27: What is the difference between load testing and volume testing?

A: Likewise, 'load testing' is a blanket term that is often used synonymously with volume testing (as well as stress, performance and reliability testing); see the answer to Q25.

Q28: What is incremental testing?

A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Q29: What is software testing?

A: Software testing is a process that identifies the correctness, completeness, and quality of software. Actually, testing cannot establish the correctness of software. It can find defects, but cannot prove there are no defects.
Q30: What is automated testing?

A: Automated testing is a formally specified and controlled method of testing.

Q31: What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. It requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. It is done by programmers or by testers.

Q32: What is the difference between alpha and beta testing?

A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by a few select prospective customers or by the general public.

Q33: What is clear box testing?

A: Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q34: What is boundary value analysis?

A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.

  
 


Q35: What is ad hoc testing?

A: Ad hoc testing is a testing approach; it is the least formal testing approach.
Another definition: similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Q36: What is gamma testing?

A: Gamma testing is testing of software that has all the required features, but did not go through all the in-house quality checks. Cynics tend to refer to such software releases as "gamma testing".

Q37: What is functional testing?

A: Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

Q38: What is closed box testing?

A: Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.
Q39: What is bottom-up testing?

A: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.

Q40: How is integration testing related to unit testing?

A: First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q41: What are the pros and cons of automated testing tools?

A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile. Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task. You can learn to use automated tools with little or no outside help.

Q42: What is the difference between system testing and integration testing?

A: System testing is high level testing, and integration testing is a lower level of testing. Integration testing is completed first, not system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa. For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.

Q43: What is performance testing?

A: The term 'performance testing' is often used synonymously with stress testing, load testing, reliability testing, and volume testing. Performance testing is a part of system testing, but it is also a distinct level of testing. Performance testing verifies loads, volumes, and response times, as defined by requirements.

Q44: What is disaster recovery testing?

A: Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

Q45: Which software testing tools should I learn?

A: Learn the most popular software testing tools (i.e. LabView, LoadRunner, Rational Tools, Winrunner, etc.), and pay special attention to LoadRunner and the Rational toolset.

Q46: What is the objective of regression testing?

A: The objective of regression testing is to test that fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

Q47: How is regression testing performed: manually or with automated tools?

A: It depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.

Q48: What is exploratory testing?

A: Exploratory testing is often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Q49: What is volume testing?

A: Volume testing involves testing a software or Web application using corner cases of "task size" or input data size. The exact volume tests performed depend on the application's functionality, its input and output mechanisms, and the technologies used to build it. Sample volume testing considerations include, but are not limited to:

- If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file (a sketch follows this list).
- If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data.
- If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request.
- If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submit the form.
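As a sketch of the first consideration above (in Python; parse_file is a hypothetical function under test), the same parser is fed an empty file and a very large file:

import os, tempfile

def parse_file(path):
    """Hypothetical function under volume test: counts non-empty lines."""
    with open(path) as f:
        return sum(1 for line in f if line.strip())

def make_file(num_lines):
    fd, path = tempfile.mkstemp(text=True)
    with os.fdopen(fd, "w") as f:
        f.writelines(f"record {i}\n" for i in range(num_lines))
    return path

empty = make_file(0)           # corner case: empty input
huge = make_file(1_000_000)    # corner case: very large input (scale as needed)
assert parse_file(empty) == 0
assert parse_file(huge) == 1_000_000
print("volume tests passed")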

Q50: What is compatibility testing in the application's normal environment?

A: This means that you test an application in its normal environment, along with other standard applications, to make sure they all get along together; that is, that they don't corrupt each other's files, they don't crash, they don't exhaust system resources, they don't lock up the system, they can share the printer peacefully, etc.

Q51: What is mutation testing?

A: Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
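A toy illustration in Python (the function, the mutant and the test data are all invented): a 'bug' is deliberately introduced and the same test data is re-run to see whether the suite notices.

def max_of(a, b):
    return a if a >= b else b          # original code

def max_of_mutant(a, b):
    return a if a <= b else b          # mutant: comparison deliberately flipped

tests = [(1, 2, 2), (5, 3, 5), (7, 7, 7)]   # (input a, input b, expected)

def passes(fn):
    return all(fn(a, b) == want for a, b, want in tests)

print("original passes all tests:", passes(max_of))            # True
print("mutant detected by tests: ", not passes(max_of_mutant)) # True -> data is useful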

  
 
Methods of Testing

Software Testing can be performed in either of two styles:

1. Conventional Testing: testing is started after the coding.
2. Unconventional Testing: testing is done from the initial phase onwards.

Test case design for software testing is as important as the design of the software itself. All test cases shall be designed to find the maximum number of errors through their execution. Testing methodologies are used for designing test cases; these methodologies provide the developer with a systematic approach to testing.

Any software product can be tested in one of two ways:
1) Knowing the specific function the product has been designed to perform, tests can be planned and conducted to demonstrate that each function is fully operational, and to find and correct the errors in it.
2) Knowing the internal workings of a product, tests can be conducted to ensure that the internal operation performs according to specification, that all internal components are being adequately exercised and that, in the process, errors, if any, are eliminated.

The first test approach is called a) Black-box testing and the second is called b) White-box testing.

The attributes of both black-box and white-box testing can be combined to provide an approach that validates the software interface and also selectively assures that the internal structures of the software are correct. The black-box and white-box testing methods are applicable across all environments, architectures and applications, but unique guidelines and approaches to testing are warranted in some cases. This document covers testing GUIs and client/server architectures.

The testing methodologies applicable to test case design in the different testing phases are given below:

Type of Testing          White-box Testing    Black-box Testing
---------------------------------------------------------------
Unit Testing             Yes                  -
Integration Testing      Yes                  Yes
System Testing           -                    Yes
Acceptance Testing       -                    Yes

Nowadays one more testing methodology has emerged, called Grey Box Testing, in which both Black Box and White Box testing are performed. Testing the application functionality and also testing the application structure comes under Grey Box Testing; it can be seen as a mixture of Black Box and White Box Testing.

Method of Testing 1: White Box Testing

a) White-box testing of software is designed for close examination of procedural detail. Providing test cases that exercise specific sets of conditions and/or loops tests the logical paths through the software.

b) Unfortunately, even for a few lines of code, the number of paths can become too large and present logistic problems. Due to this, a limited number of important logical paths are selected and exercised, and important data structures are probed for validity.

c) White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. The test cases derived from white-box testing methods will:

1) Guarantee that all independent paths within a module have been exercised at least once
2) Exercise all logical decisions on their true and false sides (a sketch illustrating this follows below)
3) Execute all loops at their boundaries and within their operational bounds
4) Exercise internal data structures to ensure their validity.

d) White box testing needs to be adopted under the unit level testing strategy. It can be adopted to a limited extent under integration testing if the situation warrants it. Basis path testing and control structure testing are some of the most widely used white-box testing techniques.

e) It is the testing method in which the tester examines the application structure and how it behaves.

f) Usually the developers perform the White Box Testing.
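To illustrate point 2 of the list under c) above (exercising all logical decisions on their true and false sides), here is a small Python sketch; the function and its rule are invented for the example:

def grade(score):
    """Function under white-box test (illustrative)."""
    if score < 0 or score > 100:       # decision 1: input validation
        raise ValueError("score out of range")
    if score >= 50:                    # decision 2: pass/fail threshold
        return "pass"
    return "fail"

# Exercise each decision on both its true and false sides:
try:
    grade(-1)                          # decision 1 true (invalid input path)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
assert grade(75) == "pass"             # decision 1 false, decision 2 true
assert grade(10) == "fail"             # decision 1 false, decision 2 false
print("both sides of every decision exercised")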

  
 
Method of Testing 2: Black Box Testing

a)lack-box tests are used to demonstrate that the software functions are operational; that input
is properly accepted and output is correctly produced; and that the integrity of external
information (e.g., data files) is maintained. It enables the developer to derive sets of input
conditions (test cases) that will fully exercise all functional requirements for a program.

b)It is the test method in which the user always test the application functionality he need not to
bother about the application structure because of the customer always looks at the screens how it
is developed.

c)xsually Test Engineer will do the lack ox Testing.

d)lack-box testing uncovers errors of the following categories:

!u In-correct or missing functions


!u Interface errors
!u Errors in the data structures or external data base access
!u Performance errors
!u Initialisation and termination errors

e)lack-box testing is applied during the later stages of the testing as it purposely disregards
control structure and attention is focused on the problem domain. Test cases are to be designed to
answer the following questions:
1) Dow is functional validity tested?
2) What categories of input will make good test case?
3) Is the system particularly sensitive to certain input values?
4) Dow are the boundaries of data input isolated?
5) What data rates and data volume can the system tolerate?
6) What effect will specific combinations of data have on system operation?

f) The following black-box testing methods are practically feasible and are adopted depending on applicability:

1. Graph-based testing methods
2. Equivalence partitioning
3. Boundary value analysis

g) Black box testing (data driven or input/output driven) is not based on any knowledge of internal design or code. Tests are based on requirements and functionality. Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing.
Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:

- If an input condition specifies a range, one valid and two invalid equivalence classes are defined (a worked example follows below).
- If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
- If an input condition is boolean, then one valid and one invalid equivalence class are defined.

  


#= /    


!u Good test case reduces by more than one the number of other test cases which must be
developed
!u Good test case covers a large set of other possible cases
!u Classes of valid inputs
!u Classes of invalid inputs
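To make the first guideline concrete, here is a minimal Python sketch applying equivalence
partitioning to a hypothetical age field with a valid range of 18 to 60; the field and its limits
are invented purely for illustration.

def is_valid_age(age):
    # The function under test: accepts ages from 18 to 60 inclusive.
    return 18 <= age <= 60

# One representative test value per equivalence class.
equivalence_classes = [
    ("valid: 18..60", 35, True),    # the single valid class
    ("invalid: < 18",  5, False),   # first invalid class
    ("invalid: > 60", 75, False),   # second invalid class
]

for name, value, expected in equivalence_classes:
    actual = is_valid_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(status + ": class '" + name + "' with value " + str(value))

Three test cases stand in for the whole input domain: any other value in a class is assumed to be
handled the same way as its representative.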

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements
equivalence partitioning, since it selects test cases at the edges of a class. Rather than focusing
solely on input conditions, BVA also derives test cases from the output domain. BVA guidelines
include:
1. For input ranges bounded by a and b, test cases should include the values a and b and values
just above and just below a and b, respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise
the minimum and maximum numbers, and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.

Remember: situations on, above, or just below the edges of input, output, and condition classes
have a high probability of revealing errors. (The sketch below applies guideline 1.)
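Continuing the hypothetical age field (valid range 18 to 60) from the equivalence partitioning
sketch, guideline 1 produces the following boundary cases; the function and limits remain invented
for illustration.

def is_valid_age(age):
    return 18 <= age <= 60

boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # the lower boundary itself
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # the upper boundary itself
    (61, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_age(value) == expected, "boundary case failed: " + str(value)
print("all boundary cases passed")

Note how BVA adds value precisely where equivalence partitioning is weakest: at the edges of the
classes, where off-by-one defects cluster.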

Software Development Life Cycle (SDLC) Models

This section will discuss various models for Software Testing. Definitions of these models differ;
however, the fundamental principles are agreed on by experts and practitioners alike. There are
many models used to describe the sequence of activities that make up a Systems Development Life
Cycle (SDLC). The SDLC is used to describe the activities of both development and maintenance work
in Software Testing.

Various SDLC models include:

- Sequential models (e.g., the Waterfall model)
- Iterative models (e.g., prototyping)
- Mixed and incremental models (e.g., the Spiral model, evolutionary development, RAD)

These models would all benefit from earlier attention to the testing activity that must take place
at some point during the SDLC. Any reasonable SDLC model must allow for change, and the spiral
approach allows for this with its emphasis on slowly changing (evolving) design. We have to assume
that change is inevitable, and therefore we have to design for change.

The V-Model

The V-Model, while admittedly obscure, gives equal weight to testing rather than treating it as an
afterthought.

Initially defined by the late Paul Rook in the late 1980s, the V was included in the U.K.'s
National Computing Centre publications in the 1990s with the aim of improving the efficiency
and effectiveness of software development. It is accepted in Europe and the U.K. as a superior
alternative to the waterfall model; yet in the U.S., the V-Model is often mistaken for the
waterfall.

The V shows the typical sequence of development activities on the left-hand (downhill) side and
the corresponding sequence of test execution activities on the right-hand (uphill) side.

In fact, the V Model emerged in reaction to some waterfall models that showed testing as a
single phase following the traditional development phases of requirements analysis, high-level
design, detailed design and coding. The waterfall model did considerable damage by supporting
the common impression that testing is merely a brief detour after most of the mileage has been
gained by mainline development activities. Many managers still believe this, even though testing
usually takes up half of the project time.

Several testing strategies are available and lead to the following generic characteristics:

1) Testing begins at the unit level and works "outward" toward the integration of the entire
system.
2) Different testing techniques are appropriate at different points of the software development
cycle.

Testing is divided into the following five phases:
a) Unit Testing
b) Integration Testing
c) Regression Testing
d) System Testing
e) Acceptance Testing

The context of unit and integration testing changes significantly in Object-Oriented (OO)
projects. Class integration testing, based on sequence diagrams, state-transition diagrams, class
specifications, and collaboration diagrams, forms the unit and integration testing phase for OO
projects. For Web applications, class integration testing identifies the integration of classes to
implement certain functionality.

The meaning of system testing and acceptance testing, however, remains the same in the OO and
Web-based application context. The test case design for system and acceptance testing, however,
needs to handle the OO-specific intricacies.

Relationship Between Development and Testing Phases

Testing is planned right from the URD stage of the SDLC. The following table indicates the
planning of testing at the respective stages. For projects with a tailored SDLC, the testing
activities are also tailored according to the requirements and applicability.
The "V" diagram indicating this relationship is as follows.

Here, DRE (Defect Removal Efficiency) = A / (A + B), where A is the number of defects found by the
testing team and B is the number of defects found by customer-side people during maintenance. For
example, if testers find 90 defects and customers later find 10, DRE = 90 / (90 + 10) = 0.9.
Refined V-Model: To decrease cost and time complexity in the development process, small-scale and
medium-scale companies follow a refined form of the V-Model.

1. Unit Testing

As per the "V" diagram of the SDLC, testing begins with unit testing. Unit testing makes heavy use
of white-box testing techniques, exercising specific paths in a unit's control structure to ensure
complete coverage and maximum error detection.

Unit testing focuses verification effort on the smallest unit of software design - the unit. The
units are identified at the detailed design phase of the software development life cycle, and unit
testing can be conducted in parallel for multiple units. Five aspects are tested under unit
testing considerations (a short test sketch follows the list):

- The module interface is tested to ensure that information properly flows into and out of the
program unit under test.
- The local data structure is examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm's execution.
- Boundary conditions are tested to ensure that the module operates properly at boundaries
established to limit or restrict processing.
- All independent paths (basis paths) through the control structure are exercised to ensure that
all statements in a module have been executed at least once.
- Finally, all error-handling paths are tested.
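As a small illustration of some of these considerations, here is a hedged Python sketch using the
standard unittest module; the average function and its behaviour are invented for the example, not
taken from any real project.

import unittest

def average(values):
    # Hypothetical unit under test: arithmetic mean of a non-empty list.
    if not values:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)

class AverageUnitTest(unittest.TestCase):
    def test_interface(self):
        # Module interface: information flows in and out as expected.
        self.assertEqual(average([2, 4]), 3)

    def test_boundary_condition(self):
        # Boundary: the smallest allowed input, a single element.
        self.assertEqual(average([7]), 7)

    def test_error_handling_path(self):
        # Error-handling path: an empty list must raise a clear error.
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()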

Unit Testing Coverage Techniques

Path Coverage


The path coverage technique verifies whether each of the possible paths in each of the functions
has executed properly. A path is a set of branches of possible flow. Since a loop introduces an
unbounded number of paths, the path coverage technique employs tests that consider only a limited
number of looping possibilities (see the sketch below).
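A minimal sketch of this compromise, with an invented function containing one loop and one
decision; testing zero, one, and two iterations bounds the otherwise unbounded path count.

def sum_positive(values):
    # Hypothetical unit under test: sums only the positive numbers.
    total = 0
    for v in values:          # the loop makes full path coverage unbounded
        if v > 0:
            total += v
    return total

assert sum_positive([]) == 0        # zero loop iterations
assert sum_positive([5]) == 5       # one iteration, positive branch
assert sum_positive([-5]) == 0      # one iteration, negative branch
assert sum_positive([3, -1]) == 3   # two iterations, both branches
print("selected looping possibilities exercised")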

Statement Coverage

The statement coverage technique requires that every statement in the program be invoked at least
once. It verifies coverage at a high level rather than at the level of decision execution or
Boolean expressions. The advantage is that this measure can be applied directly to object code
and does not require processing the source code.

Decision (Branch) Coverage

The decision coverage technique seeks to identify the percentage of all possible decision
outcomes that have been considered by a suite of test procedures. It requires that every point of
entry and exit in the software program be invoked at least once. It also requires that all
possible outcomes of each decision in the program be exercised at least once.

Condition Coverage

This technique seeks to verify the accuracy of the true or false outcome of each Boolean
sub-expression. It employs tests that measure the sub-expressions independently.

Multiple Condition Coverage

This technique covers the different combinations of conditions that are interrelated (the sketch
below contrasts these coverage measures).
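The following sketch contrasts these coverage measures on a single invented function with one
compound decision; the comments note which extra test case each stronger measure demands.

def can_ship(in_stock, paid):
    # Hypothetical unit under test: one decision made of two conditions.
    if in_stock and paid:
        return True
    return False

# Statement and decision (branch) coverage: both outcomes of the decision,
# which between them execute every statement.
assert can_ship(True, True) is True      # decision outcome True
assert can_ship(True, False) is False    # decision outcome False

# Condition coverage: each individual condition must be seen both True
# and False; this extra case gives in_stock = False.
assert can_ship(False, True) is False

# Multiple condition coverage: all four combinations of the two conditions.
assert can_ship(False, False) is False
print("statement, decision, condition and multiple-condition cases pass")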
Unit Testing (COM/DCOM Technology):
The integral parts covered under unit testing will be: the Active Server Page (ASP) that invokes
the ATL component (which in turn can use C++ classes); the actual component; the interaction of
the component with the persistent store or database; and the database tables. The driver for the
unit testing of a unit belonging to a particular component or subsystem depends on the component
alone. Wherever a user interface is available, the UI called from a web browser will initiate the
testing process. If a UI is not available, then appropriate drivers (code in C++, as an example)
will be developed for testing.

Unit testing would also include testing inter-unit functionality within a component. This will
consist of two different units belonging to the same component interacting with each other. The
functionality of such units will be tested with separate unit test(s).
Each unit of functionality will be tested for the following considerations (a validation sketch
follows):
Type: Type validation takes into account things such as a field expecting alphanumeric characters
not allowing user input of anything other than that.
Presence: This validation ensures all mandatory fields are present; they should also be mandated
by the database by making the column NOT NULL (this can be verified from the low-level design
document).
Size: This validation ensures that the size limit for a float or variable-character-string input
from the user does not exceed the size allowed by the database for the respective column.
Validation: This is for any other business validation that should be applied to a specific field,
or for a field that is dependent on another field (e.g., a range validation: body temperature
should not exceed 106 degrees Fahrenheit), duplicate checks, etc.
GUI based: In case the unit is UI-based, GUI-related consistency checks like font sizes,
background color, window sizes, and message & error boxes will be checked.
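A minimal sketch of the four field-level checks just listed, using an invented body-temperature
field in Python rather than ASP/ATL; the column size and range limits are assumptions for
illustration only.

def validate_temperature(raw):
    # Presence: the field is mandatory.
    if raw is None or raw.strip() == "":
        raise ValueError("temperature is mandatory")
    # Type: the field must be numeric.
    try:
        value = float(raw)
    except ValueError:
        raise ValueError("temperature must be numeric")
    # Size: must fit the assumed database column, say NUMERIC(4,1).
    if len(raw.strip()) > 5:
        raise ValueError("temperature exceeds the column size")
    # Business validation: a plausible range check.
    if not (90.0 <= value <= 106.0):
        raise ValueError("temperature out of plausible range")
    return value

print(validate_temperature("98.6"))   # passes all four checks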

2. Integration Testing

After unit testing, modules shall be assembled or integrated to form the complete software
package, as indicated by the high-level design. Integration testing is a systematic technique for
verifying the software structure and sequence of execution while conducting tests to uncover
errors associated with interfacing.
Black-box test case design techniques are the most prevalent during integration, although a
limited amount of white-box testing may be used to ensure coverage of major control paths.
Integration testing is sub-divided as follows:
a) Top-Down Integration Testing: Top-down integration is an incremental approach to the
construction of program structure. Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module (main program). Modules subordinate to the main
control module are incorporated into the structure in either a depth-first or breadth-first
manner.
b) Bottom-Up Integration Testing: Bottom-up integration testing, as its name implies, begins
construction and testing with atomic modules (i.e., modules at the lowest level in the program
structure). Since modules are integrated from the bottom up, processing required for modules
subordinate to a given level is always available and the need for stubs is eliminated (a small
sketch contrasting stubs and drivers follows).
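As a minimal illustration of the difference, the hedged Python sketch below shows a stub standing
in for a missing subordinate module (top-down) and a driver exercising an atomic module directly
(bottom-up); the billing and tax modules are invented names.

def tax_module_stub(amount):
    # Stub: returns a canned answer in place of the unbuilt tax module.
    return 0.0

def billing_main(amount, tax_module):
    # Main control module under test; the subordinate is injected.
    return amount + tax_module(amount)

# Top-down: test the main control module first, using the stub.
assert billing_main(100.0, tax_module_stub) == 100.0

# Bottom-up: the atomic module is built and tested first, so no stub
# is needed; a simple driver exercises it directly.
def real_tax_module(amount):
    return round(amount * 0.15, 2)

assert real_tax_module(100.0) == 15.0                   # driver test
assert billing_main(100.0, real_tax_module) == 115.0    # then integrate upward
print("top-down (stub) and bottom-up (driver) steps pass")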
c) Integration Testing for Object-Oriented (OO) systems:
Thread-Based Testing: Thread-based testing follows an execution thread through objects to ensure
that classes collaborate correctly. In thread-based testing:

- the set of classes required to respond to one input or event for the system is identified;
- each thread is integrated and tested individually;
- regression tests are applied to ensure that no side effects occur.
xse ased Testing
xse based testing evaluates the system in layers. The common practice is to employ the
use cases to drive the validation process
In xse ased Testing

au Initially independent classes (i.e., classes that use very few other classes) are
integrated and tested.
au Followed by the dependent classes that use independent classes. Dere dependent
classes with a layered approach are used
au Followed by testing next layer of (dependent) classes that use the independent
classes

This sequence is repeated by adding and testing next layer of dependent classes until
entire system is tested.
Integration Testing for Web applications: Collaboration diagrams, screens, and report layouts are
matched to the OOAD, and the associated class integration test case report is generated.

3. Regression Testing

Each time a new module is added as part of integration testing, new data flow paths may be
established, new I/O may occur, and new control logic may be invoked. These changes may cause
problems with functions that previously worked flawlessly. In the context of an integration test
strategy, regression testing is the re-execution of some subset of tests that have already been
conducted, to ensure that changes have not propagated unintended side effects.
Regression testing may be conducted manually, by re-executing a subset of all test cases. The
regression test suite (the subset of tests to be executed) contains three different classes of
test cases:

- A representative sample of tests that will exercise all software functions.
- Additional tests that focus on software functions that are likely to be affected by the change.
- Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large. Therefore,
the regression test suite shall be designed to include only those tests that address one or more
classes of errors in each of the major program functions. It is impractical and inefficient to
re-execute every test for every program function once a change has occurred (a selection sketch
follows).
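One simple way to build such a suite is to tag each test case with the components it exercises
and select by what changed; the sketch below is illustrative only, with invented test IDs and
component names.

regression_suite = [
    {"id": "TC01", "components": {"login"}},
    {"id": "TC07", "components": {"billing"}},
    {"id": "TC12", "components": {"billing", "tax"}},
    {"id": "TC20", "components": {"reports"}},
]

def select_regression_tests(changed_components):
    # Re-run every test that touches at least one changed component.
    return [t["id"] for t in regression_suite
            if t["components"] & changed_components]

print(select_regression_tests({"billing"}))   # -> ['TC07', 'TC12']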

4. System Testing

After the software has been integrated (constructed), sets of high-order tests shall be
conducted. System testing verifies that all elements mesh properly and that the overall system
function and performance is achieved.
The purpose of system testing is to fully exercise the computer-based system. The aim is to verify
that all system elements work properly and to validate conformance against the SRS. The type(s) of
testing shall be chosen depending on the customer / system requirements.
The different types of tests that come under system testing are listed below:

- Compatibility / Conversion Testing: In cases where the software developed is a plug-in to an
existing system, the compatibility of the developed software with the existing system has to be
tested. Likewise, the conversion procedures from the existing system to the new software are to be
tested.

- Configuration Testing: Configuration testing includes either or both of the following:
  - testing the software with the different possible hardware configurations
  - testing each possible configuration of the software

  If the software itself can be configured (e.g., components of the program can be omitted or
  placed in separate processors), each possible configuration of the software should be tested.
  If the software supports a variety of hardware configurations (e.g., different types of I/O
  devices, communication lines, memory sizes), then the software should be tested with each type
  of hardware device and with the minimum and maximum configurations.

- Documentation Testing: Documentation testing is concerned with the accuracy of the user
documentation. This involves:
  i) review of the user documentation for accuracy and clarity
  ii) testing the examples illustrated in the user documentation, by preparing test cases on the
  basis of these examples and testing the system

- Facility Testing: Facility testing is the determination of whether each facility (or
functionality) mentioned in the SRS is actually implemented. The objective is to ensure that all
the functional requirements as documented in the SRS are accomplished.

- Installability Testing: Certain software systems have complicated procedures for installing the
system - for instance, the system generation (sysgen) process on IBM mainframes. The testing of
these installation procedures is part of system testing. Proper packaging of the application,
configuration of various third-party software, and database parameter settings are some issues
important for easy installation.
  Note that it may not be practical to devise test cases for certain reliability factors. For
  example, if a system has a downtime objective of two hours or less per forty years of operation,
  there is no known way of testing this reliability factor.

- Performance Testing: Performance testing is designed to test the run-time performance of
software within the context of an integrated system. Performance testing occurs throughout all
phases of testing. Even at the unit level, the performance of an individual module is assessed as
white-box tests are conducted. However, performance testing is complete only when all system
elements are fully integrated and the true performance of the system is ascertained as per the
customer requirements.

- Performance Testing for Web Applications: The most realistic strategy for rolling out a Web
application is to do so in phases. Performance testing must be an integral part of designing,
building, and maintaining Web applications. Automated testing tools play a critical role in
measuring, predicting, and controlling application performance; there is a paragraph on automated
tools available for testing Web applications at the end of this document.
  In the most basic terms, the final goal for any Web application set for high-volume use is for
  users to consistently have:
  i) continuous availability
  ii) consistent response times, even during peak usage times
  Performance testing has five manageable phases:
  i) architecture validation
  ii) performance benchmarking
  iii) performance regression
  iv) performance tuning and acceptance
  v) continuous performance monitoring, necessary to control performance and manage growth

- Procedure Testing: If the software forms part of a large and not completely automated system,
the interfaces of the developed software with the other components in the larger system shall be
tested. These may include procedures to be followed by:
  i) the human operator
  ii) the database administrator
  iii) the terminal user
  These procedures are to be tested as part of system testing.

- Recovery Testing: Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed. If recovery is automatic
(performed by the system itself), re-initialisation, checkpointing mechanisms, data recovery, and
restart are each evaluated for correctness. If recovery requires human intervention, the time
required to repair is evaluated to determine whether it is within acceptable limits.

- Reliability Testing: The various software-testing processes share the goal of testing software
reliability. "Reliability testing", which is a part of system testing, encompasses the testing of
any specific reliability factors that are stated explicitly in the SRS. If the reliability factors
are stated as, say, a mean time to failure (MTTF) of 20 hours, it is possible to devise test cases
using mathematical models.

- Security Testing: Security testing attempts to verify that protection mechanisms built into a
system will protect it from improper penetration. During security testing, the tester plays the
role(s) of the individual who desires to penetrate the system. Security testing involves designing
test cases that try to penetrate the system using all possible mechanisms.

- Security Testing (Web applications): In the case of web applications, one has to take into
account testing with the appropriate firewall set-up. For data security, one has to take into
consideration data transfer checksums, encryption or the use of digital certificates, MD5 hashing
of all vulnerable data, and database integrity. For user security, encrypted passwords, audit
trail logs containing who, where, why, when, and what information, auto log-out based on system
specifications (e.g. 5 minutes of inactivity), and the display of user information on the UI can
all be taken care of programmatically in the design and code.

- Serviceability Testing: Serviceability testing covers the serviceability or maintainability
characteristics of the software. The requirements stated in the SRS may include:
  i) service aids to be provided with the system, e.g., storage-dump programs, diagnostic programs
  ii) the mean time to debug an apparent problem
  iii) the maintenance procedures for the system
  iv) the quality of the internal-logic documentation
  Test cases are to be devised to ensure the coverage of the stated aspects.

- Storage Testing: Storage testing ensures that the storage requirements are within the specified
bounds - for instance, the amounts of primary and secondary storage the software requires and the
sizes of temporary files that get created.

- Stress Testing: Stress tests are designed to confront programs with abnormal situations. Stress
testing executes a system in a manner that demands resources in abnormal quantity, frequency, or
volume. Test cases may be tailored by keeping some of the following examples in view:
  i) input data rates may be increased by an order of magnitude, to determine how input functions
  will respond
  ii) test cases that may cause excessive hunting
  iii) test cases that may cause thrashing in a virtual operating system
  iv) test cases that create disk-resident data
  v) test cases that require maximum memory or other resources
  To achieve this, the software is subjected to heavy volumes of data and its behaviour is
  observed.

- Stress Testing (Web applications): This refers to testing system functionality while the system
is under unusually heavy or peak load; it is similar to validation testing but is carried out in a
"high-stress" environment. This requires some idea of the expected load levels of the Web
application. One criterion for web applications would be the number of concurrent users of the
application.

- Usability Testing: Usability testing is an attempt to uncover usability problems in the software
involving the human factor. Examples:
  i) Is each user interface suited to the intelligence, educational background, and environmental
  pressures of the end user?
  ii) Are the outputs of the program meaningful, useable, storable, etc.?
  iii) Are the error messages meaningful and easy to understand?

- Usability Testing (Web Applications): The intended audience determines the "usability" testing
needs of the Web site. Additionally, such testing should take into account the current state of
the Web and Web culture.

- Volume Testing: Volume testing ensures that the software:
  i) can handle the volume of data specified in the SRS
  ii) does not crash with heavy volumes of data, but gives an appropriate message and/or makes a
  clean exit
  To achieve this, the software is subjected to heavy volumes of data and its behaviour is
  observed. Examples:
  i) a compiler would be fed an absurdly large source program to compile
  ii) a linkage editor might be fed a program containing thousands of modules
  iii) an operating system's job queue would be filled to capacity
  iv) if the software is supposed to handle files spanning multiple volumes, enough data is
  created to cause the program to switch from one volume to another
  As a whole, the test cases shall try to test the extreme capabilities of the programs and
  attempt to break the program, so as to establish a sturdy system.

- Link Testing (for web-based applications): This type of testing determines whether the site's
links to internal and external Web pages are working. A Web site with many links to outside sites
will need regularly scheduled link testing, because Web sites come and go and URLs change. Sites
with many internal links (such as an enterprise-wide intranet, which may have thousands of
internal links) may also require frequent link testing.

- HTML Validation (for web-based applications): The need for this type of testing is determined by
the intended audience, the type of browser(s) expected to be used, and whether the site delivers
pages based on browser type or targets a common denominator. There should be adherence to the HTML
programming guidelines as defined in Qualify.

- Load Testing (for web-based applications): If there is a large number of interactions per unit
time on the Web site, testing must be performed under a range of loads to determine at what point
the system's response time degrades or fails. The Web server software and configuration settings,
CGI scripts, database design, and other factors can all have an impact (a minimal load-measurement
sketch appears after this list).

- Validation or Functional Testing (for web applications): This is typically a core aspect of
testing: determining whether the Web site functions correctly as per the requirements
specifications. Sites utilising CGI-based dynamic page generation or database-driven page
generation will often require more extensive validation testing than static-page Web sites.

- Extensibility / Promote-ability Testing: The software can be moved from one run-time environment
to another without requiring modifications to the software, e.g. the application can move from the
development environment to a separate test environment.
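Referenced from the load testing item above, here is a minimal load-measurement sketch in Python.
It is illustrative only: the target URL and user count are placeholders, and real load testing is
normally done with dedicated tools rather than hand-rolled scripts.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://www.example.com/"   # placeholder target
CONCURRENT_USERS = 10             # assumed peak-load level

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    times = list(pool.map(timed_request, range(CONCURRENT_USERS)))

print("requests:", len(times))
print("average response time: %.3f s" % (sum(times) / len(times)))
print("worst response time:   %.3f s" % max(times))

Raising CONCURRENT_USERS across runs and watching the worst and average times reveals the point at
which response time degrades.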

5. Acceptance Testing

When custom software is built for one customer, a series of acceptance tests is conducted to
enable the customer to validate all the requirements. Acceptance tests are conducted at the
development site or at the customer site, depending upon the requirements and mutually agreed
principles. Acceptance testing may be conducted by the customer, depending on the type of project
and the contractual agreement. A series of acceptance tests is conducted to enable the customer to
validate all requirements as per the user requirement document (URD).

The Waterfall Model

The waterfall model derives its name from the cascading effect from one phase to the next, as
illustrated in Figure 1.1. In this model each phase has a well-defined starting and ending point,
with identifiable deliverables to the next phase.
Note that this model is sometimes referred to as the linear sequential model or the software life
cycle.
The model consists of six distinct stages, namely:

1. Requirements Analysis and Definition:
(a) The problem is specified along with the desired service objectives (goals).
(b) The constraints are identified.
2. Specification: The system specification is produced from the detailed definitions of (a) and
(b) above. This document should clearly define the product function.
Note that in some texts, the requirements analysis and specification phases are combined and
represented as a single phase.
3. System and Software Design: The system specifications are translated into a software
representation. The software engineer at this stage is concerned with:
a) Data structure
b) Software architecture
c) Algorithmic detail and
d) Interface representations
The hardware requirements are also determined at this stage, along with a picture of the overall
system architecture. By the end of this stage the software engineer should be able to identify the
relationships between the hardware, the software, and the associated interfaces. Any faults in the
specification should ideally not be passed 'downstream'.
4. Implementation and Testing: At this stage the designs are translated into the software domain.
a) Detailed documentation from the design phase can significantly reduce the coding effort.
b) Testing at this stage focuses on making sure that any errors are identified and that the
software meets its required specification.
5. Integration and System Testing: All the program units are integrated and tested to ensure that
the complete system meets the software requirements. After this stage the software is delivered to
the customer. [Deliverable - the software product is delivered to the client for acceptance
testing.]
6. Maintenance: This is usually the longest stage of the software life cycle. In this phase the
software is updated to:
a) meet the changing customer needs
b) adapt to accommodate changes in the external environment
c) correct errors and oversights previously undetected in the testing phases
d) enhance the efficiency of the software
Observe that feedback loops allow for corrections to be incorporated into the model. For example,
a problem or update in the design phase requires a 'revisit' to the specification phase. When
changes are made at any phase, the relevant documentation should be updated to reflect that
change.

Advantages of the Waterfall Model
a) Testing is inherent to every phase of the waterfall model.
b) It is an enforced, disciplined approach.
c) It is documentation-driven; that is, documentation is produced at every stage.

Disadvantages of the Waterfall Model
The waterfall model is the oldest and the most widely used paradigm. However, many projects rarely
follow its sequential flow, due to the inherent problems associated with its rigid format. Namely:
a) It only incorporates iteration indirectly, so changes may cause considerable confusion as the
project progresses.
b) As the client usually has only a vague idea of exactly what is required from the software
product, the waterfall model has difficulty accommodating the natural uncertainty that exists at
the beginning of a project.
c) The customer only sees a working version of the product after it has been coded. This may
result in disaster if undetected problems precipitate to this stage.

The Spiral Model

Developed by Barry Boehm in 1988, the spiral model provides the potential for rapid development of
incremental versions of the software. In the spiral model, software is developed in a series of
incremental releases. During early iterations, the incremental release might be a paper model or
prototype.
Each iteration consists of Planning, Risk Analysis, Engineering, Construction & Release, and
Customer Evaluation:

- Customer Communication: tasks required to establish effective communication between developer
and customer.

- Planning: tasks required to define resources, timelines, and other project-related information.

- Risk Analysis: tasks required to assess both technical and management risks.

- Engineering: tasks required to build one or more representations of the application.

- Construction & Release: tasks required to construct, test, install, and provide user support
(e.g., documentation and training).

- Customer Evaluation: tasks required to obtain customer feedback, based on evaluation of the
software representations created during the engineering stage and implemented during the
installation stage.

Static Testing Techniques

1. Formal Technical Reviews

A formal technical review is conducted by the software quality assurance group. A review typically
examines only a small part of the software project, and usually only one developer is responsible
for the artifact under review.
The artifact is examined on various levels, the first of which is compliance with the requirements
of the software. This includes things like function and logic, as well as implementation.
The artifact must also conform to the standards of the process used on the project. This ensures
that all artifacts of the project are developed in a uniform manner.
Typically a review will last two hours. The review may consist of walkthroughs, code inspections,
or any other examination. Since the purpose of a review is to find errors, a review can be
difficult to control; care must be taken to ensure that no hard feelings occur as a result.
2. Code Walkthroughs

A source code walkthrough is often called a technical code walkthrough or a peer code review. In
the typical scenario, a developer invites his technical lead, a database administrator, and one or
more peers to a meeting to review a set of source modules prior to production implementation.
Often the modified code is indicated after the fact on a hardcopy listing with annotations or a
highlighting pen, or within the code itself with comments.
A code walkthrough is an effective tool in the areas of quality assurance and education. The
developer is exposed to alternative methods and processes as the technical lead and database
administrator suggest and discuss improvements to the code. The technical lead is assured of an
acceptable level of quality, and the database administrator is assured of an acceptable level of
database performance. The result is better performance of the developer, his programs, and the
entire application.
Despite all the benefits of source code walkthroughs, few organizations implement and enforce them
as a shop standard. Many excuses are given, but each has a practical solution.

Benefits

- Improved code quality is ensured by the enforcement of coding standards.
- Improved application performance is ensured by the review of all database access paths by the
DBA and technical lead, and by the improvement or removal of questionable coding practices.
- Improved developer performance is ensured by the mentoring of the developer by the DBA and
technical lead and by the discussion of coding style and technique. Peer reviews are an important
component of a continuing training plan. How else can a developer hone his skills? Few training
vendors offer formal sessions for developers with more than five years of experience, and fewer
employers take advantage of those sessions.

Excuses
Here are a few of the many excuses offered for not enforcing code walkthroughs as a shop standard:

- Volumes of data: Technical leads and database administrators are unwilling to spend time wading
through large Natural and COOL listings searching for a few simple source changes. Database
administrators need to see quickly which database accesses were added or changed.

- Deleted code: Deleted code cannot be reviewed, but leaving the code in place, converted to
comments, can render an otherwise well-structured module illegible.

- Manual effort: The amount of effort required of the developer to create useful documentation for
a technical walkthrough is significant. The manual procedures involved are difficult, tedious,
time-consuming, and error-prone.

- Lack of consistency: There must be consistency. If code is accepted by one reviewer but rejected
by another, or accepted on one occasion but rejected on another, developers will become confused
and frustrated. There must be consensus among the reviewers. Once standards are published,
developers can ensure compliance, allowing a much more positive and less time-consuming review
process. Reviewers can then direct more attention to unusual and complex coding techniques.

- Developers are reluctant to be involved: Without training, experience, and focus, a technical
lead can allow a code walkthrough to degrade into something resembling a lynching, and
constructive criticism deteriorates into destructive criticism. The process must be considered by
all parties an opportunity to train the developer, enlighten the technical lead and database
administrator, and maintain or improve the quality and performance of the application system. It
should be a win-win situation for all persons involved.

3. Code Reviews

Code reviews are a great way to improve both your software and your developers. Traditionally,
code reviews or peer reviews take place on a regular basis, once a week for instance. Developers
swap code they produced during the week and go through a checklist to look for bugs, security
problems, performance issues, adherence to coding standards, and other issues. The developer then
creates a report and goes over what he or she has found in the peer's code. This process allows
developers to learn the tricks other developers have acquired over the years.
Traditional code reviews certainly do a lot to improve the quality of the software developed, and
of the developers themselves, but they also take quite a bit of time. Many of the issues can
easily be picked up by an automated code review tool, such as CFDEV's tool for reviewing
ColdFusion (CFML) code. CFDEV's tool also allows you to easily write your own rules; most rules
can be written in just 4 lines of CFML code. In addition, each issue the tool finds has an
associated document explaining why it is a problem and how to fix it.
While automated code review tools can cut down the time it takes to review code, there are certain
tasks that an automated tool just can't do, such as reviewing algorithm design or logic issues. To
get the full benefit of code reviews you should still involve the human eye.
4. Code Inspections

Software inspections have long been considered an effective way to detect and remove defects from
software. However, there are costs associated with carrying out inspections, and these costs may
outweigh the expected benefits.
It is important to understand the trade-offs between these costs and benefits. We believe that
they are driven by several mechanisms, both internal and external to the inspection process.
Internal factors are associated with the manner in which the steps of the inspection are organized
into a process (structure), as well as the manner in which each step is carried out (technique).
External ones include differences in reviewer ability and code quality (inputs), and interactions
with other inspections, the project schedule, personal calendars, etc. (environment).
Most of the existing literature on inspections discusses how to get the most benefit out of
inspections by proposing changes to the process structure, but with little or no empirical work
conducted to demonstrate how they worked better and at what cost.
We hypothesized that these changes would affect the defect detection effectiveness of the
inspection, but that any increase in effectiveness would have a corresponding increase in
inspection interval and effort. We evaluated this hypothesis with a controlled experiment on a
live development project using professional software developers.
We found that these structural changes were largely ineffective in improving the effectiveness of
inspections, but certain treatments dramatically increased the inspection interval. We also noted
a large amount of unexplained variance in the data, suggesting that other factors must have a
strong influence on inspection performance.
On further investigation, we found that the inputs into the process (reviewers and code units)
account for more of the variation than the original treatment variables, leading us to conclude
that better techniques by which reviewers detect defects, not better process structures, are the
key to improving inspection effectiveness.

Levels of Testing

There are basically five levels of Software Testing:

1) Unit Level Testing

1. In this level of testing, the small functions and modules of the project are tested.
2. Done by developers.
3. This is the most 'micro' scale of testing, used to test particular functions or code modules.
It is typically done by the programmer and not by testers, as it requires detailed knowledge of
the internal program design and code. It is not always easily done unless the application has a
well-designed architecture with tight code; it may require developing test driver modules or test
harnesses.

2) Module Level Testing

1. In this level of testing, the small functions that together make up a module are tested.
2. Done by the team lead.
3) Integration Level Testing

1. In this level of testing, all the modules that make up an application are tested.
2. Done by the test manager.
3. Testing of combined parts of an application to determine if they function together correctly.
The 'parts' can be code modules, individual applications, client and server applications on a
network, etc. This type of testing is especially relevant to client/server and distributed
systems. Integration can be top-down or bottom-up:

- Top-down testing starts with main and successively replaces stubs with the real modules.
- Bottom-up testing builds larger module assemblies from primitive modules.
- Sandwich testing is mainly top-down, with bottom-up integration and testing applied to certain
widely used components.

4) System Level Testing

5) User Acceptance Level Testing

1. Testing conducted in the presence of the customer is known as User Acceptance Testing.
Before that, we need to know a couple of key definitions regarding company terminology:
PROJECT: software developed for a specific customer, where exact rules - the customer's
requirements - must be followed.
PRODUCT: software based on general (market) requirements, that is, on our own requirements.

The Psychology of Testing

The purpose of this section is to explore differences in perspective between tester and developer
(buyer & builder) and to explain some of the difficulties management and staff face when working
together to develop and test computer software.

Different Mindsets

- We have already discussed that one of the primary purposes of testing is to find faults in
software, i.e., it can be perceived as a destructive process.
- The development process, on the other hand, is a naturally creative one, and experience shows
that staff working in development have a different mindset to that of testers.
- We would never argue that one group is intellectually superior to another, merely that they view
systems development from another perspective.
- A developer is looking to build new and exciting software based on the user's requirements and
really wants it to work (first time if possible). He or she will work long hours and is usually
highly motivated and very determined to do a good job.
- A tester, however, is concerned that the user really does get a system that does what they want,
is reliable, and doesn't do things it shouldn't. He or she will also work long hours looking for
faults in software, but will often find the job frustrating as their destructive talents take
their toll on the poor developers.
- At this point there is often much friction between developer and tester. The developer wants to
finish the system, but the tester wants all faults in the software fixed before their work is
done.

Developers:

- Are perceived as very creative - they write code without which there would be no system!
- Are often highly valued within an organization.
- Are sent on relevant industry training courses to gain recognized qualifications.
- Are rarely good communicators (sorry guys)!
- Can often specialize in just one or two skills (e.g. VB, C++, Java, SQL).

Testers:

- Are perceived as destructive - only happy when they are finding faults!
- Are often not valued within the organization.
- Usually do not have any industry-recognized qualifications, until now.
- Usually require good communication skills, tact, and diplomacy.
- Normally need to be multi-talented (technical, testing, team skills).

Communication Between Developers and Testers

It is vitally important that a tester can explain and report a fault to a developer in a
professional manner to ensure the fault gets fixed. The tester must not antagonize the developer.
Tact and diplomacy are essential, even if you've been up all night trying to test the wretched
software.

Economics of Testing

This section looks at some of the economic factors involved in Software Testing activities.
Although some research has been done to put forward the ideas discussed, few organizations have
yet provided accurate figures to confirm these theories.
Major retailers and car manufacturers often issue product recall notices when they realize that
there is a serious fault in one of their products. Perhaps you can think of other examples. The
fixing of the so-called millennium bug was probably one of the greatest product recalls in
history.

Boehm's research suggests that the cost of fixing faults increases dramatically as the software
product moves towards field use. If a fault is detected at an early stage of design, it may be
only the design documentation that has to change, resulting in perhaps just a few hours' work.
However, as the project progresses and other components are built based on the faulty design, more
work is obviously needed to correct the fault once it has been found. This is because design work,
coding, and testing will have to be repeated for components of the system previously thought to
have been completed.

If faults are found in documentation, then development based on that documentation may generate
many related faults, which multiply the effect of the original fault. Analysis of specifications
during test preparation (early test design) often brings faults in specifications to light; this
also prevents faults from multiplying, i.e., if removed earlier they will not propagate into other
design documents.

In summary, we suggest that it is generally cost-effective to use resources on testing throughout
the project life cycle, starting as soon as possible. The alternative is potentially to incur much
larger costs associated with the effort required to correct and re-test major faults. Remember
that the amount of resources allocated to testing is a management decision based on an assessment
of the associated risks. However, few organizations are able to accurately compare the relative
costs of Software Testing and the costs associated with re-work.

Defect Profile (Bug Report)

1. Defect - nonconformance to requirements or to the functional / program specification.
2. Bug - a fault in a program which causes the program to perform in an unintended or
unanticipated manner.
3. The bug report comes into the picture once the actual testing starts.
4. If a particular test case's actual and expected results mismatch, we report a bug against that
test case.
5. Each bug has a life cycle. When a tester first identifies a bug, he gives it the status "New".
6. Once the developer team lead goes through the bug report, he assigns each bug to the concerned
developer and changes the bug status to "Assigned". The developer then starts working on it,
changing the bug status to "Open"; once it is fixed, he changes the status to "Fixed". In the next
cycle we check all the fixed bugs: if a bug is really fixed, the concerned tester changes its
status to "Closed", otherwise to "Reopen". Finally, "Deferred" marks those bugs which are going to
be fixed in the next iteration.
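The life cycle just described can be pictured as a small state machine. The sketch below models it
in Python; the transition table is inferred from the narrative above and is an assumption, not a
universal standard.

ALLOWED_TRANSITIONS = {
    "New":      {"Assigned", "Deferred"},
    "Assigned": {"Open"},
    "Open":     {"Fixed"},
    "Fixed":    {"Closed", "Reopen"},
    "Reopen":   {"Open"},
    "Deferred": {"Assigned"},
    "Closed":   set(),
}

class Bug:
    def __init__(self, bug_id):
        self.bug_id = bug_id
        self.status = "New"            # the tester reports the bug

    def move_to(self, new_status):
        if new_status not in ALLOWED_TRANSITIONS[self.status]:
            raise ValueError(self.status + " -> " + new_status + " not allowed")
        self.status = new_status

bug = Bug("BUG-101")
for step in ("Assigned", "Open", "Fixed", "Closed"):
    bug.move_to(step)   # lead assigns, developer opens and fixes, tester closes
print(bug.bug_id, "ended as", bug.status)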
See the following sample template used for bug reporting.
7. The name of the bug report file also follows a naming convention, such as:
ProjectName_BugReport_ReleaseVersionNumber_ReleaseDate.
8. All the placeholder words should be replaced with the actual project name, version number, and
release date - for example, Bugzilla Bug Report 1.2.0.3 01_12_04.
9. After seeing the name of the file, anybody can easily recognize which project's bug report it
is, which version, and on which date it was released.
10. This reduces the complexity of opening a file just to find which project it belongs to.
11. The report maintains the details of Project ID, Project Name, Release Version Number, and Date
at the top of the sheet.
12. For each bug it maintains:
a) Bug ID
b) Test Case ID
c) Module Name
d) Bug Description
e) Reproducible (Y/N)
f) Steps to Reproduce
g) Summary
h) Bug Status
i) Severity
j) Priority
k) Tester Name
l) Date of Finding
m) Developer Name
n) Date of Fixing
Bug ID: This column holds the unique bug number for each bug. Each organization follows its own
standard for the format of the Bug ID.
Test Case ID: This column gives the reference to the test case document, identifying the test case
against which the bug was reported. With this reference we can navigate very easily to the details
in the test case document.
Module Name: This refers to the module in which the bug was raised. Based on this information we
can estimate how many bugs there are in each module.
Bug Description: This gives the summary of the bug - what the bug is, and what actually happened
instead of the expected result.
Reproducible: This column is very important for developers: from it they know whether the bug can
be reproduced or not. If it is reproducible, it is very easy for the developer team to debug;
otherwise they will have to try to find it. It is simply Yes or No.
Steps to Reproduce: This column specifies the complete steps to reproduce the bug; we can call it
navigation. It is very useful both for testers and developers when reproducing and debugging the
bug. Only if the Reproducible column is Yes do we fill in Steps to Reproduce; otherwise this
column is null.
Summary: This column gives the detailed description of the bug.
Bug Status: This column is very important in the bug report; it is used to track the bug at each
stage (see the state-machine sketch above). The statuses are:
1. New - given by the tester when he finds the bug.
2. Assigned - given by the developer team lead after assigning the bug to the concerned developer.
3. Open - given by the developer while he is fixing the bug.
4. Fixed - given by the developer after he has fixed the bug.
5. Closed - given by the tester if the bug is fixed in the new build.
6. Reopen - given by the tester if the bug is not fixed in the new build.
7. Deferred - the bug is going to be fixed in the next iteration.
Severity: This column tells the effect of the bug on the application; it is usually given by the
testers. For severity, too, various organizations follow different conventions. Here is a sample
scale based on the bug's impact:
Very High - given when the tester is not able to continue testing at all (e.g., the application
does not open).
High - given when the tester is not able to test this module, but can test some other module.
Medium - given when the tester is not able to progress in the current module.
Low - cosmetic issues, such as a spelling mistake or a look-and-feel problem.
Priority: This column is filled in by the test lead, who considers the severity of the bug, the
time schedule, and the risks associated with the project, especially for that bug. Based on all
these aspects he sets the priority to Very High, High, Medium, or Low.
Tester Name: This column holds the name of the tester who identified the particular bug; using it,
developers can easily communicate with that tester if there is any confusion in understanding the
bug description.
Date of Finding: This column contains the date when the tester reported the bug, so that we can
get a report of how many bugs were reported on a particular day.
Developer Name: This column contains the name of the developer who fixed the particular bug. This
information is very useful when a bug is marked fixed but is still present: the testers can
contact the concerned developer to clear up the doubt.
Date of Fixing: This column contains the date when the developer fixed the bug, so that we can get
a report of how many bugs were fixed on a particular day.
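One way to picture a single row of this template is as a Python dataclass whose fields mirror the
column list above; the sample values are invented.

from dataclasses import dataclass

@dataclass
class BugReportRow:
    bug_id: str
    test_case_id: str
    module_name: str
    bug_description: str
    reproducible: bool
    steps_to_reproduce: str   # empty when the bug is not reproducible
    summary: str
    bug_status: str           # New/Assigned/Open/Fixed/Closed/Reopen/Deferred
    severity: str             # Very High / High / Medium / Low
    priority: str             # set by the test lead
    tester_name: str
    date_of_finding: str
    developer_name: str
    date_of_fixing: str

row = BugReportRow(
    "BUG-101", "TC-17", "Login", "Login fails for a valid user",
    True, "1. Open the app 2. Enter valid credentials 3. Press Login",
    "Valid login rejected", "New", "High", "High",
    "A. Tester", "2004-12-01", "B. Developer", "")
print(row.bug_id, "-", row.bug_status)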

Software Requirements Specification (SRS) Document

1. Purpose: This document specifies the requirements for a system and the methods to be used to
ensure that each requirement has been met.
2. Scope: This paragraph describes the scope of the requirements covered by this document. It
shall depict the context of the covered requirements with respect to other related and interfacing
systems, to illustrate which requirements are not covered herein.
3. Notational and Other Document Conventions: This section provides a list of notational and other
document conventions used within the document. Include a depiction of the symbology used in
diagrams along with the meaning of each symbol. Provide a description of special text usage such
as fixed-width fonts and alert and warning icons.
4. System Overview: This paragraph shall briefly state the purpose of the system to which this
document applies. It shall describe the general nature of the system; summarize the history of
system development, operation, and maintenance; identify the project sponsor, acquirer, user,
developer, and support agencies; and identify current and planned operating and user sites.

5. Required States and Modes.
a) If the system is required to operate in more than one state or mode having requirements
distinct from other states or modes, this paragraph shall identify and define each state and mode.
b) Examples of states and modes include: idle, ready, active, post-use analysis, training,
degraded, emergency, backup, wartime, peacetime.
c) The distinction between states and modes is arbitrary.
d) A system may be described in terms of states only, modes only, states within modes, modes
within states, or any other scheme that is useful.
e) If no states or modes are required, this paragraph shall so state, without the need to create
artificial distinctions.
f) If states and/or modes are required, each requirement or group of requirements in this
specification shall be correlated to the states and modes.

6. Requirements
6.1. Functional and Performance.
a) This paragraph shall be divided into subparagraphs to itemize the requirements associated with
each capability of the system.
b) A "capability" is defined as a group of related requirements. The word "capability" may be
replaced with "function," "subject," "object," or another term useful for presenting the
requirements.
c) This paragraph shall identify a required system capability and shall itemize and uniquely
identify the requirements associated with the capability.
d) If the capability can be more clearly specified by dividing it into constituent capabilities,
the constituent capabilities shall be specified in subparagraphs.
e) The requirements shall specify the required behavior of the system and shall include applicable
parameters, such as response times, throughput times, other timing constraints, sequencing,
accuracy, capacities (how much/how many), priorities, continuous operation requirements, and
allowable deviations based on operating conditions.
f) The requirements shall include, as applicable, required behavior under unexpected, unallowed,
or "out of bounds" conditions; user roles responsible for performing functions; requirements for
error handling; and any provisions to be incorporated into the system to provide continuity of
operations in the event of emergencies.

6.2. Organizational. This paragraph shall specify organizations, locations, roles, and other user
attributes, and the functional requirements that each must execute.

6.3. Security and Privacy Protection.
a) This paragraph shall specify the system requirements, if any, concerned with maintaining
security and privacy.
b) These requirements shall include, as applicable, the security/privacy environment in which the
system must operate, the type and degree of security or privacy to be provided, the
security/privacy risks the system must withstand, required safeguards to reduce those risks, the
security/privacy policy that must be met, the security/privacy accountability the system must
provide, access instructions for user roles, and the criteria that must be met for
security/privacy certification/accreditation.

6.4. Human-Factors Engineering (Ergonomics).
a) This paragraph shall specify the system requirements, if any, included to accommodate the
number, skill levels, duty cycles, training needs, or other information about the personnel who
will use or support the system.
b) Examples include requirements for the number of simultaneous users and for built-in help or
training features.
c) Also included shall be the human-factors engineering requirements, if any, imposed on the
system.
d) These requirements shall include, as applicable, considerations for the capabilities and
limitations of humans; foreseeable human errors under both normal and extreme conditions; and
specific areas where the effects of human error would be particularly serious.
e) Examples include requirements for the color and duration of error messages, the physical
placement of critical indicators or keys, and the use of auditory signals.

6.5. Operations and Maintenance. This paragraph shall state requirements associated with
operations and maintenance such as system availability, backup and recovery, monitoring and
tuning, installation and configuration, auditing, batch scheduling, support, enhancement, and
defect repairs.

6.6. System External Interface.
a) This paragraph shall identify the required external interfaces of the system (that is,
relationships with other systems that involve sharing, providing, or exchanging data).
b) The identification of each interface shall include an application-unique identifier and shall
designate the interfacing systems by name, number, version, and documentation references, as
applicable.
c) The identification shall state which systems have fixed interface characteristics (and
therefore impose interface requirements on interfacing systems) and which are being developed or
modified (thus having interface requirements imposed on them).
d) One or more interface diagrams shall be provided to depict the interfaces.

6.7. Application-Unique Identifier of Interface.
a) This paragraph shall identify a system external interface by application-unique identifier,
shall briefly identify the interfacing system, and shall be divided into subparagraphs as needed
to state the requirements imposed on the system to achieve the interface.
b) Interface characteristics of the other systems involved in the interface shall be stated as
assumptions or as "When [the system not covered] does this, the system shall...," not as
requirements on the other systems.
c) This paragraph may reference other documents (such as data dictionaries, standards for
communication protocols, and standards for user interfaces) in place of stating the information
here.
d) The requirements shall include the following, as applicable, presented in any order suited to
the requirements, and shall note any differences in these characteristics from the point of view
of the interfacing system (such as different expectations about the size, frequency, or other
characteristics of data elements):

More on Software Requirements Specification Document

Configuration Management

The procedure for managing the test object and related testware should be described. Version
management of the testware is ultimately the test team's responsibility.
One of the issues that must be addressed in configuration management is that modified objects may only be
installed in the test environment with the test team's permission, after action has been taken on the
basis of the reported errors. This prevents a test from failing because a different version of the object
under test is unexpectedly being used. The change management documentation also has to be
updated, since the documentation is intimately connected to the testware.
In addition, the way in which change requests are dealt with must be indicated. If change requests result in
extra tests being required, a bottleneck could be created in the project. Therefore, the test team
must be informed so that any new tests can be included in the test plan.
Software Configuration Management (SCM)
A current definition would say that SCM is the control of the evolution of complex systems. More
pragmatically, it is the discipline that enables us to keep evolving software products under control,
and thus contributes to satisfying quality and deadline constraints.
SCM emerged as a discipline soon after the so-called "software crisis" was identified, i.e. when it
was understood that programming does not cover everything in Software Engineering (SE), and
that other issues were hampering SE development, like architecture, building, evolution and so
on.
SCM emerged, during the late 70s and early 80s, as an attempt to address some of these issues;
this is why there is no clear boundary to SCM topic coverage. In the early 80s SCM focused on
programming-in-the-large (versioning, rebuilding, composition), in the 90s on programming-in-the-many
(process support, concurrent engineering), and in the late 90s on programming-in-the-wide (web
remote engineering). Currently, a typical SCM system tries to provide services in the following
areas:
a. Managing a repository of components. There is a need to store the different components of
a software product and all their versions safely. This topic includes version management, product
modeling and complex object management.
b. Helping engineers in their usual activities. SE involves applying tools to objects (files). SCM
products try to provide engineers with the right objects, in the right location. This is often
referred to as workspace control.
c. Compilation and derived object control, which is a major issue.
d. Process control and support. Later (end of the 80s), it became clear that a major issue, if not the
major issue, is related to people. Traditionally, change control is an integral part of an SCM
product; currently the tendency is to extend process support capability beyond these aspects.
SCM History
In the 80s, the first systems were built in-house and focused closely on file control. Most of
them were built as a set of Unix scripts over RCS (a simple version control tool) and Make (for
derived object control).
From this period we can mention DSEE, the only serious commercial product, which introduced
the system model concept, an ancestor of Architecture Description Languages; NSE,
which introduced workspace and cooperative work control; Adele, which introduced a
specialized product model with automatic configuration building; and Aide de Camp (now
TRUE Software), which introduced the change set. The first real SCM products appeared in the
early 90s.

These systems are much better. They often use a relational database but still rely on file control;
they provide workspace support, but little or no process support. This generation included
ClearCase (the DSEE successor), which introduced the virtual file system, and Continuus, which
introduced, along with Adele, explicit process support. Continuus and ClearCase are currently the
market leaders.
In the second half of the 90s, process support was added and most products matured. This period
saw the consecration of SCM, as a mature, reliable and essential technology for successful
software development; the SCM market was over $1 billion sales in 1998.
Many observers consider SCM as one of the very few Software Engineering successes.
SCM Core Concepts
Most SCM products are based on a tiny core of concepts and mechanisms. Here is a summary of
these concepts.
Versioning

In the early 70s, the first version control systems appeared. The idea is simple: each time a file is
changed, a revision is created. A file thus evolves as a succession of revisions, usually referred to
by successive numbers (foo.1, foo.2, ...). From any revision, a new line of change can be
created, leading to a revision tree. Each line is called a branch (the branches issued from foo.2 are
called foo.2.1, foo.2.2, and so on). At the same time, three services were provided: history, deltas, multi-user
management and, a bit later, merging facilities.
History simply records when and by whom a revision was created, along with a comment. Deltas were
provided because two successive revisions are often very similar (98% similar on average). The
idea is to store only the differences (the 2% that differ). Of course, this vastly reduces the
amount of required storage.
Multi-user management consists of preventing concurrent changes from overlapping each other.
A user who wants to change a file creates a copy and sets a lock on that file (check-out); only
that user can create a new revision of that file (check-in).
Despite the fact that all this is 25 years old, it is still the basis of the vast majority of today's SCM
systems.
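
To make the revision-tree idea concrete, here is a minimal Python sketch (illustrative only; real tools such as RCS add deltas, locking and merging on top of this):

# A toy revision tree: each change to a file creates a new numbered
# revision; a branch can be started from any existing revision.
class Revision:
    """One numbered revision of a file, RCS-style."""
    def __init__(self, number, content, author, comment):
        self.number = number      # "1", "2", ... or "2.1" on a branch
        self.content = content
        self.author = author      # history: who made the change
        self.comment = comment    # history: why

history = []                      # the revision tree, flattened for printing

def check_in(number, content, author, comment):
    rev = Revision(number, content, author, comment)
    history.append(rev)
    return rev

# The main line of change: foo.1, foo.2, ...
check_in("1", "hello", "alice", "initial revision")
check_in("2", "hello world", "bob", "extend greeting")
# A branch issued from revision 2: foo.2.1, foo.2.2, ...
check_in("2.1", "hello, world!", "alice", "punctuation fix on a branch")

for rev in history:
    print(rev.number, rev.author, "-", rev.comment)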
The Data Model
From the beginning, the focus was on file control. It is no surprise to see that, even today, the
data model proposed by most vendors resembles a file system, plus a few attributes, often
predefined. This is archaic and contrasts with today's data modeling.
Configurations
A configuration is often defined as a set of files which together constitute a valid software
product. The question is twofold: (1) what is the real nature of a configuration, and (2) how to
build it, prove its properties and so on.
Surprisingly, in most systems, a configuration is not an object, but "something" special. This is a
consequence of a weak data model in which complex objects and explicit relationships are not
available.
The traditional way to build a configuration is by changing an existing one. No correctness
criteria are available.
In the change-set approach, a change, even if it involves many files, receives a logical name (like
"FixBug243"). Later on, a configuration can be produced as a set of change-sets to add or
remove from a base configuration (like "C2 = C1 + FixBug243 - Extension2"), C1 being the base
configuration and C2 the new one. In the Adele system, a configuration is built by interpreting a
semantic description which looks like a query: the system is in charge of finding the needed
components based on their attributes and their dependencies. Neither of these approaches is
available in the vast majority of today's systems.
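
A minimal Python sketch of the change-set idea (the component and change-set names follow the example above and are purely illustrative):

# A configuration maps components to versions; a change-set names the
# component versions that one logical change touches or introduces.
c1 = {"core": "1.0", "ui": "2.3", "ext": "0.9"}        # base configuration

change_sets = {
    "FixBug243":  {"core": "1.1"},     # the fix changes one component
    "Extension2": {"ext": "0.9"},      # the feature introduced this component
}

def add(config, name):        # C + change-set: take the changed versions
    new = dict(config)
    new.update(change_sets[name])
    return new

def remove(config, name):     # C - change-set: drop what it introduced
    return {c: v for c, v in config.items() if c not in change_sets[name]}

# C2 = C1 + FixBug243 - Extension2
c2 = remove(add(c1, "FixBug243"), "Extension2")
print(c2)                     # {'core': '1.1', 'ui': '2.3'}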
Engineering Support
Practitioners rejected the early systems because they were helping the
configuration manager and bothering everybody else. A major move toward acceptance was to
consider the software programmer as a major target customer: helping him or her in the usual SE
activities became a basic service.
Building and Rebuilding

The aim of rebuilding is to reduce compilation time after a change, i.e. to recompile,
automatically, only what is needed. Make is the ancestor of a large family of systems based on
knowledge of the "dependencies" between files and their last modification dates. Make proved to
be extremely successful and versatile, but difficult to use and inadequate in many respects. All
attempts to do substantially better have so far failed. Most systems "only" generate the
makefiles.
Workspaces
A workspace is simply a part of a file system where the files of interest (with respect to a given task, such as
debugging or development) are located. The workspace acts as a sphere where the programmer can
work, isolated from the outside world, for the duration of the task. The SCM system is responsible for
providing the right files (often a configuration) in the right file system to let users work (almost)
independently, and for saving the changes automatically when the job is done. It is this service that
really convinced practitioners that SCM was there to help them.
Concurrent Engineering
A workspace is a support for concurrent engineering, since many concurrent workspaces may
contain and change the same objects (files). Thus there is a need for (1) resynchronizing objects
and (2) controlling concurrent work.
Resynchronizing, so far, means merging source files. The mergers found in today's tools simply
compare (on a line-by-line basis) the two files to merge and a file that is historically common to
both (the common ancestor). If a line is present in a file but not in the common ancestor, it was
added and must be kept; if a line is present in the ancestor but not in the file, it was removed
and must stay removed. If changes occurred at different places, the merger is able to decide
automatically what the merged file should be. This algorithm is simply a heuristic that produces
as output a set of lines with absolutely no guarantees about correctness. Nevertheless, mergers
proved to work fine and to be very useful, and they became almost unavoidable.
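
A Python sketch of this line-based merge heuristic (it ignores line ordering and conflict detection, which real mergers must handle; it only shows the keep/drop rules described above):

def merge(ancestor, ours, theirs):
    merged = []
    for line in ancestor:
        # A line removed in either descendant stays removed.
        if line in ours and line in theirs:
            merged.append(line)
    # A line added in either descendant (absent from the ancestor) is kept.
    for line in ours + theirs:
        if line not in ancestor and line not in merged:
            merged.append(line)
    return merged

base   = ["a", "b", "c"]
yours  = ["a", "b", "c", "d"]      # added "d"
mine   = ["a", "c"]                # removed "b"
print(merge(base, yours, mine))    # ['a', 'c', 'd']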
Controlling concurrent work means defining who can perform a change, when, on which
attribute of which object. It is one of the topics of process support that, currently, no tool really
provides.
Process Support
Process support means (1) the "formal" definition of what is to be performed on what (a process
model), and (2) the mechanisms to help/force reality to conform to this model.
A State Transition Diagram (STD) describes, for a product type, the legal succession of states
(and optionally which actions produce the transitions), and thus describes the legal way for entities
of that type to evolve. Since SCM aims to control software product evolution, it is no surprise that
many process models are based on STDs. This is product-centered modeling. Indeed, experience
shows that complex and fine-grained process models can be defined that way.
Unfortunately, experience also shows that STDs do not provide a global view of a process, and
that large processes are difficult to define using (only) STDs.
The alternative way to model processes is the so-called activity-centered modeling, in which the
activity plays the central role, and models express the data and control flow between activities.
This kind of modeling is preferred if a global view is required, if a large process is to be
structured, or if products are not the main concern. But this approach lacks precision for product
control. Experience has demonstrated that both are needed, but integration is not easy, and the
few tools that intended to do so only propose two independent models. High-level process
models mixing both are not currently available in commercial products, but have been
experimented with.
Most Appreciated and Most Missing Features
Most appreciated: clearly the number one was change control, activity control, and workspace support. Then
come, in differing orders: global view, traceability, etc. Worst aspect, most missing feature:
clearly the number one was better and more flexible process support, along with concurrent and distributed
engineering support. Then come: scalability, efficiency, incrementality, cross-platform
capability, PDM compatibility, interoperability, etc. It is interesting to see that both the most
appreciated and the most criticized features concern process support. Almost no comments
concerned the basic aspects of SCM, like versioning and merging. Practitioners think the tools are
good and stable enough but still lack efficiency, scalability and interoperability with other SE
tools. It is likely that, in the near future, the distinctive features between tools will be,
functionally, their strength in process support and, technically, their capability to grow with the
company's needs, to inter-operate with other company tools, and to support concurrent, distributed
and remote engineering.

Software Testing

Requirements Traceability Matrix

What is the need for a Requirements Traceability Matrix?

An automation requirement in an organization initiates the development of custom-built software. The
client who ordered the product specifies the requirements to the development team, and the
process of software development gets started.

In addition to the requirements specified by the client, the development team may also propose
various value-added suggestions that could be added to the software. But keeping track
of all the requirements specified in the requirements document, and checking whether all the
requirements have been met by the end product, is a cumbersome and laborious process.

The remedy for this problem is the Requirements Traceability Matrix.

What is a Requirements Traceability Matrix?



Requirements tracing is the process of documenting the links between the user requirements for
the system you're building and the work products developed to implement and verify those
requirements. These work products include Software requirements, design specifications,
Software code, test plans and other artifacts of the systems development process. Requirements
tracing helps the project team to understand which parts of the design and code implement the
user's requirements, and which tests are necessary to verify that the user's requirements have
been implemented correctly.

The Requirements Traceability Matrix document is the output of the requirements management phase.
What is an RTM?

The Requirements Traceability Matrix (RTM) captures the complete user and system
requirements for the system, or a portion of the system. The RTM captures all requirements and
their traceability in a single document, and is a mandatory deliverable at the conclusion of the
lifecycle.

The RTM is used to record the relationship of the requirements to the design, development,
testing and release of the software as the requirements are allocated to a specific release of the
software. Changes to the requirements are also recorded and tracked in the RTM. The RTM is
maintained throughout the lifecycle of the release, and is reviewed and baselined at the end of
the release.
It is a very useful document for tracking time, change management and risk management during
software development.
Here is a sample template of a Requirements Traceability Matrix, which gives a
detailed idea of the importance of the RTM in the SDLC.

[Sample RTM template]


Tracing the Impact of Requirement Changes
For any change that happens after the system has been built, we can trace its impact on the
application through the RTM. The RTM is also the mapping between the actual requirements
and the design specification. This helps us trace the changes that may happen to
the design document during the development process of the application. Here we give each
document a unique ID, which is associated with the particular requirement, so that we can easily trace
that document.

In any case, if you want to change a requirement in the future, you can use the RTM to make
the respective changes and easily judge how many associated test scripts will be
changing.
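
A small Python sketch of that impact analysis (the matrix rows and IDs below are invented for illustration):

# Each RTM row links a requirement to a design spec and a test script.
rtm = [
    ("REQ-001", "DS-1.1", "TS-001"),
    ("REQ-001", "DS-1.2", "TS-002"),
    ("REQ-002", "DS-2.1", "TS-003"),
]

def impact(requirement):
    rows = [r for r in rtm if r[0] == requirement]
    designs = {r[1] for r in rows}
    scripts = {r[2] for r in rows}
    return designs, scripts

designs, scripts = impact("REQ-001")
print(f"Changing REQ-001 touches {len(designs)} design spec(s) and "
      f"{len(scripts)} test script(s): {sorted(scripts)}")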

Requirements Traceability Matrix Document Template

Introduction
This document presents the requirements traceability matrix (RTM) for the Project Name
[workspace/workgroup] and provides traceability between the [workspace/workgroup] approved
requirements, design specifications, and test scripts.
The table below displays the RTM for the requirements that were approved for inclusion in
[Application Name/Version]. The following information is provided for each requirement:
1. Requirement ID
2. Risks
3. Requirement Type (User or System)
4. Requirement Description
5. Trace to User Requirement / Trace from System Requirement
6. Trace to Design Specification
7. UT - Unit Test Cases
8. IT - Integration Test Cases
9. ST - System Test Cases
10. UAT - User Acceptance Test Cases
11. Trace to Test Script

The following is a sample Requirements Traceability Matrix.

Requirements Traceability Matrix

Disadvantages of not using a Traceability Matrix
What happens if the Traceability factor is not considered while developing the software?
a) The system that is built may not have the necessary functionality to meet the customers' and
users' needs and expectations
b) If there are modifications in the design specifications, there is no means of tracking the
changes
c) If there is no mapping of test cases to the requirements, a major defect in the system may be
missed
d) The completed system may have "extra" functionality that was not specified in
the design specification, resulting in wasted manpower, time and effort
e) If the code components that constitute the customer's high-priority requirements are not
known, the areas that need to be worked on first may not be known, thereby decreasing the
chances of shipping a useful product on schedule
f) A seemingly simple request might involve changes to several parts of the system, and if a proper
traceability process is not followed, the work needed to satisfy the request may not be correctly
evaluated
Applicability of the Traceability Matrix
Is the Traceability Matrix applicable only to big projects?
The Traceability Matrix is an essential part of any software development process; hence,
irrespective of the size of the project, whenever there is a requirement to build software, this
concept comes into focus.
The biggest advantage of the Traceability Matrix is backward and forward traceability: at any
point of time in the development life cycle, the status of the project and the modules that have
been tested can easily be determined, thereby reducing the possibility of speculation about the
status of the project.

Developing the Traceability Matrix
How is the Traceability Matrix developed?

In the diagram, the design is carried out based on the requirements, and the code is developed
based on the design. Finally, the tests are created based on these. At any point of time, there is
always the provision for checking which test case was developed for which design, and for which
requirement that design was carried out. Such traceability, in the form of a matrix, is the
Traceability Matrix.

In the design document, there may be a design description A which can be traced back to
Requirement Specification A, implying that design A takes care of Requirement A. Similarly,
in the test plan, Test Case A takes care of testing Design A, which in turn takes care of
Requirement A, and so on.
There have to be references from the design document back to the requirements document, from the test
plan back to the design document, and so on.

Usually, unit test cases will have traceability to the design specification, and system test cases
/acceptance test cases will have traceability to the requirements specification. This helps ensure
that no requirement is left uncovered (either un-designed or un-tested).

Requirements traceability enhances project control and quality. It is a process of documenting
the links between the user requirements for a system and the work products developed to implement
and verify those requirements. It is a technique to support an objective of requirements
management: to make certain that the application will meet end users' needs.

The Traceability Matrix in Testing

Where exactly does the Traceability Matrix get involved in the broader picture of testing?
The Traceability Matrix is created even before any test cases are written, because it is a complete
list indicating what has to be tested. Sometimes there is one test case for each requirement, and
sometimes several requirements can be validated by one test scenario. This depends purely on the kind of
application that is available for testing.
Test Coverage and the Traceability Matrix
The Traceability Matrix gives a cross-reference between a test case document and the
functional/design specification document. This document helps to identify whether the test case
document contains tests for all the identified unit functions from the design specification. From
this matrix we can derive the percentage of test coverage, taking into account the
functionalities tested and not tested.
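
For instance, the coverage percentage can be read straight off such a matrix; a Python sketch with invented data:

# Requirement -> test cases that trace to it (empty list = not covered).
matrix = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],                 # not yet covered
}

covered = sum(1 for cases in matrix.values() if cases)
print(f"Test coverage: {covered / len(matrix):.0%}")   # Test coverage: 67%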
According to the above table, the requirement specifications are clearly spelt out in the
requirement column. The functional specification noted as BRD section 6.5.8 tells about the
requirement that has been specified in the requirements document (i.e., it tells the requirement
for which a test case is designed). With the help of the design specification, it is possible to
drill down to the level of identifying the low-level design for the requirement specified.
Based on the requirements, code is developed. At any point of time, the program corresponding
to a particular requirement can easily be traced back. The test cases corresponding to the
requirements are available in the Test Case column.

Software Testing

Test Plan

What is a Test Plan?

The test plan is the scheduler for the entire testing process. The test plan describes the approach to all
development, unit, integration, system, qualification and acceptance testing needed to complete a
project properly.
You should be aware that many people use the term 'test plan' to describe a document detailing
individual tests for a component of a system. We are introducing the concept of high-level test
plans to show that there are a lot more activities involved in effective testing than just writing
test cases.

Why Test Plans Are Needed
Establishing a test plan based on business requirements and design specification is essential for
the successful acceptance of a project's deliverables. It is important to note that the higher risk a
project has, the greater the need for a commensurate amount of testing. The project Schedule &
Task Plan and the project Staffing Plan need to account for testing requirements during the
planning and execution phases of the project.

Testing validates the requirements defined for the project's objectives and deliverables. Though
IT project practices require testing throughout the execution phase of a project, undoubtedly the
most important testing occurs at the end of development and prior to deployment. Orderly test
plans that specify the criteria for test passage or failure are critical to a project's success.

Preparing a Test Plan
Prepare a Test Plan describing the scope, processes and criteria for testing particular deliverables
of the project. The plan should describe the following elements:
1. Provide an overview:
- Describe project objectives and background (providing some context for the testers).
- Give a short system description.
- Define the Test Plan objectives.
- Provide testing references as required.
- Note any outstanding issues, assumptions, known risks and contingencies.
2. Define the test scope (features to be tested; features not to be tested).
3. Describe test methodologies.
4. Describe the testing approach:
- Describe test data (test cases, system and user interface test cases, user acceptance test plans,
status reports of testing, and a test outcome report at the end of testing detailing the overall
results of testing progress).
- Provide all test documents.
- Validate test requirements.
- Define test control procedures (for example, a classification code and prioritization scheme for
error tracking and resolution; tracking mechanisms for test results such as a test case validation
log or test error log).
5. Define and describe test phases. For each test phase such as unit, integration, system, etc.,
identify definition, participants, data sources, entrance and exit criteria, requirements and work
products.
6. Define the test environment (description of hardware, software, location, staffing and
training).
7. Schedule testing tasks and make resource assignments.
8. Define test approvals process and result distributions.

Guidelines


The size and nature of the project requirements should determine the scale of the test plan. The
actual test methods and techniques must be adapted to the type of project being developed and
to the testing environment and tools that are available. Project managers need to think about the
purpose of the testing, keeping in mind the process and stages for testing. Best practices dictate
that testing be done early and often.

IEEE Test Plan Outline


(ANSI/IEEE Standard 829-1983)
This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:
"¦ document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do each
task, and any risks requiring contingency planning."
This standard specifies the following test plan outline:
Test Plan Identifier
1. A unique identifier
Introduction
1. Summary of the items and features to be tested
2. Need for and history of each item (optional)
3. References to related documents such as project authorization, project plan, QA plan,
configuration management plan, relevant policies, relevant standards
4. References to lower level test plans
Test Items
1. Test items and their version
2. Characteristics of their transmittal media
3. References to related documents such as requirements specification, design specification,
users guide, operations guide, installation guide
4. References to bug reports related to test items
5. Items which are specifically not going to be tested (optional)
Features to Be Tested
1. All software features and combinations of features to be tested
2. References to test-design specifications associated with each feature and combination of
features
Features Not to Be Tested
1. All features and significant combinations of features which will not be tested
2. The reasons these features won't be tested
Approach
1. Overall approach to testing
2. For each major group of features or combinations of features, specify the approach
3. Specify major activities, techniques, and tools which are to be used to test the groups
4. Specify a minimum degree of comprehensiveness required
5. Identify which techniques will be used to judge comprehensiveness
6. Specify any additional completion criteria
7. Specify techniques which are to be used to trace requirements
8. Identify significant constraints on testing, such as test-item availability, testing-resource
availability, and deadlines
Item Pass/Fail Criteria
1. Specify the criteria to be used to determine whether each test item has passed or failed testing
Suspension Criteria and Resumption Requirements
1. Specify criteria to be used to suspend the testing activity
2. Specify testing activities which must be redone when testing is resumed
Test Deliverables
1. Identify the deliverable documents: test plan, test design specifications, test case
specifications, test procedure specifications, test item transmittal reports, test logs, test incident
reports, test summary reports
2. Identify test input and output data
3. Identify test tools (optional)
Testing Tasks

1. Identify tasks necessary to prepare for and perform testing
2. Identify all task interdependencies
3. Identify any special skills required
Environmental Needs
1. Specify necessary and desired properties of the test environment: physical characteristics of
the facilities including hardware, communications and system software, the mode of usage (i.e.,
stand-alone), and any other software or supplies needed
2. Specify the level of security required
3. Identify special test tools needed
4. Identify any other testing needs
5. Identify the source for all needs which are not currently available
Testing is performed using hardware with the following minimum system requirements:
- 133 MHz Pentium
- Microsoft Windows 98
- 32 MB RAM
- 10 MB available hard disk space
- A display device capable of displaying 640x480 (VGA) or better resolution
- Internet connection via a modem or network
Responsibilities
1. Identify groups responsible for managing, designing, preparing, executing, witnessing,
checking and resolving
2. Identify groups responsible for providing the test items identified in the Test Items section
3. Identify groups responsible for providing the environmental needs identified in the
Environmental Needs section
Staffing and Training Needs
1. Specify staffing needs by skill level
2. Identify training options for providing necessary skills
Schedule
1. Specify test milestones
2. Specify all item transmittal events
3. Estimate time required to do each testing task
4. Schedule all testing tasks and test milestones
5. For each testing resource, specify its periods of use
Test scheduling and status reporting are performed by the project lead and project
administrator to monitor progress towards meeting product testing schedules and the release date, as
well as to identify any project scheduling risks. Each build will be tested before the next
build date. Software testing schedules will coincide with module development and release
schedules.
Risks and Contingencies
1. Identify the high-risk assumptions of the test plan
2. Specify contingency plans for each
Approvals
1. Specify the names and titles of all persons who must approve the plan
2. Provide space for signatures and dates

Software Testing

Manual Testing Documents


Mainly in manual testing, the following documents are required:


i) Test Policy -- QC
ii) Test Strategy -- company level
iii) Test Factors -- QA
iv) Test Methodology -- TL

I) Test Policy: This is a company level document and will be developed by QC people (top
management). This document defines the "testing objective" of the organization.

- Small-scale company test policy


- Testing definition:

Verification + Validation

- Testing process:

Proper planning before testing

- Testing standard:

One defect per 280 LOC / one defect per 10 function points (see the sketch after this list)

- Testing measurements:

QAM (Quality Assessment Measurements),


TMM (Test Management Measurements),
PCM (Process Capability Measurements)
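
A small Python sketch of checking delivered quality against the testing standard stated above (one defect per 280 LOC); the figures are illustrative:

def defect_density_ok(defects, loc, loc_per_defect=280):
    # True if the observed density meets the one-defect-per-280-LOC standard.
    return defects <= loc / loc_per_defect

print(defect_density_ok(defects=30, loc=10_000))   # True: ~35.7 allowed
print(defect_density_ok(defects=50, loc=10_000))   # False: over the standard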

II) Test Strategy: This is also a company level document, developed by QA people. It defines the
testing approach followed by the testing team.
Components of a Test Strategy:

- Scope and Objective:

The need for testing and its purpose

- Business Issues:

Budget control for testing in terms of time and cost


100% -- project cost
64% for development & maintenance and 36% for testing

- Test Approach:

Defines the mapping between development stages and testing issues

- Test Matrix (TM) / Test Responsibilities Matrix (TRM)


- Test Deliverables:

Documents required to be prepared during testing of a project


Ex: Test Methodology, Test Plan, Test Cases, etc.

- Roles & Responsibilities:

Names of the jobs in the testing team and their responsibilities

- Communication and Status Reporting:

Required negotiations between two consecutive jobs in the testing team

- Automation and Testing Tools:

The need for automation in organization-level project testing

- Testing Measurements and Metrics:

QAM, TMM & PCM

- Defect Reporting and Tracking:

Required negotiations between the testing team and the development team

- Risks & Mitigations:

Possible risks and the mitigations to resolve them (a risk indicates a future failure)

- Change and Configuration Management:


How to handle change requests coming from customers during testing and maintenance

- Training Plan:

The need for training testers before the start of every project's testing

III) Test Factors: To define quality software, the quality analyst defines 15 testing issues. A test factor
or issue is a testing issue to apply to the software in order to achieve quality.

The test factors are:

- Authorization:

Whether a user is valid or not to connect to the application

- Access Control:

Whether an authorized user can access specific services

- Audit Trail:

Metadata about user operations

- Continuity of Processing:

Inter-process communication (IPC) during execution

- Correctness:

Meeting customer requirements in terms of inputs and outputs

- Coupling:

Co-existence with other existing software

- Ease of Use:

User-friendliness of the screens

- Ease of Operation:

Installation, uninstallation, dumping, exporting, etc.

- File Integrity:

Creation of internal files (e.g., backups)


- Reliability:

Recovery from abnormal situations

- Portability:

Runs on different platforms

- Performance:

Speed of processing

- Service Levels:

Order of services

- Methodology:

Following standards

- Maintainability:

Long-term serviceability to customers

Test Factors vs. Testing Techniques


- Authorization:

Security testing; functionality or requirements testing

- Access Control:

Security testing; if there is no separate security team, then functionality or requirements testing

- Audit Trail:

Functionality or requirements testing, error handling testing

- Correctness:

Functionality or requirements testing

- Continuity of Processing:

Execution testing, operations testing (white box)


- Coupling:

Intersystem testing

- Ease of Use:

Usability testing

- Ease of Operation:

Installation testing

- File Integrity:

Recovery and error handling testing

- Reliability:

Recovery and stress testing

- Portability:

Compatibility and configuration testing

- Performance:

Load & stress, storage & data volume testing

- Service Levels:

Functionality or requirements testing

- Maintainability:

Compliance testing

- Methodology:

Compliance testing

IV) Test Methodology


It is a project level document, developed by the QA (Quality Assurance)
leader or the PM (Project Manager). It is a refined form of the test strategy. To prepare the test
methodology, the QA or PM depends on the factors below.

- Step 1:
Determine the project type, such as traditional, outsourcing or maintenance (depending on the
project type, QA decreases the number of columns in the TRM)

- Step 2:

Determine the application requirements (depending on the application requirements, QA will


decrease the number of rows in the TRM)

- Step 3:

Determine the tactical risks (depending on the risks, QA decreases the number of factors in the selected list)

- Step 4:

Determine the scope of the application (depending on expected future enhancements, QA adds some


of the deleted factors back to the TRM)

- Step 5:

Finalize the TRM for the current project

- Step 6:

Prepare the system test plan (defining the schedule for the finalized approach) -- done by the Test
Lead

- Step 7:

Prepare module test plans if required

Software Testing

Manual Testing Documents


v) Test Plan


vi) Test Procedure
vii) Test Case / Test Script
viii) Test Execution

V) Test Process

PET Process (Process Experts, Tools and Technology): This testing process was developed by HCL
and approved by the QAI forum of India. It is a refinement of the V-Model that defines the testing
process along with the development stages.
VI) Test Plan
After completion of test initiation, the TL of the testing team concentrates on test
planning to define "what to test", "how to test", "when to test" and "who will test". The test plan
author follows the workbench (process) below to prepare the test plan document.

1. Team formation: In general, the test planning process starts with testing team formation. In this step,
the test plan author depends on the factors below:
a) Availability of testers
b) Test duration
c) Availability of test environment resources
Case study: test duration
C/S, Web, ERP ---- 3 to 5 months of functional & system testing
Team size 3:1 (developer-to-tester ratio)
2. Identify tactical risks: After completion of team formation, the test plan author studies the possible risks
that may arise during testing of the project.
Ex: Risk 1: Lack of knowledge of the domain
Risk 2: Lack of budget (time)
Risk 3: Lack of resources
Risk 4: Delay in delivery
Risk 5: Lack of development process rigor (seriousness of the development team)
Risk 6: Lack of test data (sometimes test engineers conduct ad hoc testing)
Risk 7: Lack of communication
Risk 7: Lack of communication
3. Prepare test plan: After completion of team formation and risk analysis, the test plan author
prepares the test plan document in IEEE format.
FORMAT:
1) Test Plan ID: Unique number
2) Introduction: About the project and test team
3) Test Items: Modules/features/services/functions
4) Features to be tested: The modules for which test cases are to be prepared
5) Features not to be tested: Which ones and why not
6) Approach: Required list of testing techniques (depends on the TRM)
7) Feature Pass or Fail Criteria: When a feature passes and when a feature fails
8) Suspension Criteria: Possible abnormal situations that may arise during testing of the above features
9) Testing Tasks (prerequisites): Necessary tasks to do before the start of every feature's testing
10) Test Deliverables: Test documents required to be prepared during testing
11) Test Environment: Required hardware and software, including testing tools
12) Staff and Training Needs: Names of selected test engineers
13) Responsibilities: Work allocation
14) Schedule: Dates & times
15) Risks and Mitigations
16) Approvals: Signatures of the test plan author and QA or PM

4. Review test plan: After completion of test plan preparation, the test plan author reviews the
document for completeness and correctness. In this review, the responsible person conducts
coverage analysis. Topics in the test plan review are based on:
a) BRS & SRS based coverage
b) Risk based coverage
c) TRM based coverage
Test Design
After completion of test plan finalization, the selected test engineers attend the
required training sessions to understand the business logic. This type of training is provided by a
business analyst, functional lead or business consultant. After completing the required training
sessions, the test engineers prepare test cases for their responsible modules.
There are three methods to prepare core-level test cases (UI, functionality, input domain, error
handling, and manual support testing). They are:
1) Business logic based test case design (80%)
2) Input domain based test case design (15%)
3) User interface based test case design (5%)
1) Business logic based test case design: In general, functionality and error handling test cases are
prepared by test engineers based on the use cases in the SRS. A use case describes how a user uses
specific functionality in the application. A test case describes a test condition to apply to the
application in order to validate it. To prepare this type of test case from use cases, we can follow
the approach below.
Step 1: Collect the responsible use cases
Step 2: Select a use case and its dependencies
2.1 Identify the entry condition (base state)
2.2 Identify the input required (test data)
2.3 Identify the exit condition (end state)
2.4 Identify the output and outcome (expected)
2.5 Study the normal flow (call states)
2.6 Identify alternative flows and exceptions
Step 3: Prepare test cases based on the above study
Step 4: Review the test cases for completeness and correctness
Case Study 1
From a use case and data model, a login process accepts a user ID and password. The user ID
takes lowercase alphanumeric characters, 4 to 16 characters long. The password allows lowercase
alphabets, 4 to 8 characters long.
Test Case 1: Successful entry of user ID

BVA (size)                          ECP (type)


Min (4)    -- Pass                  Valid:   0-9, a-z
Max (16)   -- Pass                  Invalid: A-Z, special characters and blank spaces
Max-1 (15) -- Pass
Min+1 (5)  -- Pass
Max+1 (17) -- Fail
Min-1 (3)  -- Fail

Test Case 2: Successful entry of password

BVA (size)                          ECP (type)


Min (4)    -- Pass                  Valid:   a-z
Max (8)    -- Pass                  Invalid: A-Z, 0-9, special characters and blank spaces
Max-1 (7)  -- Pass
Min+1 (5)  -- Pass
Max+1 (9)  -- Fail
Min-1 (3)  -- Fail

Test Case 3: Successful login

User ID --------------- Password --------------- Criteria


Valid ----------------- Valid ------------------ Pass
Valid ----------------- Invalid ---------------- Fail
Invalid --------------- Invalid ---------------- Fail
Valid ----------------- Blank ------------------ Fail
Blank ----------------- Valid ------------------ Fail
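
The BVA/ECP analysis above translates directly into executable checks; a Python sketch for the user ID field (the validator is hypothetical, standing in for the application under test):

import re

def valid_userid(s, lo=4, hi=16):
    # Hypothetical validator: lowercase alphanumeric, 4-16 characters.
    return lo <= len(s) <= hi and re.fullmatch(r"[a-z0-9]+", s) is not None

# BVA on size: min, min+1, max-1, max pass; min-1, max+1 fail.
for n, expected in [(4, True), (5, True), (15, True), (16, True),
                    (3, False), (17, False)]:
    assert valid_userid("a" * n) is expected, n

# ECP on type: a-z and 0-9 are valid classes; A-Z and blanks are invalid.
assert valid_userid("user01")        # valid class
assert not valid_userid("USER01")    # upper case -> invalid class
assert not valid_userid("user 1")    # blank space -> invalid class
print("all BVA/ECP checks pass")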

Case Study 2
In an insurance application, the user can apply for different types of insurance. When the user selects
type B insurance, the system asks for the age to be entered. The age value should be greater than 18 years
and less than 60.
Test Case 1: Successful selection of type B insurance
Test Case 2: Successful focus to age when type B is selected
Test Case 3: Successful entry of age value

BVA (size)                          ECP (type)


Min (19)   -- Pass                  Valid:   0-9
Max (59)   -- Pass                  Invalid: A-Z, a-z, special characters and blank spaces
Max-1 (58) -- Pass
Min+1 (20) -- Pass
Max+1 (60) -- Fail
Min-1 (18) -- Fail

Case Study 3: In a shopping application, a customer can create a purchase order. The application
takes an item number & quantity. The item number allows alphanumeric values 4 to 6 characters long,
and the quantity allows up to 10 items per purchase. After filling in the item number & quantity, the
system returns the price of one item & the total amount.

Test Case 1: Successful entry of item number


BVA (size)                          ECP (type)
Min (4)    -- Pass                  Valid:   0-9, a-z, A-Z
Max (6)    -- Pass                  Invalid: special characters and blank spaces
Max-1 (5)  -- Pass
Min+1 (5)  -- Pass
Max+1 (7)  -- Fail
Min-1 (3)  -- Fail

Test Case 2: Successful selection of quantity


BVA (size)                          ECP (type)
Min (1)    -- Pass                  Valid:   0-9
Max (10)   -- Pass                  Invalid: A-Z, special characters and blank spaces
Max-1 (9)  -- Pass
Min+1 (2)  -- Pass
Max+1 (11) -- Fail
Min-1 (0)  -- Fail

Test Case 3: Successful calculation


Total = price * qty

Case Study 4
In a banking application, the user can dial the bank using a personal computer. In this process the user
uses a 6-digit password and the fields below:
Area code -- 3-digit number, blank allowed
Prefix -- 3-digit number, not starting with 0 or 1
Suffix -- 6-digit alphanumeric value
Commands -- deposit, balance enquiry, mini statement, bill pay
Test Case 1: Successful entry of password
BVA (size)                          ECP (type)
Min (6)    -- Pass                  Valid:   0-9
Max (6)    -- Pass                  Invalid: A-Z, a-z, special characters and blank spaces
Max-1 (5)  -- Fail
Min+1 (7)  -- Fail
Max+1 (7)  -- Fail
Min-1 (5)  -- Fail

Test Case 2: Successful entry of area code


BVA (size)                          ECP (type)
Min (3)    -- Pass                  Valid:   0-9, blank
Max (3)    -- Pass                  Invalid: a-z, A-Z, special characters
Max-1 (2)  -- Fail
Min+1 (4)  -- Fail
Max+1 (4)  -- Fail
Min-1 (2)  -- Fail

Test Case 3: Successful entry of prefix


BVA (range)                         ECP (type)
Min (200)  -- Pass                  Valid:   0-9
Max (999)  -- Pass                  Invalid: a-z, A-Z, special characters and blank spaces
Max-1 (998) -- Pass
Min+1 (201) -- Pass
Max+1 (1000) -- Fail
Min-1 (199) -- Fail

Test Case 4: Successful entry of suffix


BVA (size)                          ECP (type)
Min (6)    -- Pass                  Valid:   0-9, a-z, A-Z
Max (6)    -- Pass                  Invalid: special characters and blank spaces
Max-1 (5)  -- Fail
Min+1 (7)  -- Fail
Max+1 (7)  -- Fail
Min-1 (5)  -- Fail

Test Case 5: Successful commands such as deposit, balance enquiry, etc.


Test Case 6: Successful dialing with valid values
Test Case 7: Unsuccessful dialing without filling all field values except area code
Test Case 8: Successful dialing without filling the area code

Test Case Format: During test design, test engineers prepare test case documents in IEEE
format.
1) Test Case ID: Unique name or number
2) Test Case Name: Name of the test condition
3) Feature to be tested: Module or feature or service or component
4) Test Suite ID: Batch name in which this case is a member
5) Priority: Importance of the test case
P0 --- basic functionality
P1 --- general functionality (e.g., input domain, compatibility, error handling, intersystem testing,
etc.)
6) Test Environment: Required hardware and software, including testing tools
7) Test Effort (person-hours): Time to execute this case, e.g., 20 min
8) Test Duration: Date & time
9) Test Setup: Necessary tasks to do before starting execution of this case
10) Test Procedure / Test Script: Step-by-step procedure, from base state to end state
Step No ---- Action ---- Input Required ---- Expected Result ---- Defect ID ---- Comments ---- (the
last two columns are filled during test execution)
11) Test Case Pass/Fail Criteria: When this case passes and when this case fails
Note: In general, test engineers prepare test case documents with the step-by-step procedure only
(i.e., the 10th field only).
Ex: Prepare a test case document for a successful mail reply.
Step No ---- Action ---- Input Required ---- Expected
1. Log on to site ---- Valid UID and PWD ---- Inbox page appears
2. Click inbox link ---- ---- Mail box appears
3. Select received mail subject ---- ---- Mail message appears
4. Click reply ---- ---- Compose window appears, with "To" filled with the received mail ID
5. Enter new message and click send ---- ---- Acknowledgement from web server
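
The test case record described above maps naturally onto a small data structure; a Python sketch with the mail-reply example filled in (field names follow the format above, values are illustrative):

from dataclasses import dataclass, field

@dataclass
class TestStep:
    no: int
    action: str
    input_required: str
    expected: str

@dataclass
class TestCase:
    case_id: str            # 1) unique name or number
    name: str               # 2) name of the test condition
    feature: str            # 3) module or feature under test
    suite: str              # 4) batch this case belongs to
    priority: str           # 5) P0 basic, P1 general functionality
    duration_minutes: int   # 7) time to execute
    steps: list = field(default_factory=list)   # 10) step-by-step procedure

reply_mail = TestCase(
    case_id="TC-MAIL-01", name="Successful mail reply", feature="Mail",
    suite="Inbox batch", priority="P0", duration_minutes=20,
    steps=[
        TestStep(1, "Log on to site", "Valid UID and PWD", "Inbox page appears"),
        TestStep(2, "Click inbox link", "", "Mail box appears"),
        TestStep(3, "Select received mail subject", "", "Mail message appears"),
        TestStep(4, "Click reply", "", "Compose window appears"),
        TestStep(5, "Enter new message and click send", "Message text",
                 "Acknowledgement from web server"),
    ],
)
print(reply_mail.case_id, "-", len(reply_mail.steps), "steps")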

2) Input domain based test case design: Use cases describe functionality in terms of inputs,
flow and outputs, but use cases are not responsible for defining the size and type of input objects. For
this reason, test engineers also read the LLDs (data models or ER diagrams). To study the data
model, a test engineer follows the approach below.
Step 1: Collect the data models of the responsible modules
Step 2: Study the data model to understand every input attribute in terms of size, type and constraints
Step 3: Identify the critical attributes, which participate in data retrieval and data
manipulation.

Ex: A/C No ---- Account Name ---- Balance ---- Address


(A/C No and Balance are critical; Account Name and Address are non-critical)
Step 4: Identify the non-critical attributes, which are just input/output type.
Step 5: Prepare data matrices for every input attribute in terms of BVA & ECP
Input Attribute ---- ECP (Valid / Invalid) ---- BVA (Max / Min)

Ex 1: From a use case, a bank application provides a fixed deposit form. From the data model, that
form consists of the fields below:
Customer Name: Alphabets in lower case (blank space allowed in the middle)
Amount: 1500 to 100000
Tenor: Up to 12 months
Interest: Numeric with decimals

From the use case, if the tenor is greater than 10 months, the system also requires the interest to be
greater than 10%. Prepare a test case document from the above scenario.

Test Case 1: Successful entry of customer name

Test Case 2: Successful entry of amount


Test Case 3: Successful entry of tenor

Test Case 4: Successful entry of interest

Test Case 5: Successful fixed deposit with all valid values


Test Case 6: Unsuccessful operation: tenor is greater than 10 months and interest is less
than 10%

Test Case 7: Unsuccessful operation: not all field values filled
3) User interface based test case design: To conduct user interface testing, test engineers
prepare UI test cases depending on organization-level UI rules, global UI conventions
(e.g., Microsoft's six rules) and the interests of customer-site people.
Examples: 1) Spelling check
2) Graphics check
3) Meaningful error messages
4) Meaningful help documents (manual support testing)
5) Accuracy of data displayed (e.g., amount formats, date of birth as dd/mm/yy)
6) Accuracy of data in the database as a result of user input (i) form, (ii) table, (iii) report
7) Accuracy of data in the database as a result of external factors
Ex: Imported files
Test Case Review: After completion of writing all possible test cases, the test lead and test
engineers concentrate on the test case review for completeness and correctness. In this
review, the test lead applies coverage analysis to the cases:
a) BRS based
b) Use case based
c) Data model based
d) UI based
e) TRM based
At the end of this review, the TL creates the Requirements Traceability Matrix: the mapping
between the BRS and the prepared test cases. This is also known as the Requirements Validation Matrix
(RVM).

VII) Test Execution


After completion of writing all possible test cases for the responsible modules and their review, the
testing team concentrates on test execution to detect defects in the build.
1. Test Execution Levels
2) Test Execution vs. Test Cases
Level 0 -- all P0 test cases
Level 1 -- all P0, P1 & P2 test cases, as batches
Level 2 -- selected P0, P1 & P2 test cases, with respect to modifications
Level 3 -- selected P0, P1 & P2 test cases, with respect to the build
3) Build Version Control: The testing team receives builds from the development team through the
process below.
From the above model, the testing team receives the build from development through File Transfer
Protocol (FTP). To distinguish between old and modified builds, development uses a unique version
numbering system that is understandable to the test engineers. For this version control, the
development team uses a tool such as Visual SourceSafe.
4) Level 0: After receiving the initial build from the development team, the testing team covers the basic
functionality of that build to estimate its stability. During this testing, the testing team applies the
factors below to check whether the build is stable enough for complete testing:
- Understandable
- Operable
- Observable
- Consistent
- Simple
- Controllable
- Maintainable
- Automatable
These are the testability factors checked during sanity testing.
Because of these eight factors, sanity testing is also known as testability testing or octangle
testing; it is also called smoke testing or Build Verification Testing (BVT).

5) Test Harness (ready for testing)
Test harness = test environment + test bed
6) Test Automation: After receiving a stable build from development, the testing team concentrates on
test automation to create automated test scripts, if possible.

From the above model, test engineers follow selective automation, for repeatable and critical test
cases only.
7) Level 1 (Comprehensive Testing): After receiving a stable build from the development team and
completing the possible automation, the testing team concentrates on test execution to detect defects.
They execute the tests as batches. A test batch is also known as a test suite or test set. Every test batch
consists of a set of dependent test cases. During test case execution, whether manual or automated,
the test engineers create a "test log". It consists of three types of entries:
- Passed: all expected results equal the actual results
- Failed: any one expected result varies from the actual result
- Blocked: the parent test case failed
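
A Python sketch of such a test log for one batch, showing the three entry types (the cases and dependencies are invented; a case is blocked when the case it depends on did not pass):

results = {}                                        # case id -> entry type
depends_on = {"TC-02": "TC-01", "TC-03": "TC-02"}   # batch of dependent cases

def run(case_id, expected, actual):
    parent = depends_on.get(case_id)
    if parent and results.get(parent) != "Passed":
        results[case_id] = "Blocked"    # parent failed or was itself blocked
    elif expected == actual:
        results[case_id] = "Passed"     # all expected equal to actual
    else:
        results[case_id] = "Failed"     # expected varies from actual

run("TC-01", expected="inbox", actual="error page")
run("TC-02", expected="mail box", actual="mail box")
run("TC-03", expected="message", actual="message")
print(results)   # {'TC-01': 'Failed', 'TC-02': 'Blocked', 'TC-03': 'Blocked'}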

Level 2 (Regression Testing): During comprehensive test execution, test engineers report
defects to the development team. After bug resolution, the testing team receives a modified build. Before
continuing with the remaining comprehensive testing, the testing team
re-executes its previous tests on the modified build, to ensure that the bug fix works and to check for
side effects. This type of re-execution of tests is called regression testing.
Note: If the development team releases a modified build due to project requirement changes, the test
engineers execute all P0, all P1 and carefully selected P2 test cases.

Software Testing

Component Testing

Component testing is described fully in BS 7925-2, and you should be aware that component testing is
also known as unit testing, module testing or program testing. The definition from BS 7925-2 is
simply "the testing of individual software components."

Component testing has traditionally been carried out by the programmer. This has proved
to be less effective than having someone else design and run the tests for the component. "Buddy"
testing, where two developers test each other's work, is more independent and often more
effective. However, the component test strategy should describe what level of independence is
applicable to a particular component.

Usually white box (structural) testing techniques are used to design test cases for component
tests, but some black box tests can be effective as well.

Software Testing

User Acceptance Testing


Overview of User Acceptance Testing

1. User Acceptance Testing is a key feature of project implementation.


2. User Acceptance Testing (UAT) is the formal means by which the company ensures that the new
system actually meets the essential user requirements.
3. Each module implemented will be subject to one or more user acceptance tests before sign-off.
4. This UAT plan describes the test scenarios, test conditions, and test cycles that must be
performed to ensure that acceptance testing follows a precise schedule and that the system is
thoroughly tested before release.
5. The acceptance procedure ensures the intermediate or end result supplied meets the users'
expectations, by asking questions such as:

- Is the degree of detail sufficient?

- Are the screens complete?

- Is the content correct from the user's point of view?

- Are the results usable?

- Does the system perform as required?

6. In UAT, the software is tested for compliance with business rules as defined in the Software
Requirement Specifications and the Detailed Design documents.
7. UAT also allows designated personnel to observe how the application will behave under
business functional operational conditions.
UAT Team Tasks

The UAT team will be assigned the following tasks:

- Verify the completeness & accuracy of the business functionality provided in the
application (screens, reports, interfaces).

- Verify the functionality of the application to ensure that users are comfortable with the
application.

Software Testing

User Acceptance Testing


Scope of User Acceptance Testing (UAT)
User Acceptance Testing addresses the broadest scope of requirements; therefore, the UAT must
cover the following areas:
- Operational Requirements: ensure the requirements for data capture, data processing, data
distribution and data archiving are met.

- Functional Requirements: ensure all business functions are performed as per the
business rules.

- Interface Requirements: ensure all business systems linked to the software system in
UAT pass and receive data or control as defined in the requirements specification.

- The user, with limited help from the developers, is responsible for:
1. Planning tests
2. Executing tests
3. Reporting and clearing incidents

Objectives of UAT

- User Acceptance Testing determines the degree to which the application actually meets
the agreed functional specifications, as stated in the Business Functional Specifications
and the Detailed Design documents.

- It confirms whether the software provides new business improvements and whether existing
processes continue to work correctly.

- Even if software passes functional testing, it must still be tested to see how it will
perform in the business environment before release for general use.

- During UAT, the way the software is intended to perform and behave upon release for
general use is assessed. This includes the:
1. Accuracy of successful completion of business processes
2. Accuracy and utility of user documentation and procedures
3. Quality and accuracy of data being produced
4. Release and installation procedures
5. Configuration management issues

Software Testing

User Acceptance Testing


UAT Process
For the purposes of this planning document, User Acceptance Testing has been divided into four
major functions:

- Planning

- Execution

- Follow-Up

- Re-Test

1) Planning

The goal of a UAT (User Acceptance Testing) plan is to identify the essential elements of the
software to be tested. A User Acceptance Testing (UAT) plan delineates high-level testing
procedures and outlines the tests to be conducted.
2) Execution
The application will be verified/tested, using the Acceptance Test Feedback Form, against the
following:

- Is the degree of detail for business functionality sufficient?


- Do the screens completely capture the business functionality?
- Is the business functionality content correct from the user's point of view, as recorded in
the reference documents?
- Does the system's business functionality perform as required?

Execution of the UAT plan will be completed by performing the following tests:

1. Requirements Testing


2. Test Case Creation

3. Business Functional Requirements Testing

4. Documentation Testing

5. Verification of Online Help

6. Interface Testing

1) Requirements Testing

The purpose of Requirements Testing is to validate that the system meets all business
functional requirements. This validation involves test case creation as well as Business
Functional Requirements Testing.
2) Test Case Creation
Test case data shall be created on manual forms for data entry on crucial screens, in order to
cover all attributes of UAT testing. Each test case includes the steps necessary to perform the
test and the expected results, and contains (or refers to) any data needed to perform the test and to verify
that it works. The UAT team will provide the test cases for UAT testing.
3) Business Functional Requirements Testing

Business Functional Requirements testing information shall be created based on the functional
requirements contained in the System Requirements Specification document. During UAT, it is
the testers' responsibility to ensure that Business Functional Requirements Testing occurs.
However, the company will ensure that each Business Functional Requirement has been tested.
4) Documentation Testing

Documentation Testing ensures that the hard copy and online documentation are understandable
and accurate.
5) Verification of Online Help
Online help shall be verified against the following:
a) It corresponds to the user documentation
b) It corresponds to the screens presented in the application
c) The user is directed to the appropriate help on the desired page by clicking a page-level help
icon available on each screen
6) Interface Testing

Interface Testing validates that the application interfaces correctly with external systems and databases.

Software Testing

User Acceptance Testing


3) Follow-Up

- It isn't sufficient just to find an error; the tester must also record the conditions prior to
starting the test, the actions taken during the test, and the results that occurred.

- The tester must produce physical evidence, for example screen prints, and be able to
repeat the problem.

- If a testing problem cannot be reproduced, it will only be possible to record the problem for
historical purposes, in the event it is reproduced in the future.

- Test results of User Acceptance Testing will reflect certain items requiring changes in
specifications or functionality.
- These items will be raised as change requests to be registered in the change control
process.

- To ensure adequate control over the clearance of errors and to improve management
forecasting of UAT completion, each incident must be recorded separately by the tester
using the Acceptance Test Feedback Form.

Resolution

- As well as logging problems that result from the discovery of defects, testers will also encounter test incidents caused by the need for clarification or the need for enhancements. Ambiguities in specifications are common and may not be discovered until UAT. These are clarified and resolved between the teams involved, and agreed to be either a software defect to be cleared now, a clarification to be applied to the specifications, or an enhancement to be provided in some future release.

- Detailed checklists will be prepared for testing. These checklists shall list each parameter against which the artifact is being tested. Checklists shall be prepared by the UAT team prior to commencement of these tests.

- All user acceptance testing and feedback shall be captured on the Review or Test Activity Record (Appendix B). However, user acceptance testing will be done against detailed test cases prepared by the UAT team prior to commencement of this phase.

- Once all the feedback has been resolved, the fine-tuned application / artifact shall be released again to the UAT team for additional testing.

- The development team clears errors and provides a new release of the software, incorporating changes and enhancements, to the UAT team. This process is repeated until all reported incidents are resolved to the UAT organization's satisfaction.

4) Re-Test

The application / artifact released to the UAT team for retesting shall be tested again against the points submitted in the feedback sheets. Detailed regression testing shall occur as per the Integration / Regression Test Plan to be produced for the integration test phase.
OUTPUT
The following will form the outputs of the testing activities and will be stored electronically in the "Test Results" folder secured in an archive:
a) Verified and approved test case records
b) Test results captured on test activity records
c) Screen dumps of error screens
d) Updated checklists

Automation Testing

Benefits of Automation Testing




1. Test automation enables one to achieve detailed product testing with a significant reduction in test cycle time.
2. The efficiency of automated testing incorporated into the product lifecycle can generate sustainable time and money savings.
3. Better, faster testing.
4. Rapid validation of software changes with each new release of an application is possible.
5. Automated testing increases the significance and accuracy of testing and results in greater test coverage.
6. Automated testing offers a level of consistency which is not achievable through the use of manual testing.
7. Automated testing eliminates the time constraints associated with manual testing. Scripts can be executed at any time without human intervention.
8. Automated test scripts are re-usable and can be used across varying scenarios and environments.
9. Enhanced productivity.
10. Automation eliminates many of the mundane functions associated with regression testing.

Automation Testing Tools

Automation Testing of a project is done through Automation Testing Tools. There are many automation tools in use, developed by different companies; the main vendors are:
1. Mercury Interactive
2. IBM
The Mercury Interactive tools are developed by Mercury Interactive Corporation. The main tools from Mercury Interactive are:
1. WinRunner
2. TestDirector
3. LoadRunner
4. QuickTest Professional (QTP)
5. Quality Center

The Tools from IM are


0+  + 

  
 + 

 + is the most used ¦utomated Software Testing Tool.


The main features of WinRunner are:

- Developed by Mercury Interactive.
- A functionality (GUI) testing tool.
- Supports client/server and web technologies such as VB, VC++, D2K, Java, HTML, PowerBuilder, Delphi, and Siebel (ERP).
- To support .NET, XML, SAP, PeopleSoft, Oracle Applications, and multimedia, we can use QTP.
- WinRunner runs on Windows only.
- XRunner runs only on UNIX and Linux.
- The tool was developed in C in a VC++ environment.
- To automate manual tests, WinRunner uses TSL (Test Script Language, a C-like language).

The main testing process in WinRunner is:

1) Learning
Recognition of the objects and windows in our application by WinRunner is called learning. WinRunner 7.0 follows auto-learning.
2) Recording
WinRunner records our manual business operations in TSL.
3) Edit Script
Depending on the corresponding manual test, the test engineer inserts checkpoints into the recorded script.
4) Run Script
During test script execution, WinRunner compares the tester-given expected values with the application's actual values and returns results.
5) Analyze Results
The tester analyzes the tool's results and concentrates on defect tracking, if required.

TestDirector

The test management tool TestDirector simplifies test management by helping you organize and manage all phases of the software testing process, including planning, creating tests, executing tests, and tracking defects.
With TestDirector, you maintain a project's database of tests. From a project, you can build test sets: groups of tests executed to achieve a specific goal. For example, you can create a test set that checks a new version of the software, or one that checks a specific feature.
As you execute tests, TestDirector lets you report defects detected in the software. Defect records are stored in a database where you can track them until they are resolved in the software.
TestDirector works together with WinRunner, Mercury Interactive's automated GUI testing tool. WinRunner enables you to create and execute automated test scripts. You can include WinRunner automated tests in your project and execute them directly from TestDirector. TestDirector activates WinRunner, runs the tests, and displays the results. TestDirector also offers integration with other Mercury Interactive testing tools (LoadRunner, Visual API, Astra QuickTest, QuickTest 2000, and XRunner), as well as with third-party and custom testing tools.

The TestDirector workflow consists of three main phases: Planning Tests, Running Tests, and Tracking Defects. In each phase you perform several tasks.

Planning Tests
Divide your application into test subjects and build a project.
1. Define your testing goals.
Examine your application, system environment, and testing resources to determine what and how you want to test.
2. Define test subjects.
Define test subjects by dividing your application into modules or functions to be tested. Build a test plan tree that represents the hierarchical relationship of the subjects.
3. Define tests.
Determine the tests you want to create and add a description of each test to the test plan tree.
4. Design test steps.
Break down each test into steps describing the operations to be performed and the points you want to check. Define the expected outcome of each step.
5. Automate tests.
Decide whether to perform each test manually or to automate it. If you choose to perform a test manually, the test is ready for execution as soon as you define the test steps. If you choose to automate a test, use WinRunner to create automated test scripts in Mercury Interactive's Test Script Language (TSL).
6. Analyze the test plan.
Generate reports and graphs to help you analyze your test plan. Determine whether the tests in the project will enable you to successfully meet your goals.

Running Tests
Create test sets and perform test runs.
1. Create test sets.
Create test sets by selecting tests from the project. A test set is a group of tests you execute to meet a specific testing goal.
2. Run test sets.
Schedule test execution and assign tasks to testers. Run the manual and/or automated tests in the test sets.
3. Analyze the testing progress.
Generate reports and graphs to help you determine the progress of test execution.

Tracking Defects
Report defects detected in your application and track how repairs are progressing.
1. Report defects.
Report defects detected in the software. Each new defect is added to the defect database.
2. Track defects.
Review all new defects reported to the database and decide which ones should be repaired. Test a new version of the application after the defects are corrected.
3. Analyze defect tracking.
Generate reports and graphs to help you analyze the progress of defect repairs, and to help you determine when to release the application.

What is a Test Set?

After planning and creating a project with tests, you can start running the tests on your application. However, since a project database often contains hundreds or thousands of tests, deciding how to manage the test run process may seem overwhelming.
TestDirector helps you organize test runs by building test sets. A test set is a subset of the tests in your project, run together in order to achieve a specific goal. You build a test set by selecting tests from the test plan tree and assigning this group of tests a descriptive name. You can then run the test set at any time, on any build of your application.

Do You Keep Track of Defects?
Locating and repairing software defects is an essential phase in software development. Defects can be detected and reported by software developers, testers, and end users in all stages of the testing process. Using TestDirector, you can report flaws in your application and track data derived from defect reports.

When a defect is detected in the software:
a) Send a defect report to the TestDirector database.
b) Review the defect and assign it to a member of the development team.
c) Repair the open defect.
d) Test a new build of the application after the defect is corrected. If the defect does not reoccur, change the status of the defect.
e) Generate reports and graphs to help you analyze the progress of the defects in your TestDirector project.

Reporting a New Defect
You can report a new defect at any stage of the testing process by adding a defect record to the project database. Each defect is tracked through four stages: New, Open, Fixed, and Closed. When you initially report a defect to the project database, you assign it the status New.

LoadRunner

To work effectively with LoadRunner, you need the following knowledge and skills:

- Components such as web servers, application servers, database servers, operating systems, networks, and network elements such as load balancers.

- You need not have "guru"-level knowledge of each of the components, but you should have operational knowledge and an understanding of the performance issues associated with the components. For example, a DBA should know what multi-way joins, indexes, and spin counts are and what effect they have on a database server.

- The protocol(s) used between the client and server, such as HTTP/HTML, ODBC, SQL*NET, and DCOM.

- The LoadRunner script language is ANSI C. It helps to know the C language, but the scripts are generated and manipulated by LoadRunner, so there is usually no need to edit the code directly. There is also an icon-based script view which completely hides the C code.

- Load testing is not a heads-down coding exercise. You will work with many parts of an organization to coordinate activities, schedules, and resources.

- Daily interaction with a variety of people requires good oral and written communication skills as well as good people skills. If you prefer to sit in a cube by yourself, you should stay in functional testing or development.

These are some of the FAQs on LoadRunner:

1. What is load testing?
Load testing is testing that an application works fine with the loads that result from a large number of simultaneous users and transactions, and determining whether it can handle peak usage periods.
2. What is performance testing?
Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.
3. Did you use LoadRunner? What version?
Yes. Version 7.2.
4. Explain the load testing process.
Step 1: Planning the test.
Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
Step 2: Creating Vuser scripts.
Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario.
A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario.
We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario.
We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results.
During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
5. When do you do load and performance testing?
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
6. What are the components of LoadRunner?
The components of LoadRunner are the Virtual User Generator, the Controller, the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.
7. What component of LoadRunner would you use to record a script?
The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
8. What component of LoadRunner would you use to play back the script in multi-user mode?
The Controller component is used to play back the script in multi-user mode. This is done during a scenario run where a Vuser script is executed by a number of Vusers in a group.
9. What is a rendezvous point?
You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, so that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
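As a minimal sketch (the rendezvous name, transaction name, and URL below are illustrative, not from the original answer), a rendezvous point placed in a Vuser's Action section looks like this:

Action()
{
    /* All Vusers pause here until the Controller's rendezvous
       policy releases them, producing a simultaneous spike in load. */
    lr_rendezvous("deposit_cash");

    lr_start_transaction("deposit");
    web_url("deposit", "URL=http://bank.example.com/deposit", LAST);
    lr_end_transaction("deposit", LR_AUTO);

    return 0;
}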
10. What is a scenario?
A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
11. Explain the recording mode for a web Vuser script.
We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in database applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server.
We use VuGen to: a) monitor the communication between the application and the server;
b) generate the required function calls; and
c) insert the generated function calls into a Vuser script.
12. Why do you create parameters?
Parameters are like script variables. They are used to vary input to the server and to emulate real users.
a) Different sets of data are sent to the server each time the script is run.
b) They better simulate the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.
13. What is correlation? Explain the difference between automatic correlation and manual correlation.
Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation. It can be application-server specific. Here, values are replaced by data which are created by these rules. In manual correlation, the value we want to correlate is scanned, and Create Correlation is used to correlate it.
14. How do you find out where correlation is required? Give a few examples from your projects.
Two ways:
First, we can scan for correlations and see the list of values which can be correlated. From this we can pick a value to be correlated. Secondly, we can record two scripts and compare them. We can look at the difference file to see the values which needed to be correlated.
In my project, there was a unique id developed for each customer. It was nothing but the Insurance Number; it was generated automatically, it was sequential, and this value was unique. I had to correlate this value in order to avoid errors while running my script. I did it using scan for correlation.

15. Where do you set automatic correlation options?
Automatic correlation, from the web point of view, can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation.
Automatic correlation for a database can be done using show output window and scan for correlation, then picking the correlate query tab and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value is to be created.
16. What function captures dynamic values in a web Vuser script?
The web_reg_save_param function saves dynamic data information to a parameter.
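A hedged sketch of its use (the parameter name, boundaries, and URL are hypothetical): the registration must be placed before the request whose server response contains the dynamic value.

/* Capture the text between "session_id=" and ";" from the next
   server response into the parameter SessionID. */
web_reg_save_param("SessionID",
    "LB=session_id=",
    "RB=;",
    "ORD=1",
    LAST);

web_url("home", "URL=http://bank.example.com/home", LAST);

/* Later requests can then reference the captured value as {SessionID}. */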
17. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs?
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled.
Standard Log option:
When you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled.
Extended Log option:
Select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended log options.
18. How do you debug a LoadRunner script?
VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window.
We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
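For example (a sketch; the URL is illustrative), extended logging with parameter substitution can be switched on around a suspect request only and then switched back off:

lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS,
                     LR_SWITCH_ON);

web_url("search", "URL=http://bank.example.com/search", LAST);

lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS,
                     LR_SWITCH_OFF);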
19. How do you write user-defined functions in LoadRunner? Give a few functions you wrote in your previous project.
Before we create user-defined functions, we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format:
__declspec (dllexport) char* <function name> (char*, char*)
GetVersion, GetCurrentTime, and GetPlatform are some of the user-defined functions used in my earlier project.
20. What are the changes you can make in run-time settings?
The run-time settings that we can make are:
a) Pacing: contains the iteration count.
b) Log: under this we have Disable Logging, Standard Log, and Extended Log.
c) Think Time: here we have two options, Ignore think time and Replay think time.
d) General: under the General tab we can set whether the Vusers run as a process or as threads (multithreading), and whether to mark each step as a transaction.
21. Where do you set iterations for Vuser testing?
We set iterations in the Run-Time Settings of VuGen. The navigation for this is: Run-Time Settings, Pacing tab, set number of iterations.
22. How do you perform functional testing under load?
Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.
23. What is ramp-up? How do you set this?
This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set, and a value to wait between intervals can be specified.
To set ramp-up, go to 'Scenario Scheduling Options'.
24. What is the advantage of running the Vuser as a thread?
VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator.
If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator.
If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
25. If you want to stop the execution of your script on error, how do you do that?
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section, and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the 'Continue on error' option in Run-Time Settings.
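A sketch of this in a script (the parameter name {Status} and its expected value are hypothetical):

if (strcmp(lr_eval_string("{Status}"), "OK") != 0) {
    lr_error_message("Unexpected status: %s", lr_eval_string("{Status}"));
    lr_abort();   /* executes vuser_end, marks the Vuser "Stopped" */
}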
26. What is the relation between Response Time and Throughput?
The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur at approximately the same time.
27. Explain the configuration of your systems.
The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match the overall system configuration, which would include the network infrastructure, the web server, the database server, and any other components that go with this larger system, so as to achieve the load testing objectives.
28. How do you identify performance bottlenecks?
Performance bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors, and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
29. If the web server, database, and network are all fine, where could the problem be?
The problem could be in the system itself, or in the application server, or in the code written for the application.
30. How did you find web server related issues?
Using Web resource monitors, we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
31. How did you find database related issues?
By running the 'Database' monitor, with the help of the 'Data Resource Graph', we can find database related issues. E.g., you can specify the resource you want to measure before running the Controller, and then you can see database related issues.
32. Explain all the web recording options.
33. What is the difference between an Overlay graph and a Correlate graph?
Overlay Graph: It overlays the content of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged.
Correlate Graph: It plots the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph's y-axis.
34. How did you plan the load? What are the criteria?
A load test is planned to decide the number of users, what kind of machines we are going to use, and from where they are run. It is based on two important documents, the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us the information on the number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this diagram. The Transaction Profile gives us the information about the transaction names and their priority levels with regard to the scenario we are deciding.
35. What does the vuser_init action contain?
The vuser_init action contains procedures to log in to a server.
36. What does the vuser_end action contain?
The vuser_end section contains log-off procedures.
37. What is think time? How do you change the threshold?
Think time is the time that a real user waits between actions.
Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time.
Changing the threshold: The threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of VuGen.
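In the recorded script itself, think time appears as a call to lr_think_time; a sketch (the URLs are illustrative):

web_url("account", "URL=http://bank.example.com/account", LAST);

/* Pause 8 seconds to simulate the user reading the page. The actual
   replay delay also depends on the Think Time run-time settings. */
lr_think_time(8);

web_url("statement", "URL=http://bank.example.com/statement", LAST);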
38. What is the difference between the standard log and the extended log?
The standard log sends a subset of functions and messages sent during script execution to a log. The subset depends on the Vuser type.
The extended log sends detailed script execution messages to the output log. This is mainly used during debugging, when we want information about:
a) Parameter substitution
b) Data returned by the server
c) Advanced trace
39. Explain the following functions:
a) lr_debug_message
The lr_debug_message function sends a debug message to the output log when the specified
message class is set.
b) lr_output_message
The lr_output_message function sends notifications to the Controller Output window and the
Vuser log file.
c) lr_error_message
The lr_error_message function sends an error message to the LoadRunner Output window.
d) lrd_stmt
The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This
function sets a SQL statement to be processed.
e) lrd_fetch
The lrd_fetch function fetches the next row from the result set.
40. What are the three sections of a Vuser script and what is the purpose of each one?
1) vuser_init: used for recording the logon.
2) Actions: used for recording the business process.
3) vuser_end: used for recording the logoff.
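A skeleton of the three sections (the URLs are illustrative; each section is generated by VuGen as a separate file):

/* vuser_init: runs once per Vuser (log on) */
vuser_init()
{
    web_url("login", "URL=http://bank.example.com/login", LAST);
    return 0;
}

/* Action: runs once per iteration (the business process) */
Action()
{
    web_url("order", "URL=http://bank.example.com/order", LAST);
    return 0;
}

/* vuser_end: runs once per Vuser (log off) */
vuser_end()
{
    web_url("logout", "URL=http://bank.example.com/logout", LAST);
    return 0;
}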
41. For what purpose are Vusers created?
Vusers are created to emulate real users acting on the server for the purpose of load testing.
42. What are the benefits of multiple Action files within a Vuser?
They allow you to perform different business processes in one Vuser to represent a real user who does the same thing. They let you build Vusers that emulate real users defined in the User Community Profile. They also allow you to record the login and logoff separately from the Action files and thus avoid iterating them.
43. How can you tell the difference between an integer value and a string value in a VuGen script?
Strings are enclosed in quotes; integers are not.
44. What is the purpose of a LoadRunner transaction?
To measure one or more steps/user actions of a business process.
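A sketch of a transaction wrapped around one step (the transaction name, URL, and field names are illustrative):

lr_start_transaction("update_order");

web_submit_data("update",
    "Action=http://bank.example.com/order/update",
    "Method=POST",
    ITEMDATA,
    "Name=order_id", "Value={OrderID}", ENDITEM,
    LAST);

/* LR_AUTO lets LoadRunner set pass/fail from the outcome
   of the enclosed request. */
lr_end_transaction("update_order", LR_AUTO);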
45. What is the easiest way to get measurements for each step of a recorded script? For the entire Action file?
Enable automatic transactions (Run-Time Settings / Recording Options).
46. When would you parameterize a value rather than correlate queries?
Parameterize a value only when it is input by the user.
47. What are the four selection methods when choosing data from a data file?
Sequential, Random, Unique, and Same line as (another parameter).
48. How can reusing the same data during iterative execution of a business process negatively affect load testing results?
In reusing the same data for each iteration, the server recognizes that the same data is requested and places it in its cache. The load test then gets performance results that are based not on real server activity but on caching. This will not provide correct results during the analysis of the load test.
49. How can caching negatively affect load testing results?
When data is cached in the server's memory, the server does not need to fetch it from the database during playback. The test results then do not reflect the same performance they would if real users were loading the system with different data.
50. Why is it recommended to add verification checks to your Vusers?
You would want to verify, using LoadRunner, that the business process is functioning as expected under load.
51. When does VuGen record a web_submit_data instead of a web_submit_form? Why? (Be as specific as possible.)
A web_submit_data is recorded when VuGen cannot match the action, method, data fields, and/or hidden data values with the page that is stored in the record proxy cache. Comparison failures are typically caused by something other than HTML setting the properties of the HTTP request. Because VuGen can parse only HTML, it cannot find all the properties of the HTTP request in memory. This results in the hard-coding of all the request information in a web_submit_data statement.
52. What do you need to do to be able to view parameter substitution in the Execution Log?
Check Extended log and Parameter substitution in the Run-Time Settings.
53. How can you determine which field is data dependent?
Re-record the same script using different input values, then compare the two scripts.
54. Where should the rendezvous be placed in the script?
The rendezvous should be placed immediately before the transaction where you want to create peak load. In this case, the rendezvous should be placed right before starting the UpdateOrder transaction.
55. For what purpose should you select continue on error?
Set it only when making Execution Logs more descriptive or adding logic to the Vuser.
56. What is the purpose of selecting Show browser during replay in the General Options settings?
This setting allows you to see the pages that appear during playback. This is useful for debugging your Vuser during the initial stages of Web Vuser creation.
57. What tools does VuGen provide to help you analyze Vuser run results?
The Execution Log, the Run-Time Viewer, and the Mercury Test Results window.
58. If your Vuser script had two parameters, "DepartCity" and "ArrivalCity", how could you have the Vuser script return an error message which included the city names?
lr_error_message("The Vuser could not submit the reservation request for %s to %s", lr_eval_string("{DepartCity}"), lr_eval_string("{ArrivalCity}"));
59. Why should you run more Vusers than your anticipated peak load?
(1) To test the scalability of the system.
(2) To see what happens when there is a spike in system usage.
60. What is the difference between a manual scenario and a goal-oriented scenario? What goal-oriented scenarios can be created?
Manual scenario:
- Its main purpose is to learn how many Vusers can run concurrently.
- It gives you manual control over how many Vusers run and at what times.
Goal-oriented scenario:
- The goal may be throughput, response time, or number of concurrent Vusers.
- LoadRunner manages the Vusers automatically.
The different goal-oriented scenarios are:
- Virtual Users
- Hits per second
- Transactions per second
- Transaction response time
- Pages per minute
61. Why wouldn't you want to run virtual users on the same host as the LoadRunner Controller or the database server?
Running virtual users on the same host as the LoadRunner Controller will skew the results so that they no longer emulate real-life usage. By having both the Controller and the Vusers on the same machine, the tester will not be able to determine the effects of network traffic.
62. Each time you run the same scenario, the results will be slightly different. What are some of the factors that can cause differences in performance measurements?
Different factors can affect the performance measurements, including network traffic, CPU usage, and caching.
63. What are some of the reasons to use the Server Resources Monitor?
To find out how much data is coming from the cache, and to help find out what parts of the system might contain bottlenecks.
64. Explain the following:
a) Hits per Second graph
The Hits per Second graph shows the number of HTTP requests made by Vusers to the Web server during each second of the scenario run. This graph helps you evaluate the amount of load Vusers generate, in terms of the number of hits.
b) Pages Downloaded per Second graph
The Pages Downloaded per Second graph shows the number of Web pages (y-axis) downloaded from the server during each second of the scenario run (x-axis). This graph helps you evaluate the amount of load Vusers generate, in terms of the number of pages downloaded.
c) Transaction Response Time (Under Load) graph
The Transaction Response Time (Under Load) graph is a combination of the Running Vusers and Average Transaction Response Time graphs and indicates transaction times relative to the number of Vusers running at any given point during the scenario. This graph helps you view the general impact of Vuser load on performance time and is most useful when analyzing a scenario with a gradual load.
d) Transaction Response Time (Percentile) graph
The Transaction Response Time (Percentile) graph analyzes the percentage of transactions that were performed within a given time range. This graph helps you determine the percentage of transactions that met the performance criteria defined for your system.
e) Network Delay Time graph
The Network Delay Time graph shows the delays for the complete path between the source and destination machines (for example, the database server and the Vuser load generator).

65. What protocols does LoadRunner support?

LoadRunner ships with support for the following protocols. Other protocols are available but are not necessarily fully supported.
E-Business
- FTP
- LDAP
- Web/Winsocket Dual Protocol
- Palm
- SOAP
- Web (HTTP/HTML)
Wireless
- i-mode
- VoiceXML
- WAP
Streaming
- Media Player (MMS)
- Real
Mailing Services
- Internet Messaging (IMAP)
- MS Exchange (MAPI)
- POP3
- SMTP
Enterprise Java Beans
- Enterprise Java Beans (EJB)
- RMI-Java
Distributed Components
- COM/DCOM
- Corba-Java
- RMI-Java
Middleware
- Jacada
- Tuxedo 6
- Tuxedo 7
ERP
- Baan
- Oracle NCA
- PeopleSoft - Tuxedo
- Siebel - DB2 CLI
- Siebel - Oracle
- Siebel - MSSQL
- SAP
Client/Server
- DB2 CLI
- Domain Name Resolution (DNS)
- Informix
- MS SQL Server
- ODBC
- Oracle (2-Tier)
- Sybase CtLib
- Sybase DbLib
- Windows Sockets
Legacy
- Terminal Emulation (RTE)
Custom
- C Vuser
- JavaScript Vuser
- Java Vuser
- VBScript Vuser
- VB Vuser

66. What can I monitor with LoadRunner?

LoadRunner ships with support for the following components. Other monitors are available but are not necessarily fully supported.

Client-side Monitors
End-to-end transaction monitors provide end-user response times, hits per second, and transactions per second.
- Hits per Second
- HTTP Responses per Second
- Pages Downloaded per Second
- Throughput
- Transaction Response Time
- Transactions per Second (Passed)
- Transactions per Second (Failed)
- User-defined Data Point
- Virtual User Status
- Web Transaction Breakdown Graphs

Server Monitors
NT/UNIX/Linux monitors provide hardware, network, and operating system performance metrics, such as CPU, memory, and network throughput.
- NT server resources
- UNIX / Linux server monitor

Load Appliances Performance Monitors
- Antara.net

Application Deployment Solutions
- Citrix MetaFrame (available only for LoadRunner)

Network Monitors
- Network delay monitor: provides a breakdown of the network segments between client and server, as well as network delays.
- SNMP monitor: provides performance data for network devices such as bridges and routers.

Web Server Performance Monitors
Web server monitors provide performance data inside the Web servers, such as active connections and hits per second.
- Apache
- Microsoft IIS
- iPlanet (NES)

Web Application Server Performance Monitors
Web application server monitors provide performance data inside the Web application server, such as connections per second and active database connections.
- Allaire ColdFusion
- ATG Dynamo
- BEA WebLogic (via JMX)
- BEA WebLogic (via SNMP)
- BroadVision
- IBM WebSphere
- iPlanet Application Server
- Microsoft COM+ Monitor
- Microsoft Active Server Pages
- Oracle 9iAS HTTP Server
- SilverStream

Streaming Media Performance Monitors (available only for LoadRunner)
Streaming-specific monitors for measuring end-user quality on the client side and isolating performance bottlenecks on the server side.
- Microsoft Windows Media Server
- Real Networks RealServer

Firewall Server Resource Monitors
- CheckPoint FireWall-1

Database Server Resource Monitors
Database monitors provide performance data inside the database, such as active database connections.
- SQL Server
- Oracle
- DB2
- Sybase (available only for LoadRunner)

ERP Performance Monitors (available only for LoadRunner)
- SAP R/3 Monitor

Middleware Performance Monitors
- Tuxedo: provides performance data inside a BEA Tuxedo application server, such as current requests in queue.
- IBM WebSphere MQ (MQSeries) (available only for LoadRunner)

In addition to these monitors, LoadRunner also supports user-defined monitors, which allow you to easily integrate the results from other measurement tools with LoadRunner data collection.
67. How many users can I emulate with LoadRunner on a PC?
This greatly depends on the configuration of the PC (number of CPUs, CPU speed, memory, and operating system), the protocol(s) used, the size and complexity of the script(s), the frequency of execution (iteration pacing and think times), and the amount of logging.
68. How much memory is needed per user?
You can get some approximation of the memory needs by looking at the "LR 7.02 footprints.pdf" file located on the LoadRunner discussion group at groups.yahoo.com/group/LoadRunner/files.
69. What is the current shipping version of LoadRunner?
7.8
70. What is the difference between LoadRunner and Astra LoadTest?
Astra LoadTest is another load test tool from Mercury Interactive, built specifically for testing web applications. Relative to LoadRunner, it:
- Supports only HTTP and HTTPS protocols.
- Has less functionality.
- Uses the VBScript scripting language.
- Has a larger footprint (~5 MBytes).
- Costs less.
- Is easier to learn.
In that LoadRunner supports web applications plus much more, it is the preferred tool for load testing web applications. The exception is if the load testers are non-technical (a bad idea) or the load test project's budget is too limited to afford LoadRunner.
71. What is the relation between LoadRunner and Topaz?
Topaz is Mercury Interactive's line of products and hosted services for monitoring applications after deployment to production. The Topaz products are built with LoadRunner technology and use the same script recorder. Scripts built for load testing with LoadRunner can be used by Topaz for monitoring without modification.
72. How much does LoadRunner cost?
The main cost drivers for a LoadRunner license are the number of users to be simulated and the number and type of protocols used. You will need to talk to a sales representative to price out the various components.
The total cost of LoadRunner typically runs from USD $50,000 to $100,000 or more. Maintenance cost is 18% of the total list price. The maintenance includes new LoadRunner releases, patches, phone support, and access to the support web site.
1. What are the different Vuser types?
2. What are the different phases in LoadRunner?
3. What are the components of LoadRunner?
4. What are the changes you can make in run-time settings?
5. What is a transaction in LoadRunner?
6. What is a scenario?
7. What is the difference between iterations and Vusers?
8. What are the sections of a Vuser script?
9. What does vuser_init contain?
10. What does vuser_end contain?
11. What is a rendezvous point?
12. What are the 2 modes in LoadRunner?
13. Which analysis tools did you use?
14. Where do you set iterations for Vuser testing?
15. Did you use LoadRunner? What version?
16. Which tool did you use to record code in LoadRunner?
17. What is the relation between Response Time and Throughput?
18. Explain Throughput and Hits per Second.
19. Explain the configuration of your systems.
20. How did you plan the load? What are the criteria?
21. When did you decide to do load testing?
22. What is cross-scenario analysis?
23. Why do you create parameters?
24. What is correlation?
25. Why do you compare two scripts? How do you do that?
26. What are the different graphs in LoadRunner?
27. What are the different reports in LoadRunner?
28. When do you disable logging? When do you choose standard and extended logs?
29. What is the function of web_create_html_param?
30. How do you synchronize the scripts in LoadRunner?
31. What are the modes of logging in LoadRunner?
32. What are the types of extended logging?
33. How many times can you run a script?
34. How do we parameterize?
35. How do you plan a load test?
36. How do you identify the bottlenecks in a load test?
37. If the web server, database, and network are all fine, where could the problem be?
38. How would you know that it's a resource contention problem?
39. How did you report web server related issues?
40. How did you report database related issues?
41. What are the three main components in LoadRunner?
42. What are the three modes of LoadRunner logging?
43. Which mode would you use to debug a LoadRunner script?
44. What are the three types of extended log?
45. Where do you set how many times a LoadRunner script should repeat?
46. What steps would you use to replace data in a script with a parameter?
47. What is the LoadRunner for web statement used to perform a text check?
48. What are the LoadRunner for web correlation functions?
49. What is a rendezvous point?
50. What is think time? How do you change the threshold?
51. What is the difference between the standard log and the extended log?

Quality Assurance and Software Testing

What are the differences between QA and testing?



Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
Every software development, enhancement, or maintenance project includes some quality assurance activities. Even a simple, one-person development job has QA activities embedded in it, even if the programmer denies that "quality assurance" plays a part in what is to be done. Each programmer has some idea of how code should be written, and this idea functions as a coding standard for that programmer.

Similarly, each of us has some idea of how documentation should be written; this is a personal documentation standard. We proofread and revise our documents, and programmers review their products to make sure they meet their personal standards. These are QA reviews, or audits. Each programmer and writer tests or inspects his or her own work, and these are verification and validation processes.

A project's formal QA program includes the assurance processes that each team member goes through, but it involves planning and establishing project-wide standards, rather than relying on personal standards and processes. The extent and formality of project QA activities are decisions that the client, project manager, and the QA department make based on their assessment of the project and its risks.

Software quality assurance: Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

OR

The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built.

Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards, and providing the software project and other appropriate managers with the results of these reviews and audits.

Attributes of Quality Software Testing

To establish the quality of software testing, we need to evaluate the test process itself against several properties of a quality standard. Each characteristic views the testing process from a different perspective.

1) Effectiveness

Given the limited resources available for testing, we have to prioritize the allocation of our testing effort. In the test strategy, the risks of the system under test are well defined. The amount of time and money available for testing is prioritized and translated into test conditions and test cases using the right test design techniques. By matching these test conditions to the requirements, the completeness of the test set is enhanced, and the importance of specific test results becomes apparent.

2) Efficiency
We want to test as much as possible in as short a time as possible. To do this, we must create a test set that realizes the coverage we need with a minimal number of test cases. Thus, we want to eliminate test cases that overlap, since they have no added value.
Instead, our goal is to ensure that every test case focuses on a unique aspect, or a set of related aspects. As with effectiveness, test design techniques are very helpful tools in developing efficient testware.

3) Measurability
To determine if a test has been successful, the test results must be measurable. This means that test cases and test scripts should be made in such a manner that the result is quantifiable according to binary logic. The result of the execution of a test case should be either
1) passed, or
2) not passed;
never 'maybe'.

4) Diversity
There are many different aspects of an information system that could be tested. Not all aspects are equally important, especially not to different kinds of stakeholders. For example, end users are generally not concerned about maintainability; their focus is on time-to-market. On the other hand, developers may be primarily interested in maintenance, as it defines their future workload.
This means that there are different kinds of tests (test levels), which are undertaken by different stakeholders. It is important that the division of responsibilities concerning testing is made clear to all stakeholders before testing commences; otherwise completeness may not be achieved.

5) Maintainability
One of the main problems with testware is maintenance. After the initial project, the test set should be maintained so that it can be used for the next release of the information system being tested. Maintenance should be made as easy as possible, otherwise the test set will be neglected. Thus the structure of the testware should be as simple as possible, and formally registered, so that if a specification changes, the affected test cases can also be identified. This not only means adding new test cases, but also removing those which are no longer needed. Maintainability has a big impact on reusability.

6) Reusability

Since we try to avoid developing throwaway software, we should also not develop throwaway testware. A lot of time and money is wasted when a test is not easily reproducible, since new tests must be created. Since we want to ensure that our tests are replicable with the same test cases, a well-developed test set makes it possible to repeat tests with a minimum of effort. Such tests are particularly well suited to automation, since a computer can, if required, repeat test cases again and again.

Quality Assurance
Quality assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures.
Processes include all of the activities involved in designing, developing, enhancing, and maintaining software.
Products include the software, associated data, its documentation, and all supporting and reporting paperwork.
QA includes the process of assuring that standards and procedures are established and are followed throughout the software development lifecycle.
Standards are the established criteria to which the software products are compared.
Procedures are the established criteria to which the development and control processes are compared.
Compliance with established requirements, standards, and procedures is evaluated through process monitoring, product evaluation, audits, and testing.
The three mutually supportive activities involved in the software development lifecycle are management, engineering, and quality assurance.
Software management is the set of activities involved in planning, controlling, and directing the software project.
Software engineering is the set of activities that analyzes requirements, develops designs, writes code, and structures databases.
Quality assurance ensures that the management and engineering efforts result in a product that meets all of its requirements.

GOAL OF QUALITY ASSURANCE
Software development, like any complex development activity, is a process full of risks. The risks are both technical and programmatic; that is, risks that the software or website will not perform as intended or will be too difficult to operate/browse, modify, or maintain are technical risks, whereas risks that the project will overrun cost or schedule are programmatic risks.
The goal of QA is to reduce these risks. For example, coding standards are established to ensure the delivery of quality code. If no standards are set, there exists a risk that the code will not meet the usability requirements and will need to be reworked.
If standards are set but there is no explicit process for assuring that all code meets the standards, then there is a risk that the code base will not meet the standards. Similarly, the lack of an Error Management and Defect Life Cycle workflow increases the risk that problems in the software will be forgotten and not corrected, or that important problems will not get priority attention.
The QA process is mandatory in a software development cycle to reduce these risks and to assure quality in both the workflow and the final product. To have no QA activity is to increase the risk that unacceptable code will be released.

QA Activities and Deliverables in the Project Delivery Lifecycle

Each of the five phases of the Project Delivery Lifecycle will incorporate QA activities and deliverables that offset the risks of common project problems. This summary of the Project Delivery Lifecycle incorporates a high-level list of the QA activities and deliverables associated with each phase.
*#.#,%;*#
¦ssessment process consists of market research and a series of structured workshops that the and
client teams participate in to discuss and analyze the project objectives and develop a strategic
plan for the effort. The products of these meetings, combined with market research, form the
basis for the final output of the assessment: a tactical plan for realizing specific business and
project objectives.
Q¦ Deliverables
a) Q¦ Editor submits revised and approved deliverable documents.
%<*,,,-%;*#
In the Planning phase, the team defines specific system requirements and develops strategies
around the information architecture (static content and information flows) and the business
functions that will be addressed.
Q¦ ¦ctivities
a) Establishing Standards and Procedures: Q¦ records the set requirements.
b) Planning (Test Matrix): Q¦ develops a test matrix. Q¦ confirms that all set requirements are
testable and coincide with the project objectives.
c) ¦uditing ¦gainst Standards and Procedures: Q¦ editor edits the documents and confirms that
they meet the objectives and the quality standards for documents.
d) Establishing Completion Criteria: Q¦ records the completion criteria for the current phase.
Q¦ Deliverables
a) Q¦ submits an initial test matrix.
b) Q¦ Editor submits revised and approved deliverable documents.
#-,%;*#
During the Design phase, the team identifies all of the necessary system components based on
the requirements identified during the ¦ssessment and Planning phases. The team then creates
detailed design specifications for each component and for the associated physical data
requirements.
Q¦ ¦ctivities
¦uditing Standards and Procedures: Q¦ confirms that all designs meet the set requirements and
notes any discrepancies. ¦dditionally, Q¦ identifies any conflicts or discrepancies between the
final design of the system and the initial proposal for the system and confirms that an acceptable
resolution has been reached between the project team and the client.
Planning (Q¦ Plan, Q¦ Test Plan):
a) Q¦ begins developing the Q¦ Plan.
b) Q¦ revised the test matrix to reflect any changes and/or additions to the system.
Q¦ Deliverables
a) Q¦ presents the initial Q¦ test plan.
b) Q¦ submits a revision of the test matrix.

#'#<)%.#,%;*#
During the Development phase, the team constructs the components specified during the Design
Phase.
Q¦ ¦ctivities
a) Planning (Test Cases): xsing the test matrix, Q¦ develops a set of test cases for all deliverable
functionality for the current phase.
b) Prepare for Quality ¦ssurance Testing:
c) Q¦ confirms that all test cases have been written according to the guidelines set in the Q¦ test
plan.
d) Quality ¦ssurance works closely with the Configuration Management group to prepare a test
environment.
Q¦ Deliverables
a) Q¦ submits a set of Test Cases.
b) Q¦ Environment is set up.
.%<#.#,*),%;*#
In the Implementation phase, the team focuses on testing and review of all aspects of the system.
The team will also develop system documentation and a training or market test plan in
preparation for system launch.
QA Activities
a) QA Testing: QA executes all test cases in the QA testing cycle.
QA Deliverables
a) Test Results
b) Defect Reports

QA AND COMMON PROJECT PROBLEMS


Common Project Problems
Like all software development activities, projects risk the following common technical and
programmatic problems:
- Inaccurate understanding of the project requirements.
- Inflexibility; inability to adapt to changing requirements.
- Modules that do not work together.
- Late discovery of serious project flaws.
- Scant record of who changed what, when, or why.
- Limited roll-back capabilities.
ROOT CAUSES
Such problems often stem from the following root causes:
- Lack of communication; poor information work flow.
- Lack of processes.
- Lack of standards and/or procedures.
- Lack of process for integration (the big picture).
- Lack of a solid testing methodology.
MISSING COMPONENTS
With the following programmatic components in place, the root causes of many common project
problems may be corrected:
- Quality assurance
- Configuration management
- Version control
- Controlled testing environment
- Quality assurance testing
- Error management
- Standardized work flows for all of the components above

QA FAQs
15 questions to ask when planning for QA in your project:
1) What is the scope of testing? (UI testing? Database testing? Multi-platform testing? API
testing? Java classes testing? Test automation using a specific tool?)
2) What is the skill set required from the test team? (White box skills or black box skills? What
is the test tool? What trainings are required for building the skill set?)
3) What is the proposed architecture of the product (high-level architecture from the business
perspective)?
4) Overview of the product functionality, with emphasis on the critical modules?
5) What is the test process followed by the client? Does the team need to follow the process of
the client or use the software provider's test process?
6) What tailoring to the test process is required?
7) What are the specific tools used in the project (test automation tools, unit testing tools, build
& deployment tools, etc.)?
8) High-level overview of the test environment (what operating systems, what databases, what
app servers/web servers, what browsers, what hardware, what software, etc.)?

9) Who is responsible for test data? How do we generate test data?


10) What are the test reports / test deliverables due to each of the stakeholders (what templates,
what frequency, what format & what information to be included, who requires what report)?
11) What are the guidelines (guidelines for test coverage, guidelines for test case design,
guidelines for test case prioritization, guidelines for test case execution, test automation
guidelines, naming conventions, etc.)?
12) What are the existing test cases that the customer has? What is the style? How are they
designed? Does the team need to follow the same approach as that of the client, or could it make
changes?
13) What is the build & deployment process? What is the involvement of the software
provider's project team?
14) What is the issue/defect reporting process & what are the relevant guidelines?
15) What are the customer contact points & what is the escalation process? What is the
communication protocol?



QA TESTING

Software testing verifies that the software meets its requirements and that it is complete and
ready for delivery.
OBJECTIVES OF QA TESTING
- Assure the quality of client deliverables.
- Design, assemble, and execute a full testing lifecycle.
- Confirm the full functional capabilities of the final product.
- Confirm stability and performance (response time, etc.) of the final product.
- Confirm that deliverables meet client expectations/requirements.
- Report, document and verify code and design defects.
PREPARING FOR QA TESTING
Prior to conducting formal software testing, QA develops testing documentation (including test
plans, test specifications, and test procedures) and reviews the documentation for completeness
and adherence to standards. QA confirms that:
- The test cases test the software requirements in accordance with test plans.
- The test cases are verifiable.
- The correct or "advertised" version of the software is being tested (by QA monitoring of the
Configuration Management activity).
QA then conducts the testing in accordance with procedure, documents and reports defects, and
reviews the test reports.
THE KEY TO PRODUCTIVE QA TESTING
It is crucial to recognize that all testing will be conducted by comparing the final product to the
product's set requirements; therefore, product requirements must state all functionality of the
software and must be updated as changes are made. Any functionality that does not meet the
requirements will be recorded as a defect until a resolution is delivered.
TWELVE STEPS OF QA TESTING
1. Unit testing (conducted by Development)
Unit test case design begins after a technical review approves the high level design. The unit test
cases shall be designed to test the correctness of the program. White box testing is used to test
the modules and procedures that support the modules. The white box testing technique ignores
the function of the program under test and focuses only on its code and the structure of that
code. To accomplish this, a statement and condition technique shall be used. Test case designers
shall generate cases that not only cause each condition to take on all possible values at least
once, but also cause each decision to take each possible outcome at least once. In other words:
- Each decision statement in the program shall take on a true value and a false value at least
once during testing.
- Each condition shall take on each possible outcome at least once during testing.
2. Configuration Management
The configuration management team prepares the testing environment.
3. Build Verification
When a build has met completion criteria and is ready to be tested, the QA team runs an initial
battery of basic tests to verify the build.
- If the build is not testable at all, the QA team will reject the build.
- If portions of the website are testable and some portions are not yet available, the project
manager, technical lead and QA team will reassign the build schedule and deliverable dates.
- If all portions of the build pass for testing, the QA team will proceed with testing.

4. Integration Testing
Integration testing proves that all areas of the system interface with each other correctly and that
there are no gaps in the data flow. The final integration test proves that the system works as an
integrated unit when all the fixes are complete.
5. Functional Testing
Functional testing assures that each element of the application meets the functional requirements
of the business as outlined in the requirements document/functional brief, system design
specification, and other functional documents produced during the course of the project (such as
records of change requests, feedback, and resolution of issues).
6. Non-functional Testing (Performance Testing)
Non-functional testing proves that the documented performance standards or requirements are
met. Examples of testable standards include response time and compatibility with specified
browsers and operating systems.
If the system hardware specifications state that the system can handle a specific amount of traffic
or data volume, then the system will be tested for those levels as well.
7. Defect Fix Validation
If any known defects or issues existed during development, QA tests specifically in those areas
to validate the fixes.
8. Ad Hoc Testing
This type of testing is conducted to simulate actual user scenarios. QA engineers simulate a user
conducting a set of intended actions and behaving as a user would in case of slow response, such
as clicking ahead before the page is done loading, etc.
9. Regression Testing
Regression testing is performed after the release of each phase to ensure that there is no impact
on previously released software. Regression testing cannot be conducted on the initial build
because its test cases are taken from defects found in previous builds.
Regression testing ensures that there is a continual increase in the functionality and stability of
the software.

10. Error Management


During the QA testing workflow, all defects will be reported using the error management
workflow.
Regular meetings will take place between QA, system development, interface development and
project management to discuss defects, priority of defects, and fixes.
11. QA Reporting
QA states the results of testing, reports outstanding defects/known issues, and makes a
recommendation for release into production.
12. Release into production
If the project team decides that the build is acceptable for production, the configuration
management team will migrate the build into production.

Test Case

- Test Case is a commonly used term for a specific test. This is usually the smallest unit of
testing.
- A Test Case will consist of information such as requirements to be tested, test steps,
verification steps, prerequisites, outputs, test environment, etc.
- A set of inputs, execution preconditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.
- A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas
the test plan describes what to test, a test case describes how to perform a particular test. You
need to develop a test case for each test listed in the test plan.
- Test cases should be written by a team member who understands the function or technology
being tested, and each test case should be submitted for peer review.
Organizations take a variety of approaches to documenting test cases; these range from
developing detailed, recipe-like steps to writing general descriptions. In detailed test cases, the
steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the
time of the test how to perform the test and what data to use.

Most organizations prefer detailed test cases because determining pass or fail criteria is usually
easier with this type of case. In addition, detailed test cases are reproducible and are easier to
automate than descriptive test cases. This is particularly important if you plan to compare the
results of tests over time, such as when you are optimizing configurations. Detailed test cases
are more time-consuming to develop and maintain. On the other hand, test cases that are open to
interpretation are not repeatable and can require debugging, consuming time that would be better
spent on testing.
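As a hedged sketch of what a detailed test case can look like when captured as data (the field
names and the login scenario are invented for illustration), note how the explicit expected result
makes the pass/fail decision mechanical, and therefore easy to automate:

    # A detailed, reproducible test case expressed as structured data.
    detailed_test_case = {
        "test_case_id": "TC001",
        "name": "Login form - valid credentials",
        "preconditions": ["Application is running", "User account 'demo' exists"],
        "steps": [
            "Open the login form",
            "Enter 'demo' in the User Name field",
            "Enter the valid password in the Password field",
            "Click the OK button",
        ],
        "expected_result": "Home page is displayed with the user logged in",
    }

    def status(case, actual_result):
        # Pass/fail is a mechanical comparison against the expected result.
        return "Pass" if actual_result == case["expected_result"] else "Fail"

    print(status(detailed_test_case, "Home page is displayed with the user logged in"))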

When planning your tests, remember that it is not feasible to test everything. Instead of trying to
test every combination, prioritize your testing so that you perform the most important tests first:
those that focus on areas that present the greatest risk or the greatest probability of failure.

Once the Test Lead has prepared the Test Plan, the role of the individual testers starts with the
preparation of Test Cases for each level of Software Testing, such as Unit Testing, Integration
Testing, System Testing, Functional Testing, Regression Testing, and User Acceptance Testing,
and for each Module.

To prepare these Test Cases, each organization uses its own standard template; an ideal template
is provided below. The name of the Test Case document itself follows a naming convention like
the one below, so that by seeing the name we can identify the Project Name, Version Number
and Date of Release.

ProjectName Test Cases VersionNumber ReleaseDate

- The bolded words should be replaced with the actual Project Name, Version Number and
Release Date. For example: Bugzilla Test Cases 1.2.0.3 01_12_04.
- On the top-left corner we have the company emblem, and we fill in details such as Project ID,
Project Name, Author of Test Cases, Version Number, Date of Creation and Date of Release in
this template.
- We maintain the fields Test Case ID, Requirement Number, Version Number, Type of Test
Case, Test Case Name, Action, Expected Result, and Cycle#1, Cycle#2, Cycle#3, Cycle#4 for
each Test Case. Each Cycle is again divided into Actual Result, Status, Bug ID and Remarks.

- Test Case ID: To design the Test Cases we also follow a standard. If a test case belongs to the
application as a whole, not to a particular Module, then we start it as TC001; if we are expecting
more than one expected result for the same test case then we name it TC001.1. If a test case is
related to a Module then we name it M01TC001, and if a module has a sub-module then we
name it M01SM01TC001, so that we can easily identify which Module and which sub-module it
belongs to. One more advantage of this convention is that we can easily add new test cases
without renumbering the rest, since the change is limited to that module only.
- Requirement Number: It gives the reference of the Requirement Number in the SRS/FRS for
the Test Case. For each Test Case we specify which Requirement it belongs to. The advantage of
maintaining this in the Test Case document is that if a requirement changes in the future, we can
easily estimate how many test cases will be affected by changing the corresponding
Requirement.
- Version Number: Under this column we specify the Version Number in which that particular
test case was introduced, so that we can identify how many Test Cases there are for each
Version.
- Type of Test Case: It provides the list of the different types of Test Cases, such as GUI,
Functionality, Regression, User Acceptance, Load, Performance, etc., which are included in the
Test Plan. While designing Test Cases we select one of these options. The main objective of this
column is that we can predict how many GUI or Functionality test cases there are in each
Module; based on this we can estimate the resources.
- Test Case Name: This gives a more specific name, such as the particular button or text box the
Test Case belongs to. That is, we specify the object it belongs to, for example the OK button or
the Login form.
- Action: This is a very important part of the Test Case because it gives a clear picture of what
you are doing on the specific object; we can call it the navigation for the Test Case. Based on the
steps written here we perform the operations on the actual application.
- Expected Result: This is the result of the above action. It specifies what the specification or
user expects from that particular action. It should be clear, and for each expectation we
sub-divide the Test Case so that we can specify pass or fail criteria for each expectation.
Up to the above steps, we prepare the Test Case document before seeing the actual application,
based on the System Requirement Specification/Functional Requirement Document and Use
Cases. After that we send this document to the concerned Test Lead for approval. He reviews
this document for coverage of all user Requirements in the Test Cases, and then approves it.
Now we are ready for testing with this document, and we wait for the actual application. Then
we use the Cycle #1 columns.
Under each Cycle we have Actual, Status, Bug ID and Remarks. The number of cycles depends
on the organization: some organizations document three cycles, some maintain the information
for four. Here I provided only one Cycle in this template, but you can add more cycles based on
your requirements.
- Actual: We test the actual application against each Test Case, and if the result matches the
Expected Result we record it as "As Expected"; otherwise we write down what actually
happened after performing those actions.
- Status: It simply indicates the Pass or Fail status of that particular Test Case. If the Actual and
Expected results do not match, the Status is Fail; otherwise it is Pass. For passed Test Cases the
Bug ID should be null, and for failed Test Cases the Bug ID should be the corresponding Bug ID
in the Bug Report.
- Bug ID: This gives the reference of the Bug Number in the Bug Report, so that the
Developer/Tester can easily identify the Bug associated with that Test Case.

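The template columns described above can be sketched as a simple spreadsheet row; the
following Python snippet (with an invented example row) writes the header and one test case in
CSV form:

    # A hedged sketch of the test case template, with one invented example row.
    import csv
    import sys

    columns = [
        "Test Case ID", "Requirement Number", "Version Number",
        "Type of Test Case", "Test Case Name", "Action", "Expected Result",
        "Cycle#1 Actual", "Cycle#1 Status", "Cycle#1 Bug ID", "Cycle#1 Remarks",
    ]
    row = [
        "M01TC001", "REQ-012", "1.2.0.3", "GUI",
        "Login form - OK button", "Enter valid credentials and click OK",
        "Home page is displayed", "As Expected", "Pass", "", "",
    ]

    writer = csv.writer(sys.stdout)
    writer.writerow(columns)
    writer.writerow(row)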
Test cases are:
- Effective: they find faults
- Exemplary: each represents others
- Evolvable: easy to maintain
- Economic: cheap to use
Many test cases have been written; here are some samples:

Sample Test Cases for a Calculator

Sample Test Cases to Verify the Functionality of a Home Page

Sample Test Cases for a Login Button

Validation

Validation refers to a set of activities that ensure that the software that has been built is traceable
to the customer requirements. "Are we building the right product?"
"Confirmation by examination and provision of objective evidence that the particular
requirements for a specific intended use are fulfilled."
There are several ways to accomplish validation, the most common being:
1) Inspection
Focused on meeting particular customer constraints. For example: an inspection of a machine to
see that it will fit in the desired space, or an inspection of code modules to ensure their
compliance with maintenance demands.

2) Demonstration
Having the customer or a representative use the product to ensure it meets some minimum
constraints (i.e., usability). Demonstration can also be used to perform some acceptance tests
where the product is running in the intended environment rather than a test or development lab.
For example: having pilots fly an aircraft before the customer signs off on the program.

3) Analysis
Using some form of analysis to validate that the product will perform as needed when
demonstrating it is too costly, unsafe, or generally impractical. For example: using interpolation
of performance load based on the worst case that is feasible to generate, to validate a need that is
more stringent than this worst case. If it can be shown that there is no scaling problem, this
would be sufficient to validate the performance need.

4) Previous validation
When a component being used has already been validated for a previous project that had similar
or stricter constraints. For example: using a well-known encryption component to meet security
needs when the component has already been validated for tougher security requirements.
Early Validation
Leaving validation until the end of the project severely increases the risk of failure. Validation
activities early in the project can reduce that risk. Early validation activities reveal:

1) Clarity of requirements
Perhaps the most important purpose of early validation is to clarify the real meaning of
requirements. The obvious cases are where requirements are incomplete. However, the riskiest
requirements are subjective ones. These include phrases such as "readable" or "user-friendly," or
involve human interfaces in general. Early validation can get a response to various
interpretations and provide more specifics in areas such as acceptable size, placement, or motion.

2) Drivers
Some requirements are more critical to the customer than others. Some have a larger cost or
design impact on the product. With early validation you can uncover the customer's priorities
and relate them to the development impact to identify the serious drivers.

3) Additional requirements
You can use early validation to discover and coordinate new requirements during the program.
One issue is that no spec is totally complete, and it is assumed that the designer has a familiarity
with the intended end-use environment. Particularly in a new environment that the designer is
not familiar with, early validation of requirements can uncover missing requirements. Another
use is to coordinate derived requirements with the customer. In this case, the need is often driven
by the customer's lack of knowledge of the technologies being applied and their impact on the
use of the product.

4) Hidden expectations
Discussions with the customer can reveal unstated expectations or assumptions about the design.
One hint is extreme detail in the requirements, which may be a surrogate for "I want it to work
like the old or another system."
Approaches to very early validation of requirements closely parallel good requirements
elicitation and analysis; techniques include involving the user, site visits, and goal-based use
cases.

Verification

Verification refers to a set of activities that ensure that software correctly implements a specific
function. "Are we building the product right?"
"Confirmation by examination and provision of objective evidence that specified requirements
have been fulfilled."
Using the above definitions in software development:
Validation, in its simplest terms, is the demonstration that the software implements each of the
software requirements correctly and completely. In other words, the "right product was built."
Verification is the activity which ensures that the work products of a given phase fully
implement the inputs to that phase, or "the product was built right."

Levels of Verification
There are four levels of verification:
1) Component testing
Testing conducted to verify the implementation of the design for one software element (unit,
module) or a collection of software elements.
2) Integration testing
An orderly progression of testing in which various software elements and/or hardware elements
are integrated together and tested. This testing proceeds until the entire system has been
integrated.
3) System testing
The process of testing an integrated hardware and software system to verify that the system
meets its specified requirements.
4) Acceptance testing
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria
and to enable the customer to determine whether or not to accept the system.

Types of Verification
There are four types of verification that can be applied to the various levels outlined above:
1) Inspection
Typical techniques include desk checking, walkthroughs, software reviews, technical reviews,
and formal inspections (e.g., the Fagan approach).
2) Analysis
Mathematical verification of the test item, which can include estimation of execution times and
estimation of system resources.
3) Structural testing
Also known as "white box" or "logic driven" testing. Given input values are traced through the
test item to assure that they generate the expected output values, with the expected intermediate
values along the way. Typical techniques include statement coverage, condition coverage, and
decision coverage.
4) Functional testing
Also known as "black box" or "input/output driven" testing. Given input values are entered, and
the resulting output values are compared against the expected output values. Typical techniques
include error guessing, boundary-value analysis, and equivalence partitioning.
Explanation
The four methods for verification can be used at any of the levels although some work better
than others for a given level of verification.
As an example, the most effective way to find anomalies at the component level is inspection.
On the other hand, inspection is not applicable at the system level (you don't look at the details of
code when performing system level testing).
¦ logical approach to testing is to utilize techniques and methods that are most effective at a
given level.
Component level verification can easily get very expensive.
Companies need to avoid making statements like "all paths and branches will be executed during
component testing."
These statements make for a very expensive test program, as all code developed is required to
have one of the most labor-intensive type of testing performed on it.
To minimize the costs of component verification, the V&V group develops rules for determining
the type of verification method(s) needed for each of the software functions.
As an example, a very low complexity software function that is not on the safety-critical list
may only need an informal inspection (walkthrough) performed.
Other, more complicated functions typically require white box testing, since it is otherwise
difficult to determine how the functions work.
We recommend performing inspections before doing the white box testing for a given module,
as it is less expensive to find the errors earlier in development.
The resulting V&V effort has become a significant part of the software development effort for a
medical device.
One of the key pieces to demonstrate that the system is implemented completely is a
Requirements Traceability Matrix (RTM), which documents each of the requirements traced to
design items, code, and unit, integration and system test cases.
The RTM is an easy and effective way of documenting what the requirements are, where they
are implemented, and how they have been tested.
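A minimal sketch of such a matrix (all requirement and artifact names below are invented for
illustration) shows how a single row answers where a requirement is implemented and how it is
tested:

    # Requirements Traceability Matrix sketch; identifiers are illustrative only.
    rtm = [
        {"req": "REQ-001", "design": "DD-4.2", "code": "login.py",
         "unit_test": "TC001", "integration_test": "IT010", "system_test": "ST005"},
        {"req": "REQ-002", "design": "DD-4.3", "code": "reset.py",
         "unit_test": "TC003", "integration_test": "IT011", "system_test": "ST006"},
    ]

    # "Where is REQ-002 implemented, and how is it tested?" is a simple lookup:
    row = next(r for r in rtm if r["req"] == "REQ-002")
    print(row["code"], row["unit_test"], row["integration_test"], row["system_test"])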

Version Numbering


- Version numbering is a confusing topic, made more confusing by software companies all using
different schemes.
- You're free to use whatever version numbering you'd like as you release new versions.
- However, it's useful to establish standards for choosing new version numbers.
- Version numbers are stored as a 64-bit number that is logically partitioned into four 16-bit
numbers.
- This means that each of the four parts can be any number in the range zero to 65,535.

A version number has four parts:


Major and Minor: A new Major or Minor part indicates that the new version is incompatible
with the old one. For example, version 2.0.0.0 should be incompatible with version 1.1.2.5.
You should change major version numbers whenever you introduce an incompatibility into your
code.
A new Major part involves a total rewrite or re-architecting of a software product. Changes in
language, major changes in design, and changes in platform fall into this category. This number
starts at 1 (one).
Build: A new build part indicates probable compatibility.
Typically you should change minor version numbers when you introduce a service pack or a
minor upgrade. For example, version 1.8.0.0 is probably compatible with version 1.7.0.0.
The Minor part involves additions of features that require changes in documentation or the
external API. This number starts at 0 (zero).
Revision: A new revision part indicates a QFE (Quick Fix Engineering) release that is
compatible with the previous version and that should be installed. For example, version 1.6.5.13
might be a mandatory bug-fix upgrade to version 1.6.5.12.
The Revision covers any change that doesn't require documentation or external API changes.
This number starts at 0 (zero).

Setting the Version attribute to "1.0.*" says to use 1 for the major part, 0 for the minor part, and
to come up with build and revision part numbers automatically. You can also specify all four
parts of the version number explicitly.
All version numbering is based on the external view. Meaning, this is a numbering scheme
intended to reflect the release of a product, not its internal production; thus, recompilation does
not affect the version number.
The terms "alpha" and "beta" should not be used in the version number. These terms convey
little these days and don't sort correctly ("1.0.0" sorts before "1.0.0 alpha", but should sort
after).
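As a minimal sketch (in Python, since the attribute syntax mentioned above is platform-specific
and not reproduced here), four-part version numbers can be parsed into tuples of 16-bit parts and
compared part by part, which also demonstrates the orderings described above:

    # Parse and compare four-part version numbers (Major.Minor.Build.Revision).
    def parse_version(text):
        parts = tuple(int(p) for p in text.split("."))
        if len(parts) != 4 or not all(0 <= p <= 65535 for p in parts):
            raise ValueError("expected four parts in the range 0-65535: " + text)
        return parts

    # Tuple comparison orders versions part by part:
    assert parse_version("2.0.0.0") > parse_version("1.1.2.5")
    # A new revision (QFE) is a compatible, recommended upgrade:
    assert parse_version("1.6.5.13") > parse_version("1.6.5.12")
    print("version comparisons behave as described")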

Software Testing Life Cycle (STLC)

Life Cycle of Software Testing Process


The following are some of the steps to consider:

- Obtain requirements, functional design, and internal design specifications and other necessary
documents
- Obtain schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required
standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk aspects, set priorities, and determine the scope and
limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, load, usability
tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test tracking,
problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks and those responsible for tasks
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up
logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through the life cycle



GLOSSARY


In this section we go through the glossary of Software Engineering terms.


ACCEPTANCE CRITERIA: The criteria that the software component, product, or system must
satisfy in order to be accepted by the customer.
ACCEPTANCE PROCESS: The process used to verify that a new or modified software product
is fully operational and meets the customer's requirements.
ACCEPTANCE TESTING: Formal testing conducted by the customer to determine whether or
not a software product or system satisfies the documented acceptance criteria. Successful
completion of acceptance testing defines the point at which the customer will accept the product
as a successful implementation.
ACTIVITY: A major unit of work to be completed in achieving the objectives of a software
project. An activity incorporates a set of tasks to be completed, consumes resources, and results
in work products. An activity may contain other activities in a hierarchical manner. All project
activities are described in the Project Plan.
ACTOR: A person or system that interacts with the software application in support of a specific
process or to perform a specific operation or related set of operations.

ALGORITHM: A set of well-defined rules for the solution to a problem in a finite number of
steps. Generally implemented as a logical or mathematical test or calculation.
ANOMALY: A nice word for "bug." Anything observed in the operation of software that
deviates from expectations based on design documentation or user references.
APPLICATION: One or more software executables designed to fulfill a specific set of business
functions individually or in cooperation with other applications.
ASSESSMENT: A formal examination of a deliverable, generally by a quality assurance
reviewer, for the presence of a specific set of attributes and structural elements. An assessment is
not an in-depth examination of content, as the content of a deliverable may be outside the
reviewer's domain of expertise.
ASSUMPTION: A condition that is generally accepted as truth without proof or demonstration.
ATTRIBUTE: A piece of information describing part of a particular entity.
AUDIT: An independent examination of software or software documentation to assess
compliance with predetermined criteria.
AUTHENTICATION: The ability of each party in a transaction to verify the identity of the
other parties.
BANDWIDTH: The capacity of a communications channel.
BASELINE: A set of software components and documents that has been formally reviewed and
accepted, that serves as the basis for further development or current production, and that can be
changed only through formal change control procedures.
BATCH PROCESSING: A method of collecting and processing data in which transactions are
accumulated and stored until a specified time when it is convenient or necessary to process them
as a group.
BUSINESS PROCESSES: The unique ways in which organizations coordinate and organize
work activities, information, and knowledge to produce a product or service. For example, in a
sales environment, the information used and steps taken to record a new customer order is
considered a business process.
BUSINESS PROCESS COMPLEXITY: A project risk factor that takes into consideration the
complexity of the business process or processes under automation. Project risk is considered low
when all processes involve fairly simple data entry and update operations. Project risk is
considered medium when a minority of the business processes under automation are complex,
involving multiple steps, exchanges with external systems or significant validation/processing
logic. Project risk is considered high when a majority of the business processes under
automation are considered to be complex.
BUSINESS PROCESS MATURITY: A project risk factor that takes into consideration the
maturity and stability of the business process or processes to be automated. Project risk is
considered low when standard business processes that have been stable and in place for a
significant period of time are being automated. Project risk is considered medium when one or
more nonstandard but stable business processes, generally unique to the customer's situation, are
being automated. Project risk rises significantly when the development team is attempting to
automate one or more new or unusual business processes.
BUSINESS RULE: A logical or mathematical test that determines whether data entered in a
database complies with an organization's method of conducting its operations.
CLIENT: 1. The user point-of-entry for an application. Normally a software executable residing
on a desktop computer, workstation, or laptop computer. The user generally interacts directly
only with the client, using it to input, retrieve, analyze and report on data. 2. A device or
application that receives data from or manipulates a server device or application.
CODE REVIEW: A meeting at which source code is presented for review, comment, or
approval.
COMPONENT: One of the parts that make up a system. A component may be hardware,
software, or firmware and may be subdivided into other components.
COMPUTER SOFTWARE: Detailed, pre-programmed instructions that control and coordinate
the work of computer hardware and firmware components in an information system.
COMPUTER AIDED SOFTWARE ENGINEERING (CASE): The automation of step-by-step
methodologies for software and systems development to reduce the amount of repetitive work
required of the analyst or developer.
CONFIGURATION MANAGEMENT: A process that effectively controls the coordination and
implementation of changes to software components.
CONSTRAINT: A restriction, limitation, or regulation that limits a given course of action.
CONTEXT DIAGRAM: An overview data flow diagram depicting an entire system as a single
process with its major inputs and outputs.
CONVERSION: The process of changing from the old system to the new system.
CRITICAL SUCCESS FACTOR (CSF): A set of specific operational conditions shaped by the
business environment that are believed to significantly impact the success potential of an
organization or business function. In a software development effort, critical success factors are
composed of assumptions and dependencies that are generally outside the control of the
development team.
CUSTOMER RESOURCES: The number of subject matter experts for each Use Case (UC) in
an application under development. This project risk factor is considered low when more than one
SME is available per UC. A high risk ensues when outside SMEs are involved with a software
development effort.
DATA: Streams of raw facts representing events before they have been organized and arranged
into a form that people can understand and use.
DATA DICTIONARY: A structured description of database objects such as tables, indexes,
views and fields, with further descriptions of field types, default values and other characteristics.
DATA ENTITY: A data representation of a real-world object or concept. Usually represented as
a row in a database table, such as information about a specific Product in inventory.
DATA FLOW DIAGRAM: A primary tool in structured analysis that graphically illustrates a
system's component processes and the flow of data between them.
DATA TYPE: A description of how the computer is to interpret the data stored in a particular
field. Data types can include text or character string data, integer or floating point numeric data,
dates, date/time stamps, true/false values, or Binary Large Objects (BLOBs), which can be used
to store images, video, or documents.
DATABASE: A set of related data tables and other database objects, such as a data dictionary,
that are organized as a group. A collection of data organized to service many applications at the
same time.
DATABASE OBJECT: A component of a database, such as a table or view.
DATABASE ADMINISTRATOR: Person(s) responsible for the administrative functions of
databases, such as system security, user access, performance and capacity management, and
backup and restoration functions.
DATABASE MANAGEMENT SYSTEM (DBMS): Software used to create and maintain a
database and enable individual business applications to extract the data they need without having
to create separate files or data definitions for their own use.
DEFAULT: An initial value assigned to a field by the application when a new database record is
created. Used to facilitate data entry by pre-entering common values for the user.
DELIVERABLE: A specific work product, such as requirements or design documentation,
produced during a task or activity to validate successful completion of the task or activity.
Sometimes, actual software is delivered.
DESIGN ELEMENT: A specification for a software object or component that fulfills, or assists
in the fulfillment of, a functional element. A part of the system design specification.
DESIGN STAGE: A stage in the software development lifecycle that produces the functional
and system design specifications for the application under development.
DEVELOPER SKILL RESOURCES: The availability of developers and other resources with
appropriate skills is a significant factor in project success. When developers and resources are
readily available, the likelihood of project success is very high. Most development firms manage
multiple projects, allowing some contention between projects for developers and other
resources. This project risk factor is considered high when one or more developers with specific
skill sets, or resources with specific capabilities, need to be acquired before the project can
continue.
DOCUMENTATION: Information made available to: 1) assist end-users in the operation of a
software application, generally in the form of on-line help, or 2) assist developers in locating the
correct root procedure or method for a specific software function, generally in the form of an
implementation map. Note that printed manuals are rarely delivered with software anymore;
on-line documentation is more consistently available from within the application and is easier to
use.
ENCRYPTION: The coding and scrambling of messages to prevent unauthorized access to or
understanding of the data being stored or transmitted.
END USER REVIEW: The review of a deliverable for functional accuracy by a Subject Matter
Expert who is familiar with the software product under design or development.
ENTITY: A collection of attributes related to and describing a specific subject, such as
Products.
ENTITY RELATIONSHIP DIAGRAM: A diagram illustrating the relationship between various
entities in a database.
EXECUTABLE: A binary data file that can be run by the operating system to perform a specific
set of functions. In Windows, executables carry the extension .EXE and can be launched by
double-clicking on them.
EXTERNAL INTERFACE: In database applications, an external interface is a defined process
and data structure used to exchange data with other systems. For example, an order processing
application may have an interface to exchange data with an external accounting system.
EXTERNAL INTERFACE COMPLEXITY: The level of complexity associated with an
external interface. A simple interface is generally unidirectional, with limited, stable logic
defining the structure of the exchanged data. A standard export from a database to a spreadsheet
is considered a simple interface. A complex interface may be bi-directional, or may have
extensive, adaptive logic defining the structure of the exchanged data. The transmission of labor
data to a corporate payroll system, with its attendant validation and transaction confirmation
requirements, is considered a complex interface.
FEASIBILITY STUDY: A process that determines whether the solution under analysis is
achievable, given the organization's resources and constraints.
FIELD: Synonym for a data element that contains a specific attribute's value; a single item of
information in a record or row.
FOCUS: The application object to which the user-generated input (usually keyboard and mouse)
is directed.
FOREIGN KEY: A field or set of fields in a table whose value must match a primary key in
another table when joined with it.
FORM: A screen formatted to facilitate data entry and review. Utilizes data entry fields, option
selection tools, and control objects such as buttons and menu items.
FUNCTIONAL AREA: Any formally organized group focused on the development, execution,
and maintenance of business processes in support of a defined business function.
FUNCTIONAL DESIGN STAGE: The stage of the software development lifecycle that focuses
on the development and validation of designs for architecture, software components, data and
interfaces. Often combined with the system design stage into a single stage for smaller
applications.
FUNCTIONAL ELEMENT: A definition that specifies the actions that a software component,
product, or system must be able to perform.
FUNCTIONAL TESTING: Also known as end-user testing. Testing that focuses on the outputs
generated in response to selected inputs and execution conditions.
FUNCTION POINT ANALYSIS: A software measurement process that focuses on the number
of inputs, outputs, queries, tables, and external interfaces used in an application. Used for
software estimation and assessment of developer productivity.
GROUP: During report generation, one or more records that are collected into a single category,
usually for the purpose of totaling. Also used to identify a collection of database users with
common access privileges.
HARDWARE: Physical computer equipment and peripherals used to process, store, or transmit
software applications or data.
HIERARCHICAL MENU: A menu with multiple levels, consisting of a main menu bar that
leads to one or more levels of sub-menus from which choices or actions are made.
HYPERTEXT MARKUP LANGUAGE (HTML): A programming tool that uses hypertext to
establish dynamic links to other documents stored in the same or remote computers.
IMPLEMENTATION ELEMENT: A specific software component created to fulfill a specific
function defined in the functional and system design documents.
IMPLEMENTATION STAGE: A stage in the software development lifecycle during which a
software product is created from the design specifications and testing is performed on the
individual software units produced.
INCREMENTAL DEVELOPMENT: A software development technique where multiple small
software development lifecycles are used to develop the overall software product in a modular
fashion.
INDEX: A specialized data structure used to facilitate rapid access to individual database
records or groups of records.
INFORMATION: Data that has been shaped into a form that is meaningful and useful to
humans.
INFORMATION SYSTEM: Interrelated components working together to collect, process,
store, and disseminate information to support decision-making, coordination, control, analysis,
and/or visualization in an organization.
INHERITANCE: A feature of object-oriented programming where a specific class of objects
receives the features of a more general class.
INITIAL DATA LOAD: When a new database application is first brought online, certain sets
of data are preloaded to support operations. In some cases, a large amount of data is transferred
from one or more legacy systems that the new database application is replacing. The initial data
load figure is calculated as the sum of all records in operational and support data areas on day
zero of the application's production lifecycle. This figure is used as a baseline for estimating
development effort, server hardware requirements and network loads.
INSPECTION: Also termed desk checking. A quality assurance technique that relies on visual
examination of developed products (usually source code or design documentation) to detect
errors, violations of development standards, and other problems.
INSTALLATION STAGE: A software lifecycle stage that consists of the testing, training, and
conversion efforts necessary to place the developed software application into production.
INTEGRITY: The degree to which a software component or application prevents unauthorized
access to, or modification of, programs or data.
INTERFACE: A formal connection point defined between two independent applications for the
purpose of data exchange.
INTERFACE TESTING: A testing technique that evaluates whether software components pass
data and control correctly to one another.
INTERSECTION: A group of data elements included in two or more tables as part of a Join
operation.
JOIN: A database operation or command that links the rows or records of two or more tables by
one or more columns in each table.
JOINT APPLICATION DESIGN (JAD): A design technique that brings users and IT
professionals into a facilitated meeting for the purpose of interactively designing an application.
KEY FIELD: A field used to identify a record or group of records by its value.
KEY PROCESS AREA: A software engineering process identified by the Software Engineering
Institute Capability Maturity Model as essential to an organization's ability to develop
consistently high-quality software products.
KILOBYTE (KB): One thousand bytes (actually 1024 storage positions). Used as a measure of
storage capacity.
KNOWLEDGE MANAGEMENT: The process of systematically managing and leveraging the
stores of knowledge in an organization. This knowledge is generally stored as sets of documents
or database records.
LIFECYCLE: A set of software development activities, or stages, that function together to
guide the development and maintenance of software products. Each stage is finite in scope,
requires a specific set of inputs, and produces a specific set of deliverables.
MAINTENANCE: The process of supporting production software to detect and correct faults,
optimize performance, and ensure appropriate availability to end-users.
MASTER TABLE: A table containing data on which detail data in another table depends.
Master tables have a primary key that is matched to a foreign key in a detail table, and often
have a one-to-many relationship with detail tables.
MEGABYTE (MB): Approximately one million bytes. A unit of computer storage capacity.
MEGAHERTZ (MHz): A measure of the clock speed of the CPU in a computer. One megahertz
equals one million cycles per second.
METADATA: Data that describes the structure, organization, and/or location of data. In
essence, metadata is "data about data."
METHODOLOGY: A set of processes, procedures, and standards that defines an engineering
approach to the development of a work product.
METRIC: Numeric data representing measurements of business processes or database activity.
MILESTONE: In project management, a scheduled event of significance for which an
individual or team is accountable. Often used to measure progress.
MODEL: An abstract representation that illustrates the components or relationships of a
specified application or module.
MODULE: A functional part of an application that is discrete and identifiable with a specific
subject.
MODULE TESTING: The process of testing individual software modules or sets of related
modules to verify the implementation of the software.
MULTIUSER: Concurrent access to a single database by more than one user, usually through
the use of client workstations.
NORMALIZATION: The process of creating small stable data structures from complex groups
of data during the design of a relational database.
OBJECT CODE: Program instructions that have been translated into machine language so that
they can be executed by the computer.
OFFICE AUTOMATION SYSTEM (OAS): A combination of software applications, such as
word processing, electronic mail, and calendaring, that is designed to increase the productivity
of data workers in the office.
ONLINE ANALYTICAL PROCESSING (OLAP): A technology that operates on
non-relational, multidimensional databases often known as data cubes. Data cubes are often
created via specialized processing from relational databases. The objective of OLAP is to allow
the end user to perform highly flexible analysis and reporting.
ONLINE TRANSACTION PROCESSING (OLTP): OLTP most commonly refers to
large-scale database applications, such as order entry and payroll systems, which use transaction
processing to assure data integrity.
OPEN DATABASE CONNECTIVITY (ODBC): A set of software drivers and database
functions that allow different applications to access client/server RDBMSs, desktop database
files, text files, and spreadsheets for the purpose of exchanging and manipulating database
information.
OPERATING SYSTEM: System software that manages and controls the activities of the
computer. The operating system acts as the interface between applications and the computer
hardware.
OPERATIONAL DATA AREA: A module in a database application that supports the
maintenance of data associated with a major operational process. For example, in an order entry
system, the maintenance of customer data and the entry of orders would be considered
operational data areas.
OPERATIONAL TRANSACTION LOAD: The quantity of transactions per unit of time, for
all users, in all operational data areas of a database application. This figure is commonly
expressed as the number of transactions per day for all operational data areas, and is used to
estimate server and network capacity requirements.
ORGANIZATION: A formally defined structure that takes resources from the environment and
processes them to produce outputs.
OUTER JOIN: A SQL Join operation in which all rows of the joined tables are returned,
whether or not a match is made between columns.
OUTER QUERY: A synonym for the primary query in a statement that includes a subquery.
PARAMETER: A value passed to an application to direct performance. For example, a query
could be set up with a parameter that limits the returned records to those falling after a specific
date. Changing the value of the parameter changes the returned selection of records.
PEER REVIEW: See Technical Review.
PERMISSION: A synonym for privileges.
PLANNING STAGE: The first stage in the software development lifecycle. During the
planning stage, the needs and expectations of the customer are identified, the feasibility of the
project is determined, and the Project Plan is developed.
PRIMARY KEY: A field or fields whose individual or combined values uniquely identify a
record in a database.
PRIVILEGE: The authorities assigned to an end user by the database administrator or database
owner to perform operations on data objects.
PROCEDURE: A written description or diagram of a course of action to be taken to perform a
given task.
PRODUCTION: The time period after the new system is installed and any data conversion
efforts are complete. The system is now being used for normal operations.
PROJECT: A concerted effort that is focused on developing or maintaining a specific software
product or system. A project has a fixed scope, structure, and delivery schedule.
PROJECT MANAGER: The individual with total business responsibility for all activities of a
project, including the structure of the activities, resource utilization, and schedule management.
PROJECT PLAN: A document that describes the technical and management approach to be
followed for a project. The plan typically describes the scope of work, the methods to be used,
the project structure and initial schedule, as well as a list of deliverables and other key events
required for the project to be considered a success.
PROTOTYPING: The process of building an experimental system quickly and inexpensively
for demonstration and evaluation so that end-users can better determine the requirements of an
application.
PSEUDOCODE: A combination of programming language constructs and natural language
used to define an algorithm or business rule. Pseudocode is often used as a communications
bridge between end-users and analysts or programmers.
QUALITY: Satisfaction of customer criteria; conformance to design specifications.
QUALITY ASSURANCE ASSESSMENT: The assessment of a deliverable for the presence of
required internal and supporting elements. Generally performed by a Quality Assurance
Reviewer who is not a member of the development team or end-user base.
QUERY: A statement structured to direct the retrieval or manipulation of data in a database.
RELATIONAL DATABASE MANAGEMENT SYSTEM (RDBMS): An RDBMS is a
database management application that can create, organize, and store data. The RDBMS treats
data as if they were stored in two-dimensional tables. It can relate data stored in one table to data
in another as long as the two tables share a common data element, or key.
RECORD: A group of related fields. A single row of a relational database table that contains
each field defined for the table.
REFERENTIAL INTEGRITY: A set of rules governing the relationships between parent and
child tables within a relational database that ensures data consistency.
REGRESSION TESTING: Structured retesting of a software component or application to
verify that any modifications made have not caused unintended effects and that the software still
complies with its specified requirements.
RELIABILITY: The ability of a software application or component to perform its required
functions under design-compliant conditions for a specified period of time.
RELEASE VERSION: A software application or component that has been tested, found to be
in compliance with design documentation, and placed into production. See PRODUCTION.
REQUIREMENT: A condition or capability needed by the customer to solve a problem or
achieve an objective. This condition or capability must be met or possessed by the developed
software before it will be accepted by the customer.
REQUIREMENTS STAGE: A stage in the software lifecycle that immediately follows the
planning stage. During this stage, the requirements for a software product are defined and
documented. The output of this stage is a Requirements Specification.
REQUIREMENTS SPECIFICATION: A deliverable that specifies the manual and automated
requirements for a software product in non-technical language that the customer and end-users
can understand. The requirements specification focuses on what functions the application is to
perform, not how those functions will be executed.
REQUIREMENTS TRACEABILITY MATRIX (RTM): A table or spreadsheet describing the
relationships between application requirements, functional elements, design elements,
implementation elements, and test cases. The RTM acts as a bridge between the different stages
of the software development lifecycle, and provides an auditable trail that shows how each
requirement is fulfilled and tested.
RETIREMENT: Permanent removal of an application or software system from its operational
environment.
REVERSE ENGINEERING: The process of examining an existing application that has
characteristics that are similar to a desired application. Using the existing application as a guide,
the requirements for the new application are defined, analyzed, and extracted all the way back to
specifications. From this point, the specifications are altered to comply with any new customer
requirements and the new application is developed.
REVIEW: The examination of a deliverable for specific content by a reviewer with expertise in
the domain of the deliverable. The reviewer examines the content for correctness, consistency,
completeness, accuracy, readability, and testability.
REUSABILITY: The degree to which a software application, component, or other work product
can be used in more than one computer program or software system.
RISK: The possibility of suffering loss.
RISK MANAGEMENT: An approach to problem analysis that is used to identify, analyze,
prioritize, and control risks.
RULE: A specification that determines the data type and data value that can be entered in a
column of a table. Rules are classified as validation rules and business rules.
SECURITY: Policies, procedures, and technical measures used to prevent unauthorized access,
alteration, theft, or destruction of information systems or data.
SELF-JOIN: A SQL Join operation used to compare values within the columns of one table.
Self-joins join a table with itself, requiring that the table be assigned two different names, one of
which must be an alias.
SIMULTANEOUS USERS: A quantity of users and/or external systems connected to a
multi-user database application for the purpose of exchanging and/or maintaining data.
SOFTWARE: Computer programs, procedures, and associated documentation pertaining to the
operation of an application. The detailed instructions that control the operation of a computer
system.
)!*+# #'#<)%.#,<!#99<#7 <98 ¦ set of activities, referred to as
stages, arranged to produce a software product. The generally accepted stages for software
development are Planning, Requirements, Design, Development, Integration & Test, Installation
& ¦cceptance, and Maintenance.
)!*+#.#+9 Objective assessments (in the form of quantified measurements) of a
software application or component.
)!*+#(:*<*:+*,9#7(*8 ¦ process designed to provide management
with appropriate visibility into the software engineering processes being used by the project
team. A formal process for evaluating the work products produced during the software
development lifecycle.
SOURCE CODE Software programming instructions written in a language readable by
humans that must be translated into machine language before it can be executed by the computer.
SPECIFICATION Documentation that describes software requirements, designs, or other
characteristics in a complete, precise, and verifiable manner.
SPIRAL DEVELOPMENT MODEL An iterative version of the waterfall software
development model. Rather than producing the entire software product in one linear series of
steps, the spiral development model implies that specific components or sets of components of
the software product are brought through each of the stages in the software development
lifecycle before development begins on the next set. Once one component is completed, the next
component in order of applicability is developed. Often, a dependency tree is used where
components that have other components dependent upon them are developed first.
STAGE A partition of the software development cycle that represents a meaningful and
measurable set of related tasks which are performed to obtain specific work products, or
deliverables.
STAKEHOLDERS Those individuals with decision-making authority over a project or group
of projects.
STANDARDS Approved reference models and protocols as determined by standard-setting
groups to prescribe a disciplined, uniform approach to software development and maintenance.
STANDARD OPERATING PROCEDURE (SOP) Precise, defined rules, activities, and
practices developed by an organization to consistently manage normal operations.
STRUCTURED ANALYSIS A top-down method for defining system inputs, processes, and
outputs to build models of software products or systems. The four basic features in structured
analysis are data flow diagrams, data dictionaries, procedure logic descriptions, and data store
descriptions.
STRUCTURED QUERY LANGUAGE (SQL) A standard data definition and data
manipulation language for relational database management systems.
SUBJECT MATTER EXPERT (SME) A person, generally a customer staff member, who is
considered to be an expert in one or more operational processes that are the focus of an
automation effort. SMEs are generally the primary sources of application requirements, and play
very significant roles in the requirements, design, and testing stages of the software development
lifecycle.
SUBQUERY Any SQL Select statement that's included (nested) within another Select, Insert,
Update, or Delete statement, or nested within another subquery.
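A small sketch of a subquery, again in Python's sqlite3 module and again with an invented employee table; the inner SELECT is nested inside the WHERE clause of the outer statement.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, salary REAL)")
    conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                     [(1, "Ann", 900.0), (2, "Bob", 500.0), (3, "Cho", 700.0)])

    # Select everyone paid more than the company-wide average salary;
    # the nested SELECT computes that average.
    rows = conn.execute(
        "SELECT name FROM employee "
        "WHERE salary > (SELECT AVG(salary) FROM employee)"
    ).fetchall()
    print(rows)  # [('Ann',)]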
SUPPORT ACTIVITIES Activities that make the delivery of the primary services or products
of a firm possible. These activities typically focus on support of the organization's infrastructure,
human resources, technologies, and logistics.
SUPPORT DATA AREA A module in a database application that supports the maintenance of
data used primarily for reference and support of operational data areas. For example, in an order
entry system, the maintenance of lists of customer business types or order shipment methods
would be considered support data areas.
SYSTEM A collection of hardware, software, firmware, and documentation components
organized to accomplish a specific function or set of related functions.
SYSTEM DESIGN STAGE A stage in the software development lifecycle during which the
requirements for the software product architecture, components, interfaces, and data structures
are refined and expanded to the extent that the design is sufficiently complete to be implemented.
SYSTEM OWNER The organizational unit or person that provides funding and has approval
authority for the project. Also referred to as the customer or client. Typically, system owners are
also system users.
SYSTEM SOFTWARE Specialized programs that manage the resources of the computer, such
as the central processor, communication links, and peripheral devices.
SYSTEM TESTING Testing of the application or information system as a whole to determine
if discrete modules will function together as planned and to evaluate compliance with specified
requirements.
SYSTEMS ANALYSIS The analysis of a problem that the organization will try to solve with
an information system.
SYSTEMS ANALYST Specialists who translate business problems and requirements into
information systems requirements, acting as a bridge between the information systems
department, the system owner, and end-users.
TABLE A database object consisting of a group of rows (records) divided into columns (fields)
that contain data or Null values. A table is treated as a database device or object.
TASK The smallest accountable unit of work. A task is the lowest level of work division
typically included in the Project Plan and Work Breakdown Structure. Related tasks are usually
grouped to form activities.
TECHNICAL FEASIBILITY Determines whether a proposed application can be
implemented with the available hardware, software, and technical resources.
TECHNICAL REVIEW The review of a deliverable for technical accuracy by a qualified
developer who is familiar with the software product under design or development.
TECHNOLOGY Software and hardware tools and services used to develop and support a
database application. In general, a development team working with well-known, mature tools is
substantially more likely to succeed than a team working with newly released, unfamiliar
technology. Caveat: in rare cases, unique performance requirements dictate the use of new
technology for the project to have any chance of success.
TEST BED A specific set of hardware, software, instrumentation, simulators, and other support
elements needed to conduct a test of a database application.
TEST CASE A defined set of database records, test inputs, execution conditions and
anticipated results designed to exercise specific application components and verify compliance
with design criteria and requirements. Contains detailed instructions for the set up, execution,
and evaluation of the results for the test case.
TESTING STAGE Also often referred to as the test and acceptance stage. A stage in the
software development lifecycle where the components of a software product are executed under
specified conditions, the results are observed and recorded, and an evaluation is made to
determine whether or not the requirements have been met.
TEST ITEM A software component that is the object of a test case.
TEST PLAN A document that defines the preparations, test items, test data, and test cases for
the series of tests to be performed on an application.
TEST REPORT A document that contains a chronological record of the execution and results
of the testing carried out for a software component or application.
THREE-TIER The architecture of a database application consisting of a front end, application
server, and database server. The front end handles user input and output, the application server
serves front end components and handles the business rules, while the database server manages
the associated data storage as directed by the application server or front end client.
TIMESTAMP A set of date and time data attributes applied to a disk file or database record
when it is created or edited.
TRACEABILITY The degree to which a relationship can be established between two or more
products of the software development lifecycle.
TRANSACTION A set of processing tasks that are treated as a single activity to perform a
desired result. For example, a transaction would entail all the steps necessary to insert and or
modify values in multiple tables when a new invoice is created. If any of the record management
tasks fail, the entire activity is canceled, or rolled back. If all of the record management activities
are successful, the transaction is committed, or made permanent.
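The all-or-nothing behavior described above can be sketched with Python's sqlite3 transaction handling; the invoice tables, column names, and the simulated failure are all hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, total REAL)")
    conn.execute("CREATE TABLE invoice_line (invoice_id INTEGER, amount REAL)")

    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("INSERT INTO invoice VALUES (1, 150.0)")
            conn.execute("INSERT INTO invoice_line VALUES (1, 150.0)")
            raise RuntimeError("simulated failure in a later record-management task")
    except RuntimeError:
        pass

    # Because one task failed, the entire activity was rolled back:
    print(conn.execute("SELECT COUNT(*) FROM invoice").fetchone())  # (0,)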
TRANSACTION ANALYSIS A process used to divide complex data flow diagrams into
smaller, individual data flow diagrams for each class of transaction that the application will
process.
UNIT TESTING The isolated testing of each logical path of a specific implementation element
or groups of related elements. The expected output from the execution of the logical path is
predefined to allow comparisons of the planned output against the actual output.
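As a minimal illustration, the sketch below uses Python's unittest module to give each logical path of a made-up component its own test with a predefined expected output; the function under test is an assumption, not an example from this document.

    import unittest

    def classify(age):
        """Implementation element under test, with two logical paths."""
        return "minor" if age < 18 else "adult"

    class ClassifyTests(unittest.TestCase):
        # One test per logical path; each predefined expected output is
        # compared against the actual output.
        def test_minor_path(self):
            self.assertEqual(classify(17), "minor")

        def test_adult_path(self):
            self.assertEqual(classify(18), "adult")

    if __name__ == "__main__":
        unittest.main()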
USABILITY The ease with which a user can learn to operate a software application.
USE CASE A description of a business process under automation, focused on how Actors (user
and interfacing systems) interact with the process. Includes descriptions of the information the
Actors send to the system, the data the Actors receive from the process, and the operations they
perform using the system.
USERS Those individuals who use a specific software product or system. User activities can
include data entry, queries and updates, the execution of batch operations, and the generation of
reports.
USER INTERFACE The part of the application through which the end-user interacts with the
system.
USER MANUAL A document that describes a software application in sufficient detail to
enable an end-user to obtain desired results. Typically includes a tutorial, a description of
functions and data structures, options, allowable inputs, expected outputs, and possible error
messages.
VALIDATION 1. The process of determining whether a value in a table's data cell fits within
the allowable range or is a member of a set of acceptable values.
2. The process of evaluating software to ensure compliance with established requirements and
design criteria.
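Sense 1 above can be sketched as code. The two checks below are illustrative validation rules (a range rule and a set-membership rule); the column names and limits are invented for the example.

    ALLOWED_SHIP_METHODS = {"ground", "air", "courier"}

    def validate_order(quantity, ship_method):
        errors = []
        if not (1 <= quantity <= 999):               # range rule
            errors.append("quantity out of range")
        if ship_method not in ALLOWED_SHIP_METHODS:  # set-membership rule
            errors.append("unknown shipment method")
        return errors

    print(validate_order(5, "air"))     # []
    print(validate_order(0, "pigeon"))  # both rules violated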
VERIFICATION The process of evaluating an application to determine whether or not the
work products of a stage of a software development lifecycle fulfill the requirements established
during the previous stage.
VIRTUAL ORGANIZATION An organization that uses networks to link people, assets, and
ideas to create and distribute products and services without being limited to traditional
organizational boundaries or physical location.
WALKTHROUGH The review of a functional
element, design element, or implementation element by a team of subject matter experts to detect
possible errors, violation of development standards, and other problems.
WATERFALL DEVELOPMENT MODEL A software development lifecycle in which each
stage is dependent upon the outputs of the previous stage. The vision is that of water flowing
downhill. Once a stage is completed, the next stage begins. Previously completed stages cannot
be re-initiated.
WORK BREAKDOWN STRUCTURE (WBS) A listing of all activities and tasks related to
those activities that make up a complete project. Generally described in outline form to show the
hierarchical relationship between activities and tasks.
WORK PRODUCT A specific document or software component that results from a project
activity or task.
Testing Important Definitions
Acceptance Testing
Testing conducted to enable a user/customer to determine whether to accept
a software product. Normally performed to validate the software meets a set of agreed
acceptance criteria.
Accessibility Testing
Verifying a product is accessible to the people having disabilities (deaf,
blind, mentally disabled etc.).
Ad Hoc Testing
A testing phase where the tester tries to 'break' the system by randomly trying
the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing
Testing practice for projects using agile methodologies, treating development as
the customer of testing and emphasizing a test-first design paradigm. See also Test Driven
Development.
Baseline The point at which some deliverable produced during the software engineering process
is put under formal change control.
Beta Testing
Testing of a rerelease of a software product conducted by customers.
Binary Portability Testing
Testing an executable application for portability across system
platforms and environments, usually for conformance to an ABI specification.
Black Box Testing
Testing based on an analysis of the specification of a piece of software
without reference to its internal workings. The goal is to test how well the component conforms
to the published requirements for the component.
Bug
A fault in a program, which causes the program to perform in an unintended or
unanticipated manner.
CMM The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging
the maturity of the software processes of an organization and for identifying the key practices
that are required to increase the maturity of these processes.
Code Coverage
An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed and therefore
may require additional attention.
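The self-contained Python sketch below imitates what a coverage tool reports (real tools such as coverage.py automate this); it simply records which branch of an invented function a test suite actually executed.

    # Illustrative only -- not a real coverage tool.
    executed = set()

    def shipping_cost(weight):
        if weight <= 1.0:
            executed.add("light-branch")
            return 5.0
        executed.add("heavy-branch")
        return 5.0 + 2.0 * weight

    # A suite containing only this one call leaves "heavy-branch"
    # unexecuted, which a coverage report would flag for attention.
    assert shipping_cost(0.5) == 5.0
    print("not covered:", {"light-branch", "heavy-branch"} - executed)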
Code Inspection A formal testing technique where the programmer reviews source code with a
group who ask questions analyzing the program logic, analyzing the code with respect to a
checklist of historically common programming errors, and analyzing its compliance with coding
standards.
Code Walkthrough
A formal testing technique where source code is traced by a group with a
small set of test cases, while the state of program variables is manually monitored, to analyze the
programmer's logic and assumptions.
Compatibility Testing
Testing whether software is compatible with other elements of a system
with which it should operate, e.g. browsers, Operating Systems, or hardware.
Concurrency Testing
Multi-user testing geared towards determining the effects of accessing the
same application code, module or database records. Identifies and measures the level of locking,
deadlocking and use of single-threaded code and locking semaphores.
Defect Nonconformance to requirements or functional / program specification.
Dynamic Testing
Testing software through executing it. See also Static Testing.
Endurance Testing
Checks for memory leaks or other problems that may occur with prolonged
execution.
End-to-End Testing
Testing a complete application environment in a situation that mimics real-
world use, such as interacting with a database, using network communications, or interacting
with other hardware, applications, or systems if appropriate.
Functional Testing
Testing the features and operational behavior of a product to ensure they
correspond to its specifications. Testing that ignores the internal mechanism of a system or
component and focuses solely on the outputs generated in response to selected inputs and
execution conditions.
Glass Box Testing
A synonym for White Box Testing.
Gorilla Testing
Testing one particular module or functionality heavily.
Gray Box Testing
A combination of Black Box and White Box testing methodologies: testing a
piece of software against its specification but using some knowledge of its internal workings.
Integration Testing
Testing of combined parts of an application to determine if they function
together correctly. Usually performed after unit and functional testing. This type of testing is
especially relevant to client/server and distributed systems.
Installation Testing
Confirms that the application under test recovers from expected or
unexpected events without loss of data or functionality. Events can include shortage of disk
space, unexpected loss of communication, or power out conditions.
Metric
A standard of measurement. Software metrics are the statistics describing the structure or content
of a program. A metric should be a real objective measurement of something such as number of
bugs per lines of code.
Monkey Testing
Testing a system or an application on the fly, i.e. just a few tests here and there
to ensure the system or an application does not crash out.
Negative Testing
Testing aimed at showing software does not work. Also known as "test to
fail". See also Positive Testing.
Performance Testing
Testing conducted to evaluate the compliance of a system or component
with specified performance requirements. Often this is performed using an automated test tool to
simulate a large number of users. Also known as "Load Testing".
Positive Testing
Testing aimed at showing software works. Also known as "test to pass". See
also Negative Testing.
Race Condition A cause of concurrency problems. Multiple accesses to a shared resource, at
least one of which is a write, with no mechanism used by either to moderate simultaneous access.
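The classic demonstration is an unsynchronized counter shared by two threads, sketched below in Python; whether updates are actually lost varies with the interpreter and timing, which is exactly what makes race conditions hard to test for.

    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1  # read, increment, write: not atomic

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # May print less than 200000 on some runs; wrapping the increment in a
    # threading.Lock would moderate access and restore the expected total.
    print(counter)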
Ramp Testing
Continuously raising an input signal until the system breaks down.
Recovery Testing
Confirms that the program recovers from expected or unexpected events
without loss of data or functionality. Events can include shortage of disk space, unexpected loss
of communication, or power out conditions.
Regression Testing
Retesting a previously tested program following modification to ensure that
faults have not been introduced or uncovered as a result of the changes made.
Sanity Testing
Brief test of major functional elements of a piece of software to determine if it's
basically operational. See also Smoke Testing.
Scalability Testing
Performance testing focused on ensuring the application under test
gracefully handles increases in work load.
Security Testing
Testing which confirms that the program can restrict access to authorized
personnel and that the authorized personnel can access the functions available to their security
level.
Smoke Testing
A quick-and-dirty test that the major functions of a piece of software work.
Originated in the hardware testing practice of turning on a new piece of hardware for the first
time and considering it a success if it does not catch on fire.
Soak Testing
Running a system at high load for a prolonged period of time. For example,
running several times more transactions in an entire day (or night) than would be expected in a
busy day, to identify any performance problems that appear after a large number of transactions
have been executed.
Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of
its specified requirements to determine the load under which it fails and how. Often this is
performance testing using a very high level of simulated load.
System Testing
Testing that attempts to discover defects that are properties of the entire system
rather than of its individual components.
Testing
The process of exercising software to verify that it satisfies specified requirements and
to detect errors. The process of analyzing a software item to detect the differences between
existing and required conditions (that is, bugs), and to evaluate the features of the software item
(Ref. IEEE Std 829).
The process of operating a system or component under specified conditions, observing or
recording the results, and making an evaluation of some aspect of the system or component.
Test Bed An execution environment configured for testing. May consist of specific hardware,
OS, network topology, configuration of the product under test, other application or system
software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case
Test Case is a commonly used term for a specific test. This is usually the smallest unit
of testing. A Test Case will consist of information such as requirements testing, test steps,
verification steps, prerequisites, outputs, test environment, etc.
A set of inputs, execution preconditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.
Test Driver A program or test tool used to execute tests. Also known as a Test Harness.
Test Harness A program or test tool used to execute tests. Also known as a Test Driver.
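A test driver can be as small as a loop over a table of inputs and expected outcomes. The Python sketch below is illustrative only; the component under test is a stand-in for a real test item.

    def component_under_test(x):
        return x * 2  # stand-in for the real test item

    TEST_CASES = [
        ("doubles a positive number", 2, 4),
        ("doubles zero", 0, 0),
        ("doubles a negative number", -3, -6),
    ]

    # The driver executes each case and compares expected to actual output.
    for name, given, expected in TEST_CASES:
        actual = component_under_test(given)
        verdict = "PASS" if actual == expected else "FAIL"
        print(f"{verdict}: {name} (expected {expected}, got {actual})")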
Test Plan A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Traceability Matrix A document showing the relationship between Test Requirements and Test
Cases.
Use Case The specification of tests that are conducted from the end-user perspective. Use cases
tend to focus on operating software as an end-user would conduct their day-to-day activities.
Validation The process of evaluating software at the end of the software development process to
ensure compliance with software requirements. The techniques for validation are testing,
inspection and reviewing.
Verification The process of determining whether or not the products of a given phase of the
software development cycle meet the implementation steps and can be traced to the incoming
objectives established during the previous phase. The techniques for verification are testing,
inspection and reviewing.
Volume Testing
Testing which confirms that any values that may become large over time (such
as accumulated counts, logs, and data files), can be accommodated by the program and will not
cause the program to stop working or degrade its operation in any manner.
Showstopper Bug
If any required feature of the release whose test is defined in the release's Test Plan fails, a bug
should be opened in this category against the Proxy module of Vocal, and the release can't be
made if it's not fixed. All memory leak, performance, reliability, and flakiness problems also go
under this priority for the release. Bugs in both of the above categories are considered show
stoppers, and the release can't be made until they are fixed.

FAQ'S


In this section we go through the list of FAQ's.

Q1: What is verification?
A: Verification ensures the product is designed to deliver all functionality to the customer; it
typically involves reviews and meetings to evaluate documents, plans, code, requirements and
specifications; this can be done with checklists, issues lists, walkthroughs and inspection
meetings.
Q2: What is validation?
A: Validation ensures that functionality, as defined in requirements, is the intended behavior of
the product; validation typically involves actual testing and takes place after verifications are
completed.

Q3: What is a walkthrough?
A: A walkthrough is an informal meeting for evaluation or informational purposes. A
walkthrough is also a process at an abstract level. It's the process of inspecting software code by
following paths through the code (as determined by input conditions and choices made along the
way). The purpose of code walkthroughs is to ensure the code fits the purpose. Walkthroughs
also offer opportunities to assess an individual's or team's competency.
Q4: What is an inspection?
A: An inspection is a formal meeting, more formalized than a walkthrough and typically consists
of 3-10 people including a moderator, reader (the author of whatever is being reviewed) and a
recorder (to make notes in the document). The subject of the inspection is typically a document,
such as a requirements document or a test plan. The purpose of an inspection is to find problems
and see what is missing, not to fix anything. The result of the meeting should be documented in a
written report. Attendees should prepare for this type of meeting by reading through the
document, before the meeting starts; most problems are found during this preparation.
Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring
quality, since bug prevention is more cost effective than bug detection.
Q5: What is quality software?
A: Quality software is software that is reasonably bug-free, delivered on time and within budget,
meets requirements and expectations and is maintainable. However, quality is a subjective term.
Quality depends on who the customer is and their overall influence in the scheme of things.
Customers of a software development project include end-users, customer acceptance test
engineers, testers, customer contract officers, customer management, the development
organization's management, test engineers, testers, salespeople, software engineers, stockholders
and accountants. Each type of customer will have his or her own slant on quality. The accounting
department might define quality in terms of profits, while an end-user might define quality as
user friendly and bug free.
Q6: What is good code?
A: Good code is code that works, is free of bugs, and is readable and maintainable.
Organizations usually have coding standards all developers should adhere to, but every
programmer and software engineer has different ideas about what is best and what are too many
or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity
and creativity. Peer reviews and code analysis tools can be used to check for problems and
enforce standards.
Q7: What is good design?
A: Design could mean many things, but often refers to functional design or internal design.
Good functional design is indicated by software whose functionality can be traced back to customer and
end-user requirements. Good internal design is indicated by software code whose overall
structure is clear, understandable, easily modifiable and maintainable; is robust with sufficient
error handling and status logging capability; and works correctly when implemented.
Q8: What is the software life cycle?
A: The software life cycle begins when a software product is first conceived and ends when it is no
longer in use. It includes phases like initial concept, requirements analysis, functional design,
internal design, documentation planning, test planning, coding, document preparation,
integration, testing, maintenance, updates, re-testing and phase-out.
Q9: Why does software have bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software
complexity, programming errors, changes in requirements, errors made in bug tracking, time
pressure, poorly documented code and/or bugs in tools used in software development.
1. There are unclear software requirements because there is miscommunication as to what the
software should or shouldn't do.
2. Software complexity. All of the following contribute to the exponential growth in software
and system complexity: Windows interfaces, client-server and distributed applications, data
communications, enormous relational databases and the sheer size of applications.
3. Programming errors occur because programmers and software engineers, like everyone else,
can make mistakes.
4. ¦s to changing requirements, in some fast-changing business environments, continuously
modified requirements are a fact of life. Sometimes customers do not understand the effects of
changes, or understand them but request them anyway. And the changes may require redesign of the
software and rescheduling of resources; some of the work already completed may have to be redone
or discarded, and hardware requirements can be affected, too.
5. Bug tracking can result in errors because the complexity of keeping track of changes can itself
lead to errors, too.
6. Time pressures can cause problems, because scheduling of software projects is not easy and it
often requires a lot of guesswork and when deadlines loom and the crunch comes, mistakes will
be made.
7. Code documentation is tough to maintain and it is also tough to modify code that is poorly
documented. The result is bugs. Sometimes there is no incentive for programmers and software
engineers to document their code and write clearly documented, understandable code.
Sometimes developers get kudos for quickly turning out code, or programmers and software
engineers feel they cannot have job security if everyone can understand the code they write, or
they believe if the code was hard to write, it should be hard to read.
8. Software development tools, including visual tools, class libraries, compilers, and scripting tools,
can introduce their own bugs. Other times the tools are poorly documented, which can create
additional bugs.
Q10: How can new Software QA processes be introduced in an existing organization?
A: It depends on the size of the organization and the risks involved. For large organizations with
high-risk projects, a serious management buy-in is required and a formalized QA process is
necessary. For medium size organizations with lower risk projects, management and
organizational buy-in and a slower, step-by-step process is required. Generally speaking, QA
processes should be balanced with productivity, in order to keep any bureaucracy from getting
out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends
on team leads and managers; feedback to developers and good communication is essential among
customers, managers, developers, test engineers and testers. Regardless of the size of the company,
the greatest value for effort is in managing requirement processes, where the goal is requirements
that are clear, complete and testable.
Q11: Give me five common problems that occur during software development.
A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features
after development is underway and poor communication.
1. Requirements are poorly written when requirements are unclear, incomplete, too general, or
not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good until
customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed, or customers have
unrealistic expectations and therefore problems are guaranteed.
Q12: Do automated testing tools make testing easier?
A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for
small projects, the time needed to learn and implement them is usually not worthwhile. A
common type of automated tool is the record/playback type. For example, a test engineer clicks
through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an
automated testing tool record and log the results. The recording is typically in the form of text,
based on a scripting language that the testing tool can interpret. If a change is made (e.g. new
buttons are added, or some underlying code in the application is changed), the application is then
re-tested by just playing back the recorded actions and compared to the logged results in order to
check effects of the change. One problem with such tools is that if there are continual changes to
the product being tested, the recordings have to be changed so often that it becomes a very time-
consuming task to continuously update the scripts. Another problem with such tools is the
interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

Q13: Give me five solutions to problems that occur during software development.
A: Solid requirements, realistic schedules, adequate testing, firm requirements and good
communication.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable.
All players should agree to the requirements. Use prototypes to help nail down requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug
fixing, re-testing, changes and documentation. Personnel should be able to complete the project
without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for
sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend
design against changes and additions, once development has begun and be prepared to explain
consequences. If changes are necessary, ensure they're adequately reflected in related schedule
changes. Use prototypes early on so customers' expectations are clarified and customers can see
what to expect; this will minimize changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use
of e-mail, networked bug-tracking tools, tools of change management. Ensure documentation is
available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and
cooperation.
Q14: What makes a good test engineer?
A: A good test engineer:
1. Has a "test to break" attitude,
2. Takes the point of view of the customer,
3. Has a strong desire for quality,
4. Has an attention to detail. He's also
5. Tactful and diplomatic and
6. Has good communication skills, both oral and written. And he
7. Has previous software development experience, too.
Good test engineers have a "test to break" attitude. We, good test engineers, take the point of
view of the customer, have a strong desire for quality and an attention to detail. Tact and
diplomacy are useful in maintaining a cooperative relationship with developers and an ability to
communicate with both technical and non-technical people. Previous software development
experience is also helpful as it provides a deeper understanding of the software development
process, gives the test engineer an appreciation for the developers' point of view and reduces the
learning curve in automated test tool programming.
Q15: What makes a good Software QA engineer?
A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, Rob
Davis understands the entire software development process and how it fits into the business
approach and the goals of the organization. Rob Davis' communication skills and the ability to
understand various sides of issues are important. Good QA engineers understand the entire
software development process and how it fits into the business approach and the goals of the
organization. Communication skills and the ability to understand various sides of issues are
important.

FAQ'S


Q16: What makes a good resume?
A: On the subject of resumes, there seems to be an unending discussion of whether you should or
shouldn't have a one-page resume. The followings are some of the comments I have personally
heard: "Well, Joe Blow (car salesman) said I should have a one-page resume." "Well, I read a
book and it said you should have a one page resume." "I can't really go into what I really did
because if I did, it'd take more than one page on my resume." "Gosh, I wish I could put my job at
IM on my resume but if I did it'd make my resume more than one page, and I was told to never
make the resume more than one page long." "I'm confused, should my resume be more than one
page? I feel like it should, but I don't want to break the rules." Or, here's another comment,
"People just don't read resumes that are longer than one page." I have heard some more, but we
can start with these. So what's the answer? There is no scientific answer about whether a one-
page resume is right or wrong. It all depends on who you are and how much experience you
have. The first thing to look at here is the purpose of a resume.

The purpose of a resume is to get you an interview. If the resume is getting you interviews, then
it is considered to be a good resume. If the resume isn't getting you interviews, then you should
change it. The biggest mistake you can make on your resume is to make it hard to read. Why?
Because, for one, scanners don't like odd resumes. Small fonts can make your resume harder to
read. Some candidates use a 7-point font so they can get the resume onto one page. Big mistake.
Two, resume readers do not like eye strain either. If the resume is mechanically challenging, they
just throw it aside for one that is easier on the eyes. Three, there are lots of resumes out there
these days, and that is also part of the problem. Four, in light of the current scanning scenario,
more than one page is not a deterrent because many will scan your resume into their database.
Once the resume is in there and searchable, you have accomplished one of the goals of resume
distribution. Five, resume readers don't like to guess and most won't call you to clarify what is on
your resume.

Generally speaking, your resume should tell your story. If you're a college graduate looking for
your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be
longer. Please put your experience on the resume so resume readers can tell when and for whom
you did what. Short resumes -- for people long on experience -- are not appropriate. The real
audience for these short resumes is people with short attention spans and low IQ. I assure you
that when your resume gets into the right hands, it will be read thoroughly.

Q17: What makes a good QA/Test Manager?
A: QA/Test Managers are familiar with the software development process; able to maintain
enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to
increase productivity; able to promote cooperation between Software and Test/QA Engineers,
have the people skills needed to promote improvements in QA processes, have the ability to
withstand pressures and say *no* to other managers when quality is insufficient or QA processes
are not being adhered to; able to communicate with technical and non-technical people; as well
as able to run meetings and keep them focused.
Q18: What's the role of documentation in QA?
A: Documentation plays a critical role in QA. QA practices should be documented, so that they
are repeatable. Specifications, designs, business rules, inspection reports, configurations, code
changes, test plans, test cases, bug reports, user manuals should all be documented. Ideally, there
should be a system for easily finding and obtaining of documents and determining what
document will have a particular piece of information. Use documentation change management, if
possible.
Q19: What's the big deal about requirements?
A: Requirement specifications are important; one of the most reliable methods of ensuring
problems in a complex software project is to have poorly documented requirement
specifications. Requirements are the details describing an application's externally perceived
functionality and properties. Requirements should be clear, complete, reasonably detailed,
cohesive, attainable and testable. A non-testable requirement would be, for example, "user-
friendly", which is too subjective. A testable requirement would be something such as, "the
product shall allow the user to enter their previously-assigned password to access the
application". Care should be taken to involve all of a project's significant customers in the
requirements process. Customers could be in-house or external and could include end-users,
customer acceptance test engineers, testers, customer contract officers, customer management,
future software maintenance engineers, salespeople and anyone who could later derail the
project if his/her expectations aren't met; such people should be included as customers, if possible. In
some organizations, requirements may end up in high-level project plans, functional
specification documents, design documents, or other documents at various levels of detail. No
matter what they are called, some type of documentation with detailed requirements will be
needed by test engineers in order to properly plan and execute tests. Without such documentation
there will be no clear-cut way to determine if a software application is performing correctly.
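To see why testable wording matters, note that the example requirement quoted above translates almost directly into an automated check. The sketch below is hypothetical: the login function is a stand-in for a real application's authentication routine, and the account data is invented.

    import unittest

    ACCOUNTS = {"pat": "s3cret"}  # hypothetical previously-assigned credentials

    def login(user, password):
        """Stand-in for the application's real authentication routine."""
        return ACCOUNTS.get(user) == password

    class PasswordRequirementTest(unittest.TestCase):
        def test_previously_assigned_password_grants_access(self):
            self.assertTrue(login("pat", "s3cret"))

        def test_wrong_password_denies_access(self):
            self.assertFalse(login("pat", "guess"))

    if __name__ == "__main__":
        unittest.main()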
Q20: What's a test plan?
A: A software project test plan is a document that describes the objectives, scope, approach and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the why and how of product
validation. It should be thorough enough to be useful, but not so thorough that no one outside the
test group will be able to read it.
Q21: What's a test case?
A: A test case is a document that describes an input, action, or event and its expected result, in
order to determine if a feature of an application is working correctly. A test case should contain
particulars such as a...
1. Test case identifier;
2. Test case name;
3. Objective;
4. Test conditions/setup;
5. Input data requirements/steps, and
6. Expected results.
Please note, the process of developing test cases can help find problems in the requirements or
design of an application, since it requires you to completely think through the operation of the
application. For this reason, it is useful to prepare test cases early in the development cycle, if
possible.
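As an illustration of the particulars listed above, a test case can be captured as a structured record like the following Python sketch; the field values are invented for the example.

    test_case = {
        "id": "TC-042",                                  # 1. identifier
        "name": "Login rejects an unknown user",         # 2. name
        "objective": "Verify authentication fails for unregistered accounts",
        "conditions_setup": "Application deployed; user 'nobody' not registered",
        "input_steps": [                                 # 5. input data/steps
            "Open the login page",
            "Enter user name 'nobody' and any password",
            "Press the Login button",
        ],
        "expected_results": "An 'invalid credentials' message; no session created",
    }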
Q22: What should be done when a bug is found?
A: When a bug is found, it needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested. Additionally, determinations should be
made regarding requirements, software, hardware, safety impact, etc., for regression testing to
check the fixes didn't create other problems elsewhere. If a problem-tracking system is in place,
it should encapsulate these determinations. A variety of commercial, problem-
tracking/management software tools are available. These tools, with the detailed input of
software test engineers, will give the team complete information so developers can understand
the bug, get an idea of its severity, reproduce it and fix it.

Q23: What is configuration management?
A: Configuration management (CM) covers the tools and processes used to control, coordinate
and track code, requirements, documentation, problems, change requests, designs, tools,
compilers, libraries, patches, changes made to them and who makes the changes.
Q24: What if the software is so buggy it can't really be tested at all?
A: In this situation the best bet is to have test engineers go through the process of reporting
whatever bugs or problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules and indicates deeper problems in the software
development process, such as insufficient unit testing, insufficient integration testing, poor
design, improper build or release procedures, managers should be notified and provided with
some documentation as evidence of the problem.
Q25: How can it be known when to stop testing?
A: This can be difficult to determine. Many modern software applications are so complex and
run in such an interdependent environment, that complete testing can never be done. Common
factors in deciding when to stop are...
1. Deadlines, e.g. release deadlines, testing deadlines;
2. Test cases completed with certain percentage passed;
3. Test budget has been depleted;
4. Coverage of code, functionality, or requirements reaches a specified point;
5. ug rate falls below a certain level; or
6. eta or alpha testing period ends.
Q26: What if there isn't enough time for thorough testing?
A: Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. Use risk analysis to determine where testing
should be focused. This requires judgment skills, common sense and experience. The checklist
should include answers to the following questions:
1. Which functionality is most important to the project's intended purpose?
2. Which functionality is most visible to the user?
3. Which functionality has the largest safety impact?
4. Which functionality has the largest financial impact on users?
5. Which aspects of the application are most important to the customer?
6. Which aspects of the application can be tested early in the development cycle?
7. Which parts of the code are most complex and thus most subject to errors?
8. Which parts of the application were developed in rush or panic mode?
9. Which aspects of similar/related previous projects caused problems?
10. Which aspects of similar/related previous projects had large maintenance expenses?
11. Which parts of the requirements and design are unclear or poorly thought out?
12. What do the developers think are the highest-risk aspects of the application?
13. What kinds of problems would cause the worst publicity?
14. What kinds of problems would cause the most customer service complaints?
15. What kinds of tests could easily cover multiple functionalities?
16. Which tests will have the best high-risk-coverage to time-required ratio?
Q27: What if the project isn't big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However, if extensive testing
is still not justified, risk analysis is again needed and the considerations listed under "What if
there isn't enough time for thorough testing?" do apply. The test engineer then should do "ad
hoc" testing, or write up a limited test plan based on the risk analysis.
Q28: What can be done if requirements are changing continuously?
A: Work with management early on to understand how requirements might change, so that
alternate test plans and strategies can be worked out in advance. It is helpful if the application's
initial design allows for some adaptability, so that later changes do not require redoing the
application from scratch. Additionally, try to...
1. Ensure the code is well commented and well documented; this makes changes easier for the
developers.
2. Use rapid prototyping whenever possible; this will help customers feel sure of their
requirements and minimize changes.
3. In the project's initial schedule, allow extra time commensurate with probable
changes.
4. Move new requirements to a 'Phase 2' version of an application and use the original
requirements for the 'Phase 1' version.
5. Negotiate to allow only easily implemented new requirements into the project; move more
difficult, new requirements into future versions of the application.
6. Ensure customers and management understand scheduling impacts, inherent risks and costs of
significant requirements changes. Then let management or the customers decide if the changes
are warranted; after all, that's their job.
7. Balance the effort put into setting up automated testing with the expected effort required to
redo them to deal with changes.
8. Design some flexibility into automated test scripts;
9. Focus initial automated testing on application aspects that are most likely to remain
unchanged;
10. Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing
needs;
11. Design some flexibility into test cases; this is not easily done; the best bet is to minimize the
detail in the test cases, or set up only higher-level generic-type test plans;
12. Focus less on detailed test plans and test cases and more on ad-hoc testing with an
understanding of the added risk this entails.
Q29: What if the application has functionality that wasn't in the requirements?
A: It may take serious effort to determine if an application has significant unexpected or hidden
functionality, which would indicate deeper problems in the software development process. If
the functionality isn't necessary to the purpose of the application, it should be removed, as it may
have unknown impacts or dependencies that were not taken into account by the designer or the
customer.
If not removed, design information will be needed to determine added testing needs or regression
testing needs. Management should be made aware of any significant added risks as a result of the
unexpected functionality. If the functionality only affects areas, such as minor improvements in
the user interface, it may not be a significant risk.
Q30: How can Software QA processes be implemented without stifling productivity?
A: Implement QA processes slowly over time. Use consensus to reach agreement on processes
and adjust and experiment as an organization grows and matures. Productivity will be improved
instead of stifled. Problem prevention will lessen the need for problem detection. Panics and
burnout will decrease and there will be improved focus and less wasted effort. At the same time,
attempts should be made to keep processes simple and efficient, minimize paperwork, promote
computer-based processes and automated tracking and reporting, minimize time required in
meetings and promote training as part of the QA process. However, no one, especially talented
technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development will be needed, but less time will
be required for late-night bug fixing and calming of irate customers.

FAQ'S


Q31: What if an organization is growing so fast that fixed QA processes are impossible?
A: This is a common problem in the software industry, especially in new technology areas. There
is no easy solution in this situation, other than...
1. Hire good people (i.e. hire Rob Davis);
2. Ruthlessly prioritize quality issues and maintain focus on the customer;
Everyone in the organization should be clear on what quality means to the customer.
Q32: How is testing affected by object-oriented designs?
A: A well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box testing
(where an understanding of the internal design of the application is unnecessary), white-box
testing can be oriented to the application's objects. If the application was well designed this can
simplify test design.

Q33: Why do you recommend that we test during the design phase?
A: Because testing during the design phase can prevent defects later on. We recommend
verifying three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships between
modules, how to pass data, what happens in exceptional circumstances, starting state of each
module and how to guarantee the state of each module).
3. Verify the design incorporates enough memory, I/O devices and quick enough runtime for the
final product.
Q34: What is software quality assurance?
A: Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves
the entire software development process. Prevention is monitoring and improving the process,
making sure any agreed-upon standards and procedures are followed and ensuring problems are
found and dealt with. Software Testing, when performed by Rob Davis, is also oriented to
*detection*. Testing involves the operation of a system or application under controlled
conditions and evaluating the results. Organizations vary considerably in how they assign
responsibility for QA and testing. Sometimes they're the combined responsibility of one group or
individual. Also common are project teams, which include a mix of test engineers, testers and
developers who work closely together, with overall QA processes monitored by project
managers. It depends on what best fits your organization's size and business structure. Rob Davis
can provide QA and/or Software QA. This document details some aspects of how he can provide
software testing/QA service. For more information, e-mail rob@robdavispe.com
Q35: What is quality assurance?
A: Quality Assurance ensures all parties concerned with the project adhere to the process and
procedures, standards and templates and test readiness reviews.
Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads
or managers, feedback to developers and communications among customers, managers,
developers, test engineers and testers.
Q36: Processes and procedures - why follow them?
A: Detailed and well-written processes and procedures ensure the correct steps are being
executed to facilitate a successful completion of a task. They also ensure a process is repeatable.
Once Rob Davis has learned and reviewed the customer's business processes and procedures, he will
follow them. He will also recommend improvements and/or additions.
Q37: Standards and templates - what is supposed to be in a document?
A: All documents should be written to a certain standard and template. Standards and templates
maintain document uniformity. It also helps in learning where information is located, making it
easier for a user to find what they want. Lastly, with standards and templates, information will
not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your
standards and templates, he will use them. De will also recommend improvements and/or
additions.
Q38: What are the different levels of testing?
A: Rob Davis has expertise in testing at all testing levels listed below. At each test level, he
documents the results. Each level of testing is either considered black or white box testing.
Q39: What is black box testing?
A: Black box testing is functional testing, not based on any knowledge of internal software
design or code. Black box tests are based on requirements and functionality.
Q40: What is white box testing?
A: White box testing is based on knowledge of the internal logic of an application's code. Tests
are based on coverage of code statements, branches, paths and conditions.

Q41: What is unit testing?
A: Unit testing is the first level of dynamic testing and is first the responsibility of developers
and then that of the test engineers. Unit testing is performed after the expected test results are
met or differences are explainable/acceptable.
Q42: What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to the
output of the current system to verify the new system performs the operations correctly.
Q43: What is functional testing?
A: Functional testing is a black-box type of testing geared to the functional requirements of an
application. Test engineers *should* perform functional testing.
Q44: What is usability testing?
A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the
targeted end-user or customer. User interviews, surveys, video recording of user sessions and
other techniques can be used. Programmers and developers are usually not appropriate as
usability testers.
Q45: What is incremental integration testing?
A: Incremental integration testing is continuous testing of an application as new functionality is
added. This may require that various aspects of an application's functionality are
independent enough to work separately, before all parts of the program are completed, or that
test drivers are developed as needed. This type of testing may be performed by programmers,
software engineers, or test engineers.

FAQ'S


Q46: What is integration testing?
A: Upon completion of unit testing, integration testing begins. Integration testing is black box
testing. The purpose of integration testing is to ensure distinct components of the application still
work in accordance with customer requirements. Test cases are developed with the express purpose
of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either in
line or differences are explainable/acceptable based on client input.
Q47: What is system testing?
A: System testing is black box testing, performed by the Test Team, and at the start of the system
testing the complete system is configured in a controlled environment. The purpose of system
testing is to validate an application's accuracy and completeness in performing the functions as
designed. System testing simulates real life scenarios that occur in a "simulated real life" test
environment and test all functions of the system that are required in real life. System testing is
deemed complete when actual results and expected results are either in line or differences are
explainable or acceptable, based on client input. Upon completion of integration testing, system
testing is started. Before system testing, all unit and integration test results are reviewed by
Software QA to ensure all problems have been resolved.
important to understand unresolved problems that originate at unit and integration test levels.

Q48: What is end-to-end testing?
A: Similar to system testing, the *macro* end of the test scale is testing a complete application in
a situation that mimics real world use, such as interacting with a database, using network
communication, or interacting with other hardware, application, or system.
Q49: What is regression testing?
A: The objective of regression testing is to ensure the software remains intact. A baseline set of
data and scripts is maintained and executed to verify changes introduced during the release have
not "undone" any previous code. Expected results from the baseline are compared to results of
the software under test. All discrepancies are highlighted and accounted for, before testing
proceeds to the next level.
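The baseline idea can be sketched in a few lines of Python; the record layout and values below are invented, and a real regression suite would load its baseline from files maintained under version control.

    def run_release_under_test():
        # Stand-in for executing the maintained scripts against the new build.
        return {"invoice_total": "150.00", "line_count": 2}

    # Expected results captured from the previous, known-good release.
    baseline = {"invoice_total": "150.00", "line_count": 3}
    actual = run_release_under_test()

    # Any difference from the baseline is a discrepancy to account for.
    discrepancies = {k: (baseline[k], actual.get(k))
                     for k in baseline if actual.get(k) != baseline[k]}
    print(discrepancies)  # {'line_count': (3, 2)}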
Q50: What is sanity testing?
A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is
functioning according to specifications. This level of testing is a subset of regression testing. It
normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the
database, application servers, printers, etc.
Q51: What is performance testing?
A: Although performance testing is described as a part of system testing, it can be regarded as a
distinct level of testing. Performance testing verifies loads, volumes and response times, as
defined by requirements.
Q52: What is load testing?
A: Load testing is testing an application under heavy loads, such as the testing of a web site
under a range of loads to determine at what point the system response time will degrade or fail.
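A toy sketch of the idea in Python follows; the request handler is a stub standing in for real calls to the web site under test, and the user counts are arbitrary.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def timed_request():
        start = time.perf_counter()
        time.sleep(0.01)  # stand-in for the server doing real work
        return time.perf_counter() - start

    def average_response_time(users, requests_per_user=20):
        # Simulate `users` concurrent users issuing requests in parallel.
        with ThreadPoolExecutor(max_workers=users) as pool:
            futures = [pool.submit(timed_request)
                       for _ in range(users * requests_per_user)]
            times = [f.result() for f in futures]
        return sum(times) / len(times)

    for users in (1, 10, 50):
        avg_ms = average_response_time(users) * 1000
        print(f"{users:>3} users: {avg_ms:.1f} ms average response time")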
Q53: What is installation testing?
A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The
installation test for a release is conducted with the objective of demonstrating production
readiness. This test includes the inventory of configuration items, performed by the application's
System Administration, the evaluation of data readiness, and dynamic tests focused on basic
system functionality. When necessary, a sanity test is performed, following installation testing.
Q54: What is security/penetration testing?
A: Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage. This type of testing usually requires sophisticated
testing techniques.

Q55: What is recovery/error testing?
A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.
Q56: What is compatibility testing?
A: Compatibility testing is testing how well software performs in a particular hardware,
software, operating system, or network environment.
Q57: What is comparison testing?
A: Comparison testing is testing that compares software weaknesses and strengths to those of
competitors' products.
Q58: What is acceptance testing?
A: Acceptance testing is black box testing that gives the client/customer/project manager the
opportunity to verify the system functionality and usability prior to the system being released to
production. The acceptance test is the responsibility of the client/customer or project manager,
however, it is conducted with the full support of the project team. The test team also works with
the client/customer/project manager to develop the acceptance criteria.
(4Cc c  

¦: ¦lpha testing is testing of an application when development is nearing completion. Minor
design changes can still be made as a result of alpha testing. ¦lpha testing is typically performed
by a group that is independent of the design team, but still within the company, e.g. in-house
software test engineers, or software Q¦ engineers.
(6>c     

¦: eta testing is testing an application when development and testing are essentially completed
and final bugs and problems need to be found before the final release. eta testing is typically
performed by end-users or others, not programmers, software engineers, or test engineers.


Q61. What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System
Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test
Configuration Manager. Depending on the project, one person may wear more than one hat. For
instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and
Test Configuration Manager.

Q62. What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to
management and manages the test team.

Q63. What is a Test Engineer?

A: Test Engineers are engineers who specialize in testing. We, test engineers, create test cases,
procedures and scripts, and generate data. We execute test procedures and scripts, analyze standards
of measurements, and evaluate results of system/integration/regression testing. We also...
1. Speed up the work of the development staff;
2. Reduce your organization's risk of legal liability;
3. Give you the evidence that your software is correct and operates properly;
4. Improve problem tracking and reporting;
5. Maximize the value of your software;
6. Maximize the value of the devices that use it;
7. Assure the successful launch of your product by discovering bugs and design flaws, before
users get discouraged, before shareholders lose their cool and before employees get bogged
down;
8. Help the work of your development staff, so the development team can devote its time to building
up your product;
9. Promote continual improvement;
10. Provide documentation required by the FDA, FAA, other regulatory agencies and your
customers;
11. Save money by discovering defects early in the design process, before failures occur in
production, or in the field;
12. Save the reputation of your company by discovering bugs and design flaws, before bugs and
design flaws damage the reputation of your company.
Q64. What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the operating
system, and set up, maintain and back up test environment hardware. Depending on the project, one
person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a
Test Build Manager.

Q65. What is a System Administrator, in testing?

A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, and set up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a System Administrator.

Q66. What is a Database Administrator, in testing?

A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, and set up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a Database Administrator.

Q67. What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Technical Analyst.

Q68. What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Test Configuration Manager.

Q69. What is a test schedule?

A: The test schedule is a schedule that identifies all tasks required for a successful testing effort,
a schedule of all test activities and resource requirements.
Q70. What is software testing methodology?

A: One software testing methodology is the use of a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. Rob Davis believes that
using this methodology is important in the development and in the ongoing maintenance of his
customers' applications.

Q71. What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the
creation of test cases), the creation of a test plan/design (which usually includes test cases and test
procedures) and the execution of tests.

Q72. How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test
strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy and reviews the plan with the project team. The test plan
may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria
and risk assessment.
Inputs for this process:
1. A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
2. A description of the roles and responsibilities of the resources required for the test and schedule
constraints. This information comes from man-hours and schedules.
3. Testing methodology. This is based on known standards.
4. Functional and technical requirements of the application. This information comes from
requirements, change requests, and technical and functional design documents.
5. Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
6. An approved and signed-off test strategy document, test plan, including test cases.
7. Testing issues requiring resolution. Usually this requires additional negotiation at the project
management level.

Q73. How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release
and preparing logical groups of functions that can be further broken into test procedures. Test
procedures define test conditions, data to be used for testing and expected results, including
database updates, file outputs and report results. Generally speaking...
1. Test cases and scenarios are designed to represent both typical and unusual situations that may
occur in the application.
2. Test engineers define unit test requirements and unit test cases. Test engineers also execute
unit test cases.
3. It is the test team that, with the assistance of developers and clients, develops test cases and
scenarios for integration and system testing.
4. Test scenarios are executed through the use of test procedures or scripts.
5. Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios (see the sketch after this list).
6. Test procedures or scripts include the specific data that will be used for testing the process or
transaction.
7. Test procedures or scripts may cover multiple test scenarios.
8. Test scripts are mapped back to the requirements and traceability matrices are used to ensure
each test is within scope.
9. Test data is captured and baselined, prior to testing. This data serves as the foundation for unit
and system testing and is used to exercise system functionality in a controlled environment.
10. Some output data is also baselined for future comparison. Baselined data is used to support
future application maintenance via regression testing.
11. A pretest meeting is held to assess the readiness of the application and the environment and
data to be tested. A test readiness document is created to indicate the status of the entrance
criteria of the release.
Inputs for this process:
12. Approved test strategy document.
13. Test tools, or automated test tools, if applicable.
14. Previously developed scripts, if applicable.
15. Test documentation problems uncovered as a result of testing.
16. A good understanding of software complexity and module path coverage, derived from
general and detailed design documents, e.g. software design document, source code and software
complexity data.
Outputs for this process:
17. Approved documents of test scenarios, test cases, test conditions and test data.
18. Reports of software design issues, given to software developers for correction.
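To make items 4-7 concrete, here is a hypothetical sketch, in Python, of a test procedure expressed as data: ordered steps, the specific test data each step uses, and the expected result, with a traceability link back to a requirement. Every identifier and value is invented for illustration; a real project would keep this in its own test management format:

    test_procedure = {
        "id": "TP-017",
        "requirement": "REQ-042",   # traceability link for the matrix
        "scenario": "Transfer funds between two accounts",
        "steps": [
            {"action": "login",    "data": {"user": "test01"},
             "expect": "dashboard shown"},
            {"action": "transfer", "data": {"from": "A", "to": "B", "amount": 50},
             "expect": "balance A -50, balance B +50"},
            {"action": "logout",   "data": {},
             "expect": "login page shown"},
        ],
    }

    # A script can then walk the steps in order:
    for step in test_procedure["steps"]:
        print(f'{test_procedure["id"]}: {step["action"]} -> expect: {step["expect"]}')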
Q74. How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As
each test procedure is performed, an entry is recorded in a test execution log to note the
execution of the procedure and whether or not the test procedure uncovered any defects (a sketch
of such a log record appears after this list). Checkpoint meetings are held throughout the execution
phase. Checkpoint meetings are held daily, if required, to address and discuss testing issues, status
and activities.
1. The output from the execution of test procedures is known as test results. Test results are
evaluated by test engineers to determine whether the expected results have been obtained. All
discrepancies/anomalies are logged and discussed with the software team lead, hardware test
lead, programmers and software engineers, and documented for further investigation and resolution.
Every company has a different process for logging and reporting bugs/defects uncovered during
testing.
2. A pass/fail criterion is used to determine the severity of a problem, and results are recorded in a
test summary report. The severity of a problem, found during system testing, is defined in
accordance with the customer's risk assessment and recorded in their selected tracking tool.
3. Proposed fixes are delivered to the testing environment, based on the severity of the problem.
Fixes are regression tested and flawless fixes are migrated to a new baseline. Following
completion of the test, members of the test team prepare a summary report. The summary report
is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.
4. After a particular level of testing has been certified, it is the responsibility of the Configuration
Manager to coordinate the migration of the release software components to the next test level, as
documented in the Configuration Management Plan. The software is only migrated to the
production environment after the Project Manager's formal acceptance.
5. The test team reviews test document problems identified during testing, and updates documents
where appropriate.
Inputs for this process:
6. Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
7. Test tools, including automated test tools, if applicable.
8. Developed scripts.
9. Changes to the design, i.e. Change Request Documents.
10. Test data.
11. Availability of the test team and project team.
12. General and Detailed Design Documents, i.e. Requirements Document, Software Design
Document.
13. Software that has been migrated to the test environment, i.e. unit tested code, via the
Configuration/Build Manager.
14. Test Readiness Document.
15. Document Updates.
Outputs for this process:
16. Log and summary of the test results. Usually this is part of the Test Report. This needs to be
approved and signed off, with revised testing deliverables.
17. Changes to the code, also known as test fixes.
18. Test document problems uncovered as a result of testing. Examples are Requirements
Document and Design Document problems.
19. Reports on software design issues, given to software developers for correction. Examples are
bug reports on code issues.
20. Formal record of test incidents, usually part of problem tracking.
21. Baselined package, also known as tested source and object code, ready for migration to the next
level.
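As a rough sketch of the test execution log mentioned above, in Python; the field names are assumptions for illustration, not a prescribed format:

    from datetime import datetime

    execution_log = []

    def log_execution(procedure_id, passed, defects=()):
        # One record per executed procedure, noting outcome and any
        # defect ids raised in the tracking tool.
        execution_log.append({
            "procedure": procedure_id,
            "executed_at": datetime.now().isoformat(timespec="seconds"),
            "result": "pass" if passed else "fail",
            "defects": list(defects),
        })

    log_execution("TP-017", passed=False, defects=["BUG-1203"])
    print(execution_log)

Records like these feed the daily checkpoint meetings and, eventually, the test summary report.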
Q75. What testing approaches can you tell me about?

A: Each of the following represents a different testing approach:
1. Black box testing,
2. White box testing,
3. Unit testing,
4. Incremental testing,
5. Integration testing,
6. Functional testing,
7. System testing,
8. End-to-end testing,
9. Sanity testing,
10. Regression testing,
11. Acceptance testing,
12. Load testing,
13. Performance testing,
14. Usability testing,
15. Install/uninstall testing,
16. Recovery testing,
17. Security testing,
18. Compatibility testing,
19. Exploratory testing, ad-hoc testing,
20. User acceptance testing,
21. Comparison testing,
22. Alpha testing,
23. Beta testing, and
24. Mutation testing.



Q76. What is stress testing?

A: Stress testing is testing that investigates the behavior of software (and hardware) under
extraordinary operating conditions. For example, when a web server is stress tested, testing aims
to find out how many users can be on-line, at the same time, without crashing the server. Stress
testing tests the stability of a given system or entity. It tests something beyond its normal
operational capacity, in order to observe any negative results. For example, a web server is stress
tested using scripts, bots, and various denial of service tools.

Q77. What is load testing?

A: Load testing simulates the expected usage of a software program, by simulating multiple
users that access the program's services concurrently. Load testing is most useful and most
relevant for multi-user systems, client/server models, including web servers. For example, the
load placed on the system is increased above normal usage patterns, in order to test the system's
response at peak loads.
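A minimal sketch of the concurrent-user idea, in Python. Here target() is a stand-in for a real request (for example an HTTP call), and the user counts and timings are illustrative only:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def target():
        # Stand-in for one user request; returns its response time.
        start = time.perf_counter()
        time.sleep(0.01)  # simulated work
        return time.perf_counter() - start

    def load_test(concurrent_users=50, requests_per_user=10):
        # Drive many concurrent "users" and collect response times.
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            times = list(pool.map(lambda _: target(),
                                  range(concurrent_users * requests_per_user)))
        times.sort()
        return {"max": times[-1], "p95": times[int(len(times) * 0.95)]}

    print(load_test())

Raising concurrent_users until response times degrade or errors appear is what pushes a load test toward the stress-testing territory described in the next questions.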
Q79. How is load testing different from stress testing?

A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is a gray area in between stress testing and load testing.

Q80. How is load testing different from performance testing?

A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is a gray area in between stress testing and load testing.

Q81. How is load testing different from reliability and volume testing?

A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is a gray area in between stress testing and load testing.

Q82. What is incremental testing?

A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing
is to provide early feedback to software developers.
Q83. What is software testing?

A: Software testing is a process that identifies the correctness, completeness, and quality of
software. Actually, testing cannot establish the correctness of software. It can find defects, but
cannot prove there are no defects.

Q84. What is automated testing?

A: Automated testing is a formally specified and controlled approach to testing.

Q85. What is alpha testing?

A: Alpha testing is final testing before the software is released to the general public. First (and
this is called the first phase of alpha testing), the software is tested by in-house developers. They
use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs quickly.
Then (and this is called the second stage of alpha testing), the software is handed over to us, the
software QA staff, for additional testing in an environment that is similar to the intended use.

Q86. What is beta testing?

A: Following alpha testing, "beta versions" of the software are released to a group of people, and
limited public tests are performed, so that further testing can ensure the product has few bugs.
Other times, beta versions are made available to the general public, in order to receive as much
feedback as possible. The goal is to benefit the maximum number of future users.

Q87. What is the difference between alpha and beta testing?

A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is
performed by the public, a few select prospective customers, or the general public.

Q88. What is clear box testing?

A: Clear box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q89. What is boundary value analysis?

A: Boundary value analysis is a technique for test data selection. A test engineer chooses values
that lie along data extremes. Boundary values include maximum, minimum, just inside
boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a
system works correctly for these extreme or special values, then it will work correctly for all
values in between. An effective way to test code is to exercise it at its natural boundaries.
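For instance, for a hypothetical field that accepts integers from 1 to 100, boundary value analysis might select the following cases; accepts() is an invented validator used only for this sketch:

    def accepts(value):
        # Hypothetical validator: integers 1..100 only.
        return isinstance(value, int) and 1 <= value <= 100

    boundary_cases = {
        0:   False,  # just outside the lower boundary
        1:   True,   # minimum
        2:   True,   # just inside the lower boundary
        50:  True,   # typical value
        99:  True,   # just inside the upper boundary
        100: True,   # maximum
        101: False,  # just outside the upper boundary
        "x": False,  # error value
    }

    for value, expected in boundary_cases.items():
        assert accepts(value) == expected, f"boundary case {value!r} failed"
    print("all boundary cases passed")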
Q90. What is ad hoc testing?

A: Ad hoc testing is a testing approach; it is the least formal testing approach.



Q91. What is gamma testing?

A: Gamma testing is testing of software that has all the required features, but has not gone
through all the in-house quality checks. Cynics tend to refer to such software releases as "gamma
testing".

Q92. What is glass box testing?

A: Glass box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.

Q93. What is open box testing?

A: Open box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.

Q94. What is black box testing?

A: Black box testing is a type of testing that considers only externally visible behavior. Black box
testing considers neither the code itself, nor the "inner workings" of the software.

Q95. What is functional testing?

A: Functional testing is the same as black box testing. Black box testing is a type of testing that
considers only externally visible behavior. Black box testing considers neither the code itself, nor
the "inner workings" of the software.

Q96. What is closed box testing?

A: Closed box testing is the same as black box testing. Black box testing is a type of testing that
considers only externally visible behavior. Black box testing considers neither the code itself, nor
the "inner workings" of the software.

Q97. What is bottom-up testing?

A: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test
drivers for components that have not yet been developed, because, with bottom-up testing, low-
level components are tested first. The objective of bottom-up testing is to call low-level
components first, for testing purposes.

Q98. What is software quality?

A: The quality of software does vary widely from system to system. Some common quality
attributes are stability, usability, reliability, portability, and maintainability. See quality standard
ISO 9126 for more information on this subject.
Q99. How do you write test cases?

A: Software test cases are in a document that describes inputs, actions, or events, and their
expected results, in order to determine if all features of an application are working correctly. Test
case templates contain all particulars of every test case. Often these templates are in the form of a
table. One example of this table is a 6-column table, where column 1 is the "Test Case ID
Number", column 2 is the "Test Case Name", column 3 is the "Test Objective", column 4 is the
"Test Conditions/Setup", column 5 is the "Input Data Requirements/Steps", and column 6 is the
"Expected Results". All documents should be written to a certain standard and template.
Standards and templates maintain document uniformity. They also help in learning where
information is located, making it easier for users to find what they want. Lastly, with standards
and templates, information will not be accidentally omitted from a document.
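The 6-column template described above could be represented as a simple record so that every case is stored and reported uniformly; a sketch in Python, with sample values invented for illustration:

    test_case = {
        "id": "TC-001",
        "name": "Valid login",
        "objective": "Verify a registered user can log in",
        "conditions_setup": "User test01 exists; application is running",
        "input_steps": ["open login page", "enter test01/secret", "submit"],
        "expected_results": "User is taken to the dashboard",
    }

    # A uniform template makes it easy to render every case the same way:
    for field, value in test_case.items():
        print(f"{field:>18}: {value}")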
Q100. What is a software fault?

A: Software faults are hidden programming errors. Software faults are errors in the correctness
of the semantics of computer programs.

Q101. What is a software failure?

A: Software failure occurs when the software does not do what the user expects to see.

Q102. What is the difference between a software fault and a software failure?

A: Software failure occurs when the software does not do what the user expects to see. A
software fault, on the other hand, is a hidden programming error. A software fault becomes a
software failure only when the exact computation conditions are met, and the faulty portion of
the code is executed on the CPU. This can occur during normal usage, when the software is
ported to a different hardware platform, when the software is ported to a different compiler, or
when the software gets extended.

Q103. What do test engineers do?

A: Test engineers are engineers who specialize in testing. We, test engineers, create test cases,
procedures and scripts, and generate data. We execute test procedures and scripts, analyze standards
of measurements, and evaluate results of system/integration/regression testing.

Q104. Why are test engineers valuable?

A: Test engineers speed up the work of the development staff, and reduce the risk of your
company's legal liability. We, test engineers, also give the company the evidence that the
software is correct and operates properly. We also improve problem tracking and reporting,
maximize the value of the software, and the value of the devices that use it. We also assure the
successful launch of the product by discovering bugs and design flaws, before users get
discouraged, before shareholders lose their cool and before employees get bogged down. We,
test engineers, help the work of the software development staff, so the development team
can devote its time to building up the product. We, test engineers, also promote continual
improvement. We provide documentation required by the FDA, FAA, other regulatory agencies,
and your customers. We, test engineers, save your company money by discovering defects
EARLY in the design process, before failures occur in production, or in the field. We save the
reputation of your company by discovering bugs and design flaws, before bugs and design flaws
damage the reputation of your company.

Q105. What is a QA engineer?

A: QA engineers are test engineers, but QA engineers do more than just testing. Good QA
engineers understand the entire software development process and how it fits into the business
approach and the goals of the organization. Communication skills and the ability to understand
various sides of issues are important. We, QA engineers, are successful if people listen to us, if
people use our tests, if people think that we're useful, and if we're happy doing our work. I would
love to see QA departments staffed with experienced software developers who coach
development teams to write better code. But I've never seen it. Instead of coaching, we, QA
engineers, tend to be process people.



Q106. What metrics can be used for bug tracking?

A: Metrics that can be used for bug tracking include: total number of bugs, total number of bugs
that have been fixed, number of new bugs per week, and number of fixes per week. Metrics for
bug tracking can be used to determine when to stop testing, e.g. when the bug rate falls below a
certain level.
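A small sketch of these metrics and the stopping rule, in Python, with the weekly numbers and the threshold invented for illustration:

    bugs_found_per_week = [40, 31, 18, 9, 3]   # new bugs, week by week
    bugs_fixed_per_week = [22, 35, 20, 12, 4]  # fixes, week by week

    total_found = sum(bugs_found_per_week)
    total_fixed = sum(bugs_fixed_per_week)
    current_rate = bugs_found_per_week[-1]

    print(f"total bugs found: {total_found}, total fixed: {total_fixed}")
    if current_rate < 5:  # threshold set by the project, not a standard
        print("bug rate below threshold -- consider stopping testing")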
Q107. What is the QA engineer's function?

A: The QA engineer's function is to use the system much like real users would, find all the bugs,
find ways to replicate the bugs, submit bug reports to the developers, and provide feedback to
the developers, i.e. tell them if they've achieved the desired level of quality.

Q108. Should an engineer hired into a small company's QA role take responsibility for the
quality of the entire product?

A: Let's say an engineer is hired for a small software company's QA role, and there is no QA
team. Should he take responsibility for setting up a QA infrastructure/process, testing, and the
quality of the entire product? No, because taking this responsibility is a classic trap that QA
people get caught in. Why? Because we QA engineers cannot assure quality, and because QA
departments cannot create quality. What we CAN do is detect lack of quality, and prevent
low-quality products from going out the door. What is the solution? We need to drop the QA
label, and tell the developers they are responsible for the quality of their own work. The problem
is, sometimes, as soon as the developers learn that there is a test department, they will slack off
on their testing. We need to offer to help with quality assessment only.

Q109. What is the role of metrics and statistical process control in software QA?

A: Metrics refer to statistical process control. The idea of statistical process control is a great
one, but it has only a limited use in software development. On the negative side, statistical
process control works only with processes that are sufficiently well defined AND unvaried, so
that they can be analyzed in terms of statistics. The problem is, most software development
projects are NOT sufficiently well defined and NOT sufficiently unvaried. On the positive side,
one CAN use statistics. Statistics are excellent tools that project managers can use. Statistics can
be used, for example, to determine when to stop testing, i.e. test cases completed with a certain
percentage passed, or when the bug rate falls below a certain level. But, if these are project
management tools, why should we label them quality assurance tools?

Q110. How do you perform integration testing?

A: First, unit testing has to be completed. Upon completion of unit testing, integration testing
begins. Integration testing is black box testing. The purpose of integration testing is to ensure
distinct components of the application still work in accordance with customer requirements. Test
cases are developed with the express purpose of exercising the interfaces between the
components. This activity is carried out by the test team. Integration testing is considered
complete when actual results and expected results are either in line or differences are
explainable/acceptable based on client input.

Q111. What is the objective of integration testing?

A: Integration testing is black box testing. The purpose of integration testing is to ensure distinct
components of the application still work in accordance with customer requirements. Test cases are
developed with the express purpose of exercising the interfaces between the components. This
activity is carried out by the test team. Integration testing is considered complete when actual
results and expected results are either in line or differences are explainable/acceptable based on
client input.
Q112. What metrics are used in software quality assurance?

A: Metrics refer to statistical process control. The idea of statistical process control is a great
one, but it has only a limited use in software development.
On the negative side, statistical process control works only with processes that are sufficiently
well defined AND unvaried, so that they can be analyzed in terms of statistics. The problem is,
most software development projects are NOT sufficiently well defined and NOT sufficiently
unvaried.
On the positive side, one CAN use statistics. Statistics are excellent tools that project managers
can use. Statistics can be used, for example, to determine when to stop testing, i.e. test cases
completed with a certain percentage passed, or when the bug rate falls below a certain level. But, if
these are project management tools, why should we label them quality assurance tools?
The following describe some of the metrics used in quality assurance:
McCabe Metrics
1. Cyclomatic Complexity Metric (v(G)): Cyclomatic Complexity is a measure of the
complexity of a module's decision structure. It is the number of linearly independent paths and,
therefore, the minimum number of paths that should be tested (a worked example follows these
metric lists).
2. Actual Complexity Metric (AC): Actual Complexity is the number of independent paths
traversed during testing.
3. Module Design Complexity Metric (iv(G)): Module Design Complexity is the complexity of
the design-reduced module, and reflects the complexity of the module's calling patterns to its
immediate subordinate modules. This metric differentiates between modules that seriously
complicate the design of a program they are part of, and modules that simply contain complex
computational logic. It is the basis upon which program design and integration complexities (S0
and S1) are calculated.
4. Essential Complexity Metric (ev(G)): Essential Complexity is a measure of the degree to
which a module contains unstructured constructs. This metric measures the degree of
structuredness and the quality of the code. This metric is used to predict the required
maintenance effort and to help in the modularization process.
5. Pathological Complexity Metric (pv(G)): Pathological Complexity is a measure of
the degree to which a module contains extremely unstructured constructs.
6. Design Complexity Metric (S0): Design Complexity measures the amount of
interaction between modules in a system.
7. Integration Complexity Metric (S1): Integration Complexity measures the amount of
integration testing necessary to guard against errors.
8. Object Integration Complexity Metric (OS1): Object Integration Complexity
quantifies the number of tests necessary to fully integrate an object or class into an OO system.
9. Global Data Complexity Metric (gdv(G)): Global Data Complexity quantifies the
cyclomatic complexity of a module's structure as it relates to global/parameter data. It can be no
less than one and no more than the cyclomatic complexity of the original flowgraph.
McCabe Data-Related Software Metrics
1. Data Complexity Metric (DV): Data Complexity quantifies the complexity of a
module's structure as it relates to data-related variables. It is the number of independent paths
through data logic and, therefore, a measure of the testing effort with respect to data-related
variables.
2. Tested Data Complexity Metric (TDV): Tested Data Complexity quantifies the
complexity of a module's structure as it relates to data-related variables. It is the number of
independent paths through data logic that have been tested.
3. Data Reference Metric (DR): Data Reference measures references to data-related
variables independently of control flow. It is the total number of times that data-related variables
are used in a module.
4. Tested Data Reference Metric (TDR): Tested Data Reference is the total number of
tested references to data-related variables.
5. Maintenance Severity Metric: Maintenance Severity measures how
difficult it is to maintain a module.
6. Data Reference Severity Metric: Data Reference Severity measures
the level of data intensity within a module. It is an indicator of high levels of data-related code;
therefore, a module is data intense if it contains a large number of data-related variables.
7. Data Complexity Severity Metric: Data Complexity Severity
measures the level of data density within a module. It is an indicator of high levels of data logic
in test paths; therefore, a module is data dense if it contains data-related variables in a large
proportion of its structures.
8. Global Data Severity Metric: Global Data Severity measures the
potential impact of testing data-related basis paths across modules. It is based on global data test
paths.
McCabe Object-Oriented Software Metrics - Encapsulation
1. Percent Public Data (PCTPUB): PCTPUB is the percentage of public and protected data
within a class.
2. Access to Public Data (PUBDATA): PUBDATA indicates the number of accesses to public
and protected data.
McCabe Object-Oriented Software Metrics - Polymorphism
1. Percent of Unoverloaded Calls (PCTCALL): PCTCALL is the number of non-overloaded
calls in a system.
2. Number of Roots (ROOTCNT): ROOTCNT is the total number of class hierarchy roots
within a program.
3. Fan-in (FANIN): FANIN is the number of classes from which a class is derived.
McCabe Object-Oriented Software Metrics - Quality
1. Maximum v(G) (MAXV): MAXV is the maximum cyclomatic complexity value for any
single method within a class.
2. Maximum ev(G) (MAXEV): MAXEV is the maximum essential complexity value for any
single method within a class.
3. Hierarchy Quality (QUAL): QUAL counts the number of classes within a system that are
dependent upon their descendants.
Other Object-Oriented Software Metrics
1. Depth (DEPTH): Depth indicates at what level a class is located within its class hierarchy.
2. Lack of Cohesion of Methods (LOCM): LOCM is a measure of how the methods of a class
interact with the data in a class.
3. Number of Children (NOC): NOC is the number of classes that are derived directly from a
specified class.
4. Response for Class (RFC): RFC is a count of methods implemented within a class plus the
number of methods accessible to an object of this class type due to inheritance.
5. Weighted Methods per Class (WMC): WMC is a count of methods implemented within a
class.
Halstead Metrics
1. Program Length: Program length is the total number of operator occurrences and the total
number of operand occurrences.
2. Program Volume: Program volume is the minimum number of bits required for coding the
program.
3. Program Level and Program Difficulty: Program level and program difficulty measure
how easily a program is comprehended.
4. Intelligent Content: Intelligent content shows the complexity of a given algorithm
independent of the language used to express the algorithm.
5. Programming Effort: Programming effort is the estimated mental effort required to develop a
program.
6. Error Estimate: Error estimate calculates the number of errors in a program.
7. Programming Time: Programming time is the estimated amount of time to implement an
algorithm.
Line Count Metrics
1. Lines of Code
2. Lines of Comment
3. Lines of Mixed Code and Comments
4. Lines Left Blank
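As a worked example of the first McCabe metric above: for a single-entry, single-exit module, v(G) = E - N + 2, where E and N are the edges and nodes of the module's flowgraph (equivalently, the number of decision points plus one). A sketch in Python; the function and the flowgraph counts are invented for illustration:

    def classify(age, member):
        if age < 18:        # decision 1
            return "minor"
        if member:          # decision 2
            return "member"
        if age >= 65:       # decision 3
            return "senior"
        return "regular"

    # Flowgraph of classify(): entry, 3 decision nodes, 4 return nodes,
    # exit -> 9 nodes; 11 edges connect them.
    edges, nodes = 11, 9
    print("v(G) =", edges - nodes + 2)  # -> 4, i.e. 3 decisions + 1

So at least four linearly independent paths through classify() should be exercised during testing.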

Q113. How do you create a test plan document?

A: The test plan document template helps to generate test plan documents that describe the
objectives, scope, approach and focus of a software testing effort. Test document templates are
often in the form of documents that are divided into sections and subsections. One example of
this template is a 4-section document, where section 1 is the description of the "Test Objective",
section 2 is the description of the "Scope of Testing", section 3 is the description of the "Test
Approach", and section 4 is the "Focus of the Testing Effort". All documents should be written to
a certain standard and template. Standards and templates maintain document uniformity. They
also help in learning where information is located, making it easier for a user to find what they
want. With standards and templates, information will not be accidentally omitted from a
document. Once Rob Davis has learned and reviewed your standards and templates, he will use
them. He will also recommend improvements and/or additions. A software project test plan is a
document that describes the objectives, scope, approach and focus of a software testing effort.
The process of preparing a test plan is a useful way to think through the efforts needed to
validate the acceptability of a software product. The completed document will help people
outside the test group understand the why and how of product validation.
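The 4-section template described above, sketched as a simple outline in Python so the same structure can be reused across projects; the section text is invented for illustration:

    test_plan = {
        "1. Test Objective":
            "Validate release 2.1 against approved requirements",
        "2. Scope of Testing":
            "Functional and regression testing of modules A and B; "
            "performance testing is out of scope",
        "3. Test Approach":
            "Manual scripted testing plus an automated regression suite",
        "4. Focus of the Testing Effort":
            "Interfaces changed by change requests CR-11 and CR-14",
    }

    for section, text in test_plan.items():
        print(section, "-", text)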
Q114. What is the bug life cycle?

A: Bug life cycles are similar to software development life cycles. At any time during the
software development life cycle, errors can be made during the gathering of requirements,
requirements analysis, functional design, internal design, documentation planning, document
preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-
testing and phase-out. The bug life cycle begins when a programmer, software developer, or
architect makes a mistake, creates an unintentional software defect, i.e. a bug, and ends when the
bug is fixed, and the bug is no longer in existence. What should be done after a bug is found?
When a bug is found, it needs to be communicated and assigned to developers that can fix it. After
the problem is resolved, fixes should be re-tested. Additionally, determinations should be made
regarding requirements, software, hardware, safety impact, etc., for regression testing to check
the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it
should encapsulate these determinations. A variety of commercial problem-
tracking/management software tools are available. These tools, with the detailed input of
software test engineers, will give the team complete information so developers can understand
the bug, get an idea of its severity, reproduce it and fix it.

Q115. How effective are automated testing tools?

A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for
small projects, the time needed to learn and implement the automated testing tools is usually not
worthwhile. Automated testing tools sometimes do not make testing easier. One problem with
automated testing tools is that if there are continual changes to the product being tested, the
recordings have to be changed so often that it becomes a very time-consuming task to
continuously update the scripts. Another problem with such tools is the interpretation of the
results (screens, data, logs, etc.), which can be a time-consuming task.

Q116. What is the ratio of developers to testers?

A: This ratio is not a fixed one, but depends on what phase of the software development life
cycle the project is in. When a product is first conceived, organized, and developed, this ratio
tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the
product is near the end of the software development life cycle, this ratio tends to be 1:1, or even
1:2, in favor of testers.

Q117. What is your role, as a software QA engineer?

A: I'm a Software QA Engineer. I use the system much like real users would. I find all the bugs,
find ways to replicate the bugs, submit bug reports to developers, and provide feedback to the
developers, i.e. tell them if they've achieved the desired level of quality.

Q118. Should I learn manual testing?

A: Learning how to perform manual testing is an important part of one's education. I see no
reason why one should skip an important part of an academic program.

Q119. How can I learn to use WinRunner, with little or no outside help?

A: I suggest you read all you can, and that includes reading product description pamphlets,
manuals, books, information on the Internet, and whatever information you can lay your hands
on. Then the next step is getting some hands-on experience on how to use WinRunner. If there is
a will, there is a way! You CAN do it, if you put your mind to it!

Q120. Where can I get a cheap, or free, education in WinRunner and other software testing
tools?

A: The cheapest, or free, education is sometimes provided on the job, by an employer, while one
is getting paid to do a job that requires the use of WinRunner and many other software testing
tools. In lieu of a job, it is often a good idea to sign up for courses at nearby educational
institutions. Classroom education, especially non-degree courses in local, community colleges,
tends to be cheap.

Q121. I have no experience. How can I get a free education in WinRunner and other software
testing tools?

A: The cheapest, or free, education is sometimes provided on the job, by an employer, while one
is getting paid to do a job that requires the use of WinRunner and many other software testing
tools.



Q122. Which software testing tools are in demand these days?

A: The software tools currently in demand include LabView, LoadRunner, Rational Tools, and
WinRunner -- and especially the LoadRunner and Rational Toolset -- but there are many others,
depending on the end client, and their needs and preferences.

Q123. Which testing tools should I learn?

A: I suggest you learn the most popular software tools (i.e. LabView, LoadRunner, Rational
Tools, WinRunner, etc.) -- and you want to pay special attention to LoadRunner and the Rational
Toolset.

Q124. What software configuration management tools are available?

A: Software configuration management tools include Rational ClearCase, DOORS, PVCS and CVS;
and there are many others. Rational ClearCase is a popular software tool, made by Rational
Software, for revision control of source code. DOORS, or "Dynamic Object Oriented
Requirements System", is a requirements version control software tool. CVS, or "Concurrent
Versions System", is a popular, open source version control system to keep track of changes in
documents associated with software projects. CVS enables several, often distant, developers to
work together on the same source code. PVCS is a document version control tool, a competitor
of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX command that
compares the contents of two files.

Q125. What is software configuration management?

A: Software configuration management (SCM) is the control, and the recording of, changes that
are made to the software and documentation throughout the software development life cycle
(SDLC). SCM covers the tools and processes used to control, coordinate and track code,
requirements, documentation, problems, change requests, designs, tools, compilers, libraries,
patches, and changes made to them, and to keep track of who makes the changes. Rob Davis has
experience with a full range of CM tools and concepts, and can easily adapt to an organization's
software tool and process needs.

Q126. What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System
Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test
Configuration Managers. Depending on the project, one person can, and often does, wear more than
one hat. For instance, we Test Engineers often wear the hat of Technical Analyst, Test Build
Manager and Test Configuration Manager as well.

Q127. Which of these testing roles are the most popular, and which is the best?

A: As a yardstick of popularity, if we count the number of applicants and resumes, Tester roles
tend to be the most popular. Less popular roles are the roles of System Administrators, Test/QA
Team Leads, and Test/QA Managers. The "best" job is the job that makes YOU happy. The best
job is the one that works for YOU, using the skills, resources, and talents YOU have. To find the
best job, you need to experiment, and "play" different roles. Persistence, combined with
experimentation, will lead to success.
Q128. What is the difference between priority and severity?

A: "Priority" is associated with scheduling, and "severity" is associated with standards. "Priority"
means something is afforded or deserves prior attention; a precedence established by order of
importance (or urgency). "Severity" is the state or quality of being severe; severe implies
adherence to rigorous standards or high principles and often suggests harshness; severe is
marked by, or requires, strict adherence to rigorous standards or high principles, e.g. a severe code
of behavior. The words priority and severity do come up in bug tracking. A variety of
commercial problem-tracking/management software tools are available. These tools, with the
detailed input of software test engineers, give the team complete information so developers can
understand the bug, get an idea of its 'severity', reproduce it and fix it. The fixes are based on
project 'priorities' and the 'severity' of bugs. The 'severity' of a problem is defined in accordance
with the customer's risk assessment and recorded in their selected tracking tool. Buggy software
can 'severely' affect schedules, which, in turn, can lead to a reassessment and renegotiation of
'priorities'.
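A sketch of how a tracking tool might keep the two as separate fields on a bug record, so a severe bug can still be scheduled behind a more urgent one; the scales and sample bugs are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Bug:
        id: str
        summary: str
        severity: int  # 1 = critical impact ... 4 = cosmetic
        priority: int  # 1 = fix first ... 4 = fix when possible

    bugs = [
        Bug("BUG-1", "Crash on save", severity=1, priority=1),
        Bug("BUG-2", "Typo in rarely seen dialog", severity=4, priority=3),
        Bug("BUG-3", "Wrong total in legacy report", severity=2, priority=4),
    ]

    # Fixes are scheduled by priority, even though severity drives the
    # risk assessment recorded with the customer.
    for bug in sorted(bugs, key=lambda b: b.priority):
        print(bug.id, f"sev={bug.severity}", f"pri={bug.priority}", bug.summary)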
Q129. What is the difference between efficient and effective?

A: "Efficient" means having a high ratio of output to input; working or producing with a
minimum of waste. For example, "An efficient engine saves gas". "Effective", on the other hand,
means producing, or capable of producing, an intended result, or having a striking effect. For
example, "For rapid long-distance transportation, the jet engine is more effective than a witch's
broomstick".

Q130. What is the difference between verification and validation?

A: Verification takes place before validation, and not vice versa. Verification evaluates
documents, plans, code, requirements, and specifications. Validation, on the other hand,
evaluates the product itself. The inputs of verification are checklists, issues lists, walkthroughs
and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the
actual testing of an actual product. The output of verification is a nearly perfect set of documents,
plans, specifications, and requirements documents. The output of validation, on the other hand, is
a nearly perfect, actual product.

Q131. What is documentation change management?

A: Documentation change management is part of configuration management (CM). CM covers
the tools and processes used to control, coordinate and track code, requirements, documentation,
problems, change requests, designs, tools, compilers, libraries, patches, changes made to them,
and who makes the changes. Rob Davis has had experience with a full range of CM tools and
concepts. Rob Davis can easily adapt to your software tool and process needs.

Q132. What is up time?

A: Up time is the time period when a system is operational and in service. Up time is the sum of
busy time and idle time.

Q133. What is upwardly compatible software?

A: Upwardly compatible software is compatible with a later or more complex version of itself.
For example, upwardly compatible software is able to handle files created by a later version
of itself.

Q134. What is upward compression?

A: In software design, upward compression means a form of demodularization, in which a
subordinate module is copied into the body of a superior module.

Q135. What is usability?

A: Usability means ease of use; the ease with which a user can learn to operate, prepare inputs
for, and interpret outputs of a software product.

Q136. What is user documentation?

A: User documentation is a document that describes the way a software product or system should
be used to obtain the desired results.

Q137. What is a user manual?

A: A user manual is a document that presents information necessary to employ software or a
system to obtain the desired results. Typically, what is described are system and component
capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special
instructions.



Q138. What is the difference between user documentation and a user manual?

A: When a distinction is made between those who operate and those who use a computer system for
its intended purpose, separate user documentation and a separate user manual are created.
Operators get user documentation, and users get user manuals.

Q139. What is user friendly software?

A: A computer program is user friendly when it is designed with ease of use as one of the
primary objectives of its design.

Q140. What is a user friendly document?

A: A document is user friendly when it is designed with ease of use as one of the primary
objectives of its design.

Q141. What is a user guide?

A: A user guide is the same as a user manual. It is a document that presents information
necessary to employ a system or component to obtain the desired results. Typically, what is
described are system and component capabilities, limitations, options, permitted inputs, expected
outputs, error messages, and special instructions.

Q142. What is a user interface?

A: A user interface is the interface between a human user and a computer system. It enables the
passage of information between a human user and hardware or software components of a
computer system.

Q143. What is a utility?

A: A utility is a software tool designed to perform some frequently used support function. For
example, a program to print files.

Q144. What is utilization?

A: Utilization is the ratio of time a system is busy, divided by the time it is available. Utilization
is a useful measure in evaluating computer performance.

Q145. What is V&V?

A: V&V is an acronym for verification and validation.

Q146. What is a variable trace?

A: A variable trace is a record of the names and values of variables accessed and changed during
the execution of a computer program.

Q147. What is a value trace?

A: A value trace is the same as a variable trace. It is a record of the names and values of variables
accessed and changed during the execution of a computer program.

Q148. What is a variable?

A: Variables are data items whose values can change. For example: "capacitor_voltage". There
are local and global variables, and constants.

Q149. What is a variant?

A: Variants are versions of a program. Variants result from the application of software diversity.

Q150. What is verification and validation (V&V)?

A: Verification and validation (V&V) is a process that helps to determine if the software
requirements are complete and correct; if the software of each development phase fulfills the
requirements and conditions imposed by the previous phase; and if the final software complies
with the applicable software requirements.

Q151. What is a software version?

A: A software version is an initial release (or re-release) of software, associated with a complete
compilation (or recompilation) of the software.

Q152. What is a document version?

A: A document version is an initial release (or a complete re-release) of a document, as opposed
to a revision resulting from issuing change pages to a previous release.

Q153. What is VDD?

A: VDD is an acronym. It stands for "version description document".



Q154. What is a version description document (VDD)?

A: A version description document (VDD) is a document that accompanies and identifies a given
version of a software product. Typically the VDD includes a description and identification of the
software, identification of changes incorporated into this version, and installation and operating
information unique to this version of the software.

Q155. What is a vertical microinstruction?

A: A vertical microinstruction is a microinstruction that specifies one of a sequence of operations
needed to carry out a machine language instruction. Vertical microinstructions are short, 12 to 24
bit instructions. They're called vertical because they are normally listed vertically on a page.
Several of these 12 to 24 bit microinstructions are required to carry out a single machine
language instruction. Besides vertical microinstructions, there are horizontal as well as diagonal
microinstructions.

Q156. What is a virtual address?

A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations. They
allow those locations to be accessed as though they were part of the main storage.

Q157. What is virtual memory?

A: Virtual memory relates to virtual storage. In virtual storage, portions of a user's program and
data are placed in auxiliary storage, and the operating system automatically swaps them in and
out of main storage as needed.

Q158. What is virtual storage?

A: Virtual storage is a storage allocation technique in which auxiliary storage can be addressed
as though it were part of main storage. Portions of a user's program and data are placed in
auxiliary storage, and the operating system automatically swaps them in and out of main storage
as needed.

Q159. What is a waiver?

A: Waivers are authorizations to accept software that has been submitted for inspection, found to
depart from specified requirements, but is nevertheless considered suitable for use "as is", or
after rework by an approved method.

Q160. What is the waterfall model?

A: Waterfall is a model of the software development process in which the concept phase,
requirements phase, design phase, implementation phase, test phase, installation phase, and
checkout phase are performed in that order, possibly with overlap, but with little or no iteration.

Q161. What phases make up the software development process?

A: The software development process consists of the concept phase, requirements phase, design
phase, implementation phase, test phase, installation phase, and checkout phase.

Q162. What models are used in the software development process?

A: In the software development process the following models are used: the waterfall model,
incremental development model, rapid prototyping model, and spiral model.

Q163. What is SDLC?

A: SDLC is an acronym. It stands for "software development life cycle".

Q164. Can you give me answers to many questions on software QA, documentation, and
testing?

A: Yes, I can. You can visit my web site, and on pages www.robdavispe.com/free and
www.robdavispe.com/free2 you can find answers to many questions on software QA,
documentation, and software testing, from a tester's point of view. As to questions and answers
that are not on my web site now, please be patient, as I am going to add more answers, as soon as
time permits.

Q165. What is the difference between system testing and integration testing?

A: System testing is high level testing, and integration testing is lower level testing. Integration
testing is completed first, not system testing. In other words, upon completion of integration
testing, system testing is started, and not vice versa. For integration testing, test cases are
developed with the express purpose of exercising the interfaces between the components. For
system testing, on the other hand, the complete system is configured in a controlled environment,
and test cases are developed to simulate real life scenarios that occur in a simulated real life test
environment. The purpose of integration testing is to ensure distinct components of the
application still work in accordance with customer requirements. The purpose of system testing, on
the other hand, is to validate an application's accuracy and completeness in performing the
functions as designed, and to test all functions of the system that are required in real life.

Q166. How is performance testing related to other types of testing?

A: The term 'performance testing' is often used synonymously with stress testing, load testing,
reliability testing, and volume testing. Performance testing is a part of system testing, but it is
also a distinct level of testing. Performance testing verifies loads, volumes, and response times,
as defined by requirements.

Q167. What testing approaches can you tell me about?

A: Each of the following represents a different type of testing approach: black box testing, white
box testing, unit testing, incremental testing, integration testing, functional testing, system
testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing,
performance testing, usability testing, install/uninstall testing, recovery testing, security testing,
compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison
testing, alpha testing, beta testing, and mutation testing.

Q168. What is disaster recovery testing?

A: Disaster recovery testing is testing how well the system recovers from disasters, crashes,
hardware failures, or other catastrophic problems.

Q169. How do you conduct peer reviews?

A: The peer review, sometimes called PDR, is a formal meeting, more formalized than a walk-
through, and typically consists of 3-10 people, including a test lead, a task lead (the author of
whatever is being reviewed), and a facilitator (to make notes). The subject of the PDR is
typically a code block, release, feature, or document, e.g. a requirements document or test plan.
The purpose of the PDR is to find problems and see what is missing, not to fix anything. The
result of the meeting should be documented in a written report. Attendees should prepare for this
type of meeting by reading through documents before the meeting starts; most problems are
found during this preparation. Preparation for PDRs is difficult, but it is one of the most cost-
effective methods of ensuring quality, since bug prevention is more cost effective than bug
detection.



Q170. How do you check the security of your application?

A: To check the security of an application, we can use security/penetration testing.
Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage. This type of testing usually requires sophisticated
testing techniques.

Q171. How do you test the password field?

A: To test the password field, we do boundary value testing.

Q172. What do you verify when you test the password field?

A: When testing the password field, one needs to verify that passwords are encrypted.
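A small sketch combining both points, in Python, assuming a hypothetical policy of 8 to 16 characters; the policy, names and hashing choice are invented for illustration, not a prescribed design:

    import hashlib

    def password_accepted(pw):
        # Hypothetical policy: 8..16 characters.
        return 8 <= len(pw) <= 16

    # Boundary value cases around the assumed 8..16 limits.
    cases = {7: False, 8: True, 9: True, 15: True, 16: True, 17: False}
    for length, expected in cases.items():
        assert password_accepted("a" * length) == expected, length

    # Storage check sketch: what is persisted must not equal the input.
    stored = hashlib.sha256(b"s3cretpass").hexdigest()
    assert stored != "s3cretpass"
    print("password boundary and storage checks passed")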
(0A2c  


 c    /
¦: ug prevention, i.e. inspections, PDRs, and walk-throughs, is more cost effective than bug
detection.

(0A3c  c & /


   

¦: The objective of regression testing is to test that the fixes have not created any other problems
elsewhere. In other words, the objective is to ensure the software has remained intact. ¦ baseline
set of data and scripts are maintained and executed, to verify that changes introduced during the
release have not "undone" any previous code. Expected results from the baseline are compared to
results of the software under test. ¦ll discrepancies are highlighted and accounted for, before
testing proceeds to the next level.
(0A4c  c    
      
A: White box testing is a testing approach that examines the application's program structure, and
derives test cases from the application's program logic. Clear box testing, glass box testing, and
open box testing are all white box types of testing.
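A small sketch may help: the toy function below is invented for the example, and each test case is derived by reading its branches, which is the essence of white box testing.

    def classify_triangle(a, b, c):
        # Toy function under test, invented for the example.
        if a <= 0 or b <= 0 or c <= 0:
            return "invalid"
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    def test_every_branch():
        # One test case per branch, derived from the program logic above.
        assert classify_triangle(0, 1, 1) == "invalid"
        assert classify_triangle(2, 2, 2) == "equilateral"
        assert classify_triangle(2, 2, 3) == "isosceles"
        assert classify_triangle(2, 3, 4) == "scalene"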
Q176. What is black box testing?
A: Black box testing is functional testing, not based on any knowledge of internal software
design or code. Black box testing is based on requirements and functionality. Functional testing
is a black box type of testing geared to the functional requirements of an application. System
testing, acceptance testing, closed box testing, and integration testing are also black box types
of testing.
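By contrast, a black box test is derived from a requirement alone. In the sketch below, the requirement ("orders over 100.00 receive a 10% discount") and the compute_total function are both hypothetical; the function body is included only so the example runs, since a black box tester would see nothing but its interface.

    def compute_total(order_amount):
        # Stand-in for the application code, so the sketch is runnable;
        # a black box tester would know only the interface, not this body.
        return order_amount * 0.9 if order_amount > 100.00 else order_amount

    def test_discount_over_threshold():
        # Derived purely from the hypothetical requirement, not the code.
        assert compute_total(200.00) == 180.00

    def test_no_discount_at_or_below_threshold():
        assert compute_total(100.00) == 100.00
        assert compute_total(50.00) == 50.00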
Q177. Is regression testing performed manually or with automated tools?
A: It depends on the initial testing approach. If the initial testing approach was manual testing,
then the regression testing is usually performed manually. Conversely, if the initial testing
approach was automated testing, then the regression testing is usually performed with automated
testing.
Q178. Please give me others' FAQs on testing.
A: Visit my web site, and on pages www.robdavispe.com/free and www.robdavispe.com/free2
you can find answers to the vast majority of other testers' FAQs on testing, from a tester's point
of view. As to questions and answers that are not on my web site now, please be patient, as I am
going to add more FAQs as soon as time permits.
Q179. Can you share your knowledge of software testing?
A: Surely I can. For my knowledge on software testing, visit my web site,
www.robdavispe.com/free and www.robdavispe.com/free2. As to knowledge that is not on my
web site at the moment, please be patient, as I am going to add more answers as soon as time
permits.
Q180. How can I learn more about software testing?
A: I suggest you visit my web site, www.robdavispe.com/free and www.robdavispe.com/free2,
and you will find answers to most questions on software testing. As to questions and answers
that are not on my web site now, please be patient, as I am going to add more answers as soon as
time permits. I also suggest you get a job in software testing. Why? Because you can get
additional, usually free, education on the job, while you are getting paid to do software testing.
On the job you can use many software tools, including WinRunner, LoadRunner, LabView, and
Rational Toolset. The selection of tools will depend on the end client, their needs, and
preferences. I also suggest you sign up for courses at nearby educational institutes. Classroom
education, especially non-degree courses in local community colleges, tends to be highly cost-
effective.

Q181. When is software QA/testing easy?
A: Software QA/testing is easy if requirements are solid, clear, complete, detailed, cohesive,
attainable, and testable, if schedules are realistic, and if there is good communication. Software
QA/testing is a piece of cake if project schedules are realistic, and if adequate time is allowed for
planning, design, testing, bug fixing, re-testing, changes, and documentation. Software
QA/testing is easy if testing is started early on, if fixes or changes are re-tested, and if sufficient
time is planned for both testing and bug fixing. Software QA/testing is easy if new features are
avoided, and if one is able to stick to the initial requirements as much as possible.
Q182. What makes a good software tester?
A: We, good testers, take the customers' point of view. We are tactful and diplomatic. We have a
"test to break" attitude, a strong desire for quality, an attention to detail, and good
communication skills, both oral and written. Previous software development experience is also
helpful, as it provides a deeper understanding of the software development process.
Q183. What is a software 'bug'?
A: A 'software bug' is a *nonspecific* term that means an inexplicable defect, error, flaw,
mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g. 'software
defect' and 'software failure', are *more specific*. While the term 'bug' has been a part of
engineering jargon for many decades, there are many who believe the term was named after
insects that used to cause malfunctions in electromechanical computers.
Q184. How can I become a good software QA/tester?
A: Invest in your skills! Learn all you can! Visit my web site, and on www.robdavispe.com/free
and www.robdavispe.com/free2 you will find answers to the vast majority of questions on
testing, from software QA/testers' point of view. Get additional education on the job. Free
education is often provided by employers, while you are paid to do the job of a tester. On the job,
you can often use many software tools, including WinRunner, LoadRunner, LabView, and
Rational Toolset. Find an employer whose needs and preferences are similar to yours. Get an
education! Sign up for courses at nearby educational institutes. Take classes! Classroom
education, especially non-degree courses in local community colleges, tends to be inexpensive.
Improve your attitude! Become the best software QA/tester! Always strive to exceed the
expectations of your customers!
Q185. How do you compare two files?
A: Use PVCS, SCCS, or "diff". PVCS is a document version control tool, a competitor of SCCS.
SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility that shows the
differences between two text files.
Q186. What is file comparison?
A: Generally speaking, when we write a software program to compare files, we compare the two
files bit by bit. When we use "diff", a UNIX utility, we compare the differences between two text
files, line by line.
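Both ideas can be sketched briefly in Python: a byte-by-byte comparison for arbitrary files, and a diff-style line comparison for text files (using the standard difflib module). The file paths are placeholders.

    import difflib

    def files_identical(path_a, path_b, chunk_size=8192):
        # Byte-by-byte comparison of two files (the "bit by bit" idea).
        with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
            while True:
                chunk_a = fa.read(chunk_size)
                chunk_b = fb.read(chunk_size)
                if chunk_a != chunk_b:
                    return False
                if not chunk_a:  # both files ended at the same point
                    return True

    def text_differences(path_a, path_b):
        # diff-style line comparison, like the UNIX "diff" utility.
        with open(path_a) as fa, open(path_b) as fb:
            return list(difflib.unified_diff(
                fa.readlines(), fb.readlines(),
                fromfile=path_a, tofile=path_b))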
Q187. What tools are used for configuration management?
A: Configuration management, revision control, requirement version control, or document
version control tools. Examples are Rational ClearCase, DOORS, PVCS, and CVS. CVS, for
example, enables several, often distant, developers to work together on the same source code.
Q188. Why do we use detailed processes and procedures?
A: If we use detailed and well-written processes and procedures, we ensure that the correct steps
are being executed. This facilitates the successful completion of a task. It is also a way we ensure
a process is repeatable.
Q189. What is a test strategy? What does the test strategy document include?
A: The test strategy document is a formal description of how a software product will be tested. A
test strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy, and reviews the plan with the project team. The test plan
may include test cases, conditions, the test environment, a list of related tasks, pass/fail
criteria, and risk assessment. Additional sections in the test strategy document include:
- A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
- A description of the roles and responsibilities of the resources required for the test, and
schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from
requirements, change request, technical, and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.
Q190. Can you describe a test methodology?
A: One test methodology is a three-step process: creating a test strategy, creating a test
plan/design, and executing tests. This methodology can be used and molded to your
organization's needs. Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his customers' applications.
Q191. How can I learn to use WinRunner and other automated testing tools without outside help?
A: For one, I suggest you read all you can, and that includes reading product description
pamphlets, manuals, books, information on the Internet, and whatever information you can lay
your hands on. Two, get hands-on experience in how to use automated testing tools. If there is a
will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use
WinRunner, and many other automated testing tools, with little or no outside help.
