
Difference between Verification and Validation

Verification

Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements. The objective of Verification is to make sure that the product being developed matches the requirements and design specifications.

Following activities are involved in Verification: reviews, meetings and inspections. Verification is carried out by the QA team to check whether the software implementation conforms to the specification document. Execution of code does not come under Verification; the Verification process checks whether the outputs are according to the inputs. Verification is carried out before Validation. Following items are evaluated during Verification: plans, requirement specifications, design specifications, code, test cases etc. The cost of errors caught in Verification is less than that of errors found in Validation. Verification is basically a manual check of documents and files such as requirement specifications.

Validation

Validation is the process of evaluating software at the end of the development process to determine whether the software meets customer expectations and requirements. The objective of Validation is to make sure that the product actually meets the user's requirements, and to check whether the specifications were correct in the first place.

Following activities are involved in Validation: testing such as black box testing, white box testing, gray box testing etc. Validation is carried out by the testing team. Execution of code comes under Validation; the Validation process describes whether the software is accepted by the user or not. Validation is carried out just after Verification. Following item is evaluated during Validation: the actual product or software under test. The cost of errors caught in Validation is higher than that of errors found in Verification. Validation is basically checking the developed program against the requirement specification documents and files.

Difference between Regression Testing and Retesting


#e-Testing" 'fter a defect is detected and fixed the software should be retested to confirm that the original defect has been successfully removed. This is called +onfirmation testing or #e-Testing #egression testing" Testing your software application when it undergoes a code change to ensure that the new code has not affected other parts of the software.

Regression Testing
Regression testing is a type of software testing that intends to ensure that changes like defect fixes or enhancements to the module or application have not affected the unchanged parts.

Retesting
Retesting is done to make sure that the test cases which failed in the last execution pass after the defects logged against those failures are fixed.

Regression testing is not carried out on specific defect fixes; it is planned as specific-area or full regression testing. In Regression testing you include the test cases which passed earlier; we can say we check the functionality which was working earlier. The regression test cases we use are derived from the functional specification, the user manuals, user tutorials, and defect reports relating to corrected problems. Automation is the key for regression testing: manual regression testing tends to get more expensive with each new release, so regression testing is the right time to start automating test cases. Defect verification does not come under Regression testing. Based on the availability of resources, Regression testing can be carried out in parallel with Retesting.

Retesting is carried out based on the defect fixes. In Retesting you include the test cases which failed earlier; we can say we check the functionality which failed in the earlier build. Test cases for Retesting cannot be prepared before testing starts; in Retesting you only re-execute the test cases that failed in the prior execution. You cannot automate the test cases for Retesting. Defect verification comes under Retesting. The priority of Retesting is higher than that of Regression testing, so it is carried out before regression testing.
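To make the automation point concrete, here is a minimal sketch of an automated regression check. The apply_discount function is a hypothetical stand-in for any previously working feature, and pytest is assumed as the runner; it is an illustration, not a prescribed tool.

```python
# test_regression.py -- a minimal automated regression check (illustrative only).
# apply_discount stands in for production code that already worked in the
# last release; re-running these passing cases after every code change is
# the essence of regression testing.

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for previously working production code."""
    return round(price * (1 - percent / 100), 2)

def test_discount_still_correct():
    # These cases passed in earlier builds.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 25) == 60.0

def test_zero_discount_unchanged():
    assert apply_discount(50.0, 0) == 50.0
```

Run with `pytest test_regression.py` after each code change; any newly failing case signals a regression in functionality that previously worked.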

What is the difference between Priority and Severity?

Priority:

Priority means how fast the defect has to be fixed. Priority is related to scheduling to resolve the problem. It is largely related to the Business or Marketing aspect and is a pointer towards the importance of the bug. The priority status is set based on the customer requirements. Priority means how urgently the issue has to be fixed. The Product Manager decides the Priority to fix a bug, and product fixes are done based on Project Priorities. The Priority status is set by the tester for the developer, mentioning the time frame to fix a defect. If High priority is mentioned, then the developer has to fix it at the earliest.

Severity:

Severity means how severely the defect is affecting the functionality. It is totally related to the quality standard or adherence to standards. Severity is associated with standards; the severity type is defined by the tester based on the written test cases and functionality. It is related to the technical aspect of the product and reflects how bad the bug is for the system, i.e. how much of the product's functionality is affected.

The Test Engineer decides the severity level of the bug, and product fixes are done based on Bug Severity. We can also say the Severity status is used to explain how badly the deviation is affecting the build.

High Priority & High Severity:


1. All show stopper bugs would be added under this category (the tester should log Severity as High; setting Priority to High is the Project Manager's call), meaning bugs due to which the tester is not able to continue with the Software Testing: Blocker Bugs.
2. Let's take an example of High Priority & High Severity: upon login to the system, a "Run time error" is displayed on the page, due to which the tester is not able to proceed with further testing.

High Priority & Low Severity:

1. On the home page of the company's web site, a spelling mistake in the name of the company is surely a High Priority issue. In terms of functionality it is not breaking anything, so we can mark it as Low Severity, but it makes a bad impact on the reputation of the company's site, so it is highest priority to fix it.

Low Priority & High Severity:

1. The downloaded quarterly statement is not generated correctly by the website, and the user already entered the quarter's data in the last month. We can mark such bugs as High Severity, since the bug occurs while generating the quarterly report, but we have time to fix it, as the report is generated only at the end of the quarter, so the priority to fix the bug is Low.
2. The system is crashing in one of the corner scenarios. It impacts major functionality of the system, so the Severity of the defect is High, but as it is a corner scenario that many users will never see, the project manager can mark it as Low Priority, since many other important bugs are likely to be fixed before it: high priority bugs are the ones visible to the client or end user first.

Low Priority & Low Severity:

1. A spelling mistake in the confirmation message, like "You have registered success" instead of "successfully"; only "success" is written.
2. The developer missed removing a cryptic debug-information shortcut key, which the developer used while developing the application; it triggers if you press a certain obscure key combination for 1 minute (funny, no?).

What is User Acceptance Testing?

User Acceptance Testing is the software testing process where the system is tested for acceptability; it validates the end-to-end business flow. Such testing is executed by the client in a separate environment (similar to the production environment) and confirms whether the system meets the requirements of the requirement specification or not. Acceptance testing consists of "black box" tests, meaning UAT users are not aware of the internal structure of the code; they just specify the input to the system and check whether the system responds with the correct result.
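As a minimal illustration of the black-box idea, the sketch below drives a hypothetical authenticate function purely through inputs and observed outputs. A real UAT would exercise the deployed system through its UI or API rather than code; this only mirrors the principle, with pytest assumed as the runner.

```python
# A black-box acceptance check: supply inputs, inspect outputs, and assume
# nothing about internal structure. authenticate() is a hypothetical
# stand-in for the system under acceptance test.

def authenticate(username: str, password: str) -> str:
    """Pretend system under test: returns the name of the page shown next."""
    if username == "alice" and password == "s3cret":
        return "dashboard"
    return "login_error"

def test_valid_user_reaches_dashboard():
    # Business flow: a registered user logs in and lands on the dashboard.
    assert authenticate("alice", "s3cret") == "dashboard"

def test_invalid_password_is_rejected():
    # Business flow: a wrong password must not grant access.
    assert authenticate("alice", "wrong") == "login_error"
```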

Prerequisites of User Acceptance Testing:


Prior to starting UAT, the following checkpoints should be considered:

The Business Requirements should be available.
The development of the software application should be completed, and the different levels of testing like Unit Testing, Integration Testing and System Testing should be completed.
All High Severity, High Priority defects should be verified.
There should be no showstopper defects in the system.
Check that all reported defects are verified prior to the start of UAT.
Check that the Traceability Matrix for all testing is completed.
Before UAT starts, errors like cosmetic errors are acceptable, but they should be reported.
After fixing all the defects, Regression Testing should be carried out to check that the defect fixes have not broken other working areas.
A separate UAT environment, similar to production, should be ready to start UAT.
Sign-off should be given by the System Testing team, stating that the software application is ready for UAT execution.

TYPES OF ACCEPTANCE TESTING

1. Alpha testing: Alpha testing is conducted by the customer at the developer's site. It is performed by potential users like developers, end users or organization users before the product is released to external customers, and the defects found during Alpha testing are reported. The software product under test is not the final version of the software application; after fixing all reported bugs (after bug triage), a new version of the software application is released. Sometimes Alpha Testing is carried out by the client or an outsider with the attendance of the developer and tester. The version of the release on which Alpha testing is performed is called the "Alpha Release".
2. Beta testing: Beta testing is carried out without any help from developers at the end user's site, by the end users, so it is performed in an uncontrolled environment. Beta testing is also known as Field testing. It is used to get feedback from the market. This testing is conducted by a limited number of users, and all issues found during this testing are reported on a continuous basis, which helps to improve the system. Developers take action on all issues reported in beta testing after bug triage, and then the software application is ready for the final release. The version released after beta testing is called the "Beta Release".

What is Usability Testing?


Definition" B he testing aim is to recogni@e any usability problems' gather !ualitative and !uantitative data and establish the participant*s fulfilment with the product. ,sability testing is an essential element of !uality assurance. It is the measure of a product*s potential to accomplish the goals of the user. ,sability testing is a method by which users of a product are asked to perform certain tasks in an effort to measure the product*s easeBofBuse' task time' and the user*s perception of the experience. his look as a uni!ue usability practice because it provides direct input on how real users use the system. Usability testing measures humanBusable products to fulfil the user*s purpose. he item which takes benefit from usability testing are web sites or web applications' documents' computer interfaces' consumer products' and devices. ,sability testing processes the usability of a particular ob#ect or group of ob#ects' where common humanB computer interaction studies try to formulate universal principles. 2sability testing checklist is divided into three parts &ccessibility' ?avigation and 8ontent. *ection %" 'ccessibility

Check if the load time of the website is realistic.
Check if adequate text-to-background contrast is present.
Check if the font size and spacing between the texts is properly readable.
Check if the website has its 404 page or any custom-designed Not Found page.
Check if appropriate ALT tags are added for images.
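Two of these checks can be scripted; the sketch below shows one possible approach, assuming the third-party requests and beautifulsoup4 packages and a placeholder URL.

```python
# Automating two accessibility checks from the list above: page load time
# and ALT tags on images. The URL is a hypothetical placeholder.
import time

import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com"  # placeholder site under test

start = time.monotonic()
response = requests.get(URL, timeout=30)
load_time = time.monotonic() - start
print(f"Load time: {load_time:.2f}s (flag if unrealistic, e.g. over 3s)")

soup = BeautifulSoup(response.text, "html.parser")
# Any <img> element without a non-empty alt attribute fails the check.
missing_alt = [img.get("src") for img in soup.find_all("img") if not img.get("alt")]
print(f"Images missing ALT tags: {missing_alt or 'none'}")
```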

*ection %%" 3avigation

Check if the user effortlessly recognizes the website navigation.
Check if navigation options are understandable and short.
Check if the number of buttons/links is reasonable.
Check if the company logo is linked to the home page.
Check if the style of links is consistent on all pages and easy to understand.
Check if site search is present on the page and easy to access.

*ection %%%" +ontent


Check if URLs are meaningful and user-friendly.
Check if HTML page titles are explanatory.
Check if critical content is above the fold.
Check if emphasis (bold, etc.) is used sparingly.
Check if the main copy is concise and explanatory.
Check if major headings are clear and descriptive.
Check if styles and colours are consistent.

Advantages of Usability Testing:


Usability testing finds important bugs and potholes in the tested application which will not be visible to the developer. Using the correct resources, usability testing can assist in fixing the problems that users face before the application is released. A usability test can be modified according to the requirements to support other types of testing, such as functional testing, system integration testing, unit testing, smoke testing etc. Planned usability testing becomes very economical, highly successful and beneficial. Issues and potential problems are highlighted before the product is launched.

Limitations of Usability Testing:


Planning and data collection are time consuming. It can be unclear why usability problems occur. The small and simple sample size makes it unreliable for drawing conclusions about subjective user preferences. It is hard to create a suitable context. You can't test long-term experiences. Unplanned social connections cannot be replicated. People act in a different way when they know they're being observed.
What are the contents of an effective Bug report?

1. Project
2. Subject
3. Description
4. Summary
5. Detected By (name of the tester)
6. Assigned To (name of the developer who is supposed to fix the bug)
7. Test Lead (name)
8. Detected in Version
9. Closed in Version
10. Date Detected
11. Expected Date of Closure
12. Actual Date of Closure
13. Priority (Medium, Low, High, Urgent)
14. Severity (ranges from 1 to 5)
15. Status
16. Bug ID
17. Attachment
18. Test Case Failed (test case that failed for the bug)
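As a rough sketch, these fields could be captured in a simple record like the one below. The field names and example values are illustrative only, not any particular bug tracker's schema.

```python
# A sketch of the bug-report fields above as a data record (illustrative only).
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class BugReport:
    bug_id: str
    project: str
    subject: str
    description: str
    summary: str
    detected_by: str                      # name of the tester
    assigned_to: str                      # developer expected to fix the bug
    test_lead: str
    detected_in_version: str
    closed_in_version: Optional[str]
    date_detected: date
    expected_date_of_closure: date
    actual_date_of_closure: Optional[date]
    priority: str                         # Medium, Low, High, Urgent
    severity: int                         # ranges from 1 to 5
    status: str
    attachments: list = field(default_factory=list)
    test_case_failed: str = ""            # test case that failed for the bug

report = BugReport(
    bug_id="BUG-101", project="Billing", subject="Login crash",
    description="Run time error shown on login page.",
    summary="Blocker on login", detected_by="A. Tester",
    assigned_to="B. Developer", test_lead="C. Lead",
    detected_in_version="1.2", closed_in_version=None,
    date_detected=date(2024, 1, 10),
    expected_date_of_closure=date(2024, 1, 12),
    actual_date_of_closure=None,
    priority="Urgent", severity=1, status="Open",
    test_case_failed="TC-002",
)
```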

What is the difference between Software Testing and Quality Assurance (QA)?

Software Testing involves operation of a system or application under controlled conditions and evaluating the results. It is oriented to 'detection'. Quality Assurance (QA) involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

V Model

In the V Model Software Development Life Cycle, the development and testing activities start from the same information (the requirement specification document). Based on the requirement document, the developer team starts working on the design and, after completion of the design, on the actual implementation, while the testing team starts working on test planning, test case writing and test scripting. Both activities run in parallel to each other. The Waterfall model and the V-model are quite similar to each other. As it is the most popular Software Testing Life Cycle model, most organizations follow it. The V-model is also called the Verification and Validation model, and a testing activity is performed in each phase of the Software Testing Life Cycle. In the first half of the model, Verification activities are integrated into each phase, like reviewing the user requirements and the System Design document, and in the second half the Validation testing activities come into the picture. A typical V-model shows the Software Development activities on the left hand side of the model, and on the right hand side the actual Testing Phases that are performed. In this process, the "Do-Procedure" is followed by the developer team and the "Check-Procedure" is followed by the testing team, to meet the stated requirements. Different variants of the V-Model software development life cycle exist; here we take the most common type of V-model as an example. The V-model typically consists of the following phases:

1. Unit Testing: preparation of Unit Test Cases
2. Integration Testing: preparation of Integration Test Cases
3. System Testing: preparation of System Test Cases
4. Acceptance Testing: preparation of Acceptance Test Cases
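A minimal sketch of that left-to-right pairing follows; the phase names on the left are chosen for illustration, since V-model variants label them differently.

```python
# The V-model pairing: each development phase on the left arm has a
# corresponding test level on the right arm, and that level's test cases
# are prepared while the left-arm phase is still in progress.
V_MODEL = {
    "Requirement Analysis": "Acceptance Testing",
    "System Design": "System Testing",
    "High-Level Design": "Integration Testing",
    "Module / Low-Level Design": "Unit Testing",
}

for dev_phase, test_level in V_MODEL.items():
    print(f"{dev_phase:30} -> plan and write cases for {test_level}")
```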

Software Testing Life Cycle (STLC)


Software Testing Life Cycle (STLC) is the testing process which is executed in a systematic and planned manner. In the STLC process, different activities are carried out to improve the quality of the product. Let's quickly see what stages are involved in a typical Software Testing Life Cycle (STLC).

#equirement 'nalysis"

Entry Criteria:
The following documents should be available: the Requirements Specification and the application architectural document. Along with the above documents, the acceptance criteria should be well defined.

Activities:
Prepare the list of questions or queries and get them resolved by the Business Analyst, System Architect, Client, Technical Manager/Lead etc. Make out the list of all requirements that are testable. Define the types of tests to be performed, like Functional, Security, and Performance etc. Define the testing focus and priorities. List down the test environment details where the testing activities will be carried out. Check out the automation feasibility if required and prepare the automation feasibility report.

Deliverables:
List of questions, with all answers resolved from the business, i.e. testable requirements. Automation feasibility report (if applicable).

Test )lanning"
(ntry +riteria .e!uirements 4ocuments <,pdated version of unclear or missing re!uirement=. &utomation feasibility report. 'ctivities 4efine 0b#ective + scope of the pro#ect. )ist down the testing types involved in the S )8. est effort estimation and resource planning. Selection of testing tool if re!uired. 4efine the testing process overview. 4efine the test environment re!uired for entire pro#ect. Prepare the test schedules. Deliverable est Plan or est strategy document. esting estimatio n document.

4efine the control procedures. 4etermining roles and responsibilities. )ist down the testing deliverable. 4efine the entry criteria' suspension criteria' resumption criteria and exit criteria. 4efine the risk involved if any.

Test +ase Development"


(ntry +riteria .e!uirements 4ocuments <,pdated version of unclear or missing re!uirement=. &utomation feasibility report. 'ctivities Preparation of test cases. Preparation of test automation scripts <if re!uired=. Deliverable est cases. est data.

est &utomation .eBre!uisite test data preparation for executing Scripts <if test cases. re!uired=.

Test (nvironment *etup"


(ntry +riteria est Plan is available. 'ctivities Deliverable

&naly@e the re!uirements and prepare the list of est Software + hardware re!uired to set up test %nvironment will environment. be ready with Smoke est cases test data. are available. Setup the test environment. .esult of Smoke est data is 0nce the est %nvironment is setup execute the est cases. available. Smoke test cases to check the readiness of the test environment.
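A minimal sketch of such smoke test cases is shown below, assuming a hypothetical base URL and the third-party requests package; real smoke checks would target whatever endpoints prove the environment is usable.

```python
# Smoke tests to check test-environment readiness before deeper execution.
# The base URL and endpoints are hypothetical placeholders; pytest is
# assumed as the runner.
import requests

BASE_URL = "https://test-env.example.com"  # hypothetical test environment

def test_homepage_is_up():
    # Shallowest possible check: the application responds at all.
    response = requests.get(BASE_URL, timeout=10)
    assert response.status_code == 200

def test_login_page_reachable():
    # A second shallow check on a critical path.
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200
```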

Test (xecution"

(ntry +riteria

'ctivities

Deliverable est case execution report.

est Plan or est Based on test planning execute the test strategy document. cases. est cases. est data.

Mark status of test cases like Passed' 5ailed' 4efect report. Blocked' ?ot .un etc. &ssign Bug Id for all 5ailed and Blocked test cases. 4o .etesting once the defects are fixed. rack the defects to closure.
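A minimal sketch of how the status marking and Bug ID assignment might be recorded follows; the case IDs and bug IDs are invented for illustration.

```python
# Tracking test-case execution status, as described above (illustrative only).
from enum import Enum

class Status(Enum):
    PASSED = "Passed"
    FAILED = "Failed"
    BLOCKED = "Blocked"
    NOT_RUN = "Not Run"

# Execution record: test case id -> (status, linked bug id if any).
execution_report = {
    "TC-001": (Status.PASSED, None),
    "TC-002": (Status.FAILED, "BUG-101"),   # failed cases get a Bug ID
    "TC-003": (Status.BLOCKED, "BUG-102"),  # blocked cases too
    "TC-004": (Status.NOT_RUN, None),
}

# Cases to re-execute in the retesting round once their defects are fixed.
retest_queue = [tc for tc, (status, _) in execution_report.items()
                if status in (Status.FAILED, Status.BLOCKED)]
print(retest_queue)  # ['TC-002', 'TC-003']
```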

Test +ycle +losure"


(ntry +riteria 'ctivities Deliverable

est case execution %valuate cycle completion criteria based on est 8losure is completed est coverage' 2uality' 8ost' ime' 8ritical report Business 0b#ectives' and Software Prepare est case %xecution test metrics based on the above parameters. est metrics report Prepare est closure report 4efect report Share best practices for any similar pro#ects in future

Important Points:


1. When the software code has been built, it is executed, and then any defects may cause the system to fail to do what it should do (or do something it shouldn't), causing a failure.
2. Errors may produce defects in the software code or system, or in a document. If a defect in code is executed, the system may experience a failure. So the mistakes we make matter partly because they have consequences for the products for which we are responsible.
3. Failures can be caused by environmental conditions as well: for example, a radiation burst, a strong magnetic field, electronic fields or pollution could cause faults in hardware or firmware. Those faults might prevent or change the execution of software. Failures may also arise because of human error in interacting with the software.
4. Cost of defects: the earlier a defect is found, the cheaper it is to fix.
5. Rigorous testing is necessary during development and maintenance to identify defects, in order to reduce failures in the operational environment and increase the quality of the operational system.
6. We may also be required to carry out software testing to meet contractual or legal requirements, or industry-specific standards.
7. Testing helps to measure the quality of software in terms of the number of defects found, the tests run, and the system covered by the tests.
8. When testing finds defects and they are fixed, the quality of the software system increases.
9. The ISTQB glossary definition of quality covers not just the specified requirements but also user and customer needs and expectations.
10. Quality can be viewed from several different viewpoints, for example:
Viewpoint: Quality is measured by looking at the attributes of the product.
Software: We will measure the attributes of the software, e.g. its reliability in terms of mean time between failures (MTBF), and release when they reach a specified level, e.g. an MTBF of 12 hours.

Viewpoint: Quality is fitness for use. Quality can have subjective aspects and not just quantitative aspects.
Software: We will ask the users whether they can carry out their tasks; if they are satisfied that they can, we will release the software.

Viewpoint: Quality is based on good manufacturing processes, and meeting defined requirements. It is measured by testing, inspection and analysis of faults and failures.
Software: We will use a recognized software development process. We will only release the software if there are fewer than five outstanding high-priority defects once the planned tests are complete.

Viewpoint: Expectation of value for money, affordability, and a value-based trade-off between time, effort and cost aspects.
Software: We can afford to buy this software and we expect a return on investment. We have time-boxed the testing to two weeks to stay within the project budget.

Viewpoint: Transcendent feelings; this is about the feelings of an individual or group of individuals towards a product or a supplier.
Software: We like this software! It is fun and it's the latest thing! So what if it has a few small problems? We want to use it anyway; we like this small local firm.
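Since the product-based viewpoint above relies on MTBF, here is a quick worked example of the calculation; the 600-hour and 50-failure figures are invented for illustration.

```latex
\text{MTBF} = \frac{\text{total operating time}}{\text{number of failures}}
            = \frac{600 \text{ hours}}{50 \text{ failures}}
            = 12 \text{ hours between failures}
```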

11. The more rigorous our testing, the more defects we'll find.
