
Software Testing Master Test Plan for Functional Testing

Table of Contents

Table of Contents
Revision History
Testing Framework
1.0 Introduction
    1.2 Traditional Testing Cycle
2.0 Verification and Validation Testing Strategies
    2.1 Verification Strategies
        2.1.1 Reviews
        2.1.2 Inspections
        2.1.3 Walkthroughs
    2.2 Validation Strategies
3.0 Testing Types
    3.1 White Box Testing
        White Box Testing Types
        3.1.1 Basis Path Testing
        3.1.2 Flow Graph Notation
        3.1.3 Cyclomatic Complexity
        3.1.4 Graph Matrices
        3.1.5 Control Structure Testing
            3.1.5.1 Condition Testing
            3.1.5.2 Data Flow Testing
        3.1.6 Loop Testing
            3.1.6.1 Simple Loops
            3.1.6.2 Nested Loops
            3.1.6.3 Concatenated Loops
            3.1.6.4 Unstructured Loops
    3.2 Black Box Testing
        Black Box Testing Types
        3.2.1 Graph Based Testing Methods
        3.2.2 Equivalence Partitioning
        3.2.3 Boundary Value Analysis
        3.2.4 Comparison Testing
        3.2.5 Orthogonal Array Testing
    3.3 Scenario Based Testing (SBT)
    3.4 Exploratory Testing
    3.5 Structural System Testing Techniques
    3.6 Functional System Testing Techniques
4.0 Testing Phases
    4.2 Unit Testing
    4.3 Integration Testing
        4.3.1 Top-Down Integration
        4.3.2 Bottom-Up Integration
    4.4 Smoke Testing
    4.5 System Testing
        4.5.1 Recovery Testing
        4.5.2 Security Testing
        4.5.3 Stress Testing
        4.5.4 Performance Testing
        4.5.5 Regression Testing
    4.6 Alpha Testing
    4.7 User Acceptance Testing
    4.8 Beta Testing
5.0 Metrics
6.0 Test Models
    6.1 The V Model
    6.2 The W Model
    6.3 The Butterfly Model
7.0 Defect Tracking Process
8.0 Test Process for a Project
9.0 Deliverables

Revision History

Version 1.0 (August 6, 2003), Harinath:
    Initial document creation and posting on the web site.

Version 2.0 (December 15, 2003), Harinath:
    Renamed the document to Software Testing Framework V2.0. Modified the structure of the document. Added the Testing Models section. Added the SBT and ET testing types.

The next version of this framework will include test estimation procedures and more metrics.

Testing Framework

Through experience it has been determined that there should be about 30 defects per 1,000 lines of code. If testing does not uncover close to 30 defects, a logical conclusion is that the test process was not effective.

1.0 Introduction

Testing plays an important role in today's System Development Life Cycle. During testing, we follow a systematic procedure to uncover defects at various stages of the life cycle.

This framework is aimed at giving the reader an overview of the various test types, test phases, test models and test metrics, and at guiding how to perform effective testing in a project.

All the definitions and standards mentioned in this framework are existing ones; I have not altered any definitions, but wherever possible I have tried to explain them in simple words. The framework, approach and suggestions, however, are my own experiences. My intention with this framework is to help test engineers understand the concepts of testing and its various techniques, and to apply them effectively in their daily work. This framework is not for publication or monetary distribution. If you have any queries or suggestions for improvement, or find any points missing, kindly write back to me.

1.2 Traditional Testing Cycle

Let us look at the traditional software development life cycle. The figure below depicts it.

[Fig A: Requirements, Design, Code, Test, Maintenance in sequence. Fig B: the same cycle with testing paired with each of the Requirements, Design and Code phases.]

In the above diagram (Fig A), the testing phase comes after the coding is complete, before the product is launched and goes into maintenance.

But the recommended test process involves testing in every phase of the life cycle (Fig B). During the requirements phase, the emphasis is upon validation, to determine that the defined requirements meet the needs of the project. During the design and program phases, the emphasis is on verification, to ensure that the design and programs accomplish the defined requirements. During the test and installation phases, the emphasis is on inspection, to determine that the implemented system meets the system specification.

The chart below describes the life cycle verification activities.

Requirements: Determine the verification approach. Determine the adequacy of requirements. Generate functional test data.
Design: Determine consistency of the design with the requirements. Determine the adequacy of the design. Generate structural and functional test data.
Program (Build): Determine consistency with the design. Determine the adequacy of the implementation. Generate structural and functional test data for programs.
Test: Test the application system.
Installation: Place the tested system into production.
Maintenance: Modify and retest.

Throughout the entire life cycle, neither development nor verification is a straight-line activity. Modifications or corrections to a structure at one phase will require modifications or re-verification of structures produced during previous phases.

2.0 Verification and Validation Testing Strategies

2.1 Verification Strategies

The verification strategies, the persons/teams involved in the testing, and the deliverable of each phase of testing are summarized below:

Requirements Reviews
    Performed by: Users, Developers, Test Engineers.
    Explanation: Requirements reviews help in baselining the desired requirements to build a system.
    Deliverable: Reviewed and approved statement of requirements.

Design Reviews
    Performed by: Designers, Test Engineers.
    Explanation: Design reviews help in validating that the design meets the requirements and builds an effective system.
    Deliverable: System Design Document, Hardware Design Document.

Code Walkthroughs
    Performed by: Developers, Subject Specialists, Test Engineers.
    Explanation: Code walkthroughs help in analyzing the coding techniques and whether the code meets the coding standards.
    Deliverable: Software ready for initial testing by the developer.

Code Inspections
    Performed by: Developers, Subject Specialists, Test Engineers.
    Explanation: Formal analysis of the program source code to find defects, defined as deviations from the system design specification.
    Deliverable: Software ready for testing by the testing team.

2.1.1 Reviews

The focus of a review is on a work product (e.g. a requirements document, code, etc.). After the work product is developed, the Project Leader calls for a review. The work product is distributed to the personnel involved in the review. The main audience for the review should be the Project Manager, the Project Leader and the producer of the work product.

Major reviews include the following:
1. In-Process Reviews
2. Decision-Point or Phase-End Reviews
3. Post-Implementation Reviews

Let us discuss the above-mentioned reviews in brief. Statistics suggest that reviews uncover over 65% of defects, while testing uncovers around 30%, so it is very important to maintain reviews as part of the V&V strategies.

In-Process Review
An in-process review looks at the product during a specific time period of the life cycle, such as an activity. In-process reviews are usually limited to a segment of a project, with the goal of identifying defects as work progresses, rather than at the close of a phase or even later, when they are more costly to correct.

Decision-Point or Phase-End Review
This review looks at the product for the main purpose of determining whether to continue with the planned activities. These reviews are held at the end of each phase, in a semiformal or formal way. Defects found are tracked through resolution, usually by way of the existing defect tracking system. The common phase-end reviews are the Software Requirements Review, the Critical Design Review and the Test Readiness Review.

The Software Requirements Review is aimed at validating and approving the documented software requirements for the purpose of establishing a baseline and identifying analysis packages. The Development Plan, Software Test Plan and Configuration Management Plan are some of the documents reviewed during this phase.

The Critical Design Review baselines the detailed design specification. Test cases are reviewed and approved.

The Test Readiness Review is performed when the appropriate application components are nearing completion. This review determines the readiness of the application for system and acceptance testing.

Post-Implementation Review
These reviews are held after implementation is complete, to audit the process based on actual results. Post-implementation reviews are also known as postmortems; they are held to assess the success of the overall process after release and to identify any opportunities for process improvement. They can be held up to three to six months after implementation, and are conducted in a formal format.

There are three general classes of reviews:
1. Informal, or peer reviews
2. Semiformal, or walk-throughs
3. Formal, or inspections

A Peer Review is generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported. These reviews occur on an as-needed basis throughout each phase of a project.

2.1.2 Inspections

A knowledgeable individual called a moderator, who is not a member of the team or the author of the product under review, facilitates inspections. A recorder, who records the defects found and the actions assigned, assists the moderator. The meeting is planned in advance, and material is distributed to all the participants, who are expected to attend the meeting well prepared. The issues raised during the meeting are documented and circulated among the members present and the management.

2.1.3 Walkthroughs

The author of the material being reviewed facilitates a walk-through. The participants are led through the material in one of two formats: either the presentation is made without interruptions and comments are made at the end, or comments are made throughout. In either case, the issues raised are captured and published in a report distributed to the participants. Possible solutions for uncovered defects are not discussed during the review.

2.2 Validation Strategies

The validation strategies, the persons/teams involved in the testing, and the deliverable of each phase of testing are summarized below:

Unit Testing
    Performed by: Developers / Test Engineers.
    Explanation: Testing of a single program, module, or unit of code.
    Deliverable: Software unit ready for testing with other system components.

Integration Testing
    Performed by: Test Engineers.
    Explanation: Testing of integrated programs, modules, or units of code.
    Deliverable: Portions of the system ready for testing with other portions of the system.

System Testing
    Performed by: Test Engineers.
    Explanation: Testing of the entire computer system. This kind of testing usually includes functional and structural testing.
    Deliverable: Tested computer system, based on what was specified to be developed.

Production Environment Testing
    Performed by: Developers, Test Engineers.
    Explanation: Testing of the whole computer system before rolling out to UAT.
    Deliverable: Stable application.

User Acceptance Testing
    Performed by: Users.
    Explanation: Testing of the computer system to make sure it will work for its users regardless of what the system requirements indicate.
    Deliverable: Tested and accepted system, based on the user needs.

Installation Testing
    Performed by: Test Engineers.
    Explanation: Testing of the computer system during installation at the user's place.
    Deliverable: Successfully installed application.

Beta Testing
    Performed by: Users.
    Explanation: Testing of the application after installation at the client's place.
    Deliverable: Successfully installed and running application.

3.0 Testing Types

There are two types of testing:
1. Functional, or Black Box Testing
2. Structural, or White Box Testing

Before the project management decides on the testing activities to be performed, it should have decided on the test type that it is going to follow. If it is black box, the test cases should be written addressing the functionality of the application. If it is white box, the test cases should be written for the internal and functional behavior of the system.

Functional testing ensures that the requirements are properly satisfied by the application system. The functions are those tasks that the system is designed to accomplish. Structural testing ensures sufficient testing of the implementation of a function.

3.1 White Box Testing

White box testing, also known as glass box testing, is a testing method where the tester is involved in testing the individual software programs, using tools, standards, etc.

Using white box testing methods, we can derive test cases that:
1) Guarantee that all independent paths within a module have been exercised at least once,
2) Exercise all logical decisions on their true and false sides,
3) Execute all loops at their boundaries and within their operational bounds, and
4) Exercise internal data structures to ensure their validity.

Advantages of white box testing:
1) Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.
2) Often, a logical path is believed unlikely to be executed when, in fact, it may be executed on a regular basis.
3) Typographical errors are random, so some will sit on otherwise untraveled paths.

White Box Testing Types

There are various types of white box testing. In this framework I will address the most common and important types.

3.1.1 Basis Path Testing
Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

3.1.2 Flow Graph Notation
The flow graph depicts logical control flow using a diagrammatic notation. Each structured construct has a corresponding flow graph symbol.

3.1.3 Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program, and provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

Computing Cyclomatic Complexity
Cyclomatic complexity has a foundation in graph theory and provides an extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

3.1.4 Graph Matrices
The procedure for deriving the flow graph, and even determining a set of basis paths, is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure called a graph matrix can be quite useful. A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes.

3.1.5 Control Structure Testing
Described below are some of the variations of control structure testing.

3.1.5.1 Condition Testing
Condition testing is a test case design method that exercises the logical conditions contained in a program module.

3.1.5.2 Data Flow Testing
The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.
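As a minimal illustration of formula 2 above, the following Python sketch computes V(G) = E - N + 2 for a small flow graph held as an adjacency list. The graph (an if/else followed by a while loop) and all its node labels are invented for the example.

    # Compute cyclomatic complexity V(G) = E - N + 2 from an adjacency list.
    def cyclomatic_complexity(graph):
        """graph: dict mapping each node to the list of nodes it flows to."""
        nodes = set(graph)
        for targets in graph.values():
            nodes.update(targets)               # include pure sink nodes
        edges = sum(len(targets) for targets in graph.values())
        return edges - len(nodes) + 2

    # Hypothetical flow graph: an if/else followed by a while loop.
    flow = {
        "entry": ["if"],
        "if": ["then", "else"],     # predicate node: two outgoing edges
        "then": ["while"],
        "else": ["while"],
        "while": ["body", "exit"],  # predicate node: two outgoing edges
        "body": ["while"],
    }
    print(cyclomatic_complexity(flow))  # 3 -> three independent paths to test

The graph has two predicate nodes (the if and the while), so formula 3 agrees: P + 1 = 3.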

3.1.6 Loop Testing
Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: simple loops, nested loops, concatenated loops, and unstructured loops.

3.1.6.1 Simple Loops
The following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop (the sketch following 3.1.6.4 applies these values):
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop.

3.1.6.2 Nested Loops
If we extend the test approach for simple loops to nested loops, the number of possible tests grows geometrically as the level of nesting increases. Instead:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to typical values.
4. Continue until all loops have been tested.

3.1.6.3 Concatenated Loops
Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.

3.1.6.4 Unstructured Loops
Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.
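Here is a minimal sketch of the simple-loop test values from 3.1.6.1. The function `process` and the limit `MAX_PASSES` are assumptions invented for the illustration; in a real system the n+1 case is the one expected to expose a failure at the loop limit.

    MAX_PASSES = 100  # n: assumed maximum number of allowable passes

    def process(items):
        total = 0
        for item in items:      # the simple loop under test
            total += item
        return total

    # Exercise the loop with 0, 1, 2, m (m < n), n-1, n, and n+1 passes.
    for passes in (0, 1, 2, 50, MAX_PASSES - 1, MAX_PASSES, MAX_PASSES + 1):
        data = [1] * passes
        assert process(data) == passes, f"failed with {passes} passes"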

3.2 Black Box Testing

Black box testing, also known as behavioral testing, focuses on the functional requirements of the software. All the functional requirements of the program will be used to derive sets of input conditions for testing.

Black Box Testing Types

The following are the most famous and frequently used black box testing types.

3.2.1 Graph Based Testing Methods
Testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph, so that each object and relationship is exercised and errors are uncovered.

3.2.2 Equivalence Partitioning
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. Equivalence classes can be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.

3.2.3 Boundary Value Analysis
BVA is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the edges of the class. And rather than focusing solely on input conditions, BVA derives test cases from the output domain as well. Guidelines for BVA are similar in many respects to those provided for equivalence partitioning (the sketch following 3.2.5 illustrates both techniques).

3.2.4 Comparison Testing
There are situations where independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. These independent versions form the basis of a black box testing technique called comparison testing, or back-to-back testing.

3.2.5 Orthogonal Array Testing
The orthogonal array testing method is particularly useful in finding errors associated with region faults, an error category associated with faulty logic within a software component.
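To make the EP and BVA guidelines concrete, here is a minimal sketch assuming a hypothetical input field that accepts integers from 1 to 100; `is_valid`, `LOW` and `HIGH` are invented for the illustration. EP picks one representative per class, while BVA adds the values at the edges of the valid class.

    LOW, HIGH = 1, 100  # assumed valid range for the input condition

    def is_valid(value):
        return LOW <= value <= HIGH  # stand-in for the validation under test

    # EP guideline 1: a range yields one valid and two invalid classes.
    ep_values = {50: True, 0: False, 101: False}

    # BVA: exercise the edges of the class rather than arbitrary members.
    bva_values = {LOW - 1: False, LOW: True, LOW + 1: True,
                  HIGH - 1: True, HIGH: True, HIGH + 1: False}

    for value, expected in {**ep_values, **bva_values}.items():
        assert is_valid(value) == expected, f"unexpected result for {value}"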

3.3 Scenario Based Testing (SBT)

Dr. Cem Kaner, in "A Pattern for Scenario Testing", has explained scenario based testing in great detail; it can be found at www.testing.com. What scenario based testing is, and how and where it is useful, are interesting questions, and I shall explain both points in brief.

Scenario based testing is categorized under black box tests and is most helpful when the testing is concentrated on the business logic and functional behavior of the application. Adopting SBT is effective when testing complex applications. Now, every application is complex, so it is the team's call whether or not to implement SBT. I would personally suggest using SBT when the functionality to test includes various features and functions. A good example is testing a banking application: as banking applications require the utmost care while testing, handling various functions in a single scenario produces effective results. A sample transaction (scenario) can be: a customer logs into the application, checks his balance, transfers an amount to another account, pays his bills, checks his balance again and logs out.

In brief, use scenario based tests when:
1. Testing complex applications.
2. Testing business functionality.

When designing scenarios, keep in mind:
1. The scenario should be close to a real-life scenario.
2. Scenarios should be realistic.
3. Scenarios should be traceable to any functionality, or a combination thereof.
4. Scenarios should be supported by sufficient data.

3.4 Exploratory Testing

Exploratory tests are categorized under black box tests and are aimed at testing in conditions where sufficient time is not available for testing, or proper documentation is not available. Exploratory testing is "testing while exploring": when you have no idea of how the application works, exploring the application with the intent of finding errors can be termed exploratory testing.

How to perform exploratory testing is a big question for many people. The following can be used:
1. Learn the application.
2. Learn the business the application addresses.
3. Learn, to the maximum extent, the technology on which the application has been designed.
4. Learn how to test.
5. Plan and design tests as per the learning.

3.5 Structural System Testing Techniques

The following are the structural system testing techniques.

Stress: Determines system performance with expected volumes. Example: sufficient disk space is allocated.
Execution: The system achieves the desired level of proficiency. Example: transaction turnaround time is adequate.
Recovery: The system can be returned to an operational status after a failure. Example: evaluate the adequacy of backup data.
Operations: The system can be executed in a normal operational status. Example: determine that the system can be run using the documentation.
Compliance: The system is developed in accordance with standards and procedures. Example: standards are followed.
Security: The system is protected in accordance with its importance to the organization. Example: access is denied.

3.6 Functional System Testing Techniques

The following are the functional system testing techniques.

Requirements: The system performs as specified. Example: prove the system requirements.
Regression: Verifies that anything unchanged still performs correctly. Example: unchanged system segments still function.
Error Handling: Errors can be prevented or detected, and then corrected. Example: errors introduced into the test are detected.
Manual Support: The people-computer interaction works. Example: manual procedures are developed.
Intersystems: Data is correctly passed from system to system. Example: intersystem parameters are changed.
Control: Controls reduce system risk to an acceptable level. Example: file reconciliation procedures work.
Parallel: The old system and the new system are run and the results compared to detect unplanned differences. Example: old and new systems can reconcile.

4.0 Testing Phases

[Figure: flow of documents across the testing phases. The Requirement Study produces the Software Requirement Specification (checked against a Requirement Checklist); the Functional Specification Document follows (checked against a Functional Specification Checklist); Architecture Design and the Detailed Design Document precede Coding. From these, the test artifacts are derived: Unit Test Case Documents (from the Functional Specification and Design Documents); Integration, System and Regression Test Case Documents; Performance Test Cases and Scenarios (from the Performance Criteria and the Software Requirement Specification); and User Acceptance Test Case Documents/Scenarios.]

4.2 Unit Testing

The goal of unit testing is to uncover defects using formal techniques like Boundary Value Analysis (BVA), Equivalence Partitioning, and Error Guessing. Defects and deviations in date formats, special requirements in input conditions (for example, a text box where only numerics or alphabets should be entered), and selections based on combo boxes, list boxes, option buttons and check boxes would be identified during the unit testing phase.

4.3 Integration Testing

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design. Usually, the following methods of integration testing are followed:
1. Top-down integration approach.
2. Bottom-up integration approach.

4.3.1 Top-Down Integration

Top-down integration testing is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module (see the sketch after 4.3.2).
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

4.3.2 Bottom-Up Integration

Bottom-up integration testing begins construction and testing with atomic modules (i.e. components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
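As a minimal sketch of step 1 of the top-down process, the example below uses an invented `checkout` control module and a `payment_stub` standing in for a not-yet-integrated subordinate component; all names are hypothetical.

    def payment_stub(order):
        # Stub standing in for the real payment component, not yet integrated.
        return {"status": "approved", "order": order}

    def checkout(order, pay=payment_stub):
        # Main control module under test; `pay` is swapped for the real
        # component once it is integrated, and the same tests are re-run.
        result = pay(order)
        return result["status"] == "approved"

    assert checkout({"id": 1, "amount": 100})  # test conducted during integration

When the real payment component is ready, it replaces the stub (step 2 above) and the same test is executed again, which is what makes the approach incremental.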

4.4 Smoke Testing

Smoke testing can be characterized as a rolling integration strategy. It is an integration testing approach that is commonly used when shrink-wrapped software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis. The smoke test should exercise the entire system from end to end. Smoke testing provides benefits such as:
1) Integration risk is minimized.
2) The quality of the end product is improved.
3) Error diagnosis and correction are simplified.
4) Progress is easier to assess.

4.5 System Testing

System testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform their allocated functions. The following tests can be categorized under system testing:
1. Recovery Testing.
2. Security Testing.
3. Stress Testing.
4. Performance Testing.

4.5.1 Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, reinitialization, checkpointing mechanisms, data recovery and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.

4.5.2 Security Testing
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, password cracking, unauthorized entry into the software and network security are all taken into consideration.

4.5.3 Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. The following types of tests may be conducted during stress testing:
1. Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
2. Input data rates may be increased by an order of magnitude to determine how input functions will respond.
3. Test cases that require maximum memory or other resources.
4. Test cases that may cause excessive hunting for disk-resident data.
5. Test cases that may cause thrashing in a virtual operating system.

4.5.4 Performance Testing
Performance tests are coupled with stress testing and usually require both hardware and software instrumentation.
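A minimal sketch of software instrumentation for such a test follows, assuming an invented `handle_request` operation and a 200 ms response budget; both are assumptions for the example, not values prescribed by this framework.

    import time

    RESPONSE_BUDGET = 0.2  # seconds; assumed performance criterion

    def handle_request(payload):
        return sum(payload)  # stand-in for the operation under test

    worst = 0.0
    for _ in range(10_000):            # abnormal volume, as in stress testing
        start = time.perf_counter()
        handle_request(list(range(100)))
        worst = max(worst, time.perf_counter() - start)

    assert worst <= RESPONSE_BUDGET, f"worst response {worst:.3f}s over budget"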

4.5.5 Regression Testing

Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools. The regression test suite contains three different classes of test cases:
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.

4.6 Alpha Testing
Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.

4.7 User Acceptance Testing
User acceptance testing occurs just before the software is released to the customer. The end users, along with the developers, perform the user acceptance testing with a certain set of test cases and typical scenarios.

4.8 Beta Testing
Beta testing is conducted at one or more customer sites by the end user of the software. The beta test is a live application of the software in an environment that cannot be controlled by the developer.

5.0 Metrics

Metrics are among the most important responsibilities of the test team. Metrics allow for a deeper understanding of the performance of the application and its behavior, and the fine-tuning of the application can be guided only by metrics. In a typical QA process there are many metrics that provide information. The following can be regarded as the fundamental metric:

IEEE Std 982.2-1988 defines a Functional or Test Coverage Metric. It can be used to measure test coverage prior to software delivery, and it provides a measure of the percentage of the software tested at any point during testing. It is calculated as follows:

Function Test Coverage = FE / FT

where FE is the number of test requirements that are covered by test cases that were executed against the software, and FT is the total number of test requirements (a short worked sketch follows the release criteria below).

Software Release Metrics
The software is ready for release when:
1. It has been tested with a test suite that provides 100% functional coverage, 80% branch coverage, and 100% procedure coverage.
2. There are no level 1 or 2 severity defects.
3. The defect finding rate is less than 40 new defects per 1,000 hours of testing.
4. The software reaches 1,000 hours of operation.
5. Stress testing, configuration testing, installation testing, naive-user testing, usability testing, and sanity testing have been completed.
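For example, a small sketch of the Function Test Coverage calculation with illustrative numbers; the figure of 172 covered requirements out of 200 is invented data.

    def function_test_coverage(fe, ft):
        """fe: test requirements covered by executed cases; ft: total."""
        return fe / ft

    print(f"{function_test_coverage(172, 200):.0%}")  # 86% of requirements covered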

IEEE Software Maturity Metric

IEEE Std 982.2-1988 defines a Software Maturity Index that can be used to determine the readiness for release of a software system. This index is especially useful for assessing release readiness when changes, additions, or deletions are made to existing software systems. It also provides a historical index of the impact of changes. It is calculated as follows:

SMI = (Mt - (Fa + Fc + Fd)) / Mt

where SMI is the Software Maturity Index value, Mt is the number of software functions/modules in the current release, Fc is the number of functions/modules that contain changes from the previous release, Fa is the number of functions/modules that contain additions to the previous release, and Fd is the number of functions/modules that are deleted from the previous release (a worked sketch follows the reliability metrics below).

Reliability Metrics

Perry offers the following equation for calculating reliability:

Reliability = 1 - (number of errors (actual or predicted) / total number of lines of executable code)

This reliability value is calculated for the number of errors during a specified time interval. Three other metrics can be calculated during extended testing or after the system is in production. They are:

MTTFF (Mean Time to First Failure):
MTTFF = the number of time intervals the system is operable until its first failure.

MTBF (Mean Time Between Failures):
MTBF = (sum of the time intervals the system is operable) / (number of failures in the time period).

MTTR (Mean Time to Repair):
MTTR = (sum of the time intervals required to repair the system) / (number of repairs during the time period).
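A short sketch computing the SMI and MTBF with made-up figures; all numbers (120 modules, 6 added, 10 changed, 2 deleted, and the operable intervals) are invented for the illustration.

    def software_maturity_index(mt, fa, fc, fd):
        """mt: modules in current release; fa/fc/fd: added/changed/deleted."""
        return (mt - (fa + fc + fd)) / mt

    def mtbf(operable_intervals, failures):
        """Sum of operable time intervals divided by failures in the period."""
        return sum(operable_intervals) / failures

    print(software_maturity_index(mt=120, fa=6, fc=10, fd=2))  # 0.85
    print(mtbf([40.0, 35.5, 60.2], failures=3))                # ~45.2 hours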

6.0 Test Models

There are various models of software testing. In this framework I will explain the three most commonly used models:
1. The V Model.
2. The W Model.
3. The Butterfly Model.

6.1 The V Model

[Figure: the V model. The left leg descends through Requirements, Specification, Architecture, Detailed Design and Coding; each level pairs with a test level on the right leg: Acceptance Tests, System Tests, Integration Tests and Unit Tests respectively.]

The diagram is self-explanatory; for an easy understanding, look at the following table:

SDLC Phase: Test Phase
1. Requirements: Build test strategy; plan for testing; acceptance test scenario identification.
2. Specification: System test case generation.
3. Architecture: Integration test case generation.
4. Detailed Design: Unit test case generation.

6.2 The W Model

[Figure: the W model. The first V pairs each phase with a review: Requirements Review, Specification Review, Architecture Review, Design Review and Code Walkthrough. The second V pairs the phases with test execution: Unit Testing, Integration Testing, Regression Round 1, System Testing, Regression Round 2, Performance Testing and Regression Round 3.]

The W model depicts that testing starts from day one of the initiation of the project and continues till the end. The following table illustrates the phases of activities that happen in the W model:

SDLC Phase: First V: Second V
1. Requirements: Requirements Review: Build test strategy; plan for testing; acceptance (beta) test scenario identification.
2. Specification: Specification Review: System test case generation.
3. Architecture: Architecture Review: Integration test case generation.
4. Detailed Design: Detailed Design Review: Unit test case generation.
5. Code: Code Walkthrough: Execute unit tests; execute integration tests; Regression Round 1; execute system tests; Regression Round 2; performance tests; Regression Round 3; performance/beta tests.

In the second V, I have mentioned acceptance/beta test scenario identification. This is because the customer might want to design the acceptance tests; in that case, as the development team executes the beta tests at the client's place, the same team can identify the scenarios. Regression rounds are performed at regular intervals to re-test the defects that have been raised and fixed, and to confirm that they stay fixed.

6.3 The Butterfly Model

The testing activities for software products preferably follow the Butterfly Model, depicted below.

[Figure: the Butterfly Model, with Test Analysis as the left wing, Test Design as the right wing, and Test Execution as the body.]

In the Butterfly model of test development, the left wing of the butterfly depicts the Test Analysis, the right wing depicts the Test Design, and the body of the butterfly depicts the Test Execution. How exactly this happens is described below.

Test Analysis

Analysis is the key factor that drives any planning. During the analysis, the analyst does the following:
1. Verifies that each requirement is tagged in a manner that allows correlation of the tests for that requirement to the requirement itself (establishes test traceability).
2. Verifies traceability of the software requirements to the system requirements.
3. Inspects for contradictory requirements.
4. Inspects for ambiguous requirements.
5. Inspects for missing requirements.
6. Checks to make sure that each requirement, as well as the specification as a whole, is understandable.
7. Identifies one or more measurement, demonstration, or analysis methods that may be used to verify the requirement's implementation (during formal testing).
8. Creates a test sketch that includes the tentative approach and indicates the tests' objectives.

During test analysis, the required documents are carefully studied by the test personnel, and the final Analysis Report is documented. The following documents are usually referred to:
1. Software Requirements Specification.
2. Functional Specification.
3. Architecture Document.

4. Use Case Documents.

The Analysis Report consists of the understanding of the application, the functional flow of the application, the number of modules involved, and the effective test time.

Test Design

The right wing of the butterfly represents the act of designing and implementing the test cases needed to verify the design artifact as replicated in the implementation. Like test analysis, it is a relatively large piece of work. Unlike test analysis, however, the focus of test design is not to assimilate information created by others, but rather to implement procedures, techniques, and data sets that achieve the tests' objectives.

The outputs of the test analysis phase are the foundation for test design. Each requirement or design construct has had at least one technique (a measurement, demonstration, or analysis) identified during test analysis that will validate or verify that requirement. The tester must now implement the intended technique.

Software test design, as a discipline, is an exercise in the prevention, detection, and elimination of bugs in software. Preventing bugs is the primary goal of software testing. Diligent and competent test design prevents bugs from ever reaching the implementation stage. Test design, with its attendant test analysis foundation, is therefore the premier weapon in the arsenal of developers and testers for limiting the cost associated with finding and fixing bugs.

During test design, based on the Analysis Report, the test personnel develop the following:
1. Test Plan.
2. Test Approach.
3. Test Case documents.
4. Performance Test Parameters.
5. Performance Test Plan.

Test Execution

Any test case should adhere to the following principles:
1. Accurate: tests what its description says it will test.
2. Economical: has only the steps needed for its purpose.
3. Repeatable: gives consistent results, no matter who executes it or when.
4. Appropriate: apt for the situation.
5. Traceable: the functionality the test case exercises can be easily found.

During the test execution phase, keeping to the project and test schedules, the test cases designed are executed. The following documents are handled during the test execution phase:
1. Test Execution Reports.
2. Daily/Weekly/Monthly Defect Reports.
3. Person-wise defect reports.

After the test execution phase, the following documents are signed off:
1. Project Closure Document.
2. Reliability Analysis Report.
3. Stability Analysis Report.
4. Performance Analysis Report.
5. Project Metrics.

7.0 Defect Tracking Process

The defect tracking process should answer the following questions:
1. When was the defect found?
2. Who raised the defect?
3. Is the defect reported properly?
4. Is the defect assigned to the appropriate developer?
5. When was the defect fixed?
6. Is the defect re-tested?
7. Is the defect closed?

The defect tracking process has to be handled carefully and managed efficiently. The following flow illustrates the defect tracking process:

1. The tester/developer finds the bug.
2. The defect is reported in the defect tracking tool, with status "Open".
3. The concerned developer is informed.
4. The developer fixes the defect and changes the status to "Resolved".
5. The tester re-tests the fix and changes the status to "Closed"; if the defect reoccurs, the status changes to "Re-Open".
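The flow above can be sketched as a small state machine. The statuses mirror the flow; real defect tracking tools implement richer versions of this life cycle.

    ALLOWED = {
        "Open": {"Resolved"},
        "Resolved": {"Closed", "Re-Open"},  # re-test passes or fails
        "Re-Open": {"Resolved"},
        "Closed": set(),
    }

    def transition(status, new_status):
        if new_status not in ALLOWED[status]:
            raise ValueError(f"illegal transition {status} -> {new_status}")
        return new_status

    s = "Open"
    s = transition(s, "Resolved")  # developer fixes the defect
    s = transition(s, "Closed")    # tester re-tests and closes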

Defect Classification

This section defines a defect severity scale framework for determining defect criticality, and the associated defect priority levels to be assigned to errors found in software. The defects can be classified as follows:

Critical: There is a functionality block; the application is not able to proceed any further.
Major: The application is not working as desired; there are variations in the functionality.
Minor: There is no failure reported due to the defect, but it certainly needs to be rectified.
Cosmetic: Defects in the user interface or navigation.
Suggestion: A feature that can be added for betterment.

Priority Level of the Defect

The priority level describes the time for resolution of the defect. The priority levels can be classified as follows:

Immediate: Resolve the defect with immediate effect.
At the Earliest: Resolve the defect at the earliest, on priority at the second level.
Normal: Resolve the defect.
Later: Could be resolved at the later stages.

8.0 Test Process for a Project

In this section, I explain how to go about planning your testing activities effectively and efficiently. The process is explained in a tabular format, giving the phase of testing, the activity and the person responsible. For this, I assume that the project has been identified and that the testing team consists of five personnel: a Test Manager, a Test Lead, a Senior Test Engineer and two Test Engineers.

SDLC Phase: Testing Phase/Activity: Personnel
1. Requirements: Study the requirements for testability; design the test strategy; prepare the test plan; identify scenarios for acceptance/beta tests. (Test Manager / Test Lead)
2. Specification: Identify system test cases/scenarios; identify performance tests. (Test Lead, Senior Test Engineer, and Test Engineers)
3. Architecture: Identify integration test cases/scenarios; identify performance tests. (Test Lead, Senior Test Engineer, and Test Engineers)
4. Detailed Design: Generate unit test cases. (Test Engineers)

9.0 Deliverables

The deliverables from the test team include the following:
1. Test Strategy.
2. Test Plan.
3. Test Case Documents.
4. Defect Reports.
5. Status Reports (Daily/Weekly/Monthly).
6. Test Scripts (if any).
7. Metric Reports.
8. Product Sign-off Document.
