
Software Testing Types:

Black box testing - Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing - This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.

Unit testing - Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.

Incremental integration testing - Bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added; application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing - Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.

Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per requirement. Black-box type testing geared to the functional requirements of an application.

System testing - The entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications and covers all combined parts of a system.

End-to-end testing - Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application is crashing during initial use, the system is not stable enough for further testing and the build or application is sent back to be fixed.

Regression testing - Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.

Acceptance testing - Normally this type of testing is done to verify that the system meets the customer-specified requirements. Users or customers do this testing to determine whether to accept the application.

Load testing - A performance test that checks system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting in data beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing - A term often used interchangeably with 'stress' and 'load' testing. Checks whether the system meets performance requirements. Different performance and load tools are used for this testing.

Usability testing - A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented for whenever the user gets stuck at any point? Basically, system navigation is checked in this testing.

Install/uninstall testing - Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access. Checks whether the system and database are safe from external attacks.

Compatibility testing - Testing how well software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.

Comparison testing - Comparison of product strengths and weaknesses with previous versions or other similar products.

Alpha testing - An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development. Minor design changes may still be made as a result of such testing.

Beta testing - Testing typically done by end users or others. Final testing before releasing the application for commercial purposes.

Testing methods
Static vs. dynamic testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be omitted, and unfortunately in practice often is. Dynamic testing takes place when the program itself is used. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
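As a rough illustration of the stub/driver technique just mentioned (not taken from the original text), the following Java sketch shows a simple test driver that exercises one routine before the rest of the application exists; the routine, its name, and the expected values are hypothetical.

    // A minimal test driver: a throwaway program that exercises one module
    // (here a hypothetical "discountedPrice" routine) so dynamic testing can
    // start before the program is complete.
    public class DiscountDriver {

        // The unit under early test; in a real project it would live in its own class.
        static double discountedPrice(double unitPrice, int quantity) {
            double discount = quantity >= 10 ? 0.10 : 0.0;
            return unitPrice * quantity * (1.0 - discount);
        }

        public static void main(String[] args) {
            // The driver feeds representative inputs and prints results for inspection.
            System.out.println(discountedPrice(5.0, 1));   // expected 5.0
            System.out.println(discountedPrice(5.0, 10));  // expected 45.0
        }
    }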

The box approach


Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. Techniques used in white-box testing include:

API testing (application programming interface) - testing of the application using public and private APIs
Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
Fault injection methods - intentionally introducing faults to gauge the efficacy of testing strategies
Mutation testing methods
Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[21] Code coverage as a software metric can be reported as a percentage for:

Function coverage, which reports on functions executed

Statement coverage, which reports on the number of lines executed to complete the test

100% statement coverage ensures that all code paths, or branches (in terms of control flow), are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.

Black-box testing
Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation. The tester is only aware of what the software is supposed to do, not how it does it.[22] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.

Specification-based testing aims to test the functionality of software according to the applicable requirements.[23] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.

One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested. This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well.

Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code.[24][not in citation given] Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box, as the user would not normally be able to change the data outside of the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up his testing environment, for instance seeding a database, and the tester can observe the state of the product being tested after performing certain actions. For instance, in testing a database product he/she may fire an SQL query on the database and then observe the database, to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.
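As a rough illustration (not from the original text) of two of the black-box techniques listed above, equivalence partitioning and boundary value analysis, the following Java sketch tests a hypothetical "ages 18 to 65 inclusive are eligible" rule; JUnit 4 is assumed, and the rule and its implementation are invented for the example.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class EligibilityBoundaryTest {

        // Stand-in for the system under test; only its specified behavior matters here.
        static boolean isEligible(int age) {
            return age >= 18 && age <= 65;
        }

        @Test
        public void valuesAroundTheLowerBoundary() {
            assertFalse(isEligible(17)); // just below the boundary
            assertTrue(isEligible(18));  // on the boundary
            assertTrue(isEligible(19));  // just above the boundary
        }

        @Test
        public void valuesAroundTheUpperBoundary() {
            assertTrue(isEligible(64));
            assertTrue(isEligible(65));
            assertFalse(isEligible(66));
        }

        @Test
        public void oneRepresentativeFromEachEquivalenceClass() {
            assertFalse(isEligible(5));  // class: below the valid range
            assertTrue(isEligible(40));  // class: inside the valid range
            assertFalse(isEligible(90)); // class: above the valid range
        }
    }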

Visual testing
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he requires, and the information is expressed clearly. At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process, capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.

Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams. Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, whilst important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important.

Visual testing is gaining recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer.

Testing levels
Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK guide are unit-, integration-, and system testing, which are distinguished by the test target without implying a specific process model.[30] Other test levels are classified by the testing objective.[30]

Unit testing
Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[31] These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other.

Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be localized more quickly and fixed. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[32]

System testing
System testing tests a completely integrated system to verify that it meets its requirements.[33]

Acceptance testing
Finally, the system is delivered to the user for acceptance testing.

Testing approach

Top-down and bottom-up


Bottom-up testing is an approach to integrated testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested. All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower-level integrated modules, the next level of modules will be formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.

Top-down testing is an approach to integrated testing where the top integrated modules are tested and the branch of the module is tested step by step until the end of the related module.

Objectives of testing
Installation testing
An installation test assures that the system is installed correctly and works on the actual customer's hardware.

Compatibility testing
A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
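The last sentence describes a common mitigation; as a rough, invented illustration (not from the original text), the following Java sketch abstracts one piece of operating-system-specific behavior behind an interface so that other environments can be supported without changing the calling code. All names are hypothetical.

    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class CompatibilityExample {

        // The abstraction the rest of the application codes against.
        interface UserDirectories {
            Path configDir(String appName);
        }

        // One environment-specific implementation; variants for other operating
        // systems or versions can be added without touching the callers.
        static class LinuxDirectories implements UserDirectories {
            public Path configDir(String appName) {
                return Paths.get(System.getProperty("user.home"), ".config", appName);
            }
        }

        public static void main(String[] args) {
            UserDirectories dirs = new LinuxDirectories(); // selected per platform at startup
            System.out.println(dirs.configDir("myapp"));
        }
    }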

Smoke and sanity testing


Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing is used to determine whether there are serious problems with a piece of software, for example as a build verification test.
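As a rough illustration (not from the original text), a build verification (smoke) test often amounts to a handful of very shallow checks; in the Java sketch below, JUnit 4 is assumed and "InventoryService" is a hypothetical stand-in for an application's core service, defined inline only so the sketch is self-contained.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class SmokeTest {

        // Stand-in for a real service from the build under test.
        static class InventoryService {
            boolean isAvailable(String item) { return item != null; }
        }

        @Test
        public void applicationStartsAndCoreOperationResponds() {
            // If construction or this single happy-path call fails, the build is
            // too broken for deeper testing and should be rejected.
            InventoryService service = new InventoryService();
            assertNotNull(service);
            assertTrue(service.isAvailable("sample-item"));
        }
    }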

"egression testing

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. Tests can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.

Acceptance testing
Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed]

Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[36]

Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.[citation needed]

Functional vs non-functional testing


Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.[citation needed] Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
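As a rough, invented illustration of fuzzing as a destructive test, the following Java sketch feeds random, often malformed, strings to an input-handling routine and fails only if the routine crashes in an undocumented way; "parseConfig" is a hypothetical stand-in for the real routine.

    import java.util.Random;

    public class FuzzHarness {

        // Stand-in for the routine under test.
        static int parseConfig(String input) {
            if (input == null || input.isEmpty()) {
                throw new IllegalArgumentException("empty input");
            }
            return input.length(); // a real parser would build a config object here
        }

        public static void main(String[] args) {
            Random random = new Random(42); // fixed seed so failures are reproducible
            for (int i = 0; i < 10_000; i++) {
                byte[] noise = new byte[random.nextInt(64)];
                random.nextBytes(noise);
                String input = new String(noise);
                try {
                    parseConfig(input);
                } catch (IllegalArgumentException expected) {
                    // Documented rejection of bad input is acceptable behavior.
                } catch (RuntimeException unexpected) {
                    throw new AssertionError("Unhandled failure for input #" + i, unexpected);
                }
            }
            System.out.println("No unhandled failures in 10,000 random inputs");
        }
    }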

Software performance testing


Performance testing is in general executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing.

Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size.

Stress testing is a way to test reliability under unexpected or rare workloads.

Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period.

There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, reliability testing, and volume testing are often used interchangeably.
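As a rough, invented illustration of a load test, the following Java sketch submits batches of concurrent requests at increasing simulated user counts and reports the average response time; "callService" is a placeholder for the real operation under test (for example an HTTP call).

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    public class LoadTestSketch {
        public static void main(String[] args) throws Exception {
            for (int users : new int[] {10, 50, 100}) {       // increasing load levels
                ExecutorService pool = Executors.newFixedThreadPool(users);
                List<Future<Long>> timings = new ArrayList<>();
                for (int i = 0; i < users * 20; i++) {        // 20 requests per simulated user
                    timings.add(pool.submit(() -> {
                        long start = System.nanoTime();
                        callService();                        // operation under test
                        return (System.nanoTime() - start) / 1_000_000; // milliseconds
                    }));
                }
                long total = 0;
                for (Future<Long> t : timings) total += t.get();
                System.out.printf("users=%d avg=%d ms%n", users, total / timings.size());
                pool.shutdown();
            }
        }

        private static void callService() throws InterruptedException {
            Thread.sleep(5); // placeholder for the real request
        }
    }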

Usability testing
Usability testing is needed to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application.

Accessibility
Accessibility testing may include compliance with standards such as:

Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973

Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Security testing
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.

Internationalization and localization
The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).[35] Actual translation to human languages must be tested, too. Possible localization failures include:

Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
Untranslated messages in the original language may be left hard coded in the source code.
Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
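As a rough, invented illustration of the pseudolocalization technique described above, the following Java sketch (Java 11 or later assumed, for String.repeat) replaces characters with accented variants and pads the text, so that hard-coded strings, clipped layouts, and encoding problems become visible without a real translation.

    public class PseudoLocalizer {
        public static String pseudoLocalize(String source) {
            StringBuilder out = new StringBuilder("[");
            for (char c : source.toCharArray()) {
                switch (c) {
                    case 'a': out.append('å'); break;
                    case 'e': out.append('é'); break;
                    case 'o': out.append('ø'); break;
                    default:  out.append(c);
                }
            }
            // Pad by roughly 30% to simulate languages whose text runs longer than English.
            int padding = Math.max(1, source.length() * 3 / 10);
            out.append("·".repeat(padding)).append(']');
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(pseudoLocalize("Save changes?")); // prints [Såvé chångés?···]
        }
    }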

Development testing
Development Testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Development Testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. Depending on the organization's expectations for software development, Development Testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices.

Unit testing
In computer programming, unit testing is a method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine if they are fit for use. Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming a unit could be an entire module but is more commonly an individual function or procedure. In object-oriented programming a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are created by programmers or occasionally by white box testers during the development process. Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Its implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation.
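As a rough, invented illustration of isolating a module with a substitute, the following Java sketch (JUnit 4 assumed) replaces a collaborator with a hand-rolled stub so that only the unit under test is exercised; all names are hypothetical.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class OrderTotalerTest {

        // The collaborator the unit under test depends on.
        interface TaxRateSource {
            double rateFor(String region);
        }

        // The unit under test.
        static class OrderTotaler {
            private final TaxRateSource taxRates;
            OrderTotaler(TaxRateSource taxRates) { this.taxRates = taxRates; }
            double total(double net, String region) {
                return net * (1.0 + taxRates.rateFor(region));
            }
        }

        @Test
        public void addsRegionalTax() {
            // The stub replaces the real tax lookup, so the test exercises OrderTotaler alone.
            TaxRateSource stub = region -> 0.20;
            OrderTotaler totaler = new OrderTotaler(stub);
            assertEquals(120.0, totaler.total(100.0, "anywhere"), 0.001);
        }
    }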

Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits.

Find problems early


Unit tests find problems early in the development cycle. In test-driven development (TDD), which is frequently used in both Extreme Programming and Scrum, unit tests are created before the code itself is written. When the tests pass, that code is considered complete. The same unit tests are run against that function frequently as the larger code base is developed, either as the code is changed or via an automated process with the build.

If the unit tests fail, it is considered to be a bug either in the changed code or the tests themselves. The unit tests then allow the location of the fault or failure to be easily traced. Since the unit tests alert the development team of the problem before handing the code off to testers or clients, it is still early in the development process.

Facilitates change
Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly (e.g., in regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be quickly identified and fixed. Readily available unit tests make it easy for the programmer to check whether a piece of code is still working properly. In continuous unit testing environments, through the inherent practice of sustained maintenance, unit tests will continue to accurately reflect the intended use of the executable and code in the face of any change. Depending upon established development practices and unit test coverage, up-to-the-second accuracy can be maintained.

Simplifies integration
Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier. An elaborate hierarchy of unit tests does not equal integration testing. Integration with peripheral units should be included in integration tests, but not in unit tests.[citation needed] Integration testing typically still relies heavily on humans testing manually; high-level or global-scope testing can be difficult to automate, such that manual testing often appears faster and cheaper.[citation needed]

Documentation
Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit and how to use it can look at the unit tests to gain a basic understanding of the unit's API. Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development. By contrast, ordinary narrative documentation is more susceptible to drifting from the implementation of the program and will thus become outdated (e.g., design changes, feature creep, relaxed practices in keeping documents up-to-date).

Design
When software is developed using a test-driven approach, the unit test may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behaviour. The following Java example will help illustrate this point. Here is a test class that specifies a number of elements of the implementation. First, that there must be an interface called Adder, and an implementing class with a zero-argument constructor called AdderImpl. It goes on to assert that the Adder interface should have a method called add, with two integer parameters, which returns another integer. It also specifies the behaviour of this method for a small range of values.
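The example itself does not appear in this copy of the text; a sketch matching the description above (JUnit 4 assumed) could look like the following, where Adder and AdderImpl are the types the test is meant to drive into existence rather than pre-existing classes.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class TestAdder {

        // Each test asserts the existence of Adder, AdderImpl, its zero-argument
        // constructor, and the add(int, int) method, and pins down its behaviour
        // for a small range of values.
        @Test
        public void sumOfTwoPositiveNumbers() {
            Adder adder = new AdderImpl();
            assertEquals(3, adder.add(1, 2));
        }

        @Test
        public void sumOfTwoNegativeNumbers() {
            Adder adder = new AdderImpl();
            assertEquals(-2, adder.add(-1, -1));
        }

        @Test
        public void zeroIsNeutral() {
            Adder adder = new AdderImpl();
            assertEquals(0, adder.add(0, 0));
        }
    }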

Integration testing
Integration testing (sometimes called Integration and Testing, abbreviated "I&T") is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

Purpose
The purpose of integration testing is to verify functional, performance, and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interface. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations, and this is done after testing individual modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. Some different types of integration testing are big bang, top-down, and bottom-up. Other integration patterns[1] are: Collaboration Integration, Backbone Integration, Layer Integration, Client/Server Integration, Distributed Services Integration and High-frequency Integration.
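As a rough, invented illustration of the "building block" idea described above, the following Java sketch (JUnit 4 assumed) wires two already unit-tested components together and exercises them only through their public interfaces; the component names are hypothetical.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CheckoutIntegrationTest {

        static class PriceCatalog {
            double priceOf(String sku) { return "BOOK".equals(sku) ? 10.0 : 0.0; }
        }

        static class Checkout {
            private final PriceCatalog catalog;
            Checkout(PriceCatalog catalog) { this.catalog = catalog; }
            double totalFor(String... skus) {
                double total = 0;
                for (String sku : skus) total += catalog.priceOf(sku);
                return total;
            }
        }

        @Test
        public void checkoutUsesRealCatalog() {
            // No stubs here: the point is to exercise the interaction between the
            // verified assemblage (Checkout) and the verified base (PriceCatalog).
            Checkout checkout = new Checkout(new PriceCatalog());
            assertEquals(20.0, checkout.totalFor("BOOK", "BOOK"), 0.001);
        }
    }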

System testing
System testing of software or hardware is testing conducted on a complete! integrated system to e aluate the system@s compliance with its specified requirements. (ystem testing falls within the

scope of black box testing! and as such! should require no knowledge of the inner design of the code or logic. :3< As a rule! system testing takes! as its input! all of the =integrated= software components that ha e successfully passed integration testing and also the software system itself integrated with any applicable hardware system6s7. The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together 6called assemblages7 or between any of the assemblages and the hardware. (ystem testing is a more limited type of testing# it seeks to detect defects both within the =inter&assemblages= and also within the system as a whole.

Testing the whole system


System testing is performed on the entire system in the context of a Functional Requirement Specification(s) (FRS) and/or a System Requirement Specification (SRS). System testing tests not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).[citation needed]

Types of tests to include in system testing


The following examples are different types of testing that should be considered during system testing:

Graphical user interface testing
Usability testing
Software performance testing
Compatibility testing
Exception handling
Load testing
Volume testing
Stress testing
Security testing
Scalability testing

Sanity testing
Smoke testing
Exploratory testing
Ad hoc testing
Regression testing
Installation testing
Maintenance testing[clarification needed]
Recovery testing and failover testing
Accessibility testing, including compliance with:
  o Americans with Disabilities Act of 1990
  o Section 508 Amendment to the Rehabilitation Act of 1973
  o Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Although different testing organizations may prescribe different tests as part of system testing, this list serves as a general framework or foundation to begin with.
