
1. What is software testing?

Software testing is oriented to detecting defects and is often equated with finding bugs.
Testing is the process of executing a software system to determine whether it matches its
specification and executes in its intended environment under controlled conditions. The
controlled conditions should include both normal and abnormal conditions. Testing should
intentionally attempt to make things go wrong, to determine whether things happen when
they shouldn't or fail to happen when they should.

SQA: -
Software QA involves the entire software development PROCESS: monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'.

Stop Testing: -
Testing is potentially endless. We cannot test until all the defects are unearthed and
removed -- it is simply impossible. At some point, we have to stop testing and ship the
software. The question is when.

Realistically, testing is a trade-off between budget, time, and quality. It is driven by profit
models. The pessimistic, and unfortunately most often used, approach is to stop testing
whenever some, or any, of the allocated resources -- time, budget, or test cases -- are
exhausted. The optimistic stopping rule is to stop testing when either reliability meets the
requirement or the benefit from continued testing cannot justify the testing cost [Yang95].
This will usually require the use of reliability models to evaluate and predict the
reliability of the software under test. Each evaluation requires repeated runs of the
following cycle: failure data gathering -- modeling -- prediction. This method does not fit
well for ultra-dependable systems, however, because the real field failure data would take
too long to accumulate.
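
As a rough illustration of the optimistic stopping rule, the sketch below assumes a simple
constant-failure-rate (exponential) reliability model; the failure data, mission time, and
reliability target are invented purely for illustration, and a real project would use a proper
reliability growth model.

    import math

    def estimated_reliability(failure_times_h, mission_time_h):
        # Constant failure rate fitted to observed data: failures seen
        # divided by the total test time observed so far (assumed model).
        failure_rate = len(failure_times_h) / failure_times_h[-1]
        # Probability of failure-free operation over the mission duration.
        return math.exp(-failure_rate * mission_time_h)

    # Illustrative failure data: cumulative test hours at which failures occurred.
    failures = [12.0, 40.0, 95.0, 180.0, 400.0]
    target = 0.95  # assumed required reliability over an 8-hour mission

    r = estimated_reliability(failures, mission_time_h=8.0)
    print(f"Estimated reliability over the mission: {r:.3f}")
    if r >= target:
        print("Optimistic stopping rule satisfied: testing can stop.")
    else:
        print("Keep testing: gather more failure data and re-estimate.")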

For Verification & Validation (V&V)
As the topic Verification and Validation indicates, another important purpose of testing is
verification and validation (V&V). Testing can serve as a metric and is heavily used as a
tool in the V&V process. Testers can make claims based on interpretations of the testing
results: either the product works under certain situations, or it does not. We can also
compare the quality of different products built to the same specification, based on results
from the same test.

2. What is the purpose of software testing?


The purpose of software testing is
a. To demonstrate that the product performs each function intended;
b. To demonstrate that the internal operation of the product performs according to
specification and all internal components have been adequately exercised;
c. To increase our confidence in the proper functioning of the software.
d. To show the product is free from defects.
e. All of the above.

3. Types and Levels of Testing: -
COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web
site with different browsers, OSs, and hardware platforms. Compatibility testing can be
performed manually or can be driven by an automated functional or regression test suite.

CONFORMANCE TESTING. Verifying implementation conformance to industry
standards. Producing tests for the behavior of an implementation to be sure it provides the
portability, interoperability, and/or compatibility a standard defines.

FUNCTIONAL TESTING. Validating that an application or Web site conforms to its
specifications and correctly performs all its required functions. This entails a series of
tests that perform a feature-by-feature validation of behavior, using a wide range of
normal and erroneous input data. This can involve testing of the product's user interface,
APIs, database management, security, installation, networking, etc. Functional testing can
be performed on an automated or manual basis using black box or white box
methodologies.

LOAD TESTING. Load testing is a generic term covering Performance Testing and
Stress Testing.

PERFORMANCE TESTING. Performance testing can be applied to understand your
application or Web site's scalability, or to benchmark the performance in an environment
of third-party products such as servers and middleware for potential purchase. This sort of
testing is particularly useful to identify performance bottlenecks in high-use applications.
Performance testing generally involves an automated test suite, as this allows easy
simulation of a variety of normal, peak, and exceptional load conditions.
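
The following sketch shows one way such simulation might look, using Python threads to
submit requests at normal, peak, and exceptional levels of concurrency; handle_request,
the timings, and the user counts are placeholders rather than part of any particular tool.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(payload):
        # Stand-in for the operation under test (e.g. one service call).
        time.sleep(0.01)  # simulate roughly 10 ms of work
        return payload

    def measure_throughput(concurrent_users, requests_per_user):
        # Submit the whole workload through a pool of worker threads and
        # time how long it takes to drain.
        total = concurrent_users * requests_per_user
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            for _ in pool.map(handle_request, range(total)):
                pass  # consume results so all requests complete
        elapsed = time.perf_counter() - start
        print(f"{concurrent_users:3d} users: {total:4d} requests in "
              f"{elapsed:5.2f}s ({total / elapsed:6.0f} req/s)")

    # Normal, peak, and exceptional load levels (illustrative values).
    for users in (5, 50, 200):
        measure_throughput(users, requests_per_user=10)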

REGRESSION TESTING. Similar in scope to a functional test, a regression test allows
a consistent, repeatable validation of each new release of a product or Web site. Such
testing ensures that reported product defects have been corrected for each new release and
that no new quality problems were introduced in the maintenance process. Though
regression testing can be performed manually, an automated test suite is often used to
reduce the time and resources needed to perform the required testing.
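
As a minimal sketch of automated regression checking, the fragment below compares the
current outputs against a recorded baseline; price_with_tax, the test cases, and the
baseline file name are hypothetical.

    import json
    import pathlib

    def price_with_tax(amount, rate):
        # Function under regression test (hypothetical).
        return round(amount * (1 + rate), 2)

    BASELINE = pathlib.Path("price_baseline.json")  # recorded expected outputs
    CASES = [(100.00, 0.07), (19.99, 0.20), (0.0, 0.07)]

    def current_results():
        return [price_with_tax(amount, rate) for amount, rate in CASES]

    if not BASELINE.exists():
        # First run against a known-good release: record the baseline.
        BASELINE.write_text(json.dumps(current_results()))
        print("Baseline recorded.")
    else:
        expected = json.loads(BASELINE.read_text())
        actual = current_results()
        assert actual == expected, f"Regression detected: {actual} != {expected}"
        print("No regression: outputs still match the recorded baseline.")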

SMOKE TESTING. A quick-and-dirty test that the major functions of a piece of
software work without bothering with finer details. Originated in the hardware testing
practice of turning on a new piece of hardware for the first time and considering it a
success if it does not catch on fire.

STRESS TESTING. Testing conducted to evaluate a system or component at or beyond
the limits of its specified requirements, to determine the load under which it fails and how.
A graceful degradation under load leading to non-catastrophic failure is the desired result.
Often stress testing is performed using the same process as performance testing but
employing a very high level of simulated load.

UNIT TESTING. Functional and reliability testing in an engineering environment.
Producing tests for the behavior of components of a product to ensure their correct
behavior prior to system integration.
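
A minimal unit test along these lines, written with Python's standard unittest module,
might look as follows; the percentage function is an invented stand-in for a component
under test.

    import unittest

    def percentage(part, whole):
        # Component under test (hypothetical): part as a percentage of whole.
        if whole == 0:
            raise ValueError("whole must be non-zero")
        return 100.0 * part / whole

    class PercentageTest(unittest.TestCase):
        def test_typical_value(self):
            self.assertAlmostEqual(percentage(25, 200), 12.5)

        def test_zero_part(self):
            self.assertEqual(percentage(0, 10), 0.0)

        def test_zero_whole_is_rejected(self):
            with self.assertRaises(ValueError):
                percentage(5, 0)

    if __name__ == "__main__":
        unittest.main()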

Black Box Testing
Black box testing methods focus on the functional requirements of the software. Test sets
are derived that fully exercise all functional requirements. This strategy tends to be
applied during the latter part of the lifecycle.
Tests are designed to answer questions such as:

1) How is functional validity tested?
2) What classes of input make good test cases?
3) Is the system particularly sensitive to certain input values?
4) How are the boundaries of data classes isolated?
5) What data rates or volumes can the system tolerate?
6) What effect will specific combinations of data have on system operation?

Equivalence Partitioning: -
This method divides the input of a program into classes of data. Test case design is based
on defining an equivalence class for a particular input. An equivalence class represents a
set of valid or invalid input values.
Guidelines for equivalence partitioning -

1) If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2) If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3) If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined.
4) If an input condition is boolean, one valid and one invalid class are defined.
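
As a small sketch of guideline 1, the fragment below tests one representative value from
each equivalence class of a hypothetical validator that accepts ages 18 through 65; the
validator and the chosen representatives are illustrative only.

    def accepts_age(age):
        # Hypothetical input validator: valid ages are 18 through 65.
        return 18 <= age <= 65

    # One representative value per equivalence class (a range gives one
    # valid and two invalid classes, per guideline 1).
    equivalence_cases = [
        (40, True),   # valid class: inside the range
        (10, False),  # invalid class: below the range
        (80, False),  # invalid class: above the range
    ]

    for value, expected in equivalence_cases:
        assert accepts_age(value) == expected, f"unexpected result for age={value}"
    print("All equivalence-class representatives behaved as expected.")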

Boundary Value Analysis: -
Boundary value analysis is complementary to equivalence partitioning. Rather than
selecting arbitrary values from within an equivalence class, the test case designer chooses
values at the extremes of the class. Furthermore, boundary value analysis also encourages
test case designers to look at output conditions and design test cases for the extreme
conditions in output.
Guidelines for boundary value analysis -

1) If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b, and with values just above and just below a and b.
2) If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values above and below the minimum and
maximum are also tested.
3) Apply the above guidelines to output conditions. For example, if the requirement
specifies the production of a table as output, then you want to choose input conditions
that produce the largest and smallest possible table.
4) For internal data structures be certain to design test cases to exercise the data structure
at its boundary. For example, if the software includes the maintenance of a personnel list,
then you should ensure the software is tested with conditions where the list size is 0, 1 and
maximum (if constrained).
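
A minimal sketch of boundary value selection for guideline 1, reusing the same
hypothetical age validator (valid range 18 to 65): the candidate inputs are the two bounds
plus the values just below and just above each bound.

    def boundary_values(a, b):
        # Candidate inputs for a range [a, b]: each bound plus the values
        # immediately below and above it.
        return [a - 1, a, a + 1, b - 1, b, b + 1]

    def accepts_age(age):
        # Same hypothetical validator as above: valid ages are 18 through 65.
        return 18 <= age <= 65

    for value in boundary_values(18, 65):
        print(f"age={value:3d} -> accepted={accepts_age(value)}")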

Cause-Effect Graphs
A weakness of the two previous methods is that they do not consider potential
combinations of input/output conditions. Cause-effect graphs connect input classes
(causes) to output classes (effects), yielding a directed graph.
Guidelines for cause-effect graphs -

1) Causes and effects are listed for a module and an identifier is assigned to each.
2) A cause-effect graph is developed (special symbols are required).
3) The graph is converted to a decision table.
4) Decision table rules are converted to test cases.
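
As a rough sketch of steps 3 and 4, the fragment below represents a small decision table
directly in code and executes each rule as a test case against a hypothetical login module;
the causes, effects, and module are invented for illustration.

    def login(valid_user, valid_password):
        # Hypothetical module whose causes and effects were graphed.
        return "grant_access" if valid_user and valid_password else "show_error"

    # Each decision-table rule maps a combination of causes (inputs) to the
    # expected effect (output); each rule becomes one test case.
    decision_table = [
        # (valid_user, valid_password) -> expected effect
        ((True,  True),  "grant_access"),
        ((True,  False), "show_error"),
        ((False, True),  "show_error"),
        ((False, False), "show_error"),
    ]

    for (user_ok, password_ok), expected in decision_table:
        actual = login(user_ok, password_ok)
        assert actual == expected, f"rule {(user_ok, password_ok)}: got {actual}"
    print(f"{len(decision_table)} decision-table rules executed as test cases.")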

We cannot test quality directly, but we can test related factors to make quality visible.
Quality has three sets of factors -- functionality, engineering, and adaptability. These three
sets of factors can be thought of as dimensions in the software quality space. Each
dimension may be broken down into its component factors and considerations at
successively lower levels of detail.

Performance testing
Not all software systems have explicit performance specifications, but every system has
implicit performance requirements. The software should not take infinite time or infinite
resources to execute. "Performance bugs" is a term sometimes used to refer to design
problems in software that cause the system performance to degrade.

Reliability testing
Software reliability refers to the probability of failure-free operation of a system. It is
related to many aspects of software, including the testing process. Directly estimating
software reliability by quantifying its related factors can be difficult. Testing is an
effective sampling method to measure software reliability. Guided by the operational
profile, software testing (usually black-box testing) can be used to obtain failure data, and
an estimation model can be further used to analyze the data to estimate the present
reliability and predict future reliability. Therefore, based on the estimation, the developers
can decide whether to release the software, and the users can decide whether to adopt and
use the software. Risk of using software can also be assessed based on reliability
information. [Hamlet94] advocates that the primary goal of testing should be to measure
the dependability of tested software.
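
As a simplified sketch of this idea, the fragment below runs many test cases (simulated
here with an assumed fixed failure probability) and derives a Nelson-style reliability
estimate as the fraction of failure-free runs; run_one_case and the run count are
placeholders for tests drawn from a real operational profile.

    import random

    def run_one_case():
        # Stand-in for executing one test case drawn from the operational
        # profile; True means the run was failure-free. Simulated here with
        # an assumed 2% failure probability.
        return random.random() > 0.02

    runs = 1000
    failures = sum(1 for _ in range(runs) if not run_one_case())

    # Nelson-style estimate: reliability is the observed fraction of
    # failure-free runs.
    reliability = 1 - failures / runs
    print(f"{failures} failures in {runs} runs; "
          f"estimated reliability per run: {reliability:.3f}")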

Security testing
Software quality, reliability and security are tightly coupled. Flaws in software can be
exploited by intruders to open security holes. With the development of the Internet,
software security problems are becoming even more severe.
Many critical software applications and services have integrated security measures against
malicious attacks. The purpose of security testing of these systems includes identifying and
removing software flaws that may potentially lead to security violations, and validating
the effectiveness of security measures. Simulated security attacks can be performed to
find vulnerabilities.
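
A very small sketch of this kind of negative testing is shown below: a handful of classic
malicious payloads are fed to a hypothetical input validator, and the test fails if any of
them is accepted. The validator, its rules, and the payload list are illustrative only.

    import re

    def is_safe_username(value):
        # Hypothetical input validator: only letters, digits and underscores,
        # at most 32 characters.
        return bool(re.fullmatch(r"[A-Za-z0-9_]{1,32}", value))

    # A few classic malicious payloads used to probe input handling.
    attack_inputs = [
        "alice'; DROP TABLE users;--",    # SQL-injection style
        "<script>alert('x')</script>",    # cross-site-scripting style
        "../../etc/passwd",               # path-traversal style
        "A" * 10000,                      # oversized input
    ]

    for payload in attack_inputs:
        assert not is_safe_username(payload), f"unsafe input accepted: {payload!r}"
    print("All simulated attack inputs were rejected.")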

TESTING means "quality control"
* QUALITY CONTROL measures the quality of a product.
* QUALITY ASSURANCE measures the quality of the processes used to create a quality
product.
Beta testing is typically conducted by end users of a software product who are not paid a
salary for their efforts.

Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer
acceptance.

Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this type of
testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can
often find problems that are not caught in regular testing. Sometimes, if testing occurs
very late in the development cycle, this will be the only kind of testing that can be
performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior to
users being involved. Sometimes a select group of users are involved. More often this
testing will be performed in-house or by an outside testing firm in close cooperation with
the software engineering department.

Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, reducing
the need for a person to test manually. Automated testing still
requires a skilled quality assurance professional with knowledge of the automation tool
and the software being tested to set up the tests.

Beta Testing
Testing after the product is code complete. Betas are often widely distributed or even
distributed to the public at large in hopes that they will buy the final product when it is
released.

Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of
the module being tested. Black box tests, as most other kinds of tests, must be written
from a definitive source document, such as a specification or requirements document.

Compatibility Testing
Testing used to determine whether other system software components such as browsers,
utilities, and competing software will conflict with the software being tested.

Configuration Testing
Testing to determine how well the product works with a broad range of
hardware/peripheral equipment configurations as well as on different operating systems
and software.

Functional Testing
Testing two or more modules together with the intent of finding defects, demonstrating
that defects are not present, verifying that the module performs its intended functions as
stated in the specification and establishing confidence that a program does what it is
supposed to do.

Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software system
meets its requirements and user expectations and doesn't fail in an unacceptable manner.
The individual or group doing this work is not part of the group or organization that
developed the software. A term often applied to government work or where the
government regulates the products, as in medical devices.

Installation Testing
Testing with the intent of determining if the product will install on a variety of platforms
and how easily it installs.

Integration Testing
Testing two or more modules or functions together with the intent of finding interface
defects between the modules or functions. This testing is sometimes completed as a part
of unit or functional testing, and sometimes becomes its own standalone test phase. On a
larger level, integration testing can involve putting together groups of modules and
functions with the goal of completing and verifying that the system meets the system
requirements. (see system testing)

Load Testing
Testing with the intent of determining how well the product handles competition for
system resources. The competition may come in the form of network traffic, CPU
utilization or memory allocation.

Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events.
Automated test tools geared specifically to test and fine-tune performance are used most
often for this type of testing.

Pilot Testing
Testing that involves the users just before actual release to ensure that users become
familiar with the release contents and ultimately accept it. It is often considered a Move-to-
Production activity for ERP releases or a beta test for commercial products. Typically
involves many users, is conducted over a short period of time and is tightly controlled.
(see beta testing)

Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not
created any new problems. Also, this type of testing is done to ensure that no degradation
of baseline functionality has occurred.

Security Testing
Testing of database and network software in order to keep company data and resources
secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing
The process of exercising software with the intent of ensuring that the software system
meets its requirements and user expectations and doesn't fail in an unacceptable manner.
The organization and management of individuals or groups doing this work is not
relevant. This term is often applied to commercial products such as internet applications.
(contrast with independent verification and validation)

Stress Testing
Testing with the intent of determining how well a product performs when a load is placed
on the system resources that nears and then exceeds capacity.

System Integration Testing
Testing a specific hardware/software installation. This is typically performed on a COTS
(commercial off the shelf) system or any other system comprised of disparate parts where
custom configurations and/or unique installations are the norm.

White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure and
language of the software, or at least its purpose.

Difference Between Verification & Validation: -
- Verification is about answering the question "Does the system function properly?" or
"Have we built the system right?"
- Validation is about answering the question "Is the product what the customer wanted?"
or "Have we built the right system?"

This definition indicates that Validation could be the same thing as Acceptance Test (or at
least very similar).

I have often described the Verification and Validation processes in the same way, i.e.:

1. Plan the test (output Test Plan)
2. Specify the test (output Test Specification)
3. Perform the test (output Test Log and/or Test Report)

Verification & Validation
Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and
takes place after verifications are completed. The term 'IV & V' refers to Independent
Verification and Validation.
