
Usability Testing

Usability testing is the process of working with end-users, directly and indirectly, to assess how
users perceive a software package and how they interact with it. This process will uncover areas
of difficulty for users as well as areas of strength. The goal of usability testing should be to limit
and remove difficulties for users and to leverage areas of strength for maximum usability.

This testing should ideally involve direct user feedback, indirect feedback (observed behavior),
and, when possible, computer-supported feedback. Computer-supported feedback is often (if not
always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it
takes users to complete the dialog, plus counters to determine how often certain conditions occur
(e.g., error messages, help messages). Often, this involves trivial
modifications to existing software, but can result in tremendous return on investment.
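
As a rough illustration, a sketch of this kind of instrumentation might look like the following; the dialog name and event names are hypothetical, not taken from any particular product:

    # Computer-supported feedback sketch: a timer around a dialog's lifetime and
    # counters for notable events. Dialog and event names are hypothetical.
    import time
    from collections import Counter

    event_counts = Counter()

    class DialogTimer:
        def __init__(self, dialog_name):
            self.dialog_name = dialog_name

        def __enter__(self):
            self.start = time.monotonic()
            return self

        def __exit__(self, exc_type, exc, tb):
            elapsed = time.monotonic() - self.start
            print(f"{self.dialog_name} open for {elapsed:.2f}s")

    def record_event(name):
        """Count how often a condition occurs (error message, help request, etc.)."""
        event_counts[name] += 1

    # Usage: wrap the dialog's lifetime and count events raised while it is open.
    with DialogTimer("export-settings"):
        record_event("help_opened")
        record_event("validation_error")
    print(dict(event_counts))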

Ultimately, usability testing should result in changes to the delivered product in line with the
discoveries made regarding usability. These changes should be directly related to real-world
usability for average users. As much as possible, documentation should be written to support these
changes so that similar situations can be handled with ease in the future.

Data-Driven Testing
Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing.
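
A minimal sketch of the technique, assuming a pytest-style test and a hypothetical discounts.csv file with price, percent, and expected columns:

    # Data driven test: the test body is fixed, the values come from an external
    # file. "discounts.csv" and its columns are assumed for illustration.
    import csv

    import pytest

    def apply_discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    def load_cases(path="discounts.csv"):
        with open(path, newline="") as f:
            return [(float(r["price"]), float(r["percent"]), float(r["expected"]))
                    for r in csv.DictReader(f)]

    @pytest.mark.parametrize("price,percent,expected", load_cases())
    def test_apply_discount(price, percent, expected):
        assert apply_discount(price, percent) == expected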

XML Testing
Validation of XML data content on a transaction-by-transaction basis. Where desirable, validation
of formal XML structure (metadata structure) may also be included.
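
A sketch of transaction-level content validation using only the standard library; the element names and rules are assumed for illustration, and structural (metadata) validation would normally be added with an XSD and a schema-aware parser:

    # Content-level XML validation for a single transaction. Element names and
    # business rules here are assumed examples.
    import xml.etree.ElementTree as ET

    SAMPLE = '<order id="1001"><total currency="USD">42.50</total></order>'

    def validate_order(xml_text):
        errors = []
        root = ET.fromstring(xml_text)
        if root.tag != "order":
            errors.append("root element must be <order>")
        total = root.find("total")
        if total is None:
            errors.append("missing <total> element")
        else:
            try:
                if float(total.text) <= 0:
                    errors.append("total must be positive")
            except (TypeError, ValueError):
                errors.append("total must be numeric")
        return errors

    print(validate_order(SAMPLE))  # an empty list means the transaction passed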

Java Testing (EJB, J2EE)
Direct exercise of class methods to validate that both object properties and methods properly
reflect and handle data according to business and functional requirements of the layer. Exercise
of transactions at this layer may be performed to measure both functional and performance
characteristics.

Data Integrity Testing
Validation of system data at all data capture points in a system, including front-end, middle- or
content-tier, and back-end database. Data integrity testing includes strategies to examine and
validate data at all critical component boundaries.
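
As an illustration, a sketch that compares a value captured at one boundary (the front end) with what the back-end database actually stored; the table, columns, and values are assumed:

    # Data integrity check at two component boundaries: the value captured by the
    # front end versus the value stored in the database. Schema is assumed.
    import sqlite3

    def check_integrity(captured, conn, order_id):
        row = conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        stored = row[0] if row else None
        return captured == stored, captured, stored

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.execute("INSERT INTO orders VALUES (1001, 42.50)")

    ok, captured, stored = check_integrity(42.50, conn, 1001)
    print("match" if ok else f"mismatch: captured={captured}, stored={stored}")
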
GUI Testing
Validation of GUI characteristics against GUI requirements.

Issue/Defect Tracking
Tracking software issues and defects is at the core of the software quality management process.
Software quality can be assessed at any point in the development process by tracking numbers
of defects and defect criticality. Software readiness-for-deployment can be analyzed by following
defect trends for the duration of the project.

Requirements Management
Requirements both define the shape of software (look-and-feel, functionality, business rules) and
set a baseline for testing. As such, requirements management, or the orderly process of gathering
requirements and keeping requirements documentation updated on a release-by-release basis,
is critical to the deployment of quality software.

Interoperability Testing
Validation that applications in a given platform configuration do not conflict, causing loss of
functionality.

Functional Testing
Validation of business requirements, GUI requirements and data handling in an application.

Security Testing
Validation that the security requirements of a system have been correctly implemented, including
resistance to password cracking and Denial of Service (DoS) attacks, and verification that known
security flaws have been properly patched.

Business Rules Testing
Validation that business rules have been properly implemented in a system, enforcing correct
business practices on the user.
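
A minimal sketch of one such check; the rule itself (an order must not exceed the customer's credit limit) is an assumed example, not taken from the text above:

    # Business rule test: the system must refuse an order that exceeds the
    # customer's credit limit. Rule and names are assumed for illustration.
    class CreditLimitExceeded(Exception):
        pass

    def place_order(order_total, credit_limit):
        if order_total > credit_limit:
            raise CreditLimitExceeded("order exceeds the customer's credit limit")
        return "accepted"

    def test_rule_enforced():
        assert place_order(500, credit_limit=1000) == "accepted"
        try:
            place_order(1500, credit_limit=1000)
            assert False, "rule was not enforced"
        except CreditLimitExceeded:
            pass

    test_rule_enforced()
    print("business rule enforced")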

COM+ Testing
Direct exercise of COM methods to validate that both object properties and methods properly
reflect and handle data according to business and functional requirements of the COM layer.
Exercise of transactions at this layer may be performed to measure both functional and
performance characteristics.

Integration Testing
Testing in which software components, hardware components, or both are combined and tested
to evaluate the interaction between them.

Network Latency Modeling
Analysis of the fundamental amount of time it takes a given message to traverse a given distance
across a specific network. This factor influences all messages that traverse a network, and is key
in modeling network behavior.
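
A back-of-the-envelope version of such a model, assuming propagation at roughly two-thirds the speed of light in fiber plus serialization delay onto the link; all figures are illustrative:

    # Rough one-way latency model: propagation delay over distance plus the time
    # to serialize the message onto the link. Figures are illustrative assumptions.
    def one_way_latency_ms(distance_km, message_bytes, bandwidth_mbps,
                           propagation_km_per_s=200_000):  # ~2/3 c in fiber
        propagation_s = distance_km / propagation_km_per_s
        serialization_s = (message_bytes * 8) / (bandwidth_mbps * 1_000_000)
        return (propagation_s + serialization_s) * 1000

    # Example: a 1500-byte message over 4000 km on a 100 Mbps link.
    print(f"{one_way_latency_ms(4000, 1500, 100):.2f} ms")  # ~20 ms, mostly distance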

Transaction Characterization
Determining the footprint of business transactions, including bandwidth on the network and CPU
and memory utilization on back-end systems. The results are also used in Network Latency Modeling
and Resource Usage Testing.
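
A sketch of measuring the CPU and memory side of one transaction's footprint with the standard library; the transaction body is a stand-in, and network bandwidth would normally be measured separately with a capture tool:

    # Footprint of a single transaction: CPU time and peak memory for one call.
    # The workload is a stand-in for a real business transaction.
    import time
    import tracemalloc

    def transaction():
        return sum(i * i for i in range(100_000))  # stand-in workload

    tracemalloc.start()
    cpu_start = time.process_time()
    transaction()
    cpu_used = time.process_time() - cpu_start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"CPU time: {cpu_used * 1000:.1f} ms, peak memory: {peak / 1024:.1f} KiB")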

Load/Scalability Testing
Increasing load on the target environment until requirements are exceeded or a resource is
saturated. This is usually combined with other test types to optimize performance.
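
A sketch of a simple load ramp, stepping up concurrency and watching throughput until it stops improving; the transaction here is a stand-in for a real business operation:

    # Load ramp sketch: increase the number of concurrent users and report
    # throughput at each step. The transaction is a stand-in for a real request.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def transaction():
        time.sleep(0.05)  # stand-in for a real request

    def run_step(concurrency, requests=100):
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            list(pool.map(lambda _: transaction(), range(requests)))
        elapsed = time.monotonic() - start
        return requests / elapsed

    for concurrency in (1, 5, 10, 20):
        print(f"{concurrency:>3} users: {run_step(concurrency):6.1f} tx/s")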

Performance Testing
Determining if the test environment meets requirements at set loads and mixes of transactions by
testing specific business scenarios.

Stress Testing
Exercising the target system or environment at the point of saturation (depletion of a resource:
CPU, memory, etc.) to determine if the behavior changes and possibly becomes detrimental to
the system, application or data.

Configuration Testing
Encompasses testing various system configurations to assess the requirements and resources
needed.

Volume Testing
Determining the volume of transactions that a complete system can process. Volume Testing is
conducted in conjunction with Component, Configuration and/or Stress Testing.

Resource Usage Testing
Multi-user testing conducted beyond Transaction Characterization to determine the total resource
usage of applications and subsystems or modules.

Concurrency Testing
Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores.
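
A minimal sketch of how lost updates from unsynchronized access to the same record can be exposed; the in-memory record stands in for a shared row or object:

    # Concurrency test sketch: several workers perform a read-modify-write on the
    # same record; the unsynchronized run exposes lost updates, the locked run
    # does not. The dictionary stands in for a shared database record.
    import threading
    import time

    record = {"balance": 0}
    lock = threading.Lock()

    def deposit(amount, use_lock):
        def work():
            current = record["balance"]
            time.sleep(0.001)              # widen the race window
            record["balance"] = current + amount
        if use_lock:
            with lock:
                work()
        else:
            work()

    def run(use_lock, workers=50):
        record["balance"] = 0
        threads = [threading.Thread(target=deposit, args=(1, use_lock))
                   for _ in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return record["balance"]

    print("unsynchronized:", run(False), "of 50")  # usually below 50: lost updates
    print("with lock:     ", run(True), "of 50")   # always 50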

Infrastructure Testing
Verifying and quantifying the flow of data through the environment infrastructure.

Component Testing
The appropriate tests are conducted against each component individually to verify that it can
support its expected load without failure. This testing is typically conducted while the
environment is being assembled to identify any weak links.

Failover Testing
In environments that employ redundancy and load balancing, Failover Testing analyzes the
theoretical failover procedure, tests and measures the overall failover process and its effects on
the end-user.

Reliability Testing
Once the environment or application is working and optimized for performance, a longer-duration
(24- to 48-hour) Reliability Test will determine whether there are any long-term detrimental issues
that may affect performance in production.

SLA Testing
Specialized business transaction testing to measure Service Level Agreements with third party
vendors. The typical agreement guarantees a specified volume of activity over a predetermined
time period with a specified maximum response time.
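
A sketch of checking an agreed transaction volume against a maximum response time; the thresholds and the transaction are assumptions, not real SLA figures:

    # SLA check sketch: drive the agreed volume of transactions and verify that
    # every response stays within the agreed maximum. Figures are assumed.
    import time

    MAX_RESPONSE_S = 0.5    # agreed maximum response time (assumed)
    REQUIRED_VOLUME = 100   # agreed transaction volume for the window (assumed)

    def business_transaction():
        time.sleep(0.01)    # stand-in for the real third-party call

    timings = []
    for _ in range(REQUIRED_VOLUME):
        start = time.monotonic()
        business_transaction()
        timings.append(time.monotonic() - start)

    worst = max(timings)
    print(f"completed {len(timings)} transactions, worst response {worst:.3f}s")
    print("SLA met" if worst <= MAX_RESPONSE_S else "SLA violated")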

Web Site Monitoring
Monitoring business transaction response times after production deployment to ensure end-user
satisfaction.
