System Testing
Version 1.2 / 11-Mar-2005 (previous revision: 13-Nov-2003)
Prepared by: Shanthala A.R
Table of Contents
1 Introduction
  1.1 Purpose of the document
  1.2 Intended Audience
  1.3 References
2 System Testing
  2.1 System Testing Approach
    2.1.1 Requirement Analysis & System Study
    2.1.2 Test Plan
    2.1.3 Roles and Responsibility
    2.1.4 Test Case Preparation
    2.1.5 QA Environment Setup
    2.1.6 System Test Execution
    2.1.7 Regression Testing
    2.1.8 Defect Tracking
  2.2 Functionality Testing
    2.2.1 Testing Approach
  2.3 Internationalization/Localization Testing
    2.3.1 Testing Approach
  2.4 Performance Testing
    2.4.1 Load/Volume Testing
    2.4.2 Stress Testing
    2.4.3 Volume Testing
    2.4.4 Endurance Testing
  2.5 Configuration Testing
    2.5.1 Testing Approach
  2.6 Compatibility Testing
    2.6.1 Testing Approach
  2.7 Installation Testing
    2.7.1 Testing Approach
  2.8 User Interface Testing
    2.8.1 Testing Approach
  2.9 Usability Testing
    2.9.1 Testing Approach
  2.10 Documentation Testing
    2.10.1 Testing Approach
  2.11 Security Testing
    2.11.1 Testing Approach
  2.12 Reliability Testing
  2.13 Exploratory Testing
    2.13.1 Testing Approach
  2.14 Failover and Recovery Testing
    2.14.1 Testing Approach
  2.15 Scalability Testing
    2.15.1 Testing Approach
  2.16 Metrics
  2.17 Test Repository and Test Pack Categorization
  2.18 Configuration Management
  2.19 Risks that Impact Testing
3 Appendix
  3.1 MindTree's Standard Templates
1 Introduction
1.1 Purpose of the document
This document describes the MindTree System Testing methodology that will be implemented for the
projects.
1.2 Intended Audience
3. Customer
1.3 References
1.
2.
2 System Testing
The purpose of System Testing is to deliver code that has been comprehensively tested and to certify
that the functionality meets the business rules, regulations, and requirements of the stakeholders of
the system.
Once the Requirement/Functional Specification is ready, System Testing activities can be planned:
requirement analysis and system study, requirement matrix, test planning, test case preparation, and
testing.
This section describes the System Testing activities and the types of testing that can be conducted
during System Testing.
The following types of testing can be performed under system testing, depending on the requirements:
• Functionality testing
• Internationalization testing
• Performance testing
• Compatibility testing
• Installation testing
2.1 System Testing Approach
2.1.1 Requirement Analysis & System Study
• System study involves reviewing the business rules and regulation documents, domain-specific
trainings, and face-to-face meetings with end users of the system.
• Testing minds will review the requirement specifications for testability and adequacy.
• The outcome of this phase is the Requirement Traceability Matrix developed by the testing team.
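The Requirement Traceability Matrix above can be sketched as a simple mapping from requirements to the test cases that cover them. This is an illustrative sketch only; the requirement and test case IDs are invented, not taken from any specific project:

```python
# Minimal sketch of a Requirement Traceability Matrix (illustrative IDs).
# Each requirement maps to the test cases that cover it; a requirement with
# no covering test case is a coverage gap to raise in the review.

def build_rtm(requirements, test_cases):
    """test_cases: list of (test_id, covered_requirement_ids)."""
    rtm = {req: [] for req in requirements}
    for test_id, covered in test_cases:
        for req in covered:
            if req in rtm:
                rtm[req].append(test_id)
    return rtm

def coverage_gaps(rtm):
    return [req for req, tests in rtm.items() if not tests]

requirements = ["REQ-001", "REQ-002", "REQ-003"]
test_cases = [("TC-01", ["REQ-001"]), ("TC-02", ["REQ-001", "REQ-002"])]

rtm = build_rtm(requirements, test_cases)
print(rtm)                  # REQ-003 has no covering test case
print(coverage_gaps(rtm))   # ['REQ-003']
```

In practice the matrix lives in a spreadsheet or test management tool; the point is that gaps become mechanically detectable once the mapping exists.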
2.1.2 Test Plan
• The Test Lead prepares the test plan, which is signed off by the customer.
• The activities planned for System Testing will be estimated and scheduled in the test plan:
   The test strategy, and the roles and responsibilities of the minds involved in the testing.
   Project and software components that should be tested.
   The types of testing that will be performed.
   The expected use of any automated testing tools and usage requirements for regression testing.
   Deliverable elements of the test phases.
   Configuration management and test management tools and procedures.
   Defect tracking process.
   Sign-off criteria for testing artifacts.
   Risks and assumptions in the project, if any.
   Entry and exit criteria for each testing phase.
   Metrics that need to be collected in the testing phase.
   Timelines of the project milestones.
• MindTree has a standard test plan template available. However, if the customer asks to use his/her
specific templates, the same will be used.
2.1.3 Roles and Responsibility
The roles involved include:
• Customer
• QA Manager
• Configuration coordinator

Customer
• Sponsoring
• Provide infrastructure

Test Lead
Primary responsibilities:
• Responsible for test planning and coordination with the onsite Test Lead
• Provide the project status to the QA Manager and Client
• Involve in the weekly meetings with the Client's team
• Technical and functional oversight of the testers
• Responsible for reviewing offshore artifacts before sending them to the Client
• Team coordination
Deliverables and secondary responsibilities:
• Test plans and test strategy
• Daily/Weekly status report
• QA Metrics
• Team coordination
• Test estimation
• Automation frameworks (for automation projects)
2.1.4 Test Case Preparation
• Usually, System Test Case preparation starts once the requirements/use cases and test plans are
baselined.
• Test cases will have references to requirements/use cases, recorded in the Traceability Matrix.
• The test cases will be prepared with reference to the SRS, functional specifications, use cases, and
storyboards, whichever is available. In specific cases, depending on the requirements, we also refer to
domain-specific material, design documents, business regulatory documents, etc.
• While preparing the test cases, the functions will be identified from the requirement documents. The
functions will be further broken down into unit functions (the smallest functions that cannot be broken
down further into test conditions). Test cases will be prepared for all the functions/unit functions using
the following testing techniques:
Equivalence partitioning: Based on the property (integer/Boolean) of an input, its domain can be
partitioned into two or three classes. Representative values from these classes become the test cases.
Boundary value analysis: The values for the test cases are selected from the boundaries (i.e. max,
max+1, max-1 and min, min+1, min-1) and are given as input to the system.
Cause-effect graphing: Causes and effects are listed for a module and an identifier is assigned to
each. A cause-effect graph is developed and converted into a decision table. The decision table rules
are converted into test cases.
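The first two techniques lend themselves to small helper routines. The sketch below derives test values for an illustrative "age" field accepting 18 to 60; the field and its range are assumptions for the example, not from any specific requirement:

```python
# Sketch of deriving test inputs with equivalence partitioning and
# boundary value analysis for a ranged integer input.

def equivalence_classes(min_val, max_val):
    """Three classes for a ranged integer: below, inside, above the range."""
    return {
        "invalid_low": min_val - 1,
        "valid": (min_val + max_val) // 2,
        "invalid_high": max_val + 1,
    }

def boundary_values(min_val, max_val):
    """min-1, min, min+1 and max-1, max, max+1, as in the technique above."""
    return sorted({min_val - 1, min_val, min_val + 1,
                   max_val - 1, max_val, max_val + 1})

# Illustrative requirement: an "age" field accepting 18..60.
print(equivalence_classes(18, 60))   # {'invalid_low': 17, 'valid': 39, 'invalid_high': 61}
print(boundary_values(18, 60))       # [17, 18, 19, 59, 60, 61]
```

Each returned value then becomes the input of one test case, with the expected result (accept or reject) taken from the requirement.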
• If the requirements are captured in use cases, the test cases will follow the use case/test case
methodology. The test scenarios will be identified, and each scenario refers to a use case name/number.
The test scenarios will be broken into test cases and test conditions. All the test cases/conditions follow
the actor steps with valid and invalid data inputs, alternative flows, and negative flows.
• Test cases are internally reviewed by peers and then by the customer. In case of a requirement change,
appropriate changes will be made to the test case document and the same review process will take place.
• Test case execution instructions/preconditions will be covered in the test case document to help the
tester with smooth execution.
2.1.6 System Test Execution
• The QA team starts with Build Acceptance Testing (smoke or sanity testing of the build) once the build
is installed in the QA environment. The smoke test checklist contains tests chosen for risk, critical
functionality, and business priority. If the build passes the smoke test phase, the system testing phase
starts. A smoke test report will be sent; if the build fails at the smoke test level, the build will be
rejected and the development team will be notified and asked for a new build.
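The accept/reject decision at the smoke gate can be sketched as follows; the checklist item names are illustrative, not from a real checklist:

```python
# Sketch of the build-acceptance (smoke) gate: the build proceeds to system
# testing only when every check on the risk-based checklist passes;
# otherwise it is rejected and a new build is requested.

def smoke_gate(checklist_results):
    """checklist_results: dict of check name -> True (pass) / False (fail)."""
    failures = [name for name, passed in checklist_results.items() if not passed]
    if failures:
        return ("REJECTED", failures)   # notify development, request a new build
    return ("ACCEPTED", [])             # proceed to the system testing phase

verdict, failed = smoke_gate({"login": True, "create_order": True, "checkout": False})
print(verdict, failed)  # REJECTED ['checkout']
```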
• The defects found during system test execution will be entered into the defect tracking system. During
critical testing phases, such as nearing production deployment, the identified defects shall be discussed
and communicated to the Client.
• The exit criteria for system testing are met when any of the following conditions hold:
• At the end of each test cycle, the test lead will send a Test Summary Report and test metrics to the
customer and QA manager.
2.1.7 Regression Testing
Testing Approach:
• An additional re-test plan created by the test lead provides a clear road map of the regression testing
activities.
• The system test cases, impact analysis, release notes, and the resolved defects found during the earlier
test rounds will be executed, and the defect tracking system will be updated as appropriate. New test
cases will be developed for defects not covered by the existing test cases.
• The QA team should maintain a controlled test environment that prevents the unknown from creeping
into testing.
• The regression testing phase should exit after meeting the following criteria:
2.1.8 Defect Tracking
• Defects will be tracked using MindTree's standard defect tracking tool (Bugzilla); the tool can be
changed according to the customer's needs.
• As soon as any bug is identified, it will be entered in Bugzilla and tracked to closure.
• The test engineer selects the appropriate version, component, platform, and operating system, and
assigns a severity to each bug.
• While testing a production candidate release, bug details will be provided to the Client immediately as
they are detected, through email or a telephone conversation.
• The number of bugs identified, closed, and reopened for the day will be provided in the Daily Status
Report.
• The test lead should report defect status every week to customers, onsite/offshore tech leads, and
project managers through the weekly status report.
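The daily counts that feed the Daily Status Report can be derived from the defect event log; the event records below are invented for illustration:

```python
# Sketch of the daily defect counts for the Daily Status Report:
# bugs identified, closed, and reopened on a given day.

from datetime import date

def daily_counts(events, day):
    """events: list of (day, action) with action in {'identified','closed','reopened'}."""
    counts = {"identified": 0, "closed": 0, "reopened": 0}
    for event_day, action in events:
        if event_day == day and action in counts:
            counts[action] += 1
    return counts

events = [
    (date(2005, 3, 11), "identified"),
    (date(2005, 3, 11), "identified"),
    (date(2005, 3, 11), "closed"),
    (date(2005, 3, 10), "reopened"),   # previous day, excluded from today's report
]
print(daily_counts(events, date(2005, 3, 11)))
# {'identified': 2, 'closed': 1, 'reopened': 0}
```

A real deployment would pull these events from Bugzilla's database rather than a hand-built list.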
[Figure: Defect tracking workflow. A bug is registered in the Bugzilla tool and assigned a Bug ID; after a
fix it is re-tested and the test result updated. A valid fix is closed and covered by regression (automated
test result file updated); an invalid fix re-opens the bug. Consolidated test results are sent to the Client
and stored in the database.]
2.2 Functionality Testing
The goal of functionality testing is to verify the feature/functional behavior of the system: typically
calculations, data accuracy, and end-to-end business flow.
2.2.1 Testing Approach
• We will follow the black-box testing approach and perform the test execution using the prepared test
data.
• At the end of the functionality testing phase, the testing mind has to send a Test Summary Report and
test metrics to the customer and QA manager.
2.3 Internationalization/Localization Testing
Internationalization testing is performed to verify the functional behavior and message display in the
identified languages.
Test cases relate to the set of conventions affected or determined by human language and customs, as
defined within a particular geographical location; i.e., locales incorporate cultural information such as
time, date, font, and currency formats.
2.3.1 Testing Approach
• Typically the tests cover Unicode and multi-byte related issues, message display in local languages, and
the impact of the message libraries on features; in the case of localization testing, we also test the
application on operating systems for the various locales.
• Based on the project's needs, MindTree sets up the infrastructure for the test environment. A combination
matrix of hardware and software platforms will be created to test the various possible environments.
• The specified number of desktops will be installed with the required operating systems and service packs
for different languages.
• The test plan will be prepared according to MindTree's standard test plan format.
• To verify locale-language messages other than English, a language converter or message dictionaries
should be developed.
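One check a message dictionary makes possible is completeness: every message shown in English must have a non-empty translation in each target locale. A minimal sketch, with invented keys and locales:

```python
# Sketch of verifying locale messages against a message dictionary:
# report any base-language key that has no non-empty translation
# in a target locale.

def missing_translations(base, localized):
    """base: English message dict; localized: locale name -> message dict."""
    gaps = {}
    for locale_name, messages in localized.items():
        missing = [key for key in base
                   if not messages.get(key, "").strip()]
        if missing:
            gaps[locale_name] = missing
    return gaps

base = {"greeting": "Hello", "farewell": "Goodbye"}
localized = {
    "de": {"greeting": "Hallo", "farewell": "Auf Wiedersehen"},
    "fr": {"greeting": "Bonjour"},   # 'farewell' untranslated
}
print(missing_translations(base, localized))  # {'fr': ['farewell']}
```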
• MindTree follows rapid testing with exploratory testing methods as well as specification-based testing.
• Release/Build Acceptance: the QA team will do a round of sanity testing to confirm the build is testable.
If showstoppers or critical bugs are found during sanity testing, the build will be rejected and the Client's
engineering team will need to give a new build to QA.
• The application will be installed on the identified Windows operating systems with different service packs
and locale languages.
• We will follow the black-box testing approach to test the functionality, internationalization, and
localization of the application with different locale languages and Windows service packs.
• The test cases are executed as per the iteration notes.
• The defects found during the test execution will be entered into the defect tracking system.
• The test results are entered in the test matrix and will be sent to the customer and QA manager on a
daily basis.
2.4 Performance Testing
Performance testing is designed to support a successful software application launch with reliability and
speed. Its primary focus is to determine how well applications perform under real-world loads and stress
on a near-deployment infrastructure, and to make recommendations to eliminate bottlenecks for the
identified loads.
The deliverables of this testing include a detailed assessment report of test results, recommendations on
the readiness to "go live", and areas that could be improved. Companies with a functionally tested
application/product will want to employ this service when they are confident of the accurate
transactional functionality of their software applications, but uncertain about their performance and
availability under projected user volumes.
• Load/Volume Testing
• Stress Testing
• Volume Testing
• Endurance Testing
2.4.1 Load/Volume Testing
The goal of load testing is to determine and ensure that the system functions properly beyond the
expected maximum workload. Additionally, load testing evaluates performance characteristics such as
response times, transaction rates, and other time-sensitive measures.
• The performance testing environment holds the same configuration standard as the QA environment.
It is dedicated to automated load testing and includes software used to monitor the behavior of the web
server and database under heavy load.
• Load testing will consist of a select group of (N) scripts that accurately represent a cross-section of
functionality. Scripts will be executed to generate up to 1500-user peak load levels.
• Tests will be executed for up to 1500 concurrent users at load levels of 50, 100, 200, 500, 1000, and
1500. The test execution is complete when the 1500-user load has been ramped up or any failure
condition necessitates stopping the test. The team will monitor the test execution and record the timings
and errors for report preparation.
Test Objective:
• Determine the serviceability of the system for a volume of 1500 concurrent users.
• Measure response times for users.
Test Steps:
• Virtual user estimation: Arrive at the maximum number of concurrent users hitting the system for which
the system response time is within the response-time threshold and the system is stable. This number is
the virtual-user count and should be higher than the average load by a factor of x.
• Load simulation schedule: Schedule for concurrent user testing with a mix of user scenarios and the
acceptable response times.
Statistics:
• The graph with y-axis representing response times and x-axis concurrent users will depict the capability of
the system to service concurrent users.
• The response times for slow users will provide worst-case response times.
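The ramp-up and measurement described above can be sketched in miniature with threads standing in for virtual users. The load levels and the sleeping "transaction" are placeholders for real scripted requests, and the user counts are scaled far below the 50..1500 levels in the plan:

```python
# Sketch of ramping up concurrent virtual users and recording the
# worst-case response time at each load level.

import threading
import time

def transaction():
    time.sleep(0.01)  # stand-in for one scripted user request

def run_load_level(users):
    timings = []
    lock = threading.Lock()

    def virtual_user():
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        with lock:
            timings.append(elapsed)

    threads = [threading.Thread(target=virtual_user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(timings)  # worst-case response time at this level

for level in [5, 10, 20]:   # scaled-down stand-ins for the 50..1500 levels
    worst = run_load_level(level)
    print(f"{level} users: worst-case {worst:.3f}s")
```

Plotting the worst-case time per level gives exactly the response-time-versus-concurrent-users graph described in the statistics above.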
2.4.2 Stress Testing
Stress testing is performed to check whether the system is tolerant of extreme conditions.
Test Objective:
Exercise the target-of-test functions under the following stress conditions to observe and log target
behavior, identifying and documenting the conditions under which the system fails to continue
functioning properly:
   Little or no memory available on the server (RAM and persistent storage space)
   Multiple users performing the same transactions against the same data or accounts
• To test limited resources, tests should be run on a single machine, and RAM and persistent storage
space on the server should be reduced or limited.
• For the remaining stress tests, multiple clients should be used, either running the same tests or
complementary tests to produce the worst-case transaction volume or mix.
2.4.3 Volume Testing
Volume testing is done to check whether the system is able to handle large quantities of data.
Test Objective:
Exercise the target-of-test functions under the following high-volume scenarios to observe and log target
behavior:
   The maximum (actual or physically capable) number of clients connected, or simulated, all performing
   the same worst-case (performance) business function for an extended period.
   The maximum database size has been reached (actual or scaled) and multiple queries or report
   transactions are executed simultaneously.
• Multiple clients should be used, either running the same tests or complementary tests to produce the
worst-case transaction volume or mix (see Stress Testing) for an extended period.
• The maximum database size is created (actual, scaled, or filled with representative data), and multiple
clients are used to run queries and report transactions simultaneously for extended periods.
2.4.4 Endurance Testing
Determine the robustness of the system: check for breakages in the web server, application server, and
database server under CHO (Continuous Hours of Operation) conditions.
Test Steps:
• Arrive at a baseline configuration of the web server and application server resources, i.e. CPU, RAM,
and hard disk, for the endurance and reliability test.
• The test will be stopped when one of the components breaks. A root-cause analysis is to be carried out
based on the data collection described under the server-side monitoring section.
• Collect data for analysis to tune the performance of the web server, application server, and database
server.
• If there is alarm support in the tool through an agent, check for alerts when the activity level exceeds
preset limits.
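The alarm check in the last step can be sketched over recorded resource samples. The samples and limits here are canned values, not real server readings:

```python
# Sketch of the alarm check during an endurance run: resource samples are
# collected periodically and an alert is raised whenever activity exceeds
# a preset limit.

def check_alerts(samples, limits):
    """samples: list of dicts like {'cpu': 72, 'ram': 61} (percent);
    limits: dict of metric -> preset limit."""
    alerts = []
    for i, sample in enumerate(samples):
        for metric, value in sample.items():
            if value > limits.get(metric, float("inf")):
                alerts.append((i, metric, value))
    return alerts

limits = {"cpu": 85, "ram": 90}
samples = [
    {"cpu": 40, "ram": 55},
    {"cpu": 92, "ram": 60},   # CPU exceeds the preset limit
    {"cpu": 70, "ram": 95},   # RAM exceeds the preset limit
]
print(check_alerts(samples, limits))  # [(1, 'cpu', 92), (2, 'ram', 95)]
```

The same sample log doubles as the data set for the root-cause analysis when a component breaks.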
Result:
The result of this test will be a proof of confidence for Continuous Hours of Operation. The data
collected in this phase would give pointers to improve the reliability of the system and fix any
configuration, component parameters for reliable performance.
• Transaction throughput
• RAM usage
• CPU usage
• Conclude with a performance statement based on the statistics collected and analyzed
2.5 Configuration Testing
Configuration testing is normally conducted by both the user and a software maintenance test team.
2.6 Compatibility Testing
Compatibility testing ensures that the product works properly on the different platforms and operating
systems on the market, and also with the applications and devices in its environment.
The aim of compatibility tests is to locate application problems by running the application in real
environments, thus ensuring you do not launch a product that could damage your corporate image or
increase maintenance costs.
Operating Systems:
   Windows 95/98, Windows NT, Windows 2000, Windows Me, Windows XP Professional and
   Home, Windows 2003 Server, Mac OS 7.5 to X, Linux or Unix in their localized versions.
Browsers:
   Internet Explorer, Netscape, Mozilla, AOL, Safari, etc.
Network Operating Systems:
   Novell NetWare 4.x, 5.x and 6.x, Windows NT and 2003 Servers, Unix and Linux, Mac OS X,
   AppleTalk, etc.
Connectivity:
   USB, FireWire (IEEE 1394), Parallel (IEEE 1284), Bluetooth, Wireless (Wi-Fi), VPNs, etc.
Languages:
   Western (EFIGSP), Asian, and North and Central European languages.
2.6.1 Testing Approach
• The test lead finalizes a representative sample of combinations in terms of budget and time.
• Testing minds should identify the customer's requirements and historical compatibility issues; they
should also determine the possible test scenarios and coverage measurements. Testing minds can reuse
the test cases developed for functionality testing.
• Based on the project need, Mind tree sets up the infrastructure for the test environment. A combination
matrix of hardware and software platforms will be created to test with various operating systems/service
packs/browsers.
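The combination matrix above is a cross product of the platform lists; a sketch with a few platform names drawn from the lists in this section:

```python
# Sketch of building the combination matrix of software platforms for
# compatibility testing: every OS paired with every browser.

from itertools import product

operating_systems = ["Windows XP", "Windows 2003 Server", "Linux"]
browsers = ["Internet Explorer", "Mozilla", "Safari"]

matrix = list(product(operating_systems, browsers))
print(len(matrix))   # 9 combinations
for os_name, browser in matrix:
    print(f"{os_name} + {browser}")
```

In practice the full cross product grows quickly, which is why the test lead then trims it to a representative sample in terms of budget and time.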
• MindTree follows rapid testing with exploratory testing methods as well as specification-based testing.
• The testing minds run the test cases against the required hardware and software combinations created.
• The defects found during the test execution will be entered into the defect tracking system. The
defects and issues found, together with an analysis of possible causes and fixing suggestions, should be
communicated to the Client.
• At the end of each test cycle test lead should send a Test Summary Report and Test metrics to the
customer and QA manager.
2.7 Installation Testing
Installation testing verifies that the application installs, upgrades, and uninstalls correctly, and that it
recovers from expected or unexpected events without loss of data or functionality. Events can include a
shortage of disk space, unexpected loss of communication, or power-out conditions.
Installation under normal conditions (such as a new installation, an upgrade, a removal, and a complete
or custom installation):
1) New installation
2) Update:
   o A machine previously installed with the same version
   o A machine previously installed with an older version
3) Complete or custom installation
4) Removal
Installation under abnormal conditions (such as insufficient disk space, lack of privilege to create
directories, and so on).
Software operation after the installation.
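One of the abnormal conditions, insufficient disk space, can be checked mechanically before an install attempt. The 100 MB threshold below is an illustrative figure, not a real product requirement:

```python
# Sketch of checking one abnormal installation condition up front:
# insufficient free disk space at the install target.

import shutil
import tempfile

def has_enough_space(path, required_bytes):
    """Return True when the filesystem holding 'path' has enough free space."""
    usage = shutil.disk_usage(path)
    return usage.free >= required_bytes

# Example: require 100 MB free at the temp directory before installing.
target = tempfile.gettempdir()
print(has_enough_space(target, 100 * 1024 * 1024))
```

An installer test harness would invert the check to set up the failing scenario, e.g. by filling the target volume until the threshold is no longer met.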
2.7.1 Testing Approach
• When the build is ready to test, upload the build to a QA environment and start the test execution using
the installation test cases.
• If showstoppers or critical bugs are found during testing, the build will be rejected and the development
team needs to give a new build to QA.
• The defects found during the test execution will be entered into the defect tracking system, and the
status report will be sent to the client.
2.8 User Interface Testing
User interface testing will be carried out to verify an actor's interaction with the system. The goal of
UI testing is to ensure that the user interface provides the actor with appropriate access and
navigation through the use cases. In addition, UI testing will ensure that the elements of the UI function
as expected and conform to corporate or industry standards.
2.8.1 Testing Approach
• The test lead should develop a test plan for the whole activity.
• The testing mind should develop test cases which ensure that:
   Navigation through the application properly reflects business functions and requirements, including
   page to page, field to field, and frames (tab keys, mouse movements, accelerator keys).
   GUI objects and characteristics, such as menus, size, position, state, and focus, conform to standards.
   All images are present, hotspots are created and linked, and there are no orphaned pages.
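The "no orphaned pages" criterion can be checked as a reachability problem over the site's link graph. The page names below are illustrative:

```python
# Sketch of the "no orphaned pages" check: starting from the home page,
# every page in the site map should be reachable through links.

def orphaned_pages(links, all_pages, start="home"):
    """links: page -> list of linked pages; returns unreachable pages."""
    reachable, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page in reachable:
            continue
        reachable.add(page)
        stack.extend(links.get(page, []))
    return sorted(set(all_pages) - reachable)

links = {"home": ["products", "contact"], "products": ["detail"]}
all_pages = ["home", "products", "contact", "detail", "old_promo"]
print(orphaned_pages(links, all_pages))  # ['old_promo']
```

For a real web application the link graph would come from a crawler rather than a hand-built dictionary.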
• When the build is ready to test, upload the build in a QA environment and start the test execution.
• At the end of each test cycle test lead should send a Test Summary Report and Test metrics to the
customer and QA manager.
2.9 Usability Testing
Usability testing verifies a user's interaction with the software. The goal of usability testing is to ensure
that the user interface provides the user with appropriate access and navigation through the
functions.
2.9.1 Testing Approach
• The test lead should develop a test plan for the whole activity.
• The testing mind should develop test cases which ensure that:
   The requirements for the user interfaces are fulfilled according to the descriptions in the use case
   specifications and the supplementary specification.
   Friendliness, ease of use, usefulness, ease of learning, ease of relearning, consistency, error
   prevention and correction, visual clarity, navigation, and the language of the application are verified
   from the user's point of view.
   Tests are created or modified for each user interface to verify proper navigation and object states for
   each application window and its objects.
• When the build is ready to test, upload the build to a QA environment and start the test execution.
• Testing will be carried out manually as per the prepared checklist for usability testing.
• We can make use of usability testing tools like ErgoLight WebTester, WebSAT, Bobby, etc. to
perform the usability testing.
• At the end of each test cycle, the test lead should send a Test Summary Report and test metrics to the
customer and QA manager.
2.10 Documentation Testing
It is important to ensure that the right documentation has been prepared, is complete and current,
reflects the criticality of the system, and contains all necessary elements.
2.10.1 Testing Approach
• Documentation testing is performed in two stages.
   The testing mind has to test the documents received from the developer before starting the actual test
   execution. Here the testing mind is involved in testing documents like the release notes, the
   requirement specification document, etc.
   Once the testing cycle is over, the QA documents (i.e. test plan, test case document, test data, etc.)
   should be tested by a testing mind before they are sent to the customer.
• Before starting the documentation testing, the testing mind should prepare a note on
• During the test execution cycle, the testing mind should test whether the documents:
   Provide managers with technical information to determine that requirements have been met.
   Provide the customers with technical information on the installation procedure, user guide, etc.
• Once a document is completely tested, it is sent back to its owner, and the owner should update it
according to the comments given by the reviewer. This cycle ends only when the documentation is right.
2.11 Security Testing
Security and access control testing focuses on two key areas of security:
− Application-level security
− System-level security
2.11.1 Testing Approach
• Test lead and testing minds should start with a System Study.
• Test lead should develop a Test plan for the whole activity.
• During test case development, the testing mind should cover scenarios on:
   Application-level security, including access to the data or business functions, i.e. an actor can access
   only those functions or data for which their user type has permissions. Application-level security
   test cases should ensure that actors are restricted to specific functions or use cases, or are
   limited in the data that is available to them.
   System-level security, including logging into or remotely accessing the system, i.e. only those actors
   with access to the system and applications are permitted to access them. System-level security test
   cases should ensure that only those users granted access to the system are capable of accessing the
   applications, and only through the appropriate gateways.
Identify and list each user type and the functions or data for which each type has permissions.
Create tests for each user type and verify each permission by creating transactions specific to that user type.
Modify the user type and rerun the test cases for the same users. In each case, verify that the additional functions or data are correctly available or denied.
Access to the system must be reviewed or discussed with the appropriate network or systems administrator. This testing may not be required, as it may be a function of network or systems administration.
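The permission-matrix steps above can be driven by a simple data-driven test. The user types, function names, and the `can_access` check below are hypothetical stand-ins for the application's real access control; this is a sketch of the test pattern, not a definitive implementation:

```python
# Hypothetical permission matrix: user type -> functions it may use.
PERMISSIONS = {
    "admin":   {"view_report", "edit_report", "delete_report"},
    "analyst": {"view_report", "edit_report"},
    "guest":   {"view_report"},
}

ALL_FUNCTIONS = {"view_report", "edit_report", "delete_report"}

def can_access(user_type, function):
    # Stand-in for the application's real access-control check.
    return function in PERMISSIONS.get(user_type, set())

def run_access_tests():
    """Verify every (user type, function) pair is correctly allowed or denied."""
    failures = []
    for user_type, allowed in PERMISSIONS.items():
        for function in ALL_FUNCTIONS:
            expected = function in allowed
            actual = can_access(user_type, function)
            if actual != expected:
                failures.append((user_type, function))
    return failures

print(run_access_tests())  # an empty list means every permission check passed
```

Because the matrix is data, adding a new user type or function only extends the table; the verification loop stays unchanged.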
• Security testing tools such as Cisco Secure Scanner and network monitors can be used to perform the security testing.
• The defects found during test execution will be entered into the defect tracking system, and a status report will be sent to the client.
Exploratory testing is an interactive process of concurrent product exploration, test design and test
execution.
The outcome of the exploratory testing is a set of notes about the product, failures found, and a concise
record of how the product was tested.
Exploratory testers should continually learn about the software they are testing, the market for the
product and the best ways to test the software.
2.13.1 Testing Approach
• Test lead and testing minds should start with a System Study.
• Test lead should develop a Test plan for the whole activity.
• The test cases already created for Function and Business Cycle testing can be used as a basis for
exploratory testing. Most of the functionality and stability test cases should be chosen to perform this
testing.
• When the build is ready to test, upload the build in a QA environment.
• The testing mind should make sure that the following conditions are covered during the execution
cycle:
Each primary function tested is observed to operate in a manner apparently consistent with its
purpose, regardless of the correctness of its output.
Any incorrect behavior observed in the product does not seriously impair it for normal use.
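The two exit conditions above can be checked mechanically by a small smoke harness that records a concise note per function, in the spirit of the exploratory notes this section describes. The primary functions below are hypothetical stand-ins for the product under test:

```python
# Hypothetical primary functions of the product under test.
def search(term):
    return [f"result for {term}"]

def checkout(cart):
    return {"status": "ok", "items": len(cart)}

PRIMARY_FUNCTIONS = [
    # (callable, sample input, check that the output is *consistent with
    #  the function's purpose*, not necessarily fully correct)
    (search, ("books",), lambda out: isinstance(out, list)),
    (checkout, (["a", "b"],), lambda out: out.get("status") in {"ok", "error"}),
]

def smoke():
    notes = []  # concise record of how the product was tested
    for fn, args, consistent in PRIMARY_FUNCTIONS:
        try:
            out = fn(*args)
            notes.append((fn.__name__, "consistent" if consistent(out) else "INCONSISTENT"))
        except Exception as exc:  # behavior that would impair normal use
            notes.append((fn.__name__, f"FAILED: {exc}"))
    return notes

print(smoke())
```

The consistency predicates deliberately check only the shape of each output, matching the condition "apparently consistent with its purpose, regardless of the correctness of its output".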
• The defects found during test execution will be entered into the defect tracking system. During critical testing phases, such as when nearing production deployment, the defects identified shall be discussed and communicated to the client.
Failover and recovery testing ensures that the target-of-test can successfully fail over and recover from a
variety of hardware, software, or network malfunctions without undue loss of data or data integrity.
For those systems that must be kept running, failover testing ensures that when a failover condition
occurs, the alternate or backup systems properly "take over" for the failed system without any loss of data
or transactions.
Recovery testing is an antagonistic test process in which the application or system is exposed to extreme
conditions, or simulated conditions, to cause a failure, such as device Input/Output (I/O) failures, or
invalid database pointers and keys. Recovery processes are invoked, and the application or system is
monitored and inspected to verify proper application, or system, and data recovery has been achieved.
Because loss of data is critical and must be avoided, failover and recovery testing is performed.
2.14.1 Testing Approach
• Test lead and testing minds should start with a System Study.
• Test lead should develop a Test plan for the whole activity.
• During test case development, the testing mind should cover scenarios for:
Communication interruption or power loss to DASD (Direct Access Storage Devices) and DASD
controllers
Incomplete cycles (data filter processes interrupted, data synchronization processes interrupted)
• The test cases already created for Function and Business Cycle testing can be used as a basis for
creating a series of transactions to support failover and recovery testing.
• The testing minds should make sure that the following conditions are tested completely during the
test execution cycle.
Primarily the following conditions have to be tested to say that recovery was successful. The
following types of conditions are included in the testing to observe and log behavior after
recovery:
− Power interruption to the server: simulate or initiate power down procedures for the server.
− Interruption via network servers: simulate or initiate communication loss with the network
(physically disconnects communication wires or power down network servers or routers).
Once the above conditions or simulated conditions are achieved, additional transactions should be
executed; upon reaching this second test-point state, recovery procedures should be invoked.
Testing for incomplete cycles utilizes the same technique as described above except that the
database processes themselves should be aborted or prematurely terminated.
Testing for the following conditions requires that a known database state be achieved. Several
database fields, pointers, and keys should be corrupted manually and directly within the database
(via database tools). Additional transactions should be executed using the tests from Application
Function and Business Cycle Testing and full cycles executed.
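The core recovery check ("no committed transaction is lost") can be sketched with a hypothetical journaled store standing in for the real application and database; the crash and recovery steps mirror the simulated power-interruption procedure described above:

```python
# Minimal sketch of a recovery check. The journaled store is a hypothetical
# stand-in: committed transactions are appended to a log before being applied,
# so that after a simulated crash the state can be rebuilt and compared.

class JournaledStore:
    def __init__(self):
        self.log = []   # write-ahead journal of committed transactions
        self.data = {}  # in-memory state, lost on "power interruption"

    def commit(self, key, value):
        self.log.append((key, value))  # journal first...
        self.data[key] = value         # ...then apply

    def crash(self):
        self.data = {}                 # simulate power loss to the server

    def recover(self):
        for key, value in self.log:    # replay the journal
            self.data[key] = value

store = JournaledStore()
store.commit("order-1", "paid")
store.commit("order-2", "shipped")
snapshot = dict(store.data)  # known state before the failover condition

store.crash()    # simulated power interruption
store.recover()  # invoke recovery procedures

# Recovery is successful only if no committed transaction was lost.
print(store.data == snapshot)  # True
```

The same pattern extends to the incomplete-cycle cases: abort the process between `commit` calls and verify that replaying the journal still yields a consistent state.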
• The defects found during test execution will be entered into the defect tracking system, and a
status report will be sent to the client.
Scalability testing evaluates the gain in performance obtained by expanding the configuration, for example by increasing the number of nodes or adding resources to existing nodes.
2.15.1 Testing Approach
• As part of the system study, testing minds will:
Baseline the scalability with one node and vary its resources to determine the configuration that
yields good performance.
Add more nodes of similar configuration and determine the number of nodes that yields best
performance.
• Testing minds, with inputs from the client’s business team, will finalize the frequently used business
transactions, and generate the scripts.
• Testing minds will use test data provided by the client where available, or will identify and set up the
required test data for scalability testing with the client's help.
The configuration under test for a single node could have 3 degrees of variance, namely:
i. Number of CPUs
ii. Memory
iii. Disk storage scheme for the database.
The system response time is to be recorded for each permutation of the changing variable values, and
its relative effect measured against the baseline configuration.
Simultaneously system-monitoring tools need to be used to record resource utilization statistics
for root cause analysis/bottleneck analysis.
Determine the yield point for each configuration and iteratively take the best mix as the relative
configuration for peak performance.
• The result of this test will list the configuration and number of nodes for peak performance.
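The node sweep described above can be sketched as a small driver loop. The response-time function and the 20 ms improvement threshold below are hypothetical stand-ins for a real load-test run against an n-node cluster:

```python
def mean_response_ms(nodes):
    # Placeholder for a real load-test measurement against an n-node
    # cluster; here a synthetic curve with diminishing returns.
    return 1000 / nodes + 50

def find_yield_point(max_nodes, threshold_ms=20):
    """Add nodes one at a time and stop when one more node improves the
    mean response time by less than threshold_ms (the yield point)."""
    results = {1: mean_response_ms(1)}  # baseline: a single node
    for n in range(2, max_nodes + 1):
        results[n] = mean_response_ms(n)
        if results[n - 1] - results[n] < threshold_ms:
            return n - 1, results
    return max_nodes, results

best, results = find_yield_point(8)
print(best)
print(results[1])  # baseline response time for the single-node configuration
```

In practice the same sweep is repeated for each single-node configuration (CPUs, memory, disk scheme) and the best mix is taken as the configuration for peak performance.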
2.16 Metrics
• Defect Status Metrics – indicates the number of defects in various states such as new, assigned,
resolved, and verified.
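A defect status tally of this kind can be produced directly from records exported from the defect tracking system; the records below are illustrative:

```python
from collections import Counter

# Illustrative defect records exported from the tracking system.
defects = [
    {"id": 101, "status": "new"},
    {"id": 102, "status": "assigned"},
    {"id": 103, "status": "resolved"},
    {"id": 104, "status": "resolved"},
    {"id": 105, "status": "verified"},
]

# Count defects per state for the status report.
status_metrics = Counter(d["status"] for d in defects)
print(dict(status_metrics))  # {'new': 1, 'assigned': 1, 'resolved': 2, 'verified': 1}
```

The same tally, taken at intervals, gives the trend data (open vs. closed over time) that status reports to the client typically include.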
A Test Repository shall be created to maintain the existing test packs and additional test packs created in
the future. A change management procedure shall be defined to ensure that the test packs are
maintained uniquely and that new test cases can be added to an existing test pack with the help of the
test management tool. The activity is illustrated as follows:
[Diagram: Functional specs and business scenarios feed the test identification and test preparation processes; additional tests identified are prepared, reviewed, and approved through the Test Repository ramp-up process; the Test Repository then feeds the Test Execution Process.]
• The configuration management (CM) tool used for the project should be finalized before starting the
project. MindTree uses CVS or VSS as the configuration management tool.
• All test-related documents have to be checked in to the CM tool, and the files must be
checked out before modification.
• All checked-out files must be checked back in. The CM tool administrator will perform the CM
administration. Sufficient access rights should be provided to the QA resources to access the QA
region of the CM tool.
Risk | Probability | Impact | Mitigation
System failure and loss of data during the testing process | Medium | Low | A database backup strategy should be in place so that loss of data can be prevented.
Test data not migrated in time | Medium | Low | Test the functionality not involving data feed until migration.
Lost connectivity during test execution from offshore/onsite | Low | High | Local test setup should be in place. Except for scenarios involving mainframes, others can be executed locally.
3 Appendix
3.1 MindTree's Standard Templates
4) Traceability Matrix
5) Defect Metrics
6) Graphs
1) ErgoLight WebTester provides navigation analysis, site evaluation, and mechanisms for user feedback.
2) Web Static Analyzer Tool (WebSAT) analyzes a site to determine its level of adherence to a set of
usability guidelines.
3) Bobby is a free service that assists in validating the accessibility of a web site by identifying site
aspects that make it difficult to use for web users with disabilities.
1) Nmap, a port scanner, attempts to connect to a server on various ports and also uses various types of
packets and fragments to bypass firewalls and other filters.
2) Cisco Secure Scanner provides port scanning and vulnerability assessment in addition to security
management and documentation.
3) Network monitoring – Windows NT/2000 Network Monitor, part of the OS, allows the capture and
examination of packets destined to or transmitted from the local machine.
D:\TestingProcess\
I18N Checklists.doc
4 Revision history
Version Date Author(s) Reviewer(s) Change Description