
Performance Testing Process

Why Do We Need Performance Testing?

 Before release, managers need to know:

• Do we have enough hardware?


• Can we handle the target load?
• How many users can we handle?
• Is the system fast enough to make customers
happy?
Nature of Performance Testing

 It is very different from functional testing.
 A very challenging job
 It requires stellar cooperation and coordination: it is a whole-team effort!
 Automation tools are very powerful, but expensive and complex; training is needed
 It can be fun too!
Why Do We Need Performance Testing?

 The failure of an application can be costly


 Assure performance and functionality
under real-world conditions
 Locate potential problems before our
customers do
 Reduce development time – multiple
rounds of load testing
 Reduce infrastructure cost
When we do it
 During design and development
– What is the best server to support target load?
– Define system performance requirements
 Before release
– Is the system reliable enough to go into production?
– After functional testing is done
 Post-deployment
– What is the cause of performance degradation?
What we are doing
Performance testing before release:
 Application response times
- How long does it take to complete a task?
 Configuration sizing
- Which configuration provides the best performance level?
 Capacity planning
- How many users can the system handle?
 Regression
- Does the new version of the software adversely affect response time?
 Reliability
- How stable is the system under heavy workload?


Load Testing Process

Plan Test → Create Scripts → Scenario Creation → Scenario Execution → Result Analysis → Performance Tuning
Perf. Test Planning Documents
 Performance Testing Initial Assessment
- Pre-test plan document
- Helps the project team brainstorm their test scope
 Performance Test Request Form
- Detailed information related to the whole performance testing process, including goals, environment, business processes, performance requirements (e.g., response time), usage information, internal support team, etc.
What we are doing
1. Test Planning - Before we run load testing
- Set up goals:
 Measure application response time
 Configuration sizing
 Capacity planning
 Regression
 Reliability
- Type of testing
 Load Testing (System performance testing with SLA target load)
 Stress Testing (Capacity testing to find out breaking point)
 Duration Testing (Reliability testing of the system under sustained load)
What we are doing – Cont.

- Identify usage information - Business Profile

 Which business processes to use
– BA and Dev teams are responsible for the definition
 Isolate peak load and peak time
– BA, Dev, and application support are responsible for the definition
 Document user actions and input data for each business process
– SME/Functional Testing team is responsible for creating the business process document
Sample: Business Profile 1 - HR App. Business Processes

Business Process | Total Users (%) | Avg number of concurrent users | Peak number of concurrent users | Preferred Response Time (Total, including think time) | Unacceptable Response Time (Total, including think time)
Browse | 20% | 100 | 2000 | 2-3 min | > 5 min
Time Entry | 60% | 200 | 6000 | 3 min | > 5 min
Update personal info. | 20% | 50 | 1000 | 1 min | > 3 min
Total | 100% | 350 | 9000 | |
Sample: Business Profile 2 - eCommerce Business Processes

Business Process | Total Users (%) | Peak Time | Peak Load (# of users) | Preferred Response Time (each transaction) | Unacceptable Response Time (each transaction)
Create Order | 20% | 4-6 pm | 1000 | 3-5 sec | > 8 sec
Browse Catalog | 60% | 4-6 pm | 6000 | 3-5 sec | > 8 sec
Display Order | 20% | 4-6 pm | 1000 | 3-5 sec | > 8 sec
Total | 100% | | 8000 | |
What we are doing – Cont.
- Business Profile is the basis for load testing
 It is the traffic model of the application
 The better the documentation of the business processes, the better the test scripts and scenarios.
 Saves time on script and scenario creation
 A good business profile can make it possible to reuse existing load testing scripts and results later.
What we are doing – Cont.

2. Create Scripts
- Automate business processes in LoadRunner VUGen (Virtual User Generator):
 Scripts are C/C++-like code
 Scripts differ by protocol/technology
 LoadRunner supports about 50 protocols, including WAP
- Record user actions
 Needs assistance from the SME/Functional Testing group
- Add programming logic and test data to the scripts
- E.g., add correlation to handle dynamic data such as session IDs (see the correlation sketch after the sample script below)
- Test data may need a lot of work from the project team
Sample Script
web_submit_data("logon.sap",
    "Action=http://watstwscrm02:50000/bd/logon.sap",
    "Method=POST",
    "RecContentType=text/html",
    "Referer=http://watstwscrm02:50000/bd/startEBPP.sap",
    "Snapshot=t3.inf",
    "Mode=HTML",
    ITEMDATA,
    "Name=login_submit", "Value=true", ENDITEM,
    "Name=j_authscheme", "Value=default", ENDITEM,
    "Name=j_alias", "Value={UserName}", ENDITEM,
    "Name=j_password", "Value=coffee@2", ENDITEM,
    "Name=j_language", "Value=EN", ENDITEM,
    "Name=AgreeTerms", "Value=on", ENDITEM,
    "Name=Login", "Value=Log on", ENDITEM,
    LAST);
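The {UserName} value above is a parameter fed from a data file; dynamic values such as session IDs must instead be correlated. Below is a minimal sketch of correlation, assuming a hypothetical page that returns a hidden j_sessionid field - the boundaries, field names, and follow-on step are illustrative, not taken from the real application:

// Register the capture BEFORE the request whose response contains the dynamic value.
// The left/right boundaries are hypothetical; take them from a recorded server response.
web_reg_save_param("SessionId",
    "LB=name=\"j_sessionid\" value=\"",
    "RB=\"",
    LAST);

web_url("startEBPP.sap",
    "URL=http://watstwscrm02:50000/bd/startEBPP.sap",
    "Mode=HTML",
    LAST);

// Replay the captured value in a later step instead of the hard-coded recorded literal.
web_submit_data("saveEntry.sap",                          /* hypothetical follow-on step */
    "Action=http://watstwscrm02:50000/bd/saveEntry.sap",
    "Method=POST",
    "Mode=HTML",
    ITEMDATA,
    "Name=j_sessionid", "Value={SessionId}", ENDITEM,
    LAST);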
What we are doing – Cont.

3. Create Test Scenario
- Build test scenarios according to the usage information in the Business Profile
- Load calculation
- Can use rendezvous points, IP spoofing, etc.
- Run-time settings (see the sketch below)
 Think time
 Pacing
 Browser emulation: simulate browser cache, new user each iteration
 Browser version, bandwidth, etc.
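For the load calculation, a common rule of thumb (an assumption, not from the original slides) is Little's Law: concurrent Vusers ≈ transaction rate × iteration time. For example, 3,600 transactions per hour (1 per second) with a 120-second iteration, including think time, needs roughly 1 × 120 = 120 Vusers. Inside the script, a rendezvous point and think time look like the sketch below; the rendezvous, transaction, and URL names are hypothetical, and pacing itself is configured in the Run-Time Settings rather than in code.

Action()
{
    // Hold Vusers here until the Controller releases them together (peak-load simulation).
    // "browse_catalog_peak" is a hypothetical rendezvous name defined in the scenario.
    lr_rendezvous("browse_catalog_peak");

    lr_start_transaction("browse_catalog");

    web_url("browseCatalog",
        "URL=http://watstwscrm02:50000/bd/browseCatalog.sap",   /* hypothetical URL */
        "Mode=HTML",
        LAST);

    lr_end_transaction("browse_catalog", LR_AUTO);

    // Recorded think time; Run-Time Settings decide whether it is replayed as-is,
    // ignored, or randomized (e.g., 50-150% of the recorded value).
    lr_think_time(10);

    return 0;
}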
What we are doing – Cont.

4. Execute Load Testing
- Execute test scenarios with automated test scripts in LoadRunner Controller
- Isolate top-time transactions under low load (see the transaction sketch below)
- Overdrive test (120% of full load) to isolate SW & HW limitations
- Work with the Internal Support Team to monitor the whole system, e.g., web server, DB server, middleware, etc.
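To isolate the top-time transactions, each user-visible step of a business process can be wrapped in its own transaction so that Analysis reports a separate response time per step. A minimal sketch; the step, transaction, and URL names are hypothetical:

Action()
{
    // One transaction per user-visible step; Analysis then shows which step is slowest.
    lr_start_transaction("time_entry_open_form");
    web_url("openTimeEntry",
        "URL=http://watstwscrm02:50000/bd/timeEntry.sap",       /* hypothetical URL */
        "Mode=HTML",
        LAST);
    lr_end_transaction("time_entry_open_form", LR_AUTO);

    lr_start_transaction("time_entry_submit");
    web_submit_data("submitTimeEntry.sap",                      /* hypothetical step */
        "Action=http://watstwscrm02:50000/bd/submitTimeEntry.sap",
        "Method=POST",
        "Mode=HTML",
        ITEMDATA,
        "Name=hours", "Value=8", ENDITEM,
        LAST);
    lr_end_transaction("time_entry_submit", LR_AUTO);

    return 0;
}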
Example Parameters to Monitor

 System - % total processor time
 Memory - page faults/sec
 Server work queues - bytes transferred/sec
 HTTP response
 Number of connections

• The support team will have better ideas about what to monitor
• An individual write-up is highly suggested as part of the test report
• Need to get CSV files, then import them into LoadRunner (an example collection command follows)
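One way (an assumption, not specified on the slide) to collect those CSV files on Windows servers is the built-in typeperf utility; LoadRunner Analysis can then import the resulting file as external monitor data:

rem Sample two counters every 5 seconds for one hour and write them to a CSV file
typeperf "\Processor(_Total)\% Processor Time" "\Memory\Page Faults/sec" -si 5 -sc 720 -o server_counters.csv -f CSV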
What we are doing – Cont.

5. Analyze Test Results - Analysis
- Collect statistics and graphs from LoadRunner
- Report results
- Most commonly requested results:
  Transaction response time
  Throughput
  Hits per second
  HTTP responses
  Network delay
  Server performance
- Merge graphs to make them more meaningful:
  Transaction response time under load
  Response time/Vusers vs. CPU utilization
  Cross-scenario graphs
What we are doing – Cont.

6. Test Report
- Don’t send raw LoadRunner results and graphs directly
- Send a summary to the whole team
- Report key performance data and back-end performance data
- Add notes for each test run
- Keep a test history so the team can compare test runs
What we are doing – Cont.

7. Performance Tuning
- Help identify the bottlenecks and degradation points to build an optimal system
- Hardware, configuration, database, software, etc.
- Drill down into transaction details, e.g., web page breakdown
- Diagnostics
- Show the Extended Log to the dev team (a logging sketch follows)
  - Data returned by server
  - Advanced Trace: logs all Vuser messages and function calls
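When only a specific step needs the Extended Log, the logging level can be raised and lowered around it in the script instead of for the whole run. A minimal sketch using LoadRunner's message-class switches; the surrounding transaction and URL are hypothetical:

Action()
{
    // Turn on Extended Log with server-returned data only around the suspect step,
    // so the rest of the run is not slowed down by heavy logging.
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_RESULT_DATA,
                         LR_SWITCH_ON);

    lr_start_transaction("update_personal_info");               /* hypothetical step */
    web_url("updateInfo",
        "URL=http://watstwscrm02:50000/bd/updateInfo.sap",      /* hypothetical URL */
        "Mode=HTML",
        LAST);
    lr_end_transaction("update_personal_info", LR_AUTO);

    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_RESULT_DATA,
                         LR_SWITCH_OFF);

    return 0;
}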
What we are doing – Cont.

8. Communication Plan
- Internal Support Team:
  - PM, BA, environment/development/architect, network, DBA, functional test lead, etc.
- Resource plan
Timeline/Activities - Example

Test Planning, Script Creation – 4 weeks
Test Execution – 4 weeks
  Trial run – 2 days
  1. Round 1 – Load Testing: response time with SLA target load: 1 week
  2. Round 2 – Stress Testing: find the breaking point: 1 week
  3. Round 3 – Duration (Reliability) test: 2 days
  4. More performance tuning – 3 days
  5. Document and deliver the final report – 2-3 days
Projects
 All performance testing projects in T-Mobile’s IT dept
 40+ projects in <3 years
 The Standard Performance Testing Process has worked very well on all projects
Automation Tools
- Mercury LoadRunner
– Scripting: VUGen (Virtual User Generator)
– Performance test execution:
 Controller – build test scenarios according to
business profile and load calculation
 Load Generator – run virtual users

– Performance test result analysis:
 Analysis – provides test reports and graphs, and summarizes the system performance
Automation Tools
– Performance Center
 Web-enabled global load testing tool; the Performance Testing team can manage multiple, concurrent load testing projects across different geographic locations
 User Site - conduct and monitor load tests
 Privilege Manager - manage user and project access rights
 Administration Site - for overall resource management and technical supervision
Automation Tools -
Diagnostics
 Pinpoint the root cause
– Solve tough problems:
 Memory leaks and thrashing
 Thread deadlock and synchronization
 Instance tracing
 Exceptions
Diagnostics Methodology in
Pre-production
 Start with monitoring of business processes
– Which transactions are problematic
 Eliminate system and network components
– Infrastructure monitors and metrics
 Isolate application tier and method
– Triage (using Transaction Breakdown)
 Correct behavior and re-test

Broad Heterogeneous Platform
Support
 WebSphere J2EE/Portal Server
 WebLogic J2EE/Portal Server
 JBoss, Tomcat, JServ
 Oracle Application Server J2EE
 MS .NET
 Generic/Custom Java
 SAP NetWeaver J2EE/Portal
 Oracle 11i Applications
 Siebel
Performance Engineering
- Bridge the Gap
 80% of IT organizations experience failures in apps that passed the test phases and rolled into production
 HyPerformix – Performance Engineering
 Product line: Designer, Optimizer, and Capacity Manager
 HyPerformix Optimizer (capacity planning) can bridge the gap between testing and production environments and leverage load test data to accurately show how the application will perform in production.
Performance Engineering
- HyPerformix Optimizer
 Configuration sizing, Capacity planning
 Create production-scale models
– Perf. Test team and Architect team work together
 Load test and production perf. data are
seamlessly integrated with Optimizer
 Ensure capacity is matched to current and future business requirements
 Reduce risk before application deployment
What Can Performance Testing Do for the Business?
 Performance testing is critical. Competition in the market is high: customers' switching costs are low, and the cost of keeping customers is high
 Performance Testing can protect revenue by helping to
isolate and fix problems in the software infrastructure
 Improve availability, functionality, and scalability of
business critical applications
 Ensure products are delivered to market with high
confidence that system performance will be acceptable
 Proactive performance testing can decrease costs of
production support and help desk
 A good Performance Testing Process is essential to get
performance testing done right and on time!
Questions?
