Test Automation Service
User guideline
Revision: 1.3
Revision date: 2013-08-28
Table of contents
1 Introduction
1.1 Purpose
1.2 Intended audience
1.3 Definitions, Acronyms, and Abbreviations
1.3.1 Definitions
1.3.2 Acronyms and Abbreviations
1.4 References
1.5 Open issues
2 Test Automation
2.1 General information
2.1.1 Purpose of test automation
2.1.2 Benefits of test automation
2.1.3 Objectives - What to automate?
2.2 TestCenter - Test Automation Service
2.2.1 Test Automation Request
2.2.2 Test Automation Process
Test Automation Service – User guideline Revision: 1.1
Full generic template Revision: 4.1 Date 2009-05-11
1 Introduction
1.1 Purpose
This document describes what the Test Automation Service at TestCenter delivers and
how the service and its deliveries should be used. It also describes what an initiative
needs to prepare and consider when writing test cases that are to be automated by the
Test Automation Service.
Test script    A script created in QTP (or another automation tool) that is used to
               run an automated manual test case. The test script is the automated
               implementation of a manual test case.
1.4 References
2 Test Automation
2.1 General information
2.1.1 Purpose of test automation
To reduce the time and resources required for regression testing, thereby speeding up
test execution and increasing overall regression test coverage.
Support and script maintenance: The Test Automation Service owns and takes full
responsibility for maintaining and updating delivered scripts, including support in
test script usage and test data handling.
Tools & Framework: External vendors that want to use test automation can use our
framework and tools, but they will not receive any training or support on tool usage.
The Test Automation Service can be requested through an email to the TA service
coordinator.
Qualification phase
During this phase, a service request (TPR) is sent to the Test Desk (or by email to
the Test Automation Service coordinator). TestCenter decides whether a feasibility
study is needed, based on previous experience with the application/system.
An Initial Feasibility Meeting is held and a Pre-Qualifying Checklist is filled in
together with the project. All requested information is sent to TestCenter, and the
readiness of the project for the Test Automation Service is then assessed.
Feasibility study
During the feasibility study phase the application is proven to work with an automation
tool. An automated Test Script is produced, and a Feasibility Report containing an ROI
Calculation and a recommendation for an automation assignment is delivered to the
project.
Assignment planning
During this phase, the Test Cases that should be automated are defined and completed.
Each Test Case must have a passed execution result, and every step and verification
point must be documented at a detailed level. Test Data is defined and provided by
the project. TestCenter reviews and approves the Test Cases and verifies that the
requirements for an assignment are fulfilled. An Assignment Description is created by
TestCenter and is reviewed and approved by all stakeholders.
Script Development
During the Script Development phase, all Test Scripts and Test Scenarios are created
and tested. The project provides the necessary Test Data, application upgrades and
support to investigate and resolve any issues that come up and block script
development. During this phase there is at least one handover meeting where the
scripts are delivered, reviewed and approved, and the users receive training in how
to run the scripts, set up scheduled executions and read the QTP log files.
Reporting
During the assignment reporting phase, a Customer Satisfaction Survey is filled in by
the project. TestCenter delivers an Assignment Report, documents lessons learned and
investigates if the Service can be improved.
test data that should exist in the system and what actions need to be done before
starting the test case, e.g. precondition "A user with role YY has logged on to the
system and the main screen is displayed. At least one new order exists" and post
condition "Return to main screen".
WHY: We do not know how your system/application works and therefore we need step
by step instructions on what to do and what to verify.
It is not clear enough to have a step that says “Create an order in system Z” or “Verify
that the order was sent to system X”. You need to specify how to create an order and
how and what to verify in system X:
“Fill in info_x in field_1, info_y in field_2 … and click button ‘Create order’, write down
the order number”
“Open System_X and navigate to ‘Show placed orders’ under Orders menu and verify
that the order number from system Z is displayed in list ‘New orders’ with the correct
information info_x, info_y…”
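The quoted steps above show the level of detail that makes a step automatable. As a
sketch only (written in Python for illustration, not the QTP/VBScript the service
actually delivers; the driver class, field names and list names are all hypothetical
stand-ins), the two steps map to actions and checks roughly like this:

```python
# Sketch only: Python stand-ins, not the QTP/VBScript the service uses.
# The SystemZ class, field names and order-number format are hypothetical.

class SystemZ:
    """Hypothetical stand-in for the order-creating application."""
    def __init__(self):
        self.placed_orders = {}

    def fill(self, field, value):
        setattr(self, field, value)

    def click_create_order(self):
        order_no = "ORD-%d" % (len(self.placed_orders) + 1)
        self.placed_orders[order_no] = (self.field_1, self.field_2)
        return order_no

def create_order(app, info_x, info_y):
    # "Fill in info_x in field_1, info_y in field_2 ... and click
    #  button 'Create order', write down the order number"
    app.fill("field_1", info_x)
    app.fill("field_2", info_y)
    return app.click_create_order()

def verify_order(app, order_no, info_x, info_y):
    # "... verify that the order number from system Z is displayed in
    #  list 'New orders' with the correct information info_x, info_y"
    return app.placed_orders.get(order_no) == (info_x, info_y)
```

A step like "Create an order in system Z" gives the automation specialist none of the
field names, button labels or verification values that the functions above need.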
If test data is needed as input to the test case or to compare against the expected
results, it must be available, and it must be specified where it can be found or how
to generate it. (See [2.3.6])
How we work: Before we start an assignment we review the test cases and run them
manually to understand how to run them and what to verify. (If there are unclear steps
or verification points we contact the project for clarification.) Then we design and
create automated scripts that are based on reusability, flexibility and maintainability.
The test cases are broken down into reusable components, such as Login, CreateUser,
VerifyXXXXX etc., which are easy to update if something changes in the GUI or test case.
If we find similar actions (like creating different kinds of users) we try to build one
component that can handle both kinds of actions based on different input parameters.
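The component idea can be sketched as follows (in Python for illustration, not the
QTP/VBScript actually used; the roles and steps are hypothetical examples): one
parameterized component replaces several near-identical scripts, so a GUI change only
has to be fixed in one place.

```python
# Sketch only (Python, not QTP/VBScript): one parameterized CreateUser
# component handling similar actions. Roles and steps are hypothetical.

def create_user(role, name):
    """Reusable CreateUser component driven by input parameters."""
    steps = [
        "open user administration screen",
        "fill in name '%s'" % name,
        "select role '%s'" % role,
    ]
    if role == "admin":
        # the only difference between the similar actions (hypothetical)
        steps.append("tick 'full access' checkbox")
    steps.append("click 'Create user'")
    return steps
```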
2.3.5 Support
The automation specialist needs a contact person from the project that can answer
questions about the usage of the application/system, test cases and test data. Normally
this is a minor issue and does not require much time from the project resource, but if
the test cases are unclear or the application is not stable enough more time might be
needed.
WHY: If issues with the test cases or application usage are found, the assignment's
time schedule is put at risk if we do not get help to resolve them quickly.
Inputs to the ROI Calculation include:
- Time to execute all tests that should be automated manually (man-hours)
- Application lifetime (years)
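As a rough illustration of how these inputs feed an ROI Calculation like the one
delivered in the Feasibility Report, a simplified break-even sketch follows. The
formula and all numbers are assumptions for illustration, not the service's actual
calculation model.

```python
# Illustrative only: a simplified break-even calculation. The formula and
# numbers are assumptions, not TestCenter's actual ROI model.

def automation_payoff(manual_hours_per_run, runs_per_year,
                      lifetime_years, automation_cost_hours,
                      maintenance_hours_per_year):
    """Man-hours saved over the application lifetime, minus the cost
    of building and maintaining the scripts."""
    saved = manual_hours_per_run * runs_per_year * lifetime_years
    spent = (automation_cost_hours
             + maintenance_hours_per_year * lifetime_years)
    return saved - spent

# Example: 40 h per manual regression run, 6 runs/year, 3-year lifetime,
# 150 h to automate, 20 h/year maintenance:
# 40*6*3 - (150 + 20*3) = 720 - 210 = 510 hours saved
```

A short application lifetime or infrequent regression runs can therefore make
automation a net loss, which is why both inputs are requested.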
on the start menu on a computer that has QTP installed. (He does not have the
Application Access to QTP.)
If you have a user that needs access to QTP on a machine you need to grant the user
Application Access: MercuryQTP.
Note: This access is only needed when a user needs to be able to start and
execute tests from QTP and not from ALM. (Executing QTP tests from ALM does
not require QTP application access!)
of your local file before you upload and overwrite the old file
7. Go to the local folder, edit your locally saved copy and save it.
8. Press “Upload File” and select your locally saved file. (IMPORTANT)
4. Then click on the desired cell for each row (test case) and change the value.
5. Important: Change to another Test Set and back (so ALM saves the changes).
6. Run the Test Set. Your updated test data should now be used.
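The reason these steps take effect is that automated scripts typically read their test
data from an external source at execution time, so the next run picks up the edited
values. A minimal sketch (in Python for illustration; the CSV columns are hypothetical,
not the actual layout used by the scripts):

```python
# Sketch only: scripts read test data at run time, so edits made before a
# run are picked up automatically. Column names here are hypothetical.

import csv
import io

def load_test_data(fileobj):
    """Return one dict per row; each row drives one test case iteration."""
    return list(csv.DictReader(fileobj))

sample = io.StringIO("order_type,quantity\nstandard,2\nexpress,1\n")
rows = load_test_data(sample)
# each row in 'rows' would drive one iteration of the automated test
```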
5 Test execution
To start an execution you have two options to choose from. Either you can use ALM to
start your execution, or you can use our “SchedulePlanner” tool (see 5.2 Schedule an
execution), which also uses ALM but adds extra features. When the execution is done
you can view the results in ALM.
5.1 ALM
The benefit of using ALM to start the execution is that you can watch the execution
and stop it if something goes wrong.
1. Login using remote desktop to the computer that you want to execute the
tests on. Open remote desktop (Start -> All Programs -> Accessories ->
Remote Desktop Connection).
2. If it is an IKEA.com computer, login using your normal IKEA.com login or
if it is a computer in IKEADT use your IKEADT user.
3. Open a browser and navigate to ALM
http://iwww.alm.ikeadt.com/qcbin/start_a.htm
4. Go to Test Lab
5. Navigate to the Test Set you want to execute
1. To run all tests in the Test Set, click “Run Test Set”
2. To run selected tests:
1. Select the Tests you want to run by clicking on them and
holding the “CTRL”-key
2. Run selected tests by clicking “Run”
6. In the “Automatic runner” window, make sure “Run All Tests Locally” is
checked.
7. Click “Run All”
8. The execution starts
IMPORTANT: Remember to keep the remote desktop window open
during the entire execution. Don’t close or minimize the window as
that will tell Windows to stop updating the screen which might make
execution fail!
1. Go to \\itsehbg-nt0007.ikea.com\Common\TEST_INSIDE\Test
Automation\Tools\SchedulePlanner and download SchedulePlanner.exe (Also
have a look at the manual)
2. On the General tab, fill in all fields. (Tip 1: Use “Select” to choose the test
set more easily; you can also select several test sets to be executed after each
other. Tip 2: You can send the email to several addresses by separating them with ;
(semicolon).)
3. On the Schedule tab, fill in the time for execution. (If you want to execute
immediately, set a time at least 2 minutes in the future.)
4. On the RDP tab: if the Host you specified on the General tab is in IKEA.com, you
need to uncheck “Use default login”, change the domain to “IKEA.COM” and enter
your own login. (This is because the default user is not allowed to log in to your
IKEA.com computer.)
5. Click on “Schedule”
When you see the success message in the status bar, everything is set up and you can
close the tool and shut down your computer. Depending on your settings, you will get
an email when the execution is completed.
6 Execution results
6.1 Open logs
1. In your ALM project, navigate to your Test Set
2. Select the test you want to read the report for
3. Click on Launch report
(Note: if this button is not visible, click the button in the bottom right
corner of the window)
When you first open the report you will get an overview of passed/failed/warnings. If
a test has failed you can jump to the next failure by pressing Find Next (Alt+N) in
the toolbar.
Normally, when an error has occurred you will see a screenshot on the next step. If
you get “Run error” it means that an unanticipated event has happened, for example a
popup dialog that blocks execution. When this happens the rest of the test will most
likely fail, and to see what actually went wrong you might need to scroll down in the
log to the next screenshot and check what state the application is in.
6.2.2 Examples
Run error
Sometimes you can get run errors that are caused by events in the application that
have not been predicted, and therefore no error handling is done for them. It could
be a lack of test data that prevents a selection, a popup that blocks the script, etc.
In this example it happens to be just that: when a picture should be selected, an
error dialog appears (caused by the connection to the picture share being down).
Since the error message is not closed, it causes several run errors in the log. If
the option to save a screenshot on error is turned on, you will see a screenshot on
the next step in the log that might help identify what the problem is.
Revision history
Describe the changes made and the reason for the change between the revisions.
Revision  Date        Description of changes                            Author
1.0       2010-11-13  Document approved                                 TA Team
1.1       2011-09-02  Updated with general information about            Peter Johansson
                      automation, service requirements and preparation.
1.2       2013-02-05  Updated contact information and some links        Olof Ernstsson
1.3       2013-08-28  Updated links and changed to ALM                  Olof Ernstsson