
1. Test strategy

Test Strategy for single high-level test


1A1

1A2

1A3
1A4

1A5
1A6

Combined testing strategy for high-level tests


1B1

1B2
1B3
1B4

1B5

Combined strategy for high-level tests plus low-level tests (like Component Integration Tests) or evaluation levels
1C1
1C2
1C3
1C4

1C5
1C6

Combined strategy for all test and evaluation levels


1D1
1D2
1D3
1D4
1D5
1D6
2. Life-cycle model

Planning, Design, Execution
2A1
2A2

Planning, Preparation, Design, Execution and Completion


2B1
2B2
3. Moment of involvement
Completion of test basis
3A1

Start of test basis


3B1

Start of requirements definition


3C1

Project initiation
3D1
4. Planning and estimation

Substantiated estimating and planning

4A1
4A2
4A3

Statistically substantiated estimating and planning


4B1
4B2
5. Test design techniques

Informal techniques
5A1
5A2

Formal techniques
5B1
5B2
5B3

Mathematical methods
5C1
6. Static test techniques

Inspection of test basis


6A1
6A2

Checklists
6B1
7. Metrics

Project metrics (product)


7A1

7A2

7A3

Project metrics (process)

7B1

7B2

System metrics
7C1
7C2

Organization metrics (>1 system)


7D1
7D2
8. Test automation

Use of tools
8A1
8A2
8A3

Managed test automation


8B1
8B2
8B3

8B4

8B5

8B6

Optimal test automation


8C1
8C2
8C3
8C4
8C5

9. Test environment

Managed and controlled test environment


9A1
9A2

9A3

9A4

9A5

9A6
9A7
9A8

Testing in the most suitable environment


9B1

9B2

9B3

Environment on call
9C1
10. Office and laboratory environment
Adequate and timely office and laboratory environment
10A1
10A2
10A3
11. Motivation and engagement

Assignment of budget and time


11A1
11A2
11A3

11A4
11A5
11A6

Testing integrated in project organisation


11B1
11B2
11B3
11B4
11B5
11B6

Test engineering
11C1
11C2
11C3
11C4
11C5
11C6
11C7
12. Test functions and training

Test manager, integrator and testers

12A1
12A2
12A3
12A4

(Formal) Methodical, Technical and Functional support, Management of the test process, testware and infrastructure
12B1
12B2
12B3
12B4

12B5

12B6

12B7
12B8

Formal internal reviewing
12C1
12C2
12C3
12C4
13. Scope of methodology

Project specific
13A1
13A2
13A3

Project specific with external scope


13B1

Organization generic
13C1
13C2
13C3

Organization optimizing, R&D activities


13D1

13D2
14. Communication

Internal communication
14A1

14A2
14A3

Project communication (defects, change control)


14B1
14B2
14B3
14B4
14B5
14B6
14B7
14B8

Communication in organization about the quality of the test processes


14C1
14C2
15. Reporting

Defects
15A1
15A2

15A3

Progress (status of tests and products), activities (costs and time, milestones), defects with priorities
15B1
15B2

Risks and recommendations, substantiated with metrics


15C1
15C2

15C3
15C4

Recommendations have a Software Process Improvement character
15D1
16. Defect management

Internal defect management


16A1

16A2

Extensive defect management with flexible reporting facilities


16B1

16B2
16B3

16B4

Project defect management


16C1
16C2

16C3

17. Testware management

Internal testware management


17A1

17A2
17A3

External management of test basis and test object


17B1
17B2
17B3
17B4
17B5
17B6
17B7

External management of test basis and test object


17C1
17C2
18. Test process management

Planning and execution


18A1

18A2

Planning, executing, monitoring and adjusting


18B1
18B2
18B3
18B4

Monitoring and adjustment in organisation


18C1
18C2
18C3
19. Evaluation

Informal evaluation
19A1
19A2
19A3
19A4

Evaluation techniques
19B1

Evaluation strategy
19C1
19C2
19C3
19C4
19C5
20. Low-level testing
20A1
20A2

White-box techniques

20B1
20B2
20B3

Low-level test strategy


20C1
20C2

20C3
20C4
20C5
21. Integration test

Integration identified as a separate and planned process


21A1
21A2
21A3

21A4

21A5

Strategy for integration


21B1
21B2
21B3
21B4

21B5
21B6

21B7

Standardized strategy for integration


21C1
21C2
21C3

1. Test strategy

Test Strategy for single high-level test

A motivated consideration of the product risks takes place. Typical risk categories to be verified are: technical risks (… can be used as basis), organizational risks associated with development/test, operational usage of the product, political risks and liability.

The consideration implies at least the following aspects:


- Regression testing of unmodified parts of the software is part of this strategy when the test object is an update of existing software.
- Software is often parameterized, for instance because of different country legislation. If this is the case, part of the strategy should be whether and how the different parameter settings are tested, based on the estimated risks.
- The software to test can be commercial-off-the-shelf (COTS), reuse or core, or tailor-made, and often a combination. The test strategy should take into account the different risk profiles of COTS, reuse/core or tailor-made software.
- Risks arising from the dependencies of products of the HW- and SW-baselines are taken into account, e.g. required compatibility or HW breaks.
The stakeholders of the product are involved in the process of defining the test strategy. At least the stakeholders (such as the acceptant of the product) have to be invited to review the proposed test strategy and its present status.

There is a differentiation in test depth, depending on the risks and, if present, on the acceptance and entry and exit criteria: not all system parts, variants and versions are tested equally thoroughly, and not all quality characteristics are tested (equally thoroughly).
When incremental delivery of functionality takes place (so-called A-Muster, B-Muster, etc.), for every increment a consideration is made what must be tested and which regression tests must be carried out.
One or more test design techniques are used, suited for the required depth of a test.

For re-tests a (simple) strategy determination also takes place, in which a motivated choice between "test the solution only" and "a (more) complete re-test" is made. At least a differentiation between changes in parameter settings and changes in source code is made.
Combined testing strategy for high-level tests

Coordination takes place between the different high-level tests, often the system, acceptance and production acceptance tests, and supplier- and commissioner-side testing, in the field of test strategy (risks, quality characteristics, area of consideration of the test, and planning).

The result of the coordination is a coordinated strategy, which is documented. During the total test process this strategy is controlled.
Each high-level test determines its own test strategy, based on the coordinating strategy, as is described in level A.

Deviations from the coordinating strategy are reported. A substantiated adjustment of the coordinating strategy is made, based on the risks identified for these deviations. The validity of the strategy is checked in case of incremental delivery for every increment.
For retests, coordination takes place between the different test levels. In case the different test levels exceed the boundaries of the organization, commissioner and supplier make clear decisions about the retest at both sides, based on the entry and exit criteria.
Combined strategy for high-level tests plus low-level tests (like Component Integration Tests) or evaluation levels

Coordination takes place between the high-level tests and the low-level tests or the evaluation levels in the area of test strategy (risks, quality characteristics, area of consideration of the test/evaluation and planning).

The result of the coordination is a coordinated strategy, which is documented. During the total (evaluation and test) process this strategy is controlled.
Each high-level test determines, on the basis of the coordination, its own test strategy, as described in level A.

(if applicable) Each low-level test determines, on the basis of the coordination, its own test strategy, as is described in the key area "Low-level testing", level C.

(if applicable) Each evaluation level determines, on the basis of the coordination, its own evaluation strategy, as is described in the key area "Evaluation", level C.

Deviations from the coordinating strategy are reported. A substantiated adjustment of the coordinating strategy is made, based on the risks identified for these deviations.
Combined strategy for all test and evaluation levels
Coordination takes place between the high-level tests, the low-level tests and the evaluation levels in the area of test strategy (risks, quality characteristics, area of consideration of the test/evaluation and planning).

The result of the coordination is a coordinating strategy, which is documented. During the total evaluation and test process this strategy is controlled.
Each high-level test determines its own test strategy on the basis of the coordination, as described in level A.

Each low-level test determines its own test strategy on the basis of the coordination, as is described in the key area "Low-level testing", level C.

Each evaluation level determines its own evaluation strategy on the basis of the coordination, as is described in the key area "Evaluation", level B.

Deviations from the coordinating strategy are reported. A substantiated adjustment of the coordinating strategy is made, based on the risks identified for these deviations.
2. Life-cycle model

Planning, Design, Execution

For the tests (at least) the following phases are recognized: planning, design and execution. These are executed subsequently, possibly per subsystem. A certain overlap between the phases is allowed.

Activities to be performed per phase are mentioned in Sheet2. Each activity contains sub-activities and/or aspects. These sub-activities and/or aspects are meant as additional information and are not obligatory.
Planning, Preparation, Design, Execution and Completion
For the tests the following phases are distinguished: Planning, Preparation, Design, Execution and Completion. These are executed consecutively, possibly per subsystem. A certain overlap between the phases is allowed.

Each activity is supplied with sub-activities and/or aspects. These are meant as additional information and are not obligatory.
Activities to be executed per phase are mentioned in Sheet2.
3. Moment of involvement

Completion of test basis

The activity "testing" starts simultaneously with or earlier than the completion of the test basis for a restricted part of the system that is to be tested separately. The system can be divided into several parts which are built, finished and tested separately. The testing of the first subsystem has to start at the same time as or earlier than the completion of the test basis of that subsystem.
Start of test basis

The activity "testing" starts simultaneously with or earlier than the phase in which the test basis (often the specifications) is defined.
Start of requirements definition

The activity "testing" starts simultaneously with or earlier than the phase in which the (customer and system) requirements are defined.
Project initiation
When the project is initiated, the activity "testing" is also started.
4. Planning and estimation

Substantiated estimating and planning

The test estimating and planning can be substantiated (so not just "we did it this way in the last project"). For the activities it is clear how much time it costs to execute them.
In the test process, estimating and planning are monitored, and adjustments are made if needed.
In case of short-term changes forced by the commissioner and/or supplier, a re-planning of test activities is performed.
Statistically substantiated estimating and planning

Metrics about progress and quality are structurally maintained (on level B of the key area Metrics) for multiple projects.
This data is used to substantiate test estimating and planning.
5. Test design techniques

Informal techniques
The test cases are defined according to a documented technique which describes how test cases should be derived.
The technique at least consists of: a) start situation, b) change process = test actions to be performed, c) expected result.
Formal techniques
Besides informal techniques, formal techniques are used: unambiguous ways of getting from test basis to test cases.

A substantiated judgment is possible about the level of coverage based on the collection of test cases (compared to the test basis).
The testware is reusable (within the test team) by means of a uniform working method.
Mathematical methods
At least one mathematical method is used to derive test cases.
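As an illustration (not part of the checklist), one such formal, unambiguous way of getting from test basis to test cases is boundary value analysis: a specified input range maps mechanically to a fixed set of test inputs. The requirement used below is hypothetical.

```python
# Illustrative sketch: deriving test inputs from a specified closed range
# [low, high] with boundary value analysis, a formal test design technique.

def boundary_values(low, high):
    """Return the classic boundary test inputs for a closed range [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Hypothetical requirement: engine speed input is valid from 0 to 8000 rpm.
cases = boundary_values(0, 8000)
print(cases)  # [-1, 0, 1, 7999, 8000, 8001]
```

Because the derivation is mechanical, two testers applying the technique to the same test basis obtain the same test cases, which is what makes a substantiated coverage judgment possible.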
6. Static test techniques

Inspection of test basis


Preceding the definition of the test cases, a study of the testability of the test basis is performed and documented.
In this study checklists are used. The checklists are related to the test design techniques that are selected in the test strategy.
Checklists

Static tests other than inspection of the test basis take place by means of checklists (approved by project and/or line management).
These checklists are used to conduct a static test on the test object for non-functional quality characteristics.
7. Metrics

Project metrics (product)

In the (test) project input metrics are recorded:

used resources - hours,
performed activities - hours and lead time,
size and complexity of the tested system - number of functions and/or building effort, number of system requirements, Lines of Code, etc.

In the (test) project output metrics are recorded:

test products - specifications and test cases, log reports,
test progress - performed tests, status (executed - passed/failed/not finished),
number of defects - defects by test level, by subsystem, by cause, priority, status (new, in solution, corrected, retested),
achieved code coverage for at least the low-level test, e.g. statement coverage C0, branch coverage C1.
The metrics are used in test reporting.
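As a minimal sketch (with hypothetical counts), the C0 and C1 figures named above are simple ratios of covered to total statements and branch outcomes:

```python
# Illustrative sketch: computing the C0 (statement) and C1 (branch) coverage
# percentages mentioned above from raw counts, as they might appear in a report.

def coverage_pct(covered, total):
    """Coverage as a percentage; 100% when there is nothing to cover."""
    return 100.0 if total == 0 else 100.0 * covered / total

c0 = coverage_pct(covered=450, total=500)   # statements executed at least once
c1 = coverage_pct(covered=170, total=200)   # branch outcomes taken at least once
print(f"C0 = {c0:.1f}%, C1 = {c1:.1f}%")    # C0 = 90.0%, C1 = 85.0%
```

Note that C1 implies C0 but not the other way round: every statement can be reached while some decision outcomes remain untested, which is why both figures are reported.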
Project metrics (process)

In the (test) project result measurements are made for at least 2 of the items mentioned below:
defect detection effectiveness:
- the detected defects compared to the total defects present (in %); the last entity is difficult to measure, but think of defects found in later tests or in the first months after SOP (Start of Production);
- analyse which previous test level should have detected the defects (this indicates something about the effectiveness of those test levels!);
defect detection efficiency:
- the number of detected defects per spent hour, measured over the entire test period or over several test levels;
test coverage level:
- test objectives covered by a test case compared to the number of possible test objectives (in %). These can be determined for system requirements, software requirements and software design, e.g. functional coverage or requirements coverage;
testware defects:
- number of "defects" detected whose cause turned out to be wrong testing, compared to the total number of defects;
perception of quality:
- by means of reviews and interviews of users, testers and other people involved, e.g. provided by the quality department.
Metrics including trend analysis (e.g. predefined curve compared with actual situation) are used in test reporting.
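The first two result measurements above are simple formulas; a minimal sketch with hypothetical figures:

```python
# Illustrative sketch of the result measurements defined above.
# All numbers are hypothetical.

def detection_effectiveness(found_by_level, found_later):
    """Detected defects compared to the total defects present, in %.
    'found_later' approximates the hard-to-measure escaped defects, e.g.
    defects found in later test levels or after SOP."""
    total = found_by_level + found_later
    return 100.0 * found_by_level / total

def detection_efficiency(found_by_level, hours_spent):
    """Number of detected defects per spent hour."""
    return found_by_level / hours_spent

print(detection_effectiveness(80, 20))   # 80.0 -> 80% of defects caught here
print(detection_efficiency(80, 400))     # 0.2 defects per spent hour
```

Tracking these per test level makes the trend analysis mentioned above possible: a dropping effectiveness at one level shows up as a rising defect count at the next.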
System metrics
Metrics mentioned above are recorded for development, maintenance and after SOP.
Metrics are used in the assessment of the effectiveness and efficiency of the test process.
Organization metrics (>1 system)
Organization-wide mutually comparable metrics are maintained for the already mentioned data.

Metrics are used in assessing the effectiveness and efficiency of the separate test processes, to achieve an optimization of the generic test methodology and future test processes.
8. Test automation

Use of tools

A decision has been taken to automate certain activities in the planning and/or execution phases. The test manager and the party who allocates budget for the tools (generally the line management or project management) are involved in this decision.
Use is made of automated tools that support certain activities in the planning and execution phases (such as a defect registration tool and/or home-built stubs and drivers, code checkers, MiL, SiL and HiL).

The test management and the party allocating budget for the tools acknowledge that the tools being used provide more advantages than disadvantages.
Managed test automation

A well-considered decision has been taken regarding the parts of the test execution that should or should not be automated. The decision involves those types of test tools and test activities that belong to the test execution.
If the decision on automation of the test execution is a positive one, there will now also be a tool for test execution.

The introduction of new test tools is preceded by an inventory of technical aspects (does the test tool work in the technical environment?) and any possible preconditions set for the test process (for example, test cases should be established in a certain structure and not in a free-text form, so that the test tool can use this as input).
If use is made of a test sequencer for automated test execution, explicit consideration should be given during this decision to the maintainability of the test scripts included.

Most of the test tools can be reused for a future test project. To make this possible, the management of the test tools has been arranged. The demand that in general test tools should be reusable means that test tools used explicitly within one test project only need not be reusable.

The use of the test tools matches the desired methodology of the test process, which means that the use of a test tool may not lead to inefficiency or undesired limitations of the test process.
Optimal test automation
A well-considered decision has been taken regarding the parts of the test process that should or should not be automated. All possible types of test tool and all test activities are included in this decision.

There is insight into the cost/profit ratio for all test tools in use (where costs and profits need not merely be expressed in money).
There is a periodic review of the advantages of the test automation.
There is awareness of the developments on the test tool market.

New test tools for the test process are implemented according to a structured process. Aspects that require attention in this process include:
- aims (what should the automation yield in terms of time, money and/or quality);
- scope (which test levels and which activities should be automated);
- required personnel and expertise (any training to be taken);
- required technical infrastructure;
- selecting the tool;
- implementation of the tool;
- developing maintainable scripts;
- institutionalize management and control of the tool.
9. Test environment

Managed and controlled test environment


Only with the permission of the test manager are changes allowed to the test object and the test environment.

The test environment must be set up in time (this can also mean that the test object must be delivered in time if the test object is part of the test environment). In case of a dedicatedly designed and/or built test environment (e.g. stub components, etc.), the design, purchasing, installation and configuration must be planned.

The test environment is managed (with regard to setup, availability, maintenance, configuration management (software versions), error handling, authorizations, and system parts supplied by suppliers and third parties). The configuration is in line with the expectations of the next test level.

The saving and restoring of certain test situations with the associated version of the test environment (soft- and hardware) can be arranged quickly and easily. In case of a prototype car which is only available for a limited time, this will be almost impossible to realise later on in the project. For this case this checkpoint can be neglected.

The environment is sufficiently representative for the test to be performed. This depends on the scope of the test level and the risks of not testing. In general: the closer the test level is to the pre-production test, the more the test environment has to resemble the real environment.
The (hardware and software) requirements for the test environment are well defined, understood and documented.

The test environment which is supplied by the commissioner (e.g. prototype, external ECUs or prototype cars) is documented.
Commissioner and supplier shall coordinate the configuration of the shared/non-shared test environment.
Testing in the most suitable environment

Each test is performed in the most suitable environment, either by execution in another environment (the environment of the supplier or commissioner) or by quickly and easily adapting the own environment.

The environment is finished in time for the test and there is no disturbance by other activities during the test. In case of prototype cars the disturbance is there most of the time, because a prototype car can contain additional prototype ECUs which influence the test. In this case the disturbance must be minimised as far as possible under control of the tester.
The risks taken with adapted and changed environments are analyzed and adequate measures have been taken.
Environment on call
The environment which is most suited for a test is very flexible and can quickly be adapted to changing requirements.
10. Office and laboratory environment

Adequate and timely office and laboratory environment

The office and laboratory infrastructure needed for testing (offices, meeting rooms, telephones, PCs, network connections, software, printers, data communication connections, etc.) is arranged on time.

Matters related to office organization have a minimal impact on the progress of the test process (as few relocations as possible, physical distance between testers and the rest of the project not too large, etc.).

The office and laboratory infrastructure needed for external testing (e.g. test benches, car testing abroad) has a connection to the "headquarters".
11. Motivation and engagment

Assignment of budget and time


Testing is regarded by the people involved as necessary and important.
An amount of time and budget is allocated for testing.

Management controls testing based on time and money. A feature is that if the test time or budget is exceeded, initially a solution is sought within the test (doing overtime or employing extra people when exceeding these limits, or on the contrary reducing the test when there is remaining time and/or budget).

In the team there is enough knowledge and experience in the field of testing to complete the testing tasks allocated to the team. The team can use knowledge built up in expertise groups dedicated to a certain subject (e.g. automated testing, HIL testing).

The activities for testing are full-time for most participants during the test project (therefore not many competing activities).
There is a well defined relationship between the testers and other disciplines in the project and the organization.
Testing integrated in project organisation
All those involved find that testing has a noticeable positive influence on the quality of the product.
The management wants to have insight in the depth and quality of testing.

The management controls testing based on time, money and quality. A feature is that the solution for test problems (e.g. when exceeding test time or budget) is also sought outside the test project. Possibly the developer is addressed here.
In the project planning the cycle testing, rework and re-testing is taken into account.
Testing is involved in the planning of the delivery sequence of the parts.
The advices from testing are discussed in the project meetings.
Test engineering
The test team is involved in the design and realization to provide an optimal testability of the system ("design for testability").
The test team has sufficient knowledge and skills to provide a meaningful realization of the checkpoint mentioned above.
Recommendations of the test team are considered seriously by the organization and/or project.
Management supports testers (with people and means) and is working continually on the improvement of the test process.
Participation in testing is regarded as a "promotion", testing has a high status.
The development process is of sufficient maturity: at least time and quality are controlled.
Test jobs are described at an organization level, including career possibilities and reward structures.
12. Test functions and training

Test manager, integrator and testers

At least two roles are defined: test manager and tester. If it is part of the test project to integrate deliveries from different parties, the role of an integrator has to be defined additionally.
The tasks and responsibilities, with needed experience and possible training, have been defined and documented.

The test personnel has had specific test training (e.g. test management, test design techniques, etc.) or has sufficient experience in the field of testing.
For the test, domain expertise is available to the test team.
(Formal) Methodical, Technical and Functional support, Management of the test process, testware and infrastructure

The role Methodical Support is separately outlined. Its activities are defining and maintaining test instructions and techniques, and advising about and evaluating the right application of the above.
The role Technical Support is separately outlined.
The role Functional Support is separately outlined.

The task Management of the test process is outlined separately and is responsible for the registration, storage, and retrieval of the management objects of the test process. Sometimes one will carry out the management oneself, in other cases one will organize and/or evaluate that management. Objects to be managed are progress, budgets and defects.

The task Management of testware is outlined separately and is responsible for the registration, storage, and retrieval of the management objects of the testware. Sometimes one will carry out the management oneself, in other cases one will organize and/or evaluate that management. Objects to be managed are test documentation, test basis, test objects (internal), test cases, test files and databases, test instructions and procedures.

The task Management of the test infrastructure is outlined separately and is responsible for the registration, storage, and retrieval of the management objects of the test infrastructure. Sometimes one will carry out the management oneself, in other cases one will organize and/or evaluate that management. Objects to be managed are test environments (test databases) and test tools.
The persons who carry out these tasks have sufficient knowledge and experience.
The time needed for these tasks is planned. Supervision is carried out to see that these tasks are in fact performed.
Formal internal reviewing
Parallel to the test plan, an internal reviewing plan for testing is formulated.
The person for the test reviewing task has no other tasks within the test team.
The results of test review activities are used as input for further test process improvement.
The person, who performs the review, has sufficient test knowledge and experience.
13. Scope of methodology

Project specific
Methodology is formulated for each project.

The aspects described cover at least: description of the full life-cycle model of testing, management of the test process (time, money and quality), test product management, defect management, and test design techniques to be used.
Methodology is followed.
Project specific with external scope

The dependencies (test strategy, life-cycle model, test design techniques, communication, reporting, defect management and testware management) arranging the interfaces between commissioner and supplier are realized.
Organization generic
The methodology is defined in a generic model for the organization.
Each project works according to this generic model.
Variances are sufficiently argued and documented.
Organization optimizing, R&D activities

There is a structured feedback process (both formally elicited and implemented by the R&D department) on the methodology.

Structural maintenance and innovation (R&D) are done on the generic methodology, e.g. on the basis of feedback.
14. Communication

Internal communication

There is a periodical meeting within the test team and within the development team. This meeting has a fixed agenda, with main focus on progress (lead time and spent hours) and the quality of the object to be tested. The results of the meeting are documented by means of notes, a protocol or a status list.
Periodically, each team member participates in the meeting.
Deviations from the test plan are communicated and documented.
Project communication (defects, change control)
In the test team meeting minutes are taken.

In the test team meeting, besides progress and the quality of the test object, the quality of the test process is a fixed item on the agenda.

Periodically, the test manager reports about the progress and about the quality of the object to be tested in the project meeting. The test manager also reports about the quality of the test process.
Agreements in this meeting are documented.
The test manager is informed immediately about changes in planned and agreed delivery dates of the test basis as well as the test object and test environment (e.g. mechanical parts, prototype cars, simulators, software, models).

In a periodic defects meeting (or analysis meeting), solutions to defects are discussed between representatives of the test team and of other parties (e.g. supplier and/or commissioner) involved.
Testing is involved in change control for judging the impact of change proposals on the test effort.

Agreements for support are made between the test team and the supplier of the test object(s). These agreements include: handling of defects, solving of test-blocking defects, lines of communication, and the escalation procedure.
Communication in organization about the quality of the test processes
There is a periodic meeting in which propositions for improvement of the test methodology used and the quality of the test processes are discussed.
Participants are representatives of the test teams and of the line department for testing.
15. Reporting

Defects
The defects found are reported periodically, divided into solved and unsolved defects (pending or closed).
It is agreed beforehand, preferably in the test plan, what the reporting aspects are:
- content of reports
- interval of report generation (periodically, on request and ad hoc)
- addressee of reports
- formal/informal
Besides the commissioner of the test, other stakeholders like the developer of the system must be reported to.
Progress (status of tests and products), activities (costs and time, milestones), defects with priorities
The defects are reported, divided into severity categories according to clear and objective norms.

The progress of each test activity is documented and reported periodically. Aspects to be reported are: lead time and spent hours, which tests have been specified, what has been tested, what part of the object performed correctly and incorrectly, and what must still be tested.
Risks and recommendations, substantiated with metrics
A quality judgement on the test object is made. The judgement is based on the acceptance criteria, entry or exit criteria, and related to the test strategy.
Possible trends with respect to progress and quality are documented and reported periodically.

The reporting contains risks (for the commissioner) and recommendations.

The quality judgment and the detected trends are substantiated with metrics (from the defect administration and the progress monitoring), e.g. found defects against executed test cases per timeframe, or executed test cases against planned test cases.
Recommendations have a Software Process Improvement character.
Advice is given not only in the area of testing but also on other parts of the project.
16. Defect management

Internal defect management

The different stages of the life-cycle of the defects are administrated (up to and including re-test). Possible statuses are:
New
Assigned
In progress
Postponed
Rejected
Ready for retest
Retest OK
Closed
The following items of the defect are recorded:
unique identification
person who raised the defect
date
severity category
test object plus version
problem description
status
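The administered life-cycle and the recorded items above can be sketched as a minimal defect record. The transition table is an assumption for illustration; the checklist prescribes the statuses, not which transitions are legal.

```python
# Illustrative sketch: a defect record with the life-cycle statuses listed
# above, rejecting status transitions that skip the administered life-cycle.
# The ALLOWED table is a hypothetical policy, not prescribed by the checklist.

ALLOWED = {
    "New":              {"Assigned", "Rejected"},
    "Assigned":         {"In progress", "Postponed"},
    "In progress":      {"Ready for retest", "Postponed"},
    "Postponed":        {"Assigned"},
    "Rejected":         {"Closed"},
    "Ready for retest": {"Retest OK", "Assigned"},
    "Retest OK":        {"Closed"},
    "Closed":           set(),
}

class Defect:
    def __init__(self, ident, raised_by, date, severity, test_object, description):
        # The recorded items named in the checklist above.
        self.ident, self.raised_by, self.date = ident, raised_by, date
        self.severity, self.test_object = severity, test_object
        self.description, self.status = description, "New"

    def move_to(self, new_status):
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

d = Defect("D-001", "tester", "2024-01-15", "major", "ECU-SW 1.2", "stuck relay")
d.move_to("Assigned"); d.move_to("In progress"); d.move_to("Ready for retest")
print(d.status)  # Ready for retest
```

Keeping every transition explicit is what later makes it possible to record "all status transitions of the defect including dates", as required at the next level.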
Extensive defect management with flexible reporting facilities

Defect data needed for later trend analysis are recorded in detail:
- test case
- test level
- system part
- sub system
- priority (test blocking Y/N)
- test object plus version
- cause (probable + definitive)
- all status transitions of the defect including dates
- a description of the problem solution
- (version of) test object in which defect is solved
- problem solver
- test configuration
The administration supports extensive reporting possibilities, which means that reports can be selected and presented in various ways.
There is someone responsible for ensuring that defect administration is carried out properly and consistently.

Synchronization takes place between the defect management system of the supplier and the defect management system of the commissioner (workflow, possible states of defects, attributes, time for synchronization). This means that open defects mentioned at the moment of a (partial) delivery should be entered in the defect management system of the commissioner. All defects found by the commissioner must be submitted to the defect management system of the supplier.
Project defect management

The defect management system is provided by the commissioner (usually the OEM) and is accessible by the parties involved in the project (also the ones outside the commissioner organization).

Only one defect management system is in use throughout the whole project (e.g. the development of a certain ECU) when more than one independent organization is involved in the test process. The defects originating from the various disciplines (e.g. supplier and commissioner) or departments are submitted to the defect management system.

Every entity (commissioner, supplier, sub-supplier) has its own view of the defect management system. This view gives access to the kind of information necessary to do its job. Every entity manages its own information and decides what type of information is made accessible to whom by using authorization profiles.
17. Testware management

Internal testware management

The testware (test cases, test scripts, (initial) test data, trace files, etc.), test basis, test object, test environment (components, HW and SW), test configurations, test documentation and test guidelines are managed internally according to a described procedure, containing steps for delivery, registration, archiving and referencing.

The relations between the various parts (test basis, test object, testware, test environment, hardware, etc.) are
controlled.
Transfer to the test team takes place according to a standard procedure. The parts comprising a transfer should be known:
- which parts and versions of the test object (including Tailor-Made, Reuse/Core and Commercial-off-the-shelf components),
- which (version of the) test basis,
- solved defects and still open defects, including those from the developer himself.
Optionally other parts (such as source code, required hard- or software, testware from previous tests) may be included in the transfer.
External management of test basis and test object

The test basis and the test object are managed by the project according to a described procedure, with steps for registering, archiving and referencing.
Management includes the relations between the various parts (test basis and test object).

The test team is informed about changes in the test basis or test object in a timely fashion. This also applies to Commercial-Off-The-Shelf or Tailor-Made components or software.
Each requirement and/or design item is related to one or more test cases.

These relations are traceable through separate versions (e.g. system requirement A, version 1.0, is related to functional design B, version 1.3, which is related to programs C and D, versions 2.5 and 2.7, and to test cases X to Z, version 1.4).

Criteria for compliance with system requirements, software requirements and software design are defined and related to the relevant version of these requirements and design (depending on the scope of analysis, whether all three must be realized).
The new configuration items (delivered by internal and external parties) are only delivered and accepted in a standardized way.
Reusable testware

(A selection, agreed upon beforehand, of) the test products are completed after the end of the test (i.e. fully updated) and transferred to the maintenance organization, after which the transfer is formally agreed.
The transferred test products are actually reused.
18. Test process management

Planning and execution

Prior to the actual test activities a test plan is formulated in which all activities to be performed are defined and the stakeholders and their responsibilities are identified. For each activity there is an indication of the period in which it is executed, the resources (people or means) required and the products to be delivered.

The commissioner of the test reviews the test plan resulting from the planning phase. Changes in this plan should also be submitted for review to this commissioner.
Planning, executing, monitoring and adjusting
Monitoring of the execution of all planned activities takes place.
Each activity is also to be monitored in terms of time and money.

In case of deviations, adjustment takes place, either by adjusting the planning or by performing activities again. The adjustment is substantiated.
Deviations are documented and communicated to the commissioner.
Monitoring and adjustment in organisation

At an organizational level, monitoring of the application of the organization's methodology (methods, standards and procedures) is executed.
Deviations are documented and are reported to the test process.

In the case of deviations the risks are analyzed and adjustments are made, for instance by adjusting the methodology or by bringing the activities or products in line with the methodology. The adjustment is substantiated.
19. Evaluation

Informal evaluation
Checklists in combination with peer-expertise are used for the evaluation.
The evaluation and its results are documented and reported.
The handling of the results is monitored.

Testers and people representing the different stakeholders (e.g. project leader, technical experts, developers and users) are involved in these evaluations.
Evaluation techniques
In evaluating (intermediate) products, techniques are used; in other words, a formal and described working method is applied.
Evaluation strategy
A conscious evaluation of risks takes place.
There is a differentiation in the scope and the depth of the evaluation, depending on the possible risks and, if present, on the acceptance criteria: not all types of software are evaluated equally thoroughly, and not every quality attribute is evaluated equally thoroughly.
A choice is made from multiple evaluation techniques, suitable for the desired depth of an evaluation.
For re-evaluations a (simple) strategy determination takes place, in which a conscious choice is made between 'evaluation of solutions only' and 'complete re-evaluation'.

The strategy is defined and afterwards also executed. It is monitored that the evaluations are executed according to the strategy; if necessary, the execution is adjusted.
20. Low-level testing

Life cycle: planning, design and execution

For the low-level test (at least) the following phases are recognized: planning, design and execution. These phases are executed in sequence, for each subsystem, if applicable.

Each activity is supplied with sub-activities and/or aspects. These are meant as additional information and as a guideline. Activities to be executed per phase are mentioned in Sheet2.
White-box techniques

Besides informal test design techniques, the low-level tests also use formal test design techniques, providing a traceable path from the test basis to test cases.

For the low-level tests it is possible to make a substantiated statement about the level of coverage of the test set (in relation to the test basis).
The testware is reusable (within the test team) thanks to a uniform working method.
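As an illustration of such a substantiated coverage statement, a formal white-box technique like decision (branch) coverage makes the claim checkable rather than estimated. The function and test set below are hypothetical examples, not part of the checklist:

```python
def wiper_speed(rain: int, manual_override: bool) -> int:
    """Hypothetical unit under low-level test."""
    if manual_override:      # decision 1
        return 3
    if rain > 50:            # decision 2
        return 2
    return 1

# Test set chosen so that every decision outcome (True and False) is
# exercised at least once: 100% decision coverage can be claimed and shown.
cases = [
    ((10, True),  3),  # decision 1 True
    ((80, False), 2),  # decision 1 False, decision 2 True
    ((10, False), 1),  # decision 2 False
]
for args, expected in cases:
    assert wiper_speed(*args) == expected
```

Because the test cases are derived from the decisions in the code, the coverage statement follows from the derivation itself instead of from after-the-fact measurement alone.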
Low-level test strategy
A motivated consideration of the product risks takes place, in which the commissioner is involved and for which knowledge of the system, its use and its operational management is required.

There is a differentiation with respect to the area of consideration and the depth of the tests, depending on the risks and, if present, on the acceptance criteria: not all kinds of programs are tested equally thoroughly; this is also the case for quality characteristics.
One or multiple formal or informal test design techniques are used, suitable for the desired test depth.
For retests a (simple) strategy determination takes place, in which a substantiated choice is made between "test solutions only" and "complete retest".

The strategy is determined and subsequently executed. It is checked that the execution of the tests takes place according to the strategy. If necessary, adjustments are made.
21. Integration test

Integration identified as a separate and planned process


A person is made responsible for integrating, including integration testing, the separate parts into an assembled product (the integrator role).
The sequence of delivery from supplier(s) to testing has been previously defined and documented.

The integration and testing sequence for both hardware and software (the so-called integration strategy), based on the architecture and the delivery plan of the parts, has been defined and documented. This planning includes:
- Activities
- Dependencies
- Milestones

Transfer to and from the test team takes place according to a standard procedure. The parts comprising a transfer should be known (in the form of a delivery report): which parts and versions of the test object, which version of the test basis, (un)solved defects and the test configuration.
In the case of deviations from the plan (e.g. late delivery of parts) adjustments are made. The adjustment is substantiated.
Strategy for integration

The sequence of delivery and the entry criteria for the parts to be delivered are coordinated, documented and agreed between the parts supplier and the integrator.

The number and sequence of integration steps are based on a motivated consideration of the estimated risks of the parts of the product, the technical constraints and the impact of the changes.

Entry criteria for the parts and exit criteria for the integration tests have been defined, documented and used. The exit criteria are preferably defined in terms of achieved test coverage and/or the number of unsolved defects.
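Exit criteria of this kind are mechanically checkable. A minimal sketch, in which the thresholds, severity categories and function name are illustrative assumptions:

```python
def exit_criteria_met(coverage: float, unsolved: dict,
                      min_coverage: float = 0.90,
                      max_blocking: int = 0,
                      max_major: int = 2) -> bool:
    """Check exit criteria expressed as achieved coverage and unsolved defects.

    `unsolved` maps severity category -> count of open defects; the
    categories and thresholds here are hypothetical.
    """
    return (coverage >= min_coverage
            and unsolved.get("blocking", 0) <= max_blocking
            and unsolved.get("major", 0) <= max_major)

# Example: an integration step that meets the (assumed) criteria...
ok = exit_criteria_met(0.93, {"blocking": 0, "major": 1, "minor": 7})
# ...and one that fails on an open blocking defect.
blocked = exit_criteria_met(0.93, {"blocking": 1})
```

Defining the criteria as data like this keeps them documented and makes the pass/fail decision at each integration step reproducible.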

There is a differentiation in the depth of the tests, depending on the risks and, if present, depending on the acceptance and exit criteria: not all parts, variants and versions are tested equally thoroughly, and not all quality characteristics are tested equally thoroughly.

For retests also a (simple) strategy determination takes place, in which a motivated choice between 'test solutions only' and 'complete retest' is made.

Deviations from the coordinating strategy are reported, after which a substantiated adjustment of the coordinating strategy is made, based on the risks.

Agreements for support by the part supplier are made. These agreements involve: solving of defects, solving of test-blocking defects, lines of communication (e.g. integration meetings on a regular basis), and an escalation procedure.
Standardized strategy for integration
The procedures are defined in a generic approach for the organization.
Each project works according to this generic approach.
Variances are sufficiently argued and documented.

Life cycle model - level A

For the Planning phase:

Activity
formulate assignment

Sub activities / aspects


- commissioner and supplier
- scope
- aim
- precondition
- starting points

determine the test basis

- determine relevant documentation (like system requirements, software requirements, software design and other documents that are used to derive test cases)
- identify documentation

define scope (area(s) of consideration): what will and what will not be tested

determine test strategy for this iteration

- strategy determination
- estimating

set up organization

- determine required functions


- allocate tasks, authorizations and
responsibilities
- describe organization
- allocate personnel
- determine training
- determine communication structures
- determine reporting lines

identify test deliverables

- determine test products
- set up norms and standards

define infrastructure and tools

- define test environment
- define test tools
- define office environment
- define infrastructure planning

define test management

- define test process management (progress, quality, reporting)
- define infrastructure management
- define test product management
- define defects procedure

determine planning (aspects like activities, dependencies, milestones, start & end dates and needed resources)

- define general planning

produce test plan

- determine risks, threats and measures in relation to the test process
- determine critical test environments (e.g. EMV-hall, prototype car)
- determine test plan
- fixate test plan (commissioner approval)

synchronize the planning of the test process with the complete product process

For the Design phase:

Activity
design test cases and test scripts

Sub activities / aspects


- test cases
- define starting test databases
- test scripts
- define parameter settings

specify entry check of test object and infrastructure

- checklist test object and infrastructure (completeness check)
- test script pre-test

realize test infrastructure

- test environment
- test tools

For the Execution phase:

Activity
intake test object and infrastructure

Sub activities / aspects


- intake infrastructure and test object
(completeness check)
- perform pre-test

create initial data (starting conditions for execution of test cases)

- enter initial data
- set parameter values
execute (re)tests

- execute test scripts
- execute static tests
(incl. evaluation of test results and analysing differences)
Life cycle model - Level B

For the Preparation phase:

Activity
inspection of test basis (check if the test basis is suitable for the selected test design techniques)

Sub activities / aspects


- determine relevant documentation
- define checklists for study
- define documentation (study)
- report about testability

For the Completion phase:

Activity
Archive the testware (complete and bring the testware up to date, in a way that the testware is re-usable for other test processes)

Sub activities / aspects


- select testware to be archived
- collect and update testware
- transfer testware

evaluate test object

- determine open defects and identified trends
- determine risks at release
- formulate advice

evaluate test process

- evaluation of test strategy


- planning versus realization

formulate final report


Low Level testing - Level A

For the Planning phase:


Activity
formulate assignment

Sub-activities/aspects
- commissioner and supplier
- area of consideration
- aim
- preconditions
- starting points

determine test basis

- determine relevant documentation


- identify documentation

setup organization

- determine required functions


- allocate tasks, authorizations and
responsibilities
- describe organization
- allocate personnel

describe test products

- determine test products

define infrastructure and tools

- define test environment
- define test tools

define test management

- define test process management (progress, quality, reporting)
- define test product management
- define defect procedure

determine planning

- formulate global planning

produce test plan

- determine risks, threats and measures
- determine test plan
- fixate test plan (commissioner approval)

For the Design phase:


Activity
design test cases and test scripts

Sub-activities/aspects
- test cases
- define initial data
- test scripts

For the Execution phase:


Activity
execute (re-)tests

Sub-activities/aspects
- execute test scripts
- execute static tests (e.g. conformance checks on coding standards), incl. evaluation of test results and analysis of differences
- produce code metrics

Life cycle model - level A


Product
defined in test plan

defined in test plan

defined in test plan


defined in test plan

defined in test plan


defined in test plan

defined in test plan

defined in test plan

test plan

defined in test plan; ideally the planning is integrated in the overall project planning

Product
- test cases
- definition starting test database / table for parameter settings
- test scripts

operational test environment and tools

Product
testable test object

initial data sets

- test defects
- test reports

Life cycle model - Level B

Product
- test basis defects
- testability report

Product
testware

release advice, defined in final report

defined in final report


final report

Low Level testing - Level A

Product
determined in test plan

determined in test plan


determined in test plan

determined in test plan


determined in test plan
determined in test plan

determined in test plan


test plan

Product
- test cases
- initial data sets
- tests scripts

Product
- test defects
- test reports
