
The T-Advanced Model: A Continuous Approach To Software Testing Maturity Assessment
Marcel Robeer, Joey van den Heuvel, Lars van den Bos, and Luc Beelen
Utrecht University, Utrecht, The Netherlands
{m.j.robeer, j.vandenheuvel2, l.vandenbos, l.g.n.m.beelen}@students.uu.nl

Abstract. Software testing has become one of the most vital parts of producing an end product, but ensuring a good product is no easy task. How can an organization ensure that the right strategy towards testing software is chosen in order to provide a quality product? Several maturity models, such as the Test Maturity Model Integration (TMMi), offer a method to place companies into different categories, but this and other models lack a continuous approach towards assessing the testing maturity level of an organization. In this paper, we propose a new model, the T-Advanced model, that is built upon the TMMi model. To compare the two models, interviews about various aspects of software testing were conducted at twelve organizations in the Dutch product software market. Analysis of the T-Advanced model shows that these companies have a different distribution when compared to the TMMi model, giving a more realistic assessment of their actual test maturity. By comparing and discussing the reasons for the differences between the two models, greater insight is established into the testing maturity environment in product software companies in The Netherlands and into software testing in general.
Keywords: Software Testing, Test Maturity, T-Advanced Model, Test
Maturity Model Integration

1 Introduction: Software Testing Maturity

As software takes on a more vital role in the daily activities of both individuals and organizations, there is an ever-increasing need for quality in software. Software testing encapsulates the activities that are aimed at (i) evaluating an attribute or capability of a program or system, and (ii) determining whether it meets its required results [13]. It is considered to have the most important role in the software development process in achieving and assessing software product quality [9]. While several papers argue that software testing remains an important or even vital task in product development, often accounting for approximately 50 percent of the total development time and costs, it remains difficult to execute properly [6],[19],[23],[30].


Testing maturity models provide a way for organizations to evaluate and improve their software testing process, as the quality of testing is dependent on the
maturity of the process [22]. Investments into good software testing can have a major impact on an organization's financial success [12]. Various case studies found that the overall maturity level of software testing in organizations is still low. In 2004, Ng et al. [20] found that nearly 65 percent of the studied Australian software producing organizations had used some form of structured software testing in the three years prior. The other 35 percent used no structured software testing
techniques or methodologies. This view is confirmed in three case studies performed in the Canadian province of Alberta [10,11]. These case studies showed
that software testing has become exceedingly important since 2004. However, as
of 2013, the gap between theory and industry practice remains immense.
Even though the software producing organizations are generally aware of
the low level of maturity, there is still a barrier for organizations to improve
the software testing process [12]. The main reasons for this barrier are a lack of expertise in the industry and the fact that testing process improvement only happens when it can be justified by possible savings later in the process [16],[20].
There are various models to assess the software testing maturity of an organization. One of these models is the Test Maturity Model Integration (TMMi).
The TMMi is an independent test maturity model which can be used by organizations to improve their test processes [34]. Before improvements can be
made, the current situation needs to be assessed. The TMMi Foundation strives
to make their model the accepted standard in the industry, which is done by
actively using feedback to improve the model [26]. This opportunity led to our improvement of the TMMi model assessment: the T-Advanced model.
Our research into software testing maturity in the Dutch product software industry,
therefore, is two-sided. On the one hand, we propose the T-Advanced model,
a continuous scoring method for TMMi assessments. On the other hand, we
want to present the research findings, as this data on Dutch product software
producing organizations may provide valuable insights.
The rest of the paper is structured as follows: Section 2 continues with an
overview of the various software maturity models and provides the rationale behind the choice for the TMMi. In Section 3 the applied research approach is
introduced. Subsequently, in Section 4 the T-Advanced model is proposed. The
software maturity of the software producing organizations is determined by using both the TMMi and T-Advanced models. Section 5 presents the research
findings. These findings are then further discussed in Section 6. Finally, in Section 7 the insights that were gathered during the research are wrapped up and
presented, and areas for further research are given.

2 Models for Software Testing Maturity

Nowadays, there are numerous test maturity models being used in the software
producing industry. The two most well-known models are the Test Maturity
Model (TMM) and Test Process Improvement (TPI) model. TMM focuses on
testing issues to help an organization obtain test maturity, by providing them
with a set of recommended practices and an assessment model [6],[15]. TPI,
on the other hand, focuses on simplifying the development process of software
testing, to determine the weak and strong areas of software testing within an
organization [1],[8]. Both models are based on the Capability Maturity Model
(CMM) [17]. CMM provides an organization with a model so they can establish
process improvement programs [25]. Another model used for testing measurement is the Testability Support Model (TSM). The main difference with the
aforementioned models is that the TSM mainly focuses on lowering the costs of
software testing [31]. As of today, however, Bertolino [4] and Ericson et al. [7]
note that there is still a gap between the state of practice and the state of the
art of software testing. This is why the TIM (Test Improvement Model) was
developed, with the purpose of improving test processes by using a maturity
model and an assessment procedure. This model is based on the principles of
the TMM and CMM. However, the development of new, better suited models
to measure test maturity did not stop with the development of the TIM model.
Nowadays, new models are developed [29], one of them being the TMMi model.
2.1 Test Maturity Model Integration

The scope of our research is focused on the TMMi model. First and foremost
because it is based on the best practices and standards for software testing [6].
Secondly, the model is constantly updated and improved using a data set and
feedback from organizations that are currently using the model [26],[34]. Thirdly,
the TMMi defines testing in the broad sense, not limited to techniques alone [32].
A vital part of the TMMi model is the test process assessment. Test process
assessments focus on identifying improvement opportunities and understanding
the organization's position relative to the selected model or standard. The TMMi
provides an excellent reference model to be used during such assessments. The
TMMi model generates a benchmark, which can be used for internal purposes
and for external partners such as customers or suppliers. The benchmark reassures the partners that an organization is following industry standards in test
maturity. Using a widely accepted model is also useful when an organization integrates its information technology with a third party, since it provides a quick
overview of the testing practices and the test environment present.
The TMMi model consists of five levels. An organization can climb from level one (the lowest level; an ad-hoc and unmanaged approach towards testing) to level five (the highest maturity level; a well-managed and effectively performing approach towards testing). Each maturity level contains testing practices that can be learned and used in a systematic way to support the testing process and its qualities [32],[35]. Each hierarchical level, except for level one, contains various process areas which are the focus of improvement at that level. Level one does not contain any process areas, since an organization is automatically placed at this level. In each process area there is a set of goals, which can be used to assess the maturity level of an organization.
The five levels and their respective process areas are:

Robeer, Van den Heuvel, Van den Bos, and Beelen

1. Initial
2. Managed: (a) test policy and strategy, (b) test planning, (c) test monitoring
and control, (d) test design and execution, and (e) test environment.
3. Defined: (a) test organization, (b) test training program, (c) test life cycle
and integration, (d) non-functional testing, and (e) peer reviews.
4. Measured: (a) test measurement, (b) software quality evaluation, and (c) advanced peer reviews.
5. Optimization: (a) defect prevention, (b) test process optimization, and
(c) quality control.

Fig. 1. The levels and process areas of the Test Maturity Model Integration [34]
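For illustration, the staged structure above can be captured in a small lookup table (a sketch in Python; the mapping layout is ours, while the level names and process areas come from the list above):

```python
# The five TMMi maturity levels and their process areas, as listed above.
# Level one (Initial) has no process areas: every organization starts there.
TMMI_LEVELS = {
    1: ("Initial", []),
    2: ("Managed", ["test policy and strategy", "test planning",
                    "test monitoring and control", "test design and execution",
                    "test environment"]),
    3: ("Defined", ["test organization", "test training program",
                    "test life cycle and integration", "non-functional testing",
                    "peer reviews"]),
    4: ("Measured", ["test measurement", "software quality evaluation",
                     "advanced peer reviews"]),
    5: ("Optimization", ["defect prevention", "test process optimization",
                         "quality control"]),
}
```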

3 Research Approach

In order to investigate the maturity of the software testing process in The Netherlands, we conducted a literature study that led to the T-Advanced model, and twelve participants working for twelve different companies in the Dutch product software industry took part in a semi-structured in-depth interview. The twelve representatives of the organizations were chosen using either convenience sampling or purposeful sampling [24]. Some representatives used additional people
in the organization to help answer the questions.
The interviews were conducted at the participant companies by twelve pairs of students attending the Utrecht University Product Software course. The interviews took place throughout May 2015. Each interview lasted approximately one to two hours.
Since the purpose of the research is to get an overview of the current maturity state, and not to judge the organizations on their performance, the organizations' names remain anonymous.
3.1 Research Methodology

To explore the subsurface of the maturity of software testing, the research was subdivided into seven steps. The first step was a literature study into software testing maturity in general, the TMMi model, and improvements in the TMMi model. This led to step two, where a proposal was made for the T-Advanced model, a continuous version of the TMMi model. Subsequently, face-to-face interviews were conducted. A semi-structured approach to the interviews was chosen because it can be used to gain quality in-depth data in just one interview, which can be used to compare the different participant organizations, while it gives the chance to investigate non-predetermined aspects [2,3]. The interview process consisted of (iii) the creation of the interview protocol, (iv) the data collection in the actual interview, (v) an interview report containing the interview data and information about the organization and representative, and finally (vi) feedback on the interview report from the participants. Lastly, the information gathered during the interviews was used to score the organizations and decide on their maturity levels. In the following sections, we detail the steps taken in the interview process.
Step I Literature study. In a literature study into software testing, with
an emphasis on maturity, the focus shifted towards the TMMi model for test
maturity. The study mainly focused on the TMMi, in combination with case
studies of testing maturity in various countries.
Step II The T-Advanced model. A proposal was made for the T-Advanced
model. Based on literature, the main findings on improvements that could be
made in the TMMi model were taken into account to create a formula to calculate
the software testing maturity level of an organization. The proposal will be
further elaborated in Section 4.
Step III Interview protocol. For the interview, one closed-ended question
and four open-ended questions were formulated. The interview questions were

Robeer, Van den Heuvel, Van den Bos, and Beelen

peer-reviewed within the research group. The closed-ended question, Q1, was
used to see whether the participant organization used software testing in their
development process. This was to prevent any data from organizations that
were not using software testing from contaminating the results of the remaining
questions.
The four open-ended questions were used to gain an in-depth insight into the testing process of the organizations. To elicit more insightful answers, the questions stated explicitly that the interviewer should not stop at just recording an answer, but should also find out the rationale behind it.
The open-ended questions encompassed various aspects of the testing process
of the organization. These aspects were (i) the ratio of manual and automatic
software testing, (ii) the organization's most important software testing methods,
(iii) the order in which the testing methods are executed in the development
process, (iv) the percentage of the total development costs that is spent on
software testing, and (v) the inputs and outputs of the software test processes.

Step IV Interview. The semi-structured interview consisted of three parts.


Firstly, the two interviewers introduced themselves, as well as the context of the
product software field of study. They expanded on the structure of the interview
and told the representative what would happen to the data gathered during
the interview. Subsequently, the interviewers got to know the representative and the representative's company. Then, the representative was asked to show the different products in the portfolio of the company. One of these was agreed upon by the interviewers and the representative to be the main focus of the interview.
Secondly, the main part of the interview was started. The interview was recorded to be used in the creation of an interview report. The main part was made up of multiple topics regarding various internal and external aspects of the organization, including our own topic. The interviewers were given free choice of the order in which they would cover these topics.
Finally, the interview was closed off with the opportunity for the representative to ask any remaining questions. The interviewer explained that the data
from the interview would result in an interview report for the representative to
provide feedback on and that was ultimately used in this research. Lastly, the
interviewer thanked the representative for their participation.

Step V Interview report. The pairs of interviewers constructed an interview


report by transcribing the interview that they held with their representative.
Additionally, information about the organization and the representative was added to this report.

Step VI Feedback. The results in the interview reports were validated by the
companies where the interviews were conducted.


Step VII Information processing. Information gathered during the interviews was processed. For every organization, the testing maturity criteria they adhered to were decided upon. This was then used to compare the various
organizations using both the TMMi and T-Advanced models.
3.2 Participants

The twelve Dutch product software companies that took part in the interviews
range in size from twelve to approximately 5000 employees and vary in age.
The companies have different types of software products including Enterprise
Resource Planning (ERP) systems, Content Management Systems (CMS), an
e-procurement platform, and Business Process Management (BPM) systems.
Some of these companies focus on a niche market while others use mass marketing strategies. Examples of the niches chosen by these companies are insurance
companies, industrial manufacturing, employment agencies, and banks. Each of
the companies used software testing in their development process.
The representatives take on various roles in their company, ranging from developer to product manager, and from software tester to Chief Technical Officer
(CTO) or Chief Executive Officer (CEO).
The product software companies provided a mix of business models, ranging from commercial off-the-shelf software to Software as a Service. Some of
them outsourced parts of their work to third parties and some of them had an
international customer base.
An overview of the different characteristics of the organizations and the representatives' roles is provided in Table 1. To anonymize the organizations, their names have been replaced with Firm A through L. Information in the table that is unknown is substituted with a hyphen (-). Confidential information is replaced with an x.

4 The T-Advanced Model

Currently, an organization being assessed with the TMMi model must fulfill all practices and goals in a level to obtain an achievement at the next level [33]. For instance, no assessment is made for any level four achievements if not all of the level three process areas are adhered to. In their empirical analysis of the TMMi model, Rungi and Matulevicius [27] already noted that a continuous version of the TMMi model would greatly benefit organizations. It would allow organizations to choose their own paths regarding which aspects of their software testing maturity to focus improvements on. However, it remains important that the improvements made in
the software testing process should not merely be technological. Myers et al. [19]
indicate the importance of the psychology and economics of testing. These can
be seen as mainly organizational aspects [18]. The focus on the organizational
aspects in the TMMi model, such as a test training program and software quality
evaluation, lies mainly in the latter stages of the model.



Table 1. Participant organizations' characteristics and representatives' roles

Firm  Employee count  Main product type   Representative's role          Software testing?  Founded
A     80-100          CMS                 Senior developer               Yes                1999
B     5000            Various             Product alignment manager      Yes                1992
C     1250            x                   x                              Yes                2000
D     51-200          Various             Product owner / team manager   Yes                2006
E     52              -                   CEO, software tester           Yes                2006
F     40+             ERP                 Operations director            Yes                2005
G     45              ERP                 Founder / product team leader  Yes                1998
H     12              Press release tool  Product manager                Yes                2008
I     25              E-procurement       Software architect             Yes                2000
J     29              Various             General manager                Yes                1992
K     40              BPM                 CTO                            Yes                1986
L     100             CMS                 -                              Yes                2008

To find a combination between these views we propose the T-Advanced Model, which combines the need for a continuous model with a focus on the staged
approach. The T-Advanced Model uses a variable scale to assess the testing
maturity level of an organization. Process improvements done at the higher levels of the TMMi have a greater weight in comparison with the lower levels. This leads to the higher levels of the TMMi having a larger share of the T-Advanced level scale than the lower levels, which results in a more spread-out distribution.
4.1 Calculating the T-Advanced Level

The first step in determining the T-Advanced level of an organization is calculating the organization's grade, denoted by G. The grade is calculated using Equation 1 and consists of several elements. Firstly, a calculation is made for every level n. Since the TMMi model consists of five levels, the sum over levels one through five is taken. Secondly, the number of criteria at level n is denoted by s_n, where s_n ∈ ℝ. To get the share of achieved criteria out of the total number of criteria, we divide s_{n,achieved} by s_{n,max}. For this share, 0 ≤ s_{n,achieved}/s_{n,max} ≤ 1 always holds. Thirdly, a weight is assigned to every level, in order to increase the weight of the higher levels compared to the lower levels. This is done using a Borda count [28]. A weight of j is given to level five, a weight of j − 1 to level four, j − 2 to level three, etc. We determine j so that the weight of level n equals n for every level n in the TMMi model. This means that j = 5, and the weight can be substituted by n (j ∈ ℕ, j := n) in the equation.


G = \sum_{n=1}^{5} n \cdot \frac{s_{n,\mathrm{achieved}}}{s_{n,\mathrm{max}}} \qquad (1)

Subsequently, the second step is determining the test maturity (TM). Determining an organization's test maturity is done using Equation 2. To transform the grade G back to the original one-to-five scale, as used with the TMMi, a function is needed for which each increase of TM by one increases G by the weight n of the next level. For this function, G should equal one at TM = 1, G = 1 + 2 = 3 at TM = 2, G = 1 + 2 + 3 = 6 at TM = 3, etc. The function for which this holds true is G = 0.5 TM^2 + 0.5 TM. Finding TM can then be done by expressing TM in terms of G for 0 ≤ G ≤ 15.
TM = \sqrt{2G + 0.25} - 0.5 \qquad (2)
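For completeness, Equation 2 follows from inverting the relation G = 0.5 TM^2 + 0.5 TM and keeping the positive root:

```latex
G = \tfrac{1}{2}TM^{2} + \tfrac{1}{2}TM
\;\Longrightarrow\; TM^{2} + TM - 2G = 0
\;\Longrightarrow\; TM = \frac{-1 + \sqrt{1 + 8G}}{2} = \sqrt{2G + 0.25} - 0.5 .
```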

Finally, the T-Advanced level can be decided on. A TM-score is at a T-Advanced level if it falls within the maximum achievable TM-score for that level, but is larger than the maximum achievable TM-score for the previous level. In other words, if 0 ≤ TM < 1 the organization is at level one, and if 1 ≤ TM < 2 the organization is at level two. Note that the turning points per level are at the whole numbers. If an organization has a TM-score of four, it has achieved the first four levels and is therefore at level five.
An example. For the levels of the TMMi, five criteria (s_{n,max} = 5 for n = 1, …, 5) are chosen to assess the testing maturity level of organization X. Organization X has currently achieved four criteria of level one, two criteria of level two, and three of level three's criteria. No criteria at levels four or five are accomplished. This leads to a grade of G_X = 1 · 4/5 + 2 · 2/5 + 3 · 3/5 = 0.8 + 0.8 + 1.8 = 3.4. This produces a test maturity score of TM_X = √(2 · 3.4 + 0.25) − 0.5 ≈ 2.16. Our conclusion would be that organization X is at the start of level three of test maturity in the T-Advanced Model, as 2 ≤ TM_X < 3. If this was done using the classical staged approach of the TMMi model, the organization would have been classified as a level one organization, as it does not fully meet the criteria of level one.
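The two-step calculation and the worked example above can be sketched in a few lines of Python (the function names and the cap at level five are our choices, not part of the model):

```python
import math

def grade(achieved, maximum):
    """Borda-weighted grade G: level n contributes n * (s_n_achieved / s_n_max)."""
    return sum(n * a / m for n, (a, m) in enumerate(zip(achieved, maximum), start=1))

def tm_score(g):
    """Test maturity TM: the positive root of G = 0.5*TM^2 + 0.5*TM."""
    return math.sqrt(2 * g + 0.25) - 0.5

def t_advanced_level(tm):
    """A TM-score between k-1 (inclusive) and k (exclusive) means level k."""
    return min(math.floor(tm) + 1, 5)

# Organization X from the worked example: 4/5, 2/5, 3/5, 0/5 and 0/5 criteria met.
g_x = grade([4, 2, 3, 0, 0], [5, 5, 5, 5, 5])  # 0.8 + 0.8 + 1.8 = 3.4
tm_x = tm_score(g_x)                            # about 2.16: start of level three
```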
4.2 Advantages of Using a Continuous Model

Using a continuous model poses some advantages for an organization. O'Hara
found that in practice, the TMMi model usage was mainly limited to informal
use [21]. By being more actively involved in assessing the current software testing
situation, an organization can continuously be aware of their current state and
work on improving it. This can be done by choosing their own path, while still
being led by a model to identify the most important improvement areas. For
the organizations and the people in the organizations, maturity models can be
perceived on a day-to-day basis if the steps in reaching a new level are not too
steep [5]. Using a continuous version of the T-Advanced model, assessments of an

organization's test maturity level can be done more easily on a day-to-day basis and better benchmarks can be created for usage in the industry. Software testing maturity should always account for organizations' environment- or product/project-related needs [14], and testing improvement should not be limited by predefined levels.
4.3 Criteria

For this particular case study, 41 criteria were identified based on the TMMi
model. The characteristics of the different levels of the TMMi were taken, and
duplicates were removed. Each organization is analyzed by using the principle
of a check-list, deciding for each criterion whether an organization meets the
criterion. The criteria used in this research are the official characteristics for each maturity level as listed by the TMMi Foundation [32]. The characteristics are numbered so that the number of criteria per level, s_{n,max}, is easy to see. Below, the lists of criteria per level are presented.
Level 1: Initial
1. Usage of testing for debugging
2. Ad-hoc testing
3. Testing to make sure the software runs without any critical failure
Level 2: Managed
1. Testing as a process that is managed and separated from debugging
2. Test environment
3. Company and/or program wide test strategy
4. Developed test plans with defined test approaches, based on product risk management
5. Product risk management based on documented requirements
6. Testing is monitored and controlled
7. Testing techniques chosen by analyzing requirements and specifications
8. Multi-leveled testing (component, integration, system, and acceptance testing)
9. Testing to verify that the product satisfies the requirements
10. Functionality testing

Level 3: Defined
1. Testing is integrated into the development life cycle
2. Test measurement
3. Testing is done at an early project stage (requirements phase) and documented in a master plan
4. The test organization and focus on testing are seen as a profession, not a side job
5. A specific test training program is used
6. A review program is present (does not have to be linked to the testing process)
7. Peer reviews as a defect detection technique
8. More than functional testing (i.e., non-functional, usability and reliability testing)
9. Standards, process descriptions and procedures differ for each instance/process
10. Group of specialized/trained employees in at least a starting stage with low responsibility
Level 4: Measured
1. Testing as a thoroughly defined, well-founded measurable process with an
application for evaluation
2. Software quality evaluation
3. An organization wide test measurement program to evaluate the quality of
testing processes, assess productivity and monitor improvements
4. A product quality evaluation based on quality needs and attributes
5. Product quality measurements using quantitative criteria (e.g., usability, reliability, maintainability)
6. Reviews to measure the product quality in the early life cycle and control it
7. Peer reviews as a product quality measurement technique
8. Peer reviews as a fully integrated process as part of the test strategy
9. Group of specialized/trained employees with at least medium responsibility
Level 5: Optimization
1. Quality control
2. Testing is statistically controlled and predictable
3. Testing focused on continuous improvement
4. Percentage of automation in software testing
5. Re-use of tests
6. A permanent test process improvement group of specialized/trained employees with a high responsibility
7. Defect prevention process to identify common causes of defects and define actions to enable prevention
8. Sampling-based quality measurements in the test process
9. Re-use of test ware and test processes
Meeting a criterion. In order to check if an organization meets an individual criterion, a principle was used that is quite similar to the reasoning that the TMMi Foundation handles. The official method for reasoning of the TMMi Foundation is called TAMAR, which stands for the TMMi Assessment Method Application Requirements. This method defines the requirements considered essential to assessment methods intended for use with the Test Maturity Model Integration (TMMi) [33]. The assessment methods mentioned are comparable to the criteria elements of this research. The method defines the measuring scale for criteria by classifying the answers into: N for not achieved, P for partially achieved, L for largely achieved and F for fully achieved. It prescribes that whenever an organization satisfies a criterion in the range from zero to 15%, N is the most appropriate rating. The P range runs from 15 up to 50%, the L range from 50 to 85%, and scores between 85 and 100% fall under F. For an organization to increase a level, every criterion has to receive at least an L, except for one, which may receive a P.
In this research, we used a similar method of reasoning. As the data on the various criteria was not extensive, a decision had to be made whether a company fully or only partially met one of the criteria. Following the logic of the TAMAR method, we decided that when an organization adhered to a criterion for less than 50%, it was counted as not met and received a score of zero. A criterion met for more than 50% was counted as successfully met and received a score of one.
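The TAMAR rating scale and the binary scoring rule described above can be sketched as follows (the function names are ours; the thresholds are those stated above, and the behavior at exactly 50% is our assumption):

```python
def tamar_rating(pct):
    """TAMAR rating for a criterion satisfied to pct percent:
    N (not), P (partially), L (largely) or F (fully) achieved."""
    if pct < 15:
        return "N"
    if pct < 50:
        return "P"
    if pct < 85:
        return "L"
    return "F"

def criterion_score(pct):
    """Binary score used in this study: 1 if the criterion is met for
    at least 50 percent (treatment of exactly 50 is an assumption), else 0."""
    return 1 if pct >= 50 else 0
```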

5 Results

After acquiring the interview data, a composition of each organization with its own scores on the TMMi criteria was at our disposal. These were then used to determine the test maturity levels of the organizations. While the scoring system is the same, the determination of an organization's level depends on whether the T-Advanced or TMMi model is used. For both models, the same criteria were used. The number of criteria per level differs. As can be seen in Section 4.3, there were three criteria for level one, ten for levels two and three, and nine for levels four and five.
Table 2 provides an overview of the number of achieved criteria for the twelve Dutch product software organizations. For every level, the number of criteria that the organization attained is given. The table can be interpreted as follows. As an example, we take organization A. Organization A has met one out of three criteria at level one, eight out of ten at level two, five out of ten at level three, zero out of nine at level four, and 3.2 out of nine at level five. Using the T-Advanced method, the TM-score can then be calculated. This results in a TM-score for the firm of 1.56. This table will later be used to determine the test maturity levels of the organizations using both the TMMi model and the T-Advanced model.
Even though it has been said that organizations could not partially meet one of the criteria in our study, this was based on the assumption that no detailed assessment could be done. An exception had to be made for the percentage of automated testing compared to manual testing, a level five criterion. Since the exact ratio was known here, a detailed assessment could be made, and this resulted in the decimal numbers in the level five row in Table 2.


Table 2. Organizations' achieved criteria per level and calculated TM-score

Firm      A     B     C     D     E     F     G     H     I     J     K     L
Level 1   1     3     3     1     1     3     2     3     3     3     3     3
Level 2   8     10    5     2     6     6     3     3     6     9     9     2
Level 3   5     2     0     0     0     2     1     2     4     5     5     1
Level 4   0     0     1     0     1     5     2     1     3     1     8     0
Level 5   3.2   0.35  1.5   0     0     2.6   0     1     0.1   1.5   4.8   1.2
TM-score  1.56  1.67  1.45  0.65  1.03  1.85  1.18  1.42  1.72  1.87  2.38  1.27

The classification of the organizations into test maturity levels, based on the number of achieved criteria in Table 2, is given in Table 3. The first column provides the test maturity level, and the following two columns provide the organizations' anonymized names for the two methods of test maturity assessment.
In the T-Advanced method, the organizations were placed at a level based
on their T M -score and corresponding maturity level. For the TMMi method,
an organization had to conform every criteria in a level to ascend to the next
level. This is with the exception of level one, which is the minimal level for
the organization to be in. For instance, to be at level three using the TMMi
assessment method, an organization has to adhere to the three criteria at level
one, every criteria of level two, and every criteria of level three.
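The TMMi staging rule can be sketched as follows. The per-level criteria totals at levels one, two, three, and five (3, 10, 10, and 9) follow the counts used in this paper; the level-four total is not stated in this excerpt, so the value of 10 below is a placeholder assumption.

```python
# Per-level criteria totals. Levels 1-3 and 5 follow the counts reported in
# this paper; the level-4 total of 10 is an assumption (not stated here).
TOTALS = [3, 10, 10, 10, 9]

def tmmi_level(achieved):
    """Highest level whose criteria, and those of all lower levels, are all met."""
    level = 1  # level one is the minimal level an organization can be in
    for lvl in range(2, 6):
        if all(achieved[i] >= TOTALS[i] for i in range(lvl)):
            level = lvl
        else:
            break  # a gap at any level blocks all higher levels
    return level

# Firms from Table 2: only B clears every level-one and level-two criterion.
print(tmmi_level([3, 10, 2, 0, 0.35]))  # firm B -> 2
print(tmmi_level([3, 9, 5, 1, 1.5]))    # firm J -> 1 (one level-2 criterion short)
```

Under this rule, a single missed criterion at a low level caps the organization there regardless of achievements at higher levels, which is exactly the coarseness the continuous TM-score is meant to avoid.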
Table 3. Comparison of T-Advanced levels and TMMi levels of the studied organizations

Level   T-Advanced        TMMi
1       D                 A, C-L
2       A-C, E-J, L       B
3       K                 -
4       -                 -
5       -                 -

As can be observed in Table 3, the TMMi model would rate every software-producing
organization's testing practices at level one, except for firm B. The
reason lies in the strictness of the model: an organization has to meet
every criterion of a level to be at that level, and every previous level has
to be adhered to in full. Organizations J and K are close to reaching level two in
the TMMi model, as they both meet three out of three criteria at level one and nine out
of ten at level two. However, they are still rated as level one organizations and
are therefore seen as immature and comparable to other level one organizations.
This is taken into account in the T-Advanced model. As higher levels weigh
more in the T-Advanced approach, a different distribution arises. As visible
in Table 3, the organizations are mainly at level two, except for organizations D
and K: D is at level one, while K is at level three. By studying their TM-scores
in Table 2, we notice that organization D only adheres to three criteria: one at
level one and two at level two. This indicates that the organization is at a very
low test maturity level. Organization K, however, meets numerous criteria
across many levels. While it would have been classified as a level one organization
using the TMMi method, it is classified as a level three organization in the
T-Advanced model.
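The mapping from a continuous TM-score to a discrete maturity level is defined earlier in the paper; the floor-based rule below is an assumption, but it reproduces all the placements in Table 3 (D: 0.65 to level one, A: 1.56 to level two, K: 2.38 to level three).

```python
import math

def t_advanced_level(tm_score: float) -> int:
    """Map a TM-score in [0, 5] to a maturity level (assumed floor-based rule)."""
    return min(5, math.floor(tm_score) + 1)

# TM-scores from Table 2
for firm, score in {"D": 0.65, "A": 1.56, "B": 1.67, "K": 2.38}.items():
    print(firm, t_advanced_level(score))
```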
Some TM-scores differ, while others are close or equal. Organizations
that share a TM-score can still reach that level of test maturity via their own
path. As an example, let us compare two organizations with close
TM-scores. Firm A was rated with a TM-score of 1.56 and firm B with
1.67, a difference of only 0.11. This can be considered a small distance,
since TM-scores range from zero to five and the interval between adjacent
levels is a whole point. When analyzing the testing methods of
these organizations, however, it is evident that they differ. While firm A
has five different test methods and firm B has four, they share only one common
method (unit testing). Moreover, firm B used more automation in its
test processes, with 35% compared to the 20% of firm A. Firm A, however,
spent more on its software testing (40-50%) than firm B (25%). This shows
that two different organizations can still be compared using the T-Advanced
method, and both reap the benefits of having an accurately measured testing
maturity level.

Discussion

As stated in the introduction, software producing organizations are aware
of their low level of maturity in software testing [12], which can be explained
by a lack of expertise or by the financial uncertainties behind decisions to innovate [16], [19]. The conclusion will support and elaborate on the part of this statement concerning the level of software testing maturity. Furthermore, it
has to be taken into account that organizations differ from each other. During
this research, and after analyzing the results, an even bigger aspect affecting the
choice of test activities came to the fore. Some organizations produce pure product software, while others add features desired
by clients, making their software a combination of product and custom software.
This difference affects the testing activities that the interviewed organizations choose to perform, limiting the reliability of the sample in this
research.
Furthermore, some interviewed employees might have had little knowledge
about the testing process, since they were not the right persons to ask about
the topic, or their organizations were not aware of the importance and possibilities of
testing and the diversity of testing methods.
Another limitation concerns the number of organizations that could be
analyzed within this research. This might affect the validity of the research
results presented in this paper, due to a lack of representativeness. Given the
large number of questions asked individually at each organization and the
additional time this took, we accepted the limitation of being restricted to
twelve organizations. Moreover, the limited number of organizations analyzed
also influences the comparison between the TMMi results and our T-Advanced
results. The TMMi results showed that every company except one was classified
at level one, whereas the companies were spread out over the first three levels
in the T-Advanced model. A sample of twelve is simply not representative; for
example, there is a fair probability that twelve other organizations in the
Netherlands would end up in TMMi level two.
Furthermore, there is a chance of subjectivity hidden in the answers of the
interviewed experts. They might have felt the need to meet certain expectations
and to compete with the results of other organizations and competitors. Instead
of fully objective answers, this might have produced some subjective answers,
describing the organizations' testing activities as different and probably
more effective, intense and efficient than they actually are. This might slightly
lower the validity of the research. As an attempt to reduce this limitation, we
benchmarked and analyzed the companies anonymously.
Finally, the fact that we had to rely on other students to gather the data set
for us could be a potential limitation. We were not present at the moment the
data was gathered in the different product software organizations so we cannot
be entirely certain that the data is accurate and correct.
If warranted, the next step in our study will be to evaluate the benchmark
results and show them to the participating organizations, asking whether the
results met their expectations, thereby testing their awareness of the importance
of testing.

Conclusion and Further Research

Software testing has become more and more important, meaning that research on this topic is necessary and useful for organizations as well. The
focus of this paper lay on analyzing twelve Dutch product software organizations with respect to their software testing maturity level. These organizations were
rated by two models: the TMMi model, one of the best-known models of software testing maturity, and the T-Advanced model
introduced in this paper. Other existing testing models have been mentioned in the paper, in order to create an overview of testing models and to
position the two compared models. The T-Advanced model combines
the leveled approach of the TMMi model with the need for a continuous
model. The difference between the TMMi model and the T-Advanced model is
that in the TMMi model the focus on organizational aspects mainly lies
in the later levels, while in the T-Advanced model the focus lies on every level equally.
The major finding of this research, derived from the results, is that the two
models distributed the organizations differently. While the TMMi model rated
every organization at maturity level one (except firm B), the T-Advanced
model classified the organizations across the first three levels. To be more
precise about the T-Advanced results: one organization was rated at level one,
ten organizations at level two, and one organization at level three. As elaborated
in the Results section, the difference mainly comes from giving criteria at higher
levels a relatively larger weight, while dropping the TMMi rule that every
criterion of a level and of the levels beneath it must be met. The T-Advanced
results present the organizations as more mature, experienced and driven in
software testing, since they were ranked higher. Still, there is room for
improvement: no organization was placed at level four or five, so all are missing
important test characteristics that would likely lead to greater success in testing
effectively and efficiently.
After writing this paper, we are convinced that research on the software
testing maturity subject must be expanded to develop an understanding of the
relationship between the type of organization and its testing processes. Our
research can be used as reference material on the maturity of software testing
in product software organizations in the Netherlands. Using our paper, product
software organizations can obtain results concerning their software testing
maturity level and use this data to compare themselves against the organizations
that were interviewed for this paper. Also, our research can be redone using a
larger sample of product software organizations to gain a better view of the
level of software testing maturity in the Netherlands.
Finally, further research might be necessary to accurately map the software
testing processes used by product software organizations in the Netherlands. It
should focus on establishing the maturity level of testing within organizations
and should provide an explanation for the current maturity levels of testing
present in the Netherlands. Moreover, it should give organizations more insight
into how to improve their level of software testing maturity, for example using
guidelines from research.

Acknowledgements. We would like to thank the participants for their openness
about their practices in software testing processes. Furthermore, we would like
to thank the anonymous reviewers of the paper for their valuable feedback.

References
1. Andersin, J.: TPI a model for Test Process Improvement. In: Seminar on Quality
Models for Software Engineering. University of Helsinki, Helsinki (2004)
2. Bailey, K.D.: Methods of social research. 4th edn. Free Press, New York (2007)
3. Bernard, H.R.: Research methods in anthropology: Qualitative and quantitative
approaches. 5th edn. Altamira Press (2011)
4. Bertolino, A.: The (im)maturity level of software testing. WERST Proceedings/ACM SIGSOFT SEN 29(5) (2004)
5. Buglione, L.: Leveraging reuse-related maturity issues for achieving higher maturity and capability levels. In: Safe and Secure Software Reuse, 13th International Conference on Software Reuse (ICSR 2013), pp. 343-355 (2013)
6. Burnstein, I., Suwanassart, T., Carlson, C.R.: Developing a testing maturity model for software test process evaluation. In: Proceedings of the International Test Conference (ITC'96), pp. 581-589 (1996)
7. Ericson, T., Subotic, A., Ursing, S.: TIM - A Test Improvement Model. In: Software Testing, Verification & Reliability (STVR) 7(4), pp. 229-246. John Wiley & Sons, Inc., Hoboken, NJ (1997)
8. Computer Science Dept., Illinois Institute of Technology: Test maturity model
project (2007). http://www.cs.iit.edu/research/tmm.html
9. Desai, S., Srivastava, A.: Software testing: A practical approach. PHI Learning, Delhi (2012)
10. Garousi, V., Varma, T.: A replicated survey of software testing practices in the Canadian province of Alberta: What has changed from 2004 to 2009? Journal of Systems and Software 83(11), pp. 2251-2262 (2010)
11. Garousi, V., Zhi, J.: A survey of software testing practices in Canada. Journal of Systems and Software 86(5), pp. 1354-1376 (2013)
12. Grindal, M., Offutt, J., Mellin, J.: On the testing maturity of software producing organizations. In: Proceedings of the Testing: Academic & Industrial Conference on Practice And Research Techniques, pp. 171-180 (2006)
13. Hetzel, W. C.: The complete guide to software testing 2nd edn. QED Information
Sciences, Inc., Wellesley, MA (1988)
14. Jacobs, J.C., Trienekens, J.J.M.: Towards a metrics based verification and validation maturity model. In: Proceedings of the 10th International Workshop on Software Technology and Engineering Practice (STEP'02), pp. 123-128 (2002)
15. Ryu, H., Ryu, D.-K., Baik, J.: A strategic test process improvement approach using an ontological description for MND-TMM. In: Seventh IEEE/ACIS International Conference on Computer and Information Science (ICIS 2008), pp. 561-566. IEEE, Portland, OR (2008)
16. Kasurinen, J., Taipale, O., Smolander, K.: How test organizations adopt new testing practices and methods? In: IEEE International Conference on Software Testing, Verification and Validation, pp. 553-558 (2011)
17. Kulkarni, S.: Test process maturity models - Yesterday, today and tomorrow. In: Proceedings of the 6th Annual International Software Testing Conference, Delhi, India (2006)
18. Martin, D., Rooksby, J., Rouncefield, M., Sommerville, I.: Good organisational reasons for bad software testing: An ethnographic study of testing in a small software company. In: ICSE 2007. IEEE, Minneapolis (2007)
19. Myers, G. J., Sandler, C., Badgett, T.: The art of software testing 3rd edn. John
Wiley & Sons, Inc., Hoboken, NJ (2011)
20. Ng, S.P., Murnane, T., Reed, K., Grant, D., Chen, T.Y.: A preliminary survey on software testing practices in Australia. In: Proceedings of the 2004 Australian Software Engineering Conference. IEEE Computer Society, Los Alamitos, CA (2004)
21. O'Hara, F.: Experiences from informal test process assessments in Ireland - Top 10 findings. In: Software Process Improvement and Capability Determination, Proceedings of the 11th International SPICE Conference, pp. 194-196. Springer, Berlin Heidelberg (2011)
22. O'Regan, G.: Introduction to software quality. Springer (2014)
23. Osterweil, L., Clarke, L. A., DeMillo, R. A., Feldman, S. I., McKeeman, B., Salasin,
E. F. M., Jackson, D., Wallace, D., Harrold, M. J.: Strategic directions in software
quality. In: ACM Computing Surveys 28(4) (1996)
24. Patton, M.: Qualitative evaluation and research methods, pp. 169-186. Sage, Beverly Hills, CA (1990)
25. Paulk, M.: Capability Maturity Model for software. Encyclopedia of Software Engineering. John Wiley & Sons, Inc., Hoboken, NJ (2002)
26. Rasking, M.: Experiences developing TMMi as a public model. In: Software Process Improvement and Capability Determination, pp. 190-193. Springer-Verlag, Berlin Heidelberg (2011)
27. Rungi, K., Matulevicius, R.: Empirical analysis of the Test Maturity Model integration (TMMi). In: Information and Software Technologies, pp. 376-391 (2013)
28. Saari, D.G.: The optimal ranking method is the Borda count. Discussion Paper
638, Northwestern University, Center for Mathematical Studies in Economics and
Management Science (1985)
29. Skersys, T., Butleris, R., Butkiene, R.: Information and Software Technologies. In:
19th International Conference (ICIST 2013)
30. Spillner, A., Linz, T., Schaefer, H.: Software testing foundations: A study guide for
the certified tester exam 4th edn. Rocky Nook, Inc., Santa Barbara, CA (2014)
31. Swinkels, R.: A comparison of TMM and other test process improvement tools. Technical Report, Frits Philips Institute, Technische Universiteit Eindhoven, Netherlands (2002)
32. TMMi Foundation: Test Maturity Model integration (TMMi) Release 1.0 (2012).
http://www.tmmi.org/pdf/TMMi.Framework.pdf
33. TMMi Foundation: TMMi Assessment Method Application Requirements (TAMAR) (2014). http://www.tmmi.org/pdf/TMMi.TAMAR.pdf
34. TMMi Foundation: Welcome To The TMMi Foundation (2012). http://www.tmmi.
org
35. Veenendaal, E., Cannegieter, J.J.: Test Maturity Model integration (TMMi) - Where are we today? Results of the first TMMi benchmark. In: Testing Experience 3(3), pp. 72-74 (2012)
