
Agile Development Methods

for Mobile Applications

Andrei Cristian Spataru

Master of Science
Computer Science
School of Informatics
University of Edinburgh
2010

Abstract
This thesis is aimed at evaluating the suitability of agile methods for mobile application
development projects, bringing a set of improvements to an established agile method
called Mobile-D, and providing tool support to enable these improvements, facilitating
performance testing and usage logging in the lifecycle. The motivation for this work is
to better understand mobile development, and to improve related activities through
research in the field and development of a support tool. After establishing agile
methods as a good approach towards mobile application development, a number of
improvements to the Mobile-D method are presented, including a study in mobile
application categories, related paradigms, end-user inclusion in the lifecycle, as well as
performance testing of components and adoption of software product line principles.
The support tool enabling some of these improvements is then presented, with
functionalities including performance testing for Android components, usage logging
and automatic test case generation. These contributions are intended to bring Mobile-D closer
to an ideal mobile application development methodology, while providing useful
features that can also be used outside the process, either in the form of practices or tools.

Acknowledgements
I would like to thank my supervisor, Mr. Stuart Anderson, for all the insightful ideas,
suggestions and feedback I've received throughout my work. I would also like to thank
the Dinu Patriciu Foundation for funding my studies; I am grateful for the opportunity
they have given me. Special thanks to Panos Tsogas, and to Marinos Argyrou, who kindly
provided his Android application for me to experiment with.
Finally, I would like to thank my parents for their full support during my studies.


Declaration
I declare that this thesis was composed by myself, that the work contained herein is my
own except where explicitly stated otherwise in the text, and that this work has not
been submitted for any other degree or professional qualification except as specified.

(Andrei Cristian Spataru)


Table of Contents

1 Introduction
1.1 Mobile application development
1.2 Agile development for mobile applications
1.3 Mobile-D overview
1.4 Improvement approaches

2 Improvements to Mobile-D
2.1 Categories of mobile applications
2.2 Alternative approaches to mobile development
2.3 Bringing end-users into the lifecycle
2.4 Performance testing in Mobile-D
2.5 Software product line principles
2.6 Questionnaire results

3 Android Resource Management
3.1 System description
3.1.1 Evaluate method performance
3.1.2 Evaluate method sequence performance
3.1.3 Evaluate test performance
3.1.4 Log user actions
3.1.5 Analyze usage logs
3.1.6 Generate tests from logs
3.1.7 System utilization
3.2 Impact on Mobile-D

4 Conclusions and further work
4.1 Overview of improvements
4.2 Discussion
4.3 Further work
4.3.1 Mobile-D
4.3.2 Support tool

5 Bibliography

List of Figures

Figure 1. Mobile-D phases and stages; Source: (VTT Electronics, 2006)
Figure 2. Proposed categories of mobile applications, based on (Oinas-Kukkonen & Kurkela, 2003), (Unhelkar & Murugesan, 2010) and (Kunz & Black, 1999)
Figure 3. Scope definition stage, adapted from (VTT Electronics, 2006)
Figure 4. Mobile development method proposed in (Rahimian & Ramsin, 2008)
Figure 5. Main elements of stakeholder identification; Source: (Sharp, Finkelstein, & Galal, 1999)
Figure 6. Stakeholder establishment stage, adapted from (VTT Electronics, 2006)
Figure 7. End-user establishment steps
Figure 8. Initial requirements collection steps; Source: (VTT Electronics, 2006)
Figure 9. Mobile-D with added Evolve phase. Adapted from (VTT Electronics, 2006)
Figure 10. Performance requirements evolution model; Source: (Ho, Johnson, Williams, & Maximilien, 2006)
Figure 11. Tasks for Project establishment stage; Source: (VTT Electronics, 2006)
Figure 12. Working day including the performance testing task; Adapted from (VTT Electronics, 2006)
Figure 13. Adoption Factory pattern, high-level view; Source: (Jones & Northrop, 2010)
Figure 14. Use Case Diagram for the system
Figure 15. Spinner (left) and MultiRes (right) applications
Figure 16. Wizard used to generate a performance test case for methods
Figure 17. Code generated by method performance wizard
Figure 18. Code generated by method sequence performance wizard
Figure 19. Method queue performance test log
Figure 20. Wizard used to generate a timed test for existing Android test cases
Figure 21. Code generated by timed test wizard
Figure 22. Usage log lifecycle
Figure 23. Logging instructions
Figure 24. Log generated for 5 runs of the MultiRes application
Figure 25. Log file statistics view
Figure 26. Logs and serialized objects
Figure 27. Test case generation wizard
Figure 28. Code created by test case generation wizard
Figure 29. Test result interpretation for generated test cases

List of Tables

Table 1. Methods for performance evaluation
Table 2. Methods available for test performance evaluation

Chapter 1
Introduction

1.1 Mobile application development


The mobile applications market is currently undergoing rapid expansion, as mobile
platforms continue to improve in performance, and as users' need for a wide
variety of mobile applications increases. The latest mobile platforms allow for
extensive utilization of network resources, and thus offer a strong alternative to
workstations and associated software.
Software development for mobile platforms comes with unique features and
constraints that apply to most of the lifecycle stages. The development environment
and the technologies that support the software are different compared to traditional
settings. The most important distinguishing characteristics are identified in
(Abrahamsson, et al., 2004). Environment particularities include: a high level of
competitiveness; necessarily short time-to-delivery; and added difficulty in identifying
stakeholders and their requirements. Development teams must face the challenge of a
dynamic environment, with frequent modifications in customer needs and
expectations (Abrahamsson, 2007). Technological constraints apply to mobile
platforms in the form of limited physical resources and rapidly changing specifications.
There is also a great variety of devices, each with particular hardware characteristics,
firmware and operating systems. Another view of the constraints associated with
mobile applications is presented in (Hayes, 2003). The author mentions two types of
constraints, evolving and inherent. Evolving constraints, such as bandwidth, coverage
and security, currently apply to the mobile technology, but are likely to be addressed
and possibly resolved in the near future. On the other hand, inherent constraints such
as limited screen real estate, reduced data entry capability (due to a limited keypad for
example), memory capacity, processing power and limited power reserve, are
permanent, at least relative to desktop environments. Various approaches must be
used in order to lower the impact of inherent constraints.

Due to significant differences in the environment and in platform specifications, mobile application development requires a suitable development methodology. By taking into
account the main features of a mobile application development scenario, a matching
development paradigm can be identified. These features are presented in
(Abrahamsson, 2005): the software is released in an uncertain and dynamic
environment with high levels of competition. Teams that develop mobile applications
are usually small to medium-sized, co-located, and generally use object-oriented tools
and practices. The applications themselves are small-sized, are not safety-critical, and
do not have to satisfy interoperability or reliability constraints. They are delivered in
rapid releases in order to meet market demands, and are targeted at a large number of
end-users. The author suggests agile methods as a suitable approach to development,
by comparing the above features to agile home ground characteristics: small-scale,
application-level software, developed in a highly dynamic environment by a small to
medium-sized team using object-oriented approaches, in relatively short development
cycles.
The following section provides a short overview of agile methods, focusing on their
suitability for mobile application development.

1.2 Agile development for mobile applications


Agile methods represent a relatively new approach to software development, becoming
widespread in the last decade. The ideas behind these methods originate from the
principles of Lean Manufacturing (in the 1940s) and Agile Manufacturing (1990s),
which emphasized the adaptability of enterprises to a dynamic environment (Salo,
2006). The unique features of agile methods derive from the list of principles found in
the Agile Manifesto: individuals and interactions are more important than processes
and tools, working software is more valuable than comprehensive documentation,
customer collaboration is preferred over contract negotiation, and adaptability is
valued higher than creating and following a plan (Agile Alliance, 2001).
In (Boehm & Turner, 2003), the authors identify concepts fundamental to agile
development: simple design principles, a large number of releases in a short time
frame, extensive use of refactoring, pair programming, test-driven development, and

seeing change as an advantage. Another definition of agile methods is provided in (Abrahamsson, et al., 2002): an agile development method is incremental (multiple
releases), cooperative (a strong cooperation between developer and client),
straightforward (easy to understand and modify) and adaptive (allowing for frequent
changes).
The use of agile methods in software development has received both supporting and
opposing arguments. The main argument against agile methods is the asserted lack of
scientific validation for associated activities and practices, as well as the difficulty of
integrating plan-based practices with agile ones. Indeed, some projects present a mix of
plan-based and agile home ground characteristics, in which case a balance must be
achieved in the use of both types of methods (Boehm, 2002). There is also some
amount of uncertainty in distinguishing agile methods from ad-hoc programming.
However, as stated in (Salo, 2006), agile methods do provide an organized
development approach.
When trying to compare mobile application characteristics to those of an agile method,
difficulty comes partly from the fact that boundaries of agile methodologies are not
clearly established. A comprehensive overview of research in the field is presented in
(Dyba & Dingsoyr, 2009). The authors partition studies into four categories:
introduction and adaptation, human and social factors, perception of agile methods,
and comparative studies. Findings indicate that the introduction of agile methods to
software projects yields benefits, especially if agile practices do not completely replace
traditional ones, but work in conjunction with them. However, according to the
authors, studies in the field are mostly focused on Extreme Programming (XP), are
limited in number and are of doubtful quality.
In (Abrahamsson, 2005), the author performs a direct comparison between agile
method characteristics and mobile application features, focusing on environment
volatility, amount of documentation produced, amount of planning involved, size of the
development team, scale of the application in-development, customer identification,
and object orientation. Except for customer identification, all other agile characteristics
render the methods suitable for mobile application development. The customer may be
identified as the software distributor. However, especially in the case of mobile
applications, the customer identification problem is much more complex, as will be
detailed in a later section of this work.

A new development methodology, specifically tailored for mobile application development, called Mobile-D, is presented in (Abrahamsson, et al., 2004). The method
is based on agile practices, drawing elements from well-established agile methods such
as Extreme Programming and Crystal Methodologies, but also from the heavier
Rational Unified Process. Additional information on XP is available in (Beck & Andres,
2004), while Crystal Methodologies are thoroughly described in (Cockburn, 2004). The
Rational Unified Process is explained from a practical point of view in (Kroll &
Kruchten, 2003). Practices associated with Mobile-D include test-driven development,
pair programming, continuous integration, refactoring, as well as software process
improvement tasks. The methodology serves as a basis for this work, and will be
further detailed in the following section.

1.3 Mobile-D overview


According to (Abrahamsson, et al., 2004), the Mobile-D process should be used by a
team of at most ten co-located developers, working towards a product delivery within
ten weeks. There are nine main elements involved in the different practices throughout
the development cycle:
1. Phasing and Placing
2. Architecture Line
3. Mobile Test-Driven Development
4. Continuous Integration
5. Pair Programming
6. Metrics
7. Agile Software Process Improvement
8. Off-Site Customer
9. User-Centred Focus
The Architecture Line in the methodology is a new addition to the already established
agile practices. An architecture line is used to capture an organization's knowledge of
architectural solutions, from both internal and external sources, and to use these
solutions when needed.

Mobile-D comprises five phases: Explore, Initialize, Productionize, Stabilize, and System
Test & Fix. Each of these phases has a number of associated stages, tasks and practices.
The complete specifications of the method are available in (VTT Electronics, 2006).
In the first phase, Explore, the development team must generate a plan and establish
project characteristics. This is done in three stages: stakeholder establishment, scope
definition and project establishment. Tasks associated with this phase include customer
establishment (those customers that take active part in the development process),
initial project planning and requirements collection, and process establishment.
In the next phase, Initialize, the development team and all active stakeholders
understand the product in development and prepare the key resources necessary for
production activities, such as physical, technological, and communications resources.
This phase is divided into three stages: project set-up, initial planning and trial day.
The Productionize phase mainly comprises implementation activities. At the end of this
phase, most of the implementation should be complete. This phase is divided into
planning days, working days, and release days. Planning days are aimed at enhancing the
development process, prioritizing and analyzing requirements, planning the iteration
contents, and creating acceptance tests that will be run later in release days. In working
days, the Test-Driven Development (TDD) practice is used to implement functionalities,
according to the pre-established plan for the current iteration. Using TDD along with
Continuous Integration, developers create unit tests, write code that passes the tests,
and integrate new code with the existing version of the product, addressing any errors
that may arise in the integration process. Finally, in release days a working version of
the system is produced and validated through acceptance testing.
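
As an illustration of this working-day rhythm, the sketch below shows one TDD step in JUnit 3 style, the style used by the Android testing tools of the period; the class under test is hypothetical and not taken from the Mobile-D specification.

import junit.framework.TestCase;

// Step 1 of the TDD cycle: a unit test written before the production code,
// stating the expected behaviour of a hypothetical CurrencyConverter.
public class CurrencyConverterTest extends TestCase {

    public void testConvertsEurosToPounds() {
        CurrencyConverter converter = new CurrencyConverter(0.85); // EUR-to-GBP rate
        assertEquals(8.5, converter.convert(10.0), 0.001);
    }
}

// Step 2: the minimal production code that makes the test pass. Under
// Continuous Integration it is then merged with the existing version of the
// product, and any integration errors are addressed before the next test.
class CurrencyConverter {
    private final double rate;

    CurrencyConverter(double rate) {
        this.rate = rate;
    }

    double convert(double amount) {
        return amount * rate;
    }
}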
The final two phases, Stabilize and System Test & Fix, are used for product finalization
and testing respectively. They comprise stages similar to the Productionize phase, with
some modifications to accommodate documentation building and system testing.

Figure 1. Mobile-D phases and stages; Source: (VTT Electronics, 2006)

Mobile-D has already been applied in development projects, and some advantages have
been observed, such as increased progress visibility, earlier discovery and repair of
technical issues, low defect density in the final product, and constant progress in
development (Abrahamsson, et al., 2004). Other applications of the method are
presented in (Pikkarainen, Salo, & Still, 2005) and (Hulkko & Abrahamsson, 2005).

1.4 Improvement approaches


In order to obtain a set of improvements to a given development methodology, one
must first analyze the key method characteristics that have yielded successful results in
previous projects. For mobile application development methods, key success
characteristics are identified in (Rahimian & Ramsin, 2008). These are agility of the
approach, market consciousness, software product line support, architecture-based
development, support for reusability, inclusion of review and learning sessions, and
early specification of physical architecture. Some of these key features can already be
found in the Mobile-D method (agility, early specification of physical architecture,
architecture-based development, and review and learning sessions); however, the
method could be improved if more of these key success features could be integrated.

Possible additions to Mobile-D include better market awareness, software product line
support and reusability support. The last two features could be implemented with the
final goal of cost minimization, but may be infeasible for micro-companies, or companies
with low experience in mobile application development, as these companies may not
have established libraries of components in order to successfully apply software reuse
principles. Review and learning sessions can be found in the methodology as Post-iteration Workshops and Wrap-ups, designed to improve the development process by
identifying its strengths and weaknesses, and to communicate progress and problems
within the team.
The list of ideal traits found in (Rahimian & Ramsin, 2008) can be further extended. In
the case of mobile applications, end-user identification is not straightforward. This has
also been pointed out in (Abrahamsson, 2005), when comparing agile home ground
characteristics to mobile application development requirements. For agile methods, the
customer has to be easily identifiable, which is not always the case for mobile
applications. In order to address this issue, the list of key characteristics of a mobile
development methodology can be extended to include balancing customer, end-user,
and sometimes platform vendor requirements. These three entities rarely coincide
when dealing with mobile applications. In cases where the entity requesting the
software product differs from the end-user, the contracting entity should be made
aware of the possibly different set of end-user requirements. For the development
team, this means a balance must be established between customer requirements, end-user expectations and platform vendor constraints. We can assume that identifying and
including end-users in the development process will prove beneficial for ensuring their
requirements and expectations are met, and for finding possible defects. The approach
to integrating end-user requirements in the development process is detailed in a later
section. The issue has been addressed by organizations through user testing sessions in
special labs, observing the way users interact with the system. The following list
represents an adapted prioritization of traits for successful mobile application
development methodologies.
1. Agility
2. Market consciousness
3. Early specification of physical architecture
4. End-user feedback support
5. Software product line support

6. Reusability support
7. Architecture-based development
8. Review and learning sessions
This work focuses on improvements to the mobile application development methods in
general, and to the Mobile-D method in particular. Using the above list as a guideline,
the project investigates possible improvements in particular sub-areas.
By examining categories of successful mobile applications, the product-in-development
may be aligned to one category, and specific measures can be taken to improve quality
and minimize risk for the current application. The alignment procedure can be included
as a task in a certain stage of Mobile-D. Principles from other development
methodologies can be integrated into Mobile-D, when the scenario would favour their
use. The entire method could import concepts from other approaches, and become
adaptable to the project at hand.
The end-users' contribution to the development process must be identified, in order to
ensure the success of a product. More precisely, the timing, extent and potential
benefits of end-user participation must be established. Obtaining a balance between the
three sets of requirements, customer, end-user and platform vendor, is important
especially in those cases where the requirements are not convergent.
Mobile platforms suffer from performance constraints. Mobile-D makes extensive use of
the Test-Driven Development practice; however, the test cases do not take into account
resource utilization, partly because the tool support for this task is poor. In order to
improve the methodology, TDD must be adapted to take into account resource testing,
especially for limited-resource situations. This modification also affects related tasks in
different stages of the Mobile-D methodology. When potential resource bottlenecks are
identified, they can be eliminated by writing better code, or later through refactoring
and the use of patterns.
Finally, the methodology can be improved by examining the benefits of software
product line principles and by establishing their correct integration with Mobile-D.

Chapter 2
Improvements to Mobile-D

2.1 Categories of mobile applications


There are many ways in which mobile applications can be categorised. Nevertheless,
any plausible partition can lead to better results in the development process, due to a
higher focus on issues that are specific to the respective application type. Depending on
the experience of the development team, different measures can be taken. For a
seasoned team, identifying the application type means experiences from developing
similar applications in the past can be used. Teams with less development experience
can also benefit from categorisation, by obtaining and implementing a specific set of
guidelines and principles for the specific type of application.
In (Varshney & Vetter, 2001) the authors identify twelve classes of mobile commerce
applications. Example classes include Mobile financial applications (banking and micro-payments), Product location and shopping (locating and ordering items), and Mobile
entertainment services (video-on-demand and similar services). However, these classes
only apply to mobile commerce applications (mobile applications that involve
transactions of goods and services) and do not help to provide guidelines for developing
new applications. For this purpose, the findings in (Oinas-Kukkonen & Kurkela, 2003)
prove more useful. Citing a report by Ramsay and Nielsen on WAP usability, the authors
divide mobile applications into two groups: highly goal-driven and entertainment-focused. The definition of each group is quite simple: highly goal-driven applications
aim to provide fast responses to inquiries, while entertainment-focused applications
help users pass the time. The authors move on to provide seven guiding principles for
the development of highly goal-driven mobile services: mobility (provide information
while on the move), usefulness, relevance (include only relevant information), ease of
use, fluency of navigation (most important information should be easiest to locate),
user-centred (adapt to the user's way of interaction and way of thinking), and
personalization (adapt to users' needs and capabilities). Providing guidelines for the

entertainment-focused category of applications can be quite difficult and is outside the scope of the current discussion.
A taxonomy of mobile applications from an enterprise point of view is established in
(Unhelkar & Murugesan, 2010). The authors state that this organization and
representation of mobile applications will make the demands placed on the
applications more visible, and will help developers focus on the most important aspects
of design and implementation for each project. The lowest level in the taxonomy
(organized by application richness and complexity) is represented by Mobile broadcast
(M-broadcast) applications, which are aimed at providing large-scale broadcast of
information to mobile platforms. Higher-level applications are Mobile information (M-information) applications, which provide information required by mobile users, such as
weather conditions. The third level of applications is Mobile transactions (M-transaction), facilitating e-transactions and customer relationship management. The
fourth level, Mobile operation or M-operation, deals with operational aspects of the
business such as inventory management or supply-chain management. Finally, the top
level of the taxonomy is represented by Mobile collaboration (M-collaboration), a class
of applications that support collaboration within and outside the enterprise.
Even though the authors exclusively analyze mobile applications in an enterprise
context, recommendations are provided for each type of application; these can be
applied in most similar projects. In M-broadcast applications, content is broadcast to a
large number of unregistered users, while in M-information users request and receive
information in an individual fashion. Issues associated with this category of applications
include usability and privacy, with security not being of high relevance. M-transaction
applications enable mobile transactions, such as placing and tracking orders and
making electronic payments. This category of applications has higher requirements in
terms of security, responsiveness and reliability, and requires communication between
three parties: user, service provider and financial mediator (such as an online payment
gateway). M-operation applications are required to provide real-time information and
also integrate back-end systems and databases. The final group of applications, M-collaboration, has associated coding and data-management challenges due to the
required support for the interaction between different software modules.
Six different categories of mobile applications are identified in (Kunz & Black, 1999):
standalone applications (games or utilities), personal productivity software (word processors and office applications), Internet applications (e-mail clients, browsers),


vertically integrated business applications (security), location-aware applications (tour
planners and interactive guides) and ad-hoc network and groupware applications (a
group of users establish an ad-hoc network to exchange documents). The authors point
out some important requirements associated with the identified groups of mobile
applications. For personal productivity software, synchronization between the mobile
and desktop versions of the software is indicated as an important requirement. For the
third category, Internet applications, the issue of client application performance and
resource requirements is emphasized. The authors state that a mobile client
application cannot borrow from non-mobile client applications, as these have
completely different underlying assumptions in terms of performance requirements
and availability of resources. These issues also apply to vertically integrated business
applications, as the servers should remain unaware of the type of client they are
communicating with (mobile or non-mobile), in order to ease the deployment of mobile
applications.
The works described above serve as a basis for establishing a way to categorize mobile
applications, and to integrate the categorization task with the Mobile-D methodology.
The newly proposed taxonomy is presented in Figure 2.

Figure 2. Proposed categories of mobile applications, based on (Oinas-Kukkonen & Kurkela, 2003), (Unhelkar & Murugesan, 2010) and (Kunz & Black, 1999)

The categories are not exhaustive or exclusive. Each team or company can expand a
category, according to their own experience and past projects. For example, the Entertainment category does not have sub-categories due to the high diversity of this type of application. A company specializing in mobile games can further expand this
category by taking into account different types of games they have developed in the
past.
To sum up, the benefits of application categorization in the lifecycle are twofold:
obtaining a set of guidelines customized to the specific application type, and using
previous experiences to develop a new application of the same type. These experiences
can be used to estimate effort, for example, similar to using the Standard Component
Estimation technique in software estimation tasks.
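
As a toy illustration of this idea (not part of Mobile-D or of Standard Component Estimation; all names are hypothetical), the mean recorded effort of past projects in the matched category could seed a first-cut estimate:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch: derive an initial effort estimate from the recorded effort of
// past projects in the same application category.
public class CategoryEstimator {

    // Person-days recorded for completed projects, keyed by category.
    private final Map<String, List<Double>> history = new HashMap<String, List<Double>>();

    public void record(String category, double personDays) {
        if (!history.containsKey(category)) {
            history.put(category, new ArrayList<Double>());
        }
        history.get(category).add(personDays);
    }

    // Mean past effort for the category, or a fallback value when the team
    // has no prior projects of this type.
    public double initialEstimate(String category, double fallback) {
        List<Double> efforts = history.get(category);
        if (efforts == null || efforts.isEmpty()) {
            return fallback;
        }
        double sum = 0.0;
        for (double effort : efforts) {
            sum += effort;
        }
        return sum / efforts.size();
    }
}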
The optimal location of the categorization task within the Mobile-D method is in the
Explore phase, which contains the Scope definition stage (Figure 1). According to the
Mobile-D specifications (VTT Electronics, 2006), the Scope definition stage defines goals
and sets the timeline for the project. If the team identifies the category of application
they are developing, they can establish project goals that respect the specific guidelines,
and can shape the initial schedule according to data gathered from previous similar
projects.
The Scope definition stage comprises two essential tasks: Initial project planning and
Initial requirements collection. By performing the Initial project planning task, the
development team sets the project timeline, rhythm of development and estimates the
required investments for the project, in terms of effort, finances, etc. The outcome of
this task can be positively influenced if it is preceded by the new task concerned with
project category establishment, so that estimates within the stage will be more
accurate. The new flow of tasks within the Scope definition stage is presented in Figure
3.

Figure 3. Scope definition stage, adapted from (VTT Electronics, 2006)


When a project is completed, the experiences in terms of recommended practices, tools, and estimates should be recorded under the specific category for later reference.

2.2 Alternative approaches to mobile development


Even though agile methodologies offer a good solution for mobile application
development, different approaches exist in literature. Mobile-D is clearly the most
detailed methodology for the purpose, having a comprehensive specification for each
phase and stage, and for the associated tasks. However, other promising approaches
might improve Mobile-D in some aspects, and also increase the flexibility of the method.
A recurring theme in related research is the use of Model-Driven Engineering (MDE),
more specifically Model-Driven Development (MDD), for mobile application
development. According to (Beydeda, Book, & Gruhn, 2005), Model-Driven
Development involves using models not only to document code, but to serve as a basis
for application development. In (Balagtas-Fernandez & Hussmann, 2008) the authors
propose a development approach that combines principles from both MDD and
Human-Computer Interaction (HCI), more precisely from the field of User Centred
Design. The purpose of the approach is to obtain a system that lets novice users with no
programming experience create their own mobile applications. The main argument for
MDD in mobile application development is the possibility to create a platform-independent model of the application, which will be automatically transformed into
platform-specific code. This is an important advantage for MDD, as the number of
platforms for the software is large and constantly increasing. By allowing end-users to
create their own applications, and by making the process user-friendly through User-Centred Design principles, the need for external developers would disappear,
decreasing costs and development time.
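
To make the PIM-to-PSM idea concrete, the toy sketch below transforms one platform-independent screen model into two platform-specific artifacts (a simplified Android layout and Java ME code). It is a deliberately naive simplification of MDA transformations, not an example from the cited work.

import java.util.Arrays;
import java.util.List;

public class ModelToCode {

    // Platform-independent model (PIM): a screen with labelled buttons.
    static class Screen {
        final String name;
        final List<String> buttons;

        Screen(String name, List<String> buttons) {
            this.name = name;
            this.buttons = buttons;
        }
    }

    // Transformation 1: emit a (simplified) Android layout resource.
    static String toAndroidLayout(Screen s) {
        StringBuilder xml = new StringBuilder("<LinearLayout>\n");
        for (String label : s.buttons) {
            xml.append("  <Button android:text=\"").append(label).append("\"/>\n");
        }
        return xml.append("</LinearLayout>").toString();
    }

    // Transformation 2: emit (simplified) Java ME code for the same model.
    static String toJavaMeForm(Screen s) {
        StringBuilder src = new StringBuilder("Form form = new Form(\"" + s.name + "\");\n");
        for (String label : s.buttons) {
            src.append("form.addCommand(new Command(\"").append(label)
               .append("\", Command.SCREEN, 1));\n");
        }
        return src.toString();
    }

    public static void main(String[] args) {
        Screen login = new Screen("Login", Arrays.asList("OK", "Cancel"));
        System.out.println(toAndroidLayout(login));
        System.out.println(toJavaMeForm(login));
    }
}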
Aside from the large number of platforms the software has to run on, there is another
issue associated with mobile applications, namely factors that affect every element of
an application, such as resource limitations. In order to address both of these issues,
(Carton, et al., 2007) propose a development approach that combines Aspect-Oriented
Software Development (AOSD) techniques with Model-Driven Development ones. The
authors talk about "crosscutting" factors, such as device limitations or user connectivity, as being influences that cannot be easily handled by traditional programming approaches such as Object-Oriented Programming or Procedural Programming, and that affect the entire application. These factors lead to a number of
problems. Firstly, software engineers cannot easily separate application semantics
from technical specifics, and secondly, developers have to implement these
crosscutting concerns throughout the application. This in turn leads to lower software
maintainability, extensibility and comprehensibility. Crosscutting concerns represent
the basis for aspects in Aspect-Oriented Design, so by using this approach in
development, the concerns can be modularized, thus increasing the maintainability,
extensibility and comprehensibility of the code. In this approach MDD is used, as before, to translate a high-level platform-independent design into multiple platform-dependent implementations. In Model-Driven Architecture (MDA), which is MDD standardized by the Object Management Group (OMG), a number of Platform-Specific Models (PSMs) are generated from a Platform-Independent Model (PIM) using transformations. In general terms, the combination of AOSD and MDA yields benefits from both approaches, by minimizing the effects of crosscutting concerns and platform-specific issues. An important part of MDD is transforming models into code (code generation). This issue has not been detailed in the considered papers; however, an approach to code generation in mobile application development is available in (Amanquah & Eporwei, 2009).
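
As a sketch of how an aspect modularizes such a concern (a hypothetical AspectJ example, not taken from the cited papers), a single aspect can intercept every call into a network layer instead of scattering connectivity checks through the code:

// Hypothetical AspectJ aspect: the crosscutting connectivity check is
// written once and woven into every method of the network layer.
public aspect ConnectivityAspect {

    // Match the execution of any method in the (hypothetical) network package.
    pointcut networkCall(): execution(* com.example.net.*.*(..));

    // Run the check before each matched call.
    before(): networkCall() {
        if (!ConnectivityMonitor.isOnline()) {
            throw new IllegalStateException("Device is offline");
        }
    }
}

// Hypothetical helper assumed by the aspect.
class ConnectivityMonitor {
    static boolean isOnline() {
        return true; // placeholder: would query the platform's connectivity API
    }
}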
Even though the subject of MDD for mobile applications is still largely unexplored,
there are some initial accounts of its use. In (Braun & Eckhaus, 2008), MDD is used to develop an architecture that supports the provision of a mobile service both as a Web service and a mobile application. The goal was to allow the provided services to be accessed both via a built-in XHTML browser and a pre-installed Java application. The
outcome of the approach was successful, as MDD proved flexible enough for the task.
Another MDD approach has been documented in (Khambati, et al., 2008), for the
development of mobile personal healthcare applications. In this case, MDD helps
healthcare providers create personalized applications modelled from a health plan
specific to each patient. Unfortunately, there are no comprehensive studies on the effect
of MDD in mobile applications in the literature, and no empirical data is provided in the
considered papers. This leaves MDD as an experimental approach that may be
integrated in Mobile-D only if the development team has experience in using the
associated practices, and considers that noticeable benefits may be gained through the
use of MDD.


A different approach is presented in (Rahimian & Ramsin, 2008). The authors use a
Methodology Engineering approach, called Hybrid Method Engineering, to generate a
method suitable for mobile application development. Methodology Engineering is a
discipline concerned with creating methodologies suitable for different development
scenarios, motivated by the belief that no single process fits all situations. Hybrid
Methodology Design uses pre-established key requirements and conclusions as input, to
iteratively generate the desired methodology. In the present case, the authors use as
input a list of key methodology traits, similar to the ones previously described on page
7, as well as conclusions from related work in the field. Each iteration of the method
comprises the following tasks: prioritization of requirements, selection of the design
approaches to be used in the current iteration, application of the selected design
approaches, revision, refinement and restructuring of the methodology built so far,
defining the abstraction level for the next iteration, and finally the revision and
refinement of the requirements, prioritizing them for the next iteration.
The proposed mobile development methodology is created in four iterations, starting
from a generic software development lifecycle (Analysis, Design, Implementation, Test,
and Transition). In the first iteration, the methodology is detailed by adding practices
commonly found in agile methods. Taking into account market considerations, the
second iteration includes activities from New Product Development, a process
concerned with introducing a new product or service to the market. In the third
iteration, Adaptive Software Development (ASD) ideas are integrated into the
methodology, while in the final iteration prototyping is added to mitigate likely
technology-related risks. The final methodology phases proposed by the authors are
presented in Figure 4.

Figure 4. Mobile development method proposed in (Rahimian & Ramsin, 2008)


The proposed development framework takes into account most of the issues identified
in related work in the field. However, the methodology is still at a high level, and no specific tasks for the identified stages have been provided. According to the authors, future work includes performing further iterations to obtain lower-level tasks in the process. In its current state, the methodology is too high-level to integrate with Mobile-D.
In (Unhelkar & Murugesan, 2010), the authors describe a framework for enterprise
mobile application development. Their approach uses a layered architecture, composed
of Communications (network with protocols), Information (database), Middleware and
binding (service frameworks), Application (business applications like bookings and
payments) and Presentation layers (user interface), with the Security layer orthogonal
to the previous five. The presented framework has been successfully applied in three
enterprise projects, with more attention being paid to certain layers, according to the
organization's profile.
If categorization is used in the Mobile-D methodology as previously explained, and the
project in question is found to be an enterprise project, the framework presented in
(Unhelkar & Murugesan, 2010) could provide a useful architectural model for the
development team. The information could be taken into consideration in the Explore
phase, Project establishment stage (see Figure 1), when performing the task called
Architecture Line Definition. According to Mobile-D specifications, the aim of the task is
to gain confidence in the architectural approach, in order for the project to be
successfully carried out. More precisely, the goal of the task is to define an architecture
line for the project. Further details on the architecture line in Mobile-D are available in
(Ihme & Abrahamsson, 2005).

2.3 Bringing end-users into the lifecycle


In certain mobile development scenarios stakeholder establishment is not
straightforward, especially in terms of end-user identification. This issue has been
pointed out in (Abrahamsson, 2005) when discussing the suitability of agile methods
for mobile development. The conclusion was that, for mobile products, customers were
harder to identify due to the multitude of software distribution channels, resulting in a clash with agile principles that require close customer contact. In this section, we will
examine the role of the end-user (consumer) in the mobile development lifecycle, and
how their requirements are addressed and balanced against those of other
stakeholders. In general terms, end-users could be consulted during two stages of
product development: during requirements gathering, and after releasing the product,
to measure acceptance and obtain feedback. This section studies both cases, and
suggests a way of integrating end-user contact in Mobile-D.
The recent trend in mobile software distribution is towards App Stores (such as Apple
App Store or Android Market), and significantly less towards operator portals and on-device preloading through Original Equipment Manufacturer (OEM) deals. These
findings are presented in a recent comprehensive study on mobile development
economics (VisionMobile, 2010). According to the study, app stores are regarded as
direct developer-to-consumer channels, drastically reducing the time-to-shelf (time
taken by application to become available for purchase) and time-to-payment (time
elapsed from the moment the application was sold, to the developer receiving the
proceeds). App stores have also eliminated serious obstacles from the route to market
of a mobile product, such as complex submission and certification procedures and low
revenue shares. However, even though App stores bridge the gap to end-users from a
marketing perspective, the extremely high number of applications available to the user
in such a store means exposure for a small development firm will be quite low. One
developer described the situation as "going to a record store with 200,000 CDs; you'll
only look at the top-10". This means developers find it hard to reach representative
users to perform beta testing and obtain valuable feedback on the product. The authors
of the study suggest creating a testing and feedback channel between developers and
end-users, provided by network operators, as well as crowd-sourced testing through
the few existing platforms.
In order to ensure the success of a software product, requirements of stakeholders
must be carefully identified, prioritized and balanced. In some cases with mobile
applications, a 3-way balance must be struck between the customer, end-user and
possibly platform vendor or network operator requirements. Conflicting requirements
could have serious negative effects on the project outcome, as described in (McGee-Lennon, 2008). The effects are envisaged for home care software systems, but apply to any system that is expected to have a high level of usability, such as mobile applications. System failure may occur if the product fails to deliver desired benefits; poor usability can result if the end-user is not properly considered in the design
process, and finally reluctance to accept and use the system may result from an
incorrect approach towards the user, or an incorrect identification of user categories.
To address conflicting requirements, developers may turn to the field of requirements
engineering (RE) and its associated activities. According to (Hofmann & Lehner, 2001),
requirements engineering is concerned with specifying requirements according to the
stakeholders' needs, as well as analyzing and refining those specifications. Four
interconnected activities have to be performed in order to obtain a useful set of
requirements: elicitation, modelling, validation and verification. The team must first
elicit requirements using available sources, such as users, experts or documents, and
then model them. The resulting model must then undergo a normalization process, the
result of which is then validated and verified. Requirements elicitation methods include
user interviews, group sessions using focus groups, or even the knowledge of experts
employed solely to obtain requirements for a given system. The end-user can
participate in most RE activities: elicitation, validation and verification, using
interviews, peer reviews, and walk-throughs. The authors also provide a grouping of
stakeholders, differentiating between customer and user groups. The groups are
separated by their motivation and expertise areas; customers would want to introduce
change with maximum benefit, while users want to introduce change with minimum
disruption.
The issue of stakeholder identification is presented in detail in (Sharp, Finkelstein, &
Galal, 1999). The analysis starts by identifying "baseline" stakeholders of two types: supplier and
client. Suppliers contribute information and tasks, while clients inspect
the products. Then, "satellite" stakeholders are recognized. This is a broader category
of stakeholders, which cooperate with the baseline in multiple ways. The approach
focuses on the interaction between these stakeholder categories, presented in Figure 5.
There are four groups of baseline stakeholders: users, developers, legislators and
decision-makers. The identification process is done in a number of steps, for each
baseline group, in order to explore the entire web of stakeholders. The group relevant
to the current discussion is the users, who have to be split into different roles,
according to their expertise, expected goals, and position in the organization.
The authors describe a detailed procedure to identify all relevant stakeholders, with
the added benefit of starting from a known core of baseline stakeholder groups, and exploring onwards. Still, the process might become too time-consuming for small
projects. The important aspect that should be considered by lightweight methodologies
such as Mobile-D is that the user stakeholder group should be carefully examined in
terms of roles, and, according to (Hofmann & Lehner, 2001), should be regarded
separately from the customer group, as it has different objectives.

Figure 5. Main elements of stakeholder identification; Source: (Sharp, Finkelstein, & Galal, 1999)

The next step after identifying end-users is including them in the development process.
In (Oinas-Kukkonen & Kurkela, 2003), the authors argue that real users must be brought
into design and development work in order to ensure the success of a mobile product.
Furthermore, the developers must perform usability studies, to ensure their product is
usable enough. (Mahmood, Burn, Gemoets, & Jacquez, 2000) also support the principle
of user involvement in the lifecycle. They suggest end-user satisfaction will increase
with increased involvement of users in system development, and so the finished system
will be viewed as more useful by consumers.
End-users can be involved at two different points of the product's lifecycle: design and post-release. At design time, user feedback is enabled through the aforementioned requirements engineering activities of identifying and involving them in development, through user beta testing, and focus groups. Other activities such as reviews and
questionnaires might be used to obtain feedback after the product's release. However,
for small development companies these solutions might prove too expensive, and for
their agile, short-iteration, short time-to-market products, the time required for these
activities could prove too long. Therefore, for a successful integration of these activities
with Mobile-D, we must consider the scale of the projects Mobile-D is suitable for.
The Explore phase of the methodology includes a stage pattern that deals with
stakeholder establishment. The most important stakeholders that must be identified
during this stage are the steering group, project team, customer group, support group and
exploration group. The customer group is established using a dedicated task pattern
within the Stakeholder establishment stage, and has three main goals: identifying
participative customers, gaining their commitment to the project, and defining their
tasks, roles and responsibilities, as well as their location (on-site or off-site). According
to the specification, the representation of the customer changes according to the
project type: if the team is developing a product for a specific customer organization,
identification is straightforward, and the customer is treated as in other agile methods.
However, if the product in development is a component for an in-house product, or is a
commercial item, the specifications recommend the customer be represented by members
of the same organization. This decision limits the input from real users and might cause
a misrepresentation of their expectations. By including end-users in the lifecycle at the
previously mentioned points, Mobile-D should become more useful for developing
commercial products, to be released directly to end-users. Figure 6 shows the
integration of the new End-user establishment task in the Stakeholder establishment
stage.

Figure 6. Stakeholder establishment stage, adapted from (VTT Electronics, 2006)


The new task pattern is concerned with identifying end-user groups and defining the
sources of requirements generated by end-users. Depending on project parameters
(especially time devoted to this stage), identification may be more or less in-depth. If
enough resources are available, focus groups or questionnaires might be used as a
source of requirements. Otherwise, the team might generate requirements from
existing guidelines and their own experience. Figure 7 shows the sequence of steps needed
to perform End-user establishment.

Figure 7. End-user establishment steps

This task is closely connected with the Initial Requirements Collection task in the next
stage (Scope definition), as shown in Figure 1. After stakeholder establishment, the
team proceeds to perform initial project planning and requirements collection. Figure 8
shows the steps necessary to conduct Initial Requirements Collection. When gathering
requirements, end-user requirements are also collected from the previously
established sources. During requirements discussion, the relevance and reliability of
end-user requirements is agreed upon, and balanced with existing requirements from
all other stakeholders, such as the customer, vendor and operator. End-user
requirements must be taken into consideration whenever requirements analysis
activities are performed, such as in Planning days.

Figure 8. Initial requirements collection steps; Source: (VTT Electronics, 2006)


The next point in the lifecycle suitable for end-user participation is post-release.
Unfortunately, Mobile-D does not provide guidelines on this phase of a product's
lifecycle, as the final documented phase is System Test & Fix, ending with a final release.
To address this issue, the methodology can be extended in terms of lifecycle coverage,
as shown in Figure 9.

Figure 9. Mobile-D with added Evolve phase. Adapted from (VTT Electronics, 2006)

The new Evolve phase deals with continuously integrating end-user feedback on the
delivered product into future releases. Feedback can come from multiple sources, such
as consumer and peer reviews, or data generated by the application itself (usage
statistics and crash reports). The first task, Data analysis, requires the team to obtain
and analyze feedback data. By analyzing usage statistics, conclusions can be drawn on
whether a particular component of the software is used enough to justify further
maintenance and updates, while error and crash reports trigger a sequence of stages,
similar to those in the System Test & Fix phase. When a defect is reported, the team
locates and documents it. Then, a Fix iteration comprising a Planning day, Working day
and Release day is performed. In the Planning day, the developers attempt to reproduce
reported defects, in order to fix them and create a new release of the product. A
support tool responsible for all activities associated with logging defects and usage information, as well as interpreting these logs, is presented in a later section of this work.

2.4 Performance testing in Mobile-D


Mobile platforms are usually limited in terms of physical resources compared to
desktop environments, and the utilization of these resources has to be carefully taken
into consideration. However, the well-known practices adopted by the Mobile-D
process do not account for the performance of components under development, either
at design time or during coding. To address the issue, this section suggests the
integration of performance testing activities in the lifecycle, more specifically extending
the Test-Driven Development practice with performance aspects.
A good starting point for this task is (Johnson, Ho, Maximilien, & Williams, 2007). The
authors describe the implementation of a new technique called Test-first performance
(TFP), to be used in conjunction with TDD. There are some notable differences between
classic TDD and TFP. First of all, in TFP, performance scenarios are generated as unit
test cases and, instead of using assertions like in regular JUnit test cases, measurements
are taken at different points in the test case to create performance statistics. Another distinguishing feature of TFP is who designs the test cases: a performance specialist with a clear understanding of user expectations and possible performance bottlenecks. The authors' approach to performance testing comprises three phases: test design, critical test suite execution and master test suite execution.
The test design process includes three main activities: identifying performance areas,
specifying performance objectives and specifying test cases. Performance areas are
discovered by analyzing the utilization of different resources in specific points in the
software; for example, in the case of a layered architecture, a performance area worthy
of investigation is the elapsed time in a certain layer. Of course, these areas are highly
application-specific, their identification being a task reserved for domain specialists.
The next step in the test design process is specifying performance objectives, which
may be sourced from architecture and performance experts, or from customer
requirements and expectations on performance in certain areas. Finally, test cases are
specified according to these objectives.
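While the cited paper does not prescribe one exact shape for such test cases, a minimal sketch of the idea is given below (the class under test and its method are hypothetical, introduced only for illustration): a TFP scenario records a measurement for later statistical comparison, rather than asserting a pass/fail result.

import junit.framework.TestCase;

// Sketch of a Test-first performance scenario: instead of a pass/fail
// assertion, a measurement is recorded and fed into performance statistics
// that a specialist compares against previous runs.
public class SearchPerformanceScenario extends TestCase {

    // Hypothetical component under test.
    static class ProductCatalog {
        java.util.List<String> search(String term) {
            java.util.List<String> results = new java.util.ArrayList<String>();
            // ... a real implementation would query a data store ...
            return results;
        }
    }

    public void testSearchElapsedTime() {
        ProductCatalog catalog = new ProductCatalog();
        long start = System.currentTimeMillis();
        catalog.search("phones");               // the performance-sensitive call
        long elapsed = System.currentTimeMillis() - start;
        // No assertion: the measurement itself is the output of the test.
        System.out.println("search elapsed: " + elapsed + " ms");
    }
}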


One key requirement for a project developed using TDD is to obtain rapid feedback on
test results. If, however, performance test designers decide that stress testing (testing under heavy workload) is required, TDD principles can no longer be respected, as stress testing takes significant time to perform. This is why, in TFP, test cases are
split in two categories: critical test suite and master test suite. The critical test suite
represents a subset of all test cases generated during test design, while the master test
suite is the entire set of performance test cases.
In the practical application presented in (Johnson, Ho, Maximilien, & Williams, 2007),
developers execute the critical test suite along with other (functional) tests, but only
after the latter have passed. The issue here is that a performance test case does not produce a pass/fail result. Instead, developers have to analyze the generated logs and find points of performance degradation (compared to previous runs) or other issues. The successful outcome of this activity depends on the developers' expertise. For the master
test suite, the authors define performance test cases for multiple runs, heavy
workloads, multiple devices and multiple platforms, testing the application's scalability and behaviour under different environments. Due to the long running times associated with these types of tests, they are run by the performance architect, in parallel with the developers, as soon as the required functionality is implemented. Any performance
issues found by the performance architect are communicated to the developers.
The use of TFP during development has some notable benefits, most of them stemming
from the early identification and increased visibility of performance-critical areas and
code paths in the system. This raises the awareness on performance issues and
prevents premature optimization. Continuous performance tracking leads to an overall
increase of performance over time, while early discovery of performance leaks leads
to a more efficient use of design and coding resources.
The authors of TFP have also experimented with an evolutionary model for
performance requirements, described in (Ho, Johnson, Williams, & Maximilien, 2006).
The approach is a good fit with agile methodologies, requiring no complex performance
analysis upfront, but rather allowing for an incremental specification of performance
requirements and test cases. The Performance requirements evolution model (PREM)
uses levels to specify the amount of detail for performance requirements, as well as the
form of validation required to satisfy performance requirements on a given level of
detail. In order to move the performance requirements to a higher level, more influencing factors must be found. It is also not recommended to jump to a certain level
based on assumptions, as this might lead to a misuse of resources (over-engineering).
Instead, the process of refining requirements should stop when the appropriate level
for the project has been reached. The evolution model is presented in Figure 10.

Figure 10. Performance requirements evolution model; Source: (Ho, Johnson, Williams, &
Maximilien, 2006)

In short, Level 0 requirements are qualitative in nature, similar to XP stories. Level 1 requirements quantify Level 0 requirements (for example, users might consider a run time above 0.5 seconds unacceptably long). Higher-level requirements are applicable for multi-user systems, specifying the performance of components within the system under realistic workloads.
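As an illustration only (the names below are hypothetical, not part of PREM itself), a team could record each requirement together with its current level, attaching a concrete threshold once it reaches Level 1:

// Sketch of recording a performance requirement with its current PREM level.
public class PerformanceRequirement {

    enum PremLevel { LEVEL_0, LEVEL_1, LEVEL_2, LEVEL_3 }

    private final String description;
    private PremLevel level;
    private long thresholdMillis; // meaningful from Level 1 upwards

    public PerformanceRequirement(String description) {
        this.description = description;
        this.level = PremLevel.LEVEL_0; // every new requirement starts qualitative
        this.thresholdMillis = -1;      // no number attached yet
    }

    // Advancing to Level 1 attaches a concrete, testable threshold.
    public void quantify(long thresholdMillis) {
        this.level = PremLevel.LEVEL_1;
        this.thresholdMillis = thresholdMillis;
    }
}

// Level 0: "search results should appear without a noticeable delay".
// Level 1: the same requirement quantified, e.g. quantify(500) for 500 ms.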
Using Test-first performance together with the Performance requirements evolution
model in the Mobile-D process should prove sufficient to successfully integrate
performance testing concepts in the methodology. However, the previously presented
activities cannot be included at a single point in the lifecycle, but rather in multiple
points located in different phases and stages. First, a performance expert or
performance architect must be nominated within the development team. This is to be
done in the first phase (Explore), during the Project establishment stage. According to
specifications, this stage is concerned, in part, with defining and allocating technical
and human resources needed for the project. Figure 11 shows the sequence of tasks
during Project establishment, including the Personnel Allocation task used to nominate
the performance architect.


Figure 11. Tasks for Project establishment stage; Source: (VTT Electronics, 2006)

The second point for performance testing integration is in the Productionize phase,
Planning day stage. A planning day is used to plan the work that has to be done for the
iteration, and includes the Requirements analysis task. Here, performance testing
activities can be started, by first identifying performance areas in the current
architecture, and generating performance requirements using input from the
performance architect, the development team, and the customer. Using the
Performance requirements evolution model, each newly generated requirement starts at Level 0, according to PREM guidelines. Performance
requirements that have not been met in previous iterations can be re-analyzed and
advanced to a superior PREM level, using relevant data. For most mobile projects, the
decision to advance a given requirement to levels 2 or 3 should be well founded. Also at
this time, performance test cases must be split into the critical suite and the master
suite, as described earlier. For mobile software where load and stress testing is not of
high importance, the master test suite should focus on exercising the software
behaviour on multiple devices and multiple platforms. As an example, for a Bluetooth
chat application, a master test suite could focus on the behaviour of the application
when n devices communicate in a chat room, as well as observing the performance of
the software on different platforms.
The final point of integration with Mobile-D is the Working day stage in the
Productionize phase, the purpose of the stage being the implementation of the system
functionality decided upon during the previous stage (Planning day). Figure 12 shows
how performance testing fits into the working day.


Figure 12. Working day including the performance testing task; Adapted from: (VTT
Electronics, 2006)

When working on the Performance testing task, developers and the performance
architect create test cases according to previously established requirements. The
critical test suite is run by developers alongside the usual functional test suite, while
the master test suite is run in parallel by the performance architect, always maintaining
a close connection between the two in order to communicate any new findings. The last two integration points (creating/updating performance requirements and specifying/executing the test suite) in the Productionize phase could also be used in the following Stabilize phase, as the declared purpose of that phase is to ensure the quality
of the implementation.
An important problem with performance testing for mobile applications is related to
the poor tool support available. As presented earlier, normal test cases can be
enhanced to include logging at specific execution points, an analysis of these logs
providing performance data. There is an existing alternative, called PJUnit, a
performance testing environment for mobile applications on the J2ME platform. The
tool is presented in (Kim, Choi, & Wong, 2009), but is unavailable to developers. A new
tool to support performance testing for mobile applications is presented in a later
section.


2.5 Software product line principles


A well-known definition of software product lines describes them as "a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way" (Clements & Northrop, 2002). The
practice relies on a combination of three main activities: core asset development,
product development and management. The set of reusable components (the core asset) has to be developed and expanded in a guided way, anticipating future product
requirements. A family of products is developed using a combination of core assets
(reused by a number of products in the line) and product-specific assets that provide
varied functionality to the core assets. From an architectural point of view, this is done
by providing so-called variation points that allow products to have diverse
functionalities. Expected benefits from a successful adoption of software product lines
within a development organization include reduced development costs, reduced
development duration and increased product quality (Clements & Northrop, 2002).
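As a minimal sketch of what a variation point can look like at code level (all names here are hypothetical), a core asset can depend on an interface that each product in the line binds to its own variant:

// Core asset: the interface is the variation point the shared code depends on.
interface StorageVariant {
    void save(String key, byte[] data);
}

// Variant selected by one product in the line (e.g. an offline edition).
class FileStorage implements StorageVariant {
    public void save(String key, byte[] data) {
        // write the data to the local file system
    }
}

// Variant selected by another product (e.g. a connected edition).
class RemoteStorage implements StorageVariant {
    public void save(String key, byte[] data) {
        // send the data to a server
    }
}

// The reusable core asset is identical in every product; only the variant
// bound at the variation point differs.
class NoteKeeper {
    private final StorageVariant storage;

    NoteKeeper(StorageVariant storage) {
        this.storage = storage;
    }

    void keep(String note) {
        storage.save("note", note.getBytes());
    }
}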
Even though the expected benefits appear tempting to companies, the adoption of
software product lines is non-trivial. In (Staples & Hill, 2004), the authors describe
their experiences while adopting software product line development in an
organization. One important barrier for companies that wish to adopt the approach is
the requirement of a Product Line Architecture (PLA) to be in place, which provides
variation points on core assets to allow the selection of multiple variant functionalities.
In this case, the problem was overcome by using a configuration management
infrastructure. However, this approach might not work in every scenario, and there are
still other barriers to adopting product lines, such as the significant time and costs required for related activities (Northrop, 2004).
In (Jones & Northrop, 2010), the authors present a complete approach for adopting
software product line development within an organization. The newly introduced
Adoption Factory Pattern and associated sub-patterns are presented in Figure 13. The
aspect most relevant to the current discussion is the scope of the adoption process. For
further details regarding the pattern and sub-patterns, such as lists of constituent
activities, refer to (Jones & Northrop, 2010).


Figure 13. Adoption Factory pattern, high-level view; Source: (Jones & Northrop, 2010)

As the above figure shows, the process of integrating product line development is
organization-wide. The approach cannot be properly integrated with Mobile-D due to
the clear incompatibility of scope. There is also a clash in the fundamental approaches
of Mobile-D and product lines. While Mobile-D is product-oriented, the underlying
product line philosophy is to regard a line of products as a whole, individual products
being just customizations of the core assets (Clements & Northrop, 2002). However,
product lines have been used for mobile applications, specifically middleware (Zhang &
Hansen, 2007) and games (Nascimento, de Almeida, & de Lemos Meira, 2008) (Zhang,
2005). Unfortunately, these papers discuss code-level issues regarding software product lines in mobile applications, so they fall outside the scope of the current discussion.
Probably the most useful product line principle applicable to Mobile-D projects is
designing with reuse in mind. Mobile applications have to run on multiple platforms, so
designing components in a way that ensures reusability should prove useful when adapting the application for a new environment. Mobile-D currently develops single systems with reuse, a "clone and own" process in the terms of (Clements & Northrop, 2002),
meaning the new product borrows from previous ones, with some economic
advantages. This is, however, not the same as applying a complete product line
development process, firstly because in software product lines, assets are specifically
designed for reuse, and secondly because, for Mobile-D, the products are still regarded
as separate, not as variations of core assets. Still, by estimating which assets should be
reused in future projects, the development team can build a better asset base and thus
ease the reuse of these assets in future projects.


2.6 Questionnaire results


This section describes the results of a survey run on members of the Mobile Apps
(Mobile Apps, 2010) and Techmeetup (Techmeetup, 2010) groups. The questionnaire
comprises three sections: agile methods for mobile applications, Mobile-D, and mobile
development tools. The aim of the first section is to see if agile methods are generally
used to develop mobile applications, and if the fundamental assumptions about mobile
project characteristics expressed by the creators of Mobile-D are well founded (such as usual
project duration and size of the development team). The second section consists of an
evaluation of Mobile-D in terms of perceived benefits brought on by its usage, level of
agreement with fundamental principles and proposed improvements. The final section
of the questionnaire is concerned with tool support and usage data for mobile
development.
Unfortunately, the number of responses was low, so a quantitative analysis of answers
would be inconclusive. Still, qualitative data in responses was quite insightful and
convergent. The following list presents a number of conclusions drawn from the
survey.

- All participants worked exclusively in small teams (of fewer than 10 developers) when developing mobile applications.
- When asked to suggest a methodology suitable for mobile development, all responses indicated that an amalgamation of practices from different methods would be best suited for this context.
- Participants considered Mobile-D to be suitable for larger projects (and larger companies), mainly due to the high level of planning involved in the process. One respondent considered the method best suited for "a company big enough to spend days on planning"; another suggestion was to replace "days" with "hours" for the respective stages.
- When asked to provide general comments on the method, respondents considered Mobile-D not agile enough, suggesting more modularity in the approach.
- All participants included Eclipse in their development tool set.
- All participants considered mobile device emulators too slow, and unsuccessful in simulating the application's performance on a real device.
- Respondents regularly used profilers (such as the Android Trace View) to evaluate application performance.
- Error and crash reports were considered the most useful usage metric.
- Participants expressed the need for feature usage figures, in order to provide arguments to management for removing features.

The overall impression on Mobile-D was that the methodology would be suitable for
larger projects, as the level of planning required was perceived as high. Respondents
suggested a higher modularity would be beneficial, increasing the agility of the process.
Regarding tool support, participants used Eclipse for development and profiling tools
to evaluate application performance; the metrics considered most useful were crash
and error reports, as well as feature usage statistics.


Chapter 3
Android Resource Management

3.1 System description


This section focuses on a new support tool for mobile software development projects.
As previously stated, tool support for activities such as performance testing and
application usage logging is rather poor, especially in the case of mobile software. In its
current form, the tool is aimed at helping developers handle the utilization of two main
types of resources: physical platform resources, and design resources. While the
management of physical resources is done through testing and evaluating resource
utilization, design resource utilization is optimized by developing only useful
components for a given product, and by saving time and effort wherever possible by
discontinuing unused features. In the following part, the system will be described in
terms of capabilities and high-level components, through examples; then, the impact of
the support tool on the Mobile-D methodology will be discussed.
The fundamental requirements of the support tool are given below:
1. Evaluate component performance in the tested system, with respect to a
number of metrics
2. Support the usage log lifecycle
Non-functional requirements include: ease of use, so developers can use the tool to
their benefit; ease of distribution and integration, making the tool available and ready
for application; and extendibility, allowing developers to add functionality according to
their specific needs.
In order to address these requirements, appropriate development platforms must be
selected. Due to its high popularity, wide use and powerful API, the chosen platform
was Android (Android, 2010); to address the above non-functional requirements, the
tool was developed as an Eclipse plug-in (Eclipse, 2010). The general approach for
performance evaluation is through customized test cases and assertions. Using them,


developers can assert a maximum level for a given performance metric, obtaining a failure result if the level is exceeded. According to (Android, 2010), an Android application
makes use of four different types of components: activities, services, broadcast
receivers and content providers. The support tool described here only deals with
activity components when evaluating performance, as will be explained later. Figure 14
shows the use case diagram of the system.

Figure 14. Use Case Diagram for the system


Use cases 1 through 3 deal with physical resources, while use cases 4 through 6 focus
on design resources. Out of the four Android application components mentioned
earlier, only Activities were considered for performance testing (use cases 1-3), for several reasons. Firstly, they are always present in applications and carry a significant
amount of performance-sensitive code, for example handling graphical interfaces.
There is scope to extend the support tool to include testing of Service components;
however, in many cases, services provide background functionality for an application
and run for an indefinite period of time, usually attached to an Activity, and
have no user interface. This means a timed test might be irrelevant for services. Finally,
testing the other two types of components (broadcast receivers and content providers)
for performance is an issue for further study, as this functionality may not be useful to
developers.
In order to demonstrate the functionality of the support tool, we will use two
applications available as samples in Android SDK distributions, called Spinner and
MultiRes. The former is a simple Activity that presents a selection menu of planets to
the user, while MultiRes allows the user to navigate a collection of photographs. The
applications are pictured below.

Figure 15. Spinner (left) and MultiRes (right) applications


3.1.1 Evaluate method performance


This section of the tool allows developers to test the performance of methods in an
Activity with respect to run time (in milliseconds), memory allocation size (in bytes)
and number of memory allocations. The methods used to accomplish this are provided
in a new class that extends the widely used ActivityInstrumentationTestCase2 provided
in the Android SDK.
To create performance test cases, developers can use the wizards provided by the plug-in.

Figure 16. Wizard used to generate a performance test case for methods

As shown in Figure 16, the wizard helps in creating a test case for method performance.
The user must create this new test case inside an Android Test Project, which is a
custom Eclipse project used to test Android applications, and run the newly generated
test case as an Android Test Case. By doing this, the test case is installed on the
connected emulator or device, tests are run, and results are reported back to Eclipse.
The user must provide specific information on the first page, such as the Activity class
under test, whether setUp or tearDown stubs are required, or if the tests must run on
the activitys UI thread. The last parameter actually asks the user if the methods under
performance test are affecting the main UI screens directly. Due to the specific nature of
activities, the main (UI) thread of an activity cannot be influenced by any external


threads (such as the test thread calling activity methods), so an annotation must be
used to bypass this requirement. A good example of a method that runs on the activity's
UI thread is onCreate, a method called whenever an activity is started, and
implemented by all activities. The second page of the wizard allows the user to select
the methods they want to evaluate. The wizard interface is inspired by JUnit wizards for
better familiarity. The figure below displays the most important sections of code
generated by the wizard.

Figure 17. Code generated by method performance wizard

Figure 17 shows the generated code. In point A, the current test case is declared to
extend the new performance test case class from the support tool. Generics are used to
specify the activity under test; here, it is SpinnerActivity (point B). In point C, the


generated test stub leaves parameter objects uninitialized, leaving it up to developers to decide parameter values. In point D, the methodTimeMillis method is called to obtain
the run time in milliseconds of the tested method for the given parameters, and finally
in point E, an assertion is made regarding the run time; the assertion is generated for 0
ms by default, in order to let developers select an appropriate time. Developers may
select other methods appropriate to test other performance characteristics, as well as
other asserts, all provided by the support tool libraries. Table 1 lists all methods
available to the developer.

Method name                 Description
methodTimeMillis            Returns a method's run time in milliseconds.
methodTimeMillisAverage     Returns the average run time of a method over a given number of runs.
methodMemoryAllocSize       Returns the amount of memory allocated to the method in bytes.
methodMemoryAllocNumber     Returns the number of memory allocations within the method.

Table 1. Methods for performance evaluation

Finally, when the performance test case is run, it passes or fails according to the
conditions set in point E (Figure 17). If the assertion fails, a message will be displayed
showing the actual performance of the method.
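A hedged reconstruction of the generated class is sketched below; methodTimeMillis (Table 1) and the SpinnerActivity target come from the description above, while the base class name (ActivityPerformanceTestCase), its constructor form and the exact call signatures are assumptions.

// Hedged reconstruction of the wizard output described in Figure 17.
public class SpinnerPerformanceTest
        extends ActivityPerformanceTestCase<SpinnerActivity> {   // points A and B

    public SpinnerPerformanceTest() {
        super(SpinnerActivity.class); // constructor form is an assumption
    }

    public void testOnItemSelectedRunTime() throws Exception {
        Object[] params = null;  // point C: parameter values left to the developer
        long ms = methodTimeMillis("onItemSelected", params);    // point D
        // Point E: the wizard generates a 0 ms threshold by default, to be
        // replaced with an appropriate value before the test is run.
        assertTrue("run time exceeded threshold", ms <= 0);
    }
}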

3.1.2 Evaluate method sequence performance


In some cases, especially when Test-Driven Development is used, developers have a
clear view of the dependencies between different components. This functionality
addresses the issue of performance testing on a sequence of methods, verifying if the
aggregate run time of these methods is within a specified limit, and observing the
trade-off in resource utilization between dependent methods. This is done in a similar
way to the previous use case, but calling the selected methods in sequence and, at each
call, computing the time slack available to that method. By default, when the timer has
expired, execution stops and returns a failure. This use case has an associated wizard,


similar to the one presented in Figure 16. Figure 18 shows the generated code,
necessary to carry out a method sequence performance evaluation.

Figure 18. Code generated by method sequence performance wizard

The above figure focuses on the single test generated by the wizard. The rest of the
generated class is the same as in the previous case (Figure 17). A MethodQueue (point
A) is generated and filled with MethodTraces (point B) in the order they should be
called (point C). As before, parameters remain uninitialized. Finally, in point D, the
checkAggregateTime method is called to verify if the method queue gets called before
time expires. Code generation sets the aggregate target time at 0, to be modified by
developers to an appropriate value before running the test.
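A hedged reconstruction of the generated test body is given below; the enclosing class is the same as in the previous sketch, MethodQueue, MethodTrace and checkAggregateTime are the names used above, but their exact signatures are assumptions.

// Hedged reconstruction of a method sequence performance test.
public void testStartupSequenceTime() throws Exception {
    MethodQueue queue = new MethodQueue();                          // point A
    // Points B and C: traces are queued in the order the methods are called;
    // as before, parameter values are left to the developer.
    queue.add(new MethodTrace("loadPreferences", (Object[]) null));
    queue.add(new MethodTrace("populateSpinner", (Object[]) null));
    // Point D: fails if the queued calls do not complete in time. The
    // generated 0 ms default must be raised to a realistic budget.
    checkAggregateTime(queue, 0);
}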
The result of the test is a pass/fail, as well as entries on the Android Logger. Figure 19
shows the results logged by the test, given a queue of 4 methods with a target run time of 300 ms.


Figure 19. Method queue performance test log

3.1.3 Evaluate test performance


This functionality allows developers to run a timed test on a section of code. While
generating functional test cases for TDD, they may use these tests as subjects for
performance evaluation. The functionality is also useful when there is a need to test not
only individual methods, but entire segments of code.
In order to achieve this, the tests and the application for which they were created must already be installed on the Android device or emulator, and the tests must pass.
This is not a drawback to the approach, as running timed tests for sections of code that
fail functional tests would not provide any relevant data. In order to evaluate the run
time of an existing Android test, developers must create a regular Java project and
include JUnit libraries in the build path. As opposed to previous cases, this performance
test runs on the development machine (not on an Android device) because it uses
Android's Debug Bridge to obtain run time information. The wizard for this
functionality (Figure 20) helps developers create the appropriate class.

Figure 20. Wizard used to generate a timed test for existing Android test cases


The wizard and associated methods allow developers to obtain a run time for entire
packages of Android test cases, single test cases, or individual tests (test case methods).
If the user selects a test case and no individual tests (from the second wizard page), the generated class will perform a timed test for the target test case; if individual tests are selected, only those will be timed; finally, the user may choose to time an entire
package of Android test cases. Figure 21 shows code generated by the wizard.

Figure 21. Code generated by timed test wizard

In point A, the current test case is declared an extension of AndroidTimedTest, which is,
in turn, an extension of the JUnit TestCase. In point B, a method is called to get the run
time of a single test method inside a test case class. An assertion is made in point C, like
before. Table 2 lists the methods available to developers in an AndroidTimedTest.


Method name          Description
getTimeSingleTest    Returns the run time of a single test inside a test case.
getTimeTestCase      Returns the run time of a test case.
getTimeAllTests      Returns the run time of an entire package of test cases.

Table 2. Methods available for test performance evaluation

The result of such a timed test is given by the assertion at the end of each method,
resulting in a pass or fail accordingly.
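A hedged sketch of such a timed test follows; AndroidTimedTest and getTimeSingleTest are the documented names, while the argument conventions and the target test names are hypothetical examples.

// Hedged sketch of a timed test run on the development machine.
public class SpinnerTimedTest extends AndroidTimedTest {

    public void testTimeOfSelectionTest() throws Exception {
        // Times one test method of an Android test case already installed
        // (and passing) on the connected device or emulator.
        long ms = getTimeSingleTest("com.example.tests.SpinnerActivityTest",
                                    "testSpinnerSelection");
        assertTrue("functional test took too long", ms <= 2000);
    }
}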

3.1.4 Log user actions


Logging application usage can yield important benefits to developers. These include
gaining a better understanding of which components are highly utilized and which ones
are not, and basing future design decisions on these findings, as well as obtaining data
on errors and crashes encountered by users. To help with these tasks, the support tool
presented here offers a solution covering the most important aspects of a usage log's
lifecycle, presented in Figure 22.
More specifically, the support tool offers a logger attached to a class, capable of
registering events such as method calls, constructor calls and errors. When the logger
starts, it creates an empty log file in the application's own directory, which resides on the Android device's external storage medium; subsequent events are logged in the file,
with information such as time of occurrence, name of event (name of method or
constructor) and even call parameters in case the event is an error. The use of these
parameters will be explained in the later section dedicated to test case generation.


Figure 22. Usage log lifecycle

The logger is started with a parameter indicating the maximum number of "runs" before the log file related to the current class can be sent back to the developers. A "run" in this case is one start of the logger; if the logger is
started in a logical entry point of an application, such as the onCreate method for
Android activities, the number of logger runs corresponds to the number of times the


application was started. A simple upload procedure was created in order to simulate
sending log files to developers. The implemented procedure copies the respective files from the application's directory to the root of the external storage medium; however,
an interface has been provided (called UploadProcedure) that can be implemented by
developers in order to upload usage data in the desired way.
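As a hedged sketch (the single upload method and its signature are assumptions based on the description above), a developer-supplied implementation might send the log file to a collection server over HTTP:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of a custom upload procedure; example.com stands for whatever
// endpoint the organization uses to collect logs.
public class HttpUploadProcedure implements UploadProcedure {

    public void upload(File logFile) {
        try {
            URL url = new URL("http://example.com/logs");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            FileInputStream in = new FileInputStream(logFile);
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            in.close();
            out.close();
            conn.getResponseCode(); // completes the request
            conn.disconnect();
        } catch (IOException e) {
            // Best-effort: on failure the log stays on the device for a retry.
        }
    }
}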
To lower the workload necessary to add logging instructions to application code, the
logger is utilized with as little coding as possible. For the remaining use cases, examples
are given using the second application presented earlier (MultiRes). We will start the
logger and use it to record events. Figure 23 shows parts of the main application code
with added lines for logging purposes.

Figure 23. Logging instructions


In point A, a new EventLogger is instantiated as a class variable. Then, it is started inside the onCreate method (point B). The parameters represent: the upload procedure, the full name of the class, the current application's data directory, and the maximum number of runs before an upload is attempted. All external storage paths are provided by the Android Environment and Context classes, to ensure compatibility with the end-user's set-up. In point C, the current method call is logged. In order to demonstrate error
logging, we will assume an exception is thrown if the showPhoto method is called with
parameter 3 (point D). The exception is handled by logging its details. In point E, an ErrorInfo is declared, named after the method that caused the error. Then, in point F,
the parameter class and instance are added to the error information object. In point G
the error is logged, using the ErrorInfo object; finally, in point H, the error is also
written in the Android log. Method parameters are logged in order to attempt to
recreate these errors when logs reach the developers.
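A hedged reconstruction of this instrumentation is sketched below; EventLogger and ErrorInfo are the tool's documented classes, but the activity class name and the exact method signatures (start, logMethodCall, addParameter, logError) are assumptions.

import android.app.Activity;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;

public class MultiResActivity extends Activity {

    private final EventLogger logger = new EventLogger();        // point A

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Point B: upload procedure, class name, application data directory
        // (obtained via Environment/Context, as described above), max runs.
        logger.start(new HttpUploadProcedure(),
                     getClass().getName(),
                     Environment.getExternalStorageDirectory() + "/" + getPackageName(),
                     5);
        logger.logMethodCall("onCreate");                        // point C
    }

    void showPhoto(int index) {
        try {
            if (index == 3) {                                    // point D
                throw new IllegalStateException("photo 3 failed to load");
            }
            // ... display the photo ...
        } catch (Exception e) {
            ErrorInfo info = new ErrorInfo("showPhoto");              // point E
            info.addParameter(Integer.class, Integer.valueOf(index)); // point F
            logger.logError(info);                                    // point G
            Log.e("MultiRes", "showPhoto failed", e);                 // point H
        }
    }
}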

Figure 24. Log generated for 5 runs of the MultiRes application

Figure 24 shows the log file generated by running the application. The onCreate method
was called 5 times, corresponding to the number of times the application was started.
The errors section shows that the user encountered 3 errors. As shown in Figure 23,
this means the user reached the photo with index 3 and an exception was thrown.
Errors are logged with the method name, time stamp and parameters (type and reference to the actual object, serialized and saved to the storage medium). When uploading usage logs, these object files are also sent to enable test case generation.

3.1.5 Analyze usage logs


The support tool includes an Eclipse view to facilitate log analysis. The user may select
multiple logs and run a counter to obtain event counts for each type of event; for errors, an option is available to show error details, such as parameters in the method
that caused them. Figure 25 displays the Eclipse view responsible for this task.

Figure 25. Log file statistics view

This functionality is useful both when analyzing component utilization and for viewing
contents of parameter objects generated by errors and received from users. As can be
seen in the above image, the showPhoto method caused an error when being called with
parameter equal to 3. The set of statistics can be extended to include more complex
ones, such as time and error distribution statistics.


3.1.6 Generate tests from logs


The final functionality of the support tool described here is test case generation, the
purpose of which is to recreate an error encountered by the end-user. In order to
accomplish this, the support tool takes a usage log and associated serialized objects and
uses the Java Reflection API to attempt to bring the application under test to the same
execution point that was logged as an error. When considering the usage logging
lifecycle shown in Figure 22, one can see that most activities are covered by the support
tool. The ones that are not covered are either too high-level to be automated (fix defects
found by test cases, re-design components), or organization-specific (the upload
procedure for logs and associated files). This section describes the use of logs and
serialized objects uploaded to the development team from end-users' devices.
As shown in Figure 23, when an error is logged, the logger requires parameters of the
method in which the error occurred. These parameters are then serialized and saved to
the device's external medium, to be sent along with the log file when the
maximum number of runs has been reached. When developers receive the logs and
serialized object files, they can generate test cases that aim to obtain the same
behaviour and then move on to correct it.

Figure 26. Logs and serialized objects

The approach is able to generate a test case that brings the application to the same
execution point described in the log, when the error itself is caused by a parameter in
the method call. Of course, an error can come from other sources, such as class
variables or external input. However, by using this approach, if the generated test case


cannot cause the application to reach the logged execution point, the cause can be
narrowed down to other factors. Also, the logger can only use serializable objects to log
as parameters; fortunately, most Java classes are serializable. The downside is that a
significant number of Android classes are not, their role being filled instead by the Android Parcelable interface. If parameters are not serializable, developers have the
option to log an error with no parameters; this will still provide some information
about error occurrence, even though test case generation will no longer be possible.
The final option available to developers in order to get an idea of an error that occurred
is to log the exception instead of the parameters, using the same logging mechanism. As
all exceptions are serializable and contain stack traces, instead of logging and
serializing parameter objects, an exception object can be serialized and sent to
developers, to be analyzed at a later time.
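A hedged sketch of this fallback is shown below (EventLogger and ErrorInfo usage mirrors the earlier example; the signatures are again assumptions). Since android.graphics.Bitmap implements Parcelable rather than Serializable, the serializable exception, with its stack trace, is logged in its place:

import android.graphics.Bitmap;

public class PhotoRenderer {

    private final EventLogger logger = new EventLogger();

    void showPhoto(Bitmap photo) {
        try {
            render(photo);
        } catch (Exception e) {
            // The Bitmap parameter cannot be serialized, so the exception
            // object (serializable, carrying its stack trace) is logged
            // instead of the parameter.
            ErrorInfo info = new ErrorInfo("showPhoto");
            info.addParameter(Exception.class, e);
            logger.logError(info);
        }
    }

    private void render(Bitmap photo) {
        // ... drawing code that may throw ...
    }
}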
In order to facilitate test case generation from log files, a wizard is provided as part of
the support tool, presented in Figure 27.

Figure 27. Test case generation wizard


The wizard requires a log file in order to generate a test case. The resulting test case is shown in the next figure; a test method was generated for each error in the log.

Figure 28. Code created by test case generation wizard

To successfully run a generated test case on the Android device or emulator, the
necessary files have to be uploaded to its external storage medium's root directory, an
action facilitated by the wizard (Figure 27). When a log file is loaded, users can specify
the name of the external medium their current device is using, and the wizard will
upload the log file and all necessary serialized object files to the root directory of that
medium. In point A (Figure 28) the log file is loaded from the root directory. Then, in
point B, method traces are obtained for those methods that caused errors. This step


also includes loading the required serialized objects. Then, a test method is generated
for each error in the loaded log file (point G) and assigned a method trace (point C).
Using the Java Reflection API the method that caused the respective error is reflected
and invoked using the obtained method trace (points D and E). The test case fails if
method reflection or invocation fails or if the method throws an exception when
invoked, and passes if the method has been successfully invoked with the retrieved
parameters. After successfully reaching the execution point that was logged as an error,
it is up to developers to analyze the causes of this behaviour. If the test passes and the
logged execution point is not reached, it is an indication that method parameters were
not the cause of the behaviour, and other sources must be investigated. Figure 29
shows an interpretation of test results.

Figure 29. Test result interpretation for generated test cases

In the current example, running the generated test case results in an Android log entry
(see Figure 23, point H), meaning the behaviour encountered by the user was
successfully recreated by the test case (the method was driven to throw an exception
by calling it with parameter 3).
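The reflective core of such a generated test might look like the hedged sketch below (reusing the MultiResActivity sketch from earlier); the log-loading call and the MethodTrace accessors are assumptions, while the fail-on-exception behaviour follows Figure 29.

import java.lang.reflect.Method;
import junit.framework.TestCase;

// Hedged sketch of the core of a generated test case: the logged call is
// replayed via reflection using the deserialized parameters.
public class GeneratedErrorTest extends TestCase {

    public void testReplayShowPhotoError() throws Exception {
        // Assumed loader: reads the log and serialized objects uploaded to
        // the storage medium's root directory, as described above.
        MethodTrace trace = MethodTrace.loadFirstError("/sdcard/MultiRes.log");

        Object target = new MultiResActivity();          // object under test
        Method m = target.getClass()
                         .getDeclaredMethod(trace.getMethodName(), int.class);
        m.setAccessible(true);
        // A thrown exception fails the test, signalling that the logged
        // behaviour was recreated; a pass means the parameters were not
        // the cause and other factors must be investigated.
        m.invoke(target, trace.getParameters());
    }
}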


3.1.7 System utilization


The support tool is distributed as three JAR files, available for download from the project's website1. One is the Eclipse plug-in; the second contains the necessary
libraries, while the third comprises only the logger library, to be distributed alongside
Android applications. Physical resource testing tasks are available for Activity classes
(for methods and method sequences); timed test cases can be performed on any type of
Android test case (including service and content provider test cases); logging tasks can
be performed on any type of class. In fact, the logger, analyzer and test case generator
can be used for any Java application, after minor modifications to the logger. The
support tool was designed for Eclipse 3.5 (Galileo), Android 1.6, and requires JUnit
libraries, as well as the Android Development Tools (ADT) plug-in for Eclipse.
The support tool was also tested on an Android application that was part of a student's Master's project. The behaviour was as expected: selected methods were tested for
performance, while an Android device was used to create logs, which were then pulled
off the device and analyzed. Test cases were successfully generated from these logs.

3.2 Impact on Mobile-D


The support tool influences two aspects of the Mobile-D methodology. More specifically,
it enables performance testing activities and the introduction of end-users into the
development lifecycle, both aspects being discussed in previous chapters. For
performance testing, the support tool can be used in Working days, as shown in Figure
12. An established performance evaluation tool for Android is Trace View, a profiler
provided with the standard distribution. By using Trace View combined with Android
Resource Management performance evaluation, a complete set-up is obtained; a typical
sequence of events for effective testing would include the following steps:
1. Ensure tested component passes functional tests
2. Use the Trace View profiler to identify performance bottlenecks
3. Use the support tool to evaluate the performance in those problem areas
4. Refactor problem areas to improve performance; improvements are verified
using the support tool
1 http://armeclipse.webs.com/


The logging functionality of the support tool is a perfect fit with end-user integration at
post-release time, as shown in Figure 9. The new Evolve phase in the process is centred
on the usage log lifecycle presented in Figure 22. Thus, in the Data analysis and System
test stages attached to this phase, usage logs received from users are analyzed, and test cases are then generated to recreate the reported errors, which are fixed in the later Planning and Working days. Finally, a new release is created (Release day stage).


Chapter 4
Conclusions and further work

This section provides a discussion of overall project goals and the extent to which these
goals have been fulfilled. We will first examine the contributions this work has brought,
and then discuss how these contributions improve mobile application development;
finally, directions for further work will be described.

4.1 Overview of improvements


The current project has attempted to improve mobile development best practice by
considering a popular and mature development methodology, Mobile-D, and improving
it using ideas based on relevant literature. This was done after analyzing the soundness
of the fundamental principles of the methodology, and the way these principles fit with
mobile development projects. The final major improvement approach was the
development of a support tool for mobile developers, which tries to fill a gap in current
tool support, as well as sustain the proposed improvements to the methodology.
The first chapter, along with the results of the questionnaire presented in Chapter 2, has consolidated the belief that agile methods are usually the best alternative for
mobile developers. Chapter 2 focuses on improvements to Mobile-D, in an attempt to
bring the methodology closer to the list of ideal traits adapted from (Rahimian &
Ramsin, 2008) and presented in Section 1.4. The first improvement was the addition of
a classification task to the lifecycle (Section 2.1), and a classification tree was
introduced to be used as is, or extended according to organization specifics. By
performing this new classification task, developers can expect to benefit from a set of
guidelines tailored to the specific application type they are currently developing, as
well as using previous experiences from projects in the same category. In Section 2.2,
alternative approaches for mobile development were explored, in an attempt to
integrate new concepts with Mobile-D. However, approaches from literature were
generally too high-level to integrate with the detailed methodology; for enterprise


mobile applications, a useful approach would be to consider the framework described in (Unhelkar & Murugesan, 2010). Section 2.3 discusses bringing end-users into the
development lifecycle, a major improvement to the Mobile-D process. This is to be done
at two different points: at design time, and after the first product release. For the first
integration point, a new task, End-user establishment, has been described, including
user group identification and requirement source identification activities. For post-release integration, Mobile-D has been extended to include a new phase called Evolve.
This phase is concerned with improving the product through constant end-user
feedback, an operation enabled by component usage reporting and error logging. The
second major improvement is introducing performance testing in the lifecycle, as
described in Section 2.4. First, performance tests are designed using Test-first
performance (TFP) (Johnson, Ho, Maximilien, & Williams, 2007), and the Performance
requirements evolution model (PREM) (Ho, Johnson, Williams, & Maximilien, 2006).
Then, in order to integrate running these tests, a new task (Performance testing) was
described and integrated into the process. The final section of the chapter deals with
software product line principles and their applicability to the methodology (Section
2.5). The section provides an argument towards using the product line principle of
developing with reuse in mind in mobile development.
Chapter 3 describes Android Resource Management, a support tool developed as an
Eclipse plug-in. The main functionalities of the tool are performance testing of Android
activity methods, timing Android tests, and supporting a usage logging lifecycle. The
support tool fulfils two goals: filling a gap in currently available support tools for
performance testing and logging, and enabling major improvements to Mobile-D, such
as post-release end-user feedback integration (Section 2.3) and the running of
performance tests (Section 2.4). The support tool functionalities include: method performance testing, method sequence performance testing, code segment performance testing, usage logging, usage log analysis and automatic test case generation from usage logs.

4.2 Discussion
An important goal of the project was to obtain a methodology that is as close as
possible to the ideal trait list presented in Section 1.4. Indeed, by introducing end-user


feedback support, application categories and software product line principles, Mobile-D now presents more of the ideal characteristics. Furthermore, the methodology was
improved in other aspects not mentioned in the list, such as the inclusion of
performance testing in the lifecycle. Another important contribution was the support
tool, which not only enables corresponding features of Mobile-D, but also decreases overall
development effort. Some characteristics of the tool may even be used outside mobile
development; features such as usage logging, log analysis and test case generation can
be used in any development scenario, with only minor adjustments to the support tool.

4.3 Further work

4.3.1 Mobile-D
The methodology would further benefit from an investigation into introducing market
study practices in the lifecycle, as this aspect is not specifically addressed by Mobile-D;
this is the only item on the ideal trait list not expressly treated by the approach.
Instead, the methodology relies on team members' experience in the field when making
decisions regarding project planning. By improving market consciousness, planning
decisions such as setting the project timeline and development rhythm can be adjusted
to better meet market requirements.
As with all methodologies, the improvements brought to Mobile-D should undergo
experimental validation in a real organization. Unfortunately, such an experiment
would require extensive work, performed in collaboration with development teams working with different methodologies, to analyze whether the envisaged benefits of the improvements described here are realized when applied to real projects.

4.3.2 Support tool


Further work on the support tool includes evaluating if it would be useful to include
performance testing for other Android components (services, content providers and
broadcast receivers) and to eventually extend the tool with these functionalities. Also,
the wizards used to generate performance test cases can be extended so that almost no more coding is necessary after code generation. For example, the wizard could allow
developers to select the type of metric for each test, and the target measurement in the
assertion, leaving only parameter initialization to be done after the code has been
generated. The most interesting direction for future work, however, is related to
automatic test suite generation from usage logs. Currently, test cases are generated
using parameters of the method in which the error was logged. The mechanism cannot
recreate errors that are caused by other factors such as class variables or external
input, or if the method parameters are not serializable. The approach can be extended
by including more entities from the environment that caused the error, such as logging
any class variable that was accessed or modified within the method. A study can be
performed, evaluating possible causes for errors, and the correct way to log these
causes. Another approach would be to generate test cases from serialized exception
objects generated by errors, using the information contained in these objects, such as a
stack trace dump. Finally, the wizard used for test case generation can be improved to
use multiple log files as input.


Chapter 5
Bibliography
Abrahamsson, P. (2007). Agile Software Development of Mobile Information Systems.
In Advanced Information Systems (pp. 1-4). Berlin: Springer.
Abrahamsson, P. (2005). Mobile software development - the business opportunity of
today. Proceedings of the International Conference on Software Development, (pp. 20-23). Reykjavik.
Abrahamsson, P., Hanhineva, A., Hulkko, H., Ihme, T., Jäälinoja, J., Korkala, M., et al.
(2004). Mobile-D: an agile approach for mobile application development. Conference on
Object Oriented Programming Systems Languages and Application; Companion to the
19th annual ACM SIGPLAN conference on Object-oriented programming systems,
languages, and applications (pp. 174-175). Vancouver: ACM.
Abrahamsson, P., Salo, O., Ronkainen, J., & Warsta, J. (2002). Agile Software Development
Methods: Review and Analysis. VTT Electronics.
Agile Alliance. (2001). Agile Software Development Manifesto. Retrieved from Manifesto
for Agile Software Development: http://agilemanifesto.org/
Amanquah, N., & Eporwei, O. T. (2009). Rapid Application Development for Mobile
Terminals. 2nd International Conference on Adaptive Science & Technology, ICAST, (pp.
410-417).
Android. (2010). Retrieved from Android Developers: http://developer.android.com/index.html

Balagtas-Fernandez, F., & Hussmann, H. (2008). Model-Driven Development of Mobile Applications. 23rd IEEE/ACM International Conference on Automated Software Engineering, (pp. 509-512). L'Aquila.
Beck, K., & Andres, C. (2004). Extreme Programming Explained: Embrace Change (2nd
Edition). Addison-Wesley Professional.
Beydeda, S., Book, M., & Gruhn, V. (2005). Model-driven software development.
Birkhauser.
Boehm, B. (2002). Get Ready for Agile Methods, with Care. Computer , 35 (1), 64-69.
Boehm, B., & Turner, R. (2003). Balancing Agility and Discipline: A Guide for the
Perplexed. Addison-Wesley.
Braun, P., & Eckhaus, R. (2008). Experiences on Model-Driven Software Development
for Mobile Applications. Proceedings of the 15th Annual IEEE International Conference
and Workshop on the Engineering of Computer Based Systems, (pp. 490-493).


Carton, A., Clarke, S., Senart, A., & Cahill, V. (2007). Aspect-Oriented Model-Driven
Development for Mobile Context-Aware Computing. Proceedings of the First
International Workshop on Software Engineering for Pervasive Computing Applications,
Systems, and Environments, (pp. 5-8).
Clements, P., & Northrop, L. (2002). Software Product Lines : Practices and Patterns.
Boston, MA: Addison-Wesley.
Cockburn, A. (2004). Crystal Clear: A Human-Powered Methodology for Small Teams.
Addison-Wesley Professional.
Dyba, T., & Dingsoyr, T. (2009). What Do We Know about Agile Software Development?
IEEE Software , 26, 6-9.
Eclipse. (2010). Retrieved from http://www.eclipse.org/
Hayes, I. S. (2003). Just enough wireless computing. Prentice Hall.
Ho, C.-W., Johnson, M. J., Williams, L., & Maximilien, E. M. (2006). On Agile Performance
Requirements Specification and Testing. Proceedings of the conference on AGILE, (pp.
47-52).
Hofmann, H. F., & Lehner, F. (2001). Requirements Engineering as a Success Factor in
Software Projects. IEEE Software , 18 (4), 58-66.
Hulkko, H., & Abrahamsson, P. (2005). A Multiple Case Study on the Impact of Pair
Programming on Product Quality. Proceedings of the 27th international conference on
Software engineering, (pp. 495-504). St Louis.
Ihme, T., & Abrahamsson, P. (2005). The Use of Architectural Patterns in the Agile
Software Development of Mobile Applications. International Conference on Agility,
ICAM, (pp. 155-162). Helsinki.
Johnson, M. J., Ho, C.-W., Maximilien, E. M., & Williams, L. (2007). Incorporating
Performance Testing in Test-Driven Development. IEEE Software , 24 (3), 67-73.
Jones, L. G., & Northrop, L. M. (2010). Clearing the Way for Software Product Line
Success. IEEE Software , 27 (3), 22-28.
Khambati, A., Grundy, J., Warren, J., & Hosking, J. (2008). Model-driven Development of
Mobile Personal Health Care Applications. Proceedings of the 23rd IEEE/ACM
International Conference on Automated Software Engineering, (pp. 467-470).
Kim, H., Choi, B., & Wong, W. E. (2009). Performance Testing of Mobile Applications at
the Unit Test Level. Proceedings of the 2009 Third IEEE International Conference on
Secure Software Integration and Reliability Improvement, (pp. 171-180).
Kroll, P., & Kruchten, P. (2003). The Rational Unified Process Made Easy: A Practitioner's
Guide to the RUP. Addison-Wesley Professional.
Kunz, T., & Black, J. (1999). An Architecture For Adaptive Mobile Applications.
Proceedings of Wireless 99, the 11th International Conference on Wireless
Communications, (pp. 27-38).


Mahmood, M. A., Burn, J. M., Gemoets, L. A., & Jacquez, C. (2000). Variables affecting
information technology end-user satisfaction: a meta-analysis of the empirical
literature. International Journal of Human-Computer Studies , 52 (4), 751-771.
McGee-Lennon, M. R. (2008). Requirements Engineering for Home Care Technology.
Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing
systems, (pp. 1439-1442 ). Florence.
Mobile Apps. (2010). Mobile Apps Group. Retrieved from http://www.informatics-ventures.com/connect/mobile-apps-group
Nascimento, L. M., de Almeida, E. S., & de Lemos Meira, S. R. (2008). A Case Study in
Software Product Lines - The Case of the Mobile Game. 34th Euromicro Conference on
Software Engineering and Advanced Applications, (pp. 43-50). Parma.
Northrop, L. (2004, September). Software Product Line Adoption Roadmap. Retrieved August 10, 2010, from http://www.sei.cmu.edu/library/abstracts/reports/04tr022.cfm
Oinas-Kukkonen, H., & Kurkela, V. (2003). Developing Successful Mobile Applications.
Proceedings of the International Conference on Computer Science and Technology, (pp.
50-54). Cancun, Mexico.
Pikkarainen, M., Salo, O., & Still, J. (2005). Deploying Agile Practices in Organizations: A
Case Study. Springer Berlin / Heidelberg.
Rahimian, V., & Ramsin, R. (2008). Designing an Agile Methodology for Mobile Software
Development: A Hybrid Method Engineering Approach. Second International Conference
on Research Challenges in Information Science, RCIS 2008, (pp. 337-342).
Marrakech.
Salo, O. (2006). Enabling Software Process Improvement in Agile Software Development
Teams and Organisations. Helsinki: VTT.
Sharp, H., Finkelstein, A., & Galal, G. (1999). Stakeholder Identification in the
Requirements Engineering Process. Proceedings of 10th International Workshop on
Database & Expert Systems Applications (DEXA) (pp. 387-391). IEEE Computer Society
Press.
Staples, M., & Hill, D. (2004). Experiences Adopting Software Product Line Development
without a Product Line Architecture. Proceedings of the 11th Asia-Pacific Software
Engineering Conference (pp. 176-183). IEEE Computer Society.
Techmeetup. (2010). Retrieved from Techmeetup: http://techmeetup.co.uk/
Unhelkar, B., & Murugesan, S. (2010). The Enterprise Mobile Applications Development
Framework. IT Professional , 12 (3), 33-39.
Varshney, U., & Vetter, R. (2001). A Framework for the Emerging Mobile Commerce
Applications. Proceedings of the 34th Annual Hawaii International Conference in System
Sciences, 9, pp. 9014-9023.
VisionMobile. (2010). Mobile Developer Economics 2010 and Beyond. http://www.visionmobile.com.

VTT Electronics. (2006). Portal of Agile Software Development Methodologies. Retrieved from Mobile-D Method: http://virtual.vtt.fi/virtual/agile/mobiled.html
Zhang, W. (2005). Architecturally Reconfigurable Development of Mobile Games.
Second International Conference on Embedded Software and Systems (ICESS'05), (pp. 66-72). Xi'an.
Zhang, W., & Hansen, K. M. (2007). Synergy between Software Product Line and
Intelligent Mobile Middleware. International Conference on Intelligent Pervasive
Computing, (pp. 515-520).
