

S60 Platform: How to Develop Unit Tests
Version 1.0
July 13, 2007

Legal notice

Copyright © 2007 Nokia Corporation. All rights reserved.

Nokia and Nokia Connecting People are registered trademarks of Nokia Corporation.
Other product and company names mentioned herein may be trademarks or trade
names of their respective owners.

Disclaimer

The information in this document is provided “as is,” with no warranties whatsoever,
including any warranty of merchantability, fitness for any particular purpose, or any
warranty otherwise arising out of any proposal, specification, or sample. This document
is provided for informational purposes only.

Nokia Corporation disclaims all liability, including liability for infringement of any
proprietary rights, relating to implementation of information presented in this document.
Nokia Corporation does not warrant or represent that such use will not infringe such
rights.

Nokia Corporation retains the right to make changes to this specification at any time,
without notice.

License

A license is hereby granted to download and print a copy of this specification for
personal use only. No other license to any other intellectual property rights is granted
herein.


Contents

1. Introduction
2. Practical introduction to Symbian C++ unit testing
2.1 Setting up a test project
2.2 Stubbing dependencies
2.3 Implementing tests
2.4 Code coverage
3. Understanding testability
4. Developing unit tests
4.1 Black-box versus white-box testing
4.2 Behavioral testing techniques
4.3 Structural methods
4.4 Using stubs and mock objects
5. Other techniques and tools
6. Further reading
7. References
Appendix A. TestSource.cpp
Appendix B. EUnit Professional, key features
Appendix C. Unit testing, TDD, test frameworks
Evaluate this resource


Change history

July 13, 2007 Version 1.0 Initial document release


1. Introduction
This document is a practical guideline that explains unit testing, techniques that can be
used when designing unit tests, and the tools and techniques that are available when
creating and running unit tests on the S60 platform.

The intended audience for this document is developers who write and run unit tests
against their own code modules.

Software testing in general is an enormous topic that embraces numerous techniques, definitions, and methodologies. A common approach is to classify testing at various levels, such as:

• Unit testing, in which each unit (usually a single class) of the software is tested (usually by the developer) to verify that the detailed design for the unit has been correctly implemented.

• Integration testing, in which components assembled from smaller, already-tested units are tested.

• System testing, in which the final software built from integrated components is tested to show that all requirements are met.

• Acceptance testing, in which usually the customer runs the tests to accept or reject the delivered software.

A small program might consist of a single unit, in which case there is no separate integration testing level. With large systems, it is wise to separate larger components, which are built from smaller units. In these cases, integration testing plays an important role in ensuring that integration did not disrupt any essential functionality.

Unit testing is most often designed and run by the developer. This means that errors are
detected early in development, so fixing them costs less than it would during system or
acceptance testing. Unit testing, Test Driven Development, and test frameworks are
nicely linked together by Richard Carlsson and Mickaël Rémond [13]; see Appendix C.

A good practice in S60 C++ development is to separate algorithms and business rules from the user interface. It is common to have an engine DLL, which has no UI dependencies. Unit testing is then applied only to the classes that form the engine DLL.
Testing the engine DLL as a whole is usually called "component testing," and it falls
within the integration testing level. Separate classes that form the DLL should be tested
separately, which falls into the unit testing level.

The remainder of this document covers unit testing techniques and practices from the
S60 C++ development perspective. As previously stated, the intended audience for this
document is developers, but persons in other roles might also find the content interesting
and informative.


2. Practical introduction to Symbian C++ unit testing
In its simplest form, a unit test can be a non-UI executable whose exit code is used to decide whether the test succeeded. The executable may be implemented to run one or more tests. In the long run, when there are many tests, it is easier to use frameworks that automatically create an environment for the test, execute the small test piece, and report the results. These kinds of frameworks are called test frameworks. EUnit [4] is a commercial solution, whereas SymbianOSUnit [2], [3] is available at no cost.

SymbianOSUnit is used for demonstration purposes due to its free availability. For large-scale development, the commercial solutions usually pay for themselves because they provide features (see Appendix B) that require time and money when implemented from scratch.

The S60 Platform: Map and Location Example [1] is extended with tests for demonstration purposes. In real life, unit tests should be written during the development stage of the project, before concrete classes are implemented. Here, unit tests are created for the class CMapExampleSmsEngine.

2.1 Setting up a test project

Developers should have Carbide C++ 1.2 and S60 3rd Edition, Feature Pack 1 installed
on their PC. Carbide’s command line tools should be activated (select Configure
environment for WINSCW command line from Carbide's Start menu).

Download the S60 Platform: Map and Location Example from Forum Nokia [1] and
extract it so that its root directory is C:\temp\MapEx. Then download SymbianOSUnit
from Sourceforge [3] and extract it. Copy the SymbianOSUnit directory from the
extracted directory to C:\temp\SymbianOsUnit. SymbianOSUnit requires nmake, so download it [6], extract the files, and copy NMAKE.EXE and NMAKE.ERR to a directory that is in PATH, for example, C:\Program Files\Nokia\Carbide.c++ v1.2\x86Build\Symbian_Tools\Command_Line_Tools.

Before continuing, it is advisable to read the tutorial documentation and example provided with the test framework.

Now it is time to create the unit test project and some tests:

1. Create a test directory under the example project: C:\temp\MapEx\test.

2. Copy \Tutorial\group\ExtraTestBuildTasks.bldmake and \Tutorial\test\testgen.bat from the test framework tutorial application to the test directory.

3. Create a minimal test suite in the file TestHeader.h: Each test-prefixed method is treated as a test case; the test target is added as a class variable, and the test class (also known as a fixture) is derived from CxxTest::TestSuite.


#ifndef TESTHEADER_H
#define TESTHEADER_H

#include "TestSuite.h"

// forward declarations
class CMapExampleSmsEngine;
class MSmsEngineObserver;

class CMapExampleSmsEngineTest : public CxxTest::TestSuite
    {
public:
    CMapExampleSmsEngineTest(const TDesC8& aSuiteName) : CxxTest::TestSuite(aSuiteName){}

private: // from CxxTest::TestSuite
    virtual void setUp();
    virtual void tearDown();

public:
    void testParseMsgCoordinates();
    void testParseMsgRequestType();
    void testParseMsgUid();
    void testSendMessage();
    void testSendMessageExceptions();

private: // data
    MSmsEngineObserver* iObserver;
    CMapExampleSmsEngine* iTarget;
    };

#endif // TESTHEADER_H

Note: The Perl script that generates a test suite from this header requires that the class definition line and the constructor line are not broken with a line feed.

4. Create an empty implementation for the unit tests in the file TestSource.cpp:

#include "TestDriver.h"
#include "Logger.h"

void CMapExampleSmsEngineTest::setUp(){}
void CMapExampleSmsEngineTest::tearDown(){}
void CMapExampleSmsEngineTest::testParseMsgCoordinates(){}
void CMapExampleSmsEngineTest::testParseMsgRequestType(){}
void CMapExampleSmsEngineTest::testParseMsgUid(){}
void CMapExampleSmsEngineTest::testSendMessage(){}
void CMapExampleSmsEngineTest::testSendMessageExceptions(){}

Note: TestDriver.h is a file that is generated on the fly from TestHeader.h during the build process.


5. Create a minimal Symbian makefile for the test: SymbianOSUnit.mmp:

// test class definitions & implementations
USERINCLUDE .
SOURCEPATH .
SOURCE TestSource.cpp

// test target class definitions & implementations
USERINCLUDE ..\inc
SOURCEPATH ..\src
// SOURCE CMapExampleSMSEngine.cpp // Our tests don't test the actual class yet

// libraries the test target depends on
LIBRARY etext.lib

// include SymbianOSUnit mmp file from the proper
// directory depending on relative path and target platform
#include "..\..\SymbianOSUnit\SymbianOSUnitApp\group\s60_3rd\SymbianOSUnit.source"

Note: The CMapExampleSMSEngine source is commented out because our tests do not test it yet, and that component has dependencies on other classes, which need to be handled when the target is really being tested.

6. Create a bld.inf file for the test project:

PRJ_MMPFILES
makefile ExtraTestBuildTasks.bldmake
SymbianOSUnit.mmp

After following the steps described above, the example directory with tests should look
like Figure 1.

Figure 1: Project directory structure

Now build the test and run it in the emulator. First, open the command prompt and change the directory to C:\temp\MapEx\test. Then create build files with the command bldmake bldfiles. Next, create makefiles with the command abld makefile. This is an essential part of the process because it executes ExtraTestBuildTasks.bldmake, which generates framework code for the tests. Finally, compile the tests for the emulator with the command abld build winscw udeb.

Start the emulator. The SymbianOSUnit application, which is used to run the tests, appears in the menu. Select "Run all suites" from the menu and observe how the tests are executed (see Figure 2).


Figure 2: Running unit tests with SymbianOSUnit

In the example above, all tests were run without errors. This was expected, because the
test cases were empty implementations.

2.2 Stubbing dependencies

Developers should now test the actual engine class. First, include the definition of the test target (CMapExampleSmsEngine) in the source file (TestSource.cpp):
#include "cmapexamplesmsengine.h"

Then add the test target implementation to the project by uncommenting the line in the .mmp file:
SOURCE CMapExampleSMSEngine.cpp

Here is the most difficult part: Testability was not considered during implementation, so there are probably awkward dependencies, private fields, and so on, which make unit testing challenging. In our case, the messaging classes RSendAsMessage and RSendAs could be used directly, but simulating exceptions would then be difficult.

The problem is solved by replacing the default library implementations with the developer's own. This is achieved by not linking against the existing libraries, and instead providing the developer's own implementations for the desired methods. The libraries are already missing from the .mmp file, so the compiler will compile the source files, but the linker will refuse to generate the final binaries. Its errors look like this:
Undefined symbol: 'void RSendAsMessage::CreateL(class RSendAs &, class TUid)
(?CreateL@RSendAsMessage@@QAEXAAVRSendAs@@VTUid@@@Z)'

The developer's task is to implement the methods with some functionality that satisfies the test needs. Simple empty implementations come in handy as a first step. Methods are set to return NULL or other hard-coded default values. Note that only methods that are used by the test target need to be implemented (for example, there is no need to implement all 29 RSendAsMessage methods). Empty implementations similar to the lines below can be used to satisfy the linker:


void RSendAsMessage::CreateL(RSendAs &, TUid) {}
TInt CMsvStore::HasBodyTextL(void) const { return KErrNone; }
CMsvStore * CMsvEntry::ReadStoreL(void) { return NULL; }

When all missing methods have been implemented, the target compiles and links. The tests can be run, and no errors should occur.

2.3 Implementing tests

Now it is time to actually test the test target. Each method with a test prefix is treated as a test case, which is executed separately by the framework. The test framework calls setUp() before calling the test method and tearDown() after the test case execution. In setup it is advisable to set the test target to a default state. In test cases, it is only necessary to run methods on an already instantiated test target and verify that its behavior and state are as expected. Teardown is implemented to clean up the test target and other resources created during setup. Setup could be implemented as follows:
void CMapExampleSmsEngineTest::setUp()
    {
    iObserver = new (ELeave) DummyObserver();
    iTarget = CMapExampleSmsEngine::NewL(iObserver);
    }

The engine requires an observer to be passed during construction. A dummy stub (see details in Appendix A) can be used for the parameter. Teardown is implemented to free the resources:
void CMapExampleSmsEngineTest::tearDown()
    {
    delete iTarget;
    delete iObserver;
    }

A first real test case could be to test message sending. The implementation is fairly simple: the method is called, and if it does not leave (throw an exception), the test case passes:
void CMapExampleSmsEngineTest::testSendMessage()
    {
    iTarget->SendSmsL(_L("12345678"), _L("abcd"));
    }

Message sending may also fail. To simulate an exception, RSendAsMessage::SetBodyTextL() can be implemented to leave. The test case is then implemented to expect SendSmsL to leave. However, SetBodyTextL shall leave only for this test case, and thus its behavior must be controllable from the test case.

One way to achieve controllability is to use a global variable, which the test case sets before calling the test target; SetBodyTextL is then implemented to behave based on the variable's state. A more generic approach is to define a global function pointer, which SetBodyTextL will call if it is set. The stubbed code and test case would look something like this:
// global function pointer
void (*gRSendAsMessage_SetBodyTextLHook)() = NULL;

void ThrowExceptionL()
    {
    User::Leave(KErrGeneral);
    }


void RSendAsMessage::SetBodyTextL(const TDesC16& a)
    {
    if (gRSendAsMessage_SetBodyTextLHook)
        gRSendAsMessage_SetBodyTextLHook();
    }

void CMapExampleSmsEngineTest::testSendMessageExceptions()
    {
    gRSendAsMessage_SetBodyTextLHook = ThrowExceptionL;
    TS_ASSERT_THROWS_ANYTHING(
        iTarget->SendSmsL(_L("12345678"), _L("abcd"))
        );
    }

The test case first sets the function pointer to refer to the function that will leave. Then the test case calls SendSmsL. The call is wrapped in an assert macro that checks that the enclosed code throws an exception. If it does not, the macro reports to the test framework that the test case failed.
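The global function pointer hook above can be sketched in standard, desktop-compilable C++, with exceptions standing in for Symbian leaves. All names here are illustrative, not from the example project:

```cpp
#include <stdexcept>

// Hook consulted by the stub; in the document this role is played by
// gRSendAsMessage_SetBodyTextLHook inside the stubbed SetBodyTextL().
void (*gSetBodyTextHook)() = nullptr;

// Stubbed library call: does nothing unless a fault is injected.
void SetBodyTextStub()
    {
    if (gSetBodyTextHook)
        gSetBodyTextHook();            // fault injection point
    }

// Hypothetical code under test: forwards to the stubbed library call.
void SendSms()
    {
    SetBodyTextStub();
    }

void ThrowException()
    {
    throw std::runtime_error("simulated leave");
    }

bool selfTest()
    {
    // Without the hook, the call succeeds.
    gSetBodyTextHook = nullptr;
    SendSms();

    // With the hook installed, the same call must throw,
    // mirroring TS_ASSERT_THROWS_ANYTHING in the document.
    gSetBodyTextHook = ThrowException;
    bool threw = false;
    try { SendSms(); } catch (const std::exception&) { threw = true; }
    gSetBodyTextHook = nullptr;        // reset so other test cases are unaffected
    return threw;
    }
```

Resetting the hook after the test case is important; otherwise the injected fault leaks into subsequent test cases.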

2.4 Code coverage

Code coverage tools are used to find out how well the tests cover the code being tested.
BullseyeCoverage [5] is commonly used in Symbian development. The process is
simple:

1. Turn on the coverage compiler by pressing the button in BullseyeCoverage (see Figure 3).

2. Recompile tests.

3. Run tests in emulator.

4. View coverage results.

Figure 3: Code coverage after test round

The results include function coverage and branch coverage. Figure 3 shows that SendSMSL() was fully tested (from a structural point of view) and ParseMsgUid() only partially. In the detailed coverage view (see Figure 4), the source code is shown, and branches are marked with a description when not all decision paths were executed.


Figure 4: Detailed coverage analysis

It is a fairly difficult task to raise the branch coverage of a method to 100 percent. Common advice in unit testing is to keep test cases small and simple. This is why many test cases are needed before the target method's coverage is raised to an acceptable level. Test cases should pass suitable values to the method under test until all interesting branches in the target are walked through during execution. The target object's state might also need to be modified externally before calling the actual implementation from the test. This may be achieved by:

• Altering attributes directly, when they are public;

• Altering attributes indirectly by calling methods that change the state (for example, when there are setter methods for the needed attributes).


If the target class attributes are protected or private, a special approach is needed, such as:

• Defining the test class as a friend of the test target, for example:

class CMapExampleSmsEngine : public CBase,
                             public MMsvSessionObserver
    {
#ifdef __SYMBIANOSUNIT
    friend class CMapExampleSmsEngineTest;
#endif
    ...
    }

• Deriving a wrapper class from the test target class and then implementing methods in the wrapper that give access to the protected attributes of the actual class;

• Using preprocessor directives in the class under test (or even in its implementation) to provide more open access to class attributes. Note that code split into different blocks guarded by preprocessor directives is less readable and may even introduce new bugs that are hard to track down.
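The wrapper-class approach can be sketched in plain standard C++; the class names and the protected attribute below are hypothetical:

```cpp
// Hypothetical class under test with a protected attribute.
class CEngine
    {
public:
    void SetCount(int aCount) { iCount = aCount; }
protected:
    int iCount = 0;
    };

// Test wrapper: derives from the target and exposes the protected state,
// so the test can verify it without a friend declaration or #ifdef blocks.
class CEngineTestWrapper : public CEngine
    {
public:
    int Count() const { return iCount; }
    };

bool selfTest()
    {
    CEngineTestWrapper target;
    target.SetCount(42);
    return target.Count() == 42;   // state verified through the wrapper
    }
```

Unlike the friend approach, this requires no change to the production class at all, but it only reaches protected members, not private ones.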

The following chapters will explain unit testing from a theoretical point of view and will
provide descriptions of strategies and techniques that are important to remember when
developing unit tests.


3. Understanding testability
What makes software testable or non-testable? By examining existing implementations (as described in the previous chapter), it is easy to see that adding tests to a program later on may be difficult: if the internal structure of the program is complex, developers will not have the necessary access to data, methods, and the event-handling system (when asynchronous interfaces are used).

Pressman [7] defines the following principles, which make software more testable:

Operability: The better the target software works, the easier it is to test. So by
default, don’t let code bloat, and remove bugs early (before they kill you).

Observability: What you see is what you test. Missing source or documentation
makes it hard to figure out how the target should be tested. Inaccessible attributes make
it difficult or impossible to decide whether the results are correct or incorrect. Exceptions
and error conditions and their outputs should be traceable so that it is possible to decide
whether behavior was expected or unexpected.

Controllability: Keep implementations open so that tests can control the test target as easily as possible. In practice, split long and complex methods into pieces and make them public or protected. The data that the software uses should also be accessible and changeable from the tests.

Decomposability: Isolate problems by controlling the testing scope. Software should be built from small pieces, which can be tested independently. In practice, one test case suite should test one class, and all dependencies should be replaceable with stubs or mock objects.

Simplicity: The less code there is, the less need there is for tests. Functionality that is not used should not exist in the system (remove dead code). The software architecture should be simple and modularized. Code is kept as short as possible, is readable, and has unnecessary control structures removed.

Stability: Fewer changes mean fewer disruptions to testing. Changes to the software need to be controlled and planned. The software is designed (and tested) to recover from failures. Pressman suggests minimizing changes to code and tests, whereas agile developers prefer continuous code refactoring (change code, test, and remove unnecessary code).

Understandability: The more information testers have, the better the tests. Documentation in general is sufficient and correct. Component dependencies, including the usage of external and internal components, are clear and understandable. Changes to the code base are communicated (versioning systems and visual diff viewers are handy for this when changes are made wisely).


4. Developing unit tests


How does a developer know what kind of tests to create and when to test? There is no clear answer. Keep in mind that unit testing is most often used during the development phase, and its purpose is to catch as many bugs as possible early on, thus decreasing the amount of testing and corrections at higher testing levels. Making the software unit testable also forces developers to create better software architectures, which helps in bug hunting and software maintenance.

Testing methods are sometimes classified into behavioral and structural techniques. Behavioral tests ensure that the implementation behaves in the manner it should; for example, the implementation takes into account the possibility that streamed data from the network might not fit into the destination buffer. Structural techniques, on the other hand, verify that all the essential control paths are covered by tests.

In practice it is useful to first create behavioral tests and then measure branch coverage.
After that, test cases are added based on structural analysis until necessary coverage is
reached.

The sections in this chapter describe common testing techniques in detail.

4.1 Black-box versus white-box testing

Black-box testing is a way to develop tests by examining the object under test with no
knowledge of its internal structure. Only inputs and outputs are examined.

The drawback of this type of testing is that it is difficult to be sure that all pathways have
been tested. However, for large, complex systems, the tendency is to do black-box
testing to simplify things.

White-box testing, on the other hand, aims to develop tests for an object with full
knowledge of its inner workings. It allows the tester to select the test inputs in such a
manner that all (or the most important) pathways can be tested. Paths can be
constructed from control structures and data access. Code coverage tools are used to
automatically find paths and report how well they were tested; see Wikipedia [6] for
details.

Since white-box tests reflect the inner workings of an object, they require updating when
the method under test changes. Black-box tests need only be changed when the method signature or semantics change.

4.2 Behavioral testing techniques

One very common source of errors is improper handling of boundaries. The idea in boundary value analysis is to test values at the edges, for example MIN, MIN-1, MAX, MAX+1. These values should be exercised both for the input and the output of methods (when applicable).
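A minimal sketch of boundary value analysis in standard C++; the validator and its [0, 100] range are hypothetical:

```cpp
// Hypothetical validator under test: accepts values in [0, 100].
bool IsValidPercentage(int aValue)
    {
    return aValue >= 0 && aValue <= 100;
    }

bool selfTest()
    {
    // Boundary value analysis: exercise MIN-1, MIN, MAX, MAX+1.
    return !IsValidPercentage(-1)      // just below the lower boundary
        &&  IsValidPercentage(0)       // lower boundary itself
        &&  IsValidPercentage(100)     // upper boundary itself
        && !IsValidPercentage(101);    // just above the upper boundary
    }
```

Off-by-one errors (for example, writing `aValue > 0` instead of `aValue >= 0`) are exactly what these four probes are designed to catch.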

It is impossible to exercise all the possible input data that the methods may receive when the system is used in the real world. However, similar data passed to a method usually does not change the execution path, and thus the data can be abstracted. Equivalence class partitioning is a behavioral test design method that divides all similar input values into classes. For example, if a method accepts values between -5 and 15, there are three classes, and thus only three test cases are needed:

• Valid: [-5 – 15]


• Invalid: [-n – -6]

• Invalid: [16 – n]
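Using the same [-5, 15] example, an equivalence partitioning test can be sketched in standard C++; the Accepts function is a hypothetical stand-in for the method under test:

```cpp
// Hypothetical method under test: accepts values between -5 and 15.
bool Accepts(int aValue)
    {
    return aValue >= -5 && aValue <= 15;
    }

bool selfTest()
    {
    // One representative value per equivalence class is enough:
    return  Accepts(7)     // valid class [-5, 15]
        && !Accepts(-20)   // invalid class below the range
        && !Accepts(30);   // invalid class above the range
    }
```

In practice, equivalence partitioning is combined with boundary value analysis: representatives from each class plus the values at the class edges.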

Special values are a good source of errors and require extra attention from the implementation. Some examples include:

• Zero and one in arithmetic operations and function usage

• 90 degrees and its multiples

• Empty strings

• NULL values

Experience and intuition can guide developers to predict possible sources of errors. Because programmers often make similar errors, creating test cases through error guessing can be practical and effective. This is good practice especially when the software and development teams evolve and test suites are run continuously: such tests can locate bugs that crawl into the software during changes.

Error-guessing-related examples include:

• Improper or missing error handling, for example, writing RFile.Open() instead of User::LeaveIfError(RFile.Open()).

• Improper exception handling, for example, trapping an exception and ignoring the problem (when the exception happens in production, it will crash the software).

• Leaking code (heap or other resources). Proper use of the cleanup stack is the solution for automatic variables, and constructors and destructors for class variables. When deleting objects behind pointers, those pointers shall always be set to NULL before doing anything else (which may cause a leave), unless the deletion happens in the destructor.

• Improper transaction semantics. For example, data is written to the file in pieces in nested method calls rather than gathering all the needed data first and then writing it in one atomic operation. The first approach easily corrupts the file if an exception happens in the middle of execution.

• Improper CleanupStack and CActive usage.

• Invalid event handling, for example, what happens if events come in a different order? A common approach is to design a state machine that defines all the states and the allowed transitions from state to state. The implementation should then simply implement the machine and nothing else.

• Object life cycle differing from the common scenario, for example, a referred object gets deleted when the referring object does not expect it (callbacks together with active objects cause this kind of situation if not considered and documented properly).

• Multithreading, parallel execution, and deadlocks.
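The state machine approach mentioned in the event-handling bullet above can be sketched in standard C++; the states, events, and transitions are hypothetical:

```cpp
// Hypothetical connection states and events.
enum class State { Idle, Connecting, Connected };
enum class Event { Connect, ConnectComplete, Disconnect };

// The machine defines every allowed transition; anything else is rejected,
// so out-of-order events cannot drive the object into an undefined state.
class ConnectionStateMachine
    {
public:
    State CurrentState() const { return iState; }

    // Returns false when the event is not allowed in the current state.
    bool Handle(Event aEvent)
        {
        switch (iState)
            {
            case State::Idle:
                if (aEvent == Event::Connect)
                    { iState = State::Connecting; return true; }
                break;
            case State::Connecting:
                if (aEvent == Event::ConnectComplete)
                    { iState = State::Connected; return true; }
                if (aEvent == Event::Disconnect)
                    { iState = State::Idle; return true; }
                break;
            case State::Connected:
                if (aEvent == Event::Disconnect)
                    { iState = State::Idle; return true; }
                break;
            }
        return false;   // invalid event order; state is unchanged
        }

private:
    State iState = State::Idle;
    };

bool selfTest()
    {
    ConnectionStateMachine m;
    // ConnectComplete before Connect must be rejected.
    bool rejected = !m.Handle(Event::ConnectComplete);
    bool accepted = m.Handle(Event::Connect)
                 && m.Handle(Event::ConnectComplete);
    return rejected && accepted && m.CurrentState() == State::Connected;
    }
```

Tests can then enumerate the invalid (state, event) pairs systematically, which is exactly the event-ordering coverage the bullet above calls for.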

4.3 Structural methods

Code coverage analysis [14] is the process of:

• Finding areas of a program not exercised by a set of test cases;

• Creating additional test cases to increase coverage;


• Determining a quantitative measure of code coverage, which is an indirect measure of quality.

It is important to understand that code coverage analysis cannot identify missing code, that is, missing functionality. That is why structural testing methods should never be used as the only testing technique. Structural testing is a supplement to behavioral testing, not a substitute for it.

There are many structural coverage metrics (and thus testing techniques), including:

• Statement coverage

• Decision coverage

• Condition coverage

• Multiple condition coverage

• Condition/decision coverage

• Modified condition/decision coverage

• Path coverage

Each coverage metric has its pros and cons. BullseyeCoverage implements condition/decision coverage and thus offers simplicity without the shortcomings of any single metric. For more about coverage metrics and testing techniques, see [9], [10], [11], and [14].
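To illustrate why decision and condition coverage are stricter than statement coverage, consider this standard C++ sketch (the guard function and its conditions are hypothetical):

```cpp
// Hypothetical guard with a compound condition.
bool ShouldRetry(bool aTimedOut, int aAttempts)
    {
    if (aTimedOut && aAttempts < 3)
        return true;
    return false;
    }

bool selfTest()
    {
    // A single call such as ShouldRetry(true, 1) exercises every statement,
    // yet leaves the false outcome of the decision, and of each individual
    // condition, untested. Condition/decision coverage needs all three cases:
    return  ShouldRetry(true, 1)     // decision true; both conditions true
        && !ShouldRetry(false, 1)    // first condition false
        && !ShouldRetry(true, 3);    // second condition false
    }
```

A coverage tool such as BullseyeCoverage would flag the one-call test suite as incomplete even though its statement coverage is 100 percent.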

4.4 Using stubs and mock objects

"Mocks Aren't Stubs" [15], by Martin Fowler, is a nice article describing the differences between stubs and mocks. A common characteristic of both is that the dependencies of the test target are replaced with implementations that give feedback about execution and may offer the possibility of altering the runtime environment from the test case. Fowler defines mock objects in the following manner:

The term 'Mock Objects' has become a popular one to describe special case objects that
mimic real objects for testing. Most language environments now have frameworks that
make it easy to create mock objects. What's often not realized, however, is that mock
objects are but one form of special case test object, one that enables a different style of
testing.

The example code in Appendix A uses an implementation that logs every method execution to a file with the macro _LOGF. The macro could be changed to write the result to a dynamic buffer. The test case could then execute the test and, after execution, verify from the buffer that certain methods were called (with the right content), and in the right order. This kind of passive implementation replacement is referred to as stubbing.
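A buffer-based call log of this kind can be sketched in standard C++; the logged method names and the code under test are hypothetical:

```cpp
#include <string>
#include <vector>

// Call log shared between the stubs and the test case; in the document the
// _LOGF macro writes to a file, here a dynamic buffer is used instead.
std::vector<std::string> gCallLog;

// Hypothetical stubbed dependency methods: each records its own call.
void CreateMessage() { gCallLog.push_back("CreateMessage"); }
void SetBodyText()   { gCallLog.push_back("SetBodyText"); }
void SendMessage()   { gCallLog.push_back("SendMessage"); }

// Hypothetical code under test: must call the dependency in this order.
void SendSms()
    {
    CreateMessage();
    SetBodyText();
    SendMessage();
    }

bool selfTest()
    {
    gCallLog.clear();
    SendSms();
    // Verify that the right methods were called, in the right order.
    const std::vector<std::string> expected =
        { "CreateMessage", "SetBodyText", "SendMessage" };
    return gCallLog == expected;
    }
```

This is still stubbing in Fowler's sense: the stubs passively record what happened, and the test inspects the record afterwards, rather than the stubs themselves failing the test.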

When the semantics of a stubbed method can be altered dynamically (from the test case), the approach should be called mocking. When unit testing must reach high coverage, it is practical to have mock implementations for each class. Because actual classes usually refer to each other, the test cases can select which instances to replace with a mock object and where to use a concrete object.

jMock is a library for the Java™ language that supports test-driven development of Java
code with mock objects. The practices introduced in jMock are interesting, but
implementing some of the practices in C++ is quite difficult because everything needs to
be done from scratch. The article "Mock Roles, Not Objects" [17] is worth reading to
understand the concepts behind mocking.


5. Other techniques and tools


Manual code review and inspection is an effective method for locating bugs in software. Pair programming is another such practice: two sets of eyes do not let bugs crawl into code as easily as one.

There are also tools that scan the source code and report structural errors or bad
practices. Such tools include:

• LeaveScan (integrated into Carbide.c++ 1.2)

• CodeScanner,
  http://www.mobileinnovation.co.uk/products/codeScanner/index.html

• SymScan,
  http://developer.symbian.com/main/tools/devtools/code/index.jsp

• PC-lint,
  http://www.gimpel.com/
  http://www.gimpel-online.com/OnlineTesting.html

• Understand for C++,
  http://www.scitools.com/products/understand/cpp/features.php


6. Further reading
Forum Nokia [12] provides links to useful quality and testing material. Wikipedia [6] offers
good overall explanations and some theoretical background for testing and its terminology.

Classic reference books include Software Testing Techniques [10] and The Complete
Guide to Software Testing [9]. Software Engineering – A Practitioner’s Approach [7]
offers, as the name implies, useful, easy-to-understand information. Agile developers
may dispute some of the practices in these books, but the theory adapts well to today's
lightweight processes.

S60 Smartphone Quality Assurance [11] is a helpful new book for anyone working with
the S60 platform. Symbian OS C++ for Mobile Phones [8] and other Symbian Press
books are also worthwhile reading.


7. References
[1] S60 Platform: Map and Location Example, Forum Nokia

[2] SymbianOSUnit Web site, http://www.symbianosunit.co.uk/

[3] SymbianOSUnit download site, http://sourceforge.net/projects/symbianosunit/

[4] EUnit professional, http://www.sysopendigia.com/C2256FEF0043E9C1/0/405001166

[5] BullseyeCoverage, http://www.bullseye.com/

[6] Software Testing, Wikipedia, http://en.wikipedia.org/wiki/Software_testing

[7] Software Engineering – A Practitioner’s Approach, Fourth Edition, Roger S.
Pressman, McGraw-Hill, 1997

[8] Symbian OS C++ for Mobile Phones, Volume 2, Richard Harrison, Wiley, 2004

[9] The Complete Guide to Software Testing, Second Edition, Bill Hetzel, Wiley, 1988

[10] Software Testing Techniques, Second Edition, Boris Beizer, Van Nostrand Reinhold,
1990

[11] S60 Smartphone Quality Assurance, Saila Laitinen, Wiley, 2006

[12] Breakthrough with Quality, http://forum.nokia.com/main/resources/quality/

[13] EUnit, A Powerful Unit Testing Framework for Erlang, Richard Carlsson, Mickaël
Rémond, http://user.it.uu.se/~richardc/eunit/EUnit.ppt

[14] Code Coverage Analysis, http://www.bullseye.com/coverage.html

[15] "Mocks Aren't Stubs," http://www.martinfowler.com/articles/mocksArentStubs.html

[16] jMock - A Lightweight Mock Object Library for Java, http://www.jmock.org/

[17] "Mock Roles, Not Objects," http://www.jmock.org/oopsla2004.pdf


Appendix A. TestSource.cpp
#include "TestHeader.h"
#include "TestDriver.h"
#include "Logger.h"
#include "cmapexamplesmsengine.h"

#include <msvstore.h>

// ========== logger ==========

#include <flogger.h>

#define __DEFINE_LITERAL(aLiteralName, aStr) _LIT(aLiteralName, aStr);

_LIT( _KLogDir, "MyLogs" );
_LIT( _KLogFile, "test.txt" );

#define _LOGF( aEllipsis )\
{\
_LIT(_KFormat,"%S(%d):%Ld:%S: ");\
__DEFINE_LITERAL( _KFile, __FILE__ );\
TPtrC8 _func8((TUint8*)__FUNCTION__);\
TBuf<40> _func;\
_func.Copy(_func8.Right(40));\
TBuf<256> _log;\
_log.Format(_KFormat, &_KFile, __LINE__, RThread().Id().Id(), &_func);\
_log.AppendFormat aEllipsis;\
RFileLogger::Write( _KLogDir, _KLogFile, EFileLoggingModeAppend, _log );\
}

#define _HERE() _LOGF((KNullDesC))

// ========== stubbed / mocked implementations ==========

class DummyObserver : public MSmsEngineObserver
{
virtual void MessageSent()
{
_LOGF((_L("DummyObserver::MessageSent()")));
}
virtual void MessageReceived(TDesC& aMsg, TDesC& aAddr)
{
_LOGF((_L("DummyObserver::MessageReceived(%S, %S)"), &aMsg, &aAddr));
}
virtual void MessageRequested(TDesC& aMsg, TDesC& aAddr)
{
_LOGF((_L("DummyObserver::MessageRequested(%S, %S)"), &aMsg, &aAddr));
}
virtual void SmsEngineError(TInt aErrorCode)
{
_LOGF((_L("DummyObserver::SmsEngineError(%d)"), aErrorCode));
}
};

void RSendAsMessage::AddRecipientL(const TDesC16& a,
    RSendAsMessage::TSendAsRecipientType b)
{
_LOGF((_L("RSendAsMessage::AddRecipientL(%S, %d)"), &a, b));
}
void RSendAsMessage::Close()
{
_LOGF((_L("RSendAsMessage::Close()")));
}
void RSendAsMessage::CreateL(RSendAs &a, TUid b)
{
_LOGF((_L("RSendAsMessage::CreateL(%d, %d)"), &a, b));
}
void RSendAsMessage::SendMessage(class TRequestStatus &)
{
_LOGF((_L("RSendAsMessage::SendMessage()")));
}

// global function pointer
void (*gRSendAsMessage_SetBodyTextLHook)() = NULL;

void RSendAsMessage::SetBodyTextL(const TDesC16& a)
{
_LOGF((_L("RSendAsMessage::SetBodyTextL(%S)"), &a));
if(gRSendAsMessage_SetBodyTextLHook)
gRSendAsMessage_SetBodyTextLHook();
}

CMsvEntry * CMsvEntry::NewL(CMsvSession &, long, TMsvSelectionOrdering const &)
{
_LOGF((_L("CMsvEntry::NewL()")));
return NULL;
}
CMsvSession * CMsvSession::OpenAsyncL(MMsvSessionObserver &)
{
_LOGF((_L("CMsvSession::OpenAsyncL()")));
return NULL;
}
CMsvStore * CMsvEntry::ReadStoreL(void)
{
_LOGF((_L("CMsvEntry::ReadStoreL()")));
return NULL;
}
TInt CMsvStore::HasBodyTextL(void) const
{
_LOGF((_L("CMsvStore::HasBodyTextL()")));
return KErrNone;
}
TInt RSendAs::Connect(void)
{
_LOGF((_L("RSendAs::Connect()")));
return KErrNone;
}
TMsvSelectionOrdering::TMsvSelectionOrdering(void)
{
_LOGF((_L("TMsvSelectionOrdering::TMsvSelectionOrdering()")));
}
void CMsvEntry::DeleteL(long)
{
_LOGF((_L("CMsvEntry::DeleteL()")));
}
void CMsvEntry::SetEntryL(long)
{
_LOGF((_L("CMsvEntry::SetEntryL()")));
}
void CMsvStore::RestoreBodyTextL(CRichText &)
{
_LOGF((_L("CMsvStore::RestoreBodyTextL()")));
}

// ========== test suite ==========


void CMapExampleSmsEngineTest::setUp()
{
_HERE();
gRSendAsMessage_SetBodyTextLHook = NULL;
iObserver = new (ELeave) DummyObserver();
iTarget = CMapExampleSmsEngine::NewL(iObserver);
}

void CMapExampleSmsEngineTest::tearDown()
{
_HERE();
delete iTarget;
delete iObserver;
}

void CMapExampleSmsEngineTest::testParseMsgCoordinates()
{
_HERE();
}

void CMapExampleSmsEngineTest::testParseMsgRequestType()
{
_HERE();
}

void CMapExampleSmsEngineTest::testParseMsgUid()
{
_HERE();
iTarget->ParseMsgUid(_L("REQ E01FF1Cd"));
}

void CMapExampleSmsEngineTest::testSendMessage()
{
_HERE();
iTarget->SendSmsL(_L("12345678"), _L("abcd"));
}

void ThrowExceptionL()
{
_HERE();
User::Leave(KErrGeneral);
}

void CMapExampleSmsEngineTest::testSendMessageExceptions()
{
_HERE();
gRSendAsMessage_SetBodyTextLHook = ThrowExceptionL;
TS_ASSERT_THROWS_ANYTHING(
iTarget->SendSmsL(_L("12345678"), _L("abcd"))
);
}


Appendix B. EUnit Professional, key features

• Advanced test creation wizard

• Test skeletons from source code

• Automated stub and adapter creation

• Command line support

• Multiple test environment support

• Test parameter support

• Setting item for resource checking level

• Extension API

• Free text print anywhere from test code

• Memory allocation testing

• Decorator handling

• Automated memory leak detection

• Test monitoring separated from test execution

• Two test monitoring environments

• Panic, exception, and leave handling


Appendix C. Unit testing, TDD, test frameworks

From [13]:

What is unit testing?

• Testing "program units" in isolation

  o Functions, modules, subsystems

• Testing specified behavior (contracts)

  o Input/output

  o Stimulus/response

  o Preconditions/postconditions, invariants

What unit testing is not

• Unit testing does not cover:

  o Performance testing

  o Usability testing

  o System testing

  o Etc.

• Unit tests do not replace, but can be an important part of:

  o Regression testing

  o Integration testing

Test-driven design

• Write unit tests (and run them, often) while developing the program, not
  afterwards.

• Write tests for a feature before implementing that feature.

• Move on to another feature when all tests pass.

• Good for focus and productivity:

  o Concentrate on solving the right problems

  o Avoid over-specification and premature optimization

• Regression tests for free


Unit testing frameworks

• Make it easy to:

  o Write tests: minimal typing overhead

  o Run tests: at the push of a button

  o View test results: efficient feedback

• Made popular by the JUnit framework for Java by Beck and Gamma


Evaluate this resource

Please spare a moment to help us improve documentation quality and recognize the
resources you find most valuable, by rating this resource.
