
Best Practices for Testing Secure Applications for Embedded Devices

Included in this White Paper:
• Executive Summary
• Introduction
• Embedded Test Planning
• Security Test Planning
• Testing Secure Embedded Applications
• Conclusion
• References and Further Reading
• Appendix: NanoDefender™

Mocana Corporation
350 Sansome Street, Suite 1010
San Francisco, CA 94104
415-617-0055 Phone | 866-213-1273 Toll Free
info@mocana.com | www.mocana.com

Copyright © 2009 Mocana Corp.
Revised April 7, 2009


Table of Contents
Executive Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Secure Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Embedded Test Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5


Embedded Systems’ Unique Characteristics . . . . . . . . . . . . . . . . 5
The Embedded Testing Process . . . . . . . . . . . . . . . . . . . . . . . 6
Embedded Testing Focus. . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Security Test Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


Security Testing’s Negative Requirements . . . . . . . . . . . . . . . . . . 9
Eliminating Security Design Flaws . . . . . . . . . . . . . . . . . . . . . 10
Secure Software Development Lifecycle . . . . . . . . . . . . . . . . . 14

Testing Secure Embedded Applications . . . . . . . . . . . . . . . . . . . . 19


Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Types of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
When to Stop Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Building a Custom Test Environment. . . . . . . . . . . . . . . . . . . . 29

Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

References and Further Reading . . . . . . . . . . . . . . . . . . . . . . . . 33

Appendix: NanoDefender™ . . . . . . . . . . . . . . . . . . . . . . . . . . 36

About Mocana . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Downloads and Contacts . . . . . . . . . . . . . . . . . . . . . . . . . 39

Best Practices for Testing Secure Applications for Embedded Devices – Free evaluation code at www.mocana.com/evaluate.html II
Executive Summary
Despite the fact that embedded systems design teams spend a considerable
portion of their overall development budget on testing, embedded systems
continue to be buggy and vulnerable to security breaches. Why?

Embedded systems are generally more difficult and expensive to test than
standard desktop applications due to many factors, such as the separation of
development and target platforms, lack of test tools for their target operating
systems, and the need to focus on difficult-to-obtain performance metrics.
Further, because many embedded devices are connected to the outside world,
often through the inherently insecure Internet, testing beyond mere functionality
is required to validate the applications as secure, robust, and resistant to attacks.
It’s easy to see why testing embedded systems is such a challenge.

Another reason is that contrary to common perception, simply including or using


security products and protocols (such as using SSL in an embedded browser,
which can mask some bugs if used with mutual authentication) does not
make the application secure. In reality, declaring an application secure means
saying that it does not contain any vulnerabilities that could be exploited by an
attacker—a declaration that is difficult to make with confidence because testing
for security is quite different from performing standard functionality tests.

The good news is that the usual causes of buggy, vulnerable code can be
mitigated by employing best practices, such as designing security right into
the code, crafting a test plan that recognizes what makes embedded systems
unique, using a wide array of testing techniques, and giving as much importance
to designing the test environment as is given to designing the end product itself.

By following these best practices, you'll uncover not only bugs but also faults and
vulnerabilities before you release your code or device to the outside world,
which is far better than having those vulnerabilities discovered by your customers. Well-
tested code leads to secure and robust systems, which lead in turn to lower
lifetime development costs, reduced time to market for follow-on releases,
dramatically reduced support costs, reduced vulnerability to attacks, and best of
all, positive brand identity.

Introduction
Back in the days when software programs were simple, testing was also simple
and straightforward. But today’s complex applications connect with each other
using an enormous variety of communication protocols, and require significantly
more complex testing to ensure that they not only function as intended under
ideal conditions, but that they also contain no vulnerabilities that attackers can
exploit.

This point is particularly important for embedded applications, and absolutely


critical for security code within connected devices. In an attack on these
systems, bricking the device (rendering it inoperable) is actually a best-case
scenario. In the worst case, faulty security could allow an attacker to take over
a device, unbeknownst to the user, and gain control of an organization’s entire
network and stored data.

Despite these dangers, many designers of embedded systems offer a host of


reasons why their applications couldn’t possibly contain security vulnerabilities or
why nobody would target their systems. Their arguments typically go something
like, “That’s not a user scenario,” “It’s hidden and the user can’t even see it,” and
“No one is interested in trying to hack this product.” However, each of these
arguments fails to address some key points: you can’t trust users to behave
predictably, you can’t rely on data remaining hidden, and you certainly can’t count
on attackers using the application in the same way as ethical designers and
consumers. (For more information, see “Avoiding Misplaced Trust,” on page 12.)

The key element of secure embedded applications is repelling attacks. And


the best way to foil attacks is to 1) build with security in mind, and 2) test—
extensively.

The System Under Test
Before delving into the details about types of tests, when to run them, how
they uncover bugs, and so on, it is important to first have a clear idea of which
devices we’re talking about when we refer to secure embedded applications.

As shown in Figure 1, the types of devices being connected to the Internet


today extend far beyond desktop PCs, laptops, and cell phones. In fact, IP-
addressable devices already outnumber PCs and servers by approximately 5 to
1, and the ratio is growing fast. Virtually all manufactured objects, from toys and
coffee makers to cars and medical diagnostic equipment, either already are IP-
addressable or will be in the next five years. But while connecting a device to the
Internet is easy, securing it can be a challenge.

Figure 1. The “Internet of Things” encompasses an ever-growing list of connected devices (alarm clocks, coffee makers, refrigerators, VoIP phones, printers, televisions, HVAC and building security systems, vending machines, exercise equipment, automobiles, and more), which must not be allowed to harm the rest of your network assets.

Secure Applications
When we discuss secure embedded applications, the first things that come
to mind are security products and protocols such as firewalls and SSL
implementations. However, it might be more accurate to state that all embedded
applications need to be secured—that is, be able to defend against imposters,
eavesdropping, takeover, or subversion. This exactly fits the focus of this paper:
how to design and test embedded applications so as to ensure that they are
secure.

“...70% of business security vulnerabilities are at the application layer.” [S12]

Vulnerabilities

RFC 2828 defines a vulnerability as “a flaw or weakness in a system’s design,
implementation, or operation and management that could be exploited to violate
the system’s security policy.”
Vulnerabilities can be divided into two classes:

• Design flaws—The most prevalent and critical class of vulnerability. Typical
design flaws that lead to security risks are memory management issues,
ordering and timing errors (especially in multi-threaded systems), and
missing access control mechanisms. By adding security into the initial design
focus, such flaws can be eliminated before testing even begins. (For more
information, see “Eliminating Security Design Flaws,” on page 10.)

• Implementation flaws (also known as bugs)—Typically easier to exploit than
design flaws, but also easier to find during testing.

Embedded Test Planning

In This Section:
• Embedded Systems’ Unique Characteristics
• The Embedded Testing Process
• Embedded Testing Focus

Embedded systems have some unique characteristics that affect the testing
process. This section explains how embedded software is different, where in the
development cycle various tests should be performed, and where testing should
be focused so as to find the greatest number of bugs.

Embedded Systems’ Unique Characteristics
A basic question is, “Is testing embedded software any different from testing any
application’s software?” The answer is yes, because embedded software:

• Usually runs for much longer between “reboots” than typical application
software. Desktop users routinely shut down their computers when they go
home for the night; embedded devices are often “always on.”

• Is often used in safety-critical applications in which human lives are at stake.
Testing safety-critical applications is its own testing specialty, and customers
may require adherence to a variety of ISO, IEEE, or other standards.

• Is usually extremely resource-constrained and cost-sensitive, with little or no
margin for inefficiencies. Therefore, resource monitoring assumes a much
greater role during testing than it does for desktop systems.

• Must often compensate for hardware problems. It is not unusual to include
code in embedded devices that is essentially a work-around for hardware
problems in the host board. This can make it particularly challenging to test
the software, not only on the affected board, but especially on other platforms
that do not exhibit such hardware defects.

• Typically operates in real-time environments with asynchronous and
nondeterministic events, as well as tight timing constraints, which makes
simulation tests difficult and unreliable.

• Typically resides on hardware boards that are very expensive. This adds extra
cost to setting up a comprehensive test environment that provides for testing
on all supported platforms.

• Typically runs on an execution platform that is not the same as the
development platform.

The Embedded Testing Process
Although it can seem as if there are many flavors of testing lifecycles, especially
when you consider different development methodologies (such as waterfall and
Agile™, as well as the secure software development lifecycle that is discussed
later in this paper), the test process itself always consists of the activities shown
in Figure 2.

Planning → Analysis → Design → Construction → Subsystem Testing → System Testing → Post-Deployment

Figure 2. Test process activities.

Although there is much to discuss about test methodologies and the testing
process (see “Security and Testing Books and Articles,” on page 33), in brief,
embedded software testing (like testing any application) consists of:

• Planning. In the planning stage, which can begin as soon as the system’s
high-level requirements are complete, you create a high-level test plan.

• Analysis. As the system requirements are refined, continued analysis leads
to detailed test plans, including validation matrices and test cases.

• Design. At this stage, you can design your test environment so it will support
executing the tests as outlined in the detailed test plans. This is also the time
to perform risk assessments, define test data sets, and determine which
tests can be automated.

• Construction. Now it’s finally time to code the actual test cases, as well as
accompanying test harness code (software and test data required to exercise
a system under test, and particularly simulation code for external systems
that will be unavailable during testing).

• Subsystem testing. As coding of functions, modules, and subsystems is
completed, you can begin testing. Typically, testing, bug reports and analysis,
code modifications, and retesting are done in a cyclical fashion until the entire
system is available for testing.

• System testing. When all the system’s components are released by the
development team, tests that are applicable to the system as a whole (such
as load, performance, and soak testing) can be performed.

• Post-deployment. A final test activity that is all too frequently overlooked
is the post-deployment evaluation. By closely examining bug reports, you
can find out which sorts of bugs occurred (and of course take measures to
prevent them in the future); test equipment can be restored to a “clean”
configuration state for future use; and wish-lists can be developed for future
test planning.

Embedded Testing Focus
Embedded software testing differs from traditional application testing in several
important ways. Instead of concentrating solely on functional requirements,
embedded testing must focus on:

• Test bed setup
• Real-time behavior
• Performance and capacity testing
• Code coverage
• Reliability testing

Let’s look at these differences in a little more detail.

Test Bed Setup

Because embedded software is typically integrated in very expensive hardware


boards, you can significantly reduce your equipment costs by using a dedicated,
permanent, sharable test bed setup. It should be accessible by developers,
testers, and your support team.

Real-Time Behavior

To determine which tests to perform, you should consider how embedded


software typically fails. Embedded systems typically experience a large number
of asynchronous events; therefore, your test suite should focus on real-time
failure modes.

At a minimum, you should create test cases for typical and worst case real-time
scenarios. For example, if a message handling system expects to receive six
different types of messages, all in a particular sequence, tests should include all
possible sequences, especially out-of-order and redundant messages.
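The exhaustive-ordering idea can be sketched in a small harness. In the sketch below, `handle_sequence` is a hypothetical stub standing in for the system under test, and the counter simply proves every ordering was generated; a real harness would deliver each sequence to the message handler and check its state afterward.

```c
#include <assert.h>

/* Hypothetical stub standing in for the system under test. */
static int sequences_run = 0;

static void handle_sequence(const int *seq, int n)
{
    (void)seq;
    (void)n;
    sequences_run++;   /* real harness: feed seq to the handler, verify state */
}

/* Recursively generate every ordering of seq[k..n-1] and hand each
 * complete permutation to the handler. */
static void drive_all_orderings(int *seq, int k, int n)
{
    if (k == n) {
        handle_sequence(seq, n);
        return;
    }
    for (int i = k; i < n; i++) {
        int t = seq[k]; seq[k] = seq[i]; seq[i] = t;   /* choose seq[i] next */
        drive_all_orderings(seq, k + 1, n);
        t = seq[k]; seq[k] = seq[i]; seq[i] = t;       /* undo the swap */
    }
}
```

For six message types this generates all 720 orderings; redundant and truncated sequences can be injected by the same driver.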

You should analyze your system to determine its critical sequences—those


combinations of events that cause the greatest delay between an event’s
trigger and the event response. Your test suite should generate all such critical
sequences and measure the associated response time.

For some real-time systems, a deadline (performing a task at exactly a certain


time) is more important than latency, and the failure to meet a timing deadline is
considered a time-critical failure. But what happens if a critical event sequence

happens at the moment of an unrelated deadline? If resources are scarce, the
deadline may be missed. Your test suite must be sure to test for such corner
cases.

Performance and Capacity Testing

While performance is important to any application, it is particularly critical in


embedded systems, where resources such as memory and bandwidth are
typically constrained. Even if an application is completely secure and bug-free, it
will not be successful in the market place if it is crippled by limited throughput
or long latency times. Likewise, critical cost decisions, such as whether to use
a faster processor, or more and faster RAM and ROM, must take into account a
system’s overall performance.

You should be sure to include tests to obtain the following measurements:

• Encryption/decryption times
• Throughput
• Latency
• Footprint size
• Runtime memory size

How you actually measure your embedded application’s performance and


capacity depends on many variables, such as the platform, architecture,
connected devices, design load, and so on. (Discussion of specific performance
and capacity testing techniques is beyond the scope of this paper, and you are
encouraged to consult the resources listed in “References and Further Reading,”
on page 33.)
Code Coverage

It is important to ensure that all parts of your code be tested, not merely the
typical code flow. This is particularly applicable to exception handling, such as an
if branch or default switch case that is expected to be only rarely executed.

You can use code analyzers to build lists of decision points, and then ensure that
you have tests for all possible outcomes. Additionally, you can use trace tools
while running your test suite to discover areas of code that are never executed
(and then, of course, add appropriate tests to your system).

“Every project I know about where a commitment was made to coverage analysis, there was a dramatic improvement in reliability.” (Walter Bright, Dr. Dobb’s CodeTalk [S13])

Finally, an unexpected benefit of code coverage analysis is that it often uncovers
dead code—code that was needed during development, or code for a
feature that has been removed. While dead code obviously does not contribute
to performance issues, it certainly increases the code footprint, which is
something to minimize in most embedded systems.

Reliability Testing

An often missed area of testing is verifying system behavior when it is running


at or near full capacity for extended periods. For example, a malloc() error will
rarely be visible if your system is running at only 50% load, but when that same
system runs at 85% load, the same malloc() call may fail several times a day.
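A minimal sketch of the defensive pattern implied here (the function and counter names are hypothetical): every allocation is checked, and an allocation failure follows a defined degradation path that a soak test can observe, rather than a NULL dereference.

```c
#include <stdlib.h>
#include <string.h>

/* Count of messages dropped because memory was exhausted; a soak
 * test running at 85%+ load can watch this counter climb. */
static unsigned long dropped_messages = 0;

char *enqueue_message(const char *payload)
{
    size_t len = strlen(payload) + 1;
    char *copy = malloc(len);
    if (copy == NULL) {
        dropped_messages++;   /* defined behavior under memory pressure */
        return NULL;
    }
    memcpy(copy, payload, len);   /* size just computed, so copy is bounded */
    return copy;
}
```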

Security Test Planning

In This Section:
• Security Testing’s Negative Requirements
• Eliminating Security Design Flaws
• Secure Software Development Lifecycle

As mentioned before, embedded software remains buggy and vulnerable to
security breaches because too often, testing is treated as an end-of-project
activity instead of an integral part of the entire product development lifecycle.

Indeed, one of the key elements of modern software development
methodologies such as eXtreme Programming™ and Agile Development™
is ensuring that test code is developed alongside the source code. Sloppy or
inadequate embedded development processes, which fail to test edge cases and
determine how a component will operate within its larger system, can contribute
to serious security vulnerabilities. It’s vital to build security into development and
testing from the start.

Security Testing’s Negative Requirements


Whereas traditional testing is largely focused on a system’s functional
requirements (does the application do what it’s intended to do?) and operational
requirements (are the performance, stress, backup, and recovery characteristics
acceptable?), security testing is concerned with negative requirements (what
should not be allowed).

For example, a traditional functional requirement for logging into a system might
be stated as “User names must be six to eight characters, contain at least one
number, and contain at least one upper-case letter.” A security requirement for
that same user login would focus on the negative requirements, as follows:

“The user name processing should validate the user’s input and display an error
message if any illegal characters are included.”
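A sketch of how that negative requirement might look in code. The validator below is hypothetical: it enforces the stated positive rule (six to eight characters, at least one digit, at least one upper-case letter) and the negative rule, rejecting any character outside the legal set rather than trusting the input.

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical validator: returns 1 only for a legal user name.
 * Illegal characters are rejected outright (the negative requirement)
 * instead of being passed through to later processing. */
int validate_username(const char *name)
{
    size_t len = strlen(name);
    int has_digit = 0, has_upper = 0;

    if (len < 6 || len > 8)
        return 0;
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)name[i];
        if (!isalnum(c))
            return 0;              /* illegal character: fail closed */
        if (isdigit(c)) has_digit = 1;
        if (isupper(c)) has_upper = 1;
    }
    return has_digit && has_upper;
}
```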

Because security requirements are negative requirements, they are usually


overlooked when it comes to creating an application’s requirements documents,
and it’s not unusual for a requirements document to contain nothing at all about
security requirements or attack use cases. Consequently, the corresponding test
plans, which logically define test cases based on a product’s requirements, do not
adequately address security.

(For more about security requirements, see “SSDL Phase 2: Security


Requirements,” on page 16.)

Eliminating Security Design Flaws

In This Section:
• Mitigating Platform Risks
• Avoiding Misplaced Trust
• Designing to Prevent Common C/C++ Errors
• Making Example Code Secure

As discussed earlier, even the most rigorous testing cannot take the place of
properly designed and coded software. Before you even begin your test planning,
you need to design your application with security in mind.

Common security design errors include [S19]:

• Unsecured network traffic.
• Failure to protect the privacy of data in storage and memory.
• Insufficient use of ACLs (access control lists) on OS objects.
• Lack of or weak authentication mechanisms, particularly for passwords.
• Insufficient randomness in pseudorandom number generation.

Such design errors lead to vulnerabilities, regardless of how perfectly the


program’s stated requirements have been implemented. For example, if the
program’s design fails to specify how to exchange encryption keys securely,
and such keys are stored in a program’s executable, network traffic that is
supposedly encrypted can easily be compromised if an attacker obtains the
executable and performs basic reverse engineering.

To minimize the severity and impact of security design flaws, software engineers
should employ the following techniques [S19]:

• Compartmentalization—Employing strong abstractions and interface
validations to ensure the proper use of a module.

• Least privilege—Granting a user or process only the privileges needed for it to
complete its job.

• Attack surface reduction—Limiting software interfaces to only those
necessary for the program to complete its job.

Mitigating Platform Risks

In addition to focusing on the security of your application, it’s important to


consider the security of the operating system in which the application runs. Even
if an application program itself contains no known vulnerabilities, it is unlikely that
the same can be said about any operating system. For embedded applications to
be secure, therefore, they must practice “defensive computing“ to account for
operating system vulnerabilities, which include problems such as:

• Symbolic linking—Symbolic links are files in a file system that point to other
files; for example, symlinks in UNIX, and hardlinks in UNIX and Windows.
A clever attacker can exploit symbolic linking to trick the application into
operating on a file of his or her choosing.

For example, the attacker could create a symlink with a predictable filename
that an application is likely to try to operate on (perhaps opening and
subsequently deleting a file such as /tmp/temp), and link the symlink to a
likely system file (such as /etc/passwd). The result is that the program itself
would delete the system’s password file.

To mitigate this risk, the programmer should be careful to check for symbolic
links every time the program creates, opens, or deletes a file, or changes the
file’s permissions.

• Directory traversal—Directory traversal allows an attacker to trick the OS’s file


sharing mechanism into allowing access to directories that are above a legally
accessible directory by using the “..” notation to go up a level in the file
system.
To mitigate this risk, the program must carefully parse and interpret all
filenames to be sure that such notation is not improperly allowed for file
access requests.

• Character conversions—In order to support different types of character


encodings, operating systems allow characters to be represented in many
ways. For example, in URLs, spaces are represented as %20.

Applications that perform validation on user input must therefore understand


all the ways such characters may be converted by the host platform. If a
possible encoding is not checked, illegal characters may be allowed to pass
through.
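The last two risks compose: an attacker can hide a traversal inside an encoding (for example, %2e%2e for “..”). The sketch below uses hypothetical helper names, and a real implementation would additionally canonicalize the result with realpath() and verify it stays under the share root; the point it illustrates is decode first, validate second.

```c
#include <ctype.h>
#include <stddef.h>

static int hexval(int c)
{
    if (c >= '0' && c <= '9') return c - '0';
    c = tolower(c);
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

/* Decode %XX escapes; reject malformed escapes and oversized output
 * rather than guessing what the sender meant. */
int decode_percent(const char *in, char *out, size_t outlen)
{
    size_t o = 0;
    for (size_t i = 0; in[i] != '\0'; i++) {
        int c = (unsigned char)in[i];
        if (c == '%') {
            int hi = hexval((unsigned char)in[i + 1]);
            int lo = (hi >= 0) ? hexval((unsigned char)in[i + 2]) : -1;
            if (lo < 0)
                return -1;              /* malformed escape */
            c = hi * 16 + lo;
            i += 2;
        }
        if (o + 1 >= outlen)
            return -1;                  /* output would overflow */
        out[o++] = (char)c;
    }
    out[o] = '\0';
    return 0;
}

/* Reject any path containing a whole ".." component. Applied only
 * AFTER decoding, so encoded dots cannot slip through. */
int path_is_traversal_free(const char *path)
{
    for (const char *p = path; *p != '\0'; p++) {
        if (p[0] == '.' && p[1] == '.' &&
            (p == path || p[-1] == '/') &&
            (p[2] == '\0' || p[2] == '/'))
            return 0;
    }
    return 1;
}
```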

Avoiding Misplaced Trust

In the security world, trust refers to a reliance on things being as they appear:
users being who they say they are, and data being correct, valid, and for its
intended purpose. Trust in this context can be explicit (a source is verified,
and then anything coming from that specific source is trusted). Or trust can
be implicit (incoming information is trusted because, for example, it uses a
particular protocol and the correct port). Regardless, when evaluating and testing
applications for security, information should never be trusted without verification.

Figure 3. Trust is tricky.

As van der Linden [S16] explains, a key effort of security testing is finding and
documenting all the places in the target application and system where trust
is misplaced—granted without appropriate checks. Common examples of
misplaced trust include:

• Trusting users to provide data in the expected format. For example, if
users are supposed to enter a filename, software should not trust that the
filename is valid, but should validate that the string is not maliciously formed
so as to cause a stack overflow.

• Trusting API parameters to be valid. Even if data is passed not directly by a
user but by an application, API functions should never use that data without
validating it. Even if invalid data is not maliciously sent, a simple mistake
(such as using a parameter as a divisor when the parameter value was never
changed from its default of zero) can crash the application.

• Trusting users to use the software as intended. Although your intended
users probably will do as you expect, you can be sure that attackers will not;
they will likely exercise your code in unimaginable ways in an attempt to
break it.

• Trusting external systems to be who they say they are. Especially in multi-
tier systems where applications execute across many machines, each server
must authenticate its peers and use encryption to prevent impersonation and
man-in-the-middle attacks.

Any of the above scenarios make embedded code vulnerable to attack. But
simple measures, such as validating every input no matter where it comes from,
will eliminate many typical weaknesses that attackers can exploit.

Designing to Prevent Common C/C++ Errors

Although the C language (which is the focus of this paper because C is the most
commonly used language for embedded applications) has historically exhibited
the most security-related issues of any programming language, careful coding
and strict adherence to a coding style guide can mitigate most of the risks
associated with C library functions. These risks include:

• There is no safe native string type, nor any safe string handling functions.

To mitigate this risk, it is imperative that your code always manage string
buffer sizes and confirm that target buffers are large enough.

• Buffer overruns can overwrite function return addresses on the stack.

To mitigate this risk, application code should always check the length of
user-supplied input variables and avoid unbounded string operations. An even
better practice is to use a string buffer module that automatically manages
memory allocation and string lengths, and avoids using the native C string
functions altogether.

• There is no built-in protection against buffer overflows.

Generally you can prevent buffer overflows by checking the length of all
externally-supplied input variables—from users, external programs, and even
any shared memory and data storage that can be modified by an external
program or user. Additionally, measures taken to mitigate specific buffer
overrun risks should be employed.

• Buffer overruns are easily caused by printf-style formatting functions.

The only guaranteed techniques to prevent format string attacks are to use static
strings instead of dynamically formatting strings with variable numbers of
source arguments, and to never accept user input as input for a variable-
formatted string. (For a detailed explanation of format string attacks, refer
to the Windows 2000 Format String Vulnerabilities paper available on the
Next Generation Security Software website, http://www.webcitation.org/query?url=http%3A%2F%2Fwww.nextgenss.com%2Fpapers%2F&date=2009-03-09.)

• There is no protection against integer overflows.

To avoid this problem, do not use implicit type conversions, particularly from
signed to unsigned integers.
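The mitigations above can be collected into a few tiny helpers. These are hypothetical sketches of the coding-style rules, not library functions:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bounded copy: the destination size always travels with the buffer,
 * and truncation is reported instead of silently overrunning. */
int copy_bounded(char *dst, size_t dstlen, const char *src)
{
    if (dst == NULL || src == NULL || dstlen == 0)
        return -1;
    int needed = snprintf(dst, dstlen, "%s", src);  /* never overruns dst */
    return (needed < 0 || (size_t)needed >= dstlen) ? -1 : 0;
}

/* Static format string: user text is only ever a %s argument, so a
 * "%n" inside it is copied as data, never interpreted. */
int format_log_line(char *out, size_t outlen, const char *user_text)
{
    /* UNSAFE alternative: snprintf(out, outlen, user_text); */
    return snprintf(out, outlen, "user said: %s", user_text);
}

/* Explicit range check instead of an implicit signed-to-unsigned
 * conversion: a length of -1 must not silently become a huge size_t. */
int checked_length(int reported_len, size_t *out)
{
    if (reported_len < 0)
        return -1;
    *out = (size_t)reported_len;
    return 0;
}
```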

Making Example Code Secure

Frequently example code is included in platform and API documentation, as


well as with products that are distributed as source code. Such examples
typically focus on a small piece of functionality, and are usually not intended
to be examples of robust application programming, often omitting parameter
validation, buffer size checks, error handling, and other security-related defensive
coding practices. Unfortunately, embedded developers who are in a hurry and
just want to get a task done will often copy-and-paste such code samples and
use them in their applications. The result is code with security vulnerabilities, and
developers pointing fingers saying, “I coded it just like you told me.”

It’s important, therefore, to not take shortcuts when writing example code
intended for your customer-developers, to include proper validation, and to make
use of all an API’s built-in security features.
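As an illustration (a hypothetical documentation sample, not Mocana code), even a trivial example can model the defensive habits described above rather than omit them:

```c
#include <stdio.h>
#include <string.h>

/* A "documentation example" written the defensive way: even a trivial
 * sample validates its parameters, bounds its input, and checks error
 * returns, so developers who copy-paste it inherit safe habits. */
int sample_send_greeting(FILE *out, const char *name)
{
    if (out == NULL || name == NULL)
        return -1;                 /* validate parameters */
    if (strlen(name) > 64)
        return -1;                 /* bound externally supplied input */
    if (fprintf(out, "Hello, %s!\n", name) < 0)
        return -1;                 /* check the I/O result, too */
    return 0;
}
```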

Secure Software Development Lifecycle


Like modern software development methodologies such as Agile™ and eXtreme
Programming™, the Secure Software Development Lifecycle (SSDL) advocates
that testing be considered at the earliest stages of product design and
development. In the SSDL, security engineers are involved from the outset so
that security requirements are included in a product’s specifications, attack use
cases are considered, and the software architecture and design have security
built-in from the start.

Secure embedded software not only requires a security-focused development
cycle; it also depends on secure processes and environments:

• Secure deployment—To ensure that an application is secured as intended,
the software must be installed with secure defaults, including appropriate file
permissions and secure configuration settings.

• Secure updates—After a secure application is deployed, it must be continually
updated to protect it from an ever more threatening network environment.
Professional vendors evaluate potential threats, manage vulnerabilities, and
distribute software patches when necessary. (Mocana's NanoUpdate™ can
simplify the secure patch management process; you can read more about
NanoUpdate, an integrated module within NanoDefender™, in “Appendix:
NanoDefender™,” on page 36.)

• Secure infrastructure—To shield your embedded application from intrusion or
takeover, it is important that your network infrastructure include components
such as a firewall, DMZ, and IDS (Intrusion Detection System). (Mocana's
NanoWall™ and NanoDefender are custom built for embedded devices; see
“Appendix: NanoDefender™,” on page 36.)

As shown in Figure 4, the SSDL encompasses six phases [S19]: (1) Security
Guidelines, Rules, and Regulations; (2) Security Requirements; (3) Architectural
Reviews and Threat Modeling; (4) Secure Coding Guidelines; (5) Testing; and
(6) Determine Exploitability—all resting on patch management and on an
infrastructure of security components, firewalls, IDSes, and DMZs.

[Figure 4: SSDL phases]

SSDL Phase 1: Security Guidelines, Rules, and Regulations

[Phase 1 Deliverable: System-wide security specification]

Depending on the industry in which your product will operate, your product may
be expected to conform to security policy standards such as the Visa CISP
(Cardholder Information Security Program, the basis of the PCI Data Security
Standard), HIPAA (Health Insurance Portability and Accountability Act), or SOX
(Sarbanes-Oxley Public Company Accounting Reform and Investor Protection Act
of 2002). However, often there is no customer-mandated security policy to
follow, in which case it is important to define your own.

Although a security policy can include physical requirements (such as “server
rooms must be kept locked at all times, with access only by personnel with copy-
prohibited keys”), legal constraints (such as the SOX section 404 requirement
that “various internal controls must be in place to curtail fraud and abuse”),
and intellectual property protection, this paper is more concerned with the
security rules under which an embedded application operates. Defining these
requirements is the focus of the next phase.

SSDL Phase 2: Security Requirements

[Phase 2 Deliverables: Security requirements; Attack use cases]

Security requirements identify the rules necessary to ensure a trusted
computing environment—one that protects the connected (networked) devices,
your device's operating system and application program, user data, and
assets such as digital certificates. What kinds of requirements can ensure your
system's security?

Although the answer depends on the specifics of your system, typical topics
included in a security policy are:

• Cryptographic module accessibility.

• Key management.

• Encryption algorithm selection.

• Password requirements.

• Temporary data reset values.

• Obfuscation to prevent reverse engineering.

• Privilege level requirements.

• Memory management.

• Language-specific coding requirements.

As the above list implies, defining security rules is not only about defining
functional requirements for an embedded application, but also about constraining
how the system operates and responds to behavior that should not be allowed.

SSDL Phase 3: Architectural Reviews and Threat Modeling

[Phase 3 Deliverables: Test plan; Risk analysis]

No matter how well thought-out the test plan, and how complete the test case
suite, it is impossible to test a program for every possible input, code branching
scenario, and so on. Therefore, you must use an intelligent method, such as
threat modeling, to determine which tests to perform.

Threat modeling is a type of risk-based testing where potential attacks are ranked
according to the ease of attack and the seriousness of the attack’s impact.
After modeling, testing efforts can be focused on those areas that are easiest
to attack and/or where the impact is greatest. For example, high-priority tests
should focus on any security flaws that can be exploited by anonymous remote
attackers to execute arbitrary code.

The threat modeling process is composed of four main steps:

1. Identify threat paths.

2. Identify threats.

3. Identify vulnerabilities.

4. Rank the vulnerabilities.

(A detailed explanation of threat modeling is outside the scope of this paper.
Readers are encouraged to consult current test literature, particularly the
books listed in “References and Further Reading,” on page 33.)

SSDL Phase 4: Secure Coding Guidelines

[Phase 4 Deliverables: Software development procedures; Coding style guide]

Secure coding guidelines make it possible to prevent both kinds of security
vulnerabilities—design weaknesses and implementation flaws. Such guidelines
ensure that designers use proper techniques to minimize the effects of
implementation flaws, and enable coders to avoid such flaws in the first place.

For details about secure programming, see the previous discussion, “Eliminating
Security Design Flaws,” on page 10.

SSDL Phase 5: Testing

[Phase 5 Deliverables: Test cases; Bug reports; System analysis]

Finally, we come to the focus of this paper: testing. It's important to understand
that “testing” is performed at many different points throughout the development
lifecycle, encompasses many types of tests, and is performed on many different
subsets of a system.

Although testing is too broad a subject for one paper to serve as a single
source of information, “Testing Secure Embedded Applications,” on
page 19, provides background and best-practices information that enables you to
confidently test your secure embedded application.

SSDL Phase 6: Determine Exploitability

[Phase 6 Deliverables: Test cases; Bug reports; System analysis]

Vulnerabilities can be categorized as low-level or high-level. Vulnerabilities that
corrupt the state of the running application or its runtime facilities, such as buffer
overflows, are said to be low-level; they may not be 100% reproducible because
they can corrupt either the application state or the programming language runtime
state. Conversely, high-level vulnerabilities, such as logic errors in the application,
are typically very reliable and reproducible, and tend not to crash the application.

In an ideal world, every vulnerability that is uncovered by testing would of course
be eliminated or mitigated. (As a security company, this is the approach Mocana
takes, and we strongly recommend this course.)

In reality, though, depending on the vulnerability's cause and implementation,
the effort required to address it may be impractical given product release
schedules or staffing constraints. Therefore vulnerabilities, just like
non-security-related bugs, should be ranked and evaluated for their risk. The
following factors should be weighed [S19]:

• Required access or positioning for the attacker to attempt exploitation.

• Resulting level of access or privilege yielded by successful exploitation.

• How much time or work is required to exploit the vulnerability.

• The exploit's potential reliability.

• Repeatability of exploit attempts.

It is important to remember that if a vulnerability is determined to have a low risk
of exploitability, and is therefore allowed to remain in the software, it should be
regularly re-evaluated, because exploitation gets easier over time: cryptography
gets weaker, computers get more powerful and less expensive, and malware
writers figure out new techniques.
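One way to make such a ranking concrete is a simple weighted score over the factors above. The factor names, scales, and weights below are illustrative only, not from this paper:

```c
/* Each factor is scored 1-5; the weights favor low attack cost and high gain. */
typedef struct {
    int access_required;   /* 1 = anonymous remote ... 5 = local admin       */
    int privilege_gained;  /* 1 = info leak        ... 5 = full control      */
    int effort;            /* 1 = trivial          ... 5 = months of work    */
    int reliability;       /* 1 = rarely works     ... 5 = works every time  */
    int repeatability;     /* 1 = one-shot         ... 5 = freely repeatable */
} vuln_factors;

/* Higher score = rank (and fix) first. */
int vuln_risk_score(const vuln_factors *v) {
    return (6 - v->access_required) * 2   /* easier access, higher risk */
         + v->privilege_gained * 2
         + (6 - v->effort)
         + v->reliability
         + v->repeatability;
}
```

With this scheme, a remotely exploitable arbitrary-code-execution bug outranks a hard-to-reach local information leak, matching the prioritization described above.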

Testing Secure Embedded Applications

[In This Section: Best Practices; Types of Tests; When to Stop Testing;
Building a Custom Test Environment]

Given that development methodologies and a testing framework specifically
designed for secure and embedded applications are in place, it's important
to make sure that your embedded software test suite encompasses the full
range of software test classes, described below.
Best Practices
Although the unique security challenges of embedded systems require particular
attention, many problems can be avoided by following a few simple guidelines.
• Use a coding style guide—Just as journalists follow writing style guides,
so should programmers follow a coding style guide that spells out the
requirements for development environment directory and file organization,
naming conventions, declarations and types, error handling, white space
formatting, memory management, library use, and so on. (For C language-
specific suggestions, see “Designing to Prevent Common C/C++ Errors,” on
page 13. For references to model coding style guides, see “C Coding Style
Guidelines,” on page 34.)

Figure 5 shows the contents of a typical C coding style guide—Mocana's.

[Figure 5: Mocana's Coding Style Guide table of contents—Overview; ANSI
Standard C; File Organization; Naming Conventions; Use of the Preprocessor;
Declarations and Types; Functions; Expressions and Statements; Error
Handling; Indentation, Layout, & Whitespace; Comments; The Standard
Library; Memory Management; Debugging; References]

• Write a thorough test plan—Although you could theoretically take a
simple approach to test planning and use spreadsheets to keep track of test
cases (perhaps noting their status, such as “in design,” “coded,” “executed,
no bugs found,” and “executed, bugs found”), writing an official test plan
brings many benefits. A test plan is an agreement among product design,
development, and testing teams as to the system's requirements, schedule,
and release criteria. Test plans also typically include the scope of work, risk
analysis, and contingency plans.

• Implement a full-featured development environment—When single
programmers developed applications in their garage, they could list bugs on
scraps of paper. But with teams of developers and dozens, if not hundreds or
thousands, of interacting and dependent software modules, it's essential that
both target and test code be managed with a suite of tools:

  • Code repository, for both source code and test code.

  • Bug tracking database, where version, component, module, bug priority,
  test case, and so on are dynamically and automatically maintained.

  • Test machines, where test monkeys reside to automatically build and test
  code as it's checked into the code repository. (For more about setting up a
  test environment, see “Building a Custom Test Environment,” on page 29.)

• Incorporate test monkeys—Before engineers check code in to the source
control repository, the code should be exercised by a test monkey on the
local development machine, ensuring that the code basically functions as
expected. Following code check-in, test monkeys running on all supported
OS-platform combinations should automatically build the target executable(s),
run all their tests (see “Test Monkeys,” on page 30), and notify developers and
test engineers of any build or test errors.

• Perform soak tests early and often—Designing tests that run for days to
weeks is vitally important: such tests make memory leaks more visible and
ensure robustness. Some soak tests simply repeat a test over and over; others
change test parameters (randomly or per a predefined pattern).
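A soak driver can be as small as a loop that replays one test case with varying parameters and logs each failing seed. In this sketch, `run_one_case` is a placeholder for whatever operation you are soaking (for example, opening, using, and closing a secure session); it is not a Mocana API:

```c
#include <stdio.h>

typedef int (*soak_case_fn)(unsigned seed);    /* 0 = pass */

/* Repeats one test case 'iterations' times, varying the seed each pass
   and logging failing seeds so every failure is reproducible. */
long soak(soak_case_fn run_one_case, long iterations, unsigned base_seed) {
    long failures = 0;
    for (long i = 0; i < iterations; i++) {
        unsigned seed = base_seed + (unsigned)i;
        if (run_one_case(seed) != 0) {
            failures++;
            fprintf(stderr, "soak: FAIL at iteration %ld, seed %u\n", i, seed);
        }
    }
    return failures;
}

/* Trivial stand-in case used for demonstration. */
int demo_case(unsigned seed) {
    (void)seed;
    return 0;                                  /* always passes */
}
```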

Types of Tests

[Types of Tests: RTOS-Less System Testing; Functional (Black-Box) and
Conformance Testing; Coverage (White-Box) Testing; Gray-Box Testing;
Attack Testing; Load Testing; Fuzz Testing; Interoperability Testing;
System Testing; Third-Party Testing]

Because different types of tests uncover different types of vulnerabilities
and bugs, a fully and properly tested secure embedded application should be
subjected to many different tests during its development lifecycle.

RTOS-Less System Testing

Specific to embedded systems, RTOS-less testing entails adding code to the
main process thread to measure performance. By sending debug status to
log files (for example, printf output to stdout), you can also track process
execution, which is particularly helpful in cases of system crash.
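A minimal sketch of this kind of instrumentation: format each trace record (tick count, call site, message) separately from the output channel, so the same code can log to stdout, a UART, or a file, and can be unit-tested off-target. The names are illustrative:

```c
#include <stdio.h>

/* Builds one trace record; the caller decides where to send it. */
int trace_format(char *out, size_t n, long tick,
                 const char *file, int line, const char *msg) {
    return snprintf(out, n, "[%ld] %s:%d %s", tick, file, line, msg);
}

/* Convenience macro capturing the call site automatically. */
#define TRACE_TO(buf, n, tick, msg) \
    trace_format((buf), (n), (tick), __FILE__, __LINE__, (msg))
```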

Functional (Black-Box) and Conformance Testing

Traditional black-box testing, also known as functional testing, typically includes
suites of unit tests, which ensure that all the intended features operate
as designed. (In particular, security code may need to conform to industry
standards, including RFCs, IEEE specifications, NIST requirements, and so on.)
The majority of functional testing is done from the viewpoint of the customer,
ensuring that the software provides the expected functionality.

Within the security context, black-box testing is most often used during the pre-
deployment test phase (system test) or periodically after deployment to reassess
system vulnerability. Such tests complement the ongoing security activities of
the SSDL (see “Secure Software Development Lifecycle,” on page 14), helping
testers identify undiscovered implementation errors, discover potential security
issues resulting from boundary conditions, uncover security issues resulting
from build problems, and detect issues caused by interactions with the
underlying environment (for example, improper configuration files).

Black-box tests include:

• Stress tests—Tests that deliberately exceed memory buffers and numerical
thresholds (such as the maximum number of open connections), exercise
memory management functions, and so on.

• Boundary value tests—Parameter inputs at the boundaries of validity, such
as the largest and smallest integers for an integer input.

• Exception tests—Tests that trigger a failure or exception mode.

• Random tests—Generally not as productive at finding errors as other tests,
but widely used for evaluating the robustness of user-interface code. When
performing random tests, it's important to log not only the results but the
input parameters as well, so that the tests are reproducible.

• Performance tests—Performance analysis.

• Conformance tests—Verification that code conforms to required industry
standards. It's recommended that you use commercial standalone test
tools, such as IxANVL™ from Ixia [T1], to validate protocol compliance and
interoperability.

(Note that in addition to functional tests, black-box testing includes fuzz testing;
see “Fuzz Testing,” on page 25.)
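Boundary value cases reduce to assertions on either side of each limit. A toy validator (hypothetical; substitute your own parameter checks):

```c
#include <stdbool.h>

/* Accepts TCP-style port numbers: the valid range is 1..65535. */
bool port_valid(long port) {
    return port >= 1 && port <= 65535;
}
```

The interesting inputs are 0, 1, 65535, and 65536—the values immediately around each boundary—plus extremes such as negative numbers.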

Black-box testing is particularly important for security testing because it enables


testing against the application’s attack surface and generating test data for
functionality that may not be part of the intended design.

Because black-box testing does not rely on knowledge of the completed code,
test planning can often begin in the design phase, with testing performed
throughout the software development lifecycle.

Although it is tempting to assume that a given set of black-box tests finds both
traditional and security bugs, it generally doesn't work that way. Because the
pass-fail criteria are quite different (functional tests are traditionally positive
tests, while security tests are often negative tests), you should perform
redundant tests that focus on different pass-fail results. (For more about
negative requirements, see “Security Testing's Negative Requirements,” on
page 9.)

Coverage (White-Box) Testing

Although functional (black-box) testing is obviously required to ensure that
software functions as intended, its biggest weakness is that it rarely exercises
all the code—for example, every cipher in an SSL client. Coverage tests address
this weakness by exercising every code statement and decision path at least
once.

For embedded systems, coverage testing is vital: the greater the test coverage,
the less likely it is that latent bugs will surface later. White-box tests include:

• Statement coverage—Test cases that execute every program statement.

• Decision (branch) coverage—Test cases that cause every branch (both true
and false outcomes) to execute.

• Condition coverage—Test cases that force each condition in a decision to
take on all possible logic values.

It's recommended that you use commercial test tools, such as Insight from
Klocwork [T3], to perform static analysis of your embedded code.
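To see the difference between these coverage levels, consider a single decision with two conditions (an illustrative sketch, not product code):

```c
#include <stdbool.h>

/* One decision ('if') containing two conditions. */
bool accept_packet(bool checksum_ok, bool session_open) {
    if (checksum_ok && session_open)
        return true;
    return false;
}
```

Two calls, (true, true) and (false, true), give statement and decision coverage: both return statements run and the decision evaluates both ways. Condition coverage additionally requires each of `checksum_ok` and `session_open` to be observed both true and false, which takes a third case such as (true, false).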

'RAY "OXå4ESTING

Gray-box testing is a combination of white-box and black-box testing. It does not


explicitly test the internal logic of the code, but deliberately chooses functional
tests based on knowledge of where the likely vulnerabilities exist within the
code.

Gray-box testing is particularly apropos to development methodologies such as


Agile™ and eXtreme Programming™, which require concurrent test and code
development, often by the same engineer.

A typical technique for performing gray-box testing is to run the software under
test in a debugger. As soon as the software is running, the normal tools of black-
box testing, such as fuzzers and automated regression suites, can be used. By
setting break points on lines of code that are potentially dangerous, the tester
can determine if such code cold be reached with external input to the program.

Attack Testing

Attackers will gain access (see Figure 6). And attackers make it their business to
understand typical program entry points, as well as coding patterns that are often
overlooked by developers who focus more on the program's intended functions
than on securing code.

Therefore, to properly test code for vulnerabilities, it is vital to understand how
vulnerabilities make their way into software in the first place. Such knowledge
enables the tester to design test cases that show how the code reacts when faced
with common attack patterns. Attack testing forces a program to perform actions
on invalid or malicious data to reveal what the program could allow an attacker to
do.

The first step in designing attack tests is to fully analyze the inputs that an
attacker could use to gain unauthorized access to your program and through
which he could manipulate it. These inputs, called the attack surface, can include
network I/O (such as sockets), APIs, open files, pipes, shared memory, and OS
calls.

[Figure 6: Attackers can gain access to an application through its attack
surface—network I/O (sockets), user input, open files, pipes, shared memory,
data stores, APIs, and OS calls. Attack surface shown as a dashed red line.]

The following resources can help you fully define the attack surface [S19]:

• System debugging tools that list the files, network ports, and other system
resources a program is using.

• The source code itself, examined for its use of system input/output APIs.

• Design documentation, such as functional specifications.

• Developer interviews, to learn more about the code's architecture.

• Detective work using the same tools an attacker might (see “Attack Tools,” on
page 29).

Typical attack testing will:

• Verify that user input will not allow an attacker to manipulate a back-end
database through an attack known as SQL injection.

• Verify that cross-site scripting, an attack that can cause an attacker's script to
execute in a victim's Web browser, is not possible.

• Verify that poor buffer handling while reading data from the network will not
cause a server to crash when it is sent an invalid packet that it erroneously
processes, resulting in a denial of service (DoS) or allowing a remote attacker
to execute code of his choosing.

• Verify that errors are handled correctly so that the program safely recovers
from unexpected input—the bread and butter of a software attack.

• Verify that private data is protected in transit over a network and in storage.

• Verify that information leakage, which can help an attacker stage attacks, does
not occur.

• Verify that audit logs are protected.

• Verify access controls.

• Verify that security mechanisms default to deny and are implemented
correctly.
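The last point—defaulting to deny—has a simple structural signature in code: access is granted only on an explicit allow-list match, and anything unrecognized falls through to denial. A sketch with illustrative role and operation names:

```c
#include <stdbool.h>
#include <string.h>

/* Default-deny access check: only explicitly allowed combinations pass. */
bool allow_operation(const char *role, const char *op) {
    if (strcmp(role, "admin") == 0)
        return true;                              /* admins may do anything */
    if (strcmp(role, "operator") == 0 && strcmp(op, "read") == 0)
        return true;                              /* operators may only read */
    return false;                                 /* everything else: deny */
}
```

A test for this mechanism probes the unlisted cases (unknown roles, unexpected operations) and verifies they are denied, not just the listed cases that should succeed.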

Attack testing should employ a combination of manual input, input generated by
off-the-shelf or custom fuzzer tools (see “Fuzz Testing,” on page 25), and input
manipulated by proxies.

Load Testing

Load testing is just what its name implies: testing the software and embedded
device at (and exceeding) its intended capacity, whether that capacity is
measured as the number of users, connections, calculations, or what have you.
It is not unusual for a system to work perfectly fine at 50% capacity, but to
experience performance degradation, run up against resource limitations, or even
fail altogether when the load is at its design capacity.

In addition to standard module tests, it's often quite beneficial to use commercial
standalone test tools, such as Spirent Communications' SmartBits® [T2].

Penetration Testing

Penetration testing is the process of attempting to compromise the security of a
computer system or software application by applying the same techniques that
an attacker would use.

Frequently, organizations use penetration testing as an independent audit
function before rolling out new applications or deploying new hardware systems.
In this scenario, penetration testing is conducted late in the development and
testing lifecycles, in a black-box, “outside-in” approach. That is, the testing
mimics how an attacker outside the organization would proceed, with very little
knowledge of the underlying software being tested. Although a reasonable
approach, this sort of testing has a significant shortcoming: it is unlikely
to result in useful remediation recommendations. That is, if the penetration test
succeeds, the superficial recommendation might be to turn off any services that
were compromised. However, turning off such services is rarely an option if the
desired functionality is to be maintained.

So, given that such test results are not particularly helpful to developers,
is penetration testing worth the time? Yes—especially when the testing is
performed earlier in the development lifecycle, and when the tests are designed
as white-box tests instead of black-box tests. This enables the development
team to use the test results to modify the code and even its design. In this
way, the test scenarios can exercise not only the program's external interfaces,
but also programmatic interfaces between modules, data flow, environmental
boundaries, and so on.

Fuzz Testing

Providing fuzz (random data) to the inputs of a program augments test monkey
suites (see “Test Monkeys,” on page 30) by finding bugs that occur in unusual
situations outside the normal program flow and testing.

Fuzzers can be categorized in two ways [S15]:

• Attack vectors—Although there are some general-purpose fuzzing
frameworks, different fuzzers typically target different entry points into an
application (Figure 7).

[Figure 7: Classifying fuzzers by their target attack vectors—application layer
(applications/GUI, file system, files/media, graphics libraries, memory
handling); session layer (NFS, CIFS, iSCSI, RPC; SSL, TLS, EAP; OS system
calls; network APIs); transport layer (IP); datalink layer (wireless: WPA2,
Bluetooth, etc.)]

• Application logic layer—Different fuzzers penetrate different layers of the
application logic (Figure 8), which in turn uncovers different application
vulnerabilities.

[Figure 8: Classifying fuzzers by their targeted application logic layers—
field-level fuzzers (overflows, integer anomalies), structural fuzzers
(underflows, repetition of elements, unexpected elements), and sequence-level
fuzzers (out-of-sequence, omitted/unexpected elements, repetition/spamming).
Fuzzing finds crashes, denial of service (DoS), security exposures, performance
degradation, slow responses, thrashing, and anomalous behavior.]

Commercial tools such as Codenomicon Defensics [T1], Nessus™ from
Tenable Network Security [T4] (especially useful for random testing), and
Rapid7’s SSHredder™ [T6] can help ensure that your code is free from
malware vulnerabilities, network vulnerabilities, and communications protocol
vulnerabilities.

Regardless of whether in-house fuzz tests or commercial tools are used, it is
very important to log the generated parameters as the test cases are run, so
that the results are reproducible.
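The seed-logging requirement can be built into even the smallest in-house fuzzer. A field-level sketch using the C library's pseudo-random generator (a real harness would feed `fuzz_fill`'s output to the code under test):

```c
#include <stdlib.h>

/* Fills 'buf' with pseudo-random bytes derived from 'seed'. Log the seed
   alongside each test verdict: replaying the same seed regenerates the
   exact same input, making every failure reproducible. */
void fuzz_fill(unsigned char *buf, size_t n, unsigned seed) {
    srand(seed);
    for (size_t i = 0; i < n; i++)
        buf[i] = (unsigned char)(rand() & 0xFF);
}
```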

Interoperability Testing

Interoperability testing exercises your application's communications and
functionality with third-party tools. Common examples include:

• Open source applications, such as OpenSSL and OpenSSH.

• Linux IPsec tools, such as racoon and strongSwan.

• IKE/IKEv2 implementations, such as those required for VPNC interoperability
testing.

System Testing

System testing is not a single, unique type of test, but the logical culmination
of functional, integration, and security testing, performed on the system
as a whole instead of on its separate pieces. Activities such as stress testing,
performance testing, load testing, and many forms of penetration testing are
meaningless until the entire system is available. It is also important to repeat
functional and integration tests that may have been performed so early in the
development cycle that some components were replaced by test stubs.

System testing, at a minimum, should ensure error-free builds, basic crash-free
operation on all supported platforms, and proper routing of sent and received
packets (detectable by using packet-tracking tools such as Wireshark™ [T7]).

Third-Party Testing

No matter how fully you think you've tested your code, it's doubtful that you've
been thinking like a hacker. Part of what makes hackers successful is that they
deliberately try to subvert a program's normal and logical behavior. Simply testing
that the code does what it's supposed to do under normal circumstances just
isn't enough.

Therefore, it might be a good idea to contract with third parties who provide
independent testing and verification for protocol conformance, interoperability,
performance metrics, and more. Organizations such as Cryptography
Research (www.cryptography.com), VPNC (www.vpnc.org), and Offensive
Computing (www.offensivecomputing.net), which can certify performance and
interoperability as well as serve as white-hat hackers, help ensure that your code
really is safe and secure.

When to Stop Testing


Ideally test designers determine a test coverage threshold, and testing and
development continues until all functions are completed and all tests pass.
But reality often sets in when the calendar says, “it’s time to ship the product.”
However, without a formal means of categorizing feature requirements and
defect severity, it’s difficult to make good business decisions around a product’s
release.

At this point, a properly designed development and test environment makes it


easier to determine whether the product is acceptable.

By assigning priorities to components, it's easy to see whether missing features are in the "nice to have" category or so critical as to make shipping without them pointless. Likewise, reported bugs should always be assigned a severity level. Bugs that are deemed trivial or cosmetic, or for which there is a simple workaround, can often be "let out the door." But a product with crash bugs or known security holes simply cannot ship.

Regardless of when you stop actively testing, it is recommended that even after release you maintain one or two test monkeys for every shipped version to ensure that no longevity bugs appear.

Building a Custom Test Environment
One of the biggest challenges when testing embedded systems is building your test environment—both the infrastructure and testing tools. Unlike desktop systems, whose vendors themselves test the operating system stack, CPU, drivers, and so on, embedded systems are highly customized. Therefore, you must test the hardware, stack communications, drivers, and so on yourself.

Test Environment Components
• Testing infrastructure (servers, platforms, databases, traffic generators)
• Attack tools
• Monitoring tools
• Commercial tools
• Test monkeys

Testing Infrastructure

A full-function test infrastructure should include at least the following elements:
• Database for storing results and generating reports.

• Traffic generators.

• Dedicated hardware, such as servers, for interoperability testing, as well as the software applications to be tested against (such as OpenSSL).

• Variety of platforms (board + CPU combinations).

Attack Tools

Many of the most effective attack tools are available for free download, or are
even open-source, making it easy for attackers to obtain them. Therefore, you
should use such tools in your own security testing:

• Fuzzers—Generate malformed inputs and send them to a network interface or store them in input files.

• Proxies—Allow data to be manipulated as it flows between client and server, or between peers.

• Sniffers—Allow inspection of network and other system interfaces.

There are so many attack tools; which should you use? First, look for tools that
have been written to attack the same protocols and file formats that your target
program uses. Next, if no such tools exist, or you’re using a custom protocol or
file format, look for a fuzzer framework that allows you to integrate your own
protocol or format handler. (For a list of fuzzing software, refer to the following
website: www.fuzzing.org.) Finally, you can often use the same functionality and
unit test tools that you use for traditional testing by modifying them to send the
same sort of random or malformed data that fuzzers do.
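To make the fuzzing idea concrete, here is a minimal mutation fuzzer sketch in C. It shows only the core trick: copy a known-good input, then corrupt a few random bytes. The function name and interface are invented for this example; real fuzzers (see www.fuzzing.org) are far more sophisticated about which mutations they try.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy a known-good input and flip `flips` random bytes.
 * The caller seeds rand() so runs are reproducible, which
 * matters when a mutated input triggers a crash you need to
 * replay. XORing with a nonzero value guarantees each flip
 * actually changes the byte it touches. */
void mutate(const unsigned char *good, unsigned char *out,
            size_t len, int flips)
{
    memcpy(out, good, len);
    for (int i = 0; i < flips; i++) {
        size_t pos = (size_t)rand() % len;
        out[pos] ^= (unsigned char)(1 + rand() % 255);
    }
}
```

Each mutated buffer would then be fed to the target's parser, network interface, or input file, while a monitor watches for crashes or hangs.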

Monitoring Tools

Because embedded system OSes are generally proprietary, you usually must
create your own test tools to monitor memory, network operations, and RAM
utilization.

Such tools are of necessity environment-specific. Typically, you must create an application test thread to hook into the device's task manager, and then measure tick times on every context switch.
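A sketch of such a measurement hook might look like the following. The structure and function names are hypothetical, and the OS-specific step of registering the hook with the task manager is omitted because it differs for every RTOS; the wrap-safe unsigned subtraction is the one detail worth keeping.

```c
#include <assert.h>
#include <stdint.h>

/* Records the worst-case interval between context switches.
 * In a real port, switch_hook() would be registered with the
 * RTOS task manager (registration API not shown) and called
 * with the current tick counter at every switch. */
typedef struct {
    uint32_t last_tick;
    uint32_t max_delta;   /* worst-case ticks between switches */
    uint32_t switches;    /* total switches observed */
} switch_monitor;

void monitor_init(switch_monitor *m, uint32_t now)
{
    m->last_tick = now;
    m->max_delta = 0;
    m->switches = 0;
}

void switch_hook(switch_monitor *m, uint32_t now)
{
    /* Unsigned subtraction is correct even if the tick
     * counter wraps around between calls. */
    uint32_t delta = now - m->last_tick;
    if (delta > m->max_delta)
        m->max_delta = delta;
    m->last_tick = now;
    m->switches++;
}
```

The accumulated maximum gives a rough worst-case scheduling latency figure that a test monkey can log after each run.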

Commercial Tools

In addition to the tests and tools you develop, your test environment should
include those commercial tools necessary to perform the full range of tests
discussed earlier (see “Types of Tests,” on page 20). Such tools typically can
exercise your code in a more automated and complete fashion than would
otherwise be feasible. It is important to evaluate the tools early in the test
environment design phase so as to ensure operating system compatibility,
sufficient resources (disk space and memory availability, for example), and
communications/interface support.

For a representative list of recommended tools, see “Test Tools,” on page 35.

Test Monkeys

Test monkeys are build commands and tests designed to run automatically,
without human intervention, on a scheduled basis (whether that schedule is
according to the clock or on-demand due to a code check-in). Because embedded
systems usually must be tested on many platforms, test monkeys are particularly
helpful because they ensure that every test is run every time on every target.

As described earlier, engineers should run test monkeys on the local development machine before checking in code, and an automated test monkey framework should perform builds and run test suites on all of a device's supported OS-platform combinations automatically.

[Figure: Test monkeys are a critical component of your test environment.]

Test monkeys perform several important families of tasks:

• Builds—Before the monkeys can run tests, they need to build the required executables. For embedded devices, this involves not only the actual build commands (such as invoking make files), but building the required images and downloading them to the target devices. All this requires in-depth knowledge of every target device's operating system and the steps required to download firmware onto the device.

• Test—The primary purpose of test monkeys is, of course, to run tests. But what tests should be included? At a minimum, the following:

  • API functions—Parameter validation and input-to-output functionality.
  • Supported platforms—A typical list for embedded systems is Linux, FreeBSD, and Solaris.
  • Supported architectures—Integer sizes, little and big endian, and so on.
  • Regression testing.
  • Security protocol interoperability testing—Test communications and functionality with expected systems, such as OpenSSL.

• Device management—As part of the overall test monkey framework, the test monkeys must do much more than simply run tests:

  • Power cycle devices and boards when necessary, such as when loading new images or recovering from crashes.
  • Log into devices and boards to evaluate test logs.
  • Write test log results and dashboards (summaries) to the test database.
  • Send notifications to test engineers whenever tests fail.
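The per-target loop at the heart of such a framework can be sketched as follows. The target_ops callbacks stand in for the board-specific build, firmware download, test, and power-cycle steps described above; all of the names here are illustrative, not a real Mocana or RTOS API.

```c
#include <assert.h>
#include <stdbool.h>

/* One entry per supported board/OS combination. Each callback
 * wraps the target-specific command sequence (make invocation,
 * firmware download, test harness, power controller). */
typedef struct {
    const char *name;
    bool (*build)(const char *target);
    bool (*flash)(const char *target);
    bool (*run_tests)(const char *target);
    void (*power_cycle)(const char *target);
} target_ops;

/* Runs build -> flash -> test on every target; returns how many
 * passed. A target that fails at any step is power cycled so a
 * wedged board does not block the rest of the schedule. */
int run_monkey(const target_ops *targets, int count)
{
    int passed = 0;
    for (int i = 0; i < count; i++) {
        const target_ops *t = &targets[i];
        if (t->build(t->name) && t->flash(t->name) &&
            t->run_tests(t->name)) {
            passed++;
        } else {
            t->power_cycle(t->name);  /* recover for the next run */
        }
    }
    return passed;
}

/* Example stubs for dry runs without hardware attached: */
static bool always_ok(const char *t)   { (void)t; return true; }
static bool always_fail(const char *t) { (void)t; return false; }
static void noop_cycle(const char *t)  { (void)t; }
```

In practice the loop would also write per-target results to the test database and trigger failure notifications, as listed above.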

Conclusion
Although this paper is only a surface-level survey of security design and testing (see "References and Further Reading," on page 33), it can certainly speed you on your way to improving your development and testing processes for secure embedded applications. From examining the characteristics of embedded systems, it's clear that your testing focus must be broader than that required for traditional application testing, particularly in the areas of real-time behavior and performance and capacity testing.

Developing secure applications requires additional design activities focused on negative requirements and eliminating design flaws. As modeled by the SSDL (secure software development lifecycle), testing for security includes threat modeling and choosing test cases based on exploitability.

Taking the two topics together (that is, embedded systems and secure
applications), this paper provides some best practices, describes in some detail
the variety of tests that are important to perform (particularly security-related
tests such as attack, penetration, and fuzz testing), and outlines what to include
when building your test environment.

Using these recommendations as a guide, you can ensure that your embedded
application is robust and secure, reduce time to market, and promote positive
brand identity.

References and Further Reading
Security and Testing Books and Articles

[S1] R. M. Backus, Embedded systems security has moved to the forefront, Embedded.com, 10/07/07, URL: http://www.embedded.com/design/networking/202103432, accessed: 2009-03-07. (Archived by WebCite® at http://www.webcitation.org/5f6YJ5ENw.)

[S2] Sean Beatty, Sensible Software Testing, Embedded.com, URL:http://www.


embedded.com/2000/0008/0008feat3.htm, accessed: 2009-03-06. (Archived by
WebCite® at http://www.webcitation.org/5f56sYkrp.)

[S3] Arnold S. Berger, Embedded Systems Design: An Introduction to Processes, Tools,


& Techniques, 2002, CMP Books.

[S4] Walter Bright, Code Coverage Analysis, Dr. Dobb's CodeTalk, June 7, 2008,
URL:http://dobbscodetalk.com/index.php?option=com_myblog&show=Coverage-
Analysis.html&Itemid=29, accessed: 2009-03-05. (Archived by WebCite® at http://
www.webcitation.org/5f3ifBGHH.)

[S5] Vincent Encontre, Testing embedded systems: Do you have the GuTs for it?,
2005, IBM, URL: http://www.ibm.com/developerworks/rational/library/459.
html, accessed: 2009-02-23. (Archived by WebCite® at http://www.webcitation.
org/5eoXrvU9M.)

[S6] Mark G. Graff & Kenneth R. van Wyk, Secure Coding Principles & Practices, 2003,
O’Reilly & Associates.

[S7] Michael Howard and David LeBlanc, Writing Secure Code, 2003, Microsoft
Corporation.

[S8] Michael Howard, David LeBlanc, and John Viega, 19 Deadly Sins of Software
Security: Programming Flaws and How to Fix Them, 2005, McGraw-Hill
Companies.

[S9] Nat Hillary, Measuring Performance for Real-Time Systems, 2005, Freescale
Semiconductor, URL: http://www.freescale.com/files/soft_dev_tools/doc/white_
paper/CWPERFORMWP.pdf, accessed 2009-02-23.

[S10] Girish Janardhanudu, White Box Testing, Cigital, 2005, URL:https://buildsecurityin.


us-cert.gov/daisy/bsi/articles/best-practices/white-box/259-BSI.html, accessed:
2009-02-23. (Archived by WebCite® at http://www.webcitation.org/5eoV64krr)

[S11] Gary McGraw, editor, Software Security Testing, IEEE Security & Privacy,
September/October 2004, URL: http://www.cigital.com/papers/download/bsi4-
testing.pdf, accessed 2009-02-23.

[S12] Gary McGraw, editor, Software Penetration Testing, IEEE Security & Privacy,
January/February 2005, URL: http://www.cigital.com/papers/download/bsi6-
pentest.pdf, accessed 2009-02-23.

[S13] C. C. Michael and Will Radosevich, Black Box Security Testing Tools, 2005,
Cigital, Inc., URL:https://buildsecurityin.us-cert.gov/daisy/bsi/articles/tools/black-
box/261-BSI.html, accessed: 2009-02-23. (Archived by WebCite® at http://www.
webcitation.org/5eoQuS66s.)

[S14] C. C. Michael and Will Radosevich, Risk-Based and Functional Security Testing,
2005, Cigital, Inc., URL:https://buildsecurityin.us-cert.gov/daisy/bsi/articles/best-
practices/testing/255-BSI.html, accessed: 2009-02-23. (Archived by WebCite® at
http://www.webcitation.org/5eoVUWdcx.)

[S15] Ari Takanen, Jared D. Demott, and Charles Miller, Fuzzing for Software Security
Testing and Quality Assurance, 2008, Artech House, Inc.

[S16] Maura A. van der Linden, Testing Code Security, 2007, Auerbach Publications.

[S17] Kenneth R. van Wyk, Adapting Penetration Testing for Software Development
Purposes, Carnegie Mellon University, 2007, URL:https://buildsecurityin.us-cert.
gov/daisy/bsi/articles/best-practices/penetration/655-BSI.html, accessed: 2009-02-
23. (Archived by WebCite® at http://www.webcitation.org/5eoUKoUtg.)

[S18] James A. Whittaker, How to Break Software: A Practical Guide to Testing, 2003,
Pearson Education, Inc.

[S19] Chris Wysopal, Lucas Nelson, Dino Dai Zovi, and Elfriede Dustin, The Art of
Software Security Testing, 2007, Symantec Corporation.

C Coding Style Guidelines

[C1] Aladdin Enterprises, Aladdin’s C coding guidelines, URL:http://pages.cs.wisc.


edu/~ghost/doc/AFPL/6.01/C-style.htm, accessed: 2009-03-06. (Archived by
WebCite® at http://www.webcitation.org/5f4tvyFLN.)

[C2] L.W. Cannon, et al., Recommended C Style and Coding Standards, URL:http://
www.doc.ic.ac.uk/lab/cplus/cstyle.html, accessed: 2009-03-06. (Archived by
WebCite® at http://www.webcitation.org/5f4tdERfB.)

[C3] Jim Larson, Standards and Style for Coding in ANSI C, URL:http://www.jetcafe.
org/~jim/c-style.html, accessed: 2009-03-06. (Archived by WebCite® at http://
www.webcitation.org/5f4tA3DQ7.)

[C4] Swarthmore College, C Code Style Guidelines, URL:http://www.cs.swarthmore.


edu/~newhall/unixhelp/c_codestyle.html, accessed: 2009-03-06. (Archived by
WebCite® at http://www.webcitation.org/5f4tpYTyb.)

[C5] Conrad Weisert, Thoughts on Some Proposed C Coding Standards, URL:http://


www.idinews.com/cppStds.html, accessed: 2009-03-06. (Archived by WebCite® at
http://www.webcitation.org/5f4tSOnM8.)

Test Tools

[T1] Codenomicon Defensics, http://www.codenomicon.com/, accessed: 2009-03-03.


(Archived by WebCite® at http://www.webcitation.org/5f0neo8o7.)

[T2] Insight from Klocwork, http://www.klocwork.com/products/insight.asp, accessed:


2009-03-03. (Archived by WebCite® at http://www.webcitation.org/5f0mvK79U.)

[T3] IxANVL™, from Ixia, http://www.ixiacom.com/products/display?skey=ixanvl,


accessed: 2009-03-03. (Archived by WebCite® at http://www.webcitation.
org/5f0lenWPy.)

[T4] Nessus from Tenable Network Security, http://www.nessus.org/nessus/, accessed:


2009-03-03. (Archived by WebCite® at http://www.webcitation.org/5f0nt0C5U.)

[T5] SmartBits® from Spirent Communications, http://www.spirent.com/analysis/


technology.cfm?media=7&ws=325&ss=110&stype=15&a=1, accessed: 2009-03-03. (Archived by WebCite® at http://www.webcitation.org/5f0m82sJu.)

[T6] SSHredder from Rapid7, http://www.rapid7.com/securitycenter/sshredder.


jsp, accessed: 2009-03-03. (Archived by WebCite® at http://www.webcitation.
org/5f0nKdtsK.)

[T7] Wireshark protocol analyzer, http://www.wireshark.org/, accessed: 2009-03-09.


(Archived by WebCite® at http://www.webcitation.org/5f9qHTVgD.)

Appendix: NanoDefender™
Mocana's device intrusion detection system that defeats malware while eliminating false positives

Mocana’s patent-pending new anti-malware Mocana NanoDefender provides protection


product, NanoDefender, is a device-based to function flow, and especially system calls. Features & Benefits
intrusion detection system that is designed For example, if an attacker takes advantage
to instantly detect and shut down malware or of a buffer overflow in glob() in glibc and True runtime intrusion
detection for devices
viruses before they have a chance to spread subsequently attempts to overwrite system
throughout the network or hijack data—and configuration files with fwrite(), the attack Protects common code
it does so while eliminating “false positives.” would be stopped immediately by Mocana libraries
NanoDefender is the latest addition to the because glob() does not call fwrite() in
Device Security Framework™, Mocana’s normal operation. Prevents system
takeover
top-to-bottom architecture for planning,
implementing and managing comprehensive NanoDefender is basically a set of tools
Minimal CPU usage
device security across the enterprise. and code designed to “harden” executable
images against arbitrary code execution. Easily integrated into
4HEå-OCANAå.ANO$EFENDERå$IFFERENCE When a new application is compiled, applications—no code
NanoDefender performs a static analysis of changes are required
Mocana NanoDefender approaches intrusion the code to determine the call flow of the
No false positives
detection in a completely different way. executable. In other words, NanoDefender
Unlike anti-malware products currently on the determines which functions call which Protects against zero-
market, which rely on attack databases for functions, and which functions make day attacks
defense, NanoDefender tracks the function which system calls. Later, at link time, the
flow within the application. executable is instrumented to track function Supports common
platforms such as
calls. Finally, at runtime, NanoDefender Linux and BSD
Designed to prevent malicious code
runtime code and the (now specially
execution in the context of an existing
modified) OS together enforce the proper call Advanced
application or process, NanoDefender is cryptography support
flow.
focused on recognizing previously unknown
attacks, especially on handheld and wireless Supports real-time
.ANO$EFENDERå3ECUREå"OOTå.ANO"OOT
operating systems such
devices. It isn’t an add-on. It’s designed to
as VxWorks
be integrated into the device or application The security posture of any device relies, to a
during the manufacturing process to prevent large extent, on your ability to trust that it has
damage from attacks—known and unknown. booted up “cleanly”. Mocana NanoBoot™,
bundled with NanoDefender, provides all
(OWå.ANO$EFENDERå7ORKS the tools and firmware source code needed
to perform pre-boot verifications. NanoBoot
In Mocana NanoDefender, every action
uses strong cryptography to validate the
an application takes is checked against a
BIOS, firmware, and boot loader images.
known “good behavior” model. Mocana
NanoBoot can run in memory-constrained
NanoDefender maintains a database of
environments (depending on cryptographic
behaviors and functions that are deemed
configuration), requiring less than 8 KB
“acceptable” for a given application, and if
uncompressed firmware space and less than
the function or behavior does not match the
2 KB of RAM.
known “good behavior,” the application is
terminated and the security breach is logged.
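The call-flow enforcement described above can be illustrated with a small sketch. The whitelist of caller/callee edges below is invented for this example (a real tool would derive it by static analysis at build time), but it shows why the glob()-calling-fwrite() attack is rejected: that edge simply is not in the table.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* One allowed caller -> callee edge in the program's call graph. */
typedef struct {
    const char *caller;
    const char *callee;
} call_edge;

/* Hypothetical whitelist for illustration only; in practice it
 * would be generated by analyzing the actual executable. */
static const call_edge allowed[] = {
    { "main", "glob"   },
    { "main", "fwrite" },
    { "glob", "malloc" },  /* glob() allocates, but never writes files */
};

/* Default deny: any edge not in the table is treated as an
 * intrusion, which is why there are no false positives for
 * unknown (zero-day) attacks that divert the call flow. */
bool call_allowed(const char *caller, const char *callee)
{
    for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++) {
        if (strcmp(allowed[i].caller, caller) == 0 &&
            strcmp(allowed[i].callee, callee) == 0)
            return true;
    }
    return false;
}
```

The runtime instrumentation would consult a check like this on every tracked call and terminate the application when it returns false.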

NanoBoot consists of two components: a command line tool, which digitally signs the authorized firmware image, and a small signature verification application that executes during initialization from within a processor's protected flash memory. The NanoBoot application may be as little as 8 KB, and require less than 2 KB of RAM, enabling SoC design. When the device is powered up, NanoBoot verifies the device's signature, thereby ensuring that the device's firmware has not been altered.

NanoDefender Secure Update (NanoUpdate)

Today, virtually all hardware devices use firmware of one sort or another, from expansion cards, USB adaptors, switches, routers, printers, storage drives, digital cameras, mobile phones and more. Where once firmware was read-only and fixed, today most firmware can be dynamically updated, giving "old" hardware new features and capabilities.

For years, code updates and patches have been automatically delivered and installed on millions of Macs and PCs. But most network-attached devices—if they can be updated at all—still require manual updating, a process that is time consuming, tedious, and can actually introduce new security problems into the network.

Mocana's NanoUpdate is an easy-to-use, high-performance Secure Firmware Update solution. Mocana NanoUpdate enables firmware images and other messages to be securely delivered to devices in the field automatically, eliminating the need for insecure manual methods, like email, TFTP, FTP, HTTP, or physical DVDs.

NanoUpdate's command line tool can create a PKCS #7–digitally signed message. The signed message is placed at a well-known URL that the device is programmed to check for updates. The signed message is then downloaded, authenticated, verified, de-capsulated, saved, and/or acted upon.

NanoDefender™ Features

NanoDefender is a comprehensive intrusion prevention system that secures all aspects of a device: communications, identity, access, privilege, control, and execution. It tracks the function flow within an application instead of relying on an "attack database" for defense. And, better yet, it delivers complete security without time-consuming false positives.

Common Code Protection

Applications that rely on general-purpose libraries such as libc/glibc also inherit any vulnerability that may exist within those libraries. With NanoDefender, these general-purpose libraries can be hardened in advance, avoiding difficult and costly post-shipment library swap-outs.

Minimal Footprint and CPU Usage

NanoDefender delivers minimal impact at runtime with no hindrance to quality of performance. Instead of a large database that requires constant updating, it relies only on a small set of data describing the function flow and system calls within a given application. In an embedded or handheld environment where storage space is at a premium, this is an absolute necessity.

Platform Independent

Like all of Mocana's device security toolkits, NanoDefender is CPU-architecture and platform independent. Linux platforms are supported out-of-the-box, and ports to other common platforms such as BSD, OSE, Nucleus, Solaris, ThreadX, Windows, MacOS X, (ARC) MQX, pSOS, and Cygwin, as well as real-time operating systems such as VxWorks, are easily achieved.
NanoDefender™ Benefits

Comprehensive Attack Protection

Designed to prevent malicious code execution in the context of an existing application or process, NanoDefender can shut down any exploit changing the function flow within running code before it has the chance to do any damage. NanoDefender even provides protection from remote and local stack-based overflows, format string attacks/string exploits, heap overflows, return-to-libc attacks, and integer overflows.

No False Positives

Because NanoDefender only acts if "disallowed" behavior is detected, false positives are impossible. Using a rules base of acceptable behavior for any applications running on the new device, NanoDefender only terminates an application if it begins behaving erratically due to malware or some other security threat.

Truly Painless Integration

NanoDefender was built for ease of use and ease of installation from the ground up. It's a snap to integrate into applications—just rebuild an application using a Mocana-provided code analyzer and linker. Absolutely no changes to your code are required. Plus, Mocana's developer support team is available 24x7 to answer your questions about crypto, our toolkits, or embedded development in general.

NanoBoot Module Benefits

• Prevents subversion (tampering) of firmware images. Blocks unlicensed firmware upgrades and protects intellectual property.
• Enables you to assign unique IDs, such as SKUs, to firmware images using cryptographic private keys.
• One simple API function to call at startup or periodically as desired. Endian neutral; RTOS not required.
• Code can run in ROM, not just RAM. Ultra-small footprint enables SoC (system on chip) design.

NanoUpdate Module Benefits

• Endian-neutral and RTOS not required; CPU-architecture and platform independent. Platforms supported out-of-the-box include Linux, MontaVista Linux, VxWorks, OSE, Nucleus, Solaris, ThreadX, Windows, MacOS X, (ARC) MQX, pSOS, and Cygwin.
• Powerful, simple, easy-to-use API. No crypto expertise required.
• Simple, secure, easy to use and install. Exclusive command line tool for signing firmware images and messages.
• Extends device lifetime out in the field. Creates new revenue opportunities for already-deployed hardware.
• Can be used for both wired and wireless mobile applications, over local or remote networks.
• Fast, inventory-wide security updates mean your product line is significantly less vulnerable to zero-day attacks.

About Mocana

Mocana provides device management solutions and embedded security tools for consumer electronics manufacturers, datacom companies, telecom carriers, industrial automation applications, and the enterprise. Mocana's industry-leading infrastructure software solutions ensure that wired and wireless devices, networks, and their services all scale securely. Mocana offers 18 integrated products, which are the security solution of choice for more than 90 major customers, including Cisco, Freescale, Philips, Dell, Nortel Networks, Harris, Honeywell, Symbol, Net.com, and Radvision.

Winner of the 2008 Red Herring Top 100 Tech Startups in the World and 2008 Frost & Sullivan Technology Innovation of the Year awards, Mocana was founded in 2004, is privately held, and is headquartered in San Francisco, California. For more information, visit www.mocana.com.

Mocana Solutions
• NanoBoot™—Secure preboot verification for firmware
• NanoUpdate™—Secure firmware updates
• NanoWall™—Embedded system firewall
• NanoSSH™—High-performance SSH client and server
• NanoSSL™—Super-small SSL client and server
• NanoSec™—Device-optimized IPsec, IKEv1/v2, MOBIKE
• NanoEAP™—EAP supplicant and 802.11 extensions
• NanoCert™—Certificate management for client devices
• NanoDTLS™—Embedded DTLS client
• NanoDefender™—Intrusion detection for devices
• NanoPhone™—Quick-development security toolkit for Google Android handsets

Downloads and Contacts

• For details about the Mocana Device Security Framework, visit http://www.mocana.com/device-security-framework.html.
• For your 90-day free trial, visit www.mocana.com/evaluate.html.
• For pricing and purchase information, email sales@mocana.com or call 866-213-1273.

[VPNC Certified: Basic Interop, AES Interop, IKEv2 Basic Interop, IPv6 Interop. Tech Interop Choice 2008.]