
An embedded system that has a marginal timing problem or a cross-talk problem can appear to work correctly for long stretches of time and then just die. When the right combination of 1s and 0s appears on the right bus at the right time, a glitch occurs, and a bit flips where it shouldn't, taking the system down with it.
To avoid these kinds of problems, we turn to:
- Program validation
- Program testing

Validation loosely refers to the process of determining that a design is correct.

Simulation remains the main tool to validate a model, but the importance of formal verification is growing, especially for safety-critical embedded systems. Although still in its infancy, formal verification of embedded systems shows more promise than verification of general software programs, because embedded systems are often specified in a more restricted way.

Simulating embedded systems is challenging because they are heterogeneous. Most embedded systems contain both software and hardware components that must be simulated at the same time; this is the co-simulation problem.
Co-simulation has two conflicting goals:
- Execute the software as fast as possible, often on a host machine that may be faster than the final embedded CPU.
- Keep the hardware and software simulations synchronized, so that they interact just as they will in the target system.

One approach uses a general-purpose software simulator (VHDL or Verilog) to simulate a model of the target CPU, executing the software program on this simulation model. Different models can be employed:
- Gate-level models
- Instruction-set architecture models
- Bus-functional models
- Translation-based models
Another approach keeps track of time in software and hardware independently and synchronizes them periodically using various mechanisms. There are two basic mechanisms for synchronizing time in hardware and software:

Software is the master and hardware is the slave. In this case, the software decides when to send a message, tagged with the current software clock cycle, to the hardware simulator. Depending on the relation between software and hardware time, the hardware simulator can either continue simulation until software time or back up the simulation to software time.

Hardware is the master and software is the slave. In this case, the hardware simulator directly calls communication procedures which, in turn, call software code.
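The first mechanism (software as master) can be sketched in a few lines; the class and method names here are invented for illustration, and a real co-simulator would of course checkpoint and restore hardware state rather than simply overwrite a counter:

```python
# Sketch of the "software is the master" synchronization mechanism.
# All class and method names are invented for illustration.

class HardwareSim:
    """Toy hardware simulator keeping its own notion of time."""
    def __init__(self):
        self.time = 0

    def sync_to(self, sw_time):
        # If software time is ahead, continue simulating up to it;
        # if the hardware ran ahead, back up the simulation to software
        # time (a real simulator would restore a saved checkpoint here).
        self.rolled_back = sw_time < self.time
        self.time = sw_time

class SoftwareSim:
    """Software side: the master, deciding when to send messages."""
    def __init__(self, hw):
        self.hw = hw
        self.clock = 0          # software clock, in cycles

    def run(self, cycles):
        self.clock += cycles    # execute software for some cycles

    def send(self, payload):
        # Tag the message with the current software clock cycle and
        # let the hardware simulator synchronize to that timestamp.
        self.hw.sync_to(self.clock)
        return (self.clock, payload)

sw = SoftwareSim(HardwareSim())
sw.run(10)                      # software runs ahead for 10 cycles
msg = sw.send("bus_write")      # hardware catches up to cycle 10
```

In the second mechanism the roles are inverted: the hardware simulator's event loop drives everything and invokes software callbacks when a communication event fires.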

Formal verification is the process of mathematically checking that the behavior of a system, described using a formal model, satisfies a given property, also described using a formal model. Two kinds are distinguished:
- Specification verification
- Implementation verification

The properties we check are traditionally broken into two classes:
- Safety properties
- Liveness properties
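In temporal-logic notation, the distinction is often summarized as follows (this is the standard textbook formulation, not taken from this document):

```latex
% Safety: "nothing bad ever happens", e.g. the system never deadlocks
\mathbf{G}\,\neg\mathit{bad}

% Liveness: "something good eventually happens",
% e.g. every request is eventually granted
\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{granted})
```

A safety property can be refuted by a finite execution trace, whereas refuting a liveness property requires exhibiting an infinite execution in which the good event never occurs.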

Embedded systems software testing shares much in common with application software testing, but some important differences exist between the two. Embedded developers often have access to hardware-based test tools that are generally not used in application development. Also, embedded systems often have unique characteristics that should be reflected in the test plan. These differences tend to give embedded systems testing its own distinctive flavor.

Before we begin to design tests, it's important to have a clear understanding of why we are testing. This understanding influences which tests we stress and how early we begin testing. In general, we test for four reasons:
- To find bugs in software (testing is the only way to do this)
- To reduce risk to both users and the company
- To reduce development and maintenance costs
- To improve performance

One of the important results from theoretical computer science is a proof (known as the Halting Theorem) that it's impossible to prove that an arbitrary program is correct. Given the right test, however, you can prove that a program is incorrect (that is, it has a bug). It's important to remember that testing isn't about proving the correctness of a program but about finding bugs. Experienced programmers understand that every program has bugs. The only way to know how many bugs are left in a program is to test it with a carefully designed set of tests.

Testing minimizes risk to yourself, your company, and your customers. The objectives in testing are to demonstrate to yourself that the system and software work correctly as designed, and to be assured that the product is as safe as it can be. That is, you want to discover every conceivable fault or weakness in the system and software before they are deployed in the field.

The classic argument for testing comes from quality management. In 1990, HP sampled the cost of errors in software development during the year. The answer, $400 million, shocked HP into a completely new effort to eliminate mistakes in writing software. Half of that sum was spent in the labs on rework and half in the field to fix the mistakes that escaped from the labs. It amounted to one-third of the company's total R&D budget, and eliminating it could have increased earnings by almost 67%.

The earlier a bug is found, the less expensive it is to fix. The cost of finding errors and bugs in a released product is significantly higher than during unit testing.

Testing maximizes the performance of the system. Finding and eliminating dead code and inefficient code can help ensure that the software uses the full potential of the hardware and thus avoids the dreaded hardware re-spin.

We want to test every possible behavior in our program. This implies testing every possible combination of inputs or every possible decision path at least once, a goal that is impossible to achieve in practice. The basic approach is instead to select the tests that have the highest probability of exposing an error:
- Functional (black-box) tests
- Coverage (white-box) tests
- Gray-box testing

Functional testing is often called black-box testing because the test cases for functional tests are devised without reference to the actual code. Black-box tests are based on what is known about which inputs should be acceptable and how they should relate to the outputs; they know nothing about how the algorithm in between is implemented. Because black-box tests depend only on the program requirements and its I/O behavior, they can be developed as soon as the requirements are complete.
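As a concrete illustration, boundary-value tests for a routine that clamps an ADC reading to a 10-bit range can be written purely from that requirement, without looking at the implementation. The function `clamp_adc` below is a hypothetical example, not something defined elsewhere in these notes:

```python
# Black-box boundary-value tests for a hypothetical clamp_adc() routine.
# Requirement: readings must be clamped to the 10-bit range 0..1023.

def clamp_adc(raw):
    """Implementation under test: clamp a raw reading to 0..1023."""
    return max(0, min(raw, 1023))

# Test cases chosen only from the requirement's I/O boundaries,
# with no knowledge of how clamp_adc() is implemented:
cases = [
    (-1, 0),        # just below the lower boundary
    (0, 0),         # lower boundary
    (1023, 1023),   # upper boundary
    (1024, 1023),   # just above the upper boundary
    (512, 512),     # nominal mid-range value
]

for raw, expected in cases:
    assert clamp_adc(raw) == expected, (raw, expected)
```

Because the cases derive only from the stated range, they would remain valid even if the implementation were rewritten, as long as the requirement stays the same.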

Common black-box tests include:
- Stress tests
- Boundary value tests
- Exception tests
- Error guessing
- Random tests
- Performance tests

Coverage tests attempt to avoid the principal weakness of functional tests, namely that they rarely exercise all of the code, by ensuring that each code statement, decision point, or decision path is exercised at least once. They are also known as white-box tests.
- Statement coverage
- Decision (branch) coverage
- Condition coverage
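To make the distinction concrete, consider a small routine (hypothetical, invented for this example) and the inputs needed for each coverage level; note how a single test can reach full statement coverage while leaving a branch outcome unexercised:

```python
# White-box coverage illustration on a hypothetical fan controller.

def fan_speed(temp, override):
    speed = 0
    if temp > 70 or override:   # one decision made of two conditions
        speed = 255
    return speed

# One test, fan_speed(80, False), executes every statement (statement
# coverage) but never takes the false outcome of the `if`.
# Decision (branch) coverage needs both outcomes of the decision:
assert fan_speed(80, False) == 255   # decision true
assert fan_speed(20, False) == 0     # decision false
# Condition coverage additionally needs each individual condition to
# take both truth values, e.g. the override condition true on its own:
assert fan_speed(20, True) == 255
```

Tools such as gcov (for C) or coverage.py (for Python) report which of these levels a test suite actually achieves.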

Because white-box tests can be intimately connected to the internals of the code, they can be more expensive to maintain than black-box tests, which remain valid as long as the requirements and the I/O relationships remain stable. Tests that know only a little about the internals are called gray-box tests. Gray-box tests can be very effective when coupled with error guessing.

Several characteristics give embedded systems testing its distinctive flavor:
- Embedded software must run reliably without crashing for long periods of time.
- Embedded software is often used in applications in which human lives are at stake.
- Embedded systems are often so cost-sensitive that the software has little or no margin for inefficiencies of any kind.
- Embedded software must often compensate for problems with the embedded hardware.
- Real-world events are usually asynchronous and nondeterministic, making simulation tests difficult and unreliable.
- Your company can be sued if your code fails.
