ELN5622 Embedded Systems Class 10 Spring, 2003


ELN5622
Embedded Systems
Class 10
Spring, 2003
Aaron Itskovich
[email protected]
Outline
Testing, Verification & Reliability
• Test Plan Creation & Execution
• Reliability & Correctness
• Design for Test & Debugging
• Testing (Yield, Field Return, Golden System, Coverage), JTAG

Why test?
• To reduce risk to both user and company
• To reduce development and maintenance cost
• To improve performance
• To find bugs in software and hardware

To Find the Bugs
• The Halting Theorem proves that it is impossible to prove that an arbitrary program is correct
• Given the right test, you can prove that a program is incorrect
• Testing is not about proving the "correctness" of a program but about finding bugs
• The only way to "know" how many bugs are left is to test with a carefully designed test plan
• A known bug is already "half a bug"
To reduce the risk and costs
• Minimize risk to yourself, your company and your customers
• The earlier you detect a problem, the cheaper the fix
[Figure: the cost ($) to fix a problem rises with project time]
Costs of bugs
• In 1990 HP sampled the cost of errors in software development during the year. The answer, $400 million, shocked HP into a completely new effort to eliminate mistakes in writing software. The $400 million of waste, half of it spent in the labs on rework and half in the field to fix the mistakes that escaped from the lab, amounted to 1/3 of the company's total R&D budget.
How to make bug fixing cheaper?
• If we can't ensure the correctness of a released system, how do we make bug fixes cheaper?
• Design the system to be field upgradeable (see the boot-loader sketch below)
– Re-configurable hardware (FPGA)
– Separate application software from the boot code
– Use Flash or EEPROM as application storage
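
As a rough illustration of a field-upgradeable design, the C sketch below shows a boot loader that jumps to the application image only if a validity marker is present; the memory map, header layout and magic value are all assumptions for this example, not from the slides.

#include <stdint.h>

/* Assumed layout: boot code lives in protected flash and never changes;
 * the application lives in a separately erasable flash region and can
 * be replaced in the field. */
#define APP_BASE        0x08008000u    /* assumed application flash base */
#define APP_VALID_MAGIC 0xA5A5A5A5u    /* assumed marker written only after
                                          a complete, successful download */

struct app_header {
    uint32_t magic;    /* APP_VALID_MAGIC when the image is complete */
    uint32_t entry;    /* application entry point                    */
};

void boot_main(void)
{
    const struct app_header *app = (const struct app_header *)APP_BASE;

    if (app->magic == APP_VALID_MAGIC) {
        /* Image looks valid: hand control to the application. */
        void (*app_entry)(void) = (void (*)(void))(uintptr_t)app->entry;
        app_entry();
    }

    /* No valid image (or an interrupted download): stay in the boot
     * loader and wait for a fresh image over the download channel. */
    for (;;) {
        /* download_and_program_flash();  -- hypothetical helper */
    }
}

Writing the magic word last means a power failure during an update leaves the unit in the boot loader rather than running a half-programmed application.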
When to test?
• As early as possible
• Statistically, about 70% of the bugs found during the integration phase of a project were generated by code that had never been exercised before
What tests?
• Every time the program is modified, it should be retested to assure that the changes didn't break some unrelated behavior – REGRESSION TESTING
• Individual developers test at the module level by writing stub code to substitute for the rest of the system hardware and software – UNIT TESTING (see the stub sketch below)
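
To illustrate unit testing with stubs, here is a minimal host-side C sketch; the module, the adc_read() driver call and the scaling are hypothetical examples, not from the slides.

#include <assert.h>
#include <stdint.h>

/* Hypothetical module under test: converts a raw ADC reading into
 * tenths of a degree. On the target, adc_read() comes from the
 * hardware driver; the test build links the stub below instead. */
uint16_t adc_read(void);

int16_t temperature_decidegrees(void)
{
    return (int16_t)((adc_read() * 5) / 2 - 500);  /* assumed scaling */
}

/* Stub: stands in for the ADC hardware so the module can be built
 * and tested on a host PC, with inputs the test controls exactly. */
static uint16_t stub_adc_value;
uint16_t adc_read(void) { return stub_adc_value; }

int main(void)
{
    stub_adc_value = 200;                    /* known input...     */
    assert(temperature_decidegrees() == 0);  /* ...expected output */
    stub_adc_value = 240;
    assert(temperature_decidegrees() == 100);
    return 0;
}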
Test case design
• Functional testing (black box)
– Can and should be written in parallel with the requirements document
• Coverage testing (white box)
– Coverage testing implies that your code is stable
• Both kinds of testing are necessary to test your embedded design rigorously
A bit of history
• The first known computer bug came about in 1947, when a primitive computer used by the Navy to calculate the trajectories of artillery shells shut down when a moth got stuck in one of its computing elements, a mechanical relay. Hence the name "bug" for a computer error.
When to stop testing
• When the boss says so
• When a new iteration of the test cycle finds fewer than X new bugs
• When a certain coverage threshold has been met without uncovering any new bugs
• If your system is mission critical, look into the DO-178B specification
Choosing Test Cases
• Functional tests
– Stress tests: Tests that intentionally overload input channels, memory buffers, etc.
– Boundary value tests: Inputs that represent "boundaries" within a particular range, and input values that should cause the output to transition across a similar boundary in the output range (see the sketch after this list)
– Exception tests: Tests that should trigger a failure mode or exception mode
– Error guessing: Tests based on prior experience with testing similar products
– Random tests: Usually the least productive form of testing
– Performance tests: Tests that the performance expectations from the requirements are met
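
Below is a small illustrative C sketch of boundary value testing; clip_setpoint() and its 0..100 range are hypothetical, chosen only to show how tests probe both sides of each limit.

#include <assert.h>

/* Hypothetical function under test: clips a setpoint to the
 * valid range 0..100 assumed by this example's requirements. */
int clip_setpoint(int request)
{
    if (request < 0)   return 0;
    if (request > 100) return 100;
    return request;
}

int main(void)
{
    /* Boundary value tests: one case on, just below, and just
     * above each boundary of the input range. */
    assert(clip_setpoint(-1)  == 0);    /* just below lower bound */
    assert(clip_setpoint(0)   == 0);    /* on the lower bound     */
    assert(clip_setpoint(1)   == 1);    /* just above lower bound */
    assert(clip_setpoint(99)  == 99);   /* just below upper bound */
    assert(clip_setpoint(100) == 100);  /* on the upper bound     */
    assert(clip_setpoint(101) == 100);  /* just above upper bound */
    return 0;
}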
Choosing Test Cases
• Coverage tests (contrasted in the sketch after this list)
– Statement coverage: Test cases selected because they execute every statement in the program at least once
– Decision or branch coverage: Test cases chosen because they cause every branch (both the true and false paths) to be executed at least once
– Condition coverage: Test cases chosen to force each condition (term) in a decision to take on all possible logic values
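
The following C sketch contrasts the three coverage levels on a single two-term decision; the function and its inputs are hypothetical.

#include <stdbool.h>
#include <stdio.h>

/* Example decision with two conditions (terms). */
void check(bool overtemp, bool overcurrent)
{
    if (overtemp || overcurrent)
        printf("shutdown\n");
    else
        printf("run\n");
}

int main(void)
{
    /* Statement coverage: every statement runs at least once, e.g.
     *   check(true, false)  -> "shutdown"
     *   check(false, false) -> "run"
     * Branch coverage: those same two tests also take the decision
     * down both its true and false paths.
     * Condition coverage: each term must be seen both true and
     * false, which needs the three calls below, so that overtemp
     * and overcurrent each take on both logic values. */
    check(true, false);
    check(false, true);
    check(false, false);
    return 0;
}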
Practical alternatives
• Gray box testing – what is it?
– White box tests are expensive to maintain; they need to be reengineered every time the code is changed
– Gray box tests exploit knowledge of the implementation without being intimately tied to the coding details
Some distinguishers of embedded systems
• An embedded system must run reliably, without crashing, for long periods of time
• Embedded software must often compensate for problems with the embedded hardware
• Real-world events are usually asynchronous and nondeterministic, making simulation tests difficult and unreliable
• Did you read the software "license agreement"?
Measuring test coverage
• Code instrumentation methods, aka software logging
– Printf: intrusive, slows down the system
– Low-intrusion printf (a sketch follows this slide)
– Use of a logic analyzer to measure coverage
– Decision coverage (DC): measures the results of decision points in the code
– Modified decision coverage (MDC): goes one step farther than DC and evaluates the terms that make up each decision point
• Hardware instrumentation methods (logic analyzer, trace, …)
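
A minimal sketch of the low-intrusion logging idea, assuming a RAM ring buffer; the names, sizes and tag values are illustrative only.

#include <stdint.h>

/* Instead of printf, which can distort timing badly, each
 * instrumentation point writes one byte into a RAM ring buffer
 * that is dumped after the test run, or watched externally with
 * a logic analyzer. */
#define LOG_SIZE 256u                  /* the 8-bit index wraps at 256 */
static volatile uint8_t log_buf[LOG_SIZE];
static volatile uint8_t log_head;

static void log_point(uint8_t tag)
{
    log_buf[log_head++] = tag;         /* a few cycles, not milliseconds */
}

/* Usage: tag both outcomes of a decision to measure decision coverage. */
void handle_input(int value)
{
    if (value > 0) {
        log_point(0x01);               /* true path exercised  */
        /* ... normal processing ... */
    } else {
        log_point(0x02);               /* false path exercised */
        /* ... error handling ... */
    }
}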
How to test performance
Manufacturing tests
• Built-in self test (BIST) – see the sketch below
• Test bed
• Golden system concept
• JTAG boundary scan
• Yield
• Field return
• Fault correlation and root cause analysis
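
As a rough illustration of BIST, the C sketch below runs a ROM checksum and a destructive RAM pattern test at power-up; the addresses, sizes and reference checksum are assumptions for this example.

#include <stdint.h>
#include <stddef.h>

/* Minimal built-in self test, run before the application starts. */
#define ROM_START ((const uint8_t *)0x08000000u)   /* assumed ROM base   */
#define ROM_LEN   0x10000u                         /* assumed ROM length */
#define ROM_SUM   0x12345678u   /* reference checksum recorded at build time */

static int rom_test(void)       /* nonzero = pass */
{
    uint32_t sum = 0;
    for (size_t i = 0; i < ROM_LEN; i++)
        sum += ROM_START[i];
    return sum == ROM_SUM;
}

/* Destructive pattern test over a scratch RAM region: checks that
 * every cell holds an alternating-bit pattern and its inverse. */
static int ram_test(volatile uint32_t *ram, size_t words)
{
    for (size_t i = 0; i < words; i++) {
        ram[i] = 0xAAAAAAAAu;
        if (ram[i] != 0xAAAAAAAAu) return 0;
        ram[i] = 0x55555555u;
        if (ram[i] != 0x55555555u) return 0;
    }
    return 1;
}

int bist(void)                  /* nonzero = board passed self test */
{
    return rom_test() && ram_test((volatile uint32_t *)0x20000000u, 1024u);
}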