Transcript Slide 1

Best Practices
Chapter 7 - Debugging and Testing
Agenda









Introduction - What it Means to Debug
Categories of Coding Errors
The Process of Debugging Code
Using Assertions
Principles and Practices for Debugging
Exercising Your Software
Types of Software Tests
Principles and Practices for Testing
Conclusion
2
Introduction - What it Means to Debug






Bugs are software defects that result from
coding errors.
Some bugs are readily evident, e.g. when your
program crashes; others can go unnoticed for some
time
The activity of finding and removing defects from
code → debugging
When faced with a bug, never say "but that's
impossible" → if it happened then it is possible (Hunt
& Thomas, 2000)
Users often push an application to its limit by using the
application in ways that were not anticipated by
programmers
All code must be checked for bugs, and all bugs found
must be corrected
3
Categories of Coding Errors

Syntax errors:

Occur because the rules of the programming language used have been violated, e.g. a
keyword may have been spelled incorrectly
Easy to find → the compiler notifies the developer of them

Runtime errors:

Occur during the execution of a program, e.g. a division-by-zero operation
Cause the execution of a program to terminate unless the error has been anticipated and
handled with error-handling code

Logic errors:

Flaws in the design of a program, e.g. where the total amount due on an invoice is
incorrect
Can cause a program to terminate or produce incorrect results
More difficult to find → the compiler does not inform the programmer of them and the
program may not crash
Discover logic errors by thoroughly inspecting and testing code

By designing and writing code so that it is easier to debug, e.g. by breaking the
program into manageable pieces, you can simplify the error-removal process
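The three categories can be illustrated with a short sketch. Python is used here for illustration (the slides' own examples are in Visual Basic); the same distinctions apply in any language:

```python
# Syntax error: violates the language grammar; reported before the
# program ever runs, e.g.:
#   def broken(:        # SyntaxError

# Runtime error: the grammar is fine, but execution fails.
def divide(total, count):
    return total / count   # raises ZeroDivisionError when count == 0

# Logic error: the program runs to completion but the result is wrong.
def average(values):
    return sum(values) / (len(values) + 1)   # off-by-one: should be len(values)

try:
    divide(10, 0)
except ZeroDivisionError:
    print("runtime error anticipated and handled")

print(average([2, 4, 6]))  # prints 3.0 instead of the correct 4.0
```

Note that the logic error produces no crash and no compiler message, which is exactly why it is the hardest category to find.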
4
The Process of Debugging Code

McConnell (2004) recommends the
following approach for debugging
code:

1. Stabilize the error: Try to make the
error repeatable by finding the simplest
test case that produces the error
5
The Process of Debugging Code
(cont)

Approach for debugging code (cont):

2. Locate the source of the error: The
following steps are followed:
 a. Gather the data that produces the defect
 b. Analyze the data that has been gathered
and form a hypothesis about the defect
 c. Determine how to prove or disprove the
hypothesis, either by testing the program or
examining the code
 d. Prove or disprove the hypothesis by using
the procedure identified in 2(c)
6
The Process of Debugging Code
(cont)

Approach for debugging code (cont):



3. Fix the defect: Make sure that you have
diagnosed the problem correctly first
4. Test the fix: Run the same tests that you
used to diagnose the problem. Make sure the
problem has been solved and that there are no
side effects of the changes made
5. Look for similar errors: Defects tend to
occur in groups. When you find one defect,
look for others that are similar
7
The Process of Debugging Code
(cont)

Some helpful tips for finding defects (McConnell,
2004):

Test the code in isolation: Use unit tests to test the
code in isolation

Use the available tools:






Source-code comparators: These can identify the
differences between two source-code files
Compiler warning messages: These should not be ignored
Syntax and logic checkers: These can check code more
thoroughly than a compiler can
Execution profilers: These can uncover surprising defects
that might otherwise go unnoticed
Test frameworks/scaffolding: Writing test code to
exercise a piece of problem code can be useful for uncovering
defects
Debuggers: Many of today’s debuggers offer a rich set of
debug features
8
The Process of Debugging Code
(cont)

Helpful tips for finding defects (cont):






Be suspicious of classes and routines that have had
defects before
Check code that has changed recently
Start by focusing on a smaller section of code
Check for common defects:
 Use code-quality checklists to stimulate your thinking
about possible defects. Keep records of errors you have
made in the past to help you discover future errors
Explain the problem to someone:
 Often, when reading code, we see what we intended to
write, not what we actually wrote
Take a break from the problem:
 Often, given time, your subconscious mind is able to
figure out the cause
9
The Process of Debugging Code
(cont)

Helpful tips for finding defects (cont):

Use brute-force debugging:

Some brute-force techniques recommended by McConnell:













Perform a full design and/or code review on the broken code.
Discard the section of code and redesign/recode it from scratch.
Discard the entire program and redesign/recode it from scratch.
Compile code with full debugging information.
Use a unit test harness to test the new code in isolation (this is discussed
later in this chapter).
Create an automated test suite and let it run all night.
Manually step through a large loop in the debugger until you reach the error
condition.
Instrument the code with print, display, or other logging statements to more
clearly see what the program is doing.
Compile the code with a different compiler.
Compile and run the program in a different environment.
Link or run the code against special libraries or execution environments that
produce warnings when code is used incorrectly.
Replicate the end-user’s full machine configuration.
Integrate new code in small pieces, fully testing each piece as it is integrated.
10
The Process of Debugging Code
(cont)





Sometimes an easy way to figure out what a program is
doing is by examining the data it is operating on (Hunt &
Thomas, 2000).
This can be done quite simply by displaying or printing the
data as the program runs.
Some debuggers offer useful tools to actually visualize
data and the interrelationships that exist among that data.
A technique called trace debugging can also be used to
verify the flow of control in a program. This involves
printing or displaying messages while a program executes.
Examples of such messages are:





“Entering procedure CalculateInterest”
“Exiting procedure CalculateInterest”
“Entering loop to calculate average”
“Loop to calculate average completed”
“In CalculateInterest: variable interest is 12862.45”
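Messages like these can be produced with simple print statements or with a logging library. A minimal sketch in Python (the slides' examples use Visual Basic; the procedure name and figures here are illustrative only):

```python
import logging

# During development, enable DEBUG-level trace messages.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

def calculate_interest(principal, rate):
    logging.debug("Entering procedure calculate_interest")
    interest = principal * rate
    logging.debug("In calculate_interest: variable interest is %.2f", interest)
    logging.debug("Exiting procedure calculate_interest")
    return interest

calculate_interest(107187.08, 0.12)
```

An advantage of a logging library over bare print statements is that the trace output can be silenced in production by raising the logging level, without touching the code.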
11
Using Assertions






An assertion is code that a program uses to check itself.
Useful for checking assumptions, e.g. if a parameter should never have a certain
value, use an assertion to check the assumption.
In Visual Basic, the Debug.Assert statement checks a condition and, if it is false, the
assertion fails.
Assertions test conditions that should hold true if the code is correct.
Because assertions check for conditions that should never happen (unexpected
conditions), do not use assertions in place of real error handling.
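The same idea can be sketched in Python, whose assert statement plays the role of Debug.Assert (the function below is invented for illustration):

```python
def monthly_payment(principal, months):
    # months should never be zero or negative if the calling code is
    # correct; the assertion documents and checks that assumption.
    assert months > 0, "months must be positive"
    return principal / months

print(monthly_payment(1200.0, 12))  # 100.0
```

If a caller ever passes months = 0, the assertion fails immediately at the point of the bad assumption, rather than surfacing later as a mysterious division error.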
12
Using Assertions (cont)

Guidelines for using assertions (McConnell, 2004):

Use error-handling code for conditions you
expect to occur; use assertions for conditions
that should never occur

Avoid putting executable code into assertions

Use assertions to document and verify
preconditions and postconditions:


Preconditions  properties that “client code” promises will
be true before it calls a routine or instantiates an object.
Postconditions  properties that the routine or class
promises will be true once it has executed
13
Using Assertions (cont)

Guidelines for using assertions (McConnell, 2004) (cont):

Use assertions to document and verify preconditions and
postconditions (cont):

14
Using Assertions (cont)

Guidelines for using assertions (McConnell,
2004) (cont):

Use assertions to document and verify
preconditions and postconditions (cont):
 If the values for the function's parameters
come from an external source, invalid values
should be checked and handled by error-handling code and not assertions.
 If values come from a trusted, internal source,
and it is assumed that these values will be
within their valid ranges, assertions are
appropriate.
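The distinction can be sketched as follows (Python; the discount functions and the reject-with-None policy are invented for illustration):

```python
def set_discount_from_user(raw_value):
    # Value from an external source: validate with error handling,
    # not assertions.
    try:
        discount = float(raw_value)
    except ValueError:
        return None  # hypothetical policy: reject bad input gracefully
    if not 0.0 <= discount <= 1.0:
        return None
    return apply_discount(discount)

def apply_discount(discount):
    # Value from a trusted internal source: an assertion documents
    # the assumption that the range has already been checked.
    assert 0.0 <= discount <= 1.0, "discount out of range"
    return 100.0 * (1.0 - discount)

print(set_discount_from_user("0.25"))  # 75.0
print(set_discount_from_user("abc"))   # None
```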
15
Principles and Practices for
Debugging

Fix the problem, not the blame:


Don't waste energy by trying to fix the blame on someone.
Don’t just fix the symptoms:


Always search for the root cause of a problem – don't just fix its
symptoms.
E.g., consider a simple word processor that offers a spell-check feature.
Private Sub cmdSpellCheck_Click(
ByVal sender As System.Object,
ByVal e As System.EventArgs)
Handles cmdSpellCheck.Click
'Perform spell-check
DoSpellCheck()
End Sub
16
Principles and Practices for
Debugging (cont)

Don’t just fix the symptoms (cont):



Imagine when the spell-check feature is launched it fails to spell-check the
first word of the document.
If the document is spell-checked again immediately afterwards it spell-checks
the first word, as it should.
A bad “quick fix” would be to modify the code as follows:
Private Sub cmdSpellCheck_Click(
ByVal sender As System.Object,
ByVal e As System.EventArgs)
Handles cmdSpellCheck.Click
'Perform spell-check
DoSpellCheck()
DoSpellCheck()
End Sub
17
Principles and Practices for
Debugging (cont)

Don’t just fix the symptoms
(cont):



This simply fixes the symptoms, not
the root cause of the problem.
The spell-check feature obviously has a
defect that should be corrected.
Not doing so may mean that the defect
could manifest itself in other ways too,
even with such a “quick fix” in place.
18
Principles and Practices for
Debugging (cont)

The likely cause of a bug is in your code:



Do not hastily assume that the platform
environment is the cause of a bug.
It is rare to find a bug in the operating system or
the compiler, or even a third-party product or
library
Don’t assume it – prove it:


The amount of surprise you feel when you
uncover a bug is directly proportional to the
amount of trust and faith you have in the code.
Prove that code works, in the current context,
with the current data, and with the current
boundary conditions
19
Principles and Practices for
Debugging (cont)

Understand your code and the problem:

Do not code by trial and error

Save the original source code

Don’t make random changes:

Be confident that the change will resolve the defect.

Make one change at a time

Debug in small increments:

It is easier to debug smaller segments of code than large ones

Don’t waste time searching for a problem when the
compiler can do it for you

Don’t ignore a compiler’s warning messages
20
Principles and Practices for
Debugging (cont)

Don’t use a debugger as a crutch:



A debugger is not a substitute for good thinking
Debugging can be used as an opportunity to
learn more about a program, its defects, its code
quality, and the approach to problem-solving.
Don’t apply production constraints to the
development version:


It is common for developers to impose the
limitations of the production software on the
development version.
Developers should be willing to trade speed and
resource usage during development in exchange
for built-in tools that can streamline development
21
Principles and Practices for
Debugging (cont)

Use standard compile-time settings
across a project:


This will avoid the unnecessary emergence
of errors and warnings during integration.
Make abnormal behaviour blatantly
obvious during development:

McConnell (2004) sums this up by stating:
“Sometimes the best defense is a good
offense. Fail hard during development so
that you can fail softer during production.”
22
Principles and Practices for
Debugging (cont)

Consider removing debugging aids from
production code:




Debugging code (like assertions) can impact code
size and speed negatively.
When this is prohibitive (e.g., code for
commercial use), debugging code should be
removed.
Remove only debugging code that noticeably
impacts on performance.
Consider writing debugging information to
a log file:

A log file can preserve information that might not
be displayed as a result of a fatal error.
23
Exercising Your Software


Software testing is conducted to detect errors and to ensure
that the software performs as it should.
Broken down into two broad categories:

Black box testing:




When a tester is only interested in what the code does, not how it does
it.
The code is tested without knowledge of its internal structure.
A black box test might check whether an invoice that is printed by an
application is the correct one for the customer, or that the correct
prices appear on the invoice.
White box testing:


White box testing is the opposite of black box testing; the tester is
aware of the inner workings of the code being tested.
A white box test might check for customer numbers that are out of
range, or that an invoice is printed correctly for zero invoice items or
for an unusually large number of invoice items.
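A small sketch of the two styles applied to the same routine (Python; the invoice function and its 1000-item limit are invented for illustration):

```python
def invoice_total(prices):
    # Hypothetical internal rule: at most 1000 line items per invoice.
    if len(prices) > 1000:
        raise ValueError("too many invoice items")
    return round(sum(prices), 2)

# Black-box test: only the externally observable behaviour matters.
assert invoice_total([19.99, 5.01]) == 25.00

# White-box tests: written with knowledge of the internal limit and
# of the zero-items path through the code.
assert invoice_total([]) == 0
try:
    invoice_total([1.0] * 1001)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
print("all invoice tests passed")
```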
24
Exercising Your Software (cont)

Test results have further value in
that they can be used to:

Assess reliability

Guide corrections

Reveal common errors
25
Exercising Your Software (cont)







All code you write should be built in such a way that it can
easily be tested.
Orthogonal systems are easier to test because more of
the testing can be done at the module level.
When you fix bugs, you should use the opportunity to
assess orthogonality.
When you encounter a bug, ask yourself how localized the
fix is. If you simply need to change one module, that’s
good.
When you make a change, does the change fix everything,
or do other problems appear?
Each piece of code must be thoroughly tested to verify its
behaviour before trying to integrate the pieces to form the
larger system.
Testing should take 8 to 25 percent of the total project
time, excluding debugging time
26
Exercising Your Software (cont)


E.G>  imagine that you need to create
a module of code that handles the
encryption and decryption of text strings
up to 254 characters in length (for strings
longer than 254 characters an error must
be thrown).
The module must contain only two
functions:

Encrypt and Decrypt. Both functions must
receive a text string that must be encrypted or
decrypted as well as an “offset” value to adjust
the encryption/decryption mechanism.
27
Exercising Your Software (cont)

An extract of the code you might write is shown below:
Const OFFSET = 8
Public Function Encrypt(
ByVal strText As String,
ByVal bytOffset As Byte) As String
:
End Function
Public Function Decrypt(
ByVal strText As String,
ByVal bytOffset As Byte) As String
:
End Function
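A hedged sketch of such a module in Python (the slides use Visual Basic, and the bodies above are deliberately elided; the simple character-offset scheme below is inferred from the sample ciphertext on a later slide and may differ from the intended implementation):

```python
MAX_LENGTH = 254

def encrypt(text, offset):
    # Illustrative scheme only: shift each character code by `offset`,
    # wrapping within an 8-bit character set.
    if len(text) > MAX_LENGTH:
        raise ValueError("string exceeds 254 characters")
    return "".join(chr((ord(ch) + offset) % 256) for ch in text)

def decrypt(text, offset):
    if len(text) > MAX_LENGTH:
        raise ValueError("string exceeds 254 characters")
    return "".join(chr((ord(ch) - offset) % 256) for ch in text)

print(encrypt(" is", 8))                 # '(q{'
print(decrypt(encrypt("Hello", 8), 8))   # 'Hello'
```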
28
Exercising Your Software (cont)

Code to test the module must be written:
Public Sub TestEncryptionValue(ByVal strText As String, Optional ByVal strExpected As String = "")
Dim strEncrypted As String
Dim strDecrypted As String
Try
'Might throw an exception, e.g. if string exceeds 254 chars
strEncrypted = Encrypt(strText, OFFSET)
strDecrypted = Decrypt(strEncrypted, OFFSET)
Catch ex As Exception
If strText.Length > 254 Then
'We are expecting the exception
Return
Else
Debug.Assert(False, ex.Message)
End If
End Try
Debug.Assert(strDecrypted = strText, "Decrypted value does not match original value")
If strExpected <> "" Then
Debug.Assert(strEncrypted = strExpected,
"Encrypted value does not match expected value")
End If
End Sub
29
Exercising Your Software (cont)
Public Sub TestEncryption()
’Test empty string
TestEncryptionValue("")
’Test one-char string
TestEncryptionValue("A")
’Test max length string (254 chars)
TestEncryptionValue(Space(254))
’Test string that exceeds max length
TestEncryptionValue(Space(255))
’Test first chars in char set
TestEncryptionValue(Chr(0) & Chr(1) & Chr(2))
’Test last chars in char set
TestEncryptionValue(Chr(253) & Chr(254) &
Chr(255))
’General tests
TestEncryptionValue("Memory is a thing we forget with",
"Umuwz (q{(i(|pqvo( m(nwzom|( q|p")
:
End Sub
30
Exercising Your Software (cont)











The unit test establishes an artificial environment and then invokes the
routines in the module being tested.
It tests for cases of empty strings, maximum length strings, strings that
exceed the maximum length, boundary conditions, etc.
The results returned by these tests are checked.
A unit should pass all such tests before being wired to other units.
When the unit is integrated with other units the developers can be
confident that the unit will work as expected.
The same unit tests can later be used to test the system as a whole.
Unit-testing code must be conveniently located.
The test code can also be embedded inside the module itself.
Common test operations can even be included in a base class and more
specific tests can be included in the relevant subclasses.
The tests can possibly be called automatically upon program execution
during development, or possibly by a test harness.
Once development and testing are complete, the tests can be excluded from
the production version by, e.g., using compiler directives.
31
Types of Software Tests

Unit testing:






Integration testing:




A unit test is a test carried out on a module, in isolation from the rest of the system, to exercise
the module.
A unit might be a single cohesive function or procedure or a small piece of code that can be
separately compiled, or some other segment of code that can be tested.
The role of the unit test is to check that the module provides the functionality that it has been
designed to offer and to check boundary conditions.
The results returned by the module are compared against known values or against results from
earlier tests.
All modules must pass their own unit tests before you can carry out integration testing
An integration test checks that individual modules, each tested in isolation,
work together correctly.
The modules must first pass the unit testing stage before integration testing is carried out.
Integration is a continuous process; therefore the code being tested becomes progressively larger
as each module is integrated and tested, until finally the whole system is complete and tested.
Validation and verification:


Software needs to be checked to see that it is being built correctly and that it meets the
specifications defined.
The requirements specification document and the design specifications are typical documents used
to conduct validation and verification
32
Types of Software Tests (cont)

Resource exhaustion, errors, and recovery:




Performance testing:



Software must be checked to ensure that it will run under real-world conditions
where resources are exhaustible, e.g. disk space and memory.
The software should also work well with the user's equipment, e.g. their
display size.
The system should also be tested to see how it responds to error conditions,
whether error conditions can be recovered from, and whether it fails
gracefully.
With performance testing (or stress testing) the software is tested
under load to ensure that it meets the performance requirements.
The throughput of the application is measured and areas that might lead to
degrading performance are identified.
Usability testing:

Usability testing tests whether an interactive system is actually usable, that
is, how well it supports the performance of activities by users.
33
Types of Software Tests (cont)




Test results can be compared to previous
results of the same tests to make sure
that nothing has “gone wrong” since the
previous test.
This is known as regression testing.
Regression tests uncover defects in
software which did not exist when the
same set of tests were previously
executed.
All the above tests can be run as
regression tests. Regression testing can
ideally be run in an automated manner
34
Types of Software Tests (cont)

Other types of tests may be appropriate, depending on how essential
it is to minimize the chance of software failure, and how much testing
time and effort the client is willing to pay for.

Acceptance testing by the user or the testing team:


Beta testing:


This is conducted when the software is delivered. A formal set of tests is run to
determine whether or not a system meets the client’s acceptance criteria.
Beta testing is conducted by teams of outsiders who represent the type of users
likely to use the software being developed. Beta tests are conducted before the
software is finally shipped.
Release testing:

Release tests check that the software is complete, the package contains all the
required disks, the required files are present, the correct versions of the files are
being used, the files are virus-free, and the correct set of documentation is
included. The tester checks that the software does what it claims to do by
comparing it with the requirements documentation, the marketing material and the
user documentation.
35
Principles and Practices for Testing

Create test plans:


Conduct test plan and test case inspections:


Large projects  create test plans to describe the sequence and
nature of the testing to be performed E.G.  unit testing, new function
testing, regression testing, stress or performance testing, integration
testing, system testing, usability testing, and external beta testing
Test cases often contain more errors than the applications they were
built to test
Use test specialists:



Utilize professional testing personnel to take over testing after the
developers have completed code inspections and unit tests.
These personnel might carry out regression testing, performance
testing, capacity or load testing, integration testing, lab testing using
special equipment, human factor testing, and system testing.
Professional testing personnel can also coordinate external beta tests
and work with clients during acceptance testing
36
Principles and Practices for Testing
(cont)

Perform globalisation testing:


For commercial software used in other
countries, e.g. test that the software is
presented in the relevant national
language correctly
Perform multi-platform testing:

If creating a software product that will
operate on several platforms, test each
platform-specific version thoroughly
37
Principles and Practices for Testing
(cont)

More technical guidelines:

Design to test:



When you design a module you should also
design the code to test it.
Consider boundary conditions and other issues
that may not have occurred to you otherwise.
Make unit testing simple:


Unit-testing code should be easily accessible and
usable.
E.G.  include the unit tests inside the modules
to which they apply
38
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Start testing early:

Don’t write test code after completing the production code.
Start testing as soon as you have code.

The earlier a bug is found the cheaper it will be to fix.

Test cases can even be written before writing any code.

Test-first programming → one of the most beneficial recent
software practices because it offers these advantages:

Detects defects earlier and allows them to be corrected easily
Forces you to think about the requirements and design before
writing code → helps produce better code
Exposes requirements problems sooner → it is difficult to write a test
case for poor requirements
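The test-first rhythm can be sketched as follows (Python; the leap-year example is invented for illustration):

```python
# Step 1: write the test first. It fails until the function below exists
# and behaves correctly, which forces the requirements to be pinned down
# (century years are not leap years unless divisible by 400).
def test_is_leap_year():
    assert is_leap_year(2000)
    assert not is_leap_year(1900)
    assert is_leap_year(2024)
    assert not is_leap_year(2023)

# Step 2: write just enough implementation to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_is_leap_year()
print("all tests passed")
```

Writing the 1900/2000 cases before any code exists is precisely where the requirements question ("what counts as a leap year?") gets forced into the open.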
39
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Test code often:


Retest the code after changes:


Testing is ideally done together with daily builds, both of
which can be entirely automated
Use regression testing to retest changed code, making use of
at least the same tests used to test the code before
Test subcomponents first:



If a module depends on one or more other modules, test the
submodule(s) first to ensure that they work as expected
Once this has been established, the main module can be
tested
Errors that arise during this test can then be attributed to the
main module or interactions with it
40
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Use good test data:
Use test data that will exercise your code
thoroughly
 This data cannot simply be a set of real-world data
 It must also include artificial data, that is,
data specially created to test every aspect
of the code

41
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Use good test data (cont):

Typical aspects that test data should test include the
following:





Boundary conditions:
 Any off-by-one errors that may exist must be uncovered
Each algorithm used:
 All algorithms should work as expected
Each line of code:
 Each line of code should be tested with at least one test
case
Each data-flow path:
 Each data flow path should be tested with at least one
test case. Thus, test every pathway through selections.
Each loop:
 Each loop must be tested for correct termination.
42
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Use good test data (cont):

Typical aspects that test data should test include the following (cont):


How bad data is handled:

Code should be tested with bad data, such as too little (or missing)
data, too much data, invalid data, data of the wrong size and
uninitialized data

E.G.  would the software crash if the user entered alphabetic
characters into a textbox that requires a numeric value?

Or, what would happen if the user entered an age value of 444?
How good data is handled:

McConnell (2004) recommends that code should be tested with
expected values (nominal cases), minimum required data, e.g. an
empty spreadsheet, maximum possible data, e.g. a spreadsheet of
maximum size, and old data, e.g. if a new routine replaces an old
one it should produce the same results with old data as the old routine
did
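Both points can be sketched with a small input validator (Python; the parse_age function and its 0–130 range policy are invented for illustration):

```python
def parse_age(raw):
    # Bad data: missing, non-numeric, or implausible values are rejected.
    if raw is None or raw == "":
        raise ValueError("missing data")
    if not raw.isdigit():
        raise ValueError("age must be numeric")
    age = int(raw)
    if not 0 <= age <= 130:
        raise ValueError("age out of plausible range")  # rejects e.g. 444
    return age

# Good data: a nominal case passes through unchanged.
assert parse_age("44") == 44

# Bad data: every rejection path is exercised.
for bad in [None, "", "abc", "444"]:
    try:
        parse_age(bad)
        raise AssertionError("bad data accepted: %r" % (bad,))
    except ValueError:
        pass
print("all parse_age tests passed")
```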
43
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Aim to break the code:




Formalize ad hoc tests:


Try doing everything you can to make the software crash
Don’t just test whether the code works - test for ways to
actually break the code
Try doing the wrong thing – users always will
If you create ad hoc tests during debugging, e.g. checking
the contents of a variable using a display statement, or
entering a piece of code interactively in the debugger,
add them to the existing unit tests so that you can run them
again later
Maintain a “common error” checklist:

Use this to check the code you write
44
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Keep tests up to date:


The tests must evolve as the system is developed, becoming
increasingly thorough. If a test suite fails to catch a bug, a new test must
be added immediately to ensure that the bug is trapped in future.
Test the tests:

Test your tests to make sure that they work properly. E.g., bugs
should be deliberately introduced to make sure the tests catch them. The
following can reduce the number of errors in your tests:




Check the tests: Walkthroughs and inspections are useful.
Plan the tests carefully: Planning for testing should begin at the
requirements stage or as soon as possible thereafter.
Keep your test cases: Save your test cases for regression testing and for
testing version 2 of the software.
Plug unit tests into a test framework: Use a test framework to store your
tests.
45
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Use a testing harness:
A testing harness (test framework)
can be used to select and run tests and to
analyze test results
 Such a harness may be GUI driven, may
be in the same language as the project,
may be a set of classes that provide the
tests, or may even be comprised of a set
of scripts (Hunt & Thomas, 2000)

46
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Use a testing harness (cont):

Test harnesses should include the
following capabilities:




A standard way to specify setup and cleanup.
A method for selecting individual tests or all
available tests.
A means of analyzing output for expected (or
unexpected) results.
A standardized form of failure reporting.
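The four capabilities can be sketched in a few lines (Python; this toy harness is invented for illustration and is no substitute for an established framework):

```python
class Harness:
    def __init__(self):
        self.tests = {}
        self.failures = []

    def register(self, name, func):
        self.tests[name] = func

    def run(self, selected=None):
        # Selection: run the named tests, or all available tests.
        names = selected or sorted(self.tests)
        for name in names:
            fixture = {"db": "fake-connection"}   # standard setup
            try:
                self.tests[name](fixture)          # analyze via assertions
            except AssertionError as exc:
                # Standardized failure report: test name plus message.
                self.failures.append((name, str(exc)))
            finally:
                fixture.clear()                    # standard cleanup
        return self.failures

def always_fails(fixture):
    assert False, "boom"

h = Harness()
h.register("passes", lambda fixture: None)
h.register("fails", always_fails)
print(h.run())  # [('fails', 'boom')]
```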
47
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Provide a test window:
 Provide views into the internal state of a
module in the production environment to
verify that the application is running correctly.
 Three mechanisms for doing this:



Generating log files that contain trace messages.
Providing a diagnostic control window that
appears when a hotkey sequence is pressed.
Including a built-in Web server so that, by using a
Web browser, the application can provide details
of its internal status and log entries, and provide a
debug control panel.
48
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Use automated test tools:


Large projects  use automated test library tools to keep track of test cases, weed
out redundant tests, and link tests to applications as needed (Jones, 2000).
Use the available tools:

Some useful tools (McConnell, 2004):



Scaffolding to test individual classes:

Software scaffolding is built to make it easy to exercise code.

One type of scaffolding is a dummy (“mock”) class or routine used by another
class being tested.

It might return control immediately without performing an action, get return
values from interactive input, return the same value(s) regardless of its input,
etc…

This is also called stub programming.

Another type of scaffolding is a fake routine (“driver”) that calls the real
routine being tested.

It might call the object with a fixed set of inputs, call the object with inputs
obtained interactively, call the object with arguments read from a file, etc…
File comparators:

Useful to compare actual output with expected output
Test-data generators:

Random-data generators can generate unusual combinations of test data and
can exercise code thoroughly
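A minimal sketch of both kinds of scaffolding (Python; the price-service and basket names are invented for illustration):

```python
# Stub ("mock"): stands in for a real dependency of the code under test.
class StubPriceService:
    # Returns the same value regardless of its input.
    def price_of(self, product_id):
        return 10.0

# Code under test.
class Basket:
    def __init__(self, price_service):
        self.price_service = price_service
        self.items = []

    def add(self, product_id):
        self.items.append(product_id)

    def total(self):
        return sum(self.price_service.price_of(p) for p in self.items)

# Driver: a fake routine that calls the real code with a fixed set of inputs.
def drive_basket():
    basket = Basket(StubPriceService())
    for product in ["A1", "B2", "C3"]:
        basket.add(product)
    return basket.total()

print(drive_basket())  # 30.0
```

The stub removes the real pricing dependency so Basket can be exercised in isolation; the driver supplies the fixed inputs that exercise it.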
49
Principles and Practices for Testing
(cont)

Technical guidelines (cont):

Use the available tools:




Coverage monitors:
Keep track of what parts of the code have been exercised and
what parts have not
 Coverage analysis is useful to determine whether a set of test
cases fully exercises the code
Data recorder/logging:
 Monitors the execution of a program and records state
information
 This can be used in the event of a failure to try to ascertain the
cause of the failure
Symbolic debuggers:
 Step through code line by line as it executes.
System perturbers:


Include: memory filling tools (tools that can fill memory with arbitrary
values to uncover variables that have not been initialized), memory
shaking tools (tools that can rearrange memory during execution to
identify code that depends on data being in absolute rather than
relative locations), memory failing tools (tools that simulate lowmemory conditions), and memory-access checking tools (tools that
detect invalid memory access operations)
Error databases:
 A database of errors that have been reported is useful for
checking for recurring errors, tracking the rate at which errors are
detected and corrected, etc…
50
Conclusion





You have to test every facet of your program, no
matter how simple the program is.
Do not be the programmer who, through poor design
and testing, panics and adopts a hit-and-miss approach
to testing and debugging when a multitude of errors
begins to creep out of the woodwork towards the later
stages of development.
Attention must be paid to not only good software
design and construction, but also to building testability
into the code.
Maximizing orthogonality and building and testing a
program one unit at a time will prevent monotony and
wasted time later in testing and debugging.
By building correctness into the program, step by step,
any errors that may arise will be small and far easier to
correct.
51