What is Software Engineering and Why is it so Different

What Is Software Requirements Engineering and Why Is It So Hard?
A Very Brief Look at a (Very) Few of the Many Issues and Some (But Even Fewer) of the Answers
(Some of which I’m pretty sure of; some I’m less so)
Requirements: The Most Critical and Least
Well Understood Phase in Software Engineering
• Software errors found in field operations can be up to
several hundred times more expensive to fix than if they
were found in the requirements phase
• Requirements errors are responsible for a disproportionate
share of fielded software problems
• Published results range from over 30% up to over 60%
• For safety critical systems, requirements errors can be a
lot more distressing than merely $$$
The Background and Motivation
• Current software engineering life cycle models and consensus
documentation standards are inadequate guides to actually doing
requirements engineering
[Diagram: the standard waterfall life cycle (Requirements analysis → Design → Code (Implementation) → Test → Maintenance), with the requirements phase annotated by consensus standards such as ANSI/IEEE Std 830 and MIL-STD-2167]
• Newer OOA techniques such as UML tend to focus on
requirements elicitation and high level information portrayal
Some of the Key Issues With Software
Requirements Engineering
• Little or no agreement as to:
  • Are there really different “types” of requirements? If so, what are they?
  • What is “a” (single) requirement?
  • How much information is really required to specify “a” requirement?
  • How many different levels of abstraction are possible? Useful?
    • How many are appropriate for a given project? A given requirement?
    • What downstream activities (design or requirements analyses) are dependent on which levels of abstraction?
• Poor definitions for some (not all) of the key quality factors for requirements specifications
  • Completeness?
  • Consistency? (Of what with what?)
  • Traceability? (Of what to what for what purpose?)
Example: Are These All Well Defined,
Distinguishable Types of Requirements?
• Functional requirements
• Performance requirements
• High level requirements
• Detailed requirements
• Derived requirements
• Interface requirements
• Output requirements
• Input requirements
• User requirements
• Design requirements
• Operational requirements
• Principal requirements
• Parasitic requirements
• Behavioral requirements

• And what’s really the difference between a requirement and a constraint, anyway?
Functional Requirements:
The Starting Point
• Generally, no two engineers will ever totally agree on exactly
how many types of requirements there are
• But they probably will both agree that “functional” requirements
need to be at the core of the requirements engineering process
• According to Webster’s, function is “the action for which a
person or thing is specially fitted or used or for which a thing
exists: Purpose”
• All software of whatever type always has but a single purpose:
provide acceptable outputs
• Functional requirements are thus statements about the acceptable,
observable characteristics of outputs
• Acceptable: What good are outputs with unacceptable characteristics?
• Observable: What good is it to document unobservable characteristics?
Function vs Performance:
A Misleading Distinction
• What is observable in an output?
• Its value (bit pattern) and the time of its initial availability for observation,
nothing else
• Function vs performance does not split cleanly on value vs time
• Value accuracy (e.g., ±¼ mile) is often, but not universally, considered a
performance requirement but does not involve time
• An output that comes out at the wrong time is not fulfilling the purpose of the
software, hence the software is, at least for that instant, non-functional
• Hence when all the outputs do come out at the right time the software can be
said to be (at least partially) functional
• Thus statements about the acceptable observation times for outputs help
determine whether or not the software is functional, so then turning around
and stating that descriptions of acceptable observation times are not
functional requirements (but performance) seems to be asking for confusion
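To make the point concrete that value and time are the only observable characteristics, here is a minimal Python sketch. Every name in it (Observation, OutputRequirement, meets) and every number (the quarter-mile tolerance, the half-second deadline) is a hypothetical illustration, not taken from any real specification.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Everything observable about one output: its value and when it appeared."""
    value: float   # e.g., reported range in miles (the decoded bit pattern)
    time: float    # seconds since the triggering stimulus

@dataclass
class OutputRequirement:
    """An acceptable-behavior statement about an output: bounds on value and time."""
    reference_value: float   # reference against which accuracy is judged
    value_tolerance: float   # e.g., 0.25 miles
    deadline: float          # latest acceptable observation time, in seconds

def meets(req: OutputRequirement, obs: Observation) -> bool:
    # Both checks are statements about observable behavior; neither is
    # inherently "functional" or "performance" in any clean sense.
    value_ok = abs(obs.value - req.reference_value) <= req.value_tolerance
    time_ok = obs.time <= req.deadline
    return value_ok and time_ok

# Example: a range output that is accurate enough but too late fails the requirement.
req = OutputRequirement(reference_value=10.0, value_tolerance=0.25, deadline=0.5)
print(meets(req, Observation(value=10.1, time=0.4)))  # True
print(meets(req, Observation(value=10.1, time=0.9)))  # False: right value, wrong time
```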
The Real Point
• Better perhaps to speak of acceptable behavioral characteristics
rather than getting overly hung up on the distinction (if any)
between functional and performance requirements
• Most other common “types” of requirements appear to be either:
• Waypoints (possibly fictitious) along the process of stepwise refinement
of abstraction in the development of behavioral characteristics, or
• Behavioral characteristics derived from other characteristics via various
(too often implicit and imperfectly understood) closure criteria, or
• Constraints on the development process or the design space
• The real need is to understand all the types of, and relationships
among, the information that must be developed, specified, and
analyzed prior to (various stages of) design
Is Behavioral Analysis For Software Really
Different Than For Other Types of
Engineered Artifacts?
Software artifacts:
• Stimulus/response behavior is the only type of functional requirement for software in any type of system
• The number of discrete behavioral cases to be individually engineered is typically amazingly large

Other artifacts:
• Always include and are (usually? always?) driven by functional requirements that are not simply stimulus-response behavior†
• Many (most? all?) characteristics are specifiable as a (relatively) small set of numbers† or set of equations and some boundary conditions

† E.g., the required payload of an airplane
The Significance of Behavioral Complexity
• The domain-dependent set of conceptual abstractions by which
software requirements are initially expressed is much larger and
more diverse than the largely standardized basic conceptual
vocabulary of other engineering projects
• “Design a bridge to span the Strait of Gibraltar that will carry 6
lanes of highway traffic in each direction”
vs
• “Design an air traffic control system for the United States that will …
will … what?”
• As a result, the initial requirements are usually much less well
understood at the beginning of a new software project than for
other types of engineered artifacts
• Often, the very vocabulary used to characterize the behavior of a
large software-intensive system may not even be known initially
The Significance of Behavioral Complexity
(cont’d)
• For other engineered artifacts, figuring out the requirements is
often not even really considered part of the engineering process,
or at least not one which requires special tools, techniques and
expertise (separate from design expertise) – the design of a
bridge is hard; understanding its requirements much less so
• For software, the emphasis is reversed:
• A much larger portion of the overall project’s time and effort is spent in
the “softest” of engineering phases (requirements specification) – trying to
understand and document the requirements is hard, design much less so
• Note: Good design is still not trivial, but compliant and theoretically
workable designs are almost always a dime a dozen; it’s immaturity in our
measures of effectiveness for designs that makes coming up with good
ones hard, not difficulty synthesizing something that will (probably) work
My Conclusion?
• Software requirements engineering needs a much more detailed process model:
  1. Initial outputs, boundaries, and constraints
  2. Output characteristics (and input references)
  3. Standard robustness
  4. Logical completeness and consistency
  5. Output hazard analyses
  (with each step producing additional derived outputs that feed back into the process)
Initial Outputs, Boundaries, Safety Requirements and Constraints
[Process model step 1, originally shown as a diagram:]
1. Initial outputs, boundaries, and constraints
  1.1 Initial outputs (feeding a preliminary hazard analysis)
    1.1.1 Principal outputs
    1.1.2 Initial derived outputs
  1.2 Black-box boundary identification
  1.3 Constraints
[Feeds forward to step 2 and to 4.3.2]
Output Characteristics
And the Identification and Characterization of the Inputs Necessary to Specify Them
[Process model step 2, originally shown as a diagram; fed by 1.1 Initial outputs and 1.2 Black-box boundary, and producing additional derived outputs:]
2. Output characteristics and their referenced inputs (and then their characteristics, and eventually, more outputs)
  2.1 Output fields
    2.1.1 Delineation and classification
    2.1.2 Reference definition, a.k.a. initial algorithm definition
  2.2 Output timing
    2.2.1 Basic abstraction(s)
    2.2.2 Proximate triggers
  2.3 Preconditions (a.k.a. States)
[Annotations: stepwise refinement is appropriate here; coupling and cohesion analysis leads to initial modularization (top level design); feeds forward to steps 3, 4, & 5]
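One way to read step 2 is as a checklist of the information that must exist for each output before design can proceed. The following sketch records that information as a Python data structure; all class names, field names, and the example entry are hypothetical, a hedged illustration of the decomposition above rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutputField:
    """2.1 Output fields: delineation/classification plus a reference definition."""
    name: str
    classification: str          # 2.1.1, e.g. "exact" or "approximate"
    reference_definition: str    # 2.1.2, a.k.a. initial algorithm definition

@dataclass
class OutputTiming:
    """2.2 Output timing: the basic abstraction and its proximate triggers."""
    basic_abstraction: str                                        # 2.2.1
    proximate_triggers: List[str] = field(default_factory=list)   # 2.2.2

@dataclass
class OutputCharacteristics:
    """Step 2: what must be known about one output and its referenced inputs."""
    output_name: str
    fields: List[OutputField]
    timing: OutputTiming
    preconditions: List[str]      # 2.3 Preconditions (a.k.a. states)
    referenced_inputs: List[str]  # inputs the characteristics refer to

# Hypothetical example entry for an aircraft-range output:
ac_range = OutputCharacteristics(
    output_name="aircraft range",
    fields=[OutputField("range_miles", "approximate",
                        "reference algorithm R1 (see spec appendix)")],
    timing=OutputTiming("event-driven", ["new position report received"]),
    preconditions=["track established"],
    referenced_inputs=["position reports of type Z"],
)
print(ac_range.output_name, len(ac_range.fields))
```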
Where do Inputs Fit in this Picture?
• In the requirements phase, inputs are references used to
help specify how the software should behave
• Output X must appear within 0.25 seconds
after the occurrence of input Z
• The output value of X must be within ±½ mile
of the average of the last two position
inputs of type Z
• The possibility of stepwise refinement of some of these references has caused some confusion in the past
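As a rough sketch of how such referenced-input requirements can be checked, the fragment below evaluates the two sample statements above against hypothetical observations; the function names and the 0.25-second and half-mile figures are illustrative only, not a real specification.

```python
from typing import Tuple

def check_timing(z_input_time: float, x_output_time: float,
                 max_delay: float = 0.25) -> bool:
    """Output X must appear within max_delay seconds after input Z occurs."""
    return 0.0 <= (x_output_time - z_input_time) <= max_delay

def check_accuracy(last_two_z_positions: Tuple[float, float],
                   x_output_value: float, tolerance: float = 0.5) -> bool:
    """Output value of X must be within +/- tolerance miles of the average
    of the last two position inputs of type Z."""
    reference = sum(last_two_z_positions) / 2.0
    return abs(x_output_value - reference) <= tolerance

# Example: input Z arrives at t = 10.00 s, output X appears at t = 10.20 s;
# the last two Z positions were 12.0 and 12.4 miles, and X reports 12.1 miles.
print(check_timing(10.00, 10.20))           # True: within 0.25 s
print(check_accuracy((12.0, 12.4), 12.1))   # True: within 0.5 mi of the 12.2 average
```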
Abstraction and Stepwise Refinement
• There is no easy mapping of levels of abstraction to “stages” of
systems or requirements engineering or standard engineering
specification levels (if such really existed, which they don’t,
despite many managers’ religious belief that they do)
• The system shall be cost effective
• The survival likelihood over a 2 hour mission
shall exceed 98%
• The individual target Pk shall exceed 99%
• The single shot Pk shall exceed 95%
• The output a/c range shall be sufficiently
accurate to permit intercept guidance to acquire
the target 98% of the time
• The output a/c range shall be accurate to within
±½ mile of the actual range of the actual aircraft
at the time the range is output
• The output a/c range shall be accurate to within
±¼ mile of the reference value computed by the
following reference algorithm: [20 pages of math]
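The slides do not say how these levels relate numerically; as one purely hypothetical reading, if each target may be engaged with up to two independent shots, a 95% single-shot Pk would support the 99% individual-target Pk one level up:

```python
# Purely hypothetical arithmetic: assumes up to two independent shots per
# target, which the slides do not state. Shown only to illustrate how one
# level of abstraction can be refined into the next.
single_shot_pk = 0.95
individual_target_pk = 1 - (1 - single_shot_pk) ** 2
print(individual_target_pk)   # 0.9975, which exceeds the 0.99 target-level figure
```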
Accuracy References, Algorithms, and
Requirements
• In the past, that last/lowest level of requirement was often written:
The software shall compute aircraft position using the
following algorithm:
…
• There are at least two problems with that language:
• That’s not a black box testable requirement: you can’t see what algorithm
has actually been implemented without looking inside the box
• It has also, at least in the past, led to some rather pointless arguments:
• Between systems engineering (who wrote the requirement) and software
engineering, who wanted to use “an equivalent” algorithm
• Between software engineering and perhaps overly literal minded QA types who
wanted to see the implementation exactly matching the specified requirement,
e.g., “the spec says ‘compute using X=Y+Z’ but you coded X=Z+Y”
Accuracy References and Algorithms
(cont’d)
• By noting that the algorithm itself is not actually the requirement
but only the definition of a reference against which the observable
behavior will be tested, we can have our cake and eat it too:
• Analysis, derivation, and specification of reference algorithms is still
appropriately considered a requirements engineering activity (can’t write the
requirements spec without a reference for an accuracy requirement for each
approximate field, and, for that matter, for many definitions of acceptable
values in exact fields)
• Downstream design activities may choose to implement alternative but
equivalent algorithms but the notion of equivalence is now well defined –
equivalent within the specified accuracy – and the burden of showing
equivalence is where it should be: on the design team
• After completion of refinement of abstraction and shrinkage of the black box
boundary, the reference algorithms themselves will refer to actual inputs,
whose characteristics are then a source of additional (derived) requirements
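Here is a minimal sketch of “equivalent within the specified accuracy” treated as a black-box test. The reference_range and implemented_range functions are hypothetical stand-ins (a toy distance computation in place of the spec’s 20 pages of math), and the quarter-mile tolerance is the slide’s illustrative figure.

```python
import math

def reference_range(x1, y1, x2, y2):
    """Stand-in for the reference algorithm defined in the requirements spec
    (here a toy 2-D Euclidean distance; the real spec's math would go here)."""
    return math.hypot(x2 - x1, y2 - y1)

def implemented_range(x1, y1, x2, y2):
    """Stand-in for the design team's 'equivalent' algorithm, which may use a
    different formulation as long as it stays within the specified accuracy."""
    dx, dy = x2 - x1, y2 - y1
    return (dx * dx + dy * dy) ** 0.5

def equivalent_within_accuracy(cases, tolerance=0.25):
    """Black-box equivalence: over the test cases, the implementation's output
    stays within the specified accuracy of the reference value."""
    return all(
        abs(implemented_range(*c) - reference_range(*c)) <= tolerance
        for c in cases
    )

test_cases = [(0, 0, 3, 4), (1.5, 2.0, -7.25, 6.0), (10, 10, 10, 10)]
print(equivalent_within_accuracy(test_cases))   # True for this toy pair of algorithms
```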
Problems with Abstraction References
• One contributor to some of the historic confusion in this area
(e.g., is an algorithm in a requirements specification really a
requirement ?) has been that not all outputs permit meaningful
specification at every level of abstraction
• There may not be any externally observable reference to use as an
abstract accuracy reference
• Look at the difference between:
The output aircraft range shall be accurate to within ±½
mile of the actual range of the actual aircraft at the
time the range is output
and
The output of recommended course to intercept shall be
accurate to within ±3° of … ??? Of what?
• There’s no observable phenomenon to use there as a (more
abstract) reference for that latter requirement
Algorithms and Requirements:
Conclusion
• Algorithms belong in a requirements specification (or an
appendix published at the same time); but they are not in and
of themselves requirements, they are definitions in terms of
which requirements are stated
• Such an algorithm (in the requirements specification) is also
not design – programmers are not required to use it; although
they often will, as they are unlikely to want to duplicate the
years of labor to come up with a different algorithm and
prove its equivalence
Developing Robustness – Anticipating Unexpected, Undesired, or Even Downright Impossible Events as Seen in Referenced Inputs
[Process model step 3, originally shown as a diagram; fed by step 2 and producing additional derived outputs:]
3. Standard robustness
  3.1 Input validity definition
    3.1.1 Input fields
      3.1.1.1 Delineation & classification
      3.1.1.2 Validity definition
    3.1.2 Assumptions about the environment's behavior
      3.1.2.1 State predictability
      3.1.2.2 Input timing
  3.2 Responses to invalid inputs
  3.3 Semi-final triggers and state preconditions
[Feeds forward to step 4]
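As a hedged illustration of an input validity definition (3.1) together with responses to invalid inputs (3.2), the sketch below classifies a hypothetical position-report input; the field names, limits, and responses are invented for the example.

```python
# Illustrative only: field names, limits, and responses are invented to show
# the shape of an input-validity definition plus responses to invalid inputs.

VALID_RANGE_MILES = (0.0, 250.0)   # validity definition for one input field
MAX_REPORT_AGE_S = 2.0             # assumption about the environment's input timing

def classify_position_report(range_miles: float, report_age_s: float) -> str:
    """Delineate and classify one input, returning the specified response."""
    if not (VALID_RANGE_MILES[0] <= range_miles <= VALID_RANGE_MILES[1]):
        return "reject: range out of bounds; use last valid track"   # invalid value
    if report_age_s > MAX_REPORT_AGE_S:
        return "flag: stale report; mark track as coasting"          # invalid timing
    return "accept"

print(classify_position_report(42.0, 0.5))    # accept
print(classify_position_report(999.0, 0.5))   # reject: range out of bounds; ...
print(classify_position_report(42.0, 5.0))    # flag: stale report; ...
```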
Logical Completeness and Consistency
[Process model step 4, originally shown as a diagram; fed by steps 2 and 3 and by 1.3 Constraints, and producing additional derived outputs:]
4. Logical completeness & consistency
  4.1 Individual requirements completeness
    4.1.1 Stimulus
      4.1.1.1 Events, conditions, and states
      4.1.1.2 Proximate triggers
        4.1.1.2.1 Positive
        4.1.1.2.2 Negative
    4.1.2 Response
      4.1.2.1 Uniqueness
      4.1.2.2 Timing
      4.1.2.3 Value
      4.1.2.4 STMO
  4.2 Set completeness
  4.3 Consistency
    4.3.1 Determinacy: Consistency among output requirements
    4.3.2 Consistency between requirements and safety constraints
[Feeds forward to step 5]
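As a sketch of what set completeness (4.2) and determinacy (4.3.1) checks can look like mechanically, the fragment below verifies that every stimulus in a small hypothetical state/event space has a specified response, and that no stimulus is given two conflicting responses; the requirement table and all names are invented for the example.

```python
from itertools import product

# Invented stimulus space and requirement table, just to show the two checks.
STATES = ["tracking", "coasting"]
EVENTS = ["new_report", "timeout"]

requirements = {
    ("tracking", "new_report"): "update track",
    ("tracking", "timeout"): "switch to coasting",
    ("coasting", "new_report"): "reacquire track",
    # ("coasting", "timeout") deliberately missing -> an incompleteness
}

def missing_stimuli(reqs):
    """Set completeness: every (state, event) stimulus has a specified response."""
    return [s for s in product(STATES, EVENTS) if s not in reqs]

def conflicting_stimuli(req_list):
    """Determinacy: no stimulus is given two different responses."""
    seen, conflicts = {}, []
    for stimulus, response in req_list:
        if stimulus in seen and seen[stimulus] != response:
            conflicts.append(stimulus)
        seen.setdefault(stimulus, response)
    return conflicts

print("missing stimuli:", missing_stimuli(requirements))
print("conflicting stimuli:", conflicting_stimuli(
    list(requirements.items()) + [(("tracking", "timeout"), "drop track")]))
```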
Summary of the Key Messages for Today
• Requirements engineering for software intensive artifacts is
(much) more complicated than for other types of artifacts
• Requirements engineering for software intensive artifacts is
much more complicated than most textbooks and practitioners
know or admit
• There is a great deal of cant in modern software engineering,
often all too effectively disguising the extent of our ignorance
• Which is not to say that we don’t know anything; just that you
should take everything you don’t fully understand (and a lot of
what you think you do) with a grain of salt
• Don’t accept techno-babble definitions and process descriptions:
if we don’t know, we don’t know; but deceiving ourselves about
our ignorance is no way to make progress