Report of Architecture and Product Working Group
ICM Workshop
Washington, DC
July 17, 2008
Working Group Members
• J. D. Baker, BAE Systems
• A. Winsor Brown, USC-CSSE
• Karl Brunson, Lockheed Martin
• Paul Croll, CSC
• Thomas Knott, OSD
• Art Pyster, Stevens
• Paul Russell, Aerospace
• Robert Schwenk, Army ASA(ALT)
• J. Bruce Walker, SAF/AQRE
• Lee Zhou, Boeing
Working Group Charter
• Identify and prioritize the most important issues
associated with Architecture and Products (engineering
artifacts) for the Incremental Commitment Model (ICM) and
Competitive Prototyping (CP)
• Suggest OSD initiatives and other actions to address
those issues
Definition of Architecture
• IEEE 1471: fundamental organization of a system
embodied in its components, their relationships to each
other, and to the environment, and the principles guiding
its design and evolution.
• Don Firesmith (from the SEI): The set of all the most
important, pervasive, higher-level strategic decisions,
inventions, engineering trade-offs, and assumptions
(DIETAs), and their associated rationales concerning
how the system meets its allocated and derived product
and process requirements.
The Firesmith definition is the more useful for CP and ICM.
Focus
• Because CP is conducted to reduce risk, and the ICM is a
risk-driven life cycle model, we focused on how to use
Architecture and Product to understand, manage, and
reduce risk.
• As defined by Firesmith, the architecture includes many
DIETAs and their rationale, not just the risky ones.
• For CP and anchor points in the ICM, we will focus on
risky DIETAs; i.e., DIETAs with weak rationale which, if
wrong, could have a significant negative impact on
program cost, schedule, or performance.
• Strong rationale is based on objective evidence; weak
rationale is based on assertion and opinion. (A minimal
sketch of this risk filtering follows.)
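To make the distinction concrete, here is a minimal sketch of filtering a DIETA catalog down to the risky ones. It is our illustration, not a working-group product; the class name, field names, and example entries are all invented for the example.

```python
from dataclasses import dataclass

# A hypothetical DIETA record; field names are illustrative only.
@dataclass
class Dieta:
    description: str
    rationale_strength: str  # "objective" (evidence-based) or "assertion"
    impact_if_wrong: str     # "low", "medium", or "high"

def risky(dietas):
    """Risky DIETAs in the working group's sense: weak (assertion-based)
    rationale combined with significant potential impact on program
    cost, schedule, or performance."""
    return [d for d in dietas
            if d.rationale_strength == "assertion"
            and d.impact_if_wrong in ("medium", "high")]

catalog = [
    Dieta("Use middleware X for all inter-element messaging",
          "assertion", "high"),
    Dieta("Allocate track fusion to the ground segment",
          "objective", "high"),
]
for d in risky(catalog):
    print("RISKY:", d.description)  # flags only the middleware decision
```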
System Architecting Paradigm
Three activities should happen concurrently and iteratively:
1. Systems and software engineers establish the most
critical requirements/objectives, including those for
"ilities"
2. Systems and software architects develop a system
and software architecture that the architects believe
will simultaneously support all critical
requirements/objectives
3. Engineers evaluate the architecture for how well it
really supports critical requirements/objectives,
creating substantiating evidence for the architecture
or identifying weaknesses in it
Today, it is common for any of these activities to be
shortchanged, especially the third.
Types of Evidence
1. Analytic models
2. Scenario-based execution of prototypes
3. Scenario-based execution of simulations
4. Benchmarking
5. Appeal to historical analogy (we did something similar
several times before)
6. Architecture Quality Cases (analogous to safety cases)
with claims, arguments, and evidence (see the sketch after
this list)
7. Process execution results, such as test results from
early software builds
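Item 6 borrows the safety-case structure. As a hedged illustration, not something produced by the workshop, the sketch below models a quality case as a claims-arguments-evidence tree; the class and field names are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical claims-arguments-evidence classes, by analogy with
# safety cases; names are illustrative only.
@dataclass
class Evidence:
    kind: str     # e.g. "analytic model", "prototype run", "benchmark"
    summary: str

@dataclass
class Argument:
    reasoning: str   # why the evidence supports the claim
    evidence: list   # Evidence items backing the argument

@dataclass
class Claim:
    statement: str   # the quality property being claimed
    arguments: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)

    def supported(self) -> bool:
        """Supported if backed by at least one evidenced argument, or
        decomposed entirely into supported subclaims."""
        direct = any(a.evidence for a in self.arguments)
        decomposed = bool(self.subclaims) and all(
            c.supported() for c in self.subclaims)
        return direct or decomposed

root = Claim(
    "The architecture meets its availability objective",
    arguments=[Argument("Measured failover stays within budget",
                        [Evidence("prototype run", "1.8 s failover")])])
print(root.supported())  # True: the claim has evidenced backing
```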
CP/ICM Issues and Actions (unordered)
1. Architectures expressed using DoDAF typically do not
include all of the DIETAs in sufficient detail to support
rigorous evaluation.
Action: Develop architectural representation guidance requiring DIETAs to be
developed in sufficient detail to support rigorous evaluation. For example,
DoDAF architectures typically don’t contain enough information to perform
safety case analyses or to understand the security properties of the
system.
2. The "ilities" are often understated in the
requirements/objectives, yet they are a frequent source of
problems later in system development. An architectural
view for each relevant quality characteristic is
required.
Action: Develop guidance requiring “ilities” to be sufficiently documented and
articulating what sufficient means.
Action: Research how to present sufficient information in the views to support
adequate evaluation.
Examples of Quality Characteristics
1. Efficiency
2. Completeness
3. Correctness
4. Security
5. Compatibility
6. Interoperability
7. Maintainability
8. Expandability
9. Testability
10. Portability
11. Hardware Independence
12. Software Independence
13. Installability
14. Reusability
15. Reliability
16. Error Tolerance
17. Availability
18. Usability
19. Understandability
20. Ease of Learning
21. Operability
22. Communicativeness
23. Survivability
24. Flexibility
3. Architectures often do not state the rationale (evidence)
for their DIETAs in sufficient detail to understand which
ones are particularly risky.
Action: Develop guidance requiring the rationale for DIETAs to be stated in
sufficient detail and articulating what sufficient means.
4. There is no guidance for what evidence is adequate for
any given situation or how that evidence should be
presented (analogous to the problem of knowing when
you have tested enough). How much prototyping is
“enough”? How much evidence is “enough”?
Action: Conduct research on how much prototyping and evidence is enough and
then document the research results in guidance.
Action: Engage Chris Powell on his dissertation research based upon his
assessment of ACAT 1D program architectures since July 2004.
5. Government program offices are probably not staffed
with enough people with the skills to request the correct
evidence from the supplier and to evaluate that evidence
when the supplier provides it. Government offices
should not request evidence unless they are able to
evaluate it.
Action: Consider forming an architecture assessment team (and other types of
assessment teams) at the OSD level that would be a resource available
to interested programs.
6. Since competing suppliers will have different
architectures, the architectures will have different risk
profiles and therefore require different evidence. Who
decides what evidence will be provided? The
government? The supplier? How will the government
fairly evaluate competing prototypes when presented
with different types of evidence?
7. A competition should involve regular submission of
evidence – not just once at the end of the competition.
Can suppliers “fix” problems along the way and resubmit
stronger evidence? It would seem to be in the
government's best interest to allow this, but this could be
construed as "unfair" by some competitors.
Action: Investigate the legal and contractual implications of requesting regular
submission of evidence and propose ways to enable it.
8. Creating evidence is often dependent on exercising
scenarios, which are extremely difficult to generate in
sufficient number and diversity to uncover weak
DIETAs, especially for systems of systems (SoS).
Action: Research how to generate an adequate and diverse set of scenarios,
especially for SoS, or investigate alternative approaches to developing
scenarios (one combinatorial sketch follows).
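One candidate "alternative approach" is combinatorial (all-pairs) scenario selection, which covers every pair of parameter values with far fewer scenarios than the full cross-product. The sketch below is our illustration under that assumption, not a working-group product; the scenario dimensions are invented.

```python
import itertools

# Hypothetical scenario dimensions for an SoS exercise (illustrative only).
dimensions = {
    "threat_level": ["low", "medium", "high"],
    "network_state": ["nominal", "degraded", "partitioned"],
    "element_load": ["light", "peak"],
}

def pairs_in(scenario, names):
    """All (dimension, value) pairs that one scenario covers."""
    return {((a, scenario[a]), (b, scenario[b]))
            for a, b in itertools.combinations(names, 2)}

def pairwise_scenarios(dims):
    """Greedy all-pairs selection: repeatedly keep the scenario covering
    the most still-uncovered value pairs until every pair appears once."""
    names = list(dims)
    candidates = [dict(zip(names, vals))
                  for vals in itertools.product(*dims.values())]
    uncovered = set().union(*(pairs_in(s, names) for s in candidates))
    chosen = []
    while uncovered:
        best = max(candidates,
                   key=lambda s: len(pairs_in(s, names) & uncovered))
        chosen.append(best)
        uncovered -= pairs_in(best, names)
    return chosen

scenarios = pairwise_scenarios(dimensions)
print(f"{len(scenarios)} scenarios cover every value pair "
      f"(vs. 18 in the full cross-product)")
```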
ICM Issues and Actions (unordered)
9. Providing evidence for an SoS at regular milestones is
especially challenging because the evidence provided
by the individual system elements may not be available
when originally expected. Performing impact
analysis across elements when something changes is also
challenging.
Action: Research how to perform impact analysis across elements and how
to respond to "breakage" in synchronization across elements
(a dependency-graph sketch follows).
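As one way to picture cross-element impact analysis, the sketch below (our assumption, not workshop guidance) treats SoS elements as a dependency graph and walks downstream from a changed element; the element names are invented.

```python
from collections import deque

# Hypothetical SoS elements; an edge A -> B means B depends on A.
depends_on_me = {
    "radar": ["tracker"],
    "tracker": ["c2_node"],
    "datalink": ["c2_node", "tracker"],
    "c2_node": [],
}

def impacted(changed, graph):
    """Breadth-first walk from the changed element to every element
    transitively downstream of it."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# If the radar element slips, which elements must re-examine their DIETAs?
print(impacted("radar", depends_on_me))  # -> {'tracker', 'c2_node'}
```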
10. As development progresses from milestone to
milestone, new evidence reconfirming key DIETAs is
needed. There is no guidance as to what that evidence
should be and how often it should be collected.
Action: Research what evidence is required to reconfirm key DIETAs and
then document the approaches in guidance.
11. Program offices are inherently biased when it comes to
evaluating evidence that a supplier is making sufficient
progress to pass a milestone. Having independent
non-advocate reviews of evidence eliminates that problem,
but such reviews can be expensive and difficult to staff.
Action: Investigate the cost and feasibility of independent non-advocate
reviews vs. the cost of the inadequate reviews that result from not
using independent reviewers.
Value and Ease of Implementing Actions
[Chart not captured in the transcript.]