Transcript Document

Systems & Cross-Cutting Issues
Moderator: John Baras
Scribe: Eric Cooper
Attendees: Claire Tomlin (UC Berkeley), Mingyan Li (Boeing),
Lyle Long (Penn State), Walter Storm (Lockheed Martin),
Peter Stanfill (Lockheed Martin), Eric Cooper (NASA Langley),
Kristina Lundqvist (MIT), Ernie Lucier,
Andres Zeilweger (JPDO), Barbara Lingberg (FAA),
Elroy Weind (Cessna), John Baras (University of MD),
Glenn Roberts (Mitre)
• What are the top three lessons learned for this technology area?
• What are the top three needs that cannot currently be met?
• What are the top three challenges, with timelines?
• Education issues
• Lessons learned:
– Systems engineering must be done up front and must include SW components annotated with the methodology / plan that will be used for certification / V & V
– Systems engineering needs to define – and enforce – a “contract” for each interface so as to enable systems certification / verification from component specifications (may include formal models of interfaces)
– The user must be considered an integral part of the whole system
– A formalized approach to systems engineering is lacking, both for tracking requirements across system components and for generating, refining, and clarifying requirements
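The interface “contract” idea above can be made concrete with a minimal sketch: a precondition and postcondition attached to a component boundary and checked at every call. Everything here (the `contract` decorator, the `filter_altitude` component, the altitude bounds) is a hypothetical illustration, not an avionics API or a method endorsed by the session.

```python
# Minimal sketch of an enforceable interface "contract":
# a precondition and a postcondition are attached to a component
# boundary and checked on every call. All names and bounds are
# hypothetical illustrations.

def contract(pre, post):
    """Wrap a function so its pre/postconditions are enforced at the interface."""
    def decorate(func):
        def wrapper(*args):
            assert pre(*args), "precondition violated"
            result = func(*args)
            assert post(result), "postcondition violated"
            return result
        return wrapper
    return decorate

# Contract: input altitude (feet) must be within [0, 60000];
# the output is guaranteed to stay in the same range.
@contract(pre=lambda alt: 0.0 <= alt <= 60000.0,
          post=lambda r: 0.0 <= r <= 60000.0)
def filter_altitude(alt_ft):
    # Hypothetical component behavior: a simple clamping filter.
    return min(max(alt_ft, 0.0), 60000.0)

print(filter_altitude(35000.0))  # within the contract
```

Because the contract is checked at the interface rather than inside the component, a system-level verification activity can treat the component as a black box and still detect violations at integration time.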
• Needs:
– A systematic philosophy for integrating requirements management and “design-to-test”
– SW certification “hooks” in place for all system components from the very beginning, in order to enable systems-level V & V and testing assurance
– Models of pilot / user behavior, integrated into the systems engineering process
– A unified model representation to facilitate the use of analysis tools
– Quantitative methods that link V & V results back to requirements and testing
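One way to make the last need – linking V & V results back to requirements – concrete is a simple traceability matrix that records which requirement each verification result covers, so coverage gaps become visible. The requirement IDs and test names below are invented for illustration only.

```python
# Sketch of requirements-to-test traceability: each V&V result
# records the requirement(s) it covers, so uncovered requirements
# can be computed mechanically. All IDs and tests are invented.

requirements = {
    "REQ-001": "Altitude shall be reported in feet",
    "REQ-002": "Update rate shall be at least 10 Hz",
    "REQ-003": "Loss of sensor shall raise an alert",
}

# Each verification result carries the requirement(s) it verifies.
results = [
    {"test": "test_altitude_units", "covers": ["REQ-001"], "passed": True},
    {"test": "test_update_rate",    "covers": ["REQ-002"], "passed": True},
]

covered = {req for r in results for req in r["covers"]}
uncovered = sorted(set(requirements) - covered)
print("Uncovered requirements:", uncovered)  # → ['REQ-003']
```

Maintained as part of the systems engineering baseline, such a mapping is what lets a quantitative claim (“N of M requirements verified”) be traced rather than asserted.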
• Research Challenges
– Definition and extension of what is meant by certifiable and dependable (reliable, available), including new elements such as monitoring and self-correcting SW components [needed in the next 2 – 3 years]
– Methods to perform timing analysis [next 2 – 3 years]
– Development of interface modeling methodology to fully describe
the component interface behavior so as to enable system V & V
[next 3 - 5 years, maybe longer]
– Change management methodology within cost and time
constraints (change in requirements and technologies)
[dependent on above, next 5 – 10 years]
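The “monitoring and self-correcting SW components” challenge above can be sketched as a runtime monitor that checks an invariant on a component’s output and substitutes a safe fallback when it is violated. The `monitored` decorator, the `throttle_command` component, and the invariant are all hypothetical, shown only to illustrate the pattern.

```python
# Sketch of a monitored, self-correcting component: a runtime
# monitor checks an output invariant and falls back to a safe
# default when the invariant is violated. Names are hypothetical.

def monitored(invariant, fallback):
    def decorate(func):
        def wrapper(*args):
            result = func(*args)
            if not invariant(result):
                # Self-correction: report and substitute a safe value.
                print("monitor: invariant violated, using fallback")
                return fallback
            return result
        return wrapper
    return decorate

# Invariant: a throttle command must lie in [0.0, 1.0].
@monitored(invariant=lambda v: 0.0 <= v <= 1.0, fallback=0.0)
def throttle_command(raw_percent):
    # Hypothetical faulty component: no range checking of its own.
    return raw_percent / 100.0

print(throttle_command(50.0))   # 0.5, passes the monitor
print(throttle_command(250.0))  # violates the invariant -> 0.0
```

Certifying such a pair would require extending the notion of dependability to cover the monitor itself, which is exactly the definitional challenge the bullet raises.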
• Other Challenges
– Tools to monitor software performance, to integrate experimental data into the monitoring, and to support updating
– Understanding and quantifying software reuse
– Understanding implications of COTS on
system certification (will differ based on level
of certitude)
• Education
– Need to teach system-level thinking, composability, and scalability – avionics systems are much different from other SW-based application disciplines
– Need for educational culture change
– Need a pedagogical approach to systems engineering using
common teaching approach
– Teach life cycle view of software systems (from requirements, to
design, through operational maintenance)
– Need to have industry involved in the education process (not
enough domain expertise currently in academia)
– Increase industry internships
– Need more technically rigorous software engineering programs
in US at all levels (undergrad, grad, and continuing education)
– Need more SW engineering training in all engineering disciplines
• Points raised (raw notes):
1. Systems engineering must be done up front, and must recognize that software concerns are of equal importance with other components
2. Essential to have certification awareness included from the very beginning of the development process
3. The user must be considered part of the whole system
4. Formal Methods have been shown valuable for refining / clarifying requirements
5. Have SW certification “hooks” in place from the very beginning for system-level testing assurance (i.e., V & V at all levels of the requirements / specification / design)
6. Deciding what should be considered hardware vs. what should be considered software
7. Models of pilot / user response (models for the professional, the amateur, the person off-the-street); “Cooper-Harper” ratings / bringing pilots in to test the system
8. A defined “design-to-test” philosophy
9. A unified model representation to facilitate the use of analysis tools
10. Quantitative methods that link V & V results back to requirements and testing
11. We do not currently have the interface modeling mechanisms to fully describe component interface behavior
12. Definition of what it means to be dependable
13. A formal interface definition methodology (i.e., a contract relative to data / timing / etc.)
14. Multiple kinds of systems (boxes, heterogeneous aircraft, the ATC system, etc.)
15. Software re-use is often problematic for many reasons – a better approach might be to re-use the certification basis
16. Incremental changes – no re-certification
17. Effect of software architecture on dependability and reliability
18. Timing constraints should be addressed
19. How do we capture probabilistic requirements?
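The question of capturing probabilistic requirements can be illustrated by checking one by simulation: “response time shall be under 100 ms with probability at least 0.99.” The requirement, the threshold, and the latency model below are all invented for illustration; a certification argument would of course need far more than a sampled estimate.

```python
# Sketch of checking a probabilistic requirement by Monte Carlo
# simulation: P(latency < 100 ms) >= 0.99. The latency model
# (mostly fast, rare spike) is a made-up example.
import random

random.seed(0)  # reproducible sampling

def simulated_latency_ms():
    # Hypothetical latency model: Gaussian baseline plus a
    # rare 200 ms spike with probability 0.005.
    spike = 200.0 if random.random() < 0.005 else 0.0
    return random.gauss(40.0, 10.0) + spike

N = 100_000
hits = sum(1 for _ in range(N) if simulated_latency_ms() < 100.0)
estimate = hits / N
print(f"P(latency < 100 ms) ≈ {estimate:.4f}")
```

The point of the sketch is that a probabilistic requirement is at least mechanically checkable against a model, whereas the notes leave open how such requirements should be stated and certified in the first place.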