
COSYSMO Working Group Meeting Industry Calibration results

Ricardo “two months from the finish line” Valerdi

USC Center for Software Engineering & The Aerospace Corporation

Morning Agenda

7:30   Continental Breakfast (in front of Salvatori Hall)
8:30   Introductions [All]
9:00   Brief overview of COSYSMO [Ricardo]
9:15   Calibration results [Ricardo]
9:45   Break
10:15  Size driver counting rules exercise [All]
11:15  Mini Delphi for EIA 632 activity distributions [All]
12:00  Lunch (in front of Salvatori Hall)

Afternoon Agenda

1:00   Joint meeting with COSOSIMO workshop [JoAnn Lane]
2:00   COSYSMO Risk/Confidence Estimation Prototype [John Gaffney]
2:45   Break
3:15   Open issues
       - Local calibrations
       - Lies, damned lies, and statistical outliers
       - Future plans for COSYSMO 2.0 (including ties to SoS work)
4:30   Action items for next meeting: July 2005 in Keystone, CO
5:00   Adjourn

7-step Modeling Methodology

1. Analyze existing literature
2. Perform behavioral analysis
3. Identify relative significance
4. Perform expert judgement, Delphi assessment
5. Gather project data  <-- WE ARE HERE
6. Determine Bayesian a-posteriori update
7. Gather more data; refine model

COSYSMO Operational Concept

[Diagram: Size Drivers (# Requirements, # Interfaces, # Scenarios,
# Algorithms, plus volatility factors) and Effort Multipliers
(Application factors - 8 factors; Team factors - 6 factors) feed into
COSYSMO, together with Calibration, to produce an Effort estimate whose
WBS is guided by EIA/ANSI 632.]

COSYSMO Cost Drivers

• Application Factors
  - Requirements understanding
  - Architecture understanding
  - Level of service requirements
  - Migration complexity
  - Technology maturity
  - Documentation match to life cycle needs
  - # and diversity of installations/platforms
  - # of recursive levels in the design
• Team Factors
  - Stakeholder team cohesion
  - Personnel/team capability
  - Personnel experience/continuity
  - Process maturity
  - Multisite coordination
  - Tool support

COSYSMO 1.0 Calibration Data Set

• Collected 35 data points
• From 6 companies; 13 business units
• No single company had > 30% influence

COSYSMO Data Sources

Companies: Raytheon, Northrop Grumman, Lockheed Martin, General
Dynamics, BAE Systems, SAIC

Business units:
• Intelligence & Information Systems (Garland, TX)
• Mission Systems (Redondo Beach, CA)
• Transportation & Security Solutions (Rockville, MD)
• Integrated Systems & Solutions (Valley Forge, PA)
• Systems Integration (Owego, NY)
• Aeronautics (Marietta, GA)
• Maritime Systems & Sensors (Manassas, VA)
• Maritime Digital Systems/AIS (Pittsfield, MA)
• Surveillance & Reconnaissance Systems/AIS (Bloomington, MN)
• National Security Solutions/ISS (San Diego, CA)
• Information & Electronic Warfare Systems (Nashua, NH)
• Army Transformation (Orlando, FL)
• Integrated Data Solutions & Analysis (McLean, VA)

Data Champions

• Gary Thomas, Raytheon
• Steven Wong, Northrop Grumman
• Garry Roedler, LMCO
• Paul Frenz, General Dynamics
• Sheri Molineaux, General Dynamics
• Fran Marzotto, General Dynamics
• John Rieff, Raytheon
• Jim Cain, BAE Systems
• Merrill Palmer, BAE Systems
• Bill Dobbs, BAE Systems
• Donovan Dockery, BAE Systems
• Mark Brennan, BAE Systems
• Ali Nikolai, SAIC

Meta Properties of Data Set

• Almost half of the data received was from Military/Defense programs
• 55% was from Information Processing systems and 32% was from C4ISR

Meta Properties of Data Set

• Two-thirds of the projects were software-intensive
• The first 4 phases of the SE life cycle were adequately covered

Industry Calibration Factor

$$PM_{NS} = A \cdot \left[\sum_{k}\left(w_{e,k}\,\Phi_{e,k} + w_{n,k}\,\Phi_{n,k} + w_{d,k}\,\Phi_{d,k}\right)\right]^{E} \cdot \prod_{j=1}^{14} EM_{j}$$

where the sum runs over the size drivers k, the \Phi terms are the
counts rated easy (e), nominal (n), or difficult (d) with weights w, and
the EM_j are the 14 effort multipliers.

Calculation is based on the aforementioned data (n = 35):

$$\ln(SE\_HRS) = 3.14 + 1.01 \cdot \ln(Size)$$

$$SE\_HRS = 22.87 \cdot Size^{1.01}$$

• This calibration factor must be adjusted for each organization
• Evidence of diseconomies of scale (partially captured in Size driver
  weights)
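To make the model form concrete, here is a minimal Python sketch of the
evaluation, assuming the usual COSYSMO structure (weighted
easy/nominal/difficult size counts and fourteen effort multipliers).
The weight and multiplier values are invented placeholders, not the
calibrated COSYSMO 1.0 values.

```python
# Minimal sketch of the model form above:
#   PM_NS = A * (weighted size sum)^E * product(EM_j)
# The weights below are invented placeholders, NOT the
# calibrated COSYSMO 1.0 values.
from math import prod

A = 1.0   # organization-specific calibration constant
E = 1.01  # diseconomy-of-scale exponent (from the fit above)

# (easy, nominal, difficult) weights per size driver -- placeholders
WEIGHTS = {
    "REQ":  (0.5, 1.0, 5.0),
    "INTF": (1.1, 2.8, 6.3),
    "ALG":  (2.2, 4.1, 11.5),
    "OPSC": (6.2, 14.4, 30.0),
}

def cosysmo_effort(counts, effort_multipliers):
    """counts: {driver: (n_easy, n_nominal, n_difficult)};
    effort_multipliers: the 14 EM_j ratings as numbers."""
    size = 0.0
    for driver, (n_e, n_n, n_d) in counts.items():
        w_e, w_n, w_d = WEIGHTS[driver]
        size += w_e * n_e + w_n * n_n + w_d * n_d
    return A * size ** E * prod(effort_multipliers)

# Example: all 14 effort multipliers at nominal (1.0)
counts = {"REQ": (80, 40, 10), "INTF": (5, 8, 2),
          "ALG": (0, 3, 1), "OPSC": (2, 4, 0)}
print(cosysmo_effort(counts, [1.0] * 14))
```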

Size Driver Influence on Functional Size

N = 35

• # of Scenarios and # of Requirements accounted for 83% of functional size
• # of Interfaces and # of Algorithms proved to be less significant

Parameter Transformation


Size vs. Effort

[Scatter plot: size vs. effort for the 35 projects; R-squared = 0.55.
Range of SIZE: min = 82, max = 17,763.]
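For reference, a fit of this shape can be reproduced with a
least-squares line in log-log space: the slope is the exponent and the
intercept recovers the multiplicative constant. The arrays below are
invented stand-ins, since the 35 calibration points are not reproduced
in this deck.

```python
# Sketch: fit ln(SE_HRS) = a + b*ln(Size) by least squares.
# The slides report a ~= 3.14, b ~= 1.01, R^2 = 0.55 on n = 35;
# the arrays here are invented stand-ins for illustration.
import numpy as np

size   = np.array([82, 450, 1200, 3300, 9000, 17763], dtype=float)
se_hrs = np.array([2.1e3, 9.5e3, 2.7e4, 8.1e4, 1.9e5, 4.4e5])

b, a = np.polyfit(np.log(size), np.log(se_hrs), 1)  # slope, intercept
print(f"SE_HRS ~= {np.exp(a):.2f} * Size^{b:.2f}")

# R^2 of the log-log fit
resid = np.log(se_hrs) - (a + b * np.log(size))
r2 = 1.0 - resid.var() / np.log(se_hrs).var()
print(f"R^2 = {r2:.2f}")
```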

Intra-Size Driver Correlation

        REQ    INTF   ALG    OPSC
REQ     1.0    0.63   0.48   0.59
INTF           1.0    0.64   0.32
ALG                   1.0    0.05
OPSC                         1.0

• REQ & INTF are highly correlated (0.63)
• ALG & INTF are highly correlated (0.64)
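A table like this drops straight out of a correlation routine on the
raw driver counts. In the sketch below, the five-row data array is an
invented placeholder for the 35-project data set.

```python
# Sketch: computing an intra-size-driver correlation matrix.
# Rows of `data` would be the 35 projects; columns are the four
# size drivers. The values below are invented placeholders.
import numpy as np

drivers = ["REQ", "INTF", "ALG", "OPSC"]
data = np.array([  # one row per project (placeholder values)
    [120, 10,  4,  6],
    [300, 25,  9,  8],
    [ 80,  6,  2,  3],
    [950, 40, 20, 12],
    [200, 18,  5,  7],
], dtype=float)

corr = np.corrcoef(data, rowvar=False)  # columns are the variables
for name, row in zip(drivers, corr):
    print(name, np.round(row, 2))
```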

A Day In the Life…

Common problems:

• Requirements reported at "sky level" rather than "sea level"
  - Test: if REQS < OPSC, then investigate
  - Often too high; requires some decomposition
• Interfaces reported at "underwater level" rather than "sea level"
  - Test: if INTF source = pin or wire level, then investigate
  - Often too low; requires investigation of physical or logical I/F

We will revisit these issues later. (A screening sketch for these two
tests follows below.)
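Both tests are mechanical enough to run automatically when a data point
is submitted. A minimal sketch of such a screening helper, using
hypothetical field names (reqs, opsc, intf_source) that are not part of
any official COSYSMO data collection form:

```python
# Sketch of the two screening tests above. The field names are
# hypothetical, invented for illustration.

def screen_data_point(reqs, opsc, intf_source):
    """Return a list of warnings for a submitted data point."""
    warnings = []
    # "Sky level" test: fewer requirements than operational
    # scenarios suggests requirements were counted at too high a
    # level and need decomposition.
    if reqs < opsc:
        warnings.append("REQS < OPSC: requirements may be counted at "
                        "too high a level; decompose and recount.")
    # "Underwater level" test: interfaces sourced at pin/wire level
    # are counted at too low a level; recount at the physical or
    # logical interface level.
    if intf_source in {"pin", "wire"}:
        warnings.append("INTF counted at pin/wire level; recount at "
                        "the physical/logical interface level.")
    return warnings

print(screen_data_point(reqs=3, opsc=5, intf_source="pin"))
```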

A Day In the Life… (part 2)

Common problems (cont.):

• Algorithms not reported
  - Only size driver omitted: by 14 projects spanning 4 companies
  - Still a controversial driver; divergent support
• Operational Scenarios not reported
  - Only happened thrice (scope of effort reported was very small in
    all cases)
  - Fixable; involved going back to V&V documentation to extract at
    least one OPSC

We will revisit these issues later.

The Case for Algorithms

N = 21

Reasons to keep ALG in the model:
• Accounts for 16% of the total size in the 21 projects that reported ALG
• It is described in the INCOSE SE Handbook as a crucial part of SE

Reasons to drop ALG from the model:
• Accounts for only 9% of the total SIZE contribution
• Omitted by 14 projects across 4 companies
• Highly correlated with INTF (0.64)
• Has a relatively small correlation with Size (0.53, compared to
  REQ 0.91, INTF 0.69, and OPSC 0.81)

Cost Drivers

• Original set consisted of > 25 cost drivers
• Reduced down to 8 "application" and 6 "team" factors
• See correlation handout
• Regression fit (R-squared) improved from 0.55 to 0.64 with the
  introduction of cost drivers
• Some may be candidates for elimination or aggregation

Cost Drivers: Application Factor Distribution (RQMT, ARCH, LSVC, MIGR)

[Histograms of project counts by rating: Requirements Understanding
(RQMT), Architecture Understanding (ARCH), and Level of Service
Requirements (LSVC) on a Very Low-Very High scale; Migration Complexity
(MIGR) on a Nominal-Extra High scale.]

Cost Drivers: Application Factor Distribution (TMAT, DOCU, INST, RECU)

[Histograms of project counts by rating: Technology Maturity (TMAT),
Documentation (DOCU), and Recursive Levels in the Design (RECU) on a
Very Low-Very High scale; Installations & Platforms (INST) on a
Nominal-Extra High scale.]

Cost Drivers: Team Factor Distribution (TEAM, PCAP, PEXP, PROC)

[Histograms of project counts by rating: Stakeholder Team Cohesion
(TEAM), Personnel/Team Capability (PCAP), and Personnel Experience
(PEXP) on a Very Low-Very High scale; Process Capability (PROC) on a
Very Low-Extra High scale.]

Cost Drivers: Team Factor Distribution (SITE, TOOL)

[Histograms of project counts by rating: Multisite Coordination (SITE)
on a Very Low-Extra High scale; Tool Support (TOOL) on a Very Low-Very
High scale.]

Top 10 Intra Driver Correlations

Size drivers correlated to cost drivers:
•  0.39  Interfaces & # of Recursive Levels in the Design
• -0.40  Interfaces & Multisite Coordination
•  0.48  Operational Scenarios & # of Recursive Levels in the Design

Cost drivers correlated to cost drivers:
•  0.47  Requirements Und. & Architecture Und.
• -0.42  Requirements Und. & Documentation
•  0.39  Requirements Und. & Stakeholder Team Cohesion
•  0.43  Requirements Und. & Multisite Coordination
•  0.39  Level of Service Reqs. & Documentation
•  0.50  Level of Service Reqs. & Personnel Capability
•  0.49  Documentation & # of Recursive Levels in the Design

Candidate Parameters for Elimination

• Size Drivers
  - # of Algorithms * ^
• Cost Drivers (application factors)
  - Requirements Understanding * ^
  - Level of Service Requirements ^
  - # of Recursive Levels in the Design *
  - Documentation ^
  - # of Installations & Platforms ^
• Cost Drivers (team factors)
  - Personnel Capability ^
  - Tool Support ^

Motivation for eliminating parameters is based on the high ratio of
parameters (18) to data points (35) and the need for degrees of
freedom. By comparison, COCOMO II has 23 parameters and over 200 data
points.

* Due to high correlation
^ Due to regression insignificance

The Case for # of Recursive Levels in the Design

Reasons to keep RECU in the model:
• Captures emergent properties of systems
• Originally thought of as independent from other size and cost drivers

Reasons to drop RECU from the model:
• Highly correlated to
  - Size (0.44)
  - Operational Scenarios (0.48)
  - Interfaces (0.39)
  - Documentation (0.49)

Size driver counting rules

Are there any suggested improvements?

• Requirements
  - Need to add guidance with respect to
    • "system" vs. "system engineered" vs. "subsystem" requirements
    • "decomposed" vs. "derived" requirements
  - Current guidance includes
    • Requirements document, System Specification, RVTM, Product
      Specification, internal functional requirements document, tool
      output such as DOORS, QFD

Counting Rules: Requirements

Number of System Requirements

This driver represents the number of requirements for the
system-of-interest at a specific level of design. The quantity of
requirements includes those related to the effort involved in system
engineering the system interfaces, system-specific algorithms, and
operational scenarios. Requirements may be functional, performance,
feature, or service-oriented in nature depending on the methodology
used for specification. They may also be defined by the customer or
contractor. Each requirement may have effort associated with it, such
as V&V, functional decomposition, functional allocation, etc. System
requirements can typically be quantified by counting the number of
applicable shalls/wills/shoulds/mays in the system or marketing
specification. Note: some work is involved in decomposing requirements
so that they may be counted at the appropriate level of the
system-of-interest.

How can we prevent the requirements count from being provided at too
high a level?
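Since the rule says requirements "can typically be quantified by
counting the number of applicable shalls/wills/shoulds/mays", a
first-pass count is easy to script. The regex sketch below is only a
screening aid; the decomposition work the note describes still has to
be done by hand.

```python
# Sketch: first-pass requirement count from a specification's
# text, per the "count the shalls/wills/shoulds/mays" guidance.
# This is a screening aid, not a substitute for decomposing
# requirements to the appropriate system-of-interest level.
import re

MODALS = re.compile(r"\b(shall|will|should|may)\b", re.IGNORECASE)

def rough_requirement_count(spec_text: str) -> int:
    return len(MODALS.findall(spec_text))

spec = """The system shall track all targets. The operator
display shall refresh at 1 Hz. The system should log faults
and may archive them for 30 days."""
print(rough_requirement_count(spec))  # -> 4
```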

Counting Rules: Interfaces

Number of System Interfaces

This driver represents the number of shared physical and logical boundaries between system components or functions (internal interfaces) and those external to the system (external interfaces). These interfaces typically can be quantified by counting the number of external and internal system interfaces among ISO/IEC 15288-defined system elements.

• Examples would be very useful
• Current guidance includes
  - Interface Control Document, System Architecture diagram, system
    block diagram from the system specification, specification tree

How can we prevent the interface count from being provided at too low
a level?

Counting Rules: Algorithms

Number of System-Specific Algorithms

This driver represents the number of newly defined or significantly
altered functions that require unique mathematical algorithms to be
derived in order to achieve the system performance requirements. As an
example, this could include a complex aircraft tracking algorithm, such
as a Kalman filter, being derived using existing experience as the
basis for the all-aspect search function. Another example could be a
brand new discrimination algorithm being derived for the
identify-friend-or-foe function in space-based applications. The number
can be quantified by counting the number of unique algorithms needed to
realize the requirements specified in the system specification or mode
description document.

• Current guidance includes
  - System Specification, Mode Description Document, Configuration
    Baseline, historical database, functional block diagram, risk
    analysis

Are we missing anything?

Counting Rules: Op Scn

Number of Operational Scenarios

This driver represents the number of operational scenarios that a
system must satisfy. Such scenarios include both the nominal
stimulus-response thread plus all of the off-nominal threads resulting
from bad or missing data, unavailable processes, network connections,
or other exception-handling cases. The number of scenarios can
typically be quantified by counting the number of system test thread
packages or unique end-to-end tests used to validate the system
functionality and performance, or by counting the number of use cases,
including off-nominal extensions, developed as part of the operational
architecture.

• Current guidance includes
  - Ops Con / Con Ops, System Architecture Document, IV&V/test plans,
    engagement/mission/campaign models

How can we encourage Operational Scenario reporting?

Effort Profiling mini-Delphi

• Step 4 of the 7-step methodology
• Two main goals:
  1. Develop a typical distribution profile for systems engineering
     across 4 of the 6 life cycle stages (i.e., how is SE distributed
     over time?)
  2. Develop a typical distribution profile for systems engineering
     across 5 effort categories (i.e., how is SE distributed by
     activity category?)

COCOMO II Effort Distribution

[Chart: MBASE/RUP phases and activities]

Source: Software Cost Estimation with COCOMO II, Boehm et al., 2000

Our Goal for COSYSMO

[Matrix mapping the five EIA/ANSI 632 fundamental processes
(Acquisition & Supply, Technical Management, System Design, Product
Realization, Technical Evaluation) onto the ISO/IEC 15288 life cycle
stages (Conceptualize; Develop; Operational Test & Evaluation;
Transition to Operation; Operate, Maintain, or Enhance; Replace or
Dismantle).]

Mini Delphi Part 1

Goal: Develop a distribution profile for 4 of the 6 life cycle phases

• 5x6 matrix of EIA 632 processes vs. ISO 15288 life cycle phases
• 33 EIA 632 requirements (for reference)

Previous Results Are Informative
EIA/ANSI 632 & ISO/IEC 15288 Allocation

Percentage of effort for each EIA/ANSI 632 requirement clause by life
cycle phase (each row sums to 100%). The clauses group under the five
fundamental processes: Acquisition & Supply, Technical Management,
System Design, Product Realization, and Technical Evaluation.

Clause No. (Requirements)                PreSys SysDef SubDes DetDes IT&E  Ops  Maint Retire
Product Supply - 4.1.1 (1)                 40%    30%    20%    10%    0%   0%    0%    0%
Product Acquisition - 4.1.2 (2)            40%    30%    20%    10%    0%   0%    0%    0%
Supplier Performance - 4.1.2 (3)           30%    30%    20%    20%    0%   0%    0%    0%
Technical Management - 4.2 (4-13)          15%    20%    20%    20%   25%   0%    0%    0%
Requirements Definition - 4.3.1 (14-16)    35%    30%    20%    10%    0%   5%    0%    0%
Solution Definition - 4.3.2 (17-19)        25%    35%    30%     5%    0%   0%    5%    0%
Implementation - 4.4 (20)                   5%    10%    25%    40%    0%   0%   20%    0%
Transition to Use - 4.4 (21)                5%    10%    25%    30%    0%   0%   30%    0%
Systems Analysis - 4.5.1 (22-24)           25%    40%    25%     5%    0%   0%    5%    0%
Requirements Validation - 4.5.2 (25-29)    10%    35%    30%    15%    0%   0%   10%    0%
System Verification - 4.5.3 (30-32)        10%    25%    20%    20%    0%   0%   25%    0%
End Products Validation - 4.5.4.1 (33)     20%    25%    20%    15%    0%   0%   20%    0%

Column key: PreSys, SysDef, SubDes, DetDes, IT&E = EIA/ANSI 632
Pre-System, System Definition, Subsystem Design, Detailed Design, and
Integration, Test, and Evaluation; Ops, Maint, Retire = ISO/IEC 15288
Operations, Maintenance or Support, and Retirement.

Breadth and Depth of Key SE Standards

[Diagram: breadth vs. depth of coverage across the life cycle stages
(Conceptualize; Develop; Transition to Operation; Operate, Maintain, or
Enhance; Replace or Dismantle). ISO/IEC 15288 describes the system life
and processes as high-level practices; EIA/ANSI 632 provides detailed
practices.]

Purpose of the standards:

• ISO/IEC 15288 - Establish a common framework for describing the life
  cycle of systems
• EIA/ANSI 632 - Provide an integrated set of fundamental processes to
  aid a developer in the engineering or re-engineering of a system
• IEEE 1220 - Provide a standard for managing systems engineering

Source: Draft Report, ISO Study Group, May 2, 2000

5 Fundamental Processes for Engineering a System

[Figure. Source: EIA/ANSI 632, Processes for Engineering a System (1999)]

33 Requirements for Engineering a System

[Figure. Source: EIA/ANSI 632, Processes for Engineering a System (1999)]

Mini Delphi Part 2

Goal: Develop a typical distribution profile for systems engineering
across 5 effort categories

• 5 EIA 632 fundamental processes
• 33 EIA 632 requirements (for reference)

Preliminary results

4-person Delphi done last week at GSAW

EIA 632 Fundamental Process    Average    Standard Deviation
Acquisition & Supply            5%         0
Technical Management            13.75%     2.5
System Design                   26.25%     9.4
Product Realization             22.5%      6.4
Technical Evaluation            32.5%      15
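The statistics in this table are just the per-process mean and spread of
the four participants' allocations. A sketch of the arithmetic follows,
with invented individual responses (only the summary statistics appear
on the slide):

```python
# Sketch of the mini-Delphi arithmetic: each participant
# allocates 100% across the five EIA 632 fundamental processes;
# the group keeps the mean and standard deviation per process.
# The four response rows are invented for illustration.
from statistics import mean, stdev

processes = ["Acq & Supply", "Tech Mgmt", "Sys Design",
             "Prod Realization", "Tech Evaluation"]
responses = [  # one row per Delphi participant; each sums to 100
    [5, 15, 30, 20, 30],
    [5, 10, 20, 25, 40],
    [5, 15, 35, 15, 30],
    [5, 15, 20, 30, 30],
]
for i, name in enumerate(processes):
    col = [r[i] for r in responses]
    print(f"{name:18s} mean={mean(col):6.2f}  sd={stdev(col):5.2f}")
```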

COSYSMO Invasion

In chronological order:

Developer                        Implementation        Availability
Gary Thomas (Raytheon)           myCOSYSMO v1.22       --
Ricardo Valerdi (USC)            Academic COSYSMO      Prototype at:
                                                       www.valerdi.com/cosysmo;
                                                       August 2005
John Gaffney (Lockheed Martin)   Risk add-on           Prototype developed,
                                                       not yet integrated
Dan Liggett (Costar)             Commercial COSYSMO    TBD

COSYSMO Risk Estimation Add-on

• Justification
  - USAF (Teets) and Navy acquisition chief (Young) require "high
    confidence estimates"
  - COSYSMO currently provides a single point solution
  - Elaboration of the "sizing confidence level" in myCOSYSMO
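One illustrative way to widen a single point estimate into a confidence
range is to carry the calibration's residual scatter into a lognormal
prediction interval. This is an editor's sketch under that assumption,
not the method of the actual risk add-on prototype described above.

```python
# Sketch: turning a point estimate into a rough confidence range.
# Assumes lognormal residuals around the log-log calibration fit,
# which is an illustrative assumption only.
import math

def effort_range(point_estimate_hrs, resid_sd_ln=0.5, z=1.28):
    """~80% two-sided interval (z = 1.28) around a lognormal
    point estimate; resid_sd_ln is the standard deviation of the
    ln-residuals from the calibration regression."""
    lo = point_estimate_hrs * math.exp(-z * resid_sd_ln)
    hi = point_estimate_hrs * math.exp(z * resid_sd_ln)
    return lo, hi

# Range around the point estimate for a project of size 1000
print(effort_range(22.87 * 1000 ** 1.01))
```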

Final Items

• Open issues
  - Local calibrations
  - Lies, damned lies, and statistical outliers
  - Future plans for COSYSMO 2.0 (including ties to SoS work)
• Action items for next meeting: July 2005 in Keystone, CO
• Combine Delphi R3 results and perform Bayesian approximation
• Dissertation defense: May 9

Ricardo Valerdi
[email protected]

Websites:
http://sunset.usc.edu
http://valerdi.com/cosysmo