Risk Management


Chapter 28

Risk Analysis


What is risk?

- You have some expected outcome of some event in the future
- Risk is the deviation of the actual future outcome from the expected outcome
- Other definitions:
  - Hazard: something negative that can happen in the future
  - Risk is the probability of the hazard

What is the expected outcome?

Why analyze risk

Let's say you know the risk of permanent injury/death for an activity is 1 in 1,000 instances.

  Would you perform the activity? Why? Why not?

This activity was "optional". What about the following: let's say you have a disease and there is a treatment that works 25% of the time, does nothing 50% of the time, and results in immediate death 25% of the time. Would you perform this activity? Why? Why not?

The consequence of not performing this activity is death within five years. You must do it now; you can't do it five years from now.

Why do we care about risk analysis?

  What does knowing the risk of some hazard buy you?

- We know we can only care about future activities
- We know (or hope) that our risk analysis provides some actionable outcomes
- What are we really trying to decide?

How would the following statements be useful?
- The estimated damage by hazard X would be 2 million dollars
- The risk of hazard X is 1%

What risks exist in software?

- Project risks
  - Schedule slips
  - Cost increases
- Technical risks
  - The problem is harder to solve than you thought it would be
  - Threaten quality and timeliness
- Business risks
  - Market risk, strategic risk, sales risk, management risk, budget risks

Consequences of Project Risks

- Schedule slips and costs increase:
  - Budgetary risks
  - Schedule risks
  - Personnel risks
  - Resource risks
  - Stakeholder risks
  - Requirements problems
- What is the worst possible outcome in this case?

When looking for these risks, what are we trying to avoid?

Consequences of Technical Risks

- Reduced quality and timeliness
  - Design, implementation, interface, verification, and maintenance problems
  - Specification ambiguity, technical uncertainty, technical obsolescence, "leading-edge" technology
- The problem is harder to solve than you thought it would be
- What is the worst possible outcome with these risks?

What are we trying to avoid?

Other risks

- Business risks (mentioned above) threaten the viability of the software to be built
- Risks can also be classified into the following categories:
  - Known risks: identified after a careful analysis
  - Predictable risks: extrapolated from past project experiences
  - Unpredictable risks: extremely difficult to identify in advance

Project Risks

What can go wrong?

What is the likelihood?

What will the damage be?

What can we do about it?


What is the point of identifying risks?

- Avoid them when possible
- Control them when necessary
- Again, we perform risk analysis in order to create actionable items, not simply to "be aware" of the risk of something without changing what we will do as a response

What treatments can we apply to risk (in general)?

- Do nothing
  - i.e., if you don't try, you can never fail
- Risk sharing
- Risk retention
- Risk reduction

Option 1: Deal with the problem when it occurs

Caller: I have a slight problem, I'm trapped in my burning house.
911: Fire truck on its way.

What kinds of risks could you handle like this?


Option 2: Contingency plan: Plan ahead what you will do when the risk occurs


Option 3: Risk mitigation: Lessen the probability of the risk occurring. Reduce the impact of occurrence

Let's read about not playing with fire.
- Reduce probability
- Reduce impact

What is a risk with 100% probability?

 A constraint

Risk Management Paradigm

[Diagram: the risk management cycle (identify, analyze, plan, track, control) centered on RISK]


Step 1: identification

- Generic risks
  - Potential threat to every software project
- Product-specific risks
  - What special characteristics of this project may threaten the project plan?

Checklist of generic risks

- Product size
- Business impact: management or market
- Stakeholder characteristics: sophistication, communication
- Process definition
- Development environment
- Technology complexity and newness
- Staff size and experience

How to identify risks

- Common risks: many risks are common to many projects. Start with a list of these. (Your book has a list, and the web has many.)
- Some examples:
  - Schedule is optimistic, "best case," rather than realistic, "expected case."
  - Layoffs and cutbacks reduce the team's capacity.
  - Development tools are not in place by the desired time.
  - End user ultimately finds the product to be unsatisfactory, requiring redesign and rework.
  - Customer insists on new requirements.
  - Vaguely specified areas of the product are more time-consuming than expected.
  - Personnel need extra time to learn unfamiliar software tools or environment.

Step 2: risk analysis

- We want to figure out the impact of the potential risk and the likelihood that it will occur
- Why? What do you think the next step is?
  - Prioritization that leads to allocating resources where they will have the most impact

Risk Projection

 

Risk projection, also called risk estimation, attempts to rate each risk in two ways:
- the likelihood or probability that the risk will occur
- the consequences of the problems associated with the risk, should it occur

There are four risk projection steps:
- establish a scale that reflects the perceived likelihood of a risk occurring (high, medium, low, or numeric)
- delineate the consequences of the risk
- estimate the impact of the risk on the project and the product if it occurs
- note the overall accuracy of the risk projection so that there will be no misunderstandings
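The first projection step, establishing a likelihood scale, can be sketched in code. The numeric cut points below are illustrative assumptions, not values from the chapter:

```python
# Hypothetical mapping from a qualitative likelihood scale to numeric
# probabilities; the cut points (0.10 / 0.40 / 0.75) are assumptions
# chosen for illustration only.
LIKELIHOOD_SCALE = {"low": 0.10, "medium": 0.40, "high": 0.75}

def projected_exposure(likelihood: str, impact_dollars: float) -> float:
    """Rate a risk as qualitative likelihood times estimated dollar impact."""
    return LIKELIHOOD_SCALE[likelihood] * impact_dollars

print(projected_exposure("high", 10_000))  # 7500.0
```

Whatever scale you pick, the fourth step (noting the projection's accuracy) matters precisely because these numbers are estimates, not measurements.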

These slides are designed to accompany Software Engineering: A Practitioner's Approach, 7/e (McGraw-Hill 2009). Slides copyright 2009 by Roger Pressman.

Building a Risk Table

Each row of the risk table holds:
- Risk: text description of the risk
- Probability: probability of occurrence
- Impact: impact if the risk occurs (Negligible = 1 ... Catastrophic = 5)
- Exposure
- RMMM: Risk Mitigation, Monitoring & Management


Building the Risk Table

- Estimate the probability of occurrence
- Estimate the impact on the project on a scale of 1 to 5, where
  - 1 = low impact on project success
  - 5 = catastrophic impact on project success
- Determine the exposure:
  - Risk Exposure = Probability x Impact
- Some use cost to the project rather than impact, but in my experience cost is hard to estimate accurately. - Fleck

What is the point of the risk table?

- To sort the risks by exposure
- You may decide to study the resultant table and come up with a cutoff line
- What do you do with a risk that has a high impact but a very low probability?
- What about risks with high impact and high to moderate probability?

How do you assess risk impact?

- Three factors affect the consequences of a risk occurring:
  - Nature (technical, business, project)
  - Scope (combines severity with its overall distribution)
  - Timing (how long will the impact be felt): do you want to know early or know late?
- Impact is then used to assess overall risk exposure:
  - Risk Exposure = Probability x Impact

Risk Exposure Example

   

- Risk identification. Only 70 percent of the software components scheduled for reuse will, in fact, be integrated into the application. The remaining functionality will have to be custom developed.
- Risk probability. 80% (likely).
- Risk impact. 60 reusable software components were planned. If only 70 percent can be used, 18 components would have to be developed from scratch (in addition to other custom software that has been scheduled for development). Since the average component is 100 LOC and local data indicate that the software engineering cost for each LOC is $14.00, the overall cost (impact) to develop the components would be 18 x 100 x 14 = $25,200.
- Risk exposure. RE = 0.80 x 25,200 ~ $20,200.

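The worked example above can be checked with a few lines of arithmetic; note that the exact product 0.80 x 25,200 is $20,160, which the slide rounds to about $20,200:

```python
# Reproduce the slide's risk-exposure example.
planned = 60              # components planned for reuse
reuse_fraction = 0.70     # only 70% actually reusable
loc_per_component = 100   # average component size in LOC
cost_per_loc = 14.00      # local cost data, dollars per LOC
probability = 0.80        # risk probability (likely)

custom = round(planned * (1 - reuse_fraction))       # components built from scratch
impact = custom * loc_per_component * cost_per_loc   # dollar impact if risk occurs
exposure = probability * impact                      # RE = P x I

print(custom)    # 18
print(impact)    # 25200.0
print(exposure)  # 20160.0
```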

Ways to view risk

- Financial loss
  - Implies that adding more to the budget mitigates these risks
- Arbitrary, numerical ratings assigned to impact
- Which one is better? Why?

Step 3: risk planning

- Our goal is to develop strategies to deal with risk
- Three issues:
  - Risk avoidance
  - Risk monitoring
  - Risk management

Risk Mitigation, Monitoring, and Management

- Mitigation: how can we avoid the risk?
- Monitoring: what factors can we track that will enable us to determine if the risk is becoming more or less likely?
- Management: what contingency plans do we have if the risk becomes a reality?


Risk Management Paradigm

Key step: once you have created your risk spreadsheet, you must track and update it as things change.

[Diagram: the risk management cycle (identify, analyze, plan, track, control) centered on RISK]


Risk Impact revisited

- So far, what sorts of worst-case scenarios have we talked about?
  - The project doesn't get delivered on time/budget
  - The project never happens
  - We don't get re-hired
- But it could be much worse
  - There are software applications where lives are directly at stake
  - So-called dependable or ultra-dependable systems

Risks as hazards

- A hazard is a system (or project) state we want to avoid
- A hazard has an associated probability
  - This is in contrast to the hazard actually occurring, which we can call an accident (John Knight)
- Example hazard: a failed ABS system
- Example accident: the car crashes due to the failed ABS system

Hazard analysis through fault trees

- See John Knight's slides: http://www.cs.virginia.edu/~jck/cs686/slides/4a.dependability.analysis.pdf
- Slides 16-24

Risk Impact revisited

- So far, what sorts of worst-case scenarios have we talked about?
  - The project doesn't get delivered on time/budget
  - The project never happens
  - We don't get re-hired
- Other risks are related to software security or privacy
  - Unauthorized access to information
  - May directly or indirectly result in monetary losses

Risk Impact revisited

From Verdon et al.: http://www.cigital.com/papers/download/bsi3-risk.pdf

Risk Analysis and requirements

- SecureUML: role-based access control; models security requirements for well-behaved applications in predictable environments
- UMLsec: modeling confidentiality and access control
- Federal and state laws (HIPAA, etc.)
  - Make conformance with laws into must-have requirements

From Verdon et al.: http://www.cigital.com/papers/download/bsi3-risk.pdf


Risk Analysis Example: research

- Assume you are writing a piece of software that uses many commercial off-the-shelf (COTS) components or is made up of many modules. How do you know the risk associated with using any given component?
  - Risk = probability x cost
  - Probability is the probability of a fault in the component
  - Cost is the impact of the fault (measured by software fault injection)
- From Moraes et al., "Experimental Risk Assessment and Comparison Using Software Fault Injection", 2007.


How do we measure the cost of an injected fault?

- What happens after a single fault is injected:
  - System hang
  - System crash
  - System wrong
  - System correct
- From Moraes et al., "Experimental Risk Assessment and Comparison Using Software Fault Injection", 2007.
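One way to turn those four observed outcomes into a cost number is to weight each outcome by a severity factor and average over the injection runs. The weights and run counts below are illustrative assumptions, not values from the Moraes et al. paper:

```python
# Hypothetical severity weights for each fault-injection outcome
# (these are NOT the weights used by Moraes et al.).
SEVERITY = {"hang": 0.8, "crash": 1.0, "wrong": 0.5, "correct": 0.0}

def component_risk(fault_probability: float, outcome_counts: dict[str, int]) -> float:
    """Risk = P(fault in component) x average severity cost over injection runs."""
    runs = sum(outcome_counts.values())
    avg_cost = sum(SEVERITY[o] * n for o, n in outcome_counts.items()) / runs
    return fault_probability * avg_cost

# e.g. 100 injection runs on one hypothetical COTS component
risk = component_risk(0.05, {"hang": 10, "crash": 5, "wrong": 20, "correct": 65})
print(risk)  # about 0.0115
```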


Example COTS

[Figures: COTS component risk-assessment results, from Moraes et al., "Experimental Risk Assessment and Comparison Using Software Fault Injection", 2007.]

Reminder

- Team Presentations next week
  - Should be 10-20 minutes