
CSE Senior Design I
Critical Topic Review
Instructor: Mike O’Dell
CSE Senior Design I
Classic Mistakes
Instructor: Mike O’Dell
This presentation was derived from the textbook used for this class:
McConnell, Steve, Rapid Development, Chapter 3.
Why Projects Fail - Overview
Five main reasons:
• Failing to communicate
• Failing to create a realistic plan
• Lack of buy-in
• Allowing scope/feature creep
• Throwing resources at a problem
Categories of Classic Mistakes
• People-related
• Process-related
• Product-related
• Technology-related
Classic Mistakes Enumerated
1. Undermined motivation:
- The Big One - probably the largest single factor in poor productivity
- Motivation must come from within
2. Weak personnel:
- The right people in the right roles
3. Uncontrolled problem employees:
- Problem people (or just one person) can kill a team and doom a project
- The team must take action… early
- Consider the Welch Grid
Classic Mistakes Enumerated
4. Heroics:
- Heroics seldom work to your advantage
- Honesty is better than empty “can-do”
5. Adding people to a late project:
- Productivity killer
- Throwing people at a problem seldom helps
6. Noisy, crowded offices:
- Work environment is important to productivity
- Noisy, crowded conditions lengthen schedules
Classic Mistakes Enumerated
7. Friction between developers and customers:
- Cooperation is the key
- Encourage participation in the process
8. Unrealistic expectations:
- Avoid seat-of-the-pants commitments
- Realistic expectations are a TOP 5 issue
9. Lack of effective project sponsorship:
- Management must buy in and provide support
- Potential morale killer
Classic Mistakes Enumerated
10. Lack of stakeholder buy-in:
- Team members, end-users, customers, management, etc.
- Buy-in engenders cooperation at all levels
11. Lack of user input:
- You can’t build what you don’t understand
- Early input is critical to avoid feature creep
12. Politics placed over substance:
- Being well regarded by management will not make your project successful
Classic Mistakes Enumerated
13. Wishful thinking:
- Not the same as optimism
- Don’t plan on good luck!
- May be the root cause of many other mistakes
14. Overly optimistic schedules:
- Wishful thinking?
15. Insufficient risk management:
- Identify unique risks and develop a plan to eliminate them
- Consider a “spiral” approach for larger risks
Classic Mistakes Enumerated
16. Contractor failure:
- Relationship/cooperation/clear SOW
17. Insufficient planning:
- If you can’t plan it… you can’t do it!
18. Abandonment of planning under pressure:
- Path to failure
- Code-and-fix mentality takes over… and will fail
Classic Mistakes Enumerated
19. Wasted time during fuzzy front end:
- That would be now!
- Almost always cheaper and faster to spend time upfront working/refining the plan
20. Shortchanged upstream activities:
- See above… do the work up front!
- Avoid the “jump to coding” mentality
21. Inadequate design:
- See above… do the required work up front!
Classic Mistakes Enumerated
22. Shortchanged quality assurance:
- Test planning is a critical part of every plan
- Shortcutting 1 day early on will likely cost you 3-10 days later
- QA me now, or pay me later!
23. Insufficient management controls:
- Buy-in implies participation & cooperation
24. Premature or overly frequent convergence:
- It’s not done until it’s done!
Classic Mistakes Enumerated
25. Omitting necessary tasks from estimates:
- Can add 20-30% to your schedule
- Don’t sweat the small stuff!
26. Planning to catch up later:
- Schedule adjustments WILL be necessary
- A month lost early on probably cannot be made up later
27. Code-like-hell programming:
- The fast, loose, “entrepreneurial” approach
- This is simply… Code-and-Fix. Don’t!
Classic Mistakes Enumerated
28. Requirements gold-plating:
- Avoid complex, difficult-to-implement features
- Often, they add disproportionately to the schedule
29. Feature creep:
- The average project experiences 25% change
- Another killer mistake!
30. Developer gold-plating:
- Use proven stuff to do your job
- Avoid dependence on the hottest new tools
- Avoid implementing all the cool new features
Classic Mistakes Enumerated
31. Push-me, pull-me negotiation:
- Schedule slip = feature addition
32. Research-oriented development:
- Software research schedules are theoretical, at best
- Try not to push the envelope unless you allow for frequent schedule revisions
- If you push the state of the art… it will push back!
33. Silver-bullet syndrome:
- There is no magic in product development
- Don’t plan on some new whiz-bang thing to save your bacon (i.e., your schedule)
Classic Mistakes Enumerated
34. Overestimated savings from new tools or methods:
- Silver bullets probably won’t improve your schedule… don’t overestimate their value
35. Switching tools in the middle of the project:
- Version 3.1… version 3.2… version 4.0!
- Learning curve, rework inevitable
36. Lack of automated source control:
- Stuff happens… enough said!
Recommendation: Develop a Disaster Avoidance Plan
Get together as a team sometime soon and make a list of “worst practices” that you should avoid in your project.
Include specific mistakes that you think could/will be made by your team.
Post this list on the wall in your lab space or wherever it will be visible and prominent on a daily basis.
Refer to it frequently and talk about how you will avoid these mistakes.
CSE Senior Design I
Your Plan: Estimation
Instructor: Mike O’Dell
This presentation was derived from the textbook used for this class, McConnell,
Steve, Rapid Development, Chapter 8, further expanded on by Mr. Tom Rethard
for this course.
The Software-Estimation Story
• Software/system development, and thus estimation, is a process of gradual refinement.
• Can you build a 3-bedroom house for $100,000? (Answer: It depends!)
• Some organizations want cost estimates to within ±10% before they’ll fund work on requirements definition. (Is this possible?)
• Present your estimate as a range instead of a “single point in time” estimate.
• The tendency of most developers is to underestimate and over-commit!
Estimate-Convergence Graph
Range of estimates, relative to the final actual, at each project milestone:

Milestone                       Project Cost (effort and size)   Project Schedule
Initial product definition      0.25x - 4.0x                     0.6x - 1.6x
Approved product definition     0.50x - 2.0x                     0.8x - 1.25x
Requirements specification      0.67x - 1.5x                     0.85x - 1.15x
Product design specification    0.80x - 1.25x                    0.9x - 1.1x
Detailed design specification   0.90x - 1.10x                    0.95x - 1.05x
Product complete                1.0x                             1.0x
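To make the convergence ranges concrete, here is a minimal Python sketch (an illustration, not from the textbook) that turns a single nominal effort estimate into the range you should present at a given milestone. The milestone names and function are assumptions for illustration; the multiplier values come from the table above.

```python
# A minimal sketch: apply the estimate-convergence cost multipliers to a
# nominal estimate, producing a range to present instead of a single point.

COST_MULTIPLIERS = {
    # milestone: (low, high) multipliers for effort/size estimates
    "initial product definition":    (0.25, 4.0),
    "approved product definition":   (0.50, 2.0),
    "requirements specification":    (0.67, 1.5),
    "product design specification":  (0.80, 1.25),
    "detailed design specification": (0.90, 1.10),
    "product complete":              (1.00, 1.00),
}

def estimate_range(nominal_man_months: float, milestone: str) -> tuple[float, float]:
    """Return (low, high) effort estimates for the given project milestone."""
    low, high = COST_MULTIPLIERS[milestone]
    return nominal_man_months * low, nominal_man_months * high

# Example: a 20 man-month nominal estimate made at requirements specification
# should be presented as roughly 13-30 man-months, not as "20 man-months".
print(estimate_range(20, "requirements specification"))  # (13.4, 30.0)
```

The point of the sketch: the range, not the nominal number, is the honest estimate, and it tightens only as the product definition is refined.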
Estimation Tips
• Avoid off-the-cuff estimates
• Allow time for the estimate (do it right!)
• Use data from previous projects
• Use developer-based estimates
• Estimate by walk-through
• Estimate by categories
• Estimate at a low level of detail
• Don’t forget/omit common tasks
• Use software estimation tools
• Use several different techniques, and compare the results
• Evolve estimation practices as the project progresses
Function-Point Estimation
Based on the number of:
• Inputs (screens, dialogs, controls, messages)
• Outputs (screens, reports, graphs, messages)
• Inquiries (I/O resulting in a simple, immediate output)
• Logical internal files (major logical groups of end-user data, controlled by the program)
• External interface files (files controlled by other programs that this program uses; includes logical data that enters/leaves the program)
Function-Point Multipliers

                             Function Points
Program Characteristic       Low Complexity   Medium Complexity   High Complexity
Number of inputs             3                4                   6
Number of outputs            4                5                   7
Inquiries                    3                4                   6
Logical internal files       7                10                  15
External interface files     5                7                   10

Sum these to get an “unadjusted function-point total.”
Multiply this by an “influence multiplier” (0.65 to 1.35), based on 14 factors ranging from data communication to ease of installation.
All of this gives a total function-point count.
Use this with Jones’ First-Order Estimation Practice, or compare to previous projects for an estimate.
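A worked example helps here. The following Python sketch computes an adjusted function-point count using the weights from the table above; the project counts and the 1.1 influence multiplier are hypothetical values chosen purely for illustration.

```python
# A minimal sketch of the function-point calculation described above.
# Weights come from the multiplier table; the counts below are made up.

WEIGHTS = {
    # characteristic: (low, medium, high) complexity weights
    "inputs":                   (3, 4, 6),
    "outputs":                  (4, 5, 7),
    "inquiries":                (3, 4, 6),
    "logical internal files":   (7, 10, 15),
    "external interface files": (5, 7, 10),
}
COMPLEXITY = {"low": 0, "medium": 1, "high": 2}

def function_points(counts: dict[str, tuple[int, str]],
                    influence_multiplier: float = 1.0) -> float:
    """counts maps characteristic -> (count, complexity level). The influence
    multiplier (0.65-1.35) reflects the 14 general influence factors."""
    unadjusted = sum(n * WEIGHTS[name][COMPLEXITY[level]]
                     for name, (n, level) in counts.items())
    return unadjusted * influence_multiplier

# Hypothetical project: 20 medium inputs, 15 medium outputs, 10 low inquiries,
# 5 medium logical internal files, 2 high external interface files.
fp = function_points({
    "inputs": (20, "medium"),
    "outputs": (15, "medium"),
    "inquiries": (10, "low"),
    "logical internal files": (5, "medium"),
    "external interface files": (2, "high"),
}, influence_multiplier=1.1)
print(fp)  # unadjusted: 80 + 75 + 30 + 50 + 20 = 255; adjusted: 255 * 1.1 = 280.5
```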
Estimate Presentation Styles
• Plus-or-minus qualifiers: “6 months, +3 months, -2 months”
• Ranges: “5-9 months”
• Risk quantification: “6 months... +1 month for late subcontractor, +0.5 month for staff sickness, etc...”
• Cases:
    Best case      April 1
    Planned case   May 15
    Current case   May 30
    Worst case     July 15
• Coarse dates and time periods: “3rd quarter ’97”
• Confidence factors:
    April 1    5%
    May 15     50%
    July 1     95%
Schedule Estimation
• Rule-of-thumb equation:
    schedule in months = 3.0 × (man-months)^(1/3)
  This equation implies an optimal team size.
• Use estimation software to compute the schedule from your size and effort estimates
• Use historical data from your organization
• Use McConnell’s Tables 8-8 through 8-10 to look up a schedule estimate based on the size estimate
• Use the schedule estimation step from one of the algorithmic approaches (e.g., COCOMO) to get a more finely tuned estimate than the rule-of-thumb equation.
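Here is a minimal Python sketch of the rule-of-thumb equation above, including the team size it implies (total effort divided by calendar time); the 65 man-month example project is hypothetical.

```python
# A minimal sketch of the rule-of-thumb schedule equation and the optimal
# team size it implies.

def schedule_months(man_months: float) -> float:
    """schedule in months = 3.0 * man-months^(1/3)"""
    return 3.0 * man_months ** (1.0 / 3.0)

def implied_team_size(man_months: float) -> float:
    """Average head count implied by the rule of thumb."""
    return man_months / schedule_months(man_months)

# Example: a 65 man-month project implies roughly a 12-month schedule
# and an average team of about 5 people.
print(round(schedule_months(65), 1))    # 12.1
print(round(implied_team_size(65), 1))  # 5.4
```

Note the cube root: doubling the effort estimate lengthens the schedule by only about 26%, which is why adding effort never shortens a schedule proportionally.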
Shortest Possible Schedule (Table 8-8)
(Figure: probability of completing exactly on the scheduled date vs. scheduled completion date. Schedules shorter than the shortest possible schedule are impossible; the shortest possible schedule itself carries a high risk of late completion.)
• This table assumes:
  - Top 10% of talent pool, all motivated, no turnover
  - Entire staff starts working on Day 1, and continues until the project is released
  - Advanced tools available to everyone
  - Most time-efficient development methods used
  - Requirements completely known, and do not change
Efficient Schedules (Table 8-9)
• This table assumes:
  - Top 25% of talent pool
  - Turnover < 6% per year
  - No significant personnel conflicts
  - Using efficient development practices from Chapters 1-5
• Note that less effort is required on the efficient schedule tables
• For most projects, the efficient schedules represent “best-case”
Nominal Schedules (Table 8-10)
• This table assumes:
  - Top 50% of talent pool
  - Turnover 10-12% per year
  - Risk management less than ideal
  - Office environment only adequate
  - Sporadic use of efficient development practices
• Achieving the nominal schedule may be a 50/50 bet.
Estimate Refinement
• An estimate can be refined only with a more refined definition of the software product
• Developers often let themselves get trapped by a “single-point” estimate, and are held to it (Case Study 1-1)
• An impression of a schedule slip or budget overrun is created when the estimate increases
• When estimate ranges decrease as the project progresses, customer confidence is built up.
Conclusions
• Estimate accuracy is directly proportional to product definition.
• Before requirements specification, the product is very vaguely defined
• Use ranges for estimates and gradually refine (tighten) them as the project progresses.
• Measure progress and compare to your historical data
• Refine… Refine… Refine…
CSE Senior Design I
Feature-Set Control
Instructor: Mike O’Dell
The slides in this presentation are derived from materials in the textbook
used for CSE 4316/4317, Rapid Development: Taming Wild Software
Schedules, by Steve McConnell.
The Problem
• Products are initially stuffed with more features (requirements) than can be reasonably accommodated
• Features continue to be added as the project progresses (“feature creep”)
• Features must be removed/reduced or significantly changed late in a project
Sources of Change
• End-users: driven by the “need” for additional or different functionality
• Marketers: driven by the fact that markets and customer perspectives on requirements change (“latest and greatest” syndrome)
• Developers: driven by the emotional/intellectual desire to build the “best” widget
Effects of Change
• Impact in every phase: design, code, test, documentation, support, training, people, planning, tracking, etc.
• Visible effects: schedule, budget, product quality
• Hidden effects: morale, pay, promotion, etc.
• Costs are typically 50-200 times less if changes are made during the requirements phase than if you discover them during implementation
Change Control
Goals:
• Allow change that results in the best possible product in the time available. Disallow all other changes
• Allow everyone affected by a proposed change to participate in assessing the impact
• Broadly communicate proposed changes and their impact
• Provide an audit trail for all decisions (i.e., document them well)
• A process to accomplish the above as efficiently as possible
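One way to see these goals concretely: a change-control process boils down to a record per proposed change plus a documented decision. The Python sketch below is an illustration of such an audit trail, not a process prescribed by the textbook; all field names are assumptions.

```python
# A minimal sketch of a change-control audit trail: every proposed change is
# recorded with its assessed impact and a documented decision.

from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeRequest:
    title: str
    requested_by: str          # end-user, marketer, developer, ...
    schedule_impact_days: int  # impact assessed by everyone affected
    decision: str = "pending"  # "approved", "rejected", or "pending"
    rationale: str = ""        # the documented reason for the decision
    decided_on: date | None = None

change_log: list[ChangeRequest] = []

def decide(req: ChangeRequest, decision: str, rationale: str) -> None:
    """Record the decision so there is an audit trail to communicate broadly."""
    req.decision, req.rationale, req.decided_on = decision, rationale, date.today()
    change_log.append(req)

decide(ChangeRequest("Add export-to-PDF", "end-user", schedule_impact_days=10),
       "rejected", "Does not improve the product within the time available.")
```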
Late Project Feature Cuts
• Goal: eliminate features in an effort to save the project’s schedule
• Fact: a project may (will) fall behind for many reasons other than feature-set control
• Fact: removing features too late incurs additional costs and schedule impact
• Approach: analyze the cost of removal and reusability, then strip out unused code, remove documentation, eliminate test cases, etc.
CSE Senior Design I
Risk Management
Instructor: Mike O’Dell
This presentation was derived from the textbook used for this class:
McConnell, Steve, Rapid Development, Chapter 5.
Why Do Projects Fail?
Generally, from poor risk management:
• Failure to identify risks
• Failure to actively/aggressively plan for, attack, and eliminate “project killing” risks
Risk comes in different shapes and sizes:
• Schedule risks (short to long)
• Cost risks (small to large)
• Technology risks (probable to impossible)
Elements of Risk Management
• Managing risk consists of identifying, addressing, and eliminating risks
• When does this occur?
  - WORST – Crisis management/firefighting: addressing risks after they present a big problem
  - BAD – Fix on failure: finding and addressing risks as they occur
  - OKAY – Risk mitigation: plan ahead and allocate resources to address risks that occur, but don’t try to eliminate them before they occur
  - GOOD – Prevention: part of the plan to identify and prevent risks before they become problems
  - BEST – Eliminate root causes: part of the plan to identify and eliminate the factors that make specific risks possible
Elements of Risk Management
Effective risk management is made up of:
• Risk Assessment: identification, analysis, prioritization
• Risk Control: planning, resolution, monitoring
Risk Monitoring
• Risks and their potential impact will change throughout the course of a project
• Keep an evolving “TOP 10 RISKS” list (a sketch of one follows below)
  - See Table 5-7 for an example
  - Review the list frequently
  - Refine… Refine… Refine…
• Put someone in charge of monitoring risks
• Make it a part of your process & project plan
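As promised above, here is a minimal Python sketch of an evolving top-risks list. It ranks risks by the common risk-exposure heuristic (probability × loss); the field names, the exposure formula, and the sample risks are illustrative assumptions, not the contents of Table 5-7.

```python
# A minimal sketch of an evolving "Top 10 Risks" list, re-ranked at each
# review by risk exposure = probability * loss.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float   # 0.0 - 1.0, estimated likelihood of occurring
    loss_weeks: float    # schedule impact if the risk occurs

    @property
    def exposure(self) -> float:
        return self.probability * self.loss_weeks

risks = [
    Risk("Key teammate leaves mid-semester", 0.2, 6.0),
    Risk("Feature creep from sponsor", 0.5, 4.0),
    Risk("Unproven hardware platform", 0.3, 8.0),
]

def top_risks(risks: list[Risk], n: int = 10) -> list[Risk]:
    """Review frequently: re-rank by exposure and keep the top n."""
    return sorted(risks, key=lambda r: r.exposure, reverse=True)[:n]

for r in top_risks(risks):
    print(f"{r.exposure:4.1f}  {r.description}")
# 2.4  Unproven hardware platform
# 2.0  Feature creep from sponsor
# 1.2  Key teammate leaves mid-semester
```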
CSE Senior Design I
Overview:
Software System Architecture
Software System Test
Mike O’Dell
Based on an earlier presentation by
Bill Farrior, UTA, modified by Mike O’Dell
What is System Design?
A progressive definition of how a system will be constructed:
• Guiding principles/rules for design (Meta-Architecture)
• Top-level structure, design abstraction (Architecture Design)
• Details of all lowest-level design elements (Detailed Design)
What is Software Architecture?
• A critical bridge between what a system will do/look like, and how it will be constructed
• A blueprint for a software system and how it will be built
• An abstraction: a conceptual model of what must be done to construct the software system
• It is NOT a specification of the details of the construction
What is Software Architecture?
The top-level breakdown of how a system will be constructed:
• design principles/rules
• high-level structural components
• high-level data elements (external/internal)
• high-level data flows (external/internal)
Discussion: Architectural elements of the new ERB
System Architecture Design Process
• Define guiding principles/rules for design
• Define top-level components of the system structure (“architectural layers”)
• Define top-level data elements/flows (external and between layers)
• Deconstruct layers into major functional units (“subsystems”)
• Translate top-level data elements/flows to subsystems
Layer Example: The Internet Protocol Stack Architecture
Layers/Services (top to bottom: application, transport, network, link, physical):
• application: supporting network applications (ftp, smtp, http)
• transport: host-host data transfer (tcp, udp)
• network: routing of datagrams from source to destination (ip, routing protocols)
• link: data transfer between neighboring network elements (e.g., Ethernet, 802.11 WLAN)
• physical: bits “on the wire”
Subsystem Example: The Internet Network Layer
The network layer sits below the transport layer (TCP, UDP) and above the link and physical layers. Its major functional units:
• IP protocol: addressing conventions, datagram format, packet-handling conventions
• Routing protocols: path selection (RIP, OSPF, BGP); maintain the routing table
• ICMP protocol: error reporting, router “signaling”
Criteria for a Good Architecture (The Four I’s)
• Independence – the layers are independent of each other; each layer’s functions are internally specific and have little reliance on other layers. Changes in the implementation of one layer should not impact other layers.
• Interfaces/Interactions – the interfaces and interactions between layers are complete and well-defined, with explicit data flows.
• Integrity – the whole thing “hangs together”. It’s complete, consistent, accurate… it works.
• Implementable – the approach is feasible, and the specified system can actually be designed and built using this architecture.
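To illustrate Independence and Interfaces in code, here is a minimal Python sketch of a layered design: each layer depends only on an explicit interface, so one layer’s implementation can change without impacting the layers above it. The layer names and methods are hypothetical, not a prescribed design.

```python
# A minimal sketch: an upper layer depends only on a small, explicit
# interface, never on a specific implementation of the layer below.

from abc import ABC, abstractmethod

class TransportLayer(ABC):
    """The only thing the application layer is allowed to depend on."""
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class InMemoryTransport(TransportLayer):
    """One implementation; could be swapped for TCP without touching callers."""
    def __init__(self) -> None:
        self._queue: list[bytes] = []
    def send(self, data: bytes) -> None:
        self._queue.append(data)
    def receive(self) -> bytes:
        return self._queue.pop(0)

class Application:
    """Depends only on the TransportLayer interface (Independence)."""
    def __init__(self, transport: TransportLayer) -> None:
        self.transport = transport
    def echo(self, message: str) -> str:
        self.transport.send(message.encode())
        return self.transport.receive().decode()

print(Application(InMemoryTransport()).echo("hello"))  # hello
```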
How do you Document a Software Architecture?
Describe the “rules”: meta-architecture
• guiding principles, vision, concepts
• key decision criteria
Describe the layers
• what they do, how they interact with other layers
• what they are composed of (subsystems)
How do you Document a Software Architecture?
Describe the data flows between layers
• what are the critical data elements
• provider subsystems (sources) and consumer subsystems (sinks)
Describe the subsystems within each layer
• what does it do
• what are its critical interfaces, within and external to its layer
• what are its critical interfaces outside the system
Final Thoughts – Verification
(Diagram: each level of system definition maps to a level of system verification.)
• SRD: System Requirements (maps all specified requirements) → System Validation Test
• ADS: Architecture Specification (maps all subsystem & layer interfaces & interactions) → Integration Test
• DDS: Detailed Design Specification (maps all module interfaces & interactions) → Component Test (a.k.a. Function Test)
• Implementation (maps all modules) → Unit Test
Final Thoughts – Verification
• Unit Test: verifies that EVERY module (HW/SW) specified in the DDS operates as specified.
• Component/Function Test: verifies integrity of ALL inter-module interfaces and interactions.
• Integration Test: verifies integrity of ALL inter-subsystem interfaces and interactions.
• System Verification Test: verifies that ALL requirements are met.
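At the lowest level of this mapping, a unit test exercises one module against its DDS entry. The Python sketch below shows what that looks like in practice; the checksum module and its specification are hypothetical examples, not from the course materials.

```python
# A minimal sketch of a unit test: one module checked against its
# (hypothetical) DDS specification.

import unittest

def checksum(data: bytes) -> int:
    """Module under test. Hypothetical DDS spec: return the sum of all
    bytes, modulo 256."""
    return sum(data) % 256

class ChecksumUnitTest(unittest.TestCase):
    def test_empty_input(self):
        # Spec: an empty input has checksum 0.
        self.assertEqual(checksum(b""), 0)

    def test_wraps_modulo_256(self):
        # Spec: sums larger than 255 wrap around. 200 + 100 = 300; 300 % 256 = 44.
        self.assertEqual(checksum(bytes([200, 100])), 44)

if __name__ == "__main__":
    unittest.main()
```

Component, integration, and system tests follow the same pattern one level up: the artifact under test grows from a module to a subsystem to the whole system, and the specification it is checked against moves from the DDS to the ADS to the SRD.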