Evaluating information system effectiveness and efficiency

•SECTION ONE - Why study effectiveness?
•Problems have arisen or criticisms have been voiced
in connection with a system;
•Some indicators of the ineffectiveness of the
hardware and software being used may prompt the
review;
•Management may wish to implement a system initially
developed in one division throughout the
organization, but may want to first establish its
effectiveness;
•Post-implementation review to determine whether a
new system is meeting its objectives.
Indicators of System Ineffectiveness
•excessive down time and idle time
•slow system response time
•excessive maintenance costs
•inability to interface with new hardware/software
•unreliable system outputs
•data loss
•excessive run costs
•frequent need for program maintenance and modification
•user dissatisfaction with output format, content or timeliness.
Two approaches to measurement of system
effectiveness
•Goal-centered view - does system achieve goals set out?
•Conflicts over priorities, timing, etc. can lead to objectives being met
in the short run by sacrificing fundamental system qualities,
causing a long-run decline in the effectiveness of the system
•System resource view - desirable qualities of a system
are identified and their levels are measured.
•If the qualities exist, then information system objectives, by
inference, should be met. Measuring the qualities of the system
may give a better, longer-term view of a system's effectiveness.
•The main problem: measuring system qualities is much
more difficult than measuring goal achievement.
Two Types of Evaluations for System Effectiveness
•Relative evaluation - the auditor compares the state of goal
accomplishment after the system is implemented with the state
of goal accomplishment before the system was implemented.
•Improved task accomplishment, and
•Improved quality of working life.
•Absolute evaluation - the auditor assesses the level of goal
accomplishment after the system has been implemented.
•Operational effectiveness,
•Technical effectiveness, and
•Economic effectiveness.
Task Accomplishment - an effective I/S
improves the task accomplishment of its users.
•Providing specific measures of task accomplishment that the
auditor can use to evaluate an IS is difficult.
•Performance measures for task accomplishment differ
across applications and sometimes across organizations.
•For a manufacturing control system, measures might be:
•number of units output,
•number of defective units reworked, units scrapped
•amount of down time/idle time.
•Important to trace task accomplishment over time, as sketched
below. A system may appear to have improved for a short time
after implementation, but fall into disarray thereafter.
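A minimal sketch of this idea, using entirely hypothetical monthly figures, compares a few of the manufacturing-control measures listed above against a pre-implementation baseline so that an early improvement that later fades remains visible:

```python
# A minimal sketch (hypothetical data) of tracing task-accomplishment
# measures over time after a system goes live, so that a short-lived
# post-implementation improvement is not mistaken for a lasting one.
from dataclasses import dataclass

@dataclass
class MonthlyMetrics:
    month: str           # e.g. "M+1" = one month after implementation
    units_output: int    # number of units produced
    units_reworked: int  # defective units reworked
    downtime_hours: float

baseline = MonthlyMetrics("pre", 1000, 110, 20.0)  # pre-implementation baseline

history = [
    MonthlyMetrics("M+1", 1200, 90, 14.0),   # shortly after implementation
    MonthlyMetrics("M+2", 1350, 70, 10.5),
    MonthlyMetrics("M+6", 1100, 120, 22.0),  # apparent decline later on
]

for m in history:
    output_change = (m.units_output - baseline.units_output) / baseline.units_output
    print(f"{m.month}: output {output_change:+.0%}, "
          f"reworked {m.units_reworked}, downtime {m.downtime_hours}h")
```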
Quality of Working Life
•High quality of working life for users of a system is a
major objective in the design process. Unfortunately,
there is less agreement on the definition and
measurement of the concept of quality of working life.
•Different groups have different vested interests - some
emphasize productivity, others social outcomes
•Major advantages of such measures - relatively objective, verifiable, and
difficult to manipulate. The required data are relatively easy to
obtain.
•Major disadvantages - it is difficult to link the measures to IS quality
and to pinpoint what corrective action is needed
Operational Effectiveness Objectives
•Auditor examines how well a system meets its goals
from the viewpoint of a user who interacts with the
system on a regular basis. Four main measures:
•Frequency of use,
•Nature of use,
•Ease of use, and
•User satisfaction.
Frequency and Nature of Use
•Frequency - employed widely, but problematic
•sometimes a high quality system leads to low frequency of use
because the system permits more work to be accomplished in a
shorter period of time.
•sometimes a poor quality system leads to a low frequency of use
since users dislike the system
•Nature - can use systems in many ways
•lowest level: treat the system as a black box that provides solutions to problems
•highest level: use the system to redefine how tasks and jobs are performed and viewed
Ease of Use and User Satisfaction
•Ease of use - there is a positive correlation between users' feelings
about systems and the degree to which the systems were easy
to use. In evaluating ease of use, it is important to
identify the primary and secondary users of a system.
•Terminal location, flexibility of reporting, ease of error correction
•User satisfaction - has become an important measure of
operational effectiveness because of the difficulties and
problems associated with measures of frequency of use,
nature of use, and ease of use.
•problem finding, problem solving, input, processing, report form
Technical Effectiveness Objectives
•Has the appropriate hardware and software technology
been used to support a system, or would a change in
the supporting hardware or software technology
enable the system to meet its goals better?
•Hardware performance can be measured using hardware monitors
or more gross measures such as system response time, down time.
•Software effectiveness can be measured by examining
the history of program maintenance, modification and
run-time resource consumption. The history of program
repair maintenance indicates the quality of the logic in
a program; i.e., extensive error correction implies
inappropriate design, coding or testing, failure to use
structured approaches, etc. (see the sketch below).
•Major problem: hardware and software performance are not independent
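A minimal sketch of the repair-maintenance analysis mentioned above, assuming a simple hypothetical change log of (program, change type) entries:

```python
# A minimal sketch (hypothetical change-log format) of summarising program
# repair-maintenance history: a high count of corrective changes for a
# program suggests weaknesses in its design, coding or testing.
from collections import Counter

# Each entry: (program name, change type) - "repair" means corrective maintenance
change_log = [
    ("AR100", "repair"), ("AR100", "repair"), ("AR100", "enhancement"),
    ("GL200", "repair"), ("GL200", "enhancement"), ("AP300", "enhancement"),
]

repairs = Counter(prog for prog, kind in change_log if kind == "repair")
for program, count in repairs.most_common():
    print(f"{program}: {count} corrective changes")
```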
Economic Effectiveness Objectives
• Requires the identification of costs and benefits and the
proper evaluation of costs and benefits - a difficult task since
costs and benefits depend on the nature of the IS.
•For example, some of the benefits expected and derived from an IS
designed to support a social service environment
would differ significantly from those of a system designed to
support manufacturing activities. Some of the most
significant costs and benefits may be intangible and
difficult to identify, and next to impossible to value
(one common valuation approach for tangible items is sketched below).
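Where costs and benefits are tangible, one common (though by no means the only) valuation approach is to discount yearly net benefits to a net present value; the figures and discount rate below are hypothetical:

```python
# A minimal sketch of one common way to value tangible costs and benefits:
# discounting yearly net benefits to a net present value. All figures and
# the discount rate are hypothetical; intangible items are deliberately
# left out because, as noted above, they are hard or impossible to value.
def net_present_value(net_benefits, rate):
    """net_benefits[t] is the net benefit in year t (year 0 = initial outlay)."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

cash_flows = [-250_000, 80_000, 90_000, 95_000, 95_000]  # hypothetical
print(f"NPV at 10%: {net_present_value(cash_flows, 0.10):,.0f}")
```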
SECTION TWO - Evaluating system efficiency
•Why would an auditor get involved in a study of system
efficiency?
•evaluate an existing operational system to
determine whether its performance can be improved;
•evaluate alternate systems that the installation is
considering purchasing or leasing. For example,
management may be considering two systems with
different database management approaches.
•To determine whether a system is efficient, the auditor
will need to identify:
• an appropriate performance index to assess system efficiency.
• an appropriate workload model to measure the system's
performance in the context of that workload.
Performance Indices
•Performance indices measure system efficiency: quantitatively, how well a
system achieves an efficiency criterion. They have several functions:
•allow users to decide whether a system will meet needs,
•permit comparison of alternate systems, and
•show whether changes to the hardware/software configuration of
system have produced the desired effect.
•Should be expressed using ranges or probability distributions - an average
may be deceiving (look at response time variations; see the sketch below)
•Expressed in terms of workload - e.g., response time
of an interactive system will vary depending on the
number and the nature of the jobs in the system.
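A minimal sketch of why an average can deceive, using a hypothetical response-time sample: two systems with the same mean response time can differ sharply in their tails, so the index is better reported as a range or percentile:

```python
# A minimal sketch (hypothetical response-time samples, in seconds) of why an
# average can deceive: two systems with the same mean can have very different
# tails, so the index is better expressed as a range or percentile.
import statistics

def percentile(values, p):
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

system_a = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0]   # steady
system_b = [0.3, 0.3, 0.4, 0.3, 0.3, 0.4, 0.3, 5.7]   # same mean, long tail

for name, sample in [("A", system_a), ("B", system_b)]:
    print(f"System {name}: mean={statistics.mean(sample):.2f}s "
          f"95th pct={percentile(sample, 95):.2f}s")
```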
Indices - Timeliness
•How quickly a system is able to provide users with the
output they require.
•For a batch system, the index typically used is turnaround time - the
length of time between submission of a job and receipt
of the complete output.
•For interactive systems, the index is response time - the length
of time between submission of an input transaction to
the system and receipt of the first character of output
(both are sketched below).
•Must be defined in terms of a unit of work and the priority
categorization given to the unit of work.
•In a batch system the unit of work usually is a job.
•In an interactive system it may be a job consisting of
multiple transactions, or a single transaction.
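A minimal sketch of the two timeliness indices, computed from hypothetical timestamps:

```python
# A minimal sketch (hypothetical timestamps) of the two timeliness indices:
# turnaround time for a batch job (submission to complete output) and
# response time for an interactive transaction (submission to first output).
from datetime import datetime

def seconds_between(start, end):
    return (end - start).total_seconds()

# Batch job: submitted vs. complete output received
job_submitted = datetime(2024, 5, 1, 22, 0, 0)
job_completed = datetime(2024, 5, 1, 23, 15, 0)
print(f"Turnaround time: {seconds_between(job_submitted, job_completed):.0f} s")

# Interactive transaction: submitted vs. first character of output
txn_submitted    = datetime(2024, 5, 2, 9, 30, 0)
txn_first_output = datetime(2024, 5, 2, 9, 30, 1, 250000)
print(f"Response time: {seconds_between(txn_submitted, txn_first_output):.3f} s")
```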
Indices - Throughput & Utilization
•Throughput indices measure how much work is
done by the system over a period of time.
•Throughput rate of a system is the amount of work done per unit of time
•The system capability is the maximum achievable throughput rate.
•Throughput indices must be defined in terms of some
unit of work: a job, a task, or an instruction.
•For a given workload, the more responsive a system, the greater its throughput.
•Utilization indices measure the proportion of time a
system resource is busy.
•For example, the CPU utilization index is calculated by
dividing the amount of time the CPU is busy by the
total amount of time the system is running.
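A minimal sketch of a throughput index and a utilization index, using hypothetical measurements:

```python
# A minimal sketch (hypothetical measurements) of a throughput index and a
# utilization index: throughput rate = work done per unit of time, and
# CPU utilization = CPU busy time / total elapsed time.
jobs_completed = 540        # unit of work: a job
elapsed_hours  = 3.0
cpu_busy_hours = 2.1

throughput_rate = jobs_completed / elapsed_hours   # jobs per hour
cpu_utilization = cpu_busy_hours / elapsed_hours   # proportion of time busy

print(f"Throughput rate: {throughput_rate:.0f} jobs/hour")
print(f"CPU utilization: {cpu_utilization:.0%}")
```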
Workload
•A system's workload is the set of resource demands
imposed on the system by the set of jobs processed
during a given time period.
•Using the real workload of the system for evaluation may be
too costly and too disruptive.
•To measure efficiency for a representative workload, the
time period for evaluation may be too long.
•Also, the real workload cannot be used if the system to be evaluated is
not operational.
•Need a workload model representative of the real workload
Workload Models
•Natural workload models, or benchmarks, are
constructed by taking some subset of the real workload.
•In a time subset, the content of the workload model is the same as
the real workload, but the time interval for performance indices
is smaller than the interval for the real workload.
•In a content subset, sample jobs from the real workload are
selected in some way (see the sketch after this list).
•Artificial workload models are not constructed from jobs in the
real workload; they are useful when the system is unable to process
the natural workload
•Natural - more representative and less costly to construct
•Artificial - more flexible and more compact
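A minimal sketch of building a natural workload model as a content subset: a random sample of jobs is drawn from a hypothetical real-workload log to serve as the benchmark (stratifying or weighting the sample would make it more representative, but is omitted for brevity):

```python
# A minimal sketch (hypothetical job log) of building a natural workload model
# as a content subset: a sample of jobs is drawn from the real workload and
# used as the benchmark.
import random

real_workload = [f"job-{i:04d}" for i in range(1, 1001)]  # jobs from the log

random.seed(7)                                   # reproducible benchmark selection
benchmark = random.sample(real_workload, k=50)   # content subset of 50 jobs

print(f"Benchmark contains {len(benchmark)} of {len(real_workload)} jobs")
print("First five:", benchmark[:5])
```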
SECTION THREE - Comparison of 3 Audit
Approaches - Objectives
•F/S audit - express an opinion as to whether financial
statements are in accordance with GAAP
•Effectiveness audit - express an opinion on whether a
system achieves the goals set for the system. These
goals may be quite broad or specific.
•Audits of system efficiency - whether maximum output is
achieved at minimum cost or with minimum input
assuming a given level of quality.
Comparison of 3 Approaches - Planning
•F/S audit - part of planning is identifying controls upon which the
auditor can rely to reduce other audit verification
procedures, or identifying controls upon which the auditor is forced
to rely
•Effectiveness audit - identify goals and measures for determining
whether the goals were attained during a specific period. If the
goals are explicit, measurement is more straightforward; however, when
goals are broad and multi-dimensional, the auditor may need to
develop relevant measures and indicators of achievement.
•Audits of system efficiency - often comparable to a scientific
experiment. A scheme for obtaining measurements must be
developed explicitly for the performance index defined. For
example, if average turnaround time is used as a measure of
efficiency, then the experimental design must control for
various job sizes, time of day, etc. (see the sketch below).
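A minimal sketch of this kind of experimental control, using hypothetical observations: turnaround times are grouped by job-size class and time-of-day band so that averages are only compared within comparable strata:

```python
# A minimal sketch (hypothetical observations) of controlling an efficiency
# experiment: turnaround times are grouped by job-size class and time of day
# so that averages are only compared within comparable strata.
from collections import defaultdict
from statistics import mean

# (job size class, time-of-day band, turnaround time in minutes)
observations = [
    ("small", "peak", 12), ("small", "peak", 15), ("small", "off-peak", 6),
    ("large", "peak", 48), ("large", "off-peak", 30), ("large", "off-peak", 28),
]

by_stratum = defaultdict(list)
for size, band, minutes in observations:
    by_stratum[(size, band)].append(minutes)

for (size, band), times in sorted(by_stratum.items()):
    print(f"{size:5s} / {band:8s}: mean turnaround {mean(times):.1f} min")
```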
Comparison of 3 Approaches - Execution
•F/S audit - controls analysis and CAATs
•Effectiveness - Once the system goals have been
identified, measures of goal achievement have been
selected, and the population to be studied has been
identified, it is necessary to actually obtain measures of
goal achievement and analyze the results.
•Efficiency - During the execution phase the benchmark
or workload model test is actually run and the results are
subjected to analysis. Care must be taken to
control for interference by factors other than those built
into the model, and measurements must be taken
carefully.
Comparison of 3 Approaches - Reporting
•F/S audit - a letter regarding internal control (I/C) deficiencies
•Effectiveness - the analysis will likely highlight areas of
successful attainment of objectives as well as failures.
Explanations of the causes of significant successes and
failures should be sought out and included in the report.
•Efficiency - reports of studies of system efficiency must
typically contain specific recommendations identifying
ways in which the identified inefficiencies can be
eliminated.