EECC551 - Shaaban
Lec # 13, Winter 2000, 2-8-2001

• Magnetic Disk Characteristics
• I/O Connection Structure
• Types of Buses
• Cache & I/O
• I/O Performance Metrics
• I/O System Modeling Using Queuing Theory
• Designing an I/O System
• RAID (Redundant Array of Inexpensive Disks)
• I/O Benchmarks
• ABCs of UNIX File Systems
• A Study Comparing UNIX File System Performance
RAID (Redundant Array of Inexpensive Disks)
• The term RAID was coined in a 1988 paper by Patterson, Gibson
and Katz of the University of California at Berkeley.
• In that article, the authors proposed that large arrays of small,
inexpensive disks (usually SCSI; IDE support was only just appearing) could be
used to replace the large, expensive disks used on mainframes and
minicomputers.
• In such arrays files are "striped" and/or mirrored across multiple
drives.
• Their analysis showed that the cost per megabyte could be
substantially reduced, while both performance (throughput) and
fault tolerance could be increased.
• The catch: array reliability without any redundancy:
Reliability of N disks = Reliability of 1 disk ÷ N
50,000 hours ÷ 70 disks ≈ 700 hours
– Disk system MTTF drops from about 6 years to about 1 month!
– Arrays without redundancy are too unreliable to be useful!
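• A minimal sketch of the reliability arithmetic above, assuming independent disk failures so that the array MTTF is simply the single-disk MTTF divided by the number of disks (Python; the hour constants are approximations):

    HOURS_PER_YEAR = 8760
    HOURS_PER_MONTH = 730            # roughly 30.4 days

    def array_mttf(disk_mttf_hours, n_disks):
        """MTTF of an N-disk array with no redundancy (independent failures)."""
        return disk_mttf_hours / n_disks

    single = 50_000                          # hours, from the slide
    array = array_mttf(single, 70)           # the 70-disk example
    print(f"1 disk:   {single / HOURS_PER_YEAR:.1f} years")                     # ~5.7 years
    print(f"70 disks: {array:.0f} hours (~{array / HOURS_PER_MONTH:.1f} month)")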
Manufacturing Advantages of Disk Arrays
Figure (Disk Product Families): conventional disk product families span four form factors (3.5”, 5.25”, 10”, 14”) from low-end to high-end systems; a disk array is built from a single small form factor (3.5”).
RAID Subsystem Organization
Figure: host → host adapter → array controller → single-board disk controllers → disks.
– The array controller manages the interface to the host, DMA control, buffering, and parity logic.
– Single-board disk controllers handle physical device control; they are often piggy-backed in small form-factor devices.
– Striping software is off-loaded from the host to the array controller: no application modifications and no reduction of host performance.
Basic RAID Organizations
• Non-Redundant (RAID Level 0)
• Mirrored (RAID Level 1)
• Memory-Style ECC (RAID Level 2)
• Bit-Interleaved Parity (RAID Level 3)
• Block-Interleaved Parity (RAID Level 4)
• Block-Interleaved Distributed-Parity (RAID Level 5)
• P+Q Redundancy (RAID Level 6)
• Striped Mirrors (RAID Level 10)
Non-Redundant (RAID Level 0)
• RAID 0 simply stripes data across all drives (minimum 2 drives) to
increase data throughput but provides no fault protection.
– Sequential blocks of data are written across multiple disks in stripes,
as follows:
• The size of a data block, which is known as the "stripe width", varies
with the implementation, but is always at least as large as a disk's sector
size.
• This scheme offers the best write performance since it never needs to
update redundant information.
• It does not have the best read performance.
– Redundancy schemes that duplicate data, such as mirroring, can
perform better on reads by selectively scheduling requests on the disk
with the shortest expected seek and rotational delays.
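• A minimal sketch of the block-to-disk mapping behind the striping described above; the stripe-unit size and disk count below are illustrative assumptions, not values from the text (Python):

    def raid0_map(logical_block, n_disks, blocks_per_stripe_unit=1):
        """Map a logical block number to (disk index, block offset on that disk)."""
        unit = logical_block // blocks_per_stripe_unit   # which stripe unit overall
        disk = unit % n_disks                            # stripe units rotate across disks
        stripe = unit // n_disks                         # which full stripe on each disk
        offset = stripe * blocks_per_stripe_unit + logical_block % blocks_per_stripe_unit
        return disk, offset

    # Sequential blocks 0..7 on a 4-disk array land on disks 0,1,2,3,0,1,2,3
    for block in range(8):
        print(block, raid0_map(block, n_disks=4))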
Optimal Size of Data Striping Unit
(Applies to RAID Levels 0, 5, 6, 10)
• Lee and Katz [1991] use an analytic model of non-redundant disk arrays to derive an equation for the optimal size of the data striping unit.
• They show that the optimal size of the data striping unit is equal to:
Optimal striping unit = √( P × X × (L − 1) × Z / N )
• Where:
– P is the average disk positioning time,
– X is the average disk transfer rate,
– L is the concurrency, Z is the request size, and
– N is the array size in disks.
• Their equation also predicts that the optimal size of the data striping unit depends only on the relative rates at which a disk positions and transfers data, P × X, rather than on P or X individually.
• Lee and Katz show that the optimal striping unit depends on request size; Chen and Patterson show that this dependency can be ignored without significantly affecting performance.
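• A small sketch that evaluates the expression above; the numeric inputs are made-up placeholders, not figures from the lecture (Python):

    from math import sqrt

    def optimal_stripe_unit(P, X, L, Z, N):
        """sqrt(P * X * (L - 1) * Z / N): P = avg positioning time (s),
        X = avg transfer rate (bytes/s), L = concurrency,
        Z = request size (bytes), N = disks in the array."""
        return sqrt(P * X * (L - 1) * Z / N)

    # Placeholder workload: 15 ms positioning, 2 MB/s transfer, 8 concurrent
    # requests of 64 KB each on a 16-disk array.
    unit = optimal_stripe_unit(P=0.015, X=2_000_000, L=8, Z=64 * 1024, N=16)
    print(f"optimal striping unit ~ {unit / 1024:.0f} KB")   # ~29 KB here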
Mirrored (RAID Level 1)
• Utilizes mirroring or shadowing of data using twice as many disks as a
non-redundant disk array.
• Whenever data is written to a disk the same data is also written to a
redundant disk, so that there are always two copies of the information.
• When data is read, it can be retrieved from the disk with the shorter queuing, seek, and rotational delays (see the sketch after this list).
• If a disk fails, the other copy is used to service requests.
• Mirroring is frequently used in database applications where availability
and transaction rate are more important than storage efficiency.
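• A toy sketch of the read-scheduling idea above: send each read to whichever copy currently looks cheaper; the delay fields are stand-ins for what a real scheduler would estimate (Python):

    def pick_replica(replicas):
        """replicas: per-copy estimates of queuing, seek, and rotational delay (ms)."""
        def estimated_delay(r):
            return r["queue_ms"] + r["seek_ms"] + r["rotation_ms"]
        return min(range(len(replicas)), key=lambda i: estimated_delay(replicas[i]))

    mirror = [dict(queue_ms=4.0, seek_ms=8.0, rotation_ms=4.2),   # busy copy
              dict(queue_ms=0.0, seek_ms=2.5, rotation_ms=4.2)]   # idle, nearby copy
    print(pick_replica(mirror))   # prints 1: the idle disk with the shorter seek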
Memory-Style ECC (RAID Level 2)
• RAID 2 performs data striping with a block size of one bit or byte, so
that all disks in the array must be read to perform any read operation.
• A RAID 2 system would normally have as many data disks as the word
size of the computer, typically 32.
• In addition, RAID 2 requires the use of extra disks to store an error-correcting code for redundancy.
– With 32 data disks, a RAID 2 system would require 7 additional disks for a
Hamming-code ECC.
– Such an array of 39 disks was the subject of a U.S. patent granted to Unisys
Corporation in 1988, but no commercial product was ever released.
• For a number of reasons, including the fact that modern disk drives contain their own internal ECC, RAID 2 is not a practical disk array scheme.
Bit-Interleaved Parity (RAID Level 3)
• One can improve upon memory-style ECC disk arrays (RAID 2) by
noting that, unlike memory component failures, disk controllers can
easily identify which disk has failed. Thus, one can use a single parity
disk rather than a set of parity disks to recover lost information.
• As with RAID 2, RAID 3 must read all data disks for every read
operation.
– This requires synchronized disk spindles for optimal performance, and
works best on a single-tasking system with large sequential data
requirements. An example might be a system used to perform video
editing, where huge video files must be read sequentially.
Block-Interleaved Parity (RAID Level 4)
• RAID 4 is similar to RAID 3 except that blocks of data are striped across the disks rather than bits/bytes.
• Read requests smaller than the striping unit access only a single data disk.
• Write requests must update the requested data blocks and must also compute and update the parity block.
– For large writes that touch blocks on all disks, parity is easily computed by exclusive-OR’ing the new data for each disk.
– For small write requests that update only one data disk, parity is computed by noting how the new data differs from the old data and applying those differences to the parity block.
• This can be an important performance improvement for small or random file access (like a typical database application) if the application record size can be matched to the RAID 4 block size.
Block-Interleaved Distributed-Parity (RAID Level 5)
• The block-interleaved distributed-parity disk array eliminates the
parity disk bottleneck present in RAID 4 by distributing the parity
uniformly over all of the disks.
• An additional, frequently overlooked advantage to distributing the
parity is that it also distributes data over all of the disks rather than
over all but one.
• RAID 5 has the best small read, large read and large write performance
of any redundant disk array.
– Small write requests are somewhat inefficient compared with redundancy schemes such as mirroring, however, due to the need to perform read-modify-write operations to update parity.
Problems of Disk Arrays: Small Writes
RAID-5: Small Write Algorithm
1 Logical Write = 2 Physical Reads + 2 Physical Writes
To update data block D0 to new data D0' in a stripe D0 D1 D2 D3 with parity P:
1. Read the old data D0.
2. Read the old parity P.
3. XOR the old data, the new data, and the old parity to form the new parity P', and write the new data D0'.
4. Write the new parity P'.
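• A minimal sketch of the read-modify-write sequence above, with byte strings standing in for disk blocks; a real controller overlaps these I/Os (Python):

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def raid5_small_write(disks, data_disk, parity_disk, new_data):
        """1 logical write = 2 physical reads + 2 physical writes."""
        old_data = disks[data_disk]        # 1. read old data (D0)
        old_parity = disks[parity_disk]    # 2. read old parity (P)
        disks[data_disk] = new_data        # 3. write new data (D0')
        disks[parity_disk] = xor(xor(old_data, new_data), old_parity)   # 4. write P'

    # Stripe D0..D3 on disks 0-3, parity on disk 4
    disks = [bytes([v] * 8) for v in (1, 2, 3, 4)]
    parity = disks[0]
    for d in disks[1:]:
        parity = xor(parity, d)
    disks.append(parity)

    raid5_small_write(disks, data_disk=0, parity_disk=4, new_data=bytes([9] * 8))

    # The delta-updated parity matches a full recomputation over the new stripe.
    check = disks[0]
    for d in disks[1:4]:
        check = xor(check, d)
    assert disks[4] == check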
P+Q Redundancy (RAID Level 6)
• An enhanced RAID 5 that uses stronger error-correcting codes.
• One such scheme, called P+Q redundancy, uses Reed-Solomon codes, in addition to parity, to protect against up to two disk failures using the bare minimum of two redundant disks.
• P+Q redundant disk arrays are structurally very similar to block-interleaved distributed-parity disk arrays (RAID 5) and operate in much the same manner.
– In particular, P+Q redundant disk arrays also perform small write operations using a read-modify-write procedure, except that instead of four disk accesses per write request, P+Q redundant disk arrays require six disk accesses due to the need to update both the ‘P’ and ‘Q’ information.
RAID 5/6: High I/O Rate Parity
• A logical write becomes four physical I/Os.
• Independent writes are possible because of the interleaved parity.
• Reed-Solomon codes (“Q”) give protection during reconstruction.
• Targeted for mixed applications.
Figure: data blocks D0–D23 and parity stripe units P laid out across the disk columns; each stripe holds one parity stripe unit, and the parity rotates to a different disk on successive stripes while logical disk addresses increase along the stripes.
RAID 10 (Striped Mirrors)
• RAID 10 (also known as RAID 1+0) was not mentioned in the original
1988 article that defined RAID 1 through RAID 5.
• The term is now used to mean the combination of RAID 0 (striping)
and RAID 1 (mirroring).
• Disks are mirrored in pairs for redundancy and improved
performance, then data is striped across multiple disks for maximum
performance.
• For example, in a four-disk array, Disks 0 & 2 and Disks 1 & 3 form the mirrored pairs.
• Obviously, RAID 10 uses more disk space to provide redundant data than RAID 5. However, it also provides a performance advantage by reading from all disks in parallel while eliminating the write penalty of RAID 5.
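• A small sketch of the four-disk layout described above, with Disks 0 & 2 and Disks 1 & 3 as the mirrored pairs; the helper name is made up (Python):

    def raid10_targets(logical_block):
        """Return ((disk, mirror), block offset) for a 4-disk RAID 10 in which
        disks (0, 2) and (1, 3) are mirrored pairs and data is striped
        across the two pairs."""
        pair = logical_block % 2          # stripe across the two mirrored pairs
        offset = logical_block // 2       # block position within each disk of the pair
        return ((0, 2) if pair == 0 else (1, 3)), offset

    for block in range(4):
        print(block, raid10_targets(block))
    # block 0 -> disks (0, 2), block 1 -> disks (1, 3), block 2 -> disks (0, 2), ...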
RAID Levels Comparison:
Throughput Per Dollar Relative to RAID Level 0.
RAID Reliability
• Redundancy in disk arrays is motivated by the need to overcome
disk failures.
• When only independent disk failures are considered, a simple
parity scheme works admirably. Patterson, Gibson, and Katz
derive the mean time between failures for a RAID level 5 to be:
MTTF(RAID) = MTTF(disk)² / ( N × (G − 1) × MTTR(disk) )
where:
– MTTF(disk) is the mean-time-to-failure of a single disk,
– MTTR(disk) is the mean-time-to-repair of a single disk,
– N is the total number of disks in the disk array,
– G is the parity group size.
• For illustration purposes, let us assume we have 100 disks that each have a mean time to failure (MTTF) of 200,000 hours and a mean time to repair of one hour. If we organize these 100 disks into parity groups of average size 16, then the mean time to failure of the system would be an astounding 3,000 years! Mean times to failure of this magnitude lower the chances of failure over any given period of time.
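• A quick check of the 3,000-year figure by plugging the numbers above into the formula (Python):

    def raid5_mttf(mttf_disk, n_disks, group_size, mttr_disk):
        """MTTF(disk)^2 / (N * (G - 1) * MTTR(disk)), all times in hours."""
        return mttf_disk ** 2 / (n_disks * (group_size - 1) * mttr_disk)

    hours = raid5_mttf(mttf_disk=200_000, n_disks=100, group_size=16, mttr_disk=1)
    print(f"{hours:,.0f} hours ~ {hours / 8760:,.0f} years")   # ~26.7M hours, ~3,000 years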
System Availability: Orthogonal RAIDs
Figure: an array controller drives several string controllers, each controlling a string of disks; a data recovery group spans one disk from each string, orthogonal to the string controllers.
Data Recovery Group: unit of data redundancy
Redundant Support Components: power supplies, controller, cables
System-Level Availability
Figure: two hosts connected through fully dual-redundant I/O controllers and array controllers down to the disk recovery groups.
Goal: no single points of failure.
With duplicated paths, higher performance can be obtained when there are no failures.
I/O Benchmarks
• Processor benchmarks classically aim at response time for a fixed-size problem.
• I/O benchmarks typically measure throughput, possibly with an upper limit on response times (or on the 90th percentile of response times).
• Traditional I/O benchmarks fix the problem size in the benchmark.
• Examples:
Benchmark    Size of Data   % Time I/O   Year
I/OStones    1 MB           26%          1990
Andrew       4.5 MB         4%           1988
– Not much I/O time in benchmarks
– Limited problem size
– Not measuring disk (or even main memory)
The Ideal I/O Benchmark
• An I/O benchmark should help system designers and users understand why the system performs as it does.
• The performance of an I/O benchmark should be limited by the I/O devices, to maintain the focus on measuring and understanding I/O systems.
• The ideal I/O benchmark should scale gracefully over a wide range of current and future machines; otherwise I/O benchmarks quickly become obsolete as machines evolve.
• A good I/O benchmark should allow fair comparisons across machines.
• The ideal I/O benchmark would be relevant to a wide range of applications.
• In order for results to be meaningful, benchmarks must be tightly specified. Results should be reproducible by general users; optimizations which are allowed and disallowed must be explicitly stated.
I/O Benchmarks Comparison
Self-scaling I/O Benchmarks
• Alternative to traditional I/O benchmarks: self-scaling I/O benchmarks, which automatically and dynamically increase aspects of the workload to match the characteristics of the system being measured.
– Measures wide range of current & future applications
• Types of self-scaling benchmarks:
– Transaction Processing - Interested in IOPS not bandwidth
• TPC-A, TPC-B, TPC-C
– NFS: SPEC SFS/LADDIS - average response time and throughput.
– Unix I/O - Performance of file systems
• Willy
I/O Benchmarks: Transaction Processing
• Transaction Processing (TP) (or On-line TP=OLTP)
– Changes to a large body of shared information from many terminals,
with the TP system guaranteeing proper behavior on a failure
– If a bank’s computer fails when a customer withdraws money, the TP
system would guarantee that the account is debited if the customer
received the money and that the account is unchanged if the money
was not received
– Airline reservation systems & banks use TP
• Atomic transactions make this work.
• Each transaction => 2 to 10 disk I/Os and 5,000 to 20,000 CPU instructions per disk I/O (see the sketch after this list).
– The instruction count per I/O depends on the efficiency of the TP software and on avoiding disk accesses by keeping information in main memory.
• Classic metric is Transactions Per Second (TPS)
– Under what workload? how machine configured?
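• A back-of-the-envelope sketch using the per-transaction costs above; the disk count, per-disk I/O rate, and CPU speed plugged in are illustrative assumptions, not measured values (Python):

    def peak_tps(n_disks, ios_per_sec_per_disk, ios_per_txn,
                 cpu_mips, instructions_per_io):
        """Upper bound on transactions/second, taking the tighter of the
        disk-limited and CPU-limited rates."""
        disk_bound = n_disks * ios_per_sec_per_disk / ios_per_txn
        cpu_bound = cpu_mips * 1e6 / (instructions_per_io * ios_per_txn)
        return min(disk_bound, cpu_bound)

    # Assumed system: 20 disks at 100 I/Os per second each, a 500-MIPS CPU;
    # per the slide, 5 disk I/Os and 10,000 instructions per disk I/O per transaction.
    print(peak_tps(20, 100, ios_per_txn=5, cpu_mips=500, instructions_per_io=10_000))
    # -> 400.0 transactions per second, disk-limited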
I/O Benchmarks: TPC-C Complex OLTP
• Models a wholesale supplier managing orders.
• Order-entry conceptual model for the benchmark.
• Workload = 5 transaction types.
• Users and database scale linearly with throughput.
• Defines a full-screen end-user interface.
• Metrics: new-order rate (tpmC) and price/performance ($/tpmC).
• Approved July 1992
SPEC SFS/LADDIS Predecessor:
NFSstones
• NFSstones: a synthetic benchmark that generates a series of NFS requests from a single client to test a server; the mix of reads, writes, and commands and the file sizes were taken from other studies.
– Problem: one client could not always stress the server.
– Files and block sizes were not realistic.
– Clients had to run SunOS.
SPEC SFS/LADDIS
• 1993 Attempt by NFS companies to agree on standard
benchmark: Legato, Auspex, Data General, DEC,
Interphase, Sun.
• Like NFSstones but:
– Run on multiple clients & networks (to prevent bottlenecks)
– Same caching policy in all clients
– Reads: 85% full block & 15% partial blocks
– Writes: 50% full block & 50% partial blocks
– Average response time: 50 ms
– Scaling: for every 100 NFS ops/sec, increase capacity by 1 GB (see the sketch after this list).
– Results: plot of server load (throughput) vs. response time & number of users
• Assumes: 1 user => 10 NFS ops/sec
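• A tiny sketch of the scaling rules above: 1 GB of capacity per 100 NFS ops/sec, and 1 simulated user per 10 NFS ops/sec (Python):

    def sfs_sizing(target_nfs_ops_per_sec):
        """Capacity (GB) and user count implied by the LADDIS scaling rules."""
        capacity_gb = target_nfs_ops_per_sec / 100   # 1 GB per 100 NFS ops/sec
        users = target_nfs_ops_per_sec / 10          # 1 user per 10 NFS ops/sec
        return capacity_gb, users

    print(sfs_sizing(1_000))   # 1,000 NFS ops/sec -> (10.0 GB, 100.0 users)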
Unix I/O Benchmarks: Willy
• UNIX File System Benchmark that gives insight into I/O
system behavior (Chen and Patterson, 1993)
• Self scaling to automatically explore system size
• Examines five parameters
– Unique bytes touched: data size; locality via LRU
• Gives file cache size
– Percentage of reads: %writes = 1 – % reads; typically 50%
• 100% reads gives peak throughput
– Average I/O Request Size: Bernoulli, C=1
– Percentage sequential requests: typically 50%
– Number of processes: concurrency of workload (number of processes issuing I/O requests)
• Fixes four parameters while varying one parameter (see the sketch below)
• Searches the space to find high throughput
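• A minimal sketch of the “fix four, vary one” idea; the parameter names and the workload stub below are placeholders, not Willy’s actual interface (Python):

    # The five workload parameters, held at a fixed baseline.
    baseline = dict(unique_mb=100, pct_reads=50, request_kb=32,
                    pct_sequential=50, processes=1)

    def run_workload(params):
        # Stand-in for issuing the real I/O workload and measuring MB/s;
        # the fake numbers below just mimic a file-cache "cliff".
        return 30.0 if params["unique_mb"] <= 100 else 3.0

    def sweep(vary, values):
        """Vary one parameter while the other four stay at the baseline."""
        return {v: run_workload(dict(baseline, **{vary: v})) for v in values}

    print(sweep("unique_mb", [10, 50, 100, 500, 1000]))
    # The throughput drop past 100 MB reveals the effective file-cache size.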
OS Policies and I/O Performance
• Performance potential determined by HW: CPU, Disk,
bus, memory system.
• Operating system policies can determine how much of
that potential is achieved.
• OS Policies:
1) How much main memory allocated for file cache?
2) Can boundary change dynamically?
3) Write policy for disk cache.
• Write Through with Write Buffer
• Write Back
ABCs of UNIX File Systems
• Key Issues
– File vs. Raw I/O
– File Cache Size Policy
– Write Policy
– Local Disk vs. Server Disk
• File vs. Raw:
– File system access is the norm: standard policies apply
– Raw: alternate I/O system to avoid file system, used by data bases
• File Cache Size Policy
– Files are cached in main memory, rather than being accessed from
disk
– With older UNIX, the % of main memory dedicated to the file cache is fixed at system generation (e.g., 10%)
– With newer UNIX, the % of main memory used for the file cache varies depending on the amount of file I/O (e.g., up to 80%)
ABCs of UNIX File Systems
• Write Policy
– File Storage should be permanent; either write immediately
or flush file cache after fixed period (e.g., 30 seconds)
– Write Through with Write Buffer
– Write Back
– Write Buffer is often confused with Write Back:
• With Write Through plus a Write Buffer, all writes still go to disk, but the writes are asynchronous, so the processor doesn’t have to wait for the disk write.
• Write Back will combine multiple writes to the same page; hence it can be called Write Cancelling (see the sketch below).
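• A toy sketch of the “write cancelling” effect described above: a write-back cache keyed by page keeps only the last write to each page, so repeated writes to one page reach the disk once; the names are illustrative (Python):

    class WriteBackCache:
        """Dirty pages held in memory; later writes to a page cancel earlier ones."""
        def __init__(self):
            self.dirty = {}              # page number -> latest data
            self.disk_writes = 0

        def write(self, page, data):
            self.dirty[page] = data      # combine: only the newest data is kept

        def flush(self):
            self.disk_writes += len(self.dirty)   # one physical write per dirty page
            self.dirty.clear()

    cache = WriteBackCache()
    for i in range(100):
        cache.write(page=7, data=bytes([i]))
    cache.flush()
    print(cache.disk_writes)             # 1, not 100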
ABCs of UNIX File Systems
• Local vs. Server
– Unix File systems have historically had different policies
(and even file systems) for local client vs. remote server
– NFS local disk allows a 30-second delay to flush writes
– NFS server disk writes through to disk on file close
– Cache coherency problem if clients are allowed to have file caches in addition to the server file cache
• NFS just writes through on file close; as a stateless protocol it periodically gets new copies of file blocks
• Other file systems use cache coherency with write back to check state and selectively invalidate or update
Network File Systems
Figure: on the client, an application program calls into the UNIX system call layer and the virtual file system interface; local accesses go to the UNIX file system and the block device driver, while remote accesses go to the NFS client, through the RPC/transmission protocols and the network protocol stack, and across the network to the server, where the RPC/transmission protocols deliver the request to the NFS file system server routines, which access the server’s files through its own virtual file system interface.
UNIX File System Performance Study
Using Willy
9 machines & operating systems (desktop and mini/mainframe; the Convex C2/240 and IBM 3090/600J VF are the mini/mainframe systems):

Machine               OS            Year   Price        Memory
Alpha AXP 3000/400    OSF/1         1993   $30,000      64 MB
DECstation 5000/200   Sprite LFS    1990   $20,000      32 MB
DECstation 5000/200   Ultrix 4.2    1990   $20,000      32 MB
HP 730                HP/UX 8 & 9   1991   $35,000      64 MB
IBM RS/6000/550       AIX 3.1.5     1991   $30,000      64 MB
SparcStation 1+       SunOS 4.1     1989   $30,000      28 MB
SparcStation 10/30    Solaris 2.1   1992   $20,000      128 MB
Convex C2/240         Convex OS     1988   $750,000     1024 MB
IBM 3090/600J VF      AIX/ESA       1990   $1,000,000   128 MB
Self-Scaling Benchmark Parameters
Disk Performance
Figure: disk performance in megabytes per second for 32 KB reads, by machine and operating system:
– Convex C240, ConvexOS 10 (4 IPI-2 disks, RAID): 4.2
– SS 10, Solaris 2 (5400 RPM SCSI-II disk): 2.4
– AXP/4000, OSF/1: 2.0
– RS/6000, AIX: 1.6
– HP 730, HP/UX 9: 1.4
– 3090, AIX/ESA (IBM channel, IBM 3390 disk): 1.1
– Sparc1+, SunOS 4.1: 0.7
– DS5000, Ultrix: 0.6
– DS5000, Sprite: 0.5
• SS 10 disk spins at 5400 RPM; 4 IPI disks on the Convex.
File Cache Performance
• UNIX File System Performance: not how fast disk, but
whether disk is used (File cache has 3 to 7 x disk perf.)
• 4X speedup between generations; DEC & Sparc
Figure: file cache performance in megabytes per second, by machine and operating system:
– AXP/4000, OSF/1: 31.8
– RS/6000, AIX: 28.2
– HP 730, HP/UX 9: 27.9
– 3090, AIX/ESA: 27.2
– SS 10, Solaris 2: 11.4
– Convex C240, ConvexOS 10: 9.9
– DS5000, Sprite: 8.7
– DS5000, Ultrix: 5.0
– Sparc1+, SunOS 4.1: 2.8
(The top four have fast memory systems; the DEC and Sun pairs show the speedup across machine generations.)
File Cache Size
• HP v8 (8%) vs. v9 (81%);
DS 5000 Ultrix (10%) vs. Sprite (63%)
Figure: file cache size (MB, log scale) and percentage of main memory devoted to the file cache for each machine/OS. The newer systems (ConvexOS 10, HP/UX 9, AIX, OSF/1, Solaris 2, Sprite, AIX/ESA) devote roughly 63%–87% of main memory to the file cache, while the older systems (HP/UX 8, Ultrix, SunOS 4.1) devote only about 8%–20%.
File System Write Policies
• Write Through with Write Buffer (asynchronous): AIX, Convex, OSF/1 w.t., Solaris, Ultrix
Figure: file-system throughput (MB/s) vs. percentage of reads for Convex, Solaris, AIX, and OSF/1; the curves rise toward 100% reads, where fast file caches dominate, while fast disks matter most for write-heavy workloads under write-through.
File cache performance vs. read percentage
Performance vs. Megabytes Touched
Write Policy Performance for Client/Server Computing
• NFS: write through on close (no buffers)
• HPUX: client caches writes; 25X faster @ 80% reads
Figure: throughput (MB/s, 0–18 scale) vs. percentage of reads. The HP 720-730 client running HP/UX 8 with DUX over an FDDI network forms the upper curve; the SS1+ running SunOS 4.1 with NFS over Ethernet stays far below it across the range of read percentages.
UNIX I/O Performance Study Conclusions
• Study uses Willy, an I/O benchmark which supports self-scaling
evaluation and predicted performance.
• The hardware determines the potential I/O performance, but the
operating system determines how much of that potential is delivered:
differences of factors of 100.
• File cache performance in workstations is improving rapidly, with
over four-fold improvements in three years for DEC (AXP/3000 vs.
DECStation 5000) and Sun (SPARCStation 10 vs. SPARCStation 1+).
• File cache performance of Unix on mainframes and mini-supercomputers is no better than on workstations.
• Workstations benchmarked can take advantage of high performance
disks.
• RAID systems can deliver much higher disk performance.
• File caching policy determines performance of most I/O events, and
hence is the place to start when trying to improve OS I/O
performance.