Transcript Outline

Introduction to Computer Organization and Architecture
Lecture 6
By Juthawut Chantharamalee
http://dusithost.dusit.ac.th/~juthawut_cha/home.htm
Outline
- Interrupts
  - Program Flow
  - Multiple Interrupts
  - Nesting
- I/O
  - Architecture
  - Bus Types
  - Transfer Methods
  - Disks
  - Disk Arrays
Interrupts
- Mechanism by which other modules (e.g. I/O) may interrupt the normal sequence of processing
- Program
  - e.g. overflow, division by zero
- Timer
  - Generated by internal processor timer
  - Used in pre-emptive multi-tasking
- I/O
  - From I/O controller
- Hardware failure
  - e.g. memory parity error
Interrupt Cycle
- Added to the instruction cycle
- Processor checks for interrupt
  - Indicated by an interrupt signal
  - If no interrupt, fetch next instruction
- If interrupt pending:
  - Suspend execution of current program
  - Save context
  - Set PC to start address of interrupt handler routine
  - Process interrupt
  - Restore context and continue interrupted program (a sketch of this loop follows)
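
A minimal C sketch of this cycle: after each instruction the processor checks a pending flag, and if it is set, saves context, runs the handler, and resumes. The flag, handler, and saved context are toy stand-ins for illustration, not a real processor interface.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy simulation: after executing each instruction the processor checks
 * for a pending interrupt; if one is pending it saves context, runs the
 * handler, then restores context and resumes the interrupted program. */
static bool interrupt_pending = false;

static void interrupt_handler(void) {
    printf("  [handler] interrupt processed\n");
}

int main(void) {
    int saved_pc = 0;                        /* minimal "context" */
    for (int pc = 0; pc < 5; pc++) {
        printf("execute instruction %d\n", pc);
        if (pc == 2)
            interrupt_pending = true;        /* a device raises a request */
        if (interrupt_pending) {             /* the added interrupt cycle */
            saved_pc = pc + 1;               /* suspend + save context    */
            interrupt_handler();             /* process the interrupt     */
            interrupt_pending = false;
            printf("  resume at instruction %d\n", saved_pc);
        }
    }
    return 0;
}
```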
Transfer of Control via Interrupts [figure]

Program Flow Control [figure]

Program Timing: Short I/O Wait [figure]

Program Timing: Long I/O Wait [figure]
Multiple Interrupts
- Disable interrupts
  - Processor will ignore further interrupts whilst processing one interrupt
  - Interrupts remain pending and are checked after the first interrupt has been processed
  - Interrupts handled in sequence as they occur
- Define priorities
  - Low-priority interrupts can be interrupted by higher-priority interrupts
  - When the higher-priority interrupt has been processed, the processor returns to the previous interrupt (both policies are modeled in the sketch below)
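
A toy C model of the two policies: a request at or below the current priority is held pending, while a strictly higher-priority request nests inside the running handler. The priority values and the recursive call are illustrative assumptions, not a real interrupt controller.

```c
#include <stdio.h>

/* Toy model of prioritized interrupts: a new request preempts the current
 * handler only if its priority is strictly higher; otherwise it is held
 * pending (a real CPU would mask it and take it later). */
static int current_priority = 0;   /* 0 = user program */

static void service(int prio) {
    if (prio <= current_priority) {
        printf("prio %d held pending (busy at prio %d)\n",
               prio, current_priority);
        return;
    }
    int saved = current_priority;  /* save the preempted context */
    current_priority = prio;
    printf("servicing interrupt, prio %d\n", prio);
    if (prio == 2) {
        service(1);                /* lower priority: stays pending */
        service(3);                /* higher priority: nests        */
    }
    current_priority = saved;      /* return to preempted activity */
    printf("done, back at prio %d\n", saved);
}

int main(void) {
    service(2);
    return 0;
}
```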
Multiple Interrupts - Sequential [figure]

Multiple Interrupts - Nested [figure]

Time Sequence of Multiple Interrupts [figure]
Input/Output & System Performance Issues
- System architecture & I/O connection structure
  - Types of buses/interconnects in the system
- I/O data transfer methods
- Cache & I/O: the stale data problem
- I/O performance metrics
- Magnetic disk characteristics
- Designing an I/O system & system performance:
  - Determining the system performance bottleneck (which component creates a system performance bottleneck)
The Von Neumann Computer Model
- Partitioning of the computing engine into components:
  - Central Processing Unit (CPU): control unit (instruction decode, sequencing of operations) and datapath (registers, arithmetic and logic unit, buses).
  - Memory: instruction (program) and operand (data) storage.
  - Input/Output (I/O): communication between the CPU and the outside world.
[Figure: computer system block diagram showing the CPU (control, datapath with registers, ALU, buses), memory (instructions, data), and the I/O subsystem with input and output devices.]
- System performance depends on many aspects of the system ("limited by the weakest link in the chain").
Input and Output (I/O) Subsystem
- The I/O subsystem provides the mechanism for communication between the CPU and the outside world (I/O devices).
- Design factors:
  - I/O device characteristics (input, output, storage, etc.).
  - I/O connection structure (degree of separation from memory operations).
  - I/O interface (the utilization of dedicated I/O and bus controllers).
  - Types of buses (processor-memory vs. I/O buses).
  - I/O data transfer or synchronization method (programmed I/O, interrupt-driven, DMA).
Typical System Architecture
[Figure: system bus or front side bus (FSB) linking the CPU to the memory controller (chipset north bridge), which connects to the I/O controller hub (chipset south bridge) and the isolated I/O of the I/O subsystem.]
System Components
[Figure: CPU with L1, L2, and possibly on-chip L3 caches, connected over the system bus (FSB) through the chipset (north bridge: memory controller; south bridge: I/O) to memory and the I/O subsystem (bus adapter, I/O controllers, NICs, disks, displays, keyboards, networks).
Example memory buses: SDRAM PC100/PC133, 100-133 MHz, 64-128 bits wide, 2-way interleaved, ~900 MB/s (64-bit); Double Data Rate (DDR) SDRAM PC3200, 200 MHz DDR, 64-128 bits wide, 4-way interleaved, ~3.2 GB/s (64-bit); RAMbus DRAM (RDRAM), 400 MHz DDR, 16 bits wide (32 banks), ~1.6 GB/s.
Example main I/O bus: PCI, 33-66 MHz, 32-64 bits wide, 133-528 MB/s; PCI-X, 133 MHz, 64 bits wide, 1066 MB/s.]
Important issue: which component creates a system performance bottleneck?
Time(workload) = Time(CPU) + Time(I/O) - Time(Overlap)
I/O Interface
I/O interface, I/O controller, or I/O bus adapter:
- Specific to each type of I/O device.
- To the CPU, an I/O device consists of a set of control and data registers (usually memory-mapped) within the I/O address space.
- On the I/O device side, it forms a localized I/O bus which can be shared by several I/O devices (e.g. IDE, SCSI, USB ...).
- Handles I/O details (originally done by the CPU; this processing is off-loaded from the CPU) such as:
  - Assembling bits into words,
  - Low-level error detection and correction,
  - Accepting or providing words in word-sized I/O registers.
- Presents a uniform interface to the CPU regardless of the I/O device.
I/O Controller Architecture
[Figure: the host processor and cache connect through the chipset north bridge to host memory; the peripheral or main I/O bus (PCI, PCI-X, etc.) hangs off the chipset south bridge; the I/O controller contains a peripheral bus interface/DMA, buffer memory, a micro-controller or embedded processor (µProc with ROM), and an I/O channel interface to SCSI, IDE, USB, ...]
Types of Buses in the System (1/2)
- Processor-memory bus: system bus, front side bus (FSB)
  - Should offer very high speed (bandwidth) and low latency.
  - Matched to the memory system performance to maximize memory-processor bandwidth.
  - Usually design-specific (not an industry standard).
  - Examples:
    - Alpha EV6 (AMD K7): peak bandwidth = 400 MHz x 8 = 3.2 GB/s
    - Intel GTL+ (P3): peak bandwidth = 133 MHz x 8 = 1 GB/s
    - Intel P4: peak bandwidth = 800 MHz x 8 = 6.4 GB/s
    - HyperTransport 2.0: 200 MHz-1.4 GHz, peak bandwidth up to 22.8 GB/s (a point-to-point system interconnect, not a bus)
Types of Buses in the System (2/2)
- I/O buses (sometimes called an interface):
  - Follow bus industry standards.
  - Usually formed by I/O interface adapters to handle many types of connected I/O devices.
  - Wide range in data bandwidth and latency.
  - Not usually interfaced directly to memory; instead connected to the processor-memory bus via a bus adapter (chipset south bridge).
  - Examples:
    - Main system I/O bus: PCI, PCI-X, PCI Express
    - Storage: SATA, IDE, SCSI
Intel Pentium 4 System Architecture (Using the Intel 925 Chipset)
[Figure: the CPU (including cache) connects over the system bus (front side bus, FSB) to the memory controller hub (chipset north bridge), which links to system memory over two 8-byte DDR2 channels and to the graphics I/O bus (PCI Express); the I/O controller hub (chipset south bridge) provides storage I/O (Serial ATA), the main I/O bus (PCI), and miscellaneous I/O interfaces for the I/O subsystem. The FSB bandwidth usually should match or exceed that of main memory.]
Bus Characteristics (high performance vs. low cost/performance option)
- Bus width: separate address and data lines vs. multiplexed address and data lines.
- Data width: wider is faster (e.g., 64 bits) vs. narrower is cheaper (e.g., 16 bits).
- Transfer size: multiple-word transfers have less bus overhead vs. single-word transfers are simpler.
- Bus masters: multiple masters (requires arbitration) vs. a single master (no arbitration).
- Split transactions: yes (separate request and reply packets give higher bandwidth; needs multiple masters) vs. no (a continuous connection is cheaper and has lower latency).
- Clocking: synchronous vs. asynchronous.
Storage I/O Interfaces/Buses

                 IDE/Ultra ATA    SCSI
Data width:      16 bits          8 or 16 bits (wide)
Clock rate:      up to 100 MHz    10 MHz (Fast), 20 MHz (Ultra), 40 MHz (Ultra2), 80 MHz (Ultra3), 160 MHz (Ultra4)
Bus masters:     1                Multiple
Max devices:     2                7 (8-bit bus), 15 (16-bit bus)
Peak bandwidth:  200 MB/s         320 MB/s (Ultra4)
I/O Data Transfer Methods (1/2)
- Programmed I/O (PIO): polling (for low-speed I/O)
  - The I/O device puts its status information in a status register.
  - The processor must periodically check the status register.
  - The processor is totally in control and does all the work.
  - Very wasteful of processor time.
  - Used for low-speed I/O devices (mice, keyboards, etc.), as sketched below.
- Time(workload) = Time(CPU) + Time(I/O) - Time(Overlap)
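
A minimal polling sketch in C, assuming a hypothetical memory-mapped device with a status register, a data register, and a READY bit; the addresses and bit layout are invented for illustration.

```c
#include <stdint.h>

/* Programmed I/O by polling. The device's status and data registers are
 * assumed memory-mapped; the addresses and READY bit here are invented
 * for illustration and are not a real device's layout. */
#define DEV_STATUS ((volatile uint32_t *)0x40000000)  /* hypothetical */
#define DEV_DATA   ((volatile uint32_t *)0x40000004)  /* hypothetical */
#define READY      0x1u

uint32_t pio_read_word(void) {
    while ((*DEV_STATUS & READY) == 0)
        ;                       /* busy-wait: the CPU does all the work */
    return *DEV_DATA;           /* then moves one word itself */
}
```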
I/O Data Transfer Methods (2/2)
- Interrupt-driven I/O (for medium-speed I/O):
  - An interrupt line from the I/O device to the CPU is used to generate an I/O interrupt indicating that the I/O device needs CPU attention.
  - The interrupting device places its identity in an interrupt vector.
  - Once an I/O interrupt is detected, the current instruction is completed and an I/O interrupt handling routine (in the OS) is executed to service the device.
  - Used for moderate-speed I/O (optical drives, storage, networks ...).
  - Allows overlap of CPU processing time and I/O processing time, as sketched below.
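
A bare-metal style sketch of the same idea. The data register address is invented, and device_isr() is assumed to be installed in the interrupt vector by platform startup code; the point is that the main loop's computation overlaps the I/O wait.

```c
#include <stdint.h>

/* Interrupt-driven I/O: the CPU computes until the device interrupts;
 * the handler moves the data. The register address is hypothetical. */
#define DEV_DATA ((volatile uint32_t *)0x40000004)    /* hypothetical */

static volatile uint32_t rx_word;
static volatile int      rx_ready;

void device_isr(void) {            /* runs when the device interrupts */
    rx_word  = *DEV_DATA;          /* service the device: grab a word */
    rx_ready = 1;
}

void main_loop(void) {
    for (;;) {
        /* ...useful CPU work overlaps the I/O wait here... */
        if (rx_ready) {
            rx_ready = 0;
            /* consume rx_word */
        }
    }
}
```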
I/O Data Transfer Methods: DMA
- Direct Memory Access (DMA) (for high-speed I/O):
  - Implemented with a specialized controller that transfers data between an I/O device and memory independent of the processor.
  - The DMA controller becomes the bus master and directs reads and writes between itself and memory.
  - Interrupts are still used, but only on completion of the transfer or when an error occurs.
  - Low CPU overhead; used in high-speed I/O (storage, network interfaces).
  - Allows more overlap of CPU processing time and I/O processing time than interrupt-driven I/O.
DMA Transfer Steps
- The CPU sets up the DMA by supplying the device identity, the operation, the memory addresses of the source and destination of the data, and the number of bytes to be transferred.
- The DMA controller starts the operation. When the data is available it transfers the data, including generating the memory addresses for the data to be transferred.
- Once the DMA transfer is complete, the controller interrupts the processor, which determines whether the entire operation is complete. (A register-level sketch of these steps follows.)
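
A register-level sketch of the three steps against a hypothetical DMA controller whose source, destination, count, and control registers are memory-mapped at invented addresses; real controllers differ in register layout but follow the same sequence.

```c
#include <stdint.h>

/* The three DMA steps from the slide, against a hypothetical controller. */
#define DMA_SRC   ((volatile uint32_t *)0x40001000)   /* hypothetical */
#define DMA_DST   ((volatile uint32_t *)0x40001004)
#define DMA_COUNT ((volatile uint32_t *)0x40001008)
#define DMA_CTRL  ((volatile uint32_t *)0x4000100c)
#define DMA_GO    0x1u

static volatile int dma_done;

void dma_setup(uint32_t src, uint32_t dst, uint32_t nbytes) {
    *DMA_SRC   = src;       /* step 1: CPU supplies addresses and count */
    *DMA_DST   = dst;
    *DMA_COUNT = nbytes;
    *DMA_CTRL  = DMA_GO;    /* step 2: controller takes over as bus
                               master and generates addresses itself */
}

void dma_complete_isr(void) {
    dma_done = 1;           /* step 3: interrupt on completion; the CPU
                               checks whether the whole operation is done */
}
```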
Cache & I/O: The Stale Data Problem
- Three copies of data may exist: in cache, memory, and disk.
- Similar to the cache coherency problem in multiprocessor systems.
- The CPU or I/O (DMA) may modify/access one copy while other copies contain stale (old) data.
- Possible solutions:
  - Connect I/O directly to the CPU cache: CPU performance suffers.
  - With a write-back cache, the operating system flushes the caches into memory (forced write-back) to make sure data is not stale in memory (sketched below).
  - Use a write-through cache; I/O receives updated data from memory (this uses too much memory bandwidth).
  - The operating system designates memory address ranges involved in I/O DMA operations as non-cacheable.
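
A sketch of the forced write-back solution around DMA. All four routines are hypothetical placeholders for platform/OS services (real operating systems expose equivalents under other names).

```c
#include <stddef.h>

/* Forced write-back around DMA; the routines below are hypothetical. */
extern void cache_flush_range(void *buf, size_t len);       /* hypothetical */
extern void cache_invalidate_range(void *buf, size_t len);  /* hypothetical */
extern void dma_to_device(void *buf, size_t len);           /* hypothetical */
extern void dma_from_device(void *buf, size_t len);         /* hypothetical */

void send_buffer(void *buf, size_t len) {
    cache_flush_range(buf, len);       /* push dirty lines: memory is
                                          no longer stale for the device */
    dma_to_device(buf, len);
}

void receive_buffer(void *buf, size_t len) {
    dma_from_device(buf, len);         /* device updates memory...      */
    cache_invalidate_range(buf, len);  /* ...so drop stale cached lines */
}
```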
I/O Connected Directly to Cache
[Figure: DMA I/O attached directly to the CPU cache, a possible solution for the stale data problem; however, this may slow down the CPU, so CPU performance suffers.]
Factors Affecting Performance
- I/O processing computational requirements:
  - CPU computations available for I/O operations.
  - Operating system I/O processing policies/routines.
  - I/O data transfer/processing method: polling, interrupt-driven, DMA.
- I/O subsystem performance:
  - Raw performance of I/O devices (i.e. magnetic disk performance).
  - I/O bus capabilities.
  - I/O subsystem organization, i.e. number of devices, array level, etc.
  - Loading level of I/O devices (queuing delay, response time).
- Memory subsystem performance:
  - Available memory bandwidth for I/O operations (for DMA).
- Operating system policies:
  - File system vs. raw I/O.
  - File cache size and write policy.
I/O Performance Metrics: Throughput
- Throughput is a measure of speed: the rate at which the I/O or storage system delivers data.
- I/O throughput is measured in two ways:
  - I/O rate, measured in:
    - accesses/second,
    - transactions per second (TPS), or
    - I/O operations per second (IOPS).
    The I/O rate is generally used for applications where the size of each request is small, such as in transaction processing.
  - Data rate, measured in bytes/second or megabytes/second (MB/s).
    The data rate is generally used for applications where the size of each request is large, such as in scientific and multimedia applications.
Magnetic Disks
Characteristics:
- Diameter (form factor): 2.5 in - 5.25 in.
- Rotational speed: 3,600 RPM - 15,000 RPM (current rotation speeds: 7,200 - 15,000 RPM).
- Tracks per surface.
- Sectors per track: outer tracks contain more sectors.
- Recording or areal density: tracks/in x bits/in.
- Cost per megabyte.
- Seek time (2-12 ms): the time needed to move the read/write head arm. Reported values: minimum, maximum, average.
- Rotational latency or delay (2-8 ms): the time for the requested sector to be under the read/write head (~ time for half a rotation).
- Transfer time: the time needed to transfer a sector of bits.
- Type of controller/interface: SCSI, EIDE.
- Disk controller delay or time.
- Average time to access a sector of data = average seek time + average rotational delay + transfer time + disk controller overhead (ignoring queuing time).
- Access time = average seek time + average rotational delay.
Read Access
- Steps:
  - Memory-mapped I/O over the bus to the controller
  - Controller starts the access
  - Seek + rotational latency wait
  - Sector is read and buffered (validity check)
  - Controller DMAs to memory and signals ready
- Access time = queue + controller delay + seek time + transfer time (block size / bandwidth) + check delay
Basic Disk Performance Example
- Given the following disk parameters:
  - Average seek time is 5 ms
  - Disk spins at 10,000 RPM
  - Transfer rate is 40 MB/sec
  - Controller overhead is 0.1 ms
  - Assume that the disk is idle, so no queuing delay exists
- What is the average disk read or write service time for a 500-byte (0.5 KB) sector?
- Tservice (disk service time for this request)
  = avg. seek + avg. rotational delay + transfer time + controller overhead
  = 5 ms + 0.5/(10,000 RPM/60) + 0.5 KB/(40 MB/s) + 0.1 ms
  = 5 + 3 + 0.0125 + 0.1 ≈ 8.11 ms
  (0.5/(10,000 RPM/60) is the time for half a rotation; the arithmetic is checked in the short program below.)
- The actual time to process the disk request is greater and may include CPU I/O processing time and queuing time.
- Here: 1 KB = 10^3 bytes, 1 MB = 10^6 bytes, 1 GB = 10^9 bytes.
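
The same arithmetic as a small C program, using the slide's unit conventions:

```c
#include <stdio.h>

/* Service time = avg seek + avg rotational delay (half a rotation)
 * + transfer time + controller overhead, with 1 KB = 10^3 bytes and
 * 1 MB = 10^6 bytes as on the slide. */
int main(void) {
    double seek_ms = 5.0;
    double rot_ms  = 0.5 / (10000.0 / 60.0) * 1000.0;  /* half rotation   */
    double xfer_ms = (0.5e3 / 40.0e6) * 1000.0;        /* 0.5 KB, 40 MB/s */
    double ctrl_ms = 0.1;

    printf("Tservice = %.2f ms\n", seek_ms + rot_ms + xfer_ms + ctrl_ms);
    /* prints: Tservice = 8.11 ms  (5 + 3 + 0.0125 + 0.1) */
    return 0;
}
```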
Disk Arrays
[Figure: disk product families. Conventional: four disk designs (3.5", 5.25", 10", 14"), spanning low end to high end. Disk array: one disk design (3.5").]
Array Reliability
- Reliability of N disks = reliability of 1 disk / N
  - Example: 50,000 hours / 70 disks = ~700 hours
  - Disk system MTBF drops from 6 years to 1 month!
- Arrays (without redundancy) are too unreliable to be useful!
- Hot spares support reconstruction in parallel with access: very high media availability can be achieved.
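
The slide's arithmetic as a small C program, assuming independent disk failures so the array is lost as soon as any one disk fails:

```c
#include <stdio.h>

/* Array MTTF = single-disk MTTF / N under independent failures. */
int main(void) {
    double disk_mttf  = 50000.0;          /* hours (~6 years)           */
    int    n          = 70;
    double array_mttf = disk_mttf / n;    /* ~714 h; slide rounds to 700 */

    printf("array MTTF = %.0f hours (~%.2f months)\n",
           array_mttf, array_mttf / (30.0 * 24.0));
    return 0;
}
```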
Redundant Array of Disks
- Files are "striped" across multiple spindles.
- Redundancy yields high data availability:
  - Disks will fail.
  - Contents are reconstructed from data redundantly stored in the array.
  - Capacity penalty to store the redundant data; bandwidth penalty to update it.
- Techniques:
  - Mirroring/shadowing (high capacity cost)
  - Horizontal Hamming codes (overkill)
  - Parity & Reed-Solomon codes
  - Failure prediction (no capacity overhead!): VaxSimPlus (the technique is controversial)
RAID Levels

RAID level                                 Failures tolerated   Data disks   Check disks
0  Non-redundant                           0                    8            0
1  Mirrored                                1                    8            8
2  Memory-style ECC                        1                    8            4
3  Bit-interleaved parity                  1                    8            1
4  Block-interleaved parity                1                    8            1
5  Block-interleaved distributed parity    1                    8            1
6  P+Q redundancy (adds a 2nd parity)      2                    8            2
RAID 1: Disk Mirroring
- Each disk is fully duplicated onto its "shadow" (its recovery group): very high availability can be achieved.
- Bandwidth sacrifice on write: a logical write = two physical writes.
- Reads may be optimized.
- Most expensive solution: 100% capacity overhead.
- Targeted for high I/O rate, high-availability environments.
RAID 3: Parity Disk
[Figure: a logical record (10010011 11001101 10010011 ...) striped as physical records across the data disks, with a parity disk P computed across the recovery group.]
- Parity is computed across the recovery group to protect against hard disk failures (a parity sketch follows).
- 33% capacity cost for parity in this configuration.
- Wider arrays reduce capacity costs but decrease expected availability and increase reconstruction time.
- Arms are logically synchronized and spindles rotationally synchronized: logically a single high-capacity, high-transfer-rate disk.
- Targeted for high-bandwidth applications: scientific computing, image processing.
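
A minimal C demonstration of the parity idea, using the byte values from the figure: the parity disk stores the XOR of the data disks, and a lost disk is rebuilt by XOR-ing the survivors with the parity.

```c
#include <stdio.h>
#include <stdint.h>

/* Parity across a recovery group: P = d0 ^ d1 ^ d2, so any single lost
 * disk can be reconstructed from the surviving disks and the parity. */
int main(void) {
    uint8_t d[3] = {0x93, 0xCD, 0x93};   /* 10010011 11001101 10010011 */
    uint8_t p    = d[0] ^ d[1] ^ d[2];   /* parity disk */

    uint8_t rebuilt = p ^ d[1] ^ d[2];   /* disk 0 fails: rebuild it */
    printf("parity=0x%02X rebuilt=0x%02X (expected 0x%02X)\n",
           p, rebuilt, d[0]);
    return 0;
}
```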
RAID 5+: High I/O Rate Parity
[Figure: data blocks D0-D23 and parity blocks P laid out across the disk columns, with one parity block per stripe rotated across the disks; increasing logical disk addresses run down the stripes, and a stripe unit is one block.]
- Independent writes are possible because of the interleaved (distributed) parity.
- A logical write becomes four physical I/Os (sketched below).
- Reed-Solomon codes ("Q") provide protection during reconstruction.
- Targeted for mixed applications.
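
A small C sketch of why the logical write costs four physical I/Os: the new parity is derived from the old data and old parity alone, so only two reads and two writes are needed, not a full-stripe read.

```c
#include <stdio.h>
#include <stdint.h>

/* RAID 5 small-write: new_parity = old_parity ^ old_data ^ new_data. */
int main(void) {
    uint8_t stripe[4] = {0x11, 0x22, 0x33, 0x44};
    uint8_t parity    = stripe[0] ^ stripe[1] ^ stripe[2] ^ stripe[3];

    uint8_t old_data   = stripe[2];             /* I/O 1: read old data   */
    uint8_t old_parity = parity;                /* I/O 2: read old parity */
    uint8_t new_data   = 0x99;
    stripe[2] = new_data;                       /* I/O 3: write new data  */
    parity = old_parity ^ old_data ^ new_data;  /* I/O 4: write new parity */

    uint8_t check = stripe[0] ^ stripe[1] ^ stripe[2] ^ stripe[3];
    printf("parity %s\n", parity == check ? "consistent" : "BROKEN");
    return 0;
}
```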
Subsystem Organization
[Figure: a host with a host adapter connects to an array controller, which manages the interface to the host, DMA control, buffering, and parity logic; below it, single-board disk controllers (often piggy-backed in small-format devices) handle physical device control.]
- Striping software is off-loaded from the host to the array controller:
  - No application modifications.
  - No reduction of host performance.
System Availability
[Figure: an array controller fans out to multiple string controllers, each managing a string of disks.]
- Data recovery group: the unit of data redundancy.
- Redundant support components: fans, power supplies, controller, cables.
- End-to-end data integrity: internal parity-protected data paths.
System-Level Availability
[Figure: a fully dual-redundant organization; each host has its own I/O controller, each I/O controller reaches both array controllers, and recovery groups span the duplicated paths.]
- Goal: no single point of failure.
- With duplicated paths, higher performance can be obtained when there are no failures.
Peripheral Component Interconnect (PCI)
- Two types of agents on the bus:
  - Initiator (master)
  - Target
- Three address spaces:
  - Memory
  - I/O
  - Configuration
- Transactions are done in two (or more) phases:
  - Address/command phase
  - Data/byte-enable phase(s)
- Synchronous operation (positive edge of the clock)
Typical PCI Topology
[Figure: the host and main memory attach to a PCI bridge; the PCI bus below it connects a disk, a printer, and an Ethernet interface.]
PCI Signals

Name            Function
CLK             A 33-MHz or 66-MHz clock.
FRAME#          Sent by the initiator to indicate the start and duration of a transaction.
AD              32 address/data lines, which may be optionally increased to 64.
C/BE#           4 command/byte-enable lines (8 for a 64-bit bus).
IRDY#, TRDY#    Initiator-ready and target-ready signals.
DEVSEL#         A response from the device indicating that it has recognized its address and is ready for a data transfer transaction.
IDSEL#          Initialization device select.
PCI Read
[Figure: timing diagram over seven clock cycles showing CLK, FRAME#, AD (address, then data words #1-#4), C/BE# (command, then byte enables), IRDY#, TRDY#, and DEVSEL# for a burst read transaction.]
The End
Lecture 6