Background - The University of Alabama in Huntsville


Background
Computer System Architectures
Computer System Software
Computer System
Architectures
Centralized (Tightly Coupled)
Distributed (Loosely Coupled)
Centralized vs. Distributed
• Centralized systems consist of a single
computer
– Possibly multiple processors
– Shared memory
• A distributed system consists of multiple
independent computers that “appear to its
user as a single coherent system”
Tanenbaum, p. 2
Centralized Architectures with
Multiple Processors
(Tightly Coupled)
• All processors share same physical
memory.
• Processes (or threads) running on
separate processors can communicate
and synchronize by reading and writing
variables in the shared memory.
• SMP: shared memory multiprocessor/
symmetric multiprocessor
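The shared-variable communication and synchronization described above can be sketched with Python threads (a minimal sketch, not from the slides; the variable names are illustrative). On an SMP the OS may schedule the two threads on different processors, but both read and write the same address space:

```python
# Minimal sketch: two threads communicating and synchronizing
# through shared memory. On an SMP the OS may schedule them on
# different processors; both threads see the same address space.
import threading

shared = {"value": 0}        # variable in shared memory
lock = threading.Lock()      # serializes access to the variable
ready = threading.Event()    # synchronization signal

def producer():
    with lock:
        shared["value"] = 42  # write to shared memory
    ready.set()               # tell the consumer the data is ready

received = []

def consumer():
    ready.wait()              # block until the producer signals
    with lock:
        received.append(shared["value"])  # read from shared memory

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start()
t1.join(); t2.join()
print(received)  # [42]
```

The lock guards against concurrent updates and the event provides the synchronization; no message is ever sent, only memory read and written.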
Tightly-Coupled Architectures
CPUs are connected at the bus level
[Figure: memory modules M and processors P0, P1, P2 all attached to an Interconnection Network (bus-based or switched); every access to memory goes through the network]
Drawbacks
• Scaling is achieved by adding more processors.
• Memory and interconnection network
become bottlenecks.
• Caching improves bandwidth and access
times (latency) up to a point.
• Shared memory multiprocessors are not
practical if large numbers of processors
are desired.
UMA: Uniform Memory Access
(tightly coupled/shared memory)
• Based on processor access time to
system memory.
• All processors can directly access any
address in the same amount of time.
• Symmetric Multiprocessors are UMA
machines.
NUMA: Non-Uniform Memory
Access
• Still one physical address space
• A memory module is attached to a specific
CPU (or a small set of CPUs); together they
form a node
• A processor can access any memory
location transparently, but can access its
own local memory faster.
• NUMA machines were designed to address
the scalability issues of SMPs
Dual (Multi) Core Processors
• Two (or a few more) CPUs on a single die
• Somewhat slower than a traditional
multiprocessor but faster than a single
processor machine
• To take advantage, the OS must support
multiple threads and the software must be
multi-threaded
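The point about multi-threaded software can be sketched in Python (an illustrative sketch; `partial_sum` and the chunk boundaries are made up for the example). Each worker thread can, in principle, be scheduled on a separate core; note that CPython's global interpreter lock limits actual CPU-bound speedup, so the multi-threaded structure rather than the timing is what matters here:

```python
# Sketch: dividing CPU work across worker threads so that, on a
# multi-core machine with an OS that supports threads, the pieces
# can run on separate cores. (CPython's GIL limits real CPU-bound
# speedup; the multi-threaded structure is what this illustrates.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))   # each worker handles one slice

chunks = [(0, 250), (250, 500), (500, 750), (750, 1000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))
print(total)  # 499500, same as sum(range(1000))
```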
Contention and Coherence
• In shared memory multiprocessors the
hardware must deal with
– Memory contention: two processors try to
access the same block of memory at the
same time
– Cache coherence: if one processor changes
data in its cache, other processors must be
notified that their cached copy is out of date
Distributed Architectures
(Loosely Coupled)
• Memory is partitioned
– Each processor has its own private address space.
– Accesses to address N by two different processors
will refer to two different locations.
• Processes must use message passing (maybe
hardware-based) to communicate and
synchronize
• Memory contention and cache coherence are
not problems.
Loosely-Coupled Architectures
[Figure: each processor P0, P1, P2 has its own local memory M; the Interconnection Network (bus-based or switched) carries messages between processes running on different processors; all memory accesses are local]
Multiprocessors
• We will usually refer to tightly-coupled
machines as multiprocessors (or shared
memory multiprocessors).
• Loosely-coupled machines will be called
distributed systems or multicomputers.
Examples of Distributed Systems
• Massively Parallel Processors (MPPs)
• Cluster
• Grid
• Network of Workstations
MPP
• Many (possibly tens of thousands of)
separate computers, running in parallel,
connected by a high-speed network
• Communication between processors:
message passing
• Supercomputers
• Usually one-of-a-kind; individually tuned
for high performance
Clusters
• The computers in a cluster are usually
connected by a high-speed LAN.
• Commodity processors and commodity
operating systems (Linux/UNIX)
• Homogeneous
• Server farms, high-performance applications
• MPPs versus clusters: they differ in scale and
degree of individual tuning; there is no clear
dividing line.
Grid Computing
• Grid computing distributes work across
several computers to solve large parallel
problems.
• Grids versus MPP/cluster model
– Geographic distribution
– Different administrative domains
– Heterogeneous processors
– More loosely coupled
• Compare to the electrical grid.
Networks of Workstations
• Separate workstations, usually in the
same administrative domain.
• Usually function in a stand-alone fashion
but may also cooperate on some projects
• Primarily designed to utilize idle CPU
cycles.
Computer System Software
Operating Systems
Middleware
System Software
• The operating system itself
• Compilers, interpreters, language run-time
systems, various utilities
• Middleware
– Runs on top of the OS
– Connects applications running on separate
machines
– Communication packages, web servers, …
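Middleware of this kind is built on the OS's communication primitives. A minimal sketch (assumed names, not from the slides) of two "applications" exchanging one message over a localhost socket, standing in for the message transport a communication package would provide between separate machines:

```python
# Sketch: middleware sits on top of OS communication primitives.
# Here a tiny echo server and a client exchange one message over
# a localhost socket, standing in for the message transport a
# communication package would supply between separate machines.
import socket
import threading

def echo_server(srv):
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)          # receive the request
        conn.sendall(b"echo: " + data)  # send the reply

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=echo_server, args=(srv,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"hello")
    reply = cli.recv(1024)
t.join()
srv.close()
print(reply.decode())  # echo: hello
```

Real middleware adds naming, marshalling, and failure handling on top of this raw byte transport, which is exactly the gap between the OS layer and distributed applications.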
Operating Systems
• General purpose operating systems
• Real time operating systems
• Embedded systems
General Purpose Operating
Systems
• Manage a diverse set of applications with
varying and unpredictable requirements
• Implement resource-sharing policies for
CPU time, memory, disk storage, and
other system resources
• Provide high-level abstractions of system
resources; e.g., virtual memory, files
• Applications (usually) are built on top of
the abstractions
Kernel
• The part of the OS that is always in
memory – lowest level of abstraction
• Monolithic kernels versus microkernels
– Monolithic: all OS code is in kernel space,
runs in kernel mode
– Microkernel: minimal functionality in kernel
space; some OS functions (servers) execute
in user space
• Hybrid kernels: a mixture of the two
System Architecture and the OS
• Shared memory architectures may have a
single CPU or many
• A multiprocessor OS is more complex
– Master-slave operating systems
– SMP operating systems
• Distributed systems run a local OS and typically
various kinds of middleware to support
distributed applications
• Pure distributed operating systems are rare.
Non-traditional Kernel Architectures
• Traditional: UNIX/Linux, Windows, Mac …
• Non-traditional:
– Microkernels
– Extensible operating systems
– Virtual machine monitors
• Non-traditional kernels experiment with
various approaches to improving the
performance of more traditional systems.