Transcript 1.01

Review: Chapters 1 – 7
Chapter 1:

OS is a layer between the user and the hardware, to make life easier for the user and
to use the hardware efficiently



Control program or resource allocator
Computer organization

CPU(s), memory, and I/O devices connect to a common bus

Devices request CPU attention through interrupts
Storage hierarchy: speed, cost, volatility

Caching: copy frequently used data to faster storage

Multiprogramming: multiple jobs in memory → efficiency

Timesharing: frequently switch between jobs → interactive, short response
time → users get the impression that each has his/her own computer

Dual mode operation: user and kernel modes


Protect OS and users from each other

Privileged instructions executed only in kernel mode
Timer to prevent processes from holding resources forever
Operating-System Operations


OS is interrupt driven: it sits idle till something happens

Interrupts are generated by devices (hardware)

Traps (or exceptions) are software-generated interrupts, due to either

software errors, e.g., divide by zero, or

requests for operating-system services (system calls)
Dual-mode operation allows OS to protect itself and other system
components

User mode and kernel mode

Mode bit provided by hardware

Provides ability to distinguish when system is running user code or
kernel code

Some instructions designated as privileged, only executable in
kernel mode

System call changes mode to kernel, return from call resets it to
user
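
As a concrete illustration (not on the slide), a minimal C sketch: write() is a system call, so the call traps from user mode into kernel mode, the kernel performs the I/O in kernel mode, and control returns to the program in user mode. The message text is illustrative.

    /* Minimal sketch: a system call crosses from user mode to kernel mode.
     * write() is a thin library wrapper around the write system call; the
     * kernel runs in kernel mode and returns the result to this program,
     * which resumes in user mode. */
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello from user mode\n";

        /* Trap into the kernel: the mode bit switches to kernel for the
         * duration of the call, then back to user on return. */
        ssize_t n = write(STDOUT_FILENO, msg, strlen(msg));

        return (n < 0) ? 1 : 0;
    }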
Transition from User to Kernel Mode
Chapter 2: OS Services and Structures
• OS provides two sets of services, for:
  - user convenience, and
  - efficient use of resources
• System calls: programming interface to OS services
  - Typically used through APIs for portability and ease
• OS structures:
  - monolithic
  - layered
  - microkernel
  - modular
Chapter 3: Processes
• Process is a program in execution
  - OS maintains process info in PCB
  - Process State diagram
  - Creating and terminating processes (fork; see the sketch after this list)
• Process scheduling
  - Long-, short-, and medium-term schedulers
  - Scheduling queues
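
A minimal sketch (not on the slide) of creating and terminating a process with fork(), exit(), and wait(); the printed messages are illustrative.

    /* Sketch: the parent creates a child with fork(); the child terminates
     * with exit(), and the parent reaps it with wait(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();          /* child gets 0, parent gets the child's PID */

        if (pid < 0) {
            perror("fork");
            return 1;
        } else if (pid == 0) {
            printf("child:  pid=%d\n", getpid());
            exit(0);                 /* child terminates */
        } else {
            int status;
            wait(&status);           /* parent blocks until the child exits */
            printf("parent: child %d finished\n", pid);
        }
        return 0;
    }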
Scheduling: The Big Picture
[Figure: jobs enter through the job scheduler, the CPU scheduler picks from the ready queue, and the medium-term scheduler swaps jobs to/from disk]
In most small and interactive systems (UNIX, WinXP, …),
only the CPU scheduler exists
Process Lifetime
CPU Switch From Process to Process
(Context Switch)
• When switching occurs, the kernel
  - saves the state of P0 in PCB0 (in memory)
  - loads the state of P1 from PCB1 into the registers
• State = values of the CPU registers, including the program counter and stack pointer
Interprocess Communications Models
• Message passing (a pipe sketch follows)
• Shared memory
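
Not on the slide: a hedged sketch of the message-passing model using a POSIX pipe between a parent and a child process; no memory is shared, and the message text is illustrative.

    /* Sketch: message passing between two processes via a pipe.
     * The parent writes a message; the child reads it. No memory is
     * shared -- the kernel copies the data between the processes. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {              /* child: receiver */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            return 0;
        }

        close(fd[0]);                   /* parent: sender */
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }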
Chapter 4: Threads
• A thread is a basic unit of CPU utilization; a process is composed of one or more threads
• Each thread has: program counter, stack, registers
• Threads share: code, data, OS resources (e.g., open files and signals); see the pthreads sketch below
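
A hedged sketch with POSIX threads (pthreads), not from the slides: both threads update a shared global counter (shared data), while each keeps a private local variable on its own stack; the counter name and iteration count are illustrative.

    /* Sketch: threads in one process share global data (the counter),
     * while each thread has its own stack, registers, and program counter. */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;                 /* shared: data segment */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        int local = 0;                             /* private: on this thread's stack */
        for (int i = 0; i < 100000; i++) {
            local++;
            pthread_mutex_lock(&lock);
            shared_counter++;                      /* visible to all threads */
            pthread_mutex_unlock(&lock);
        }
        (void)arg;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);   /* 200000 */
        return 0;
    }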
Single and Multithreaded Processes
[Figure: code, data, and OS resources are shared among threads; each thread has its own registers and stack]
User level threads vs. kernel threads
Chapter 5: CPU Scheduling
• Process execution: a cycle of CPU bursts and I/O bursts
  - CPU burst lengths: many short bursts, and a few long ones
• Scheduler selects one process from the ready queue
  - Dispatcher performs the switching
• Scheduling criteria (usually conflicting)
  - CPU utilization, waiting time, response time, throughput, …
• Scheduling algorithms
  - FCFS, SJF, Priority, RR, Multilevel Queues, …
First-Come, First-Served (FCFS) Scheduling
  Process | Burst Time
  P1      | 24
  P2      | 3
  P3      | 3
• Suppose that the processes arrive in the order: P1, P2, P3
  The Gantt chart for the schedule is:
  | P1 (0–24) | P2 (24–27) | P3 (27–30) |
• Waiting time: P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
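
Not from the slides: a small C sketch that reproduces the arithmetic above. Under FCFS with all processes arriving at time 0, each process waits for the total burst time of the processes ahead of it.

    /* Sketch: FCFS waiting times when all processes arrive at time 0.
     * Each process waits for the sum of the bursts scheduled before it. */
    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};                  /* P1, P2, P3 from the example */
        int n = sizeof(burst) / sizeof(burst[0]);
        int elapsed = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %d\n", i + 1, elapsed);
            total_wait += elapsed;                 /* waiting time = start time here */
            elapsed += burst[i];                   /* next process starts after this burst */
        }
        printf("average waiting time = %.1f\n", (double)total_wait / n);   /* 17.0 */
        return 0;
    }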
Multilevel Feedback Queues
CPU Scheduling
• Multiprocessor scheduling
  - Processor affinity vs. load balancing (see the affinity sketch below)
• Evaluation of algorithms
  - Modeling, simulation, implementation
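
As an illustration of processor affinity (Linux-specific, not from the slides): sched_setaffinity() restricts the calling process to CPU 0, trading load-balancing flexibility for cache warmth; the choice of CPU 0 is arbitrary.

    /* Sketch (Linux): pin the calling process to CPU 0 so the scheduler
     * keeps it on that core (hard affinity), which preserves cache contents
     * at the cost of load-balancing flexibility. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                          /* allow only CPU 0 */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* 0 = this process */
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }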
Chapter 6: Synchronization

• Process synchronization
  - Techniques to coordinate access to shared data
• Race condition
  - Multiple processes manipulate shared data and the result depends on execution order
• Critical-section problem
  - Three requirements: mutual exclusion, progress, bounded waiting
  - Software solution: Peterson’s algorithm (see the sketch after this list)
  - Hardware support: TestAndSet(), Swap()
    - Busy waiting (or spinlocks)
• Semaphores
  - No busy waiting
  - wait(), signal() must be atomic → moves the CS problem to the kernel
• Monitors
  - High-level (programming language) constructs
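
A hedged sketch of Peterson’s algorithm in C for two threads with ids 0 and 1. It is correct only under sequentially consistent memory; modern hardware would need atomics or memory barriers, so read it as annotated pseudocode. The driver with a shared counter is illustrative.

    /* Sketch: Peterson's algorithm for two threads (ids 0 and 1).
     * Correct only under sequentially consistent memory; real hardware
     * needs barriers/atomics. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    volatile bool flag[2] = {false, false};   /* flag[i]: thread i wants in */
    volatile int  turn    = 0;                /* who yields if both want in */
    int shared = 0;                           /* data protected by the critical section */

    void enter_cs(int i)
    {
        int other = 1 - i;
        flag[i] = true;                       /* announce intent */
        turn = other;                         /* let the other thread go first */
        while (flag[other] && turn == other)
            ;                                 /* busy wait (spinlock-style) */
    }

    void leave_cs(int i)
    {
        flag[i] = false;                      /* no longer interested */
    }

    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        for (int k = 0; k < 100000; k++) {
            enter_cs(id);
            shared++;                         /* critical section */
            leave_cs(id);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared = %d\n", shared);      /* 200000 under the stated memory model */
        return 0;
    }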
Synchronization

• Some classical synchronization problems (a bounded-buffer sketch follows)
  - Producer-consumer
  - Dining philosophers
  - Readers-writers
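
A hedged sketch of the producer-consumer (bounded-buffer) problem using POSIX semaphores and a mutex; the buffer size, item count, and names are illustrative.

    /* Sketch: bounded-buffer producer-consumer with POSIX semaphores.
     * 'empty_slots' counts free slots, 'full_slots' counts filled slots,
     * and the mutex protects the buffer indices. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                                   /* buffer capacity */

    static int buffer[N], in = 0, out = 0;
    static sem_t empty_slots, full_slots;
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg)
    {
        for (int item = 0; item < 32; item++) {
            sem_wait(&empty_slots);               /* wait for a free slot */
            pthread_mutex_lock(&mutex);
            buffer[in] = item;
            in = (in + 1) % N;
            pthread_mutex_unlock(&mutex);
            sem_post(&full_slots);                /* signal a filled slot */
        }
        (void)arg;
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 32; i++) {
            sem_wait(&full_slots);                /* wait for a filled slot */
            pthread_mutex_lock(&mutex);
            int item = buffer[out];
            out = (out + 1) % N;
            pthread_mutex_unlock(&mutex);
            sem_post(&empty_slots);               /* free the slot */
            printf("consumed %d\n", item);
        }
        (void)arg;
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&empty_slots, 0, N);             /* N free slots initially */
        sem_init(&full_slots, 0, 0);              /* no filled slots initially */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }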
Chapter 7: Deadlock
• A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set
• Four necessary (but not sufficient) conditions (a two-mutex sketch follows this list):
  - Mutual exclusion: only one process at a time can use a resource
  - Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes
  - No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task
  - Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0
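
A hedged C sketch (not on the slide) showing how the conditions combine: two threads acquire two pthread mutexes in opposite orders, producing hold-and-wait and a circular wait, so both block forever. The names lock1/lock2 and the sleep() are only there to make the deadlocking interleaving likely.

    /* Sketch: a circular wait with two pthread mutexes.
     * Thread A takes lock1 then lock2; thread B takes lock2 then lock1.
     * Once each holds its first lock, both block forever. */
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

    static void *thread_a(void *arg)
    {
        pthread_mutex_lock(&lock1);      /* A holds lock1 */
        sleep(1);                        /* give B time to grab lock2 */
        pthread_mutex_lock(&lock2);      /* ... and waits for lock2 (held by B) */
        pthread_mutex_unlock(&lock2);
        pthread_mutex_unlock(&lock1);
        (void)arg;
        return NULL;
    }

    static void *thread_b(void *arg)
    {
        pthread_mutex_lock(&lock2);      /* B holds lock2 */
        sleep(1);
        pthread_mutex_lock(&lock1);      /* ... and waits for lock1 (held by A) */
        pthread_mutex_unlock(&lock1);
        pthread_mutex_unlock(&lock2);
        (void)arg;
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);           /* never returns: the threads deadlock */
        pthread_join(b, NULL);
        return 0;
    }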
Deadlock Handling
• Prevention: ensure that at least one of the necessary conditions does not hold
• Avoidance: decide, for each request, whether the issuing process should wait, to avoid leaving the system in an unsafe state
  - Resource-allocation graph: single instance of each resource type
  - Banker’s algorithm: multiple instances of a resource type (a safety-check sketch follows)
• Detection and recovery
  - Detection algorithm
  - Recovery: process termination or resource preemption
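
Not from the slides: a minimal C sketch of the safety check at the heart of the Banker’s algorithm. The matrix names (available, allocation, need) follow the usual textbook convention; the example numbers are made up.

    /* Sketch: the safety check used by the Banker's algorithm.
     * The state is safe if all processes can finish in some order using
     * only the currently available resources plus what finished processes
     * release along the way. */
    #include <stdbool.h>
    #include <stdio.h>

    #define P 3   /* processes */
    #define R 2   /* resource types */

    bool is_safe(int available[R], int allocation[P][R], int need[P][R])
    {
        int  work[R];
        bool finish[P] = {false};

        for (int j = 0; j < R; j++) work[j] = available[j];

        for (int done = 0; done < P; ) {
            bool progressed = false;
            for (int i = 0; i < P; i++) {
                if (finish[i]) continue;
                bool can_run = true;
                for (int j = 0; j < R; j++)
                    if (need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {                       /* pretend Pi runs to completion */
                    for (int j = 0; j < R; j++) work[j] += allocation[i][j];
                    finish[i] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) return false;           /* no process can finish: unsafe */
        }
        return true;                                 /* everyone can finish: safe */
    }

    int main(void)
    {
        int available[R]     = {3, 2};
        int allocation[P][R] = {{1, 0}, {2, 1}, {1, 1}};
        int need[P][R]       = {{2, 2}, {1, 1}, {3, 1}};
        printf("state is %s\n", is_safe(available, allocation, need) ? "safe" : "unsafe");
        return 0;
    }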
Good Luck on the Exam!