
Lecture 1
History of Operating Systems
Introduction to Concurrent and Distributed Systems
What Is an Operating System?
An Operating System is the software that makes the computer's hardware usable.
The OS manages the hardware and software resources of the computer.
An OS is started when the computer boots and continues to run while the computer
is operational. Its purpose is to provide access to and control of application
programs, file transfers, and the other tasks required by the computer user.
Layered view (top to bottom): End-User Applications, Utilities, Operating System,
Computer Hardware
Why do we study Operating Systems?
Because they are more interesting than the broken ones*
To be able to manage our computer resources more effectively
To take advantage of functionality provided by the OS in our programming projects
Why do we study Distributed Systems?
To build more secure software applications
To better prepare us for the increasing complexity of networked and distributed
software design and implementation
* bahrumpbump
Functions of an Operating System
Separates applications from the hardware they access (a software layer)
Manages software and hardware to produce desired results
Operating systems are primarily resource managers. The resources they manage:
Hardware - processors, memory, input/output devices, communication devices
Software applications
Mainframe Operating Systems History
1940’s No O.S. (The 0th Generation)
all instructions hand-coded in machine language
one program resident in computer at a time
programmer at console
1950’s Batch O.S. (The 1st Generation)
batch processing - single jobs running to completion in sequence
O.S. regains control after each job is completed
programs included Job Control Language (JCL) to support O.S. functions
1960’s Multiprogramming O.S. (The 2nd Generation)
multiprogramming - multiple jobs running concurrently on a mainframe
device independence - peripherals interchangeable
timesharing - interactive programming
real-time systems - immediate response and hardware in the loop (HWIL)
1970’s Multimode O.S. (The 3rd Generation)
multimode systems - combining batch, timeshare, real-time and multiprocessing
software layer - layers of abstraction separating programmer and user from HW
virtual machine - the emulation of one machine on another
software became unbundled from hardware creating a new industry
1980’s Personal Computer O.S. (The 4th Generation)
personal computers - minicomputers, workstations, desktops, laptops, handheld
local area network - regaining processing power by connecting smaller systems
1990’s Distributed O.S. (The 5th Generation)
global interconnectivity - Internet, World Wide Web
thin client - functionality left on server system to manage large databases
Personal Computer Development Timeline
1974
Ed Roberts invents the ALTAIR 8800, which appears in a Popular Electronics article.
1975
Paul Allen and Bill Gates create a BASIC interpreter for the ALTAIR.
Allen and Gates take a paper-tape copy of their interpreter to Roberts.
1976
Allen and Gates move to Albuquerque, NM and open an office across the street from
Roberts’ company. This is the beginning of Microsoft.
1977
Steven Jobs and Steve Wozniak debut the Apple at the West Coast Computer Faire
and sell 50 units.
Jobs and Wozniak sell the first Apple II.
Gary Kildall writes CP/M; his company, Digital Research (originally
Intergalactic Digital Research), sells 600,000 copies.
1979
Steven Jobs tours Xerox Palo Alto Research Center and sees Ethernet, laser
printers, and a Graphical User Interface (GUI).
1980
IBM contacts Bill Gates looking for a BASIC interpreter and an OS for their
upcoming PC.
Gates does not have an OS, so he recommends that IBM talk to Kildall. Kildall is
not available for the meeting and leaves his wife to talk with them. She will not
sign their non-disclosure agreement, so they give up on getting CP/M.
Apple sells 35,000 Apple IIs. Microsoft moves its office to Bellevue, Washington.
Gates contacts Tim Patterson, the creator of QDOS (an OS similar to CP/M), but
Patterson will not make a deal since he has an exclusive-rights agreement with a
company called Seattle Computer Products.
1981 Microsoft makes a deal with IBM to provide BASIC and QDOS.
Gates purchases the rights to QDOS from Seattle Computer Products for $50K and
renames it PC-DOS 1.0. Tim Patterson goes to work for Microsoft.
1982 Apple develops the Lisa (forerunner to the Macintosh).
1983 Microsoft recommends a GUI for DOS with the look and feel of the Mac OS,
but IBM rejects it.
IBM asks Microsoft to help develop OS/2 to compete with clone operating systems.
1984 Apple introduces the Macintosh during the Super Bowl.
Microsoft breaks ties with IBM and begins work on Windows 1.0.
Approximately 1,000 servers are connected to the “Internet”.
1985
Microsoft releases Windows 1.0
1987
OS/2 is introduced and is largely ignored by the public.
Microsoft announces Windows 2.0
Apple takes Microsoft to court and eventually loses.
1990
Microsoft introduces Windows 3.0 and sells over 30 million copies.
1992
Microsoft ships Windows 3.1
Apple’s PowerBook is the first computer line to break the $1 billion sales threshold.
1994
Microsoft releases Windows 3.11
1995
Microsoft introduces Windows 95 and spends $300 million promoting it.
1999
Microsoft introduces Windows NT 5.0, later released as Windows 2000.
Hardware Innovations that have Improved OS Performance
Storage interleaving - making multiple banks of secondary storage independently accessible
Relocation register - holds the address of a program in memory so it may be moved during use
Memory buffer - a block of memory used to hold data during I/O operations
I/O and DMA channels - special hardware to handle I/O and memory access instead of the CPU
Cycle stealing - giving DMA channels priority access to memory over the CPU
Interrupts vs. polling - methods for peripherals to gain the attention of the CPU
Storage protection - restricting each program to its own region of memory in a multiprogramming system
Clocks and timers - interval and real-time clocks used to support job scheduling and timesharing
Base-plus-displacement addressing - makes processes relocatable and permits shorter addresses (see the sketch after this list)
Virtual storage - lets arbitrarily large programs run seamlessly in limited physical memory
Multiprocessing - multiple processors sharing primary memory, controlled by one OS
Pipelining - a technique to overlap the stages of certain inherently sequential processes
Memory hierarchy - using different types of memory (cache, RAM, ...) to improve performance
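As a worked illustration of base-plus-displacement addressing with storage
protection, here is a minimal C sketch. The addresses, the region structure, and
the translate function are invented for this example, not taken from any
particular machine:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative relocation state for one process. */
    struct region {
        unsigned base;   /* contents of the relocation (base) register */
        unsigned limit;  /* region size, used for storage protection   */
    };

    /* Translate a displacement into a physical address; trap on overflow. */
    unsigned translate(struct region r, unsigned displacement) {
        if (displacement >= r.limit) {          /* storage protection check */
            fprintf(stderr, "protection fault\n");
            exit(1);
        }
        return r.base + displacement;
    }

    int main(void) {
        struct region r = { 0x4000, 0x1000 };   /* program loaded at 0x4000 */
        printf("0x%X\n", translate(r, 0x12C));  /* prints 0x412C */
        r.base = 0x9000;                        /* relocate: reload the base */
        printf("0x%X\n", translate(r, 0x12C));  /* same code, now 0x912C */
        return 0;
    }

Relocation and protection fall out of the same two registers: moving the process
means reloading the base, and the limit bounds every generated address.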
Single-Platform Operating Systems
The early OSs were designed to execute on a single computer. These systems had
no mechanism for accessing another computer through a network. Somewhat later in
the history of OSs, tools were created for machine-to-machine data transfer
through peer-to-peer connections; however, these were limited to basic file
transfer for copying and backup.
Types of Operating Systems
Real-Time OS - Features and settings are not accessible by the user. The primary
goal of an RTOS is to ensure that a specific set of operations occurs within a
precise time period.
Embedded OS - A single-user, single-tasking OS used on many small hand-held
devices such as personal digital assistants, cell phones, and media players.
PC OS - These are the most popular and well-known type of OS. They are
single-user, multi-tasking, and are used on desktop and laptop computers.
Mainframe OS - Also called a multi-user operating system, a mainframe OS makes
the resources of the computer simultaneously available to many different users.
These OSs must balance the processing load and resources to provide fair and
effective access to data and processes.
Networked & Distributed OS - For many multi-user and client-server
applications the mainframe computer has been replaced with many distributed
computers managed by a single OS.
Network Operating Systems
A network operating system is an operating system that contains components and
programs that allow a computer on a network to serve requests from other
computers for data and to provide access to other resources such as printers and
file systems.
JUNOS, used in routers and switches from Juniper Networks.
Cisco IOS (formerly "Cisco Internetwork Operating System") is a NOS having a
focus on the internetworking capabilities of network devices. It is used on Cisco
Systems routers and some network switches.
BSD, also used in many network servers.
Linux
Microsoft Windows Server
Novell NetWare
Distributed Operating Systems
A distributed operating system is one that looks to its users like an ordinary
centralized operating system but runs on multiple, independent central processing
units (CPUs). The key concept here is transparency. In other words, the use of
multiple processors should be invisible (transparent) to the user. Another way of
expressing the same idea is to say that the user views the system as a virtual
uniprocessor, not as a collection of distinct machines.
- Tanenbaum
Plan 9 from Bell Labs - designed from the ground up as a distributed system;
the architecture of Plan 9 is inherently grid-enabled.
Amoeba - an open source microkernel-based distributed operating system
developed by Andrew S. Tanenbaum and others at the Vrije Universiteit.
The aim of the Amoeba project is to build a timesharing system that makes
an entire network of computers appear to the user as a single machine.
(stalled)
BOINC - Open-source software for volunteer computing and grid
computing.
Networked vs Distributed Operating Systems
A typical configuration for a network operating system would be a collection of
personal computers along with a common printer server and file server for archival
storage, all tied together by a local network. Generally speaking, such a system will
have most of the following characteristics that distinguish it from a distributed system:
• Each computer has its own private operating system, instead of running part of a
global, systemwide operating system.
• Each user normally works on his or her own machine; using a different machine
invariably requires some kind of “remote login,” instead of having the operating
system dynamically allocate processes to CPUs.
• Users are typically aware of where each of their files is kept and must move files
between machines with explicit “file transfer” commands, instead of having file
placement managed by the operating system.
• The system has little or no fault tolerance; if 1 percent of the personal computers
crash, 1 percent of the users are out of business, instead of everyone simply being
able to continue normal work, albeit with 1 percent worse performance.
Parallel vs. Concurrent Processing
A concurrent program is a set of sequential programs that can be executed in
parallel.
A process is one of the sequential threads of execution making up a concurrent
program.
In the textbook the term program is reserved for concurrent programs.
Parallel processes are two or more processes executing at the same time.
Concurrency is an abstraction that refers to multiple processes each executing a
sequence of operations whose relative timing is arbitrary.
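To make "relative timing is arbitrary" concrete, consider an illustrative example
(not from the slides): process P executes p1 then p2, and process Q executes q1
then q2. Any interleaving that preserves each process's internal order is a
possible execution, and there are 4!/(2!·2!) = 6 of them:

    p1 p2 q1 q2
    p1 q1 p2 q2
    p1 q1 q2 p2
    q1 p1 p2 q2
    q1 p1 q2 p2
    q1 q2 p1 p2

A correct concurrent program must behave acceptably under all six schedules, and
the number of interleavings grows explosively with more processes and operations.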
Multitasking
Multitasking is a simple generalization from the concept of overlapping I/O with a
computation to overlapping the computation of one program with that of another.
A scheduler program is run by the operating system to determine which process
should be allowed to run for the next interval of time.
The scheduler can take into account priority considerations and usually implements
time-slicing, where computations are periodically interrupted to allow a fair sharing
of the computational resources, in particular, of the CPU.
When multitasking is performed within a program, it is referred to as multithreading.
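Here is a minimal sketch of overlapping I/O with computation using POSIX threads
(compile with cc -pthread). The sleep() call stands in for a blocking disk or
network read, and the function name slow_io is invented for this example:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Simulate a slow, blocking I/O operation. */
    void *slow_io(void *arg) {
        sleep(1);                        /* stand-in for a blocking read */
        puts("I/O finished");
        return NULL;
    }

    int main(void) {
        pthread_t io;
        pthread_create(&io, NULL, slow_io, NULL);

        /* The computation proceeds while the I/O thread is blocked. */
        long sum = 0;
        for (long i = 0; i < 100000000; i++)
            sum += i;
        printf("sum = %ld\n", sum);

        pthread_join(io, NULL);          /* wait for the I/O to complete */
        return 0;
    }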
Threads
The term process is used in the theory of concurrency, while the term thread is
commonly used in programming languages. A technical distinction is often made
between the two terms:
a process runs in its own address space managed by the OS
a thread runs within the address space of a single process
The term thread was popularized by pthreads (POSIX threads), a concurrency
specification implemented in UNIX/Linux systems.
The difference between processes and threads is not relevant for the study of
concurrent systems, so we will use the term process except when referring to a
specific programming language. C# and Java use threads, while Ada refers to
tasks.
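The address-space distinction can be shown in a small sketch, assuming a
UNIX/Linux system with POSIX threads (compile with cc -pthread; the names shared
and bump are invented for this example). A thread's write to a global variable is
visible to its creator, while a forked process writes only to its own copy:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int shared = 0;   /* threads of one process share this; processes do not */

    void *bump(void *arg) { shared = 42; return NULL; }

    int main(void) {
        /* Thread: same address space, so the write is visible. */
        pthread_t t;
        pthread_create(&t, NULL, bump, NULL);
        pthread_join(t, NULL);
        printf("after thread:  shared = %d\n", shared);   /* 42 */

        /* Process: the child gets its own copy of the address space. */
        shared = 0;
        if (fork() == 0) { shared = 42; _exit(0); }
        wait(NULL);
        printf("after process: shared = %d\n", shared);   /* still 0 */
        return 0;
    }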
Multiprocessors & Distributed Programming
The modern personal computer contains more than one processor: the graphics
processor is a specialized computer in its own right, as are the processors for
I/O and communications. In this sense, desktop computers are multiprocessor
systems.
Multiprocessing can also be performed using multiple computers. A program that runs
on multiple networked computers is called a distributed program.
The entire Internet can be considered to be one distributed system working to
disseminate information in the form of email and Web pages.
The Challenge of Concurrent Programming
The challenge in concurrent programming comes from the need to synchronize the
execution of different processes and to enable them to communicate.
It turns out to be very challenging to implement safe and efficient
synchronization and communication. When a concurrent program crashes, produces
incorrect answers, or behaves unexpectedly, it can be very difficult to
reproduce, diagnose, and correct the problem.
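The classic demonstration is a race condition, shown here as a minimal
POSIX-threads sketch (compile with cc -pthread; the name worker is invented for
this example). Two threads increment a shared counter without synchronization,
and updates are lost nondeterministically:

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                    /* shared by both threads */

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;                   /* load, add, store: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000; lost updates make the result vary run to run. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Guarding the increment with a pthread_mutex_t restores the expected result, but
it also serializes the loop, which is exactly the safety-versus-efficiency
tension described above.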