
Chapter 10

Operating Systems

Software Categories

Software
• Application Software
• System Software
  • Utility Software
  • Operating System
    • Shell
    • Kernel

Software Categories

• Application software is written to address our specific needs: to solve problems in the real world.
• Word processing programs, games, inventory control systems, automobile diagnostic programs, and missile guidance programs are all application software.
• System software manages a computer system at a fundamental level.
• It provides the tools and an environment in which application software can be created and run.


System Software

Within the class of system software are two categories:

• Utility software: programs for performing various activities fundamental to computer installations, but not part of the OS. (Examples include formatting a disk, networking, copying files, using a modem, and data compression.)
• Operating systems

Operating System

An operating system also consists of two parts:

• The kernel manages computer resources, such as memory and input/output devices.
• The shell provides an interface through which a human can interact with the computer.

An operating system also allows application programs to interact with other system resources.

Operating System

Figure 10.1 An operating system interacts with many aspects of a computer system.


Operating System

• The various roles of an operating system generally revolve around the idea of “sharing nicely”.
• An operating system manages resources, and these resources are often shared in one way or another among programs that “want” to use them.


Managing Resources

Resource management consists of:

• Memory management
• Process management
• CPU scheduling

Memory Management

• Memory management keeps track of what is stored in memory and where in memory it is.
• Multiprogramming is the technique of keeping multiple programs in main memory at the same time. These programs compete for access to the CPU so that they can execute.


Memory

Figure 10.3 Memory is a continuous set of bits referenced by specific addresses


Logical and Physical Addresses

A program may include instructions that transfer control. For example, in BASIC a programmer can say GOTO 200, where 200 is the line number of the instruction to be executed next. This line number is relative to the start of the program and so is a logical address.

However, the physical address is the actual location in memory where this instruction is stored.


Memory Management

• A logical address (sometimes called a virtual or relative address) is a value that specifies a generic location, relative to the program but not to the reality of main memory.
• A physical address is an actual address in the main memory device.
• Operating systems must employ techniques to:
  • Track where and how a program resides in memory.
  • Convert logical program addresses into actual memory addresses.

Memory Management

There are three approaches to memory management, depending on how we conceive of memory being organised:

• Single Contiguous Memory
• Partitioned Memory
• Paged Memory

Single Contiguous Memory Management

Figure 10.4 Main memory divided into two sections

• There are only two programs in memory:
  • The operating system
  • The application program
• This approach is called single contiguous memory management.

Single Contiguous Memory Management

• In this system, a logical address is simply an integer value relative to the starting point of the program.
• To produce a physical address, we add a logical address to the starting address of the program in physical main memory.
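A minimal sketch of this binding step in Python; the program start address (10000) and logical address (250) below are invented for illustration:

```python
def to_physical(logical: int, program_start: int) -> int:
    """Bind a logical address to a physical one by adding the
    program's starting address in main memory."""
    return program_start + logical

# Hypothetical example: a program loaded at physical address 10000,
# so logical address 250 maps to physical address 10250.
print(to_physical(250, 10000))  # 10250
```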


Single Contiguous Memory Management

Figure 10.5 Binding a logical address to a physical one


Partition Memory Management

• When using fixed partitions, main memory is divided into a particular number of partitions.
• When using dynamic partitions, the partitions are created to fit the needs of the programs.


Partition Memory Management

Figure 10.6 Address resolution in partition memory management

• At any point in time, memory is divided into a set of partitions, some empty and some allocated to programs.
• The Base register holds the beginning address of the current partition.
• The Bounds register holds the length of the current partition.
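A sketch of the resolution step pictured in Figure 10.6, with invented register values; the bounds check is what keeps a program from addressing memory outside its own partition:

```python
def resolve(logical: int, base: int, bounds: int) -> int:
    """Resolve a logical address within the current partition.

    The logical address is checked against the bounds register first,
    then added to the base register to form the physical address."""
    if not 0 <= logical < bounds:
        raise MemoryError("address outside the current partition")
    return base + logical

# Hypothetical partition: starts at address 6000, length 3000.
print(resolve(1500, base=6000, bounds=3000))  # 7500
```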


Partition Selection

• First fit: the program is allocated to the first partition big enough to hold it.
• Best fit: the program is allocated to the smallest partition big enough to hold it.
• Worst fit: the program is allocated to the largest partition big enough to hold it.
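A small illustration of the three strategies, representing the free partitions simply as a list of sizes (the sizes and the request below are invented for the example):

```python
def first_fit(free, size):
    """Index of the first partition big enough to hold the program."""
    return next((i for i, p in enumerate(free) if p >= size), None)

def best_fit(free, size):
    """Index of the smallest partition big enough."""
    fits = [i for i, p in enumerate(free) if p >= size]
    return min(fits, key=lambda i: free[i]) if fits else None

def worst_fit(free, size):
    """Index of the largest partition big enough."""
    fits = [i for i, p in enumerate(free) if p >= size]
    return max(fits, key=lambda i: free[i]) if fits else None

free = [500, 300, 600, 200]   # free partition sizes
print(first_fit(free, 250))   # 0 (500 is the first that fits)
print(best_fit(free, 250))    # 1 (300 is the smallest that fits)
print(worst_fit(free, 250))   # 2 (600 is the largest that fits)
```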


Paged Memory Management

• Paged memory technique: main memory is divided into small fixed-size blocks of storage called frames.
• A program is divided into pages that (for the sake of our discussion) we assume are the same size as a frame.
• The operating system maintains a separate page-map table (PMT) for each program in memory.

Paged Memory Management

• To produce a physical address, you first look up the page in the PMT to find the frame number in which it is stored.
• Then multiply the frame number by the frame size and add the offset to get the physical address.

Figure 10.7 A paged memory management approach
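A sketch of this two-step translation, assuming a frame size of 1024 and an invented page-map table:

```python
FRAME_SIZE = 1024  # assumed size of each frame (and page)

def translate(logical: int, pmt: dict) -> int:
    """Translate a logical address via a page-map table (PMT).

    Split the logical address into a page number and an offset,
    look the page up in the PMT to find its frame, then compute
    frame * frame size + offset."""
    page, offset = divmod(logical, FRAME_SIZE)
    return pmt[page] * FRAME_SIZE + offset

# Hypothetical PMT: page 0 lives in frame 5, page 1 in frame 2.
pmt = {0: 5, 1: 2}
print(translate(1536, pmt))  # page 1, offset 512 -> 2*1024 + 512 = 2560
```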


Paged Memory Management

• An important extension is demand paging.
  • Not all parts of a program actually have to be in memory at the same time.
  • In demand paging, the pages are brought into memory on demand.
  • The act of bringing in a page from secondary memory, which often causes another page to be written back to secondary memory, is called a page swap.
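A toy sketch of the idea; the FIFO replacement policy here is an assumption, standing in for whatever policy a real operating system would use:

```python
from collections import deque

def count_page_swaps(references, num_frames):
    """Simulate demand paging with FIFO replacement.

    `references` is the sequence of page numbers a program touches;
    a page is loaded only when it is actually referenced."""
    resident = deque()           # pages currently in main memory
    swaps = 0
    for page in references:
        if page in resident:
            continue             # page already resident: no fault
        if len(resident) == num_frames:
            resident.popleft()   # page swap: the oldest page goes back out
            swaps += 1
        resident.append(page)    # bring the page in on demand
    return swaps

# Hypothetical reference string, with room for only 3 pages.
print(count_page_swaps([0, 1, 2, 0, 3, 0, 4], num_frames=3))  # 3
```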


Paged Memory Management

• The demand paging approach gives rise to the idea of virtual memory, the illusion that there are no restrictions on the size of a program.
• Too much page swapping, however, is called thrashing and can seriously degrade system performance.


Resource Management

• A process can be defined as a program in execution.
• The operating system performs process management to carefully track the progress of each process and all of its intermediate states.


Batch Processing

Figure 10.2 In early systems, human operators would organize jobs into batches


Timesharing

• Multiprogramming allowed multiple processes to be active at once, which gave rise to the ability for programmers to interact with the computer system directly, while still sharing its resources.
• A timesharing system allows multiple users to interact with a computer at the same time.
• In a timesharing system, each user has his or her own virtual machine, in which all system resources are (in effect) available for use.

Process Management

The Process States

Figure 10.8 The process life cycle


The Process Control Block

• The operating system must manage a large amount of data for each active process.
• Usually that data is stored in a data structure called a process control block (PCB).
• Each state is represented by a list of PCBs, one for each process in that state.


The Process Control Block

• Keep in mind that there is only one CPU and therefore only one set of CPU registers.
  • These registers contain the values for the currently executing process.
  • The values define the state of the machine at any given time.
• Each time a process is moved to the running state:
  • Register values for the interrupted process are stored into its PCB.
  • Register values of the process admitted to the running state are loaded into the CPU from its PCB.
• This exchange of information is called a context switch.
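A schematic of the exchange, using an invented three-register machine; a real PCB holds much more (process state, scheduling information, and so on):

```python
from dataclasses import dataclass, field

def fresh_registers():
    return {"pc": 0, "sp": 0, "acc": 0}  # invented register set

@dataclass
class PCB:
    """Toy process control block: just the saved register values."""
    pid: int
    registers: dict = field(default_factory=fresh_registers)

cpu = fresh_registers()  # the single set of CPU registers

def context_switch(old: PCB, new: PCB) -> None:
    """Store the interrupted process's register values into its PCB,
    then load the next process's saved values into the CPU."""
    old.registers = dict(cpu)
    cpu.clear()
    cpu.update(new.registers)

p1, p2 = PCB(pid=1), PCB(pid=2)
cpu["pc"] = 42             # p1 has been running for a while
context_switch(p1, p2)     # p1's registers saved; p2's loaded
print(p1.registers["pc"])  # 42
```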


CPU Scheduling

• The act of determining which process in the ready state should be moved to the running state.
• That is, deciding which process should be given over to the CPU.


CPU Scheduling

• Nonpreemptive scheduling occurs when the currently executing process gives up the CPU voluntarily.
• Preemptive scheduling occurs when the operating system decides to favour another process, preempting the currently executing process.
• Turnaround time for a process is the amount of time from when the process arrives in the ready state to when it exits the running state for the last time.
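For example, a process that enters the ready state at time 0 and exits the running state for the last time at time 140 has a turnaround time of 140 − 0 = 140.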


CPU Scheduling

In each of the following examples we will consider 5 processes arriving in the Ready state at the same time. The service time for each is listed in this table.

Process    Service time
p1         140
p2         75
p3         320
p4         280
p5         125

How does the dispatcher decide their order?


First-Come, First-Served

• The first ordering structure that comes to mind is the queue.
• Processes are moved to the CPU in the order in which they arrive in the Ready state.
• FCFS scheduling is nonpreemptive: one process completes before the next begins.


First-Come, First-Served

The Gantt chart below shows the order and turnaround time for the 5 processes.


First-Come, First-Served

Process    Turnaround time
p1         140
p2         215
p3         535
p4         815
p5         940

Average turnaround time: 529
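A quick check of these numbers, assuming all five processes arrive in the Ready state at time 0 (a sketch of the arithmetic, not of a real dispatcher):

```python
service = {"p1": 140, "p2": 75, "p3": 320, "p4": 280, "p5": 125}

def fcfs(order):
    """Run each process to completion in arrival order and
    return its turnaround time (arrival is at time 0)."""
    clock, turnaround = 0, {}
    for p in order:
        clock += service[p]
        turnaround[p] = clock
    return turnaround

t = fcfs(["p1", "p2", "p3", "p4", "p5"])
print(t)                         # p1: 140, p2: 215, p3: 535, p4: 815, p5: 940
print(sum(t.values()) / len(t))  # 529.0
```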


Shortest Job Next

• This technique looks at all processes in the Ready state and dispatches the one with the shortest service time.
• It is also generally implemented as a nonpreemptive algorithm.


Shortest Job Next

The same 5 processes produce a much smaller average turnaround time. SJN is provably optimal as a strategy.

Its weakness is that it relies on knowledge of the future.

Process    Turnaround time
p2         75
p5         200
p1         340
p4         620
p3         940

Average turnaround time: 435
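Under the same assumptions, SJN amounts to dispatching in order of increasing service time:

```python
service = {"p1": 140, "p2": 75, "p3": 320, "p4": 280, "p5": 125}

# Shortest Job Next: sort the ready processes by service time,
# then run each to completion (nonpreemptive).
clock, turnaround = 0, {}
for p in sorted(service, key=service.get):
    clock += service[p]
    turnaround[p] = clock

print(turnaround)                                  # p2: 75, p5: 200, p1: 340, p4: 620, p3: 940
print(sum(turnaround.values()) / len(turnaround))  # 435.0
```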

Round Robin Scheduling

• …distributes the processing time equitably among all ready processes.
• The algorithm establishes a particular time slice (or quantum), which is the amount of time each process receives before being preempted. It is then returned to the ready state to allow another process its turn.
• The round-robin algorithm is preemptive.


Round Robin Scheduling

Suppose the time slice is 50. This Gantt chart shows how the processes will be scheduled.

Process    Turnaround time
p1         515
p2         325
p3         940
p4         920
p5         640

Average turnaround time: 668
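A sketch of the same calculation: all five processes arrive at time 0, and each runs for at most one quantum of 50 before rejoining the ready queue:

```python
from collections import deque

service = {"p1": 140, "p2": 75, "p3": 320, "p4": 280, "p5": 125}
QUANTUM = 50

ready = deque(service)                 # ready queue, in arrival order
remaining = dict(service)
clock, turnaround = 0, {}

while ready:
    p = ready.popleft()
    run = min(QUANTUM, remaining[p])   # run until the quantum expires
    clock += run                       # or the process finishes
    remaining[p] -= run
    if remaining[p] == 0:
        turnaround[p] = clock          # finished: record turnaround
    else:
        ready.append(p)                # preempted: back to the ready queue

print(turnaround)                                  # p2: 325, p1: 515, p5: 640, p4: 920, p3: 940
print(sum(turnaround.values()) / len(turnaround))  # 668.0
```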

Round Robin Scheduling

• Notice that round robin is much less efficient in principle.
• However, multiprogramming requires a preemptive strategy. Can you think of a reason?
