
Memory management

Ref: Stallings; G. Anuradha

What is memory management?

• The task of subdividing the user portion of main memory to accommodate multiple processes is carried out dynamically by the operating system; this is known as memory management

Memory management terms

Memory management requirements

Memory management requirements – Contd…
• Relocation
  – Users generally don't know where their program will be placed in main memory
  – May want to swap a process back in at a different place
  – Must deal with user pointers
  – Generally handled by hardware
• Protection
  – Prevent processes from interfering with the OS or other processes
  – Often integrated with relocation

Memory management requirements – Contd…
• Sharing
  – Allow processes to share data/programs
• Logical organization
  – Main memory is organized as a linear/1D address space of words; secondary memory is organized similarly
  – Most programs are modularized
  – Advantages of the modular approach:
    • Modules can be written and compiled independently
    • Different modules can have different degrees of protection
    • Sharing can be done at the module level
  – Segmentation is the mechanism that most readily satisfies these requirements
• Physical organization
  – Transferring data between main memory and secondary memory cannot be left to the programmers
  – Managing memory ↔ disk transfers is a system responsibility

Memory Partitioning

• Fixed partitioning
• Dynamic partitioning
• Simple paging
• Simple segmentation
• Virtual memory paging
• Virtual memory segmentation

Fixed partitioning

Difficulties with equal-size fixed partitions
• A program may be too big to fit into a partition; in such cases overlays can be used
• Main memory utilization is extremely inefficient
• Leads to internal fragmentation: the block of data loaded is smaller than the partition

Placement algorithm

• With equal-size partitions, a process can be loaded into any partition that is available
• With unequal-size partitions there are two possible ways to assign processes to partitions:
  – One process queue per partition
  – A single queue for all partitions

Memory assignment for fixed partitioning

Advantages and disadvantages of the two approaches
• One process queue per partition (used in the IBM mainframe OS/MFT)
  – Advantage: internal fragmentation is reduced
  – Disadvantage: not optimal from the system point of view
• Single queue
  – Advantages: degree of flexibility; simple; requires minimal OS software and processing overhead
  – Disadvantages: limits the number of active processes in the system; small jobs will not utilize partition space efficiently

Dynamic partitioning

• Partitions are created dynamically as programs are loaded
• Avoids internal fragmentation, but must deal with external fragmentation

Effect of dynamic partitioning

Dynamic partitioning

• External fragmentation: as time goes on, more and more small fragments accumulate and the effective utilization of memory declines
• External fragmentation can be overcome using compaction: the OS shifts processes so that all the free memory is together in one block
• Compaction is time consuming

Placement algorithm

• Best-fit: chooses the block that is closest in size to the request
• First-fit: scans memory from the beginning and chooses the first available block that is large enough
• Next-fit: scans memory from the location of the last placement and chooses the next available block that is large enough

Which amongst them is the best?

• First-fit:
  – Simplest, and usually the best and fastest
• Next-fit:
  – Performs slightly worse than first-fit; it tends to break up the large free block at the end of memory, so compaction is required more often
• Best-fit:
  – Usually the worst performer; it leaves behind many fragments too small to be useful, so memory compaction must be done more frequently
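To make the three policies concrete, here is a minimal C sketch that searches a list of free-block sizes; the `free_block` array, its sizes, and the `last_placed` index are hypothetical, not from the slides.

```c
#include <stddef.h>

#define NBLOCKS 8

/* Sizes of the free blocks currently in memory (hypothetical values). */
static size_t free_block[NBLOCKS] = {60, 12, 200, 36, 18, 500, 8, 120};
static int last_placed = 0;          /* index remembered by next-fit */

/* First-fit: first block large enough, scanning from the beginning. */
int first_fit(size_t request) {
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= request)
            return i;
    return -1;                       /* no block is large enough */
}

/* Best-fit: the block whose size is closest to the request. */
int best_fit(size_t request) {
    int best = -1;
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= request &&
            (best == -1 || free_block[i] < free_block[best]))
            best = i;
    return best;
}

/* Next-fit: like first-fit, but scanning starts just after the
 * location of the last placement and wraps around. */
int next_fit(size_t request) {
    for (int k = 1; k <= NBLOCKS; k++) {
        int i = (last_placed + k) % NBLOCKS;
        if (free_block[i] >= request) {
            last_placed = i;
            return i;
        }
    }
    return -1;
}
```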

Buddy system

• Overcomes the drawbacks of both fixed and dynamic partitioning schemes

Algorithm of buddy system

• The entire space available for allocation is treated as a single block of size 2^U
• If a request of size s with 2^(U-1) < s ≤ 2^U is made, the entire block is allocated
• Otherwise the block is split into two equal buddies of size 2^(U-1)
• This process continues until the smallest block greater than or equal to s is generated and allocated to the request
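A minimal sketch of this splitting rule, assuming U = 10 (so 2^U = 1024 units) and a hypothetical request size; a real buddy allocator would also keep a free list per block size and coalesce released buddies.

```c
#include <stdio.h>

#define U 10                        /* whole space = 2^10 = 1024 units */

/* Smallest power-of-two block size >= s: the block that gets allocated. */
unsigned long buddy_block_size(unsigned long s) {
    unsigned long size = 1UL << U;  /* start with the single whole block */
    if (s > size)
        return 0;                   /* request cannot be satisfied */
    /* keep splitting into two equal buddies while half is still enough */
    while (size / 2 >= s && size > 1)
        size /= 2;
    return size;
}

int main(void) {
    unsigned long request = 100;    /* hypothetical request of 100 units */
    printf("request %lu -> block of %lu units\n",
           request, buddy_block_size(request));    /* prints 128 */
    return 0;
}
```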

Example of a buddy system

Tree representation of the buddy system
• A modified version of the buddy system is used for UNIX kernel memory allocation

Relocation

• Relocation is not a major problem with fixed-size partitions: it is easy to load a process back into the same partition
• Otherwise we need to deal with a process being loaded into a new location
• Memory addresses may change:
  – when the process is loaded into a new partition
  – if compaction is used
• To solve this problem, a distinction is made among several types of addresses

Relocation

• Different types of addresses
  – Logical address: a reference to a memory location independent of the current assignment of data to memory
  – Relative address: an example of a logical address, in which the address is expressed as a location relative to some known point
  – Physical address: the actual location in main memory
• A hardware mechanism is needed for translating relative addresses to physical main-memory addresses at the time of execution of the instruction that contains the reference

Hardware support for relocation

• Base/bounds relocation
• Base register
  – Holds the beginning physical address of the process
  – Added to all program addresses
• Bounds register
  – Used to detect accesses beyond the end of the allocated memory
  – Provides protection to the system
• Easy to move programs in memory: just change the base/bounds registers
• Largely replaced by paging
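A small sketch of how base/bounds translation works; the structure and function names are illustrative, not taken from any particular machine.

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch of base/bounds relocation; a real machine's registers would be
 * loaded by the OS each time the process is dispatched. */
typedef struct {
    uint32_t base;    /* beginning physical address of the partition */
    uint32_t bounds;  /* size of the allocated memory, in bytes      */
} relocation_regs;

/* Translate a relative (program) address to a physical address.
 * Returns false when the access is beyond the allocated memory,
 * where real hardware would raise a protection trap. */
bool translate(relocation_regs r, uint32_t relative, uint32_t *physical) {
    if (relative >= r.bounds)
        return false;
    *physical = r.base + relative;   /* base is added to every address */
    return true;
}
```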

Paging

• The problems with unequal fixed-size and variable-size partitions are internal and external fragmentation, respectively
• Paging divides each process into chunks of the same small size, called pages
• Memory is divided into chunks of that size, called frames
• Any page can then be loaded into any free page frame
• There is then only a small amount of internal fragmentation, and only in the last page of the process

Assignment of Process pages to free frames

Page tables
• When a new process D is brought in, it can still be loaded even though there is no contiguous free region large enough to hold it
• For this the OS maintains a page table for each process
• The page table shows the frame location of each page of the process
• Within the program, each logical address consists of a page number and an offset within the page

Page table contd…

• With simple partitioning, a logical address is the location of a word relative to the beginning of the program, and the processor translates it into a physical address
• With paging, the logical-to-physical address translation is still done by hardware
• Logical address {page number, offset} → processor → physical address {frame number, offset}

Data structures when process D is stored in main memory
• Paging is similar to fixed-size partitioning; the differences are:
  1. The partitions (frames) are quite small
  2. A program may occupy more than one partition
  3. These partitions need not be contiguous

Computation of logical and physical addresses
• Page size is typically a power of 2, to simplify the paging hardware
• Example (16-bit addresses, 1 KB pages):
  – Relative address 1502 is 0000010111011110 in binary
  – Top 6 bits (000001) = page number, in this case 1
  – Bottom 10 bits (0111011110) = offset within the page, in this case 478
• Thus a program can consist of a maximum of 2^6 = 64 pages of 1 KB each
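The shift-and-mask arithmetic behind this example can be sketched as follows; the frame number assigned to page 1 is a hypothetical page-table entry.

```c
#include <stdint.h>
#include <stdio.h>

/* 16-bit logical addresses with 1 KB pages, as in the example above:
 * the bottom 10 bits are the offset, the top 6 bits the page number. */
#define OFFSET_BITS 10
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void) {
    uint16_t logical = 1502;                    /* relative address       */
    uint16_t page    = logical >> OFFSET_BITS;  /* 1502 >> 10   = 1       */
    uint16_t offset  = logical & OFFSET_MASK;   /* 1502 & 1023  = 478     */

    uint16_t frame = 6;                         /* hypothetical mapping:
                                                   page 1 -> frame 6      */
    uint16_t physical = (uint16_t)((frame << OFFSET_BITS) | offset);

    printf("page=%u offset=%u physical=%u\n", page, offset, physical);
    return 0;
}
```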

Why is the page size a power of 2?

• The logical addressing scheme is transparent to the programmer, the assembler, and the linker
• It is easy to implement a function in hardware to perform dynamic address translation at run time

Steps in address translation

Logical to physical address translation using paging

Hardware support

• Each OS has its own method for storing page tables
• A pointer to the page table is stored with the other register values in the PCB, and is reloaded whenever the process is dispatched
• The page table may be stored in special registers if the number of pages is small
• Otherwise the page table is stored in physical memory, and a special register, the page-table base register, points to it (the problem is the extra time taken for each access)

Implementation of Page Table

• The page table is kept in main memory
• The page-table base register (PTBR) points to the page table
• The page-table length register (PTLR) indicates the size of the page table
• In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
• The two-memory-access problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB)
• Some TLBs store an address-space identifier (ASID) in each TLB entry; the ASID uniquely identifies each process and provides address-space protection for that process

Hardware support contd…

• Use a translation look-aside buffer (TLB)
• The TLB stores recently used (page #, frame #) pairs
• It compares the input page # against the stored entries; the comparison is carried out in parallel and is fast
• If a match is found, the corresponding frame # is output, so no memory access is needed for the page table
• On a miss, the page table in main memory must still be consulted
• A TLB normally has 64 to 1,024 entries
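A rough software model of the lookup described above; real hardware compares all entries in parallel, and the entry layout here is only illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64   /* typical TLBs hold 64 to 1,024 entries */

/* One TLB entry: a recently used (page #, frame #) pair. */
typedef struct {
    bool     valid;
    uint32_t page;
    uint32_t frame;
} tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];

/* Look up a page number. Real hardware compares every entry in parallel;
 * this sequential loop only models the behaviour, not the speed. */
bool tlb_lookup(uint32_t page, uint32_t *frame) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;   /* hit: no page-table access needed */
            return true;
        }
    }
    return false;                    /* miss: consult the page table */
}
```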

Paging Hardware With TLB

Memory Protection

• Memory protection implemented by associating protection bit with each frame • Valid-invalid bit attached to each entry in the page table: – “valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page – “invalid” indicates that the page is not in the process’ logical address space
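A small sketch of the check these bits imply; the page-table-entry layout and the extra writable bit are illustrative assumptions, not a real architecture's format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of a page-table entry carrying protection bits. */
typedef struct {
    uint32_t frame;
    bool     valid;      /* page belongs to the process' address space */
    bool     writable;   /* example of an additional protection bit    */
} pte;

/* The hardware makes this check on every memory reference; a failed
 * check causes a trap to the operating system. */
bool access_ok(pte entry, bool is_write) {
    if (!entry.valid)
        return false;                 /* illegal (invalid) page          */
    if (is_write && !entry.writable)
        return false;                 /* protection violation on a write */
    return true;
}
```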

Valid (v) or Invalid (i) Bit In A Page Table

Shared pages

• Paging makes it possible to share common code
• This is possible only if the code is reentrant code (pure code)
• Reentrant code is non-self-modifying code: it never changes during execution, so two or more processes can execute the same code at the same time

Shared Pages Example

Summary

1. Main memory is divided into equal-sized frames
2. Each process is divided into frame-sized pages
3. When a process is brought in, all of its pages are loaded into available frames and a page table is set up
4. This approach solves the fragmentation problems seen with partitioning

Segmentation

• Segmentation is a memory-management scheme that supports the user's view of memory
• The logical address space is a collection of segments
• Each segment has a name and a length
• Each address specifies a segment and an offset within that segment

User’s View of a Program

Logical View of Segmentation

(Figure: segments 1–4 of the user's logical space mapped onto noncontiguous regions of physical memory)

Segmentation Hardware

Example of Segmentation

• Segment 2, reference to byte 53: 4300 + 53 = 4353
• Segment 3, reference to byte 852: 3200 + 852 = 4052
• Segment 0, reference to byte 1222: traps to the OS, because the offset is beyond the end of the segment
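A sketch of the limit check and base addition used here; the base/limit values are assumed so as to reproduce the references above, they are not stated in this transcript.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of segment-table translation with assumed base/limit values. */
typedef struct { uint32_t base, limit; } segment;

static const segment seg_table[] = {
    {1400, 1000},   /* segment 0 (assumed) */
    {6300,  400},   /* segment 1 (assumed) */
    {4300,  400},   /* segment 2 */
    {3200, 1100},   /* segment 3 */
};
#define NSEGS (sizeof seg_table / sizeof seg_table[0])

/* Translate (segment, offset); -1 stands for a trap to the OS. */
long translate(uint32_t s, uint32_t offset) {
    if (s >= NSEGS || offset >= seg_table[s].limit)
        return -1;                    /* beyond the end of the segment */
    return (long)(seg_table[s].base + offset);
}

int main(void) {
    printf("%ld\n", translate(2, 53));    /* 4300 + 53  = 4353 */
    printf("%ld\n", translate(3, 852));   /* 3200 + 852 = 4052 */
    printf("%ld\n", translate(0, 1222));  /* -1: beyond segment 0's limit */
    return 0;
}
```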

Implementation of Segment Tables

• Just like page tables, segment tables can be kept in fast registers and accessed there
• When a program contains a large number of segments, the segment table itself is kept in memory; a segment-table base register and a segment-table length register locate it, and both are checked on every reference

Protection and sharing

• Segments are naturally protected because each segment is a semantically defined unit of the program
• Code and data can be shared in segmentation: segments are shared when entries in the segment tables of two different processes point to the same physical location