Disk Storage, File Structures and Hashing


Chapter 17: Disk Storage, Basic File Structures, and Hashing
Copyright © 2011 Pearson Education, Inc. Publishing as Pearson Addison-Wesley
Disk Storage Devices

- The preferred secondary storage device for high storage capacity and low cost.
- Data is stored as magnetized areas on magnetic disk surfaces.
- A disk pack contains several magnetic disks connected to a rotating spindle.
- Disks are divided into concentric circular tracks on each disk surface.
  - Track capacities vary, typically from 4 to 150 Kbytes or more.
Disk Storage Devices (cont.)

- A track is divided into smaller blocks or sectors because it usually contains a large amount of information.
  - One type of sector organization defines a sector as a portion of a track that subtends a fixed angle at the center.
  - The division of a track into sectors is hard-coded on the disk surface and cannot be changed.
- A track is divided into blocks by the OS during formatting.
  - The block size B is fixed for each system; typical block sizes range from B = 512 bytes to B = 4096 bytes.
  - Whole blocks are transferred between disk and main memory for processing.
Typical Disk Parameters

[Figure: table of typical disk parameter values]
Disk Storage Devices (cont.)

- A disk is a random access addressable device.
  - A read-write head moves to the track that contains the block to be transferred.
  - Disk rotation moves the block under the read-write head for reading or writing.
- A physical disk block (hardware) address consists of:
  - a cylinder number (an imaginary collection of the tracks of the same radius from all recorded surfaces),
  - the track number or surface number (within the cylinder),
  - and the block number (within the track).
- Reading or writing a disk block is time consuming because of the seek time s and rotational delay (latency) rd; a worked example follows.
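As a rough worked example (every parameter value below is an illustrative assumption, not a figure from the slides), the time to access a random block is approximately the seek time plus the rotational delay plus the block transfer time:

```python
# Illustrative estimate of the time to read one random disk block.
# All parameter values here are assumptions for the sake of the example.
s = 8.0                      # average seek time in ms (assumed)
rpm = 7200                   # spindle speed (assumed)
rd = (60_000 / rpm) / 2      # average rotational delay: half a revolution, in ms
transfer_rate = 100_000      # sustained transfer rate in bytes/ms (assumed)
B = 4096                     # block size in bytes

btt = B / transfer_rate      # block transfer time in ms
print(f"rd = {rd:.2f} ms, btt = {btt:.3f} ms")
print(f"time to read one random block = s + rd + btt = {s + rd + btt:.2f} ms")
```

Note that the seek and rotational delays dominate; this is why file organizations aim to minimize the number of random block accesses.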
Buffering of Blocks

- Double buffering can be used to speed up the transfer of contiguous disk blocks: while the CPU processes the block in one buffer, the next block is read into the other buffer (see the sketch below).
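A minimal sketch of the idea, assuming an in-memory list standing in for the disk and a hypothetical read_block function standing in for the device driver: a helper thread fills one buffer while the main loop processes the other, and the two buffers are then swapped.

```python
import threading

disk = [f"block-{i}".encode() for i in range(8)]    # stand-in for the disk

def read_block(i, buf):
    """Simulated I/O: copy block i from 'disk' into buf (a one-slot list)."""
    buf[0] = disk[i]

def process(data):
    print("processing", data.decode())

# Double buffering: read block i+1 into one buffer while processing block i.
buf_a, buf_b = [None], [None]
read_block(0, buf_a)                                # prime the first buffer
for i in range(len(disk)):
    reader = None
    if i + 1 < len(disk):
        reader = threading.Thread(target=read_block, args=(i + 1, buf_b))
        reader.start()                              # I/O overlaps with processing
    process(buf_a[0])
    if reader:
        reader.join()
    buf_a, buf_b = buf_b, buf_a                     # swap the two buffers
```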
Records

- Data is usually stored in the form of records.
  - Records may be of fixed or variable length.
- Records contain fields which have values of a particular type.
  - E.g., amount, date, time, age.
  - Fields themselves may be fixed length or variable length.
- Variable-length fields can be mixed into one record:
  - Separator characters or length fields are needed so that the record can be "parsed" (see the sketch below).
- A file of records with optional fields can be formatted in different ways:
  - Include in each record a sequence of <field-name, field-value> pairs rather than just the field values.
  - A repeating field needs one separator character to separate the repeating values of the field and another separator character to indicate termination of the field.
  - In a file that includes records of different types, each record is preceded by a record type indicator.
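A minimal sketch of parsing such a variable-length record stored as <field-name, field-value> pairs; the choice of "=" to separate names from values and ";" to terminate each field is an assumption of the example:

```python
def parse_record(raw: str) -> dict:
    """Parse a record stored as <field-name>=<field-value> pairs,
    each terminated by ';' (separator characters assumed for the example)."""
    fields = {}
    for pair in raw.split(";"):
        if pair:                          # skip the empty piece after the final ';'
            name, value = pair.split("=", 1)
            fields[name] = value
    return fields

record = "Name=Smith;Ssn=123456789;Salary=35000;"
print(parse_record(record))   # {'Name': 'Smith', 'Ssn': '123456789', 'Salary': '35000'}
```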
Blocking

- Blocking refers to storing a number of records in one block on the disk.
- The blocking factor (bfr) is the number of records per block: bfr = ⌊B/R⌋ for block size B and fixed record size R.
- There may be empty space in a block if an integral number of records does not fit in one block (a worked example follows this list).
- To utilize this unused space, we can store part of a record on one block and the rest on another.
  - A pointer at the end of the first block points to the block containing the remainder of the record, in case it is not the next consecutive block on disk.
- File records can be unspanned or spanned:
  - Unspanned: no record can span two blocks.
  - Spanned: a record can be stored in more than one block.
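A small worked example of these quantities (the values of B, R, and the record count r are assumptions):

```python
import math

B = 512      # block size in bytes (assumed)
R = 150      # fixed record size in bytes (assumed)
r = 10_000   # number of records in the file (assumed)

bfr = B // R                 # blocking factor: floor(B / R) = 3 records per block
wasted = B - bfr * R         # unused bytes per block with unspanned records
b = math.ceil(r / bfr)       # blocks needed for the file, unspanned organization

print(f"bfr = {bfr} records/block, {wasted} bytes unused per block")
print(f"the unspanned file needs b = {b} blocks")
```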
Files of Records

- A file is a sequence of records, where each record is a collection of data values (or data items).
- Allocating file blocks on disk:
  - Contiguous allocation makes reading fast, but it makes expanding the file difficult.
  - Linked allocation makes it easy to expand the file but slow to read the whole file.
  - File segments are a combination of the two: clusters of consecutive disk blocks are allocated, and the clusters are linked.
  - Indexed allocation.
- A file descriptor (or file header) includes information that describes the file, such as the field names and their data types, and the addresses of the file blocks on disk.
- To search for a record on disk, one or more blocks are copied into main memory buffers.
  - The goal of a good file organization is to locate the block that contains a desired record with a minimal number of block transfers.
Operations on Files

- Typical file operations include:
  - OPEN: Readies the file for access and associates a pointer that will refer to a current file record at each point in time.
  - RESET: Sets the file pointer of an open file to the beginning of the file.
  - FIND: Searches for the first file record that satisfies a certain condition and makes it the current file record.
  - FINDNEXT: Searches for the next file record (from the current record) that satisfies a certain condition and makes it the current file record.
  - READ: Reads the current file record into a program variable.
  - INSERT: Inserts a new record into the file and makes it the current file record.
  - DELETE: Removes the current file record from the file, usually by marking the record to indicate that it is no longer valid.
  - MODIFY: Changes the values of some fields of the current file record.
  - CLOSE: Terminates access to the file.
  - REORGANIZE: Reorganizes the file records.
    - For example, the records marked deleted are physically removed from the file, or a new organization of the file records is created.
  - READ_ORDERED: Reads the file blocks in order of a specific field of the file.
Operations on Files (cont.)

- The preceding operations (except for OPEN and CLOSE) are called record-at-a-time operations because each operation applies to a single record.
- SCAN: If the file has just been opened or reset, SCAN returns the first record; otherwise it returns the next record. If a condition is specified with the operation, the returned record is the first or next record satisfying the condition.
- In database systems, additional set-at-a-time higher-level operations may be applied to a file:
  - FINDALL: Locates all the records in the file that satisfy a search condition.
  - FIND (or LOCATE) n: Searches for the first record that satisfies a search condition and then continues to locate the next n − 1 records satisfying the same condition. Transfers the blocks containing the n records to the main memory buffer (if not already there).
  - FINDORDERED: Retrieves all the records in the file in some specified order.
  - REORGANIZE: Starts the reorganization process. As we shall see, some file organizations require periodic reorganization.
Unordered Files

- Also called a heap or a pile file.
- New records are inserted at the end of the file.
- A linear search through the file records is necessary to search for a record.
  - This requires reading and searching half the file blocks on the average, and is hence quite expensive.
  - Additional access paths, such as secondary indexes, are often used to speed up search.
- Record insertion is quite efficient.
- Reading the records in order of a particular field requires sorting the file records.
- To delete a record, either:
  - first find its block, copy the block into a buffer, delete the record from the buffer, and finally rewrite the block back to the disk; or
  - keep an extra byte or bit, called a deletion marker, stored with each record, and physically remove the deleted records during reorganization.
Ordered Files

- Also called a sequential file.
- File records are kept sorted by the values of an ordering field.
- Ordered records have some advantages over unordered files:
  - First, reading the records in order of the ordering key values becomes extremely efficient.
  - Second, finding the next record from the current one in order of the ordering key usually requires no additional block accesses.
  - Third, using a search condition based on the value of an ordering key field results in faster access.
- Insertion is expensive: records must be inserted in the correct order.
  - It is common to keep a separate unordered overflow (or transaction) file for new records to improve insertion efficiency; this is periodically merged with the main ordered file.
- A binary search can be used to search for a record on its ordering field value (see the sketch below).
  - This requires reading and searching about log2(b) of the b file blocks on the average, an improvement over linear search.
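A minimal sketch of binary search at block granularity, assuming an in-memory list of blocks stands in for the disk file and that records are sorted on the ordering key across blocks:

```python
def binary_search_blocks(blocks, key):
    """Find the record with the given ordering-key value.
    Each 'block' is a sorted list of (key, record) pairs, blocks are
    ordered by key range, and each probe simulates one block access."""
    lo, hi = 0, len(blocks) - 1
    accesses = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        block = blocks[mid]            # one simulated block transfer
        accesses += 1
        if key < block[0][0]:
            hi = mid - 1
        elif key > block[-1][0]:
            lo = mid + 1
        else:                          # key falls within this block's range
            for k, rec in block:
                if k == key:
                    return rec, accesses
            return None, accesses
    return None, accesses

# 16 blocks of 4 records each: any key is found in at most ~log2(16) = 4 probes.
blocks = [[(i, f"rec{i}") for i in range(b * 4, b * 4 + 4)] for b in range(16)]
print(binary_search_blocks(blocks, 37))   # ('rec37', 3)
```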
Average Access Times

The following table shows the average number of block accesses needed to reach a specific record for a given type of file, where b is the number of file blocks:

Type of file                     Average blocks to access a specific record
Heap (unordered)                 b/2
Ordered                          b/2
Ordered with binary search       log2 b
Hashed Files

- Hashing for disk files is called external hashing.
- The file blocks are divided into M equal-sized buckets, numbered bucket0, bucket1, ..., bucketM−1.
  - Typically, a bucket corresponds to one (or a fixed number of) disk block(s).
- One of the file fields is designated to be the hash key of the file.
- The record with hash key value K is stored in bucket i, where i = h(K) and h is the hashing function.
- Search is very efficient on the hash key.
- Collisions occur when a new record hashes to a bucket that is already full.
  - An overflow file is kept for storing such records.
  - Overflow records that hash to each bucket can be linked together (see the sketch below).
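A minimal in-memory sketch of external hashing with h(K) = K mod M and a per-bucket overflow chain; the bucket capacity and the list-based stand-in for the overflow file are assumptions of the example:

```python
M = 4              # number of main buckets (assumed)
CAPACITY = 2       # records per bucket, i.e., per disk block (assumed)

buckets = [[] for _ in range(M)]   # main file buckets
overflow = [[] for _ in range(M)]  # per-bucket chain, standing in for the overflow file

def h(key):
    return key % M                 # the hashing function

def insert(key, record):
    i = h(key)
    if len(buckets[i]) < CAPACITY:
        buckets[i].append((key, record))
    else:                          # collision on a full bucket: go to overflow
        overflow[i].append((key, record))

def search(key):
    i = h(key)                     # main bucket first, then its overflow chain
    for k, rec in buckets[i] + overflow[i]:
        if k == key:
            return rec
    return None

for k in [3, 7, 11, 15, 19]:       # all hash to bucket 3: two fit, three overflow
    insert(k, f"rec{k}")
print(search(19), "| overflow chain of bucket 3:", overflow[3])
```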
Hashed Files (cont.)

There are numerous methods for collision resolution, including the following (see the sketch after this list):
- Open addressing: Proceeding from the occupied position specified by the hash address, the program checks the subsequent positions in order until an unused (empty) position is found.
- Chaining: For this method, various overflow locations are kept, usually by extending the array with a number of overflow positions. In addition, a pointer field is added to each record location. A collision is resolved by placing the new record in an unused overflow location and setting the pointer of the occupied hash address location to the address of that overflow location.
- Multiple hashing: The program applies a second hash function if the first results in a collision. If another collision results, the program uses open addressing or applies a third hash function and then uses open addressing if necessary.
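A minimal sketch of one of these methods, open addressing with linear probing (the table size and hash function are assumptions; chaining and multiple hashing differ only in how the next candidate position is chosen):

```python
SIZE = 8
table = [None] * SIZE              # None marks an unused (empty) position

def h(key):
    return key % SIZE              # assumed hash function

def insert(key, record):
    pos = h(key)
    for step in range(SIZE):       # check subsequent positions in order
        slot = (pos + step) % SIZE
        if table[slot] is None:
            table[slot] = (key, record)
            return slot
    raise RuntimeError("table is full")

def search(key):
    pos = h(key)
    for step in range(SIZE):
        slot = (pos + step) % SIZE
        if table[slot] is None:
            return None            # an empty position ends the probe sequence
        if table[slot][0] == key:
            return table[slot][1]
    return None

insert(10, "a"); insert(18, "b")   # both hash to position 2; 18 is placed by probing
print(search(18), table)
```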
Hashed Files - Overflow Handling

[Figure: overflow handling for buckets, with overflow records linked in chains]
Hashed Files (cont.)

- To reduce the number of overflow records, a hash file is typically kept 70-80% full.
- The hash function h should distribute the records uniformly among the buckets.
  - Otherwise, search time will be increased because many overflow records will exist.
- Main disadvantages of static external hashing:
  - The fixed number of buckets M is a problem if the number of records in the file grows or shrinks.
  - Ordered access on the hash key is quite inefficient (it requires sorting the records).
Dynamic and Extendible Hashed Files

- Hashing techniques are adapted to allow dynamic growth and shrinking of the number of file records.
- These techniques include the following: dynamic hashing, extendible hashing, and linear hashing.
- Both dynamic and extendible hashing use the binary representation of the hash value h(K) in order to access a directory.
  - In dynamic hashing the directory is a binary tree.
  - In extendible hashing the directory is an array of size 2^d, where d is called the global depth.
- Records are distributed among buckets based on the values of the leading bits in their hash values.
Dynamic and Extendible Hashing (cont.)

- The directories can be stored on disk, and they expand or shrink dynamically.
  - Directory entries point to the disk blocks that contain the stored records.
- An insertion into a disk block that is full causes the block to split into two blocks, and the records are redistributed among the two blocks.
  - The directory is updated appropriately.
- Dynamic and extendible hashing do not require an overflow area.
- Linear hashing does require an overflow area but does not use a directory.
Extendible Hashing

- In extendible hashing, a type of directory (an array of 2^d bucket addresses) is maintained, where d is called the global depth of the directory.
  - The integer value corresponding to the first (high-order) d bits of a hash value is used as an index into the array to determine a directory entry.
  - Several directory locations with the same first d' bits of their hash values may contain the same bucket address if all the records that hash to these locations fit in a single bucket.
  - A local depth d', stored with each bucket, specifies the number of bits on which the bucket contents are based.
- The value of d can be increased or decreased by one at a time, thus doubling or halving the number of entries in the directory array (see the sketch below).
  - Doubling is needed if a bucket whose local depth d' is equal to the global depth d overflows.
  - Halving occurs if d > d' for all the buckets after some deletions occur.
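A minimal in-memory sketch of these rules, assuming 8-bit hash values, a bucket capacity of two records, and h(K) = K mod 256 (all assumptions of the example, not fixed by the technique):

```python
CAPACITY = 2                   # records per bucket (assumed)
BITS = 8                       # width of the hash value in bits (assumed)

def h(key):
    return key % 256           # assumed hash function producing 8-bit values

class Bucket:
    def __init__(self, local_depth):
        self.d_local = local_depth     # d': bits on which the contents are based
        self.records = {}              # key -> record

class ExtendibleHashFile:
    def __init__(self):
        self.d = 1                                 # global depth
        self.directory = [Bucket(1), Bucket(1)]    # 2^d bucket addresses

    def _index(self, key):
        return h(key) >> (BITS - self.d)           # first (high-order) d bits

    def search(self, key):
        return self.directory[self._index(key)].records.get(key)

    def insert(self, key, record):
        bucket = self.directory[self._index(key)]
        if key in bucket.records or len(bucket.records) < CAPACITY:
            bucket.records[key] = record
            return
        if bucket.d_local == self.d:               # overflow where d' == d:
            # double the directory; old entry i becomes entries 2i and 2i+1
            self.directory = [b for b in self.directory for _ in (0, 1)]
            self.d += 1
        self._split(bucket)
        self.insert(key, record)                   # retry after the split

    def _split(self, bucket):
        bucket.d_local += 1
        new = Bucket(bucket.d_local)
        bit = BITS - bucket.d_local                # next high-order bit to test
        moved = {k: r for k, r in bucket.records.items() if (h(k) >> bit) & 1}
        for k in moved:
            del bucket.records[k]
        new.records = moved
        # repoint the directory entries whose extra bit selects the new bucket
        for i, b in enumerate(self.directory):
            if b is bucket and (i >> (self.d - bucket.d_local)) & 1:
                self.directory[i] = new

ehf = ExtendibleHashFile()
for k in [12, 200, 77, 145, 3, 98]:
    ehf.insert(k, f"rec{k}")
print("global depth d =", ehf.d, "| search(77) ->", ehf.search(77))
```

Note how a split redistributes only the records of the overflowing bucket; the directory doubles only when that bucket's local depth already equals the global depth.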
Extendible Hashing (cont.)

- Advantages:
  - The performance of the file does not degrade as the file grows.
  - No space is allocated in extendible hashing for future growth, but additional buckets can be allocated dynamically as needed. The space overhead for the directory table is negligible.
  - Splitting causes minor reorganization in most cases, since only the records in one bucket are redistributed to the two new buckets. The only time reorganization is more expensive is when the directory has to be doubled (or halved).
- Disadvantage:
  - The directory must be searched before accessing the buckets themselves, resulting in two block accesses instead of the one in static hashing.
Dynamic Hashing

- In dynamic hashing, the addresses of the buckets are either the n high-order bits or the n − 1 high-order bits of the hash value, depending on the total number of keys belonging to the respective bucket.
- Dynamic hashing maintains a tree-structured directory with two types of nodes (see the sketch below):
  - Internal nodes that have two pointers: the left pointer corresponding to the 0 bit (in the hashed address) and the right pointer corresponding to the 1 bit.
  - Leaf nodes, which hold a pointer to the actual bucket with records.
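A minimal sketch of looking up a bucket through such a tree-structured directory (the tuple-based node layout and the sample tree are assumptions of the example):

```python
# Internal node: a (left, right) tuple; leaf: a bucket (here, a list of records).
directory = (
    (["rec-00"], ["rec-01"]),   # subtree for hashed addresses starting with bit 0
    ["rec-1"],                  # one bucket for all hashed addresses starting with 1
)

def find_bucket(node, hash_bits):
    """Follow one bit of the hashed address per internal node:
    0 goes left, 1 goes right, until a leaf (bucket) is reached."""
    for bit in hash_bits:
        if isinstance(node, list):      # reached a bucket before using all bits
            break
        node = node[int(bit)]
    return node

print(find_bucket(directory, "01"))     # ['rec-01']
print(find_bucket(directory, "10"))     # ['rec-1']
```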
Linear Hashing

- The idea behind linear hashing is to allow a hash file to expand and shrink its number of buckets dynamically without needing a directory.
- Overflow because of collisions is still possible and can be handled by maintaining individual overflow chains for each bucket.
- When a collision leads to an overflow record in any file bucket, the first bucket in the file, bucket 0, is split into two buckets: the original bucket 0 and a new bucket M at the end of the file (see the sketch below).
  - The records originally in bucket 0 are distributed between the two buckets based on a different hashing function, hi+1(K) = K mod 2M.
  - A key property of the two hash functions hi and hi+1 is that any records that hashed to bucket 0 based on hi will hash to either bucket 0 or bucket M based on hi+1.
- If enough overflows occur, all the original file buckets 0, 1, ..., M−1 will have been split, so the file then has 2M instead of M buckets, and all buckets use the hash function hi+1.
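A minimal sketch of this scheme with hi(K) = K mod (M * 2^i) and a split pointer n that walks through the buckets in linear order. M, the bucket capacity, and the trigger "split on any overflow" are assumptions of the example, and a bucket list simply growing past CAPACITY stands in for its overflow chain:

```python
M = 2                 # initial number of buckets (assumed)
CAPACITY = 2          # records per bucket before overflow (assumed)

buckets = [[] for _ in range(M)]
level = 0             # i: the current round of splitting
n = 0                 # next bucket to be split

def h(key, i):
    return key % (M * 2 ** i)     # h_i(K) = K mod (M * 2^i)

def address(key):
    j = h(key, level)
    if j < n:                     # bucket j was already split this round,
        j = h(key, level + 1)     # so use the next hash function
    return j

def insert(key):
    global n, level
    buckets[address(key)].append(key)
    if len(buckets[address(key)]) > CAPACITY:     # an overflow triggers a split
        buckets.append([])                        # new bucket at the end of the file
        old, buckets[n] = buckets[n], []
        for k in old:                             # redistribute with h_{i+1}
            buckets[h(k, level + 1)].append(k)
        n += 1
        if n == M * 2 ** level:                   # all original buckets split:
            level, n = level + 1, 0               # start a new round

for k in [1, 3, 5, 7, 9, 11, 2, 4]:
    insert(k)
print(buckets, "| level =", level, "| next split =", n)
```

Note the characteristic quirk visible in the sketch: the bucket that is split (bucket n) need not be the bucket that overflowed, so an overflowing bucket may keep a chain until the split pointer reaches it.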
Parallelizing Disk Access using RAID Technology

- Secondary storage technology must take steps to keep up in performance and reliability with processor technology.
- A major advance in secondary storage technology is represented by the development of RAID, which originally stood for Redundant Arrays of Inexpensive Disks.
- The main goal of RAID is to even out the widely different rates of performance improvement of disks against those in memory and microprocessors.
Improving Performance with RAID

- A natural solution is a large array of small independent disks acting as a single higher-performance logical disk.
- A concept called data striping is used, which utilizes parallelism to improve disk performance.
- Data striping distributes data transparently over multiple disks to make them appear as a single large, fast disk (see the sketch below).
  - Bit-level data striping consists of splitting a byte of data and writing bit j to the jth disk.
  - Block-level striping distributes the blocks of a file across the disks instead.
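A minimal sketch of block-level striping over n disks, using the usual round-robin mapping of logical block j to disk j mod n (the in-memory dictionaries are stand-ins for the physical disks):

```python
N_DISKS = 4
disks = [dict() for _ in range(N_DISKS)]   # each dict maps stripe index -> block

def write_block(j, data):
    """Block-level striping: logical block j goes to disk j mod N_DISKS."""
    disks[j % N_DISKS][j // N_DISKS] = data

def read_block(j):
    return disks[j % N_DISKS][j // N_DISKS]

for j in range(8):                          # 8 logical blocks spread over 4 disks
    write_block(j, f"block-{j}")
# consecutive blocks live on different disks, so they can be read in parallel
print(read_block(5), "| disk 1 holds:", disks[1])
```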
Improving Reliability with RAID

- For an array of n disks, the likelihood of failure is n times as much as that for one disk.
  - Keeping a single copy of data in such an array of disks will cause a significant loss of reliability.
- An obvious solution is to employ redundancy of data so that disk failures can be tolerated.
  - The disadvantages?
- One technique for introducing redundancy is called mirroring or shadowing.
  - Data is written redundantly to two identical physical disks that are treated as one logical disk.
  - When data is read, it can be retrieved from the disk with the shorter queuing, seek, and rotational delays.
  - Disk mirroring also doubles the rate at which read requests are handled, since a read can go to either disk. The transfer rate of each read, however, remains the same as that for a single disk.
Improving Reliability with RAID (cont.)

- Another solution is to store extra information that is not normally needed but that can be used to reconstruct the lost information in case of disk failure.
- The incorporation of redundancy must consider two problems:
  - Selecting a technique for computing the redundant information: this is addressed by using error-correcting codes involving parity bits, or specialized codes such as Hamming codes (see the parity sketch below).
  - Selecting a method of distributing the redundant information across the disk array: the two major approaches are either to store the redundant information on a small number of disks or to distribute it uniformly across all disks.
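A minimal sketch of parity-based reconstruction (the four data "disks" and their byte contents are assumptions of the example): the parity disk stores the bitwise XOR of the data disks, so any single lost disk can be rebuilt by XORing the survivors with the parity.

```python
from functools import reduce

def xor_bytes(blocks):
    """Bitwise XOR of equal-length byte strings, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_disks = [b"\x10\x22", b"\x03\x41", b"\x5a\x00", b"\x0f\x0f"]  # assumed contents
parity = xor_bytes(data_disks)            # parity disk: XOR of all data disks

lost = 2                                  # suppose disk 2 fails
survivors = [d for i, d in enumerate(data_disks) if i != lost]
rebuilt = xor_bytes(survivors + [parity]) # XOR survivors with parity to recover
assert rebuilt == data_disks[lost]
print("recovered disk", lost, ":", rebuilt.hex())
```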
RAID Technology (cont.)

- Different RAID organizations were defined based on different combinations of two factors: the granularity of data interleaving (striping) and the pattern used to compute the redundant information.
  - RAID level 0 has no redundant data and hence has the best write performance, at the risk of data loss.
  - RAID level 1 uses mirrored disks.
  - RAID level 2 uses memory-style redundancy by using Hamming codes, which contain parity bits for distinct overlapping subsets of components. Level 2 includes both error detection and correction.
  - RAID level 3 uses a single parity disk, relying on the disk controller to figure out which disk has failed.
  - RAID levels 4 and 5 use block-level data striping, with level 5 distributing data and parity information across all disks.
  - RAID level 6 applies the so-called P + Q redundancy scheme using Reed-Solomon codes to protect against up to two disk failures with just two redundant disks.
Use of RAID Technology (cont.)

- Different RAID organizations are used in different situations:
  - RAID level 1 (mirrored disks) is the easiest for rebuilding a disk from the other disks.
    - It is used for critical applications such as logs.
  - RAID level 2 uses memory-style redundancy by using Hamming codes, which contain parity bits for distinct overlapping subsets of components.
    - Level 2 includes both error detection and correction.
  - RAID level 3 (a single parity disk, relying on the disk controller to figure out which disk has failed) and level 5 (block-level data striping) are preferred for large-volume storage, with level 3 giving higher transfer rates.
- The most popular uses of RAID technology currently are:
  - Level 0 (with striping), level 1 (with mirroring), and level 5 with an extra drive for parity.
- Design decisions for RAID include:
  - the level of RAID, the number of disks, the choice of parity schemes, and the grouping of disks for block-level striping.