The Data Link Layer- part 2


IN THE NAME OF GOD
COMPUTER NETWORKS
CHAPTER 3:
THE DATA LINK LAYER
(PART 2)
Dr. Shahriar Bijani
Shahed University
March 2014

References:
• A. S. Tanenbaum and D. J. Wetherall, Computer Networks, 5th Edition, Pearson Education, and the book slides, 2011.
• Data Communications and Computer Networks: A Business User's Approach, 6th Edition, Chapter 6.
• B. A. Forouzan, Data Communications and Networking, 5th Edition, McGraw Hill, and the lecture slides, 2012.
ERROR DETECTION AND CORRECTION
• Noise is always present:
  • White noise (thermal or Gaussian noise)
  • Impulse noise
ERROR DETECTION AND CORRECTION
• Two basic strategies to deal with errors:
1. Include enough redundant information to enable the receiver to deduce the original data: error-correcting codes.
2. Include only enough redundancy to allow the receiver to deduce that an error has occurred (but not which error): error-detecting codes.
ERROR DETECTION & CORRECTION CODE
1. Hamming codes.
2. Binary convolutional codes.
3. Reed-Solomon codes.
4. Low-Density Parity Check codes.
ERROR DETECTION & CORRECTION CODE
• All the codes presented on the previous slide add redundancy to the information that is sent.
• A frame consists of
  • m data bits (the message) and
  • r redundant bits (the check bits).
• Block code: the r check bits are computed solely as a function of the m data bits with which they are associated, as if the m data bits were looked up in a large table to find their corresponding r check bits.
• Systematic code: the m data bits are sent directly, along with the check bits, rather than being encoded themselves.
• Linear code: the r check bits are computed as a linear function of the m data bits; XOR (modulo-2 addition) is a popular choice, as in the sketch below.
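As a tiny illustration of the last point (hypothetical code, not from the slides), one check bit computed as a linear, modulo-2 function of the data bits:

# Purely illustrative: a single check bit as the modulo-2 sum (XOR) of the data bits.
m_bits = [1, 0, 1, 1]
check = 0
for b in m_bits:
    check ^= b          # XOR accumulates the sum modulo 2
print(check)            # -> 1, since the data contains an odd number of 1s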
ERROR DETECTION & CORRECTION CODE
• n: the total length of a block, i.e., n = m + r. This is referred to as an (n, m) code.
• An n-bit unit containing data and check bits is called an n-bit codeword.
• m/n: the code rate (ranges from about 1/2 for a noisy channel to close to 1 for a high-quality channel).
ERROR DETECTION & CORRECTION CODE
• Example:
  Transmitted: 10001001
  Received:    10110001
  XOR:         00111000
• The XOR of two codewords shows the bits in which they differ (here, 3 bits).
• Hamming distance: the number of bit positions in which two codewords differ.
  • If two codewords are a Hamming distance d apart, it takes d single-bit errors to convert one into the other.
  • Minimum Hamming distance: the smallest Hamming distance between all possible pairs of codewords in a set of codewords.
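This comparison is easy to automate (a hypothetical helper, not from the slides): XOR the two codewords and count the 1 bits.

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two codewords differ."""
    return bin(a ^ b).count("1")    # the XOR has a 1 exactly where the codewords differ

print(hamming_distance(0b10001001, 0b10110001))   # -> 3, as in the example above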
ERROR DETECTION & CORRECTION CODE
• All 2^m possible data messages are legal, but because of the way the check bits are computed, not all 2^n possible codewords are used.
  • Only a small fraction, 2^m / 2^n = 1/2^r, of the possible codewords are legal.
• The error-detecting and error-correcting properties of a block code depend on its Hamming distance.
  • To reliably detect d errors, a distance d+1 code is needed.
  • To correct d' errors, a distance 2d'+1 code is needed.
ERROR DETECTION & CORRECTION CODE
• Example with 4 valid codewords:
  0000000000
  0000011111
  1111100000
  1111111111
• The minimum distance is 5, so this code can correct 2 errors and detect 4 errors.
• If 0000000111 arrives, it must be the result of a single- or double-bit error, so the receiving end must assume the original transmission was 0000011111 (the nearest codeword).
• If 0000000000 suffers a triple error and 0000000111 is received, the error can only be detected, not corrected: attempting correction would wrongly decode it to 0000011111.
ERROR DETECTION & CORRECTION CODE
• Error correction requires evaluating each candidate codeword, which may be a time-consuming search.
• Through careful design, this search time can be minimized.
• In theory, if n = m + r, a lower limit on the number of check bits needed to correct all single errors is:
  (m + r + 1) ≤ 2^r
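For a given message size m, the smallest r satisfying this bound can be found by trial, as in this small sketch (hypothetical helper, not from the slides):

def min_check_bits(m: int) -> int:
    """Smallest r with (m + r + 1) <= 2**r, the single-error-correction bound above."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r

print(min_check_bits(4))   # -> 3, matching the (7,4) Hamming code on the next slides
print(min_check_bits(7))   # -> 4, matching the (11,7) Hamming code shown later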
1. THE HAMMING CODE
• Creating the codeword:
1. Check bits (parity bits) occupy all bit positions that are powers of 2: (p1, p2, p4, p8, p16, ...).
2. The rest of the bit positions are filled with the m data bits: (m3, m5, m6, m7, m9, m10, m11, m12, m13, ...).
3. Each parity bit covers the parity of some of the bits in the codeword. The position of the parity bit determines the sequence of bits that it alternately checks and skips:
  Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1, 3, 5, 7, 9, 11, 13, 15, ...)
  Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2, 3, 6, 7, 10, 11, 14, 15, ...)
  Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc. (4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, ...)
  Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15, 24-31, 40-47, ...)
  etc.
4. Set a parity bit to 1 if the total number of ones in the positions it checks is odd; set it to 0 if that total is even.
HAMMING CODE: EXAMPLE
• m = 4 data bits (D) and r = 3 check bits give an n = 7-bit codeword:
• This is called a (7,4) code.
• The 3 bits to be added are 3 EVEN parity bits (P), where the parity of each is computed on a different subset of the message bits, as shown below:

  BIT POSITION:   7 6 5 4 3 2 1
  7-BIT CODEWORD: D D D P D P P
  P1 checks:      D - D - D - P   (EVEN PARITY)
  P2 checks:      D D - - D P -   (EVEN PARITY)
  P4 checks:      D D D P - - -   (EVEN PARITY)
HAMMING CODE: EXAMPLE
• For example, the message 1101 would be sent as 1100110, since:

  BIT POSITION:   7 6 5 4 3 2 1
  7-BIT CODEWORD: 1 1 0 0 1 1 0
  P1 checks:      1 - 0 - 1 - 0   (EVEN PARITY)
  P2 checks:      1 1 - - 1 1 -   (EVEN PARITY)
  P4 checks:      1 1 0 0 - - -   (EVEN PARITY)
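The construction described two slides earlier can be coded directly. Below is a minimal sketch (the function name and conventions are mine, not from the slides) that reproduces the 1101 -> 1100110 example:

def hamming74_encode(data_bits):
    """Encode 4 data bits as a 7-bit Hamming codeword with even parity.

    Positions are numbered 1..7 as on the slide: parity bits sit at
    positions 1, 2 and 4, data bits fill positions 7, 6, 5 and 3,
    and the leftmost message bit goes to position 7."""
    code = [0] * 8                                   # index 0 unused; code[i] = bit at position i
    for pos, bit in zip([7, 6, 5, 3], data_bits):
        code[pos] = bit
    for p in (1, 2, 4):
        covered = [i for i in range(1, 8) if i & p and i != p]   # positions parity bit p checks
        code[p] = sum(code[i] for i in covered) % 2              # even parity
    return [code[i] for i in range(7, 0, -1)]        # bits listed from position 7 down to 1

print(hamming74_encode([1, 1, 0, 1]))                # -> [1, 1, 0, 0, 1, 1, 0], i.e. 1100110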
HAMMING CODE: PARITY CIRCLES
• When these 7 bits are entered into the parity circles, it can be confirmed that the choice of these 3 parity bits ensures that the parity within each circle is EVEN.
HAMMING CODE: EXAMPLE
• If an error occurs in any of the seven bits, it will affect different combinations of the three parity bits, depending on the bit position.
• E.g. a single-bit error occurs:

  transmitted message: 1100110   (BIT No: 7 6 5 4 3 2 1)
  received message:    1110110   (BIT No: 7 6 5 4 3 2 1)

• The above error (in bit 5) can be corrected by examining which of the three parity bits was affected by the bad bit:
HAMMING CODE: ERROR DETECTION
Received message:

  BIT POSITION:   7 6 5 4 3 2 1
  7-BIT CODEWORD: 1 1 1 0 1 1 0
  P1 checks:      1 - 1 - 1 - 0   (EVEN PARITY)  NOT OK -> 1
  P2 checks:      1 1 - - 1 1 -   (EVEN PARITY)  OK     -> 0
  P4 checks:      1 1 1 0 - - -   (EVEN PARITY)  NOT OK -> 1

• The bad parity bits, read as P4 P2 P1 = 101, point directly to the bad bit, since 101 binary equals 5.
• Examination of the parity circles confirms that any single-bit error can be corrected in this way.
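The same check can be automated: recompute each parity and read the failed checks as a binary number. A minimal sketch (hypothetical helper, consistent with the encoder sketched earlier):

def hamming74_syndrome(received):
    """Position of a single-bit error, or 0 if all even-parity checks pass.

    received lists the 7 bits from position 7 down to position 1,
    matching the layout used on the slides."""
    code = [0] + list(reversed(received))            # code[i] = bit at position i
    syndrome = 0
    for p in (1, 2, 4):
        # recompute even parity over every position whose number has bit p set
        if sum(code[i] for i in range(1, 8) if i & p) % 2:
            syndrome += p                            # this parity check failed
    return syndrome

received = [1, 1, 1, 0, 1, 1, 0]                     # 1110110: bit 5 was flipped in transit
pos = hamming74_syndrome(received)                   # -> 5
if pos:
    received[7 - pos] ^= 1                           # flip the bad bit back
print(pos, received)                                 # -> 5 [1, 1, 0, 0, 1, 1, 0]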
HAMMING CODE: ERROR DETECTION
Figure: Example of an (11, 7) Hamming code correcting a single-bit error.
HAMMING CODE: SUMMARY
• The value of the Hamming code:
1. Detection of 2-bit errors (assuming no correction is attempted);
2. Correction of single-bit errors;
3. Cost of 3 bits added to a 4-bit message.
• The ability to correct single-bit errors comes at a cost which is less than sending the entire message twice. (Recall that simply sending a message twice accomplishes no error correction.)
2. ERROR DETECTION & CORRECTION:
CONVOLUTIONAL CODES
• Not a block code: there is no natural message size or encoding boundary as in a block code.
• The output depends on the current and previous input bits, i.e. the encoder has memory.
• Constraint length of the code: the number of previous bits on which the output depends.
• Convolutional codes are deployed as part of
  • the GSM mobile phone system,
  • satellite communications, and
  • 802.11.
CONVOLUTIONAL ENCODERS
• A convolutional encoder is a linear system.
• A binary convolutional encoder can be represented as a shift register.
• The outputs of the encoder are modulo-2 sums of the values in certain cells of the register.
• The input to the encoder is either the unencoded sequence (for non-recursive codes) or the unencoded sequence added to the values of some register cells (for recursive codes).
• Convolutional codes can be systematic or non-systematic:
  • Systematic codes, where the unencoded sequence is part of the output sequence, are almost always recursive.
  • Non-recursive codes are almost always non-systematic.
CONVOLUTIONAL ENCODERS
• A combination of register cells that forms one of the output streams (or that is added to the input stream for recursive codes) is defined by a polynomial.
• If m is the maximum degree of the polynomials forming a code, then K = m + 1 is the constraint length of the code.
• E.g. the polynomials of Figure 1:
  g1(z) = 1 + z + z^2 + z^3 + z^6
  g2(z) = 1 + z^2 + z^3 + z^5 + z^6

Figure 1: A standard NASA convolutional encoder with polynomials (171, 133).
CONVOLUTIONAL ENCODERS: EXAMPLE 1
• Example:
  g1(z) = 1 + z + z^2 + z^3 + z^6
  g2(z) = 1 + z^2 + z^3 + z^5 + z^6
• The code rate is the inverse of the number of output polynomials.
• For the sake of clarity, we restrict ourselves here to codes with rate R = 1/2; the decoding procedure for other codes is similar.
• Encoder polynomials are usually written in octal notation. For the above example:
  "1111001" = 171 and "1011011" = 133.
• The constraint length of this code is 7.
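A rate-1/2 non-recursive encoder for these two polynomials can be sketched with a shift register held in an integer (hypothetical code, not from the slides; the tap masks are built from the polynomial degrees to keep the bit ordering explicit):

def taps(degrees):
    """Tap mask with bit i set for each z^i term of a generator polynomial."""
    return sum(1 << d for d in degrees)

G1 = taps([0, 1, 2, 3, 6])   # g1(z) = 1 + z + z^2 + z^3 + z^6   (octal 171)
G2 = taps([0, 2, 3, 5, 6])   # g2(z) = 1 + z^2 + z^3 + z^5 + z^6 (octal 133)

def conv_encode(bits, polys=(G1, G2), K=7):
    """Non-recursive, non-systematic rate-1/2 convolutional encoder.

    Bit i of the register holds the input from i steps ago, so ANDing the
    register with a tap mask and taking the parity of the result gives the
    modulo-2 sum described by that polynomial."""
    register = 0
    out = []
    for b in bits:
        register = ((register << 1) | b) & ((1 << K) - 1)   # shift the new bit in
        for p in polys:
            out.append(bin(register & p).count("1") % 2)    # one output bit per polynomial
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 1, 0]: 2 output bits per input bit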
CONVOLUTIONAL ENCODER: EXAMPLE 2
• An example of a recursive convolutional encoder is shown in Figure 2.

Figure 2: A recursive convolutional encoder.
TRELLIS DIAGRAM
• A convolutional encoder is often viewed as a finite state machine. Each state corresponds to some value of the encoder's register.
• Given the input bit value, from a certain state the encoder can move to one of two other states.
• A solid line = input 0, a dotted line = input 1 (the rightmost bit is the newest one).
• Any valid sequence from the encoder's output can be represented as a path on the trellis diagram. One possible path is shown in red as an example.

Figure 3: A trellis diagram corresponding to the encoder in Figure 2.
TRELLIS DIAGRAM
• Each state transition on the diagram corresponds to a pair of output bits.
• There are only 2 allowed transitions out of every state (2 allowed pairs of output bits; the 2 other pairs are forbidden).
• If an error occurs, it is very likely that the receiver will get a set of forbidden pairs, which do not form a path on the trellis diagram.
• So the task of the decoder is to find the path on the trellis diagram that is the closest match to the received sequence.
VITERBI ALGORITHM
• A convolutional code is decoded by finding the sequence of input bits that is most likely to have produced the observed sequence of output bits (which includes any errors).
• The Viterbi algorithm reconstructs the maximum-likelihood path through the trellis for a given received sequence.
• The input sequence requiring the fewest errors to explain the received bits is the most likely message.
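A hard-decision version of the algorithm, matched to the conv_encode sketch above (it reuses G1, G2 and conv_encode from that sketch), can be written as a short dynamic program over the trellis states; candidate paths are kept whole instead of using traceback pointers, purely for clarity:

def viterbi_decode(received, polys=(G1, G2), K=7):
    """Hard-decision Viterbi decoding for the encoder sketched earlier.

    received is a flat list of channel bits, len(polys) per input bit.
    Returns the input sequence whose encoding is closest, in Hamming
    distance, to what was received."""
    n_out, n_states = len(polys), 1 << (K - 1)
    INF = float("inf")
    metric = [INF] * n_states                        # best path metric ending in each state
    metric[0] = 0                                    # the encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(len(received) // n_out):
        r = received[t * n_out:(t + 1) * n_out]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                         # the two allowed transitions out of state s
                reg = ((s << 1) | b) & ((1 << K) - 1)
                expected = [bin(reg & p).count("1") % 2 for p in polys]
                dist = sum(e != x for e, x in zip(expected, r))   # branch metric (bit disagreements)
                ns = reg & (n_states - 1)            # next state: the K-1 most recent input bits
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

coded = conv_encode([1, 0, 1, 1])
coded[3] ^= 1                                        # simulate a single channel error
print(viterbi_decode(coded))                         # -> [1, 0, 1, 1]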
3. ERROR DETECTION & CORRECTION:
REED-SOLOMON
• Like Hamming codes, Reed-Solomon codes are linear block codes, and they are often systematic too.
• Unlike Hamming codes, which operate on individual bits, Reed-Solomon codes operate on m-bit symbols.
• They are based on the fact that every polynomial of degree n is uniquely determined by n + 1 points.
• Example: a line ax + b is determined by two points. Extra points on the same line are redundant, which is helpful for error correction.
  • 2 data points define a line; we send those two data points plus two check points on the same line.
  • If one of the points is received in error, we can still recover the data points by fitting a line to the received points: 3 points will lie on the line, and the 1 erroneous point will not.
  • By finding the line we have corrected the error.
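The line-fitting idea can be shown with a toy script (not a real Reed-Solomon decoder; the points below are invented for illustration). With four points, of which at most one is wrong, the correct line is the one passing through at least three of them:

from itertools import combinations

def recover_line(points):
    """Toy illustration: recover a line from 4 points when at most 1 is corrupted."""
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x1 == x2:
            continue                                 # skip vertical pairs in this toy
        a = (y2 - y1) / (x2 - x1)                    # slope of the candidate line
        b = y1 - a * x1                              # intercept
        if sum(abs(a * x + b - y) < 1e-9 for x, y in points) >= 3:
            return a, b                              # at least 3 points agree: this is the line
    return None

# 2 data points on y = 2x + 3 plus 2 check points on the same line; one point corrupted
print(recover_line([(1, 5), (2, 7), (3, 9), (4, 99)]))   # -> (2.0, 3.0)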
ERROR-DETECTING CODES
• Linear, systematic block codes:
1. Parity.
2. Checksums.
3. Cyclic Redundancy Checks (CRCs).
1. PARITY BITS
• Idea: add an extra bit to keep the number of 1s even.
  • Example: 7-bit ASCII characters + 1 parity bit each:
    0101001 1   1101001 0   1011110 1   0001110 1   0110100 1
• Detects 1-bit errors and some 2-bit errors (those that hit two different characters).
  • Not reliable against bursty errors.
TWO DIMENSIONAL PARITY
• Add a parity bit for each row, plus a parity byte whose bits are the parity of each column:

  0101001 1
  1101001 0
  1011110 1
  0001110 1
  0110100 1
  1011111 0
  1111011 0   <- parity byte: a parity bit for each column, plus the parity bit for the parity byte

• Can detect all 1-, 2-, and 3-bit errors, and some 4-bit errors.
• 14% overhead.
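A small sketch (hypothetical helper, not from the slides) that reproduces the table above:

def two_d_parity(blocks):
    """Append an even-parity bit to each block, then append a parity byte
    holding the even parity of every column (including the parity column)."""
    rows = [bits + [sum(bits) % 2] for bits in blocks]    # parity bit for each row
    parity_byte = [sum(col) % 2 for col in zip(*rows)]    # parity bit for each column
    return rows + [parity_byte]

data = [[0,1,0,1,0,0,1], [1,1,0,1,0,0,1], [1,0,1,1,1,1,0],
        [0,0,0,1,1,1,0], [0,1,1,0,1,0,0], [1,0,1,1,1,1,1]]
for row in two_d_parity(data):
    print("".join(map(str, row)))   # last line printed is 11110110: the parity byte with its own parity bit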
2. CHECKSUMS
• Idea:
  • Add up the bytes in the data.
  • Include the sum in the frame: | START | Data | Checksum | END |
• Use ones-complement arithmetic.
  • Lower overhead than parity: 16 bits per frame.
  • But not resilient to all errors. Why? Errors that leave the sum unchanged (for example, two changes that cancel out, or reordered bytes) go undetected.
  • Example of the addition: 10101001 + 01101001 = 1 00010010; with the ones-complement end-around carry this gives 00010011.
• Used in UDP, TCP, and IP.
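The 16-bit ones-complement checksum used by IP, UDP and TCP can be sketched as follows (a simplified illustration; real implementations also cover pseudo-headers and specific header fields):

def internet_checksum(data: bytes) -> int:
    """16-bit ones-complement sum of 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # end-around carry
    return ~total & 0xFFFF                           # ones' complement of the sum

print(hex(internet_checksum(b"hello world")))        # 16-bit checksum of an arbitrary byte string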
3. CYCLIC REDUNDANCY CHECK (CRC)
• Uses field theory to compute a semi-unique value for a given message.
• In a cyclic code, rotating a codeword always results in another codeword.
• Much better performance than the previous approaches:
  • Fixed-size overhead per frame (usually 32 bits).
  • Quick to implement in hardware.
  • Only a 1 in 2^32 chance of missing an error with a 32-bit CRC.
CRC ENCODER/DECODER
Figure: CRC encoder and decoder.

CYCLIC REDUNDANCY CHECK (CRC)
Figure: Example calculation of the CRC.
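The example calculation comes down to modulo-2 long division: append r zero bits to the message and divide by the generator, keeping only the remainder. A minimal sketch (hypothetical helper; the small generator x^4 + x + 1 is chosen only to keep the example readable, while real CRCs such as CRC-32 use much longer generators):

def crc_remainder(data_bits: str, generator: str) -> str:
    """CRC check bits for data_bits, computed by modulo-2 (XOR) long division."""
    r = len(generator) - 1
    bits = list(map(int, data_bits + "0" * r))       # append r zero bits to the message
    gen = list(map(int, generator))
    for i in range(len(data_bits)):
        if bits[i]:                                  # leading bit is 1: subtract (XOR) the generator
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return "".join(map(str, bits[-r:]))              # the remainder is the CRC

data, gen = "1101011011", "10011"                    # generator 10011 = x^4 + x + 1
crc = crc_remainder(data, gen)                       # -> "1110"
print(data + crc)                                    # transmitted frame: 11010110111110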