Transcript Document
The Data Link Layer
• Highlights of this chapter
– Data Link Layer Design Issues.
– Error Detection and Correction.
– Medium Access Control.
– The Ethernet.
The Data Link Layer (Cont’d)
• Data Link Layer Design Issues
– Providing a well-defined service interface
to the network layer.
– Dealing with transmission errors.
– Regulating the flow of data so that slow
receivers are not swamped by fast senders.
The Data Link Layer (Cont’d)
• Provide interface to network layer
Relationship between packets
and frames.
(a) Virtual communication.
(b) Actual communication.
The Data Link Layer (Cont’d)
• Three possible services
– Unacknowledged connectionless service.
• Send independent frames to the destination machine.
• Appropriate when the error rate is very low or for real-time
traffic.
• Most LANs use this kind of service.
– Acknowledged connectionless service.
• Over unreliable channels, such as wireless systems.
• Providing acknowledgements in the data link layer is just
an optimization, never a requirement!
– Acknowledged connection-oriented service.
• E.g., connections established between modems.
The Data Link Layer (Cont’d)
• Error Detection and Correction
– Where, why, and how?
• In local loops and wireless links, errors are common.
• They are caused by thermal and electrical noise.
• Errors tend to come in bursts rather than singly!
– Two basic strategies for dealing with errors
• error-correcting codes (forward error correction):
include enough redundant information along with each
block of data sent to enable the receiver to deduce what
the transmitted data must have been (used in, e.g., wireless communication).
• error-detecting codes: include only enough redundancy to
allow the receiver to deduce that an error occurred, but
not which error, and have it request a retransmission.
The Data Link Layer (Cont’d)
• A simple error detection algorithm: a code with a
single parity bit appended (see the sketch after this slide)
– The parity bit is chosen so that the number of 1
bits in the codeword is even (or odd).
– When 1011010 is sent with even parity, a bit is added
to the end to make it 10110100. With odd parity,
1011010 becomes 10110101.
• More sophisticated error detection and
correction algorithms
– CRC (Cyclic Redundancy Check): rather popular.
– We stop here, as these codes are rather sophisticated.
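A minimal Python sketch of the single-parity-bit scheme described above; the function names are our own, and the example reproduces the slide's 1011010 codewords.

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append a single parity bit so that the number of 1 bits in the
    resulting codeword is even (even parity) or odd (odd parity)."""
    ones = bits.count("1")
    parity = ones % 2 if even else (ones + 1) % 2
    return bits + str(parity)

def check_parity(codeword: str, even: bool = True) -> bool:
    """Return True if the codeword's count of 1 bits matches the chosen parity."""
    return codeword.count("1") % 2 == (0 if even else 1)

# Reproduces the slide's example: 1011010 -> 10110100 (even), 10110101 (odd).
assert add_parity("1011010", even=True) == "10110100"
assert add_parity("1011010", even=False) == "10110101"
# A single flipped bit is detected:
assert not check_parity("10110110", even=True)
```

A single parity bit catches any odd number of flipped bits but misses an even number, which is one reason burst errors call for stronger codes such as CRC.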
The Data Link Layer (Cont’d)
• The Medium Access Control
– networks can be divided into two categories: those
using point-to-point connections and those using
broadcast channels.
– For cost reasons, many modern network
architectures employ broadcast channels
(e.g., Ethernet, wireless ad hoc networks).
Few of them are fully connected.
– Challenge: Who can access the network? When and
how?
The Data Link Layer (Cont’d)
• The Medium Access Control
– In the literature, broadcast channels are
sometimes referred to as multiaccess
channels or random access channels.
– The protocols used to determine who goes
next on a multiaccess channel belong to a
sublayer of the data link layer called the
MAC (Medium Access Control) sublayer.
The Data Link Layer (Cont’d)
• Frequency Division Multiplexing (FDM)
– If there are N users, the bandwidth is
divided into N equal-sized portions, each
user being assigned one portion. Since each
user has a private frequency band, there is no
interference between users.
– Is FDM suitable in practice? (The number of
senders is often large and continuously varying,
and traffic is bursty.)
The Data Link Layer (Cont’d)
• Problems of FDM
– Waste of the bandwidth: If the spectrum is cut up into N
regions and fewer than N users are currently interested in
communicating, a large piece of valuable spectrum will be
wasted. If more than N users want to communicate, some of
them will be denied permission for lack of bandwidth, even if
some of the users who have been assigned a frequency band
hardly ever transmit or receive anything.
– Inefficiency: assuming that the number of users could
somehow be held constant at N, when some users are
quiescent, their bandwidth is simply lost. They are not using
it, and no one else is allowed to use it either. As data traffic
is extremely bursty, most of the channels will be idle most of
the time.
The Data Link Layer (Cont’d)
• Problems of FDM
– For a channel of capacity C bps, with an arrival rate of λ
frames/sec, each frame having a length drawn from an
exponential probability density function with mean 1/µ
bits/frame. With these parameters the arrival rate is
λ frames/sec and the service rate is µC frames/sec. From
queueing theory (M/M/1), we have the mean time delay T:
T = 1 / (µC − λ)
– Now let us divide the single channel into N independent
subchannels, each with capacity C/N bps. The mean input
rate on each of the subchannels will now be λ/N.
Recomputing T, we get:
T_FDM = 1 / (µ(C/N) − λ/N) = N / (µC − λ) = N·T
– The mean delay using FDM is N times worse than if all the
frames were somehow magically arranged in one big
central queue. (See the sketch after this slide.)
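A small Python sketch of the delay comparison above, using the M/M/1 result T = 1/(µC − λ). The numeric parameter values are illustrative assumptions, not values from the slide.

```python
# Mean delay of one shared channel vs. FDM with N subchannels.
# All numeric values below are illustrative assumptions.

C = 100e6        # channel capacity in bps (assumed)
mu = 1 / 10000   # frames per bit, i.e. mean frame length of 10,000 bits (assumed)
lam = 5000.0     # arrival rate in frames/sec (assumed)
N = 10           # number of FDM subchannels

T_single = 1.0 / (mu * C - lam)          # one big shared channel
T_fdm = 1.0 / (mu * (C / N) - lam / N)   # each subchannel: C/N bps, lambda/N arrivals

print(f"T (single channel) = {T_single * 1e3:.3f} ms")   # 0.200 ms
print(f"T (FDM, N={N})     = {T_fdm * 1e3:.3f} ms")      # 2.000 ms
print(f"ratio              = {T_fdm / T_single:.1f}")    # 10.0, i.e. N times worse
```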
The Data Link Layer (Cont’d)
• Pure ALOHA
– Problem: Multiple users share a single ground-based radio
broadcasting channel. They transmit whenever they have
data to be sent. There will be collisions, of course, and the
colliding frames will be damaged. However, due to the
feedback property of broadcasting, a sender can always find
out whether its frame was destroyed by listening to the
channel, the same way other users do. If the frame was
destroyed, the sender just waits a random amount of time
and sends it again. The waiting time must be random or the
same frames will collide over and over, in lockstep.
– Systems in which multiple users share a common channel in a
way that can lead to conflicts are widely known as contention
systems.
The Data Link Layer (Cont’d)
• Pure ALOHA
In pure ALOHA, frames are transmitted at completely arbitrary times.
– Whenever two frames try to occupy the channel at the same time,
there will be a collision and both will be garbled. If the first bit of a
new frame overlaps with just the last bit of a frame almost finished,
both frames will be totally destroyed and both will have to be
retransmitted later. The checksum cannot (and should not)
distinguish between a total loss and a near miss. Bad is bad.
– An interesting question is: What is the efficiency of an ALOHA
channel?
The Data Link Layer (Cont’d)
• Pure ALOHA
– We assume that an infinite number of users share the
same pure ALOHA channel and that they generate new
frames according to a Poisson distribution with mean N
frames per frame time.
– If N > 1, the user community is generating frames
at a higher rate than the channel can handle, and
nearly every frame will suffer a collision. For
reasonable throughput we would expect 0 < N < 1.
– In addition to new frames, the stations also
generate retransmissions of frames that previously
suffered collisions. We further assume that the probability
of k transmission attempts per frame time, old and
new combined, is also Poisson, with mean G per
frame time. Clearly, G ≥ N.
The Data Link Layer (Cont’d)
• Pure ALOHA
– At low load (i.e., N ≈ 0), there will be few collisions,
hence few retransmissions, so G ≈ N. At high load
there will be many collisions, so G > N.
– If P0 is the probability that a frame does not
suffer a collision, the throughput S can be
calculated as S = G·P0.
– Let t be the time needed to transmit a single
frame (the frame time). The probability that k frames
are generated during a given frame time is given by
the Poisson distribution:
Pr[k] = (G^k · e^-G) / k!
– so the probability of zero frames is just e^-G.
The Data Link Layer (Cont’d)
• Pure ALOHA
– In an interval two frame times long, the mean
number of frames generated is 2G. The probability
of no other traffic being initiated during the
entire vulnerable period is thus given by P0 = e^-2G.
Using S = G·P0, we get: S = G·e^-2G
– The maximum throughput occurs at G = 0.5, with
S = 1/(2e), which is about 0.184 (18.4%). (See the
numeric check after this slide.)
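A quick numeric check of the formulas above; pure Python, no extra libraries.

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Throughput S = G * e^(-2G) for pure ALOHA."""
    return G * math.exp(-2 * G)

# Poisson probability of zero frames in one frame time with mean G:
G = 0.5
print(f"Pr[0 frames in a frame time] = e^-G = {math.exp(-G):.3f}")

# Scan G over a grid: the maximum of S = G*e^(-2G) is at G = 0.5, S = 1/(2e).
best_G = max((g / 1000 for g in range(1, 3001)), key=pure_aloha_throughput)
print(f"best G  = {best_G:.3f}")
print(f"S(best) = {pure_aloha_throughput(best_G):.4f}")
print(f"1/(2e)  = {1 / (2 * math.e):.4f}")   # ~0.1839, i.e. about 18.4%
```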
The Data Link Layer (Cont’d)
• Slotted ALOHA
– Divide time into discrete intervals, each interval
corresponding to one frame. This approach
requires the users to agree on slot boundaries. One
way to achieve synchronization would be to have
one special station emit a pip at the start of each
interval, like a clock. (Roberts, 1972)
– Since the vulnerable period is now halved, the
probability of no other traffic during the same
slot as our test frame is e^-G, which leads to:
S = G·e^-G
– Slotted ALOHA therefore peaks at G = 1, with a
throughput of S = 1/e, or about 0.368, twice that
of pure ALOHA. (A small simulation sketch follows
this slide.)
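A small Monte Carlo sketch that checks S = G·e^-G: each slot carries a Poisson(G) number of transmission attempts and succeeds only when exactly one frame is sent. The slot counts and sampling method are our own choices.

```python
import math
import random

def simulate_slotted_aloha(G: float, n_slots: int = 200_000, seed: int = 1) -> float:
    """Estimate slotted-ALOHA throughput: a slot is successful only if exactly
    one transmission attempt falls in it; attempts per slot are Poisson(G)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        # Draw k ~ Poisson(G) by Knuth's inversion method (keeps the sketch
        # free of external dependencies).
        threshold, k, p = math.exp(-G), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        if k == 1:
            successes += 1
    return successes / n_slots

for G in (0.5, 1.0, 2.0):
    print(f"G={G}: simulated S = {simulate_slotted_aloha(G):.3f}, "
          f"formula G*e^-G = {G * math.exp(-G):.3f}")
```

At G = 1 both columns come out near 1/e ≈ 0.368, matching the slide.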
The Data Link Layer (Cont’d)
• Carrier Sense Multiple Access Protocols
(CSMA)
– In local area networks, however, it is possible for
stations to detect what other stations are doing,
and adapt their behavior accordingly.
– Protocols in which stations listen for a carrier (i.e.,
a transmission) and act accordingly are called
carrier sense protocols.
– A number of them have been proposed: 1-persistent CSMA, non-persistent CSMA, and p-persistent CSMA.
The Data Link Layer (Cont’d)
• 1-persistent CSMA
– When a station has data to send, it first listens to
the channel to see if anyone else is transmitting at
that moment. If the channel is busy, the station
waits until it becomes idle. When the station
detects an idle channel, it transmits a frame. If a
collision occurs, the station waits a random amount
of time and starts all over again.
– The protocol is called 1-persistent because the
station transmits with a probability of 1 whenever it
finds the channel idle. (See the sketch after this slide.)
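A schematic Python sketch of the 1-persistent sender loop just described. The `channel` object and its `is_busy()` / `transmit()` methods are hypothetical placeholders, not a real API.

```python
import random
import time

def send_1_persistent(channel, frame, max_backoff: float = 0.01) -> None:
    """Schematic 1-persistent CSMA sender. 'channel' is a hypothetical object
    offering is_busy() and transmit(frame) -> collided (bool)."""
    while True:
        while channel.is_busy():              # keep listening until the channel is idle
            pass
        collided = channel.transmit(frame)    # transmit with probability 1 once idle
        if not collided:
            return                            # frame got through
        time.sleep(random.uniform(0, max_backoff))  # collision: random wait, then retry
```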
The Data Link Layer (Cont’d)
• Problems of 1-persistent CSMA
– The propagation delay has an important effect on
the performance of the protocol.
– Two stations want to send their respective frames.
If the first station's signal has not yet reached
the second one, the latter will sense an idle
channel and will also begin sending, resulting in a
collision.
– The longer the propagation delay, the more
important this effect becomes, and the worse the
performance of the protocol.
– Even if the propagation delay is zero, there will
still be collisions.
– Even so, it offers higher performance than pure ALOHA
and slotted ALOHA.
The Data Link Layer (Cont’d)
• Non-persistent CSMA
– In this protocol, a conscious attempt is made to be
less greedy than in 1-persistent CSMA.
– If the channel is already in use, the station does
not continually sense it for the purpose of seizing
it immediately upon detecting the end of the
previous transmission. Instead, it waits a random
period of time and then repeats the algorithm.
– This algorithm leads to better channel utilization
but longer delays than 1-persistent CSMA.
The Data Link Layer (Cont’d)
• p-persistent CSMA
– It applies only to slotted channels.
– When a station becomes ready to send, it
senses the channel. If it is idle, it
transmits with a probability p. With a
probability q = 1 - p, it defers until the next
slot. If that slot is also idle, it either
transmits or defers again, with probabilities
p and q. (See the sketch after this slide.)
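A schematic sketch of the p-persistent rule for a slotted channel, in the same hypothetical style as the 1-persistent sketch; `wait_for_next_slot()` is likewise a placeholder. With p = 1 it collapses to the 1-persistent behaviour.

```python
import random
import time

def send_p_persistent(channel, frame, p: float, max_backoff: float = 0.01) -> None:
    """Schematic p-persistent CSMA sender for a slotted channel. 'channel' is a
    hypothetical object offering is_busy(), wait_for_next_slot() and
    transmit(frame) -> collided (bool)."""
    while True:
        while channel.is_busy():          # defer while the channel is in use
            channel.wait_for_next_slot()
        if random.random() < p:           # idle: transmit with probability p ...
            if not channel.transmit(frame):
                return                    # success
            time.sleep(random.uniform(0, max_backoff))  # collision: back off, retry
        else:
            channel.wait_for_next_slot()  # ... or defer one slot with probability q = 1 - p
```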
The Data Link Layer (Cont’d)
• Comparison of the channel utilization versus
load for various random access protocols.
The Data Link Layer (Cont’d)
• CSMA with Collision Detection (CSMA/CD)
– If two stations sense the channel to be idle and
begin transmitting simultaneously, they will both
detect the collision almost immediately. Rather
than finish transmitting their frames, which are
irretrievably garbled anyway, they should abruptly
stop transmitting as soon as the collision is
detected.
– Quickly terminating damaged frames saves time
and bandwidth.
– This protocol is widely used on LANs in the MAC
sublayer. In particular, it is the basis of the
popular Ethernet LAN.
The Data Link Layer (Cont’d)
• An analysis of CSMA/CD
– Suppose that two stations both begin transmitting at
exactly time t0. The minimum time to detect the
collision is then just the time it takes the signal to
propagate from one station to the other. We consider
the following worst-case scenario.
– Let the time for a signal to propagate between the
two farthest stations be τ. At t0, one station begins
transmitting. At τ-ε, an instant before the signal
arrives at the most distant station, that station also
begins transmitting. Of course, it detects the collision
almost instantly and stops, but the little noise burst
caused by the collision does not get back to the
original station until time 2τ-ε.
The Data Link Layer (Cont’d)
• An analysis of CSMA/CD (Cont’d)
– This means that in the worst case a station cannot
be sure that it has seized the channel until it has
transmitted for 2τ without hearing a collision.
– We will model the contention interval as a slotted
ALOHA system with slot width 2τ; the throughput can
then be written as S = G·e^-τ.
– Since τ is small in most conditions, the throughput
will be high. (See the numeric sketch after this slide.)
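A small numeric sketch of the 2τ window discussed above. The cable length and propagation speed are illustrative assumptions, not values given on the slide.

```python
# Worst-case time a CSMA/CD station must transmit collision-free (2 * tau)
# before it can be sure it has seized the channel.

cable_length_m = 2500.0      # assumed maximum end-to-end cable length
propagation_speed = 2.0e8    # assumed signal speed in copper, roughly 2/3 c (m/s)

tau = cable_length_m / propagation_speed   # one-way propagation delay
window = 2 * tau

print(f"tau     = {tau * 1e6:.1f} microseconds")      # 12.5 us
print(f"2 * tau = {window * 1e6:.1f} microseconds")   # 25.0 us

# At 10 Mbps the station pushes 10 bits per microsecond, so it must still be
# sending after 2*tau; this is one reason Ethernet enforces a minimum frame size.
print(f"bits sent during 2*tau at 10 Mbps: {window * 10e6:.0f}")   # 250 bits
```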
The Data Link Layer (Cont’d)
• Notes on CSMA/CD
– A sending station must continually monitor the
channel, listening for noise bursts that might
indicate a collision. For this reason, CSMA/CD with
a single channel is inherently a half-duplex system.
– It is impossible for a station to transmit and
receive frames at the same time because the
receiving logic is in use, looking for collisions during
every transmission.
The Data Link Layer (Cont’d)
• Ethernet
– Ethernet Cabling
– The Ethernet MAC Sublayer Protocol
– Switched Ethernet
– Fast Ethernet & Gigabit Ethernet
The Data Link Layer (Cont’d)
• Ethernet Cabling
– The most common kinds of Ethernet cabling
(a) 10Base5. (b) 10Base2. (c) 10Base-T.
The Data Link Layer (Cont’d)
• Ethernet Cabling
– 10Base2 (thin Ethernet)
• Thin Ethernet is much cheaper and easier to install, but it
can run for only 185 meters per segment, each of which
can handle only 30 machines.
• Detecting cable breaks, excessive length, bad taps, or
loose connectors can be a major problem.
– 10Base-T
• Hubs are used to connect the machines.
• The maximum range is 100 meters, and up to 1024
hosts can be connected.
• Hubs do not buffer incoming traffic; a hub can be
regarded as a bus that connects parts of the
network.
The Data Link Layer (Cont’d)
• The Ethernet MAC Sublayer Protocol
– Frame structure
– The high-order bit of the destination
address is a 0 for ordinary addresses and 1
for group addresses.
– When a frame is sent to a group address,
all the stations in the group receive it.
Sending to a group of stations is called
multicast. The address consisting of all 1
bits is reserved for broadcast. (See the
address-classification sketch after this slide.)
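A small sketch that classifies a destination address as ordinary, group, or broadcast, as described above. Note that the group/individual flag is the first bit transmitted on the wire; in the usual colon-separated notation that corresponds to the least-significant bit of the first octet, because Ethernet sends each octet low-order bit first.

```python
def classify_destination(mac: str) -> str:
    """Classify an Ethernet destination address written as 'aa:bb:cc:dd:ee:ff'.
    The individual/group flag is the first bit on the wire, i.e. the
    least-significant bit of the first octet in this notation."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"             # all 1 bits: reserved for broadcast
    if octets[0] & 0x01:
        return "group (multicast)"     # group bit set: received by every group member
    return "ordinary (unicast)"

print(classify_destination("00:1a:2b:3c:4d:5e"))   # ordinary (unicast)
print(classify_destination("01:00:5e:00:00:01"))   # group (multicast)
print(classify_destination("ff:ff:ff:ff:ff:ff"))   # broadcast
```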
The Data Link Layer (Cont’d)
• Switched Ethernet
– As more and more stations are added to an
Ethernet, the traffic will go up. Eventually,
the LAN will saturate.
– There is an additional way to deal with the
increased load: switched Ethernet.
The Data Link Layer (Cont’d)
• Switched Ethernet
– When a station wants to transmit an Ethernet
frame, it outputs a standard frame to the switch.
The plug-in card getting the frame may check to
see if it is destined for one of the other stations
connected to the same card. If so, the frame is
copied there. If not, the frame is sent over the
high-speed backplane to the destination station's
card.
– Conceptually, this is a move from non-storing (hub-style)
forwarding to store-and-forward switching.
The Data Link Layer (Cont’d)
• Fast Ethernet & Gigabit Ethernet
– FDDI (Fiber Distributed Data Interface)
• ring-based optical LAN
– Gigabit Ethernet
• All configurations of gigabit Ethernet are point-to-point
rather than multidrop as in the original 10 Mbps standard.
Each individual Ethernet cable has exactly two devices on
it, no more and no fewer.
The Data Link Layer (Cont’d)
– Gigabit Ethernet (Cont’d)
• Gigabit Ethernet supports two different modes of
operation: full-duplex mode and half-duplex mode.
• Full-duplex mode allows traffic in both directions at the
same time. This mode is used when there is a central
switch connected to computers. In this mode, no
contention is possible.
• Half-duplex mode is used when the computers are
connected to a hub rather than a switch. Because a
minimum (i.e., 64-byte) frame can now be transmitted
100 times faster than in classic Ethernet, the maximum
distance is 100 times less, or 25 meters. (See the
arithmetic sketch after this slide.)
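The 100x scaling argument above, worked out numerically. The 2500 m figure for classic Ethernet's collision-detection span is an assumption used only to illustrate the scaling.

```python
# A minimum 64-byte frame sent 100 times faster spends 100 times less time on
# the wire, so the distance over which a collision can still be detected
# shrinks by the same factor.

min_frame_bits = 64 * 8

t_classic = min_frame_bits / 10e6   # transmission time at 10 Mbps
t_gigabit = min_frame_bits / 1e9    # transmission time at 1 Gbps

print(f"64-byte frame at 10 Mbps: {t_classic * 1e6:.1f} us")      # 51.2 us
print(f"64-byte frame at 1 Gbps : {t_gigabit * 1e6:.3f} us")      # 0.512 us
print(f"speed-up factor         : {t_classic / t_gigabit:.0f}x")  # 100x

classic_span_m = 2500.0                                # assumed classic-Ethernet span
print(f"scaled maximum distance : {classic_span_m / 100:.0f} m")  # 25 m
```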
The Data Link Layer (Cont’d)
• Gigabit Ethernet cabling
– 1000Base-SX is suitable for office networks;
– 1000Base-LX is suitable for the backbone of a campus
network;
– 1000Base-CX and 1000Base-T are the "poor man's"
options for gigabit Ethernet.
The Data Link Layer (Cont’d)
• Retrospective on Ethernet
– Ethernet has been around for over 20 years and has no
serious competitors in sight, so it is likely to be around for
many years to come. Why?
• Probably the main reason for its longevity is that Ethernet is
simple and flexible. In practice, simple translates into reliable,
cheap, and easy to maintain.
• Another point is that Ethernet interworks easily with TCP/IP,
which has become dominant. IP is a connectionless protocol, so it
fits perfectly with Ethernet, which is also connectionless.
– How about the competitors?
• FDDI, ATM, Fibre Channel, etc.
• Although they were faster when introduced, they were
incompatible with Ethernet, far more complex, and harder to
manage.
– Being adoptable without software modifications is the key
to survival!
The Data Link Layer
• Exercises
– Please try to solve these problems:
• 3.1, 3.4, 3.10, 3.12, 4.2, 4.3, 4.15, 4.24