Systematic Design of Space-Time Trellis Codes for Wireless
ECE 6332, Fall, 2014
Wireless Communication
Zhu Han
Department of Electrical and Computer Engineering
Class 21
Apr. 7th, 2014
Outline
MIMO/Space time coding
Trellis code modulation
BICM
Video transmission (optional)
Unequal error protection and joint source channel coding
Homework 4
– Will be announced by email.
MIMO
Model: Y_{N×T} = H_{N×M} X_{M×T} + W_{N×T}
T: time index; W: noise
(Y is the N×T received matrix, H the N×M channel matrix, X the M×T transmitted matrix, W the N×T noise matrix.)
Alamouti Space-Time Code
• Transmitted signals are orthogonal => simplified receiver
• Redundancy in time and space => diversity
• Same diversity gain as maximum ratio combining => smaller terminals
            Antenna 1   Antenna 2
Time n:        d0          d1
Time n + T:   -d1*         d0*
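The orthogonal structure in the table above is what simplifies the receiver: with one receive antenna and flat fading held constant over the two slots, two received samples plus the two channel gains separate d0 and d1 by linear combining. A minimal sketch (hypothetical helper names, noiseless notation):

```python
import numpy as np

def alamouti_encode(d0, d1):
    """Return the 2x2 space-time block: rows = time slots, columns = antennas."""
    return np.array([[d0, d1],
                     [-np.conj(d1), np.conj(d0)]])

def alamouti_combine(r0, r1, h0, h1):
    """Linear combining at a single receive antenna.
    r0, r1: samples received at times n and n+T; h0, h1: channel gains."""
    d0_hat = np.conj(h0) * r0 + h1 * np.conj(r1)
    d1_hat = np.conj(h1) * r0 - h0 * np.conj(r1)
    gain = abs(h0) ** 2 + abs(h1) ** 2   # combined channel energy
    return d0_hat / gain, d1_hat / gain
```

In the noiseless case the combiner output equals the transmitted symbols exactly, which is the same diversity combining that maximum ratio combining achieves with two receive antennas.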
Space Time Code Performance
Data in → Constellation mapper → block of K symbols → STBC → block of T symbols → nt transmit antennas
• K input symbols, T output symbols (T ≥ K)
• R = K/T is the code rate
• If R = 1 the STBC has full rate
• If T = nt the code has minimum delay
• The detector is linear!
BLAST
Bell Labs Layered Space Time Architecture
V-BLAST implemented in 1998 by Bell Labs (40 bps/Hz)
Steps for V-BLAST detection
1. Ordering: choosing the best channel
2. Nulling: using ZF or MMSE
3. Slicing: making a symbol decision
4. Canceling: subtracting the detected symbol
5. Iteration: going to the first step to
detect the next symbol
V-BLAST: each antenna transmits its own stream over time
  s1 s1 s1 s1 s1 s1
  s2 s2 s2 s2 s2 s2
  s3 s3 s3 s3 s3 s3
D-BLAST: the streams are staggered diagonally across the antennas
  s0 s1 s2 s0 s1 s2
     s0 s1 s2 s0 s1
        s0 s1 s2 s0
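The five V-BLAST detection steps (ordering, nulling, slicing, canceling, iterating) can be sketched as below. This is an illustrative ZF-based version with hypothetical function names, not the original Bell Labs implementation:

```python
import numpy as np

def vblast_zf_sic(H, y, constellation):
    """ZF nulling + ordered successive interference cancellation.
    H: (nr, nt) channel matrix, y: (nr,) received vector."""
    y = y.astype(complex).copy()
    nt = H.shape[1]
    remaining = list(range(nt))          # streams not yet detected
    s_hat = np.zeros(nt, dtype=complex)
    for _ in range(nt):
        G = np.linalg.pinv(H[:, remaining])           # ZF nulling matrix
        row_norms = np.sum(np.abs(G) ** 2, axis=1)
        k = int(np.argmin(row_norms))                 # 1. ordering: best post-ZF SNR
        z = G[k] @ y                                  # 2. nulling
        s = constellation[np.argmin(np.abs(constellation - z))]  # 3. slicing
        idx = remaining[k]
        s_hat[idx] = s
        y = y - H[:, idx] * s                         # 4. canceling
        remaining.pop(k)                              # 5. iterate on the rest
    return s_hat
```

With an exactly known channel and no noise, ZF-SIC recovers the transmitted symbol vector exactly; with noise, the ordering step matters because early slicing errors propagate through the cancellation.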
Trellis Coded Modulation
1. Combines encoding and modulation (using Euclidean distance only).
2. Allows parallel transitions in the trellis.
3. Provides significant coding gain (3–4 dB) without bandwidth expansion.
4. Has the same complexity (same amount of computation, same decoding time, and same amount of memory).
5. Has great potential for fading channels.
6. Widely used in modems.
Set Partitioning
1. Branches diverging from the same state must have the largest distance.
2. Branches merging into the same state must have the largest distance.
3. For fading channels, codes should be designed to maximize the length of the shortest error-event path (equivalent to maximizing diversity).
4. By satisfying the above criteria, the coding gain can be increased.
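As a numerical illustration of set partitioning, the minimum intra-subset Euclidean distance grows at each level of an 8-PSK partition chain; the every-other-point slicing below follows the standard Ungerboeck partitioning (a small sketch, helper name hypothetical):

```python
import numpy as np

# Unit-energy 8-PSK constellation
pts = np.exp(2j * np.pi * np.arange(8) / 8)

def min_dist(subset):
    """Minimum pairwise Euclidean distance within a subset of points."""
    return min(abs(a - b)
               for i, a in enumerate(subset)
               for b in subset[i + 1:])

level0 = pts        # all 8 points: d = 2 sin(pi/8) ~ 0.765
level1 = pts[::2]   # a 4-point (QPSK) subset: d = sqrt(2) ~ 1.414
level2 = pts[::4]   # a 2-point (antipodal) subset: d = 2
```

Each partition step doubles the squared distance, which is exactly why assigning subsets so that diverging/merging branches come from deep partition levels maximizes the free distance of the trellis code.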
Coding Gain
About 3 dB
Bit-Interleaved Coded Modulation
Coded bits are interleaved prior to modulation.
Performance of this scheme is quite desirable.
Relatively simple (from a complexity standpoint) to implement.
Binary encoder → Bitwise interleaver → M-ary modulator → Channel → Soft demodulator → Bitwise deinterleaver → Soft decoder
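The transmitter side of the chain above can be sketched with a simple row/column block interleaver and a Gray-mapped QPSK modulator (illustrative helper names and parameters; deployed systems use carefully designed interleavers):

```python
import numpy as np

def block_interleave(bits, rows, cols):
    """Write the coded bits row-wise into a rows x cols array, read column-wise."""
    assert len(bits) == rows * cols
    return np.asarray(bits).reshape(rows, cols).T.flatten()

def block_deinterleave(bits, rows, cols):
    """Inverse permutation of block_interleave."""
    return np.asarray(bits).reshape(cols, rows).T.flatten()

def qpsk_map(bits):
    """Gray-mapped QPSK: each bit pair -> one unit-energy symbol."""
    b = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
```

The interleaver spreads adjacent coded bits across different symbols (and hence different fades), which is the source of BICM's diversity advantage on fading channels.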
BICM Performance
[Figure: minimum Eb/N0 (dB) versus code rate R for CM and BICM over the AWGN channel with noncoherent detection; one curve per modulation alphabet size M = 2, 4, 16, 64.]
Video Standard
Two camps
– H.261, H.263, H.264
– MPEG-1 (VCD), MPEG-2 (DVD), MPEG-4
Spatial Redundancy: JPEG
– Intraframe compression
– DCT compression + Huffman coding
Temporal Redundancy
– Interframe compression
– Motion estimation
Discrete Cosine Transform (DCT)
0 – black, 255 – white
[Figure: an example 8×8 block of grayscale pixel values used as input to the DCT.]
DCT and Huffman Coding
0 – black, 255 – white
[Figure: the quantized DCT coefficients of the 8×8 block: a large DC coefficient (700) and a few low-frequency AC coefficients (90, 100, -89); all remaining coefficients are zero, which makes Huffman coding very effective. The DCT basis vectors are shown alongside.]
Using DCT in JPEG
DCT on 8x8 blocks
Comparison of DFT and DCT
Quantization and Coding
Zonal coding: coefficients outside the zone mask are zeroed.
• The coefficients outside the zone may contain significant energy.
• Local variations are not reconstructed properly.
Examples: 30:1 compression and 12:1 compression.
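The zonal-coding idea can be illustrated with an orthonormal 8×8 DCT and a triangular low-frequency mask (a sketch only; JPEG actually applies a per-coefficient quantization table rather than a hard zone):

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix for an 8-point transform
C = np.array([[np.cos((2 * n + 1) * k * np.pi / (2 * N)) for n in range(N)]
              for k in range(N)]) * np.sqrt(2.0 / N)
C[0] /= np.sqrt(2)          # DC row scaling makes C orthonormal

def dct2(block):
    """Separable 2-D DCT of an 8x8 block."""
    return C @ block @ C.T

def idct2(coef):
    """Inverse 2-D DCT (C is orthonormal, so the inverse is the transpose)."""
    return C.T @ coef @ C

def zonal_mask(keep):
    """Keep only the low-frequency zone: coefficients with row + col < keep."""
    i, j = np.indices((N, N))
    return (i + j < keep).astype(float)
```

Usage: `rec = idct2(dct2(block) * zonal_mask(4))` keeps 10 of the 64 coefficients; the reconstruction is smooth but, as the slide notes, local variations outside the zone are lost.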
Motion Compensation
I-Frame
– independently reconstructed
P-Frame
– forward predicted from the last I-frame or P-frame
B-Frame
– forward and backward predicted from the last/next I-frame or P-frame
Transmitted as: I P B B B P B B B
Motion Prediction
Motion Compensation Approach (cont.)
Motion Vectors
– A static background is a very special case; in general we must consider the displacement of each block.
– The motion vector tells the decoder exactly where in the previous image to get the data.
– The motion vector is zero for a static background.
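Motion estimation itself is typically a block-matching search. A minimal full-search sketch that minimizes the sum of absolute differences (SAD); the block size and search range are hypothetical illustration choices:

```python
import numpy as np

def motion_vector(ref, cur, top, left, bsize=8, search=4):
    """Full-search block matching: find the displacement (dy, dx) into `ref`
    that best matches the block of `cur` at (top, left), minimizing SAD."""
    block = cur[top:top + bsize, left:left + bsize]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # skip candidate blocks that fall outside the reference frame
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

Note that for a static background the best match is at (0, 0), i.e. a zero motion vector, consistent with the bullet above.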
Motion estimation for different frames
[Figure: a B-frame (Y) is predicted both from an earlier frame (X) and from a later frame (Z).]
A typical group of pictures in coding order:
I P B B B P B B B P B B B (display positions 1 5 2 3 4 9 6 7 8 13 10 11 12)
A typical group of pictures in display order:
I B B B P B B B P B B B P
Coding of Macroblock
[Figure: a macroblock consists of four luminance (Y) blocks (0–3) and one block each of the color-difference components CB (4) and CR (5).]
Spatial sampling relationship for MPEG-1:
-- luminance sample
-- color-difference sample
A Simplified MPEG encoder
[Figure: IN → frame reorder → DCT → quantize → variable-length coder → transmit buffer → OUT. A rate controller sets the quantizer scale factor from the transmit-buffer fullness. A local decoding loop (dequantize → inverse DCT → reference frame → motion predictor) produces the prediction and the motion vectors, which are passed to a prediction encoder.]
MPEG Standards
MPEG stands for the Moving Picture Experts Group. MPEG is
an ISO/IEC working group, established in 1988 to develop
standards for digital audio and video formats. There are five
MPEG standards being used or in development. Each
compression standard was designed with a specific application
and bit rate in mind, although MPEG compression scales well
with increased bit rates. They include:
– MPEG-1
– MPEG-2
– MPEG-4
– MPEG-7
– MPEG-21
(MP3 is not a separate standard: it is MPEG-1 Audio Layer III.)
MPEG Standards
MPEG-1
Designed for up to 1.5 Mbit/sec
Standard for the compression of moving pictures and audio. This was based on CD-ROM video
applications, and is a popular standard for video on the Internet, transmitted as .mpg files. In
addition, layer 3 of MPEG-1 audio is the most popular standard for digital compression of audio, known
as MP3. MPEG-1 is the compression standard for VideoCD, the most popular video distribution
format throughout much of Asia.
MPEG-2
Designed for between 1.5 and 15 Mbit/sec
Standard on which Digital Television set top boxes and DVD compression is based. It is based on
MPEG-1, but designed for the compression and transmission of digital broadcast television. The
most significant enhancement from MPEG-1 is its ability to efficiently compress interlaced video.
MPEG-2 scales well to HDTV resolution and bit rates, obviating the need for an MPEG-3.
MPEG-4
Standard for multimedia and Web compression. MPEG-4 is based on object-based compression,
similar in nature to the Virtual Reality Modeling Language. Individual objects within a scene are
tracked separately and compressed together to create an MPEG-4 file. This results in very efficient
compression that is very scalable, from low bit rates to very high. It also allows developers to control
objects independently in a scene, and therefore introduce interactivity.
MPEG-7 - this standard, currently under development, is also called the Multimedia Content
Description Interface. When released, the group hopes the standard will provide a framework for
multimedia content that will include information on content manipulation, filtering and
personalization, as well as the integrity and security of the content. Contrary to the previous MPEG
standards, which described actual content, MPEG-7 will represent information about the content.
MPEG-21 - work on this standard, also called the Multimedia Framework, has just begun. MPEG-21
will attempt to describe the elements needed to build an infrastructure for the delivery and
consumption of multimedia content, and how they will relate to each other.
JPEG
JPEG stands for Joint Photographic Experts Group. It is also an
ISO/IEC working group, but works to build standards for
continuous tone image coding. JPEG is a lossy compression
technique used for full-color or gray-scale images, by exploiting
the fact that the human eye will not notice small color changes.
JPEG 2000 is an initiative that will provide an image coding
system using compression techniques based on the use of
wavelet technology.
DV
DV is a high-resolution digital video format used with video cameras and
camcorders. The standard uses DCT to compress the pixel data and is a form
of lossy compression. The resulting video stream is transferred from the
recording device via FireWire (IEEE 1394), a high-speed serial bus capable
of transferring data up to 50 MB/sec.
– H.261 is an ITU standard designed for two-way communication over
ISDN lines (video conferencing) and supports data rates that are
multiples of 64 kbit/s. The algorithm is based on DCT and can be
implemented in hardware or software and uses intraframe and interframe
compression. H.261 supports CIF and QCIF resolutions.
– H.263 is based on H.261 with enhancements that improve video quality
over modems. It supports CIF, QCIF, SQCIF, 4CIF and 16CIF
resolutions.
– H.264
HDTV: 4–7 Mbps up to 25–27 Mbps
Unequal Error Protection
Multiple Description Coding
Video
– Base layer vs. enhancement layer
Unequal Error Protection
For packets of different importance, different channel coding is used.
Joint Source Channel Coding
Limited bandwidth
– The more bits spent on source data, the fewer are left for channel protection.
What is the best tradeoff?