Network Video Codecs
The Brandywine NVA family of video encoders and decoders provides
cutting edge video and audio compression/decompression in a variety of
channel counts and formats.
NVA-100 NVA-240 NVA-400 NVA-800
What Exactly is a "Codec"?
"Codec" is a technical name for "compression/decompression". It also stands for
"compressor/decompressor" and "code/decode". All of these variations mean the same thing: a
codec is a computer program that both shrinks large movie files, and makes them playable on
your computer. A codec can consist of two components—an encoder and a decoder. The encoder
compresses a file during creation, and the decoder decompresses the file so that it can be played.
Why do we need codecs?
Because video and music files are large, they become difficult to transfer across the Internet
quickly. To help speed up downloads, mathematical "codecs" were built to encode ("shrink") a
signal for transmission and then decode it for viewing or editing.
Without codecs, downloads would take three to five times longer than they do now.
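The encode/decode round trip described above can be sketched with Python's built-in zlib module. This is only an analogy: zlib is a general-purpose lossless codec, while video codecs such as H.264 are lossy and far more specialized, but the encoder/decoder pairing is the same idea.

```python
import zlib

# Encoder/decoder pair, like the codec components described above.
raw = b"frame data " * 1000          # highly redundant payload, like raw video
encoded = zlib.compress(raw)         # encoder: shrink the data for transfer
decoded = zlib.decompress(encoded)   # decoder: restore the data for playback

assert decoded == raw                # lossless round trip
print(f"{len(raw)} bytes -> {len(encoded)} bytes")
```

Without the encoding step, all 11,000 bytes would travel over the network; with it, only the much smaller compressed form does.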
What are "hard" and "soft" codecs?
Hard codecs are hardware codecs, normally a computer chip integrated into a product,
like our NVA units. You supply power and raw video at one end, and get compressed
video out the other end in real time. Soft codecs are software modules that do the same
thing, such as the DV codecs supplied by QuickTime, VLC, Adobe player and Microsoft.
Which is better, a hard or soft codec?
One thing to keep in mind is that "hard" vs. "soft" doesn't matter when it comes to video
quality; both give excellent results when working properly. Too much depends on other
factors, like the speed of the computer's CPU, bus and bus interface chipset, to decisively
say that one codec will be faster than the other in effects rendering. However, hard codecs
do have some advantages depending on your requirements. Hard codec systems usually
come with breakout boxes that include analog (composite, Y/C, component) and HDMI
connections as well as 1394 connections. You can connect any video format with analog
I/O to the box and capture it in real time or output to it in real time. A hard codec
eliminates the delays, and often the uncertainty, of customer-dependent equipment.
To better understand our NVA products let’s discuss some key
features & functions and define a few terms.
MPEG & H.264 defined
Streaming video over an ethernet network
Network Protocols for Streaming Video over IP
Unicast and Multicast
Resolution, HD, Full D1 & CIF (Common Intermediate Format)
Frame Rate & Bit Rate
GOP (Group of Pictures), Interlace, Progressive
Definition of “streaming” video and audio:
Digital video and audio are "streaming" when they move quickly from one
piece of hardware to another and don't have to be complete in one place before
the destination device can do something with them. The data also makes no
demand for long-term storage on the receiving hardware, such as
permanent space on a hard disk or storage device. In other words, you can view
the content without having to download and store it on your computer.
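The definition above can be sketched with a Python generator: the receiver consumes each chunk as it arrives and never holds, or stores, the whole stream at once. The chunk size and the stand-in payload are arbitrary choices for illustration.

```python
def stream(source: bytes, chunk_size: int = 1024):
    """Yield a media payload in chunks; a player can start rendering
    the first chunk before the last one has even been sent."""
    for start in range(0, len(source), chunk_size):
        yield source[start:start + chunk_size]

media = bytes(range(256)) * 40        # stand-in for an encoded A/V stream
played = bytearray()
for chunk in stream(media):
    played.extend(chunk)              # "play" each chunk on arrival

assert bytes(played) == media         # everything was viewed, nothing stored long-term
```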
MPEG (pronounced EHM-pehg), the Moving Picture Experts Group, develops standards for digital video and digital audio
compression. It operates under the auspices of the International Organization for Standardization (ISO). The MPEG standards
are an evolving series, each designed for a different purpose.
MPEG-1
Streamed from high-output servers or network appliances. Usually streamed as-is (.mpg files).
Use is flexible. Can be used for high-quality videoconferencing and video-on-demand (VOD).
Compared with MPEG-2:
– Lower quality than MPEG-2
– Doesn't need as much bandwidth as MPEG-2, but highest quality still demands moderate bandwidth
– A less efficient audio compression system
– Lack of flexibility (fewer variations of acceptable packet types)
– Does not support interlaced footage
MPEG-2
Evolved out of the shortcomings of MPEG-1.
MPEG-4
Absorbs many of the features of MPEG-1, MPEG-2 and other related standards, adding some new ones.
Aimed primarily at low bit-rate video communications.
H.264 [see next slide]
H.264/AVC [Advanced Video Coding]
H.264 represents a revolutionary advance in video compression technology.
H.264 is a joint standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC
Moving Picture Experts Group (MPEG).
Developed to provide high-quality video at a much lower bit rate than standard
MPEG-4 or JPEG.
Any video application can benefit from a reduction in bandwidth requirements, but the
highest impact will be in applications where the reduction relieves a hard
technical constraint, or makes more cost-effective use of the available bandwidth.
For example, JPEG compression operates at 260 Kb/s, while MPEG-4 transmits at
85 Kb/s and H.264 transmits at 50 Kb/s.
To put this into perspective, MPEG-4 requires approximately one-third of the
bandwidth used by JPEG, and H.264 just one-fifth. That is roughly a 40%
saving between standard MPEG-4 and H.264.
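The arithmetic behind these comparisons, using the exact Kb/s figures quoted above, works out as follows:

```python
# Bandwidth figures quoted above (Kb/s) for comparable picture quality.
jpeg, mpeg4, h264 = 260, 85, 50

print(f"MPEG-4 uses {mpeg4 / jpeg:.2f} of JPEG's bandwidth")   # ~1/3
print(f"H.264 uses {h264 / jpeg:.2f} of JPEG's bandwidth")     # ~1/5

saving = (mpeg4 - h264) / mpeg4 * 100
print(f"H.264 saves {saving:.0f}% relative to MPEG-4")
```

With these numbers the saving between MPEG-4 and H.264 comes to about 41%, which is the "roughly 40%" cited above.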
Streaming Video over Ethernet
Through advancements in digital video compression, composite audio and video
signals can now be carried over typical network circuits, both on the LAN and across
the WAN, and even over the Internet.
Video over IP, or IP Streaming Video, refers to technologies that allow video signals to be
captured, digitized, streamed and managed over IP networks using Ethernet.
The first step is capturing the video content.
– The content is processed, compressed, stored and edited on a video server.
– The content can either be "live" (captured and processed in real time) or prerecorded and stored.
These transmissions can then be sent via the network to either one or several
stations for viewing individually or simultaneously.
The viewing station will need either a hardware or software viewer.
Network Protocols for Streaming Video over IP
[all are supported by our NVA products]
TCP/IP – Transmission Control Protocol/Internet Protocol
One of the core protocols of the Internet Protocol Suite, providing reliable, ordered delivery of a stream of bytes from a program on one
computer to a program on another computer.
Controls segment size, flow control, the rate at which data is exchanged, and network traffic congestion.
NTP – Network Time Protocol
Used to synchronize the clocks of computer/network systems over packet-switched, variable-latency data networks.
One of the oldest Internet protocols still in use (since 1985).
UDP – User Datagram Protocol or Universal Datagram Protocol
A core member of the Internet Protocol Suite, allowing users to send messages (datagrams) to other hosts on an IP network without
requiring prior communication or handshaking.
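UDP's connectionless behavior can be sketched with Python's standard socket module: the sender transmits a datagram with no prior handshake, which is why UDP suits live video, where a lost packet is simply skipped rather than retransmitted. The loopback address and message contents here are purely illustrative.

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(5.0)
port = recv.getsockname()[1]

# Sender: fire a self-contained datagram with no connection setup.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"video packet 1", ("127.0.0.1", port))

data, addr = recv.recvfrom(2048)
print(data)          # the datagram arrives as one message, not a byte stream
send.close()
recv.close()
```

Compare this with TCP, where a three-way handshake and acknowledgments would precede and follow the same payload.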
HTTP – Hypertext Transfer Protocol
An application-level protocol for distributed, collaborative, hypermedia information systems.
Used for retrieving interlinked resources, called hypertext documents.
The World Wide Web (WWW) is a system of interlinked hypertext documents on the Internet; with HTTP one can view web pages that
may contain text, images, videos and other multimedia, and navigate between them.
RTP – Real-time Transport Protocol
Defines a standardized packet format for delivering audio and video over the Internet.
Used extensively in communication and entertainment systems that involve streaming media.
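The standardized packet format mentioned above begins with a 12-byte fixed header defined in RFC 3550. The sketch below parses the fields most relevant to streaming (version, payload type, sequence number, timestamp, SSRC); the example packet, including the dynamic payload type 96 and the sample values, is hypothetical.

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550), big-endian."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # top two bits; always 2 for RTP
        "payload_type": b1 & 0x7F,     # low 7 bits of the second byte
        "sequence": seq,               # increments per packet, detects loss
        "timestamp": ts,               # sampling instant of the payload
        "ssrc": ssrc,                  # identifies the stream source
    }

# Hypothetical packet: version 2, payload type 96, sequence 1000.
pkt = struct.pack("!BBHII", 0x80, 96, 1000, 160, 0xDEADBEEF) + b"payload"
hdr = parse_rtp_header(pkt)
print(hdr["version"], hdr["payload_type"], hdr["sequence"])   # 2 96 1000
```

The sequence number and timestamp are what let a receiver reorder packets and reconstruct timing, which plain UDP alone does not provide.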
RTSP – Real Time Streaming Protocol
A network control protocol used to establish and control media sessions between end points.
Similar to HTTP, but RTSP maintains session state and introduces new requests, whereas HTTP is stateless.
DHCP – Dynamic Host Configuration Protocol
A computer networking protocol used to dynamically distribute IP addresses to hosts on a network.
SSL – Secure Sockets Layer
A predecessor to TLS (Transport Layer Security), SSL is a cryptographic protocol that provides security for communications over
networks such as the Internet.
Unicast & Multicast
Unicast is communication between a single sender and a single receiver over a network.
The term exists in contradistinction to multicast, communication between a single sender and
multiple receivers, and anycast, communication between any sender and the nearest of a group
of receivers in a network.
An earlier term, point-to-point communication, is similar in meaning to unicast.
Currently, our product only supports unicast.
Multicast is communication between a single sender and multiple receivers over a network,
for example sending out data to distributed servers on the MBone (Multicast Backbone).
For large amounts of data, IP Multicast is more efficient than normal Internet transmissions
because the server can broadcast a message to many recipients simultaneously.
Unlike traditional Internet traffic that requires separate connections for each source-destination
pair, IP Multicasting allows many recipients to share the same source. This means that just one
set of packets is transmitted for all the destinations.
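The efficiency claim above comes down to simple arithmetic. The stream rate and viewer count below are hypothetical values chosen for illustration:

```python
# With unicast the sender repeats every packet per viewer;
# with multicast it sends one copy that routers replicate.
stream_rate_mbps = 5          # e.g. a DVD-quality MPEG-2 stream
viewers = 100

unicast_load = stream_rate_mbps * viewers   # Mbit/s leaving the server
multicast_load = stream_rate_mbps           # one set of packets for everyone

print(unicast_load, multicast_load)   # 500 5
```

At 100 viewers the unicast server must push 500 Mbit/s, while the multicast server still pushes only the 5 Mbit/s of a single stream.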
Resolution, HD, Full D1, CIF
The term resolution applies to fixed-pixel-array displays such as plasma, liquid
crystal and digital light processing displays, projectors and similar technologies.
It is the physical number of columns and rows of pixels that create the display image.
HD (also known as HDTV) is a digital TV broadcasting system with higher resolution
than traditional television systems:
– 1080i (1920x1080 split into two interlaced fields of 540 lines each)
– 1080p (1920x1080 progressive scan)
D1 refers to a resolution standard. Full D1 in the NTSC system means 720x480 pixels,
and in the PAL and SECAM systems full D1 is 720x576.
CIF – Common Intermediate Format is used to standardize the horizontal and
vertical resolutions in pixels.
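The resolutions named in this section can be tabulated as column × row pixel counts. The CIF dimensions (352x288) come from the H.261/H.263 family of standards; the rest are quoted above.

```python
# Pixel dimensions for the formats discussed above: (columns, rows).
formats = {
    "CIF":          (352, 288),
    "Full D1 NTSC": (720, 480),
    "Full D1 PAL":  (720, 576),
    "1080i/1080p":  (1920, 1080),
}

for name, (w, h) in formats.items():
    print(f"{name}: {w}x{h} = {w * h:,} pixels")
```

Total pixel count is what drives the raw data rate, which is why CIF (about 100,000 pixels) needs so much less bandwidth than 1080-line HD (over 2 million).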
Frame Rate & Bit Rate
Frame rate, or frame frequency, is the number of frames or images that are projected or
displayed per second. Frame rate is most often expressed in frames per second (FPS).
FPS is one of the factors affecting movie file size and quality.
The higher the frame rate, the smoother objects move in the movie.
Bit Rate is the number of bits that pass a given point in a network in a given amount of
time, usually a second. Thus, a bit rate is usually measured in some multiple of bits per
second. The term bit rate is a synonym for data transfer rate (or simply data rate).
16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable "talking head" picture
using various video compression schemes)
128 – 384 kbit/s – business-oriented videoconferencing quality using video compression
1.25 Mbit/s – VCD quality (with bit-rate reduction from MPEG-1 compression)
1374 kbit/s – VCD (Video CD) – audio and video streams multiplexed in an MPEG-PS
5 Mbit/s typ – DVD quality (with bit-rate reduction from MPEG-2 compression)
8 to 15 Mbit/s typ – HDTV quality (with bit-rate reduction from MPEG-4 AVC compression)
29.4 Mbit/s max – HD DVD
40 Mbit/s max – Blu-ray Disc
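Bit rate multiplied by duration gives the size of a stream, which is how the figures above translate into storage requirements. A worked example using the DVD-quality rate from the list:

```python
# Bit rate x duration = stream size. A two-hour movie at DVD quality:
bit_rate = 5_000_000          # 5 Mbit/s, from the table above
seconds = 2 * 60 * 60         # two hours

total_bits = bit_rate * seconds
gigabytes = total_bits / 8 / 1e9   # 8 bits per byte, 1e9 bytes per GB
print(f"{gigabytes:.1f} GB")       # 4.5 GB
```

4.5 GB is close to the capacity of a single-layer DVD, which is no coincidence: the disc format and the bit rate were designed together.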
GOP, Interlace, Progressive
In video coding, a group of pictures, or GOP structure, specifies the order in
which intra- and inter- frames are arranged.
Having more frames in a GOP reduces the encoded file size.
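A GOP's arrangement of intra- (I) and inter- (P, B) frames can be sketched as a display-order pattern. The function below is a simplified illustration, not any particular encoder's layout: one I-frame opens the GOP, P-frames anchor it at a fixed spacing, and B-frames fill the gaps.

```python
def gop_pattern(gop_size: int, b_frames: int) -> str:
    """Display-order sketch of a GOP: an I-frame, then a P anchor
    every (b_frames + 1) positions with B-frames in between."""
    step = b_frames + 1
    return "".join(
        "I" if i == 0 else ("P" if i % step == 0 else "B")
        for i in range(gop_size)
    )

print(gop_pattern(9, 2))   # IBBPBBPBB
print(gop_pattern(4, 0))   # IPPP
```

A longer GOP means fewer of the expensive I-frames per second, which is why it shrinks the output, at the cost of slower seeking and slower recovery from packet loss.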
Video can be interlaced or progressive, denoted as 1080i or 1080p.
Interlacing was invented as a way to achieve good visual quality within the
limitations of a narrow bandwidth. The horizontal scan lines of each interlaced
frame are numbered consecutively and partitioned into two fields.
NTSC, PAL and SECAM are interlaced formats.
i is used to indicate interlacing
For example: video format specified as 1080i60
1080 indicates the vertical line resolution, i indicates interlacing, and 60 indicates the field rate (60 fields, or 30 full frames, per second).
In progressive video, the lines that make up the picture are drawn in order from top to
bottom (1, 2, 3, 4 …).
In basic terms, a video can be thought of as being made up of numerous snapshots, called frames. The frame rate, or
the number of frames displayed each second, is 29.97 in the United States and other NTSC based countries. For the
sake of simplicity, we can round this number to 30 frames per second (fps). In many European countries, they use a
PAL or SECAM video system that displays exactly 25 fps. For this article, I will base my explanations on 30 fps, but
you can replace the number '30' with '25' for PAL/SECAM video and the same principles will hold true.
A television, however, does not deal with video in terms of frames. Instead, it displays a video using half-frames, called
fields. Each frame contains exactly two fields. One field is made up of the odd horizontal lines in a frame. This is
called the odd field or the top field since it contains the top line of the image. The other field is made up of the even
horizontal lines in a frame. This is called the even field or bottom field.
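The field split described above is just an alternation of a frame's horizontal lines, which a couple of Python slices can demonstrate. The 10-line frame is a toy stand-in for a real image.

```python
# Lines numbered from 1, as in the text: odd lines (1, 3, 5, ...) form the
# top field, even lines (2, 4, 6, ...) the bottom field.
frame = [f"line {n}" for n in range(1, 11)]   # a 10-line "frame"

top_field = frame[0::2]       # lines 1, 3, 5, 7, 9  (contains the top line)
bottom_field = frame[1::2]    # lines 2, 4, 6, 8, 10

print(top_field[0], "|", bottom_field[0])     # line 1 | line 2
assert len(top_field) + len(bottom_field) == len(frame)
```

A TV displays these two fields in alternation, so each frame yields exactly two half-resolution fields per the explanation above.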