Why Compression?


Despite the many advantages of the digital representation of signals over its
analog counterpart, digital signals require a very large number of bits for
storage and transmission. For example, a high-quality audio signal requires
approximately 1.5 megabits per second for digital representation and storage.
Television-quality, low-resolution color video at 30 frames per second, with
each frame containing 640 x 480 pixels (24 bits per color pixel), needs more
than 210 megabits of storage per second. As a result, a digitized one-hour
color movie would require approximately 95 gigabytes of storage.
The storage requirement for high-definition television (HDTV), with a
resolution of 1280 x 720 at 60 frames per second, is far greater: a digitized
one-hour color movie of HDTV-quality video will require approximately 560
gigabytes of storage. A digitized 14 x 17 inch radiograph scanned at 70 µm
occupies nearly 45 megabytes of storage. Transmitting these digital signals
through limited-bandwidth communication channels is an even greater
challenge, and is sometimes impossible in their raw form. Although the cost
of storage has decreased drastically over the past decade due to significant
advances in microelectronics and storage technology, the demands of data
storage and data processing applications are growing explosively and continue
to outpace these gains.
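
As a quick sanity check on these figures, the arithmetic can be redone in a
few lines of Python. The sketch below is ours, not the book's; the function
names and the use of 2**30 bytes per "gigabyte" are assumptions about how the
quoted numbers were rounded.

# Back-of-the-envelope check of the uncompressed storage figures quoted above.
# Frame sizes, bit depths, and frame rates come from the text; treating one
# gigabyte as 2**30 bytes is our assumption about the rounding convention.

def raw_bitrate(width, height, bits_per_pixel, frames_per_second):
    """Uncompressed video bit rate in bits per second."""
    return width * height * bits_per_pixel * frames_per_second

def one_hour_gigabytes(bits_per_second):
    """Storage for one hour of data, in gigabytes of 2**30 bytes."""
    return bits_per_second * 3600 / 8 / 2**30

sd = raw_bitrate(640, 480, 24, 30)    # ~221 million bits/s (> 210 Mbit/s)
hd = raw_bitrate(1280, 720, 24, 60)   # ~1.33 billion bits/s

print(f"SD video: {sd / 1e6:.0f} Mbit/s, one hour ~ {one_hour_gigabytes(sd):.0f} GB")
print(f"HDTV    : {hd / 1e6:.0f} Mbit/s, one hour ~ {one_hour_gigabytes(hd):.0f} GB")

This prints roughly 221 Mbit/s and 93 GB for the standard-definition case and
about 1327 Mbit/s and 556 GB for HDTV, in line with the figures quoted above.
(For comparison, CD-quality stereo audio at 44.1 kHz and 16 bits per sample
comes to about 1.4 Mbit/s, matching the roughly 1.5 Mbit/s cited for
high-quality audio.)
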
Interestingly enough, most sensory signals, such as still images, video, and
voice, contain significant amounts of superfluous and redundant information
in their canonical representation as far as the human perceptual system (our
eyes and ears) is concerned. For example, the neighboring pixels in a smooth
region of a natural image are very similar, and small variations in their
values are not noticeable to the human eye. The consecutive frames in a
stationary or slowly changing scene in a video are very similar and redundant.
Some audio data lie beyond the human audible frequency range and are useless
for all practical purposes. In other words, audio-visual signals contain data
that cannot be perceived by the human perceptual system at all; we call this
perceptual redundancy. In English text files, common words (e.g., “the”) and
similar patterns of character strings (e.g., “ze”, “th”) are used repeatedly.
It is also observed that the characters in a text file follow a
well-documented frequency distribution, with the letter “e” and the space
character being the most common. In numeric data files, we often observe runs
of similar numbers or predictable interdependencies among the numbers. We
have mentioned only a few examples here; there are many more such examples of
redundancy in the digital representation of all sorts of data.
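
To make this statistical redundancy concrete, here is a small illustration of
our own (it does not appear in the book): counting the character frequencies
of a short English string and computing its zero-order entropy, which falls
well below the 8 bits per character a plain ASCII file spends on it.

# Illustration (ours, not from the book): the skewed character-frequency
# distribution of English text means far fewer than 8 bits per character are
# needed on average; this is exactly the redundancy a compressor exploits.
from collections import Counter
from math import log2

text = "the quick brown fox jumps over the lazy dog"
counts = Counter(text)
total = len(text)

# Zero-order entropy: a lower bound, in bits per character, for any coder
# that treats the characters as independent symbols.
entropy = -sum((n / total) * log2(n / total) for n in counts.values())

print("most common characters:", counts.most_common(3))
print(f"entropy: {entropy:.2f} bits/character (vs. 8 bits in plain ASCII)")

On this tiny pangram the entropy already comes out around 4.4 bits per
character; typical English text has a zero-order entropy of roughly 4 bits
per character, and coders that also exploit context (common words, digraphs
such as “th”) do far better still.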

Data compression is the technique of reducing the redundancies in data
representation in order to decrease data storage requirements and hence
communication costs. Reducing the storage requirement is equivalent to
increasing the capacity of the storage medium and, in effect, of the
communication bandwidth. Thus
the development of efficient compression techniques will continue to be a design
challenge for future communication systems and advanced multimedia
applications.

Source: JPEG2000 Standard for Image Compression (Tinku Acharya)

