
Digitizing the Video Signal


There are two basic approaches to delivering video on a computer screen: analogue and digital video.

• Analogue video is essentially a product of the television industry and therefore conforms to television standards.
• Digital video is a product of the computing industry and therefore conforms to digital data standards.

Video, like audio, is usually recorded and played as an analog signal. It must
therefore be digitized in order to be incorporated into a multimedia title.

The figure below shows the process for digitizing an analog video signal.

A video source, such as a video camera, VCR, TV, or videodisc, is connected to a
video capture card in a computer. As the video source is played, the analog
signal is sent to the video card and converted into a digital file that is stored on
the hard drive. At the same time, the sound from the video source is also
digitized.

PAL (Phase Alternating Line) and NTSC (National Television
System Committee) are the two video standards of most
importance for analogue video.

PAL is the standard for most of Europe and the
Commonwealth, NTSC for North and South America. The
standards are inter-convertible, but conversion normally has to
be performed by a facilities house, and some quality loss may
occur.

Analogue video can be delivered into the computing interface
from any compatible video source (video recorder, videodisc
player, live television), provided the computer is equipped with
a special overlay board, which synchronizes video and
computer signals and displays computer-generated text and
graphics over the video.

The problem with sending composite video is that NTSC or PAL encoding interleaves four channels of
information (sync, luma, and I and Q chroma) on one cable. It is nearly impossible to decode a composite
analog signal with any quality without employing very expensive electronic processing, and the results can still
be poor. In fact, in the traditional analog TV station, once the camera encodes video it is never decoded
but passes through the chain unmodified until it reaches the transmitter. It then becomes the problem of the
consumer's TV set to try and untangle this mess.

A slight improvement can be had with S-Video, which uses two channels, one for sync and luma, the other
for I and Q chroma. All of this can be avoided by having the camera encode the video digitally; the decode
process, if needed, will then produce a near-perfect regeneration of the analog components, and a computer
can of course use the camera's digital output directly.

And this is just for standard definition. HD absolutely requires digital encoding at the camera. At the
television station, camera HD output is SDI (Serial Digital Interface), the broadcast equivalent of
HDMI except that it is not MPEG-2 compressed. Consumer gear (HDTVs, camcorders, and computers) is
limited to compressed versions of digital HD; full bandwidth is not possible.

Digital video
From Wikipedia, the free encyclopedia


Digital video is a type of video recording system that works by using a digital rather than an analog video
signal. The terms camera, video camera, and camcorder are used interchangeably in this article.

Contents

• 1 History

• 2 Overview of basic properties

o 2.1 Regarding Interlacing

o 2.2 Properties of compressed video

o 2.3 More on bit rate and BPP

 2.3.1 Constant bit rate versus variable bit rate

• 3 Technical overview

o 3.1 Poster frame

• 4 Interfaces and cables

• 5 Storage formats

o 5.1 Encoding

o 5.2 Tapes

o 5.3 Discs

• 6 See also
• 7 References

• 8 External links

History

Starting in the late 1970s to the early 1980s, several types of video production equipment, such as time base
correctors (TBCs) and digital video effects (DVE) units (two of the latter being the Ampex ADO and
the NEC DVE), were introduced that operated by taking a standard analog video input and digitizing it internally.
This made it easier to either correct or enhance the video signal, as in the case of a TBC, or to manipulate and
add effects to the video, in the case of a DVE unit. The digitized and processed clip from these units would then
be converted back to standard analog video.

Later on in the 1970s, manufacturers of professional video broadcast equipment, such as Bosch (through
their Fernseh division), RCA, and Ampex developed prototype digital videotape recorders in their research and
development labs. Bosch's machine used a modified 1" Type B transport, and recorded an early form of CCIR
601 digital video. None of these machines from these manufacturers were ever marketed commercially,
however.

Digital video was first introduced commercially in 1986 with the Sony D-1 format, which recorded an
uncompressed standard definition component video signal in digital form instead of the high-band analog forms
that had been commonplace until then. Due to its expense, D-1 was used primarily by large television
networks. It would eventually be replaced by cheaper systems using compressed data, most notably
Sony's Digital Betacam (still heavily used as a field recording format by professional television producers) that
were introduced into the network's studios.

One of the first digital video products to run on personal computers was PACo: The PICS Animation
Compiler from The Company of Science & Art in Providence, RI, which was developed starting in 1990 and first
shipped in May 1991.[1] PACo could stream unlimited-length video with synchronized sound from a single file on
CD-ROM. Creation required a Mac; playback was possible on Macs, PCs, and Sun Sparcstations. In 1992,
Bernard Luskin, Philips Interactive Media, and Eric Doctorow, Paramount Worldwide Video, successfully put
the first fifty videos in digital MPEG 1 on CD, developed the packaging and launched movies on CD, leading to
advancing versions of MPEG, and to DVD.

QuickTime, Apple Computer's architecture for time-based and streaming data formats, appeared in June 1991.
Initial consumer-level content creation tools were crude, requiring an analog video source to be digitized to a
computer-readable format. While low-quality at first, consumer digital video increased rapidly in quality, first
with the introduction of playback standards such as MPEG-1 and MPEG-2 (adopted for use in television
transmission and DVD media), and then the introduction of the DV tape format allowing recording direct to
digital data and simplifying the editing process, allowing non-linear editing systems to be deployed cheaply and
widely on desktop computers with no external playback/recording equipment needed. The widespread adoption
of digital video has also drastically reduced the bandwidth needed for a high definition television signal
(with HDV and AVCHD, as well as several commercial variants such as DVCPRO-HD, all using less bandwidth
than a standard definition analog signal) and has enabled tapeless camcorders based on flash memory and
often a variant of MPEG-4.

Overview of basic properties


Digital video comprises a series of orthogonal bitmap digital images displayed in rapid succession at a constant
rate. In the context of video these images are called frames.[2] We measure the rate at which frames are
displayed in frames per second (FPS).

Since every frame is an orthogonal bitmap digital image it comprises a raster of pixels. If it has a width
of W pixels and a height of H pixels we say that the frame size is WxH.

Pixels have only one property, their color. The color of a pixel is represented by a fixed number of bits. The
more bits the more subtle variations of colors can be reproduced. This is called the color depth (CD) of the
video.

An example video can have a duration (T) of 1 hour (3600 sec), a frame size of 640x480 (WxH), a color
depth of 24 bits, and a frame rate of 25 fps. This example video has the following properties:

 pixels per frame = 640 * 480 = 307,200

 bits per frame = 307,200 * 24 = 7,372,800 ≈ 7.37 Mbits

 bit rate (BR) = 7,372,800 * 25 = 184,320,000 bits/sec ≈ 184.3 Mbits/sec

 video size (VS)[3] ≈ 184.3 Mbits/sec * 3600 sec = 663,552 Mbits = 82,944 Mbytes ≈ 82.9 Gbytes

The most important properties are bit rate and video size. The formulas relating those two with all other
properties are:

BR = W * H * CD * FPS
VS = BR * T = W * H * CD * FPS * T
(units are: BR in bit/s, W and H in pixels, CD in bits, VS in bits, T
in seconds)

while some secondary formulas are:

pixels_per_frame = W * H
pixels_per_second = W * H * FPS
bits_per_frame = W * H * CD
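As a check, the formulas above can be expressed directly in code. This is a minimal sketch; the function names are ours, chosen for readability:

```python
# Bit rate and video size of uncompressed video, per the formulas above.

def bit_rate(w, h, cd, fps):
    """BR = W * H * CD * FPS, in bits per second."""
    return w * h * cd * fps

def video_size(w, h, cd, fps, t):
    """VS = BR * T, in bits."""
    return bit_rate(w, h, cd, fps) * t

# The worked example: 640x480 frame, 24-bit color, 25 fps, 1 hour.
br = bit_rate(640, 480, 24, 25)
vs = video_size(640, 480, 24, 25, 3600)
print(br)  # 184320000 bit/s, about 184 Mbit/s
print(vs)  # 663552000000 bits, about 82.9 Gbytes
```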

Regarding Interlacing
In interlaced video each frame is composed of two halves of an image. The first half contains only the odd-
numbered lines of a full frame. The second half contains only the even-numbered lines. Those halves are
referred to individually as fields. Two consecutive fields compose a full frame. If an interlaced video has a frame
rate of 15 frames per second the field rate is 30 fields per second. All the properties and formulas discussed
here apply equally to interlaced video but one should be careful not to confuse the fields per second rate with
the frames per second rate.
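The relationship between frames and fields described above can be sketched as follows, modeling a frame simply as a list of scan lines:

```python
# Sketch: splitting one interlaced frame into its two fields.
# Line 1 is the top scan line of the frame.

def split_into_fields(frame):
    odd_field = frame[0::2]   # odd-numbered lines 1, 3, 5, ...
    even_field = frame[1::2]  # even-numbered lines 2, 4, 6, ...
    return odd_field, even_field

frame = [f"line {n}" for n in range(1, 7)]
odd, even = split_into_fields(frame)
print(odd)   # ['line 1', 'line 3', 'line 5']
print(even)  # ['line 2', 'line 4', 'line 6']

# As in the text: 15 frames per second means 30 fields per second.
frames_per_second = 15
fields_per_second = frames_per_second * 2
print(fields_per_second)  # 30
```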

Properties of compressed video


The above are accurate for uncompressed video. Because of the relatively high bit rate of uncompressed
video, video compression is extensively used. In the case of compressed video each frame requires only a small
percentage of the original bits. Assuming a compression algorithm that shrinks the input data by a factor of CF,
the bit rate and video size become:

BR = W * H * CD * FPS / CF
VS = BR * T

Please note that it is not necessary that all frames are equally compressed by a factor of CF. In practice they
are not, so CF is the average factor of compression for all the frames taken together.

The above equation for the bit rate can be rewritten by combining the compression factor and the color depth
like this:

BR = W * H * ( CD / CF ) * FPS

The value (CD / CF) represents the average bits per pixel (BPP). As an example, if we have a color depth of
12 bits/pixel and an algorithm that compresses at 40x, then BPP equals 0.3 (12/40). So in the case of
compressed video the formula for bit rate is:

BR = W * H * BPP * FPS

In fact the same formula is valid for uncompressed video because in that case one can assume that the
"compression" factor is 1 and that the average bits per pixel equal the color depth.
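The BPP relationship can be illustrated with the numbers used above (12 bits/pixel compressed 40x); the helper names here are illustrative only:

```python
# Compressed bit rate via average bits per pixel: BPP = CD / CF,
# BR = W * H * BPP * FPS. CF = 1 gives the uncompressed case.

def bpp(color_depth, compression_factor):
    return color_depth / compression_factor

def compressed_bit_rate(w, h, color_depth, fps, compression_factor=1):
    return w * h * bpp(color_depth, compression_factor) * fps

print(bpp(12, 40))  # 0.3 bits per pixel
# 640 * 480 * 0.3 * 25 is about 2,304,000 bit/s (floating point rounding aside)
print(compressed_bit_rate(640, 480, 12, 25, 40))
```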

More on bit rate and BPP


As is obvious from its definition, bit rate is a measure of the rate of information content of the digital video stream.
In the case of uncompressed video, bit rate corresponds directly to the quality of the video (remember that bit
rate is proportional to every property that affects the video quality). Bit rate is an important property when
transmitting video because the transmission link must be capable of supporting that bit rate. Bit rate is also
important when dealing with the storage of video because, as shown above, the video size is proportional to
the bit rate and the duration. Bit rate of uncompressed video is too high for most practical applications. Video
compression is used to greatly reduce the bit rate.

BPP is a measure of the efficiency of compression. A true-color video with no compression at all may have a
BPP of 24 bits/pixel. Chroma subsampling can reduce the BPP to 16 or 12 bits/pixel.
Applying jpeg compression on every frame can reduce the BPP to 8 or even 1 bits/pixel. Applying video
compression algorithms like MPEG1, MPEG2 or MPEG4 allows for fractional BPP values.

Constant bit rate versus variable bit rate

As noted above BPP represents the average bits per pixel. There are compression algorithms that keep the
BPP almost constant throughout the entire duration of the video. In this case we also get video output with
a constant bit rate (CBR). This CBR video is suitable for real-time, non-buffered, fixed bandwidth video
streaming (e.g. in videoconferencing).

Noting that not all frames can be compressed at the same level, because quality is more severely impacted in
scenes of high complexity, some algorithms try to constantly adjust the BPP. They keep it high while
compressing complex scenes and low for less demanding scenes. This way one gets the best quality at the
smallest average bit rate (and the smallest file size accordingly). Of course when using this method the bit rate
is variable because it tracks the variations of the BPP.

Technical overview
Standard film stocks such as 16 mm and 35 mm record at 24 frames per second. For video, there are two
frame rate standards: NTSC, which shoots at 30/1.001 (about 29.97) frames per second or 59.94 fields per
second, and PAL, 25 frames per second or 50 fields per second.

Digital video cameras come in two different image capture formats: interlaced and progressive scan.

Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are scanned, and then
the even-numbered lines are scanned, then the odd-numbered lines are scanned again, and so on. One set of
odd or even lines is referred to as a "field", and a consecutive pairing of two fields of opposite parity is called
a frame.
A progressive scan video camera records each frame as distinct, with all scan lines being captured at the same
moment in time. Thus, interlaced video samples the scene motion twice as often as progressive video
does, for the same number of frames per second.

Progressive-scan camcorders generally produce a slightly sharper image. However, motion may not be as
smooth as in interlaced video, which uses 50 or 59.94 fields per second, particularly if the camera employs the
24 frames per second standard of film. (Note that even though the digital video format only allows for 29.97
interlaced frames per second [or 25 for PAL], 24 frames per second progressive video is possible through a
technique called 3:2 pulldown.)
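The 3:2 pulldown cadence mentioned above can be sketched as follows, assuming the common 2-3 field pattern that spreads four film frames over ten interlaced fields:

```python
# Sketch of 3:2 (2:3) pulldown: four 24 fps film frames (A, B, C, D)
# become ten 60i fields, i.e. five interlaced video frames.

def pulldown_fields(film_frames):
    cadence = [2, 3, 2, 3]  # fields emitted per film frame
    fields = []
    for frame, count in zip(film_frames, cadence):
        fields.extend([frame] * count)
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])
print(fields)       # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(fields))  # 10 fields for every 4 film frames
```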

Digital video can be copied with no degradation in quality. No matter how many generations of a digital source
are copied, the copy will still be as clear as the original first generation of digital footage.

Digital video can be manipulated and edited to follow an order or sequence on an NLE, or non-linear
editing workstation, a computer-based device intended to edit video and audio. More and more, videos are
edited on readily available, increasingly affordable consumer-grade computer hardware and software.
However, such editing systems require ample disk space for video footage. Digital video recorded with
standard consumer-grade DV/DVCPRO compression takes up about 250 megabytes per minute or 13
gigabytes per hour.[citation needed]
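As a rough consistency check of the per-hour figure, assuming a total DV data rate of about 28.8 Mbit/s (video plus audio and overhead; an approximation, not an exact format constant):

```python
# Rough DV storage arithmetic under an assumed ~28.8 Mbit/s total rate.

DV_TOTAL_BITRATE = 28.8e6  # bits per second (assumed)

bytes_per_minute = DV_TOTAL_BITRATE / 8 * 60        # ~216 MB per minute
gigabytes_per_hour = DV_TOTAL_BITRATE / 8 * 3600 / 1e9

print(round(bytes_per_minute / 1e6))   # 216
print(round(gigabytes_per_hour, 1))    # 13.0, near the figure quoted above
```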

Digital video has a significantly lower cost than 35 mm film. The tape stock itself is very inexpensive — about
$3 for a 60 minute MiniDV tape, in bulk, as of December, 2005. Digital video also allows footage to be viewed
on location without the expensive chemical processing required by film. By comparison, 35 mm film stock costs
about $1000 per minute, including processing.[citation needed]

Digital video is used outside of movie making. Digital television (including higher quality HDTV) started to
spread in most developed countries in early 2000s. Digital video is also used in modern mobile
phones and video conferencing systems. Digital video is also used for Internet distribution of media,
including streaming video and peer-to-peer movie distribution.

Many types of video compression exist for serving digital video over the internet and on optical disks. The file
sizes of digital video used for professional editing are generally not practical for these purposes, and the video
requires further compression with codecs such as the Windows Media format, MPEG2, MPEG4, Real Media,
and more recently H.264. Probably the most widely used formats for delivering video over the internet are
MPEG4 and Windows Media, while MPEG2 is used almost exclusively for DVDs, providing an exceptional
image in minimal size but resulting in a high level of CPU consumption to decompress.

While still images can have any number of pixels, the video community defines various standards for
resolution. A path through devices that use incompatible resolutions may require that video be rescaled several
times from capture to ultimate audience display.
As of 2007, the highest resolution demonstrated for digital video generation is 33 megapixels (7680 x 4320) at
60 frames per second ("Ultra High Definition Television"), though this has only been demonstrated in special
laboratory settings. The highest speed is attained in industrial and scientific high speed cameras that are
capable of filming 1024x1024 video at up to 1 million frames per second for brief periods of recording.

Poster frame
A poster frame or preview frame is a selected frame of the video used as a thumbnail.[4]

Interfaces and cables


Many interfaces have been designed specifically to handle the requirements of uncompressed digital video (at
roughly 400 Mbit/s):

 Serial Digital Interface

 FireWire

 High-Definition Multimedia Interface

 Digital Visual Interface

 Unified Display Interface

 DisplayPort

 USB

 Digital component video

The following interface has been designed for carrying MPEG-Transport compressed video:

 DVB-ASI

Compressed video is also carried using UDP-IP over Ethernet. Two approaches exist for this:

 Using RTP as a wrapper for video packets

 1-7 MPEG Transport Packets are placed directly in the UDP packet
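The second approach can be sketched as follows: a standard TS packet is 188 bytes, so up to seven packets fit in a 1316-byte UDP payload (the packet contents here are dummies):

```python
# Sketch: grouping MPEG Transport Stream packets (188 bytes each) into
# UDP payloads, 1-7 packets per datagram as described above.

TS_PACKET_SIZE = 188

def pack_ts_into_udp(ts_packets, per_datagram=7):
    """Concatenate TS packets into payloads of up to per_datagram packets."""
    payloads = []
    for i in range(0, len(ts_packets), per_datagram):
        payloads.append(b"".join(ts_packets[i:i + per_datagram]))
    return payloads

# 10 dummy TS packets, each starting with the 0x47 sync byte.
packets = [bytes([0x47]) + bytes(TS_PACKET_SIZE - 1) for _ in range(10)]
payloads = pack_ts_into_udp(packets)
print(len(payloads))     # 2
print(len(payloads[0]))  # 1316 (7 * 188 bytes)
```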

Storage formats
Encoding

All current formats, which are listed below, are PCM based.

 CCIR 601 used for broadcast stations

 MPEG-4 good for online distribution of large videos and video recorded to flash memory
 MPEG-2 used for DVDs, Super-VCDs, and many broadcast television formats

 MPEG-1 used for video CDs

 H.261

 H.263

 H.264 also known as MPEG-4 Part 10, or as AVC, used for Blu-ray Discs and some broadcast
television formats

 Theora used for video on Wikipedia

Tapes

 Betacam, BetacamSP, Betacam SX, Betacam IMX, Digital Betacam, or DigiBeta — Commercial video
systems by Sony, based on original Betamax technology

 HDCAM was introduced by Sony as a high-definition alternative to DigiBeta.

 D1, D2, D3, D5, D9 (also known as Digital-S) — various SMPTE commercial digital video standards

 DV, MiniDV — used in most of today's videotape-based consumer camcorders; designed for high
quality and easy editing; can also record high-definition data (HDV) in MPEG-2 format

 DVCAM, DVCPRO — used in professional broadcast operations; similar to DV but generally
considered more robust; though DV-compatible, these formats have better audio handling.

 DVCPRO50, DVCPROHD support higher bandwidths as compared to Panasonic's DVCPRO.

 Digital8 — DV-format data recorded on Hi8-compatible cassettes; largely a consumer format

 MicroMV — MPEG-2-format data recorded on a very small, matchbook-sized cassette; obsolete

 D-VHS — MPEG-2 format data recorded on a tape similar to S-VHS

Discs

 VCD

 DVD

 Blu-ray Disc

See also

 Digital audio

 Digital cinematography

 Digital visual interface

 DVD
 HDV

 HDVSL

 ProHD

 AVCHD

 HD video

 Camcorder and Tapeless camcorder

 List of video topics

 Online media center

 Television

 Video

 Video coding

 Video editing software

 Video sharing

 Video quality

 Webcam

References

1. ^ "CoSA Lives: The Story of the Company Behind After Effects", http://www.motionworks.com.au/2009/11/cosa-lives/, retrieved 15 November 2009.

2. ^ In fact the still images correspond to frames only in the case of progressive scan video. In interlaced video they correspond to fields. See the section about interlacing for clarification.

3. ^ We use the term video size instead of just size in order to avoid confusion with the frame size.

4. ^ "Delivering a reliable Flash video experience", http://www.adobe.com/devnet/flash/articles/flash_cs3_video_techniques_ch12.pdf, retrieved 14 January 2010.


External links

 The DV, DVCAM, & DVCPRO Formats -- tech details, FAQ, and links

 Standard digital TV and video formats.


Digitizing
From Wikipedia, the free encyclopedia

[Image: Digitising old slides at home by photographing their projections, using a slide projector, tripod, and digital camera.]

Digitizing or digitization[1] is the representation of an object, image, sound, document or a signal (usually
an analog signal) by a discrete set of its points or samples. The result is called digital representation or,
more specifically, a digital image, for the object, and digital form, for the signal. Strictly speaking, digitizing
means simply capturing an analog signal in digital form. For a document the term means to trace the
document image or capture the "corners" where the lines end or change direction.

McQuail identifies the process of digitization as having immense significance to the computing ideals, as it
"allows information of all kinds in all formats to be carried with the same efficiency and also intermingled"
(2000:28).[2]

Contents

• 1 Process

• 2 Examples

• 3 Analog signals to digital


• 4 Analog texts to digital

• 5 Implications of digitization

• 6 Collaborative digitization projects

• 7 Library Preservation

• 8 Lean philosophy

• 9 Fiction

• 10 See also

• 11 References

Process

The term digitization is often used when diverse forms of information, such as text, sound, image or
voice, are converted into a single binary code. Digital information exists as one of two digits, either 0 or 1.
These are known as bits (a contraction of binary digits), and sequences of bits (typically groups of eight)
are called bytes.[3]

Analog signals are continuously variable, both in the number of possible values of the signal at a
given time, as well as in the number of points in the signal in a given period of time. However, digital
signals are discrete in both of those respects – generally a finite sequence of integers – therefore a
digitization can, in practical terms, only ever be an approximation of the signal it represents.

Digitization occurs in two parts:

Discretization

Reading an analog signal A and, at regular time intervals (the sampling frequency), recording the value of the
signal at that point. Each such reading is called a sample and may be considered to have infinite
precision at this stage;

Quantization

Samples are rounded to a fixed set of numbers (such as integers), a process known as quantization.

In general, these can occur at the same time, though they are conceptually distinct.

A series of digital integers can be transformed into an analog output that approximates the
original analog signal. Such a transformation is called a DA conversion. The sampling
rate and the number of bits used to represent the integers combine to determine how
closely the digitization approximates the original analog signal.
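The two steps, discretization and quantization, can be sketched for a stand-in analog signal; here a 1 Hz sine is sampled at 8 Hz and quantized to 8 bits, with both rates chosen arbitrarily for illustration:

```python
import math

# Discretization: read the signal at regular time intervals.
def sample(signal, sample_rate_hz, duration_s):
    n = int(sample_rate_hz * duration_s)
    return [signal(i / sample_rate_hz) for i in range(n)]

# Quantization: round each sample to a fixed set of integers.
def quantize(samples, bits=8):
    levels = 2 ** bits - 1
    # Map the range [-1.0, 1.0] onto 0..255; rounding loses precision.
    return [round((s + 1.0) / 2.0 * levels) for s in samples]

analog = lambda t: math.sin(2 * math.pi * t)  # stand-in "analog" signal
samples = sample(analog, sample_rate_hz=8, duration_s=1.0)
digital = quantize(samples)
print(digital)
```

The digital output is only an approximation of the sine wave; a higher sampling rate and more bits per sample would bring the DA-converted result closer to the original.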

Examples
The term is often used to describe the scanning of analog sources (such as printed photos or
taped videos) into computers for editing, but it also can refer to audio (where sampling rate is
often measured in kilohertz) and texture map transformations. In this last case, as in normal
photos, sampling rate refers to the resolution of the image, often measured in pixels per inch.

Digitizing is the primary way of storing images in a form suitable
for transmission and computer processing, whether scanned from two-dimensional analog
originals or captured using an image sensor-equipped device such as a digital
camera, tomographical instrument such as a CAT scanner, or acquiring precise dimensions
from a real-world object, such as a car, using a 3D scanning device.[4]

Digitizing is central to making digital representations of geographical features, using raster
or vector images, in a geographic information system, i.e., the creation of electronic maps,
either from various geographical and satellite imaging (raster) or by digitizing traditional
paper maps (vector).

"Digitization" is also used to describe the process of populating databases with files or data.
While this usage is technically inaccurate, it originates with the previously-proper use of the
term to describe that part of the process involving digitization of analog sources such as
printed pictures and brochures before uploading to target databases.

Digitizing may also be used in the field of apparel, where an image may be recreated with the
help of embroidery digitizing software tools and saved as embroidery machine code. This
machine code is fed into an embroidery machine and applied to the fabric. The most
widely supported format is the DST file.[citation needed]

Analog signals to digital


Analog signals are continuous electrical signals. Digital signals are non-continuous.[5]

Nearly all recorded music has been digitized. About 12 percent of the 500,000+ movies listed
on the Internet Movie Database are digitized on DVD.

Digitization of personal multimedia such as home movies, slides, and photographs is a
popular method of preserving and sharing older repositories. Slides and photographs may be
scanned using an image scanner, but videos are more difficult.[6] Many companies offer
personal video digitization services.[7][8][9]

Analog texts to digital


About 5 percent of texts have been digitized as of 2006.[10]
Older print books are being scanned and optical character recognition technologies applied by
academic and public libraries, foundations, and private companies like Google.[11]

Unpublished text documents on paper which have some enduring historical or research value
are being digitized by libraries and archives, though frequently at a much slower rate than for
books (see digital libraries). In many cases, archives have replaced microfilming with
digitization as a means of preserving and providing access to unique documents.

Implications of digitization
This shift to digitization in the contemporary media world has created implications for
traditional mass media products, however these "limitations are still very unclear" (McQuail,
2000:28). The more technology advances, the more converged the realm of mass media will
become with less need for traditional communication technologies. For example, the Internet
has transformed many communication norms, creating more efficiency for not only individuals,
but also for businesses. However, McQuail suggests traditional media have also benefited
greatly from new media, allowing more effective and efficient resources available (2000:28).

Collaborative digitization projects


There are many collaborative digitization projects throughout the United States. Two of the
earliest projects were the Collaborative Digitization Project in Colorado and NC ECHO - North
Carolina Exploring Cultural Heritage Online, based at the State Library of North Carolina.

These projects establish and publish best practices for digitization and work with regional
partners to digitize cultural heritage materials. Additional criteria for best practice have more
recently been established in the UK, Australia and the European Union.[12] Wisconsin Heritage
Online is a collaborative digitization project modeled after the Colorado Collaborative
Digitization Project. Wisconsin uses a wiki to build and distribute collaborative documentation.
Georgia's collaborative digitization program, the Digital Library of Georgia, presents a
seamless virtual library on the state's history and life, including more than a hundred digital
collections from 60 institutions and 100 agencies of government. The Digital Library of
Georgia is a GALILEO initiative based at the University of Georgia Libraries.

In South Asia, the Nanakshahi trust is digitizing manuscripts in the Gurmukhi script.

Library Preservation
Main article: Digital preservation

Digital preservation in its most basic form is a series of activities maintaining access to digital
materials over time.[13] Digitization in this sense is a means of creating digital surrogates of
analog materials such as books, newspapers, microfilm and videotapes. Digitization can
provide a means of preserving the content of the materials by creating an accessible facsimile
of the object in order to put less strain on already fragile originals. For sounds, digitisation of
legacy analogue recordings is essential insurance against technological obsolescence.[14]

The prevalent Brittle Books[15] issue facing libraries across the world is being addressed with a
digital solution for long term book preservation.[16] For centuries, books were printed on wood-
pulp paper, which turns acidic as it decays. Deterioration may advance to a point where a
book is completely unusable. In theory, if these widely circulated titles are not treated with
de-acidification processes, the material on those acidic pages will be lost forever. As digital
technology evolves, it is increasingly preferred as a method of preserving these materials,
mainly because it can provide easier access points and significantly reduce the need for
physical storage space.

Google, Inc. has taken steps towards attempting to digitize every title with "Google Book
Search".[17][18] While some academic libraries have been contracted by the service, issues of
copyright law violations threaten to derail the project.[19] However, it does provide, at the very
least, an online consortium for libraries to exchange information and for researchers to
search for titles as well as review the materials.

Lean philosophy
The broad use of the internet and the increasing popularity of Lean philosophy have also increased
the use and meaning of "digitizing" to describe improvements in the efficiency of
organizational processes. This will often involve some kind of Lean process in order to
simplify process activities, with the aim of implementing new "lean and mean" processes by
digitizing data and activities.

Fiction

Works of science-fiction often include the term digitize as the act of transforming people into
digital signals and sending them into a computer. When that happens, the people disappear
from the real world and appear in a computer world (as featured in the cult film Tron, the
animated series Code: Lyoko, or the late 1980s live-action series Captain Power and the
Soldiers of the Future). In the video game Beyond Good and Evil the protagonist's holographic
friend digitizes the player's inventory items.

See also

 Analog to digital converter

 Book scanning

 Digital audio

 Digital Library

 Digital television

 Frame grabber

 Graphics tablet

 Raster graphics

 Raster image

 Raster to vector

 Vector graphics

 Optical character recognition

References

1. ^ Also known as digitising or digitisation, digitalizing or digitalization; see American and British English spelling differences. NB: not digitalising or digitalisation (http://www.thefreedictionary.com/digitalisation).

2. ^ McQuail, D. (2000) McQuail's Mass Communication Theory (4th edition), Sage, London, pp. 16-34.

3. ^ Flew, Terry (2008). New Media: An Introduction, 3rd edition. South Melbourne: Oxford University Press.

4. ^ "Digimation for 3D Models, 3D Software and Creative Services".

5. ^ http://cbdd.wsu.edu/kewlcontent/cdoutput/TR502/page8.htm

6. ^ Paul Heltzel. "Good-Bye, VHS; Hello, DVD".

7. ^ http://www.yesvideo.com/

8. ^ http://www.homemoviedepot.com/

9. ^ http://www.videoconversionexperts.com/

10. ^ "Scan This Book!", New York Times, May 14, 2006.

11. ^ "Google Checks Out Library Books," press release, December 14, 2004, http://www.google.com/press/pressrel/print_library.html

12. ^ Digital Libraries: Principles and Practice in a Global Environment, Ariadne, April 2005.

13. ^ "What is Digital Preservation". Library Technology Reports 44:2 (Feb/March 2008): 5.

14. ^ IASA (2009). Guidelines on the Production and Preservation of Digital Audio Objects.

15. ^ http://en.wikipedia.org/wiki/Brittle_Books_Program

16. ^ Cloonan, M.V. and Sanett, S. "The Preservation of Digital Content," Libraries and the Academy, Vol. 5, No. 2 (2005): 213-37.

17. ^ http://en.wikipedia.org/wiki/Google_Book_Search

18. ^ http://books.google.com/

19. ^ Baksik, C. "Fair Use or Exploitation? The Google Book Search Controversy," Libraries and the Academy, Vol. 6, No. 2 (2006): 399-415.

