
The IP00C705 is an advanced image processor that performs scaling and de-interlacing with full 12-bit internal front-to-back processing resolution. It also features extended line buffers to handle more than 1920 pixels per line, and allows for easy integration of multiple devices working together. The IP00C705 also has a wide 12x12-pixel scaling core filter that delivers broadcast-quality format conversion. The IP00C705 is ideally suited for broadcast conversion products and high-end display systems.

Features
1. De-interlacing
   o Temporal motion detection
   o Enhanced diagonal interpolation
   o All cadences supported
2. Noise filtering
   o Block noise, mosquito noise and temporal noise filtering
   o Chroma error filter
3. Image scaling
   o 6x6 filter for image zoom
   o 12x12 filter for image shrink
   o Independent H and V scaling ratios
   o Non-linear H and V scaling (aspect ratio correction)
   o Dynamic scaling
4. Frame rate conversion
   o Independent clock, H and V sync for the input and output ports
   o Input/output frame synchronization
5. On-Screen Display
   o Bitmap OSD featuring 256 colors, 64x64-pixel fonts
6. Other features
   o Support for xvYCC processing
   o Horizontal and vertical edge enhancement circuits
   o Brightness and contrast adjustments
   o 14-bit color gamma correction tables (7 LUTs available)
   o Dithering for 12-, 10- or 8-bit output
   o Color management
   o Image flip (vertical, horizontal)
   o 90 deg. rotation
   o Edge blending
   o Vertical keystone
7. External memory
   o DDR-SDRAM PC400: (256 Mbit, x32) x 2 or (256/128 Mbit, x16) x 4
8. CPU interface
   o 8-bit parallel
   o Flash memory interface: SPI, 100 MHz, up to 128 Mbit
9. Power supply
   o 3.3V / 2.5V / 1.2V
10. Package
   o 508-pin plastic BGA (body size 27 mm, ball pitch 1.0 mm)

Input/Output
1. Input
   o 36-bit RGB / 36-bit YUV444 / 24-bit YUV422 at 166 MHz or 83 MHz in DDR mode
   o 12-bit YUV422 (BT.656) at 166 MHz
   o Progressive or interlaced formats
   o Up to 4096 pixels per line, with 2176 active pixels
   o External synchronization
2. Output
   o 36-bit RGB / 36-bit YUV444 / 24-bit YUV422 at 166 MHz or 83 MHz in DDR mode
   o 12-bit YUV422 (BT.656) at 166 MHz
   o Progressive or interlaced formats
   o Up to 4096 pixels per line, with 2176 active pixels
   o Internal/External synchronization

Image scaling
From Wikipedia, the free encyclopedia

An image scaled with nearest-neighbor scaling (left) and 2xSaI scaling (right).

In computer graphics, image scaling is the process of resizing a digital image. Scaling is a non-trivial process that involves a trade-off between efficiency, smoothness and sharpness. With bitmap graphics, as the size of an image is reduced or enlarged, the pixels that comprise the image become increasingly visible, making the image appear "soft" if pixels are averaged, or jagged if not. With vector graphics the trade-off may be in processing power for re-rendering the image, which may be noticeable as slow re-rendering with still graphics, or as a slower frame rate and frame skipping in computer animation.

Apart from fitting a smaller display area, image size is most commonly decreased (subsampled or downsampled) in order to produce thumbnails. Enlarging an image (upsampling or interpolating) is common, for example, for making smaller imagery fit a bigger screen in fullscreen mode. In zooming a bitmap image, it is not possible to discover any more information in the image than already exists, so image quality inevitably suffers. However, there are several methods of increasing the number of pixels that an image contains, which evens out the appearance of the original pixels.

Scaling methods
An image's size can be changed in several ways. Consider doubling the size of the following image:

Nearest-neighbor interpolation
One of the simpler ways of doubling its size is nearest-neighbor interpolation, replacing every pixel with four pixels of the same color:
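A minimal sketch of that operation in Python/NumPy (my own illustration; the article demonstrates it with example images instead) simply repeats every source pixel into a 2x2 block:

import numpy as np

def nearest_neighbor_2x(image: np.ndarray) -> np.ndarray:
    """Double an image's width and height by repeating each pixel
    into a 2x2 block (nearest-neighbor interpolation)."""
    # Repeat rows, then columns; works for grayscale (H, W) or color (H, W, C) arrays.
    return image.repeat(2, axis=0).repeat(2, axis=1)

# Tiny 2x2 grayscale example: each pixel becomes a 2x2 block of the same value.
small = np.array([[0, 255],
                  [255, 0]], dtype=np.uint8)
print(nearest_neighbor_2x(small))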

The resulting image is larger than the original, and preserves all the original detail, but has undesirable jaggedness. The diagonal lines of the W, for example, now show the characteristic "stairway" shape. Other scaling methods below are better at preserving smooth contours in the image:

Bilinear interpolation
For example, bilinear interpolation produces the following result:

Linear (or bilinear, in two dimensions) interpolation is typically good for changing the size of an image, but causes some undesirable softening of details and can still be somewhat jagged. Better scaling methods include bicubic interpolation (example below) and Lanczos resampling.

hqx
For magnifying computer graphics with low resolution and/or few colors (usually from 2 to 256 colors), better results will be achieved by hqx or other pixel art scaling algorithms. These produce sharp edges and maintain a high level of detail.

Supersampling
For scaling photos (and raster images with many colors), see also anti-aliasing algorithms called supersampling.

Vectorization

An entirely different approach is vector extraction or vectorization. Vectorization first creates a resolution-independent vector representation of the graphic to be scaled. Then the resolution-independent version is rendered as a raster image at the desired resolution. This technique is used by Adobe Live Trace, Inkscape, and several recent papers.[1]

Algorithms
Two standard scaling algorithms are bilinear and bicubic interpolation. Filters like these work by interpolating pixel color values, introducing a continuous transition into the output even where the original material has discrete transitions. Although this is desirable for continuous-tone images, some algorithms reduce contrast (sharp edges) in a way that may be undesirable for line art. Nearest-neighbor interpolation preserves these sharp edges, but it increases aliasing (or jaggies, where diagonal lines and curves appear pixelated). Several approaches have been developed that attempt to optimize for bitmap art by interpolating areas of continuous tone, preserving the sharpness of horizontal and vertical lines, and smoothing all other curves.

Pixel art scaling algorithms


As pixel art graphics are usually in very low resolutions, they rely on careful placing of individual pixels, often with a limited palette of colors. This results in graphics that rely on a high amount of stylized visual cues to define complex shapes with very little resolution, down to individual pixels. As such, a number of specialized algorithms have been developed to handle pixel art graphics, as traditional scaling algorithms do not take such perceptual cues into account.

Efficiency
Since a typical application of this technology is improving the appearance of fourth-generation and earlier video games on arcade and console emulators, many are designed to run in real time for sufficiently small input images at 60 frames per second. Many work only on specific scale factors: 2 is the most common, with 3 and 4 also present.

EPX/Scale2x/AdvMAME2x
Eric's Pixel Expansion (EPX) is an algorithm developed by Eric Johnston at LucasArts around 1992,[2] when porting the SCUMM engine games from the IBM PC (which ran at 320x200 with 256 colors) to the early color Macintosh computers, which ran at more or less double that resolution.[3] The algorithm works as follows:
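The step-by-step rule itself does not appear in the text above; the sketch below reconstructs EPX as it is commonly described, written in Python/NumPy for a single-channel image (the helper name and the grayscale-only simplification are my own), so treat it as an illustration rather than the canonical definition:

import numpy as np

def epx_scale2x(img: np.ndarray) -> np.ndarray:
    """Sketch of the EPX / Scale2x rule on a single-channel image.
    Each source pixel P becomes a 2x2 block (1=top-left, 2=top-right,
    3=bottom-left, 4=bottom-right). With A above, B right, C left, D below:
    1=A if C==A, 2=B if A==B, 3=C if D==C, 4=D if B==D, otherwise P;
    the whole block reverts to P when three or more of A, B, C, D match."""
    h, w = img.shape
    out = np.empty((2 * h, 2 * w), dtype=img.dtype)
    for y in range(h):
        for x in range(w):
            p = img[y, x]
            a = img[y - 1, x] if y > 0 else p      # above
            b = img[y, x + 1] if x < w - 1 else p  # right
            c = img[y, x - 1] if x > 0 else p      # left
            d = img[y + 1, x] if y < h - 1 else p  # below
            one = a if c == a else p
            two = b if a == b else p
            three = c if d == c else p
            four = d if b == d else p
            neighbors = [a, b, c, d]
            # If three or more neighbors are identical, keep the original pixel.
            if neighbors.count(a) >= 3 or neighbors.count(b) >= 3:
                one = two = three = four = p
            out[2 * y, 2 * x], out[2 * y, 2 * x + 1] = one, two
            out[2 * y + 1, 2 * x], out[2 * y + 1, 2 * x + 1] = three, four
    return out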

Image Size
You should keep in mind that an image can be located in one of four places: in the image file, in RAM after it has been loaded, on your screen when it is displayed, or on paper after it has been printed. Scaling the image changes the number of pixels (the amount of information) the image contains, so it directly affects the amount of memory the image needs (in RAM or in a file).

However, printing size also depends upon the resolution of the image, which essentially determines how many pixels there will be on each inch of paper. If you want to change the printing size without scaling the image and changing the number of pixels in it, you should use the Print Size dialog. The screen size depends not only on the number of pixels, but also on the screen resolution, the zoom factor and the setting of the Dot for Dot option.

If you enlarge an image beyond its original size, GIMP calculates the missing pixels by interpolation, but it does not add any new detail. The more you enlarge an image, the more blurred it becomes. The appearance of an enlarged image depends upon the interpolation method you choose. You may improve the appearance by using the Sharpen filter after you have scaled an image, but it is best to use high resolution when you scan, take digital photographs or produce digital images by other means. Raster images inherently do not scale up well.

You may need to reduce your image if you intend to use it on a web page. You have to consider that most internet users have relatively small screens which cannot completely display a large image. Many screens have a resolution of 1024x768 or even less. Adding or removing pixels is called resampling.

Width; Height
When you click on the Scale command, the dialog displays the dimensions of the original image in pixels. You can set the Width and the Height you want to give to your image by adding or removing pixels. If the chain icon next to the Width and Height boxes is unbroken, the Width and Height will stay in the same proportion to each other. If you break the chain by clicking on it, you can set them independently, but this will distort the image. However, you do not have to set the dimensions in pixels. You can choose different units from the drop-down menu. If you choose percent as the units, you can set the image size relative to its original size. You can also use physical units, such as inches or millimeters. If you do that, you should set the X resolution and Y resolution fields to appropriate values, because they are used to convert between physical units and image dimensions in pixels.

X resolution; Y resolution
You can set the printing resolution for the image in the X resolution and Y resolution fields. You can also change the units of measurement by using the drop-down menu.

Quality
To change the image size, either some pixels have to be removed or new pixels must be added. The process you use determines the quality of the result. The Interpolation drop-down list provides a selection of available methods of interpolating the color of pixels in a scaled image:

None: No interpolation is used. Pixels are simply enlarged or removed, as they are when zooming. This method is low quality, but very fast.

Linear: This method is relatively fast, but still provides fairly good results.

Cubic: The method that produces the best results, but also the slowest method.

Sinc (Lanczos 3): New with GIMP 2.4, this method gives less blur in important resizings.
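GIMP exposes these choices through its Scale Image dialog. As a rough stand-in for experimenting with the same trade-offs outside GIMP, the following sketch uses the Pillow library (my own choice, not part of GIMP) and a hypothetical input file; Pillow's resampling filters correspond loosely to GIMP's None / Linear / Cubic / Sinc options:

from PIL import Image

# Resampling filters roughly corresponding to GIMP's None / Linear / Cubic / Sinc (Lanczos 3).
filters = {
    "none": Image.NEAREST,
    "linear": Image.BILINEAR,
    "cubic": Image.BICUBIC,
    "sinc": Image.LANCZOS,
}

img = Image.open("photo.jpg")              # hypothetical input file
target = (img.width * 2, img.height * 2)   # enlarge to 200%

for name, resample in filters.items():
    scaled = img.resize(target, resample=resample)
    scaled.save(f"photo_{name}.png")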

DIGITAL IMAGE INTERPOLATION


Image interpolation occurs in all digital photos at some stage, whether in Bayer demosaicing or in photo enlargement. It happens anytime you resize or remap (distort) your image from one pixel grid to another. Image resizing is necessary when you need to increase or decrease the total number of pixels, whereas remapping can occur under a wider variety of scenarios: correcting for lens distortion, changing perspective, and rotating an image.
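To make the resize/remap distinction concrete, the sketch below (using SciPy, which is my choice here rather than anything the tutorial prescribes) performs a simple remap: every output pixel looks up an interpolated value at a transformed coordinate in the source image, which is what lens-distortion correction, perspective changes and rotation all do under the hood:

import numpy as np
from scipy.ndimage import map_coordinates

def rotate_by_remap(img: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate a grayscale image about its center by remapping:
    each output pixel samples the source at the inverse-rotated
    coordinate using bilinear interpolation (order=1)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Inverse mapping: where in the source does each output pixel come from?
    ys = cy + (yy - cy) * np.cos(theta) - (xx - cx) * np.sin(theta)
    xs = cx + (yy - cy) * np.sin(theta) + (xx - cx) * np.cos(theta)
    return map_coordinates(img, [ys, xs], order=1, mode="constant", cval=0.0)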

Even if the same image resize or remap is performed, the results can vary significantly depending on the interpolation algorithm. Interpolation is only an approximation, so an image will always lose some quality each time it is performed. This tutorial aims to provide a better understanding of how the results may vary, helping you to minimize any interpolation-induced losses in image quality.

CONCEPT
Interpolation works by using known data to estimate values at unknown points. For example: if you wanted to know the temperature at noon, but only measured it at 11AM and 1PM, you could estimate its value by performing a linear interpolation:

If you had an additional measurement at 11:30AM, you could see that the bulk of the temperature rise occurred before noon, and could use this additional data point to perform a quadratic interpolation:
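As a quick numerical illustration of those two estimates (the temperature readings below are made up, since the tutorial does not give specific values):

import numpy as np

# Hypothetical measurements: 11:00 and 13:00 for the linear estimate,
# plus 11:30 for the quadratic one. Times in hours, temperatures in deg C.
times_2 = [11.0, 13.0]
temps_2 = [20.0, 26.0]
# Linear interpolation at noon: halfway between the two readings.
linear_noon = np.interp(12.0, times_2, temps_2)        # -> 23.0

times_3 = [11.0, 11.5, 13.0]
temps_3 = [20.0, 23.0, 26.0]
# Quadratic interpolation: fit a degree-2 polynomial through all three points.
coeffs = np.polyfit(times_3, temps_3, deg=2)
quadratic_noon = np.polyval(coeffs, 12.0)              # -> 25.0

print(f"linear estimate at noon:    {linear_noon:.1f} C")
print(f"quadratic estimate at noon: {quadratic_noon:.1f} C")

Because the 11:30 reading shows most of the rise happening before noon, the quadratic fit yields a higher noon estimate than the straight line between the two outer points.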

The more temperature measurements you have which are close to noon, the more sophisticated (and hopefully more accurate) your interpolation algorithm can be.

IMAGE RESIZE EXAMPLE


Image interpolation works in two directions, and tries to achieve a best approximation of a pixel's color and intensity based on the values at surrounding pixels. The following example illustrates how resizing / enlargement works:

[Figure: 2D interpolation example, original image enlarged to 183%, shown before and after, with and without interpolation]

Unlike air temperature fluctuations and the ideal gradient above, pixel values can change far more abruptly from one location to the next. As with the temperature example, the more you know about the surrounding pixels, the better the interpolation will become. Therefore results quickly deteriorate the more you stretch an image, and interpolation can never add detail to your image which is not already present.

IMAGE ROTATION EXAMPLE


Interpolation also occurs each time you rotate or distort an image. The previous example was misleading because it is one which interpolators are particularly good at. This next example shows how image detail can be lost quite rapidly:

[Figure: image degradation with rotation, showing the original, a 45° rotation, a 90° rotation (lossless), 2 x 45° rotations and 6 x 15° rotations]

The 90° rotation is lossless because no pixel ever has to be repositioned onto the border between two pixels (and therefore divided). Note how most of the detail is lost in just the first rotation, although the image continues to deteriorate with successive rotations. You should therefore avoid rotating your photos when possible; if an unleveled photo requires it, rotate no more than once. The above results use what is called a "bicubic" algorithm and show significant deterioration. Note the overall decrease in contrast, evident in colors becoming less intense, and how dark haloes are created around the light blue. The above results could be improved significantly, depending on the interpolation algorithm and subject matter.
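To reproduce this kind of degradation yourself, here is a sketch using the Pillow library (my choice, with a hypothetical input file) that compares a single lossless 90° rotation against six cumulative 15° rotations:

from PIL import Image

img = Image.open("photo.jpg")  # hypothetical input file

# A single 90 degree rotation is lossless: pixels map exactly onto the new grid.
rot90 = img.rotate(90, expand=True)

# Six cumulative 15 degree rotations re-interpolate the image every time,
# so detail and contrast degrade with each step.
rot_6x15 = img
for _ in range(6):
    rot_6x15 = rot_6x15.rotate(15, resample=Image.BICUBIC, expand=True)

rot90.save("rot_90.png")
rot_6x15.save("rot_6x15.png")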

TYPES OF INTERPOLATION ALGORITHMS


Common interpolation algorithms can be grouped into two categories: adaptive and non-adaptive. Adaptive methods change depending on what they are interpolating (sharp edges vs. smooth texture), whereas non-adaptive methods treat all pixels equally. Non-adaptive algorithms include: nearest neighbor, bilinear, bicubic, spline, sinc, Lanczos and others. Depending on their complexity, these use anywhere from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent pixels they include, the more accurate they can become, but this comes at the expense of much longer processing time. These algorithms can be used to both distort and resize a photo.
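One way to see this trade-off between neighborhood size and cost is with SciPy's spline-based zoom (an assumption on my part, not something the tutorial uses), where the interpolation order controls how many surrounding pixels influence each output value:

import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # synthetic grayscale image for illustration

# order=0 is nearest neighbor, order=1 is (bi)linear, order=3 is cubic spline,
# order=5 uses an even larger neighborhood; quality and runtime rise together.
for order in (0, 1, 3, 5):
    enlarged = zoom(img, 2.5, order=order)  # 250% enlargement
    print(order, enlarged.shape)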

[Figure: original vs. 250% enlargement]

Adaptive algorithms include many proprietary algorithms in licensed software such as Qimage, PhotoZoom Pro, Genuine Fractals and others. Many of these apply a different version of their algorithm (on a pixel-by-pixel basis) when they detect the presence of an edge, aiming to minimize unsightly interpolation artifacts in regions where they are most apparent. These algorithms are primarily designed to maximize artifact-free detail in enlarged photos, so some cannot be used to distort or rotate an image.

NEAREST NEIGHBOR INTERPOLATION


Nearest neighbor is the most basic and requires the least processing time of all the interpolation algorithms because it only considers one pixel: the closest one to the interpolated point. This has the effect of simply making each pixel bigger.

BILINEAR INTERPOLATION

Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these 4 pixels to arrive at its final interpolated value. This results in much smoother-looking images than nearest neighbor. In the simplest case, when the unknown pixel is equidistant from all four known pixels, the interpolated value is simply their sum divided by four.
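A minimal sketch of that weighted average in plain Python/NumPy (my own illustration, not code from the tutorial): the weights are simply how close the sample point is to each of the four known pixels in x and y:

import numpy as np

def bilinear_sample(img: np.ndarray, y: float, x: float) -> float:
    """Bilinearly interpolate a grayscale image at fractional coordinates (y, x).
    Assumes the full 2x2 neighborhood lies inside the image."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    dy, dx = y - y0, x - x0
    # Weighted average of the 2x2 neighborhood; the four weights sum to 1.
    return (img[y0, x0] * (1 - dy) * (1 - dx) +
            img[y0, x1] * (1 - dy) * dx +
            img[y1, x0] * dy * (1 - dx) +
            img[y1, x1] * dy * dx)

# When the point is exactly in the middle of the four pixels (dy = dx = 0.5),
# every weight is 0.25 and the result is simply their average.
patch = np.array([[10.0, 20.0],
                  [30.0, 40.0]])
print(bilinear_sample(patch, 0.5, 0.5))  # -> 25.0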

BICUBIC INTERPOLATION

Bicubic goes one step beyond bilinear by considering the closest 4x4 neighborhood of known pixels, for a total of 16 pixels. Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation. Bicubic produces noticeably sharper images than the previous two methods, and is perhaps the ideal combination of processing time and output quality. For this reason it is a standard in many image editing programs (including Adobe Photoshop), printer drivers and in-camera interpolation.

HIGHER ORDER INTERPOLATION: SPLINE & SINC


There are many other interpolators which take more surrounding pixels into consideration, and are thus also much more computationally intensive. These algorithms include spline and sinc, and retain the most image information after an interpolation. They are therefore extremely useful when the image requires multiple rotations / distortions in separate steps. However, for single-step enlargements or rotations, these higher-order algorithms provide diminishing visual improvement as processing time is increased.

INTERPOLATION ARTIFACTS TO WATCH OUT FOR


All non-adaptive interpolators attempt to find an optimal balance between three undesirable artifacts: edge halos, blurring and aliasing.

[Figure: original vs. 400% enlargement crops illustrating the three artifact types: aliasing, blurring and edge halos]

Even the most advanced non-adaptive interpolators always have to increase or decrease one of the above artifacts at the expense of the other two; therefore at least one will be visible. Also note how the edge halo is similar to the artifact produced by over-sharpening with an unsharp mask, and improves the appearance of sharpness by increasing acutance. Adaptive interpolators may or may not produce the above artifacts, but they can also induce non-image textures or strange pixels at small scales:

[Figure: crop of an adaptive interpolation result, enlarged 220%]

On the other hand, some of these "artifacts" from adaptive interpolators may also be seen as benefits. Since the eye expects to see detail down to the smallest scales in fine-textured areas such as foliage, these patterns have been argued to trick the eye from a distance (for some subject matter).

ANTI-ALIASING
Anti-aliasing is a process which attempts to minimize the appearance of aliased or jagged diagonal edges, termed "jaggies." These give text or images a rough digital appearance:

[Figure: text shown at 300%, with and without anti-aliasing]

Anti-aliasing removes these jaggies and gives the appearance of smoother edges and higher resolution. It works by taking into account how much an ideal edge overlaps adjacent pixels. The aliased edge simply rounds up or down with no intermediate value, whereas the anti-aliased edge gives a value proportional to how much of the edge was within each pixel:

[Figure: an ideal edge resampled to a low-resolution grid, shown aliased vs. anti-aliased]
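One simple way to obtain those coverage-proportional values is to sample the ideal edge on a much finer grid and then average blocks of samples down to the target grid. The sketch below does this for a diagonal edge (a toy construction of my own, not taken from the tutorial):

import numpy as np

def rendered_edge(size: int = 8, oversample: int = 8) -> np.ndarray:
    """Render a diagonal ideal edge on a fine grid, then box-average it
    down to a coarse size x size grid, so each output pixel's value is
    proportional to how much of the edge region covers it."""
    n = size * oversample
    yy, xx = np.mgrid[0:n, 0:n]
    fine = (xx > yy).astype(float)  # ideal edge: 1 above the diagonal, 0 below
    # Average each oversample x oversample block into one output pixel.
    return fine.reshape(size, oversample, size, oversample).mean(axis=(1, 3))

aliased = rendered_edge(oversample=1)   # hard 0/1 values only ("jaggies")
smooth = rendered_edge(oversample=8)    # intermediate values along the edge
print(np.round(smooth, 2))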

A major obstacle when enlarging an image is preventing the interpolator from inducing or exacerbating aliasing. Many adaptive interpolators detect the presence of edges and adjust to minimize aliasing while still retaining edge sharpness. Since an anti-aliased edge contains information about that edge's location at higher resolutions, it is also conceivable that a powerful adaptive (edge-detecting) interpolator could at least partially reconstruct this edge when enlarging.

NOTE ON OPTICAL vs. DIGITAL ZOOM


Many compact digital cameras can perform both an optical and a digital zoom. A camera performs an optical zoom by moving the zoom lens so that it increases the magnification of light before it even reaches the digital sensor. In contrast, a digital zoom degrades quality by simply interpolating the image after it has been acquired at the sensor.

[Figure: the same scene captured with 10X optical zoom vs. 10X digital zoom]

Even though the photo with digital zoom contains the same number of pixels, its detail is clearly far less than with optical zoom. Digital zoom should be almost entirely avoided, unless it helps to visualize a distant object on your camera's LCD preview screen. Alternatively, if you regularly shoot in JPEG and plan on cropping and enlarging the photo afterwards, digital zoom at least has the benefit of performing the interpolation before any compression artifacts set in. If you find yourself needing digital zoom frequently, purchase a teleconverter add-on, or better yet: a lens with a longer focal length.
