
Week 1:

If planning on using Matlab (recommended), watch the tutorial videos provided in the corresponding section, and run help images at the Matlab command line for examples of important image-related commands.
Write a computer program capable of reducing the number of intensity levels in an image from 256 to 2, in integer powers of 2. The desired number of intensity levels needs to be a variable input to your program.
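
A minimal sketch (cameraman.tif ships with the Image Processing Toolbox and is just an example; levels is the variable input and is assumed to be a power of 2):

I = imread('cameraman.tif');        % any 8-bit grayscale image
levels = 4;                         % the variable input; a power of 2 in [2, 256]
step = 256 / levels;                % width of each quantization bin
J = uint8(floor(double(I) / step) * step);
figure, imshow(J), title(sprintf('%d intensity levels', levels));
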
Using any programming language you feel comfortable with (though the provided free Matlab is recommended), load an image and then perform a simple spatial 3x3 average of image pixels. In other words, replace the value of every pixel by the average of the values in its 3x3 neighborhood. If the pixel is located at (0,0), this means averaging the values of the pixels at the positions (-1,1), (0,1), (1,1), (-1,0), (0,0), (1,0), (-1,-1), (0,-1), and (1,-1). Be careful with pixels at the image boundaries. Repeat the process for a 10x10 neighborhood and again for a 20x20 neighborhood. Observe what happens to the image (we will discuss this in more detail in the very near future, around week 3).
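
For example, using imfilter with an averaging kernel (the 'replicate' option is one possible way to handle the boundary pixels; other boundary choices are equally valid):

I = im2double(imread('cameraman.tif'));
for n = [3 10 20]
    J = imfilter(I, ones(n) / n^2, 'replicate');   % 'replicate' handles the boundary
    figure, imshow(J), title(sprintf('%dx%d average', n, n));
end
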
Rotate the image by 45 and 90 degrees (Matlab provides simple command lines for doing this).
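
For instance, with imrotate:

I = imread('cameraman.tif');
figure, imshow(imrotate(I, 90));    % multiples of 90 just rearrange pixels
figure, imshow(imrotate(I, 45));    % 45 degrees enlarges the canvas and interpolates
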
For every 3x3 block of the image (without overlapping), replace all corresponding 9 pixels by their average. This operation simulates reducing the image spatial resolution. Repeat this for 5x5 blocks and 7x7 blocks. If you are using Matlab, investigate simple command lines to do this important operation.
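
One possible sketch using blockproc (the anonymous function replaces every pixel of a block by the block mean; ones(size(b.data)) also covers the partial blocks at the image border):

I = im2double(imread('cameraman.tif'));
for n = [3 5 7]
    blockmean = @(b) mean(b.data(:)) * ones(size(b.data));  % constant block
    J = blockproc(I, [n n], blockmean);
    figure, imshow(J), title(sprintf('%dx%d blocks', n, n));
end
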
Week 2:

Divide the image into non-overlapping 8x8 blocks.

Compute the DCT (discrete cosine transform) of each block. This is implemented in popular packages such as Matlab.
Quantize each block. You can do this using the tables in the video, or simply divide each coefficient by N, round the result to the nearest integer, and multiply back by N. Try different values of N. You can also try preserving the 8 largest coefficients (out of the total of 8x8=64) and simply rounding them to the closest integer.
Visualize the results after you invert the quantization and the DCT.
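
A sketch of the whole pipeline for one value of N (N = 0.05 is an example step for a [0,1] double image; the image dimensions are assumed to be multiples of 8, as is the case for cameraman.tif):

I = im2double(imread('cameraman.tif'));       % 256x256, so 8x8 blocks tile exactly
N = 0.05;                                     % example quantization step; vary it
quant = @(b) round(dct2(b.data) / N) * N;     % per-block DCT plus uniform quantizer
C = blockproc(I, [8 8], quant);               % quantized DCT coefficients
J = blockproc(C, [8 8], @(b) idct2(b.data));  % invert the DCT block by block
figure, imshowpair(I, J, 'montage'), title(sprintf('N = %g', N));
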

Repeat the above, but instead of using the DCT, use the FFT (Fast Fourier Transform).
Repeat the above JPEG-type compression but don't use any transform; simply perform quantization on the original image.


Do JPEG now for color images. In Matlab, use the rgb2ycbcr command to convert the Red-Green-Blue image to a Luma and Chroma one; then perform the JPEG-style compression on each one of the three channels independently. After inverting the compression, invert the color transform and visualize the result. While keeping the compression ratio constant for the Y channel, increase the compression of the two chrominance channels and observe the results.
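
A sketch, with example quantization steps chosen gentler on Y and harsher on the chroma channels (peppers.png ships with Matlab and has dimensions divisible by 8):

RGB = imread('peppers.png');                  % dimensions divisible by 8
YCC = rgb2ycbcr(RGB);
N = [8 30 30];                                % example steps: gentle on Y, harsh on chroma
out = zeros(size(YCC), 'uint8');
for c = 1:3
    ch = double(YCC(:,:,c)) - 128;            % center the channel around zero
    q = @(b) round(dct2(b.data) / N(c)) * N(c);
    rec = blockproc(blockproc(ch, [8 8], q), [8 8], @(b) idct2(b.data));
    out(:,:,c) = uint8(rec + 128);
end
figure, imshow(ycbcr2rgb(out));
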
Compute the histogram of a given image and of its prediction errors. If the pixel being processed is at coordinate (0,0), consider the following predictors (a code sketch covering these and the entropy exercise below follows them):
predicting based on just the pixel at (-1,0);

predicting based on just the pixel at (0,1);

predicting based on the average of the pixels at (-1,0), (-1,1), and (0,1).
Compute the entropy for each one of the predictors in the previous exercise. Which predictor will compress better?
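
A combined sketch for the last two exercises, assuming (-1,0) denotes the left neighbor and (0,1) the neighbor above (adapt the indexing if you use the opposite convention; cameraman.tif is just an example):

I = double(imread('cameraman.tif'));
X  = I(2:end, 2:end);                 % pixels whose three neighbors all exist
L  = I(2:end, 1:end-1);               % left neighbor, taken here as (-1,0)
U  = I(1:end-1, 2:end);               % upper neighbor, taken here as (0,1)
UL = I(1:end-1, 1:end-1);             % upper-left neighbor, (-1,1)
errs  = {X - L, X - U, X - round((L + U + UL) / 3)};
names = {'left', 'up', 'average of the three'};
for k = 1:3
    e = errs{k}(:);
    p = histcounts(e, -255.5:255.5, 'Normalization', 'probability');
    p = p(p > 0);
    fprintf('%s predictor: entropy %.2f bits/pixel\n', names{k}, -sum(p .* log2(p)));
    figure, histogram(e, 101), title([names{k} ' prediction error']);
end

The predictor with the lowest entropy is the one an entropy coder would compress best.
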


Week 3:
(Optional programming exercises)

Implement a histogram equalization function. If using Matlab, compare your implementation with Matlab's built-in function.
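
A direct implementation via the cumulative histogram (pout.tif is a low-contrast example image shipped with the toolbox):

I = imread('pout.tif');                          % low-contrast example image
cdf = cumsum(imhist(I)) / numel(I);              % normalized cumulative histogram
lut = uint8(round(255 * cdf));                   % intensity mapping (lookup table)
J = lut(double(I) + 1);                          % apply the mapping pixel-wise
figure, imshowpair(J, histeq(I), 'montage');     % mine vs. Matlab's histeq

Note that histeq targets a flat 64-bin histogram by default, so small differences from this direct mapping are expected.
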

Implement a median filter. Add different levels and types of noise to an image and experiment with different sizes of support for the median filter. As before, compare your implementation with Matlab's.
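
One transparent (but slow) way to build your own version is nlfilter with a median over each window; medfilt2 is the built-in to compare against. The noise level and support size are example values:

I = imread('cameraman.tif');
J = imnoise(I, 'salt & pepper', 0.05);                  % 5% salt-and-pepper noise
n = 3;                                                  % support size; also try 5, 7, ...
mine  = nlfilter(double(J), [n n], @(b) median(b(:)));  % transparent but slow
built = medfilt2(J, [n n]);                             % Matlab's implementation
figure, imshowpair(uint8(mine), built, 'montage');
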

Implement the non-local means algorithm. Try different window sizes. Add different levels of noise and observe their influence on the need for larger or smaller neighborhoods. (Such block operations are easy when using Matlab; see for example the function at http://www.mathworks.com/help/images/ref/blockproc.html). Compare your results with those available in IPOL as demonstrated in the video lectures.
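
A direct, very slow sketch of the basic algorithm on a small crop (f, t, and h are example parameters; real implementations vectorize this heavily, so expect this version to take a while even on 80x80 pixels):

I = im2double(imread('cameraman.tif'));
I = I(81:160, 81:160);                  % small crop to keep the runtime sane
J = imnoise(I, 'gaussian', 0, 0.005);
f = 3;                                  % patch "radius": patches are 7x7
t = 7;                                  % search-window radius
h = 0.1;                                % filtering (decay) parameter
[m, n] = size(J);
P = padarray(J, [f f], 'symmetric');    % pad so every pixel has a full patch
out = zeros(m, n);
for i = 1:m
    for j = 1:n
        p1 = P(i:i+2*f, j:j+2*f);       % patch centered at (i,j)
        wsum = 0; acc = 0;
        for ii = max(i-t,1):min(i+t,m)
            for jj = max(j-t,1):min(j+t,n)
                p2 = P(ii:ii+2*f, jj:jj+2*f);
                d2 = mean((p1(:) - p2(:)).^2);   % patch distance
                w = exp(-d2 / h^2);              % similarity weight
                wsum = wsum + w;
                acc = acc + w * J(ii, jj);
            end
        end
        out(i, j) = acc / wsum;
    end
end
figure, imshowpair(J, out, 'montage');
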

Consider an image and add to it random noise. Repeat this N times, for different
values of N, and add the resulting images. What do you observe?
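
For instance, displaying the normalized sum (the average) so the result stays in the display range; the noise variance is an example value:

I = im2double(imread('cameraman.tif'));
for N = [2 10 100]
    acc = zeros(size(I));
    for k = 1:N
        acc = acc + imnoise(I, 'gaussian', 0, 0.01);  % fresh noise every time
    end
    figure, imshow(acc / N), title(sprintf('N = %d noisy copies', N));
end
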

Implement the basic color edge detector. What happens when the 3 channels are
equal?
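
One simple version (an assumption about what "basic" means here) combines per-channel Sobel gradients by accumulating their squared magnitudes:

I = im2double(imread('peppers.png'));
G2 = zeros(size(I, 1), size(I, 2));
for c = 1:3
    [gx, gy] = imgradientxy(I(:,:,c));     % Sobel gradients of one channel
    G2 = G2 + gx.^2 + gy.^2;               % accumulate squared magnitudes
end
E = sqrt(G2);
figure, imshow(E / max(E(:)));

When the three channels are equal this reduces to sqrt(3) times the grayscale gradient magnitude, so the same edges are detected.
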

Take a video and do frame-by-frame histogram equalization and run the resulting
video. Now consider a group of frames as a large image and do histogram equalization
for all of them at once. What looks better? See this example on how to read and handle
videos in Matlab:

xyloObj = VideoReader('xylophone.mp4');
nFrames = xyloObj.NumberOfFrames;
vidHeight = xyloObj.Height;
vidWidth = xyloObj.Width;

% Preallocate movie structure.
mov(1:nFrames) = struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'), ...
    'colormap', []);

% Read one frame at a time.
for k = 1 : nFrames
    im = read(xyloObj, k);
    % Process the frame im here (e.g., per-frame histogram equalization).
    mov(k).cdata = im;
end

% Size a figure based on the video's width and height.
hf = figure;
set(hf, 'position', [150 150 vidWidth vidHeight])

% Play back the movie once at the video's frame rate.
movie(hf, mov, 1, xyloObj.FrameRate);

Take a video and do frame-by-frame non-local means denoising. Repeat, but now using a group of frames as a large image. This allows you, for example, to find more matching blocks (since you are searching across frames). Compare the results. What happens if you now use 3D spatio-temporal blocks, e.g., 5x5x3 blocks, and consider the group of frames as a 3D image? Try this and compare with the previous results.

Search for "camouflage artist Liu Bolin". Do you think you can use the tools you are learning to detect him?

Week 4:
(Optional programming exercises)

Add Gaussian and salt-and-pepper noise with different parameters to an image of your choice. Evaluate what levels of noise you consider still acceptable for visual inspection of the image.
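
For example (the noise parameters are just starting points):

I = imread('cameraman.tif');
G = imnoise(I, 'gaussian', 0, 0.01);         % zero-mean Gaussian, variance 0.01
S = imnoise(I, 'salt & pepper', 0.1);        % 10% of the pixels corrupted
figure, imshowpair(G, S, 'montage');
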

Apply a median filter to the images you obtained above. Change the window size of the filter and evaluate its relationship with the noise levels.

Practice with Wiener filtering. Consider for example a Gaussian blur (so you know the H function exactly) and play with different values of K for different types and levels of noise.
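
A sketch using deconvwnr, where the scalar K plays the role of the noise-to-signal ratio (the kernel size, sigma, noise level, and K values are example choices; the 'circular' boundary option matches the circular convolution deconvwnr assumes):

I = im2double(imread('cameraman.tif'));
H = fspecial('gaussian', 15, 2);             % the known blur kernel (the H function)
B = imfilter(I, H, 'circular');              % 'circular' matches deconvwnr's model
B = imnoise(B, 'gaussian', 0, 1e-4);
for K = [1e-4 1e-3 1e-2]                     % K approximates the noise-to-signal ratio
    figure, imshow(deconvwnr(B, H, K)), title(sprintf('Wiener, K = %g', K));
end
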

Compare the results of non-local-means from the previous week (use for example
the implementation in www.ipol.im) with those of Wiener filtering.

Blur an image by applying local averaging (select different block sizes, and use both overlapping and non-overlapping blocks). Apply non-local means to it. Observe whether it helps to make the image better. Could you design a restoration algorithm, for blurry images, that uses the same concepts as non-local means?

Make multiple (N) copies of the same image (e.g., N=10). To each copy, apply a
random rotation and add some random Gaussian noise (you can test different noise
levels). Using a registration function like imregister in Matlab, register the N images
back (use the first image as reference, so register the other N-1 to it), and then average
them. Observe if you manage to estimate the correct rotation angles and if you manage
to reduce the noise. Note: Registration means that you are aligning the images again,
see for example http://www.mathworks.com/help/images/ref/imregister.html or
http://en.wikipedia.org/wiki/Image_registration
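
A sketch (the rotation range and noise level are example choices; imregconfig('monomodal') is appropriate since all copies come from the same image):

fixed = im2double(imread('cameraman.tif'));
N = 10;                                       % number of copies
acc = fixed;                                  % the first copy is the reference
[opt, metric] = imregconfig('monomodal');     % same-modality configuration
for k = 2:N
    ang = 20 * (rand - 0.5);                  % random angle in [-10, 10] degrees
    moving = imrotate(fixed, ang, 'bilinear', 'crop');
    moving = imnoise(moving, 'gaussian', 0, 0.005);
    acc = acc + imregister(moving, fixed, 'rigid', opt, metric);
end
figure, imshow(acc / N), title('average of the registered copies');
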

Apply JPEG compression to an image, with high levels of compression such that the artifacts are noticeable. Can you apply any of the techniques learned so far to enhance the image, for example, to reduce the artifacts or the blocking effects? Try as many techniques as you can and have time for.

Apply an image predictor such as those we learned in Week 2. Plot the histogram of the prediction error. Try to fit a function to it to learn what type of distribution best fits the prediction error.

Week 5:

Implement the Hough transform to detect circles.

Implement the Hough transform to detect ellipses.

Implement the Hough transform to detect straight lines and circles in the same
image.
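
For the straight-line part, Matlab already provides hough, houghpeaks, and houghlines to compare against. For circles, here is a sketch of the accumulator for a single assumed radius (r = 28 is a guess for the coins in coins.png; adjust it, add a third accumulator dimension to search over radii, and extend to five parameters for ellipses):

I = imread('coins.png');
E = edge(I, 'canny');
r = 28;                                       % guessed radius; adjust for your image
[m, n] = size(E);
acc = zeros(m, n);                            % votes for candidate circle centers
[ey, ex] = find(E);
theta = linspace(0, 2*pi, 100);
for k = 1:numel(ex)
    a = round(ex(k) - r * cos(theta));        % every edge pixel votes for all
    b = round(ey(k) - r * sin(theta));        % centers at distance r from it
    ok = a >= 1 & a <= n & b >= 1 & b <= m;
    acc = acc + accumarray([b(ok)' a(ok)'], 1, [m n]);
end
[~, idx] = max(acc(:));
[cy, cx] = ind2sub([m n], idx);               % strongest candidate center
figure, imshow(I), viscircles([cx cy], r);
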

Consider an image with 2 objects and a total of 3 pixel values (1 for each object and one for the background). Add Gaussian noise to the image. Implement and test Otsu's algorithm with this image.
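
A sketch: build the synthetic image, then pick the threshold that maximizes the between-class variance and compare with graythresh (the gray values and noise variance are example choices):

I = uint8(80 * ones(128));                    % background
I(20:60, 20:60) = 160;                        % first object
I(70:110, 70:110) = 240;                      % second object
J = imnoise(I, 'gaussian', 0, 0.005);

% Otsu: choose the threshold maximizing the between-class variance.
p = imhist(J) / numel(J);                     % normalized histogram
w = cumsum(p); mu = cumsum(p .* (0:255)');    % class probability and first moment
sigma2 = (mu(end) * w - mu).^2 ./ (w .* (1 - w) + eps);
[~, t] = max(sigma2);
fprintf('my threshold: %d, graythresh: %d\n', t - 1, round(255 * graythresh(J)));
figure, imshow(J > t - 1);

Since one threshold separates only two classes, for the three-level image you can also try multithresh(J, 2), which runs Otsu with two thresholds.
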

Implement a region growing technique for image segmentation. The basic idea is to
start from a set of points inside the object of interest (foreground), denoted as seeds,
and recursively add neighboring pixels as long as they are in a pre-defined range of the
pixel values of the seeds.
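
A stack-based sketch with a single seed and a fixed tolerance (the seed coordinates and tolerance are arbitrary example values; pick a seed inside the object of interest):

I = im2double(imread('coins.png'));
seed = [60 100];                              % (row, col) assumed inside an object
tol  = 0.2;                                   % accepted deviation from the seed value
target = I(seed(1), seed(2));
mask = false(size(I));
stack = seed;                                 % pixels waiting to be examined
while ~isempty(stack)
    p = stack(end, :); stack(end, :) = [];
    r = p(1); c = p(2);
    if r < 1 || r > size(I, 1) || c < 1 || c > size(I, 2), continue; end
    if mask(r, c) || abs(I(r, c) - target) > tol, continue; end
    mask(r, c) = true;                        % accept, then push the 4-neighbors
    stack = [stack; r-1 c; r+1 c; r c-1; r c+1];
end
figure, imshowpair(I, mask, 'montage');
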

Implement region growing from multiple seeds and with a functional like Mumford-Shah. In other words, start from multiple points (e.g., 5) randomly located in the image. Grow the regions, considering a penalty that takes into account the average gray value of the region as it grows (and the error it produces) as well as the new length of the region's boundary as it grows. Consider growing always from the region that is most convenient.
