Summary of Results
As depicted in Diagram 1, CompareBrightness() found that SKY-16 (score: 82.41%) was most
similar to the target image. CompareLab() and CompareEdges() found SKY-23 (score: 80.32%)
and SKY-3 (score: 30.77%), respectively, to be most similar to the target image.
ResultsTable() was used to generate results for the speed comparisons. It ran each image
comparison 10 times at each of 40 different image sizes and recorded the speeds and results (see
AllResults.xls). The image sizes ranged from 20x13 pixels to 800x532 pixels. Chart 1 depicts the
average time taken by each technique against the number of pixels. All techniques take longer to
complete as the number of pixels increases, but the increase for CompareLab() and
CompareEdges() is much steeper than for CompareBrightness(), likely because of the greater
number of computations they require.
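The timing procedure described above can be sketched as a small harness. This is an illustrative reconstruction, not the report's actual ResultsTable() code: the compare_brightness() stub and the flat-list image representation are assumptions made only so the example is runnable.

```python
# Hypothetical sketch of a timing harness like ResultsTable(): run a
# comparison technique several times and record the average runtime.
# compare_brightness() here is a stand-in stub, not the real implementation.
import time

def compare_brightness(a, b):
    # Stub: similarity as 1 minus the normalized mean absolute
    # difference of per-pixel brightness values (0-255 scale).
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / (255 * len(a))

def time_technique(func, a, b, runs=10):
    """Average wall-clock time of func(a, b) over `runs` repetitions."""
    start = time.perf_counter()
    for _ in range(runs):
        func(a, b)
    return (time.perf_counter() - start) / runs

# Example: two tiny grayscale "images" as flat pixel lists.
img_a = [100, 120, 140, 160]
img_b = [110, 115, 150, 155]
avg_time = time_technique(compare_brightness, img_a, img_b, runs=10)
```

Repeating this for each technique at each of the 40 image sizes would reproduce the structure of the recorded results.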
In terms of consistency, CompareBrightness() and CompareLab() identified the same image as
most similar at all image sizes, while CompareEdges() was inconsistent, identifying 10 different
images. Chart 2 depicts the similarity values of CompareBrightness() and CompareLab() across
the 40 image sizes. Both techniques' scores exhibit the same behavior: erratic at first, peaking at
image sizes of 280x186, 300x199 and 320x212 pixels, then stabilizing at a lower score from
340x226 pixels onwards. CompareEdges() was excluded from this analysis because of its
inconsistency in identifying the most similar image.
Discussion of Results
CompareBrightness() and CompareLab() were consistent and performed similarly in the tests.
While they identified different images as most similar, it is worth noting that the image
CompareBrightness() rated second closest was the image CompareLab() rated closest, and vice
versa. The slight differences likely stem from CompareBrightness()'s focus on brightness values
and CompareLab()'s focus on how humans perceive color. CompareBrightness() is likely more
useful when the amount of sunlight matters most, and CompareLab() may be more useful when
the color of the sky matters most. Both techniques identify the most similar image well even at
low resolutions. If the accuracy of the similarity score is important, a resolution of 340x226
pixels can be used, since scores are stable from that size onwards, reducing the processing time
required.
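The distinction drawn above, brightness versus perceived color, can be made concrete with a per-pixel sketch. This is not the report's CompareBrightness()/CompareLab() code; the function names, the Rec. 601 luma weights, and the use of the CIE76 delta-E are illustrative assumptions. The sRGB-to-CIELAB conversion itself follows the standard D65 formulas.

```python
# Hedged sketch: a brightness difference versus a perceptual (CIELAB)
# color difference for a single pixel pair. Illustrative only.
import math

def srgb_to_lab(rgb):
    """Convert one sRGB pixel (0-255 per channel) to CIELAB (D65 white point)."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    # Linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point.
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f(y) - 16, 500 * (f(x) - f(y)), 200 * (f(y) - f(z))

def brightness_diff(p1, p2):
    """Difference in perceived brightness (Rec. 601 luma) of two pixels."""
    luma = lambda p: 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]
    return abs(luma(p1) - luma(p2))

def lab_diff(p1, p2):
    """Euclidean distance in CIELAB space (the CIE76 delta-E)."""
    return math.dist(srgb_to_lab(p1), srgb_to_lab(p2))

# Two pixels with similar brightness but different hue: the Lab
# difference is large relative to the brightness difference.
sky1, sky2 = (90, 140, 200), (140, 140, 150)
```

A pixel pair like this shows why the two techniques can rank images differently: a gray sky and a blue sky can score as near-identical in brightness yet far apart perceptually.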
CompareEdges() was inconsistent in identifying the most similar image and was poor at
detecting edges at low resolutions. Although edge detection improved slightly at higher
resolutions, the technique remained inconsistent in identifying the most similar image. Edge
detection comparisons are likely more useful when the differences between the comparison
image and the target image are small.
Future implementations could improve the results by applying the full Canny edge detection
technique to better identify edges. If edges are reasonably identified, different portions of the
image could be isolated for finer comparison (e.g. between the sky and the buildings).
CompareBrightness() and CompareLab() could also take neighboring pixels into account (e.g.
through Gaussian blurring) to better reflect how humans judge similarity.
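The Gaussian pre-blurring idea above can be sketched as follows. The 3x3 kernel with 1-2-1 binomial weights and the nested-list grayscale representation are assumptions for illustration; an actual implementation would more likely use a library routine.

```python
# Minimal sketch of neighbor-aware smoothing: blur each image with a
# small Gaussian-like kernel before comparing, so a pixel's value also
# reflects its neighbors. Illustrative, not the report's implementation.
def blur3x3(img):
    """Apply a 3x3 kernel with 1-2-1 binomial weights to a 2D grayscale image."""
    h, w = len(img), len(img[0])
    kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # weights sum to 16
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = weight = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:  # skip out-of-bounds neighbors
                        k = kernel[dy + 1][dx + 1]
                        total += k * img[ny][nx]
                        weight += k
            out[y][x] = total / weight  # renormalize at the borders
    return out

# A single bright pixel is spread across its neighborhood.
img = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
blurred = blur3x3(img)
```

Comparing the blurred images instead of the originals would make an isolated noisy pixel count for less, which is closer to how humans judge overall similarity.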