
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)

Web Site: www.ijettcs.org, Email: editor@ijettcs.org, editorijettcs@gmail.com, Volume 2, Issue 2, March-April 2013, ISSN 2278-6856

Classification of Images using Similar Objects


Ajay Kurhe1, Suhas Satonkar2 and Prakash Khanale3

1 Department of Computer Science, Shri Guru Buddhiswami College, Purna (Jn.), Dist. Parbhani, Maharashtra, India
2 Department of Computer Science, A.C.S. College, Gangakhed, Dist. Parbhani, Maharashtra, India
3 Department of Computer Science, D.S.M. College, Parbhani, Maharashtra, India

Abstract: In this paper we describe a method for retrieving, from a pixel labeled database, images that share one main object. For object similarity we use the general observation that the main object of an image is normally located centrally in the image; this restricts attention to a single region of the image, and it is possible to classify an image by understanding only its central part. We propose a simple algorithm with which a large database can be searched for images containing the same object. The search is performed using the dominant color of the central part of the image. We applied the algorithm to the pixel labeled/ground truth images of the Microsoft database; the classification accuracy achieved is reported in Table 1.

Keywords: object, classification, pixel labeled database, retrieving, image, region.

1. INTRODUCTION
Retrieving similar objects is part of retrieving semantic meaning from an image. This is very difficult for a computer, because an image contains various types of objects at different angles and in different shapes, colors and textures. An image also contains noise, which is an obstacle to object recognition. Images are in 2-D form, while humans see every object in 3-D form. Also, the resolution of the human eye is very high compared to any high-resolution image. Humans can easily understand any object because they are trained, learning to recognize objects from infancy. With the steady growth of computer power, the rapidly declining cost of storage and ever-increasing access to the Internet, digital acquisition of information has become increasingly popular in recent years. Interest in the potential of digital images has increased enormously over the last few years, and users are exploiting the opportunity to access remotely stored images in all kinds of new and exciting ways. This has exacerbated the problem of locating a desired image in a large and varied collection, and has led to the rise of a new R&D field, Content-Based Image Retrieval (CBIR): the retrieval of images on the basis of features automatically extracted from the images themselves.

For recognition of an object there are three main parameters: color, texture and shape. Shape is a very important factor for segmentation, i.e. dividing an image into regions. After segmentation, determining the shape of the objects is a research problem in its own right, because one object may appear in different shapes and different sizes, so we cannot fix a particular shape for a particular object; many researchers have proposed different techniques for this. Texture is a third factor in understanding an image: different parts of an image have different textures (e.g. the background texture), each object has its own texture, and one object can appear on many different textures. Texture recognition is another research thread in image processing, with many techniques in use. Because of these problems it is difficult to understand an image. We have developed a technique to classify images and verified it using the Matlab tool; results are shown in the experimental results section of this paper.

2. RELATED WORK
An object recognition classifier based on a trainable similarity measure, specifically designed for supervised classification of images, has been proposed [1]. Common global measures such as correlation suffer from uninformative pixels and occlusions; the proposed measure is instead based on local matches in a set of regions within an image, which increases its robustness, and the configuration of local regions is derived specifically for each prototype by a training procedure [1]. The objects in an image may be localized by a detector [2], [3]; the identified candidate regions are then processed by a more elaborate classifier that performs the possible multi-class discrimination and rejects the false alarms inevitably introduced by the detector [4]. A completely automated integration of object detection and place classification in a single system has also been described: object-place relations are first learned automatically from an online annotated database, object detectors are then trained on some of the most frequently occurring objects, and finally the detection scores together with the learned object-place relations are used to perform place classification of images [5]. Unlike rooms, which are defined by geometric properties of the environment (e.g., walls), places are defined by the objects they contain and the set of related tasks that occur within them [6].

Object-based methods have also been used for scene classification, as in [8], where places and functional regions of the environment are labeled based on object occurrences. However, the main drawback of these methods is that they are environment-specific (objects are hand-picked) and require manual training of detectors for the selected objects. The challenges of selecting reliable objects that can be recognized in various environments, and of gathering the required training data, have prompted researchers to use alternative techniques such as the ones mentioned previously. In addition, generic object class recognition has been a challenging task in computer vision research. More recently, part-based models have shown themselves to be highly effective for detection of both rigid and deformable objects [9]. This method, however, requires a large amount of training data with segmented examples of objects. A color histogram with the Minkowski-form distance has also been used for recognition [10].

3. GROUND TRUTH OR PIXEL LABELED IMAGES
We used the Microsoft Research database for this work [7]. This image database is free for research and academic purposes, and anyone can use it for such purposes. In this database, images of different object categories are stored in different folders; the images are in .BMP format and have different sizes, e.g. 213x320, 480x640. The database also contains pixel labeled images. Ground truth or pixel labeled images are like segmented images, but a ground truth image has fewer color regions than a segmented image. Figure 1 shows the three types of images: original, segmented and ground truth. Ground truth images contain fewer colors than segmented images, and segmented images contain fewer colors than original images, so object recognition in ground truth images is easier.

4. RECOGNITION OF OBJECT FROM IMAGE
One observation is that the main part of an image's main object lies in the centre of the image, and the main object of an image cannot occupy all four corners of the image. Exceptionally, in some images the main object is very small and may lie to one side of the image: at the top, bottom, left or right. In such images it is difficult to decide which color is the dominant color of the main object. When two small objects appear with some distance between them, as in image (a) of Fig. 2, we should select the second dominant color of the centre part as the object color, because the maximum area of the centre part of the image is occupied by grass, i.e. green (the background). In this case, to compute the dominant color of the object we compare the dominant color of the centre part with the corner colors of the image; if a corner color is the same as the dominant color (DC) of the centre part, the dominant color is the background color, so we skip that color and select the second dominant color, i.e. blue, as the object color for recognition. We ignore black, because the black region is very small and belongs to a distant object that is not clear in the image.

4.1 Centre part of an image and computing the Dominant Color
The centre part of an image is obtained by omitting 25% of the image from each of the four sides, i.e. from the top, bottom, left and right; Figure 2(b) shows the centre part of image (a) in Figure 2. In this image green is the background color and blue is the object color, so blue is the dominant color for object recognition. In Figure 2(b), green is the dominant color because green occupies the maximum area of the centre part; but green is the background color, as Figure 2(a) shows, so we leave green out of the ranking and take the second color, blue, as the dominant color, again ignoring black in the computation.

Fig. 2

5. METHODOLOGY
To retrieve images containing a similar object, we first extract the object color, i.e. the dominant color of the centre part of the image as described in the section above, and then match that object color against the pixel labeled database images.

If the object color matches, the matched image is displayed; otherwise it is skipped. We recognize the object in a pixel labeled image by selecting the centre part of the image, because the major part of the main object occupies the centre of the image [4], and an object cannot occupy all four corners of the image. From the centre part of the query image we compute the dominant color and take that color to be the color of the object. In the same way we compute the Dominant Color (DC) of the object in each database image and match it against the dominant color of the query image; the images whose colors match are displayed.

Figure 3 Block diagram of system

6. EXPERIMENT
To extract the object from a pixel labeled query image, we first convert the RGB image into a gray-scale image and then select the centre part of the query image. The size of the centre part is not fixed; it varies with the size of the query image. Here the query image is 213x320 and we select the centre part as rows 81:131 and columns 122:197. We then calculate the dominant color of the centre part of the query image, read the first image from the pixel labeled image database, convert it from RGB to gray scale, select the same centre part of the database image, calculate its dominant color, and match this value against the dominant color of the query image. If the values are the same we display the database image; otherwise we read the next image from the pixel labeled database, and so on until the end of the database. For this experiment we selected 313 pixel labeled images of 13 categories, each of size 213x320. The result of the experiment is evaluated using equations (1) and (2), and is also displayed as a table and as images.

6.1 Algorithm
1. Read a pixel labeled image as the query image.
2. Extract the centre part of the query image.
3. Calculate the color occupying the maximum portion of the centre part of the query image, i.e. its dominant color.
4. Compare the dominant color with the corner colors of the query image; if they are not the same, keep the first dominant color, else select the second dominant color of the centre part of the query image.
5. Repeat the following steps while there are images in the database:
   i) Read a pixel labeled image from the database.
   ii) Repeat steps 2, 3 and 4 to compute its dominant color.
   iii) Compare this dominant color with the query image color (gray-scale color) calculated in step 4; if the colors match, display the image, else read the next image from the database.
6. End.

6.2 Experimental Result
The result was evaluated using the following equations for 311 images of 13 categories.

Precision = (number of retrieved images that are relevant) / (number of retrieved images)   (1)

Recall = (number of retrieved images that are relevant) / (total number of relevant images)   (2)

Fig. 4 Result for the test image (a) of object horse
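The centre-part and dominant-color procedure of Sections 4.1 and 6.1 can be sketched as follows. This is a minimal illustration over a label image represented as a 2-D grid of color labels; the grid representation, the helper names and the exact 25% margins per side are assumptions for the sketch, not the authors' Matlab code.

```python
from collections import Counter

def centre_part(img):
    # Omit 25% of the rows/columns from each side (Section 4.1),
    # keeping only the central region of the label image.
    h, w = len(img), len(img[0])
    return [row[w // 4: w - w // 4] for row in img[h // 4: h - h // 4]]

def dominant_object_colour(img):
    # Rank the colours of the centre part by area; if the top colour
    # also appears at a corner it is taken to be background (step 4 of
    # the algorithm), so fall back to the next dominant colour.
    centre = centre_part(img)
    ranked = [c for c, _ in Counter(p for row in centre for p in row).most_common()]
    corners = {img[0][0], img[0][-1], img[-1][0], img[-1][-1]}
    for colour in ranked:
        if colour not in corners:
            return colour
    return ranked[0]  # every centre colour also touches a corner

# Toy 8x8 label image: green background, small blue object in the centre.
img = [['green'] * 8 for _ in range(8)]
for r in (3, 4):
    for c in (3, 4):
        img[r][c] = 'blue'
print(dominant_object_colour(img))  # blue (green is skipped as background)
```

Matching a query against the database then reduces to comparing the dominant_object_colour value of the query with that of each database image, as in step 5 of the algorithm.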
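The precision and recall of equations (1) and (2) in Section 6.2 reduce to set arithmetic over the retrieved and relevant image sets; a minimal sketch (the function name and the list representation of the image sets are illustrative, not from the paper):

```python
def precision_recall(retrieved, relevant):
    # Eq. (1): precision = relevant retrieved images / all retrieved images
    # Eq. (2): recall    = relevant retrieved images / all relevant images
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)  # retrieved images that are relevant
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# e.g. 4 images retrieved, of which 2 are among the 3 relevant ones:
p, r = precision_recall([1, 2, 3, 4], [2, 3, 5])
print(p, r)  # 0.5 0.6666666666666666
```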

7. CONCLUSION
In this paper we have described a technique for classifying images using the Dominant Color (DC) of the centre part of the image. With this technique we can easily classify images containing similar objects, and the classification and retrieval time is very low, though the method is restricted to ground truth images. Table 1 shows the results of the system. We also observe that the main object of an image cannot occupy all four corners of the image, so the four corners can be checked for object recognition.

Acknowledgement: The author is thankful to the UGC, Pune, for funding the Minor Research Project (MRP).

REFERENCES
[1] P. Paclik, J. Novovicova, and R. P. W. Duin, "A Trainable Similarity Measure for Image Classification," The 18th International Conference on Pattern Recognition (ICPR'06), IEEE, 2006.
[2] P. Viola and M. J. Jones, "Robust real-time face detection," Int. Journal of Computer Vision, 57(2):137-154, 2004.

[3] P. Viola, M. Jones, and D. Snow, "Detecting pedestrians using patterns of motion and appearance," in Proc. of IEEE Int. Conference on Computer Vision (ICCV), volume 2, pages 734-741, October 2003.
[4] P. Paclik, J. Novovicova, and R. P. W. Duin, "Building road sign classifiers using a trainable similarity measure," IEEE Trans. on Intelligent Transportation Systems, 2006.
[5] P. Viswanathan, T. Southey, and J. Little, "Automated Place Classification using Object Detection," IEEE, 2010, DOI 10.1109/CRV.2010.49.
[6] T. Southey and J. J. Little, "Object discovery through motion, appearance and shape," in AAAI Workshop on Cognitive Robotics, Technical Report WS-06-03, AAAI Press, 2006.
[7] http://research.microsoft.com/vision/cambridge/recognition/default.htm
[8] S. Vasudevan and R. Siegwart, "Bayesian space conceptualization and place classification for semantic maps in mobile robotics," Robotics and Autonomous Systems, 56(6):522-537, June 2008.
[9] D. Crandall, P. Felzenszwalb, and D. Huttenlocher, "Spatial priors for part-based recognition using statistical models," in CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 1, pages 10-17, Washington, DC, USA, 2005. IEEE Computer Society.
[10] A. B. Kurhe, S. S. Satonkar, and P. B. Khanale, "Color Matching of Images by using Minkowski-Form Distance," Global Journal of Computer Science and Technology, Vol. 11, No. 5, pp. 87-89, 2011.

Table No. 1 shows the evaluation of the experiment for 313 images of 13 categories.

AUTHORS
Ajay Kurhe received the M.Sc. degree in Computer Science from S.R.T.M. University, Nanded, Maharashtra in 1997. He has a Minor Research Project sanctioned by the University Grants Commission for this work. He is currently an Assistant Professor at S.G.B. College, Purna, Dist. Parbhani. His research interests include CBIR and handwritten character recognition.

Suhas Satonkar received the M.Sc. degree in Computer Science from S.R.T.M. University, Nanded, Maharashtra in 1997. He is currently an Assistant Professor at A.C.S. College, Gangakhed, Dist. Parbhani. His research interests include face recognition and CBIR.

Prakash Khanale received the M.Phil. and Ph.D. degrees from Shivaji University, Kolhapur, in 1995 and 2005 respectively. He is currently an Associate Professor at D.S.M. College, Parbhani. His research interests include soft computing, CBIR and face recognition.
