
Features

Gabor Texture Feature In image processing, a Gabor filter, named after Dennis Gabor, is a linear filter used for edge detection. In the spatial domain, a 2D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. Gabor filters are self-similar: all filters can be generated from one mother wavelet by dilation and rotation. The Gabor texture feature is therefore built from a group of wavelets, with each wavelet capturing the energy at a specific frequency (scale) and a specific orientation; from this group of energy distributions a texture feature representing the image can be extracted. The frequency and orientation representations of Gabor filters are similar to those of the human and mammalian visual system, and they have been found to be particularly appropriate for texture representation and discrimination.[1]
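As an illustration, a minimal sketch of such a filter-bank feature extractor is given below, written in Python with OpenCV; the kernel size, scales, orientations, and the choice of mean and standard deviation as the energy statistics are assumptions made for this example, not parameters taken from [1].

import cv2
import numpy as np

def gabor_texture_features(gray, ksize=31, scales=(4.0, 8.0, 16.0), orientations=4):
    # Build a small bank of Gabor kernels (3 scales x 4 orientations, illustrative values)
    # and summarize each filter response by its mean and standard deviation.
    features = []
    for lambd in scales:                                   # wavelength of the sinusoidal factor
        for k in range(orientations):
            theta = k * np.pi / orientations               # filter orientation
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=0.56 * lambd,
                                        theta=theta, lambd=lambd, gamma=0.5, psi=0)
            response = cv2.filter2D(gray, cv2.CV_32F, kernel)
            features.append(response.mean())               # energy statistics of this sub-band
            features.append(response.std())
    return np.asarray(features)

# Example: 24-dimensional feature vector (3 scales x 4 orientations x 2 statistics).
img = np.random.rand(128, 128).astype(np.float32)          # stand-in for a real grayscale image
print(gabor_texture_features(img).shape)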

Gray level difference method (GLDM) Based on the gray level difference method and its gray level difference density functions, five texture features can be defined: contrast, angular second moment, entropy, mean, and inverse difference moment [2,3,4]. Contrast is the difference in luminance and/or color that makes an object (or its representation in an image or display) distinguishable. In visual perception of the real world, contrast is determined by the difference in the color and brightness of the object and other objects within the same field of view. Because the human visual system is more sensitive to contrast than to absolute luminance, we can perceive the world similarly despite the large changes in illumination over the day or from place to place; the maximum contrast of an image is its contrast ratio or dynamic range. In image processing, computer vision, and related fields, an image moment is a particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. The angular second moment (ASM) is a measure of the smoothness, or homogeneity, of the image: if the differences between gray levels over an area are small, that area has a high ASM value. In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits.
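To make these definitions concrete, here is a minimal NumPy sketch that estimates a gray level difference density function for a single displacement and computes one common formulation of the five features; the displacement (1, 0), the number of gray levels, and the exact formulas are assumptions for this example rather than the formulation of [2,3,4].

import numpy as np

def gldm_features(gray, dx=1, dy=0, levels=256):
    # Gray level difference density p(d): histogram of absolute differences between
    # each pixel and its neighbour at displacement (dx, dy), normalized to sum to 1.
    g = np.asarray(gray, dtype=np.int64)
    h, w = g.shape
    diff = np.abs(g[dy:h, dx:w] - g[0:h - dy, 0:w - dx])
    p = np.bincount(diff.ravel(), minlength=levels).astype(np.float64)
    p /= p.sum()
    d = np.arange(levels)
    nz = p > 0                                             # avoid log(0) in the entropy term
    return {
        "contrast": np.sum(d ** 2 * p),                    # large differences weighted heavily
        "asm": np.sum(p ** 2),                             # angular second moment (homogeneity)
        "entropy": -np.sum(p[nz] * np.log2(p[nz])),        # Shannon entropy in bits
        "mean": np.sum(d * p),
        "idm": np.sum(p / (d ** 2 + 1.0)),                 # inverse difference moment
    }

# Example on a random 8-bit "image".
print(gldm_features(np.random.randint(0, 256, size=(64, 64))))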

Algorithms
Support vector machine In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making it a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.

k-Means k-Means is a rather simple but well-known algorithm for grouping objects, i.e. clustering. Again, all objects need to be represented as a set of numerical features, and in addition the user has to specify the number of groups (referred to as k) they wish to identify. Each object can be thought of as a feature vector in an n-dimensional space, n being the number of features used to describe the objects to cluster. The algorithm first chooses k random points in that vector space, which serve as the initial centers of the clusters. All objects are then assigned to the center they are closest to; the distance measure is usually chosen by the user and determined by the learning task. After that, for each cluster a new center is computed by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until it converges, and the algorithm can be proven to converge after a finite number of iterations. Several tweaks concerning the distance measure, the choice of initial centers, the computation of new average centers, and the estimation of the number of clusters k have been explored, yet the main principle always remains the same.
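As a usage sketch, an SVM classifier can be trained and applied along the following lines (shown here with scikit-learn, which is an assumption of this example and not a tool named in the text; the toy data and the linear kernel are likewise illustrative):

from sklearn import svm

# Toy training data: each row is a feature vector, each label one of two classes.
X_train = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 0.8]]
y_train = [0, 0, 1, 1]

clf = svm.SVC(kernel="linear")                 # maximum-margin linear separator
clf.fit(X_train, y_train)
print(clf.predict([[0.1, 0.0], [0.95, 0.9]]))  # new points are classified by the side of the gap they fall on

The k-Means procedure described above can likewise be sketched from scratch in a few lines of NumPy; the Euclidean distance, the random choice of initial centers, and the fixed iteration cap are assumptions of this example:

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    # X: (n_samples, n_features) feature matrix; returns (centers, labels).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]        # random initial centers
    for _ in range(max_iter):
        # Assign every object to the closest center (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the average of the objects assigned to it.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):                      # converged
            break
        centers = new_centers
    return centers, labels

# Example: two well-separated blobs of points.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
centers, labels = kmeans(X, k=2)
print(centers)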

References

[1] D. Zhang, A. Wong, M. Indrawan, and G. Lu, Content-based image retrieval using Gabor texture features, IEEE Transactions PAMI, pp. 1315, 2000.

[2] I. Kitanovski and B. Jankulovski, Comparison of feature extraction algorithms for mammography images, 4th International Congress on Image and Signal Processing, 2011.

[3] J. Weszka, C. Dyer, and A. Rosenfeld, A comparative study of texture measures for terrain classification, IEEE Transactions on Systems, Man, and Cybernetics, no. 4, pp. 269-285, 1976.

[4] R. Conners and C. Harlow, A theoretical comparison of texture algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 3, pp. 204-222, 1980.
