
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 35, NO. 2, MAY 1988

The Analysis of Natural Textures Using Run Length Features

Abstract—In this paper a family of texture features is presented that has the ability to discriminate different textures in a 3-D scene as well as the ability to recover the range and orientation of the surfaces of the scene. These texture features are derived from the gray level run length matrices (GLRLM's) of an image. The GLRLM's are first normalized so that they all have equal average gray level run length. Features extracted from the normalized GLRLM's are independent of the surface geometry. Experiments for the discrimination of natural textures have been conducted. The results demonstrate that these features have the ability to discriminate different textures in a nontrivial 3-D scene.

Manuscript received March 30, 1987; revised November 9, 1987. H.-H. Loh and R. C. Luo are with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27695-7911. G. Leu is with the Department of Computer Science, Wayne State University, Detroit, MI 48202. IEEE Log Number 8820071.

I. INTRODUCTION

TEXTURE is one of the important properties used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image [1]. Texture is one of the fundamental pattern elements used in interpreting pictorial information. Textural properties of image regions are often used for classification (e.g., of terrain types or materials) or for segmentation of the image into differently textured regions. The classification of pictorial data can be made on a resolution-cell basis or on a block of contiguous resolution cells [2].

Texture is also a well-known source of geometric clues to three-dimensional surface structure [3]. The variations in size, density, eccentricity, and orientation of the texture elements are hardly random; they are predictable consequences of the foreshortening and size scaling that occur when a uniformly textured tilted surface is viewed under perspective projection [4]. Scaling and foreshortening affect the appearance of texture independently. All dimensions of the projection of a texture element scale inversely with distance. This suggests decomposing the recovery of shape from texture into recovery of distance from cues that depend only on scaling, and recovery of surface orientation from cues that depend only on foreshortening.

In this paper we intend to find a method based on texture run length that can analyze the surfaces of a scene to recover the three-dimensional relations. In Section II, the key methodologies and techniques of our experiments are introduced. The results of the experiments are analyzed in Section III. The evaluation of the entire system and the techniques used is presented in Section IV.

II. GLRLM NORMALIZATION

A. Gray Level Run Length Approach

A wide variety of features have been used for visual texture analysis. Some of these feature sets have included features based on gray level run lengths, but these features have not been used extensively. A gray level run is a set of consecutive, collinear picture points having the same gray level value. The length of the run is the number of picture points (pixels) in the run.

The major reason for the use of the GLRLM's as the basis of the features is that the lengths of the runs reflect the size of the texture elements. For example, if the distance between the surface and the camera is shortened, then the perceived texture elements will be enlarged and the corresponding gray level runs will be lengthened. Hence, the length of the runs is inversely proportional to the distance between the camera and the object surface.

Furthermore, if the alignment of the texture is rotated with respect to the camera line of sight by a specific angle ω, the GLRLM's for a given direction r will be recorded under the direction ω + r. Hence, the GLRLM's also contain information about the alignment of the texture. Finally, the surface slant and surface tilt of a textured surface are also reflected in the GLRLM's. The run length in the direction of tilt will be foreshortened in proportion to the cosine of the angle between the surface normal and the camera line of sight (the surface slant). The run lengths in the direction perpendicular to the surface tilt direction will not be foreshortened. From the above discussion it follows that the GLRLM's respond to the range and orientation of a textured surface in a direct and meaningful way.

For a given picture, we can compute a set of gray level run length matrices (GLRLM's) for runs with any given direction. However, we usually take only the four principal directions (0°, 45°, 90°, 135°) in computing the run lengths. The matrix element (i, j) specifies the number of times that the picture contains a run of length j in the given direction, consisting of points having gray level i (or lying in gray level range i).

Our experiments will mainly be based on the gray level run length approach, which means we will use the same statistical methods to extract the textural features from the GLRLM's of our images. A textural feature is a distinguishing primitive

0278-0046/88/0500-0323$01.00 © 1988 IEEE



characteristic or attribute of an image field. After normalizing the GLRLM's, numerical measures are then extracted. They can be used to characterize our image texture statistically. These features can be used in conjunction with more sophisticated pattern classification techniques for the purposes of three-dimensional scene analysis.

B. Preprocessing

Fig. 1 illustrates the procedures we used for extracting the features. First, video images are digitized from a camera, and those digitized images are preprocessed. The computation of GLRLM's from the images follows.

[Fig. 1. Flow chart of the feature extraction: digitized image textures → preprocessing → computing GLRLM's → normalized feature extraction → image features.]

In our experiment, the only preprocessing employed was the nonhomogeneous illumination adjustment, or background subtraction. For various reasons, most natural texture images were not under a homogeneous or uniform illumination condition when the image was taken. Since our GLRLM's are based on the pixels' gray levels, it was necessary to make such adjustments to our images before computing our GLRLM's. Details of the preprocessing procedure can be found in [5].

C. Constructing GLRLM's

After adjusting the nonhomogeneous illumination, the GLRLM's were computed. Computation of these matrices is relatively simple. First, we reduced the image size (e.g., to 132 × 132 in our experiment) so that all texture features were preserved and the computation was simplified, because the number of calculations is in direct proportion to the number of points in the image.

D. Computing ARL and Features from GLRLM's

In order to make the extracted textural features independent of the surface range and orientation, the GLRLM's need to be normalized. To do so, we first compute the average gray level run length (ARL) of the GLRLM's. As we shall see later, the ARL was one of the most important properties of the GLRLM's used. Let P(i, j) be the (i, j)th entry of a given GLRLM, N_g be the number of gray levels in the image, and N_r be the length of the longest run considered. The ARL can be computed as

ARL = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} j\, P(i,j)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} P(i,j)}.    (1)

The denominator in the above equation is the total number of runs in the matrix. In the numerator, each run count is weighted by the length of the run. The GLRLM under consideration is normalized by multiplying the length of every run in it by a normalizing factor

NF = \frac{K}{ARL}.    (2)

K is a system-wide constant. Any normalized GLRLM should have its average run length equal or close to K. The normalizing factors (one for each direction considered) are stored for use in the recovery of surface range and orientation.

To obtain numerical texture measures from the GLRLM's, we compute functions analogous to those used by Haralick [6] for gray level cooccurrence matrices. Some are similar to those proposed by Galloway [7]. The following six features are widely used in our experiments; these features are all extracted from our GLRLM's. Some of the elements of our features are defined as follows:

P(i, j): the (i, j)th entry in the given run length matrix.
N_g: the number of gray levels in the image.
N_r: the number of different run lengths that occur (so that the matrix is N_g by N_r).

The following are all the features we used, shown in order.

1) Second moment with respect to the run length (long run emphasis):

F_1 = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} j^2\, P(i,j)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} P(i,j)}.    (3)

This function multiplies each run count by the length of the run squared. This should emphasize long runs. The denominator is the total number of runs in the picture, which can be used to normalize out differences in image size.

2) First moment with respect to the gray level:

F_2 = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} i\, P(i,j)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} P(i,j)}.    (4)

This function multiplies each run count by its gray level. This should help to determine the gray tone of the texture.

3) Second moment with respect to the gray level (bright

color emphasis):

F_3 = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} i^2\, P(i,j)}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} P(i,j)}.    (5)

This function multiplies each run count by the gray level squared. This should emphasize bright colors.

4) Gray level nonuniformity:

F_4 = \frac{\sum_{i=1}^{N_g} \left( \sum_{j=1}^{N_r} P(i,j) \right)^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} P(i,j)}.    (6)

This function squares the number of runs for each gray level. The sum of the squares is then divided by the normalization factor, the total number of runs in the image. This measures the gray level nonuniformity of the image. When runs are equally distributed throughout the gray levels, the function takes on its lowest values. Gray levels with high run counts contribute most to the function.

5) Run length nonuniformity:

F_5 = \frac{\sum_{j=1}^{N_r} \left( \sum_{i=1}^{N_g} P(i,j) \right)^2}{\sum_{i=1}^{N_g} \sum_{j=1}^{N_r} P(i,j)}.    (7)

This function squares the number of runs for each length. The sum of the squares is then divided by the normalizing factor. This function measures the nonuniformity of the run lengths. If the runs are equally distributed throughout the lengths, the function will have a low value. Large run counts contribute most to the function.

6) Sum of variance:

F_6 = \sum_{i=1}^{N_g} \sum_{j=1}^{N_r} \left( NF \cdot j - u_i \right)^2 P(i,j)    (8)

where u_i is the average normalized run length of gray level i,

u_i = NF \cdot \frac{\sum_{j=1}^{N_r} j\, P(i,j)}{\sum_{j=1}^{N_r} P(i,j)}.

This function multiplies each run count by the square of the difference between its run length and the average run length of its gray level. This function emphasizes the run length nonuniformity on a per-gray-level basis. When runs of the same gray level have the same length, the function takes its lowest values.

Note that when computing these features, all diagonal run lengths should be multiplied by the square root of 2, in order to account for the difference in physical length between the diagonal run lengths and the other run lengths. Some details of our experiments and the analysis of our experimental results are presented in the next section.

III. EXPERIMENTATION AND RESULTS

In this section we describe the details of our experiment. The design of the experiments is overviewed. The results of our experiment, including the basic analysis, are presented.

A. Experimental Setup

The experiments can be divided into four parts: perception distance, rotation, tilt and slant, and image texture classification. The first three parts used the natural textures mentioned in Section II. Those textures, shown in Fig. 2, were selected from Brodatz's album [8]. The fourth part, image texture classification, uses the features derived from those natural textures.

In the perception distance part, the coarseness property is well captured by the average run length of the pictures: the ARL's of a picture decrease proportionally as the distance between the camera and the surface becomes longer. The relationship between the value of the ARL's and the perception distance between the camera and the surface is demonstrated in the experimental results later in this section.

In our experiment we used several pictures that were enlarged from an original picture in order to simulate the effect of varying distance; the original picture together with the other pictures simulated from it became a set of test pictures. We may compute and normalize GLRLM's from those picture sets.

Using the line between the camera and the surface as an axis, we can rotate the textural surface to a certain angle to get pictures with different orientations. In the experiment we used pictures with the same texture, but with different rotations with respect to the camera line of sight, and computed their GLRLM's. We then extracted the features of each GLRLM. Comparing these features, we found that the pictures' directional property can be preserved within the GLRLM's. The angle rotated between these pictures can be determined by analyzing their coordinate features.

Recovering the effect caused by surface tilt and slant is probably the hardest part of our experiment. The tilted and slanted pictures are more complicated than the pictures with different perception distances and orientations. All dimensions of the projection of a texture element scale inversely with distance. However, only the dimension in the direction of greatest distance gradient (known as the direction of tilt) is foreshortened when a surface is viewed obliquely; that dimension is compressed by an amount equal to the cosine of the angle between the surface normal and the line of sight (known as the slant angle). Since the direction of tilt is unknown, we will need to normalize the tilted surface from all four directions. The features obtained from the normalized GLRLM's provide a very good basis for texture classification. As shown later, the normalization greatly improves the features' quality compared to the features extracted before normalization.
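As a concrete illustration of Section II, the construction of a GLRLM and the computation of the ARL of (1), the normalizing factor of (2), and features such as F1 of (3) and F4 of (6) can be sketched as below. This is a minimal NumPy sketch under assumptions of ours: the image is already quantized to n_gray levels, only the horizontal (0°) direction is shown (the other three principal directions are analogous), and all function names are illustrative rather than from the paper.

```python
import numpy as np

def glrlm_0deg(img, n_gray, max_run):
    """GLRLM for horizontal (0 degree) runs of an already-quantized image:
    entry (i, j-1) counts the maximal runs of length j whose pixels all
    have gray level i."""
    M = np.zeros((n_gray, max_run), dtype=float)
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                M[run_val, min(run_len, max_run) - 1] += 1
                run_val, run_len = v, 1
        M[run_val, min(run_len, max_run) - 1] += 1  # close the last run
    return M

def arl(M):
    """Average run length, eq. (1): run counts weighted by run length,
    divided by the total number of runs."""
    j = np.arange(1, M.shape[1] + 1)
    return (M * j).sum() / M.sum()

def normalizing_factor(M, K):
    """Eq. (2): NF = K / ARL for a system-wide constant K."""
    return K / arl(M)

def long_run_emphasis(M):
    """Feature F1, eq. (3): second moment with respect to run length."""
    j = np.arange(1, M.shape[1] + 1)
    return (M * j ** 2).sum() / M.sum()

def gray_level_nonuniformity(M):
    """Feature F4, eq. (6): squared run count per gray level, summed and
    divided by the total number of runs."""
    return (M.sum(axis=1) ** 2).sum() / M.sum()
```

On a toy 2 × 5 image containing four runs, for example, every feature above reduces to a small weighted sum over the four nonzero matrix entries, which makes the long-run and nonuniformity emphases easy to verify by hand.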

[Fig. 2. Image textures used in experiments.]

[Fig. 3. A typical average run length (pixels) versus 1/perception distance curve, for perception distances of 100, 133, 177, and 235 percent.]

Surfaces in the three-dimensional world can have all kinds of relative locations with respect to the camera. In this section, we are going to take those normalized features obtained from our experiment to classify pictures into different categories. We are using a pattern-clustering technique to classify pictures from their normalized features.

In this pattern-clustering technique, we condense the description of the relevant features of image textures into points in a 2-D figure with X and Y axes. Each point in the figure represents a value for the feature vector {u1, u2} applied to a different picture. The measurement value for a feature should be correlated with the picture's category. Feature vector values were clustered based on the picture from which they were derived.

In the 2-D figure, we first choose two features from a possible 24 (6 features in 4 directions) of each picture to form the feature vectors for all pictures. Then, the feature vectors from the pictures are used to partition the figure into regions, where each region represents a different category.

Once a figure has been partitioned into regions, unknown pictures can then be classified by their feature vectors. Since we may need several figures to classify all the categories, the unknown pictures may also need to be marked on several figures with their feature vectors to determine their category.

B. Experimental Results and Analysis

In this paper, three enlarged pictures were simulated from each original picture. The rates of enlargement are 133-percent linear (or 177-percent area), 177-percent linear (or 313-percent area), and 235-percent linear (or 553-percent area), respectively.

From each of these 24 pictures (including the original pictures), four GLRLM's were computed, one for each principal direction (0°, 45°, 90°, and 135°). Then the average run length (ARL) was computed from each GLRLM. The overall average run length for each picture can also be found. A typical average run length versus perception distance (rate of reduction or enlargement) curve is shown in Fig. 3, which is taken from the second picture set. It shows that the average run length values change almost linearly with the perception distance. This indicates that the run length matrix responds to the perception distance change directly. After the ARL's have been found, the normalizing factor (NF) can be computed. Features extracted from a GLRLM can now be normalized with its normalizing factor, so that these features are independent of the perception distance differences. Fig. 4 shows a typical feature value versus perception distance curve. Part (a) of Fig. 4 shows the feature values before normalization, and part (b) shows the same features after normalization. The reason we put only three features on the figure is that these were the only three features that had to be normalized. Each feature was normalized according to how the run length (j) is used in its expression. It can be seen from Fig. 4 that before normalization each feature responded to the perception distance changes in its own way. After normalization, the curves appear to be flat and independent of the perception distance.

In the rotation part, we still used the six original pictures as in the experiment of the first part; however, the other pictures were simulated from them by rotating them a certain degree (i.e., 45°, 90°, and 135°). From each of the 24 pictures, four GLRLM's were computed. All six features and the average run length were computed from each GLRLM. When a picture is rotated a certain degree, the direction with the longest ARL also rotates the same degree. The same thing happens to the features of the GLRLM's, which shows that the rotated pictures still preserve their run length properties.

In the third part, we consider the textural features under different combinations of direction of tilt and slant angle. After all the computation of the GLRLM's, the normalized features can be extracted. With those features, the pictures can be classified in the texture classification part. The feature properties before and after normalization are shown in Fig. 5. Those features are extracted from the second picture set, and the direction of tilt is 0 degrees.

Using the pattern clustering techniques as described above, we can get figures that have been partitioned into regions. A

typical figure is shown in Fig. 6. In this figure, one feature value is plotted against another for all the pictures. There are many possible combinations among all the features we have. The feature selection is important to separate all the pictures clearly into different categories. Take Fig. 6 as an example; each point in these figures represents a picture. The letter at each point shows what category the picture belongs to. Note that in Fig. 6, the capital letters representing two overlaid points.

We may also find in Fig. 6 that most points in these figures can be easily separated with several lines according to their categories. Those lines were drawn according to the clustering centers of the categories or category groups. Putting lines between two nearby categories or category groups, the figures can then be partitioned into regions. Using those regions in the figures, we can define the category of unknown pictures according to their features.

[Fig. 4. A typical feature value (at 0°) versus 1/perception distance curve. (a) Before normalization. (b) After normalization.]

[Fig. 5. A typical feature value (at 0°) versus slant angle curve.]

IV. CONCLUSIONS

In summary, a set of texture features based on the GLRLM's was suggested in this experiment. We derived the GLRLM's from the pictures first; then, a set of features was extracted and normalized. These features can be used in three-dimensional scene analysis where textures need to be identified according to their differences. Based on the average run length information and the classification results, the surface range as well as the surface orientation of a textured surface can be recovered. As we have shown in this section, it is obvious that the texture features based on gray level run length matrices can be used in 3-D scene analysis.
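The pattern-clustering classification used above, in which the 2-D feature figure is partitioned by lines drawn between the clustering centers of categories, amounts to assigning each unknown picture to its nearest category center. A minimal sketch of that decision rule follows; the function names and toy feature values are ours, purely illustrative, not from the paper.

```python
import numpy as np

def category_centers(points, labels):
    """Clustering center (mean 2-D feature vector) of each known category."""
    centers = {}
    for lab in set(labels):
        pts = np.array([p for p, l in zip(points, labels) if l == lab])
        centers[lab] = pts.mean(axis=0)
    return centers

def classify(point, centers):
    """Assign an unknown picture's feature vector {u1, u2} to the category
    with the nearest center; the implied region boundaries are the
    perpendicular bisectors between neighboring centers."""
    p = np.asarray(point, dtype=float)
    return min(centers, key=lambda lab: float(np.linalg.norm(p - centers[lab])))
```

In practice several such figures (pairs of features) may be consulted, as noted in Section III, before the category of an unknown picture is settled.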

[Fig. 6. A typical feature-versus-feature figure: each point is a picture, labeled with its category letter (a, b, ...).]

REFERENCES

[1] H. Kaizer, "A quantification of textures on aerial photographs," Boston Univ., Boston, MA, Tech. Note 121, AD 69484, 1955.
[2] R. M. Haralick, "Statistical and structural approaches to texture," Proc. IEEE, vol. 67, no. 5, pp. 786-804, May 1979.
[3] D. Brzakovic and J. T. Tou, "Image understanding via texture analysis," in Proc. First IEEE Conf. Artificial Intelligence Applications, 1984, pp. 585-590.
[4] R. L. Kashyap and A. Khotanzad, "A model-based method for rotation invariant texture classification," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 4, pp. 472-481, July 1986.
[5] H.-H. Loh, "Run-length features applied to 3-D perception of textures," Master's thesis, Univ. South Carolina, Columbia, SC, Dec. 1985.
[6] R. M. Haralick et al., "Texture features for image classification," IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 6, pp. 610-621, Nov. 1973.
[7] M. M. Galloway, "Texture analysis using gray level run lengths," Comput. Graphics Image Process., vol. 4, pp. 171-179, 1975.
[8] P. Brodatz, Textures. Toronto: Dover, 1966.
