
1.1 Motivation for a Currency Identification System

Statistics on eye health in India from September 2003 estimate that approximately 380,000 people in India are currently living with legal blindness or low vision (Vision India Foundation, 2003). This number is expected to double over the next 20 years as a result of the ageing population. Consequently, considerable research has been devoted to understanding the needs of visually disabled people so that they can participate fully in fundamental activities of life. One of the many issues faced by people with a large degree of vision impairment is identifying the denomination of Indian currency notes. Significant research is needed in this field, specifically addressing features that will make it possible for all visually disabled individuals to denominate Indian banknotes independently and confidentially.

A desirable approach to currency note design would permit virtually all visually disabled individuals, together with normally sighted people, to denominate banknotes without the aid of devices. It is conceivable, however, that under certain circumstances very simple, inexpensive, unobtrusive and accurate devices could be of assistance under dim lighting conditions, not only to visually disabled people but also to normally sighted individuals. No device as described above currently exists that is capable of accurately differentiating Indian polymer notes. Therefore, there exists a need for a hand-held device which is:

• Small
• Cost effective
• Accurate
• Easy to operate, and
• Power efficient.

The system described below is a prototype device that was designed and constructed in an attempt to provide vision-impaired members of the community with a small device to assist in note differentiation.

RELATED WORK:

Various approaches have been proposed for Currency geometry recognition. The proposed approaches differ mostly in the ways the features of the Currency are extracted and manipulated.
In [13] B-spline curves have been used to represent the Currency. This enables the removal of the fixed pegs utilized in most other systems. Identification is done by utilizing the differences between the curves generated by various Currency geometries and using the curves as a signature for the individual. Only the Currencys are represented using the curves; the thumb is not part of the signature. On a database of 6 images each from 20 notes, a recognition rate of 97% is achieved. The error rate in verification for the same database is 5%.
Due to their interpolation property, implicit polynomial 2D curves and 3D surfaces have been utilized in [16] for analyzing the Currency print. This method works by finding an implicit polynomial function to fit the Currency print. To keep the change in the coefficients of the function to a minimum despite slight variations in the data, new methods such as 3L fitting [3] and Fourier fitting [18] are tried instead of the traditional least squares fitting. The identification rate is found to be 95% and the verification rate 99%.
In [11] efforts have been made to utilize the vein patterns in the Currency as a biometric. This is done by taking a thermal image of the Currency and obtaining the vein pattern from it. Using the heat conduction law several features can be extracted from each feature point of the vein patterns. The FAR is found to be 3.5% while the FRR is 1.5%.
In [12] a Currency print recognition method is proposed based on the Eigen Currency technique. In this method the original Currency prints are transformed into a set of features which are the eigenvectors of the training set. A Euclidean distance classifier is used after extracting the eigenvectors from a new Currency print for Currency print recognition. On a set of about 200 people the system works with an FRR of about 1% and an FAR of around 0.03%.
Bimodal systems, combining the two biometrics of Currency geometry and Currency prints, have been realized as an effective way to improve the performance of systems using Currency geometry alone. In [10] a bimodal biometric system using fusion of the shape and texture of the Currency is proposed. Currency print authentication is done using the discrete cosine transform, and new Currency shape features are also proposed. A score-level fusion of Currency shape features and Currency prints is done using the product rule. Using either Currency shape or Currency prints alone produces high FRR and FAR, but when the two are combined to form a bimodal system both the FAR and FRR are considerably reduced. The approach is especially effective when the Currency shapes of two different individuals are very similar, as in such cases the Currency prints improve the performance remarkably. On a database of 100 users the FRR is found to be 0.6% and the FAR is found to be 0.43%.
In [4] geometric classifiers are utilized for Currency recognition. As in the proposed system, a document scanner is used to collect the Currency data, and very few restrictions are imposed on the positioning of the Currency on the scanner. A total of 30 different features are obtained from a Currency. For each individual, 3 to 5 of the person's images are used as the training set. In the 30-dimensional feature space a bounding box is found for each of these training sets, and the distance of the query image to these bounding boxes is used as the measure of similarity. The threshold is determined by experimentation on the database. On a database of 714 images from 70 different people an FRR of 6% is obtained while the FAR is 1%.
In [19] 3-D Currency modeling and gesture recognition techniques are studied. Although the Currency can be modeled in aspects other than shape, such as dynamics and kinematical structure, these are not discussed here because the proposed system uses 2-D Currency shape modeling. A geometric Currency shape model can be approximated using splines, as the surfaces involved are geometrically complicated. The model can be made very accurate, but this increases the complexity as the number of parameters and control points grows.

MODULES OF A FAKE CURRENCY DETECTION SYSTEM:

A biometric system comprises three important modules: Preprocessing, Feature Extraction and Matching. When the input data is fed into the biometric system it may be unsuitable for feature extraction.
Figure 2.1: Components of a Biometric System (Input image → Pre-Processing → Feature Extraction → Matching → Result: Pass/Fail)

This is due to the several noise elements which may creep into the data. Noise may be the result of the atmospheric conditions or the surroundings. It may also be introduced by the equipment used for collecting the data, and the users themselves may introduce some noise inadvertently. The primary job of the preprocessing module is to clean up the introduced noise so that the features can be extracted correctly. For this purpose the proposed system first runs the input image through a noise removal algorithm. The job of the preprocessing module is to prepare the data for feature extraction.
Besides noise removal this may entail other work on the input data. In the proposed system the data is entered in JPEG format, but a monochromatic image is required for feature extraction. Furthermore, only a single bit is required to represent each pixel, because the only two colors any pixel can represent are white and black. In this work a pixel with value one represents a white pixel and is referred to as a lit pixel. Similarly, a pixel with value zero represents a black pixel and is also referred to as a dark pixel. The conversion from JPEG format to a matrix containing a one-bit representation of each pixel is also a part of preprocessing. The monochromatic representation achieved using the one-bit matrix is still unsuitable for feature extraction. To extract features, an image which contains only the boundaries of the Currency is required. The preprocessing module converts the image to one containing only the boundaries of the Currency using an edge detection algorithm. The output of the preprocessing module is a matrix with rows and columns equal to the number of rows and columns of pixels in the image respectively. Each element of this matrix is either a one or a zero, representing a lit or a dark pixel respectively.
The feature extraction module is the most important module in a biometric system. The function of this module is to extract and store features from the input data. In the proposed system the length and width of the Currencys and the perimeter of the entire Currency are extracted from the input image. For each Currency one measurement of length is taken, along with two measurements of width at different positions. Finally, an approach to measure the perimeter of the Currency in one pass is discussed. The output of the feature extraction module is the measure of these features.
The last module of the biometric system is matching. Here the features extracted in the previous section are matched against the features of that individual previously stored in the database. The proposed system produces a match score based on a comparison scheme. The match score represents the closeness of the current image to the one present in the database; a higher score represents a higher closeness of the images. Based on experimentation a threshold is decided. The threshold is a value which lies in the range of the match score. For any image, if the match score is less than the threshold the image is rejected as a match to the specified database image.
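As a minimal illustration of this matching step, the Python sketch below compares a match score against a threshold; the 13-element feature vector layout, the inverse-distance score and the example values are assumptions made here for illustration, not the exact scheme of the proposed system.

    import math

    # Hypothetical 13-element feature vectors
    # (12 length/width measurements + 1 perimeter), as described above.

    def match_score(query, enrolled):
        """Closeness score: higher means the images are more similar.
        Here an (assumed) inverse of the Euclidean distance is used."""
        dist = math.sqrt(sum((q - e) ** 2 for q, e in zip(query, enrolled)))
        return 1.0 / (1.0 + dist)

    def matcher(query, enrolled, threshold):
        """Pass/Fail decision: reject when the score falls below the threshold."""
        return "Pass" if match_score(query, enrolled) >= threshold else "Fail"

    # Example usage with made-up measurements (in pixels) and threshold.
    enrolled = [310, 295, 330, 280, 52, 50, 55, 48, 60, 58, 47, 45, 1450]
    query    = [312, 294, 331, 282, 53, 50, 54, 49, 61, 57, 47, 46, 1455]
    print(matcher(query, enrolled, threshold=0.05))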
PREPROCESSING

The images are captured using a flatbed scanner. The input image is a grayscale image of the right Currency without any deformity. The input image, shown in Figure 3.1, is stored in JPEG format. In cases of standard deformity such as a missing Currency the system expresses its inability to process the image. It is also critical that the Currencys are separated from each other. However, it is not required to stretch the Currencys as far apart as possible; the Currency should be placed in a relaxed state with the Currencys separated from each other. Since features such as length and width, which are dependent on the image size and resolution, are being used, it is critical to have a uniform size of images.
The red, green and blue (RGB) values of each pixel are extracted. Since a monochromatic image is required for the proposed system, a threshold is determined. All pixels with RGB values above the threshold are considered white pixels and all pixels below the threshold are considered black pixels. Initially the threshold is set very low, very close to the RGB value of a black pixel in the image. This produces an image with a completely white Currency on a black background, as shown in Figure 3.2. Features such as Currency lengths, perimeter and area of the Currency can then be more easily extracted.

Figure 3.1: Input grayscale image

However, setting the threshold very low results in a lot of noise in the image. A good threshold is determined and then noise removal algorithms are applied to the image.
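A minimal sketch of this conversion, using Pillow and NumPy, is given below; the threshold value of 40 and the file name are placeholders, since the text only states that a suitable threshold is determined experimentally.

    import numpy as np
    from PIL import Image  # Pillow

    def to_binary_matrix(path, threshold=40):
        """Convert a grayscale JPEG scan into a one-bit matrix:
        1 = lit (white) pixel, 0 = dark (black) pixel."""
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
        return (gray > threshold).astype(np.uint8)

    # Example usage (hypothetical file name):
    # binary = to_binary_matrix("currency_scan.jpg", threshold=40)
    # print(binary.shape, int(binary.sum()), "lit pixels")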

3.1 NOISE REMOVAL:
Ideally the scanned input image should contain no noise. However, dust and dirt on the Currency and on the scanner bed, even in minute quantities, may produce differences between the actual image scanned and the Currency print. These variations may also be the result of a host of other factors, including the settings of the scanner, lighting effects, humidity in the atmosphere, etc. Unless removed, these variations adversely affect the performance of the system: the larger the degree of variation or noise, the less accurate the system. So before extracting features from the image, noise is reduced as much as possible. However, most noise removal algorithms also affect the actual features, so a balanced approach is taken such that the features remain undamaged after noise removal. The noise removal algorithm in this case utilizes the fact that the required Currency print is present in only a portion of the total image, i.e. in the lower central part of the image. The rest is simply black space with some noise. It works by trying to find an entire row consisting only of black pixels. This can be done using the binary search algorithm.
The algorithm starts from the center of the image and works its way upwards. Since binary search is used, the number of rows searched is very low. When such a row is determined, all the points above that row are set to black. This eliminates all the noise above this row. Using the binary search algorithm again, a column is identified to the left of which nothing is required for feature extraction. This is any column between the left margin and the center of the image which has no lit pixels. All the pixels to the left of this column are set to black. Similarly, a column to the right of which no features are to be extracted is determined, and all the pixels to the right of this column are set to black. This reduces the noise present in the image considerably without affecting the actual Currency print.
The remaining noise exists between the Currencys and inside the Currency perimeter. A convolution filter is applied which checks whether a white pixel is surrounded on all sides by black pixels. If that is found to be the case, the white pixel is considered to be noise and is converted to a black pixel. The size of the convolution filter is variable: first the filter uses a 3x3 template, then a 5x5 and finally a 7x7. This progressively removes larger and larger noise elements from the image.
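The neighbourhood-based part of this clean-up can be sketched as follows, operating on the one-bit matrix produced during preprocessing; the helper name remove_isolated_specks and the exact windowing rule are illustrative assumptions rather than the original implementation.

    import numpy as np

    def remove_isolated_specks(binary, sizes=(3, 5, 7)):
        """Illustrative noise filter: if the border of a size x size window is
        entirely black, any lit pixels inside it are treated as noise.
        `binary` is a 2-D array of 0 (dark) / 1 (lit) values."""
        img = binary.copy()
        rows, cols = img.shape
        for size in sizes:                          # 3x3, then 5x5, then 7x7
            half = size // 2
            for r in range(half, rows - half):
                for c in range(half, cols - half):
                    if img[r, c] != 1:
                        continue
                    win = img[r - half:r + half + 1, c - half:c + half + 1]
                    border_lit = win.sum() - win[1:-1, 1:-1].sum()
                    if border_lit == 0:             # window border is all black
                        # Blank the inner region: an isolated speck is noise.
                        img[r - half + 1:r + half, c - half + 1:c + half] = 0
        return img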

3.2 EDGE DETECTION:

The image obtained after elimination of noise contains regions of black and white pixels. In order to extract geometric features of the Currency print it is required that the image contain only edges. Consequently, the regions of white space must be converted to an image containing only the boundary of the white pixels. This is achieved by using an edge detection algorithm. The algorithm converts all pixels, excluding those at the boundary of black and white regions, to black pixels. The algorithm also has to ensure that the thickness of this boundary is as low as possible, because a thick boundary will adversely affect the accuracy of the feature detection algorithm. It is critical for any edge detection algorithm not to miss any edges. It is also important that no non-edges are recognized as edges. These two criteria define the error rate of the edge detection filter. Besides a low error rate there are two other qualities that a good edge detection filter should possess: the distance between the actual edge and the edge located by the filter should be as low as possible, and the filter should not provide multiple responses for a single edge. Canny's edge detection algorithm [6] possesses these two qualities in addition to having a low error rate.

Figure 3.3: The kernel to calculate the gradient along the X axis

Figure 3.4: The kernel to calculate the gradient along the Y axis

The Canny filter utilizes the convolution operation. A convolution operation is performed by sliding the convolution kernel over the input image. Starting at the top left corner, the kernel is moved till the end of the row, and this process is repeated for all the rows of the image. The kernel used is a 3x3 matrix. The output at any pixel location is the sum of the products of the values of each cell of the kernel with the values of the underlying pixels. Thus each movement of the kernel provides one pixel of the output image. Producing an output image of a size greater than the input image leads to complications. This happens when the convolution kernel slides to such a position where it no longer fits entirely within the image. To prevent this from occurring, the kernel is restricted to slide only till the point where it is completely within the image and no further. This, however, leaves the points on the edges unprocessed. To process the points at the edges, the image is enlarged at the beginning by inventing new input values for points where the kernel slides off the image. These points are located at the right and at the bottom of the image. These invented pixels are all assigned the value zero (black). This is not a preferred method as it distorts the output image, but in the case of a Currency it may provide an added advantage: the pixels at the right already have the value zero (black), so adding another column would not matter.
The kernels used in the edge detection algorithm are shown in Figures 3.3 and 3.4. These kernels calculate the gradients along the X and Y axes. Once the gradient is calculated, the angle along which the edge runs is determined by calculating the inverse tangent of gradx divided by grady. The inverse tangent angle can be anything between 0 and 180 degrees. Since the next pixel composing the edge has to be at 0, 45, 90, 135 or 180 degrees from the current pixel, as these are the only possible traversals from any pixel, the angle is rounded off to one of these values.
Algorithm:
Step1: Determine gradx and grady, the values returned by the kernels.
Step2: Determine the angle of the edge, theta = tan−1(gradx/grady).
Step3: Approximate theta to one of the values 0, 45, 90, 135 or 180.
Step4: Traverse along the edge in the direction of the approximated theta and set to 0 any pixel which is not along theta.
The approximation made in Step3 is rather large. This is done because a pixel has only 8 surrounding pixels and the edge has to proceed at one of these angles. A region encompassing 45 degrees is formed for each of the four angles 0, 45, 90 and 135. The theta value lies in one of these regions and is approximated to the angle in whose region it lies.
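A sketch of this gradient-and-quantization step is given below, using Sobel-style stand-ins for the kernels of Figures 3.3 and 3.4 (their exact coefficients are not reproduced here, so these values are assumptions) and SciPy's correlate with black (zero) border padding, in line with the padding just described.

    import numpy as np
    from scipy.ndimage import correlate

    # Assumed stand-ins for the kernels of Figures 3.3 and 3.4 (Sobel-style).
    KERNEL_X = np.array([[-1, 0, 1],
                         [-2, 0, 2],
                         [-1, 0, 1]], dtype=float)
    KERNEL_Y = KERNEL_X.T

    def edge_directions(image):
        """Gradient magnitude and edge angle quantized to 0/45/90/135 degrees
        (Step1 to Step3 above). Black padding is used at the borders,
        matching the zero-valued invented pixels described earlier."""
        gradx = correlate(image.astype(float), KERNEL_X, mode="constant", cval=0.0)
        grady = correlate(image.astype(float), KERNEL_Y, mode="constant", cval=0.0)
        theta = np.degrees(np.arctan2(gradx, grady)) % 180   # Step2: tan-1(gradx/grady)
        quantized = (np.round(theta / 45.0) * 45) % 180      # Step3: 180 folds back to 0
        return np.hypot(gradx, grady), quantized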

FEATURE EXTRACTION:

There are several features that can be extracted from the geometry of the Currency. Each Currency has three major lines running perpendicular to the length of the Currency. The first feature that can be extracted is the length of a Currency, which is defined as the distance between the tip of the Currency and the third and bottommost line on the Currency. The second major feature is the width of the Currency. One or more measurements can be taken for the width at varying points along the Currency. The length of the lines on the Currency can also be used as the measure of Currency width. Since the Currencys may not have uniform width, usually two or more measurements are taken for each Currency at different points. The thumb has only one clear line which bisects it; the definition of the length of the thumb is thus a little vague and varies with the system used. The diameter of the largest circle that can be inscribed inside the Currency and between the Currency's lines is also sometimes used as a feature. The perimeter of the entire Currency and of the individual Currencys can also be used as features.

Figure 4.1: The Features Extracted from the Input Image

The proposed system extracts for each Currency one measurement of length and two measurements of width. The thumb is not included in the feature extraction process. The perimeter of the entire Currency is also measured. These constitute the features based on which the proposed system authenticates the users. The most important geometric parts of the Currency for feature extraction are the Currencys. For each Currency one measurement of length and two measurements of width are taken, which makes a total of 12 features for the four Currencys. Including the perimeter brings the number of total features to 13.
In each case the tip of the Currency is identified first. Each Currency is treated as two parallel lines, with two further parallel lines running almost perpendicular to the first two. This simplifies the problem into a line detection problem. The length of each of these lines is extracted. The length of the Currency is the length of any one of the longer lines, while the lengths of the smaller lines make up the width. The features extracted from the Currency are all shown in Figure 4.1.

4.1 CURRENCY LENGTH:

The first step is to determine the top position of each Currency. This is done starting from the little Currency and moving up to the index Currency. Since the image is of the right Currency, the little Currency is the leftmost. The algorithm to determine the tip of each subsequent Currency starts by utilizing the previously found tip. Since the little Currency is the first, the algorithm starts from (0, 0), the top left corner of the image. It then finds the first lit pixel, traversing column-wise. This lit pixel is somewhere along the left boundary of the Currency. The algorithm then has to traverse along lit pixels so as to reach the tip of the Currency. During this traversal the value of the y co-ordinate is constantly decreasing, while the value of the x co-ordinate may increase or decrease. The algorithm halts when the value of the y co-ordinate can no longer decrease. It then assigns that point as the tip of the little Currency (x1, y1). The system then finds the bottom of the Currency (x2, y2) by using a line detection algorithm. The length of the Currency is then calculated as the distance between the top and bottom points.
The algorithm for line detection uses a convolution-based scheme similar to the one utilized in the Canny filter [6] used for edge detection. The convolution method is useful only for lines which are very thin, but since that is the case here it suits the needs of the system perfectly. Four separate convolution kernels are used, each to detect a line of unit pixel width at directions of 0, 45, 90 and 135 degrees. The kernels are shown in Figures 4.2 to 4.5.

Figure 4.2: Kernel for a Horizontal line

Figure 4.3: Kernel for a vertical line

Figure 4.4: Kernel for line at 45 degrees

Figure 4.5: Kernel for line at 135 degrees

The kernels are tuned to produce a high positive response for dark lines against a light background, which is the case in the system, as the edge detection algorithm leaves the Currency outlines dark and the area inside and outside of the Currencys light. A light line against a dark background results in a high negative response. In case both types of lines are to be recognized, the absolute value returned by the kernel can be used.
The algorithm for determining the length of a given Currency is given below:
Step1: Determine the tip of the Currency (x1, y1) as described earlier.
Step2: Apply the line detection kernels at the current pixel and obtain the responses.
Step3: Pick the kernel with the highest response.
Step4: Move to the pixel indicated by the kernel with the highest response, subject to the condition that the value of y should always increase, as the length is being measured from top to bottom.
Step5: If no kernel shows a positive response for which the next y value will increase, then mark the point (x2, y2) and move to Step7.
Step6: Repeat Step2 to Step5.
Step7: Obtain the length of the line as the distance between points (x1, y1) and (x2, y2).
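A condensed sketch of this traversal is given below; it reuses the best_line_direction helper from the previous sketch, and the mapping from each detected direction to a downward step is one plausible reading of "the pixel indicated by the kernel", not the exact rule used by the system.

    import math

    # Assumed mapping from a detected line direction to the next downward step
    # (dy must be positive, since the length is measured from top to bottom);
    # a horizontal line cannot continue downwards and therefore stops the walk.
    NEXT_STEP = {90: (1, 0), 45: (1, 1), 135: (1, -1), 0: None}

    def currency_length(image, tip):
        """Follow the boundary line from the tip (x1, y1) downwards
        (Step2 to Step6) and return the length between tip and bottom (Step7)."""
        x1, y1 = tip
        r, c = y1, x1                                   # row = y, column = x
        while True:
            angle, response = best_line_direction(image, r, c)
            step = NEXT_STEP[angle]
            if response <= 0 or step is None:           # Step5: cannot move further down
                break
            dr, dc = step
            if image[r + dr, c + dc] != 1:              # next pixel must be a lit pixel
                break
            r, c = r + dr, c + dc
        x2, y2 = c, r
        return math.hypot(x2 - x1, y2 - y1), (x2, y2)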
The same function can be applied to the little, ring and middle Currencys, but the index Currency has to be treated differently. This is because the line being measured is one of the two boundaries of the Currency. Starting from the little Currency, it is the boundary closest to the next Currency; however, for the index Currency the boundary closest to the previous Currency is taken. This is done because if the valleys are not used as stop points then the little and the index Currencys may show much larger lengths than actual, due to the line being considered straight enough when it continues down the Currency. So whereas the right boundary of the little, ring and middle Currencys is considered while taking the length measurements, the left boundary of the index Currency has to be considered.
The tip of the ring Currency always lies above the tip of the little Currency, so the algorithm searches only further away from the x co-ordinate of the tip of the little Currency and only up to its y co-ordinate. The middle Currency and the ring Currency have the same relation, as the middle Currency is the longest of them all. The index Currency, however, is shorter than the middle Currency, though usually longer than the little Currency. Thus the search is done for x greater than that of the middle Currency and y less than that of the little Currency. If, however, no tip is found, it is assumed that the little Currency is longer than the index Currency and the search is made below the tip of the little Currency. Once the tips of the Currencys are found they are passed on to the function which calculates the lengths of the Currencys.

4.2 WIDTH:

For Currency widths, two fixed points are found on the Currency boundary and the width is taken at these fixed points. To find two fixed points on the Currency boundary, the boundary is treated as a line and two points which divide the line into three equal parts are found. The parameters for the required line are found during the calculation of the length of the Currency. While the algorithm is traversing through the pixels for calculating the length, enough information can be gathered from them to interpolate a straight line along these pixels. The least squares method is used for the interpolation. All these values are determined in only one pass through the Currency, which may improve the system performance considerably. From the algorithm for the determination of Currency length, the values of the tip of the Currency (x1, y1) and the bottom (x2, y2) are obtained. By dropping perpendiculars from these points to the line obtained using the least squares method, the starting and ending points of the line are determined. With the starting and ending points known, the line can be divided into three equal parts, generating two more points (x3, y3) and (x4, y4). A line perpendicular to the interpolated line and starting at (x3, y3) is drawn towards the other boundary of the Currency. The width is considered to be the distance between the starting point (x3, y3) and the point where the perpendicular meets the other boundary of the Currency.
To determine the point where the perpendicular meets the other boundary of the Currency, the algorithm traverses along a line parallel to the X axis using the point (x3, y3) as the starting point. The algorithm traverses till it encounters a lit pixel. This pixel is on the other boundary of the Currency. The algorithm then attempts to find a pixel closest to the perpendicular line by considering all lit pixels within a certain distance from the current pixel; it considers all pixels which are at a distance of two pixels or less from the current pixel. The algorithm iterates till the pixel being considered is at the minimum distance from the perpendicular line.
The distance between this pixel and (x3, y3) gives the width of the Currency. The algorithm for determining the width of a given Currency is shown below:
Step1: Draw a line perpendicular to the interpolated line starting at (x3, y3).
Step2: Traverse to the right parallel to the x axis until a lit pixel is encountered.
Step3: Take a 5x5 pixel section of the image with the pixel encountered in Step2 as the center.
Step4: For each lit pixel calculate the distance from the perpendicular line.
Step5: The pixel for which the distance to the line is minimum is the new center for the 5x5 section.
Step6: Repeat Steps 4 and 5 till the minimum stops changing.
Step7: Obtain the Currency width as the distance between the pixel at the center and (x3, y3).
This algorithm is repeated for (x4, y4), the other point on the Currency at which the width is to be calculated. For the little Currency, middle Currency and ring Currency the algorithm traverses to the right in Step2, because for these Currencys the left boundary is considered while calculating the length. The case for the index Currency is, however, the reverse: for the index Currency the right boundary is considered while calculating the length, so the traversal by the Currency width algorithm in Step2 is to the left. The rest of the algorithm remains unchanged for the index Currency.
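The width computation can be sketched as follows, assuming the boundary pixels gathered during the length pass are available; the least squares fit uses NumPy's polyfit, and the iterative refinement of Steps 3 to 6 is omitted for brevity, so this is an approximation of the described procedure rather than its exact implementation.

    import numpy as np

    def width_at_third_points(image, boundary_pixels):
        """Fit a least squares line through the traversed boundary pixels,
        take the two points dividing it into three equal parts, and measure
        the distance to the opposite boundary at each of them.
        `boundary_pixels` is a list of (x, y) points from the length pass."""
        pts = np.asarray(boundary_pixels, dtype=float)
        # Least squares fit of x = m*y + b (the boundary is roughly vertical).
        m, b = np.polyfit(pts[:, 1], pts[:, 0], 1)

        def project(p):
            """Foot of the perpendicular dropped from p onto the fitted line."""
            x0, y0 = p
            y = (y0 + m * (x0 - b)) / (1 + m * m)
            return np.array([m * y + b, y])

        start, end = project(pts[0]), project(pts[-1])
        thirds = [start + (end - start) * t for t in (1 / 3, 2 / 3)]   # (x3,y3), (x4,y4)

        widths = []
        for x3, y3 in thirds:
            c, r = int(round(x3)) + 1, int(round(y3))
            # Skip the current boundary's own lit pixels first ...
            while c < image.shape[1] and image[r, c] == 1:
                c += 1
            # ... then Step2: traverse right until the opposite boundary is met.
            while c < image.shape[1] and image[r, c] != 1:
                c += 1
            widths.append(float(np.hypot(c - x3, r - y3)))
        return widths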

4.3 PERIMETER:

The perimeter of the Currency is another significant feature. To determine the perimeter one has to move along the outer boundary of the Currency, forming a closed loop. The length of the loop is the perimeter. The lines of the Currency may pose a complication: the problem is to find the loop with the largest perimeter, as the lines on the Currencys and the outer boundary may form several different loops. The loop with the largest perimeter is the actual perimeter of the Currency print. The number of loops rises with the number of inner lines considered. An algorithm is proposed here which always takes the path with the best chance of returning the maximum perimeter. While traversing to determine the perimeter, complications arise only at a fork, a place where more than one choice is available. This happens when two or more Currency lines meet at a point; the boundary here is also considered a Currency line. On a Currency a fork would be between a line on the outer boundary and one of the inner lines. A fork may also occur between only inner Currency lines and not the boundary. If, however, the algorithm begins from the boundary and always makes the correct choice at a fork, it will never encounter a fork where only inner Currency lines meet. Utilizing the fact that the basic structure of the Currency is constant, the probability of making the correct choice when encountering a fork is made unity. Thus only one pass through the entire outer boundary is sufficient to find the perimeter, since no detours are taken.

Figure 4.6: Probability matrix

Since at any point a pixel is surrounded by eight pixels, the algorithm can pick any one of these eight as the next pixel. But one of these eight pixels is the one from which the current pixel was reached. That pixel is eliminated from consideration, as moving back there would make the algorithm loop indefinitely. This leaves seven pixels from which to pick the next destination.
Figure 4.7: Priority matrix

This is shown in Figure 4.6, where 'P' is the current position and '0/1' marks a position that the algorithm can move to next; '0/1' is used because any of these positions can be either 0 or 1. The next position that the algorithm considers can only be a position containing 1. There have to be a minimum of two such locations: one from which position P was reached, i.e. the previous position of the algorithm, and the second being the next position of P. In case there are only two 1's among the eight possibilities there is no conflict and the algorithm proceeds to the next position of P. If, however, more than two 1's exist among the eight possibilities, a conflict between two or more positions arises and the algorithm has to pick a single position as the next position of P.
If the traversal starts from the left boundary of the Currency then the leftmost pixel is always chosen to traverse the perimeter. If, however, the traversal starts from the right boundary, the rightmost pixel is always selected to obtain the perimeter. The proposed system starts from the left boundary of the Currency and consequently chooses the leftmost pixel at each juncture where a choice is to be made. This choice is made by assigning priorities to each position as shown in Figure 4.7, where one depicts the highest priority and eight the lowest. Now if there is any conflict the algorithm simply chooses the position with the highest priority. The priorities are assigned in such a way that they always provide the path which leads to the maximum perimeter. The priority system works fine while the algorithm traces upwards. However, when the tip of the Currency is reached the algorithm must trace towards the valley between two Currencys. Here the priority matrix has to be modified to produce the correct path. This is exactly similar to the problem of walking on a road: if one turns 180 degrees, what is initially on the left Currency side is now on the right Currency side and vice versa. The algorithm, however, has to determine when the turn is made so that the matrix can be changed accordingly. Whenever the algorithm is forced to choose a pixel with a priority greater than or equal to six, a turning point is recognized. The matrix is then reversed so as to still choose the leftmost pixel. This is done by taking a mirror image of the priority matrix along the central point P. The mirror of the original matrix is shown in Figure 4.8. This matrix is used until another turning point is reached, at which point the algorithm reverts to using the original priority matrix.

Figure 4.8: Mirror of the Priority Matrix along P

Algorithm for determining the Perimeter:

Step1: Starting from the left bottom, find the lower boundary of the left side of the Currency.
Step2: Use the priority matrix to determine the next pixel.
Step3: Save the priority of the matrix entry used as value.
Step4: Set the value of the current pixel to 0 and move to the next pixel chosen by the matrix.
Step5: If value >= 6 then use the mirror of the priority matrix.
Step6: Repeat Step3 to Step5 till the rightmost end is reached.
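Since the priority matrices of Figures 4.6 to 4.8 are not reproduced in the text, the sketch below assumes one plausible left-favouring priority ordering of the eight neighbours; it illustrates the mechanics of choosing the highest-priority lit neighbour, blanking visited pixels, and switching to the mirrored matrix whenever a priority of six or more is used, rather than the exact matrices of the proposed system.

    import numpy as np

    # Assumed left-favouring priority of the eight neighbours around P
    # (1 = highest, 8 = lowest); offsets are (row, col) relative to P.
    PRIORITY = {(0, -1): 1, (-1, -1): 2, (-1, 0): 3, (-1, 1): 4,
                (0, 1): 5, (1, 1): 6, (1, 0): 7, (1, -1): 8}
    # Mirror along the central point P (Figure 4.8): negate both offsets.
    MIRROR = {(-dr, -dc): p for (dr, dc), p in PRIORITY.items()}

    def trace_perimeter(image, start):
        """Walk the outer boundary, counting steps as the perimeter (in pixels).
        Visited pixels are set to 0 so the walk never doubles back (Step4)."""
        img = image.copy()
        r, c = start
        matrix = PRIORITY
        perimeter = 0
        while True:
            candidates = [(p, dr, dc) for (dr, dc), p in matrix.items()
                          if img[r + dr, c + dc] == 1]
            if not candidates:                       # rightmost end reached
                return perimeter
            p, dr, dc = min(candidates)              # highest priority = smallest number
            img[r, c] = 0                            # Step4: blank the current pixel
            r, c = r + dr, c + dc
            perimeter += 1
            if p >= 6:                               # Step5: turning point detected
                matrix = MIRROR if matrix is PRIORITY else PRIORITY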

CONCLUSION AND FUTURE WORK:

The importance of fake currency detection as a reliable means of meeting the security concerns of today's information and network based society cannot be overstated. The proposed work shows how fake currency notes can be detected using very simple algorithms.
For huge databases the search takes a long time. Colour is such a distinct feature that it can be used as an initial classifier so as to narrow the search space in the database considerably.

BIBLIOGRAPHY:

[1] Individuality of Currencywriting: A validation study. In ICDAR '01: Proceedings of the Sixth International Conference on Document Analysis and Recognition, page 106, Washington, DC, USA, 2001. IEEE Computer Society.
[2] Francesco Bergadano, Daniele Gunetti, and Claudia Picardi. User authentication through keystroke dynamics. ACM Trans. Inf. Syst. Secur., 5(4):367–397, 2002.
[3] Michael M. Blane, Zhibi Lei, Hakan Civi, and David B. Cooper. The 3L algorithm for fitting implicit polynomial curves and surfaces to data. IEEE Trans. Pattern Anal. Mach. Intell., 22(3):298–313, 2000.
[4] Y. Bulatov, S. Jambawalikar, P. Kumar, and S. Sethia. Currency recognition using geometric classifiers. 1999.
[5] M. Burge and W. Burger. Ear biometrics. In Personal Identification in Networked Society. Kluwer Academic, Boston, 1999.
[6] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8:679–698, 1986.
[7] James B. Hayfron-Acquah, Mark S. Nixon, and John N. Carter. Automatic gait recognition by symmetry analysis. In AVBPA, pages 272–277, 2001.
[8] Anil K. Jain, Sharath Pankanti, Salil Prabhakar, and Arun Ross. Recent advances in fingerprint verification. In AVBPA, pages 182–191, 2001.
[9] A. Kholmatov. Biometric identity verification using on-line & off-line signature verification. Master's thesis, Sabanci University, 2003.
[10] Ajay Kumar and David Zhang. Integrating shape and texture for Currency verification. In ICIG '04: Proceedings of the Third International Conference on Image and Graphics (ICIG'04), pages 222–225, Washington, DC, USA, 2004. IEEE Computer Society.
[11] Chih-Lung Lin and Kuo-Chin Fan. Biometric verification using thermal images of palm-dorsa vein patterns. IEEE Trans. Circuits Syst. Video Techn., 14(2):199–213.
[12] Guangming Lu, David Zhang, and Kuanquan Wang. Palmprint recognition using eigenpalms features. Pattern Recogn. Lett., 24(9-10):1463–1467, 2003.
[13] YingLiang Ma, Frank Pollick, and W. Terry Hewitt. Using B-spline curves for Currency recognition. International Conference on Pattern Recognition, 03:274–277, 2004.
[14] A. Malaviya. A fuzzy online Currencywriting recognition system. 2nd International Conference on Fuzzy Set Theory and Technology.
[15] Hikaru Morita, D. Sakamoto, Tetsu Ohishi, Yoshimitsu Komiya, and Takashi Matsumoto. On-line signature verifier incorporating pen position, pen pressure, and pen inclination trajectories. In AVBPA, pages 318–323, 2001.
[16] Cenker Oden, Vedat Taylan Yildiz, Hikmet Kirmizitas, and Burak Buke. Currency recognition using implicit polynomials and geometric features. In AVBPA '01: Proceedings of the Third International Conference on Audio- and Video-Based Biometric Person Authentication, pages 336–341, London, UK, 2001. Springer-Verlag.
[17] S. Rieck, E. Schukat-Talamazzini, T. Kuhn, S. Kunzmann, and E. Nöth. Automatic transformation of speech databases for continuous speech recognition. In P. Laface and R. De Mori, editors, Speech Recognition and Understanding. Recent Advances, Trends, and Applications, NATO ASI Series F75, pages 181–186. Springer, 1992.
[18] Sait Sener and Mustafa Unel. Affine invariant fitting of algebraic curves using Fourier descriptors. Pattern Anal. Appl., 8(1):72–83, 2005.
[19] Y. Wu and T. Huang. Human Currency modeling, analysis and animation in the context of HCI. 1999.
[20] Yong Zhu, Tieniu Tan, and Yunhong Wang. Biometric personal identification based on Currencywriting. International Conference on Pattern Recognition, 02:2797, 2000.
