The blood vessels are the part of the circulatory system that transports blood throughout the body.
There are three major types of blood vessels: the arteries, which carry the blood away from the
heart; the capillaries, which enable the actual exchange of water and chemicals between the
blood and the tissues; and the veins, which carry blood from the capillaries back toward the
heart.
Anatomy
Arteries and veins have the same three-layered structure; from inside to outside, the layers are:
Tunica intima (the thinnest layer): a single layer of simple squamous endothelial cells
Tunica media (the thickest layer): circularly arranged elastic fibers, connective tissue, and
polysaccharide substances. The second and third layers are separated by another thick
elastic band called the external elastic lamina. The tunica media may (especially in arteries)
be rich in vascular smooth muscle, which controls the caliber of the vessel.
Tunica adventitia: entirely made of connective tissue. It also contains nerves that supply
the vessel as well as nutrient capillaries (vasa vasorum) in the larger blood vessels.
Capillaries consist of little more than a layer of endothelium and occasional connective tissue.
When blood vessels connect to form a region of diffuse vascular supply it is called an
anastomosis (pl. anastomoses). Anastomoses provide critical alternative routes for blood to flow
in case of blockages.
Types
[Figure: a blood vessel with an erythrocyte (E) within its lumen, endothelial cells forming its tunica intima, and pericytes forming its tunica adventitia.]
Arteries
o Branches of the aorta, such as the carotid artery, the subclavian artery, the celiac
trunk, the mesenteric arteries, the renal artery and the iliac artery.
Arterioles
Venules
Veins
o Large collecting vessels, such as the subclavian vein, the jugular vein, the renal vein and the iliac vein.
o Venae cavae (the 2 largest veins, carry blood into the heart)
They are roughly grouped as arterial and venous, determined by whether the blood in them is
flowing away from (arterial) or toward (venous) the heart. The term "arterial blood" is
nevertheless used to indicate blood high in oxygen, although the pulmonary artery carries
"venous blood" and blood flowing in the pulmonary vein is rich in oxygen. This is because they
are carrying the blood to and from the lungs, respectively, to be oxygenated.
Physiology
Blood vessels do not actively engage in the transport of blood (they have no appreciable
peristalsis), but arteries - and veins to a degree - can regulate their inner diameter by contraction
of the muscular layer. This changes the blood flow to downstream organs, and is determined by
the autonomic nervous system. Vasodilation and vasoconstriction are also used antagonistically
as methods of thermoregulation.
Oxygen (bound to hemoglobin in red blood cells) is the most critical nutrient carried by the
blood. In all arteries apart from the pulmonary artery, hemoglobin is highly saturated (95-100%)
with oxygen. In all veins apart from the pulmonary vein, the hemoglobin is desaturated at about
75%. Blood pressure in vessels is traditionally expressed in millimetres of mercury (1
mmHg = 133 Pa). In the arterial system, this is usually around 120 mmHg systolic (high pressure
wave due to contraction of the heart) and 80 mmHg diastolic (low pressure wave). In contrast,
pressures in the venous system are constant and rarely exceed 10 mmHg.
Vasoconstriction is the narrowing of blood vessels (a reduction in cross-sectional area) by
contracting the vascular smooth muscle in the vessel walls. It is regulated by
vasoconstrictors (agents that cause vasoconstriction). These include paracrine factors (e.g.
prostaglandins), a number of hormones (e.g. vasopressin and angiotensin) and neurotransmitters
(e.g. epinephrine) from the nervous system. Vasodilation is the opposite process, mediated by
antagonistically acting vasodilators; the most prominent vasodilator is nitric oxide (termed
endothelium-derived relaxing factor for this reason).
Permeability of the endothelium is pivotal in the release of nutrients to the tissue. It is also
increased in inflammation in response to histamine, prostaglandins and interleukins, which leads
to most of the symptoms of inflammation (swelling, redness, warmth and pain).
Role in disease
Blood vessels play a role in virtually every medical condition. Cancer, for example, cannot
progress unless the tumor causes angiogenesis (formation of new blood vessels) to supply the
malignant cells' metabolic demand. Atherosclerosis, the formation of lipid lumps (atheromas) in
the blood vessel wall, is the most common cardiovascular disease, the main cause of death in the
Western world.
Blood vessel permeability is increased in inflammation. Damage, due to trauma or
spontaneously, may lead to haemorrhage due to mechanical damage to the vessel endothelium.
In contrast, occlusion of the blood vessel by atherosclerotic plaque, by an embolised blood clot
or a foreign body leads to downstream ischemia (insufficient blood supply) and possibly
necrosis. Vessel occlusion tends to be a positive feedback system; an occluded vessel creates
eddies in the normally laminar flow or plug flow blood currents. These eddies create abnormal
fluid velocity gradients which push blood elements such as cholesterol or chylomicron bodies to
the endothelium. These deposit onto the arterial walls, which are already partially occluded, and
build upon the blockage.
Image segmentation
In computer vision, image segmentation refers to the process of partitioning a digital image into
multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to
simplify and/or change the representation of an image into something that is more meaningful
and easier to analyze. Image segmentation is typically used to locate objects and boundaries
(lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a
label to every pixel in an image such that pixels with the same label share certain visual
characteristics.
The result of image segmentation is a set of segments that collectively cover the entire image, or
a set of contours extracted from the image (see edge detection). Each of the pixels in a region are
similar with respect to some characteristic or computed property, such as color, intensity, or
texture. Adjacent regions are significantly different with respect to the same characteristic(s).
Applications
Medical Imaging
o Computer-guided surgery
o Diagnosis
o Treatment planning
Face recognition
Fingerprint recognition
Machine vision
Several general-purpose algorithms and techniques have been developed for image
segmentation. Since there is no general solution to the image segmentation problem, these
techniques often have to be combined with domain knowledge in order to effectively solve an
image segmentation problem for a specific problem domain.
Clustering methods
The K-means algorithm is an iterative technique that is used to partition an image into K clusters.
1. Pick K cluster centers, either randomly or based on some heuristic
2. Assign each pixel in the image to the cluster that minimizes the variance between the pixel and the cluster center
3. Re-compute the cluster centers by averaging all of the pixels in the cluster
4. Repeat steps 2 and 3 until convergence is attained (e.g. no pixels change clusters)
In this case, variance is the squared or absolute difference between a pixel and a cluster center.
The difference is typically based on pixel color, intensity, texture, and location, or a weighted
combination of these factors.
This algorithm is guaranteed to converge, but it may not return the optimal solution. The quality
of the solution depends on the initial set of clusters and the value of K.
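As an illustration of steps 1-4, here is a minimal sketch in Python/NumPy. The function name, the use of intensity as the only feature, and the convergence test are assumptions made for this example, not a standard library interface.
```python
import numpy as np

def kmeans_segment(image, k=3, max_iter=100, seed=0):
    """Partition a grayscale image into k clusters (steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)          # feature: intensity only
    # Step 1: pick k initial cluster centers at random.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 2: assign each pixel to the nearest center (squared difference).
        distances = (pixels - centers.T) ** 2            # shape (n_pixels, k)
        labels = distances.argmin(axis=1)
        # Step 3: recompute each center as the mean of its assigned pixels.
        new_centers = np.array([pixels[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        # Step 4: stop when the centers (and hence the labels) no longer change.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels.reshape(image.shape)
```
Using color, texture, or pixel coordinates as extra feature columns, possibly weighted, would change only the construction of the pixels array.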
In statistics and machine learning, the k-means algorithm is a clustering algorithm that partitions n
objects into k clusters, where k < n. It is similar to the expectation-maximization algorithm for
mixtures of Gaussians in that they both attempt to find the centers of natural clusters in the data.
The model requires that the object attributes correspond to elements of a vector space. The
objective it tries to achieve is to minimize total intra-cluster variance, or, the squared error
function. K-means clustering was invented in 1956. The most common form of the algorithm
uses an iterative refinement heuristic known as Lloyd's algorithm. Lloyd's algorithm starts by
partitioning the input points into k initial sets, either at random or using some heuristic data. It
then calculates the mean point, or centroid, of each set. It constructs a new partition by
associating each point with the closest centroid. Then the centroids are recalculated for the new
clusters, and the algorithm is repeated by alternate application of these two steps until convergence,
which is obtained when the points no longer switch clusters (or alternatively centroids are no
longer changed). Lloyd's algorithm and k-means are often used synonymously, but in reality
Lloyd's algorithm is a heuristic for solving the k-means problem, as with certain combinations of
starting points and centroids, Lloyd's algorithm can in fact converge to the wrong answer. Other
variations exist, but Lloyd's algorithm has remained popular, because it converges extremely
quickly in practice. In terms of performance the algorithm is not guaranteed to return a global
optimum. The quality of the final solution depends largely on the initial set of clusters, and may,
in practice, be much poorer than the global optimum. Since the algorithm is extremely fast, a
common method is to run the algorithm several times and return the best clustering found. A
drawback of the k-means algorithm is that the number of clusters k is an input parameter. An
inappropriate choice of k may yield poor results. The algorithm also assumes that the variance is
an appropriate measure of cluster scatter.
Histogram-based methods
Histogram-based methods are very efficient when compared to other image segmentation
methods because they typically require only one pass through the pixels. In this technique, a
histogram is computed from all of the pixels in the image, and the peaks and valleys in the
histogram are used to locate the clusters in the image.[1] Color or intensity can be used as the
measure.
A refinement of this technique is to recursively apply the histogram-seeking method to clusters
in the image in order to divide them into smaller clusters. This is repeated with smaller and
smaller clusters until no more clusters are formed. One disadvantage of the histogram-seeking
method is that it may be difficult to identify significant peaks and valleys in the image. In this
technique of image classification, distance metric and integrated region matching are familiar.
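A sketch of the basic idea, assuming a bimodal image; the function name, the smoothing width, and the minimum peak separation are illustrative choices, not part of the method as usually stated.
```python
import numpy as np

def valley_threshold(image, bins=256, min_sep=10):
    """One pass over the pixels: build a histogram, find the two main
    peaks, and threshold at the deepest valley between them."""
    hist, edges = np.histogram(image, bins=bins)
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")   # light smoothing
    p1 = int(hist.argmax())                                    # highest peak
    masked = hist.copy()
    masked[max(0, p1 - min_sep):p1 + min_sep] = 0              # suppress peak 1
    p2 = int(masked.argmax())                                  # second peak
    a, b = sorted((p1, p2))
    valley = a + int(hist[a:b + 1].argmin())                   # deepest valley
    return image > edges[valley + 1]                           # binary segmentation
```
The recursive refinement described above would simply call this routine again on each of the two resulting clusters.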
Edge detection
Edge detection is a well-developed field on its own within image processing. Region boundaries
and edges are closely related, since there is often a sharp adjustment in intensity at the region
boundaries. Edge detection techniques have therefore been used as the base of another
segmentation technique.
The edges identified by edge detection are often disconnected. To segment an object from an
image, however, one needs closed region boundaries.
Region growing methods
The first region growing method was the seeded region growing method. This method takes a set
of seeds as input along with the image. The seeds mark each of the objects to be segmented. The
regions are iteratively grown by comparing all unallocated neighbouring pixels to the regions.
The difference between a pixel's intensity value and the region's mean, δ, is used as a measure of
similarity. The pixel with the smallest difference measured this way is allocated to the respective
region. This process continues until all pixels are allocated to a region.
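A simplified sketch of the idea, assuming 4-connectivity and grayscale input. To keep the code short, each neighbour's delta is computed against the region mean at the time the neighbour is discovered rather than continuously re-evaluated, a slight departure from the description above.
```python
import heapq
import numpy as np

def grow_regions(image, seeds):
    """Grow one region per seed pixel, always allocating the unallocated
    neighbour with the smallest delta = |intensity - region mean|."""
    labels = np.zeros(image.shape, dtype=int)               # 0 = unallocated
    sums, counts, heap = {}, {}, []
    for region, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = region
        sums[region], counts[region] = float(image[r, c]), 1
        for nb in neighbours(image.shape, r, c):
            heapq.heappush(heap, (0.0, nb, region))
    while heap:
        delta, (r, c), region = heapq.heappop(heap)
        if labels[r, c]:
            continue                                        # already allocated
        labels[r, c] = region
        sums[region] += float(image[r, c])
        counts[region] += 1
        mean = sums[region] / counts[region]
        for nb in neighbours(image.shape, r, c):
            if not labels[nb]:
                heapq.heappush(heap, (abs(float(image[nb]) - mean), nb, region))
    return labels

def neighbours(shape, r, c):
    """4-connected neighbours inside the image bounds."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < shape[0] and 0 <= c + dc < shape[1]:
            yield (r + dr, c + dc)
```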
Seeded region growing requires seeds as additional input. The segmentation results are
dependent on the choice of seeds. Noise in the image can cause the seeds to be poorly placed.
Unseeded region growing is a modified algorithm that doesn't require explicit seeds. It starts off
with a single region A1 – the pixel chosen here does not significantly influence final
segmentation. At each iteration it considers the neighbouring pixels in the same way as seeded
region growing. It differs from seeded region growing in that if the minimum δ is less than a
predefined threshold T then it is added to the respective region Aj. If not, then the pixel is
considered significantly different from all current regions Ai and a new region An+1 is created
with this pixel.
One variant of this technique, proposed by Haralick and Shapiro (1985), is based on pixel
intensities. The mean and scatter of the region and the intensity of the candidate pixel is used to
compute a test statistic. If the test statistic is sufficiently small, the pixel is added to the region,
and the region’s mean and scatter are recomputed. Otherwise, the pixel is rejected, and is used to
form a new region.
Level set methods
Curve propagation is a popular technique in image analysis for object extraction, object tracking,
stereo reconstruction, etc. The central idea behind such an approach is to evolve a curve towards
the lowest potential of a cost function, where its definition reflects the task to be addressed and
imposes certain smoothness constraints. Lagrangian techniques are based on parameterizing the
contour according to some sampling strategy and then evolve each element according to image
and internal terms. While such a technique can be very efficient, it suffers from various
limitations like deciding on the sampling strategy, estimating the internal geometric properties of
the curve, changing its topology, addressing problems in higher dimensions, etc.
The level set method was initially proposed to track moving interfaces by Osher and Sethian in
1988 and has spread across various imaging domains in the late nineties. It can be used to
efficiently address the problem of curve/surface propagation in an implicit manner. The
central idea is to represent the evolving contour using a signed function, where its zero level
corresponds to the actual contour. Then, according to the motion equation of the contour, one can
easily derive a similar flow for the implicit surface that when applied to the zero-level will reflect
the propagation of the contour. The level set method encodes numerous advantages: it is implicit,
parameter free, provides a direct way to estimate the geometric properties of the evolving
structure, can change the topology and is intrinsic. It can be used to define an
optimization framework, as proposed by Zhao, Merriman and Osher in 1996. Therefore, one can
conclude that it is a very convenient framework for addressing numerous applications of computer
vision and medical image analysis.[4] Furthermore, research into various level set data structures
has led to very efficient implementations of this method.
Graph partitioning methods
Graph partitioning methods can effectively be used for image segmentation. In these methods,
the image is modeled as a weighted, undirected graph. Usually a pixel or a group of pixels are
associated with nodes and edge weights define the (dis)similarity between the neighborhood
pixels. The graph (image) is then partitioned according to a criterion designed to model "good"
clusters. Each partition of the nodes (pixels) output from these algorithms is considered an
object segment in the image.[5] Some popular algorithms of this category are normalized cuts,
random walker, minimum cut, isoperimetric partitioning and minimum spanning tree-based
segmentation.
Watershed transformation
The watershed transformation considers the gradient magnitude of an image as a topographic
surface. Pixels having the highest gradient magnitude intensities (GMIs) correspond to watershed
lines, which represent the region boundaries. Water placed on any pixel enclosed by a common
watershed line flows downhill to a common local intensity minimum (LIM). Pixels draining to a
common minimum form a catch basin, which represents a segment.
Model-based segmentation
The central assumption of such an approach is that structures of interest/organs have a repetitive
form of geometry. Therefore, one can seek a probabilistic model that explains the
variation of the shape of the organ and then when segmenting an image impose constraints using
this model as prior. Such a task involves (i) registration of the training examples to a common
pose, (ii) probabilistic representation of the variation of the registered samples, and (iii)
statistical inference between the model and the image. State of the art methods in the literature
for knowledge-based segmentation involve active shape and appearance models, active contours
and deformable templates.
Multi-scale segmentation
Image segmentations are computed at multiple scales in scale-space and sometimes propagated
from coarse to fine scales; see scale-space segmentation.
Segmentation criteria can be arbitrarily complex and may take into account global as well as
local criteria. A common requirement is that each region must be connected in some sense.
Witkin's seminal work in scale space included the notion that a one-dimensional signal could be
unambiguously segmented into regions, with one scale parameter controlling the scale of
segmentation.
A key observation is that the zero-crossings of the second derivatives (minima and maxima of
the first derivative or slope) of multi-scale-smoothed versions of a signal form a nesting tree,
which defines hierarchical relations between segments at different scales. Specifically, slope
extrema at coarse scales can be traced back to corresponding features at fine scales. When a
slope maximum and slope minimum annihilate each other at a larger scale, the three segments
that they separated merge into one segment, thus defining the hierarchy of segments.
There have been numerous research works in this area, out of which a few have now reached a
state where they can be applied either with interactive manual intervention (usually with
application to medical imaging) or fully automatically. The following is a brief overview of
some of the main research ideas that current approaches are based upon.
The nesting structure that Witkin described is, however, specific for one-dimensional signals and
does not trivially transfer to higher-dimensional images. Nevertheless, this general idea has
inspired several other authors to investigate coarse-to-fine schemes for image segmentation.
Koenderink proposed to study how iso-intensity contours evolve over scales and this approach
was investigated in more detail by Lifshitz and Pizer. Unfortunately, however, the intensity of
image features changes over scales, which implies that it is hard to trace coarse-scale image
features to finer scales.
Lindeberg studied the problem of linking local extrema and saddle points over scales, and
proposed an image representation called the scale-space primal sketch which makes explicit the
relations between structures at different scales, and also makes explicit which image features are
stable over large ranges of scale, including locally appropriate scales for those features. Bergholm
proposed to detect edges at coarse scales in scale-space and then trace them back to finer scales
with manual choice of both the coarse detection scale and the fine localization scale.
Gauch and Pizer studied the complementary problem of ridges and valleys at multiple scales and
developed a tool for interactive image segmentation based on multi-scale watersheds. The use of
multi-scale watershed with application to the gradient map has also been investigated by Olsen
and Nielsen and been carried over to clinical use by Dam. Vincken et al. proposed a hyperstack
for defining probabilistic relations between image structures at different scales. The use of stable
image structures over scales has been furthered by Ahuja[20] and his co-workers into a fully
automated system.
More recently, these ideas for multi-scale image segmentation by linking image structures over
scales have been picked up by Florack and Kuijper. Bijaoui and Rué associate structures detected
in scale-space above a minimum noise threshold into an object tree which spans multiple scales
and corresponds to a kind of feature in the original signal. Extracted features are accurately
reconstructed using an iterative conjugate gradient matrix method.
Semi-automatic segmentation
In this kind of segmentation, the user outlines the region of interest with mouse clicks, and
algorithms are applied so that the path that best fits the edge of the image is shown.
Techniques like SIOX, Livewire, or Intelligent Scissors are used in this kind of segmentation.
Neural network segmentation
Neural network segmentation relies on processing small areas of an image using an artificial
neural network or a set of neural networks.[23] After such processing the decision-making
mechanism marks the areas of an image accordingly to the category recognized by the neural
network. A type of network designed especially for this is the Kohonen map.
Pulse-Coupled Neural Networks (PCNNs) are neural models proposed by modeling a cat’s visual
cortex and developed for high-performance biomimetic image processing. In 1989, Eckhorn
introduced a neural model to emulate the mechanism of cat’s visual cortex. The Eckhorn model
provided a simple and effective tool for studying small mammal’s visual cortex, and was soon
recognized as having significant application potential in image processing. In 1994, the Eckhorn
model was adapted to be an image processing algorithm by Johnson, who termed this algorithm
Pulse-Coupled Neural Network. Over the past decade, PCNNs have been utilized for a variety of
image processing applications, including image segmentation, feature generation, face
extraction, motion detection, region growing, noise reduction, and so on. A PCNN is a two-
dimensional neural network. Each neuron in the network corresponds to one pixel in an input
image, receiving its corresponding pixel’s color information (e.g. intensity) as an external
stimulus. Each neuron also connects with its neighboring neurons, receiving local stimuli from
them. The external and local stimuli are combined in an internal activation system, which
accumulates the stimuli until it exceeds a dynamic threshold, resulting in a pulse output. Through
iterative computation, PCNN neurons produce temporal series of pulse outputs. The temporal
series of pulse outputs contain information of input images and can be utilized for various image
processing applications, such as image segmentation and feature generation. Compared with
conventional image processing means, PCNNs have several significant merits, including
robustness against noise, independence of geometric variations in input patterns, and the
capability of bridging minor intensity variations in input patterns.
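The following is a much-simplified PCNN iteration in Python/NumPy, intended only to make the description above concrete: one neuron per pixel, 4-connected local stimuli, a multiplicative internal activation, and a dynamic threshold that decays exponentially and jumps when the neuron fires. The parameter values and the wrap-around treatment of image borders (np.roll) are arbitrary illustrative choices, not part of any published model.
```python
import numpy as np

def pcnn(stimulus, steps=10, beta=0.2, v_theta=20.0, alpha_theta=0.3):
    """Return the temporal series of binary pulse images, one per step."""
    Y = np.zeros_like(stimulus)                   # pulse outputs
    theta = np.full_like(stimulus, v_theta)       # dynamic thresholds
    pulses = []
    for _ in range(steps):
        # Local stimulus: sum of the 4-connected neighbours' previous pulses.
        L = sum(np.roll(Y, shift, axis)
                for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)))
        U = stimulus * (1.0 + beta * L)           # internal activation
        Y = (U > theta).astype(stimulus.dtype)    # pulse when threshold exceeded
        theta = theta * np.exp(-alpha_theta) + v_theta * Y   # decay, then recharge
        pulses.append(Y)
    return pulses
```
Pixels with similar intensities tend to fire in the same iterations, so each binary pulse image (or the iteration at which a pixel first fires) can be read as a segmentation.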
Fuzzy Logic
Fuzzy logic is a form of multi-valued logic derived from fuzzy set theory to deal with reasoning
that is approximate rather than precise. In contrast with "crisp logic", where binary sets have
binary logic, fuzzy logic variables may have a truth value that ranges between 0 and 1 and is not
constrained to the two truth values of classic propositional logic. Furthermore, when linguistic
variables are used, these degrees may be managed by specific functions.
Fuzzy logic emerged as a consequence of the 1965 proposal of fuzzy set theory by Lotfi Zadeh.
Though fuzzy logic has been applied to many fields, from control theory to artificial intelligence,
it still remains controversial among most statisticians, who prefer Bayesian logic, and some
control engineers, who prefer traditional two-valued logic.
Degrees of truth
Fuzzy logic and probabilistic logic are mathematically similar – both have truth values ranging
between 0 and 1 – but conceptually distinct, due to different interpretations -- see interpretations
of probability theory. Fuzzy logic corresponds to "degrees of truth", while probabilistic logic
corresponds to "probability, likelihood"; as these differ, fuzzy logic and probabilistic logic yield
Both degrees of truth and probabilities range between 0 and 1 and hence may seem similar at
first. For example, let a 100 ml glass contain 30 ml of water. Then we may consider two
concepts: Empty and Full. The meaning of each of them can be represented by a certain fuzzy
set. Then one might define the glass as being 0.7 empty and 0.3 full. Note that the concept of
emptiness would be subjective and thus would depend on the observer or designer. Another
designer might equally well design a set membership function where the glass would be
considered full for all values down to 50 ml. It is essential to realize that fuzzy logic uses truth
degrees as a mathematical model of the vagueness phenomenon, whereas probability is a
mathematical model of randomness. A probabilistic setting would first define a scalar variable
for the fullness of the glass, and second, conditional distributions describing the probability that
someone would call the glass full given a specific fullness level. This model, however, has no
sense without accepting occurrence of some event, e.g. that after a few minutes, the glass will be
half empty. Note that the conditioning can be achieved by having a specific observer that
randomly selects the level for the glass, a distribution over deterministic observers, or both.
Consequently, probability has nothing in common with fuzziness, these are simply different
concepts which superficially seem similar because of using the same unit interval of real
numbers [0,1]. Still, since theorems such as De Morgan's have dual applicability and properties
of random variables are analogous to properties of binary logic states, one can see where the
confusion arises.
Applying truth values
A basic application might characterize subranges of a continuous variable. For instance, a
temperature measurement for anti-lock brakes might have several separate membership functions
defining particular temperature ranges needed to control the brakes properly. Each function maps
the same temperature value to a truth value in the 0 to 1 range. These truth values can then be
used to determine how the brakes should be controlled.
In a typical illustration of this idea, the meaning of the expressions cold, warm, and hot is represented by functions
mapping a temperature scale. A point on that scale has three "truth values" — one for each of the
three functions. The vertical line in the image represents a particular temperature that the three
arrows (truth values) gauge. Since the red arrow points to zero, this temperature may be
interpreted as "not hot". The orange arrow (pointing at 0.2) may describe it as "slightly warm"
Linguistic variables
While variables in mathematics usually take numerical values, in fuzzy logic applications, the
non-numeric linguistic variables are often used to facilitate the expression of rules and facts.
A linguistic variable such as age may have a value such as young or its antonym old. However,
the great utility of linguistic variables is that they can be modified via linguistic hedges applied
to primary terms. The linguistic hedges can be associated with certain functions. For example, L.
A. Zadeh proposed to take the square of the membership function. This model, however, does
not work properly in all cases.
Example
Fuzzy set theory defines fuzzy operators on fuzzy sets. The problem in applying this is that the
appropriate fuzzy operator may not be known. For this reason, fuzzy logic usually uses IF-THEN
rules, or constructs that are equivalent, such as fuzzy associative matrices.
For example, a simple temperature regulator that uses a fan might look like this:
IF temperature IS very cold THEN stop fan
IF temperature IS cold THEN turn down fan
IF temperature IS normal THEN maintain level
IF temperature IS hot THEN speed up fan
There is no "ELSE" – all of the rules are evaluated, because the temperature might be "cold" and
"normal" at the same time to different degrees.
The AND, OR, and NOT operators of boolean logic exist in fuzzy logic, usually defined as the
minimum, maximum, and complement; when they are defined this way, they are called the
Zadeh operators. So for the fuzzy variables x and y:
NOT x = (1 - truth(x))
x AND y = minimum(truth(x), truth(y))
x OR y = maximum(truth(x), truth(y))
There are also other operators, more linguistic in nature, called hedges that can be applied. These
are generally adverbs such as "very", or "somewhat", which modify the meaning of a set using a
mathematical formula.
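Putting the rules and the Zadeh operators together, here is a hedged sketch of the fan regulator. The membership functions and numeric breakpoints are invented for this example; what matters is that every rule is evaluated and several can fire at once to different degrees.
```python
def f_and(x, y): return min(x, y)        # Zadeh AND: minimum
def f_or(x, y):  return max(x, y)        # Zadeh OR: maximum
def f_not(x):    return 1.0 - x          # Zadeh NOT: complement

def rise(t, a, b):
    """Truth rising linearly from 0 at a to 1 at b (clamped)."""
    return max(0.0, min(1.0, (t - a) / (b - a)))

def fan_rules(t):
    """Evaluate all four rules; each fires to the truth of its premise."""
    very_cold = 1.0 - rise(t, -5, 5)
    cold      = f_and(rise(t, -5, 5), 1.0 - rise(t, 10, 18))
    normal    = f_and(rise(t, 10, 18), 1.0 - rise(t, 22, 28))
    hot       = rise(t, 22, 30)
    return {
        "stop fan":      very_cold,      # IF temperature IS very cold THEN stop fan
        "turn down fan": cold,           # IF temperature IS cold THEN turn down fan
        "maintain":      normal,         # IF temperature IS normal THEN maintain level
        "speed up fan":  hot,            # IF temperature IS hot THEN speed up fan
    }

print(fan_rules(16.0))
# {'stop fan': 0.0, 'turn down fan': 0.25, 'maintain': 0.75, 'speed up fan': 0.0}
# "cold" and "normal" hold simultaneously, to different degrees; there is no ELSE.
```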
Logical analysis
In mathematical logic, there are several formal systems of "fuzzy logic"; most of them belong
among so-called t-norm fuzzy logics. The most important propositional fuzzy logics are:
Monoidal t-norm-based propositional fuzzy logic MTL is an axiomatization of logic where
conjunction is defined by a left-continuous t-norm, and implication is defined as
the residuum of the t-norm. Its models correspond to MTL-algebras that are prelinear
commutative bounded integral residuated lattices.
Basic propositional fuzzy logic BL is an extension of MTL logic where conjunction is
defined by a continuous t-norm, and implication is also defined as the residuum of the t-
norm. Its models correspond to BL-algebras.
Łukasiewicz fuzzy logic is the extension of basic fuzzy logic BL where standard
conjunction is the Łukasiewicz t-norm. It has the axioms of basic fuzzy logic plus an
axiom of double negation, and its models correspond to MV-algebras.
Gödel fuzzy logic is the extension of basic fuzzy logic BL where conjunction is Gödel t-
norm. It has the axioms of BL plus an axiom of idempotence of conjunction, and its
models are called G-algebras.
Product fuzzy logic is the extension of basic fuzzy logic BL where conjunction is product
t-norm. It has the axioms of BL plus another axiom for cancellativity of conjunction, and
its models are called product algebras.
Fuzzy logic with evaluated syntax (sometimes also called Pavelka's logic), denoted by
EVŁ, is a further generalization of mathematical fuzzy logic. While the above kinds of
fuzzy logic have traditional syntax and many-valued semantics, in EVŁ is evaluated also
syntax. This means that each formula has an evaluation. Axiomatization of EVŁ stems
from Łukasiewicz fuzzy logic. A generalization of the classical Gödel completeness
theorem is provable in EVŁ.
Predicate fuzzy logics
These extend the above-mentioned fuzzy logics by adding universal and existential quantifiers in
a manner similar to the way that predicate logic is created from propositional logic. The
semantics of the universal (resp. existential) quantifier in t-norm fuzzy logics is the infimum
(resp. supremum) of the truth degrees of the instances of the quantified subformula.
Decidability issues for fuzzy logic
The notions of a "decidable subset" and "recursively enumerable subset" are basic ones for
classical mathematics and classical logic. Then, the question of a suitable extension of such
concepts to fuzzy set theory arises. A first proposal in such a direction was made by E.S. Santos
by the notions of fuzzy Turing machine, Markov normal fuzzy algorithm and fuzzy program (see
Santos 1970). Successively, L. Biacino and G. Gerla showed that such a definition is not
adequate and therefore proposed the following one. Ü denotes the set of rational numbers in
[0,1]. A fuzzy subset s : S → [0,1] of a set S is recursively enumerable if a recursive map h :
S×N → Ü exists such that, for every x in S, the function h(x,n) is increasing with respect to n and
s(x) = lim h(x,n). We say that s is decidable if both s and its complement –s are recursively
enumerable. An extension of such a theory to the general case of the L-subsets is proposed in
Gerla 2006. The proposed definitions are well related with fuzzy logic. Indeed, the following
theorem holds true (provided that the deduction apparatus of the fuzzy logic satisfies some
obvious effectiveness property): any axiomatizable fuzzy theory is recursively enumerable. In
particular, the fuzzy set of logically true formulas is recursively enumerable in spite of the fact
that the crisp set of valid
formulas is not recursively enumerable, in general. Moreover, any axiomatizable and complete
theory is decidable.
It is an open question to give supports for a Church thesis for fuzzy logic claiming that the
proposed notion of recursive enumerability for fuzzy subsets is the adequate one. To this aim,
further investigations on the notions of fuzzy grammar and fuzzy Turing machine should be
necessary (see for example Wiedermann's paper). Another open question is to start from this
notion to find an extension of Gödel's theorems to fuzzy logic.
Fuzzy databases
Once fuzzy relations are defined, it is possible to develop fuzzy relational databases. The first
fuzzy relational database, FRDB, appeared in Maria Zemankova's dissertation. Later, some other
models arose like the Buckles-Petry model, the Prade-Testemale Model, the Umano-Fukami
model or the GEFRED model by J.M. Medina, M.A. Vila et al. In the context of fuzzy databases,
some fuzzy querying languages have been defined, highlighting the SQLf by P. Bosc et al. and
the FSQL by J. Galindo et al. These languages define some structures in order to include fuzzy
aspects in the SQL statements, like fuzzy conditions, fuzzy comparators, fuzzy constants, fuzzy
constraints, fuzzy thresholds, linguistic labels and so on.
Application areas
Automobiles and other vehicle subsystems, such as automatic transmissions, ABS and cruise
control
Tokyo monorail
Cameras
Dishwashers
Elevators
Language filters on message boards and chat rooms for filtering out offensive text
The Massive engine used in the Lord of the Rings films, which allowed large-scale
armies of digital characters to move in a convincing, crowd-like way
Rice cookers
Comparison to probability
Fuzzy logic and probability are different ways of expressing uncertainty. While both fuzzy logic
and probability theory can be used to represent subjective belief, fuzzy set theory uses the
concept of fuzzy set membership (i.e., how much a variable is in a set), probability theory uses
the concept of subjective probability (i.e., how probable do I think it is that a variable is in a set).
While this distinction is mostly philosophical, the fuzzy-logic-derived possibility measure is
inherently different from the probability measure, hence they are not directly equivalent.
However, many statisticians are persuaded by the work of Bruno de Finetti that only one kind of
mathematical uncertainty is needed and thus fuzzy logic is unnecessary. On the other hand, Bart
Kosko argues that probability is a subtheory of fuzzy logic, as probability only handles one kind
of uncertainty. He also claims to have proven a derivation of Bayes' theorem from the concept of
fuzzy subsethood. Lotfi Zadeh argues that fuzzy logic is different in character from probability,
and is not a replacement for it. He fuzzified probability to fuzzy probability and also generalized
it to possibility theory.
Fuzzy concept
A fuzzy concept is a concept of which the content, value, or boundaries of application can vary
according to context or conditions, instead of being fixed once and for all.
Usually this means the concept is vague, lacking a fixed, precise meaning, without being
meaningless altogether. It does have a meaning, or multiple meanings (it has different semantic
associations), which however can become clearer only through further elaboration and
specification. Fuzzy concepts (Markusen, 2003) "lack clarity and are difficult to test or
operationalize". In logic, fuzzy concepts are often regarded as concepts which in their application
are neither completely true nor completely false, or which are partly true and partly false.
Consequently, fuzzy concepts may generate uncertainty and reducing fuzziness may generate
more certainty. However, this is not necessarily so, insofar as a concept, although it is not fuzzy
at all, may fail to capture the meaning of something adequately. A concept can be very precise,
but not - or insufficiently - applicable or relevant in the situation to which it refers. A fuzzy
concept may indeed provide more security, because it provides a meaning for something when
an exact concept is unavailable - which is better than not being able to denote it at all.
Ordinary language, which uses symbolic conventions and associations which are often not
logical, inherently contains many fuzzy concepts - "knowing what you mean" in this case
depends on knowing the context or being familiar with the way in which a term is normally used,
or what it is associated with. This can be easily verified for instance by consulting a dictionary, a
thesaurus or an encyclopedia, which show the multiple meanings of words, or by observing the
various ways in which a word is actually used by speakers.
To communicate, receive or convey a message, an individual somehow has to bridge his own
meaning and the meanings which are understood by others, i.e. the message has to be conveyed
in a way that it will be socially understood. Thus, people might state: "you have to say it in a way
that I understand".
This may be done instinctively, habitually or unconsciously, but it usually involves a choice of
terms, assumptions or symbols whose meanings may often not be completely fixed, but which
depend among other things on how the receiver of the message responds to it, or the context. In
this sense, meaning is often "negotiated" (or, more cynically, manipulated). This gives rise to
many fuzzy concepts. Even in formal logic it has been discovered that it is possible to generate
statements which are logically speaking not completely true or imply a paradox, even though in
other respects they conform to logical rules.
The origin of fuzzy concepts is partly due to the fact that the human brain does not operate like a
computer. While computers use strict binary logic gates, the brain does not; it is capable of
making all kinds of neural associations according to all kinds of ordering principles (or fairly
chaotically) in patterns which are not logical but nevertheless meaningful. Something can be
meaningful although we cannot name it, or we might only be able to name it and nothing else.
In part, fuzzy concepts are also due to the fact that learning or the growth of understanding
involves a transition from a vague awareness, which cannot orient behaviour greatly, to a clearer
understanding, which can.
Some logicians argue that fuzzy concepts are a necessary consequence of the reality that any
kind of distinction we might like to draw has limits of application. As a certain level of
generality, it works fine. But if we pursued its application in a very exact and rigorous manner,
or overextend its application, it appears that the distinction simply does not apply in some areas
or contexts, or that we cannot fully specify how it should be drawn. An analogy might be that
zooming a telescope, camera or microscope in and out reveals that a pattern which is sharply
focused at one distance becomes blurry at another distance, or disappears altogether.
In psychophysics it has been discovered that the perceptual distinctions we draw in the mind are
often more sharply defined than they are in the real world. Thus, the brain actually tends to
"sharpen up" our perceptions of differences in the external world. Between black and white, we
are able to detect only a limited number of shades of gray, or colour gradations. If there are more
gradations and transitions in reality than our conceptual distinctions can capture, then it could be
argued that how those distinctions will actually apply must necessarily become vaguer at some
point. If, for example, one wants to count and quantify distinct objects using numbers, one needs
to be able to distinguish between those separate objects, but if this is difficult or impossible, then,
although this may not invalidate a quantitative procedure as such, quantification is not really
possible in practice; at best, we may be able to assume or infer indirectly a certain distribution of
quantities.
Finally, in interacting with the external world, the human mind may often encounter new, or
partly new, phenomena or relationships which cannot (yet) be sharply defined given the
available knowledge. For example:
"Crisis management plans cannot be put together 'on the fly' after the crisis occurs. At the outset,
information is often vague, even contradictory. Events move so quickly that decision makers
experience a sense of loss of control. Often denial sets in, and managers unintentionally cut off
information flow about the situation" - L. Paul Bremer, "Corporate governance and crisis
management".
It also can be argued that fuzzy concepts are generated by a certain sort of lifestyle or way of
working which evades definite distinctions, makes them impossible or inoperable, or which is in
some way chaotic. To obtain concepts which are not fuzzy, it must be possible to test out their
application in some way. But in the absence of any relevant clear distinctions, or when
everything is "in a state of flux" or in transition, it may not be possible to do so, so that the
concepts involved necessarily remain fuzzy.
Fuzzy concepts often play a role in the creative process of forming new concepts to understand
something. In the most primitive sense, this can be observed in infants who, through practical
experience, learn to identify, distinguish and generalise the correct application of a concept, and
relate it to other concepts.
However, fuzzy concepts may also occur in scientific, journalistic, programming and
philosophical activity, when a thinker is in the process of clarifying and defining a newly
emerging concept which is based on distinctions which, for one reason or another, cannot (yet)
be more exactly specified or validated. Fuzzy concepts are often used to denote complex
phenomena, or to describe something which is developing and changing, which might involve
shedding some old meanings and acquiring new ones.
In politics, it can be highly important and problematic how exactly a conceptual distinction is
drawn, or indeed whether a distinction is drawn at all; distinctions used in administration may be
deliberately sharpened, or kept fuzzy, due to some political motive or power relationship. A
politician may be deliberately vague about some things, and very clear and explicit about others.
The "fuzzy area" can also refer simply to a residual number of cases which cannot be allocated to
a known and identifiable group, class or set if strict criteria are used. In translation, a concept expressed
in one language may not have quite the same meaning or significance in another language, or it
may not be feasible to translate it literally, or at all. Some languages have concepts which do not
exist in another language, raising the problem of how one would most easily render their
meaning.
In information services fuzzy concepts are frequently encountered because a customer or client
asks a question about something which could be interpreted in many different ways, or, a
document is transmitted of a type or meaning which cannot be easily allocated to a known type
or category.
It could be argued that many concepts used fairly universally in daily life (e.g. "love" or "God"
or "health" or "social") are inherently or intrinsically fuzzy concepts, to the extent that their
meaning can never be completely and exactly specified with logical operators or objective terms,
and can have multiple interpretations, which are in part exclusively subjective. Yet despite this,
such concepts are not meaningless; people are able to use them and communicate with them
reasonably well.
It may also be possible to specify one personal meaning for the concept, without however
placing restrictions on a different use of the concept in other contexts (as when, for example, one
says "this is what I mean by X" in contrast to other possible meanings). In ordinary speech,
concepts may sometimes also be uttered purely randomly; for example a child may repeat the
same idea in completely unrelated contexts, or an expletive term may be uttered arbitrarily.
Fuzzy concepts can be used deliberately to create ambiguity and vagueness, as an evasive tactic,
or to bridge what would otherwise be immediately recognized as a contradiction of terms. They
might be used to indicate that there is definitely a connection between two things, without giving
a complete specification of what the connection is, for some or other reason. This could be due to
a failure or refusal to be more precise. But it could also be a prologue to a more exact
formulation of a concept, or to a better understanding of it.
In mathematical logic, programming, philosophy and linguistics, fuzzy concepts can, however,
often be clarified and made more precise, in ways such as the following:
by classifying or categorizing all or most cases or uses to which the concept applies
(taxonomy).
by identifying operational rules for the use of the concept, which cover all or most cases.
by allocating different applications of the concept to different but related sets (e.g. using
Boolean logic).
by specifying the degree of truth with which something falls under the concept.
by some other kind of measure or scale of the degree to which the concept applies.
by specifying a series of logical operators (an inferential system or algorithm) which
covers all or most cases to which the concept applies.
by mapping or graphing the applications of the concept using some basic parameters.
by reducing or restating fuzzy concepts in terms which are simpler or similar, and which
are not fuzzy or are less fuzzy.
by relating the fuzzy concept to other concepts which are not fuzzy or less fuzzy, or
simply by replacing the fuzzy concept altogether with another, alternative concept which
is not fuzzy and which works in roughly the same way.
Such strategies can possibly decrease the amount of fuzziness. It may not be possible to specify all the possible
meanings or applications of a concept completely and exactly, but if it is possible to capture
the majority of them, statistically or otherwise, this may be useful enough for practical purposes.
The difficulty that can occur in judging the fuzziness of a concept can be illustrated with the
question "Is this one of those?". If it is not possible to clearly answer this question, that could be
because "this" (the object) is itself fuzzy and evades definition, or because "one of those" (the
concept of the object) is fuzzy and inadequately defined. Thus, the source of fuzziness may be in
the nature of the reality being dealt with, the concepts used to interpret it, or the way in which the
two are being related by a person.
Mathematical morphology
[Figure: a shape (in blue) and its morphological dilation (in green) and erosion (in yellow) by a diamond-shaped structuring element.]
Mathematical morphology (MM) is a theory and technique for the analysis and processing of
geometrical structures, based on set theory, lattice theory, topology, and random functions. MM
is most commonly applied to digital images, but it can be employed as well on graphs, surface
meshes, solids, and many other spatial structures.
MM was originally developed for binary images, and was later extended to grayscale functions
and images. The subsequent generalization to complete lattices is widely accepted today as
MM's theoretical foundation.
History
Mathematical Morphology was born in 1964 from the collaborative work of Georges Matheron
and Jean Serra, at the École des Mines de Paris, France. Matheron supervised the PhD thesis of
Serra, devoted to the quantification of mineral characteristics from thin cross sections, and this
work resulted in a novel practical approach, as well as theoretical advancements in integral
geometry and topology.
In 1968, the Centre de Morphologie Mathématique was founded by the École des Mines de Paris
in Fontainebleau, France, led by Matheron and Serra.
During the rest of the 1960s and most of the 1970s, MM dealt essentially with binary images,
treated as sets, and generated a large number of binary operators and techniques: hit-or-miss
transform, dilation, erosion, opening, closing, granulometry, thinning, skeletonization, ultimate
erosion, conditional bisector, and others. A random approach was also developed, based on novel
image models. Most of the work in that period was developed in Fontainebleau.
From the mid-1970s to the mid-1980s, MM was generalized to grayscale functions and images as well.
Besides extending the main concepts (such as dilation, erosion, etc.) to functions, this
generalization yielded new operators, such as morphological gradients, the top-hat transform and the
watershed (MM's main segmentation approach).
In the 1980s and 1990s, MM gained wider recognition, as research centers in several countries
began to adopt and investigate the method. MM started to be applied to a large number of
imaging problems and applications.
In 1986, Jean Serra further generalized MM, this time to a theoretical framework based on
complete lattices. This generalization brought flexibility to the theory, enabling its application to
a much larger number of structures, including color images, video, graphs, meshes, etc. At the
same time, Matheron and Serra also formulated a theory for morphological filtering, based on the
new lattice framework.
The 1990s and 2000s also saw further theoretical advancements, including the concepts of
connections and levelings.
In 1993, the first International Symposium on Mathematical Morphology (ISMM) took place in
Barcelona, Spain. Since then, ISMMs are organized every 2-3 years, each time in a different part
of the world: Fontainebleau, France (1994); Atlanta, USA (1996); Amsterdam, Netherlands
(1998); Palo Alto, CA, USA (2000); Sydney, Australia (2002); Paris, France (2004); and Rio de
Janeiro, Brazil (2007).
Structuring element
The basic idea in binary morphology is to probe an image with a simple, pre-defined shape,
drawing conclusions on how this shape fits or misses the shapes in the image. This simple
"probe" is called structuring element, and is itself a binary image (i.e., a subset of the space or
grid).
Here are some examples of widely used structuring elements (denoted by B):
Let E = Z2; B is a 3x3 square, that is, B = {(-1,-1), (-1,0), (-1,1), (0,-1), (0,0), (0,1), (1,-1), (1,0), (1,1)}.
Let E = Z2; B is the "cross" given by: B = {(-1,0), (0,-1), (0,0), (0,1), (1,0)}.
Basic operators
The basic operations are shift-invariant (translation invariant) operators strongly related to
Minkowski addition.
The erosion of the dark-blue square by a disk, resulting in the light-blue square.
The erosion of the binary image A by the structuring element B is defined by:
A ⊖ B = { z ∈ E | Bz ⊆ A }
where Bz is the translation of B by the vector z, i.e., Bz = { b + z | b ∈ B }.
When the structuring element B has a center (e.g., B is a disk or a square), and this center is
located on the origin of E, then the erosion of A by B can be understood as the locus of points
reached by the center of B when B moves inside A. For example, the erosion of a square of side
10, centered at the origin, by a disc of radius 2, also centered at the origin, is a square of side 6,
centered at the origin.
Example application: Assume we have received a fax of a dark photocopy; everything looks like
it was written with a pen that is bleeding. The erosion process will allow thicker lines to get skinny
and detect the hole inside the letter "o".
Dilation
The dilation of the dark-blue square by a disk, resulting in the light-blue square with rounded
corners.
The dilation of A by the structuring element B is defined by:
A ⊕ B = ∪b∈B Ab
If B has a center on the origin, as before, then the dilation of A by B can be understood as the
locus of the points covered by B when the center of B moves inside A. In the above example, the
dilation of the square of side 10 by the disk of radius 2 is a square of side 14, with rounded
corners, centered at the origin; the corner radius is 2.
Example application: Dilation is the opposite of the erosion. Figures that are very lightly drawn
get thick when "dilated". The easiest way to describe it is to imagine the same fax/text written
with a thicker pen.
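A direct, unoptimized sketch of binary erosion in Python/NumPy, with dilation obtained through the standard duality (the dilation of A equals the complement of the erosion of the complement by the symmetric element). Everything here is illustrative; real implementations (e.g. in scipy.ndimage) are far faster.
```python
import numpy as np

def erode(A, B):
    """Erosion: keep z where B, translated so its centre sits at z, fits in A."""
    H, W = A.shape
    offsets = np.argwhere(B) - np.array(B.shape) // 2   # B's origin = its centre
    out = np.zeros_like(A)
    for r in range(H):
        for c in range(W):
            pts = offsets + (r, c)
            inside = ((pts >= 0) & (pts < (H, W))).all()
            out[r, c] = inside and A[pts[:, 0], pts[:, 1]].all()
    return out

def dilate(A, B):
    """Dilation via duality: complement, erode by the symmetric of B, complement."""
    return 1 - erode(1 - A, B[::-1, ::-1])

# The 3x3 "cross" structuring element from the examples above.
B = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 1, 0]], dtype=np.uint8)
A = np.zeros((7, 7), dtype=np.uint8)
A[2:5, 2:5] = 1                       # a 3x3 square
assert erode(A, B).sum() == 1         # erosion shrinks the square to its centre
assert dilate(A, B).sum() > A.sum()   # dilation grows it
```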
Opening
The opening of the dark-blue square by a disk, resulting in the light-blue square with round
corners.
The opening of A by B is obtained by the erosion of A by B, followed by dilation of the resulting
image by B:
A ∘ B = (A ⊖ B) ⊕ B
The opening is also given by A ∘ B = ∪{ Bz | Bz ⊆ A }, which means that it is the locus of translations of the structuring
element B inside the image A. In the case of the square of side 10, and a disc of radius 2 as the
structuring element, the opening is a square of side 10 with rounded corners, where the corner
radius is 2.
Example application: Let's assume someone has written a note on non-soaking paper, and the
writing looks as if it is growing tiny hairy roots all over. Opening essentially removes the outer
tiny "hairline" leaks and restores the text. The side effect is that it rounds off things; the sharp
edges start to disappear.
Closing
The closing of the dark-blue shape (union of two squares) by a disk, resulting in the union of the
dark-blue shape and the light-blue areas.
The closing of A by B is obtained by the dilation of A by B, followed by erosion of the resulting
structure by B:
A • B = (A ⊕ B) ⊖ B
The closing can also be obtained by A • B = (Ac ⊖ Bs)c, where Xc denotes the complement of X
and Bs = { -b | b ∈ B } denotes the symmetric of B. This means that the
closing is the complement of the locus of translations of the symmetric of the structuring element
outside the image A.
Here are some properties of the basic binary morphological operators (dilation, erosion, opening
and closing):
They are translation invariant.
They are increasing, that is, if A ⊆ C, then A ⊕ B ⊆ C ⊕ B, A ⊖ B ⊆ C ⊖ B, etc.
The dilation is commutative.
If the origin of E belongs to the structuring element B, then the erosion is anti-extensive
and the dilation is extensive, i.e., A ⊖ B ⊆ A ⊆ A ⊕ B.
The dilation is associative; the
erosion satisfies (A ⊖ B) ⊖ C = A ⊖ (B ⊕ C).
Erosion and dilation satisfy the duality A ⊕ B = (Ac ⊖ Bs)c, and opening and closing
satisfy the duality A • B = (Ac ∘ Bs)c.
The dilation is a pseudo-inverse of the erosion, and vice-versa, in the following sense:
A ⊆ C ⊖ B if and only if A ⊕ B ⊆ C.
Other binary operators and tools include:
Hit-or-miss transform
Morphological skeleton
Filtering by reconstruction
Granulometry
Grayscale morphology
In grayscale morphology, images are functions mapping a Euclidean space or grid E into
ℝ ∪ {∞, -∞}, where ℝ is the set of reals, ∞ is an element larger than any real number, and -∞ is
an element smaller than any real number. Grayscale structuring elements are also functions of
the same format, called "structuring functions".
Denoting an image by f(x) and the structuring function by b(x), the grayscale dilation of f by b is
given by
(f ⊕ b)(x) = sup{ f(y) + b(x - y) : y ∈ E }
and the grayscale erosion of f by b is given by
(f ⊖ b)(x) = inf{ f(y) - b(y - x) : y ∈ E }
Just like in binary morphology, the opening and closing are given respectively by
f ∘ b = (f ⊖ b) ⊕ b, and
f • b = (f ⊕ b) ⊖ b.
It is common to use flat structuring functions in morphological applications: b(x) = 0 for x in B,
and b(x) = -∞ otherwise, where B ⊆ E is the support of b.
In this case, the dilation and erosion are greatly simplified, and given respectively by
(f ⊕ b)(x) = sup{ f(x - z) : z ∈ B }, and
(f ⊖ b)(x) = inf{ f(x + z) : z ∈ B }.
In the bounded, discrete case (E is a grid and B is bounded), the supremum and infimum
operators can be replaced by the maximum and minimum. Thus, dilation and erosion are
particular cases of order statistics filters, with dilation returning the maximum value within a
moving window (the symmetric of the structuring function support B), and the erosion returning
the minimum value within a moving window (the support B itself).
In the case of flat structuring elements, the morphological operators depend only on the relative
ordering of pixel values, regardless of their numerical values, and therefore are especially suited to
the processing of binary images and grayscale images whose light transfer function is not known.
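In the bounded, discrete, flat case the two operators are simply moving maximum and minimum filters, as described above. A small sketch follows; the 3x3 square support and the replicated-edge border handling are illustrative choices.
```python
import numpy as np

def flat_dilate(f, size=3):
    """Grayscale dilation with a flat size x size support: moving maximum."""
    pad = size // 2
    g = np.pad(f, pad, mode="edge")            # boundary handling: replicate edges
    out = np.empty_like(f)
    for r in range(f.shape[0]):
        for c in range(f.shape[1]):
            out[r, c] = g[r:r + size, c:c + size].max()
    return out

def flat_erode(f, size=3):
    """Grayscale erosion with the same support: moving minimum."""
    pad = size // 2
    g = np.pad(f, pad, mode="edge")
    out = np.empty_like(f)
    for r in range(f.shape[0]):
        for c in range(f.shape[1]):
            out[r, c] = g[r:r + size, c:c + size].min()
    return out

# Morphological gradient (dilation minus erosion) outlines region boundaries.
f = np.zeros((8, 8), dtype=np.int32)
f[2:6, 2:6] = 100
gradient = flat_dilate(f) - flat_erode(f)      # nonzero only near the edges
```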
Other grayscale operators and tools include:
Morphological Gradients
Top-hat transform
Watershed (algorithm)
By combining these operators one can obtain algorithms for many image processing tasks, such
as feature detection, image segmentation, image sharpening, image filtering, and classification.
Mathematical morphology on complete lattices
Complete lattices are partially ordered sets, where every subset has an infimum and a supremum.
In particular, a complete lattice contains a least element and a greatest element (also denoted "universe").
Let (L, ≤) be a complete lattice, with infimum and supremum symbolized by ∧ and ∨,
respectively. Its universe and least element are symbolized by U and ∅, respectively. Moreover,
let { Xi } be a collection of elements from L.
A dilation is any operator δ : L → L that distributes over the supremum and preserves the
least element. That is:
δ( ∨i Xi ) = ∨i δ( Xi ),
δ( ∅ ) = ∅.
An erosion is any operator ε : L → L that distributes over the infimum and preserves the
universe. That is:
ε( ∧i Xi ) = ∧i ε( Xi ),
ε( U ) = U.
Dilations and erosions form Galois connections. That is, for every dilation δ there is one and only
one erosion ε that satisfies
A ≤ ε(B) if and only if δ(A) ≤ B
for all A, B in L.
Similarly, for every erosion there is one and only one dilation satisfying the above connection.
Furthermore, if two operators satisfy the connection, then δ must be a dilation and ε an erosion.
Pairs of erosions and dilations satisfying the above connection are called "adjunctions", and the
erosion is said to be the adjoint erosion of the dilation (and vice-versa). For each adjunction
(ε, δ), the morphological opening and morphological closing are defined as
γ = δε, and
φ = εδ.
The morphological opening and closing are particular cases of algebraic opening (or simply
opening) and algebraic closing (or simply closing). Algebraic openings are operators in L that
are idempotent, increasing, and anti-extensive. Algebraic closings are operators in L that are
idempotent, increasing, and extensive.
Binary morphology is a particular case of lattice morphology, where L is the power set of E
(Euclidean space or grid), that is, L is the set of all subsets of E, and ≤ is the set inclusion. In this
case, the infimum is set intersection, and the supremum is set union.
Similarly, grayscale morphology is another particular case, where L is the set of functions
mapping E into ℝ ∪ {∞, -∞}, and ≤, ∨, and ∧ are the point-wise order, supremum, and
infimum, respectively. That is, if f and g are functions in L, then f ≤ g if and only if
f(x) ≤ g(x) for all x in E.
Morphology-based Operations
An image can be defined as a function of two continuous variables a(x,y) or two discrete
variables a[m,n]. An alternative definition of an image can be based on the
notion that an image consists of a set (or collection) of either continuous or discrete coordinates.
In a sense the set corresponds to the points or pixels that belong to the objects in the image. This
is illustrated in Figure 35 which contains two objects or sets A and B. Note that the coordinate
system is required. For the moment we will consider the pixel values to be binary as discussed in
Section 2.1 and 9.2.1. Further, we shall restrict our discussion to discrete space (Z2).
The object A consists of those pixels a that share some common property:
Object - A = { a | property(a) = TRUE }
The background of A is given by Ac (the complement of A), defined as those elements not in A:
Background - Ac = { a | a ∉ A }
If the object A is defined on the basis of C-connectivity (C = 4, 6, or 8), then the background Ac has a
connectivity given by 12 - C. The necessity for this is illustrated for the Cartesian grid in Figure
36.
Figure 36: A binary image requiring careful definition of object and background connectivity.
Fundamental definitions
The fundamental operations associated with an object are the standard set operations union,
intersection, and complement, plus translation. Given a vector x and a set A, the translation
A + x is defined as:
A + x = { a + x | a ∈ A }
Note that, since we are dealing with a digital image composed of pixels at integer coordinate
positions (Z2), this restricts the allowable translation vectors.
The basic Minkowski set operations--addition and subtraction--can now be defined. First we note
that the individual elements that comprise B are not only pixels but also vectors as they have a
clear coordinate position with respect to [0,0]. Given two sets A and B:
Minkowski addition - A ⊕ B = ∪β∈B ( A + β )
Minkowski subtraction - A ⊖ B = ∩β∈B ( A + β )
Dilation and Erosion
From these two Minkowski operations we define the fundamental mathematical morphology
operations dilation and erosion:
Dilation - D(A, B) = A ⊕ B = ∪β∈B ( A + β )
Erosion - E(A, B) = A ⊖ (-B) = ∩β∈B ( A - β )
where -B = { -β | β ∈ B }. These two operations are illustrated in Figure 37 for the objects defined
in Figure 35.
Figure 37: A binary image containing two object sets A and B. The three pixels in B are "color-
coded" as is their effect in the result.
While either set A or B can be thought of as an "image", A is usually considered as the image
and B is called a structuring element. The structuring element is to mathematical morphology
what the convolution kernel is to linear filter theory.
Dilation, in general, causes objects to dilate or grow in size; erosion causes objects to shrink. The
amount and the way that they grow or shrink depend upon the choice of the structuring element.
Dilating or eroding without specifying the structural element makes no more sense than trying to
lowpass filter an image without specifying the filter. The two most common structuring elements
(given a Cartesian grid) are the 4-connected and 8-connected sets, N4 and N8. They are illustrated
in Figure 38.
Figure 38: The standard structuring elements (a) N4 and (b) N8.
For the dilation and erosion operations the following properties hold:
Commutative - D(A, B) = A ⊕ B = B ⊕ A = D(B, A)
Non-Commutative - E(A, B) ≠ E(B, A)
Associative - A ⊕ (B ⊕ C) = (A ⊕ B) ⊕ C
Translation Invariance - A ⊕ (B + x) = (A ⊕ B) + x
Duality - Dc(A, B) = E(Ac, -B) and Ec(A, B) = D(Ac, -B)
With A as an object and Ac as the background, the duality relation says that the dilation of an object is equivalent
to the erosion of the background. Likewise, the erosion of the object is equivalent to the dilation
of the background.
Except for special cases, dilation and erosion are not inverses of each other:
Non-Inverses - D(E(A, B), B) ≠ A ≠ E(D(A, B), B)
Erosion, like dilation, is translation invariant:
Translation Invariance - (A + x) ⊖ B = (A ⊖ B) + x
Dilation and erosion have the following important properties. For any arbitrary structuring
element B and two image objects A1 and A2 such that A1 ⊆ A2 (A1 is a proper subset of A2):
Increasing in A - A1 ⊆ A2 implies A1 ⊕ B ⊆ A2 ⊕ B and A1 ⊖ B ⊆ A2 ⊖ B
Decreasing in B - B1 ⊆ B2 implies A ⊖ B1 ⊇ A ⊖ B2
The decomposition theorems below make it possible to find efficient implementations for
morphological filters.
Dilation - A ⊕ (B ∪ C) = (A ⊕ B) ∪ (A ⊕ C)
Erosion - A ⊖ (B ∪ C) = (A ⊖ B) ∩ (A ⊖ C)
Erosion - (A ⊖ B) ⊖ C = A ⊖ (B ⊕ C)
Multiple Dilations - nB = B ⊕ B ⊕ ... ⊕ B (the dilation of B with itself n times), so that a
dilation by nB can be computed as a chain of n simpler dilations
An important decomposition theorem is due to Vincent. First, we require some definitions. A
convex set (in R2) is one for which the straight line joining any two points in the set consists of
points that are also in the set. Care must obviously be taken when applying this definition to
discrete pixels as the concept of a "straight line" must be interpreted appropriately in Z2. A set is
bounded if each of its elements has a finite magnitude, in this case distance to the origin of the
coordinate system. A set is symmetric if B = -B. The sets N4 and N8 in Figure 38 are examples of
Vincent's theorem, when applied to an image consisting of discrete pixels, states that for a
bounded, symmetric structuring element B that contains no holes and contains its own center,
A ⊕ B = A ∪ ( ∂A ⊕ B )
where ∂A is the contour of the object. That is, ∂A is the set of pixels that have a background
pixel as a neighbor. The implication of this theorem is that it is not necessary to process all the
pixels in an object in order to compute a dilation or (via the duality) an erosion. We only have to
process the boundary pixels. This also holds for all operations that can be derived from dilations
and erosions. The processing of boundary pixels instead of object pixels means that, except for
pathological images, computational complexity can be reduced from O(N2) to O(N) for an N x N
image. A number of "fast" algorithms can be found in the literature that are based on this result.
The simplest dilation and erosion algorithms are frequently described as follows.
* Dilation - Take each binary object pixel (with value "1") and set all background pixels (with
value "0") that are C-connected to that object pixel to the value "1".
* Erosion - Take each binary object pixel (with value "1") that is C-connected to a background
pixel and set it to the value "0".
Comparison of these two procedures to the formal definitions above, where B = NC=4 or NC=8, shows that they are
equivalent to the formal definitions for dilation and erosion. The procedure is illustrated for
dilation in Figure 39.
Figure 39: Illustration of dilation for (a) B = N4 and (b) B = N8. Original object pixels are in gray; pixels added through
dilation are shown in black.
Boolean Convolution
An arbitrary binary image object (or structuring element) A can be represented as:
a[m,n] = ∨j,k ( a[j,k] ∧ d[m-j, n-k] )
where ∨ and ∧ are the Boolean operations OR and AND as defined in eqs. (81) and (82), a[j,k] is
a characteristic function that takes on the Boolean values "1" and "0" as follows:
a[j,k] = 1 if a ∈ A, and a[j,k] = 0 otherwise
and d[m,n] is a Boolean version of the Dirac delta function that takes on the Boolean values "1"
and "0" as follows:
d[m,n] = 1 if m = n = 0, and d[m,n] = 0 otherwise.
Dilation can then be written as the Boolean convolution
c[m,n] = ∨j,k ( b[j,k] ∧ a[m-j, n-k] )
which, because Boolean OR and AND are commutative, can also be written as
c[m,n] = ∨j,k ( a[j,k] ∧ b[m-j, n-k] )
Thus, dilation and erosion on binary images can be viewed as a form of convolution over a
Boolean algebra.
In Section 9.3.2 we saw that, when convolution is employed, an appropriate choice of the
boundary conditions for an image is essential. For dilation and erosion, the two most common
choices are that either everything outside
the binary image is "0" or everything outside the binary image is "1".
We can combine dilation and erosion to build two important higher order operations:
Opening - O(A, B) = A ∘ B = D( E(A, B), B )
Closing - C(A, B) = A • B = E( D(A, B), -B )
These operators satisfy the following properties:
Duality - Cc(A, B) = O(Ac, B)
Translation - O(A + x, B) = O(A, B) + x and C(A + x, B) = C(A, B) + x
For the opening with structuring element B and images A, A1, and A2, where A1 is a subimage of
A2 (A1 ⊆ A2):
Antiextensivity - O(A, B) ⊆ A
Increasing monotonicity - O(A1, B) ⊆ O(A2, B)
Idempotence - O( O(A, B), B ) = O(A, B)
For the closing with structuring element B and images A, A1, and A2, where A1 is a subimage of
A2 (A1 ⊆ A2):

Extensivity - A ⊆ C(A,B)

Increasing monotonicity - C(A1,B) ⊆ C(A2,B)

Idempotence - C(C(A,B), B) = C(A,B)

The two properties given by eqs. and are so important to mathematical morphology that they can
be considered as the reason for defining erosion with -B instead of B in eq. .
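In code, the two operations compose directly from the dilate and erode sketches given earlier
(same 0/1 NumPy conventions); idempotence can be checked by applying each operator twice:

    def opening(a, connectivity=8):
        """O(A,B) = D(E(A,B),B): antiextensive, smoothes from the inside."""
        return dilate(erode(a, connectivity), connectivity)

    def closing(a, connectivity=8):
        """C(A,B) = E(D(A,B),B): extensive, smoothes from the outside."""
        return erode(dilate(a, connectivity), connectivity)

    # Idempotence: opening(opening(a)) equals opening(a); same for closing.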
Hit-and-Miss operation
The hit-or-miss operator was defined by Serra but we shall refer to it as the hit-and-miss operator
and define it as follows. Given an image A and two structuring elements B1 and B2, the set
definition is:

Hit-and-Miss - HitMiss(A, B1, B2) = E(A, B1) ∩ E(¬A, B2)

where B1 and B2 are bounded, disjoint structuring elements. (Note the use of the notation from
eq. (81).) Two sets are disjoint if B1 ∩ B2 = ∅, the empty set. In an important sense the hit-and-
miss operator is the morphological equivalent of template matching, a well-known technique for
matching patterns based upon cross-correlation. Here, we have a template B1 for the object and a
template B2 for the background.
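A sketch of the hit-and-miss operator (NumPy, 0/1 arrays; structuring elements are given as
offset lists, and the example pair at the end is purely illustrative):

    import numpy as np

    def erode_se(a, offsets):
        """E(A,B) for B given as (j,k) offsets: a pixel survives only
        if the object covers every offset position."""
        out = np.ones_like(a)
        for j, k in offsets:
            out &= np.roll(a, (-j, -k), axis=(0, 1))  # wrap-around border
        return out

    def hit_and_miss(a, b1, b2):
        """HitMiss(A,B1,B2) = E(A,B1) AND E(not A, B2), B1, B2 disjoint."""
        return erode_se(a, b1) & erode_se(1 - a, b2)

    # Illustrative pair: object pixels whose north neighbor is background.
    b1 = [(0, 0)]    # "hit": the pixel itself is "1"
    b2 = [(-1, 0)]   # "miss": the pixel above is "0"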
The results of the application of these basic operations on a test image are illustrated below. In
Figure 40 the various structuring elements used in the processing are defined. The value "-"
indicates a "don't care" position; it may be either "0" or "1".
Figure 40: Structuring elements B, B1, and B2 that are 3 x 3 and symmetric.
The results of processing are shown in Figure 41, where the binary value "1" is shown in black
and the binary value "0" in white.
The opening operation can separate objects that are connected in a binary image. The closing
operation can fill in small holes. Both operations generate a certain amount of smoothing on an
object contour given a "smooth" structuring element. The opening smoothes from the inside of
the object contour and the closing smoothes from the outside of the object contour. The hit-and-
miss example has found the 4-connected contour pixels. An alternative method to find the
contour of an object is to take the set difference between the object and an erosion of the object:

4-connected contour - ∂4(A) = A - E(A, N8)

or

8-connected contour - ∂8(A) = A - E(A, N4)
Skeleton
The informal definition of a skeleton is a line representation of an object that is:
i) one pixel thick,
ii) through the "middle" of the object, and
iii) preserves the topology of the object.
These requirements are not always realizable. Figure 42 shows why this is the case.
Figure 42: Two objects, (a) and (b), for which the skeleton requirements cannot all be met.
In the first example, Figure 42a, it is not possible to generate a line that is one pixel thick and in
the center of an object while generating a path that reflects the simplicity of the object. In Figure
42b it is not possible to remove a pixel from the 8-connected object and simultaneously preserve
the topology--the notion of connectedness--of the object. Nevertheless, there are a variety of
techniques that attempt to achieve a skeleton.
A basic formulation is based on the work of Lantuéjoul. The skeleton subset Sk(A) is defined as:
Skeleton subsets - Sk(A) = E(A, kB) - O(E(A, kB), B), k = 0, 1, ..., K

where K is the largest value of k before the set Sk(A) becomes empty. (From eq. ,
O(E(A, kB), B) ⊆ E(A, kB).) The structuring element B is chosen to approximate a
disc, that is, convex, bounded and symmetric. The skeleton is then the union of the skeleton
subsets:

Skeleton - S(A) = ∪ Sk(A), k = 0, 1, ..., K

An elegant side effect of this formulation is that the original object can be reconstructed given
knowledge of the skeleton subsets Sk(A), the structuring element B, and K:

Reconstruction - A = ∪ D(Sk(A), kB), k = 0, 1, ..., K
This formulation for the skeleton, however, does not preserve the topology, a requirement
described in eq. .
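Before turning to topology-preserving alternatives, here is a sketch of Lantuéjoul's formulation,
reusing the erode and opening sketches above (B = N4 or N8 as a disc approximation); the loop
stops at the K where the eroded set becomes empty:

    import numpy as np

    def skeleton(a, connectivity=8):
        """Union of S_k(A) = E(A,kB) - O(E(A,kB),B) over k = 0..K."""
        s = np.zeros_like(a)
        eroded = a.copy()
        while eroded.any():
            # set difference: pixels of E(A,kB) removed by one opening
            s |= eroded & (1 - opening(eroded, connectivity))
            eroded = erode(eroded, connectivity)   # next k: E(A,(k+1)B)
        return s
    # Keeping each S_k together with its k would also allow the
    # reconstruction A = union of D(S_k, kB).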
A skeleton that preserves topology can be produced by thinning, a process that erodes
an object without permitting it to vanish. A general thinning algorithm is based on the hit-and-
miss operation:

Thinning - Thin(A, B1, B2) = A - HitMiss(A, B1, B2)
Depending on the choice of B1 and B2, a large variety of thinning algorithms--and through
repeated application, skeletonizing algorithms--can be constructed. If B1 and B2 together span a 3
x 3 neighborhood, similar to the structuring element B = N8 in Figure 40a, then we can view the
thinning operation as a window that repeatedly scans over the (binary) image and sets the center
pixel to "0" under certain conditions. The center pixel is not changed to "0" if and only if:
i) an isolated pixel is found (e.g. Figure 43a),
ii) removing a pixel would change the connectivity (e.g. Figure 43b), or
iii) removing a pixel would shorten a line (e.g. Figure 43c).
As pixels are (potentially) removed in each iteration, the process is called a conditional erosion.
Three test cases of eq. are illustrated in Figure 43. In general all possible rotations and variations
have to be checked. As there are only 512 possible combinations for a 3 x 3 window on a binary
image, this can be done easily with the use of a lookup table.
Figure 43: Test conditions for conditional erosion of the center pixel.
If only condition (i) is used then each object will be reduced to a single pixel. This is useful if we
wish to count the number of objects in an image. If only condition (ii) is used then holes in the
objects will be found. If conditions (i + ii) are used each object will be reduced to either a single
pixel if it does not contain a hole or to closed rings if it does contain holes. If conditions (i + ii +
iii) are used then the "complete skeleton" will be generated as an approximation to eq. .
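One classical realization of such a scan-and-delete conditional erosion is the Zhang-Suen
thinning algorithm; the sketch below illustrates the idea and is not the exact rule set (i)-(iii)
above (the same neighborhood tests could equally be precomputed into a 512-entry lookup table):

    import numpy as np

    def zhang_suen_thin(img):
        """Skeletonize a 0/1 image by conditional erosion (Zhang-Suen)."""
        A = np.pad(img.astype(np.uint8), 1)   # explicit "0" border
        changed = True
        while changed:
            changed = False
            for step in (0, 1):
                to_delete = []
                for m in range(1, A.shape[0] - 1):
                    for n in range(1, A.shape[1] - 1):
                        if A[m, n] == 0:
                            continue
                        # neighbors clockwise from north
                        p = [A[m-1, n], A[m-1, n+1], A[m, n+1], A[m+1, n+1],
                             A[m+1, n], A[m+1, n-1], A[m, n-1], A[m-1, n-1]]
                        B = sum(p)  # number of object neighbors
                        T = sum((p[i] == 0) and (p[(i + 1) % 8] == 1)
                                for i in range(8))  # 0->1 transitions
                        if not (2 <= B <= 6 and T == 1):
                            continue  # keeps isolated pixels, endpoints,
                                      # and connectivity
                        if step == 0:
                            ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                        else:
                            ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                        if ok:
                            to_delete.append((m, n))
                for m, n in to_delete:
                    A[m, n] = 0
                if to_delete:
                    changed = True
        return A[1:-1, 1:-1]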
Propagation
It is convenient to be able to reconstruct an image that has "survived" several erosions or to fill
an object that is defined, for example, by a boundary. The formal mechanism for this has several
names including region-filling, reconstruction, and propagation. The formal definition is given
by the following algorithm. We start with a seed image S(0), a mask image A, and a structuring
element B:

Iteration k - S(k) = D(S(k-1), B) ∩ A, iterated until S(k) = S(k-1)
With each iteration the seed image grows (through dilation) but within the set (object) defined by
A; S propagates to fill A. The most common choices for B are N4 or N8. Several remarks are
central to the use of propagation. First, in a straightforward implementation, as suggested by eq. ,
the computational costs are extremely high. Each iteration requires O(N^2) operations for an N x
N image and with the required number of iterations this can lead to a complexity of O(N^3).
Fortunately, a recursive implementation of the algorithm exists in which one or two passes
through the image are usually sufficient, meaning a complexity of O(N^2). Second, although we
have not paid much attention to the issue of object/background connectivity until now (see
Figure 36), it is essential that the connectivity implied by B be matched to the connectivity
associated with the boundary definition of A (see eqs. and ). Finally, as mentioned earlier, it is
important to make the correct choice ("0" or "1") for the boundary condition of the image.
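A direct (non-recursive) sketch of the propagation iteration, reusing the dilate sketch above:

    import numpy as np

    def propagate(seed, mask, connectivity=8):
        """Iterate S(k) = D(S(k-1),B) AND A until S stops changing.
        This is the straightforward O(N^3) form; the fast recursive
        implementation sweeps the image forward and backward instead."""
        s = seed & mask
        while True:
            nxt = dilate(s, connectivity) & mask
            if np.array_equal(nxt, s):
                return s
            s = nxt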
The application of these two operations on a test image is illustrated in Figure 44. In Figure
44a,b the skeleton operation is shown with the endpixel condition (eq. i+ii+iii) and without the
end pixel condition (eq. i+ii). The propagation operation is illustrated in Figure 44c. The original
image, shown in light gray, was eroded by E(A,6N8) to produce the seed image shown in black.
The original was then used as the mask image to produce the final result. The border value used
in this example was "0". Several techniques based upon the use of skeleton and propagation
operations in combination can be found in the literature.
Figure 44: a) Skeleton with end pixels b) Skeleton without end pixels c) Propagation with N8
Gray-Value Morphological Processing
The binary morphological operations can be extended to gray-level images. To simplify
matters we will restrict our presentation to structuring elements, B, that comprise a finite number
of pixels and are convex and bounded. Now, however, the structuring element has gray values
associated with each coordinate position, as does the image A. Gray-level dilation is given by:

Dilation - DG(A,B) = max over [j,k] ∈ B of ( a[m-j, n-k] + b[j,k] )
For a given output coordinate [m,n], the structuring element is summed with a shifted version of
the image and the maximum encountered over all shifts within the J x K domain of B is used as
the result. Should the shifting require values of the image A that are outside the M x N domain of
A, then a decision must be made as to which model for image extension, as described in Section
9.3.2, should be used. Gray-level erosion is given by:

Erosion - EG(A,B) = min over [j,k] ∈ B of ( a[m+j, n+k] - b[j,k] )
The duality between gray-level erosion and gray-level dilation--the gray-level counterpart of eq.
--is:

Duality - EG(A, B) = -DG(-A, -B)

where -A negates the gray values of A and -B denotes the reflected structuring element b[-j,-k].
The definitions of higher order operations such as gray-level opening and gray-level closing are:
Opening - OG(A,B) = DG(EG(A,B), B)

Closing - CG(A,B) = EG(DG(A,B), B)
The important properties that were discussed earlier such as idempotence, translation invariance,
increasing in A, and so forth are also applicable to gray-level morphological processing. The
computational burden of these operations can be
significantly reduced through the use of symmetric structuring elements where b[j,k] = b[-j,-k].
The most common of these is based on the use of B = constant = 0. For this important case, and
using again the domain [j,k] ∈ B, the definitions above reduce to:
Dilation - DG(A,B) = max over [j,k] ∈ B of a[m-j, n-k]

Erosion - EG(A,B) = min over [j,k] ∈ B of a[m+j, n+k]

Opening - OG(A,B) = DG(EG(A,B), B)

Closing - CG(A,B) = EG(DG(A,B), B)
The remarkable conclusion is that the maximum filter and the minimum filter, introduced in
Section 9.4.2, are gray-level dilation and gray-level erosion for the specific structuring element
given by the shape of the filter window with the gray value "0" inside the window. Examples of
these operations are shown in Figure 45. For a rectangular window of size J x K, the
two-dimensional maximum or minimum filter is separable
into two, one-dimensional windows. Further, a one-dimensional maximum or minimum filter can
be written in incremental form. (See Section 9.3.2.) This means that gray-level dilations and
erosions have a computational complexity per pixel that is O(constant), that is, independent of J
and K.
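A sketch of the flat (b = 0) case as plain maximum and minimum filters (NumPy; edge
replication is one possible choice of image-extension model):

    import numpy as np

    def gray_dilate_flat(a, J=3, K=3):
        """Flat gray-level dilation = J x K maximum filter.
        For a rectangular window this is separable into row and
        column passes; the direct form is kept here for clarity."""
        ap = np.pad(a, ((J // 2,) * 2, (K // 2,) * 2), mode="edge")
        M, N = a.shape
        windows = [ap[j:j + M, k:k + N] for j in range(J) for k in range(K)]
        return np.maximum.reduce(windows)

    def gray_erode_flat(a, J=3, K=3):
        """Flat gray-level erosion = J x K minimum filter."""
        ap = np.pad(a, ((J // 2,) * 2, (K // 2,) * 2), mode="edge")
        M, N = a.shape
        windows = [ap[j:j + M, k:k + N] for j in range(J) for k in range(K)]
        return np.minimum.reduce(windows)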
The operations defined above can be used to produce morphological algorithms for smoothing,
gradient determination and a version of the Laplacian. All are constructed from the primitives for
gray-level dilation and gray-level erosion and in all cases the maximum and minimum filters are
taken over square windows.
Morphological smoothing
This algorithm is based on the observation that a gray-level opening smoothes a gray-value
image from above the brightness surface given by the function a[m,n] and the gray-level closing
smoothes from below. The two are applied in sequence:

MorphSmooth(A,B) = CG(OG(A,B), B)
Morphological gradient
For linear filters the gradient filter yields a vector representation (eq. (103)) with a magnitude
(eq. (104)) and direction (eq. (105)). The version presented here generates a morphological
estimate of the gradient magnitude:

Gradient(A,B) = (1/2) ( DG(A,B) - EG(A,B) )
Morphological Laplacian
The morphologically-based Laplacian is given by:

Laplacian(A,B) = (1/2) ( DG(A,B) + EG(A,B) - 2 a[m,n] )
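The three operators compose directly from the flat gray-level sketches above (the image is
assumed to have a float or signed dtype so that the differences do not wrap around):

    def morph_smooth(a):
        """CG(OG(A,B),B): opening (min then max), then closing."""
        o = gray_dilate_flat(gray_erode_flat(a))      # opening
        return gray_erode_flat(gray_dilate_flat(o))   # closing

    def morph_gradient(a):
        """0.5 * (DG(A,B) - EG(A,B)): morphological edge strength."""
        return 0.5 * (gray_dilate_flat(a) - gray_erode_flat(a))

    def morph_laplacian(a):
        """0.5 * (DG(A,B) + EG(A,B) - 2A)."""
        return 0.5 * (gray_dilate_flat(a) + gray_erode_flat(a) - 2.0 * a)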
The effect of these filters is illustrated in Figure 46. All images were processed with a 3 x 3
structuring element as described in eqs. through . Figure 46e was contrast stretched for display
purposes using eq. (78) and the parameters 1% and 99%. Figures 46c,d,e should be compared to
their linear counterparts.

Figure 46: d) Gradient e) Laplacian
Proposed Method
IMAGE PREPROCESSING
During the input image preprocessing stage, 4 linear filters were employed, as shown in the
pre-processing system of Fig. 2. Sobel operators are used to estimate the first derivative of the
input angiogram image in the horizontal and vertical directions. HPh and Mh are the kernels of a
high-pass and a low-pass filter, respectively. The Sobel operators DHh and DVh are kernels with
3x3 elements given by

DHh = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]     DVh = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]

For the low-pass filter Mh, the output image is the arithmetic mean of the gray levels in a 5x5
neighborhood of the same pixel.
Given the kernels associated with each filter, the filtered images may be computed through a
bidimensional convolution with the input image.
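A sketch of this filter bank (assuming SciPy; the Sobel kernels are the standard ones, while the
high-pass kernel HP below is only a placeholder assumption, since the paper's exact HPh kernel
is not reproduced here):

    import numpy as np
    from scipy.signal import convolve2d

    DH = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal derivative
    DV = DH.T                                            # vertical derivative
    HP = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]) / 8.0                  # assumed high-pass
    M  = np.ones((5, 5)) / 25.0                          # 5x5 arithmetic mean

    def preprocess(img):
        """Return the four filtered images via bidimensional convolution."""
        conv = lambda k: convolve2d(img, k, mode="same", boundary="symm")
        return conv(DH), conv(DV), conv(HP), conv(M)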
The system implementation was carried out considering that the input image I and the output
image obtained after defuzzification are both 8-bit quantized; this way their gray levels are
always between 0 and 255. These values define the working interval of the output variable and
the input variable G (the other input variables are not guaranteed to be less than 255). Besides,
three fuzzy sets were created to represent each variable's intensities; these sets were associated
with the linguistic values "low", "medium" and "high". The membership functions adopted for
the fuzzy sets associated with the input G and with the output were triangular functions centered
at 0, 127.5 and 255, as shown in Fig. 3(a). For the sets associated with the other input images,
triangular functions were also adopted for the linguistic values "low" and "medium", but for the
value "high" a sigmoid function was chosen (Fig. 3(b)), since in this case we cannot guarantee
an upper limit for the values these variables can assume.
METHOD DEFINITIONS
The functions adopted to implement the "and" (T-norm) and "or" (S-norm) operations were the
minimum and maximum functions, respectively. The Mamdani method was chosen, with the
fuzzy sets obtained by applying each inference
rule to the input data joined through the add function; the output of the system was then
computed as the centroid of the resulting membership function [12, pages 148-161].
The inference rules were defined so that the output of the system is
high only for those pixels belonging to edges in the input image. Robustness to contrast and
lighting variations was also kept in mind when these rules were established. The first 3 rules were
defined to represent the general notion that at pixels belonging to an edge there is a high
variation of gray level in the vertical or horizontal direction. To guarantee that edges in regions
of relatively low contrast can be detected, the two following rules were established to favor
medium variations of the gray level in a specific direction in regions of low frequency of the
input image (HP "low"). Rules 6 and 7 were chosen in such a way as to avoid including in the
output image pixels belonging to regions of the input where the mean gray level is low. These
regions are proportionally more affected by noise, assuming it is uniformly distributed over the
whole image. The goal here is to design a system which makes it easier to include edges in low
contrast regions, but which does not favor false edges caused by noise. Rules 8 to 11 were
established to avoid forming double edges in the output image (they tend to appear due to
shadows in natural images). Considering that high variations in gray level in the horizontal
direction correspond to vertical edges, we conclude that high values of DH(i, j) and DH(i, j+1) do
not imply edge pixels at (i, j) and (i, j+1) simultaneously. Analogously, high values of DV(i, j)
and DV(i+1, j) do not correspond to edge pixels at (i, j) and (i+1, j). Finally, rule 12 was defined
to avoid including isolated pixels in the output image, favoring only continuous lines. It
also avoids including points caused by noise, since noise tends to generate isolated pixels in the
image which represents the input's edges. The threshold value to be applied may be estimated
from the root mean square (RMS) value associated with the input image [11, page 77-51].
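To make the machinery concrete, here is a toy Mamdani step for a single pixel (the two rules
and the input degrees are illustrative, not the paper's twelve rules): min implements "and", max
implements "or", the clipped output sets are joined by the add function, and the centroid yields
the crisp edge level.

    import numpy as np

    u = np.arange(256.0)                             # output universe (8-bit)
    out_low  = np.clip(1.0 - u / 127.5, 0.0, 1.0)    # triangular output sets
    out_high = np.clip((u - 127.5) / 127.5, 0.0, 1.0)

    # Assume fuzzification already yielded these degrees at some pixel:
    dh_high, dv_high = 0.8, 0.1

    w1 = max(dh_high, dv_high)          # IF DH high OR DV high THEN out high
    w2 = min(1 - dh_high, 1 - dv_high)  # IF DH low AND DV low THEN out low

    agg = np.minimum(w1, out_high) + np.minimum(w2, out_low)  # "add" join
    edge_level = (u * agg).sum() / agg.sum()                  # centroid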
The output of the fuzzy inference system broadly determines the vessel edges. On account of the
noise that remains, mathematical morphology filters are applied next.
MORPHOLOGY FILTERS
Mathematical morphology is a mathematical theory which can be used to process and
analyze images [13-14]. It provides an alternative approach to image processing based on
shape concepts stemming from set theory [15], not on traditional mathematical modeling and
analysis. In the mathematical morphology theory, images are treated as sets, and morphological
transformations, derived from Minkowski addition and subtraction, are defined to extract
features from images. As the performance of classic edge detectors degrades with noise,
morphological edge detectors have been studied [16]. The basic mathematical morphology
operators are dilation and erosion, and the other morphological operators are combinations of
these two basic operations. In the following, we introduce some basic mathematical
morphological operators.
Erosion is a shrinking transformation, which decreases the grey-scale values of the image,
while dilation is an expanding transformation, which increases the grey-scale values of the
image. Both of them are sensitive to image edges, where the grey-scale value changes
markedly. Erosion filters the image from the inside while dilation filters it from the outside.
Opening is erosion followed by dilation and closing is dilation followed by erosion. Opening
generally smoothes the contour of an object and breaks narrow connections. As opposed to
opening, closing tends to fuse narrow breaks, eliminate small holes, and fill gaps in the
contours. Therefore, morphological operations can be used to detect image edges and, at the
same time, denoise the image.
In medical image edge detection, an appropriate structuring element must be selected according
to the texture features of the image, and the size, shape and direction of the structuring element
must all be considered. Usually, unless there is a special demand, a 3x3 structuring element is
selected.
In this paper, a novel mathematical morphology edge detection algorithm is proposed. An
opening-closing operation is first used as preprocessing to filter noise. The image is then
smoothed by a closing followed by a dilation. The image edge is obtained by taking the
difference between the image processed as above and the image before the dilation.
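A sketch of this sequence (assuming SciPy's gray-level morphology with a flat 3x3 structuring
element; the paper's exact structuring element is not specified here):

    import numpy as np
    from scipy import ndimage as ndi

    def morph_edges(img, size=3):
        """Open-close to denoise, close to smooth, then
        edge = dilation(smoothed) - smoothed."""
        a = img.astype(np.int32)   # avoid uint8 wrap-around in the difference
        denoised = ndi.grey_closing(ndi.grey_opening(a, size=size), size=size)
        smoothed = ndi.grey_closing(denoised, size=size)
        return ndi.grey_dilation(smoothed, size=size) - smoothed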
CONCLUSION
In this paper, a novel combination of a fuzzy inference system and mathematical morphology
algorithms is proposed for the segmentation of blood vessels. The results show that the algorithm
is more efficient for the segmentation of angiogram images and for noise cancelling than other
methods such as the Canny method.