
STUDY OF FUZZY MEASURE AND SOME PROPERTIES OF NULL-ADDITIVE FUZZY MEASURE
Thesis
Submitted to the
KUMAUN UNIVERSITY,
NAINITAL, Uttarakhand, INDIA

IN PARTIAL FULFILMENT OF THE REQUIREMENTS


FOR THE DEGREE OF

Doctor of Philosophy
(Mathematics)
May, 2015

UNDER THE SUPERVISION OF


DR. H.S. NAYAL
Associate Professor
Department of Mathematics
Govt. Postgraduate College, Ranikhet (Almora)
UTTARAKHAND.

SUBMITTED BY
PARUL AGARWAL
M. Sc., M. Phil. (Mathematics)

2015

ACKNOWLEDGMENTS
The precious gift of learning is a debt that is difficult to repay; only gratitude can be offered. Indeed, the words at my command are inadequate in form and in spirit to express my heartfelt, deep sense of unbounded gratitude and indebtedness to Dr. H. S. Nayal, Associate Professor, Department of Mathematics, Govt. Postgraduate College, Ranikhet (Almora), Uttarakhand. I feel it a golden opportunity and a proud privilege to work under the inspiring guidance and imperative suggestions of Dr. H. S. Nayal, who is not just a guide but an angel to me. He has not only helped me in my research work but also provided benevolent guidance during the whole degree programme. I have real appreciation and regard for him for his magnanimous and cooperative attitude, painstaking efforts, peerless criticism and especially for making me realize how to capitalize on my strength to work hard and acquire the confidence to deal with every difficulty.

I am immensely grateful to Dr. Sushil Jain and Dr. Priyanka Pathak, Department of Education, Govt. Postgraduate College, Ranikhet (Almora), Uttarakhand, for their scholastic guidance, inspiring suggestions and help at various stages of the research work, and for their constant support, blessings, impeccable counsel and cooperation during the course of the present study.

I have the deepest respect and regard for my father, Shri R. N. Agarwal, and mother, Smt. Kusum Agarwal, for their encouragement, motivation and valuable guidance during this academic venture. I don't have words to express my gratitude and indebtedness to my brother, Mr. Saurabh Agarwal, who has always motivated and inspired me to work. From the deepest core of my heart, I express very affectionate thanks to my sister, Dr. Nitu Agarwal, whose emotive support and care enabled me to bring this work into shape. The love, affection and encouragement that I got from my family are unforgettable. They are my strength, my support and my pride.

I find myself incapable of expressing my warmest thanks to my seniors, juniors and friends for their encouragement, suggestions and moral support during my study period, and for the enjoyable and precious moments shared with them. The research fellowship provided by Kumaun University, Nainital, during the period of my doctoral degree programme is thankfully acknowledged.

Ranikhet
April, 2015

(Parul Agarwal)
Authoress

Contents
CHAPTER 1
Introduction
1.1 Measure
1.2 Fuzzy
1.3 Uncertainty
1.4 Fuzzy measure
1.5 Null-additive fuzzy measure

CHAPTER 2
Possibility Theory versus Probability Theory in Fuzzy Measure Theory
2.1 Concept of associative probabilities to a fuzzy measure
2.2 Shannon Entropy of fuzzy measure
2.3 Basic properties of entropy of fuzzy measure
2.4 Application of entropy of fuzzy measure

CHAPTER 3
Properties of Null-Additive and Absolute Continuity of a Fuzzy Measure
3.1 Absolute continuity of a fuzzy measure
3.2 Some results on null-additive fuzzy measure
3.3 Lebesgue decomposition type null-additive fuzzy measure
3.4 Generalization of the symmetric fuzzy measure
3.5 Properties of fuzzy measures

CHAPTER 4
Properties of Strong Regularity of Fuzzy Measure on Metric Space
4.1 Null-additive fuzzy measure on metric space
4.2 Strong regularity of fuzzy measure on metric space
4.3 Properties of inner/outer regularity of fuzzy measure

CHAPTER 5
Continuous, auto-continuous and completeness of fuzzy measure
5.1 Continuous and auto-continuous of fuzzy measure
5.2 Results on completeness of fuzzy measure space
5.3 Convergence in fuzzy measure
5.4 Some other properties of fuzzy measure and convergence

BIBLIOGRAPHY

Introduction

1.1 Measure:

In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size by [29]. In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the n-dimensional Euclidean space R^n, according to [29], [86].

Measure theory was developed in successive stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, Johann Radon and Maurice Fréchet, among others, by [100], [18], [31]. The main applications of measures are in the foundations of the Lebesgue integral [64], in Andrey Kolmogorov's axiomatisation of probability theory and in ergodic theory [42]. In integration theory, specifying a measure allows one to define integrals on spaces more general than subsets of Euclidean space; moreover, the integral with respect to the Lebesgue measure on Euclidean spaces is more general and has a richer theory than its predecessor, the Riemann integral (see [29]). Probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure by [86]. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system [42].
Technically, a measure is a function that assigns a non-negative real number or +∞ to certain subsets of a set X [42]. It must assign 0 to the empty set and be countably additive: the measure of a large subset that can be decomposed into a countable number of smaller disjoint subsets is the sum of the measures of the smaller subsets by [9]. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure [100]. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra [18]; this means that countable unions, countable intersections and complements of measurable subsets are measurable.
1.1.1 Definition:

Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line, i.e. μ : Σ → [−∞, +∞], is called a measure if it satisfies the following properties, according to [79]:

(a) Non-negativity: μ(E) ≥ 0 for all E ∈ Σ.

(b) Null empty set: μ(∅) = 0.

(c) Countable additivity (or σ-additivity): for all countable collections {E_i}_{i∈I} of pairwise disjoint sets in Σ:

μ(⋃_{i∈I} E_i) = ∑_{i∈I} μ(E_i).   (1a)

One may require that at least one set E has finite measure. Then the empty set automatically has measure zero because of countable additivity, since

μ(E) = μ(E ∪ ⋃_{i=0}^{∞} ∅) = μ(E) + ∑_{i=0}^{∞} μ(∅),   (1b)

which is finite if and only if the empty set has measure zero. If only the second and third conditions of the definition of measure above are met, and μ takes on at most one of the values ±∞, then μ is called a signed measure by [29], [18].
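To make requirement (c) concrete, here is a minimal Python sketch (not part of the cited sources) that checks finite additivity of the counting measure on a small universe; the function counting_measure and the sets E are hypothetical illustrations only.

from itertools import chain

def counting_measure(subset):
    # Counting measure: assigns to a finite set the number of points it contains.
    return len(subset)

# Pairwise disjoint measurable sets E1, E2, E3 drawn from the power set of X = {1, ..., 9}
E = [frozenset({1, 2}), frozenset({3, 4, 5}), frozenset({9})]
union = frozenset(chain.from_iterable(E))

# Additivity (1a), restricted here to a finite collection, and the null empty set axiom (b)
assert counting_measure(union) == sum(counting_measure(e) for e in E)
assert counting_measure(frozenset()) == 0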
1.1.2 Properties:

Several further properties can be derived from the definition of a countably additive measure by [31], [9], [79].

(a) Monotonicity: A measure μ is monotonic: if E₁ and E₂ are measurable sets with E₁ ⊆ E₂, then

μ(E₁) ≤ μ(E₂).   (1c)

(b) Measures of infinite unions of measurable sets: A measure μ is countably subadditive: if E₁, E₂, E₃, … is a countable sequence of sets in Σ, not necessarily disjoint, then

μ(⋃_{i=1}^{∞} E_i) ≤ ∑_{i=1}^{∞} μ(E_i).   (1d)

(c) Continuity from below: A measure μ is continuous from below: if E₁, E₂, E₃, … are measurable sets and E_n is a subset of E_{n+1} for all n, then the union of the sets E_n is measurable, and

μ(⋃_{i=1}^{∞} E_i) = lim_{i→∞} μ(E_i).   (1e)

(d) Measures of infinite intersections of measurable sets: A measure μ is continuous from above: if E₁, E₂, E₃, … are measurable sets and E_{n+1} is a subset of E_n for all n, then the intersection of the sets E_n is measurable; furthermore, if at least one of the E_n has finite measure, then

μ(⋂_{i=1}^{∞} E_i) = lim_{i→∞} μ(E_i).   (1f)

This property is false without the assumption that at least one of the E_n has finite measure. For instance, for each n ∈ N, let E_n = [n, ∞) ⊆ R, which all have infinite Lebesgue measure, but the intersection is empty.


1.1.3. Some measures are defined here by [9], [18], [79], [100]:

I. Sigma-finite measure: A measure space (X, Σ, μ) is called finite if μ(X) is a finite real number. It is called σ-finite if X can be decomposed into a countable union of measurable sets of finite measure. A set in a measure space has σ-finite measure if it is a countable union of sets with finite measure.

For example, the real numbers with the standard Lebesgue measure are σ-finite but not finite. Consider the closed intervals [k, k+1] for all integers k; there are countably many such intervals, each has measure 1, and their union is the entire real line. Alternatively, consider the real numbers with the counting measure, which assigns to each finite set of real numbers the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to the Lindelöf property of topological spaces. They can also be thought of as a vague generalization of the idea that a measure space may have uncountable measure.
II. Complete measure: A measurable set X is called a null set if μ(X) = 0. A subset of a null set is called a negligible set. A negligible set need not be measurable, but every measurable negligible set is automatically a null set. A measure μ is called a complete measure if every negligible set is measurable.

A measure can be extended to a complete one by considering the σ-algebra of subsets Y which differ by a negligible set from a measurable set X, that is, such that the symmetric difference of X and Y is contained in a null set.
III. κ-Additive measure: A measure μ on Σ is κ-additive if for any cardinal λ < κ and any family {X_α : α < λ} of pairwise disjoint sets in Σ the following conditions hold:

[a] ⋃_{α<λ} X_α ∈ Σ;
[b] μ(⋃_{α<λ} X_α) = ∑_{α<λ} μ(X_α).

Note that the second condition is equivalent to the statement that the ideal of null sets is κ-complete. Some other important measures are listed here by [29], [86], [9].


IV. Lebesgue Measure: The Lebesgue measure on R is a complete translation-invariant measure on a σ-algebra containing the intervals in R such that μ([0,1]) = 1; and every other measure with these properties extends Lebesgue measure.

The Hausdorff measure is a generalization of the Lebesgue measure to sets with non-integer dimension, in particular, fractal sets. The Haar measure for a locally compact topological group is also a generalization of the Lebesgue measure (and also of counting measure and circular angle measure) and has similar uniqueness properties. Circular angle measure is invariant under rotation, and hyperbolic angle measure is invariant under squeeze mapping.

Other measures used in various theories include: Borel measure, Jordan measure, Ergodic measure, Euler measure, Gaussian measure, Baire measure, Radon measure, Young measure and so on by [31], [79].
1.1.4. Generalizations: For certain purposes, it is useful to have a measure whose values are not restricted to the non-negative reals or infinity. For instance, a countably additive set function with values in the (signed) real numbers is called a signed measure, while such a function with values in the complex numbers is called a complex measure. Measures that take values in Banach spaces have been studied extensively by [42]. A measure that takes values in the set of self-adjoint projections on a Hilbert space is called a projection-valued measure; these are used in functional analysis for the spectral theorem. When it is necessary to distinguish the usual measures, which take non-negative values, from generalizations, the term positive measure is used. Positive measures are closed under conical combination but not general linear combination, while signed measures are the linear closure of positive measures [100].

Another generalization is the finitely additive measure, which is sometimes called a content. This is the same as a measure except that instead of requiring countable additivity we require only finite additivity. Finitely additive measures are connected with notions such as Banach limits, the dual of L^∞ and the Stone–Čech compactification by [9]. All these are linked in one way or another to the axiom of choice.

1.2. Fuzzy:

Fuzzy mathematics forms a branch of mathematics related to fuzzy set theory and fuzzy logic. It started in 1965 after the publication of Lotfi Asker Zadeh's seminal work "Fuzzy Sets" by [60]. A fuzzy subset A of a set X is a function A : X → L, where L is the interval [0,1]. This function is also called a membership function. A membership function is a generalization of a characteristic function or an indicator function of a subset defined for L = {0,1}. More generally, one can use a complete lattice L in a definition of a fuzzy subset A [55].

The evolution of the fuzzification of mathematical concepts can be broken down into three stages by [23]:

(a) straightforward fuzzification during the sixties and seventies,


(b) the explosion of the possible choices in the generalization process during the
eighties,
(c) the standardization, axiomatization and L-fuzzification in the nineties.
Usually, a fuzzification of mathematical concepts is based on a generalization of these concepts from characteristic functions to membership functions. Let A and B be two fuzzy subsets of X. The intersection A ∩ B and union A ∪ B are defined as follows:

(A ∩ B)(x) = min(A(x), B(x)),   (A ∪ B)(x) = max(A(x), B(x))   for all x ∈ X.

Instead of min and max one can use a t-norm and t-conorm, respectively, by [24]; for example, min(a, b) can be replaced by the multiplication ab. A straightforward fuzzification is usually based on the min and max operations because in this case more properties of traditional mathematics can be extended to the fuzzy case.
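As a small illustration of the pointwise min/max operations just defined, the following Python sketch computes the intersection and union of two fuzzy subsets; the membership values assigned to A and B are hypothetical.

X = ["x1", "x2", "x3"]
A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}   # membership function of fuzzy subset A
B = {"x1": 0.5, "x2": 0.4, "x3": 0.9}   # membership function of fuzzy subset B

intersection = {x: min(A[x], B[x]) for x in X}   # (A ∩ B)(x) = min(A(x), B(x))
union = {x: max(A[x], B[x]) for x in X}          # (A ∪ B)(x) = max(A(x), B(x))

print(intersection)   # {'x1': 0.2, 'x2': 0.4, 'x3': 0.9}
print(union)          # {'x1': 0.5, 'x2': 0.7, 'x3': 1.0}

Replacing min by the product ab would give the t-norm variant mentioned above.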
A very important generalization principle used in the fuzzification of algebraic operations is the closure property. Let * be a binary operation on X. The closure property for fuzzy subsets of X is that for all x, y ∈ X,

A(x * y) ≥ min(A(x), A(y)).

Let (G, *) be a group and A a fuzzy subset of G. Then A is a fuzzy subgroup of G if for all x, y in G,

A(x * y⁻¹) ≥ min(A(x), A(y⁻¹))   [24].

A fuzzy concept is a concept of which the meaningful content, value, or boundaries of application can vary considerably according to context or conditions, instead of being fixed once and for all [97]. This generally means the concept is vague, lacking a fixed, precise meaning, without however being meaningless altogether [91]. It has a meaning, or multiple meanings. But these can become clearer only through further elaboration and specification, including a closer definition of the context in which they are used. Fuzzy concepts lack clarity and are difficult to test or operationalize [6].

In logic, fuzzy concepts are often regarded as concepts which in their application, or formally speaking, are neither completely true nor completely false, or which are partly true or partly false; they are ideas which require further elaboration, specification or qualification to understand their applicability [75] (the conditions under which they truly make sense).
In mathematics and statistics, a fuzzy variable (such as "the temperature", "hot" or "cold") is a value which could lie in a probable range defined by quantitative limits or parameters, and which can be usefully described with imprecise categories (such as "high", "medium" or "low").

In mathematics and computer science, the gradations of applicable meaning of a fuzzy concept are described in terms of quantitative relationships defined by logical operators. Such an approach is sometimes called "degree-theoretic semantics" by logicians and philosophers [92], but the more usual term is fuzzy logic or many-valued logic. The basic idea is that a real number is assigned to each statement written in a language, within a range from 0 to 1, where 1 means that the statement is completely true, and 0 means that the statement is completely false, while values less than 1 but greater than 0 represent that the statements are partly true, to a given, quantifiable extent. This makes it possible to analyze a distribution of statements for their truth-content, identify data patterns, make inferences and predictions, and model how processes operate.
Fuzzy reasoning (i.e. reasoning with graded concepts) has many practical uses [57]. It is nowadays widely used in the programming of vehicle and transport electronics, household appliances, video games, language filters, robotics, and various kinds of electronic equipment used for pattern recognition, surveying and monitoring (such as radars). Fuzzy reasoning is also used in artificial intelligence and virtual intelligence research [74]. "Fuzzy risk scores" are used by project managers and portfolio managers to express risk assessments [49].
A fuzzy set can be defined mathematically by assigning to each possible individual in the universe of discourse a value representing its grade of membership in the fuzzy set. This grade corresponds to the degree to which that individual is similar or compatible with the concept represented by the fuzzy set. Thus, individuals may belong in the fuzzy set to a greater or lesser degree as indicated by a larger or smaller membership grade. As already mentioned, these membership grades are very often represented by real-number values ranging in the closed interval between 0 and 1 (by [37] on pages 4-5). Thus, a fuzzy set representing our concept of "sunny" might assign a degree of membership of 1 to a cloud cover of 0%, 0.8 to a cloud cover of 20%, 0.4 to a cloud cover of 30%, and 0 to a cloud cover of 75%. These grades signify the degree to which each percentage of cloud cover approximates our subjective concept of "sunny", and the set itself models the semantic flexibility inherent in such a common linguistic term. Because full membership and full non-membership in the fuzzy set can still be indicated by the values of 1 and 0, respectively, we can consider the concept of a crisp set to be a restricted case of the more general concept of a fuzzy set, for which only these two grades of membership are allowed [37].

1.3. Uncertainty:

Fuzzy concepts can generate uncertainty because they are imprecise (especially if they refer to a process in motion or a process of transformation where something is in the process of turning into something else). In that case, they do not provide a clear orientation for action or decision-making; reducing fuzziness, perhaps by applying fuzzy logic, would generate more certainty [75]. A fuzzy concept may indeed provide more security, because it provides a meaning for something when an exact concept is unavailable, which is better than not being able to denote it at all [90].
It is generally agreed that an important point in the evolution of the modern concept of uncertainty was the publication of a seminal paper by [60], even though some ideas presented in the paper were envisioned a few years earlier by [76]. In this paper, Zadeh introduced a theory whose objects, fuzzy sets, are sets with boundaries that are not precise. The membership in a fuzzy set is not a matter of affirmation or denial, but rather a matter of degree [37]. The significance of [60] was that it challenged not only probability theory as the sole agent for uncertainty, but the very foundations upon which probability theory is based: Aristotelian two-valued logic. When A is a fuzzy set and x is a relevant object, the proposition "x is a member of A" is not necessarily either true or false, as required by two-valued logic, but it may be true only to some degree, the degree to which x is actually a member of A. It is most common, but not required, to express degrees of membership in fuzzy sets as well as degrees of truth of the associated propositions by numbers in the closed unit interval [0,1]. The extreme values in this interval, 0 and 1, then represent, respectively, the total denial and affirmation of the membership in a given fuzzy set as well as the falsity and truth of the associated proposition.
For instance, suppose we are trying to diagnose an ill patient. In simplified terms, we may be trying to determine whether this patient belongs to the set of people with, say, pneumonia, bronchitis, emphysema, or a common cold. A physical examination may provide us with helpful yet inconclusive evidence (by [37] on page 179). For example, we might assign a high value, say 0.75, to our best guess, bronchitis, and a lower value to the other possibilities, such as 0.45 for the set consisting of pneumonia and emphysema and 0 for a common cold. These values reflect the degrees to which the patient's symptoms provide evidence for the individual diseases or sets of diseases; their collection constitutes a fuzzy measure representing the uncertainty associated with several well-defined alternatives. It is important to realize that this type of uncertainty, which results from information deficiency, is fundamentally different from fuzziness, which results from the lack of sharp boundaries.
Uncertainty-based information was first conceived in terms of classical set theory and in terms of probability theory. The term information theory has almost invariably been used to refer to a theory based upon the well-known measure of probabilistic uncertainty established by [12]. Research on a broader conception of uncertainty-based information, liberated from the confines of classical set theory and probability theory, began in the early eighties. The name generalized information theory was coined for a theory based upon this broader conception. The ultimate goal of generalized information theory is to capture properties of uncertainty-based information formalized within any feasible mathematical framework. Although this goal has not been fully achieved as yet, substantial progress has been made in this direction. In addition to classical set theory and probability theory, uncertainty-based information is now well understood in fuzzy set theory, possibility theory and evidence theory.

Three types of uncertainty are now recognized in the five theories in which measurement of uncertainty is currently well established. These three uncertainty types are: nonspecificity (or imprecision), which is connected with sizes (cardinalities) of relevant sets of alternatives; fuzziness (or vagueness), which results from imprecise boundaries of fuzzy sets; and strife (or discord), which expresses conflicts among the various sets of alternatives.
1.3.1 Nonspecificity (or Imprecision): Measurement of uncertainty and the associated information was first conceived in terms of classical set theory. It was shown by [39] that using a function from the class of functions

U(A) = c · log_b |A|,   (1g)

where |A| denotes the cardinality of a finite nonempty set A, and b, c are positive constants (b > 1, c > 0), is the only sensible way to measure the amount of uncertainty associated with a finite set of possible alternatives. Each choice of values of the constants b and c determines the unit in which uncertainty is measured. When b = 2 and c = 1, which is the most common choice, uncertainty is measured in bits, and we obtain

U(A) = log₂ |A|.   (1h)

One bit of uncertainty is equivalent to the total uncertainty regarding the truth or falsity of one proposition. The set function U defined by equation (1h) is called the Hartley function. Its uniqueness as a measure of uncertainty associated with sets of alternatives can also be proven axiomatically.
A natural generalization of the Hartley function from classical set theory to fuzzy set theory was proposed in the early eighties under the name U-uncertainty. For any nonempty fuzzy set A defined on a finite universal set X, the generalized Hartley function [39] has the form

U(A) = (1/h(A)) ∫₀^{h(A)} log₂ |αA| dα,   (1i)

where |αA| denotes the cardinality of the α-cut of A and h(A) is the height of A. Observe that U(A), which measures the nonspecificity of A, is a weighted average of the values of the Hartley function for all distinct α-cuts of the normalized counterpart of A, defined by A(x)/h(A) for all x ∈ X. Each weight is the difference between the value of α of a given α-cut and that of the immediately preceding α-cut. Fuzzy sets that are equal when normalized have the same nonspecificity measured by the function U [39].
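A minimal Python sketch of equations (1h) and (1i) for a discrete fuzzy set, assuming the integral in (1i) reduces to a finite sum over the distinct membership grades; the fuzzy set A below is hypothetical.

import math

def hartley(n):
    # Hartley function of a crisp set with n elements, equation (1h)
    return math.log2(n)

def u_uncertainty(A):
    # Nonspecificity of a discrete fuzzy set A = {element: membership}, equation (1i)
    levels = sorted({a for a in A.values() if a > 0})   # distinct positive membership grades
    h = levels[-1]                                      # height of A
    total, prev = 0.0, 0.0
    for a in levels:
        cut_size = sum(1 for x in A if A[x] >= a)       # size of the alpha-cut for alpha in (prev, a]
        total += (a - prev) * hartley(cut_size)
        prev = a
    return total / h                                    # weighted average of Hartley values

A = {"x1": 1.0, "x2": 0.6, "x3": 0.6, "x4": 0.2}
print(u_uncertainty(A))   # 0.2*log2(4) + 0.4*log2(3) + 0.4*log2(1), roughly 1.03 bits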

1.3.2. Fuzziness (or Vagueness): The second type of uncertainty that involves fuzzy sets (but not crisp sets) is fuzziness (or vagueness). In general, a measure of fuzziness is a function f : P(X) → R⁺, where P(X) denotes the set of all fuzzy subsets of X (the fuzzy power set). For each fuzzy set A, this function assigns a nonnegative real number f(A) that expresses the degree to which the boundary of A is not sharp [37].

One way is to measure the fuzziness of any set A by a metric distance between its membership grade function and the membership grade function (or characteristic function) of the nearest crisp set [60]. Even when committing to this conception of measuring fuzziness, the measurement is not unique. To make it unique, we have to choose a suitable distance function.

Another way of measuring fuzziness, which seems more practical as well as more general, is to view the fuzziness of a set in terms of the lack of distinction between the set and its complement. Indeed, it is precisely the lack of distinction between sets and their complements that distinguishes fuzzy sets from crisp sets [60]. The less a set differs from its complement, the fuzzier it is. Let us restrict our discussion to this view of fuzziness, which is currently predominant in the literature.
1.3.3. Strife (or Discord): This type of uncertainty, which is connected with conflicts among evidential claims, has been far more controversial. Although it is generally agreed that a measure of this type of uncertainty must be a generalization of the well-established Shannon entropy [39] from probability theory, it has been less clear which of the proposed candidates, all of which collapse to the Shannon entropy within the domain of probability theory, is the right generalization. As argued in the companion paper [35], each of these measures is deficient in some traits. To overcome these deficiencies, another measure was proposed in the companion paper [35], which is called a measure of discord. This measure is expressed by a function D and defined by the formula

D(m) = − ∑_{A∈F} m(A) log₂ [1 − ∑_{B∈F} m(B) |B − A| / |B|].   (1j)

The rationale for choosing this function is explained as follows. The term

Con(A) = ∑_{B∈F} m(B) |B − A| / |B|   (1k)

in equation (1j) expresses the sum of conflicts of the individual evidential claims m(B) for all B ∈ F with respect to the evidential claim m(A) focusing on a particular set A; each individual conflict is properly scaled by the degree to which the subsethood relation B ⊆ A is violated. The function

−log₂ [1 − Con(A)],

which is employed in equation (1j), is monotonic increasing with Con(A) and, consequently, it represents the same quantity as Con(A), but on a logarithmic scale. The use of the logarithmic scale is motivated in the same way as in the case of the Shannon entropy [39].
The function D is clearly a measure of the average conflict among evidential claims within a given body of evidence. Consider, as an example, incomplete information regarding the age of a person X. Assume that the information is expressed by two evidential claims pertaining to the age of X: "X is between 15 and 17 years old" with degree m(A), where A = [15, 17], and "X is a teenager" with degree m(B), where B = [13, 19]. Clearly, the weaker second claim does not conflict with the stronger first claim. Assume now that A ⊃ B; in this case, the situation is inverted: the claim focusing on B is not implied by the claim focusing on A and, consequently, m(B) does conflict with m(A) to a degree proportional to the number of elements in A that are not covered by B. This conflict is not captured by the function Con, since |B − A| = 0 in this case.

It follows from these observations that the total conflict of evidential claims within a body of evidence {F, m} with respect to a particular claim m(A) should be expressed by the function

CON(A) = ∑_{B∈F} m(B) |A − B| / |A|   (1l)

rather than the function Con given by equation (1k). Replacing Con(A) with CON(A), we obtain a new function, which is better justified as a measure of conflict in evidence theory than the function D. This new function, which is called strife and denoted by S, is defined by the form

S(m) = − ∑_{A∈F} m(A) log₂ [1 − ∑_{B∈F} m(B) |A − B| / |A|].   (1m)

For ordered possibility distributions 1 = r₁ ≥ r₂ ≥ … ≥ r_n, the form of the function S (strife) in possibility theory is

S(r) = U(r) − ∑_{i=2}^{n} (r_i − r_{i+1}) log₂ ∑_{j=1}^{i} r_j,   (1n)

where U(r) is the measure of possibilistic nonspecificity (U-uncertainty) and r_{n+1} = 0 by convention. The maximum value of possibilistic strife, given by equation (1n), depends on n in exactly the same way as the maximum value of possibilistic discord: it increases with n and converges to a constant, estimated as 0.892 as n → ∞ [51]. However, the possibility distributions for which the maximum of possibilistic strife is obtained (one for each value of n) are different from those for possibilistic discord. This property and the intuitive justification of the function S make this function a better candidate for the entropy-like measure in Dempster–Shafer theory (DST) than any of the previously considered functions.
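The following Python sketch evaluates the strife measure (1m) for a small, hypothetical body of evidence; the focal sets reuse the age example, here treated as integer ranges [15, 17] and [13, 19], which is an assumption made only for this illustration.

import math

def strife(m):
    # Strife S(m) of a body of evidence, equation (1m); m maps focal sets (frozensets) to basic assignments.
    total = 0.0
    for A, mA in m.items():
        con = sum(mB * len(A - B) / len(A) for B, mB in m.items())   # CON(A), equation (1l)
        total -= mA * math.log2(1 - con)
    return total

m = {frozenset(range(15, 18)): 0.6,    # "between 15 and 17", degree 0.6
     frozenset(range(13, 20)): 0.4}    # "teenager", degree 0.4
print(strife(m))   # about 0.24 bits; only the broader second claim is in partial conflict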


1.3.4. Applications of Measures of Uncertainty:

Once uncertainty measures become well justified, they can be used in many contexts for managing uncertainty and the associated information. For example, they can be used for extrapolating evidence, assessing the strength of the relationship between given groups of variables, assessing the influence of given input variables on given output variables, measuring the loss of information when a system is simplified, and the like. In many problem situations, the relevant measures of uncertainty are applicable only in their conditional or relative terms. The use of relevant uncertainty measures is as broad as the use of any relevant measuring instrument [33]. Three basic principles of uncertainty were developed to guide the use of uncertainty measures in different situations. These principles are: a principle of minimum uncertainty, a principle of maximum uncertainty, and a principle of uncertainty invariance.

The principle of minimum uncertainty is an arbitration principle. It is used for narrowing down solutions in various systems problems that involve uncertainty. It guides the selection of meaningful alternatives from possible solutions of problems in which some of the initial information is inevitably lost, but in different solutions it is lost to varying degrees. The principle states that we should accept only those solutions with the least loss of information, i.e. whose uncertainty is minimal. An application of the principle of minimum uncertainty is the area of conflict-resolution problems; for some development of this principle see [38]. For example, when we integrate several overlapping models into one larger model, the models may be locally inconsistent. It is reasonable to require that each of the models be appropriately adjusted in such a way that the overall model becomes consistent. It is obvious that some information contained in the given models is inevitably lost by these adjustments. This is not desirable. Hence, we should minimize this loss of information; that is, we should accept only those adjustments for which the total increase of uncertainty is minimal.
The principle of maximum uncertainty is essential for any problem that involves ampliative reasoning. This is reasoning in which conclusions are not entailed in the given premises. Ampliative reasoning is indispensable to science in a variety of ways. For example, when we want to estimate microstates from the knowledge of relevant macrostates and partial information regarding the microstates, we must resort to ampliative reasoning. Ampliative reasoning is also common and important in our daily life, where the principle of maximum uncertainty is not always adhered to. Its violation leads almost invariably to conflicts in human communication, as well expressed by [93]: "...whenever you find yourself getting angry about a difference in opinion, be on your guard; you will probably find, on examination, that your belief is getting beyond what the evidence warrants." The principle may be expressed by the following requirement: in any ampliative inference, use all information available, but make sure that no additional information is unwittingly added. The principle of maximum uncertainty is applicable in situations in which we need to go beyond conclusions entailed by verified premises [33]. The principle states that any conclusion we make should maximize the relevant uncertainty within constraints given by the verified premises. In other words, the principle guides us to utilize all the available information but at the same time fully recognize our ignorance. This principle is useful, for example, when we need to reconstruct an overall system from the knowledge of some subsystems. The principle of maximum uncertainty is well developed and broadly utilized within classical information theory [53], where it is called the principle of maximum entropy [37].

The last principle, the principle of uncertainty invariance, is of relatively recent origin [19]. Its purpose is to guide meaningful transformations between various theories of uncertainty. The principle postulates that the amount of uncertainty should be preserved in each transformation of uncertainty from one mathematical framework to another. The principle was first studied in the context of probability-possibility transformations [50]. Unfortunately, at the time, no well-justified measure of uncertainty was available. Due to the unique connection between uncertainty and information, the principle of uncertainty invariance can also be conceived as a principle of information invariance or information preservation [34].

1.4. Fuzzy measure:

In mathematics, fuzzy measure theory considers a number of special classes of measures, each of which is characterized by a special property. Some of the measures used in this theory are plausibility and belief measures, fuzzy set membership functions and the classical probability measures. In fuzzy measure theory, the conditions are precise, but the information about an element alone is insufficient to determine which special classes of measure should be used [44]. The central concept of fuzzy measure theory is the fuzzy measure, which was introduced by Choquet in 1953 by [44] and independently defined by Sugeno in 1974 by [72] in the context of fuzzy integrals.
Consider, however, the jury members for a criminal trial who are uncertain about the guilt or innocence of the defendant. The uncertainty in this situation seems to be of a different type; the set of people who are guilty of the crime and the set of innocent people are assumed to have very distinct boundaries. The concern, therefore, is not with the degree to which the defendant is guilty, but with the degree to which the evidence proves his membership in either the crisp set of guilty people or the crisp set of innocent people. We assume that perfect evidence would point to full membership in one and only one of these sets. However, our evidence is rarely, if ever, perfect, and some uncertainty usually prevails. In order to represent this type of uncertainty, we could assign a value to each possible crisp set to which the element in question might belong. This value would indicate the degree of evidence or certainty of the element's membership in the set. Such a representation of uncertainty is known as a fuzzy measure.

In classical, two-valued logic, we would have to distinguish "cold" from "not cold" by fixing a strict changeover point. We might decide that anything below 8 degrees Celsius is cold, and anything else is not cold. This can be rather arbitrary. Fuzzy logic lets us avoid having to choose an arbitrary changeover point, essentially by allowing a whole spectrum of degrees of "coldness". A set of temperatures like "hot" or "cold" is represented by a function. Given a temperature, the function will return a number representing the degree of membership in that set. This number is called a fuzzy measure.
1.4.1 Example: Some fuzzy measure values for the set "cold":

Temperature in °C    Fuzzy measure
-273                 1      Cold
-40                  1      Cold
0                    0.9    Not quite cold
5                    0.7    On the cold side
10                   0.3    A bit cold
15                   0.1    Barely cold
100                  0      Not cold
1000                 0      Not cold

The basis of the idea of coldness may be how people use the word "cold": perhaps 30% of people think that 10°C is cold (function value 0.3) and 90% think that 0°C is cold (function value 0.9). It may also depend on the context. In terms of the weather, "cold" means one thing; in terms of the temperature of the coolant in a nuclear reactor, "cold" may mean something else, so we would need to have a "cold" function appropriate to our context.
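A small Python sketch of such a "cold" function, interpolating linearly between the tabulated points above; the interpolation rule itself is an assumption introduced only for illustration.

def cold(temperature_c):
    # Degree of membership of a temperature (degrees Celsius) in the fuzzy set "cold",
    # obtained by linear interpolation between the tabulated points.
    table = [(-273, 1.0), (-40, 1.0), (0, 0.9), (5, 0.7),
             (10, 0.3), (15, 0.1), (100, 0.0), (1000, 0.0)]
    if temperature_c <= table[0][0]:
        return table[0][1]
    for (t0, g0), (t1, g1) in zip(table, table[1:]):
        if temperature_c <= t1:
            return g0 + (g1 - g0) * (temperature_c - t0) / (t1 - t0)
    return table[-1][1]

print(cold(0))    # 0.9
print(cold(10))   # 0.3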
1.4.2 Definition:

Given a universal set X and a non-empty family C of subsets of X, a fuzzy measure on ⟨X, C⟩ is a function g : C → [0,1] that satisfies the following requirements by [37]:

[1] Boundary requirements: g(∅) = 0 and g(X) = 1.

[2] Monotonicity: if A ⊆ B, then g(A) ≤ g(B) for all A, B ∈ C.

[3] Continuity from below: for any increasing sequence A₁ ⊆ A₂ ⊆ … of sets in C, if ⋃_{i=1}^{∞} A_i ∈ C, then lim_{i→∞} g(A_i) = g(⋃_{i=1}^{∞} A_i).

[4] Continuity from above: for any decreasing sequence A₁ ⊇ A₂ ⊇ … of sets in C, if ⋂_{i=1}^{∞} A_i ∈ C, then lim_{i→∞} g(A_i) = g(⋂_{i=1}^{∞} A_i).

The boundary requirements [1] state that the element in question definitely does not belong to the empty set and definitely does belong to the universal set. The empty set does not contain any element, hence it cannot contain the element of our interest either; the universal set contains all elements under consideration in each particular context, therefore it must contain our element as well.

Requirement [2] states that the evidence of the membership of an element in a set must be at least as great as the evidence that the element belongs to any subset of that set. Indeed, if we know with some degree of certainty that the element belongs to a set, then our degree of certainty that it belongs to a larger set containing the former set can be greater or equal, but it cannot be smaller. Requirements [3] and [4] are clearly applicable only to an infinite universal set. They can therefore be disregarded when the universal set is finite. Fuzzy measures are usually defined on families C that satisfy appropriate properties (rings, semirings, σ-algebras, etc.). In some cases, C consists of the full power set P(X) [37].

Three additional remarks regarding this definition are needed. First, functions that satisfy [1], [2], and either [3] or [4] are equally important in fuzzy measure theory as functions that satisfy all four requirements. These functions are called semicontinuous fuzzy measures; they are either continuous from below or continuous from above. Second, it is sometimes necessary to generalize fuzzy measures by extending the range of the function from [0, 1] to the set of nonnegative real numbers and by excluding the second boundary requirement, g(X) = 1. These generalizations are not desirable for our purpose. Third, fuzzy measures are generalizations of probability measures or generalizations of classical measures. The generalization is obtained by replacing the additivity requirement with the weaker requirements of monotonicity and continuity or, at least, semicontinuity [106].
1.4.3 Properties of fuzzy measures: For any A, B ⊆ X, a fuzzy measure g is:

1. additive, if g(A ∪ B) = g(A) + g(B) for all A ∩ B = ∅;

2. superadditive, if g(A ∪ B) ≥ g(A) + g(B) for all A ∩ B = ∅;

3. subadditive, if g(A ∪ B) ≤ g(A) + g(B) for all A ∩ B = ∅;

4. supermodular, if g(A ∪ B) + g(A ∩ B) ≥ g(A) + g(B);

5. submodular, if g(A ∪ B) + g(A ∩ B) ≤ g(A) + g(B);

6. symmetric, if |A| = |B| implies g(A) = g(B);

7. Boolean, if g(A) = 0 or g(A) = 1.

Let A and B be any two sets; then A ∩ B ⊆ A and A ∩ B ⊆ B. The monotonicity of fuzzy measures implies that every fuzzy measure g satisfies, for any three sets A, B, A ∩ B ∈ C, the inequality

g(A ∩ B) ≤ min[g(A), g(B)].   (1o)

Similarly, A ⊆ A ∪ B and B ⊆ A ∪ B for any two sets, so the monotonicity of fuzzy measures implies that every fuzzy measure g satisfies, for any three sets A, B, A ∪ B ∈ C, the inequality

g(A ∪ B) ≥ max[g(A), g(B)].   (1p)
Understanding the properties of fuzzy measures is useful in applications. When a fuzzy measure is used to define a function such as the Sugeno integral or Choquet integral, these properties are crucial in understanding the function's behavior. For instance, the Choquet integral with respect to an additive fuzzy measure reduces to the Lebesgue integral [47]. In discrete cases, a symmetric fuzzy measure results in the ordered weighted averaging (OWA) operator. Submodular fuzzy measures result in convex functions, while supermodular fuzzy measures result in concave functions when used to define a Choquet integral [72].
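As a sketch of the discrete Choquet integral mentioned above (not taken from the cited sources), the Python code below integrates a function f with respect to a fuzzy measure g given on the whole power set; the particular g used here, g(A) = |A|/|X|, is a hypothetical measure that is both additive and symmetric, so the result coincides with the arithmetic mean, in line with the OWA remark.

def choquet(f, g):
    # Discrete Choquet integral of f (dict: element -> value) with respect to the
    # fuzzy measure g (dict: frozenset -> value in [0,1], g(empty set) = 0, g(X) = 1).
    order = sorted(f, key=f.get)                 # elements in increasing order of f
    total, prev = 0.0, 0.0
    for i, x in enumerate(order):
        upper = frozenset(order[i:])             # elements whose value is at least f(x)
        total += (f[x] - prev) * g[upper]
        prev = f[x]
    return total

X = ["a", "b", "c"]
g = {frozenset(s): len(s) / len(X) for s in
     [[], ["a"], ["b"], ["c"], ["a", "b"], ["a", "c"], ["b", "c"], X]}
f = {"a": 0.2, "b": 0.5, "c": 0.9}
print(choquet(f, g))   # 0.533..., the arithmetic mean of the three values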
Since fuzzy measures are defined on the power set (or, more formally, on the σ-algebra associated with X), even in discrete cases the number of variables can be quite high (2^|X|). For this reason, in the context of multi-criteria decision analysis and other disciplines, simplification assumptions on the fuzzy measure have been introduced so that it is less computationally expensive to determine and use. For instance, when it is assumed that the fuzzy measure is additive, it will hold that

g(E) = ∑_{i∈E} g({i}),   (1q)

and the values of the fuzzy measure can be evaluated from the values on X. Similarly, a symmetric fuzzy measure is defined uniquely by |X| values. Two important fuzzy measures that can be used are the Sugeno λ-fuzzy measure and k-additive measures, introduced by Sugeno and Grabisch respectively [66], [68].

Fuzzy measure theory is of interest because of its three special branches: probability theory, evidence theory and possibility theory. Although our principal interest is in possibility theory and its comparison with probability theory, evidence theory will allow us to examine and compare the two theories from a broader perspective.
1.4.4 Probability Theory:

Probability represents a unique encoding of incomplete information. The essential task of probability theory is to provide methods for translating incomplete information into this code. The code is unique because it provides a method which satisfies the following set of properties for any system [95].

1. If a problem can be solved in more than one way, all ways must lead to the same answer.

2. The question posed and the way the answer is found must be totally transparent. There must be no leaps of faith required to understand how the answer followed from the given information.

3. The methods of solution must not be ad hoc. They must be general and admit to being used for any problem, not just a limited class of problems. Moreover, the applications must be totally honest. For example, it is not proper for someone to present incomplete information, develop an answer and then prove that the answer is incorrect because in the light of additional information a different answer is obtained.

4. The process should not introduce information that is not present in the original statement of the problem.
1.4.4.1 Basic Terminologies Used in Probability Theory:

(I) The Axioms of Probability: The language system contains a set of axioms that are used to constrain the probabilities assigned to events [95]. Four axioms of probability are as follows:

1. All values of probabilities are between zero and one, i.e. 0 ≤ P(A) ≤ 1 for all A.

2. Probabilities of events that are necessarily true have a value of one, and those that are necessarily false have a value of zero, i.e. P(True) = 1 and P(False) = 0.

3. The probability of a disjunction is given by P(A ∨ B) = P(A) + P(B) − P(A ∧ B) for all A, B.

4. A probability measure Pro is required to satisfy the equation Pro(A ∪ B) = Pro(A) + Pro(B) for all A, B such that A ∩ B = ∅. This requirement is usually referred to as the additivity axiom of probability measures [36].

An important result of these axioms is the calculation of the negation of the probability of an event, i.e. P(A′) = 1 − P(A) for all A.

(II) The Joint Probability Distribution: The joint probability distribution is a function that specifies the probability of a certain state of the domain given the states of each variable in the domain. Suppose that our domain consists of three random variables or atomic events {a₁, a₂, a₃} [95]. Then an example of the joint probability distribution of this domain where a₁ = 1, a₂ = 0, a₃ = 1 would be P(a₁, a₂′, a₃) = some value depending upon the values of the different probabilities of a₁, a₂, a₃. The joint probability distribution will be one of the many distributions that can be calculated using a probabilistic reasoning system [50].


(III) Conditional Probability and Bayes' Rule: Suppose a rational agent begins to perceive data from its world; it stores this data as evidence. This evidence is used to calculate a posterior or conditional probability, which will be more accurate than the probability of an atomic event without this evidence, known as a prior or unconditional probability [95].

We take an example to illustrate Bayes' rule; the reliability of a particular skin test for tuberculosis (TB) is as follows:

If the subject has TB then the sensitivity of the test is 0.98.

If the subject does not have TB then the specificity of the test is 0.99.

From a large population, in which 2 in every 10,000 people have TB, a person is selected at random and given the test, which comes back positive. What is the probability that the person actually has TB?

Let us define event A as "the person has TB" and event B as "the person tests positive for TB". It is clear that the prior probability is P(A) = 2/10,000 = 0.0002, and P(Ā) = 1 − P(A) = 0.9998. The conditional probability P(B|A), the probability that the person will test positive for TB given that the person has TB, was given as 0.98. The other value we need is P(B|Ā), the probability that the person will test positive for TB given that the person does not have TB. Since a person who does not have TB will test negative 99% of the time (given), he/she will test positive 1% of the time, and therefore P(B|Ā) = 0.01. By Bayes' rule:

P(A|B) = P(A) P(B|A) / [P(A) P(B|A) + P(Ā) P(B|Ā)] = (0.0002 × 0.98) / [(0.0002 × 0.98) + (0.9998 × 0.01)] ≈ 0.0192, i.e. about 1.92%.

We might find it hard to believe that fewer than 2% of people who test positive for TB using this test actually have the disease. Even though the sensitivity and specificity of this test are both high, the extremely low incidence of TB in the population has a tremendous effect on the test's positive predictive value, the proportion of people who test positive that actually have the disease. To see this, we might try answering the same question assuming that the incidence of TB in the population is 2 in 100 instead of 2 in 10,000.
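The arithmetic of this example can be checked with a short Python sketch; the helper name posterior is hypothetical.

def posterior(prior, sensitivity, specificity):
    # P(A|B) by Bayes' rule, with B = "positive test" and A = "has the disease".
    p_pos_given_d = sensitivity          # P(B|A)
    p_pos_given_not_d = 1 - specificity  # P(B|not A)
    numerator = prior * p_pos_given_d
    return numerator / (numerator + (1 - prior) * p_pos_given_not_d)

print(posterior(2 / 10_000, 0.98, 0.99))   # about 0.0192, i.e. 1.92%
print(posterior(2 / 100, 0.98, 0.99))      # about 0.667 when the incidence is 2 in 100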
(IV) Conditional Independence: When the result of one atomic event, A, does not affect the result of another atomic event, B, those two atomic events are known to be independent of each other, and this helps to resolve the uncertainty [13]. This relationship has the following mathematical property: P(A, B) = P(A) · P(B). Another mathematical implication is that P(A|B) = P(A). Independence can be extended to explain irrelevant data in conditional relationships.
1.4.4.2 Disadvantages of Probabilistic Methods:

Probabilities must be assigned even if no information is available, and an equal amount of probability is assigned to all such items. Probabilities require the consideration of all available evidence, not only from the rules currently under consideration [50]. Probabilistic methods always require prior probabilities, which are very hard to find out a priori [36]. Probability may be inappropriate, as the future is not always similar to the past. In probabilistic methods the assumption of independence of evidence is often not valid, and complex statements with conditional dependencies cannot be decomposed into independent parts. In this method the relationship between hypothesis and evidence is reduced to a number [13]. Probability theory is an ideal tool for formalizing uncertainty in situations where class frequencies are known or where evidence is based on outcomes of a sufficiently long series of independent random experiments [95].
1.4.5 Evidence Theory:

Dempster–Shafer theory (DST) is a mathematical theory of evidence. The seminal work on the subject was done by Shafer [43], which is an expansion of previous work done by Dempster [4]. In a finite discrete space, DST can be interpreted as a generalization of probability theory where probabilities are assigned to sets as opposed to mutually exclusive singletons. In traditional probability theory, evidence is associated with only one possible event. In DST, evidence can be associated with multiple possible events, i.e. sets of events. DST allows the direct representation of uncertainty. There are three important functions in DST by [4], [43]: (1) the basic probability assignment function (bpa or m), (2) the belief function (Bel) and (3) the plausibility function (Pl). The theory of evidence is based on two dual non-additive measures: (2) and (3).
(1) Basic Probability Assignment: The basic probability assignment does not refer to probability in the classical sense. The basic probability assignment, represented by m, defines a mapping of the power set to the interval between 0 and 1, i.e. m : P(X) → [0,1], satisfying the following properties [50]:

(a) m(∅) = 0, and

(b) ∑_{A∈P(X)} m(A) = 1.

The value of m(A) pertains only to the set A and makes no additional claim about any subset of A. Any further evidence on the subsets of A would be represented by another basic probability assignment. The summation of the basic probability assignments of all the subsets of the power set is 1. As such, the basic probability assignment cannot be equated with a classical probability in general [13].
(2) Belief Measure: Given a measurable space (X, P(X)), a belief measure is a function Bel : P(X) → [0,1] satisfying the following properties:

(a) Bel(∅) = 0,

(b) Bel(X) = 1, and

(c) Bel(X₁ ∪ X₂ ∪ … ∪ Xₙ) ≥ ∑_i Bel(X_i) − ∑_{i<k} Bel(X_i ∩ X_k) + … + (−1)^{n+1} Bel(X₁ ∩ X₂ ∩ … ∩ Xₙ).   (1r)

Due to the inequality (1r), belief measures are called superadditive. When X is infinite, the function Bel is also required to be continuous from above. For each Y ∈ P(X), Bel(Y) is defined as the degree of belief, based on available evidence [4], that a given element of X belongs to the set Y. The inequality (1r) implies the monotonicity requirement [2] of fuzzy measures. Let X₁ ⊆ X₂, where X₁, X₂ ∈ P(X), and let X₃ = X₂ − X₁. Then X₁ ∪ X₃ = X₂ and X₁ ∩ X₃ = ∅. Applying (1r) to X₁ and X₃ with n = 2, we get

Bel(X₁ ∪ X₃) = Bel(X₂) ≥ Bel(X₁) + Bel(X₃) − Bel(X₁ ∩ X₃).

Since X₁ ∩ X₃ = ∅ and Bel(∅) = 0, we have

Bel(X₂) ≥ Bel(X₁) + Bel(X₃), and hence Bel(X₂) ≥ Bel(X₁).

Let X₁ = A and X₂ = Ā in (1r) for n = 2. Then we have

Bel(A) + Bel(Ā) ≤ Bel(A ∪ Ā) = Bel(X) = 1, i.e.

Bel(A) + Bel(Ā) ≤ 1.   (1s)

Inequality (1s) is called the fundamental property of belief measures.


(3) Plausibility Measure: Given a measurable space (X, P(X)), a plausibility measure is a function Pl : P(X) → [0,1] satisfying the following properties:

(a) Pl(∅) = 0,

(b) Pl(X) = 1, and

(c) Pl(X₁ ∩ X₂ ∩ … ∩ Xₙ) ≤ ∑_i Pl(X_i) − ∑_{i<k} Pl(X_i ∪ X_k) + … + (−1)^{n+1} Pl(X₁ ∪ X₂ ∪ … ∪ Xₙ).   (1t)

Due to the inequality (1t), plausibility measures are called subadditive. When X is infinite, the function Pl is also required to be continuous from below [43].

Let X₁ = A and X₂ = Ā in (1t) for n = 2. Then we have

Pl(A) + Pl(Ā) − Pl(A ∪ Ā) ≥ Pl(A ∩ Ā),
Pl(A) + Pl(Ā) − Pl(X) ≥ Pl(∅),

and hence

Pl(A) + Pl(Ā) ≥ 1.   (1u)

According to inequalities (1s) and (1u), each belief measure, Bel, has an associated plausibility measure, Pl; the relation between belief measures and plausibility measures is defined by the equations [59]:

Pl(A) = 1 − Bel(Ā),   (1v)

Bel(A) = 1 − Pl(Ā).   (1w)
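A minimal Python sketch of Bel and Pl computed from a basic probability assignment, with a hypothetical bpa loosely inspired by the medical-diagnosis example earlier in this chapter; it also checks the duality (1v).

def belief(m, Y):
    # Bel(Y): sum of m(A) over all focal sets A contained in Y.
    return sum(mA for A, mA in m.items() if A <= Y)

def plausibility(m, Y):
    # Pl(Y): sum of m(A) over all focal sets A intersecting Y.
    return sum(mA for A, mA in m.items() if A & Y)

X = frozenset({"pneumonia", "bronchitis", "emphysema", "cold"})
m = {frozenset({"bronchitis"}): 0.5,
     frozenset({"pneumonia", "emphysema"}): 0.3,
     X: 0.2}                                        # the assignments sum to 1

A = frozenset({"bronchitis"})
print(belief(m, A), plausibility(m, A))             # 0.5 0.7
print(abs(plausibility(m, A) - (1 - belief(m, X - A))) < 1e-12)   # True, equation (1v)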

1.4.6 Possibility Theory:

Possibility theory is an uncertainty theory devoted to the handling of incomplete information. It is comparable to probability theory because it is based on set functions. Possibility theory has enabled a typology of fuzzy rules to be laid bare, distinguishing rules whose purpose is to propagate uncertainty through reasoning steps from rules whose main purpose is similarity-based interpolation [15]. The name "Theory of Possibility" was coined by Zadeh [59], who was inspired by a paper by Gaines and Kohout [8]. In Zadeh's view, possibility distributions were meant to provide a graded semantics to natural language statements. Possibility theory was introduced to allow reasoning to be carried out on imprecise or vague knowledge, making it possible to deal with uncertainties on this knowledge. Possibility is normally associated with some fuzziness, either in the background knowledge on which possibility is based, or in the set for which possibility is asserted [36].

Let S be a set of states of affairs (or descriptions thereof), or states for short. A possibility distribution is a mapping π from S to a totally ordered scale L, with top 1 and bottom 0, such as the unit interval. The function π represents the state of knowledge of an agent (about the actual state of affairs), distinguishing what is plausible from what is less plausible, what is the normal course of things from what is not, and what is surprising from what is expected [70]. It represents a flexible restriction on what the actual state is, with the following conventions (similar to probability, but opposite of Shackle's potential surprise scale):

(1) π(s) = 0 means that state s is rejected as impossible;

(2) π(s) = 1 means that state s is totally possible (= plausible).
For example, imprecise information such as "X's height is above 170 cm" implies that any height h above 170 is possible and any height equal to or below 170 is impossible for him. This can be represented by a possibility measure defined on the height domain whose value is 0 if h ≤ 170 and 1 if h > 170 (0 = impossible and 1 = possible). When the predicate is vague, as in "X is tall", the possibility measure can accommodate degrees: the larger the degree, the larger the possibility.
For a consonant body of evidence, the belief measure becomes a necessity measure (Nec) and the plausibility measure becomes a possibility measure (Pos). Hence

Bel(A ∩ B) = min[Bel(A), Bel(B)]

becomes

Nec(A ∩ B) = min[Nec(A), Nec(B)],

and

Pl(A ∪ B) = max[Pl(A), Pl(B)]

becomes

Pos(A ∪ B) = max[Pos(A), Pos(B)],

and also

Nec(A) = 1 − Pos(Ā),   Pos(A) = 1 − Nec(Ā).
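These max/min relations can be checked numerically with a short Python sketch; the possibility distribution pi below is hypothetical.

def possibility(pi, A):
    # Pos(A) = maximum of the possibility distribution over A.
    return max(pi[s] for s in A) if A else 0.0

def necessity(pi, A, S):
    # Nec(A) = 1 - Pos(complement of A).
    return 1.0 - possibility(pi, S - A)

S = frozenset({"s1", "s2", "s3", "s4"})
pi = {"s1": 1.0, "s2": 0.7, "s3": 0.2, "s4": 0.0}

A = frozenset({"s1", "s3"})
B = frozenset({"s2", "s3"})
print(possibility(pi, A | B) == max(possibility(pi, A), possibility(pi, B)))      # True
print(necessity(pi, A & B, S) == min(necessity(pi, A, S), necessity(pi, B, S)))   # True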

Some other important measures on fuzzy sets and relations are defined here.
1.4.7 Additive Measure:

Let (X, Σ) be a measurable space. A function μ defined on Σ is a σ-additive measure when the following properties are satisfied:

1. μ(∅) = 0.

2. If Aₙ, n = 1, 2, ..., is a sequence of pairwise disjoint sets of Σ, then μ(⋃_{n=1}^{∞} Aₙ) = ∑_{n=1}^{∞} μ(Aₙ).

The second property is called σ-additivity, and the additive property of a measurable space requires the σ-additivity for a finite set of subsets Aₙ. A well-known example of a σ-additive measure is the probabilistic space (X, Σ, p), where the probability p is an additive measure such that p(X) = 1 and p(A) = 1 − p(A′) for all subsets A ∈ Σ. Other known examples of σ-additive measures are the Lebesgue measures, which are an important base of twentieth-century mathematics [63]. The Lebesgue measures generalise the concept of the length of a segment, and verify that if

[c, d] ⊆ ⋃_{i=1}^{n} [aᵢ, bᵢ]   then   (d − c) ≤ ∑_{i=1}^{n} (bᵢ − aᵢ).   (1x)

Other measures given by Lebesgue are the exterior Lebesgue measure and the interior Lebesgue measure. A set A is Lebesgue measurable when both interior and exterior Lebesgue measures are the same [67]. Some examples of Lebesgue measurable sets are the compact sets, the empty set and the set of real numbers R.
1.4.8 k-additive fuzzy measure:
A discrete fuzzy measure $\mu$ on a set X is called k-additive $(1 \le k \le |X|)$ if its
Möbius representation verifies $M(E) = 0$ for any $E \subseteq X$ with $|E| > k$, and there
exists a subset F with k elements such that $M(F) \ne 0$.
The k-additive fuzzy measure limits the interaction between the subsets $E \subseteq X$
to size $|E| \le k$. This drastically reduces the number of variables needed to define
the fuzzy measure, and as k can be anything from 1 to $|X|$, it allows for a
compromise between modelling ability and simplicity [66].
1.4.9 Normal Measure:
Let $(X, \Sigma)$ be a measurable space. A measure $m: \Sigma \to [0, 1]$ is a normal measure
if there exists a minimal set $A_0$ and a maximal set $A_m$ in $\Sigma$ such that:
1. $m(A_0) = 0$;
2. $m(A_m) = 1$.
For example, the measures of probability on a space $(X, \Sigma)$ are normal measures
with $A_0 = \emptyset$ and $A_m = X$. The Lebesgue measures are not necessarily normal [63].
1.4.10 Sugeno Fuzzy Measure:
Let $\Sigma$ be a $\sigma$-algebra on a universe X. A Sugeno fuzzy measure is a function
$g: \Sigma \to [0, 1]$ verifying [99]:
1. $g(\emptyset) = 0$ and $g(X) = 1$;
2. If $A \subseteq B$, then $g(A) \le g(B)$ for all $A, B \in \Sigma$;
3. If $A_n \in \Sigma$ and $A_1 \subseteq A_2 \subseteq \dots$, then $\lim_{n \to \infty} g(A_n) = g\left(\lim_{n \to \infty} A_n\right)$.
Property 3 is called Sugeno's convergence. The Sugeno measures are monotone,
but their main characteristic is that additivity is not needed. Several measures on
finite algebras, such as probability, credibility measures and plausibility measures,
are Sugeno measures. The possibility measures on possibility distributions
introduced by Zadeh [59] are Sugeno measures.
1.4.11 Fuzzy Sugeno $\lambda$-Measure:
Sugeno [72] introduced the concept of the fuzzy $\lambda$-measure as a normal measure
that is $\lambda$-additive, so the fuzzy $\lambda$-measures are fuzzy measures.
Let $\lambda \in (-1, \infty)$ and let $(X, \Sigma)$ be a measurable space. A function
$g_\lambda: \Sigma \to [0, 1]$ is a fuzzy $\lambda$-measure if for all disjoint subsets A, B in $\Sigma$
$$g_\lambda(A \cup B) = g_\lambda(A) + g_\lambda(B) + \lambda\, g_\lambda(A)\, g_\lambda(B).$$
For example, if $\lambda = 0$ then the fuzzy $\lambda$-measure is an additive measure.
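A fuzzy $\lambda$-measure is commonly built from densities on the singletons, with $\lambda$ fixed by the normalisation $g_\lambda(X) = 1$. The sketch below is illustrative only: the densities are assumed values, the bisection solves $1 + \lambda = \prod_i (1 + \lambda g_i)$ numerically, and the final check verifies the defining identity stated above.

```python
# Minimal sketch of a Sugeno lambda-measure from assumed singleton densities.
from math import prod

densities = {"x1": 0.2, "x2": 0.3, "x3": 0.1}   # hypothetical values, sum < 1

def _f(lam):
    # lambda is the root of prod_i(1 + lam*g_i) = 1 + lam, lam > -1, lam != 0
    return prod(1 + lam * g for g in densities.values()) - (1 + lam)

lo, hi = 1e-9, 10.0            # sum of densities < 1, so the root is positive
for _ in range(100):           # plain bisection, no external libraries
    mid = (lo + hi) / 2
    if _f(lo) * _f(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam = (lo + hi) / 2

def g(A):
    """g_lambda(A) = (prod_{x in A}(1 + lam*g_x) - 1) / lam for nonempty A."""
    if not A:
        return 0.0
    return (prod(1 + lam * densities[x] for x in A) - 1) / lam

A, B = {"x1"}, {"x2", "x3"}
print(g(A | B), g(A) + g(B) + lam * g(A) * g(B))   # the two sides coincide
print(g(set(densities)))                            # approximately 1 (normal)
```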

1.5. Null-additive fuzzy measure:
The concept of fuzzy measure does not require additivity, but it requires
monotonicity with respect to the inclusion of sets [45]. Additivity can be very
effective and convenient in some applications, but it can also be somewhat
inadequate in many real-world reasoning environments, such as approximate
reasoning, fuzzy logic, artificial intelligence, game theory, decision making,
psychology, economics and data mining, which require the definition of
non-additive measures and leave a large number of open problems [26]. For
example, if the efficiency of a set of workers is being measured, the efficiency of
several people doing teamwork is not the sum of the efficiencies of each individual
working on their own.
The theory of non-additive set functions has influenced many parts of mathematics
and different areas of science and technology. Recently, many authors have
investigated different types of non-additive set functions, such as submeasures,
$\perp$-decomposable measures, k-triangular set functions, null-additive set functions
and others. The range of null-additive set functions was introduced by Wang [109].
Suzuki [45] introduced and investigated atoms of fuzzy measures and Pap [26]
introduced and discussed atoms of null-additive set functions.
1.5.1 Definition:
Let X be a set and $\Sigma$ a $\sigma$-algebra over X. A set function $m: \Sigma \to [0, \infty]$ is
called null-additive if
$$m(A \cup B) = m(A) \quad \text{whenever} \quad A, B \in \Sigma,\; A \cap B = \emptyset,\; m(B) = 0 \quad [109].$$
It is obvious that for a null-additive set function m we have $m(\emptyset) = 0$ whenever
there exists $B \in \Sigma$ such that $m(B) = 0$.
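Null-additivity on a small finite space can be checked by brute force. This is a minimal illustrative sketch, not part of the thesis: it encodes the two-point set function of Example 4 in Section 1.5.2 below and tests the defining condition over all pairs of subsets.

```python
# Brute-force check of null-additivity: m(A ∪ B) = m(A) whenever A ∩ B = ∅
# and m(B) = 0.  The set function is the one from Example 4 of Section 1.5.2.
from itertools import chain, combinations

X = ("x", "y")

def subsets(universe):
    s = list(universe)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# m(X) = 1 and m(A) = 0 for every proper subset A
m = {A: (1.0 if A == frozenset(X) else 0.0) for A in subsets(X)}

def is_null_additive(m):
    for A in m:
        for B in m:
            if A & B == frozenset() and m[B] == 0.0 and m[A | B] != m[A]:
                return False
    return True

print(is_null_additive(m))   # False: A = {x}, B = {y} gives m(A ∪ B) = 1 ≠ m(A) = 0
```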


1.5.2 Examples:
1. A $\perp$-decomposable measure $m: \Sigma \to [0, 1]$ with respect to a t-conorm $\perp$,
such that $m(\emptyset) = 0$ and $m(A \cup B) = m(A) \perp m(B)$ whenever $A, B \in \Sigma$ and
$A \cap B = \emptyset$, is always null-additive [25].
2. A generalization of the $\perp$-decomposable measure from example 1 is the
$\oplus$-decomposable measure $m: \Sigma \to [a, b]$. Namely, let [a, b] be a closed real
interval, where $-\infty \le a < b \le +\infty$. The operation $\oplus$ (pseudo-addition) [25] is a
function $\oplus: [a, b] \times [a, b] \to [a, b]$ which is commutative, non-decreasing,
associative and has a zero element, denoted by 0, which is either a or b. A set
function $m: \Sigma \to [a, b]$ is a $\oplus$-decomposable measure if $m(\emptyset) = 0$ and
$m(A \cup B) = m(A) \oplus m(B)$ hold whenever $A, B \in \Sigma$ and $A \cap B = \emptyset$.
Then m is null-additive.
3. Let $m(A) \ne 0$ whenever $A \in \Sigma$, $A \ne \emptyset$. Then m is null-additive.
4. Let $X = \{x, y\}$ and define m in the following way: $m(X) = 1$ and $m(A) = 0$ for
every proper subset $A \subsetneq X$. Then m is not null-additive.


1.5.3 Saks decomposition:
Let m be a null-additive set function. A set $A \in \Sigma$ with $m(A) > 0$ is called an
atom provided that for every $B \in \Sigma$ with $B \subseteq A$ either [25]
1. $m(B) = 0$, or
2. $m(A \setminus B) = 0$.
1.5.4 Remarks:
1. For null-additive set functions condition 2 implies $m(B) = m(A)$, which is
analogous to the property of the usual $\sigma$-finite measures (in this case the
opposite implication is also true).
2. For a $\sigma$-finite measure m, each of its atoms always has finite measure.
1.5.5 Darboux property:

A set function m is called autocontinuous from above (below) if for every
$\varepsilon > 0$ and every $A \in \Sigma$ there exists $\delta = \delta(A, \varepsilon) > 0$ such that [110]
$$m(A) \le m(A \cup B) \le m(A) + \varepsilon \quad \left(\text{resp. } m(A) - \varepsilon \le m(A \setminus B) \le m(A)\right)$$
whenever $B \in \Sigma$, $A \cap B = \emptyset$ (resp. $B \subseteq A$) and $m(B) < \delta$ holds.

1.5.6 Remark:
If m is a fuzzy measure, then we can omit from the preceding definition the
supposition $A \cap B = \emptyset$ (resp. $B \subseteq A$). The set function m is a null-additive
non-fuzzy measure which is autocontinuous from above and from below.
The autocontinuity from above (below) in the sense of Wang [110], namely
$m(A \cup B_n) \to m(A)$ (resp. $m(A \setminus B_n) \to m(A)$) whenever $A, B_n \in \Sigma$,
$A \cap B_n = \emptyset$ (resp. $B_n \subseteq A$) and $m(B_n) \to 0$, is called W-autocontinuity. If m is a
finite fuzzy measure, then m is autocontinuous from above (below) iff m is
W-autocontinuous from above (below).
Each set function which is autocontinuous from above is null-additive, but there
exist null-additive fuzzy measures which are not autocontinuous from above
(Example 1). If the set X is countable, then null-additivity of a finite fuzzy
measure is equivalent to autocontinuity [109].

Possibility Theory versus Probability Theory in Fuzzy Measure Theory
2.1 Concept of associative probabilities to a fuzzy measure:

Fuzzy measures are appropriate tools to represent information or opinion states.
A method for associating a set of probabilities to each fuzzy measure has been
developed [21]. The purpose of the concept of associated probabilities to a fuzzy
measure is to use this concept to define differences between fuzzy measures and,
briefly, to show how such differences could be used for studying some
characteristics of fuzzy measures.
Let us consider a finite set $X = \{x_1, x_2, x_3, \dots, x_n\}$ and a fuzzy measure on X as
defined in Chapter 1, Section 1.4. Two fuzzy measures g and g* are called dual
fuzzy measures if and only if the following relation is satisfied:
$$g^*(A) = 1 - g(\bar{A}), \quad \forall A \subseteq X,$$
where $\bar{A}$ is the complement of A.
Duality is an important concept, since it permits us to obtain alternative
representations of a piece of information: we consider that two dual fuzzy
measures contain the same information, but codified in a different way.
The property of the Choquet integral [30] called monotone expectation (the
integral of a function h with respect to a fuzzy measure g coincides with the
mathematical expectation of h with respect to a probability measure which
depends only on g and the ordering of the values of h) was the starting point [21]
for associating a set of n! probabilities to each fuzzy measure, in the following
way.
2.1.1 Definition: Let g be a fuzzy measure on X. The probability functions $P_A$
defined by
$$P_A(x_{a_1}) = g(\{x_{a_1}\}), \quad P_A(x_{a_i}) = g(\{x_{a_1}, \dots, x_{a_i}\}) - g(\{x_{a_1}, \dots, x_{a_{i-1}}\}),$$
$$P_A(x_{a_n}) = 1 - g(\{x_{a_1}, \dots, x_{a_{n-1}}\}), \qquad (2a)$$
for each permutation $A = (a_1, a_2, a_3, \dots, a_n) \in S_n$, are called the associated
probabilities of the fuzzy measure g, where $S_n$ is the group of permutations
of a set with n elements.
of a set with n elements.


From the basic probability assignment the upper and lower bounds of an
interval can be defined. This internal contains the precise probability of a set of
interest and is bounded by two non-additive continuous measures called Belief
and plausibility. The lower bound is Belief and the upper bound is Plausibility.
It is possible to obtain the basic probability assignment from the Belief measure
with the following inverse function:
58

|R S|
P A ( R )= (1 )
Bel ( S ) (2 b)
SR

where |R-S| is the difference of the cardinality of the two sets.
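Equation (2b) is a Möbius-type inversion and is easy to evaluate on a small power set. The sketch below is illustrative, with assumed belief values on a two-element universe; only the alternating sum of (2b) is implemented.

```python
# Recovering the basic probability assignment from a belief measure, eq. (2b).
from itertools import chain, combinations

def subsets(S):
    s = list(S)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

X = frozenset({"a", "b"})
bel = {frozenset(): 0.0, frozenset({"a"}): 0.3, frozenset({"b"}): 0.2, X: 1.0}

def bpa(R):
    """P_A(R) = sum over S ⊆ R of (-1)^{|R - S|} Bel(S)."""
    return sum((-1) ** len(R - S) * bel[S] for S in subsets(R))

for R in subsets(X):
    print(sorted(R), round(bpa(R), 6))
# bpa({a, b}) = 1 - 0.3 - 0.2 = 0.5: the mass not committed to any singleton
```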


The very general result of the approximation problem has been established by
[103] and is not well known in fuzzy literature. Volker [58] will generalize it,
following a different line of reasoning which emphasizes the contribution of the
inner extension procedures within abstract measure theory. Further application
leads to plausibility and possibility measures as upper envelopes of probability
measure, even in the context of infinite universes of discourse. Especially this
supports the semantically point of view to regard possibility degrees as upper
bounds of probabilities an argument proposed by [14], [16].
2.1.2 Theorem: A belief measure Bel on a finite power set $\mathcal{P}(X)$ is a
probability measure if and only if the associated basic probability assignment $P_A$
is given by $P_A(\{x\}) = Bel(\{x\})$ and $P_A(Y) = 0$ for all subsets Y of X that are not
singletons.
Proof: Assume that Bel is a probability measure. For the empty set $\emptyset$ the
theorem trivially holds, since $P_A(\emptyset) = 0$ by definition of $P_A$. Let $Y \ne \emptyset$ and
assume $Y = \{x_1, x_2, \dots, x_n\}$. Then, by repeated application of the additivity
axiom
$$P_A(R \cup S) = P_A(R) + P_A(S)$$
for all sets $R, S \in \mathcal{P}(X)$ such that $R \cap S = \emptyset$, we obtain
$$Bel(Y) = Bel(\{x_1\}) + Bel(\{x_2\}) + \dots + Bel(\{x_n\}).$$
Since $Bel(\{x\}) = P_A(\{x\})$ for any $x \in X$, we have
$$Bel(Y) = \sum_{i=1}^{n} P_A(\{x_i\}) \qquad (2c)$$
Hence Bel is defined in terms of a basic probability assignment that focuses only
on singletons. Assume now that a basic probability assignment function $P_A$ is
given such that
$$\sum_{x \in X} P_A(\{x\}) = 1.$$
Then for any sets $R, S \in \mathcal{P}(X)$ such that $R \cap S = \emptyset$, we have
$$Bel(R) + Bel(S) = \sum_{x \in R} P_A(\{x\}) + \sum_{x \in S} P_A(\{x\}) = \sum_{x \in R \cup S} P_A(\{x\}) = Bel(R \cup S).$$
Consequently, the belief measure is a probability measure. Hence proved.

According to the above theorem, probability measures on finite sets are thus fully
represented by a function $p: X \to [0, 1]$ such that $p(x) = P_A(\{x\})$. This function is
usually called a probability distribution function. Let $p = \{p(x) : x \in X\}$ be
referred to as a probability distribution on X. When the basic probability
assignment function focuses only on singletons, as required for probability
measures, then the belief measure and the plausibility measure become equal.
Hence,
$$Bel(R) = Pl(R) = \sum_{x \in R} P_A(\{x\}) \quad \text{for all } R \in \mathcal{P}(X) \qquad (2d)$$

2.1.3 Basic Mathematical Properties of Possibility Theory and Probability Theory:
1. Probability theory is based on measures of one type: probability measures (P).
Possibility theory is based on measures of two types: (a) possibility measures $(\Pi)$,
(b) necessity measures $(\eta)$.
2. In probability theory the body of evidence consists of singletons. In possibility
theory the body of evidence consists of a family of nested subsets.
3. Probability measures satisfy additivity, i.e.
$P(A \cup B) = P(A) + P(B) - P(A \cap B)$. Possibility measures and necessity
measures follow the max/min rules:
$\Pi(A \cup B) = \max[\Pi(A), \Pi(B)]$, $\eta(A \cap B) = \min[\eta(A), \eta(B)]$.
4. Probability theory: unique representation of P by a probability distribution
function $p: X \to [0, 1]$ via the formula $P(A) = \sum_{x \in A} p(x)$. Possibility theory:
unique representation of $\Pi$ by a possibility distribution function $\pi: X \to [0, 1]$
via the formula $\Pi(A) = \max_{x \in A} \pi(x)$.
5. Probability theory is normalized by $\sum_{x \in X} p(x) = 1$. Possibility theory is
normalized by $\max_{x \in X} \pi(x) = 1$.
6. Probability theory, total ignorance: $p(x) = \frac{1}{|X|}$ for all $x \in X$. Possibility
theory, total ignorance: $\pi(x) = 1$ for all $x \in X$.
7. Probability theory: $P(A) + P(\bar{A}) = 1$. Possibility theory:
$\Pi(A) + \Pi(\bar{A}) \ge 1$, $\eta(A) + \eta(\bar{A}) \le 1$,
$\max[\Pi(A), \Pi(\bar{A})] = 1$, $\min[\eta(A), \eta(\bar{A})] = 0$.
As is obvious from their mathematical properties, possibility, necessity and
probability measures do not overlap with one another except for one very special
measure, which is characterized by only one focal element, a singleton [37]. The
two distribution functions that represent probabilities and possibilities become
equal for this measure: one element of the universal set is assigned the value 1,
with all other elements assigned the value 0. This is clearly the only measure that
represents perfect evidence.
2.1.4 Difference between probability theory and possibility theory: [22]

1. The theory of possibility is analogous to, yet conceptually different from the
theory of probability. Probability is fundamentally a measure of the frequency
of occurrence of an event, while possibility is used to quantify the meaning of
an event.
2. The values of each probability distribution are required to add to 1, while for
possibility distributions the largest values are required to be 1.
3. Probability theory is an ideal tool for formalizing uncertainty in situations
where class frequencies are known or where evidence is based on outcomes of a
sufficiently long series of independent random experiments. On the other hand
possibility theory is ideal for formalizing incomplete information expressed in
terms of fuzzy propositions.
4. Possibility measures replace the additivity axiom of probability with the
weaker subadditivity condition.
5. Probabilistic bodies of evidence consist of singletons, while possibilistic
bodies of evidence are families of nested sets.

6. A difference between the two theories lies in their expressions of total
ignorance. In probability theory, total uncertainty is expressed by the uniform
probability distribution on the universal set: $p(x) = \frac{1}{|X|}$ for all $x \in X$. In
possibility theory, it is expressed in the same way as in evidence theory:
$\pi(x) = 1$ for all $x \in X$.
7. Possibility measures the degree of ease with which a variable takes a value,
whereas probability measures the likelihood that a variable takes a value.
8. Possibilistic methods are still less developed than their probabilistic
counterparts, but it is already well established that possibility theory provides a
link between fuzzy sets and probability theory.
2.1.5 Similarity between probability theory and possibility theory: [22]
1. When information regarding some phenomenon is given in both probabilistic
and possibilistic terms, the two descriptions should be in some sense consistent.
That is, given a probability measure P and a possibility measure $\Pi$, both defined
on $\mathcal{P}(X)$, the two measures should satisfy some consistency condition.

2. Possibility theory and probability theory are each suitable for modelling certain
types of uncertainty and less suitable for modelling other types.
3. The notion of non-interactiveness in possibility theory is analogous to the notion
of independence in probability theory. If two random variables x and y are
independent, their joint probability distribution is the product of their individual
distributions. Similarly, if two linguistic variables are non-interactive, their joint
possibility distribution is formed by combining their individual possibility
distributions through a fuzzy conjunction operator.
4. Possibility theory may be interpreted in terms of interval-valued probabilities,
provided that the normalization requirement is applied. Due to the nested
structure of evidence, the intervals of estimated probabilities are not totally
arbitrary. If $\Pi(A) < 1$, then the estimated probabilities are in the interval
$[0, Pl(A)]$; if $Bel(A) > 0$, then the estimated probabilities are in the interval
$[Bel(A), 1]$. Due to these properties, belief measures and plausibility measures
may be interpreted as lower and upper probability estimates.
There are multiple interpretations of probability theory and possibility theory.
Viewing necessity and possibility measures as lower and upper probabilities opens
a bridge between the two theories, which allows us to adjust some of the
interpretations of probability theory to the interval-valued probabilities of
possibilistic type.
There are two basic approaches to possibility/probability transformations, which
both respect a form of probability-possibility consistency. One, due to [32], [50],
is based on a principle of information invariance; the other [17] is based on
optimizing information content. Klir assumes that possibilistic and probabilistic
information measures are commensurate. The choice between possibility and
probability is then a mere matter of translation between languages "neither of
which is weaker or stronger than the other" [36]. It suggests that entropy and
imprecision capture the same facet of uncertainty, albeit in different guises.

2.2 Shannon entropy of fuzzy measure:
We discuss some variants of fuzzy subset entropies and the relations linking these
quantities, which indicate a similarity of many important characteristics of fuzzy
subset entropies and Shannon's entropy.
A function which forms the basis of classical information theory measures the
average uncertainty associated with the prediction of outcomes in a random
experiment; its range is $[0, \log_2 |X|]$. It is denoted by H and defined as
$$H(p) = -\sum_{x \in X} p(\{x\}) \log_2 p(\{x\}) \qquad (2e)$$

and it satisfies the following conditions:
1. $H(p) = 0$ when $p(\{x\}) = 1$ for some $x \in X$;
2. $H(p) = \log_2 |X|$ when $p(\{x\}) = \frac{1}{|X|}$ for all $x \in X$, i.e. when p defines the
uniform probability distribution on X.
The function H is called the Shannon entropy, which is applicable only to
probability measures in evidence theory. The function H was proposed by
Shannon [10]. It was proven in numerous ways, from several well-justified
axiomatic characterisations, that this function is the only sensible measure of
uncertainty in probability theory. The Shannon entropy is a measure of the
average information content one is missing when one does not know the value of
the random variable. It represents an absolute limit on the best possible lossless
compression of any communication under certain constraints, treating messages
to be encoded as a sequence of independent and identically distributed random
variables.
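The two boundary conditions quoted above are easy to verify numerically. The sketch below is illustrative only and implements equation (2e) directly for a discrete distribution.

```python
# Shannon entropy (2e) of a discrete probability distribution.
from math import log2

def shannon_entropy(p):
    """H(p) = -sum p(x) log2 p(x), with the convention 0*log2(0) = 0."""
    return -sum(px * log2(px) for px in p.values() if px > 0)

degenerate = {"a": 1.0, "b": 0.0, "c": 0.0}     # one outcome certain
uniform = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}  # total ignorance

print(shannon_entropy(degenerate))   # 0.0
print(shannon_entropy(uniform))      # log2(3) ≈ 1.585, the maximum for |X| = 3
```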
Since the values $p(\{x\})$ are required to add to 1 for all $x \in X$, (2e) can be
rewritten as
$$H(p) = -\sum_{x \in X} p(\{x\}) \log_2 \left[1 - \sum_{y \ne x} p(\{y\})\right] \qquad (2f)$$
The choice of the logarithmic function is a result of the axiomatic requirement
that the joint uncertainty of several independent random variables be equal to the
sum of their individual uncertainties. From equation (2f), the Shannon entropy is
the mean value of the conflict among evidential claims within a given
probabilistic body of evidence. The information that we receive from an
observation is equal to the degree to which uncertainty is reduced.
Let us perform a splitting of the set X point by point. In this case Shannon's
entropy of the probability distribution $p = \{p_i\} = \{p_1, p_2, \dots, p_n\}$ turns into
$H(M, M_D)$, where
$$M = \{\mu_i p_i\} = \{\mu_1 p_1, \mu_2 p_2, \dots, \mu_n p_n\}, \qquad M_D = \{\bar{\mu}_i p_i\} = \{\bar{\mu}_1 p_1, \bar{\mu}_2 p_2, \dots, \bar{\mu}_n p_n\},$$
i.e. $\mu_i$ is the membership function of a fuzzy point $\tilde{x}_i$ and $\bar{\mu}_i$ is its dual,
$\mu_i + \bar{\mu}_i = 1$, $i = 1, 2, \dots, n$. If we avail ourselves of the branching property of
the function H [2], then:
$$H(M, M_D) = Z(\tilde{X}/p) + Z(\tilde{X}_D/p) + L(\tilde{X}/p) + L(\tilde{X}_D/p) \qquad (2g)$$
where
$$Z(\tilde{X}/p) = -\sum_{i=1}^{n} \mu_i p_i \log_2 p_i \qquad (2h)$$
$$Z(\tilde{X}_D/p) = -\sum_{i=1}^{n} (1 - \mu_i) p_i \log_2 p_i \qquad (2i)$$
and
$$Z(\tilde{X}/p) + Z(\tilde{X}_D/p) = H(p) \qquad (2j)$$
$Z(\tilde{X}/p)$ is Zadeh's entropy [61], i.e. an entropy of the fuzzy set X with respect
to the probability distribution p. And also,
$$L(\tilde{X}/p) = -\sum_{i=1}^{n} \mu_i p_i \log_2 \mu_i \qquad (2k)$$
$$L(\tilde{X}_D/p) = -\sum_{i=1}^{n} (1 - \mu_i) p_i \log_2 (1 - \mu_i) \qquad (2l)$$
The function $L(\tilde{X}/p)$ is actually a Kullback directed divergence $I(M : p)$ [94], and
$$L(\tilde{X}/p) + L(\tilde{X}_D/p) = -\sum_{i=1}^{n} p_i \left[\mu_i \log_2 \mu_i + (1 - \mu_i) \log_2 (1 - \mu_i)\right] \qquad (2m)$$
is a weighted entropy of De Luca and Termini [1].

Equality (2g) is Hirota's measure of uncertainty [3]. Notice that if we avail
ourselves of the branching property in some other way, then we can rewrite (2g)
in the following form:
$$H(M, M_D) = H\left(P(\tilde{X}), P(\tilde{X}_D)\right) + P(\tilde{X})\, H\!\left(\frac{M}{P(\tilde{X})}\right) + P(\tilde{X}_D)\, H\!\left(\frac{M_D}{P(\tilde{X}_D)}\right) \qquad (2n)$$
$$\text{where} \quad P(\tilde{X}) = \sum_{i=1}^{n} \mu_i p_i, \qquad P(\tilde{X}_D) = \sum_{i=1}^{n} \mu_i^D p_i \qquad (2o)$$
In addition, the functions $Z(\tilde{X}/p)$ and $L(\tilde{X}/p)$ are connected by Jumarie's
entropy [41]:
$$E_J(\mu_i, p_i) = Z(\tilde{X}/p) + Z(\tilde{X}_D/p) + L(\tilde{X}/p) \qquad (2p)$$
Formulas (2h) and (2j) represent particular cases of Zadeh's and De Luca and
Termini's entropies, respectively. The characteristic equations of $Z(\tilde{X}/p)$ and
$L(\tilde{X}/p)$ are particular cases of a more general equation whose solutions are
entropies; they are connected with the usual entropy of a probability distribution
as in (2j) and (2m) with Shannon's entropy of a split distribution [38].

The entropy of the probability distribution $M = \{\mu_i p_i\}$, where $\sum_{i=1}^{n} \mu_i p_i \le 1$,
is defined as
$$H(M) = -\sum_{i=1}^{n} (\mu_i p_i) \log_2 (\mu_i p_i) = Z^{0}(\tilde{X}/M) + L^{0}(\tilde{X}/M) \qquad (2q)$$
$$\text{where} \quad Z^{0}(\tilde{X}/M) = -\sum_{i=1}^{n} (\mu_i p_i) \log_2 p_i \qquad (2r)$$
is an entropy of Zadeh and
$$L^{0}(\tilde{X}/M) = -\sum_{i=1}^{n} (\mu_i p_i) \log_2 \mu_i \qquad (2s)$$
is the weighted entropy of De Luca and Termini. Consequently,
$$Z(\tilde{X}/p) = Z^{0}(\tilde{X}/M), \qquad L(\tilde{X}/p) = L^{0}(\tilde{X}/M) \qquad (2t)$$
Definitions analogous to (2q)–(2t) can be given for dual subsets.
The properties of Zadeh's entropy and of the weighted entropy which will be
considered below are represented as inequalities. These are consequences of the
relation between the arithmetic and geometric means, which in logarithmic
representation has the form [2]:
$$\sum_{i=1}^{n} \lambda_i \log a_i \le \log \left(\sum_{i=1}^{n} \lambda_i a_i\right) \qquad (2u)$$
where $a_i, \lambda_i \ge 0$ and $\sum_{i=1}^{n} \lambda_i = 1$, i.e. $\{\lambda_i\}$ is a probability distribution on X,
$i = 1, 2, \dots, n$, and which is also a basic inequality when one considers
Shannon's entropy.
2.2.1 Theorem: Let $\tilde{X}$ be a fuzzy subset with values of the membership function
$m = \{\mu_i\}$ and $p = \{p_i\}$ a probability distribution on X, $i = 1, 2, \dots, n$. Then:
$$Z(\tilde{X}/p) \ge P(\tilde{X})\, H\!\left(\frac{M}{P(\tilde{X})}\right) \qquad (2v)$$
where $P(\tilde{X}) = \sum_{i=1}^{n} \mu_i p_i$. Equality holds when all $\mu_i$ are equal.
Proof: Equation (2v) is a direct consequence of inequality (2u). Assuming
$$\lambda_i = \frac{\mu_i p_i}{P(\tilde{X})}, \qquad a_i = \frac{p_i}{\lambda_i}, \qquad i = 1, 2, \dots, n,$$
we get
$$\sum_{i=1}^{n} \frac{\mu_i p_i}{P(\tilde{X})} \log \frac{p_i\, P(\tilde{X})}{\mu_i p_i} \le \log \left(\sum_{i=1}^{n} p_i\right) = 0, \qquad (2w)$$
that is, $\sum_{i=1}^{n} \mu_i p_i \log \frac{\mu_i}{P(\tilde{X})} \ge 0$, which is exactly the difference
$Z(\tilde{X}/p) - P(\tilde{X}) H\!\left(M/P(\tilde{X})\right)$. Equality holds if and only if
$\lambda_i = p_i$, i.e. all $\mu_i$ are equal, $i = 1, 2, \dots, n$. Hence proved.

Recently, Marichal [52] introduced the notion of entropy of a discrete Choquet
capacity as a generalization of the Shannon entropy and showed that it satisfies
many properties that one would intuitively require from such a measure.
Let N be a finite set of n natural numbers, i.e. $N = \{1, 2, 3, \dots, n\}$, and let
$\mathcal{P}(N)$ be the power set of N. Again let $\mu$ be a discrete Choquet capacity, or
discrete fuzzy measure, on N. The Choquet capacity is said to be normalized if
$\mu(N) = 1$; hence let $F_N$ denote the set of all normalized Choquet capacities on N.
Let S be any subset of N; $\mu(S)$ can be interpreted as the degree of importance of
S. The generalized entropy is then defined as:
$$H(\mu) = \sum_{i=1}^{n} \sum_{S \subseteq N \setminus \{i\}} \gamma_{|S|}(n)\, h\!\left[\mu(S \cup \{i\}) - \mu(S)\right] \qquad (2x)$$
where $\gamma_r(n) = \frac{(n - r - 1)!\, r!}{n!}$, $r = 0, 1, 2, \dots, n - 1$, and
$h(x) = -x \log x$ if $x > 0$, $h(0) = 0$.
Hence equation (2x) is the generalized form of the Shannon entropy.


For particular fuzzy measures, such as belief and plausibility measures, two
entropy-like measures were proposed in evidence theory in the early 1980s,
namely the measure of dissonance and the measure of confusions [16]. For
general fuzzy measures it seems that no definition of entropy was given until
1999 when two proposals were introduced successively by Marichal [52] and
Yager [88]. These definitions are very similar but have been introduced
independently and within completely different frameworks. The first one gives
the degree to which numerical values are really used when aggregated by a
Choquet integral. The second one measures the amount of information in a
fuzzy measure when it is being used to represent the knowledge about an
uncertain variable.
Suppose that $N = \{1, 2, \dots, n\}$ represents a set of criteria in a multi-criteria
decision making problem and consider a fuzzy measure $\mu$ on N. Suppose that
$x_1, x_2, \dots, x_n \in \mathbb{R}$ represent quantitative evaluations of an object with respect
to criteria $1, 2, \dots, n$ respectively. We assume that these evaluations are defined
on the same measurement scale. A global evaluation (average value) of this object
can be calculated by means of the Choquet integral with respect to $\mu$. Formally,
the Choquet integral of $x \in \mathbb{R}^n$ with respect to a fuzzy measure $\mu$ on N is defined
by
$$C_\mu(x) = \sum_{i=1}^{n} x_{(i)} \left[\mu(A_{(i)}) - \mu(A_{(i+1)})\right] \qquad (2y)$$
where the arguments are arranged such that $x_{(1)} \le x_{(2)} \le \dots \le x_{(n)}$, and
$A_{(i)} = \{(i), \dots, (n)\}$ with $A_{(n+1)} = \emptyset$.
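The discrete Choquet integral (2y) can be implemented in a few lines. The sketch below is illustrative only: the fuzzy measure and the criterion scores are assumed values, and the function follows the ascending-order form of (2y).

```python
# Discrete Choquet integral (2y) with respect to a fuzzy measure mu.
def choquet(x, mu):
    """x: dict criterion -> score; mu: dict frozenset -> measure, mu(∅)=0, mu(N)=1."""
    order = sorted(x, key=x.get)          # x_(1) <= ... <= x_(n)
    total, remaining = 0.0, set(x)        # A_(i) = {(i), ..., (n)}
    for c in order:
        A_i = frozenset(remaining)
        remaining.discard(c)
        A_next = frozenset(remaining)     # A_(n+1) = ∅ at the last step
        total += x[c] * (mu[A_i] - mu[A_next])
    return total

N = ("c1", "c2")
mu = {frozenset(): 0.0, frozenset({"c1"}): 0.4,
      frozenset({"c2"}): 0.3, frozenset(N): 1.0}
print(choquet({"c1": 10, "c2": 20}, mu))   # 10*(1 - 0.3) + 20*0.3 = 13.0
```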

It would be interesting to appraise the extent to which the arguments
$x_1, x_2, \dots, x_n$ are really used in the aggregation by the Choquet integral. This
extent, which depends only on the importance of the criteria, can be measured by
the following entropy-like function, proposed and justified by Marichal [52]. We
call it the lower entropy. The lower entropy of a fuzzy measure $\mu$ on N is defined
by
$$H_l(\mu) = \sum_{i=1}^{n} \sum_{T \subseteq N \setminus \{i\}} \gamma_{|T|}(n)\, h\!\left[\mu(T \cup \{i\}) - \mu(T)\right]$$
Consider a variable V whose exact value, which lies in the space
$N = \{1, 2, \dots, n\}$, is not completely known. In many situations, the best we can
do is to formulate our knowledge about V by means of a fuzzy measure $\mu$ on N.
For each subset $S \subseteq N$ of values, $\mu(S)$ represents a measure associated with our
belief (or the confidence we have) that the value of V is contained in the subset S.
Here the monotonicity of $\mu$ means that we cannot be less confident that V lies in a
smaller set than in a larger one.

Now, consider the Shapley value [62] of $\mu$, i.e., the vector
$$\phi(\mu) = \left(\phi_1(\mu), \phi_2(\mu), \dots, \phi_n(\mu)\right) \in [0, 1]^n$$
whose i-th component, called the Shapley index of $i$, is defined by:
$$\phi_i(\mu) = \sum_{T \subseteq N \setminus \{i\}} \gamma_{|T|}(n) \left[\mu(T \cup \{i\}) - \mu(T)\right]$$
It can easily be proved that the indices $\phi_i(\mu)$ always sum up to one, so that the
Shapley value of any fuzzy measure on N is a probability measure on N. From
this observation, Yager [88] proposed to evaluate the uncertainty associated with
the variable V by taking the Shannon entropy of the Shapley value of $\mu$. This
leads to the upper entropy. The upper entropy of a fuzzy measure $\mu$ on N is
defined by $H_u(\mu) = H(\phi(\mu))$, that is,
$$H_u(\mu) = \sum_{i=1}^{n} h\!\left[\sum_{T \subseteq N \setminus \{i\}} \gamma_{|T|}(n) \left(\mu(T \cup \{i\}) - \mu(T)\right)\right]$$
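The Shapley indices and the upper entropy can be computed directly from the definitions above. The sketch below is illustrative only: the fuzzy measure on three elements is an assumed example, and the coefficients follow $\gamma_{|T|}(n) = (n-|T|-1)!\,|T|!/n!$ as used in the formulas above.

```python
# Shapley value of a fuzzy measure and the upper entropy H_u(mu) = H(phi(mu)).
from itertools import chain, combinations
from math import factorial, log2

N = ("1", "2", "3")
mu = {frozenset(): 0.0, frozenset({"1"}): 0.1, frozenset({"2"}): 0.1,
      frozenset({"3"}): 0.5, frozenset({"1", "2"}): 0.3,
      frozenset({"1", "3"}): 0.7, frozenset({"2", "3"}): 0.8, frozenset(N): 1.0}

def subsets(S):
    s = list(S)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def shapley(mu, N):
    n = len(N)
    phi = {}
    for i in N:
        rest = set(N) - {i}
        phi[i] = sum(factorial(n - len(T) - 1) * factorial(len(T)) / factorial(n)
                     * (mu[T | {i}] - mu[T]) for T in subsets(rest))
    return phi

def upper_entropy(mu, N):
    return -sum(p * log2(p) for p in shapley(mu, N).values() if p > 0)

phi = shapley(mu, N)
print(phi, sum(phi.values()))   # the Shapley indices sum to mu(N) = 1
print(upper_entropy(mu, N))     # bounded above by log2(3)
```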

Basic properties of entropy of fuzzy measure:


77

Here we analyze some basic properties of entropy of fuzzy measure. The main
properties of entropy of fuzzy measure, we have:
2.3.1

Continuity:

The measure H should be continuous, in the sense that

changing the values of the probabilities by a very small amount, should only
change the H value by a small amount.
2.3.2 Maximality: The maximality property for the Shannon entropy says that
the uncertainty of the outcome of an experiment is maximum when all outcomes
have equal probabilities, i.e. the uncertainty is highest when all the possible
events are equiprobable. Thus for any probability measure $p$ on N,
$$H(p_1, p_2, \dots, p_n) \le H\!\left(\frac{1}{n}, \frac{1}{n}, \dots, \frac{1}{n}\right) = \log n$$
and the entropy increases with the number of outcomes,
$$H\!\left(\frac{1}{n}, \frac{1}{n}, \dots, \frac{1}{n}\right) < H\!\left(\frac{1}{n+1}, \frac{1}{n+1}, \dots, \frac{1}{n+1}\right)$$
More precisely, $H(p)$ reaches its maximal value $\log n$ if and only if $p$ is the
uniform distribution on N. Concerning the entropies $H_l$ and $H_u$, we have the
following result: $H_l(\mu) \le \log n$ and $H_u(\mu) \le \log n$. Moreover:
(i) $H_l(\mu) = \log n$ if and only if $\mu(S) = |S|/n$ for all $S \subseteq N$;
(ii) $H_u(\mu) = \log n$ if and only if $\phi_i(\mu) = 1/n$ for all $i = 1, 2, \dots, n$.
The second inequality and property (ii) follow from the above property of the
Shannon entropy.
2.3.3 Additivity: The amount of entropy should be independent of how the
process is considered as being divided into parts. Such a functional relationship
characterizes the entropy of a system with respect to its sub-systems. It demands
that the entropy of every system can be identified and then computed from the
entropies of its sub-systems, i.e. if $S = \bigcup_{i=1}^{n} S_i$ then $H(S) = \sum_{i=1}^{n} H(S_i)$.
This is because statistical entropy is a probabilistic measure of uncertainty or
ignorance about data, whereas information is a measure of a reduction in that
uncertainty.
2.3.4 Symmetry: It is obvious that the Shannon entropy H is symmetric, in the
sense that permuting the elements of N has no effect on the entropy. This
symmetry property is also fulfilled by $H_l$ and $H_u$. For any permutation $\pi$ on
$\{1, 2, \dots, n\}$, we denote by $\pi\mu$ the fuzzy measure on N defined by
$\pi\mu(\pi(S)) = \mu(S)$ for all $S \subseteq N$, where $\pi(S) = \{\pi(i) \mid i \in S\}$. We then have the
following result:
$$H_l(\pi\mu) = H_l(\mu), \qquad H_u(\pi\mu) = H_u(\mu).$$
This accords with the interpretation of the entropy in the aggregation framework.
Indeed, one can easily show that
$$C_{\pi\mu}\left(x_{\pi(1)}, x_{\pi(2)}, \dots, x_{\pi(n)}\right) = C_\mu\left(x_1, x_2, \dots, x_n\right)$$
Thus, permuting the arguments of the Choquet integral has no effect on the
degree to which one uses these arguments. Similarly, the uncertainty associated
with the variable V is independent of any permutation of the elements of N.
2.3.5 Expansibility: The classical expansibility property for the Shannon
entropy says that suppressing an outcome with zero probability does not change
the uncertainty of the outcome of an experiment [54]:
$$H(p_1, p_2, \dots, p_{n-1}, 0) = H(p_1, p_2, \dots, p_{n-1}).$$
This property can be extended to the framework of fuzzy measures in the
following way. Let $\mu$ be a fuzzy measure on N and let $k \in N$ be a null element
for $\mu$, that is, such that $\mu(T \cup \{k\}) = \mu(T)$ for all $T \subseteq N \setminus \{k\}$. Denote also by
$\mu_{-k}$ the restriction of $\mu$ to $N \setminus \{k\}$. We then have the following result: if
$k \in N$ is a null element for $\mu$, then
$$H_l(\mu) = H_l(\mu_{-k}), \qquad H_u(\mu) = H_u(\mu_{-k}).$$
This is a very natural property in the aggregation framework. Since a null
element $k$ does not contribute in the decision problem, it can be omitted without
changing the result [99]:
$$C_\mu(x_1, x_2, \dots, x_n) = C_{\mu_{-k}}(x_1, x_2, \dots, x_{k-1}, x_{k+1}, \dots, x_n)$$
whenever $k$ is a null element for $\mu$.
2.3.6 Decisivity: For any probability measure $p$ on N, we clearly have
$H(p) \ge 0$. Now, the decisivity property for the Shannon entropy says that there
is no uncertainty in an experiment in which one outcome has probability one [54]:
$$H(1, 0, \dots, 0) = H(0, 1, \dots, 0) = \dots = H(0, 0, \dots, 1) = 0.$$
More precisely, $H(p)$ reaches its minimal value 0 if and only if $p$ is a Dirac
measure on N. Concerning the entropies $H_l$ and $H_u$, we observe two different
behaviours, as the following result shows: $H_l(\mu) \ge 0$ and $H_u(\mu) \ge 0$. Moreover,
(i) $H_l(\mu) = 0$ if and only if $\mu$ is a binary-valued fuzzy measure;
(ii) $H_u(\mu) = 0$ if and only if $\mu$ is a Dirac measure.
Property (i) is quite relevant for the aggregation framework. Indeed, it can be
shown [52] that $\mu$ is a binary-valued fuzzy measure if and only if
$$C_\mu(x) \in \{x_1, x_2, \dots, x_n\} \quad \text{for all } x \in \mathbb{R}^n.$$
In other terms, $H_l(\mu)$ is minimum (0) if and only if only one argument is really
used in the aggregation; this is a fundamental condition.

2.3.7 Increasing Monotonicity: Let $\mu \in F_A \setminus \{\mu^*\}$ and define
$\mu_\alpha = \mu + \alpha(\mu^* - \mu)$, $\alpha \in [0, 1]$. Then for any $0 \le \alpha_1 < \alpha_2 \le 1$, we have
$$H_M(\mu_{\alpha_1}) < H_M(\mu_{\alpha_2}).$$
We now state another very important property of $H_M$, which follows from the
identity $H_M(\mu) = \frac{1}{n!} \sum_{A \in S_n} H(P_A)$ together with the strict concavity of the
Shannon entropy.


2.3.8 Strict Concavity: For any $\mu_1, \mu_2 \in F_A$ and $\lambda \in (0, 1)$, we have
$$H_M\left(\lambda \mu_1 + (1 - \lambda) \mu_2\right) > \lambda H_M(\mu_1) + (1 - \lambda) H_M(\mu_2).$$
An immediate consequence of the previous property is that maximizing $H_M$
over a convex subset of $F_A$ always leads to a unique global maximum.
For probability distributions, the strict concavity of the Shannon entropy and its
naturalness as a measure of uncertainty gave rise to the maximum entropy
principle, which was stated by [52] as follows:
When one has only partial information about the possible outcomes of a random
variable, one should choose its probability distribution so as to maximize the
uncertainty about the missing information.
In other words, all the available information should be used, but one should be as
uncommitted as possible about the missing information. In more mathematical
terms, this principle states that among all the probability distributions that are in
accordance with the available prior knowledge (that is, a set of constraints), one
should choose the one that has maximum uncertainty.

2.4. Applications of entropy of fuzzy measure:
The study of different concepts of entropy is very interesting, not only in physics
but also in information theory and in other mathematical sciences such as fuzzy
measures, considered in its more general vision. It may also be a very useful tool
for bio-computing, or in many other fields such as the environmental sciences.
This is because, among other interpretations with important practical
consequences, the law of entropy means that energy cannot be fully recycled.
Many quotations have been made until now referring to the content and
significance of this fuzzy measure, for example:
"Gain in entropy always means loss of information, and nothing more." [40].
"Information is just known entropy. Entropy is just unknown information." [71].


Mutual information and relative entropy, also called Kullback-Leibler


divergence, among other related concepts, have been very useful in learning
systems, both in supervised and unsupervised cases.
Entropy and related information measures provide descriptions of the long-term
behaviour of random processes [5], and this behaviour is a key factor in
developing the Coding Theorems of IT (Information Theory).
The contributions of Andrei Nikolaievich Kolmogorov to this mathematical
theory provided great advances over the Shannon formulations, proposing a new
complexity theory, now translated into computer science. According to such a
theory, the complexity of a message is given by the size of the program necessary
to enable the reception of that message. From these ideas, Kolmogorov analyzed
the entropy of literary texts and of Pushkin's poetry. Such entropy appears as a
function of the semantic capacity of the texts, depending on factors such as their
extension and also the flexibility of the corresponding language.
It should also be mentioned that Norbert Wiener, considered the founder of
Cybernetics, proposed a similar vision of this problem in 1948. However, the
approach used by Shannon differs from that of Wiener in the nature of the
transmitted signal and in the type of decision made by the receiver. In the
Shannon model, messages are first encoded and then transmitted, whereas in the
Wiener model the signal is communicated directly through the channel without
needing to be encoded.


Another measure, conceptualized by R. A. Fisher, the so-called Fisher
Information (FI), applies statistics to estimation, representing the amount of
information that a message carries concerning an unobservable parameter.
Certainly the initial studies on IT were undertaken by Harry Nyquist and later by
Ralph Hartley, who recognized the logarithmic nature of the measure of
information. This later became essential in Shannon's and Wiener's papers.
The contribution of the Romanian mathematician and economist Nicholas
Georgescu-Roegen is also very interesting; his great work was The Entropy Law
and the Economic Process. In this memorable book, he proposed that the second
law of thermodynamics also governs economic processes. Such ideas permitted
the development of some new fields, such as Bioeconomics or Ecological
Economics.
Some others should also be noted, studying a different kind of measure, the
so-called inaccuracy measure, involving two probability distributions.
R. Yager [89], and M. Higashi and G. J. Klir [69], expressed the entropy measure
as the difference between two fuzzy sets. More specifically, this is the difference
between a fuzzy set and its complement, which is also a fuzzy set.

Properties of Null-Additive and Absolute Continuity of a Fuzzy Measure

3.1 Absolute continuity of a fuzzy measure:
In calculus, absolute continuity is a smoothness property of functions that is
stronger than continuity and uniform continuity. In classical measure theory [79],
if $(X, I, M)$ is a measure space and f is a nonnegative integrable function, then the
Lebesgue integral
$$N(T) = \int_T f \, dM, \quad T \in I \qquad (3a)$$
defines a measure N on $(X, I)$ that is absolutely continuous with respect to M in
the sense that
$$M(T) = 0 \Rightarrow N(T) = 0, \quad T \in I \qquad (3b)$$
The absolute continuity of N with respect to M is usually denoted by $N \ll M$.
When N is finite, then absolute continuity can also be defined as
$$M(T_n) \to 0 \Rightarrow N(T_n) \to 0, \quad \{T_n\} \subseteq I \qquad (3c)$$
or, alternatively, as: for any $\varepsilon > 0$ there exists $\delta > 0$ such that $N(T) < \varepsilon$
whenever $T \in I$ and $M(T) < \delta$.
The notion of absolute continuity allows one to obtain generalisations of the
relationship between the two central operations of calculus, differentiation and
integration, expressed by the fundamental theorem of calculus in the framework
of Riemann integration. Such generalisations are often formulated in terms of
Lebesgue integration.
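Condition (3b) can be checked exhaustively for discrete measures on a small finite space. The sketch below is illustrative only: the two density functions are assumed values, and the test simply verifies that N vanishes on every set on which M vanishes.

```python
# Checking the classical absolute-continuity condition (3b) for two discrete
# measures built from made-up densities on a three-point space.
from itertools import chain, combinations

X = ("a", "b", "c")
m_density = {"a": 0.0, "b": 2.0, "c": 1.0}   # reference measure M
n_density = {"a": 0.0, "b": 0.5, "c": 3.0}   # candidate measure N

def measure(density, T):
    return sum(density[x] for x in T)

def subsets(S):
    s = list(S)
    return [set(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def absolutely_continuous(n_density, m_density):
    """True when M(T) = 0 implies N(T) = 0 for every subset T, as in (3b)."""
    return all(measure(n_density, T) == 0
               for T in subsets(X) if measure(m_density, T) == 0)

print(absolutely_continuous(n_density, m_density))   # True: N vanishes wherever M does
```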
Similarly, if $(X, I, \mu)$ is a fuzzy measure space [106] and f is a nonnegative
measurable function, then the fuzzy integral [46], [72],
$$\nu(A) = \int_A f \, d\mu, \quad A \in I \qquad (3d)$$
defines a lower semi-continuous fuzzy measure $\nu$ on $(X, I)$; moreover, if $\mu$ is a
finite fuzzy measure, then so is $\nu$ [106].
In general, let $(X, I)$ denote a measurable space and let M and N denote fuzzy
measures (or semicontinuous fuzzy measures) on $(X, I)$. The absolute continuity
of N with respect to M of type $S \in \{1, 2, 3, 4, \dots, 9\}$ is denoted by $N \ll_S M$, and
the absolute continuity of classical measures can be generalized for fuzzy
measures in the following different ways.
3.1.1 Definition:

(1) $N \ll_1 M$ iff $N(T) = 0$ : $T \in I$, $M(T) = 0$.
(2) $N \ll_2 M$ iff $N(T) = N(U)$ : $T, U \in I$, $T \subseteq U$, $M(T) = M(U) < \infty$.
(3) $N \ll_3 M$ iff $N(T_n) \to 0$ : $\{T_n\} \subseteq I$, $\{T_n\}$ is non-increasing, $M(T_n) \to 0$.
(4) $N \ll_4 M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \supseteq T_{n+1} \supseteq T$ for all n,
$M(T_n) \to M(T) < \infty$.
(5) $N \ll_5 M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \subseteq T_{n+1} \subseteq T$ for all n,
$M(T_n) \to M(T) < \infty$.
(6) $N \ll_6 M$ iff $N(T_n) \to 0$ : $\{T_n\} \subseteq I$, $M(T_n) \to 0$; or iff for any $\varepsilon > 0$ there
exists $\delta > 0$ such that $N(T) < \varepsilon$ : $T \in I$, $M(T) < \delta$.
(7) $N \ll_7 M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T \subseteq T_n$,
$M(T_n) \to M(T) < \infty$.
(8) $N \ll_8 M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \subseteq T$,
$M(T_n) \to M(T) < \infty$.
(9) $N \ll_9 M$ iff $N(U_n) - N(T_n) \to 0$ : $\{T_n\}, \{U_n\} \subseteq I$, $T_n \subseteq U_n$,
$M(U_n) - M(T_n) \to 0$; or iff for any $\varepsilon > 0$ there exists $\delta > 0$ such that
$N(U) - N(T) < \varepsilon$ : $T, U \in I$, $T \subseteq U$, $M(U) - M(T) < \delta$.
Here $n \in \{1, 2, 3, \dots\}$ and the colon (:) symbol stands for "whenever"; when
appropriate, equivalent formulations are included. Types (1) and (6) in Definition
3.1.1 have been used in classical measure theory as the definition of absolute
continuity, where they coincide when N is finite. Type (9) has been employed for
an extension theorem of fuzzy measures [107]. All types of absolute continuity
given in Definition 3.1.1 coincide with the classical definition of absolute
continuity when both M and N are classical (i.e. additive) measures and N is
finite. We now define varieties of types 2, 4, 5, 7, 8 and 9 of absolute continuity,
which are denoted by the subscripts a and b. Otherwise the same notation is used
as in Definition 3.1.1.
3.1.2 Definition:
(1) $N \ll_{2a} M$ iff $N(T) = N(U)$ : $T, U \in I$, $T \subseteq U$, $M(U \setminus T) = 0$; or iff
$N(U \cup T) = N(U)$ : $T, U \in I$, $M(T) = 0$; or iff $N(U \setminus T) = N(U)$ : $T, U \in I$,
$M(T) = 0$.
(2) $N \ll_{2b} M$ iff $N(U \setminus T) = 0$ : $T, U \in I$, $T \subseteq U$, $M(U) = M(T) < \infty$; or iff
$N(T) = 0$ : $T, U \in I$, $T \cap U = \emptyset$, $M(U \cup T) = M(U) < \infty$; or iff $N(T) = 0$ :
$T, U \in I$, $T \subseteq U$, $M(U \setminus T) = M(U) < \infty$.
(3) $N \ll_{4a} M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \supseteq T_{n+1} \supseteq T$,
$M(T_n \setminus T) \to 0$; or iff $N(T \cup T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $\{T_n\}$ is
non-increasing, $M(T_n) \to 0$.
(4) $N \ll_{4b} M$ iff $N(T_n \setminus T) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \supseteq T_{n+1} \supseteq T$,
$M(T_n) \to M(T) < \infty$; or iff $N(T_n) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $\{T_n\}$ is
non-increasing, $T_n \cap T = \emptyset$, $M(T \cup T_n) \to M(T) < \infty$.
(5) $N \ll_{5a} M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \subseteq T_{n+1} \subseteq T$,
$M(T \setminus T_n) \to 0$; or iff $N(T \setminus T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $\{T_n\}$ is
non-increasing, $M(T_n) \to 0$.
(6) $N \ll_{5b} M$ iff $N(T \setminus T_n) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \subseteq T_{n+1} \subseteq T$,
$M(T_n) \to M(T) < \infty$; or iff $N(T_n) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $\{T_n\}$ is
non-increasing, $T_n \subseteq T$, $M(T \setminus T_n) \to M(T) < \infty$.
(7) $N \ll_{7a} M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T \subseteq T_n$,
$M(T_n \setminus T) \to 0$; or iff $N(T \cup T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $M(T_n) \to 0$.
(8) $N \ll_{7b} M$ iff $N(T_n \setminus T) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $T \subseteq T_n$,
$M(T_n) \to M(T) < \infty$; or iff $N(T_n) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \cap T = \emptyset$,
$M(T \cup T_n) \to M(T) < \infty$.
(9) $N \ll_{8a} M$ iff $N(T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \subseteq T$,
$M(T \setminus T_n) \to 0$; or iff $N(T \setminus T_n) \to N(T)$ : $T \in I$, $\{T_n\} \subseteq I$, $M(T_n) \to 0$.
(10) $N \ll_{8b} M$ iff $N(T \setminus T_n) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \subseteq T$,
$M(T_n) \to M(T) < \infty$; or iff $N(T_n) \to 0$ : $T \in I$, $\{T_n\} \subseteq I$, $T_n \subseteq T$,
$M(T \setminus T_n) \to M(T) < \infty$.
(11) $N \ll_{9a} M$ iff for any $\varepsilon > 0$ there exists $\delta > 0$ such that
$N(U) - N(T) < \varepsilon$ : $T, U \in I$, $T \subseteq U$, $M(U \setminus T) < \delta$; or iff for any $\varepsilon > 0$
there exists $\delta > 0$ such that $|N(U) - N(T)| < \varepsilon$ : $T, U \in I$, $M(U \triangle T) < \delta$.
(12) $N \ll_{9b} M$ iff for any $\varepsilon > 0$ there exists $\delta > 0$ such that
$N(U \setminus T) < \varepsilon$ : $T, U \in I$, $T \subseteq U$, $M(U) - M(T) < \delta$.
In general, the relations associated with the varieties of absolute continuity
introduced in Definition 3.1.2 are neither reflexive nor transitive. All types of
absolute continuity given in this section can be regarded as generalizations of the
classical concept of absolute continuity. A theorem for absolute continuity of a
fuzzy measure is as follows.
3.1.3 Theorem: Let M and N be two finite fuzzy measures which are continuous
from above and continuous from below. If N is autocontinuous from above, then
M is absolutely continuous with respect to N of type 1 if and only if M is
absolutely continuous with respect to N of type 6.
Proof: It is obvious that if $M \ll_6 N$, then $M \ll_1 N$.
Suppose now that $T \in I$, $N(T) = 0 \Rightarrow M(T) = 0$. If the theorem were not true,
then there would exist $\varepsilon > 0$ and a sequence $\{T_n\}$ from $I$ such that
$$N(T_n) < \frac{1}{n} \quad \text{and} \quad M(T_n) > \varepsilon, \qquad (3e)$$
where $n = 1, 2, 3, \dots$.
Since N is autocontinuous from above, there exists a subsequence $\{T_{n_i}\}$ of the
sequence $\{T_n\}$ such that
$$N\left(\bigcup_{i=p}^{k} T_{n_i}\right) < \frac{1}{p} \quad \text{for } p = 1, 2, 3, \dots \text{ and all } k \ge p. \qquad (3f)$$
By the continuity from above of N we have
$$\lim_{p \to \infty} N\left(\bigcup_{i=p}^{\infty} T_{n_i}\right) = N\left(\bigcap_{p=1}^{\infty} \bigcup_{i=p}^{\infty} T_{n_i}\right). \qquad (3g)$$
Since N is continuous from below we obtain by equation (3f)
$$N\left(\bigcup_{i=p}^{\infty} T_{n_i}\right) = \lim_{k \to \infty} N\left(\bigcup_{i=p}^{k} T_{n_i}\right) \le \frac{1}{p}.$$
Hence by equation (3g),
$$N\left(\bigcap_{p=1}^{\infty} \bigcup_{i=p}^{\infty} T_{n_i}\right) = 0 \quad \Rightarrow \quad M\left(\bigcap_{p=1}^{\infty} \bigcup_{i=p}^{\infty} T_{n_i}\right) = 0.$$
On the other hand, we obtain by the continuity from above and the continuity
from below of the fuzzy measure M and equation (3e)
$$M\left(\bigcap_{p=1}^{\infty} \bigcup_{i=p}^{\infty} T_{n_i}\right) = \lim_{p \to \infty} M\left(\bigcup_{i=p}^{\infty} T_{n_i}\right) = \lim_{p \to \infty} \lim_{k \to \infty} M\left(\bigcup_{i=p}^{k} T_{n_i}\right) \ge \lim M(T_{n_p}) \ge \varepsilon,$$
which contradicts our assumption; hence the theorem is proved.

3.2 Some results on null additive fuzzy measure:
A set function $\mu: I \to [0, \infty]$ is called null-additive if $\mu(A \cup B) = \mu(A)$
whenever $A, B \in I$, $A \cap B = \emptyset$ and $\mu(B) = 0$ [28]. Some results on
null-additive fuzzy measures are as follows. A set function $\mu: I \to [0, \infty]$ is said
to be
(1) weakly null-additive, if $\mu(A \cup B) = 0$ whenever $A, B \in I$,
$\mu(A) = \mu(B) = 0$;
(2) converse null-additive, if $\mu(A \setminus B) = 0$ whenever $A \in I$, $B \subseteq A$, $B \in I$,
$\mu(A) = \mu(B)$;
(3) pseudo null-additive, if $\mu(B \cup C) = \mu(C)$ whenever $A \in I$, $B, C \subseteq A$,
$B, C \in I$, $\mu(A \setminus B) = \mu(A)$;
(4) if $\mu$ is null-additive, then it is weakly null-additive; if $\mu$ is pseudo
null-additive, then it is converse null-additive;
(5) a non-additive measure $\mu$ is said to be $\sigma$-null-additive if, for every sequence
$\{A_i\}$ of pairwise disjoint sets from $I$ with $\mu(A_i) = 0$, $i = 1, 2, 3, \dots$, and every
$A \in I$,
$$\mu\left(A \cup \bigcup_{i=1}^{\infty} A_i\right) = \mu(A);$$
(6) $\mu$ is $\sigma$-null-additive if and only if $\mu$ is null-additive and $\mu(A_i) = 0$,
$i = 1, 2, 3, \dots$ implies $\mu\left(\bigcup_{i=1}^{\infty} A_i\right) = 0$ for every sequence $\{A_i\}$ of
pairwise disjoint sets from $I$.

For the case of non-additive measure theory, the situation is not so simple. There
are many discussions on Lebesgue decomposition type theorems, such as a
version on submeasures [82], a version on $\oplus$-decomposable measures [25], a
version on $\sigma$-finite fuzzy measures [81], a version on signed fuzzy measures [46],
and so on.
A non-additive measure on $I$ is an extended real valued set function
$\mu: I \to [0, \infty]$ and is said to be
(1) exhaustive if $\lim_{n \to \infty} \mu(A_n) = 0$ for any infinite disjoint sequence $\{A_n\}$ [110];
(2) order continuous if $\lim_{n \to \infty} \mu(A_n) = 0$ whenever $A_n \in I$, $A_n \downarrow \emptyset$
$(n \to \infty)$ [65];
(3) strongly order continuous if $\lim_{n \to \infty} \mu(A_n) = 0$ whenever $A_n, A \in I$,
$A_n \downarrow A$ and $\mu(A) = 0$ [107].

Let $\mu$ and $\nu$ be two finite fuzzy measures. If for every $P \in I$,
$\nu(P) = 0 \Rightarrow \mu(P) = 0$, then we say that $\mu$ is absolutely continuous with respect
to $\nu$ [107]. On the other hand, if for every $\varepsilon > 0$ there is a $\delta > 0$ such that
$\mu(P) < \varepsilon$ whenever $P \in I$ and $\nu(P) < \delta$, then $\mu$ is absolutely continuous with
respect to $\nu$ in the $(\varepsilon, \delta)$ sense. Now we state a theorem.
3.2.1 Theorem: Let $\mu$ be a null-additive fuzzy measure which is continuous from
above and continuous from below. Then there exists a set A from $I$ such that
$$\mu(A) = \sup\{\mu(P) : P \in I\}, \quad \mu(P \setminus A) = 0 \quad \text{and} \quad \mu(P) = \mu(P \cap A), \quad P \in I.$$
Proof: We construct a sequence $\{A_n\}$ from $I$ which will generate the desired set
A. Let $A_0 = \emptyset$; we take $A_1$ from $I$ such that
$$\mu(A_1) = \sup\{\mu(P) : P \in I\}.$$
This is possible by the continuity from below of $\mu$. We choose $A_2$ from $I$ such
that
$$\mu(A_2) = \sup\{\mu(P) : P \subseteq X \setminus A_1\}.$$
Repeating this procedure, we choose a sequence $\{A_n\}$ such that
$$\mu(A_n) = \sup\left\{\mu(P) : P \subseteq X \setminus \bigcup_{i=0}^{n-1} A_i,\; P \in I\right\} \qquad (3h)$$
holds. We take $A = \bigcup_{i=0}^{\infty} A_i$. Then by the construction equation (3h) holds.
The continuity from above of $\mu$ implies
$$\lim_{n \to \infty} \mu\left(P \setminus \bigcup_{i=0}^{n} A_i\right) = \mu(P \setminus A). \qquad (3i)$$
From equation (3h) we obtain
$$\lim_{n \to \infty} \mu(A_n) \ge \lim_{n \to \infty} \mu\left(P \setminus \bigcup_{i=0}^{n} A_i\right).$$
Hence, by the exhaustivity of $\mu$ [Ref.] and equation (3i), $\mu(P \setminus A) = 0$. Hence,
by the null-additivity of $\mu$,
$$\mu(P) = \mu\left((P \setminus A) \cup (P \cap A)\right) = \mu(P \cap A).$$
Hence proved.

3.3 Lebesgue decomposition type null additive fuzzy measure:
Let $\mu$ and $\nu$ be two finite fuzzy measures. If $\nu$ can be represented as
$\nu = \nu_c + \nu_s$, where $\nu_c$ and $\nu_s$ are absolutely continuous and singular with
respect to $\mu$, respectively, then this representation is said to be a Lebesgue
decomposition. Let $\mu$ and $\nu$ be two finite fuzzy measures defined on $I$. The
fuzzy measure $\nu$ is called singular with respect to $\mu$, denoted by $\nu \perp \mu$, if there
exists a set A from $I$ such that [107]
$$\mu(P \cap A) = 0 \quad \text{and} \quad \nu(P \setminus A) = 0, \quad \forall P \in I.$$
If, for null-additive fuzzy measures $\mu$ and $\nu$ which are continuous from above
and continuous from below, $\nu \perp \mu$ holds, then we have $\mu \perp \nu$ too. Now we
have some theorems of Lebesgue decomposition type [27].


3.3.1 Theorem: Let $\nu$ be a converse null-additive non-additive measure on $I$.
Then there is a set $A \in I$ such that $\nu(P \setminus A) = 0$ for any $P \in I$; and further, if $\nu$
is null-additive, then $\nu(P) = \nu(P \cap A)$ for any $P \in I$.
Proof: Let $\alpha = \sup\{\nu(P) : P \in I\}$. By the definition of the supremum, we can
choose a sequence $\{P_n^1\}$ from $I$ such that for any $n = 1, 2, 3, \dots$
$$\alpha - \frac{1}{n} < \nu(P_n^1).$$
Let $A_1 = \bigcup_{n=1}^{\infty} P_n^1$; then $A_1 \in I$, and thus we have
$$\alpha - \frac{1}{n} < \nu(P_n^1) \le \nu(A_1),$$
so that $\nu(A_1) = \alpha$. Similarly, there exists a sequence $\{P_n^2\}$ from $I$ with
$P_n^2 \subseteq X \setminus A_1$, $n \ge 1$, such that
$$\nu(A_2) = \sup\{\nu(P) : P \in I,\; P \subseteq X \setminus A_1\},$$
where $A_2 = \bigcup_{n=1}^{\infty} P_n^2$. Let $A = A_1 \cup A_2$; then $A \in I$ and
$$\alpha = \nu(A_1) \le \nu(A) \le \sup\{\nu(P) : P \in I\} = \alpha.$$
Therefore $\nu(A) = \nu(A_1) = \alpha$. Noting that $A_1 \cap A_2 = \emptyset$, by the converse
null-additivity of $\nu$ we have $\nu(A_2) = \nu(A \setminus A_1) = 0$. Therefore, for any
$P \in I$, it follows from
$$P \setminus A \subseteq P \setminus A_1 \subseteq X \setminus A_1$$
that $\nu(P \setminus A) \le \nu(A_2) = 0$. Thus for any $P \in I$ we have $\nu(P \setminus A) = 0$.
When $\nu$ is null-additive, then
$$\nu(P) = \nu\left((P \cap A) \cup (P \setminus A)\right) = \nu(P \cap A) \quad \text{for any } P \in I.$$
The proof of the theorem is now complete.
Similarly we have: if $\nu$ is an exhaustive non-additive measure on $I$, then there is
a set $A \in I$ such that $\nu(P \setminus A) = 0$ for any $P \in I$; and further, if $\nu$ is
null-additive, then $\nu(P) = \nu(P \cap A)$ for any $P \in I$.
3.3.2 Theorem: Let $\mu$ and $\nu$ be two null-additive fuzzy measures on $I$. Then
there exist two null-additive fuzzy measures $\nu_c$ and $\nu_s$ such that
$\nu_c(P) = \nu(P \setminus A)$ and $\nu_s(P) = \nu(P \cap A)$ for a set $A \in I$, and $\nu_c$ is
absolutely continuous with respect to $\mu$ and $\nu_s$ is singular with respect to $\mu$.
Proof: The family $I_1 = \{P \in I : \mu(P) = 0\}$ is a sub-$\sigma$-algebra of the $\sigma$-ring $I$.
By Theorem 3.2.1 the restriction of $\nu$ to $I_1$ has a set $A \in I_1$ such that
$\nu(P \setminus A) = 0$ and $\nu(P) = \nu(P \cap A)$ for $P \in I_1$. We take
$$\nu_c(P) = \nu(P \setminus A), \qquad \nu_s(P) = \nu(P \cap A)$$
for each $P \in I$. It is easy to check that $\nu_c$ and $\nu_s$ are null-additive fuzzy
measures, that $\nu_c$ is absolutely continuous with respect to $\mu$, and that $\nu_s$ is
singular with respect to $\mu$.

3.3.3 Theorem: Let $\mu$ and $\nu$ be two fuzzy measures on $I$ which are
autocontinuous from above and continuous from above and from below. Then
there exist two fuzzy measures $\nu_c$ and $\nu_s$, autocontinuous from above, such that
$\nu_c(P) = \nu(P \setminus A)$ and $\nu_s(P) = \nu(P \cap A)$ for a set $A \in I$, and $\nu_c$ is
absolutely continuous with respect to $\mu$ and $\nu_s$ is singular with respect to $\mu$.
Proof: We take the same $\nu_c$ and $\nu_s$ as in the proof of Theorem 3.3.2. Then, by
Theorem 3.3.1, $\nu_c$ is absolutely continuous.

3.3.4 Theorem: Let $\mu$ and $\nu$ be non-additive measures on $I$. If $\nu$ is either
converse null-additive or exhaustive, and $\mu$ is weakly null-additive and
continuous from below, then there exists a set $A \in I$ such that the non-additive
measures $\nu_c$ and $\nu_s$ defined by
$$\nu_c(P) = \nu(P \setminus A) \quad \text{and} \quad \nu_s(P) = \nu(P \cap A), \quad P \in I,$$
satisfy: $\nu_c$ is absolutely continuous with respect to $\mu$ and $\nu_s$ is singular with
respect to $\mu$.
Proof: The family $I_1 = \{P \in I : \mu(P) = 0\}$ is a sub-ring of $I$ by the weak
null-additivity and continuity from below of $\mu$. By using Theorem 3.3.1 we
choose a set $A \in I_1$ such that
$$\nu(A) = \sup\{\nu(P) : P \in I_1\} \quad \text{and} \quad \nu(P \setminus A) = 0, \quad \forall P \in I_1.$$
Now take $\nu_c(P) = \nu(P \setminus A)$ and $\nu_s(P) = \nu(P \cap A)$, $P \in I$; then $\nu_c$ and $\nu_s$
satisfy the required properties:
$$\nu_c(P) = 0 \text{ if } \mu(P) = 0, \qquad \nu_s(P \setminus A) = \nu\left((P \setminus A) \cap A\right) = \nu(\emptyset) = 0 \quad \text{for any } P \in I.$$

Similarly, we have many further results of Lebesgue decomposition type for
null-additive fuzzy measures, which are proved in the same way as the above
theorems and which we state as follows.

(1) Let $\mu$ and $\nu$ be non-additive measures on $I$. If $\nu$ is super-additive and $\mu$ is
weakly null-additive and continuous from below, then there exist non-additive
measures $\nu_c$ and $\nu_s$ on $I$ such that $\nu_c \ll_1 \mu$, $\nu_s \perp \mu$ and $\nu \ge \nu_c + \nu_s$.
(2) Let $\mu$ and $\nu$ be non-additive measures on $I$. If $\nu$ is either converse
null-additive or exhaustive and $\mu$ is null-additive, then there exist non-additive
measures $\nu_c$ and $\nu_s$ on $I$ such that $\nu_c \ll_1 \mu$ and $\nu_s \perp \mu$.
(3) Let $\mu$ and $\nu$ be non-additive measures on $I$. If $\nu$ is super-additive and $\mu$ is
null-additive, then there exist non-additive measures $\nu_c$ and $\nu_s$ on $I$ such that
$\nu_c \ll_1 \mu$, $\nu_s \perp \mu$ and $\nu \ge \nu_c + \nu_s$.
(4) Let $\mu$ and $\nu$ be non-additive measures on $I$. If $\mu$ is strongly order
continuous and $\nu$ is weakly null-additive and continuous from below, then there
exists a set $A \in I$ such that the non-additive measures $\nu_c$ and $\nu_s$ defined by
$\nu_c(P) = \nu(P \setminus A)$ and $\nu_s(P) = \nu(P \cap A)$, $P \in I$, satisfy $\nu_c \ll_6 \mu$ and
$\nu_s \perp \mu$, respectively.
(5) Let $\mu$ and $\nu$ be non-additive measures on $I$. If $\mu$ is pseudo null-additive
and $\nu$ is null-additive and continuous from below, then there exists a set $A \in I$
such that the non-additive measures $\nu_c$ and $\nu_s$ defined by
$\nu_c(P) = \nu(P \setminus A)$ and $\nu_s(P) = \nu(P \cap A)$, $P \in I$, satisfy $\nu_c \ll_1 \mu$ and
$\nu_s \perp \mu$, respectively.
Similarly, if $\mu$ is super-additive, then it is converse null-additive and exhaustive.

3.4 Generalization of the symmetric fuzzy measure:
In applied mathematics, specifically in fuzzy logic, the ordered weighted
averaging (OWA) operators provide a parameterized class of mean-type
aggregation operators [87]. Formally, an OWA operator of dimension n is a
mapping $f: I^n \to I$ that has an associated collection of weights
$w = \{w_1, w_2, \dots, w_n\}$ lying in the unit interval and summing to one, with
$$f(a_1, a_2, \dots, a_n) = \sum_{j=1}^{n} w_j b_j \qquad (3j)$$
where $b_j$ is the j-th largest of the $a_i$. By choosing different w one can
implement different aggregation operators. The OWA operator is a non-linear
operator as a result of the process of determining the $b_j$. Aggregation is
defined as the integration of a function with respect to a fuzzy measure.
It is a well known fact that an OWA operator is a discrete Choquet integral with
respect to a symmetric fuzzy measure. Hence the Choquet integral generalizes
OWA operators, and the richness of the Choquet integral is paid for by its
complexity. Our aim is to introduce a concept, similar to k-additive measures,
bridging the gap between symmetric fuzzy measures and general fuzzy measures.
The Choquet integral with respect to an m-symmetric fuzzy measure generalizes
the concept of OWA. Another generalization of OWA operators can be found in
[98], in which the double aggregation operators are defined as an aggregation of
two other aggregation operators. If we deal with a space of n elements, a
probability measure only needs n − 1 coefficients, while a fuzzy measure needs
$2^n - 2$. Restricted fuzzy measures are those measures that require fewer than $2^n$
parameters because they are constrained to satisfy additional properties compared
with general or unrestricted fuzzy measures. Examples are the Sugeno
$\lambda$-measures [72], decomposable measures [96], k-additive fuzzy measures [65],
m-symmetric measures [78], and hierarchically decomposable ones [101].
A fuzzy measure $\mu$ on A is said to be symmetric if it satisfies
$$|X| = |Y| \Rightarrow \mu(X) = \mu(Y), \quad \forall X, Y \subseteq A,$$
where $A = \{a_1, a_2, \dots, a_n\}$ is a finite universal set of n elements. Now let
$a_i, a_j$ be two elements of the universal set A; then $a_i$ and $a_j$ are said to be
indifferent elements iff
$$\mu(X \cup \{a_i\}) = \mu(X \cup \{a_j\}), \quad \forall X \subseteq A \setminus \{a_i, a_j\}.$$
This concept can be generalized to subsets of more than two elements, i.e. let X
be a subset of A; then X is a set of indifference iff
$$|X_1| = |X_2| \Rightarrow \mu(X_1 \cup Y) = \mu(X_2 \cup Y), \quad \forall X_1, X_2 \subseteq X,\; \forall Y \subseteq A \setminus X. \qquad (3k)$$

3.4.1 Theorem: Let $X \subseteq A$. X is a set of indifference if and only if
$$|X_1| = |X_2| \Rightarrow \mu(X_1 \cup Y) = \mu(X_2 \cup Y), \quad \forall X_1, X_2 \subseteq X,\; \forall Y \subseteq A \setminus (X_1 \cup X_2).$$
Proof: For $Y \subseteq A \setminus X$, with the help of equation (3k) we have
$$\mu(X_1 \cup Y) = \mu(X_2 \cup Y).$$
Let us consider $Y \subseteq A \setminus (X_1 \cup X_2)$ but $Y \nsubseteq A \setminus X$. Then there exists
$Z \subseteq X \setminus (X_1 \cup X_2)$ such that $Y = Z \cup Y_1$, with $Y_1 \subseteq A \setminus X$. Thus from
equation (3k) we have
$$\mu(X_1 \cup Y) = \mu(X_1 \cup Z \cup Y_1) = \mu(X_2 \cup Z \cup Y_1) = \mu(X_2 \cup Y)$$
and therefore the result holds.

Similarly, if X is a set of indifference and $Y \subseteq X$, then Y is also itself a set of
indifference. A subset $X \subseteq A$ is called a null set with respect to $\mu$ if
$$\mu(X \cup Y) = \mu(Y), \quad \forall Y \subseteq A \setminus X.$$
3.4.2 Theorem: A null set is a set of indifference.
Proof: Let X be a null set; then
$$\mu(X \cup Y) = \mu(Y), \quad \forall Y \subseteq A \setminus X.$$
Let us now consider $X_1, X_2 \subseteq X$ such that $|X_1| = |X_2|$. For $Y \subseteq A \setminus X$,
$$\mu(Y) \le \mu(X_1 \cup Y) \le \mu(X \cup Y) = \mu(Y),$$
$$\mu(Y) \le \mu(X_2 \cup Y) \le \mu(X \cup Y) = \mu(Y).$$
Then
$$\mu(X_1 \cup Y) = \mu(Y) = \mu(X_2 \cup Y)$$
and hence X is a set of indifference.

Now, to define the m-symmetric fuzzy measure, we first define the 2-symmetric
fuzzy measure, since a symmetric measure is just a 1-symmetric measure. Let $\mu$
be a fuzzy measure. It is a 2-symmetric fuzzy measure if and only if there exists a
partition $\{X_1, X_2\}$ of the universal set, $X_1, X_2 \ne \emptyset$, such that both $X_1$ and $X_2$
are sets of indifference and A is not a set of indifference.
To define the m-symmetric fuzzy measure, we first define the lowest partition of
A. Let $X = \{X_1, X_2, \dots, X_m\}$ and $Y = \{Y_1, Y_2, \dots, Y_r\}$ be two partitions of a
universal set A. If for every $X_i$ there exists $Y_j$ such that $Y_j \subseteq X_i$,
$j = 1, 2, \dots, r$, $i = 1, 2, \dots, m$, then X is lower than Y.
Let $\mu$ be a fuzzy measure; $\mu$ is an m-symmetric fuzzy measure if and only if the
lowest partition of the universal set into sets of indifference is
$\{X_1, X_2, \dots, X_m\}$, $X_i \ne \emptyset$, $i = 1, 2, \dots, m$.
Let us consider the partition given by $X_1 = \{x_1, x_2, \dots, x_k\}$ and
$X_2 = \{x_{k+1}, x_{k+2}, \dots, x_n\}$, with $X_1$ and $X_2$ two sets of indifference. Then, in
order to define the fuzzy measure, we just need to know $\mu(x_1)$, $\mu(x_{k+1})$ for
singletons, $\mu(x_1, x_2)$, $\mu(x_{k+1}, x_{k+2})$, $\mu(x_1, x_{k+1})$ for sets of two elements,
and so on. In order to define an m-symmetric fuzzy measure, we need to know
which are the sets of indifference partitioning the universal set. For symmetric
fuzzy measures, we have only one set of indifference, A.

3.4.3 Theorem: If μ is an m-symmetric fuzzy measure associated to the partition X = {X_1, X_2, ..., X_m}, then for Y = (Y_1, Y_2, ..., Y_m) ⊆ A we have

m(Y_1, Y_2, ..., Y_m) = Σ_{i_1 ≤ Y_1, ..., i_m ≤ Y_m} (−1)^{(Y_1 + ... + Y_m) − (i_1 + ... + i_m)} C(Y_1, i_1) C(Y_2, i_2) ... C(Y_m, i_m) μ(i_1, ..., i_m),

where C(·,·) denotes the binomial coefficient.

Proof: Let Y = (Y_1, Y_2, ..., Y_m). The number of subsets of Y with i_1 elements of X_1, i_2 elements of X_2, ..., i_m elements of X_m is C(Y_1, i_1) C(Y_2, i_2) ... C(Y_m, i_m), and they all have the same measure. As

m(Y) = Σ_{X ⊆ Y} (−1)^{|Y| − |X|} μ(X),

then by the definition of the Möbius transform, i.e., if μ is a fuzzy measure on A then the Möbius transform of μ is defined by

m(X) = Σ_{Y ⊆ X} (−1)^{|X∖Y|} μ(Y),  ∀ X ⊆ A,

we obtain

m(Y_1, Y_2, ..., Y_m) = Σ_{i_1 ≤ Y_1, ..., i_m ≤ Y_m} (−1)^{(Y_1 + ... + Y_m) − (i_1 + ... + i_m)} C(Y_1, i_1) C(Y_2, i_2) ... C(Y_m, i_m) μ(i_1, ..., i_m),

hence the result.
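The Möbius transform used in the proof can be computed directly by summing over subsets. The sketch below is illustrative only (the measure is a small hypothetical symmetric example); it computes m(X) = Σ_{Y ⊆ X} (−1)^{|X∖Y|} μ(Y) for every subset of a three-element universe and shows that, as the theorem's formula predicts for a symmetric measure, the Möbius values depend only on cardinality.

from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def mobius_transform(mu, universe):
    # Mobius transform m(X) = sum over Y subset of X of (-1)^{|X\Y|} mu(Y).
    m = {}
    for X in subsets(universe):
        m[X] = sum((-1) ** (len(X) - len(Y)) * mu[Y] for Y in subsets(X))
    return m

# Hypothetical 1-symmetric measure mu(S) = (|S|/3)**2 on a 3-element universe.
universe = frozenset({1, 2, 3})
mu = {S: (len(S) / 3) ** 2 for S in subsets(universe)}
m = mobius_transform(mu, universe)
print({len(X): round(v, 4) for X, v in m.items()})  # value depends only on |X|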

3.5 Properties of fuzzy measures:

We have considered the learning of two-dimensional fuzzy measures for data modelling using the Choquet integral. Because there are no public repositories with files for learning aggregation operators or information-fusion models, we have used two examples taken from the machine learning repository [77], the iris and the abalone data files. These data files were already used in [102]. The iris data file consists of 150 examples, each one with four input variables and one output variable. The variable class was removed because it is a categorical one. The abalone data file consists of 4177 examples, each one with 8 input variables and 1 output variable. The variable Sex was removed because it is a categorical one.

For the iris data file the results obtained with our approach are not very relevant. Only a small improvement has been obtained (2.64903740 was the best distance achieved using a one-dimensional distorted probability and 2.64903728 is our best result with a two-dimensional one). From our point of view, this small improvement is mainly due to the fact that two of the four variables (x1, x4) result in probabilities equal to zero in the model based on a one-dimensional distorted probability, and the same occurs for the two-dimensional models.

For the abalone data file, the best result with a one-dimensional distorted probability had a distance of 37.4931. This is achieved when four of the variables (x2, x5, x6 and x7) have a probability equal to zero. The probabilities of the variables are given in Table 3A. We have considered all partitions for variables with non-null probability, and the best distance for each is given in Table 3B. In such partitions, the null variables have also been included in one of the sets. The results show that in all cases the best two-dimensional distorted probability also assigns probabilities equal to zero to such null variables.
Table 3A: Probability distribution for the variables in the abalone data file.

x1: 0.018   x2: 0.0   x3: 0.369   x4: 0.524   x5: 0.0   x6: 0.0   x7: 0.0   x8: 0.088

Table 3B: Partitions considered with the abalone data file and minimum distance achieved.

Partition                                          Minimum distance
{x3, x4, x8}, {x1, x2, x5, x6, x7}                 37.3569
{x4, x8}, {x1, x2, x3, x5, x6, x7}                 37.2745
{x3, x8}, {x1, x2, x4, x5, x6, x7}                 37.1682
{x3, x4}, {x1, x2, x5, x6, x7, x8}                 37.2751
{x8}, {x1, x2, x3, x4, x5, x6, x7}                 37.2866
{x4}, {x1, x2, x3, x5, x6, x7, x8}                 37.2634
{x3}, {x1, x2, x4, x5, x6, x7, x8}                 37.2810

Tables 3C and 3D give the probability distributions p1, p2 and the function F for the best solution obtained. This corresponds to the partition defined by the sets {x3, x8} and {x1, x2, x4, x5, x6, x7}; its best distance was 37.1682. Table 3C corresponds to the measure before the iterative process described in this section and Table 3D corresponds to the measure after the iterative process. The results show that the probability distributions p1 and p2 have changed, redistributing part of the probability. In particular, within {x1, x4} the former has lost importance, which has been acquired by the latter. Similarly, the function F has also been modified.

Table 3C: Initial probability distributions p1 and p2 for the partition defined by {x3, x8} and {x1, x2, x4, x5, x6, x7}.

p1:  p1(x1) = 0.033, p1(x2) = 0, p1(x4) = 0.966, p1(x5) = 0, p1(x6) = 0, p1(x7) = 0
p2:  p2(x3) = 0.806, p2(x8) = 0.193

Table 3D: Final probability distributions p1 and p2 for the partition defined by {x3, x8} and {x1, x2, x4, x5, x6, x7}.

p1:  p1(x1) = 0, p1(x2) = 0, p1(x4) = 1.0, p1(x5) = 0, p1(x6) = 0, p1(x7) = 0
p2:  p2(x3) = 0.981, p2(x8) = 0.018

Fuzzy measures and integrals [67] are important tools for comparing classical
or fuzzy sets with respect to their size. Usually fuzzy measures are set functions
defined on some algebra of sets that are monotone with respect to inclusion and
that assign zero to the empty set.
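Since the data-modelling experiments above rely on the Choquet integral with respect to a fuzzy measure (here a two-dimensional distorted probability), a minimal sketch of the discrete Choquet integral may help fix ideas. The measure and the input scores below are hypothetical and the code is not the implementation used in the experiments.

def choquet_integral(values, mu):
    # Discrete Choquet integral of values (dict: criterion -> non-negative score)
    # with respect to a fuzzy measure mu (dict: frozenset of criteria -> weight, mu[frozenset()] = 0).
    items = sorted(values.items(), key=lambda kv: kv[1])  # x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0
    remaining = frozenset(values)                         # A_(i) = {x_(i), ..., x_(n)}
    for crit, score in items:
        total += (score - prev) * mu[remaining]
        prev = score
        remaining = remaining - {crit}
    return total

# Hypothetical 2-criteria example in which x1 and x2 interact positively.
mu = {frozenset(): 0.0, frozenset({"x1"}): 0.3, frozenset({"x2"}): 0.4, frozenset({"x1", "x2"}): 1.0}
print(choquet_integral({"x1": 0.6, "x2": 0.2}, mu))  # 0.2*1.0 + 0.4*0.3 = 0.32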

Let μ be a null-additive fuzzy measure on a σ-ring I of subsets of A (the μ-measurable subsets of A), let X denote a metrizable space, and let d denote a metric on X compatible with the topology of X.

A function f : A → X is said to be measurable with respect to the fuzzy measure μ, or μ-measurable, if it satisfies the following properties:

(1) For every closed sphere C (or open sphere O) of X, f⁻¹(C) ∈ I (or f⁻¹(O) ∈ I).

(2) For every μ-integrable set T ∈ I, there exist a μ-negligible set N ⊆ T and a countable set H ⊆ X such that f(T∖N) is contained in the closure of H.

Let M[A] denote the collection of all μ-measurable functions defined on A with values in X. Let {f_n} ⊆ M[A], f : A → X, and let T ∈ I be a μ-integrable set. Then {f_n} converges to f almost uniformly (pseudo-almost uniformly) on T if there exists a sequence {T_m} of sets of T ∩ I such that

lim μ(T∖T_m) = 0   (lim μ(T_m) = μ(T))

and {f_n} converges to f uniformly on T_m (m = 1, 2, ...).

Properties of Strong Regularity of Fuzzy Measure on Metric Space

4.1 Null-additive fuzzy measure on metric space:

In mathematics a Borel set is any set in a metric space that can be formed from open sets or from closed sets through the operations of countable union, countable intersection and relative complement. Borel sets are named after Émile Borel [7]. For a metric space X, the collection of all Borel sets on X forms a σ-algebra known as the Borel algebra or Borel σ-algebra. The Borel algebra on X is the smallest σ-algebra containing all open sets, or all closed sets [82]. Borel sets are important in measure theory, since any measure defined on the open sets of a space, or on the closed sets of a space, must also be defined on all Borel sets of that space.

We assume that X is a metric space and that O is the class of all the open sets in X. The Borel σ-algebra B is the smallest σ-ring containing O, and unless stated otherwise all the subsets are supposed to belong to B. We shall denote by K the class of all the compact subsets of X and by C the class of all the closed sets in X.

A set function μ : B → [0, ∞] is said to be

(a) exhaustive if μ(A_n) → 0 for any infinite disjoint sequence {A_n} of B;

(b) order continuous (at ∅) if A_n ↓ ∅ implies lim_n μ(A_n) = 0;

(c) monotonously continuous if lim_n μ(A_n) = μ(A) whenever A_n ↑ A or A_n ↓ A;

(d) autocontinuous from above if lim_n μ(A_n) = 0 implies lim_n μ(A ∪ A_n) = μ(A) for every A;

(e) autocontinuous from below if lim_n μ(A_n) = 0 implies lim_n μ(A∖A_n) = μ(A) for every A;

(f) autocontinuous if it is both autocontinuous from above and from below;

(g) absolutely continuous (with respect to a set function ν) if for each ε > 0 there exists δ = δ(ε) > 0 such that μ(A) < ε whenever ν(A) < δ, where μ and ν are two set functions defined on B;

(h) uniformly autocontinuous if for every ε > 0 there exists δ = δ(ε) > 0 such that μ(A ∪ B) ≤ μ(A) + ε and μ(A∖B) ≥ μ(A) − ε whenever μ(B) ≤ δ; and

(i) null-additive if μ(A ∪ B) = μ(A) for any A whenever μ(B) = 0.

Here A, B, A_n ∈ B, n = 1, 2, ....

Let (X, B, μ) be a null-additive fuzzy measure space. A Borel set A with μ(A) > 0 is called an atom of μ if B ⊆ A implies either (1) μ(B) = 0 or (2) μ(A∖B) = 0. For atoms of fuzzy measures defined on a complete separable metric space we have the following: if a fuzzy measure μ is exhaustive and autocontinuous from above, then every atom of μ has the outstanding property that all the mass of the atom is concentrated on a single point of it [110]. This fact makes the calculation of the fuzzy integral over an atom, or over a finite union of disjoint atoms, easy. We now discuss some theorems concerning properties of fuzzy measures on a metric space. If a fuzzy measure μ is finite, then μ is exhaustive. If a fuzzy measure μ is exhaustive, then μ is order continuous. The converse is also true.
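On a finite space the set-function properties listed above can be checked by brute force. The sketch below is a toy illustration with a hypothetical set function (not taken from the thesis); it tests monotonicity and null-additivity, and on a finite algebra the sequential condition in autocontinuity from above reduces to the same null-additivity check, since μ(A_n) → 0 forces μ(A_n) = 0 eventually.

from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_monotone(mu, universe):
    return all(mu[A] <= mu[B] for A in subsets(universe) for B in subsets(universe) if A <= B)

def is_null_additive(mu, universe):
    # mu(A ∪ B) = mu(A) whenever mu(B) = 0.
    sets_ = subsets(universe)
    return all(mu[A | B] == mu[A] for A in sets_ for B in sets_ if mu[B] == 0)

# Hypothetical example on X = {1, 2, 3}: mu(S) = max(S)/3, mu(∅) = 0.
X = frozenset({1, 2, 3})
mu = {S: (max(S) / 3 if S else 0.0) for S in subsets(X)}
print(is_monotone(mu, X), is_null_additive(mu, X))  # True True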

4.1.1 Theorem: Let X be a separable metric space and μ a null-additive fuzzy measure on X. Then there exists a unique closed set A such that

(1) μ(A′) = 0, where the symbol ′ denotes the complement;

(2) if B is any closed set such that μ(B′) = 0, then A ⊆ B.

Moreover, A is the set of all points x ∈ X having the property that μ(G) > 0 for each open set G containing x.

Proof: Let H = {G : G is open, μ(G) = 0}. Since X is separable there are countably many open sets G_1, G_2, ... such that ∪_{n=1}^∞ G_n = ∪{G : G ∈ H}. Let us denote this union by G and put A = X∖G. By the null-additivity of μ we have μ(G) = μ(∪_{n=1}^∞ G_n) = 0, i.e., μ(A′) = 0.

Further, if B is any closed set with μ(B′) = 0, then X∖B ∈ H, hence X∖B ⊆ G, namely A ⊆ B. The uniqueness of A is obvious. This proves both assertions. Now, for any x ∉ A there exists an open set G containing x such that μ(G) = 0, and μ(G) must be positive if x ∈ A and G is an open set containing x; this ends the proof. The closed set A is called the spectrum or support of μ.

Let X be any metric space and μ a null-additive fuzzy measure on X such that μ(X∖B) = 0 for some separable Borel set B ⊆ X. Then μ has a spectrum A which is separable, and A is contained in the closure of B. We shall investigate a smaller class of fuzzy measures on metric spaces, namely tight fuzzy measures. Tight fuzzy measures have the property that they are determined by their values taken on compact sets [81]. A fuzzy measure μ is said to be tight if for each set A ∈ B and each ε > 0 there is a compact set K ⊆ A such that μ(A∖K) < ε.

4.1.2 Theorem: Let μ be a uniformly autocontinuous fuzzy measure on X. Then μ is a tight fuzzy measure: it has separable support, and for any Borel set A and any ε > 0 there is a compact set K ⊆ A such that μ(A∖K) < ε.

Proof: Let K_n be compact sets such that μ(X∖K_n) < 1/n, n = 1, 2, ... (such sets exist by the first step below). A compact set in a metric space is separable, and hence ∪_{n=1}^∞ K_n is separable. If A_0 = X∖∪_{n=1}^∞ K_n, then μ(A_0) = 0 by the monotonicity of μ, so the support of μ is separable.

Firstly we prove that for every ε > 0 there exists a compact set K such that μ(X∖K) < ε. In fact, since X is separable, there exists a denumerable dense sequence of points {x_n} in X. Let C_a(x_n) be the closed sphere with radius 1/a and centre x_n, where a is an arbitrary positive integer; then for every a we have ∪_{n=1}^∞ C_a(x_n) = X, i.e., X∖∪_{n=1}^b C_a(x_n) ↓ ∅ as b → ∞. As μ is a fuzzy measure, we have μ(X∖∪_{n=1}^b C_a(x_n)) → 0. Hence there exists a positive integer b_a such that

μ(X∖∪_{n=1}^{b_a} C_a(x_n)) < ε/2^{a+1}.   (4a)

Take C_a = ∪_{n=1}^{b_a} C_a(x_n). Then, by the autocontinuity of μ, there exists a subsequence {C_{a_i}} of {C_a} such that

μ(∩_{i=1}^∞ ∪_{j=i}^∞ (X∖C_{a_j})) = 0.   (4b)

Hence from equations (4a) and (4b) we have μ(X∖∩_{i=1}^∞ C_{a_i}) < ε. Let K = ∩_{i=1}^∞ C_{a_i}. Since each C_a is a finite union of closed spheres of radius 1/a, K is a totally bounded closed set; hence K is a compact set ([56], page 29), as desired.

Secondly, we prove the inequality μ(A∖K̃) < ε for a suitable compact K̃ ⊆ A. By the uniform autocontinuity of μ, for every ε > 0 there exists δ > 0 such that μ(E ∪ F) ≤ μ(E) + ε/2 whenever E, F ∈ B, μ(F) < δ. For arbitrary A ∈ B there exists some closed set C ⊆ A such that μ(A∖C) < δ. Let K be a compact set with μ(X∖K) < ε/2 and put K̃ = K ∩ C; then K̃ is a compact set and K̃ ⊆ A. Thus we have

μ(A∖K̃) ≤ μ((X∖K) ∪ (A∖C)) ≤ μ(X∖K) + ε/2 < ε/2 + ε/2 = ε.

This completes the proof.

4.1.3 Theorem: Let X be a separable metric space with the property that there exists a complete separable metric space Y such that X is contained in Y as a topological subset and X is a Borel subset of Y. Then every uniformly autocontinuous fuzzy measure μ on X is tight. In particular, if X itself is a complete separable metric space, every uniformly autocontinuous fuzzy measure on X is tight.

Proof: Let X ⊆ Y, where Y is a complete separable metric space and X is a Borel set in Y. Given a fuzzy measure μ on B_X, we define a measure on the class B_Y by setting ν(A) = μ(A ∩ X), A ∈ B_Y, so that ν(Y∖X) = 0. We claim that it is enough to prove that ν is a tight measure on Y. Indeed, since X is a Borel set in Y, there will exist for each ε > 0 a set K ⊆ X, compact in Y, such that ν(X∖K) < ε, by Theorem 4.1.2. K is also compact in X since X is a topological subset of Y. Further, μ(X∖K) = ν(X∖K) < ε. This implies that μ is tight. Thus we may assume that X is itself a complete separable metric space, and the rest is proved as in the above theorem.
Let a fuzzy measure μ be exhaustive and autocontinuous from above on a complete separable metric space; then μ is tight. Moreover, if μ is a null-additive fuzzy measure, then

μ(A) = sup{μ(K) : K ⊆ A, K ∈ K}  for each A ∈ B.

Let X = (0, 1) and B be the Borel algebra on X. Define

μ(A) = tan(π m(A)/2),

where A ∈ B and m(A) denotes the Lebesgue measure of A (for instance, μ((0, 1/2)) = tan(π/4) = 1). Then μ is σ-finite, exhaustive and autocontinuous from above, but μ(X) = ∞.

4.1.4 Definition: A fuzzy measure space (X, B, μ) is said to be perfect if for any B-measurable real valued function f and any set A on the real line such that f⁻¹(A) ∈ B, there are Borel sets A_1 and A_2 on the real line such that A_1 ⊆ A ⊆ A_2 and μ(f⁻¹(A_2∖A_1)) = 0 [106].

4.1.5 Theorem: Let X be any metric space and μ a uniformly autocontinuous tight fuzzy measure on X. Then (X, B, μ) is a perfect fuzzy measure space.

Proof: Let f be any real valued measurable function. It is sufficient to prove that for any A ⊆ R, where R is the real line, such that f⁻¹(A) ∈ B, there exists a Borel set A_1 ⊆ A with μ(f⁻¹(A∖A_1)) = 0; the Borel set A_2 with A ⊆ A_2 and μ(f⁻¹(A_2∖A)) = 0 is obtained similarly.

Suppose that A is a set such that B = f⁻¹(A) ∈ B. Let {C_n} and {K_n}, n = 1, 2, ..., be two sequences of sets such that

(1) K_1 ⊆ K_2 ⊆ ..., each K_n is compact, f|K_n (f restricted to K_n) is continuous, and μ(X∖K_n) → 0;

(2) C_1 ⊆ C_2 ⊆ ... ⊆ B, each C_n is closed, and μ(B∖C_n) → 0.

If we write Q_n = K_n ∩ C_n, then Q_1 ⊆ Q_2 ⊆ ... ⊆ B, each Q_n is compact, f|Q_n is continuous, and μ(B∖Q_n) → 0 as n → ∞. If B_n = f(Q_n), then B_n is a compact subset of the real line since f|Q_n is continuous, and hence A_1 = ∪_{n=1}^∞ B_n is a Borel set. Since ∪_{n=1}^∞ f(Q_n) = A_1, it follows that f⁻¹(A_1) ⊇ ∪_{n=1}^∞ Q_n. Clearly A_1 ⊆ A and f⁻¹(A_1) ⊆ f⁻¹(A) = B. By the continuity of μ we obtain

μ(B∖∪_{n=1}^∞ Q_n) = lim_n μ(B∖Q_n) = 0,

so μ(B∖f⁻¹(A_1)) = 0. This completes the proof.

4.2 Strong regularity of fuzzy measure on metric space:

Every probability measure P on a metric space is regular; that is, for every Borel set A and ε > 0, there exist a closed set S and an open set T such that S ⊆ A ⊆ T and P(T∖S) < ε [56]. We discuss the regularity of a null-additive fuzzy measure and prove Egoroff's theorem and Lusin's theorem for fuzzy measures on a metric space. Egoroff's theorem and Lusin's theorem in classical measure theory are important and useful for the discussion of convergence and continuity of measurable functions. Egoroff's theorem for a fuzzy measure space was proposed in [110], [106], but there the finiteness of fuzzy measures was assumed. Egoroff's theorem and Lusin's theorem hold for those fuzzy measures that are defined on metric spaces and are supposed to be exhaustive and autocontinuous from above. It will be proved that exhaustivity and autocontinuity are sufficient for a fuzzy measure to have the regularity and tightness that are enjoyed by classical measures. We define the strong regularity of fuzzy measures and show our main result: null-additive fuzzy measures possess strong regularity on complete separable metric spaces. By using strong regularity we shall show a version of Egoroff's theorem and Lusin's theorem for null-additive fuzzy measures on complete separable metric spaces.

4.2.1 Definition: Let a fuzzy measure μ be exhaustive and autocontinuous from above on B. If ν is a set function with ν ≪ μ, then ν is also regular. Furthermore, if μ is a monotonously continuous and null-additive fuzzy measure, then

μ(A) = sup{μ(S) : S ⊆ A, S ∈ C} = inf{μ(T) : T ⊇ A, T ∈ O}

for each A ∈ B. Let fuzzy measures μ and ν be exhaustive and autocontinuous from above on B. If either μ(S) = ν(S) for all S ∈ C or μ(T) = ν(T) for all T ∈ O, then μ = ν on B.

4.2.2 Definition: A fuzzy measure μ is called strongly regular if for each A ∈ B and each ε > 0 there exist a compact set K ∈ K and an open set T ∈ O such that K ⊆ A ⊆ T and μ(T∖K) < ε.

Let μ be null-additive and order continuous. If for any A ∈ B

μ(A) = sup{μ(K) : K ⊆ A, K ∈ K},

then μ is strongly regular.

4.2.3 Theorem: If a fuzzy measure μ is exhaustive and autocontinuous from above on the Borel σ-algebra B, then μ is regular, and for any A ∈ B,

μ(A) = sup{μ(S) : S ⊆ A, S ∈ C} = inf{μ(T) : T ⊇ A, T ∈ O}.

Proof: Let R be the class of all the sets A ∈ B such that for any ε > 0 there are S ∈ C and T ∈ O satisfying S ⊆ A ⊆ T and μ(T∖S) ≤ ε. To prove R = B it is sufficient to show that C ⊆ R. Denote the distance from x to G by d(x, G). If G ∈ C, then the open sets T_n = {x : d(x, G) < 1/n} decrease to G, i.e., T_n ↓ G, and since μ is exhaustive, lim μ(T_n∖G) = 0. Thus C ⊆ R.

Now let {A_n} ⊆ R and ε > 0 be given. There exists δ > 0 such that μ(A) ≤ μ(B) + ε whenever μ(A∖B) ≤ δ. Hence there exist a sequence of closed sets {S_n} and a sequence of open sets {T_n} such that

S_n ⊆ A_n ⊆ T_n, n = 1, 2, ...,  μ(∪_{n=1}^∞ (T_n∖S_n)) ≤ δ.

On the other hand,

∪_{n=1}^∞ S_n ∖ ∪_{n=1}^m S_n ↓ ∅  (as m → ∞)

implies the existence of n_0 ≥ 1 such that μ(∪_{n=1}^∞ S_n ∖ ∪_{n=1}^{n_0} S_n) ≤ δ. Put T = ∪_{n=1}^∞ T_n and S = ∪_{n=1}^{n_0} S_n; then

μ(T∖S) ≤ μ((∪_{n=1}^∞ (T_n∖S_n)) ∪ (∪_{n=1}^∞ S_n ∖ S)) ≤ ε.

Thus R is closed under the formation of countable unions. It is obvious that R is closed under complementation, so we get R = B.

Now let A ∈ B; we prove μ(A) = inf{μ(T) : T ⊇ A, T ∈ O}. For each n ≥ 1 there exists T_n ∈ O such that A ⊆ T_n and μ(T_n∖A) < 1/n. Thus we have μ(∩_{n=1}^∞ T_n ∖ A) = 0, and hence

μ(A) = μ(∩_{n=1}^∞ T_n) = lim_n μ(∩_{i=1}^n T_i);

since ∩_{i=1}^n T_i is also open and contains A, the equation to be proved is obtained. The other equation is proved similarly.

4.2.4 Theorem: Let μ be a finite continuous fuzzy measure. Then for any ε > 0 and any double sequence {B_{n,m} : n ≥ 1, m ≥ 1} ⊆ B satisfying B_{n,m} ↓ ∅ (m → ∞), n = 1, 2, ..., there exists a subsequence {B_{n,m_n}} of {B_{n,m} : n ≥ 1, m ≥ 1} such that

μ(∪_{n=1}^∞ B_{n,m_n}) < ε   (m_1 < m_2 < ...).

Proof: Since for any n we have B_{n,m} ↓ ∅ (m → ∞), for the given ε > 0, using the continuity from above of fuzzy measures, we have lim_m μ(B_{1,m}) = 0. Therefore there exists m_1 such that μ(B_{1,m_1}) < ε/2. For this m_1, B_{1,m_1} ∪ B_{2,m} ↓ B_{1,m_1} as m → ∞. Therefore it follows from the continuity from above of μ that

lim_m μ(B_{1,m_1} ∪ B_{2,m}) = μ(B_{1,m_1}).

Thus there exists m_2 > m_1 such that μ(B_{1,m_1} ∪ B_{2,m_2}) < ε/2 + ε/2². Generally, there exist m_1 < m_2 < ... < m_i such that

μ(B_{1,m_1} ∪ B_{2,m_2} ∪ ... ∪ B_{i,m_i}) < ε/2 + ε/2² + ... + ε/2^i.

Hence we obtain a sequence {m_n}, n = 1, 2, ..., of numbers and a sequence {B_{n,m_n}} of sets. By using the monotonicity and the continuity from below of μ, we have

μ(∪_{n=1}^∞ B_{n,m_n}) < ε.

This gives the proof of the theorem.


4.2.5 Theorem: If μ is null-additive, then μ is strongly regular.

Proof: Let A ∈ B and ε > 0 be given. From Theorem 4.2.3 we know that μ is regular. Therefore there exist a sequence {S_n}, n = 1, 2, ..., of closed sets and a sequence {T_n} of open sets such that for every n = 1, 2, ...,

S_n ⊆ A ⊆ T_n,  μ(T_n∖S_n) < 1/n.

Without loss of generality we can assume that the sequence {S_n} is increasing in n and the sequence {T_n} is decreasing in n. Thus {T_n∖S_n} is a decreasing sequence of sets with respect to n and, as n → ∞,

T_n∖S_n ↓ ∩_{n=1}^∞ (T_n∖S_n).

Let F_1 = ∩_{n=1}^∞ (T_n∖S_n). Noting that μ(F_1) ≤ μ(T_n∖S_n) < 1/n, n = 1, 2, ..., we have μ(F_1) = 0.

On the other hand, from Theorem 4.1.2 there exists a sequence {K_n} of compact subsets of X such that for every n = 1, 2, ..., μ(X∖K_n) < 1/n, and we can assume that {X∖K_n} is decreasing in n. Therefore, as n → ∞,

X∖K_n ↓ ∩_{n=1}^∞ (X∖K_n).

Let F_2 = ∩_{n=1}^∞ (X∖K_n); then μ(F_2) = 0.

Thus we have (X∖K_n) ∪ (T_n∖S_n) ↓ F_1 ∪ F_2 as n → ∞. Noting that μ(F_1 ∪ F_2) = 0 by the null-additivity of μ, then by the continuity of μ,

lim μ((X∖K_n) ∪ (T_n∖S_n)) = 0.

Therefore there exists n_0 such that μ((X∖K_{n_0}) ∪ (T_{n_0}∖S_{n_0})) < ε. Let K = K_{n_0} ∩ S_{n_0} and T = T_{n_0}; then K is a compact set, T is an open set, and K ⊆ A ⊆ T. Since μ(T∖K) ≤ μ((X∖K_{n_0}) ∪ (T_{n_0}∖S_{n_0})) < ε, this shows that μ is strongly regular.

Let f and f_n be real-valued B-measurable functions on X. It is well known that if f_n → f everywhere on a finite measure space (X, B, m), then for any ε > 0 there exists a subset E such that m(X∖E) < ε and {f_n} converges to f uniformly on E. This Egoroff theorem was extended to a finite fuzzy measure space [106], [110]. The following is a further generalization of the theorem to a fuzzy measure space which is not necessarily finite.
4.2.6 Theorem: Let a fuzzy measure μ be exhaustive and autocontinuous from above on B. If f_n → f everywhere on X, then for any ε > 0 there exists a closed subset S such that μ(X∖S) < ε and {f_n} converges to f uniformly on S.

Proof: Since μ is autocontinuous from above, for any ε > 0 there exists δ > 0 such that μ(A) ≤ μ(B) + ε/2 whenever μ(A∖B) ≤ δ. Put

E_{n,k} = ∩_{i=n}^∞ {x : |f_i(x) − f(x)| < 1/k},  k = 1, 2, ...;

then E_{n,k} is increasing in n for each fixed k. The set of all those x for which f_n(x) → f(x) is ∩_{k=1}^∞ ∪_{n=1}^∞ E_{n,k}. Since f_n → f everywhere, we have E_{n,k} ↑ X as n → ∞ for any k ≥ 1, or equivalently μ(X∖E_{n,k}) → 0 as n → ∞ for any k ≥ 1. For the δ given above we may take a subsequence {E_{n_k,k}} of {E_{n,k} : n ≥ 1, k ≥ 1} such that

μ(∪_{k=1}^∞ (X∖E_{n_k,k})) ≤ δ.

Put S_0 = ∩_{k=1}^∞ E_{n_k,k}; then μ(X∖S_0) ≤ δ, so μ(X∖S_0) ≤ ε/2, and {f_n} converges to f uniformly on S_0. Now take any closed subset S of S_0 such that μ(S_0∖S) ≤ δ. Then, since (X∖S)∖(X∖S_0) = S_0∖S, we have μ(X∖S) ≤ μ(X∖S_0) + ε/2 < ε, and {f_n} converges to f uniformly on S.

Similarly we can prove the following theorem. Let the metric space X be complete and separable, and let a fuzzy measure μ be exhaustive and autocontinuous from above. If f_n → f everywhere on X, then for any ε > 0 there exists a compact subset K such that μ(X∖K) < ε and {f_n} converges to f uniformly on K.

Lusin's theorem is also important in real analysis. Lusin's theorem has been generalized from a classical measure space to a finite autocontinuous fuzzy measure space [85], [11]. We extend the result of [85] to a σ-finite fuzzy measure space.
4.2.7 Theorem: Let a fuzzy measure μ be σ-finite, exhaustive and autocontinuous from above on B. If f is a real measurable function on X, then for each ε > 0 there exists a closed subset S ∈ C such that f is continuous on S and μ(X∖S) ≤ ε.

Proof: If a fuzzy measure μ is autocontinuous from above, then for any ε > 0 there exists δ > 0 such that μ(A) ≤ μ(B) + ε whenever μ(A∖B) ≤ δ. To prove this theorem we use three steps in different situations, where ε > 0 is fixed.

(1) Suppose that f is a simple function, i.e.,

f(x) = Σ_{i=1}^n a_i χ_{A_i}(x),

where a_i ≠ a_j, A_i ∩ A_j = ∅ (i ≠ j), A_i ∈ B and X = ∪_{i=1}^n A_i. By the regularity of μ, for any ε > 0 there exist closed subsets S_i ⊆ A_i (1 ≤ i ≤ n) such that

μ(∪_{i=1}^n (A_i∖S_i)) ≤ ε.

Put S = ∪_{i=1}^n S_i; then S is a closed subset of X and

μ(X∖S) ≤ μ(∪_{i=1}^n (A_i∖S_i)) ≤ ε.

Since the distance between two disjoint closed sets is greater than zero, it is obvious that f is continuous on S.

(2) Let f be a non-negative measurable function. Then f is the limit of an increasing sequence {f_n} of simple functions. Let ε_1 > 0 and ε_2 > 0 satisfy

μ(A) ≤ μ(B) + ε_{i−1} whenever μ(A∖B) ≤ ε_i,  i ≥ 1,

where ε_0 = ε. By the result of case (1) there exists a sequence {S_n} in C such that

μ(∪_{n=1}^∞ (X∖S_n)) ≤ ε_2

and f_n is continuous on S_n. If we put S_0 = ∩_{n=1}^∞ S_n, then μ(X∖S_0) ≤ ε_2 and each f_n is continuous on S_0. On the other hand, since μ is σ-finite, there exists a sequence {X_m} with μ(X_m) < ∞ and X_m ↑ X, m = 1, 2, .... Then there exists m_0 such that μ(X∖X_{m_0}) ≤ ε_2, and thus μ(X∖(S_0 ∩ X_{m_0})) ≤ ε_1. We have a subset F of S_0 ∩ X_{m_0} with μ((S_0 ∩ X_{m_0})∖F) ≤ ε_2 such that {f_n} converges to f uniformly on F [110]. By the regularity of μ there is a closed subset S of F such that μ(F∖S) ≤ ε_2, and then μ((S_0 ∩ X_{m_0})∖S) ≤ ε_1. Thus f is continuous on S, and we have

μ(X∖S) ≤ μ((X∖(S_0 ∩ X_{m_0})) ∪ ((S_0 ∩ X_{m_0})∖S)) ≤ ε.

(3) Let f be an arbitrary measurable function on X. If we put

f⁺ = (|f| + f)/2,  f⁻ = (|f| − f)/2,

then f⁺ and f⁻ are non-negative measurable functions and f = f⁺ − f⁻. Applying the result of case (2) to f⁺ and f⁻, we have two closed subsets S⁺ and S⁻ of X such that f⁺ and f⁻ are continuous respectively on S⁺ and S⁻, and μ(X∖S⁺) ≤ ε, μ(X∖S⁻) ≤ ε. Thus f is continuous on the closed set S = S⁺ ∩ S⁻ and

μ(X∖S) ≤ μ((X∖S⁺) ∪ (X∖S⁻)).

Thus the theorem is proved.

4.3 Properties of inner/outer regularity of fuzzy measure:

We assume that X is a metric space, B is the Borel σ-algebra, O is the class of all open sets belonging to B, and C is the class of all closed sets belonging to B.

4.3.1 Definition: A fuzzy measure μ is called outer regular if for each A ∈ B and each ε > 0 there exists an open set T ∈ O such that A ⊆ T and μ(T∖A) < ε.

A fuzzy measure μ is called inner regular if for each A ∈ B and each ε > 0 there exists a closed set S ∈ C such that S ⊆ A and μ(A∖S) < ε.

A fuzzy measure μ is called regular if for each A ∈ B and each ε > 0 there exist a closed set S ∈ C and an open set T ∈ O such that S ⊆ A ⊆ T and μ(T∖S) < ε.

In the following we present some properties of the inner regularity and outer regularity of fuzzy measures.

4.3.2 Theorem: If μ is an autocontinuous fuzzy measure, then

(1) A finite union of inner regular sets is inner regular.

(2) A finite intersection of outer regular sets is outer regular.

(3) A finite union of outer regular sets is outer regular.

Proof: (1) Let {A_1, A_2, ..., A_n} be a finite class of inner regular sets. Then for any ε > 0 there exists δ > 0 such that, for every A_i (i = 1, 2, ..., n), there exists a set S_i in C with S_i ⊆ A_i and μ(A_i∖S_i) < δ. Let S = ∪_{i=1}^n S_i and A = ∪_{i=1}^n A_i; it is clear that S ⊆ A and S ∈ C. Since

A∖S = ∪_{i=1}^n A_i ∖ ∪_{j=1}^n S_j ⊆ ∪_{i=1}^n (A_i∖S_i),

then, since μ is autocontinuous, we have

μ(A∖S) ≤ μ(∪_{i=1}^n (A_i∖S_i)) < ε,

i.e., A = ∪_{i=1}^n A_i is inner regular.

(2) Let {B_1, B_2, ..., B_n} be a finite class of outer regular sets. Then for any ε > 0 there exists δ > 0 such that, for every B_i (i = 1, 2, ..., n), there exists a set T_i in O with B_i ⊆ T_i and μ(T_i∖B_i) < δ. Let T = ∩_{i=1}^n T_i and B = ∩_{i=1}^n B_i; obviously B ⊆ T and T ∈ O. Since

T∖B = ∩_{i=1}^n T_i ∖ ∩_{i=1}^n B_i ⊆ ∪_{i=1}^n (T_i∖B_i),

then, since μ is autocontinuous, we have

μ(T∖B) ≤ μ(∪_{i=1}^n (T_i∖B_i)) < ε.

Therefore B = ∩_{i=1}^n B_i is outer regular.

(3) Let {B_1, B_2, ..., B_n} be a finite class of outer regular sets. Then for any ε > 0 there exists δ > 0 such that, for every B_i (i = 1, 2, ..., n), there exists an open set T_i in O with B_i ⊆ T_i and μ(T_i∖B_i) < δ. Let T = ∪_{i=1}^n T_i and B = ∪_{i=1}^n B_i; obviously B ⊆ T and T ∈ O. Since

T∖B = ∪_{i=1}^n T_i ∖ ∪_{j=1}^n B_j ⊆ ∪_{i=1}^n (T_i∖B_i),

then, since μ is autocontinuous, we have

μ(T∖B) ≤ μ(∪_{i=1}^n (T_i∖B_i)) < ε.

Therefore B = ∪_{i=1}^n B_i is outer regular.

4.3.3 Theorem: If μ is a uniformly autocontinuous fuzzy measure, then

(1) The union of a sequence of outer regular sets is outer regular.

(2) The intersection of a sequence of inner regular sets is inner regular.

Proof: (1) Let {B_i}, i = 1, 2, ..., be a sequence of outer regular sets. Then for any ε > 0 there exists δ > 0 such that, for every B_i (i = 1, 2, ...), there exists an open set T_i in O with B_i ⊆ T_i and μ(T_i∖B_i) < δ. Let T = ∪_{i=1}^∞ T_i and B = ∪_{i=1}^∞ B_i; obviously B ⊆ T and T ∈ O. Then

T∖B = ∪_{i=1}^∞ T_i ∖ ∪_{j=1}^∞ B_j ⊆ ∪_{i=1}^∞ (T_i∖B_i).

Since μ is a uniformly autocontinuous fuzzy measure, we have

μ(T∖B) ≤ μ(∪_{i=1}^∞ (T_i∖B_i)) < ε.

Therefore B = ∪_{i=1}^∞ B_i is outer regular.

(2) Let {A_i}, i = 1, 2, ..., be a sequence of inner regular sets. Then for any ε > 0 there exists δ > 0 such that, for every A_i (i = 1, 2, ...), there exists a closed set S_i in C with S_i ⊆ A_i and μ(A_i∖S_i) < δ. Let S = ∩_{i=1}^∞ S_i and A = ∩_{i=1}^∞ A_i; it is clear that S ⊆ A and S ∈ C. Then

A∖S = ∩_{i=1}^∞ A_i ∖ ∩_{j=1}^∞ S_j ⊆ ∪_{i=1}^∞ (A_i∖S_i).

Since μ is a uniformly autocontinuous fuzzy measure, we have

μ(A∖S) ≤ μ(∪_{i=1}^∞ (A_i∖S_i)) < ε.

Therefore A = ∩_{i=1}^∞ A_i is inner regular.

4.3.4 Theorem: If μ is an autocontinuous fuzzy measure, then a necessary and sufficient condition that every set in C be outer regular is that every bounded set in O be inner regular.

Proof: Necessary condition: Let us suppose that every set in C is outer regular, and let T be a bounded set in O and ε > 0. Let S be a set in C such that T ⊆ S. Since S∖T is closed and S∖T ∈ B, we have S∖T ∈ C. Thus S∖T is outer regular. Therefore there is a set H in O such that S∖T ⊆ H and μ(H∖(S∖T)) < ε. Since T = S∖(S∖T) ⊇ S∖H and S∖H ∈ C, then

μ(T∖(S∖H)) = μ(T ∩ H) ≤ μ(H∖(S∖T)) < ε,

hence T is inner regular.

Sufficient condition: Let every bounded set in O be inner regular. Let S be a set in C, ε a positive number, and T a bounded set in O with S ⊆ T. Since T∖S is a bounded set in O, there is a set G ∈ C such that G ⊆ T∖S and μ((T∖S)∖G) < ε. Since S = T∖(T∖S) ⊆ T∖G ∈ O, we thus have

μ((T∖G)∖S) = μ((T∖S)∖G) < ε.

Hence S is outer regular.

The following further properties of the inner/outer regularity of fuzzy measures can be proved easily with the help of the above theorems:

(a) If μ is weakly null-additive and strongly order continuous, then both outer and inner regularity imply regularity.

(b) If μ is a null-additive fuzzy measure, then:

(i) If μ is continuous from below, then inner regularity implies μ(A) = sup{μ(S) : S ⊆ A, S ∈ C} for all A ∈ B.

(ii) If μ is continuous from above, then outer regularity implies μ(A) = inf{μ(T) : A ⊆ T, T ∈ O} for all A ∈ B.

(c) If μ is a converse null-additive fuzzy measure, then:

(i) If μ is continuous from below and strongly order continuous, and for any A ∈ B, μ(A) = sup{μ(S) : S ⊆ A, S ∈ C}, then μ is inner regular.

(ii) If μ is continuous from above and for any A ∈ B, μ(A) = inf{μ(T) : A ⊆ T, T ∈ O}, then μ is outer regular.

Continuous, auto-continuous and completeness of fuzzy measure

5.1 Continuous and auto-continuous of fuzzy measure:

The concepts of two structural characteristics, the pseudo-metric generating property and autocontinuity, play important roles in fuzzy measure theory [83], [106]. We show that any uniformly autocontinuous finite fuzzy measure is equivalent to a subadditive finite fuzzy measure, in the sense of absolute continuity with respect to each other. We also discuss σ-finite and purely atomic fuzzy measures; in this case the uniform autocontinuity can be replaced with null-additivity.

Let X be a non-empty set and R a ring of subsets of X. Unless stated otherwise, all subsets are supposed to belong to R, and all the considered set functions are assumed to be monotonic and equal to zero on the empty set.
5.1.1 Definition: A fuzzy measure μ : R → [0, ∞] is said to be [106]:

(a) exhaustive if lim μ(A_n) = 0 for any infinite disjoint sequence {A_n} of R;

(b) uniformly exhaustive if for any ε > 0 there exists n (n = 1, 2, ...) such that, for any disjoint sets A_1, A_2, ..., A_n, min_{1≤i≤n} μ(A_i) ≤ ε;

(c) monotonically continuous if, for any monotonic sequence {A_n}, lim μ(A_n) = μ(lim A_n);

(d) null-additive if μ(A ∪ B) = μ(A) for any A whenever μ(B) = 0;

(e) autocontinuous from above if lim μ(A_n) = 0 implies lim μ(A ∪ A_n) = μ(A) for every A;

(f) autocontinuous from below if lim μ(A_n) = 0 implies lim μ(A∖A_n) = μ(A) for every A;

(g) autocontinuous if it is both autocontinuous from above and from below;

(h) uniformly autocontinuous if for every ε > 0 there exists δ = δ(ε) > 0 such that μ(A ∪ B) ≤ μ(A) + ε and μ(A∖B) ≥ μ(A) − ε whenever μ(B) ≤ δ;

(i) pseudo-metric generating if for any ε > 0 there exists δ > 0 such that μ(A) ∨ μ(B) < δ implies μ(A ∪ B) < ε;

(j) absolutely continuous with respect to ν if ν(A) = 0 implies μ(A) = 0; the absolute continuity of μ with respect to ν is usually denoted by μ ≪ ν;

(k) weakly absolutely continuous with respect to ν if ν(A) = 0 whenever μ(A) = 0;

(l) strongly absolutely continuous with respect to ν if for any ε > 0 there exists δ > 0 such that μ(A) < ε whenever ν(A) < δ;

(m) subadditive if μ(A ∪ B) ≤ μ(A) + μ(B) for all A, B; and

(n) countably subadditive if, for any sequence {A_n},

μ(∪_{n=1}^∞ A_n) ≤ Σ_{n=1}^∞ μ(A_n).

Here A, B, A_n ∈ R, n = 1, 2, ....

5.1.2 Definition: We say that μ has property (S) if lim μ(A_n) = 0 implies that there exists a subsequence {A_{n_i}} of {A_n}, n ≥ 1, i ≥ 1, such that μ(lim sup_i A_{n_i}) = 0.

5.1.3 Definition: We say that μ has the pseudo-metric generating property if and only if [83]

lim [μ(A_n) ∨ μ(B_n)] = 0

implies that there exist subsequences {A_{n_i}} and {B_{n_i}} such that

lim μ(A_{n_i} ∪ B_{n_i}) = 0.

If μ is uniformly autocontinuous then it has the pseudo-metric generating property.

A fuzzy measure μ is said to be finite if μ(A) < ∞ for any A, and σ-finite if, for any A, there exists an increasing sequence of subsets {A_n}, n ≥ 1, such that μ(A_n) < ∞ for all n ≥ 1 and A = ∪_{n=1}^∞ A_n.

The autocontinuity from above is equivalent to the autocontinuity from below for finite fuzzy measures [109]. In general, this equivalence is not valid even if a fuzzy measure is σ-finite, exhaustive, null-additive, autocontinuous from below and has the pseudo-metric generating property. If μ is both weakly absolutely continuous and strongly absolutely continuous with respect to ν, then μ and ν are equivalent.

Now we state some results based on the pseudo-metric generating property, absolute continuity, null-additivity and exhaustivity. These results are also useful for proving some important theorems [81].

(1) Let ν have the pseudo-metric generating property, and let μ be null-additive and exhaustive. If μ(A) = 0 if and only if ν(A) = 0 for any A, then μ and ν are equivalent.

(2) Let μ have the pseudo-metric generating property and be exhaustive. Then μ(A ∪ B) = μ(A) whenever μ(B) = 0, for any A, B, if and only if μ is autocontinuous from above.

(3) Let μ be null-additive and exhaustive. Then there exists H such that μ(A∖H) = 0 and μ(A) = μ(A ∩ H) for any A.

(4) If the set function μ is exhaustive and subadditive, then it is monotonically continuous and countably subadditive, and further it is a uniformly autocontinuous fuzzy measure.

Now we investigate the autocontinuity of fuzzy measures by using the pseudo-metric generating property and the absolute continuity of set functions, as shown in [109].
5.1.4 Theorem: If μ is autocontinuous from above, then it has the pseudo-metric generating property.

Proof: Let μ be autocontinuous from above and lim [μ(A_n) ∨ μ(B_n)] = 0. Then there exists a subsequence {A_{n_i}} of {A_n}, n ≥ 1, i ≥ 1, such that

μ(∪_{i=n+1}^{n+k} A_{n_i}) ≤ 1/n,  n, k ≥ 1.

For any ε > 0 we can find n_0 ≥ 1 such that μ(∪_{i=n_0+1}^∞ A_{n_i}) ≤ ε. Since lim μ(B_{n_i}) = 0, it follows, by the autocontinuity of μ from above, that

lim_i μ(A_{n_i} ∪ B_{n_i}) ≤ lim_i μ((∪_{j=n_0+1}^∞ A_{n_j}) ∪ B_{n_i}) = μ(∪_{j=n_0+1}^∞ A_{n_j}) ≤ ε.

This shows that there exist subsequences {A_{n_i}} and {B_{n_i}} such that

lim μ(A_{n_i} ∪ B_{n_i}) = 0,

so that μ has the pseudo-metric generating property.

5.1.5 Theorem: Let μ be null-additive. Then μ is exhaustive and autocontinuous if and only if it is monotonically continuous and has the pseudo-metric generating property.

Proof: Let μ be monotonically continuous and have the pseudo-metric generating property, and let A be any set with μ(A) < ∞. Put γ(B) = μ(A ∪ B) − μ(A) for every B. Then γ(B) = 0 whenever μ(B) = 0, since μ(B) = 0 implies μ(A ∪ B) = μ(A) by the null-additivity of μ. Since μ is monotonically continuous, γ is a monotonically continuous fuzzy measure. Therefore both μ and γ are exhaustive, so that the required implication holds; this is clear by result (2) above. This shows that μ is autocontinuous from above and hence it is autocontinuous. The theorem is now proved.

5.1.6 Theorem: Let μ be null-additive. Then there exists an autocontinuous finite fuzzy measure ν equivalent to μ if and only if μ is exhaustive and has the pseudo-metric generating property.

Proof: Let ν be an autocontinuous finite fuzzy measure equivalent to μ. Then ν has the exhaustivity and the pseudo-metric generating property. Consequently, it is trivial that μ is exhaustive, by the equivalence of μ and ν. Now we prove that μ has the pseudo-metric generating property. In fact, for any ε > 0 there exist ε_1 > 0 and ε_2 > 0 such that

ν(A) < ε_1  ⟹  μ(A) < ε   and   ν(A) ∨ ν(B) < ε_2  ⟹  ν(A ∪ B) < ε_1.

On the other hand, since μ and ν are equivalent, there exists δ > 0 such that μ(A) < δ implies ν(A) < ε_2, and hence

μ(A) ∨ μ(B) < δ  ⟹  μ(A ∪ B) < ε.

This shows that μ has the pseudo-metric generating property, so the "only if" part is valid.

Conversely, let μ be null-additive, exhaustive and have the pseudo-metric generating property. Then, by result (3), there exist H and {H_n} such that

H = ∪_{n=1}^∞ H_n,  μ(H_n) < ∞, n ≥ 1,  μ(A) = μ(H ∩ A) = μ(∪_{n=1}^∞ (H_n ∩ A)),  A ∈ R.

Put

ν(A) = Σ_{n=1}^∞ (1/2^n) · μ(H_n ∩ A)/(1 + μ(H_n)).

Then ν is a null-additive finite fuzzy measure with the pseudo-metric generating property, and hence it is autocontinuous by Theorem 5.1.5. By the definition of ν we know that ν(A) = 0 if and only if μ(A) = 0, for any A, so that μ and ν are equivalent by result (1). The proof is now complete.

Theorem 5.4: Let μ be exhaustive, and let

ν(B) = sup{μ(A ∪ B) − μ(A) : A ∈ R},  B ∈ R.

Then the following statements are equivalent:

(a) μ is finite and uniformly autocontinuous;

(b) μ and ν are equivalent; and

(c) ν is a subadditive finite fuzzy measure.

Proof: (i) (a) ⟹ (b): Let μ be uniformly autocontinuous. For any ε > 0 there exists δ > 0 such that μ(A ∪ B) ≤ μ(A) + ε whenever μ(B) < δ. Thus ν is strongly absolutely continuous with respect to μ. On the other hand, μ(A) ≤ ν(A) for any A, so that μ is absolutely continuous with respect to ν; hence μ and ν are equivalent.

(ii) (b) ⟹ (a): The uniform autocontinuity can be obtained directly from (b). Thus there exists δ > 0 such that μ(A ∪ B) ≤ μ(A) + 1 whenever μ(B) < δ. For any fixed A we can find an increasing sequence of subsets {A_n} such that μ(A_n) < ∞, n ≥ 1, and A = ∪_{n=1}^∞ A_n. Also lim μ(A∖A_n) = 0 by the exhaustivity of μ; hence there exists n_0 ≥ 1 such that μ(A∖A_{n_0}) < δ. Therefore we have

μ(A) = μ(A_{n_0} ∪ (A∖A_{n_0})) ≤ μ(A_{n_0}) + 1 < ∞.

This shows that μ is finite.

(iii) (a) ⟹ (c): Since μ is uniformly autocontinuous and finite, there exists H such that μ(A) = μ(A ∩ H) for any A, by result (3). Consequently, ν(B) ≤ ν(H) < ∞ for all B, and for any B_1 and B_2,

ν(B_1 ∪ B_2) = sup{μ(A ∪ (B_1 ∪ B_2)) − μ(A) : A}
  ≤ sup{μ(A ∪ (B_1 ∪ B_2)) − μ(A ∪ B_1) : A} + sup{μ(A ∪ B_1) − μ(A) : A}
  ≤ ν(B_2) + ν(B_1).

On the other hand, the exhaustivity of ν is trivial by the exhaustivity of μ. Therefore, by the definition of a subadditive fuzzy measure, ν is a subadditive finite fuzzy measure. The theorem is now proved.

5.2 Results on completeness of fuzzy measure space:


Let (X, B, μ) be a fuzzy measure space. A subset A of X is μ-negligible if there is a subset B of X such that B ∈ B, A ⊆ B and μ(B) = 0, where B is a Borel σ-algebra and X is a non-empty set.

5.2.1 Definition: A fuzzy measure μ is said to be complete if and only if every μ-negligible subset of X belongs to B. If μ is a complete fuzzy measure, then (X, B, μ) is said to be a complete fuzzy measure space.

Let (X, B, μ) be a null-additive fuzzy measure space. The completion of B under μ is the collection B* of subsets B of X for which there are sets S and T in B such that S ⊆ B ⊆ T and μ(T∖S) = 0, i.e.,

B* = {B ⊆ X : ∃ S, T ∈ B s.t. S ⊆ B ⊆ T, μ(T∖S) = 0}.

Now we define a set function μ* : B* → [0, ∞] as follows:

μ*(B) = μ(S) (= μ(T)),  where S, T ∈ B, S ⊆ B ⊆ T, μ(T∖S) = 0.

The set function μ* is called the completion of μ. The sets in B* are often called μ*-measurable. Since μ is null-additive,

μ(T) = μ(S ∪ (T∖S)) = μ(S).

Furthermore, if C ∈ B and C ⊆ B, then μ(C) ≤ μ(T) = μ(S). Hence

μ(S) = sup{μ(C) : C ∈ B, C ⊆ B}.

It follows immediately that the common value of μ(S) and μ(T) depends only on the set B, and not on the choice of the sets S and T. Therefore, the definition of the set function and fuzzy measure μ* is reasonable.
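On a finite space the completion B* can be built literally from its definition by searching for a sandwich S ⊆ B ⊆ T with μ(T∖S) = 0. The sketch below is only a toy illustration under stated assumptions (the σ-algebra and measure are hypothetical, and B is assumed to be closed under set difference); it is not the general construction for infinite spaces.

from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def completion(B, mu, X):
    # Return (B_star, mu_star): all subsets of X sandwiched by S, T in B with mu(T \ S) = 0.
    B_star, mu_star = [], {}
    for candidate in subsets(X):
        for S in B:
            for T in B:
                if S <= candidate <= T and mu[T - S] == 0:
                    B_star.append(candidate)
                    mu_star[candidate] = mu[S]
                    break
            else:
                continue
            break
    return B_star, mu_star

# Hypothetical example: X = {1, 2, 3}, B generated by the null atom {2, 3}.
X = frozenset({1, 2, 3})
B = [frozenset(), frozenset({1}), frozenset({2, 3}), X]
mu = {frozenset(): 0.0, frozenset({1}): 1.0, frozenset({2, 3}): 0.0, X: 1.0}
B_star, mu_star = completion(B, mu, X)
print(sorted(tuple(sorted(s)) for s in B_star))  # every subset of X appears, since {2, 3} is null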

Complete measures play an important role in classical measure theory, especially in dealing with measurable projections, and we expect this to remain valid in fuzzy measure theory [106]. We give some results on complete fuzzy measures. If μ is a null-additive fuzzy measure and μ is finite and autocontinuous from below, then μ is autocontinuous from above [106]. If μ is uniformly autocontinuous, then μ is autocontinuous. If μ is autocontinuous from above (or from below), then so is its completion μ*. These results are used in the following theorem.

5.2.2 Theorem: Let (X, B, μ) be a null-additive fuzzy measure space. Then B* is a σ-algebra on X that includes B, and μ* is a fuzzy measure on B* that is complete and whose restriction to B is μ.

Proof: It is clear that B ⊆ B* and X ∈ B*. B* is closed under complementation, since for A ⊆ X and L, M ∈ B the relation L ⊆ A ⊆ M with μ(M∖L) = 0 implies the relations M^c ⊆ A^c ⊆ L^c and

μ(L^c∖M^c) = μ(L^c ∩ M) = μ(M∖L) = 0.

Let {B_n} be a sequence of sets in B*. For each n choose sets L_n and M_n in B such that

L_n ⊆ B_n ⊆ M_n,  μ(M_n∖L_n) = 0.

Then ∪_{n=1}^∞ L_n and ∪_{n=1}^∞ M_n belong to B and satisfy

∪_{n=1}^∞ L_n ⊆ ∪_{n=1}^∞ B_n ⊆ ∪_{n=1}^∞ M_n,  μ(∪_{n=1}^∞ M_n ∖ ∪_{n=1}^∞ L_n) ≤ μ(∪_{n=1}^∞ (M_n∖L_n)) = 0,

by the null-additivity of μ. Thus ∪_{n=1}^∞ B_n belongs to B*. Consequently, B* is a σ-algebra on X that includes B.

Since for any B in B we have μ*(B) = μ(B), by letting L and M equal B, μ* is an extension of μ. It is clear that μ* has non-negative values and satisfies μ*(∅) = 0, and μ*(B_1) ≤ μ*(B_2) for any sets B_1 ⊆ B_2 in B*. Let {B_n} be an increasing sequence of sets in B*. For each n choose L_n and M_n in B such that

L_n ⊆ B_n ⊆ M_n,  μ(M_n∖L_n) = 0.

Let G_n = ∪_{j=1}^n L_j and H_n = ∪_{j=1}^n M_j; then {G_n} and {H_n} are both increasing sequences of sets in B satisfying G_n ⊆ B_n ⊆ H_n and

μ(H_n∖G_n) = μ((∪_{j=1}^n M_j) ∩ (∩_{j=1}^n L_j^c)) ≤ μ(∪_{j=1}^n (M_j∖L_j)) = 0,

by the null-additivity of μ. Since

∪_{n=1}^∞ G_n ⊆ ∪_{n=1}^∞ B_n ⊆ ∪_{n=1}^∞ H_n,  μ(∪_{n=1}^∞ H_n ∖ ∪_{n=1}^∞ G_n) ≤ μ(∪_{n=1}^∞ (H_n∖G_n)) = 0,

we have

μ*(∪_{n=1}^∞ B_n) = μ(∪_{n=1}^∞ G_n) = lim_n μ(G_n) = lim_n μ*(B_n).

Hence μ* is continuous from below. Now let {A_n} be a decreasing sequence of sets in B* with μ*(A_1) < ∞. It is similar to prove that

μ*(∩_{n=1}^∞ A_n) = lim_n μ*(A_n).

Consequently, μ* is a fuzzy measure on B*. The completeness of μ* is obvious by the definition of B*. Thus the theorem is proved.

5.2.3 Theorem: If (X, B, μ) is a null-additive fuzzy measure space, then (X, B*, μ*) is a complete null-additive fuzzy measure space.

Proof: By the definition of μ*, it is obvious that μ* is complete. To prove this theorem it is sufficient to prove that μ* is null-additive. Let A be a set in B* with μ*(A) = 0. For any set B in B*, let sets L_n, M_n (n = 1, 2) in B satisfy

L_1 ⊆ B ⊆ M_1, μ(M_1∖L_1) = 0,  L_2 ⊆ A ⊆ M_2, μ(M_2∖L_2) = 0.

Since

L_1 ∪ L_2 ⊆ B ∪ A ⊆ M_1 ∪ M_2,  μ((M_1 ∪ M_2)∖(L_1 ∪ L_2)) ≤ μ(∪_{n=1}^2 (M_n∖L_n)) = 0,

and μ(L_2) ≤ μ*(A) = 0, we have

μ*(B ∪ A) = μ(L_1 ∪ L_2) = μ(L_1) = μ*(B),

hence μ* is null-additive. Hence (X, B*, μ*) is a complete null-additive fuzzy measure space.

5.2.4 Theorem: If μ is converse-null-additive on B, then μ* is also converse-null-additive on B*.

Proof: Let us assume two sets L and M in B* with L ⊆ M and μ*(L) = μ*(M) < ∞. For L and M let L_n and M_n (n = 1, 2) in B satisfy

L_1 ⊆ L ⊆ L_2, μ(L_2∖L_1) = 0,  M_1 ⊆ M ⊆ M_2, μ(M_2∖M_1) = 0.

Since

μ(L_1) = μ*(L) = μ*(M) = μ(M_2)  and  M∖L = M ∩ L^c ⊆ M_2 ∩ L_1^c = M_2∖L_1,

we have μ*(M∖L) ≤ μ(M_2∖L_1) = 0 by the converse-null-additivity of μ. Hence μ* is converse-null-additive.

5.2.5 Theorem: If μ is autocontinuous on B, then μ* is autocontinuous on B*.

Proof: Let {A_n} be a sequence of sets in B* with μ*(A_n) → 0 (as n → ∞). For any set B in B*, let us consider two sets L and M in B such that L ⊆ B ⊆ M and μ(M∖L) = 0. For each n let L_n and M_n be two sets in B satisfying

L_n ⊆ A_n ⊆ M_n,  μ(M_n∖L_n) = 0.

Since

L ∪ L_n ⊆ B ∪ A_n ⊆ M ∪ M_n,  (M ∪ M_n)∖(L ∪ L_n) ⊆ (M∖L) ∪ (M_n∖L_n),

and μ(L_n) ≤ μ*(A_n) → 0, we have

lim μ*(B ∪ A_n) = lim μ(L ∪ L_n) = μ(L) = μ*(B).

Hence μ* is autocontinuous on B*.

5.2.6 Theorem: If (X, B, μ) is a complete null-additive fuzzy measure space, then B* = B and μ* = μ.

Proof: Let B ∈ B*; then there are two sets L and M in B such that L ⊆ B ⊆ M and μ(M∖L) = 0. Since B∖L ⊆ M∖L, we have B∖L ∈ B by the completeness of μ. Hence B = L ∪ (B∖L) ∈ B. Thus B* ⊆ B and so B* = B. That μ* = μ follows immediately by the null-additivity of μ, and the theorem is proved.

5.3 Convergence in fuzzy measure:

The convergence of sequences of set-valued functions on a measure space is given in [104], [105]. By means of the asymptotic structural characteristics of fuzzy measures [108], we discuss convergence in a fuzzy measure space and some results of fuzzy measure theory, such as Lebesgue's theorem, Riesz's theorem, Egoroff's theorem and their generalizations [106]. First we give the definitions of convergence of measurable functions, and then we discuss the relations among these convergences, namely Lebesgue's theorem, Riesz's theorem, Egoroff's theorem and their generalizations.

Let R^p be the p-dimensional Euclidean space and d a Euclidean metric on R^p. Let X be a metric space and B denote the Borel sets in X; (X, B) is said to be the Borel measurable space. F[X] denotes the collection of all measurable functions from X to the non-empty closed subsets of R^p. Let {f_n, f} ⊆ F[X], B ∈ B and n ≥ 1.

(1) We say {f_n} converges to f almost everywhere on B, denoted f_n → f a.e. on B, if there exists A ⊆ B, A ∈ B, such that μ(A) = 0 and, for every ω ∈ (B∖A), lim f_n(ω) = f(ω).

(2) We say {f_n} converges to f pseudo-almost everywhere on B, denoted f_n → f p.a.e. on B, if there exists A ⊆ B, A ∈ B, such that μ(B) = μ(B∖A) and, for every ω ∈ (B∖A), lim f_n(ω) = f(ω).

(3) We say {f_n} converges to f pseudo-almost everywhere in B, denoted f_n → f p.a.e. in B, if for every C ⊆ B, C ∈ B, f_n converges to f pseudo-almost everywhere on C.

(4) If μ is null-additive, then {f_n} converges to f almost everywhere on B if and only if, for every x ∈ R^p, d(x, f_n) converges to d(x, f) almost everywhere, i.e.,

f_n → f a.e.  ⟺  d(x, f_n) → d(x, f) a.e.,  ∀ x ∈ R^p.

(5) If μ is pseudo-null-additive and μ(B) < ∞, then f_n converges to f pseudo-almost everywhere on B if and only if, for every x ∈ R^p, d(x, f_n) converges to d(x, f) pseudo-almost everywhere, i.e.,

f_n → f p.a.e.  ⟺  d(x, f_n) → d(x, f) p.a.e.,  ∀ x ∈ R^p.

5.3.1 Definition: Let {f_n, f} ⊆ F[X], B ∈ B and n ≥ 1. We say {f_n} converges in fuzzy measure μ to f on B, denoted f_n →_μ f on B, if for any ε > 0 and any compact subset K of R^p,

lim_n μ(B ∩ (F_n^ε)⁻¹(K)) = 0.

We say {f_n} converges pseudo-in fuzzy measure μ to f on B, denoted f_n →_{p.μ} f on B, if for any ε > 0 and any compact subset K of R^p,

lim_n μ(B∖(F_n^ε)⁻¹(K)) = μ(B).

If for every C ⊆ B, C ∈ B, f_n →_{p.μ} f on C, then we say {f_n} converges pseudo-in fuzzy measure μ to f in B, denoted f_n →_{p.μ} f in B.

Here

(F_n^ε)⁻¹(K) = {ω ∈ X : F_n^ε(ω) ∩ K ≠ ∅},  B ∩ (F_n^ε)⁻¹(K) = {ω ∈ B : F_n^ε(ω) ∩ K ≠ ∅},

where F_n^ε = (f_n∖f^ε) ∪ (f∖f_n^ε), f^ε(ω) = {x ∈ R^p : d(x, f(ω)) < ε} and f_n^ε(ω) = {x ∈ R^p : d(x, f_n(ω)) < ε}.

5.3.2 Definition: Let {f_n, f} ⊆ F[X], B ∈ B and n ≥ 1. We say {f_n} converges almost uniformly to f on B, denoted f_n →_{a.u.} f on B, if there exists a sequence {B_m} of measurable subsets of B, B_m ∈ B, such that

lim_m μ(B_m) = 0

and f_n converges to f uniformly on B∖B_m, m = 1, 2, ....

And f_n converges pseudo-almost uniformly to f on B, denoted f_n →_{p.a.u.} f on B, if there exists a sequence {B_m} of measurable subsets of B, B_m ∈ B, such that

lim_m μ(B∖B_m) = μ(B)

and f_n converges to f uniformly on B∖B_m, m = 1, 2, ....

If for every C ⊆ B, C ∈ B, f_n →_{p.a.u.} f on C, then we say {f_n} converges pseudo-almost uniformly to f in B, denoted f_n →_{p.a.u.} f in B.

5.3.3 Theorem (Lebesgue's theorem): Let {f_n, f} ⊆ F[X], B ∈ B and n ≥ 1.

(a) If μ(B) < ∞, then f_n → f a.e. on B implies f_n →_μ f on B;

(b) f_n → f p.a.e. on B implies f_n →_{p.μ} f on B;

(c) If μ is null-additive, then f_n → f a.e. on B implies f_n →_{p.μ} f on B;

(d) If μ is pseudo-null-additive with respect to B (or converse null-additive) and μ(B) < ∞, then f_n → f p.a.e. on B implies f_n →_μ f on B.

Proof: By using the null-additivity and the pseudo-null-additivity with respect to B (or converse null-additivity) of μ, (c) and (d) can be obtained from (b) and (a) respectively. Now we only prove (b); (a) is proved similarly. Since f_n → f p.a.e. on B, there exists A ⊆ B, A ∈ B, such that μ(B∖A) = μ(B) and, for every ω ∈ (B∖A), lim f_n(ω) = f(ω). Let ω ∈ (B∖A). For any ε > 0 and any compact subset K of R^p there exists some positive integer N such that [104]

F_n^ε(ω) ∩ K = ∅  for n ≥ N,

where F_n^ε is as in Definition 5.3.1. Let

D_n(ε, K) = ∩_{m=n}^∞ {ω ∈ (B∖A) : F_m^ε(ω) ∩ K = ∅};

then D_n(ε, K) ↑ (B∖A) and D_n(ε, K) ⊆ B∖(F_n^ε)⁻¹(K), n = 1, 2, ..., so we have

μ(D_n) ≤ μ(B∖(F_n^ε)⁻¹(K)) ≤ μ(B).

Therefore we have

lim_n μ(B∖(F_n^ε)⁻¹(K)) = μ(B),

and the proof of (b) is complete.

5.3.4 Theorem (Riesz's theorem): Let {f_n, f} ⊆ F[X], B ∈ B and n ≥ 1.

(a) If μ is autocontinuous from above and f_n →_μ f on B, then there exists some subsequence {f_{n_i}} of {f_n} such that f_{n_i} → f a.e. on B and f_{n_i} → f p.a.e. in B.

(b) If μ is pseudo-autocontinuous from above with respect to B, μ(B) < ∞ and f_n →_{p.μ} f on B, then there exists some subsequence {f_{n_i}} of {f_n} such that f_{n_i} → f p.a.e. on B.

(c) If μ is autocontinuous from below and f_n →_μ f on B, then there exists some subsequence {f_{n_i}} of {f_n} such that f_{n_i} → f a.e. on B.

(d) If μ is pseudo-autocontinuous from below with respect to B, μ(B) < ∞ and f_n →_{p.μ} f on B, then there exists some subsequence {f_{n_i}} of {f_n} such that f_{n_i} → f a.e. on B and f_{n_i} → f p.a.e. in B.

Proof: We only prove (a); the rest are proved similarly. Let {σ_l} be a sequence of positive numbers with σ_l ↓ 0, and let R^p be represented as

R^p = ∪_{l=1}^∞ T_l,

where each T_l is a bounded open subset of R^p and its closure satisfies T̄_l ⊆ T_{l+1}, l = 1, 2, .... Since f_n →_μ f on B, we have, for each l,

lim_n μ(B ∩ (F_n^{σ_l})⁻¹(T̄_l)) = 0.

For each l there exists some positive integer n_l such that μ(B ∩ (F_{n_l}^{σ_l})⁻¹(T̄_l)) < σ_l. There is no harm in assuming that n_l < n_{l+1}. Let B_l = B ∩ (F_{n_l}^{σ_l})⁻¹(T̄_l); then

lim_l μ(B_l) = 0.

Since μ is autocontinuous from above, there exists some subsequence {B_{l_j}} of {B_l} such that [108]

μ(∩_{i=1}^∞ ∪_{j=i}^∞ B_{l_j}) = 0.

Now we prove that, for every ω ∈ B∖(∩_{i=1}^∞ ∪_{j=i}^∞ B_{l_j}), lim f_{n_{l_j}}(ω) = f(ω). For such an ω there exists some positive integer i(ω) such that ω ∉ B_{l_j} whenever j ≥ i(ω), that is,

F_{n_{l_j}}^{σ_{l_j}}(ω) ∩ T̄_{l_j} = ∅.

For any ε > 0 and any compact subset K of R^p there exists some positive integer i_0 such that σ_{l_{i_0}} < ε and K ⊆ T̄_{l_{i_0}} (since the open covering {T_l} of the compact subset K of R^p can be reduced to a finite covering). Therefore F_{n_{l_j}}^ε(ω) ∩ K = ∅ whenever j ≥ i(ω) ∨ i_0. Since ω ∈ B∖(∩_{i=1}^∞ ∪_{j=i}^∞ B_{l_j}) is arbitrary, we thus have f_{n_{l_j}} → f a.e. on B. By using the null-additivity of μ, we also have f_{n_{l_j}} → f p.a.e. in B.

5.3.5 Theorem (Egoroff's theorem): Let {f_n, f} ⊆ F[X], B ∈ B and n ≥ 1.

(a) If μ is null-additive, then on B: f_n → f a.e. implies f_n →_{a.u.} f, and this implies f_n →_{p.a.u.} f;

(b) If μ is pseudo-null-additive with respect to B, then on B: f_n → f p.a.e. implies f_n →_{a.u.} f;

(c) On B: f_n → f p.a.e. implies f_n →_{p.a.u.} f.

Proof: We only prove (c); (a) and (b) are proved similarly. Let {σ_l} be a sequence of positive numbers with σ_l ↓ 0 and

R^p = ∪_{l=1}^∞ T_l,

where each T_l is a bounded open subset of R^p and its closure satisfies T̄_l ⊆ T_{l+1}, l = 1, 2, .... Since f_n → f p.a.e. on B, there exists A ⊆ B, A ∈ B, such that μ(B∖A) = μ(B) and {f_n} converges to f everywhere on B∖A. For each l, let

B_m^l = ∩_{n=m}^∞ {ω ∈ (B∖A) : F_n^{σ_l}(ω) ∩ T̄_l = ∅};

then B_1^l ⊆ B_2^l ⊆ ... and ∪_{m=1}^∞ B_m^l = B∖A.

For arbitrarily given ε > 0, since μ(B_m^1) → μ(B), there exists some positive integer i such that

μ(B_i^1) > μ(B) − ε/2.

Since μ(B_i^1 ∩ B_j^2) → μ(B_i^1), there exists some positive integer j > i such that

μ(B_i^1 ∩ B_j^2) > μ(B) − ε/2 − ε/2².

Continuing in this way, we finally obtain some subsequence {B_{i_l}^l} of {B_m^l} such that

μ(∩_{l=1}^∞ B_{i_l}^l) ≥ μ(B) − ε.

If we denote G = ∩_{l=1}^∞ B_{i_l}^l, then f_n → f uniformly on G. In fact, for any σ > 0 and any compact subset K of R^p there exists some positive integer l_0 such that σ_{l_0} < σ and K ⊆ T̄_{l_0}, so we have

G ⊆ B_{i_{l_0}}^{l_0} ⊆ ∩_{n=i_{l_0}}^∞ {ω ∈ (B∖A) : F_n^σ(ω) ∩ K = ∅},

that is, G ∩ (F_n^σ)⁻¹(K) = ∅ whenever n ≥ N = i_{l_0}. Therefore we have f_n →_{p.a.u.} f on B.

5.4 Some other properties of fuzzy measure and convergence:

The main methodological idea, basic for the theory of fuzzy sets, states that elements may belong to some collection not absolutely but to some extent. We consider the property of continuity of functions and investigate what it means that a function is continuous only to some extent. In order to do this in mathematical terms, some estimations of this property are introduced. These estimations play the role of the membership function for the fuzzy sets of fuzzy continuous functions. A function f is called fuzzy continuous if the value of the corresponding membership function is not equal to zero [73]. In particular, the continuity of the fuzzy integral with respect to different kinds of convergence has been exhaustively studied in the last few years, but up to now there have been only a few discussions on the convergence of sequences of fuzzy measures [20]. We introduce some properties of fuzzy measure and convergence. The notations have their usual meaning as defined in this chapter, and these properties can be easily proved with the help of [48], [80], [84].

(1) If μ is an exhaustive and autocontinuous fuzzy measure and f_n → f everywhere on X, then for any ε > 0 there exists a closed subset T such that μ(X∖T) < ε and {f_n} converges to f uniformly on T, n = 1, 2, ....

(2) If μ is exhaustive and has the pseudo-metric generating property and f_n → f almost everywhere on X, then for any ε > 0 there exists a compact subset K such that μ(X∖K) < ε and {f_n} converges to f uniformly on K.

(3) If μ is exhaustive and has the pseudo-metric generating property and if f is a real measurable function on X, then for any ε > 0 there exists a closed subset T such that f is continuous on T and μ(X∖T) ≤ ε.

(4) If μ is an exhaustive and autocontinuous σ-finite fuzzy measure and f is a real measurable function on X, then for each ε > 0 there exists a closed subset T such that f is continuous on T and μ(X∖T) ≤ ε.

(5) Let {f_n, f} ⊆ F[X], μ(B) < ∞ and n ≥ 1.

(a) If μ is autocontinuous from above, then {f_n} converges in fuzzy measure μ to f on B if and only if, for any subsequence {f_{n_i}} of {f_n}, there exists some subsequence {f_{n_{i_j}}} of {f_{n_i}} such that f_{n_{i_j}} → f a.e. on B.

(b) If μ is pseudo-autocontinuous from below with respect to B, then f_n →_{p.μ} f on B if and only if, for any subsequence {f_{n_i}} of {f_n}, there exists some subsequence {f_{n_{i_j}}} of {f_{n_i}} such that f_{n_{i_j}} → f p.a.e. on B.

(6) Let {f_n, f} ⊆ F[X], μ(B) < ∞ and n ≥ 1.

(a) If μ is autocontinuous from above, then on B (or in B): f_n →_μ f if and only if d(x, f_n) →_μ d(x, f) for every x ∈ R^p.

(b) If μ is pseudo-autocontinuous from below with respect to B, then on B: f_n →_{p.μ} f if and only if d(x, f_n) →_{p.μ} d(x, f) for every x ∈ R^p.

(7) Let {f_n, f} ⊆ F[X], μ(B) < ∞ and n ≥ 1.

(a) If μ is a null-additive fuzzy measure, then on B: f_n →_{a.u.} f if and only if d(x, f_n) →_{a.u.} d(x, f) for every x ∈ R^p.

(b) If μ is a pseudo-null-additive fuzzy measure with respect to B, then on B (or in B): f_n →_{p.a.u.} f if and only if d(x, f_n) →_{p.a.u.} d(x, f) for every x ∈ R^p.

(8) Let μ be a fuzzy measure on a σ-algebra B, and let {f_n : X → [0, ∞]} be a sequence of Borel measurable positive functions which converges everywhere to f : X → [0, ∞]. If there exists a μ-integrable function z such that f_n(ω) ≤ z(ω), ω ∈ X, for every n = 1, 2, ..., then f_n and f are μ-integrable and

lim_n ∫_X f_n dμ = ∫_X f dμ.

References:
[1] A. De Luca, and S. Termini, (1972), A definition of non-probabilistic
entropy in the setting of fuzzy sets, Inform. and Control, 20, 301.
[2] A. Feinstein, (1958), Foundations of Information Theory, New York.
[3] A. Hiroto, (1982), Ambiguity based on the concept of subjective entropy, in:
M.M. Gupta and E. Sanchez, Eds., Fuzzy Information and Decision Processes,
North-Holland Amsterdam.
[4] A. P. Dempster, (1967), Upper and lower probabilities induced by a
multivalued mapping. Annals Mathematical Statistics 38: 325-339.
[5] A. Wehrl, (1978), General properties of entropy. Rev. Mod. Phys., 50, 221-260.
[6] Ann Markusen, (2003), Fuzzy Concepts, Scanty Evidence, Policy Distance:
The Case for Rigour and Policy Relevance in Critical Regional Studies. In:
Regional Studies, volume 37, Issue 6-7, pp. 701-717.


[7] Alexander S. Kechris, (1995), Classical descriptive set theory. Springer-Verlag, Graduate Texts in Mathematics, vol. 156.
[8] B. R. Gaines, and L. Kohout, (1975), Possible automata. Proc. Int. Symp.
Multiple-Valued logics, Bloomington, IN, 183-196.
[9] Bhaskara Rao, K. P. S. and M. Bhaskara Rao, (1983), Theory of Charges: A
Study of Finitely Additive Measures, London: Academic Press, x+315, ISBN 0-12-095780-9.
[10] C. E. Shannon, (1948), A mathematical theory of communication. Bell
Syst. J. 27, 379-423, 623-656.
[11] C. Wu and M. Ha, (1994), On the regularity of the fuzzy measure on metric
fuzzy measure spaces, Fuzzy Sets and Systems, 66, 373-379.
[12] Claude E. Shannon and Warren Weaver, (1949), The Mathematical Theory
of Communication, The University of Illinois Press, Urbana, Illinois. ISBN 0-252-72548-4.
[13] D. Cayrac, D. Dubois, M. Haziza and H. Prade, (1996), Handling
uncertainty with possibility theory and fuzzy sets in a satellite fault diagnosis
application. IEEE Trans. on Fuzzy Systems, 4, 251-269.
[14] D. Dubois and H. Prade, (1997), Bayesian conditioning in possibility
theory, Fuzzy Sets and Systems 92, 223-240.


[15] D. Dubois and H. Prade, (1996), What are fuzzy rules and how to use
them. Fuzzy Sets and Systems, 84, 169-185.
[16] D. Dubois and H. Prade, (1992), When upper probabilities are possibility
measures, Fuzzy Sets and Systems 49, 65-74.
[17] D. Dubois, H. Prade and S. Sandri, (1993), On possibility/probability
transformations. In R. Lowen, M. Roubens, editors. Fuzzy logic: State of Art,
Dordrecht: Kluwer Academic Publ., 103-112.
[18] D. H. Fremlin, (2000), Measure Theory (http://www.essex.ac.uk/maths/
people/ fremlin/mt.htm). Torres Fremlin.
[19] D. Harmanec, and G. J. Klir, (1996), Principle of uncertainty invariance.
International Fuzzy Systems and Intelligent Control Conference (IFSICC), 331339.
[20] D. Ralescu and G. Adams, (1980), The fuzzy integral, J. Math. Anal. Appl.,
75, 562-570.
[21] De Campos and M. J. Balanos, (1989), Representation of fuzzy measures
through probabilities, Fuzzy Sets and Systems 31, 23-36.
[22] Didier Dubois and Henri Prade, (2001), Possibility Theory, Probability
Theory and Multiple-valued Logics: A Clarification, Annals of Mathematics
and Artificial Intelligence 32, 3566.

196

[23] E. E. Kerre and J. N. Mordeson, (2005), A historical overview of fuzzy


mathematics, New Mathematics and Natural Computation, 1, 1-26.
[24] E. P. Klement, R. Mesiar and E. Pap, (2000), Triangular Norms. Dordrecht,
Kluwer.
[25] E. Pap, (1990), Lebesgue and Saks decompositions of

decomposable measures, Fuzzy Sets and Systems 38, 345-353.


[26] E. Pap, (1991), On non-additive set functions, Atti Sem. Mat. Fis. Univ.
Modena 39, 345-360.
[27] E. Pap, (1994), The Lebesgue decomposition of the null-additive fuzzy
measures, Univ. of Novi. Sad, Yugoslavia, 24, 1, 129-137.
[28] E. Pap, (1994), The range of null-additive fuzzy and non-fuzzy measures;
Fuzzy Sets and Systems 65, 105-115.
[29] G. Bartle Robert (1995), The Elements of Integration and Lebesgue
Measure, Wiley Interscience.
[30] G. Choquet, (1953), Theory of capacities, Ann. Inst. Fourier 5, 131-296.
[31] G. E. Shilov and B. L. Gurevich, (1978), Integral, Measure, and
Derivative: A unified approach, Richard A. Silverman, trans. Dover
Publications. ISBN 0-486-63519-8. Emphasizes the Daniell integral.

197

[32] G. J. Klir, (1990), A principle of uncertainty and information invariance,


International Journal of General Systems,17, 249-275.
[33] G. J. Klir, (1995), Principles of Uncertainty: What are they? Why do we
need them? Fuzzy Sets and Systems, 74(1): 15-31.
[34] G. J. Klir, (1991), Some applications of the principle of uncertainty
invariance. In proceedings of International Fuzzy Engineering Symposium,
Yokohama, Japan, 15-26.
[35] G. J. Klir and A. Ramer, (1990), Uncertainty in the Dempster-Shafer
theory: a critical reexamination. Intern, J. of General Systems, 18, No. 2, pp.
155-166.
[36] G. J. Klir and B. Parviz, (1992), Probabilitypossibility transformations, a
comparison, International Journal of General Systems 21 (3),pp. 291310.
[37] G. J. Klir and Bo Yuan, (1995), Fuzzy Sets and Fuzzy Logic, Theory and
Applications, Prentice-Hall, ISBN-978-81-203-1136-7.
[38] G. J. Klir and M. Mariano, (1987), On the uniqueness of possibilistic
measure of uncertainty and information, Fuzzy Sets and Systems, 24(2), 197220.
[39] G. J. Klir and T. A. Folger, (1988), Fuzzy sets, Unertainty, and information.
Prentice Hall, Englewood cliffs (N.J.).

198

[40] G. N. Lewis, (1927), The entropy of radiation. Proc. Natl. Acad. Sci.
U.S.A., 13, 307-314.
[41] G. Jumarie, (1975), Further advances on the general thermodynamics of
open systems via information theory, effective entropy, negative information,
Internat. J. Systems Sci. 6, 249-268.
[42] Gerald B. Folland, (1999), Real analysis: Modern Techniques and Their
Applications, John Wiley and Sons, ISBN [Special: Book Sources/
04713171600|04713171600] Second edition.
[43] Glenn Shafer, (1976), A Mathematical Theory of Evidence, Princeton
University Press, ISBN 0-608-02508-9.
[44] Gustave Choquet (1953). Theory of Capacities. Annales de l'Institut
Fourier 5, 131295.
[45] H. Suzuki, (1991), Atoms of fuzzy measures and fuzzy integrals, Fuzzy
Sets and Systems 41, 329-342.
[46] H. Suzuki, (1988), On fuzzy measures defined by fuzzy integrals. J. Math.
Anal. Appl. 132, 87-101.
[47] H. Tahani and J. Keller, (1990), Information Fusion in Computer Vision
Using the Fuzzy Integral, IEEE Transactions on Systems, Man and
Cybernetic 20 (3), 733741. Doi, 10.1109/21.57289.

199

[48] Ines Couso, Susana Motes and Pedro Gil, (2002), Stochastic convergence,
uniform integrability and convergence in mean on fuzzy measure spaces, Fuzzy
Sets and Systems, 129, 95-104.
[49] Irem Dikmen, M. Talat Birgonal and Sedal Han,(July 2007)Using fuzzy
risk assessment to rate cost overrun risk in international construction projects.
International Journal of Project Management, Vol. 25 no. 5, 494-505.
[50] J. F. Geer and G. J. Klir, (1992), A mathematical analysis of informationpreserving transformations between probabilistic and possibilistic formulations
of uncertainty, International Journal of General systems, 20(2): 143-176.
[51] J. F. Geer, and G. J. Klir, (1991), Discord in Possibility theory. Intern.J. of
General Systems, 19, No. 2, 119-132.
[52] J. L. Marichal and M. Roubens, (2000), Entropy of discrete fuzzy
measures: International Journal of Uncertainty, Fuzziness and KnowledgeBased Systems, 8(6): 625-640.
[53] J. N. Kapur, G. Baciu and H. K. Kesavan (1995), The min max information
measure. International Journal of Systems Science, 26(1): 1-12.
[54] J. Aczel and Z. Daroczy, (1975), On measures of information and their
characterization, Academic Press, New York- San Francisco, London.
[55] J. Goguen, (1967), L-fuzzy sets, J. Math. Anal. Appl.,18,145-174.

200

[56] K. R. Parthasarathy, (1967), Probability measures on metric spaces,


Academic Press, New York.
[57] Kazuo Tanaka, (1996), An Introduction of Fuzzy Logic for Practical
Applications. Springer; World Scientific Publishing Co.
[58] Kratschmer Volker, (2003), When fuzzy measures are upper envelopes of
probability measures. Fuzzy Sets and Systems 138, 455-468.
[59] L. A. Zadeh, (1978), Fuzzy sets as a basis for theory of possibility. Fuzzy
Sets and Systems, 1, 3-28.
[60] L.A. Zadeh, (1965) fuzzy sets, Information and Control, 8, 338-353.
[61] L. A. Zedeh, (1968), Probability measures of fuzzy events, J. Math. Anal.
Appl. 23, 424-427.
[62] L. S. Shapley, (1953), A value of n-person games, in: H. W. Kuhn and
A.W. Tucker editors, Contributions to the Theory of games Vol. II, 307-317,
Princeton University press, Princeton.
[63] Louis Perrin and Henri Lebesgue, (2004), Renewer of Modern Analysis. In
Le Lionnais, Francois. Great Currents of Mathematical Thought 1 (2nd ed.).
Courier Dover Publications.ISBN 978-0-486-49578-1.
[64] M. E. Munroe, (1953). Introduction to Measure and Integration. Addison
Wesley.

201

[65] M. Grabisch, (1996),

-order additive fuzzy measures, Proc. 6 th Int.

Conf. on In-formation processing and management of uncertainty in


knowledge-based system (IPMU). 1345-1350, Granada, Spain.
[66] M. Grabisch, (1997),

- order additive discrete fuzzy measures and

their representation, Fuzzy Sets and Systems 92 (2):167-189. Doi: 10.1016/


S0165-0114(97)00168-1.
[67] M. Grabisch, T. Murofushi, and M. Sugeno, (2000), Fuzzy measures and
integrals, theory and applications, studies in fuzziness and soft computing.
Physica Verlag, Heidelberg.
[68] M. Grabisch, T. Murofushi and M. Sugeno (1992), Fuzzy measure of fuzzy
events defined by fuzzy integrals. Fuzzy Sets and Systems 50, 293-313.
[69] M. Higashi and G. J. Klir, (1982), Measure of uncertainty and information
based on possibility distributions, Int. J. General syst., 9, 43-58.
[70] M. L. Puri and D. Ralescu, (1982), A possibility measure is not a fuzzy
measure (short communication). Fuzzy Sets and Systems, 311-313.
[71] M. P. Frank, (2005), Approaching physical limits of computing. Multiplevalued logic, doi.: 10.1109/ISMVL.9.
[72] M. Sugeno (1974). Theory of fuzzy integrals and its applications. Ph.D.
thesis, Tokyo Institute of Technology, Tokyo, Japan.

202

[73] Mark Burgin, (1999), General approach to continuity measures, Fuzzy Sets
and Systems, 105, 225-231.
[74] Mark Stosberg, (16 December 1996). The Role of Fuzziness in Artificial
Intelligence. Minds and Machines. Retrieved 19 april 2013.
[75] Masao Mukaidono, (2001), Fuzzy logic for beginners. Singapore: World
Scientific Publishing.
[76] Max Black, (1937). Vagueness: An exercise in logical analysis, Philosophy
of Science 4: 427455. Reprinted in R. Keefe, P. Smith (eds.): Vagueness: A
Reader, MIT Press 1997, ISBN 978-0-262-61145-9.
[77] P. M. Murphy and D. W. Aha, UCI repository machine learning databases,
http://www.ics.uci.edu/mlearn/MLrepository.html, Irvine CA:University.
[78] P. Miranda, M. Grabisch, P.Gil, (2002), p-symmetric fuzzy measures. Int.
J. of Unc., Fuzzy and knowledge-based systems 10 (Supplement) 105-123.
[79] P. R. Halmos, (1968) Measure Theory, Van Nostrand, Princeton, N. J.,
(1962). Van Nostrand Reinhold, New York.
[80] Q. Jiang and H. Suzuki, (1996), Fuzzy measures on metric spaces, Fuzzy
Sets and Systems, 83, 99-106.
[81] Q. Jiang and H. Suzuki, (1995), Lebesgue and Saks decompositions of
finite fuzzy measures, Fuzzy Sets and Systems, 75, 373-385.

203

[82] Q. Jiang and H. Suzuki, (1995), The range of

finite fuzzy

measures, Proc. Fuzz-IEEE/IFES95, Yokohama, Japan.


[83] Q. Jiang, H. Suzuki, Z. Wang, G. J. klir, J. Li and M. Yasuda, (1995),
Property (p.g.p.) of fuzzy measures and convergence in measure, The Journal of
fuzzy mathematics, 3, 699-710.
[84] Q. Jiang, Shengrui Wang, and Diemel Ziou, (1999), A further investigation
for fuzzy measures on metric spaces, Fuzzy Sets and Systems, 105, 293-297.
[85] Q. Jiang, (1992), Some properties of finite auto-continuous fuzzy measures
in metric spaces, J. Hebei University 5, 1-4.
[86] R. M. Dudley, (2002), Real Analysis and Probability, Cambridge
University Press.
[87] R. R. Yager, (1988), On ordered weighted averaging aggregation operators
in multi-criteria decision making. IEEE Trans. on SMC, 18, 183-190.
[88] R. R.Yager, (1999), On the entropy of fuzzy measures, Technical report
MII-1917 R, Machine intelligence institute, Iona college, New Rochelle.
[89] R. Yager, (1979), on the measure of fuzziness and negation, Int. J. General
syst., 5, 221-229.
[90] Radim Belohlavek and George J. Klir (eds.) (2011), Concepts and Fuzzy
Logic, MIT Press.

204

[91] Richard Dietz & Sebastiano Moruzzi (eds.), (2009), Cuts and clouds.
Vagueness, Its Nature, and Its Logic, Oxford University Press.
[92] Roy T. Cook, (2009), A dictionary of philosophical logic, Edinburgh
University Press, 84.
[93] Russell Bertrand (1950), Unpopular Essays, Allen &Unwin.
[94] S. Kullback, (1958), Information Theory and Statistics, Wiley, New York.
[95] S. Selvin, (1975), A Problem in Probability, American Statistician, 29,1-67
[96] S. Weber, (1984),

-decomposable measure and integrals for

Archimedean t-conorm, J. Math. Anal. Appl. 101, 114-138.


[97] Susan Haack (1996), Deviant logic: beyond the formalism. Chicago:
University of Chicago Press.
[98] T. Calvo and A. Pradera, (2001), Some characterizations based on double
aggregation operators. European Society for fuzzy logic and technology, 470474, Leicester (U.K.).
[99] T. Murofushi and M.

Sugeno, (1991), A theory of fuzzy measures,

Representation, the Choquet integral and null sets. J. Math. Anal. And Appl.
159 (2), 532-549.
[100] V. I. Bogachev, (2007), Measure theory, Berlin: Springer, ISBN 978-3540-34513-8.

205

[101] V. Torra, (1999), On hierarchically S-decomposable fuzzy measures. Int.


J. of Intel. System 14:9; 923-934.
[102] V. Torra, (1999), On the learning of weights in some aggregation
operators; The weighted mean and the OWA operators, mathware and soft
computing, 6, 249-265.
[103] W. Adamski, (1977), Capacity like set functions and upper envelopes of
measures, Math. Ann. 229, 237-244.
[104] W. Zhang and G. Wang, (1991), Introduction of fuzzy mathematics, Xian
Jiaotong University.
[105] W. Zhang, (1987), Set valued measures and random sets, Xian Jiaotong
University.
[106] Z. Wang and George J. Klir, (1991), Fuzzy Measure Theory, Plenum
Press, New York.
[107] Z. Wang, (1990), Absolute continuity and extension of fuzzy measures.
Fuzzy Sets and Systems, 36, 395-399.
[108] Z. Wang, (1985), Asymptotic structural characteristics of fuzzy measures
and their applications, Fuzzy Sets and Systems, 16, 277-290.
[109] Z. Wang, (1992), On the null-additivity and the auto-continuity of a fuzzy
measure, Fuzzy Sets and Systems, 45, 223-226.

206

[110] Z. Wang, (1984), The auto-continuity of set function and the fuzzy
integral, J. Math. Anal. Appl. 995, 195-218.
