
2014 IEEE World Congress on Computational Intelligence (WCCI)
2014 International Joint Conference on Neural Networks
06 Jul - 11 Jul 2014, Beijing, China

Model ensemble for an effective on-line reconstruction of missing data in sensor networks
Author 1, Author 2, Author 3
Affiliation
k-NN classifiers associate a classification label to an input as the majority of its k nearest training samples:
- no proper training phase;
- reduced computational complexity;
- consistency: the error probability converges to the Bayes error (Fukunaga et al.);
- k is typically selected by cross-validation or Leave-One-Out (LOO);
- lack of theoretical results: how to select k given n?
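A minimal NumPy sketch of the rule (the Euclidean metric and all names here are illustrative assumptions, not the poster's implementation):

    import numpy as np

    def knn_classify(x, X_train, y_train, k):
        """Label x with the majority class among its k nearest training samples."""
        dists = np.linalg.norm(X_train - x, axis=1)   # distances to every training point
        nearest = np.argsort(dists)[:k]               # indices of the k nearest samples
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        return labels[np.argmax(counts)]              # majority vote

Note that nothing is fit in advance: the whole training set is consulted at query time, which is what "no proper training phase" refers to.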

[Figure: a two-class example point (x, r*), where r* is the radius of the ball enclosing the k nearest training neighbours of x among the samples of classes 1 and 2.]
The proposed algorithm


Configuration

[Figure: configuration phase: sensing nodes 1-5 transmit to the cluster head.]
Execution

The Distributed Change-Detection Test:

1. Each unit: configure the ICI-based CDT using the training observations {xi, 1 ≤ i ≤ N};
2. Each unit: send the features extracted from the training observations to the cluster-head;
3. while (units acquire new observations at time T) {
4.   Each unit: run the ICI-based CDT at time T;
5.   let ST be the set of units where the ICI-based CDT detects a change at time T;
6.   if (ST is not empty) {
7.     Each unit in ST: run the refinement procedure and send Tref,i to the cluster-head;
8.     Cluster-head: compute Tref out of the Tref,i, 1 ≤ i ≤ N, and send Tref to each unit;
9.     Each unit: send to the cluster-head the values in [Tref, T] of the feature detecting the change;
10.    Cluster-head: run the Hotelling T² test to assess the stationarity of the features;
11.    if (the second-level test detects a change) {
12.      the change is validated;
13.      Each unit in ST: re-train the ICI-based CDT on the new process status }
14.    else {
15.      the change is discarded (false positive);
16.      Each unit in ST: reconfigure the ICI-based CDT to improve its performance }}}
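The protocol skeleton (steps 1-9) can be sketched as follows; the ICI-based CDT and its refinement procedure are replaced by crude stand-ins (a z-score detector and a fixed look-back window), and the min-based aggregation of the Tref,i is an assumption, so this is only an illustration of the message flow:

    import numpy as np

    class StandInCDT:
        """Stand-in for the ICI-based CDT: flags a change when the mean of a
        recent window drifts from the training mean (not the actual ICI rule)."""
        def __init__(self, train, thresh=4.0):
            self.mu, self.sigma = train.mean(), train.std(ddof=1)
            self.thresh = thresh

        def detects_change(self, window):
            z = abs(window.mean() - self.mu) / (self.sigma / np.sqrt(len(window)))
            return z > self.thresh

        def refine(self, T, lookback=25):
            return max(0, T - lookback)   # stand-in for the refinement procedure

    def distributed_cdt(streams, n_train=100, win=20):
        """Steps 1-9: units detect, refine and report; the cluster head aggregates."""
        units = [StandInCDT(s[:n_train]) for s in streams]        # step 1
        for T in range(n_train + win, len(streams[0])):           # step 3
            S_T = [i for i, u in enumerate(units)                 # steps 4-5
                   if u.detects_change(streams[i][T - win:T])]
            if S_T:                                               # step 6
                T_ref = min(units[i].refine(T) for i in S_T)      # steps 7-8
                # step 9: each unit would now send its feature values in
                # [T_ref, T] for the Hotelling T^2 validation (see below)
                return S_T, T_ref, T
        return [], None, None

    rng = np.random.default_rng(1)
    streams = [np.r_[rng.normal(0, 1, 300), rng.normal(1.5, 1, 200)] for _ in range(5)]
    print(distributed_cdt(streams))   # detecting units, refined estimate, detection time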

[Figure: execution phase, network in stationary conditions: sensing nodes 1-5 stream features to the cluster head, which runs the Hotelling T² test (H.T.) when a change is to be validated.]
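Step 10 validates a detected change with a Hotelling T² test on the features received in [Tref, T]. A minimal two-sample version is sketched below (the two-sample form is an assumption; the poster does not spell out the variant used):

    import numpy as np
    from scipy import stats

    def hotelling_t2(X, Y):
        """Two-sample Hotelling T^2 test on the mean feature vectors of
        X (n x p) and Y (m x p); returns the statistic and its p-value."""
        n, p = X.shape
        m = Y.shape[0]
        diff = X.mean(axis=0) - Y.mean(axis=0)
        Sp = ((n - 1) * np.cov(X, rowvar=False)        # pooled covariance
              + (m - 1) * np.cov(Y, rowvar=False)) / (n + m - 2)
        T2 = n * m / (n + m) * diff @ np.linalg.solve(Sp, diff)
        F = (n + m - p - 1) / (p * (n + m - 2)) * T2   # T^2 -> F statistic
        return T2, stats.f.sf(F, p, n + m - p - 1)

    rng = np.random.default_rng(2)
    before = rng.normal(0.0, 1.0, (100, 3))   # features in stationary conditions
    after = rng.normal(0.5, 1.0, (80, 3))     # features after the candidate change
    T2, pval = hotelling_t2(before, after)
    print(T2, pval)   # a small p-value validates the change (step 12)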

Experimental Results
Two classification problems are considered:

a) a mono-dimensional classification problem with two equi-probable classes ruled by Gaussian distributions:
   p(x|ω1) ~ N(0,1), p(x|ω2) ~ N(2,1), T = [-4, 6];

b) a two-dimensional classification problem characterized by two equi-probable classes ruled by chi-square distributions:
   p(x1|ω1) = p(x2|ω1) ~ χ²(15), p(x1|ω2) = p(x2|ω2) ~ χ²(25), T = {0 ≤ x1, x2 ≤ 50}.

[Figures, for both problems: Pe(k) w.r.t. k (simulated vs. theory, with ko, k_CV and k_LOO marked); ko w.r.t. n (theory, CV, LOO and the Bayes error, for n up to 500); number of k within [Pe(ko), Pe(ko)+ε], for ε between 0.001 and 0.05 and N = 50, 100, 500.]

The theoretical derivation is experimentally sound.
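A minimal Monte Carlo sketch of problem (a), estimating Pe(k) by simulation (sizes, seeds and names are illustrative; the theoretical Pe(k) curve and the ko, k_CV, k_LOO selections are not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n):
        """Problem (a): equi-probable classes, p(x|w1) = N(0,1), p(x|w2) = N(2,1)."""
        y = rng.integers(0, 2, n)
        return rng.normal(2.0 * y, 1.0), y

    def pe_of_k(k, n=500, n_test=5000):
        """Monte Carlo estimate of the k-NN error probability Pe(k)."""
        x_tr, y_tr = sample(n)
        x_te, y_te = sample(n_test)
        nn = np.argsort(np.abs(x_te[:, None] - x_tr[None, :]), axis=1)[:, :k]
        y_hat = y_tr[nn].mean(axis=1) > 0.5    # majority vote (odd k avoids ties)
        return np.mean(y_hat != y_te)

    # the Bayes error of problem (a) is Phi(-1) ~ 0.159 (boundary at x = 1)
    for k in (1, 5, 25, 101):
        print(f"k = {k:3d}   Pe(k) ~ {pe_of_k(k):.3f}")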
