
1

ADAPTIVE BLIND SIGNAL


PROCESSING:
Blind Source Separation




Guillermo Bedoya
UNIVERSITAT POLITÈCNICA DE CATALUNYA (UPC)
INP Grenoble
Grup d'Arquitectures Hardware Avançades [AHA]
Laboratoire des Images et des Signaux [LIS]
2
1. Introduction to Blind Signal Processing: Problems and
applications
2. Blind Source Separation (BSS) : Statistical principles
3. Application of Information theory to BSS

Coffee Break

4. Adaptive Learning Algorithms for BSS
5. BSS of Nonlinear mixing models
6. Hardware considerations
OUTLINE
3
1. INTRODUCTION TO BLIND
SIGNAL PROCESSING
4
Objective:
To discuss the basic principles of Blind Source Separation
in the context of Blind Signal Processing.
INTRODUCTION
5
DEFINITION
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
For many authors, the so-called blind signal processing techniques include:

BLIND SOURCE SEPARATION (BSS)
Blind Signal Extraction (BSE)
Blind Source Deconvolution (BSD)

6
DEFINITION
Blind Source Separation (BSS) and Independent Component
Analysis (ICA) are emerging techniques of array processing
and data analysis that aim to recover unobserved signals or
sources from observed mixtures (typically, the output of an
array of sensors), exploiting only the assumption of Mutual
Independence between the signals.
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
7
DEFINITION II
[Diagram: speakers emit the sources s1, s2, s3; the environment (mixing matrix A) mixes them; a microphone array observes the mixtures x1, x2, x3; the BSS/ICA signal-processing algorithm (separating matrix B) produces the source estimates y = Bx from the observations x = As.]
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
8
APPLICATIONS
Processing of Communication Signals:
Radiating source estimation: applications to airport surveillance.
Blind beamforming.
Blind separation of multiple co-channel BPSK signals arriving at antenna arrays.
Biomedical Signal Processing:
ECG & EEG.
Monitoring:
Multitag contactless identification systems.
Power plant monitoring.
Environmental and biomedical chemical detection and identification.
Alternative to Principal Component Analysis.
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
9
PRINCIPLES
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
The simplest BSS Model: Noiseless Linear Instantaneous Mixing.
We assume the existence of $j$ independent signals $s_1(t), \ldots, s_j(t)$ and the same number of observations (or observed mixtures) $x_1(t), \ldots, x_n(t)$, where number of sensors = number of sources ($n = j$), expressed as:

$x_i[t] = a_{i1} s_1[t] + \cdots + a_{ij} s_j[t], \qquad i = 1, \ldots, n$

[Diagram: sources s1, s2, …, sj enter the mixing matrix A (sensor array) and produce the observations x1, x2, …, xn.]

$\mathbf{x}[t] = A\,\mathbf{s}[t]$
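As an illustration, here is a minimal numerical sketch of this noiseless instantaneous model; the two sources, the matrix values and the use of NumPy are illustrative assumptions, not from the slides:

```python
import numpy as np

# Minimal sketch of the noiseless linear instantaneous model x[t] = A s[t].
t = np.arange(1000)
s = np.vstack([np.sin(2 * np.pi * 0.013 * t),            # s1: sinusoid
               np.sign(np.sin(2 * np.pi * 0.007 * t))])   # s2: square wave
A = np.array([[0.8, 0.3],
              [0.4, 0.9]])                                 # mixing matrix, unknown to the separator
x = A @ s                                                  # observations, one column per time sample
```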
10
PRINCIPLES II
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
[Diagram: unobserved signals s = (s1, …, sn) pass through the mixing matrix A to give the observations x; the separating matrix B produces the estimated source signals y = (y1, …, yn).]

$\mathbf{x}[t] = (x_1[t], \ldots, x_n[t])^T = A\,\mathbf{s}[t], \qquad \mathbf{y}[t] = (y_1[t], \ldots, y_n[t])^T = B\,\mathbf{x}[t]$

Ideally $B = A^{-1}$; in general $\mathbf{y} = BA\,\mathbf{s} = C\mathbf{s}$ with $C = BA$, and separation is achieved when $C = PD$ (a permutation matrix times a diagonal scaling matrix).
11
PRINCIPLES III
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
The BSS problem consists in recovering the source vector s(t) using only:

The observed data x(t).
The assumption of mutual independence between the entries of
the input vector s(t).
Some prior information about the probability distribution of the
inputs.
12
ECG
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
APPLICATIONS (ECG)
13
CDMA System with N-users:
The received signal x is the superposition of the spread signals of the N users plus additive Gaussian noise.

[Diagram: bits b1, b2, …, bN are spread by code1, code2, …, codeN (L chips each), weighted by the powers P1, P2, …, PN, summed, and corrupted by additive noise.]

$\mathbf{x}_{L\times 1} = H_{L\times N}\, \mathbf{b}_{N\times 1} + \mathbf{n}_{L\times 1}$

BSS allows b to be recovered from x without knowing H, exploiting only statistical independence.
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
APPLICATIONS (CDMA)
14
APPLICATIONS (Image processing)
[Diagram: image decomposition → ICA (matrix B) → components processing → image composition. The m × n image I is decomposed into k × k blocks, giving (m/k) × (n/k) blocks that form the data matrix X; each block x_t is mapped to components y_t, processed, and the image is recomposed.]
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
15
APPLICATIONS (Image processing)
1. INTRODUCTION TO BLIND SIGNAL PROCESSING
16
2. BLIND SOURCE SEPARATION:
STATISTICAL PRINCIPLES
17
STATISTICAL MODEL
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
BSS exploits:
SPATIAL DIVERSITY: BSS looks for structure across the sensors, not across time.
SAMPLE DISTRIBUTION.
Statistical Model:
MIXING MATRIX: Its columns are assumed to be linearly independent (so that
it is invertible)
SOURCE DISTRIBUTION: the distribution of each source is a Nuisance
parameter, i.e., we are not primarily interested in it...
18
LINEAR MIXING
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
If each source is assumed to have a pdf denoted $q_i(\cdot)$, the joint pdf $q(\mathbf{s})$ of the source vector s is:

$q(\mathbf{s}) = q_1(s_1) \times \cdots \times q_n(s_n) = \prod_{i=1}^{n} q_i(s_i)$
19
LINEAR MIXING
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
[Diagram: general approach. Sources s1, s2, with marginal pdfs p1(s1), p2(s2) and joint pdf p(s1, s2), are mixed, x = As; the observations are first whitened, z = Wx, and then rotated, y = Uz, to recover the sources.]

1. Whitening: $\mathbf{z} = W\mathbf{x}$, chosen so that $E\{\mathbf{z}\mathbf{z}^T\} = I$.
2. Rotation: $\mathbf{y} = U\mathbf{z}$.
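A minimal sketch of the whitening step, assuming a PCA/eigendecomposition-based whitening in NumPy (one common choice; the slides do not specify which whitening is used):

```python
import numpy as np

# Whitening z = W x so that E{z z^T} = I; the remaining rotation U is then
# found by optimizing an ICA contrast (not shown here).
def whiten(x):
    x_c = x - x.mean(axis=1, keepdims=True)        # remove the mean
    C = np.cov(x_c)                                 # sensor covariance E{x x^T}
    d, E = np.linalg.eigh(C)                        # C = E diag(d) E^T
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T         # W = C^(-1/2)
    return W @ x_c, W
```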
20
CONSTRAINTS
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
[Figure: whitening and rotation. Joint PDFs of different mixing matrices for two signals with uniform distribution.]
21
CONSTRAINTS
Joint PDF of different mixing matrices for two
signals with Gaussian distribution
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
22
INDEPENDENCE & DECORRELATION
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES

Intuitive: the variables y1 and y2 are independent if the value of y1 provides no information about the value of y2.

Mathematical (probability density functions, pdf): let p(y1, y2) be the joint probability density function of y1 and y2, and let p1(y1) be the marginal pdf of y1,

$p_1(y_1) = \int p(y_1, y_2)\, dy_2$

and similarly for y2. Then y1 and y2 are independent if and only if

$p(y_1, y_2) = p_1(y_1)\, p_2(y_2).$
23
INDEPENDENCE & DECORRELATION
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
Concept of independence

Math: if y1 and y2 are independent, then for any functions g(.) and h(.):

$E\{g(y_1)\, h(y_2)\} = E\{g(y_1)\}\, E\{h(y_2)\}.$

Uncorrelatedness is the special case in which g(.) and h(.) are the identity:

$E\{y_1 y_2\} - E\{y_1\}\, E\{y_2\} = 0.$

Uncorrelatedness does not imply independence.
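A small numerical illustration of this last point, using points on a circle (an assumed example, not from the slides): y1 and y2 are uncorrelated, yet clearly dependent.

```python
import numpy as np

# y1 = cos(theta), y2 = sin(theta): E{y1 y2} - E{y1}E{y2} ~ 0 (uncorrelated),
# but E{y1^2 y2^2} != E{y1^2} E{y2^2}, so they are not independent.
theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 100_000)
y1, y2 = np.cos(theta), np.sin(theta)
print(np.mean(y1 * y2) - np.mean(y1) * np.mean(y2))              # close to 0
print(np.mean(y1**2 * y2**2) - np.mean(y1**2) * np.mean(y2**2))  # about -1/8, far from 0
```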
24
INDEPENDENCE
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
Statistical measures of Independence and Gaussianity:

Kullback-Leibler divergence (signal distribution):

$K[f \,\|\, g] = \int f(\mathbf{y}) \log \frac{f(\mathbf{y})}{g(\mathbf{y})}\, d\mathbf{y}$

Kurtosis, or fourth-order cumulant (signal Gaussianity):

$\mathrm{kurt}(y) = E\{y^4\} - 3\left(E\{y^2\}\right)^2, \qquad \mathrm{kurt}(y) = E\{y^4\} - 3 \ \text{(unit variance)}$

Negentropy, together with Mutual Information (I) and Entropy (H).
25
OBJECTIVE (CONTRAST) FUNCTIONS
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
BSS ALGORITHM = Contrast Function (a measurement of independence) + Optimization Method (minimize or maximize that measurement).

SOME CONTRAST FUNCTIONS:
Maximum Likelihood
Mutual Information or Entropy Maximization
Orthogonal Contrast
INFOMAX
Higher-order approximations
Nonlinear cross-correlations
26
OBJECTIVE (CONTRAST) FUNCTIONS
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
The properties of the BSS/ICA method depend on both of these elements (the contrast function and the optimization method). In particular:

The statistical properties (e.g., consistency, asymptotic variance,
robustness) of the ICA method depend on the choice of the objective
function.

The algorithmic properties (e.g., convergence speed, memory
requirements, numerical stability) depend on the optimization
algorithm.

27
ENTROPY MAXIMIZATION CONTRAST
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
MUTUAL INFORMATION OR ENTROPY MAXIMIZATION

$\phi_{ME}[\mathbf{y}] = K\!\left[\mathbf{y} \,\|\, \tilde{\mathbf{y}}\right]$

where $\tilde{\mathbf{y}}$ is a vector with independent components.

Considering the whitening constraint, the entropy of y, H[y], is invariant under rotations, so

$\phi_{ME}[\mathbf{y}] = \sum_{i=1}^{N} H[y_i] - H[\mathbf{y}] = \sum_{i=1}^{N} H[y_i] + \text{cte.}$

Separating algorithm: $\min_{B}\ \phi_{ME}[B\mathbf{x}]$
28
MAXIMUM LIKELIHOOD CONTRAST
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
Maximum Likelihood:

$\phi_{ML}[\mathbf{y}] = K\!\left[p_{\mathbf{y}} \,\|\, q\right] = \int p_{\mathbf{y}}(\mathbf{y}) \log \frac{p_{\mathbf{y}}(\mathbf{y})}{q_1(y_1)\cdots q_N(y_N)}\, d\mathbf{y}$

where the pdf q(s) of s = A^{-1}x is known.

The Maximum Likelihood contrast maximizes the independence of the output components and minimizes the DISTANCE to the pdf of the data.
29
KURTOSIS
Non-Gaussianity measure (Central Limit Theorem)

Kurtosis (fourth-order cumulant):

$\mathrm{kurt}(y) = E\{y^4\} - 3\left(E\{y^2\}\right)^2$

If y has unit variance:

$\mathrm{kurt}(y) = E\{y^4\} - 3$

Kurtosis = 0 for Gaussian random variables.
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
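A sample-based sketch of this kurtosis definition; the three test distributions below are illustrative assumptions:

```python
import numpy as np

# Sample estimate of kurt(y) = E{y^4} - 3 (E{y^2})^2 after mean removal.
def kurtosis(y):
    y = y - np.mean(y)
    return np.mean(y**4) - 3.0 * np.mean(y**2)**2

rng = np.random.default_rng(1)
for name, y in [("uniform", rng.uniform(-1, 1, 100_000)),    # sub-Gaussian: negative
                ("gaussian", rng.normal(size=100_000)),       # approximately 0
                ("laplacian", rng.laplace(size=100_000))]:    # super-Gaussian: positive
    print(name, kurtosis(y))
```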
30
KURTOSIS
[Figure: Laplacian pdf: positive kurtosis, leptokurtic, super-Gaussian. Uniform pdf: negative kurtosis, platykurtic, sub-Gaussian.]
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
31
3. APPLICATION OF
INFORMATION THEORY TO BSS
32
INTRODUCTION
3. APPLICATIONS OF INFORMATION THEORY TO BSS
Objective:
To discuss the INFOMAX criterion and apply it to the
problem of BSS
33
INTRODUCTION TO INFORMATION THEORY
3. APPLICATIONS OF INFORMATION THEORY TO BSS
The binary entropy function

The entropy of a random variable X is:

$H(X) = -\sum_{i=1}^{N} p(x_i) \log p(x_i)$

Suppose X is a binary random variable:

X = 1 with probability p, X = 0 with probability 1-p.

Then the entropy of X is

$H(X) = -p \log p - (1-p) \log (1-p)$

Since it depends only on p, this is also sometimes written H(p).
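A direct sketch of the binary entropy function using a base-2 logarithm (giving bits); the sample values are illustrative:

```python
import numpy as np

# H(p) = -p log2(p) - (1-p) log2(1-p), valid for 0 < p < 1.
def binary_entropy(p):
    p = np.asarray(p, dtype=float)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

print(binary_entropy(0.5))    # 1.0 bit: maximum uncertainty
print(binary_entropy(0.11))   # about 0.5 bits
```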
34
INTRODUCTION TO INFORMATION THEORY
3. APPLICATIONS OF INFORMATION THEORY TO BSS
35
INTRODUCTION TO INFORMATION THEORY
3. APPLICATIONS OF INFORMATION THEORY TO BSS
36
RELATIVE ENTROPY AND MUTUAL INFORMATION
3. APPLICATIONS OF INFORMATION THEORY TO BSS
37
H(.) and I(.) applied to BSS I
3. APPLICATIONS OF INFORMATION THEORY TO BSS
[Diagram: s(t) → A → x(t) → B → y(t)]

Let s_i(t), i = 1, 2, …, n, be a set of statistically independent signals, with x(t) = As(t).

We wish to determine the matrix B so that y(t) = Bx(t) = BAs(t) recovers s(t) as fully as possible. We will take as a criterion the joint entropy of the output, H(y).
38
H(.) and I(.) applied to BSS II
$H(\mathbf{y}) = \sum_{i=1}^{N} H(y_i) - I(y_1, \ldots, y_N)$

If we maximize H(y), we should:
1. Maximize each H(y_i).
2. Minimize I(y_1, …, y_N).

The H(y_i) are maximized when (and if) the outputs are uniformly distributed.
The mutual information is minimized when the outputs are all independent!
3. APPLICATIONS OF INFORMATION THEORY TO BSS
39
3. APPLICATIONS OF INFORMATION THEORY TO BSS
H(.) and I(.) applied to BSS III
How do we work with the mutual information of the outputs?

We consider the case of adapting a processing function g which operates on the scalar X through Y = g(X), in order to maximize or minimize the mutual information between X and Y.

But achieving the MI minimization, and consequently independence, requires that g have the form of the Cumulative Distribution Function (CDF) of s_i.

[Diagram: x(t) → B → g → y(t)]
40
3. APPLICATIONS OF INFORMATION THEORY TO BSS
H(.) and I(.) applied to BSS IV
We may not know the pdf of X; however, we can assume a particular functional form. For example, assuming that p(s_i) is super-Gaussian, we take the parametric sigmoid

$g(X) = g(X, b), \qquad g(x) = \frac{1}{1 + e^{-bx}}$
41
H(.) and I(.) applied to BSS V
Based on the last assumption (p(s_i) super-Gaussian), we can write:

$H(y_i) = -E\left[\log p(y_i)\right]$

where, with $u_i = b x_i$,

$p(y_i) = \frac{p(u_i)}{\left|\partial y_i / \partial u_i\right|}$

so that

$H(y_i) = -E\!\left[\log \frac{p(u_i)}{\left|\partial y_i / \partial u_i\right|}\right]$

and

$H(\mathbf{y}) = \sum_{i=1}^{N} H(y_i) - I(\mathbf{y}) = -\sum_{i=1}^{N} E\!\left[\log \frac{p(u_i)}{\left|\partial y_i / \partial u_i\right|}\right] - I(\mathbf{y})$
3. APPLICATIONS OF INFORMATION THEORY TO BSS
42
H(.) and I(.) applied to BSS V
We want to determine B to maximize the joint entropy of the output, H(y). So, an adaptive scheme is to take

$\Delta b \propto \frac{\partial H}{\partial b}$

In our specific case we have

$g(x) = y = \frac{1}{1 + e^{-bx}}$

so that

$\frac{\partial y}{\partial x} = b\, y(1-y), \qquad \frac{\partial}{\partial b}\!\left(\frac{\partial y}{\partial x}\right) = y(1-y)\,[\,1 + bx(1-2y)\,]$

and

$\frac{\partial}{\partial b} \ln\left|\frac{\partial y}{\partial x}\right| = \left(\frac{\partial y}{\partial x}\right)^{-1} \frac{\partial}{\partial b}\!\left(\frac{\partial y}{\partial x}\right)$

3. APPLICATIONS OF INFORMATION THEORY TO BSS
43
From

$\Delta b = \left(\frac{\partial y}{\partial x}\right)^{-1} \frac{\partial}{\partial b}\!\left(\frac{\partial y}{\partial x}\right) = \frac{y(1-y)\,[\,1 + bx(1-2y)\,]}{b\, y(1-y)}$

we obtain

$\Delta b \propto \frac{1}{b} + x(1-2y), \qquad \Delta b = b^{-1} + (1-2y)\,x$

where the term $(1-2y)$ plays the role of the score function, and the weight update rule can be

$b^{[k+1]} = b^{[k]} + \mu_b\, \Delta b$
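As a hedged sketch, the scalar update rule above can be run directly on samples; the Laplacian input, learning rate, and initialization below are assumptions chosen for illustration:

```python
import numpy as np

# Stochastic scalar InfoMax: y = 1/(1 + exp(-b*x)), delta_b = 1/b + x(1 - 2y).
rng = np.random.default_rng(2)
x = rng.laplace(size=10_000)     # a super-Gaussian input signal
b, mu = 0.1, 0.01                # initial weight and learning rate (assumed)

for xk in x:
    y = 1.0 / (1.0 + np.exp(-b * xk))
    b += mu * (1.0 / b + xk * (1.0 - 2.0 * y))   # b[k+1] = b[k] + mu * delta_b
print(b)                          # learned gain
```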
44
General form

$H(\mathbf{y}) = -\sum_{i=1}^{N} E\!\left[\log \frac{p(u_i)}{\left|\partial y_i / \partial u_i\right|}\right] - I(\mathbf{y})$

From this term,

$\frac{\partial H(\mathbf{y})}{\partial B} = B^{-T} - \varphi(\mathbf{u})\,\mathbf{x}^T, \qquad \varphi(\mathbf{u}) = -\frac{\partial p(\mathbf{u})/\partial \mathbf{u}}{p(\mathbf{u})} \ \text{(score function)}$
3. APPLICATIONS OF INFORMATION THEORY TO BSS
45
METHOD OVERVIEW
2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES
[Diagram: s → A → x → B → y]

1. Initialization of the separating matrix B (randomly).
2. Generation of the outputs: y = Bx.
3. Measurement of the independence of the outputs (contrast function).
For count = 1 to number of iterations (nit): change the matrix B according to $\Delta B = \partial H(\mathbf{y}) / \partial B$ and repeat until nit:
$B^{[k+1]} = B^{[k]} + \mu_b\, \Delta B, \qquad \mathbf{y} = B^{[k+1]}\mathbf{x}$
4. End of algorithm.
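A compact NumPy sketch of this loop, assuming super-Gaussian sources so that tanh can stand in for the true score function; the sources, sizes, mixing matrix, and step size are illustrative assumptions:

```python
import numpy as np

# Conventional-gradient InfoMax/ML loop: B[k+1] = B[k] + mu * dH(y)/dB,
# with dH/dB approximated as inv(B^T) - phi(y) x^T averaged over the batch.
rng = np.random.default_rng(3)
n, T = 2, 5000
s = rng.laplace(size=(n, T))                    # unobserved independent sources
A = np.array([[0.9, 0.4],
              [0.3, 0.8]])                       # unknown mixing matrix (assumed values)
x = A @ s                                       # observations

B = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # 1. initialization of B
mu, nit = 0.01, 2000
for _ in range(nit):
    y = B @ x                                   # 2. outputs
    grad = np.linalg.inv(B.T) - np.tanh(y) @ x.T / T   # 3. contrast gradient
    B = B + mu * grad                           # weight update
y = B @ x                                       # estimated sources
print(B @ A)                                    # ideally approaches a scaled permutation matrix
```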
46
47
4. ADAPTIVE LEARNING
ALGORITHMS FOR BSS
48
ADAPTIVE LEARNING
4. ADAPTIVE LEARNING ALGORITHMS FOR BSS
BSS framework:

[Diagram: s → A → x → B → y]

In Euclidean spaces, the learning rule based on the (stochastic) gradient reaches a solution:

$B_{t+1} = B_t - \mu_t\, \nabla\phi(B_t)$

The algorithm is stable when the contrast is at a minimum.

Problem: the space of invertible matrices B is not Euclidean.
49
GRADIENT TECHNIQUES
4. ADAPTIVE LEARNING ALGORITHMS FOR BSS
Conventional gradient (Euclidean space):
the infinitesimal transformation of B is expressed as $B \to B + \delta B$.

Relative gradient:
the infinitesimal transformation of B is expressed as $B \to (I + \varepsilon)\,B = B + \varepsilon B$.

[Figure: the space of separating matrices.]
50
RELATIVE GRADIENT
4. ADAPTIVE LEARNING ALGORITHMS FOR BSS
Conventional gradient (Euclidean space):

$\phi(B + \delta B) = \phi(B) + \langle \nabla\phi(B) \mid \delta B \rangle + o(\delta B)$

where

$\langle A \mid B \rangle = \mathrm{Trace}\!\left[A^T B\right] = \sum_{i,j=1}^{N} A_{ij} B_{ij} \qquad \text{and} \qquad \left(\nabla\phi(B)\right)_{ij} = \frac{\partial \phi}{\partial B_{ij}}$

Relative gradient:

$\phi(B + \varepsilon B) = \phi(B) + \langle \tilde{\nabla}\phi(B) \mid \varepsilon \rangle + o(\varepsilon)$
51
RELATIVE GRADIENT
4. ADAPTIVE LEARNING ALGORITHMS FOR BSS
Comparison:

$\tilde{\nabla}\phi(B) = \nabla\phi(B)\, B^T$

[Figure: conventional gradient $\nabla\phi(B)$ vs. relative gradient $\tilde{\nabla}\phi(B)$ in the space of separating matrices.]
52
NATURAL GRADIENT
4. ADAPTIVE LEARNING ALGORITHMS FOR BSS
Natural gradient:

Formulation: find dB that minimizes $\phi(B + dB)$ subject to $\|dB\|^2 = c^2$:

$\min_{dB}\ \phi(B + dB)$

[Figure: comparison of the gradient directions in the space of separating matrices.]

Natural gradient:

$\nabla^{\mathrm{nat}}\phi(B) = \tilde{\nabla}\phi(B)\, B = \nabla\phi(B)\, B^T B$
53
LEARNING ALGORITHMS
4. ADAPTIVE LEARNING ALGORITHMS FOR BSS
Algorithm based on the conventional gradient:

$B_{t+1} = B_t - \mu_t\, \nabla\phi(B_t)$

When the ML contrast is used,

$\phi_{ML}[\mathbf{y}] = \phi_{ML}[B\mathbf{x}] = K\!\left[p_{\mathbf{y}} \,\|\, q\right] = \int p_{\mathbf{y}}(\mathbf{y}) \log \frac{p_{\mathbf{y}}(\mathbf{y})}{q_1(y_1)\cdots q_N(y_N)}\, d\mathbf{y}$

Taking into account that

$\frac{\partial \log q_i(y_i)}{\partial B_{ij}} = \frac{q_i'(y_i)}{q_i(y_i)}\, x_j$

and that

$H(\mathbf{y}) = -\int p_{\mathbf{y}}(\mathbf{y}) \log p_{\mathbf{y}}(\mathbf{y})\, d\mathbf{y} = \log\left|\det(B)\right| + \text{cte.}, \qquad \nabla_B H(\mathbf{y}) = \left(B^T\right)^{-1}$

the update becomes

$B_{t+1} = B_t + \mu_t \left( I - \varphi(\mathbf{y}_t)\, \mathbf{y}_t^T \right) B_t^{-T}$
54
LEARNING ALGORITHMS
Algorithm based on the natural gradient:

$B_{t+1} = B_t - \mu_t\, \nabla^{\mathrm{nat}}\phi(B_t), \qquad \nabla^{\mathrm{nat}}\phi(B) = \tilde{\nabla}\phi(B)\, B = \nabla\phi(B)\, B^T B$

In the case of the ML contrast $\phi_{ML}[\mathbf{y}]$:

$B_{t+1} = B_t + \mu_t \left( I - \varphi(\mathbf{y}_t)\, \mathbf{y}_t^T \right) B_t$

where $\varphi_i(y_i) = -\dfrac{q_i'(y_i)}{q_i(y_i)}$, $i = 1, \ldots, N$, is the score function.

The algorithm based on the relative gradient leads to the same multiplicative update.
55
5. BSS OF NON LINEAR MIXING
MODELS
56
THE POST NON-LINEAR MODEL
Given a set of sources s linearly mixed by means of the matrix A, the linear mixing part is described by:

$\mathbf{x} = A\mathbf{s}$

The invertible non-linear transfer function f(.) is represented by:

$\mathbf{e} = f(\mathbf{x})$

where e represents the observations at the output of the sensor system. The sources s are recovered if the non-linear functions g_i(e_i) and the de-mixing matrix B are the inverses of f_i(x_i) and A, respectively.

[Diagram: s1, s2 → A → x1, x2 → f1, f2 → e1, e2 → g1, g2 → y1, y2 → B → u1, u2.]

5. BSS OF NON-LINEAR MIXING MODELS
57
THE POST NON-LINEAR MODEL
Consequently, the signals are related by the equation

$\mathbf{y} = g(\mathbf{e})$

where

$\mathbf{y} = g(f(A\mathbf{s}))$

The output is described by

$\mathbf{u} = B\mathbf{y} = B\, g(f(A\mathbf{s}))$

In order to separate the observations into statistically independent components, we use a measure of the degree of dependence of the components of u. When this measure reaches its absolute minimum, the components are guaranteed to be independent.

5. BSS OF NON-LINEAR MIXING MODELS
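A minimal sketch of this post-non-linear chain; tanh/arctanh as the distortion and its compensation, and the matrix values, are illustrative assumptions:

```python
import numpy as np

# Post-non-linear model: e = f(A s), then u = B g(e).
rng = np.random.default_rng(4)
s = rng.uniform(-1.0, 1.0, size=(2, 1000))            # independent sources
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])                             # linear mixing stage
e = np.tanh(A @ s)                                     # component-wise distortion f(.)
# With perfect compensation g = f^{-1} and B = A^{-1}, the sources are recovered:
u = np.linalg.inv(A) @ np.arctanh(e)
print(np.max(np.abs(u - s)))                           # close to 0
```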
58
THE POST NON-LINEAR MODEL


5. BSS OF NON-LINEAR MIXING MODELS
The mutual information I(.) can be employed as an independence measure of the components of u. The mutual information (MI) of the output can be expressed via the signal entropy H:

$I(\mathbf{u}) = \sum_i H(u_i) - H(\mathbf{u})$

where

$H(\mathbf{u}) = -\int p(\mathbf{u}) \log p(\mathbf{u})\, d\mathbf{u}$

The MI is non-negative, and zero when the components of u are statistically independent of one another.

The MI has the property that, if we perform invertible transformations on the individual components of u, resulting in $z_i = \psi_i(u_i)$, the mutual information of the components $z_i$ is equal to the mutual information of the components $u_i$.
59
REALISTIC EXAMPLE
5. BSS OF NON-LINEAR MIXING MODELS
[Diagram: linear mixing stage (matrix A, coefficients a_i, a_j) → non-linear distortion (f1, f2, sensor responses ID1, ID2) → non-linear compensation (g1, g2: gID1, gID2) → linear de-mixing stage (B).]

$ID = a + b \cdot \ln(a_i + K_{ij}\, a_j), \qquad gID = e^{(ID - a)/b}$
60
1. Initialization: B = I, gID = ID, and the parameters a and b.
2. Loop:
1. Compute the outputs: y = Bx.
2. Estimation of the parameters:
a[k+1] = a[k] + stepsize · Δa
b[k+1] = b[k] + stepsize · Δb
3. Normalization.
4. Linear BSS algorithm: B[k+1] = B[k] + stepsize · ΔB.
5. Repeat until convergence.
ALGORITHM
5. BSS OF NON-LINEAR MIXING MODELS
61
6. HARDWARE
CONSIDERATIONS
62
OVERVIEW
HARDWARE IMPLEMENTATION:

DSP based implementation
FPGA
Fully Analog system Integration
HYBRID INTEGRATION
ASIC with a DSP core associated with the sensor array.
6. HARDWARE CONSIDERATIONS
63
DSP IMPLEMENTATION
Higher flexibility at low system cost can be obtained by using a DSP for fast software implementation of the algorithm. Issues to consider (a small quantization sketch follows after this list):

Finite Precision errors
Fixed point - floating point
Number of bits
Analog to Digital conversion
Finite word-length used to store all internal algorithmic quantities
6. HARDWARE CONSIDERATIONS
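An illustrative fixed-point emulation, not taken from the slides: quantizing the separating matrix to a given word length shows how finite precision perturbs the algorithmic quantities.

```python
import numpy as np

# Quantize values to a signed fixed-point grid with the given word length
# and full-scale range (parameters are assumptions for illustration).
def quantize_fixed_point(values, n_bits=12, full_scale=4.0):
    lsb = full_scale / 2 ** (n_bits - 1)              # quantization step
    q = np.round(np.asarray(values) / lsb) * lsb
    return np.clip(q, -full_scale, full_scale - lsb)

B = np.array([[0.8123, -0.3071], [0.4289, 0.9012]])
print(quantize_fixed_point(B, n_bits=8))              # coarser word length -> larger error
```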
64
DSP ARCHITECTURE
6. HARDWARE CONSIDERATIONS
65
ANALOG CMOS CIRCUIT IMPLEMENTATION
6. HARDWARE CONSIDERATIONS
The INFOMAX algorithm finds the un-mixing matrix B by maximizing the joint entropy H(y) of the outputs. Its learning rule (using the natural gradient) is:

$\Delta B \propto \frac{\partial H(\mathbf{y})}{\partial B}\, B^T B = \left[\, I - \varphi(\mathbf{u})\,\mathbf{u}^T \right] B$
66
MULTIPLIER CIRCUIT
The rule can be written as:

$\mathbf{u} = B\mathbf{x}; \qquad \mathrm{SUM} = \mathbf{u}^T B; \qquad \Delta B = \eta\left[\,B - \varphi(\mathbf{u})\,\mathrm{SUM}\,\right]$
6. HARDWARE CONSIDERATIONS
67
WEIGHT UPDATE CIRCUIT
6. HARDWARE CONSIDERATIONS
$V_{out} = \mathrm{outp} - \mathrm{outm} = \left[\frac{\beta\,(V_{c1} - V_{c2})}{sC}\right](v_{ip} - v_{im})$
68
WORK PROPOSAL!
69
PROPOSAL
HARDWARE IMPLEMENTATION OF A BLIND SOURCE SEPARATION ALGORITHM APPLIED TO SPEECH PROCESSING.

BASIC REQUIREMENTS:
2 students (good level of English).
Knowledge of MATLAB, C/C++ and Assembler.
Knowledge of multivariate statistics.
DSP development board and platform.

DURATION AND WORKING ARRANGEMENT:
5 months. The algorithm is already designed and ready!
Communication (Cx) via INTERNET (literature collection, algorithm work, etc.), supervised by a CUTB lecturer.
70
HARDWARE IMPLEMENTATION OF A BLIND SOURCE SEPARATION ALGORITHM APPLIED TO SPEECH PROCESSING.
GENERAL WORK PLAN:
Literature collection and contextualization (about 20 days).
MATLAB simulation of the algorithm: separate two or more speech signals (about 10 days).
C/C++ simulation of the algorithm: separate two or more speech signals (about 30 days).
Development of the algorithm on the DSP board (90 days):
Selection and acquisition of the development board.
Definition of the design parameters.

Reports every 20 days (each one will be a chapter of the final degree project).
71
THANK YOU!
72
ADAPTIVE BLIND SIGNAL
PROCESSING:
Blind Source Separation



Guillermo Bedoya
UNIVERSITAT POLITÈCNICA DE CATALUNYA (UPC)
INP Grenoble
Grup d'Arquitectures Hardware Avançades [AHA]
Laboratoire des Images et des Signaux [LIS]
