
Blind Source Separation

Naresh Patidar, Lokesh Gupta

Mean and Variance
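The formulas on this slide did not survive extraction; the standard definitions, presumably what was shown (using M for the number of samples, as on the later slides), are:

```latex
\bar{x} = \frac{1}{M}\sum_{m=1}^{M} x_m, \qquad
\sigma^2 = \frac{1}{M-1}\sum_{m=1}^{M} \left(x_m - \bar{x}\right)^2
```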

Covariance Matrix
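This slide's formula is also lost. For zero-mean data X (M samples by N dimensions), the standard definition, consistent with the later slide's C = X^T X up to the 1/(M-1) factor, is:

```latex
C = \frac{1}{M-1}\, X^{T} X, \qquad
C_{ij} = \operatorname{cov}(x_i, x_j) = \frac{1}{M-1}\sum_{m=1}^{M} x_{mi}\, x_{mj}
```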

Central Limit Theorem
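The slide body is lost; the standard statement, which the later "Non-Gaussianity" slide relies on (summing enough independent signals yields a Gaussian PDF), is:

```latex
\frac{1}{\sqrt{M}} \sum_{m=1}^{M} \frac{x_m - \mu}{\sigma} \;\xrightarrow{\;d\;}\; \mathcal{N}(0, 1)
```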

EigenValue and EigenVectors

A v = λ v

A: matrix; v: eigenvector; λ: eigenvalue of A corresponding to v. The eigenvectors are the non-zero vectors that, after being multiplied by the matrix, remain proportional to the original vector: only the magnitude changes, not the direction.
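A minimal numerical check of the relation above using NumPy; the matrix here is an arbitrary illustration, not one from the slides:

```python
import numpy as np

# An arbitrary symmetric matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh returns eigenvalues (ascending) and eigenvectors for symmetric matrices.
eigvals, eigvecs = np.linalg.eigh(A)

for lam, v in zip(eigvals, eigvecs.T):
    # A v equals lambda * v: same direction, scaled magnitude.
    assert np.allclose(A @ v, lam * v)
    print(f"lambda = {lam:.4f}, v = {v}")
```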


Covariance

- Always between two dimensions
- Positive value: both dimensions increase together
- Negative value: one dimension increases while the other decreases
- Zero value: the dimensions are uncorrelated (independent in the Gaussian case)
- Covariance between one dimension and itself gives the variance
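A short NumPy sketch of these cases; the data is synthetic, generated only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

pos = x + 0.1 * rng.normal(size=1000)   # increases with x
neg = -x + 0.1 * rng.normal(size=1000)  # decreases as x increases
ind = rng.normal(size=1000)             # unrelated to x

print(np.cov(x, pos)[0, 1])  # positive covariance
print(np.cov(x, neg)[0, 1])  # negative covariance
print(np.cov(x, ind)[0, 1])  # near zero
print(np.cov(x, x)[0, 1], np.var(x, ddof=1))  # cov of x with itself = variance
```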

Singular Value Decomposition


Decompose X = AZ into X = U S V^T:
- S is a diagonal matrix of singular values, with elements arranged in descending order of magnitude (the singular spectrum)
- The columns of V are the eigenvectors of C = X^T X
- U is the matrix of projections of X onto the eigenvectors of C
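A NumPy check of these relationships on a random matrix (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# full_matrices=False gives the economy-size decomposition X = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

assert np.allclose(X, U @ np.diag(s) @ Vt)   # X = U S V^T
assert np.all(np.diff(s) <= 0)               # singular values descend

# Columns of V are eigenvectors of C = X^T X, with eigenvalues s**2.
C = X.T @ X
assert np.allclose(C @ Vt.T, Vt.T * s**2)
```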


Introduction
Blind:
- No information about the sources of the signal
- No information about the mixing of the signal

Assumptions:
- Sources are statistically independent
- Mixing is linear and stationary

Methods:
- Principal Component Analysis (PCA)
- Independent Component Analysis (ICA)

Definition of BSS
Assuming the observed signal is a linear and stationary mixture of more than one unknown, independent source signal, Blind Source Separation separates the source signals using the statistical independence of the sources. It is a method of separating a set of signals from a set of mixed signals without the aid of information about the source signals or the mixing process.

The Cocktail Party Problem - Find Z

Formal Statement of Problem


- N: number of sources; M: number of samples
- N independent sources: Z (M × N)
- Linear square mixing (#sources = #sensors): A (N × N)
- Produces a set of observations: X (M × N)
- X^T = A Z^T

Formal Statement of Solution


- Demix the observations X^T (N × M) into Y^T = W X^T
- Y^T (N × M) ≈ Z^T, with W (N × N) ≈ A^-1

How do we recover the independent sources? (We are trying to estimate W ≈ A^-1.)
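A minimal sketch of this setup, assuming scikit-learn's FastICA as the separation algorithm; the slides name ICA as a method but do not prescribe an implementation, and the sources and mixing matrix below are illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
M, N = 2000, 2                       # M samples, N sources (= N sensors)
t = np.linspace(0, 8, M)

# Two statistically independent, non-Gaussian sources (illustrative choices).
Z = np.column_stack([np.sin(2 * np.pi * t),
                     np.sign(np.sin(3 * np.pi * t))])

A = np.array([[1.0, 0.5],            # arbitrary square mixing matrix
              [0.3, 1.0]])
X = Z @ A.T                          # observations, i.e. X^T = A Z^T

# Estimate Y ~ Z by learning a demixing W ~ A^-1.
ica = FastICA(n_components=N, random_state=0)
Y = ica.fit_transform(X)             # recovered sources, up to order and scale
```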

Blind Source Separation

[Block diagram: independent signal and noise sources Z^T are combined by an unknown mixing A into the observed mixture X^T = A Z^T; Blind Source Separation demixes X^T into the estimated sources Y^T = W X^T ≈ Z^T.]

BSS is a Transform?

- Like Fourier, we decompose the observations into components by transforming them into another vector space, one which maximises the separation between the interesting part (signal) and the unwanted part (noise)
- Unlike Fourier, the separation is not based on frequency; it is based on independence
- Sources can share the same frequency content
- No assumptions are made about the signals, other than that they are independent and linearly mixed

The Fourier Transform

EigenSpectrum of Decomposition

Eigenspectrum = plot of the eigenvalues (y-axis: eigenvalue; x-axis: eigenvector number).

SVD noise/signal separation


- To perform SVD filtering of a signal, use a truncated SVD decomposition (keeping the first p eigenvectors): Y = U S_p V^T
- Reduce the dimensionality of the data by discarding the noise projections (set S_noise = 0), then reconstruct the data from just the signal subspace
- Most of the signal is contained in the first few principal components; discarding the remaining components and projecting back into the original observation space effects a noise filtering, a noise/signal separation
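A sketch of this truncated reconstruction with NumPy; the rank p and the test signal are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative multichannel data: a low-rank signal plus noise.
M, N, p = 500, 8, 2                       # samples, channels, signal rank
t = np.linspace(0, 4, M)
signal = np.column_stack([np.sin(2 * np.pi * t), np.cos(3 * np.pi * t)])
X = signal @ rng.normal(size=(p, N)) + 0.1 * rng.normal(size=(M, N))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncate: keep the first p singular values, zero the noise part (S_noise = 0).
s_p = np.where(np.arange(s.size) < p, s, 0.0)
X_p = U @ np.diag(s_p) @ Vt               # reconstruction from the signal subspace
```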

Graphs of SVD
[Plots comparing the original data X with the truncated reconstruction X_p = U S_p V^T.]

Two Dimensional Example


[Figures: the two original source signals (Signal 1 and Signal 2) and the original joint PDF with the principal and independent components overlaid.]

Skewness
[Figure: negatively and positively skewed PDFs.]

An odd (third) moment: is most of the data greater than or less than the mean?
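The slide's formula is lost; the standard definition, presumably what was shown, is:

```latex
\text{skew}(x) = \mathbb{E}\!\left[ \left( \frac{x - \mu}{\sigma} \right)^{3} \right]
```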

Kurtosis

- An even (fourth) moment: how non-Gaussian is the data?
- Gaussians are mesokurtic, with κ = 3
- Sub-Gaussian: negative (κ < 3), platykurtic
- Super-Gaussian: positive (κ > 3), leptokurtic

Non-Gaussianity ⇒ Statistical Independence?

- Central Limit Theorem: add enough independent signals together and the PDF tends to a Gaussian (κ → 3)
- Mixtures of independent sources are therefore more Gaussian than the sources themselves
- To find the independent components (the sources), make the data as non-Gaussian as possible
- Example: sources Z with sub-Gaussian P(Z) (κ < 3, here 1.8); mixtures X^T = A Z^T with near-Gaussian P(X) (κ = 3.4)
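A small demonstration of this effect using scipy.stats.kurtosis (fisher=False gives the non-excess κ, so a Gaussian scores 3); the uniform sources and mixing matrix are illustrative choices:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

# Two sub-Gaussian (uniform) independent sources: kappa = 1.8 for a uniform PDF.
Z = rng.uniform(-1, 1, size=(10000, 2))

A = np.array([[1.0, 0.7],
              [0.4, 1.0]])
X = Z @ A.T                            # mixtures: X^T = A Z^T

print("sources:  kappa =", kurtosis(Z, fisher=False))  # ~1.8 per channel
print("mixtures: kappa =", kurtosis(X, fisher=False))  # closer to the Gaussian 3
```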
