Analysis of WM Filters

Pranam Janney and Guang Deng, Department of Electronic Engineering, La Trobe University, Melbourne, Australia. pranamjanney@yahoo.com

ABSTRACT:

Analysis is performed on weighted median filters given a group of predictors, with tests performed on several different test images. The report also briefly explains why taking the weighted median of a group of predictors is proposed as an alternative and competitive adaptive image prediction method.

1. INTRODUCTION:

Signal processing always presents challenges when it comes to approximating or predicting the next sample, chiefly because samples can change abruptly from one to the next, making such changes arduous to predict. Linear filters were introduced to counter these challenges, but linear filters are not an optimal class of filters and are often unable to recover the desired signal effectively when the distribution of the corrupting noise samples is other than Gaussian [1]. Weighted median (WM) filters were therefore introduced as a generalisation of standard median filters, in which a non-negative integer weight is assigned to each position in the filter window and the median value is chosen using the samples and their corresponding weights.


In lossless compression, the algorithm scans the input image matrix row by row, predicting each pixel as a linear combination of previously coded pixels and encoding the prediction error. The standard uses eight different predictors, as listed in Table 1 [2]. Even with these eight predictors, we must use the best possible predictor for a particular image, so the selection criterion for the best predictor is based on different parameters. Even when the best of the group of predictors in [2] is selected, it is not optimised; we have to improve on the selected predictor to optimise it. This leads to the optimisation problem in which we have to maximise the probability of the predictor coefficients:

max Prob( a | { S[ n ] } , I ) (eqn 2.1)

Consider X to be the original image and P the prediction output. The error e is then given by e = X − P. The predicted output can be expressed as a combination of the individual predictions Pi:

P = Σi ai Pi

The mean square error of the prediction is: L = E[ e² ]


Table 1. JPEG Predictors for lossless coding [2]
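The contents of Table 1 did not survive extraction, but the eight lossless-mode predictors it lists are those of the JPEG standard [2]. A minimal sketch follows (in Python, whereas the paper's experiments used MATLAB), where `a` is the left neighbour, `b` the neighbour above, and `c` the upper-left neighbour of the current pixel:

```python
# Sketch of the eight JPEG lossless predictors (selection values 0-7).
# a = left neighbour, b = neighbour above, c = upper-left neighbour.
JPEG_PREDICTORS = {
    0: lambda a, b, c: 0,               # no prediction
    1: lambda a, b, c: a,
    2: lambda a, b, c: b,
    3: lambda a, b, c: c,
    4: lambda a, b, c: a + b - c,
    5: lambda a, b, c: a + (b - c) // 2,
    6: lambda a, b, c: b + (a - c) // 2,
    7: lambda a, b, c: (a + b) // 2,
}

def predict(a, b, c, mode):
    """Predict a pixel from its causal neighbours using one JPEG mode."""
    return JPEG_PREDICTORS[mode](a, b, c)
```

Integer division stands in for the truncation the standard specifies; a real codec would also clamp the result to the pixel range.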

where E[ · ] denotes expectation. For discrete variables, the above can be written as: L = Σ ei² P( ei ). For minimum error: dL / dai = 0 (eqn 2.2). Considering one of the predictors:

Ŝ[ n ] = a1 S[ n − 1 ] + a2 S[ n − 2 ] (eqn 2.3)

L = E[ ( S[ n ] − Ŝ[ n ] )² ]

dL / dai = 0 (eqn 2.4)

For i = 1 this gives:

E( S[ n ] · S[ n − 1 ] ) = a1 E( S[ n − 1 ]² ) + a2 E( S[ n − 2 ] · S[ n − 1 ] )


If R( x, y ) represents the correlation coefficient between two variables x and y, then the first-order predictor coefficients satisfy, for i = 1:

R( 1 ) = a1 R( 0 ) + a2 R( 1 )

and for i = 2:

R( 2 ) = a1 R( 1 ) + a2 R( 0 )

Representing the correlation coefficients in matrix form:

| R( 0 )  R( 1 ) |   | a1 |   | R( 1 ) |
| R( 1 )  R( 0 ) | . | a2 | = | R( 2 ) |

or R a = r (eqn 2.5), where R, a and r represent the above three matrices respectively. The equation can also be generalised for any n > 2:

    | R( 0 )     R( 1 )     ...  R( n − 1 ) |
    | R( 1 )     R( 0 )     ...  R( n − 2 ) |
R = |   :           :         :       :     |
    | R( n − 1 ) R( n − 2 ) ...  R( 0 )     |

The matrix R is in circular Toeplitz form. Assuming a Markov model, we have R( k ) = σ² ρ^| k | (eqn 2.6), where σ² and ρ are constants. The optimisation problem (eqn 2.1) then reduces to:

max Prob( σ², ρ | { S[ n ] } , I ) (eqn 2.7)
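As a worked instance of the normal equations under the Markov model, the n = 2 system can be solved in closed form; the sketch below uses plain Cramer's rule and a hypothetical ρ = 0.9:

```python
# Markov (AR-1) correlation model: R(k) = rho ** |k|
# For n = 2, solve the normal equations R a = r directly by Cramer's rule.
def solve_normal_equations(rho):
    R00, R01 = 1.0, rho          # R(0), R(1)
    r1, r2 = rho, rho ** 2       # r = [R(1), R(2)]
    det = R00 * R00 - R01 * R01  # determinant of the 2x2 Toeplitz matrix
    a1 = (r1 * R00 - r2 * R01) / det
    a2 = (r2 * R00 - r1 * R01) / det
    return a1, a2

print(solve_normal_equations(0.9))
```

For a first-order Markov source the second coefficient vanishes (a ≈ [ρ, 0]): only the immediately preceding sample carries predictive information.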


From the above equation (eqn 2.7), we can see that the probability is a function of the constants, which means that by varying the constants we can optimise the predictor. These constants are henceforth called the weights. For a Laplacian model we have [3]:

Prob( xk | σk , I ) = ( 1 / 2σk ) e^( −| xk | / σk ) (eqn 2.8)

The optimised prediction is then given by the weighted median filter output of the predictions, using weights ωi = 1 / σi [4]:

P̂ = WM( { Pi , ωi } i = 1:N ) (eqn 2.9)

In our case, the parameter of interest is the optimised predictor. Now considering a simple case of the Laplacian distribution, we have:

σi² = σj² ( if i ≠ j ) (eqn 2.10, eqn 2.11)

Thus the weighted median reduces to a simple median, and the optimised prediction solution becomes:

P̂ = Median( { Pi } ) (eqn 2.12)
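A minimal illustration of this simple-median case: the final prediction for a pixel is just the sample median of the individual predictors' outputs. The predictor values below are hypothetical:

```python
def median_prediction(predictions):
    """Median of a group of predictor outputs for one pixel."""
    s = sorted(predictions)
    n = len(s)
    mid = n // 2
    # Even-length case averages the two central samples.
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Three hypothetical predictor outputs for the same pixel:
print(median_prediction([118, 121, 140]))  # the outlying prediction is rejected
```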

The median is the maximum likelihood estimate of the signal level in the presence of uncorrelated additive biexponentially distributed noise [5]. Weighted median filters belong to the broader class of stack filters. In the binary domain, WM filters are self-dual, linearly separable positive Boolean functions [5].

3.1 Definition:

3.11 Positive integer weights: For the discrete-time, continuous-valued input vector X = [ X1, X2, X3, ..., XN ], the output Y of the WM filter of width N with corresponding integer weights W = [ W1, W2, W3, ..., WN ] is given by the filtering procedure [5]:

Y = MED[ W1 ◊ X1, W2 ◊ X2, ..., WN ◊ XN ] (eqn 3.1)

where MED is the median operation and ◊ denotes replication, i.e. Wi ◊ Xi stands for Xi repeated Wi times. The median value is chosen from the sequence of the samples replicated according to their corresponding weights.

3.12 Positive non-integer weights: The weighted median β of X is the value minimizing the expression [5]:

L( β ) = Σ i = 1..N  Wi | Xi − β | (eqn 3.2)

3.13 Filtering procedure: Sort the samples inside the filter window, then add up the corresponding weights from the upper end of the sorted set until the running sum just exceeds half of the total sum of weights, i.e. (1/2) Σ i = 1..N Wi; the output of the WM filter is the sample at which this threshold is crossed.
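The filtering procedure of 3.13 can be sketched directly (in Python here; the paper's experiments used MATLAB). For integer weights it agrees with the replication definition of (eqn 3.1):

```python
def weighted_median(x, w):
    """Weighted median of samples x with positive weights w.

    Sort the samples, then accumulate weights from the upper end of the
    sorted set; the output is the sample at which the running sum first
    exceeds half of the total weight (section 3.13).
    """
    pairs = sorted(zip(x, w), reverse=True)  # upper end of the sorted set first
    half = sum(w) / 2.0
    acc = 0.0
    for sample, weight in pairs:
        acc += weight
        if acc > half:
            return sample
    return pairs[-1][0]

# Integer weights reduce to a median over replicated samples:
# MED[1◊10, 3◊20, 1◊30] = MED[10, 20, 20, 20, 30] = 20
print(weighted_median([10, 20, 30], [1, 3, 1]))
```

With all weights equal, the procedure returns the ordinary median, matching the reduction in (eqn 2.12).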

Median filtering was performed on various images using different weights. The analysis was performed using MATLAB (ver. 6.1). For analysis purposes, the entropy of the prediction errors and the signal to noise ratio of the predicted image were used. The signal to noise ratio (dB) was calculated by:

SNR = 10 log10 ( Σ xij² / Σ ( xij − Pij )² ) (eqn 4.1)

where xij is the original pixel value and Pij the predicted pixel value. The signal to noise ratio is calculated for the predicted image with respect to the original image. The histogram of an image can be defined by:

n( pv ) = m( pv ) / N (eqn 4.2)

where n is the histogram of the image, m( pv ) is the number of image pixels with pixel value pv, and N is the total number of image pixels. The entropy was then calculated using:

E = − Σ i = 1..Np  ni ( loge ni / loge 2 ) (eqn 4.3)
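The two evaluation measures can be sketched from (eqn 4.1)-(eqn 4.3); the SNR form below assumes the usual ratio of signal power to prediction-error power, and the functions are illustrative rather than the paper's MATLAB code:

```python
import math

def snr_db(x, p):
    """Signal to noise ratio (dB) of prediction p against original x."""
    signal = sum(v * v for v in x)
    noise = sum((v - q) ** 2 for v, q in zip(x, p))
    return 10 * math.log10(signal / noise)

def entropy_bits(pixels):
    """Entropy (bits/pixel) of the normalised histogram of the pixel values."""
    counts = {}
    for v in pixels:
        counts[v] = counts.get(v, 0) + 1
    total = len(pixels)
    return -sum((c / total) * math.log(c / total, 2) for c in counts.values())
```

Applied to the prediction-error image, a lower entropy means fewer bits are needed to encode the errors, which is why it is used as the figure of merit in Table 2.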


Table 2. Entropy of the prediction errors

The weights in each case were assigned using different parameters. Two kinds of weights were assigned: global weights, computed over the whole image (entropy (Ent), variance (Var), random (Rand), and a simple median using the number of predictors (N)), and local weights (sum of squared errors (SSE)); SSE is called a local weight because it is computed independently for each pixel. Experimental tests were conducted on various images and the results obtained are shown in Table 2 and Table 3.

Table 2 shows the entropy of the prediction errors for different images. The entropy of the prediction error is considerably lower when the weights are assigned using the variance parameter. When random weights were used, the results for some of the images were good, but random weights cannot be taken into consideration because the methodology of generating them is a random process and the probability of obtaining the best results is very low. When localised weights are used, the results are also better where the image is large (e.g. Saturn.tif, 328 x 438). Thus, by using variance as the weights for a medium-sized image, or localised weights for a large image, the prediction error entropy decreases, indicating that it is the best possible prediction.

Table 3. Signal to noise ratio (dB) of the predicted image for different images

From Table 3, we can see that the signal to noise ratio is consistently high for predicted images derived using variance-assigned weights, especially for medium-sized images. Here again, the use of localised weights has shown better results where the image is large (e.g. Saturn, 328 x 438). A higher signal to noise ratio indicates that even though noise is added by the prediction process, the noise rejection capacity is high; thus, using variance as weights over the whole image, or localised weights for a large image, results in better noise rejection than the other choices.

5. CONCLUSION:

From the experimental results (Table 2 and Table 3), we can see that using particular weights for prediction can result in lower prediction errors and hence a better signal to noise ratio with respect to the original image.


In conclusion, the weighted median of a group of predictors can be an alternative and competitive method for adaptive image prediction. From our experiments we suggest that variance can be used as global weights to obtain the best possible weighted median filter, with better prediction and noise rejection capacity, especially for medium-sized images; where the image size is large, localised weights appear to perform better than variance.

6. REFERENCES:

[1] L. Yin, Y. Neuvo, "Fast adaptation and performance characteristics of FIR-WOS hybrid filters", IEEE Trans. Signal Processing, vol. 42, no. 7, pp. 1610-1628, July 1994.
[2] R. Ansari, N. Memon, "The JPEG Lossless Standards", http://isis.poly.edu/memon/pdf/7.pdf
[3] D. S. Sivia, Data Analysis: A Bayesian Tutorial, Clarendon Press, Oxford, 1996.
[4] G. Deng, H. Ye, "Maximum likelihood based framework for second-level adaptive prediction", IEE Proc. Vis. Image Signal Process., vol. 150, no. 3, pp. 193-197, June 2003.
[5] L. Yin et al., "Weighted median filters: a tutorial", IEEE Trans. Circuits and Systems II, vol. 43, no. 3, pp. 157-192, March 1996.
