
Abstract-I An Oblivious Image Fusion Technique Using Wavelet Transform

This project focuses on the fusion of images from different sources using multiresolution wavelet transforms. With rapid advances in technology, it is now possible to obtain information from multisource images. However, all the physical and geometrical information required for a detailed assessment may not be available by analyzing the images separately. In multisensor images, there is often a trade-off between spatial and spectral resolution, resulting in information loss. Image fusion combines perfectly registered images from multiple sources to produce a high-quality fused image that preserves both spatial and spectral information. Based on a review of popular image fusion techniques used in data analysis, different pixel-based and energy-based methods are evaluated experimentally. A novel architecture with a hybrid algorithm is proposed, which applies a pixel-based maximum-selection rule to the low-frequency approximations and filter-mask-based fusion to the high-frequency details of the wavelet decomposition. The key feature of the hybrid architecture is that it combines the advantages of pixel-based and region-based fusion in a single image, which can support the development of sophisticated algorithms that enhance edges and structural details. A graphical user interface is developed for image fusion to make the research outcomes available to the end user.
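The hybrid rule described above can be sketched in a few lines. The following is a minimal illustration, not the project's actual implementation: it uses a single-level 2D Haar transform (the project may use other wavelets and more levels), takes the pixel-wise maximum of the low-frequency approximations, and selects high-frequency detail coefficients by comparing local energy under a 3x3 filter mask. All function names here are hypothetical.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2D Haar transform: approximation a and detail bands h, v, d.
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4.0
    h = (p00 + p01 - p10 - p11) / 4.0
    v = (p00 - p01 + p10 - p11) / 4.0
    d = (p00 - p01 - p10 + p11) / 4.0
    return a, h, v, d

def haar_idwt2(a, h, v, d):
    # Exact inverse of haar_dwt2.
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def local_energy(c, r=1):
    # Sum of squared coefficients over a (2r+1) x (2r+1) filter mask.
    p = np.pad(c * c, r, mode='edge')
    e = np.zeros_like(c, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            e += p[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return e

def hybrid_fuse(img1, img2):
    a1, h1, v1, d1 = haar_dwt2(img1)
    a2, h2, v2, d2 = haar_dwt2(img2)
    a = np.maximum(a1, a2)  # pixel-based maximum selection on approximations
    # Region-flavored rule on details: keep the coefficient whose 3x3
    # neighborhood has the larger energy.
    pick = lambda c1, c2: np.where(local_energy(c1) >= local_energy(c2), c1, c2)
    return haar_idwt2(a, pick(h1, h2), pick(v1, v2), pick(d1, d2))
```

Because the approximation carries coarse intensity and the details carry edges, this split lets each band use the selection rule best suited to it, which is the point of the hybrid architecture.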

Abstract-II Multi-Source Image Fusion Techniques Based on IHS & PCA Analysis
This project presents a multiresolution fusion algorithm that combines aspects of region-based and pixel-based fusion. Multiresolution decompositions represent the input images at different scales, and a multiresolution/multimodal segmentation partitions the image domain at these scales. A region-based multiresolution approach allows us to consider low-level as well as intermediate-level structures, and to impose data-dependent consistency constraints based on spatial, inter-scale, and intra-scale dependencies. Although these wavelets share some common properties, each wavelet also has unique image decomposition and reconstruction characteristics that lead to different fusion results. In this project the above three classes are compared for their fusion results. When a wavelet transform alone is applied, the results are usually not satisfactory. However, if a wavelet transform is integrated with a traditional transform such as the IHS transform or a PCA transform, better fusion results may be achieved. Hence we introduce a novel approach that improves the fusion method by integrating it with IHS/PCA transforms. The fusion results are compared graphically, visually, and statistically; they show that wavelet-integrated methods can improve the fusion result, reduce ringing or aliasing effects to some extent, and make the image smoother. Medical image fusion has become an active research topic due to advances in sensor technology, microelectronics, and processing techniques that combine information from different sensors into a single composite image for analysis, interpretation, and better clinical diagnosis in a more rapid and accurate way. The main objective of medical image fusion using wavelets is to create a new image by regrouping the complementary information of the multisensor outputs.
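To make the PCA side of the integration concrete, here is a minimal sketch of the classical PCA fusion rule for two registered grayscale sources, not the project's own pipeline: the leading eigenvector of the 2x2 covariance of the pixel pairs supplies the weight of each source, so the more variance-rich (more informative) image dominates the fused result. The function name is hypothetical.

```python
import numpy as np

def pca_fuse(img1, img2):
    """Fuse two registered grayscale images with PCA-derived weights."""
    data = np.stack([img1.ravel(), img2.ravel()])  # 2 x N pixel samples
    cov = np.cov(data)                             # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])                        # leading eigenvector
    w = w / w.sum()                                # normalize to fusion weights
    return w[0] * img1 + w[1] * img2
```

In a wavelet-integrated scheme, a rule like this would typically be applied to the approximation subbands rather than to the raw images, which is one way the IHS/PCA and wavelet approaches can be combined.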

Abstract-III Segmentation by Fusion of Histogram-Based K-Means Clusters in Different Color Spaces


This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure that aims at combining several segmentation maps, associated with simpler partition models, in order to obtain a more reliable and accurate final segmentation result. The different label fields to be fused in our application are given by the same simple (k-means-based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure that uses as input features the local histograms of the class labels, previously estimated and associated with each site for all of these initial partitions. This fusion framework remains simple to implement, fast, and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and it has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to state-of-the-art segmentation methods recently proposed in the literature.
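The pipeline above can be sketched end to end. This is an illustrative reduction, not the paper's implementation: only two color spaces are used (raw RGB and a crude luminance/chrominance transform standing in for e.g. YUV), k-means is a plain NumPy version with farthest-point initialization, and the window size is arbitrary. All function names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain k-means on row-vector samples X (n x d); returns integer labels.
    rng = np.random.default_rng(seed)
    centers = np.array([X[rng.integers(len(X))]])
    while len(centers) < k:  # farthest-point init avoids duplicate centers
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, X[d.argmax()]])
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def label_histograms(labels2d, k, r=2):
    # Local histogram of class labels in a (2r+1)^2 window around each site.
    h, w = labels2d.shape
    onehot = np.zeros((h, w, k))
    onehot[np.arange(h)[:, None], np.arange(w)[None, :], labels2d] = 1.0
    p = np.pad(onehot, ((r, r), (r, r), (0, 0)), mode='edge')
    hist = np.zeros_like(onehot)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            hist += p[dy:dy + h, dx:dx + w]
    return hist / hist.sum(-1, keepdims=True)

def fuse_segmentations(img_rgb, k=2):
    h, w, _ = img_rgb.shape
    # Initial partitions: the same k-means run in two color spaces.
    rgb = img_rgb.reshape(-1, 3).astype(float)
    lum = rgb @ np.array([[0.299, 0.587, 0.114],
                          [-0.147, -0.289, 0.436],
                          [0.615, -0.515, -0.100]]).T
    maps = [kmeans(rgb, k).reshape(h, w), kmeans(lum, k).reshape(h, w)]
    # Fusion step: final k-means on the concatenated local label histograms.
    feats = np.concatenate([label_histograms(m, k) for m in maps], axis=-1)
    return kmeans(feats.reshape(-1, 2 * k), k).reshape(h, w)
```

Note that the final clustering operates on label statistics rather than on colors, which is what allows it to reconcile partitions whose label numberings disagree across color spaces.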

Abstract-IV A novel hybrid approach based on subpattern technique and PCA for face recognition
Recently, in the task of face recognition, some researchers showed that independent component analysis (ICA) Architecture I involves a vertically centered principal component analysis (PCA) process (PCA I), and that ICA Architecture II involves a whitened horizontally centered PCA process (PCA II). They also concluded that the performance of ICA strongly depends on its involved PCA process. This means that the computationally expensive ICA projection is unnecessary, and the PCA process involved in ICA, whether PCA I or PCA II, can be used directly for face recognition. However, these approaches consider only the global information of face images; some local information may be ignored. Therefore, in this paper, the sub-pattern technique is combined with PCA I and PCA II, respectively, for face recognition. In other words, two new sub-pattern-based whitened PCA approaches (called Sp-PCA I and Sp-PCA II, respectively) are developed and compared with PCA I, PCA II, PCA, and sub-pattern-based PCA (SpPCA). We find that the sub-pattern technique is useful for PCA I but not for PCA II or PCA, and we discuss the cause of this result. Finally, by simultaneously considering the global and local information of face images, we develop a novel hybrid approach that combines PCA II and Sp-PCA I for face recognition. The experimental results reveal that the proposed hybrid approach has better recognition performance than that obtained using other traditional methods.
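The core of the sub-pattern idea is easy to state in code. The sketch below is a generic SpPCA-style feature extractor, not the paper's exact Sp-PCA I/II variants (which differ in their centering and whitening): each face is split into non-overlapping sub-patterns, PCA is run per sub-pattern position across the training set, and the per-block projections are concatenated into one feature vector. The function name and block size are hypothetical.

```python
import numpy as np

def subpattern_pca_features(faces, block, n_comp):
    """faces: array of shape (n, H, W). Returns an (n, n_blocks * n_comp)
    feature matrix built from per-sub-pattern PCA projections."""
    n, H, W = faces.shape
    bh, bw = block
    feats = []
    for y in range(0, H, bh):
        for x in range(0, W, bw):
            sub = faces[:, y:y + bh, x:x + bw].reshape(n, -1)  # n x (bh*bw)
            sub = sub - sub.mean(0)              # center this sub-pattern set
            # PCA via SVD of the centered block matrix; rows of vt are
            # principal directions sorted by decreasing variance.
            _, _, vt = np.linalg.svd(sub, full_matrices=False)
            feats.append(sub @ vt[:n_comp].T)    # n x n_comp projections
    return np.concatenate(feats, axis=1)
```

Because each block gets its own subspace, local variations (expression, occlusion) affect only a few feature coordinates instead of the whole projection, which is why sub-patterns can help where global PCA struggles.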

Abstract-V An Optimal Multi-level Thresholding Using a Two-Stage Otsu Approach


Otsu's method of image segmentation selects an optimal threshold by maximizing the between-class variance in a gray-level image. However, the method becomes very time-consuming when extended to a multi-level thresholding problem, because a large number of iterations are required for computing the cumulative probability and the mean of each class. To greatly improve the efficiency of Otsu's method, a new fast algorithm called the TSMO method (Two-Stage Multithreshold Otsu method) is presented. The TSMO method outperforms Otsu's method by greatly reducing the number of iterations required for computing the between-class variance in an image. The experimental results show that the computational time of the conventional Otsu method increases exponentially, with an average ratio of about 76. For TSMO-32, the maximum computational time is only 0.463 s when the class number M increases from two to six, with relative errors of less than 1% compared to Otsu's method. The ratio of the computational time of Otsu's method to that of TSMO-32 is rather high, up to 109,708 when six classes (M = 6) are used in an image. This result indicates that the proposed method is far more efficient, with accuracy equivalent to Otsu's method. It also has the advantage of a small variance in runtimes across different test images.
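For reference, the baseline that TSMO accelerates is the classical single-threshold Otsu search, which can be written directly from the between-class variance formula sigma_B^2(t) = [mu_T * w0(t) - mu(t)]^2 / [w0(t) * (1 - w0(t))]. The sketch below is that exhaustive baseline only, not the two-stage TSMO algorithm; the function name is hypothetical.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Exhaustive single-threshold Otsu: maximize between-class variance."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()                   # gray-level probabilities
    omega = np.cumsum(p)                    # class-0 probability w0(t)
    mu = np.cumsum(p * np.arange(bins))     # first moment up to level t
    mu_t = mu[-1]                           # global mean
    # Between-class variance for every candidate threshold t at once.
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # w0 = 0 or 1 gives no split
    return int(np.argmax(sigma_b))
```

The cumulative sums make the single-threshold case cheap; the cost explosion the abstract describes comes from searching combinations of M-1 thresholds jointly, which is exactly the search space TSMO's two-stage grouping shrinks.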
