
International Journal on Computational Sciences & Applications (IJCSA) Vol.3, No.4, August 2013

Automatic 3D View Generation from a Single 2D Image for both Indoor and Outdoor Scenes
Geetha Kiran A1 and Murali S2

1 Malnad College of Engineering, Hassan, Karnataka, India
geethaamk@gmail.com

2 Maharaja Institute of Technology, Mysore, Karnataka, India
murali@mitmysore.in

DOI: 10.5121/ijcsa.2013.3404

ABSTRACT
Image-based video generation has recently emerged as an interesting problem in the field of robotics. This paper addresses automatic video generation for both indoor and outdoor scenes. Automatic 3D view generation relies on the orthogonal planes that dominate indoor scenes and on the vanishing point that characterizes outdoor scenes. The algorithm infers frontier information directly from the images using a geometric context-based segmentation scheme that exploits the natural scene structure. The floor is the major cue for obtaining the termination point in video generation for indoor scenes, while the vanishing point plays that role for outdoor scenes. In both cases, we create the navigation by cropping the image to the desired size up to the termination point. Our approach is fully automatic, requiring no human intervention, and finds applications mainly in assisting autonomous cars, virtual walk-throughs of ancient-time images, architectural sites, and forensics. Qualitative and quantitative experiments on nearly 250 images in different scenarios show that the proposed algorithms are efficient and accurate.

KEYWORDS
Floor segmentation, Canny edge detector, Hough transform, vanishing point, video generation

1. INTRODUCTION
Video generation from a single image is inherently a challenging problem. In imaging devices there is a trade-off between images (snapshots) and video because of limited storage capacity: video clips need far more storage space than images. This motivated us to generate video from a single 2D image rather than storing video clips. Unlike robots, humans analyze a variety of single-image cues and act accordingly; this work is an attempt to make robots analyze a single 2D image the way humans do. The task of generating video from photographs is receiving increased attention in many applications. We address the key case where the dimensions of the real-world objects, or their measurements in the 2D image plane, are unknown. Generating video without such measurements is difficult because of the perspective view. Alternatively, video can be generated once the ground is properly known, i.e., by floor segmentation in the case of indoor scenes. In the absence of accurate measurements, we exploit geometric characteristics (windows/doors) along with color variations; such relationships are plentiful in man-made structures and often provide sufficient information for our work. In the case of road scenes, video can be generated once the vanishing point is known. We describe a unified framework for navigating through a single 2D image in less time. The input image is easily acquired, since no calibration target is needed, or images can simply be downloaded from the internet. The work is well suited to navigation on Personal Digital Assistants (PDAs) and personal computers, and covers cases where buildings have been destroyed and only archive images remain. It can also be applied in forensics and to assist autonomous cars by generating video from a single 2D image and assessing in advance how far the road ahead runs straight; if there is a suspected person or item on the path of the journey, it can be detected early and the necessary action taken. In the next section, a review of related work is given. Section 3 describes floor segmentation from single-view scene constraints along with the computation of the length of the floor. Section 4 describes finding the vanishing point from single-view scene constraints and computing the distance from the ground-truth position to the vanishing point. This is followed by the method of 3D view generation in Section 5. Finally, experimental results are presented in Section 6, followed by the conclusion in Section 7.

2. RELATED WORK
Some methods have been developed for segmentation of a single image; those directly relevant to this work are highlighted here. Erick Delage et al [18] used a graph-based segmentation algorithm to generate a partition of the image and assigned a unique identifier to each partition output by the segmentation algorithm. Erick Delage et al [20] built a probabilistic model that incorporates a number of local image features and tries to reason about the chroma of the floor in each column of the image. Ma Ling et al [22] segmented the floor region automatically by adopting clustering analysis and also proposed a PCA-based improved version of the algorithm to remove the negative effect of shadows on the segmented results. Xue-Nan Cui et al [23] proposed detecting and segmenting the floor by computing plane normals from motion fields in image sequences. The geometric characteristic that objects are placed perpendicular to the ground floor can be used to find the floor in 2D images. Surfaces often have fairly uniform appearance in texture and color, so image segmentation algorithms provide another set of useful features that can be used in many other applications, including video generation.

Some of the methods developed for detecting the vanishing point in a single image are highlighted here. Techniques for estimating vanishing points can be roughly divided into two categories: one requires knowledge of the internal parameters of the camera, the other operates in an uncalibrated setting. A large literature exists on automatic detection of vanishing points, beginning with Barnard [1], who first introduced the use of the Gaussian sphere as an accumulation space and suggested that the unbounded image space can be mapped onto the bounded surface of the Gaussian sphere. Tuytelaars et al [2] mapped points into different bounded subspaces according to their coordinates. Rother [3] pointed out that these methods do not preserve the original distances between lines and points; in his method, the intersections of all pairs of non-collinear lines are used as accumulator cells instead of a parameter space, but such cells are difficult to index, and searching for the maximum among them is slow. A simple calculation of a weighted mean of pairwise intersections is used by Caprile et al [4]. Researchers [5-7] have used the vanishing point as a global constraint for roads: they compute the texture orientation for each pixel, select the effective vote-points, and then locate the vanishing point using a voting procedure. Hui Kong et al [8-11] proposed an adaptive soft-voting scheme based on a local voting region using high-confidence voters; however, there is some redundancy in the voting process and in the accuracy of updating the vanishing point. Murali S et al [12,13] detected edges using the Canny edge detector and applied the Hough transform; the maximum votes of the first N cells in the Hough space are used to compute the vanishing point. We use a similar framework [12,13] in our work to decide the vanishing point.

A very few researchers have proposed methods for navigation through a single 2D image. Shuqiang Jiang et al [14] proposed a method to automatically transform static images into dynamic video clips on mobile devices. Xian-Sheng Hua et al [15] developed a system named Photo2Video to convert a photographic series into a video by simulating camera motions; a camera motion pattern (both the key-frame sequencing scheme and the trajectory/speed control strategy) is selected for each photograph to generate a corresponding motion photograph clip. A region-based method to generate a multiview video from a conventional 2-dimensional video, using color information to segment the image, was proposed by Yun-Ki Baek et al [16]. Na-Eun Yang et al [17] proposed a method to generate a depth map using local depth hypotheses and grouped regions for 2D-to-3D conversion. The various methods of converting 2D images to stereoscopic 3D involve the same underlying principle: horizontally shifting pixels to create a new image so that there are horizontal disparities between the original image and the new version. The extent of the horizontal shift depends on the distance from the stereoscopic camera of the object feature that the pixel represents, and on the inter-lens separation that determines the new image viewpoint.
The floor segmentation methods proposed by these authors are time consuming and make assumptions specific to their applications. Since those constraints are not important in our work, we propose a simple method for floor segmentation that runs in less time. Using the segmented image, the length of the floor is computed by a distance measure, which drives the video generation. Similarly, the vanishing point detection methods above make application-specific assumptions; we instead adopt the method of [12,13], which decides the vanishing point in less time. Using the vanishing point, the distance from the ground-truth position to the vanishing point is computed, which enables navigation through a single road image.

3. FLOOR SEGMENTATION
The goal is to obtain a floor segmentation of a given single 2D indoor image. The crucial part of the work is detecting the pixels belonging to the floor. Methods are available for floor segmentation with known camera parameters; the requirement here is to segment the floor without any knowledge of the camera parameters, exploiting geometric relationships and color instead. The primary steps involve converting the given color image to gray scale, converting the gray image to a binary image by computing a global threshold, and finally segmenting the floor by applying dilation and erosion.

3.1. Segmentation
The floor path [19] is the major cue to generate video from a single 2D image of an indoor scene. To segment the floor from the remaining parts of the indoor scene, dilation and erosion with structuring elements are used. Let E be a Euclidean space or an integer grid, A a binary image in E, and B a structuring element. The dilation of A by B is defined by:

$$A \oplus B = \{\, z \in E \mid (B^{s})_{z} \cap A \neq \emptyset \,\} \qquad (1)$$

where $B^{s}$ denotes the reflection of B and $(B^{s})_{z}$ its translation by z.



The erosion of A by B is given by:

$$A \ominus B = \{\, z \in E \mid B_{z} \subseteq A \,\} \qquad (2)$$

where $B_{z}$ denotes the translation of B by z.

The structuring element is used for probing and expanding the shapes contained in the input image, yielding the floor segmentation (Figure 1).
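As an illustration, a minimal sketch of this pipeline in Python with OpenCV is given below. The kernel size and the number of dilation/erosion iterations are assumptions for illustration; the paper does not specify the exact structuring element.

import cv2

def segment_floor(image_path, kernel_size=5):
    # Rough floor segmentation: color -> gray -> Otsu binary -> dilate/erode.
    color = cv2.imread(image_path)                       # Fig. 1(a)
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)       # Fig. 1(b)

    # Global threshold computed automatically by Otsu's method, Fig. 1(c)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Rectangular structuring element B; the size and iteration counts
    # are assumed values, not taken from the paper.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                       (kernel_size, kernel_size))
    dilated = cv2.dilate(binary, kernel, iterations=2)     # Eq. (1)
    segmented = cv2.erode(dilated, kernel, iterations=2)   # Eq. (2)
    return segmented                                       # Fig. 1(d)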

Figure 1. (a) Original Image (b) Gray Image (c) Binary Image using Otsu's method (d) Segmented Image

The segmented image (Figure 1(d)) is used to find the length of the floor. The distance between the first and last white pixels (row-wise) in the floor-segmented image is found using the Euclidean distance. The length of the floor so identified can be used directly to decide the number of frames to be generated; a ratio of roughly 1:2 with respect to the length works in general, and it can be varied as required. These frames are incorporated into the video generation.
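A sketch of this length computation follows; the 1:2 frame ratio is taken from the text, while the row-scanning strategy is one plausible reading of the distance computation. It reuses the illustrative segment_floor helper from the previous sketch.

import numpy as np

def floor_length(segmented):
    # Euclidean distance between the first and last rows that contain
    # floor (white) pixels; an assumed interpretation of the row-wise step.
    rows = np.where(segmented.any(axis=1))[0]
    if rows.size == 0:
        return 0.0
    first, last = rows[0], rows[-1]
    c0 = np.mean(np.nonzero(segmented[first])[0])   # mid-column, first white run
    c1 = np.mean(np.nonzero(segmented[last])[0])    # mid-column, last white run
    return float(np.hypot(float(last - first), c1 - c0))

length = floor_length(segment_floor("room.jpg"))    # "room.jpg" is hypothetical
num_frames = int(length // 2)                       # roughly a 1:2 frame ratio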

4. VANISHING POINT DETECTION


Images considered for modeling are perspective images. In a perspective image, lines that are parallel in world space appear to meet at a point called the vanishing point. Vanishing points provide a strong geometric cue for inferring the 3-dimensional structure of a scene in almost all man-made environments. Methods are available for detecting vanishing points with known camera parameters as well as in an uncalibrated setting. The method described in this section requires no knowledge of the camera parameters and proceeds directly from geometric relationships. The steps are: detect edges using the Canny edge detector, identify straight lines depending on the threshold fixed for the Hough transform, and compute the vanishing point from the intersection points of the lines. These steps are explained in the subsequent sections.

4.1. Line Determination


The given color image is converted to gray scale. Lines are the edges of the objects and environment present in an image; these lines may or may not contribute to the actual vanishing point. The lines are obtained by applying the Canny edge detection algorithm. The versatility of the Canny algorithm lies in its adaptability through parameters such as the size of the Gaussian filter and the thresholds, which allows it to detect edges of differing characteristics.


The input image (Figure 2(a)) is converted to a gray image (Figure 2(b)) and the edges are detected by applying the Canny edge detection algorithm. A set of white pixels containing the edges is obtained and the rest of the content of the image is removed. The Canny edge-detected image (Figure 2(c)) contains pixels contributing to straight lines as well as other miscellaneous edges. Considering all the edge pixels that contribute to straight lines, the Hough transform is applied and the result (Figure 2(d)) is obtained as desired.
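A minimal sketch of this edge-and-line step with OpenCV follows; the Canny thresholds, Hough resolution, and vote threshold are illustrative assumptions, since the paper fixes them per application.

import cv2
import numpy as np

def detect_lines(image_path, num_peaks=10):
    # Canny edges followed by the standard Hough transform; the Canny
    # thresholds (100, 200) and the vote threshold (120) are assumed values.
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # Fig. 2(c)

    # Each returned entry is a peak (rho, theta) in the polar space;
    # OpenCV returns them in descending order of votes.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return []
    return [tuple(l[0]) for l in lines[:num_peaks]]        # first N peaks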

Figure 2. (a) Original Image (b) Gray Image (c) Edge detection (d) Hough transformation

As the outcome of the Hough transform, a large number of straight lines are detected; they depend on the threshold fixed for the Hough transform. Points belonging to the same straight line in the image plane have corresponding sinusoids that intersect in a single point in the polar space (Figure 2(d)). Calculating the number of straight lines is necessary because several straight lines in the image may intersect one another at different points in the image plane, so that more than one peak value appears in the polar space. By selecting the number of peak values (in descending order of their votes) equal to the number of straight lines N present in the image, we exclude unwanted lines that may not contribute to the real vanishing point. This reduces the computational complexity of vanishing point detection to only the number of straight lines contributing to the possible vanishing point.

4.2. Intersection Point of any Two Lines


The lines drawn by the Hough transform lie on the edges of the objects and environment in the image; they may or may not contribute to the actual vanishing point. Depending on the number of lines present in the image, the number of peaks in the Hough space is fixed in descending order of their votes. Each peak in the Hough space signifies the existence of an edge longer than those at other points in the Hough space, which is why a peak is formed. These voted peaks of the Hough space are used to find the intersections between pairs of lines and thereby the vanishing point. Finding the intersection point for every combination of lines taken two at a time yields one (x, y) pair per combination. The number of (x, y) pairs obtained over all combinations is given by

$$\binom{N}{2} = \frac{N(N-1)}{2} \qquad (3)$$

where N is the number of peaks selected. These (x, y) pairs are the probable vanishing points, all of which lie within the vicinity of the actual vanishing point; we therefore take their mean as the vanishing point (Figure 3(a), blue). In our work the vanishing point is used to find the distance from the ground-truth position to the detected vanishing point position. This distance is used in the next section to set the termination point for the navigation. The distance along the road so identified can be used directly to decide the number of frames to be generated; as before, a ratio of roughly 1:2 with respect to the length works in general, and it can be varied as required.
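The sketch below illustrates equation (3) in code: intersecting every pair of Hough lines in (rho, theta) form and averaging the resulting points. The near-parallel cutoff eps is an assumed numerical guard, not part of the paper.

from itertools import combinations

import numpy as np

def intersect(l1, l2, eps=1e-6):
    # Intersection of two lines in normal form x*cos(t) + y*sin(t) = r;
    # returns None when the lines are near-parallel (assumed guard).
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(a)) < eps:
        return None
    x, y = np.linalg.solve(a, np.array([r1, r2]))
    return float(x), float(y)

def vanishing_point(lines):
    # Mean of all pairwise intersections: C(N,2) = N(N-1)/2 candidates, Eq. (3).
    points = []
    for l1, l2 in combinations(lines, 2):
        p = intersect(l1, l2)
        if p is not None:
            points.append(p)
    return tuple(np.mean(points, axis=0)) if points else None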

Figure 3. (a) Lines detected (green), probable vanishing points (blue) (b) Vanishing point (pink)

5. 3D VIEW GENERATION
Automatic 3D view generation from a single image is inherently a challenging problem. The proposed method generates the 3D view for both indoor and outdoor scenes using the following algorithm (a code sketch of Step 2 follows the list).

Algorithm
Step 1: Read the input image.
Step 2: Compute the termination point using (i) floor segmentation for indoor scenes, or (ii) the vanishing point for outdoor scenes.
Step 3: Generate the frames based on the predefined rectangle.
Step 4: Navigate through the single 2D image up to the termination point.
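A top-level sketch of Step 2, tying the earlier pieces together, might look as follows. segment_floor, floor_length, detect_lines, and vanishing_point refer to the illustrative helpers above; the indoor flag and the bottom-centre ground position are assumptions, since the paper does not state how the scene type or ground-truth position is fixed.

import cv2
import numpy as np

def termination_point(image_path, indoor=True):
    # Step 2: the distance at which the virtual navigation stops.
    if indoor:
        return floor_length(segment_floor(image_path))
    vp = vanishing_point(detect_lines(image_path))
    if vp is None:
        return 0.0
    h, w = cv2.imread(image_path).shape[:2]
    ground = np.array([w / 2.0, h - 1.0])   # assumed ground-truth position:
                                            # bottom-centre of the image
    return float(np.linalg.norm(np.array(vp) - ground))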

5.1 Indoor Scenes


The information obtained from the floor segmentation is used to generate the 3D view. The inputs for the video generation are the single 2D image, the termination point computed from the distance obtained by floor segmentation, and the size of the rectangle used for cropping. The input image is taken as the first frame, and the image is cropped to the size of the predefined rectangle; the rectangle must be clearly defined, as it is the frame used for further 3D view generation. The floor segmentation plays a vital role in detecting the termination point at which frame generation stops. The cropped image is then resized to the original image size and stored in an array of images. An appropriate set of key frames (Figure 4) is determined for each image based on the distance computed using the floor segmentation.
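A minimal sketch of this crop-and-resize loop is given below. The per-frame shrink schedule and the centred rectangle are illustrative assumptions; the paper only states that cropping proceeds up to the termination point.

import cv2

def generate_frames(image, num_frames):
    # Simulate forward motion by progressively cropping a centred rectangle
    # and resizing it back to the original size (Step 3 of the algorithm).
    h, w = image.shape[:2]
    frames = [image]                          # the input image is frame 1
    for i in range(1, num_frames):
        s = 1.0 - 0.5 * i / num_frames        # assumed per-frame shrink schedule
        ch, cw = int(h * s), int(w * s)
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = image[top:top + ch, left:left + cw]   # the predefined rectangle
        frames.append(cv2.resize(crop, (w, h)))      # resize to original size
    return frames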

Figure 4. (a) 20th Frame (b) 40th Frame (c) 60th Frame (d) 80th Frame (e) 100th Frame (f) 120th Frame

5.2 Outdoor Scenes (Road Scenes)


The information obtained in Section 4 is used to navigate through a single road image. The inputs for the navigation are the single 2D image and the termination point computed from the distance between the ground-truth position and the detected vanishing point. The frames for navigation are generated by cropping the image, based on its size, up to the computed distance. The input image is taken as the first frame and is cropped to the size of the predefined rectangle; the cropped image is then resized to the original image size and stored in an array of images. An appropriate set of key frames (Figure 5) is determined for each image based on the distance computed using the vanishing point.
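Putting the pieces together, a hypothetical end-to-end run for a road scene could look like the following; the input filename, codec, and frame rate passed to cv2.VideoWriter are illustrative choices, not values from the paper.

import cv2

image = cv2.imread("road.jpg")                               # hypothetical input
n = max(int(termination_point("road.jpg", indoor=False) // 2), 1)  # 1:2 ratio
frames = generate_frames(image, n)

h, w = image.shape[:2]
writer = cv2.VideoWriter("walkthrough.avi",
                         cv2.VideoWriter_fourcc(*"XVID"), 25, (w, h))
for frame in frames:
    writer.write(frame)                       # assemble the navigation video
writer.release()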

Figure 5. (a) 10th Frame (b) 20th Frame (c) 40th Frame (d) 70th Frame (e) 100th Frame (f) 180th Frame

6. EXPERIMENTAL RESULTS
The algorithm was applied to a test set of 250 images obtained from different buildings, all fairly different from one another in interior decoration. Since the indoor images contained a diverse range of orthogonal geometries (wall posters, doors, windows, boxes, cabinets, etc.), we observe that the results presented are indicative of the algorithm's performance on images of new building interiors and scenes. We also evaluated the algorithm (Figure 6(a)) by manually marking the floor path of a set of images and comparing it with the floor path generated by our method; the overall accuracy obtained is 91.46%. For outdoor scenes, taken from real road images in different scenarios that mainly contain a single vanishing point, the results presented are likewise indicative of the algorithm's performance. The images used in the experiments were downloaded from the internet, and a few were captured by us. The steps involve detecting edges with the Canny edge detector, identifying the straight lines, and computing the vanishing point from the intersection points of the lines, all of which lie within the vicinity of the actual vanishing point. Based on the ground-truth position, we compute the distance from the ground-truth position to the computed vanishing point. We also evaluated the algorithm (Figure 6(b)) by manually measuring the distance from the ground-truth position to the vanishing point and comparing it with the distance generated by our method; the overall accuracy obtained is 97%.



Figure 6. (a) Comparison of the floor length computed manually with our method (b) Comparison of vanishing point accuracy computed manually with our method

The first, intermediate and final frames generated by the methods for both indoor and road scenes (Figure 7), after deducing the termination point, can be viewed clearly. The intermediate and final frames reveal finer details that can be used in various applications, including virtual walk-throughs of ancient-time images, forensics, architectural sites, and automated vehicles.

7. CONCLUSION
An algorithm for automatic video generation from a single 2D image has been proposed and tested on both indoor and outdoor images. This paper provides a solution for transforming a static single 2D image into a video clip, which not only helps users enjoy the important details of the image but also provides a vivid viewing experience. The experimental results show that the algorithm performs well on a range of indoor and outdoor scenes; the work was tested on nearly 250 images in difficult scenarios. Future work can extend the method to produce videos that include side views by working at the planar level, which requires maintaining the perspective view of the scene, and to investigate more reliable Region Of Interest (ROI) detection techniques. Even finer details can then be obtained from the key frames used in video generation. The work is done with a view to assisting automated vehicles and robots at low cost.



REFERENCES
[1] Barnard S. T., (1983) "Interpreting perspective images", Artificial Intelligence, 21, pp. 435-462.
[2] Tuytelaars T., Van Gool L., Proesmans M., Moons T., (1998) "The cascaded Hough transform as an aid in aerial image interpretation", Proc. International Conference on Computer Vision, pp. 67-72.
[3] Rother C., (2002) "A new approach for vanishing point detection in architectural environments", Image and Vision Computing, 20, pp. 647-655.
[4] Caprile B. and Torre V., (1990) "Using vanishing points for camera calibration", International Journal of Computer Vision, 4, pp. 127-139.
[5] Simond N., Rives P., (2003) "Homography from a vanishing point in urban scenes", Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1005-1010.
[6] Rasmussen C., (2004) "Grouping dominant orientations for ill-structured road", Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 470-477.
[7] Rasmussen C., Korah T., (2005) "On-vehicle and aerial texture analysis for vision-based desert road", Proc. International Workshop on Computer Vision and Pattern Recognition, pp. 66-71.
[8] Kong H., Audibert J.-Y., Ponce J., (2009) "Vanishing point detection for road detection", Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 96-103.
[9] Kong H., Audibert J.-Y., Ponce J., (2010) "General road detection from a single image", IEEE Transactions on Image Processing, Vol. 19, No. 8, pp. 2211-2220.
[10] Nieto M. and Salgado L., (2007) "Real-time vanishing point estimation in road sequences using adaptive steerable filter banks", Lecture Notes in Computer Science.
[11] Rasmussen C., (2004) "Texture-based vanishing point voting for road shape estimation", BMVC.
[12] Avinash and Murali S., (2005) "A voting scheme for inverse Hough transform based vanishing point determination", Proc. International Conference on Cognition and Recognition, Mysore, India.
[13] Avinash and Murali S., (2007) "Multiple vanishing point determination", Proc. IEEE International Conference on Computer Vision and Information Technology, Aurangabad, India.
[14] Jiang S., Liu H., Zhao Z., Huang Q., Gao W., (2007) "Generating video sequence from photo image for mobile screens by content analysis", ICME, pp. 1475-1478.
[15] Hua X.-S., Lu L., Zhang H.-J., (2004) "Automatically converting photographic series into video", 12th ACM International Conference on Multimedia, pp. 708-715.
[16] Baek Y.-K., Seo Y.-H., Kim D.-W., Yoo J.-S., (2012) "Multiview video generation from 2-dimensional video", International Journal of Innovative Computing, Information and Control, Vol. 8, No. 5(A), pp. 3135-3148.
[17] Yang N.-E., Lee J. W., Park R.-H., (2012) "Depth map generation from a single image using local depth hypothesis", IEEE ICCE, pp. 311-312.
[18] Delage E., Lee H., Ng A. Y., (2006) "A dynamic Bayesian network model for autonomous 3D reconstruction from a single indoor image", CVPR.
[19] Geetha Kiran A. and Murali S., (2013) "Automatic video generation using floor segmentation from a single 2D image", 21st WSCG 2013 Conference on Computer Graphics, Visualization and Computer Vision, ISBN 978-80-86943-76-3.
[20] Delage E., Lee H., Ng A. Y., (2005) "Automatic single-image 3D reconstructions of indoor Manhattan world scenes", ISRR.
[21] Hoiem D., Efros A. A., Hebert M., (2005) "Geometric context from a single image", 10th IEEE International Conference on Computer Vision.
[22] Ma Ling, Wang Jianming, Zhang Bo, Wang Shengbei, (2010) "Automatic floor segmentation for indoor robot navigation", 2nd International Conference on Signal Processing Systems (ICSPS), pp. 684-689.
[23] Cui X.-N., Kim Y.-G., Kim H., (2009) "Floor segmentation by computing plane normals from image motion fields for visual navigation", International Journal of Control, Automation, and Systems, pp. 788-798.
[24] Felzenszwalb P. F. and Huttenlocher D. P., (2004) "Efficient graph-based image segmentation", International Journal of Computer Vision.
[25] Kim Y. G. and Kim H., (2004) "Layered ground floor detection for vision-based mobile robot navigation", IEEE Robotics and Automation (ICRA), Vol. 1, pp. 13-18.
[26] Jung Y. J., Baik A., Kim J., Park D., (2009) "A novel 2D-to-3D conversion technique based on relative height depth cue", Proc. Stereoscopic Displays and Applications XX, Vol. 7237.
[27] Cheng C.-C., Li C.-T., Chen L.-G., (2010) "A novel 2D-to-3D conversion system using edge information", IEEE Trans. Consumer Electronics, Vol. 56, No. 3, pp. 1739-1745.
[28] Gonzalez R. C. and Woods R. E., (2010) Digital Image Processing, Third Edition, Upper Saddle River, NJ: Pearson Education Inc.
[29] Lie W.-N., Chen C.-Y., Chen W.-C., (2011) "2D to 3D video conversion with key-frame depth propagation and trilateral filtering", Electron. Lett., Vol. 47, No. 5, pp. 319-321.
[30] Criminisi A., Reid I., Zisserman A., (2000) "Single view metrology", International Journal of Computer Vision, 40, pp. 123-148.
[31] Han F., Zhu S.-C., (2004) "Automatic single view building reconstruction by integrating segmentation", Conference on Computer Vision and Pattern Recognition Workshop.



Figure 7. (a) First Frame (b) Intermediate Frame (c) Final Frame

