
A

Synopsis
on

Femto Photography

Submitted By

Chirag Joshi

(12EGCEI018)

Submitted To
Ankit Bansal
(Assistant Professor of Department of EIC)
&
Ankesh Gupta
(Assistant Professor of Department of EIC)

Department of Electronic Instrumentation & Control Engineering


Global College of Technology, Jaipur
(Rajasthan Technical University)
© Global College of Technology Jaipur, 2016. All rights reserved.

Abstract

Femto Photography is an imaging solution that allows us to visualize the propagation
of light.

The effective exposure time of each frame is two trillionths of a second and the resultant
visualization depicts the movement of light at roughly half a trillion frames per second.
Direct recording of reflected or scattered light at such a frame rate with sufficient
brightness is nearly impossible. We use an indirect stroboscopic method that records
millions of repeated measurements by carefully scanning in time and across viewpoints.
We then rearrange the data to create a movie of a nanosecond-long event.
The device has been developed by the MIT Media Lab's Camera Culture group in
collaboration with the Bawendi Lab in the Department of Chemistry at MIT. A laser pulse
that lasts less than one trillionth of a second is used as a flash, and the light
returning from the scene is collected by a camera at a rate equivalent to roughly half
a trillion frames per second. However, due to very short exposure times (roughly two
trillionths of a second) and the narrow field of view of the camera, the video is
captured over several minutes by repeated and periodic sampling.
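To see why several minutes of sampling are needed, the back-of-the-envelope Python
sketch below may help; the 13-nanosecond pulse spacing is quoted in the next paragraph,
while the 5-minute capture time is an illustrative assumption, not a published figure.

# Rough timing arithmetic behind the multi-minute acquisition.
REP_INTERVAL_NS = 13        # laser pulse spacing (quoted in the next paragraph)
WINDOW_PS = 480 * 1.71      # recorded window per pulse (~0.82 ns)
CAPTURE_MINUTES = 5         # assumed, illustrative total capture time

pulses_per_second = 1e9 / REP_INTERVAL_NS               # ~7.7e7
total_pulses = pulses_per_second * CAPTURE_MINUTES * 60
print(f"window per pulse: {WINDOW_PS / 1000:.2f} ns")
print(f"pulses per second: {pulses_per_second:.2e}")
print(f"pulses averaged over {CAPTURE_MINUTES} min: {total_pulses:.2e}")

Only a fraction of a nanosecond out of every 13-nanosecond period is recorded, so tens
of billions of repeated, periodic samples are averaged to build up sufficient brightness.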
The new technique, which we call Femto Photography, consists of femtosecond laser
illumination, picosecond-accurate detectors, and mathematical reconstruction techniques.
Our light source is a Titanium Sapphire laser that emits pulses at regular intervals of
13 nanoseconds. These pulses illuminate the scene and also trigger our picosecond-accurate
streak tube, which captures the light returned from the scene. The streak camera has
a reasonable field of view in the horizontal direction but a very narrow one (roughly
equivalent to one scan line) in the vertical dimension. At each recording, we can
therefore only record a 1D movie of this narrow field of view. Each movie comprises
roughly 480 frames, each with a roughly 1.71-picosecond exposure time. Through a system
of mirrors, we orient the view of the camera towards different parts of the object and
capture a movie for each view. We maintain a fixed delay between the laser pulse and our
movie start time. Finally, an algorithm uses this captured data to compose a single 2D
movie of roughly 480 frames, each with an effective exposure time of 1.71 picoseconds.
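The final rearrangement step can be sketched in a few lines of NumPy. This is an
illustrative reconstruction under stated assumptions (480 time bins, a 672-pixel scan
line, and 1000 mirror-scanned vertical positions, matching the resolution quoted in the
literature review), not the authors' released code.

import numpy as np

N_FRAMES = 480       # time bins per streak recording
WIDTH = 672          # horizontal pixels along the scan line
N_SCANLINES = 1000   # vertical positions visited via the mirror system

# Each capture is a 1D movie (time x horizontal position) of one scan
# line; random data stands in for real measurements here.
rng = np.random.default_rng(0)
scanline_movies = rng.random((N_SCANLINES, N_FRAMES, WIDTH))

# Because a fixed delay is kept between the laser pulse and the movie
# start time, frame t of every scan line refers to the same instant
# after the pulse, so the 2D movie is assembled by re-stacking axes.
movie = scanline_movies.transpose(1, 0, 2)  # -> (time, y, x)
print(movie.shape)  # (480, 1000, 672)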
Beyond the potential in artistic and educational visualization, applications include
industrial imaging to analyze faults and material properties, scientific imaging for understanding ultrafast processes and medical imaging to reconstruct sub-surface elements,
i.e., ultrasound with light. In addition, the photon path analysis will allow new forms of
computational photography, e.g., to render and re-light photos using computer graphics
techniques.

Introduction

Forward and inverse analysis of light transport plays an important role in diverse fields,
such as computer graphics, computer vision, and scientific imaging. Because conventional
imaging hardware is slow compared to the speed of light, traditional computer graphics
and computer vision algorithms typically analyze transport using low time-resolution photos. Consequently, any information that is encoded in the time delays of light propagation
is lost. Whereas the joint design of novel optical hardware and smart computation, i.e.,
computational photography, has expanded the way we capture, analyze, and understand
visual information, speed-of-light propagation has been largely unexplored. In this
paper, we present a novel ultrafast imaging technique, which we term femto-photography,
consisting of femtosecond laser illumination, picosecond-accurate detectors, and
mathematical reconstruction techniques, that allows us to visualize movies of light in
motion as it travels through a scene, with an effective frame rate of about one half
trillion frames per second. This allows us to see, for instance, a light pulse
scattering inside a plastic bottle, or image formation in a mirror, as a function of
time.

Literature Review

Ultrafast Devices The fastest 2D continuous, real-time monochromatic camera operates
at hundreds of nanoseconds per frame [Goda et al. 2009] (about 6 × 10⁶ frames per
second), with a spatial resolution of 200 × 200 pixels, less than one third of what we
achieve.
Avalanche photodetector (APD) arrays can reach temporal resolutions of several tens of
picoseconds if they are used in a photon starved regime where only a single photon hits a
detector within a time window of tens of nanoseconds [Charbon 2007]. Repetitive illumination techniques used in incoherent LiDAR [Tou 1995; Gelbart et al. 2002] use cameras
with typical exposure times on the order of hundreds of picoseconds [Busck and Heiselberg 2004; Colaco et al. 2012], two orders of magnitude slower than our system. Liquid
nonlinear shutters actuated with powerful laser pulses have been used to capture single
analog frames imaging light pulses at picosecond time resolution [Duguay and Mattick
1971]. Other sensors that use a coherent phase relation between the illumination and
the detected light, such as optical coherence tomography (OCT) [Huang et al. 1991],
coherent LiDAR [Xia and Zhang 2009], light-in-flight holography [Abramson 1978], or
white light interferometry [Wyant 2002], achieve femtosecond resolutions; however, they
require light to maintain coherence (i.e., wave interference effects) during light transport,
and are therefore unsuitable for indirect illumination, in which diffuse reflections remove
coherence from the light. Simple streak sensors capture incoherent light at picosecond to
nanosecond speeds, but are limited to a line or low-resolution (20 × 20) square field of view
[Campillo and Shapiro 1987; Itatani et al. 2002; Shiraga et al. 1995; Gelbart et al. 2002;
Kodama et al. 1999; Qu et al. 2006]. They have also been used as line scanning devices
for image transmission through highly scattering turbid media, by recording the ballistic

photons, which travel a straight path through the scatterer and thus arrive first at the
sensor [Hebden 1993]. This kind of imaging was first demonstrated by Velten et al. [2012c].
Recently, photonic mixer devices, along with nonlinear optimization, have also been used
in this context [Heide et al. 2013].
This system can record and reconstruct space-time world information of incoherent
light propagation in free-space, table-top scenes, at a resolution of up to 672 × 1000
pixels and under 2 picoseconds per frame. The varied range and complexity of the scenes
we capture allow us to visualize the dynamics of global illumination effects, such as
scattering, specular reflections, interreflections, subsurface scattering, caustics, and
diffraction.
Time-Resolved Imaging Recent advances in time-resolved imaging have been exploited
to recover geometry and motion around corners [Raskar and Davis 2008; Kirmani et al.
2011; Velten et al. 2012b; Velten et al. 2012a; Gupta et al. 2012; Pandharkar et al.
2011] and albedo from a single viewpoint [Naik et al. 2011]. However, none of these
works explores the idea of capturing videos of light in motion in direct view, and they
have fundamental limitations (such as capturing only third-bounce light) that make them
unsuitable for the present purpose. Wu et al. [2012a] separate direct and global
illumination components from time-resolved data captured with the system we describe in
this paper, by analyzing the time profile of each pixel. In a recent publication [Wu et
al. 2012b], the authors present an analysis of transient light transport in frequency
space, and show how it can be applied to bare-sensor imaging.

Method

The system uses a picosecond-accurate detector: a special imager called a streak tube
that behaves like an oscilloscope, with a corresponding trigger and deflection of the
beam. A light pulse enters the instrument through a narrow slit along one direction. It
is then deflected in the perpendicular direction, so that photons that arrive first hit
the detector at a different position than photons that arrive later. The resulting image
forms a streak of light. Streak tubes are often used in chemistry or biology to observe
millimeter-sized objects, but rarely for free-space imaging.
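As a rough illustration of this time-to-space mapping, the toy model below (with
hypothetical parameters, not a physical simulation) bins photons into sensor rows in
proportion to their arrival time, so that two photons entering at the same slit position
at different times land in different rows.

import numpy as np

N_X = 672              # horizontal pixels along the entrance slit
N_T = 480              # vertical pixels, i.e. time bins after deflection
SWEEP_PS = N_T * 1.71  # assumed sweep duration covering the sensor height

def streak_image(photons):
    """photons: iterable of (arrival_time_ps, x_pixel) pairs.
    Returns a 2D image in which later photons land in lower rows."""
    img = np.zeros((N_T, N_X))
    for t_ps, x in photons:
        row = int(t_ps / SWEEP_PS * N_T)  # deflection grows with time
        if 0 <= row < N_T and 0 <= x < N_X:
            img[row, x] += 1.0
    return img

# Two photons at the same slit position but different arrival times
# produce hits in different rows -- the "streak".
img = streak_image([(10.0, 300), (400.0, 300)])
print(np.argwhere(img > 0))  # [[  5 300] [233 300]]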

Conclusion & Future Work

The research fosters new computational imaging and image processing opportunities by
providing incoherent time-resolved information at ultrafast temporal resolutions. We
hope our work will inspire new research in computer graphics and computational
photography, by enabling forward and inverse analysis of light transport, allowing for
full scene capture of hidden geometry and materials, or for relighting photographs. To
this end, captured movies and data of the scenes shown in this paper are available at
femtocamera.info. This exploitation, in turn, may influence the rapidly emerging field
of ultrafast imaging hardware.
The system could be extended to image in color by adding additional pulsed laser
sources at different colors or by using one continuously tunable optical parametric
oscillator (OPO). A second color of about 400 nm could easily be added to the existing
system by doubling the laser frequency with a nonlinear crystal (about $1000). The
streak tube is sensitive across the entire visible spectrum, with a peak sensitivity at
about 450 nm (about five times the sensitivity at 800 nm). Scaling to bigger scenes
would require less time resolution and could therefore simplify the imaging setup.
Scaling should be possible without signal degradation, as long as the camera aperture
and lens are scaled with the rest of the setup. If the aperture stays the same, the
light intensity needs to be increased quadratically to obtain similar results.
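The quadratic requirement can be made concrete with a one-line helper (an illustration
of the stated relationship, not a radiometric derivation):

def required_intensity(base_intensity, scale_factor):
    # With a fixed aperture, the collected light falls off with the
    # square of the scene scale, so the source intensity must grow
    # as scale_factor**2 to keep the recorded signal level constant.
    return base_intensity * scale_factor ** 2

for s in (1, 2, 5, 10):
    print(f"scene scaled {s}x -> laser intensity x{required_intensity(1, s)}")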
Beyond the ability of the commercially available streak sensor, advances in optics,
material science, and compressive sensing may bring further optimization of the system,
which could yield increased resolution of the captured x-t streak images. Nonlinear
shutters may provide an alternate path to femto-photography capture systems. However,
nonlinear optical methods require exotic materials and strong light intensities that can
damage the objects of interest (and must be provided by laser light). Further, they
often suffer from physical instabilities.
The mass production of streak sensors can lead to affordable systems. Also, future
designs may overcome the current limitations of our prototype regarding optical
efficiency. Future research can investigate other ultrafast phenomena such as
propagation of light in anisotropic media and photonic crystals, or may be used in
applications such as scientific visualization (to understand ultrafast processes),
medicine (to reconstruct subsurface elements), material engineering (to analyze
material properties), or quality control (to detect faults in structures). This could
provide radically new challenges in the realm of computer graphics. Graphics research
can enable new insights via comprehensible simulations and new data structures to
render light in motion. For instance, relativistic rendering techniques have been
developed using our data, where the common assumption of constant irradiance over the
surfaces no longer holds [Jarabo et al. 2013]. It may also allow a better understanding
of scattering, and may lead to new physically valid models, as well as spawn new art
forms.

Figure 1: Captured light in motion

