
Vehicle Motion and Pixel Illumination Modeling for Image Sensor Based Visible Light Communication
Takaya Yamazato, Member, IEEE, Masayuki Kinoshita, Student Member, IEEE, Shintaro Arai, Member, IEEE,
Eisho Souke, Tomohiro Yendo, Member, IEEE, Toshiaki Fujii, Member, IEEE,
Koji Kamakura, Member, IEEE, and Hiraku Okada, Member, IEEE

Abstract: Channel modeling is critical for the design and performance evaluation of visible light communication (VLC). Although a considerable amount of research has focused on indoor
VLC systems using single-element photodiodes, there remains a
need for channel modeling of VLC systems for outdoor mobile
environments. In this paper, we describe and provide results for
modeling image sensor based VLC for automotive applications.
In particular, we examine the channel model for mobile movements in the image plane as well as channel decay according to
the distance between the transmitter and the receiver. Optical
flow measurements were conducted for three VLC situations for
automotive use: infrastructure to vehicle VLC (I2V-VLC); vehicle
to infrastructure VLC (V2I-VLC); and vehicle to vehicle VLC
(V2V-VLC). We describe vehicle motion by optical flow with subpixel accuracy using phase-only correlation (POC) analysis and
show that a single-pinhole camera model successfully describes
these three VLC cases. In addition, the luminance of the central
pixel from the projected LED area versus the distance between
the LED and the camera was measured. Our key findings are
twofold. First, a single-pinhole camera model can be applied to
vehicle motion modeling of I2V-VLC, V2I-VLC, and V2V-VLC.
Second, the DC gain at a pixel remains constant as long as the
projected image of the transmitter LED occupies several pixels. In
other words, if we choose the pixel with the highest luminance within the
projected image of the transmitter LED, its value remains constant,
and the signal-to-noise ratio does not change with distance.
Index Terms: Visible light communication (VLC), image sensor, outdoor mobile channel modeling, infrastructure to vehicle VLC (I2V-VLC), vehicle to infrastructure VLC (V2I-VLC), vehicle to vehicle VLC (V2V-VLC), vehicle motion model, optical flow, pinhole camera model, pixel illumination model, DC gain.

I. INTRODUCTION

LIGHT-EMITTING diodes (LEDs) offer a new and revolutionary light source that saves energy [1]. The LED market
continues to grow, with LEDs successfully competing with
conventional light sources used in traffic signal and pedestrian

Manuscript received May 28, 2014; revised November 3, 2014; accepted
April 27, 2015. Date of publication May 12, 2015; date of current version
August 17, 2015. This work was partially supported by Toyota Central R&D
Labs., Inc.
T. Yamazato, M. Kinoshita, T. Fujii, and H. Okada are with Nagoya University, Nagoya 464-8603, Japan (e-mail: yamazato@ieee.org).
S. Arai and E. Souke are with National Institute of Technology, Kagawa
College, Kagawa 769-1192, Japan.
T. Yendo is with Nagaoka University of Technology, Nagaoka 464-8063
Japan.
K. Kamakura is with Chiba Institute of Technology, Chiba 275-0016 Japan.
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JSAC.2015.2432511

lights, electric signage boards, street and area lights, automotive headlights, and brake lights [2]. Visible light communication
(VLC) uses LEDs to not only provide light but also broadcast
data [3]. Since LEDs are solid-state lighting devices, they can
be modulated at high speeds undetectable to the human eye.
Optical wireless channels, including VLC channels, for indoor environments have been extensively investigated [4]-[6].
A considerable amount of research has focused on intensity
modulation with direct detection using a single-element photodiode and its channel model [4]. Ambient light radiation from
the Sun, skylights, streetlights, and others is a dominant source
of noise.
If a single element photodiode is used as a VLC reception
device, the VLC system cannot be used in direct sunlight,
especially for wide field-of-view (FOV) cases because direct
sunlight is typically strong and can often be received at an
average power that is much higher than that of the desired
signal. Furthermore, it is very difficult to reduce the enormous
amount of noise signals from background lights in a wide FOV
to the optical signal level, even if an optical band-pass-filter
is used. Therefore, when a single-element photodiode is used
outdoors, directed linkage with small optical beam divergence
is required; otherwise, the photodiode cannot be used in direct
sunlight in a wide FOV case.
Although fixed free-space optical communication (FSO)
links between buildings have long been established, the outdoor
mobile application of FSO remains a challenge. For example,
in [7], Haruyama et al. succeeded in FSO transmissions of
1 Gbit/s to high-speed trains with mechanical tracking (mirror
actuator). In [8], FSO transmission to an aircraft from a ground
station was demonstrated with a data reception rate of 1 Gbps.
Accurately pointing and tracking a target transmitter is key to
realizing the outdoor mobile application of FSO; unfortunately,
such mechanical tracking systems require large and complex
tracking algorithms.
An alternative approach is to use an image sensor, rather than
a single-element photodiode, as a receiver frontend. Experiments have established that ambient noise can be eliminated
via an image sensor [9]-[13]. Thanks to the spatial separation
of multiple sources, the VLC receiver only uses the pixels
that sense LED transmission sources and discards other pixels,
including those sensing noise sources. Furthermore, a tracking
algorithm based on image-processing techniques is considerably easier to implement than one based on mechanical
techniques. Hence, image sensor based VLC is an attractive solution for outdoor mobile applications; however, little evidence

is available for channel modeling of image sensor based VLC in mobile outdoor environments. In particular, the fluctuation of
the VLC transmitter in the image plane caused by the movement
of a vehicle has not been reported. This fluctuation confuses the
VLC receiver as to which pixels are the correct pixels for data
reception.
The purpose of our present study, therefore, is to describe
and examine a channel model for VLC using an image sensor
for outdoor mobile applications. In particular, we examine the
vehicle motion model in the image plane; further, we examine
the pixel illumination model (channel decay as a function of the distance between the transmitter and the receiver).
The motion of objects (in our case, LEDs) in a visual scene
caused by the relative motion between the camera and the scene
is referred to as optical flow [14]. Optical flow analysis of
vehicles has been extensively investigated. Such studies include
optical flow analysis of vehicles for road-traffic monitoring
[15], [16], lane-change decision-aid systems [17], and optical-flow-based tracking [18]; however, these studies do not consider the following crucial VLC situations for automotive use:
(a) infrastructure to vehicle VLC (I2V-VLC); (b) vehicle to
infrastructure VLC (V2I-VLC); and (c) vehicle to vehicle VLC
(V2V-VLC). We further note that because VLC requires high-frame-rate (HFR) image processing, most vehicle motion is
limited to within one pixel. Therefore, optical flow analysis
with subpixel accuracy is required.
The first contribution of our present study is that a single-pinhole camera model can be applied to the three aforementioned situations (i.e., I2V-VLC, V2I-VLC, and V2V-VLC). We
further reveal that vehicle movement is well approximated by
the vibrations induced by surface irregularities in roads, and
since VLC uses an HFR image sensor, the vehicle motion along
the vertical and horizontal axes of the image plane is limited to
within one pixel. The proposed vehicle motion model can be used
for the design of VLC systems in automotive applications. The
vehicle motion model provides a simulation model that reflects
vehicle motion in an image plane. This is particularly beneficial
for VLC signal detection and tracking design.
Another important channel parameter is luminance intensity
decay or DC gain of the channel. In [19], Siegmann et al.
showed that the A/D converter output signal given by a pixel
of the digital camera can be related to the luminance and the
reflectivity of the corresponding surface element whose image
is formed on a pixel. However, little attention has been paid
to the pixel illumination model (DC gain at a pixel) from the
perspective of VLC signal reception.
The second contribution of our study is that the DC gain
at a pixel remains constant as long as the projected image of
the transmitter LED occupies several pixels. We have derived
simple forms of the gain for image sensor based VLC and
provide insight from the perspective of VLC signal reception.
If we choose a pixel with the highest luminance within several
pixels occupied by the incident light from the LED, then the
luminance value remains constant, and the signal-to-noise ratio
(SNR) does not change according to distance.
The remainder of this paper is organized as follows. In
Section II, we briefly describe image sensor based VLC and
its channel parameters for this study. In Section III, we introduce

Fig. 1. Image sensor based VLC and spatial separation of multiple sources
by an image sensor. Owing to spatial separation of multiple sources, the VLC
receiver uses only the pixels that sense LED transmission sources (i.e., data 1
and data 2) and discards other pixels, including those sensing noise sources
(the Sun).

optical flow measurement for I2V-VLC, V2I-VLC, and


V2V-VLC; we also describe brightness constancy, luminance
sequence in VLC, and phase-only correlation (POC) as the
post-processing of measurement data; finally, we summarize
our experimental results. In Section IV, we present a vehicle
motion model of a VLC transmitter for I2V, V2I, and V2V
using a pinhole camera model and show that simulated results
obtained by our proposed model coincide with our experimental
results. In Section V, we present a pixel illumination model
of image sensor based VLC; our analysis shows that there is
no direct dependence on distance for the DC gain at a pixel.
Finally, in Sections VI and VII, we discuss our results and summarize our findings.
II. IMAGE SENSOR BASED VLC AND ITS CHANNEL PARAMETERS
A photodiode is usually used as a VLC reception device;
however, an image sensor consisting of various pixels can
also be used as a VLC reception device [9]-[13]. Two
particular advantages of image sensor based VLC are
the ability to spatially separate sources and a wide field-of-view
(FOV). Because of the massive number of available pixels and
wide FOV, it is possible to receive and process multiple transmitting sources. Fig. 1 illustrates that the data transmitted from
two different LED transmitters can be captured simultaneously.
Further, outdoor usage of VLC is possible by discarding pixels
associated with noise sources such as the Sun and street lights.
In image sensor based VLC, a transmitted optical signal is
first captured by the image sensor as a relative position in image
coordinate (u, v) in the image sensor plane and a luminance
value [20]. For example, for the transmitted data shown in Fig. 1,
the intensity of the pixel at (u, v) is sampled and demodulation
is performed. Because movement in image coordinate (u, v) of
the target transmitter is important in image sensor based VLC,
we treat such movement as a parameter to evaluate for VLC
channel modeling. Such movement is referred to as optical flow
and is given by the vector (Δu, Δv), where Δu and Δv represent
the distance the LED moves. For the mobile environment,
the image coordinate (u, v) of the transmitter changes because of the
movement of the transmitter, receiver, or both. This translates to


the following three cases: (1) I2V-VLC, (2) V2I-VLC, and (3)
V2V-VLC. Each of these is discussed in more detail below.
1) I2V-VLC: If an image sensor receiver is fixed on board,
say, on the dashboard of a vehicle, and optical data are
transmitted from LEDs placed in a street or traffic light, then
the image sensor receiver moves; therefore, pixel positions of
captured images move according to vehicle movement. Such
movement must be considered to accurately receive the optical
signal.
2) V2I-VLC: Conversely, if a transmitter is moving and an
image sensor receiver is fixed on a road, then the relative position of the transmitter will also shift in the u and v directions of
the image sensor plane.
3) V2V-VLC: In the V2V-VLC case, both the image sensor
receiver and LED transmitter move. The effect of vehicle movement is expected to be large. Due to such vehicle movements,
pixel positions of captured images move in a manner very
similar to that in the I2V-VLC case; in addition, the positions
of the LED transmitters move in a manner similar to that in the
V2I-VLC case.
Another important channel model is the pixel illumination
model. We will describe the model in Section V and show that
there is no direct dependence on distance for the DC gain at a
pixel.
III. OPTICAL FLOW MEASUREMENT
A. Measurement Equipment and Setup
All our measurements were done with a high-speed camera
(HSC) connected to a personal computer (PC).
A Photron IDP-Express R2000 (1,000 frames per second
(fps); resolution, 512 × 1,024 pixels) was used as the HSC.
In our measurements, the pixel size of the image sensor was
10 µm, and it output 8-bit grayscale images. The focal length
of the lens was 35 mm, and the lens diaphragm was set to f/16.
Autofocusing is difficult when a vehicle is moving; thus, the
focus was set to infinity. We recorded for 5 seconds (5,000
frames) for each experiment. The measured data were post-processed in our laboratory using a PC.
For I2V-VLC channel measurement, 1,024 LEDs arranged in
a 32 × 32 matrix were used for the transmitter. The LED spacing
was 15 mm, and its half value angle was 26°. The LEDs used
are the same as those used in LED traffic lights in Japan. All
LEDs were on during measurements.
For V2I-VLC and V2V-VLC channel measurements, a vehicle headlight was used. In both cases, no blinking was performed,
and we focus only on the movement of the headlight in the
captured image.
B. Measurement Scenarios
The measurement sites were at Nagoya University, Japan. All
measurements were conducted during the day (i.e., 10 a.m. to
2 p.m.) on a clear day. Fig. 2 shows the scenario for the three
VLC channels. For I2V channel measurements, as shown in
Fig. 2(a), we set the HSC on the dashboard of the vehicle and
recorded images of an LED array set on the ground. The vehicle


Fig. 2. VLC channel measurement scenarios. (a) I2V-VLC: LED transmitters are placed on a street or in fixed locations, and an image sensor receiver is
on the dashboard of a vehicle. In this case, pixel positions of captured images
move according to vehicle movement. (b) V2I-VLC: LED transmitters are on
vehicles, and receivers are placed on a street or in fixed locations. In this case,
only the position of the mobile transmitter changes, whereas the background
scene does not. (c) V2V-VLC: Both LED transmitters and an image sensor
receiver move. In this case, pixel positions of captured images as well as the
position of the transmitter move.

moved toward the LED array with speeds of 20 and 30 km/h.
For V2I channel measurements, as shown in Fig. 2(b), we set
the HSC on the ground and recorded images of a vehicle with
its headlights on, moving toward the HSC with speeds of 20 and
30 km/h. For V2V channel measurements, as shown in
Fig. 2(c), we set the HSC on the back of the vehicle and
recorded images of the headlight from the vehicle behind. Both
vehicles moved with speeds of 20 and 30 km/h, with a spacing
of approximately 30 m.
Note that the surrounding environment (e.g., urban, mountainous, or coastal areas) or nighttime has little
effect on the channel. At night, LED illuminations, such as streetlights and vehicle lights, are sources of noise. In addition,
during the day, the Sun acts as a source of strong noise, and
the absence of the Sun at night typically improves the detection
and tracking performance of the VLC transmitter; thus, VLC
performance also improves.
C. Post-Processing of Measurement Data
For post-processing of the collected data, we first generated binary images from the measured images by setting a
threshold on the luminance value. The threshold should be
set to the optimum value that reduces the background noise.
For the HSC with 8-bit gray scale, the maximum luminance was
255. We varied the threshold (luminance) value and generated
a binary image with a luminance threshold of 200. With that
threshold, we could eliminate most background noise. We
further performed opening and closing algorithms to remove
morphological noise. The opening algorithm removed small
objects, whereas the closing algorithm removed small holes. We
then calculated the movement of LED optical flow by assuming
brightness constancy in optical flow. Finally, we applied POC
to estimate the movement of the LEDs with subpixel accuracy.


For post-processing, about 30 minutes were required to obtain results from the measured data. Processing the POC takes time
because it estimates the displacement of an LED between two
consecutive frames at 0.1 subpixel accuracy for 5,000 frames.
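The thresholding and morphological steps described above can be sketched as follows. This is a minimal sketch assuming 8-bit grayscale frames as NumPy arrays; the threshold of 200 is from the text, while the 3 × 3 structuring element is our assumption, since the element size is not reported.

import numpy as np
from scipy import ndimage

def binarize_and_clean(frame, threshold=200, struct_size=3):
    """Binarize an 8-bit grayscale frame and remove morphological noise."""
    binary = frame >= threshold                  # keep only high-luminance (LED) pixels
    struct = np.ones((struct_size, struct_size), dtype=bool)
    binary = ndimage.binary_opening(binary, structure=struct)  # remove small objects
    binary = ndimage.binary_closing(binary, structure=struct)  # remove small holes
    return binary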
Below, we briefly explain brightness constancy and the sequence of luminance in VLC. We also explain POC.
1) Brightness Constancy and Sequence of Luminance in
VLC: Let I(u, v, t) be the intensity of a pixel at (u, v) at time t.
Assuming brightness constancy in which the intensity of the
LEDs does not change from one image to another, brightness
constancy can be written as
$I(u, v, t) = I(u + \Delta u, v + \Delta v, t + \Delta t),$  (1)

where Δu and Δv are the movements of the LED, and Δt is the
time difference between two image frames. The reciprocal of Δt
is the frame rate. In our experiment, we set the LED on
throughout the measurement procedure so that the LED signal
at time t and t + Δt (i.e., the next frame) is highly correlated to
meet the brightness constancy assumption.
Assuming that the movement is small, or equivalently a high
frame rate, as in our experimental case, applying a
first-order Taylor expansion to the right-hand side yields the
approximation

$I(u, v, t) = I(u, v, t) + \Delta u \frac{\partial I}{\partial u} + \Delta v \frac{\partial I}{\partial v} + \Delta t \frac{\partial I}{\partial t},$  (2)

which simplifies to the optical flow constraint given as

$\Delta u \frac{\partial I}{\partial u} + \Delta v \frac{\partial I}{\partial v} + \Delta t \frac{\partial I}{\partial t} = 0.$  (3)

In general, both the brightness constancy and the optical flow constraint provide only one constraint on the two unknowns at each
pixel and cannot be solved [14]; however, in VLC, a similar
constancy to augment or replace (1) can be provided.
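For completeness, the standard way to resolve this under-determination without a known sequence is to assume a locally constant flow and solve (3) by least squares over a small window. This is a Lucas-Kanade-style estimator, not the POC method adopted in this paper; a minimal sketch, assuming two consecutive grayscale frames:

import numpy as np

def window_flow(I0, I1, center, half=7):
    """Least-squares (du, dv) over a (2*half+1)^2 window around `center`,
    solving du*dI/du + dv*dI/dv + dI/dt = 0 with dt = 1 frame."""
    r, c = center
    win = np.s_[r - half:r + half + 1, c - half:c + half + 1]
    I0f = I0.astype(float)
    Iu = np.gradient(I0f, axis=1)[win].ravel()   # horizontal spatial gradient
    Iv = np.gradient(I0f, axis=0)[win].ravel()   # vertical spatial gradient
    It = (I1.astype(float) - I0f)[win].ravel()   # temporal difference
    A = np.stack([Iu, Iv], axis=1)
    (du, dv), *_ = np.linalg.lstsq(A, -It, rcond=None)
    return du, dv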
In VLC using HSC, a transmitter can send a known sequence
to the receiver such as a transmitter ID (e.g., for positioning
purposes) and a header sequence for signal acquisition. Suppose that an 11-chip Barker sequence, adopted in the IEEE
802.11b standard, is used for the header; then the on and
off pattern of the LED is 11100010010, with on indicated
by 1 and off indicated by 0. In this case, instead of the
brightness constancy, the sequence of luminance is supplied.
It is generally accepted that no perfect binary phase
sequence with maximum autocorrelation exists other than the
Barker sequences. Therefore, the sequence of luminance is more
robust to illumination changes and other appearance changes
that are not tolerated by the brightness constancy assumption.
Further, the blinking rate of the LED is usually considerably
faster, and such a fast blinking rate is rare in the natural world.
Therefore, this time-domain feature can easily be extracted
by the receiver for signal acquisition and tracking. This is an
advantage of VLC using HSC; thus, we can design a sequence
that can provide robust optical flow performance as well as
enhance detection and tracking performance.
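As an illustration, acquisition against the 11-chip Barker header can be done by correlating a pixel's luminance sequence with the bipolar pattern. The on/off sequence 11100010010 is from the text; the slicing threshold and noise level below are illustrative assumptions.

import numpy as np

BARKER11 = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0], dtype=float)

def barker_acquire(luminance):
    """Find the start of the Barker header in a per-pixel luminance trace.

    The correlation is done in bipolar (+1/-1) form, where the 11-chip
    Barker sequence has a peak autocorrelation of 11 and sidelobes of
    magnitude at most 1.
    """
    ref = 2.0 * BARKER11 - 1.0                        # map {0, 1} -> {-1, +1}
    sig = 2.0 * (luminance > luminance.mean()) - 1.0  # crude on/off slicing (assumed)
    corr = np.correlate(sig, ref, mode="valid")
    return int(np.argmax(corr)), float(corr.max())

# Example: header embedded at frame 20 in a noisy trace
trace = np.concatenate([np.zeros(20), 255 * BARKER11, np.zeros(20)])
trace = trace + np.random.normal(0, 10, trace.size)   # assumed sensor noise
start, peak = barker_acquire(trace)                   # start ~ 20, peak ~ 11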
2) POC: POC is a powerful method for estimating motion between
two images; the approach is based on the well-known Fourier
shift property, i.e., a shift in the spatial domain of two images
results in a linear phase difference in the frequency domain of
the Fourier transform.

Let us consider two images captured at time t and t + 1,
denoted by f(u, v, t) and f(u, v, t + 1), respectively. If we assume that during one frame sample period the image is shifted
by Δu and Δv, then f(u, v, t + 1) = f(u + Δu, v + Δv, t). If the
corresponding Fourier transforms are denoted by F(u, v, t) and
F(u, v, t + 1), we obtain

$F(u, v, t + 1) = F(u, v, t) \exp\{-i(u\Delta u + v\Delta v)\}.$  (4)

The phase correlation is defined as the normalized cross-power
spectrum between F(u, v, t) and F(u, v, t + 1), i.e.,

$Q(u, v) = \frac{F(u, v, t)\,F^*(u, v, t + 1)}{\left|F(u, v, t)\,F^*(u, v, t + 1)\right|} = \exp\{i(u\Delta u + v\Delta v)\},$  (5)

where $^*$ denotes the complex conjugate. The inverse Fourier
transform of Q(u, v) is a delta function, and its peak identifies
the integer magnitude of the shift between the pair of images.
More specifically,

$q(u, v) = \delta(u + \Delta u, v + \Delta v).$  (6)

POC achieves subpixel accuracy by replacing the amplitude
components of F(u, v, t) and F(u, v, t + 1) with unity and then taking
the inverse Fourier transform of the synthetic image $H(u, v) = F'(u, v, t)\,(F'(u, v, t + 1))^*$, where F'(u, v, t) and F'(u, v, t + 1) are the phase components of F(u, v, t) and F(u, v, t + 1),
respectively. The computational load is dramatically reduced in
POC because it focuses only on the phase components.

Introducing small values δu and δv, we rewrite Δu = [Δu] + δu and Δv = [Δv] + δv, where [x] denotes the nearest integer to the real number x. Then, we obtain the inverse Fourier transform
of H(u, v) as

$h(u - \Delta u, v - \Delta v) = \begin{cases} 1, & (u = [\Delta u] + \delta u,\ v = [\Delta v] + \delta v) \\ \varepsilon\ (<1), & (u = [\Delta u],\ v = [\Delta v]) \\ \varepsilon, & \text{otherwise,} \end{cases}$  (7)

where ε is the noise term. Note that since [Δu] and [Δv] are
integers, the peak is reduced to ε (< 1). The phase difference
with subpixel accuracy can be obtained by finding the best two-dimensional fit of the phase differences, δu and δv in (7), such
that the peak reaches 1 [13], [21]. If we approximate (7) by the
sinc function, then we obtain

$h(u - \Delta u, v - \Delta v) \approx \mathrm{sinc}(u - [\Delta u] - \delta u)\,\mathrm{sinc}(v - [\Delta v] - \delta v).$  (8)
In our experiments, we varied δu and δv in 0.1 subpixel
steps to measure the movements.
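The procedure of (4)-(8) can be sketched as below. The sub-pixel refinement here scans a fractional grid around the integer peak by evaluating the inverse transform of the phase spectrum directly, which is one way to realize the 0.1-step search described above (the paper's exact fitting procedure follows [13], [21]); pure translation and FFT wrap-around are assumed.

import numpy as np

def poc_displacement(f0, f1, step=0.1):
    """Phase-only correlation between two frames with sub-pixel refinement.

    Returns (dv, du) such that f1 is approximately f0 shifted by dv rows
    and du columns. The refinement scans a `step`-spaced grid around the
    integer peak, one realization of the 0.1-subpixel search in the text.
    """
    M, N = f0.shape
    F0, F1 = np.fft.fft2(f0), np.fft.fft2(f1)
    Q = np.conj(F0) * F1
    Q /= np.abs(Q) + 1e-12                              # keep only the phase component
    q = np.real(np.fft.ifft2(Q))
    iv, iu = np.unravel_index(np.argmax(q), q.shape)    # integer part of the shift
    kv = np.fft.fftfreq(M)[:, None]                     # normalized frequencies (rows)
    ku = np.fft.fftfreq(N)[None, :]                     # normalized frequencies (cols)
    best = (-np.inf, float(iv), float(iu))
    for dv in np.arange(iv - 1, iv + 1 + step / 2, step):
        for du in np.arange(iu - 1, iu + 1 + step / 2, step):
            # peak value of the continuous correlation evaluated at (dv, du)
            peak = np.real(np.mean(Q * np.exp(2j * np.pi * (kv * dv + ku * du))))
            if peak > best[0]:
                best = (peak, dv, du)
    return best[1], best[2]

# Example: recover a synthetic shift of (2, 3) pixels
f0 = np.random.rand(64, 64)
f1 = np.roll(f0, (2, 3), axis=(0, 1))
print(poc_displacement(f0, f1))                         # ~ (2.0, 3.0)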
D. Experimental Results
Fig. 3 shows our experimental results for the case of I2V with
a speed of 30 km/h. The joint probability density of Δu and Δv is
shown in Fig. 3(a), and its horizontal and vertical cross-sections
are shown in Fig. 3(b) and (c), respectively. As the images were
captured at 1,000 fps (1 ms), the optical flow calculated by (7)


Fig. 3. Experimental results of I2V with a speed of 30 km/h obtained from 5,000 frames of images captured at 1,000 fps. The occurrence probability for optical flows Δu and Δv calculated by POC in 0.1 subpixel step accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (μ_Δu = 0.1, σ²_Δu = 1.52 × 10⁻²). (c) Cross-section of (a) in the Δv (vertical) direction (μ_Δv = 0.1, σ²_Δv = 3.95 × 10⁻²).

is small, such that [Δu] = 0 and [Δv] = 0; therefore, Δu = δu
and Δv = δv.

We observe that the mean values of Δu and Δv are
μ_Δu = 0.1 and μ_Δv = 0.1 pixels, respectively. These shifts
occurred because the LED array was located away from the center of the images. The variances were σ²_Δu = 1.52 × 10⁻² and
σ²_Δv = 3.95 × 10⁻² for Δu and Δv, respectively. Note that the maximum
flows were 1.5 pixels horizontally and 1.4 pixels vertically.
As the vehicle movement primarily came from the vibrations
induced by surface irregularities, we observe that σ²_Δv > σ²_Δu.
Similar tendencies were observed in the case of V2I.
Fig. 4 shows our experimental results for this case. The means
were shifted 0.1 pixels horizontally, and the variances were
σ²_Δu = 6.11 × 10⁻⁴ and σ²_Δv = 1.14 × 10⁻³ for Δu and Δv, respectively. The maximum flows were only 0.2 pixels in both the horizontal and vertical directions. Although their means were shifted,
the motion characteristics of V2I and I2V had similar properties,
i.e., flows primarily distributed around the mean and vertical flow greater
than horizontal flow; however, their variances, especially in
the vertical direction, were different. The variance of I2V was
greater than that of V2I. These results show that I2V has more
complex motion characteristics than V2I.
For the case of V2V, shown in Fig. 5, similar tendencies were
observed. The means were shifted 0.1 pixels horizontally, and
the variances were σ²_Δu = 3.64 × 10⁻³ and σ²_Δv = 6.40 × 10⁻² for
Δu and Δv, respectively. For this case, the maximum flows were
0.7 pixels horizontally and 1.8 pixels vertically. The variance of Δv is the largest of all three cases. For the vertical direction,
both the transmitter and receiver moved; thus, the effect of
vibration was larger than in other cases. Conversely, the effect


Fig. 4. Experimental results of V2I with a speed of 30 km/h obtained from 5,000 frames of images captured at 1,000 fps. The occurrence probability for optical flows Δu and Δv calculated by POC in 0.1 subpixel step accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (μ_Δu = 0.1, σ²_Δu = 6.11 × 10⁻⁴). (c) Cross-section of (a) in the Δv (vertical) direction (μ_Δv = 0.1, σ²_Δv = 1.14 × 10⁻³).

Fig. 5. Experimental results of V2V with a speed of 30 km/h obtained from 5,000 frames of images captured at 1,000 fps. The occurrence probability for optical flows Δu and Δv calculated by POC in 0.1 subpixel step accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (μ_Δu = 0.1, σ²_Δu = 3.64 × 10⁻³). (c) Cross-section of (a) in the Δv (vertical) direction (μ_Δv = 0.1, σ²_Δv = 6.40 × 10⁻²).

of vehicle movement in the horizontal direction was reduced because the transmitter vehicle chased the receiver vehicle and
the inter-vehicle distance was mostly constant (approximately
30 m) during the measurement.


Fig. 6. Geometry of a pinhole camera; the HSC is set at the origin, and the world coordinate (x, y, z) is projected to the image coordinate (u, v).

Moreover, for the case in which the vehicle moved with
a speed of 20 km/h, the vehicle motion characteristics showed
similar results, but the variances were narrower. Therefore, we
conclude that channel fluctuation is related to vehicle speed.
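The statistics reported above (means, variances, and the occurrence probabilities of Figs. 3-5) can be computed from the per-frame POC estimates. A minimal sketch, assuming `flows` is an (N, 2) array of measured (Δu, Δv) pairs and a ±2 pixel histogram range:

import numpy as np

def flow_statistics(flows, step=0.1, span=2.0):
    """Means, variances, and the joint occurrence probability of the
    optical flows, binned on the 0.1-pixel grid used in the measurements."""
    du, dv = flows[:, 0], flows[:, 1]
    stats = {"mean_u": du.mean(), "mean_v": dv.mean(),
             "var_u": du.var(), "var_v": dv.var()}
    edges = np.arange(-span - step / 2, span + step, step)   # +/- span pixels (assumed range)
    joint, _, _ = np.histogram2d(du, dv, bins=(edges, edges))
    return stats, joint / joint.sum()                        # normalized occurrence probability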
IV. VEHICLE MOTION MODEL OF VLC TRANSMITTER FOR I2V, V2I, AND V2V
Comparing the image sequences of I2V and V2I, the entire
image moves between frames for I2V, whereas only the transmitter moves between frames and the scenery is static for V2I;
however, both channels fluctuate between frames. In this
section, we focus on the vehicle motion of the VLC transmitter
on the captured images for I2V and V2I and derive a vehicle
motion model. Further, we extend the proposed vehicle motion
model for V2V-VLC.
A. Vehicle Motion Model of VLC Transmitter
We propose the vehicle motion model of the VLC transmitter on the captured image by introducing a pinhole camera
model. The pinhole camera model is a model for projecting
world coordinate (x, y, z) to image coordinate (u, v) [22], [23].
Fig. 6 shows the projection model of the pinhole camera. Its
imaging process can be expressed as

$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix},$  (9)

where λ is an arbitrary scale factor, f is the focal length, R is a
3 × 3 rotation matrix, and T is a translation vector. The rotation
matrix R is defined as

$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix} \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix},$  (10)

where θ, φ, and ψ are the rotation angles about the X-, Y-, and Z-axes, respectively, as shown in Fig. 6. On the right-hand side of (9), the first matrix is called the camera calibration
parameter.
Suppose that the camera calibration parameter is constant.
Let R and T be extrinsic parameters such that R is the camera
posture and T is the camera position. Further, we consider the
world coordinate component as the position of the transmitter.
We apply this camera model to I2V-VLC and V2I-VLC below.
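Before specializing to each case, a minimal sketch of the projection (9)-(10) follows; angles are in radians, and the example point and posture tilt are illustrative assumptions.

import numpy as np

def rotation_matrix(theta, phi, psi):
    """R in (10): rotations about the X-, Y-, and Z-axes, respectively."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cs, ss = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cs, -ss, 0], [ss, cs, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(xyz, f, R=np.eye(3), T=np.zeros(3)):
    """Pinhole projection (9): world point (x, y, z) -> image point (u, v)."""
    K = np.array([[f, 0, 0], [0, f, 0], [0, 0, 1]])
    p = K @ (R @ np.asarray(xyz, dtype=float) + T)
    return p[0] / p[2], p[1] / p[2]   # division removes the scale factor lambda

# Example: LED 30 m ahead, slightly off-axis; camera posture tilted 0.001 rad
u, v = project((1.0, -1.2, 30.0), f=35e-3, R=rotation_matrix(1e-3, 0, 0))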
1) I2V-VLC: For I2V-VLC, the camera moves with the
vehicle, and the transmitter is static. Therefore, the camera posture
R fluctuates based on vehicle vibrations, and the camera position
T can be expressed as the sum of a time function and a
vibration component. Thus, the vehicle motion model of the
VLC transmitter for I2V-VLC can be expressed as

$\lambda \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & T_x(t) + n_{1x} \\ r_{21} & r_{22} & r_{23} & T_y(t) + n_{1y} \\ r_{31} & r_{32} & r_{33} & T_z(t) + n_{1z} \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix},$  (11)

where T_x(t), T_y(t), and T_z(t) are the time functions, and n_{1x}, n_{1y},
and n_{1z} are the vibration components of T, respectively.
2) V2I-VLC: Conversely, for V2I-VLC, the camera is static,
but the transmitter moves with the vehicle. Therefore, the
extrinsic parameters are constant, and the transmitter position
can be expressed as the sum of a time function and a vibration
component.

Suppose the camera is at the origin; then, the vehicle motion
model of the VLC transmitter for V2I-VLC can be expressed as

$\lambda \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x(t) + n_{2x} \\ y(t) + n_{2y} \\ z(t) + n_{2z} \\ 1 \end{bmatrix},$  (12)

where x(t), y(t), and z(t) are the time functions, and n_{2x}, n_{2y}, and
n_{2z} are the vibration components of (x, y, z), respectively.
3) V2V-VLC: The models proposed above are applicable
to V2V-VLC, in which both the transmitter and receiver are
mobile. The vehicle motion model of the VLC transmitter for
V2V-VLC can be expressed as

$\lambda \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & T_x(t) + n_{1x} \\ r_{21} & r_{22} & r_{23} & T_y(t) + n_{1y} \\ r_{31} & r_{32} & r_{33} & T_z(t) + n_{1z} \end{bmatrix} \begin{bmatrix} x(t) + n_{2x} \\ y(t) + n_{2y} \\ z(t) + n_{2z} \\ 1 \end{bmatrix}.$  (13)
4) Comparison Between I2V-VLC and V2I-VLC: We consider a case in which the camera posture is fixed, i.e., R equals the
unit matrix. Suppose the vehicle speed and the initial position of the
transmitter are v = (v_x, v_y, v_z) and x = (x_LED, y_LED, z_LED), respectively. For simplicity, we assume n_{1x} = n_{1y} = n_{1z} = 0 and

TABLE I
SIMULATION PARAMETERS FOR HSC


TABLE II
VEHICLE VIBRATION PARAMETERS FOR (16) AND (17)

n_{2x} = n_{2y} = n_{2z} = 0. Under these conditions, the expansion of
(11) yields

$u(t) = f\,\frac{x_{\mathrm{LED}} + T_x(t)}{z_{\mathrm{LED}} + T_z(t)} = f\,\frac{x_{\mathrm{LED}} + v_x t}{z_{\mathrm{LED}} + v_z t}, \qquad v(t) = f\,\frac{y_{\mathrm{LED}} + T_y(t)}{z_{\mathrm{LED}} + T_z(t)} = f\,\frac{y_{\mathrm{LED}} + v_y t}{z_{\mathrm{LED}} + v_z t}.$  (14)

Under the same conditions, the expansion of (12) yields

$u(t) = f\,\frac{x(t)}{z(t)} = f\,\frac{x_{\mathrm{LED}} + v_x t}{z_{\mathrm{LED}} + v_z t}, \qquad v(t) = f\,\frac{y(t)}{z(t)} = f\,\frac{y_{\mathrm{LED}} + v_y t}{z_{\mathrm{LED}} + v_z t}.$  (15)

Comparing (14) and (15), the vehicle motion model of I2V
corresponds with that of V2I. In other words, the camera motion
for I2V and the transmitter motion for V2I are equivalent. With
a fixed camera posture, the correspondence of I2V and V2I
indicates that the difference between I2V and V2I is whether
the camera posture fluctuates or is static. Therefore, changes in
the relative distance between the transmitter and receiver are
equivalent for both cases, even with vibration.
Note that an object located at the center of the image remains in its position, whereas an object located away from the
center moves toward the outside of the image as the vehicle
approaches. In addition, the apparent size of the object in the
image depends on the distance to the camera.
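As a concrete illustration of (14), the sketch below traces the projected position of a static LED as the camera approaches at constant speed. The LED offset from the optical axis and the 3 s duration are illustrative assumptions, while f = 35 mm and 30 km/h follow the measurement setup.

import numpy as np

def i2v_trajectory(f, led_xyz, v_xyz, t):
    """u(t), v(t) from (14): fixed camera posture, no vibration terms."""
    x, y, z = led_xyz                 # initial LED position (x_LED, y_LED, z_LED)
    vx, vy, vz = v_xyz                # relative speed (v_x, v_y, v_z)
    u = f * (x + vx * t) / (z + vz * t)
    v = f * (y + vy * t) / (z + vz * t)
    return u, v

t = np.linspace(0.0, 3.0, 3001)       # 3 s of frames at 1,000 fps
# LED 30 m ahead, 1 m right of and 1.2 m below the axis; 30 km/h ~ 8.33 m/s approach
u, v = i2v_trajectory(35e-3, (1.0, -1.2, 30.0), (0.0, 0.0, -8.33), t)

As the text notes, the off-axis point drifts toward the edge of the image as the distance closes.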
B. Simulations
To show the validity of our proposed vehicle motion models,
we performed simulations that assumed similar situations for
our experiments. For the case of I2V, we moved the camera
position toward the transmitter along the Z-axis at a constant
speed with vibration (Tx (t) = Ty (t) = 0). For the case of V2I,
we moved the transmitter position toward the camera along the
Z-axis with vibration (x(t) = y(t) = 0). For the case of V2V, we
moved both the transmitter and camera in the same direction at
the same speed with a spacing of 30 m. Then, we detected Δu and
Δv between frames and evaluated the motion characteristics
via the probability density of Δu and Δv. Table I shows our
simulation parameters for the HSC. Note that the vehicle moved with
a speed of 30 km/h in all our simulations.
In [24], the authors showed that vehicle vibrations are mainly distributed at frequencies below 20 Hz.
Therefore, we assumed vehicle vibrations described by summing sinusoidal waveforms of 3 Hz and 10 Hz as representative vehicle vibration frequencies [24], [25]. In addition,
we assumed that road surface irregularity is described by Gaussian
random variables [25], [26]. Therefore, the vehicle vibration
component was expressed as

$\begin{bmatrix} n_x \\ n_y \\ n_z \end{bmatrix} = \begin{bmatrix} \sum_i A_{xf_i} \cos 2\pi f_i t + G_x \\ \sum_i A_{yf_i} \cos 2\pi f_i t + G_y \\ \sum_i A_{zf_i} \cos 2\pi f_i t + G_z \end{bmatrix},$  (16)

where A_{xf_i}, A_{yf_i}, and A_{zf_i} are the amplitudes of the sinusoidal waveforms
with frequency f_i representing vehicle vibration, and G_x, G_y, and
G_z are Gaussian random variables representing road surface
irregularity, for n_x, n_y, and n_z, respectively. In this simulation,
since the vehicle drives along the Z-axis, the vibration component in the Z-direction can be ignored (n_z ≈ 0).
The fluctuation of camera posture was expressed as

$\begin{bmatrix} \theta \\ \phi \\ \psi \end{bmatrix} = \begin{bmatrix} \sum_i B_{\theta f_i} \cos 2\pi f_i t + G_\theta \\ \sum_i B_{\phi f_i} \cos 2\pi f_i t + G_\phi \\ \sum_i B_{\psi f_i} \cos 2\pi f_i t + G_\psi \end{bmatrix}.$  (17)

For simplicity, we only considered θ and φ as fluctuations of
camera posture [26]. Also, B_{θf_i}, B_{φf_i}, and B_{ψf_i} are the amplitudes
of the sinusoidal waveforms with frequency f_i representing vehicle
vibration, and G_θ, G_φ, and G_ψ are Gaussian random variables
representing road surface irregularity, for θ, φ,
and ψ, respectively.
Table II summarizes the vehicle vibration parameters of
(16) and (17). The parameters depicted in the above tables
were determined based on the experimental results discussed
in Section III-D.
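One component of (16) or (17) can be generated per frame as below. This is a minimal sketch in which the amplitudes and the Gaussian standard deviation are placeholders, since the Table II values are not reproduced here, and the Gaussian road-irregularity term is drawn independently per frame (our assumption).

import numpy as np

def vibration_component(t, freqs=(3.0, 10.0), amps=(1e-3, 5e-4), sigma=1e-4,
                        rng=None):
    """One component of (16)/(17): sinusoids at the representative vehicle
    vibration frequencies plus a Gaussian road surface irregularity term.
    amps and sigma are placeholders, not the Table II values."""
    rng = np.random.default_rng() if rng is None else rng
    n = sum(A * np.cos(2 * np.pi * f * t) for f, A in zip(freqs, amps))
    return n + rng.normal(0.0, sigma, size=np.shape(t))

t = np.arange(5000) / 1000.0        # 5 s of frames at 1,000 fps
n1y = vibration_component(t)        # e.g., the vertical camera translation term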
Figs. 7, 8, and 9 show our simulation results. From the
results, we observe that the probability density of Δu and Δv was
distributed mainly around an average of 0 pixels for all cases. This
was caused by the vehicle moving only in the Z-direction in
our simulation. Compared with the experimental results discussed
in Section III-D, the form and tendency of the probability
distributions were quite similar, although their means were
shifted. The variances were also in close agreement with our
experimental results.
We further compared the two probability distributions by calculating the
Kullback-Leibler (KL) divergence. The KL divergence is always non-negative, with zero indicating that the two distributions
are equal. Thus, the smaller the value, the more likely the two
distributions are the same. We normalized the two distributions and
calculated the KL divergence between the experimental results and
the simulation results for each vehicle motion characteristic.
The KL divergences for I2V, V2I, and V2V were 0.363, 0.259,
and 0.491, respectively; these values are relatively small.
For reference, we compared the standard Gaussian distribution,
i.e., N₁(0, 1.0²), with Gaussian distributions having different
variances, i.e., N₂(0, 1.1²) and N₃(0, 1.2²). The KL divergence between N₁(0, 1.0²) and N₂(0, 1.1²) was 0.123, and the KL


Fig. 7. Simulation results of I2V with a speed of 30 km/h obtained from 5,000 frames of simulation at 1,000 fps. The occurrence probability for optical flows Δu and Δv in 0.1 subpixel step accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (μ_Δu = 0, σ²_Δu = 4.57 × 10⁻²). (c) Cross-section of (a) in the Δv (vertical) direction (μ_Δv = 0, σ²_Δv = 5.97 × 10⁻²).

Fig. 9. Simulation results of V2V with a speed of 30 km/h obtained from 5,000 frames of simulation at 1,000 fps. The occurrence probability for optical flows Δu and Δv in 0.1 subpixel step accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (μ_Δu = 0, σ²_Δu = 6.23 × 10⁻³). (c) Cross-section of (a) in the Δv (vertical) direction (μ_Δv = 0, σ²_Δv = 1.23 × 10⁻²).

Fig. 10. Vehicle speed vs. σ²_Δv for I2V, V2I, and V2V. This graph shows the variance σ²_Δv at each speed (20 km/h to 100 km/h) in the simulation. Note that we only plot σ²_Δv because σ²_Δv > σ²_Δu.

Fig. 8. Simulation results of V2I with a speed of 30 km/h obtained from 5,000 frames of simulation at 1,000 fps. The occurrence probability for optical flows Δu and Δv in 0.1 subpixel step accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (μ_Δu = 0, σ²_Δu = 5.47 × 10⁻⁴). (c) Cross-section of (a) in the Δv (vertical) direction (μ_Δv = 0, σ²_Δv = 1.93 × 10⁻³).

divergence between N₁(0, 1.0²) and N₃(0, 1.2²) was 0.426. Compared with these reference values, our results are acceptable as good approximations. Consequently, our proposed
vehicle motion models are valid. According to
our proposed vehicle motion models, the difference in variance
between I2V and V2I is caused by fluctuations of camera
posture.
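The KL comparison above can be computed directly from the binned occurrence probabilities. A minimal sketch follows; the regularization of empty bins is our implementation choice, as the paper does not state its handling.

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) in nats for two histograms over the same bins.

    eps regularizes empty bins; this is an implementation choice,
    not taken from the paper.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()                     # normalize to probability distributions
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Example usage: compare measured vs. simulated flow histograms
# d_i2v = kl_divergence(hist_experiment_i2v, hist_simulation_i2v)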
Using our proposed vehicle motion models, we performed
another simulation by varying the vehicle speed from 20 km/h
to 100 km/h under the same conditions described above.
Figs. 10 and 11 show the vehicle speed characteristics of Δu and
Δv. As the vehicle speed increased, the variance of the motion characteristics increased, as shown in Fig. 10. Note that we only
plot σ²_Δv because σ²_Δv > σ²_Δu. Since the inter-vehicle distance
was constant (30 m) for V2V in the simulation, the increment


Fig. 12. Simplified diagram of the ray-tracing between an LED and the image
sensor.

Fig. 11. Vehicle speed vs. the probability of Δu ≥ 1 or Δv ≥ 1 for I2V, V2I, and V2V at each speed (20 km/h to 100 km/h) in the simulation.

of the V2V variance was lower than that of I2V. Fig. 11 shows
the probability of Δu ≥ 1 or Δv ≥ 1. The graph shows that the
probability of the optical flow exceeding one pixel is small;
e.g., it is less than 10% for the worst case (I2V with a speed of
100 km/h). This implies that fast tracking is possible and that the fluctuation does
not affect the communication performance.
V. PIXEL ILLUMINATION MODEL

A. DC Gain at Image Sensor

Here, we consider simple ray-tracing between an LED (transmitter) and the image sensor, as shown in Fig. 12. Let d be
the distance between the LED and the lens (camera); further, let
d' be the image distance, and f be the focal length of the lens.
Then, the well-known thin lens equation is given as

$\frac{1}{f} = \frac{1}{d} + \frac{1}{d'},$  (18)

or in the Newtonian form as

$f^2 = (d - f)(d' - f).$  (19)

The lens magnification, M, is given by

$M = \frac{d'}{d} = \frac{f}{d - f} \approx \frac{f}{d},$  (20)

where, for the rightmost term, we assume d ≫ f without loss
of generality.

The emitted luminous intensity of an LED is given by the
Lambertian radiant intensity [27]-[29]

$R_o(\phi) = \frac{m + 1}{2\pi} \cos^m(\phi),$  (21)

where φ is the directivity of an LED, and m is the order of
Lambertian emission [4], i.e.,

$m = \frac{-\ln 2}{\ln \cos \Phi_{1/2}},$  (22)

where Φ_{1/2} is the half value angle of an LED.

The total DC gain is given as

$H_t(0) = \frac{A_p}{d^2} R_o(\phi) \cos(\theta),$  (23)

where θ is the angle of incidence with respect to the receiver
axis, d is the distance between the transmitter and receiver, and
A_p is the area of the entrance pupil of the camera lens [4].

B. Pixel Illumination Model (DC Gain at a Pixel Considering
LED Area)

The projected area of an LED (S_I [pixels]) can be geometrically calculated from the actual area of the LED (S [m²]). Let h
and w be the height and width of the actual LED, respectively.
Since its projected height (h_I) and width (w_I) are obtained using
M, S_I is expressed as

$S_I = \frac{h_I w_I}{a^2} = \frac{M^2 h w}{a^2} = \frac{1}{a^2} \frac{f^2 S}{d^2},$  (24)

where a is the pixel size used to convert the projected image into
pixels. More specifically, a² indicates the sensor size per pixel.

Fig. 13. Projected image of an LED for near and far distances; when the distance is near, the projected image occupies several pixels, whereas when the distance is far, the image fits into only one pixel (i.e., no digital image can be formed).

If S_I > 1, then the projected image of the LED occupies
several pixels, as illustrated in Fig. 13. This is the case when
d is short, and a digital electronic representation of the LED can
be formed using these pixels. Since the incident light from the
LED is spread over S_I pixels, we can obtain the DC gain at a pixel as

$H_p(0) = \frac{H_t(0)}{S_I} = \frac{a^2 A_p}{f^2 S} R_o(\phi) \cos(\theta).$  (25)

Equation (25) shows that there is no direct dependence on distance for
the DC gain. Conversely, if S_I ≤ 1, then the LED fits into only a
single pixel, and no digital image can be formed. In this case, all


Fig. 14. Experimental results: [left vertical axis] normalized highest luminance extracted from the projected LED area (i.e., the central pixel of the LED) versus the distance between the LED and the camera; [right vertical axis] geometrically calculated LED pixel size (i.e., projected area of the LED, S_I) versus distance. Below d = 15 m, both results show a highly steady state. Beyond d = 15 m, the curves fall off in inverse proportion to d².

incident light from the LED falls on a single pixel. Therefore,
we can obtain the DC gain at a pixel as

$H_p(0) = H_t(0) = \frac{A_p}{d^2} R_o(\phi) \cos(\theta).$  (26)

C. Experimental Results
We performed an experiment to directly measure the luminance of the central pixel of the projected LED area. The
experimental equipment (HSC) and parameters are the same as
those used for the optical flow measurements described earlier.
We placed a single LED with a diameter of 5 mm face-to-face
with the camera (a = 10 µm) and then captured the lighted
LED at each change in the distance between them. We chose
the highest pixel value (i.e., the central pixel) within the pixels
occupied by the incident light from the LED and extracted this
value as the LED luminance value.
Fig. 14 shows the normalized extracted luminance versus the
distance (d) between the LED and the camera.
We also plotted S_I, geometrically calculated at each
distance, in the same figure. The curve
of the extracted luminance nearly corresponds to the S_I curve.
Below d = 15 m, both results are kept in a steady state at
a high level. Note that these curves do not fully correspond
to each other because the units differ. Beyond d = 15 m, the
two curves fell off in inverse proportion to d². Let us focus
on S_I around d = 15 m. When d = 15 m and 16 m, S_I was
approximately 1.07 pixels and 0.94 pixels, respectively. This
indicates that the projected area of the LED dropped below
a² (= 1 × 10⁻¹⁰ m²) at 16 m. More specifically, S_I fell below
1 pixel beyond 16 m. In this case, the single pixel cannot receive
enough LED light since the LED fits in only a single
pixel, as described above. Thus, the LED luminance
decreases in inverse proportion to d² when S_I < 1 pixel.
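The two regimes in (24)-(26) can be checked numerically. Below is a minimal sketch assuming the experiment's reported parameters (f = 35 mm, 10 µm pixels, a 5 mm diameter LED treated as a circular emitter, the 26° half value angle, and an entrance pupil estimated from the f/16 diaphragm setting, which is our inference); with these values it reproduces the crossover S_I ≈ 1 between d = 15 m and 16 m.

import numpy as np

def lambertian_order(half_angle_deg):
    """Order of Lambertian emission m from the half value angle, per (22)."""
    return -np.log(2) / np.log(np.cos(np.radians(half_angle_deg)))

def pixel_dc_gain(d, f, S, a, Ap, phi=0.0, theta=0.0, half_angle_deg=26.0):
    """DC gain at the brightest pixel following (21)-(26).

    d: LED-camera distance [m]; f: focal length [m]; S: LED area [m^2];
    a: pixel size [m]; Ap: entrance pupil area [m^2]. Angles phi (irradiance)
    and theta (incidence) are zero for the face-to-face setup.
    """
    m = lambertian_order(half_angle_deg)
    Ro = (m + 1) / (2 * np.pi) * np.cos(phi) ** m   # Lambertian radiant intensity, (21)
    Ht = Ap / d ** 2 * Ro * np.cos(theta)           # total DC gain, (23)
    SI = f ** 2 * S / (a ** 2 * d ** 2)             # projected LED area in pixels, (24)
    if SI > 1:
        return Ht / SI                              # (25): no direct dependence on d
    return Ht                                       # (26): falls off as 1/d^2

# Assumed parameters: 5 mm LED, f = 35 mm, 10 um pixels, f/16 pupil diameter
S = np.pi * (2.5e-3) ** 2
Ap = np.pi * (35e-3 / 16 / 2) ** 2
for d in (5.0, 10.0, 15.0, 16.0, 30.0):
    print(d, pixel_dc_gain(d, f=35e-3, S=S, a=10e-6, Ap=Ap))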
Here, we explain why the experimental curve is unstable.
In the actual experimental environment, it was difficult to
concentrate the LED light onto any single pixel. The image
sensor often projected the LED light onto two pixels when S_I < 1.
In this case, the pixel value was smaller compared to the case
in which the LED light was concentrated onto one pixel. Moreover,
the incident light from the LED sometimes fell on the non-sensor area between the pixels. Therefore, the variability of the
extracted highest luminance depends on the positional
relationship between the projected area of the LED and the
pixels.
For wireless communication systems using radio waves,
transmission energy generally decreases in inverse proportion
to d². Conversely, the transmission energy of image sensor
based VLC is equivalent to luminance. As expressed in (24)
above, S_I decreases in inverse proportion to d². This phenomenon is similar to the energy decrease when using radio waves.
Let us focus on the central pixel of S_I. We can obtain a high
luminance while S_I ≥ 1, as shown in Fig. 14. In this case, the
camera can clearly distinguish the blinking of the LED. This
effect would certainly help achieve error-free communication.
Therefore, the transmission energy depends on whether we regard
the received luminance as an area or as a pixel.
Here, we examine a performance comparison of the image
sensor with a single photodiode. A characteristic similar to
that of the image sensor would be observed even if a single
photodiode were used: a constant light intensity is obtained up
to a certain distance between the LED and the single photodiode,
and the intensity then decreases with the square of the distance.
However, the sensitivity to the LED light depends on the size of
the sensor of each receiving device.
The results suggest that a constant luminance value can be
obtained while S_I > 1 in the case of the image sensor. To increase the distance at which that constant luminance can be obtained, we only have to reduce the sensor (pixel) size, as shown in (24).
In the case of the single photodiode, its size is larger than
the pixel size of the image sensor. Here, we assume that the
single photodiode does not track the LED of the transmitter
mechanically as the image sensor does. In this case, the physical
photodiode size must be increased to detect the LED light over a wide
range, or FOV. In the case of the image sensor, the range over
which the LED light can be detected depends on the size of the
sensor. In other words, we do not need to increase the physical
pixel size of the image sensor. This indicates that the image
sensor can detect LED light over a wide range
compared to the single photodiode. This is the most significant
difference between the image sensor and the single photodiode
in a VLC situation. Therefore, we consider that the most
significant advantage of using the image sensor for VLC is the
ability to receive LED light in a plane.
VI. DISCUSSION
In Sections III and IV, we presented the optical flow measurements and proposed vehicle motion models for each VLC
case. The vehicle motion model is useful for simulating a
fluctuating channel for image sensor based VLC for automotive applications. Needless to say, a simulation that reflects
vehicle motion in an image plane is beneficial for VLC signal
detection and tracking design.


In a practical case, the estimation of actual vehicle motion is difficult because the motion is random; thus, it is difficult
to predict the motion. However, the proposed model provides
a statistical model of vehicle motion in an image plane. For
example, from our results, the vehicle motion along the vertical
and horizontal axes of the image plane is limited to within one
pixel in most cases. Using this result and the vehicle motion
model facilitates the design of an LED detection and tracking
system that limits the tracking area of the VLC transmitter and
reduces calculation cost.
In Section V, our analysis showed that the SNR remains constant
as long as the projected image of the transmitter LED occupies
several pixels. The pixel illumination model provides a criterion
for the appropriate transmitter LED size for a desired communication distance with a given FOV, or with a given focal length
of the lens, a resolution of the image sensor, and a pixel size of
the image sensor.
Thus, the vehicle motion model and the pixel illumination
model may be used as a guideline in the design of image sensor
based VLC system applications.
The limitations of image sensor based VLC systems compared to photodiode based VLC systems can be attributed to
their reception devices; i.e., image sensors or photodiodes.
A single photodiode is attractive and has been used frequently as a receiver because photodiodes are much faster than image
sensors. Photodiodes are easy to fabricate and have low production costs; however, they usually have a nonlinear response
to light. For transmissions exceeding Gbps speeds, a two-dimensional photodiode array can be used for massively parallel VLC.
On the other hand, image sensor based VLC systems generally use
slow CMOS image sensors. The readout processes for
CMOS image sensors are a bottleneck. However, the readout
speed can be increased by selecting a specific set of pixels
while discarding others. Sampling a small number of relevant
pixels dramatically increases readout speeds. By using such a
technique or other related techniques, frame rates of 10,000 fps
or higher are possible.
Another disadvantage is cost. The cost of high-speed CMOS
image sensors is high. Recently, most chips found in computers
and other electronic devices are manufactured using CMOS technology. Chip manufacturing plants cost millions of dollars to set
up; however, provided they produce chips in sufficient quantities, the resultant price per chip is very low, particularly when
compared with other technologies. Thus, drastic reductions in
costs are possible even for high-speed CMOS image sensors as
long as the market demand for such chips is high.
VII. CONCLUSION
In this paper, we described and provided results for vehicle
motion and pixel illumination modeling of image sensor based
VLC for automotive applications. In particular, we examined
the vehicle motion in the image plane as well as pixel illumination according to the distance between the transmitter and the
receiver.
We measured the vibration of a vehicle described by optical flow with subpixel accuracy using POC and proposed a


vehicle motion model for the VLC transmitter. The single-pinhole camera model successfully describes the three cases of
VLC: I2V-VLC, V2I-VLC, and V2V-VLC. We demonstrated
that our measurement results coincided with simulation results
using our proposed vehicle motion model; we also confirmed
the validity via KL divergence. The movement of source LEDs
in the image plane depends on vehicle speed, HSC frame rate,
and road irregularity.
Further, we measured the DC gain at a pixel and confirmed
that the gain remains constant as long as the projected image of
the transmitter LED occupies several pixels. In other words, if
we choose the pixel with the highest luminance, then the luminance
value remains constant, and the SNR does not change according
to distance.
ACKNOWLEDGMENT
The authors would like to thank Prof. Masaaki Katayama
and Assistant Prof. Kentaro Kobayashi for their valuable
suggestions.
REFERENCES
[1] L. Bergström, P. Delsing, and O. Inganäs, The Nobel Prize in Physics 2014: Blue LEDs, filling the world with new light, The Royal Swedish Academy of Sciences, Oct. 2014. [Online]. Available: http://www.nobelprize.org/nobel_prizes/physics/laureates/2014/popularphysicsprize2014.pdf
[2] M. Wright, Packaged LED and SSL market report and forecast, LEDs Magazine, Mar. 2013, pp. 35-40.
[3] J. Gancarz, H. Elgala, and T. Little, Impact of lighting requirements on VLC systems, IEEE Commun. Mag., vol. 51, no. 12, pp. 34-41, Dec. 2013.
[4] J. Kahn and J. Barry, Wireless infrared communications, Proc. IEEE, vol. 85, no. 2, pp. 265-298, Feb. 1997.
[5] K. Lee, H. Park, and J. Barry, Indoor channel characteristics for visible light communications, IEEE Commun. Lett., vol. 15, no. 2, pp. 217-219, Feb. 2011.
[6] T. Komine, J. Lee, S. Haruyama, and M. Nakagawa, Adaptive equalization system for visible light wireless communication utilizing multiple white LED lighting equipment, IEEE Trans. Wireless Commun., vol. 8, no. 6, pp. 2892-2900, Jun. 2009.
[7] H. Urabe et al., High data rate ground-to-train free-space optical communication system, Opt. Eng., vol. 51, no. 3, Mar. 2012, Art. ID 031204. [Online]. Available: http://dx.doi.org/10.1117/1.OE.51.3.031204
[8] C. Schmidt et al., High-speed, high-volume optical communication for aircraft, SPIE Newsroom, Oct. 2013. [Online]. Available: http://spie.org/x103948.xml
[9] S. Nishimoto et al., High-speed transmission of overlay coding for road-to-vehicle visible light communication using LED array and high-speed camera, in Proc. IEEE GC Wkshps, Dec. 2012, pp. 1234-1238.
[10] Y. Amano, K. Kamakura, and T. Yamazato, Alamouti-type coding for visible light communication based on direct detection using image sensor, in Proc. IEEE GLOBECOM, Dec. 2013, pp. 2430-2435.
[11] I. Takai et al., LED and CMOS image sensor based optical wireless communication system for automotive applications, IEEE Photon. J., vol. 5, no. 5, Oct. 2013, Art. ID 6801418.
[12] T. Yamazato et al., Image-sensor-based visible light communication for automotive applications, IEEE Commun. Mag., vol. 52, no. 7, pp. 88-97, Jul. 2014.
[13] T. Yamazato and S. Haruyama, Image sensor based visible light communication and its application to pose, position, and range estimations, IEICE Trans. Commun., vol. E97-B, no. 9, pp. 1759-1765, Sep. 2014.
[14] S. Baker et al., A database and evaluation methodology for optical flow, in Proc. IEEE 11th ICCV, Oct. 2007, pp. 1-8.
[15] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi, Traffic monitoring and accident detection at intersections, IEEE Trans. Intell. Transp. Syst., vol. 1, no. 2, pp. 108-118, Jun. 2000.
[16] Z. Sun, G. Bebis, and R. Miller, On-road vehicle detection using optical sensors: A review, in Proc. IEEE 7th ITSC, Oct. 2004, pp. 585-590.


[17] J. Alonso, E. Ros Vidal, A. Rotter, and M. Muhlenberg, Lane-change decision aid system based on motion-driven vehicle tracking, IEEE Trans. Veh. Technol., vol. 57, no. 5, pp. 2736-2746, Sep. 2008.
[18] S. Avidan, Support vector tracking, IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 8, pp. 1064-1072, Aug. 2004.
[19] P. Siegmann, R. Lopez-Sastre, P. Gil-Jimenez, S. Lafuente-Arroyo, and S. Maldonado-Bascon, Fundaments in luminance and retroreflectivity measurements of vertical traffic signs using a color digital camera, IEEE Trans. Instrum. Meas., vol. 57, no. 3, pp. 607-615, Mar. 2008.
[20] M. Kinoshita et al., Motion modeling of mobile transmitter for image sensor based I2V-VLC, V2I-VLC, and V2V-VLC, in Proc. GC Wkshps, Dec. 2014, pp. 450-455.
[21] K. Takita, T. Aoki, Y. Sasaki, T. Higuchi, and K. Kobayashi, High-accuracy subpixel image registration based on phase-only correlation, IEICE Trans. Fundam., vol. E86-A, no. 8, pp. 1925-1933, Aug. 2003.
[22] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint. Cambridge, MA, USA: MIT Press, 1993.
[23] K. Y. K. Wong, P. Mendonca, and R. Cipolla, Camera calibration from surfaces of revolution, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 2, pp. 147-161, Feb. 2003.
[24] M. Griffin, The evaluation of vehicle vibration and seats, Appl. Ergonom., vol. 9, no. 1, pp. 15-21, Mar. 1978. [Online]. Available: http://www.sciencedirect.com/science/article/pii/0003687078902144
[25] M. Agostinacchio, D. Ciampa, and S. Olita, The vibrations induced by surface irregularities in road pavements: A MATLAB approach, Eur. Transp. Res. Rev., vol. 6, no. 3, pp. 267-275, Sep. 2014. [Online]. Available: http://dx.doi.org/10.1007/s12544-013-0127-8
[26] W. Sun, H. Gao, and B. Yao, Adaptive robust vibration control of full-car active suspensions with electrohydraulic actuators, IEEE Trans. Control Syst. Technol., vol. 21, no. 6, pp. 2417-2422, Nov. 2013.
[27] P. A. Haigh, H. Le Minh, and Z. Ghassemlooy, Transmitter distribution for MIMO visible light communication systems, in Proc. 12th Annu. PGNet Symp. Convergence Telecommun., Broadcast., Jan. 2011, pp. 190-193.
[28] Z. Ghassemlooy, D. Wu, M. A. Khalighi, and X. Tang, Indoor non-directed optical wireless communications: Optimization of the Lambertian order, J. Elect. Comput. Eng. Innov., vol. 1, no. 1, pp. 1-9, Autumn 2013.
[29] Y. He, L. Ding, Y. Gong, and Y. Wang, Real-time audio & video transmission system based on visible light communication, Opt. Photon. J., vol. 3, no. 2B, pp. 153-157, Jun. 2013.

Takaya Yamazato (S'91-M'93) received the Ph.D.


degree from Department of Electrical Engineering,
Keio University, Yokohama, Japan, in 1993. He is
a Professor at the Institute of Liberal Arts and Sciences, Nagoya University, Japan. From 1993 to 1998,
he was an Assistant Professor in the Department of
Information Electronics, Nagoya University, Japan.
From 1997 to 1998, he was a Visiting Researcher
of the Research Group for RF Communications, Department of Electrical Engineering and Information
Technology, University of Kaiserslautern. In 1998,
he gave a 1/2 day tutorial entitled Introduction to CDMA ALOHA at
Globecom held in Sydney, Australia. Since then, he has been serving as a TPC
member of Globecom and ICC. In 2006, he received the IEEE Communication
Society 2006 Best Tutorial Paper Award. He served as a co-chair of Wireless
Communication Symposia of ICC2009 and he was a co-chair of Selected
Areas in Communication Symposia of ICC2011. From 2008 to 2010, he
served as a chair of Satellite Space and Communication Technical Committee.
In 2011, he gave a 1/2 day tutorial entitled Visible Light Communication
at ICC2011 held in Kyoto, Japan. He was the Editor-in-Chief of the Japanese
Section of the IEICE Transactions on Communications from 2009 to 2011. His
research interests include visible light communication, satellite and mobile
communication systems, and ITS. He is a member of the IEEE and a fellow
of The Institute of Electronics, Information and Communication Engineers
(IEICE).

Masayuki Kinoshita (S'14) received the B.S. degree from Nagoya University, Japan, in 2014. Since
2014, he has been a graduate student at Nagoya University, Japan. His research interests include image
sensor based visible light communications.

Shintaro Arai (S'06-M'09) received the B.E.,


M.E. and D.E. degrees from Tokushima University,
Tokushima, Japan, in 2004, 2006 and 2009, respectively. From January 2007 to December 2008, he was
a Special Research Student at Nagoya University,
Japan. From April 2009 to March 2011, he worked
as a Postdoctoral Fellow of ITS Laboratory, Aichi
University of Technology, Japan. Since April 2011,
he has been a Research Associate at National Institute of Technology, Kagawa College, Japan. His
research interests include visible light communication systems, chaos-based communication systems and stochastic resonance
phenomena. He is a member of the IEICE.

Eisho Souke was born in Kagawa, Japan, in 1994.


He received the associate degree of engineering from
National Institute of Technology, Kagawa College,
Japan, in 2015. Since April 2015, he has been with
Mitsubishi Electric Engineering Co., Ltd., Japan. His
research interests include visible light communication systems.

Tomohiro Yendo (M'11) received the B.Eng.,


M.Eng. and Ph.D. degrees from Tokyo Institute of
Technology, Japan, in 1996, 1998 and 2001, respectively. He was a Researcher at the Telecommunications Advancement Organization (TAO) of Japan
from 1998 to 2002, and a Research Fellow at Japan
Science and Technology Agency (JST) from 2002
to 2004. From 2004 to 2011, he was an Assistant
Professor at Nagoya University. Since 2011, he has
been an Associate Professor at Nagaoka University
of Technology. His current research interests include
visible light communication, 3-D image display and capturing.

Toshiaki Fujii (S'92-M'98) received the Dr.E. degree in electrical engineering from the University of
Tokyo, in 1995. From 1995 to 2007, he was with
the Graduate School of Engineering, Nagoya University. From 2008 to 2010, he was with the Graduate
School of Science and Engineering, Tokyo Institute
of Technology. He is currently a Professor in the
Graduate School of Engineering, Nagoya University.
He was a sub-leader of the Advanced 3D Tele-Vision
Project established by the Telecommunications Advancement Organization of Japan from 1998 to 2002.
Now he serves as a Vice-President of the Image Engineering Technical Group,
Institute of Electronics, Information and Communication Engineers (IEICE),
Japan. He received an Academic Encouragement Award from the IEICE in
1996 and the Best Paper Award from the 3-D Image Conference several times between
2001 and 2009. He is known for his work on 3-D image processing and 3-D visual communications, based on Ray-based representation. His current research
interests include multi-dimensional signal processing, large-scale multi-camera
systems, multi-view video coding and transmission, free-viewpoint television,
and their applications for Intelligent Transport Systems. He is a member of the
IEEE, The Institute of Electronics, Information and Communication Engineers
(IEICE), and the Institute of Image Information and Television Engineers (ITE)
of Japan. He serves as an Associate Editor of IEEE TCSVT.


Koji Kamakura (S'99-M'02) received the B.E.,


M.E., and Ph.D. degrees in electrical engineering
from Keio University, Yokohama, Japan, in 1997,
1999, and 2002, respectively. From 2002 to 2006,
he was an assistant professor at the Department
of Electronics and Mechanical Engineering, Chiba
University, Chiba, Japan. Since 2006, he has been
with the Department of Computer Science, Chiba
Institute of Technology, Chiba, where he is an Associate Professor. He was a Visiting Scientist at
School of Information Technology and Engineering,
University of Ottawa, Ottawa, ON, Canada, in 2002 and 2003. From 2000 to
2002, he was a Special Researcher of Fellowships of the Japan Society for the
Promotion for Science, for Japanese Junior Scientists. His research interests
include optical communication theory and system analysis. Dr. Kamakura is a
member of the IEEE and IEICE. He was a recipient of a 14th Telecom System
Technology Award for Students from the Telecommunications Advancement
Foundation in 1999 and the Ericsson Young Scientist Award in 2002.


Hiraku Okada (S'95-M'00) received the B.S., M.S.


and Ph.D. degrees in information electronics engineering from Nagoya University, Japan in 1995,
1997 and 1999, respectively. From 1997 to 2000,
he was a Research Fellow of the Japan Society
for the Promotion of Science. He was an Assistant
Professor at Nagoya University from 2000 to 2006,
an Associate Professor at Niigata University from
2006 to 2009, and an Associate Professor at Saitama
University from 2009 to 2011. Since 2011, he has
been an Associate Professor of EcoTopia Science
Institute at Nagoya University. His current research interests include the packet
radio communications, wireless multihop networks, inter-vehicle communications, and visible light communications. He received the Inose Science
Award in 1996, the IEICE Young Engineer Award in 1998, and the IEICE
Communications Society ComEX Best Letter Award in 2014. Dr. Okada is a
member of IEEE, ACM and IEICE.
