Vehicle Motion and Pixel Illumination Modeling for Image Sensor Based VLC
T. Yamazato et al.
IEEE Journal on Selected Areas in Communications, vol. 33, no. 9, September 2015
Abstract: Channel modeling is critical for the design and performance evaluation of visible light communication (VLC). Although a considerable amount of research has focused on indoor
VLC systems using single-element photodiodes, there remains a
need for channel modeling of VLC systems for outdoor mobile
environments. In this paper, we describe and provide results for
modeling image sensor based VLC for automotive applications.
In particular, we examine the channel model for mobile movements in the image plane as well as channel decay according to
the distance between the transmitter and the receiver. Optical
flow measurements were conducted for three VLC situations for
automotive use: infrastructure to vehicle VLC (I2V-VLC); vehicle
to infrastructure VLC (V2I-VLC); and vehicle to vehicle VLC
(V2V-VLC). We describe vehicle motion by optical flow with subpixel accuracy using phase-only correlation (POC) analysis and
show that a single-pinhole camera model successfully describes
these three VLC cases. In addition, the luminance of the central
pixel from the projected LED area versus the distance between
the LED and the camera was measured. Our key findings are
twofold. First, a single-pinhole camera model can be applied to
vehicle motion modeling of I2V-VLC, V2I-VLC, and V2V-VLC.
Second, the DC gain at a pixel remains constant as long as the
projected image of the transmitter LED occupies several pixels. In
other words, if we choose the pixel with the highest luminance within the projected image of the transmitter LED, its value remains constant, and the signal-to-noise ratio does not change with distance.
Index Terms: Visible light communication (VLC), image sensor, outdoor mobile channel modeling, infrastructure to vehicle
VLC (I2V-VLC), vehicle to infrastructure VLC (V2I-VLC), vehicle to vehicle VLC (V2V-VLC), vehicle motion model, optical flow,
pinhole camera model, pixel illumination model, DC gain.
I. INTRODUCTION
LIGHT-EMITTING diodes (LEDs) offer a new and revolutionary light source that saves energy [1]. The LED market
continues to grow, with LEDs successfully competing with
conventional light sources used in traffic signal and pedestrian
Fig. 1. Image sensor based VLC and spatial separation of multiple sources
by an image sensor. Owing to spatial separation of multiple sources, the VLC
receiver uses only the pixels that sense LED transmission sources (i.e., data 1
and data 2) and discards other pixels, including those sensing noise sources
(the Sun).
the following three cases: (1) I2V-VLC, (2) V2I-VLC, and (3)
V2V-VLC. Each of these is discussed in more detail below.
1) I2V-VLC: If an image sensor receiver is fixed on board a vehicle, for example on the dashboard, and optical data are transmitted from LEDs placed in a street light or traffic light, then the image sensor receiver moves; therefore, the pixel positions in the captured images move according to the vehicle movement. Such movement must be considered to accurately receive the optical signal.
2) V2I-VLC: Conversely, if a transmitter is moving and an
image sensor receiver is fixed on a road, then the relative position of the transmitter will also shift in the u and v directions of
the image sensor plane.
3) V2V-VLC: In the V2V-VLC case, both the image sensor
receiver and LED transmitter move. The effect of vehicle movement is expected to be large. Due to such vehicle movements,
pixel positions of captured images move in a manner very
similar to that in the I2V-VLC case; in addition, the positions of the LED transmitters move in a manner similar to that in the V2I-VLC case.
Another important channel model is the pixel illumination model. We describe this model in Section V and show that the DC gain at a pixel has no direct dependence on distance.
III. OPTICAL FLOW MEASUREMENT
A. Measurement Equipment and Setup
All our measurements were done with a high-speed camera
(HSC) connected to a personal computer (PC).
A Photron IDP-Express R2000 (1,000 frames per second (fps); resolution, 512 × 1,024 pixels) was used as the HSC. In our measurements, the pixel size of the image sensor was 10 µm, and it output 8-bit grayscale images. The focal length of the lens was 35 mm, and the lens diaphragm (f-number) was set to 16. Autofocusing is difficult when a vehicle is moving; thus, the focus was set to infinity. We recorded for 5 seconds (5,000 frames) in each experiment. The measured data were post-processed in our laboratory using a PC.
For the I2V-VLC channel measurement, 1,024 LEDs arranged in a 32 × 32 matrix were used as the transmitter. The LED spacing was 15 mm, and the half-value angle was 26°. The LEDs are the same as those used in LED traffic lights in Japan. All LEDs were on during the measurements.
For the V2I-VLC and V2V-VLC channel measurements, a vehicle headlight was used. In both cases, no blinking was performed, and we focus only on the movement of the headlight in the captured images.
B. Measurement Scenarios
The measurement sites were at Nagoya University, Japan. All measurements were conducted during the day (i.e., 10 a.m. to 2 p.m.) on a clear day. Fig. 2 shows the scenario for the three
VLC channels. For I2V channel measurements, as shown in
Fig. 2(a), we set the HSC on the dashboard of the vehicle and
recorded images of an LED array set on the ground. The vehicle
Brightness constancy assumes that the intensity of a point is preserved between consecutive frames,
$$I(u, v, t) = I(u + \Delta u,\; v + \Delta v,\; t + \Delta t). \qquad (1)$$
A first-order Taylor expansion of the right-hand side gives
$$I(u + \Delta u,\; v + \Delta v,\; t + \Delta t) \approx I(u, v, t) + \frac{\partial I}{\partial u}\Delta u + \frac{\partial I}{\partial v}\Delta v + \frac{\partial I}{\partial t}\Delta t, \qquad (2)$$
which, combined with (1), yields the optical flow constraint
$$\frac{\partial I}{\partial u}\Delta u + \frac{\partial I}{\partial v}\Delta v + \frac{\partial I}{\partial t}\Delta t = 0. \qquad (3)$$
In general, both the brightness constancy and the optical flow constraint provide only one constraint on the two unknowns at each pixel and therefore cannot be solved on their own [14]; however, in VLC, a similar constancy that can augment or replace (1) is available.
In VLC using an HSC, a transmitter can send a known sequence to the receiver, such as a transmitter ID (e.g., for positioning purposes) or a header sequence for signal acquisition. Suppose that the 11-chip Barker sequence adopted in the IEEE 802.11b standard is used for the header; then the on-off pattern of the LED is 11100010010, with on indicated by 1 and off indicated by 0. In this case, instead of the brightness constancy, the known sequence of luminance values is supplied. A Barker sequence is a binary sequence whose aperiodic autocorrelation sidelobes never exceed 1 in magnitude, so its correlation peak is unambiguous. Therefore, the sequence of luminance values is more robust to illumination changes and other appearance changes that are not tolerated by the brightness constancy assumption. Further, the blinking rate of the LED is considerably faster than natural illumination changes, and such a fast blinking rate is rare in the natural world.
Therefore, this time-domain feature can easily be extracted
by the receiver for signal acquisition and tracking. This is an
advantage of VLC using HSC; thus, we can design a sequence
that can provide robust optical flow performance as well as
enhance detection and tracking performance.
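The sidelobe property of the 11-chip header is easy to verify numerically. The following minimal Python sketch (illustrative only, not from the paper) maps the on-off pattern to a bipolar sequence and computes its aperiodic autocorrelation; the peak is 11 and every sidelobe magnitude is at most 1.

```python
# Aperiodic autocorrelation of the 11-chip Barker header (on = 1, off = 0).
import numpy as np

barker11_bits = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0]       # LED on/off pattern
s = np.array([1 if b else -1 for b in barker11_bits])    # bipolar (+1/-1) form

# Full aperiodic autocorrelation: peak of 11, sidelobe magnitudes <= 1.
acf = np.correlate(s, s, mode="full")
print(acf)   # [-1 0 -1 0 -1 0 -1 0 -1 0 11 0 -1 0 -1 0 -1 0 -1 0 -1]
```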
2) POC: POC is a robust method for estimating the motion between two images. The approach is based on the well-known Fourier shift property, i.e., a shift in the spatial domain of two images appears as a phase difference in the frequency domain,
$$F(u, v, t+1) = F(u, v, t)\, e^{-j2\pi(u\Delta u + v\Delta v)}, \qquad (4)$$
where $F(u, v, t)$ denotes the 2-D Fourier transform of the frame captured at time $t$. The normalized cross-power spectrum of two consecutive frames is
$$R(u, v, t) = \frac{F(u, v, t)\,\overline{F(u, v, t+1)}}{\left|F(u, v, t)\,\overline{F(u, v, t+1)}\right|}, \qquad (5)$$
and the POC function $h$ is obtained as its inverse Fourier transform,
$$h(u, v) = \mathcal{F}^{-1}\!\left[R(u, v, t)\right]. \qquad (6)$$
Ideally, $h$ is modeled as
$$h(u - \Delta u,\, v - \Delta v) =
\begin{cases}
1, & (u = [u] + \Delta u,\; v = [v] + \Delta v)\\
\varepsilon\ (<1), & (u = [u],\; v = [v])\\
\eta, & \text{otherwise,}
\end{cases} \qquad (7)$$
where $\eta$ is the noise term. Note that since $[u]$ and $[v]$ are integers, the observed peak is reduced to $\varepsilon\ (<1)$. The phase difference with subpixel accuracy can be obtained by finding the best two-dimensional fit of the phase difference $\Delta u$ and $\Delta v$ in (7), such that the peak reaches 1 [13], [21]. If we approximate (7) by the sinc function, then we obtain
$$h(u - \Delta u,\, v - \Delta v) \approx \mathrm{sinc}(u - [u] - \Delta u)\,\mathrm{sinc}(v - [v] - \Delta v). \qquad (8)$$
In our experiments, we varied Δu and Δv in 0.1-subpixel steps to measure the movements.
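As an illustration of this procedure, the sketch below implements the POC surface of (5)-(6) and a brute-force 0.1-subpixel refinement by least-squares fitting of the separable sinc model (8) around the integer peak. It is an assumed implementation, not the authors' code; the function names are illustrative, and the sign of the recovered shift depends on which frame is taken as the reference.

```python
import numpy as np

def poc_surface(frame_a, frame_b):
    """Normalized cross-power spectrum (5) and its inverse FFT (6)."""
    Fa = np.fft.fft2(frame_a.astype(float))
    Fb = np.fft.fft2(frame_b.astype(float))
    cross = Fa * np.conj(Fb)
    R = cross / np.maximum(np.abs(cross), 1e-12)
    return np.real(np.fft.ifft2(R))

def subpixel_flow(frame_a, frame_b, step=0.1, radius=2):
    """Integer POC peak plus a `step`-subpixel refinement via the sinc model (8)."""
    h = poc_surface(frame_a, frame_b)
    pu, pv = np.unravel_index(np.argmax(h), h.shape)     # integer peak [u], [v]
    us = np.arange(pu - radius, pu + radius + 1)
    vs = np.arange(pv - radius, pv + radius + 1)
    patch = h.take(us, axis=0, mode="wrap").take(vs, axis=1, mode="wrap")
    best, best_err = (0.0, 0.0), np.inf
    for du in np.arange(-0.5, 0.5 + 1e-9, step):         # 0.1-subpixel grid
        for dv in np.arange(-0.5, 0.5 + 1e-9, step):
            model = np.outer(np.sinc(us - pu - du), np.sinc(vs - pv - dv))
            alpha = (model * patch).sum() / (model * model).sum()
            err = ((patch - alpha * model) ** 2).sum()   # least-squares fit to (8)
            if err < best_err:
                best_err, best = err, (du, dv)
    # Map FFT indices to signed integer shifts (wrap-around).
    pu = pu if pu <= h.shape[0] // 2 else pu - h.shape[0]
    pv = pv if pv <= h.shape[1] // 2 else pv - h.shape[1]
    return pu + best[0], pv + best[1]
```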
D. Experimental Results
Fig. 3 shows our experimental results for the I2V case at a speed of 30 km/h. The joint probability density of Δu and Δv is shown in Fig. 3(a), and its horizontal and vertical cross-sections are shown in Fig. 3(b) and (c), respectively. As the images were captured at 1,000 fps (1 ms per frame), the optical flow calculated by (7)
Fig. 6. Geometry of the pinhole camera; the HSC is set at the origin, and the world coordinates (x, y, z) are projected to the image coordinates (u, v).
Using the single-pinhole camera model shown in Fig. 6, a point in the world coordinates is projected to the image coordinates as
$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad (9)$$
where $\lambda$ is an arbitrary scale factor, $f$ is the focal length, $R$ is a $3\times 3$ rotation matrix, and $T$ is a translation vector. The rotation matrix $R$ is defined as
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix}
\begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (10)$$
where $\theta$, $\phi$, and $\psi$ are the rotation angles about the X-, Y-, and Z-axes, respectively, as shown in Fig. 6. On the right-hand side of (9), the first matrix is called the camera calibration parameter.
Suppose that the camera calibration parameter is constant.
Let R and T be extrinsic parameters such that R is the camera
posture and T is the camera position. Further, we consider the
world coordinate component as the position of the transmitter.
We apply this camera model to I2V-VLC and V2I-VLC below.
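Before specializing the model, a minimal numeric sketch of (9)-(10) may help: it builds R from the three rotation angles, forms the [R | T] extrinsic matrix, and projects a world point to image coordinates. All numbers below (the angles, T, and the sample point) are illustrative assumptions; only f = 35 mm follows the measurement setup.

```python
import numpy as np

def rotation_matrix(theta, phi, psi):
    """R = Rx(theta) @ Ry(phi) @ Rz(psi), rotations about the X-, Y-, Z-axes as in (10)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi),  np.cos(psi), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(point_xyz, f, R, T):
    """Apply (9): lambda * [u, v, 1]^T = K [R | T] [x, y, z, 1]^T."""
    K = np.diag([f, f, 1.0])                     # camera calibration matrix
    RT = np.hstack([R, T.reshape(3, 1)])         # 3x4 extrinsic matrix
    uvw = K @ RT @ np.append(point_xyz, 1.0)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]      # divide out the scale factor

# Example: LED 30 m ahead, camera tilted slightly about the X-axis (assumed values).
R = rotation_matrix(np.deg2rad(0.5), 0.0, 0.0)
T = np.array([0.0, 0.0, 0.0])
print(project(np.array([0.5, 1.2, 30.0]), f=0.035, R=R, T=T))
```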
1) I2V-VLC: For I2V-VLC, the camera moves with the vehicle and the transmitter is static. Therefore, the camera posture R fluctuates owing to vehicle vibrations, and the camera position T can be expressed as the sum of a time function and a vibration component. Thus, the vehicle motion model of the VLC transmitter for I2V-VLC can be expressed as
$$\lambda \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & T_x(t) + n_{1x} \\ r_{21} & r_{22} & r_{23} & T_y(t) + n_{1y} \\ r_{31} & r_{32} & r_{33} & T_z(t) + n_{1z} \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad (11)$$
where $T_x(t)$, $T_y(t)$, and $T_z(t)$ are the time functions, and $n_{1x}$, $n_{1y}$, and $n_{1z}$ are the vibration components of $T$, respectively.
2) V2I-VLC: Conversely, for V2I-VLC, the camera is static,
but the transmitter moves with the vehicle. Therefore, the
extrinsic parameters are constant, and the transmitter position
can be expressed as the sum of a time function and the vibration
component.
Suppose the camera is at the origin; then, the vehicle motion
model of the VLC transmitter for V2I-VLC can be expressed as
$$\lambda \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x(t) + n_{2x} \\ y(t) + n_{2y} \\ z(t) + n_{2z} \\ 1 \end{bmatrix}, \qquad (12)$$
where $x(t)$, $y(t)$, and $z(t)$ are the time functions, and $n_{2x}$, $n_{2y}$, and $n_{2z}$ are the vibration components of $(x, y, z)$, respectively.
3) V2V-VLC: The models proposed above are applicable
to V2V-VLC, in which both the transmitter and receiver are
movable. The vehicle motion model of the VLC transmitter for
V2V-VLC can be expressed as
$$\lambda \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & T_x(t) + n_{1x} \\ r_{21} & r_{22} & r_{23} & T_y(t) + n_{1y} \\ r_{31} & r_{32} & r_{33} & T_z(t) + n_{1z} \end{bmatrix}
\begin{bmatrix} x(t) + n_{2x} \\ y(t) + n_{2y} \\ z(t) + n_{2z} \\ 1 \end{bmatrix}.$$
TABLE I. SIMULATION PARAMETERS FOR HSC
TABLE II. VEHICLE VIBRATION PARAMETERS FOR (16) AND (17)
The translational vibration components are modeled as
$$\begin{bmatrix} n_x \\ n_y \\ n_z \end{bmatrix} =
\begin{bmatrix} \sum_i A_{xf_i} \cos(2\pi f_i t) + G_x \\ \sum_i A_{yf_i} \cos(2\pi f_i t) + G_y \\ \sum_i A_{zf_i} \cos(2\pi f_i t) + G_z \end{bmatrix}, \qquad (16)$$
where $A_{xf_i}$, $A_{yf_i}$, and $A_{zf_i}$ are the amplitudes of sinusoidal waveforms with frequency $f_i$ representing vehicle vibration, and $G_x$, $G_y$, and $G_z$ are Gaussian random variables representing road surface irregularity.
For simplicity, we only considered $\theta$ and $\phi$ as fluctuations of the camera posture [26]. Also, $B_{\theta f_i}$, $B_{\phi f_i}$, and $B_{\psi f_i}$ are the amplitudes of sinusoidal waveforms with frequency $f_i$ representing vehicle vibration, and $G_\theta$, $G_\phi$, and $G_\psi$ are Gaussian random variables representing road surface irregularity, for $\theta$, $\phi$, and $\psi$, respectively.
Table II summarizes the vehicle vibration parameters of
(16) and (17). The parameters depicted in the above tables
were determined based on the experimental results discussed
in Section III-D.
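For orientation, a minimal simulation sketch of this procedure is shown below: the camera translation T(t) plus a vibration term built as in (16) is projected through (11) with R = I to obtain the pixel trajectory (u(t), v(t)) and the per-frame optical flow. This is not the authors' simulator; all amplitudes, frequencies, and noise levels are placeholder values, not those of Tables I and II.

```python
import numpy as np

fps, n_frames = 1000, 5000                    # 1,000 fps, 5 s of frames
speed = 30.0 / 3.6                            # 30 km/h in m/s
t = np.arange(n_frames) / fps

rng = np.random.default_rng(0)
freqs = [1.5, 12.0]                           # assumed vibration frequencies [Hz]
amps = [2e-3, 5e-4]                           # assumed vibration amplitudes [m]
sigma_road = 1e-4                             # assumed road-irregularity std [m]

def vibration(t):
    """One component of (16): sum of sinusoids plus a Gaussian road term."""
    sinusoids = sum(a * np.cos(2 * np.pi * f * t) for f, a in zip(freqs, amps))
    return sinusoids + rng.normal(0.0, sigma_road, size=t.shape)

# Camera position: the vehicle approaches the LED (at the world origin) along Z.
Tx = 0.0 + vibration(t)                       # Tx(t) + n1x
Ty = 1.2 + vibration(t)                       # Ty(t) + n1y (camera height offset)
Tz = 50.0 - speed * t + vibration(t)          # Tz(t) + n1z

f_len, pix = 0.035, 10e-6                     # focal length [m], pixel pitch [m]
u = f_len * Tx / Tz / pix                     # (11) with R = I and the LED at (0, 0, 0)
v = f_len * Ty / Tz / pix

du, dv = np.diff(u), np.diff(v)               # frame-to-frame optical flow [pixels]
print(f"std(du) = {du.std():.3f} px, std(dv) = {dv.std():.3f} px")
```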
Figs. 7, 8, and 9 show our simulation results. From the results, we observe that the probability densities of Δu and Δv are distributed mainly around a mean of 0 pixels for all cases. This is because the vehicle moved only in the Z-direction in our simulation. Compared with the experimental results discussed in Section III-D, the form and tendency of the probability distributions are quite similar, although their means are shifted. The variances are also in close agreement with our experimental results.
We further compared the two probability distributions by calculating the Kullback-Leibler (KL) divergence. The KL divergence is always non-negative, with zero indicating that the two distributions are equal; thus, the smaller the value, the more similar the two distributions. We normalized the two distributions and calculated the KL divergence between the experimental and simulation results for each vehicle motion characteristic. The KL divergences for I2V, V2I, and V2V were 0.363, 0.259, and 0.491, respectively; these values are relatively small. For reference, we compared a standard Gaussian distribution, N1(0, 1.0²), with Gaussian distributions having larger variances, N2(0, 1.1²) and N3(0, 1.2²).
Fig. 7. Simulation results of I2V with a speed of 30 km/h, obtained from 5,000 frames of simulation at 1,000 fps. The occurrence probability of the optical flows Δu and Δv at 0.1-subpixel accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (mean 0, variance 4.57 × 10⁻²). (c) Cross-section of (a) in the Δv (vertical) direction (mean 0, variance 5.97 × 10⁻²).
Fig. 9. Simulation results of V2V with a speed of 30 km/h, obtained from 5,000 frames of simulation at 1,000 fps. The occurrence probability of the optical flows Δu and Δv at 0.1-subpixel accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (mean 0, variance 6.23 × 10⁻³). (c) Cross-section of (a) in the Δv (vertical) direction (mean 0, variance 1.23 × 10⁻²).
Fig. 8. Simulation results of V2I with a speed of 30 km/h, obtained from 5,000 frames of simulation at 1,000 fps. The occurrence probability of the optical flows Δu and Δv at 0.1-subpixel accuracy is shown. (a) Joint probability density of Δu (horizontal direction) and Δv (vertical direction). (b) Cross-section of (a) in the Δu (horizontal) direction (mean 0, variance 5.47 × 10⁻⁴). (c) Cross-section of (a) in the Δv (vertical) direction (mean 0, variance 1.93 × 10⁻³).
The KL divergence between N1(0, 1.0²) and N2(0, 1.1²) was 0.123, and that between N1(0, 1.0²) and N3(0, 1.2²) was 0.426. Compared with these reference values, our results can be regarded as good approximations. Consequently, the vehicle motion models proposed in this paper are valid.
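The divergence computation itself is straightforward to reproduce. The sketch below assumes both flow sets are histogrammed on shared 0.1-subpixel bins; the histogram variable names are illustrative, not the paper's data.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete D_KL(p || q) for two histograms over the same bins; 0 means identical."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()              # normalize to probabilities
    return float(np.sum(p * np.log(p / q)))

# e.g., histogram measured and simulated flows on the same 0.1-subpixel bins:
bins = np.arange(-2.0, 2.0 + 0.1, 0.1)
# hist_meas, _ = np.histogram(du_measured, bins=bins)   # hypothetical arrays
# hist_sim,  _ = np.histogram(du_simulated, bins=bins)
# print(kl_divergence(hist_meas, hist_sim))
```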
Fig. 12. Simplified diagram of the ray-tracing between an LED and the image
sensor.
Here, we consider simple ray tracing between an LED (transmitter) and the image sensor, as shown in Fig. 12. Let d be the distance between the LED and the lens (camera), let d' be the image distance, and let f be the focal length of the lens. Then, the well-known thin lens equation is given as
$$\frac{1}{f} = \frac{1}{d} + \frac{1}{d'}. \qquad (18)$$
The radiant intensity of the LED is modeled as a Lambertian source of order $m$,
$$R_o(\phi) = \frac{m + 1}{2\pi} \cos^m(\phi), \qquad (21)$$
where $\phi$ is the angle of irradiance. The DC gain of the light collected through the lens is then
$$H_t(0) = \frac{A_p}{d^2}\, R_o(\phi)\cos(\psi), \qquad (23)$$
where $A_p$ is the aperture area of the lens and $\psi$ is the angle of incidence.
Fig. 13. Projected image of an LED for near and far distances; when the distance is short, the projected image occupies several pixels, whereas when the distance is large, the image fits into only one pixel (i.e., no digital image of the LED can be formed).
Let $\delta$ be the pixel pitch of the image sensor, let $h$ and $w$ be the height and width of the LED emitting surface ($S = hw$), and let $h_I$ and $w_I$ be the height and width of the projected LED image on the sensor. With $d \gg f$, the thin lens equation gives an image distance $d' \approx f$ and a magnification $M \approx f/d$, so the projected LED area, measured in pixels, is
$$S_I = \frac{h_I w_I}{\delta^2} = \frac{M^2 h w}{\delta^2} = \frac{1}{\delta^2}\,\frac{f^2 S}{d^2}. \qquad (24)$$
Dividing the collected DC gain $H_t(0)$ of (23) by the number of occupied pixels $S_I$ gives the DC gain at a single pixel, and substituting (24) shows that the distance $d$ cancels:
$$\frac{H_t(0)}{S_I} = \frac{\delta^2 A_p}{f^2 S}\, R_o(\phi)\cos(\psi). \qquad (25)$$
Fig. 14. Experimental results: [left vertical axis] normalized highest luminance extracted from the projected LED area (i.e., the central pixel of the LED) versus the distance between the LED and the camera; [right vertical axis] geometrically calculated LED pixel size (i.e., projected area of the LED, SI) versus distance. For d < 15 m, both curves remain at a high, nearly constant level. For d > 15 m, both curves fall off rapidly with increasing d.
When the projected image of the LED falls below one pixel, that single pixel receives all of the collected light, and the DC gain at the pixel becomes
$$\frac{A_p}{d^2}\, R_o(\phi)\cos(\psi), \qquad (26)$$
which decreases in inverse proportion to $d^2$.
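A short numeric sketch of (24)-(26) is given below: it evaluates the projected LED size S_I and the per-pixel gain versus distance. Only f = 35 mm and δ = 10 µm follow the setup above; the 5 mm LED is treated as a circular emitter, and the aperture area and Lambertian order are assumed round numbers. With these values, S_I comes out to roughly 1.07 px at 15 m and 0.94 px at 16 m, consistent with the figures reported in the next subsection.

```python
import numpy as np

f, delta = 35e-3, 10e-6            # focal length [m], pixel pitch [m]
S = np.pi * (2.5e-3) ** 2          # LED treated as a 5 mm diameter circular emitter [m^2]
Ap = np.pi * (5e-3) ** 2           # assumed lens aperture area [m^2]
m = 1                              # assumed Lambertian order; R_o(0) = (m + 1) / (2*pi)
Ro0 = (m + 1) / (2 * np.pi)

for d in [5, 10, 15, 16, 20, 40]:                         # distance [m]
    S_I = (f ** 2 * S) / (delta ** 2 * d ** 2)            # (24): projected size [pixels]
    if S_I >= 1.0:
        gain = (delta ** 2 * Ap) / (f ** 2 * S) * Ro0     # (25): constant per-pixel gain
    else:
        gain = Ap / d ** 2 * Ro0                          # (26): falls as 1/d^2
    print(f"d = {d:4.1f} m  S_I = {S_I:8.2f} px  pixel gain = {gain:.3e}")
```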
C. Experimental Results
We performed an experiment to measure the luminance of the central pixel of the projected LED area. The experimental equipment (HSC) and parameters are the same as those used for the optical flow measurements described earlier. We placed a single LED with a diameter of 5 mm face-to-face with the camera (δ = 10 µm), and captured the lit LED while changing the distance between them. We chose the highest pixel value (i.e., the central pixel) among the pixels illuminated by the incident light from the LED and extracted this value as the LED luminance.
Fig. 14 shows the normalized extracted luminance versus the
distance (d) between the LED and the camera.
We also plotted SI, geometrically calculated at each distance, in the same figure. The curve of the extracted luminance closely follows the SI curve.
For d < 15 m, both results remain at a high, nearly constant level. Note that these curves do not fully correspond to each other because their units differ. For d > 15 m, both curves fall off rapidly with d. Let us focus on SI around d = 15 m. At d = 15 m and 16 m, SI was approximately 1.07 pixels and 0.94 pixels, respectively. This indicates that the projected area of the LED dropped below δ² (= 1 × 10⁻¹⁰ m²) at 16 m; more specifically, SI fell below 1 pixel beyond 16 m. In this case, the single pixel cannot receive enough LED light, since the entire LED image fits within that pixel, as described above. Thus, the LED luminance decreases in inverse proportion to d² when SI < 1 pixel.
Here, we explain why the experimental curve is unstable.
In the actual experimental environment, it was difficult to
concentrate the LED light onto any single pixel. The image
vehicle motion model for the VLC transmitter. The single-pinhole camera model successfully describes the three cases of VLC: I2V-VLC, V2I-VLC, and V2V-VLC. We demonstrated that our measurement results coincided with simulation results using our proposed vehicle motion model; we also confirmed the validity via the KL divergence. The movement of the source LEDs in the image plane depends on the vehicle speed, the HSC frame rate, and road irregularity.
Further, we measured the DC gain at a pixel and confirmed that the gain remains constant as long as the projected image of the transmitter LED occupies several pixels. In other words, if we choose the pixel with the highest luminance, its value remains constant, and the SNR does not change with distance.
ACKNOWLEDGMENT
The authors would like to thank Prof. Masaaki Katayama and Assistant Prof. Kentaro Kobayashi for their valuable suggestions.
REFERENCES
[1] L. Bergström, P. Delsing, and O. Inganäs, "The Nobel Prize in Physics 2014: Blue LEDs, filling the world with new light," The Royal Swedish Academy of Sciences, Oct. 2014. [Online]. Available: http://www.nobelprize.org/nobel_prizes/physics/laureates/2014/popularphysicsprize2014.pdf
[2] M. Wright, "Packaged LED and SSL market report and forecast," LEDs Magazine, Mar. 2013, pp. 35–40.
[3] J. Gancarz, H. Elgala, and T. Little, "Impact of lighting requirements on VLC systems," IEEE Commun. Mag., vol. 51, no. 12, pp. 34–41, Dec. 2013.
[4] J. Kahn and J. Barry, "Wireless infrared communications," Proc. IEEE, vol. 85, no. 2, pp. 265–298, Feb. 1997.
[5] K. Lee, H. Park, and J. Barry, "Indoor channel characteristics for visible light communications," IEEE Commun. Lett., vol. 15, no. 2, pp. 217–219, Feb. 2011.
[6] T. Komine, J. Lee, S. Haruyama, and M. Nakagawa, "Adaptive equalization system for visible light wireless communication utilizing multiple white LED lighting equipment," IEEE Trans. Wireless Commun., vol. 8, no. 6, pp. 2892–2900, Jun. 2009.
[7] H. Urabe et al., "High data rate ground-to-train free-space optical communication system," Opt. Eng., vol. 51, no. 3, Mar. 2012, Art. ID 031204. [Online]. Available: http://dx.doi.org/10.1117/1.OE.51.3.031204
[8] C. Schmidt et al., "High-speed, high-volume optical communication for aircraft," SPIE Newsroom, Oct. 2013. [Online]. Available: http://spie.org/x103948.xml
[9] S. Nishimoto et al., "High-speed transmission of overlay coding for road-to-vehicle visible light communication using LED array and high-speed camera," in Proc. IEEE GC Wkshps, Dec. 2012, pp. 1234–1238.
[10] Y. Amano, K. Kamakura, and T. Yamazato, "Alamouti-type coding for visible light communication based on direct detection using image sensor," in Proc. IEEE GLOBECOM, Dec. 2013, pp. 2430–2435.
[11] I. Takai et al., "LED and CMOS image sensor based optical wireless communication system for automotive applications," IEEE Photon. J., vol. 5, no. 5, Oct. 2013, Art. ID 6801418.
[12] T. Yamazato et al., "Image-sensor-based visible light communication for automotive applications," IEEE Commun. Mag., vol. 52, no. 7, pp. 88–97, Jul. 2014.
[13] T. Yamazato and S. Haruyama, "Image sensor based visible light communication and its application to pose, position, and range estimations," IEICE Trans. Commun., vol. E97-B, no. 9, pp. 1759–1765, Sep. 2014.
[14] S. Baker et al., "A database and evaluation methodology for optical flow," in Proc. IEEE 11th ICCV, Oct. 2007, pp. 1–8.
[15] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi, "Traffic monitoring and accident detection at intersections," IEEE Trans. Intell. Transp. Syst., vol. 1, no. 2, pp. 108–118, Jun. 2000.
[16] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection using optical sensors: A review," in Proc. IEEE 7th ITSC, Oct. 2004, pp. 585–590.
[17] J. Alonso, E. Ros Vidal, A. Rotter, and M. Muhlenberg, "Lane-change decision aid system based on motion-driven vehicle tracking," IEEE Trans. Veh. Technol., vol. 57, no. 5, pp. 2736–2746, Sep. 2008.
[18] S. Avidan, "Support vector tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 8, pp. 1064–1072, Aug. 2004.
[19] P. Siegmann, R. Lopez-Sastre, P. Gil-Jimenez, S. Lafuente-Arroyo, and S. Maldonado-Bascon, "Fundaments in luminance and retroreflectivity measurements of vertical traffic signs using a color digital camera," IEEE Trans. Instrum. Meas., vol. 57, no. 3, pp. 607–615, Mar. 2008.
[20] M. Kinoshita et al., "Motion modeling of mobile transmitter for image sensor based I2V-VLC, V2I-VLC, and V2V-VLC," in Proc. GC Wkshps, Dec. 2014, pp. 450–455.
[21] K. Takita, T. Aoki, Y. Sasaki, T. Higuchi, and K. Kobayashi, "High-accuracy subpixel image registration based on phase-only correlation," IEICE Trans. Fundam., vol. E86-A, no. 8, pp. 1925–1933, Aug. 2003.
[22] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint. Cambridge, MA, USA: MIT Press, 1993.
[23] K. Y. K. Wong, P. Mendonca, and R. Cipolla, "Camera calibration from surfaces of revolution," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 2, pp. 147–161, Feb. 2003.
[24] M. Griffin, "The evaluation of vehicle vibration and seats," Appl. Ergonom., vol. 9, no. 1, pp. 15–21, Mar. 1978. [Online]. Available: http://www.sciencedirect.com/science/article/pii/0003687078902144
[25] M. Agostinacchio, D. Ciampa, and S. Olita, "The vibrations induced by surface irregularities in road pavements: A MATLAB approach," Eur. Transp. Res. Rev., vol. 6, no. 3, pp. 267–275, Sep. 2014. [Online]. Available: http://dx.doi.org/10.1007/s12544-013-0127-8
[26] W. Sun, H. Gao, and B. Yao, "Adaptive robust vibration control of full-car active suspensions with electrohydraulic actuators," IEEE Trans. Control Syst. Technol., vol. 21, no. 6, pp. 2417–2422, Nov. 2013.
[27] P. A. Haigh, H. Le Minh, and Z. Ghassemlooy, "Transmitter distribution for MIMO visible light communication systems," in Proc. 12th Annu. PGNet Symp. Convergence Telecommun., Broadcast., Jan. 2011, pp. 190–193.
[28] Z. Ghassemlooy, D. Wu, M. A. Khalighi, and X. Tang, "Indoor non-directed optical wireless communications: Optimization of the Lambertian order," J. Elect. Comput. Eng. Innov., vol. 1, no. 1, pp. 1–9, Autumn 2013.
[29] Y. He, L. Ding, Y. Gong, and Y. Wang, "Real-time audio & video transmission system based on visible light communication," Opt. Photon. J., vol. 3, no. 2B, pp. 153–157, Jun. 2013.
Masayuki Kinoshita (S'14) received the B.S. degree from Nagoya University, Japan, in 2014. Since 2014, he has been a graduate student at Nagoya University. His research interests include image sensor based visible light communications.
Toshiaki Fujii (S'92-M'98) received the Dr.E. degree in electrical engineering from the University of
Tokyo, in 1995. From 1995 to 2007, he was with
the Graduate School of Engineering, Nagoya University. From 2008 to 2010, he was with the Graduate
School of Science and Engineering, Tokyo Institute
of Technology. He is currently a Professor in the
Graduate School of Engineering, Nagoya University.
He was a sub-leader of the Advanced 3D Tele-Vision
Project established by the Telecommunications Advancement Organization of Japan from 1998 to 2002.
Now he serves as a Vice-President of the Image Engineering Technical Group,
Institute of Electronics, Information and Communication Engineers (IEICE),
Japan. He received an Academic Encouragement Award from the IEICE in 1996 and the Best Paper Award from the 3-D Image Conference several times between 2001 and 2009. He is known for his work on 3-D image processing and 3-D visual communications based on ray-based representation. His current research
interests include multi-dimensional signal processing, large-scale multi-camera
systems, multi-view video coding and transmission, free-viewpoint television,
and their applications for Intelligent Transport Systems. He is a member of the
IEEE, The Institute of Electronics, Information and Communication Engineers
(IEICE), and the Institute of Image Information and Television Engineers (ITE)
of Japan. He serves as an Associate Editor of IEEE TCSVT.