Dimosthenis Karatzas, Faisal Shafait, Seiichi Uchida, Masakazu Iwamura, Lluis Gomez i Bigorda,
Sergi Robles Mestre, Joan Mas, David Fernandez Mota, Jon Almazán Almazán and Lluis Pere de las Heras
I. INTRODUCTION
Text extraction from versatile text containers such as born-digital images, real scenes and videos has been of continued interest in the field for more than a decade. The series of Robust Reading Competitions addresses the need to quantify and
track progress in this domain. The competition was initiated
in 2003 by Simon Lucas, focusing initially on text localisation
and text recognition in real scene images [1], [2]. The first
edition was met with great success and was repeated in 2005
[3], creating a reference framework for the evaluation of text
detection methods. In 2011, two challenges were organised
under the ICDAR Robust Reading Competition, one dealing
with text extraction from born-digital images [4], and the other
from real scene images [5]. The 2013 edition of the Robust
Reading Competition [6] marks a new milestone in the series.
The two challenges on real scenes and born-digital images
have been integrated further, unifying performance evaluation
metrics, ground truth specification and the list of offered tasks.
In parallel, a new challenge is established on text extraction
from video sequences, introducing new datasets, tools and
evaluation frameworks.
The 2013 Robust Reading Competition [6] brings to the Document Analysis community new and extended datasets with unified ground truth specifications, an integrated performance evaluation framework, on-line performance evaluation tools and enhanced visualisation of results.
The competition consists of three Challenges: Reading Text
in Born-digital Images (Challenge 1), Reading Text in Scene
Images (Challenge 2), and Reading Text in Videos (Challenge
3). Each Challenge is based on a series of specific tasks.
Challenges 1 and 2 offer tasks on Text Localisation, where the objective is to detect the existence of text and return a bounding box location for it in the image; Text Segmentation, where the objective is to obtain a pixel-level separation of text versus background; and Word Recognition, where the objective is to automatically provide a transcription for a list of pre-localised word images. Challenge 3 currently offers a single task on Text Localisation, where the objective is to detect and track text objects in a video sequence.
This report presents the final results after analysing the submissions received. The presentation follows the structure of the competition, and results are grouped by Challenge. For each Challenge, the datasets used and the ground truth specification are briefly described, the performance evaluation protocol is detailed, and finally the results are presented and analysed.
First, the competition protocol is briefly described in Section II.
Section III is dedicated to Challenge 1, Section IV to Challenge
2 and Section V to Challenge 3. Overall conclusions are
presented in Section VI. Finally, in the appendix the technical
details of all participating methods are summarised.
II. COMPETITION PROTOCOL

III. CHALLENGE 1: READING TEXT IN BORN-DIGITAL IMAGES
component. For this evaluation, Tmin was set to 0.5 and Tmax
to 0.9. Each of the atoms in the ground truth is classified
as Well Segmented, Merged, Broken, Broken and Merged or
Lost, while False Positive responses are also counted. Atom
Precision, Recall and F-score metrics are calculated over the
whole collection summarising the performance of the method.
Word boxes that are directly defined at the word level in the ground truth, and do not contain atoms or text parts, are treated as "don't care" regions and discounted from the evaluation. The interested reader is encouraged to read [11]
for more details. Apart from atom level metrics, we also report
Precision, Recall and F-score results at the pixel level for
completeness.
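To make the bookkeeping behind these counts concrete, the sketch below (Python; the function name and the use of raw coverage ratios are our own simplifications, and the exact protocol, which also accounts for stroke thickness, is the one described in [11]) classifies ground-truth atoms against detected components and derives atom-level precision, recall and F-score:

    from collections import Counter

    T_MIN, T_MAX = 0.5, 0.9   # thresholds used in this evaluation

    def atom_report(atoms, components):
        # atoms: ground-truth atoms, components: detected components,
        # both given as lists of sets of (x, y) pixel coordinates.
        # Simplified sketch: a loose interpretation of the thresholds,
        # not the exact protocol of [11].
        cover = [[len(a & c) / len(a) for a in atoms] for c in components]
        counts = Counter()
        for ai, a in enumerate(atoms):
            hits = [ci for ci in range(len(components)) if cover[ci][ai] >= T_MIN]
            total = sum(cover[ci][ai] for ci in hits)
            broken = len(hits) > 1                          # atom split over several components
            merged = any(sum(1 for r in cover[ci] if r >= T_MIN) > 1
                         for ci in hits)                    # a matched component spans several atoms
            if not hits or total < T_MAX:
                counts["Lost"] += 1
            elif broken and merged:
                counts["Broken and Merged"] += 1
            elif broken:
                counts["Broken"] += 1
            elif merged:
                counts["Merged"] += 1
            else:
                counts["Well Segmented"] += 1
        counts["False Positives"] = sum(1 for row in cover
                                        if all(r < T_MIN for r in row))
        # atom-level precision, recall and F-score over the collection
        recall = counts["Well Segmented"] / max(len(atoms), 1)
        precision = counts["Well Segmented"] / max(len(components), 1)
        f_score = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
        return counts, precision, recall, f_score

Counts of this kind are what Tables III and VII report, while the derived precision, recall and F-score roughly correspond to the atom-level columns of Tables II and VI.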
3) Task 3 - Word Recognition: For the evaluation of Word
Recognition results we implemented a standard edit distance
metric, with equal costs for additions, deletions and substitutions. For each word we calculate the normalised edit distance between the ground truth and the submitted transcription, and we report the sum of normalised distances over all words of the test set. The normalisation is done by the length of the ground truth transcription. The comparison is case sensitive. To assist qualitative analysis, we also provide
statistics on the number of correctly recognised words.
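A minimal sketch of this scoring (Python; function names are illustrative) is the following, using a unit-cost Levenshtein distance normalised by the ground-truth length and a case-sensitive exact match for the correctly-recognised-words statistic:

    def edit_distance(a, b):
        # Levenshtein distance with equal (unit) costs for additions,
        # deletions and substitutions; comparison is case sensitive.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                    # deletion
                               cur[j - 1] + 1,                 # addition
                               prev[j - 1] + (ca != cb)))      # substitution
            prev = cur
        return prev[-1]

    def word_recognition_score(ground_truth, submitted):
        # Total normalised edit distance over the test set, plus the
        # percentage of exactly recognised words.
        total = sum(edit_distance(gt, s) / len(gt)
                    for gt, s in zip(ground_truth, submitted))
        correct = sum(gt == s for gt, s in zip(ground_truth, submitted))
        return total, 100.0 * correct / len(ground_truth)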
C. Results and Discussion
Overall, 17 methods from 8 different participants (excluding variants of the same method) were submitted to the various tasks of this challenge. In the sections below we provide the final ranking tables as well as a short analysis of the results for each task. The participating methods are referred to by name in the ranking tables and in the text; please see the Appendix for technical details on the participating methods.
Similarly to the past edition of the competition, we have
included the performance of an out-of-the-box commercial
OCR software package as a baseline for text localisation (task
1) and word recognition (task 3). For this purpose we have
used the ABBYY OCR SDK (version 10) [13]. Factory default
parameters were used for pre-processing with the following
exceptions. First, we enabled the option to look for text in low resolution images, since the resolution of born-digital images is typically below 100 DPI. This option is not exclusive, meaning that text in high resolution images is still looked for when it is enabled. Second, we set the OCR parameters FlexiForms and FullTextIndex to true, to force detection of text inside images. For text localisation (task 1) we used the reported locations of individual words, while for word recognition (task 3) we used the returned transcriptions.
TABLE I. Challenge 1, Task 1 (Text Localisation) results.

Method Name                 Recall (%)   Precision (%)   F-score
USTB TexStar                82.38        93.83           87.74
TH-TextLoc                  75.85        86.82           80.96
I2R NUS FAR                 71.42        84.17           77.27
Baseline                    69.21        84.94           76.27
Text Detection [15], [16]   73.18        78.62           75.81
I2R NUS                     67.52        85.19           75.34
BDTD CASIA                  67.05        78.98           72.53
OTCYMIST [7]                74.85        67.69           71.09
Inkam                       52.21        58.12           55.00

TABLE II. Challenge 1, Task 2 (Text Segmentation) results at pixel and atom level.

                 Pixel Level                              Atom Level
Method           Recall (%)  Precision (%)  F-score      Recall   Precision   F-score
USTB FuStar      87.21       79.98          83.44        80.01    86.20       82.99
I2R NUS          87.95       74.40          80.61        64.57    73.44       68.72
OTCYMIST         81.82       71.00          76.03        65.75    71.65       68.57
I2R NUS FAR      82.56       74.31          78.22        59.05    80.04       67.96
Text Detection   78.68       68.63          73.32        49.64    69.46       57.90

TABLE III. Challenge 1, Task 2 (Text Segmentation): atom-level classification counts.

Method           Well Segmented   Merged   Broken   Lost   False Positives
USTB FuStar      6258             920      56       587    370
I2R NUS          5051             1584     30       1151   685
OTCYMIST         5143             1420     34       1223   1083
I2R NUS FAR      4619             1474     12       1716   156
Text Detection   3883             2716     36       1187   210

TABLE IV. Challenge 1, Task 3 (Word Recognition) results.

Method       Total Edit Distance   Correctly Recognised Words (%)
PhotoOCR     105.5                 82.21
MAPS [17]    196.2                 80.4
PLT [18]     200.4                 80.26
NESP [19]    214.5                 79.29
Baseline     409.4                 60.95
IV. CHALLENGE 2: READING TEXT IN SCENE IMAGES

A. Dataset

The scene image dataset used for this competition is almost the same as the dataset of the ICDAR 2011 Robust Reading Competition. The differences from the ICDAR 2011 dataset are that the ground-truth texts of several images were revised and that a small number of images duplicated across the training and test sets were excluded. Accordingly, the ICDAR 2013 dataset is a subset of the ICDAR 2011 dataset. It contains 462 images in total, comprising 229 images in the training set and 233 images in the test set.
A new feature of the ICDAR 2013 dataset is pixel-level ground truth. Human operators carefully observed each scene image and painted character pixels manually (more precisely, they used semi-automatic segmentation software and then made further manual revisions). Every character is painted by
TABLE V. Challenge 2, Task 1 (Text Localisation) results.

Method Name                      Recall (%)   Precision (%)   F-score
USTB TexStar                     66.45        88.47           75.89
Text Spotter [20], [21], [22]    64.84        87.51           74.49
CASIA NLPR [23], [24]            68.24        78.89           73.18
Text Detector CASIA [25], [26]   62.85        84.70           72.16
I2R NUS FAR                      69.00        75.08           71.91
I2R NUS                          66.17        72.54           69.21
TH-TextLoc                       65.19        69.96           67.49
Text Detection [15], [16]        53.42        74.15           62.10
Baseline                         34.74        60.76           44.21
Inkam                            35.27        31.20           33.11

TABLE VI. Challenge 2, Task 2 (Text Segmentation) results at pixel and atom level.

                 Pixel Level                              Atom Level
Method           Recall (%)  Precision (%)  F-score      Recall   Precision   F-score
I2R NUS FAR      74.73       81.70          78.06        68.64    80.59       74.14
NSTextractor     60.71       76.28          67.61        63.38    83.57       72.09
USTB FuStar      69.58       74.45          71.93        68.03    72.46       70.18
I2R NUS          73.57       79.04          76.21        60.33    76.62       67.51
NSTsegmentator   68.41       63.95          66.10        68.00    54.35       60.41
Text Detection   64.74       76.20          70.01        62.03    57.43       59.64
OTCYMIST         46.11       58.53          51.58        41.79    31.60       35.99

TABLE VII. Challenge 2, Task 2 (Text Segmentation): atom-level classification counts.

Method           Well Segmented   Merged   Broken   Lost   False Positives
I2R NUS FAR      4028             297      11       1531   355
NSTextractor     3719             106      10       2033   345
USTB FuStar      3992             279      12       1585   966
I2R NUS          3540             637      7        1684   357
NSTsegmentator   3990             148      25       1705   2792
Text Detection   3640             314      27       1884   1885
OTCYMIST         2452             276      23       3117   4549

TABLE VIII. Challenge 2, Task 3 (Word Recognition) results.

Method                         Total Edit Distance   Correctly Recognised Words (%)
PhotoOCR                       122.7                 82.83
PicRead [27]                   332.4                 57.99
NESP [19]                      360.1                 64.20
PLT [18]                       392.1                 62.37
MAPS [17]                      421.8                 62.74
Feilds Method                  422.1                 47.95
PIONEER [28], [29]             479.8                 53.70
Baseline                       539.0                 45.30
TextSpotter [20], [21], [22]   606.3                 26.85
V. CHALLENGE 3: READING TEXT IN VIDEOS

A. Dataset

The challenge is based on various short sequences (around 10 seconds to 1 minute long) selected so that they represent a wide range of real-life situations (high-level cognitive tasks), captured using different types of cameras. The dataset was collected by the organisers in different countries, in order to include text in various languages (Spanish, French and English; some Japanese text also appears). The video sequences correspond to certain tasks that we asked users to perform, like searching for a shop in the street or finding their way inside a building. The tasks were selected so that they represent typical real-life applications and cover indoor and outdoor scenarios. We used different cameras for different sequences, so as to also cover a variety of possible hardware; these include mobile phones, hand-held cameras and head-mounted cameras.
We provided 28 videos in total; 13 videos comprised the training set and 15 the test set. The training set was divided into two parts (A and B), the only difference between them being the time of release. The videos were captured with four kinds of cameras ((A) head-mounted camera, (B) mobile phone, (C) hand-held camcorder and (D) HD camera) and categorised into seven tasks ((1) follow wayfinding panels walking outdoors, (2) search for a shop in a shopping street, (3) browse products in a supermarket, (4) search for a location in a building, (5) driving, (6) highway watch, (7) train watch). A summary of the videos is given in Table IX.
TABLE IX. Summary of the videos used in Challenge 3.

Training set:

Video   Task   Camera type   No. of Frames   Duration
7       (5)    (A)           264             00:11
8       (5)    (A)           240             00:10
37      (2)    (C)           456             00:19
42      (2)    (C)           504             00:21
51      (7)    (D)           450             00:15
52      (7)    (D)           450             00:15
54      (7)    (D)           480             00:16
13      (4)    (A)           576             00:24
19      (5)    (A)           408             00:17
26      (5)    (B)           387             00:16
36      (2)    (C)           336             00:14
40      (2)    (C)           408             00:17
41      (2)    (C)           528             00:22

Test set:

Video   Task   Camera type   No. of Frames   Duration
1       (1)    (B)           602             00:20
5       (3)    (B)           542             00:18
6       (3)    (B)           162             00:05
11      (4)    (A)           312             00:13
17      (3)    (A)           264             00:11
20      (5)    (A)           912             00:38
23      (5)    (B)           1020            00:34
24      (5)    (B)           1050            00:35
32      (2)    (C)           312             00:13
35      (2)    (C)           336             00:14
39      (2)    (C)           438             00:20
44      (6)    (D)           1980            01:06
48      (6)    (D)           510             00:17
49      (6)    (D)           900             00:30
53      (7)    (D)           450             00:15
B. Performance Evaluation

There are numerous evaluation frameworks proposed for multiple object tracking systems [31], [32]; an elaborate review, including their use for text tracking in videos, can be found in Kasturi et al. [31]. In this competition, we chose to use the CLEAR-MOT [32] and VACE [31] metrics adapted to the specificities of text detection and tracking, extending the CLEAR-MOT code provided by Bagdanov et al. [33] (http://www.micc.unifi.it/masi/code/clear-mot/).
The CLEAR-MOT [32] evaluation framework provides two overall performance measures: the Multiple Object Tracking Precision (MOTP), which expresses how well the locations of words are estimated, and the Multiple Object Tracking Accuracy (MOTA), which shows how many mistakes the tracking system made in terms of false negatives, false positives, and ID mismatches. On the other hand, the Average Tracking Accuracy (ATA) VACE metric [31] provides a spatio-temporal measure that penalises fragmentations while accounting for the number of words correctly detected and tracked, false negatives, and false positives.
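For reference, in their usual formulation [32] these two measures are defined as

MOTP = \frac{\sum_{i,t} d_{i,t}}{\sum_{t} c_t}, \qquad
MOTA = 1 - \frac{\sum_{t} (\mathit{fn}_t + \mathit{fp}_t + \mathit{id}_t)}{\sum_{t} g_t},

where c_t is the number of ground truth-to-result matches in frame t, d_{i,t} is the localisation error of match i in frame t (for text, an overlap-based distance between word bounding boxes is the natural choice), fn_t, fp_t and id_t are the numbers of false negatives, false positives and identity switches in frame t, and g_t is the number of ground-truth words in frame t.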
Given a video sequence an ideal text tracking method
Fig. 1. CLEAR-MOT and ATA metrics for (a) the baseline algorithm and
(b) TextSpotter [20], [21], [22] method. Diamond markers indicate average
values.
VI. CONCLUSIONS
ACKNOWLEDGMENT
This work was supported by the Spanish research projects TIN2011-24631 and RYC-2009-05031, and the JST CREST project.
REFERENCES
APPENDIX
DESCRIPTION OF THE PARTICIPATING METHODS