[Figure 1: The OpenBR face recognition pipeline (Detection + Normalization + Representation + Extraction : Matching), with example plugins at each stage (eye, face, keypoint, and landmark detection; color conversion, enhancement, filtering, and registration; binary patterns, keypoint descriptors, orientation histograms, and wavelets; clustering, normalization, subspace learning, and quantization; classifiers, density estimation, distance metrics, and regressors) and supporting modules for gallery management, clustering & fusion, data parallelization, and persistent storage, alongside a retrieval rate evaluation on the CUFS, CUFSF, FERET, MEDS, and FRGC datasets.]
Project     Modern  Active  Deployable
CSU [17]    Yes     No      No
OpenCV [4]  No      Yes     Yes
OpenBR      Yes     Yes     Yes

Table 1: Existing open source face recognition software. A project is considered modern if it incorporates peer-reviewed methods published in the last five years, active if it has source code changes made within the last six months, and deployable if it exposes a public API.

library is written in C++ with installation instructions provided for Windows, Linux/Mac, Android, and iOS. Documentation and examples demonstrate how to interact with the face recognition API, though it lacks modern face recognition methods and fine-grained eye localization. While the project enjoys daily source code improvements and an extensive developer network, it lacks off-the-shelf functionality useful to biometric researchers, including model training, gallery management, algorithm evaluation, cross-validation, and a concise syntax to express algorithms – all features available in OpenBR.

1.2. Overview

OpenBR is introduced in Section 2 as a project to facilitate open source biometrics research, and its architecture is detailed in Section 3. Section 4 describes the OpenBR face recognition capability, and its performance is evaluated on still-image frontal face datasets in Section 5. Novel contributions include a free open source framework for biometrics research, an embedded language for algorithm design, and a compact template representation and matching strategy that demonstrates competitive performance on academic datasets and the Face Recognition Vendor Test 2012.

2. The OpenBR Collaboratory

OpenBR is a framework for investigating new biometric modalities, improving existing algorithms, interfacing with commercial systems, measuring recognition performance, and deploying automated systems. The project is designed to facilitate rapid algorithm prototyping, and features a mature core framework, flexible plugin system, and support for open and closed source development. Figures 1 and 2 illustrate OpenBR functionality and software components. OpenBR originated within The MITRE Corporation from a need to streamline the process of prototyping and evaluating algorithms. The project was later published as open source software under the Apache 2 license and is free for academic and commercial use.

2.1. Software Requirements

OpenBR is written in a portable subset of ISO C++ and is known to work with all popular modern C++ compilers, including Clang, GCC, ICC, MinGW-w64, and Visual Studio. The project is actively maintained on Windows, Mac, and Linux, with ports to Android, iOS, and other platforms known to be within reach. OpenBR requires two open source projects, OpenCV [4] and Qt.
[Figure 2: OpenBR software components. Open source and commercial applications and the br command line tool sit atop the OpenBR core framework, which builds on Qt and OpenCV; algorithm plugins (e.g. PCA, LBP, SVM, LDA, HOG, ...) extend the framework, and commercial wrappers can expose commercial libraries through the same interface.]

$ br -algorithm 'Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+CvtFloat+PCA(0.95):Dist(L2)' -train BioID/img Eigenfaces

Figure 3: Training Eigenfaces on the BioID dataset [10]. The algorithm description is wrapped in quotes to avoid unintended interpretation by the shell preprocessor.
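The command above illustrates OpenBR's embedded algorithm description language: transforms are chained with '+', and ':' separates template generation from the comparison (distance) step. As a rough illustration of how such a description decomposes, here is a hypothetical Python sketch of a simplified parser; OpenBR's actual parser is written in C++ and additionally handles nesting, abbreviations, and plugin registration.

```python
# Illustrative sketch only: split a simplified OpenBR-style algorithm
# string "TransformA+TransformB(arg1,arg2):Distance" into a list of
# (transform name, argument list) pairs plus a distance description.
import re

def parse_algorithm(description):
    enrollment, _, distance = description.partition(":")
    transforms = []
    for token in enrollment.split("+"):
        # Each token is a plugin name with an optional argument list.
        match = re.fullmatch(r"(\w+)(?:\((.*)\))?", token.strip())
        name, args = match.group(1), match.group(2)
        transforms.append((name, args.split(",") if args else []))
    return transforms, distance or None

transforms, distance = parse_algorithm(
    "Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes"
    "+Affine(128,128,0.33,0.45)+CvtFloat+PCA(0.95):Dist(L2)")
```

Under this reading, the Figure 3 description loads an image, converts it to grayscale, detects the face and eyes, warps the face to a canonical 128x128 frame, converts to floating point, projects onto a PCA subspace retaining 95% of variance, and compares templates with the L2 distance.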
5.1. Datasets

In the interest of brevity we have omitted descriptions of each individual dataset considered in this paper. Instead, references for the datasets are provided in Table 3, example images are shown in Table 4, and aggregate dataset statistics are available in Table 5.

5.2. Algorithms

Two versions of the previously described 4SF algorithm are considered, one using open source face alignment (OpenBR) and the other using commercial alignment (BR-A). Accuracy is compared against three commercial systems (A, B, C). We intentionally choose not to tune algorithm parameters for each dataset; instead, the algorithm is trained on all datasets exactly as stated in Figure 6.

5.3. Results

One challenging aspect of reporting on this wide variety of datasets is the lack of a common training and testing protocol. In the interest of allowing comparison across datasets, we opt to use 5-fold cross validation and report true accept rates at a false accept rate of 0.1%.

Table 6 compares the template size generated for each algorithm. OpenBR has the smallest templates, though templates from commercial system B are of similar size. BR-A template size is not listed as it is not statistically different from OpenBR.

Table 7 compares the true accept rates of each algorithm. The commercial systems tend to outperform OpenBR on the classic face recognition datasets like FERET and FRGC. There is less of a discrepancy in performance on some of
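The operating point reported here, true accept rate (TAR) at a false accept rate (FAR) of 0.1%, can be computed from genuine and impostor comparison scores by setting the decision threshold from the impostor score distribution. A minimal sketch with fabricated scores, not the actual OpenBR evaluation code:

```python
# Sketch: TAR at a target FAR, assuming higher score = more similar.
# All scores below are fabricated for illustration.

def tar_at_far(genuine, impostor, target_far):
    """True accept rate when the threshold is set so that at most
    target_far of the impostor comparisons are falsely accepted."""
    ranked = sorted(impostor, reverse=True)
    k = int(target_far * len(ranked))          # allowed false accepts
    threshold = ranked[k] if k < len(ranked) else float("-inf")
    accepted = sum(1 for s in genuine if s > threshold)
    return accepted / len(genuine)

impostor = list(range(1000))                   # fabricated impostor scores
genuine = [999.5, 1000, 500, 998]              # fabricated genuine scores
rate = tar_at_far(genuine, impostor, target_far=0.001)   # rate == 0.5
```

With 1000 impostor scores and a target FAR of 0.1%, at most one impostor comparison may score above the threshold, and the TAR is the fraction of genuine comparisons that still clear it.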
[Table 6: Average gallery template size (KB) across different face recognition algorithms and datasets (CUFS, CUFSF, FB, FC, Dup1, Dup2, FRGC-1, FRGC-4, HFB, LFW, MEDS, PCSO). Five-fold cross validation uncertainty reported at one standard deviation.]

[Table 7: Average true accept rate at a false accept rate of one-in-one-thousand across the same face recognition algorithms and datasets. Five-fold cross validation uncertainty reported at one standard deviation.]
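The "five-fold cross validation uncertainty reported at one standard deviation" in these captions corresponds to the usual aggregation: partition subjects into five disjoint folds, score each held-out fold, and report the mean and sample standard deviation of the per-fold metric. A generic sketch with fabricated fold values; the function and numbers are illustrative, not from the paper:

```python
# Sketch: five-fold aggregation of a per-fold metric as mean +/- one
# sample standard deviation. Fold values are fabricated.
from statistics import mean, stdev

def make_folds(subjects, k=5):
    """Partition subject IDs round-robin into k disjoint folds, so the
    same subject never appears in both training and testing."""
    folds = [[] for _ in range(k)]
    for i, subject in enumerate(sorted(subjects)):
        folds[i % k].append(subject)
    return folds

fold_tar = [0.62, 0.65, 0.60, 0.63, 0.64]      # hypothetical per-fold TARs
summary = f"{mean(fold_tar):.2f} ± {stdev(fold_tar):.2f}"
```

Splitting by subject rather than by image is the important design choice: it keeps every identity entirely on one side of the train/test boundary.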
The OpenBR algorithm was also submitted to NIST's Face Recognition Vendor Test (FRVT) 2012 [7] for independent evaluation. OpenBR is believed to be the first open source system to formally compete in the series. Though the 2012 test has at least five academic participants, to our knowledge the source code for these systems has not been made publicly available. In the NIST reports, the OpenBR submission is denoted with the letter 'K'. As of the end of Phase 1 in March 2013, amongst 17

OpenBR 4SF features were also used to train a support vector machine to compete in the Class D gender and age estimation tasks. For gender estimation, amongst 7 competitors OpenBR ranked 2nd on mugshots with 92.8% accuracy and 2nd on visas with 85.0% accuracy. For age estimation, amongst 6 competitors OpenBR ranked 4th on mugshots with an RMS error of 9.9 years for males and 11.2 years for females, and 4th on visas with an RMS error of 12.8 years for males and 14.8 years for females.
6. Conclusion

In this paper we introduced the OpenBR collaboratory for biometrics research and algorithm development. We then discussed the 4SF face recognition algorithm implemented in OpenBR, which offers a competitive baseline for researchers when a commercial system is unavailable. OpenBR offers the ability to isolate components of a biometric recognition process, allowing researchers to focus on particular steps in an algorithm within the context of a reproducible and deployable system.

Acknowledgments

The authors would like to thank The MITRE Corporation for releasing OpenBR as free open source software, and recognize Emma Taborsky and Charles Otto for their ongoing commitment to the collaboratory. Software development at MITRE was funded by the MITRE Sponsored Research (MSR) program, and B. Klare's research was supported by the Noblis Sponsored Research (NSR) program.

References

[1] T. Ahonen, A. Hadid, and M. Pietikainen. Face description with local binary patterns: Application to face recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(12):2037–2041, 2006.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
[3] D. S. Bolme, B. A. Draper, and J. R. Beveridge. Average of synthetic exact filters. In IEEE Conf. on Computer Vision and Pattern Recognition, 2009, pages 2105–2112.
[4] G. Bradski. The OpenCV library. Dr. Dobb's Journal, 25(11):120–126, 2000.
[5] Z. Cao, Q. Yin, X. Tang, and J. Sun. Face recognition with learning-based descriptor. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2010, pages 2707–2714.
[6] A. Founds, N. Orlans, G. Whiddon, and C. Watson. NIST Special Database 32 - Multiple Encounter Dataset II (MEDS-II). www.nist.gov/itl/iad/ig/sd32.cfm, 2011.
[7] P. Grother, G. Quinn, and M. Ngan. Face Recognition Vendor Test (FRVT) 2012. http://www.nist.gov/itl/iad/ig/frvt-2012.cfm, March 2013.
[8] G. B. Huang, M. Mattar, T. Berg, E. Learned-Miller, et al. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, 2008.
[9] Intel. Intel architecture instruction set extensions programming reference. http://download-software.intel.com/sites/default/files/319433-014.pdf, 2012.
[10] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz. Robust face detection using the Hausdorff distance. In Audio- and Video-Based Biometric Person Authentication, pages 90–95. Springer, 2001.
[11] B. Klare. Spectrally sampled structural subspace features (4SF). Michigan State University Technical Report MSU-CSE-11-16, 2011.
[12] B. Klare, M. Burge, J. Klontz, R. Vorder Bruegge, and A. Jain. Face recognition performance: Role of demographic information. IEEE Trans. on Information Forensics and Security, 7(6):1789–1801, 2012.
[13] B. Klare and A. K. Jain. Face recognition across time lapse: On learning feature subspaces. In IEEE International Joint Conf. on Biometrics (IJCB), 2011, pages 1–8.
[14] S. Z. Li, Z. Lei, and M. Ao. The HFB face database for heterogeneous face biometrics research. In IEEE Conf. on Computer Vision and Pattern Recognition Workshops, 2009, pages 1–8.
[15] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[16] Y. M. Lui, D. Bolme, P. Phillips, J. Beveridge, and B. Draper. Preliminary studies on the good, the bad, and the ugly face recognition challenge problem. In IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2012, pages 9–16.
[17] P. J. Phillips, J. R. Beveridge, B. A. Draper, G. Givens, A. J. O'Toole, D. S. Bolme, J. Dunlop, Y. M. Lui, H. Sahibzada, and S. Weimer. An introduction to the good, the bad, & the ugly face recognition challenge problem. In IEEE International Conf. on Automatic Face & Gesture Recognition (FG) Workshops, 2011, pages 346–353.
[18] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek. Overview of the face recognition grand challenge. In IEEE Conf. on Computer Vision and Pattern Recognition, 2005, volume 1, pages 947–954.
[19] P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss. The FERET evaluation methodology for face-recognition algorithms. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(10):1090–1104, 2000.
[20] S. Sonnenburg, M. L. Braun, C. S. Ong, S. Bengio, L. Bottou, G. Holmes, Y. LeCun, F. Pereira, and C. E. Rasmussen. The need for open source software in machine learning. Journal of Machine Learning Research, 8:2443–2466, 2007.
[21] X. Tan and B. Triggs. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. on Image Processing, 19(6):1635–1650, 2010.
[22] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
[23] P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004.
[24] X. Wang and X. Tang. Face photo-sketch synthesis and recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 31(11):1955–1967, 2009.
[25] W. Zhang, X. Wang, and X. Tang. Coupled information-theoretic encoding for face photo-sketch recognition. In IEEE Conf. on Computer Vision and Pattern Recognition, 2011, pages 513–520.