Sean Austin
CST 300
08 FEB 2018
Ethics Argument Paper
There is no disputing that crime, and the criminals who commit it, are among the most
difficult aspects of any community in a free society. Seldom does a night pass when the
television media does not illustrate crime after crime on the evening news, or the print media
does not splash stark and horrific pictures of an innocent victim caught in the wrong place at the
wrong time. Politicians are beseeched and city halls are inundated with the protests of a
citizenry calling for a greater police presence, stricter legislation, and an end to the fear caused by
those who would do what they will, by any means. In light of the need for greater public safety,
some solutions skirt the line between what society needs and what it
wants, and those ends often prove to be mutually exclusive. So, if a solution exists, and it does,
the question becomes whether it should be adopted at all.
In 2016, two Chinese engineers sought to quantify the relationship between computer-
learned facial recognition and criminality. Reportedly, they set out to disprove the science,
some say pseudoscience, of physiognomy: the study, or art, of facial features
and how they supposedly predict a person's values (Revell, 2016). With a careful, methodical approach, the
engineers tested the images of 1,856 Chinese men, a number of them
criminals, to determine whether there was a correlation between their features and
any predisposition of a criminal nature. After their program was applied, they found a definite
correlation, with an accuracy rate of 89.5% (RT, 2016). The publication of
their findings attracted the attention of proponents and detractors alike, and with that attention
came two opposing positions.
The first position, in favor of government, suggests that with this program, an opportunity
presents itself to be proactive for a change. It would be an enormous benefit for law enforcement:
the opportunity to engage in predictive policing and essentially cull the negative attributes of a
community before they become an issue would make the community that much safer for the
families that live there. With this tool, an opportunity exists to establish parity for all
communities, with safety being equal regardless of socioeconomic status or race, and in a way
never before possible.
The second position, in favor of human rights, suggests that with a program like this, a
person's civil rights may be trampled based strictly on appearance. A
person's right to be presumed innocent until proven guilty would be passed over in favor of people being
sanctioned, sometimes for crimes that have yet to happen. Additionally, the technology, while
being implemented in some places, is essentially untested and still carries a margin of error that
may be overlooked in the hands of the same agencies that will ultimately prosecute the accused.
While this would appear to be a viable program, no country has adopted it officially, and
there is little to no research about it beyond what can be found peripherally in articles discussing
it. No experimentation, no sociological studies, and no firsthand accounts of this program being put into
practice exist. In large part, then, assessing this program in practice
becomes a projection of sorts: defining what results would likely occur, based on previous
failed attempts in our history to integrate unpopular legislation. Human nature being what it is,
projecting from what a reasonable person would accept is not beyond the
realm of reason. In November of 2016, Drs. Xiaolin Wu and Xi Zhang released an academic
paper entitled "Automated Inference on Criminality Using Face Images" through Shanghai Jiao
Tong University. Both men currently teach in North America: Dr. Wu at McMaster University
in Canada and Dr. Zhang at Texas A&M University in the United States. Both are professors
of electrical engineering and computer science, and both are IEEE fellows (TAMU,
2018; McMaster, 2018; Sullivan, 2016). In an attempt to quantify the underlying science
behind the commercial applications of artificial intelligence and man-machine interfaces as they
apply to pattern recognition and machine learning, they began an effort to further the research
into quantifying and analyzing the relationships between social perception and facial features
(Wu & Zhang, 2016). Basing their initial work on previous theories involving the science of
physiognomy, the study of facial features and how they apply to a person’s values, they began to
speculate about how “face-induced inferences” affect “an individual’s social attributes” (Wu &
Zhang, 2017). They chose to focus this study on the social attribute of criminality. Developing a
learning algorithm, their aim was to simulate a human's perception resulting in a judgment of
social values. They selected 1,856 subjects, using ID photos that met the criteria of being
between 18 and 55 years of age, with no facial hair and no facial scars, Chinese, and male. Of the 1,856
subjects, 1,126 were non-criminals and 730 were criminals, 330 of those being wanted by the
Ministry of Public Security or other Chinese law enforcement agencies. After the images were
run through a number of facial-feature equations and calculations, followed by rigorous cross-
validations, it was determined that the algorithm as applied had an 89.51% accuracy rate,
essentially demonstrating that the program was able to make a reliable inference on criminality
(Wu & Zhang, 2016; RT, 2016; Gu, 2017; Sullivan, 2016). The research and the resultant
evidence are compelling and seem to transcend any real questions of viability (Sullivan, 2016).
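The evaluation protocol described above, a learned classifier scored by cross-validation on labeled ID photos, can be sketched in simplified form. The synthetic feature vectors and the nearest-centroid classifier below are illustrative stand-ins, not the authors' actual pipeline; only the class counts (1,126 non-criminals, 730 criminals) and the k-fold scoring procedure mirror the study.

```python
# Simplified sketch of the study's evaluation protocol: train a binary
# classifier on facial-feature vectors, then estimate accuracy with
# k-fold cross-validation. Features and model are illustrative only.
import random

random.seed(0)

def make_dataset(n_noncriminal=1126, n_criminal=730):
    """Synthetic stand-in for the study's 1,856 labeled feature vectors."""
    data = []
    for _ in range(n_noncriminal):
        data.append(([random.gauss(0.0, 1.0) for _ in range(3)], 0))
    for _ in range(n_criminal):
        data.append(([random.gauss(0.5, 1.0) for _ in range(3)], 1))
    random.shuffle(data)
    return data

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_centroid_accuracy(train, test):
    """Fit a nearest-centroid classifier, score it on held-out data."""
    c0 = centroid([x for x, y in train if y == 0])
    c1 = centroid([x for x, y in train if y == 1])
    correct = sum(1 for x, y in test
                  if (dist2(x, c1) < dist2(x, c0)) == (y == 1))
    return correct / len(test)

def k_fold_accuracy(data, k=10):
    """Average held-out accuracy over k folds (the cross-validation step)."""
    fold = len(data) // k
    scores = []
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        train = data[:i * fold] + data[(i + 1) * fold:]
        scores.append(nearest_centroid_accuracy(train, test))
    return sum(scores) / len(scores)

data = make_dataset()
print(f"mean cross-validated accuracy: {k_fold_accuracy(data):.3f}")
```

The point of the cross-validation step is that the reported 89.51% figure is measured on faces the model never trained on, which is why the authors could present it as a generalization rate rather than a memorization rate.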
The viability of marketing a software suite capable of facial recognition and criminality
interpretation is real and it allows law enforcement agencies, and others, the opportunity to
conduct actionable forays into predictive policing. Governments, nations, and municipalities
should adopt a policy of preemptively screening the populace because it keeps people safe. The
technology allows law enforcement to focus their attention on a true threat and would allow them
a chance to track the movement and behavior of specific individuals. Technology that allows law
enforcement to focus their attentions on specific targets acts as a personnel multiplier since much
of the surveillance work will be completed by proxy. This will also allow police to be more
mobile, dissuading other threats at a significant overall cost savings. This
technology would also afford governments the means to detain and separate subversive elements in a
community and relocate them to supervised areas. This would create much safer communities,
allowing the families there to flourish. An added benefit is that with a smaller, more
focused law enforcement arm, and with the negative element culled from the community, local
governments would have an opportunity to reallocate funds away from the police and
redistribute them to other areas of community infrastructure, increasing overall economic health.
This position sits firmly within the utilitarian realm of ethics, as advanced by John Stuart
Mill, in that it protects the most people, even to the detriment of the few.
Just because a thing can be done, it does not necessarily follow that it should be done.
Instituting programs such as this presents a very real threat to the rights of the individual. It is
very easy for a politician, answering the screaming demands of a constituency
to reduce crime, to institute a process of facial recognition and criminality interpretation
to the detriment of the individual. Arrest and detention of subjects and suspects identified by
these systems circumvent due process. The presumption of innocence before being proven guilty
by a jury of our peers is subverted when a citizen is detained without prior cause or in the
absence of a crime, as in the case of preemptive or predictive policing. Due process is again
subverted when a potential suspect in a crime is detained with this technology without the benefit
of corroborating evidence of guilt. Apprehending and detaining individuals through this process is a
violation of civil rights. Arrest without a warrant or without cause violates the Fourth
Amendment to the Constitution, and arrest or detention without a warrant or without reason
violates Articles 7, 8, and 9 of the United Nations' Universal Declaration of Human
Rights (Fathers, 1787; Cassin & Humphreys, 1948). This position falls within the Rights Approach of
duty-bound ethics, as influenced by Immanuel Kant and John Locke, in that the ethical rights of
the individual must be upheld regardless of any benefit to the many.
The rights of the individual should always come first. That is a concrete position that I
will stand by. As wonderful as predictive policing and interpreting predispositions for criminality
may sound, they ignore a vital part of what we are as people: free. The importance of freedom
cannot be overstated, because it is at the very root of this issue, and the inability to incorporate
that concept into this technology is its fatal flaw. Our justice system isn't perfect, nor are our law enforcement
agencies and especially our governments. But they are ours. Part of living in a free, democratic
society is having to put up with others who exercise their free will in ways that run counter to
good ethical and moral standards. That is the price we all have to pay. This technology should be
banned because, first and foremost, it violates our most sacred document and the
instrument that provides us all with the protections we enjoy, the Constitution, as well as the Universal
Declaration of Human Rights, which the rest of the world enjoys as a blanket protection (Fathers,
1787; Cassin & Humphreys, 1948). The technology ignores basic investigative practices and
could potentially eliminate an informed jury of our peers, interpreting the evidence through the
filter of public opinion. While the technology may be 89.51% accurate, the margin of error is too
great for any agency or government to rely upon completely without fear of catching up an
innocent person in its net.
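The danger hidden in that remaining margin can be made concrete with simple arithmetic: because the vast majority of any population is innocent, even a roughly 10% error rate flags far more innocent people than guilty ones when applied as a blanket screen. The following back-of-the-envelope sketch uses a population size and crime prevalence chosen purely for illustration, and treats the 89.5% figure as the rate of correct classification for both groups.

```python
# Back-of-the-envelope: what an 89.5% accuracy rate means when screening
# an entire population. Population and prevalence are assumed values.
population = 1_000_000
prevalence = 0.01          # assume 1% of people are actually criminals
accuracy = 0.895           # treat 89.5% as correct-classification rate

criminals = population * prevalence          # 10,000 people
innocents = population - criminals           # 990,000 people

true_flags = criminals * accuracy            # criminals correctly flagged
false_flags = innocents * (1 - accuracy)     # innocents wrongly flagged

share_innocent = false_flags / (true_flags + false_flags)
print(f"innocents wrongly flagged: {false_flags:,.0f}")
print(f"share of flagged people who are innocent: {share_innocent:.1%}")
```

Under these assumptions, roughly 104,000 innocent people are flagged against fewer than 9,000 actual criminals, so more than nine out of ten people the system points at have done nothing wrong. The exact numbers shift with the assumed prevalence, but the imbalance is structural whenever crime is rare.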
This technology is truly amazing. We have come to the point where we can teach a
computer to ‘view’ a person and evaluate their features, interpreting that into an array of binary
code which in turn is evaluated for character flaws which could be detrimental to the peace and
harmony of a community or society. But what if it's wrong? What about the 10.5% that hasn't
been addressed (Hutson, 2018)? As amazing as this technology is, it isn't human, and that is its
chief flaw: it can't intuit, or ponder, or ask a question, or get
to the bottom of something. It lacks the unspoken, primal instinct that exists at
the bottom of inquiries of this nature. Some people may dismiss this idea as ridiculous, insisting
that we are an enlightened people in an advanced civilization. That is true, but study after study
has shown that there is more to us than logic and reason, more than can be quantified by an
algorithm and more than can be interpreted by a computer program. We all have a semblance of
those instincts from so long ago, the instincts that kept us alive and the instincts that,
unconsciously, warn us of danger that could threaten our survival. It is our anima that sets us
above pure logic and it is our anima that makes programs such as interpretive evaluation for
criminality invalid. Banning the technology ultimately protects the citizenry from arbitrary
detentions and arrests by placing the power of prosecution firmly in the hands of the judicial
process. The technology, while potentially viable, is best tested and improved over
time until its margin of error is reduced to at least that of standard prosecutorial methods;
according to a Prison Legal News article of February 2014, that figure is around 3.78% (Levitt
& Schmidt, 2014). We still have a way to go. In terms of ethics and rights, implementing
technology such as this will not protect society. Instead, we should focus our efforts on other
investigatory processes that expedite the capture of suspects while minimizing the margin of
error.
References
Cassin, R., & Humphrey, J. P. (1948, December 10). United Nations. Retrieved from Universal
Declaration of Human Rights.
Fathers, F. (1787, September 17). U.S. Constitution. Retrieved from National Archives:
https://www.archives.gov/founding-docs/constitution-transcript
Gu, S. (2017, November 24). Automated Inference on Criminality Using Face Images. Retrieved
face-images-aec51c312cd0
Hutson, M. (2018, January 17). A Law Enforcement A.I. Is No More or Less Biased Than People.
Retrieved from Psychology Today:
https://www.psychologytoday.com/blog/psyched/201801/law-enforcement-ai-is-no-
more-or-less-biased-people
McMaster. (2018). Xiaolin Wu. Retrieved from McMaster University:
http://www.ece.mcmaster.ca/~xwu/
Revell, T. (2016, December 10). Concerns as face recognition tech used to 'identify criminals'.
Retrieved from New Scientist:
as-face-recognition-tech-used-to-identify-criminals/
RT. (2016, November 26). Return of physiognomy? Facial recognition study says it can identify
recognition-criminal-china/
SCMP. (2018, January 29). Japan trials AI-assisted predictive policing before 2020 Tokyo
Olympics. Retrieved from South China Morning Post:
http://www.scmp.com/news/asia/east-asia/article/2130980/japan-trials-ai-assisted-
predictive-policing-2020-tokyo-olympics
Sullivan, B. (2016, November 18). A New Program Judges If You're a Criminal From Your
Facial Features. Retrieved from Motherboard:
https://motherboard.vice.com/en_us/article/d7ykmw/new-program-decides-criminality-
from-facial-features
TAMU. (2018). Xi Zhang, Professor, IEEE Fellow. Retrieved from Texas A & M University:
http://www.ece.tamu.edu/~xizhang/
Wu, X., & Zhang, X. (2016). Automated Inference on Criminality Using Face Images. Ithaca:
arXiv.
Wu, X., & Zhang, X. (2017). Responses to Critiques on Machine Learning of Criminality
Perceptions. Ithaca: arXiv.