
Artificial intelligence (AI) in healthcare and research

OVERVIEW
• AI is being used or trialled for a range of healthcare and research purposes, including detection of disease, management of chronic conditions, delivery of health services, and drug discovery.
• AI has the potential to help address important health challenges, but might be limited by the quality of available health data, and by the inability of AI to display some human characteristics.
• The use of AI raises ethical issues, including: the potential for AI to make erroneous decisions; the question of who is responsible when AI is used to support decision-making; difficulties in validating the outputs of AI systems; inherent biases in the data used to train AI systems; ensuring the protection of potentially sensitive data; securing public trust in the development and use of AI technologies; effects on people’s sense of dignity and social isolation in care situations; effects on the roles and skill-requirements of healthcare professionals; and the potential for AI to be used for malicious purposes.
• A key challenge will be ensuring that AI is developed and used in a way that is transparent and compatible with the public interest, whilst stimulating and driving innovation in the sector.

WHAT IS AI?
There is no universally agreed definition of AI. The term broadly refers to computing technologies that resemble processes associated with human intelligence, such as reasoning, learning and adaptation, sensory understanding, and interaction.1 Currently, most applications of AI are narrow, in that they are only able to carry out specific tasks or solve pre-defined problems.2

AI works in a range of ways, drawing on principles and tools, including from maths, logic, and biology. An important feature of contemporary AI technologies is that they are increasingly able to make sense of varied and unstructured kinds of data, such as natural language text and images. Machine-learning has been the most successful type of AI in recent years, and is the underlying approach of many of the applications currently in use.3 Rather than following pre-programmed instructions, machine-learning allows systems to discover patterns and derive their own rules when they are presented with data and new experiences.4
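The contrast drawn above between pre-programmed instructions and learned rules can be made concrete with a small sketch. Everything below is invented for illustration and is not taken from the briefing: the program is never given a diagnostic rule, but derives its own threshold from labelled examples.

```python
# Minimal illustration of the distinction drawn above: instead of being given
# a rule, the learner derives a decision threshold from labelled examples.
# The data, function name, and task are hypothetical.

def learn_threshold(values, labels):
    """Pick the cut-off on a single measurement that best separates the labels."""
    best_threshold, best_correct = None, -1
    for t in sorted(set(values)):
        # Count how many examples the rule "value >= t means positive" gets right.
        correct = sum((v >= t) == bool(l) for v, l in zip(values, labels))
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

# Toy 'training data': a measurement and whether a condition was present.
measurements = [1.2, 2.3, 3.1, 4.8, 5.5, 6.0]
condition =    [0,   0,   0,   1,   1,   1]

rule = learn_threshold(measurements, condition)
print(rule)  # → 4.8: the boundary discovered from the data, not programmed in
```

Real machine-learning systems fit far richer models to far larger datasets, but the principle is the same: the rule is an output of the training process rather than an input to it.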

RECENT INTEREST IN AI
AI is not new, but there have been rapid advances in the field in recent years. This has in part been enabled by developments in computing power and the huge volumes of digital data that are now generated.5 A wide range of applications of AI are now being explored with considerable public and private investment and interest. The UK Government announced its ambition to make the UK a world leader in AI and data technologies in its 2017 Industrial Strategy. In April 2018, a £1bn AI sector deal between UK Government and industry was announced, including £300 million towards AI research.6

AI is lauded as having the potential to help address important health challenges, such as meeting the care needs of an ageing population.

Major technology companies - including Google, Microsoft, and IBM - are investing in the development of AI for healthcare and research. The number of AI start-up companies has also been steadily increasing.7 There are several UK-based companies, some of which have been set up in collaboration with UK universities and hospitals. Partnerships have been formed between NHS providers and AI developers such as IBM, DeepMind, Babylon Health, and Ultromics.

Such partnerships have attracted controversy, and wider concerns about AI have been the focus of several inquiries and initiatives within industry, and medical and policy communities (see Box 1).

BOX 1. EXAMPLES OF INQUIRIES AND INITIATIVES ON AI


• UK Government Centre for Data Ethics and Innovation – announced in January 2018 to advise on safe, ethical, and innovative uses of data-driven technologies.8
• Ada Lovelace Institute – the Nuffield Foundation announced it will set up the Institute by the end of 2018 to examine ethical and social issues arising from the use of data, algorithms, and AI, ensuring they are harnessed for social well-being.9
• Partnership on AI – a platform for discussion and engagement around AI founded by Amazon, Apple, DeepMind, Facebook, Google, IBM, and Microsoft.10
• IEEE – launched a Global Initiative on Ethics of Autonomous and Intelligent Systems in 2016.11
• United Nations Interregional Crime and Justice Research Institute – set up a programme on Artificial Intelligence and Robotics in 2015.12
• Asilomar AI Principles – developed in 2017 by the Future of Life Institute (US) to guide AI research and application, and signed by over 3,800 researchers and others working in AI and robotics around the world.13
• Reports on AI have been published by the House of Lords Select Committee on Artificial Intelligence,5 the Royal Society,3 Reform,14 Future Advocacy and Wellcome,15 Nesta,16 and the European Group on Ethics in Science and New Technologies.17 A further report is expected from the House of Commons Science and Technology Select Committee.18

Nuffield Council on Bioethics 2


APPLICATIONS OF AI IN HEALTHCARE AND RESEARCH

HEALTHCARE ORGANISATION

AI has the potential to be used in planning and resource allocation in health and social care services. For example, the IBM Watson Care Manager system is being piloted by Harrow Council with the aim of improving cost efficiency. It matches individuals with a care provider that meets their needs, within their allocated care budget. It also designs individual care plans, and claims to offer insights for more effective use of care management resources.19

AI is also being used with the aim of improving patient experience. Alder Hey Children’s Hospital in Liverpool is working with IBM Watson to create a ‘cognitive hospital’, which will include an app to facilitate interactions with patients. The app aims to identify patient anxieties before a visit, provide information on demand, and equip clinicians with information to help them to deliver appropriate treatments.20

MEDICAL RESEARCH

AI can be used to analyse and identify patterns in large and complex datasets faster and more precisely than has previously been possible.21 It can also be used to search the scientific literature for relevant studies, and to combine different kinds of data; for example, to aid drug discovery.22 The Institute of Cancer Research’s canSAR database combines genetic and clinical data from patients with information from scientific research, and uses AI to make predictions about new targets for cancer drugs.23 Researchers have developed an AI ‘robot scientist’ called Eve which is designed to make the process of drug discovery faster and more economical.24 AI systems used in healthcare could also be valuable for medical research by helping to match suitable patients to clinical studies.25

CLINICAL CARE

AI has the potential to aid the diagnosis of disease and is currently being trialled for this purpose in some UK hospitals. Using AI to analyse clinical data, research publications, and professional guidelines could also help to inform decisions about treatment.26

Possible uses of AI in clinical care include:

• Medical imaging – medical scans have been systematically collected and stored for some time and are readily available to train AI systems.27 AI could reduce the cost and time involved in analysing scans, potentially allowing more scans to be taken to better target treatment.5 AI has shown promising results in detecting conditions such as pneumonia, breast and skin cancers, and eye diseases.28
• Echocardiography – the Ultromics system, trialled at John Radcliffe Hospital in Oxford, uses AI to analyse echocardiography scans that detect patterns of heartbeats and diagnose coronary heart disease.29
• Screening for neurological conditions – AI tools are being developed that analyse speech patterns to predict psychotic episodes and identify and monitor symptoms of neurological conditions such as Parkinson’s disease.30
• Surgery – robotic tools controlled by AI have been used in research to carry out specific tasks in keyhole surgery, such as tying knots to close wounds.31

PATIENT AND CONSUMER-FACING APPLICATIONS

Several apps that use AI to offer personalised health assessments and home care advice are currently on the market. The app Ada Health Companion uses AI to operate a chat-bot, which combines information about symptoms from the user with other information to offer possible diagnoses.32 GP at Hand, a similar app developed by Babylon Health, is currently being trialled by a group of NHS surgeries in London.33

Information tools or chat-bots driven by AI are being used to help with the management of chronic medical conditions. For example, the Arthritis Virtual Assistant developed by IBM for Arthritis Research UK is learning through interactions with patients to provide personalised information and advice concerning medicines, diet, and exercise.34 Government-funded and commercial initiatives are exploring ways in which AI could be used to power robotic systems and apps to support people living at home with conditions such as early stage dementia,
potentially reducing demands on human care workers and family carers.35

AI apps that monitor and support patient adherence to prescribed medication and treatment have been trialled with promising results, for example, in patients with tuberculosis.36 Other tools, such as Sentrian, use AI to analyse information collected by sensors worn by patients at home. The aim is to detect signs of deterioration to enable early intervention and prevent hospital admissions.37

PUBLIC HEALTH

AI has the potential to be used to aid early detection of infectious disease outbreaks and sources of epidemics, such as water contamination.38 AI has also been used to predict adverse drug reactions, which are estimated to cause up to 6.5 per cent of hospital admissions in the UK.39
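The briefing does not describe how remote-monitoring tools such as Sentrian work internally. As a rough, hypothetical sketch of the kind of analysis involved, the example below flags sensor readings that drift sharply away from a patient’s recent baseline; the function, thresholds, and data are all invented for illustration.

```python
# Hypothetical sketch (not any vendor's actual method) of deterioration
# detection from home sensor data: alert when a new reading deviates from
# the rolling baseline of recent readings by more than a set tolerance.
from statistics import mean, stdev

def flag_deterioration(readings, window=5, tolerance=2.0):
    """Return indices of readings more than `tolerance` standard deviations
    away from the mean of the preceding `window` readings."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        spread = stdev(baseline) or 1e-9  # guard against flat baselines
        if abs(readings[i] - mean(baseline)) / spread > tolerance:
            alerts.append(i)
    return alerts

# Toy series: resting heart rate, stable for a week, then rising sharply.
heart_rate = [72, 74, 71, 73, 72, 73, 72, 95, 96, 97]
print(flag_deterioration(heart_rate))  # → [7]: the first abnormal reading
```

A deployed system would combine many signals and learned (rather than fixed) thresholds, but the underlying idea of comparing new data against a personal baseline is the same.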

LIMITS OF AI

AI depends on digital data, so inconsistencies in the availability and quality of data restrict the potential of AI. Also, significant computing power is required for the analysis of large and complex data sets. While many are enthusiastic about the possible uses of AI in the NHS, others point to the practical challenges, such as the fact that medical records are not consistently digitised across the NHS, and the lack of interoperability and standardisation in NHS IT systems, digital record keeping, and data labelling.5 There are questions about the extent to which patients and doctors are comfortable with digital sharing of personal health data.40

Humans have attributes that AI systems might not be able to authentically possess, such as compassion.41 Clinical practice often involves complex judgments and abilities that AI currently is unable to replicate, such as contextual knowledge and the ability to read social cues.16 There is also debate about whether some human knowledge is tacit and cannot be taught.42 Claims that AI will be able to display autonomy have been questioned on the grounds that this is a property essential to being human and by definition cannot be held by a machine.17

ETHICAL AND SOCIAL ISSUES


Many ethical and social issues raised by AI overlap with those raised by data use; automation; the reliance on technologies more broadly; and issues that arise with the use of assistive technologies and ‘telehealth’.

RELIABILITY AND SAFETY

Reliability and safety are key issues where AI is used to control equipment, deliver treatment, or make decisions in healthcare. AI could make errors and, if an error is difficult to detect or has knock-on effects, this could have serious implications.43 For example, in a 2015 clinical trial, an AI app was used to predict which patients were likely to develop complications following pneumonia, and therefore should be hospitalised. This app erroneously instructed doctors to send home patients with asthma due to its inability to take contextual information into account.44

The performance of symptom checker apps using AI has been questioned. For example, it has been found that recommendations from apps might be overly cautious, potentially increasing demand for unnecessary tests and treatments.16

TRANSPARENCY AND ACCOUNTABILITY

It can be difficult or impossible to determine the underlying logic that generates the outputs produced by AI.45 Some AI systems are proprietary and deliberately kept secret, while others are simply too complex for a human to understand.46 Machine-learning technologies can be particularly opaque because of the way they continuously tweak their own parameters and rules as they learn.47 This creates problems for validating the outputs of AI systems, and for identifying errors or biases in the data.
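One way an app like the pneumonia predictor described above can go wrong is by learning a pattern that reflects how past patients were treated rather than their true risk. The sketch below is hypothetical (the dataset and numbers are invented): mortality appears lower among patients with a risk factor only because, in the fictional historical data, those patients were routinely escalated to intensive treatment.

```python
# Hypothetical illustration of a confounder hidden in training data.
# (has_risk_factor, died) pairs from an invented historical dataset in which
# flagged patients always received aggressive treatment, masking their risk.
records = [(True, False)] * 95 + [(True, True)] * 5 \
        + [(False, False)] * 85 + [(False, True)] * 15

def death_rate(rows, risk_factor):
    """Observed mortality among patients with or without the risk factor."""
    outcomes = [died for has, died in rows if has == risk_factor]
    return sum(outcomes) / len(outcomes)

# A naive model fitted to raw outcomes would rank flagged patients as
# LOWER risk, and recommend sending them home first.
print(death_rate(records, True))   # → 0.05 for the flagged group
print(death_rate(records, False))  # → 0.15 for everyone else
```

The learned association is accurate about the historical data yet dangerously wrong as a treatment policy, which is why validating outputs and inspecting training data matter.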



The new EU General Data Protection Regulation (GDPR) states that data subjects have the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. It further states that information provided to individuals when data about them are used should include “the existence of automated decision-making, (...) meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”.48 However, the scope and content of these restrictions - for example, whether and how AI can be intelligible - and how they will apply in the UK, remain uncertain and contested.49 Related questions include who is accountable for decisions made by AI and how anyone harmed by the use of AI can seek redress.3

DATA BIAS, FAIRNESS, AND EQUITY

Although AI applications have the potential to reduce human bias and error, they can also reflect and reinforce biases in the data used to train them.50 Concerns have been raised about the potential of AI to lead to discrimination in ways that may be hidden or which may not align with legally protected characteristics, such as gender, ethnicity, disability, and age.51 The House of Lords Select Committee on AI has cautioned that datasets used to train AI systems are often poorly representative of the wider population and, as a result, could make unfair decisions that reflect wider prejudices in society. The Committee also found that biases can be embedded in the algorithms themselves, reflecting the beliefs and prejudices of AI developers.52 Several commentators have called for increased diversity among developers to help address this issue.53

The benefits of AI in healthcare might not be evenly distributed. AI might work less well where data are scarce or more difficult to collect or render digitally.54 This could affect people with rare medical conditions, or others who are underrepresented in clinical trials and research data, such as Black, Asian, and minority ethnic populations.55

TRUST

The collaboration between DeepMind and the Royal Free Hospital in London led to public debate about commercial companies being given access to patient data.56 Commentators have warned that there could be a public backlash against AI if people feel unable to trust that the technologies are being developed in the public interest.57

At a practical level, both patients and healthcare professionals will need to be able to trust AI systems if they are to be implemented successfully in healthcare.58 Clinical trials of IBM’s Watson Oncology, a tool used in cancer diagnosis, were reportedly halted in some clinics as doctors outside the US did not have confidence in its recommendations, and felt that the model reflected an American-specific approach to cancer treatment.59

EFFECTS ON PATIENTS

AI health apps have the potential to empower people to evaluate their own symptoms and care for themselves when possible. AI systems that aim to support people with chronic health conditions or disabilities could increase people’s sense of dignity, independence, and quality of life; and enable people who may otherwise have been admitted to care institutions to stay at home for longer.60 However, concerns have been raised about a loss of human contact and increased social isolation if AI technologies are used to replace staff or family time with patients.61

AI systems could have a negative impact on individual autonomy: for example, if they restrict choices based on calculations about risk or what is in the best interests of the user.62 If AI systems are used to make a diagnosis or devise a treatment plan, but the healthcare professional is unable to explain how these were arrived at, this could be seen as restricting the patient’s right to make free, informed decisions about their health.63 Applications that aim to imitate a human companion or carer raise the possibility that the user will be unable to judge whether they are communicating with a real person or with technology. This could be experienced as a form of deception or fraud.64

EFFECTS ON HEALTHCARE PROFESSIONALS

Healthcare professionals may feel that their autonomy and authority are threatened if their expertise is challenged by AI.65 The ethical obligations of healthcare professionals towards individual patients might be affected by the use of AI decision support systems, given these might
be guided by other priorities or interests, such as cost efficiency or wider public health concerns.66

As with many new technologies, the introduction of AI is likely to mean the skills and expertise required of healthcare professionals will change. In some areas, AI could enable automation of tasks that have previously been carried out by humans.2 This could free up health professionals to spend more time engaging directly with patients. However, there are concerns that the introduction of AI systems might be used to justify the employment of less skilled staff.67 This could be problematic if the technology fails and staff are not able to recognise errors or carry out necessary tasks without computer guidance. A related concern is that AI could make healthcare professionals complacent, and less likely to check results and challenge errors.68

DATA PRIVACY AND SECURITY

AI applications in healthcare make use of data that many would consider to be sensitive and private. These are subject to legal controls.69 However, other kinds of data that are not obviously about health status, such as social media activity and internet search history, could be used to reveal information about the health status of the user and those around them. The Nuffield Council on Bioethics has suggested that initiatives using data that raise privacy concerns should go beyond compliance with the law to take account of people’s expectations about how their data will be used.70

AI could be used to detect cyber-attacks and protect healthcare computer systems. However, there is the potential for AI systems to be hacked to gain access to sensitive data, or spammed with fake or biased data in ways that might not easily be detectable.71

MALICIOUS USE OF AI

While AI has the potential to be used for good, it could also be used for malicious purposes. For example, there are fears that AI could be used for covert surveillance or screening. AI technologies that analyse motor behaviour (such as the way someone types on a keyboard) and mobility patterns detected by tracking smartphones could reveal information about a person’s health without their knowledge.72 AI could be used to carry out cyber-attacks at a lower financial cost and on a greater scale.73 This has led to calls for governments, researchers, and engineers to reflect on the dual-use nature of AI and prepare for possible malicious uses of AI technologies.73

CHALLENGES FOR GOVERNANCE


AI has applications in fields that are subject to regulation, such as data protection, research, and healthcare. However, AI is developing in a fast-moving and entrepreneurial manner that might challenge these established frameworks. A key question is whether AI should be regulated as a distinct area, or whether different areas of regulation should be reviewed with the possible impact of AI in mind.5

Further challenges include the need to ensure that the way AI is developed and used is transparent, accountable, and compatible with the public interest, and balanced with the desire to drive UK innovation.74 Many have raised the need for researchers, healthcare professionals, and policy-makers to be equipped with the relevant skills and knowledge to evaluate and make the best use of AI.2

THE FUTURE OF AI
In the future, it is likely that AI systems will become more advanced and attain the ability to carry out a wider range of tasks without human control or input. If this comes about, some have suggested that AI systems will need to learn to ‘be ethical’ and to make ethical decisions.75 This is the subject of much philosophical debate, raising questions about whether and how ethical values or principles can ever be coded or learnt by a machine; who, if anyone, should decide on these values; and whether duties that apply to humans can or should apply to machines, or whether new ethical principles might be needed.75



CONCLUSIONS
AI technologies are being used or trialled for a range of purposes in the field of healthcare and research, including detection of disease, management of chronic conditions, delivery of health services, and drug discovery. AI technologies have the potential to help address important health challenges, but might be limited by the quality of available health data, and by the inability of AI to possess some human characteristics, such as compassion.

The use of AI raises a number of ethical and social issues, many of which overlap with issues raised by the use of data and healthcare technologies more broadly. A key challenge for future governance of AI technologies will be ensuring that AI is developed and used in a way that is transparent and compatible with the public interest, whilst stimulating and driving innovation in the sector.

REFERENCES
1 See, for example, Engineering and Physical Sciences Research Council, Artificial intelligence technologies.
2 US National Science and Technology Council (2016) Preparing for the future of artificial intelligence.
3 Royal Society (2017) Machine learning: the power and promise of computers that learn by example.
4 The development of AI systems usually involves ‘training’ them with data. For an overview of different training models, see Nesta (2015) Machines that learn in the wild: machine learning capabilities, limitations and implications.
5 House of Lords Select Committee on Artificial Intelligence (2018) AI in the UK: ready, willing and able?
6 Department for Business, Energy & Industrial Strategy (2017) Policy paper: industrial strategy: the grand challenges; Gov.uk (26 April 2018) Tech sector backs AI industry with multi-million investment.
7 CBInsights (2017) AI, healthcare & the future of drug pricing: investment activity, market breakdown, AI in clinical trials.
8 Gov.uk (22 November 2017) Autumn budget 2017: 25 things you need to know.
9 Nuffield Foundation (28 March 2018) The Nuffield Foundation announces new £5 million Ada Lovelace Institute.
10 See: https://www.partnershiponai.org/.
11 IEEE (2018) The IEEE Global Initiative on ethics of autonomous and intelligent systems.
12 UNICRI (2017) UNICRI centre for artificial intelligence and robotics.
13 Future of Life Institute (2017) Asilomar AI principles.
14 Reform (2018) Thinking on its own: AI in the NHS.
15 Future Advocacy (2018) Ethical, social, and political challenges of artificial intelligence in health.
16 Nesta (2018) Confronting Dr Robot: creating a people-powered future for AI in health.
17 European Group on Ethics in Science and New Technologies (2018) Artificial intelligence, robotics, and ‘autonomous’ systems.
18 Science and Technology Committee (Commons) (2018) Algorithms in decision-making inquiry.
19 Harrow Council (2016) IBM and harrow council to bring watson care manager to individuals in the UK.
20 Alder Hey Children’s NHS Foundation Trust (2017) Welcome to Alder Hey – the UK’s first cognitive hospital.
21 See, for example, Leung MKK, et al. (2016) Machine learning in genomic medicine: a review of computational problems and data sets Proc IEEE 104: 176-97; Science Magazine (7 July 2017) The AI revolution in science.
22 O’Mara-Eves A, et al. (2015) Using text mining for study identification in systematic reviews: a systematic review of current approaches Syst Rev 4: 5.
23 The Conversation (11 November 2013) Artificial intelligence uses biggest disease database to fight cancer.
24 Williams K, et al. (2015) Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases J R Soc Interface 12: 20141289.
25 Alder Hey Children’s NHS Foundation Trust (2016) Alder Hey children’s hospital set to become UK’s first ‘cognitive’ hospital.
26 Dilsizian SE and Siegel EL (2013) Artificial intelligence in medicine and cardiac imaging Curr Cardiol Rep 16: 441; Future Advocacy (2018) Ethical, social, and political challenges of artificial intelligence in health.
27 Written evidence from Royal College of Radiologists (AIC0146) to the House of Lords Select Committee on Artificial Intelligence; Hainc N, et al. (2017) The bright, artificial intelligence-augmented future of neuroimaging reading Front Neurol 8: 489.
28 Wang D, et al. (2016) Deep learning for identifying metastatic breast cancer arXiv preprint arXiv:160605718; Esteva A, et al. (2017) Dermatologist-level classification of skin cancer with deep neural networks Nature 542: 115; Rajpurkar P, et al. (2017) CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning arXiv preprint arXiv:171105225; Moorfields Eye Hospital (2018) DeepMind Health Q&A.
29 See http://www.ultromics.com/technology/.
30 Bedi G, et al. (2015) Automated analysis of free speech predicts psychosis onset in high-risk youths npj Schizophrenia 1: 15030; IBM Research (5 January 2017) IBM 5 in 5: with AI, our words will be a window into our mental health.
31 Kassahun Y, et al. (2016) Surgical robotics beyond enhanced dexterity instrumentation Int J Comp Ass Rad 11: 553-68.
32 Medical News Bulletin (20 January 2017) Artificial intelligence app Ada: your personal health companion; see also: https://ada.com/.
33 See: https://www.gpathand.nhs.uk/our-nhs-service.
34 IBM press release (13 Mar 2017) Arthritis research UK introduces IBM watson-powered ‘virtual assistant’ to provide information and advice to people with arthritis.
35 See, for example, https://chiron.org.uk/; University of Portsmouth press release (11 September 2014) Meet Rita, she’ll be taking care of you. However, some question the suggestion that assistive technologies could relieve pressures on women in particular, see: Parks JA (2010) Lifting the burden of women’s care work: should robots replace the “human touch”? Hypatia 25: 100-20.
36 Shafner L, et al. (2017) Evaluating the use of an artificial intelligence (AI) platform on mobile devices to measure and support tuberculosis medication adherence.
37 Moore SF, et al. (2018) Harnessing the power of intelligent machines to enhance primary care Br J Gen Pract 68: 6-7.
38 Jacobsmeyer B (2012) Focus: tracking down an epidemic’s source Physics 5: 89; Doshi R, et al. (2017) Tuberculosis control, and the where and why of artificial intelligence ERJ Open Res 3.
39 Pirmohamed M, et al. (2004) Adverse drug reactions as cause of admission to hospital BMJ 329: 15-9; Jamal S, et al. (2017) Predicting neurological adverse drug reactions based on biological, chemical and phenotypic properties of drugs using machine learning models Sci Rep 7: 872.
40 Nuffield Council on Bioethics (2015) The collection, linking and use of data in biomedical research and health care: ethical issues; PHG Foundation (2015) Data sharing to support UK clinical genetics & genomics services; and Reform (2018) Thinking on its own: AI in the NHS.
41 Parks JA (2010) Lifting the burden of women’s care work: should robots replace the “human touch”? Hypatia 25: 100-20.
42 See, for example, Autor D (2014) Polanyi’s paradox and the shape of employment growth, Volume 20485: National Bureau of Economic Research; Carr N (2015) The glass cage: where automation is taking us; and Susskind R and Susskind D (2015) The future of the professions: how technology will transform the work of human experts.
43 Wachter R (2015) The digital doctor: hope, hype and harm at the dawn of medicine’s computer age. An example of risks posed by hard-to-detect software error in healthcare is the Therac-25 scandal in 1985-7, when faulty computerised radiation equipment led to accidental overdoses causing six deaths in Canada and the US, see https://web.stanford.edu/class/cs240/old/sp2014/readings/therac-25.pdf.
44 Caruana R, et al. (2015) Intelligible models for healthcare, in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp1721-30.
45 MIT Technology Review (11 April 2017) The dark secret at the heart of AI.
46 Burrell J (2016) How the machine ‘thinks’ Big Data Soc 3: 1.
47 ibid.
48 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
49 PHG Foundation blog (19 September 2017) The Data Protection Bill: contortionist-like drafting; Wachter S, et al. (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation IDPL 7: 76-99.
50 Dilsizian SE and Siegel EL (2013) Artificial intelligence in medicine and cardiac imaging Curr Cardiol Rep 16: 441; Bird S, et al. (2016) Exploring or exploiting? Social and ethical implications of autonomous experimentation in AI.
51 Bird S, et al. (2016) Exploring or exploiting? Social and ethical implications of autonomous experimentation in AI.
52 House of Lords Select Committee on Artificial Intelligence (2018) AI in the UK: ready, willing and able?; Reform (2018) Thinking on its own: AI in the NHS.
53 See, for example, The New York Times (25 June 2016) Artificial intelligence’s white guy problem; MIT Technology Review (14 February 2018) “We’re in a diversity crisis”.
54 British Academy and Royal Society (2017) Data management and use: governance in the 21st century.
55 See, for example, The Society for Women’s Health Research, The US FDA Office of Women’s Health (2011) Dialogues on diversifying clinical trials; Popejoy AB and Fullerton SM (2016) Genomics is failing on diversity Nature 538: 161; Quartz (10 July 2017) If you’re not a white male, artificial intelligence’s use in healthcare could be dangerous.
56 The Guardian (16 May 2016) Google DeepMind 1.6m patient record deal ‘inappropriate’.
57 The Guardian (3 November 2017) Why we can’t leave AI in the hands of big tech.
58 The Conversation (9 January 2018) People don’t trust AI – here’s how we can change that.
59 STAT (5 September 2017) IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close.
60 Sharkey A and Sharkey N (2012) Granny and the robots: ethical issues in robot care for the elderly Ethics Inf Technol 14: 27-40.
61 ibid.
62 ibid.
63 Mittelstadt B (2017) The doctor will not see you now, in Otto P and Gräf E (Editors) 3TH1CS: a reinvention of ethics in the digital age?
64 Wallach W and Allen C (2008) Moral machines: teaching robots right from wrong.
65 Hamid S (2016) The opportunities and risks of artificial intelligence in medicine and healthcare CUSPE Communications, summer 2016.
66 Cohen IG, Amarasingham R, et al. (2014) The legal and ethical concerns that arise from using complex predictive analytics in health care Health Aff 33: 1139-47.
67 Wachter R (2015) The digital doctor: hope, hype and harm at the dawn of medicine’s computer age.
68 Carr N (2015) The glass cage: where automation is taking us; Towards Data Science (24 June 2017) The dangers of AI in health care: risk homeostasis and automation bias.
69 Nuffield Council on Bioethics (2015) The collection, linking and use of data in biomedical research and healthcare: ethical issues; Information Commissioner’s Office (2018) Guide to the General Data Protection Regulation (GDPR).
70 See https://www.scnsoft.com/case-studies/ibm-qradar-siem-for-a-hospital-with-2000-staff; IBM Security (2015) Securing the healthcare enterprise.
71 Brundage M, et al. (2018) The malicious use of artificial intelligence arXiv preprint arXiv:180207228; Finlayson SG, et al. (2018) Adversarial attacks against medical deep learning systems arXiv preprint arXiv:180405296.
72 Yuste R, et al. (2017) Four ethical priorities for neurotechnologies and AI Nature 551: 159.
73 Brundage M, et al. (2018) The malicious use of artificial intelligence arXiv preprint arXiv:180207228.
74 Powles J and Hodson H (2017) Google DeepMind and healthcare in an age of algorithms Health Technol 7: 351-67.
75 See, for example, Bostrom N and Yudkowsky E (2014) The ethics of artificial intelligence, in Frankish K and Ramsey WM (editors) The Cambridge handbook of artificial intelligence.

Acknowledgments: Thank you to Natalie Banner (Wellcome); Alison Hall, Johan Ordish and Sobia Raza (PHG
Foundation); Brent Mittelstadt (Research Fellow at the Oxford Internet Institute, Turing Fellow at the Alan Turing
Institute); Ben Moody (techUK); and Reema Patel (Nuffield Foundation), for reviewing a draft of this briefing note.

Published by Nuffield Council on Bioethics, 28 Bedford Square, London WC1B 3JS

May 2018
© Nuffield Council on Bioethics 2018

bioethics@nuffieldbioethics.org @Nuffbioethics NuffieldBioethics

www.nuffieldbioethics.org
