Artificial intelligence: Friend or foe in the fight against COVID-19?

Healthcare solicitor Toby Wilton discusses the use of artificial intelligence technology in the fight against COVID-19 and the legal questions it raises for patient safety.

Toby is a solicitor who works in the clinical negligence department.
The COVID-19 pandemic represents a huge challenge for governments and medical staff the world over. As governments attempt to reduce the spread of the disease and maximise the capacity of health systems to treat victims, they are increasingly turning to technology and artificial intelligence (AI) for help. But how realistic are these hopes, and what legal questions do they present?
 

Technology and COVID-19

In the USA and South Korea, scientists are reportedly using machine learning to investigate potential treatments for the disease. In the UK, NHS Digital has described an “increase in use of NHS Digital Tech” since the outbreak began and, on 20 April 2020, announced the start of trials of a machine learning system to help UK hospitals plan and manage COVID-19 resources.
 
On 23 April 2020, it was reported that Bolton NHS Foundation Trust was deploying software that monitors how COVID-19 patients progress by reviewing daily chest x-rays. Chest x-rays and other medical images would normally have to be reviewed by a radiologist, a specialist doctor trained in the interpretation of medical scans and images.
 

Concerns about the use of technology

 
There is no doubt that the COVID-19 outbreak has put, and will continue to put, the NHS under a huge amount of pressure, and that technology can help. For example, whilst stressing that the particular software being used at Bolton NHS Foundation Trust will not replace clinicians, those involved have expressed hope that it will help staff as “another valuable member of the team”.
 
If this or other technology can help NHS staff and hospitals better care for patients, then this is certainly to be welcomed, but the use of technology like this in healthcare does pose some questions, not least how effective it will be and who would be accountable if things were to go wrong.
 
Efficacy
Focusing on the example of software being used to review x-rays, the idea of using artificial intelligence and machine learning to review medical images is not new. In fact, research has suggested that computer programmes can be used to review x-rays and other medical imaging more accurately than human radiologists. However, a recent study in the British Medical Journal concluded that studies investigating the efficacy of such programmes featured “arguably exaggerated claims” that such systems were equivalent to or better than human doctors.
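To illustrate what such a programme involves, the sketch below shows, in Python, the bare bones of an image-classification model producing a probability score for a chest x-ray. It is an assumed illustration only: the DenseNet backbone, the untrained weights and the random input are placeholders, not the Bolton software or any system described above.

    # A minimal sketch of how an image-classification model can score
    # a chest x-ray for a condition. All names are illustrative
    # assumptions; this is not any deployed clinical system.
    import torch
    import torchvision.models as models

    # DenseNet-121 is a backbone commonly seen in published chest
    # x-ray research; the weights here are untrained placeholders.
    model = models.densenet121(weights=None)
    model.classifier = torch.nn.Linear(model.classifier.in_features, 1)
    model.eval()

    # A 224x224 RGB tensor stands in for a preprocessed chest x-ray.
    xray = torch.rand(1, 3, 224, 224)

    with torch.no_grad():
        logit = model(xray)
        probability = torch.sigmoid(logit).item()

    print(f"Estimated probability of abnormality: {probability:.2f}")

The point to note is that the system's entire output is a single number. Whether that number deserves the same weight as a radiologist's opinion depends entirely on how the model was trained and validated, which is precisely the efficacy question raised above.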
 
Whilst technology is being developed to help fight this new disease at record-breaking pace, this urgency may mean it is even more difficult to know if, or how well, it works.
 
Accountability
Already, questions are being asked about governments’ handling of this crisis. In the healthcare setting, acknowledging the potential for serious consequences when things do go wrong, our legal system provides for patients injured as a result of negligent medical treatment to claim compensation, usually from the NHS Trust responsible. If a computer programme made a mistake and, for example, wrongly suggested that a patient did not have COVID-19, so that they went without life-saving treatment, who would be responsible? The doctor using the software? The hospital? The company that manufactured the software? Would that patient have properly consented to being treated in this way? Currently, our legal system does not have all of the answers to these questions.
 
More troubling still, we may not always know if, or how, these technologies are to blame. The code behind such software is not transparent, and is often protected under intellectual property law. Furthermore, a common problem in the development of so-called artificial intelligence is the ‘Black Box Problem’: human beings do not always understand why algorithms make the decisions they do. This could make investigating a suspected misdiagnosis by an artificial intelligence programme extremely difficult.
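A toy example makes this concrete. Even with complete access to a model's code and parameters, as in the assumed sketch below (far smaller than any clinical system), its decision reduces to arithmetic over hundreds or millions of learned weights, with no human-readable reasons attached.

    # A minimal illustration of the 'Black Box Problem': the model is
    # fully inspectable, yet its prediction carries no stated reasons.
    import torch

    torch.manual_seed(0)
    model = torch.nn.Sequential(
        torch.nn.Linear(10, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 1),
    )

    # Placeholder inputs standing in for a patient's clinical data.
    patient_features = torch.rand(1, 10)
    score = torch.sigmoid(model(patient_features)).item()

    total_weights = sum(p.numel() for p in model.parameters())
    print(f"Prediction {score:.2f}, produced by {total_weights} weights")
    # Nothing in those weights says *why* this patient scored as they
    # did, which is exactly what an investigation would need to know.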
 
Clearly, this is a crisis that is changing all the time, and we should use technology to help us fight this disease. But it is important that any new technology is used with care, and only if its use can be properly assessed by the legal system, as any other medical treatment would be.
