This is a note of an interview with Matt Fenech of Future Advocacy, a UK think tank. We interviewed Matt in relation to a scenario planning project the Strategy Unit is currently undertaking for a programme that is seeking to transform how population health is managed at a local level, improving patient outcomes and making more efficient use of resource.

Matt worked as an NHS doctor for 10 years. This experience helped him understand the value of large-scale, ambitious advocacy projects in improving societal well-being. He now works on Future Advocacy's Artificial Intelligence project[1], advocating for policies that will maximise its benefits while mitigating its potential risks.

We asked Matt about developments in Artificial Intelligence (AI) that might support the delivery of better care and better outcomes in a local health system. We were also interested in the wider factors that might impact the diffusion of these developments, not least their acceptability to patients.

Matt highlighted two solutions that already exist and which give a flavour of the potential impact of AI on local health services. These related to clinical diagnostics and to the triaging of patient needs.

  1. Automated Image Processing[2]

Diagnostic capacity is often a cause of delay in starting treatment; there is a shortage of radiologists in the NHS; and the interpretation of diagnostic tests is unavoidably subject to human error. Solutions already exist that can interpret images from investigations such as X-rays and CT scans, as well as from more basic imaging available to GPs. Matt says that early research evidence suggests that these automated processes, which draw on thousands of comparator images, outperform human doctors and make fewer errors. In the diagnosis of melanoma, for example, an initial assessment can be made by the GP without onward referral (and the associated delay, inconvenience and cost). Matt says

You can see it as freeing up capacity for hospital radiologists.

He highlights two key enablers for diffusing this technology: first, the need to train GPs in the limitations of algorithmic decision making; and, second, establishing the acceptability to patients both of the use of AI in decisions concerning their health and of the use of their images to support machine learning. A recent Information Commissioner ruling against the Royal Free NHS Foundation Trust’s work with Google DeepMind related to a failure to secure adequate patient consent.[3]

  2. Babylon Triage App[4]

Babylon is an example of a health tech start-up. Its triage app currently uses Chatbot technology that is becoming more advanced with the use of AI, particularly natural language processing. Matt is confident that such technologies will continue to improve, highlighting how far common applications such as Apple’s Siri and Amazon’s Alexa have come. He says that

When paired with diagnostic models there is huge potential for this technology to take off in the health sector.

The Babylon app, as an example, currently offers two levels of service: a free service that enables patients to chat to the app and get advice; and a paid-for service that adds direct contact with a doctor or specialist.

Matt sees real potential for this kind of technology to free up clinician capacity and enable speedy referrals to the appropriate service or Multi-Disciplinary Team, and he is confident that it will be able to play a part in a transformed health system over the next 15 years.

As with the use of AI in diagnostics, however, Matt is concerned to ensure that patients are adequately engaged in understanding and consenting to the use of this technology. He says,

We can’t assume that people will be willing to share their data. They probably will but, as in clinical trials, they need to be asked!

Future Advocacy is working to secure funding for a project that will explore the issues around patient acceptability through round table discussions and patient engagement.

There are clearly real opportunities and real risks in relation to AI for local health systems looking to transform how they provide care. It is possible to paint a picture of a rosy future in which AI provides solutions to workforce shortages (enabling clinicians to target their skills where they are most valuable), improves the speed and convenience of care for patients, and supports the realisation of improved outcomes and more sustainable services. Without adequate public and patient engagement, however – or following just one or two high-profile problems with the technology or with data security – this potential could quickly be undermined.

September 2017
