AI doctors should improve healthcare, but not at any cost
ANYONE phoning a customer service department is likely to be warned that “your call may be recorded for training purposes”. It sounds banal until you realise that the trainee might be an artificially intelligent voice-recognition system that requires real-world data to learn its trade.
Now imagine visiting the doctor and being told “your medical records may be used for training purposes”. Again, it sounds innocuous – but what if the entity being trained is not a human student but a diagnostic algorithm?
The use of AI in medicine is in its infancy, but looks set to become a routine part of healthcare (see “Artificial Intelligence ushers in the era of superhuman doctors”). That raises important questions for patients and providers alike – although perhaps not the ones you might think of at first.
Consider, for example, safety and accuracy. Both are likely to be prominent patient concerns, but algorithms trained on existing medical data can already match or outperform specialists at some diagnostic tasks. Misdiagnosis contributes to tens of thousands of preventable deaths every year; AI could slash that failure rate.
A more troublesome issue is the question of who controls and owns the data. To make AI diagnosis work will require access to vast quantities of data about symptoms, diagnoses, treatments and outcomes: in other words, medical records.
From an individual perspective, this may seem a non-issue. Most people diagnosed with, say, skin cancer would be happy to donate their data to help others. But what if it were subsequently reused for another purpose? What if their privacy were voided by a leak, a hack or a failure to anonymise it? What if it were used to build a commercial product or service?
“Data collection and analysis is changing so rapidly that systems of governance can’t keep up”
Such questions of propriety and custodianship have been asked about data before – but medical information is uniquely valuable and sensitive. Last week, the UK Information Commissioner’s Office (ICO) reprimanded the Royal Free Foundation NHS Trust over an agreement struck with Google DeepMind (see “Real reform must follow ruling on flawed NHS-DeepMind data deal”). As revealed by New Scientist, the deal gave the AI company access to 1.6 million people’s medical records to develop a monitoring tool for kidney patients: the ICO ruled that, among other shortcomings, patients were not properly informed about the use of their data.
So how should the hospital have proceeded in its dealings with DeepMind? The problem is that no one is entirely sure – and not just in medicine. A report by the Royal Society and the British Academy recently concluded that the collection and analysis of data is changing so rapidly that the UK’s systems of governance cannot keep up, and that a new body is needed to safeguard both trustworthiness and trust.
If such a body is created, it will have a tightrope to walk. Some argue that medical data should be sacrosanct, demanding implausibly tough safeguards on sharing and reuse. Others say patients should be compelled to share their data for the greater good, with refuseniks excluded from any resulting improvements in diagnosis and treatment.
The middle ground is worth seeking out. Nobody should want data to be appropriated by third parties, but nor should anyone want innovation that could advance medical care to be stifled. The best way to ensure neither happens is to provide a clear framework that lets healthcare providers, innovators, investors and, above all, patients have confidence that AI doctors will do no harm, of any kind.
This article appeared in print under the headline “Trust me, I’m an algorithm”