Google's landmark DeepMind Health trial violated patient data law: UK watchdog
Bloomberg: A landmark medical trial involving Alphabet Inc.’s DeepMind artificial intelligence division violated British data protection laws, the UK’s top privacy watchdog ruled.
The Information Commissioner’s Office said that the National Health Service hospital that conducted the trial with DeepMind improperly shared 1.6 million patient records with the tech company, failing to inform patients that their data would be used to test a new mobile app.
“Patients would not have reasonably expected their information to have been used in this way, and the Trust could have and should have been far more transparent with patients as to what was happening,” Elizabeth Denham, the Information Commissioner, said in a statement.
The trial, which began in November 2015, was designed to help doctors diagnose acute kidney injuries and did not involve any artificial intelligence. Instead, DeepMind built software, called Streams, around an existing NHS algorithm for identifying patients at risk. Streams was built to crunch patient data, such as blood test results, and to push an alert to medical staff through the app if a patient appeared to be at risk.
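To make the mechanics concrete, here is a minimal illustrative sketch in Python of the kind of rule-based check described. It applies the creatinine-ratio thresholds from NHS England's published acute kidney injury (AKI) algorithm but omits that algorithm's other rules (such as the absolute 48-hour rise criterion and baseline selection); all names here, including the push_alert stub, are hypothetical stand-ins, not DeepMind's actual code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreatinineResult:
    """One serum creatinine reading plus the baseline it is compared against."""
    value_umol_l: float      # current serum creatinine, micromoles per litre
    baseline_umol_l: float   # e.g. lowest value in the past 7 days, per the NHS algorithm

def push_alert(patient_id: str, message: str) -> None:
    # Stand-in for the mobile push notification Streams sends to clinicians.
    print(f"ALERT for patient {patient_id}: {message}")

def aki_stage(result: CreatinineResult) -> Optional[int]:
    """Return an AKI stage (1-3) if the creatinine ratio crosses an NHS threshold, else None."""
    ratio = result.value_umol_l / result.baseline_umol_l
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return None

def check_and_alert(patient_id: str, result: CreatinineResult) -> None:
    stage = aki_stage(result)
    if stage is not None:
        push_alert(patient_id, f"possible acute kidney injury, stage {stage}")

# Example: creatinine has doubled against baseline, so a stage 2 alert fires.
check_and_alert("NHS-0001", CreatinineResult(value_umol_l=180.0, baseline_umol_l=90.0))
```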
DeepMind, which Google acquired in 2014 for over $400 million, said it needed access to partial medical records of all the Royal Free NHS Hospital Trust’s patients going back five years, even those who were no longer being treated at the hospital. The Royal Free shared this information with DeepMind under a legal basis called “direct care,” meaning the data was shared in order to improve patient treatment. Under this doctrine, medical professionals do not need explicit consent to share patient data.
After a 13-month investigation, the ICO found “several shortcomings” in how the hospital handled patient data and, drawing on advice from the Department of Health’s own chief adviser on patient data, ruled that “direct care” was not a proper legal basis for sharing this information and that the hospital should have asked patients’ permission before sharing their data with DeepMind.
It concluded that DeepMind’s trial was primarily testing whether the mobile app itself worked properly and whether medical staff liked its interface, not trying to improve patient outcomes. In addition, the ICO said the hospital and DeepMind had not adequately explained why DeepMind needed access to so many patient records to test the app.
“The price of innovation does not need to be the erosion of fundamental privacy rights,” Denham said.
The ICO has asked the Royal Free to establish policies ensuring that patients give proper consent before their data is used in any further testing the hospital conducts with DeepMind, and to provide the regulator with evidence within three months that it is now complying with the law. The hospital trust has also agreed to have a third party audit its current data-processing arrangements with DeepMind and report back to the regulator.
In a blog post, DeepMind said it acknowledged it made several mistakes in its work with the Royal Free, and it welcomed “the ICO’s thoughtful resolution of this case, which we hope will guarantee the ongoing safe and legal handling of patient data for Streams.”
“In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health,” DeepMind wrote in the blog.
The company said it had since improved its transparency and oversight, including drawing up a more detailed legal contract with the Royal Free and paying greater attention to making sure patients and the public were aware of its work. It also appointed a nine-member independent review panel to scrutinize its health work and publish recommendations for improvement.
The Royal Free said in a statement that it accepted the ICO’s findings and has “already made good progress to address the areas where they have concerns.” The hospital said that it wanted to “reassure patients that their information has been in our control at all times and has never been used for anything other than delivering patient care or ensuring their safety.”
Fiona Caldicott, the National Data Guardian, whose office provided the advice on the “direct care” legal doctrine to the ICO, said in a statement Monday that she is “afraid that a laudable aim – in this case developing and testing life-saving technology – is not enough legally to allow the sharing of data that identifies people without asking them first.”
Outside health care experts took to social media to applaud the ICO’s decision and DeepMind’s response.
“Good to see @DeepMindAI admit they got things wrong and discuss how they are learning lessons,” tweeted Nicola Perrin, who heads Understanding Patient Data, an independent task force looking at patient privacy issues that’s housed at the London-based medical research charity the Wellcome Trust. Harry Evans, a health policy researcher at medical charity The King’s Fund, tweeted that the ICO had sent an important message that “innovation and data protection are not at odds with one another. Response seems constructive, not punitive.”
DeepMind, which was founded in 2010, is best known for having created artificial intelligence able to beat the world’s best human players at the strategy game Go. The achievement is considered a major milestone in computer science.