As part of a special series on artificial intelligence (AI), OR Manager is taking a deep dive into the many facets of this new technology and its impact on patient care. In this issue, we conclude the series by continuing the examination of the challenges related to AI that began in last month's issue (July 2019).
The first part of this article considered data and ethical challenges related to AI. This concluding part discusses legal and regulatory questions, as well as user impact.
Legal questions
Jennifer Geetter, JD, a healthcare attorney with McDermott Will & Emery in Washington, DC, says examples of legal issues associated with AI include:
• privacy (discussed in Part 1)
• product liability
• malpractice (the provider followed the AI product's advice and it turned out to be wrong, or disregarded the product's advice when it was sound)
• informed consent
• cybersecurity (hacking into medical devices and manipulating the data).
Both Geetter and Dale Van Demark, JD, who also works at McDermott Will & Emery, say legal issues will become more complex as AI evolves.
Product liability
When AI performs a simple, automated task, the liability issues are the same as for any other technology, Van Demark says. "The technology has been developed by a company, distributed by a company, and cleared or not cleared by the FDA [Food and Drug Administration] for marketing purposes, so it has already gone through potentially a lot of review processes before it gets into the hands of the people who deliver the service," he says. "In that context, it's product liability like any other product liability issue."
Geetter adds that currently AI is intended to supplement or support clinician decision making. "As with any piece of technology, the healthcare providers remain in charge," she says. "It comes down to the functionality of the tool and making sure the tool does what it says it's going to do, as is the case with any technology."
Malpractice
As AI continues to mature, liability issues become more complex. "In the future and not-so-distant future, AI tools will start to perform [more] functions that traditionally have been handled by individuals," Van Demark says. "When you start down that path, the questions of liability get interesting and difficult."
Part of the difficulty is pinpointing how and why a computer reaches a particular decision, which may make it hard for the clinician to respond appropriately. For example, if during surgery AI tells the surgeon that there is a 58% chance that harm will occur, the surgeon, who is probably not an expert in statistics, has to decide whether to stop or move forward.
Those types of decisions don't fit traditional standard of care models. And what if the surgeon fails to listen to what AI says to do and the patient is harmed? Who is liable? That question may be difficult to answer, given that many times it's not clear how the AI system arrived at its decision.
Wendell Wallach, senior advisor to the Hastings Center in Garrison, New York, author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, and chair of technology and ethics studies at the Yale Interdisciplinary Center for Bioethics in New Haven, Connecticut, says key questions include:
• What happens if the doctor disagrees with AI?
• What if AI has a better record than the doctor?
But is AI always right? Danton Char, MD, assistant professor of anesthesiology, perioperative and pain medicine at Stanford University Medical Center in Stanford, California, says another issue is the ability to override AI recommendations when the clinician believes a recommendation is wrong for the patient's clinical situation. Dr Char, who has written about the challenges of AI, notes that electronic health records (EHRs) already make it difficult for clinicians to override alerts that aren't in the patient's best interest.
For example, an EHR may recommend a mammogram even though the patient has had bilateral mastectomies. "When you try and push against it, the quality of evidence you need to provide to show that the recommendation is wrong is often much higher than the quality of evidence used to create the recommendation," he says.
Informed consent
"A rudimentary principle of the US legal system is that once you disclose sufficiently to a purchaser of a product, you're pretty much able to wipe your hands of any liability," says Van Demark, who questions whether the current system is sufficient in the era of AI.
"I'm not convinced the general public and even very knowledgeable clinicians are experienced and educated enough to really understand how these systems work, and to understand the risks associated with them," he says.
Geetter says that an emerging question is whether patients should specifically be told that an AI-enabled tool will be used in connection with their care before they provide consent.
Cybersecurity
Geetter notes that the Health Insurance Portability and Accountability Act of 1996 (HIPAA) addresses security (for example, data integrity and data availability) in addition to privacy. "There are concerns that the data will be corrupted or held in a ransomware attack," she says. These concerns apply to any digital tool, but Geetter says, "The AI overlay is whether the cyber risk would corrupt the learning mechanism itself." The AI platform could then begin learning improperly, with errors proliferating.
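To illustrate that concern in the simplest possible terms, the sketch below is a hypothetical toy example (the records, rates, and model are invented for illustration, not drawn from any real system). It shows how a small number of falsified records fed into a continuously updating estimate can shift what the system "learns" and, by extension, any recommendation built on that estimate.

```python
# Toy illustration (hypothetical) of corrupted data skewing a learning system:
# an online average of observed complication rates drifts once falsified
# records are injected into its data feed.

class OnlineAverage:
    """Keeps a running estimate that is updated with every new record."""
    def __init__(self):
        self.count = 0
        self.estimate = 0.0

    def learn(self, value):
        # Incremental mean: each new record nudges the estimate.
        self.count += 1
        self.estimate += (value - self.estimate) / self.count

if __name__ == "__main__":
    model = OnlineAverage()

    # 1,000 legitimate records: roughly a 5% complication rate
    for i in range(1000):
        model.learn(1.0 if i % 20 == 0 else 0.0)
    print(f"Estimate from clean data: {model.estimate:.3f}")  # ~0.050

    # 100 falsified records injected by an attacker, all marked "complication"
    for _ in range(100):
        model.learn(1.0)
    print(f"Estimate after poisoning: {model.estimate:.3f}")  # ~0.136
```

A real AI platform is far more complex than a running average, but the principle Geetter describes is the same: once the data feeding the learning mechanism is corrupted, the errors compound with every subsequent update.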
Regulatory considerations
How should AI be regulated? How should products using AI be evaluated by the FDA? Although those questions are still being answered, John Glaser, PhD, senior vice president of population health at Cerner in Kansas City, Missouri, points out that evaluation will likely differ from traditional approaches.
"It won't be about whether one algorithm is clinically better than another; it's not like a drug where you compare it to another drug," he says. Instead, the FDA will focus more on good practices in developing algorithms. (For more information about good practices, see the article on p 13.)
The FDA has already cleared several tools using AI algorithms through both De Novo and 510(k) pathways. A review article by Topol says that the FDA approved 12 devices using AI in 2018, compared with just two in 2017. He expresses concern about the relatively small number of published peer-reviewed studies associated with the approvals and says more validation evidence should be required.
But the FDA is taking steps to speed up the AI approval process, beginning with its voluntary precertification (Pre-Cert 1.0) pilot program, which launched in January and targets low- to moderate-risk software as a medical device (SaMD). The program will help determine processes for clearance of first-of-its-kind SaMD.
The nine participating companies selected by the FDA for the pilot will be evaluated on five excellence principles: product quality, patient safety, clinical responsibility, cybersecurity responsibility, and proactive culture. Criteria and key performance indicators will be developed for each principle.
Software products from precertified companies will likely undergo a streamlined review process. For example, a precertified company might be allowed to submit less information in a marketing submission for a new digital health product. Or, the company may not be required to submit a premarket submission, in which case it can immediately launch a product, albeit with postmarket data collection and performance monitoring.
In April, the FDA released a proposed regulatory framework for modifications to AI-based SaMD in the form of a discussion paper for comment. The FDA notes that the AI products approved to date have used "locked" algorithms, which do not adapt or learn each time they are used; instead, the manufacturer provides periodic updates. "Adaptive" or "continuously learning" algorithms update themselves as new data come in, without manual intervention, which is where much of their promise lies.
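To make that distinction concrete, here is a minimal, hypothetical sketch (the model, weights, and case data are invented for illustration, not taken from any FDA submission). It contrasts a "locked" model, whose parameters change only when the manufacturer ships an update, with a continuously learning model that adjusts itself after every case it sees.

```python
# Minimal sketch (hypothetical): a "locked" risk model vs. a continuously
# learning one. Both score a case from two made-up inputs; only the adaptive
# model changes its weights as new outcomes are observed.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class LockedModel:
    """Weights are fixed at release; they change only via a manufacturer update."""
    def __init__(self, weights, bias):
        self.weights, self.bias = list(weights), bias

    def predict_risk(self, features):
        z = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return sigmoid(z)

class AdaptiveModel(LockedModel):
    """Same scoring rule, but weights are nudged after every observed outcome
    (one step of online logistic regression)."""
    def __init__(self, weights, bias, learning_rate=0.1):
        super().__init__(weights, bias)
        self.learning_rate = learning_rate

    def update(self, features, outcome):  # outcome: 1 = harm occurred, 0 = no harm
        error = self.predict_risk(features) - outcome
        self.weights = [w - self.learning_rate * error * x
                        for w, x in zip(self.weights, features)]
        self.bias -= self.learning_rate * error

if __name__ == "__main__":
    random.seed(0)
    locked = LockedModel(weights=[0.8, 0.5], bias=-1.0)
    adaptive = AdaptiveModel(weights=[0.8, 0.5], bias=-1.0)

    # Stream of hypothetical cases: (features, observed outcome)
    cases = [([random.random(), random.random()], random.randint(0, 1))
             for _ in range(200)]
    for features, outcome in cases:
        adaptive.update(features, outcome)   # learns from each case
        # the locked model is never updated in the field

    example = [0.6, 0.4]
    print(f"Locked model risk:   {locked.predict_risk(example):.2f}")
    print(f"Adaptive model risk: {adaptive.predict_risk(example):.2f}")
```

After deployment, the adaptive model's behavior no longer matches what was originally reviewed, which is the gap the FDA's proposed framework is meant to address.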
A press release on the proposed framework notes the goal is to ensure that "ongoing algorithm changes follow prespecified performance objectives and change control plans, use a validation process that ensures improvements to the performance, safety, and effectiveness of the artificial intelligence software, and includes real-world monitoring of performance once the device is on the market to ensure safety and effectiveness are maintained."
A basic question related to legal, ethical, and regulatory concerns about AI is: How should the field be governed? Wallach says many different entities are creating different standards and best practices. "It would be helpful to start underscoring where there's a consensus and where there is not a consensus," he says. "It's very important to have coordination." That could include an international governance coordinating committee (GCC).
A start down the road to a GCC may be the International Congress for the Governance of AI, planned for November 2019. This congress would initiate the creation of a new international mechanism for monitoring AI development and addressing any gaps in oversight.
The congress was suggested by experts who attended a 2-day workshop that was part of the Hastings Center's Control and Responsible Innovation in the Development of Autonomous Machines Project, and then endorsed by representatives of many key organizations during a meeting held at New York University in September 2018.
Some countries have already taken action. In the United Kingdom, the House of Lords Select Committee on Artificial Intelligence published a 2018 report, "AI in the UK: Ready, willing and able?" that recommended reforms to balance innovation and corporate responsibility. The report includes a charter of principles stating that AI should:
• be developed for the common good
• operate on principles of intelligibility and fairness; users must be able to easily understand the terms under which their personal data will be used
• respect rights to privacy
• be grounded in far-reaching changes to education (teachers should use digital resources, and students must learn not only digital skills but also how to develop a critical perspective online)
• never be given the autonomous power to hurt, destroy, or deceive human beings.
Anthony Giddens, a member of the House of Lords committee, wrote in the Washington Post of geopolitical concerns. For example, China already uses digital tools and social media to further political aims, and Giddens says the country is close to assuming the lead in developing AI. The report calls for a summit of global leaders to discuss AI.
Professional associations are weighing in as well. For example, a Royal College of Physicians (RCP) position statement on AI in health states: "The RCP should support regulators, NHS [National Health Service] England, and NHS Digital to adapt to a changing environment, develop guidance principles and appropriate evaluation methods to assess AI, including clinical and patient input where possible and supporting dissemination of their assessment result." The statement also calls on industry to take a "transparent approach" to explaining how the AI was developed.
Wallach says that the rapid pace of technological change and the number of stakeholders involved make "soft" governance better suited to AI than "hard" governance (sidebar above).
User impact
Michael Matheny, MD, MS, MPH, associate professor of medicine, biomedical informatics, and biostatistics at Vanderbilt University Medical Center in Nashville, Tennessee, sees integrating AI into the healthcare delivery culture as a barrier to widespread use. "[For example] you have anesthesia, surgical, and nursing disciplines operating together in the perioperative environment," says Dr Matheny, who cochairs the National Academy of Medicine (NAM) Artificial Intelligence in Healthcare Working Group. "It's important to integrate [AI] tools in such a way that they support clinical workflows and support targeted care in each area and align with key stakeholders in the clinical environment."
Having cameras in surgery can worry clinicians. Daniel Hashimoto, MD, MS, surgical artificial intelligence and innovation fellow at Massachusetts General Hospital in Boston, says clinicians typically have one of two reactions: "The first is, ‘I don't want my cases videotaped because it's going to be used against me [in a malpractice case].' The second is, ‘I want my cases videotaped because it's going to exonerate me. If something bad happens, the video will show that I did everything the right way or everything I could in my ability to take care of this patient.'"
In addition, AI technology has forged ahead of clinicians' expertise in using it. "A barrier to widespread use of AI is [a lack of understanding of] what technology can and can't do right now," Dr Matheny says. "Users need to learn how to critically evaluate these tools in the context of the data that they were derived from, the performance characteristics, and the targets of how they're being used."
AI is intended to support clinicians, but it may create challenges. "There is a risk that integrating AI into clinical workflow could significantly increase the cognitive load facing clinical teams and lead to higher stress, lower efficiency, and poorer clinical care," say the authors of a JAMA opinion article.
"There are problems within AI that still need to be solved," Dr Hashimoto says. "Some of the only ways to solve them aren't just from a technical perspective, but from the interpretation perspective."
How these problems are solved will determine the future path of AI. ✥
Cynthia Saver, MS, RN, is president of CLS Development, Inc, Columbia, Maryland, which provides editorial services to healthcare publications.
References
Bollier D. Artificial intelligence, the great disrupter, coming to terms with AI-driven markets, governance and life. A report on the second annual Aspen Institute roundtable on artificial intelligence. 2018. http://csreports.aspeninstitute.org/documents/AI2017.pdf.
Geetter J S, Van Demark D C. Preparing for the challenge of artificial intelligence. Hospitals & Health Networks. 2017. https://www.mwe.com/insights/preparing-for-the-challenge-of-ai/.
Giddens A. A Magna Carta for the digital age. Washington Post. May 2, 2018.
Maddox T M, Rumsfeld J S, Payne P R O. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31-32.
Muoio D. Roundup: 12 healthcare algorithms cleared by the FDA. MobiHealth News. 2018. https://www.mobihealthnews.com/content/roundup-12-healthcare-algorithms-cleared-fda.
Price W N. Artificial intelligence in health care: Applications and legal issues. 2017. University of Michigan Law School. https://repository.law.umich.edu/cgi/viewcontent.cgi?article=2932&context=articles.
Royal College of Physicians. Artificial intelligence (AI) in health care. 2018. https://www.rcplondon.ac.uk/projects/outputs/artificial-intelligence-ai-health.
Scherer M U. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology. 2016;29(2):354-400.
Topol E. High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine. 2019. https://www.gwern.net/docs/ai/2019-topol.pdf.
US Food and Drug Administration. Precertification (Pre-Cert) Pilot Program: Frequently asked questions. 2019. https://www.fda.gov/MedicalDevices/DigitalHealth/DigitalHealthPreCertProgram/ucm577330.htm#realworld.
US Food and Drug Administration. Statement from FDA Commissioner Scott Gottlieb, MD, on the agency's new actions under the Pre-Cert Pilot Program to promote a more efficient framework for the review of safe and effective digital health innovations. 2019. https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm629306.htm.
US Food and Drug Administration. Statement from FDA Commissioner Scott Gottlieb, MD, on steps toward a new, tailored review framework for artificial intelligence-based medical devices. 2019. https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm635083.htm.
Wallach W, ed. Control and Responsible Innovation in the Development of Autonomous Machines. Executive summary. 2018. http://www.thehastingscenter.org/wp-content/uploads/Control-and-Responsible-Innovation-EXECUTIVE-SUMMARY.pdf.
Wallach W, Marchant G E. An agile ethical/legal model for the international and national governance of AI and robotics. Association for the Advancement of Artificial Intelligence. 2018. http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_77.pdf.
Woolf M. "Paging Dr. Bot" – The emergence of AI and machine learning in healthcare. American Bar Association. 2018. https://www.americanbar.org/groups/health_law/publications/aba_health_esource/2016-2017/october2017/machinelearning/.