As part of a special series on artificial intelligence (AI), OR Manager is taking a deep dive into the many facets of this new technology and its impact on patient care. Part 1 and Part 2 of the introduction to AI (May 2019 and June 2019) defined types of AI and described its many current and potential surgical applications. The series has also presented examples of AI in practice: the OR Black Box® (June 2019) as well as an interactive surgical playbook and a system for quantifying blood loss (see articles in this issue on pages 10 and 12). In part 1 of a two-part article, we examine legal and ethical challenges related to AI.
Artificial intelligence will likely have a dramatic impact on healthcare, including surgery, but the nature of that impact depends on how challenges are addressed. Data management and analysis, ethical issues, legal and regulatory questions, and user impact are some of the issues being discussed not just in the United States but internationally as well.
This article takes an in-depth look at some of these issues. Part 2, which will be published in August, will discuss the remaining challenges.
Data challenges
AI systems require large amounts of data to "learn," and that learning is only as good as the data used to train them. Both the volume and the quality of that data create challenges related to data protection and analysis.
Data protection
"In healthcare, we put a lot of value, and rightfully so, on maintaining patient privacy and patient confidentiality," says Daniel Hashimoto, MD, MS, surgical artificial intelligence and innovation fellow at Massachusetts General Hospital in Boston. Laws such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) that are meant to protect privacy can also hinder access to the wealth of data needed to build effective AI systems.
What is the role of data privacy and protection in the age of AI? Experts are calling for a dialogue to answer that question. "These algorithms have the potential to be very high performing, can provide public good, and can provide [significant] health benefits," says Michael Matheny, MD, MS, MPH, associate professor of medicine, biomedical informatics, and biostatistics at Vanderbilt University Medical Center, Nashville, Tennessee. "People will have to decide where the balance is between allowing that data to be more accessible for the public good versus keeping it private and personal."
Dr Matheny cochairs the National Academy of Medicine (NAM) Artificial Intelligence in Healthcare Working Group, which will be releasing a report on AI.
These types of discussions go beyond US borders, and viewpoints vary. Dr Matheny notes that a country's culture determines the balance of public good versus privacy and data protection. "[The viewpoint] is based on the risks and benefits they see to themselves, the risks and benefits they see to society, and how they weigh their own personal rights relative to society," he says. For example, people in the US and China have very different views on data usage.
Some institutions are already taking steps to address data protection. Johns Hopkins in Baltimore, for example, has created a secure platform for storing research data in the cloud.
"We're creating an environment where the data is as secure as we know how to make it," says Ferdinand Hui, MD, associate professor of radiology and radiologic science, director of interventional stroke, and codirector of the Radiology Artificial Intelligence Lab at Johns Hopkins. "That way, we can do analytics on large amounts of data with as little risk of breach as we know how."
Researchers can access a variety of data, including electronic health record data, images, genomics, and physiological monitoring data.
But Dr Hui, along with Wendell Wallach, senior advisor to the Hastings Center in Garrison, New York, acknowledges that advances in information technology are making data de-identification, traditionally viewed as sufficient to protect a person's privacy, increasingly difficult.
"In reality, if you have enough pieces of information about a person, you can probably reconstruct who that individual is, and that would violate their rights," says Wallach, who is also chair of technology and ethics studies at the Yale Interdisciplinary Center for Bioethics in New Haven, Connecticut, the author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, and principal investigator for the Hastings Center's Control and Responsible Innovation in the Development of Autonomous Machines Project (https://dev-the-hastings-center.pantheonsite.io/who-we-are/our-research/current-projects/control-and-responsible-innovation-in-the-development-of-autonomous-machines/) (sidebar above).
Dr Hui uses mammograms as an example of the challenges, saying that each mammogram has a unique pattern of tissue, stroma, and blood vessels.
"If I had a copy of another mammogram to compare against, I could probably figure out who you are, like a fingerprint," he says, adding that it's possible to use CT scans to reconstruct what a person looks like.
Data analysis
Sound AI algorithms depend on sound analysis. "The problem with many published algorithms is that when we evaluate the performance of that algorithm on a new set of data or a new set of patients, it doesn't do as well," Dr Hui says.
The challenge lies in the variability inherent in data: different surgeons, different hospitals, different patient characteristics, and different procedures.
"We don't know whether the patients and decision points that got programmed into an algorithmic system to provide care align with the patient populations in a different area of the country from where the system was programmed," says Danton Char, MD, assistant professor of anesthesiology, perioperative and pain medicine at Stanford University Medical Center in Stanford, California. "What works in Palo Alto might not work in Akron [Ohio]," says Dr Char, who has written about the ethical issues inherent in AI.
"There's a lot of work that needs to be done to make sure that the algorithms that we develop on one set of patients in one part of the country and one part of the world are valuable and still accurate in other patient groups," Dr Hui says.
Dr Char adds that external pressures could lead to the development of inappropriate algorithms. "Profit-driven pressures and regulatory compliance-driven pressures could cause people to create algorithms that skew toward providing the kind of data that regulators want to hear, or that maximize profits, but at the expense of healthcare or delivering quality health."
The current emphasis on basing reimbursement on outcomes could lead to the creation of algorithms that guide users toward clinical actions that improve quality metrics but not necessarily patient care. For example, an algorithm might encourage ordering unnecessary tests simply to satisfy a quality indicator.
Clinical decision-support systems could also be programmed to boost profits for stakeholders without clinicians' knowledge. This might take the form of recommending medications or devices in which the creator or purchaser of the AI algorithm holds a stake.
"Healthcare exists in this tension between maximizing profit and maximizing health, and those two things don't always line up," Dr Char says.
Addressing these data challenges is key to ensuring that AI researchers have what they need to build better algorithms. Dr Hashimoto calls for a central database that researchers can access, but notes that it's important to consider modifying certain rules so that data utility can be optimized without compromising patient privacy.
Ethical issues
Ethical issues include bias, monetizing patient data, and access.
Bias
AI algorithms should be based on unbiased data, but ensuring that can be challenging. "We're now realizing that a lot of the data we collect reflects biases from the past, and those are just carried forward," Wallach says. "Either the data itself is not complete, or it's based on historical materials that were intrinsically collected in ways that may reinforce biases."
Some bias has its roots in research. "So much of our medical research is based on the average, 50-year-old Caucasian male," says Sonoo Thadaney, MBA, executive director of Presence and of the Program in Bedside Medicine at Stanford University School of Medicine in Stanford, California. "We don't have access to large data sets that represent the populations they aim to serve, with sufficient breadth and depth of diversity in gender, race, and age."
Wallach adds that bias takes other forms as well, including bias related to patient conditions such as mental illness and to how people think about issues such as politics. "Biases can be very broad," he says. "They go beyond basic racial, gender, or cultural prejudices."
Thadaney, who cochairs the NAM Artificial Intelligence in Healthcare Working Group, adds that focusing too much on currently available data can lead to unexpected consequences. For example, in the last century there was a well-meaning focus on addressing world hunger by increasing yield per acre, without considering nutrition per acre. "Fast forward to today, and we see that the yields per acre on our planet have certainly gone up, but there are [significant] nutritional differences [among zip codes and countries]," she says.
Some have access to high-nutrition food, but others do not. "We have a food apartheid thanks to our food deserts; we can't end up with a healthcare apartheid because we only focus on metrics such as efficiency and costs, ignoring criteria such as inclusivity and equity."
Combining a focus on efficiency with a biased data set could lead to disparities, Thadaney says, especially for those who can't afford second opinions and concierge medicine. "The people who end up with algorithmic medicine will likely be those for whom the data set is not relevant and [whose care] prioritizes efficiency rather than outcomes," she says.
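One practical guard against the disparity Thadaney describes is to report a model's performance for each demographic subgroup rather than as a single overall number. The sketch below is illustrative only; the predictions file and its group, label, and score columns are hypothetical.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical model outputs; columns: group, label, score.
results = pd.read_csv("model_predictions.csv")

# One overall number can hide poor performance in under-represented groups.
overall_auc = roc_auc_score(results["label"], results["score"])
print(f"overall AUC: {overall_auc:.2f}")

# Audit each subgroup (e.g., by sex, race, or age band) separately.
for group, rows in results.groupby("group"):
    if rows["label"].nunique() < 2:
        print(f"{group}: n={len(rows)}, AUC undefined (single outcome class)")
        continue
    auc = roc_auc_score(rows["label"], rows["score"])
    print(f"{group}: n={len(rows)}, AUC={auc:.2f}")
```

A model with a strong overall score but a weak score in a small subgroup is performing well on average precisely because that subgroup is under-represented in the data.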
Monetizing patient data
Thadaney notes that patients give permission for a healthcare system to use their data for treatment, billing, and academic research. "Patients have not explicitly given permission to use that data to monetize it for either one institution or a number of institutions," she says.
Dr Char says that many people will need to volunteer their health data so there is sufficient information to develop AI. "What they should get in return for giving up their data is not clear," he says. "Certainly I think that to do right, there should be some kind of clear benefit to the patient [as opposed to profit for the institution or application designers]."
He notes that in 2017, London's Royal Free Hospital was found to have breached the UK's Data Protection Act when it gave data on 1.6 million patients to DeepMind, a Google subsidiary. The data transfer was part of a partnership to create Streams, a healthcare app for diagnosing and detecting acute kidney injury. Patients were not told that their data would be used for ongoing testing of the app.
Wallach notes that the European Union's General Data Protection Regulation gives individuals many rights related to who owns data about them, but that's not the case in the United States. "The rules [in the US] are looser in terms of what businesses can and cannot do with data," he says. "There's a lot of concern that the data is being used in unethical ways or inappropriate ways, and that we should be clarifying the norms on the use of that data."
Access
Access to AI could be an issue, particularly for smaller hospitals with fewer financial resources. "If I'm in a rural area or a small community hospital, what are the ethical implications of not being able to get the benefits from AI because of the financial outlays?" Dr Char asks.
"This is going to be a pretty significant challenge," Dr Matheny adds. "There's a lot of infrastructure that needs to be in place from health IT [information technology], health record, and data management resources."
In addition, AI algorithms require regular updating to ensure they operate safely and accurately, and those updates add to the cost.
Dr Matheny says a way to mitigate the financial disparity is to reduce implementation costs through transparent best practices. "That needs to be a conscious effort by stakeholders to encourage national discussion going forward in order to promote standardization and to lower costs of implementation, or only large medical centers will be able to offer the benefits from these technologies," he says.
"Whether AI will realize the promise of actually helping more people or whether it's going to enhance affluence and income disparity is not so clear," Dr Char says. "I think that everybody working in this area acknowledges that we probably don't really understand all of the social ramifications of a lot of this artificial intelligence." ✥
Cynthia Saver, MS, RN, is president of CLS Development, Inc, Columbia, Maryland, which provides editorial services to healthcare publications.
References
Bollier D. Artificial intelligence, the great disrupter, coming to terms with AI-driven markets, governance and life. A report on the second annual Aspen Institute roundtable on artificial intelligence. 2018. http://csreports.aspeninstitute.org/documents/AI2017.pdf.
Char DS, Shah NH, Magnus D. Implementing machine learning in health care–addressing ethical challenges. N Engl J Med. 2018;378(11):981-983.
European Commission. 2018 reform of EU data protection rules. https://ec.europa.eu/commission/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en.
Faggella D. Searching for higher ground in rough seas of emerging tech governance–a conversation with Wendell Wallach. 2018. Podcast. https://emerj.com/ai-podcast-interviews/searching-for-higher-ground-in-rough-seas-of-emerging-tech-governance-a-conversation-with-wendell-wallach/.
Hern A. Royal Free breached UK data law in 1.6m patient deal with Google's DeepMind. The Guardian. 2017. https://www.theguardian.com/technology/2017/jul/03/google-deepmind-16m-patient-royal-free-deal-data-protection-act.
Maddox TM, Rumsfeld JS, Payne PRO. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31-32.
National Academy of Medicine. Digital Learning Collaborative. June 28, 2018, meeting highlights. https://www.health-hats.com/wp-content/uploads/2018/07/DLC-_06282018_Meetingsummary.pdf.
Royal College of Physicians. Artificial intelligence (AI) in health care. 2018. https://www.rcplondon.ac.uk/projects/outputs/artificial-intelligence-ai-health.
Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: Addressing ethical challenges. PLOS Medicine. 2018. https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002689.
Wallach W, ed. Control and Responsible Innovation in the Development of Autonomous Machines. Executive summary. 2018. http://www.thehastingscenter.org/wp-content/uploads/Control-and-Responsible-Innovation-EXECUTIVE-SUMMARY.pdf.
Wallach W, Marchant G E. An agile ethical/legal model for the international and national governance of AI and robotics. Association for the Advancement of Artificial Intelligence. 2018. http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_77.pdf.