Artificial Intelligence and Health Care: Who’s Liable When the Algorithm Fails?

Artificial intelligence (AI) and machine learning (ML) are poised for exponential growth in the health care industry. These technologies are being hailed for their potential to do everything from developing personalized therapeutics to enabling earlier, more accurate diagnoses. Last month, the Food and Drug Administration (FDA) released its long-anticipated action plan for creating a regulatory structure for health care-related AI/ML software products. While regulatory frameworks are necessary to facilitate development and commerce, questions remain with respect to liability. Specifically, who will be liable when an AI/ML product malfunctions and a patient is harmed?

FDA road map for AI-driven devices

The FDA’s Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan outlines the steps the agency anticipates taking to prepare for the enormous task of regulating this evolving industry. The early road map lays the foundation for how the FDA will develop policy on validation requirements to ensure the safety and effectiveness of AI/ML-based medical software, and the agency will collect feedback from industry stakeholders as it works to implement the plan’s five concrete steps: developing a regulatory framework complete with guidance on monitoring software learning that takes place over time; supporting the development of good ML practices; fostering a patient-centered approach, including device transparency to users; developing methods to evaluate and improve ML algorithms; and advancing real-world performance monitoring pilots.

AI/ML and liability

AI refers to a collection of technologies in which algorithms mine vast amounts of data to identify patterns and inform decisions. ML is a type of artificial intelligence in which computers learn from data without being explicitly programmed for each task. AI/ML has many applications in the clinical setting, from predicting which treatments are most likely to succeed for a particular patient to reading imaging studies such as CT scans. Several studies suggest that AI can perform as well as or better than humans on certain health care tasks; some algorithms, for instance, have outperformed radiologists at finding malignant tumors. AI is expected to lead to some automation in the health care workforce. But the use of smart machines to make decisions, or to assist doctors with decisions, is fraught with legal questions. Machines, like humans, make mistakes. What happens if the AI system misses cancer or another serious condition? Who is liable?
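For readers less familiar with how such systems work, the short Python sketch below illustrates the basic idea of machine learning described above: a model is fit to labeled examples and then makes its own predictions on new cases. The data, features, and model here are purely hypothetical and for illustration only; they do not represent any actual diagnostic product or FDA-reviewed software.

```python
# A minimal, illustrative sketch of supervised machine learning:
# a classifier "learns" to flag suspicious cases from labeled examples.
# The synthetic data and feature meanings are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical features extracted from imaging studies (e.g., lesion size,
# density, texture score); labels mark whether a lesion was malignant.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)      # the "learning" step: fit patterns in the data

preds = model.predict(X_test)    # the model now makes its own calls on new cases
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.2f}")
# Like a human reader, the model will sometimes be wrong, which is
# where the liability questions discussed here begin.
```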

A look at precedent

To examine that question, one can look only to precedent as it pertains to pieces of the issue. If the AI product in question is proprietary to the hospital and is being used to perform tasks formerly performed by radiologists, the hospital will likely be liable for damages under the concept of enterprise liability, which holds entities accountable when their products, policies, or actions cause harm.

If the machine was purchased from an outside vendor, liability may be shared. However, when a vendor’s product receives FDA approval and is endorsed by the agency, the vendor may be shielded under the doctrine of preemption. Because the FDA is a federal agency, its requirements take precedence over conflicting state laws, which, in theory, protects manufacturers from being sued in state courts.

Courts have interpreted preemption in different ways, however. In Medtronic, Inc. v. Lohr (1996), the Supreme Court heard a case in which Medtronic, a medical device manufacturer, was sued after a Florida patient’s pacemaker failed. While Medtronic claimed it could not be sued because of preemption, the Supreme Court disagreed, in part because the device had reached the market through the less rigorous, expedited 510(k) clearance pathway rather than full FDA approval. Further, the high court held that preemption applies only when a specific state law conflicts with a specific federal law.

By contrast, in Riegel v. Medtronic (2008), the Supreme Court sided with Medtronic after a New York patient required emergency surgery when a Medtronic balloon catheter ruptured while being used to open an artery during angioplasty. The manufacturer was sued for negligence in designing, making, and labeling the catheter. But the catheter had gone through the FDA’s rigorous premarket approval process, and the court held that the preemption clause “bars common law claims challenging the safety or effectiveness of a medical device marketed in a form that received premarket approval from the FDA.”

The Takeaway

As AI becomes more widely adopted, it will raise novel legal and ethical questions. The continuing implementation of the technology will require thoughtful attention from both regulators and health care institutions to develop policies and procedures that ensure AI/ML enhances, rather than detracts from, patient care and safety.
