Here’s How Artificial Intelligence Is Used in Healthcare – and What Happens When It Makes a Mistake

If your doctor makes a mistake, the question is usually when to sue, not whom. But what happens when the doctor is a robot?

From misdiagnosis to wrong-site surgery, humans make medical errors outrageously often. As the third-leading cause of death in the U.S., medical errors kill an estimated 440,000 people a year. But the involvement of artificial intelligence (AI) in these errors is brand-new territory. Experts are only beginning to get a lay of the land, and 1 in 3 fear that AI errors could also prove fatal.

The Current Status of AI in Healthcare

In basic terms, AI is the use of computer algorithms, often powered by machine learning, to make data-driven decisions. In healthcare, it is designed to make more accurate decisions that help reduce medical errors, particularly in diagnostic medicine. AI technologies have already exceeded human ability to identify heart-attack risk and certain types of cancer, for example, simply by analyzing images.

But like any technology, AI can fail. It may even be vulnerable to cyberattacks. Earlier this year, Harvard University researchers demonstrated an attack on image-recognition models that tricked the AI into identifying objects incorrectly, the kind of error that could lead to misdiagnosis.

What’s more, AI systems are so complex that we don’t yet understand how they reach their decisions. Often, all a doctor has to act on is the AI’s conclusion, an unsettling prospect in life-or-death applications.

Doctors now face the task of working out their ethical and legal responsibilities when using AI. Doing so will be important not only to keep patients’ trust, but also to determine how to compensate those who fall victim to medical errors.

“We cannot have the ‘move-fast-break-things’ mantra of Silicon Valley in healthcare,” said Matthew Fenech, a former NHS doctor and current AI policy researcher at U.K.-based think tank Future Advocacy. “We need to put patients and safety first.”

Who Is Responsible for Medical Errors Today?

AI is meant to be a tool. It is designed to complement, not replace, a doctor’s best judgment. In cases of medical error, healthcare providers remain fully liable in all but two scenarios.

The first is when the doctor (or the AI) acted as any reasonable doctor would be expected to act, a benchmark known as the “standard of care.” Imagine, for example, that an algorithm prescribes a routine antibiotic after checking the patient’s records. If the patient turns out to be allergic to that antibiotic, but the allergy was unknown to the patient and absent from their records, then both the AI and the doctor should be blameless.

The second is when the fault lies with the manufacturer of an AI-powered device, the same way car manufacturers are liable for faulty seatbelts. One example in medicine is AI-based pacemakers, whose programming defects have been found to risk fatal harm. If such a device did cause harm, but the doctor had implanted the pacemaker properly, the device’s developers would be liable.

But as long as we have doctors to set standards of care and make the ultimate call, the former scenario is likelier.

“We do expect a degree of due diligence,” said Charlotte Tschider, a fellow in health and intellectual property law at DePaul University. “Say you’re a surgeon and you look at your scalpel to find it’s all bent and messed up. You’re not just going to cut a patient open without thinking. You have responsibility there too.”

In these types of cases, it is the doctor who, if found negligent, faces legal action.

Who Will Be Responsible in the Future?

So far in 2018, $8.2 billion has been invested in AI research for healthcare. AI technologies are not yet widely implemented, but pervasive use may be only a few years away. If and when AI becomes the new standard of care, the question of responsibility becomes trickier to answer, and the industry might consider changing liability law to protect doctors who overrule machines.

We can’t be sure what those changes will look like, as AI has yet to be seriously tested in court. One solution, experts say, would be to hold AI systems fully responsible in all cases. But how does one penalize a machine or an algorithm that doesn’t earn a paycheck?

Their manufacturers do, of course, which brings us back to holding companies accountable.

This is already happening today in lawsuits against manufacturers of faulty devices that fail to adequately warn about their risks. Pharma giants like Johnson & Johnson have paid out billions in settlements over defective talcum powder, transvaginal mesh implants, and other products that have injured and even killed consumers. The fact that some manufacturers fail to test these products before release only makes these medical errors more devastating.

AI’s decision-making still raises a number of questions. Yet, given the current state of machine learning, it may make sense to hold manufacturers accountable. Whatever form the related regulations take (all too often, they protect manufacturers’ interests), we can only hope they put patients first.

Author:
Sokolove Law Team

The Sokolove Law Content Team is made up of writers, editors, and journalists. We work with case managers and attorneys to keep site information up to date and accurate. Our site has a wealth of resources available for victims of wrongdoing and their families.

Last modified: September 28, 2020