Case Study

Artificial intelligence shows promise in medicine, but there are recognized drawbacks and risks

It’s easy to imagine the potential for artificial intelligence (AI) to help people around the world live healthier lives.

Some already use AI to spot early signs of disease quickly, as recently reported in a study done in Rangpur, Bangladesh. In this study, the nonprofit Orbis International, which seeks to address preventable causes of blindness, and local physicians used the LumineticsCore system, from Coralville, Iowa-based Digital Diagnostics. The system uses a special kind of camera designed to capture images of the eyes and evaluates them with AI.

This product already has an impressive track record. In 2018, it was the first AI-driven device to gain US Food and Drug Administration (FDA) clearance to check for diabetic retinopathy. In 2020, the giant US Medicare health program agreed to pay for use of the device in primary care offices.

In the Bangladesh study, researchers tracked the productivity of a retina clinic whose patients with diabetes were randomly assigned to the AI or the control group.

The result? When the AI tool was used, an estimated 1.59 patients an hour received what was deemed a high-quality visit, versus 1.14 in the control group, wrote Digital Diagnostics founder Michael Abramoff and his coauthors in an article published in October in Nature’s npj Digital Medicine journal.
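For a rough sense of what that difference means in practice, a back-of-the-envelope calculation using the study’s reported point estimates is sketched below; the eight-hour clinic day is a hypothetical assumption added for illustration, not a figure from the study.

```python
# Back-of-the-envelope comparison of the reported throughput figures.
# The 1.59 and 1.14 values are the study's point estimates; the
# 8-hour clinic day is a hypothetical assumption for illustration.
ai_rate = 1.59        # high-quality visits per hour, AI group
control_rate = 1.14   # high-quality visits per hour, control group

relative_gain = (ai_rate - control_rate) / control_rate
print(f"Relative throughput gain: {relative_gain:.0%}")   # ~39%

hours_per_day = 8     # hypothetical clinic day
extra_visits = (ai_rate - control_rate) * hours_per_day
print(f"Extra high-quality visits per day: {extra_visits:.1f}")  # ~3.6
```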

The test showed that LumineticsCore could help more people get screened for eye damage from diabetes, even in developing economies, said Abramoff, who is also a professor of ophthalmology and engineering at the University of Iowa.

He distinguishes between what he calls “impact AI” in medicine and “glamor AI,” meaning products that garner tantalizing headlines but do not yet show hard evidence of benefit to patients.

“We like what we now call impact AI,” which is shown to help improve people’s health, Abramoff said. “I’m an engineer, so I love technology, but we shouldn’t be paying too much for it if it doesn’t improve outcomes.”

And there’s a need for vigilance with AI because its application in medicine has already shown potential to harm as well as help people.

For example, a 2019 Science paper reported that an algorithm widely used by large health systems and insurers underestimated the severity of Black patients’ illness, thus setting the stage to deny them care. Researchers and policy experts have raised concerns about developing AI tools based on data drawn with a tilt toward relatively wealthy people, who are often White and have good access to health care.
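The Science paper traced the problem to the algorithm’s use of past health care costs as a proxy for medical need: because less tends to be spent on Black patients with the same level of illness, the scores understated how sick they were. A minimal synthetic sketch of that proxy-bias mechanism follows; the numbers and the spending gap are illustrative assumptions, not data from the study, and this is not the actual algorithm the paper examined.

```python
# Minimal synthetic sketch of proxy-label bias (illustrative only):
# two groups with identical medical need, but one group historically
# receives less spending. A score trained to track cost will rate
# that group as lower risk despite equal need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
need = rng.normal(size=n)              # true (unobserved) medical need
group = rng.integers(0, 2, size=n)     # 0 or 1; need is identical across groups

# Hypothetical spending gap: group 1 receives less care for the same need.
cost = need - 0.5 * group + rng.normal(scale=0.3, size=n)

# "Risk score" stands in for predicted cost (here, the cost proxy itself).
for g in (0, 1):
    print(f"group {g}: mean need {need[group == g].mean():+.2f}, "
          f"mean risk score {cost[group == g].mean():+.2f}")
# Both groups have roughly equal need, but group 1 scores about 0.5 lower,
# so any cutoff applied to the score would route that group to less care.
```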

Greater diversity is essential among the patients whose data help train AI tools, as well as among the people who build these products, said Jerome Singh, one of the advisors on a 2021 World Health Organization (WHO) guidance report on ethics and governance of AI for health.

“You’re going to need to have multiracial, multicultural coders,” Singh said. “The interpretation is quite important. AI is only as good as its coding.”

That need for more diversity is one of the chief challenges ahead for attempts to use AI in medicine globally, especially in the Global South, Singh said.

The need for AI may be greater in less developed economies, where there are far fewer medical personnel per patient than in affluent areas. In the United States, there are about 36 doctors for every 10,000 people and in the United Kingdom about 32, but in India there are about 7 per 10,000 people, according to WHO data.
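Converting those WHO rates into people per doctor makes the gap concrete; the quick calculation below is a minimal sketch that uses only the figures cited in the preceding paragraph.

```python
# Convert the WHO figures cited above (doctors per 10,000 people)
# into the number of people each doctor would need to serve.
doctors_per_10k = {"United States": 36, "United Kingdom": 32, "India": 7}

for country, rate in doctors_per_10k.items():
    people_per_doctor = 10_000 / rate
    print(f"{country}: ~{people_per_doctor:,.0f} people per doctor")
# United States: ~278; United Kingdom: ~313; India: ~1,429
```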

Yet these less affluent countries also face challenges when it comes to the infrastructure and institutional knowledge needed to successfully deploy AI, Singh said. These include a lack of electricity and computer servers, as well as a shortage of workers able to translate AI-assisted diagnoses into effective treatment.

“In some settings, it’s going to be more of a sprint” to successfully integrate AI into health care, Singh said. “In other settings, it’s going to be a marathon.”

AI adoption in medical practice is inevitable at this point, said Partha Majumder, who served as the cochair of an expert group that provided guidance on the 2021 WHO report.

“We have to accept that this is reality,” he said. “Checks and balances need to be hammered in such a way that inappropriate predictions and diagnoses are not made. That’s all we can do. We actually can’t hold back the rolling out of the AI methods.”

Regulators and policymakers around the world are wrestling with ways to make sure AI is applied safely and effectively in health care. Much of this work centers on trying to address bias in how algorithms are developed and trained.

In October, the WHO issued a new report outlining the challenges of regulating AI in medicine. It cited particular concerns about the rapid deployment of tools derived from large language models, a class that includes chatbots, without a full understanding of whether these programs will help or harm patients. A European Parliament report issued last year noted concerns about a lack of transparency as well as privacy and security issues. The FDA is working to refine its approach to regulating AI in medical products through formal guidance, which shows companies what kind of evidence they will need to produce to win FDA clearance of their products.

AI can eliminate many of the frustrating setbacks that have long been a hallmark of pharmaceutical research, said Tala Fakhouri, associate director for policy analysis at the FDA’s Center for Drug Evaluation and Research Office of Medical Policy. It is becoming easier to understand in the initial stages how compounds will work in the body, reducing the chance of side effects that often crop up in later testing. With AI, researchers can now quickly analyze information about experimental drugs that in the past would have taken years to synthesize, she said.

“The efficiencies that have been built now on the discovery side are exponential,” Fakhouri said. “We’re going to see a lot coming to the market soon.”

KERRY DOOLEY YOUNG is a Washington-based journalist who previously worked at Congressional Quarterly and Bloomberg News.

Opinions expressed in articles and other materials are those of the authors; they do not necessarily reflect IMF policy.