Semafor Signals

As AI-powered health care expands, experts warn of biases

Oct 5, 2024, 10:00am EDT
A monitor shows a three-dimensional image of a human heart. (Ralph Orlowski/Reuters)

The News

Google’s DeepMind artificial intelligence research laboratory and German pharma company BioNTech are both building AI-powered lab assistants to help scientists conduct experiments and perform tasks, the Financial Times reported.

It’s the latest example of how developments in artificial intelligence are revolutionizing a number of fields, including medicine. AI has long been used in radiology for image analysis and in oncology to classify skin lesions, for example, and its applications are growing as the technology advances.


OpenAI’s GPT models, for instance, have outperformed humans in making cancer diagnoses based on MRI reports and beaten PhD-holders in standardized science tests.

However, as AI’s use in health care expands, some fear the notoriously biased technology could carry negative repercussions for patients.


SIGNALS


Investors pour money into AI health care startups

Sources: Business Insider, Mercer

Health care-focused venture capitalists have been particularly keen on investing in startups that use AI, Business Insider reported, with money pouring into AI-related health care startups at twice the rate of the tech sector as a whole since 2019. The US health care system could face shortages of up to 100,000 workers by 2028, consulting company Mercer predicted, and AI could prove a boon to filling certain roles. Most venture investments target companies that use AI for administrative tasks, which one investor described as the “low-hanging fruit.” Those that offer clinical analysis and diagnostics have had a harder time making it to market in the US, because they require FDA approval — a process that can be long and expensive.

AI’s bias problem could have negative implications for health care

Sources: Yale School of Medicine, Nature, State of California Department of Justice

AI models are notoriously riddled with bias stemming from the data they’re trained on, frequently compounding existing inequality related to race, gender, or sexual orientation — and health care is no exception. For example, some algorithms tend to underestimate the symptoms of patients belonging to racial and ethnic minorities, requiring them to be “considerably more ill than their white counterparts” to receive the same care, a Yale School of Medicine article noted. Mitigating and monitoring for bias will be key, a 2023 Nature study suggested. Some US states have taken steps toward tackling the issue — in 2022, California’s Department of Justice began investigating whether algorithms used to determine patients’ access to health care displayed racial bias.
