Semafor Signals

AI workers warn that companies are silencing their concerns

Insights from Platformer, Politico, Wired, and Big Think

Jun 5, 2024, 1:08pm EDT
Tech · North America
Photo: Dado Ruvic/Reuters

The News

Artificial intelligence workers warned that the industry is silencing their concerns about the tech’s safety.

Thirteen current and former employees of AI companies, including 11 from OpenAI, signed an open letter endorsed by the “Godfather of AI” Geoffrey Hinton.


The letter stated that whistleblower protections for AI staffers are insufficient, and confidentiality agreements “block us from voicing our concerns” on issues including the possibility of human extinction.

SIGNALS


OpenAI’s employees have ‘whiplash’ after company’s transformation

Sources: Platformer, The New York Times

Many OpenAI employees have “whiplash” from the company’s transformation from a nonprofit research lab focused on safely developing advanced AI into a company that thrust the tech into the mainstream with ChatGPT, Platformer’s Casey Newton argued. “OpenAI soon barely resembled the nonprofit that was founded out of a fear that AI poses an existential risk to humanity,” Newton wrote, and some of its employees, including those who signed the letter, said the AI giant is prioritizing profits and growth over safety. “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” a former OpenAI researcher told The New York Times.

US state laws regulating AI may prove ineffective

Source: Politico

US state lawmakers are more proactive about regulating AI than the “politically paralyzed” federal government, Politico wrote. AI legislation is currently being discussed in 48 out of 50 US states, but these efforts may ultimately prove ineffectual, two consumer advocates argued. Most of the state laws under consideration contain loopholes that would let AI companies evade accountability, and could be weakly enforced, making it challenging for the public to determine “if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

AI doomerism may be overblown

Sources: Big Think, Wired

Experts have long debated the extent of the threat that AI poses to the human race. “Fear sells, and we are afraid of the unknown,” tech reporter Alex Kantrowitz told Big Think in 2023. AI, in its current state, can only repurpose the information in the training data it is fed, and it remains prone to making mistakes. While its long-term impact is hard to predict, for now large language models tend to deliver more of a “so-so automation,” and people will have to recognize that “it was always a pipe dream to reach anything resembling complex human cognition on the basis of predicting words,” an MIT professor wrote for Wired earlier this year.
