
The News
OpenAI has reportedly slashed the time it spends safety-testing its artificial intelligence models, sparking concerns that it is racing toward powerful AI without proper safeguards.
The company used to allow months for safety testing but now allows just days, according to the Financial Times.
AI safety campaigners worry about catastrophic risks from AI, including the possibility that it could ease the development of bioweapons, and one former OpenAI researcher recently warned that arms-race dynamics are driving a dangerous rush toward powerful AI.
Most researchers think AI will have a beneficial impact on humanity, but many warn that there is a chance of disaster.
AI safety regulation is approaching, but it’s not clear how well governments understand the technology: One US cabinet member recently referred to it as “A1.”