Anthropic abandoned the central commitment of its responsible scaling policy: the pledge never to train a model it could not guarantee was safe.
Source Videos (1)
Claude Blackmailed Its Developers. Here's Why the System Hasn't Collapsed Yet. - YouTube
AI News & Strategy Daily | Nate B Jones
Related Claims
Anthropic Chief Science Officer Jared Kaplan told Time magazine, 'It no longer makes sense to make unilateral commitments if competitors are blazing ahead.'
Anthropic's lead safety researcher resigned, and his farewell letter warning the world of its peril has been read over a million times.
Anthropic argues that its red lines against AI-controlled weapons in a kill chain and against mass domestic surveillance represent core values for the business.
Every major lab has weakened or abandoned specific safety commitments in the past year.
Just hours after Anthropic announced its stance, OpenAI signed a similar deal with the Pentagon, effectively signaling that it was willing to forgo basic AI safety guardrails to secure a lucrative government contract.