News

Testing has shown that the chatbot exhibits a “pattern of apparent distress” when asked to generate harmful content ...
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
The integration positions Anthropic to better compete with command-line tools from Google and GitHub, both of which included ...
Businesses using Anthropic’s AI tools have something new to be excited about. The company has added Claude Code, its command-line assistant, to the wider Claude for Enterprise plan. This ...
Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking ...
In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The ...
Roughly 200 people gathered in San Francisco on Saturday to mourn the loss of Claude 3 Sonnet, an older AI model that ...
As part of its ongoing work with the National Nuclear Security Administration, the small but critical agency charged with monitoring the country’s nuclear stockpile, Anthropic is now working on a new ...
With the US government’s help, Anthropic built a tool designed to prevent ... The company already deployed the tool on its ...
Anthropic has introduced a safeguard to its Claude artificial intelligence platform that allows certain models to end conversations in cases of persistently harmful or ...