News

With the US government’s help, Anthropic built a tool designed to prevent ... The company already deployed the tool on its ...
On Friday, Anthropic said its Claude chatbot can now end potentially harmful conversations, which “is intended for use in ...
Testing has shown that the chatbot exhibits a “pattern of apparent distress” when asked to generate harmful content ...
Businesses using Anthropic’s AI tools have something new to be excited about. The company has added Claude Code, its command-line assistant, to the wider Claude for Enterprise plan. This move ...
Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking ...
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
The integration positions Anthropic to better compete with command-line tools from Google and GitHub, both of which included ...
Anthropic says the conversations make Claude show ‘apparent distress.’ ...
Anthropic has introduced a safeguard in Claude AI that lets it exit abusive or harmful chats, aiming to set boundaries and ...
By empowering Claude to exit abusive conversations, Anthropic is contributing to ongoing debates about AI safety, ethics, and ...
In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The ...
It will only activate in “rare, extreme cases” when users repeatedly push the AI toward harmful or abusive topics.