
Anthropic's Claude models can end harmful or abusive conversations

Cryptopolitan | August 17, 2025, 09:50

Artificial intelligence company Anthropic has revealed new capabilities for some of its newest and largest models. According to the company, these models have new capabilities that will allow them to end conversations in what has been described as “rare, extreme cases of persistently harmful or abusive user interactions.”

In its statement, the company mentioned that it is taking this step not to protect the users, but to protect the artificial intelligence model itself. Anthropic clarified that this doesn’t mean that its Claude AI models are sentient or can be harmed by their conversations with users. However, it notes that there is still a high degree of uncertainty about the potential moral status of Claude and other LLMs, now or in the future.

Anthropic frames effort as a just-in-case precaution

The recent announcement from the artificial intelligence firm points to what it describes as "model welfare," a recently created program that studies the welfare of its models. The company characterized the effort as precautionary, saying it is "working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible."

According to the announcement, the latest change is currently limited to Claude Opus 4 and 4.1 and is expected to take effect only in "extreme edge cases." Such cases include requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale acts of violence or terror.

Notably, those types of requests could create legal or publicity problems for Anthropic, a typical example being the recent reporting on how ChatGPT can potentially reinforce or contribute to its users' delusional thinking. However, the company said that in its pre-deployment testing, Claude Opus 4 showed a strong preference against responding to these sorts of requests and a pattern of distress when it did so.

Conversation-ending ability is the last resort

For the new capabilities to end conversations, Anthropic said, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.” The company also added that Claude has been directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.

Anthropic also added that when Claude ends a conversation, users will still be able to start new conversations from the same account. The company noted that users can also create new branches of the ended conversation by editing their previous messages. "We're treating this feature as an ongoing experiment and will continue refining our approach," the company says.

This information is coming to light at a time when United States Senator Josh Hawley announced his intention to investigate the generative AI products released by Meta. He said the intention was to check if the products could exploit, harm, or deceive children after leaked internal documents alleged that chatbots were allowed to have romantic conversations with minors.

“Is there anything – ANYTHING – Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone,” the Senator said on X. The investigation came after internal documents, seen by Reuters, showed that Meta allegedly allows its chatbot personas to engage in flirtatious exchanges with children.

