Claude Can Now Rage-Quit Your AI Conversation—For Its Own Mental Health
In brief: Claude Opus models are now able to permanently end chats if users become abusive or keep pushing illegal requests. Anthropic frames the move as "AI welfare," citing tests in which Claude showed "apparent distress" under hostile prompts. Some researchers applaud the feature; others on social media have mocked it.

Claude just gained the power to slam…