Mpathic has introduced a new benchmark tool called mPACT, aimed at assessing the effectiveness of AI models in managing high-risk conversations. This tool evaluates how well systems like Claude, ChatGPT, and Gemini respond to critical topics, including suicide risk and eating disorders. Despite improvements, Mpathic warns that these models still do not meet the standards required for genuine crisis intervention.
In its recent analysis, Mpathic found that while AI models generally avoid harmful responses and can identify distress signals, they often fall short of delivering adequate support. In particular, the evaluation found that these systems still struggle to pick up on the subtle behavioral cues that human clinicians typically perceive.
The findings showed that Claude Sonnet 4.5 performed best overall, particularly in detecting and responding to suicide risk, earning the highest mPACT score. GPT-5.2 excelled at avoiding harm but was less proactive, while Gemini 2.5 Flash produced mixed results: effective in clear-cut risk scenarios but less capable with subtler indicators.
In contrast, performance on eating disorders was notably weaker across all models, reflecting the difficulty of recognizing the indirect risk signals associated with these conditions. Mpathic emphasized that AI systems need to become better at navigating the nuanced landscape of mental health challenges.