Gemini: Google invests $30 million in users' mental health
Google has unveiled an update to Gemini aimed at making the chatbot a safer and more effective guidance tool for people facing moments of psychological distress. The initiative is backed by a substantial financial commitment: $30 million, provided through Google.org over the next three years, to support helplines around the world.
With this new feature, Google is not trying to have artificial intelligence replace therapy; instead, the chatbot is meant to act as a catalyst for immediate access to real professional resources. When Gemini's safety protocols detect inputs indicating a potential suicide or self-harm crisis, the system activates a redesigned module called "You Can Ask for Help", built around a simplified one-tap interface that connects the user instantly to emergency lines without any additional steps.
Gemini and Crisis Management: A Bridge to Human Support
This professional contact option remains anchored and visible throughout the session, ensuring that support stays one tap away even if the conversation shifts to other topics. Google's support extends further through an expanded partnership with ReflexAI, which includes a $4 million investment and the direct integration of Gemini into the organization's training suite. Google.org Fellows will collaborate pro bono on the development of Prepare, a platform that uses AI-based simulations to train staff and volunteers. Through realistic, interactive scenarios, those working in the nonprofit sector can refine their skills in handling sensitive conversations, improving the responsiveness of human support through computational training.
From an engineering perspective, safety teams have placed strict restrictions on the model's "personality". To prevent the formation of parasocial bonds or emotional dependency, Gemini has been trained to systematically avoid presenting itself as a human being or a trusted companion. The new rules bar the AI from claiming human attributes or personal needs, and from using language that simulates intimacy.
These constraints are particularly strict for underage users. The system is designed not to indulge false beliefs or harmful impulses, maintaining a constant anchor to objective reality. Responses are structured to discourage self-harming behavior, avoiding any form of validation of dangerous thoughts. In essence, Google is deliberately dialing back the AI's illusion of empathy to prioritize clinical safety and protection from bullying and harassment. The message is clear: Gemini is not, and should not be, a substitute for professional therapy.