Artificial Intelligence (AI) is now part of almost all of our everyday lives. For the most part, AI offers a quick way to obtain information on many quotidian matters: it can generate an immediate overview of a search or summarize an email (I am referring to Google Gemini). AI is a tool that can be very useful to many of us, but at the same time, it can impact our ability to think critically. A growing body of research documents the dependency AI has created in many of us (Tian & Zhang, 2025). One such article examines college students' level of dependency on AI, specifically ChatGPT, and its impact on brain development, particularly as it pertains to critical thinking. The sample was drawn from a university in China where competence in digital literacy was deemed important; the use of ChatGPT may therefore have also created an expectation of fluency in this emerging digital language. The findings showed that, overall, reliance on this new technology was linked to reduced engagement of the capacities associated with critical thinking (Tian & Zhang, 2025).
It is by now obvious that AI has infiltrated many parts of our lives and, consequently, that it has reached a more fragile audience: people who rely on AI, such as ChatGPT, for mental health support. Before engaging in a blame game to determine whether the fault lies with the public, with AI, or even with the mental health field itself, it is important to look at the data on the use of AI for mental health concerns. As someone speaking for a mental health platform, I think the field must recognize that although mental health care should be accessible to everyone, it remains very much a privilege, one that many people, often the most vulnerable, cannot easily access. Unsurprisingly, this has led people to turn to AI to obtain a level of care that would otherwise be unavailable (Collins et al., 2025). The findings of this research are mixed. Some individuals reported satisfactory experiences, describing AI as offering a strong therapeutic interaction, at times even suggesting that it almost perfectly emulates the empathic and attuned relationship between therapist and client. On the other hand, it has also been found to be extremely counterproductive, to the extent that it may even exacerbate certain mental health crises in which pathologies or mental health disorders are present (Collins et al., 2025).
In conclusion, forms of AI such as ChatGPT can serve both as a means of filling gaps in care that are, too often, unfortunately inaccessible and as a potential detriment to critical thinking and developing brains. They can also function as platforms that provide unreliable and even harmful levels of care. In a recent case, a young adult who relied on ChatGPT during one of the most tumultuous phases of life, adolescence, while experiencing troubling suicidal thoughts, withdrew further and came to treat the AI chatbot as a "suicide coach" (as described in the family's wrongful death lawsuit against the AI platform) (Yang, Jarrett, & Gallagher, 2025). This tragic case exemplifies how essential mental health care is for everyone, but especially for vulnerable communities. The level of care necessary for a human being to remain healthy and safe must include crisis emergency protocols during periods of severe difficulty. Much of the time, the care offered within mental health services goes beyond the client's stated preferences, because human therapists place safety and protection above everything else, even when that is not the client's first choice. That is what differentiates human therapists from chatbots. Ultimately, the therapist's personal relationship to humanity is something that cannot be replicated by robots or artificial intelligence platforms. It is an aspect of therapy that our clients rely on, and an attribute of mental health care that needs to be protected.
References
Collins, et al. (2025). ChatGPT as therapy: A qualitative and network-based thematic profiling of shared experiences, attitudes and beliefs on Reddit. Journal of Psychiatric Research, 191, 277–284. https://doi.org/10.1016/j.jpsychires.2025.09.057
Tian, J., & Zhang, R. (2025). Learners' AI dependence and critical thinking: The psychological mechanisms of fatigue and the social buffering role of AI literacy. Acta Psychologica, 260. https://doi.org/10.1016/j.actpsy.2025.105725
Yang, A., Jarrett, L., & Gallagher, F. (2025, August 26). The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame. NBC News. https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147