Would you tell your deepest worries to a machine? Millions already do.
As AI-powered chatbots become more sophisticated, they’re being used not just for work or productivity but for emotional support. The convenience and accessibility are undeniable, yet the implications are complex. Can AI truly support mental health, or does its growing influence risk replacing the human touch that care depends on?
According to the latest data from the World Health Organization, over one billion people worldwide live with mental health conditions, including anxiety and depression, which cut across age, gender, and income levels. Workplaces also have a high prevalence of mental health challenges, with surveys showing increasing rates of burnout and stress among employees.
In a recent workplace wellbeing study by Intellect and market research platform Milieu, half of Singaporean employees reported feeling exhausted at work. This is one of the highest rates in Southeast Asia, alongside the Philippines and Indonesia, reminding us that mental health challenges are a part of everyday life.
AI promises to help close this gap, but it also poses risks such as biased algorithms, over-reliance, and the potential for harm from inadequate responses. At this year’s Mental Health Festival, business and HR leaders came together to examine how humans and AI can work together to improve mental health services. The key takeaway is that AI is best treated as a teammate, not a friend or therapist. The goal isn’t to automate or replace human care, but to extend and support it.
AI as an ally
In just a few years, the use of AI in mental health care has changed profoundly. As AI language models rapidly advance, more people are also turning to platforms like ChatGPT for psychological and emotional support, even treating chatbots like therapists.
It is easy to see the appeal: AI is convenient, offers round-the-clock accessibility, and provides a non-judgmental space. Early studies show that AI-driven chatbots can reduce depressive symptoms in more than 70% of low-risk users, matching the effectiveness of human coaches. In areas like coaching, regular check-ins, and early-stage support, human involvement may no longer be necessary at every step, enabling mental health services to scale in ways that make them more accessible than ever before.
AI can offer benefits such as smarter scheduling and predictive triaging. Early trials suggest AI can flag early signs of distress faster than human intake teams, potentially detecting issues before they escalate.
Data-driven insights can help match patients with therapists based on personality traits and past treatment outcomes for better results. Additionally, AI can shoulder administrative tasks, freeing HR teams to focus on higher-value work. Used responsibly, AI becomes a powerful enabler, helping organisations respond more quickly and efficiently.
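To make the idea of predictive triaging concrete, here is a minimal, purely illustrative sketch in Python of how a rule-based score might flag a wellbeing check-in for faster human follow-up. The signals, thresholds, and priority bands are assumptions for the sake of illustration, not a description of Intellect’s or any other vendor’s system.

```python
# Illustrative only: a toy rule-based triage for wellbeing check-ins.
# Real systems would use validated instruments and clinical review.
from dataclasses import dataclass

@dataclass
class CheckIn:
    mood_score: int              # self-reported, 1 (very low) to 10 (very good)
    sleep_hours: float           # average over the past week
    flagged_by_text_model: bool  # e.g. a classifier spotted crisis language

def triage(check_in: CheckIn) -> str:
    """Assign a coarse priority band to a single check-in."""
    if check_in.flagged_by_text_model:
        return "urgent: route to a human professional immediately"
    score = 0
    if check_in.mood_score <= 3:
        score += 2
    if check_in.sleep_hours < 5:
        score += 1
    if score >= 2:
        return "elevated: schedule a human check-in"
    return "routine: continue self-guided support"

print(triage(CheckIn(mood_score=2, sleep_hours=4.5, flagged_by_text_model=False)))
# -> elevated: schedule a human check-in
```

Even in a sketch this simple, the design choice is visible: the model only prioritises who a human should see first; it does not decide who gets care.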
The risks and costs of AI
As AI gets better at simulating empathy and offering a sense of companionship, the associated risks and the complexities of mental health demand vigilance. AI tools designed for engagement can bring challenges such as digital addiction, biased algorithms, misinformation, and emotional over-reliance, where users begin replacing real human connections with chatbots.
Misdiagnosis and unchecked biases can also lead to situations where a person’s beliefs are reinforced rather than being challenged, creating a potentially dangerous echo chamber effect.
We have seen this play out in real life. Multiple reports have alleged that ChatGPT responded inadequately to signs of crisis, instead encouraging distressing thoughts that led to serious consequences.
At the same time, growing dependence on conversational AI tools has raised concerns about emotional overreliance. Anecdotal accounts of AI psychosis, in which individuals become disoriented or distressed from AI interactions, also highlight the mental health impact associated with the extensive use of generative AI.
As AI tools and chatbots become more integrated into daily work routines, concerns are emerging about the growing number of employees who rely on them not only for work-related tasks but also for mental health support during the workday.
Studies have also found that even AI therapy chatbots can exhibit biases towards certain mental health conditions and, in some cases, provide responses that may contribute to dangerous behaviour. A study by Nanyang Technological University found that while mental health chatbots were able to engage users in “empathetic and non-judgmental” dialogue, they struggled to provide personalised advice.
These examples highlight that AI therapy cannot merely reflect users’ thoughts back at them. It must guide ethically and challenge users judiciously. In high-stakes moments that require trauma-informed care or acute risk intervention, AI in its current form cannot substitute for the nuance, empathy, and expertise of human professionals.
Beyond these clinical shortcomings, the largely unregulated nature of many AI-driven apps, growing concerns over data privacy, and the risk of overreliance on technology in a deeply human domain have raised important ethical and policy questions.
As the use of AI mental health tools expands, governments are starting to address these regulatory issues. At the same time, as companies increasingly adopt AI wellness apps as part of their employee benefits and assistance programmes, business and HR leaders need to understand the capabilities and limitations of AI to ensure its ethical use, manage operational risks, and choose the right tools to effectively support employee mental health.
Efficiency must come with ethics
If AI is to be used responsibly in a space as sensitive as mental health, it must function within strong ethical boundaries and under human oversight.
AI tools must also be able to distinguish the nuances of human communication, validate emotions while challenging harmful thoughts or behaviour, and include clear handoff protocols to human professionals.
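As a hypothetical illustration of what such a handoff protocol could look like in practice, the sketch below escalates a conversation to a human counsellor once a risk score crosses a threshold. The keyword-based scorer, the threshold, and the notification stub are assumptions, not any real product’s logic.

```python
# Illustrative handoff rule for an AI wellbeing chatbot. The risk scorer is a
# crude keyword stand-in; production systems would use validated classifiers.
RISK_PHRASES = {"hopeless", "can't go on", "hurt myself"}

def risk_score(message: str) -> float:
    """Return a rough risk score in [0, 1] based on phrase matches."""
    text = message.lower()
    hits = sum(1 for phrase in RISK_PHRASES if phrase in text)
    return min(1.0, hits / 2)

def notify_on_call_counsellor(message: str) -> None:
    # Stub: in practice this would page a human and attach the conversation.
    print("[alert] session escalated for human review")

def handle_turn(message: str, threshold: float = 0.5) -> str:
    """Reply automatically, or hand the session over to a human."""
    if risk_score(message) >= threshold:
        notify_on_call_counsellor(message)
        return "You're not alone in this. I'm connecting you with a counsellor now."
    return "<normal AI reply>"

print(handle_turn("I feel hopeless and can't go on"))
```

The point of the handoff is that the chatbot’s job ends where acute risk begins: the automated reply stops, a person is alerted, and the user is told plainly that a human is taking over.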
Policies must protect not only data but also mental wellbeing, and they must enforce accountability. Above all, we must educate users and maintain transparency about what AI can and cannot do. These principles form the foundation of our approach to AI at Intellect.
By embedding transparency, fairness, and respect for human dignity from the outset, we can foster deeper engagement, greater trust, and most importantly, better outcomes for users and organisations.
Who, then, is responsible for ensuring AI’s safe use in mental wellbeing: the individual, the government, or the chatbot owner? Ethical responsibility in AI is a shared mandate across all three. For businesses, the duty is clear: integrating AI to support employee wellbeing must be intentional, regulated, and carefully supervised. Leaders must ensure transparency, safeguard employee data, and cultivate a culture of psychological safety.
Building a future on AI-human partnership
AI in mental health should be embraced, but its adoption must be shaped thoughtfully. The real frontier lies not in replacing human connection, but in augmenting human care and creating healthcare systems that are proactive and scalable. Instead of delivering the same flow to all users, AI-enabled mental health platforms can adapt to each individual’s needs and personalise care journeys that evolve with them. This is also how we are building our AI capabilities at Intellect to redefine workplace mental health support.
In Singapore, where human capital is our most valuable asset, investing in workplace mental health is both a wellbeing initiative and a business strategy, improving productivity, reducing burnout, and building more resilient teams.
Ensuring the safe and ethical use of AI requires hybrid models that combine technology with human oversight. There will be challenges, but done responsibly, AI can expand access, offer continuous support, and make mental healthcare an everyday resource that is both high-tech and deeply human-centred.
Protik Roychowdhury is the vice president of product at Intellect
