- July 10, 2025
Nearly 50 Million Americans Are Using AI for Mental Health Support as Services Face Pressure

- Nearly a fifth of Americans (19%) – equivalent to 49.2 million adults – now use AI chatbots for mental health care, turning the technology into a lifeline for those unable to access professional support[1].
- Three in 10 people (30%) have entered physical symptoms and medical history into AI chatbots to find out what might be wrong with them[2].
- With the Big Beautiful Bill introducing a 12% cut to Medicaid spending, reliance on AI chatbots for health advice looks set to increase.
- However, cybersecurity company NymVPN is warning people to be cautious about how much information they share with AI, given the risks of hacking and privacy breaches.
Close to 50 million Americans (19%) are now using AI tools like ChatGPT to answer questions about their mental health, turning chatbots into personal therapists[1], according to new research by NymVPN.
Nearly a third of people (30%) have turned to AI chatbots, including ChatGPT, Google Gemini and Microsoft Copilot, to find out what their health symptoms could indicate. The same proportion admit to sharing their medical history with AI to speed up the process[2].
In many states, patients face months-long delays to see a licensed therapist or psychiatrist. With public mental health systems underfunded and overburdened, it’s not surprising that millions are turning to AI chatbots as stopgap mental health support.
This trend will only continue. The Big Beautiful Bill will bring sweeping changes to the U.S. healthcare system, including a major restructuring of Medicaid that cuts its funding by 12%. The legislation also grants states greater control over mental health program budgets.
According to the Kaiser Family Foundation, more than one in three adult Medicaid enrollees have a mental illness[3]. In this climate, AI chatbots may become a default alternative for millions seeking mental health advice.
AI tools like ChatGPT aren’t just becoming therapists and 24/7 physicians: the research also highlights that one in six Americans (18%) use the technology for relationship advice, including how to navigate breakups and challenging situations with their partners[4].
NymVPN, the cybersecurity firm that commissioned the research, has been studying how consumers interact with AI chatbots. It warns that people may be unaware of the risks of sharing so much about their personal lives with a machine.
Nearly half of Americans (48%) describe themselves as ‘cautious’ about their interactions with AI and the potential privacy risks. Meanwhile, a quarter of people (24%) say that while AI may be here to stay in U.S. healthcare, they would not trust it with their personal information or to do the job as well as a human[5].
Harry Halpin, CEO of cybersecurity firm NymVPN, comments: “As awareness of mental health continues to grow, the demand for services has also increased, yet for millions of Americans, funding is being cut and many insurance companies are actively limiting support. This is pushing millions of people to turn to AI to fill the gaps with free and impartial advice.
“The likes of ChatGPT and Google Gemini are now so frequently used that they have ventured beyond being a useful tool to being treated as therapists, doctors and relationship coaches.
“The elephant in the room is that this is still technology and is therefore at risk of cyberattacks.
“Any information related to health is useful for hackers, as it gives them a clear picture of what makes someone ‘vulnerable’. It could be used by a scammer to blackmail someone or convince them to click on links that expose devices to malware and viruses.
“If you are using AI for health and relationship advice, do not give your name or the names of loved ones, and give general details and summaries rather than specifics about events, feelings or symptoms.
“Ensure all privacy features are turned on, consider using a VPN to protect your location, and do not share accounts. Some chatbots, like ChatGPT, allow your conversation history to inform their advice, meaning you could end up sharing details of your personal life with whoever you share the account with.”