AI Chatbots and Mental Health: Between Care and Caution

By Aboo Bakr Majeedi (Student of Law, Jamia Millia Islamia)

Introduction:

Mental health is one of the most pressing public health issues of the 21st century. Rising cases of anxiety, depression, stress-related illness, and suicide across the globe have exposed the gaps in existing systems of care. Between 2007 and 2017, mental health conditions worldwide increased by 13%, and the pandemic further aggravated this trend. These problems are more acute in developing and underdeveloped countries, where the main hurdles to treatment are the scarcity of qualified professionals and the high cost of care.

In the modern era, artificial intelligence (AI)-driven chatbots have given new dimensions to every aspect of our lives. They have emerged as innovative tools within the healthcare industry as well, where AI-powered chatbots have opened a new gateway for dealing with emotional health and primary medical assistance. These services are viable, cost-effective, and accessible tools for the masses, and can be used at any time of day, unlike typical therapy services that are constrained by designated hours.

The increasing use of chatbots raises a number of moral and legal questions. How are chatbots used in the healthcare industry? What positive and negative impacts can they have? Most importantly, how do we ensure that chatbots are used in a way that is both empathetic and safe? These questions gain significance as an increasing number of people rely on chatbots to share their worries, tensions, and fears, as well as their health concerns.

AI in Mental Healthcare: Promise, Risks, and Accountability:

Artificial Intelligence (AI) has advanced significantly and is now widely used, especially among younger generations. Currently, more than 70% of young people have engaged with AI, and many use AI companions on a daily basis.

According to the Youth Pulse report, nearly 57% of Indian youth turn to chatbots and similar platforms for emotional support, often on sensitive issues that feel too personal to discuss with family members. Girls and women, who may wish to share their feelings yet hesitate to confide in others, depend on AI for emotional assistance even more, and many appreciate that it does not judge them.

However, this affordability, immediacy, impartiality, anonymity, and stigma reduction come with their own set of drawbacks. In 2022, a user wrote that she wanted to jump off a canyon; Woebot, a popular therapy chatbot, called it “wonderful”. The parents of 16-year-old Adam Raine claimed that their son had interacted with ChatGPT prior to his suicide, asserting that the chatbot failed to provide life-saving or protective advice. Last year, a teenager allegedly died by suicide after becoming excessively attached to a chatbot.

In 2024, the American Psychological Association released a policy statement noting that artificial intelligence does not fully comprehend human psychology and cannot grasp emotional cues or feelings that are not expressed in words. A study published in the Journal of the American Medical Association (JAMA) found that these chatbots sometimes provide information or advice that is inappropriate for an individual’s medical condition, and can even be dangerous.

These instances highlight the risks of deploying digital mental health tools without properly safeguarding the rights, dignity, and mental health of people living in an increasingly artificially intelligent society.

Where Does Indian Law Stand on AI and Mental Health?

Indian legislation conveys a clear stance on the role of technology in mental health, even though it does not specifically mention AI. Section 43A of the Information Technology Act, 2000 makes a body corporate liable if it improperly handles an individual’s sensitive personal data in a way that causes harm to that individual. The accompanying IT Rules of 2011 classify an individual’s mental health information as sensitive personal data. However, as the Digital Personal Data Protection (DPDP) Act, 2023 is set to replace Section 43A and the 2011 Rules, it is notable that the new law neither deals specifically with mental-health data nor recognises a separate category of sensitive personal data.

The Mental Healthcare Act, 2017 makes another crucial point about who may provide mental healthcare. Section 2(r) of the Act defines a “mental health professional” as a person who satisfies specific criteria: a psychiatrist, a professional registered with the State Authority, or a holder of a post-graduate degree in psychiatry or mental health in specified Indian systems of medicine (Ayurveda, Homoeopathy, Unani, or Siddha). Artificial intelligence systems plainly cannot qualify as mental health professionals under this definition. As a result, AI cannot autonomously perform diagnosis, treatment, or counselling.

Moreover, Section 20(2)(d) of the Mental Healthcare Act guarantees the right to privacy to every individual receiving treatment for a mental disorder. Numerous AI systems that address mental health issues have been found to breach these privacy standards, and they should therefore not be permitted to offer users unregulated mental health guidance. This underscores the necessity for specific legislation that mitigates the impact of unrestricted AI responses on an individual’s mental health.

How Are the EU and the US Setting Limits on AI in Mental Health?

Under the EU AI Act, passed in 2024, AI used in healthcare is classified as “high-risk” under Article 6(2) read with Annex III. This obliges such systems to comply with a set of requirements, including human oversight, transparent communication with users, a strong focus on cybersecurity, trustworthy data governance, and risk and quality management. This approach shows that the EU recognizes that while AI can offer benefits in healthcare, it also holds the potential to cause real harm to humans.

Similarly, the Wellness and Oversight for Psychological Resources Act was introduced in the US state of Illinois, targeting the use of AI in mental health services. The Act makes it unlawful for an AI system to conduct mental health evaluations or offer any form of mental health counselling unless it is supervised by licensed mental health professionals. This reflects a growing awareness of the importance of human insight and accountability in mental healthcare. These laws offer the Indian government useful lessons on regulating AI chatbots in a balanced and responsible manner.

The Way Forward:

Chatbots can support therapy through guided self-help exercises, psychoeducation, and mood monitoring. However, to make their use safe, strict guidelines need to be imposed. Firstly, a specialised regulatory body of psychological health specialists should be established to supervise the functioning of AI systems. This body would ensure that people in vulnerable circumstances are not presented with harmful recommendations. Secondly, there must be an integrated crisis-response mechanism that can quickly connect users to trained specialists in situations involving signs of self-harm or severe distress.

Thirdly, the use of AI chatbots in place of human mental health professionals should be prohibited by law. These chatbots should never operate on their own as diagnostic, therapeutic, or advisory tools in mental healthcare; their role should be limited to providing information.

Moreover, mental health data is personal in nature. Regulations should be established to ensure that such data is handled in adherence to data protection laws, including the Digital Personal Data Protection Act, 2023.

Finally, there must be clearly specified rules on liability and ethical accountability for failures or harm resulting from AI systems. In short, when regulation is weighed against technological innovation, innovation should never come at the cost of human well-being. We should take advantage of what technology offers, but never forget that human health is sacrosanct and cannot be compromised under any circumstances.
