Daily Breach

Cyber Weekly Trending

AI Chatbots Are Not Your Friends: Scientists Warn of Rising Emotional Dependence

Introduction

Artificial intelligence chatbots are rapidly evolving from productivity tools into perceived companions. While this shift has fueled widespread adoption, leading AI researchers are now warning policymakers that emotional reliance on AI systems poses emerging societal and mental health risks that require urgent regulatory attention.

Background and Context

Tens of millions of users worldwide are engaging with AI-powered chatbots not just for information, but for emotional connection. According to the latest International AI Safety Report, AI companion applications have seen explosive growth since 2023, with usage driven by curiosity, entertainment, and increasingly, loneliness.

Specialized AI companion platforms such as Replika and Character.ai report user bases in the tens of millions. At the same time, general-purpose AI systems like ChatGPT, Gemini, and Claude are also being used as informal emotional supports, blurring the line between utility software and digital companionship.

Expert Warning on AI Companionship

Yoshua Bengio, professor at the University of Montreal and lead author of the report, cautioned that emotional bonds can form even with standard conversational AI.

In his assessment, prolonged interaction and personalized responses create conditions where users begin to perceive AI systems as companions rather than tools. This dynamic, he warns, can subtly reshape human behavior and expectations around social interaction.

Psychological and Social Risks

While research findings remain mixed, the report highlights concerning trends observed among heavy users of AI companions. Some studies indicate increased feelings of loneliness, reduced real-world social engagement, and emotional dependency among frequent users.

Experts point out that chatbots are inherently designed to be agreeable and supportive. This “sycophantic” behavior prioritizes short-term user satisfaction, which may not always align with long-term psychological well-being.

Bengio compared these risks to those previously seen with social media platforms, where engagement-driven design unintentionally amplified harmful behavioral patterns.

Political and Regulatory Developments

The warning arrives amid growing political scrutiny in Europe. Lawmakers in the European Parliament have recently urged the European Commission to examine whether AI companion services should face restrictions under the EU’s Artificial Intelligence Act, particularly due to concerns about adolescent mental health.

Bengio acknowledged that the impact on children and teenagers is drawing increasing attention within policy circles, signaling that regulatory intervention is likely.

However, he advised against narrowly targeting AI companion apps. Instead, he advocated for horizontal legislation that addresses multiple AI risks collectively, rather than isolated, use-case-specific rules.

Broader AI Risk Landscape

The International AI Safety Report outlines a wider spectrum of threats policymakers must confront, including:

  • AI-assisted cyberattacks
  • AI-generated sexually explicit deepfakes
  • AI systems providing guidance on designing biological weapons

The report emphasizes that emotional manipulation is only one dimension of a much larger governance challenge facing governments worldwide.

Outlook

With a global AI governance summit scheduled to begin on February 16 in India, international consensus on managing AI risks is expected to intensify. Bengio and fellow experts have urged governments to strengthen internal AI expertise to keep pace with the technology’s rapid evolution.

As AI systems become more human-like in conversation and behavior, the boundary between assistance and influence continues to erode. The report’s message is clear: AI may sound empathetic, but it is not a friend, and treating it as one carries consequences that policymakers can no longer ignore.

Sources

Adv. Aayushman Verma

Adv. Aayushman Verma

About Author

Adv. Aayushman Verma is a cybersecurity and technology law enthusiast pursuing a Master’s in Cyber Law and Information Security at the National Law Institute University (NLIU), Bhopal. He has qualified the UPSC CDS and AFCAT examinations multiple times and his work focuses on cybersecurity consulting, digital policy, and data protection compliance, with an emphasis on translating complex legal and technological developments into clear insights on emerging cyber risks and secure digital futures.

Leave a Reply

Your email address will not be published. Required fields are marked *