Introduction
The Ministry of Electronics and Information Technology (MeitY) of India has proposed sweeping changes to the digital regulatory framework aimed at tackling the rapidly growing misuse of artificial intelligence (AI)-generated content on social media. As part of this initiative, the draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 introduces mandatory labelling and metadata traceability for synthetic content. The announcement on 22 October 2025 signals India’s intent to ensure authenticity, transparency and accountability as generative-AI tools proliferate.
Background / Context
Generative AI — which enables creation of realistic videos, images, audio and text — has become increasingly accessible. While this opens up innovation, it also elevates risks: deepfakes, impersonations, non-consensual synthetic media, election manipulation and targeted misinformation are proliferating. With nearly a billion internet users and a diverse socio-cultural landscape, India faces heightened exposure to these risks.
The existing IT Rules (2021) place obligations on social media intermediaries (platforms) and digital media publishers, but do not specifically address AI-generated “synthetic” content or impose any quantified, visible labelling standard.
Technical Details of the Proposal
Definition & Scope
- The draft defines “synthetically generated information” as content “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that reasonably appears to be authentic or true.”
- The scope includes all types of synthetic content: text, images, audio, video — not limited to photorealistic deepfakes.
Labelling & Metadata Requirements
- Every piece of synthetic content must carry a visible or audible label/disclosure. For visual media, the label must cover at least 10% of the surface area of the display. For audio/video, the marker must appear during the first 10% of playback.
- In addition, there must be non-removable metadata or identifiers embedded in the content that signal its synthetic origin, ensuring traceability and transparency. Platforms or users are not permitted to remove or alter those identifiers.
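The 10% thresholds reduce to simple arithmetic. As an illustration only (the draft prescribes the thresholds, not any particular layout), assuming a full-width banner label on visual media and an opening disclosure segment on audio/video, the minimum label dimensions work out as follows:

```python
def min_banner_height(frame_w: int, frame_h: int, coverage_pct: int = 10) -> int:
    """Minimum height (px) of a full-width banner so that it covers at
    least `coverage_pct` percent of the frame area. For a full-width
    banner, area fraction = (h * frame_w) / (frame_w * frame_h) = h / frame_h.
    Integer arithmetic avoids float-rounding surprises (ceiling division)."""
    return (frame_h * coverage_pct + 99) // 100

def disclosure_window(duration_s: float, fraction: float = 0.10) -> float:
    """Length (s) of the opening segment within which an audio/video
    disclosure must appear, per the first-10%-of-playback rule."""
    return duration_s * fraction

# A 1920x1080 frame needs a full-width banner at least 108 px tall;
# a 60-second clip must carry its disclosure within the first 6 seconds.
print(min_banner_height(1920, 1080))  # 108
print(disclosure_window(60.0))        # 6.0
```

The banner assumption is purely illustrative; any overlay shape that meets the 10% area floor would satisfy the draft as described.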
User Self-Declaration & Platform Duties
- The proposal places a dual responsibility:
- Users uploading content must declare whether it is AI-generated or synthetically altered.
- If the user fails to declare, the platform must deploy “reasonable and proportionate” technical measures to detect and label such content proactively.
- Platforms (especially significant social media intermediaries) will be required to integrate systems for detection, labelling, embedding metadata, and monitoring compliance.
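The draft does not prescribe a metadata format, but the traceability requirement resembles existing provenance schemes (such as C2PA manifests) that bind a machine-readable record to a cryptographic hash of the content, so that stripping or editing the file is detectable. A minimal, stdlib-only sketch of that idea, with a hypothetical record schema:

```python
import hashlib
import json

def provenance_record(content: bytes, generator: str) -> str:
    """Build a JSON provenance record (hypothetical schema) binding a
    synthetic-origin declaration to a SHA-256 digest of the content bytes."""
    record = {
        "synthetic": True,       # the self-declaration the draft requires
        "generator": generator,  # tool that produced the content
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

def verify(content: bytes, record_json: str) -> bool:
    """Check that the record still matches the bytes it was issued for."""
    record = json.loads(record_json)
    return (record.get("synthetic") is True
            and record.get("sha256") == hashlib.sha256(content).hexdigest())

clip = b"example synthetic media bytes"
rec = provenance_record(clip, "some-genai-tool")
print(verify(clip, rec))                # True
print(verify(clip + b"tampered", rec))  # False: any edit breaks the binding
```

A production scheme would embed the record inside the file container and sign it; the point here is only that hash-binding makes removal or alteration of the identifier detectable, which is what the non-removability requirement demands.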
Enforcement & Safe-Harbour Implications
- The amendment links intermediary liability (safe-harbour protection) to compliance with these labelling and traceability obligations. If a platform becomes “aware” of synthetic content and fails to label it appropriately, it may lose safe-harbour protection.
- The draft is open for public and industry consultation until 6 November 2025.
Timeline of Events
- 22 October 2025: MeitY publishes the draft amendment to the IT Rules targeting AI-generated content.
- 6 November 2025: Deadline for public feedback on the draft rules.
- Future: Once finalised, these rules will be notified in the Gazette and become enforceable.
Related Incidents and Global Context
- The rise of deepfakes involving celebrities and public figures in India has added urgency to regulation. Bollywood personalities, for example, have taken non-consensual AI-generated videos depicting them to court.
- Globally, jurisdictions such as the European Union and China are moving towards watermarking and labelling requirements for synthetic media. India’s quantified visibility standard (10%) is among the first of its kind.
Impact / Scope
Platforms & Users
- Social media platforms (such as Instagram, X, YouTube, WhatsApp) would need to invest in detection tools, metadata systems and content labelling workflows. The cost and complexity may be significant.
- Individual users and creators must be aware: even benign synthetic content could fall within scope and require labelling.
Risks & Compliance
- Platforms failing to comply may lose their intermediary “safe-harbour” protections, exposing them to legal liability and regulatory action.
- The regulation may lead to increased removal/takedown of unlabelled content, potentially raising concerns about freedom of expression and over-moderation.
Societal & Democratic Implications
- By compelling transparency of synthetic content, the rules aim to protect the integrity of public discourse, especially during elections where deepfakes can be weaponised.
- The labelling requirement also serves to boost digital literacy: enabling users to better recognise synthetic media.
Expert Commentary
- Some experts applaud the move as a timely response to generative-AI misuse. The quantified visibility standard (10%) is seen as pioneering.
- Others caution that labelling alone is insufficient. Not all synthetic content is harmful, and detection tools will inevitably lag behind generation capabilities. The burden on platforms may inadvertently lead to over-censorship.
- The term “synthetic” as defined is broad and may cover benign edits, satire or remix culture — raising concerns about chilling effects on creativity and lawful expression.
Outlook
- Once enacted, India could emerge as one of the first countries with enforceable, visible labelling standards for synthetic content at this scale.
- The policy may prompt generative AI tool developers and platforms to build in “label-by-default” workflows, and embed metadata as a technical norm.
- Further regulation may follow: we can expect standards around provenance (tracking content origin), algorithmic transparency and cross-border cooperation on generative AI governance.
- However, success will depend on effective implementation, platform compliance, technical robustness, and user awareness. Without these, the labelling may be symbolic rather than transformative.
References / Source Attribution
- Reuters, “India proposes strict rules to label AI content citing growing risks.”
- Srishti Joshi, ThePrint, “Labelling AI content alone won’t solve misinformation. India needs smarter regulation.”
- Samidha Jain, Forbes India, “Explained: India’s AI content labelling regulation.”
- India Today, “India to crack down on deepfakes, new rule may force companies to label AI-generated content.”
- The Economic Times, “Centre moves to regulate deepfakes, AI media; MeitY proposes amendments to IT rules.”



