Labelling the Synthetic: Why India’s Proposal to Label AI-Generated Content Raises Hard Questions

Written by Hetal Desai

7 min read

India’s proposal to mandate labelling of AI-generated or synthetic content is often described as a response to the rise of deepfakes. That description captures the trigger, but not the legal architecture through which the response is being constructed. Rather than proceeding through a standalone artificial intelligence statute, the proposal operates through India’s intermediary regulation framework, specifically by expanding what “due diligence” is expected to mean under the Information Technology Act, 2000 (‘IT Act’) and the rules framed under it.

This means that AI content labelling is not positioned as a new category of illegality, but as a compliance obligation whose breach may have cascading implications for intermediary liability, platform immunity, and content governance. The law is not prohibiting AI-generated content as such; it is conditioning legal protection on transparency around its use.

At a conceptual level, the proposal relies on a familiar regulatory technique. When direct prohibition is neither feasible nor desirable, law often turns to disclosure. What complicates this approach in the context of generative AI is that the fact being disclosed is neither binary nor always observable. Whether content is “AI-generated” is often a matter of degree, technical inference, and design choice.

India’s existing legal framework was not designed to answer these questions. The IT Act addresses impersonation, deception, and unlawful content, but it does not distinguish between human and machine authorship. Section 79 of the IT Act grants intermediaries conditional safe harbour from liability for third-party content, provided they observe due diligence and do not initiate or modify the transmission of information. The Intermediary Guidelines issued under the IT Act operationalise this due diligence obligation, primarily through notice-and-takedown and grievance redress mechanisms.

The proposed AI content labelling obligations enter this framework by expanding the scope of due diligence under Rule 3 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘the IT Rules’ or ‘the Intermediary Guidelines’). They do so without rewriting the core architecture of intermediary immunity, which creates both their regulatory force and their fragility.

Global landscape

Comparable regulatory impulses are visible outside India, though they have been channelled through materially different legal pathways. The European Union, for instance, has approached AI content transparency primarily through the Artificial Intelligence Act, where disclosure obligations are framed as product and system-level duties rather than intermediary due diligence. Certain uses of generative AI, particularly those capable of producing deepfakes or synthetic representations of real persons, attract explicit labelling obligations directed at deployers of AI systems. These duties sit alongside, rather than within, the EU’s intermediary liability regime, preserving the conceptual separation between platform neutrality and AI governance.

In the United States, regulatory attention has been sectoral and risk-specific, with emphasis on election integrity, consumer protection, and deceptive practices. Disclosure requirements have emerged through agency guidance and state-level legislation, often tied to impersonation or materially misleading synthetic media.

In practice, these disclosure obligations, where they exist, are being operationalised through a combination of provenance, watermarking, and platform-level signalling tools rather than through reliable downstream detection. Large technology providers have experimented with cryptographic watermarking and metadata-based provenance systems, such as content credentials frameworks that attach information about a file’s origin and method of creation at the point of generation. Some generative image and video tools embed invisible watermarks or structured metadata indicating AI involvement, while platforms layer user-facing labels based on self-declarations, tool-specific signals, or internal classifiers. These approaches are uneven in their effectiveness. Metadata can be stripped as content moves across platforms, watermarks degrade under compression or editing, and detection models produce probabilistic outputs rather than definitive attribution. In India, similar techniques are largely present only through voluntary platform practices or imported tooling rather than regulatory mandate, with open-source and offline generation tools remaining entirely outside traceability regimes. The global experience so far suggests that labelling works most consistently where it is embedded upstream in controlled systems, and becomes increasingly fragile as content circulates across open, interoperable digital environments.
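
To make the fragility point concrete, the following is a minimal sketch, assuming the open-source Pillow imaging library and a purely illustrative “ai_generated” metadata key (not any standardised provenance field), of how a label embedded as file metadata can silently disappear when content is re-encoded in an ordinary platform pipeline.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Generate a stand-in "AI image" and attach an illustrative provenance flag
# as a PNG text chunk at the point of creation.
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("ai_generated", "true")
img.save("labelled.png", pnginfo=meta)

# The label survives as long as the original file is passed around intact.
reloaded = Image.open("labelled.png")
print(reloaded.text.get("ai_generated"))          # prints: true

# A routine re-encode, as a platform or messaging app might perform,
# produces a JPEG that carries no trace of the original marker.
reloaded.convert("RGB").save("reencoded.jpg", quality=85)
stripped = Image.open("reencoded.jpg")
print(getattr(stripped, "text", {}))              # prints: {} (marker gone)
```

A similar erosion affects invisible watermarks under heavy compression, cropping, or screenshotting, which is why downstream detection cannot simply be assumed.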

When does AI involvement become legally material under the IT Rules?

The first issue with the proposal is the absence of a legally meaningful threshold for AI involvement. The obligation speaks broadly of AI-generated or synthetic content, but does not specify when AI use crosses the line from incidental assistance to material authorship.

This matters because the IT Rules impose obligations on “users” and “intermediaries” based on acts they can reasonably understand and control. If the duty to label is triggered by any AI involvement, the obligation becomes detached from user intent and knowledge. Modern content creation workflows routinely involve machine learning systems embedded at the operating system or software level. A user correcting grammar, enhancing lighting, or removing background noise may be interacting with AI without any conscious awareness.

From a legal standpoint, the obligations proposed under Rule 3 of the IT Rules are framed around reasonableness and knowledge; stretching them to cover invisible or ambient AI processes risks converting due diligence into strict liability by stealth. From a technical standpoint, such an expansion assumes a clarity of attribution that does not exist in practice.

Other jurisdictions have attempted to address this by tying disclosure obligations to the risk of deception rather than the mere presence of AI. India’s proposal, as currently articulated, lacks this limiting principle, which makes both compliance and enforcement indeterminate.
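
By way of illustration only, a deception-risk limiting principle of the kind other jurisdictions use could be expressed as a simple rule. The sketch below is hypothetical: the field names, the 50 per cent materiality cut-off, and the risk factors are placeholders chosen for clarity, not thresholds drawn from the proposal or from any statute.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    # Hypothetical inputs a labelling rule might consider.
    synthetic_fraction: float    # rough share of the work generated by AI
    depicts_real_person: bool    # portrays an identifiable individual
    presented_as_factual: bool   # framed as a record of real events

def label_required(s: ContentSignals) -> bool:
    """Label only where AI use is material AND the content could mislead."""
    material_ai_use = s.synthetic_fraction >= 0.5      # illustrative cut-off
    deception_risk = s.depicts_real_person or s.presented_as_factual
    return material_ai_use and deception_risk

# Grammar correction on a caption: incidental assistance, no deception risk.
print(label_required(ContentSignals(0.05, False, False)))   # False
# Fully synthetic video of a real public figure "giving a speech".
print(label_required(ContentSignals(1.0, True, True)))       # True
```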

Who, under the IT Act framework, is realistically capable of compliance?

Another issue arises from the distribution of technical capability across the content ecosystem. The IT Act and the Intermediary Guidelines presuppose that responsibility follows control. Intermediaries are protected precisely because they do not exercise editorial control over third-party content, while users are regulated on the assumption that they know what they are publishing.

AI-generated content disrupts this allocation. Platforms do not reliably know whether content is AI-generated. Detection tools are probabilistic and error-prone. Watermarking and provenance standards are unevenly adopted and easily circumvented. Open-source and offline generation tools produce content with no traceable origin.
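
The scale problem can be made concrete with back-of-the-envelope arithmetic. The figures below are assumptions chosen for illustration, not measured error rates, but they show how even a seemingly accurate detector produces mostly false flags when genuine AI content is a small share of uploads.

```python
# Illustrative base-rate arithmetic for a platform-scale detector.
uploads = 1_000_000            # items screened in a day (assumed)
ai_share = 0.02                # assume 2% are actually AI-generated
true_positive_rate = 0.95      # detector flags 95% of AI content
false_positive_rate = 0.05     # but also flags 5% of human content

ai_items = uploads * ai_share              # 20,000
human_items = uploads - ai_items           # 980,000

correct_flags = ai_items * true_positive_rate      # 19,000
wrong_flags = human_items * false_positive_rate    # 49,000

precision = correct_flags / (correct_flags + wrong_flags)
print(f"Share of flagged items that are actually AI-generated: {precision:.0%}")
# -> roughly 28%: most flags would land on human-made content.
```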

If users are expected to self-declare AI use, enforcement depends on honesty and awareness. If intermediaries are expected to verify, they are being asked to perform technical functions that exceed current capabilities, particularly at scale. If AI tool providers are implicated, the obligations fall outside the jurisdictional and enforcement logic of the IT Act, which is intermediary-centric.
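
As a sketch of what this allocation looks like in practice, the function below combines the three signals an intermediary could realistically draw on: the uploader’s self-declaration, any provenance marker that survived transit, and a probabilistic classifier score. All names and thresholds are hypothetical; the point is that none of the signals is conclusive on its own.

```python
from typing import Optional

def decide_label(declared_ai: bool,
                 provenance_marker: Optional[str],
                 classifier_score: float) -> str:
    # Self-declarations and surviving provenance data are the only
    # near-certain signals, and both depend on upstream cooperation.
    if declared_ai or provenance_marker is not None:
        return "apply AI label (declared or provenanced)"
    # A classifier score is probabilistic, so a high score is better
    # treated as a trigger for review than as proof of AI generation.
    if classifier_score >= 0.9:
        return "route to human review"
    return "no label"

# Undeclared upload, no surviving metadata, middling classifier score:
# the intermediary has no reliable basis to label at all.
print(decide_label(declared_ai=False, provenance_marker=None,
                   classifier_score=0.62))   # -> "no label"
```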

This creates a structural mismatch between legal responsibility and technical feasibility. The law places obligations where the architecture does not support compliance, encouraging conservative and over-inclusive practices that do little to advance the underlying regulatory goal.

Does AI labelling collapse the distinction between due diligence and proactive monitoring?

The third issue cuts to the core of intermediary immunity under Section 79 of the IT Act. Safe harbour is premised on intermediaries not initiating transmission, selecting receivers, or modifying information. Rule 3 of the IT Rules builds on this by requiring intermediaries to act upon actual knowledge, typically through notices or court orders.

Mandatory AI content labelling complicates this logic. If intermediaries are required to ensure that AI-generated content is labelled, they must either assess content themselves or enforce declarations through monitoring and verification mechanisms. This moves the intermediary closer to an active role in content classification.

The legal risk lies in how failure to label is characterised. If it is treated as a failure of due diligence under Rule 3, intermediaries risk losing the protection of Section 79. This exposure is not limited to the absence of a label; it potentially extends to liability for the underlying content itself.

In effect, the obligation blurs the line between reactive compliance and proactive governance, a distinction that Indian courts have historically treated as central to intermediary protection. Without clear guardrails, platforms may respond by restricting content or imposing blanket labelling policies, not because the law demands it, but because the cost of error becomes too high.

Does the expansion of due diligence incentivise over-compliance and speech distortion?

When compliance standards are vague, regulated entities tend to err on the side of caution. In the context of AI content, this can result in excessive labelling, deprioritisation, or removal of content that poses no real risk of deception. Creative works, satire, and experimental uses of AI may be disproportionately affected, not because they are unlawful, but because they are ambiguous.

From a legal perspective, this raises concerns about proportionality and indirect restraint on expression. The IT Act does not grant intermediaries a mandate to curate speech based on perceived risk alone. From a technical perspective, automated systems designed to minimise liability often lack the contextual sensitivity required to distinguish harm from harmless ambiguity.

Other regulatory regimes have attempted to mitigate this by confining disclosure obligations to clearly defined high-risk contexts, such as political communication or impersonation. The Indian proposal, by contrast, leaves too much to interpretation within the due diligence framework, increasing the likelihood of uneven and excessive enforcement.

Where this leaves India’s approach to AI content governance

India’s decision to address AI-generated content through the intermediary framework reflects a pragmatic instinct. It allows regulators to respond quickly to emerging harms without waiting for a comprehensive AI statute. It also leverages an existing enforcement ecosystem that platforms already understand. At the same time, it pushes that ecosystem close to its conceptual limits.

The issues explored above point to a deeper choice that now confronts Indian regulators. AI content labelling can be treated as an extension of content moderation, layered onto existing due diligence obligations. Or it can be treated as a distinct transparency regime, one that recognises the technical limits of detection, the distributed nature of AI creation, and the need for proportionality.

A workable framework will require regulators to resist the temptation to stretch intermediary due diligence beyond recognition. It will require clearer thresholds tied to actual risk of deception, explicit recognition of technical constraints, and guardrails that prevent loss of safe harbour from becoming the default enforcement lever. Transparency should function as an aid to trust, not as a proxy for control. How the current proposal evolves will shape not only India’s response to synthetic media, but the future contours of platform liability and digital speech.
