The Ministry of Electronics and Information Technology (MeitY), Government of India, has introduced amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These changes seek to make the process of identifying and removing unlawful content from online platforms more transparent and fair. By establishing clearer procedures, the amendments aim to protect users while ensuring that platforms fulfill their responsibilities. This overview explains the main changes, why they matter, and practical recommendations for those who create or manage online content.
Overview of the Key Amendments
The revisions limit the authority to issue content-removal directives to senior officials and require these directives to be detailed and justified. Platforms must act on such directives to retain their legal protection against liability for user-generated content, known as safe-harbor immunity under Section 79 of the Information Technology Act, 2000. Additionally, a monthly evaluation by a high-ranking official ensures that all actions are appropriate and lawful. The table below illustrates the differences before and after the amendments:
| Aspect | Before the Amendments | After the Amendments (Effective November 15, 2025) |
| --- | --- | --- |
| Authority to Issue Directives | Notifications could originate from various levels | Limited to Joint Secretary (or Director if unavailable) for government entities; Deputy Inspector General or above (with special authorization) for police |
| Requirements for Directives | General or broad notifications were permissible | Directives must be reasoned, specifying the legal basis, description of the unlawful activity, and exact location of the content (such as a URL) |
| Evaluation Process | Minimal structured review | Monthly assessment by a Secretary-level official to confirm necessity, balance, and legality |
The Importance of These Amendments
Platforms that ignore valid directives risk losing safe-harbor protection, which in practice pushes them to remove or restrict content rather than risk liability. For individuals and small organizations that create content, the stricter requirements on directives mean a reduced likelihood of unwarranted restrictions, provided they stay informed and respond promptly to any notices they do receive.
Who Has the Authority to Issue Directives?
i) Government Entities (Central or State): Only officials at the level of Joint Secretary or higher, or a Director in cases where no Joint Secretary is designated.
ii) Police Authorities: Only officers ranked Deputy Inspector General or above, who have received specific authorization.
This structure helps prevent misuse by limiting decision-making to experienced personnel.
Essential Elements of a Valid Directive
A legitimate directive must clearly outline:
– The relevant law and specific provision.
– The type of unlawful activity involved.
– The precise identifier of the content, such as a URL.
Platforms are encouraged to disregard unclear or overly general notices, focusing solely on those that meet these criteria.
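As a rough illustration only, the checklist above can be expressed as a short validation routine. Everything in the sketch below is an assumption made for clarity: the Directive structure, its field names, and the rank list are hypothetical, since real directives arrive as formal notices rather than structured data.

```python
# A minimal sketch of the validity checklist above. The Directive structure,
# field names, and rank list are illustrative assumptions; real directives
# arrive as formal notices, not structured data.
from dataclasses import dataclass

@dataclass
class Directive:
    statute: str        # the Act relied upon
    provision: str      # the specific section or rule
    unlawful_act: str   # description of the alleged unlawful activity
    content_url: str    # exact identifier of the content
    issuer_rank: str    # rank of the issuing official

AUTHORISED_RANKS = {"joint secretary", "director", "deputy inspector general"}

def missing_elements(d: Directive) -> list[str]:
    """Return the gaps that could justify seeking clarification or a challenge."""
    gaps = []
    if not d.statute or not d.provision:
        gaps.append("no specific legal provision cited")
    if not d.unlawful_act:
        gaps.append("unlawful activity not described")
    if not d.content_url.startswith(("http://", "https://")):
        gaps.append("no exact content identifier (URL)")
    if d.issuer_rank.strip().lower() not in AUTHORISED_RANKS:
        gaps.append("issuer not at an authorised rank")
    return gaps
```

A platform or advisor could run a check like `missing_elements` on each notice and treat a non-empty result as grounds to seek clarification before acting.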
How These Directives Differ from Blocking Orders
Directives under Rule 3(1)(d) relate to platforms’ obligations for safe-harbor compliance and responsible management under Section 79. In contrast, blocking orders under Section 69A follow a distinct process with additional protections, such as provisions for emergencies and confidentiality.
Mechanisms for Oversight and Evaluation
Every directive issued within a month undergoes scrutiny by an official at the Secretary level. This review verifies that the measures taken were essential, balanced, and compliant with legal standards, thereby promoting accountability.
Relevant Court Decisions
On September 20, 2024, the Bombay High Court invalidated a 2023 amendment that authorized a government Fact Check Unit to identify content concerning government affairs, deeming it unconstitutional. The decision highlighted infringements on constitutional rights to equality, freedom of expression, and professional practice, due to ambiguous terms that could unduly restrict intermediaries.
Previously, in March 2024, the Supreme Court temporarily halted the Fact Check Unit’s implementation until a final ruling. Furthermore, the 2015 Shreya Singhal decision established that platforms’ awareness of unlawful content is confined to official court or government orders, not private reports.
As a result, platforms should not base actions on Fact Check Unit identifications.
Guidance for Freelancers, Creators, and Small Businesses
Should your content receive a notice:
i) Obtain the Documentation: Request the full directive from the platform and examine it for the official’s rank, the cited law, and the specific content identifier. Any shortcomings may serve as a basis to dispute the action.
ii) Keep Detailed Records: Preserve the directive, your original material (including timestamps), and all related exchanges. This documentation is vital for any subsequent appeals.
iii) Respond Thoughtfully: If elements are unclear, seek clarification from the platform or issuing authority without delay. Point out inconsistencies to strengthen your position.
iv) Engage Resolution Processes: Submit an initial complaint to the platform’s designated officer for grievances. If the outcome is unsatisfactory, appeal to the Grievance Appellate Committee within 30 days.
v) Recognize Boundaries: Actions prompted only by Fact Check Unit findings are not valid following the court ruling; demand proper legal references from authorized officials.
Overlooking these steps can leave content down or put platform access at risk, which underscores the value of careful record-keeping; a minimal sketch of step ii) appears below. It is advisable to seek professional legal counsel for specific situations.
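The sketch below illustrates the record-keeping step, assuming the 30-day appeal window runs from the platform's decision as described above; the file name and field layout are illustrative only.

```python
# Minimal sketch: bundle the evidence worth preserving and compute the
# 30-day Grievance Appellate Committee appeal window. The 30-day figure
# comes from the rules; the file name and fields are illustrative.
from datetime import date, timedelta
import json

def log_notice(directive_text: str, content_copy_path: str,
               platform_decision_date: date) -> dict:
    """Record the directive, the original material, and the appeal deadline."""
    record = {
        "directive_text": directive_text,
        "content_copy_path": content_copy_path,  # keep timestamped originals
        "platform_decision_date": platform_decision_date.isoformat(),
        "gac_appeal_deadline": (
            platform_decision_date + timedelta(days=30)
        ).isoformat(),
    }
    with open("takedown_record.json", "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)
    return record
```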
Recommendations for Advisors Supporting Clients (Such as Agencies, Startups, and Influencers)
i) Establish Procedures for Compliance: Develop systematic approaches to manage directives within 24 to 48 hours, including legal assessment and record-keeping.
ii) Refine Platform Management Practices: Verify that accounts under your oversight include correct contact information for handling grievances and maintain logs of moderation activities for appeal purposes.
iii) Provide Training: Instruct teams responsible for online presence on recognizing legitimate directives and promptly referring them to legal experts.
iv) Address Potential Overreach: Refrain from accepting Fact Check Unit notices as binding; require formal directives or judicial orders.
These practices support effective risk management and regulatory alignment; a simple intake-tracking sketch for item i) appears below.
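As one possible way to operationalise item i), the sketch below records when a directive arrives and flags it once the suggested 24-to-48-hour review window lapses. The class and status labels are illustrative assumptions, not a prescribed workflow, and the time bounds come from this note's recommendation rather than any statutory deadline.

```python
# Illustrative intake tracker for the 24-48 hour handling window suggested
# above; the time bounds reflect this note's recommendation, not a statutory
# deadline, and the status labels are invented for the example.
from datetime import datetime, timedelta

class DirectiveIntake:
    def __init__(self, client: str, received_at: datetime):
        self.client = client
        self.received_at = received_at
        self.review_due = received_at + timedelta(hours=24)   # target legal review
        self.hard_limit = received_at + timedelta(hours=48)   # outer handling bound
        self.reviewed = False

    def status(self, now: datetime) -> str:
        """Classify the directive against the suggested handling window."""
        if self.reviewed:
            return "reviewed"
        if now > self.hard_limit:
            return "overdue"
        if now > self.review_due:
            return "escalate to counsel"
        return "within window"
```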
Forthcoming Regulations: Draft Provisions on Synthetic Media
In a related development, the Ministry has proposed rules requiring labels and traceable metadata for artificially generated or altered content, such as deepfakes. The draft calls for prominent labels covering at least 10 percent of the visual display area (or, for audio, the opening 10 percent of the duration), along with embedded provenance information for verification. Platforms with significant user bases must adopt technical measures to confirm authenticity. Feedback on these draft proposals is welcome until November 6, 2025, through the designated consultation channel. Those incorporating artificial intelligence in content creation should anticipate these requirements.
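For creators working out what the proposed threshold could mean in practice, the arithmetic is simple. The sketch below assumes the 10 percent figure refers to the frame's surface area for visuals and the opening portion of the duration for audio, as the draft describes; confirm the interpretation against the final rules before relying on it.

```python
# Back-of-the-envelope sketch of the proposed 10 percent labelling threshold.
# Assumes "10 percent" means surface area for visuals and leading duration
# for audio, per the draft; check the final rules before relying on this.

def min_label_area_px(width_px: int, height_px: int) -> int:
    """Smallest label area, in pixels, covering 10% of the frame."""
    return int(0.10 * width_px * height_px)

def audio_disclosure_seconds(total_seconds: float) -> float:
    """Length of an opening disclosure spanning 10% of the audio's duration."""
    return 0.10 * total_seconds

print(min_label_area_px(1920, 1080))     # 207360 px, e.g. a full-width 1920x108 banner
print(audio_disclosure_seconds(120.0))   # 12.0 seconds of a two-minute clip
```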
Concluding Observations
These amendments introduce stringent procedures for content takedowns on online platforms, emphasizing transparency and senior-level oversight. Effective November 15, 2025, these changes protect freelancers, creators, and small businesses from arbitrary restrictions while guiding platforms on lawful compliance.