
India’s New IT Rules on AI & Deepfakes: What Changes from 20 February 2026

New IT Rules on AI and Deepfakes in India, effective February 2026


Picture a video showing a politician saying something they never actually said. Or a voice recording of your boss making a comment that could damage their reputation. In 2024 and 2025, deepfakes like these were already on the rise in India. Starting 20 February 2026, new laws will make the rules stricter.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, introduce India’s first clear set of rules for AI-generated content and deepfakes.

If you run an app, manage a digital platform, or create content using AI tools, this affects you.

What exactly changed on 20 February 2026?

The Ministry of Electronics and Information Technology (MeitY) announced the IT Rules Amendment 2026 on 10 February. The new rules take effect from 20 February 2026.

In broad terms, the amendments do the following:

- define “synthetically generated information” (SGI) and bring deepfakes squarely within the IT Rules;
- require clear labelling and disclosure of AI-generated content; and
- sharply shorten the timelines for taking down unlawful content after a valid government notice.

In summary, if you are a platform, creator, or intermediary, you now need to show that you are “AI-aware” and following the rules, or you could lose safe-harbour protection.

What does “synthetically generated information” (SGI) mean?

The rules now describe “synthetically generated information” (SGI) as audio, visual, or audio-visual content that is created or altered by AI or algorithms so that it looks real or authentic, even though it depicts a person or event that never actually happened.

For example:

- an AI face swap that places a real person in a scene they were never in;
- a cloned voice that mimics a real person’s speech; and
- a fully synthetic video or recording of an event that never occurred.

All of these are examples of SGI.

What is not considered SGI?

Just because something is edited digitally does not mean it is a deepfake. The rules carve out routine edits, for example:

- colour correction and basic enhancement;
- cleanup edits such as cropping or noise removal; and
- adding subtitles or captions.

If you are just cleaning up a video or adding subtitles, you are not creating SGI. The law is focused on edits that create a false sense of realism, not on simple improvements.

Labelling and disclosure rules for AI‑generated content

A key update is the new labelling requirement.

Platforms and creators need to make sure that:

- AI-generated or synthetic content that appears real carries a clear, visible label on screen (for example, a watermark or overlay text); and
- where possible, the disclosure is also embedded in the file’s metadata.

A practical example: an AI-generated product video should display an “AI-generated” watermark on screen, and the uploaded file’s metadata should flag it as synthetic.

From a product design perspective, this means your upload flow should ask creators to declare AI-generated content, and your pipeline should apply the visible label and metadata flag automatically.

If you do not follow these rules, your app could lose safe-harbour protection and face penalties for hosting unlabelled deepfakes.
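The labelling requirement above can be sketched in code. This is a minimal, hypothetical example of how a platform might record both disclosures for an AI-generated upload; the field names (`synthetic`, `generation_tool`) are illustrative, not taken from the rules.

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    media_id: str
    caption: str
    metadata: dict = field(default_factory=dict)

def label_as_synthetic(item: MediaItem, tool_name: str) -> MediaItem:
    """Attach both a visible label and a metadata flag to AI-generated media."""
    # Visible disclosure, shown on screen alongside the content.
    item.caption = f"[AI-generated] {item.caption}"
    # Machine-readable disclosure for downstream platforms and audits.
    item.metadata["synthetic"] = True
    item.metadata["generation_tool"] = tool_name
    return item

item = label_as_synthetic(MediaItem("vid-001", "Product demo"), "gen-video-tool")
print(item.caption)                 # [AI-generated] Product demo
print(item.metadata["synthetic"])   # True
```

Keeping the two disclosures in one function makes it harder for a release to ship the visible watermark without the metadata flag, or vice versa.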

How the new 3-hour takedown rule affects you

Until 2026, intermediaries had up to 36 hours to take down or block content after receiving a valid government notice.

Now, most unlawful content must be removed within 3 hours.

In very sensitive cases, such as deepfakes that threaten national security, public order, or involve non-consensual intimate imagery (NCII), takedowns may be required within 2 hours.

Why this matters for apps and platforms: if your team still handles government takedown notices manually, you will quickly miss the deadline and risk being treated as non-compliant.
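The deadline arithmetic above is simple enough to automate. Here is a minimal sketch, assuming a notice-intake system tags each notice with a category; the category names are placeholders, not terms from the rules.

```python
from datetime import datetime, timedelta

# Categories the article describes as attracting the tighter 2-hour window.
# These labels are assumptions for the example.
SENSITIVE = {"national_security", "public_order", "ncii"}

def takedown_deadline(notice_received: datetime, category: str) -> datetime:
    """Return the removal deadline: 2 hours for sensitive notices, else 3 hours."""
    hours = 2 if category in SENSITIVE else 3
    return notice_received + timedelta(hours=hours)

received = datetime(2026, 2, 20, 10, 0)
print(takedown_deadline(received, "defamation"))  # 2026-02-20 13:00:00
print(takedown_deadline(received, "ncii"))        # 2026-02-20 12:00:00
```

In practice you would also want alerting well before the deadline, since a 3-hour window leaves little slack for human review.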

How do the new rules affect each group?

1. Social media and UGC platforms

If you manage a social network, short video app, or online community, you should:

- update your terms of service and community guidelines to cover SGI;
- add labelling support so creators can declare AI-generated uploads; and
- build a fast-track moderation workflow for suspected deepfakes.

A practical step:
Add a content moderation API that can spot facial manipulation and voice cloning, then send flagged content to a fast-track review queue.
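The fast-track review queue suggested above can be sketched with a simple priority queue. The detection signal names (`face_manipulation`, `voice_clone`) are hypothetical labels standing in for whatever your moderation API returns.

```python
import heapq

# Signals that should jump the moderation queue. Placeholder names.
FAST_TRACK_SIGNALS = {"face_manipulation", "voice_clone"}

def priority(signals) -> int:
    """Lower number = reviewed sooner."""
    return 0 if FAST_TRACK_SIGNALS & set(signals) else 1

queue = []
for item_id, signals in [
    ("post-1", ["spam"]),
    ("post-2", ["face_manipulation"]),
    ("post-3", ["voice_clone", "spam"]),
]:
    heapq.heappush(queue, (priority(signals), item_id))

review_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(review_order)  # suspected deepfakes surface before routine flags
```

Here the two suspected deepfakes (`post-2`, `post-3`) surface before the routine spam flag, which is the behaviour the 2–3-hour windows effectively demand.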

2. Messaging and OTT apps

The rules focus on WhatsApp-style chats, VoIP calls, and OTT video platforms, especially targeting:

- deepfakes that are forwarded at scale; and
- synthetic media spread through large groups or broadcast channels.

These platforms are required to act fast once such content is reported, within the 2–3-hour takedown window.

Tip:
Use origin tags for shared media, like “first posted on X platform,” and set up AI detection for content that is forwarded or shared many times.
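The origin-tag tip above can be sketched as a small provenance record: stamp media with its origin on first upload, count forwards, and trigger AI-content detection once an item spreads widely. The threshold of 5 forwards is an arbitrary example, not a figure from the rules.

```python
# Forwards at or above this count trigger AI-content detection (illustrative).
FORWARD_THRESHOLD = 5

def make_record(media_id: str, origin_platform: str) -> dict:
    """Create a provenance record when media is first uploaded."""
    return {"media_id": media_id,
            "origin": f"first posted on {origin_platform}",
            "forward_count": 0}

def forward(record: dict) -> bool:
    """Register one forward; return True when AI detection should run."""
    record["forward_count"] += 1
    return record["forward_count"] >= FORWARD_THRESHOLD

rec = make_record("img-42", "X platform")
flags = [forward(rec) for _ in range(6)]
print(rec["origin"])  # first posted on X platform
print(flags)          # detection triggers from the 5th forward onward
```

The origin tag travels with the media, so even a heavily re-shared deepfake can be traced back to where it first appeared.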

3. Creators and influencers

If you are an influencer, educator, or marketing professional who uses AI tools, this applies to you.

Here is a simple rule for creators: if your content is AI-generated and could be mistaken for something real, label it clearly.

There are legal risks. Creators can be held responsible if their content is meant to mislead people, especially about elections, money, or public health.

What steps should you take right now?

Use this practical checklist to take action right away.

If you run an app or platform:

- map where AI-generated content can enter your product;
- add visible labelling and metadata disclosure for synthetic media;
- set up an automated pipeline for government takedown notices that can meet the 2–3-hour deadlines; and
- update your terms of service and moderation policies to cover SGI.

If you’re a creator or run a small business:

- clearly label any AI-generated content that appears real; and
- be especially careful with synthetic content about elections, finance, or public health.

Here are some worked examples to show how the rules apply in practice.

Example 1: E-commerce brand
An online fashion brand uses AI to show a model wearing outfits she never actually wore. Under the new rules, the brand must label the images or videos as AI-generated and, where possible, flag them as synthetic in the file metadata.

Example 2: Ed-tech app
An ed-tech company uses an AI avatar to teach kids. The video is synthetic, but the script is accurate. They must still label the avatar videos as AI-generated, because the avatar appears real even though the information is correct.

Example 3: Political disinformation
A political party shares a deepfake video that falsely claims their opponent said something offensive. If someone reports the video, it could be removed within three hours. The party could also face damage to their reputation and possible legal trouble later.

Common pitfalls to avoid

New rules can be confusing. Here are some common mistakes people are making:

- assuming the rules only apply to platforms, when creators can also face legal action;
- assuming private chats are completely exempt; and
- continuing to handle takedown notices manually, which makes the 2–3-hour deadlines nearly impossible to meet.

Frequently asked questions

What counts as a deepfake under the new rules?
A deepfake is any audio, visual, or audio-visual content created or altered by AI or algorithms to look real or authentic, even though it shows a person or event that never actually happened. Simple edits like colour correction or subtitles are not deepfakes.

Do I have to label AI-generated content?
Yes. If your content is synthetically generated or AI-generated and appears real, you must clearly label it on the screen (e.g., watermark or text) and, where possible, in metadata.

How quickly must unlawful content be taken down?
For most unlawful content, intermediaries must remove or block it within 3 hours of a valid government notice. In sensitive cases (national security, non-consensual intimate imagery), the expectation may be as low as 2 hours.

Can individual creators be held liable?
The focus is on intermediaries and platforms, but creators who intentionally spread misleading AI-generated content, especially in elections, finance, or public order, can still face legal action under other laws.

Do the rules apply to private chats?
The rules mainly regulate public or widely shared content. Private chats are not automatically targeted, but once a deepfake is forwarded at scale or reported, platforms must act fast under the 2–3-hour window.

What happens if a platform does not comply?
Non-compliant intermediaries may lose safe-harbour protection, face penalties, and have their content taken down more quickly. In serious cases, repeated violations could lead to regulatory action or even the blocking of services.

If you update your product, policy, and content practices to follow the February 2026 rules, you help protect your platform from penalties, lower user risk, and build trust at a time when it is easy to fake things online.

— — —

Disclaimer:
This article is published for general legal awareness and informational purposes only, and should not be construed as legal advice or a solicitation to act.

About the Author:
Joginder Poswal is an advocate enrolled with the Bar Council of Punjab & Haryana (Enrolment No. PH/9616/2023) and practising exclusively in non-litigation legal advisory, drafting, and consultation under Indian law.

For more information, please refer to the contact details provided on this website.
