Picture a video showing a politician saying something they never actually said. Or a voice recording of your boss making a comment that could damage their reputation. In 2024 and 2025, deepfakes like these were already on the rise in India. Starting 20 February 2026, new laws will make the rules stricter.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, introduce India’s first clear set of rules for AI-generated content and deepfakes.
If you run an app, manage a digital platform, or create content using AI tools, this affects you.
What exactly changed on 20 February 2026?
The Ministry of Electronics and Information Technology (MeitY) announced the IT Rules Amendment 2026 on 10 February. The new rules take effect from 20 February 2026.
In general, the amendments do the following:
- They place clear regulations on synthetically generated information (SGI).
- They require strict labelling and disclosure for AI-generated and deepfake content.
- They also reduce the time allowed for content takedown from 36 hours to just 3 hours for most unlawful content.
In summary, if you are a platform, creator, or intermediary, you now need to show that you are “AI-aware” and following the rules, or you could lose safe-harbour protection.
What does “synthetically generated information” (SGI) mean?
The rules now describe “synthetically generated information” (SGI) as:
- Any audio, visual, or audio-visual content that is created, changed, or generated by computers or algorithms; and
- content that appears real or authentic to the viewer or listener, and may portray a person or event in a way that looks indistinguishable from reality.
For example:
- A deepfake video where a celebrity appears to say something they never actually said.
- A fake voice call that sounds like your relative, asking for money.
- A loan approval letter made by an AI image tool that looks real but is fake.
All of these are examples of SGI.
What is not considered SGI?
Just because something is edited digitally does not mean it is a deepfake. The rules make some exceptions:
- Routine edits like adjusting colour, changing brightness, reducing noise, or improving resolution are not included.
- Simple format changes, such as converting MP4 files to MP3 or AVI to MOV, are also excluded.
- Edits made for accessibility, like adding captions or subtitles to help people with disabilities, are not considered SGI.
If you are just cleaning up a video or adding subtitles, you are not creating SGI. The law is focused on edits that create a false sense of realism, not on simple improvements.
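To make the distinction concrete, here is a minimal sketch of a helper that flags which edit operations would push content into SGI territory. The operation names are illustrative labels for this example, not terms taken from the rules:

```python
# Illustrative sketch: checking edit operations against the SGI exemptions.
# The operation names below are hypothetical labels, not terms from the rules.

# Edits the amendment explicitly exempts: routine cleanup, format changes,
# and accessibility aids.
EXEMPT_OPERATIONS = {
    "colour_correction", "brightness_adjustment", "noise_reduction",
    "resolution_upscale", "format_conversion", "captioning", "subtitling",
}

# Edits that create a false sense of realism and therefore count as SGI.
SYNTHETIC_OPERATIONS = {
    "face_swap", "voice_clone", "ai_generated_scene", "lip_sync_alteration",
}

def is_sgi(operations: set[str]) -> bool:
    """Return True if any applied operation makes the content SGI.

    A single synthetic operation is enough; exempt edits never
    trigger the label on their own.
    """
    return bool(operations & SYNTHETIC_OPERATIONS)
```

So a subtitled, colour-corrected video stays outside SGI, while adding a cloned voice makes the content SGI no matter how routine the other edits are.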
Labelling and disclosure rules for AI‑generated content
A key update is the new labelling requirement.
Platforms and creators need to make sure that:
- AI-generated or deepfake videos and audio are clearly marked as synthetic.
- A visible sign, such as a watermark, overlay, or text, appears on the content itself.
- Embedded metadata or technical tags indicate that the content is synthetic, so it can be traced back to its origin or platform.
Here are some practical examples:
- A YouTube-style short on a cooking app should display a label such as “AI-generated / synthetic video” at the top or bottom.
- A fake news reel shared on a messaging app should include metadata flags that allow the app to detect and filter it.
From a product design perspective, this means:
- Add an “Is this AI-generated?” toggle or an AI-content tag when users upload content.
- Display clear labels on the thumbnail or player interface.
- Make sure metadata, such as EXIF, XMP, or platform-specific tags, is set correctly.
If you do not follow these rules, your app could lose safe-harbour protection and face penalties for hosting unlabelled deepfakes.
How the new 3-hour takedown rule affects you
Until 2026, intermediaries had up to 36 hours to take down or block content after receiving a valid government notice.
Now, most unlawful content must be removed within 3 hours.
In very sensitive cases, such as deepfakes that threaten national security, public order, or involve non-consensual intimate imagery (NCII), takedowns may be required within 2 hours.
Why this is important for apps and platforms:
- Previous workflow: notice → manual review → escalations → decision in 24 to 36 hours.
- Current process: notice → automated detection and priority queuing → removal within 3 hours.
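The tighter window is easier to meet if every incoming notice is stamped with a hard deadline the moment it arrives. A minimal sketch, where the category names are assumptions mapped onto the 2-hour and 3-hour windows described above:

```python
# Sketch: compute a removal deadline from the notice category.
# Category names are illustrative; map them onto your own notice taxonomy.
from datetime import datetime, timedelta, timezone

# Highly sensitive notices (NCII, national security, public order): 2 hours.
SENSITIVE_CATEGORIES = {"ncii", "national_security", "public_order"}

def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which flagged content must be removed."""
    hours = 2 if category in SENSITIVE_CATEGORIES else 3
    return received_at + timedelta(hours=hours)

received = datetime(2026, 2, 20, 10, 0, tzinfo=timezone.utc)
# A standard unlawful-content notice gets the 3-hour window...
print(takedown_deadline("defamation", received))  # 2026-02-20 13:00:00+00:00
# ...an NCII notice gets the stricter 2-hour window.
print(takedown_deadline("ncii", received))        # 2026-02-20 12:00:00+00:00
```

In production, the computed deadline would drive alerting and escalation, so a reviewer is paged well before the window closes rather than after it has lapsed.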
If your team still handles government takedown notices manually, you will quickly miss the deadline and risk being seen as non-compliant.
How does this affect each group?
1. Social media and UGC platforms
If you manage a social network, short video app, or online community, you should:
- Use AI-powered tools to spot deepfakes or other synthetic content.
- Label AI-generated content both when it’s uploaded and when it appears on screen.
- Create a content moderation process that handles flagged posts quickly, within two or three hours.
A practical step:
Add a content moderation API that can spot facial manipulation and voice cloning, then send flagged content to a fast-track review queue.
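As a sketch of what that wiring might look like, the snippet below routes content to a fast-track queue when a detector score crosses a threshold. The 0.8 cut-off and the score field names are hypothetical; swap in whichever moderation API and thresholds you actually use:

```python
# Sketch: route content to a fast-track review queue when a detection score
# crosses a threshold. The 0.8 cut-off and score keys are illustrative values.
from collections import deque

FAST_TRACK_THRESHOLD = 0.8
fast_track_queue: deque[str] = deque()  # reviewed within the 2-3 hour window
standard_queue: deque[str] = deque()    # normal moderation cadence

def route_for_review(content_id: str, detection_scores: dict[str, float]) -> str:
    """Queue content for review; high-confidence manipulation jumps the line."""
    # Take the strongest signal across checks such as facial manipulation
    # or voice cloning.
    top_score = max(detection_scores.values(), default=0.0)
    if top_score >= FAST_TRACK_THRESHOLD:
        fast_track_queue.append(content_id)
        return "fast_track"
    standard_queue.append(content_id)
    return "standard"
```

The key design choice is that a single high-confidence signal is enough to escalate: with a 3-hour clock, a false positive that a human quickly clears is far cheaper than a true positive stuck in the slow queue.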
2. Messaging and OTT apps
The rules focus on WhatsApp-style chats, VoIP calls, and OTT video platforms, especially targeting:
- Scams involving voice cloning and fake calls.
- Viral fake videos that are forwarded and mislead voters, investors, or communities.
These platforms are required to:
- Add watermarks or channel identification to synthetic audio and video.
- Cooperate with traceability requests while keeping end-to-end encryption largely intact.
Tip:
Use origin tags for shared media, like ‘first posted on X platform,’ and set up AI detection for content that is forwarded or shared many times.
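A hedged sketch of that tip, with illustrative field names and an assumed "viral" threshold of five forwards:

```python
# Sketch: tag shared media with its origin and trigger AI detection once
# forwarding crosses a "viral" threshold. The field names and the threshold
# of 5 forwards are illustrative assumptions, not values from the rules.
from dataclasses import dataclass

VIRAL_FORWARD_THRESHOLD = 5

@dataclass
class SharedMedia:
    media_id: str
    origin_tag: str          # e.g. "first posted on ExamplePlatform"
    forward_count: int = 0

def record_forward(media: SharedMedia) -> bool:
    """Increment the forward count; return True when detection should run."""
    media.forward_count += 1
    # Run (or re-run) deepfake detection once the item is widely forwarded.
    return media.forward_count >= VIRAL_FORWARD_THRESHOLD
```

Keeping the origin tag on the media object, rather than in a separate log, means the "first posted on" information travels with every forward and is still available when a takedown notice arrives.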
3. Creators and influencers
If you are an influencer, educator, or marketing professional who uses AI tools, this applies to you.
- You now have to label AI-generated content.
- If you post a reel featuring a fake interview with a public figure, or an ad that makes a celebrity seem to endorse your product but does not, you must make it clear that it is not real.
Here is a simple rule for creators:
- If something looks real but was made with AI, add a label.
- If you’re unsure, err on the side of transparency rather than risk misinformation cases.
There are legal risks. Creators can be held responsible if their content is meant to mislead people, especially about elections, money, or public health.
What steps should you take right now?
Use this practical checklist to get started right away.
If you run an app or platform:
- Check if your product qualifies as a “significant intermediary,” which usually means having over 5 million Indian users.
- Map all AI‑based features (video generator, voice‑clone, avatar tool, etc.).
- Make sure AI-generated content is labelled both when uploaded and when displayed.
- Include deepfake or AI detection tools in your content moderation process.
- Set up a process to remove flagged content within 3 hours, with clear escalation steps.
- Update privacy policy, TOS, and creator guidelines to reflect SGI rules.
If you’re a creator or run a small business:
- Stop using unlabeled AI‑generated faces/voices in ads or reels.
- Add clear tags such as “AI-generated” or “simulated footage” to your content.
- Avoid sharing political or financial deepfakes that could mislead readers or voters.
Here are some real-life examples to help you understand how these rules work.
Example 1: E-commerce brand
An online fashion brand uses an AI-generated model to showcase outfits the model never actually wore. Under the new rules, the brand must:
- Display a label like “AI-generated / synthetic video” on the thumbnail or during the first two seconds of the video.
- Avoid suggesting that the model personally endorsed the product if that is not the case.
Example 2: Ed-tech app
An ed‑tech company uses an AI avatar to teach kids. The video is synthetic, but the script is accurate. They must:
- Include a small label like “AI-generated teacher avatar.”
- Avoid claiming the avatar is a real human teacher with a PhD unless that is actually true.
Example 3: Political disinformation
A political party shares a deepfake video that falsely claims their opponent said something offensive. If someone reports the video, it could be removed within three hours. The party could also face damage to their reputation and possible legal trouble later.
Common pitfalls to avoid
New rules can be confusing. Here are some common mistakes people are making:
- Hiding “AI-generated” in the caption is not enough. The label needs to be clearly visible on the screen, not tucked away in the text.
- Not all edits count as SGI. Simple changes like colour correction, noise reduction, or adding subtitles do not qualify as SGI.
- It is a mistake to think that only big platforms are affected. Any service that hosts user-generated AI content can be flagged.
- Do not ignore takedown timelines. Even if you do not agree with a notice, you still need to comply within the 3-hour window.
What counts as a “deepfake” under the new rules?
A deepfake is any audio, visual, or audio‑visual content created or altered by AI or algorithms to look real or authentic, even though it shows a person or event that never actually happened. Simple edits like colour correction or subtitles are not deepfakes.
Do I have to label every AI‑generated post?
Yes. If your content is synthetically generated or AI‑generated and appears real, you must clearly label it on the screen (e.g., watermark or text) and, where possible, in metadata.
How strict is the 3‑hour takedown rule?
For most unlawful content, intermediaries must remove or block it within 3 hours of a valid government notice. In sensitive cases (national security, non‑consensual intimate imagery), the expectation may be as low as 2 hours.
Can an individual creator get fined under these rules?
The focus is on intermediaries and platforms, but creators who intentionally spread misleading AI‑generated content, especially in elections, finance, or public order, can still face legal action under other laws.
Does this apply to personal chats or private groups?
The rules mainly regulate public or widely shared content. Private chats are not automatically targeted, but once a deepfake is forwarded at scale or reported, platforms must act fast under the 2–3‑hour window.
What happens if an app ignores these rules?
Non‑compliant intermediaries may lose safe‑harbour protection, face penalties, and have their content taken down more quickly. In serious cases, repeated violations could lead to regulatory action or even the blocking of services.
If you update your product, policy, and content practices to follow the February 2026 rules, you help protect your platform from penalties, lower user risk, and build trust at a time when it is easy to fake things online.
— — —
Disclaimer:
This article is published for general legal awareness and informational purposes only, and should not be construed as legal advice or a solicitation to act.
About the Author:
Joginder Poswal is an advocate enrolled with the Bar Council of Punjab & Haryana (Enrolment No. PH/9616/2023) and practising exclusively in non-litigation legal advisory, drafting, and consultation under Indian law.
For more information, please refer to the contact details provided on this website.
