India prepares comprehensive AI Act following deepfake regulation

India’s digital governance framework is set for a major update as the Ministry of Electronics and Information Technology (MeitY) moves to introduce dedicated legislation for artificial intelligence, after rolling out draft rules to address deepfakes and synthetically generated content.

The draft regulations, open for public consultation until November 6, 2025, propose amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing obligations for platforms to label AI-generated content or embed metadata identifying it as such. Legal experts have noted that while these provisions aim to enhance transparency, they may face challenges because the current Information Technology Act, 2000, does not explicitly cover artificial intelligence.

Under the proposed framework, all publicly shared synthetic content will need to include visible labels or watermarks indicating its artificial origin. For visual media, the label must cover at least 10 percent of the screen, and for audio, a declaration must occur within the first 10 percent of its duration. Platforms will also be required to collect user declarations and deploy automated tools to detect and verify the origin of content.
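To make those thresholds concrete, the sketch below is a hypothetical illustration in Python of how a platform might verify them; it is not anything prescribed by the draft rules, and all function names and inputs are assumptions. It simply checks whether a visible label covers at least 10 percent of the frame and whether an audio declaration ends within the first 10 percent of a clip's duration.

```python
def visual_label_compliant(label_width_px: int, label_height_px: int,
                           frame_width_px: int, frame_height_px: int,
                           min_coverage: float = 0.10) -> bool:
    """Check whether a visible label covers at least `min_coverage`
    (10 percent under the draft rules) of the image or video frame."""
    label_area = label_width_px * label_height_px
    frame_area = frame_width_px * frame_height_px
    return frame_area > 0 and (label_area / frame_area) >= min_coverage


def audio_declaration_compliant(declaration_end_s: float,
                                total_duration_s: float,
                                max_fraction: float = 0.10) -> bool:
    """Check whether a spoken or displayed declaration finishes within the
    first `max_fraction` (10 percent under the draft rules) of the clip."""
    return total_duration_s > 0 and declaration_end_s <= max_fraction * total_duration_s


if __name__ == "__main__":
    # A 1920x1080 frame with a 640x360 label (~11 percent of the area) passes.
    print(visual_label_compliant(640, 360, 1920, 1080))   # True
    # A 60-second clip whose disclosure ends at the 5-second mark passes.
    print(audio_declaration_compliant(5.0, 60.0))          # True
```

The numeric thresholds are the only values taken from the draft rules; how coverage or duration would actually be measured and audited is left open in the consultation text.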

Officials from MeitY have indicated that the deepfake regulations are a precursor to a more comprehensive AI Bill, expected to be modelled on the IT Act. The upcoming legislation is intended to provide a clearer legal foundation for governing artificial intelligence, covering generative models, synthetic media, and intermediary accountability.

The move reflects growing global concern over AI misuse, particularly in deepfakes, misinformation, impersonation, election interference, and non-consensual content creation. The government aims to ensure that platforms and creators are accountable when AI-generated material is presented as authentic, while also maintaining space for innovation and creative expression.

Among the key features being considered for the forthcoming AI law is an expanded definition of “information” to include synthetically generated data. This would allow regulators to treat manipulated or AI-created content in the same way existing IT frameworks treat harmful authentic content. Additionally, safe harbour protections for intermediaries will continue to apply only if platforms meet due diligence obligations.

Experts believe the approach signals India’s shift from fragmented rules to a unified framework for AI regulation, balancing technological innovation with ethical governance. However, implementing these requirements will be complex. Mandating embedded metadata and visible labels presents operational and technical challenges, especially when content is edited, shared across multiple platforms, or stripped of metadata.

The AI Act, once introduced, is expected to empower MeitY and related agencies to set standards for AI model transparency, bias auditing, safety testing, and incident reporting. It will also formalise obligations for companies deploying generative AI systems to ensure accountability and traceability in algorithmic outputs.

For digital platforms, the regulatory direction means heightened compliance requirements. Major social media intermediaries will have to secure declarations from users uploading synthetic or altered content, implement verification tools, and apply clear labelling standards. Non-compliance could lead to loss of legal protection under safe harbour provisions and potential penalties.
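As an illustration only, and assuming a hypothetical upload pipeline that the rules do not actually prescribe, the sketch below shows how those obligations, collecting a user declaration, running automated detection, and enforcing labelling, might combine into a single publish decision. Every name here is invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Upload:
    user_declared_synthetic: bool     # declaration collected from the uploader
    detector_flagged_synthetic: bool  # output of an automated detection tool
    has_visible_label: bool           # label/watermark present and sized per the rules
    has_provenance_metadata: bool     # machine-readable origin metadata embedded


def publish_decision(u: Upload) -> str:
    """Hypothetical gate reflecting the draft obligations: content that is
    declared or detected as synthetic may go live only if it carries both a
    visible label and provenance metadata; otherwise it is held for review."""
    is_synthetic = u.user_declared_synthetic or u.detector_flagged_synthetic
    if not is_synthetic:
        return "publish"
    if u.has_visible_label and u.has_provenance_metadata:
        return "publish-with-label"
    return "hold-for-labelling"


print(publish_decision(Upload(True, True, True, True)))     # publish-with-label
print(publish_decision(Upload(False, True, False, False)))  # hold-for-labelling
```

In practice the detection, labelling, and metadata steps would each be far more involved than a boolean flag; the sketch only maps the compliance logic the article describes.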

Although MeitY has yet to define specific compliance deadlines, the consultation window and subsequent legislative process suggest that the AI Bill could be tabled in Parliament as early as its next session. The urgency reflects India’s proactive stance on managing AI risks before misuse becomes widespread.

Industry observers view India’s proposed AI legislation as an important step in shaping global norms for responsible AI. It aligns with the country’s ambition to be both a technological leader and a regulator of digital ethics. While larger companies may be able to adapt more quickly, smaller platforms and startups are concerned about implementation costs and technical scalability.

The labelling mandate also raises wider discussions around privacy, freedom of expression, and interoperability with international frameworks. Global platforms will need to harmonise India’s requirements with other regional regulations that govern content authenticity and AI usage.

In the broader policy landscape, India’s upcoming AI Act represents a pivotal evolution in digital regulation, extending the legacy of the IT Act to emerging technologies. For consumers, the changes promise greater clarity and transparency about what is real and what is artificially created online. As the consultation period concludes, industry feedback will play a key role in determining how India strikes the balance between innovation, accountability, and public trust in the age of artificial intelligence.