Javed Akhtar Raises Alarm Over AI-Generated Deepfake Content and Digital Misuse

Veteran lyricist and poet Javed Akhtar has raised concerns over the misuse of artificial intelligence after a fabricated video featuring his likeness began circulating online, prompting him to consider reporting the matter to cyber authorities. The incident has once again drawn attention to the growing challenge of AI-generated deepfake content and its implications for digital trust, identity protection and regulatory oversight.

Akhtar publicly stated that the video in question was entirely fake and created using AI tools without his consent. He warned that such misuse of technology could have serious consequences and indicated that legal options were being explored. His response reflects increasing anxiety among public figures over how easily their voices, images and reputations can be manipulated using advanced generative technologies.

The spread of AI-generated deepfakes has accelerated in recent years as tools capable of synthesising realistic video and audio have become widely accessible. While these technologies have legitimate applications in entertainment, education and marketing, their misuse for impersonation and misinformation has raised alarms globally. Akhtar’s case underscores how even individuals with public recognition are vulnerable to such digital manipulation.

Experts note that deepfakes exploit trust by presenting content that appears authentic while being entirely fabricated. In many cases, viewers struggle to distinguish between real and synthetic media, particularly when content is shared rapidly across social platforms. This creates risks not only for individuals but also for institutions and democratic processes.

The entertainment industry has been among the first to feel the impact of AI-generated impersonation. Actors, musicians and writers have expressed concerns about unauthorised use of their likeness and voice. For creators whose work and reputation form the basis of their livelihood, such misuse can result in reputational damage and financial loss.

Akhtar’s warning highlights a broader conversation around accountability in the age of generative AI. While technology providers develop increasingly powerful tools, enforcement mechanisms and legal frameworks have struggled to keep pace. Victims of deepfakes often face challenges in identifying creators, removing content and seeking redress.

Cybercrime specialists point out that reporting such incidents is critical to building enforcement capacity. Legal action and formal complaints help authorities understand the scale of the problem and develop appropriate responses. However, they also caution that jurisdictional complexities and anonymity online can make investigations difficult.

The issue is particularly relevant in India, where digital adoption has grown rapidly across social media and messaging platforms. As AI tools become more accessible, the potential for misuse increases alongside legitimate innovation. Policymakers have begun discussing safeguards, but comprehensive regulation of deepfakes is still evolving.

From a technology perspective, AI-generated content is advancing faster than the tools designed to detect it. While researchers are developing methods to identify synthetic media, these solutions are often reactive. As generation techniques improve, detection becomes more challenging, creating a continuous cycle of escalation.

The implications extend beyond individuals to brands and enterprises. In the martech ecosystem, trust is a foundational element of customer engagement. The rise of deepfakes threatens to undermine confidence in digital content, advertising and endorsements. Brands may need to adopt stronger verification mechanisms to protect both consumers and reputations.

Akhtar’s response also raises questions about consent and ownership in the digital age. Using someone’s likeness without permission, even through synthetic means, challenges existing legal definitions of identity misuse. As AI blurs the line between real and artificial representation, lawmakers face pressure to clarify rights and responsibilities.

Internationally, governments are exploring regulatory responses. Some jurisdictions have introduced disclosure requirements for AI-generated content, while others are considering penalties for malicious use. However, balancing innovation with protection remains complex, particularly when AI tools have both creative and harmful potential.

Industry leaders have increasingly called for ethical guidelines and self-regulation. Technology companies are being urged to implement safeguards that prevent misuse, such as watermarking AI-generated media or restricting certain capabilities. The effectiveness of these measures will depend on adoption and enforcement.

Public awareness is another critical factor. Media literacy initiatives that educate users about the existence and risks of deepfakes can help reduce the impact of misinformation. Scepticism and verification may become necessary skills in a digital environment where visual evidence can no longer be taken at face value.

For creators and public figures, protecting digital identity may require proactive strategies. Monitoring online content, asserting intellectual property rights and engaging legal counsel are becoming part of managing a public presence. However, these measures can be resource-intensive and are not accessible to everyone.

Akhtar’s comments resonate with concerns expressed by other cultural figures who fear that unchecked AI misuse could erode creative integrity. While AI offers new tools for storytelling and production, its ability to replicate human expression raises ethical questions about originality and consent.

The broader societal impact of deepfakes also includes political misinformation and fraud. Synthetic media has been used to impersonate officials, manipulate public opinion and conduct scams. Addressing these threats requires coordination between technology providers, regulators and law enforcement.

As AI continues to evolve, incidents like this highlight the need for clearer norms around acceptable use. While innovation remains important, safeguarding individuals from harm is equally critical. Ensuring accountability in AI deployment will likely shape public trust in technology more broadly.

Akhtar’s willingness to pursue legal recourse sends a message that misuse of AI will not go unchallenged. It also reflects growing recognition that digital harms require formal responses rather than informal takedowns alone.

The incident serves as a reminder that generative AI is not only a technical issue but a social one. Its impact extends into culture, law and everyday life. As tools become more powerful, responsibility for their use becomes shared across creators, platforms and users.

For now, the episode has reignited debate around deepfakes and digital ethics. How quickly effective safeguards are implemented will influence whether AI remains a tool for empowerment or becomes a source of widespread distrust.