Robot Anchors and Phony Photos: India Confronts AI-Generated Content

In the past year, India’s media landscape has seen a curious twist: news anchors made of ones and zeros. At India Today’s prime-time news hour, “Sana” – a computer-generated newscaster – now delivers segments on routine topics like the weather and stock updates. Meanwhile, the Times of India and other outlets are using generative AI behind the scenes – creating personalized push alerts, rewriting headlines for different audiences, even generating games and crosswords on the fly. On one hand, media executives hail these tools as a way to handle vast “volume” and “velocity” of content. Times of India digital chief Rohit Garg says AI helps the newsroom “ensure that we don’t crack under chaos” by doing routine tasks faster and more consistently. On the other hand, critics warn that automatically produced copy can blur the line between fact and fiction. Recent surveys reflect this tension: Indians appear unusually open to AI-written news, yet incidents of misinformation are rising. A Reuters Institute report finds 44% of Indian news consumers say they are comfortable with AI delivering headlines – far above the 11% in the UK – and fully 18% of Indians now use chatbots like ChatGPT weekly to get news. At the same time, watchdogs are sounding alarms: dozens of “content farm” websites are using ChatGPT to spin out plagiarized articles and clickbait, spreading misleading stories under fake bylines.

These developments are not limited to newsrooms. In India’s corporate world, generative AI has swept marketing and customer service. One industry survey reports that 42% of Indian marketers are already experimenting with AI for marketing campaigns, with 21.5% saying AI is highly integrated into their strategy. Globally, almost 90% of professional marketers admit to using AI tools on the job – from drafting blog posts and social media captions to optimizing ad copy and video content. Sixty-two percent of marketers report using chatbots like ChatGPT for content generation, and 71% use generative AI at least weekly. These tools promise big gains: in an American Marketing Association (AMA) survey of over 1,000 marketers, 85% said AI made their work more productive, and about half said it actually improved the quality and quantity of the content they produce. In India, conglomerates are even rolling out AI to the masses: Reliance Industries chairman Mukesh Ambani announced a plan to give free Google AI Pro subscriptions to tens of millions of Jio mobile users, aiming to make India “not just AI-enabled but AI-empowered.”

Yet behind the buzz lie serious reliability and trust concerns. India’s giant quick-commerce and delivery companies illustrate both the promise and the peril of AI in business. Zomato, the food-ordering app, has invested heavily in AI-driven customer support – its new “Nugget” chatbot now handles 15 million support interactions per month. But Zomato’s CEO Deepinder Goyal drew a bright line on AI-generated content in marketing: after customers complained, he banned AI-made food photos on restaurant menus. “AI-generated food/dish images are misleading,” Goyal told the media, noting that customers said such images “lead to breach of trust” and more refund requests. Other Indian firms are quietly deploying generative tools. For example, quick-commerce firm Blinkit (owned by Zomato) uses OpenAI’s GPT models to suggest recipes via its “Recipe Rover” feature. Retailer Myntra has a “MyFashionGPT” shopping assistant to answer open-ended queries. These innovations aim to boost engagement and efficiency, but even business leaders concede risks. In a Salesforce survey of 300 Indian C-suite executives, 99% called generative AI important for future success, yet 34% cited “inaccurate outputs” as a barrier to adoption. The same study found that 73% of leaders already use AI tools regularly – but it also noted concerns over data privacy, bias and governance.

As India rushes to embrace AI content, recent incidents highlight how rapidly misinformation can spread. Both globally and at home, watchdogs have documented troubling cases. In the West, an investigation by NewsGuard found at least 37 websites that were simply copycats, using ChatGPT to paraphrase news from major outlets like CNN, Reuters and The New York Times – often without any credit. A follow-up NiemanLab report described a new breed of AI-powered content mills that snap up old local newspaper domains. One former Long Island paper’s site, left abandoned, started publishing two to three AI-written stories every day under fictitious bylines, with headlines like “IKEA lamp looks ten times more expensive” or “China launches first non-uranium nuclear reactor.” In some cases, the AI content even lifted real human quotes and rewrote them: the articles credited “sources” with lines concocted by the machine. Media analysts warn that this “spinning” of old reporting into clickbait is “sloppy” and threatens to drown legitimate journalism in a sea of viral nonsense.

In India, the most visible fallout from AI-generated misinformation has come in politics. During the 2024 election campaign, deepfake videos caused a stir. Prime Minister Narendra Modi himself warned that opponents were “using AI to distort quotes of leaders like me, Amit Shah and [other BJP leaders]… to create social discord,” by making “fake videos in my voice,” and urged citizens to report any suspicious clips to the police. The Election Commission and police launched round-the-clock monitoring of social media to remove manipulative content. One viral clip showed a manufactured voice of Home Minister Amit Shah making incendiary promises about minority rights; Shah quickly posted the “original” and “edited” clips side by side, accusing his rivals of forgery. Meanwhile in Uttar Pradesh, a doctored video of Chief Minister Yogi Adityanath circulated to undermine the ruling party; police later declared it “AI-generated” and detained people who shared it. These episodes echo Modi’s earlier caution: in late 2023 he warned that “a new crisis is emerging due to deepfakes produced through artificial intelligence,” noting that most Indians “do not have a parallel verification system” for such videos.

Regulators in India are responding to these risks. In October 2025, the government proposed amendments to the IT rules that would force social media platforms and users to label AI-generated content. The draft rules call for visible “synthetic content” tags on AI-created videos, images, and even audio, and for platforms to build tools to verify whether a post is machine-made. MeitY Secretary S. Krishnan stressed that authorities are ready to act if needed: “We will act when there is a need to,” he told reporters, as the government studies global AI policies. At the same time, media organizations are setting their own guardrails. The Associated Press – one of the world’s largest wire services – has cautioned reporters to treat any AI output as “unvetted” and to double-check all facts. AP journalists are explicitly barred from using AI to write news; instead, they must apply the agency’s normal standards to whatever material comes from a bot.

Indian business leaders are also weighing ethics. When Zomato’s CEO spoke out against AI dish images, he highlighted the cost of deception: more refund requests and lower ratings. Other tech executives are vocal too. Reliance’s Mukesh Ambani has spoken of expanding AI access to empower consumers and businesses, but he has also joined other industry figures in meetings on AI governance. In media circles, Rohit Garg of the Times of India emphasizes that AI must complement – not replace – human judgement, lest “we crack under chaos.”

Despite the hype, many Indians remain cautious about unbridled AI content. For every executive excited by efficiency gains, there is an ordinary reader or consumer who distrusts a too-perfect script. Pew surveys in other countries have shown that most people worry AI “slop” could mislead the public. In India, a recent Reuters Institute poll found that people still prefer human journalists for accuracy. And when bots slip up – as they occasionally will – the repercussions can be serious. AI tools still hallucinate: they can invent facts, mangle quotes, or mimic voices. The proliferation of AI-spun clickbait and deepfake election clips shows how quickly a reputation can be undermined by a synthetic video or story.

For now, the facts are plain: AI-generated content is spreading fast in India. Thousands of companies are trying it out, from media houses using robot anchors to marketers deploying AI copywriters. It is undoubtedly a game-changer for productivity – telling in itself that 73% of India’s corporate chiefs say they use generative AI regularly. But trust in the output has not kept pace. As Modi warned and as Zomato’s Goyal experienced, when AI makes an error or fabricates a line, people notice – and they bristle. Closing that gap is now the central challenge for India’s news and marketing industries. Can public trust be won with labels, fact-checks and transparency? Or will AI’s rapid rise unleash a “crisis” of credibility? At stake is nothing less than the currency of the media itself.