Online shopping depends on accurate product information, but the rise of generative AI is straining that trust. E‑commerce platforms and thousands of third-party sellers are increasingly using AI tools to write product titles, descriptions, and even customer-review summaries at scale. The promise is clear – automation can save time and personalize listings for every customer. But it comes at a cost. Shoppers are finding factual mistakes, filler text, and exaggerated claims in some AI-written descriptions, and experts warn that those errors can erode consumer confidence. As one marketing leader puts it, consumers immediately feel less confident in content that looks “artificial” or “effortless,” even if it sounds polished. Brands and marketplaces are now scrambling to police AI-generated listings and reassure buyers that what they read is accurate.
Major e-commerce companies have already embraced AI content tools. In mid-2023, for example, Amazon announced an AI-powered listing assistant for its sellers. The tool analyzes a product’s title, images and reviews to draft a product description, which the merchant can then review or edit. Amazon made clear that human oversight is required – sellers can tweak or replace any AI text – and that the AI won’t replace human copywriting. Other platforms are experimenting similarly. Shopify’s selling tools now include AI-generated copy and image suggestions. In India, Flipkart and other marketplaces are quietly piloting AI to speed cataloging and personalize search results. Even Meesho and Nykaa, known for social commerce and beauty, have invested in AI agents to answer customer questions and tailor recommendations.
The goal is efficiency: there are millions of products on these sites, and even large sellers rarely have the resources to write thousands of custom descriptions by hand. Generative AI can instantly produce content for a shoe, a blender or a skincare cream in a consistent tone. At its best, this can help small brands be found online and improve the shopping experience. One global survey found that 42% of shoppers now trust an AI-generated product summary enough to skip directly from the summary to checkout, without clicking through to the original site. In effect, AI is becoming a new “front door” to product information. For e-commerce marketers, leveraging AI is an opportunity to reach that fast-moving customer. But at the same time, these new tools also introduce new risks.
Trust cracks appear as soon as AI slips up. Analysts have already documented embarrassing mistakes when sellers rely on AI without careful editing. In one widely cited example, an Amazon listing (later removed) was literally left with ChatGPT’s template text: “Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3].” In another, an Amazon listing even carried the text of a ChatGPT error message: “I’m sorry but I cannot fulfill this request – it goes against OpenAI use policy.” These glitches happened because some merchants simply copy-pasted AI outputs or scraped content without checking. Scammers have been caught flooding marketplaces with hundreds of such listings, using ChatGPT or similar tools to churn out product pages for dubious goods.
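The most blatant of these failures are easy to catch mechanically. As a purely illustrative sketch – not any platform’s actual tooling – a pre-publication check could scan each listing for unfilled template placeholders and chatbot refusal phrases like the ones quoted above (the phrase and placeholder lists here are assumptions drawn from this article’s examples):

```python
import re

# Phrases that commonly leak into listings when raw chatbot output
# is pasted without review (examples drawn from this article).
REFUSAL_PHRASES = [
    "i cannot fulfill this request",
    "against openai use policy",
    "as an ai language model",
]

# Unfilled template placeholders such as "[product]" or "[task 1]".
PLACEHOLDER_RE = re.compile(r"\[(?:product|task|feature)[^\]]*\]", re.IGNORECASE)

def flag_ai_artifacts(listing_text: str) -> list[str]:
    """Return suspicious fragments found in a product listing, if any."""
    hits = []
    lowered = listing_text.lower()
    for phrase in REFUSAL_PHRASES:
        if phrase in lowered:
            hits.append(phrase)
    hits.extend(PLACEHOLDER_RE.findall(listing_text))
    return hits
```

A check like this only screens out the crudest copy-paste errors; it cannot catch hallucinated specs, which still require human review of the kind Amazon and others describe.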
Even for well-intentioned sellers, generative content can mislead. AI tools sometimes “hallucinate” product details – inventing features or specs not backed by data – or they turn images and customer reviews into prose that oversells or distorts the truth. For example, tests of Amazon’s own AI review-summary tool found that the system regularly exaggerated negatives and buried positives in ways that didn’t match the underlying feedback. In one case, an inversion table for back pain was described as a desk. In another, a popular game was said to have “mixed opinions on ease of use” when only 1% of reviews even mentioned the topic. Shoppers who see an AI summary or description may never scroll deeper to catch the inaccuracy.
These errors are more than cosmetic: they hit the core of trust in online shopping. When a buyer receives a product that “doesn’t match the description”, it often means an immediate return and a likely loss of that customer’s future business. In fact, retail research shows that roughly 17% of online purchases are returned, and a large fraction of those returns occur because the item “didn’t match the online description.” One survey found that about 31% of shoppers who returned an item did so for exactly this reason. Misleading listings also damage trust in the brand and platform. Studies indicate that as many as 40% of consumers will stop buying from a company after losing trust in it. By contrast, brands that maintain trust earn big rewards: consumers spend roughly 50% more with retailers they trust, and simply adding trust-building elements on product pages (clear specifications, security badges, guarantees, etc.) can lift conversions by around 35%. In other words, the accuracy of product information directly affects both loyalty and sales – and AI mistakes can quickly squander those gains.
Marketing leaders in India stress that authenticity is non-negotiable. Cheil X CEO Jitender Dabas notes that consumers value the time and skill behind human-created content. “Given how we respect what takes time, skill, and hard work, AI by design is fast and frictionless – and that makes its outputs feel less worthy, no matter how impressive,” Dabas observes. He adds that if people perceive content as “artificial” or “effortless,” they begin to view it as disposable. Similarly, Arun Roongta, Managing Director of research firm Texzone, warns that transparency is crucial. He says, “Consumer trust is reduced when content is clearly labeled as AI-generated… Brands need to juggle transparency and care.” Roongta adds that in high-stakes categories like electronics or finance, buyers become particularly skeptical.
At the same time, some industry experts caution against fearing the technology itself. AnyMind’s Aditya Aima compares AI to past shifts like the PC or the internet – “What unsettles consumers is not the tool but the uncertainty around it,” he argues. His point is that careful framing and use of AI can avoid alarm. Piyush Goel, CEO of technology firm Beyond Key, agrees that misuse is the culprit. He notes, “Data identified as being generated by AI often causes a decline in consumer confidence… Short-term profits at the expense of consumer perception could erode distinction and trust.” In other words, Goel and others urge companies to use AI behind the scenes and emphasize human oversight in customer-facing content. They recommend clearly highlighting any AI role only when it adds real value (such as personalized recommendations), and otherwise focusing the message on product quality and benefit.
The regulatory backdrop in India already reinforces these priorities. Under the Consumer Protection Act and ASCI’s advertising code, all product claims – whether written by a human or a machine – must be truthful and non-misleading. Indian guidelines for misleading advertisements cover any content that causes consumer harm. As ASCI itself points out in recent guidance, “the responsibility to prevent the creation or display of prohibited content…lies with the advertiser or the agency.” In practice this means a company cannot hide behind “AI” if customers are misled. The Advertising Standards Council’s white paper on AI warns that generative models can spit out biased, inaccurate or copyrighted content. It urges strict review processes, prompt removal of errors and clear disclaimers. For example, brands are encouraged to train staff not to feed sensitive prompts to AI, to audit AI outputs before publication, and even to insure against AI-related risks. In short, Indian regulators treat AI-generated product copy no differently than any other ad copy: it must comply with truth-in-advertising laws.
In response, e-commerce companies are taking measures. Amazon says it requires sellers to provide “accurate, informative” listings and has pulled erroneous AI-driven pages once flagged by shoppers. It insists on human review of any AI draft description and plans continual improvements to its tools. Flipkart and other marketplaces have warned third-party vendors to follow their content guidelines – for instance, no offensive language, no plagiarized material, and always factual product specs. Big brands that sell on these sites are also exercising caution: a leading electronics seller told exchange4media that it views generative AI as a writing aid, not a replacement, and that every AI draft is checked by a human copywriter. Meanwhile, consumer groups in India are increasingly vigilant. The ASCI’s Consumer Complaints Council has noted a rise in cases where shoppers complained about misleading online listings, and it has issued directives in several instances to correct or remove ads.
For online shoppers, the takeaway is to shop carefully and read with skepticism. Experts advise buyers to cross-check critical product details (like dimensions, materials or technical claims) against multiple sources, and to rely on verified seller ratings and long-form reviews rather than just a headline summary. For retailers and brands, the message is clear: generative AI can boost efficiency, but accuracy must come first. As one e-commerce strategist points out, “AI should be a backstage enabler, with the brand’s human narrative center-stage.” In the end, trust is hard-earned and easily lost. Cutting corners on product descriptions may save time today, but it risks damaging the reputation that retailers have built – and losing customers who won’t come back.
Disclaimer: All data points and statistics are attributed to published research studies and verified market research. All quotes are either sourced directly or attributed to public statements.