People know AI gets things wrong. They have seen chatbots invent legal cases, misquote reports, generate fake citations and confidently answer questions with incorrect information. Yet millions continue to use AI tools every day for work, shopping, research, customer service and even personal advice.
That contradiction is becoming one of the defining stories of the AI era.
In boardrooms, marketers are integrating AI into customer journeys. In workplaces, employees are quietly relying on AI-generated summaries and presentations. Consumers increasingly turn to AI assistants instead of search engines for quick answers. At the same time, public trust in AI companies remains fragile and concerns around misinformation are growing.
The issue is no longer whether AI hallucinates. The industry itself openly acknowledges that large language models can produce fabricated or misleading responses. The bigger question is why people continue trusting systems that are known to make things up.
Recent global studies suggest the answer is more complicated than blind faith in technology. What users appear to trust is not necessarily the truthfulness of AI, but the convenience, fluency and confidence with which it delivers information.
A 2025 global study by the University of Melbourne and KPMG across 47 countries found that 66% of people intentionally use AI on a regular basis, but only 46% are willing to trust it. The same study revealed another contradiction: even though trust remains low, dependence on AI is rising rapidly in workplaces and daily digital interactions.
That gap between skepticism and continued usage is now shaping how brands, platforms and consumers interact with artificial intelligence.
People distrust AI in theory but rely on it in practice
The research points to a growing pattern. Users often express concern about AI’s reliability, but still choose it because it saves time and reduces effort.
The KPMG and Melbourne study found that 58% of employees regularly use AI tools at work, while 66% admitted they rely on AI output without fully checking its accuracy. More than half said they had made mistakes because of AI-generated content.
Professor Nicole Gillespie, Chair of Trust at Melbourne Business School and one of the researchers involved in the study, said public trust will determine how sustainable AI adoption becomes in the long term.
“The public’s trust of AI technologies and their safe and secure use is central to sustained acceptance and adoption,” Gillespie said while discussing the findings.
The same study found another important trend. Many employees are using AI without formal organisational guidance. Only 40% of respondents said their workplace had clear policies around generative AI use.
That lack of structure is significant because AI has already moved beyond experimentation. It is now embedded into productivity tools, customer support systems, search experiences and advertising workflows.
Stanford University’s 2025 AI Index Report showed that 78% of organisations globally were using AI in some form in 2024, up sharply from 55% a year earlier. However, the report also noted declining confidence around data privacy, transparency and fairness in AI systems.
The numbers suggest adoption is growing faster than public confidence.
AI sounds convincing, and that changes how people judge it
Part of the trust problem lies in how AI communicates.
Unlike earlier software systems, modern generative AI speaks in polished natural language. Responses are conversational, fast and often delivered with certainty. Even when incorrect, the output can feel believable.
A 2025 study published in Scientific Reports explored how people react to AI-generated advice. Participants initially said they preferred human advice over AI advice. But when they were shown responses without knowing whether the source was human or AI, they rated the AI-generated answers as more helpful, authentic and effective.
Once participants learned the advice came from AI, trust levels dropped again.
The findings suggest people often judge AI differently before and after exposure. In theory, they remain cautious. In practice, fluent communication can override skepticism.
Another 2026 study examining ethical decision making found similar results. Before reading any advice, nearly 73% of participants said they would prefer ethical guidance from a human. After reviewing advice generated by GPT-4, resistance to AI dropped significantly. In direct comparisons, many participants preferred the AI-generated responses because they appeared more structured and comprehensive.
Researchers increasingly describe this as a problem of “trust calibration”, the degree to which a user’s trust matches a system’s actual reliability. The problem is not necessarily that people believe AI is always correct. It is that polished language and confident presentation can make inaccurate information appear more reliable than it is.
That effect becomes stronger when AI tools mimic human conversation patterns.
Researchers from MIT Media Lab and OpenAI studied millions of ChatGPT interactions and found that conversational AI can create emotional familiarity over time. Users who regularly interacted with voice-based AI systems often described them as supportive, calming or socially engaging.
While the study found emotional dependency was still relatively rare, it highlighted how conversational design changes user behaviour. People tend to lower their guard when interactions feel human.
This is particularly relevant for marketers and customer experience teams. AI systems are increasingly being designed not only to answer questions, but also to sound empathetic, smooth and socially aware.
Convenience often outweighs accuracy
The strongest reason people continue trusting AI may also be the simplest: it makes life easier.
AI compresses multiple digital tasks into a single interaction. Instead of opening several tabs, comparing websites or reading long articles, users can ask one question and receive a summary instantly.
That convenience is changing online behaviour.
The Reuters Institute’s 2025 Digital News Report found that weekly usage of generative AI tools nearly doubled in one year across surveyed countries. Information seeking became the most common use case. Many users said they preferred AI systems because they simplified research and summarised information quickly.
Nick Hagar, a researcher studying AI-driven information behaviour, observed that users are often aware the answers may not be perfect, but still prefer the speed and simplicity of AI interfaces.
“They know the answers they are getting are not perfect,” Hagar said in an interview discussing recent AI search studies.
The shift resembles earlier changes in digital behaviour. Consumers once moved from libraries to search engines despite knowing online information varied in quality. They later shifted from desktop browsing to social media feeds despite concerns around misinformation. AI appears to be the next convenience layer.
The problem is that convenience can create an illusion of reliability.
A 2025 investigation by the Tow Center for Digital Journalism tested several AI search systems and found that more than 60% of responses to news-related queries contained inaccuracies or misleading information. Some systems fabricated citations or incorrectly attributed information to reputable publishers.
The study also found that premium AI models often generated more confidently incorrect answers because they were less likely to refuse a query.
Similarly, a separate BBC and European Broadcasting Union analysis examined over 3,000 AI-generated responses across multiple languages and countries. Researchers found that 45% of responses contained at least one significant issue related to sourcing, factual accuracy or context.
Yet despite these findings, user adoption continues to rise.
That suggests many consumers are unconsciously making a trade-off between precision and efficiency. If an answer feels mostly correct and saves time, users may accept the risk of occasional inaccuracies.
Familiar branding creates borrowed credibility
Another reason AI systems gain trust is because they borrow authority from trusted sources.
Many AI tools now include citations, publisher references and web links inside responses. Even when users do not verify the information themselves, the presence of familiar media brands can create an impression of legitimacy.
Researchers studying AI search behaviour found that users frequently trusted answers simply because the response referenced outlets such as CNN, Reuters or The New York Times. In many cases, users never clicked the original links.
This creates what some researchers describe as “borrowed credibility.” The interface appears researched and well sourced, even when the underlying citations may be incomplete or incorrect.
For media organisations and marketers, that creates both opportunities and risks.
Brands now compete not only for human attention but also for AI interpretation. If AI systems summarise products, reviews or news inaccurately, the consequences can affect reputation, customer trust and purchasing decisions.
A 2025 study from the University of California San Diego examined how AI generated review summaries influenced buying behaviour. Researchers found participants were significantly more likely to purchase products after reading AI generated summaries compared to reading original customer reviews.
Lead researcher Abeer Alessa said the scale of behavioural influence surprised the research team.
“We did not expect how big the impact of the summaries would be,” Alessa noted while discussing the findings.
The implication is important for commerce and martech teams. AI-generated summaries do not need to be perfectly accurate to shape consumer decisions. They only need to appear organised, efficient and directionally useful.
The trust problem is becoming a business problem
For companies deploying AI, hallucinations are no longer just technical flaws. They are increasingly becoming customer experience risks.
A chatbot providing incorrect refund policies, an AI assistant misrepresenting product reviews or a search summary distorting news information can directly affect brand credibility.
That concern is already visible across industries.
Earlier this year, Apple temporarily paused AI-generated notification summaries for certain news and entertainment apps after criticism over misleading alerts. The issue was not simply that the summaries were inaccurate. It was that they looked polished and authoritative enough for users to believe them.
The episode highlighted a larger challenge facing AI companies. Human beings tend to associate confidence with competence. AI systems are often designed to minimise friction and maintain conversational flow, which can discourage uncertainty or refusal responses.
In practice, that means many AI systems answer questions even when they lack reliable information.
Researchers say this creates a dangerous mismatch between user expectations and system capability.
A growing number of experts now argue that the future of trustworthy AI may depend less on making systems sound intelligent and more on making limitations visible.
That includes clearer uncertainty labels, better citation transparency, stronger human escalation systems and improved refusal behaviour when information cannot be verified.
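What that can look like in practice is easy to sketch. The snippet below is a hypothetical illustration, not any vendor’s actual implementation; every name, threshold and field in it is an assumption. It shows the pattern researchers describe: an assistant that refuses and escalates when it cannot support an answer, and labels mid-confidence answers as unverified rather than delivering them as fact.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds; a real system would calibrate these against
# labelled evaluation data rather than choose them by hand.
REFUSE_BELOW = 0.4
FLAG_BELOW = 0.75

@dataclass
class Answer:
    text: str
    confidence: float  # assumed to come from the model or a separate verifier
    sources: list[str] = field(default_factory=list)  # citations the system can resolve

def present(answer: Answer) -> str:
    """Render an answer with visible uncertainty instead of silent fluency."""
    # Refuse and escalate when the system cannot support its own claim.
    if answer.confidence < REFUSE_BELOW or not answer.sources:
        return "I can't verify this reliably. Routing your question to a human agent."
    # Label mid-confidence answers instead of presenting them as fact.
    label = "" if answer.confidence >= FLAG_BELOW else "[Unverified] "
    return f"{label}{answer.text}\nSources: {'; '.join(answer.sources)}"

# A mid-confidence answer arrives with a visible label, not a confident tone.
print(present(Answer("The refund window is 30 days.", 0.6, ["refund-policy.pdf, p.2"])))
```

The design choice worth noting is where the uncertainty lives: in the interface the user actually sees, not buried in internal logs. That is the opposite of the friction-minimising conversational flow most systems are tuned for today.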
For marketers, the implications are equally significant. As AI assistants become recommendation engines, shopping guides and customer support agents, trust becomes part of brand strategy.
Consumers may not fully trust AI companies. But they often trust the experience enough to continue engaging with it.
That distinction matters.
The current AI economy is not being built on unconditional belief in machine intelligence. It is being built on practical trust. Users tolerate imperfections because the systems are fast, accessible and increasingly woven into everyday digital behaviour.
In many ways, the relationship resembles the broader internet itself. People know online information can be unreliable, yet they continue relying on digital platforms because the convenience outweighs the uncertainty.
AI appears to be accelerating that same behavioural pattern.
The difference is that generative AI does not just retrieve information. It reshapes, rewrites and presents information in ways that feel complete and conversational. That gives incorrect answers a level of fluency older technologies never had.
And that may explain why AI hallucinations have not slowed adoption.
People do not necessarily trust AI because they believe it is always true. They trust it because it feels useful enough, fast enough and convincing enough to keep using.
For the technology industry, that may be both the biggest advantage and the biggest warning sign of the AI era.
Disclaimer: All data points and statistics are attributed to published research studies and verified market research. All quotes are either sourced directly or attributed to public statements.