Luc Julia is no stranger to big tech or bold statements. As the co-creator of Siri, Chief Scientific Officer at Renault, and former CTO & SVP of Innovation at Samsung Electronics, Julia has occupied top leadership positions across the global innovation landscape. He has also served as Director of Siri at Apple and Chief Technologist at HP.
But despite his deep involvement with AI, Julia remains a vocal skeptic of its current trajectory. In this exclusive interview with MarTechAI.com's Brij Pahwa, he talks about the dangers of “Hollywood AI,” why large language models are unsustainable, how agentic AI is emerging as the future — and what he'd do if he were back at Apple today.
Watch the live interview below.
Excerpts:
Brij: Thank you for joining us. You've been consistently critical of the way artificial intelligence is evolving — particularly this notion that we’re building something super-intelligent. Could you explain your view on where we’ve gone wrong?
Luc Julia: Thank you for having me. When I say AI isn’t very intelligent, I mean that the term "artificial intelligence" itself is misleading. We constantly compare AI to human intelligence — and that's the root problem.
These systems are tools. They’re great at doing one specific task extremely well — sometimes better than us. But the assumption that one single AI can do everything better than humans? That’s fantasy. That’s not science. That’s Hollywood.
I call it “Hollywood AI” — like in The Terminator or Her. We imagine machines that are either going to destroy us or seduce us. But those systems don’t exist. What does exist are narrow tools designed for narrow tasks — and we still hold the handle. It’s up to us how we use them.
Brij: Despite this, the industry is in a frenzy. OpenAI, xAI, Google DeepMind — everyone is building bigger models, more compute, and larger data centers. Do you think your critique is resonating?
Luc: I think the race to scale is a mistake. We’ve reached a plateau in terms of the usefulness of more data. Simply throwing more compute at the problem doesn’t make the AI more “intelligent.”
What it does do is increase environmental cost. Projects like Stargate — a $500 billion plan in the U.S. — are building vast data centers that consume massive amounts of electricity and water. This has a real impact on our planet.
There’s a generation waking up to this. Young researchers are asking, “Is this sustainable?” I think they’re right to ask that.
The future of AI isn’t about building ever-larger models. It’s about building smarter, smaller, and more efficient models. That’s what agentic AI is all about.
Brij: That’s a great segue. Can you explain agentic AI and how it contrasts with these massive models?
Luc: Agentic AI is made up of small, specialized agents — each trained to perform a specific task. Instead of one monolithic AI, you have many smaller ones that are orchestrated together to perform more complex operations.
These systems are more frugal. They require less compute. And because they’re task-specific, they can be far more effective and efficient.
This is where I see AI heading. We’ll move away from the fantasy of Artificial General Intelligence (AGI) — the one AI that does everything — and toward a network of intelligent agents, each contributing their piece.
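[Editor's note: to make the orchestration idea concrete, here is a minimal, illustrative sketch in Python. The agent names and routing logic are hypothetical and not drawn from any specific framework; the point is simply that several narrow, task-specific agents are coordinated by an orchestrator rather than one monolithic model doing everything.]

```python
# Minimal sketch of the agentic pattern described above: several small,
# task-specific agents coordinated by an orchestrator. Agent names and
# routing rules are hypothetical, for illustration only.

from typing import Callable, Dict


def summarize(text: str) -> str:
    """A narrow agent: condenses text (stubbed for illustration)."""
    return text[:100] + "..."


def translate(text: str) -> str:
    """A narrow agent: translates text (stubbed for illustration)."""
    return f"[translated] {text}"


class Orchestrator:
    """Routes each sub-task to the small agent trained for it."""

    def __init__(self) -> None:
        self.agents: Dict[str, Callable[[str], str]] = {
            "summarize": summarize,
            "translate": translate,
        }

    def run(self, task: str, payload: str) -> str:
        agent = self.agents.get(task)
        if agent is None:
            raise ValueError(f"No agent registered for task: {task}")
        return agent(payload)


if __name__ == "__main__":
    orchestrator = Orchestrator()
    report = "A long market report..."
    # A complex operation decomposed into narrow, frugal steps:
    print(orchestrator.run("translate", orchestrator.run("summarize", report)))
```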
Brij: Let’s bring this to marketing. Today’s marketers are trying to decode human psychology, personality traits, emotions — through AI. What happens when digital agents start making buying decisions on behalf of humans?
Luc: That’s already starting. And again, we need to tread carefully.
When I talk about agentic AI doing things “on our behalf,” I mean just that — on our behalf. It only acts when we allow it to. It doesn’t have consciousness or autonomy. It’s not a person. So we need to stop anthropomorphizing AI.
If I ask my agent to buy a laptop, it will follow the logic I’ve provided — budget, specs, preferences. It won’t go rogue and start shopping by itself.
From a marketing perspective, this changes the game. Marketers will need to persuade my agent, not me. That means influencing the logic and rules that drive the agent's decision-making, which is a very different kind of persuasion.
It’s no longer just about emotional storytelling. It becomes about logic, trust, and technical transparency.
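[Editor's note: here is a minimal sketch of the laptop example, assuming a hypothetical catalog and rule structure. The agent applies only the constraints its owner supplied, so "persuading the agent" means satisfying those explicit rules rather than appealing to emotion.]

```python
# Illustrative sketch of the "buy a laptop on my behalf" example: the agent
# applies only the rules its owner provided (budget, specs, preferences).
# Field names and data structures are hypothetical.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Laptop:
    name: str
    price: float
    ram_gb: int
    battery_hours: float


@dataclass
class OwnerRules:
    max_price: float
    min_ram_gb: int
    min_battery_hours: float


def choose_laptop(catalog: List[Laptop], rules: OwnerRules) -> Optional[Laptop]:
    """Filter by the owner's explicit constraints, then pick the cheapest match."""
    candidates = [
        l for l in catalog
        if l.price <= rules.max_price
        and l.ram_gb >= rules.min_ram_gb
        and l.battery_hours >= rules.min_battery_hours
    ]
    return min(candidates, key=lambda l: l.price) if candidates else None


if __name__ == "__main__":
    catalog = [
        Laptop("A", 1200.0, 16, 10.0),
        Laptop("B", 900.0, 8, 12.0),
        Laptop("C", 1100.0, 16, 9.0),
    ]
    rules = OwnerRules(max_price=1500.0, min_ram_gb=16, min_battery_hours=8.0)
    # The agent never shops outside the rules it was given.
    print(choose_laptop(catalog, rules))
```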
Brij: You’ve touched on the environmental cost of large models. But there’s also a legal and ethical cost. OpenAI and others are facing lawsuits for data scraping — including here in India. Should we have global AI laws?
Luc: A global law would be ideal — but let’s be honest, it's extremely difficult. Even the United Nations struggles to align on key issues.
Instead, every region must develop its own regulatory framework. India needs its own. Europe has GDPR. Africa and Asia must do the same.
Right now, most major AI companies are American. So their products reflect American cultural values and ethics. That’s not always appropriate for other regions. AI must be localized — not just in language, but in moral and societal values.
We don’t just need regulation. We need locally-built AI that reflects local realities.
That said, there are precedents. In the 1970s, the U.S. and the USSR negotiated the SALT agreements to limit nuclear arms. We didn't destroy each other. If the AI threat becomes that serious, I think nations can come together — but we shouldn't wait for that level of crisis.
Brij: What’s your best-case and worst-case scenario for AI?
Luc: Best case: AI empowers sectors like medicine and transportation. Imagine medical agents that help diagnose rare diseases, assist surgeons, or automate hospital administration. These tools won't replace doctors — they’ll make them better.
Transportation is another area. In Europe, where the population is aging, we need solutions that help people move without driving themselves. AI-driven mobility tools can be life-changing.
Worst case? Misuse. GenAI is incredibly easy to use — which means anyone can use it to manipulate, deceive, or harm. Deepfakes, misinformation, fraud — the tools are powerful, and they’re widely available.
That’s the danger of democratization without responsibility.
Brij: You were once at the heart of Apple’s AI team and co-created Siri. If you returned to Apple today, what would you fix?
Luc: It’s disappointing, honestly.
We envisioned Siri as a truly conversational AI back in the 1990s. But by the time it launched in 2011, it was reduced to a one-shot system. You say something, it replies, and that’s it.
Now, thanks to genAI and prompt-based systems, true conversational interfaces are finally possible. Yet Apple still hasn’t made Siri conversational — and it’s 2025.
Apple tends to move slowly when the innovation doesn’t come from within. It took them years to adopt deep learning. Now they’re behind on LLMs and genAI too.
If I were back, I’d push for a truly conversational Siri — powered by local, sustainable models like small language models (SLMs). That’s what the user expects. And Apple has the resources to build it right.
Brij: So what’s next in AI beyond agentic systems?
Luc: Here’s a bold statement: LLMs are dead. Even small ones — SLMs — are not sustainable enough.
The next generation will look back at the past 70 years of AI — rule-based logic systems, symbolic reasoning — and combine them with today’s interfaces.
It won’t be about big statistical models anymore. It will be about hybrid systems that combine logic, frugality, and usability.
People like Yann LeCun are already pushing in this direction. We’ll move toward AIs that reason — not just predict. That shift is coming in the next 2–5 years.
Brij: What would your advice be to young AI startups in India?
Luc: First — learn the math. AI is built on mathematics. If you don’t understand the foundations, you won’t understand the tool.
Second — choose a real-world problem. Don’t build for hype. Build for impact. Choose a field — healthcare, agriculture, logistics — and make it better with AI.
Third — don’t give up on skills. I hear a lot of people say, “AI will do it for me.” No, it won’t. AI can help you — but only if you understand the problem domain deeply.
Learn the domain. Understand the job. Then let AI augment your abilities.
Brij: Last question — what’s next for Luc Julia?
Luc: I honestly don’t know. The future changes every day. And that’s what keeps it exciting.
Brij: Thank you, Luc. Some people might call you cynical, but your views come from experience — and the clarity you bring is important for our generation to hear.
Luc: Thank you. It was a pleasure speaking with you.