

Robert Gilby speaks to Brij Pahwa on Responsible AI, Practical MarTech, and the Future of Human-Centric Innovation
Q: You've witnessed several tech shifts across your career at Disney, Dentsu, Nielsen, and now Addo AI and MOONJI. Where does the rise of AI stand among them?
AI is by far the most profound shift I’ve seen. While digitization, the rise of e-commerce, and streaming transformed industries, AI is about applying the massive amount of data we’ve generated over decades to improve decision-making. My core belief is that AI should not replace human intelligence; it should augment and amplify it. That includes our creativity, intuition, and emotional intelligence. The real transformation is human first. Whether it was Disney’s creative reinvention or Dentsu’s strategic pivot, the most enduring shifts always started with aligning people behind a shared purpose.
Q: But doesn’t AI pose the risk of making human intervention obsolete?
That’s one of the dangers, but only if we deploy AI irresponsibly. We must address issues like algorithmic bias, which is particularly relevant in diverse markets like India. AI models trained only on narrow datasets won’t serve the diversity of language, culture, and economic conditions here. “Human-in-the-loop” systems are critical to ensure cultural relevance, fairness, and quality control. AI works best when it complements people, not eliminates them. Our focus should be removing the drudgery, not the dignity of work.
Q: You’ve also spoken about ‘practical AI.’ What does that look like in the MarTech stack?
Practical AI means solving real business problems. It’s not about hype; it’s about outcomes. For example, MOONJI uses generative AI in video production and virtual sets to enhance creative delivery, making ideas more affordable and faster to execute. SQREEM uses cognitive AI to derive behavioral intelligence from billions of consumer decisions, helping brands like L’Oréal target audiences more precisely.
If AI doesn't improve ROI, relevance, or speed, it’s not practical.
Q: What technical steps are necessary to mitigate bias in AI, especially in a complex country like India?
Bias isn’t a tech problem; it’s a human one. You need diverse teams and training data that reflect the whole of India, not just metro-centric viewpoints. Companies need to conduct representational reviews of their datasets and output testing before full deployment. The role of humans in the loop, especially at quality assurance stages, is essential. And beyond data checks, cultural context matters. AI models must understand linguistic diversity, regional nuances, and economic differences if they are to serve India's population fairly.
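The "representational review" step above can be sketched in a few lines: before deployment, tally how each group is represented in the training data and flag anything below a minimum share. The records and the `language` attribute here are illustrative placeholders, not a real dataset; a production audit would cover many attributes (region, gender, income band, and so on).

```python
from collections import Counter

# Hypothetical training records tagged with a language label.
# In a real review, these would be loaded from the actual dataset.
records = [
    {"text": "sample 1", "language": "hindi"},
    {"text": "sample 2", "language": "hindi"},
    {"text": "sample 3", "language": "english"},
    {"text": "sample 4", "language": "tamil"},
]

def representation_report(records, attribute, floor=0.10):
    """Compute each attribute value's share of the data and flag
    values that fall below a chosen minimum share (`floor`)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        value: {"share": n / total, "under_floor": n / total < floor}
        for value, n in counts.items()
    }

report = representation_report(records, "language")
# e.g. report["tamil"]["share"] is 0.25 here; a flagged value would
# trigger targeted data collection before full deployment.
```

The output-testing half of the review works the same way: run the model on a held-out slice per group and compare error rates, with humans in the loop on the flagged segments.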
Q: You've led strategy across Asia-Pacific. What’s different about deploying AI here versus the West?
First, infrastructure and data readiness vary. User behavior in Jakarta isn’t the same as in Mumbai or Bangkok. Local context must shape model training. Second, APAC is not simply adopting Western LLMs; it’s building indigenous models. India, especially, is leapfrogging due to its talent, scale, and entrepreneurial spirit. You have brilliant marketers here solving incredibly complex problems across languages, platforms, and economic strata. If you can build an AI model that works for India, it’ll work almost anywhere.
Q: Where are we seeing the most measurable ROI from AI today?
Advertising and marketing are definitely leading. Whether it’s performance optimization, content customization, or customer journey analysis, brands and agencies alike are deploying AI aggressively. Companies like WPP are using open intelligence platforms to synthesize massive data pools. Meta and YouTube are applying dynamic content optimization. At MOONJI, we’re creating generative content at scale with LED screens and VFX, replacing the need to shoot in 40 locations. It’s faster, cheaper, and just as compelling. Every sector from FMCG to fashion is experimenting.
Q: Many marketers face challenges integrating LLMs into existing MarTech stacks. What's your advice on navigating this?
Don’t start with “How do we integrate this amazing LLM?” Start with: “What business problem are we solving?” Then work backward. Data governance is essential — understanding what data you have, where it came from, and whether it’s fit for the intended AI model. Tools from Microsoft and Salesforce offer secure enterprise integration options. But governance panels, usage policies, and ethics reviews aren’t optional; they’re critical. Also, LLMs aren’t always the best fit. Sometimes cognitive or predictive models work better. Choose based on the problem, not the trend.
Q: What skills must modern marketers build to stay relevant in this AI-heavy world?
Stay focused on the customer. The marketing funnel may be data-driven now, but the objective hasn’t changed: drive awareness, consideration, and conversion. Marketers must ask where AI can improve this journey — faster insights, reduced waste, better personalization. Measurement is also evolving. It’s no longer about reach; it’s about resonance. We’re moving from counting eyeballs to measuring heartbeats. Cognitive AI, like that used by SQREEM, helps us understand emotional engagement and sentiment. The best marketers will use AI to enhance creativity, not replace it.
Q: You mentioned tools like vector databases and semantic search. How do they influence marketing decisions?
They allow us to organize and access knowledge at scale. Semantic tools enable personalized experiences by recognizing intent, not just keywords. Attention measurement tools like Lumen or Amplified Intelligence are helping marketers track subconscious engagement. But the tech should follow the goal. Ask: What would I ideally do if I had no constraints? Then apply the right stack — be it semantic search, vector models, or real-time inference. Don't put a Ferrari engine in a go-kart. Change your decision-making process to match the tool’s potential.
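The intent-versus-keyword distinction above can be sketched concretely: documents and the query are stored as embedding vectors, and retrieval ranks by vector similarity rather than shared words. The vectors below are hand-picked toy values for illustration; in practice they would come from an embedding model, with a vector database handling storage and nearest-neighbor lookup at scale.

```python
import numpy as np

# Toy document embeddings (hypothetical values, not real model output).
docs = {
    "running shoes sale":      np.array([0.9, 0.1, 0.0]),
    "marathon training plan":  np.array([0.8, 0.3, 0.1]),
    "office chair ergonomics": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query_vec, docs, top_k=2):
    """Rank documents by similarity to the query embedding."""
    ranked = sorted(docs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query like "jogging gear" shares no keywords with "running shoes
# sale", but a nearby embedding surfaces it anyway.
query = np.array([0.85, 0.2, 0.05])
print(semantic_search(query, docs))  # → ['running shoes sale', 'marathon training plan']
```

Keyword matching would have returned nothing for this query; similarity over embeddings is what lets the system recognize intent.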
Q: What regulations do we need as we enter this hyper-personalized, agentic AI era?
We need to ensure AI creates opportunities for all, not just a few. Regulation should focus on data protection, consent, bias mitigation, and fair economic distribution. Governments are trying to catch up, but businesses must take responsibility, too. Responsible AI means augmenting human creativity and potential, not just cutting costs or replacing people. Skills training and equitable access are essential. As AI moves into agentic modes — making decisions on your behalf — governance will become even more vital.
Q: Finally, what’s next in the AI journey? What excites you most?
Predictive cultural intelligence. AI that anticipates cultural shifts before they happen by analyzing behavioral data, neuroscience, and social signals. Imagine a marketer designing a campaign not just based on trends, but anticipating them. That requires integrating social listening, behavioral analysis, and contextual pattern recognition. We’re still far from perfect, but if we can combine this predictive power with human authenticity and empathy, we won’t just sell better; we’ll connect better.
As for readiness — no one’s ever fully ready. But if we focus on responsible, human-centered AI, we’ll find our way forward.