‘Reasoning Is the Real Game-Changer’: L&T Tech CTO on What’s Next in AI

Ashish Khushu, Chief Technology Officer of L&T Technology Services (LTTS), believes the next leap in artificial intelligence will not come from larger models or faster chips. It will come from machines that can reason.

In a conversation with e4m and MartechAI.com, Khushu outlines how LTTS is preparing for an AI-driven future rooted in domain-led innovation, agentic AI, and the steady emergence of reasoning architectures. Drawing from nearly a decade of hands-on AI investments and a long-term innovation roadmap, Khushu’s perspective is grounded, pragmatic, and notably ahead of the hype cycle.

From building a culture of innovation across more than 13,000 engineers to advising global clients on applied AI, this conversation offers a deep look into how one of India’s largest engineering R&D organizations is thinking about the next phase of intelligence.

How do you cultivate a culture of innovation across a large organization like LTTS? What can marketing and business leaders learn from that approach?

In the last few years, everybody has been talking about innovation. At LTTS, what differentiates us is that innovation is not just a program. It’s a culture. And culture does not just happen. It has to be built deliberately.

For us, building that culture required a consistent roadmap, strong support from the board and the CEO, and patience. It takes time, investment, and perseverance. There is no shortcut.

We started this journey in a structured way around 2017. We defined what innovation meant for us, not in abstract terms, but in measurable outcomes. Innovation became part of how we operated, not a side function or a lab sitting somewhere outside the business.

We also made it clear that innovation was not limited to a small group of people. A true culture of innovation involves a very large number of people participating in it, sometimes actively and sometimes passively. Both matter.

To make it credible, we introduced metrics. We tracked patents filed, research papers published, participation in hackathons, technical talks, and engagement with professional bodies. These metrics were reviewed quarterly and even reported to the board. That created seriousness and accountability.

Over time, people saw that innovation was not just encouraged, it was expected. That’s when culture starts to change.

When you say you aligned different verticals to this innovation strategy, how did that play out across teams like sales, digital, or marketing?

Initially, it was not easy. People asked very practical questions. How does this help me in my role? Why should I invest time in this? What’s in it for me?

That’s where leadership vision mattered. We were clear that innovation was core to the future of the company, not optional.

One important step was transparency. We published innovation metrics internally, quarter after quarter. When people saw that these numbers were being taken seriously at the highest levels, it changed perception.

Another key factor was what we did with successful ideas. The CTO organization would incubate solutions up to a prototype or MVP stage. Once we saw early customer success, we transferred the solution and the team into the relevant business unit.

This showed that the innovation program was not about building a separate empire. It was about enabling the business. That built trust.

Today, around 13,000 to 14,000 employees are actively or passively involved in innovation programs. Hackathons are conducted at every major location. Participation has scaled because people see outcomes, not just intent.

We also moved from filing about 25 patents a year to filing around 110 for our own work, and roughly 225 patents annually when customer projects are included. That scale did not happen overnight. It came from consistency.

How did generative AI and large language models influence this innovation journey? Did they change how LTTS approached AI?

We started investing in AI in early 2017. At that time, AI was still emerging in engineering services. By 2020, we were already working with language models. By 2021, we had access to large language models and were testing them.

So when the LLM wave hit the mainstream in 2022, we were not reacting. We had a calibrated response ready.

One belief we have held consistently is that general-purpose models are not going to solve domain-specific problems on their own. We were saying this as early as 2021.

Unless you have domain data and context, a general LLM is of limited value. That’s why we never tried to build massive models ourselves. That’s not our business. Building large foundational models requires enormous investment and does not align with our role as an engineering services company.

Instead, we focused on applied AI. We built around 10 to 12 AI-based solutions targeted at specific use cases. These solutions used smaller, focused models trained on relevant data.

We also moved early into embedded AI. We believed that AI would not stay only in the cloud. It would move to system-on-chip architectures. That’s why we invested in running AI on chips and partnered with companies like SiMa.ai to bring AI to the edge.

This domain-led, use-case-driven approach helped us avoid spending time on AI experiments that do not convert into business impact.

There is a lot of discussion today around small language models and agentic AI. How do you see that evolving?

Our view is very clear. Small language models will become the norm in enterprise contexts. But that, by itself, is not the most important shift.

The real game-changer is reasoning.

Most AI systems today are extremely strong at perception. They can analyze data, recognize patterns, generate text, classify images, or suggest next steps. But perception is not intelligence. True intelligence emerges when a system can reason about situations it has not explicitly been trained for.

Reasoning is core.

Today, many AI systems still operate within predefined boundaries. You can train them with thousands of scenarios and decision rules, but when the system encounters a situation that does not fit the script, it struggles. Humans, on the other hand, deal with incomplete information all the time. We infer, abstract, and make judgment calls. That is the gap reasoning aims to close.

This is where agentic AI becomes meaningful. An agent is not just an automation script or a chatbot. A true AI agent must be able to decide what to do next, even when the path is not clearly defined. That requires reasoning.
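To make that distinction concrete, here is a minimal sketch in Python of what an agent loop with a reasoning step might look like. The tool names, thresholds, and context fields are hypothetical; the threshold logic stands in for an actual reasoning model or planner, and none of this describes LTTS’s systems.

```python
from dataclasses import dataclass, field

# Hypothetical tools the agent can invoke; names are illustrative only.
TOOLS = {
    "inspect_sensor": lambda ctx: {"vibration": ctx.get("vibration", 0.0)},
    "escalate_to_operator": lambda ctx: "ticket-created",
    "shut_down_machine": lambda ctx: "machine-stopped",
}

@dataclass
class Agent:
    context: dict = field(default_factory=dict)

    def reason(self) -> str:
        """Decide the next action from the current context.

        A scripted bot would look up a predefined rule; a reasoning agent
        weighs incomplete evidence and chooses an action even when no rule
        matches exactly. The thresholds below are a placeholder for that
        reasoning step, not a real decision model.
        """
        vibration = self.context.get("vibration", 0.0)
        if vibration > 0.9:
            return "shut_down_machine"       # clear-cut case, act directly
        if 0.5 < vibration <= 0.9:
            return "escalate_to_operator"    # ambiguous, defer to a human
        return "inspect_sensor"              # gather more information first

    def step(self) -> str:
        action = self.reason()
        result = TOOLS[action](self.context)
        # Feed the outcome back into context so the next decision improves.
        self.context[action] = result
        return action

agent = Agent(context={"vibration": 0.72})
print(agent.step())  # -> "escalate_to_operator"
```

The point of the sketch is the loop itself: the agent decides what to do next from whatever context it has, rather than executing a fixed sequence of steps.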

We have been investing in reasoning for more than two years. Multimodality has already progressed significantly. AI systems can now process text, images, audio, and sensor data together. But multimodality alone does not create autonomy. Reasoning is the missing layer that connects perception to action.

Once reasoning becomes reliable, agents will move beyond task execution. They will begin to plan, prioritize, and take decisions. That is when agentic AI shifts from being an efficiency tool to becoming a decision-making system.

How close are we to that stage? And what does that mean for governance and guardrails? 

We are closer than many people realize.

In the next 9 to 12 months, we expect to see a noticeable strengthening of reasoning capabilities in AI systems. That does not mean AI will suddenly become human-like, but it does mean systems will become far better at handling ambiguity, abstraction, and multi-step decision-making.

Today, most AI still requires a human in the loop to interpret outputs and decide actions. As reasoning improves, that dependence will reduce. Systems will start taking autonomous decisions within defined boundaries. This could include deciding when to shut down a machine, reroute a process, escalate an issue, or trigger downstream actions without waiting for human approval.
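One way to picture “autonomous decisions within defined boundaries” is a policy layer that separates actions an agent may take on its own from actions that still require a person. The action names, confidence threshold, and policy sets below are invented for illustration only.

```python
# Illustrative guardrail: an autonomous decision is permitted only inside
# explicitly defined boundaries; anything outside them goes to a human.
ALLOWED_AUTONOMOUS_ACTIONS = {"reroute_process", "trigger_downstream_job"}
HUMAN_APPROVAL_REQUIRED = {"shut_down_machine", "escalate_issue"}

def execute(action: str, confidence: float, approve_fn) -> str:
    """Run an action autonomously only if policy and confidence permit."""
    if action in ALLOWED_AUTONOMOUS_ACTIONS and confidence >= 0.8:
        return f"executed {action} autonomously"
    if action in HUMAN_APPROVAL_REQUIRED or confidence < 0.8:
        return approve_fn(action)  # human in the loop for boundary cases
    return f"rejected {action}: outside defined boundaries"

# A routine, high-confidence action runs on its own; a high-impact one
# is still routed to a person.
print(execute("reroute_process", 0.92, lambda a: f"{a} queued for approval"))
print(execute("shut_down_machine", 0.95, lambda a: f"{a} queued for approval"))
```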

This is also where the idea of agent farms becomes relevant. Instead of one monolithic AI system, you will have multiple specialized agents working together. Each agent handles a specific task, but they coordinate, share context, and reason collectively. That kind of architecture dramatically expands what automation can achieve.
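As a rough sketch of that architecture, the snippet below shows several specialised agents writing to and reading from one shared context, with a simple coordinator deciding the order in which they run. The agent roles and their logic are hypothetical, purely to show the shape of the idea.

```python
# Minimal sketch of an "agent farm": specialised agents that share one
# context object, each handling a narrow task, run by a coordinator.
shared_context: dict = {"alerts": [], "plan": []}

def monitoring_agent(ctx: dict) -> None:
    # Detects an anomaly and records it for the other agents.
    ctx["alerts"].append({"machine": "press-07", "anomaly": "temperature"})

def diagnosis_agent(ctx: dict) -> None:
    # Reads the monitoring agent's alerts and proposes inspection steps.
    for alert in ctx["alerts"]:
        ctx["plan"].append(f"inspect {alert['machine']} for {alert['anomaly']}")

def scheduling_agent(ctx: dict) -> None:
    # Turns the diagnosis into a concrete maintenance action.
    ctx["plan"].append("book maintenance window for press-07")

# The coordinator sequences the agents; each one builds on what the
# others have already written into the shared context.
for agent in (monitoring_agent, diagnosis_agent, scheduling_agent):
    agent(shared_context)

print(shared_context["plan"])
```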

Of course, greater autonomy also increases responsibility.

As AI systems move closer to decision-making roles, governance becomes non-negotiable. Guardrails, transparency, and accountability must evolve alongside capability. Responsible AI and ethical AI are not afterthoughts. They have to be built into the system design.

That said, I am optimistic. We have seen this pattern before with other technologies. Governments and regulators tend to react reasonably quickly when new capabilities introduce new risks. The conversations around AI regulation, safety, and ethics are already happening globally.

Guardrails will mature as reasoning systems mature. This will be an ongoing process, not a one-time fix. Innovation and regulation will have to co-exist, and I believe they will.

What are you personally focused on over the next two to three years at LTTS?

There are two major focus areas for me.

The first is continuing to invest in technologies like reasoning, multimodality, and quantum for AI. Quantum is still a longer-term play for us, but we are exploring how it could accelerate AI and optimization problems.

The second, and equally important, is people.

If you are not a true technical expert, new tools will not help you much. Deep domain expertise matters more than ever. We want to build mastery in the areas we operate in, whether that is AI, chips, embedded systems, or engineering domains.

That is how we stay relevant. Tools will change. Platforms will evolve. But mastery endures.

Final thoughts?

Innovation is a journey, not a destination.

We have built structures that allow innovation to scale across the organization, but the work never stops. The AI landscape is evolving rapidly, and staying relevant means being proactive while remaining grounded in domain knowledge.

That balance between forward-looking investment and practical execution is what we try to maintain every day at LTTS.