Oracle founder Larry Ellison has offered a clear framework for understanding how artificial intelligence models are evolving, describing two distinct categories of AI that are shaping enterprise and consumer applications. His remarks come as businesses increasingly grapple with how to deploy AI systems that balance intelligence, speed and reliability across different use cases.
Ellison explained that not all AI models are designed to operate in the same way or under the same constraints. One category focuses on large, centralised models that rely on extensive data processing and cloud infrastructure. These systems are typically used for tasks such as analysis, prediction, content generation and complex decision support, where response time is important but not critical to real-world safety or immediate action.
The second category, according to Ellison, centres on low-latency intelligence. These models are designed to operate in real time, often close to where data is generated, and must respond almost instantaneously. Such systems are critical in environments where delays of even milliseconds can have significant consequences, including autonomous vehicles, robotics, industrial automation and certain defence or security applications.
To illustrate this distinction, Ellison pointed to autonomous driving technology as an example of low-latency intelligence in action. In these scenarios, AI systems cannot rely on distant cloud servers to make decisions. Instead, they must process sensor data locally and act immediately, whether that involves braking, steering or responding to changing road conditions. The priority in such systems is speed and reliability rather than sheer model size.
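To make that constraint concrete, consider a minimal sketch of an on-device control loop with an explicit latency budget. Everything here is hypothetical and illustrative: the 50 ms deadline, the sensor fields and the stand-in policy are placeholders, not any vehicle vendor's actual stack.

```python
import time

LATENCY_BUDGET_MS = 50  # hypothetical per-cycle deadline; real systems budget tighter


def read_sensors():
    """Placeholder for camera, lidar and radar input captured on the vehicle."""
    return {"speed_kmh": 62.0, "obstacle_distance_m": 14.5}


def local_policy(frame):
    """Stand-in for a compact on-device model: no network round trip."""
    return "brake" if frame["obstacle_distance_m"] < 20.0 else "maintain"


def apply_control(action):
    """Placeholder for the actuation layer (braking, steering)."""
    print(f"actuate: {action}")


def control_cycle():
    start = time.perf_counter()
    action = local_policy(read_sensors())  # inference runs locally, on the device
    apply_control(action)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > LATENCY_BUDGET_MS:
        # In a real-time system, missing the deadline is itself a fault condition.
        print(f"deadline miss: {elapsed_ms:.2f} ms")


control_cycle()
```

The point of the sketch is structural: sensing, inference and actuation all happen inside one local loop, and exceeding the deadline is treated as a fault rather than a mere slowdown.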
This framing reflects a broader industry debate around where AI computation should take place. Large language models and generative AI systems have driven massive investment in cloud-based infrastructure, where scale and compute power enable sophisticated reasoning and content creation. At the same time, the rise of edge computing is pushing intelligence closer to devices, machines and environments where real-time responsiveness is essential.
Ellison’s comments highlight how enterprises are increasingly expected to deploy both types of AI, depending on their operational needs. In many organisations, centralised AI models are used for planning, forecasting and optimisation, while low-latency models support frontline operations. This dual approach requires careful coordination between cloud infrastructure, data pipelines and edge systems.
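As a rough illustration of that coordination problem, a deployment layer might route each workload by how much delay it can tolerate. The sketch below is a toy heuristic under assumed names, and the 100 ms threshold is illustrative rather than an industry standard.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    max_latency_ms: float  # how long the caller can tolerate waiting


def route(task: Task) -> str:
    """Send latency-sensitive work to the edge and heavy reasoning to the cloud."""
    return "edge" if task.max_latency_ms < 100 else "cloud"


for task in [
    Task("collision avoidance", max_latency_ms=20),
    Task("quarterly demand forecast", max_latency_ms=60_000),
]:
    print(f"{task.name} -> {route(task)}")
```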
From an enterprise technology perspective, the distinction also has implications for cost, architecture and governance. Large cloud-based models typically demand significant compute resources and ongoing operational expenditure. Low-latency models, while often smaller, require specialised hardware and software optimised for speed and reliability. Companies must decide how to allocate investment across these different layers of intelligence.
The growing focus on low-latency AI is being driven in part by advances in hardware. Improvements in specialised processors and embedded systems have made it possible to run sophisticated models directly on devices rather than relying entirely on remote servers. This shift is enabling new applications in manufacturing, logistics, healthcare and transportation where immediate response is critical.
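One technique behind this shift is reducing a model's numerical precision so it fits within a device's memory and compute limits. The snippet below sketches naive 8-bit weight quantisation in NumPy purely to show the idea; production toolchains handle calibration and accuracy far more carefully.

```python
import numpy as np

# Toy 8-bit weight quantisation: the kind of compression that helps a model
# fit on embedded hardware. This only demonstrates the underlying idea.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)

scale = np.abs(weights).max() / 127.0           # map the float range onto int8
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print("storage:", weights.nbytes, "bytes ->", quantized.nbytes, "bytes")
print("max rounding error:", np.abs(weights - dequantized).max())
```

The trade-off is visible in the output: storage shrinks fourfold while each weight picks up a small rounding error, which is acceptable for many edge workloads.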
Ellison’s perspective also underscores the limits of a one-size-fits-all approach to AI. While large models have captured public attention due to their conversational and generative capabilities, they are not always suitable for mission-critical tasks. In high-risk environments, predictability, speed and robustness often matter more than the ability to generate complex language or imagery.
For technology leaders, this distinction adds another layer to AI strategy. Rather than asking whether to adopt AI, organisations must decide which types of models are appropriate for specific functions. This includes determining where data should be processed, how models should be trained and deployed, and how performance should be measured across different contexts.
The emphasis on low-latency intelligence also raises questions around regulation and accountability. Real-time AI systems operating in physical environments can have direct safety implications. As governments and regulators develop AI frameworks, they may apply different standards to systems that make instantaneous decisions compared with those used for advisory or analytical purposes.
Ellison’s remarks come at a time when enterprise AI adoption is accelerating globally. Companies are moving beyond experimentation toward large-scale deployment, integrating AI into core products and operations. Understanding the trade-offs between centralised and low-latency models is becoming increasingly important as AI systems move from the lab into the real world.
Industry observers note that the future of AI is likely to involve hybrid architectures that combine both approaches. Centralised models may provide strategic intelligence and continuous learning, while low-latency systems execute decisions at the edge. This interplay could define how AI-driven organisations operate over the next decade.
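One way to picture that interplay is an edge loop that acts on local data immediately while shipping telemetry upstream for later analysis and retraining. The following is a toy sketch of that pattern, with every name and threshold hypothetical.

```python
import queue
import threading
import time

telemetry = queue.Queue()  # asynchronous channel from edge to cloud


def edge_loop():
    """Acts on local readings immediately; never blocks on the cloud."""
    for reading in [0.2, 0.9, 0.4]:
        decision = "alert" if reading > 0.8 else "ok"  # local, low-latency call
        telemetry.put({"reading": reading, "decision": decision})
        time.sleep(0.01)


def cloud_loop():
    """Collects telemetry asynchronously for retraining and planning."""
    batch = [telemetry.get() for _ in range(3)]
    print(f"cloud side received {len(batch)} records for the next training run")


worker = threading.Thread(target=edge_loop)
worker.start()
cloud_loop()
worker.join()
```

The design choice the sketch highlights is decoupling: the edge path decides without waiting on the network, and the cloud path learns from the accumulated record at its own pace.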
The distinction also reflects a maturation in how AI is discussed at the leadership level. Rather than focusing solely on model size or capability, executives are increasingly considering practical constraints such as response time, reliability and integration with existing systems. This shift suggests a more grounded and application-focused phase of AI adoption.
Ellison’s framing offers a useful lens for understanding where AI investments are heading. As enterprises deploy AI across diverse environments, the ability to match the right type of model to the right task will be a key determinant of success. In this context, low-latency intelligence is not a niche capability but a foundational requirement for many emerging AI-driven applications.
By outlining two clear categories of AI models, Ellison has highlighted an important strategic choice facing organisations. The path forward is not about choosing one type over the other but about understanding how different forms of intelligence can work together to deliver real-world impact.