Starcloud has become the first company to train and run large language models in space, a breakthrough in orbital compute infrastructure that opens a new chapter in how AI systems are developed and deployed. The company successfully trained an AI model aboard one of its orbiting data centers, demonstrating the feasibility of running advanced compute workloads off Earth. The milestone represents an early step toward building space-based AI infrastructure for research, commercial applications and future global compute networks.
According to Starcloud, the achievement was made possible through a combination of custom satellite hardware and Nvidia’s H100 processors, which were adapted for the extreme conditions of space. The company has been developing orbital compute systems designed to reduce reliance on terrestrial data centers and enable continuous, energy-efficient processing using solar power. During the test, the onboard system trained a lightweight language model and executed inference tasks, confirming that space-based compute can operate reliably in real time.
The company said its goal is to build distributed orbital data centers that can support demanding AI workloads while easing the environmental and infrastructure pressures associated with Earth-based data facilities. Space offers several advantages for compute, including lower cooling requirements, access to uninterrupted solar energy and the potential for global connectivity through satellite networks. Starcloud believes these factors make orbital compute a promising supplement to terrestrial infrastructure as AI adoption grows.
Industry analysts have noted that AI models are becoming increasingly large and resource intensive, placing unprecedented demands on global energy and data center capacity. Some projections suggest that AI compute requirements could outpace energy availability in the coming years. Starcloud’s orbital approach seeks to address this long-term challenge by moving part of the compute burden to space, where solar energy is abundant and cooling costs are lower.
The company’s initial demonstration involved training a small-scale model to validate stability, hardware performance and connectivity. While the hardware cannot yet support training at the scale of leading terrestrial models, Starcloud said it plans to expand capacity and introduce more powerful systems over time. Future missions will involve training larger models, deploying inference engines for commercial use and supporting scientific research that requires continuous global coverage.
Nvidia, which provided the GPU technology for the mission, said that the successful test reflects the adaptability of its processors for emerging environments. The company has been increasing its involvement in space and high-reliability compute projects, including collaborations with the aerospace and defense sectors. Nvidia said that supporting AI workloads in space could become part of a broader evolution in distributed compute infrastructure.
Starcloud’s approach involves modular satellites that act as compute nodes. These nodes can work independently or as part of a coordinated orbital network. The company envisions a future in which AI models can be trained, updated or deployed through systems that orbit Earth, enabling rapid access to global data and reducing latency for certain tasks. Orbital nodes could also support AI-driven Earth observation, climate monitoring, disaster response and telecommunications.
Experts say that the concept of orbital AI presents new opportunities as well as technical challenges. Hardware must withstand radiation, extreme temperatures and vibration while maintaining stable performance. Communication bandwidth between satellites and Earth must also be sufficient to support AI workloads. Starcloud said that early tests show promising results in all key areas, although further optimization is needed before commercial-scale operations can begin.
Interest in orbital compute has grown as private space companies expand capacity for satellite deployment. Advances in launch affordability, satellite miniaturization and power efficiency have made it possible to consider more specialized missions beyond communication and imaging. AI compute is emerging as one of the next potential applications, particularly for companies looking to diversify global infrastructure.
Starcloud stated that its long-term vision is to build a scalable constellation of compute satellites that can collaborate with terrestrial data centers. This hybrid approach could support organizations seeking resilience, redundancy and flexible compute access across distributed networks. The company believes that orbital AI infrastructure could be especially relevant for sectors that rely on real-time global data, such as logistics, environmental research and security.
The development also reflects increasing interest in space-based AI from governments and research institutions. Space agencies around the world have been exploring onboard AI to process satellite data without relying entirely on ground stations. Starcloud’s achievement takes this concept further by bringing model training into orbit rather than limiting AI to inference tasks.
While the company did not disclose timelines for commercial availability, it said that additional missions are planned for the coming year. These missions will test larger compute clusters, new cooling technologies and improved communication links. Starcloud aims to work with enterprise customers, research institutions and global partners to refine use cases and expand operational capability.
The successful experiment positions Starcloud as a pioneer in a new category of AI infrastructure. As technology companies and researchers explore alternatives to energy-intensive terrestrial compute, orbital AI could become part of a broader strategy to meet the rising demand for processing power. The company said that today’s milestone represents an early but significant step toward building a distributed AI network that extends beyond Earth.