Tesla Ends Dojo Supercomputer Project, Shifts AI Hardware Strategy to Samsung and Nvidia

Tesla has officially ended its ambitious Dojo supercomputer project, once touted as a breakthrough in AI training for autonomous driving, signaling a major shift in the company’s AI hardware strategy. The decision, confirmed by multiple reports and industry insiders, comes less than two years after CEO Elon Musk described Dojo as a “key enabler” of full self-driving (FSD) capabilities.

Dojo’s Promise and the Sudden Pivot

Announced in 2021, Dojo was designed to train Tesla’s AI models on the massive volumes of video data collected from its global fleet of vehicles. The custom-built system aimed to reduce reliance on third-party compute providers like Nvidia by leveraging Tesla’s in-house chip designs. Musk claimed the system would process data faster and more efficiently than existing solutions, giving Tesla a competitive edge in the race toward autonomous mobility.

However, the project faced delays, escalating costs, and engineering challenges. According to sources, Dojo struggled to scale effectively for commercial deployment. As the technical hurdles mounted, Tesla chose to wind down the project and redirect resources to other AI hardware initiatives.

Shift to Nvidia and Samsung Partnerships

Industry reports indicate Tesla will now rely heavily on Nvidia’s H100 GPUs and collaborate with Samsung for custom chip development. The pivot suggests Tesla is prioritizing faster deployment over in-house innovation, choosing proven hardware to accelerate AI model training.

Samsung’s role is expected to center on developing Tesla’s next-generation AI5 and AI6 chips, designed for advanced driver-assistance systems (ADAS) and robotics. This partnership could enable Tesla to continue innovating in AI without the operational risks tied to Dojo’s unique architecture.

Impact on Autonomous Driving Roadmap

The end of Dojo has raised questions about Tesla’s robotaxi timeline. While Musk has previously hinted at launching fully autonomous ride-hailing services in the near future, losing Dojo’s dedicated compute platform may slow internal development cycles.

Some analysts see the move as a pragmatic step, allowing Tesla to adopt a hybrid AI infrastructure—balancing cutting-edge in-house hardware projects with established external suppliers. Others believe it reflects a shift in priorities as Tesla faces competition from companies like Waymo, Cruise, and Chinese EV makers that are also investing heavily in autonomous driving AI.

Leadership Changes Add to the Shake-up

The transition coincides with leadership changes inside Tesla’s AI team. Pete Bannon, the executive overseeing custom chip development and one of the key architects behind Dojo, is reportedly leaving the company. His departure adds uncertainty to Tesla’s long-term AI hardware ambitions.

Meanwhile, Tesla continues to enhance its software stack for FSD, rolling out updates to its beta program across multiple markets. The company maintains that AI advancements—both in hardware and algorithms—remain central to its strategy.

Industry Reactions

Tech analysts and AI researchers have mixed views on the decision.

  • Some see it as a sign that AI compute specialization remains difficult to execute at scale without significant cost overruns.
  • Others argue that relying on Nvidia and Samsung could allow Tesla to focus more on refining its core AI models rather than wrestling with hardware engineering challenges.

Industry peers like Apple, Meta, and Microsoft have also shifted between custom AI hardware projects and vendor partnerships, underscoring how volatile the space remains.

The Bigger AI Race

The Dojo pivot comes as the global AI hardware race intensifies. Nvidia continues to dominate with its GPU ecosystem, while companies from Google (with its TPUs) to Amazon (with its Trainium chips) vie for share of the AI accelerator market.

For Tesla, the shift could mean faster iteration cycles and reduced risk—but it also signals a retreat from its most ambitious in-house AI infrastructure project to date.