Tesla is turning on its Dojo supercomputer today, a move that will significantly accelerate training on the massive datasets behind Full Self-Driving (FSD) Beta.
At its core, Tesla’s new supercomputer delivers a peak performance of 340 FP64 PFLOPS for technical computing and 39.58 INT8 ExaFLOPS for AI applications. The in-house designed and hosted Dojo overshadows even Leonardo, the world’s fourth-fastest supercomputer, which offers 304 FP64 PFLOPS. (via Tom’s Hardware)
All that power is set to transform the FSD training process. The leap in computing power sharpens Tesla’s competitive edge among automakers, letting it efficiently process the data collected by its vast global fleet of vehicles.
Tim Zaman, AI Infra & AI Platform Engineering Manager at Tesla, shared Dojo’s go-live date on X, highlighting the significance of Tesla’s computational capabilities.
Tesla AI 10k H100 cluster, go live monday.
Due to real-world video training, we may have the largest training datasets in the world, hot tier cache capacity beyond 200PB – orders of magnitudes more than LLMs.
Join us! https://t.co/F4A0Qb0CXG
— Tim Zaman (@tim_zaman) August 26, 2023
There are challenges to overcome, however. Nvidia is struggling to meet the surging demand for its H100 GPUs, which is part of why Tesla is funneling over $1 billion into the development of Dojo. The company also plans to invest more than $2 billion in AI training in 2023 and another $2 billion in 2024 for FSD training, underscoring Tesla’s commitment to achieving full autonomy.
Source: https://driveteslacanada.ca/news/tesla-turns-on-dojo-supercomputer-accelerating-full-self-driving-fsd-beta-training/