Elon Musk doesn’t want Tesla to be just an automaker. He wants Tesla to be an AI company, one that has figured out how to make cars drive themselves.
Crucial to that mission is Dojo, Tesla’s custom-built supercomputer for training its Full Self-Driving (FSD) neural networks. FSD isn’t actually fully self-driving; it can perform some automated driving tasks, but it still requires an attentive human behind the wheel. Tesla believes, however, that with more data, more compute power, and more training, it can cross the threshold from almost self-driving to fully self-driving.
And that’s where Dojo comes in.
Musk has been teasing Dojo for some time, and he ramped up talk of the supercomputer throughout 2024. Another supercomputer called Cortex entered the picture that year, but Dojo could still matter to Tesla: with sales slumping, investors want reassurance that the company can actually achieve autonomy. Below is a timeline of Dojo mentions and promises.
2019
The first mention of Dojo
April 22 – At Tesla’s Autonomy Day, the automaker puts its AI team onstage to talk about Autopilot and Full Self-Driving, and the AI that powers them both. The company shares information about Tesla’s custom-built chips designed specifically for neural networks and self-driving cars.
During the event, Musk teases Dojo, revealing that it is a supercomputer for training AI. He also notes that all Tesla cars being produced at the time have all the hardware necessary for full self-driving and need only a software update.
2020
Musk starts the Dojo roadshow
February 2 – Musk says Tesla will soon have more than 1 million connected vehicles worldwide equipped with the sensors and compute needed for full self-driving, and touts Dojo’s capabilities:
“Dojo, our training supercomputer, will be able to process vast amounts of video training data and efficiently run hyperspace arrays with a vast number of parameters, plenty of memory and ultra-high bandwidth between cores. More on this later.”
August 14 – Musk reiterates Tesla’s plan to develop a neural network training computer called Dojo “to process truly vast amounts of video data,” calling it “a beast.” He also says the first version of Dojo is “about a year away,” which would put its launch date around August 2021.
December 31 – Musk says Dojo isn’t strictly necessary, but it will make self-driving better. “It isn’t enough to be safer than human drivers; Autopilot ultimately needs to be more than 10 times safer than human drivers.”
2021
Tesla makes Dojo official
August 19 – The automaker officially announces Dojo at Tesla’s first AI Day, an event aimed at attracting engineers to Tesla’s AI team. Tesla also introduces its D1 chip, which it says it will use alongside Nvidia GPUs to power the Dojo supercomputer. Tesla notes the AI cluster will house 3,000 D1 chips.
October 12 – Tesla releases a Dojo Technology whitepaper, “A Guide to Tesla’s Configurable Floating Point Formats & Arithmetic.” The whitepaper outlines a technical standard for new binary floating-point formats and arithmetic used in deep learning neural networks, which can be implemented entirely in software, entirely in hardware, or in any combination of the two.
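To make the idea of a “configurable” floating-point format concrete, here is a minimal sketch in Python. It assumes an illustrative 8-bit layout (1 sign bit, 4 exponent bits, 3 mantissa bits) with an adjustable exponent bias; the field widths, bias handling, and function name are assumptions for illustration only, not the actual formats defined in Tesla’s whitepaper.

def decode_cfloat8(byte: int, exp_bits: int = 4, man_bits: int = 3, bias: int = 7) -> float:
    # Decode an 8-bit value with a configurable exponent bias into a Python float.
    assert 1 + exp_bits + man_bits == 8, "fields must fill exactly one byte"
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = byte & ((1 << man_bits) - 1)
    if exponent == 0:
        # Subnormal case: no implicit leading 1, fixed minimum exponent.
        return sign * (mantissa / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1.0 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)

# With bias=7 the byte 0b00111000 decodes to 1.0; raising the bias to 10
# rescales the whole representable range without changing the bit layout,
# which is the kind of knob a configurable format exposes.
print(decode_cfloat8(0b00111000, bias=7))   # 1.0
print(decode_cfloat8(0b00111000, bias=10))  # 0.125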
2022
Tesla reveals Dojo progress
August 12 – Musk says Tesla will “phase in Dojo” and that it “won’t need to buy as many incremental GPUs next year.”
September 30 – At Tesla’s second AI Day, the company reveals that it has installed the first Dojo cabinet and run a 2.2-megawatt load test. Tesla says it is building one tile per day, each made up of 25 D1 chips. Tesla demos Dojo running a Stable Diffusion model to create an AI-generated image of a “Cybertruck on Mars.”
Importantly, the company sets a target of the first quarter of 2023 for a full Exapod cluster to be complete, and says it plans to build a total of seven Exapods in Palo Alto.
2023
A “long-shot bet”
April 19 – During Tesla’s first-quarter earnings call, Musk tells investors that Dojo “has the potential for an order of magnitude improvement in the cost of training,” and that it could also become a sellable service offered to other companies, “the same way Amazon Web Services offers web services.”
Musk also says he considers “Dojo sort of a long-shot bet,” but “a bet worth making.”
June 21 – The Tesla AI account on X posts that the company’s neural networks are already in customer vehicles. The thread includes a graph with a timeline of Tesla’s current and projected compute power, which places the start of Dojo production at July 2023, though it’s not clear whether this refers to the D1 chips or the supercomputer itself. That same day, Musk says Dojo is already online and running tasks at Tesla data centers.
The company also predicts that Tesla’s compute will be among the top five in the world by around February 2024 (there is no indication this happened) and that Tesla will reach 100 exaflops by October 2024.
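For scale, assuming roughly 312 teraFLOPS of BF16 throughput per Nvidia A100 (a common reference figure and an assumption here, not a number Tesla cited), 100 exaflops works out to on the order of:

\[ \frac{100 \times 10^{18}\ \text{FLOPS}}{312 \times 10^{12}\ \text{FLOPS per A100}} \approx 320{,}500\ \text{A100-equivalent GPUs}. \]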
July 19 – In its second-quarter earnings report, Tesla notes that it has started production of Dojo. Musk says Tesla plans to spend more than $1 billion on Dojo through 2024.
September 6 – Musk posts on X that Tesla is limited by AI training compute, but that Nvidia and Dojo will fix that. He says managing the data from the roughly 160 billion frames of video per day that Tesla gets from its cars is extremely difficult.
2024
Plans to scale
January 24 – During Tesla’s fourth-quarter and full-year earnings call, Musk once again acknowledges that Dojo is a high-risk, high-reward project. He also says that Tesla is pursuing “the dual path of Nvidia and Dojo,” notes that Tesla is scaling Dojo up, and says there are “plans for Dojo 1.5, Dojo 2, Dojo 3, etc.”
January 26 – Tesla announces plans to spend $500 million to build a Dojo supercomputer in Buffalo. Musk then somewhat downplays the investment, posting that while $500 million is a large sum, it’s “only equivalent to a 10k H100 system from Nvidia. Tesla will spend more than that on Nvidia hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point.”
April 30 – At TSMC’s North American Technology Symposium, the company discusses Dojo’s next-generation training tile, the D2, which puts an entire Dojo tile onto a single silicon wafer rather than connecting 25 chips to make one tile, according to IEEE Spectrum.
May 20 – Musk says the rear portion of the Giga Texas factory extension will include the construction of “a super dense, water-cooled supercomputer cluster.”
June 4 – A CNBC report reveals that Musk diverted thousands of Nvidia chips reserved for Tesla to X and xAI. After initially calling the report false, Musk posts on X that Tesla had no place to power on the Nvidia chips, so they would have sat in a warehouse, and says the Giga Texas extension “will house 50k H100s for FSD training.”
He also posts:
“Of the roughly $10B in AI-related expenditures I said Tesla would make this year … for building the AI training superclusters, Nvidia hardware is about two-thirds of the cost. My current best guess for Nvidia purchases by Tesla is $3 billion to $4 billion this year.”
July 1 – Musk reveals on X that Tesla’s current vehicles may not have the right hardware for the company’s next-generation AI model. He says the roughly fivefold increase in parameter count with the next-generation AI “is very difficult to achieve without upgrading the vehicle inference computer.”
Nvidia’s supply challenges
July 23 – During Tesla’s second-quarter earnings call, Musk says demand for Nvidia hardware is so high that it’s often difficult to get the GPUs.
“I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability that we need,” Musk says. “And we do see a path to being competitive with Nvidia with Dojo.”
A graph in Tesla’s investor deck predicts that Tesla’s AI training capacity will ramp to roughly 90,000 H100-equivalent GPUs by the end of 2024, up from around 40,000 in June. Later that day on X, Musk posts that Dojo 1 will have “roughly 8k H100-equivalent of training online by end of year.” He also posts photos of the supercomputer, which features the same fridge-like stainless steel exterior as Tesla’s Cybertruck.
From Dojo to Cortex
July 30 – AI5 is roughly 18 months away from high-volume production, Musk says in a reply to a post from someone claiming to start a club of “Tesla HW4/AI4 owners angry about being left behind when AI5 comes out.”
August 3 – Musk posts on X that he toured the Tesla supercompute cluster at Giga Texas (aka Cortex). He notes that it will be made up of roughly 100,000 Nvidia H100/H200 GPUs with “massive storage for video training of FSD & Optimus.”
August 26 – Musk posts a video of Cortex on X, calling it “the giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI.”
2025
No Dojo updates so far in 2025
January 29 – Tesla’s fourth-quarter and full-year 2024 earnings call includes no mention of Dojo. Cortex, Tesla’s new AI training supercluster at its Austin gigafactory, does make an appearance: in its shareholder deck, Tesla notes that it has completed the deployment of Cortex, which is made up of roughly 50,000 Nvidia H100 GPUs.
“Cortex helped enable V13 of FSD (Supervised), which boasts major improvements in safety and comfort thanks to a 4.2x increase in data and higher-resolution video inputs,” the deck notes.
During the call, CFO Vaibhav Taneja notes that Tesla accelerated the buildout of Cortex to speed up the rollout of FSD V13, and that this contributed to the company’s accumulated AI-related capital expenditures, including infrastructure. For 2025, Taneja says he expects AI-related capex to be roughly flat.
This story was originally published on August 10, 2024, and will be updated as new information develops.