Zuckerberg says Meta will need 10x more computing power to train Llama 4 than Llama 3

From TechCrunch: Meta, which develops Llama, one of the biggest open-source foundation large language models, believes it will need significantly more computing power to train models in the future.

Mark Zuckerberg said on Meta’s second-quarter earnings call on Tuesday that training Llama 4 will require 10x more compute than training Llama 3 did. Even so, he wants Meta to build that capacity rather than fall behind its competitors.

“The amount of computing needed to train Llama 4 will likely be almost 10 times more than what we used to train Llama 3, and future models will continue to grow beyond that,” Zuckerberg said.

“It’s hard to predict how this will trend multiple generations out into the future. But at this point, I’d rather risk building capacity before it is needed rather than too late, given the long lead times for spinning up new inference projects.”
