Meta reported strong fourth quarter results, but the earnings call was much more interesting as CEO Mark Zuckerberg and CFO Susan Li riffed on custom silicon, developing Llama 4, and why building AI infrastructure matters.

The company reported fourth quarter revenue of $48.4 billion, up 21% from a year ago, with net income of $20.84 billion. For 2024, Meta raked in net income of $62.36 billion on revenue of $164.5 billion.

Holger Mueller, analyst at Constellation Research, said Meta is set up financially to invest heavily in AI. 

"Things are going well for Meta, as its business is fundamentally healthy. Despite all the investments, Zuckerberg’s enterprise was able to grow revenue year over year by over $30 billion while growing profit by $23 billion. Meta earns three quarters on an additional dollar of revenue today, and that KPI did not look that favorable in the past. Zuckerberg can keep investing in AI, the metaverse, which could be accelerated by AI, and content creation."

And Meta will invest.

Here's a look at the key takeaways on Meta's investment strategy for AI:

Meta looks at AI as a personalization tool that will have different use cases for each individual. "We believe that people don't all want to use the same AI. People want their AI to be personalized to their context, their interests, their personality, their culture, and how they think about the world," said Zuckerberg. 

Open-source models will win starting with Llama 4. Zuckerberg said: "I think this will very well be the year when Llama and open source become the most advanced and widely used AI models. Llama 4 is making great progress in training. Llama 4 Mini is done and looking good too. It's going to be novel, and it's going to unlock a lot of new use cases."

DeepSeek helps the open-source cause and will bring costs down, but Zuckerberg expects Llama to win. "As Llama becomes more used, it's more likely that silicon providers and other APIs and developer platforms will optimize their work more for that and basically drive down the costs of using it," said Zuckerberg. "The new competitor, DeepSeek from China, makes it clear there's going to be an open source standard globally. I think for our kind of own national advantage, it's important that it's an American standard. We want to build the AI system that people around the world are using. If anything, some of the recent news has only strengthened our conviction that this is the right thing for us to be focused on."

It's too early to know the DeepSeek impact on demand for AI infrastructure. "It's probably too early to really have a strong opinion on what this means for the trajectory around infrastructure and capex and things like that. There are a bunch of trends that are happening here all at once," said Zuckerberg. "I continue to think that investing very heavily in capex and infra is going to be a strategic advantage over time. It's possible that we'll learn otherwise at some point, but I just think it's way too early to call that."

Meta wants AI that will replicate a mid-level engineer. "This is going to be a profound milestone," said Zuckerberg. "Our goal is to advance AI research and advance our own development internally. And I think it's just going to be a very profound thing."

Llama will provide engineering throughput. Li said:

"We expect that the continuous advancements in Llama's coding capabilities will provide even greater leverage to our engineers, and we are focused on expanding its capabilities to not only assist our engineers in writing and reviewing our code, but to also begin generating code changes to automate tool updates and improve the quality of our code base."

The monetization plan for models has nothing to do with licensing or consumption. Zuckerberg noted Meta's plan for AI glasses and investments in AI infrastructure that will improve ads and apps. He said this year will see more growth in Reels on Facebook and Instagram regardless of what happens to TikTok.

Meta AI has more than 700 million monthly active users, and updates are planned to deliver more personalized content and monetization efficiency. Li said:

"In the second half of 2024 we introduced an innovative new machine learning system in partnership with Nvidia called Andromeda. This more efficient system enabled a 10,000x increase in the complexity of models we use for ads retrieval, which is the part of the ranking process where we narrow down a pool of tens of millions of ads to the few thousand we consider showing someone. The increase in model complexity is enabling us to run far more sophisticated prediction models to better personalize which ads we show someone. This has driven an 8% increase in the quality of ads that people see."
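Li's description is a classic two-stage recommendation funnel: a cheap retrieval model prunes an enormous candidate pool down to a few thousand ads, and only those survivors are scored by the heavyweight prediction model. A minimal sketch of that pattern (all names, scoring functions, and pool sizes here are illustrative stand-ins, not Meta's actual Andromeda system):

```python
import heapq
import random

def cheap_retrieval_score(ad_id: int, user_seed: int) -> float:
    # Stand-in for a fast, approximate relevance signal
    # (a real system might use an embedding dot product here).
    return ((ad_id * 2654435761) ^ user_seed) % 1000 / 1000.0

def expensive_ranking_score(ad_id: int, user_seed: int) -> float:
    # Stand-in for a heavyweight prediction model that is only
    # affordable on the small candidate set surviving retrieval.
    return random.Random(ad_id ^ user_seed).random()

def ads_funnel(ad_pool, user_seed, retrieval_k=2000, final_k=5):
    # Stage 1: cheap retrieval narrows the huge pool to a few thousand.
    candidates = heapq.nlargest(
        retrieval_k, ad_pool, key=lambda ad: cheap_retrieval_score(ad, user_seed)
    )
    # Stage 2: the expensive model ranks only those candidates.
    return heapq.nlargest(
        final_k, candidates, key=lambda ad: expensive_ranking_score(ad, user_seed)
    )

pool = range(100_000)  # scaled-down stand-in for tens of millions of ads
top_ads = ads_funnel(pool, user_seed=42)
print(top_ads)  # five ad IDs chosen by the two-stage funnel
```

The economics are the point: the expensive model's cost scales with the few thousand retrieved candidates rather than the full pool, which is what lets the ranking stage grow far more complex without the serving cost exploding.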

Meta's capital spending is focused on scaling the footprint and increasing efficiency of workloads. "We're pursuing efficiencies by extending the useful lives of our servers and associated networking equipment. Our expectation going forward is that we'll be able to use both our non-AI and AI servers for a longer period of time before replacing them, which we estimate will be approximately five and a half years. This will deliver savings in annual capex and resulting depreciation expense, which is already included in our guidance," said Li. "We're pursuing cost efficiencies by deploying our custom silicon MTIA in areas where we can achieve a lower cost of compute by optimizing the chip to our unique workloads."
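The depreciation mechanics behind Li's point are simple: spreading the same server capex over more years lowers the expense recognized each year. A quick worked example (the dollar amount and the pre-extension useful life are illustrative assumptions; only the roughly 5.5-year figure comes from the call):

```python
def annual_straight_line_depreciation(cost: float, useful_life_years: float) -> float:
    # Straight-line depreciation: the same capex spread evenly
    # over the asset's estimated useful life.
    return cost / useful_life_years

server_cost = 1_000.0          # illustrative server fleet cost, in $M
old_life, new_life = 4.5, 5.5  # illustrative old estimate vs. the ~5.5 years cited

old_expense = annual_straight_line_depreciation(server_cost, old_life)
new_expense = annual_straight_line_depreciation(server_cost, new_life)
print(round(old_expense - new_expense, 1))  # → 40.4 ($M of annual expense deferred)
```

Note the savings shift expense into later years rather than eliminating it; the total depreciated over the asset's life is unchanged.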

Custom silicon is being used for ranking and recommendation inference workloads for ads and organic content. "We expect to further ramp adoption of MTIA for these use cases throughout 2025 before extending our custom silicon efforts to training workloads for ranking and recommendations next year," said Li. "We're also very invested in developing our own custom silicon for unique workloads where off-the-shelf silicon isn't necessarily optimal, and specifically because we're able to optimize the full stack to achieve greater compute efficiency, and performance per cost and power."

Over time, MTIA is going to take on GPU workloads and training. "Next year, we're hoping to expand MTIA to support some of our core AI training workloads, and over time, some of our Gen AI use cases," said Li.