Microsoft announced a bevy of additions to Azure including AMD Instinct MI300X instances, Cobalt 100 instances in preview and the latest OpenAI model, GPT-4o, in Azure OpenAI Service.

The announcements at Microsoft's Build 2024 conference land as both Amazon Web Services and Google Cloud are busy launching custom silicon and access to multiple model choices.

All of the hyperscalers are looking to supply supercomputers that offer a diverse mix of custom silicon, processors from AMD and Nvidia, and networking and architecture choices for AI workloads. Constellation Research analyst Holger Mueller said:

"It is clear that the path to AI is custom algorithms on custom silicon and Microsoft is on the jouney, with both the Cobalt and the AMD Mi300 preview. A key aspect of faster CPUs is faster networking, but Microsoft is quiet on that. Amongst all the cloud vendors, Microsoft has its traditional partner connections - so the AMD chip uptake comes as no surprise. When viable we will likely see Intel in Azure as well."

Here's a look at the three headliners for Azure at Build along with other key additions.

  • Microsoft announced the general availability of the ND MI300X VM series, which combines eight AMD Instinct MI300X accelerators in each virtual machine. AMD is looking to give Nvidia's GPU franchise some competition. In a briefing with analysts, Scott Guthrie, Executive Vice President of Cloud and AI at Microsoft, said AMD's accelerators were the most cost-effective GPUs available based on what the company is seeing with its Azure AI Service.
  • Azure virtual machines built to run on Cobalt 100 processors are available in preview. Microsoft announced Cobalt in November and claimed it could deliver 40% better performance than Azure's previous Arm-based VMs. Guthrie also noted that Cobalt performs better than the current version of AWS' Trainium processor. "It's going to enable even better infrastructure and better cost advantage and performance on Azure," said Guthrie, referring to Cobalt on Azure. Guthrie added that Snowflake was developing on Cobalt instances.
  • OpenAI's GPT-4o will be available exclusively on Azure. Microsoft said OpenAI's latest flagship model will be available in preview in Azure AI Studio and as an API (a minimal call sketch follows this list). OpenAI just launched GPT-4o last week, a day ahead of Google's latest Gemini models.
  • To round out the Azure OpenAI Service upgrades, Microsoft enhanced fine-tuning of GPT-4, added the Assistants API to ease creation of agents, and launched GPT-4 Turbo with vision capabilities. Microsoft also said it is adding its multimodal Phi-3 family of small language models to Microsoft Azure AI as a models-as-a-service offering.
  • Azure will also get new services for managing instances. Microsoft launched Azure Compute Fleet, a service that provisions Azure compute capacity across virtual machine types, availability zones and pricing models to mix and match performance and cost. The company also launched Microsoft Copilot in Azure, an assistant for managing cloud and edge operations.
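For developers, the new Azure OpenAI Service additions are reached through the same chat completions interface used for earlier GPT-4 models. The sketch below is a minimal, illustrative call to a GPT-4o deployment using the openai Python SDK's AzureOpenAI client; the endpoint, API key, API version and deployment name are placeholders rather than values from Microsoft's announcement.

    # Minimal sketch: calling a GPT-4o deployment on Azure OpenAI Service.
    # Endpoint, key, API version and deployment name below are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # your Azure OpenAI resource endpoint
        api_key="YOUR-API-KEY",
        api_version="2024-02-15-preview",  # assumed preview API version; check your resource
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # the deployment name you configured in Azure AI Studio
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize Microsoft's Azure announcements at Build 2024 in one sentence."},
        ],
    )
    print(response.choices[0].message.content)

Swapping the deployment name lets the same pattern target other Azure OpenAI deployments, such as a fine-tuned GPT-4 model.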

Nadella: We're a GenAI platform company

Speaking at the Build 2024 keynote, Microsoft CEO Satya Nadella said the age of generative AI is just starting.

"We now have these things that are scaling every six months or doubling every six months. You know, what we have though, with the effect of the scaling laws is a new natural user interface that's multimodal that means support stack speech, images, video as input and output. We have memory that retains important context recalls both our personal knowledge and data across our apps and devices. We have new reasoning and planning capabilities that helps us understand very complex context and complete complex tasks while reducing the cognitive load on us."
 
Nadella said Microsoft has always been a platform company and Azure is "the most complete scalable AI infrastructure that meets your needs in this AI era."

"With building Azure as the world's computer, we have the most comprehensive global infrastructure with more than 60 plus data center regions," said Nadella, who noted that the company is optimizing power and efficiency across the stack. 

Nadella said Azure will be built out with Nvidia, AMD and its own silicon in clusters. He said Maia and Cobalt, Microsoft's custom processors, are already delivering customer workloads and responding to prompts.

Mueller said:

"Microsoft needs to wean itself and OpenAI of Nvidia machines that are expansive and the hardest commodity to purchase in IT. Continuing the Cobalt strategy makes sense, adding AMD as well, but it will not help with the existing workloads. The question is – will Microsoft rebuild OpenAI models? – or support two different AI hardware platfoms and choose what to run where. Time will tell."
