DigitalOcean Holdings' strong second quarter results highlight how a new breed of cloud compute providers is gaining traction due to AI workloads and access to Nvidia GPUs.

On a conference call with analysts, CEO Paddy Srinivasan said annual recurring revenue for AI and machine learning products is up more than 200% year over year with help from the Paperspace acquisition a year ago. DigitalOcean also saw revenue contributions from managed hosting as well as new customers.

In its second quarter, DigitalOcean reported net income of $19 million, or 20 cents a share, on revenue of $192 million, up 13% from a year ago. DigitalOcean, along with providers like CoreWeave, is increasingly gaining cloud traction for AI workloads. In addition, bitcoin mining companies, notably Core Scientific, that have GPU-equipped data centers are also gunning for AI workloads as they look to expand.

Constellation Research analyst Holger Mueller said DigitalOcean is on the right path. He said:

"Boutique cloud provider Digital Ocean had another good quarter, growing 13%, fueled by innovation, access to Nvidia GPUs, Xeon compute and more. More impressive is that Paddy Srinivasan and team have turned the ship towards profit, a nearly $70 million net swing from a loss to a profit of $33 million in the first half of the 2024. Digital Ocean now manages to turn $1 out of $20 of revenue into net income. Not a bad turnaround – and definitely what investors want to see."

DigitalOcean has launched "GPU droplets" that allow customers to slice Nvidia H100 instances into 1, 8 or more GPUs based on use case and budget limitations. The company also launched global load balancers as well as managed OpenSearch. DigitalOcean said 2024 revenue will be about $770 million to $775 million. To scale, DigitalOcean has hired a Chief Product and Technology Officer, a Chief Ecosystem and Growth Officer and a Chief Revenue Officer in recent weeks.

Srinivasan said:

"We continue to see very strong demand for our AI platform. To support that growing demand and to take the first step of our long-term data center optimization strategy, I'm very excited to announce that we will be opening a new state-of-the-art data center in Atlanta in Q1 of 2025.

This not only expands our geographic footprint, providing us cost effective additional coverage across the U.S. for our core workloads, but also gives us near term incremental space and power to support our AI strategy and growth."

The plan going forward for DigitalOcean is to provide easy access to genAI and AI infrastructure, similar to the way cloud computing did. That vision will also require a focus on software, said Srinivasan. "Our longer-term AI vision is more software-centric, with the mission of making it easy for our approximately 638,000 current customers and other companies that look like them to leverage AI in their application stack without needing super deep AI and machine learning expertise," he said.

DigitalOcean's big bet is that AI instances focused on business value can be a long-term winner as the customer base shifts beyond large foundational model players. AI model builders and consumers will all have different requirements, needing GPU capacity at different levels.

"Our AI strategy, which includes the GPU infrastructure, is tailor made for customers that are looking to consume AI, not necessarily build foundational models. When I talked about the GPU droplets, that's an abstracted version of the core GPU as a service," said Srinivasan. "We feel our strategy is going more up stack and enabling applications that derive business value from AI rather than focusing on model builders that are building and training foundational models. So, there's going to be different needs for customers that are looking to derive business value and build applications and platforms on top of our infrastructure."

Srinivasan added that today's generative AI cloud infrastructure market is all about Nvidia-powered instances used by foundational model builders. The next layer is going to be important too.

"The true business value is going to be when this infrastructure is leveraged to build platforms like simple example would be operating systems based on x86 architecture. And then you have applications, which are the ones that truly deliver business value for everyone. This AI wave goes up stack from one layer to the other, we feel there's a tremendous amount of need to democratize the access to these GPUs and also provide other software frameworks," said Srinivasan.