Snowflake and Nvidia said they are integrating Snowflake Data Cloud with Nvidia NeMo, a platform for large language models (LLMs), so enterprises can build custom generative AI applications.
The news, outlined during the kickoff of Snowflake Summit 2023, means Snowflake customers can combine their proprietary data with foundational LLMs within Snowflake Data Cloud. Snowflake said it will host and run NeMo in its Data Cloud and include NeMo Guardrails, which helps ensure applications stay on business-specific topics and meet safety and security requirements.
Also see: Snowflake launches Snowpark Container Services, linchpin to generative AI strategy
Vendors have been racing to enable enterprises to combine their data with LLMs securely. Salesforce has a trust layer to keep customer data cordoned off from LLMs, and Oracle is planning a similar service. Enterprise technology buyers have been wary of the compliance and privacy issues involved in building generative AI applications. Meanwhile, Snowflake rivals MongoDB and Databricks are also targeting LLM data workloads; Databricks doubled down on LLMs with its $1.3 billion acquisition of MosaicML.
With Nvidia NeMo, Snowflake customers can use their existing accounts to create custom LLMs for chatbots, search and summarization, while keeping proprietary data within the Data Cloud rather than exposing it to external models.
Snowflake CEO Frank Slootman said the partnership with Nvidia will bring high-performance machine learning and AI to Snowflake's platform. Nvidia CEO Jensen Huang said the partnership will "create an AI factory" for generative AI enterprise applications.
For enterprises, the Snowflake and Nvidia alliance may make it easier to tune custom LLMs for specialized use cases. This approach was outlined recently by Goldman Sachs CIO Marco Argenti.
Snowflake Data Cloud offers industry-specific versions for financial services, manufacturing, healthcare, retail and other verticals. With Nvidia, Snowflake's bet is that generative AI applications will proliferate across those industries.