GPU (graphics processing unit) maker NVIDIA launched the Deep Learning Institute one year ago, offering low-priced training to developers on a variety of AI and machine learning technologies. Now it has significantly ramped up its ambitions, saying it intends to train 100,000 developers this year, compared to 10,000 in 2016. Here are the key details from its announcement at the GPU Technology Conference:
The institute has trained developers around the world at sold-out public events and onsite training at companies such as Adobe, Alibaba and SAP; at government research institutions like the U.S. National Institutes of Health, the National Institute of Standards and Technology, and the Barcelona Supercomputing Center; and at institutes of higher learning such as Temasek Polytechnic Singapore and the Indian Institute of Technology Bombay.
In addition to instructor-led workshops, developers have on-demand access to training on the latest deep learning technology, using NVIDIA software and high-performance Amazon Web Services (AWS) EC2 P2 GPU instances in the cloud.
Beyond reaching more developers, NVIDIA is adding to the Institute's curriculum. New areas of study include the application of deep learning to self-driving cars, healthcare, robotics and financial services.
NVIDIA is not attempting to reach 100,000 developers on its own. It will partner with AWS, Facebook, Google, the Mayo Clinic and Stanford to co-create training labs. The labs will focus on the Caffe2, MXNet and TensorFlow deep learning frameworks.
In addition, NVIDIA has teamed up with Facebook AI research head Yann LeCun to create a teaching kit for educators. It says hundreds of professors at Oxford, UC Berkeley and elsewhere are already using it.
Lab content is being ported to Microsoft Azure and IBM's cloud. And finally, NVIDIA plans to introduce formal certifications for DLI students. To date, DLI has issued certificates noting completion of a course, but it has not offered certification tests.
Analysis: The AI Opportunity Is Far From A Game to NVIDIA
GPUs are better suited than CPUs for deep learning due to their architecture. While CPUs typically have only a handful of cores, GPUs have thousands of smaller ones geared for massively parallel processing of simple tasks. This maps well to compute-intensive deep learning workloads.
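To see why those workloads parallelize so well, consider that a neural-network layer is essentially a matrix multiply, where every output element is an independent dot product. The pure-Python sketch below (an illustration, not NVIDIA's code) makes that independence explicit; on a GPU, each output cell could be assigned to its own thread.

```python
# Sketch: why deep learning maps to GPU-style parallelism.
# Each output cell of a matrix multiply depends only on one row
# of `a` and one column of `b`, so all cells can be computed at
# the same time -- exactly what thousands of simple cores are for.

def matmul(a, b):
    """Naive matrix multiply: a is (n x k), b is (k x m)."""
    n, k, m = len(a), len(b), len(b[0])
    # Every (i, j) cell below is independent of every other cell;
    # a GPU would give each one its own thread.
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

a = [[1, 2, 3],
     [4, 5, 6]]
b = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(a, b))  # -> [[58, 64], [139, 154]]
```

In a real framework the same operation is dispatched as one GPU kernel, which is why a chip with thousands of cores outruns a CPU with a handful of them on this kind of work.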
NVIDIA's GPUs have long been dominant fixtures in graphics and video cards for gaming and other purposes, but the company's investment in deep learning extends back nearly 10 years, long before the current awareness and hype level around AI.
An aggressive expansion of DLI now makes sense, since the market for GPUs in deep learning remains nascent and NVIDIA should make every effort to expand on its early lead. Its chief competitors, AMD and Intel, are only bringing specialized deep learning GPUs to market this year.
While Intel in particular will have plenty of money to throw behind its products, NVIDIA's other edge lies in the extensive libraries and mature software frameworks it has already developed for deep learning workloads. The more developers it can train up, the more GPUs it can sell, whether in specialized hardware or to cloud service providers. In turn, AI developers who align with NVIDIA for GPU acceleration benefit from its early-mover maturity and expertise.