New Qualcomm Tech To Make Cloud Computing 10x Smarter

Qualcomm on Tuesday announced the Cloud AI 100, a server-based version of its existing artificial intelligence technology designed for powering a wide variety of applications and systems.

The San Diego, California-based semiconductor giant claims the platform offers ten times the performance per watt of the best AI inference systems in use today. That’s largely thanks to the chip at the heart of the system, which has been designed specifically for handling inference workloads.

The silicon in question is based on a 7nm process node, the same technology used by Qualcomm’s latest and greatest mobile system-on-chip – the Snapdragon 855. It also ships with support for all of today’s most commonly used software stacks: Keras, PyTorch, Glow, ONNX, Google’s own TensorFlow, and the like.
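Support for ONNX in particular matters because it lets developers train a model in one framework and deploy it on accelerator hardware through another. As a rough illustration only, here is a minimal sketch of exporting a PyTorch model to the ONNX interchange format, the kind of step a team might take before targeting an inference accelerator; the model and file names are illustrative placeholders, not Qualcomm tooling:

```python
import torch
import torch.nn as nn

# A small stand-in model; any trained network would do here.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()

# Export to ONNX, a framework-neutral format that inference
# runtimes and accelerators commonly accept as input.
dummy_input = torch.randn(1, 128)
torch.onnx.export(model, dummy_input, "tiny_classifier.onnx")
```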

Qualcomm will begin sampling the Cloud AI 100 in the second half of the year. It has yet to name any companies confirmed to be participating in this experimental phase of the technology’s launch, though the usual suspects apply; Microsoft, most notably, seeing how the Redmond-based software juggernaut even had its Azure partner group program manager, Venky Veeraraghavan, endorse Qualcomm’s Tuesday announcement.

In a prepared statement, Mr. Veeraraghavan said Microsoft’s cloud unit is looking to continue collaborating with Qualcomm to keep advancing cloud-based AI solutions, noting that this is just one of many areas of technology the two firms are currently pursuing together.

The actual cool stuff behind the technical mumbo jumbo

Now that the basics are out of the way, here’s what this means, or at the very least might mean, for the industry and (eventually) consumers moving forward:

Contemporary AI is largely reliant on machine learning, a branch of the field that lets companies ship software that is designed to fulfill a wide variety of tasks and (watch out, here comes the main selling point) gets better at doing what it’s meant to over time.

In this context, the term “inference” can denote one of two things: either the process of having a model learn patterns and behavior from big data, or the process of taking a “finished” model, one that has already learned to fulfill a particular purpose, and putting it to work in practice. In either case, a more robust engine allows for faster learning sessions, and hosting such a solution in the cloud also accelerates deployment, in addition to making it cheaper.
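To make the two senses concrete, here’s a minimal PyTorch sketch; the model and data are toy placeholders, not anything tied to Qualcomm’s platform. The training loop is “inference” in the learning sense, while the final no-grad forward pass is inference in the deployment sense, the workload a chip like the Cloud AI 100 is built to accelerate:

```python
import torch
import torch.nn as nn

# Toy setup: learn to map 16 input features to 2 classes.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Sense 1: "learning" -- fit the model to (synthetic) data.
inputs = torch.randn(64, 16)
labels = torch.randint(0, 2, (64,))
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()   # gradients flow only during training
    optimizer.step()

# Sense 2: inference -- run the "finished" model on new data.
# No gradients are needed, which is part of why dedicated
# inference chips can be leaner and more power-efficient.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)
print(prediction.item())
```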

In other words, virtually every AI-powered piece of software (which, in turn, may be powering hardware) in existence today stands to benefit from more efficient inference workload management. That’s the big promise Qualcomm is making with the Cloud AI 100, and it’s bolstered by how versatile the newly announced technology is. Data centers running any combination of GPUs, CPUs, and FPGAs (field-programmable gate arrays) can offload inference to the Cloud AI 100 to some degree, ultimately doing what they do faster, with a smaller energy footprint, or both.

This technology could serve as the centerpiece of Qualcomm’s push to become the go-to provider of systems fueling the cloud-to-edge AI platforms of the future, which will start emerging alongside wider 5G deployment.

Anything from self-driving cars to personal assistants, automated content regulators, and self-aware thermostats should be better off in the long run as inference workload management techniques improve, whether thanks to Qualcomm-made tech or someone else’s solutions.

Naturally, Qualcomm would much prefer that its own technologies end up driving that AI processing evolution instead of someone else’s.