Intel ready to chip at the AI market as it matures

Jayadevan PK August 10, 2018

Intel, one of the world’s largest chipmakers, was caught flat-footed when artificial intelligence came out of yet another winter in the early part of this decade. Few could have predicted that the rise of AI, aided by abundant data and cheap computing power, would have such a profound impact on the way the world runs.

Much of the gains from this AI awakening went to Nvidia, a much smaller and younger chipmaker, because its gaming chipsets, commonly referred to as graphics processing units or GPUs, were better suited to training machine intelligence models. The company, once seen as a specialist chipmaker for the gaming industry, now leads the race to make the chips powering the AI revolution. Nvidia had its best quarter ever in the first quarter of 2018, with revenues of over $3.2 billion on the back of strong sales in gaming and AI. Intel’s first-quarter revenue was $16.1 billion, of which about 49% came from its data-centric business.

Nvidia’s run in AI has gone pretty much unchallenged in recent years. But that could be changing, if Intel veteran Gadi Singer is to be believed. The executive, who has spent over three decades at the company, feels the chipmaker has a real shot at winning the race to power the AI market as the industry matures and more applications focused on drawing inferences from data are born.

“Inference is done primarily on Intel and the world is going towards inference,” Singer, vice president of Intel’s Artificial Intelligence Products Group, told FactorDaily in an interview on Wednesday. Singer is referring to the growing share of computing power dedicated to inferring what actions machines must take, given a set of data and a trained model.

Gadi Singer, Artificial Intelligence Products Group at Intel Corporation

Neural networks, which power a majority of AI systems today, have two key phases. The first is training, in which the network is presented with training data sets; this compute-intensive phase is primarily about teaching machines and developing the algorithms. Then comes inference: the trained neural network is deployed to carry out a real-world task, or infer what action a machine is supposed to take when it sees a query, based on the training it has received.
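To make the distinction concrete, here is a minimal sketch in Python using NumPy (a hypothetical toy model, not Intel’s or Nvidia’s software stack): training loops over a labelled data set many times to fit the model’s weights, while inference is a single, cheap forward pass through frozen weights for each new query.

    # Toy illustration of the two phases (uses only NumPy; all names
    # here are illustrative, not from any vendor's stack).
    import numpy as np

    rng = np.random.default_rng(0)

    # Training phase: compute-intensive, loops over a labelled data set.
    X = rng.normal(size=(200, 2))               # 200 samples, 2 features
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy labels

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(500):                        # many passes over the data
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on log loss
        b -= lr * float(np.mean(p - y))

    # Inference phase: weights are frozen; one forward pass per query.
    def infer(x_new):
        return 1.0 / (1.0 + np.exp(-(x_new @ w + b)))

    print(infer(np.array([1.5, -0.2])))         # high probability -> class 1

The asymmetry Singer points to comes from deployment: the training loop runs once, but the inference step is executed for every query a production system serves, which is why inference cycles come to dominate as applications mature.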

This has been part of Intel’s pitch for many months now. And it’s starting to pay dividends: Intel announced yesterday that in 2017, it made nearly $1 billion in revenues from customers running AI code on Intel Xeon processors in the data center. The Xeon processors, widely used in servers and workstations, were first introduced nearly 20 years ago, with Singer as the division’s first general manager.

Some analysts share Intel’s point of view. “While we expect Nvidia’s GPUs to remain dominant in training, we think other solutions are more suitable for the inferencing portion of deep learning, which we estimate to be a larger opportunity,” Abhinav Davuluri, an analyst at the investment research firm Morningstar, wrote last year.

Singer’s argument is quite simple: 2014 and 2015 were the breakthrough years for deep learning, but most solutions at the time were experimental. Machines caught up with humans in image recognition, for instance, and speech recognition improved dramatically. Yet the AI industry was still about training machines, experimentation and proofs of concept.

“I call this the illustrious childhood, where you suddenly see that things are possible,” said Singer, who was in Bengaluru to speak at an Intel developer conference. The development environment for machine learning was also in its infancy, with C++ and proprietary frameworks like CUDA dominating the landscape. CUDA, short for compute unified device architecture, is the framework used to program Nvidia’s GPUs.

“If you look at 2019-20, it’s the coming of age of deep learning,” predicts Singer. Instead of experimental projects, real-world applications of AI are set to grow. If image recognition was about identifying dog species back in the day, it’s now about finding malignant cells in CT scans. “So, both the data is wider and the thing you’re looking for is much more complex,” said Singer.

As AI systems mature, more and more computing power will be spent on inference as compared to training, he says. “The ratio is about five to one (inference cycles to training cycles)… somewhere in the coming five years, we will hit the ratio of 10 to one between inference and training,” said Singer. This plays out well for Intel, which has positioned its Xeon processors as suitable for inferencing.

“A combination of hardware and software will continue to improve the Xeon functionality that’s dedicated for AI,” said Singer, an electrical and computer science engineer by training from Israel’s Technion.

The company’s AI portfolio includes the Intel Xeon Scalable processors; the Intel Xeon Phi family of processors, used in situations that need higher levels of parallelism and computing power; Intel FPGAs, which offer low-latency, low-power inference; and the yet-to-be-launched Nervana chips, which are better suited for training machine models. The Nervana chipsets are expected to roll out in 2019, though Singer wasn’t very specific about launch timelines. Intel bought Nervana Systems, a deep learning startup, in 2016 for $408 million. Nervana co-founder Naveen Rao now heads Intel’s Artificial Intelligence Products Group.

“The pace of change is something I have not seen in the past. In all the cases before, the problem was pretty clear. But the race between the companies was to find the best solution to that problem. In AI, the problem changes. The companies who will win are the companies that understand how the problem space changes,” said Singer, who joined Intel in the early 1980s and was involved in the development of its early processors such as the Intel 80386.

Disclosure: FactorDaily is owned by SourceCode Media, which counts Accel Partners, Blume Ventures, Vijay Shekhar Sharma, Jay Vijayan and Girish Mathrubootham among its investors. Accel Partners and Blume Ventures are venture capital firms with investments in several companies. Vijay Shekhar Sharma is the founder of Paytm. Jay Vijayan and Girish Mathrubootham are entrepreneurs and angel investors. None of FactorDaily’s investors has any influence on its reporting about India’s technology and startup ecosystem.