
GPUs are a proven way to speed up the time-consuming task of machine learning, a crucial element of the recent rapid expansion of AI solutions across many industries. The result has been an explosively growing new market for GPU vendors Nvidia and AMD. IBM's newly announced Power Systems S822LC aims to push machine learning performance even further, with two IBM POWER8 CPUs and four Nvidia Tesla P100 GPUs.

However, no matter how fast a GPU is, the large data requirements of AI applications mean that memory access and inter-processor communication can quickly become a bottleneck. So IBM is also using Nvidia's proprietary NVLink interconnect technology to address that problem.

The S822LC is slated to deliver 21 teraflops of half-precision operations; machine learning typically doesn't need full or double precision for tasks such as training neural networks. Customers can also attach additional Tesla K80 GPUs over a more traditional PCIe bus.
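Half precision helps because each value occupies two bytes instead of four, halving both the memory footprint of a model and the bandwidth needed to move it. A minimal NumPy sketch (the matrix size is purely illustrative) shows the effect:

```python
import numpy as np

# Hypothetical weight matrix for one neural-network layer (size is illustrative).
weights_fp32 = np.random.rand(4096, 4096).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

# float16 halves the storage, so twice as many parameters fit in GPU
# memory and each transfer moves half as many bytes per value.
print(weights_fp32.nbytes // 2**20)  # 64 (MiB)
print(weights_fp16.nbytes // 2**20)  # 32 (MiB)
```

The trade-off is reduced numeric range and precision, which training workloads usually tolerate but many scientific workloads do not.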

NVLink dramatically improves memory access over PCIe

Nvidia announced NVLink at last year's GTC, and its Pascal-based GPUs are the first to support it. It is used both for communication between CPUs and GPUs, and between multiple GPUs. In raw data rates, Nvidia says it is 5 to 12 times faster than PCIe Gen 3 interconnects, yielding as much as a doubling in real-world performance for data-intensive GPU applications.

As part of the announcement, IBM cited raw interconnect performance improvements from 16 GB/s over PCIe to 40 GB/s using NVLink. IBM has made a huge investment in what it calls cognitive computing, so it makes perfect sense that it would implement a version of its POWER8 processor with the highest-performance interconnect possible. IBM says some of the early units will ship to high-profile customers, including Oak Ridge National Labs and Lawrence Livermore National Labs. The systems will be test beds in preparation for IBM's Summit and Sierra supercomputers due in 2017.
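A quick back-of-the-envelope calculation using the figures IBM cited shows what the wider pipe means in practice. The dataset size below is a hypothetical example, not from the announcement:

```python
# Transfer rates IBM cited for the S822LC.
PCIE_GB_PER_S = 16    # GB/s over PCIe
NVLINK_GB_PER_S = 40  # GB/s over NVLink

# Hypothetical shard of training data to move from host memory to a GPU.
dataset_gb = 100

pcie_seconds = dataset_gb / PCIE_GB_PER_S      # 6.25 s
nvlink_seconds = dataset_gb / NVLINK_GB_PER_S  # 2.5 s
print(pcie_seconds, nvlink_seconds)            # 6.25 2.5
```

For workloads that repeatedly shuttle batches between CPU and GPU, shaving each transfer by this 2.5x factor is where the claimed real-world doubling comes from.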

IBM and Nvidia want developers to jump on the bandwagon

To help drive deployments, IBM and Nvidia are establishing a lab for developers. The IBM-Nvidia Acceleration Lab will work with customer developers to get the best possible performance from the new systems. IBM has invited interested developers to contact them directly (email link) for more information.

Now read: Machine learning is about to change how corporations are run