Nvidia Releases TensorRT 6 With Impressive Performance Gains

By MIKE WHEATLEY

Nvidia Corp. is upping its artificial intelligence game with the release of a new version of its TensorRT software platform for high-performance deep learning inference.

TensorRT is a platform that combines a high-performance deep learning inference optimizer with a runtime that delivers low latency, high-throughput inference for AI applications.
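As an illustration of that split between optimizer and runtime, the sketch below shows a typical build-and-deploy flow using the TensorRT Python API of that era, assuming a trained model has already been exported to ONNX; the file names and workspace size are placeholders rather than values Nvidia specifies.

import tensorrt as trt

ONNX_PATH = "model.onnx"  # placeholder: any ONNX export of a trained network

logger = trt.Logger(trt.Logger.WARNING)

# Build phase: parse the trained model and let TensorRT's optimizer
# produce a hardware-tuned inference engine.
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open(ONNX_PATH, "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GB of scratch space for optimization tactics
engine = builder.build_engine(network, config)

# Runtime phase: the serialized engine is what gets deployed and executed
# for low-latency inference in production.
with open("model.engine", "wb") as f:
    f.write(engine.serialize())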

Nvidia said TensorRT 6 includes new optimizations that cut inference time for the BERT language model on T4 graphics processing units to just 5.8 milliseconds, down from the previous threshold of 10 milliseconds.

The platform has also been optimized to accelerate inference on tasks relating to speech recognition, 3D image segmentation for medical applications, and image-based applications in industrial automation, Nvidia said.

TensorRT 6 also adds support for dynamic input batch sizes, which should help to speed up AI applications such as online services that have fluctuating compute needs, Nvidia said.
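In practice, that dynamic-batch support is exposed through optimization profiles, which tell the builder the range of batch sizes a single engine should handle. The snippet below is a rough sketch using the TensorRT 6-era Python API; the tensor name "input" and the shape ranges are placeholder values, not part of Nvidia's announcement.

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
# ... the network would normally be populated from a parsed model here ...

config = builder.create_builder_config()

# An optimization profile declares the smallest, most common and largest
# batch sizes one engine should serve, so load can fluctuate at runtime.
profile = builder.create_optimization_profile()
profile.set_shape("input",                # placeholder input tensor name
                  (1, 3, 224, 224),       # minimum shape
                  (8, 3, 224, 224),       # shape to optimize for
                  (32, 3, 224, 224))      # maximum shape
config.add_optimization_profile(profile)

At inference time, the execution context would then set a concrete batch size for each request before running, which is what lets one engine serve an online service whose traffic rises and falls.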

Nvidia said the TensorRT 6 platform is available to download starting today from its product page.