NVIDIA unveiled the NVIDIA® A100 80GB GPU, powering the NVIDIA HGX™ AI supercomputing platform, which provides researchers and engineers with unprecedented speed and performance for AI supercomputing.
The new A100 with HBM2e technology doubles the A100 40GB GPU’s high-bandwidth memory to 80GB and delivers over 2 terabytes per second of memory bandwidth.
Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, said, “Achieving state-of-the-art results in HPC and AI research requires building the biggest models, but these demand more memory capacity and bandwidth than ever before. The A100 80GB GPU provides double the memory of its predecessor, which was introduced just six months ago, and breaks the 2TB per second barrier, enabling researchers to tackle the world’s most important scientific and big data challenges.”
Atos, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta and Supermicro are expected to begin offering systems built using HGX A100 integrated baseboards in four- or eight-GPU configurations featuring A100 80GB in the first half of 2021.
The A100 80GB enables training of the largest models, with more parameters fitting within a single HGX-powered server. This reduces the need for data- or model-parallel architectures, which can be time-consuming to implement and slow to run across multiple nodes.
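To make the capacity claim concrete, here is a minimal back-of-the-envelope sketch of how many parameters fit in GPU memory during mixed-precision training. The 16-bytes-per-parameter figure (fp16 weights and gradients plus fp32 master weights and Adam moments) is a common rule of thumb, not an NVIDIA figure, and the sketch deliberately ignores activation memory, which depends on batch size and architecture.

```python
# Rough rule of thumb for mixed-precision training with Adam:
# 2 B (fp16 weights) + 2 B (fp16 grads) + 12 B (fp32 master copy + m + v) = 16 B/param.
# Activation memory is ignored here; it varies with batch size and model depth.
BYTES_PER_PARAM = 16

def training_memory_gb(n_params, bytes_per_param=BYTES_PER_PARAM):
    """Return the approximate parameter/optimizer memory in GB (decimal)."""
    return n_params * bytes_per_param / 1e9

def fits(n_params, mem_gb=80):
    """True if the parameter/optimizer state alone fits in mem_gb of GPU memory."""
    return training_memory_gb(n_params) <= mem_gb

# A 4-billion-parameter model needs about 64 GB of state, so it fits in 80 GB;
# a 10-billion-parameter model needs about 160 GB, so it does not.
print(training_memory_gb(4e9), fits(4e9))    # 64.0 True
print(training_memory_gb(10e9), fits(10e9))  # 160.0 False
```

Under this estimate, doubling memory from 40GB to 80GB roughly doubles the parameter count a single GPU can hold before model parallelism becomes necessary.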
With its multi-instance GPU (MIG) technology, the A100 can be partitioned into up to seven GPU instances, each with 10GB of memory. This provides secure hardware isolation and maximizes GPU utilization for a variety of smaller workloads. For scientific applications, such as weather forecasting and quantum chemistry, the A100 80GB can deliver massive acceleration.
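As an illustration of how the seven-way partitioning is typically set up, the commands below sketch an administrator workflow using `nvidia-smi`. The `1g.10gb` profile name corresponds to the A100 80GB; profile names and availability vary by GPU and driver version, and enabling MIG mode requires root and may require a GPU reset, so treat this as a hedged admin-time sketch rather than a definitive recipe.

```shell
# Enable MIG mode on GPU 0 (root required; a GPU reset may be needed)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU and driver support
nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances, each with a default compute instance (-C)
sudo nvidia-smi mig -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# Verify that the MIG devices are visible
nvidia-smi -L
```

Each resulting MIG device appears as its own isolated GPU to CUDA applications, which is what allows several smaller workloads to share one A100 securely.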
Satoshi Matsuoka, director at RIKEN Center for Computational Science, said, “The NVIDIA A100 with 80GB of HBM2e GPU memory, providing the world’s fastest 2TB per second of bandwidth, will help deliver a big boost in application performance.”
Key Features of A100 80GB
- Third-Generation Tensor Cores: Provide up to 20x the AI throughput of the previous Volta generation with the new TF32 format, as well as 2.5x FP64 for HPC, 20x INT8 for AI inference and support for the BF16 data format.
- Larger, Faster HBM2e GPU Memory: Doubles the memory capacity and is the first in the industry to offer more than 2TB per second of memory bandwidth.
- MIG technology: Doubles the memory per isolated instance, providing up to seven MIGs with 10GB each.
- Structural Sparsity: Delivers up to a 2x speedup when inferencing sparse models.
- Third-Generation NVLink® and NVSwitch™: Provide twice the GPU-to-GPU bandwidth of the previous generation interconnect technology, accelerating data transfers to the GPU for data-intensive workloads to 600 gigabytes per second.
The A100 80GB GPU is a key element in the NVIDIA HGX AI supercomputing platform. It enables researchers and scientists to combine HPC, data analytics and deep learning computing methods to advance scientific progress.