AWS Leverages NVIDIA A100s
November 5, 2020
Amazon
Web Services’ first GPU instance debuted 10 years ago, with the NVIDIA
M2050. At that time, CUDA-based applications were focused primarily on
accelerating scientific simulations, with the rise of AI and deep
learning still a ways off.
Since then, AWS has added to its stable of cloud GPU instances, which has included the K80 (p2), K520 (g2), M60 (g3), V100 (p3/p3dn) and T4 (g4).
With its new P4d instance generally available today, AWS is paving the
way for another bold decade of accelerated computing powered with the
latest NVIDIA A100 Tensor Core GPU.
The P4d instance delivers AWS’s highest-performance, most cost-effective
GPU-based platform for machine learning training and high performance
computing applications. The instances reduce the time to train machine
learning models by up to 3x with FP16 and up to 6x with TF32 compared to
the default FP32 precision.
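To illustrate how those precision modes are typically enabled in practice, here is a minimal PyTorch sketch that opts into TF32 on A100 and uses FP16 automatic mixed precision; the model, data and hyperparameters are placeholders rather than details from this announcement.

```python
import torch
from torch import nn
from torch.cuda.amp import autocast, GradScaler

# On A100 (Ampere), PyTorch can route FP32 matmuls and convolutions through
# TF32 Tensor Cores; these flags are the standard way to opt in.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# Placeholder model and data; a real training job would substitute its own.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = GradScaler()  # loss scaling for FP16 mixed precision

inputs = torch.randn(256, 1024, device="cuda")
targets = torch.randint(0, 10, (256,), device="cuda")

for step in range(10):
    optimizer.zero_grad()
    # autocast runs eligible ops in FP16 on Tensor Cores for a further speedup.
    with autocast():
        loss = nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```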
They also provide exceptional inference performance. NVIDIA A100 GPUs
just last month swept the MLPerf Inference benchmarks — providing up to
237x faster performance than CPUs.
Each P4d instance features eight NVIDIA A100 GPUs and, with AWS
UltraClusters, customers can get on-demand and scalable access to over
4,000 GPUs at a time using AWS’s Elastic Fabric Adapter (EFA) and
scalable, high-performance storage with Amazon FSx. P4d offers 400Gbps
networking and uses NVIDIA technologies such as NVLink, NVSwitch, NCCL
and GPUDirect RDMA to further accelerate deep learning training
workloads. NVIDIA GPUDirect RDMA on EFA ensures low-latency networking
by passing data from GPU to GPU between servers without having to pass
through the CPU and system memory.
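For readers curious what a multi-GPU job on P4d looks like from the framework side, the sketch below shows a minimal PyTorch DistributedDataParallel setup over the NCCL backend. The launch mechanism (for example, torchrun setting RANK, WORLD_SIZE and LOCAL_RANK) and the presence of the AWS OFI NCCL plugin so that NCCL can use EFA are assumptions, not details from this announcement.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes launch via a tool such as torchrun, which sets RANK, WORLD_SIZE and
# LOCAL_RANK. On P4d, NCCL can use EFA and GPUDirect RDMA for inter-node
# traffic when the AWS OFI NCCL plugin is installed (an assumption here).
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda(local_rank)
# DDP uses NCCL all-reduce: NVLink/NVSwitch within a node, EFA across nodes.
model = DDP(model, device_ids=[local_rank])

x = torch.randn(64, 1024, device=f"cuda:{local_rank}")
loss = model(x).sum()
loss.backward()  # gradients are all-reduced across all participating GPUs
dist.destroy_process_group()
```

Launched with, for example, torchrun --nproc_per_node=8 on each instance, the same script scales from the eight A100 GPUs in one P4d to many instances in an UltraCluster.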
In
addition, the P4d instance is supported in many AWS services, including
Amazon Elastic Container Service, Amazon Elastic Kubernetes Service,
AWS ParallelCluster and Amazon SageMaker. P4d can also leverage all the
optimized, containerized software available from NGC, including HPC
applications, AI frameworks, pre-trained models, Helm charts and
inference software like TensorRT and Triton Inference Server.
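As one example of that integration, a training job can be submitted to P4d capacity through the SageMaker Python SDK. This is a minimal sketch, assuming a suitable training container image; the image URI, IAM role and S3 path are placeholders.

```python
import sagemaker
from sagemaker.estimator import Estimator

# Minimal sketch of submitting a training job to P4d via Amazon SageMaker.
# The image URI, IAM role and S3 path are placeholders; a real job would point
# at an NGC or AWS Deep Learning Containers image and real data.
session = sagemaker.Session()

estimator = Estimator(
    image_uri="<account>.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::<account>:role/MySageMakerRole",
    instance_count=2,                 # scale out across multiple P4d instances
    instance_type="ml.p4d.24xlarge",  # 8x NVIDIA A100 GPUs per instance
    sagemaker_session=session,
)

estimator.fit({"training": "s3://my-bucket/training-data/"})
```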
P4d instances are now available in the US East and US West regions, with
additional regions coming soon. The instances can be purchased as On-Demand,
with Savings Plans, with Reserved Instances, or as Spot Instances.
The first decade of GPU cloud computing has brought over 100 exaflops of
AI compute to the market. With the arrival of the Amazon EC2 P4d
instance powered by NVIDIA A100 GPUs, the next decade of GPU cloud
computing is off to a great start.
NVIDIA and AWS are making it possible to keep pushing the boundaries of AI
across a wide array of applications. We can’t wait to see what customers
will do with these new instances.