

NVIDIA A30 24GB HBM2 Ampere Data Center GPU for Scientific Computing
NVIDIA Ampere A30 Data Center GPU
AI Inference and Mainstream Compute for Every Enterprise
Bring accelerated performance to every enterprise workload with the NVIDIA A30 Tensor Core GPU. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), it delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low power consumption in a PCIe form factor optimized for mainstream servers, the A30 enables an elastic data center and delivers maximum value for enterprises.
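As a rough illustration of the MIG partitioning mentioned above, the sketch below uses Python to drive the standard nvidia-smi CLI: it enables MIG mode on the board and carves it into four instances. The 1g.6gb profile name is an assumption for the A30 and should be confirmed against the profiles that `nvidia-smi mig -lgip` reports on the actual system; the commands also assume administrative privileges.

```python
import subprocess

def run(cmd):
    """Run a command and print its output (requires the NVIDIA driver's nvidia-smi tool)."""
    print("$", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

# Enable MIG mode on GPU 0 (may require a GPU reset and root privileges).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the MIG profiles this GPU supports, e.g. the 6GB instance profile on A30.
run(["nvidia-smi", "mig", "-lgip"])

# Create four GPU instances plus matching compute instances using the assumed
# 1g.6gb profile name; confirm the name against the -lgip output above.
run(["nvidia-smi", "mig", "-cgi", "1g.6gb,1g.6gb,1g.6gb,1g.6gb", "-C"])

# Verify the resulting GPU instances.
run(["nvidia-smi", "mig", "-lgi"])
```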
NVIDIA A30 Data Center GPU Technical Specifications
| Specification | Value |
| --- | --- |
| GPU architecture | NVIDIA Ampere |
| FP64 | 5.2 teraFLOPS |
| FP64 Tensor Core | 10.3 teraFLOPS |
| FP32 | 10.3 teraFLOPS |
| TF32 Tensor Core | 82 teraFLOPS (165 teraFLOPS with sparsity) |
| BFLOAT16 Tensor Core | 165 teraFLOPS (330 teraFLOPS with sparsity) |
| FP16 Tensor Core | 165 teraFLOPS (330 teraFLOPS with sparsity) |
| INT8 Tensor Core | 330 TOPS (661 TOPS with sparsity) |
| INT4 Tensor Core | 661 TOPS (1,321 TOPS with sparsity) |
| Media engines | 1 optical flow accelerator (OFA) |
| GPU memory | 24GB HBM2 |
| GPU memory bandwidth | 933 GB/s |
| Interconnect | PCIe Gen4: 64 GB/s |
| Max thermal design power (TDP) | 165 W |
| Form factor | Dual-slot, full-height, full-length (FHFL) |
| Multi-Instance GPU (MIG) | 4 GPU instances @ 6GB each |
| Virtual GPU (vGPU) software support | NVIDIA AI Enterprise for VMware |
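For reference, a small sketch like the following can cross-check figures such as the device name, memory size, and power limit against an installed board. It assumes the nvidia-ml-py package (which provides the pynvml module) and an NVIDIA driver are present.

```python
# pip install nvidia-ml-py  (provides the pynvml module)
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    name = pynvml.nvmlDeviceGetName(handle)                    # e.g. "NVIDIA A30"
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)               # total/used/free in bytes
    power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)   # milliwatts

    if isinstance(name, bytes):                                # older pynvml versions return bytes
        name = name.decode()

    print(f"GPU:          {name}")
    print(f"Total memory: {mem.total / 1024**3:.1f} GiB")
    print(f"Power limit:  {power / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```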
Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers.
Accelerated servers with A30 provide the needed compute power, along with large HBM2 memory, 933 GB/s of memory bandwidth, and scalability with NVLink, to tackle these workloads. Combined with NVIDIA InfiniBand, NVIDIA Magnum IO, and the RAPIDS™ suite of open-source libraries, including the RAPIDS Accelerator for Apache Spark, the NVIDIA data center platform accelerates these huge workloads at unprecedented levels of performance and efficiency.
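As a hedged sketch of how the RAPIDS Accelerator for Apache Spark is typically wired into a PySpark session: the plugin class and configuration keys below are the standard ones documented by the project, while the application name and input path are hypothetical, and the rapids-4-spark jar is assumed to already be on the executors' classpath.

```python
from pyspark.sql import SparkSession

# Assumes the RAPIDS Accelerator jar (rapids-4-spark) and a CUDA-capable GPU such as
# the A30 are available on the Spark workers.
spark = (
    SparkSession.builder
    .appName("a30-rapids-etl")                                 # hypothetical application name
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")     # enable the RAPIDS SQL plugin
    .config("spark.rapids.sql.enabled", "true")                # route supported SQL ops to the GPU
    .config("spark.executor.resource.gpu.amount", "1")         # one A30 (or MIG instance) per executor
    .getOrCreate()
)

# A familiar DataFrame pipeline; operators supported by the plugin run on the GPU transparently.
df = spark.read.parquet("/data/transactions.parquet")          # hypothetical input path
summary = df.groupBy("customer_id").sum("amount")
summary.show()
```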