HBM2 Memory A30 24GB Nvidia Ampere Data Center GPU For Scientific Computing

Brand Name:NVIDIA
Model Number:NVIDIA A30
Minimum Order Quantity:1pcs
Delivery Time: 15-30 working days
Payment Terms:L/C, D/A, D/P, T/T
Place of Origin:China


Location: Beijing, China
Address: C1106, Jinyu Jiahua Building, Shangdi 3rd Street, Haidian District, Beijing 100085, P.R. China
Product Details


NVIDIA Ampere A30 Data Center GPU

Versatile compute acceleration for mainstream enterprise servers.

AI Inference and Mainstream Compute for Every Enterprise


Bring accelerated performance to every enterprise workload with NVIDIA A30 Tensor Core GPUs. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), it delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining high memory bandwidth and low power consumption in a PCIe form factor optimal for mainstream servers, A30 enables an elastic data center and delivers maximum value for enterprises.
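As a sketch of how MIG partitioning works in practice (assuming root access and a MIG-capable NVIDIA driver; `1g.6gb` is the A30's smallest instance profile, matching the 4 x 6GB configuration in the spec table below):

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Carve GPU 0 into four 6GB GPU instances and create a
# compute instance on each (-C)
sudo nvidia-smi mig -cgi 1g.6gb,1g.6gb,1g.6gb,1g.6gb -C

# The MIG devices now enumerate as separate CUDA devices
nvidia-smi -L
```

Each instance gets its own memory, cache, and compute slice, so four inference services can share one A30 without interfering with each other.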



NVIDIA A30 Data Center GPU Technical Specifications


GPU Architecture: NVIDIA Ampere
FP64: 5.2 teraFLOPS
FP64 Tensor Core: 10.3 teraFLOPS
FP32: 10.3 teraFLOPS
TF32 Tensor Core: 82 teraFLOPS | 165 teraFLOPS*
BFLOAT16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
FP16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core: 330 TOPS | 661 TOPS*
INT4 Tensor Core: 661 TOPS | 1321 TOPS*
Media engines: 1 optical flow accelerator (OFA); 1 JPEG decoder (NVJPEG); 4 video decoders (NVDEC)
GPU memory: 24GB HBM2
GPU memory bandwidth: 933 GB/s
Interconnect: PCIe Gen4: 64 GB/s
Max thermal design power (TDP): 165W
Form factor: Dual-slot, full-height, full-length (FHFL)
Multi-Instance GPU (MIG): 4 GPU instances @ 6GB each; 2 GPU instances @ 12GB each; 1 GPU instance @ 24GB
Virtual GPU (vGPU) software support: NVIDIA AI Enterprise for VMware; NVIDIA Virtual Compute Server

* With sparsity
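The two bandwidth figures above can be sanity-checked from first principles. A minimal sketch, assuming a PCIe Gen4 x16 link and a 3072-bit HBM2 interface at 2.43 Gb/s per pin (the bus width and pin rate are assumptions based on the A30's published memory configuration, not stated in this listing):

```python
# PCIe Gen4: 16 GT/s per lane with 128b/130b encoding,
# 16 lanes, counted across both directions at once.
per_direction = 16e9 * (128 / 130) * 16 / 8 / 1e9  # GB/s one way, ~31.5
bidirectional = 2 * per_direction                  # ~63 GB/s, quoted as 64 GB/s

# HBM2: assumed 3072-bit interface at 2.43 Gb/s per pin.
hbm2_bw = 3072 * 2.43e9 / 8 / 1e9                  # ~933 GB/s

print(f"PCIe Gen4 x16 bidirectional: {bidirectional:.1f} GB/s")
print(f"HBM2 bandwidth: {hbm2_bw:.1f} GB/s")
```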



High-Performance Computing

Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers.


Accelerated servers with A30 provide the needed compute power, along with large HBM2 memory, 933 GB/s of memory bandwidth, and scalability with NVLink, to tackle these workloads. Combined with NVIDIA InfiniBand, NVIDIA Magnum IO, and the RAPIDS™ suite of open-source libraries, including the RAPIDS Accelerator for Apache Spark, the NVIDIA data center platform accelerates these huge workloads at unprecedented levels of performance and efficiency.
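cuDF, the RAPIDS dataframe library, deliberately mirrors the pandas API, so a CPU-side pandas sketch shows the code shape; on an A30 the same logic runs GPU-side by swapping the import for `import cudf as pd`. The dataset below is hypothetical:

```python
import pandas as pd  # on a GPU: import cudf as pd

# Hypothetical sensor dataset; with RAPIDS cuDF the identical
# groupby/aggregate executes on the GPU.
df = pd.DataFrame({
    "sensor": ["a", "a", "b", "b", "b"],
    "reading": [1.0, 3.0, 2.0, 4.0, 6.0],
})
summary = df.groupby("sensor")["reading"].mean()
print(summary)  # a -> 2.0, b -> 4.0
```

For multi-server datasets of the kind described above, the RAPIDS Accelerator for Apache Spark applies the same idea to distributed Spark SQL and DataFrame jobs.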




