NVIDIA H100 and H200 Tensor Core GPUs

  • Delivering unprecedented acceleration to power the world’s most advanced AI, data analytics, and HPC workloads.

NVIDIA H100 Tensor Core GPU

Based on the NVIDIA Hopper™ architecture, NVIDIA H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that provides up to 4x faster training over the prior generation for GPT-3 (175B) models.
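
For illustration, here is a minimal sketch of how FP8 training is typically enabled on Hopper GPUs with NVIDIA's Transformer Engine library for PyTorch; exact module and recipe names can vary between library versions, so treat it as an outline rather than a drop-in implementation.

    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    # FP8 recipe: delayed scaling with the hybrid E4M3/E5M2 format commonly used for training.
    fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

    # A Transformer Engine layer whose GEMMs can run in FP8 on H100/H200.
    layer = te.Linear(4096, 4096, bias=True).cuda()
    inp = torch.randn(8, 4096, device="cuda")

    # Inside fp8_autocast, supported operations execute on the FP8 Tensor Cores.
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        out = layer(inp)
    out.sum().backward()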

NVIDIA H200 Tensor Core GPU

NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s) – that’s nearly double the capacity of the NVIDIA H100 Tensor Core GPU with 1.4X more memory bandwidth. The H200’s larger and faster memory accelerates generative AI and large language models, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.
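
To put those numbers in context, the back-of-envelope sketch below works through a hypothetical serving scenario (a 70-billion-parameter model stored in FP8, ignoring KV-cache and activation overhead) to show how capacity and bandwidth translate into model fit and a rough ceiling on single-stream generation speed.

    # Hypothetical sizing example for a single H200 (141 GB HBM3e, 4.8 TB/s).
    PARAMS = 70e9                    # assumed model size: 70B parameters
    BYTES_PER_PARAM = 1              # FP8 storage, 1 byte per weight
    HBM_CAPACITY_GB = 141            # H200 memory capacity
    HBM_BANDWIDTH_BYTES_S = 4.8e12   # H200 memory bandwidth

    weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
    print(f"Model weights: {weights_gb:.0f} GB of {HBM_CAPACITY_GB} GB HBM3e")

    # Token-by-token decoding is typically memory-bandwidth bound: each new token
    # reads roughly the full set of weights once, so bandwidth / bytes-per-token
    # gives an upper bound on single-stream throughput.
    tokens_per_s = HBM_BANDWIDTH_BYTES_S / (PARAMS * BYTES_PER_PARAM)
    print(f"Bandwidth-bound decode ceiling: ~{tokens_per_s:.0f} tokens/s")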

Enterprise-ready at any scale and any location

Clusters of any size

Our enterprise-level infrastructure seamlessly supports NVIDIA H100 and H200 GPU clusters of any size. Whether you need a small cluster or a large-scale deployment, we ensure reliable, high-performance computing tailored to your specific needs.

Globally available, locally accessible

Large clusters of NVIDIA H100 and H200 GPUs are available where you need them, thanks to our extensive infrastructure. We guarantee low latency and high availability, enabling your enterprise to achieve optimal performance worldwide.

Enterprise-grade compliance and security

We ensure that our platform, products, and services meet global compliance, privacy, and security requirements, covering areas such as server availability and data protection. Adherence to industry-recognized privacy and security frameworks underscores our commitment to protecting customer data.

Specifications


NVIDIA H100 SXM

  • FP64 : 34 TFLOPS
  • FP64 Tensor Core : 67 TFLOPS
  • FP32 : 67 TFLOPS
  • TF32 Tensor Core : 989 TFLOPS
  • BFLOAT16 Tensor Core : 1,979 TFLOPS
  • FP8 Tensor Core : 3,958 TFLOPS
  • INT8 Tensor Core : 3,958 TOPS
  • GPU Memory : 80 GB
  • GPU Memory Bandwidth : 3.35 TB/s
  • Decoders : 7 NVDEC | 7 JPEG
  • Interconnect : NVIDIA NVLink®: 900 GB/s | PCIe Gen5: 128 GB/s


NVIDIA H200 SXM

  • FP64 : 34 TFLOPS
  • FP64 Tensor Core : 67 TFLOPS
  • FP32 : 67 TFLOPS
  • TF32 Tensor Core : 989 TFLOPS
  • BFLOAT16 Tensor Core : 1,979 TFLOPS
  • FP8 Tensor Core : 3,958 TFLOPS
  • INT8 Tensor Core : 3,958 TOPS
  • GPU Memory : 141 GB
  • GPU Memory Bandwidth : 4.8 TB/s
  • Decoders : 7 NVDEC | 7 JPEG
  • Interconnect : NVIDIA NVLink®: 900 GB/s | PCIe Gen5: 128 GB/s
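
Once an instance is provisioned, a quick runtime check such as the sketch below (using PyTorch's standard CUDA device query) confirms that the GPU your workload actually sees matches the expected H100 (80 GB) or H200 (141 GB) specification.

    import torch

    # Query the first visible CUDA device and report the properties that matter
    # for capacity planning: name, total memory, and compute capability.
    props = torch.cuda.get_device_properties(0)
    print(f"Device: {props.name}")
    print(f"Memory: {props.total_memory / 1e9:.0f} GB")
    print(f"Compute capability: {props.major}.{props.minor}")  # Hopper reports 9.0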