Sharon AI Launches Cloud Media Tech with On-Demand GPU Service

NVIDIA H100 Tensor Core GPU

The NVIDIA H100 Tensor Core GPU, built on the NVIDIA Hopper™ architecture, delivers industry-leading performance, scalability, and security for AI and high-performance computing (HPC) workloads. With innovations such as the Transformer Engine and fourth-generation Tensor Cores, it accelerates large language models (LLMs), securely handles enterprise workloads, and supercharges AI training with up to 4X higher performance than the previous-generation NVIDIA A100.

Basic Product Information

Product Name

NVIDIA H100 Tensor Core GPU

Architecture

NVIDIA Hopper™

Memory

80GB HBM3 (SXM), 94GB HBM3 (NVL)

Compute Power

Up to 3,958 TFLOPS FP8 performance (SXM, with sparsity)

Release Year

2022

Use Cases

AI training, large language models, high-performance computing (HPC), scientific computing

Key Advantages

Up to 4X Faster Training

Enhanced AI model training with FP8 precision.

Transformer Engine

Optimized for large language models like GPT-3.

60 TFLOPS FP64 Performance

Accelerates scientific and high-performance computing tasks.

Enterprise-Ready

Includes NVIDIA AI Enterprise for streamlined AI deployment.

7X Performance for HPC Applications

Ideal for tasks like genome sequencing and 3D FFT.

Specifications

Performance Specifications

LLM Inference Performance

Up to 5X higher than NVIDIA A100 systems on LLMs of up to 70 billion parameters

FP64

30 teraFLOPS

FP64 Tensor Core

60 teraFLOPS

FP32

60 teraFLOPS

TF32 Tensor Core

835 teraFLOPS (with sparsity)

BFLOAT16 Tensor Core

1,671 teraFLOPS (with sparsity)

FP16 Tensor Core

1,671 teraFLOPS (with sparsity)

FP8 Tensor Core

3,341 teraFLOPS (with sparsity)

INT8 Tensor Core

3,341 TOPS (with sparsity)

Decoders

7 NVDEC, 7 JPEG

Confidential Computing

Supported

Multi-Instance GPUs

Up to 7 MIGs @12GB each

Memory and Bandwidth

GPU Memory

80GB HBM3 (SXM) / 94GB HBM3 (NVL)

Memory Bandwidth

3.35TB/s (SXM) / 3.9TB/s (NVL)

Memory Clock

2619 MHz
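The memory figures above support some quick back-of-envelope roofline arithmetic. The sketch below uses the NVL-column numbers from this spec sheet (94GB HBM3, 3.9TB/s, 3,341 TFLOPS FP8 with sparsity) with decimal GB/TB; the math is illustrative only.

```python
# Back-of-envelope roofline math from the H100 NVL figures quoted above.
MEM_BYTES = 94e9    # 94 GB of HBM3
BANDWIDTH = 3.9e12  # 3.9 TB/s memory bandwidth
PEAK_FP8 = 3341e12  # 3,341 TFLOPS FP8 (with sparsity)

# Time to stream the entire GPU memory once at peak bandwidth.
sweep_seconds = MEM_BYTES / BANDWIDTH

# Arithmetic intensity (FLOPs per byte) a kernel needs to be
# compute-bound rather than bandwidth-bound at the FP8 peak.
ridge_point = PEAK_FP8 / BANDWIDTH

print(f"full-memory sweep: {sweep_seconds * 1e3:.1f} ms")     # ~24.1 ms
print(f"roofline ridge point: {ridge_point:.0f} FLOPs/byte")  # ~857
```

In other words, a kernel doing fewer than roughly 860 FP8 operations per byte of memory traffic is limited by the 3.9TB/s bandwidth, not by compute.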

Thermal and Power

Power

Configurable from 350W to 400W

Server Options

Compatible with partner and NVIDIA-Certified Systems that can accommodate 1 to 8 GPUs

Board Specifications

Form Factor

SXM or PCIe dual-slot

Interconnect

NVIDIA NVLink: 600GB/s bidirectional bandwidth

PCIe Gen5: up to 128GB/s

NVLink Bridge: two H100 NVL boards can be connected via 2- or 4-way NVLink bridges for increased bandwidth

Supported Technologies

Multi-Instance GPU (MIG)

Up to 7 MIGs per GPU (10GB each for SXM, 12GB each for NVL)
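In practice, MIG partitioning is driven through nvidia-smi. A hedged sketch of the typical workflow (requires root and a MIG-capable driver; exact profile names and IDs vary by SKU and driver version, so list them first rather than assuming the ones shown here):

```shell
# Enable MIG mode on GPU 0 (resets the GPU)
sudo nvidia-smi -i 0 -mig 1
# List the GPU instance profiles this board actually offers
sudo nvidia-smi mig -lgip
# Create one instance from a listed profile, plus its compute instance
# ("1g.12gb" matches the NVL slice size above; confirm against -lgip output)
sudo nvidia-smi mig -cgi 1g.12gb -C
# Verify the MIG devices are visible
nvidia-smi -L
```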

Confidential Computing

Provides hardware-based security for data in use.

AI Enterprise Software

NVIDIA AI Enterprise included for secure and scalable AI deployment.

Server Compatibility

NVL

Compatible with NVIDIA DGX H100 and NVIDIA-Certified Systems with 1-8 GPUs.

Additional Features

01 Transformer Engine

Uses mixed FP8 and FP16 precision to dramatically accelerate training and inference for large AI models.
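The core idea is to keep tensors in a narrow FP8 format by rescaling them into its representable range. Below is a minimal pure-Python sketch of that per-tensor scaling; E4M3's maximum finite value of 448 is a property of the FP8 format, but the one-shot recipe here is illustrative only, not Transformer Engine's actual delayed-scaling algorithm.

```python
# Illustrative per-tensor scaling into the FP8 E4M3 range, the core
# idea behind mixed FP8/FP16 training. Sketch only; mantissa rounding
# is omitted and the real recipe uses a delayed amax history on GPU.

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def fp8_scale(values):
    """Scale factor mapping the tensor's max magnitude onto E4M3_MAX."""
    amax = max((abs(v) for v in values), default=0.0)
    return E4M3_MAX / amax if amax > 0 else 1.0

def to_fp8_range(values):
    """Scale into the FP8 range and clip; returns (scaled, scale)."""
    s = fp8_scale(values)
    return [max(-E4M3_MAX, min(E4M3_MAX, v * s)) for v in values], s

scaled, s = to_fp8_range([0.5, -2.0, 1.0])
print(s)       # 224.0  (448 / 2.0)
print(scaled)  # [112.0, -448.0, 224.0]
```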

02 NVLink Switch System

Scales multi-GPU communication up to 900GB/s, over 7X faster than PCIe Gen5.

03 Dynamic Programming (DPX) Instructions

Accelerates tasks like disease diagnosis and routing optimization by 7X compared to previous generations.
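DPX targets the min/add inner recurrences at the heart of dynamic-programming kernels such as sequence alignment and routing. The plain CPU-side Levenshtein distance below illustrates the recurrence pattern that hardware accelerates; it is reference code only, not DPX intrinsics.

```python
# Reference Levenshtein distance: the min/add recurrence in the inner
# loop is the kind of dynamic-programming step DPX instructions speed up.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```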

04 NVIDIA AI Enterprise Add-on

Included with the H100 NVL, providing access to a suite of AI tools and frameworks.

05 Single Root I/O Virtualization (SR-IOV)

Supported with up to 32 virtual functions

06 Secure Boot (CEC)

Ensures secure boot and firmware updates

07 Programmable Power

Allows the power cap to be configured using nvidia-smi or SMBPBI tools.
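A hedged sketch of power-cap management via nvidia-smi (requires root; the board accepts values within its configurable range, 350-400W on the H100 NVL per the spec above):

```shell
# Show the current, default, and min/max allowed power limits
nvidia-smi -q -d POWER
# Cap GPU 0 at 350 W (must fall within the allowed range)
sudo nvidia-smi -i 0 -pl 350
```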
