AMD Instinct MI300X

The AMD Instinct MI300X accelerator, built on next-gen AMD CDNA™ 3 architecture, delivers exceptional efficiency and performance for AI and high-performance computing (HPC) tasks. With 192GB of HBM3 memory and 5.3 TB/s of memory bandwidth, it is designed for demanding AI workloads, generative AI, machine learning training, and large language models, offering significant performance improvements over its predecessors.
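To put the 192GB figure in context, here is a rough back-of-the-envelope sizing sketch in Python; the per-parameter byte counts are standard, but the 10% reserve for KV cache and runtime overhead is an assumption, and real deployments vary.

```python
# Back-of-the-envelope sizing for a single MI300X (192 GB of HBM3).
# Assumptions: bytes per parameter at each precision, plus a flat 10%
# reserve for KV cache, activations, and runtime overhead (illustrative).
HBM_BYTES = 192e9
RESERVE = 0.10  # fraction held back for non-weight allocations (assumption)

bytes_per_param = {"fp32": 4, "fp16/bf16": 2, "fp8/int8": 1}

usable = HBM_BYTES * (1 - RESERVE)
for precision, bpp in bytes_per_param.items():
    max_params_b = usable / bpp / 1e9
    print(f"{precision}: ~{max_params_b:.0f}B parameters of weights fit")
# ~86B parameters at fp16/bf16, so a 70B-class model fits on one GPU
# with headroom under these assumptions.
```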
Basic Product Information


Product Name

AMD Instinct MI300X

Architecture

AMD CDNA™ 3

Memory

192GB HBM3

Compute Units

304

Release Year

2023

Use Cases

AI/ML training, generative AI, large language models, HPC

Price

$2.50/GPU/hr
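At the listed on-demand rate, a quick cost estimate (plain Python; the 730-hour month and 8-GPU node size are assumptions, and any contract discounts are ignored) looks like this:

```python
# Quick on-demand cost estimate at the listed MI300X rate.
RATE_PER_GPU_HR = 2.50   # USD, from the price above
HOURS_PER_MONTH = 730    # average month length (assumption)
GPUS_PER_NODE = 8        # one 8-GPU platform (assumption)

single_gpu_month = RATE_PER_GPU_HR * HOURS_PER_MONTH
full_node_month = single_gpu_month * GPUS_PER_NODE

print(f"1 GPU:  ${single_gpu_month:,.2f}/month")   # $1,825.00
print(f"8 GPUs: ${full_node_month:,.2f}/month")    # $14,600.00
```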

Key Advantages

192GB HBM3 Memory

Largest memory capacity in its class, ideal for data-intensive AI and HPC tasks.

5.3TB/s Memory Bandwidth

Class-leading bandwidth keeps the compute units fed on memory-bound AI inference and HPC workloads; a rough latency sketch follows this list.

304 Compute Units

Enhanced performance for AI and HPC applications.

Up to 13.7X Peak AI/ML Performance

A large generational leap in peak AI throughput and efficiency over previous AMD Instinct accelerators.

Multi-Chip Architecture

Improves power efficiency and reduces data movement overhead.
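To make the bandwidth advantage concrete, the sketch below estimates a lower bound on per-token latency for bandwidth-bound LLM decoding, assuming each generated token streams all model weights from HBM once; the 70B model size and fp16 weights are assumptions, and real kernels add overhead on top of this floor.

```python
# Idealized lower bound on decode latency if each generated token requires
# streaming all model weights from HBM exactly once (assumption).
HBM_BW = 5.3e12          # bytes/s, MI300X memory bandwidth
PARAMS = 70e9            # assumed 70B-parameter model
BYTES_PER_PARAM = 2      # fp16/bf16 weights (assumption)

weight_bytes = PARAMS * BYTES_PER_PARAM
min_latency_s = weight_bytes / HBM_BW
print(f"Weights read per token: {weight_bytes / 1e9:.0f} GB")
print(f"Bandwidth-bound floor:  {min_latency_s * 1e3:.1f} ms/token "
      f"(~{1 / min_latency_s:.0f} tokens/s ceiling)")
# About 26 ms/token, i.e. roughly 38 tokens/s at best for single-stream decode.
```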

Specifications

Performance Specifications

AI Peak Performance (TFLOPS) *The values shown are with sparsity.

TF32 (TFLOPS): 653.7
FP16 (TFLOPS): 1307.4
BFLOAT16 (TFLOPS): 1307.4
INT8 (TOPS): 2614.9
FP8 (TFLOPS): 2614.9

HPC Peak Performance (TFLOPS)

FP64 Vector

81.7

FP32 Vector

163.4
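Read together with the 5.3TB/s of memory bandwidth, these peak rates imply a roofline ridge point, the arithmetic intensity a kernel needs before it stops being memory-bound. A minimal sketch of that calculation, using the peak rates listed above, follows; it is an idealized model, not a measured result.

```python
# Roofline ridge point: FLOPs of work needed per byte of HBM traffic before
# a kernel becomes compute-bound instead of memory-bound (idealized model).
PEAK_TFLOPS = {            # peak rates from the table above
    "FP64 vector": 81.7,
    "FP32 vector": 163.4,
    "FP16": 1307.4,
    "FP8": 2614.9,
}
MEM_BW_TBPS = 5.3          # TB/s of HBM3 bandwidth

for dtype, tflops in PEAK_TFLOPS.items():
    ridge = tflops / MEM_BW_TBPS   # (1e12 FLOP/s) / (1e12 B/s) = FLOPs per byte
    print(f"{dtype:>12}: ~{ridge:.0f} FLOPs/byte to stay compute-bound")
# FP16 works out to ~247 FLOPs/byte, which is why low-batch inference
# (low arithmetic intensity) tends to be limited by the 5.3 TB/s of HBM.
```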

Memory and Bandwidth

GPU Memory

192GB HBM3

Memory Bandwidth

5.3TB/s

Memory Interface

8192 bits

AMD Infinity Cache™

256MB
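As a hedged consistency check, the 8192-bit interface and 192GB capacity line up with eight HBM3 stacks if one assumes the standard 1,024-bit interface per stack; the small calculation below spells that out.

```python
# Consistency check on the memory figures above, assuming the standard
# 1,024-bit interface per HBM3 stack (assumption, not stated in the spec).
TOTAL_BUS_BITS = 8192
BITS_PER_STACK = 1024
TOTAL_CAPACITY_GB = 192
TOTAL_BW_TBPS = 5.3

stacks = TOTAL_BUS_BITS // BITS_PER_STACK
print(f"Implied HBM3 stacks: {stacks}")                                   # 8
print(f"Capacity per stack:  {TOTAL_CAPACITY_GB / stacks:.0f} GB")        # 24 GB
print(f"Bandwidth per stack: {TOTAL_BW_TBPS / stacks * 1000:.0f} GB/s")   # ~663 GB/s
```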

Thermal and Power

Max Thermal Design Power (TDP)

750W

Board Specifications

Form Factor

OAM module

Interconnect

7x AMD Infinity Fabric™ Links (128 GB/s each); 1x PCIe® Gen 5 x16 (128 GB/s)
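Tallying the listed links gives the aggregate off-package bandwidth per GPU; this is simple arithmetic on the figures above, and achievable throughput depends on topology and software.

```python
# Aggregate off-package I/O per MI300X OAM module, from the board spec above.
IF_LINKS = 7
IF_BW_GBPS = 128       # GB/s per Infinity Fabric link
PCIE_BW_GBPS = 128     # GB/s for one PCIe Gen 5 x16 link

gpu_to_gpu = IF_LINKS * IF_BW_GBPS        # 896 GB/s to peer GPUs
total_io = gpu_to_gpu + PCIE_BW_GBPS      # 1024 GB/s including the host link
print(f"Peer-GPU bandwidth: {gpu_to_gpu} GB/s")
print(f"Total off-package:  {total_io} GB/s")
```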

Supported Technologies

Multi-Chip Architecture

Built from 8 accelerator compute dies (XCDs), with structured-sparsity support for higher computational throughput.

Decoders

Supports HEVC/H.265, AVC/H.264, VP9, and AV1 codecs.

Virtualization

SR-IOV with up to 8 partitions for resource sharing.

RAS Features

Full-chip ECC memory, page retirement, and page avoidance for reliability.

Server Compatibility

Part of the AMD Instinct Platform with 8 interconnected GPUs on a Universal Base Board (UBB 2.0), compatible with industry-standard HGX host connectors.
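For capacity planning on the full 8-GPU platform, the aggregate figures follow from simple multiplication of the per-GPU numbers; how much of this is usable depends on the parallelism and replication strategy.

```python
# Aggregate resources of one 8-GPU MI300X platform, per the spec above.
GPUS = 8
HBM_PER_GPU_GB = 192
BW_PER_GPU_TBPS = 5.3

print(f"Total HBM3 capacity:     {GPUS * HBM_PER_GPU_GB} GB")          # 1536 GB
print(f"Aggregate HBM bandwidth: {GPUS * BW_PER_GPU_TBPS:.1f} TB/s")   # 42.4 TB/s
# Roughly 1.5 TB of pooled HBM is what makes tensor-parallel serving of
# multi-hundred-billion-parameter models practical on a single node.
```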

Additional Features

Infinity Fabric™ Technology

Superior I/O efficiency and scaling between GPUs.

Shared Memory and Caches

Coherently shares memory across CPUs and GPUs for better AI and HPC performance.

ROCm Software Support

Comprehensive support for AI and HPC frameworks, including PyTorch, TensorFlow, and JAX, with optimized libraries for AMD accelerators.
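As a minimal sanity check that a framework actually sees the accelerator, ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda API; the sketch below assumes a ROCm-enabled PyTorch install and that the MI300X is device 0.

```python
# Minimal device check on a ROCm-enabled PyTorch build, where AMD GPUs are
# exposed through the torch.cuda namespace (HIP backend).
import torch

if torch.cuda.is_available():
    dev = torch.device("cuda:0")               # first visible GPU (assumption)
    props = torch.cuda.get_device_properties(dev)
    print(f"Device:        {props.name}")
    print(f"Total memory:  {props.total_memory / 1e9:.0f} GB")
    print(f"Compute units: {props.multi_processor_count}")
    x = torch.randn(4096, 4096, device=dev, dtype=torch.bfloat16)
    y = x @ x                                   # small matmul to confirm execution
    print(f"Matmul OK on {y.device}")
else:
    print("No GPU visible to this PyTorch build")
```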

More Products

NVIDIA H200

$2.95/GPU/hr

NVIDIA H100

From $2.29/hr

NVIDIA L40S

From $0.99/hr

NVIDIA A40

$0.50/GPU/hr

Want to learn more?
