Hardware Index
GPU & Accelerator Comparison
Compare AI accelerators by memory capacity, memory bandwidth, and peak compute.
- Max HBM Capacity: 288 GB (AMD Instinct MI355X)
- Max Bandwidth: 8 TB/s (AMD Instinct MI355X)
- Peak FP8 Compute: 4.61 PFLOPS (Google TPU v7)
- Chips Indexed: 19 (+3 this month)
Full Hardware Index
| Hardware | Manufacturer | Type | Primary Workload | Secondary Workload | Release Date | FP-16 (PFLOPS) | FP-8 (PFLOPS) ↓ | Memory (GB) | Bandwidth (TB/s) | Power (W) | Foundry |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Google TPU v7 | Google | TPU | Training | Inference | 2025-11-06 | 2.5 | 4.61 | 192 | 7.37 | 960 | TSMC |
| AMD Instinct MI355X | AMD | GPU | Training | Inference | 2025-06-12 | 2.25 | 4.6 | 288 | 8 | 1400 | TSMC |
| AMD Instinct MI350X | AMD | GPU | Training | Inference | 2025-06-12 | 2.25 | 4.6 | 288 | 8 | 1000 | TSMC |
| NVIDIA B300 | NVIDIA | GPU | Training | Inference | 2025-08-22 | 2.31 | 4.5 | 288 | 8 | 1400 | TSMC |
| NVIDIA B200 | NVIDIA | GPU | Training | Inference | 2024-11-15 | 2.25 | 4.5 | 192 | 8 | 1000 | TSMC |
| NVIDIA B100 | NVIDIA | GPU | Training | Inference | 2024-11-15 | 1.31 | 3.5 | 192 | 8 | 700 | TSMC |
| AMD Instinct MI325X | AMD | GPU | Inference | Training | 2024-10-10 | 1.3 | 2.61 | 256 | 6 | 1000 | TSMC |
| AMD Instinct MI300X | AMD | GPU | Training | Inference | 2023-12-06 | 1.3 | 2.61 | 192 | 5.3 | 750 | TSMC |
| Amazon Trainium3 | Amazon AWS | Accelerator | Training | Inference | 2025-12-02 | 1.26 | 2.52 | 155 | 4.9 | 700 | TSMC |
| NVIDIA H200 | NVIDIA | GPU | Inference | Training | 2024-11-18 | 0.99 | 1.98 | 141 | 4.8 | 700 | TSMC |
| NVIDIA H100 SXM5 | NVIDIA | GPU | Training | Inference | 2022-09-20 | 0.99 | 1.98 | 80 | 3.35 | 700 | TSMC |
| Google TPU v6e | Google | TPU | Training | Inference | 2024-05-14 | 0.92 | 1.84 | 32 | 1.6 | — | TSMC |
| NVIDIA H100 PCIe | NVIDIA | GPU | Inference | Training | 2022-10-05 | 0.76 | 1.51 | 80 | 2 | 400 | TSMC |
| Amazon Trainium2 | Amazon AWS | Accelerator | Training | Inference | 2024-12-03 | 0.65 | 1.3 | 96 | 2.9 | 500 | TSMC |
| NVIDIA L40S | NVIDIA | GPU | Inference | — | 2023-08-08 | 0.37 | 0.73 | 48 | 0.86 | 350 | TSMC |
| Google TPU v5e | Google | TPU | Inference | — | 2023-08-29 | 0.2 | 0.39 | 16 | 0.8 | — | TSMC |
| NVIDIA L40 | NVIDIA | GPU | Inference | — | 2022-10-13 | 0.18 | 0.36 | 48 | 0.86 | 300 | TSMC |
| NVIDIA L4 | NVIDIA | GPU | Inference | — | 2023-03-21 | 0.12 | 0.24 | 24 | 0.3 | 72 | TSMC |
| NVIDIA A100 | NVIDIA | GPU | Training | Inference | 2020-05-14 | 0.31 | — | 80 | 2 | 400 | TSMC |
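Raw peak numbers are only part of the comparison: dividing peak compute by power gives efficiency, and dividing it by memory bandwidth gives the arithmetic intensity a workload needs to avoid being bandwidth-bound. A minimal sketch of both derived metrics, using a few rows copied from the table above (the chip selection is arbitrary):

```python
# Derive simple comparison metrics from the index table.
# Tuples are (FP-8 PFLOPS, memory GB, bandwidth TB/s, power W), copied from the table.
chips = {
    "NVIDIA B200":         (4.5, 192, 8.0, 1000),
    "AMD Instinct MI355X": (4.6, 288, 8.0, 1400),
    "NVIDIA H100 SXM5":    (1.98, 80, 3.35, 700),
}

def metrics(fp8_pflops, mem_gb, bw_tbs, power_w):
    flops = fp8_pflops * 1e15   # peak FP8 FLOP/s
    bw = bw_tbs * 1e12          # memory bandwidth in bytes/s
    return {
        # FP8 TFLOPS delivered per watt of board power
        "tflops_per_watt": flops / 1e12 / power_w,
        # FLOPs needed per byte moved to saturate compute (roofline ridge point)
        "flops_per_byte": flops / bw,
    }

for name, spec in chips.items():
    m = metrics(*spec)
    print(f"{name}: {m['tflops_per_watt']:.2f} TFLOPS/W, "
          f"{m['flops_per_byte']:.0f} FLOP/byte ridge point")
```

A workload whose arithmetic intensity falls below a chip's FLOP/byte ridge point (common for small-batch inference) is limited by bandwidth, not peak compute, which is why the Bandwidth column matters as much as the FP-8 column for inference-oriented parts.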
Data Source: Aggregated from manufacturer specifications and verified benchmarks