
NVIDIA Jetson AGX Thor vs Orin: Full Series Comparison

Updated on: April 20, 2026

The NVIDIA Jetson ecosystem has expanded rapidly, with Orin modules powering everything from entry-level AI applications to industrial robotics. Now, the arrival of Jetson Thor brings a massive leap in compute and efficiency. In this article, we’ll break down the Jetson Orin family and compare it with the new Thor series. To make things clear, we’ll start with a detailed comparison table and then walk through each module series.


Jetson Module Comparison at a Glance 

| Feature | Orin Nano 4GB (Super Mode) | Orin Nano 8GB (Super Mode) | Orin NX 8GB (Super Mode) | Orin NX 16GB (Super Mode) | AGX Orin 32GB | AGX Orin 64GB | AGX Orin Industrial | Jetson T5000 (Thor) |
|---|---|---|---|---|---|---|---|---|
| AI Performance | Up to 34 TOPS | Up to 67 TOPS | Up to 117 TOPS | Up to 157 TOPS | 200 TOPS | 275 TOPS | 248 TOPS | 2,070 TFLOPS (FP4, sparse) |
| GPU | 512-core Ampere, 16 Tensor Cores | 1024-core Ampere, 32 Tensor Cores | 1024-core Ampere, 32 Tensor Cores | 1024-core Ampere, 32 Tensor Cores | 1792-core Ampere, 56 Tensor Cores | 2048-core Ampere, 64 Tensor Cores | 2048-core Ampere, 64 Tensor Cores | 2560-core Blackwell, 96 fifth-gen Tensor Cores; Multi-Instance GPU (MIG) with 10 TPCs |
| CPU | 6-core Arm Cortex-A78AE | 6-core Arm Cortex-A78AE | 6-core Arm Cortex-A78AE | 8-core Arm Cortex-A78AE | 8-core Arm Cortex-A78AE | 12-core Arm Cortex-A78AE | 12-core Arm Cortex-A78AE | 14-core Arm Neoverse-V3AE (64-bit; 1 MB L2 per core; 16 MB shared L3) |
| Memory | 4GB 64-bit LPDDR5, 34 GB/s | 8GB 128-bit LPDDR5, 68 GB/s | 8GB 128-bit LPDDR5, 102.4 GB/s | 16GB 128-bit LPDDR5, 102.4 GB/s | 32GB 256-bit LPDDR5, 205 GB/s | 64GB 256-bit LPDDR5, 205 GB/s | 64GB 256-bit LPDDR5, 205 GB/s | 128GB 256-bit LPDDR5X, 273 GB/s |
| DL Accelerator | - | - | 1x NVDLA v2.0 | 2x NVDLA v2.0 | 2x NVDLA v2.0 | 2x NVDLA v2.0 | 2x NVDLA v2.0 | - |
| Vision Accelerator | - | - | 1x PVA v2 | 1x PVA v2 | 1x PVA v2 | 1x PVA v2 | 1x PVA v2 | 1x PVA v3 |
| Storage | External NVMe | External NVMe | External NVMe | External NVMe | 64GB eMMC | 64GB eMMC | 64GB eMMC | NVMe over PCIe; SSD over USB 3.2 |
| Video Encode | 1080p30 (software, 1-2 CPU cores) | 1080p30 (software, 1-2 CPU cores) | 1x 4K60, 3x 4K30, 6x 1080p60, 12x 1080p30 (H.265); H.264, AV1 | 1x 4K60, 3x 4K30, 6x 1080p60, 12x 1080p30 (H.265); H.264, AV1 | 1x 4K60, 3x 4K30, 6x 1080p60, 12x 1080p30 (H.265); H.264, AV1 | 2x 4K60, 4x 4K30, 8x 1080p60, 16x 1080p30 (H.265); H.264, AV1 | 1x 4K60, 3x 4K30, 7x 1080p60, 15x 1080p30 (H.265) | 6x 4Kp60, 12x 4Kp30, 24x 1080p60, 50x 1080p30 (H.265); 48x 1080p30, 6x 4Kp60 (H.264) |
| Video Decode | 1x 4K60, 2x 4K30, 5x 1080p60, 11x 1080p30 (H.265) | 1x 4K60, 3x 4K30, 6x 1080p60, 12x 1080p30 (H.265) | 1x 8K30, 2x 4K60, 4x 4K30, 9x 1080p60, 18x 1080p30 (H.265) | 1x 8K30, 2x 4K60, 4x 4K30, 9x 1080p60, 18x 1080p30 (H.265) | 1x 8K30, 2x 4K60, 4x 4K30, 9x 1080p60, 18x 1080p30 (H.265) | 1x 8K30, 3x 4K60, 7x 4K30, 11x 1080p60, 23x 1080p30 (H.265) | 1x 8K30, 3x 4K60, 7x 4K30, 11x 1080p60, 22x 1080p30 (H.265) | 4x 8Kp30, 10x 4Kp60, 22x 4Kp30, 46x 1080p60, 92x 1080p30 (H.265); 82x 1080p30, 4x 4Kp60 (H.264) |
| Camera | Up to 4 cameras (8 via virtual channels); 8-lane MIPI CSI-2; D-PHY 2.1 (up to 20 Gbps) | Up to 4 cameras (8 via virtual channels); 8-lane MIPI CSI-2; D-PHY 2.1 (up to 20 Gbps) | Up to 4 cameras (8 via virtual channels); 8-lane MIPI CSI-2; D-PHY 2.1 (up to 20 Gbps) | Up to 4 cameras (8 via virtual channels); 8-lane MIPI CSI-2; D-PHY 2.1 (up to 20 Gbps) | 16-lane MIPI CSI-2 connector | 16-lane MIPI CSI-2 connector | 16-lane MIPI CSI-2 connector | Up to 20 cameras via HSB; up to 6 cameras over 16-lane MIPI CSI-2 (up to 32 via virtual channels); C-PHY 2.1 (10.25 Gbps); D-PHY 2.1 (40 Gbps) |
| PCI Express | 1 x4 + 3 x1 (PCIe Gen3, Root Port & Endpoint) | 1 x4 + 3 x1 (PCIe Gen3, Root Port & Endpoint) | 1 x4 + 3 x1 (PCIe Gen4, Root Port & Endpoint) | 1 x4 + 3 x1 (PCIe Gen4, Root Port & Endpoint) | Up to 2 x8 + 1 x4 + 2 x1 (PCIe Gen4, Root Port & Endpoint) | Up to 2 x8 + 1 x4 + 2 x1 (PCIe Gen4, Root Port & Endpoint) | Up to 2 x8 + 1 x4 + 2 x1 (PCIe Gen4, Root Port & Endpoint) | Up to PCIe Gen5 (x8); Root Port only: C1 (x1), C3 (x2); Root Port or Endpoint: C2 (x1), C4 (x8), C5 (x4) |
| Mechanical | 69.6mm x 45mm; 260-pin SO-DIMM connector | 69.6mm x 45mm; 260-pin SO-DIMM connector | 69.6mm x 45mm; 260-pin SO-DIMM connector | 69.6mm x 45mm; 260-pin SO-DIMM connector | 100mm x 87mm; 699-pin Molex Mirror Mezz connector; integrated thermal transfer plate | 100mm x 87mm; 699-pin Molex Mirror Mezz connector; integrated thermal transfer plate | 100mm x 87mm; 699-pin Molex Mirror Mezz connector; integrated thermal transfer plate | 100mm x 87mm; 699-pin B2B connector; integrated thermal transfer plate (TTP) with heat pipe |
| Power | 7W-25W | 7W-25W | 10W-40W | 10W-40W | 15W-40W | 15W-60W | 15W-75W | 40W-130W |

What are the architecture differences between Jetson AGX Orin and Jetson AGX Thor?

The architectural shift from Jetson AGX Orin to Jetson AGX Thor is not a simple generational upgrade but a transition to a fundamentally new compute stack. Orin is built on NVIDIA’s Ampere GPU architecture paired with Arm Cortex-A78AE CPUs, while Jetson AGX Thor introduces the newer Blackwell GPU architecture alongside Arm Neoverse-V3AE cores. This change enables Thor to deliver significantly higher AI throughput and efficiency, with NVIDIA reporting more than 7.5× higher AI compute and around 3.5× better energy efficiency compared to Orin, positioning it for next-generation robotics and physical AI workloads. Here are the key differences:

  • Performance: Jetson Thor's Blackwell GPU delivers up to 2,070 TFLOPS (FP4, sparse), versus Jetson AGX Orin's peak of ~275 TOPS (INT8, sparse), roughly 7.5x more raw compute, though the two figures are quoted at different precisions.
  • Use Cases: Jetson Thor targets next-generation robotics and physical AI workloads, while Jetson AGX Orin remains ideal for standard mobile robots and entry-level automation.
  • Memory & I/O: Jetson AGX Thor supports up to 128GB of LPDDR5X, double Orin's 64GB maximum, and moves from PCIe Gen4 to PCIe Gen5.
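NVIDIA's headline 7.5× figure can be sanity-checked directly from the table's numbers. This is a back-of-envelope check, not a benchmark, since it compares Thor's FP4 sparse TFLOPS against Orin's INT8 sparse TOPS:

```python
# Back-of-envelope check of the headline compute gap. Caveat: Thor's
# figure is FP4 sparse TFLOPS, Orin's is INT8 sparse TOPS, so this is a
# marketing-level ratio rather than a like-for-like comparison.
thor_fp4_tflops = 2070   # Jetson T5000 (Thor), FP4 sparse
orin_int8_tops = 275     # Jetson AGX Orin 64GB, INT8 sparse

ratio = thor_fp4_tflops / orin_int8_tops
print(f"Headline compute ratio: {ratio:.1f}x")  # ~7.5x
```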


Differences Between GPUs: CUDA cores, Tensor cores and AI TOPS

The GPU is the primary driver of the performance gap between the two platforms. Jetson AGX Orin integrates an Ampere GPU with 2048 CUDA cores and 64 Tensor cores, delivering up to 275 TOPS of AI performance depending on configuration.


In contrast, Jetson AGX Thor moves to a Blackwell GPU with 2560 CUDA cores and 96 next-generation Tensor cores, alongside a dramatic increase in AI compute that reaches 2,070 TFLOPS of sparse FP4 throughput.


This increase is not just about raw numbers; newer Tensor cores and architectural improvements enable more efficient execution of modern AI workloads such as transformers, multimodal models and large-scale perception pipelines, which are increasingly common in robotics and edge AI systems.


| Feature | Jetson AGX Orin | Jetson Thor |
|---|---|---|
| GPU Architecture | Ampere GPU | Blackwell GPU |
| CUDA Cores | 2048 CUDA cores | 2560 CUDA cores |
| Tensor Cores | 64 Tensor cores | 96 next-generation Tensor cores |

CPU: What changes in real workloads?

The CPU evolution arguably plays the most critical role in real-world system performance. Jetson AGX Orin’s 12-core Cortex-A78AE CPU is designed for deterministic, safety-capable workloads typical in robotics, including sensor handling and control loops.


Jetson Thor upgrades this to a 14-core Arm Neoverse-V3AE CPU, delivering higher clock speeds, larger cache structures, and improved multi-threading capabilities.


In practical deployments, this translates to better handling of concurrent tasks such as sensor fusion, path planning, and middleware execution (e.g., ROS 2), reducing CPU bottlenecks and enabling more consistent utilization of the GPU in complex, multi-modal AI pipelines.
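The concurrency pattern described above can be sketched in a few lines. The task names here (`fuse_imu`, `plan_path`) are invented stand-ins, not real ROS 2 APIs; the point is simply that more CPU cores let more such callbacks run in parallel without starving each other:

```python
# Minimal sketch of the concurrency a robotics stack leans on: a
# "sensor fusion" step and a "path planning" step running on separate
# CPU cores. fuse_imu and plan_path are illustrative stand-ins only.
from concurrent.futures import ThreadPoolExecutor
import os

def fuse_imu(samples):        # stand-in for a sensor-fusion callback
    return sum(samples) / len(samples)

def plan_path(n_waypoints):   # stand-in for a path-planning step
    return [(i, i * 2) for i in range(n_waypoints)]

cores = os.cpu_count()  # 12 on Jetson AGX Orin, 14 on the T5000 module
with ThreadPoolExecutor(max_workers=cores) as pool:
    imu = pool.submit(fuse_imu, [0.9, 1.1, 1.0])
    path = pool.submit(plan_path, 3)
    print(imu.result(), path.result())
```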


What are memory bandwidth and capacity differences?

Memory is increasingly a limiting factor for edge AI, especially with the rise of large models and multi-sensor systems. Jetson AGX Orin supports up to 64GB of LPDDR5 memory with bandwidth around 200 GB/s, which is sufficient for most perception and vision workloads.


Jetson Thor significantly expands this with up to 128GB of LPDDR5X memory and bandwidth exceeding 270 GB/s, enabling faster data movement and larger on-device models.


This improvement is critical for applications such as:

  • Real-time multi-camera processing
  • High-resolution sensor fusion
  • Running large language models
  • Deploying vision-language models locally

All of these are operations where both memory capacity and bandwidth directly impact latency and throughput.
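As a rough illustration of why bandwidth matters for on-device LLMs: token generation is typically memory-bound, since every generated token streams the full weight set, so the token rate is capped near bandwidth divided by model size. The 8 GB model size below is an assumed illustrative figure, not a measurement:

```python
# Memory-bandwidth-bound ceiling on LLM decode speed: each generated
# token reads all weights once, so tokens/s <= bandwidth / weight bytes.
# Illustrative only: an ~8B-parameter model at 1 byte per weight.
MODEL_BYTES = 8e9

for module, bw_gb_s in [("Jetson AGX Orin", 205), ("Jetson AGX Thor", 273)]:
    ceiling = bw_gb_s * 1e9 / MODEL_BYTES
    print(f"{module}: <= {ceiling:.1f} tokens/s (bandwidth-bound)")
```

Real throughput lands well below this ceiling once attention caches and activations are counted, but the ratio between the two platforms holds.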


Power and thermal comparison: Is Jetson AGX Orin or Jetson AGX Thor better?

Power and thermal characteristics are among the clearest practical differences between Jetson AGX Orin and Jetson AGX Thor.

Jetson AGX Orin is designed for efficiency, with a configurable power envelope typically ranging from 15W to 60W, making it suitable for embedded and battery-powered systems where thermal constraints are strict.

In contrast, Jetson AGX Thor operates in a much higher power range, roughly 40W to 130W depending on configuration, reflecting its significantly higher compute capability. Despite this, Thor delivers around 3.5× better performance per watt than Orin, meaning that while absolute power consumption increases, efficiency relative to compute improves substantially. In practice, this creates a clear trade-off: Jetson AGX Orin is easier to integrate into compact, thermally constrained designs, while Jetson AGX Thor requires more robust cooling and power delivery but enables far more demanding AI workloads.
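The ~3.5× efficiency claim is consistent with the top-of-envelope numbers, again with the caveat that the two compute figures are quoted at different precisions:

```python
# Perf-per-watt at the top of each power envelope. Caveat: Orin's figure
# is INT8 sparse TOPS, Thor's is FP4 sparse TFLOPS, so the ratio is only
# indicative.
orin_eff = 275 / 60      # TOPS per watt at 60 W
thor_eff = 2070 / 130    # TFLOPS per watt at 130 W

print(f"Orin: {orin_eff:.2f} TOPS/W")
print(f"Thor: {thor_eff:.2f} TFLOPS/W")
print(f"Efficiency ratio: {thor_eff / orin_eff:.1f}x")  # ~3.5x
```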


Which module should you choose?

Choosing between Jetson AGX Thor and Jetson AGX Orin ultimately comes down to aligning compute requirements with system constraints such as power, thermal design, and deployment maturity. While Thor represents a major leap in raw performance and is aimed at next-generation AI workloads, Orin remains a highly capable and widely adopted platform with a strong balance between performance, efficiency, and ecosystem maturity. The decision is less about which is “better” overall and more about which fits the specific demands of your application, especially in terms of model complexity, real-time requirements, and hardware limitations. 


| Choose Jetson AGX Thor if… | Choose Jetson AGX Orin if… |
|---|---|
| Your application demands cutting-edge AI performance. | Power efficiency, cost, and thermal constraints are critical to the project. |
| You want to run large, complex models directly at the edge. | You need a mature, widely deployed software ecosystem. |
| You focus on next-generation automation and robotics. | You value proven stability. |
| You require the newest, most powerful hardware. | Scalability and reliability matter most in your project. |

Are Orin NX and Orin Nano still relevant?

Orin NX and Orin Nano continue to play an important role in the Jetson ecosystem by addressing lower-power and cost-sensitive segments. These modules offer multiple power modes, typically ranging from around 10W up to 40W depending on configuration, allowing developers to optimize performance within tight energy budgets. They remain highly relevant for compact systems such as drones, small robots, and edge devices where size, weight, and thermal constraints are more restrictive, and where the full capabilities of AGX-class modules are unnecessary.
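On a JetPack system, the active power mode can be inspected with NVIDIA's `nvpmodel` utility. A minimal sketch that shells out to it but degrades gracefully when run off-device:

```python
# Query the active Jetson power mode (e.g. 10 W / 25 W / MAXN on Orin NX)
# via NVIDIA's nvpmodel utility, falling back cleanly off-device.
import shutil
import subprocess

def current_power_mode() -> str:
    if shutil.which("nvpmodel") is None:
        return "nvpmodel not found (not running on a Jetson?)"
    out = subprocess.run(["nvpmodel", "-q"], capture_output=True, text=True)
    return out.stdout.strip()

print(current_power_mode())
```

Switching modes (`sudo nvpmodel -m <mode>`) lets developers trade peak performance for a tighter energy budget without changing application code.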


Final Thoughts


The Jetson Orin family gives developers a wide range of compute options, scaling from low-power Nano modules to industrial-grade AGX Orin. With Jetson Thor, NVIDIA is redefining the upper limits of AI performance for robotics and edge computing. The choice now depends on your application’s power budget, size constraints, and compute needs.

Frequently Asked Questions

What are the key architectural differences between Jetson AGX Orin and Jetson AGX Thor?

Orin pairs an Ampere GPU with Arm Cortex-A78AE CPU cores, while Thor moves to a Blackwell GPU and Arm Neoverse-V3AE cores, a change that lifts both raw compute and efficiency:

• Orin: Ampere GPU + Cortex-A78AE CPU

• Thor: Blackwell GPU + Neoverse-V3AE CPU

• Thor delivers ~7.5× higher AI compute and ~3.5× better efficiency

Which types of AI workloads benefit most from Jetson AGX Thor?

Jetson AGX Thor is designed for advanced AI workloads that require high compute throughput and modern architecture support. It excels in running transformer-based models, multimodal AI systems, and large-scale perception pipelines. These workloads benefit from its improved Tensor cores, higher TOPS, and enhanced memory capabilities.

• Transformer-based models (e.g., LLMs, vision-language models)

• Multimodal AI (vision + language + sensor fusion)

• Large-scale, real-time perception and robotics pipelines

Which module is better suited for power-constrained or embedded applications?

Jetson AGX Orin is the better fit for power-constrained and embedded designs, and the smaller Orin NX and Orin Nano modules extend that advantage further down the power curve. Orin's configurable envelope integrates far more easily into compact, thermally limited systems than Thor's 40W-130W range.

• Jetson AGX Orin: 15W-60W configurable envelope, suited to compact, thermally constrained systems

• Orin NX / Orin Nano: roughly 7W-40W depending on module, ideal for drones, small robots, and edge devices

• Jetson AGX Thor: 40W-130W, requires more robust cooling and power delivery