NVIDIA Jetson Orin Nano Sets a New Standard for Entry-Level Edge AI and Robotics


NVIDIA announced the arrival of the Jetson Orin Nano series of system-on-modules (SOMs) at this year's GTC 2022. The Jetson Orin Nano sets the bar for entry-level robotics and artificial intelligence applications, delivering up to 80 times the AI performance of the original NVIDIA Jetson Nano.

The NVIDIA Jetson Orin line spans the company's Orin-based modules, from the Jetson Orin Nano up to the Jetson AGX Orin, allowing customers to scale their projects with ease and to develop for any of them using the Jetson AGX Orin Developer Kit.

As AI evolves toward real-time processing in more and more places, NVIDIA is targeting the demands of edge computing: lower latency while remaining efficient and inexpensive.

The company intends to ship production Jetson Orin Nano modules starting in January 2023, with an entry-level price of $199. The new modules deliver up to 40 TOPS of AI performance in the small Jetson form factor, with power consumption between 5 W and 15 W, and come in two memory sizes: the Jetson Orin Nano 4GB and the Jetson Orin Nano 8GB.

The NVIDIA Jetson Orin Nano uses an Ampere-architecture GPU with eight streaming multiprocessors containing 1,024 CUDA cores and 32 Tensor Cores for AI workloads. The Ampere Tensor Cores improve performance per watt and add support for sparsity, which can double Tensor Core throughput. The Jetson Orin Nano also integrates a six-core Arm Cortex-A78AE CPU along with a video decode engine, image compositor, ISP, audio processing engine, and video input block, plus the following I/O (see the short device-query sketch after this list):

  • Up to seven PCIe Gen3 lanes
  • Three USB 3.2 Gen2 10Gbps connection ports
  • Eight MIPI CSI-2 camera lanes
  • Various other sensor I/O
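As a quick illustration (not part of NVIDIA's announcement), the sketch below queries these GPU properties through the CUDA runtime that ships with JetPack. The per-SM figures in the comments are the standard values for the Ampere architecture; the exact numbers reported depend on the module and JetPack version.

```cpp
// Minimal sketch: report the Orin Nano GPU's SM count and compute capability
// using the CUDA runtime API included in JetPack. Build with: nvcc query.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    std::printf("GPU:                %s\n", prop.name);
    std::printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    std::printf("SM count:           %d\n", prop.multiProcessorCount);
    // Ampere SMs have 128 CUDA cores and 4 Tensor Cores each, so the
    // Orin Nano 8GB's 8 SMs give 1,024 CUDA cores and 32 Tensor Cores.
    std::printf("CUDA cores (est.):  %d\n", prop.multiProcessorCount * 128);
    std::printf("Tensor Cores (est.): %d\n", prop.multiProcessorCount * 4);
    return 0;
}
```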

Another benefit is that the Jetson Orin Nano and Jetson Orin NX modules are form-factor- and pin-compatible. The two Jetson Orin Nano configurations compare as follows:

  • AI performance: 20 sparse | 10 dense TOPS (4GB); 40 sparse | 20 dense TOPS (8GB)
  • GPU: 512-core NVIDIA Ampere architecture GPU with 16 Tensor Cores (4GB); 1,024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (8GB)
  • Maximum GPU frequency: 625 MHz
  • CPU: 6-core Arm Cortex-A78AE v8.2 64-bit processor, 1.5 MB L2 + 4 MB L3
  • Maximum CPU frequency: 1.5 GHz
  • Memory: 4 GB 64-bit LPDDR5 at 34 GB/s (4GB); 8 GB 128-bit LPDDR5 at 68 GB/s (8GB)
  • Storage: – (supports external NVMe)
  • Video encode: 1080p30, handled by 1-2 CPU cores
  • Video decode: 1x 4K60 (H.265) | 2x 4K30 (H.265) | 5x 1080p60 (H.265) | 11x 1080p30 (H.265)
  • Camera: up to 4 cameras (8 via virtual channels*), 8 lanes MIPI CSI-2 D-PHY 2.1 (up to 20 Gbps)
  • PCIe: 1 x4 + 3 x1 (PCIe Gen3, root port and endpoint)
  • USB: 3x USB 3.2 Gen2 (10 Gbps), 3x USB 2.0
  • Networking: 1x GbE
  • Display: 1x 4K30 multimode DisplayPort 1.2 (+MST) / eDP 1.4 / HDMI 1.4*
  • Other I/O: 3x UART, 2x SPI, 2x I2S, 4x I2C, 1x CAN, DMIC and DSPK, PWM, GPIO
  • Power: 5 W - 10 W (4GB); 7 W - 15 W (8GB)
  • Mechanical: 69.6 mm x 45 mm, 260-pin SO-DIMM connector
  • Price: $199 (4GB); $299 (8GB)

The Jetson AGX Orin Developer Kit can emulate any of the Jetson Orin modules, allowing developers to start working in the new environment today using NVIDIA JetPack.

In the two charts below, the Jetson Orin Nano series is pitted against its predecessors on a set of AI inference workloads to show the difference in performance and efficiency. The first chart shows the FPS difference across generations, while the second shows AI inference performance per second for the four configurations tested. The Jetson Orin Nano 8GB shows roughly a 30-fold performance improvement, and NVIDIA says it plans to push that to about 45-fold with future software updates.

[Charts: FPS comparison across Jetson generations; AI inference performance per second for the four configurations tested]

Various NVIDIA frameworks will be available for developers, including the following (a DeepStream pipeline sketch appears after the list):

  • NVIDIA Isaac (robotics)
  • NVIDIA DeepStream (Vision AI)
  • NVIDIA Riva (conversational AI)
  • NVIDIA Omniverse Replicator (Synthetic Data Generation or SDG)
  • NVIDIA TAO Toolkit (optimization of pre-trained AI models)
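To give an idea of how one of these fits into a project, here is a hedged sketch of a typical DeepStream-style pipeline launched from C++ with GStreamer's parse-launch API. It assumes JetPack with the DeepStream SDK installed; the input clip sample.h264 and the nvinfer configuration file config_infer.txt are placeholder names, not files shipped with the SDK.

```cpp
// Sketch: decode an H.264 clip, batch it, run TensorRT inference through
// DeepStream's nvinfer element, overlay detections, and display the result.
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *error = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! "
        "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
        "nvinfer config-file-path=config_infer.txt ! "
        "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Wait until the clip finishes or an error is posted on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

The same pipeline string can also be run directly with gst-launch-1.0 for quick experiments before moving to application code.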

Developers interested in learning more can visit the Jetson AGX Orin Developer Kit page for additional information and available resources.

News source: NVIDIA Developer Blog
