Product Details
NVIDIA Jetson Orin Nano 4GB Module 900-13767-0040-000 Rugged Embedded Computer, 20 TOPS
The Jetson Orin Nano 8GB integrates 8GB of 128-bit LPDDR5 DRAM, and the
Jetson Orin Nano 4GB integrates 4GB of 64-bit LPDDR5 DRAM. The maximum
memory frequency of the Jetson Orin Nano is 2133 MHz. The theoretical
peak memory bandwidth is 68 GB/s on the Orin Nano 8GB and 34 GB/s on
the Orin Nano 4GB. The Memory Controller (MC) maximizes memory
utilization while providing minimum-latency access for critical CPU
requests. An arbiter prioritizes requests, optimizing memory access
efficiency and utilization and minimizing system power consumption. The
MC provides access to main memory for all internal devices. It presents
an abstract view of memory to its clients via standardized interfaces,
allowing the clients to ignore the details of the memory hierarchy. It
optimizes access to shared memory resources, balancing latency and
efficiency based on programmable parameters to provide the best system
performance.
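For reference, the 68 GB/s and 34 GB/s figures follow directly from the bus width and data rate. The short Python sketch below reproduces the arithmetic, assuming the LPDDR5 interface transfers data on both clock edges, so a 2133 MHz clock corresponds to a 4266 MT/s data rate.

# Minimal sketch: theoretical peak LPDDR5 bandwidth from bus width and
# data rate. Assumes a double-data-rate interface (2133 MHz -> 4266 MT/s).
def peak_bandwidth_gbps(bus_width_bits: int, data_rate_mtps: float) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * data_rate_mtps * 1e6 / 1e9  # GB/s

print(peak_bandwidth_gbps(128, 4266))  # Orin Nano 8GB: ~68.3 GB/s
print(peak_bandwidth_gbps(64, 4266))   # Orin Nano 4GB: ~34.1 GB/s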
GPU Configuration of the NVIDIA Jetson Orin Nano 4GB Module 900-13767-0040-000
Module | CUDA Cores | Tensor Cores | Maximum Operating Frequency |
Jetson Orin Nano 8GB | 1024 | 32 | 625 MHz |
Jetson Orin Nano 4GB | 512 | 16 | 625 MHz |
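The headline INT8 TOPS figures can be sanity-checked from these numbers. The sketch below is a hedged estimate, assuming each Ampere tensor core delivers roughly 1024 dense INT8 operations per clock (about 2048 with 2:4 structured sparsity); that per-core figure is an assumption, not stated in this listing, but with it the table values reproduce the advertised 20 sparse / 10 dense TOPS for the 4GB module.

# Hedged sanity check of INT8 TOPS from tensor-core count and clock.
# Assumption: ~1024 dense INT8 ops per tensor core per clock on Ampere,
# ~2048 with 2:4 structured sparsity (not taken from the listing above).
def int8_tops(tensor_cores: int, clock_hz: float, ops_per_core_per_clock: int) -> float:
    return tensor_cores * clock_hz * ops_per_core_per_clock / 1e12

clock = 625e6  # 625 MHz from the GPU table
print(int8_tops(16, clock, 1024))  # Orin Nano 4GB dense:  ~10.2 TOPS
print(int8_tops(16, clock, 2048))  # Orin Nano 4GB sparse: ~20.5 TOPS
print(int8_tops(32, clock, 2048))  # Orin Nano 8GB sparse: ~41 TOPS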
CPU Configuration of the NVIDIA Jetson Orin Nano 4GB Module 900-13767-0040-000
Module | CPU Cores | CPU Maximum Frequency |
Jetson Orin Nano 8GB | 6 | 1.5 GHz |
Jetson Orin Nano 4GB | 6 | 1.5 GHz |
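On a running module, the advertised maximum CPU frequency can be checked against the standard Linux cpufreq interface. The sketch below is a minimal example reading the usual sysfs path; the frequencies actually available depend on the active power mode.

# Minimal sketch: read the maximum CPU clock from the standard Linux
# cpufreq sysfs interface (values are reported in kHz).
from pathlib import Path

def cpu_max_freq_ghz(cpu: int = 0) -> float:
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/cpuinfo_max_freq")
    return int(path.read_text().strip()) / 1e6  # kHz -> GHz

print(f"CPU0 max frequency: {cpu_max_freq_ghz():.2f} GHz")  # expect ~1.5 GHz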
NVIDIA Jetson Orin Nano 4GB 900-13767-0040-000 Technical
Specification
AI Performance | Up to 20 (Sparse) INT8 TOPS and 10 (Dense) INT8 TOPS |
Ampere GPU | 512 NVIDIA® CUDA® cores | 16 Tensor cores |
Arm Cortex-A78AE CPU | Six-core (Orin Nano 8GB and 4GB) Cortex-A78AE ARMv8.2 (64-bit) heterogeneous multi-processing (HMP) CPU architecture | 2x clusters: 1x 4-core cluster (128 KB L1 + 256 KB L2 per core + 2 MB L3) + 1x 2-core cluster (128 KB L1 + 256 KB L2 per core + 2 MB L3) | System cache: 4 MB (shared across all clusters) |
Temperature Range | -25°C to 90°C |
Supported Power Input | 5V-20V |
Module Size | 69.6 mm x 45 mm | 260-pin SO-DIMM connector |
Memory | 4GB 64-bit LPDDR5 DRAM |
Networking | 10/100/1000 BASE-T Ethernet | Media Access Controller (MAC) |
Storage | Supports external storage (NVMe) |
Company Profile
Beijing Plink-AI is a high-tech company integrating research and
development, production, and sales, and a whole-solution supplier of
industrial and civil intelligent devices. Founded in 2009, Beijing
Plink-AI adheres to its core philosophy of 'Creation + Efficiency +
Intelligence', focusing on the close combination of products and
applications and exploring GPU-based cloud-to-edge solutions. Through
in-depth cooperation with customers, we have accumulated valuable
experience ranging from cloud servers to edge-side AI product
deployment.

Led by gaming graphics cards, professional graphics cards, GPU
servers, and embedded AI all-in-one machines, its products serve cloud
services, smart healthcare, smart agriculture, smart logistics, smart
cities, industrial automation, education, and scientific research,
promoting the transformation from traditional production modes to
intelligent modes across industries.
The company is located in Beijing, a center of economics, science, and
technology with concentrated research and development resources. Our
team is vibrant, highly educated, and strong in research and
development, with more than 10 years of experience in high-end hardware
and software development. Beijing Plink-AI Technology is one of
NVIDIA's global Elite Partners and an authorized Leadtek agent.