Aaronn Electronic has expanded its long-standing partnership with ADLINK. In addition to the manufacturer’s panel PCs, box PCs and smart cameras, the system integrator from Puchheim near Munich now also offers ADLINK’s embedded MXM GPU modules. This new product line is based on NVIDIA’s embedded GPUs with the Ampere and Turing architectures. Both are designed to boost the performance of a wide range of IoT edge applications, such as image processing and analysis, compute acceleration, and artificial intelligence (AI) applications.
ADLINK’s embedded MXM GPU modules stand out for their high performance, high data bandwidth, energy efficiency, and longevity. Another advantage is their small hardware footprint. They thus meet the requirements of the industrial sector, where size, weight, and low power consumption are important criteria.
In addition, they score points wherever fast response, high accuracy, and low latency are required, thanks to efficient on-site data processing. ADLINK is a Jetson Elite Partner and OEM Elite Partner of NVIDIA; collaborating on AI in the embedded space is therefore the next logical step.
“We are seeing growing interest in the use of GPUs in edge computing and AI,” said Florian Haidn, managing director of Aaronn Electronic. “Many of our customers have had experience with edge computing and AI and now see that there are many more possibilities there – but they require significantly more computing power than previously thought. With ADLINK’s embedded MXM GPU modules, technology developed for data centers and supercomputers is now available for these applications. We’re happy to help customers figure out how they can benefit from it.”
Embedded MXM GPU modules are designed for many demanding applications
Potential applications for the embedded MXM GPU modules are diverse and numerous. In healthcare, for example, they enable fast image reconstruction for mobile X-ray, ultrasound, and endoscopy systems; in traffic, real-time object recognition; and in logistics, navigation and route planning for autonomous drones and autonomous mobile robots (AMRs). In addition, their high performance makes the difference in all other scenarios involving time-sensitive and mission-critical applications in control, monitoring, and communication.
Aaronn Electronic offers various embedded MXM GPU module models from ADLINK to meet these different requirements. They are available in the standard MXM 3.1 Type A (82 x 70 mm), Type B (82 x 105 mm), or Type B+ (82 x 110 mm) form factors.
The models based on the Turing architecture offer a PCIe Gen 3 x16 interface, while the models based on the Ampere architecture offer PCIe Gen 4 with transfer rates from 7.8 up to 31.5 GByte/s. Data-intensive tasks in particular benefit from these high transfer rates. Up to 16 GBytes of memory and 17.66 TFLOPS of FP32 peak performance provide ample performance reserves.
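The quoted transfer rates line up with the theoretical PCIe link bandwidth, which can be estimated from the per-lane signaling rate and the 128b/130b line coding. A back-of-the-envelope sketch (the lane configurations shown are illustrative assumptions, not module specifications):

```python
# Theoretical one-direction PCIe throughput: per-lane rate times lane count.
# PCIe Gen 3 and Gen 4 both use 128b/130b encoding; Gen 4 doubles the
# signaling rate from 8 GT/s to 16 GT/s per lane.

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GByte/s for a PCIe link."""
    giga_transfers = {3: 8.0, 4: 16.0}[gen]       # GT/s per lane
    encoding = 128 / 130                          # 128b/130b line coding
    return giga_transfers * encoding / 8 * lanes  # bits -> bytes, x lanes

print(f"Gen 3 x16: {pcie_bandwidth_gbs(3, 16):.2f} GByte/s")  # ~15.75
print(f"Gen 4 x4:  {pcie_bandwidth_gbs(4, 4):.2f} GByte/s")   # ~7.88
print(f"Gen 4 x16: {pcie_bandwidth_gbs(4, 16):.2f} GByte/s")  # ~31.51
```

A Gen 4 x4 link lands at roughly 7.9 GByte/s and a full Gen 4 x16 link at roughly 31.5 GByte/s, matching the range given above.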
The right combination of CUDA cores, RT cores and tensor cores
The embedded MXM GPU modules are excellently equipped for image analysis and AI tasks. The models based on the Ampere architecture have 2,048 to 5,888 CUDA cores, 16 to 46 RT cores, and 64 to 184 Tensor cores. The models based on the Turing architecture come with 896 to 3,072 CUDA cores, up to 48 RT cores, and up to 384 Tensor cores.
The number of RT cores (ray-tracing cores) matters for lighting calculations, shadows, and reflections. NVIDIA introduced the Tensor cores specifically for calculations in the field of artificial intelligence: they are designed to perform complex matrix multiplications and additions extremely fast, and they can simplify computations where acceptable, for example by using reduced precision, to accelerate them further. In addition, they can usefully support the ray-tracing technology.
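Conceptually, a Tensor core executes a fused matrix multiply-accumulate, D = A x B + C, on a small tile in a single hardware step. A minimal pure-Python sketch of that operation (the 2 x 2 tile size is illustrative only; real tiles are larger and use mixed precision):

```python
# Sketch of the operation a Tensor core fuses into one step:
# D = A x B + C on a small matrix tile.

def fused_mma(A, B, C):
    """Matrix multiply-accumulate on square tiles given as nested lists."""
    n = len(A)
    return [
        [sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j] for j in range(n)]
        for i in range(n)
    ]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[0.5, 0.5], [0.5, 0.5]]
print(fused_mma(A, B, C))  # [[19.5, 22.5], [43.5, 50.5]]
```

Because deep-learning workloads are dominated by exactly this kind of matrix arithmetic, fusing it in hardware is what gives Tensor cores their speed advantage.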
In general, the Tensor cores primarily accelerate deep learning and the development of neural networks. With the CUDA cores and the toolkits and software development kits provided by NVIDIA, developers can create complex simulations, for example of particle distribution or flow. Here, the CUDA cores of the Ampere architecture are twice as fast as those of the Turing architecture for single-precision (FP32) floating-point operations, and the training throughput of the Tensor cores is significantly higher. This makes the embedded MXM GPU modules based on the Ampere architecture suitable for particularly demanding applications.
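The 17.66 TFLOPS figure quoted above is consistent with the usual rule of thumb that each CUDA core retires one fused multiply-add (two floating-point operations) per clock. A sketch of the arithmetic, assuming a boost clock of about 1.5 GHz (the clock is an assumption; it is not stated in this article):

```python
# Rough peak-FP32 estimate: each CUDA core performs one fused multiply-add
# (2 FLOPs) per clock cycle, so peak ~= 2 x cores x clock.

def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TFLOPS."""
    return 2 * cuda_cores * boost_clock_ghz / 1000.0

# Top Ampere-based module from the article: 5,888 CUDA cores.
print(f"{peak_fp32_tflops(5888, 1.5):.2f} TFLOPS")  # 17.66 TFLOPS
```

Real-world throughput depends on memory bandwidth and occupancy, so this peak figure is an upper bound rather than a sustained rate.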
With the extension of its portfolio to include the embedded MXM GPU modules from ADLINK, Aaronn demonstrates once again its capabilities as a one-stop shop for embedded systems. The experts at Aaronn Electronic will be glad to advise you on the selection of the right modules for your applications and the implementation of your project. In doing so, they leverage their extensive experience in automation and digitization and offer personal technical support as well as many years of experience in design-in, carrier board design, custom chassis design, and rapid prototyping.