The GIGA POD – All-In-One Solution for Supercomputing
GIGABYTE has been deploying GIGA PODs for leading cloud service providers and has the know-how to help data centers scale their infrastructure. Each turnkey solution (or pod) is composed of eight racks housing 32 GIGABYTE G593 nodes, for a total of 256 NVIDIA H100 Tensor Core GPUs that can achieve 1 exaflop (one quintillion floating-point operations per second) of FP8 AI performance. At the GIGABYTE booth is a G593-SD0 server built for Intel Xeon processors and NVIDIA H100 GPUs, the same platform GIGABYTE used in its most recent MLPerf benchmark submission for AI workloads.
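As a back-of-envelope check, the quoted figures line up if each of the 32 G593 nodes carries eight GPUs and each H100 SXM delivers roughly 3,958 TFLOPS of FP8 compute with sparsity (a commonly cited spec, used here as an assumption):

```python
# Rough sanity check of the GIGA POD numbers quoted above.
# The per-GPU FP8 figure (with sparsity) is an assumed spec, not from the article.
racks = 8
nodes_per_rack = 4            # 32 G593 nodes spread across 8 racks
gpus_per_node = 8             # each G593 holds 8 NVIDIA H100 GPUs
fp8_tflops_per_gpu = 3_958    # assumed H100 SXM FP8 TFLOPS (with sparsity)

total_gpus = racks * nodes_per_rack * gpus_per_node
total_exaflops = total_gpus * fp8_tflops_per_gpu / 1_000_000

print(total_gpus)                 # 256 GPUs
print(round(total_exaflops, 2))   # ~1.01 exaflops of FP8 AI performance
```

With those assumptions the pod lands at roughly 1 exaflop, matching the marketing figure.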
Modularized AI & HPC Systems – NVIDIA Grace & Grace-Hopper
For the modularized theme, there are high-density nodes using Arm-based processors and supporting NVMe drives and NVIDIA BlueField-3 DPUs. The 2U H263-V11 has two nodes for the NVIDIA Grace CPU Superchip, and the H223-V10 is for the NVIDIA Grace Hopper Superchip.
Scalable Data Center Infrastructure for All Use Cases
GIGABYTE's G493-SB0 is an NVIDIA-Certified system for NVIDIA L4 Tensor Core and L40 GPUs, with room for eight PCIe Gen5 GPUs and expansion slots for NVIDIA BlueField and ConnectX networking technologies; in the future, it will be officially known as an NVIDIA OVX system. Also new is the XH23-VG0, which follows the NVIDIA MGX modular design. It features a single NVIDIA Grace Hopper Superchip with FHFL expansion slots for accelerated, giant-scale AI and HPC applications.
Enterprise Computing
GIGABYTE customers have come to expect bold designs that cater to specific workloads and markets. The first new enterprise server, the S183-SH0, is a slim 1U form factor with dual Intel Xeon processors supporting 32 E1.S form-factor solid-state drives for a fast, dense storage configuration. Another server supporting E1.S drives is the H253-Z10, a multi-node server with front access. There are also two G293 GPU servers tailored to AI training or AI inference workloads. The G293-Z43 is an inference specialist that can support sixteen Alveo™ V70 accelerators across four GPU cages with ample cooling. For an optimally priced GPU server, GIGABYTE offers the G293-Z23, which supports higher-TDP CPUs and PCIe Gen4 and Gen5 GPUs such as the NVIDIA L40S.
NVIDIA H200 Tensor Core GPU
NVIDIA announced the NVIDIA H200 Tensor Core GPU with enhanced memory performance, which GIGABYTE will support with upcoming server models. The NVIDIA H200 GPU supercharges generative AI and HPC with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200 GPU's faster, larger memory fuels the acceleration of generative AI and LLMs while advancing scientific computing for HPC workloads. The NVIDIA HGX H200, the world's leading AI computing platform, features the H200 GPU for the fastest performance. An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
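The aggregate figures for the eight-way HGX H200 follow from the published per-GPU specs, assuming 141 GB of HBM3e and roughly 3,958 TFLOPS of FP8 compute (with sparsity) per H200; these per-GPU numbers are assumptions, not taken from the article:

```python
# Sanity check of the eight-way HGX H200 aggregate figures quoted above.
# Per-GPU specs below are assumed from public H200 materials.
gpus = 8
hbm3e_gb_per_gpu = 141          # assumed HBM3e capacity per H200
fp8_tflops_per_gpu = 3_958      # assumed FP8 TFLOPS per H200 (with sparsity)

aggregate_memory_tb = gpus * hbm3e_gb_per_gpu / 1_000     # ≈ 1.1 TB
aggregate_fp8_pflops = gpus * fp8_tflops_per_gpu / 1_000  # ≈ 31.7 PF

print(round(aggregate_memory_tb, 3))    # 1.128 TB of aggregate HBM
print(round(aggregate_fp8_pflops, 1))   # ~31.7 PF, rounded up to ~32 PF in marketing
```

Eight GPUs at 141 GB each gives ~1.1 TB of high-bandwidth memory, consistent with the figure above; the FP8 total comes out just under the quoted 32 petaflops with these assumed specs.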
For more information, see a complete list of GIGABYTE DLC servers, immersion servers, and immersion tanks.
To submit a query: Contact Sales
Follow GIGABYTE on Twitter: http://twitter.com/GIGABYTEServer
Follow Giga Computing on LinkedIn: https://www.linkedin.com/company/giga-computing/
Follow Giga Computing on Facebook: https://www.facebook.com/gigabyteserver