SAN JOSE, Calif.—March 18, 2024—PEGATRON, a globally recognized EMS+ manufacturer, is delighted to announce its participation in NVIDIA GTC, taking place from March 18 to 21 in San Jose, California. At the conference, PEGATRON will showcase its cutting-edge intelligent AI manufacturing solutions, including AI servers, a GenAI platform, and applications, at booth 533.
NVIDIA recently announced PEGATRON as a global partner for its advanced GPU computing technology, specifically the latest NVIDIA GB200 Grace Blackwell Superchip. The PEGATRON-developed GB200 NVL36 is a multi-node, liquid-cooled, rack-scale platform for the most compute-intensive workloads. The GB200 NVL36 also includes NVIDIA BlueField®-3 data processing units, which enable capabilities such as cloud network acceleration and GPU compute elasticity in hyperscale AI clouds.
Accelerated Computing Empowers GenAI for Industrial Digital Transformation
PEGATRON is deeply committed to driving hardware innovation and improving user experiences. In step with the rapid advancement of large language models (LLMs), PEGATRON is developing an AI server, leveraging its extensive experience in smart manufacturing to improve LLM training efficiency. This groundbreaking technology is set to reshape the industry by delivering advanced capabilities and solutions.
Highlights of PEGATRON’s Presence at GTC 2024:
· Collaborating with NVIDIA to Build Servers for Enterprise AI
PEGATRON is proud to collaborate with NVIDIA to help shape the future of AI and accelerated computing. PEGATRON introduces the AS201-1N0[1], an AI server equipped with the NVIDIA GH200 Grace Hopper Superchip and well suited to LLM training and inference applications. Its modular hardware architecture, based on the NVIDIA MGX reference architecture, provides greater flexibility to meet the diverse accelerated computing requirements of data centers. It is a high-performance, flexibly configurable, and scalable computing system, particularly suitable for applications requiring large-scale parallel processing and NVIDIA GPU acceleration.
· Direct-to-Chip Liquid Cooling Solution for AI Servers
The PEGATRON GS200-2T0[2] features dual-socket Intel® 4th/5th Gen Xeon® Scalable processors and can be paired with 2x NVIDIA A100 Tensor Core GPUs. Direct-to-chip liquid cooling of both the CPUs and GPUs in confined spaces significantly lowers total cost of ownership by increasing hardware density. From physically accurate digital twins to generative AI training and inference, PEGATRON delivers top-of-the-line graphics and compute performance, enabling a wide range of AI-powered workloads in the data center. Advanced cooling solutions such as this also help reduce data center power consumption.
· Accelerating the New Era of Smart Manufacturing
Through a demonstration of AI-assisted programming of a robotic arm, PEGATRON showcases its ability to simulate smart-factory scenarios such as material packing and debugging. The demonstration highlights how natural language processing lets operators give machines precise instructions intuitively, overcoming language barriers and reducing programming effort. The combination of PEGATRON SVR and PEGAVERSE is expected to deliver enhanced efficiency and more comprehensive solutions, and deploying accelerated computing and integrating NVIDIA Omniverse technology into PEGAVERSE will help accelerate industrial digitalization.
Join PEGATRON at NVIDIA GTC at booth 533 to experience the future of manufacturing innovation firsthand. Explore cutting-edge solutions and witness how PEGATRON is shaping the industry with its intelligent AI manufacturing solutions. Don’t miss this opportunity to be a part of the revolution in smart manufacturing. See you at booth 533!
For more information, please visit: PEGATRON SVR (pegatroncorp.com)
[1] AS201-1N0: an AI server that utilizes the NVIDIA MGX 2U chassis design. Equipped with the powerful NVIDIA GH200 Grace Hopper Superchip, the AS201-1N0 is suitable for large-scale AI and high-performance computing (HPC) applications running on terabytes of data. Combined with the NVIDIA BlueField-3 DPU, it enables software-defined, hardware-accelerated cloud infrastructure. The system is certified for the NVIDIA AI Enterprise software platform, which provides enterprise-grade security, stability, and support for generative AI workloads.
[2] GS200-2T0: a 2U general-purpose server on the Intel Eagle Stream platform with two-socket Intel Sapphire Rapids/Emerald Rapids processors; it supports 2x NVIDIA A100 Tensor Core GPUs and 24x SAS/SATA/NVMe SSDs.