Stony Brook University’s AI Institute Celebrates Upgrade to NVIDIA’s Latest GPU

SBU’s Institute for AI-driven Discovery and Innovation recently upgraded its computational resources with the NVIDIA H100 Tensor Core GPU to power upcoming research in AI, resulting in a significant improvement in performance and scalability.

Launched in 2022, the H100 is a ninth-generation data center GPU packed with 80 billion transistors. It is based on NVIDIA’s Hopper architecture, which allows it to provide up to 4X faster training for GPT-3 models and speed up large language models (LLMs) by an incredible 30X over the previous generation. SBU deployed the new hardware to enhance AI inference for real-time and immersive applications built on large-scale AI models.

The upgrade was lauded by professors and students alike. The AI Institute’s Director, Professor Steven Skiena, reflected on its impact on the campus community in light of the values detailed in the university’s strategic plan: community, excellence, equity, collaboration, and innovation.

“Moving closer to a unified vision of expanding the uses of AI at Stony Brook and beyond requires all the computational prowess we can muster. We’re excited about adding the H100 to our resources and look forward to the contributions it will enable us to make to the AI community.”

Akash Ganesh, who participated in the upgrade, remarked on the H100’s high-performance computing capabilities: “H100s are quite literally the highest-end GPUs on the market right now, and I am excited to witness how they impact AI research at the university.”

Srikar Yellapragada, a Ph.D. student who has been using H100s for the past couple of weeks for an ongoing project, commented on the processor’s capabilities: “My current project involves training diffusion models, which require GPUs with high VRAM for effective training, and the new GPUs are immensely helpful for my research. Using the H100 lets me choose the same batch size as the experiments I’m trying to replicate. Previously, I had to lower the batch size and learning rate to fit the models on smaller GPUs.”

The H100 was preceded by the NVIDIA A100, part of the AI Institute’s existing resources, which, when announced in 2020, was the world’s highest-performing GPU for elastic data centers running AI, data analytics, and HPC workloads, delivering up to 20X higher performance than its predecessor.

The H100 GPU has been shown to outperform previous-generation NVIDIA GPUs by a wide margin. It is also designed to work seamlessly with NVIDIA’s NVLink interconnect technology, which provides high-bandwidth communication between GPUs so users can scale up computing performance quickly and easily, making the H100 an ideal solution for large-scale machine learning and deep learning workloads.

The upgrade to NVIDIA’s H100 was funded by the National Science Foundation (NSF), with matching funds from the Institute for Advanced Computational Science (IACS) and The Office of Research Compliance (ORC) at Stony Brook, without whose vision, guidance, and support this would not have been possible.


Ankita Nagpal
Communications Assistant