
NVIDIA will be present at Hot Chips 2024, where it plans to present its new energy-efficiency proposals for the server market: Blackwell GPUs, hybrid cooling, and artificial-intelligence agents that help produce chip designs.

Through these solutions, the manufacturer wants to help data centers transition to an even more efficient infrastructure. According to NVIDIA, air-cooled servers will have little place in the future; the new models it is building take a hybrid approach, combining air and liquid cooling.


Liquid cooling moves heat away from systems more efficiently than air, making it easier to keep computing systems cool even under heavy workloads. It also takes up less space and consumes less power than air cooling, allowing users to add more racks and compute power.
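The physics behind that claim can be sketched with a back-of-the-envelope calculation. The constants below are textbook values for air and water, not NVIDIA figures, and the comparison ignores flow rates and pumping losses; it only shows how much more heat a given volume of liquid coolant can carry per degree of temperature rise.

```python
# Rough comparison (illustrative, not NVIDIA data): volumetric heat
# capacity = specific heat * density, i.e. how much heat one cubic
# metre of coolant absorbs per kelvin of temperature rise.

AIR_CP = 1005.0      # J/(kg*K), specific heat of air at ~25 C
AIR_RHO = 1.18       # kg/m^3, density of air at ~25 C
WATER_CP = 4186.0    # J/(kg*K), specific heat of water
WATER_RHO = 997.0    # kg/m^3, density of water at ~25 C

air_volumetric = AIR_CP * AIR_RHO        # J/(m^3*K) for air
water_volumetric = WATER_CP * WATER_RHO  # J/(m^3*K) for water

ratio = water_volumetric / air_volumetric
print(f"Water carries ~{ratio:.0f}x more heat per unit volume than air")
```

With these values the ratio comes out in the low thousands, which is why a thin liquid loop can replace large volumes of forced air.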

NVIDIA’s plan is to retrofit some air-cooled server designs into liquid units – offering a quick and easy solution for adding more capacity to existing racks.

NVIDIA is also working on designs that pipe liquid cooling directly to the chip, either through cooling distribution units or by fully submerging servers in immersion tanks. Although this model requires a larger upfront investment, it reduces energy consumption and operating costs.


Proposal for server cooling

Ali Heydari, NVIDIA’s director of server cooling and infrastructure, will present all of these energy-consumption and efficiency innovations for the data center market at Hot Chips 2024.

Some of this work supports the U.S. Department of Energy’s COOLERCHIPS program, which aims to develop advanced cooling technologies. The team is using NVIDIA’s Omniverse platform to create digital twins that model cooling efficiency and power consumption, helping optimize designs.
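At its simplest, the kind of thermal behavior such a digital twin captures can be reduced to a lumped model: steady-state chip temperature equals coolant temperature plus power times the thermal resistance of the cooling path. The sketch below uses that textbook relation with invented, illustrative numbers; it is not the Omniverse API or any NVIDIA model.

```python
# Minimal lumped thermal model (illustrative only, not Omniverse):
# steady-state junction temperature T = T_coolant + P * R_th,
# where R_th is the thermal resistance of the cooling path in C/W.

def chip_temp(power_w: float, coolant_temp_c: float, r_th_c_per_w: float) -> float:
    """Steady-state chip temperature for a given power draw and cooling path."""
    return coolant_temp_c + power_w * r_th_c_per_w

# Hypothetical comparison: air cooling has a higher thermal resistance
# than a direct-to-chip liquid loop, so the same chip runs hotter.
air_temp = chip_temp(power_w=700, coolant_temp_c=35, r_th_c_per_w=0.08)
liquid_temp = chip_temp(power_w=700, coolant_temp_c=30, r_th_c_per_w=0.03)
print(f"air-cooled: {air_temp:.0f} C, liquid-cooled: {liquid_temp:.0f} C")
```

Real digital twins add transient behavior, airflow simulation, and facility-level power modeling on top of relations like this one.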


NVIDIA architecture directors Ajay Tirumala and Raymond Wong will also present a first look at the GB200 NVL72 platform, showcasing how the Blackwell GPU, Grace CPU, BlueField data processing unit, ConnectX network interface card, NVLink interconnect, Spectrum Ethernet and Quantum InfiniBand switches work together.

The goal is to set a new standard of performance in artificial intelligence and accelerated computing while advancing energy efficiency. The platform supports 72 Blackwell GPUs and 36 Grace CPUs, promising to further elevate AI computing.


The NVIDIA GB200 NVL72 is a multi-node solution built for low-latency, high-throughput token generation in LLM inference. According to the manufacturer, it runs LLM workloads up to 30 times faster and can serve trillion-parameter models in real time.

Additionally, Tirumala and Wong will discuss NVIDIA’s Quasar Quantization System, which combines NVIDIA’s algorithmic innovations, software libraries and tools with the second-generation Blackwell Transformer Engine to deliver high accuracy from low-precision models.
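The general idea behind quantization, which Quasar builds on, is to store weights in a narrow numeric format plus a scale factor, so values can be reconstructed with small error. The sketch below is a generic symmetric int8 quantizer for illustration, not the Quasar system (which targets formats such as FP4 with additional algorithmic machinery).

```python
import numpy as np

# Generic symmetric int8 quantization (illustrative; NOT Quasar itself):
# map floats into [-127, 127] integers plus one float scale factor.

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0        # one scale for the whole tensor
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale    # reconstruct approximate floats

rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = np.abs(weights - restored).max()
print(f"max reconstruction error: {max_err:.4f}")
```

The reconstruction error is bounded by half the quantization step, which is why well-scaled low-precision weights can preserve model accuracy while cutting memory and bandwidth.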


Marcio Aguiar, director of NVIDIA’s Enterprise division for Latin America, says all of these resources will help drive the next generation of AI.

The NVIDIA Blackwell platform represents a significant milestone in the evolution of accelerated computing, uniting the most advanced hardware and software to power the next generation of artificial intelligence. By combining multiple chips, such as Blackwell GPUs and Grace CPUs, we are setting new standards in performance and energy efficiency, enabling enterprises around the world to unlock the true potential of AI at scale.

Marcio Aguiar


AI’s contribution to processor design

Mark Ren, NVIDIA’s director of design automation research, will present at Hot Chips 2024 how artificial-intelligence models are used to improve CPU design quality and productivity, boosting the efficiency of manual processes and automating time-consuming tasks.

He’ll talk about how these models include prediction and optimization tools that help his engineering team quickly analyze and improve chip designs, as well as LLMs that can help engineers answer questions, generate code, and debug design problems.

NVIDIA researchers are using AI agents powered by LLMs that autonomously reason and act using custom circuit-design tools, in addition to interacting with designers and learning from a database fed by humans and the agents themselves.
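The reason-then-act loop described above can be sketched in a few lines. NVIDIA has not published its agent interface, so `call_llm` and the tool functions below are hypothetical stand-ins; the point is the pattern: the model reasons, picks a design tool, observes the result, and repeats until it can answer.

```python
# Minimal LLM agent loop (hypothetical interface, not NVIDIA's).
# `call_llm` is any function that, given the conversation so far and the
# available tool names, returns either a tool invocation or a final answer.

def run_agent(task, tools, call_llm, max_steps=8):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(history, tools=list(tools))  # model picks an action
        if reply["type"] == "final":                  # agent is done reasoning
            return reply["content"]
        observation = tools[reply["tool"]](**reply["args"])  # run a design tool
        history.append({"role": "tool", "content": str(observation)})
    return None  # gave up within the step budget
```

In the published description, the tools would be custom circuit-design utilities, and the observations (plus human feedback) also feed the shared database the agents learn from.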

Ren also says that NVIDIA is not only building this technology but already using it. More details are expected during Hot Chips 2024, which runs through August 27.

Source: https://www.adrenaline.com.br/nvidia/nvidia-apresenta-inovacoes-que-melhoram-eficiencia-energetica-de-servidores/


