
Nvidia CEO Unveils Exciting News at Computex



Jensen Huang Announces Plans to Expand Generative AI Across Data Centers

In a two-hour keynote speech at Computex in Taipei, Nvidia co-founder and CEO Jensen Huang made several announcements covering chip release dates, partnerships with major companies, and the DGX GH200 supercomputer. Here are the details:

Nvidia GeForce RTX 4080 Ti GPU for Gamers in Full Production

Nvidia has partnered with Taiwanese companies to produce the GeForce RTX 4080 Ti GPU in large quantities for gamers.

Nvidia Avatar Cloud Engine (ACE) for Games

An AI model foundry service that uses customizable, pre-trained models to enhance NPCs’ language interactions, aiming to give game characters more personality through AI.

Nvidia CUDA Computing Model

The CUDA computing model, which now has four million developers and over 3,000 applications, has been downloaded 40 million times, 25 million of them in the last year alone.

GPU Server HGX H100 in Full Volume Production

The HGX H100 GPU server, which Huang called the world’s first computer with a built-in Transformer Engine, is now in full volume production at several Taiwanese companies.

Nvidia’s Acquisition of Mellanox

Huang referred to Nvidia’s $6.9 billion acquisition of Mellanox, a high-performance networking company, in 2019 as one of the greatest strategic decisions the company has ever made.

Production of the Next Generation of Hopper GPUs

Production of the next generation of Hopper GPUs will begin in August 2024, exactly two years after the first generation entered production.

Nvidia GH200 Grace Hopper

The GH200 Grace Hopper, a superchip designed for high-resilience data center applications, boasts 4 petaFLOPS of Transformer Engine performance, 72 Arm CPU cores, 96GB of HBM3, and 576GB of GPU memory. Huang called it the world’s first accelerated computing processor that also has a large memory.


Nvidia DGX GH200

The DGX GH200 is Nvidia’s solution to Grace Hopper’s memory limitations. It is built by connecting eight Grace Hopper superchips with three NVLink switches to form a pod, with pods linked together at 900GB/s. Thirty-two pods are joined through a further layer of switches to connect a total of 256 Grace Hopper chips. The result is a 1-exaFLOPS Transformer Engine system with 144TB of GPU memory that works as one giant GPU.
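The topology described above can be sanity-checked with simple arithmetic. This is only a sketch using the figures quoted in the keynote (and assuming 1TB = 1024GB), not an official specification:

```python
# Sketch of the DGX GH200 topology arithmetic described above.
# All figures are those quoted in the article, not an official spec.

chips_per_pod = 8        # Grace Hopper superchips joined by NVLink switches
pods = 32                # pods joined by a further layer of switches
total_chips = chips_per_pod * pods
print(total_chips)       # 256 Grace Hopper chips in one DGX GH200

total_gpu_memory_tb = 144
# Assuming 1TB = 1024GB for this back-of-envelope check:
per_chip_memory_gb = total_gpu_memory_tb * 1024 / total_chips
print(per_chip_memory_gb)  # 576.0, consistent with the per-superchip figure above
```

The 576GB-per-chip result matches the GH200 memory figure quoted earlier, which is a useful consistency check on the 144TB total.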

Nvidia and SoftBank Partnership

Nvidia and SoftBank have entered into a partnership to bring the Grace Hopper superchip into SoftBank’s new distributed data centers in Japan. They will host generative AI and wireless applications on a multi-tenant common server platform, reducing costs and energy use.

Nvidia MGX Reference Architecture

The Nvidia MGX reference architecture, developed in partnership with companies in Taiwan, gives system manufacturers a modular blueprint to build more than 100 server variations for AI, accelerated computing, and Omniverse uses. Partners include ASRock Rack, Asus, Gigabyte, Pegatron, QCT, and Supermicro.

Spectrum-X Accelerated Networking Platform

The Spectrum-X accelerated networking platform includes the Spectrum-4 switch, with 128 ports at 400Gb/s each for 51.2Tb/s of aggregate bandwidth, enabling a new type of Ethernet. The BlueField-3 SmartNIC works with the Spectrum-4 switch to perform congestion control, ensuring performance isolation, in-fabric computing, and adaptive routing.
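The two bandwidth figures quoted above are consistent with each other, as a quick sketch shows (figures are those from the announcement, not a datasheet):

```python
# Sketch of the Spectrum-4 switch bandwidth arithmetic quoted above.
ports = 128
port_speed_gbps = 400              # Gb/s per port, as quoted
aggregate_tbps = ports * port_speed_gbps / 1000
print(aggregate_tbps)              # 51.2 Tb/s aggregate switching capacity
```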

WPP and Nvidia Partnership

Nvidia partnered with WPP, the largest ad agency in the world, to develop a content engine based on Nvidia Omniverse that can produce photo and video content for advertising.

Nvidia Isaac AMR

Nvidia Isaac AMR is a robot platform available to anyone who wants to build autonomous mobile robots. It is built around a chip called Nova Orin, making it the first full-reference robotics stack, according to Huang.


During his keynote speech at Computex in Taipei on May 29, 2023, Nvidia CEO Jensen Huang made several announcements about partnerships with major companies and chip releases. Nvidia is determined to bring generative AI to more data centers and to develop projects that advance computing innovation. These advancements will help Nvidia strengthen its position as one of the most valuable companies in the world.


What is the Nvidia CUDA computing model?

The CUDA computing model, developed by Nvidia, is a parallel computing platform and programming model that developers use to solve complex computational problems in high-performance computing (HPC) and modern data centers.
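The core idea of the model is that a "kernel" function runs once per thread, and each thread derives a global index from its block and thread coordinates. The sketch below mimics that indexing scheme in plain Python purely for illustration; real CUDA code would be written in C/C++ and launched on a GPU:

```python
# A Python sketch of CUDA's execution model: a "kernel" runs once per
# thread, and each thread computes its global index from its block and
# thread coordinates. This sequential loop only mimics the indexing
# scheme; it is not actual CUDA code.

def vector_add_kernel(block_idx, block_dim, thread_idx, a, b, out):
    i = block_idx * block_dim + thread_idx  # global thread index
    if i < len(out):                        # bounds check, as in real kernels
        out[i] = a[i] + b[i]

def launch(grid_dim, block_dim, a, b):
    # On a GPU all of these "threads" would execute in parallel.
    out = [0.0] * len(a)
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            vector_add_kernel(block_idx, block_dim, thread_idx, a, b, out)
    return out

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
print(launch(grid_dim=2, block_dim=2, a=a, b=b))  # [11.0, 22.0, 33.0, 44.0]
```

The bounds check matters because the grid is sized in whole blocks, so the number of launched threads can exceed the data length.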

What is the purpose of DGX GH200?

The purpose of the DGX GH200 is to provide a solution to Grace Hopper’s memory limitations. It is created by combining eight Grace Hopper superchips using three NVLink switches, then connecting pods together at 900GB/s. Finally, 32 pods are connected via another layer of switches to support a total of 256 Grace Hopper chips.

What is the Nova Orin?

The Nova Orin is a chip used in the Nvidia Isaac AMR platform, which Huang described as the first full-reference robotics stack.
