From PC Mag: At the start of Computex 2022, Nvidia showed off a handful of upcoming products and industry blueprints that aim to push more compute performance into several corners of the market. Among these are designs for new "superchip" systems based on the Grace and Hopper architectures (for massive modeling and learning tasks), plus a compact GPU targeted at data centers and new "Jetson Orin" systems for AI-intensive workloads.
High-performance computing hardware isn't one-size-fits-all; different specialized tasks favor different kinds of high-end hardware. Nvidia has made large strides in this department in recent years, and the company now aims to make it easier for its data-center customers to get the right system. To that end, it announced four new reference designs for data-center systems.
The four vary in performance and hardware, and are built to be easily customizable. All are based on iterations of the Grace and Hopper architectures and the giant-compute processors built on them, and they are designed for massive AI and scientific compute tasks. Of the new reference designs, the Grace-chip-based blueprints target cloud-graphics and Nvidia Omniverse applications (the latter for 3D design simulation and collaboration), while a Grace Hopper-based one targets AI training and inference.
Nvidia also announced what it's calling its first liquid-cooled data-center GPU. The Nvidia A100 is a PCI Express card designed to fit into a single PCI Express slot, unlike the two- and three-slot designs of most cards aimed at the consumer mainstream and high-end market. The difference: The "Ampere"-based A100 uses liquid cooling to provide thermally efficient operation in a smaller package than conventional air-cooled graphics cards can manage. In space-strapped server environments, this allows for higher compute densities. A liquid-cooled H100 is expected to follow in 2023.