Press release

4 minute read

Tachyum 8 AI Zettaflops Blueprint to Solve OpenAI Capacity Limitation

LAS VEGAS, December 12, 2023 – Tachyum®, creator of Prodigy®, the world’s first Universal Processor, today released a white paper presenting how Tachyum’s customers can plan and build new HPC/AI supercomputer data centers that far exceed not only the performance of existing supercomputers but also the target performance for next-generation systems.

Built from the ground up to provide the highest performance and efficiency, Prodigy’s revolutionary new architecture enables supercomputers to be deployed in fully homogeneous environments, providing simple development, deployment and maintenance. The solution is ideally suited for OpenAI, for cloud providers such as Microsoft Azure, CoreWeave, and Ori, and for research facilities that need AI data centers but today lack a system architecture capable of serving all interested customers.

Developed by Tachyum’s world-class systems, solutions and software engineering teams, the Prodigy-enabled supercomputer, commissioned by a U.S. company this year, delivers the unprecedented performance of 50 exaflops of IEEE double-precision 64-bit floating-point operations and 8 zettaflops of AI training for large language models.
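
To put the two quoted figures on a common scale, the short Python sketch below is an illustration only (not Tachyum code), assuming standard SI prefixes; the AI-training figure reflects low-precision AI arithmetic rather than FP64.

```python
# Illustrative only: the quoted performance figures in consistent units.
# Assumes standard SI prefixes (exa = 1e18, zetta = 1e21).
EXA = 1e18
ZETTA = 1e21

fp64_flops = 50 * EXA   # 50 exaflops, IEEE 64-bit floating point
ai_flops = 8 * ZETTA    # 8 zettaflops for AI (LLM) training

print(f"FP64:        {fp64_flops:.1e} FLOP/s")
print(f"AI training: {ai_flops:.1e} FLOP/s")
print(f"Ratio:       {ai_flops / fp64_flops:.0f}x")  # ~160x, from low-precision AI formats
```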

For the supercomputer solution referenced in the white paper, Tachyum provides a custom 46RU liquid-cooled reference rack design. Each rack supports 33 four-socket 1U servers for a total of 132 Prodigy processors. The racks have a modular architecture and can be combined into a two-rack cabinet to optimize floor space.
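
As a quick sanity check on those numbers, the minimal sketch below uses only the figures quoted above (servers per rack, sockets per server, racks per cabinet); any derived totals are simple arithmetic, not additional specifications.

```python
# Minimal sketch of per-rack and per-cabinet Prodigy processor counts,
# using only the figures quoted in this press release.
servers_per_rack = 33
sockets_per_server = 4
racks_per_cabinet = 2

processors_per_rack = servers_per_rack * sockets_per_server   # 132
processors_per_cabinet = racks_per_cabinet * processors_per_rack

print(processors_per_rack, processors_per_cabinet)  # 132 264
```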

Tachyum’s HPC/AI software stack provides a complete software environment to enable Prodigy family HPC/AI deployments, delivering full support for all aspects of HPC/AI clusters from low-level firmware to complete HPC/AI applications, and incorporating leading-edge software environments for networking and storage. Tachyum’s software team has already integrated a software package for HPL (LINPACK) and other HPC software, with AI software running on the Prodigy FPGA soon.

“After our announcement of the purchase order we received this year, we attracted a lot of attention from other interested parties, several of them large organizations, looking to build a similar scale system for their AI applications and workloads,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “Prodigy’s system architecture fits well into a wide range of deployments, including those that need data center scale-out once the infrastructure for it is already in place. The scale of machines enabled by this new HPC/AI supercomputer data center likely will determine who will win the fight for compute and AI supremacy in the world.”

Prodigy delivers the high performance required for both cloud and HPC/AI workloads within a single architecture. As a Universal Processor offering utility for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.

Tachyum’s latest white paper follows previous releases detailing how to use the 4-bit Tachyum AI (TAI) and 2-bit-effective-per-weight (TAI2) formats in Large Language Model (LLM) quantization without accuracy degradation, reducing the cost of LLMs by up to 100x and bringing them to the mainstream.
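
The TAI and TAI2 formats themselves are not specified in this release; as general background only, the sketch below shows generic 4-bit symmetric weight quantization in Python. It is an illustration of the basic idea of storing LLM weights at a few bits per value, not Tachyum’s method.

```python
# Generic 4-bit symmetric weight quantization (NOT the Tachyum TAI/TAI2 format).
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Map float weights to integers in [-7, 7] with a per-tensor scale."""
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_4bit(w)
print(np.abs(w - dequantize(q, s)).max())  # small reconstruction error
```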

Those interested in reading the “Tachyum 50EF/8ZF Datacenter Can Solve OpenAI and Other Problems” white paper can download a copy.

Follow Tachyum

https://twitter.com/tachyum

https://www.linkedin.com/company/tachyum

https://www.facebook.com/Tachyum/

About Tachyum

Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPU, and a TPU in a single processor to deliver industry-leading performance, cost and power efficiency for both specialty and general-purpose computing. As global data center emissions continue to contribute to a changing climate, with projections of their consuming 10 percent of the world’s electricity by 2030, the ultra-low power Prodigy is positioned to help balance the world’s appetite for computing at a lower environmental cost. Tachyum recently received a major purchase order from a US company to build a large-scale system that can deliver more than 50 exaflops performance, which will exponentially exceed the computational capabilities of the fastest inference or generative AI supercomputers available anywhere in the world today. When complete in 2025, the Prodigy-powered system will deliver a 25x multiplier vs. the world’s fastest conventional supercomputer – built just this year – and will achieve AI capabilities 25,000x larger than models for ChatGPT4. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.