Tachyum’s Prodigy processor will enable hyperscale data centers, its primary market, to become low-cost, energy-efficient universal computing centers. Hyperscale data centers built on Prodigy universal processors will reduce their TCO (Total Cost of Ownership) by a factor of 4, saving each hyperscale customer billions of dollars per year.
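The 4x TCO claim reduces to simple arithmetic, sketched below; the baseline figure is a hypothetical placeholder for illustration, not a number from Tachyum:

```python
# Hypothetical illustration of the claimed 4x TCO reduction.
# The baseline figure is an assumption, not a Tachyum-published number.
baseline_annual_tco = 4.0e9   # hypothetical hyperscaler annual TCO, in USD
reduction_factor = 4          # claimed TCO reduction factor

prodigy_annual_tco = baseline_annual_tco / reduction_factor
annual_savings = baseline_annual_tco - prodigy_annual_tco

print(f"Prodigy annual TCO: ${prodigy_annual_tco / 1e9:.1f}B")
print(f"Annual savings:     ${annual_savings / 1e9:.1f}B")
```

Under this assumed $4B baseline, annual TCO drops to $1B, a $3B yearly saving — consistent in scale with the "billions of dollars per year" claim above.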
Prodigy’s industry-leading performance across data center, AI, and High-Performance Computing (HPC) workloads is a game-changer. Idle Prodigy-powered “universal servers” can be seamlessly powered up and reconfigured as ad hoc AI networks that deliver 10x more AI resources than the current baseline. Alternatively, these same idle universal servers can just as easily be powered up and configured as HPC systems delivering ExaFLOPS-level performance (an ExaFLOPS is a billion billion floating-point operations per second). Prodigy delivers all of this at 10x lower power and 3x lower cost, with a complete software stack for out-of-the-box operation, including common data center applications ported to the Prodigy native ISA (Instruction Set Architecture) and a dynamic binary emulator that enables legacy x86 applications to run at Xeon speed.
Prodigy’s breakthrough computational density (MIPS/Socket, MIPS/Watt) and its unprecedented I/O bandwidth, coupled with its universality, will increase Petascale and Exascale HPC access dramatically, across a broad spectrum of use cases, while driving HPC costs down appreciably. The societal effects of cost-efficient Exascale computing resources, available for rent at hyperscale data centers everywhere, will be profound.
Dedicated Prodigy-powered HPC systems will be affordable at the Enterprise level. Prodigy will “democratize” HPC by bringing its capabilities to small and medium enterprises at much lower CAPEX and OPEX than currently available.
For dedicated, state-of-the-art, government-funded Exascale systems, the Prodigy processor released in 2021 enables a direct path to 50-100 ExaFLOPS machines by 2023, at 3-6x lower cost per ExaFLOPS than the announced budget for the latest El Capitan 2 EF system, also due out in 2023.
Idle Prodigy-powered universal servers in hyperscale data centers will, during off-peak hours, deliver 10x more AI neural-network training/inference resources than are currently available, CAPEX-free (i.e., at low cost, since the Prodigy-powered universal computing servers are already bought and paid for).
As AI migrates to more sophisticated and control-intensive disciplines, such as Spiking Neural Nets, Explainable AI, Symbolic AI and Bio AI, Prodigy will deliver an order of magnitude better performance than its competitors.
Tachyum’s Prodigy-based TPU core will be sold as a licensable soft IP core, enabling edge computing and IoT products to incorporate high-performance AI capabilities directly and at very low cost. IoT products will have onboard high-performance AI inference, optimized to exploit Prodigy-based AI training from either the cloud or the home office.
Prodigy massively reduces the cost and power consumption of enterprise data center, AI, and HPC workloads. It will enable a new generation of on-premise products with unprecedented capabilities, and it will be available to private cloud operators at prices that finally generate favorable ROIs.
Prodigy is expected to stimulate the private cloud market by providing in-house low latency compute, and Big AI training/inference.
Enterprise customers are also expected to exploit the HPC capabilities of Prodigy-powered universal servers in proprietary product development programs, as well as in proprietary marketing and corporate-level applications.
Prodigy-powered universal servers, certified for extended temperature range, will be delivered to telecommunications conglomerates and mounted on their cellular towers, providing unprecedented edge computing resources. Low-latency, advanced AI training and high-performance inference for IoT will be as close as the nearest cell tower.
Auto manufacturers will use networks of Prodigy-powered, cell tower-based AI to provide their mobile customers with a complete AI-based ecosystem: integrating self-driving capabilities, monitoring home security, safely streaming incoming calls, messages, and video to backseat passengers, and providing predictive diagnostics for the vehicle itself.
Military and Intelligence
Prodigy’s strategic value to national defense spans virtually the entire gamut of the DoD and Intelligence community’s missions and systems. In most use cases, its impact is very high. Prodigy’s unprecedented universal compute capabilities and its cost and energy efficiency in data center, AI, HPC, and telecommunications use cases are directly transferable to defense and intelligence systems. Prodigy will proliferate low-latency compute, Big AI, and unprecedented HPC capabilities throughout departments, agencies, programs, and units. Within the command-and-control infrastructure, the ability of Prodigy-powered servers to be seamlessly and dynamically diverted from normal workloads to AI processing will deliver high-performance, low-latency AI to operational units.
In addition to performance, size, weight, and power (SWaP) are the hallmarks of good defense systems. As an example, Prodigy will provide our drone force and our satellites with more than 10x the onboard processing power (MIPS/Watt) of current baseline systems, and it will enable onboard Big AI to be exploited efficiently without an increase in power consumption.