Artificial Intelligence

Significantly More AI Resources

Prodigy-powered universal servers in hyperscale data centers will, during off-peak hours, deliver 10x more AI neural-network training and inference resources than are currently available, CAPEX-free (i.e. at low cost, because the Prodigy-powered universal computing servers are already bought and paid for).

Tachyum’s Prodigy ATX Platform Democratizing AI for Everyone

Built from the ground up to provide leading-edge AI features that address the emerging demand for AI across a wide range of applications and workloads, Prodigy’s AI subsystem incorporates innovative features that deliver the high performance and efficiency required of AI environments. The white paper shows how a single Prodigy system with 1 Terabyte (TB) of memory can run a ChatGPT4 model with 1.7 trillion parameters, whereas running the same model requires 52 NVIDIA H100 GPUs at significantly higher cost and power consumption.

Since LLMs (Large Language Models) are so memory-capacity intensive, determining the memory footprint for an LLM is critical. Just as critical is the use of the latest technology to optimize that footprint for state-of-the-art LLMs, which can have trillions of parameters. Prodigy benefits from its advanced AI subsystem, which supports leading-edge data types such as 4-bit TAI and effective 2-bit weights with FP8 per activation, greatly reducing the memory footprint required for LLMs.

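As a rough illustration of why weight precision dominates the LLM memory budget, the short Python sketch below estimates weight storage at several bit widths. The 1.7-trillion-parameter count is taken from the example above; the specific precision list and the variable names are illustrative assumptions, not Tachyum-published specifications, and activations, KV cache and runtime overhead are ignored.

# Back-of-the-envelope estimate of LLM weight storage at several precisions.
# The 1.7-trillion-parameter count comes from the example above; the precision
# list below is an illustrative assumption, not a Tachyum specification.

PARAMS = 1.7e12  # model parameters

precision_bits = {
    "FP16": 16,
    "FP8": 8,
    "4-bit weights": 4,
    "effective 2-bit weights": 2,
}

for name, bits in precision_bits.items():
    gigabytes = PARAMS * bits / 8 / 1e9  # bits per weight -> total GB
    print(f"{name:>24}: ~{gigabytes:.0f} GB for weights alone")

# Approximate output:
#                     FP16: ~3400 GB for weights alone
#                      FP8: ~1700 GB for weights alone
#            4-bit weights: ~850 GB for weights alone
#  effective 2-bit weights: ~425 GB for weights alone

At roughly 2-4 bits per weight, the weights of a 1.7-trillion-parameter model fit within a single system with 1 TB of memory, which is the core of the memory-footprint argument made in the white paper.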
Find out more in our ATX Platform White Paper.

Wide Range of Disciplines

Improving AI Performance for a Wide Range of Disciplines

As AI migrates to more sophisticated and control-intensive disciplines, such as Spiking Neural Nets, Explainable AI, Symbolic AI and Bio AI, Prodigy will deliver an order of magnitude better performance than its competitors.

TPU® Licensable Core

TPU® Licensable Core Bringing High-Performance AI to Any Device

Tachyum’s Prodigy-based TPU core will be sold as a licensable soft IP core, enabling edge-computing and IoT products to incorporate high-performance AI capabilities directly and at very low cost.

IoT products will have onboard high-performance AI inference engines, optimized to exploit Prodigy-based AI training performed either in the cloud or in the home office.