Tachyum Demonstrates Supercharged LLM Training in Only 4 Bits

  • Date of publishing: Oct 8, 2025
  • 14 Pages

The AI market is soaring, projected to grow from $189 billion in 2023 to $4.8 trillion by 2033, a 25x increase in just a decade.1 Earlier this year, in a Facebook post, Mark Zuckerberg projected that Meta will spend $60-$80 billion on CapEx in 2025, primarily on data centers and growing its AI team, roughly double the $35B-$40B spent on CapEx in 2024. Microsoft and Amazon are making similar projections, with Microsoft planning to spend $80B on AI data centers in 2025, and Amazon planning $100B for its Project Rainier AI supercomputer in the same timeframe.2,3 This exponential growth in AI will place increasingly massive demands on energy supplies, which are already struggling to keep up with current demand, and will drive progressively higher total cost of ownership (TCO).

Current AI models have mostly been trained using the FP32 or BF16 data types, which offer high numerical precision but require enormous hardware deployments that consume immense amounts of energy. Recently, models have been successfully trained in FP8, a significant step toward higher efficiency and lower TCO for AI training.
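
To put those data types in perspective, the back-of-the-envelope sketch below compares the weight-storage footprint of a model at each precision. The 70-billion-parameter model size is an illustrative assumption, not a figure from this paper, and the snippet counts only weight storage, not optimizer state or activations.

```python
# Back-of-the-envelope weight-storage footprint at different training precisions.
# PARAMS is a hypothetical 70B-parameter LLM, chosen only for illustration.
PARAMS = 70e9
BITS_PER_WEIGHT = {"FP32": 32, "BF16": 16, "FP8": 8, "FP4": 4}

for fmt, bits in BITS_PER_WEIGHT.items():
    gib = PARAMS * bits / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{fmt}: {gib:,.0f} GiB of weight storage")
```

Halving the bits per weight at each step halves the memory traffic and doubles the number of operands that fit in a given register, cache, or HBM budget, which is where the efficiency gains of FP8 and FP4 come from.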

Tachyum supports the next step in AI training: FP4, which raises compute and energy efficiency and reduces TCO. Additionally, as large language models grow in size, frequently by 10x from one generation to the next, training time becomes intolerably long, yet another dynamic that introduces delays and increases overall cost. The ability to train in FP4 raises performance, addressing this pain point with much shorter training times as well as further reduced TCO.
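
As an illustration of what training in 4 bits involves, the minimal sketch below rounds values to a 16-entry signed grid with a per-tensor scale, assuming an E2M1 layout as used in the OCP Microscaling FP4 format. The grid and the simple max-based scaling scheme are assumptions for illustration only and do not describe Tachyum's implementation.

```python
import numpy as np

# Representable magnitudes of an FP4 E2M1 value (illustrative assumption).
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_E2M1[::-1], FP4_E2M1])  # signed grid of 16 codes

def quantize_fp4(x: np.ndarray):
    """Round each element of x to the nearest FP4 value after per-tensor scaling."""
    scale = np.abs(x).max() / FP4_E2M1.max()  # fit the largest magnitude into [-6, 6]
    idx = np.abs(x / scale - FP4_GRID[:, None]).argmin(axis=0)
    return FP4_GRID[idx], scale

def dequantize_fp4(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original values from the FP4 codes."""
    return q * scale

x = np.random.randn(8).astype(np.float32)
q, s = quantize_fp4(x)
print("original:   ", np.round(x, 3))
print("dequantized:", np.round(dequantize_fp4(q, s), 3))
```

Running the snippet shows the trade-off directly: each weight occupies only 4 bits, but every value is forced onto a coarse grid, which is why FP4 training depends on careful scaling and hardware support to preserve model accuracy.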