Google parent Alphabet has unveiled Trillium, the newest product in its artificial-intelligence data center chip family, which the company says is nearly five times faster than its predecessor. "Industry demand for computing for machine learning has grown by a factor of 1 million in the last six years, roughly increasing 10-fold every year," Alphabet CEO Sundar Pichai said during a briefing call with reporters. "I think Google was built for this moment; we've been pioneers in (AI chips) for more than a decade."
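The arithmetic behind Pichai's figure checks out: a millionfold increase over six years works out to exactly a tenfold increase per year. A quick illustrative calculation (not from Google's data) confirms it:

```python
# Illustrative check: 1,000,000x growth over 6 years implies what annual growth factor?
total_growth = 1_000_000
years = 6

# Annual factor is the 6th root of the total growth: 10**6 == 1,000,000, so it is 10.
annual_factor = total_growth ** (1 / years)
print(round(annual_factor))  # 10, i.e. roughly 10-fold per year
```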
Alphabet's effort to build custom chips for AI data centers represents one of the few viable alternatives to Nvidia's top-of-the-line processors, which dominate the market. Together with the software closely tied to them, Google's tensor processing units (TPUs) have enabled the company to capture significant market share.
Nvidia currently holds about 80% of the AI data center chip market, with the remaining 20% made up largely of various versions of Google's TPUs. Google does not sell the chips directly but rents access to them through its cloud computing platform. The sixth-generation Trillium chip delivers 4.7 times the computing performance of the TPU v5e, according to Google.
The Trillium processor is also 67% more energy efficient than the v5e. The new chip will be available to Google's cloud customers in "late 2024," the company said. Google engineers achieved the additional performance gains by increasing memory capacity and overall bandwidth.
AI models require huge amounts of high-end memory, which has been a bottleneck to further performance gains. Google designed the chips to be deployed in pods of 256, which can then scale to hundreds of pods.