IBM Unveils On-Chip Accelerated Artificial Intelligence Processor
At the annual Hot Chips conference, IBM today unveiled details of the upcoming IBM Telum Processor, designed to bring deep learning inference to enterprise workloads and help address fraud in real time. Telum is IBM’s first processor to contain on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, Telum will be the central processor chip for the next generation of IBM Z and LinuxONE systems. A Telum-based system is planned for the first half of 2022.
The new chip features an innovative centralized design, which allows clients to leverage the full power of the AI accelerator for AI-specific workloads, making it ideal for financial services workloads such as fraud detection, loan processing, clearing and settlement of trades, anti-money laundering, and risk analysis. The chip contains 8 processor cores with a deep super-scalar, out-of-order instruction pipeline running at a clock frequency above 5GHz, optimized for the demands of heterogeneous enterprise-class workloads.
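To make the in-transaction inference idea concrete, here is a minimal, purely illustrative sketch of scoring a payment for fraud synchronously, before the transaction is approved. The model, feature names, weights, and threshold below are hypothetical stand-ins, not part of IBM's announcement; on Telum the scoring step would run on the on-chip AI accelerator rather than in Python.

```python
import math

# Hypothetical logistic-regression scorer standing in for a deployed
# fraud-detection model; real systems would use a trained deep model.
def fraud_score(features, weights, bias):
    """Return a fraud probability in (0, 1) for one transaction."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def process_transaction(txn, weights, bias, threshold=0.9):
    """Score the transaction inline and block it if the score is too high.

    The key point: scoring happens *during* transaction processing,
    not in an after-the-fact batch job.
    """
    score = fraud_score(txn["features"], weights, bias)
    txn["fraud_score"] = score
    txn["approved"] = score < threshold
    return txn

# Example: three made-up transaction features and weights.
weights = [0.8, -0.3, 1.2]
txn = process_transaction({"features": [0.5, 1.0, 0.2]}, weights, bias=-1.0)
```

The point of the sketch is latency: because the score is computed inline, a flagged transaction can be declined before it completes, which is what on-chip acceleration makes practical at enterprise transaction rates.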
The completely redesigned cache and chip-interconnection infrastructure provides 32MB of cache per core and can scale to 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire on 17 metal layers. Telum is the first IBM chip with technology created by the IBM Research AI Hardware Center. In addition, Samsung is IBM’s technology development partner for the Telum processor, which is developed in a 7nm EUV technology node.