IBM’s Telum addresses $30 bn in consumer losses from fraud
Technology major International Business Machines (IBM) says its new line of microprocessors can detect fraudulent transactions in a matter of milliseconds. The Telum processor, IBM’s first offering with on-chip AI acceleration capabilities, is expected to hit the market in 2022.
In an interview with TechCircle, Kailash Gopalakrishnan, IBM Fellow and senior manager for accelerator architectures and machine learning at IBM Research, explains how Telum’s silicon design will help financial institutions move from fraud detection to fraud prevention.
What’s so special about the Telum processor?
The Telum processor is an enterprise CPU chip developed with technology from IBM Research. It enables clients to run AI at scale with low latency. The challenge with AI today is that it is computationally expensive, which inhibits its integration with low-latency applications. We have addressed this issue with on-chip AI acceleration.
How will enterprises benefit from this chip?
This will be advantageous to financial services industries such as banking, where credit card transactions, trading, insurance and similar use cases can benefit from this feature.
For example, if you’re building a credit card fraud detection application, when the card is swiped, it goes back to an iBMC (Integrated Baseboard Management Controller) machine, which can detect fraudulent activity a few minutes after the transaction is done. If you want to integrate AI within the window of that transaction, typically a few milliseconds, the AI has to be very fast.
Telum provides low latency AI compute directly on the same microprocessor as your traditional CPU cores.
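The point about scoring inside the transaction window can be sketched in code. The snippet below is a hypothetical, stdlib-only illustration of the pattern: the model, feature names, weights and latency budget are all invented for this example and are not IBM's.

```python
# Hypothetical sketch of in-window fraud scoring; the model weights,
# features and 5 ms budget are illustrative, not IBM's.
import math
import time

# Toy logistic-regression weights for three transaction features:
# amount (dollars), distance from home (km), merchant risk score.
WEIGHTS = [0.004, 0.01, 2.0]
BIAS = -5.0

def fraud_score(features):
    """Return a probability-like fraud score in [0, 1]."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def score_in_window(features, budget_ms=5.0):
    """Score a transaction and report whether it fit the latency budget."""
    start = time.perf_counter()
    score = fraud_score(features)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return score, elapsed_ms <= budget_ms

# A large, far-from-home, risky transaction scores high; the real value
# of on-chip acceleration is keeping this call inside the swipe window.
score, in_budget = score_in_window([1200.0, 800.0, 0.9])
```

The toy model runs in microseconds; the design point Telum targets is keeping a much larger production model inside the same kind of millisecond budget.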
The consumer loss from fraud is growing dramatically and is expected to exceed $30 billion by 2023. This solution can help our clients minimize losses from fraudulent transactions.
What advantage does Telum offer over existing solutions?
What we have in the market are externally connected accelerators. In such cases, you have to ship the data off the platform and into another system that can run the AI inferencing using external accelerators. Shipping data off the platform leads to higher latency. Additionally, there is a problem with regard to security, availability, and reliability, because the data is being moved to an external system. The utilization of external accelerator chips for these specific transactional use cases is between 5-10%. But for our on-chip accelerators the utilization is in the range of 50%, which is the big-ticket item in terms of performance and security.
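The utilization gap translates directly into effective throughput. The back-of-the-envelope arithmetic below uses the utilization figures quoted above; the peak throughput number is an invented placeholder, since no peak figure is given in the interview.

```python
# Effective throughput = peak throughput x utilization.
# The 100 inferences/ms peak is an illustrative assumption; the
# utilization figures are the ones quoted in the interview.
PEAK_INFERENCES_PER_MS = 100

external_util = 0.10  # upper end of the 5-10% quoted for external accelerators
onchip_util = 0.50    # the ~50% quoted for on-chip acceleration

external_effective = PEAK_INFERENCES_PER_MS * external_util
onchip_effective = PEAK_INFERENCES_PER_MS * onchip_util
speedup = onchip_effective / external_effective
```

At equal peak capability, the quoted utilizations alone imply a 5x effective-throughput advantage for the on-chip design, before counting the latency saved by not shipping data off the platform.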
What types of fraud and threat actors can Telum mitigate, and what fraudulent methods remain a challenge that you are still working on?
Different clients have different types of datasets, which drives them towards different AI models. The Telum chip can address any of the machine learning and deep learning models available out there. It gives clients the capability to target whichever subset of models they need, without being specialized for any one model.
In terms of cost, how does it compare to traditional methods of using external accelerators?
If you push the data out of the platform and into a different system, you will have security and latency issues. If you want to prevent fraudulent transactions, the option is to exploit the benefits of on-chip acceleration. You also do not need to design an AI system and an enterprise system separately, which also reduces cost. From a software perspective, the ecosystem that we are building for Telum is in line with the AI ecosystem today. Today clients use frameworks such as TensorFlow and PyTorch for AI, along with standardized formats such as ONNX for AI inferencing. We support all such open source frameworks, which makes the system transparent.
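The framework-portability point rests on a common inference call pattern: load a serialized model into a session, then run it on inputs. The stand-in class below mimics the shape of that session-style API (similar to ONNX Runtime's `InferenceSession(...).run(...)`) using only the standard library; the class, weights and inputs are purely illustrative and are not IBM or ONNX software.

```python
# Minimal stand-in for a session-style inference API, mimicking the
# load-then-run shape of ONNX Runtime; purely illustrative.
class ToySession:
    def __init__(self, weights, bias):
        # In a real deployment this would deserialize an ONNX model graph.
        self.weights = weights
        self.bias = bias

    def run(self, inputs):
        # A dot product plus bias stands in for executing the model graph;
        # hardware acceleration would sit behind this call, invisible to
        # the caller, which is what makes the framework support transparent.
        return sum(w * x for w, x in zip(self.weights, inputs)) + self.bias

session = ToySession(weights=[0.5, -0.25], bias=1.0)
output = session.run([2.0, 4.0])
```

Because the application only sees the session interface, the same code path works whether the model came from TensorFlow, PyTorch or an ONNX export, and whether inference runs on plain CPU cores or an on-chip accelerator.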
Are you thinking about other applications for Telum at IBM research?
I think the natural language processing (NLP) use case is exciting in terms of machine translation and sentiment classification in India. A lot of enterprises today are interested in the NLP use case. We also see that the AI chip within the Telum processor is applicable to computer vision processing, speech and image classification, object detection and the like. Such use cases in principle could be applied to the automotive space.