Global chip maker and artificial intelligence (AI) giant Nvidia has announced that it trained the BERT language model in under an hour and cut inference time to just over 2 milliseconds.
Models such as Bidirectional Encoder Representations from Transformers (BERT) can take today's limited conversational AI services to the next level. Because large AI models can now be deployed in real time, chatbots, intelligent personal assistants and search engines can operate in a more human-like manner.
Nvidia claims to have cut training time from several days to as little as 53 minutes, while inference latency has broken the 8-millisecond threshold, dropping to just over 2 milliseconds.
The company said the lower inference latency would allow businesses to engage with customers in natural, truly real-time conversations.
Developers can now use state-of-the-art language understanding in large-scale applications and deliver real-time insights to a wider user base, the company said in a statement.
Early adopters of the technology, including Microsoft and several large startups, are using BERT to make their services more responsive and intuitive, the statement added.
The chip maker said it has added key optimisations to its AI platform and, building on BERT, has trained the largest language model of its kind to date.
"Large language models are revolutionizing AI for natural language," said Bryan Catanzaro, vice president of applied deep learning research at Nvidia.
Catanzaro also pointed out that advanced AI capabilities are helping solve difficult language problems, bringing the industry closer to truly conversational AI.
The market for AI services backed by natural language processing (NLP) is expected to grow to $16.07 billion by 2021, according to a report by MarketsandMarkets. The number of digital voice assistants in use could reach 8 billion within the next four years, up from 2.5 billion today, according to Juniper Research.