From Forbes: Back in November of last year, Google open-sourced a natural language processing pre-training technique it calls Bidirectional Encoder Representations from Transformers, or BERT. Although a number of large companies, including Microsoft, Facebook, Alibaba, Baidu, and Uber, are racing toward conversational AI with similar methods, BERT remains to date one of the most advanced AI language models in the space.
Google open-sourced BERT so that others could train their own conversational question answering systems. Today, NVIDIA announced that its AI compute platform is the first to train BERT in under an hour and to complete AI inference in just over 2 milliseconds.
“Large language models are revolutionizing AI for natural language,” said Bryan Catanzaro, vice president of Applied Deep Learning Research at NVIDIA. “They are helping us solve exceptionally difficult language problems, bringing us closer to the goal of truly conversational AI. NVIDIA’s groundbreaking work accelerating these models allows organizations to create new, state-of-the-art services that can assist and delight their customers in ways never before imagined.”
View: Article @ Source Site