About half a year after Google unveiled its groundbreaking MUM model, Baidu answered with a breakthrough of its own. In December of 2021, the Chinese search giant published a paper detailing ERNIE 3.0 Titan, a beefed-up version of its already impressive ERNIE model.
So what is ERNIE 3.0 Titan capable of, how does it differ from the original ERNIE 3.0, what makes it different from MUM, and will its successor be OSCAR 4.0 Grouch? You’ll find answers to the first three questions ahead (unfortunately, there don’t seem to be any OSCAR models on the horizon—yet).
- What Is ERNIE 3.0 Titan?
- How Does ERNIE 3.0 Titan Compare to Google’s MUM?
- What Does ERNIE 3.0 Titan Mean for the Future of Search?
What Is ERNIE 3.0 Titan?
First things first—what is ERNIE 3.0 Titan, anyway? As Baidu researchers explained in a paper on the topic, it’s an AI model that’s designed to perform Natural Language Processing (NLP) tasks.
In other words, it’s intended to decipher what users mean when they enter a search query. This is the same function that Google aims to achieve with its MUM algorithm, and BERT before it, and it does so through large-scale training of a model with billions of parameters (i.e., the internal values the model adjusts as it learns from training data).
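As a rough illustration of what “deciphering what users mean” looks like in practice, here’s a minimal sketch of semantic query matching using the open-source sentence-transformers library and a small public model. Neither ERNIE 3.0 Titan nor MUM is available to run locally, so this only demonstrates the general technique of comparing meanings rather than keywords; the query and passages below are made up for the example.

```python
# A minimal sketch of semantic search: embed a query and candidate
# passages as vectors, then rank the passages by similarity of meaning
# rather than by shared keywords.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small public stand-in model

query = "how do I make my laptop battery last longer"
passages = [
    "Tips for extending notebook battery life",
    "The best gaming laptops of 2022",
    "Lower your screen brightness to save power",
]

# Encode the text into dense vectors that capture meaning.
query_vec = model.encode(query, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)

# Cosine similarity scores how close each passage's meaning is to the query.
scores = util.cos_sim(query_vec, passage_vecs)[0]
for passage, score in zip(passages, scores):
    print(f"{float(score):.3f}  {passage}")
```

The closest passage wins on meaning, not on exact keyword overlap—that’s the kind of understanding large pre-trained language models bring to search.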
While Baidu’s original ERNIE 3.0 model has an impressive 10 billion parameters, ERNIE 3.0 Titan boasts a downright astonishing 260 billion. That’s a staggering leap, especially since only about six months elapsed between the announcement of ERNIE 3.0 and that of ERNIE 3.0 Titan.
Perhaps Baidu’s researchers were able to accomplish such a feat because they managed to drastically increase ERNIE 3.0’s capabilities without completely overhauling its framework. Just like ERNIE 3.0, ERNIE 3.0 Titan uses large-scale text data and a knowledge graph to facilitate few-shot learning (learning from only a handful of examples), zero-shot learning (learning with no task-specific examples at all), and fine-tuning (additional task-specific training).
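To make the difference between zero-shot and few-shot learning more concrete, here’s a minimal prompting sketch. ERNIE 3.0 Titan isn’t publicly downloadable, so the openly available GPT-2 model (via the Hugging Face transformers library) stands in purely to illustrate the idea; the prompts and review text are invented for the example.

```python
# Illustrative only: zero-shot vs. few-shot prompting with a small,
# public text-generation model standing in for a large model like Titan.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Zero-shot: the model sees only the task description, with no examples.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: 'The battery died after a single day.'\n"
    "Sentiment:"
)

# Few-shot: a handful of labeled examples are included in the prompt so
# the model can infer the pattern before answering.
few_shot_prompt = (
    "Review: 'Fantastic screen and fast shipping.'\nSentiment: positive\n"
    "Review: 'It stopped working after a week.'\nSentiment: negative\n"
    "Review: 'The battery died after a single day.'\nSentiment:"
)

for prompt in (zero_shot_prompt, few_shot_prompt):
    result = generator(prompt, max_new_tokens=3, do_sample=False)
    print(result[0]["generated_text"])
```

A model as small as GPT-2 will answer prompts like these unreliably; the whole point of scaling to hundreds of billions of parameters is that zero-shot and few-shot performance improves dramatically.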
Since Baidu is China’s leading search engine by far, it should come as no surprise that ERNIE 3.0 Titan is designed to understand the Chinese language. And with its incredible number of parameters, Titan is the largest Chinese dense pre-trained model to date.
Most importantly, Titan’s performance backs up its behemoth size. According to the results of the Baidu researchers’ experiments, ERNIE 3.0 Titan outperforms state-of-the-art (SOTA) models on 68 NLP tasks. That includes machine reading comprehension, semantic similarity, text classification, closed-book question answering, and more.
By comparison, the original ERNIE 3.0 outperformed SOTA models on 58 Chinese NLP tasks. Once again, this represents a massive improvement over a timeframe of just half a year.
How Does ERNIE 3.0 Titan Compare to Google’s MUM?
Google announced MUM in May 2021, and Pandu Nayak—the company’s VP of Search—wasn’t shy about touting its power. It’s “1,000 times more powerful than BERT,” he said, and “has the potential to transform how Google helps [users] with complex tasks.”
That’s because MUM is:
- able to not only comprehend language, but also generate it;
- trained across 75 languages; and
- multimodal, meaning it understands both text and images (and will someday understand video, audio, and other formats too).
This is admirable, and should noticeably improve the quality of Google’s SERPs in the near future. But the fact of the matter is that its scale just doesn’t compare to that of ERNIE 3.0 Titan. As The Verge calculated, MUM is about the same size as OpenAI’s GPT-3 language model, which has 175 billion parameters. That means Titan has roughly 85 billion more parameters than MUM.
While Titan is undeniably bigger than MUM, it’s important to note that Titan was trained on just one language (Chinese) rather than 75 languages as MUM was. Plus, Titan was designed to understand language and language alone, while MUM was designed to understand images (and eventually video and audio) in addition to text.
This doesn’t mean that MUM is inherently better than Titan, though, nor does it mean the reverse. Instead, it simply means that the two NLP models are very different, each engineered to serve its parent company’s particular needs.
To gain a better understanding of why that is, consider MUM’s greatest strength: its ability to understand many different languages across multiple formats. This makes sense for Google since it’s the leading search engine, not only in the U.S. but also in large swaths of the Americas, Australia, Europe, and Asia. Google is also heavily invested in visual search and web video, so it benefits the company greatly to create a multimodal model like MUM.
On the other hand, ERNIE 3.0 Titan is significantly larger than MUM but is trained to understand and generate the Chinese language exclusively. This may at first seem like a limitation, but it’s actually a boon to Baidu. After all, Baidu’s user base consists almost entirely of people who live in China.
So MUM and ERNIE 3.0 Titan are each custom-tailored to meet Google’s and Baidu’s respective requirements. And since Google completely exited the Chinese market in 2010, neither company is competing with the other (although we’re sure either one would be happy to earn more bragging rights).
What Does ERNIE 3.0 Titan Mean for the Future of Search?
The release of ERNIE 3.0 Titan may not directly affect the daily life of anyone outside of China, but nothing exists in a vacuum (and that’s especially true on the internet). As such, Baidu’s latest NLP model still impacts the search landscape in general.
Specifically, Titan pushes the practical limits of how large an NLP model can be, Chinese or otherwise. And with GPT-4 rumored to have as many as 100 trillion parameters whenever it eventually releases, the question of “How big can an NLP model be?” is more relevant than ever.
Moreover, Titan can show the world just how deeply an AI model can understand a single, complex language when given enough data to work with. (By contrast, Google’s MUM will show us how well an NLP model can understand dozens of languages simultaneously.)
Titan Is One Big Step for Baidu, One Huge Leap for Search
If the sites you optimize cater to a primarily non-Chinese audience, then you may not have put much thought into optimizing for Baidu or learning about its algorithms, and understandably so. But the truth is that you should care about what Baidu does—its size and influence mean that its actions ripple across the entire world of SEO and search, Chinese-language or not.
And in the case of ERNIE 3.0 Titan, that’s more true than ever before. With its release, Baidu has accomplished a genuine breakthrough, and you can bet that Google’s engineers are keeping a close eye on every detail. Search engines are only just beginning to explore the potential of NLP models like ERNIE and MUM, and Baidu has just upped the ante in a big way.