AI Weekly: China’s massive multimodal model highlights AI research gap



With 1.75 trillion parameters (the values a machine learning model learns from its training data), Wu Dao 2.0 is 10 times larger than OpenAI's 175-billion-parameter GPT-3. It is the latest example of what OpenAI policy director Jack Clark calls model diffusion: multiple state and private actors developing GPT-3-style AI models. Wu Dao 2.0, which arrived three months after version 1.0's March debut, is built on FastMoE, an open source system akin to Google's Mixture of Experts. According to Engadget, Wu Dao 2.0 can also power "virtual idols" and predict the 3D structures of proteins, as DeepMind's AlphaFold does. But the release of Wu Dao 2.0 highlights the work that must be done before the U.S. can close the AI gap with other world superpowers.
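The article does not detail FastMoE's internals, but the Mixture-of-Experts idea it builds on can be sketched: a learned gate routes each token to a small number of "expert" sub-networks, so total parameter count grows with the number of experts while per-token compute stays roughly constant. The NumPy sketch below is illustrative only; all names, sizes, and the top-k routing scheme are assumptions, not FastMoE's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax along the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Toy mixture-of-experts layer (illustrative, not FastMoE).

    A gating network scores every expert for each token; only the
    top-k experts run, and their outputs are blended by gate weight.
    """
    def __init__(self, d_model, n_experts, top_k=2):
        self.top_k = top_k
        # Gate: maps a token vector to one score per expert
        self.gate = rng.standard_normal((d_model, n_experts)) * 0.02
        # Each expert is a simple linear map here for brevity
        self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                        for _ in range(n_experts)]

    def __call__(self, x):
        # x: (tokens, d_model)
        scores = softmax(x @ self.gate)                  # (tokens, n_experts)
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            top = np.argsort(scores[t])[-self.top_k:]    # indices of top-k experts
            w = scores[t, top] / scores[t, top].sum()    # renormalize gate weights
            for wi, e in zip(w, top):
                out[t] += wi * (x[t] @ self.experts[e])  # blend expert outputs
        return out

layer = MoELayer(d_model=8, n_experts=4, top_k=2)
y = layer(rng.standard_normal((3, 8)))
print(y.shape)  # prints (3, 8)
```

Because only k experts run per token, adding experts inflates the parameter count (the headline trillions) far faster than the compute cost, which is the property that makes trillion-parameter models like Wu Dao 2.0 tractable to train.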