{"componentChunkName":"component---src-templates-blog-post-js","path":"/blog/ai-weekly-chinas-massive-multimodal-model-highlights-ai-research-gap/","result":{"data":{"site":{"siteMetadata":{"title":"No Frills News"}},"contentfulNfnPost":{"postTitle":"AI Weekly: China’s massive multimodal model highlights AI research gap","slug":"ai-weekly-chinas-massive-multimodal-model-highlights-ai-research-gap","createdLocal":"2021-06-05 14:30:52.850789","publishDate":"2021-06-04 00:00:00","feedName":"Image Recognition","sourceUrl":{"sourceUrl":"https://venturebeat.com/2021/06/04/ai-weekly-chinas-massive-multimodal-model-highlights-ai-research-gap/"},"postSummary":{"childMarkdownRemark":{"html":"<p>Containing 1.75 trillion parameters, the parts of the machine learning model learned from historical training data, Wu Dao 2.0 is 10 times larger than OpenAI’s 175-billion-parameter GPT- 3.\nWu Dao 2.0 is the latest example of what OpenAI policy director Jack Clark calls model diffusion, or multiple state and private actors developing GPT-3-style AI models.\nWu Dao 2.0, which arrived three months after version 1.0’s March debut, is built on an open source system akin to Google’s Mixture of Experts, dubbed FastMoE.\nAccording to Engadget, Wu Dao 2.0 can also power “virtual idols” and predict the 3D structures of proteins, like DeepMind’s AlphaFold.\nBut the release of Wu Dao 2.0 highlights the work that must be done before the U.S. can close the AI gap with other world superpowers.</p>"}}}},"pageContext":{"slug":"ai-weekly-chinas-massive-multimodal-model-highlights-ai-research-gap"}}}