{"componentChunkName":"component---src-templates-blog-post-js","path":"/blog/sensetime-releases-open-sourced-model-intern-2-5-for-autonomous-driving-and-robotics/","result":{"data":{"site":{"siteMetadata":{"title":"No Frills News"}},"contentfulNfnPost":{"postTitle":"SenseTime releases open-sourced model 'INTERN 2.5' for autonomous driving and robotics","slug":"sensetime-releases-open-sourced-model-intern-2-5-for-autonomous-driving-and-robotics","createdLocal":"2023-03-15 14:31:08.102202","publishDate":"None","feedName":"Autonomous Vehicle News","sourceUrl":{"sourceUrl":"https://en.pingwest.com/w/11484"},"postSummary":{"childMarkdownRemark":{"html":"<p>Chinese artificial intelligence company SenseTime has released \"Intern 2.5\", a large multimodal, multitask universal model.\nIts cross-modal, open-task processing ability provides efficient and accurate perception and understanding for general scenarios such as autonomous driving and robotics.\nAs of today, Intern 2.5 has been open-sourced on OpenGVLab, a general visual open-source platform in which SenseTime participates.\nThe model can assist with a variety of complex tasks in general scenarios, including autonomous driving and home robots, and can also quickly retrieve visual content from text queries.</p>"}}}},"pageContext":{"slug":"sensetime-releases-open-sourced-model-intern-2-5-for-autonomous-driving-and-robotics"}}}