{"componentChunkName":"component---src-templates-blog-post-js","path":"/blog/bias-in-facial-recognition-is-handicapping-deepfake-detection/","result":{"data":{"site":{"siteMetadata":{"title":"No Frills News"}},"contentfulNfnPost":{"postTitle":"Bias in facial recognition is handicapping deepfake detection","slug":"bias-in-facial-recognition-is-handicapping-deepfake-detection","createdLocal":"2021-05-18 14:31:23.648762","publishDate":"2021-05-17 23:01:53+00:00","feedName":"Image Recognition","sourceUrl":{"sourceUrl":"https://www.biometricupdate.com/202105/bias-in-facial-recognition-is-handicapping-deepfake-detection"},"postSummary":{"childMarkdownRemark":{"html":"<p>Harmful bias has been found in deepfake datasets and detection models by researchers from the University of Southern California.\nThe result of this skew is that deepfake detectors are less able to spot fraudulent images and video of people of color.\nUsing FaceForensics++ and Blended Image biometric datasets, they trained MesoInception4, Xception and Face X-ray models, all of which have “proven success” in video detection.\nFacebook, still burning from its inability to separate dangerous political propaganda from informed threads about health care, offered a $1 million prize in its Deepfake Detection Challenge, which wrapped up last June.</p>"}}}},"pageContext":{"slug":"bias-in-facial-recognition-is-handicapping-deepfake-detection"}}}