Bias in facial recognition is handicapping deepfake detection


Image Recognition

Researchers at the University of Southern California have found harmful bias in deepfake datasets and detection models. The result of this skew is that deepfake detectors are less able to spot fraudulent images and video of people of color. Using the FaceForensics++ and Blended Image biometric datasets, the team trained MesoInception4, Xception, and Face X-ray models, all of which have “proven success” in video detection. Facebook, still smarting from its inability to separate dangerous political propaganda from informed threads about health care, offered a $1 million prize in its Deepfake Detection Challenge, which wrapped up last June.
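One way disparities like those reported here are typically measured is by comparing a detector's miss rate across demographic groups. The sketch below is illustrative only: the function name, the toy labels, and the group tags are hypothetical and not drawn from the USC study; it simply shows how a per-group false-negative rate (the fraction of real deepfakes a detector fails to flag) could be computed.

```python
# Hypothetical sketch: measuring demographic disparity in a deepfake
# detector's error rates. The data below is toy data, not results
# from the USC study or any named model.

from collections import defaultdict

def false_negative_rate_by_group(y_true, y_pred, groups):
    """For each demographic group tag, return the fraction of fake
    samples (label 1) that the detector missed (predicted 0)."""
    fakes = defaultdict(int)   # deepfakes seen per group
    missed = defaultdict(int)  # deepfakes the detector failed to flag
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:             # this sample is a genuine deepfake
            fakes[g] += 1
            if p == 0:         # detector labeled it authentic
                missed[g] += 1
    return {g: missed[g] / fakes[g] for g in fakes}

# Toy example: the detector misses every fake for group "B".
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
print(false_negative_rate_by_group(y_true, y_pred, groups))
```

A gap between groups in this metric is exactly the kind of skew the researchers describe: a detector can have a respectable overall accuracy while failing disproportionately on one subpopulation.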