{"componentChunkName":"component---src-templates-blog-post-js","path":"/blog/manipulating-weights-in-face-recognition-ai-systems/","result":{"data":{"site":{"siteMetadata":{"title":"No Frills News"}},"contentfulNfnPost":{"postTitle":"Manipulating Weights in Face-Recognition AI Systems","slug":"manipulating-weights-in-face-recognition-ai-systems","createdLocal":"2023-02-04 14:30:51.007332","publishDate":"2023-02-03 12:07:04+00:00","feedName":"Image Recognition","sourceUrl":{"sourceUrl":"https://securityboulevard.com/2023/02/manipulating-weights-in-face-recognition-ai-systems/"},"postSummary":{"childMarkdownRemark":{"html":"<p>These backdoors force the system to err only on specific persons who are preselected by the attacker.\nUniquely, we show that multiple backdoors can be independently installed, with almost no interference, by multiple attackers who may not be aware of each other’s existence.\nWe have experimentally verified the attacks on a FaceNet-based facial recognition system, which achieves state-of-the-art accuracy of 99.35% on the standard LFW dataset.\nWhen we tried to individually anonymize ten celebrities, the network failed to recognize two of their images as being the same person between 96.97% and 98.29% of the time.\nIn all of our experiments, the benign accuracy of the network on other persons was degraded by no more than 0.48% (and in most cases, it remained above 99.30%).</p>"}}}},"pageContext":{"slug":"manipulating-weights-in-face-recognition-ai-systems"}}}