Using AI to protect against AI image manipulation


Image Recognition
The more straightforward "encoder" attack perturbs the image's latent representation inside the AI model, causing the model to perceive the image as essentially random noise. The second, more intricate "diffusion" attack targets the entire diffusion model end-to-end: the protective perturbation is optimized so that the model's output is steered toward a chosen target image. For example, if the original image is a drawing and the target is a completely different drawing, any AI model attempting to edit the original will instead produce changes as if it were working on the target, defeating the intended manipulation. The result is a picture that remains visually unaltered for human observers but resists unauthorized edits by AI models.
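The encoder attack described above can be sketched as a small projected-gradient-descent loop. This is an illustrative toy, not the authors' implementation: the "encoder" here is assumed to be a fixed linear map standing in for a real image encoder, and the names (`encoder_attack`, `eps`, `step`) are hypothetical. The key ideas it demonstrates are real, though: push the image's latent toward a target latent while clipping the perturbation to an L-infinity ball so the image still looks unchanged to a human.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image encoder: a fixed linear map
# from a 64-"pixel" image to an 8-dimensional latent.
W = rng.normal(size=(8, 64))

def encode(x):
    return W @ x

def encoder_attack(image, target_latent, eps=0.1, step=0.02, iters=200):
    """Projected gradient descent on ||encode(x) - target||^2.

    Each step moves x along the signed gradient toward the target
    latent, then projects the perturbation back into an L-inf ball
    of radius eps so the change stays imperceptible.
    """
    x = image.copy()
    for _ in range(iters):
        # Analytic gradient of the squared latent distance
        grad = 2.0 * W.T @ (encode(x) - target_latent)
        x = x - step * np.sign(grad)
        # Project: keep every pixel within eps of the original
        x = image + np.clip(x - image, -eps, eps)
    return x

image = rng.normal(size=64)
target = rng.normal(size=8)   # a "random" latent to push toward

immunized = encoder_attack(image, target)
```

After the loop, `immunized` differs from `image` by at most `eps` per pixel, yet its latent sits closer to the target, so a downstream editing model would operate on a misleading representation. A real implementation would backpropagate through the actual encoder (e.g. with autograd in a deep-learning framework) rather than use this linear toy.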