From ExtremeTech: With the rise of artificial intelligence come new privacy and disinformation concerns, including non-consensual photo manipulation. While anyone can screenshot an image and edit it to portray something else, doing so takes time, effort, and skill. Large diffusion models eliminate the need for all three, making malicious photo manipulation and theft all the more enticing. To get ahead of this, a group of MIT doctoral students has developed a tool that "immunizes" photos against AI-powered editing.
Called PhotoGuard, the tool uses two strategies to protect images from AI editing. The first, an "encoder" attack, adds a subtle perturbation that shifts the image's latent representation (the compressed encoding a diffusion model builds from the pixels) toward that of an unrelated target. The second, a stronger "diffusion" attack, optimizes the perturbation against the full diffusion process itself, so that any edit the model attempts resolves toward a chosen target image. In both cases the changes are too small for the human eye to notice, yet they scramble the AI's understanding of the picture, forcing it to perceive one thing (say, a photo of a person) as something else entirely (like a flat gray square). This prevents the AI from making any meaningful edits to the original picture.
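Conceptually, the encoder strategy resembles a standard adversarial-perturbation loop: nudge the pixels, within a tight budget, until the model's encoder maps the photo to the wrong latent. The following is a minimal PyTorch-style sketch under that assumption, not PhotoGuard's actual code; the function name, the `encoder` module, `target_latent`, and all hyperparameters are illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

def immunize_encoder_attack(image, encoder, target_latent,
                            eps=8/255, step_size=1/255, steps=200):
    """Find an imperceptible perturbation that pushes the image's
    latent representation toward `target_latent` (e.g., the encoding
    of a flat gray image), within an L-infinity budget of `eps`.

    Hypothetical sketch: `encoder` is assumed to be the differentiable
    image encoder of a latent diffusion model.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Distance between the perturbed image's latent and the target.
        loss = F.mse_loss(encoder(image + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # descend on the loss
            delta.clamp_(-eps, eps)                 # keep the change imperceptible
            # Keep the perturbed image within the valid pixel range [0, 1].
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()
```

The diffusion attack follows the same pattern but backpropagates through the full denoising pipeline rather than just the encoder, which is more expensive but harder to circumvent. The team's actual implementation of both attacks is available in their GitHub repository.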
Because PhotoGuard's changes are virtually imperceptible, those who use the tool to protect their photos' integrity can still share their favorite images on social media. But according to a blog post by PhotoGuard's authors, that isn't the ultimate goal. "Our goal here is not to suggest that individual users should safeguard their own images by themselves," the team wrote on Gradient Science, the blog run by MIT's Madry Lab. "Instead, we hope that the companies providing the models themselves can provide an API to safeguard one's images against editing." In the meantime, those interested in testing PhotoGuard can do so using an interactive demo on the team's GitHub profile.
View: Full Article