IMPRESS: A Platform to Prevent Unauthorized Usage of Your Images

How often do you wonder whether the photo you just took on your phone is safe? Are you sure no one else will see it? Or that it won’t be used by Snapchat, Meta, or Midjourney to train their AI models to do better? The latest social media trends filling our feeds are images and videos edited by these AI tools, and the rights users are asked to sign over along the way create a disturbing sense of mistrust, begging the question: have we given up on privacy in the name of progress?

A group of researchers at Stony Brook University, Pennsylvania State University, and the University of Illinois Urbana-Champaign recently looked at some of the AI models used for generating images, such as OpenAI’s DALL-E and Stability AI’s Stable Diffusion. While their study acknowledges these models’ superior performance and the high quality of the images they produce, it also points out that the platforms can easily be abused to create images without proper authorization from the original data owner. For example, someone can train these models on the original artworks of a specific artist and generate images that mimic their style. Or a hacker can maliciously edit images of celebrities downloaded from the internet to create misinformation, violating several ethical boundaries at once.

[Image: A Robot Printed Like a Picasso]

Several attempts have been made to protect images from such unauthorized usage, such as PhotoGuard and GLAZE, which add a kind of noise to the original image that is difficult for the human eye to spot. This noise misleads the AI model, stopping it from imitating the artist’s style and generating new “authentic” samples. A rough illustration of the idea follows below.
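For readers curious about what “adding imperceptible noise” might look like, here is an illustrative sketch in Python. It is not the actual PhotoGuard or GLAZE algorithm (those carefully optimize the perturbation against a specific AI model); it only shows the core idea of nudging every pixel by an amount too small to notice. The file names and the perturbation budget are made-up examples.

```python
# Toy illustration of image protection by imperceptible perturbation.
# Real tools (PhotoGuard, GLAZE) optimize the perturbation against a model;
# here we simply add small, bounded random noise to make the idea concrete.
import numpy as np
from PIL import Image

def add_imperceptible_noise(path_in: str, path_out: str, epsilon: float = 4.0) -> None:
    """Perturb each pixel by at most `epsilon` (on a 0-255 scale) and save the result."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = np.random.uniform(-epsilon, epsilon, size=img.shape)  # stand-in for an optimized perturbation
    protected = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(protected).save(path_out)

# Example (hypothetical file names):
# add_imperceptible_noise("artwork.png", "artwork_protected.png")
```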

However, Changjiang Li, a Ph.D. student at Stony Brook University, said, “Although these protection methods show some promise in preventing unauthorized usage of user data, we still lack a systematic understanding of their performance in more practical scenarios. For instance, when these AI models come across protected images, they have access to the data that makes up the digital image. As a result, they immediately notice the noise added by PhotoGuard or GLAZE, and can use this information to purify the noisy, protected image, recovering the original one and rendering all the protection effort in vain.”

[Image: When an AI model reconstructs each image, the reconstructed clean image (left) looks similar to the original, while the image protected by GLAZE (right) shows noticeable inconsistency after reconstruction.]

In their project, Changjiang Li and his team conducted a systematic examination of the protection methods mentioned above, including PhotoGuard and GLAZE. Professor Ting Wang, SUNY Empire Innovation Associate Professor at Stony Brook, shares the results of their study: “Our key observation was that the current ‘successful’ protection methods usually lead to an obvious inconsistency between the original image and the reconstructed image, which can easily be used to come up with a strategy for purifying the image and thus negating the effect of the added noise.”
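To make the idea of “reconstruction inconsistency” concrete, here is a small sketch of the kind of check the researchers describe. It assumes Stable Diffusion’s image autoencoder from the open-source diffusers library; the model name, image size, and the simple mean-squared-error gap are illustrative assumptions rather than the team’s exact setup. A clean image comes back from the encode-decode round trip nearly unchanged, while a protected image tends to show a much larger gap.

```python
# Sketch: measure how much an image changes after a round trip through a
# diffusion model's autoencoder. A large gap hints that protective noise
# is present (and is the signal a purification strategy can exploit).
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

def reconstruction_gap(path: str) -> float:
    img = Image.open(path).convert("RGB").resize((512, 512))
    x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 127.5 - 1.0)
    x = x.permute(2, 0, 1).unsqueeze(0)            # (1, 3, 512, 512), values in [-1, 1]
    with torch.no_grad():
        latents = vae.encode(x).latent_dist.mean    # encode to latent space
        recon = vae.decode(latents).sample          # decode back to pixel space
    return torch.mean((recon - x) ** 2).item()      # larger gap => likely protected

# Example (hypothetical file names):
# reconstruction_gap("artwork.png") vs. reconstruction_gap("artwork_protected.png")
```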

To help solve this problem, the team has devised IMPRESS, or IMperceptible Perturbation REmoval SyStem: a new platform that can evaluate the effectiveness of the noise added by any image protection platform and, as an added feature, can also reverse-engineer the method by which the platform added the noise and point out its limitations! “Our results suggest that designing image protection techniques specifically for IMPRESS can be challenging,” Professor Ting Wang confesses, “but the closer we get to it, the harder it’ll be to exploit an authentic image’s ownership.”
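And here, in very rough strokes, is how a purification step built on that observation could look. This is a simplified sketch of the general idea rather than the authors’ implementation: starting from the protected image, it repeatedly adjusts a copy so that it stays visually close to what it started from while becoming self-consistent under the autoencoder round trip. The loss weights, step count, and learning rate are placeholder values.

```python
# Sketch of purification: optimize a "purified" copy of the protected image so
# that (a) it stays close to the protected image and (b) it survives the
# autoencoder round trip with little change. Not the authors' exact method.
import torch

def purify(x_protected: torch.Tensor, vae, steps: int = 100, lr: float = 0.01,
           consistency_weight: float = 1.0) -> torch.Tensor:
    """x_protected: (1, 3, H, W) tensor in [-1, 1]; vae: an encoder/decoder such as diffusers' AutoencoderKL."""
    x = x_protected.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        recon = vae.decode(vae.encode(x).latent_dist.mean).sample
        loss = torch.mean((x - x_protected) ** 2) \
             + consistency_weight * torch.mean((recon - x) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach().clamp(-1, 1)
```

In practice only the image itself is optimized (the autoencoder’s weights stay fixed), which is why the optimizer above is given just the pixel tensor.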

Once a technology is released, its usage cannot be controlled by its inventor. There will always be malicious attempts to exploit privacy and security, but with IMPRESS’s ability to evaluate present and future protection methods, we are one step closer to effective privacy, which means we are moving in the right direction.


Communications Assistant
Ankita Nagpal