This tool could save your photos from AI (for now)

PhotoGuard image protector against AI image generators
(Image credit: MIT CSAIL)

The rapid explosion of AI image generators and editors has raised wide-ranging concerns, from copyright to the impact on creative jobs. But even outside of the creative sector, the general public may start to fear what could happen now that anyone can find images of them online and potentially doctor them using AI.

Even watermarking images can do little to protect them from manipulation now that there are AI watermark removers. But while AI image generators are proliferating, so too are potential countermeasures. The research institute MIT CSAIL is the latest to announce one: a tool called PhotoGuard (see our pick of the best AI art generators to learn more about the expanding tech).

PhotoGuard seems to work in a similar way to Glaze, which we've mentioned before. An initial encoder process subtly alters an image by changing select pixels in a way that interferes with AI models' ability to understand what the image shows. The changes are invisible to the human eye but are picked up by AI models, affecting the algorithmic model's latent representation of the target image (the mathematics detailing the position and colour of each pixel). Effectively, these tiny alterations "immunise" an image by preventing an AI from understanding what it is looking at.
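The idea behind this encoder attack can be sketched in a few lines. The toy example below is an assumption-laden illustration, not MIT's actual code: the real PhotoGuard optimises against a large diffusion model's image encoder, while here a random linear map stands in for the encoder, and a simple projected signed-gradient loop pushes the latent representation away from the original while keeping every pixel change within an imperceptibly small budget.

```python
# Toy sketch of an "encoder attack" in the spirit of PhotoGuard.
# Assumption: a random linear map stands in for a real image encoder.
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(8, 16))          # hypothetical 16-pixel image -> 8-dim latent
encode = lambda x: W @ x

image = rng.uniform(0, 1, size=16)    # toy "image" as a flat pixel vector
z_orig = encode(image)

eps = 0.03                            # per-pixel budget: keeps the change invisible
delta = rng.uniform(-eps, eps, 16) * 0.1   # tiny random start to break symmetry

# Projected gradient ascent: push the latent as far as possible from z_orig
# while keeping every pixel change inside [-eps, eps].
for _ in range(50):
    z = encode(image + delta)
    grad = W.T @ (z - z_orig)         # gradient of 0.5 * ||z - z_orig||^2 wrt delta
    delta += 0.01 * np.sign(grad)     # signed-gradient (PGD-style) step
    delta = np.clip(delta, -eps, eps)

immunised = np.clip(image + delta, 0, 1)
pixel_change = np.max(np.abs(immunised - image))            # stays tiny
latent_shift = np.linalg.norm(encode(immunised) - z_orig)   # comparatively large
print(pixel_change, latent_shift)
```

The point of the sketch is the asymmetry it prints: the pixel change never exceeds the invisibility budget, yet the encoder's latent representation moves substantially, which is what confuses a downstream generative model.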

PhotoGuard process to protect images from AI manipulation

This example shows how an AI generator fails to produce the desired result with an image that has been "immunised" with PhotoGuard (Image credit: MIT CSAIL)

After that, a more advanced diffusion method camouflages an image as something else in the eyes of the AI by optimising the "perturbations" it applies to resemble a particular target. This means that when the AI tries to edit the image, the edits are applied to the "fake" target image instead, resulting in output that looks unrealistic.
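This targeted version can be sketched the same way: instead of pushing the latent away from the original, the perturbation is optimised so the image's latent lands near the latent of a chosen target image. Again, this is a minimal numpy illustration under the same stand-in assumptions (a linear map in place of a real diffusion model's encoder), not the actual PhotoGuard implementation.

```python
# Toy sketch of the targeted "diffusion attack" idea: steer the image's
# latent toward a chosen target image's latent, within a pixel budget.
# Assumption: a random linear map stands in for a real image encoder.
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(8, 16))
encode = lambda x: W @ x

image = rng.uniform(0, 1, size=16)
target = rng.uniform(0, 1, size=16)   # the "fake" image we want the AI to "see"
z_target = encode(target)

eps = 0.05                            # imperceptibility budget per pixel
delta = np.zeros_like(image)

# Projected gradient descent toward the target's latent representation.
for _ in range(200):
    residual = encode(image + delta) - z_target
    grad = W.T @ residual             # gradient of 0.5 * ||z - z_target||^2
    delta -= 0.005 * grad             # step toward the target latent
    delta = np.clip(delta, -eps, eps) # project back into the pixel budget

before = np.linalg.norm(encode(image) - z_target)
after = np.linalg.norm(encode(image + delta) - z_target)
print(before, after)
```

Because the perturbed image now sits close to the target in latent space, an editor operating in that space effectively applies its edits to the wrong picture, which is why the outputs come out looking unrealistic.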

As we've noted before, however, this isn't a permanent solution. The process could be reverse-engineered, allowing the development of AI models immune to the tool's interference.  

MIT doctorate student Hadi Salman, the lead author of the PhotoGuard research paper, said: "While I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools."

He called for a collaborative approach involving model developers, social media platforms and policymakers to defend against unauthorized image manipulation. "Working on this pressing issue is of paramount importance today," he said. PhotoGuard's code is available on GitHub. See our pick of the best AI art tutorials to learn more about how AI tools can be used (constructively).


Joseph Foley

Joe is a regular freelance journalist and editor at Creative Bloq. He writes news and features, updates buying guides and keeps track of the best equipment for creatives, from monitors to accessories and office supplies. A writer and translator, he also works as a project manager at London and Buenos Aires-based design and branding agency Hermana Creatives, where he manages a team of designers, photographers and video editors who specialise in producing photography, video content, graphic design and collaterals for the hospitality sector. He enjoys photography (particularly nature photography) and wellness, and he dances Argentine tango.