
How to Protect Your Images From AI Image Generators

Note: Aiello Studios does not earn revenue from links in this post.


With the advancement of artificial intelligence technology, it has become easier for AI image generators such as DALL-E 2, DreamStudio, and Midjourney to create images that look like actual photographs or even the work of a particular artist. This raises the question: How can photographers protect their work from these AI systems? While complete protection is not possible at this time, there are steps you can take to help safeguard your images from inclusion in AI image generator systems.

Concept image of a futuristic robot deciding on image selection from an AI dataset. Made on Midjourney.

Is It Possible To Protect Your Images From AI Image Datasets?

Short of not displaying your images on any website or social media, achieving complete protection from AI image generators is a challenging task. This is particularly so given the rapid technological advancements and the slow pace of regulating these new technologies. Continuous monitoring and updating of image protection strategies are necessary to stay ahead. Collaboration between artists, legal experts, and AI developers is crucial in addressing image protection concerns and striking a balance between protecting creativity and fostering innovation in the long term.

Can AI Art Generators Use My Copyrighted Art?

Yes, AI image generators have the potential to use your copyrighted art if they have access to it. The companies behind these AI systems maintain that they have the right to use images they find on the public internet. I, of course, vehemently disagree, but thus far, the legal standards and guidelines are unsettled.


To protect your images in the meantime, you can consider additional actions like those I describe below. In case of infringement, you should seek legal advice and take necessary action to protect your intellectual property. But as we all know, that can be very expensive and time-consuming.

Are There Foolproof Ways To Prevent AI From Stealing Artwork?

At the current time (mid-2023), short of not posting your images anywhere on the public internet, the answer is no, but there are things you can do that may help reduce your exposure. Preventing AI from misusing your images comes down to choosing which methods to use and how much time and effort to put into them. Options include:

  • staying informed about advancements in the field

  • opting out of AI training datasets

  • blocking website crawlers on your site

  • implementing visible or invisible watermarks

  • monitoring where your images appear online

  • using blockchain technology for ownership records (I will explore blockchain and NFTs in an upcoming series on this blog)

  • keeping up with copyright laws and the cases moving through the courts

Current Ways To Opt-out of AI Training Datasets

There are several ways to attempt to opt out of AI training datasets and keep your images out of future model training. Some organizations are building licensing and opt-out mechanisms directly into their AI training pipelines. Adobe Firefly is one such service: it trains its generative AI model on Adobe Stock. Adobe is also developing Content Credentials, a provenance and authenticity technology from its Content Authenticity Initiative, which will include a "Do Not Train" option for Firefly-enabled apps like Photoshop (Beta).


One of the more promising opt-out services that I have found and used is the "Have I Been Trained" website by Spawning, which lets you opt out of the LAION-5B and LAION-400M datasets. Lastly, you can use reverse image search tools like Google Image Search or Bing Visual Search to monitor if and where your images are being used online. These can be tedious because they search one image at a time. Hopefully, in the future, all image training datasets will support opt-out signals like "Do Not Train" or coordinate with sites like Have I Been Trained.
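
If you want to semi-automate part of that monitoring, perceptual hashing can flag likely copies of your photos. Here is a minimal Python sketch, assuming the third-party Pillow and imagehash packages; the file paths and distance threshold are illustrative, not a verified workflow:

    # pip install pillow imagehash
    from PIL import Image
    import imagehash

    def looks_like_my_photo(original_path, suspect_path, max_distance=8):
        # Perceptual hashes change little under resizing or re-encoding,
        # so a small Hamming distance suggests the same underlying image.
        original = imagehash.phash(Image.open(original_path))
        suspect = imagehash.phash(Image.open(suspect_path))
        return (original - suspect) <= max_distance  # distance in bits

    # Hypothetical usage:
    # print(looks_like_my_photo("my-photo.jpg", "downloaded-copy.jpg"))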

Blocking Website Crawlers With Robots.txt

To protect your images from AI image generators, you can try blocking website crawlers with robots.txt. This text file lives on your web server, so you may need your web developer or service provider to modify it for you; not all providers give you access to the file or will change it, so check with them first. The robots.txt file serves as an instruction manual for web crawlers, specifying which folders or pages to access or avoid. When you add a disallow rule for a path or page, a compliant crawler will skip it. To block entire folders, you can list wildcard paths, like "Disallow: /my-gallery-name/*."
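
For illustration, here is a minimal robots.txt sketch. The gallery path is a placeholder; "CCBot" is the user-agent of Common Crawl's crawler, whose web archives have been used to build image datasets such as LAION:

    # Block Common Crawl's crawler everywhere on the site
    User-agent: CCBot
    Disallow: /

    # Block all crawlers from a hypothetical gallery folder
    User-agent: *
    Disallow: /my-gallery-name/*

Note that the second rule applies to every crawler, search engines included, which is exactly the SEO trade-off discussed next.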


It's important to consider the negative implications, as opting out via robots.txt may limit the exposure and reach of your photographs, particularly for SEO. Search engines like Google respect robots.txt and will not index the pages you mark as "disallow." So, it can be a double-edged sword like so many security and privacy technologies. Also, just so you know, complete protection is not guaranteed, as web crawlers can ignore your robots.txt. Plus, AI systems can still learn from your off-site publicly available images, such as those on social media or other photo-sharing sites. In any case, deciding on your own policy and regularly updating and maintaining your robots.txt is a good idea and will help provide a degree of protection, notwithstanding the SEO downside.

Using Visible, Invisible, and Aggressive Watermarking

While watermarking your images won't stop their inclusion in AI training datasets, it can deter AI image generators from using them, making it more difficult for them to replicate or misuse your content. Remember that while watermarking may be a deterrent against unauthorized image usage, it may not always be completely foolproof against advanced AI algorithms. It may also impact the image's aesthetics for other uses.


Visible watermarking is the most common approach. Watermarks can include logos, text, or patterns overlaid on the image, often with some degree of transparency. They can easily be added in Photoshop and other image editors, and there are many guides and how-to videos online.
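
If you prefer to script it instead of using Photoshop, here is a minimal sketch of a visible text watermark using Python's Pillow library; the file paths and watermark text are placeholders:

    # pip install pillow
    from PIL import Image, ImageDraw, ImageFont

    def add_visible_watermark(src_path, dst_path, text="(c) Your Studio"):
        base = Image.open(src_path).convert("RGBA")
        overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        font = ImageFont.load_default()  # swap in a TrueType font for larger text
        # Semi-transparent white text near the lower-left corner
        draw.text((20, base.height - 40), text, font=font, fill=(255, 255, 255, 128))
        Image.alpha_composite(base, overlay).convert("RGB").save(dst_path, "JPEG")

    # Hypothetical usage:
    # add_visible_watermark("photo.jpg", "photo-watermarked.jpg")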


Invisible watermarks are embedded in the pixel data and can be detected only with specialized software. Creating them requires special services or software, such as Imatag, which may be rather expensive and difficult to justify for many photographers. We've never used this method, so I can't verify its efficacy.
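
To give a feel for how data can hide in pixel values, here is a toy least-significant-bit sketch in Python using Pillow and NumPy. This illustrates the general idea only; it is not how commercial services like Imatag work, and it breaks as soon as the image is re-compressed:

    # pip install pillow numpy
    import numpy as np
    from PIL import Image

    def embed_toy_watermark(src_path, dst_path, message="(c) Your Studio"):
        # Toy example: hide the message in the lowest bit of the red channel.
        img = np.array(Image.open(src_path).convert("RGB"))
        bits = "".join(f"{b:08b}" for b in message.encode("utf-8"))
        red = img[..., 0].flatten()
        if len(bits) > red.size:
            raise ValueError("Image too small for the message")
        for i, bit in enumerate(bits):
            red[i] = (red[i] & 0xFE) | int(bit)
        img[..., 0] = red.reshape(img.shape[:2])
        # Save losslessly: JPEG compression would destroy the hidden bits
        Image.fromarray(img).save(dst_path, "PNG")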

Aggressive watermarking is a stronger technique that combines visible and invisible watermarking to protect images from inclusion in AI training data.

AI Protection Services Like PhotoGuard and Glaze

Slightly off-topic, MIT's "PhotoGuard" takes a different approach to making your images unusable to AI image tools. From my reading and testing, rather than keeping your image out of training data, it appears to target AI manipulation of images uploaded into an image generator. This means that if someone downloaded your image from the web and uploaded it into an AI image generator, PhotoGuard would prevent the generation tool from successfully manipulating it. This promises to be most effective at preventing deepfakes and socially biased images. It works by adding information to the pixels that is supposed to be invisible. While they claim it is invisible, I found it introduced an objectionable amount of noise-like artifacts into the image in my test. It is a promising technology that I expect will become important to news, sports, portrait, and celebrity photographers. I expect frequent improvements, and I will continue to test PhotoGuard from time to time and report back here in our blog.

  • PhotoGuard info: https://news.mit.edu/2023/using-ai-protect-against-ai-image-manipulation-0731

  • PhotoGuard online demo: https://huggingface.co/spaces/hadisalman/photoguard

Glaze by the University of Chicago is "designed to protect human artists by disrupting style mimicry." The image still looks normal to the human eye but not to the AI generator. It is beyond the scope of this blog, which is photography-centric, but you may still want to look into it. One downside is that it's very slow: processing just one image can take between 20 minutes and 2 hours. I'm sure that will improve over time. You can download it at no cost from the Glaze website.

Conclusion

Protecting your images from AI image generators is complex. While complete protection may not be possible, you can take steps to reduce the risk. AI image generators can make use of copyrighted art they find online, so it's crucial to take proactive measures to safeguard your work. These may include aggressive watermarking and blocking website crawlers with robots.txt. Additionally, you can opt out of AI training datasets by utilizing resources like the "Have I Been Trained" website.

I will continue to explore the impact of AI-generated images on fine-art photography and post my observations and opinions in this blog and on our Facebook page. Please subscribe to our newsletter from our contact form and/or our Facebook page to be notified when I post additional articles and updates.
