Take Eric Paré, whose fantastical light painting images taken on the Uyuni Salt Flats in Bolivia fell foul of Instagram’s AI detector yesterday despite Paré also sharing a photo of the back of his camera showing clearly that the photo is not AI-generated.
This type of otherworldly image is particularly vulnerable to accusations of AI generation, and it is vital for creatives like Paré that the audience knows it is looking at the work of a photographer who busted a gut to get a great shot.
Paré told me that the photos had gone through an AI denoising program (removing noise from an image is a routine editing task and certainly does not mean it was “Made with AI”).
Tellingly, when Paré screenshotted the images, Instagram did not tag them as AI. This is consistent with my findings earlier this week, when using Photoshop’s Generative Fill tool to remove a speck of dust got my photo flagged as AI on Instagram. But when I copied and pasted that same image onto a new document, it got past the censors.
The fact that this workaround is viable suggests that Meta is not only looking for the Content Credentials tags embedded in an image when it is processed with Adobe Photoshop, but also entirely missing the point of why these Credentials exist in the first place. They were never designed to be distilled into a blanket label like “Made with AI.” That is overly reductive and does a disservice both to the creator of the image and to the overall mission of the Content Authenticity Initiative (CAI).
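The mechanics here are worth spelling out: Content Credentials are C2PA manifests stored as metadata segments inside the image file itself (in JPEGs, as JUMBF boxes carried in APP11 marker segments), so a screenshot or a paste into a fresh document produces a new file with no such segments to find. As a rough sketch of why the workaround succeeds, here is a minimal, simplified detector of my own devising (not Meta’s or Adobe’s actual code, and only a heuristic scan for the “c2pa” box label rather than a full C2PA parser):

```python
import struct

def iter_jpeg_segments(data: bytes):
    """Yield (marker, payload) for each metadata segment in a JPEG byte string.

    Minimal illustrative parser: walks FF xx marker segments after the SOI
    and stops at SOS (start of scan) or EOI.
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI / SOS: no further metadata segments
            break
        # segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        yield marker, data[i + 4 : i + 2 + length]
        i += 2 + length

def has_content_credentials_hint(data: bytes) -> bool:
    """Heuristic: C2PA manifests ride in APP11 (0xEB) JUMBF segments,
    so look for the 'c2pa' box label in APP11 payloads only."""
    return any(marker == 0xEB and b"c2pa" in payload
               for marker, payload in iter_jpeg_segments(data))

# A toy "edited in Photoshop" file: SOI + APP11 segment carrying a c2pa label.
payload = b"JP\x00\x01...jumb...c2pa..."
app11 = b"\xff\xeb" + struct.pack(">H", len(payload) + 2) + payload
tagged = b"\xff\xd8" + app11 + b"\xff\xd9"

# A screenshot/re-encode of the same picture: fresh file, no metadata segments.
stripped = b"\xff\xd8" + b"\xff\xd9"

print(has_content_credentials_hint(tagged))    # True
print(has_content_credentials_hint(stripped))  # False
```

The point of the toy example is that any detector keyed on embedded provenance metadata sees the pixels of the screenshot as a brand-new, unlabeled image, which is exactly the behavior Paré and I observed.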
Matt Growcoot
This might be the first controversy I encountered via Threads, which fits: I followed many of the people I follow on Instagram without much thought, so photography talk pops up constantly in my feed. While many staunchly insist that any editing involving AI tools makes the label legitimate, I lean toward the opposing camp.