Take Eric Paré, whose fantastical light painting images taken on the Uyuni Salt Flats in Bolivia fell foul of Instagram’s AI detector yesterday, despite Paré also sharing a photo of the back of his camera that clearly shows the image is not AI-generated.
This type of otherworldly image is particularly sensitive to accusations of AI generation and it is of vital importance to creatives like Paré that the audience knows they are looking at the work of a photographer who has busted a gut to get a great shot.
Paré told me that the photos had gone through an AI denoise program (taking noise out of an image is a normal editing task and definitely does not mean it was “Made with AI”).
Tellingly, when Paré screenshotted the images, Instagram did not tag them as AI. This is consistent with my findings earlier this week, when I used Photoshop’s Generative Fill tool to remove a speck of dust and my photo got flagged as AI on Instagram. But when I copied and pasted that same image onto a new document, it got past the censors.
The fact this workaround is viable shows that Meta is very likely looking for the Content Credentials tags that are embedded in an image when it is processed with Adobe Photoshop, while entirely missing the point of why these Credentials exist in the first place. They were never designed to be distilled down to a blanket statement like “Made with AI.” It’s overly reductive and does a disservice to the creator of the image and the overall mission of the Content Authenticity Initiative (CAI).
Matt Growcoot
This might be the first controversy I encountered via Threads – which fits, since without much thought I followed many of the same people I follow on Instagram, so talk around photography pops up constantly in my feed. While many staunchly insist that any editing involving AI tools makes the labeling legitimate, I lean towards the opposing camp.
This generic “Made with AI” tag used on Instagram (and Facebook and Threads for that matter, but since those were never photo-first platforms, nobody paid much attention to them) makes it seem that AI contributed overwhelmingly to the end result shared online. Meta’s post announcing these labels even says ‘AI-Generated’ multiple times, never ‘AI-Edited’ – although editing does come up in their help article, which is somewhat disingenuous in itself.
Photo editing comes in countless forms, from noise reduction to dust spot removal to more extensive object removal; these tasks were done with non-generative tools for years, so suddenly labeling them as ‘made with AI’ just because the editing software began using AI is ridiculous. I suspect the large wave of labels people started seeing is because Lightroom added Generative Remove last month, Adobe labels such edits with various Content Credentials tags, and Meta is simply checking for those tags and applying AI labels indiscriminately. Hardly surprising for Meta to put minimal effort into such a complex topic and call it a day. A ‘Made with AI’ label should be reserved for images generated from text prompts, but properly detecting those would involve a lot more work on Meta’s part.
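This metadata-checking approach is also why the screenshot and copy-paste workarounds succeed: provenance tags travel in the file’s metadata, not in its pixels, so any re-encoding that carries over only the pixels silently drops them. A minimal sketch of that effect, using Pillow and the EXIF Software tag as a simple stand-in (real Content Credentials actually live in C2PA/JUMBF segments and are cryptographically signed; the “Hypothetical AI Editor” string is made up for illustration):

```python
# Toy demonstration: provenance markers live in file metadata, not pixels,
# so re-encoding just the pixels strips them. Requires Pillow.
# NOTE: real Content Credentials use C2PA/JUMBF, not EXIF; this is a stand-in.
import tempfile
from pathlib import Path

from PIL import Image

SOFTWARE_TAG = 0x0131  # EXIF "Software" tag

with tempfile.TemporaryDirectory() as tmp:
    original = Path(tmp) / "original.jpg"
    copied = Path(tmp) / "copied.jpg"

    # Simulate an editor stamping the file with a provenance marker.
    img = Image.new("RGB", (64, 64), "salmon")
    exif = Image.Exif()
    exif[SOFTWARE_TAG] = "Hypothetical AI Editor"
    img.save(original, exif=exif.tobytes())

    with Image.open(original) as tagged:
        # The marker survives inside the saved file's metadata.
        print("original:", tagged.getexif().get(SOFTWARE_TAG))

        # Simulate "copy-paste into a new document": only pixels move over.
        clean = Image.new("RGB", tagged.size)
        clean.paste(tagged)
        clean.save(copied)

    with Image.open(copied) as untagged:
        # The re-encoded copy carries no marker, so a metadata-only
        # detector sees nothing to flag.
        print("copy:", untagged.getexif().get(SOFTWARE_TAG))
```

The same logic explains why a screenshot – a brand-new file rendered from on-screen pixels – sails past the detector while the untouched original gets labeled.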
The broader issue I see is that it promotes the idea that AI is suddenly everywhere and far more capable than it really is. That might in fact be Meta’s endgame here: blur the line between actual images and AI-generated content so that its users stop caring as their feeds are taken over by AI material, and the company can demote and stop paying real creators, from news media to photographers and videographers.
I haven't read much about Content Credentials, but the concept of a system to track the veracity of images is certainly welcome. Personally, I think it should apply to all editing, not just AI tools. Earlier this year I took part in a portrait photography workshop that included a demonstration of retouching. I was frankly taken aback by the heavy use of Photoshop needed to arrive at that specific look, from frequency separation to never-ending dodging, burning, warping, and hair removal, to finally applying textures via layers. It was manual work, but was the result a genuine photo, or something more akin to painted art? And if people worry about the negative effects on young women of comparing themselves to idealized portrayals of women in art, should we not consider labeling these heavily altered images too, to remind them that the actual model does have small imperfections, like any other human?