Summary

  • AI tools like ChatGPT have sparked consumer interest, prompting major players like Google and Meta to jump into the AI game.
  • Meta’s “Made with AI” labeling system aims to help users distinguish real and altered images, but it’s not always accurate.
  • The complexity of editing with AI tools is causing challenges in accurately labeling images, leading to confusion and frustration among photographers.



Artificial intelligence is now everywhere. And while AI was already in use before the recent boom, consumers couldn't really see its potential until the release of OpenAI's ChatGPT. With consumer interest piqued, it was only a matter of time before heavy hitters like Google and Meta threw their hats into the ring with their own AI platforms. Since then, a variety of AI tools have been made available to the public, including services that can generate convincing, lifelike photos and videos.


As these tools have improved, it has become harder and harder to tell what's real and what's not. For this reason, Meta introduced its "Made with AI" labeling system on platforms like Instagram, Facebook, and Threads to help users distinguish what's what. And while this can be a great feature, it looks like, despite Meta's good intentions, it isn't always labeling images accurately.


Meta’s in a tough spot

Meta AI image generation (Source: Meta)

According to TechCrunch, Meta is applying its "Made with AI" label to images that weren't created using AI. Of course, images on the internet aren't always black and white, with many having undergone some kind of alteration. The outlet reports that this isn't simply a matter of images being mislabeled; images that have merely been edited appear to be picking up the label as well.


As you can imagine, this kind of label can be frustrating for photographers, especially when an image wasn't created using AI. While Meta wouldn't tell TechCrunch why this is occurring, the outlet did get a comment from former White House photographer Pete Souza, who suggested the labels might be triggered by changes to Adobe's popular editing software, specifically how cropping now works.

Regardless, editing has become far more complicated than it used to be, especially with new AI tools that can dramatically alter an image at the click of a button. And while some of these edits might be subtle, the resulting images have still technically been manipulated using AI. Of course, there are folks on both sides of the fence: some believe simple edits shouldn't be hit with the AI label, while others believe any manipulation of an image should be noted.


Unfortunately, as of now, Meta doesn't appear to have a way to make these distinctions, so it labels what it can. The company does outline on its website how it detects these images, sharing that it labels "any content that contains industry-standard signals that it's generated by AI." This also includes images that are "created or edited using third-party AI tools."
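Those "industry-standard signals" generally refer to provenance metadata embedded in the file itself, such as the IPTC DigitalSourceType field or a C2PA Content Credentials manifest, which editing tools can write whenever AI features are used. As a rough illustration only (this is not Meta's actual detection pipeline), here is a minimal Python sketch that naively scans a file's raw bytes for two such markers:

```python
# Hypothetical sketch: scan an image file for common AI-provenance markers.
# This is a naive substring search for illustration; real detectors parse
# the JPEG/JUMBF and XMP structures rather than grepping raw bytes.

import sys

# IPTC's standard DigitalSourceType value for AI-generated media
IPTC_AI_SOURCE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
# C2PA manifests are stored in JUMBF boxes labeled "c2pa"
C2PA_LABEL = b"c2pa"

def find_ai_signals(path: str) -> list[str]:
    """Return any AI-provenance markers found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    signals = []
    if IPTC_AI_SOURCE in data:
        signals.append("IPTC DigitalSourceType: trainedAlgorithmicMedia")
    if C2PA_LABEL in data:
        signals.append("possible C2PA Content Credentials manifest")
    return signals

if __name__ == "__main__":
    for marker in find_ai_signals(sys.argv[1]) or ["no AI-provenance markers found"]:
        print(marker)
```

A metadata-based approach like this would help explain the mislabeling complaints: if an editor writes a provenance tag for any AI-assisted operation, however minor, a detector that only checks for the tag's presence can't tell a fully generated image from a lightly retouched one.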

With that said, the brand is eager to make improvements to its technology, sharing that it is actively working with companies to improve its labeling process and that the “intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image.”



For the time being, it looks like the labels will stand, and those who use AI tools to assist with their edits run the risk of their art being labeled as AI-generated. Of course, this could all change in the future as the tools, and Meta's system, are refined.