Meta (META) announced it is developing technology to better detect and identify AI-generated images across Facebook, Instagram, and Threads in the lead-up to the 2024 elections. The technology, which Meta’s president of global affairs, Nick Clegg, says is currently in development, would inform users when an image in their feed was generated using artificial intelligence.
Currently, Meta adds a watermark and metadata to images generated with its own Meta AI software. Now the company is looking to extend that capability to images produced by other companies’ tools, including those from Adobe (ADBE), Google (GOOG, GOOGL), Midjourney, Microsoft (MSFT), OpenAI, and Shutterstock (SSTK).
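The metadata approach can be illustrated with a toy example. The IPTC photo-metadata standard defines a `DigitalSourceType` value, `trainedAlgorithmicMedia`, that generators can embed in an image’s XMP packet to flag AI-generated content. The sketch below simply scans raw image bytes for that marker; it is a bare-bones illustration of the tagging idea, not Meta’s detection pipeline, and the sample bytes are fabricated for demonstration.

```python
# Toy detector: look for the IPTC "trainedAlgorithmicMedia" marker inside
# an image's embedded XMP metadata packet. Real detectors are far more
# robust; this only illustrates the metadata-tagging concept.
import re

# IPTC DigitalSourceType value used to flag AI-generated media
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain an XMP packet declaring the
    IPTC DigitalSourceType for AI-generated content."""
    xmp = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", image_bytes, re.DOTALL)
    if not xmp:
        return False
    return AI_MARKER in xmp.group(0)

# Fabricated sample bytes with an embedded XMP snippet, for illustration:
sample = (b"\xff\xd8...<x:xmpmeta xmlns:x='adobe:ns:meta/'>"
          b"<rdf:Description Iptc4xmpExt:DigitalSourceType="
          b"'http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia'/>"
          b"</x:xmpmeta>...")
print(looks_ai_generated(sample))  # -> True
```

A key weakness of this approach, as Clegg notes later, is that metadata like this can simply be stripped from the file.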
Clegg says Meta is working with the Partnership on AI, a nonprofit made up of academics, civil society professionals, and media organizations dedicated to ensuring AI has “positive outcomes for people and society,” to come up with standards that can be used to identify AI images across the web.
Experts say generative AI images could create a tsunami of disinformation in the lead-up to the election. We’re already seeing some of what’s to come in real time. When former President Trump was arrested in New York in 2023, images that appeared to show him running from police spread across the web.
And a likely AI-generated image of an explosion outside the Pentagon briefly sent stocks sliding in 2023 before officials confirmed nothing out of the ordinary had occurred.
While Meta’s moves will help identify AI-generated images, they won’t be able to recognize AI-generated videos or audio. To that end, Clegg said, the company is adding a feature that people can use to label AI-generated video and audio they share across Meta’s platforms. If users don’t, he explained, they may face penalties.
“While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies,” he said.
AI-generated audio and video have already been used to spread disinformation. In 2022, a deepfake video of Ukrainian President Volodymyr Zelensky appearing to tell troops to put down their weapons hit the web. And in January, a deepfake audio recording imitating President Biden told New Hampshire voters not to take part in the presidential primary.
Clegg also pointed out that though Meta will be able to identify AI-generated images, those markers can still be manipulated or removed. To address that, he said, the company’s Fundamental AI Research (FAIR) team is working on a way of embedding a watermark into an image as it is generated, so that it can’t be removed.
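To see what pixel-level watermarking means in the simplest possible terms, the toy sketch below hides a bit string in the least-significant bits of an image’s pixels. This is emphatically not FAIR’s technique: LSB watermarks are trivially destroyed by compression or editing, which is exactly why researchers are pursuing watermarks baked in during generation that survive such manipulation.

```python
# Toy least-significant-bit (LSB) watermark: hide a bit string in pixel
# values, then read it back. Illustrative only; fragile by design.
import numpy as np

def embed_bits(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write each bit into the LSB of successive pixel values."""
    flat = pixels.flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear LSB, then set it to the bit
    return flat.reshape(pixels.shape)

def extract_bits(pixels: np.ndarray, n: int) -> list[int]:
    """Read the first n LSBs back out."""
    return [int(v & 1) for v in pixels.flatten()[:n]]

# Round-trip demo on a flat gray 4x4 image
watermark = [1, 0, 1, 1, 0, 0, 1, 0]
img = np.full((4, 4), 128, dtype=np.uint8)
stamped = embed_bits(img, watermark)
print(extract_bits(stamped, len(watermark)))  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```

Because the mark lives in the lowest bit, re-encoding the image (e.g., JPEG compression) wipes it out; generation-time watermarks aim to be robust to exactly those transformations.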
Generative AI content is still a new concept to many people on the web. And while the concept of Photoshopping an image isn’t novel, the ability to generate a huge number of images in no time and flood social media with them is.
The deluge of AI-generated explicit images of Taylor Swift that circulated across sites, including X, formerly known as Twitter, illustrates how easy it is for such content to propagate across huge swaths of the internet in no time. Meta’s solution is just one tool in what will have to be an expansive chest of options to combat the new generation of misinformation.
On Monday, Meta’s Oversight Board criticized the company over how it currently handles manipulated video content. The board said it is “concerned about the Manipulated Media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content is created, rather than on which specific harms it aims to prevent, such as disrupting electoral processes.”
The board suggested Meta should “begin labeling manipulated content, such as videos altered by artificial intelligence (AI) or other means when such content may cause harm.”
The board’s remarks came in response to a video of President Biden that was altered without using AI and allowed to stay up because it didn’t run afoul of the company’s Manipulated Media policy. The board said Meta needs to add context to videos that are manipulated by means other than AI.