The parent company of Facebook and Instagram will sniff out and label robot-created videos, photos, and audio
Meta will start labeling AI-generated content on Facebook and Instagram from May onwards, the tech giant has announced. Until now, the company's policy had been to remove certain manipulated content outright rather than label it.
The company will apply “Made with AI” labels to photo, audio, or video content created with artificial intelligence, it explained in a blog post on Friday. These labels will either be applied automatically when Meta detects “industry-shared signals” of AI content, or when users voluntarily disclose that something they post was created with AI.
If the content in question carries “a particularly high risk of materially deceiving the public on a matter of importance,” a more prominent label may be applied, Meta stated.
At present, Meta’s ‘manipulated media’ policy only covers videos that have been “created or altered by AI to make a person appear to say something they didn’t say.” Content violating this policy is removed rather than labeled.
The new policy expands this dragnet to videos showing someone “doing something they didn’t do,” and to photos and audio. However, it is more relaxed than the old approach in that the content in question will be allowed to remain online.
“Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos,” the company explained. “In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving.”
Since the beginning of this year, US regulators have announced a ban on AI-generated “robocalls” after New Hampshire residents were contacted by a computer-generated Joe Biden urging them to sit out the state’s Democratic primary election. The White House, meanwhile, has promised to “deal with” the problem of non-consensual porn after fake nude photos of pop star Taylor Swift spread on social media. Former US President Donald Trump has also weighed in on the issue, accusing US media outlets of using AI to make him appear fatter in photographs.
Meta is not the only Big Tech firm to combat artificial content with labels. As of last year, TikTok asks users to label their own AI-generated content, while giving other users the option to report content they suspect was AI-generated. YouTube introduced a similar honor-based system last month.
With pivotal elections taking place in the EU in June and the US in November, lawmakers have pushed tech firms to take action against AI-created “deepfakes,” which they argue could be used to deceive voters. Earlier this year, Microsoft, Meta, and Google joined more than a dozen other industry leaders in promising to “help prevent deceptive AI content from interfering with this year’s global elections.”
Platforms such as TikTok and YouTube that use honor systems may soon be forced to take Meta’s approach, however. Under a provision of the EU’s AI Act, which comes into effect next summer, tech companies will be fined for not detecting and identifying AI-created content, including text “published with the purpose to inform the public on matters of public interest.”