The social network wants to promote standardized labels to help detect artificially created photo, video and audio material across its platforms.
Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.
On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material that would signal that the content was generated using artificial intelligence.
The standards could allow social media companies to quickly identify content generated with A.I. that has been posted to their platforms and to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools that allow people to quickly and easily create artificial posts.
“While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview.
He added that he hoped this effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content was artificial so that it would be simpler for all of them to recognize it.
As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden that urged people not to vote in a recent primary.