Meta to require disclosures for AI-created, altered political ads
Published by Jessica Weisman-Pitts
Posted on November 9, 2023
2 min read · Last updated: January 31, 2026

(Reuters) – Meta Platforms said on Wednesday that, starting in 2024, advertisers will have to disclose when artificial intelligence (AI) or other digital methods are used to alter or create political, social, or election-related advertisements on Facebook and Instagram.
Meta, the world’s second-biggest platform for digital ads, said in a blog post it would require advertisers to disclose if their altered or created ads portray real people as doing or saying something that they did not, or if they digitally produce a real-looking person that does not exist.
The company would also ask advertisers to disclose if these ads show events that did not take place, alter footage of a real event, or even depict a real event without the true image, video, or audio recording of the actual event.
The policy updates, including Meta’s earlier announcement barring political advertisers from using generative AI ad tools, come a month after the Facebook owner said it was starting to expand advertisers’ access to AI-powered advertising tools that can instantly create backgrounds, image adjustments, and variations of ad copy in response to simple text prompts.
Alphabet’s Google, the biggest digital advertising company, announced the launch of similar image-customizing generative AI ads tools last week and said it planned to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.
Lawmakers in the U.S. have been concerned about the use of AI to create content that falsely depicts candidates in political advertisements to influence federal elections, with a slew of new “generative AI” tools making it cheap and easy to create convincing deepfakes.
Meta has already been blocking its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures, and its top policy executive, Nick Clegg, said last month that the use of generative AI in political advertising was “clearly an area where we need to update our rules.”
The company’s new policy will not require disclosures when the digital content is “inconsequential or immaterial to the claim, assertion, or issue raised in the ad,” such as adjusting image size, cropping, color correction, or image sharpening, it said.
(Reporting by Katie Paul, Devika Nair and Shubham Kalia; Editing by Nivedita Bhattacharjee)