Meta, the parent company of Facebook and Instagram, has announced a new policy requiring advertisers to disclose when they use AI or other digital methods to create or alter political or social issue ads. The policy takes effect in early 2024 and applies globally.
The policy applies to any ad that portrays a real person as saying or doing something they did not say or do, depicts a realistic-looking person who does not exist or a realistic-looking event that did not happen, or alters footage of a real event. Advertisers do not need to disclose digital creation or alteration that is inconsequential or immaterial to the claim, assertion, or issue raised in the ad.
Meta says it is implementing the policy to help people recognize when they are seeing AI-altered content in political ads, since AI can now produce highly realistic content that is nonetheless fabricated. The company also says the disclosure requirement will help protect people from misinformation and disinformation.
The new policy is part of a broader effort by Meta to combat misinformation and disinformation on its platforms. In recent years, the company has faced criticism for its role in spreading misinformation, including during the 2016 U.S. presidential election. Meta has taken a number of steps to address these concerns, including investing in fact-checking and hiring more content moderators.
The source for this piece is an announcement published by Meta (Facebook).