Meta will require political advertisers to disclose whether they have used AI or digital alteration in ads on Facebook and Instagram.
The social media company already has protocols in place for dealing with deepfakes, but believes this additional measure is necessary.
Beginning in January, ads relating to politics, elections, or social issues must disclose whether they include any digitally altered images or video.
A combination of human and AI-based fact checkers will oversee the global policy.
Meta said this will cover altering what someone has said in a video, changing images or footage of real events, and depicting realistic-looking people who do not exist.
Users will be notified when an ad has been digitally altered. Meta told the BBC this information will be included in the ad itself, but did not explain how.
Advertisers don't need to point out minor alterations, such as cropping or color-correcting, "unless the changes are major or relevant to the message or question in the advertisement".
Meta's existing rules on deepfake videos apply to all users, not only advertisers.
Deepfakes are removed if they could mislead a viewer into believing that a subject of the video said words they did not actually say.
Under the updated rules, ads about politics, elections, or social issues must disclose any type of digital alteration, whether made by a human or AI, before being posted on Facebook or Instagram.
Meta's platform Threads adheres to the same regulations as Instagram does.
"If advertisers do not make this declaration when they put up ads, we will reject them, and repeated failure to make this disclosure could result in penalties against the advertiser," Meta said.
Google recently announced a similar policy for its platforms. TikTok does not allow political advertising of any kind.
Video caption: The BBC's James Clayton tries out a deepfake video detector.
Several of the world's most influential democracies are due to hold general elections in 2024, including India, Indonesia, the US, and the UK.
Elections are also approaching next year in Russia, South Africa, and the EU.
Deepfakes, in which AI is used to alter what someone says or does in a video, are a growing concern in politics.
In March, an AI-generated image of former US President Donald Trump that falsely depicted him being arrested was widely circulated on social media.
In the same month, a video, which had been digitally altered to portray Ukrainian President Volodymyr Zelensky as speaking about surrendering to Russia, was widely circulated.
In July, claims that a video of US President Joe Biden was a deepfake were debunked, and the video was confirmed to be genuine.