AI notification required for election ads
In 2024, elections will be held in both the United States and the United Kingdom. Because a great deal of disinformation was spread during previous US elections, Google is taking several measures to prevent this as much as possible. It will now be mandatory for advertisers running election ads to disclose whether an ad is AI-generated. The measure is mainly intended to prevent the use of so-called 'deepfakes'.
From mid-November, all advertisers running election ads will be required to add a clear and prominent statement when their ads contain AI-generated content. This applies to images, video and audio on all of Google's platforms.
Artificial intelligence (AI) is increasingly being used to create content. The cybersecurity company Mandiant, which is owned by Google, also sees AI being used more and more for online disinformation campaigns. In particular, 'deepfakes' (fake videos; a portmanteau of 'deep learning' and 'fake') are used to make it appear that political opponents made statements they never actually made.
As deepfakes created by AI algorithms keep improving, it is becoming increasingly difficult for voters to distinguish real from fake.
With the new measure, Google is trying to address the criticism it received after previous elections that it did too little to combat disinformation.