Google has announced plans to restrict the responses that its AI chatbot, Bard, and its generative search experience provide for certain election-related queries. In a blog post on December 19, Google said it intends to implement these limitations by early 2024, in anticipation of the upcoming U.S. presidential election.
Highlighting the significance of global elections scheduled for 2024, Google expressed a commitment to scrutinize the role of artificial intelligence (AI) during this period. One key focus is to assist users in identifying AI-generated content, aligning with Google’s earlier move in September to mandate AI disclosures in political campaign ads.
In a parallel development, YouTube, a subsidiary of Google, updated its policies in November 2023, compelling creators to disclose their use of generative AI or face potential account suspension.
Google also pointed to SynthID, a tool from Google DeepMind that is currently in beta. SynthID embeds a digital watermark directly into AI-generated images and audio, providing a means to identify and verify the origin of such content.
Google's move follows Meta's decision in November to bar political advertisers from using its generative AI ad-creation tools. Scrutiny of AI's role in elections has intensified as the U.S. elections approach, and studies have indicated that AI usage on social media can influence voter sentiment.
One European study found that Microsoft's Bing AI chatbot, since rebranded as Copilot, provided misleading or inaccurate information about elections in roughly 30% of its responses. The finding underscores broader concerns about AI's role in shaping public discourse during electoral processes.