New York (CNN) — With more than half of the world's population set to vote in elections around the globe this year, technology industry leaders, lawmakers and civil society groups have raised alarms that artificial intelligence could sow confusion among voters and be used to maliciously deceive them. Now, a group of major technology companies say they are working together to address the threat.
More than a dozen tech companies building or using AI technology pledged Friday to work together to detect and counter harmful AI content in elections, including deepfakes of political candidates. Signatories include OpenAI, Google, Meta, Microsoft, TikTok, Adobe, and more.
The agreement, called the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” includes commitments to collaborate on technology to detect misleading AI-generated content, to address potentially harmful AI content, and to be transparent with the public about those efforts.
“AI didn't create election deception, but we must ensure it doesn't help deception flourish,” Microsoft President Brad Smith said in a statement Friday at the Munich Security Conference.
Technology companies generally don't have a strong track record of self-regulation or of enforcing their own policies. But the agreement comes as regulators remain slow to create guardrails for rapidly advancing AI technology.
A growing number of new AI tools offer the ability to quickly and easily generate convincing text, as well as realistic images, video and audio that experts warn could be used to spread false information and mislead voters. The announcement of the agreement comes a day after OpenAI unveiled Sora, a stunningly realistic new AI text-to-video generation tool.
“My worst fears are that we — the field, the technology, the industry — cause significant harm to the world,” OpenAI CEO Sam Altman told Congress at a hearing in May, urging lawmakers to regulate AI.
Some companies have already partnered to develop industry standards for adding metadata to AI-generated images, so that other companies' systems can automatically identify those images as computer-generated.
Friday's agreement takes those cross-industry efforts a step further: signatories commit to working together on ways to attach machine-readable signals to AI-generated content indicating where it originated, and to assessing their AI models for the risk that they could generate deceptive, election-related AI content.
The companies also said they would work together on an education campaign to teach the public how to “protect themselves from being manipulated and deceived by this content.”
However, some civil society groups worry that the pledge does not go far enough.
“Voluntary commitments like the ones announced today are woefully inadequate to address the global challenges facing democracies,” Nora Benavidez, senior counsel and director of digital justice and civil rights at the tech and media watchdog Free Press, said in a statement. “Every election cycle, tech companies pledge to vague democratic standards and then fail to fully deliver on those promises. To address the real harms that AI poses in a busy election year…we need robust content moderation that involves human review, labeling and enforcement.”
The-CNN-Wire™ & © 2024 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.