November 08, 2023

New Dem Artificial Intelligence Working Group Chair Leads Letter to Social Media and AI CEOs Raising Concerns About Deceptive Synthetic Media

Today, New Democrat Coalition Artificial Intelligence Working Group Chair Derek Kilmer (WA-06) led 30 New Dems in a letter to the CEOs of leading generative AI developer companies and social media companies to raise concerns about the rise of synthetic media that is designed to manipulate or deceive online users. 

The increasing ease of both creating and viewing deceptive synthetic media, including deepfakes and synthetic images, has sparked considerable concern about increased threats from adversaries or bad actors who seek to exploit this medium to spread misinformation and disinformation. This can further complicate efforts to combat false narratives about important current events related to politics, culture, natural disasters, and more.

The letter asks these companies to share information about their efforts to identify, monitor, and disclose deceptive synthetic media created using their platforms, and it encourages them to collaborate with lawmakers to develop solutions that address the associated risks.

“Artificial Intelligence is changing how we live, offering new benefits but also new risks,” said New Dem AI Working Group Chair Kilmer. “As we see more synthetic online content intended to deceive users – including videos, images, and audio – it’s clear all of us need to take this seriously. It’s important that platforms where AI-generated content is created, platforms where that content is distributed and shared, and lawmakers are working together to address risks associated with deceptive synthetic media. And it’s crucial that users have the tools to distinguish between authentic and computer-generated content. Our democracy depends on it.” 

The letter reads in part: 

“[W]e are troubled by the development of online environments where users may have to question the veracity of the content they view online without the necessary tools, information, and resources to accurately do so. To that end, we urge generative AI developers and social media platforms to collaborate on solutions that both educate users and appropriately flag and disclose synthetic media, particularly content intended to deceive or manipulate users.”

You can read the full letter below:

Dear Mr. Altman, Dr. Amodei, Mr. Chew, Mr. Holz, Mr. Jassy, Mr. Nadella, Mr. Narayen, Mr. Pichai, Mr. Spiegel, Mr. Suleyman, Ms. Yaccarino, and Mr. Zuckerberg:

We write to express our concerns about the rise of synthetic media that is designed to manipulate or deceive online users, and to encourage collaborative efforts to develop solutions that address its associated risks. Developments in generative artificial intelligence (AI) applications have dramatically increased the ability of users to create and share synthetic media, and significant concerns have been raised about the ability of bad actors to use these services for deceptive purposes, including sharing deceptive content on widely used platforms. The urgency of these concerns is amplified by the risks of misinformation and disinformation, particularly at a time when Americans increasingly receive their news through online sources and social media platforms. Considering these risks, we request information about your efforts to identify, monitor, and disclose this content; the extent to which you have identified findings or trends regarding deceptive synthetic media; and how you have acted, or intend to act, to combat its associated risks.

According to a report published by the U.S. Department of Homeland Security, recent advancements in the quality of deceptive synthetic media have not only lowered the barrier for creators to make this content but have also made it more challenging for casual viewers to identify whether content is fraudulent. For the purposes of this inquiry, synthetic media refers to visual, auditory, or multimodal content that has been generated or modified, often through AI, to create highly realistic outputs, and that may simulate artifacts, persons, or events. These technological advancements have been applied to content mediums ranging from images and videos to audio and written material. Given the range of applications for these technologies, coupled with the increasing ease of both creating and viewing this content, there is considerable concern about increased threats from adversaries or bad actors who seek to spread misinformation or disinformation online using deceptive synthetic media. Already, we have seen deceptive synthetic media used to spread disinformation about current events related to politics, culture, and natural disasters, among other topics.

As a result, we are troubled by the development of online environments where users may have to question the veracity of the content they view online without the necessary tools, information, and resources to accurately do so. To that end, we urge generative AI developers and social media platforms to collaborate on solutions that both educate users and appropriately flag and disclose synthetic media, particularly content intended to deceive or manipulate users. These risks are heightened by the increasingly polarized and divided political environment in the United States, where the prevalence of online “echo chambers” can make it more difficult for users to independently fact-check online content. With the 2024 election cycle imminent, we are concerned that the combination of an increasingly polarized political environment and recent synthetic media advancements may create a perfect storm for the proliferation of disinformation and misinformation by bad actors, which could further undermine faith in U.S. democratic institutions. Indeed, the U.S. Department of Homeland Security has highlighted this risk, offering an example in which deceptive synthetic media such as deepfakes could be employed to shift the tide of an election or cause civic unrest close to a voting day. With that in mind, we believe it is critical that investments be made now to prevent serious consequences in the coming years.

To that end, we share the following questions, to which we hope to receive responses by Friday, December 8, 2023:

Questions for generative AI developer companies:

  1. What efforts is your organization leading to identify and disclose deceptive synthetic media content created using your platforms?

    1. Do your user terms and conditions require users to maintain the identification and disclosure of deceptive synthetic media created using your platforms?

    2. What strategies, techniques, and standards are used to assess the effectiveness of the identification and disclosure method(s) employed by your platforms?

      1. How are these assessments conducted, and what standards and rubrics are applied to assess the effectiveness of the methods in question?

      2. How, if at all, are these strategies, techniques, and standards modified to adapt to different content mediums hosted by your platforms (e.g., images versus text)?

    3. For the identification and disclosure methods employed by your platforms, are penalties applied to users who fail to use and/or maintain these methods appropriately?

    4. What content has been identified as harmful or problematic, such that users are prohibited from creating it using your platforms?

      1. In this vein, does your organization have policies, standards, and requirements in place for the creation of deceptive synthetic media content related to campaigns and elections?

  2. What efforts, if any, is your organization leading to identify and monitor the distribution of deceptive synthetic media created on your platforms, including distribution to social media platforms?

    1. What information can be shared about the primary targets/audiences of deceptive synthetic media content?

    2. What strategies and techniques, including algorithmic techniques and innovations, are employed to identify and monitor this content?

  3. How has your organization acted to manage and mitigate the risks associated with intentionally deceptive synthetic media, including misinformation and disinformation, that may be created on your platforms and subsequently spread to other online platforms?

    1. In this vein, how has your organization acted to improve context literacy and digital literacy for consumer/user awareness of synthetic media on your platforms?

      1. To what extent has your organization identified best practices and lessons learned from these efforts?

  4. Has your organization identified findings, trends, or patterns related to the deceptive synthetic media created on your platforms?

    1. What information can be shared about the primary creators of deceptive synthetic media content on your platforms, including their motivations for using your services?

  5. Is your organization currently partnering with other organizations or industry peers in any of the above efforts, or related efforts?

    1. If there are no current efforts in place, has your organization previously participated in such joint efforts or does it intend to in the future?

    2. What other organizations and/or industries do you believe are important to mitigating misinformation and disinformation online or ought to be a part of this effort?

Questions for social media platform companies:

  1. What efforts is your organization leading to identify deceptive synthetic media on your platforms?

    1. What are the standards and requirements employed to identify this content?

    2. What strategies and techniques, including algorithmic techniques and innovations, are employed to identify this content?

  2. What efforts is your organization leading to monitor the presence of deceptive synthetic media on your platforms?

    1. What are the standards and requirements employed to monitor this content?

    2. What strategies and techniques, including algorithmic techniques and innovations, are employed to monitor this content?

  3. What efforts is your organization leading to disclose the presence of deceptive synthetic media on your platforms?

    1. What are the standards and requirements employed to disclose this content?

    2. What strategies and techniques, including algorithmic techniques and innovations, are employed to disclose this content?

  4. Has your organization identified findings, trends, or patterns related to the presence of deceptive synthetic media on your platforms?

    1. What information can be shared about the frequency of deceptive synthetic media content distributed and/or shared on your platforms?

      1. In this vein, does your organization track the reach of deceptive synthetic media content to users, including how many users view the content and how widely it is circulated and recirculated by users?

    2. What information can be shared about the primary sharers of deceptive synthetic media content, including their motivations?

    3. What information can be shared about the primary targets/audiences of deceptive synthetic media content?

    4. What information can be shared about what portion of deceptive synthetic media was developed using generative AI?

      1. In this vein, what efforts, standards, requirements, and/or processes has your organization employed to identify whether generative AI was used to develop deceptive synthetic media?

      2. What strategies and techniques, including algorithmic techniques and innovations, are employed to identify this content?

  5. What information can be shared about your organization’s capacity to identify, monitor, and disclose different formats of deceptive synthetic media posted to your platform(s), including audio, image, text, and video?

    1. Has your organization identified challenges unique to specific formats of deceptive synthetic media?

    2. Has your organization identified findings, trends, or patterns about the prevalence of different formats of deceptive synthetic media, including which formats are most commonly identified and shared?

  6. How has your organization acted to manage and mitigate the risks associated with deceptive synthetic media, including misinformation and disinformation, on your platforms?

    1. In this vein, how has your organization acted to improve context literacy and digital literacy for consumer/user awareness of synthetic media on your platforms? To what extent has your organization identified best practices and lessons learned from these efforts?

    2. If no such actions have been taken, how does your organization intend to act to manage and mitigate these risks?

  7. Does your organization have policies, standards, and requirements in place for deceptive synthetic media content related to campaigns and elections?

  8. Is your organization currently partnering with other organizations or industry peers in any of the above efforts, or related efforts?

    1. If there are no current efforts in place, has your organization previously participated in such joint efforts or does it intend to in the future?

    2. What other organizations and/or industries do you believe are important to mitigating misinformation and disinformation online or ought to be a part of this effort?

We appreciate your full and fair consideration of this matter, and we look forward to your response.


