The European Union (EU) is taking a proactive approach to combat the spread of disinformation online. In a recent meeting with over 40 signatories of the Code of Practice on Online Disinformation, the EU’s values and transparency commissioner, Vera Jourova, emphasized the need for platforms to identify and label deepfakes and other AI-generated content. This groundbreaking initiative aims to enhance transparency and protect users from the potential negative consequences of AI technology. Let’s delve into the details and understand why this step is crucial in the ongoing battle against disinformation.
Recognizing the Dark Side of AI Technology
As AI technologies continue to evolve, they offer tremendous potential for innovation and creativity. However, they also pose significant risks and potential harm to society. Jourova aptly highlighted the concerns associated with AI-generated disinformation: advanced chatbots, image generators, and voice-generating software can produce convincing content within seconds, making it challenging to distinguish real information from fake. Acknowledging this dark side of AI is crucial to devising effective strategies against disinformation.
Addressing the Challenges of AI-Generated Disinformation
Jourova urged signatories of the Code to create a dedicated track for discussing the challenges posed by AI-generated content. The EU commissioner outlined two primary discussion angles within the Code. The first focuses on services integrating generative AI, such as Microsoft’s New Bing or Google’s Bard AI-augmented search services. These services must incorporate necessary safeguards to prevent malicious actors from using them to spread disinformation. The second angle emphasizes the responsibility of platforms that have the potential to disseminate AI-generated disinformation. These platforms should implement technology to detect and clearly label such content for users.
Pursuing Technological Solutions for Labeling AI-Generated Content
Jourova revealed that she had discussed the matter with Google’s CEO, Sundar Pichai, who confirmed that Google possesses technology capable of detecting AI-generated text and is continuing to refine it. The EU commissioner emphasized the need for clear and fast labeling of deepfakes and other AI-generated content so that users can easily identify machine-generated material. The Commission is pushing for the immediate implementation of labeling mechanisms on platforms to combat the spread of disinformation effectively.
Augmenting Existing Regulations
The Digital Services Act (DSA) already includes provisions requiring large online platforms to label manipulated audio and imagery. The EU aims to accelerate this by adding labeling requirements to the Code of Practice on Online Disinformation, ensuring that platforms adopt these measures even before the August 25 compliance deadline under the DSA. Jourova reiterated the importance of protecting freedom of speech while noting that machines do not enjoy the same right, underscoring the fundamental principles that underpin the EU’s legal framework.
Action and Reporting Requirements
The Commission expects signatories to take action and report next month on the risks associated with AI-generated disinformation. Jourova stressed the significance of public transparency, urging signatories to inform the public about the safeguards they are implementing to prevent the misuse of generative AI in spreading disinformation. With 44 signatories, including tech giants like Google, Facebook, and Microsoft, the Code of Practice on Online Disinformation is gaining momentum in its efforts to combat misinformation and protect users.
Twitter’s Withdrawal and Sanctions
Regrettably, Twitter recently withdrew from the voluntary EU Code, raising concerns about its commitment to combating disinformation. In response, the EU has warned Twitter of potential sanctions if it fails to comply with the new digital content rules that take effect across the EU on August 25. The EU expects Twitter to operate under the Digital Services Act and take the necessary measures to mitigate the risks associated with illegal content. Non-compliance could result in fines of up to 6% of Twitter’s global revenue or a complete ban across the EU. The European Union seeks a cooperative approach from platforms, emphasizing their increased responsibility to combat harmful content effectively.
Battling Russian Disinformation
Jourova drew attention to the ongoing challenge of Russian disinformation and war propaganda, particularly targeting central and eastern European countries. She stressed the urgent need to tackle this issue comprehensively by bolstering fact-checking initiatives, enhancing language-understanding capabilities, and addressing the underlying reasons why certain member states are more susceptible to disinformation campaigns.
Our Say
The EU’s push to identify deepfakes and AI-generated content within the Code of Practice on Online Disinformation represents a significant step forward in combating the spread of disinformation. By addressing the challenges posed by AI technology and encouraging platforms to implement clear labeling mechanisms, the EU aims to protect users from the potential negative consequences of AI-generated content. With continued efforts and collaboration among signatories, the battle against disinformation can gain momentum, leading to a safer and more informed online environment.