Detecting deepfakes and Generative AI

Background

Rapid developments in AI technology, such as deep learning, have led to the proliferation of misinformation through deepfakes: synthetic AI-generated media (video, images, text or audio) that are becoming increasingly difficult to detect, both by the human eye and by existing detection technologies. These developments significantly increase cybersecurity risks and digital copyright infringement, and potentially undermine trust in digital systems.

Objectives of the workshop

The rise of generative AI technology calls for a focus on international standards for determining the authenticity of multimedia, the use of watermarking technology, enhanced security protocols and extensive cybersecurity awareness.
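To make the idea of watermarking concrete, the following is a minimal sketch of an invisible image watermark. It uses a naive least-significant-bit (LSB) scheme chosen purely for illustration; it is far simpler and less robust than the AI watermarking systems the workshop will discuss, and it does not represent any particular standard. It assumes numpy and Pillow are installed, and the file name is hypothetical.

import numpy as np
from PIL import Image

def embed_watermark(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide the payload bits in the least significant bit of each pixel byte."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a writable copy
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    # Clear each target LSB, then write one payload bit into it
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read the hidden payload back out of the LSBs."""
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Hypothetical usage: mark an image as synthetic, then recover the label
img = np.asarray(Image.open("original.png").convert("RGB")).copy()
payload = b"provenance:synthetic"
marked = embed_watermark(img, payload)
print(extract_watermark(marked, len(payload)))

A key limitation this sketch makes visible: LSB marks are destroyed by re-encoding or resizing, which is why the robustness of watermarking against such transformations is itself a subject of standardization discussions.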

Governments and international organizations are already working to establish policy measures, codes of conduct and regulations that enhance the security of, and trust in, AI systems.

The main objectives of the workshop are to:

  1. Provide an overview of the current risks of deepfakes and AI-generated multimedia, and the challenges regulators face in ensuring a safe, secure and trusted environment;
  2. Discuss the effectiveness of AI watermarking, multimedia authenticity and deepfake detection technologies, their use cases, governance issues and gaps that need to be addressed;
  3. Discuss the areas where technical standards are required and where ITU will have an important role to play;
  4. Explore opportunities for collaboration on standardization activities on AI watermarking and multimedia authenticity protocols;
  5. Highlight the importance of policy measures for the international governance of AI, industry-led initiatives such as the Coalition for Content Provenance and Authenticity (C2PA), and the work of international organizations in this area.