ChatGPT creator OpenAI has announced that it will add new digital watermarks to DALL-E 3 images in an effort to enhance transparency and combat the spread of misinformation.
In a blog post published on 6th February, the company revealed that it has collaborated with the Coalition for Content Provenance and Authenticity (C2PA) to incorporate watermarks into its AI-generated images.
The move aims to bolster public trust in digital information by enabling users to verify whether images have been generated by AI using platforms like Content Credentials Verify. Media organisations are also adopting this standard to authenticate content sources.
The approach isn’t a perfect solution, however, because the watermark can be easily removed by people seeking to pass off AI-generated images as authentic. Despite this, OpenAI believes that implementing such measures is a vital step towards addressing concerns surrounding the spread of AI-generated content, especially in the context of upcoming elections, where misinformation poses a significant threat.
Instances of AI-generated content impersonating political figures such as US President Joe Biden and UK Prime Minister Rishi Sunak, as well as explicit deepfakes targeting celebrities like Taylor Swift, have shown the urgent need for measures to tackle this issue. In response, fellow tech company Meta has announced plans to label AI-generated images on its social media platforms Facebook, Instagram, and Threads as part of its crackdown on the dissemination of misleading content.
OpenAI has stressed that while C2PA metadata is already embedded in images generated using the web version of DALL-E 3, efforts are underway to extend this feature to mobile users by 12th February. However, the company cautioned that digital watermarks are not a foolproof solution, as they can be easily removed, either accidentally or intentionally.
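To give a sense of how such provenance data sits inside a file: the C2PA standard stores its manifest in JUMBF boxes, which in JPEG images are carried in APP11 marker segments. The sketch below is purely illustrative (it is not OpenAI's implementation, and the function name is an assumption for this example); it walks a JPEG's marker segments and reports whether an APP11 segment containing a C2PA identifier is present, which also illustrates why the metadata is fragile: stripping those segments removes the watermark entirely.

```python
def has_c2pa_metadata(data: bytes) -> bool:
    """Illustrative check: scan JPEG APP11 segments for a C2PA (JUMBF) marker.

    This is a simplified sketch, not a full C2PA validator.
    """
    if not data.startswith(b"\xff\xd8"):  # JPEG files begin with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # every segment starts with 0xFF
            break
        marker = data[i + 1]
        if marker == 0xD9:  # EOI marker: end of image
            break
        # Segment length (big-endian, includes the two length bytes)
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + length]
        # C2PA manifests travel in APP11 (0xEB) JUMBF segments
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False
```

A tool that re-encodes or strips metadata from the image would simply drop these APP11 segments, which is exactly the removal weakness OpenAI acknowledges.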
The challenges associated with identifying AI-generated content have been widely recognised, with studies highlighting the vulnerabilities of existing watermarking systems. Moreover, early attempts to develop systems for detecting AI-generated written content have faced accuracy concerns, leading OpenAI to quietly take down its own detection service.
By Derrick Kafui Deti