Artificial intelligence (AI) experts and industry executives, including renowned AI pioneer Yoshua Bengio, have joined forces to call for increased regulation surrounding the creation of deepfakes, citing potential societal risks associated with the technology.
According to Reuters, UC Berkeley AI researcher Andrew Critch and the group penned an open letter highlighting the growing use of deepfakes, noting that the AI-generated imagery often involves sexual content, fraud, or political disinformation. With AI advancements rapidly improving deepfake creation, the signatories stressed the urgent need for safeguards.
Deepfakes, realistic yet fabricated images, audio, and videos generated by AI algorithms, have become increasingly difficult to distinguish from human-created content due to recent technological advancements.
Titled “Disrupting the Deepfake Supply Chain,” the letter outlines recommendations for regulating deepfakes, including the full criminalisation of deepfake child pornography, criminal penalties for individuals who knowingly create or facilitate harmful deepfakes, and requirements that AI companies prevent their products from generating harmful content.
As of Wednesday 21st February, the letter had garnered 420 signatures from individuals across various industries, including academia, entertainment, the tech industry, and politics.
The call for regulation comes amid growing concerns about the potential societal impacts of AI technology, and the letter aligns with broader activism aimed at ensuring AI systems do not harm society. Tech billionaire Elon Musk has also called for a six-month pause in the development of AI systems more powerful than OpenAI’s GPT-4 model.
Regulators have increasingly focused on addressing the risks associated with AI technology since the introduction of ChatGPT by Microsoft-backed OpenAI in late 2022. ChatGPT gained attention for its ability to engage users in human-like conversation and perform various tasks, raising concerns about the potential misuse of AI-generated content.