OpenAI’s Chief Executive, Sam Altman, has announced a collaboration with the U.S. AI Safety Institute to provide early access to the company’s forthcoming generative AI model for safety testing.
The partnership, revealed in a post on X, is intended to assess and address potential risks in AI systems before release.
The U.S. AI Safety Institute, a federal body under the National Institute of Standards and Technology (NIST), focuses on ensuring the safe and secure development of AI technologies. OpenAI’s agreement to work with the institute follows a similar arrangement with the U.K.’s AI safety body in June. These moves appear to counter claims that OpenAI has deprioritized AI safety to advance more powerful AI technologies.
In a recent letter to Altman, five senators, including Brian Schatz (D-HI), questioned OpenAI’s safety policies. OpenAI’s Chief Strategy Officer, Jason Kwon, responded, affirming the company’s commitment to rigorous safety protocols. The timing of the agreement with the U.S. AI Safety Institute coincides with OpenAI’s endorsement of the Future of Innovation Act, a Senate bill that would formalize the institute’s role in setting AI standards.
Altman, who also serves on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, has significantly increased OpenAI’s federal lobbying efforts, with expenditures reaching $800,000 in the first half of 2024.
The AI Safety Institute, whose consortium includes members from tech giants such as Google, Microsoft, Meta, Apple, Amazon, and Nvidia, is tasked with developing guidelines for AI safety and risk management, as outlined in President Joe Biden’s October 2023 AI executive order.
OpenAI’s recent efforts, including the planned creation of a safety commission and a renewed commitment to dedicating 20% of its compute resources to safety research, come amid scrutiny of the company’s handling of AI safety issues. The company had previously disbanded a team working on safety controls, raising concerns about its priorities.
As OpenAI navigates these complex regulatory and ethical landscapes, its actions will likely continue to be closely watched by both policymakers and the public.