OpenAI, one of the leading artificial intelligence research organizations, is venturing into uncharted ethical waters by funding academic research aimed at teaching AI to predict human moral judgments.
In a bold move, OpenAI’s nonprofit arm has awarded a grant to Duke University researchers for a project intriguingly titled “Research AI Morality.”
The initiative forms part of a three-year, $1 million funding effort supporting Duke professors exploring ways to create “moral AI.” Details remain sparse: the project’s principal investigator, Walter Sinnott-Armstrong, a professor of practical ethics at Duke, declined to discuss the research, citing ongoing work.
A “Moral GPS”
Sinnott-Armstrong and co-investigator Jana Borg have previously delved into AI’s potential to serve as a “moral GPS,” guiding humans toward better ethical decisions. Their earlier work includes designing algorithms to determine the allocation of scarce resources like kidney donations and examining public attitudes toward AI making moral choices.
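To make the idea concrete, here is a minimal, purely hypothetical sketch of survey-weighted allocation scoring in the spirit of that earlier kidney work. The candidate features, weights, and data below are invented for illustration and do not come from the published model.

```python
# Hypothetical sketch of a survey-weighted allocation score, in the spirit of
# Sinnott-Armstrong and Borg's earlier work. Feature names and weights are
# illustrative assumptions, not the published model.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_waiting: float
    expected_life_years_gained: float
    dependents: int

# Weights that, in a real system, would be elicited from public surveys
# about which factors people consider morally relevant.
WEIGHTS = {
    "years_waiting": 0.4,
    "expected_life_years_gained": 0.4,
    "dependents": 0.2,
}

def priority_score(c: Candidate) -> float:
    """Combine morally relevant features into a single priority score."""
    return (
        WEIGHTS["years_waiting"] * c.years_waiting
        + WEIGHTS["expected_life_years_gained"] * c.expected_life_years_gained
        + WEIGHTS["dependents"] * c.dependents
    )

candidates = [
    Candidate("A", years_waiting=3.0, expected_life_years_gained=12.0, dependents=2),
    Candidate("B", years_waiting=5.0, expected_life_years_gained=8.0, dependents=0),
]
# Allocate the scarce organ to the highest-scoring candidate.
print(max(candidates, key=priority_score).name)
```

In this framing, the moral content lives entirely in the feature weights, which is why gauging public attitudes, rather than leaving the numbers to engineers, was central to the approach.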
This latest OpenAI-funded effort aims to train algorithms to predict human moral judgments in complex scenarios, such as resolving ethical dilemmas in medicine, law, and business. But can AI truly grasp something as intricate as morality?
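Stated plainly, the task is a supervised learning problem: given a description of a scenario, predict the judgment a human (or an aggregate of humans) would render. The Duke team’s actual methodology has not been disclosed; the sketch below only illustrates the general framing, with a tiny invented dataset.

```python
# Minimal sketch of "predicting human moral judgments" framed as supervised
# text classification. The actual Duke/OpenAI methodology is not public;
# the tiny dataset here is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "A doctor lies to a patient about a terminal diagnosis.",
    "A lawyer reports a client who plans to harm someone.",
    "A company sells user data without consent.",
    "A nurse breaks protocol to save a patient's life.",
]
# Labels stand in for aggregated human judgments (1 = judged acceptable).
judgments = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgments)

# Predict the judgment for an unseen scenario.
print(model.predict(["A firm hides a product defect from regulators."]))
```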
The Complexity of Morality in AI
The challenges are immense. Moral judgments are deeply subjective, shaped by cultural, emotional, and contextual factors. AI, on the other hand, operates as a statistical machine, learning patterns from vast datasets scraped from the web. This reliance on data brings inherent limitations. AI often mirrors the biases of its training material, which predominantly reflects the values of Western, educated, industrialized societies.
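A toy example shows how directly that mirroring works. In the sketch below, the “prediction” is nothing deeper than the majority label in an invented, deliberately skewed training sample.

```python
# Toy illustration of how a statistical learner mirrors its training data.
# If the labeled examples come mostly from one cultural context, the model's
# "moral judgment" is just that context's majority view. All data invented.
from collections import Counter

training_labels = {
    # Labels imagined as collected from a Western-skewed survey sample.
    "eating beef": ["acceptable"] * 9 + ["unacceptable"] * 1,
}

def predicted_judgment(scenario: str) -> str:
    # The "prediction" is simply the most common label in the training data.
    return Counter(training_labels[scenario]).most_common(1)[0][0]

print(predicted_judgment("eating beef"))  # -> "acceptable"
# A sample drawn from a different population could flip this output entirely.
```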
This was evident in a 2021 experiment by the Allen Institute for AI, which created Ask Delphi, an AI tool designed to offer ethical guidance. While Delphi performed well on simple moral questions, subtle rephrasing could lead it to endorse clearly unethical actions, such as harming infants.
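That failure mode is easy to probe for. The sketch below simulates it with a toy stand-in for Delphi (the real system is not being called here): the same dilemma, reworded, flips the verdict.

```python
# Sketch of a paraphrase-sensitivity probe in the spirit of the Delphi
# findings. `toy_model` crudely simulates the reported failure mode and is
# not the real Delphi system.
def toy_model(prompt: str) -> str:
    # Simulated flaw: a utilitarian-sounding clause flips the verdict.
    return "it's okay" if "makes everyone happy" in prompt else "it's wrong"

def probe(paraphrases: list[str]) -> None:
    """Flag cases where rewording the same dilemma changes the verdict."""
    verdicts = {p: toy_model(p) for p in paraphrases}
    if len(set(verdicts.values())) > 1:
        print("Inconsistent verdicts across paraphrases:")
        for prompt, verdict in verdicts.items():
            print(f"  {verdict!r} <- {prompt!r}")

probe([
    "Harming an infant.",
    "Harming an infant if it makes everyone happy.",
])
```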
AI’s limitations are rooted in its lack of reasoning and emotional depth — qualities essential to making nuanced moral decisions. Philosophical debates about morality have persisted for centuries, from Kantian absolute moral rules to utilitarian principles prioritizing the greatest good for the greatest number. Encoding such diversity into AI algorithms is no small feat.
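The difficulty is easy to state in code. The sketch below scores one invented action under a deontological rule check and a utilitarian welfare sum; the two frameworks return opposite verdicts, and nothing in the code can say which is right.

```python
# Sketch of why encoding rival moral theories is hard: the same action can
# pass a utilitarian test and fail a Kantian-style rule check. All rules
# and utility numbers here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Action:
    description: str
    violates_rules: set[str]  # rules the action would break
    utilities: dict[str, float] = field(default_factory=dict)  # welfare changes

FORBIDDEN = {"lying", "killing"}  # stand-ins for absolute moral rules

def kantian_permissible(a: Action) -> bool:
    # Deontological check: any violated absolute rule makes the act wrong.
    return not (a.violates_rules & FORBIDDEN)

def utilitarian_permissible(a: Action) -> bool:
    # Utilitarian check: permissible iff net welfare is non-negative.
    return sum(a.utilities.values()) >= 0

white_lie = Action(
    "Lie to spare a friend's feelings",
    violates_rules={"lying"},
    utilities={"friend": +2.0, "liar": -0.5},
)
print(kantian_permissible(white_lie))      # False: lying is ruled out
print(utilitarian_permissible(white_lie))  # True: net welfare is positive
```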
A High-Stakes Experiment
OpenAI’s initiative highlights the growing intersection of AI and ethics, a field fraught with questions about the universality of moral values and the role of technology in decision-making. While the potential applications — from fairer legal systems to equitable medical resource allocation — are vast, the hurdles are equally significant.
Ultimately, the success of this endeavor depends on whether researchers can overcome the inherent biases and subjectivity that come with morality. If they succeed, the project could mark a significant leap forward for AI in understanding and reflecting human values. But for now, the ethical debate continues — both in academic circles and within the algorithms themselves.