OpenAI is addressing governance and bias concerns in AI with a new “Collective Alignment” team dedicated to creating democratic processes for governing its AI models. The team aims to give individuals a direct role in shaping AI governance, bringing inclusive and diverse perspectives as AI technologies become increasingly integrated into society.
17 January 2024 – Artificial intelligence laboratory OpenAI is taking a proactive step to address concerns about bias in, and governance of, its AI software. In a blog post on Tuesday, the Microsoft-backed company announced the formation of the “Collective Alignment” team, dedicated to developing democratic processes for deciding how OpenAI’s AI models should be governed.
The team’s establishment follows the conclusion of a grant program, launched in May 2023, that funded experiments in democratic processes. OpenAI sees the initiative as a crucial element of its mission to build superintelligent models that play integral roles in society. The aim is to give individuals direct opportunities to contribute their insights and perspectives, ensuring inclusive and diverse governance.
Tyna Eloundou, a research engineer and founding member of the new team, emphasized the importance of giving people direct input as AI models become increasingly integrated into society. To address challenges such as verifying that only humans can vote, OpenAI is exploring potential partnerships, including a possible collaboration with Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman that offers a mechanism for distinguishing humans from AI bots.
Teddy Lee, a product manager and the other member of the two-person “Collective Alignment” team, clarified that no concrete plans have been made yet to integrate Worldcoin into OpenAI’s processes.
Since the launch of ChatGPT in late 2022, OpenAI’s generative AI technology has gained widespread attention for its ability to produce authoritative-sounding text from prompts. However, concerns have arisen over the potential for AI, including systems like ChatGPT, to generate “deepfake” images and misinformation. Critics argue that AI models can carry inherent biases stemming from the data used to train them, leading to instances of biased or inappropriate outputs.
The newly formed OpenAI team is actively seeking to expand its expertise by hiring a research engineer and research scientist. The team will collaborate closely with OpenAI’s existing “Human Data” team, which focuses on building infrastructure for collecting human input on the company’s AI models, as well as other research teams. – ref: Reuters