Sam Altman: AI regulation should evolve as technology and society co-evolve | AI for Good Global Summit 2024

New York: In an interview during the AI for Good Global Summit, Altman addressed AI’s transformative impact, challenges in data quality and governance, and the need for regulatory frameworks that balance short-term benefits of AI with long-term societal implications.

In a riveting conversation at the AI for Good Global Summit 2024, Nicholas Thompson, CEO of The Atlantic, and Sam Altman, CEO of OpenAI, delved into the current state and future trajectory of AI.

The current impact of AI and future prospects

Altman highlighted the immediate positive impacts of AI, emphasising its role in enhancing productivity across industries. From software development to healthcare, AI tools are already demonstrating transformative effects, increasing efficiency and streamlining processes. However, he also underscored the importance of remaining vigilant against potential negative consequences, particularly in cybersecurity.

Advancements in AI models

The conversation shifted to the development of new AI models, such as GPT-4, and the challenges of language equity. Altman expressed pride in the progress made with GPT-4’s ability to understand a broader array of languages, covering 97% of the world’s primary languages. He highlighted OpenAI’s commitment to further improving language coverage in future iterations, acknowledging the importance of accessibility and inclusivity in AI development.

The importance of high-quality data in AI training

Altman addressed the complexities of training AI on synthetic data generated by other large language models (LLMs). He stressed that high-quality training data is essential to avoid corrupting the system and to maintain its integrity and effectiveness.

Embracing alien intelligence

Altman emphasised the importance of designing human-compatible AI systems but warned against assuming they possess human-like thinking or capabilities. While aiming for maximal human compatibility, he advocated treating AI as an alien intelligence and avoiding the projection of excessive human likeness onto AI systems.

The Scarlett Johansson voice controversy

Altman also commented on the controversy over the resemblance between one of ChatGPT’s voices and Scarlett Johansson’s. He clarified that while the voice in question may sound similar to Johansson’s, it was not intended to replicate her voice.

Globalization of AI: Multiple models expected

Altman predicts that there will be hundreds or even thousands of LLMs in the future, with a few becoming dominant. He also expects that China will develop its own distinct model, separate from those used in other parts of the world.

Internet and AI: content overload concerns

Thompson expressed concerns about AI making the internet incomprehensible due to the ease of generating content. Altman countered that AI could actually help navigate the internet more efficiently by personalising information retrieval, thus preventing information overload.

Impact on income inequality

Altman discussed the potential for AI tools to benefit lower-paid workers more significantly than higher-paid ones. He cited initiatives like OpenAI for Nonprofits, which aims to make AI tools more accessible to those in need.

The new social contract

Altman supports the idea of universal basic income (UBI) and expects that societal structures will need to adapt as AI continues to reshape the economy. He emphasised the importance of ongoing debate and adjustments to the social contract.

Regulatory challenges

Current regulatory discussions are centred on immediate issues, such as election security, rather than long-term societal changes. Altman believes that regulations should evolve as technology and society co-evolve.

Altman advocates for an iterative deployment of AI, allowing for real-world learning and adaptation of regulations. He stresses the importance of balancing immediate benefits with long-term safety and societal impacts.

OpenAI governance

Altman still supports the idea of global representation in AI governance but did not provide specific updates on this front. He respectfully disagreed with former board members who criticised OpenAI’s governance as dysfunctional.

Human-AI relationship

Altman believes that AI can increase human humility and awe by showing our place in the universe. He compared AI’s impact to scientific discoveries that have historically decentered humanity’s perceived importance.

AI-powered governance for the future

Altman discussed the potential for a future governance system in which every individual has a say, facilitated by AI that understands their preferences. He sees this as a real possibility and suggested it would be a worthwhile project for organisations like the UN to explore. He pointed to a recent initiative called the Spec as a step towards this goal, emphasising the importance of setting clear rules and debating principles.

AI and the human skill to learn

Despite rapid technological change, Altman is optimistic that humans will maintain and enhance their ability to learn. He argued that AI can serve as a useful educational tool without undermining essential learning skills.

Regulatory takeaways

Regulators should balance immediate benefits with long-term safety and societal impacts. Altman emphasised the need to avoid focusing solely on short-term or long-term implications, advocating for a comprehensive approach to ensure AI’s benefits while mitigating risks.
