Council of Europe Adopts Historic International Treaty on Artificial Intelligence

Strasbourg: In a landmark decision, the Council of Europe has adopted the world’s first international treaty aimed at ensuring that the development and use of artificial intelligence (AI) uphold human rights, democracy, and the rule of law. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted in Strasbourg during the annual ministerial meeting of the Council of Europe’s Committee of Ministers.

The groundbreaking treaty establishes a comprehensive legal framework that addresses the potential risks associated with AI systems while promoting responsible innovation. The convention adopts a risk-based approach, akin to the EU AI Act, covering the entire lifecycle of AI systems from design and development to deployment and decommissioning. This approach emphasizes the careful assessment and mitigation of potential negative consequences associated with AI use.

Council of Europe Secretary General Marija Pejčinović Burić stated, “The Framework Convention on Artificial Intelligence is a first-of-its-kind, global treaty that will ensure that Artificial Intelligence upholds people’s rights. It is a response to the need for an international legal standard supported by states on different continents which share the same values to harness the benefits of Artificial Intelligence while mitigating the risks. With this new treaty, we aim to ensure a responsible use of AI that respects human rights, the rule of law, and democracy.”

The treaty applies to AI systems in both public and private sectors, offering parties two compliance options: direct adherence to the convention’s provisions, or alternative measures that meet the treaty’s standards while respecting international obligations regarding human rights, democracy, and the rule of law. This flexibility accommodates diverse legal systems worldwide.

The treaty includes key provisions for transparency and oversight tailored to specific contexts and risks. Parties must implement measures to identify, assess, prevent, and mitigate possible risks, including considering moratoriums, bans, or other actions when AI uses may conflict with human rights standards. The treaty mandates accountability for any adverse impacts of AI systems, ensuring respect for equality, including gender equality, prohibition of discrimination, and privacy rights. It also ensures legal remedies for victims of AI-related human rights violations and procedural safeguards, such as notifying individuals when they are interacting with AI systems.

To protect democratic institutions and processes, the treaty requires measures to ensure AI systems do not undermine democratic principles, such as the separation of powers, judicial independence, and access to justice. While the treaty’s provisions do not extend to national security activities, these must still respect international law and democratic institutions. The convention also does not cover national defense matters or research and development activities, except where the testing of AI systems may interfere with human rights, democracy, or the rule of law.

To ensure effective implementation, the convention establishes a follow-up mechanism via a Conference of the Parties. Each party is required to set up an independent oversight mechanism to oversee compliance, raise awareness, stimulate public debate, and conduct multistakeholder consultations on AI usage.

The convention results from two years of work by the intergovernmental Committee on Artificial Intelligence (CAI), which brought together the 46 Council of Europe member states, the European Union, and 11 non-member states: Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States, and Uruguay. Representatives of the private sector, civil society, and academia participated as observers. The convention is open to non-European countries and will be legally binding for signatory states.

Despite its historic significance, some view the treaty as falling short of its intended impact. Critics argue that it reaffirms existing practices rather than introducing substantial regulatory measures. The EU data watchdog recently expressed concerns about potential compromises in human rights standards due to pressure from foreign business interests, suggesting the treaty does not adequately address the risks posed by AI. The European Data Protection Supervisor (EDPS) described it as a “missed opportunity to lay down a strong and effective legal framework” for protecting human rights in AI development.

The framework convention will be opened for signature in Vilnius, Lithuania, on 5 September during a conference of Ministers of Justice, marking the next significant step in its implementation.
