Mario Hernandez Ramos, Chair of the Council of Europe’s Committee on Artificial Intelligence, identifies the main risks AI poses to human rights and explains how the Council of Europe is addressing them.
Artificial intelligence (AI) is currently a recurring topic across society, from everyday conversation to specialised forums. The technology has demonstrated significant potential to improve people’s quality of life, generating a wide range of opinions and debates in fields as varied as business, health, economics, and law. It is important to recognise, however, that like other technologies, artificial intelligence brings both benefits and potential risks for society and individuals. In mature societies, regulation has emerged as the main tool for ensuring the responsible use of this technology.
The regulation debate surrounding AI
Regulation of artificial intelligence has generated intense debate since the beginning of the 2020s. President Donald Trump’s return to the White House, where he shares positions with prominent business and technology leaders, has revived the view that regulating the technology represents a significant obstacle to its innovation and development. This perspective is not new, but it has not been the dominant view in the specialised forums where artificial intelligence has been discussed in recent years. Indeed, most companies have called for a regulatory framework that establishes a level playing field in which they can invest and innovate with legal and commercial certainty.
The discussion should therefore not focus on whether artificial intelligence should be regulated. In fact, from a legal perspective, when the use of AI systems harms people, the legal system already has tools to provide a response. Judges hearing disputes claiming compensation for harm caused by AI systems cannot leave those disputes unresolved. A different question is whether that response is satisfactory, and how much judicial creativity a judge should be expected to exercise. As public authorities, judges must base their decisions on the law. In this way, citizens can know in advance the arguments the adjudicator of a dispute is likely to use and adapt their actions accordingly, and abusive behaviour is discouraged. This requirement applies not only to judges but to all public institutions in any democratic system, embodied in the legal principles that constitute the rule of law.
Without specific rules, people’s actions and decisions lack a frame of reference, and the response of the authorities becomes unpredictable. Appropriate and specific rules therefore provide certainty to all stakeholders, including corporations, protecting their rights and interests and countering arbitrariness, abuse of power and injustice.
The discussion should instead focus on questions that reflect the rigour and complexity this issue deserves, avoiding Manichean and simplistic assumptions. First, what should be regulated? Second, how should it be regulated? Third, who should regulate it? Finally, from what perspective: should ethical and/or legal principles govern such regulation, recognising that these principles decisively condition its purposes and its addressees?
Depending on the answers to these questions, different approaches to regulating artificial intelligence can be observed around the world.
How can we regulate AI?
Until a few years ago, self-regulation prevailed: codes of conduct or internal rules created by technology companies, whose compliance was entirely voluntary and which set out very general ethical principles without specific content. The flexibility these soft-law instruments provide contrasts with their weaknesses: the lack of defined responses to concrete problems and the voluntary nature of compliance, notwithstanding their undoubted usefulness.
The Internet was a technological revolution that raised major cross-border problems. Domestic regulation alone proved ineffective in providing satisfactory responses, so coordinating approaches and efforts across countries became essential, leading to international cooperation and agreements.
This scenario shaped the response of the Council of Europe, which faces these multiple challenges from the standpoint of its core interests: the protection and promotion of human rights, democratic functioning, and the rule of law. Its various bodies (such as the Parliamentary Assembly and its human rights institutions) and sectoral committees (such as the Steering Committee for Human Rights and, in particular, the European Commission for the Efficiency of Justice, CEPEJ) have developed, and continue to work on, recommendations on the use of AI, producing a range of non-binding instruments.
Above all, however, the constitution of the Committee on Artificial Intelligence (CAI) in December 2019 stands out. It comprised the member states of the Council of Europe, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America and Uruguay), together with 68 representatives of the private sector, civil society and academia who participated as observers.

Over five years of work, discussion and negotiation, it was concluded that the state of the art of artificial intelligence and the risks AI poses to human rights, democracy and the rule of law required a legally binding regulatory response. That response had to be international in perspective, integrating the largest possible number of cultures, viewpoints and legal traditions (one of the main differences with the European Union’s AI Act), and it had to regulate only those uses involving significant risks or impacts. In short, it had to be drafted always from the perspective of encouraging technological development that supports and promotes human-centred artificial intelligence, human dignity and individual autonomy. And while it is addressed to both the public and private sectors, it leaves great flexibility as to regulatory measures, which may take legislative, administrative or other forms.
These five years culminated in the drafting and adoption of the first legally binding international treaty on artificial intelligence and human rights, democracy and the rule of law.
The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted by the Committee of Ministers of the Council of Europe in Strasbourg on May 17, 2024, and opened for signature at the Conference of Ministers of Justice in Vilnius (Lithuania) on September 5, 2024. To date, the Framework Convention has been signed by Andorra, Canada, Georgia, Iceland, Israel, Japan, Montenegro, Norway, the Republic of Moldova, San Marino, the United Kingdom, the United States of America and the European Union (on behalf of its 27 member states).
The aim is to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while promoting technological progress and innovation. It further aims to complement existing international standards on human rights, democracy and the rule of law, and to fill the legal gaps that rapid technological development can create. To withstand the test of time, the Framework Convention does not regulate the technology as such and is technologically neutral.
The Framework Convention sets out general principles and obligations to be implemented by the signatories at the national level.
Activities within the lifecycle of an AI system must adhere to several basic principles: human dignity and individual autonomy; equality and non-discrimination; respect for privacy and personal data protection; transparency and oversight; accountability and responsibility; and reliable and safe innovation. In addition, procedural safeguards and remedies must be guaranteed. These include documenting relevant information about AI systems and their use and making it available to affected persons; ensuring effective possibilities to lodge complaints with the competent authorities; providing effective remedies to those whose human rights and fundamental freedoms are seriously undermined by the application of artificial intelligence systems; and notifying persons that they are interacting with an artificial intelligence system rather than with a human being.
To monitor the implementation of the Framework Convention, a follow-up mechanism is established: the Conference of the Parties, composed of official representatives of the parties to the Convention, which will determine the extent to which its provisions are being implemented. Its conclusions and recommendations will help ensure states’ compliance with the Framework Convention and its long-term effectiveness.
The Conference of the Parties also encourages cooperation with stakeholders, including hearings on aspects related to the implementation of the Framework Convention.
Measuring risk and impact
Given that only those uses posing risks to human rights, democracy and the rule of law are subject to regulation, measuring such risks and their potential impact is essential. To address this need, the Committee on Artificial Intelligence developed a specific methodology: the Risk and Impact Assessment of Artificial Intelligence Systems from the Point of View of Human Rights, Democracy and the Rule of Law (HUDERIA). The methodology was adopted by the Council of Europe’s Committee on Artificial Intelligence (CAI) on November 28, 2024, and is being complemented by a model that develops it, which constitutes the CAI’s work until the end of its mandate in December 2025.
The HUDERIA methodology is a guide that provides a structured approach to the risk and impact assessment of AI systems, specialised in the protection and promotion of human rights, democracy and the rule of law. It can be used by public and private actors, and is intended to play a unique and pivotal role at the intersection of international human rights standards and existing technical frameworks for risk management in the context of AI. HUDERIA is a stand-alone guidance document that is not legally binding. Parties to the Framework Convention therefore have the flexibility to use or adapt it, in whole or in part, to develop new risk assessment approaches or to refine existing ones in accordance with applicable law.
HUDERIA consists of four elements. The first is the Context-Based Risk Analysis (COBRA), which identifies key risk factors that increase the likelihood of adverse effects on human rights, democracy and the rule of law, allowing systems posing significant risk to be identified and prioritised. The second is the Stakeholder Engagement Process (SEP), which enriches the risk and impact assessment by incorporating the perspectives of potentially affected persons identified during the COBRA stage. The third is the Risk and Impact Assessment (RIA), which provides a detailed evaluation of the potential and actual impacts of AI system activities on human rights, democracy and the rule of law. The fourth and final element is the Mitigation Plan, which addresses adverse effects and mitigates identified harms; it includes the development of concrete measures, a comprehensive plan to implement them, and access to remedies.
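To make the flow between the four elements concrete, the following is a minimal, purely illustrative sketch of how an organisation might structure a HUDERIA-style assessment as data. HUDERIA itself is a governance methodology, not software, and prescribes no numeric scoring; the class names, the likelihood/severity scale and the threshold below are all assumptions invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class RiskFactor:
    """One risk factor surfaced during COBRA (element 1). The numeric
    0.0-1.0 likelihood/severity scale is an assumption, not HUDERIA's."""
    description: str
    likelihood: float
    severity: float

    def score(self) -> float:
        return self.likelihood * self.severity

@dataclass
class Assessment:
    system_name: str
    cobra: list[RiskFactor] = field(default_factory=list)       # element 1: COBRA
    stakeholder_input: list[str] = field(default_factory=list)  # element 2: SEP
    mitigations: list[str] = field(default_factory=list)        # element 4: plan

    def significant_risks(self, threshold: float = 0.25) -> list[RiskFactor]:
        # Element 3 (RIA), caricatured: prioritise factors whose score
        # exceeds an assumed significance threshold.
        return sorted((f for f in self.cobra if f.score() >= threshold),
                      key=lambda f: f.score(), reverse=True)

# Hypothetical walk through the four stages for an imagined system.
a = Assessment("resume-screening model")
a.cobra.append(RiskFactor("indirect discrimination in hiring", 0.6, 0.9))
a.cobra.append(RiskFactor("opaque ranking criteria", 0.4, 0.5))
a.stakeholder_input.append("feedback from an applicant focus group")
for risk in a.significant_risks():
    a.mitigations.append(f"mitigate: {risk.description}")
print(a.mitigations)
```

The point of the sketch is only the ordering the methodology imposes: context analysis feeds stakeholder engagement, both feed the detailed assessment, and the mitigation plan is derived from what that assessment prioritises.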
The results of the CAI’s work, in particular the Framework Convention and HUDERIA, identify and build general international standards for addressing a global challenge, providing responses to the risks AI poses to human rights, democracy and the rule of law. This is undoubtedly good news for all of us who consider human values to be paramount.
This article will also be featured in the 21st edition of Quarterly Publication.