As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose an “unacceptable risk” or harm.
February 2 is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The Act officially came into force on August 1; what's now arriving is the first of its compliance deadlines.
The details are set out in Article 5, but broadly, the Act is designed to cover a myriad of use cases where AI might surface and interact with individuals, from consumer applications to physical environments.
The bloc's approach defines four broad risk levels: (1) minimal risk (e.g., email spam filters) faces no regulatory oversight; (2) limited risk, which includes customer service chatbots, gets light-touch regulatory oversight; (3) high risk (AI for healthcare recommendations is one example) faces heavy regulatory oversight; and (4) unacceptable-risk applications, the focus of this month's compliance requirements, are prohibited entirely.
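To make the tiering concrete, here is a minimal sketch in Python mapping example use cases to the four risk levels described above. The enum name, labels, and example mappings are illustrative assumptions for this article; the Act itself defines no code, schema, or API.

```python
from enum import Enum

# Hypothetical illustration of the AI Act's four risk tiers.
# Tier names and the example mappings below are our own, not an
# official classification scheme.
class RiskTier(Enum):
    MINIMAL = "no regulatory oversight"
    LIMITED = "light-touch regulatory oversight"
    HIGH = "heavy regulatory oversight"
    UNACCEPTABLE = "prohibited entirely"

EXAMPLE_CLASSIFICATION = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "healthcare recommendation system": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```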
Some of the unacceptable activities include:
- AI used for social scoring (e.g., building risk profiles based on a person's behavior).
- AI that manipulates a person's decisions subliminally or deceptively.
- AI that exploits vulnerabilities like age, disability, or socioeconomic status.
- AI that attempts to predict people committing crimes based on their appearance.
- AI that uses biometrics to infer a person's characteristics, such as their sexual orientation.
- AI that collects “real-time” biometric data in public places for the purposes of law enforcement.
- AI that tries to infer people's emotions at work or school.
- AI that creates, or expands, facial recognition databases by scraping images online or from security cameras.
Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.
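For a rough sense of how that penalty ceiling works, here is a minimal sketch, assuming the higher-of-the-two reading described above; the function name and the turnover figure in the example are hypothetical, and none of this is legal advice.

```python
# Illustrative sketch of the AI Act's maximum fine for prohibited
# practices: up to EUR 35 million or 7% of prior-year annual
# turnover, whichever is greater. Function name is hypothetical.
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a prohibited-practice breach."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Example: a company with EUR 2 billion in prior-year turnover
# would face a ceiling of 7% of turnover, not the flat EUR 35M.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```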
The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.
“Organizations are expected to be fully compliant by February 2, but the next big deadline companies need to be aware of is in August,” Sumroy said. “By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect.”
Preliminary pledges
The February 2 deadline is in some ways a formality.
Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories, including Amazon, Google, and OpenAI, committed to identifying AI systems likely to be categorized as high risk under the AI Act.
Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act's harshest critics, also opted not to sign.
That's not to suggest that Apple, Meta, Mistral, or any other company that didn't agree to the Pact won't meet its obligations. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won't be engaging in those practices anyway.
“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time, and, crucially, whether they will provide organizations with clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”
Possible exemptions
There are exceptions to several of the AI Act's prohibitions.
For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help perform a “targeted search” for, say, an abduction victim, or help prevent a “specific, substantial, and imminent” threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can't make a decision that “produces an adverse legal effect” on a person based solely on these systems' outputs.
The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there is a “medical or safety” justification, such as systems designed for therapeutic use.
The European Commission, the executive arm of the EU, said it would release additional guidelines in early 2025, following a consultation with stakeholders in November. However, those guidelines have yet to be published.
Sumroy said it's also unclear how other laws on the books might interact with the AI Act's prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.
“It's important for organizations to remember that AI regulation doesn't exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges, particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”