
AI systems with “unacceptable risks” are prohibited by the EU

February 2, 2025

As of Sunday in the European Union, the bloc’s regulators can ban the use of AI systems they deem to pose “unacceptable risk” or harm.

February 2 is the first compliance deadline for the EU’s AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The law officially came into force on August 1; what is arriving now is the first of its compliance deadlines.

The specifics are laid out in Article 5, but broadly, the law is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications through to physical environments.

The bloc’s approach defines four broad risk levels: (1) minimal risk (e.g., email spam filters) faces no regulatory oversight; (2) limited risk, which includes customer service chatbots, gets light-touch regulatory oversight; (3) high risk, such as AI for healthcare recommendations, faces heavy regulatory oversight; and (4) unacceptable-risk applications, the focus of this month’s compliance requirements, are prohibited entirely.
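For readers who prefer a concrete sketch, the tiering above can be summarized in a few lines of Python. This is purely illustrative: the enum names and the mapping of example use cases are assumptions drawn from the examples in this article, not terminology from the Act itself.

from enum import Enum

class RiskTier(Enum):
    # Illustrative labels for the four tiers described in the article.
    MINIMAL = "no regulatory oversight"
    LIMITED = "light-touch regulatory oversight"
    HIGH = "heavy regulatory oversight"
    UNACCEPTABLE = "prohibited entirely"

# Example use cases taken from the article; the mapping is illustrative only.
EXAMPLE_USE_CASES = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "healthcare recommendation system": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value}")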

Some of the unacceptable activities include:

• AI used for social scoring (for example, building risk profiles based on a person’s behavior).
• AI that manipulates a person’s decisions subliminally or deceptively.
• AI that exploits vulnerabilities such as age, disability, or socioeconomic status.
• AI that attempts to predict people committing crimes based on their appearance.
• AI that uses biometrics to infer a person’s characteristics, such as their sexual orientation.
• AI that collects “real-time” biometric data in public places for the purposes of law enforcement.
• AI that tries to infer people’s emotions at work or school.
• AI that creates, or expands, facial recognition databases by scraping images online or from security cameras.

Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (about $36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.
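As a rough illustration of how that penalty ceiling works, the sketch below computes the theoretical maximum fine as the greater of the flat €35 million cap and 7% of prior-year revenue. The function name and the example revenue figure are hypothetical; actual fines are set by regulators and may be far lower.

# Illustrative only: theoretical maximum fine for a banned AI use case under
# the penalty structure described above (the greater of EUR 35 million or
# 7% of the prior fiscal year's annual revenue).
MAX_FLAT_FINE_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_penalty_eur(prior_year_revenue_eur: float) -> float:
    """Return the theoretical ceiling on a fine, not an actual assessed amount."""
    return max(MAX_FLAT_FINE_EUR, REVENUE_SHARE * prior_year_revenue_eur)

# A hypothetical company with EUR 2 billion in prior-year revenue:
# 7% of revenue (EUR 140 million) exceeds the EUR 35 million floor.
print(max_penalty_eur(2_000_000_000))  # 140000000.0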

The fines won’t kick in right away, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.

“Organizations are expected to be fully compliant by February 2, but the next big deadline companies need to be aware of is in August,” Sumroy said. “By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect.”

Preliminary pledges

The February 2 deadline is in some ways a formality.

Last September, more than 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories, including Amazon, Google, and OpenAI, committed to identifying AI systems likely to be categorized as high risk under the AI Act.

Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act’s harshest critics, also opted not to sign.

That isn’t to suggest that Apple, Meta, Mistral, or others that didn’t sign the Pact won’t meet their obligations. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won’t be engaging in those practices anyway.

“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time, and, crucially, whether they will provide organizations with clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”

Possible exemptions

There are exceptions to several of the AI Act’s prohibitions.

For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help carry out a “targeted search” for, say, an abduction victim, or help prevent a “specific, substantial, and imminent” threat to a life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can’t make a decision that “produces an adverse legal effect” on a person based solely on these systems’ outputs.

The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there is a “medical or safety” justification, such as systems designed for therapeutic use.

The European Commission, the executive branch of the EU, said it would release additional guidelines in early 2025, following a consultation with stakeholders in November. However, those guidelines have yet to be published.

Sumroy said it is also unclear how other laws already on the books might interact with the AI Act’s prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.

“It’s important for organizations to remember that AI regulation doesn’t exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges, particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”

