The ACHILLES project confronts two of AI’s most persistent challenges, trust and efficiency, paving the way for ethical and impactful solutions.
Artificial intelligence (AI) is rapidly expanding across healthcare, finance, public services, and everyday life. Yet it faces persistent ‘Achilles’ heels’ in trust and efficiency. As advanced systems assume more critical decision-making functions, society’s calls for fair, privacy-preserving, and environmentally conscious AI are growing stronger.
Europe’s AI landscape is currently shaped by a new wave of regulations, notably the EU AI Act, which implements a risk-based approach to ensure that AI applications meet stringent requirements for safety, fairness, and data governance. Against this backdrop, the ACHILLES project, supported by €8m under Horizon Europe, aims to establish a comprehensive framework allowing the creation of AI-based products that are lighter (environmentally and computationally sustainable), clearer (transparent, interpretable, and compliant), and safer (robust and privacy-preserving).
A multidisciplinary consortium: Expertise in every AI dimension
A core strength of the ACHILLES project is its diverse consortium, composed of 16 leading organisations from ten countries, each bringing specialised knowledge to the project. Leading universities and institutes push the state-of-the-art in fairness, explainable AI, privacy-preserving techniques, and model efficiency. High-tech companies and SMEs drive tool development, data innovation, and validation pilots to ensure ACHILLES solutions meet real-world needs. Healthcare and clinical organisations bring sensitive medical datasets and practical expertise in diagnostics, helping tailor robust health AI solutions.
Renowned centres of legal research and ethics specialists guarantee that ACHILLES aligns with emerging legislation (EU AI Act, Data Governance Act, GDPR). They also anticipate future regulatory shifts to help the project remain at the forefront of policy compliance. Specialists in open science, communication, and exploitation initiatives help coordinate interdisciplinary workshops, engage with standardisation bodies, and ensure that the project’s outputs reach wide audiences.
This rich blend of perspectives ensures that ethical, legal, and societal considerations are co-developed alongside technical modules, resulting in a holistic approach to the complex challenges of AI development.
Connecting to the EU AI Act and broader regulations
One of ACHILLES’s main objectives is to streamline compliance with evolving regulations, especially the EU AI Act, which entails:
Risk-based alignment: Matching each AI component’s risk level with appropriate checks, from data audits to bias mitigation.
Privacy and data governance: Ensuring that solutions meet or exceed the requirements of GDPR, Data Governance Act, and related frameworks.
Green AI: Integrating model efficiency and deployment optimisations to help organisations meet the sustainability goals outlined in the European Green Deal.
While compliance can seem intimidating, ACHILLES relies on a three-pillared framework, echoing the AI Act’s insistence on robust accountability:
Targets: Clearly specified goals aligned with regulations, standards (e.g., ISO/IEC 42001), and best practices.
Adherence Support: Practical tools and processes embedded throughout the AI lifecycle, ensuring compliance is built-in, not bolted on.
Verification: A robust auditing process combining data and model cards and continuous monitoring to validate that each step meets or exceeds compliance targets.
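To make the three pillars concrete, the sketch below shows how a verification step might compare a model ‘card’ against declared compliance targets. This is a minimal illustration only; the field names, thresholds, and the `verify_card` helper are hypothetical and are not part of the ACHILLES tooling.

```python
# Illustrative sketch (hypothetical, not the ACHILLES implementation):
# checking a model card's reported metrics against declared targets,
# mirroring the Targets / Adherence Support / Verification pillars.

def verify_card(card: dict, targets: dict) -> dict:
    """Compare each target against the card's reported value.

    Targets map a metric name to (threshold, direction): 'max' means the
    reported value must not exceed the threshold, 'min' that it must not
    fall below it. A metric absent from the card fails verification.
    """
    report = {}
    for metric, (threshold, direction) in targets.items():
        value = card.get(metric)
        if value is None:
            report[metric] = "missing"
        elif direction == "max":
            report[metric] = "pass" if value <= threshold else "fail"
        else:
            report[metric] = "pass" if value >= threshold else "fail"
    return report

# Toy model card and targets (all numbers invented for this example).
model_card = {"bias_disparity": 0.08, "energy_j_per_pred": 0.4}
targets = {
    "bias_disparity": (0.10, "max"),      # disparity must stay below 10%
    "energy_j_per_pred": (0.50, "max"),   # energy budget per prediction
    "privacy_check_rate": (0.90, "min"),  # share of data passing checks
}
print(verify_card(model_card, targets))
```

Any `missing` or `fail` entry in the report would block the release until the card is completed or the metric improved, which is what "built-in, not bolted on" compliance amounts to in practice.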
The iterative cycle: From idea to deployment and back
Inspired by clinical trials, which keep development and testing phases separate and evaluate outcomes under uncertainty, ACHILLES has devised an iterative development cycle that moves through four perspectives across five stages. Each phase ensures that human values, data privacy, model efficiency, and deployment sustainability remain front and centre.
Human-Centric (Start): Value-sensitive design (VSD) and co-design workshops capture end-user needs, societal values, and initial legal constraints to map them into technical specifications. Ethical Impact Assessments highlight potential risks and shape the AI solution’s direction from day one.
Data-Centric Operations: Data auditing and validation by detecting outliers, ensuring data diversity and quality; bias detection and mitigation leveraging advanced techniques to produce representative and fair training datasets (e.g., using synthetic data); privacy checks with automated tools to detect and anonymise personal data in line with GDPR guidelines.
Model-Centric Strategies: Training on distributed data sources without centralising sensitive information (e.g., federated learning), drastically reducing privacy risk; synthetic data generation to make models more robust or replace real data while preserving crucial statistical properties; efficiency tools like pruning, quantisation, and efficient hyperparameter tuning to reduce energy usage and training time.
Deployment-Centric Optimisations: Model compression to minimise a model’s memory footprint and inference time, saving energy and costs; infrastructure recommendations on running models on cloud GPUs, FPGAs, or edge devices based on performance, cost, and sustainability targets.
Human-Centric (End): Explainable AI (XAI) and Uncertainty Quantification, providing interpretable results, highlighting potential edge cases, and measuring how confident the model is; continuous monitoring to track performance drift, audit fairness, and automatically trigger re-training if biases or errors accumulate; semi-automated reporting by generating dynamic data/model ‘cards’ that mirror pharmaceutical-style leaflets, summarising usage guidelines, known constraints, and risk levels.
This iterative cycle ensures that AI solutions stay accountable to real-world needs and remain adaptive as regulations and societal expectations evolve.
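As one concrete illustration of the bias checks the data-centric stage describes, the sketch below computes a demographic-parity gap on toy data. The function and the numbers are invented for this example; real audits in the project would use richer fairness metrics and actual cohorts.

```python
# Illustrative sketch (hypothetical, not the project's implementation):
# a demographic-parity check of the kind a data-centric bias audit runs.
# The gap is the difference in positive-outcome rates between two
# groups; a gap near 0 indicates parity on this particular metric.

def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in positive-decision rates between groups."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# Toy predictions (1 = positive decision) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 0.375
gap = demographic_parity_gap(group_a, group_b)
print(gap)   # 0.25 here, which would flag the model for mitigation
```

A monitoring loop of the kind described in the Human-Centric (End) stage could recompute such a gap periodically and trigger re-training when it drifts past an agreed threshold.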
The ACHILLES IDE: Bridging the gap
A standout innovation within ACHILLES is the Integrated Development Environment (IDE), designed to bridge the gap between decision-makers, developers, and end-users throughout the entire AI lifecycle by enabling:
Specification-Driven Design: Ensures that each AI solution adheres to co-created compliance requirements and user needs from the outset. Aligns every iteration of data and model handling with established norms (GDPR, EU AI Act, etc.).
Comprehensive Toolkit: Offers advanced functionalities (through APIs) for bias detection, data auditing, model monitoring, and privacy preservation. Facilitates energy-efficient model training and inference through pruning, quantisation, and other green AI practices.
Smart Copilot: Acts as an AI-driven assistant to guide developers in real-time, suggesting best practices, surfacing relevant regulatory guidelines, and recommending next steps for efficient or privacy-preserving deployment.
From inception to deployment and beyond, the IDE’s integrated approach aims to eliminate guesswork around compliance and sustainability, making it simpler and more intuitive for organisations to adopt responsible AI strategies.
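The pruning and quantisation practices the toolkit lists can be illustrated in a few lines. The sketch below is a minimal pure-Python approximation that assumes nothing about the IDE’s actual APIs: magnitude pruning zeroes the smallest-magnitude weights, and a single linear scale maps the survivors to 8-bit integer levels.

```python
# Illustrative sketch of two "green AI" operations (not an ACHILLES IDE
# API): magnitude pruning and simple 8-bit linear quantisation.

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of the weights."""
    k = int(len(weights) * sparsity)
    drop = set(sorted(range(len(weights)),
                      key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

def quantize_8bit(weights):
    """Map floats to int8-style levels with one linear scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

# Toy weight vector (invented numbers).
w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.12]
pruned = magnitude_prune(w, 0.5)   # half the weights set to zero
q, scale = quantize_8bit(pruned)   # remaining weights stored as int8
print(pruned)
print(q)
```

In a real pipeline both steps would be followed by fine-tuning and an accuracy check, since the whole point is to cut memory and energy without measurably degrading the model.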
Four real-world use cases: Proving adaptability and impact
ACHILLES validates its framework in diverse sectors, reflecting different levels of risk, regulatory intensity, and data sensitivity:
Healthcare: Ophthalmological diagnostics (e.g., glaucoma screening) combine clinical images with patient data, with strong requirements on privacy preservation, interpretability, and transparent reporting.
Identity Verification: Automates document checks and facial matching while minimising bias and handling strict privacy constraints. Further demonstrates how continuous model monitoring addresses data drifts (e.g., newly issued ID formats).
Content Creation (SCRIPTA): AI-generated scripts for films or literary work, with ethical oversight to filter harmful or copyrighted content, balancing creativity with accountability.
Pharmaceutical (HERA): AI-assisted compliance monitoring and knowledge management to streamline clinical trials and quality assurance. Illustrates the importance of data reliability within complex regulatory requirements.
Each scenario runs through ACHILLES’s iterative cycle, from value-sensitive design to continuous post-deployment auditing. During these use cases, ACHILLES will leverage the Z-Inspection® process for Trustworthy AI Assessment, providing a structured framework to evaluate how well the project’s solutions align with ethical principles, societal needs, and regulatory requirements.
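The continuous monitoring mentioned for the identity-verification pilot can be sketched with a classic drift statistic. The example below applies the Population Stability Index (PSI) to hypothetical document-format frequencies; the 0.2 alert threshold is a common rule of thumb, not a project-defined value.

```python
import math

# Illustrative sketch (hypothetical data): detecting data drift, such as
# newly issued ID formats, by comparing the category mix seen at
# training time with the mix seen in production.

def psi(expected, observed, eps=1e-6):
    """Population Stability Index over matching category frequencies."""
    total = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)   # avoid log(0) on empty bins
        total += (o - e) * math.log(o / e)
    return total

# Share of documents per ID format: training mix vs recent live traffic,
# where a newly issued format has started to appear (invented numbers).
train_mix = [0.70, 0.25, 0.05, 0.00]
live_mix  = [0.45, 0.25, 0.10, 0.20]
score = psi(train_mix, live_mix)
print(score > 0.2)   # True: trigger a data audit / re-training review
```

Tied into the semi-automated reporting described earlier, a PSI alert could both flag the drift in the model card and queue the re-training step automatically.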
Measuring success
ACHILLES tracks success across multiple Key Performance Indicators (KPIs), including but not limited to:
Bias reduction: Mitigating up to 40% of detected bias in defined benchmarks and real-world datasets.
Privacy metrics: Synthetic data incurring under 5% performance loss relative to real data, and 90%+ pass rates on automated compliance checks for users’ personal data.
User trust and satisfaction: Pre/post surveys for end users and developers, with a target of 30–40% improvements in AI fairness and transparency perception, including at least five user studies in human-AI interaction.
Energy reduction: At least 35% fewer joules per prediction than established baselines and 50%+ pruned neural network parameters with under 5% performance loss.
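As an illustration of how KPI targets like these could be checked automatically, the sketch below encodes the energy-related criteria against hypothetical measurements; the function name and all figures are invented for this example.

```python
# Illustrative sketch (hypothetical numbers and names): checking
# measured results against the energy-reduction KPIs quoted above.

def meets_energy_kpis(baseline_j, new_j, params_before, params_after,
                      acc_before, acc_after):
    """True if energy per prediction drops by at least 35%, at least
    50% of parameters are pruned, and the relative performance loss
    stays under 5%."""
    energy_cut = 1 - new_j / baseline_j
    pruned = 1 - params_after / params_before
    perf_loss = (acc_before - acc_after) / acc_before
    return energy_cut >= 0.35 and pruned >= 0.50 and perf_loss < 0.05

# Example measurements (invented): 40% less energy, 55% of parameters
# pruned, accuracy down from 0.91 to 0.88 (about 3.3% relative loss).
print(meets_energy_kpis(baseline_j=1.0, new_j=0.6,
                        params_before=10_000_000, params_after=4_500_000,
                        acc_before=0.91, acc_after=0.88))
```

Encoding the thresholds this way is one route to the semi-automated reporting described earlier, since every evaluation run can emit a pass/fail verdict alongside the raw numbers.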
Timeline
ACHILLES kicked off in November 2024 and spans four years. Key phases include:
Year 1: Core architecture design, ethical/legal framework mapping, and initial work on technical toolkits inspired by real-world use cases.
Year 2: Early prototype releases (including compliance toolkits and advanced data operations) and iterative improvements tested through real-world validation pilots.
Year 3: Scaling up demonstration scenarios, refining robust privacy-preserving modules, and integrating results into sector-specific deployments.
Year 4: Beta release of the ACHILLES IDE, final validation in real-world use cases (including comprehensive user studies), and a consolidated exploitation strategy to extend the framework beyond the project’s lifespan.
At each stage, partners meet in interdisciplinary workshops to cross-check progress, share findings in an open-science manner, and communicate insights to standardisation bodies. By the project’s close, ACHILLES aims to offer a fully-fledged ecosystem for responsible, green, and lawful AI.
Open science, standards, and collaborative outreach
Driven by Horizon Europe principles, the ACHILLES project promotes open science and collaboration:
Open-Source Toolkits and Scientific Dissemination: Many modules and libraries will be released on open platforms (e.g., GitHub) under permissive licences to maximise community input. These and other scientific outcomes will be shared in key conferences and open-access journals.
Public Workshops: Regular interdisciplinary events will unite developers, policymakers, ethicists, and civil society to refine the system’s modules.
Engagement with Standardisation Bodies: Consortium members will actively contribute to AI-related ISO discussions, CEN-CENELEC committees, and other working groups to help shape future technical standards on data sharing, XAI, and privacy.
This culture of openness fosters a broader ecosystem of responsible AI development where best practices are shared, improved, and continuously validated in real-world contexts.
Towards a trustworthy AI future
ACHILLES provides a blueprint for modern AI that respects human values, meets stringent regulations, and operates efficiently. By blending technical breakthroughs with ethical-legal rigour, the project exemplifies how AI can be a force for good: transparent, inclusive, and sustainable. The project’s open and modular architecture, embodied in the user-friendly ACHILLES IDE, demonstrates Europe’s commitment to leading in data governance and digital sovereignty, minimising environmental impacts, and maximising transparency, fairness, and trust.
As the EU AI Act’s full implementation draws nearer, projects like ACHILLES are vital in bridging policy and practice. The goal is to ensure that AI fulfils its potential to improve lives and business outcomes without compromising ethics, privacy, or sustainability. Compliance need not block innovation: through a rigorous, continuous feedback loop, ACHILLES is setting a benchmark for Trustworthy AI, not just in Europe but globally.
For more details, upcoming workshops, or to access early open-source releases of the ACHILLES IDE, please visit www.achilles-project.eu.
Disclaimer:
This project has received funding from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101189689.
Please note, this article will also appear in the 21st edition of our quarterly publication.