Business leaders need to prioritize the security and resilience of AI systems and implement protection against both traditional cyberattacks and AI-specific threats such as data poisoning.
However, Darren Thomson, field CTO EMEAI at Commvault, said government-led regulations remain essential to establishing a standardized framework for AI safety and security.
The global AI race has reached new heights with the US government’s announcement of a $500 billion AI initiative, including the landmark Project Stargate partnership with OpenAI, Oracle and SoftBank.
This development, coupled with the UK’s recent AI action plan, marks a pivotal moment in the international AI landscape.
While both countries show clear ambitions for AI leadership, there is a concerning gap between a proactive growth agenda and the regulatory framework needed to ensure safe and resilient AI development.
The growing regulatory gap
The contrast between current regulatory approaches is stark. The EU is pressing ahead with its comprehensive AI Act, while the UK maintains a light-touch approach to AI governance. This regulatory divergence, coupled with the US government’s recent withdrawal of major AI safety requirements, creates a complex landscape for organizations implementing AI systems in today’s globalized world.
This situation is particularly challenging given the evolving nature of AI-specific cyber threats, from sophisticated data poisoning attacks to vulnerabilities in AI supply chains that can cascade failures across critical infrastructure.
UK companies currently face the unique challenge of deploying AI solutions globally without a clear domestic governance framework. The government’s AI Action Plan shows admirable ambition for growth, but there is a risk that a lack of regulatory oversight could expose UK organisations to new cyber threats and undermine public confidence in AI systems.
The plan to establish a national data library that supports AI development by unlocking high-impact public data raises its own security concerns. How will these datasets be built? Who will be responsible for defending them? And how can their integrity be guaranteed over the years ahead, if the data becomes part of AI models at the heart of public, business and private life?
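To make the integrity question concrete, one baseline control is to publish a tamper-evident manifest of cryptographic checksums alongside each dataset, so that any later modification is detectable. The following is a minimal sketch in Python, assuming datasets stored as files on disk; it illustrates the principle and is not a description of any actual national data library design.

```python
# Minimal sketch: detect tampering with a dataset via a SHA-256 manifest.
# All paths are hypothetical; a real system would also sign the manifest.
import hashlib
import json
from pathlib import Path

def build_manifest(dataset_dir: str) -> dict[str, str]:
    """Record a SHA-256 digest for every file in the dataset."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(dataset_dir).rglob("*"))
        if p.is_file()
    }

def verify_manifest(dataset_dir: str, manifest_path: str) -> list[str]:
    """Return files altered, added or removed since the manifest was written."""
    recorded = json.loads(Path(manifest_path).read_text())
    current = build_manifest(dataset_dir)
    return sorted(f for f in recorded.keys() | current.keys()
                  if recorded.get(f) != current.get(f))
```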
In contrast, the EU is making real progress with its AI Act: an all-encompassing, enforceable framework that puts AI oversight, transparency and harm prevention first. It sets out a clear commitment to safe AI development and deployment, including mandatory risk assessments and substantial penalties for non-compliance.
Evolving AI Security Protocols
Continued regulatory divergence creates a complex environment for the businesses responsible for building and deploying AI security solutions.
That divergence produces an uneven playing field and a potentially far more dangerous AI-enabled future.
Businesses therefore need to chart a course that balances innovation with risk management, integrating robust cybersecurity protocols that can evolve with the new demands AI creates.
Data poisoning
Data poisoning is the term for malicious actors deliberately manipulating training data to change the outcome of an AI model. This could take the form of subtle changes that are hard to detect, minor alterations that generate errors or incorrect responses, or modified code that “hides” within the model so that attackers can control its behavior.
Such insidious interference can gradually put an organization at risk, steadily degrading and ultimately destroying sound decision-making. In a political context, it can embed bias and encourage harmful behavior.
These attacks are inherently difficult to detect until the damage is done, because compromised data blends seamlessly with legitimate data. Data poisoning is best addressed through robust data validation, anomaly detection, and ongoing monitoring of datasets to find and remove malicious data. Poisoning can happen at any point in the data lifecycle: at initial collection, in the data repository, at deployment, or through infection from other corrupted sources.
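To illustrate the anomaly-detection step, the sketch below screens a training set for statistically unusual rows before they ever reach a model. It uses scikit-learn’s IsolationForest on synthetic data; the feature matrix and contamination rate are assumptions made for illustration, and a real pipeline would pair this screening with provenance checks and human review.

```python
# Minimal sketch: flag suspicious training samples before model training.
# The data below is synthetic; features and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
clean = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))   # legitimate samples
poisoned = rng.normal(loc=6.0, scale=0.5, size=(10, 8))  # injected outliers
training_data = np.vstack([clean, poisoned])

# IsolationForest labels statistically unusual rows as -1; quarantine
# those for manual review rather than silently training on them.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(training_data)

suspect = np.where(labels == -1)[0]
print(f"{len(suspect)} of {len(training_data)} samples flagged for review")
```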
Data supply chain defense
The establishment of the national data library highlights the risk that a supposedly safe model could be compromised, with the breach then spreading rapidly through the supply chain.
Over the next few years, infections could propagate quickly as more and more organizations come to rely on these AI models for their daily operations. Cybercriminals are already using AI to accelerate attacks, and the prospect of corrupted AI entering the supply chain’s bloodstream is a chilling one.
Therefore, corporate leaders need to build robust protective measures to support resilience across the supply chain, including proven disaster recovery plans.
In practice, this means defining what a minimum viable business looks like, establishing an acceptable risk posture, and prioritizing critical applications. With clean, verified backups of those essentials, companies can ensure they can rebuild quickly and completely in the event of an attack.
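As a rough sketch of how such recovery assurance might be automated, the snippet below tiers systems and checks that every tier-1 application has a fresh backup whose contents match a recorded digest. All system names, paths and thresholds are hypothetical, and real assurance would also include regular restore testing.

```python
# Minimal sketch: verify that every tier-1 ("minimum viable business")
# system has a recent backup matching its recorded SHA-256 digest.
# All names, paths and thresholds are hypothetical.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from pathlib import Path

MAX_AGE = timedelta(hours=24)  # acceptable risk posture, set per business

@dataclass
class BackupRecord:
    system: str            # e.g. "payments-db" (hypothetical)
    tier: int              # 1 = required for the minimum viable business
    backup_path: str
    expected_sha256: str
    taken_at: datetime     # timezone-aware UTC timestamp

def backup_is_recoverable(record: BackupRecord) -> bool:
    """The backup exists, is fresh enough, and matches its recorded digest."""
    path = Path(record.backup_path)
    if not path.is_file():
        return False
    if datetime.now(timezone.utc) - record.taken_at > MAX_AGE:
        return False
    return hashlib.sha256(path.read_bytes()).hexdigest() == record.expected_sha256

def audit(records: list[BackupRecord]) -> list[str]:
    """Return tier-1 systems whose recovery position is not assured."""
    return [r.system for r in records
            if r.tier == 1 and not backup_is_recoverable(r)]
```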
Keeping pace with the risk landscape
It is clear that AI has the potential to supercharge innovation, but it also opens the door to new threats, particularly around security, privacy and ethics.
As AI becomes integrated across enterprise infrastructure, the likelihood of malicious breaches rises dramatically.
The best approach from a risk-mitigation perspective is to maintain robust protective measures, ensure transparent development, and uphold ethical values. Balancing innovation with zero tolerance for abuse allows organizations to harness AI while guarding against corruption. Ultimately, however, only government-led regulation will establish a standardized framework for AI safety and security worldwide.