The Revolution Will Be Automated: How the EU A.I. Act Promises to Change A.I. Forever
Jared Browne, Fexco Group Head of Data Privacy
What’s in an amendment? Well, quite a lot, actually, when a multi-billion-euro artificial intelligence industry is at stake. The EU’s draft Artificial Intelligence Act, the world’s first regulation governing A.I., is moving steadily through the Byzantine legislative machinery of the EU, with no fewer than 3,303 amendments having been tabled. The nub of the act is a scary prospect for the tech world: A.I. providers must prove that their tools are safe and fair, or they will not be given a certificate of compliance to trade in the EU.
With so much political wrangling to come between the liberal and conservative factions of the parliament, not to mention the opinions of the member states that must also be thrown into the mix, should we despair? Will this act ever become law, I hear you ask?
It’s not time to hit the panic button just yet. The GDPR attracted over 4,000 amendments, and the efficiency of the EU’s political assembly line made relatively short work of them. Rather than an obstacle, the number of proposed amendments should be seen as a positive sign, certainly by those who hope to see the act succeed.
In Brussels, the number of amendments is seen as a gauge of the likely impact of a law once it is in effect. When, as with the A.I. Act, there are thousands of amendments, everyone who should be worried is worried. Make no mistake: this is and will be a big deal. One way or another, A.I. providers will soon be approaching the cliff edge of having to prove that their tools are safe and fair.
What are some of the most significant bones of contention? Unsurprisingly, the very definition of A.I. itself is attracting much of the most intense debate. The original wording in the draft act gives an ambitiously broad definition of the area, covering A.I., machine learning, expert and logic systems, and Bayesian or statistical approaches that ‘influence the environments they interact with’. Very wide indeed! Generally, liberal groups want to hold onto this breadth of scope, while conservative factions wish to circumscribe it, leaving fewer businesses caught in the regulatory dragnet.
Another key point concerns the quality that will be expected of A.I. tools under the act. Again, the first draft set the bar dizzyingly high by suggesting that A.I. tools had to be ‘error free’ before they would pass the test, but this has now been watered down to a more realistic ‘to the best extent possible’. Similarly, the stringent requirement to fully explain the logic of A.I. tools is undergoing a moderating process and is likely to fall short of demanding absolute explainability.
With everything to play for, it is very much a case of watch this space, and if you are a provider of high-risk A.I. software in the EU, watch it with interest and start thinking about how you are going to comply with this game-changing regulation.