The EU unveils a plan Wednesday to regulate the sprawling field of artificial intelligence (AI), aiming to make Europe a leader in the new tech revolution while reassuring the public against “Big Brother”-style abuses.
Whether it is precision farming in agriculture, more accurate medical diagnosis, or safe autonomous driving, artificial intelligence will open up new worlds for us. But this world also needs rules.
The Commission, the EU’s executive arm, has been preparing the proposal for over a year, and debate involving the European Parliament and the 27 member states is expected to continue for months before a definitive text comes into force.
The rules are part of the EU’s effort to set the terms on AI and catch up with the US and China in a sector that spans from voice recognition to insurance and law enforcement.
The bloc is trying to learn the lessons of largely missing out on the internet revolution, having failed to produce any major competitor to match the giants of Silicon Valley or their Chinese counterparts.
But the plans have drawn competing concerns from both big tech and civil liberties groups, which argue that the EU is either overreaching or not going far enough.
The draft regulation would ban a very limited number of uses that threaten the EU’s fundamental rights.
This would make “generalised surveillance” of the population off-limits as well as any tech “used to manipulate the behaviour, opinions or decisions” of citizens.
Anything resembling a social rating of individuals based on their behaviour or personality would also be prohibited, the draft said.
Military applications of artificial intelligence will not be covered by the rules, and special authorisations are envisioned to cover anti-terrorism activities and public security.
Infringements, depending on their seriousness, may bring companies fines of up to four per cent of global turnover.
To promote innovation, Brussels wants to provide a clear legal framework for companies across the bloc’s 27 member states.
To this end, the draft regulation says companies will require a special authorisation for applications deemed “high-risk” before they reach the market.
High-risk systems would include “remote biometric identification of persons in public places” as well as “security elements in critical public infrastructure”.
Other uses, not classified as “high risk”, will have no additional regulatory constraints beyond existing ones.
Google and other tech giants are taking the EU’s AI strategy very seriously as Europe often sets a standard on how tech is regulated around the world.
Last year, Google warned that the EU’s definition of artificial intelligence was too broad and that Brussels must refrain from over-regulating a crucial technology, arguing that a difficult balance must be struck between protection and innovation.
The text “sets a relatively open framework and everything will depend on how it is interpreted”.
But for some civil liberties activists, the new rules do not go far enough in curbing potential abuses in the cutting-edge technologies that look set to have a far-reaching impact on everyday life.
It still allows some problematic uses, they say, such as mass biometric surveillance.
The EU, they argue, must take a stronger position… and ban indiscriminate surveillance of the population without allowing exceptions.