Insights From The Blog

The European Union Regulates Artificial Intelligence

Over the last couple of years, almost every major market on Earth has been affected by Artificial Intelligence (AI). AI-driven software has altered our daily lives and the way we do business, and the possibilities it opens up keep challenging the limits of traditional thinking. Nevertheless, as AI grows more capable and more autonomous, worries about how to control it, and about the dangers it might pose to humanity, are on the rise. Are we, as some have said, “summoning the demon”, with all the horror that would bring? Should we keep turning a blind eye and let the complex algorithms work it out for themselves, or is it time to step in?

You know that something is serious when the European Union (EU) develops a legal framework to help govern it, and that is exactly what has happened with Artificial Intelligence. However, this shouldn’t be seen as a bad thing: the legislation seeks to integrate AI and define its acceptable use rather than to stifle it.

Legislators within the EU have decided that AI is becoming such a large part of everyday society that certain definitions, controls and protections need to be put in place to ensure that it is integrated safely. The President of the European Commission, Ursula von der Leyen, said in an official statement:

“Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today’s political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU’s AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today’s agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.”

The new rules and definitions will be consistent across the entirety of the EU and are likely to become a benchmark for AI integration all over the world. The hope is that, by introducing this kind of legislation, a level of control can be applied to a technology that threatens to become unwieldy and unregulated if left unchecked.

The EU recognises that AI poses significant risks in many areas of business and social life and has established a framework based on the level of risk in each setting. This is broken down into four distinct levels (a brief illustrative code sketch of these tiers follows the list):

  • Minimal or no risk. The use of AI is permitted without restriction in most circumstances, given the following:
    • There are no mandatory obligations in place.
    • The EU Commission and Board are to encourage the drawing up of codes of conduct intended to foster the voluntary application of requirements to low-risk AI systems.
  • Medium risk. Specific transparency obligations apply, and AI use is permitted subject to them. Providers must:
    • Notify humans that they are interacting with an AI system, unless this is already obvious from the context.
    • Notify humans when emotion recognition or biometric categorisation systems are being applied to them.
    • Clearly label deep-fake content as such, unless this is necessary for the exercise of a fundamental right or freedom or for reasons of public interest.
  • High risk. Covers AI systems that interact with or affect safety, and which are therefore subject to requirements such as mandatory CE and quality marking, documented quality systems and third-party safety documentation.
  • Unacceptable risk. The EU identifies several uses of AI that are banned outright. These include:
    • Subliminal manipulation resulting in physical or psychological harm.
    • The exploitation of children or mentally disabled persons where this would result in physical or psychological harm to them.
    • General-purpose social scoring. This could include, for example, an AI system that identifies children as at risk and in need of social care based solely on insignificant or irrelevant social ‘misbehaviour’ by their parents that is open to interpretation.
    • Remote biometric identification for law enforcement purposes in publicly accessible spaces. In essence, the use of AI-driven cameras in public areas for law enforcement is banned.
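
For readers who prefer to think in code, here is a minimal, purely illustrative sketch of how these four tiers and their headline obligations might be modelled. The RiskTier and OBLIGATIONS names are hypothetical, and the wording paraphrases the list above rather than the legal text.

```python
from enum import Enum


class RiskTier(Enum):
    """Hypothetical labels for the AI Act's four risk tiers (illustrative only)."""
    MINIMAL = "minimal"
    MEDIUM = "medium"        # the transparency-obligation tier described above
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Illustrative mapping of each tier to the kind of obligation described in the list.
# This paraphrases the article, not the text of the AI Act itself.
OBLIGATIONS = {
    RiskTier.MINIMAL: [
        "No mandatory obligations; voluntary codes of conduct encouraged",
    ],
    RiskTier.MEDIUM: [
        "Tell users they are interacting with an AI system",
        "Tell users when emotion recognition or biometric categorisation is applied",
        "Label deep-fake content as such",
    ],
    RiskTier.HIGH: [
        "CE/quality marking, quality systems and third-party safety documentation",
    ],
    RiskTier.UNACCEPTABLE: [
        "Use is banned outright",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for duty in obligations_for(RiskTier.MEDIUM):
        print("-", duty)
```

Running the sketch simply prints the transparency duties for the medium tier; a real compliance check would, of course, have to work from the legal text itself.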

The EU hopes that, by developing this detailed legislation, people will be reassured that AI can become a friend rather than something to fear. The political agreement will enter into force twenty days after it is published in the Official Journal, once it has been formally approved by the European Parliament and the Council. With the exception of a few clauses, the AI Act will become applicable two years after its entry into force: the rules on General Purpose AI will be implemented after a year of consultations, while the prohibitions will take effect six months after publication.

We at Unity Developers welcome this kind of political action since it helps define what AI users – and even stand-alone systems – can and can’t do, and this adds a level of safety that we believe will help sculpt AI into something that is actually useful to mankind, rather than something to be afraid of.