Given that we are interacting more and more with AI, the EU AI Act is also relevant to us. Last week, Roos Dijkxhoorn, Floor Terra and Geoffrey Ceunen gave a session on the EU AI Act.

Here are some of the findings:

The AI Act distinguishes several actors. Geoffrey Ceunen explained these; here is a brief summary:

  • Providers (build AI models)
  • Users (use AI for themselves or for others)
  • Authorised representatives (act on behalf of providers established outside the EU)
  • Importers (bring AI systems into the EU)
  • Distributors (make AI available on the EU market)


He also talked about the risk levels defined in the Act (see image). These are:

  • Unacceptable risk (e.g. manipulative systems, social scoring)
  • High risk (e.g. biometric categorization, AI in critical infrastructure)
  • Limited risk (e.g. chatbots)
  • Minimal risk (e.g. spam filters, AI in games)

As you move from lower to higher risk, the obligations pile up.
The most important obligations are a conformity assessment and a FRIA (Fundamental Rights Impact Assessment). What purpose do these serve? To ensure that the AI meets legal standards for security, transparency, and the protection of fundamental rights.

  • Unacceptable risk: not allowed; an assessment is not applicable.
  • High risk: conformity assessment and FRIA required.
  • Limited risk: transparency obligations (e.g. disclosing that users are interacting with AI).
  • Minimal risk: usually no conformity assessment required.
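The tier-to-obligations mapping above can be sketched as a small lookup table. This is purely illustrative: the tier names and obligation flags are a simplified assumption for this sketch, not an authoritative reading of the Act.

```python
# Illustrative sketch only: a hypothetical, simplified mapping of EU AI Act
# risk tiers to the obligations summarized above. Consult the Act itself
# (and legal counsel) for the authoritative rules.

OBLIGATIONS = {
    "unacceptable": {"allowed": False, "conformity_assessment": False, "fria": False},
    "high":         {"allowed": True,  "conformity_assessment": True,  "fria": True},
    "limited":      {"allowed": True,  "conformity_assessment": False, "fria": False,
                     "transparency": True},   # e.g. disclose that users face an AI
    "minimal":      {"allowed": True,  "conformity_assessment": False, "fria": False},
}

def obligations_for(tier: str) -> dict:
    """Return the simplified obligation set for a given risk tier."""
    return OBLIGATIONS[tier.lower()]
```

For example, `obligations_for("high")` returns a dict with both `conformity_assessment` and `fria` set to `True`, matching the list above.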


Risk analysis under the EU AI Act


With AI, the attack surface increases. Roos talked mainly about how to protect users from the AI, and the AI from hostile external users.

As with any IT system, it is critical to protect the confidentiality, integrity and availability of the data in your systems. Established frameworks already codify cybersecurity best practices. Some examples:

  • ISO 27001/2
  • NIST
  • BIO (NL)
  • NEN 7510 (NL)

The bottom line is this:

  1. Determine what there is to protect
  2. Perform a risk analysis (with AI, its scope and frequency should increase)
  3. Have monitoring measures in place (continuous improvement)

These frameworks also consistently address:

  • Data classification
  • Awareness (understanding the risks of entering data into an AI system)
  • Information security throughout the chain (and for everyone involved).

These are relevant topics for any company serious about data security, and thus definitely worth considering!

Want to learn more about the possibilities with AI?