AI Act Glossary
The most important AI Act terms, explained in plain language.
Deployer
The organisation that uses an AI system (you, as the webshop owner).
A deployer is anyone who puts an AI system to use in a professional context. If you place an AI chatbot on your website, you are the deployer, even if you didn't build the AI yourself. You have obligations under the AI Act.
GDPR comparison
Comparable to 'controller' under the GDPR: you are responsible for how the AI is put to use.
A webshop using the Tidio AI chatbot is the deployer of that AI system.
Provider
The organisation that develops an AI system or places it on the market.
The provider is the maker of the AI. OpenAI (maker of ChatGPT), Google (Gemini) and Anthropic (Claude) are providers. They have the heaviest obligations under the AI Act.
GDPR comparison
Comparable to 'processor' under the GDPR, but with more responsibilities of its own.
OpenAI is the provider of ChatGPT. If you deploy ChatGPT, you are the deployer.
High-risk AI
AI systems with a significant impact on people, such as credit scoring or HR decisions.
The AI Act classifies certain AI systems as high-risk because of their impact on people's lives. Think of AI for recruitment, credit scoring, insurance pricing or education. These systems must meet strict requirements.
GDPR comparison
The GDPR has 'high-risk processing', which requires a DPIA. The AI Act goes much further, with conformity requirements.
An AI system that decides whether you get a loan is high-risk.
Prohibited AI
AI applications so harmful that they are completely banned in the EU.
Some AI practices are considered so dangerous that they are banned outright: social scoring, subliminal manipulation, emotion recognition in the workplace, and scraping facial images to build databases. The fine: up to 7% of annual turnover.
A system that gives customers a 'score' based on their social media behaviour.
Limited risk
AI with limited risk, subject to transparency obligations.
Most AI on websites falls into this category: chatbots, recommender systems, AI-generated content. You must inform users that AI is being used, but you don't have to perform a conformity assessment.
GDPR comparison
Comparable to the GDPR requirement to inform people about how you process their data.
An AI chatbot on a webshop is limited risk: you must disclose that it is AI.
AI literacy
The obligation to give staff who work with AI sufficient knowledge.
Article 4 of the AI Act obliges organisations to ensure that everyone who works with AI understands what it can do, what the risks are and how the AI Act works. This has applied since February 2025.
GDPR comparison
Comparable to the GDPR requirement to train staff in privacy and data processing.
A customer-service agent using an AI tool must know when the AI can get things wrong.
GPAI
General-purpose AI: versatile AI models such as ChatGPT, Claude or Gemini.
GPAI models are AI systems that can be used for many purposes. The AI Act sets requirements for both providers (makers) and deployers (users) of these models.
ChatGPT is a GPAI model. If you deploy it on your site, you must disclose which model you use.
Transparency obligation
The duty to tell users that they are interacting with AI.
Transparency is the AI Act's core requirement for most websites. If AI is involved in what a visitor sees or experiences, you must disclose it. This applies to chatbots, recommendations, AI-generated content and tracking.
GDPR comparison
The GDPR already has transparency requirements for data processing; the AI Act adds transparency about AI use.
A banner on your chatbot: 'You're chatting with an AI assistant'.
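As a sketch of how such a disclosure banner could be generated on a website, the snippet below builds the notice markup. The wording, the CSS class and the helper name are illustrative assumptions; the AI Act requires clear disclosure but does not prescribe any exact text or markup.

```typescript
// Minimal sketch: build the HTML for an AI-disclosure notice on a chatbot.
// Wording and class name are illustrative, not prescribed by the AI Act.
function aiDisclosureHtml(
  message: string = "You're chatting with an AI assistant."
): string {
  // Escape HTML-special characters so any wording stays safe to inject.
  const escaped = message
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  return `<p class="ai-disclosure" role="note">${escaped}</p>`;
}

console.log(aiDisclosureHtml());
```

The resulting fragment can be prepended to the chat widget so the notice is visible before the first message is exchanged.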
Deepfake
Video, audio or images generated or manipulated by AI.
Deepfakes are increasingly hard to tell from the real thing. The AI Act requires all AI-generated media to be clearly labelled.
An AI-generated product photo must be labelled as 'AI-generated'.
Conformity assessment
A formal assessment that demonstrates a high-risk AI system meets all requirements.
For high-risk AI, you must demonstrate that the system is safe, fair and transparent. This is comparable to CE marking for physical products.
An AI credit-scoring system must pass a conformity assessment before deployment.
DPIA
Data Protection Impact Assessment: a mandatory privacy risk assessment.
A DPIA is already required under the GDPR for high-risk processing. The AI Act stresses that the DPIA must specifically address the AI aspects of your system.
GDPR comparison
The DPIA comes from GDPR Article 35. The AI Act reinforces the requirement for AI systems.
Before deploying AI credit scoring, you describe the risks in a DPIA.
FRIA
Fundamental Rights Impact Assessment: an assessment of the impact on fundamental rights.
A FRIA goes beyond a DPIA. You assess not only privacy risks but also impact on equality, non-discrimination and other fundamental rights.
An insurer using AI pricing assesses whether the system disadvantages certain groups.