
Tags: compliance · transparency · risk-management · deadline · webshop

# The AI in Your Webshop You Didn't Know Was There

Ask most webshop owners which AI systems their store uses, and you'll get a short answer: "A chatbot, maybe a recommendation widget." Ask what's actually running under the hood (across their theme, apps, plugins, tracking scripts, and marketing tools) and the list usually triples.

That gap matters. Under the EU AI Act (Regulation 2024/1689), compliance obligations attach to AI systems regardless of whether you built them or installed them as a third-party app. If it's running on your site, it's your problem. And the Act is already in force, with rolling deadlines through 2027.

Here's what webshop owners actually need to know, without the legal jargon.

---

## The hidden AI problem

A typical Shopify or WooCommerce store quietly runs AI in places most owners never think about:

  • Personalization and recommendation apps (Klaviyo, Rebuy, LimeSpot)
  • AI-generated product descriptions and review summaries
  • Chatbots and support widgets (Tidio, Gorgias AI, Intercom Fin)
  • Dynamic pricing and discount engines
  • Fraud screening (Shopify's own, Signifyd, Riskified)
  • Ad and audience tools (Meta, Google, TikTok pixels with AI-driven optimization)
  • Search tools that rerank results based on user behavior

You didn't write any of this code. But when a customer interacts with your storefront, you are the deployer under the Act, and deployers have obligations too.

Most of these tools sit in the limited risk category, meaning you mainly owe transparency to users. A few can tip into high risk depending on how you use them. Either way, you can't comply with what you can't see.

## The four risk categories, briefly

The Act sorts AI into four buckets:

**Unacceptable risk** (Art. 5): banned outright. Think manipulative techniques that exploit vulnerabilities or distort behavior in harmful ways. Aggressive "dark pattern" AI aimed at pressuring impulse purchases could land here.

**High risk** (Annex III): strict obligations. For webshops, this is narrower than people assume. Generic order-fraud screening usually isn't high-risk; creditworthiness assessment for Buy Now Pay Later or in-house financing is. If you're unsure, assume you need to check.

**Limited risk** (Art. 50): transparency required. Chatbots must disclose they're AI. AI-generated content (product copy, synthetic reviews, deepfake-style imagery) must be labeled as such.

**Minimal risk**: no obligations. Spam filters, inventory forecasting, most personalization. Good practice still applies.
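As a rough illustration of that triage, the sketch below maps example tool descriptions onto the four tiers. The mapping and tool names are assumptions for demonstration, not legal determinations: actual classification depends on how each tool is used.

```python
# Illustrative first-pass triage of webshop AI tools into the Act's four
# risk tiers. The mapping below is an assumption for demonstration only;
# real classification depends on context and use (Art. 5, Annex III, Art. 50).
RISK_TIERS = {
    "unacceptable": {"dark-pattern purchase pressure"},
    "high": {"bnpl creditworthiness scoring", "in-house financing check"},
    "limited": {"chatbot", "ai product copy", "synthetic review imagery"},
    "minimal": {"spam filter", "inventory forecasting", "search reranking"},
}

def classify(tool: str) -> str:
    """Return the assumed risk tier for a tool description,
    or flag it for manual review if it isn't in the mapping."""
    name = tool.strip().lower()
    for tier, tools in RISK_TIERS.items():
        if name in tools:
            return tier
    return "unknown: review manually"
```

For example, `classify("Chatbot")` returns `"limited"`, while anything not in the table falls through to manual review, which is the honest default for a legal classification.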

## What you actually have to do

For the limited risk tools that make up most webshop AI, the core obligation is honest disclosure (Art. 50):

  • Tell users when they're interacting with an AI system (chatbots, voice agents)
  • Label AI-generated content where a reasonable person might mistake it for human-made
  • Keep the disclosure clear and at the point of interaction, not buried in a privacy policy

For anything that touches high-risk territory, the bar is much higher: risk management (Art. 9), data governance (Art. 10), human oversight (Art. 14), accuracy and robustness (Art. 15), record-keeping (Art. 12, retained for ten years under Art. 18), and registration in the EU database (Art. 49) before deployment. If you're a deployer of a high-risk system rather than a provider, your duties are lighter but real: chiefly monitoring, logging, and informing affected people.

## The deadlines

The Act phases in over roughly three years (Art. 113):

  • February 2025: prohibitions on unacceptable-risk AI apply
  • August 2025: general-purpose AI model rules apply
  • August 2026: most obligations for high-risk systems under Annex III apply
  • August 2027: full application, including high-risk systems embedded in regulated products

We're past the first two milestones. August 2026 is the one most webshops should be planning around now.

## Start with an inventory

You can't classify what you haven't listed. Before you worry about documentation templates or policy updates, build a simple inventory:

1. List every app, plugin, script, and integration on your storefront and backend
2. Flag the ones that use AI. This isn't always obvious: "smart," "personalized," "intelligent," and "automated" are clues
3. Categorize by risk using the Act's criteria, starting with whether anything interacts with users or makes decisions about them
4. Document disclosures already in place, and note the gaps
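Steps 1 and 2 can be sketched in a few lines, assuming you can fetch your storefront's HTML: grep the `<script src>` tags for well-known tool domains. The signature list and category labels below are illustrative assumptions, not a complete detector; server-side and inline integrations won't show up this way.

```python
import re

# Illustrative script-source signatures for tools commonly known to ship
# AI features. This list is an assumption for demonstration: extend it
# (or use a dedicated scanner) for a real inventory.
AI_SIGNATURES = {
    "tidio": "Chatbot / support widget",
    "gorgias": "Chatbot / support widget",
    "intercom": "Chatbot / support widget",
    "klaviyo": "Personalization / marketing",
    "rebuy": "Recommendations",
    "limespot": "Recommendations",
    "signifyd": "Fraud screening",
    "riskified": "Fraud screening",
    "connect.facebook.net": "Ad pixel with AI-driven optimization",
    "tiktok": "Ad pixel with AI-driven optimization",
}

def find_ai_scripts(html: str) -> dict:
    """Return {signature: category} for every known pattern found
    in the page's external <script src="..."> tags."""
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.I)
    hits = {}
    for src in srcs:
        for sig, category in AI_SIGNATURES.items():
            if sig in src.lower():
                hits[sig] = category
    return hits

sample = (
    '<script src="https://widget.tidio.co/x.js"></script>'
    '<script src="https://cdn.shopify.com/theme.js"></script>'
)
print(find_ai_scripts(sample))  # → {'tidio': 'Chatbot / support widget'}
```

Feed it your live page source (for example via `urllib.request` or your browser's "view source") and you get a first draft of step 1's inventory with step 2's AI flags attached.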

This is where most webshop owners get stuck, not because the Act is impossibly complex but because nobody has a clean picture of their own stack. That's exactly the gap AI Act Scanner was built to close: it scans your live site, identifies AI-powered tools and scripts running on it, and flags where you likely have transparency or compliance gaps, in minutes and without a consultant.

## Compliance as a trust signal

The fines make the headlines: up to €35 million or 7% of global turnover for the worst violations (Art. 99). But for most webshops, the bigger risk isn't a regulator's letter. It's customers quietly losing trust when they realize the "support agent" was a bot, or the glowing review was AI-generated, or their order got declined by a black-box algorithm with no appeal.

Transparent, well-documented AI use is becoming a competitive advantage. The shops that get ahead of this won't just avoid fines; they'll stand out.

---

This article is general information, not legal advice. For specific guidance, consult a qualified lawyer.

Ready to see what AI is actually running on your webshop? Scan your site at aiactscanner.com and get a compliance-gap report in minutes.
