EU AI Act 2026: What Swiss SMEs Need to Do Now
By Marina Nerandzic
March 24, 2026
The EU AI Act takes effect in stages, with the most critical deadline in August 2026. Even as a Swiss SME, you are directly affected if you serve EU customers, export products to the EU, or offer AI-powered services that reach EU citizens. This article explains what you need to do now.
What is the EU AI Act?
The EU AI Act is the world's first comprehensive law regulating artificial intelligence. It classifies AI systems by risk level and defines obligations for each class — from minimal transparency requirements to strict conformity assessments.
The four risk classes at a glance
- Unacceptable risk (banned): Social scoring, manipulative AI, biometric mass surveillance. These systems have been banned since February 2025.
- High risk: AI in HR/recruiting, credit scoring, insurance assessment, medical diagnostics. Strict documentation, monitoring, and transparency obligations apply.
- Limited risk: Chatbots and generative AI — must be labeled as AI (transparency obligation).
- Minimal risk: Spam filters, recommendation algorithms, production optimization — no special obligations.
Why are Swiss SMEs affected?
The EU AI Act has extraterritorial reach — similar to GDPR. You're specifically affected if:
- Your AI system is deployed in the EU (including via cloud services)
- Your AI output affects EU citizens (e.g., automated contract analysis for EU clients)
- You sell AI products or services into the EU market
For a Zug-based SME with clients in Germany or Austria, this means the EU AI Act is not a distant Brussels topic but directly relevant to day-to-day business.
What do you need to do? 6-step checklist
- Create an AI inventory: List all AI systems you use — internal and external. This includes purchased tools like ChatGPT, Copilot, or AI features in your accounting software.
- Determine risk class: Check which risk class each system falls into. Most SME applications (chatbots, process automation, document analysis) fall into 'limited' or 'minimal risk' categories.
- Build documentation: High-risk systems require technical documentation, risk assessments, and a quality management system. For limited risk, a transparency declaration suffices.
- Implement transparency: If you use a chatbot, users must know they're communicating with AI. Generated content must be labeled as AI-created.
- Ensure human oversight: Automated decisions (e.g., credit scoring, HR screening) need a 'human-in-the-loop' — someone who can intervene.
- Set up monitoring: High-risk systems must be continuously monitored — performance, bias, drift. Plan for a simple monitoring dashboard.
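Steps 1, 2, and 4 of this checklist can be captured as simple structured data rather than a spreadsheet. Here is a minimal sketch in Python; the tool names, vendors, and field names are purely illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str
    risk_class: str        # "unacceptable" | "high" | "limited" | "minimal"
    ai_labelled: bool = False  # transparency obligation met (step 4)?

# Step 1: list every AI system in use, internal and purchased
inventory = [
    AISystem("Support chatbot", "in-house", "limited", ai_labelled=True),
    AISystem("CV pre-screening", "ExampleHR AG", "high"),
    AISystem("Spam filter", "mail provider", "minimal"),
]

# Step 2: flag systems that trigger the strict August 2026 obligations
high_risk = [s.name for s in inventory if s.risk_class == "high"]

# Step 4: limited-risk systems still missing their AI label
unlabelled = [s.name for s in inventory
              if s.risk_class == "limited" and not s.ai_labelled]

print(high_risk)    # ['CV pre-screening']
print(unlabelled)   # []
```

Even a 20-line script like this gives you a single source of truth to review each quarter: which systems need full documentation, and which only need a transparency declaration.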
Timeline: Which deadlines apply?
- February 2, 2025: Banned AI systems had to be decommissioned (already in force)
- August 2, 2025: Obligations for general-purpose AI models and transparency rules for generative AI (already in force)
- August 2, 2026: All high-risk requirements apply — documentation, monitoring, conformity assessment
How it Company Zug can help
We support Swiss SMEs with a pragmatic three-phase approach: AI inventory and risk assessment (1 week), documentation and compliance roadmap (2-3 weeks), then implementation support and monitoring setup. No enterprise overhead, no months-long consulting projects — just practical compliance that fits into your daily operations.