Risk-Based Approach
The EU AI Act classifies AI systems into four risk categories: Unacceptable risk (prohibited), High risk (strict regulation), Limited risk (transparency obligations), and Minimal risk (no specific obligations). The classification determines the compliance requirements.
High-Risk AI Systems
High-risk AI systems include: biometric identification, critical infrastructure, education and vocational training, employment and workers' management, access to essential services such as creditworthiness assessment and insurance, law enforcement, and migration and asylum. These systems are subject to strict requirements for risk management, data quality and governance, technical documentation, transparency, and human oversight.
Relevance for Switzerland
Swiss companies are affected when they place AI systems on the EU market or put them into service there, when the output of their AI systems is used in the EU, or when they act as providers, importers, distributors, or deployers of AI systems in the EU. This extraterritorial scope is comparable to that of the GDPR.
Timeline & Deadlines
The EU AI Act entered into force in August 2024. Prohibitions on AI systems with unacceptable risk apply from February 2025. Obligations for general-purpose AI models apply from August 2025, and most requirements for high-risk AI systems from August 2026. Companies should start preparing now.
Our Support
We support you with the risk classification of your AI systems, the creation of required documentation, the implementation of compliant AI architectures, and the preparation for audits and certifications. Our experts combine technical AI know-how with regulatory expertise.