AI Act for Education and EdTech: The Complete Compliance Guide
Education shapes life chances. Where a student is admitted, how they are graded, which track they are placed on, and whether they are flagged during an exam — these decisions define trajectories that compound across an entire lifetime. The European Union's legislators understood this when they wrote the AI Act, and they classified AI systems used in education and vocational training as high-risk under Annex III, point 3. The compliance deadline — August 2, 2026 — is now months away, and it applies to every school, university, training institution, and EdTech company that places AI systems on the EU market or deploys them in EU classrooms. Some AI practices in education are not merely regulated — they are already banned. Emotion recognition in schools has been prohibited since February 2, 2025, with penalties reaching EUR 35 million or 7% of global turnover. This guide covers every obligation the AI Act imposes on education AI — from the EdTech vendor building the system to the school deploying it — with legal references, practical checklists, real-world scenarios, and an implementation roadmap.
TL;DR — AI Act and education essentials
- High-risk by default: AI systems used in admissions, grading, student placement, and exam proctoring are explicitly listed as high-risk in Annex III, point 3 of the EU AI Act.
- August 2, 2026 deadline: High-risk obligations for education AI become enforceable on this date. Systems already in use before then are exempt only for as long as they undergo no significant modification.
- Already banned: Emotion recognition in educational institutions has been prohibited since February 2, 2025 under Article 5(1)(f). Any AI system that infers students' emotional states — engagement, attention, frustration — from facial expressions, voice, or physiological signals in a school or university setting is illegal.
- Two sets of obligations: The EdTech vendor (provider) and the school or university using the tool (deployer) each carry independent compliance duties. A vendor's CE marking does not exempt the school from its own obligations.
- Student and parent rights: Students (and parents/guardians for minors) must be informed when AI is used to make or support decisions about them, and they have a right to explanation and human review.
- Bias testing is mandatory: Article 10 requires examination of training datasets for biases — critical in education where socioeconomic background, disability, language proficiency, and ethnicity can introduce systemic disadvantage.
- Penalties: Up to EUR 35 million or 7% of global turnover for prohibited practices; up to EUR 15 million or 3% of global turnover for high-risk compliance failures.
- Assessment tool: Use the Legalithm AI Act Assessment to classify your education AI systems and identify your obligations.
Why education AI is classified as high-risk
The AI Act does not leave high-risk classification to interpretation for education. Annex III, point 3 explicitly identifies AI systems intended to be used in education and vocational training as high-risk, with four sub-categories covering the core decision points in any student's educational journey. The rationale is set out in Recitals 56 and 57 of the Regulation: AI systems in education can determine access to education, professional life, and livelihood, and errors or biases in these systems can result in serious and irreversible harm — particularly for minors.
The fundamental rights dimension
Education is a fundamental right under Article 14 of the EU Charter of Fundamental Rights. When an AI system denies a student admission, assigns a failing grade, or places a student in a lower track, it directly affects the exercise of that right. The legislators concluded that the potential impact on students — many of whom are children — justifies the full suite of high-risk obligations regardless of the technical complexity of the system.
A basic algorithm that ranks applicants by a weighted score is subject to the same regime as a deep-learning model that analyses essays. The classification turns on the function and context, not the sophistication of the technology. For a full walkthrough, see Is My AI System High-Risk? Classification Guide.
The August 2026 deadline
High-risk obligations under the AI Act take effect on August 2, 2026. After that date, any AI system that falls within Annex III, point 3 and is placed on the EU market or put into service in the EU must comply with the full set of requirements in Articles 8–15 — or be taken off the market. Systems already in use before that date are exempt only for as long as they undergo no significant modification; once a system is significantly changed, it falls within scope. In practice, most actively maintained EdTech tools receive regular updates, which means the legacy exemption is narrow. Schools and EdTech companies should treat August 2026 as a hard deadline for all education AI currently in use.
The 4 high-risk education AI categories
Annex III, point 3 identifies four specific sub-categories of education AI that are classified as high-risk. Each addresses a distinct decision point in the educational process.
AI determining access to education (admissions)
Annex III, point 3(a): AI systems intended to be used for determining access or admission to educational and vocational training institutions at all levels.
This covers any AI system that influences who gets into a school, university, professional training programme, or vocational course — at any educational level.
Common examples:
- University admissions algorithms that score and rank applicants
- Automated screening of application essays or personal statements
- AI systems that predict applicant success and recommend admit/reject decisions
- Scholarship allocation tools that use AI to rank candidates
The "material influence" threshold is important here. The AI system does not need to make the final admission decision. If it scores, ranks, or screens out applicants before a human reviews them, it is high-risk. A university that uses AI to reduce 20,000 applications to 2,000 for human review has deployed a high-risk system — even though an admissions officer makes the final call on those 2,000.
AI evaluating learning outcomes (grading)
Annex III, point 3(b): AI systems intended to be used for evaluating learning outcomes, including when those outcomes are used to steer the learning process.
This is one of the broadest sub-categories in Annex III. It captures any AI system whose output contributes to assessing what a student has learned — whether the assessment is summative (final grades, exam scores, certifications) or formative (ongoing feedback that shapes the learning path).
Common examples:
- Automated essay scoring and grading systems
- AI-powered marking tools for multiple-choice, short-answer, or coding assignments
- Automated feedback generators that assign quality scores to student work
- Language proficiency AI assessors
- AI tools that score standardised tests
- Adaptive learning platforms that evaluate mastery and adjust content difficulty
The second clause — "including when those outcomes are used to steer the learning process" — is significant. It means that an adaptive learning platform that evaluates a student's understanding to decide what to teach next is high-risk, even if it never produces a formal grade. The evaluation of learning outcomes is itself the trigger, regardless of how the result is used downstream.
AI assessing appropriate education level (placement)
Annex III, point 3(c): AI systems intended to be used for assessing the appropriate level of education that an individual will receive or will be able to access.
This covers AI systems that determine which educational track or programme a student is placed in. In many European systems, students are channelled into academic or vocational tracks around ages 10–14. AI systems informing these placement decisions have outsized impact on life outcomes.
Common examples:
- Student placement algorithms that recommend academic vs. vocational tracks
- AI systems that assess readiness for advanced or gifted programmes
- Course recommendation engines that determine which level of a subject a student should take
- Special educational needs assessment tools that use AI to classify students
- Career guidance AI that recommends educational pathways based on aptitude predictions
A student placed in a lower track at age 12 by an AI system with biased training data may face compounding disadvantage for decades — which is why legislators treated placement AI with the same seriousness as admissions and grading AI.
AI monitoring exam behaviour (proctoring)
Annex III, point 3(d): AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests.
This targets AI proctoring systems that monitor students during examinations to detect cheating or rule violations. AI proctoring expanded dramatically during the COVID-19 pandemic and remains widespread in remote and hybrid examination settings.
Common examples:
- Remote proctoring systems with webcam monitoring and facial recognition
- AI tools that track eye movements to detect reading from off-screen materials
- Keystroke analysis tools that detect anomalous typing patterns during exams
- Browser lockdown tools with AI monitoring of attempted screen switching
- Audio monitoring systems that detect voices or sounds suggesting assistance
- AI-powered plagiarism detection tools that flag submissions for further review
Proctoring AI is particularly sensitive because it combines surveillance with consequential decision-making. A student flagged for "suspicious behaviour" may have their exam voided, face academic misconduct proceedings, or be expelled — all based on an algorithmic assessment. False positives disproportionately affect students with disabilities, neurodiverse students, students from different cultural backgrounds, and students testing in non-ideal home environments.
Summary: high-risk education AI at a glance
All four categories carry the same core set of high-risk obligations under Articles 8–15 of the AI Act. For a full breakdown of these obligations, see the EU AI Act Compliance Checklist.
Prohibited AI practices in education
Before addressing the high-risk obligations that take effect in August 2026, educational institutions must understand that certain AI practices are already prohibited. The Article 5 bans took effect on February 2, 2025. Violations are enforceable now, and they carry the highest tier of penalties.
Emotion recognition in schools and universities
Article 5(1)(f) prohibits the use of AI systems to infer emotions of natural persons in the areas of workplace and education institutions, except where the AI system is intended to be placed on the market or put into service for medical or safety reasons.
This prohibition is absolute for standard educational contexts. Any AI tool deployed in a school, university, or training institution that analyses a student's facial expressions, vocal tone, gaze patterns, posture, physiological signals, or keyboard/mouse behaviour to infer an emotional state — attention, engagement, boredom, frustration, confusion, stress — is banned.
This has major implications for EdTech. Several platforms marketed "engagement detection" or "attention monitoring" as core capabilities before the AI Act. These features are now illegal in EU educational settings regardless of branding — rebranding "emotion recognition" as "engagement analytics" does not change the legal classification. Article 5(1)(f) targets the function, not the label.
The narrow exception: The prohibition does not apply where the AI system is intended for medical or safety reasons (e.g., epileptic seizure detection, physical safety in laboratories). The exception is narrow and cannot be used as a backdoor for routine classroom monitoring.
Real-world example: A university deploys a proctoring platform that analyses students' facial expressions during exams to produce a "stress level" score. This is a prohibited AI practice — the university and vendor face penalties of up to EUR 35 million or 7% of global turnover. The proctoring system's other functions (screen monitoring, identity verification) may continue, but the emotion-inference feature must be removed.
Social scoring in educational contexts
Article 5(1)(c) prohibits AI systems that evaluate or classify natural persons based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental treatment that is unjustified or disproportionate. While this prohibition is general, it has specific relevance in education. Any AI system that creates a composite "student score" aggregating attendance, behaviour incidents, social media activity, family circumstances, or other behavioural data — and uses that score to deny educational opportunities — falls within the prohibition.
Subliminal manipulation and exploitation of vulnerabilities
Article 5(1)(a) prohibits AI systems that deploy subliminal techniques to materially distort behaviour causing significant harm. Article 5(1)(b) extends this to systems exploiting vulnerabilities of specific groups due to age, disability, or social situation. Since many students are minors — and children are inherently vulnerable under the AI Act (Recital 28) — EdTech systems using manipulative design patterns to exploit children's psychological vulnerabilities could fall within these prohibitions.
Compliance obligations for educational institutions
High-risk education AI systems must comply with the requirements in Articles 8–15 of the AI Act. The obligations fall on two categories of actors: the provider (typically the EdTech company that builds and supplies the system) and the deployer (typically the school, university, or training institution that uses it). This section focuses on the obligations most relevant to education contexts.
Risk management system (Article 9)
Every high-risk education AI system must have a risk management system that operates throughout its entire lifecycle. This is not a one-time risk assessment — it is an ongoing, iterative process that identifies risks, evaluates their likelihood and severity, adopts mitigation measures, and monitors residual risk.
For education AI, the risk management system must specifically consider:
- Bias risks: How might the system disadvantage students from particular socioeconomic, ethnic, linguistic, or disability backgrounds?
- Accuracy risks: What happens when the system is wrong? False positives in proctoring can lead to misconduct charges; inaccurate grades affect university admissions.
- Foreseeable misuse: Could teachers use the system beyond its intended purpose? Could students game it?
- Impact on minors: The risk management system must account for the heightened vulnerability of children and irreversible harm during critical developmental periods.
See AI Governance Framework: Building a Compliance Programme for detailed guidance.
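A common way to make this iterative process auditable is a living risk register that records each risk alongside its mitigations and residual rating. Below is a minimal sketch; the structure, field names, and scoring matrix are our assumptions, not anything Article 9 prescribes.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: Level
    severity: Level
    mitigations: list[str] = field(default_factory=list)
    residual_likelihood: Level = Level.HIGH
    residual_severity: Level = Level.HIGH
    last_reviewed: date = field(default_factory=date.today)

    @property
    def residual_score(self) -> int:
        # Simple likelihood-times-severity matrix. Article 9 requires
        # residual risk to be judged acceptable; it prescribes no metric.
        return int(self.residual_likelihood) * int(self.residual_severity)

register = [
    RiskEntry(
        risk_id="R-001",
        description="Proctoring flags students with motor tics as suspicious",
        likelihood=Level.MEDIUM,
        severity=Level.HIGH,
        mitigations=["human review of every flag", "documented accommodation workflow"],
        residual_likelihood=Level.LOW,
        residual_severity=Level.MEDIUM,
    ),
]
needs_attention = [r for r in register if r.residual_score >= 4]
```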
Data governance (Article 10)
Article 10 imposes rigorous requirements on the data used to train, validate, and test high-risk AI systems. In education, data governance is particularly sensitive because:
- Student data sensitivity: Education records, assessment results, and learning analytics often involve children and reveal information about disabilities, family circumstances, and socioeconomic status.
- GDPR overlay: Student data is personal data under the GDPR, and much of it qualifies as special category data. The AI Act's requirements operate in addition to GDPR — not as a replacement. See EU AI Act vs. GDPR: Differences and Overlap.
- Representative datasets: Training data must represent the population the system serves. An essay-scoring AI trained only on essays by native English speakers may systematically mis-score non-native speakers. Article 10(3) requires datasets to be sufficiently representative, and Article 10(4) requires them to account for the setting within which the system is intended to be used.
- Bias examination: Article 10(2)(f) requires examination of data for biases that are likely to impact fundamental rights or lead to discrimination, and Article 10(2)(g) requires appropriate measures to detect, prevent, and mitigate them. In practice, this means active testing for disparate impact across gender, ethnicity, disability, socioeconomic background, and language (a minimal sketch follows below).
For more on bias testing requirements, see AI Bias Testing and Fairness Under the EU AI Act.
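Bias examination can be operationalised with a few lines of analysis. The sketch below computes pass rates per demographic group and flags gaps using the four-fifths rule, a widely used screening heuristic from employment law; the AI Act itself prescribes no particular metric, and the column names and data here are hypothetical.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, passed_col: str) -> pd.Series:
    """Pass rate per demographic group."""
    return df.groupby(group_col)[passed_col].mean()

def disparate_impact_ratios(df: pd.DataFrame, group_col: str, passed_col: str) -> pd.Series:
    """Each group's pass rate divided by the most favoured group's rate.

    Values below ~0.8 (the 'four-fifths rule') are a common heuristic
    trigger for deeper investigation -- the AI Act does not prescribe
    any threshold.
    """
    rates = selection_rates(df, group_col, passed_col)
    return rates / rates.max()

# Hypothetical essay-scorer outcomes, grouped by first language
grades = pd.DataFrame({
    "first_language": ["DE", "DE", "TR", "TR", "TR", "DE", "TR", "DE"],
    "passed":         [1,    1,    0,    1,    0,    1,    0,    1],
})
print(disparate_impact_ratios(grades, "first_language", "passed"))
# TR students pass at 0.25x the DE rate here -- well below 0.8,
# so this system would warrant investigation and mitigation.
```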
Human oversight requirements (Article 14)
Article 14 requires that high-risk AI systems be effectively overseen by natural persons during use. In education, teachers, professors, admissions officers, and administrators must be able to:
- Understand the system's capabilities and limitations — including known accuracy rates and error margins.
- Interpret the system's outputs — understanding what a score means and what factors influenced it.
- Override or reverse the AI's output — with practical ability and institutional authority to do so.
- Intervene or interrupt the system — including a "stop" mechanism, particularly for real-time proctoring tools.
Schools cannot adopt a "set and forget" approach. Automated grading without teacher review, proctoring tools that automatically void exams, or admissions algorithms whose outputs are rubber-stamped all fail the human oversight requirement.
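One way to make oversight structural rather than a policy statement is to hold every AI output in a pending state until a named human confirms or overrides it. A minimal sketch follows; the workflow and names are our own illustration, not a mandated design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GradeProposal:
    student_id: str
    ai_grade: float
    ai_rationale: str               # factors the model reports as influential
    reviewer: Optional[str] = None
    final_grade: Optional[float] = None

    def approve(self, reviewer: str) -> None:
        self.reviewer = reviewer
        self.final_grade = self.ai_grade

    def override(self, reviewer: str, grade: float, reason: str) -> None:
        # The override path must be as easy as the approve path;
        # recording the reason also supports record-keeping duties.
        self.reviewer = reviewer
        self.final_grade = grade
        self.ai_rationale += f" | overridden: {reason}"

def release_grades(proposals: list[GradeProposal]) -> list[GradeProposal]:
    """Only human-confirmed grades ever leave the system."""
    unreviewed = [p for p in proposals if p.final_grade is None]
    if unreviewed:
        raise ValueError(f"{len(unreviewed)} grades still awaiting human review")
    return proposals
```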
Transparency to students and parents (Article 13 + Article 26(11))
Article 13 requires sufficient transparency for deployers to interpret outputs. Article 26(11) adds that natural persons subject to high-risk AI decisions must be informed. For educational institutions:
- Students must be told when AI assesses applications, grades work, determines placement, or monitors exams.
- Parents and guardians must be informed when the student is a minor.
- Meaningful information: "We use AI" is insufficient. Disclosures must explain what the AI does, what data it processes, how outputs are used, and what rights students have (including the right to human review).
- Timing: Notification must come before or at the time the AI system is used — not after the decision.
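The required disclosure content can be kept consistent across systems by treating the notice as a structured record. The field set below is our reading of Article 26(11) plus the points listed above, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIUseNotice:
    system_name: str
    what_it_does: str         # plain-language purpose
    data_processed: str
    how_outputs_are_used: str
    human_reviewer_role: str
    how_to_contest: str       # route to explanation and human review

notice = AIUseNotice(
    system_name="Essay scoring assistant",
    what_it_does="Suggests a provisional grade for written coursework.",
    data_processed="Submitted essay text and rubric criteria; no webcam or audio.",
    how_outputs_are_used="A teacher reviews every suggested grade before release.",
    human_reviewer_role="Subject teacher; may change any grade.",
    how_to_contest="Contact the examinations office to request human re-assessment.",
)
```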
Fundamental Rights Impact Assessment (Article 27)
Article 27 requires certain deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA) before putting the system into use. This applies to:
- Bodies governed by public law (most public schools and universities in EU Member States), or
- Private entities providing public services (private schools receiving public funding, private universities in certain jurisdictions).
The FRIA must assess impact on the right to education (Article 14 EU Charter), non-discrimination (Article 21), rights of the child (Article 24), effective remedy (Article 47), and good administration (Article 41). For education AI, specifically examine:
- Disproportionate impact on students from disadvantaged backgrounds
- Recourse available to students and parents who disagree with AI output
- Interaction with existing equality and accessibility obligations
- Specific risks to children and vulnerable students
For a detailed guide, see AI Act FRIA: Fundamental Rights Impact Assessment Guide.
GDPR and DPIA requirements for student data
The AI Act does not replace the GDPR — it operates alongside it. Educational institutions deploying AI must continue to comply with the GDPR in full, including:
- Lawful basis: For public schools, this is typically Article 6(1)(e) GDPR (public task). Consent is problematic in educational settings because of the power imbalance between institutions and students.
- Data Protection Impact Assessment (DPIA): Under Article 35 GDPR, a DPIA is likely required for most education AI, because such systems involve the processing of data concerning vulnerable persons (children) and the use of new technologies. The DPIA should be coordinated with the FRIA to avoid duplication.
- Data minimisation: AI systems should process only necessary student data. A proctoring system that records full video of a student's home environment likely fails data minimisation.
- Article 22 GDPR: The right not to be subject to a decision based solely on automated processing with legal or similarly significant effects. Automated grading and admissions decisions can trigger Article 22 protections.
Who is the provider vs. deployer in education?
Understanding the provider vs. deployer distinction is essential because the AI Act assigns different obligations to each role — and in education, both roles carry serious responsibilities.
The EdTech company as provider
The provider is the entity that develops or places the AI system on the market. In education, this is typically the EdTech company that builds and sells the AI tool — the company that develops the automated grading platform, the proctoring solution, or the adaptive learning system.
Provider obligations include:
- Conformity assessment (Article 43): Ensuring the system meets all requirements before market placement. For education AI, self-assessment is generally available.
- Technical documentation (Article 11 + Annex IV): Comprehensive documentation describing the system, its purpose, risk management, data governance, accuracy, and testing. See AI Act Technical Documentation Guide.
- CE marking (Article 48): Affixing the CE mark to indicate conformity.
- Quality management system (Article 17): A systematic framework for ongoing compliance.
- Post-market monitoring (Article 72): Monitoring performance after deployment, collecting error data, and taking corrective action. See Post-Market Monitoring and Incident Reporting.
- Instructions for use: Providing deployers with clear instructions including intended purpose, known limitations, accuracy rates, and human oversight requirements.
The school or university as deployer
The deployer is the entity that uses the AI system under its authority. In education, this is the school, university, exam board, or training institution that procures and deploys the EdTech tool.
Deployer obligations include:
- Use in accordance with instructions (Article 26(1)): Deployers must use the system as the provider intended. Using an adaptive learning platform (designed for formative feedback) as a summative grading tool may change the risk profile and trigger additional obligations.
- Human oversight (Article 26(2)): Ensuring that the natural persons assigned to oversee the AI system are competent, trained, and have the authority to override it.
- Input data quality (Article 26(4)): Ensuring that the data fed into the system is relevant and representative. If a school feeds biased or incomplete student records into an AI placement tool, the school bears responsibility for the resulting output.
- Informing students (Article 26(11)): Notifying affected persons that they are subject to a high-risk AI system.
- Fundamental Rights Impact Assessment (Article 27): Where applicable (public bodies and public service providers).
- Record keeping (Article 26(6)): Retaining logs generated by the system for the period specified in the instructions for use, or at least six months.
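On the record-keeping point, deployers often write structured, append-only audit records with an explicit retention horizon stamped on each entry. A minimal sketch; the JSON-lines format and field names are our assumptions.

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # "at least six months" under Article 26(6)

def log_ai_event(path: str, system: str, event: str, details: dict) -> None:
    """Append one audit record; deletion only after RETENTION elapses."""
    now = datetime.now(timezone.utc)
    record = {
        "ts": now.isoformat(),
        "retain_until": (now + RETENTION).isoformat(),
        "system": system,
        "event": event,
        "details": details,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_event(
    "ai_audit.jsonl",
    system="remote-proctoring",
    event="flag_raised",
    details={"exam_id": "MATH-101", "flag": "gaze_offscreen", "human_review": "pending"},
)
```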
Shared responsibilities in practice
In practice, the provider-deployer relationship in education is often entangled. An EdTech company may host the AI system as SaaS, manage updates, and retrain models on student data — effectively controlling the system's behaviour even though the school is the formal deployer. Both parties should clearly define their obligations in the procurement contract — particularly around data governance, bias testing, incident reporting, and human oversight.
A school cannot discharge its deployer obligations by pointing to the vendor's compliance. If a vendor-provided AI system produces biased grades, the school is liable as deployer for failing human oversight — even if the vendor is also liable as provider for data governance failures.
European Commission 2026 ethical guidelines for AI in education
In addition to the legally binding AI Act, the European Commission has published ethical guidelines for AI in education, building on the 2022 framework and updated for the AI Act era. These guidelines are not enforceable law, but they represent the Commission's view of best practice and are likely to influence how regulators and courts interpret the AI Act's requirements. The five pillars are:
- Human dignity and autonomy: AI must support — not replace — human agency in learning. Students should remain active participants, not passive subjects of algorithmic optimisation.
- Fairness and non-discrimination: AI must not perpetuate existing educational inequalities. Special attention to the digital divide, socioeconomic disadvantage, disability, and linguistic diversity.
- Transparency and trust: Students, parents, and educators must understand how AI influences decisions. Trust requires explainability, not just disclosure.
- Academic integrity: AI tools must support honest learning and not undermine the value of qualifications.
- Justified and proportionate use: AI deployment must be justified by genuine need and proportionate to the educational benefit.
These pillars should inform risk management systems, transparency notices, and the evaluation of whether a particular AI deployment is proportionate.
Implementation roadmap for schools and EdTech companies
Phase 1: Inventory and classification (now)
- Inventory all AI systems currently in use. This includes tools that may not be obviously "AI" — recommendation engines, adaptive platforms, automated feedback tools, plagiarism detectors, and proctoring systems. See How to Build an AI Systems Inventory.
- Classify each system against Annex III, point 3. Use the Legalithm AI Act Assessment for a guided classification.
- Identify prohibited systems: Check every system for emotion recognition features, social scoring components, or manipulative design patterns. If any are present, they must be removed or the system decommissioned immediately — the prohibition has been in effect since February 2025.
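A workable Phase 1 inventory can be as lightweight as one structured record per tool, as long as it captures the fields that classification needs. A sketch with a hypothetical entry:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: str                     # "provider" or "deployer"
    annex_iii_match: str          # e.g. "3(b) learning outcomes" or "none"
    prohibited_features: list[str] = field(default_factory=list)  # must stay empty
    affected_groups: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="AdaptiveMath",
        vendor="ExampleEdTech BV",          # hypothetical vendor
        role="deployer",
        annex_iii_match="3(b) and 3(c)",
        prohibited_features=[],             # verified: no emotion inference
        affected_groups=["students aged 10-16", "parents/guardians"],
    ),
]
banned = [s for s in inventory if s.prohibited_features]  # decommission these now
```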
Phase 2: Gap analysis and vendor engagement (Q2 2026)
- Conduct gap analysis: For each high-risk system, assess the current state of compliance against the requirements in Articles 8–15. Identify gaps in documentation, human oversight, bias testing, transparency, and logging.
- Engage vendors: Request technical documentation, accuracy metrics, bias testing results, and instructions for use from each EdTech provider. If a vendor cannot provide these materials, escalate — and begin evaluating alternative providers.
- Begin FRIA (for public institutions): Start the Fundamental Rights Impact Assessment process for each high-risk system. This requires cross-functional input from data protection officers, educators, student representatives, and equality officers.
- Begin or update DPIA: Coordinate the GDPR Data Protection Impact Assessment with the FRIA to ensure comprehensive coverage without duplication.
Phase 3: Implementation (Q2–Q3 2026)
- Implement human oversight procedures: Define who oversees each AI system, what training they need, and what authority they have to override or intervene. Document these procedures.
- Deploy transparency notices: Draft and publish student-facing notices explaining which AI systems are used, what they do, what data they process, and what rights students and parents have.
- Establish logging and record-keeping: Ensure that system logs are retained for the required period and are accessible for audit.
- Conduct or commission bias testing: Test each high-risk system for disparate impact across protected characteristics relevant to the student population. Document results and mitigation measures.
- Update procurement contracts: Ensure that contracts with EdTech vendors clearly allocate AI Act responsibilities between provider and deployer.
Phase 4: Ongoing compliance (post-August 2026)
- Monitor system performance: Implement post-market monitoring processes for providers. Deployers should establish feedback mechanisms for teachers and students to report issues.
- Incident reporting: Establish procedures for reporting serious incidents (system failures, discriminatory outcomes, safety risks) to the relevant national authority. Providers must report within 15 days of becoming aware of a serious incident.
- Regular review: Revisit risk management assessments, bias testing, and FRIAs at regular intervals — at minimum annually, and after any significant system update.
Real-world examples
Adaptive learning platforms
Scenario: A secondary school in Germany deploys an adaptive mathematics platform that evaluates students' answers, assesses mastery levels, and adjusts content difficulty. The platform uses AI to classify students into proficiency tiers that inform teacher recommendations for advanced or standard track placement.
AI Act classification: High-risk under Annex III, point 3(b) (evaluating learning outcomes) and point 3(c) (assessing appropriate level of education). The system both evaluates what students have learned and uses that evaluation to influence their educational track.
Obligations: The vendor must provide technical documentation, accuracy metrics, and bias testing results. The school must implement human oversight (teachers able to override track recommendations), notify students and parents, and — as a public institution — conduct a FRIA. The school should test whether proficiency classifications correlate with socioeconomic status or language background.
AI proctoring in university examinations
Scenario: A Dutch university uses a remote proctoring platform for online examinations. The system uses webcam monitoring to verify student identity, track eye movements, detect unauthorised materials, and flag "suspicious behaviour" for human review. The platform previously included an "engagement analysis" feature that assessed students' apparent attention levels.
AI Act classification: High-risk under Annex III, point 3(d) (monitoring and detecting prohibited behaviour during tests). The "engagement analysis" feature constitutes emotion recognition and is prohibited under Article 5(1)(f).
Action required: The university must immediately disable the engagement analysis feature — the prohibition has been in effect since February 2025. For remaining proctoring functions, the university must ensure human review of all flags before academic consequences, notify students that AI proctoring is in use, test false-positive rates across demographics, and ensure data minimisation (full room scans may violate both the AI Act and GDPR).
AI chatbot tutors
Scenario: A French business school deploys an AI-powered chatbot that answers students' questions about course material, provides practice problems, and gives feedback on written answers. The chatbot does not assign grades — teachers handle all formal assessment.
AI Act classification: This requires careful analysis. If the chatbot purely provides information and practice without evaluating learning outcomes or influencing placement, it may not fall within Annex III, point 3. However, if its assessments feed into the school's learning management system and influence how students are tracked or recommended for courses, it becomes high-risk under point 3(b) or 3(c). The classification depends on how outputs are actually used, not how they are marketed.
Practical guidance: Schools should document the chatbot's function precisely. If there is any ambiguity, the safer approach is to treat the system as high-risk — the cost of compliance is far lower than the cost of a penalty.
Frequently asked questions
Does the AI Act apply to private schools?
Yes. The AI Act applies based on where the AI system is placed on the market or put into service, not the institution's public or private status. A private school using AI for admissions, grading, placement, or proctoring has the same high-risk obligations as a public school. The FRIA obligation (Article 27) may not apply to purely private institutions, but all other obligations apply equally.
Are AI-powered plagiarism detectors high-risk?
It depends on the function. A plagiarism tool that flags text similarity for teacher review is less clearly within Annex III, point 3(d) than a proctoring tool monitoring behaviour during exams. However, if the detector makes a determination about whether a submission constitutes prohibited conduct — rather than flagging similarity — it falls within scope. AI-generated text detectors raise similar questions and should be assessed case-by-case.
Can schools use AI grading if a teacher reviews all grades?
Yes — but human review must be meaningful, not rubber-stamping. Article 14 requires the human overseer to understand outputs, interpret them, and have practical ability and authority to override. If teachers lack the time, training, or mandate to meaningfully evaluate and correct AI grades, the human oversight requirement is not satisfied. Institutions should ensure manageable review volumes, provide training, and create a culture where overriding the AI is supported.
What about AI used in non-EU schools with EU students?
The AI Act has extraterritorial scope. Under Article 2(1)(c), the Regulation applies to providers and deployers in third countries where the AI system's output is used in the Union. A US-based university using AI to assess EU-resident applicants may fall within scope. The exact contours will be clarified by enforcement practice, but non-EU institutions with EU-resident applicants should assess their exposure.
When must we start complying?
The timeline depends on which obligation is at issue:
- Prohibited practices (emotion recognition, social scoring, manipulation): Already enforceable since February 2, 2025.
- AI literacy (Article 4): Enforceable since February 2, 2025 — all staff interacting with AI systems should have sufficient understanding of how they work.
- High-risk obligations (Articles 8–15, deployer duties in Article 26): Enforceable from August 2, 2026.
- General-purpose AI model obligations: Enforceable from August 2, 2025 for providers of general-purpose AI models.
For the complete timeline, see EU AI Act Timeline: Key Dates and Deadlines.
What are the penalties for non-compliance?
The AI Act's penalty structure is tiered:
- Prohibited practices: Up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
- High-risk obligations: Up to EUR 15 million or 3% of total worldwide annual turnover.
- Incorrect information to authorities: Up to EUR 7.5 million or 1% of turnover.
For SMEs and startups, the lower of the two amounts (fixed cap vs. percentage) applies. For a full breakdown, see EU AI Act Penalties and Fines Explained.
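The tiering is mechanical enough to compute. The sketch below applies Article 99's higher-of rule for most operators and the lower-of rule for SMEs; the turnover figures are purely illustrative.

```python
def fine_cap(turnover_eur: float, tier_fixed: float, tier_pct: float, sme: bool) -> float:
    """Applicable maximum fine: the higher of the two amounts for most
    operators, the lower of the two for SMEs and startups (Article 99(6))."""
    fixed, pct = tier_fixed, tier_pct * turnover_eur
    return min(fixed, pct) if sme else max(fixed, pct)

# Prohibited-practice tier for a large vendor with EUR 1bn turnover:
print(fine_cap(1_000_000_000, 35_000_000, 0.07, sme=False))  # 70,000,000.0
# Same violation by an SME with EUR 10m turnover:
print(fine_cap(10_000_000, 35_000_000, 0.07, sme=True))      # 700,000.0
```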
Next steps
The August 2, 2026 deadline is approaching. Schools, universities, and EdTech companies that have not begun compliance should start immediately.
Start with classification: Use the Legalithm AI Act Assessment to determine which of your AI systems are high-risk and identify your specific obligations as provider or deployer.
Build your compliance programme: The EU AI Act Compliance Checklist provides a step-by-step framework covering every requirement from risk management to post-market monitoring.
Understand your role: If you are an EdTech company, read the Provider vs. Deployer Obligations Guide to understand exactly what the AI Act requires of you — and what you must provide to your customers to support their compliance.
The AI Act is not an obstacle to innovation in education — it is the framework that makes innovation sustainable, trustworthy, and fair.
Check your AI system's compliance
Free assessment — no signup required. Get your risk classification in minutes.