How companies using e-assessment can prepare for the introduction of the EU AI Act


The European Union has introduced new legislation on artificial intelligence, the EU AI Act, which applies to companies with AI products or services used within the EU, regardless of where they are based. The legislation is designed to regulate AI systems to ensure their safety, transparency and accountability, and it has the potential to become a global standard for AI legislation. 

We asked eAA Chief Executive Patrick Coates to highlight the key implications for companies involved in e-assessment, since AI used in education and training, including the eAA’s remit of assessing students, falls within the Act’s scope. 

For companies involved in e-assessment, the Act has several implications, particularly for those using AI for proctoring, content creation, and test creation: such tools will likely be classified as high-risk. 

Impact on online proctoring 

  • Bias and Discrimination: AI proctoring systems must be free from biases that could affect the fairness of assessments. Companies must ensure that these systems do not discriminate against any group of students. 
  • Privacy Concerns: AI proctoring involves monitoring students, which raises significant privacy issues. Companies must ensure compliance with GDPR and other data protection regulations. 
  • Transparency and Consent: Students must be informed about the AI proctoring system, how it works, and what data is being collected. Obtaining explicit consent may be necessary. 

The eAA’s analysis: Technology has enormous potential to enhance and transform learning and assessment, and the e-Assessment Association has been sharing examples of best practice since we were founded over 15 years ago. AI has been used by technology providers for many years, and issues such as bias are nothing new. This act should not present significant challenges for technology providers who place safety, transparency and accountability at the heart of their operations. Requirements such as human oversight or content logs are already embedded in best practice. 

Impact on Content and Test Creation: 

  • Quality and Fairness: AI systems used for content and test creation must generate high-quality, fair, and unbiased content. The data used for training these systems should be diverse and representative. 
  • Accountability: Companies must be able to explain how AI-generated content is produced and ensure it meets educational standards and objectives. 

The eAA’s analysis: “Of course, this is true of any assessment, and it is often the remit of the test sponsor/owner rather than the technology provider. Under the EU legislation, users of technology will also have certain obligations, so they need to be aware of this. Best practice would always dictate that where such technology is used, it is used correctly and checked where appropriate.” 

Compliance and Monitoring: 

  • Conformity Assessment: High-risk AI systems will need to undergo conformity assessments before being deployed. This may involve internal checks or third-party audits. 
  • Post-Market Monitoring: Continuous monitoring and reporting of the AI system’s performance and potential risks are required. 

The eAA’s analysis: “As with any system, making sure it is fit for purpose over time is essential. This is something that is a standard part of e-assessments when they are being created, but providers also need to make sure that the same applies with AI-enabled components.” 

 

Non-compliance with the EU AI Act can result in substantial fines of up to 6% of the company’s annual global turnover or €30 million, whichever is higher. GDPR carries similar penalties: up to 4% of annual global turnover or €20 million, whichever is higher. While the financial penalties are significant, the additional reputational damage of a breach of the legislation cannot be ignored. 
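To illustrate the “whichever is higher” rule, a minimal sketch in Python (the function names are ours; the caps are the figures cited above, and the actual fine in any case would be set by the regulator, not a formula):

```python
def eu_ai_act_fine_cap(annual_global_turnover_eur: float) -> float:
    """Maximum fine under the EU AI Act as described above:
    6% of annual global turnover or EUR 30 million, whichever is higher."""
    return max(0.06 * annual_global_turnover_eur, 30_000_000)

def gdpr_fine_cap(annual_global_turnover_eur: float) -> float:
    """Maximum fine under GDPR: 4% of annual global turnover
    or EUR 20 million, whichever is higher."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# For a company turning over EUR 1 billion, 6% exceeds the EUR 30m floor:
print(eu_ai_act_fine_cap(1_000_000_000))  # → 60000000.0
# For a smaller company, the fixed floor applies instead:
print(eu_ai_act_fine_cap(100_000_000))    # → 30000000
```

The `max()` captures why the fixed amount matters: for smaller companies the €30 million floor, not the turnover percentage, sets the ceiling.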

Practical Steps for Companies 

Many companies will have already started to prepare for the introduction of this new legislation but, if you haven’t, there are a number of practical steps that you can take to ensure you comply with this new EU law: 

  • Assess AI Systems: Evaluate current AI systems to determine if they are high-risk under the EU AI Act. 
  • Implement Compliance Measures: Develop and implement measures to comply with the Act’s requirements, including risk management, transparency, and human oversight. 
  • Data Governance: Ensure that the data used by AI systems is high-quality, representative, and used ethically. 
  • Regular Audits and Monitoring: Conduct regular audits and continuous monitoring of AI systems to ensure ongoing compliance. 
  • Stakeholder Engagement: Engage with stakeholders, including students and educators, to explain the use of AI systems and address any concerns. 

The EU AI Act aims to create a trustworthy and human-centric AI framework. For e-assessment companies, complying with the Act will not only ensure legal adherence but also promote ethical AI practices, enhancing trust and credibility in their AI systems. 

For more information on the EU AI Act, read our article “The impact of EU AI Act on e-assessment”. 
