The Most Innovative Use of Technology in Assessment award recognises a project that has been highly innovative, either through new technology or through using existing technology in a new and different way.
Sentira XR is revolutionising assessment by integrating Artificial Intelligence (AI) with Virtual Reality (VR) to create immersive, scalable, and fair competency evaluations.
Traditional assessment methods in medical education often struggle with subjectivity, scalability, and lack of real-world applicability. Sentira XR overcomes these challenges by offering AI-driven, interactive simulations that assess clinical reasoning, decision-making, and communication skills in a high-fidelity, risk-free environment.
Designed for broad scalability, Sentira XR is adaptable across medical, corporate, and compliance training sectors.
By embedding accessibility, ethical AI principles, and robust analytics, Sentira XR sets a new benchmark for AI-powered assessment, transforming training methodologies and ensuring fair, reliable, and future-proof evaluation processes.
NCFE used RM Compare to improve how it was assessing applications for two competitions: the NCFE Assessment Innovation Fund and the NCFE Aspiration Awards.
Together, we demonstrated the transformative potential of this new approach, delivering significant improvements in efficiency, reliability and stakeholder engagement across two critical, live evaluation processes.
Finalists:
With the rise of online assessments and large language model-driven item generation, efficient item piloting and calibration are critical. To ensure test security and score validity of large-scale Computerized Adaptive Testing (CAT), item banks must be continuously refreshed. However, traditional piloting and item calibration methods are too resource-intensive, creating bottlenecks in test development. To address these challenges, we developed a CAT system integrating three innovative components. Together, these components provide a cost-effective solution for high-stakes testing, supporting CATs with large pools of automatically generated items. We successfully used this system to introduce thousands of new vocabulary items to the Duolingo English Test (DET) while maintaining score validity. Joint calibration of six item types processed over 38 million responses in five hours, demonstrating scalability and readiness for large-scale implementation.
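To make the calibration step concrete, the sketch below fits item difficulties under a Rasch (one-parameter IRT) model on simulated response data. It is purely illustrative of what item calibration involves: the model choice, the NumPy implementation, and all parameter values are assumptions for illustration, not the DET's actual system or code.

```python
# Illustrative sketch: calibrating item difficulties under a Rasch
# (one-parameter IRT) model. For simplicity, person abilities are
# treated as known; real joint calibration estimates abilities and
# the parameters of multiple item types together, typically linking
# new items to the existing scale via anchor items.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 50

# Simulate true abilities and difficulties, then binary responses,
# where P(correct) = sigmoid(ability - difficulty).
theta = rng.normal(0.0, 1.0, size=n_persons)      # person abilities
b_true = rng.normal(0.0, 1.0, size=n_items)       # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_true[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(float)

# Estimate difficulties by gradient ascent on the mean log-likelihood;
# the per-item gradient with respect to difficulty is mean(p_hat - x).
b_hat = np.zeros(n_items)
for _ in range(300):
    p_hat = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_hat[None, :])))
    b_hat += (p_hat - responses).mean(axis=0)     # ascent step, lr = 1

print("mean absolute calibration error:", np.abs(b_hat - b_true).mean())
```

With enough responses per item, the estimated difficulties converge on the true values, which is what allows newly piloted items to be placed on the same scale as the operational item bank.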
The Singapore Examinations and Assessment Board (SEAB), under the Ministry of Education, oversees national and non-national exams across 400 schools, supporting 80,000 annual registrations. Exam delivery that relied solely on internet connectivity posed challenges for exam integrity and candidate performance, while delivering national-level tests across 400 schools traditionally required software to be locally installed and administered at each school to guarantee uninterrupted testing. Additionally, language proficiency testing needed a more flexible and efficient way to conduct oral exams at scale. To address these challenges, SEAB undertook a large-scale transformation, transitioning to a web-based, cloud-hosted assessment solution. This innovative system integrates cloud computing with offline functionality, enabling seamless exam delivery across all environments.
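The offline resilience SEAB describes follows a broadly familiar offline-first pattern: persist candidate responses locally first, then sync them to the cloud when connectivity allows. The sketch below illustrates that generic pattern only; the file name, endpoint and retry logic are hypothetical and should not be read as SEAB's implementation.

```python
# Generic offline-first sketch: responses are written to durable local
# storage before any upload, then synced opportunistically. Illustrative
# only; the store, endpoint and payload format are invented for this example.
import json
import pathlib
import urllib.request

QUEUE = pathlib.Path("pending_responses.jsonl")   # hypothetical local store
SYNC_URL = "https://example.invalid/submit"       # hypothetical endpoint

def save_locally(response: dict) -> None:
    """Append the response to the local queue before attempting upload."""
    with QUEUE.open("a") as f:
        f.write(json.dumps(response) + "\n")

def sync_pending() -> None:
    """Try to upload queued responses; keep any that fail for later retry."""
    if not QUEUE.exists():
        return
    remaining = []
    for line in QUEUE.read_text().splitlines():
        try:
            req = urllib.request.Request(
                SYNC_URL, data=line.encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            remaining.append(line)   # offline or server error: retry later
    QUEUE.write_text("\n".join(remaining) + ("\n" if remaining else ""))
```

Because nothing depends on the network at the moment a candidate answers, a connection drop mid-exam costs no data; the queue simply drains once connectivity returns.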