The Most Innovative Use of Technology in Assessment Award recognizes projects that have been highly innovative, whether through new technology or through using existing technology in a new and different way.
Read on for more information on the finalists in this Award category at the 2025 International e-Assessment Awards.
Duolingo English Test with A Novel Scalable System for High-Stakes Computerized Adaptive Testing
Project Summary
With the rise of online assessments and large language model-driven item generation, efficient item piloting and calibration are critical. To ensure test security and score validity of large-scale Computerized Adaptive Testing (CAT), item banks must be continuously refreshed. However, traditional piloting and item calibration methods are too resource-intensive, creating bottlenecks in test development.
To address these challenges, we developed a CAT system integrating three innovative components: SPICE (Scalable Parametric Item Calibration Engine), S2 (Soft-Scoring), and A3 (Adaptive Adaptive Administration). SPICE enhances calibration by efficiently handling sparse response data and using NLP-based item features to inform parameter estimates, reducing pilot data needs. S2 enables newly added items to contribute to scores with reduced impact, mitigating reliance on perfect parameter estimates. A3 strategically administers items with higher parameter uncertainty to a broader range of test-takers, reducing standalone pretesting. Together, these components provide a cost-effective solution for high-stakes testing, supporting CATs with large pools of automatically generated items.
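For readers unfamiliar with soft-scoring, the sketch below illustrates the general idea in Python: responses to newly introduced items, whose parameter estimates are still uncertain, contribute to the ability estimate with a reduced weight. This is a minimal illustration of the concept, not the DET's actual scoring algorithm; the 2PL model, the weighting scheme, and all names in the code are assumptions made for this example.

```python
# Minimal sketch of a "soft-scoring" idea (not Duolingo's implementation):
# items whose parameters are still uncertain contribute to the ability
# estimate with a reduced weight. Uses a 2PL IRT model; the weighting
# scheme and all names below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def soft_score(responses, a, b, weights):
    """Estimate ability theta from weighted item log-likelihoods.

    responses : 0/1 array of scored responses
    a, b      : item discrimination and difficulty estimates
    weights   : per-item weights in [0, 1]; well-calibrated items get 1.0,
                newly introduced items with uncertain parameters get less.
    """
    def neg_log_lik(theta):
        p = p_correct(theta, a, b)
        ll = responses * np.log(p) + (1 - responses) * np.log(1 - p)
        return -np.sum(weights * ll)

    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Toy example: the last two items are new and only half-weighted.
responses = np.array([1, 1, 0, 1, 0, 1])
a = np.array([1.2, 0.8, 1.0, 1.5, 0.9, 1.1])
b = np.array([-0.5, 0.0, 0.3, 0.8, -1.0, 0.2])
weights = np.array([1.0, 1.0, 1.0, 1.0, 0.5, 0.5])
print(round(soft_score(responses, a, b, weights), 2))
```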
We successfully used this system to introduce thousands of new vocabulary items to the Duolingo English Test (DET) while maintaining score validity. Joint calibration of six item types processed over 38 million responses in five hours, demonstrating scalability and readiness for large-scale implementation.
Commenting on being a finalist, Duolingo said, “We are deeply honored to be shortlisted as a finalist for this year’s E-Assessment Award. This recognition is a testament to the dedication, creativity, and collaboration of the entire team behind the development of S2A3—an innovation that reimagines how large-scale, high-stakes adaptive testing can be delivered in a fast, fair, and scalable way.
At Duolingo, our mission is to make high-quality education accessible to everyone. The Duolingo English Test reaches learners around the world—many of whom rely on it for life-changing opportunities in higher education and beyond. Being recognized for the S2A3 system affirms the value of building smarter, more equitable tests that meet people where they are.
S2A3 enables us to deploy and score new test items in real time, ensuring that the DET remains secure, adaptive, and inclusive—even when collecting millions of test-taker responses across hundreds of thousands of test takers and tens of thousands of test items. This innovation represents a significant leap forward not just for our test, but for the future of e-assessment as a whole.
We’re especially grateful to the interdisciplinary team of psychometricians, applied linguistic researchers, AI research scientists, software engineers, and product leaders who made this work possible, as well as the test-takers, institutions, and partners who continue to inspire us to raise the bar in assessment design.
Being shortlisted motivates us to keep pushing the boundaries of what’s possible in adaptive testing—ensuring that rapid innovation and rigorous validity can go hand in hand. We’re excited to share what we’ve built and to learn from the amazing work being done across the e-assessment community.
Thank you for this incredible recognition.”
Excelsoft Technologies and the Singapore Examinations and Assessment Board (SEAB) with Revolutionizing Exam Delivery: A Hybrid Resilient Solution for Seamless & Inclusive Assessments
Project Summary
The Singapore Examinations and Assessment Board (SEAB), under the Ministry of Education, oversees national and non-national exams across 400 schools, supporting 80,000 annual registrations. Committed to maintaining high educational standards and advancing Singapore’s vision as a global education hub, SEAB strives to meet the diverse needs of all learners.
Traditional exam methods that relied solely on internet connectivity posed challenges to exam integrity and candidate performance. Similarly, delivering national-level tests across 400 schools traditionally required a solution with local installation and administration at each school to ensure uninterrupted testing. Additionally, language proficiency testing in English, German, French, Spanish, and Japanese required a more flexible and efficient approach to conducting oral exams at scale.
To address these challenges, SEAB undertook a macroscale transformation, transitioning to a web-based, cloud-hosted assessment solution. This innovative system integrates cloud computing with offline functionality, enabling seamless exam delivery across all environments. Support for online, offline, and hybrid exam modes ensured uninterrupted sessions, safeguarded candidate responses, and enabled secure data management. Notably, the delivery of eOral exams incorporated offline mechanisms that improved efficiency across schools and the nation at large.
Commenting on being a finalist, Excelsoft said, “Achieving a place on the e‑Assessment Awards 2025 shortlist marks a defining moment for Excelsoft and the Singapore Examinations and Assessment Board (SEAB). Advancing to the finals affirms that our solution stands among the year’s most innovative solutions in large‑scale, high‑stakes assessment.
For Excelsoft, the recognition aligns with our mission to make assessment smarter, fairer, and universally accessible. It strengthens stakeholder confidence, propels our AI‑driven analytics roadmap, and underscores our commitment to inclusive design.
This milestone also honours every contributor: SEAB’s visionary leaders, educators, IT teams, and our product and implementation specialists, whose collaboration turned complex national requirements into a seamless, future‑ready solution.
We thank SEAB for its trust and the award jury for acknowledging the project’s impact. This shortlisting motivates us to push boundaries further—ensuring every candidate, regardless of context or connectivity, can excel through transparent, reliable, and inclusive assessments.”
RM Assessment and NCFE with How NCFE used Adaptive Comparative Judgement to further enhance a commitment to assessment innovation and excellence
Project Summary
NCFE used RM Compare to improve how it assessed applications for two competitions, the NCFE Assessment Innovation Fund and the NCFE Aspiration Awards.
Together we were able to show the transformative potential of this new approach by delivering significant improvements in efficiency, reliability and stakeholder engagement in two critical, live evaluation processes.
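For context, adaptive comparative judgement replaces absolute marking with many pairwise "which is better?" decisions, which are then converted into a ranked scale, typically with a Bradley-Terry style model. The sketch below is a minimal illustration of that conversion step in Python; it is not RM Compare's implementation, and the data format and fitting routine are assumptions made for this example.

```python
# Minimal sketch (not RM Compare's implementation) of the comparative-
# judgement idea: judges make pairwise "which is better?" decisions, and a
# Bradley-Terry model converts those decisions into a ranked scale.
# The data format and fitting routine here are illustrative assumptions.
import numpy as np

def fit_bradley_terry(n_items, comparisons, n_iter=200, lr=0.1):
    """comparisons: list of (winner_index, loser_index) pairs."""
    scores = np.zeros(n_items)
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for winner, loser in comparisons:
            p_win = 1.0 / (1.0 + np.exp(scores[loser] - scores[winner]))
            grad[winner] += 1.0 - p_win   # push the winner up
            grad[loser] -= 1.0 - p_win    # push the loser down
        scores += lr * grad
        scores -= scores.mean()           # fix the scale's origin
    return scores

# Toy example: four applications, item 0 wins most of its comparisons.
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 1), (2, 3)]
print(np.round(fit_bradley_terry(4, pairs), 2))
```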
Commenting on being a finalist, RM Assessment said, “Being shortlisted as a finalist is a truly significant moment for everyone at RM Assessment and for our product, RM Compare. This is the 3rd time we have been nominated in the past 4 years – we must be onto something!!
It’s a powerful affirmation of our commitment to pushing the boundaries of what’s possible in educational assessment through thoughtful and effective technology integration. This recognition underscores the hard work, creativity, and dedication that our team pours into developing solutions that genuinely address the evolving needs of educators and learners.
To be acknowledged amongst such a strong field of innovators is incredibly inspiring. This shortlisting validates our vision for RM Compare as a tool that not only streamlines assessment processes but also unlocks deeper insights into student performance through comparative judgement. The potential impact of winning this award extends beyond mere prestige. It would significantly amplify our visibility within the e-assessment community, bolstering confidence among our existing stakeholders – from schools and colleges to awarding organisations. Furthermore, it would undoubtedly open doors to new partnerships and opportunities, accelerating our mission to empower educators with innovative and reliable assessment solutions.
We are immensely grateful to the e-Assessment Association for this opportunity to showcase RM Compare on such a prominent platform. This achievement would not have been possible without the unwavering commitment of our talented team at RM Assessment, whose expertise and passion drive our innovation. We also extend our sincere appreciation to our valued partners at NCFE. Their collaboration and belief in the transformative potential of RM Compare have been instrumental in reaching this milestone. We look forward with anticipation to the awards ceremony and remain incredibly proud of what we have accomplished together.”
Sentira XR with Transforming Assessment Through AI-Driven VR Technology
Project Summary
Sentira XR is revolutionising assessment by integrating Artificial Intelligence (AI) with Virtual Reality (VR) to create immersive, scalable, and fair competency evaluations.
Traditional assessment methods in medical education often struggle with subjectivity, scalability, and lack of real-world applicability. Sentira XR overcomes these challenges by offering AI-driven, interactive simulations that assess clinical reasoning, decision-making, and communication skills in a high-fidelity, risk-free environment.
Our innovation lies in the dual application of AI: (1) AI-driven virtual patients, powered by natural language processing, enable authentic interactions, and (2) AI-based assessment analytics ensure objective and data-driven evaluation. Unlike conventional methods, our approach captures both procedural accuracy and soft skills in a dynamic, measurable way.
Trialled in partnership with leading institutions, our platform has demonstrated significant improvements in learner engagement, confidence, and competency development. Institutions benefit from cost savings, reduced examiner bias, and sustainability advantages. Designed for broad scalability, Sentira XR is adaptable across medical, corporate, and compliance training sectors.
By embedding accessibility, ethical AI principles, and robust analytics, Sentira XR sets a new benchmark for AI-powered assessment, transforming training methodologies and ensuring fair, reliable, and future-proof evaluation processes.
Commenting on being a finalist, Sentira XR said, “Being shortlisted is an exciting step for Sentira XR. Our AI-driven VR platform combines artificial intelligence and virtual reality to create an innovative solution for competency assessment in healthcare education. This recognition is a reflection of the effort we’ve put into developing a scalable, objective, and realistic assessment tool for medical training.
Innovation is key to what we do, and this acknowledgment encourages us to continue exploring new ways to improve healthcare education through technology. It also reinforces our commitment to making competency assessments more accurate and accessible, supporting the development of the next generation of healthcare professionals.
We appreciate the support from our team, partners, and the institutions that have helped test and refine our platform. This recognition highlights the collaborative effort that has gone into making this platform a success.
Looking ahead, this shortlisting motivates us to continue enhancing our solution and expanding its impact on healthcare training, as we work to bring AI and VR-driven assessments to more institutions and industries.”
For more information on all finalists in the 2025 International e-Assessment Awards, visit our finalists webpage.