2025 Award Finalists: Best Research

The Best Research Award honours outstanding research and its impact within the e-assessment community.

Read on for more information on the finalists in this Award category at the 2025 International e-Assessment Awards.

Education Quality and Accountability Office (EQAO) with Leveraging Distractor Analysis in Large-Scale e-Assessment to Develop Instructional Supports for Improving Student Mathematics Achievement

Project Summary

In Ontario, Canada’s most populous province, declining mathematics performance on provincial assessments has highlighted the need for targeted interventions. In response, Education Quality and Accountability Office (EQAO), responsible for Ontario’s large-scale assessments, conducted groundbreaking research to explore how distractor analysis—an established psychometric technique examining incorrect multiple-choice responses—could identify student misconceptions and inform instructional strategies.

Traditionally used for item refinement, distractor analysis was repurposed to pinpoint student misunderstandings and guide pedagogical interventions. By analyzing responses from nearly 400,000 students in the 2022-2023 provincial assessment, four key areas of difficulty were identified: Proportional Reasoning, Algebraic Reasoning, Computational Thinking, and Mathematical Language. These themes were mapped to publicly released items, providing educators with actionable tools to address specific learning gaps in their classrooms.
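The summary describes distractor analysis only at a high level. For readers unfamiliar with the technique, the sketch below shows the basic mechanics: for each item, compute the share of students choosing each option and each option’s point-biserial correlation with total score, which together indicate how popular a distractor is and whom it attracts. All data and column names here are hypothetical; EQAO’s actual methodology is not detailed in this announcement.

```python
# Rough sketch of a classical distractor analysis in Python (pandas).
# Illustrative only: the data and column names are hypothetical and do not
# reflect EQAO's actual pipeline or results.
import pandas as pd

# One row per student response to a multiple-choice item:
# the option chosen, the keyed (correct) option, and the student's total score.
responses = pd.DataFrame({
    "item_id":       ["M01"] * 8,
    "chosen_option": ["A", "B", "B", "C", "B", "D", "A", "B"],
    "key":           ["B"] * 8,
    "total_score":   [12, 31, 28, 15, 33, 10, 14, 29],
})

for item_id, item in responses.groupby("item_id"):
    print(f"Item {item_id}")
    for option, chosen in item.groupby("chosen_option"):
        share = len(chosen) / len(item)  # how many students picked this option
        # Point-biserial correlation between "picked this option" (0/1) and
        # total score: the key should correlate positively, while a popular
        # distractor with only a weak negative correlation may signal a
        # misconception shared even by otherwise capable students.
        picked = (item["chosen_option"] == option).astype(float)
        r_pb = picked.corr(item["total_score"])
        role = "key" if option == chosen["key"].iloc[0] else "distractor"
        print(f"  {option} ({role}): chosen by {share:.0%}, r_pb = {r_pb:+.2f}")
```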

Through various outreach activities, EQAO has widely shared these findings with the Ministry of Education, school boards, and educators, receiving enthusiastic feedback for their practical applicability. This research demonstrates how distractor analysis can serve as a bridge between large-scale assessments and classroom instruction. By leveraging e-assessment analytics to directly support instructional practices, this study reinforces the role of large-scale assessment as more than an evaluation mechanism—it is a vital resource for enhancing student achievement.

Commenting on being a finalist, Education Quality and Accountability Office said, “We are honoured to be shortlisted for the 2025 International e-Assessment Association’s Best Research Award. This recognition is deeply meaningful to our team at the Education Quality and Accountability Office (EQAO) and affirms the growing value of research that bridges large-scale assessment and classroom instruction.

Our study, which explored how distractor analysis from e-assessment data can be used to support mathematics instruction, was driven by a clear need voiced by educators. Teachers across Ontario, Canada have consistently asked for more actionable insight into teaching and student learning, particularly for students performing just below the provincial standard. Being shortlisted for this award highlights that this line of research, which translates assessment analytics into accessible, classroom-ready tools, resonates with the broader e-assessment community and supports a shift toward more instructional and student-centered use of data.

This recognition also shines a light on the collaborative effort behind the research. From the data analysts, psychometricians, and researchers who prepared and analyzed response data from over 400,000 students, to the educators and outreach staff who helped shape and validate the instructional implications and recommendations, this work reflects the contributions of a dedicated and multidisciplinary team. We are especially grateful for the feedback and engagement from teachers, school leaders, government staff, and families across Ontario who continue to show enthusiasm for using this research in practical and meaningful ways.

As EQAO continues to evolve its role in supporting student learning, this recognition reinforces our commitment to developing assessment tools and insights that are equitable, actionable, and aligned with curriculum goals. We thank the International e-Assessment Association and the judging panel for this opportunity, and we remain excited to contribute to the global dialogue on how assessment can meaningfully enhance teaching and learning.”

RM with Learner Experience: Now and Opinions about the Future – Key insights from a global study on digital assessment

Project Summary

RM’s report “Learner Experience: Now and Opinions about the Future” found:

  • For learners, the future of assessment is digital. 59% of learners prefer digital assessment compared to just 22% for pen & paper.
  • Digital assessment is more attractive to learners than traditional assessments, influencing the subject/school they choose. 64% of the 1,716 learners surveyed said they’d be more likely to choose a course if assessed digitally.
  • Digital assessment has changed the learning experience of students positively. There’s a high level of confidence (68%) that learners can be accurately assessed digitally and can better prepare for working in an increasingly digital world.
  • Learners taking professional qualifications are more likely to embrace digital assessment than those taking general qualifications. Learners taking professional qualifications find workplace simulations and functioning/live spreadsheets significantly more useful (51%) than other tools assessing vocational skills.
  • Most learners want access to digital assessment platforms for up to 2 years before taking high-stakes exams digitally. 72% of learners felt they should have access to digital assessment platforms for up to 2 years prior to taking exams digitally.
  • AI & Automation will transform creation and administration of assessments, according to 69% of learners.

Commenting on being a finalist, RM said, “Being shortlisted for the 2025 e-Assessment Association’s Best Research Award for our report ‘Learner Experience: Now and Opinions about the Future’ is a moment of immense pride for everyone involved. This recognition highlights the dedication and hard work of the team who gave their expertise, time and energy to help us create research questions that would really resonate with the wider e-assessment community and make this project a success. It’s incredibly motivating to see our collective efforts acknowledged in this way.

The Learner Experience report was more than just a research exercise – it was a chance to truly listen to learners and understand their experiences in a changing educational landscape. While we are fortunate to have many in-house experts who bring deep knowledge of the market, we knew that validating assumptions with solid, data-led insights was essential. This approach ensured that our findings weren’t just grounded in expert opinion, but backed by meaningful evidence gathered from the people at the heart of the learning journey.

Being shortlisted reinforces the importance of combining expert knowledge with data-driven decision making. It reflects our ongoing commitment to evidence-based practice and to delivering impact that genuinely supports learners and the wider education community, backing up our mission to enrich the lives of learners across the world.

This recognition is a powerful reminder of what we can achieve when we collaborate across teams and apply rigour to our research. It encourages us to keep asking the right questions, challenging our assumptions, and striving to improve the learner experience through thoughtful, informed work.

We’re honoured to be in the company of other great research initiatives and look forward to sharing more about the impact of this work as it continues to influence our thinking and our approach.”

University of Stirling/Queen Margaret University with Exploring the practice of remote proctored assessments and their imagined futures

Project Summary

Digital surveillance in higher education (HE) is becoming increasingly common, particularly in the context of digital assessments. In this study, digital surveillance is defined as the collection, analysis, and use of data generated from the interactions between humans and technology in educational settings. This qualitative study explores these interactions, emphasising that technology is not neutral; the categories that are assigned and collected as data are also not neutral. Technology influences pedagogical practices, while those practices, in turn, shape the technology. This research examines the relationships between people and technology, how these relationships impact teaching practices, and the ethical issues that emerge from them. Technologies and human actions are interconnected and can possess inherent biases. Understanding this perspective helps us see how digital surveillance in HE can reinforce power dynamics and may produce unexpected consequences for marginalised groups.

This study uses posthuman methodologies, which recognise both human and non-human actors (such as technologies) as integral to understanding social interactions. This approach allows us to explore the complex ways technology and people influence one another. The research aims to provide insights into how assessment technology shapes educational practices and to facilitate discussions about the ethical implications, both now and in the future, through speculative futures.

Commenting on being a finalist, University of Stirling/Queen Margaret University said, “Being shortlisted for the eAssessment Award is a tremendous honour, both personally and professionally. This recognition acknowledges several years of sustained research into digital assessment practices. To have this research recognised at this stage is not only affirming, but it also offers a valuable opportunity to enhance its impact.

Receiving this award would significantly support the dissemination of findings, foster new connections within the education and digital assessment sectors, and strengthen the research trajectory as it continues to evolve. As both a learning technologist and a researcher, I view this recognition as a milestone that validates the relevance and timeliness of the work being undertaken.

I am deeply grateful to my doctoral supervisors, whose guidance has been instrumental in shaping my approach. I am thankful to my employer, Queen Margaret University, for affording me the space to integrate my research into practice and to reflect on the ways learning technology, AI and assessment are used in, and shape, practice. I also wish to acknowledge the University of Stirling, which has provided a supportive environment to explore critical questions around digital assessment and pursue my Doctorate in Education.

Importantly, this shortlisting also highlights the institutional support and collaboration that have underpinned the research. It is a privilege to be affiliated with both the University of Stirling and Queen Margaret University. Recognition at this level reflects not only individual effort but also the shared commitment of both institutions to advancing research-informed practice in digital education.”

Yunus Emre Enstitüsü with Exploring LLMs’ Potential in Automated Essay Scoring Amid Human Rater Conflicts

Project Summary

This study examines the potential of artificial intelligence in resolving scoring discrepancies in educational assessments, focusing on a guided writing task from a language proficiency test. In one exam administration, 1,824 individuals completed a test comprising four sections: reading, listening, writing, and speaking. The writing section included two tasks—a guided task and an independent task—with our research targeting the guided task specifically. From this pool, we identified 50 cases where two human raters exhibited significant disagreement, assigning scores differing by at least 2.5 points out of 10. To investigate AI’s capability in such scenarios, we will employ OpenAI’s ChatGPT-4o with a zero-shot, rubric-based approach to score these 50 essays. ChatGPT’s scores will be compared against those of a third rater—a human expert with extensive experience—who resolved the initial conflicts. By analyzing ChatGPT’s performance against this expert benchmark, the study aims to assess AI’s accuracy, reliability, and potential as a tool for standardizing essay evaluation, offering valuable insights into its role in enhancing fairness and consistency in essay scoring.
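The summary names the ingredients (GPT-4o, zero-shot prompting, a scoring rubric, comparison against an adjudicating third rater) without showing the pipeline. The sketch below is one plausible way to wire those pieces together using OpenAI’s Python client; the rubric wording, JSON output format, and placeholder scores are assumptions for illustration, not the study’s actual materials.

```python
# Minimal sketch of zero-shot, rubric-based essay scoring with an LLM,
# followed by a comparison against the adjudicating human expert.
# Illustrative only: rubric text, prompt, and data are hypothetical.
import json
import numpy as np
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Score the essay from 0 to 10 in 0.5-point steps, considering task "
    "fulfilment, organization, vocabulary, and grammar. Reply with JSON: "
    '{"score": <number>, "rationale": "<one sentence>"}'
)

def score_essay(essay_text: str) -> float:
    """Ask the model for a rubric-based score with no exemplar essays (zero-shot)."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # reduce run-to-run score variation
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return float(json.loads(resp.choices[0].message.content)["score"])

# essays: the 50 conflict cases; expert: the third rater's adjudicated scores.
essays = ["...essay 1...", "...essay 2...", "...essay 3..."]  # placeholders
expert = np.array([6.5, 4.0, 7.5])                            # placeholders
model = np.array([score_essay(e) for e in essays])

print("Mean absolute difference:", np.abs(model - expert).mean())
print("Pearson r:", np.corrcoef(model, expert)[0, 1])
```

In practice one would also report a chance-corrected agreement statistic (for example, quadratically weighted kappa on the discretized scores) and rerun the scoring to check its stability.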

Commenting on being a finalist, Yunus Emre Enstitüsü said, “We are honored to have our research, ‘Exploring LLMs’ Potential in Automated Essay Scoring Amid Human Rater Conflicts,’ shortlisted for an award at the e-assessment conference. This recognition highlights the growing importance of leveraging large language models to address challenges in fair and reliable essay scoring, particularly in low-resource languages. We are grateful for the opportunity to contribute to advancing e-assessment and look forward to sharing insights with the global e-assessment community.”

For more information on all finalists in the 2025 International e-Assessment Awards, visit our finalists webpage.
