PSI

When seeing is no longer believing: Rethinking identity verification in high-stakes assessment

Lesinda Leightley

Business Development Director

8th April 2026


As assessment delivery continues to expand beyond traditional test centres, organisations face a familiar challenge: how to be confident that the person taking a test or exam is genuinely who they claim to be.

Historically, identity verification relied on relatively straightforward controls. A candidate might present a government-issued ID to an invigilator or upload identification documents for verification before testing. And for many years, these approaches provided sufficient assurance.

However, advances in artificial intelligence are reshaping the threat landscape. Technologies capable of generating convincing synthetic media – including deepfakes, manipulated facial images, and AI-generated identity credentials – are becoming more sophisticated, widely accessible, and increasingly commercialised. The 2025 Identity Fraud Report from the Entrust Cybersecurity Institute found that a deepfake fraud attempt occurred roughly every five minutes in 2024, illustrating the speed at which identity-based attacks are evolving.

In this environment, visual proof alone can no longer be treated as definitive evidence of identity. For high-stakes assessment programmes, this shift has important implications. Verification can no longer be viewed as a single checkpoint at the beginning of an exam session. Instead, it must establish a trusted baseline identity and then form part of a broader, layered approach to identity assurance that operates across the candidate journey.

From static checks to layered assurance

Most digital identity verification processes still rely on a sequence of static checks: document capture, facial matching, and approval to proceed to the assessment. On their own, however, these checks are increasingly insufficient in an environment where identity manipulation is becoming easier and more scalable.

As a result, identity assurance frameworks are moving towards a layered model, where multiple signals build confidence in a candidate's identity. These signals may include biometric analysis, liveness detection, device intelligence, behavioural patterns, and environmental indicators captured during the testing session.

The objective is not simply to confirm identity once, but to continuously assess risk throughout the candidate journey.

For example, biometric matching can confirm that a candidate resembles the person shown on an identity document. Device intelligence can help identify anomalies such as unexpected device changes or suspicious configurations. Behavioural indicators, such as unusual patterns of interaction, can provide additional signals that something might not be right.

Individually, these indicators may not provide conclusive evidence. Combined, they enable a more robust and adaptive model of identity assurance.
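To illustrate how several weak signals can add up to a strong one, here is a minimal sketch of weighted signal fusion. The signal names, weights, and threshold are illustrative assumptions, not PSI's actual model:

```python
from dataclasses import dataclass

# Hypothetical signal scores in [0, 1], where higher means more risk.
@dataclass
class IdentitySignals:
    biometric_mismatch: float   # 1 minus face-match confidence
    device_anomaly: float       # unexpected device or configuration changes
    behavioural_anomaly: float  # unusual interaction patterns

# Illustrative weights; a real system would calibrate these empirically.
WEIGHTS = {
    "biometric_mismatch": 0.5,
    "device_anomaly": 0.3,
    "behavioural_anomaly": 0.2,
}

def risk_score(signals: IdentitySignals) -> float:
    """Combine individual signals into a single weighted risk score."""
    return (
        WEIGHTS["biometric_mismatch"] * signals.biometric_mismatch
        + WEIGHTS["device_anomaly"] * signals.device_anomaly
        + WEIGHTS["behavioural_anomaly"] * signals.behavioural_anomaly
    )

# One weak signal alone stays below a 0.3 alert threshold...
low = risk_score(IdentitySignals(0.2, 0.0, 0.0))
# ...but several weak signals together cross it.
combined = risk_score(IdentitySignals(0.2, 0.4, 0.5))
```

The point of the sketch is the aggregation itself: no single input is conclusive, but the combination supports a confident decision.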

Responding to emerging threats such as deepfakes

A widely discussed emerging risk is the use of deepfake technology to impersonate candidates. AI-generated or manipulated video can potentially be used to create convincing facial representations that bypass basic visual checks.

Addressing this risk requires more sophisticated detection capabilities. Modern detection models analyse subtle inconsistencies in facial movements, lighting, compression artefacts, and other indicators that can signal synthetic or manipulated media.

These technologies are most effective when incorporated into a broader risk-based framework that evaluates multiple signals together.

Balancing security and candidate experience

While strengthening identity assurance is essential, it must be balanced with accessibility and fairness. High-stakes assessments often serve candidates across diverse regions, devices, and connectivity environments. Overly intrusive or complex verification processes can create unnecessary barriers for legitimate test takers.

A fit-for-purpose approach considers both security risk and candidate experience. Risk-based orchestration can help ensure that candidates encounter additional verification steps only when necessary, while maintaining a streamlined experience for the majority.
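Risk-based orchestration of this kind can be pictured as a simple mapping from a risk score to the checks a candidate actually encounters. The step names and thresholds below are hypothetical, included only to make the idea concrete:

```python
def verification_steps(risk: float) -> list[str]:
    """Return the verification steps for a candidate at a given risk level.

    Thresholds and step names are illustrative assumptions, not a real policy.
    """
    steps = ["document_capture", "facial_match"]  # baseline for every candidate
    if risk >= 0.3:
        steps.append("liveness_check")            # step up on moderate risk
    if risk >= 0.6:
        steps.append("live_proctor_review")       # escalate on high risk
    return steps
```

Most candidates see only the baseline flow; additional friction is introduced only where the signals justify it.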

Clear communication also plays an important role. Candidates should understand why identity verification steps exist and what is expected of them before their exam day.

Identity verification as part of a wider security strategy

Identity verification should be viewed as one component of a broader test security framework. Effective programmes typically combine identity assurance with other controls such as secure browsers, proctoring, anomaly detection, and post-session analysis.

Together, these layers create a more resilient defence against emerging forms of impersonation and misconduct.

As assessment continues to evolve, identity assurance must evolve with it. By moving beyond single-point verification and adopting layered, risk-based approaches, organisations can strengthen trust while continuing to deliver accessible, flexible testing experiences.

At PSI, we’re seeing many programmes begin to adopt this layered approach to identity assurance as part of their broader test security strategy. As the threat landscape continues to change, collaboration between assessment bodies, technology partners, and the wider assessment community will be essential to understanding emerging threats and responding to them effectively.

To find out more, visit www.psiexams.com/test-owners/test-security.
