Liveness Detection Competition 3rd Edition

Competition Description

LivDet-Face 2026 is the third competition in the LivDet-Face series. It is designed to provide:

  • An independent assessment of the current state of the art in Face Presentation Attack Detection (PAD) algorithms.
  • A standardized evaluation protocol, including datasets of spoof and bona fide face images and videos, that researchers can continue to use after the competition to compare their methods against LivDet-Face baselines and winning submissions.

LivDet-Face 2026 was included in the official IJCB 2026 competition list.

The competition consists of two parts:

Part 1: Algorithms
This part evaluates PAD solutions submitted by participants. The evaluation is performed by the organizers on a sequestered dataset that remains unknown to the competitors.

Part 2: Systems
This part involves the systematic testing of submitted face recognition systems using physical presentation attack artifacts presented to the sensors.

Participants may compete in either part or in both parts. A separate winner will be announced for each part based on the best overall performance. Evaluation will follow the metrics recommended in ISO/IEC 30107-3.

The process to enter the competition will be as follows:

Step 1. Register: send in your team's information.
Step 2. Receive sample validation data after registration.
Step 3. Submit your algorithms for evaluation (one for image and one for video).
Step 4. Processing: the organizers execute the testing protocol.
Step 5. Performance is assessed by the organizers.
Step 6. Results are announced and one winner is selected for each category.
Step 7. Publication: the teams with the three best-performing methods will be invited to submit a joint paper summarizing the competition and briefly describing the PAD methods.

Important Dates

  • Registration Opens: 7th March 2026
  • Submission Deadline: 5th May 2026
  • Results Announcement: 18th May 2026
  • Paper Summary: 21st May 2026

How to Participate

1) Register your team
2) Review submission guidelines
3) Train & validate your algorithm or system
4) Submit final runnable package by the deadline (5th May 2026).

Announcements

Registration is now open.

The Registration and Submission Guide is now available.

Results and IJCB Paper Submission

Winners

Each competition part (Part 1: Algorithms and Part 2: Systems) will have a separate winner.

Evaluation will follow the metrics recommended in ISO/IEC 30107-3. For both parts, a threshold of 50.0 will be used to calculate the APCER and BPCER error rates.

Competition Results – Part 1 & 2

A detailed breakdown of the competition results will be shared with all participants after evaluation.

The final results and performance comparisons will also be presented in the official IJCB competition paper.

IJCB Paper Submission

All winners will be invited to co-author the LivDet-Face competition paper submitted to IJCB.

Depending on the number and quality of submissions, additional teams with strong performance may also be invited to contribute, particularly in cases where performance differences between top teams are small.

The goal is to produce a comprehensive paper that benefits both the research community and the competition participants.

Frequently Asked Questions (FAQ)

Is a Win32 application required?

Not strictly. Other formats are possible with prior approval from the LivDet team; participants are advised to follow the submission guidelines.

Should the submission be a single executable?

Not necessarily. The submission can be delivered as a ZIP file containing the executable and any additional required files such as models, configuration files, or supporting resources.
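For instance, a submission package might be organized along these lines (a purely illustrative layout; the file names are assumptions, not requirements):

    submission.zip
    ├── pad_detector.exe    (main executable)
    ├── models/
    │   └── weights.bin     (trained model files)
    ├── config.yaml         (runtime configuration)
    └── README.txt          (invocation instructions)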

What format should the software be submitted in?

The application should ideally be provided as a Docker container image containing the complete runnable system, which can be executed directly from a Windows terminal or command prompt.
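For illustration only, an evaluation run might look like the following (the image name, mounted paths, and output convention are hypothetical assumptions, not part of the official protocol):

    REM Build the team's submission image from its Dockerfile
    docker build -t livdet_pad_submission .

    REM Score one probe image; the container is assumed to print a
    REM PAD score in [0, 100] to stdout
    docker run --rm -v C:\livdet\data:/data livdet_pad_submission --input /data/probe_001.png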

What types of presentation attacks will be encountered?

Participants should expect both image-based and video-based presentation attacks.

What should the PAD score look like?

The PAD score should range from 0 to 100. A threshold of 50 will be used (a minimal decision sketch follows the list):

  • scores ≥ 50 → Live
  • scores < 50 → Spoof
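A minimal decision-rule sketch in Python (illustrative only; the function name and usage are assumptions, not a required interface):

    def classify(pad_score: float) -> str:
        """Map a PAD score in [0, 100] to a decision using the
        competition threshold of 50 (scores of 50 or above count as Live)."""
        if not 0.0 <= pad_score <= 100.0:
            raise ValueError("PAD score must lie in [0, 100]")
        return "Live" if pad_score >= 50.0 else "Spoof"

    print(classify(72.4))  # -> Live
    print(classify(18.0))  # -> Spoof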

Will the results be public?

Participants may choose to submit under their real name or a code name. Anonymous participation is allowed. Participants will receive their evaluated results, and high-performing teams may be invited to co-author the competition publication.

What metrics will be used for evaluation?

Evaluation will include the following metrics (a short computation sketch follows the list):

  • BPCER – Bona Fide Presentation Classification Error Rate
  • APCER – Attack Presentation Classification Error Rate
  • ACER – Average Classification Error Rate
  • IAPAR – Impostor Attack Presentation Acceptance Rate
  • FRR – False Rejection Rate (reported only if both matching and PAD analyses are conducted)
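For reference, here is a sketch in Python of how the threshold-based error rates can be computed from per-sample scores. Variable names are illustrative; APCER is pooled over all attack types for simplicity (ISO/IEC 30107-3 reports it per attack instrument); IAPAR and FRR are omitted because they additionally depend on the face matcher:

    def pad_error_rates(scores, labels, threshold=50.0):
        """Compute APCER, BPCER, and ACER from PAD scores in [0, 100].

        labels holds the ground truth for each sample, either
        "bonafide" or "attack". A score >= threshold means Live.
        """
        attacks  = [s for s, l in zip(scores, labels) if l == "attack"]
        bonafide = [s for s, l in zip(scores, labels) if l == "bonafide"]
        # APCER: fraction of attack presentations accepted as bona fide
        apcer = sum(s >= threshold for s in attacks) / len(attacks)
        # BPCER: fraction of bona fide presentations rejected as attacks
        bpcer = sum(s < threshold for s in bonafide) / len(bonafide)
        # ACER: mean of the two error rates
        acer = (apcer + bpcer) / 2.0
        return apcer, bpcer, acer

    # Toy example: one of two attacks passes, both bona fide samples pass
    print(pad_error_rates(
        [60.0, 30.0, 80.0, 55.0],
        ["attack", "attack", "bonafide", "bonafide"]))
    # -> (0.5, 0.0, 0.25)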