Evaluation Protocol

All submissions will be evaluated using a sequestered test dataset to ensure fair and reproducible benchmarking.

Submission

Participants must submit an algorithm capable of producing a liveness score for each sample. The organizers will run the algorithm on the hidden test set.
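The exact runtime interface is defined by the organizers and is not specified in this section. Purely as an illustrative sketch, a submission could wrap its model behind a single per-sample scoring entry point; all names here, including predict_liveness and the image input format, are hypothetical:

    import numpy as np

    def predict_liveness(image: np.ndarray) -> float:
        """Return a liveness score in [0, 1] for one sample.

        Higher scores indicate a bona fide (live) presentation;
        lower scores indicate a presentation attack. The input
        (an HxWx3 image array) is an assumption for illustration.
        """
        # Placeholder heuristic standing in for a trained model;
        # a real submission would run its classifier here.
        return float(np.clip(image.mean() / 255.0, 0.0, 1.0))

Whatever the actual interface, the key contract is one scalar score per sample, so that the organizers can threshold the scores uniformly across all submissions.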

Evaluation Metrics

  • APCER – Attack Presentation Classification Error Rate: the proportion of attack presentations incorrectly classified as bona fide.
  • BPCER – Bona Fide Presentation Classification Error Rate: the proportion of bona fide presentations incorrectly classified as attacks.
  • ACER – Average Classification Error Rate: the mean of APCER and BPCER (see the sketch after this list).
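To make the three metrics concrete, here is a minimal sketch of how they could be computed from scores and ground-truth labels. The label convention (1 for bona fide, 0 for attack), the score orientation (higher means more likely live), and the decision threshold are assumptions; the organizers' actual operating point is not specified here.

    import numpy as np

    def compute_metrics(scores, labels, threshold=0.5):
        """Compute APCER, BPCER, and ACER at a fixed threshold.

        scores: liveness scores, higher = more likely bona fide.
        labels: 1 for bona fide, 0 for attack presentations.
        threshold: hypothetical decision threshold.
        """
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        predicted_bona_fide = scores >= threshold

        attacks = labels == 0
        bona_fide = labels == 1

        # APCER: fraction of attacks accepted as bona fide.
        apcer = np.mean(predicted_bona_fide[attacks])
        # BPCER: fraction of bona fide samples rejected as attacks.
        bpcer = np.mean(~predicted_bona_fide[bona_fide])
        # ACER: mean of the two error rates.
        acer = (apcer + bpcer) / 2.0
        return apcer, bpcer, acer

For example, compute_metrics(scores=[0.9, 0.2, 0.7, 0.4], labels=[1, 0, 1, 0]) yields APCER = 0.0, BPCER = 0.0, ACER = 0.0, since every score falls on the correct side of the 0.5 threshold. Note that APCER and BPCER trade off against each other as the threshold moves, which is why ACER summarizes both in a single number.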

Fair Evaluation

Participants will not have access to the test dataset or labels prior to evaluation. All scores will be computed by the competition organizers.