Efficient face anti-spoofing via head-aware transformer based knowledge distillation with 5 MB model parameters

Published: 01 Jan 2024 (Last Modified: 20 Jul 2025) · Appl. Soft Comput. 2024 · CC BY-SA 4.0
Abstract: Highlights
• A head-aware Transformer based on knowledge distillation (HaTFAS) is proposed for face anti-spoofing (FAS).
• Logit-level and feature-level knowledge distillation are combined in HaTFAS.
• A head-aware strategy is designed to resolve the dimension mismatch of the middle encoders.
• Extensive experiments show that HaTFAS achieves results on par with or superior to existing methods.
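As a rough illustration of how logit-level and feature-level distillation can be combined under a dimension mismatch between teacher and student encoders, here is a minimal PyTorch sketch. It is not the paper's head-aware formulation: the class name, the linear projection used to align feature dimensions, and the temperature/weight parameters are all hypothetical.

```python
import torch.nn as nn
import torch.nn.functional as F


class CombinedKDLoss(nn.Module):
    """Generic combined logit-level + feature-level distillation loss.

    Hypothetical sketch: KL divergence between temperature-softened
    teacher/student logits, plus an MSE term on intermediate features
    after a linear projection that bridges the teacher/student
    dimension mismatch (a stand-in for the paper's head-aware strategy).
    """

    def __init__(self, student_dim, teacher_dim, temperature=4.0, alpha=0.5):
        super().__init__()
        self.temperature = temperature
        self.alpha = alpha  # balance between logit and feature terms
        # Project student features into the teacher's feature space.
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, s_logits, t_logits, s_feat, t_feat):
        T = self.temperature
        # Logit-level KD: KL between softened class distributions,
        # scaled by T^2 to keep gradient magnitudes comparable.
        kd_logit = F.kl_div(
            F.log_softmax(s_logits / T, dim=-1),
            F.softmax(t_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Feature-level KD: match projected student features to the
        # (frozen) teacher features.
        kd_feat = F.mse_loss(self.proj(s_feat), t_feat.detach())
        return self.alpha * kd_logit + (1.0 - self.alpha) * kd_feat
```

In practice this term would be added to the ordinary supervised FAS loss; the projection layer is one common way to let a small student (hence the ~5 MB parameter budget) learn from a larger teacher whose intermediate encoders have different widths.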