Keywords: Unsupervised domain adaptation; Object detection; Vision-language model
TL;DR: We propose a novel Domain-Aware Adapter (DA-Ada) to exploit both domain-invariant and domain-specific knowledge from vision-language models for domain adaptive object detection.
Abstract: Domain adaptive object detection (DAOD) aims to generalize detectors trained on an annotated source domain to an unlabelled target domain.
Since vision-language models (VLMs) provide essential general knowledge on unseen images, freezing the visual encoder and inserting a domain-agnostic adapter allows the detector to learn domain-invariant knowledge for DAOD.
However, the domain-agnostic adapter is inevitably biased towards the source domain.
It therefore discards some beneficial knowledge that is discriminative on the unlabelled target domain, i.e., domain-specific knowledge of the target domain.
To solve the issue, we propose a novel Domain-Aware Adapter (DA-Ada) tailored for the DAOD task.
The key idea is to exploit the domain-specific knowledge lying between the essential general knowledge and the domain-invariant knowledge.
DA-Ada consists of a Domain-Invariant Adapter (DIA) for learning domain-invariant knowledge and a Domain-Specific Adapter (DSA) for injecting domain-specific knowledge recovered from the information discarded by the visual encoder.
Comprehensive experiments over multiple DAOD tasks show that DA-Ada can efficiently infer a domain-aware visual encoder for boosting domain adaptive object detection.
Our code is available at https://github.com/Therock90421/DA-Ada.
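To make the adapter layout concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a frozen encoder stage wrapped with a DIA on its output and a DSA fed by the information the stage discards. The bottleneck design, the use of the input-output residual as a proxy for the discarded information, and all names (`Adapter`, `DAAdaBlock`, `hidden`) are illustrative assumptions, not the authors' released implementation; see the repository above for that.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck adapter (illustrative)."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DAAdaBlock(nn.Module):
    """Hypothetical wrapper placing DIA and DSA around one frozen encoder stage.

    DIA adds a domain-invariant residual to the stage output, while DSA
    models domain-specific knowledge from what the frozen stage discards,
    approximated here by the input-output residual (an assumption).
    """

    def __init__(self, stage: nn.Module, dim: int, hidden: int = 64):
        super().__init__()
        self.stage = stage
        for p in self.stage.parameters():  # keep the VLM visual encoder frozen
            p.requires_grad_(False)
        self.dia = Adapter(dim, hidden)  # Domain-Invariant Adapter
        self.dsa = Adapter(dim, hidden)  # Domain-Specific Adapter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.stage(x)
        discarded = x - y  # proxy for information filtered out by the frozen stage
        return y + self.dia(y) + self.dsa(discarded)


# Usage: wrap a frozen transformer block operating on (batch, tokens, dim) features.
block = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
wrapped = DAAdaBlock(block, dim=256)
out = wrapped(torch.randn(2, 196, 256))  # only DIA/DSA parameters receive gradients
```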
Primary Area: Machine vision
Submission Number: 6320