Abstract: One of the most effective domain adaptation (DA) techniques decomposes the data
representation into a domain-independent representation (DIRep) and a domain-dependent
representation (DDRep). A classifier is trained on the DIRep of the labeled source
images. Since the DIRep is domain invariant, the classifier can be “transferred” to make
predictions for the target domain with no (or few) labels. However, information useful for
classification in the target domain can “hide” in the DDRep. Current DA algorithms, such
as Domain Separation Networks (DSN), do not adequately address this issue: DSN enforces
only a weak orthogonality constraint between the DIRep and the DDRep, which permits this
hiding effect and can result in poor performance. To address this shortcoming, we develop
a new algorithm that imposes a stronger constraint to minimize the information content of
the DDRep, creating a DIRep that retains the information relevant to the target labels and,
in turn, yields a better invariant representation. Using synthetic datasets, we show explicitly
that, depending on the initialization, DSN, with its weaker constraint, can converge to
sub-optimal solutions with poorer DA performance. In contrast, our algorithm is robust to
such initialization effects. We demonstrate the equal-or-better performance of our approach against
DSN and other recent DA methods by using several standard benchmark image datasets. We
further highlight the compatibility of our algorithm with pre-trained models for classifying
real-world images and showcase its adaptability and versatility through its application in
network intrusion detection.
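To make the decomposition concrete, the sketch below shows one plausible way the idea described in the abstract could be instantiated: a shared encoder splits each input into a DIRep and a DDRep, a classifier is trained on the DIRep of labeled source images only, and the information content of the DDRep is penalized. The abstract does not specify the penalty mechanism, so the VAE-style KL term, the reconstruction loss, the layer sizes, and all names (SplitEncoder, training_loss, etc.) are illustrative assumptions rather than the paper's actual implementation.

```python
# Minimal sketch (assumed details, not the paper's code): split encoder with a
# classifier on the DIRep and a KL penalty that squeezes information out of the DDRep.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitEncoder(nn.Module):
    def __init__(self, in_dim=784, rep_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.to_direp = nn.Linear(256, rep_dim)          # domain-independent part
        self.to_ddrep_mu = nn.Linear(256, rep_dim)       # domain-dependent part (mean)
        self.to_ddrep_logvar = nn.Linear(256, rep_dim)   # domain-dependent part (log-variance)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.to_ddrep_mu(h), self.to_ddrep_logvar(h)
        ddrep = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.to_direp(h), ddrep, mu, logvar

encoder = SplitEncoder()
classifier = nn.Linear(64, 10)  # label predictor sees only the DIRep
decoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 784))  # reconstructs from DIRep + DDRep

def training_loss(x_src, y_src, x_tgt, beta=1.0):
    """Source classification + reconstruction in both domains + KL penalty on the DDRep
    (the KL penalty is an assumed stand-in for minimizing DDRep information content)."""
    losses = []
    for x, y in ((x_src, y_src), (x_tgt, None)):
        direp, ddrep, mu, logvar = encoder(x)
        recon = decoder(torch.cat([direp, ddrep], dim=1))
        losses.append(F.mse_loss(recon, x))  # both representations together must explain the input
        losses.append(beta * -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean())  # shrink DDRep
        if y is not None:
            losses.append(F.cross_entropy(classifier(direp), y))  # labels available only in the source domain
    return sum(losses)

# Example call with random stand-in data:
x_src, y_src = torch.randn(8, 784), torch.randint(0, 10, (8,))
x_tgt = torch.randn(8, 784)
training_loss(x_src, y_src, x_tgt).backward()
```

With this kind of objective, any label-relevant signal pushed into the DDRep incurs the information penalty, which is the intuition behind forcing it back into the DIRep; the specific loss weights and architecture here are placeholders.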
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
- New title
- New abstract
- Revised version based on reviews and requested changes
- A diff version that highlights the changes from the original
Assigned Action Editor: ~changjian_shui1
Submission Number: 3174