Multi-Domain Adversarial Learning

Published: 21 Dec 2018 · Last Modified: 21 Apr 2024 · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains. Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias. This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets in a semi-supervised setting. Our contributions include: i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence; ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation; iii) experimental validation of the approach, improving on the state of the art on two standard image benchmarks and on a novel bioimage dataset, Cell.
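
To give a flavour of the adversarial setup the abstract describes, below is a minimal sketch of domain-adversarial training with a gradient reversal layer (DANN-style), where a label classifier and a domain discriminator share a feature extractor. This is an assumption-laden illustration, not the authors' exact MuLANN loss: the module names, dimensions, and the simple sum of class and domain losses are all placeholders, and the paper's handling of class-asymmetric, semi-supervised domains is not reproduced here.

```python
# Sketch of adversarial multi-domain training with gradient reversal.
# NOT the MuLANN loss; architecture and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class AdversarialMDL(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128, num_classes=10, num_domains=3):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.label_clf = nn.Linear(feat_dim, num_classes)
        self.domain_clf = nn.Linear(feat_dim, num_domains)

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        # Domain head sees reversed gradients, pushing features toward
        # domain invariance while the label head stays discriminative.
        return self.label_clf(z), self.domain_clf(grad_reverse(z, lambd))

# One training step: only labelled examples contribute to the class loss;
# all examples (labelled or not) contribute to the adversarial domain loss.
model = AdversarialMDL()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 784)                  # inputs pooled from several domains
y = torch.randint(0, 10, (32,))           # class labels
d = torch.randint(0, 3, (32,))            # domain labels
labelled = torch.rand(32) < 0.5           # mask of labelled examples

class_logits, domain_logits = model(x, lambd=0.1)
cls_loss = F.cross_entropy(class_logits[labelled], y[labelled])
dom_loss = F.cross_entropy(domain_logits, d)
loss = cls_loss + dom_loss
opt.zero_grad()
loss.backward()
opt.step()
```

For the actual loss handling classes present in only some domains, see the linked repository below.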
Keywords: multi-domain learning, domain adaptation, adversarial learning, H-divergence, deep representation learning, high-content microscopy
TL;DR: Adversarial domain adaptation and multi-domain learning: a new loss to handle multi- and single-domain classes in the semi-supervised setting.
Code: [AltschulerWu-Lab/MuLANN](https://github.com/AltschulerWu-Lab/MuLANN)
Data: [Cell](https://paperswithcode.com/dataset/cell), [MNIST](https://paperswithcode.com/dataset/mnist), [MNIST-M](https://paperswithcode.com/dataset/mnist-m)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1903.09239/code)