MIKE - Multi-task Implicit Knowledge Embeddings by Autoencoding through a Shared Input Space

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Representation Learning, Joint Embedding, Multi-task Learning, Knowledge Transfer, Distillation, Transfer Learning
Abstract: In this work, we introduce a method of learning Multi-task Implicit Knowledge Embeddings (MIKE) from a set of source (or "teacher") networks by autoencoding through a shared input space. MIKE uses an autoencoder to produce a reconstruction of a given input that is optimized to induce the same activations in the source networks as the original input. This yields an encoder that takes inputs in the same format as the source networks and maps them to a latent semantic space representing the patterns in the data that are salient to the source networks. We present the results of our first experiments, which use 11 segmentation tasks derived from the COCO dataset and demonstrate the basic feasibility of MIKE.
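The following minimal PyTorch sketch illustrates the objective described above. It is our reading of the abstract, not the authors' code: all module names, architectures, and hyperparameters are illustrative assumptions, toy random convolutional networks stand in for the source ("teacher") networks (in the paper, segmentation networks for 11 COCO-derived tasks), and for simplicity the sketch matches each teacher's final output rather than intermediate activations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the MIKE objective: an autoencoder reconstructs an input so that
# frozen teacher networks produce the same activations on the reconstruction
# as on the original input. Architectures here are illustrative placeholders.

class MIKE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: shared input space -> latent semantic space.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, latent_dim, 3, stride=2, padding=1),
        )
        # Decoder: latent space -> reconstruction in the shared input space.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def mike_loss(model, teachers, x):
    """Sum of activation-matching losses across the frozen teachers."""
    x_hat, _ = model(x)
    loss = x.new_zeros(())
    for teacher in teachers:
        with torch.no_grad():
            target = teacher(x)          # activations on the original input
        # Gradients flow through the frozen teacher into x_hat and the model.
        loss = loss + F.mse_loss(teacher(x_hat), target)
    return loss

# Toy teachers standing in for the source networks.
teachers = [nn.Conv2d(3, 8, 3, padding=1) for _ in range(3)]
for t in teachers:
    t.eval()
    for p in t.parameters():
        p.requires_grad_(False)          # teachers stay frozen throughout

model = MIKE()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(4, 3, 64, 64)            # stand-in batch of inputs
opt.zero_grad()
loss = mike_loss(model, teachers, x)
loss.backward()
opt.step()
print(float(loss))
```

Because gradients flow through the frozen teachers into the reconstruction, the encoder is pressured to preserve exactly the input structure the teachers respond to, which is what makes the latent space a multi-task knowledge embedding.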
One-sentence Summary: MIKE - Multi-task Implicit Knowledge Embeddings by Autoencoding through a Shared Input Space