Abstract: This thesis studies functional representation learning for uncertainty quantification and fast skill transfer. Real-world applications place increasing practical demands on deep learning models; two particular considerations are uncertainty quantification and fast skill transfer. The first can support risk-sensitive decision making and reduce sample complexity in query problems. The second avoids learning from scratch and increases the adaptive capability of deep learning models.
A typical example is the vanilla neural process (NP) (Garnelo et al., 2018b), in which the approximate functional prior q_{\phi}(z|D_C) serves as the functional representation and induces the predictive function distribution E_{q_{\phi}(z|D_C)}[p(y|z, x)]. The vanilla NP is also the functional representation model on which the models and algorithms developed in this thesis are built. Much of our work incorporates structural inductive biases, such as hierarchical Bayes, mixtures of experts, and graph modules, into functional representations. This thesis also rethinks the optimization of vanilla NPs and proposes a new method to bridge the inference gap.
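To make the vanilla NP pipeline concrete, the following is a minimal sketch, assuming untrained random weights in place of learned parameters and toy dimensions not taken from the thesis. It shows how an amortized q_{\phi}(z|D_C) summarizes a context set into a latent z, and how sampling z induces the predictive distribution E_{q_{\phi}(z|D_C)}[p(y|z, x)] via Monte Carlo averaging.

```python
# Hypothetical, minimal vanilla NP forward pass (numpy only, random weights).
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y, d_h, d_z = 1, 1, 8, 4  # toy dimensions (assumed for illustration)

# Stand-in "learned" parameters: one-layer encoder and decoder maps.
W_enc = rng.normal(size=(d_x + d_y, d_h))
W_mu = rng.normal(size=(d_h, d_z))
W_logvar = rng.normal(size=(d_h, d_z))
W_dec = rng.normal(size=(d_z + d_x, d_y))

def q_phi(x_c, y_c):
    """Amortized functional prior q_phi(z | D_C): encode each context pair,
    mean-pool to a permutation-invariant summary, map to Gaussian params."""
    h = np.tanh(np.concatenate([x_c, y_c], axis=-1) @ W_enc)
    r = h.mean(axis=0)                # aggregate over the context set D_C
    return r @ W_mu, r @ W_logvar    # mean and log-variance of z

def predict(x_t, x_c, y_c, n_samples=16):
    """Monte Carlo estimate of E_{q_phi(z|D_C)}[p(y | z, x)]: sample z from
    the functional prior, decode each sample, average the decoder means."""
    mu, logvar = q_phi(x_c, y_c)
    preds = []
    for _ in range(n_samples):
        z = mu + np.exp(0.5 * logvar) * rng.normal(size=d_z)  # reparam. sample
        zx = np.concatenate([np.tile(z, (len(x_t), 1)), x_t], axis=-1)
        preds.append(zx @ W_dec)     # Gaussian decoder mean for each target x
    return np.mean(preds, axis=0)

# Toy context set D_C and target inputs.
x_c, y_c = rng.normal(size=(5, d_x)), rng.normal(size=(5, d_y))
x_t = rng.normal(size=(3, d_x))
y_hat = predict(x_t, x_c, y_c)
print(y_hat.shape)  # (3, 1): one predictive mean per target input
```

Because z is sampled per function rather than per point, each draw of z corresponds to one coherent function over all target inputs, which is what makes the NP a distribution over functions rather than independent pointwise predictions.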