How to Learn and Represent Abstractions: An Investigation using Symbolic Alchemy

25 Feb 2022 (modified: 03 Nov 2024) · AutoML 2022 (Late-Breaking Workshop)
Abstract: Alchemy is a new meta-learning environment rich enough to contain interesting abstractions, yet simple enough to make fine-grained analysis tractable. Further, Alchemy provides an optional symbolic interface that enables meta-RL research without a large compute budget. In this work, we take the first steps toward using Symbolic Alchemy to identify design choices that enable deep-RL agents to learn various types of abstraction. Then, using a variety of behavioral and introspective analyses, we investigate how our trained agents use and represent abstract task variables, and find intriguing connections to the neuroscience of abstraction. We conclude by discussing the next steps for using meta-RL and Alchemy to better understand the representation of abstract variables in the brain.
Keywords: Symbolic Alchemy, Meta-RL, Abstraction, Neuroscience
One-sentence Summary: Using a variety of behavioral and introspective analyses, we investigate how our trained agents use and represent abstract task variables in Symbolic Alchemy.
Track: Main track
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Badr AlKhamissi, balkhamissi@aucegypt.edu
CPU Hours: 0
GPU Hours: 0
TPU Hours: 0
Evaluation Metrics: No
Class Of Approaches: Meta Reinforcement Learning
Datasets And Benchmarks: Symbolic Alchemy
Main Paper And Supplementary Material: pdf
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/how-to-learn-and-represent-abstractions-an/code)