Files:
data/: Data used in this work (human text descriptions, language embeddings, program embeddings, train/test boards, hyperparameters).
data/500_gsp_samples_text_human_encoded.py: Human language embeddings 
data/500_gsp_samples_text_synth_encoded.py: Synthetic language embeddings 
data/500_gsp_samples_text_synth.py: Synthetic language descriptions 
data/500_gsp_samples_text_human.py: Human language descriptions 
data/gsp_samples-recognition_activations-structurePenalty2.npz: Program embeddings 
data/500_gsp_samples.npy: Training boards 
data/gsp_4x4_full.npy: Boards from the GSP chain [Taken from https://github.com/sreejank/Abstract_Neural_Metamers]
data/gsp_4x4_full_probs.npy: Frequencies of each board in the GSP chain. [Taken from https://github.com/sreejank/Abstract_Neural_Metamers]
data/gsp_4x4_sample.npy: Test set GSP boards [Taken from https://github.com/sreejank/Abstract_Neural_Metamers]
data/gsp_4x4_sample_starts.npy: Start tiles for test set GSP boards [Taken from https://github.com/sreejank/Abstract_Neural_Metamers]
data/gsp_4x4_null_sample.npy: Test set control boards [Taken from https://github.com/sreejank/Abstract_Neural_Metamers]
data/gsp_4x4_null_sample-starts.npy: Start tiles for test set control boards [Taken from https://github.com/sreejank/Abstract_Neural_Metamers]
data/hyperparams_grounding.pkl: Hyperparams for grounding agents. 
data/hyperparams_nogrounding.pkl: Hyperparams for non-grounding agents. 
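A minimal sketch of how the data files above can be loaded with NumPy and pickle. The file names are taken from this README; the 4x4 binary board format is an assumption based on the GSP task in Kumar et al. 2022, and the demo below uses a synthetic stand-in array rather than the actual data files.

```python
import numpy as np

def load_boards(path):
    """Load a stack of boards stored as a .npy array (assumed shape: (N, 4, 4))."""
    return np.load(path)

# Demo with a synthetic board file standing in for e.g. data/500_gsp_samples.npy:
demo = np.zeros((2, 4, 4), dtype=np.int64)
demo[0, 0, 0] = 1  # mark one tile as part of the pattern
np.save("/tmp/demo_boards.npy", demo)

boards = load_boards("/tmp/demo_boards.npy")
print(boards.shape)  # (2, 4, 4)
```

The `.npz` program embeddings can be opened the same way with `np.load`, which returns a dict-like archive, and the `.pkl` hyperparameter files with `pickle.load` on a file opened in `"rb"` mode.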

auxillary_model.py: Training code for agents with the auxiliary loss.
auxillary_polcy.py: Setup for the agent training code with the auxiliary loss.
small_env_lang_4x4.py: Modified training environment for the grounded meta-RL agent on the GSP task distribution used in Kumar et al. 2022. We modified the environment from: https://github.com/sreejank/Abstract_Neural_Metamers
small_env_4x4.py: Training environment for the original meta-RL agent on the GSP task distribution used in Kumar et al. 2022. We mostly use the same environment as https://github.com/sreejank/Abstract_Neural_Metamers (but do not train on control boards).
task_performance_zscore.py: Code to calculate the task performance metric used in the paper.
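A hedged sketch of a z-scored performance metric. The actual metric is computed in task_performance_zscore.py; normalizing agent scores against a null/control distribution (e.g. scores on the control boards) is an assumption made here for illustration, as are the function name and inputs.

```python
import numpy as np

def performance_zscore(agent_scores, null_scores):
    """Z-score agent performance against a null (control) score distribution.

    Both arguments are 1-D arrays of per-episode scores; the function name and
    this normalization scheme are illustrative assumptions, not the paper's code.
    """
    null_mean = np.mean(null_scores)
    null_std = np.std(null_scores)
    return (np.asarray(agent_scores) - null_mean) / null_std

# Toy usage: two agent scores z-scored against four control scores.
z = performance_zscore([0.9, 0.8], np.array([0.5, 0.6, 0.4, 0.5]))
```

Higher z-values indicate performance further above what the control distribution would predict.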

ec-master/: Program induction code. This is a fork of https://github.com/ellisk42/ec (we have removed our names/identifying information from the files we committed to our fork to maintain anonymity). The vast majority of the code here is from the public repository of DreamCoder (Ellis et al. 2021): https://github.com/ellisk42/ec. Our additions implement DreamCoder in our environment (mostly in dreamcoder/domains/grid, which has its own README).

Important note: We use code from Ellis et al. 2021 (public repo: https://github.com/ellisk42/ec) and code/data from Kumar et al. 2022 (repo: https://github.com/sreejank/Abstract_Neural_Metamers). These works are cited in our paper, and this README indicates which code/data come from those public repositories.
The rest of the code/data is original to this work and is anonymized in this supplementary material.

Works cited:

Ellis, K., Wong, C., Nye, M., Sablé-Meyer, M., Morales, L., Hewitt, L., ... & Tenenbaum, J. B. (2021, June). DreamCoder: Bootstrapping inductive program synthesis with wake-sleep library learning. In Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation (pp. 835-850).

Kumar, S., Dasgupta, I., Marjieh, R., Daw, N. D., Cohen, J. D., & Griffiths, T. L. (2022). Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning. arXiv preprint arXiv:2204.01437.