Implementation of Parallel Backpropagation for Shared-Feature Visualization

Please follow these steps to reproduce the main results of the paper.

1. Download stimuli and model weights from https://www.dropbox.com/scl/fi/egho3tgbzoxryc2zxwowo/submission_data.zip?rlkey=r8jl9bbl58b3pi2r2oekepqmv&st=368ba2sn&dl=0
2. Unzip and move the files to ../parallel_backpropagation/submission_data
3. Navigate to ../parallel_backpropagation
4. pip install -r requirements.txt
        While preparing the codebase, we noticed that on some machines a bug prevents this command from completing.
        In that case, please remove the lines referring to PyTorch from the requirements file and run the
        command again, then install PyTorch directly. The appropriate command can be found at
        https://pytorch.org/get-started/locally/
        For Windows with CUDA 11.8, the command is
        pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
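For reference, removing the PyTorch lines can be scripted. This is only a sketch: the sample requirements content below is illustrative (the repo's actual requirements.txt will differ), and `requirements_no_torch.txt` is a hypothetical filename.

```shell
# Illustrative only: build a sample requirements file so the filter can be
# demonstrated; in the repo, apply the grep to the existing requirements.txt.
printf 'numpy\ntorch==2.1.0\ntorchvision\ntorchaudio\nmatplotlib\n' > requirements.txt

# Keep every line that does not start with "torch" (drops torch, torchvision,
# torchaudio), then write the result to a new requirements file.
grep -v '^torch' requirements.txt > requirements_no_torch.txt
cat requirements_no_torch.txt
```

Afterwards, `pip install -r requirements_no_torch.txt` installs the remaining packages, and PyTorch itself is installed with the command from https://pytorch.org/get-started/locally/ as described above.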

5. python generate_correlation_plots.py
        to evaluate pretrained models for all recording sessions

6. python evaluate_on_objects.py -d day_05_03_24
        to evaluate a pretrained model for a single recording session

7. python train_and_evaluate.py -d day_05_03_24
        to run the entire pipeline of training, evaluation, and visualization for one recording session.
        Names of recording sessions can be found in submission_data/spike_data.

8. python parallel_backpropagation.py -d day_05_03_24
        to run the visualization pipeline for one recording session using a pretrained model

9. python evaluate/post_analysis/global_analysis/plot_obj_responses.py
        to reproduce the plot of responses to the strongly activating objects shown in the visualizations


Plots will be saved to plots/...
Stimuli, recording data, and model weights can be found in submission_data/...
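The run steps above (5 through 9) can be collected into a small shell helper. This is a sketch, not part of the repo: it assumes the repo root is the working directory and submission_data/ is already in place, `run_pipeline` is a hypothetical name, and the commands are exactly those listed in the steps.

```shell
#!/usr/bin/env bash
# Sketch: wrap reproduction steps 5-9 in one function. The default session
# name is the example used in the steps above; other session names are
# listed in submission_data/spike_data.
run_pipeline() {
  local session="${1:-day_05_03_24}"
  python generate_correlation_plots.py                                  # step 5: all sessions
  python evaluate_on_objects.py -d "$session"                           # step 6: one session
  python train_and_evaluate.py -d "$session"                            # step 7: full pipeline
  python parallel_backpropagation.py -d "$session"                      # step 8: visualization
  python evaluate/post_analysis/global_analysis/plot_obj_responses.py  # step 9: object responses
}

# Uncomment to run from the repo root:
# run_pipeline day_05_03_24
```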