Roman Pogodin

McGill/Mila

Names

How do you usually write your name as the author of a paper? Also add any other names you have authored papers under.

Roman Pogodin (Preferred)

Emails

Enter email addresses associated with all of your current and historical institutional affiliations, all your previous publications, and the Toronto Paper Matching System. This information is crucial for deduplicating users and ensuring you see your reviewing assignments.

****@ucl.ac.uk, ****@gmail.com, ****@mila.quebec, ****@mail.mcgill.ca, ****@mcgill.ca

Education & Career History

Enter your education and career history. The institution domain is used for conflict of interest detection and institution ranking. For ongoing positions, leave the end field blank.

Postdoc
McGill/Mila (mila.quebec)
2023–2024
 
PhD student
University College London (ucl.ac.uk)
2017–2022
 
Undergrad student
Moscow Institute of Physics and Technology (phystech.edu)
2013–2017
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people that should be included when detecting conflicts of interest.

Postdoc Advisor
Blake Richards
****@mcgill.ca
2023–2024
 
Postdoc Advisor
Guillaume Lajoie
****@umontreal.ca
2023–2024
 
PhD Advisor
Peter E. Latham
****@gatsby.ucl.ac.uk
2017–2022
 
Coauthor
Yash Mehta
****@gmail.com
2020–2021
 
Coauthor
Timothy Lillicrap
****@gmail.com
2020–2021
 
Coauthor
Yazhe Li
****@google.com
2020–2021
 
Coauthor
Arthur Gretton
****@gmail.com
2020–2021
 
Coauthor
Danica Sutherland
****@gmail.com
2020–2021
 
Coauthor
Tor Lattimore
****@gmail.com
2018–2019
 
Internship advisor
Wulfram Gerstner
****@epfl.ch
2016–2019
 
Coauthor
Dane Corneil
****@corneil.ca
2016–2019
 
Coauthor
Alexander Seeholzer
****@gmail.com
2016–2019
 
Coauthor
Joseph Heng
****@gmail.com
2016–2019
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing

self-supervised learning
2021–2024
 
theoretical neuroscience, biologically plausible deep learning, plasticity models, Hebbian learning
2018–2024
 
theoretical neuroscience, working memory models, Hopfield networks, attractor networks
2016–2020