Caglar Gulcehre

DeepMind

Names

How do you usually write your name as author of a paper? Also add any other names you have authored papers under.

Caglar Gulcehre (Preferred), Çağlar Gülçehre, Çaglar Gülçehre

Emails

Enter email addresses associated with all of your current and historical institutional affiliations, as well as all your previous publications, and the Toronto Paper Matching System. This information is crucial for deduplicating users and ensuring you see your reviewing assignments.

****@gmail.com, ****@umontreal.ca, ****@iro.umontreal.ca, ****@google.com

Education & Career History

Enter your education and career history. The institution domain is used for conflict of interest detection and institution ranking. For ongoing positions, leave the end field blank.

Research Scientist
DeepMind (google.com)
2017–Present
 
Intern
Maluuba (maluuba.com)
2016–2017
 
PhD student
University of Montreal (umontreal.ca)
2012–2017
 
Microsoft
Microsoft (microsoft.com)
2016–2016
 
IBM
International Business Machines (ibm.com)
2015–2016
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people that should be included when detecting conflicts of interest.

Coworker
Nando de Freitas
****@google.com
2017–Present
 
Coworker
Matt Hoffman
****@google.com
2017–Present
 
Coworker
Tom Le Paine
****@google.com
2017–Present
 
PhD Advisor
Yoshua Bengio
****@gmail.com
2012–2018
 
Coauthor
Samira Ebrahimi Kahou
****@gmail.com
2013–2013
Coauthor
Christopher J Pal
****@polymtl.ca
2013–2013
Coauthor
Pierre Froumenty
****@polymtl.ca
2013–2013
Coauthor
Xavier Bouthillier
****@umontreal.ca
2013–2013
Coauthor
Roland Memisevic
****@iro.umontreal.ca
2013–2013
Coauthor
Pascal Vincent
****@iro.umontreal.ca
2013–2013
Coauthor
Aaron Courville
****@umontreal.ca
2013–2013

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing

multiagent deep reinforcement learning
2016–Present
 
reinforcement learning, imitation learning, demonstrations, attention models
2014–Present
 
deep learning
2011–Present
 
nlp, natural language understanding
2011–2019
 
optimization
2012–2017
 
cognitive science, cognitive neuroscience
2009–2011