Beliz Gunel

Stanford University

Names

How do you usually write your name as an author of a paper? Also add any other names you have authored papers under.

Beliz Gunel

Emails

Enter the email addresses associated with all of your current and historical institutional affiliations, your previous publications, and the Toronto Paper Matching System. This information is crucial for deduplicating users and ensuring you see your reviewing assignments.

****@stanford.edu

Education & Career History

Enter your education and career history. The institution domain is used for conflict of interest detection and institution ranking. For ongoing positions, leave the end field blank.

PhD student
Stanford University (stanford.edu)
2017–2022
 
Undergrad student
University of California, Berkeley (berkeley.edu)
2013–2017
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people that should be included when detecting conflicts of interest.

Coauthor
Sandeep Tata
****@google.com
2020–Present
 
Coauthor
Marc Najork
****@google.com
2020–Present
 
Coauthor
Navneet Potti
****@google.com
2020–Present
 
Coauthor
Jingfei Du
****@fb.com
2020–Present
 
Coauthor
Alexis Conneau
****@fb.com
2020–Present
 
Coauthor
Ves Stoyanov
****@fb.com
2020–Present
 
Coauthor
James Wendt
****@google.com
2020–Present
 
Coauthor
Chenguang Zhu
****@microsoft.com
2019–Present
 
Coauthor
Michael Zeng
****@microsoft.com
2019–Present
 
Coauthor
Xuedong Huang
****@microsoft.com
2019–Present
 
Coauthor
Fred Sala
****@stanford.edu
2018–Present
 
Coauthor
Albert Gu
****@stanford.edu
2018–Present
 
Coauthor
Steven Conolly
****@berkeley.edu
2014–Present
 
Coauthor
Patrick Goodwill
****@berkeley.edu
2014–Present
 
PhD Advisor
John Pauly
****@stanford.edu
2020–2022
 
PhD Advisor
Christopher Ré
****@cs.stanford.edu
2018–2020
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing

machine learning systems
2020–Present
 
representation learning
2018–Present
 
natural language processing
2018–Present
 
medical imaging
2014–Present