Lucas Beyer

Google Brain

Names

How do you usually write your name as the author of a paper? Also add any other names under which you have authored papers.

Lucas Beyer

Emails

Enter the email addresses associated with all of your current and historical institutional affiliations, with all of your previous publications, and with the Toronto Paper Matching System. This information is crucial for deduplicating users and for ensuring you see your reviewing assignments.

****@gmail.com, ****@vision.rwth-aachen.de, ****@google.com

Education & Career History

Enter your education and career history. The institution domain is used for conflict-of-interest detection and institution ranking. For ongoing positions, leave the end field blank.

Researcher
Google Brain (google.com)
2018 – Present
 
PhD student
RWTH Aachen University (rwth-aachen.de)
2013 – 2018
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people who should be included when detecting conflicts of interest.

Coworker
Stefan Breuers
****@vision.rwth-aachen.de
Present
Advisor
Bastian Leibe
****@vision.rwth-aachen.de
2013 – Present
 
PhD Advisor
Bastian Leibe
****@vision.rwth-aachen.de
2013 – 2018
 
Coworker
Alexander Hermans
****@vision.rwth-aachen.de
2013 – 2018
 
Coworker
Stefan Breuers
****@vision.rwth-aachen.de
2013 – 2018
 
Coworker
Marco Andreetto
****@google.com
2016 – 2017
 
Coauthor
Bastian Leibe
****@umic.rwth-aachen.de
2015 – 2016
 
Coauthor
Bastian Leibe
****@vision.rwth-aachen.de
2015 – 2016
 
Coauthor
Alexander Hermans
****@vision.rwth-aachen.de
2015 – 2016
 
Coauthor
Stefan Breuers
****@vision.rwth-aachen.de
2015
 
Coauthor
Umer Rafi
****@vision.rwth-aachen.de
2015
 
Coauthor
Hayley Hung
****@tudelft.nl
2015
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing

weakly supervised learning
Present
 
semi-supervised learning
Present
 
self-supervised learning, unsupervised learning, representation learning
Present
 
metric learning, embeddings
Present
 
task adaptation, transfer learning, pre-training, fine-tuning
Present
 
architecture, image model, neural network design
Present