Karol Hausman

Stanford University

Names

How do you usually write your name as the author of a paper? Also add any other names under which you have authored papers.

Karol Hausman (Preferred)

Emails

Enter email addresses associated with all of your current and historical institutional affiliations, your previous publications, and the Toronto Paper Matching System. This information is crucial for deduplicating users and ensuring you see your reviewing assignments.

****@gmail.com, ****@google.com

Education & Career History

Enter your education and career history. The institution domain is used for conflict of interest detection and institution ranking. For ongoing positions, leave the end field blank.

Adjunct Professor
Stanford University (stanford.edu)
2021 – Present
 
Research Scientist
Google Brain (google.com)
2018 – Present
 
PhD student
University of Southern California (usc.edu)
2013 – 2018
 
MS student
Technical University of Munich (tum.de)
2011 – 2013
 
MS student
Warsaw University of Technology (pw.edu.pl)
2007 – 2012
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people who should be included when detecting conflicts of interest.

Coworker
Chelsea Finn
****@cs.stanford.edu
2018 – 2021
 
Coworker
Sergey Levine
****@eecs.berkeley.edu
2017 – 2021
 
Coworker
Jeannette Bohg
****@stanford.edu
2017 – 2021
 
PhD Advisor
Gaurav Sukhatme
****@usc.edu
2012 – 2018
 
PhD Advisor
Martin Riedmiller
****@google.com
2017 – 2017
 
Coworker
Nicolas Heess
****@google.com
2017 – 2017
 
Coworker
Jost Tobias Springenberg
****@google.com
2017 – 2017
 
PhD Advisor
Sergey Levine
****@eecs.berkeley.edu
2016 – 2017
 
PhD Advisor
Gaurav Sukhatme
****@usc.edu
2013 – 2017
 
PhD Advisor
Stefan Schaal
****@usc.edu
2013 – 2017
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing.

Imitation Learning, Learning from Demonstrations
2016 – Present
 
Deep Reinforcement Learning
2016 – Present
 
Deep Reinforcement Learning, Robotics
2014 – Present
 
Robot Learning
2013 – Present