Mateusz Malinowski

DeepMind

Names

How do you usually write your name as the author of a paper? Also add any other names you have published under.

Mateusz Malinowski (Preferred)

Emails

Enter email addresses associated with all of your current and historical institutional affiliations, your previous publications, and the Toronto Paper Matching System. This information is crucial for deduplicating users and ensuring you see your reviewing assignments.

****@mpi-inf.mpg.de, ****@google.com, ****@deepmind.com

Education & Career History

Enter your education and career history. The institution domain is used for conflict of interest detection and institution ranking. For ongoing positions, leave the end field blank.

Research Scientist
DeepMind (deepmind.com)
2017–Present
 
PhD student
Saarland Informatics Campus, Max-Planck Institute (mpi-inf.mpg.de)
2011–2016
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people that should be included when detecting conflicts of interest.

Coworker
Peter Battaglia
****@google.com
2017–Present
 
Coworker
Carl Doersch
****@cs.cmu.edu
2017–Present
 
Coauthor
Peter Battaglia
****@google.com
2017–Present
 
Coauthor
Mario Fritz
****@eecs.berkeley.edu
2016–2016
 
Coauthor
Zeynep Akata
****@mpi-inf.mpg.de
2016–2016
 
Coauthor
Bernt Schiele
****@mpi-inf.mpg.de
2016–2016
 
Coauthor
Andreas Bulling
****@mpi-inf.mpg.de
2016–2016
 
Coauthor
Mario Fritz
****@mpi-inf.mpg.de
2016–2016
 
Coauthor
Marcus Rohrbach
****@mpi-inf.mpg.de
2015–2016
 
Coauthor
Marcus Rohrbach
****@berkeley.edu
2015–2016
 
PhD Advisor
Mario Fritz
****@mpi-inf.mpg.de
2011–2016
 
Advisor
Mario Fritz
****@mpi-inf.mpg.de
2011–2016
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing.

language and vision
Present
 
visual question answering
Present
 
representation learning
Present
 
scalable training
Present