Venkatesh Saligrama

Boston University

Names

How do you usually write your name as author of a paper? Also add any other names you have authored papers under.

Venkatesh Saligrama

Emails

Enter email addresses associated with all of your current and historical institutional affiliations, as well as all your previous publications, and the Toronto Paper Matching System. This information is crucial for deduplicating users, and ensuring you see your reviewing assignments.

****@bu.edu

Education & Career History

Enter your education and career history. The institution domain is used for conflict of interest detection and institution ranking. For ongoing positions, leave the end field blank.

Full Professor
Boston University (bu.edu)
1900–2019
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people that should be included when detecting conflicts of interest.

Coauthor
Alex Olshevsky
****@bu.edu
Present
 
Coauthor
Pierre-Marc Jodoin
****@usherbrooke.ca
2008–2016
 
Coauthor
Alfred Hero
****@eecs.umich.edu
2011–2014
 
Coauthor
Yuting Chen
****@gmail.com
2011–2014
 
Coauthor
Ziming Zhang
****@bu.edu
2011–2014
 
Coauthor
Ziming Zhang
****@gmail.com
2011–2014
 
Coauthor
Tolga Bolukbasi
****@bu.edu
2011–2014
 
Coauthor
Gregory Castanon
****@bu.edu
2011–2014
 
Coauthor
Ziming Zhang
****@med.unc.edu
2011–2014
 
Coauthor
Rama Chellappa
****@umiacs.umd.edu
2011–2014
 
Coauthor
Joseph Wang
****@bu.edu
2011–2014
 
Coauthor
Shuchin Aeron
****@ece.tufts.edu
2011–2014
 
Coauthor
George Atia
****@ucf.edu
2011–2014
 
Coauthor
Yonggang Shi
****@loni.ucla.edu
2004–2004
 
Coauthor
Yonggang Shi
****@usc.edu
2004–2004
 
Coauthor
Yonggang Shi
****@bit.edu.cn
2004–2004
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing

Selective Classification, Resource Constrained Learning, Efficient Inference, Budget Learning, Learning under Abstention
Present
 
Sample Efficient Learning, Meta-Learning, Domain Adaptation, Few Shot Learning, Zero-Shot Learning
Present