Baoyuan Wu

The Chinese University of Hong Kong, Shenzhen

Names

How do you usually write your name as author of a paper? Also add any other names you have authored papers under.

Baoyuan Wu (Preferred)

Emails

Enter email addresses associated with all of your current and historical institutional affiliations, as well as all your previous publications and the Toronto Paper Matching System. This information is crucial for deduplicating users and ensuring you see your reviewing assignments.

****@gmail.com, ****@kaust.edu.sa, ****@tencent.com, ****@cuhk.edu.cn

Education & Career History

Enter your education and career history. The institution domain is used for conflict of interest detection and institution ranking. For ongoing positions, leave the end field blank.

Associate Professor
The Chinese University of Hong Kong, Shenzhen (cuhk.edu.cn)
2020–Present

Principal Research Scientist
Tencent (tencent.com)
2019–2020

Senior Research Scientist
Tencent (tencent.com)
2016–2018

Postdoc
KAUST (kaust.edu.sa)
2014–2016

PhD Student
Institute of Automation, Chinese Academy of Sciences (nlpr.ia.ac.cn)
2009–2014
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people that should be included when detecting conflicts of interest.

Coauthor
Yiming Li
****@mails.tsinghua.edu.cn
2020–2023
 
Coauthor
Bernard Ghanem
****@kaust.edu.sa
2015–2016
 
Coauthor
Siwei Lyu
****@cs.albany.edu
2011–2016
 
Coauthor
Siwei Lyu
****@albany.edu
2011–2014
 
Coauthor
Shangfei Wang
****@ustc.edu.cn
2011–2014
 
Coauthor
Qiang Ji
****@ecse.rpi.edu
2011–2014
 
Coauthor
Yifan Zhang
****@nlpr.ia.ac.cn
2011–2014
 
Coauthor
Qiang Ji
****@rpi.edu
2011–2014
 
Coauthor
Bao-gang Hu
****@gmail.com
2011–2014
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you would have expertise and interest. For example: deep learning, RNNs, dependency parsing

backdoor learning
2019–2023
 
adversarial examples
2018–2023
 
model compression
2018–2021
 
multi-label learning
2014–2018