Sung Ju Hwang

Korea Advanced Institute of Science and Technology

Names

How do you usually write your name as the author of a paper? Also add any other names you have authored papers under.

Sung Ju Hwang

Emails

Enter the email addresses associated with all of your current and historical institutional affiliations, all of your previous publications, and the Toronto Paper Matching System. This information is crucial for deduplicating users and for ensuring that you see your reviewing assignments.

****@gmail.com, ****@unist.ac.kr, ****@kaist.ac.kr, ****@cs.utexas.edu

Education & Career History

Enter your education and career history. The institution domain is used for conflict-of-interest detection and institution ranking. For ongoing positions, leave the end field blank.

Associate Professor
Korea Advanced Institute of Science and Technology (kaist.ac.kr)
2020–Present
 
Co-founder
AITRICS (aitrics.com)
2016–Present
 
Assistant Professor
Korea Advanced Institute of Science and Technology (kaist.ac.kr)
2017–2019
 
Assistant Professor
UNIST (unist.ac.kr)
2014–2017
 
Postdoctoral Researcher
Disney Research, Disney (disneyresearch.com)
2013–2014
 
PhD Student
University of Texas at Austin (utexas.edu)
2008–2013
 

Advisors, Relations & Conflicts

Enter all advisors, co-workers, and other people who should be included when detecting conflicts of interest.

Postdoc Advisor
Leonid Sigal
****@disneyresearch.com
2013–2014
 
Coauthor
Guang-tong Zhou
****@sfu.ca
2011–2014
 
Coauthor
Leonid Sigal
****@disneyresearch.com
2011–2014
 
Coauthor
Kristen Grauman
****@cs.utexas.edu
2011–2014
 
Coauthor
Ashish Kapoor
****@microsoft.com
2011–2014
 
Coauthor
Greg Mori
****@cs.sfu.ca
2011–2014
 
Coauthor
Fei Sha
****@usc.edu
2011–2014
 
PhD Advisor
Fei Sha
****@usc.edu
2010–2013
 
PhD Advisor
Kristen Grauman
****@cs.utexas.edu
2008–2013
 

Expertise

For each line, enter comma-separated keyphrases representing an intersection of your interests. Think of each line as a query for papers in which you have expertise and interest. For example: deep learning, RNNs, dependency parsing.

deep learning
2013–Present
 
multi-task learning
2010–Present
 
continual learning
2010–Present