Joongbo Shin

LG AI Research

Names

Joongbo Shin (Preferred)

Emails

****@snu.ac.kr, ****@gmail.com, ****@lgresearch.ai

Education & Career History

Researcher, LG AI Research (lgresearch.ai), 2021–Present
PhD student, Seoul National University (snu.ac.kr), 2014–2021

Advisors, Relations & Conflicts

Coauthor: Franck Dernoncourt, ****@adobe.com, 2021–Present
Coauthor: Doo Soon Kim, ****@adobe.com, 2021–Present
Coauthor: Trung Bui, ****@adobe.com, 2021–Present
Coauthor: Yoonhyung Lee, ****@snu.ac.kr, 2019–Present
Coauthor: Meeyoung Cha, ****@kaist.ac.kr, 2019–Present
Coauthor: Kunwoo Park, ****@gmail.com, 2019–Present
Coauthor: Hongjun Lim, ****@gmail.com, 2019–Present
Coauthor: Hwanhee Lee, ****@snu.ac.kr, 2017–Present
Coauthor: Seunghyun Yoon, ****@snu.ac.kr, 2015–Present
PhD Advisor: Kyomin Jung, ****@snu.ac.kr, 2014–Present
Coauthor: Yanghoon Kim, ****@snu.ac.kr, 2014–Present
Coauthor: Seungpil Won, ****@snu.ac.kr, 2014–Present

Expertise

dialogue modeling (2021–Present)
language modeling (2018–Present)
representation learning (2018–Present)
self-supervised learning (2018–Present)
unsupervised learning (2018–Present)
transfer learning (2018–Present)
transformers (2018–Present)
end-to-end text-to-speech (2018–Present)
natural language processing (2016–Present)
deep learning (2015–Present)