Sanghyuk Chun



I'm a research scientist at NAVER Clova AI Research (CLAIR), working on machine learning and its applications. Before joining NAVER, I worked as a research engineer on the Advanced Recommendation Technology (ART) team at Kakao from 2016 to 2018.

I received a master's degree in Electrical Engineering from Korea Advanced Institute of Science and Technology (KAIST) in 2016. During my master's study, I researched a scalable algorithm for robust subspace clustering. Before my master's study, I worked at IUM-SOCIUS as a software engineering intern in 2012. I also did research internships at the Networked and Distributed Computing Systems Lab at KAIST and at NAVER Labs during summer 2013 and fall 2015, respectively.

Research Interests

My research goal is to understand unexpected and less interpretable behaviors of machine learning (ML) models and to solve such problems in both practical and provable manners. In particular, I focus on four main research topics: (i) understanding ML models through explainable ML; (ii) building models that generalize across different biases, corruptions, or domains; (iii) developing probabilistic models with well-calibrated uncertainty estimates; (iv) overcoming insufficient labeled data by learning with minimal human supervision.



Publications

(C: peer-reviewed conference, W: peer-reviewed workshop, A: arxiv preprint, O: others)
(*: authors contributed equally)

See also my Google Scholar profile.

  • Evaluating Weakly Supervised Object Localization Methods Right.
    • Junsuk Choe*, Seong Joon Oh*, Seongho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim
    • CVPR 2020.
    • [paper] [code] [bibtex]
  • Data-driven Harmonic Filters for Audio Representation Learning.
    • Minz Won, Sanghyuk Chun, Oriol Nieto, Xavier Serra
    • ICASSP 2020.
    • [paper] [bibtex]
  • Neural Approximation of Auto-Regressive Process through Confidence Guided Skimming.
    • YoungJoon Yoo, Sanghyuk Chun, Sangdoo Yun, Jung-Woo Ha, Jaejun Yoo
    • arXiv 2019.
    • [paper] [bibtex]
  • Learning De-biased Representations with Biased Representations.
    • Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, Seong Joon Oh
    • arXiv 2019.
    • [paper] [bibtex]
  • Toward Interpretable Music Tagging with Self-attention.
  • CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features.
  • Photorealistic Style Transfer via Wavelet Transforms.
  • Automatic Music Tagging with Harmonic CNN.
  • An Empirical Evaluation on Robustness and Uncertainty of Regularization methods.
    • Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo
    • ICML Workshop 2019.
    • [paper] [bibtex]
  • Visualizing and Understanding Self-attention based Music Tagging.
  • Where To Be Adversarial Perturbations Added? Investigating and Manipulating Pixel Robustness Using Input Gradients.
    • Jisung Hwang*, Younghoon Kim*, Sanghyuk Chun*, Jaejun Yoo, Ji-Hoon Kim, Dongyoon Han
    • ICLR Workshop 2019.
    • [paper] [bibtex]
~ 2018
  • Multi-Domain Processing via Hybrid Denoising Networks for Speech Enhancement.
  • A Study on Intelligent Personalized Push Notification with User History.
    • Hyunjong Lee, Youngin Jo, Sanghyuk Chun, Kwangseob Kim
    • Big Data 2017.
    • [paper] [bibtex]
  • Scalable Iterative Algorithm for Robust Subspace Clustering: Convergence and Initialization.
    • Master's Thesis, Korea Advanced Institute of Science and Technology, 2016 (advisor: Prof. Jinwoo Shin)
    • [paper] [code]

Industry Experience

NAVER Clova AI Research (2018 ~ Now)
  • Hangul Handwriting Font Generation

    Distributed on Hangul Day (한글날) 2019, [Full font list]

    • Hangul (the Korean alphabet, 한글) consists of only 24 basic letters (ㄱ, ㅋ, ㄴ, ㄷ, ㅌ, ㅁ, ㅂ, ㅍ, ㄹ, ㅅ, ㅈ, ㅊ, ㅇ, ㅎ, ㅡ, ㅣ, ㅗ, ㅏ, ㅜ, ㅓ, ㅛ, ㅑ, ㅠ, ㅕ), but combining them yields 11,172 valid Hangul characters. For example, "한" is a combination of ㅎ, ㅏ, and ㄴ, and "쐰" is a combination of ㅅ, ㅅ, ㅗ, ㅣ, and ㄴ. This makes generating a new Hangul font very expensive and time-consuming. Meanwhile, since 2008, Naver has distributed Korean fonts for free (Nanum fonts, 나눔 글꼴).
    • In 2019, we developed a technology for fully personalized Hangul font generation from only 152 handwritten characters. We opened an event page where users could submit their own handwriting. The full list of generated fonts can be found at [this link].
    • More details were presented at Deview 2019. We have also extended this work to few-shot font generation.
    • [BONUS] You can play with my handwriting here
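    The 11,172 figure above is not specific to our project; it follows from how Unicode lays out precomposed Hangul syllables (19 initial consonants × 21 vowels × 28 finals, including "no final", starting at U+AC00). A minimal sketch of that arithmetic, unrelated to the font-generation model itself:

    ```python
    # Standard Unicode Hangul syllable decomposition:
    # 19 initials x 21 vowels x 28 finals (incl. "no final") = 11,172 syllables.
    CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")                      # 19 initials
    JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")               # 21 vowels
    JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 28 finals

    def decompose(syllable: str):
        """Split one precomposed Hangul syllable into initial, vowel, final."""
        code = ord(syllable) - 0xAC00          # syllable block starts at U+AC00
        cho, rest = divmod(code, 21 * 28)
        jung, jong = divmod(rest, 28)
        return CHOSEONG[cho], JUNGSEONG[jung], JONGSEONG[jong]

    print(len(CHOSEONG) * len(JUNGSEONG) * len(JONGSEONG))  # 11172
    print(decompose("한"))  # ('ㅎ', 'ㅏ', 'ㄴ')
    ```

    Note that this returns compound jamo (e.g. ㅚ) as single units, while the examples above further split them into basic letters.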
  • Emoji Recommendation (LINE Timeline)

    Deployed in Jan. 2019

    • LINE is a major messenger in East Asia (Japan, Taiwan, Thailand, Indonesia, and Korea). In the application, users can buy and use numerous emojis, a.k.a. LINE Stickers.
    • In this project, we recommended emojis to users based on their profile pictures (cross-domain recommendation).
    • I researched and developed the entire pipeline of the cross-domain recommendation system, as well as its operation tools.
Kakao Advanced Recommendation Technology (ART) team (2016 ~ 2018)
  • Recommender Systems (Kakao services)

    Feb. 2016 - Feb. 2018

    • I developed and maintained a large-scale real-time recommender system (Toros [PyCon Talk] [AI Report]) for various services in Daum and Kakao. I mainly worked on content-based representation modeling (for textual, visual, and musical data), collaborative filtering, user embedding, user clustering, and a ranking system based on multi-armed bandits.
    • Textual domain: Daum News similar article recommendation, Brunch (blog service) similar post recommendation, Daum Cafe (community service) hit item recommendation.
    • Visual domain: Daum Webtoon and Kakao Page similar item recommendation, video recommendation for a news article (cross-domain recommendation).
    • Musical domain: music recommendation for Kakao Mini (smart speaker), Melon, and Kakao Music.
    • Online to offline: Kakao Hairshop style recommendation.
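    The bandit-based ranking idea can be illustrated with a small Thompson-sampling sketch. The production system is proprietary and far more involved; the item IDs and feedback loop below are hypothetical:

    ```python
    import random

    # Hypothetical candidate items; each keeps a Beta posterior over its
    # click-through rate, initialized to the uniform prior Beta(1, 1).
    stats = {item: [1, 1] for item in ["news_a", "news_b", "news_c"]}  # [alpha, beta]

    def rank(items):
        """Thompson sampling: order items by one draw from each Beta posterior."""
        samples = {i: random.betavariate(*stats[i]) for i in items}
        return sorted(items, key=samples.get, reverse=True)

    def feedback(item, clicked):
        """Update the shown item's posterior with one impression."""
        stats[item][0 if clicked else 1] += 1
    ```

    Exploration comes for free with this design: items with little feedback have wide posteriors, so they are occasionally sampled to the top, while well-established winners dominate the ranking on average.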
  • Personalized Push Notification with User History (Daum, Kakao Page)

    Deployed in 2017

    • I researched and developed a personalized application push notification system. Our personalized recommender system infers each user's interests from their activity within the services. The system has been applied to the Daum and Kakao Page mobile applications. More details are in our paper.
  • Large-Scale Item Categorization in e-Commerce (Daum Shopping)

    Deployed in 2017

    • I developed a large-scale item categorization system for Daum Shopping. The problem is challenging because of its web-scale data size, imbalanced label distribution, and noisy labels. We served operation tools and a categorization API backed by a deep-learning-based item categorization model.
  • Research on deep learning normalization (Naver Labs)

    Aug. 2015 - Dec. 2015

    • During my research internship at Naver Labs, I mainly worked on Batch Normalization (BN) techniques using the C++ Caffe framework [code] and the Lua Torch framework. I tested the implemented BN on AlexNet, Inception-v2, and VGG architectures on the ImageNet dataset.
    • I also researched normalization techniques for sequential models, namely RNNs.
  • Web developer (IUM-SOCIUS)

    Jun. 2012 - Jan. 2013

    • I worked as a web developer at IUM-SOCIUS. I developed and maintained internal admin tools and the main service systems based on Java Spring and Ruby on Rails.
    • I also developed and maintained internal tools, including batch jobs (Java Spring Batch) and an internal statistics service (Python Django, MongoDB).

Academic Activities

  • Served as a reviewer for CVPR 2020.


Education

  • M.S. (2014.03 - 2016.02), School of Electrical Engineering, KAIST
  • B.S. (2009.03 - 2014.02), School of Electrical Engineering and School of Management Science (double major), KAIST