profile

Sanghyuk Chun


I'm a lead research scientist at NAVER AI Lab, working on machine learning and its applications. My research aims to expand machine knowledge with insufficient human supervision.

Machine knowledge: Existing machine learning models do not understand the problem itself [Shortcut learning tutorial]. This causes many real-world problems, such as discrimination by machines and poor generalizability to unseen (or minority) corruptions, environments, or groups. Current state-of-the-art machines only "predict" rather than perform logical thinking based on logical reasoning. Because models prefer to learn shortcuts [WCST-ML], training models as usual will lead to biased models. If it is difficult to make machines understand the problem itself, what can we do?

Expanding machine knowledge: Thus, we need to build machines with a causal understanding of the problem. A model should not learn undesirable shortcut features [ReBias] [StyleAugment], and it should be robust to unseen corruptions [CutMix] [RegEval] [ReLabel] [PiT] and significant distribution shifts [SWAD] [MIRO]. We also need machines that do not discriminate against certain demographic groups [CGL]. We expect a model to say "I don't know" when it receives unexpected inputs [PCME]. At the very least, we expect a model to explain why it makes a decision [MTSA] [MTSA WS] [WSOL eval] [WSOL Eval journal], and how it can be fixed (e.g., more data collection? more annotations? filtering?). My research focuses on expanding machine knowledge from "just prediction" to "logical reasoning". Unfortunately, in many cases, existing evaluation protocols and metrics are not reliable enough to measure whether machines learn proper knowledge. I have also worked on fair evaluation benchmarks and metrics to mitigate this issue [ECCV Caption] [PCME] [WSOL eval] [WSOL Eval journal] [RegEval].

Why "insufficient human supervision"? Maybe we can make such models with large-scale datasets if we have explicit human annotations for every possible situation. Furthermore, data collection itself is even non-trivial in many scenarios. As I have witnessed the power of large-scale data points and models in NAVER [CutMix] [AdamP] [ReLabel] [PiT] [ImageNet PPF WS] [ViDT], my assumption is that learning with tremendously many data points (crawled from web) would mimic many possible situations. However, human annotations are too expensive and infeasible in many practical scenarios. We need other approaches rather than the fully supervised approach. My recent research aims to build reliable machine learning models with limited number of additional information (e.g., bias labels) but more data [ReLabel] [CGL]. In particular, I have focused on learning with vision-language datasets [PCME] [ECCV Caption].


I am looking for motivated research interns to work on the following topics:

  • Algorithmic fairness, de-biasing, shortcut learning, or domain generalization. NOTE: I will not work on heuristic methods without theoretical motivation; I will collaborate on this topic only if the methodology is theoretically motivated and guaranteed. Check my preliminary works for more details: [ReBias] [WCST-ML] [CGL] [SWAD] [MIRO].
  • The multiplicity problem and the false negative problem of cross-modal retrieval (especially image-text matching). I am interested in solving these problems in a non-deterministic way (e.g., one-to-many mappings, multiple experts, probabilistic machines). Check my preliminary works for more details: [PCME] [ECCV Caption].
  • Proper uncertainty estimation and explainable AI (XAI). Why do we need better uncertainty estimation and XAI? I believe we need them because a purely data-driven approach cannot reach the "oracle". I am interested in addressing errors in current ML models by (1) rejecting or fixing a prediction when it is uncertain, and (2) debugging a model based on knowledge from XAI. Check my relevant studies for more details: [PCME] [MTSA].
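The rejection idea in (1) above can be sketched with a simple entropy-based abstention rule. This is a generic illustration, not the method of [PCME]; the `predict_or_reject` helper and the threshold value are hypothetical.

```python
import numpy as np

# Hypothetical sketch: abstain when the predictive entropy of the
# softmax output exceeds a threshold, otherwise return the argmax class.
def predict_or_reject(probs, threshold=1.0):
    """probs: (K,) softmax output. Returns a class index, or None to abstain."""
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return None if entropy > threshold else int(np.argmax(probs))

print(predict_or_reject(np.array([0.97, 0.02, 0.01])))  # confident -> 0
print(predict_or_reject(np.array([0.40, 0.35, 0.25])))  # uncertain -> None
```

In practice the threshold would be tuned on a validation set to trade off coverage against error rate.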

If you are interested in joining our group or collaborating with me, please send an email to me (or naverai at navercorp.com) with your academic CV and desired topics. Please be aware that we expect a 6-month internship (no extension is available due to legal regulations). That is, we expect interns to finish their research project within 6 months (i.e., submitting a full paper to a top-tier conference, officially releasing their code, ...). We therefore expect interns to have strong publication records (e.g., 1+ research papers relevant to their desired topic). Officially, we can work from either home or the office (located in Seoul, Korea [Google map]). However, if you are not a Korean citizen, as of now (July 2022) it is almost impossible to work in Korea. Lastly, our hiring process is notoriously slow, usually taking more than 2 months, so please plan ahead rather than contacting us at the last minute.


News

  • _9/2022 : Giving a talk at Sogang University (topic: "ECCV Caption") [slide]
  • _9/2022 : 1 paper [MSDA theorem] is accepted at NeurIPS 2022.
  • _8/2022 : Starting a new chapter in life with Song Park πŸ€΅β€οΈπŸ‘°.
  • _7/2022 : 1 paper [LF-Font journal] is accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
  • _7/2022 : 2 papers [ECCV Caption] [MIRO] are accepted at ECCV 2022.
  • _7/2022 : Giving a talk at UNIST AIGS (topic: "Towards Reliable Machine Learning: Challenges, Examples, Solutions") [slide]
  • _6/2022 : Giving a tutorial on "Shortcut learning in Machine Learning: Challenges, Analysis, Solutions" at FAccT 2022. [ tutorial homepage | slide | video ]
  • _5/2022 : Receiving an outstanding reviewer award at CVPR 2022 [link].
  • _5/2022 : 1 paper [DCC] is accepted at ICML 2022.
  • _4/2022 : 1 paper [WSOL Eval journal] is accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
  • _4/2022 : Organizing ICLR 2022 ML in Korea Social
  • _3/2022 : Giving guest lectures at KAIST and SNU (topic: "Towards Reliable Machine Learning") [slide]
  • _3/2022 : Co-organizing FAccT 2022 Translation/Dialogue Tutorial: "Shortcut learning in Machine Learning: Challenges, Analysis, Solutions" (slides, videos and web pages will be released soon)
  • _3/2022 : 1 paper [CGL] is accepted at CVPR 2022.
  • _2/2022 : Giving a talk at POSTECH AI Research (PAIR) ML Winter Seminar 2022 (topic: "Shortcut learning in Machine Learning: Challenges, Examples, Solutions") [slide]
  • _1/2022 : 2 papers [ViDT] [WCST-ML] are accepted at ICLR 2022.
Older news
  • 12/2021 : Co-hosting NeurIPS'21 workshop on ImageNet: Past, Present, and Future with 400+ attendees!
  • 12/2021 : Giving a talk at University of Seoul (topic: "Realistic challenges and limitations of AI") [slide]
  • 11/2021 : Giving a talk at NAVER and NAVER Labs Europe (topic: Mitigating dataset biases in Real-world ML applications) [slide]
  • 11/2021 : Giving a guest lecture at UNIST (topic: Limits and Challenges in Deep Learning Optimizers) [slide]
  • 10/2021 : Releasing a unified few-shot font generation framework! [code]
  • _9/2021 : 2 papers [SWAD] [NHA] are accepted at NeurIPS 2021.
  • _8/2021 : Reaching a research milestone of 1,000 citations at Google Scholar and Semantic Scholar!
  • _7/2021 : Co-organizing the NeurIPS Workshop on ImageNet: Past, Present, and Future! [webpage]
  • _7/2021 : 2 papers [MX-Font] [PiT] are accepted at ICCV 2021.
  • _7/2021 : Giving a talk at Computer Vision Centre (CVC), UAB (topic: PCME and AdamP) [info] [slide]
  • _6/2021 : Giving a talk at KSIAM 2021 (topic: AdamP). [slide]
  • _6/2021 : Giving a guest lecture at Seoul National University (topic: few-shot font generation). [slide]
  • _5/2021 : Receiving an outstanding reviewer award at CVPR 2021 [link].
  • _4/2021 : 1 paper [LF-Font] is accepted at CVPR 2021 workshop (also appeared at AAAI).
  • _3/2021 : 2 papers [PCME] [ReLabel] are accepted at CVPR 2021.
  • _1/2021 : 1 paper [AdamP] is accepted at ICLR 2021.
  • 12/2020 : 1 paper [LF-Font] is accepted at AAAI 2021.
  • _7/2020 : 1 paper [DM-Font] is accepted at ECCV 2020.
  • _6/2020 : Receiving the best paper runner-up award at AICCW CVPR 2020.
  • _6/2020 : Receiving an outstanding reviewer award at CVPR 2020 [link].
  • _6/2020 : Giving a talk at the CVPR 2020 NAVER interactive session.
  • _6/2020 : 1 paper [ReBias] is accepted at ICML 2020.
  • _4/2020 : 1 paper [DM-Font short] is accepted at CVPR 2020 workshop.
  • _2/2020 : 1 paper [wsoleval] is accepted at CVPR 2020.
  • _1/2020 : 1 paper [HCNN] is accepted at ICASSP 2020.
  • 10/2019 : 1 paper [HCNN short] is accepted at ISMIR late break demo.
  • 10/2019 : Working at Naver Labs Europe as a visiting researcher (Oct - Dec 2019)
  • _7/2019 : 2 papers [CutMix] [WCT2] are accepted at ICCV 2019 (1 oral presentation).
  • _6/2019 : Giving a talk at ICML 2019 Expo workshop.
  • _5/2019 : 2 papers [MTSA] [RegEval] are accepted at ICML 2019 workshops (1 oral presentation).
  • _5/2019 : Giving a talk at ICLR 2019 Expo talk.
  • _3/2019 : 1 paper [PRM] is accepted at ICLR 2019 workshop.

Publications

(C: peer-reviewed conference, W: peer-reviewed workshop, A: arxiv preprint, O: others)
(authors contributed equally)

See also at my Google Scholar.

Selected Publications
  • Probabilistic Embeddings for Cross-Modal Retrieval.
  • ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO.
  • AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights.
  • Learning De-biased Representations with Biased Representations.
  • Learning Fair Classifiers with Partially Annotated Group Labels.
    • Sangwon Jung, Sanghyuk Chun, Taesup Moon
    • CVPR 2022. paper | code | bibtex
  • SWAD: Domain Generalization by Seeking Flat Minima.
    • Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, Sungrae Park
    • NeurIPS 2021. paper | code | bibtex
  • Domain Generalization by Mutual-Information Regularization with Pre-trained Models.
    • Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun
    • ECCV 2022. paper | code | bibtex
  • CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features.
  • An Empirical Evaluation on Robustness and Uncertainty of Regularization methods.
    • Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo
    • ICML Workshop 2019. paper | bibtex
  • Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts.
    • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
    • ICCV 2021. paper | code | bibtex
Journals
  • An Extendable, Efficient and Effective Transformer-based Object Detector.
    • Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang
    • Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
    • preprint. paper | code | bibtex
  • Few-shot Font Generation with Weakly Supervised Localized Representations.
    • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
    • Accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 2022. (IF:24.314)
    • preprint. paper | code (old) | code (new) | project page | bibtex
  • Evaluation for Weakly Supervised Object Localization: Protocol, Metrics, and Datasets.
    • Junsuk Choe, Seong Joon Oh, Sanghyuk Chun, Seungho Lee, Zeynep Akata, Hyunjung Shim
    • Accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 2022. (IF:24.314)
    • preprint. paper | code and dataset | bibtex
2022
  • A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective.
    • Chanwoo Park, Sangdoo Yun, Sanghyuk Chun
    • NeurIPS 2022. paper | code | bibtex
  • ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO.
  • Domain Generalization by Mutual-Information Regularization with Pre-trained Models.
    • Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun
    • ECCV 2022. paper | code | bibtex
  • Dataset Condensation with Contrastive Signals.
    • Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon
    • ICML 2022. paper | bibtex
  • Learning Fair Classifiers with Partially Annotated Group Labels.
    • Sangwon Jung, Sanghyuk Chun, Taesup Moon
    • CVPR 2022. paper | code | bibtex
  • ViDT: An Efficient and Effective Fully Transformer-based Object Detector.
    • Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang
    • ICLR 2022. paper | code | bibtex
  • Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective.
    • Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, Sangdoo Yun
    • ICLR 2022. paper | bibtex
Older publications (~ 2021)
2021
  • SWAD: Domain Generalization by Seeking Flat Minima.
    • Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, Sungrae Park
    • NeurIPS 2021. paper | code | bibtex
  • Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions.
    • Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg
    • NeurIPS 2021. paper | bibtex
  • StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures.
  • Rethinking Spatial Dimensions of Vision Transformers.
    • Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh
    • ICCV 2021. paper | code | tweet | bibtex
  • Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts.
    • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
    • ICCV 2021. paper | code | bibtex
  • Probabilistic Embeddings for Cross-Modal Retrieval.
  • Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels.
    • Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun
    • CVPR 2021. paper | code | bibtex
  • AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights.
  • Few-shot Font Generation with Localized Style Representations and Factorization.
    • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
    • AAAI 2021. CVPR Workshop 2021. paper | code | project page | bibtex
2020
  • Few-shot Compositional Font Generation with Dual Memory.
    • Junbum Cha, Sanghyuk Chun, Gayoung Lee, Bado Lee, Seonghyeon Kim, Hwalsuk Lee
    • ECCV 2020. paper | code | video | bibtex
  • Learning De-biased Representations with Biased Representations.
  • Toward High-quality Few-shot Font Generation with Dual Memory. Oral presentation The best paper runner-up award
    • Junbum Cha, Sanghyuk Chun, Gayoung Lee, Bado Lee, Seonghyeon Kim, Hwalsuk Lee
    • CVPR Workshop 2020. paper | bibtex
  • Evaluating Weakly Supervised Object Localization Methods Right.
  • Data-driven Harmonic Filters for Audio Representation Learning.
2019
  • Neural Approximation of Auto-Regressive Process through Confidence Guided Sampling.
    • YoungJoon Yoo, Sanghyuk Chun, Sangdoo Yun, Jung-Woo Ha, Jaejun Yoo
    • preprint. paper | bibtex
  • Toward Interpretable Music Tagging with Self-attention.
  • CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. Oral presentation
  • Photorealistic Style Transfer via Wavelet Transforms.
  • Automatic Music Tagging with Harmonic CNN.
    • Minz Won, Sanghyuk Chun, Oriol Nieto, Xavier Serra
    • ISMIR LBD 2019. paper | code | bibtex
  • An Empirical Evaluation on Robustness and Uncertainty of Regularization methods.
    • Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo
    • ICML Workshop 2019. paper | bibtex
  • Visualizing and Understanding Self-attention based Music Tagging. Oral presentation
  • Where To Be Adversarial Perturbations Added? Investigating and Manipulating Pixel Robustness Using Input Gradients.
    • Jisung Hwang, Younghoon Kim, Sanghyuk Chun, Jaejun Yoo, Ji-Hoon Kim, Dongyoon Han
    • ICLR Workshop 2019. paper | bibtex
~ 2018
  • Multi-Domain Processing via Hybrid Denoising Networks for Speech Enhancement.
  • A Study on Intelligent Personalized Push Notification with User History.
    • Hyunjong Lee, Youngin Jo, Sanghyuk Chun, Kwangseob Kim
    • Big Data 2017. paper | bibtex
  • Scalable Iterative Algorithm for Robust Subspace Clustering: Convergence and Initialization.
    • Master's Thesis, Korea Advanced Institute of Science and Technology, 2016 (advised by Jinwoo Shin) paper | code

Academic Activities

Professional Service
  • Outstanding reviewer:
    • CVPR 2020, CVPR 2021, CVPR 2022
  • FAccT 2022 Translation/Dialogue Tutorial: "Shortcut learning in Machine Learning: Challenges, Analysis, Solutions"
  • NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future
    • Co-organized by Zeynep Akata, Lucas Beyer, Sanghyuk Chun, Almut Sophia Koepke, Diane Larlus, Seong Joon Oh, Rafael Sampaio de Rezende, Sangdoo Yun, Xiaohua Zhai
Awards
  • Outstanding reviewer award, CVPR 2022
  • Outstanding reviewer award, CVPR 2021
  • Outstanding reviewer award, CVPR 2020
  • Best paper runner-up award, AI for Content Creation Workshop at CVPR 2020
Talks
  • "ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO", Sogang University (2022). [slide]
  • "Towards Reliable Machine Learning: Challenges, Examples, Solutions", UNIST AIGS (2022). [slide]
  • "Tutorial on Shortcut learning in Machine Learning: Challenges, Analysis, Solutions" at FAccT 2022. [ tutorial homepage | slide | video ]
  • "Towards Reliable Machine Learning", KAIST and SNU (2022). [slide]
  • "Shortcut learning in Machine Learning: Challenges, Examples, Solutions", POSTECH AI Research (PAIR) ML Winter Seminar 2022. [slide]
  • "Realistic challenges and limitations of AI", University of Seoul. [slide]
  • "Mitigating dataset biases in Real-world ML applications", NAVER and NAVER Labs Europe (2021). [slide]
  • "Limits and Challenges in Deep Learning Optimizers", UNIST (2021). [slide]
  • "Towards better cross-modal learning by Probabilistic embedding and AdamP optimizer", UAB CVC (2021). [info] [slide]
  • "AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights", KSIAM (2021). [slide]
  • "Towards Few-shot Font Generation", Seoul University and NAVER (2021). [slide]
  • "Learning De-biased Representations with Biased Representations", NAVER (2020). [slide]
  • "Reliable Machine Learning in NAVER AI", Yonsei University (2020). [slide]
  • "Toward Reliable Machine Learning", omnious and nota (2020). [slide]
  • "Reliable Machine Learning", NAVER CVPR 2020 sponser event. [program] [slide] [video]
  • "Neural Architectures for Music Representation Learning", NAVER (2020). [slide]
  • "Learning generalizable representations with CutMix and ReBias", NAVER Labs Europe (2019).
  • "An empirical evaluation on the generalizability of regularization methods", ICML 2019 Expo Talk: Recent Work on Machine Learning at NAVER. [slide]
  • "Recent works on deep learning robustness in Clova AI", ICLR 2019 Expo Talk: Representation Learning to Rich AI Services in NAVER and LINE.
  • "Recommendation system in the real world", Deepest Summer School 2018. [slide]

Mentoring and Teaching

Mentees / Short-term post-doctoral collaborators / Internship students

Topics: Reliable ML / Learning with limited annotations / Modality-specific tasks / Generative models / Other topics

  • _    Eunji Kim (Seoul National University, 2022) -- XAI
  • _    Jaehui Hwang (Yonsei University, 2022) -- Adversarial robustness and XAI
  • _    Saehyung Lee (Seoul National University, 2021-2022) [C19] -- Data condensation
  • _ _ Sangwon Jung (Seoul National University, 2021-2022) [C18] [C19] -- Fairness with not enough group labels
  • _    Luca Scimeca (A short-term post-doctoral collaborator, 2021) [C16] [C14] -- Understanding shortcut learning phenomenon in feature space
  • _    Michael Poli (KAIST, 2021) [C14] [C16] -- Neural hybrid automata
  • _    Hyemi Kim (KAIST, 2021) -- Test-time training for robust prediction
  • _    Jun Seo (KAIST, 2021) -- Self-supervised learning
  • _    Song Park (Yonsei University, 2020-2021) [C8/W6] [C12] [A4] [A8] -- Few-shot font generation
  • _    Hyojin Bahng (Korea University, 2019) [C6] -- De-biasing
  • _ _ Junsuk Choe (Yonsei University, 2019) [C5] [A6] -- Reliable evaluation for WSOL
  • _    Naman Goyal (IIT RPR, 2019) -- Robust representation against shift
  • _    Minz Won (Music Technology Group, Universitat Pompeu Fabra, 2018-2019) [W2] [W4] [A2] [C4] -- Audio representation learning
  • _    Byungkyu Kang (Yonsei University, 2018) [C2] -- Image-to-image translation and style transfer
  • _    Jang-Hyun Kim (Seoul National University, 2018) [A1] -- Audio representation learning
  • _    Jisung Hwang (University of Chicago, 2018) [W1] -- Adversarial robustness
  • _    Younghoon Kim (Seoul National University, 2018) [W1] -- Adversarial robustness
Guest lectures
  • "ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO", Sogang University (2022). [slide]
  • "Towards Reliable Machine Learning", Seoul National University (2022). [slide]
  • "Towards Reliable Machine Learning", KAIST (2022). [slide]
  • "Limits and Challenges in Deep Learning Optimizers", UNIST (2021). [slide]
  • "Towards Few-shot Font Generation", Seoul National University (2021). [slide]
  • "Reliable Machine Learning in NAVER AI", Yonsei University (2020). [slide]
  • "Recommendation system in the real world", Deepest Summer School 2018. [slide]
Industry Experience
NAVER AI Research (2018 ~ Now)
  • Hangul Handwriting Font Generation

    Distributed at 2019 Hangul's day (ν•œκΈ€λ‚ ), [Full font list]

    • Hangul (the Korean alphabet, ν•œκΈ€) originally consists of only 24 sub-letters (γ„±, γ…‹, γ„΄, γ„·, γ…Œ, ㅁ, γ…‚, ㅍ, γ„Ή, γ……, γ…ˆ, γ…Š, γ…‡, γ…Ž, γ…‘, γ…£, γ…—, ㅏ, γ…œ, γ…“, γ…›, γ…‘, γ… , γ…•), but by combining them, there exist 11,172 valid characters in Hangul. For example, "ν•œ" is a combination of γ…Ž, ㅏ, and γ„΄, and "쐰" is a combination of γ……, γ……, γ…—, γ…£, and γ„΄. This makes generating a new Hangul font very expensive and time-consuming. Meanwhile, since 2008, Naver has distributed Korean fonts for free (named Nanum fonts, λ‚˜λˆ” κΈ€κΌ΄).
    • In 2019, we developed a technology for fully personalized Hangul font generation with only 152 characters. We opened an event page where users could submit their own handwriting. The full list of generated fonts can be found at [this link]. Details of the generation technique used for the service were presented at Deview 2019 [Link].
    • This work was also extended to few-shot generation based on compositionality. See the papers at the AI for Content Creation Workshop (AICCW) at CVPR 2020 (short paper) [Link], ECCV 2020 (full paper) [Link], AAAI 2021 [Link], ICCV 2021 [Link], and the journal extension [Link].
    • [BONUS] You can play with my handwriting here
  • Emoji Recommendation (LINE Timeline)

    Deployed in Jan. 2019

    • LINE is a major messenger app in East Asia (Japan, Taiwan, Thailand, Indonesia, and Korea). In the application, users can buy and use numerous emojis, a.k.a. LINE Stickers.
    • In this project, we recommended emojis to users based on their profile picture (cross-domain recommendation).
    • I developed and researched the entire pipeline of the cross-domain recommendation system and operation tools.
Kakao Advanced Recommendation Technology (ART) team (2016 ~ 2018)
  • Recommender Systems (Kakao services)

    Feb. 2016 - Feb. 2018

    • I developed and maintained a large-scale real-time recommender system (Toros [PyCon Talk] [AI Report]) for various services in Daum and Kakao. I mainly worked on content-based representation modeling (for textual, visual, and musical data), collaborative filtering, user embedding, user clustering, and a ranking system based on multi-armed bandits.
    • Textual domain: Daum News similar article recommendation, Brunch (blog service) similar post recommendation, Daum Cafe (community service) hit item recommendation.
    • Visual domain: Daum Webtoon and Kakao Page similar item recommendation, video recommendation for a news article (cross-domain recommendation).
    • Audio domain: music recommendation for Kakao Mini (smart speaker), Melon and Kakao Music.
    • Online to offline: Kakao Hairshop style recommendation.
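The multi-armed-bandit ranking mentioned above can be illustrated with a minimal epsilon-greedy sketch. The actual Toros system is not described here, so the class, item names, and parameters are all hypothetical; this only shows the explore/exploit mechanic.

```python
import random

# Hypothetical epsilon-greedy bandit: explore a random item with
# probability epsilon, otherwise exploit the best estimated item.
class EpsilonGreedy:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}    # times each arm was shown
        self.values = {a: 0.0 for a in arms}  # running mean reward (e.g., CTR)

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))       # explore
        return max(self.values, key=self.values.get)      # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental mean of observed rewards
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedy(["item_a", "item_b", "item_c"])
for _ in range(1000):
    arm = bandit.select()
    # simulated feedback: item_b has the highest click probability
    reward = 1.0 if (arm == "item_b" and random.random() < 0.3) else 0.0
    bandit.update(arm, reward)
```

A production ranker would use richer reward signals and contextual features, but the same select/update loop is the core of bandit-based ranking.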
  • Personalized Push Notification with User History (Daum, Kakao Page)

    Deployed in 2017

    • Mobile push services (or alert systems) are widely used in mobile applications to attain a high user retention rate. However, frequent push notifications cause user fatigue, which can lead to users removing the application. Usually, push notification systems are rule-based and managed by human labor. In this project, we researched and developed a personalized push notification system based on user activity and interests. The system has been applied to the Daum and Kakao Page mobile applications. More details are in our paper.
  • Large-Scale Item Categorization in e-Commerce (Daum Shopping)

    Deployed in 2017

    • Accurate categorization helps users search for desired items in e-Commerce by category, e.g., clothes / shoes / sneakers. However, categorization is usually performed by rule-based systems or human labor, which leads to low coverage of categorized items. Even automatic item categorization is difficult due to the web-scale data size, the highly imbalanced annotation distribution, and noisy labels. I developed a large-scale item categorization system for Daum Shopping based on a deep network, from the operation tool to the categorization API.
Internship
  • Research internship (Naver Labs)

    Aug. 2015 - Dec. 2015

    • During the internship, I implemented batch normalization (BN) for AlexNet, Inception v2, and VGG on ImageNet using Caffe. I also researched batch normalization for sequential models (e.g., RNNs) using Lua Torch.
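For reference, the batch-normalization forward pass mentioned above can be sketched in a few lines. This is a minimal NumPy illustration of the standard training-mode computation, not the Caffe or Lua Torch implementation from the internship.

```python
import numpy as np

# Minimal sketch of the batch-normalization forward pass (training mode).
def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: (N, D) mini-batch; gamma/beta: learnable scale and shift of shape (D,)."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize to ~zero mean, unit variance
    return gamma * x_hat + beta             # learnable affine transform

x = np.random.randn(64, 8) * 5.0 + 3.0      # a poorly scaled mini-batch
out = batch_norm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(), out.std())                # close to 0 and 1
```

At inference time the batch statistics are replaced by running averages accumulated during training, which this sketch omits.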
  • Software engineer (IUM-SOCIUS)

    Jun. 2012 - Jan. 2013

    • I worked as a web developer at IUM-SOCIUS. During the internship, I developed and maintained internal batch services (Java Spring Batch), an internal statistics service (Python Flask, MongoDB), internal admin tools (Python Django, MySQL), and the main service systems (Java Spring, Ruby on Rails, MariaDB).
Education and Career
  • M.S. (2014.03 - 2016.02), School of Electrical Engineering, KAIST
  • B.S. (2009.03 - 2014.02), School of Electrical Engineering and School of Management Science (double major), KAIST