Data Science
Distributionally Robust Inference
Many machine learning and statistical methods rely on the fundamental assumption that the observed (training) data and the future (test) data follow the same distribution. In real-world applications, however, this assumption is often violated by various kinds of distributional shift, which can significantly degrade the performance of non-robust methods in statistical inference tasks, such as those based on empirical risk minimization. To address this challenge, distributionally robust inference has emerged as a promising alternative: instead of optimizing performance under the empirical distribution alone, it seeks procedures that behave well under every distribution in a neighborhood of plausible alternatives. This relatively new field of data science focuses on developing methods that are resilient to distributional shifts and on studying their theoretical properties. Our research aims to advance this area by developing robust methodologies and investigating their practical and theoretical implications.
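
To make the idea concrete, here is a minimal Python sketch (a toy illustration, not a description of our methodology) of one of the simplest instances of distributionally robust optimization: minimizing the worst-case risk over all reweightings of the training points whose weights are capped at 1/(alpha*n), which equals the conditional value-at-risk (CVaR) of the losses at level alpha. The data set, the helper worst_case_weights, and all constants are hypothetical choices made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def worst_case_weights(losses, alpha):
        # Maximize sum_i q_i * loss_i subject to 0 <= q_i <= 1/(alpha*n) and
        # sum_i q_i = 1: put the capped mass on the largest losses (CVaR weighting).
        n = len(losses)
        cap = 1.0 / (alpha * n)
        order = np.argsort(losses)[::-1]
        k = int(np.floor(alpha * n))
        q = np.zeros(n)
        q[order[:k]] = cap
        rem = 1.0 - k * cap
        if k < n and rem > 1e-12:
            q[order[k]] = rem
        return q

    # Toy data: a linear model with a small subgroup whose responses are shifted,
    # standing in for the part of the population that moves at test time.
    n, alpha, lr = 500, 0.1, 0.05
    X = rng.normal(size=(n, 2))
    group = rng.random(n) < 0.1
    y = X @ np.array([1.0, -1.0]) + np.where(group, 3.0, 0.0) + 0.1 * rng.normal(size=n)

    theta = np.zeros(2)
    for _ in range(500):
        resid = X @ theta - y
        q = worst_case_weights(resid ** 2, alpha)  # adversarial reweighting
        grad = 2 * X.T @ (q * resid)               # subgradient of the worst-case risk
        theta -= lr * grad

    print("distributionally robust estimate:", theta)

Unlike an empirical-risk-minimization fit, each update reweights the gradient toward the hardest examples, so the estimate hedges against the shifted subgroup.
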
Score-based generative models
Generative AI has had a significant impact on many areas of science and industry. Score-based generative models, such as diffusion models, are among the most successful approaches: they learn the score of the data distribution, that is, the gradient of the log-density, and generate new samples by simulating a stochastic process driven by this estimate. However, despite extensive research, our theoretical understanding of these models remains limited. Our research aims to develop a solid theoretical foundation that provides fundamental insights into why score-based generative models perform so well.
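
As a toy illustration of the mechanism, the following sketch samples from a known one-dimensional Gaussian mixture by running unadjusted Langevin dynamics driven by the score. In an actual score-based generative model the score would be estimated from data by a neural network trained with (denoising) score matching and combined with a noise schedule; here it is available in closed form purely to make the sampling step concrete, and all constants are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    # Target density: a two-component 1-D Gaussian mixture.
    means = np.array([-2.0, 2.0])
    stds = np.array([0.5, 0.5])
    weights = np.array([0.5, 0.5])

    def score(x):
        # score(x) = (d/dx) log p(x), computed in closed form for the mixture.
        z = (x[:, None] - means) / stds
        comp = weights * np.exp(-0.5 * z ** 2) / (stds * np.sqrt(2 * np.pi))
        p = comp.sum(axis=1)
        dp = (comp * (-z / stds)).sum(axis=1)
        return dp / p

    # Unadjusted Langevin dynamics: x <- x + eps * score(x) + sqrt(2*eps) * noise.
    x = 3.0 * rng.normal(size=5000)
    eps = 0.01
    for _ in range(2000):
        x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)

    # The empirical moments should roughly match the mixture (mean 0, std ~2.06).
    print("sample mean/std:", x.mean(), x.std())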

Over-parameterized deep neural networks
In recent years, many researchers have worked on understanding deep learning from a theoretical perspective. One of the key open problems is to explain why over-parameterized deep neural networks perform so well. For over-parameterized models, researchers have observed phenomena such as the double descent curve, in which the test error first falls, then rises sharply near the interpolation threshold where the model has just enough parameters to fit the training data perfectly, and then falls again as the model grows larger. This appears to contradict the classical bias-variance tradeoff principle. Our research aims to build a deeper understanding of the mechanisms behind the success of over-parameterized deep neural networks.
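
The double descent curve can be reproduced in a very small linear setting, which we sketch below: minimum-norm least squares on random Fourier features, sweeping the number of features p past the interpolation threshold p = n. In such toy experiments the test error typically rises sharply near p ≈ n and then falls again as p grows. The construction and all constants here are standard illustrative choices, not results from our work.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic regression data: a linear ground truth observed with noise.
    n_train, n_test, d = 100, 2000, 5
    beta = rng.normal(size=d)
    X = rng.normal(size=(n_train, d))
    X_test = rng.normal(size=(n_test, d))
    y = X @ beta + 0.5 * rng.normal(size=n_train)
    y_test = X_test @ beta + 0.5 * rng.normal(size=n_test)

    def features(X, W, b):
        # Random Fourier features: a fixed random nonlinear embedding.
        return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

    for p in [10, 50, 90, 100, 110, 200, 500, 2000]:
        W = rng.normal(size=(d, p))
        b = rng.uniform(0.0, 2.0 * np.pi, size=p)
        F, F_test = features(X, W, b), features(X_test, W, b)
        w = np.linalg.pinv(F) @ y  # minimum-norm least-squares fit
        mse = np.mean((F_test @ w - y_test) ** 2)
        print(f"p = {p:5d}   test MSE = {mse:8.3f}")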