Research Interests
Here are some topics I am currently interested in:
Uncertainty quantification (e.g., conformal prediction, calibration)
Distribution-free risk control, especially for foundation models (e.g., LLMs) and robotics
Multi-distribution learning (closely related to algorithmic fairness & domain generalization)
AI ethics (e.g., societal aspects of AI policy-making)
AI in health & medical care
Manuscripts & Preprints
(* for equal contribution; (α-β) for alphabetical ordering; papers with purple titles are not on arXiv yet)
Provable Multi-Party Reinforcement Learning with Diverse Human Feedback Submitted
Huiying Zhong, Zhun Deng, Weijie Su, Steven Wu, Linjun Zhang
Taking a Break: The Optimal Stopping Problem for User Engagement Draft in progress
Zhun Deng, He Sun, Guannan Qu, David Parkes
Publications
(* for equal contribution; (α-β) for alphabetical ordering)
Conference Proceedings
2024
Learning and Forgetting Unsafe Examples in Large Language Models ICML 2024
Jiachen Zhao, Zhun Deng, David Madras, James Zou, Mengye Ren
Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models ICLR 2024
Tom Zollo, Todd Morrill*, Zhun Deng*, Jake Snell, Toniann Pitassi, Richard Zemel
2023
Distribution-free Statistical Dispersion Control for Societal Applications NeurIPS (spotlight, top 3% among submissions) 2023
Zhun Deng, Tom Zollo, Jake Snell, Toniann Pitassi, Richard Zemel
PICProp: Physics-Informed Confidence Propagation for Uncertainty Quantification NeurIPS 2023
Qianli Shen, Wai Hoh Tang, Zhun Deng, Apostolos Psaros, Kenji Kawaguchi
Decision-Aware Conditional GANs for Time Series Data ICAIF (oral presentation) 2023
He Sun, Zhun Deng, Hui Chen, David Parkes
How Does Information Bottleneck Help Deep Learning? ICML 2023
Kenji Kawaguchi*, Zhun Deng*, Xu Ji*, Jiaoyang Huang
Reinforcement Learning with Stepwise Fairness Constraints AISTATS 2023
Zhun Deng, He Sun, Zhiwei Steven Wu, Linjun Zhang, David Parkes
Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data AISTATS 2023
Ryumei Nakada, Ibrahim Gulluk, Zhun Deng, Wenlong Ji, James Zou, Linjun Zhang
FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data ICLR 2023
Zhun Deng, Jiayao Zhang, Linjun Zhang, Ting Ye, Yates Coley, Weijie Su, James Zou
Quantile Risk Control: A Flexible Framework for Bounding the Probability of High-Loss Predictions ICLR 2023
Jake Snell, Tom Zollo, Zhun Deng, Toniann Pitassi, Richard Zemel
HappyMap: A Generalized Multi-calibration Method ITCS 2023
(α-β) Zhun Deng, Cynthia Dwork, Linjun Zhang
2022
When and How Mixup Improves Calibration ICML 2022
Linjun Zhang*, Zhun Deng*, Kenji Kawaguchi, James Zou
Robustness Implies Generalization via Data-dependent Generalization Bounds ICML (long presentation, top 2% among submissions) 2022
Kenji Kawaguchi, Zhun Deng, Kyle Luh, Jiaoyang Huang
An Unconstrained Layer-Peeled Perspective on Neural Collapse ICLR 2022
Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie Su
2021
Adversarial Training Helps Transfer Learning via Better Representations NeurIPS 2021
Zhun Deng*, Linjun Zhang*, Kailas Vodrahalli, Kenji Kawaguchi, James Zou
Toward Better Generalization Bounds with Locally Elastic Stability ICML 2021
Zhun Deng, Hangfeng He, Weijie Su
Improving Adversarial Robustness via Unlabeled Out-of-Domain Data AISTATS (oral presentation, top 3% among submissions) 2021
Zhun Deng*, Linjun Zhang*, Amirata Ghorbani, James Zou
How Does Mixup Help With Robustness and Generalization? ICLR (spotlight, top 5% among submissions) 2021
Linjun Zhang*, Zhun Deng*, Kenji Kawaguchi*, Amirata Ghorbani, James Zou
The Role of Gradient Noise in the Optimization of Neural Networks IEEE Big Data 2021
(α-β) Zhun Deng, Jiaoyang Huang, Kenji Kawaguchi
2020
Interpreting Robust Optimization via Adversarial Influence Functions ICML 2020
(α-β) Zhun Deng, Cynthia Dwork, Jialiang Wang, Linjun Zhang
Towards Understanding the Dynamics of the First-Order Adversaries ICML 2020
Zhun Deng, Hangfeng He, Jiaoyang Huang, Weijie Su
2018
The Number of Independent Sets in Hexagonal Graphs ISIT 2018
Zhun Deng, Jie Ding, Kathryn Heal, Vahid Tarokh
Workshop Proceedings
2023
Last-layer Fairness Fine-tuning is Simple and Effective for Neural Networks ICML Workshop 2023
Yuzhen Mao, Zhun Deng, Huaxiu Yao, Ting Ye, Kenji Kawaguchi, James Zou
2019
Differential Privacy After the Fact: The Case of Congressional Reapportionment TPDP 2019
(α-β) Zhun Deng, Cynthia Dwork, Adam Smith
Journal Publications
2024
Making Predictors More Reliable with Selective Recalibration Transactions on Machine Learning Research
Tom Zollo, Zhun Deng, Jake Snell, Toniann Pitassi, Richard Zemel
2023
The Power of Contrast for Feature Learning: A Theoretical Analysis The Journal of Machine Learning Research (JMLR), 24(330):1-78, 2023.
Wenlong Ji, Zhun Deng, Ryumei Nakada, James Zou, Linjun Zhang
2022
Understanding Dynamics of Nonlinear Representation Learning and Its Application Neural Computation, 34(4), 991-1018, MIT Press, 2022.
Kenji Kawaguchi, Linjun Zhang, Zhun Deng
Talks
Lunch Seminar, Center for Data Science, NYU, "Taming the Beast: Practical Theories for Responsible Machine Learning".
Theory Lunch Seminar, CMU, "Taming the Beast: Practical Theories for Responsible Machine Learning".
Penn Research in Machine Learning forum, University of Pennsylvania, "Knowing the Unknowns: Uncertainty Quantification for Responsible AI Deployment".
ITCS 2023, "HappyMap: A Generalized Multi-calibration Method".
AI TIME, one-hour talk, 2022, "Reinforcement Learning with Stepwise Fairness Constraints".
Microsoft Research Asia, Beijing, 2022, "New Tools in Algorithmic Stability".
JSM 2022, "Obtaining More Generalizable Fair Classifiers on Imbalanced Datasets".
ICML 2022, long talk, "Robustness Implies Generalization via Data-dependent Generalization Bounds".
Simons Institute for the Theory of Computing, Data Privacy: Foundations and Applications Reunion, 2022, "Scaffolding Sets".
Tsinghua University, AI TIME, 2022, "Adversarial Training Encourages More Transferable Representations".
NeurIPS 2021, "Adversarial Training Helps Transfer Learning via Better Representations".
ICML 2021, "Toward Better Generalization Bounds with Locally Elastic Stability".
ICLR 2021, "How Does Mixup Help With Robustness and Generalization?".
AISTATS 2021, "Improving Adversarial Robustness via Unlabeled Out-of-Domain Data".
ICML 2020, "The Optimization Landscape of the First-Order Adversaries".
University of Minnesota Twin Cities, Jie Ding's Group, 2018, "Recent Advances in Differential Privacy: Theory and Application".