Ziqiao Wang is an Assistant Professor in the School of Computer Science and Technology at Tongji University. He received his Ph.D. in Electrical and Computer Engineering from the University of Ottawa. His research focuses on the theoretical foundations of machine learning and information theory, and his work has been published at top-tier ML conferences such as NeurIPS, ICML, and ICLR. His doctoral dissertation was nominated for the 2025 Canadian AI Association Best Doctoral Dissertation Award, as well as the University of Ottawa Governor General's Gold Medal and the Pierre Laberge Thesis Prize. He currently leads an NSFC Young Scientists Fund project and participates in China's National Key R&D Program. He served as a Co-Program Chair of the 2024 IEEE North American School of Information Theory (NASIT) and serves as an Area Chair for ICLR.
Please visit https://ziqiaowanggeothe.github.io for more information.
Research Interests:
I am broadly interested in the theoretical foundations of machine learning, with a particular focus on statistical learning theory and information theory. My primary research area is generalization theory, including both in-distribution and out-of-distribution settings (e.g., domain adaptation), where I often apply information-theoretic analysis tools. More recently, I have also been exploring safety and trustworthiness in large language models (LLMs), including topics such as LLM alignment and watermarking.
Recent Publications:
Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach
Haiyun He, Yepeng Liu, Ziqiao Wang, Yongyi Mao, and Yuheng Bu
NeurIPS 2025

CHPO: Constrained Hybrid-action Policy Optimization for Reinforcement Learning
Ao Zhou, Jiayi Guan, Li Shen, Fan Lu, Sanqing Qu, Junqiao Zhao, Ziqiao Wang, Ya Wu, and Guang Chen
NeurIPS 2025

MutualVPR: A Mutual Learning Framework for Resolving Supervision Inconsistencies via Adaptive Clustering
Qiwen Gu, Xufei Wang, Junqiao Zhao, Siyue Tao, Tiantian Feng, Ziqiao Wang, and Guang Chen
NeurIPS 2025

Distributional Information Embedding: A Framework for Multi-bit Watermarking
Haiyun He, Yepeng Liu, Ziqiao Wang, Yongyi Mao, and Yuheng Bu
APWDSIT 2025

Revisiting Weak-to-Strong Generalization in Theory and Practice: Reverse KL vs. Forward KL
Wei Yao, Wenkai Yang, Ziqiao Wang, Yankai Lin, and Yong Liu
ACL Findings 2025

Dynamic Task Vector Grouping for Efficient Multi-Task Prompt Tuning
Peiyi Zhang, Richong Zhang, Zhijie Nie, and Ziqiao Wang
ACL Findings 2025

Generalization in Federated Learning: A Conditional Mutual Information Framework
Ziqiao Wang, Cheng Long, and Yongyi Mao
ICML 2025

Preserving Label Correlation for Multi-label Text Classification by Prototypical Regularizations
Fanshuang Kong, Richong Zhang, Xiaohui Guo, Junfan Chen, and Ziqiao Wang
WWW 2025

LH-Mix: Local Hierarchy Correlation Guided Mixup over Hierarchical Prompt Tuning
Fanshuang Kong, Richong Zhang, and Ziqiao Wang
KDD 2025

Generalization Bounds via Conditional f-Information
Ziqiao Wang and Yongyi Mao
NeurIPS 2024

On f-Divergence Principled Domain Adaptation: An Improved Framework
Ziqiao Wang and Yongyi Mao
NeurIPS 2024

Two Facets of SDE Under an Information-Theoretic Lens: Generalization of SGD via Training Trajectories and via Terminal States
Ziqiao Wang and Yongyi Mao
UAI 2024

On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning
Fanshuang Kong, Richong Zhang, Ziqiao Wang, and Yongyi Mao
AAAI 2024

Cross-modal and Uni-modal Soft-label Alignment for Image-Text Retrieval
Hailang Huang, Zhijie Nie, Ziqiao Wang, and Ziyu Shang
AAAI 2024

Sample-Conditioned Hypothesis Stability Sharpens Information-Theoretic Generalization Bounds
Ziqiao Wang and Yongyi Mao
NeurIPS 2023

Tighter Information-Theoretic Generalization Bounds from Supersamples
Ziqiao Wang and Yongyi Mao
ICML 2023 (Oral, top 2.2% of submissions)

Information-Theoretic Analysis of Unsupervised Domain Adaptation
Ziqiao Wang and Yongyi Mao
ICLR 2023

Over-Training with Mixup May Hurt Generalization
Zixuan Liu*, Ziqiao Wang* (equal contribution), Hongyu Guo, and Yongyi Mao
ICLR 2023

On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications
Ziqiao Wang and Yongyi Mao
ICLR 2022