I am a first-year Ph.D. student in the School of Intelligence Science and Technology, Peking University.
Currently, I am focusing on the Out-of-Distribution (OOD) generalization of machine learning models across a variety of scenarios, including Domain Generalization (DG) in computer vision tasks, graph OOD generalization, and the generalization of LLMs.
Ph.D. Candidate in Machine Learning and Computer Vision, School of Intelligence Science and Technology, Peking University (PKU). Advisors: Xianghua Ying, Yisen Wang. [2023.9 ~ 2028.9 (expected)]
B.S. Student [2019.9 ~ 2023.6]
Solving Domain Generalization via Adversarial Training with Structured Priors. [pdf] [slide] with Yifei Wang, Hong Zhu, advised by Yisen Wang.
We investigate the limited OOD performance gains of conventional sample-wise Adversarial Training (AT) and empirically validate that the low-rank structure in the perturbations of Universal Adversarial Training (UAT) is beneficial to OOD generalization. We further propose two low-rank structured AT algorithms to exploit this structure, and demonstrate their effectiveness both empirically and theoretically.
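The paper's two algorithms are not reproduced here, but the core idea of keeping a universal (sample-shared) perturbation low-rank can be sketched in a few lines. This is a minimal illustrative sketch, assuming a toy linear model with MSE loss and image-shaped inputs; the function names, rank choice, and step size are my own assumptions, not the paper's method.

```python
import numpy as np

def rank_r_project(delta, r):
    # Best rank-r approximation of a matrix-shaped perturbation (SVD truncation).
    U, s, Vt = np.linalg.svd(delta, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def uat_step(X, y, w, delta, lr_adv=0.1, rank=2):
    # One inner ascent step of universal adversarial training on a linear
    # model with MSE loss (illustrative, not the paper's algorithm):
    # the same perturbation delta is added to every sample, updated by
    # gradient ascent on the loss, then projected back to low rank.
    n, d1, d2 = X.shape
    Xp = (X + delta).reshape(n, -1)            # apply shared delta to all samples
    resid = Xp @ w - y                         # residuals, shape (n,)
    # Gradient of mean squared error w.r.t. the shared perturbation.
    grad = (2.0 / n) * resid.sum() * w.reshape(d1, d2)
    delta = delta + lr_adv * grad              # ascent: increase the loss
    return rank_r_project(delta, rank)         # enforce the low-rank structure
```

In a full training loop, this inner ascent step would alternate with ordinary descent steps on the model weights against the perturbed inputs.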
Welcome to my paper reading list & notes on OOD Generalization and some other topics! [link]