Yihan Wang

Hi, I am a final-year Ph.D. candidate in Computer Science at UCLA, working with Prof. Cho-Jui Hsieh. I completed my B.Eng. degree at Tsinghua University in June 2020.
My research focuses on the robustness and generalization of machine learning models, with a recent emphasis on large language models. I work on projects that both interest me and benefit society. My work has been supported by an Amazon Fellowship.
My research specifically includes:
- Identifying and understanding limitations and potential risks in language model fine-tuning.
- Building machine learning models that are robust to adversarial attacks.
I’ve also worked on formal verification of neural networks during my early Ph.D. years.
If you are interested in my research, please feel free to email me to discuss it or explore potential collaborations.
selected publications
* indicates equal contribution.
- arXiv Preprint
- ICLR 2024: Two-stage LLM Fine-tuning with Less Specialization and More Generalization. In The Twelfth International Conference on Learning Representations, 2024.
- NeurIPS 2023: Universality and Limitations of Prompt Tuning. Advances in Neural Information Processing Systems, 2023.