Fengyu Gao

I'm a fourth-year PhD student in the Department of Computer Science at the University of Virginia, advised by Prof. Jing Yang. Previously, I received my Bachelor's degree in Computer Science and Technology from the University of Science and Technology of China (USTC) in 2022.

My research interests span several topics in machine learning, including privacy-preserving machine learning, federated learning, and reinforcement learning. Recently, I have been particularly interested in privacy issues in large language models (LLMs), including (but not limited to) privacy risks during inference (e.g., in-context learning) and post-training (e.g., preference alignment).

Email  /  Google Scholar  /  LinkedIn

profile photo

Publications

(* indicates equal contribution)

HeteroFedSyn: Differentially Private Tabular Data Synthesis for Heterogeneous Federated Settings

Xiaochen Li, Fengyu Gao, Xizixiang Wei, Tianhao Wang, Cong Shen, Jing Yang

In the ACM SIGMOD International Conference on Management of Data (SIGMOD) 2026

TL;DR: Differentially private tabular data synthesis for the horizontal federated setting, achieving utility comparable to centralized synthesis.

Data-Adaptive Differentially Private Prompt Synthesis for In-Context Learning

Fengyu Gao*, Ruida Zhou*, Tianhao Wang, Cong Shen, Jing Yang

In International Conference on Learning Representations (ICLR) 2025      [Code]

TL;DR: Differentially private synthetic few-shot example generation for in-context learning by leveraging data clustering patterns.

Federated Online Prediction from Experts with Differential Privacy: Separations and Regret Speed-ups

Fengyu Gao, Ruiquan Huang, Jing Yang

In Advances in Neural Information Processing Systems (NeurIPS) 2024

TL;DR: Differentially private federated online prediction from experts, achieving regret speed-up under stochastic and special oblivious adversaries, and establishing lower bounds.

Federated Q-Learning: Linear Regret Speedup with Low Communication Cost

Zhong Zheng, Fengyu Gao, Lingzhou Xue, Jing Yang

In International Conference on Learning Representations (ICLR) 2024

TL;DR: Model-free federated Q-learning for tabular MDPs, achieving linear regret speed-up with logarithmic communication cost.

Honors and Awards

Top Reviewer, NeurIPS, 2025

Outstanding Student Scholarship, USTC, 2019 - 2021

Overseas Alumni Foundation Outstanding Student Scholarship, USTC, 2019

Outstanding Freshman Scholarship, USTC, 2018

Service

Conference Reviewer: ICLR 2025, 2026; NeurIPS 2025; ICML 2026

Teaching Assistant: CS 4771 Reinforcement Learning, Spring 2026, UVA; CS 4501 Law and AI, Fall 2025, UVA; Algebraic Structure, Spring 2021, USTC; Analog and Digital Circuits, Fall 2020, USTC

Hackathon Judge: Hack to the Beat, Women in Computing Sciences (WiCS), University of Virginia, 2026


Thanks to Dr. Jon Barron for sharing the source code of his homepage.