Research
I'm interested in privacy-preserving machine learning and large language models (LLMs).
(* indicates equal contribution)
Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning
Fengyu Gao*, Ruida Zhou*, Tianhao Wang, Cong Shen, Jing Yang
ICLR 2025 [Code]
We generate differentially private synthetic few-shot examples for in-context learning by leveraging data clustering patterns.
Federated Online Prediction from Experts with Differential Privacy: Separations and Regret Speed-ups
Fengyu Gao, Ruiquan Huang, Jing Yang
NeurIPS 2024
We study differentially private federated online prediction from experts, achieving regret speed-ups under stochastic adversaries and certain oblivious adversaries, with lower bounds showing fundamental limits.
Federated Q-Learning: Linear Regret Speedup with Low Communication Cost
Zhong Zheng, Fengyu Gao, Lingzhou Xue, Jing Yang
ICLR 2024
We study model-free federated Q-learning for tabular MDPs, achieving linear regret speed-up with logarithmic communication cost.