Published in ISIT 2021, 2021
We investigate the benefits of coding for the Age of Information (AoI) and rate in memoryless broadcast channels.
Recommended citation: Chen, Xingran, Renpu Liu, Shaochong Wang, and Shirin Saeedi Bidokhti. (2021). "Timely Broadcasting in Erasure Networks: Age-Rate Tradeoffs." ISIT 2021.
Download Paper
Published in ISIT 2023, 2023
We prove that statistical heterogeneity in federated learning can enhance generalization performance under certain conditions.
Recommended citation: Renpu Liu, Jing Yang, and Cong Shen. (2023). "Exploiting Feature Heterogeneity for Improved Generalization in Federated Multi-task Learning." ISIT 2023.
Download Paper
Published in ICML 2024, 2024
This paper explores Federated Representation Learning (FRL) in the under-parameterized regime.
Recommended citation: Renpu Liu, Cong Shen, and Jing Yang. (2024). "Federated Representation Learning in the Under-Parameterized Regime." ICML 2024.
Download Paper
Published in AISTATS 2025, 2025
A personalized RLHF framework leveraging shared LoRA modules to capture both common and user-specific features.
Recommended citation: Renpu Liu, Peng Wang, Donghao Li, Cong Shen, and Jing Yang. (2025). "A Shared Low-Rank Adaptation Approach to Personalized RLHF." AISTATS 2025.
Download Paper
Published in ICLR 2025, 2025
We demonstrate that Transformers can implement LISTA-type learning-to-optimize algorithms for LASSO-based sparse recovery.
Recommended citation: Renpu Liu*, Ruida Zhou*, Cong Shen, and Jing Yang. (2025). "On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery." ICLR 2025.
Download Paper
Published in ISIT 2025, 2025
We apply In-Context Learning techniques to efficient spectrum sensing problems.
Recommended citation: Renpu Liu, Liwen Zhong, Wooram Lee, and Jing Yang. (2025). "In-Context Learning Based Efficient Spectrum Sensing." ISIT 2025.
Download Paper
Published in NeurIPS 2025, 2025
This paper investigates the theoretical foundations of how unlabeled data in the prompt improves In-Context Learning.
Recommended citation: Renpu Liu and Jing Yang. (2025). "Unlabeled Data Can Provably Enhance In-Context Learning of Transformers." NeurIPS 2025.
Download Paper
RA, Pennsylvania State University, 2022