Jiaming Ji (吉嘉铭)

PhD Student at Peking University

AI Alignment
AI Safety
Large Models

Email: jiamg.ji at gmail dot com

[Google Scholar][GitHub]

About me

I am a PhD student at the Institute of Artificial Intelligence, Peking University, advised by Prof. Yaodong Yang (both a mentor and a friend in my life). I am also a visiting scholar at the Hong Kong University of Science and Technology, working under the guidance of the renowned computer scientist Prof. Yike Guo and collaborating closely with Sirui Han. My research focuses on reinforcement learning and the safety and value alignment of large language models.

Beyond academic research, I place strong emphasis on the practical deployment of large models. I have contributed to the open-source release and real-world deployment of several large-scale models, including Baichuan2, the Hong Kong AI model HKGAI-V1, the Pengcheng Brain model, and the medical triage model MedGuide. Notably, MedGuide has been deployed in hospitals and is actively supporting doctors and nurses in emergency triage, something I take great pride in beyond my academic achievements.

In 2025, I was honored to be selected as an Apple Scholar in AI/ML, mentored by Rin Metcalf Susa and Natalie Mackraz. In 2024, I received funding from the first cohort of the National Natural Science Foundation of China's Youth Student Basic Research Project (PhD track), as the sole awardee from Peking University in the field of intelligence. Prior to my PhD, I conducted research on neuromorphic computing and brain-computer interfaces with Prof. Gang Pan at Zhejiang University. I began my research journey with safe reinforcement learning and won the championship in the NeurIPS 2022 MyoChallenge for robotic dexterous manipulation.

Jiaming Ji is a PhD student at the Institute of Artificial Intelligence, Peking University, advised by Prof. Yaodong Yang. His research covers reinforcement learning and the safety and value alignment of large models. He has published more than ten papers, including oral and spotlight presentations, at top computer science conferences and journals, with over 2,200 Google Scholar citations, more than 5 million cumulative downloads of his open-source models, and over 20,000 stars across his open-source GitHub projects. He was funded by the first cohort of the National Natural Science Foundation of China's Youth Student Basic Research Project (the only recipient in the intelligence discipline at Peking University in 2023), and received the Apple Scholars in AI/ML PhD Fellowship (one of only two recipients nationwide), Peking University's highest doctoral research honor, the President's Scholarship, and the inaugural Chinese Institute of Electronics-Tencent Doctoral Research Incentive Program (17 recipients nationwide). He won the championship in the NeurIPS 2022 robotic dexterous manipulation competition (MyoChallenge). His research and models have been cited by OpenAI and Meta and covered by MIT Technology Review.

News


Research Summary

Currently, I focus on AI safety and alignment.

Honors

Preprints

SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Safe Reinforcement Learning
Borong Zhang*, Yuhao Zhang*, Jiaming Ji*, Yingshan Lei, Josef Dai, Yuanpei Chen, Yaodong Yang
arXiv, 2025
[Paper]
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
Jiaming Ji*, Jiayi Zhou*, Hantao Lou, Boyuan Chen, Donghai Hong, Xuyao Wang, Wenqi Chen, Kaile Wang, Rui Pan, Jiahao Li, Mohan Wang, Josef Dai, Tianyi Qiu, Hua Xu, Dong Li, Weipeng Chen, Jun Song, Bo Zheng, Yaodong Yang
arXiv, 2025
[Paper][Code][Data]
AI Alignment: A Comprehensive Survey
Jiaming Ji*, Tianyi Qiu*, Boyuan Chen*, Borong Zhang*, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan Yee Ng, Juntao Dai, Xuehai Pan, Aidan O'Gara, Yingshan Lei, Hua Xu, Brian Tse, Jie Fu, Stephen McAleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, Wen Gao
arXiv, 2024
[Paper]
Baichuan 2: Open Large-scale Language Models
Jiaming Ji and other authors (alphabetical order)
arXiv, 2023
[Paper][Code]

Publications (* denotes equal contribution)

2025

Language Models Resist Alignment: Evidence From Data Compression
Jiaming Ji*, Kaile Wang*, Tianyi Qiu*, Boyuan Chen*, Jiayi Zhou*, Changye Li, Hantao Lou, Yaodong Yang
ACL 2025 Main
[Paper]
PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference
Jiaming Ji*, Donghai Hong*, Borong Zhang, Boyuan Chen, Josef Dai, Boren Zheng, Tianyi Qiu, Boxun Li, Yaodong Yang
ACL 2025 Main
[Paper][Data]
Reward Generalization in RLHF: A Topological Perspective
Tianyi Qiu*, Fanzhi Zeng*, Jiaming Ji*, Dong Yan*, Kaile Wang, Jiayi Zhou, Yang Han, Josef Dai, Xuehai Pan, Yaodong Yang
ACL 2025 Findings
[Paper]
SAE-V: Interpreting Multimodal Models for Enhanced Alignment
Hantao Lou*, Changye Li*, Jiaming Ji, Yaodong Yang
ICML 2025
[Paper]
Revolutionizing health care: The transformative impact of large language models in medicine
Kuo Zhang, Xiangbin Meng, Xiangyu Yan, Jiaming Ji, ... , Wenyao Wang, Jiarong Li, Ming-Qi Zheng, Yaodong Yang, Yi-Da Tang
Journal of Medical Internet Research, 2025
[Paper]
Stream Aligner: Efficient Sentence-Level Alignment via Distribution Induction
Hantao Lou, Jiaming Ji, Kaile Wang, Yaodong Yang
AAAI 2025
[Paper]
Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback
Jiayi Zhou*, Jiaming Ji*, Juntao Dai, Yaodong Yang
AAAI 2025 Oral
[Paper]

2024

OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research
Jiaming Ji*, Jiayi Zhou*, Borong Zhang*, Juntao Dai, Xuehai Pan, Ruiyang Sun, Weidong Huang, Yiran Geng, Mickel Liu, Yaodong Yang
JMLR 2024 (among the top 15–20 open-source AI systems papers accepted per year)
[Paper]
Aligner: Efficient Alignment by Learning to Correct
Jiaming Ji*, Boyuan Chen*, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Juntao Dai, Yaodong Yang
NeurIPS 2024 Oral
[Paper][Code][Data]
SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset
Juntao Dai, Tianle Chen, Xuyao Wang, Ziran Yang, Taiye Chen, Jiaming Ji, Yaodong Yang
NeurIPS 2024
[Paper]
ProgressGym: Alignment with a Millennium of Moral Progress
Tianyi Qiu*, Yang Zhang*, Xuchuan Huang, Jasmine Xinze Li, Jiaming Ji, Yaodong Yang
NeurIPS 2024
[Paper]
Safe RLHF: Safe Reinforcement Learning from Human Feedback
Josef Dai*, Xuehai Pan*, Ruiyang Sun*, Jiaming Ji*, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang
ICLR 2024
[Paper][Code]
SafeDreamer: Safe Reinforcement Learning with World Models
Weidong Huang*, Jiaming Ji*, Chunhe Xia*, Borong Zhang, Yaodong Yang
ICLR 2024
[Paper][Code]

2023

Heterogeneous-Agent Reinforcement Learning
Yifan Zhong*, Jakub Grudzien Kuba*, Xidong Feng*, Siyi Hu, Jiaming Ji, Yaodong Yang
TPAMI 2023
[Paper][Code]
Bi-DexHands: Towards Human-Level Bimanual Dexterous Manipulation
Yuanpei Chen, Yiran Geng, Fangwei Zhong, Jiaming Ji, Jiechuan Jiang, Zongqing Lu, Hao Dong, Yaodong Yang
TPAMI 2023
[Paper][Code]
VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning
Jiayi Guan, Guan Chen, Jiaming Ji, Long Yang, Ao Zhou, Zhijun Li, Changjun Jiang
NeurIPS 2023
[Paper][Code]
Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
Jiaming Ji*, Borong Zhang*, Jiayi Zhou*, Xuehai Pan, Weidong Huang, Ruiyang Sun, Yiran Geng, Yifan Zhong, Juntao Dai, Yaodong Yang
NeurIPS 2023
[Paper][Code]
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset
Jiaming Ji*, Mickel Liu*, Juntao Dai*, Xuehai Pan, Chi Zhang, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, Yaodong Yang
NeurIPS 2023
[Paper][Code][Data]
Augmented Proximal Policy Optimization for Safe Reinforcement Learning
Juntao Dai*, Jiaming Ji*, Long Yang, Qian Zheng, Gang Pan
AAAI 2023
[Paper]

2022

MyoChallenge 2022: Learning contact-rich manipulation using a musculoskeletal hand
Vittorio Caggiano, Guillaume Durandau, Huawei Wang, Alberto Chiappa, Alexander Mathis, Pablo Tano, Nisheet Patel, Alexandre Pouget, Pierre Schumacher, Georg Martius, Daniel Haeufle, Yiran Geng, Boshi An, Yifan Zhong, Jiaming Ji, Yuanpei Chen, Hao Dong, Yaodong Yang, Rahul Siripurapu, Luis Eduardo Ferro Diez, Michael Kopp, Vihang Patil, Sepp Hochreiter, Yuval Tassa, Josh Merel, Randy Schultheis, Seungmoon Song, Massimo Sartori, Vikash Kumar
NeurIPS 2022 Competition Track
[Paper]
Constrained Update Projection Approach to Safe Policy Optimization
Long Yang*, Jiaming Ji*, Juntao Dai, Linrui Zhang, Binbin Zhou, Pengfei Li, Yaodong Yang, Gang Pan
NeurIPS 2022
[Paper][Code]

Services

Teaching Assistant