Jiaming Ji (吉嘉铭)

Email: jiamg.ji@gmail.com


Hello! I’m a PhD student at the Institute of Artificial Intelligence, Peking University, advised by Prof. Yaodong Yang (both a good teacher and a helpful friend in my life). In 2024, I was honored to receive funding from the first batch of the National Natural Science Foundation’s Youth Student Basic Research Project (for Ph.D. students), as the sole recipient from Peking University in the field of intelligence. Before this, I conducted research on safe reinforcement learning and won the championship of the NeurIPS 2022 MyoChallenge for robotic dexterous manipulation. Currently, I focus on AI Safety and Alignment. Recently:

  • AI Alignment: Given the biases and discrimination that may exist in pre-training data, large models (LMs) may exhibit unintended behaviors. I am interested in alignment methods (e.g., Reinforcement Learning from Human Feedback (RLHF)) and post-hoc alignment methods that ensure the safety and trustworthiness of LLMs.
  • Theoretical Explanations and Mechanism Design for Alignment: Effectively aligning AI systems (e.g., LLMs) to ensure consistency with human intentions and values (though some views question whether universal values exist) is a significant current challenge. I am particularly interested in establishing the feasibility of these alignment methods through both theoretical analysis and practical mechanisms.
  • Applications (LM + X): I am interested in the application of large models to various domains, such as healthcare and education, and in the potential impact of the rapid industry development and iteration that large models bring.

Publications | Google Scholar | Twitter | Talks | Services | GitHub