Ye Yuan 袁野

"To ordain conscience for Heaven and Earth. To secure life and fortune for the people. To continue lost teachings for past sages. To establish peace for all future generations." - Zai Zhang.

"为天地立心,为生民立命,为往圣继绝学,为万世开太平" - 张载.

I am a Ph.D. candidate in Computer Science at McGill University and Mila - Quebec AI Institute. It is my great honor to work with and be supervised by Professor Xue (Steve) Liu, and I am also grateful to have Professor Adriana Romero Soriano and Professor Gintare Karolina Dziugaite as my supervisory committee members.
Before that, I received my Bachelor of Science degree in Honours Computer Science from McGill University.

What truly captivates me is the potential of intelligent systems and generative modelling to assist humans. How can artificial intelligence accurately and flawlessly complete tasks assigned by humans? How can generative models be applied to specific tasks? How do generative models mitigate the issues suffered by the predominantly used traditional approaches? More specifically, my research concentrates on score-based generative algorithms and large language models and their applications in (i) addressing Offline Black-Box Optimization challenges, (ii) alleviating the challenges of knowledge-centric NLP tasks, developing a foundational knowledge model, and enabling automatic knowledge base construction, and (iii) improving the efficiency and efficacy of AI systems for real-world applications.

Beyond the academic community, I intern with and closely collaborate with researchers at Microsoft Research, Samsung Research America, Noah's Ark Lab Canada, and RBC (Royal Bank of Canada) Borealis AI.

I am also honored that my work has been recognized by the research community. I am fortunate to have received the DAAD AINeT Fellowship, the Bank of Montreal (BMO) Responsible AI Senior Scholar title, the BMO Responsible AI Fellowship, ICLR 2025 Financial Assistance, the NeurIPS 2023 Scholar Award, the McGill Faculty of Science Graduate Scholarship, and the McGill Graduate Excellence Award. Moreover, during my internship at Noah's Ark Lab Canada, I received the Outstanding Contribution Award in R&D Peripheral Fields, the Overseas Business Contribution Award, and the Canada Research Institute President's Spot Award.

My selected publications are listed below:

📑 Curriculum Vitae

💾 Research Statement

💯 Unofficial Transcript

News

Jun 2025

Our paper "Prompting Wireless Networks: Reinforced In-Context Learning for Power Control" was accepted by ICML 2025 ML4Wireless Workshop!

May 2025

My first-authored paper "Understanding 6G through Language Models: A Case Study on LLM-aided Structured Entity Extraction in Telecom Domain" was released!

May 2025

I started my summer research internship at RBC Borealis, working with Dr. Amin Shabani, Dr. Siqi Liu, and Dr. Jiawei (Eric) He.

Apr 2025

My (co)first-authored paper "Design Editing for Offline Model-based Optimization" was accepted by TMLR!

Apr 2025

I attended ICLR 2025 in Singapore.🇸🇬

Mar 2025

Our survey paper "Offline Model-Based Optimization: Comprehensive Review", a collaboration with Turing Award recipient Professor Yoshua Bengio, was released!

Mar 2025

My (co)first-authored paper "Design Editing for Offline Model-based Optimization" was accepted by ICLR 2025 DeLTa Workshop!

Mar 2025

Our paper "Large Language Models for Wireless Networks: An Overview from the Prompt Engineering Perspective" was accepted by IEEE WCM!

Feb 2025

I started a collaboration with Samsung Research America.

Jan 2025

I successfully passed the Ph.D. oral comprehensive exam and formally became a Ph.D. candidate!

Jan 2025

My (co)first-authored paper "ParetoFlow: Guided Flows in Multi-Objective Optimization" was accepted by ICLR 2025!

Dec 2024

Our paper "Generative AI as a service in 6g edge-cloud: Generation task offloading by in-context learning" was accepted by IEEE WCL!

Nov 2024

I attended EMNLP 2024 in Miami.🇺🇸

Nov 2024

My (co)first-authored paper "Learning to Extract Structured Entities Using Language Models" was accepted by EMNLP 2024 and selected as an oral presentation paper (top 7%🔥🔥🔥)!

Sep 2024

Our survey paper "Large language model (llm) for telecommunications: A comprehensive survey on principles, key techniques, and opportunities" was accepted by IEEE COMST!

Aug 2024

Our paper "Large Language Model (LLM)-enabled In-context Learning for Wireless Network Optimization: A Case Study of Power Control" was released!

Jul 2024

Our survey paper "Retrieval-Augmented Generation for Natural Language Processing: A Survey" was released!

Jun 2024

I started informal research visits to several professors in Nanjing, Hangzhou, Shanghai, Shenzhen, and Hong Kong, China.🇨🇳

Dec 2023

I attended my first in-person conference at NeurIPS 2023 in New Orleans.🇺🇸

Sep 2023

I received the BMO (Bank of Montreal) Responsible AI Fellowship and was awarded the title of BMO Responsible AI Senior Scholar.

Sep 2023

My first (co)first-authored paper "Importance-aware co-teaching for offline model-based optimization" was accepted by NeurIPS 2023! I was so excited to publish this paper at one of the best venues as a first-year Ph.D. student!

May 2023

I started my internship as an Associate Researcher at Noah's Ark Lab Canada, where I worked on accelerating LLM inference and on next-generation language model architectures.

Mar 2023

I started a close collaboration with Microsoft Research's Alexandria team, working with Dr. Bhaskar Mitra, Dr. James Hensman, Liana Mikaelyan, Pavel Myshkov, Dr. Alexander Meulemans (during his internship at MSR), Dr. Jan Tönshoff (during his internship at MSR), Dr. Taketomo Isazawa, and Dr. Tom Minka. We work on a knowledge foundation model and automatic knowledge base construction.

Jan 2023

I started my first research project on Offline Black Box Optimization under the mentorship of Can (Sam) Chen.

Dec 2022

I received my Bachelor of Science degree in Honours Computer Science from McGill University.

Oct 2022

I received the official offer from McGill University and decided to pursue a direct-entry Ph.D. there with Professor Xue (Steve) Liu as my supervisor.

May 2022

I joined Professor Xue (Steve) Liu's CPS Lab as a research assistant and regularly attended the group meetings.

Jan 2022

I started two research projects, on Multi-agent Reinforcement Learning and on Domain Adaptation for Human Activity Recognition, with Dr. Jikun (Jaxon) Kang and Professor Xi (Alex) Chen, both former students of Professor Xue (Steve) Liu.

Jan 2022

I took COMP 597 (Applications of Machine Learning in Real World Systems) at McGill University, a research-oriented course taught by Professor Xue (Steve) Liu. I was impressed by the students' research projects in the course and decided to apply for the Ph.D. program at McGill University.

Sep 2021

I realized I was not interested in the software engineering field and decided to pursue research in Machine Learning and Deep Learning. I then sent my first email to Professor Xue (Steve) Liu, who later became my Ph.D. supervisor.

May 2021

I started my first industrial internship as a software engineer at CAMLUNI Education, a startup company in Beijing. Since Beijing is a large city, it took me two hours to commute to the office every day. During that time, I taught myself the basics of Machine Learning and Deep Learning by watching Hung-yi Lee's lectures on YouTube.

Dec 2020

I met my first research mentor, Professor Jie Fu, in Montreal, and slowly began learning about the research career track.

Jun 2020

I started to learn French.

Aug 2019

I started my undergraduate study at McGill University.