Yue Dong

Assistant Professor

University of California, Riverside

Hi, I am Yue (/yoo-eh/) Dong, an assistant professor of computer science and engineering at the University of California, Riverside. I obtained my PhD in Computer Science at McGill University and Mila, supervised by Dr. Jackie Cheung. I was fortunate to intern at Google AI, AI2, Microsoft, and Noah’s Ark Lab during my PhD.

My research interests include natural language processing, machine learning, and artificial intelligence. I lead the Natural Language Processing group at UCR, which develops natural language understanding and generation systems that are controllable, trustworthy, and efficient.

Prospective Students (updated March 2024): Thank you for your interest! I have two co-advised openings for self-motivated PhD candidates in NLP+RL/theory and NLP+CV/robotics, starting Fall 2024 or Winter 2025. If you are interested in working with me, please fill out the UCRNLP PhD application form to highlight your potential in ML/NLP research and to assess the alignment with my research interests. Due to the large volume of email, I will only respond to PhD inquiries accompanied by a completed form submission.

PhD Thesis | Research Statement | Admissions (招生)

  • Machine learning
  • Natural language processing
  • Natural language generation
  • Automatic summarization
  • Trustworthy and efficient AI
  • PhD in Computer Science, 2022

    McGill University

  • MSc & BSc in Mathematics, 2016

    University of Ottawa

  • Clinical Medicine, 2009

    Xi'an Jiaotong University (西安交通大学)

Recent News


Welcome to UCR NLP Lab!

Lab Members


Yue Dong, assistant professor of computer science and engineering

PhD Students

Recent Publications

(2023). Inverse Reinforcement Learning for Text Summarization. In Findings of the Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP).

Cite | arXiv | ACL Anthology

(2022). Faithful to the Document or to the World? Mitigating Hallucinations via Entity-linked Knowledge in Abstractive Summarization. In Findings of the Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP).

Cite | arXiv | ACL Anthology

(2022). Learning with Rejection for Abstractive Text Summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Cite | ACL Anthology

Recent Talks


Invited talks

  • [11/2023] Exploring Safety in Large Language Models (YouTube, 40:30) | News

    • 80th Annual Convention of Jiaotong University Alumni Association in SoCal (交通大学南加州校友会80周年庆), Los Angeles, CA, US
  • [11/04/2023] AI and Large Language Models

    • UCR Family Day/Homecoming seminar, Riverside, CA, US
  • [08/2023] Fast Text Generation with Text-Editing Models

    • KDD Tutorial, Long Beach, CA, US

Recent Awards



  • Regents Faculty Fellowship grant, July 2023

  • UCR OASIS Internal Funding Awards, June 2023

Academic Awards:

  • Best Paper Award, Southern California NLP Symposium, 2023

  • Alexander Graham Bell Canada Graduate Scholarship - Doctoral (CGS D) - Accepted, 2018-2019

  • Best Paper Award, Canadian Conference on Artificial Intelligence

  • NSERC Postgraduate Scholarship - Doctoral (PGS D) - Accepted, 2017-2018

  • FRQNT Doctoral Scholarship - Declined (ranked first among all 2016 applicants in mathematics), 2016

  • FRQNT Master’s Research Scholarship, 2016

  • NSERC Canada Graduate Scholarship - Master’s (CGS M), 2015

  • University of Ottawa Excellence Scholarship, 2015 - 2016

  • NSERC Undergraduate Student Research Awards (USRA), Summer 2014

  • Dean’s Merit Scholarships - Faculty of Science, University of Ottawa, 2014

  • University of Ottawa Women’s Summer Research Award, Summer 2013

  • University of Ottawa Work/Study Research Award, Summer 2012

  • First Prize, Mathematics Competition, Shaanxi Province, China, 2008