Yue Dong

Assistant Professor

University of California, Riverside

Hi, I am Yue (/yoo-eh/) Dong, an assistant professor of computer science and engineering at the University of California, Riverside. I obtained my PhD in Computer Science at McGill University and Mila, supervised by Dr. Jackie Cheung. I was fortunate to intern at Google AI, AI2, Microsoft, and Noah’s Ark Lab during my PhD.

My research interests include natural language processing, machine learning, and artificial intelligence. I lead the Natural Language Processing group at UCR, which develops natural language understanding and generation systems that are controllable, trustworthy, and efficient.

Prospective Students (Updated Oct. 2024): Thank you for your interest! I have two co-advised openings for self-motivated PhD candidates for Fall 2025. You may apply directly through the official UC Riverside website. You may also fill out the UCRNLP PhD application form. Due to the volume of emails, I will only respond to inquiries that include a completed form submission. I currently do not have positions for master’s students.

PhD Thesis | Research Statement | Admissions (招生)

Interests
  • Machine learning
  • Natural language processing
  • Natural language generation
  • Automatic summarization
  • Trustworthy and efficient AI
Education
  • PhD in Computer Science, 2022

    McGill University

  • MSc & BSc in Mathematics, 2016

    University of Ottawa

  • Clinical Medicine, 2009

    Xi'an Jiaotong University (西安交通大学)

Recent News

All news»

Students

Full List»

Welcome to the UCR NLP Lab!

Lab Members

Director

Yue Dong, assistant professor of computer science and engineering

PhD Students

Recent Publications

(2023). Inverse Reinforcement Learning for Text Summarization. In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP).


(2022). Faithful to the Document or to the World? Mitigating Hallucinations via Entity-linked Knowledge in Abstractive Summarization. In Findings of the 2022 Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP).


(2022). Learning with Rejection for Abstractive Text Summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP).


Recent Services

All Services»

Lead organizer

Co-organizer

Recent Talks

All Talks»

Invited talks

  • [11/2023] Exploring Safety in Large Language Models (YouTube, 40:30) | News

    • 80th Annual Convention of Jiaotong University Alumni Association in SoCal (交通大学南加州校友会80周年庆), Los Angeles, CA, US
  • [11/04/2023] AI and Large Language Models

    • UCR Family Day/Homecoming seminar, Riverside, CA, US
  • [08/2023] Fast Text Generation with Text-Editing Models

    • KDD Tutorial, Long Beach, CA, US

Recent Awards

All Awards»

Funding:

  • NSF CISE MSI, Co-PI, Jan. 2025 - Dec. 2027

  • UTCRS, Co-PI, June 2024 - June 2025

  • Regents Faculty Fellowship grant, PI, July 2023 - June 2025

  • UCR OASIS Internal Funding Awards, PI, June 2023 - June 2024

Academic Awards:

  • Best Paper Award, Southern California NLP Symposium, 2023

  • Alexander Graham Bell Canada Graduate Scholarship - Doctoral (CGS D) - Accepted, 2018-2019

  • Best Paper Award, Canadian Conference on Artificial Intelligence

  • NSERC Postgraduate Scholarship - Doctoral (PGS D) - Accepted, 2017-2018

  • FRQNT Doctoral Scholarship - Declined (ranked first among all 2016 applicants in mathematics), 2016

  • FRQNT Master’s Research Scholarship, 2016

  • NSERC Canada Graduate Scholarships - CGS Master’s, 2015

  • University of Ottawa Excellence Scholarship, 2015 - 2016

  • NSERC Undergraduate Student Research Awards (USRA), Summer 2014

  • Dean’s Merit Scholarships - Faculty of Science, University of Ottawa, 2014

  • University of Ottawa Women’s Summer Research Award, Summer 2013

  • University of Ottawa Work/Study Research Award, Summer 2012

Contact