Publications

2024

Peer-reviewed

[28] IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models

Haz Sameen Shahgir, Khondker Salman Sayeed, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yue Dong, Rifat Shahriyar

Conference on Language Modeling (COLM) 2024

[27] Cross-Task Defense: Instruction-Tuning LLMs for Content Safety

Yu Fu, Wen Xiao, Jia Chen, Jiachen Li, Evangelos Papalexakis, Aichi Chien, Yue Dong

TrustNLP Workshop @ NAACL 2024

[26] Safety Alignment in NLP Tasks: Weakly Aligned Summarization as an In-Context Attack

Yu Fu, Yufei Li, Wen Xiao, Cong Liu, Yue Dong

ACL 2024

[25] Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks

Haz Sameen Shahgir, Xianghao Kong, Greg Ver Steeg, Yue Dong

ACL Findings 2024

[24] Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset

Brooklyn Sheppard, Anna Richter, Allison Cohen, Elizabeth Allyn Smith, Tamara Kneese, Carolyne Pelletier, Ioana Baldini, Yue Dong

ACL Findings 2024

[23] Source-Free Domain Adaptation for Question Answering with Masked Self-training

Maxwell Yin, Boyu Wang, Yue Dong, Charles Ling

TACL 2024

[22] PAT-Questions: A Self-Updating Benchmark for Present-Anchored Temporal Question-Answering

Jannat Ara Meem, Muhammad Shihab Rashid, Yue Dong, Vagelis Hristidis

ACL Findings 2024

[21] EcoRank: Budget-Constrained Text Re-ranking Using Large Language Models

Muhammad Shihab Rashid, Jannat Ara Meem, Yue Dong, Vagelis Hristidis

ACL Findings 2024

[20] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models

Erfan Shayegani, Yue Dong, Nael Abu-Ghazaleh

ICLR 2024

[19] Watermarking Conditional Text Generation for AI Detection: Unveiling Challenges and a Semantic-Aware Watermark Remedy

Yu Fu, Deyi Xiong, Yue Dong

AAAI 2024

Pre-prints

Y Luo, H Patel, Y Fu, D Ahn, J Chen, Yue Dong, EE Papalexakis

arXiv preprint arXiv:2406.17261

MS Rashid, JA Meem, Yue Dong, V Hristidis

arXiv preprint arXiv:2406.07136

Y Zhang, B Gao, T Liu, K Lu, W Xiong, Yue Dong, B Chang, J Hu, W Xiao

arXiv preprint arXiv:2406.02069

T Chakraborty, E Shayegani, Z Cai, N Abu-Ghazaleh, MS Asif, Yue Dong

arXiv preprint arXiv:2406.02575

Y Li, S Chen, Y Guo, W Yang, Yue Dong, C Liu

arXiv preprint arXiv:2402.05939

MT Tahmid, HS Shahgir, S Mahbub, Yue Dong, MS Bayzid

bioRxiv

L Yu, M Cao, JCK Cheung, Yue Dong

arXiv preprint arXiv:2403.18167

2023

[18] Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks

[17] Inverse Reinforcement Learning for Text Summarization

2022

[16] Learning with Rejection for Abstractive Text Summarization

[15] Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization

[14] Faithful to the Document or to the World? Mitigating Hallucinations via Entity-Linked Knowledge in Abstractive Summarization

2021

[13] On-the-Fly Attention Modulation for Neural Generation

[12] Bringing Structure into Summaries: A Faceted Summarization Dataset for Long Scientific Documents

[11] Discourse-Aware Unsupervised Summarization of Long Scientific Documents

2020

[10] Factual Error Correction for Abstractive Summarization Models

[9] Multi-XScience: A Large-Scale Dataset for Extreme Multi-Document Summarization of Scientific Articles

[8] Multi-Fact Correction in Abstractive Text Summarization

2019

[7] Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses

[6] Learning Multi-Task Communication with Message Passing for Sequence Learning

[5] EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing

Before 2018

[4] BanditSum: Extractive Summarization as a Contextual Bandit

[3] Threaded Ensembles of Autoencoders for Stream Learning

[2] A Hierarchical Neural Attention-Based Text Classifier

[1] Threaded Ensembles of Supervised and Unsupervised Neural Networks for Stream Learning

Yue Dong
Assistant Professor

Yue Dong is an assistant professor of computer science and engineering at the University of California, Riverside. Her research interests include natural language processing, machine learning, and artificial intelligence. She leads the Natural Language Processing group, which develops natural language understanding and generation systems that are controllable, trustworthy, and efficient.