
Detect Rumors In Microblog Posts For Low-resource Domains Via Adversarial Contrastive Learning

Hongzhan Lin, Jing Ma, Liangliang Chen, Zhiwei Yang, Mingfei Cheng, Guang Chen. Findings of the Association for Computational Linguistics: NAACL 2022 – 41 citations


Massive false rumors emerging alongside breaking news or trending topics severely hinder the spread of truth. Existing rumor detection approaches achieve promising performance on yesterday's news, since sufficient corpora collected from the same domain are available for model training. However, they perform poorly at detecting rumors about unforeseen events, especially those propagated in different languages, due to the lack of training data and prior knowledge (i.e., low-resource regimes). In this paper, we propose an adversarial contrastive learning framework to detect rumors by adapting the features learned from well-resourced rumor data to those of the low-resource domain. Our model explicitly overcomes restrictions of domain and/or language usage via language alignment and a novel supervised contrastive training paradigm. Moreover, we develop an adversarial augmentation mechanism to further enhance the robustness of low-resource rumor representations. Extensive experiments on two low-resource datasets collected from real-world microblog platforms demonstrate that our framework achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.
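The abstract names two training ingredients: a supervised contrastive objective that pulls together same-label rumor representations across the well-resourced and low-resource domains, and an adversarial augmentation step that hardens those representations. The sketch below is an illustrative approximation of these two pieces, not the authors' released code: the loss follows the standard supervised contrastive formulation (Khosla et al.), and the augmentation is an FGM-style gradient perturbation of the embeddings. All names and hyperparameters (`temperature`, `epsilon`) are assumptions for illustration.

```python
# Illustrative sketch only -- not the paper's official implementation.
import torch
import torch.nn.functional as F

def sup_con_loss(features: torch.Tensor, labels: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over L2-normalized features [N, D].
    Same-label examples (e.g., rumors from source and target domains)
    are treated as positives and pulled together."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature          # [N, N] similarities
    n = features.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=features.device)
    # Positives: pairs sharing a class label, excluding self-pairs.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # Log-softmax over all non-self pairs for each anchor.
    sim = sim.masked_fill(~not_self, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Mean log-probability of positives per anchor (anchors with no
    # positives are dropped from the average).
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts)
    return loss[pos_mask.sum(1) > 0].mean()

def fgm_perturb(embeddings: torch.Tensor, loss: torch.Tensor,
                epsilon: float = 1.0) -> torch.Tensor:
    """FGM-style adversarial augmentation: shift embeddings along the
    gradient of the loss to produce harder training examples."""
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    return embeddings + epsilon * grad / (grad.norm() + 1e-12)
```

In a training loop one would encode a mixed-domain batch, compute `sup_con_loss` on the clean embeddings, call `fgm_perturb` to obtain adversarial variants, and add the contrastive loss computed on the perturbed embeddings, so the learned rumor representations stay stable under small adversarial shifts.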
