Cross-lingual Zero- And Few-shot Hate Speech Detection Utilising Frozen Transformer Language Models And AXEL


Lukas Stappen, Fabian Brunn, Björn Schuller. arXiv 2020 – 52 citations


Detecting hate speech, especially in low-resource languages, is a non-trivial challenge. To tackle this, we developed a tailored architecture based on frozen, pre-trained Transformers to examine cross-lingual zero-shot and few-shot learning, in addition to uni-lingual learning, on the HatEval challenge data set. With our novel attention-based classification block AXEL, we demonstrate highly competitive results on the English and Spanish subsets. We also re-sample the English subset, enabling additional, meaningful comparisons in the future.
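The abstract describes AXEL only as an attention-based classification block sitting on top of frozen, pre-trained Transformer features. As a rough illustration of that general idea (not the authors' actual architecture), the following is a minimal NumPy sketch of an attention-pooling classification head: a learned query attends over frozen token embeddings, and only the head's parameters would be trained. All names, shapes, and design choices here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionPoolClassifier:
    """Hypothetical attention-based classification head over frozen
    Transformer token embeddings (illustrative sketch, not the paper's AXEL)."""

    def __init__(self, hidden_dim, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        # Only these parameters would be trained; the Transformer
        # producing the token embeddings stays frozen.
        self.query = rng.standard_normal(hidden_dim) / np.sqrt(hidden_dim)
        self.W = rng.standard_normal((hidden_dim, num_classes)) / np.sqrt(hidden_dim)
        self.b = np.zeros(num_classes)

    def forward(self, token_embeddings):
        # token_embeddings: (seq_len, hidden_dim) from a frozen encoder
        scores = token_embeddings @ self.query   # (seq_len,) attention scores
        weights = softmax(scores)                # attention distribution over tokens
        pooled = weights @ token_embeddings      # (hidden_dim,) weighted summary
        logits = pooled @ self.W + self.b        # (num_classes,)
        return softmax(logits)                   # class probabilities

# Usage with dummy "frozen" embeddings: 12 tokens, 768-dim, binary hate/not-hate
emb = np.random.default_rng(1).standard_normal((12, 768))
probs = AttentionPoolClassifier(768, 2).forward(emb)
```

In a cross-lingual zero-/few-shot setup, keeping the encoder frozen means only this small head needs labelled data, which is one plausible motivation for such a design in low-resource languages.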
