
Few-shot Learning For Named Entity Recognition In Medical Text

Maximilian Hofer, Andrey Kormilitzin, Paul Goldberg, Alejo Nevado-Holgado. arXiv 2018 – 55 citations

Tags: Scalability

Deep neural network models have recently achieved state-of-the-art performance gains in a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017). However, these gains rely on the availability of large amounts of annotated examples, without which state-of-the-art performance is rarely achievable. This is especially inconvenient for the many NLP fields where annotated examples are scarce, such as medical text. To improve NLP models in this situation, we evaluate five improvements on named entity recognition (NER) tasks when only ten annotated examples are available: (1) layer-wise initialization with pre-trained weights, (2) hyperparameter tuning, (3) combining pre-training data, (4) custom word embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental results show that the F1 score of 69.3% achievable by state-of-the-art models can be improved to 78.87%.
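
The first improvement, layer-wise initialization with pre-trained weights, amounts to transferring selected layers from a model trained on a larger source corpus into the target model, while the task-specific output layer is re-initialized for the new tag set. Below is a minimal sketch of that idea in PyTorch; the `BiLSTMTagger` class and `init_from_pretrained` helper are illustrative assumptions, not the authors' actual code or architecture.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM token tagger for NER (illustrative, not the paper's model)."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)          # (batch, seq, embed_dim)
        out, _ = self.lstm(emb)                  # (batch, seq, 2 * hidden_dim)
        return self.classifier(out)              # (batch, seq, num_tags)

def init_from_pretrained(target, source, layers=("embedding", "lstm")):
    """Copy weights for the named layers from a pre-trained source model,
    leaving all other layers (e.g. the output classifier) randomly initialized."""
    src_state = source.state_dict()
    tgt_state = target.state_dict()
    for name, tensor in src_state.items():
        # Transfer only the selected layers, and only when shapes match.
        if name.split(".")[0] in layers and tgt_state[name].shape == tensor.shape:
            tgt_state[name] = tensor.clone()
    target.load_state_dict(tgt_state)

# Hypothetical usage: the source model is pre-trained on a large corpus with
# its own tag set; the target model is fine-tuned on the few medical examples.
source_model = BiLSTMTagger(vocab_size=50_000, embed_dim=100,
                            hidden_dim=128, num_tags=17)
target_model = BiLSTMTagger(vocab_size=50_000, embed_dim=100,
                            hidden_dim=128, num_tags=9)
init_from_pretrained(target_model, source_model)
```

Because the source and target tag sets differ, the classifier layer is excluded from the transfer and stays randomly initialized, which is the usual way to adapt a pre-trained tagger to a new entity inventory.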
