
Dynatask: A Framework For Creating Dynamic AI Benchmark Tasks

Tristan Thrush, Kushal Tirumala, Anmol Gupta, Max Bartolo, Pedro Rodriguez, Tariq Kane, William Gaviria Rojas, Peter Mattson, Adina Williams, Douwe Kiela. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2022 – 40 citations

Tags: ACL, Datasets, Evaluation, Has Code, Interdisciplinary Approaches, Tools, Visual Question Answering

We introduce Dynatask: an open-source system for setting up custom NLP tasks that aims to greatly lower the technical knowledge and effort required to host and evaluate state-of-the-art NLP models, and to conduct model-in-the-loop data collection with crowdworkers. Dynatask is integrated with Dynabench, a research platform for rethinking benchmarking in AI that facilitates human-and-model-in-the-loop data collection and evaluation. To create a task, users only need to write a short task configuration file, from which the relevant web interfaces and model hosting infrastructure are automatically generated. The system is available at https://dynabench.org/ and the full library can be found at https://github.com/facebookresearch/dynabench.
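
The core idea is that a task owner writes a short configuration file and the platform generates the annotation interface and model-hosting infrastructure from it. The sketch below illustrates what such a configuration might look like; the field names (annotation_config, perf_metric, and so on) and the example task are illustrative assumptions rather than the actual Dynatask schema, which is documented in the linked repository.

```python
# Illustrative sketch of a Dynatask-style task configuration.
# Field names and structure are assumptions for illustration only;
# see https://github.com/facebookresearch/dynabench for the real schema.
import json

task_config = {
    "name": "sentiment-adversarial",  # hypothetical task identifier
    "instructions": "Write a sentence that fools the model's sentiment prediction.",
    "annotation_config": {
        # What the annotator sees as context
        "context": [{"name": "passage", "type": "string"}],
        # What the annotator produces
        "input": [{"name": "statement", "type": "string"}],
        # What the model must predict
        "output": [
            {
                "name": "label",
                "type": "multiclass",
                "labels": ["positive", "negative", "neutral"],
            }
        ],
        # Optional extra fields collected alongside each example
        "metadata": {"create": [{"name": "example_explanation", "type": "string"}]},
    },
    "evaluation": {
        "perf_metric": "accuracy",  # primary leaderboard metric (assumed name)
        "delta_metrics": ["fairness", "robustness"],
    },
}

# Per the paper's description, the platform reads a file like this and
# auto-generates the crowdworker-facing UI and the model evaluation endpoints.
with open("task_config.json", "w") as f:
    json.dump(task_config, f, indent=2)
```

Keeping the entire task definition in one declarative file is what lets non-engineers stand up model-in-the-loop data collection without touching the web or model-serving code.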

Similar Work