Towards Retrieval Augmented Generation Over Large Video Libraries | Awesome LLM Papers

Towards Retrieval Augmented Generation Over Large Video Libraries

Yannis Tevissen, Khalil Guetari, Frédéric Petitpont. No Venue 2024

Compositional Generalization · Interdisciplinary Approaches · Model Architecture · Multimodal Semantic Representation · Question Answering · RAG · Tools

Video content creators need efficient tools to repurpose content, a task that often requires complex manual or automated searches. Crafting a new video from large video libraries remains a challenge. In this paper, we introduce the task of Video Library Question Answering (VLQA) through an interoperable architecture that applies Retrieval Augmented Generation (RAG) to video libraries. We propose a system that uses large language models (LLMs) to generate search queries, retrieving relevant video moments indexed by speech and visual metadata. An answer generation module then integrates user queries with this metadata to produce responses with specific video timestamps. This approach shows promise in multimedia content retrieval and AI-assisted video content creation.
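The abstract's pipeline (LLM-generated search queries → retrieval over speech/visual metadata → timestamped answers) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the `VideoMoment` structure, the keyword-based stand-ins for the two LLM stages, and the counting-based retrieval score are all hypothetical placeholders.

```python
from dataclasses import dataclass


@dataclass
class VideoMoment:
    """A video segment indexed by speech and visual metadata (hypothetical schema)."""
    video_id: str
    start_s: float
    end_s: float
    transcript: str          # speech metadata (e.g. from ASR)
    visual_tags: list        # visual metadata (e.g. from an image tagger)


def generate_search_queries(user_question: str) -> list:
    # Stand-in for the paper's LLM query-generation step:
    # here we just extract non-stopword keywords.
    stop = {"the", "a", "in", "of", "what", "when", "how", "does", "is", "are"}
    return [w.lower().strip("?.,") for w in user_question.split()
            if w.lower().strip("?.,") not in stop]


def retrieve_moments(queries: list, index: list, top_k: int = 3) -> list:
    # Stand-in retrieval: score each moment by keyword occurrences
    # in its combined speech + visual metadata.
    scored = []
    for m in index:
        text = (m.transcript + " " + " ".join(m.visual_tags)).lower()
        score = sum(text.count(q) for q in queries)
        if score > 0:
            scored.append((score, m))
    scored.sort(key=lambda t: -t[0])
    return [m for _, m in scored[:top_k]]


def generate_answer(user_question: str, moments: list) -> str:
    # Stand-in for the LLM answer-generation module: integrate the
    # user query with retrieved metadata and cite timestamps.
    if not moments:
        return "No relevant moments found."
    cites = "; ".join(f"{m.video_id} [{m.start_s:.0f}s-{m.end_s:.0f}s]"
                      for m in moments)
    return f"Relevant moments for '{user_question}': {cites}"


# Toy usage over a two-moment library
index = [
    VideoMoment("vid_001", 12.0, 34.0,
                "the rocket launch countdown begins", ["rocket", "launchpad"]),
    VideoMoment("vid_002", 0.0, 20.0,
                "interview with the flight director", ["studio", "person"]),
]
queries = generate_search_queries("When does the rocket launch?")
answer = generate_answer("When does the rocket launch?",
                         retrieve_moments(queries, index))
print(answer)
```

In a real system the two stand-in functions would be LLM calls and the retrieval step would query a metadata index (e.g. a vector or full-text store), but the data flow mirrors the architecture described above.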

Similar Work