
Leveraging Self-Attention for Input-Dependent Soft Prompting in LLMs

Ananth Muppidi, Abhilash Nandy, Sambaran Bandyopadhyay. No Venue 2025

Tags: Compositional Generalization · Fine Tuning · Interdisciplinary Approaches · Model Architecture · Multimodal Semantic Representation · Neural Machine Translation · Prompting

Achieving strong performance with large language models on domain-specific tasks typically necessitates fine-tuning, which is computationally expensive and technically challenging. This paper focuses on parameter-efficient fine-tuning using soft prompting, a promising approach that adapts pre-trained models to downstream tasks by learning a small set of parameters. We propose a novel Input Dependent Soft Prompting technique with a self-Attention Mechanism (ID-SPAM) that generates soft prompts conditioned on the input tokens, attending to different tokens with varying importance. Our method is simple and efficient, keeping the number of trainable parameters small. We show the merits of the proposed approach over state-of-the-art techniques on various tasks, and demonstrate its improved zero-shot domain transfer capability.
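Since the abstract only describes the mechanism at a high level, the following is a minimal PyTorch sketch of one way an input-dependent soft prompt generator with self-attention could be realized: a small set of learnable prompt queries attend over the frozen model's input token embeddings, so each soft-prompt position weights input tokens differently. The class name, prompt length, and query-based design are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: an assumed realization of input-dependent soft prompting,
# not the ID-SPAM architecture from the paper.
import torch
import torch.nn as nn

class InputDependentSoftPrompt(nn.Module):
    def __init__(self, hidden_dim: int, prompt_len: int, num_heads: int = 4):
        super().__init__()
        # Learnable queries, one per soft-prompt position (assumed design choice).
        self.prompt_queries = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)
        # Self-attention lets each prompt position attend to input tokens
        # with varying importance, as the abstract describes.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_dim) token embeddings
        # from the frozen pre-trained model.
        batch = input_embeds.size(0)
        queries = self.prompt_queries.unsqueeze(0).expand(batch, -1, -1)
        # Each soft-prompt vector is an attention-weighted mix of input tokens.
        prompts, _ = self.attn(queries, input_embeds, input_embeds)
        return prompts  # (batch, prompt_len, hidden_dim)

# Usage: prepend the generated prompts to the frozen model's input embeddings,
# training only the prompt generator (a small set of parameters).
# embeds = model.get_input_embeddings()(input_ids)        # frozen
# soft = InputDependentSoftPrompt(embeds.size(-1), 10)(embeds)
# model(inputs_embeds=torch.cat([soft, embeds], dim=1), ...)
```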

Similar Work