
Instruct-fingpt: Financial Sentiment Analysis By Instruction Tuning Of General-purpose Large Language Models

Boyu Zhang, Hongyang Yang, Xiao-Yang Liu. SSRN Electronic Journal, 2023 – 56 citations

[Paper]
Affective Computing Compositional Generalization Content Enrichment Fine Tuning Image Text Integration Interactive Environments Interdisciplinary Approaches Multimodal Semantic Representation Neural Machine Translation Productivity Enhancement Question Answering

Sentiment analysis is a vital tool for uncovering insights from financial articles, news, and social media, shaping our understanding of market movements. Despite the impressive capabilities of large language models (LLMs) in financial natural language processing (NLP), they still struggle with accurately interpreting numerical values and grasping financial context, limiting their effectiveness in predicting financial sentiment. In this paper, we introduce a simple yet effective instruction tuning approach to address these issues. By transforming a small portion of supervised financial sentiment analysis data into instruction data and fine-tuning a general-purpose LLM with this method, we achieve remarkable advancements in financial sentiment analysis. In the experiment, our approach outperforms state-of-the-art supervised sentiment analysis models, as well as widely used LLMs like ChatGPT and LLaMAs, particularly in scenarios where numerical understanding and contextual comprehension are vital.
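The core of the approach is converting existing supervised sentiment examples into instruction-following records before fine-tuning. The sketch below illustrates one plausible way to do this conversion; the instruction template, field names, and sample sentences are assumptions for illustration rather than the paper's exact format.

```python
# Minimal sketch: turning supervised (sentence, label) pairs into
# instruction-tuning records. The template and record fields are
# hypothetical; consult the paper / FinGPT code for the exact format.

INSTRUCTION = (
    "What is the sentiment of this financial news? "
    "Please choose an answer from {negative, neutral, positive}."
)

def to_instruction_example(sentence: str, label: str) -> dict:
    """Wrap one labeled sentence in an instruction-tuning record."""
    return {
        "instruction": INSTRUCTION,
        "input": sentence,
        "output": label,
    }

# Hypothetical supervised examples in the style of financial sentiment datasets.
supervised_data = [
    ("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn.", "positive"),
    ("The company expects net sales to remain flat next year.", "neutral"),
]

instruction_data = [to_instruction_example(s, l) for s, l in supervised_data]
# These records can then be serialized (e.g., as JSON lines) and used to
# fine-tune a general-purpose LLM such as LLaMA with standard supervised
# fine-tuning on the "output" field.
```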

Similar Work