
Enhancing LLM-Based Feedback: Insights from Intelligent Tutoring Systems and the Learning Sciences

John Stamper, Ruiwei Xiao, Xinying Hou. Communications in Computer and Information Science, 2024 – 40 citations

Compositional Generalization, Evaluation, Image Text Integration, Interdisciplinary Approaches, Multimodal Semantic Representation, Productivity Enhancement, Prompting

The field of Artificial Intelligence in Education (AIED) sits at the intersection of technology, education, and psychology, placing a strong emphasis on supporting learners' needs with compassion and understanding. The growing prominence of Large Language Models (LLMs) has led to scalable solutions within educational settings, including the generation of different types of feedback in Intelligent Tutoring Systems (ITS). However, these models are often used by directly formulating prompts to solicit specific information, without a solid theoretical foundation for prompt construction or empirical assessment of the prompts' impact on learning. This work advocates careful and caring AIED research by reviewing prior work on feedback generation in ITS, with emphasis on the theoretical frameworks it drew upon and the efficacy of the corresponding designs in empirical evaluations, and then suggests opportunities to apply these evidence-based principles to the design, experimentation, and evaluation phases of LLM-based feedback generation. The main contributions of this paper are: advocacy for applying more cautious, theoretically grounded methods to feedback generation in the era of generative AI; and practical suggestions for theory- and evidence-based feedback design for LLM-powered ITS.
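To make the contrast between ad hoc prompting and theory-grounded prompt construction concrete, here is a minimal sketch. The principle list, function name, and example values are hypothetical illustrations, not the authors' method or any specific framework from the paper; the sketch only shows how evidence-based feedback guidelines might be stated explicitly in the prompt so that the generated feedback can later be evaluated against those same guidelines.

```python
# Illustrative sketch only: names and principles below are placeholders, not the paper's design.

# A few paraphrased formative-feedback guidelines; consult the frameworks the paper
# reviews for authoritative statements of such principles.
FEEDBACK_PRINCIPLES = [
    "Address the learner's specific error, not the learner personally.",
    "Explain why the step is incorrect before hinting at a fix.",
    "Give an actionable next step without revealing the full solution.",
    "Keep the tone encouraging and concise.",
]

def build_feedback_prompt(problem: str, student_answer: str, error_note: str) -> str:
    """Compose a tutoring-feedback prompt that makes its guiding principles explicit,
    so the output can be checked against the same principles in evaluation."""
    principles = "\n".join(f"- {p}" for p in FEEDBACK_PRINCIPLES)
    return (
        "You are a tutor in an intelligent tutoring system.\n"
        f"Problem: {problem}\n"
        f"Student answer: {student_answer}\n"
        f"Diagnosed issue: {error_note}\n\n"
        "Write feedback for the student that follows these principles:\n"
        f"{principles}\n"
    )

if __name__ == "__main__":
    # Hypothetical usage with a small algebra example.
    print(build_feedback_prompt(
        problem="Simplify 2(x + 3) - 4.",
        student_answer="2x + 3 - 4",
        error_note="Did not distribute the 2 over the 3.",
    ))
```

Stating the principles inside the prompt, rather than leaving them implicit, is one way to connect prompt construction to the design and evaluation phases the paper emphasizes: the same checklist can serve as a rubric when empirically assessing the feedback's effect on learning.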

Similar Work