
Character-LLM: A Trainable Agent for Role-Playing

Yunfan Shao, Linyang Li, Junqi Dai, Xipeng Qiu. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023 – 45 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Agentic Compositional Generalization EMNLP Interdisciplinary Approaches Multimodal Semantic Representation Tools Training Techniques

Large language models (LLMs) can serve as agents that simulate human behaviors, given their powerful ability to understand human instructions and generate high-quality text. This ability prompts us to ask whether LLMs can simulate a person at a higher level than simple human behaviors. Therefore, we aim to train an agent with the profile, experiences, and emotional states of a specific person, rather than instructing the ChatGPT API with limited prompts. In this work, we introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar. Our method focuses on editing profiles into experiences of a certain character and training models to become personal simulacra with these experiences. To assess the effectiveness of our approach, we build a test playground that interviews the trained agents and evaluates whether they memorize their characters and experiences. Experimental results show interesting observations that help build future simulacra of humankind.
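
The abstract describes turning a character's profile and reconstructed experiences into training data for a role-playing agent. Below is a minimal sketch of how such experience-to-training-sample conversion might look; it is not the authors' released code, and the `Scene` data structure, `build_training_samples` function, and prompt template are hypothetical illustrations of the general idea.

```python
# Hypothetical sketch: flatten a character's reconstructed "experiences"
# (scenes with dialogue) into prompt/response pairs for supervised fine-tuning.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Scene:
    """One reconstructed experience: a setting plus in-character dialogue turns."""
    background: str
    turns: List[Dict[str, str]]  # e.g. [{"speaker": "Visitor", "text": "..."}, ...]


def build_training_samples(character: str, profile: str,
                           scenes: List[Scene]) -> List[Dict[str, str]]:
    """Turn each of the character's own lines into a supervised target,
    conditioned on the profile, scene background, and dialogue so far."""
    samples = []
    for scene in scenes:
        context = f"I am {character}. {profile}\nBackground: {scene.background}\n"
        history = ""
        for turn in scene.turns:
            if turn["speaker"] == character:
                # The character's own utterance is what the model learns to produce.
                samples.append({"prompt": context + history, "response": turn["text"]})
            history += f'{turn["speaker"]}: {turn["text"]}\n'
    return samples


if __name__ == "__main__":
    scene = Scene(
        background="Vienna, 1801. A visitor asks about the new symphony.",
        turns=[
            {"speaker": "Visitor",
             "text": "Maestro, how is the new work progressing?"},
            {"speaker": "Beethoven",
             "text": "Slowly. My hearing troubles me, but the music is whole in my mind."},
        ],
    )
    for s in build_training_samples("Beethoven",
                                    "I am a composer living in Vienna.", [scene]):
        print(s["prompt"], "->", s["response"])
```

The interview-style evaluation mentioned in the abstract could then reuse the same prompt format, with the interviewer's question appended as the final turn and the model's reply judged for consistency with the character's memories.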

Similar Work