The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values¶
arXiv Link - 2023-10-11 16:18:13
Abstract¶
Human feedback is increasingly used to steer the behaviours of Large Language Models (LLMs). However, it is unclear how to collect and incorporate feedback in a way that is efficient, effective and unbiased, especially for highly subjective human preferences and values. In this paper, we survey existing approaches for learning from human feedback, drawing on 95 papers primarily from the ACL and arXiv repositories. First, we summarise the past, pre-LLM trends for integrating human feedback into language models. Second, we give an overview of present techniques and practices, as well as the motivations for using feedback; conceptual frameworks for defining values and preferences; and how feedback is collected and from whom. Finally, we encourage a better future of feedback learning in LLMs by raising five unresolved conceptual and practical challenges.
Socials¶
**X**

- 🚀 Exciting insights into the world of Large Language Models (LLMs) and human feedback! 🌟 Curious about how human feedback shapes the behavior of LLMs? Dive into this comprehensive survey of existing approaches in the field, drawing on 95 papers from ACL and arXiv repositories. Learn about past trends, current techniques, and future challenges in integrating human feedback into language models. Read more about this fascinating research at: http://arxiv.org/abs/2310.07629v1 #LLMs #HumanFeedback #AI #NLP #Research #TechInnovation
- 🚀 Exciting read! Check out this insightful paper on incorporating human feedback into Large Language Models (LLMs) efficiently and effectively. Learn about past trends, present techniques, and future challenges in feedback learning for LLMs. #AI #NLP 📄 Read more: http://arxiv.org/abs/2310.07629v1