Aligning Large Language Models through Synthetic Feedback

arXiv link - 2023-10-21 01:50:54

Abstract

Aligning large language models (LLMs) to human values has become increasingly important as it enables sophisticated steering of LLMs. However, it requires a significant amount of human demonstrations and feedback, or distillation from proprietary LLMs such as ChatGPT. In this work, we propose a novel alignment learning framework with synthetic feedback that does not depend on extensive human annotations or proprietary LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs with various sizes and prompts. Then, we use the RM to simulate high-quality demonstrations to train a supervised policy and further optimize the model with reinforcement learning. Our resulting model, Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms recent open-sourced models, which are trained on the outputs of InstructGPT or on human-annotated demonstrations, in alignment benchmarks. In human evaluation, our model is preferred to Alpaca and Dolly-v2, 55.0% and 58.5% of the time, respectively. Further analyses demonstrate the efficacy and importance of synthetic feedback in our framework. The code is available at https://github.com/naver-ai/almost.
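
The abstract's key step is building a reward model from synthetic feedback: responses produced by assumed-stronger configurations (larger vanilla LLMs, richer prompts, more in-context demonstrations) are treated as preferred over responses from assumed-weaker ones, yielding comparison data without human labels. Below is a minimal, hypothetical Python sketch of how such a configuration ranking could be turned into (chosen, rejected) pairs for reward-model training; the `AssistantConfig` type, the `rank` heuristic, and the `generate` stub are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import Callable, List, Tuple


@dataclass(frozen=True)
class AssistantConfig:
    """One way to sample a response (model size, prompt style, number of demos)."""
    name: str
    rank: int  # lower rank = assumed-better configuration


def synthetic_preference_pairs(
    prompt: str,
    configs: List[AssistantConfig],
    generate: Callable[[str, AssistantConfig], str],
) -> List[Tuple[str, str]]:
    """Return (chosen, rejected) response pairs implied by the configuration ranking."""
    responses = {cfg: generate(prompt, cfg) for cfg in configs}
    ordered = sorted(configs, key=lambda c: c.rank)
    # Every (better, worse) pair becomes one synthetic comparison for the reward model.
    return [
        (responses[better], responses[worse])
        for better, worse in combinations(ordered, 2)
    ]


if __name__ == "__main__":
    # Hypothetical stand-in for sampling from real LLMs of different sizes/prompts.
    def fake_generate(prompt: str, cfg: AssistantConfig) -> str:
        return f"[{cfg.name}] response to: {prompt}"

    configs = [
        AssistantConfig("30B-with-demos", rank=0),
        AssistantConfig("13B-with-demos", rank=1),
        AssistantConfig("7B-no-demos", rank=2),
    ]
    pairs = synthetic_preference_pairs("How do I brew green tea?", configs, fake_generate)
    for chosen, rejected in pairs:
        print("CHOSEN:  ", chosen)
        print("REJECTED:", rejected)
```

A reward model trained on comparisons like these can then score candidate responses, which the abstract describes using both to select high-quality simulated demonstrations for supervised training and as the reward signal in the reinforcement-learning stage.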

Socials

LinkedIn
🚀 Exciting news in the world of large language models (LLMs)! A new alignment learning framework, ALMoST, has been introduced to enhance LLMs' alignment with human values without relying on extensive human annotations or proprietary LLMs like ChatGPT.

By utilizing synthetic feedback and innovative reward modeling techniques, ALMoST outperforms existing open-sourced models in alignment benchmarks. Human evaluation shows that ALMoST is preferred over Alpaca and Dolly-v2 55.0% and 58.5% of the time, respectively.

Read the full research paper at http://arxiv.org/abs/2305.13735v2 and access the code at https://github.com/naver-ai/almost

#AI #NLP #LLMs #AlignmentLearning #Research #TechInnovation

X

🚀 Exciting new research alert! A novel alignment learning framework, ALMoST, outperforms recent models in alignment benchmarks without extensive human annotations or proprietary LLMs. Learn more at: http://arxiv.org/abs/2305.13735v2 #AI #NLP #LLMs #Tech #Research #ALMoST
