Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena

arXiv: http://arxiv.org/abs/2306.05685v4 - 2023-12-24

Abstract

Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions. We examine the usage and limitations of LLM-as-a-judge, including position, verbosity, and self-enhancement biases, as well as limited reasoning ability, and propose solutions to mitigate some of them. We then verify the agreement between LLM judges and human preferences by introducing two benchmarks: MT-bench, a multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement between humans. Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain. Additionally, we show our benchmark and traditional benchmarks complement each other by evaluating several variants of LLaMA and Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with human preferences are publicly available at https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
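The headline result above is an agreement rate between judge verdicts and human verdicts on pairwise comparisons. A minimal sketch of how such a rate can be computed (the vote data below is hypothetical, not from the paper's released datasets):

```python
def agreement_rate(judge_votes, human_votes):
    """Fraction of pairwise comparisons where the LLM judge and the human agree."""
    assert len(judge_votes) == len(human_votes)
    matches = sum(j == h for j, h in zip(judge_votes, human_votes))
    return matches / len(judge_votes)

# Hypothetical verdicts ("A", "B", or "tie") on five model-pair comparisons.
judge = ["A", "B", "tie", "A", "B"]
human = ["A", "B", "A", "A", "B"]

print(agreement_rate(judge, human))  # 0.8
```

The paper reports GPT-4 exceeding 80% on this kind of metric, comparable to human-human agreement, which is what justifies using the LLM judge as a stand-in for human raters.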

Socials

LinkedIn
🚀 Exciting news in the world of AI and large language models (LLMs)! Researchers have delved into using strong LLMs as judges to evaluate chat assistants on open-ended questions, addressing the challenges of measuring human preferences. The study explores the limitations and biases of LLM-as-a-judge and proposes solutions to enhance evaluation accuracy.
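One of the biases mentioned above, position bias, favors whichever answer is shown first. A hedged sketch of the kind of mitigation the paper describes: judge each pair twice with the answer order swapped, and only count a win when both orderings agree (the `judge_fn` API here is hypothetical):

```python
def debias_position(judge_fn, answer_a, answer_b):
    """judge_fn(first, second) returns "first", "second", or "tie" (hypothetical API).

    Calls the judge twice with swapped order; inconsistent verdicts become a tie.
    """
    v1 = judge_fn(answer_a, answer_b)  # answer A shown first
    v2 = judge_fn(answer_b, answer_a)  # answer B shown first
    if v1 == "first" and v2 == "second":
        return "A"  # A wins regardless of position
    if v1 == "second" and v2 == "first":
        return "B"  # B wins regardless of position
    return "tie"    # verdicts disagree under swapping: position-biased or genuinely close

# Toy judge that always prefers whichever answer is shown first (pure position bias).
biased_judge = lambda first, second: "first"
print(debias_position(biased_judge, "answer_a", "answer_b"))  # tie
```

With a purely position-biased judge, the swapped calls contradict each other, so the biased preference is neutralized into a tie rather than counted as a win.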

Results show that powerful LLM judges like GPT-4 can closely match human preferences, achieving over 80% agreement. This approach offers a scalable and explainable method to approximate human preferences efficiently. The study introduces two benchmarks, MT-bench and Chatbot Arena, to verify agreement between LLM judges and human preferences.

Access the MT-bench questions, expert votes, and conversations on GitHub: https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge

Curious to learn more? Dive into the research here: http://arxiv.org/abs/2306.05685v4

#AI #LLM #Research #NLP #ChatAssistants #ArtificialIntelligence
X

Exciting new research on using large language models as judges to evaluate chat assistants reveals promising results! Strong LLM judges like GPT-4 can match human preferences with over 80% agreement, making them a scalable and cost-effective alternative. Learn more about this innovative approach and access the benchmarks at: http://arxiv.org/abs/2306.05685v4 #AI #NLP #LLMs #TechResearch
