
JudgeLM: Fine-tuned Large Language Models are Scalable Judges

arXiv: http://arxiv.org/abs/2310.17631v1 (2023-10-26)

Abstract

Evaluating Large Language Models (LLMs) in open-ended scenarios is challenging because existing benchmarks and metrics cannot measure them comprehensively. To address this problem, we propose fine-tuning LLMs as scalable judges (JudgeLM) to evaluate LLMs efficiently and effectively on open-ended benchmarks. We first build a comprehensive, large-scale, high-quality dataset containing task seeds, LLM-generated answers, and GPT-4-generated judgments for fine-tuning high-performance judges, as well as a new benchmark for evaluating the judges. We train JudgeLM at three scales, 7B, 13B, and 33B parameters, and conduct a systematic analysis of its capabilities and behaviors. We then analyze the key biases in fine-tuning LLMs as judges, identifying them as position bias, knowledge bias, and format bias. To address these issues, JudgeLM introduces a bag of techniques, including swap augmentation, reference support, and reference drop, which clearly enhance the judge's performance. JudgeLM obtains state-of-the-art judge performance on both the existing PandaLM benchmark and our proposed new benchmark. JudgeLM is also efficient: JudgeLM-7B needs only 3 minutes to judge 5K samples on 8 A100 GPUs. JudgeLM reaches over 90% agreement with its teacher judge, exceeding even human-to-human agreement. JudgeLM further demonstrates extended capabilities as a judge of single answers, multimodal models, multiple answers, and multi-turn chat.
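To make the three debiasing techniques named in the abstract concrete, here is a minimal Python sketch of how swap augmentation, reference support, and reference drop could be applied when constructing fine-tuning samples for a judge model. This is an illustration under our own assumptions, not the authors' implementation: the function name, prompt template, probabilities, and label scheme ("A"/"B"/"tie") are hypothetical.

```python
import random

def make_judge_sample(question, answer_a, answer_b, judgment, reference=None,
                      swap_prob=0.5, ref_drop_prob=0.5, rng=random):
    """Build one illustrative fine-tuning sample for a judge model.

    Hypothetical sketch of the paper's debiasing ideas:
      * swap augmentation -- randomly exchange the two answers (and flip
        the judgment) so the judge cannot rely on answer position;
      * reference support / reference drop -- randomly omit the reference
        answer so the judge learns to work both with and without it.
    `judgment` is assumed to be "A", "B", or "tie" in this sketch.
    """
    # Swap augmentation: counter position bias by flipping answer order
    # and flipping the label accordingly ("tie" stays unchanged).
    if rng.random() < swap_prob:
        answer_a, answer_b = answer_b, answer_a
        judgment = {"A": "B", "B": "A"}.get(judgment, judgment)

    # Reference drop: counter format bias by training the judge both
    # with and without reference support.
    if reference is not None and rng.random() < ref_drop_prob:
        reference = None

    prompt = f"Question: {question}\n"
    if reference is not None:
        prompt += f"Reference answer: {reference}\n"  # reference support
    prompt += (f"Answer A: {answer_a}\nAnswer B: {answer_b}\n"
               "Which answer is better?")
    return {"prompt": prompt, "target": judgment}


if __name__ == "__main__":
    sample = make_judge_sample(
        question="What causes tides?",
        answer_a="The moon's gravity.",
        answer_b="Wind patterns.",
        judgment="A",
        reference="Tides are mainly caused by the moon's gravitational pull.",
    )
    print(sample["prompt"])
    print("Target:", sample["target"])
```

Note that swapping also flips the supervision label so it stays consistent with the new answer order, and dropping the reference at random exposes the judge to both prompt formats during fine-tuning.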

Socials

LinkedIn
🚀 Exciting advancements in the field of Large Language Models (LLMs)! Researchers have introduced JudgeLM: fine-tuned LLMs that act as scalable judges to efficiently evaluate LLMs in open-ended scenarios. Leveraging a high-quality dataset and a novel benchmark, JudgeLM achieves state-of-the-art judge performance, with over 90% agreement with its teacher judge, surpassing even human-to-human agreement.

Curious to learn more about this groundbreaking approach and its implications? Check out the full paper at: http://arxiv.org/abs/2310.17631v1

#AI #NLP #LLMs #JudgeLM #TechInnovation #Research #ArtificialIntelligence

X

🚀 Exciting new research on evaluating Large Language Models (LLMs) efficiently and effectively in open-ended scenarios! Learn how JudgeLM fine-tunes LLMs as scalable judges, achieving state-of-the-art performance and surpassing even human-to-human agreement. Check out the paper here: http://arxiv.org/abs/2310.17631v1 #AI #NLP #LLMs #JudgeLM
