A User-Centric Benchmark for Evaluating Large Language Models

arXiv: http://arxiv.org/abs/2404.13940v2 (2024-04-23)

Abstract

Large Language Models (LLMs) have become essential tools that collaborate with users on a wide range of tasks, so evaluating how well they serve users' needs in real-world scenarios is important. While many benchmarks have been created, they mainly focus on specific, predefined model abilities; few cover how real users actually intend to use LLMs. To address this oversight, we propose benchmarking LLMs from a user perspective in both dataset construction and evaluation design. We first collect 1846 real-world use cases involving 15 LLMs from a user study with 712 participants from 23 countries. These self-reported cases form the User Reported Scenarios (URS) dataset, organized into a categorization of 7 user intents. Second, on this authentic, multi-cultural dataset, we benchmark 10 LLM services on their efficacy in satisfying user needs. Third, we show that our benchmark scores align well with user-reported experience in LLM interactions across diverse intents, and both highlight that subjective scenarios are often overlooked. In conclusion, our study proposes to benchmark LLMs from a user-centric perspective, aiming to facilitate evaluations that better reflect real user needs. The benchmark dataset and code are available at https://github.com/Alice1998/URS.
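The alignment claim above is, at its core, a correlation between per-intent benchmark scores and user-reported satisfaction. The snippet below is a minimal illustrative sketch of how such an alignment check could be computed; the intent labels and all numeric values are hypothetical placeholders, not figures from the paper, and the paper's actual methodology may differ.

```python
# Illustrative sketch only: rank correlation between hypothetical per-intent
# benchmark scores and user-reported satisfaction ratings.
# All names and numbers below are placeholders, not values from the paper.
from scipy.stats import spearmanr

intents = ["factual QA", "writing aid", "coding", "advice",
           "leisure chat", "learning", "task planning"]          # 7 placeholder intents
benchmark_scores = [0.82, 0.76, 0.71, 0.68, 0.74, 0.79, 0.66]    # hypothetical benchmark results
user_satisfaction = [4.3, 4.0, 3.7, 3.5, 3.9, 4.1, 3.4]          # hypothetical survey ratings (1-5)

# Spearman's rho measures whether the two rankings agree across intents.
rho, p_value = spearmanr(benchmark_scores, user_satisfaction)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
for intent, b, u in zip(intents, benchmark_scores, user_satisfaction):
    print(f"{intent:>13}: benchmark {b:.2f} | user rating {u:.1f}")
```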

Socials

LinkedIn
🚀 Exciting News in AI and NLP! 🚀

When it comes to evaluating Large Language Models (LLMs), understanding real user needs and experiences is key. A recent study has proposed a groundbreaking approach to benchmarking LLMs from a user-centric perspective.

The researchers collected 1846 real-world use cases from 712 participants across 23 countries to build the User Reported Scenarios (URS) dataset, categorized into 7 user intents. This dataset was then used to benchmark 10 LLM services on their effectiveness in meeting user needs, with results aligning closely with user-reported experiences.

This innovative approach sheds light on the importance of considering subjective scenarios in evaluating LLMs and aims to better reflect real user needs. If you're interested in learning more or accessing the benchmark dataset and code, check out the study at: http://arxiv.org/abs/2404.13940v2

#AI #NLP #LLMs #UserExperience #Benchmarking #TechInnovation

X
🚀 Exciting new research on Large Language Models (LLMs) evaluation! A user-centric benchmarking approach was taken to assess 10 LLM services using 1846 real-world use cases from 712 participants across 23 countries. Results show alignment between benchmark scores and user-reported experiences. Check out the study here: http://arxiv.org/abs/2404.13940v2 #AI #NLP #LLMs #TechResearch
