
How Far Are LLMs from Believable AI? A Benchmark for Evaluating the Believability of Human Behavior Simulation

arXiv: http://arxiv.org/abs/2312.17115v2 - 2024-06-15 14:08:30

Abstract

In recent years, AI has demonstrated remarkable capabilities in simulating human behaviors, particularly those implemented with large language models (LLMs). However, due to the lack of systematic evaluation of LLMs' simulated behaviors, the believability of LLMs among humans remains ambiguous, i.e., it is unclear which behaviors of LLMs are convincingly human-like and which need further improvement. In this work, we design SimulateBench to evaluate the believability of LLMs when simulating human behaviors. Specifically, we evaluate the believability of LLMs along two critical dimensions: 1) consistency: the extent to which LLMs can behave consistently with the given information of a human to simulate; and 2) robustness: the ability of LLMs' simulated behaviors to remain robust when faced with perturbations. SimulateBench includes 65 character profiles and a total of 8,400 questions to examine LLMs' simulated behaviors. Based on SimulateBench, we evaluate the performance of 10 widely used LLMs when simulating characters. The experimental results reveal that current LLMs struggle to align their behaviors with assigned characters and are vulnerable to perturbations in certain factors.
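The abstract describes the evaluation at a high level: given a character profile, the model is probed with questions, consistency is how often its answers match the profile, and robustness is how stable that score stays when the profile is perturbed. As a rough illustration only, here is a minimal Python sketch of that scoring logic, assuming a multiple-choice question format and a hypothetical `ask` callable wrapping the LLM under test; these names are illustrative and are not the benchmark's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MCQuestion:
    prompt: str          # question about the simulated character
    options: list[str]   # candidate answers
    answer: str          # the option consistent with the character profile

# Signature of the (hypothetical) model wrapper: given a profile and a
# question, it returns the option the role-playing LLM selects.
AskFn = Callable[[str, MCQuestion], str]

def consistency_score(profile: str, questions: list[MCQuestion], ask: AskFn) -> float:
    """Consistency: fraction of questions answered in line with the profile."""
    correct = sum(ask(profile, q) == q.answer for q in questions)
    return correct / len(questions)

def robustness_gap(profile: str, perturbed_profile: str,
                   questions: list[MCQuestion], ask: AskFn) -> float:
    """Robustness: drop in consistency after perturbing one profile factor
    (e.g. the character's stated age); a smaller gap means more robust."""
    return (consistency_score(profile, questions, ask)
            - consistency_score(perturbed_profile, questions, ask))

# Toy check with a stub "model" that always picks the first option.
if __name__ == "__main__":
    stub_ask: AskFn = lambda profile, q: q.options[0]
    qs = [MCQuestion("How old is the character?", ["42", "18"], "42")]
    print(consistency_score("Name: Alex. Age: 42.", qs, stub_ask))  # 1.0
```

This is only one plausible reading of the two metrics; the paper's exact question formats and scoring rules are defined by SimulateBench itself.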

Socials

LinkedIn
🚀 Large language models (LLMs) keep pushing the boundaries of AI, but how believable are the human behaviors they simulate?

A recent study introduces SimulateBench, a framework for evaluating the believability of LLMs when simulating human behavior, along two key dimensions: consistency and robustness. Benchmarking 10 popular LLMs against 65 character profiles and 8,400 questions, the authors find that current LLMs struggle to align their behavior with assigned characters and are vulnerable to perturbations in certain factors.

Curious to dive deeper into the findings? Check out the full study at: http://arxiv.org/abs/2312.17115v2

#AI #NLP #LLMs #ArtificialIntelligence #TechResearch #Innovation #SimulateBench #BelievabilityEvaluation #TechStudy
X

🤖📊 How believable are LLMs at simulating human behavior? SimulateBench evaluates them along two dimensions: consistency and robustness. See how 10 popular LLMs fare at character simulation: http://arxiv.org/abs/2312.17115v2 #AI #NLP #LLMs #SimulateBench
