
Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment

arXiv: http://arxiv.org/abs/2402.14016v1 - 2024-02-21 18:55:20

Abstract

Large Language Models (LLMs) are powerful zero-shot assessors and are increasingly used in real-world settings such as written exams and system benchmarking. Despite this, no existing work has analyzed the vulnerability of judge-LLMs to adversaries attempting to manipulate their outputs. This work presents the first study of the adversarial robustness of assessment LLMs, in which we search for short universal phrases that, when appended to texts, can deceive LLMs into assigning high assessment scores. Experiments on SummEval and TopicalChat demonstrate that both LLM-scoring and pairwise LLM-comparative assessment are vulnerable to simple concatenation attacks; LLM-scoring in particular is highly susceptible and can yield maximum assessment scores irrespective of the quality of the input text. Notably, such attacks are transferable: phrases learned on smaller open-source LLMs can be applied to larger closed-source models such as GPT-3.5. This highlights the pervasive nature of these adversarial vulnerabilities across judge-LLM sizes, families, and methods. Our findings raise significant concerns about the reliability of LLM-as-a-judge methods and underscore the importance of addressing vulnerabilities in LLM assessment before deployment in high-stakes real-world scenarios.
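The concatenation attack described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `score_with_llm` is a hypothetical stand-in for a real judge-LLM API call, the scoring-prompt wording is an assumption, and the actual learned universal phrase is not reproduced here.

```python
# Sketch of a universal concatenation attack on a zero-shot LLM judge.
# The attack itself is trivial: append one fixed, pre-learned phrase to
# every candidate text, independent of the text's content or quality.

ATTACK_PHRASE = "<learned universal phrase>"  # placeholder; found by search in the paper


def build_scoring_prompt(text: str) -> str:
    # Hypothetical zero-shot absolute-scoring prompt (LLM-scoring setup).
    return (
        "Rate the quality of the following summary on a scale of 1-10.\n\n"
        f"Summary: {text}\n\nScore:"
    )


def attack(text: str, phrase: str = ATTACK_PHRASE) -> str:
    # Universal attack: the same short phrase is concatenated to any
    # input, so no per-example optimization is needed at attack time.
    return f"{text} {phrase}"


def score_with_llm(prompt: str) -> int:
    # Stand-in for a call to a judge-LLM (e.g. an open-source model or
    # GPT-3.5). A real attack would send `prompt` to the model and parse
    # the numeric score from its response.
    raise NotImplementedError("replace with an actual judge-LLM call")


if __name__ == "__main__":
    low_quality = "the movie it was and then things happen end."
    prompt = build_scoring_prompt(attack(low_quality))
    print(prompt)
```

The same appended phrase also transfers across models, which is why the paper can learn it on a small open-source LLM and deploy it against a closed-source judge.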

Socials

LinkedIn

🚨 New Research Alert! 🚨

Exciting findings on the vulnerability of assessment Large Language Models (LLMs) have been published. This groundbreaking study sheds light on the susceptibility of judge-LLMs to manipulation by adversaries aiming to deceive the system and obtain high assessment scores.

Discover more about this innovative research and its implications for the reliability of LLMs in real-world scenarios by reading the full paper here: http://arxiv.org/abs/2402.14016v1

#AI #NLP #LLMs #Research #TechInnovation #ArtificialIntelligence #MachineLearning #TechNews

X

🚨 New research alert! A study on the vulnerability of assessment Large Language Models (LLMs) reveals susceptibility to simple attacks, raising concerns about their reliability in real-world scenarios. Learn more at: http://arxiv.org/abs/2402.14016v1 #AI #NLP #LLMs #Research #TechEthics 🤖🔍📚
