LLMs Beyond English: Scaling the Multilingual Capability of LLMs with Cross-Lingual Feedback
arXiv link: http://arxiv.org/abs/2406.01771v1 - 2024-06-03 20:25:12
Abstract
To democratize large language models (LLMs) across most natural languages, it is imperative that these models can understand and generate text in many languages, in particular low-resource ones. While recent multilingual LLMs demonstrate remarkable performance in these capabilities, they still support only a limited number of human languages due to the lack of training data for low-resource languages. Moreover, these LLMs are not yet aligned with human preferences on downstream tasks, which is crucial to the success of LLMs in English. In this paper, we introduce xLLaMA-100 and xBLOOM-100 (collectively xLLMs-100), which scale the multilingual capabilities of LLaMA and BLOOM to 100 languages. To do so, we construct two datasets: a multilingual instruction dataset covering 100 languages, the largest language coverage to date, and a cross-lingual human feedback dataset encompassing 30 languages. We perform multilingual instruction tuning on the constructed instruction data and further align the LLMs with human feedback using the DPO algorithm on our cross-lingual human feedback dataset. We evaluate the multilingual understanding and generation capabilities of xLLMs-100 on five multilingual benchmarks. Experimental results show that xLLMs-100 consistently outperforms its peers across the benchmarks by considerable margins, defining a new state-of-the-art multilingual LLM that supports 100 languages.
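For intuition on the alignment step the abstract mentions, below is a minimal sketch of the standard DPO (direct preference optimization) objective, which trains a policy directly on preference pairs against a frozen reference model. The function name, beta value, and the toy log-probabilities are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss: negative log-sigmoid of the scaled gap between
    the policy's and the reference model's log-ratio of the preferred
    (chosen) response over the dispreferred (rejected) one."""
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # Preferred responses should gain probability relative to the reference.
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()

# Toy example with hypothetical per-sequence log-probabilities.
policy_chosen = torch.tensor([-12.0, -9.5])
policy_rejected = torch.tensor([-13.5, -11.0])
ref_chosen = torch.tensor([-12.5, -10.0])
ref_rejected = torch.tensor([-13.0, -10.5])
print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```

Unlike RLHF, this skips training an explicit reward model: the preference pairs shape the policy directly, with beta controlling how far it may drift from the reference model.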
Socials
X

- 🌟 Exciting News in the World of Language Models! 🌟 In a recent paper, researchers have introduced xLLaMA-100 and xBLOOM-100 (xLLMs-100), scaling multilingual capabilities to an impressive 100 languages. By constructing datasets that include a multilingual instruction dataset with 100 languages and a cross-lingual human feedback dataset with 30 languages, these models have achieved a new state-of-the-art in multilingual language understanding and generation. Read the full paper here for details on how xLLMs-100 outperforms its peers across five multilingual benchmarks: http://arxiv.org/abs/2406.01771v1 #LanguageModels #AI #NLP #Multilingual #TechInnovation
- 🌐 Exciting development in the world of multilingual large language models! Introducing xLLMs-100, a new state-of-the-art model supporting 100 languages. Learn more about its remarkable performance and capabilities in this research paper: http://arxiv.org/abs/2406.01771v1 #AI #NLP #LLMs #MultilingualModels 🚀📚