Large Language Models (LLMs) have achieved strong performance in domains such as mathematics, factual QA, and code generation, yet their multilingual reasoning capabilities in these tasks remain underdeveloped. In low-resource languages such as Swahili or Thai, LLMs often misinterpret prompts or default to reasoning in English. This implicit bias toward high-resource languages undermines factual accuracy, interpretability, and trust. Current multilingual benchmarks score only final answers, overlooking whether models actually reason in the target language. To address this gap, we introduce GeoFact-X, a geography-based multilingual factual reasoning benchmark with annotated reasoning traces in five languages: English, Hindi, Japanese, Swahili, and Thai. We further propose BRIDGE, a novel training method that guides supervised fine-tuning and test-time reinforcement learning with a language-consistency reward to align reasoning with the input language. Finally, we develop an automatic evaluation protocol that uses an LLM-as-a-judge to assess answer correctness as well as the quality and language consistency of reasoning traces, enabling nuanced and scalable analysis beyond surface-level metrics. Our results show that BRIDGE significantly improves multilingual reasoning fidelity, demonstrating that reasoning-aware multilingual reinforcement learning is crucial for robust cross-lingual generalization.
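Conceptually, a language-consistency reward of the kind BRIDGE uses can be sketched as below. The Unicode script-range heuristic here is our illustrative assumption, not the paper's implementation; in particular, a script check cannot distinguish Swahili from English, so a real system would use a language-identification model for Latin-script languages.

```python
# Sketch of a language-consistency reward: score the fraction of letters in
# the model's reasoning trace whose Unicode script matches the script of the
# input language. (Assumed reward shape, for illustration only.)

SCRIPT_RANGES = {
    "hi": [(0x0900, 0x097F)],                    # Devanagari
    "ja": [(0x3040, 0x30FF), (0x4E00, 0x9FFF)],  # Kana + common CJK ideographs
    "th": [(0x0E00, 0x0E7F)],                    # Thai
    "en": [(0x0041, 0x005A), (0x0061, 0x007A)],  # Basic Latin letters
    "sw": [(0x0041, 0x005A), (0x0061, 0x007A)],  # Swahili also uses Latin script
}

def language_consistency_reward(trace: str, lang: str) -> float:
    """Return the fraction of letters in `trace` written in `lang`'s script.

    Returns 0.0 for an empty trace. Note the Swahili/English ambiguity:
    both map to Latin ranges, so this heuristic treats them identically.
    """
    ranges = SCRIPT_RANGES[lang]
    letters = [ch for ch in trace if ch.isalpha()]
    if not letters:
        return 0.0
    in_script = sum(
        any(lo <= ord(ch) <= hi for lo, hi in ranges) for ch in letters
    )
    return in_script / len(letters)
```

Such a score could then be combined with an answer-correctness signal (e.g., `total = correctness + lam * consistency` for some weight `lam`) to form the full training reward; the exact combination in BRIDGE may differ.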
We introduce a multilingual factual reasoning dataset designed to evaluate factual accuracy and reasoning across diverse linguistic and cultural contexts. Using Gemini 2.0 Flash, we generate 3,000 unique factual questions (approximately 600 per country) covering locally grounded topics such as history, politics, geography, art, and culture, localized to five geographically distinct countries: the USA, India, Japan, Kenya, and Thailand. The dataset is provided in each country's predominant language: English, Hindi, Japanese, Swahili, and Thai, respectively. Our goal is to capture country-specific factual knowledge and encourage language models to reason effectively within culturally contextualized knowledge spaces.
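A record in such a dataset might look like the following sketch. The field names and example content here are our assumptions for illustration; consult the released dataset for the actual schema.

```python
# Hypothetical GeoFact-X record layout (field names are assumed, not the
# dataset's actual schema).
from dataclasses import dataclass

@dataclass
class GeoFactExample:
    question: str   # factual question, written in `language`
    reasoning: str  # annotated reasoning trace in the same language
    answer: str     # gold final answer
    language: str   # one of: "en", "hi", "ja", "sw", "th"
    country: str    # one of: "USA", "India", "Japan", "Kenya", "Thailand"
    topic: str      # e.g. "history", "politics", "geography", "art", "culture"

# Illustrative example record (content invented for demonstration).
example = GeoFactExample(
    question="Which city served as Japan's capital before Tokyo?",
    reasoning="The imperial court resided in Kyoto until 1868, when it moved to Tokyo.",
    answer="Kyoto",
    language="en",
    country="Japan",
    topic="history",
)
```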
@article{hwang2025learn,
title={Learn Globally, Speak Locally: Bridging the Gaps in Multilingual Reasoning},
author={Hwang, Jaedong and Tanmay, Kumar and Lee, Seok-Jin and Agrawal, Ayush and Palangi, Hamid and Ayush, Kumar and Fiete, Ila R and Liang, Paul Pu},
journal={arXiv preprint arXiv:2507.05418},
year={2025}
}