Large Language Models (LLMs) have achieved strong performance in domains like mathematics, factual question answering, and code generation, yet their ability to reason about these tasks in languages other than English remains underdeveloped. For low-resource languages such as Swahili or Thai in particular, LLMs often misinterpret prompts or default to reasoning in English. This implicit bias toward high-resource languages undermines factual accuracy, interpretability, and trust. We propose M2A, a novel method that combines multi-scale multilingual alignment with language-consistency rewards on machine-translated questions, training models to reason directly and accurately in the target language. Furthermore, existing multilingual benchmarks evaluate only final answers, overlooking whether the reasoning itself occurs in the intended language. To close this gap, we introduce GeoFact-X, a geography-based multilingual factual reasoning benchmark with reasoning traces in five languages: English, Hindi, Japanese, Swahili, and Thai. Our results show that M2A significantly enhances multilingual reasoning fidelity in both mathematical and factual reasoning tasks, highlighting that reasoning-aware multilingual reinforcement learning is crucial for robust cross-lingual generalization.
$$r_\text{context-align} = \max(\cos(z_o, z_y) - \cos(\tilde{z}_o, \tilde{z}_y) + \alpha, 0),$$ where \(z_o\) and \(z_y\) denote the embeddings of the LLM-generated output (given the translated question) and the ground truth, respectively. For multilingual reasoning-step alignment, we first split the generated output \(o\) and the ground truth \(y\) into sentences, \(\mathbf{o} = (o^{(1)}, \ldots, o^{(N)})\) and \(\mathbf{y} = (y^{(1)}, \ldots, y^{(M)})\), respectively, and then match output sentences to ground-truth sentences such that the total similarity score is maximized: $$r_\text{step-align} = \frac{1}{N}\sum_{i=1}^N \mathbf{C}_{i, j_i} = \frac{1}{N}\sum_{i=1}^N \max(\cos(z_o^{(i)}, z_y^{(j_i)}) - \cos(\tilde{z}_o^{(i)}, \tilde{z}_y^{(j_i)}) + \alpha, 0),$$ where \(z_o^{(i)}\) and \(z_y^{(j_i)}\) denote the embeddings of output sentence \(o^{(i)}\) and its matched ground-truth sentence \(y^{(j_i)}\).
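To make the two rewards concrete, here is a minimal NumPy sketch, assuming sentence embeddings are precomputed by some multilingual sentence encoder and that the maximum-similarity matching is done with the Hungarian algorithm via `scipy.optimize.linear_sum_assignment`; the function names, the default margin, and the matching routine are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the context- and step-alignment rewards above (assumptions:
# embeddings are precomputed; Hungarian matching stands in for "total similarity
# is maximized"; function names and defaults are illustrative).
import numpy as np
from scipy.optimize import linear_sum_assignment


def pairwise_cos(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between every row of `a` and every row of `b`."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T


def context_align_reward(z_o, z_y, z_o_tilde, z_y_tilde, alpha=0.1):
    """max(cos(z_o, z_y) - cos(z~_o, z~_y) + alpha, 0) for single embeddings."""
    pos = pairwise_cos(z_o[None, :], z_y[None, :])[0, 0]
    neg = pairwise_cos(z_o_tilde[None, :], z_y_tilde[None, :])[0, 0]
    return max(pos - neg + alpha, 0.0)


def step_align_reward(Z_o, Z_y, Z_o_tilde, Z_y_tilde, alpha=0.1):
    """Match N output sentences to M ground-truth sentences so that the total
    similarity is maximized, then average the hinged margins over N."""
    sim = pairwise_cos(Z_o, Z_y)                    # (N, M), target-language pairs
    sim_tilde = pairwise_cos(Z_o_tilde, Z_y_tilde)  # (N, M), counterpart embeddings
    rows, cols = linear_sum_assignment(sim, maximize=True)  # one-to-one matching
    margins = np.maximum(sim[rows, cols] - sim_tilde[rows, cols] + alpha, 0.0)
    # Note: if N > M, only min(N, M) output sentences receive a match here.
    return float(margins.sum()) / Z_o.shape[0]
```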
We introduce GeoFact-X, a multilingual factual reasoning dataset specifically designed to evaluate factual accuracy and reasoning capabilities across diverse linguistic and cultural contexts. Leveraging Gemini 2.0 Flash, we generate 3,000 unique factual questions (approximately 600 per country) covering locally grounded topics such as history, politics, geography, art, and culture, localized to five geographically distinct countries: the USA, India, Japan, Kenya, and Thailand. The dataset is available in each country's predominant local language: English, Hindi, Japanese, Swahili, and Thai. Our goal is to capture country-specific factual knowledge, encouraging language models to reason effectively within culturally contextualized knowledge spaces.
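For orientation, the snippet below shows a hypothetical record layout for one GeoFact-X-style example; the field names and the Swahili sample are illustrative assumptions and do not reflect the released dataset's actual schema.

```python
# Hypothetical record layout for a GeoFact-X-style example; field names and the
# sample content are illustrative, not the released dataset's actual schema.
from dataclasses import dataclass


@dataclass
class GeoFactExample:
    country: str          # USA, India, Japan, Kenya, or Thailand
    language: str         # en, hi, ja, sw, or th
    question: str         # locally grounded factual question
    reasoning_trace: str  # step-by-step reasoning written in the target language
    answer: str           # final answer in the target language


example = GeoFactExample(
    country="Kenya",
    language="sw",
    question="Mji mkuu wa Kenya ni upi?",  # "What is the capital city of Kenya?"
    reasoning_trace="Kenya ni nchi ya Afrika Mashariki, na mji wake mkuu ni Nairobi.",
    answer="Nairobi",
)
```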
@article{hwang2025learn,
title={Learn Globally, Speak Locally: Bridging the Gaps in Multilingual Reasoning},
author={Hwang, Jaedong and Tanmay, Kumar and Lee, Seok-Jin and Agrawal, Ayush and Palangi, Hamid and Ayush, Kumar and Fiete, Ila R and Liang, Paul Pu},
journal={arXiv preprint arXiv:2507.05418},
year={2025}
}