Evaluating Robustness of LLMs to Numerical Variations in Mathematical Reasoning
Published in The Sixth Workshop on Insights from Negative Results in NLP, 2025
Abstract
Evaluating an LLM’s robustness to numerical perturbations reveals whether the model actually performs reasoning or merely reproduces patterns learned during training. We propose a novel method that augments math word problems (MWPs), using templates to produce numerical variations at scale. We also propose an automated error classification framework for scalable error analysis, distinguishing calculation errors from reasoning errors. Our experiments with these methods show that LLMs are not robust to numerical variations, suggesting they are not fully capable of generating valid reasoning steps and often fail at arithmetic operations.
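The template-based augmentation can be illustrated with a minimal sketch. The template text, the function `answer`, and the helper `generate_variations` below are hypothetical illustrations, not the paper's actual templates or code: the idea is that a problem's text and gold answer are both expressed over numeric slots, so resampling the slot values yields many consistent variants of the same problem.

```python
import random

# Hypothetical MWP template: the numbers are slots, and the gold answer
# is recomputed from the same slot values, keeping each variant consistent.
TEMPLATE = "Tom has {a} apples and buys {b} more. How many apples does he have now?"

def answer(a, b):
    """Gold answer as a function of the numeric slots."""
    return a + b

def generate_variations(n, seed=0):
    """Sample n numerical variants of the template with recomputed answers."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        a, b = rng.randint(2, 99), rng.randint(2, 99)
        variants.append((TEMPLATE.format(a=a, b=b), answer(a, b)))
    return variants

for problem, gold in generate_variations(3):
    print(problem, "->", gold)
```

Because every variant tests the same reasoning steps with different numbers, a model that solves the original but fails the variants is likely pattern-matching rather than reasoning.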
Recommended citation:
Yuli Yang, Hiroaki Yamada, and Takenobu Tokunaga. 2025. Evaluating Robustness of LLMs to Numerical Variations in Mathematical Reasoning. In The Sixth Workshop on Insights from Negative Results in NLP, pages 171–180.