A comprehensive evaluation of semantic relation knowledge of pretrained language models and humans
Published in Language Resources and Evaluation, 2025
Abstract
Recently, much work has concerned itself with the enigma of what exactly pretrained language models (PLMs) learn about different aspects of language, and how they learn it. One stream of this type of research investigates the knowledge that PLMs have about semantic relations. However, many aspects of semantic relations have been left unexplored. Generally, only one relation has been considered, namely hypernymy. Furthermore, previous work did not measure humans’ performance on the same task as that performed by the PLMs. This means that, at this point in time, there is only an incomplete view of the extent of these models’ semantic relation knowledge. To address this gap, we introduce a comprehensive evaluation framework covering five relations beyond hypernymy, namely hyponymy, holonymy, meronymy, antonymy, and synonymy. We use five metrics (two newly introduced here) that target previously untreated aspects of semantic relation knowledge, namely soundness, completeness, symmetry, prototypicality, and distinguishability. These metrics allow us to compare humans and models fairly on the same task. Our extensive experiments involve six PLMs: four masked and two causal language models. The results reveal a significant knowledge gap between humans and models for all semantic relations. In general, causal language models, despite their wide use, do not always perform significantly better than masked language models. Antonymy is the outlier relation, on which all models perform reasonably well.
Recommended citation:
Zhihan Cao, Hiroaki Yamada, Simone Teufel, and Takenobu Tokunaga. "A comprehensive evaluation of semantic relation knowledge of pretrained language models and humans." Language Resources and Evaluation (2025).