November, 2025 — Nara, Japan

Knowledge Base Construction from Pre-trained Language Models

Challenge @ 24th International Semantic Web Conference (ISWC 2025)


News

15.03.2025: First website online

Introduction

LM-KBC Challenge @ ISWC 2025

Task Description

Pretrained language models (LMs) have advanced a range of semantic tasks and have also shown promise for knowledge extraction from the models themselves. Although several works have explored this ability in a setting called probing or prompting, the viability of knowledge base construction from LMs remains underexplored (Hu et al., 2024). In the 4th edition of the LM-KBC challenge, we ask participants to build actual disambiguated knowledge bases from LMs, for given subjects and relations. In a crucial difference from existing probing benchmarks like LAMA (Petroni et al., 2019), we make no simplifying assumptions on relation cardinalities, i.e., a subject entity can stand in relation with zero, one, or many object entities. Furthermore, submissions need to go beyond merely ranking predicted surface strings and must materialize disambiguated entities in the output, which will be evaluated using the established KB metrics of precision and recall.

This year there are two major changes:

  1. Disallowing fine-tuning.
  2. Disallowing external corpora (RAG).

This year's challenge is thus a true exploration of the knowledge contained within a given LLM.
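Under these constraints, a participating system essentially reduces to prompting the fixed model and parsing its raw output into disambiguated entities. A minimal sketch of such a pipeline is shown below; the prompt wording, the JSON-list-of-QIDs output format, and the function names are illustrative assumptions, not the official challenge interface:

```python
import json
import re

def build_messages(subject: str, relation: str) -> list[dict]:
    """Build a chat-style prompt for one subject-relation pair.
    The instruction wording here is a hypothetical example, not
    the official challenge prompt."""
    system = (
        "You are a knowledge base. Answer with a JSON list of "
        "Wikidata QIDs for the objects of the given relation. "
        "Return [] if there are none."
    )
    user = f"Subject: {subject}\nRelation: {relation}\nObjects:"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def parse_objects(model_output: str) -> set[str]:
    """Extract a set of QIDs from the model's raw text,
    falling back to a regex scan if the text is not valid JSON."""
    try:
        items = json.loads(model_output)
        return {s for s in items if isinstance(s, str)}
    except (json.JSONDecodeError, TypeError):
        return set(re.findall(r"Q\d+", model_output))

# A well-formed and a noisy hypothetical model response:
parse_objects('["Q183", "Q142"]')        # parsed as JSON
parse_objects("Answers: Q183 and Q142")  # regex fallback
```

The messages produced by `build_messages` would then be fed through the model's chat template (e.g. via the `transformers` library) to obtain `model_output`; the parsing step is what turns free-form text into the disambiguated entities the evaluation expects.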

Special Features

For comparability, all teams have to use the same LLM, Qwen2.5-7B-Instruct, without any fine-tuning or retrieval augmentation.

The dataset will be released in two parts: a training and development set, and a test set. The test set will be released later than the training set to prevent overfitting. Submissions will be evaluated using the standard KB metrics of precision and recall. The challenge will be run on the CodaLab platform, and the results will be presented at the ISWC conference.
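Set-based precision and recall over disambiguated entities can be sketched as follows. This is a plausible reading of the metrics, not the official scorer: per-pair scores are macro-averaged, and an empty prediction against an empty gold set is assumed to count as a perfect match, since predicting nothing for a zero-cardinality relation is exactly right.

```python
def precision_recall(pred: set[str], gold: set[str]) -> tuple[float, float]:
    """Set-based precision and recall for one (subject, relation) pair.
    Empty-vs-empty scores 1.0/1.0 by assumption (see lead-in)."""
    if not pred and not gold:
        return 1.0, 1.0
    tp = len(pred & gold)  # true positives: correctly predicted entities
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 1.0
    return precision, recall

def macro_f1(pairs: list[tuple[set[str], set[str]]]) -> float:
    """Macro-averaged F1 over all (prediction, gold) pairs."""
    f1s = []
    for pred, gold in pairs:
        p, r = precision_recall(pred, gold)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0
```

For example, predicting {Q1, Q2} when the gold set is {Q1} yields precision 0.5 and recall 1.0; macro-averaging then weights every subject-relation pair equally regardless of its cardinality.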

Calls

Call for Participants

Important Dates

Activity — Date
Dataset (train and dev) release — 30 March 2025
Dataset (test) release — 7 July 2025
Submission of test output and code — 11 July 2025
Submission of system description papers — 18 July 2025
Acceptance and winner announcement — 31 July 2025
Presentations @ ISWC — November 2025

Submission Details

TBA

Organization

Challenge Organizers

Jan-Christoph Kalo

University of Amsterdam

Tuan-Phong Nguyen

MPI for Informatics

Simon Razniewski

ScaDS.AI and TU Dresden

Bohui Zhang

King's College London

Contact

For general questions or discussion please use the Google Group.

Past Editions

Our challenge has been running since 2022. For more information on past editions, please visit the corresponding websites: