Knowledge Base Construction from Pre-Trained Language Models

Workshop @ 24th International Semantic Web Conference (ISWC 2025)

Language models such as ChatGPT, BERT, and T5 have demonstrated remarkable results in numerous AI applications. Research has shown that these models implicitly capture vast amounts of factual knowledge within their parameters, yielding strong performance in knowledge-intensive applications. The seminal paper "Language Models as Knowledge Bases?" sparked interest in the spectrum between language models (LMs) and knowledge graphs (KGs), leading to a diverse range of research on the use of LMs for knowledge base construction, including (i) utilizing pre-trained LMs for knowledge base completion and construction tasks, (ii) performing information extraction tasks such as entity linking and relation extraction, and (iii) utilizing KGs to support LM-based applications.

The 3rd Workshop on Knowledge Base Construction from Pre-Trained Language Models (KBC-LM) aims to give space to the emerging academic community investigating these topics, to host extended discussions around the LM-KBC Semantic Web challenge, and to enable informal exchange among researchers and practitioners.

Important Dates

Papers due: August 2, 2025
Notification to authors: August 28, 2025
Camera-ready deadline: September 4, 2025
Workshop dates: TBA

Topics

We invite contributions on the following topics:

  • Entity recognition and disambiguation with LMs
  • Relation extraction with LMs
  • Zero-shot and few-shot knowledge extraction from LMs
  • Consistency of LMs
  • Knowledge consolidation with LMs
  • Comparisons of LMs for KBC tasks
  • Methodological contributions on training and fine-tuning LMs for KBC tasks
  • Evaluations of downstream capabilities of LM-based KGs in tasks like QA
  • Designing robust prompts for large language model probing

Submissions can be novel research contributions or already published papers (the latter will be presentation-only and not included in the workshop proceedings). Novel research papers can be either full papers (ca. 8-12 pages) or short papers presenting smaller or preliminary results (typically 3-6 pages). We also accept demo and position papers. Check out the LM-KBC challenge for further options to contribute to the workshop.

Submission and Review Process

Papers will be peer-reviewed by at least three researchers in a single-blind review process. Accepted papers will be published on CEUR (unless the authors opt out). Submissions must be formatted according to this template. Please also email a signed copyright form to simon.razniewski@tu-dresden.de (the no-GenAI form, or the GenAI form if generative AI was used).

Keynote Speakers

    TBA

Schedule

    TBA

Program Committee

    TBA

Chairs

Jan-Christoph Kalo
University of Amsterdam
Simon Razniewski
ScaDS.AI & TU Dresden
Sneha Singhania
MPI Informatics
Duygu Sezen Islakoglu
Utrecht University