Language models such as ChatGPT, BERT, and T5 have demonstrated remarkable results in numerous AI applications. Research has shown that these models implicitly capture vast amounts of factual knowledge within their parameters, leading to strong performance on knowledge-intensive tasks. The seminal paper "Language Models as Knowledge Bases?" sparked interest in the spectrum between language models (LMs) and knowledge graphs (KGs), leading to a diverse range of research on the use of LMs for knowledge base construction, including (i) utilizing pre-trained LMs for knowledge base completion and construction tasks, (ii) performing information extraction tasks, such as entity linking and relation extraction, and (iii) utilizing KGs to support LM-based applications.
The 3rd Workshop on Knowledge Base Construction from Pre-Trained Language Models (KBC-LM) aims to give space to the emerging academic community investigating these topics, to host extended discussions around the LM-KBC Semantic Web challenge, and to enable informal exchange among researchers and practitioners.
| Papers due | August 2, 2025 |
| Notification to authors | August 28, 2025 |
| Camera-ready deadline | September 4, 2025 |
| Workshop dates | TBA |
We invite contributions on the following topics:
Submissions can be novel research contributions or already published papers (the latter will be presentation-only and not included in the workshop proceedings). Novel research papers can be either full papers (ca. 8-12 pages) or short papers presenting smaller or preliminary results (typically 3-6 pages). We also accept demo and position papers. Also check out the LM-KBC challenge for further options to contribute to the workshop.
Papers will be peer-reviewed by at least three researchers in a single-blind review process. Accepted papers will be published on CEUR (unless the authors opt out).
Submissions must be formatted according to this template. Please also email a signed copyright form to simon.razniewski@tu-dresden.de (form if no GenAI was used / form if using GenAI).
Submit your papers on OpenReview.
Juan Sequeda is the Principal Scientist and Head of the AI Lab at data.world. He holds a PhD in Computer Science from The University of Texas at Austin. Juan's research and industry work has been at the intersection of data and AI, with the goal of reliably creating knowledge from inscrutable data, specifically designing and building knowledge graphs for enterprise data and metadata management. Juan is the co-author of the book "Designing and Building Enterprise Knowledge Graphs" and the co-host of Catalog and Cocktails, an honest, no-bs, non-salesy data podcast.
Juan has researched and developed technology on semantic data virtualization, graph data modeling, schema mapping, and data integration methodologies. He pioneered technology to construct knowledge graphs from relational databases, resulting in W3C standards, research awards, patents, software, and his startup Capsenta, which was acquired by data.world in 2019. Juan is the recipient of the NSF Graduate Research Fellowship; received 2nd place in the 2013 Semantic Web Challenge for his work on ConstituteProject.org, the Best Student Research Paper award at the 2014 International Semantic Web Conference (ISWC), the 2015 Best Transfer and Innovation Project award from the Institute for Applied Informatics, and the 2023 Best Industry Paper award at SIGMOD; and was nominated two additional times for best paper at ISWC.
Juan strives to build bridges between academia and industry, having served as co-chair of the LDBC Property Graph Schema Working Group, as a member of the LDBC Graph Query Languages task force, and as a standards editor at the World Wide Web Consortium (W3C). He remains an active member of the scientific community, serving on the editorial boards and program committees of scientific journals and conferences in Semantic Web, Knowledge Graphs, Databases, and AI, and organizing various academic and industry conferences, including serving as General Chair of The ACM Web Conference 2023.