
ESWC Tutorial

Learn how to scale your Linked Data evaluations and applications to hundreds of thousands of datasets.

This tutorial will focus on obtaining hands-on experience with LOD Lab: a new evaluation paradigm that makes algorithmic evaluation against hundreds of thousands of datasets the new norm. The LOD Lab approach builds on the award-winning LOD Laundromat architecture, HDT technology, and the LOTUS text index. The intended audience for this tutorial includes all Semantic Web practitioners who need to run evaluations on Linked Open Data, or who would otherwise benefit from being able to easily process large volumes of Linked Data.


  • Introduction: The importance of LOD evaluations
  • LOTUS: From natural language text to resources
  • Index: From resource to all documents about that resource
  • HDT+LDF+Frank: From resource to all triples about that resource
  • Metadata: Filter data with specific (graph) properties
  • Discussion: The future of (tooling for) LOD evaluations
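The lookup pipeline these steps describe, from a natural-language query to the triples about a matching resource, can be sketched with toy in-memory indexes. All data structures and names below are illustrative stand-ins, not the actual LOTUS, LOD Laundromat, or Frank APIs:

```python
# Toy sketch of the LOD Lab lookup pipeline:
# text -> resources -> documents -> triples.
# Every index here is an illustrative in-memory dict, not a real LOD Lab API.

# LOTUS-style full-text index: lowercased term -> matching resources (IRIs).
TEXT_INDEX = {
    "amsterdam": ["http://example.org/Amsterdam"],
}

# LOD Laundromat-style index: resource -> documents that mention it.
DOC_INDEX = {
    "http://example.org/Amsterdam": ["doc-42"],
}

# HDT/LDF-style per-document store: document -> its triples.
TRIPLES = {
    "doc-42": [
        ("http://example.org/Amsterdam",
         "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
         "http://example.org/City"),
    ],
}

def lookup(term):
    """Resolve a text term to all triples mentioning a matching resource."""
    results = []
    for resource in TEXT_INDEX.get(term.lower(), []):
        for doc in DOC_INDEX.get(resource, []):
            results.extend(t for t in TRIPLES[doc] if resource in t)
    return results

print(lookup("Amsterdam"))
```

The point of the layered design is that each step narrows the search space: the text index avoids scanning all resources, and the document index avoids scanning all datasets for a given resource.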

Covered technologies

LOD Laundromat
Cleaning and republishing service for Linked Open Data.
Header Dictionary Triples (HDT)
Queryable compression format for Linked Data.
Frank
Command-line interface to the LOD Cloud.
LOTUS
Full-text index over the LOD Cloud.
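HDT's core idea, replacing repeated IRI and literal strings with integer IDs drawn from a shared dictionary so that triples shrink to small integer tuples, can be illustrated in a few lines. This is a simplified sketch: real HDT additionally splits the dictionary into sections and stores the ID triples as bitmap-encoded trees.

```python
# Simplified sketch of HDT-style dictionary compression for RDF triples.
# Real HDT uses sectioned dictionaries and bitmap-encoded triple trees;
# here every distinct term simply gets one integer ID.

def encode(triples):
    """Map each distinct term to an integer ID; return (dictionary, ID triples)."""
    dictionary = {}
    encoded = []
    for s, p, o in triples:
        ids = []
        for term in (s, p, o):
            if term not in dictionary:
                dictionary[term] = len(dictionary) + 1
            ids.append(dictionary[term])
        encoded.append(tuple(ids))
    return dictionary, encoded

def decode(dictionary, encoded):
    """Invert the dictionary and reconstruct the original string triples."""
    inverse = {i: term for term, i in dictionary.items()}
    return [tuple(inverse[i] for i in ids) for ids in encoded]

triples = [
    ("ex:Amsterdam", "rdf:type", "ex:City"),
    ("ex:Amsterdam", "ex:in", "ex:Netherlands"),
]
dictionary, encoded = encode(triples)
print(encoded)  # -> [(1, 2, 3), (1, 4, 5)]
```

Because IRIs repeat heavily across a dataset, the dictionary is stored once and each triple costs only three small integers, which is what makes the format both compact and directly queryable.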


Wouter Beek
Developer of the LOD Laundromat
VU University Amsterdam
Javier D. Fernández
Developer of Header Dictionary Triples (HDT)
Vienna University of Economics and Business
Paul Groth
Elsevier Labs
Filip Ilievski
Developer of LOTUS
VU University Amsterdam