The LOD Laundromat provides access to all Linked Open Data (LOD) in the world. It does this by crawling the LOD cloud and converting all its contents into a single standards-compliant format (gzipped N-Triples), removing data stains such as syntax errors, duplicates, and blank nodes along the way.
The LOD Laundry Basket contains the URLs of dirty datasets that are waiting to be cleaned by the LOD Washing Machine. Here you can add the URL of a dataset to the LOD Laundry Basket for cleaning.
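As a sketch, seeding the Laundry Basket could look like the snippet below. The endpoint URL and the `url` form parameter are assumptions for illustration, not the documented API; consult the site itself for the actual submission mechanism.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical seeding endpoint -- check the LOD Laundromat site for the real one.
BASKET_ENDPOINT = "http://lodlaundromat.org/basket"

def seed_request(dataset_url: str) -> Request:
    """Build (but do not send) an HTTP POST that adds a dirty
    dataset URL to the LOD Laundry Basket."""
    body = urlencode({"url": dataset_url}).encode("utf-8")
    return Request(BASKET_ENDPOINT, data=body, method="POST")

req = seed_request("http://example.org/dumps/my-dataset.ttl")
```

The request is only constructed here, not sent, so the sketch stays side-effect free.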
The LOD Wardrobe is where the cleaned data is stored. You can download both the clean data and the dirty (i.e. original) data. Each data document comes with a meta-data description that lists all the stains that were detected, e.g. wrong HTTP headers and syntax errors.
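Since every clean document is gzipped N-Triples, it can be streamed line by line with nothing but the standard library. The snippet below builds a tiny in-memory document in that format (the example IRIs are made up) and reads it back one triple at a time:

```python
import gzip
import io

# A tiny stand-in for a cleaned data document: gzipped N-Triples,
# one triple per line, each line terminated by " ."
clean_document = gzip.compress(
    b'<http://example.org/s> <http://example.org/p> "o" .\n'
    b'<http://example.org/s> <http://example.org/q> <http://example.org/o> .\n'
)

# Stream the document without decompressing it all into memory;
# for a real download, pass the open file instead of BytesIO.
with gzip.open(io.BytesIO(clean_document), "rt", encoding="utf-8") as f:
    triples = [line.strip() for line in f if line.strip()]

print(len(triples))  # number of triples in the document
```

One triple per line is exactly what makes the format convenient for large dumps: it can be split, filtered, and counted with ordinary line-oriented tools.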
How much data did we clean?
How many "socks" (triples) did we lose in the LOD Washing Machine?
Which RDF serialization formats did we come across?
This is where we show such LOD Analytics.
For an in-depth overview of the data cleaned by the LOD Laundromat, we provide a live SPARQL endpoint where all meta-data can be queried.
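A meta-data query against such an endpoint could be posed over HTTP as sketched below. The endpoint URL is assumed from context, and the query itself is hypothetical (the real vocabulary for stains and documents is defined by the endpoint, not here):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed endpoint address; the live one is linked from the LOD Laundromat site.
ENDPOINT = "http://lodlaundromat.org/sparql"

# Hypothetical meta-data query: how many data documents exhibit each stain?
# The property linking a document to a stain is a placeholder variable here.
QUERY = """
SELECT ?stain (COUNT(?doc) AS ?docs)
WHERE { ?doc ?hasStain ?stain . }
GROUP BY ?stain
"""

def sparql_request(endpoint: str, query: str) -> Request:
    """Build (but do not send) a SPARQL GET request asking for JSON results,
    per the SPARQL 1.1 Protocol's query-via-GET convention."""
    url = endpoint + "?" + urlencode({"query": query})
    return Request(url, headers={"Accept": "application/sparql-results+json"})

req = sparql_request(ENDPOINT, QUERY)
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON result set that any SPARQL client library can also produce.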
This is great! I think... But what exactly do you do? How can I use it?