I'm Ruben Taelman, a postdoctoral Web researcher at IDLab,
with a focus on decentralization, Linked Data publishing, and querying.
My goal is to make data accessible for everyone by providing
intelligent infrastructure and algorithms for data publication and retrieval.
As this website itself contains Linked Data, you can query it live with Comunica.
Have a look at my publications or projects
and contact me if any of those topics interest you.
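As a sketch of what such a live query could look like, the SPARQL query below asks for people described in a Linked Data document. The use of the FOAF vocabulary and these specific triple patterns are assumptions for illustration, not a guaranteed shape of this site's data:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# List people described in the document, with their homepages if present.
SELECT ?name ?homepage WHERE {
  ?person foaf:name ?name .
  OPTIONAL { ?person foaf:homepage ?homepage }
}
```

A SPARQL engine such as Comunica can evaluate a query like this directly against a Linked Data document given as a source.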
Latest blog posts
Querying a Decentralized Web
The road towards effective query execution of Decentralized Knowledge Graphs.
Most of today’s applications are built around the assumption that data is centralized. However, with recent decentralization efforts such as Solid quickly gaining popularity, we may be evolving towards a future where data is massively decentralized. Enabling applications over decentralized data requires new querying techniques that can execute effectively over it. This post discusses the impact of decentralization on query execution, and the problems that need to be solved before we can query effectively in a decentralized Web.
5 rules for open source maintenance
Guidelines for publishing and maintaining open source projects.
Thanks to continuing innovation in software development tools and services, it has never been easier to start a software project and publish it under an open license. While this has led to a huge arsenal of open source software projects, the number of high-quality projects worth reusing is significantly smaller. Based on personal experience, I provide five guidelines in this post that will help you publish and maintain high-quality open source software.
Who says using RDF is hard?
A story of streaming RDF parsers
The W3C currently recommends multiple serialization formats for representing RDF. JSON-LD and RDF/XML are examples of RDF serializations based on the JSON and XML formats, respectively. The ability to parse RDF serializations in a streaming way offers many advantages, such as handling huge documents with only a limited amount of memory, and processing elements as soon as they are parsed. In this post, I discuss the motivation behind my streaming parser implementations for JSON-LD and RDF/XML, describe their architecture, and show some live examples.
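To make the streaming idea concrete, here is a minimal Python sketch that parses N-Triples, the simplest line-delimited RDF serialization, one line at a time. (The post's parsers target the much harder JSON-LD and RDF/XML cases; this simplified grammar, which omits blank nodes, language tags, and datatypes, is an assumption of the sketch.)

```python
import re

# A term is either an IRI or a plain literal (simplified grammar).
TERM = r'<[^>]*>|"[^"]*"'
TRIPLE = re.compile(rf'\s*({TERM})\s+({TERM})\s+({TERM})\s*\.\s*$')

def stream_triples(lines):
    """Yield (subject, predicate, object) as soon as each line is parsed,
    keeping only one line in memory at a time."""
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue  # skip blank lines and comments
        match = TRIPLE.match(line)
        if match:
            yield match.group(1), match.group(2), match.group(3)

doc = [
    '<http://example.org/alice> <http://xmlns.com/foaf/0.1/name> "Alice" .',
    '<http://example.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob> .',
]
for s, p, o in stream_triples(doc):
    print(s, p, o)  # each triple is emitted as soon as its line is read
```

Because the generator yields triples as they are parsed, a consumer can process a document far larger than available memory, which is exactly the property that makes streaming parsers attractive for the tree-shaped JSON-LD and RDF/XML formats, where achieving it is much harder.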
- Journal: Components.js: Semantic Dependency Injection
  In Semantic Web Journal
- Journal: Optimizing Storage of RDF Archives using Bidirectional Delta Chains
  In Semantic Web Journal
- Conference: Comunica: a Modular SPARQL Query Engine for the Web
  In Proceedings of the 17th International Semantic Web Conference
- Journal: Triple Storage for Random-Access Versioned Querying of RDF Archives
  In Journal of Web Semantics
- Journal: Distributed Subweb Specifications for Traversing the Web
  In Theory and Practice of Logic Programming
- Conference: Scaling Large RDF Archives To Very Long Histories
  In Proceedings of the 17th IEEE International Conference on Semantic Computing
- Workshop: A Prospective Analysis of Security Vulnerabilities within Link Traversal-Based Query Processing
  In Proceedings of the 6th International Workshop on Storing, Querying and Benchmarking Knowledge Graphs
- Demo: Solid Web Monetization
  In International Conference on Web Engineering
- Workshop: A Policy-Oriented Architecture for Enforcing Consent in Solid
  In Proceedings of the 2nd International Workshop on Consent Management in Online Services, Networks and Things