In this demo, we showcase the application of the metric dief@t
to measure the continuous efficiency (or diefficiency)
of SPARQL query engines.
To measure the diefficiency of approaches, the metrics dief@t and dief@k
compute the area under the curve of answer traces. Answer traces record
the points in time at which an approach produces each answer. The plot (on the right
side) depicts the answer traces of three approaches when executing a query.
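The area-under-the-curve idea can be sketched in a few lines. The following Python snippet (an illustrative sketch, not the authors' implementation, which is provided as an R package) treats an answer trace as a step function "answers produced so far" over time and integrates it up to a time limit t:

```python
import numpy as np

def dief_at_t(times, t):
    """Sketch of dief@t: area under the answer trace up to time t.

    `times` is a sorted sequence of timestamps (in seconds) at which an
    approach produced each answer. The trace is the step function
    "number of answers produced so far"; dief@t is its area on [0, t].
    """
    times = np.asarray([x for x in times if x <= t], dtype=float)
    n = len(times)
    if n == 0:
        return 0.0
    counts = np.arange(1, n + 1)  # answers produced so far at each event
    # Each step holds value counts[i] from times[i] until the next
    # answer event (or until t for the last event).
    widths = np.diff(np.append(times, t))
    return float(np.sum(counts * widths))
```

Under this reading, a higher dief@t indicates better continuous behavior within the first t time units: the approach delivers more answers sooner.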
We compare the performance of the nLDE
query engine when executing
SPARQL queries with three different configurations:
Not Adaptive, Random, and Selective.
We executed the SPARQL queries from Benchmark 1
using nLDE and recorded two outputs:
traces: Contains the answer trace per query and approach.
metrics: Reports the time for the first tuple, the execution time, and the number of answers produced per query and approach.
These outputs are available as CSV files at https://doi.org/10.6084/m9.figshare.5008289.
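As a rough illustration of how such per-answer traces can be loaded and filtered, the Python/pandas sketch below builds a tiny in-memory stand-in for the traces CSV. The column names (query, approach, answer, time) are assumptions for illustration; inspect the header of the actual file from the figshare DOI before relying on them.

```python
import io
import pandas as pd

# Tiny in-memory stand-in for the published traces CSV (one row per
# produced answer). Column names here are assumptions, not the
# guaranteed schema of the real file.
raw = io.StringIO(
    "query,approach,answer,time\n"
    "Q1,Selective,1,0.10\n"
    "Q1,Selective,2,0.25\n"
    "Q1,NotAdaptive,1,0.40\n"
)
traces = pd.read_csv(raw)

# Answer trace of one approach for one query: timestamp of each answer.
mask = (traces["query"] == "Q1") & (traces["approach"] == "Selective")
trace = traces.loc[mask, ["answer", "time"]]
print(trace)
```

Plotting `time` against `answer` for each approach reproduces the kind of answer-trace plot described above.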
As part of this demo, we provide an
R package, available at
. To illustrate the usage of the package, we show snippets of
the functions it provides
to generate the reported results.
Overall, we analyze the performance of the nLDE variants when
executing different SPARQL queries.