This documentation is a work in progress.
The Web is not only growing in sheer size; it is also becoming ever more interconnected. Where once the Web was a set of more or less separate sites, today sites are increasingly connected: more data is being offered on the Web in ways that can be further processed, more sites and applications are using external data, and more mashups are being created that integrate data from different sources and display it with novel visualisations.
Spark is a library that enables HTML authors to create mashups more easily than ever before. Using standard Web technologies like SPARQL, RDF, HTML5, and JavaScript, Spark can query external knowledge sources (so-called triple stores or SPARQL endpoints) and then visualise the results.
With Spark, website developers can create visually appealing mashups without writing a single line of JavaScript. They merely add a few markup attributes describing the source of the data to be shown, a query selecting the appropriate data, and one of an expandable set of visualisations together with its parameters.
Spark requires jQuery to run. You can include jQuery from the Web, e.g. from Google's Content Delivery Network, like this:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
For development, we are using jQuery version 1.4.4, but Spark should work with a reasonable range of versions.
Once jQuery is included, there are two ways to include Spark into your website: 1) include it from our website, or 2) download it and host it locally. To include it from our website, put the following statement after the jQuery include.
<script src="http://km.aifb.kit.edu/sites/spark/src/jquery.spark.js"></script>
If you download the library yourself, upload it to your site and simply include
the jquery.spark.js
script from wherever you have uploaded it.
Note that some formats may require you to include further JavaScript or CSS files in order to run properly. These additional files should be listed in the documentation of the format.
Spark also requires a reasonably modern browser to run. Currently, Internet Explorer is not supported due to problems with cross-site data access. We are hoping to resolve that soon.
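Putting the two includes together, a minimal page skeleton might look like the following sketch. Only the two script URLs are taken from above; everything else is ordinary boilerplate.

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- jQuery must be included before Spark -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
    <script src="http://km.aifb.kit.edu/sites/spark/src/jquery.spark.js"></script>
  </head>
  <body>
    <!-- Spark-enabled elements go here; see the mark-up section below -->
  </body>
</html>
```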
Once included, HTML elements can be marked up for Spark to process. In order to do that, you need to add a
class="spark"
to your element, e.g. like this:
<span class="spark">
Now you need to add the parameters for the call to Spark. The order of the parameters does not matter, but each parameter may be used at most once.
The following parameters are generally available:
data-spark-endpoint
: The URL of the SPARQL endpoint to be queried. By default, the qcrumb.com endpoint will be used.
data-spark-rdf
: The URL of the RDF file with the data to be queried. This can be omitted if a SPARQL endpoint is given.
data-spark-format
: The format to use for rendering the result. You can also use your own format by entering here the URL of the JavaScript file containing the formatter. If omitted, the simple formatter is assumed.
data-spark-query
: The SPARQL query to be executed. Note that the query does not need to be complete with all namespace declarations etc.; namespace declarations can be added via the following parameter.
data-spark-ns-*
: Namespace declaration. Replace * with the namespace prefix; the attribute value gives the actual namespace. Note that Spark already declares a small set of namespaces for convenience.
data-spark-param-*
: Further parameters as used by the given format. These parameters are documented in the documentation of the respective format.
Further parameters may be available for a given format. These parameters are described in the documentation of the respective format.
Note that most formats will remove whatever content the element previously had. It may thus be useful to give the element placeholder content indicating that it will be replaced (e.g. a text like loading…).
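Combining the attributes above, a complete element could look like the following sketch. The endpoint URL and the query are purely illustrative (here: the public DBpedia endpoint and a FOAF name query); the attribute names are the ones documented above.

```html
<span class="spark"
      data-spark-endpoint="http://dbpedia.org/sparql"
      data-spark-format="simple"
      data-spark-ns-foaf="http://xmlns.com/foaf/0.1/"
      data-spark-query="SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10">
  loading…
</span>
```

The text loading… serves as the placeholder content mentioned above and is replaced once the result has been rendered.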
Spark can also be used directly from JavaScript, without the additional mark-up. Spark adds a new function
spark
to the jQuery element wrapper, which can be called with a single parameter, an options object. If
spark
is called with just a string, the string is assumed to be the SPARQL query and all other options are set to their default values.
The options object may include the following fields:
endpoint
: The URL of the SPARQL endpoint to be queried. By default, the qcrumb.com endpoint at
http://qcrumb.com/sparql
will be used.
format
: The URL (or name) of the formatter to be used. By default,
simple
will be used.
rdf
: The URL of the RDF file to be queried. The default is blank.
ns
: An object where the keys are the prefixes and the values are the resolving namespaces. By default, the standard namespaces for rdf, rdfs, owl, rif, foaf, dbp, db, geo, and dc are already defined.
param
: An object holding the parameters for the formatter. By default, the object is empty. The parameters have the same names as in the mark-up; their documentation can be found in the respective format definition.
Whereas the mark-up version is obviously preferable for more or less static content, the JavaScript interface also allows Spark to be used for dynamic browsing.
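For illustration, the JavaScript interface could be used as follows. The string shorthand is documented above; in the options-object variant, all field names except query are taken from the list above, while query itself is an assumption inferred from the string shorthand, and the endpoint URL and query are illustrative.

```javascript
// Shorthand: a plain string is taken as the SPARQL query; all other
// options keep their defaults (qcrumb.com endpoint, simple format).
$('#result').spark('SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10');

// Full options object. Note: "query" as the field name for the query
// itself is an assumption, not documented above.
$('#result').spark({
  endpoint: 'http://dbpedia.org/sparql',          // illustrative endpoint
  format:   'simple',
  ns:       { foaf: 'http://xmlns.com/foaf/0.1/' },
  param:    {},
  query:    'SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10'
});
```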
Spark already ships with a small set of result formats.
The simple formatter returns a flat list of all results.
The ul formatter returns all results as an unordered list, using the HTML
<ul>
element.
The count formatter returns merely the number of results.
The simple table formatter returns a simple table with one column per SPARQL variable. This is obviously too simple for real use, but the formatter provides both an example of how to write a format and a rather verbose, easy-to-understand rendering of the result set that may be useful for debugging.
You can easily extend the set of formats. The code documentation of the simpletable format provides a heavily documented example and adds further notes on how to write your own formatter.
Planned: an extension of MediaWiki to include Spark. Will be linked from here.
Planned: an extension of Drupal to include Spark. Will be linked from here.
Planned: an extension of WordPress to include Spark. Will be linked from here.
See example gallery.
Spark is currently available as a pre-release version, i.e. merely a developer preview. There are a number of open issues that need to be resolved before we can go for a proper release:
We would be very happy if more developers joined the further development of Spark, especially on the tasks given above.
For more information, see the Google code page.
The main developers of Spark are researchers. We don't ask you for money in order to use Spark (though if you like it, we sure don't mind if you show us your appreciation in a monetary way), but instead we would be very happy if you use it widely and acknowledge it.
One way to acknowledge us is to link to us. If you create little buttons that can be used for that, please let us know so we can point to them and others can use them too.
If you are writing a paper and want to cite Spark, you can use the following citation (currently it is merely a tech report, as yet unpublished):
Denny Vrandecic, Andreas Harth: Visualising SPARQL result sets with Spark. Tech Report. Karlsruhe, Institute AIFB, KIT, 2011.