The documentation is work in progress.

  1. Introduction and background
  2. How to include Spark
  3. Spark markup
  4. Using Spark from JavaScript
  5. Spark query result formats
  6. Write your own Spark query result format
  7. Spark in MediaWiki
  8. Spark in Drupal
  9. Spark in WordPress
  10. Examples
  11. Spark development
  12. How to acknowledge / cite Spark

Introduction and background

The Web is not only growing in sheer size, but also in how interconnected it is. Where the Web was once a set of more or less separate sites, sites today are increasingly connected: more and more data is offered on the Web in a form that can be processed further, more and more sites and applications use external data, and more and more mashups are created that integrate data from different sources and display it with novel visualisations.

Spark is a library that enables HTML authors to create mashups more easily than ever before. Using standard Web technologies like SPARQL, RDF, HTML5, and JavaScript, Spark can query external knowledge sources (so-called triple stores or SPARQL endpoints) and then visualise the results.

With Spark, website developers can create visually appealing mashups without writing a single line of JavaScript: they merely add some markup elements that describe the source of the data to be shown, give a query to select the appropriate data, and choose one from an expandable set of visualisations along with its parameters.

Further references

How to include Spark

Spark requires jQuery to run. You can include jQuery from the Web, e.g. from Google's Content Delivery Network, like this:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>

For development, we are using jQuery version 1.4.4, but Spark should work with a reasonable range of versions.

Once jQuery is included, there are two ways to include Spark in your website: 1) include it from our Website, or 2) download it and include it locally. To include it from our Website, put the following statement after the inclusion of the jQuery library:

<script src=""></script>

If you download the library yourself, upload it to your site and simply include the jquery.spark.js script from wherever you have uploaded it.

Note that some formats may require you to include further JavaScript or CSS files in order to run properly. These additional files should be listed in the documentation of the format.

Spark also requires a reasonably modern browser to run. Currently, Internet Explorer is not supported due to problems with cross-site data access. We are hoping to resolve that soon.
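Putting the pieces together, a minimal page that loads jQuery and then Spark might look like the following sketch. The path to jquery.spark.js is a placeholder for wherever you uploaded the library:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- jQuery from Google's CDN; Spark was developed against 1.4.4 -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
    <!-- Placeholder path: point this at wherever you uploaded jquery.spark.js -->
    <script src="js/jquery.spark.js"></script>
  </head>
  <body>
    <!-- Spark-enabled elements go here -->
  </body>
</html>
```

The order matters: jQuery must be loaded before jquery.spark.js, since Spark extends the jQuery element wrapper.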

Spark markup

Once Spark is included, HTML elements can be marked up for Spark to process. To do so, add class="spark" to the element, e.g. like this: <span class="spark">. Then add the parameters for the call to Spark. The order of the parameters does not matter, and each parameter may be used only once. The following parameters are generally available:

Further parameters may be available for a given format. These parameters are described in the documentation of the respective format.

Note that most formats will replace whatever content the element previously had. It can therefore be useful to give the plain HTML element some content indicating that it will be replaced (e.g. text like loading…).
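As a sketch, a marked-up element could look like the following. Note that the attribute names used here (for the endpoint, the query, and the format) are purely illustrative assumptions, not Spark's actual parameter names; consult the parameter list and the documentation of the respective format for the real names:

```html
<!-- Hypothetical attribute names: the endpoint, query, and format
     attributes below are stand-ins for Spark's real parameters. -->
<span class="spark"
      data-endpoint="http://example.org/sparql"
      data-query="SELECT ?x WHERE { ?x a ?type } LIMIT 10"
      data-format="ul">loading…</span>
```

The element's initial content ("loading…") serves as the placeholder recommended above and is replaced once the query result arrives.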

Using Spark from JavaScript

Spark can also be used directly from JavaScript, without the additional markup. Spark adds a new function, spark, to the jQuery element wrapper; it can be called with a single parameter, an options object. (If spark is called with only a string, the string is assumed to be the SPARQL query, and all other options are set to their default values.)

The options object may include the following fields:

Whereas the markup version is obviously preferable for more or less static content, the JavaScript interface also lets you use Spark dynamically.
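For illustration, a call through the JavaScript interface might look like this sketch. Only the string shorthand is documented above; the field names endpoint and format in the options object are assumptions for illustration, so check the options field list for the actual names:

```javascript
// Sketch only: assumes jQuery and jquery.spark.js are already loaded.
// The option field names "endpoint" and "format" are illustrative guesses.
$('#result').spark({
  endpoint: 'http://example.org/sparql',                // hypothetical field name
  query: 'SELECT ?x WHERE { ?x a ?type } LIMIT 10',
  format: 'ul'                                          // hypothetical field name
});

// Documented shorthand: a plain string is treated as the SPARQL query,
// with all other options left at their default values.
$('#result').spark('SELECT ?x WHERE { ?x a ?type } LIMIT 10');
```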

Spark query result formats

Spark already ships with a small set of result formats.

The simple format returns a flat list of all results.

The ul format returns all results as an unordered list, using the HTML <ul> element.

The count format returns merely the number of results.

The simpletable format returns a simple table with one column per SPARQL variable. This is obviously too simple for real usage, but the format provides both an example of how to write a format and a rather verbose, easy-to-understand rendering of the result set that may be useful for debugging.

Write your own Spark query result format

You can easily extend the set of formats. The code documentation of the simpletable format provides a heavily documented example and adds further notes on how to write your own formatter.
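To give a feel for the shape of such a format, here is a minimal, self-contained sketch: a function that turns a result set in the standard SPARQL Query Results JSON format into a <ul> HTML string, similar to what the ul format produces. How a format is actually registered with Spark is not shown; follow the simpletable code documentation for the real extension hooks.

```javascript
// Sketch of a format body: render a SPARQL JSON result set as an
// unordered list. Registering this with Spark is not shown here; see the
// simpletable format's code documentation for how extension really works.
function ulFormat(resultSet) {
  var vars = resultSet.head.vars;            // variable names from the query
  var bindings = resultSet.results.bindings; // one object per result row
  var items = bindings.map(function (row) {
    // Join the bound values of all variables in this row.
    var values = vars.map(function (v) {
      return row[v] ? row[v].value : '';
    });
    return '<li>' + values.join(', ') + '</li>';
  });
  return '<ul>' + items.join('') + '</ul>';
}

// Example input in the W3C SPARQL Query Results JSON format:
var example = {
  head: { vars: ['name'] },
  results: { bindings: [
    { name: { type: 'literal', value: 'Alice' } },
    { name: { type: 'literal', value: 'Bob' } }
  ] }
};
// ulFormat(example) → '<ul><li>Alice</li><li>Bob</li></ul>'
```

A format is thus essentially a mapping from the parsed result set to an HTML fragment that replaces the marked-up element's content.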

Spark in MediaWiki

Planned: an extension of MediaWiki to include Spark. Will be linked from here.

Spark in Drupal

Planned: an extension of Drupal to include Spark. Will be linked from here.

Spark in WordPress

Planned: an extension of WordPress to include Spark. Will be linked from here.


Examples

See the example gallery.

Spark development

Spark is currently available as a pre-release version, i.e. merely a developer preview. A number of open issues need to be resolved before we can go for a proper release:

  1. Spark currently does not work with Internet Explorer and Opera due to cross-site data access restrictions. Spark should run on all major browsers.
  2. Spark currently assumes that the SPARQL endpoint is capable of returning a JSON result set. It should also be able to deal with the standard XML result set.
  3. Spark currently does not offer an easy way to define behavior for error cases, e.g. when the endpoint does not give a proper answer or a certain URL cannot be resolved.
  4. Spark currently features only a tiny set of formats. Further formats can be written and included in the standard release.

We would be very happy if more developers joined the further development of Spark, especially on the tasks given above.

For more information, see the Google code page.

How to acknowledge / cite Spark

The main developers of Spark are researchers. We don't ask you for money to use Spark (though if you like it, we sure don't mind if you show us your appreciation in a monetary way); instead, we would be very happy if you used it widely and acknowledged it.

One way to acknowledge us is to link to us. If you create little buttons that can be used for that, please let us know so we can point to them and others can use them too.

If you are writing a paper and want to cite Spark, you can use the following citation (currently it is merely a tech report, as yet unpublished):

Denny Vrandecic, Andreas Harth: Visualising SPARQL result sets with Spark. Tech Report. Karlsruhe, Institute AIFB, KIT, 2011.

For questions feel free to mail Denny and Andreas.