EON2007

Evaluation of Ontologies and Ontology-based tools

5th International EON Workshop

Located at the 6th International Semantic Web Conference (ISWC 2007)
November 11th, 2007 (Workshop day)
Busan Exhibition and Convention Centre, Busan, Korea

Supported by KnowledgeWeb and NeOn

Published as CEUR-WS Vol-329


Here you will find the previous EON workshops: EON2006, EON2004, EON2003, and EON2002.

Here you will find this year's proceedings.


Objectives

The successful series of EON workshops has provided a forum for discussing and advancing technology evaluation in the Semantic Web. Ontologies and Semantic Web technologies are moving towards industrial application, and thus evaluations and benchmarks must become available to a broader range of users and developers. The main goal of this workshop is to gather, share and reuse methods, tools and metrics for Web Ontology Evaluation and Semantic Web Technology Evaluation (ontology development tools, ontology merging and alignment tools, ontology-based annotators, etc.).

A well-understood notion of Ontology Evaluation and Semantic Web Technology Evaluation will foster a consistent level of quality and thus acceptance by industry and the web community. Achieving this, however, requires that results and lessons learned can be reused by others. Practitioners want to evaluate their tools, and want benchmark suites available for doing so in order to minimize the evaluation cost. Some benchmark suites are already widely used by the community (such as LUBM), but users and developers often do not know where to find these benchmark suites, what they are intended for, or how to use them correctly.

This event is supported by Knowledge Web and NeOn.


Programme

To ensure a creative atmosphere during the workshop, presenters will be selected on the basis of their submitted papers and demonstrations. To foster an intensive exchange of ideas among the participants, ample time will be reserved for discussion.

The previous workshops (EON2002, EON2003 and EON2004) proposed a series of experiments for evaluating different aspects of ontology tools, e.g. their expressiveness and interoperability capabilities, as well as different aspects of ontologies. The aim of the EON series is to draw attention to a number of evaluation topics, since we believe these are highly relevant for the adoption of Semantic Web technologies by parties outside the Semantic Web community.

This year we aim to collect methods, tools and metrics that can be reused by the whole community in ontology evaluation and ontology technology evaluation tasks. We propose an experiment in the area of ontology evaluation, while for ontology technology evaluation we call for short papers that describe existing benchmark suites together with the information needed to use them; these descriptions will be evaluated for usability by an expert committee. The benchmarks need to be interoperable and will be shared well before the workshop date, so that at the workshop we can gather experiences both with evaluating the tools and with using the benchmarks.

Both the proposed evaluation methods, tools and metrics, as well as their results, will be collected and made available to the research community on the Ontoworld wiki, so that they can be extended as research on these topics advances.

Schedule of the workshop:

14:00-15:30 Session 1
  • Introduction and welcome
  • Characterizing Knowledge on the Semantic Web with Watson. Mathieu D'Aquin, Claudio Baldassarre, Laurian Gridinoc, Sofia Angeletou, Marta Sabou, Enrico Motta (slides / proceedings)
  • Evaluating Ontology Search. Paul Buitelaar, Thomas Eigner (slides / proceedings)
  • A Panoramic Approach to Integrated Evaluation of Ontologies in the Semantic Web. Sourish Dasgupta, Deendayal Dinakarpandian, Yugyung Lee (slides / proceedings)

15:30-16:00 Coffee break

16:00-17:30 Session 2
  • Tracking Name Patterns in OWL Ontologies. Vojtech Svatek, Ondrej Svab (slides / proceedings)
  • Detecting Quality Problems in Semantic Metadata without the Presence of a Gold Standard. Yuangui Lei, Andriy Nikolov (slides / proceedings)
  • Benchmarking Reasoners for Multi-Ontology Applications. Ameet Chitnis, Abir Qasem, Jeff Heflin (slides / slides in 2003 Office format / proceedings)
  • Sample Evaluation of Ontology-Matching Systems. Willem Van Hage, Antoine Isaac, Zharko Aleksovski (slides / proceedings)
  • Closing

Ontology evaluation experiment

This year, a set of ontologies in a specific domain will be provided to participants. The participants are expected to perform the following tasks on these ontologies (an illustrative sketch of computing simple metrics follows the list):

  1. To describe the set of metrics (according to a common format) that will be used for evaluating the ontologies.
  2. To apply these metrics to evaluate the given ontologies.
  3. To provide recommendations for improving the ontologies.
  4. To implement the proposed changes in the ontologies.
  5. To re-apply the metrics to the resulting ontologies.
  6. To compare the results with the previous ones.
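As an illustration of task 2, the minimal sketch below computes a few simple structural metrics over an OWL ontology. It is only an example: the use of the rdflib Python library, the file name example.owl, the RDF/XML serialization, and the particular metrics chosen are assumptions for this illustration, not requirements of the experiment.

    # Illustrative sketch only: computes a few simple structural metrics
    # over an OWL ontology with rdflib. The file name "example.owl" and
    # the metric choice are assumptions, not part of the experiment setup.
    from rdflib import Graph
    from rdflib.namespace import OWL, RDF, RDFS

    def structural_metrics(path):
        g = Graph()
        g.parse(path, format="xml")  # assuming an RDF/XML serialization

        classes = set(g.subjects(RDF.type, OWL.Class))
        object_props = set(g.subjects(RDF.type, OWL.ObjectProperty))
        datatype_props = set(g.subjects(RDF.type, OWL.DatatypeProperty))
        subclass_axioms = list(g.triples((None, RDFS.subClassOf, None)))

        return {
            "classes": len(classes),
            "object_properties": len(object_props),
            "datatype_properties": len(datatype_props),
            "subclass_axioms": len(subclass_axioms),
            # crude richness indicator: subclass axioms per named class
            "subclass_axioms_per_class":
                len(subclass_axioms) / len(classes) if classes else 0.0,
        }

    if __name__ == "__main__":
        print(structural_metrics("example.owl"))

Re-running the same metric computation on the original and on the improved ontologies (tasks 5 and 6) yields directly comparable figures.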

Experiment contributions are expected in the form of short papers (4 pages).


Collection of reusable technology evaluations and demos

Another of our goals this year is to build a collection of existing evaluations and benchmark suites that can be used by other people, in contexts different from those in which they were developed.

For this task, we ask for short papers (4 pages at most) that describe existing or new evaluations and benchmark suites, including all the information needed to use them; these descriptions will be evaluated according to their usability.

As different evaluations and benchmark suites will require different content and levels of detail in their definitions, we do not prescribe a fixed format for describing them. Nevertheless, we provide some guidelines on what to include in the contribution, together with the questionnaire that will be used to evaluate the usability of the approach.

Accepted papers on reusable technology evaluations will be shared well before the workshop date. Short papers or demos (also 4 pages) that describe experiences of evaluating any tool using these evaluations and benchmark suites are welcome.


Topics of Interest

Main topics of interest in the areas of ontology and ontology technology evaluation include but are not limited to:

  • Evaluation methodologies and methods
  • Tools and benchmark suites
  • Metrics
  • Certification
  • Web Ontology Evaluation
  • Ontology Content Evaluation and Criteria for Ontology Content Evaluation
  • Task-oriented Evaluation / Task-independent Evaluation
  • Formal/Informal Ontology Evaluation
  • Evaluation of Heavily Interconnected Ontologies / Networks of Ontologies
  • Interoperability of tools
  • Integration of tools into frameworks
  • Performance and scalability evaluations and benchmarks

Previous Workshops

The fourth workshop on Evaluation of Ontologies for the Web (EON2006) was held in conjunction with the 15th International World Wide Web Conference (WWW2006) on May 22nd, 2006 in Edinburgh, United Kingdom.

The third workshop on Evaluation of Ontology-based Tools (EON2004) was held in conjunction with the 3rd International Semantic Web Conference (ISWC2004) on November 8th, 2004 in Hiroshima, Japan.

The second workshop on Evaluation of Ontology-based Tools (EON2003) was held in conjunction with the 2nd International Semantic Web Conference (ISWC2003) on October 20th, 2003 in Sanibel Island, Florida, US.

The first workshop on Evaluation of Ontology-based Tools (EON2002) was held in conjunction with the 13th International Conference on Knowledge Engineering and Knowledge Management (EKAW2002) on September 30th, 2002.

All previous workshops attracted a large number of researchers and practitioners of ontology-based tools (each time over 30 participants).


Workshop Organising Committee


Program Committee

  • Harith Alani, University of Southampton (UK)
  • Christopher Brewster, University of Sheffield (UK)
  • Roberta Cuel, University of Trento (IT)
  • Klaas Dellschaft, University of Koblenz (DE)
  • Mariano Fernández-López, Universidad San Pablo CEU (ES)
  • Jens Hartmann, University of Bremen (DE)
  • Kouji Kozaki, Osaka University (JP)
  • Joey Lam, University of Aberdeen (UK)
  • Thorsten Liebig, Ulm University (DE)
  • Enrico Motta, Open University (UK)
  • Natasha Noy, Stanford (US)
  • Yue Pan, IBM (CN)
  • Elena Paslaru Bontas, DERI Innsbruck (AT)
  • Sofia Pinto, INESC-ID (PT)
  • Yuzhong Qu, Southeast University (CN)
  • Mari Carmen Suárez-Figueroa, Universidad Politécnica de Madrid (ES)
  • Baoshi Yan, Bosch (US)

Submission and Proceedings

This year, papers will be submitted via the main ISWC2007 submission system. Papers should be formatted according to the guidelines of the ISWC2007 conference.

Papers must not exceed 10 pages. Short papers, experiment contributions, evaluation descriptions, and demo descriptions must not exceed 4 pages.

We will pursue a journal special issue on the topics of the workshop if we receive an appropriate number of high-quality submissions.

Note that workshop attendees must also register for the main conference, so please register for both the conference and the workshop before the ISWC2007 early registration deadline.


Important Dates

  • Deadline for paper submissions: August 12th, 2007 (extended)
  • Deadline for experiment descriptions and demos: August 12th, 2007 (extended)
  • Notification of acceptance (papers and demos): September 7th, 2007
  • Camera-ready versions: September 28th, 2007
  • Workshop: November 11th, 2007

Please do not hesitate to contact Raúl García-Castro for any questions you have!