1 March - 1 September 2005
The Challenge uses an experimental setup in which the test subject is first shown a question, followed by ten sentences. Five of the sentences are "relevant" to the question (they are on the same topic as the question) and five are irrelevant (they have no relation to the topic of the question). One of the relevant sentences is the correct answer to the question. The experimental setup is designed to resemble a real-life information retrieval scenario as closely as possible while retaining a controlled setup where the ground truth is known.
Thus, in the Challenge the meaning of "relevant" is defined in terms of this experimental setup. The objective of the Challenge is to find the best methods and features that can be used to predict the relevance from the eye movement measurements.
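The prediction task above can be framed as binary classification of per-sentence eye-movement feature vectors. A minimal sketch, using a nearest-centroid rule on invented features (fixation count, total fixation duration, regression count) that are illustrative only, not the Challenge's actual feature set:

```python
# Hypothetical sketch: predicting sentence relevance from eye-movement
# features with a nearest-centroid classifier. The feature names and
# toy values below are invented for illustration.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def predict_relevance(train_relevant, train_irrelevant, sentence_features):
    """Label a sentence 'relevant' if its feature vector lies closer
    (in squared Euclidean distance) to the relevant-class centroid."""
    c_rel = centroid(train_relevant)
    c_irr = centroid(train_irrelevant)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    if dist(sentence_features, c_rel) <= dist(sentence_features, c_irr):
        return "relevant"
    return "irrelevant"

# Toy data: [fixation_count, total_fixation_ms, regression_count]
relevant_examples = [[12, 2400, 3], [10, 2100, 2]]
irrelevant_examples = [[4, 800, 0], [5, 950, 1]]
print(predict_relevance(relevant_examples, irrelevant_examples, [11, 2300, 2]))  # -> relevant
```

Competition entries would of course use richer feature sets and classifiers; the point is only the shape of the task: features per sentence in, relevance label out.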
12 December 2004 - 22 May 2005
The goal of the "BCI Competition III" is to validate signal processing and classification methods for Brain-Computer Interfaces (BCIs). Compared to past BCI Competitions, new challenging problems are addressed that are highly relevant for practical BCI systems, such as
• session-to-session transfer (data set I),
• small training sets, possibly to be solved by subject-to-subject transfer (data set IVa),
• non-stationarity problems (data sets IIIb and IVc),
• multi-class problems (data sets IIIa, V, and II),
• classification of continuous EEG without trial structure (data sets IVb and V).
This BCI Competition also includes ECoG data for the first time (data set I), as well as one data set for which preprocessed features are provided (data set V), for competitors who prefer to focus on the classification task rather than dive into the depths of EEG analysis.
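The "continuous EEG without trial structure" setting means the classifier must emit labels over a running signal rather than over pre-cut trials. A minimal sketch of that output format, assuming a sliding window and a single invented band-power feature with an arbitrary threshold (real entries would use trained multi-channel classifiers):

```python
# Hypothetical sketch: labelling a continuous, trial-free sample stream
# with a sliding window. The one-feature threshold classifier and the
# toy signal values are invented for illustration.

def sliding_windows(signal, width, step):
    """Yield successive fixed-width windows over a 1-D sample stream."""
    for start in range(0, len(signal) - width + 1, step):
        yield signal[start:start + width]

def classify_stream(signal, width=4, step=2, threshold=0.5):
    """Emit one label per window: mean feature above threshold -> class 1."""
    return [1 if sum(w) / len(w) > threshold else 0
            for w in sliding_windows(signal, width, step)]

stream = [0.0, 0.1, 0.1, 0.0, 0.9, 0.8, 0.9, 1.0]
print(classify_stream(stream))  # -> [0, 0, 1]
```

The overlap between consecutive windows (step smaller than width) gives a label stream at a higher rate than non-overlapping trials would, which is what an online BCI needs.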
1 November 2004 - 30 April 2005
The aim of this challenge is to encourage work on automated construction and population of ontologies. For the purposes of this challenge, an ontology consists of a set of concepts and a set of instances. An instance can be assigned to one or more concepts. The concepts are connected into a hierarchy.
Several types of tasks are included in this challenge:
• Ontology construction: given a set of documents, construct an ontology with these documents as instances.
• Ontology extension: given a partial ontology and a set of instances, extend the ontology with new concepts using the given instances.
• Ontology population: given a partially populated hierarchy of concepts, develop a model that can assign new instances to concepts.
• Concept naming: given a set of instances and the assignment of instances to concepts, suggest user-friendly labels for the concepts.
Evaluation is based on comparing the results to a "gold standard" ontology prepared by human editors.
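The data model and the evaluation described above can be sketched in a few lines: an ontology as a child-to-parent concept hierarchy plus instance-to-concept assignments, scored against a gold-standard assignment. All concept and instance names here are invented, and exact-match accuracy is just one of several plausible scoring rules:

```python
# Hypothetical sketch of the ontology-population setting. A concept
# hierarchy maps each concept to its parent (None for the root); each
# instance is assigned to a set of concepts. Names are invented.

hierarchy = {"science": None, "physics": "science", "biology": "science"}

gold = {"doc1": {"physics"}, "doc2": {"biology"}, "doc3": {"science"}}
predicted = {"doc1": {"physics"}, "doc2": {"science"}, "doc3": {"science"}}

def accuracy(gold, predicted):
    """Fraction of instances whose predicted concept set exactly
    matches the gold-standard assignment."""
    hits = sum(1 for inst in gold if predicted.get(inst) == gold[inst])
    return hits / len(gold)

print(accuracy(gold, predicted))  # 2 of 3 instances match
```

A real evaluation would likely also give partial credit when a predicted concept is an ancestor of the gold one (here, "science" for "biology"); exact match is the simplest baseline measure.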
1 June 2004 - 10 April 2005
Recent years have seen a surge in research on text processing applications that perform semantic-oriented inference about concrete text meanings and their relationships. Even though many applications face similar underlying semantic problems, these problems are usually addressed in an application-oriented manner. Consequently, it is difficult to compare, under a generic evaluation framework, semantic methods that were developed within different applications. The PASCAL Challenge introduces textual entailment as a common task and evaluation framework for Natural Language Processing, Information Retrieval and Machine Learning researchers, covering a broad range of semantic-oriented inferences needed for practical applications. This task is therefore suitable for evaluating and comparing semantic-oriented models in a generic manner. Eventually, work on textual entailment may promote the development of generic semantic "engines", which will play an analogous role to that of generic syntactic analyzers across multiple applications.
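To make the task concrete: a textual entailment system takes a text T and a hypothesis H and judges whether T entails H. A naive word-overlap baseline, shown below only to illustrate the input/output contract (the threshold is arbitrary and real challenge systems are far more sophisticated):

```python
# Hypothetical sketch: a naive word-overlap baseline for textual
# entailment. It judges that text T entails hypothesis H when most of
# H's words also appear in T. Illustrative only, not a serious system.

def entails(text, hypothesis, threshold=0.75):
    """Return True if the fraction of hypothesis words found in the
    text reaches the threshold."""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    overlap = len(h_words & t_words) / len(h_words)
    return overlap >= threshold

print(entails("the cat sat on the mat", "the cat sat"))    # -> True
print(entails("the cat sat on the mat", "the dog barked")) # -> False
```

Such lexical baselines fail on negation, paraphrase and inference, which is precisely the gap the shared task is meant to measure progress on.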