1 September 2004 - 31 March 2005
The goal of this challenge is to recognise objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem, in that a training set of labelled images will be provided. The four object classes that have been selected are:
motorbikes
bicycles
people
cars
There will be two main competitions:
• For each of the 4 classes, predicting presence/absence of an example of that class in the test image.
• Predicting the bounding box and label of each object from the 4 target classes in the test image.
Contestants may enter either (or both) of these competitions, and can choose to tackle any (or all) of the four object classes. The challenge allows for two approaches to each of the competitions:
• Contestants may use systems built or trained using any methods or data excluding the provided test sets.
• Systems are to be built or trained using only the provided training data.
The intention in the first case is to establish just what level of success can currently be achieved on these problems and by what method; in the second case the intention is to establish which method is most successful given a specified training set.
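The second competition above compares predicted bounding boxes against the true object locations. The announcement does not fix an overlap measure, but a common choice for scoring a predicted box against a ground-truth box is the intersection-over-union of their areas. A minimal sketch (the function name and box convention are illustrative, not part of the challenge specification):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes,
    each given as (xmin, ymin, xmax, ymax)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Width/height are clamped at zero for non-overlapping boxes.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection would then typically be counted as correct when its overlap with some ground-truth box of the same class exceeds a fixed threshold.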
1 June 2004 - 28 February 2005
The goal of the proposed challenge is to assess the current state of Machine Learning (ML) algorithms for Information Extraction (IE) from documents, to identify future challenges, and to foster additional research in the field. The aims are to:
• Define a methodology for fair comparison of ML algorithms for IE.
• Define a publicly available evaluation resource that will exist and be used beyond the lifetime of the challenge; this framework will be ML oriented, rather than IE oriented as in other similar evaluations to date.
• Perform actual tests of different algorithms in controlled situations, so as to understand what works and what does not, and thereby identify future challenges.
1 July - 31 December 2004
Efficient approximate inference in large Hybrid Networks (graphical models with discrete and continuous variables) is one of the major unsolved problems in machine learning, and insight into good solutions would be beneficial in advancing the application of sophisticated machine learning to a wide range of real-world problems.
Such research would potentially benefit applications in Speech Recognition, Visual Object Tracking and Machine Vision, Robotics, Music Scene Analysis, Analysis of complex Time Series, understanding and modelling of complex computer networks, Condition Monitoring, and other complex phenomena.
This theory challenge specifically addresses a central component area of PASCAL, namely Bayesian Statistics and statistical modelling, and is also related to the other central areas of Computational Learning, Statistical Physics and Optimisation techniques.
One aim of this challenge is to bring together leading researchers in graphical models and related areas to develop and improve on existing methods for tackling the fundamental intractability in HNs. We do not believe that a single best approach will necessarily emerge, although we would expect that successes in one application area should be transferable to related areas. Many leading machine learning researchers are currently working on applications that involve HNs, and we invite participants to suggest their own applications. Ideally, this would be in the form of a dataset along the lines of PASCAL.
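As a minimal illustration of a hybrid network (all numbers hypothetical), consider a model with one discrete switch variable s and a continuous observation y drawn from a Gaussian whose mean depends on s. With a single switch, the posterior over s given y can be computed exactly by enumeration; the intractability the challenge targets arises because the number of discrete configurations grows exponentially when such switches are chained over time:

```python
import math

def gauss_pdf(y, mu, var):
    """Density of a univariate Gaussian N(mu, var) at y."""
    return math.exp(-(y - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Hypothetical two-state hybrid model: discrete s with prior p(s),
# continuous observation y | s ~ N(mu[s], var[s]).
PRIOR = [0.5, 0.5]
MU = [-1.0, 2.0]
VAR = [1.0, 1.0]

def posterior_switch(y):
    """Exact posterior p(s | y) by enumerating the discrete states.
    Feasible for one switch; for T chained switches the sum would
    range over 2**T configurations."""
    joint = [PRIOR[s] * gauss_pdf(y, MU[s], VAR[s]) for s in range(2)]
    z = sum(joint)
    return [j / z for j in joint]
```

An observation near 2.0 shifts the posterior towards the second state, and one near -1.0 towards the first; approximate inference methods aim to preserve this behaviour when exact enumeration is no longer affordable.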
1 September - 12 December 2004
The goal of this challenge is to evaluate probabilistic methods for regression and classification problems. A number of regression and classification tasks are proposed. Training data (input-output pairs) are given, and the contestants are asked to predict the outputs associated with a set of validation and test inputs. These predictions are probabilistic and take the form of predictive distributions. The performance of the competing algorithms will be evaluated both with traditional losses that take into account only "point predictions" and with losses that evaluate the quality of the probabilistic predictions.
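The contrast between the two kinds of losses can be sketched as follows for regression. A point loss such as squared error looks only at the predicted value, while a probabilistic loss such as the negative log predictive density also rewards well-calibrated uncertainty. The Gaussian form of the predictive distribution here is an illustrative assumption, not a requirement of the challenge:

```python
import math

def squared_error(y_true, y_pred):
    """Traditional loss on a point prediction only."""
    return (y_true - y_pred) ** 2

def nlpd_gaussian(y_true, mean, var):
    """Negative log predictive density under a Gaussian predictive
    distribution N(mean, var): penalises both a wrong mean and a
    miscalibrated variance."""
    return 0.5 * math.log(2.0 * math.pi * var) + (y_true - mean) ** 2 / (2.0 * var)
```

Two predictors with the same predictive mean incur the same squared error, but the one whose stated variance better matches its actual accuracy achieves a lower negative log predictive density.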