NIPS 2003 Workshop on Feature Extraction and Feature Selection Challenge
December 11-13, 2003 *** Challenge result analysis ***
Recently, much research effort has been put into the field of feature extraction. In the past few years, the number of papers related to feature extraction, including feature construction, space dimensionality reduction, sparse representations, and feature selection, has approached ten percent of NIPS submissions. The applications studied cover a wide range of domains, including bioinformatics, chemistry, text processing, pattern recognition, speech processing, and vision. Yet there does not seem to be an emerging unity, be it from the standpoint of experimental design, algorithms, or theoretical analysis. The purpose of the workshop is to bring together researchers from various application domains to share techniques and methods.

Part of the workshop will be devoted to presentations and discussions of the results of a challenge on feature selection. Results published in the field of feature selection have, for the most part, been obtained on different data sets or with different data splits, which makes them hard to compare. We formatted a number of datasets for the purpose of benchmarking feature selection algorithms in a controlled manner. The data sets were chosen to span a wide variety of domains, and each has sufficiently many examples to create a test set large enough to obtain statistically significant results. The input variables are continuous or binary, sparse or dense. All problems are two-class classification problems. The similarity of the tasks allows participants to enter results on all data sets to test the generality of their algorithms.
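The test-set sizing argument above can be made concrete: the standard error of an error rate e estimated on n i.i.d. test examples is sqrt(e(1-e)/n). A minimal sketch (the numbers are illustrative only, not the challenge's actual test-set sizes):

```python
import math

def error_rate_stderr(e, n):
    """Binomial standard error of an error rate e estimated on n test examples."""
    return math.sqrt(e * (1.0 - e) / n)

# Distinguishing a 10% error rate from a 12% one requires a standard error
# well below 0.02; with 10,000 test examples it is 0.003 (0.3%).
print(round(error_rate_stderr(0.10, 10_000), 6))  # -> 0.003
```

Halving the standard error requires four times as many test examples, which is why the benchmark favored data sets with many examples.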
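The datasets themselves are distributed in a straightforward whitespace-separated ASCII format (see "How to participate" below). A minimal loading sketch in Python; the `load_dense` helper and the in-memory example strings are illustrative only, and real use would pass the names of the downloaded files:

```python
import io
import numpy as np

def load_dense(data_source, labels_source=None):
    """Load a whitespace-separated ASCII matrix and optional +/-1 labels."""
    X = np.loadtxt(data_source)
    y = np.loadtxt(labels_source, dtype=int) if labels_source is not None else None
    return X, y

# Tiny in-memory example in the same whitespace-separated format; with real
# files you would call e.g. load_dense("xxx_train.data", "xxx_train.labels")
# (hypothetical names -- use the file names of the dataset you downloaded).
X, y = load_dense(io.StringIO("0 1 0\n1 0 1\n"), io.StringIO("1\n-1\n"))
print(X.shape, y.tolist())  # -> (2, 3) [1, -1]
```

Validation and test files would be loaded the same way, but without a labels file, since only training labels are distributed.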
How to participate:
If you are a Matlab user, we provide sample code to read and check the data. Otherwise, the data follow a straightforward ASCII format. Check the latest challenge results.

Each dataset is split into training, validation, and test sets. Only the training labels are provided. During the development period, participants can return classification results on the validation set, even for a subset of the datasets; in return, they receive their validation set scores. At any time (but presumably after some development period), participants can submit their final classification results on ALL the datasets, with a limit of five submissions per person.

Closing deadline:
Questions: Check our challenge FAQ.

Submission for a workshop presentation CLOSED: The deadline to submit abstracts was December 1, 2003.

Friday Dec. 12, morning session 7:30am-10:30am
7:30am  Benchmark datasets and challenge result summary
7:50am  Classification for High Dimensional Problems Using Bayesian Neural Networks and Dirichlet Diffusion Trees
8:20am  Random Forests and Regularized Least Squares Classifiers
8:40am  Feature Selection using SVM and Random Forest
9:00am  Break
9:10am  Feature Selection using Transductive Support Vector Machine
9:30am  Boosting Flexible Learning Ensembles with Dynamic Feature Selection
9:50am  Piecewise Linear Regularized Solution Paths
10:10am Feature Selection with Sensitivity Analysis for Direct Kernel Partial Least Squares (DK-PLS)
Friday Dec. 12, afternoon session 4:00pm-7:00pm
4:00pm Spectral Dimensionality Reduction via Learning Eigenfunctions
4:30pm Protein Sequence Motifs: Highly Discriminative Features for Function Prediction
4:50pm Feature Construction: Variations on PCA and Company
5:10pm Feature Extraction for Image Interpretation
5:30pm Break
5:40pm Feature Extraction with Description Logics Functional Subsumption
6:00pm Feature Selection with the Potential Support Vector Machine
6:20pm Information Based Supervised and Semi-Supervised Feature Selection
6:40pm Lessons Learned from the Feature Selection Competition
6:55pm Method description

Information from challenge participants not coming to the workshop:
Nameless: Feature Selection Challenge Attempt
NIPS Feature Selection Challenge: Details On Methods

Links
Data mining competitions:
List of datasets for machine learning:
On-line machine learning resources:
CAMDA ICDAR TREC ICPR CASP

Contact information
Other organizers:
Acknowledgments: