Baseline models using kernel methods
Gavin Cawley and Nicola Talbot, University of East Anglia, UK, gcc@cmp.uea.ac.uk
We made several baseline entries to the ALvsPK challenge, using kernel
methods, to help stimulate participation. The optimal model parameters of a kernel machine are typically
given by the solution of a convex optimisation problem with a single global
optimum. Obtaining the best possible performance is therefore largely a matter
of designing a good kernel for the problem at hand, exploiting any underlying
structure, and of optimising the regularisation and kernel parameters, i.e.
model selection. Fortunately, analytic bounds on, or approximations to, the
leave-one-out cross-validation error are often available, providing an efficient
and generally reliable means to guide model selection. However, the degree
to which incorporating prior knowledge improves performance over that
obtainable using “standard” kernels with automated model selection
(i.e. agnostic learning) is an open question. In this presentation, we compare
approaches using example solutions for all of the benchmark tasks on both
tracks of the IJCNN-2007 Agnostic Learning versus Prior Knowledge Challenge.
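To illustrate the kind of efficient leave-one-out guidance mentioned above, the following sketch uses kernel ridge regression as a concrete (assumed) example; the abstract does not specify this particular model, and the RBF kernel, parameter grids, and synthetic data are illustrative choices. For kernel ridge regression the leave-one-out residuals have the closed form e_i / (1 - H_ii), where H = K (K + λI)^{-1} is the hat matrix, so the leave-one-out error can be evaluated analytically without refitting the model n times.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gaussian RBF kernel matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def loo_mse(K, y, lam):
    # Closed-form leave-one-out MSE for kernel ridge regression:
    # LOO residual_i = (y_i - f_i) / (1 - H_ii), with hat matrix
    # H = K (K + lam*I)^{-1}. No refitting required.
    n = len(y)
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    residuals = y - H @ y
    loo = residuals / (1.0 - np.diag(H))
    return np.mean(loo ** 2)

# Synthetic 1-D regression problem (illustrative data, not a challenge task).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

# Model selection: grid search over kernel and regularisation parameters,
# guided by the analytic leave-one-out error.
best = min((loo_mse(rbf_kernel(X, g), y, l), g, l)
           for g in [0.1, 0.5, 1.0, 2.0]
           for l in [1e-3, 1e-2, 1e-1, 1.0])
print("best LOO MSE %.4f at gamma=%g, lambda=%g" % best)
```

The same pattern generalises: as the abstract notes, analytic bounds on (or approximations to) the leave-one-out error make this kind of search cheap enough to automate, which is what makes the agnostic-learning baseline competitive.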