Output details
11 - Computer Science and Informatics
University of East Anglia
On over-fitting in model selection and subsequent selection bias in performance evaluation
We demonstrate that over-fitting in model selection, a problem largely neglected in machine learning, significantly degrades the generalisation performance of kernel methods (including Gaussian Processes), and that some widely used performance evaluation protocols exhibit significant optimistic biases as a result. This suggests that improvements in generalisation performance are more likely to be obtained by developing better model selection procedures than by developing new learning methods. Counter-intuitively, more recent work reveals that this remains a significant issue even where only a small number of hyper-parameters are involved. Invited plenary talks on this work were given at IDEAL-2011 and IDA-2012.
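The phenomenon described above can be illustrated with a minimal, self-contained sketch (an illustrative simulation, not taken from the paper itself): every candidate "model" below is a pure coin-flip rule with true accuracy exactly 0.5, yet selecting the best of many candidates on a small validation set yields an optimistically biased validation score, because the selection step itself over-fits the validation data.

```python
import random

def predict(model_id: int, tag: str, i: int) -> int:
    # Each hypothetical "model" is a deterministic coin-flip rule,
    # so every model's true accuracy is exactly 0.5.
    return random.Random(f"{model_id}/{tag}/{i}").randint(0, 1)

def accuracy(model_id: int, tag: str, labels: list) -> float:
    return sum(predict(model_id, tag, i) == y
               for i, y in enumerate(labels)) / len(labels)

rng = random.Random(0)
val_labels = [rng.randint(0, 1) for _ in range(50)]     # small validation set
test_labels = [rng.randint(0, 1) for _ in range(2000)]  # large fresh test set

# "Model selection": pick the candidate with the best validation accuracy
# out of 1000 -- this selection step over-fits the validation set.
best = max(range(1000), key=lambda m: accuracy(m, "val", val_labels))

val_acc = accuracy(best, "val", val_labels)
test_acc = accuracy(best, "test", test_labels)
print(f"winner's validation accuracy: {val_acc:.2f}")  # optimistically biased
print(f"winner's test accuracy:       {test_acc:.2f}")  # near the true 0.5
```

Reporting the winner's validation accuracy as its performance estimate (a common but flawed protocol) overstates it substantially; an unbiased estimate requires data not used for selection, e.g. nested cross-validation.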