Publications and Preprints

High dimensional classification when useful information comes from many, perhaps all features
by
Arindam Chatterjee and Peter Hall
In the analysis of high-dimensional data it is common to reduce dimension from thousands or tens of thousands of features to a much smaller number, often between 5 and 20. One reason for such a substantial reduction is to lessen the conceptual difficulty of the problem. However, when useful information comes from many, perhaps all, features, discarding most of them can sacrifice classification accuracy. This tension highlights the need for models that permit a small number of features to provide the majority of the information available for classification, while allowing a much larger number, indeed potentially all features, to supply the remaining information needed for a higher level of performance. Inference in such cases is almost bound to involve significantly nonlinear aspects. In this paper we suggest approaches of this type, based on empirical approximations to Bayes-rule classifiers and involving adaptive feature selection to optimise performance. This intrinsically nonlinear approach enables the methodology to exploit any interactions among features that might enhance classifier accuracy. The methodology is sequential: it steadily builds a model of increasing complexity and stops when an empirical measure of error indicates that further complexity would only degrade performance.
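The sequential, error-guided strategy described in the abstract can be made concrete with a short sketch. The code below is not the authors' estimator: it is a minimal, hypothetical stand-in that pairs a generic classifier (Gaussian naive Bayes from scikit-learn) with greedy forward feature selection, scoring each step by cross-validated empirical error and stopping as soon as added complexity stops improving it, in the spirit of the stopping rule described above.

    # Minimal illustration of the sequential strategy in the abstract:
    # greedily add features, score each candidate model by cross-validated
    # (empirical) error, and stop when further complexity would only
    # degrade performance. The classifier is a stand-in, not the paper's
    # empirical Bayes-rule approximation.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    def greedy_feature_selection(X, y, max_features=20, cv=5):
        selected = []                      # chosen feature indices, in order
        best_err = np.inf                  # best cross-validated error so far
        remaining = set(range(X.shape[1]))
        while remaining and len(selected) < max_features:
            # Score every candidate feature appended to the current model.
            errors = {}
            for j in remaining:
                cols = selected + [j]
                acc = cross_val_score(GaussianNB(), X[:, cols], y, cv=cv).mean()
                errors[j] = 1.0 - acc      # empirical misclassification error
            j_best = min(errors, key=errors.get)
            # Stop when added complexity no longer reduces empirical error.
            if errors[j_best] >= best_err:
                break
            best_err = errors[j_best]
            selected.append(j_best)
            remaining.remove(j_best)
        return selected, best_err

In the paper itself the building blocks are empirical approximations to Bayes-rule classifiers rather than naive Bayes, which is what lets the method exploit interactions among features; the sketch captures only the sequential add-and-stop structure.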

isid/ms/2011/05 [fulltext]
