Embedded local feature selection within mixture of experts

Peralta, B.; Soto, A.

Abstract

A useful strategy to deal with complex classification scenarios is the divide-and-conquer approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that specialize in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection is an attractive way to improve the specialization of the experts and the gate function, particularly in the case of high-dimensional data, because different subsets of dimensions, or subspaces, are usually more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes a regularized variant of MoE that incorporates an embedded process for local feature selection using L1 regularization. Experiments on artificial and real-world datasets provide evidence that the proposed method improves on the classical MoE technique in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data. (C) 2014 Elsevier Inc. All rights reserved.
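For context, a minimal sketch of the kind of objective such a model optimizes, assuming a softmax gate with parameters v_i, K experts with parameters w_i, and a regularization weight lambda; all symbols here are illustrative, and the paper's exact formulation may differ:

\[
g_i(x) = \frac{\exp(v_i^\top x)}{\sum_{j=1}^{K} \exp(v_j^\top x)}, \qquad
p(y \mid x) = \sum_{i=1}^{K} g_i(x)\, p(y \mid x, w_i)
\]
\[
\max_{\{v_i\},\{w_i\}} \; \sum_{n=1}^{N} \log p(y_n \mid x_n) \;-\; \lambda \sum_{i=1}^{K} \left( \lVert v_i \rVert_1 + \lVert w_i \rVert_1 \right)
\]

Under this kind of objective, the L1 penalty drives individual coordinates of v_i and w_i to exactly zero, so the gate and each expert can rely on its own subspace, which is the local, per-expert feature selection the abstract describes.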

More information

Title according to WOS: Embedded local feature selection within mixture of experts
Title according to SCOPUS: Embedded local feature selection within mixture of experts
Journal title: INFORMATION SCIENCES
Volume: 269
Publisher: Elsevier Science Inc.
Publication date: 2014
Start page: 176
End page: 187
Language: English
DOI: 10.1016/j.ins.2014.01.008

Notes: ISI, SCOPUS - ISI