N. García-Pedrajas

npedrajas@uco.es

Journal articles

2009
 
Nicolás García-Pedrajas (2009) Constructing Ensembles of Classifiers by Means of Weighted Instance Selection. IEEE Trans Neural Netw, Jan
Abstract: In this paper, we approach the problem of constructing ensembles of classifiers from the point of view of instance selection. Instance selection is aimed at obtaining a subset of the instances available for training that is capable of achieving, at least, the same performance as the whole training set. In this way, instance selection algorithms try to keep the performance of the classifiers while reducing the number of instances in the training set. Meanwhile, boosting methods construct an ensemble of classifiers iteratively, focusing each new member on the most difficult instances by means of a biased distribution of the training instances. In this work, we show how these two methodologies can be combined advantageously. We can use instance selection algorithms for boosting, with the objective of optimizing the training error weighted by the biased distribution of the instances given by the boosting method. Our method can be considered boosting by instance selection. Instance selection has mostly been developed and used for k-nearest neighbor (k-NN) classifiers, so, as a first step, our methodology is suited to constructing ensembles of k-NN classifiers. Constructing ensembles of classifiers by means of instance selection has the important feature of reducing the space complexity of the final ensemble, as only a subset of the instances is selected for each classifier. However, the methodology is not restricted to k-NN classifiers. Other classifiers, such as decision trees and support vector machines (SVMs), may also benefit from a smaller training set, as they produce simpler classifiers if an instance selection algorithm is run before training. In the experimental section, we show that the proposed approach is able to produce better and simpler ensembles than the random subspace method (RSM) for k-NN and than standard ensemble methods for C4.5 and SVMs.
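To make the idea concrete, here is a minimal Python sketch of boosting by instance selection, under illustrative assumptions: binary class labels, a crude random search over candidate subsets standing in for the paper's instance selection algorithm, and AdaBoost-style weight updates.

    # Illustrative sketch only: a random search over instance subsets stands in
    # for the paper's instance selection algorithm; binary labels are assumed.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def boost_by_instance_selection(X, y, n_rounds=10, subset_frac=0.3,
                                    n_candidates=20, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)
        w = np.full(n, 1.0 / n)                  # boosting distribution
        ensemble = []                            # list of (alpha, classifier)
        for _ in range(n_rounds):
            best = None
            for _ in range(n_candidates):        # crude subset search
                idx = rng.choice(n, size=max(2, int(subset_frac * n)),
                                 replace=False)
                clf = KNeighborsClassifier(n_neighbors=1).fit(X[idx], y[idx])
                err = np.sum(w * (clf.predict(X) != y))  # weighted error
                if best is None or err < best[0]:
                    best = (err, clf)
            err, clf = best
            if err >= 0.5:                       # weak-learner condition fails
                break
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
            miss = clf.predict(X) != y
            w *= np.exp(np.where(miss, alpha, -alpha))   # focus on hard cases
            w /= w.sum()
            ensemble.append((alpha, clf))
        return ensemble

Each member stores only its selected instances inside the fitted 1-NN, which is where the reduction in space complexity mentioned in the abstract comes from.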
2008
 
Nicolás García-Pedrajas, Domingo Ortiz-Boyer (2008) Boosting random subspace method. Neural Netw 21: 9. 1344-1362, Nov
Abstract: In this paper we propose a boosting approach to the random subspace method (RSM) to achieve improved performance and avoid some of the major drawbacks of RSM. RSM is a successful method for classification. However, the random selection of inputs, the source of its success, can also be a major problem: for several problems, some of the selected subspaces may lack the discriminant ability to separate the different classes. These subspaces produce poor classifiers that harm the performance of the ensemble. Boosting RSM would therefore be an interesting approach for improving its performance. Nevertheless, applying the two methods together, boosting and RSM, achieves poor results, worse than the results of each method separately. In this work, we propose a new approach for combining RSM and boosting. Instead of obtaining random subspaces, we search for subspaces that optimize the weighted classification error given by the boosting algorithm, and the new classifier added to the ensemble is then trained using the subspace obtained. An additional advantage of the proposed methodology is that it can be used with any classifier, including those, such as k-nearest neighbor classifiers, that cannot easily use boosting methods. The proposed approach is compared with standard AdaBoost and RSM, showing improved performance on a large set of 45 problems from the UCI Machine Learning Repository. An additional study of the effect of noise on the labels of the training instances shows that the less aggressive versions of the proposed methodology are more robust than AdaBoost in the presence of noise.
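A sketch of the key step, with the caveat that a simple random-candidate search stands in here for the paper's actual subspace search: each boosting round keeps the feature subspace whose classifier minimizes the weighted classification error.

    # Illustrative per-round step of boosted RSM; the random-candidate search
    # is an assumption, not the paper's search procedure.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def best_subspace_classifier(X, y, w, subspace_size, n_candidates=30, seed=0):
        """Return (features, classifier) minimizing the weighted error under w."""
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_candidates):
            feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
            clf = KNeighborsClassifier(n_neighbors=3).fit(X[:, feats], y)
            err = np.sum(w * (clf.predict(X[:, feats]) != y))  # weighted error
            if best is None or err < best[0]:
                best = (err, feats, clf)
        return best[1], best[2]

The returned classifier is added to the ensemble and the boosting weights are updated as usual; using a k-NN here also illustrates the point that the approach suits learners that cannot use boosting easily.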
2006
 
Alfonso Martínez-Estudillo, Francisco Martínez-Estudillo, César Hervás-Martínez, Nicolás García-Pedrajas (2006) Evolutionary product unit based neural networks for regression. Neural Netw 19: 4. 477-486, May
Abstract: This paper presents a new method for regression based on the evolution of a type of feed-forward neural network whose basis-function units are products of the inputs raised to real-number powers. These nodes are usually called product units. The main advantage of product units is their capacity for implementing higher-order functions. Nevertheless, the training of product-unit-based networks poses several problems, since local learning algorithms are not suitable for these networks due to the existence of many local minima on the error surface. Moreover, it is unclear how to establish the structure of the network since, hitherto, all learning methods described in the literature deal only with parameter adjustment. In this paper, we propose a model of evolution of product-unit-based networks to overcome these difficulties. The proposed model evolves both the weights and the structure of these networks by means of an evolutionary programming algorithm. The performance of the model is evaluated on five widely used benchmark functions and a hard real-world problem of microbial growth modeling. Our evolutionary model is compared to a multistart technique combined with a Levenberg-Marquardt algorithm and shows better overall performance on the benchmark functions as well as on the real-world problem.
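For concreteness, a product unit computes prod_i x_i^(w_ji) with real-valued exponents w_ji, which is what lets a single unit express higher-order interactions. A minimal forward pass (the single linear output is an illustrative assumption):

    import numpy as np

    def product_unit_forward(x, exponents, out_weights):
        """Forward pass of a small product-unit network (illustrative).
        exponents:   (n_hidden, n_inputs) real-valued powers w_ji
        out_weights: (n_hidden,) linear output weights
        Inputs should be positive so x**w stays real for real-valued w."""
        hidden = np.prod(x[None, :] ** exponents, axis=1)  # prod_i x_i^(w_ji)
        return float(out_weights @ hidden)

The rugged error surface mentioned in the abstract stems from these exponents: a small change in w_ji alters the unit multiplicatively in every input, which is why the paper turns to evolutionary programming rather than local gradient methods.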
Nicolás García-Pedrajas, Domingo Ortiz-Boyer, César Hervás-Martínez (2006) An alternative approach for neural network evolution with a genetic algorithm: crossover by combinatorial optimization. Neural Netw 19: 4. 514-528, May
Abstract: In this work we present a new approach to the crossover operator in the genetic evolution of neural networks. The most widely used evolutionary computation paradigm for neural network evolution is evolutionary programming. This paradigm is usually preferred due to the problems caused by the application of crossover to neural network evolution. However, crossover is the most innovative operator within the field of evolutionary computation. One of the most notorious problems with the application of crossover to neural networks is known as the permutation problem. This problem arises because the same network can be represented by many different genetic codings. Our approach modifies the standard crossover operator, taking into account the special features of the individuals to be mated. We present a new model for mating individuals that considers the structure of the hidden layer and redefines the crossover operator. As each hidden node represents a non-linear projection of the input variables, we approach the crossover as a combinatorial optimization problem. We can formulate the problem as the extraction of a subset of near-optimal projections to create the hidden layer of the new network. This new approach is compared to a classical crossover on 25 real-world problems, with excellent performance. Moreover, the networks obtained are much smaller than those obtained with the classical crossover operator.
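A toy rendering of the idea, with a per-node score and a greedy top-k selection standing in for the paper's combinatorial optimization over subsets of projections:

    def subset_crossover(nodes_a, nodes_b, score, n_hidden):
        """Build a child's hidden layer from the best-scoring subset of the
        parents' pooled hidden nodes.  The greedy top-k selection and the
        single-node `score` function are illustrative stand-ins for the
        paper's combinatorial optimization."""
        pool = list(nodes_a) + list(nodes_b)
        pool.sort(key=score, reverse=True)       # rank pooled projections
        return pool[:n_hidden]                   # near-optimal subset -> child

Because selection operates on the pooled projections rather than on positional genes, two different codings of the same network yield the same pool, which is how this view sidesteps the permutation problem.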
Alfonso C Martínez-Estudillo, César Hervás-Martínez, Francisco J Martínez-Estudillo, Nicolás García-Pedrajas (2006) Hybridization of evolutionary algorithms and local search by means of a clustering method. IEEE Trans Syst Man Cybern B Cybern 36: 3. 534-545, Jun
Abstract: This paper presents a hybrid evolutionary algorithm (EA) to solve nonlinear-regression problems. Although EAs have proven their ability to explore large search spaces, they are comparatively inefficient at fine-tuning the solution. This drawback is usually avoided by means of local optimization algorithms that are applied to the individuals of the population. Algorithms that use local optimization procedures are usually called hybrid algorithms. On the other hand, it is well known that a clustering process enables the creation of groups (clusters) of mutually close points that hopefully correspond to relevant regions of attraction; local-search procedures can then be started once in every such region. This paper proposes combining an EA, a clustering process, and a local-search procedure in the evolutionary design of product-unit neural networks. In the methodology presented, only a few individuals are subject to local optimization. Moreover, the local optimization algorithm is applied only at specific stages of the evolutionary process. Our results show a favorable performance when the proposed regression method is compared to other standard methods.
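Schematically, the hybrid step looks like the following sketch, where k-means and scipy's Nelder-Mead minimizer are stand-ins for whichever clustering method and local optimizer are actually used, and fitness is assumed to be minimized:

    import numpy as np
    from scipy.cluster.vq import kmeans2
    from scipy.optimize import minimize

    def cluster_and_refine(population, fitness, k=3):
        """Cluster the population, then start one local search per cluster
        from its best member (k-means and Nelder-Mead are illustrative
        stand-ins; `fitness` is minimized)."""
        pop = np.asarray(population, dtype=float)
        _, labels = kmeans2(pop, k, minit='points', seed=0)
        refined = []
        for c in range(k):
            members = pop[labels == c]
            if len(members) == 0:
                continue
            start = min(members, key=fitness)    # cluster representative
            refined.append(minimize(fitness, start, method='Nelder-Mead').x)
        return refined

Only the k representatives are optimized locally, which matches the point above that just a few individuals are subject to local optimization.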
Nicolás García-Pedrajas, Domingo Ortiz-Boyer (2006) Improving multiclass pattern recognition by the combination of two strategies. IEEE Trans Pattern Anal Mach Intell 28: 6. 1001-1006, Jun
Abstract: We present a new method of multiclass classification based on the combination of the one-vs-all method and a modification of the one-vs-one method. The proposed combination reinforces the strengths of both methods. A study of the behavior of the two methods identifies some of the sources of their failure. The performance of a classifier can be improved if the two methods are combined into one in such a way that the main sources of their failure are partially avoided.
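The abstract does not spell out the combination mechanism, so the following is only one plausible reading, not the paper's method: one-vs-all ranks the classes, and the matching one-vs-one classifier arbitrates between the two top-ranked ones.

    import numpy as np

    def ova_ovo_predict(x, ova_scores, ovo):
        """Hypothetical combination of one-vs-all and one-vs-one.
        ova_scores(x)  -> array of per-class scores
        ovo[(i, j)](x) -> winning class among classes i and j (i < j)"""
        a, b = np.argsort(ova_scores(x))[-2:]    # two best classes per OVA
        return ovo[(min(a, b), max(a, b))](x)    # let OVO arbitrate

Restricting one-vs-one to the two classes one-vs-all already finds plausible is one way the failure sources of each method could be partially avoided.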
2003
 
N García-Pedrajas, C Hervás-Martínez, J Muñoz-Pérez (2003) COVNET: a cooperative coevolutionary model for evolving artificial neural networks. IEEE Trans Neural Netw 14: 3. 575-596
Abstract: This paper presents COVNET, a new cooperative coevolutionary model for evolving artificial neural networks. This model is based on the idea of coevolving subnetworks that must cooperate to form a solution to a specific problem, instead of evolving complete networks. The combination of these subnetworks is part of the coevolutionary process: the best combinations of subnetworks must be evolved together with the subnetworks themselves. Several subpopulations of subnetworks coevolve cooperatively while remaining genetically isolated. The individuals of every subpopulation are combined to form whole networks. This is a different approach from most current models of evolutionary neural networks, which try to develop whole networks. COVNET places as few restrictions as possible on the network structure, allowing the model to reach a wide variety of architectures during the evolution and to be easily extensible to other kinds of neural networks. The performance of the model in solving three real-world classification problems is compared with a modular network, the adaptive mixture of experts, and with results presented in the literature. COVNET has shown better generalization and produced smaller networks than the adaptive mixture of experts, and has also achieved results at least comparable with those in the literature.
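A bare-bones sketch of the credit-assignment step in such a cooperative-coevolution loop, under illustrative assumptions (random sampling of collaborations and best-score credit; COVNET's actual fitness combines several criteria):

    import random

    def evaluate_subpopulations(subpops, assemble_and_score, n_samples=20, seed=0):
        """Assign fitness to subnetworks from whole-network performance.
        subpops: list of subpopulations, each a list of subnetworks
        assemble_and_score(members) -> score of the combined network
        Random collaboration sampling and best-score credit are assumptions."""
        rng = random.Random(seed)
        fitness = [[0.0] * len(p) for p in subpops]
        for _ in range(n_samples):
            picks = [rng.randrange(len(p)) for p in subpops]  # one per subpop
            score = assemble_and_score([p[i] for p, i in zip(subpops, picks)])
            for s, i in enumerate(picks):
                fitness[s][i] = max(fitness[s][i], score)     # credit members
        return fitness

Keeping the subpopulations genetically isolated while evaluating them only through assembled networks is what pushes each subpopulation toward cooperative, complementary roles.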
2002
 
N García-Pedrajas, C Hervás-Martínez, J Muñoz-Pérez (2002) Multi-objective cooperative coevolution of artificial neural networks (multi-objective cooperative networks). Neural Netw 15: 10. 1259-1278, Dec
Abstract: In this paper we present a cooperative coevolutionary model for the evolution of neural network topology and weights, called MOBNET. MOBNET evolves subcomponents that must be combined in order to form a network, instead of whole networks. The problem of assigning credit to the subcomponents is approached as a multi-objective optimization task. The subcomponents in a cooperative coevolutionary model must fulfill different criteria to be useful, and these criteria usually conflict with each other. The problem of evaluating the fitness of an individual based on many criteria that must be optimized together can be approached as a multi-criteria optimization problem, so methods from multi-objective optimization offer the most natural way to solve it. In this work we show that, by using several objectives for every subcomponent and evaluating its fitness as a multi-objective optimization problem, the model achieves highly competitive performance. MOBNET is compared with several standard classification methods and with other neural network models on four real-world problems, and it shows the best overall performance of all the classification methods applied. It also produces smaller networks when compared to other models. The basic idea underlying MOBNET extends to a more general model of coevolutionary computation, as none of its features is exclusive to neural network design. There are many applications of cooperative coevolution that could benefit from the multi-objective optimization approach proposed in this paper.
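Such multi-objective credit assignment rests on standard Pareto dominance; a minimal sketch (maximization assumed, and dominance counting as a simple stand-in for whatever ranking MOBNET actually uses):

    def dominates(a, b):
        """a Pareto-dominates b (maximization): no worse on every
        objective and strictly better on at least one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def pareto_ranks(objective_vectors):
        """Rank each subcomponent by how many others dominate it
        (0 = nondominated).  A stand-in for MOBNET's actual ranking."""
        return [sum(dominates(b, a) for b in objective_vectors)
                for a in objective_vectors]

Ranking by dominance lets conflicting per-subcomponent criteria be traded off without collapsing them into a single hand-tuned weighted sum.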