Habib Dhahri

IEEE Student Member
PhD Student
REGIM: REsearch Group on Intelligent Machines
University of Sfax, National School of Engineers
Department of Electrical Engineering
BP W, Sfax, 3038, Tunisia
habib.dhahri@ieee.org

Journal articles

2012
H Dhahri, A M Alimi, A Abraham (2012)  Hierarchical Particle Swarm Optimization for the Design of Beta Basis Function Neural Network   Intelligent Informatics 193-205  
Abstract: A novel learning algorithm is proposed for nonlinear modeling and identification using the beta basis function neural network (BBFNN). The proposed method is a hierarchical particle swarm optimization (HPSO). The objective of this paper is to optimize the parameters of the BBFNN with high accuracy. The population of HPSO forms multiple beta neural networks with different structures at an upper hierarchical level, and each particle of the previous population is optimized at a lower hierarchical level to improve the performance of each particle swarm. For a beta neural network consisting of m neurons, n particles are formed in the upper level to optimize the structure of the beta neural network. In the lower level, the population of particles with the same length optimizes the free parameters of the beta neural network. Experimental results on a number of benchmark problems drawn from regression and time series prediction demonstrate that the HPSO produces better generalization performance.
Notes:
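A minimal sketch of the beta basis function at the core of the BBFNN, in one commonly cited one-dimensional form (the exact parameterization used in the paper is an assumption here); a full network output is a weighted sum of such units, with the upper HPSO level choosing the number of units and the lower level tuning each unit's free parameters:

    import numpy as np

    def beta_basis(x, x0, x1, p, q):
        # One-dimensional beta basis function (assumed form): positive on
        # the bounded support (x0, x1), zero elsewhere; for p, q > 0 it
        # peaks at the center xc = (p*x1 + q*x0) / (p + q).
        x = np.asarray(x, dtype=float)
        xc = (p * x1 + q * x0) / (p + q)     # center of the support
        y = np.zeros_like(x)
        inside = (x > x0) & (x < x1)
        xi = x[inside]
        y[inside] = ((xi - x0) / (xc - x0)) ** p * ((x1 - xi) / (x1 - xc)) ** q
        return y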
H Dhahri, A M Alimi, A Abraham (2012)  Hierarchical multi-dimensional differential evolution for the design of beta basis function neural network   Neurocomputing 97: 131-140  
Abstract: This paper proposes a hierarchical multi-dimensional differential evolution (HMDDE) algorithm, an automatic computational framework for the optimization of the beta basis function neural network (BBFNN) in which the neural network architecture, connection weights, learning algorithm, and its parameters are adapted according to the problem. In the HMDDE-designed neural network, the number of individuals in the multi-dimensional population is the number of beta neural networks. The population of HMDDE forms multiple beta networks with different structures at the higher level, and each individual of the previous population is optimized at a lower hierarchical level to improve the performance of each individual. For a beta neural network consisting of m neurons, n individuals (of different lengths) are formed in the upper level to optimize the structure of the beta neural network. In the lower level, the population of individuals with the same length optimizes the free parameters of the beta neural network. To evaluate comparative performance, we used benchmark problems drawn from system identification and time series prediction. Empirical results illustrate that the HMDDE produces better generalization performance.
Notes:
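The lower-level parameter search is a differential evolution; the abstract does not name the variant, so a minimal sketch of one generation of the classic DE/rand/1/bin scheme is given here:

    import numpy as np

    rng = np.random.default_rng(0)

    def de_rand_1_bin(pop, fitness, F=0.5, CR=0.9):
        # One generation of DE/rand/1/bin over a real-valued population
        # pop of shape (NP, D); fitness maps a vector to a scalar to
        # minimize, and F, CR are the usual DE control parameters.
        NP, D = pop.shape
        new_pop = pop.copy()
        for i in range(NP):
            # three distinct partners, all different from individual i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                    size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])  # differential mutation
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True               # keep at least one gene
            trial = np.where(cross, mutant, pop[i])     # binomial crossover
            if fitness(trial) <= fitness(pop[i]):       # greedy selection
                new_pop[i] = trial
        return new_pop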
2008
H Dhahri, A Alimi (2008)  Automatic Selection for the Beta Basis Function Neural Networks   Nature Inspired Cooperative Strategies for Optimization (NICSO 2007) 461-474  
Abstract: In this paper, we propose a differential evolution based design for the beta basis function neural network. The differential evolution algorithm has been used in many practical cases and has demonstrated good convergence properties. Here, differential evolution is used to evolve the beta basis function neural network topology. Compared with the traditional genetic algorithm, the combined approach demonstrates clear advantages, including feasibility and simplicity of implementation. In the prediction of the Mackey-Glass chaotic time series, the networks designed by the proposed approach prove to be competitive, or even superior, to a multi-layer perceptron network and a radial basis function network trained with traditional learning algorithms. Therefore, designing a set of BBFNNs can be considered as the solution of a two-level optimization problem.
Notes:
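This entry and several below evaluate on Mackey-Glass chaotic time series prediction; a minimal sketch of a Mackey-Glass generator with the conventional benchmark settings, using a simple Euler discretization:

    import numpy as np

    def mackey_glass(n, beta=0.2, gamma=0.1, tau=17, power=10, dt=1.0, x0=1.2):
        # dx/dt = beta * x(t-tau) / (1 + x(t-tau)**power) - gamma * x(t),
        # integrated with an Euler step; tau = 17 gives the usual chaotic
        # benchmark regime.
        history = int(tau / dt)
        x = np.full(n + history, x0)
        for t in range(history, n + history - 1):
            x_tau = x[t - history]                     # delayed state
            x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** power)
                                    - gamma * x[t])
        return x[history:]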

Conference papers

2012
H Dhahri, A M Alimi, A Abraham (2012)  Designing Beta Basis Function Neural Network for Optimization Using Artificial Bee Colony (ABC)   In: The 2012 International Joint Conference on Neural Networks (IJCNN), 1-7
Abstract: This paper presents an application of a swarm intelligence technique, namely the Artificial Bee Colony (ABC) algorithm, to the design of the beta basis function neural network (BBFNN). The focus of this research is to investigate this new population-based metaheuristic for optimizing the beta neural network parameters. The proposed algorithm is used for the prediction of benchmark problems. Simulation examples are also given to compare the effectiveness of the model with other known methods in the literature. Empirical results reveal that the proposed ABC-BBFNN has impressive generalization ability.
Notes:
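The core ABC move is a one-dimensional perturbation toward a random partner, followed by greedy replacement; a minimal sketch of the employed-bee phase under the standard ABC formulation (scout-bee restarts omitted):

    import numpy as np

    rng = np.random.default_rng(1)

    def abc_employed_phase(pop, fitness):
        # For each food source i: v[j] = x[i][j] + phi * (x[i][j] - x[k][j])
        # with phi ~ U(-1, 1), one random dimension j, random partner k != i.
        NP, D = pop.shape
        for i in range(NP):
            k = rng.choice([m for m in range(NP) if m != i])  # partner
            j = rng.integers(D)                               # dimension
            phi = rng.uniform(-1.0, 1.0)
            cand = pop[i].copy()
            cand[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])
            if fitness(cand) <= fitness(pop[i]):              # greedy keep
                pop[i] = cand
        return pop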
2010
H Dhahri, A M Alimi (2010)  Opposition-based particle swarm optimization for the design of beta basis function neural network   In: The 2010 International Joint Conference on Neural Networks (IJCNN), 1-8
Abstract: Many methods for solving optimization problems, whether direct or indirect, rely upon gradient information and therefore may converge to a local optimum. Global optimization methods such as evolutionary algorithms overcome this problem, although these techniques are computationally expensive due to the slow nature of the evolutionary process. In this work, a new concept is investigated to accelerate particle swarm optimization. The opposition-based PSO uses the concept of the opposite number to create a new population during the learning process, improving the convergence rate and the generalization performance of the beta basis function neural network. The proposed algorithm uses a dichotomous search to determine the target solution. A detailed performance comparison of OPSO-BBFNN with other learning algorithms is carried out on benchmark problems drawn from regression and time series prediction. The results show that the OPSO-BBFNN produces better generalization performance.
Notes: The opposition-based learning sketch following the next entry applies to this paper as well.
H Dhahri, A M Alimi (2010)  Opposition-based differential evolution for beta basis function neural network   In: 2010 IEEE Congress on Evolutionary Computation (CEC), 1-8
Abstract: Many methods for solving optimization problems, whether direct or indirect, rely upon gradient information and therefore may converge to a local optimum. Global optimization methods such as evolutionary algorithms overcome this problem, although these techniques are computationally expensive due to the slow nature of the evolutionary process. In this work, a new concept is investigated to accelerate differential evolution. The opposition-based DE uses the concept of the opposite number to create a new population during the learning process, improving the convergence rate and the generalization performance of the beta basis function neural network. The proposed algorithm uses a dichotomous search to determine the target solution. A detailed performance comparison of ODE-BBFNN with other learning algorithms is carried out on benchmark problems drawn from regression and time series prediction. The results show that the ODE-BBFNN produces better generalization performance.
Notes:
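Both 2010 entries above rest on opposition-based learning: for a candidate x in the box [lo, hi], the opposite point is lo + hi - x, and the fitter of each pair is retained. A minimal sketch of opposition-based population initialization under that standard formulation (how the papers apply it during learning may differ):

    import numpy as np

    rng = np.random.default_rng(2)

    def opposition_init(NP, lo, hi, fitness):
        # Draw NP random points in [lo, hi]^D, form their opposites, and
        # keep the NP fittest of the combined 2*NP candidates.
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        pop = rng.uniform(lo, hi, size=(NP, lo.size))
        opp = lo + hi - pop                          # opposite points
        both = np.vstack([pop, opp])
        scores = np.array([fitness(x) for x in both])
        return both[np.argsort(scores)[:NP]]         # best NP survive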
2008
H Dhahri, A M Alimi, F Karray (2008)  The modified particle swarm optimization for the design of the beta basis function neural networks   In: 2008 IEEE Congress on Evolutionary Computation (CEC 2008), IEEE World Congress on Computational Intelligence, 3874-3880
Abstract: This paper proposes and describes an effective use of heuristic optimization. The focus of this research is a hybrid method combining two heuristic optimization techniques, differential evolution (DE) and particle swarm optimization (PSO), to train the beta basis function neural network (BBFNN). Denoted PSO-DE, this hybrid technique incorporates concepts from DE and PSO, creating individuals in a new generation not only by the crossover and mutation operations found in DE but also by the mechanisms of PSO. The results of various experimental studies on Mackey-Glass time series prediction have demonstrated the superiority of the hybrid PSO-DE approach over the four other search techniques in terms of solution quality and convergence rate.
Notes:
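The abstract states that new individuals come from both DE operators and PSO mechanisms but not how the two are interleaved; the sketch below shows one plausible reading in which each particle proposes both a PSO move and a DE trial and keeps the fitter (the paper's exact scheme may differ; pbest/gbest bookkeeping is left to the caller):

    import numpy as np

    rng = np.random.default_rng(3)

    def pso_de_step(pop, vel, pbest, gbest, fitness,
                    w=0.7, c1=1.5, c2=1.5, F=0.5, CR=0.9):
        NP, D = pop.shape
        for i in range(NP):
            # PSO candidate: canonical velocity and position update
            r1, r2 = rng.random(D), rng.random(D)
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pop[i])
                                 + c2 * r2 * (gbest - pop[i]))
            pso_cand = pop[i] + vel[i]
            # DE candidate: rand/1 mutation with binomial crossover
            a, b, c = rng.choice([j for j in range(NP) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            mask = rng.random(D) < CR
            mask[rng.integers(D)] = True
            de_cand = np.where(mask, mutant, pop[i])
            # keep whichever proposal is fitter (minimization)
            pop[i] = min(pso_cand, de_cand, key=fitness)
        return pop, vel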
H Dhahri, A M Alimi, F Karray (2008)  Designing beta basis function neural network for optimization using particle swarm optimization   In: 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008), IEEE World Congress on Computational Intelligence, 2564-2571
Abstract: Many methods for solving optimization problems, whether direct or indirect, rely upon gradient information and therefore may converge to a local optimum. Global optimization methods such as evolutionary algorithms overcome this problem. Constructing a quality BBF network for a specific application can be a time-consuming process, as the system must select both a suitable set of inputs and a suitable BBF network structure; evolutionary methodologies offer the potential to automate all or part of these steps. This study illustrates how a hybrid BBFN-PSO system can be constructed and applies the system to a number of datasets. The utility of the resulting BBFNs on these optimization problems is assessed, and the results from the BBFN-PSO hybrids are shown to be competitive against the best performance achieved on these datasets with alternative optimization methodologies. The results show that, within this class of evolutionary methods, particle swarm optimization algorithms are very robust, effective, and highly efficient in solving the studied class of optimization problems.
Notes:
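For reference, the canonical PSO update driving this design loop, with an illustrative fitness that decodes a particle into network outputs and scores it by mean squared error (the decoder predict is hypothetical; the paper's exact objective is not stated in the abstract):

    import numpy as np

    rng = np.random.default_rng(4)

    def pso_step(pop, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        # Canonical PSO velocity and position update for all particles.
        r1 = rng.random(pop.shape)
        r2 = rng.random(pop.shape)
        vel = w * vel + c1 * r1 * (pbest - pop) + c2 * r2 * (gbest - pop)
        return pop + vel, vel

    def mse_fitness(theta, predict, X, y):
        # Illustrative objective: predict(theta, X) is a hypothetical
        # decoder mapping particle theta to network outputs on inputs X.
        return float(np.mean((predict(theta, X) - y) ** 2))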
2006
H Dhahri, A M Alimi (2006)  The modified differential evolution and the RBF (MDE-RBF) neural network for time series prediction   In: 2006 International Joint Conference on Neural Networks (IJCNN’06), 2938-2943
Abstract: We develop a modified differential evolution algorithm that produces radial basis function neural network controllers for chaotic systems. This method requires few control variables. We examine the result of applying the proposed algorithm to time series prediction, which illustrates the effectiveness of this technique. We apply the algorithm to several computational and real systems, including the Mackey-Glass time series, the Lorenz attractor, and data obtained from the Henon map. Our experiments indicate that the structural differences between our approach and other methods in the literature make it particularly well suited to modeling chaotic time series data.
Notes:
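The model trained here is a radial basis function network rather than a BBFNN; a minimal sketch of a Gaussian RBF network forward pass under the standard formulation (a modified DE would evolve the centers, widths, and weights jointly):

    import numpy as np

    def rbf_forward(x, centers, widths, weights, bias=0.0):
        # y = bias + sum_i weights[i] *
        #     exp(-||x - centers[i]||^2 / (2 * widths[i]**2))
        d2 = np.sum((centers - x) ** 2, axis=1)      # squared distances
        phi = np.exp(-d2 / (2.0 * widths ** 2))      # Gaussian activations
        return bias + phi @ weights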