hosted by
publicationslist.org
    
Aurelio Uncini

aurel@ieee.org

Journal articles

2008
 
Daniele Vigliano, Michele Scarpiniti, Raffaele Parisi, Aurelio Uncini (2008)  Flexible nonlinear blind signal separation in the complex domain.   Int J Neural Syst 18: 2. 105-122 Apr  
Abstract: This paper introduces an Independent Component Analysis (ICA) approach to the separation of nonlinear mixtures in the complex domain. Source separation is performed by a complex INFOMAX approach. The neural network which realizes the separation employs the so called "Mirror Model" and is based on adaptive activation functions, whose shape is properly modified during learning. Nonlinear functions involved in the processing of complex signals are realized by pairs of spline neurons called "splitting functions", working on the real and the imaginary part of the signal respectively. Theoretical proof of existence and uniqueness of the solution under proper assumptions is also provided. In particular a simple adaptation algorithm is derived and some experimental results that demonstrate the effectiveness of the proposed solution are shown.
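The "splitting function" idea — one real nonlinearity for the real part of the signal and one for the imaginary part — is easy to illustrate. The sketch below is only a schematic reading of the abstract: tanh stands in for the paper's adaptive spline neurons, and the entropy-maximization step uses a generic natural-gradient INFOMAX form rather than the authors' exact adaptation rule.

```python
import numpy as np

def splitting_activation(z, f_re=np.tanh, f_im=np.tanh):
    """Pair of real nonlinearities ("splitting functions") applied
    separately to the real and imaginary parts of a complex signal;
    this sidesteps the need for a bounded analytic complex function."""
    return f_re(np.real(z)) + 1j * f_im(np.imag(z))

def infomax_step(W, x, mu=0.01):
    """One generic natural-gradient INFOMAX update of the complex
    demixing matrix W on a batch x (one mixture per row). Illustrative
    only; the paper derives its own adaptation algorithm."""
    y = W @ x
    y_nl = splitting_activation(y)
    n = x.shape[1]
    return W + mu * (np.eye(W.shape[0]) - (y_nl @ y.conj().T) / n) @ W
```

Repeating `infomax_step` over batches drives the outputs toward independence while the splitting nonlinearity keeps every operation real-valued and bounded.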
Michele Scarpiniti, Daniele Vigliano, Raffaele Parisi, Aurelio Uncini (2008)  Generalized splitting functions for blind separation of complex signals   NEUROCOMPUTING 71: 10-12. 2245-2270 JUN  
Abstract: This paper proposes the blind separation of complex signals using a novel neural network architecture based on an adaptive nonlinear bi-dimensional activation function (AF); the separation is obtained maximizing the output joint entropy. Avoiding the restriction due to Liouville's theorem, the AF is composed of a couple of bi-dimensional spline functions, one for the real and one for the imaginary part of the signal. The surface of this function is flexible and it is adaptively modified according to the learning process performed by a gradient-based technique. The use of the bi-dimensional spline defines a new class of flexible AFs which are bounded and locally analytic. This paper aims to demonstrate that this novel bi-dimensional complex AF improves the separation in every environment in which the real and imaginary parts of the complex signal are not decorrelated. This situation is realistic in a large number of cases. (C) 2008 Elsevier B.V. All rights reserved.
2005
 
D Vigliano, R Parisi, A Uncini (2005)  An information theoretic approach to a novel nonlinear independent component analysis paradigm   SIGNAL PROCESSING 85: 5. 997-1028 MAY  
Abstract: This paper introduces a novel independent component analysis (ICA) approach to the separation of nonlinear convolutive mixtures. The proposed model is an extension of the well-known post nonlinear (PNL) mixing model and consists of the convolutive mixing of PNL mixtures. Theoretical proof of existence and uniqueness of the solution under proper assumptions is provided. Feedforward and recurrent demixing architectures based on spline neurons are introduced and compared. Source separation is performed by minimizing the mutual information of the output signals with respect to the network parameters. More specifically, the proposed architectures perform on-line nonlinear compensation and score function estimation by proper use of flexible spline nonlinearities, yielding a significant performance improvement in terms of source pdf matching and algorithm speed of convergence. Experimental tests on different signals are described to demonstrate the effectiveness of the proposed approach. (c) 2005 Elsevier B.V. All rights reserved.
2004
 
M Solazzi, A Uncini (2004)  Regularising neural networks using flexible multivariate activation function   NEURAL NETWORKS 17: 2. 247-260 MAR  
Abstract: This paper presents a new general neural structure based on nonlinear flexible multivariate function that can be viewed in the framework of the generalised regularisation networks theory. The proposed architecture is based on multi-dimensional adaptive cubic spline basis activation function that collects information from the previous network layer in aggregate form. In other words, each activation function represents a spline function of a subset of previous layer outputs so the number of network connections (structural complexity) can be very low with respect to the problem complexity. A specific learning algorithm, based on the adaptation of local parameters of the activation function, is derived. This improves the network's generalisation capabilities and speeds up the convergence of the learning process. Finally, some experimental results demonstrating the effectiveness of the proposed architecture are presented. (C) 2003 Elsevier Ltd. All rights reserved.
M Solazzi, A Uncini (2004)  Spline neural networks for blind separation of post-nonlinear-linear mixtures   IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS 51: 4. 817-829 APR  
Abstract: In this paper, a novel paradigm for blind source separation in the presence of nonlinear mixtures is presented. In particular the paper addresses the problem of post-nonlinear mixing followed by another instantaneous mixing system. This model is called here the post-nonlinear-linear model. The method is based on the use of the recently introduced flexible activation function whose control points are adaptively changed: a neural model based on adaptive B-spline functions is employed. The signal separation is achieved through an information maximization criterion. Experimental results and comparison with existing solutions confirm the effectiveness of the proposed architecture.
D Vigliano, A Uncini (2004)  ‘Mirror model’ gives separation of convolutive mixing of PNL mixtures   ELECTRONICS LETTERS 40: 7. 454-456 APR 1  
Abstract: The proof is given that the so called 'mirror model' as demixing model is able to recover original sources after non-trivial mixing. The issue explored is the capability to separate sources, in a blind way, after the convolutive mixing of post nonlinear (PNL) mixtures. The strictness of that kind of mixture produces non-trivial problems in separating signals without any adequate assumption on recovering architecture.
2003
 
A Uncini (2003)  Audio signal processing by neural networks   NEUROCOMPUTING 55: 3-4. 593-625 OCT  
Abstract: In this paper a review of architectures suitable for nonlinear real-time audio signal processing is presented. The computational and structural complexity of neural networks (NNs) represent, in fact, the main drawbacks that can hinder many practical NN multimedia applications. In particular efficient neural architectures and their learning algorithms for real-time on-line audio processing are discussed. Moreover, applications in the fields of (1) audio signal recovery, (2) speech quality enhancement, (3) nonlinear transducer linearization, (4) learning based pseudo-physical sound synthesis, are briefly presented and discussed. (C) 2003 Elsevier B.V. All rights reserved.
G Costantini, A Uncini (2003)  Real-time room acoustic response simulation by IIR adaptive filter   ELECTRONICS LETTERS 39: 3. 330-332 FEB 6  
Abstract: A new IIR adaptive filter for real-time, room acoustic response simulation is proposed, the structure of which derives from Jot's model of an artificial reverberator. The simultaneous perturbation stochastic approximation (SPSA) algorithm is used to set parameter values. Results show good similarity between the desired and artificial response.
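SPSA, mentioned in the abstract, needs only two cost evaluations per iteration regardless of how many parameters are tuned, which is what makes it attractive for fitting a reverberator's parameters to a measured response. A minimal sketch of the update on a generic cost (a toy quadratic stands in for the reverberator-response mismatch):

```python
import numpy as np

def spsa_minimize(cost, theta0, iters=300, a=0.1, c=0.1, seed=0):
    """Simultaneous Perturbation Stochastic Approximation (SPSA).
    Every parameter is perturbed at once by a random +/-1 vector, and
    the whole gradient is estimated from just two cost evaluations."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, iters + 1):
        ak = a / k ** 0.602            # standard decaying gain sequences
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2.0 * ck * delta)
        theta -= ak * g_hat
    return theta

# Usage: minimize a quadratic without ever writing its gradient.
theta = spsa_minimize(lambda t: float(np.sum((t - 3.0) ** 2)), np.zeros(4))
```

The two-evaluation gradient estimate is what lets SPSA run in real time against a cost that is measured rather than differentiated.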
A Uncini, F Piazza (2003)  Blind signal processing by complex domain adaptive spline neural networks   IEEE TRANSACTIONS ON NEURAL NETWORKS 14: 2. 399-412 MAR  
Abstract: In this paper, neural networks based on an adaptive nonlinear function suitable for both blind complex time domain signal separation and blind frequency domain signal deconvolution are presented. This activation function, whose shape is modified during learning, is based on a couple of spline functions, one for the real and one for the imaginary part of the input. The shape control points are adaptively changed using gradient-based techniques. B-splines are used because they allow imposing only simple constraints on the control parameters in order to ensure a monotonically increasing characteristic. This new adaptive function is then applied to the outputs of a one-layer neural network in order to separate complex signals from mixtures by maximizing the entropy of the function outputs. We derive a simple form of the adaptation algorithm and present some experimental results that demonstrate the effectiveness of the proposed method.
D Vigliano, A Uncini (2003)  Flexible ICA solution for nonlinear blind source separation problem   ELECTRONICS LETTERS 39: 22. 1616-1617 OCT 30  
Abstract: Presented is a new architecture and a new learning algorithm that are exploited to resolve the blind source separation problem under stricter constraints than those considered to date. The mixing model that is assumed is an evolution of the well-known post-nonlinear (PNL) one: the PNL mixing block is followed by a convolutive mixing channel. The flexibility of the algorithm originates from the spline-SG neurons performing an on-line estimation of the score functions.
2002
G Cocchi, A Uncini (2002)  Subband neural networks prediction for on-line audio signal recovery   IEEE TRANSACTIONS ON NEURAL NETWORKS 13: 4. 867-876 JUL  
Abstract: In this paper, a subbands multirate architecture is presented for audio signal recovery. Audio signal recovery is a common problem in the digital music signal restoration field, because of corrupted samples that must be replaced. The subband approach allows for the reconstruction of a long audio data sequence from forward-backward predicted samples. In order to improve prediction performances, neural networks with spline flexible activation function are used as narrow subband nonlinear forward-backward predictors. Previous neural-networks approaches involved a long training process. Due to the small networks needed for each subband and to the spline adaptive activation functions that speed up the convergence time and improve the generalization performances, the proposed signal recovery scheme works in on-line (or in continuous learning) mode as a simple nonlinear adaptive filter. Experimental results show the mean square reconstruction error and maximum error obtained with increasing gap length, from 200 to 5000 samples for different musical genres. A subjective performances analysis is also reported. The method gives good results for the reconstruction of over 100 ms of audio signal with low audible effects in overall quality and outperforms the previous approaches.
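The forward-backward prediction idea can be sketched outside the subband/neural setting: predict into the gap from the left, predict from the right, and crossfade the two estimates. Least-squares linear AR predictors stand in here for the paper's spline neural predictors, and there is no subband decomposition — this is only a schematic of the reconstruction step.

```python
import numpy as np

def fill_gap(signal, start, length, order=8):
    """Fill signal[start:start+length] using a forward predictor fitted
    on the samples before the gap and a backward predictor fitted on the
    samples after it, crossfading the two estimates across the gap.
    Linear AR models replace the paper's spline neural predictors."""
    def fit_ar(x, p):
        # least-squares AR(p): predict x[n] from the p previous samples
        X = np.array([x[i:i + p] for i in range(len(x) - p)])
        return np.linalg.lstsq(X, x[p:], rcond=None)[0]

    before = np.asarray(signal[:start], dtype=float)
    after = np.asarray(signal[start + length:], dtype=float)
    a_fwd = fit_ar(before, order)
    a_bwd = fit_ar(after[::-1], order)   # backward = forward on reversed data

    fwd = list(before[-order:])
    bwd = list(after[:order][::-1])
    for _ in range(length):
        fwd.append(float(np.dot(a_fwd, fwd[-order:])))
        bwd.append(float(np.dot(a_bwd, bwd[-order:])))
    est_f = np.array(fwd[order:])
    est_b = np.array(bwd[order:])[::-1]  # back to forward time order

    w = np.linspace(0.0, 1.0, length)    # crossfade weights
    out = np.array(signal, dtype=float)
    out[start:start + length] = (1.0 - w) * est_f + w * est_b
    return out
```

The crossfade is what hides the point where the two one-sided extrapolations would otherwise meet with a discontinuity.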
F Iannelli, A Uncini (2002)  Learning of physical-like sound synthesis models by adaptive spline recurrent neural networks   ELECTRONICS LETTERS 38: 14. 724-725 JUL 4  
Abstract: A recently introduced neural networks architecture, 'adaptive spline neural networks' with FIR/IIR synapse, is used to define a general class of physical-like sound synthesis model. To reduce computational cost, use is made of power-of-two synapses followed by a CR-spline-based flexible activation function the shape of which can be modified through its control points. The learning phase is performed by an efficient combinatorial optimisation algorithm, Tabu Search, for both power-of-two weights and CR-spline control points.
2001
M Solazzi, A Uncini, E D Di Claudio, R Parisi (2001)  Complex discriminative learning Bayesian neural equalizer   SIGNAL PROCESSING 81: 12. 2493-2502 DEC  
Abstract: Traditional approaches to channel equalization are based on the inversion of the global (linear or nonlinear) channel response. However, in digital links the complete channel inversion is neither required nor desirable. Since transmitted symbols belong to a discrete alphabet, symbol demodulation can be effectively recast as a classification problem in the space of received symbols. In this paper a novel neural network for digital equalization is introduced and described. The proposed approach is based on a decision-feedback architecture trained with a complex-valued discriminative learning strategy for the minimization of the classification error. Main features of the resulting neural equalizer are the high rate of convergence with respect to classical neural equalizers and the low degree of complexity. Its effectiveness has been demonstrated through computer simulations for several typical digital transmission channels. (C) 2001 Elsevier Science B.V. All rights reserved.
2000
S Traferro, A Uncini (2000)  Power-of-two adaptive filters using Tabu Search   IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-ANALOG AND DIGITAL SIGNAL PROCESSING 47: 6. 566-569 JUN  
Abstract: Digital filters with power-of-two or a sum of power-of-two coefficients can be built using simple and fast shift registers instead of slower floating-point multipliers; such a strategy can reduce both the VLSI silicon area and the computational time. Due to the quantization and the nonuniform distribution of the coefficients through their domain, in the case of adaptive filters, classical steepest descent based approaches cannot be successfully applied. Methods for adaptation processes, as in the least mean squares (LMS) error and other related adaptation algorithms, can actually lose their convergence properties. In this brief, we present a customized Tabu Search (TS) adaptive algorithm that works directly on the power-of-two filter coefficients domain, avoiding any rounding process. In particular, we propose TS for a time varying environment, suitable for real time adaptive signal processing. Several experimental results demonstrate the effectiveness of the proposed method.
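The appeal of power-of-two coefficients is that multiplication by c = ±2^-k reduces to a shift. A small sketch (the nearest-power-of-two quantizer is only for illustration; the brief's point is precisely that the Tabu Search adapts coefficients directly in this discrete domain instead of rounding):

```python
import math

def pot_multiply(x, sign, shift):
    """Multiply an integer sample by sign * 2**(-shift) using only an
    arithmetic right shift, as a hardware shift register would."""
    return sign * (x >> shift)

def quantize_pot(c, max_shift=7):
    """Round a real coefficient in (-1, 1] to the nearest power-of-two
    value sign * 2**(-k); returns (sign, k). Illustrative only."""
    if c == 0.0:
        return 0, max_shift
    sign = 1 if c > 0 else -1
    k = min(max_shift, max(0, round(-math.log2(abs(c)))))
    return sign, int(k)
```

Quantizing after a steepest-descent update is exactly the rounding step the abstract warns can break convergence, which motivates searching the discrete coefficient space directly.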
P Campolucci, A Uncini, F Piazza (2000)  A signal-flow-graph approach to on-line gradient calculation   NEURAL COMPUTATION 12: 8. 1901-1927 AUG  
Abstract: A large class of nonlinear dynamic adaptive systems such as dynamic recurrent neural networks can be effectively represented by signal flow graphs (SFGs). By this method, complex systems are described as a general connection of many simple components, each of them implementing a simple one-input, one-output transformation, as in an electrical circuit. Even if graph representations are popular in the neural network community, they are often used for qualitative description rather than for rigorous representation and computational purposes. In this article, a method for both on-line and batch-backward gradient computation of a system output or cost function with respect to system parameters is derived by the SFG representation theory and its known properties. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by an SFG, in particular any feedforward, time-delay, or recurrent neural network. In this work, we use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained in a straightforward way by the analysis of two SFGs, the original one and its adjoint (obtained from the first by simple transformations), without the complex chain rule expansions of derivatives usually employed. This method can be used for sensitivity analysis and for learning both off-line and on-line. On-line learning is particularly important since it is required by many real applications, such as digital signal processing, system identification and control, channel equalization, and predistortion.
1999
P Campolucci, A Uncini, F Piazza, B D Rao (1999)  On-line learning algorithms for locally recurrent neural networks   IEEE TRANSACTIONS ON NEURAL NETWORKS 10: 2. 253-271 MAR  
Abstract: This paper focuses on on-line learning procedures for locally recurrent neural networks with emphasis on multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), Back-Tsoi algorithm among others, thereby providing a unifying view on gradient calculation techniques for recurrent networks with local feedback. The only learning method that has been proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and higher speed of convergence with respect to the Back-Tsoi algorithm, which is supported by the theoretical development and confirmed by simulations. The computational complexity of the CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performances and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement, e.g., CRBP is local in space and in time while RTRL is not local in space.
S Guarnieri, F Piazza, A Uncini (1999)  Multilayer feedforward networks with adaptive spline activation function   IEEE TRANSACTIONS ON NEURAL NETWORKS 10: 3. 672-683 MAY  
Abstract: In this paper, a new adaptive spline activation function neural network (ASNN) is presented. Due to the ASNN's high representation capabilities, networks with a small number of interconnections can be trained to solve both pattern recognition and data processing real-time problems. The main idea is to use a Catmull-Rom cubic spline as the neuron's activation function, which ensures a simple structure suitable for both software and hardware implementation. Experimental results demonstrate improvements in terms of generalization capability and of learning speed in both pattern recognition and data processing tasks.
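A Catmull-Rom spline activation can be sketched in a few lines: the input selects a span of four control points and a local parameter u, and the output is the cubic blend of those points. In the ASNN the control-point ordinates q_y are the learned parameters; here they are fixed, and the grid bounds are arbitrary illustrative choices.

```python
import numpy as np

def catmull_rom_activation(x, q_y, x_min=-2.0, x_max=2.0):
    """Evaluate a Catmull-Rom cubic spline (uniform control grid on
    [x_min, x_max]) at scalar input x. q_y are the control-point
    ordinates; adapting them reshapes the activation function."""
    n = len(q_y)
    dx = (x_max - x_min) / (n - 1)
    # span index i and local parameter u in [0, 1); clip so that the
    # four points q_y[i-1..i+2] always exist (saturating the ends)
    s = np.clip((x - x_min) / dx, 1.0, n - 3.0 - 1e-9)
    i = int(np.floor(s))
    u = s - i
    p0, p1, p2, p3 = q_y[i - 1], q_y[i], q_y[i + 1], q_y[i + 2]
    return 0.5 * (2.0 * p1
                  + (-p0 + p2) * u
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * u ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * u ** 3)

# With control points sampled from the identity, the spline
# reproduces it in the interior (Catmull-Rom interpolates).
q = np.linspace(-2.0, 2.0, 9)
```

Because only the four local control points influence the output, a gradient step on a training sample updates just that small neighborhood of the curve, which is what keeps the adaptation cheap.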
A Uncini, L Vecci, P Campolucci, F Piazza (1999)  Complex-valued neural networks with adaptive spline activation function for digital radio links nonlinear equalization   IEEE TRANSACTIONS ON SIGNAL PROCESSING 47: 2. 505-514 FEB  
Abstract: In this paper, a new complex-valued neural network based on adaptive activation functions is proposed. By varying the control points of a pair of Catmull-Rom cubic splines, which are used as an adaptable activation function, this new kind of neural network can be implemented as a very simple structure that is able to improve the generalization capabilities using few training samples. Due to its low architectural complexity (low overhead with respect to a simple FIR filter), this network can be used to cope with several nonlinear DSP problems at a high symbol rate. In particular, this work addresses the problem of nonlinear channel equalization. In fact, although several authors have already recognized the usefulness of a neural network as a channel equalizer, one problem has not yet been addressed: the high complexity and the very long data sequence needed to train the network. Several experimental results using a realistic channel model are reported that prove the effectiveness of the proposed network on equalizing a digital satellite radio link in the presence of noise, nonlinearities, and intersymbol interference (ISI).
1998
L Vecci, F Piazza, A Uncini (1998)  Learning and approximation capabilities of adaptive spline activation function neural networks   NEURAL NETWORKS 11: 2. 259-270 MAR  
Abstract: In this paper, we study the theoretical properties of a new kind of artificial neural network, which is able to adapt its activation functions by varying the control points of a Catmull-Rom cubic spline. Most of all, we are interested in generalization capability, and we can show that our architecture presents several advantages. First of all, it can be seen as a sub-optimal realization of the additive spline based model obtained by the regularization theory. Besides, simulations confirm that the special learning mechanism allows the network's free parameters to be used very effectively, keeping their total number at lower values than in networks with sigmoidal activation functions. Other notable properties are a shorter training time and a reduced hardware complexity, due to the saving in the number of neurons. (C) 1998 Elsevier Science Ltd. All rights reserved.
1996
N Benvenuto, F Piazza, A Uncini, M Visintin (1996)  Generalised backpropagation algorithm for training a data predistorter with memory in radio systems   ELECTRONICS LETTERS 32: 20. 1925-1926 SEP 26  
Abstract: The authors present a neural network based data-predistorter with memory, for the compensation of high-power amplifier (HPA) nonlinearities in digital microwave radio systems. The overall system (predistorter, pulse shaping filter and HPA) can be seen as a unique FIR multilayer neural network, for which a specific complex-valued back-propagation algorithm can be developed to realise the data predistorter. The proposed scheme can also control the spectrum of the signal after the HPA.
M L Marchesi, F Piazza, A Uncini (1996)  Backpropagation without multiplier for multilayer neural networks   IEE PROCEEDINGS-CIRCUITS DEVICES AND SYSTEMS 143: 4. 229-232 AUG  
Abstract: When multilayer neural networks are implemented with digital hardware, which allows full exploitation of the well developed digital VLSI technologies, the multiply operations in each neuron between the weights and the inputs can create a bottleneck in the system, because the digital multipliers are very demanding in terms of time or chip area. For this reason, the use of weights constrained to be power-of-two has been proposed in the paper to reduce the computational requirements of the networks. In this case, because one of the two multiplier operands is a power-of-two, the multiply operation can be performed as a much simpler shift operation on the neuron input. While this approach greatly reduces the computational burden of the forward phase of the network, the learning phase, performed using the traditional backpropagation procedure, still requires many regular multiplications. In the paper, a new learning procedure, based on the power-of-two approach, is proposed that can be performed using only shift and add operations, so that both the forward and learning phases of the network can be easily implemented with digital hardware.
1993
M MARCHESI, G ORLANDI, F PIAZZA, A UNCINI (1993)  FAST NEURAL NETWORKS WITHOUT MULTIPLIERS   IEEE TRANSACTIONS ON NEURAL NETWORKS 4: 1. 53-62 JAN  
Abstract: The paper introduces multilayer perceptrons with weight values restricted to powers-of-two or sum of power-of-two. In a digital implementation, these neural networks do not need multipliers but only shift registers when computing in forward mode, thus saving chip area and computation time. A learning procedure, based on back-propagation, is presented for such neural networks. This learning procedure requires full real arithmetic and therefore must be performed off-line. Some test cases are presented, concerning MLPs with hidden layers of different size, on pattern recognition problems. Such tests demonstrate the validity and the generalization capability of the method and give some insight into the behavior of the learning algorithm.
1992
N BENVENUTO, M MARCHESI, A UNCINI (1992)  APPLICATIONS OF SIMULATED ANNEALING FOR THE DESIGN OF SPECIAL DIGITAL-FILTERS   IEEE TRANSACTIONS ON SIGNAL PROCESSING 40: 2. 323-332 FEB  
Abstract: This paper describes the salient features of using a simulated annealing (SA) algorithm in the context of designing digital filters with coefficient values expressed as the sum of power of two. A procedure for linear phase digital filter design, using this algorithm, is first presented and tested, yielding results as good as known optimal methods. The algorithm is then applied to the design of Nyquist filters, optimizing at the same time both frequency response and intersymbol interference, and to the design of cascade form FIR filters. Although SA is not a solution to all design problems, and is computationally very expensive, it may be an important method for designing special digital filters where numerous or conflicting constraints are present.
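The core of the SA procedure in the abstract is the acceptance rule: a move that worsens the cost is accepted with probability exp(-delta/T), with the temperature T lowered over time, so the search can escape local minima of a discrete design space. A generic sketch (the cost and neighbor below are toy stand-ins, not the paper's sum-of-power-of-two filter design):

```python
import math
import random

def simulated_annealing(cost, state, neighbor, t0=1.0, alpha=0.995,
                        iters=2000, seed=1):
    """Generic simulated annealing: always accept improvements, accept a
    worse neighbor with probability exp(-delta/T), cool T geometrically."""
    rng = random.Random(seed)
    cur, cur_c = state, cost(state)
    best, best_c = cur, cur_c
    T = t0
    for _ in range(iters):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cur_c
        if delta < 0 or rng.random() < math.exp(-delta / T):
            cur, cur_c = cand, cur_c + delta
            if cur_c < best_c:
                best, best_c = cur, cur_c
        T *= alpha   # geometric cooling schedule
    return best, best_c

# Usage: walk an integer coefficient toward the minimum of (x - 7)^2.
best, best_c = simulated_annealing(
    cost=lambda x: (x - 7) ** 2,
    state=0,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
)
```

For filter design, state would be the vector of quantized coefficients, neighbor a small perturbation of one coefficient within the allowed discrete set, and cost the frequency-response (or intersymbol-interference) error.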
Powered by publicationslist.org.