368,935 Hits in 3.3 sec

A normalized gradient descent algorithm for nonlinear adaptive filters using a gradient adaptive step size

D.P. Mandic, A.I. Hanna, M. Razaz
2001 IEEE Signal Processing Letters  
A fully adaptive normalized nonlinear gradient descent (FANNGD) algorithm for online adaptation of nonlinear neural filters is proposed.  ...  For rigor, the remainder of the truncated Taylor series expansion within the expression for the adaptive learning rate is made adaptive and is updated using gradient descent.  ...  A Normalized Gradient Descent Algorithm for Nonlinear Adaptive Filters Using a Gradient Adaptive Step Size Danilo P. Mandic, Andrew I.  ... 
doi:10.1109/97.969448 fatcat:blpwcsls55gwlfb3wp4srlxczq
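
For orientation, a minimal sketch of the idea behind this family of algorithms: a normalized filter update whose step size is itself adapted by gradient descent. The code assumes a linear FIR filter in place of the paper's nonlinear neural filter and uses a generic gradient-adaptive-step-size recursion, not the exact FANNGD update.

```python
import numpy as np

def ngd_adaptive_step(x, d, n_taps=8, mu=0.1, rho=1e-3, eps=1e-6):
    """Normalized gradient descent with a gradient-adapted step size.

    Illustrative only: a linear FIR filter stands in for the nonlinear
    neural filter, and the step-size recursion follows the generic
    gradient-adaptive-step-size idea rather than the FANNGD expression.
    """
    w = np.zeros(n_taps)
    prev_e, prev_u = 0.0, np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]        # regressor, most recent sample first
        e = d[n] - w @ u                 # a priori output error
        norm = u @ u + eps               # input-power normalization
        # adapt mu by gradient descent on the squared error (clipped at 0)
        mu = max(mu + rho * e * prev_e * (prev_u @ u) / norm, 0.0)
        w += (mu / norm) * e * u         # normalized gradient update
        prev_e, prev_u = e, u
    return w
```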

Gradient-Sensitive Optimization for Convolutional Neural Networks

Zhipeng Liu, Rui Feng, Xiuhan Li, Wei Wang, Xiaoling Wu, Jussi Tohka
2021 Computational Intelligence and Neuroscience  
Convolutional neural networks (CNNs) are effective models for image classification and recognition. Gradient descent optimization (GD) is the basic algorithm for CNN model optimization.  ...  Our algorithm is a supplement to existing gradient descent algorithms and can be combined with many of them to improve the efficiency of iteration and speed up the  ...  Preliminaries: The basic algorithm for CNN model optimization is GD.  ... 
doi:10.1155/2021/6671830 fatcat:xegos3yn25fyxlpnaa2tbnt64q
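
The snippet's baseline is plain gradient descent; a minimal sketch for reference (the paper's gradient-sensitive modification layers on top of updates like this and is not reproduced here):

```python
import numpy as np

def gradient_descent(grad, w0, lr=0.1, iters=100):
    """Plain GD: repeatedly step against the gradient.  `grad` is any
    callable returning the gradient at w; lr and iters are arbitrary."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        w = w - lr * grad(w)
    return w
```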

An Improved CMA-ES for Solving Large Scale Optimization Problem [chapter]

Jin Jin, Chuan Yang, Yi Zhang
2020 Lecture Notes in Computer Science  
Comparative experiments were conducted against state-of-the-art algorithms; the results demonstrate the effectiveness and efficiency of GI-ES for large-scale optimization problems.  ...  To solve this problem, this paper proposes an improved CMA-ES, called GI-ES, for large-scale optimization problems.  ...  Effectiveness of the Gradient Information: The adaptation of the mutation strength is crucial for evolutionary computation.  ... 
doi:10.1007/978-3-030-53956-6_34 fatcat:psbhdxnmbvazrd5myb6fwa5ipq
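
The snippet stresses that adapting the mutation strength is crucial. A toy evolution strategy with a crude success-rule adaptation of sigma illustrates the mechanism; CMA-ES and the paper's GI-ES adapt a full covariance (plus gradient information) rather than a scalar.

```python
import numpy as np

def simple_es(f, x0, sigma=0.3, lam=16, iters=100, rng=None):
    """(1+lambda)-style ES minimizing f, with success-based adaptation
    of the mutation strength sigma.  A toy stand-in for CMA-ES/GI-ES."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cands = x + sigma * rng.standard_normal((lam, x.size))
        vals = np.array([f(c) for c in cands])
        best = int(vals.argmin())
        if vals[best] < fx:              # success: accept and widen search
            x, fx = cands[best], vals[best]
            sigma *= 1.5
        else:                            # failure: shrink the search radius
            sigma *= 0.82
    return x, fx
```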

Hyperspherical parametrization for unit-norm based adaptive IIR filtering

S.G. Sankaran, A.A.L. Beex
1999 IEEE Signal Processing Letters  
We propose a hyperspherical parameterization to convert the unit-norm-constrained optimization into an unconstrained optimization.  ...  The bias problem associated with equation-error-based adaptive infinite impulse response (IIR) filtering can be surmounted by imposing a unit-norm constraint on the autoregressive (AR) coefficients.  ...  INTRODUCTION: Traditionally, finite impulse response (FIR) structures have been used for adaptive filters, due to their simplicity.  ... 
doi:10.1109/97.803434 fatcat:pb5lx6igxjgjfmp6wgpxq7qvzy
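
The core trick is the change of variables: n-1 free angles always map to a unit-norm vector in R^n, so the constraint disappears. A generic hyperspherical map (the paper's exact convention may differ):

```python
import numpy as np

def hypersphere_to_unit_vector(theta):
    """Map n-1 angles to a unit-norm vector in R^n:
    a_k = cos(theta_k) * prod_{j<k} sin(theta_j), and the last entry is
    the full product of sines, so ||a|| = 1 by construction."""
    theta = np.asarray(theta, dtype=float)
    a = np.empty(theta.size + 1)
    sin_prod = 1.0
    for k, t in enumerate(theta):
        a[k] = sin_prod * np.cos(t)
        sin_prod *= np.sin(t)
    a[-1] = sin_prod
    return a
```

Unconstrained gradient descent on the angles then respects the unit-norm constraint automatically.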

A global least mean square algorithm for adaptive IIR filtering

W. Edmonson, J. Principe, K. Srinivasan, Chuan Wang
1998 IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing  
Combining this approximation of the gradient with the LMS algorithm results in a stochastic global optimization algorithm for adaptive IIR filtering.  ...  Derivation of Gradient Estimate: The key to implementing any algorithm for adaptive filtering (7) is the development of an on-line gradient estimate r(n; ). Here we propose to apply the SAS-derived single-sided  ... 
doi:10.1109/82.664244 fatcat:l6w5f6tblva4hl5benddfzs75a
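
The ingredient named in the snippet is an on-line single-sided gradient estimate. A generic smoothed finite-difference estimator conveys the idea; it is not the paper's SAS-derived estimator.

```python
import numpy as np

def one_sided_grad(J, theta, h=1e-2, n_samples=8, rng=None):
    """Single-sided randomized finite-difference estimate of grad J:
    average (J(theta + h*u) - J(theta)) / h * u over random directions u.
    A generic stand-in for the SAS-derived estimator in the paper."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(theta, dtype=float)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        g += (J(theta + h * u) - J(theta)) / h * u
    return g / n_samples
```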

An adaptive optimal strategy based on the combination of the dynamic-Q optimization method and response surface methodology

Shiyou Yang, S.L. Ho, Guangzheng Ni, H.C. Wong
2005 IEEE transactions on magnetics  
The dynamic-Q optimization method is combined with an interpolating moving least-squares (IMLS) approximation-based response surface model to design an efficient adaptive strategy for solving computationally  ...  The proposed optimal strategy is validated by comparing its performance with that of other common optimization methods on two different kinds of problems.  ...  For the sake of completeness, a brief introduction to IMLS is given in the following paragraphs.  ... 
doi:10.1109/tmag.2005.846031 fatcat:7lxfcblqrne3tldndp5ajhp5yy
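
A response surface built by moving least squares fits a local weighted model around each query point. A minimal sketch with a Gaussian weight and a linear basis, standing in for the interpolating MLS model in the paper:

```python
import numpy as np

def mls_predict(X, y, x0, bandwidth=1.0):
    """Moving least-squares prediction at x0: solve a distance-weighted
    linear least-squares fit and evaluate it at x0.  X is (m, n) samples,
    y the m responses; bandwidth controls the weight decay."""
    x0 = np.asarray(x0, dtype=float)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    A = np.hstack([np.ones((len(X), 1)), X])      # [1, x] linear basis
    AW = A * w[:, None]                            # row-weighted design
    beta = np.linalg.solve(A.T @ AW, AW.T @ y)     # weighted normal equations
    return np.concatenate(([1.0], x0)) @ beta
```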

Efficient Full-Matrix Adaptive Regularization [article]

Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, Yi Zhang
2020 arXiv   pre-print
We also provide a novel theoretical analysis for adaptive regularization in non-convex optimization settings.  ...  The core of our algorithm, termed GGT, consists of the efficient computation of the inverse square root of a low-rank matrix.  ...  Acknowledgments We are grateful to Yoram Singer, Tomer Koren, Nadav Cohen, and Sanjeev Arora for helpful discussions.  ... 
arXiv:1806.02958v2 fatcat:vyzeqvt7bbedrn2tyfdjhsat5a
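
The GGT core named in the snippet is the inverse square root of a low-rank matrix G Gᵀ, where G collects a window of recent gradients. Via the SVD of G this costs only the window size, not the parameter dimension squared; a sketch of that step (not the full optimizer):

```python
import numpy as np

def ggt_precondition(grad_window, g, eps=1e-4):
    """Apply (G G^T + eps I)^(-1/2) to g, where G (d x k) stacks the k
    most recent gradients.  Uses the SVD of G so the cost is O(d k^2)."""
    G = np.stack(grad_window, axis=1)                 # d x k
    U, s, _ = np.linalg.svd(G, full_matrices=False)   # U: d x k
    coef = U.T @ g                                    # component in span(G)
    return U @ (coef / np.sqrt(s ** 2 + eps)) + (g - U @ coef) / np.sqrt(eps)
```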

Page 377 of Automation and Remote Control Vol. 34, Issue 3 [page]

1973 Automation and Remote Control  
Introduction: At present, there is no lack of algorithms for finding the unconstrained extremum of a functional J(c) which defines an optimality criterion.  ...  ADAPTIVE SYSTEMS: Pseudogradient Adaptation and Training Algorithms, B. T. Polyak and Ya. Z. Tsypkin.  ... 
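
The framework behind this classic paper, restated from memory as a sketch rather than a quotation: any stochastic direction whose mean forms an acute angle with the true gradient drives the iteration to the extremum.

```latex
% Pseudogradient iteration for minimizing J(c):
\[
  c_{n+1} = c_n - \gamma_n s_n, \qquad \gamma_n > 0,
\]
% where s_n need only be a pseudogradient, i.e. correct on average:
\[
  \mathbb{E}\left[ s_n \mid c_n \right]^{\top} \nabla J(c_n) \;\ge\; 0.
\]
% SGD and many training algorithms are special cases of this scheme.
```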

An Improved Adam Optimization Algorithm Combining Adaptive Coefficients and Composite Gradients Based on Randomized Block Coordinate Descent

Miaomiao Liu, Dan Yao, Zhigang Liu, Jingfeng Guo, Jing Chen, Upaka Rathnayake
2023 Computational Intelligence and Neuroscience  
An improved Adam optimization algorithm combining adaptive coefficients and composite gradients based on randomized block coordinate descent is proposed to address issues of the Adam algorithm such as  ...  Simulation experiments on two standard datasets for classification show that the convergence speed and accuracy of the proposed algorithm are higher than those of the six gradient descent methods, and  ...  Acknowledgments: This work was supported by the National Natural Science Foundation of China (Grant nos. 42002138 and 62172352), Natural Science Foundation of Heilongjiang Province (Grant no.  ... 
doi:10.1155/2023/4765891 pmid:36660559 pmcid:PMC9845049 fatcat:hrjclg6msbhmtpnvlomkuwc6gi
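
For contrast, the unmodified Adam step the paper starts from (its adaptive coefficients, composite gradients, and randomized block-coordinate updates are not shown):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam step: bias-corrected first/second moment
    estimates scale the update per coordinate.  t is 1-based."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```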

Comparison of Adaptive Ant Colony Optimization for Image Edge Detection of Leaves Bone Structure

Febri Liantoni, Rifki Indra Perwira, Daniel Silli Bataona
2018 Emitter: International Journal of Engineering Technology  
In this research, an Adaptive Ant Colony Optimization algorithm is proposed for edge detection of leaf bone structure images.  ...  that allows for an edge based on the value of the image gradient.  ...  INTRODUCTION: Plants are among the most essential forms of life on earth, supplying oxygen for breathing as well as food, fuel, medicine, cosmetics, and more.  ... 
doi:10.24003/emitter.v6i2.306 fatcat:5yhh2y2e2rgj5bmbrwswbbagfy
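
ACO edge detectors typically bias ant movement by a per-pixel heuristic drawn from the image gradient, as the snippet hints. A minimal version of that heuristic (the paper's adaptive ACO itself is not reproduced):

```python
import numpy as np

def edge_heuristic(img):
    """Normalized gradient magnitude per pixel: large values mark likely
    edges and would attract ants in an ACO edge detector."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12)
```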

Adaptive high order stochastic descent algorithms

Gabriel TURINICI
2022 Zenodo  
After a brief introduction to this framework, we introduce in this talk a new approach, called SGD-G2, which is a high-order Runge-Kutta stochastic descent algorithm; the procedure allows for step adaptation  ...  One of the best known among them, Stochastic Gradient Descent (SGD), has been extended in various ways, resulting in Adam, Nesterov, momentum, etc.  ...  [Figure: left, numerical results over the first 5 epochs for the SGD-G2 algorithm on the FMNIST dataset with several choices of the initial learning rate h0; right, SGD, SGD-G2, and Adam.]
doi:10.5281/zenodo.7257153 fatcat:zoapff5eubdntkq3ectebgjaoa
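
A second-order Runge-Kutta step along the gradient flow, with an embedded-error step-size controller, conveys the flavor of high-order stochastic descent with step adaptation; the actual SGD-G2 update and adaptation rule differ.

```python
import numpy as np

def rk2_descent_step(w, grad, h, tol=1e-2):
    """One Heun (RK2) step on w' = -grad(w).  The gap between the Euler
    and RK2 updates serves as a local error estimate that rescales h."""
    g1 = grad(w)
    g2 = grad(w - h * g1)
    w_new = w - 0.5 * h * (g1 + g2)              # RK2 (Heun) update
    err = 0.5 * h * np.linalg.norm(g1 - g2)      # Euler-vs-RK2 discrepancy
    h_new = h * min(2.0, max(0.5, 0.9 * np.sqrt(tol / (err + 1e-12))))
    return w_new, h_new
```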

An improved algorithm for radar adaptive beamforming based on machine learning

Moyu Bai, Hao Liu, Haochuan Chen, Shengming Gu, Zhenhua Zhang
2019 Journal of Physics, Conference Series  
In order to improve the performance of adaptive beamforming, this paper firstly reviews the classical LMS algorithm and then the machine learning optimization algorithm.  ...  The Least Mean Square Algorithm (LMS) is a simple and easy algorithm for adaptive digital beamforming.  ...  Introduction Adaptive digital beamforming is an important branch of digital signal processing.  ... 
doi:10.1088/1742-6596/1325/1/012114 fatcat:ubjx3h3fbfbjpgejf6ahfitjae
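
The classical LMS beamformer the paper reviews takes a few lines: per snapshot, update the weights against the a priori error. A textbook sketch (the machine-learning refinement is not shown):

```python
import numpy as np

def lms_beamformer(X, d, mu=0.01):
    """LMS weight adaptation over array snapshots.  X is (snapshots x
    elements), complex; d holds the desired signal per snapshot."""
    w = np.zeros(X.shape[1], dtype=complex)
    for x, dn in zip(X, d):
        e = dn - np.vdot(w, x)           # a priori error, d - w^H x
        w = w + mu * np.conj(e) * x      # steepest-descent update
    return w
```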

Enhancing Performance of a Deep Neural Network: A Comparative Analysis of Optimization Algorithms

Noor Fatima
2020 Advances in Distributed Computing and Artificial Intelligence Journal  
Adopting the most suitable optimization algorithm (optimizer) for a neural network model is among the most important ventures in deep learning, for all classes of neural networks.  ...  In this paper, we experiment with seven of the most popular optimization algorithms, namely sgd, rmsprop, adagrad, adadelta, adam, adamax, and nadam, on four unrelated datasets separately, to conclude  ...  Adaptive Gradient Algorithm (Adagrad): Adagrad is very similar to the stochastic gradient descent algorithm but, unlike it, uses adaptive gradients to improve robustness, as shown  ... 
doi:10.14201/adcaij2020927990 fatcat:mo7gwxkcujf5fadwwiwoef3xpq
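
The Adagrad behavior the snippet describes, as an update rule: each coordinate's effective learning rate shrinks with the root of its accumulated squared gradients.

```python
import numpy as np

def adagrad_step(w, g, G, lr=0.01, eps=1e-8):
    """One Adagrad step.  G accumulates g*g per coordinate, so frequently
    updated coordinates get progressively smaller steps."""
    G = G + g * g
    return w - lr * g / (np.sqrt(G) + eps), G
```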

A Modified Bio Inspired

Dharmpal Singh
2018 International Journal of Applied Metaheuristic Computing  
(CPSO) algorithm, self adaptive penalty function genetic algorithm (SAPFGA) and mutable smart bee algorithm (MSBA), for optimal design of truss structures with dynamic frequency constraints.  ...  Therefore, an algorithm for automation of constraint shape and size design of truss structures is proposed here.  ...  INTRODUCTION Generally, two different paradigms are taken into account for optimizing engineering systems such as truss design problems.  ... 
doi:10.4018/ijamc.2018010105 fatcat:jcyynwmmrjde5c2s7aivixnbbu
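
The self-adaptive penalty-function GA mentioned in the snippet builds on the basic penalty mechanism: add weighted constraint violations to the objective. A static-penalty sketch of that mechanism (the self-adaptive weighting rule of SAPFGA is not shown):

```python
def penalized(f, constraints, lam=1e3):
    """Wrap objective f with a penalty: f(x) + lam * total violation.
    Each constraint g is taken in the g(x) <= 0 (feasible) convention."""
    def F(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + lam * violation
    return F
```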

Improved Binary Forward Exploration: Learning Rate Scheduling Method for Stochastic Optimization [article]

Xin Cao
2022 arXiv   pre-print
and the most successful adaptive learning rate algorithm, e.g.  ...  The adaptive version of BFE is also discussed thereafter.  ...  Algorithm 3 (Improved BFE of gradient change): the proposed algorithm for non-adaptive learning-rate automation in stochastic optimization.  ... 
arXiv:2207.04198v3 fatcat:zkwvjcegevdnrkcyfvn5oxsx5m
Showing results 1 — 15 out of 368,935 results