Type: Conference paper
PTGVLR: Fast MLP learning using parallel tangent gradient with variable learning rates
Conference: 2025 29th International Computer Conference, Computer Society of Iran (CSICC 2025)
Year: 2007
Pages: 2162-2165
DOI: 10.1109/ICCAS.2007.4406690
Language: English
Abstract
In this paper, we propose an MLP learning algorithm, PTGVLR, based on the parallel tangent gradient with modified variable learning rates. The parallel tangent gradient uses the parallel tangent deflecting direction instead of the momentum term. Moreover, we use two separate variable learning rates: one for the gradient descent direction and the other for the accelerating direction along the parallel tangent. We test the PTGVLR optimization method on a two-dimensional Rosenbrock function and on learning several well-known MLP problems, such as parity generators and encoders. Our investigations show that the proposed MLP learning algorithm, PTGVLR, is faster than similar adaptive learning methods. © ICROS.
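The abstract does not spell out the update rule, so the following is only a minimal sketch of a gradient parallel-tangent (PARTAN) iteration with two separately adapted step sizes, applied to the two-dimensional Rosenbrock test function mentioned above. The function and parameter names (partan_vlr, alpha, beta, up, down) and the multiplicative increase/decrease adaptation are illustrative assumptions, not the paper's exact PTGVLR rule.

```python
import numpy as np

def rosenbrock(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def rosenbrock_grad(p):
    x, y = p
    return np.array([
        -2 * (1 - x) - 400 * x * (y - x ** 2),
        200 * (y - x ** 2),
    ])

def partan_vlr(f, grad, x0, alpha=1e-3, beta=1e-3,
               up=1.05, down=0.7, max_iter=20000, tol=1e-10):
    """Gradient PARTAN with two variable learning rates:
    `alpha` for the plain gradient step and `beta` for the accelerating
    step along the parallel-tangent direction (replacing momentum).
    The up/down adaptation rule here is an assumption for illustration."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev - alpha * grad(x_prev)        # first move: plain gradient step
    for _ in range(max_iter):
        # Gradient step from the current point, with its own rate alpha.
        z = x - alpha * grad(x)
        if f(z) < f(x):
            alpha *= up
        else:
            alpha *= down
            z = x                            # reject a harmful gradient step
        # Accelerating step along the direction joining the point two
        # stages back to the new gradient point, with its own rate beta.
        x_new = z + beta * (z - x_prev)
        if f(x_new) < f(z):
            beta *= up
        else:
            beta *= down
            x_new = z                        # reject a harmful acceleration
        x_prev, x = x, x_new
        if f(x) < tol:
            break
    return x, f(x)

if __name__ == "__main__":
    x_star, f_star = partan_vlr(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
    print("minimizer:", x_star, "f:", f_star)
```

In this sketch each rate grows slightly after a step that reduces the objective and shrinks after one that does not, which is one common way to realize "variable learning rates"; the paper's own adaptation may differ.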
Author Keywords
Back propagation; MLP learning; Parallel tangent gradient; Variable learning rates
Other Keywords
Adaptive algorithms; Artificial intelligence; Backpropagation; Education; Learning systems; Parallel algorithms; Adaptive learning methods; Gradient descents; MLP learning; Optimization methods; Parallel tangent gradient; Variable learning rates; Learning algorithms