Adaptive Discretization of Bregman Gradient Flows: A Dynamical Systems Perspective on Mirror Descent Convergence Geometry
Abstract
The limited convergence efficiency of classical first-order methods in high-dimensional, non-Euclidean geometries is addressed by analyzing the continuous-time limit of the Bregman gradient method and its connection to Mirror Descent. Focusing on convex optimization, with emphasis on ℓ1-regularized sparse regression and convex classification, the Bregman gradient flow is derived via a variational formulation, yielding the governing ODEs. A Lyapunov energy is constructed to characterize the decay of the flow; its time derivative induces a principled, Lyapunov-guided adaptive step-size rule. During discretization, a stabilization term is introduced to ensure that the numerical scheme tracks the continuous flow under curvature-dependent mirror geometries. The proposed continuum-guided Mirror Descent (CG-MD) adapts the step size to the local geometry and demonstrates improved efficiency and stability. On synthetic sparse-regression benchmarks, CG-MD reduces mean error by ≈30% relative to standard Mirror Descent. On real-world sparse regression, CG-MD reaches an error threshold of 0.01 in 140 iterations versus more than 200 for the baseline. In convex classification, CG-MD matches the speed of accelerated Mirror Descent while achieving lower terminal error. A sensitivity study across common mirror maps (entropy, Tsallis, log-barrier) and step-size policies indicates consistent gains for CG-MD. Assumptions and limitations (convexity, smoothness, discretization overhead) are detailed, and potential extensions to stochastic and nonconvex settings are outlined.
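The abstract describes a Lyapunov-guided adaptive step-size rule for Mirror Descent, but the paper's exact rule and stabilization term are not reproduced here. The sketch below is a minimal illustration only, assuming an entropy mirror map on the probability simplex and using simple objective decrease as a surrogate for the Lyapunov energy; the function names (cg_md, entropy_mirror_step) and the backtracking constants are hypothetical, not taken from the paper.

# Minimal sketch (not the authors' code) of Lyapunov-guided adaptive Mirror Descent.
import numpy as np

def entropy_mirror_step(x, grad, eta):
    """One Mirror Descent step with the negative-entropy mirror map (multiplicative update)."""
    y = x * np.exp(-eta * grad)     # gradient step in the dual (mirror) space
    return y / y.sum()              # Bregman projection back onto the simplex

def cg_md(f, grad_f, x0, iters=200, eta0=0.5, eps=1e-8):
    """Continuum-guided Mirror Descent sketch: accept a step only if it decreases a
    simple Lyapunov surrogate (the objective value); otherwise shrink the step size
    so the discrete iterates better track the continuous flow. The constants are
    illustrative assumptions."""
    x, eta = x0.copy(), eta0
    for _ in range(iters):
        g = grad_f(x)
        x_new = entropy_mirror_step(x, g, eta)
        if f(x_new) <= f(x) - eps:
            x, eta = x_new, min(1.1 * eta, 1.0)   # accept and cautiously grow the step
        else:
            eta *= 0.5                             # reject and halve the step
    return x

# Usage on a toy convex problem over the probability simplex: f(x) = ||Ax - b||^2 / 2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_star = cg_md(f, grad_f, np.ones(5) / 5)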
DOI: https://doi.org/10.31449/inf.v49i27.11906
License
I assign to Informatica, An International Journal of Computing and Informatics ("Journal") the copyright in the manuscript identified above and any additional material (figures, tables, illustrations, software or other information intended for publication) submitted as part of or as a supplement to the manuscript ("Paper") in all forms and media throughout the world, in all languages, for the full term of copyright, effective when and if the article is accepted for publication. This transfer includes the right to reproduce and/or to distribute the Paper to other journals or digital libraries in electronic and online forms and systems.
I understand that I retain the rights to use the pre-prints, off-prints, accepted manuscript and published journal Paper for personal use, scholarly purposes and internal institutional use.
In certain cases, I may ask to retain the publishing rights to the Paper. The Journal may permit or deny this request, and I fully agree to its decision.
I declare that the submitted Paper is original, has been written by the stated authors and has not been published elsewhere nor is currently being considered for publication by any other journal and will not be submitted for such review while under review by this Journal. The Paper contains no material that violates proprietary rights of any other person or entity. I have obtained written permission from copyright owners for any excerpts from copyrighted works that are included and have credited the sources in my article. I have informed the co-author(s) of the terms of this publishing agreement.
Copyright © Slovenian Society Informatika







