madgrad: 'MADGRAD' Method for Stochastic Optimization

A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization. MADGRAD is a 'best-of-both-worlds' optimizer with the generalization performance of stochastic gradient descent and convergence at least as fast as Adam's, often faster. A drop-in optim_madgrad() implementation is provided, based on Defazio and Jelassi (2021) <arXiv:2101.11075>.
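Since optim_madgrad() is described as a drop-in replacement, it can be used like any other torch optimizer. A minimal sketch, assuming the standard torch optimizer interface ($zero_grad(), $step()) and that optim_madgrad() takes a parameter list and a learning rate; the toy regression problem and lr value are illustrative:

```r
library(torch)
library(madgrad)

# Toy problem: recover w = 2 from y = 2 * x.
w <- torch_tensor(0, requires_grad = TRUE)
opt <- optim_madgrad(list(w), lr = 0.1)

x <- torch_randn(100)
y <- 2 * x

for (step in 1:200) {
  opt$zero_grad()                 # clear accumulated gradients
  loss <- mean((w * x - y)^2)     # mean squared error
  loss$backward()                 # backpropagate
  opt$step()                      # MADGRAD update
}
```

After the loop, w should be close to 2; swapping in optim_sgd() or optim_adam() requires no other changes, which is what "drop-in" means here.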

Version: 0.1.0
Imports: torch (≥ 0.3.0), rlang
Suggests: testthat (≥ 3.0.0)
Published: 2021-05-10
Author: Daniel Falbel [aut, cre, cph], RStudio [cph], MADGRAD original implementation authors [cph]
Maintainer: Daniel Falbel <daniel at>
License: MIT + file LICENSE
NeedsCompilation: no
Materials: README
CRAN checks: madgrad results


Reference manual: madgrad.pdf
Package source: madgrad_0.1.0.tar.gz
Windows binaries: r-devel:, r-release:, r-oldrel:
macOS binaries: r-release (arm64): madgrad_0.1.0.tgz, r-release (x86_64): madgrad_0.1.0.tgz, r-oldrel: madgrad_0.1.0.tgz


Please use the canonical form to link to this page.