Project Details

Optimizers in Deep Learning

Visualize and compare different optimizers, including Adam, AdaGrad, and more.

By The Gradient Team

Description

In this tutorial, we'll implement vanilla gradient descent and gradient descent with momentum from scratch (see the sketch after the list below). We'll also visualize the algorithms and compare different optimizers using PyTorch, including:

  • Vanilla gradient descent
  • Stochastic gradient descent
  • Adam
  • AdaDelta
  • AdaGrad
  • AdamW
  • AdaMax

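As a preview of the from-scratch portion, here is a minimal sketch of vanilla gradient descent and gradient descent with momentum on a toy quadratic loss. The loss function, hyperparameter values, and function names are illustrative assumptions, not the tutorial's actual setup.

```python
# Minimal sketch: vanilla gradient descent vs. gradient descent with momentum,
# minimizing the toy quadratic loss f(w) = w**2 (illustrative assumption).

def grad(w):
    # Gradient of f(w) = w**2
    return 2 * w

def gradient_descent(w, lr=0.1, steps=50):
    # Vanilla gradient descent: step directly against the gradient
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def gradient_descent_momentum(w, lr=0.1, beta=0.9, steps=50):
    # Momentum: accumulate an exponentially decaying sum of past gradients
    # and step along that accumulated direction instead of the raw gradient
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(w)
        w = w - lr * v
    return w

print(gradient_descent(5.0))           # approaches the minimum at w = 0
print(gradient_descent_momentum(5.0))  # momentum variant on the same problem
```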
For a more thorough breakdown of the code, check out the full tutorial Gradient Descent and Optimization in Deep Learning on the blog.
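To compare the PyTorch optimizers listed above, a small harness along the following lines can be used. The toy objective, step count, learning rate, and the `run` helper are illustrative assumptions rather than code from the tutorial.

```python
import torch

# Hypothetical comparison harness: minimize f(x) = (x - 3)^2 with each optimizer
# and report where the parameter ends up after a fixed number of steps.
def run(optimizer_cls, steps=200, lr=0.1):
    x = torch.tensor([0.0], requires_grad=True)
    opt = optimizer_cls([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (x - 3.0) ** 2
        loss.backward()
        opt.step()
    return x.item()

optimizers = {
    "SGD": torch.optim.SGD,
    "SGD + momentum": lambda params, lr: torch.optim.SGD(params, lr=lr, momentum=0.9),
    "Adam": torch.optim.Adam,
    "Adadelta": torch.optim.Adadelta,
    "Adagrad": torch.optim.Adagrad,
    "AdamW": torch.optim.AdamW,
    "Adamax": torch.optim.Adamax,
}

for name, cls in optimizers.items():
    print(f"{name:15s} -> x = {run(cls):.4f}")
```

Because every optimizer shares the `torch.optim` interface (`zero_grad`, `backward`, `step`), swapping one for another only requires changing the class passed to the harness, which is what makes a side-by-side comparison like this straightforward.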