Last Updated on November 29, 2022
The gradient descent algorithm is one of the most popular techniques for training deep neural networks. It has many applications in fields such as computer vision, speech recognition, and natural language processing. While the idea of gradient descent has been around for decades, it's only recently that it has been applied to applications related to deep learning.
Gradient descent is an iterative optimization method used to find the minimum of an objective function by updating values iteratively on each step. With each iteration, it takes small steps towards the desired direction until convergence, or until a stop criterion is met.
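In symbols, if $\theta$ denotes the parameters, $\eta$ the learning rate (step size), and $\ell(\theta)$ the objective function, each iteration moves the parameters against the gradient:

$$\theta_{t+1} = \theta_t - \eta \, \nabla_\theta \ell(\theta_t)$$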
In this tutorial, you will train a simple linear regression model with two trainable parameters and explore how gradient descent works and how to implement it in PyTorch. Particularly, you will learn about:
- Gradient Descent algorithm and its implementation in PyTorch
- Batch Gradient Descent and its implementation in PyTorch
- Stochastic Gradient Descent and its implementation in PyTorch
- How Batch Gradient Descent and Stochastic Gradient Descent are different from each other
- How loss decreases in Batch Gradient Descent and Stochastic Gradient Descent during training
So, let's get started.

Implementing Gradient Descent in PyTorch.
Image by Michael Behrens. Some rights reserved.
Overview
This tutorial is in four parts; they are:
- Preparing Data
- Batch Gradient Descent
- Stochastic Gradient Descent
- Plotting Graphs for Comparison
Preparing Data
To keep the model simple for illustration, we will use the same linear regression problem as in the last tutorial. The data is synthetic and generated as follows:
```python
import torch
import numpy as np
import matplotlib.pyplot as plt

# Creating a function f(X) with a slope of -5
X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X

# Adding Gaussian noise to the function f(X) and saving it in Y
Y = func + 0.4 * torch.randn(X.size())
```
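Because torch.randn draws fresh random noise, the exact values of Y (and therefore the losses printed later) will differ slightly from run to run. If you want reproducible numbers, you can seed PyTorch's random number generator before generating the data; this is an optional addition, not part of the original code:

```python
import torch

# Optional: fix the random seed so the Gaussian noise (and the results below) are reproducible
torch.manual_seed(42)
```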
Same as in the previous tutorial, we initialized a variable X with values ranging from $-5$ to $5$, and created a linear function with a slope of $-5$. Then, Gaussian noise is added to create the variable Y.
We can plot the data using matplotlib to visualize the pattern:
```python
...
# Plot and visualize the data points in blue
plt.plot(X.numpy(), Y.numpy(), 'b+', label='Y')
plt.plot(X.numpy(), func.numpy(), 'r', label='func')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True, color='y')
plt.show()
```

Data points for the regression model
Batch Gradient Descent
Now that we have created the data for our model, next we will build a forward function based on a simple linear regression equation. We will train the model for two parameters ($w$ and $b$). We also need a loss criterion function. Because this is a regression problem on continuous values, MSE loss is appropriate.
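In equation form, the model and the mean squared error (MSE) loss we are about to implement are:

$$\hat{y} = w x + b, \qquad \text{MSE}(w, b) = \frac{1}{N} \sum_{i=1}^{N} \big(\hat{y}_i - y_i\big)^2$$

where $N$ is the number of training samples.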
```python
...
# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)
```
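As an aside, PyTorch ships an equivalent loss in torch.nn.MSELoss (whose default reduction is the mean); the hand-written criterion() above is used only to keep every step explicit. A minimal check of the equivalence, using two arbitrary tensors a and b, could look like this:

```python
import torch

loss_fn = torch.nn.MSELoss()  # default reduction='mean'

a = torch.randn(10, 1)
b = torch.randn(10, 1)

# Both lines compute the mean of the squared differences
print(loss_fn(a, b))
print(torch.mean((a - b) ** 2))
```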
Before we train our model, let's learn about batch gradient descent. In batch gradient descent, all the samples in the training data are considered in a single step. The parameters are updated by taking the mean gradient of all the training examples. In other words, there is only one step of gradient descent in one epoch.
While batch gradient descent is the best choice for smooth error manifolds, it is relatively slow and computationally expensive, especially if you have a larger dataset for training.
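Written out for our two parameters, one epoch of batch gradient descent performs a single update using the gradient of the mean loss over all $N$ samples, with step size $\eta$:

$$w \leftarrow w - \eta \, \frac{\partial}{\partial w} \frac{1}{N} \sum_{i=1}^{N} \big(w x_i + b - y_i\big)^2, \qquad b \leftarrow b - \eta \, \frac{\partial}{\partial b} \frac{1}{N} \sum_{i=1}^{N} \big(w x_i + b - y_i\big)^2$$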
Training with Batch Gradient Descent
Let's initialize the trainable parameters $w$ and $b$ to some starting values, and define some training parameters such as the learning rate or step size, an empty list to store the loss, and the number of epochs for training.
```python
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)

step_size = 0.1
loss_BGD = []
n_iter = 20
```
We will train our model for 20 epochs using the lines of code below. Here, the forward() function generates the prediction while the criterion() function measures the loss and stores it in the loss variable. The backward() method performs the gradient computations, and the updated parameters are stored in w.data and b.data.
```python
for i in range(n_iter):
    # making predictions with forward pass
    Y_pred = forward(X)
    # calculating the loss between original and predicted data points
    loss = criterion(Y_pred, Y)
    # storing the calculated loss in a list
    loss_BGD.append(loss.item())
    # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
    loss.backward()
    # updating the parameters after each iteration
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    # zeroing gradients after each iteration
    w.grad.data.zero_()
    b.grad.data.zero_()
    # printing the values for understanding
    print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))
```
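Updating the tensors through their .data attribute works, but if you prefer not to touch .data directly, an equivalent update step (a sketch, not part of the original code) can be wrapped in torch.no_grad() so that autograd does not track the in-place changes:

```python
# Equivalent parameter update without using .data (sketch)
with torch.no_grad():
    w -= step_size * w.grad
    b -= step_size * b.grad
    w.grad.zero_()
    b.grad.zero_()
```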
Here is how the output looks and how the parameters are updated after every epoch when we apply batch gradient descent. Notice that the loss decreases steadily while $w$ converges towards the true slope of $-5$ and $b$ approaches $0$.
```
0, 596.7191162109375, -1.8527469635009766, -16.062074661254883
1, 343.426513671875, -7.247585773468018, -12.83026123046875
2, 202.7098388671875, -3.616910219192505, -10.298759460449219
3, 122.16651153564453, -6.0132551193237305, -8.237251281738281
4, 74.85094451904297, -4.394278526306152, -6.6120076179504395
5, 46.450958251953125, -5.457883358001709, -5.295622825622559
6, 29.111614227294922, -4.735295295715332, -4.2531514167785645
7, 18.386211395263672, -5.206836700439453, -3.4119482040405273
8, 11.687058448791504, -4.883906364440918, -2.7437009811401367
9, 7.4728569984436035, -5.092618465423584, -2.205873966217041
10, 4.808231830596924, -4.948029518127441, -1.777699589729309
11, 3.1172332763671875, -5.040188312530518, -1.4337140321731567
12, 2.0413269996643066, -4.975278854370117, -1.159447193145752
13, 1.355530858039856, -5.0158305168151855, -0.9393846988677979
14, 0.9178376793861389, -4.986582279205322, -0.7637402415275574
15, 0.6382412910461426, -5.004333972930908, -0.6229321360588074
16, 0.45952412486076355, -4.991086006164551, -0.5104631781578064
17, 0.34523946046829224, -4.998797416687012, -0.42035552859306335
18, 0.27213525772094727, -4.992753028869629, -0.3483465909957886
19, 0.22536347806453705, -4.996064186096191, -0.2906789183616638
```
Putting it all together, the following is the complete code:
```python
import torch
import numpy as np
import matplotlib.pyplot as plt

X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X
Y = func + 0.4 * torch.randn(X.size())

# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)

step_size = 0.1
loss_BGD = []
n_iter = 20

for i in range(n_iter):
    # making predictions with forward pass
    Y_pred = forward(X)
    # calculating the loss between original and predicted data points
    loss = criterion(Y_pred, Y)
    # storing the calculated loss in a list
    loss_BGD.append(loss.item())
    # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
    loss.backward()
    # updating the parameters after each iteration
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    # zeroing gradients after each iteration
    w.grad.data.zero_()
    b.grad.data.zero_()
    # printing the values for understanding
    print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))
```
Running this complete code prints the same per-epoch loss and parameter values shown above.
Stochastic Gradient Descent
As we have learned, batch gradient descent is not a suitable choice when it comes to a huge training dataset. However, deep learning algorithms are data hungry and often require a large amount of data for training. For instance, with a dataset of millions of training examples, batch gradient descent would require the model to compute the gradient over all the data in a single step.
This does not seem to be an efficient approach, and the alternative is stochastic gradient descent (SGD). Stochastic gradient descent considers only a single sample from the training data at a time, computes the gradient to take a step, and updates the weights. Therefore, if we have $N$ samples in the training data, there will be $N$ steps in each epoch.
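In other words, for each sample $(x_i, y_i)$ taken in turn, the parameters are updated with the gradient of the loss on that single sample:

$$w \leftarrow w - \eta \, \frac{\partial}{\partial w} \big(w x_i + b - y_i\big)^2, \qquad b \leftarrow b - \eta \, \frac{\partial}{\partial b} \big(w x_i + b - y_i\big)^2$$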
Training with Stochastic Gradient Descent
To train our model with stochastic gradient descent, we will initialize the trainable parameters $w$ and $b$ with the same starting values as for batch gradient descent above. Here we will define an empty list to store the loss for stochastic gradient descent and train the model for 20 epochs. The following is the complete code, modified from the previous example:
```python
import torch
import numpy as np
import matplotlib.pyplot as plt

X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X
Y = func + 0.4 * torch.randn(X.size())

# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)

step_size = 0.1
loss_SGD = []
n_iter = 20

for i in range(n_iter):
    # calculating the loss on the whole dataset and storing it
    Y_pred = forward(X)
    # store the loss in the list
    loss_SGD.append(criterion(Y_pred, Y).tolist())

    for x, y in zip(X, Y):
        # making a prediction in forward pass
        y_hat = forward(x)
        # calculating the loss between original and predicted data points
        loss = criterion(y_hat, y)
        # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
        loss.backward()
        # updating the parameters after each iteration
        w.data = w.data - step_size * w.grad.data
        b.data = b.data - step_size * b.grad.data
        # zeroing gradients after each iteration
        w.grad.data.zero_()
        b.grad.data.zero_()
        # printing the values for understanding
        print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))
```
This prints a long list of values as follows:
```
0, 24.73763084411621, -5.02630615234375, -20.994739532470703
0, 455.0946960449219, -25.93259620666504, -16.7281494140625
0, 6968.82666015625, 54.207733154296875, -33.424049377441406
0, 97112.9140625, -238.72393798828125, 28.901844024658203
....
19, 8858971136.0, -1976796.625, 8770213.0
19, 271135948800.0, -1487331.875, 8874354.0
19, 3010866446336.0, -3153109.5, 8527317.0
19, 47926483091456.0, 3631328.0, 9911896.0
```
Plotting Graphs for Comparison
Now that we have trained our model using batch gradient descent and stochastic gradient descent, let's visualize how the loss decreases for both methods during training. The graph for batch gradient descent looks like this:
```python
...
plt.plot(loss_BGD, label="Batch Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()
```

The loss history of batch gradient descent
Similarly, here is how the graph for stochastic gradient descent looks:
```python
plt.plot(loss_SGD, label="Stochastic Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()
```

Loss history of stochastic gradient descent
As you can see, the loss decreases smoothly for batch gradient descent. On the other hand, you will observe fluctuations in the graph for stochastic gradient descent. As mentioned earlier, the reason is quite simple: batch gradient descent updates the parameters only after all the training samples have been processed, while stochastic gradient descent updates them after every single training sample. Because each stochastic update is based on a single sample, a step size that is stable for batch gradient descent can make individual updates overshoot, which is why the printed values above grow so large; a smaller step size generally keeps the stochastic updates better behaved.
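If you would like to compare the two curves in a single figure, a small addition (not part of the original code) plots both stored loss lists on the same axes:

```python
...
# Plot both loss histories on one set of axes for a direct comparison
plt.plot(loss_BGD, label="Batch Gradient Descent")
plt.plot(loss_SGD, label="Stochastic Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()
```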
Putting everything together, below is the complete code:
```python
import torch
import numpy as np
import matplotlib.pyplot as plt

# Creating a function f(X) with a slope of -5
X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X

# Adding Gaussian noise to the function f(X) and saving it in Y
Y = func + 0.4 * torch.randn(X.size())

# Plot and visualize the data points in blue
plt.plot(X.numpy(), Y.numpy(), 'b+', label='Y')
plt.plot(X.numpy(), func.numpy(), 'r', label='func')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True, color='y')
plt.show()

# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

# Batch gradient descent
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)
step_size = 0.1
loss_BGD = []
n_iter = 20

for i in range(n_iter):
    # making predictions with forward pass
    Y_pred = forward(X)
    # calculating the loss between original and predicted data points
    loss = criterion(Y_pred, Y)
    # storing the calculated loss in a list
    loss_BGD.append(loss.item())
    # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
    loss.backward()
    # updating the parameters after each iteration
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    # zeroing gradients after each iteration
    w.grad.data.zero_()
    b.grad.data.zero_()
    # printing the values for understanding
    print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))

# Stochastic gradient descent
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)
step_size = 0.1
loss_SGD = []
n_iter = 20

for i in range(n_iter):
    # calculating the loss on the whole dataset and storing it
    Y_pred = forward(X)
    # store the loss in the list
    loss_SGD.append(criterion(Y_pred, Y).tolist())

    for x, y in zip(X, Y):
        # making a prediction in forward pass
        y_hat = forward(x)
        # calculating the loss between original and predicted data points
        loss = criterion(y_hat, y)
        # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
        loss.backward()
        # updating the parameters after each iteration
        w.data = w.data - step_size * w.grad.data
        b.data = b.data - step_size * b.grad.data
        # zeroing gradients after each iteration
        w.grad.data.zero_()
        b.grad.data.zero_()
        # printing the values for understanding
        print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))

# Plot graphs
plt.plot(loss_BGD, label="Batch Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()

plt.plot(loss_SGD, label="Stochastic Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()
```
Summary
In this tutorial you learned about gradient descent, some of its variations, and how to implement them in PyTorch. Particularly, you learned about:
- Gradient Descent algorithm and its implementation in PyTorch
- Batch Gradient Descent and its implementation in PyTorch
- Stochastic Gradient Descent and its implementation in PyTorch
- How Batch Gradient Descent and Stochastic Gradient Descent are different from each other
- How loss decreases in Batch Gradient Descent and Stochastic Gradient Descent during training