Implementing Gradient Descent in PyTorch

by Insta Citizen, December 1, 2022, in Artificial Intelligence


Last Updated on November 29, 2022

The gradient descent algorithm is one of the most popular techniques for training deep neural networks. It has many applications in fields such as computer vision, speech recognition, and natural language processing. While the idea of gradient descent has been around for decades, it's only recently that it has been applied to applications related to deep learning.

Gradient descent is an iterative optimization method used to find the minimum of an objective function by updating values iteratively on each step. With each iteration, it takes small steps towards the desired direction until convergence, or until a stop criterion is met.
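
In symbols, if $\eta$ is the learning rate (step size) and $\nabla f(\theta_t)$ is the gradient of the objective function at the current parameters $\theta_t$, each iteration applies the update $\theta_{t+1} = \theta_t - \eta \nabla f(\theta_t)$.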

In this tutorial, you will train a simple linear regression model with two trainable parameters and explore how gradient descent works and how to implement it in PyTorch. Particularly, you will learn:

  • Gradient Descent algorithm and its implementation in PyTorch
  • Batch Gradient Descent and its implementation in PyTorch
  • Stochastic Gradient Descent and its implementation in PyTorch
  • How Batch Gradient Descent and Stochastic Gradient Descent are different from each other
  • How loss decreases in Batch Gradient Descent and Stochastic Gradient Descent during training

So, let’s get began.

Implementing Gradient Descent in PyTorch.
Image by Michael Behrens. Some rights reserved.

Overview

This tutorial is in four parts; they are:

  • Preparing Data
  • Batch Gradient Descent
  • Stochastic Gradient Descent
  • Plotting Graphs for Comparison

Preparing Data

To keep the model simple for illustration, we will use the linear regression problem as in the previous tutorial. The data is synthetic and generated as follows:

import torch
import numpy as np
import matplotlib.pyplot as plt

# Creating a function f(X) with a slope of -5
X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X

# Adding Gaussian noise to the function f(X) and saving it in Y
Y = func + 0.4 * torch.randn(X.size())

Same as in the previous tutorial, we initialized a variable X with values ranging from $-5$ to $5$, and created a linear function with a slope of $-5$. Then, Gaussian noise is added to create the variable Y.
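
In other words, each data point is generated as $Y_i = -5 X_i + \epsilon_i$ with noise $\epsilon_i \sim \mathcal{N}(0, 0.4^2)$, since the standard normal samples from torch.randn() are scaled by 0.4.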

We can plot the data using matplotlib to visualize the pattern:

...

# Plot and visualize the data points in blue
plt.plot(X.numpy(), Y.numpy(), 'b+', label='Y')
plt.plot(X.numpy(), func.numpy(), 'r', label='func')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid('True', color='y')
plt.show()

Data points for the regression model

Batch Gradient Descent

Now that we have created the data for our model, next we will build a forward function based on a simple linear regression equation. We will train the model for two parameters ($w$ and $b$). We will also need a loss criterion function. Because it is a regression problem on continuous values, MSE loss is appropriate.
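
For $N$ predictions $\hat{y}_i$ and targets $y_i$, the MSE loss is $\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$, which is exactly what torch.mean((y_pred - y) ** 2) computes in the criterion() function below.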

...

# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

Before we train our model, let's learn about batch gradient descent. In batch gradient descent, all the samples in the training data are considered in a single step. The parameters are updated by taking the mean gradient of all the training examples. In other words, there is only one step of gradient descent in one epoch.
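
Written out, with learning rate $\eta$ and per-sample loss $\ell$, one epoch of batch gradient descent performs the single update $w \leftarrow w - \eta \frac{1}{N}\sum_{i=1}^{N}\nabla_w \ell(\hat{y}_i, y_i)$ (and likewise for $b$); calling backward() on the mean loss produces exactly this averaged gradient.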

While batch gradient descent is the best choice for smooth error manifolds, it is relatively slow and computationally expensive, especially if you have a larger dataset for training.

Training with Batch Gradient Descent

Let's randomly initialize the trainable parameters $w$ and $b$, and define some training parameters such as the learning rate or step size, an empty list to store the loss, and the number of epochs for training.

w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)

step_size = 0.1
loss_BGD = []
n_iter = 20

We will train our model for 20 epochs using the lines of code below. Here, the forward() function generates the prediction while the criterion() function measures the loss, which is stored in the loss variable. The backward() method performs the gradient computations, and the updated parameters are stored in w.data and b.data.

for i in range(n_iter):
    # making predictions with forward pass
    Y_pred = forward(X)
    # calculating the loss between original and predicted data points
    loss = criterion(Y_pred, Y)
    # storing the calculated loss in a list
    loss_BGD.append(loss.item())
    # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
    loss.backward()
    # updating the parameters after each iteration
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    # zeroing gradients after each iteration
    w.grad.data.zero_()
    b.grad.data.zero_()
    # printing the values for understanding
    print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))

Here is how the output looks and how the parameters are updated after every epoch when we apply batch gradient descent:

0, 596.7191162109375, -1.8527469635009766, -16.062074661254883

1, 343.426513671875, -7.247585773468018, -12.83026123046875

2, 202.7098388671875, -3.616910219192505, -10.298759460449219

3, 122.16651153564453, -6.0132551193237305, -8.237251281738281

4, 74.85094451904297, -4.394278526306152, -6.6120076179504395

5, 46.450958251953125, -5.457883358001709, -5.295622825622559

6, 29.111614227294922, -4.735295295715332, -4.2531514167785645

7, 18.386211395263672, -5.206836700439453, -3.4119482040405273

8, 11.687058448791504, -4.883906364440918, -2.7437009811401367

9, 7.4728569984436035, -5.092618465423584, -2.205873966217041

10, 4.808231830596924, -4.948029518127441, -1.777699589729309

11, 3.1172332763671875, -5.040188312530518, -1.4337140321731567

12, 2.0413269996643066, -4.975278854370117, -1.159447193145752

13, 1.355530858039856, -5.0158305168151855, -0.9393846988677979

14, 0.9178376793861389, -4.986582279205322, -0.7637402415275574

15, 0.6382412910461426, -5.004333972930908, -0.6229321360588074

16, 0.45952412486076355, -4.991086006164551, -0.5104631781578064

17, 0.34523946046829224, -4.998797416687012, -0.42035552859306335

18, 0.27213525772094727, -4.992753028869629, -0.3483465909957886

19, 0.22536347806453705, -4.996064186096191, -0.2906789183616638

Putting it all together, the following is the complete code:

import torch
import numpy as np
import matplotlib.pyplot as plt

X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X
Y = func + 0.4 * torch.randn(X.size())

# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)

step_size = 0.1
loss_BGD = []
n_iter = 20

for i in range(n_iter):
    # making predictions with forward pass
    Y_pred = forward(X)
    # calculating the loss between original and predicted data points
    loss = criterion(Y_pred, Y)
    # storing the calculated loss in a list
    loss_BGD.append(loss.item())
    # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
    loss.backward()
    # updating the parameters after each iteration
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    # zeroing gradients after each iteration
    w.grad.data.zero_()
    b.grad.data.zero_()
    # printing the values for understanding
    print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))

The for-loop above prints one line per epoch, such as the following:

0, 596.7191162109375, -1.8527469635009766, -16.062074661254883

1, 343.426513671875, -7.247585773468018, -12.83026123046875

2, 202.7098388671875, -3.616910219192505, -10.298759460449219

3, 122.16651153564453, -6.0132551193237305, -8.237251281738281

4, 74.85094451904297, -4.394278526306152, -6.6120076179504395

5, 46.450958251953125, -5.457883358001709, -5.295622825622559

6, 29.111614227294922, -4.735295295715332, -4.2531514167785645

7, 18.386211395263672, -5.206836700439453, -3.4119482040405273

8, 11.687058448791504, -4.883906364440918, -2.7437009811401367

9, 7.4728569984436035, -5.092618465423584, -2.205873966217041

10, 4.808231830596924, -4.948029518127441, -1.777699589729309

11, 3.1172332763671875, -5.040188312530518, -1.4337140321731567

12, 2.0413269996643066, -4.975278854370117, -1.159447193145752

13, 1.355530858039856, -5.0158305168151855, -0.9393846988677979

14, 0.9178376793861389, -4.986582279205322, -0.7637402415275574

15, 0.6382412910461426, -5.004333972930908, -0.6229321360588074

16, 0.45952412486076355, -4.991086006164551, -0.5104631781578064

17, 0.34523946046829224, -4.998797416687012, -0.42035552859306335

18, 0.27213525772094727, -4.992753028869629, -0.3483465909957886

19, 0.22536347806453705, -4.996064186096191, -0.2906789183616638

Stochastic Gradient Descent

As we learned, batch gradient descent is not a suitable choice when it comes to a huge training dataset. However, deep learning algorithms are data hungry and often require large quantities of data for training. For instance, a dataset with millions of training examples would require the model to compute the gradient for all the data in a single step, if we are using batch gradient descent.

This does not seem to be an efficient approach, and the alternative is stochastic gradient descent (SGD). Stochastic gradient descent considers only a single sample from the training data at a time, computes the gradient to take a step, and updates the weights. Therefore, if we have $N$ samples in the training data, there will be $N$ steps in each epoch.
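
As an aside, PyTorch also ships a built-in torch.optim.SGD optimizer that can take the place of the manual parameter updates used in this tutorial. Below is only a minimal, self-contained sketch of the same per-sample update expressed with the optimizer API; the data generation mirrors the code above, and the tutorial itself keeps the manual version so the mechanics stay visible.

import torch

# Minimal sketch (not part of the tutorial's code): per-sample SGD updates
# expressed with torch.optim.SGD instead of manual .data updates
X = torch.arange(-5, 5, 0.1).view(-1, 1)
Y = -5 * X + 0.4 * torch.randn(X.size())

w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)
optimizer = torch.optim.SGD([w, b], lr=0.1)

for epoch in range(20):
    for x, y in zip(X, Y):
        y_hat = w * x + b                    # forward pass for one sample
        loss = torch.mean((y_hat - y) ** 2)  # MSE loss for this sample
        optimizer.zero_grad()                # clear gradients from the previous step
        loss.backward()                      # compute gradients w.r.t. w and b
        optimizer.step()                     # apply the update: param -= lr * grad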

Training with Stochastic Gradient Descent

To train our model with stochastic gradient descent, we will randomly initialize the trainable parameters $w$ and $b$ as we did for batch gradient descent above. Here we will define an empty list to store the loss for stochastic gradient descent and train the model for 20 epochs. The following is the complete code modified from the previous example:

import torch
import numpy as np
import matplotlib.pyplot as plt

X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X
Y = func + 0.4 * torch.randn(X.size())

# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)

step_size = 0.1
loss_SGD = []
n_iter = 20

for i in range(n_iter):
    # calculating true loss and storing it
    Y_pred = forward(X)
    # store the loss in the list
    loss_SGD.append(criterion(Y_pred, Y).tolist())

    for x, y in zip(X, Y):
        # making a prediction in forward pass
        y_hat = forward(x)
        # calculating the loss between original and predicted data points
        loss = criterion(y_hat, y)
        # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
        loss.backward()
        # updating the parameters after each iteration
        w.data = w.data - step_size * w.grad.data
        b.data = b.data - step_size * b.grad.data
        # zeroing gradients after each iteration
        w.grad.data.zero_()
        b.grad.data.zero_()
        # printing the values for understanding
        print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))

This prints a long list of values as follows:

0, 24.73763084411621, -5.02630615234375, -20.994739532470703
0, 455.0946960449219, -25.93259620666504, -16.7281494140625
0, 6968.82666015625, 54.207733154296875, -33.424049377441406
0, 97112.9140625, -238.72393798828125, 28.901844024658203
....
19, 8858971136.0, -1976796.625, 8770213.0
19, 271135948800.0, -1487331.875, 8874354.0
19, 3010866446336.0, -3153109.5, 8527317.0
19, 47926483091456.0, 3631328.0, 9911896.0

Plotting Graphs for Comparison

Now that we have trained our model using batch gradient descent and stochastic gradient descent, let's visualize how the loss decreases for both methods during model training. The graph for batch gradient descent looks like this:

...

plt.plot(loss_BGD, label="Batch Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()

The loss history of batch gradient descent

Similarly, here is how the graph for stochastic gradient descent looks:

plt.plot(loss_SGD, label="Stochastic Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()

Loss history of stochastic gradient descent

As you can see, the loss decreases smoothly for batch gradient descent. On the other hand, you will observe fluctuations in the graph for stochastic gradient descent. As mentioned earlier, the reason is quite simple: in batch gradient descent, the loss is updated after all the training samples are processed, while stochastic gradient descent updates the loss after every training sample in the training data.
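
To compare the two loss histories directly, one option is to draw both curves on a single figure. The sketch below assumes loss_BGD and loss_SGD are the lists filled by the training loops above, and uses a logarithmic y-axis because the two losses can differ by several orders of magnitude.

...

# Minimal sketch: both loss histories on one figure with a log-scaled y-axis
plt.plot(loss_BGD, label="Batch Gradient Descent")
plt.plot(loss_SGD, label="Stochastic Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.yscale('log')
plt.legend()
plt.show()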

Putting everything together, below is the complete code:

import torch
import numpy as np
import matplotlib.pyplot as plt

# Creating a function f(X) with a slope of -5
X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X

# Adding Gaussian noise to the function f(X) and saving it in Y
Y = func + 0.4 * torch.randn(X.size())

# Plot and visualize the data points in blue
plt.plot(X.numpy(), Y.numpy(), 'b+', label='Y')
plt.plot(X.numpy(), func.numpy(), 'r', label='func')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid('True', color='y')
plt.show()

# defining the function for forward pass for prediction
def forward(x):
    return w * x + b

# evaluating data points with Mean Square Error (MSE)
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

# Batch gradient descent
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)
step_size = 0.1
loss_BGD = []
n_iter = 20

for i in range(n_iter):
    # making predictions with forward pass
    Y_pred = forward(X)
    # calculating the loss between original and predicted data points
    loss = criterion(Y_pred, Y)
    # storing the calculated loss in a list
    loss_BGD.append(loss.item())
    # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
    loss.backward()
    # updating the parameters after each iteration
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    # zeroing gradients after each iteration
    w.grad.data.zero_()
    b.grad.data.zero_()
    # printing the values for understanding
    print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))

# Stochastic gradient descent
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)
step_size = 0.1
loss_SGD = []
n_iter = 20

for i in range(n_iter):
    # calculating true loss and storing it
    Y_pred = forward(X)
    # store the loss in the list
    loss_SGD.append(criterion(Y_pred, Y).tolist())

    for x, y in zip(X, Y):
        # making a prediction in forward pass
        y_hat = forward(x)
        # calculating the loss between original and predicted data points
        loss = criterion(y_hat, y)
        # backward pass for computing the gradients of the loss w.r.t. the learnable parameters
        loss.backward()
        # updating the parameters after each iteration
        w.data = w.data - step_size * w.grad.data
        b.data = b.data - step_size * b.grad.data
        # zeroing gradients after each iteration
        w.grad.data.zero_()
        b.grad.data.zero_()
        # printing the values for understanding
        print('{}, \t{}, \t{}, \t{}'.format(i, loss.item(), w.item(), b.item()))

# Plot graphs
plt.plot(loss_BGD, label="Batch Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()

plt.plot(loss_SGD, label="Stochastic Gradient Descent")
plt.xlabel('Epoch')
plt.ylabel('Cost/Total loss')
plt.legend()
plt.show()

Summary

In this tutorial, you learned about gradient descent, some of its variations, and how to implement them in PyTorch. Particularly, you learned about:

  • Gradient Descent algorithm and its implementation in PyTorch
  • Batch Gradient Descent and its implementation in PyTorch
  • Stochastic Gradient Descent and its implementation in PyTorch
  • How Batch Gradient Descent and Stochastic Gradient Descent are different from each other
  • How loss decreases in Batch Gradient Descent and Stochastic Gradient Descent during training



