Joining the Transformer Encoder and Decoder, and Masking

We have now arrived at a point where we have implemented and tested the Transformer encoder and decoder separately, and we may now join the two together into a complete model. We will also see how to create padding and look-ahead masks, by which we will suppress the input values that will not be considered in the encoder or decoder computations. Our end goal remains to apply the complete model to Natural Language Processing (NLP).

In this tutorial, you will discover how to implement the complete Transformer model and create padding and look-ahead masks.

After completing this tutorial, you will know:

  • How to create a padding mask for the encoder and decoder.
  • How to create a look-ahead mask for the decoder.
  • How to join the Transformer encoder and decoder into a single model.
  • How to print out a summary of the encoder and decoder layers.

Let's get started.

Joining the Transformer Encoder and Decoder, and Masking
Photo by John O'Nolan, some rights reserved.

Tutorial Overview

This tutorial is divided into four parts; they are:

  • Recap of the Transformer Architecture
  • Masking
    • Creating a Padding Mask
    • Creating a Look-Ahead Mask
  • Joining the Transformer Encoder and Decoder
  • Creating an Instance of the Transformer Model
    • Printing Out a Summary of the Encoder and Decoder Layers

Prerequisites

For this tutorial, we assume that you are already familiar with:

  • The Transformer model
  • The Transformer encoder
  • The Transformer decoder

Recap of the Transformer Architecture

Recall having seen that the Transformer architecture follows an encoder-decoder structure: the encoder, on the left-hand side, is tasked with mapping an input sequence to a sequence of continuous representations; the decoder, on the right-hand side, receives the output of the encoder together with the decoder output at the previous time step to generate an output sequence.

The Encoder-Decoder Structure of the Transformer Architecture
Taken from "Attention Is All You Need"

In generating an output sequence, the Transformer does not rely on recurrence and convolutions.

We have seen how to implement the Transformer encoder and decoder separately. In this tutorial, we will join the two into a complete Transformer model and apply padding and look-ahead masking to the input values.

Let's start by discovering how to apply masking.

Masking

Creating a Padding Mask

We have already familiarized ourselves with the importance of masking the input values before feeding them into the encoder and decoder.

As we will see when we proceed to train the Transformer model, the input sequences fed into the encoder and decoder will first be zero-padded up to a specific sequence length. The importance of having a padding mask is to make sure that these zero values are not processed along with the actual input values by either the encoder or the decoder.

Let's create the following function to generate a padding mask for both the encoder and decoder:

from tensorflow import math, cast, float32

def padding_mask(input):
    # Create mask which marks the zero padding values in the input by a 1
    mask = math.equal(input, 0)
    mask = cast(mask, float32)

    return mask

Upon receiving an input, this function will generate a tensor that marks with a value of one wherever the input contains a value of zero.

Hence, if we input the following array:

from numpy import array

input = array([1, 2, 3, 4, 0, 0, 0])
print(padding_mask(input))

Then the output of the padding_mask function would be the following:

tf.Tensor([0. 0. 0. 0. 1. 1. 1.], shape=(7,), dtype=float32)
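
As a side note (not part of the original listing), here is a minimal sketch of how such a mask is typically consumed inside scaled dot-product attention: it is scaled by a large negative constant and added to the attention scores before the softmax, so that the padded positions receive near-zero weight. The score values below are made up purely for illustration:

from tensorflow import constant, float32
from tensorflow.keras.backend import softmax

# Dummy attention scores of a single query over the 7 key positions above
scores = constant([[0.5, 0.8, 0.2, 0.1, 0.9, 0.7, 0.3]], dtype=float32)

# The padding mask computed above for the input [1, 2, 3, 4, 0, 0, 0]
mask = constant([0., 0., 0., 0., 1., 1., 1.], dtype=float32)

# Adding -1e9 at the masked positions drives their softmax weights towards zero
print(softmax(scores + -1e9 * mask))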

Creating a Look-Ahead Mask

A look-ahead mask is required in order to prevent the decoder from attending to succeeding words, such that the prediction for a particular word can only depend on known outputs for the words that come before it.

For this purpose, let's create the following function to generate a look-ahead mask for the decoder:

from tensorflow import linalg, ones

def lookahead_mask(shape):
    # Mask out future entries by marking them with a 1.0
    mask = 1 - linalg.band_part(ones((shape, shape)), -1, 0)

    return mask

We will pass to it the length of the decoder input. Let's take this length to be equal to 5, for example:

print(lookahead_mask(5))

Then the output that the lookahead_mask function returns is the following:

tf.Tensor(
[[0. 1. 1. 1. 1.]
 [0. 0. 1. 1. 1.]
 [0. 0. 0. 1. 1.]
 [0. 0. 0. 0. 1.]
 [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32)

Again, the one values mask out the entries that should not be used. In this manner, the prediction of every word depends only on those that come before it.

Joining the Transformer Encoder and Decoder

Let's start by creating the class, TransformerModel, which inherits from the Model base class in Keras:

class TransformerModel(Model):
    def __init__(self, enc_vocab_size, dec_vocab_size, enc_seq_length, dec_seq_length, h, d_k, d_v, d_model, d_ff_inner, n, rate, **kwargs):
        super(TransformerModel, self).__init__(**kwargs)

        # Set up the encoder
        self.encoder = Encoder(enc_vocab_size, enc_seq_length, h, d_k, d_v, d_model, d_ff_inner, n, rate)

        # Set up the decoder
        self.decoder = Decoder(dec_vocab_size, dec_seq_length, h, d_k, d_v, d_model, d_ff_inner, n, rate)

        # Define the final dense layer
        self.model_last_layer = Dense(dec_vocab_size)
        ...

Our first step in creating the TransformerModel class is to initialize instances of the Encoder and Decoder classes implemented earlier and to assign their outputs to the variables, encoder and decoder, respectively. If you saved these classes in separate Python scripts, do not forget to import them. I saved my code in the Python scripts encoder.py and decoder.py, so I need to import them accordingly.

We also include one final dense layer that produces the final output, as in the Transformer architecture of Vaswani et al. (2017).

Next, we shall create the class method, call(), to feed the relevant inputs into the encoder and decoder.

A padding mask is first generated to mask the encoder input, as well as the encoder output when this is fed into the second self-attention block of the decoder:

...
def call(self, encoder_input, decoder_input, training):

    # Create padding mask to mask the encoder inputs and the encoder outputs in the decoder
    enc_padding_mask = self.padding_mask(encoder_input)
...

A padding mask as well as a look-ahead mask are then generated to mask the decoder input. These are combined together through an element-wise maximum operation:

...
# Create and combine padding and look-ahead masks to be fed into the decoder
dec_in_padding_mask = self.padding_mask(decoder_input)
dec_in_lookahead_mask = self.lookahead_mask(decoder_input.shape[1])
dec_in_lookahead_mask = maximum(dec_in_padding_mask, dec_in_lookahead_mask)
...
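
To make the combination concrete, here is a small standalone sketch (not part of the class), reusing the padding_mask and lookahead_mask functions defined earlier in this tutorial. A position is suppressed if either mask flags it:

from numpy import array
from tensorflow import maximum

# A decoder input of length 5 whose last position is zero padding
dec_input = array([1, 2, 3, 4, 0])

# Element-wise maximum of the two masks: the last column stays masked in
# every row because of the padding, on top of the look-ahead masking
combined = maximum(padding_mask(dec_input), lookahead_mask(5))
print(combined)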

Next, the relevant inputs are fed into the encoder and decoder, and the Transformer model output is generated by feeding the decoder output into one final dense layer:

...
# Feed the input into the encoder
encoder_output = self.encoder(encoder_input, enc_padding_mask, training)

# Feed the encoder output into the decoder
decoder_output = self.decoder(decoder_input, encoder_output, dec_in_lookahead_mask, enc_padding_mask, training)

# Pass the decoder output through a final dense layer
model_output = self.model_last_layer(decoder_output)

return model_output

Combining all of the steps together gives us the following complete code listing:

from encoder import Encoder
from decoder import Decoder
from tensorflow import math, cast, float32, linalg, ones, maximum, newaxis
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense


class TransformerModel(Model):
    def __init__(self, enc_vocab_size, dec_vocab_size, enc_seq_length, dec_seq_length, h, d_k, d_v, d_model, d_ff_inner, n, rate, **kwargs):
        super(TransformerModel, self).__init__(**kwargs)

        # Set up the encoder
        self.encoder = Encoder(enc_vocab_size, enc_seq_length, h, d_k, d_v, d_model, d_ff_inner, n, rate)

        # Set up the decoder
        self.decoder = Decoder(dec_vocab_size, dec_seq_length, h, d_k, d_v, d_model, d_ff_inner, n, rate)

        # Define the final dense layer
        self.model_last_layer = Dense(dec_vocab_size)

    def padding_mask(self, input):
        # Create mask which marks the zero padding values in the input by a 1.0
        mask = math.equal(input, 0)
        mask = cast(mask, float32)

        # The shape of the mask should be broadcastable to the shape
        # of the attention weights that it will be masking later on
        return mask[:, newaxis, newaxis, :]

    def lookahead_mask(self, shape):
        # Mask out future entries by marking them with a 1.0
        mask = 1 - linalg.band_part(ones((shape, shape)), -1, 0)

        return mask

    def call(self, encoder_input, decoder_input, training):

        # Create padding mask to mask the encoder inputs and the encoder outputs in the decoder
        enc_padding_mask = self.padding_mask(encoder_input)

        # Create and combine padding and look-ahead masks to be fed into the decoder
        dec_in_padding_mask = self.padding_mask(decoder_input)
        dec_in_lookahead_mask = self.lookahead_mask(decoder_input.shape[1])
        dec_in_lookahead_mask = maximum(dec_in_padding_mask, dec_in_lookahead_mask)

        # Feed the input into the encoder
        encoder_output = self.encoder(encoder_input, enc_padding_mask, training)

        # Feed the encoder output into the decoder
        decoder_output = self.decoder(decoder_input, encoder_output, dec_in_lookahead_mask, enc_padding_mask, training)

        # Pass the decoder output through a final dense layer
        model_output = self.model_last_layer(decoder_output)

        return model_output

Note that we have made a small change to the output returned by the padding_mask function, such that its shape is made broadcastable to the shape of the attention weight tensor that it will be masking when we train the Transformer model.
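
To make the broadcasting concrete, here is a quick standalone check (not part of the original code) showing that a mask of shape (batch size, 1, 1, sequence length) broadcasts against attention weights of shape (batch size, number of heads, sequence length, sequence length):

from numpy import array
from tensorflow import math, cast, float32, newaxis, zeros

# Reshaped padding mask for a dummy batch of two zero-padded sequences
dummy_batch = array([[1, 2, 3, 0, 0], [7, 8, 0, 0, 0]])
mask = cast(math.equal(dummy_batch, 0), float32)[:, newaxis, newaxis, :]
print(mask.shape)  # (2, 1, 1, 5)

# Dummy attention weights: (batch size, heads, queries, keys)
weights = zeros((2, 8, 5, 5))
print((weights + -1e9 * mask).shape)  # broadcasts to (2, 8, 5, 5)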

Creating an Instance of the Transformer Model

We will be working with the parameter values specified in the paper, Attention Is All You Need, by Vaswani et al. (2017):

h = 8  # Number of self-attention heads
d_k = 64  # Dimensionality of the linearly projected queries and keys
d_v = 64  # Dimensionality of the linearly projected values
d_ff = 2048  # Dimensionality of the inner fully connected layer
d_model = 512  # Dimensionality of the model sub-layers' outputs
n = 6  # Number of layers in the encoder stack

dropout_rate = 0.1  # Frequency of dropping the input units in the dropout layers
...

As for the input-related parameters, we will work with dummy values for the time being, until we arrive at the stage of training the complete Transformer model, at which point we will use actual sentences:

...
enc_vocab_size = 20  # Vocabulary size for the encoder
dec_vocab_size = 20  # Vocabulary size for the decoder

enc_seq_length = 5  # Maximum length of the input sequence
dec_seq_length = 5  # Maximum length of the target sequence
...

We can proceed to create an instance of the TransformerModel class as follows:

from model import TransformerModel

# Create model
training_model = TransformerModel(enc_vocab_size, dec_vocab_size, enc_seq_length, dec_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate)

The complete code listing is as follows:

enc_vocab_size = 20  # Vocabulary size for the encoder
dec_vocab_size = 20  # Vocabulary size for the decoder

enc_seq_length = 5  # Maximum length of the input sequence
dec_seq_length = 5  # Maximum length of the target sequence

h = 8  # Number of self-attention heads
d_k = 64  # Dimensionality of the linearly projected queries and keys
d_v = 64  # Dimensionality of the linearly projected values
d_ff = 2048  # Dimensionality of the inner fully connected layer
d_model = 512  # Dimensionality of the model sub-layers' outputs
n = 6  # Number of layers in the encoder stack

dropout_rate = 0.1  # Frequency of dropping the input units in the dropout layers

# Create model
training_model = TransformerModel(enc_vocab_size, dec_vocab_size, enc_seq_length, dec_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate)
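
As an optional sanity check (an aside, not in the original post), we can pass dummy batches of token IDs through the newly created model and confirm that the output takes the shape (batch size, dec_seq_length, dec_vocab_size). This assumes the Encoder and Decoder classes from the earlier tutorials accept integer token IDs as their first argument:

from numpy import random

# Dummy batches of token IDs for the encoder and decoder inputs
enc_tokens = random.randint(1, enc_vocab_size, (2, enc_seq_length))
dec_tokens = random.randint(1, dec_vocab_size, (2, dec_seq_length))

# A single forward pass in inference mode
output = training_model(enc_tokens, dec_tokens, training=False)
print(output.shape)  # expected: (2, 5, 20)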

Printing Out a Summary of the Encoder and Decoder Layers

We can also print out a summary of the encoder and decoder blocks of the Transformer model. Printing them out separately allows us to see the details of their individual sub-layers. In order to do so, we will add the following line of code to the __init__() method of both the EncoderLayer and DecoderLayer classes:

self.build(input_shape=[None, sequence_length, d_model])

Then we need to add the following method to the EncoderLayer class:

def build_graph(self):
    input_layer = Input(shape=(self.sequence_length, self.d_model))
    return Model(inputs=[input_layer], outputs=self.call(input_layer, None, True))

And the following method to the DecoderLayer class:

def build_graph(self):
    input_layer = Input(shape=(self.sequence_length, self.d_model))
    return Model(inputs=[input_layer], outputs=self.call(input_layer, input_layer, None, None, True))

This results in the EncoderLayer class being modified as follows (the three dots under the call() method mean that its body remains the same as the one implemented earlier):

from tensorflow.keras.layers import Input
from tensorflow.keras import Model

class EncoderLayer(Layer):
    def __init__(self, sequence_length, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super(EncoderLayer, self).__init__(**kwargs)
        self.build(input_shape=[None, sequence_length, d_model])
        self.d_model = d_model
        self.sequence_length = sequence_length
        self.multihead_attention = MultiHeadAttention(h, d_k, d_v, d_model)
        self.dropout1 = Dropout(rate)
        self.add_norm1 = AddNormalization()
        self.feed_forward = FeedForward(d_ff, d_model)
        self.dropout2 = Dropout(rate)
        self.add_norm2 = AddNormalization()

    def build_graph(self):
        input_layer = Input(shape=(self.sequence_length, self.d_model))
        return Model(inputs=[input_layer], outputs=self.call(input_layer, None, True))

    def call(self, x, padding_mask, training):
        ...

Similar modifications can be made to the DecoderLayer class too, as sketched below.
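
For completeness, here is a sketch of how the DecoderLayer class would look after the same treatment. The sub-layer attributes and the call() method stay as implemented in the decoder tutorial, so they are elided here:

class DecoderLayer(Layer):
    def __init__(self, sequence_length, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super(DecoderLayer, self).__init__(**kwargs)
        self.build(input_shape=[None, sequence_length, d_model])
        self.d_model = d_model
        self.sequence_length = sequence_length
        ...

    def build_graph(self):
        input_layer = Input(shape=(self.sequence_length, self.d_model))
        return Model(inputs=[input_layer], outputs=self.call(input_layer, input_layer, None, None, True))

    def call(self, x, encoder_output, lookahead_mask, padding_mask, training):
        ...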

Once we have the necessary modifications in place, we can proceed to create instances of the EncoderLayer and DecoderLayer classes and print out their summaries as follows:

from encoder import EncoderLayer
from decoder import DecoderLayer

encoder = EncoderLayer(enc_seq_length, h, d_k, d_v, d_model, d_ff, dropout_rate)
encoder.build_graph().summary()

decoder = DecoderLayer(dec_seq_length, h, d_k, d_v, d_model, d_ff, dropout_rate)
decoder.build_graph().summary()

The resulting summary for the encoder is the following:

Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_1 (InputLayer)           [(None, 5, 512)]     0           []                               
                                                                                                  
 multi_head_attention_18 (Multi  (None, 5, 512)      131776      ['input_1[0][0]',                
 HeadAttention)                                                   'input_1[0][0]',                
                                                                  'input_1[0][0]']                
                                                                                                  
 dropout_32 (Dropout)           (None, 5, 512)       0           ['multi_head_attention_18[0][0]']
                                                                                                  
 add_normalization_30 (AddNorma  (None, 5, 512)      1024        ['input_1[0][0]',                
 lization)                                                        'dropout_32[0][0]']             
                                                                                                  
 feed_forward_12 (FeedForward)  (None, 5, 512)       2099712     ['add_normalization_30[0][0]']   
                                                                                                  
 dropout_33 (Dropout)           (None, 5, 512)       0           ['feed_forward_12[0][0]']        
                                                                                                  
 add_normalization_31 (AddNorma  (None, 5, 512)      1024        ['add_normalization_30[0][0]',   
 lization)                                                        'dropout_33[0][0]']             
                                                                                                  
==================================================================================================
Total params: 2,233,536
Trainable params: 2,233,536
Non-trainable params: 0
__________________________________________________________________________________________________
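
As a quick aside (not part of the original post), the parameter counts above can be reproduced by hand, assuming the multi-head attention block uses biased Dense projections of sizes d_k, d_k, d_v and d_model (as in the earlier multi-head attention tutorial), and that each AddNormalization layer holds a scale and an offset vector of size d_model:

d_model, d_k, d_v, d_ff = 512, 64, 64, 2048

# Query, key, value and output projections of the multi-head attention block
mha = 2 * (d_model * d_k + d_k) + (d_model * d_v + d_v) + (d_v * d_model + d_model)
print(mha)  # 131776

# The two dense layers of the feed-forward block
ffn = (d_model * d_ff + d_ff) + (d_ff * d_model + d_model)
print(ffn)  # 2099712

# Two add-and-normalize blocks, each with scale and offset vectors of size d_model
print(mha + ffn + 2 * (2 * d_model))  # 2233536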

The resulting summary for the decoder is the following:

Model: "model_1"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_2 (InputLayer)           [(None, 5, 512)]     0           []                               
                                                                                                  
 multi_head_attention_19 (Multi  (None, 5, 512)      131776      ['input_2[0][0]',                
 HeadAttention)                                                   'input_2[0][0]',                
                                                                  'input_2[0][0]']                
                                                                                                  
 dropout_34 (Dropout)           (None, 5, 512)       0           ['multi_head_attention_19[0][0]']
                                                                                                  
 add_normalization_32 (AddNorma  (None, 5, 512)      1024        ['input_2[0][0]',                
 lization)                                                        'dropout_34[0][0]',             
                                                                  'add_normalization_32[0][0]',   
                                                                  'dropout_35[0][0]']             
                                                                                                  
 multi_head_attention_20 (Multi  (None, 5, 512)      131776      ['add_normalization_32[0][0]',   
 HeadAttention)                                                   'input_2[0][0]',                
                                                                  'input_2[0][0]']                
                                                                                                  
 dropout_35 (Dropout)           (None, 5, 512)       0           ['multi_head_attention_20[0][0]']
                                                                                                  
 feed_forward_13 (FeedForward)  (None, 5, 512)       2099712     ['add_normalization_32[1][0]']   
                                                                                                  
 dropout_36 (Dropout)           (None, 5, 512)       0           ['feed_forward_13[0][0]']        
                                                                                                  
 add_normalization_34 (AddNorma  (None, 5, 512)      1024        ['add_normalization_32[1][0]',   
 lization)                                                        'dropout_36[0][0]']             
                                                                                                  
==================================================================================================
Total params: 2,365,312
Trainable params: 2,365,312
Non-trainable params: 0
__________________________________________________________________________________________________

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Advanced Deep Learning with Python, 2019.
  • Transformers for Natural Language Processing, 2021.

Papers

  • Attention Is All You Need, 2017.

Summary

In this tutorial, you discovered how to implement the complete Transformer model and create padding and look-ahead masks.

Specifically, you learned:

  • How to create a padding mask for the encoder and decoder.
  • How to create a look-ahead mask for the decoder.
  • How to join the Transformer encoder and decoder into a single model.
  • How to print out a summary of the encoder and decoder layers.

Do you have any questions?
Ask your questions in the comments below, and I will do my best to answer.
