Detect fraudulent transactions using machine learning with Amazon SageMaker

Companies can lose billions of dollars each year due to malicious users and fraudulent transactions. As more and more business operations move online, fraud and abuse in online systems are also on the rise. To combat online fraud, many businesses have been using rule-based fraud detection systems.

However, traditional fraud detection systems rely on a set of rules and filters hand-crafted by human specialists. The filters can often be brittle, and the rules may not capture the full spectrum of fraudulent signals. Furthermore, while fraudulent behaviors are ever-evolving, the static nature of predefined rules and filters makes it difficult to maintain and improve traditional fraud detection systems effectively.

In this post, we show you how to build a dynamic, self-improving, and maintainable credit card fraud detection system with machine learning (ML) using Amazon SageMaker.

Alternatively, if you're looking for a fully managed service to build customized fraud detection models without writing code, we recommend checking out Amazon Fraud Detector. Amazon Fraud Detector enables customers with no ML experience to automate building fraud detection models customized for their data, leveraging more than 20 years of fraud detection expertise from AWS and Amazon.com.

Solution overview

This solution builds the core of a credit card fraud detection system using SageMaker. We start by training an unsupervised anomaly detection model using the Random Cut Forest (RCF) algorithm. Then we train two supervised classification models using the XGBoost algorithm, one as a baseline model and the other for making predictions, using different strategies to address the extreme class imbalance in the data. Finally, we train an optimal XGBoost model with hyperparameter optimization (HPO) to further improve model performance.

For the sample dataset, we use the public, anonymized credit card transactions dataset that was originally released as part of a research collaboration between Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles). In the walkthrough, we also discuss how you can customize the solution to use your own data.

The outputs of the solution are as follows:

  • An unsupervised SageMaker RCF model. The model outputs an anomaly score for each transaction. A low score indicates that the transaction is considered normal (non-fraudulent); a high score indicates that it is fraudulent. The definitions of low and high depend on the application, but common practice suggests that scores beyond three standard deviations from the mean score are considered anomalous.
  • A supervised SageMaker XGBoost model trained using its built-in weighting schema to address the highly unbalanced data issue.
  • A supervised SageMaker XGBoost model trained using the Synthetic Minority Over-sampling Technique (SMOTE).
  • A trained SageMaker XGBoost model with HPO.
  • Predictions of the probability of each transaction being fraudulent. If the estimated probability of a transaction is over a threshold, it is classified as fraudulent.

To demonstrate how you can use this solution in your existing business infrastructure, we also include an example of making REST API calls to the deployed model endpoint, using AWS Lambda to trigger both the RCF and XGBoost models.

The following diagram illustrates the solution architecture.

Architecture diagram

Prerequisites

To try out the solution in your own account, make sure that you have an AWS account and a SageMaker Studio domain set up.

When the Studio instance is ready, you can launch Studio and access JumpStart. JumpStart features aren't available in SageMaker notebook instances, and you can't access them through SageMaker APIs or the AWS Command Line Interface (AWS CLI).

Launch the solution

To launch the solution, complete the following steps:

  1. Open JumpStart by using the JumpStart launcher in the Get Started section or by choosing the JumpStart icon in the left sidebar.
  2. Under Solutions, choose Detect Malicious Users and Transactions to open the solution in another Studio tab.
    Find the solution
  3. On the solution tab, choose Launch to launch the solution.
    Launch the solution
    The solution resources are provisioned and another tab opens showing the deployment progress. When the deployment is finished, an Open Notebook button appears.
  4. Choose Open Notebook to open the solution notebook in Studio.
    Open notebook

Examine and process the data

The default dataset contains only numerical features, because the original features were transformed using Principal Component Analysis (PCA) to protect user privacy. As a result, the dataset contains 28 PCA components, V1–V28, and two features that haven't been transformed, Amount and Time. Amount refers to the transaction amount, and Time is the seconds elapsed between any transaction in the data and the first transaction.

The Class column corresponds to whether or not a transaction is fraudulent.

Sample data

We can see that the vast majority of transactions are non-fraudulent: out of the total 284,807 examples, only 492 (0.173%) are fraudulent. This is a case of extreme class imbalance, which is common in fraud detection scenarios.

Data class imbalance

We then prepare our data for loading and training. We split the data into a train set and a test set, using the former to train and the latter to evaluate the performance of our model. It's important to split the data before applying any techniques to alleviate the class imbalance. Otherwise, we might leak information from the test set into the train set and hurt the model's performance.
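
A minimal sketch of such a split, assuming the data has been loaded into a pandas DataFrame df with the label in the Class column (the test_size value is an illustrative choice, not the solution's exact setting):

    # Split before any resampling; stratify=y keeps the 0.173% fraud
    # ratio identical in the train and test sets.
    from sklearn.model_selection import train_test_split

    X = df.drop(columns=["Class"])
    y = df["Class"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42
    )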

If you want to bring in your own training data, make sure that it's tabular data in CSV format, upload the data to an Amazon Simple Storage Service (Amazon S3) bucket, and edit the S3 object path in the notebook code.

Data path in S3

If your data includes categorical columns with non-numerical values, you need to one-hot encode these values (using, for example, sklearn's OneHotEncoder) because the XGBoost algorithm only supports numerical data.
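
For example, a short sketch of that encoding step; the merchant_category column is hypothetical (the default dataset needs no encoding):

    from sklearn.preprocessing import OneHotEncoder

    # Encode a hypothetical categorical column into numeric indicator columns.
    # handle_unknown="ignore" avoids errors on unseen categories at test time.
    encoder = OneHotEncoder(handle_unknown="ignore")
    encoded = encoder.fit_transform(df[["merchant_category"]]).toarray()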

Train an unsupervised Random Cut Forest model

In a fraud detection scenario, we commonly have very few labeled examples, and labeling fraud can take a lot of time and effort. Therefore, we also want to extract information from the unlabeled data at hand. We do this using an anomaly detection algorithm, taking advantage of the high data imbalance that's common in fraud detection datasets.

Anomaly detection is a form of unsupervised learning where we try to identify anomalous examples based solely on their feature characteristics. Random Cut Forest is a state-of-the-art anomaly detection algorithm that's both accurate and scalable. With each data example, RCF associates an anomaly score.

We use the SageMaker built-in RCF algorithm to train an anomaly detection model on our training dataset, then make predictions on our test dataset.
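
A minimal sketch of that training and deployment with the SageMaker Python SDK, assuming train_features is a float32 NumPy array of the feature columns; the instance types and tree settings are assumptions, not the solution's exact configuration:

    import sagemaker
    from sagemaker import RandomCutForest

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()

    rcf = RandomCutForest(
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        num_samples_per_tree=512,
        num_trees=50,
        sagemaker_session=session,
    )
    # record_set converts the NumPy array into the RecordIO-protobuf
    # format expected by the built-in algorithm.
    rcf.fit(rcf.record_set(train_features))
    rcf_predictor = rcf.deploy(initial_instance_count=1, instance_type="ml.m5.large")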

First, we examine and plot the predicted anomaly scores for positive (fraudulent) and negative (non-fraudulent) examples separately, because the numbers of positive and negative examples differ significantly. We expect the positive (fraudulent) examples to have relatively high anomaly scores, and the negative (non-fraudulent) ones to have low anomaly scores. From the histograms, we can see the following patterns:

  • Almost half of the positive examples (left histogram) have anomaly scores higher than 0.9, whereas most of the negative examples (right histogram) have anomaly scores lower than 0.85.
  • The unsupervised RCF algorithm is limited in how accurately it can identify fraudulent and non-fraudulent examples, because no label information is used. We address this issue by collecting label information and using a supervised learning algorithm in later steps.

Predicted anomaly scores

Then, we assume a more realistic scenario where we classify each test example as either positive (fraudulent) or negative (non-fraudulent) based on its anomaly score. We plot the score histogram for all test examples as follows, choosing a cutoff score of 1.0 (based on the pattern shown in the histogram) for classification. Specifically, if an example's anomaly score is less than or equal to 1.0, it's classified as negative (non-fraudulent). Otherwise, the example is classified as positive (fraudulent).

Histogram of scores for test samples

Finally, we compare the classification result with the ground truth labels and compute the evaluation metrics. Because our dataset is imbalanced, we use the evaluation metrics balanced accuracy, Cohen's Kappa score, F1 score, and ROC AUC, because they take into account the frequency of each class in the data. For all of these metrics, a larger value indicates better predictive performance. Note that in this step we can't compute the ROC AUC yet, because the RCF model doesn't produce estimated probabilities for the positive and negative classes. We compute this metric in later steps using supervised learning algorithms.

Metric              RCF
Balanced accuracy   0.560023
Cohen's Kappa       0.003917
F1                  0.007082
ROC AUC             –
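
These metrics can be computed with scikit-learn; a sketch assuming scores holds the RCF anomaly scores on the test set and y_test the ground truth labels:

    from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score, f1_score

    # Apply the cutoff chosen from the histogram: scores above 1.0 are fraud.
    y_pred = (scores > 1.0).astype(int)

    print("Balanced accuracy:", balanced_accuracy_score(y_test, y_pred))
    print("Cohen's Kappa:    ", cohen_kappa_score(y_test, y_pred))
    print("F1:               ", f1_score(y_test, y_pred))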

From this step, we can see that the unsupervised model can already achieve some separation between the classes, with higher anomaly scores correlated with fraudulent examples.

Train an XGBoost model with the built-in weighting schema

Once we've gathered an adequate amount of labeled training data, we can use a supervised learning algorithm to discover relationships between the features and the classes. We choose the XGBoost algorithm because it has a proven track record, is highly scalable, and can deal with missing data. We need to handle the data imbalance this time, otherwise the majority class (the non-fraudulent, or negative, examples) will dominate the learning.

We train and deploy our first supervised model using the SageMaker built-in XGBoost algorithm container. This is our baseline model. To handle the data imbalance, we use the hyperparameter scale_pos_weight, which scales the weights of the positive class examples against the negative class examples. Because the dataset is highly skewed, we set this hyperparameter to a conservative value: sqrt(num_nonfraud/num_fraud).

We train and deploy the model as follows (see the sketch after this list):

  1. Retrieve the SageMaker XGBoost container URI.
  2. Set the hyperparameters we want to use for model training, including the one we mentioned that handles the data imbalance, scale_pos_weight.
  3. Create an XGBoost estimator and train it with our train dataset.
  4. Deploy the trained XGBoost model to a SageMaker managed endpoint.
  5. Evaluate this baseline model with our test dataset.
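
A minimal sketch of those steps with the SageMaker Python SDK; the container version, instance types, and num_round are assumptions, and num_fraud, num_nonfraud, and train_s3_uri stand in for values computed earlier in the notebook:

    import sagemaker
    from sagemaker.estimator import Estimator
    from sagemaker.inputs import TrainingInput

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()

    # 1. Retrieve the built-in XGBoost container URI (version is an assumption).
    container = sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.5-1"
    )

    # 2. Hyperparameters, including the conservative class weighting.
    hyperparameters = {
        "objective": "binary:logistic",
        "eval_metric": "auc",
        "scale_pos_weight": (num_nonfraud / num_fraud) ** 0.5,
        "num_round": 100,
    }

    # 3. Create the estimator and train on the train dataset in S3.
    xgb = Estimator(
        image_uri=container,
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        hyperparameters=hyperparameters,
        sagemaker_session=session,
    )
    xgb.fit({"train": TrainingInput(train_s3_uri, content_type="text/csv")})

    # 4. Deploy to a managed endpoint; evaluation (step 5) queries this endpoint.
    xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type="ml.m5.large")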

Then we evaluate our model with the same four metrics as in the last step. This time we can also calculate the ROC AUC metric.

Metric              RCF        XGBoost
Balanced accuracy   0.560023   0.847685
Cohen's Kappa       0.003917   0.743801
F1                  0.007082   0.744186
ROC AUC             –          0.983515

We can see that the supervised learning method, XGBoost with the weighting schema (using the hyperparameter scale_pos_weight), achieves significantly better performance than the unsupervised learning method, RCF. There is still room to improve the performance, however. In particular, a Cohen's Kappa score above 0.8 is generally considered very favorable.

Apart from single-value metrics, it's also useful to look at metrics that indicate performance per class. For example, the confusion matrix and the per-class precision, recall, and F1-score can provide more information about our model's performance.

XGBoost model

            precision   recall   f1-score   support
non-fraud   1.00        1.00     1.00       28435
fraud       0.80        0.70     0.74       46

Keep sending test traffic to the endpoint via Lambda

To demonstrate how to use our models in a production system, we built a REST API with Amazon API Gateway and a Lambda function. Client applications send HTTP inference requests to the REST API, which triggers the Lambda function, which in turn invokes the RCF and XGBoost model endpoints and returns the predictions from the models. You can read the Lambda function code and monitor the invocations on the Lambda console.
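
A hedged sketch of what such a Lambda handler could look like with boto3; the endpoint names and the event shape are assumptions for illustration, not the solution's actual code:

    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")

    # Hypothetical endpoint names; substitute the ones created by the solution.
    RCF_ENDPOINT = "fraud-detection-rcf"
    XGB_ENDPOINT = "fraud-detection-xgb"

    def lambda_handler(event, context):
        # Assumes the request body is a CSV row of transaction features.
        payload = event["body"]
        rcf_resp = runtime.invoke_endpoint(
            EndpointName=RCF_ENDPOINT, ContentType="text/csv", Body=payload
        )
        xgb_resp = runtime.invoke_endpoint(
            EndpointName=XGB_ENDPOINT, ContentType="text/csv", Body=payload
        )
        return {
            "statusCode": 200,
            "body": json.dumps({
                "anomaly_score": rcf_resp["Body"].read().decode(),
                "fraud_probability": xgb_resp["Body"].read().decode(),
            }),
        }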

We also created a Python script that makes HTTP inference requests to the REST API, with our test data as input data. To see how this was done, check the generate_endpoint_traffic.py file in the solution's source code. The prediction outputs are logged to an S3 bucket through an Amazon Kinesis Data Firehose delivery stream. You can find the destination S3 bucket name on the Kinesis Data Firehose console, and check the prediction results in the S3 bucket.

Train an XGBoost model with the over-sampling technique SMOTE

Now that we have a baseline model using XGBoost, we can see if sampling techniques that are designed specifically for imbalanced problems can improve the performance of the model. We use the Synthetic Minority Over-sampling Technique (SMOTE), which oversamples the minority class by interpolating new data points between existing ones.

The steps are as follows (a sketch of the resampling step appears after the list):

  1. Use SMOTE to oversample the minority class (the fraudulent class) of our train dataset. SMOTE raises the minority class proportion from about 0.17% to 50%. Note that this is a case of extreme oversampling of the minority class. An alternative would be to use a smaller resampling ratio, such as having one minority class sample for every sqrt(non_fraud/fraud) majority samples, or using more advanced resampling techniques. For more over-sampling options, refer to Compare over-sampling samplers.
  2. Define the hyperparameters for training the second XGBoost model so that scale_pos_weight is removed and the other hyperparameters remain the same as when training the baseline XGBoost model. We don't need to handle the data imbalance with this hyperparameter anymore, because we've already done that with SMOTE.
  3. Train the second XGBoost model with the new hyperparameters on the SMOTE-processed train dataset.
  4. Deploy the new XGBoost model to a SageMaker managed endpoint.
  5. Evaluate the new model with the test dataset.
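
A minimal sketch of step 1 using the imbalanced-learn library, assuming X_train and y_train come from the earlier split:

    from imblearn.over_sampling import SMOTE

    # The default sampling_strategy resamples the minority class until the two
    # classes are balanced (about 50/50), i.e. the extreme case noted above.
    smote = SMOTE(random_state=42)
    X_train_smote, y_train_smote = smote.fit_resample(X_train, y_train)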

When evaluating the new model, we can see that with SMOTE, XGBoost achieves better performance on balanced accuracy, but not on the Cohen's Kappa and F1 scores. The reason is that SMOTE has oversampled the fraud class so much that it has increased its overlap in feature space with the non-fraud cases. Because Cohen's Kappa gives more weight to false positives than balanced accuracy does, the metric drops significantly, as do the precision and F1 score for fraud cases.

Metric              RCF        XGBoost    XGBoost with SMOTE
Balanced accuracy   0.560023   0.847685   0.912657
Cohen's Kappa       0.003917   0.743801   0.716463
F1                  0.007082   0.744186   0.716981
ROC AUC             –          0.983515   0.967497

However, we can bring back the balance between metrics by adjusting the classification threshold. So far, we've been using 0.5 as the threshold to label whether or not a data point is fraudulent. After experimenting with different thresholds from 0.1 to 0.9, we can see that Cohen's Kappa keeps increasing along with the threshold, without a significant loss in balanced accuracy.
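
A sketch of such a threshold sweep, assuming y_prob holds the SMOTE model's predicted fraud probabilities on the test set:

    import numpy as np
    from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score

    # Re-label predictions at each threshold and track the two metrics.
    for threshold in np.arange(0.1, 1.0, 0.1):
        y_pred = (y_prob > threshold).astype(int)
        print(f"threshold={threshold:.1f}  "
              f"balanced accuracy={balanced_accuracy_score(y_test, y_pred):.4f}  "
              f"Cohen's Kappa={cohen_kappa_score(y_test, y_pred):.4f}")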

Experiment different thresholds to bring back the balance between metrics

This gives a useful calibration knob for our model. We can use a low threshold if not missing any fraudulent cases (false negatives) is our priority, or we can increase the threshold to minimize the number of false positives.

Train an optimal XGBoost model with HPO

In this step, we demonstrate how to improve model performance by training our third XGBoost model with hyperparameter optimization. When building complex ML systems, manually exploring all possible combinations of hyperparameter values is impractical. The HPO feature in SageMaker can accelerate your productivity by trying many variants of a model on your behalf. It automatically looks for the best model by focusing on the most promising combinations of hyperparameter values within the ranges that you specify.

The HPO process needs a validation dataset, so we first further split our training data into training and validation datasets using stratified sampling. To address the data imbalance problem, we use XGBoost's weighting schema again, setting the scale_pos_weight hyperparameter to sqrt(num_nonfraud/num_fraud).

We create an XGBoost estimator using the SageMaker built-in XGBoost algorithm container, and specify the objective evaluation metric and the hyperparameter ranges within which we'd like to experiment. With these, we then create a HyperparameterTuner and kick off the HPO tuning job, which trains multiple models in parallel, looking for optimal hyperparameter combinations.
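
A minimal sketch of the tuner setup; the specific ranges, objective metric, and job counts are illustrative assumptions, and xgb, train_input, and validation_input are assumed to be an estimator and TrainingInput channels prepared as in the baseline step:

    from sagemaker.tuner import (
        HyperparameterTuner, ContinuousParameter, IntegerParameter
    )

    # Ranges to explore; these are example values, not the solution's exact ones.
    hyperparameter_ranges = {
        "eta": ContinuousParameter(0.1, 0.5),
        "max_depth": IntegerParameter(3, 10),
        "min_child_weight": ContinuousParameter(1, 10),
        "subsample": ContinuousParameter(0.5, 1.0),
    }

    tuner = HyperparameterTuner(
        estimator=xgb,
        objective_metric_name="validation:auc",  # built-in SageMaker XGBoost metric
        hyperparameter_ranges=hyperparameter_ranges,
        max_jobs=10,
        max_parallel_jobs=2,
    )
    tuner.fit({"train": train_input, "validation": validation_input})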

When the tuning job is complete, we can see its analytics report and inspect each model's hyperparameters, training job information, and performance against the objective evaluation metric.

List of each model

Then we deploy the best model and evaluate it with our test dataset.
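
With the SageMaker SDK, deploying the model from the tuner's best training job can be as simple as the following (the instance type is an assumption):

    best_predictor = tuner.deploy(initial_instance_count=1, instance_type="ml.m5.large")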

Evaluate and compare all model performance on the same test data

Now we have the evaluation results from all four models: RCF, XGBoost baseline, XGBoost with SMOTE, and XGBoost with HPO. Let's compare their performance.

Metric              RCF        XGBoost    XGBoost with SMOTE   XGBoost with HPO
Balanced accuracy   0.560023   0.847685   0.912657             0.902156
Cohen's Kappa       0.003917   0.743801   0.716463             0.880778
F1                  0.007082   0.744186   0.716981             0.880952
ROC AUC             –          0.983515   0.967497             0.981564

We can see that XGBoost with HPO achieves even better performance than XGBoost with the SMOTE method. In particular, the Cohen's Kappa and F1 scores are over 0.8, indicating an optimal model performance.

Clean up

If you’re completed with this answer, just be sure you delete all undesirable AWS sources to keep away from incurring unintended prices. Within the Delete answer part in your answer tab, select Delete all sources to delete sources robotically created when launching this answer.

Clean up by deleting the solution

Alternatively, you can use AWS CloudFormation to delete all standard resources automatically created by the solution and notebook. To use this approach, on the AWS CloudFormation console, find the CloudFormation stack whose description contains fraud-detection-using-machine-learning, and delete it. This is a parent stack, and deleting it will automatically delete the nested stacks.

Clean up through CloudFormation

With either approach, you still need to manually delete any extra resources that you may have created in this notebook. Some examples include extra S3 buckets (in addition to the solution's default bucket), extra SageMaker endpoints (using a custom name), and extra Amazon Elastic Container Registry (Amazon ECR) repositories.

Conclusion

In this post, we showed you how to build the core of a dynamic, self-improving, and maintainable credit card fraud detection system using ML with SageMaker. We built, trained, and deployed an unsupervised RCF anomaly detection model, a supervised XGBoost model as the baseline, another supervised XGBoost model with SMOTE to address the data imbalance problem, and a final XGBoost model optimized with HPO. We discussed how to handle data imbalance and how to use your own data in the solution. We also included an example REST API implementation with API Gateway and Lambda to demonstrate how to use the system in your existing business infrastructure.

To try it out yourself, open SageMaker Studio and launch the JumpStart solution. To learn more about the solution, check out its GitHub repository.


About the Authors

Xiaoli Shen is a Solutions Architect and Machine Learning Technical Field Community (TFC) member at Amazon Web Services. She's focused on helping customers architect on the cloud and leverage AWS services to derive business value. Prior to joining AWS, she was a tech lead and senior full-stack engineer building data-intensive distributed systems on the cloud.

Dr. Xin Huang is an Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the areas of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers at ACL, ICDM, and KDD conferences, and in the Royal Statistical Society: Series A journal.


Vedant Jain is a Sr. AI/ML Specialist Solutions Architect, helping customers derive value out of the machine learning ecosystem at AWS. Prior to joining AWS, Vedant held ML/Data Science Specialist positions at various companies such as Databricks, Hortonworks (now Cloudera) & JP Morgan Chase. Outside of his work, Vedant is passionate about making music, using science to lead a meaningful life & exploring delicious vegetarian cuisine from around the world.


