The benefit of using SAS PROC KPCA is that you can preprocess your data in order to classify groups that have nonlinear classification boundaries. The figure on the left shows two groups of data points with a nonlinear classification boundary. It is impossible to draw a line that separates these two groups. However, the figure on the right illustrates that when we use KPCA to project the points into a higher dimension, the points can be separated linearly.
*(Figure: left, two groups of points with a nonlinear classification boundary; right, the same points projected into a higher dimension by KPCA, where the groups are linearly separable.)*
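KPCA performs this projection implicitly through a kernel function rather than by computing high-dimensional coordinates directly. As a point of reference, the radial basis function (RBF) kernel used later in this article has the textbook form

$$
k(\mathbf{x}, \mathbf{y}) \;=\; \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{y} \rVert^{2}}{2\sigma^{2}}\right),
$$

where $\sigma$ is the bandwidth. This is the standard definition, not a quote from the SAS documentation; the exact parameterization that PROC KPCA uses is described in the SAS Viya reference cited at the end of this article. The bandwidth is the parameter discussed next.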
The ability to separate groups with nonlinear classification boundaries comes with a cost. To capture the local geometry of the nonlinear boundary, we must specify "how closely to look" at the geometry of the original space. In KPCA, a bandwidth parameter defines "how closely to look" mathematically. The cost is determining an appropriate bandwidth so that the solution separates the groups well in higher dimensions. In practice, this is done either by brute-force trial and error or by cross-validation: trying a range of bandwidths and examining the classification error of a downstream machine learning classifier (logistic regression, discriminant analysis, decision tree, neural network, and so on). The objective is to identify the bandwidth that optimizes some measure of classification performance (misclassification rate, false positive rate, false negative rate, area under the ROC curve, and so on). The disadvantages of the cross-validation approach are that it might not be possible to identify a valid range of bandwidth values to try, and that it can be computationally burdensome to run the classifiers for each of a large number of bandwidth values. SAS® PROC KPCA has a novel method of avoiding these disadvantages.
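To make that cost concrete, a brute-force search might look something like the following sketch. It assumes that the KERNEL statement accepts a fixed numeric value through BW= (as suggested by the contrast with BW=RANDOMCMSE below), and the candidate bandwidth list is hypothetical; the data set and variable names mirror the example later in this article.

```sas
/* A sketch of brute-force bandwidth selection, for contrast with CMSE.    */
/* Assumptions (not from the article): BW= accepts a fixed numeric value,  */
/* and the candidate list below is hypothetical.                           */
%macro kpca_bw_search(bwlist=0.1 0.5 1 2 5 10);
   %do i = 1 %to %sysfunc(countw(&bwlist));
      %let bw = %scan(&bwlist, &i);

      /* Project the data by using one candidate bandwidth */
      proc kpca data=casuser.two_torus_full method=approximate;
         input x y z;
         lrapproximation clusmethod=kmpp maxclus=500;
         kernel RBF / BW=&bw;
         output out=casuser.scored_bw&i copyvar=group npc=3;
      run;

      /* Fit a downstream classifier on the projected data; compare the */
      /* reported error-count estimates across bandwidths and keep the  */
      /* bandwidth with the smallest error                              */
      proc discrim data=casuser.scored_bw&i method=normal pool=yes short;
         class group;
         priors proportional;
      run;
   %end;
%mend kpca_bw_search;

%kpca_bw_search;
```

Each pass through the loop repeats the full KPCA projection and classifier fit, which is exactly the computational burden that the automated approach described next avoids.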
SAS® PROC KPCA implements the criterion of maximum sum of eigenvalues (CMSE) to address the bandwidth selection problem. SAS iterates over a range of bandwidth values for a subset of c points (centroids) chosen by k-means. It then applies the Nyström method to approximate the SVD solution efficiently. For each bandwidth value, SAS sums the eigenvalues and stores the result. SAS then finds the maximum sum and chooses the bandwidth associated with it. Because the sum of eigenvalues is maximized, the chosen bandwidth corresponds to the solution that explains the largest amount of variation in the original data. Since the approximate Nyström method is applied to a subset chosen by k-means, the bandwidth can be identified efficiently in an automated manner. The time complexity is reduced from O(n³) to O(cn²) by using the Nyström method, where n is the number of observations in the input data set and c is the number of centroids identified by k-means.
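Written compactly, and with the caveat that this is a sketch of the idea rather than the exact objective given in Shen et al. (2021), CMSE picks the bandwidth whose Nyström-approximated kernel eigenvalues have the largest total:

$$
\sigma^{*} \;=\; \underset{\sigma}{\arg\max}\; \sum_{i=1}^{c} \hat{\lambda}_{i}(\sigma),
$$

where $\hat{\lambda}_{i}(\sigma)$ denotes the eigenvalues obtained from the Nyström approximation built on the $c$ k-means centroids with RBF bandwidth $\sigma$.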
The following example shows how we can select the bandwidth by using the CMSE method. In this example, we are trying to separate two toruses in three dimensions. The following graph shows different orientations of the toruses:
In the KERNEL statement, we specify BW=RANDOMCMSE so that the CMSE method is used to automatically identify the optimal bandwidth for three dimensions (NPC=3 in the OUTPUT statement). We also specify a nonzero seed (SEED=2378) as an option to RANDOMCMSE to ensure that the results are reproducible.
```sas
proc kpca data=casuser.two_torus_full method=approximate;
   input x y z;
   lrapproximation clusmethod=kmpp maxclus=500;
   kernel RBF / BW=RANDOMCMSE(SEED=2378);
   output out=casuser.scored_CMSE_fast copyvar=group npc=3;
run;
```
If we plot the second and third principal components, we can see that the toruses can be linearly separated:
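A plot like this can be produced with a sketch such as the following. It assumes that the OUTPUT statement names the principal component scores Score1, Score2, and Score3; check the actual column names in casuser.scored_CMSE_fast and adjust accordingly.

```sas
/* A minimal sketch: plot the 2nd and 3rd principal components, colored  */
/* by group. The variable names Score2 and Score3 are assumptions; verify */
/* them against the columns that PROC KPCA writes to the scored table.    */
proc sgplot data=casuser.scored_CMSE_fast;
   scatter x=Score2 y=Score3 / group=group;
   xaxis label="Principal component 2";
   yaxis label="Principal component 3";
run;
```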
The total time to run KPCA on 5,000 observations, including bandwidth identification and generation of the principal component scores, is roughly 8.5 seconds. This is more efficient than using cross-validation to select the bandwidth parameter by looping over PROC KPCA many times.
Running PROC DISCRIM provides further evidence that the toruses are linearly separable:
```sas
proc discrim data=casuser.scored_CMSE_fast method=normal pool=yes short;
   class group;
   priors proportional;
run;
```
The code in this article can be found on the public SAS® software GitHub at:
Check out my other blog article on SAS® Fast-KPCA.
References
K. Shen, H. Wang, A. Chaudhuri, Z. Asgharzadeh. Automated Bandwidth Selection for Kernel Principal Component Analysis. Journal of Machine Learning Research 1 (2021) 0-00. December 2021.

M. Li, W. Bi, J. T. Kwok, and B.-L. Lu. Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD. IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 1, pp. 152-164, Jan. 2015.

SAS® Viya® Programming Documentation, Data Mining and Machine Learning Procedures, The KPCA Procedure, https://go.documentation.sas.com/doc/en/pgmsascdc/v_032/casml/casml_kpca_details03.htm

Wicklin, R., The DO Loop, Visualize a Torus in SAS.