Generate a simulated two-class data set with 100 observations and two features in which there is a visible but non-linear separation between the two classes. Show that in this setting, a support vector machine with a polynomial kernel (with degree greater than 1) or a radial kernel will outperform a support vector classifier on the training data. Which technique performs best on the test data? Make plots and report training and test error rates in order to back up your assertions.
We first create the data set:
set.seed(1)
q4_data <- tibble(
  X1 = rnorm(100),
  X2 = rnorm(100)
)
We then create our class separation:
q4_data %>%
  mutate(
    class = as.factor(ifelse(-X1 + X2 - (X1 + X2)^2 < 0, 'A', 'B'))
  ) -> q4_data_class

q4_data_class %>%
  ggplot() +
  geom_point(aes(X1, X2, colour = class))
We split our observations into train and test sets, then fit an SVC and a polynomial SVM of degree 2 over the training data:
set.seed(1)
q4_data_partition <- q4_data_class %>% resample_partition(c(test = .5, train = .5))

q4_data_partition$train %>%
  svm(class ~ ., data = ., scale = FALSE, cost = 10, kernel = 'linear') -> svc_fit

svc_fit %>% plot(data = as_tibble(q4_data_partition$train))

q4_data_partition$train %>%
  svm(class ~ ., data = ., scale = FALSE, cost = 10, kernel = 'polynomial', degree = 2) -> svm_poly_fit

svm_poly_fit %>% plot(data = as_tibble(q4_data_partition$train))
Let’s compare the training and test error rates of the SVC and the polynomial SVM:
q4_data_partition$train %>%
  as_tibble() %>%
  mutate(
    svc_pred = predict(svc_fit, newdata = .),
    svm_pred = predict(svm_poly_fit, newdata = .)
  ) %>%
  summarise(
    'SVC Training Error Rate' = mean(class != svc_pred) * 100,
    'SVM Training Error Rate' = mean(class != svm_pred) * 100
  ) %>%
  kable() %>% kable_styling()
| SVC Training Error Rate | SVM Training Error Rate |
|---|---|
| 9.803922 | 21.56863 |
q4_data_partition$test %>%
  as_tibble() %>%
  mutate(
    svc_pred = predict(svc_fit, newdata = .),
    svm_pred = predict(svm_poly_fit, newdata = .)
  ) %>%
  summarise(
    'SVC Test Error Rate' = mean(class != svc_pred) * 100,
    'SVM Test Error Rate' = mean(class != svm_pred) * 100
  ) %>%
  kable() %>% kable_styling()
| SVC Test Error Rate | SVM Test Error Rate |
|---|---|
| 20.40816 | 10.20408 |
Interestingly, the linear SVC achieves the lower error rate on the training data, but the polynomial SVM does much better on the test data.
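The exercise also mentions a radial kernel. Here is a sketch of the same comparison with `kernel = 'radial'` (the fit and error calculation mirror the pattern above; `svm_radial_fit` is just an illustrative name, and no output is shown here):

# Fit a radial-kernel SVM on the same training partition
q4_data_partition$train %>%
  svm(class ~ ., data = ., scale = FALSE, cost = 10, kernel = 'radial') -> svm_radial_fit

# Training and test error rates for the radial fit, for comparison
bind_rows(
  train = as_tibble(q4_data_partition$train),
  test = as_tibble(q4_data_partition$test),
  .id = 'split'
) %>%
  mutate(radial_pred = predict(svm_radial_fit, newdata = .)) %>%
  group_by(split) %>%
  summarise('Radial SVM Error Rate' = mean(class != radial_pred) * 100)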
We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.
Generate a data set with \(n = 500\) and \(p = 2\), such that the observations belong to two classes with a quadratic decision boundary between them.
q5_data <- tibble(
  X1 = rnorm(500) - 0.5,
  X2 = rnorm(500) - 0.5,
  Y = as.factor(ifelse(X1^2 - X2^2 > 0, 1, 0))
)
Plot the observations.
q5_data %>%
  ggplot(aes(X1, X2)) +
  geom_point(aes(colour = Y))
Fit a logistic regression model to the data, using \(X_1\) and \(X_2\) as predictors.
q5_data_partition <- q5_data %>% resample_partition(c(test = .5, train = .5))

q5_lr <- q5_data_partition$train %>%
  glm(Y ~ ., data = ., family = 'binomial')
Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.
q5_data_partition$train %>%
  as_tibble() %>%
  mutate(
    Y_prime = as.factor(ifelse(predict(q5_lr, newdata = ., type = 'response') < .5, 0, 1))
  ) %>%
  ggplot(aes(X1, X2)) +
  geom_point(aes(colour = Y_prime))
We can see the clear linear decision boundary.
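Since the true boundary \(X_1^2 = X_2^2\) is just the pair of lines \(X_2 = \pm X_1\), we can overlay it on the plot above to make the mismatch explicit (a sketch):

# Repeat the plot with the true quadratic boundary (X2 = X1 and X2 = -X1) overlaid
q5_data_partition$train %>%
  as_tibble() %>%
  mutate(
    Y_prime = as.factor(ifelse(predict(q5_lr, newdata = ., type = 'response') < .5, 0, 1))
  ) %>%
  ggplot(aes(X1, X2)) +
  geom_point(aes(colour = Y_prime)) +
  geom_abline(slope = 1, intercept = 0, linetype = 'dashed') +
  geom_abline(slope = -1, intercept = 0, linetype = 'dashed')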
Now fit a logistic regression model to the data using non-linear functions of \(X_1\) and \(X_2\) as predictors.
q5_poly_lr <- q5_data_partition$train %>%
  glm(Y ~ poly(X1, 2) + poly(X2, 2), data = ., family = 'binomial')
Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear.
q5_data_partition$train %>%
  as_tibble() %>%
  mutate(
    Y_prime = as.factor(
      ifelse(predict(q5_poly_lr, newdata = ., type = 'response') < .5, 0, 1)
    )
  ) %>%
  ggplot(aes(X1, X2)) +
  geom_point(aes(colour = Y_prime))
We can see a clear non-linear decision boundary.
Fit a support vector classifier to the data with \(X_1\) and \(X_2\) as predictors.
q5_svc <- q5_data_partition$train %>%
  svm(Y ~ ., data = ., scale = FALSE, cost = 10, kernel = 'linear')

q5_data_partition$train %>%
  as_tibble() %>%
  mutate(
    Y_prime = predict(q5_svc, newdata = .)
  ) %>%
  ggplot(aes(X1, X2)) +
  geom_point(aes(colour = Y_prime))
Fit an SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.
q5_svm_radial <- q5_data_partition$train %>%
  svm(Y ~ ., data = ., scale = FALSE, cost = 10, kernel = 'radial')

q5_data_partition$train %>%
  as_tibble() %>%
  mutate(
    Y_prime = predict(q5_svm_radial, newdata = .)
  ) %>%
  ggplot(aes(X1, X2)) +
  geom_point(aes(colour = Y_prime))
Comment on your results.
We see that we can obtain a non-linear decision boundary either by using non-linear transformations of the features in a logistic regression, or by using a non-linear kernel in an SVM; a plain SVC or a logistic regression on the raw features can only produce a linear boundary.
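To back this up with numbers, here is a sketch computing the training error rate of each of the four fits above (same pattern as in question 4; output not shown):

# Training error rate (%) for each of the four models
q5_data_partition$train %>%
  as_tibble() %>%
  mutate(
    lr_pred = as.factor(ifelse(predict(q5_lr, newdata = ., type = 'response') < .5, 0, 1)),
    lr_poly_pred = as.factor(ifelse(predict(q5_poly_lr, newdata = ., type = 'response') < .5, 0, 1)),
    svc_pred = predict(q5_svc, newdata = .),
    svm_pred = predict(q5_svm_radial, newdata = .)
  ) %>%
  summarise(
    'Linear LR' = mean(Y != lr_pred) * 100,
    'Polynomial LR' = mean(Y != lr_poly_pred) * 100,
    'SVC' = mean(Y != svc_pred) * 100,
    'Radial SVM' = mean(Y != svm_pred) * 100
  )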
It is claimed that in the case of data that is just barely linearly separable, a support vector classifier with a small value of cost that misclassifies a couple of training observations may perform better on test data than one with a huge value of cost that does not misclassify any training observations. You will now investigate this claim.
Generate two-class data with \(p = 2\) in such a way that the classes are just barely linearly separable.
# Generate training data
set.seed(1)
q6_train_data <- tibble(
  X1 = rnorm(100),
  X2 = rnorm(100),
  Y = as.factor(ifelse(X1 - X2 > 0, 'A', 'B'))
)

q6_train_data %>%
  ggplot() +
  geom_point(aes(X1, X2, colour = Y))
Compute the cross-validation error rates for support vector classifiers with a range of `cost` values. How many training observations are misclassified for each value of `cost` considered, and how does this relate to the cross-validation errors obtained?
We can use the `tune()` function to perform the cross-validation.
tune(
  svm, Y ~ ., data = q6_train_data, kernel = 'linear',
  ranges = list(cost = c(0.001, 0.01, 0.1, 1, 10, 20, 40, 100))
) -> q6_cv
summary(q6_cv)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 100
##
## - best performance: 0.01
##
## - Detailed performance results:
## cost error dispersion
## 1 1e-03 0.44 0.13498971
## 2 1e-02 0.21 0.15238839
## 3 1e-01 0.09 0.08755950
## 4 1e+00 0.04 0.05163978
## 5 1e+01 0.02 0.04216370
## 6 2e+01 0.02 0.04216370
## 7 4e+01 0.02 0.04216370
## 8 1e+02 0.01 0.03162278
tibble(cost = c(0.001, 0.01, 0.1, 1, 10, 20, 40, 100)) %>%
  mutate(
    svm_model = map(cost, ~ svm(Y ~ ., data = q6_train_data, scale = FALSE, cost = .x, kernel = 'linear')),
    # Training error rate (%) for each fit
    train_error = map_dbl(svm_model, ~ mean(q6_train_data$Y != predict(.x)) * 100)
  ) %>%
  select(cost, train_error) -> train_errors
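The `tune()` object stores the detailed CV results, so we can set the training errors alongside the cross-validation errors directly (a sketch; `q6_cv$performances` is the table printed by `summary(q6_cv)` above):

# Join the per-cost training error with the 10-fold CV error (both in %)
q6_cv$performances %>%
  as_tibble() %>%
  transmute(cost, cv_error = error * 100) %>%
  left_join(train_errors, by = 'cost')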
Generate an appropriate test data set, and compute the test errors corresponding to each of the values of cost considered. Which value of cost leads to the fewest test errors, and how does this compare to the values of cost that yield the fewest training errors and the fewest cross-validation errors?
# Generate test data
set.seed(5435)
q6_test_data <- tibble(
  X1 = rnorm(100),
  X2 = rnorm(100),
  Y = as.factor(ifelse(X1 - X2 > 0, 'A', 'B'))
)
tibble(cost = c(0.001, 0.01, 0.1, 1, 10, 20, 40, 100)) %>%
  mutate(
    svm_model = map(cost, ~ svm(Y ~ ., data = q6_train_data, scale = FALSE, cost = .x, kernel = 'linear')),
    # Test error rate (%): predictions on the test set against the test labels
    test_error = map_dbl(svm_model, ~ mean(q6_test_data$Y != predict(.x, newdata = q6_test_data)) * 100)
  ) %>%
  select(cost, test_error) -> test_errors
test_errors
## # A tibble: 8 x 2
## cost test_error
## <dbl> <dbl>
## 1 0.001 44
## 2 0.01 39
## 3 0.1 43
## 4 1 46
## 5 10 45
## 6 20 46
## 7 40 46
## 8 100 46
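To read off which value of `cost` gives the fewest test errors directly (a sketch):

# The cost value(s) achieving the minimum test error rate
test_errors %>%
  filter(test_error == min(test_error))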
left_join(train_errors, test_errors, by = 'cost') %>%
  gather(error_type, rate, c(train_error, test_error)) %>%
  ggplot(aes(cost, rate, colour = error_type)) +
  geom_point() +
  geom_line()
What we see is the bias/variance tradeoff. As `cost` increases, the margin narrows and the model becomes more flexible, so the training error goes down. On the test data, the error also falls at first as flexibility increases, but past its minimum the model begins to overfit and the test error rises again.
In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the `Auto` data set.
Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median.
Auto %>%
  as_tibble() %>%
  mutate(above_median = as.factor(ifelse(mpg >= median(mpg), 1, 0))) ->
  auto
Fit a support vector classifier to the data with various values of `cost`, in order to predict whether a car gets high or low gas mileage. Report the cross-validation errors associated with different values of this parameter. Comment on your results.
set.seed(1)
auto %>%
  tune(
    svm, above_median ~ ., data = ., kernel = 'linear',
    ranges = list(cost = c(0.01, 0.1, 1, 10, 100))
  ) -> auto_svc
summary(auto_svc)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.01275641
##
## - Detailed performance results:
## cost error dispersion
## 1 1e-02 0.07403846 0.05471525
## 2 1e-01 0.03826923 0.05148114
## 3 1e+00 0.01275641 0.01344780
## 4 1e+01 0.02038462 0.01074682
## 5 1e+02 0.03820513 0.01773427
We see the lowest error when `cost = 1`.
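Note that `tune()` also keeps a model refit at the best parameters on the full data set, so we could use it directly instead of refitting by hand (a sketch; `auto_best_svc` is an illustrative name):

# The SVC refit at the best cost found by cross-validation
auto_best_svc <- auto_svc$best.model
summary(auto_best_svc)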
Now repeat with radial and polynomial basis kernels and different values of `gamma` and `degree`. Comment on your results.
set.seed(1)
auto %>%
  tune(
    svm, above_median ~ ., data = ., kernel = 'radial',
    ranges = list(gamma = c(0.01, 0.1, 1, 10, 100), cost = c(.01, .1, 1, 10))
  ) -> auto_svm_radial
summary(auto_svm_radial)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## gamma cost
## 0.01 10
##
## - best performance: 0.02551282
##
## - Detailed performance results:
## gamma cost error dispersion
## 1 1e-02 0.01 0.56115385 0.04344202
## 2 1e-01 0.01 0.19153846 0.07612945
## 3 1e+00 0.01 0.56115385 0.04344202
## 4 1e+01 0.01 0.56115385 0.04344202
## 5 1e+02 0.01 0.56115385 0.04344202
## 6 1e-02 0.10 0.09185897 0.03862507
## 7 1e-01 0.10 0.07916667 0.05201159
## 8 1e+00 0.10 0.56115385 0.04344202
## 9 1e+01 0.10 0.56115385 0.04344202
## 10 1e+02 0.10 0.56115385 0.04344202
## 11 1e-02 1.00 0.07147436 0.05103685
## 12 1e-01 1.00 0.05608974 0.05092939
## 13 1e+00 1.00 0.06634615 0.06187383
## 14 1e+01 1.00 0.51775641 0.04471079
## 15 1e+02 1.00 0.56115385 0.04344202
## 16 1e-02 10.00 0.02551282 0.03812986
## 17 1e-01 10.00 0.02551282 0.02076457
## 18 1e+00 10.00 0.06128205 0.06186124
## 19 1e+01 10.00 0.51012821 0.03817175
## 20 1e+02 10.00 0.56115385 0.04344202
auto %>%
  tune(
    svm, above_median ~ ., data = ., kernel = 'polynomial',
    ranges = list(degree = seq(2, 5), cost = c(.01, .1, 1, 10))
  ) -> auto_svm_poly
summary(auto_svm_poly)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## degree cost
## 2 10
##
## - best performance: 0.5228205
##
## - Detailed performance results:
## degree cost error dispersion
## 1 2 0.01 0.5587179 0.05068311
## 2 3 0.01 0.5587179 0.05068311
## 3 4 0.01 0.5587179 0.05068311
## 4 5 0.01 0.5587179 0.05068311
## 5 2 0.10 0.5587179 0.05068311
## 6 3 0.10 0.5587179 0.05068311
## 7 4 0.10 0.5587179 0.05068311
## 8 5 0.10 0.5587179 0.05068311
## 9 2 1.00 0.5587179 0.05068311
## 10 3 1.00 0.5587179 0.05068311
## 11 4 1.00 0.5587179 0.05068311
## 12 5 1.00 0.5587179 0.05068311
## 13 2 10.00 0.5228205 0.09271988
## 14 3 10.00 0.5587179 0.05068311
## 15 4 10.00 0.5587179 0.05068311
## 16 5 10.00 0.5587179 0.05068311
For a radial kernel, we see the error minimised when `gamma = 0.01` and `cost = 10`. With the polynomial kernel, the lowest error is with `degree = 2` and `cost = 10`.
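Collecting the best cross-validated error from each `tune()` run makes the comparison across kernels explicit (a sketch using the objects fitted above):

# Best 10-fold CV error for each kernel
tibble(
  kernel = c('linear', 'radial', 'polynomial'),
  best_cv_error = c(
    auto_svc$best.performance,
    auto_svm_radial$best.performance,
    auto_svm_poly$best.performance
  )
)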
Make some plots to back up your assertions.
svm_linear <- svm(above_median ~ ., data = auto, kernel = 'linear', cost = 1)
svm_poly <- svm(above_median ~ ., data = auto, kernel = 'polynomial', degree = 2, cost = 10)
svm_radial <- svm(above_median ~ ., data = auto, kernel = 'radial', gamma = 0.01, cost = 10)
plot_pairs <- function(fit, data, dependent, independents) {
  for (independent in independents) {
    formula <- as.formula(str_c(dependent, ' ~ ', independent))
    plot(fit, data, formula)
  }
}
plot_pairs(svm_linear, auto, 'mpg', c('acceleration', 'displacement', 'horsepower'))
plot_pairs(svm_poly, auto, 'mpg', c('acceleration', 'displacement', 'horsepower'))
plot_pairs(svm_radial, auto, 'mpg', c('acceleration', 'displacement', 'horsepower'))
This problem involves the OJ data set which is part of the ISLR package.
Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
set.seed(1)
oj_samples <- OJ %>%
  resample_partition(c(train = .8, test = .2))
Fit a support vector classifier to the training data using `cost = 0.01`, with `Purchase` as the response and the other variables as predictors. Use the `summary()` function to produce summary statistics, and describe the results obtained.
oj_linear_svc <- svm(Purchase ~ ., data = oj_samples$train, kernel = 'linear', cost = 0.01)
summary(oj_linear_svc)
##
## Call:
## svm(formula = Purchase ~ ., data = oj_samples$train, kernel = "linear",
## cost = 0.01)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 0.01
##
## Number of Support Vectors: 471
##
## ( 238 233 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
With such a small `cost` the margin is wide: 471 of the training observations are support vectors, split almost evenly between the two classes (238 for CH, 233 for MM).

What are the training and test error rates?
oj_samples$train %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_linear_svc, newdata = .)) %>%
  summarise('Train Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Train Error Rate`
## <dbl>
## 1 0.170
oj_samples$test %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_linear_svc, newdata = .)) %>%
  summarise('Test Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Test Error Rate`
## <dbl>
## 1 0.153
Use the `tune()` function to select an optimal `cost`. Consider values in the range 0.01 to 10.
set.seed(1)
tune(
  svm,
  Purchase ~ .,
  data = as_tibble(oj_samples$train),
  kernel = 'linear',
  ranges = list(cost = 2^seq(-8, 4))
) -> oj_svc_tune
summary(oj_svc_tune)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 0.0625
##
## - best performance: 0.1719289
##
## - Detailed performance results:
## cost error dispersion
## 1 0.00390625 0.1789466 0.04795271
## 2 0.00781250 0.1719425 0.04479256
## 3 0.01562500 0.1731190 0.04473583
## 4 0.03125000 0.1754583 0.04710335
## 5 0.06250000 0.1719289 0.04793950
## 6 0.12500000 0.1777839 0.05126394
## 7 0.25000000 0.1754309 0.04922764
## 8 0.50000000 0.1754309 0.05073058
## 9 1.00000000 0.1801094 0.04895304
## 10 2.00000000 0.1847606 0.04778595
## 11 4.00000000 0.1835978 0.04729860
## 12 8.00000000 0.1765937 0.04479404
## 13 16.00000000 0.1789330 0.04533136
Compute the training and test error rates using this new value for `cost`.
oj_linear_svc <- svm(
  Purchase ~ .,
  data = oj_samples$train,
  kernel = 'linear',
  cost = oj_svc_tune$best.parameters$cost
)
oj_samples$train %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_linear_svc)) %>%
  summarise('Train Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Train Error Rate`
## <dbl>
## 1 0.172
oj_samples$test %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_linear_svc, newdata = .)) %>%
  summarise('Test Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Test Error Rate`
## <dbl>
## 1 0.153
Repeat parts (b) through (e) using a support vector machine with a radial kernel. Use the default value for `gamma`.
oj_radial_svc <- svm(
  Purchase ~ .,
  data = oj_samples$train,
  kernel = 'radial'
)
summary(oj_radial_svc)
##
## Call:
## svm(formula = Purchase ~ ., data = oj_samples$train, kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 1
##
## Number of Support Vectors: 407
##
## ( 207 200 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
oj_samples$train %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_radial_svc, newdata = .)) %>%
  summarise('Train Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Train Error Rate`
## <dbl>
## 1 0.149
oj_samples$test %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_radial_svc, newdata = .)) %>%
  summarise('Test Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Test Error Rate`
## <dbl>
## 1 0.163
set.seed(1)
tune(
  svm,
  Purchase ~ .,
  data = as_tibble(oj_samples$train),
  kernel = 'radial',
  ranges = list(cost = 2^seq(-8, 4))
) -> oj_radial_tune
summary(oj_radial_tune)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.1754993
##
## - Detailed performance results:
## cost error dispersion
## 1 0.00390625 0.3975650 0.05379423
## 2 0.00781250 0.3975650 0.05379423
## 3 0.01562500 0.3975650 0.05379423
## 4 0.03125000 0.3355814 0.08645372
## 5 0.06250000 0.1976881 0.04255237
## 6 0.12500000 0.1848700 0.04632982
## 7 0.25000000 0.1813269 0.04754607
## 8 0.50000000 0.1813406 0.04886457
## 9 1.00000000 0.1754993 0.04594156
## 10 2.00000000 0.1778659 0.05049088
## 11 4.00000000 0.1801915 0.04871487
## 12 8.00000000 0.1848837 0.04770320
## 13 16.00000000 0.1954172 0.04847789
oj_radial_svc <- svm(
  Purchase ~ .,
  data = oj_samples$train,
  kernel = 'radial',
  cost = oj_radial_tune$best.parameters$cost
)
oj_samples$train %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_radial_svc)) %>%
  summarise('Train Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Train Error Rate`
## <dbl>
## 1 0.171
oj_samples$test %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_radial_svc, newdata = .)) %>%
  summarise('Test Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Test Error Rate`
## <dbl>
## 1 0.144
Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set `degree = 2`.
oj_poly_svc <- svm(
  Purchase ~ .,
  data = oj_samples$train,
  kernel = 'polynomial',
  degree = 2
)
summary(oj_poly_svc)
##
## Call:
## svm(formula = Purchase ~ ., data = oj_samples$train, kernel = "polynomial",
## degree = 2)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: polynomial
## cost: 1
## degree: 2
## coef.0: 0
##
## Number of Support Vectors: 496
##
## ( 252 244 )
##
##
## Number of Classes: 2
##
## Levels:
## CH MM
oj_samples$train %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_poly_svc, newdata = .)) %>%
  summarise('Train Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Train Error Rate`
## <dbl>
## 1 0.191
oj_samples$test %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_poly_svc, newdata = .)) %>%
  summarise('Test Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Test Error Rate`
## <dbl>
## 1 0.172
set.seed(1)
tune(
  svm,
  Purchase ~ .,
  data = as_tibble(oj_samples$train),
  kernel = 'polynomial',
  ranges = list(cost = 2^seq(-8, 4)),
  degree = 2
) -> oj_poly_tune
summary(oj_poly_tune)
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 16
##
## - best performance: 0.1836799
##
## - Detailed performance results:
## cost error dispersion
## 1 0.00390625 0.3975650 0.05379423
## 2 0.00781250 0.3975650 0.05379423
## 3 0.01562500 0.3799863 0.04578353
## 4 0.03125000 0.3694802 0.04427147
## 5 0.06250000 0.3378523 0.05055353
## 6 0.12500000 0.2981806 0.04931904
## 7 0.25000000 0.2234337 0.04743533
## 8 0.50000000 0.2187278 0.04555198
## 9 1.00000000 0.2000684 0.05500319
## 10 2.00000000 0.1930506 0.04820137
## 11 4.00000000 0.1871956 0.05220368
## 12 8.00000000 0.1836936 0.04608234
## 13 16.00000000 0.1836799 0.04366231
oj_poly_svc <- svm(
  Purchase ~ .,
  data = oj_samples$train,
  kernel = 'polynomial',
  degree = 2,
  cost = oj_poly_tune$best.parameters$cost
)
oj_samples$train %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_poly_svc)) %>%
  summarise('Train Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Train Error Rate`
## <dbl>
## 1 0.158
oj_samples$test %>%
  as_tibble() %>%
  mutate(Purchase_prime = predict(oj_poly_svc, newdata = .)) %>%
  summarise('Test Error Rate' = mean(Purchase != Purchase_prime))
## # A tibble: 1 x 1
## `Test Error Rate`
## <dbl>
## 1 0.149
The radial kernel appears to give the best results on the test data.
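As a final check, here is a sketch that recomputes the test error rates of the three tuned fits side by side (it assumes the refit models above are still in scope; `oj_test_error` is an illustrative helper):

# Test error rate on the held-out OJ observations for a fitted model
oj_test_error <- function(fit) {
  oj_samples$test %>%
    as_tibble() %>%
    summarise(error = mean(Purchase != predict(fit, newdata = .))) %>%
    pull(error)
}

tibble(
  kernel = c('linear', 'radial', 'polynomial'),
  test_error = map_dbl(
    list(oj_linear_svc, oj_radial_svc, oj_poly_svc),
    oj_test_error
  )
)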