evaluate_resampling() uses repeated K-fold cross-validation and a metric computed on the testing sets (by default, the Root Mean Square Error, RMSE) to measure the predictive power of a single model. Methods are provided for trending::trending_model objects (and lists of these).
Usage
evaluate_resampling(x, ...)

# Default S3 method
evaluate_resampling(x, ...)

# S3 method for class 'trending_model'
evaluate_resampling(
  x,
  data,
  metric = c("rmse", "rsq", "mae"),
  metric_arguments = list(na.rm = TRUE),
  v = 5,
  repeats = 1,
  ...
)

# S3 method for class 'list'
evaluate_resampling(
  x,
  data,
  metric = c("rmse", "rsq", "mae"),
  metric_arguments = list(na.rm = TRUE),
  v = 5,
  repeats = 1,
  ...
)
Arguments
- x
An R object.
- ...
Not currently used.
- data
a data.frame containing the data (including the response variable and all predictors) used in the specified model.
- metric
One of "rmse" (see calculate_rmse), "mae" (see calculate_mae) or "rsq" (see calculate_rsq).
- metric_arguments
A named list of arguments passed to the underlying functions that calculate the metrics.
- v
the number of equally sized data partitions to be used for K-fold cross-validation; v cross-validations will be performed, each using v - 1 partitions as the training set and the remaining partition as the testing set. Defaults to 5. Setting v to the number of rows in data gives leave-one-out cross-validation, akin to the jackknife except that the testing set (and not the training set) is used to compute the fit statistics.
- repeats
the number of times the random K-fold cross-validation should be repeated; defaults to 1. Larger values are likely to yield more reliable / stable results, at the expense of computation time (illustrative calls are sketched after this list).
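As an illustration of how these arguments combine, the calls below are a sketch using the model and dat objects defined in the Examples section; only the documented arguments above are assumed.

# mean absolute error instead of the default RMSE
evaluate_resampling(model, dat, metric = "mae")

# 10-fold cross-validation repeated 3 times
evaluate_resampling(model, dat, v = 10, repeats = 3)

# leave-one-out cross-validation (v set to the number of rows)
evaluate_resampling(model, dat, v = nrow(dat))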
Details
These functions wrap around existing functions from several packages. evaluate_resampling.trending_model() and evaluate_resampling.list() both use rsample::vfold_cv() for sampling and the yardstick package for calculating the different metrics.
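To make the resampling scheme concrete, the sketch below shows the splits that rsample::vfold_cv() produces. This step is handled internally by evaluate_resampling() and is shown here for illustration only; analysis() and assessment() are rsample helpers for extracting the two partitions of a split.

library(rsample)
set.seed(1)
dat <- data.frame(x = rnorm(100), y = rpois(100, lambda = 2))

# five train/test splits, matching v = 5 and repeats = 1
folds <- vfold_cv(dat, v = 5, repeats = 1)

# extract the training and testing partitions of the first split
train_1 <- analysis(folds$splits[[1]])
test_1 <- assessment(folds$splits[[1]])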
Examples
x <- rnorm(100, mean = 0)
y <- rpois(n = 100, lambda = exp(x + 1))
dat <- data.frame(x = x, y = y)
model <- trending::glm_model(y ~ x, poisson)
models <- list(
poisson_model = trending::glm_model(y ~ x, poisson),
linear_model = trending::lm_model(y ~ x)
)
evaluate_resampling(model, dat)
#> # A tibble: 5 × 9
#> metric result warnings errors model fitting_warnings$warnings
#> * <chr> <dbl> <list> <list> <list> <list>
#> 1 rmse 1.90 <NULL> <NULL> <glm_trn_> <NULL>
#> 2 rmse 2.20 <NULL> <NULL> <glm_trn_> <NULL>
#> 3 rmse 1.56 <NULL> <NULL> <glm_trn_> <NULL>
#> 4 rmse 1.98 <NULL> <NULL> <glm_trn_> <NULL>
#> 5 rmse 2.88 <NULL> <NULL> <glm_trn_> <NULL>
#> # ℹ 3 more variables: fitting_errors <tibble[,1]>,
#> # predicting_warnings <tibble[,1]>, predicting_errors <tibble[,1]>
evaluate_resampling(models, dat)
#> # A tibble: 10 × 10
#> model_name metric result warnings errors model fitting_warnings$war…¹
#> <chr> <chr> <dbl> <list> <list> <list> <list>
#> 1 poisson_model rmse 2.12 <NULL> <NULL> <glm_trn_> <NULL>
#> 2 poisson_model rmse 2.77 <NULL> <NULL> <glm_trn_> <NULL>
#> 3 poisson_model rmse 2.02 <NULL> <NULL> <glm_trn_> <NULL>
#> 4 poisson_model rmse 1.55 <NULL> <NULL> <glm_trn_> <NULL>
#> 5 poisson_model rmse 1.79 <NULL> <NULL> <glm_trn_> <NULL>
#> 6 linear_model rmse 3.12 <NULL> <NULL> <lm_trnd_> <NULL>
#> 7 linear_model rmse 7.49 <NULL> <NULL> <lm_trnd_> <NULL>
#> 8 linear_model rmse 3.89 <NULL> <NULL> <lm_trnd_> <NULL>
#> 9 linear_model rmse 3.68 <NULL> <NULL> <lm_trnd_> <NULL>
#> 10 linear_model rmse 5.61 <NULL> <NULL> <lm_trnd_> <NULL>
#> # ℹ abbreviated name: ¹fitting_warnings$warnings
#> # ℹ 3 more variables: fitting_errors <tibble[,1]>,
#> # predicting_warnings <tibble[,1]>, predicting_errors <tibble[,1]>
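The per-fold results can be summarised further, for example into a mean result per model. The sketch below assumes the dplyr package is available and uses the model_name, metric and result columns shown in the output above.

library(dplyr)
res <- evaluate_resampling(models, dat)
res %>%
  group_by(model_name, metric) %>%
  summarise(mean_result = mean(result), .groups = "drop")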