I am writing an R package and have spent a week chasing a single bug. I wrote tests for it from multiple angles; one of them checks that the average result from statistical sampling falls within certain bounds. I added set.seed(1) before sampling to ensure reproducibility. The sampling uses rstan and stats::rbinom.
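To be clear about what I expect from set.seed, here is a minimal sketch (using stats::rbinom directly, not my package's code): with the same seed, the draws are identical.

```r
# Same seed, same call => identical draws from stats::rbinom.
set.seed(1)
a <- rbinom(5, size = 10, prob = 0.3)
set.seed(1)
b <- rbinom(5, size = 10, prob = 0.3)
identical(a, b)  # TRUE
```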
I print the average value of two coefficients, and the printout of the test differs between two runs. The first run gives:
...
Chain 1: COMPLETED.
[1] 0.3247559
[1] -0.1190542
...
and the second one gives:
Chain 1: COMPLETED.
[1] 0.3181746
[1] -0.1384806
How come this is not reproducible? Should I be setting another seed?
The full test, which uses many functions in the package, is:
test_that("Adaptive non-parametric learning with posterior samples works well", {
  german <- load_dataset(list(name = k_german_credit))
  n_bootstrap <- 100

  # Get posterior samples
  prior_variance <- 100
  bayes_logit_model <- rstan::stan_model(file = get_rstan_file())
  train_dat <- list(n = german$n,
                    p = german$n_cov + 1,
                    x = cbind(1, german$x),
                    y = 0.5 * (german$y + 1),
                    beta_sd = sqrt(prior_variance))
  set.seed(1)
  stan_vb <- rstan::vb(bayes_logit_model,
                       data = train_dat,
                       output_samples = n_bootstrap,
                       seed = 123,
                       iter = 100)
  stan_vb_sample <- rstan::extract(stan_vb)$beta

  # Use these samples in ANPL with multiple cores
  anpl_samples <- anpl(dataset = german,
                       concentration = 1,
                       n_bootstrap = n_bootstrap,
                       posterior_sample = stan_vb_sample,
                       threshold = 1e-8,
                       num_cores = 2)
  col_means <- colMeans(anpl_samples)
  print(col_means[21])
  print(col_means[22])

  expect_true((col_means[21] >= 0.29) && (col_means[21] <= 0.3),
              "The average coefficient for column 21 is as expected")
  expect_true((col_means[22] >= -0.15) && (col_means[22] <= -0.14),
              "The average coefficient for column 22 is as expected")
})
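I wonder whether the num_cores = 2 step is involved: in base R, forked workers get their own RNG streams, so set.seed() alone does not necessarily make multi-core code reproducible. A minimal sketch with parallel::mclapply (hypothetical, not my package's internals; assumes a Unix-alike, since mc.cores > 1 is unsupported on Windows):

```r
library(parallel)

# With the default Mersenne-Twister RNG, workers are re-seeded from process
# state, so identical set.seed() calls can still give different draws:
set.seed(1)
r1 <- mclapply(1:2, function(i) rnorm(1), mc.cores = 2)
set.seed(1)
r2 <- mclapply(1:2, function(i) rnorm(1), mc.cores = 2)
# r1 and r2 may differ across runs.

# Selecting the parallel-safe L'Ecuyer-CMRG generator makes the worker
# streams a deterministic function of the seed:
RNGkind("L'Ecuyer-CMRG")
set.seed(1)
r3 <- mclapply(1:2, function(i) rnorm(1), mc.cores = 2)
set.seed(1)
r4 <- mclapply(1:2, function(i) rnorm(1), mc.cores = 2)
identical(r3, r4)  # TRUE
```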