Example 2: Test Validation With Randomized Item Sampling

Accuracy of correlation in a 2PL model

Author

Anonymous

Published

July 19, 2024


Objective

The example describes the validation of a newly developed computerized personality test with a forced-choice (yes/no) response format comparable to the Eysenck Personality Inventory (e.g., “Do you prefer reading to going out?”). The test is assumed to contain 30 items. For each respondent, a random sample of the test items is administered, and the correlation between the latent trait and an external metric criterion (e.g., number of Facebook friends) is estimated. The parameter of interest is the standard error of this correlation, which depends on both the sample size and the amount of missingness.

I. Determining the data generation for the complete dataset


  1. Number and distribution of factors (unidimensional vs. multidimensional)
  2. Number of items and item parameters (discriminations, difficulties)
  3. Item type (dichotomous, polytomous)

The true item parameters for the 30 dichotomous items are generated.

# Clear work space and load necessary packages
rm(list = ls())
library(tidyverse)   # General data handling and manipulation
library(TAM)         # Item response theory modeling
library(mirt)        # Simulate item responses from an IRT model (simdata)
library(MASS)        # Simulate data from a multivariate normal distribution

# Set the seed for random number generation to ensure reproducibility
set.seed(2024)

# Number of items
n_items <- 30

# Discrimination parameters (a_i) are randomly drawn from a log-normal 
#   distribution, as it reflects the natural variation and positive skew
#   typically observed in empirical data.
a_i <- rlnorm(n_items, meanlog = 0.2, sdlog = 0.3)

# Difficulty parameters (b_i) are randomly drawn from a normal distribution
b_i <- rnorm(n_items, 0, 1)

The function generate_mv_data uses mirt::simdata to simulate dichotomous data given the item discriminations (\(a\)), the item difficulties (\(b\)), and the trait values (\(\theta\)). In contrast to the first example, individuals’ trait values (\(\theta\)) and the criterion are drawn jointly from a bivariate normal distribution with a correlation of \(\rho = 0.5\) between the latent trait and the criterion.
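As an illustrative check (not part of the simulation pipeline; the object name theta_check is introduced here only for the example), the bivariate draw can be verified by generating a large number of trait and criterion values with MASS::mvrnorm and inspecting the empirical correlation:

# Illustrative check: the empirical correlation between trait and criterion
#   should be close to the specified rho = 0.5
theta_check <- mvrnorm(n = 10000, mu = c(0, 0),
                       Sigma = matrix(c(1, 0.5, 0.5, 1), nrow = 2))
cor(theta_check)[1, 2]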

Note that the mirt package uses the slope-intercept parameterization, so the item difficulty parameters (\(b\)) must be transformed into the item intercepts \(d_i = -a_i \cdot b_i\). The transformation between the traditional IRT parameters (item discrimination and item difficulty) and the slope-intercept parameters can also be done with mirt::traditional2mirt.
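The transformation can be verified numerically; the following sketch (added for illustration, with object names chosen here for the example) shows that both parameterizations yield identical response probabilities for the first item:

# Check the slope-intercept transformation d = -a * b for the first item:
#   P(X = 1) = plogis(a * (theta - b)) = plogis(a * theta + d)
theta_grid <- seq(-3, 3, by = 0.5)
p_trad  <- plogis(a_i[1] * (theta_grid - b_i[1]))
p_slope <- plogis(a_i[1] * theta_grid - a_i[1] * b_i[1])
all.equal(p_trad, p_slope)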

# Simulate dichotomous item responses based on the generated theta values
  # - 'a' denotes the item discriminations
  # - 'b' denotes the item difficulties
  # - 'theta' denotes the matrix of person values; the first column holds the
  #   latent trait used to generate the responses, the second column the criterion

generate_mv_data <- function(a, b, theta) {
  resp <- 
    simdata(a = a, d = -a*b, Theta = theta[, 1, drop = FALSE],  # keep a one-column matrix
            itemtype = "dich") %>% 
    as.data.frame()
  
  return(resp)
}
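A minimal usage example (illustrative only; theta_demo and resp_demo are names introduced here for the example): draw correlated trait and criterion values for a few persons and simulate their item responses.

# Illustrative call of generate_mv_data for 5 persons
theta_demo <- mvrnorm(n = 5, mu = c(0, 0),
                      Sigma = matrix(c(1, 0.5, 0.5, 1), nrow = 2))
resp_demo <- generate_mv_data(a_i, b_i, theta_demo)
dim(resp_demo)   # 5 persons x 30 items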

II. Defining the test design and the process of missing values


  1. Pattern of missingness (e.g., MCAR, linking design)
  2. Amount of missing data

The items of the computer-administered personality test are randomly drawn from a larger item pool. This kind of missingness, where the probability of missing data on a variable is independent of both observed and unobserved data, is known as Missing Completely At Random (MCAR). The function data_MCAR_design uses the complete simulated data and deletes observations under the assumption of MCAR.

# Induce missingness completely at random (MCAR) in the complete simulated data set
  # - 'resp' denotes the complete data set
  # - 'miss_rates' denotes the amount of missingness [0;1]

data_MCAR_design <- function(resp, miss_rates = 0) {
  resp <- t(apply(resp, 1, function(row) {
    # Randomly select the requested share of items per person and set them to NA
    row[sample(seq_along(row), round(length(row) * miss_rates))] <- NA
    return(row)
  }))
  
  return(resp)
}
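As a quick sanity check (illustrative, reusing the resp_demo object from the sketch above), the realized proportion of missing values should match the requested missing rate:

# Illustrative check of the realized missing rate
resp_demo_miss <- data_MCAR_design(resp_demo, miss_rates = 0.33)
mean(is.na(resp_demo_miss))   # should be close to 0.33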

III. Selecting the IRT model and the parameter of interest


  1. Underlying IRT model (e.g., 1PL, 2PL)
  2. IRT modeling software and estimation method
  3. Parameters to extract

The tam.mml.2pl function from the TAM package is used to estimate the two-parameter logistic item response model (Birnbaum model) on the simulated data. The helper function estimate_irt defined below fits this model and returns the standard error of the correlation between the latent trait and the metric covariate. Note that this example uses TAM because the criterion can be included as a covariate in a latent regression model, so the standard error of the regression coefficient can be extracted directly. Alternatively, a bootstrapped standard error could be obtained via mirt in combination with the R package boot.

# Estimate item response model
  # - 'resp' denotes the data set
  # - 'covariate' denotes the external criterion (the 2nd column of 'theta')

estimate_irt <- function(resp, covariate) {
  
  # Estimate a 2PL model using tryCatch to handle estimation errors; the criterion
    # is included as a predictor variable in the latent regression.
  mod <- tryCatch(
      tam.mml.2pl(resp = resp,                   # 2PL model
                  Y = data.frame(covariate),     # matrix of covariates in latent regression
                  verbose = FALSE),
    error = function(e) NULL
  )
  
  # Extract the standard error of the correlation if the model is successfully estimated
  if (!is.null(mod)) {
    est <- tam.se(mod)$beta[2, 2]    # standard error of the latent regression coefficient
  } else {
    est <- NA
  }
  
  return(est)
}
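The following single run (illustrative only; theta_single and dat_single are names introduced here for the example) applies the complete pipeline once, simulating one data set with 300 persons and 33% missingness and returning the estimated standard error:

# Illustrative single run of the full pipeline
theta_single <- mvrnorm(n = 300, mu = c(0, 0),
                        Sigma = matrix(c(1, 0.5, 0.5, 1), nrow = 2))
dat_single <- generate_mv_data(a_i, b_i, theta_single) %>% 
  data_MCAR_design(miss_rates = 0.33)
estimate_irt(dat_single, theta_single[, 2])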

IV. Setting up the Monte Carlo Simulation


  1. Number of iterations
  2. Sample sizes to evaluate

The Monte Carlo simulation runs n_iterations times, repeating the previous steps of (i) determining the data generation for the complete dataset, (ii) defining the test design and the process of missing values, and (iii) selecting the IRT model and the parameter of interest. The simulation is run for different sample sizes between 200 and 700 (in increments of 50) and three different missing rates (67%, 33%, 0%).

Using an estimated variance of the standard error of the correlation (\(\sigma^2 = 0.05\)), a specified level of accuracy (\(\delta = .05\)), and a significance level (\(\alpha = .05\)), the number of iterations needed is approximately 77, which is used for each combination of sample size and missing rate.
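Assuming the usual normal-approximation formula \(n \geq z_{1-\alpha/2}^2 \, \sigma^2 / \delta^2\) underlies these numbers (a sketch of the calculation, not taken from the original text), the required number of iterations can be reproduced as follows:

# Required number of iterations: n >= z^2 * sigma^2 / delta^2 (assumed formula)
ceiling(qnorm(1 - 0.05 / 2)^2 * 0.05 / 0.05^2)   # = 77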

# Number of iterations
n_iterations <- 77

# Sample sizes (from 200 to 700, in steps of 50) and three missing rates
grid <- expand.grid(n_persons = seq(200, 700, 50), 
                    miss_rates = c(0.67, 0.33, 0))

# Create data frame for results (res)
res <- data.frame()

# Loop over different sample sizes and missing rate combinations
for (g in 1:nrow(grid)) {
  
  # Nested loop, running 'n_iterations' times
  for (iter in 1:n_iterations) {
    # Generate a random sample with 'n_persons' from a multivariate normal distribution
      # - 'mu' denotes the mean vector for the multivariate normal distribution
      # - 'Sigma' denotes the covariance matrix with a correlation of 0.5 between dimensions 
    theta <- mvrnorm(n = grid[g, "n_persons"],
                     mu = c(0, 0), 
                     Sigma = matrix(c(1, 0.5, 0.5, 1), nrow = 2))
    
    dat <- generate_mv_data(a_i, b_i, theta) %>% 
      data_MCAR_design(miss_rates = grid[g, "miss_rates"])

    res <- bind_rows(res, 
                     data.frame(iteration = iter,
                                n_persons = grid[g, "n_persons"],
                                percent_miss = grid[g, "miss_rates"],
                                se_est = estimate_irt(dat, theta[, 2])))
  }
}

Results

The figure shows the average standard error of the correlation between the latent trait and the external criterion as a function of both the sample size and the amount of missingness.

# Preparation and aggregation of results
res_plot <- res %>%
  group_by(n_persons, percent_miss) %>%
  summarise(
    m_SE = mean(se_est^2, na.rm = TRUE)^.5,   # Root mean square of the estimated SEs (average SE)
    .groups = 'drop')


# Plot standard error of correlation
ggplot(data = res_plot, aes(x = n_persons, y = m_SE, color = factor(percent_miss))) +
  geom_line(linewidth = 0.8, alpha = 0.8) +
  geom_point(size = 1.5, alpha = 0.8) +
  geom_hline(yintercept = .05, col = "red", lty = "twodash") +
  labs(
    x = "Sample size",
    y = "Average SE(Correlation)",
    color = "Percentage\nmissingness"
  ) +
  scale_color_manual(values = c("0" = "darkolivegreen", 
                                "0.33" = "aquamarine3",
                                "0.67" = "darkolivegreen2")) +
  ylim(0, 0.15) +
  xlim(200, 700) +
  theme_bw() +
  theme(
    axis.title = element_text(size = 14),
    axis.text = element_text(size = 12),
    legend.title = element_text(size = 14),
    legend.text = element_text(size = 12),
    legend.position = "inside", 
    legend.position.inside = c(.85, .80)
  )

# Documentation for transparency and reproducibility
print(sessionInfo(), locale=FALSE)
R version 4.3.3 (2024-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.6 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/liblapack.so.3;  LAPACK version 3.9.0

attached base packages:
[1] stats4    stats     graphics  grDevices utils     datasets  methods  
[8] base     

other attached packages:
 [1] MASS_7.3-60.0.1 mirt_1.41       lattice_0.22-5  TAM_4.2-21     
 [5] CDM_8.2-6       mvtnorm_1.2-4   lubridate_1.9.3 forcats_1.0.0  
 [9] stringr_1.5.0   dplyr_1.1.4     purrr_1.0.2     readr_2.1.5    
[13] tidyr_1.3.1     tibble_3.2.1    ggplot2_3.5.0   tidyverse_2.0.0

loaded via a namespace (and not attached):
 [1] gtable_0.3.4         xfun_0.35            htmlwidgets_1.5.4   
 [4] tzdb_0.4.0           vctrs_0.6.5          tools_4.3.3         
 [7] generics_0.1.3       parallel_4.3.3       fansi_1.0.3         
[10] cluster_2.1.6        pkgconfig_2.0.3      Matrix_1.6-5        
[13] lifecycle_1.0.3      farver_2.1.1         compiler_4.3.3      
[16] GPArotation_2024.3-1 munsell_0.5.0        permute_0.9-7       
[19] dcurver_0.9.2        htmltools_0.5.4      yaml_2.3.6          
[22] pillar_1.9.0         admisc_0.34          vegan_2.6-4         
[25] nlme_3.1-163         Deriv_4.1.3          polycor_0.8-1       
[28] tidyselect_1.2.0     digest_0.6.30        stringi_1.7.8       
[31] labeling_0.4.3       splines_4.3.3        fastmap_1.1.0       
[34] grid_4.3.3           colorspace_2.1-0     cli_3.6.2           
[37] magrittr_2.0.3       utf8_1.2.2           withr_2.5.0         
[40] scales_1.3.0         timechange_0.3.0     rmarkdown_2.18      
[43] gridExtra_2.3        hms_1.1.3            pbapply_1.6-0       
[46] evaluate_0.18        knitr_1.41           mgcv_1.9-1          
[49] rlang_1.1.3          Rcpp_1.0.9           glue_1.6.2          
[52] rstudioapi_0.14      jsonlite_1.8.4       R6_2.5.1