Inference with non-probability survey samples
Source: R/main_function_documentation.R, R/nonprob.R

nonprob fits a model for inference based on non-probability surveys (including big data) using various methods. The function allows you to estimate the population mean when either a reference probability sample or population sums/means of the covariates are available.
The package implements state-of-the-art approaches recently proposed in the literature (Chen et al. 2020; Yang et al. 2020; Wu 2022) and uses the survey package (Lumley 2004) for inference.
It provides propensity score weighting (e.g. with calibration constraints), mass imputation (e.g. nearest neighbour) and doubly robust estimators that take into account minimisation of the asymptotic bias of the population mean estimators or variable selection.
The package uses survey package functionality when a probability sample is available.
Usage
nonprob(
data,
selection = NULL,
outcome = NULL,
target = NULL,
svydesign = NULL,
pop_totals = NULL,
pop_means = NULL,
pop_size = NULL,
method_selection = c("logit", "cloglog", "probit"),
method_outcome = c("glm", "nn", "pmm"),
family_outcome = c("gaussian", "binomial", "poisson"),
subset = NULL,
strata = NULL,
weights = NULL,
na_action = NULL,
control_selection = controlSel(),
control_outcome = controlOut(),
control_inference = controlInf(),
start_selection = NULL,
start_outcome = NULL,
verbose = FALSE,
x = TRUE,
y = TRUE,
se = TRUE,
...
)
Arguments
- data – a data.frame with data from the non-probability sample.
- selection – a formula, the selection (propensity) equation.
- outcome – a formula, the outcome equation.
- target – a formula with target variables.
- svydesign – an optional svydesign object (from the survey package) containing the probability sample and design weights.
- pop_totals – an optional named vector with population totals of the covariates.
- pop_means – an optional named vector with population means of the covariates.
- pop_size – an optional double with the population size.
- method_selection – a character with the method for propensity score estimation.
- method_outcome – a character with the method for response variable estimation.
- family_outcome – a character string describing the error distribution and link function to be used in the model. Default is "gaussian". Currently supports: gaussian with identity link, poisson and binomial.
- subset – an optional vector specifying a subset of observations to be used in the fitting process.
- strata – an optional vector specifying strata.
- weights – an optional vector of prior weights to be used in the fitting process. Should be NULL or a numeric vector. It is assumed that this vector contains frequency or analytic weights.
- na_action – a function which indicates what should happen when the data contain NAs.
- control_selection – a list indicating parameters to use when fitting the selection model for propensity scores.
- control_outcome – a list indicating parameters to use when fitting the model for the outcome variable.
- control_inference – a list indicating parameters to use for inference based on probability and non-probability samples; contains parameters such as the estimation method or the variance method.
- start_selection – an optional vector with starting values for the parameters of the selection equation.
- start_outcome – an optional vector with starting values for the parameters of the outcome equation.
- verbose – verbose, numeric.
- x – logical value indicating whether to return the model matrix of covariates as part of the output.
- y – logical value indicating whether to return the vector of the outcome variable as part of the output.
- se – logical value indicating whether to calculate and return the standard error of the estimated mean.
- ... – additional, optional arguments.
Value
Returns an object of class c("nonprobsvy", "nonprobsvy_dr") in the case of the doubly robust estimator, c("nonprobsvy", "nonprobsvy_mi") in the case of the mass imputation estimator, and c("nonprobsvy", "nonprobsvy_ipw") in the case of the inverse probability weighting estimator. The object is a list containing:
X – model matrix containing data from the probability and non-probability samples if specified at the function call.
y – list of vectors of the outcome variables if specified at the function call.
R – vector indicating whether a unit in the matrix X comes from the probability (0) or the non-probability (1) sample.
prob – vector of estimated propensity scores for the non-probability sample.
weights – vector of estimated weights for the non-probability sample.
control – list of control functions.
output – output of the model with information on the estimated population mean and standard errors.
SE – standard error of the estimator of the population mean, divided into errors from the probability and non-probability samples.
confidence_interval – confidence interval for the population mean estimator.
nonprob_size – size of the non-probability sample.
prob_size – size of the probability sample.
pop_size – estimated population size derived from the estimated weights (non-probability sample) or the known design weights (probability sample).
pop_totals – the totals of the auxiliary variables derived from the probability sample or the supplied vector of total/mean values.
outcome – list containing information about the fitting of the mass imputation model: in the case of a regression model, the list returned by stats::glm(); in the case of nearest neighbour imputation, the list returned by RANN::nn2(). If bias_correction in controlInf() is set to TRUE, the estimation is based on the joint estimating equations for the selection and outcome models, so the list differs from the one returned by stats::glm() and contains elements such as:
  coefficients – estimated coefficients of the regression model.
  std_err – standard errors of the estimated coefficients.
  residuals – the response residuals.
  variance_covariance – the variance-covariance matrix of the coefficient estimates.
  df_residual – the degrees of freedom for residuals.
  family – the error distribution and link function used in the model.
  fitted.values – the predicted values of the response variable based on the fitted model.
  linear.predictors – the linear fit on the link scale.
  X – the design matrix.
  method – set to glm, since a regression method was used.
  model_frame – matrix of data from the probability sample used for mass imputation.
  cve – the error for each value of lambda, averaged across the cross-validation folds.
selection – list containing information about the fitting of the propensity score model, such as:
  coefficients – a named vector of coefficients.
  std_err – standard errors of the estimated model coefficients.
  residuals – the response residuals.
  variance – the root mean square error.
  fitted_values – the fitted mean values, obtained by transforming the linear predictors by the inverse of the link function.
  link – the link object used.
  linear_predictors – the linear fit on the link scale.
  aic – a version of Akaike's An Information Criterion: minus twice the maximized log-likelihood plus twice the number of parameters.
  weights – vector of estimated weights for the non-probability sample.
  prior.weights – the weights initially supplied; a vector of 1s if none were.
  est_totals – the estimated totals of the auxiliary variables derived from the non-probability sample.
  formula – the formula supplied.
  df_residual – the residual degrees of freedom.
  log_likelihood – value of the log-likelihood function if the mle method is used, otherwise NA.
  cve – the error for each value of lambda, averaged across the cross-validation folds for the variable selection model when the propensity score model is fitted. Returned only if variable selection for the model is used.
  method_selection – the link function, e.g. logit, cloglog or probit.
  hessian – Hessian of the log-likelihood function from the mle method.
  gradient – gradient of the log-likelihood function from the mle method.
  method – the estimation method for the selection model, e.g. mle or gee.
  prob_der – derivative of the inclusion probability function for units in the non-probability sample.
  prob_rand – inclusion probabilities for units from the probability sample, taken from the svydesign object.
  prob_rand_est – estimated inclusion probabilities in the non-probability sample for units from the probability sample.
  prob_rand_est_der – derivative of the estimated inclusion probabilities in the non-probability sample for units from the probability sample.
stat – matrix of the estimated population means in each bootstrap iteration. Returned only if a bootstrap method is used to estimate the variance and keep_boot in controlInf() is set to TRUE.
Details
Let $y$ be the response variable for which we want to estimate the population mean, given by
$$\mu_y = \frac{1}{N} \sum_{i=1}^{N} y_i.$$
For this purpose we consider data integration with the following structure. Let $S_A$ be the non-probability sample with design matrix of covariates
$$X_A = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n_A 1} & x_{n_A 2} & \cdots & x_{n_A p} \end{bmatrix}$$
and vector of the outcome variable
$$y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n_A} \end{bmatrix}.$$
On the other hand, let $S_B$ be the probability sample with design matrix of covariates
$$X_B = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n_B 1} & x_{n_B 2} & \cdots & x_{n_B p} \end{bmatrix}.$$
Instead of a sample of units we can consider a vector of population sums of the form $\tau_x = \left(\sum_{i \in U} x_{i1}, \sum_{i \in U} x_{i2}, \ldots, \sum_{i \in U} x_{ip}\right)$ or means $\tau_x / N$, where $U$ refers to a finite population. Note that we do not assume access to the response variable for $S_B$.
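For illustration, known population totals of the covariates can be passed via the pop_totals argument in place of a svydesign object. The following is a minimal sketch reusing the simulated objects (N, sim_data, nonprob_df) defined in the Examples section below; whether an "(Intercept)" total equal to the population size has to be included should be checked against the argument description above.

# hedged sketch: IPW estimation calibrated to known population totals,
# reusing N, sim_data and nonprob_df from the Examples section below
known_totals <- c("(Intercept)" = N, colSums(sim_data[, c("x1", "x2", "x3", "x4")]))
IPW_totals <- nonprob(
  selection = ~ x1 + x2 + x3 + x4,
  target = ~ y80,
  data = nonprob_df,
  pop_totals = known_totals
)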
In general we make the following assumptions:
1. The selection indicator of belonging to the non-probability sample $R_i$ and the response variable $y_i$ are independent given the set of covariates $x_i$.
2. All units have a non-zero propensity score, i.e. $\pi_i^A > 0$ for all $i$.
3. The indicator variables $R_i^A$ and $R_j^A$ are independent given $x_i$ and $x_j$ for $i \neq j$.
There are three possible approaches to the problem of estimating the population mean using non-probability samples:

1. Inverse probability weighting (IPW). The main drawback of non-probability sampling is the unknown selection mechanism by which a unit is included in the sample; this is why we talk about the so-called "biased sample" problem. The inverse probability approach assumes that a reference probability sample is available, so that the propensity score of the selection mechanism can be estimated. The estimator has the form
$$\hat{\mu}_{IPW} = \frac{1}{N^A} \sum_{i \in S_A} \frac{y_i}{\hat{\pi}_i^A}.$$
Several estimation methods can be considered. The first is maximum likelihood estimation with a corrected log-likelihood function, given by
$$\ell^*(\boldsymbol{\theta}) = \sum_{i \in S_A} \log \left\{ \frac{\pi(x_i, \boldsymbol{\theta})}{1 - \pi(x_i, \boldsymbol{\theta})} \right\} + \sum_{i \in S_B} d_i^B \log \left\{ 1 - \pi(x_i, \boldsymbol{\theta}) \right\}.$$
In the literature, the main approach to modelling propensity scores is based on the logit link function; we extend the propensity score model with the additional link functions cloglog and probit. The pseudo-score equations derived from maximum likelihood can be replaced by generalised estimating equations with calibration constraints, defined by
$$U(\boldsymbol{\theta}) = \sum_{i \in S_A} h(x_i, \boldsymbol{\theta}) - \sum_{i \in S_B} d_i^B \pi(x_i, \boldsymbol{\theta}) h(x_i, \boldsymbol{\theta}).$$
Notice that for $h(x_i, \boldsymbol{\theta}) = \frac{x_i}{\pi(x_i, \boldsymbol{\theta})}$ we do not need a probability sample and can use a vector of population totals/means instead.

2. Mass imputation (MI). This method is based on a framework in which imputed values of the outcome variable are created for the entire probability sample. In this case, the large non-probability sample is treated as a training data set used to build an imputation model. Using the imputed values for the probability sample and the (known) design weights, we can build a population mean estimator of the form
$$\hat{\mu}_{MI} = \frac{1}{N^B} \sum_{i \in S_B} d_i^B \hat{y}_i.$$
This opens the door to very flexible imputation models. The package uses generalized linear models from stats::glm(), the nearest neighbour algorithm using RANN::nn2(), and predictive mean matching.

3. Doubly robust (DR) estimation. The IPW and MI estimators are sensitive to misspecification of the propensity score and outcome models, respectively. Doubly robust methods address this by combining the propensity score and imputation models during inference, leading to the estimator
$$\hat{\mu}_{DR} = \frac{1}{N^A} \sum_{i \in S_A} d_i^A (y_i - \hat{y}_i) + \frac{1}{N^B} \sum_{i \in S_B} d_i^B \hat{y}_i,$$
where $d_i^A = 1 / \hat{\pi}_i^A$. In addition, an approach based directly on bias minimisation has been implemented. The bias decomposition
$$\operatorname{bias}(\hat{\mu}_{DR}) = E(\hat{\mu}_{DR} - \mu) = E \left\{ \frac{1}{N} \sum_{i=1}^{N} \left( \frac{R_i^A}{\pi_i^A(x_i^T \boldsymbol{\theta})} - 1 \right) \left( y_i - m(x_i^T \boldsymbol{\beta}) \right) \right\} + E \left\{ \frac{1}{N} \sum_{i=1}^{N} \left( R_i^B d_i^B - 1 \right) m(x_i^T \boldsymbol{\beta}) \right\}$$
leads to the system of equations
$$J(\boldsymbol{\theta}, \boldsymbol{\beta}) = \begin{pmatrix} J_1(\boldsymbol{\theta}, \boldsymbol{\beta}) \\ J_2(\boldsymbol{\theta}, \boldsymbol{\beta}) \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{N} R_i^A \left\{ \frac{1}{\pi(x_i, \boldsymbol{\theta})} - 1 \right\} \left\{ y_i - m(x_i, \boldsymbol{\beta}) \right\} x_i \\ \sum_{i=1}^{N} \frac{R_i^A}{\pi(x_i, \boldsymbol{\theta})} \frac{\partial m(x_i, \boldsymbol{\beta})}{\partial \boldsymbol{\beta}} - \sum_{i \in S_B} d_i^B \frac{\partial m(x_i, \boldsymbol{\beta})}{\partial \boldsymbol{\beta}} \end{pmatrix},$$
where $m(x_i, \boldsymbol{\beta})$ is the mass imputation (regression) model for the outcome variable and the propensity scores $\pi_i^A$ are estimated using a logit function for the model. As with the MLE and GEE approaches, this method has been extended to the cloglog and probit links. A minimal call using the bias-minimisation variant is sketched after this list.
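The sketch below shows the bias-minimisation variant; it reuses the objects from the Examples section, relies on the bias_correction argument of controlInf() described in the Value section, and leaves all other settings at their defaults.

# hedged sketch: doubly robust estimator based on the joint (bias-minimising)
# estimating equations, enabled through controlInf(bias_correction = TRUE)
DR_biasmin <- nonprob(
  outcome = y80 ~ x1 + x2 + x3 + x4,
  selection = ~ x1 + x2 + x3 + x4,
  data = nonprob_df,
  svydesign = svyprob,
  control_inference = controlInf(bias_correction = TRUE)
)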
As it is not straightforward to calculate the variances of these estimators, asymptotic equivalents of the variances derived using the Taylor approximation have been proposed in the literature. Details can be found here. In addition, a bootstrap approach can be used for variance estimation.
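A sketch of bootstrap variance estimation is given below. The keep_boot argument and the returned stat element are described in the Value section above; the argument names var_method and num_boot are assumptions and should be verified against the controlInf() documentation.

# hedged sketch: bootstrap variance estimation with stored bootstrap replicates
# (var_method and num_boot are assumed argument names; check controlInf())
DR_boot <- nonprob(
  outcome = y80 ~ x1 + x2 + x3 + x4,
  selection = ~ x1 + x2 + x3 + x4,
  data = nonprob_df,
  svydesign = svyprob,
  control_inference = controlInf(var_method = "bootstrap", num_boot = 50, keep_boot = TRUE)
)
# with keep_boot = TRUE, DR_boot$stat holds the population mean estimated in each iteration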
The function also allows variable selection, using methods from the literature that have been adapted to the integration of probability and non-probability samples.
In the presence of high-dimensional data, variable selection is important because it can reduce the variability of the estimate that results from using irrelevant variables to build the model.
Let $U(\boldsymbol{\theta}, \boldsymbol{\beta})$ be the joint estimating function for $(\boldsymbol{\theta}, \boldsymbol{\beta})$. We define the penalized estimating functions as
$$U^p(\boldsymbol{\theta}, \boldsymbol{\beta}) = U(\boldsymbol{\theta}, \boldsymbol{\beta}) - \begin{pmatrix} q_{\lambda_\theta}(|\boldsymbol{\theta}|) \operatorname{sgn}(\boldsymbol{\theta}) \\ q_{\lambda_\beta}(|\boldsymbol{\beta}|) \operatorname{sgn}(\boldsymbol{\beta}) \end{pmatrix},$$
where $q_{\lambda_\theta}$ and $q_{\lambda_\beta}$ are some smooth functions. We let $q_\lambda(x) = \partial p_\lambda(x) / \partial x$, where $p_\lambda$ is some penalization function.
Details of penalization functions and techniques for solving this type of equation can be found here.
To use the variable selection model, set the vars_selection parameter in the controlInf() function to TRUE. In addition, in the other control functions, such as controlSel() and controlOut(), you can set parameters for the selection of the relevant variables, such as the number of folds for the cross-validation algorithm or the lambda values for the penalization. Details can be found in the documentation of the control functions for nonprob.
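A minimal sketch of a call with variable selection enabled follows; only vars_selection = TRUE is taken from this page, and any tuning settings (penalty type, number of folds, lambda grid) would be supplied through controlSel() and controlOut() under the argument names documented there.

# hedged sketch: enable variable selection for high-dimensional auxiliary data,
# reusing nonprob_df and svyprob from the Examples section below
DR_varsel <- nonprob(
  outcome = y80 ~ x1 + x2 + x3 + x4,
  selection = ~ x1 + x2 + x3 + x4,
  data = nonprob_df,
  svydesign = svyprob,
  control_inference = controlInf(vars_selection = TRUE)
)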
References
Kim, J. K., Park, S., Chen, Y., Wu, C. (2021). Combining non-probability and probability survey samples through mass imputation. Journal of the Royal Statistical Society: Series A, 184, 941-963.
Yang, S., Kim, J. K., Song, R. (2020). Doubly robust inference when combining probability and non-probability samples with high dimensional data. Journal of the Royal Statistical Society: Series B.
Chen, Y., Li, P., Wu, C. (2020). Doubly Robust Inference With Nonprobability Survey Samples. Journal of the American Statistical Association, 115(532), 2011-2021.
Yang, S., Kim, J. K., Hwang, Y. (2021). Integration of data from probability surveys and big data for finite population inference using mass imputation. Survey Methodology, 47(1), 29-58.
See also
stats::optim() – for more information on the optim function used in the optim method of propensity score model fitting.
maxLik::maxLik() – for more information on the maxLik function used in the maxLik method of propensity score model fitting.
ncvreg::cv.ncvreg() – for more information on the cv.ncvreg function used for variable selection in the outcome model.
nleqslv::nleqslv() – for more information on the nleqslv function used in the estimation process of the bias-minimisation approach.
stats::glm() – for more information about the generalised linear models used during the mass imputation process.
RANN::nn2() – for more information about the nearest neighbour algorithm used during the mass imputation process.
controlSel() – for the control parameters related to the selection model.
controlOut() – for the control parameters related to the outcome model.
controlInf() – for the control parameters related to statistical inference.
Examples
# \donttest{
# generate data based on Doubly Robust Inference With Nonprobability Survey Samples (2020)
# by Yilin Chen, Pengfei Li & Changbao Wu
library(sampling)
#>
#> Attaching package: ‘sampling’
#> The following objects are masked from ‘package:survival’:
#>
#> cluster, strata
set.seed(123)
# sizes of population and probability sample
N <- 20000 # population
n_b <- 1000 # probability
# data
z1 <- rbinom(N, 1, 0.7)
z2 <- runif(N, 0, 2)
z3 <- rexp(N, 1)
z4 <- rchisq(N, 4)
# covariates
x1 <- z1
x2 <- z2 + 0.3 * z2
x3 <- z3 + 0.2 * (z1 + z2)
x4 <- z4 + 0.1 * (z1 + z2 + z3)
epsilon <- rnorm(N)
sigma_30 <- 10.4
sigma_50 <- 5.2
sigma_80 <- 2.4
# response variables
y30 <- 2 + x1 + x2 + x3 + x4 + sigma_30 * epsilon
y50 <- 2 + x1 + x2 + x3 + x4 + sigma_50 * epsilon
y80 <- 2 + x1 + x2 + x3 + x4 + sigma_80 * epsilon
# population
sim_data <- data.frame(y30, y50, y80, x1, x2, x3, x4)
## propensity score model for non-probability sample (sum to 1000)
eta <- -4.461 + 0.1 * x1 + 0.2 * x2 + 0.1 * x3 + 0.2 * x4
rho <- plogis(eta)
# inclusion probabilities for probability sample
z_prob <- x3 + 0.2051
sim_data$p_prob <- inclusionprobabilities(z_prob, n = n_b)
# data
sim_data$flag_nonprob <- UPpoisson(rho) ## sampling nonprob
sim_data$flag_prob <- UPpoisson(sim_data$p_prob) ## sampling prob
nonprob_df <- subset(sim_data, flag_nonprob == 1) ## non-probability sample
svyprob <- svydesign(
ids = ~1, probs = ~p_prob,
data = subset(sim_data, flag_prob == 1),
pps = "brewer"
) ## probability sample
## mass imputation estimator
MI_res <- nonprob(
outcome = y80 ~ x1 + x2 + x3 + x4,
data = nonprob_df,
svydesign = svyprob
)
summary(MI_res)
#>
#> Call:
#> nonprob(data = nonprob_df, outcome = y80 ~ x1 + x2 + x3 + x4,
#> svydesign = svyprob)
#>
#> -------------------------
#> Estimated population mean: 9.518 with overall std.err of: 0.151
#> And std.err for nonprobability and probability samples being respectively:
#> 0.08679 and 0.1236
#>
#> 95% Confidence inverval for popualtion mean:
#> lower_bound upper_bound
#> y80 9.222349 9.814346
#>
#>
#> Based on: Mass Imputation method
#> For a population of estimate size: 21631.63
#> Obtained on a nonprobability sample of size: 1032
#> With an auxiliary probability sample of size: 1044
#> -------------------------
#>
#> Regression coefficients:
#> -----------------------
#> For glm regression on outcome variable:
#> Estimate Std. Error z value P(>|z|)
#> (Intercept) 1.93113 0.24859 7.768 7.95e-15 ***
#> x1 1.06616 0.16954 6.289 3.20e-10 ***
#> x2 1.04125 0.09731 10.700 < 2e-16 ***
#> x3 0.98891 0.06927 14.277 < 2e-16 ***
#> x4 0.98930 0.01904 51.946 < 2e-16 ***
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> -------------------------
#>
## inverse probability weighted estimator
IPW_res <- nonprob(
selection = ~ x1 + x2 + x3 + x4,
target = ~y80,
data = nonprob_df,
svydesign = svyprob
)
summary(IPW_res)
#>
#> Call:
#> nonprob(data = nonprob_df, selection = ~x1 + x2 + x3 + x4, target = ~y80,
#> svydesign = svyprob)
#>
#> -------------------------
#> Estimated population mean: 9.718 with overall std.err of: 0.2375
#> And std.err for nonprobability and probability samples being respectively:
#> 0.1887 and 0.1442
#>
#> 95% Confidence inverval for popualtion mean:
#> lower_bound upper_bound
#> y80 9.252143 10.18302
#>
#>
#> Based on: Inverse probability weighted method
#> For a population of estimate size: 21127.35
#> Obtained on a nonprobability sample of size: 1032
#> With an auxiliary probability sample of size: 1044
#> -------------------------
#>
#> Regression coefficients:
#> -----------------------
#> For glm regression on selection variable:
#> Estimate Std. Error z value P(>|z|)
#> (Intercept) -4.582627 0.105508 -43.434 < 2e-16 ***
#> x1 0.102629 0.074416 1.379 0.168
#> x2 0.234843 0.042871 5.478 4.30e-08 ***
#> x3 0.181632 0.029253 6.209 5.33e-10 ***
#> x4 0.184285 0.008568 21.508 < 2e-16 ***
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> -------------------------
#>
#> Weights:
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 1.172 10.583 18.137 20.472 27.939 79.562
#> -------------------------
#>
#> Covariate balance:
#> (Intercept) x1 x2 x3 x4
#> -504.27565 -107.32379 -78.59614 -193.36761 1541.19593
#> -------------------------
#>
#> Residuals:
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> -0.56121 -0.04204 -0.01457 0.43052 0.94475 0.98743
#>
#> AIC: 7797.97
#> BIC: 7826.161
#> Log-Likelihood: -3893.985 on 2071 Degrees of freedom
## doubly robust estimator
DR_res <- nonprob(
outcome = y80 ~ x1 + x2 + x3 + x4,
selection = ~ x1 + x2 + x3 + x4,
data = nonprob_df,
svydesign = svyprob
)
summary(DR_res)
#>
#> Call:
#> nonprob(data = nonprob_df, selection = ~x1 + x2 + x3 + x4, outcome = y80 ~
#> x1 + x2 + x3 + x4, svydesign = svyprob)
#>
#> -------------------------
#> Estimated population mean: 9.483 with overall std.err of: 0.1525
#> And std.err for nonprobability and probability samples being respectively:
#> 0.08508 and 0.1265
#>
#> 95% Confidence inverval for popualtion mean:
#> lower_bound upper_bound
#> y80 9.183858 9.781461
#>
#>
#> Based on: Doubly-Robust method
#> For a population of estimate size: 21127.35
#> Obtained on a nonprobability sample of size: 1032
#> With an auxiliary probability sample of size: 1044
#> -------------------------
#>
#> Regression coefficients:
#> -----------------------
#> For glm regression on outcome variable:
#> Estimate Std. Error z value P(>|z|)
#> (Intercept) 1.93113 0.24859 7.768 7.95e-15 ***
#> x1 1.06616 0.16954 6.289 3.20e-10 ***
#> x2 1.04125 0.09731 10.700 < 2e-16 ***
#> x3 0.98891 0.06927 14.277 < 2e-16 ***
#> x4 0.98930 0.01904 51.946 < 2e-16 ***
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> -----------------------
#> For glm regression on selection variable:
#> Estimate Std. Error z value P(>|z|)
#> (Intercept) -4.582627 0.105508 -43.434 < 2e-16 ***
#> x1 0.102629 0.074416 1.379 0.168
#> x2 0.234843 0.042871 5.478 4.30e-08 ***
#> x3 0.181632 0.029253 6.209 5.33e-10 ***
#> x4 0.184285 0.008568 21.508 < 2e-16 ***
#> -------------------------
#>
#> Weights:
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 1.172 10.583 18.137 20.472 27.939 79.562
#> -------------------------
#>
#> Covariate balance:
#> (Intercept) x1 x2 x3 x4
#> -504.27565 -107.32379 -78.59614 -193.36761 1541.19593
#> -------------------------
#>
#> Residuals:
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> -0.56121 -0.04204 -0.01457 0.43052 0.94475 0.98743
#>
#> AIC: 7797.97
#> BIC: 7826.161
#> Log-Likelihood: -3893.985 on 2071 Degrees of freedom
# }