Introduction to the EAinference package

The EAinference package is designed to facilitate simulation-based inference for lasso-type estimators. The package includes:

  • Fitting the lasso, group lasso, scaled lasso, and scaled group lasso models
  • Gaussian and wild multiplier bootstrap for the lasso, group lasso, scaled lasso, scaled group lasso, and their de-biased estimators
  • An importance sampler for approximating p-values in these methods
  • A Markov chain Monte Carlo lasso sampler with applications in post-selection inference

Loss functions

Let the unknown true model be y = Xβ0 + ϵ, where β0 is an unknown true coefficient vector and the errors ϵi are i.i.d. N(0, σ2).

We use the following loss functions: $$ \ell_{grlasso}(\beta ; \lambda) = \frac{1}{2}||y-X\beta||^2_2 + n\lambda\sum_j w_j||\beta_{(j)}||_2,$$ $$ \ell_{sgrlasso}(\beta, \sigma ; \lambda) = \frac{1}{2\sigma}||y-X\beta||^2_2 + \frac{\sigma}{2} + n\lambda\sum_j w_j||\beta_{(j)}||_2,$$ where grlasso and sgrlasso denote the group lasso and scaled group lasso, respectively, β(j) is the subvector of β belonging to the j-th group, and wj is the weight for the j-th group. The loss functions for the lasso and scaled lasso are special cases of the group lasso and scaled group lasso losses in which every group is a singleton.
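
To make the penalty term concrete, the group lasso loss can be evaluated directly; below is a minimal sketch (grlassoLoss is a hypothetical helper written for illustration, not a function exported by the package):

# Hypothetical helper, for illustration only: evaluates the group lasso
# loss above, where `group` maps each coefficient to its group index and
# `weights` holds one weight per group.
grlassoLoss <- function(y, X, beta, lambda, group, weights) {
  n <- nrow(X)
  groupNorms <- sapply(split(beta, group), function(b) sqrt(sum(b^2)))
  0.5 * sum((y - X %*% beta)^2) + n * lambda * sum(weights * groupNorms)
}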

Fit lasso, group lasso, scaled lasso and scaled group lasso models

lassoFit computes lasso, group lasso, scaled lasso, and scaled group lasso solutions. Users can either provide the value of λ directly or use cross-validation by setting lbd = "cv.min" or lbd = "cv.1se". With lbd = "cv.min", λ is chosen to minimize the cross-validated mean squared error, while lbd = "cv.1se" selects the largest λ whose error is within one standard error of the "cv.min" error.

library(EAinference)
set.seed(1234)
n <- 30; p <- 50
Niter <-  10
group <- rep(1:(p/5), each = 5)
weights <- rep(1, p/5)
beta0 <- c(rep(1,5), rep(0, p-5))
X <- matrix(rnorm(n*p), n)
Y <- X %*% beta0 + rnorm(n)
# set lambda = .5
lassoFit(X = X, Y = Y, lbd = .5, group = group, weights = weights, type="grlasso")
## $B0
##  [1]  0.5161621302  1.0188228346  0.6824834013  0.3931486388  0.6875020567
##  [6] -0.0016282450  0.0217383769  0.0301842209 -0.0099550641 -0.0164192437
## [11] -0.0014106348 -0.0045337379  0.0020827785  0.0005228741 -0.0052631513
## [16]  0.0000000000  0.0000000000  0.0000000000  0.0000000000  0.0000000000
## [21]  0.0000000000  0.0000000000  0.0000000000  0.0000000000  0.0000000000
## [26]  0.0000000000  0.0000000000  0.0000000000  0.0000000000  0.0000000000
## [31]  0.0000000000  0.0000000000  0.0000000000  0.0000000000  0.0000000000
## [36]  0.0000000000  0.0000000000  0.0000000000  0.0000000000  0.0000000000
## [41] -0.0017677300 -0.0011624716  0.0061031436  0.0010321405 -0.0028783506
## [46]  0.0000000000  0.0000000000  0.0000000000  0.0000000000  0.0000000000
## 
## $S0
##  [1]  0.33335963  0.65799946  0.44077704  0.25391224  0.44401830 -0.03886713
##  [7]  0.51890728  0.72051432 -0.23763298 -0.39193658 -0.19045807 -0.61212652
## [13]  0.28120812  0.07059630 -0.71060890  0.46931035  0.33720782  0.11896157
## [19]  0.29503617  0.03505353  0.05203572 -0.12061761  0.04602795 -0.40549325
## [25] -0.50026678  0.17202559  0.45006928 -0.17411522 -0.80287198 -0.10072091
## [31] -0.25984827  0.01140847  0.24753246  0.62774887  0.04721456  0.33710425
## [37] -0.09279444 -0.04843110 -0.43429989  0.40143864 -0.24735041 -0.16265937
## [43]  0.85398512  0.14442273 -0.40275451  0.53178513 -0.20152076 -0.07230322
## [49] -0.45278993  0.21105497
## 
## $lbd
## [1] 0.5
## 
## $weights
##  [1] 1 1 1 1 1 1 1 1 1 1
## 
## $group
##  [1]  1  1  1  1  1  2  2  2  2  2  3  3  3  3  3  4  4  4  4  4  5  5  5  5  5
## [26]  6  6  6  6  6  7  7  7  7  7  8  8  8  8  8  9  9  9  9  9 10 10 10 10 10
# use cross-validation
lassoFit(X = X, Y = Y, lbd = "cv.1se", group = group, weights = weights, type="grlasso")
## $B0
##  [1]  0.542147627  1.067972600  0.719380282  0.392840348  0.741292109
##  [6] -0.014276359  0.063172054  0.089948810 -0.022679445 -0.043191437
## [11] -0.009471823 -0.044387152  0.013828738  0.008189518 -0.047287646
## [16]  0.000000000  0.000000000  0.000000000  0.000000000  0.000000000
## [21]  0.000000000  0.000000000  0.000000000  0.000000000  0.000000000
## [26]  0.007456928  0.021707653 -0.009224345 -0.044756232 -0.004615916
## [31]  0.000000000  0.000000000  0.000000000  0.000000000  0.000000000
## [36]  0.000000000  0.000000000  0.000000000  0.000000000  0.000000000
## [41] -0.013846553 -0.008093775  0.050048948  0.007689270 -0.022685429
## [46]  0.000000000  0.000000000  0.000000000  0.000000000  0.000000000
## 
## $S0
##  [1]  0.332672579  0.655329266  0.441426080  0.241054658  0.454871611
##  [6] -0.117889181  0.521652710  0.742765794 -0.187278923 -0.356659768
## [11] -0.140352440 -0.657723984  0.204912732  0.121351385 -0.700703190
## [16]  0.485979914  0.293177130  0.084563490  0.262524675  0.004146794
## [21]  0.162695976 -0.119210529  0.015183560 -0.418380295 -0.542410749
## [26]  0.145230875  0.422777512 -0.179653021 -0.871670842 -0.089899431
## [31] -0.331601817 -0.041471502  0.292528943  0.814088644  0.044069829
## [36]  0.337666280 -0.082805949  0.002969195 -0.479027474  0.497476824
## [41] -0.239737511 -0.140134621  0.866541348  0.133131079 -0.392772732
## [46]  0.677532443 -0.346219601  0.017915504 -0.450745030  0.195449953
## 
## $lbd
## [1] 0.3843525
## 
## $weights
##  [1] 1 1 1 1 1 1 1 1 1 1
## 
## $group
##  [1]  1  1  1  1  1  2  2  2  2  2  3  3  3  3  3  4  4  4  4  4  5  5  5  5  5
## [26]  6  6  6  6  6  7  7  7  7  7  8  8  8  8  8  9  9  9  9  9 10 10 10 10 10
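
The returned object is a plain list, so individual components can be extracted directly. A short sketch (output omitted; the cross-validated λ varies across runs):

# Store the fit and extract components (a sketch; output omitted).
fit <- lassoFit(X = X, Y = Y, lbd = "cv.1se", group = group,
                weights = weights, type = "grlasso")
fit$lbd            # the cross-validated lambda
which(fit$B0 != 0) # indices of the active (nonzero) coefficients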

Parametric bootstrap sampler

PBsampler supports two bootstrap methods: Gaussian bootstrap and wild multiplier bootstrap. Since the sampling distribution of (β̂, S), the coefficient estimator and its subgradient, is characterized by (β0, σ2, λ), users are required to provide PE (an estimate of either β0 or of E(y) = Xβ0), sig2 (an estimate of σ2) and lbd (the λ from the loss functions above).

By specifying two sets of arguments, (PE_1, sig2_1, lbd_1) and (PE_2, sig2_2, lbd_2), users can sample from a mixture distribution: each draw comes from (PE_1, sig2_1, lbd_1) with probability 1/2 and from (PE_2, sig2_2, lbd_2) with probability 1/2.

set.seed(1234)
n <- 5; p <- 10
Niter <-  3
group <- rep(1:(p/5), each = 5)
weights <- rep(1, p/5)
beta0 <- c(rep(1,5), rep(0, p-5))
X <- matrix(rnorm(n*p), n)
Y <- X %*% beta0 + rnorm(n)
#
# Using non-mixture distribution
#
PBsampler(X = X, PE_1 = rep(0, p), sig2_1 = 1, lbd_1 = .5,
weights = weights, group = group, type = "grlasso", niter = Niter, parallel = FALSE)
## $beta
##            [,1]        [,2]         [,3]        [,4]       [,5]        [,6]
## [1,] -0.3525429 -0.04506306 -0.163515697 -0.20737321 0.04096061  0.07987176
## [2,]  0.0000000  0.00000000  0.000000000  0.00000000 0.00000000  0.19449574
## [3,] -0.1019568 -0.01026921  0.007230151 -0.04528637 0.02613020 -0.48574724
##             [,7]         [,8]        [,9]       [,10]
## [1,]  0.01440932 -0.078967690 -0.01727742 -0.02587139
## [2,]  0.10015761 -0.449682539 -0.06018988 -0.11896291
## [3,] -0.02474699  0.003987114  0.16607929 -0.22667552
## 
## $subgrad
##            [,1]        [,2]        [,3]       [,4]        [,5]       [,6]
## [1,] -0.7928100 -0.10133928 -0.36771942 -0.4663476  0.09211357  0.6801388
## [2,]  0.5445932  0.06272735 -0.41645864  0.2135762 -0.29673222  0.3758068
## [3,] -0.8845246 -0.08909037  0.06272503 -0.3928810  0.22669206 -0.8647313
##            [,7]         [,8]       [,9]      [,10]
## [1,]  0.1227009 -0.672440321 -0.1471239 -0.2203049
## [2,]  0.1935256 -0.868881583 -0.1162996 -0.2298614
## [3,] -0.0440548  0.007097887  0.2956558 -0.4035297
## 
## $X
##            [,1]       [,2]        [,3]       [,4]       [,5]       [,6]
## [1,] -1.2070657  0.5060559 -0.47719270 -0.1102855  0.1340882 -1.4482049
## [2,]  0.2774292 -0.5747400 -0.99838644 -0.5110095 -0.4906859  0.5747557
## [3,]  1.0844412 -0.5466319 -0.77625389 -0.9111954 -0.4405479 -1.0236557
## [4,] -2.3456977 -0.5644520  0.06445882 -0.8371717  0.4595894 -0.0151383
## [5,]  0.4291247 -0.8900378  0.95949406  2.4158352 -0.6937202 -0.9359486
##            [,7]       [,8]       [,9]      [,10]
## [1,]  1.1022975 -1.1676193  1.4494963 -0.9685143
## [2,] -0.4755931 -2.1800396 -1.0686427 -1.1073182
## [3,] -0.7094400 -1.3409932 -0.8553646 -1.2519859
## [4,] -0.5012581 -0.2942939 -0.2806230 -0.5238281
## [5,] -1.6290935 -0.4658975 -0.9943401 -0.4968500
## 
## $PE
##  [1] 0 0 0 0 0 0 0 0 0 0
## 
## $sig2
## [1] 1
## 
## $lbd
## [1] 0.5
## 
## $weights
## [1] 1 1
## 
## $group
##  [1] 1 1 1 1 1 2 2 2 2 2
## 
## $type
## [1] "grlasso"
## 
## $PEtype
## [1] "coeff"
## 
## $Btype
## [1] "gaussian"
## 
## $Y
## NULL
## 
## $mixture
## [1] FALSE
## 
## $call
## PBsampler(X = X, PE_1 = rep(0, p), sig2_1 = 1, lbd_1 = 0.5, weights = weights, 
##     group = group, niter = Niter, type = "grlasso", parallel = FALSE)
## 
## attr(,"class")
## [1] "PB"
#
# Using mixture distribution
#
PBsampler(X = X, PE_1 = rep(0, p), sig2_1 = 1, lbd_1 = .5,
 PE_2 = rep(1, p), sig2_2 = 2, lbd_2 = .3, weights = weights,
 group = group, type = "grlasso", niter = Niter, parallel = FALSE)
## $beta
##            [,1]       [,2]        [,3]       [,4]        [,5]        [,6]
## [1,]  0.0000000  0.0000000  0.00000000  0.0000000  0.00000000 -0.27036726
## [2,] -0.3321398 -0.3335077 -0.04145558 -0.4657295  0.06208680 -0.08107206
## [3,]  1.2724390  0.7144906  0.26381037  0.7718455 -0.07768157  0.28101652
##            [,7]        [,8]       [,9]      [,10]
## [1,]  0.1244576 -0.15833337  0.2244698 -0.1545509
## [2,] -0.3907519 -0.08505561 -0.3701667 -0.2274744
## [3,]  1.2572749  1.56840772  1.1990782  1.4824785
## 
## $subgrad
##            [,1]       [,2]        [,3]       [,4]        [,5]       [,6]
## [1,] -0.6154791  0.1204289 -0.02635955  0.1638178  0.05509190 -0.6236712
## [2,] -0.4984468 -0.5004997 -0.06221297 -0.6989269  0.09317453 -0.1360181
## [3,]  0.7602955  0.4269156  0.15762945  0.4611857 -0.04641555  0.1009106
##            [,7]       [,8]       [,9]      [,10]
## [1,]  0.2870933 -0.3652364  0.5177970 -0.3565113
## [2,] -0.6555814 -0.1427015 -0.6210447 -0.3816437
## [3,]  0.4514767  0.5632018  0.4305787  0.5323454
## 
## $X
##            [,1]       [,2]        [,3]       [,4]       [,5]       [,6]
## [1,] -1.2070657  0.5060559 -0.47719270 -0.1102855  0.1340882 -1.4482049
## [2,]  0.2774292 -0.5747400 -0.99838644 -0.5110095 -0.4906859  0.5747557
## [3,]  1.0844412 -0.5466319 -0.77625389 -0.9111954 -0.4405479 -1.0236557
## [4,] -2.3456977 -0.5644520  0.06445882 -0.8371717  0.4595894 -0.0151383
## [5,]  0.4291247 -0.8900378  0.95949406  2.4158352 -0.6937202 -0.9359486
##            [,7]       [,8]       [,9]      [,10]
## [1,]  1.1022975 -1.1676193  1.4494963 -0.9685143
## [2,] -0.4755931 -2.1800396 -1.0686427 -1.1073182
## [3,] -0.7094400 -1.3409932 -0.8553646 -1.2519859
## [4,] -0.5012581 -0.2942939 -0.2806230 -0.5238281
## [5,] -1.6290935 -0.4658975 -0.9943401 -0.4968500
## 
## $PE
##      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## PE_1    0    0    0    0    0    0    0    0    0     0
## PE_2    1    1    1    1    1    1    1    1    1     1
## 
## $sig2
## [1] 1 2
## 
## $lbd
## [1] 0.5 0.3
## 
## $weights
## [1] 1 1
## 
## $group
##  [1] 1 1 1 1 1 2 2 2 2 2
## 
## $type
## [1] "grlasso"
## 
## $PEtype
## [1] "coeff"
## 
## $Btype
## [1] "gaussian"
## 
## $Y
## NULL
## 
## $mixture
## [1] TRUE
## 
## $call
## PBsampler(X = X, PE_1 = rep(0, p), sig2_1 = 1, lbd_1 = 0.5, PE_2 = rep(1, 
##     p), sig2_2 = 2, lbd_2 = 0.3, weights = weights, group = group, 
##     niter = Niter, type = "grlasso", parallel = FALSE)
## 
## attr(,"class")
## [1] "PB"

Importance sampler for high dimensional data

The importance sampler enables inference about extreme regions that direct sampling would rarely reach. This is done by using a proposal distribution that places more mass around the extreme region.

Say we are interested in computing the expectation of a function of a random variable, h(X). Let f(x) be the true (target) density and g(x) the proposal density. We can approximate the expectation by a weighted average of samples drawn from the proposal distribution as follows,

$$ \begin{eqnarray} E_f\Big[h(X)\Big] &=& E_g \Big[h(X)\frac{f(X)}{g(X)} \Big]\\ &\approx& \frac{1}{N}\sum_{i=1}^N h(x_i)\frac{f(x_i)}{g(x_i)}. \end{eqnarray} $$ Using the hdIS method, one can easily compute the importance weight, the ratio of the target density to the proposal density, f(xi)/g(xi), in the equation above.
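
As a toy illustration of this identity, independent of the package, one can estimate the small tail probability P(Z > 4) for Z ~ N(0, 1) by drawing from a proposal centered at the region of interest:

# Toy importance sampling example (not package code): estimate
# P(Z > 4) for Z ~ N(0, 1) using the shifted proposal N(4, 1).
z <- rnorm(1e5, mean = 4)            # draws from the proposal g
w <- dnorm(z) / dnorm(z, mean = 4)   # importance weights f(z)/g(z)
mean((z > 4) * w)                    # weighted average approximates E_f[h(Z)]
pnorm(4, lower.tail = FALSE)         # exact value, about 3.17e-05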

Users can simply draw samples from the proposal distribution with PBsampler and plug the result into hdIS, along with the target distribution parameters, to compute the importance weights.

# Target distribution parameter
PETarget <- rep(0, p)
sig2Target <- .5
lbdTarget <- .37
# Proposal distribution parameter
PEProp1 <- rep(1, p)
sig2Prop1 <- .5
lbdProp1 <- 1

# Draw samples from proposal distribution
PB <- PBsampler(X = X, PE_1 = PEProp1, sig2_1 = sig2Prop1,
 lbd_1 = lbdProp1, weights = weights, group = group, niter = Niter,
 type="grlasso", PEtype = "coeff")

# Compute importance weights
hdIS(PB, PETarget = PETarget, sig2Target = sig2Target, lbdTarget = lbdTarget,
log = TRUE)
## [1]  -96.45907 -126.87445  -69.81884
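
The log weights can then be combined with any statistic of the sampled estimators to approximate its expectation under the target distribution. A minimal sketch (the statistic h is arbitrary, and with only niter = 3 draws the estimate is purely illustrative):

# Sketch: weight a statistic of the sampled estimators by exp(log weight)
# to approximate its expectation under the target distribution.
logWeights <- hdIS(PB, PETarget = PETarget, sig2Target = sig2Target,
                   lbdTarget = lbdTarget, log = TRUE)
h <- PB$beta[, 1] > .1       # arbitrary statistic h(beta), for illustration
mean(h * exp(logWeights))    # approximates E_target[h] via the identity above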

Metropolis-Hastings Lasso Sampler

In this section, we introduce MHLS, a Markov chain Monte Carlo (MCMC) sampler for the lasso estimator. Although bootstrapping is one of the most convenient sampling methods, it has a clear limitation: it cannot draw samples from a conditional distribution. An MCMC sampler, in contrast, can. The MHLS function draws lasso samples given a fixed active set A.

weights <- rep(1,p)
lbd <- .37
LassoResult <- lassoFit(X = X,Y = Y,lbd = lbd, type = "lasso", weights = weights)
B0 <- LassoResult$B0
S0 <- LassoResult$S0
Result <- MHLS(X = X, PE = B0, sig2 = 1, lbd = 1,
    weights = weights, B0 = B0, S0 = S0, niter = 100, burnin = 0,
    PEtype = "coeff")
Result
## ===========================
## Number of iteration:  100 
## 
## Burn-in period:  0 
## 
## Plug-in PE: 
##  [1] 0.7995741 0.0000000 0.0000000 0.9605223 0.0000000 0.0000000 0.0000000
##  [8] 0.8459423 0.0000000 0.6140244
## 
## PEtype: 
## [1] "coeff"
## 
## Last 10 steps of beta's:
##            [,1] [,2] [,3]      [,4] [,5] [,6] [,7]      [,8] [,9]     [,10]
##  [1,] 0.6461557    0    0 1.0846819    0    0    0 0.4807446    0 0.2335607
##  [2,] 0.3534688    0    0 1.0846819    0    0    0 0.3845279    0 0.2335607
##  [3,] 0.3534688    0    0 0.4931755    0    0    0 0.9697585    0 0.2335607
##  [4,] 0.3449554    0    0 0.4931755    0    0    0 0.9697585    0 0.2335607
##  [5,] 0.3449554    0    0 0.4931755    0    0    0 0.9697585    0 0.2335607
##  [6,] 0.3449554    0    0 0.4931755    0    0    0 0.9697585    0 0.2335607
##  [7,] 0.3449554    0    0 0.4931755    0    0    0 0.9697585    0 0.2335607
##  [8,] 0.3449554    0    0 0.4680810    0    0    0 0.9353853    0 0.2335607
##  [9,] 0.3449554    0    0 0.4680810    0    0    0 0.3043019    0 0.7304599
## [10,] 0.3449554    0    0 0.4680810    0    0    0 0.3043019    0 0.7304599
## 
## last 10 steps of subgradients:
##       [,1]       [,2]      [,3] [,4]        [,5]      [,6]        [,7] [,8]
##  [1,]    1 0.20754108 0.5348568    1 -0.13120091 0.8152909 -0.02089750    1
##  [2,]    1 0.19905322 0.5372861    1 -0.13266806 0.8230639 -0.03403653    1
##  [3,]    1 0.14647739 0.5523336    1 -0.14175596 0.8712117 -0.11542290    1
##  [4,]    1 0.14647739 0.5523336    1 -0.14175596 0.8712117 -0.11542290    1
##  [5,]    1 0.09855956 0.5660480    1 -0.15003870 0.9150938 -0.18959878    1
##  [6,]    1 0.03427603 0.5844463    1 -0.16115031 0.9739632 -0.28910845    1
##  [7,]    1 0.10258729 0.5648952    1 -0.14934249 0.9114053 -0.18336393    1
##  [8,]    1 0.10258729 0.5648952    1 -0.14934249 0.9114053 -0.18336393    1
##  [9,]    1 0.46926264 0.4599506    1 -0.08596154 0.5756121  0.38424243    1
## [10,]    1 0.31442162 0.5042670    1 -0.11272628 0.7174121  0.14455152    1
##              [,9] [,10]
##  [1,] -0.11171851     1
##  [2,] -0.12627530     1
##  [3,] -0.21644361     1
##  [4,] -0.21644361     1
##  [5,] -0.29862339     1
##  [6,] -0.40887057     1
##  [7,] -0.29171577     1
##  [8,] -0.29171577     1
##  [9,]  0.33713776     1
## [10,]  0.07158317     1
## 
## Acceptance rate:
## -----------------------------
##           beta    subgrad
## # Accepted    :   122     62      
## # Moved       :   396     99      
## Acceptance rate   :   0.308   0.626   
## 
## Call:
## MHLS(X = X, PE = B0, sig2 = 1, lbd = 1, weights = weights, B0 = B0, 
##     S0 = S0, niter = 100, burnin = 0, PEtype = "coeff")

We provide summary and plot functions for MHLS results.

summary(Result)
## $beta
##            mean    median       s.d       2.5%     97.5%
##  [1,] 0.3987849 0.3449554 0.2007944 0.11305339 0.8853298
##  [2,] 0.0000000 0.0000000 0.0000000 0.00000000 0.0000000
##  [3,] 0.0000000 0.0000000 0.0000000 0.00000000 0.0000000
##  [4,] 0.5038434 0.4931755 0.3748674 0.04885486 1.0846819
##  [5,] 0.0000000 0.0000000 0.0000000 0.00000000 0.0000000
##  [6,] 0.0000000 0.0000000 0.0000000 0.00000000 0.0000000
##  [7,] 0.0000000 0.0000000 0.0000000 0.00000000 0.0000000
##  [8,] 0.5766294 0.6187194 0.3053770 0.04624490 1.1025832
##  [9,] 0.0000000 0.0000000 0.0000000 0.00000000 0.0000000
## [10,] 0.3805019 0.3966085 0.2790838 0.03597872 0.8889819
## 
## $subgradient
##              mean      median        s.d        2.5%       97.5%
##  [1,]  1.00000000  1.00000000 0.00000000  1.00000000  1.00000000
##  [2,]  0.29516805  0.25011555 0.18012117  0.06105163  0.71134503
##  [3,]  0.50977748  0.52267176 0.05155172  0.39066523  0.57678294
##  [4,]  1.00000000  1.00000000 0.00000000  1.00000000  1.00000000
##  [5,] -0.11605432 -0.12384178 0.03113449 -0.15652206 -0.04411686
##  [6,]  0.73504409  0.77630217 0.16495099  0.35391840  0.94944273
##  [7,]  0.11474736  0.04500697 0.27882409 -0.24766033  0.75898129
##  [8,]  0.99999997  0.99999997 0.00000000  0.99999997  0.99999997
##  [9,]  0.03856301 -0.03870267 0.30891041 -0.36295002  0.75231255
## [10,]  1.00000000  1.00000000 0.00000000  1.00000000  1.00000000
## 
## attr(,"class")
## [1] "summary.MHLS"
plot(Result, index=c(1,4,9))

(Plots for the coordinates in index = c(1, 4, 9) are displayed, one at a time; the console prompts "Hit <Return>" before each.)

Post-selection Inference

postInference.MHLS is a method for post-selection inference. The inference is based on MHLS results from multiple chains; to make the result robust, a different PE value is used for each chain. After drawing samples of (β̂, S) with the MH sampler, we refit the estimator to remove the bias of the lasso. The final output is a (1 − α) confidence interval for each active coefficient.

postInference.MHLS(LassoEst = LassoResult, X = X, Y = Y, sig2.hat = 1, alpha = .05,
 method = "coeff", nChain = 5, niterPerChain = 20, 
 parallel = !(.Platform$OS.type == "windows"))
## $confidenceInterval
## $confidenceInterval$Coeff
## [1] 0.8839833 1.1459780 0.1678165 1.8872666
## 
## $confidenceInterval$CI
##  Var  LowerBound UpperBound
##    1 -0.27866173   2.511045
##    4 -0.11944185   2.076410
##    8 -1.29702629   1.006260
##   10 -0.02723272   2.476990
## 
## 
## $call
## postInference.MHLS(LassoEst = LassoResult, X = X, Y = Y, sig2.hat = 1, 
##     alpha = 0.05, nChain = 5, method = "coeff", niterPerChain = 20, 
##     parallel = !(.Platform$OS.type == "windows"))

If one wants to construct a confidence set instead, set Ctype = "CS" and specify target. target must be a subset of the active set, i.e. a subset of which(LassoResult$B0 != 0). The default is target = which(LassoResult$B0 != 0).

postInference.MHLS(LassoEst = LassoResult, Ctype = "CS",X = X, Y = Y, sig2.hat = 1, alpha = .05,
 method = "coeff", nChain = 5, niterPerChain = 20, 
 parallel = !(.Platform$OS.type == "windows"))
## $confidenceSet
## $confidenceSet$Target
## [1]  1  4  8 10
## 
## $confidenceSet$Center
## [1] 0.8839833 1.1459780 0.1678165 1.8872666
## 
## $confidenceSet$l2_Radius
##    97.5% 
## 2.154031 
## 
## 
## $call
## postInference.MHLS(LassoEst = LassoResult, Ctype = "CS", X = X, 
##     Y = Y, sig2.hat = 1, alpha = 0.05, nChain = 5, method = "coeff", 
##     niterPerChain = 20, parallel = !(.Platform$OS.type == "windows"))
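
For example, to restrict the confidence set to the first two active coordinates (a sketch; output omitted):

# Sketch: target a subset of the active set (output omitted).
activeSet <- which(LassoResult$B0 != 0)
postInference.MHLS(LassoEst = LassoResult, Ctype = "CS", X = X, Y = Y,
  sig2.hat = 1, alpha = .05, method = "coeff", nChain = 5,
  niterPerChain = 20, target = activeSet[1:2],
  parallel = !(.Platform$OS.type == "windows"))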

One can reuse the MH samples for other inference tasks. To do so, first run postInference.MHLS with returnSamples = TRUE, then rerun it with the stored samples passed through the MHsamples argument. See below for an example.

P1 <- postInference.MHLS(LassoEst = LassoResult, Ctype = "CS", X = X, Y = Y, 
  sig2.hat = 1, alpha = .05, method = "coeff", nChain = 5, niterPerChain = 20, 
  returnSamples = TRUE, parallel = !(.Platform$OS.type == "windows"))
postInference.MHLS(LassoEst = LassoResult, MHsamples = P1$MHsamples, Ctype = "CS",
  X = X, Y = Y, method = "coeff")
## $confidenceSet
## $confidenceSet$Target
## [1]  1  4  8 10
## 
## $confidenceSet$Center
## [1] 0.8839833 1.1459780 0.1678165 1.8872666
## 
## $confidenceSet$l2_Radius
##    97.5% 
## 1.845843 
## 
## 
## $call
## postInference.MHLS(LassoEst = LassoResult, Ctype = "CS", X = X, 
##     Y = Y, MHsamples = P1$MHsamples, method = "coeff")
postInference.MHLS(LassoEst = LassoResult, MHsamples = P1$MHsamples, Ctype = "CI",
  X = X, Y = Y, method = "coeff")
## $confidenceInterval
## $confidenceInterval$Coeff
## [1] 0.8839833 1.1459780 0.1678165 1.8872666
## 
## $confidenceInterval$CI
##  Var LowerBound UpperBound
##    1 -0.7034021  2.3134376
##    4  0.3043216  2.0643493
##    8 -0.7021762  0.8601998
##   10  0.4398758  2.3038105
## 
## 
## $call
## postInference.MHLS(LassoEst = LassoResult, Ctype = "CI", X = X, 
##     Y = Y, MHsamples = P1$MHsamples, method = "coeff")