RESEARCH ARTICLE | VOLUME 4, ISSUE 1 | OPEN ACCESS DOI: 10.23937/2469-5831/1510017

Partial Variable Selection and its Applications in Biostatistics

Jingwen Gu1, Ao Yuan1,2*, Chunxiao Zhou2, Leighton Chan2 and Ming T Tan1

1Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University, USA

2Epidemiology and Biostatistics Section, Rehabilitation Medicine Department, Clinical Center, National Institutes of Health, USA

*Corresponding author: Ao Yuan, Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University, Washington DC 20057, USA; Epidemiology and Biostatistics Section, Rehabilitation Medicine Department, Clinical Center, National Institutes of Health, Bethesda MD 20892, USA.

Ming T Tan, Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University, Washington DC 20057, USA.

Accepted: April 12, 2018 | Published: April 14, 2018

Citation: Gu J, Yuan A, Zhou C, Chan L, Tan MT (2018) Partial Variable Selection and its Applications in Biostatistics. Int J Clin Biostat Biom 4:017. doi.org/10.23937/2469-5831/1510017

Copyright: © 2018 Gu J, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract


We propose and study a method for partial covariate selection, which selects only those covariate values that fall within their effective ranges. The coefficient estimates based on the resulting data are more interpretable, as they reflect only the effective covariates. This contrasts with existing variable selection methods, in which variables are selected or deleted in whole. To test the validity of the partial variable selection, we extend the Wilks theorem to handle this case. Simulation studies are conducted to evaluate the performance of the proposed method, and it is applied to a real data analysis as illustration.

Keywords


Covariate, Effective range, Partial variable selection, Linear model, Likelihood ratio test

Introduction


Variable selection is common practice in biostatistics, and there is a vast literature on the topic. Commonly used methods include the likelihood ratio test [1], the Akaike information criterion (AIC) [2], the Bayesian information criterion (BIC) [3], the minimum description length [4,5], stepwise regression, and the Lasso [6]. Principal components analysis models linear combinations of the original covariates, reducing a large number of covariates to a handful of major principal components, but the result is not easy to interpret in terms of the original covariates. Stepwise regression starts from the full model and deletes covariates one by one according to some measure of statistical significance. May, et al. [7] addressed variable selection in artificial neural network models; Mehmood, et al. [8] reviewed variable selection with partial least squares models; Wang, et al. [9] addressed variable selection in generalized additive partial linear models; and Liu, et al. [10] addressed variable selection in semiparametric additive partial linear models. The Lasso [6,11] and its variants [12,13] are used to select a few significant variables in the presence of a large number of covariates.

However, existing methods select or delete variables only in whole, which may not be the most desirable in some biomedical practice. For example, in two heart disease studies [14,15], medical researchers identified more than ten risk factors over long-term investigations. With existing variable selection methods, some of these risk factors would be deleted entirely, which is undesirable: a risk factor is truly risky only when it falls within some risk range. Deleting whole variables in this case is not reasonable; a more reasonable approach is to find the risk ranges of these variables and delete the variable values in the non-risky ranges. In other studies, some covariate values may be mere random errors that do not contribute to the responses, and removing those values makes the model interpretation more accurate. In this sense we select a variable only where its value falls within some range. To our knowledge, no method for this kind of partial variable selection has appeared in the literature, and developing one is the goal of our study. Note that in existing methods variables are selected or deleted in whole, while in our method variables may be partially selected or deleted, i.e., only some proportion of a variable's observations is selected or deleted. The latter is very different from the existing methods. In summary, with traditional variable selection methods such as stepwise regression or the Lasso, each covariate is removed either wholly or not at all. This is not very reasonable: some of the removed covariates may be partially effective, so removing all of their values may yield misleading results, or at least lose information, while for the variables remaining in the model, not all of their values are necessarily effective for the analysis. With the proposed method, only the non-effective values of the covariates are removed, and the effective values are kept in the analysis. This is more reasonable than the existing all-or-nothing removal.

For deletion of whole variables, the validity of the selection can be justified using the Wilks result: under the null hypothesis that the deleted variables have no effect, twice the log-likelihood ratio is asymptotically chi-squared distributed. We extend the Wilks theorem to the proposed partial variable deletion and use it to justify the partial deletion procedure. Simulation studies are conducted to evaluate the performance of the proposed method, and it is applied to the analysis of a real data set as illustration.

The Proposed Method


The observed data are $(y_i, x_i)$ $(i = 1, \ldots, n)$, where $y_i$ is the response and $x_i \in \mathbb{R}^d$ is the covariate vector of the $i$-th subject. Denote $y_n = (y_1, \ldots, y_n)'$ and $X_n = (x_1', \ldots, x_n')'$. Consider the linear model

$$y_n = X_n \beta + \varepsilon_n, \qquad (1)$$

where $\beta = (\beta_1, \ldots, \beta_d)'$ is the vector of regression parameters and $\varepsilon_n = (\varepsilon_1, \ldots, \varepsilon_n)'$ is the vector of random errors, or residual departures from the linear model assumption. Without loss of generality we consider the case in which the $\varepsilon_i$'s are independently and identically distributed (iid), i.e., with variance matrix $\mathrm{Var}(\varepsilon) = \sigma^2 I_n$, where $I_n$ is the $n$-dimensional identity matrix. When the $\varepsilon_i$'s are not iid, it is often assumed that $\mathrm{Var}(\varepsilon) = \Omega$ for some known positive-definite $\Omega$; making the transformation $\tilde{y}_n = \Omega^{-1/2} y_n$, $\tilde{X}_n = \Omega^{-1/2} X_n$ and $\tilde{\varepsilon} = \Omega^{-1/2} \varepsilon$ gives the model $\tilde{y}_n = \tilde{X}_n \beta + \tilde{\varepsilon}$, in which the $\tilde{\varepsilon}_i$'s are iid with $\mathrm{Var}(\tilde{\varepsilon}) = I_n$. When $\Omega$ is unknown, it can be estimated in various ways. So below we only discuss the case in which the $\varepsilon_i$'s are iid.
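As a minimal sketch of this whitening step (assuming a known positive-definite $\Omega$), one can use a Cholesky factor $L$ with $\Omega = LL'$ in place of the symmetric square root $\Omega^{-1/2}$; pre-multiplying by $L^{-1}$ whitens the errors equally well. The function name here is illustrative.

```python
# A minimal sketch of the whitening transformation for non-iid errors with a
# known positive-definite Omega; a Cholesky factor replaces the symmetric
# square root Omega^{-1/2}, which whitens the errors equally well.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def whiten(y, X, Omega):
    """Return (y_tilde, X_tilde) so the transformed errors have Var = I_n."""
    L = cholesky(Omega, lower=True)                # Omega = L L'
    y_tilde = solve_triangular(L, y, lower=True)   # L^{-1} y
    X_tilde = solve_triangular(L, X, lower=True)   # L^{-1} X
    return y_tilde, X_tilde
```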

Summary of existing work

We first give a brief review of the existing method of variable selection. Assume the model residual $\epsilon = y - x'\beta$ has a known density function $f(\cdot)$ (such as the normal), possibly with some unknown parameter(s). For simplicity of discussion we assume there are no unknown parameters. Then the log-likelihood is

$$l_n(\beta) = \sum_{i=1}^{n} \log f(y_i - x_i'\beta).$$

Let $\hat{\beta}$ be the maximum likelihood estimate (MLE) of $\beta$ (when $f(\cdot)$ is the standard normal density, $\hat{\beta}$ is just the least squares estimate). Suppose we delete $k$ $(\leq d)$ columns of $X_n$ and the corresponding components of $\beta$; denote the remaining covariate matrix by $X_n^-$, the resulting parameter vector by $\beta^-$, and the corresponding MLE by $\hat{\beta}^-$. Under the hypothesis $H_0$ that the deleted columns of $X_n$ have no effect, or equivalently that the deleted components of $\beta$ are all zero, we have asymptotically [1]

$$2[l_n(\hat{\beta}) - l_n(\hat{\beta}^-)] \xrightarrow{D} \chi_k^2,$$

where $\chi_k^2$ is the chi-squared distribution with $k$ degrees of freedom. For a given nominal level $\alpha$, let $\chi_k^2(1-\alpha)$ be the $(1-\alpha)$-th quantile of the $\chi_k^2$ distribution. If $2[l_n(\hat{\beta}) - l_n(\hat{\beta}^-)] \geq \chi_k^2(1-\alpha)$, then $H_0$ is rejected at significance level $\alpha$, and it is not good to delete these columns of $X_n$; otherwise we accept $H_0$ and delete these columns of $X_n$.
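To fix ideas, here is a minimal sketch of this classical test under standard normal errors, in which case the MLE is the least squares estimate; the function names are illustrative.

```python
# A minimal sketch of the whole-variable likelihood ratio test under N(0,1)
# errors, where the MLE is the least squares estimate.
import numpy as np
from scipy import stats

def loglik(y, X, beta):
    """Gaussian log-likelihood l_n(beta) with unit error variance."""
    r = y - X @ beta
    return -0.5 * np.sum(r ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

def whole_variable_lrt(y, X, drop_cols, alpha=0.05):
    """Test H0: the coefficients of the columns in drop_cols are zero."""
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    keep = [j for j in range(X.shape[1]) if j not in drop_cols]
    beta_sub, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    beta_reduced = np.zeros(X.shape[1])
    beta_reduced[keep] = beta_sub                    # deleted components set to 0
    lam = 2 * (loglik(y, X, beta_full) - loglik(y, X, beta_reduced))
    cutoff = stats.chi2.ppf(1 - alpha, df=len(drop_cols))
    return lam, cutoff, lam >= cutoff                # True: reject H0, keep the columns
```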

There are other methods for selecting columns of $X_n$, such as AIC, BIC and their variants, as in the model selection literature. In these methods, the optimal deletion of columns of $X_n$ corresponds to the best selected model, the one that optimizes the AIC or BIC. These methods are not as solid as the likelihood ratio test above, as they may sometimes depend on eye inspection to choose the model that optimizes the criterion.

All the above methods require the models under consideration to be nested within each other, i.e., one is a sub-model of the other. Another, more general model selection criterion is the minimum description length (MDL), a measure of complexity developed by Kolmogorov [4], Wallace and Boulton [16], and others. Kolmogorov complexity is closely related to entropy: for the output of a Markov information source, normalized by the length of the output, it converges almost surely (as the length of the output goes to infinity) to the entropy of the source. Let $G = \{g(\cdot, \cdot)\}$ be a finite set of candidate models under consideration, and $\Theta = \{\theta_j : j = 1, \ldots, h\}$ the set of parameters of interest. $\theta_i$ may or may not be nested within some other $\theta_j$, or $\theta_i$ and $\theta_j$ may have the same dimension but different parametrizations. Consider a fixed density $f(\cdot \mid \theta_j)$, with parameter $\theta_j$ running through a subset $\Gamma_j \subset \mathbb{R}^{k_j}$. To emphasize the index of the parameter, we denote the MLE of $\theta_j$ under model $f(\cdot \mid \cdot)$ by $\hat{\theta}_j$ (instead of $\hat{\theta}_n$, which would emphasize the dependence on the sample size), $I(\theta_j)$ the Fisher information for $\theta_j$ under $f(\cdot \mid \cdot)$, $|I(\theta_j)|$ its determinant, and $k_j$ the dimension of $\theta_j$. The MDL criterion (see, for example, Rissanen [17], the review paper by Hansen and Yu [5], and references therein) chooses $\theta_j$ to minimize

$$-\sum_{i=1}^{n} \log f(y_i \mid \hat{\theta}_j) + \frac{k_j}{2} \log \frac{n}{2\pi} + \log \int_{\Gamma_j} \sqrt{|I(\theta_j)|}\, d\theta_j, \qquad (j = 1, \ldots, h). \qquad (3)$$

This method does not require the models to be nested, but it still selects or deletes whole columns. The other existing methods for variable selection, such as stepwise regression and the Lasso, also delete or keep whole variables, and so do not apply to our problem.

The proposed work

Now we come to our question, which is non-standard; we are not aware of a formal method addressing it, yet we believe it is of practical meaning. Consider deleting some of the components within $k$ fixed $(k \leq d)$ columns of $X_n$, with deleted proportions $\gamma_1, \ldots, \gamma_k$ $(0 < \gamma_j < 1)$ for these columns. Denote by $X_n^-$ the remaining covariate matrix, which is $X_n$ with the deleted entries replaced by 0's. Before the partial deletion, the model is

$$y_n = X_n \beta + \varepsilon_n.$$

After the partial deletion of covariates, the model becomes

$$y_n = X_n^- \beta^- + \varepsilon_n.$$

Note that here $\beta$ and $\beta^-$ have the same dimension, since no covariate is deleted completely. $\beta$ gives the effects of the original covariates; $\beta^-$ gives the effects of the covariates after the partial deletion, i.e., the effects of the effective covariates. As an oversimplified example, suppose we have $n = 5$ individuals, with responses $y_n = (y_1, y_2, y_3, y_4, y_5)'$ and covariate vectors $x_1 = (1.3, 0.2, 1.5)'$, $x_2 = (0.1, 0.9, 1.3)'$, $x_3 = (1.1, 1.4, 0.3)'$, $x_4 = (0.8, 1.2, 1.7)'$, $x_5 = (1.0, 2.1, 1.1)'$, and $X_n = (x_1, x_2, x_3, x_4, x_5)'$. Then $\beta$ gives the effects of the regression of $y_n$ on $X_n$. If we remove some seemingly insignificant covariate components, we obtain, for example, $x_1^- = (1.3, 0, 1.5)'$, $x_2^- = (0, 0.9, 1.3)'$, $x_3^- = (1.1, 1.4, 0)'$, $x_4^- = (0.8, 1.2, 1.7)'$, $x_5^- = (1.0, 2.1, 1.1)'$, and $X_n^- = (x_1^-, x_2^-, x_3^-, x_4^-, x_5^-)'$. In this case $\beta^-$ gives the effects of regressing $y_n$ on $X_n^-$. Thus, although $\beta$ and $\beta^-$ have the same structure, they have different interpretations. The problem can be formulated as testing the hypothesis

$$H_0: \beta = \beta^- \quad \text{vs.} \quad H_1: \beta \neq \beta^-.$$

If H 0 is accepted, the partial deletion is valid.
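To make the construction concrete, here is a toy sketch of forming $X_n^-$ and fitting both regressions; the deleted entries are arbitrary here, and under normal errors the MLEs are the least squares fits.

```python
# A toy sketch of partial deletion: zero the selected entries of X to form
# X^-, then fit beta on X and beta^- on X^-; deleted entries are illustrative.
import numpy as np

def partial_delete(X, deletions):
    """Return a copy of X with the (row, column) pairs in `deletions` set to 0."""
    X_minus = X.copy()
    for i, j in deletions:
        X_minus[i, j] = 0.0
    return X_minus

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
y = X @ np.array([0.5, -0.3, 0.8]) + rng.normal(size=5)

X_minus = partial_delete(X, [(0, 1), (1, 0), (2, 2)])     # e.g. three small entries
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)          # effects of original covariates
beta_minus, *_ = np.linalg.lstsq(X_minus, y, rcond=None)  # effects of effective covariates
```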

Note that, unlike the standard null hypothesis that some components of the parameter are zero, the above null hypothesis is not nested: $\beta^-$ is not obtained by constraining a subset of the components of $\beta$ to zero, so the existing Wilks theorem for the likelihood ratio statistic does not directly apply here.

Denote by $l_n(\beta^-)$ the corresponding log-likelihood based on the data $(y_n, X_n^-)$, and by $\hat{\beta}^-$ the corresponding MLE. Since, after the partial deletion, $\hat{\beta}^-$ is the MLE of $\beta^-$ under a constrained log-likelihood, while $\hat{\beta}$ is the MLE under the full likelihood, we have $l_n(\hat{\beta}^-) \leq l_n(\hat{\beta})$. Parallel to the log-likelihood ratio statistic for whole-variable deletion, we define, for our case,

$$\Lambda_n = 2[l_n(\hat{\beta}) - l_n(\hat{\beta}^-)].$$

Let $(j_1, \ldots, j_k)$ be the columns with partial deletions, $C_{j_r} = \{i : x_{i,j_r} \text{ is deleted}, 1 \leq i \leq n\}$ the index set of the deleted covariate values in the $j_r$-th column $(r = 1, \ldots, k)$, and $|C_{j_r}|$ the cardinality of $C_{j_r}$, so that $\gamma_r = |C_{j_r}|/n$ $(r = 1, \ldots, k)$. For different $j_r$ and $j_s$, $C_{j_r}$ and $C_{j_s}$ may or may not have common elements. We first give the result (Theorem 1 below) in the simple case in which the index sets $C_{j_r}$ are mutually exclusive; Corollary 1 then gives the result in the more general case in which the $C_{j_r}$ need not be mutually exclusive.

For a given $X_n$, there are many different ways of partial column deletion, and we may use Theorem 1 below to test each of them. Given a significance level $\alpha$, a deletion is valid at level $\alpha$ if $\Lambda_n < Q(1-\alpha)$, where $Q(1-\alpha)$ is the $(1-\alpha)$-th quantile of the $\sum_{j=1}^k \gamma_j \chi_j^2$ distribution, which can be computed by simulation for given $(\gamma_1, \ldots, \gamma_k)$.
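A minimal Monte Carlo sketch of this cutoff computation (function name and draw count are illustrative):

```python
# A minimal Monte Carlo sketch of the cutoff Q(1 - alpha) for the mixture
# distribution sum_j gamma_j * chi^2_1 appearing in Theorem 1 below.
import numpy as np

def mixture_quantile(gammas, alpha=0.05, n_draws=200_000, seed=0):
    rng = np.random.default_rng(seed)
    chis = rng.chisquare(df=1, size=(n_draws, len(gammas)))  # independent chi^2_1 draws
    mix = chis @ np.asarray(gammas)                          # sum_j gamma_j * chi^2_{1,j}
    return np.quantile(mix, 1 - alpha)

# For instance, gamma = (1/10, 1/5, 1/4) as in Example 1 below:
print(mixture_quantile([0.10, 0.20, 0.25]))
```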

The following theorem is a generalization of the Wilks theorem [1]: deleting whole columns of $X_n$ corresponds to $\gamma_j = 1$ $(j = 1, \ldots, k)$ in the theorem, which recovers the existing Wilks theorem.

Theorem 1: Under $H_0$, suppose $C_{j_r} \cap C_{j_s} = \phi$, the empty set, for all $1 \leq r < s \leq k$. Then

$$\Lambda_n \xrightarrow{D} \sum_{j=1}^{k} \gamma_j \chi_j^2,$$

where $\chi_1^2, \ldots, \chi_k^2$ are iid chi-squared random variables with 1 degree of freedom.

Note that in the Wilks problem the null hypothesis is that the coefficients corresponding to some variables are zero, so the null hypothesis is nested within the alternative; in our problem the null hypothesis concerns the coefficients corresponding to partially deleted variables, and it is not nested within the alternative. So the results of the two methods are not really comparable.

The case in which the $C_{j_r}$'s are not mutually exclusive is a bit more complicated. We first re-write the sets $C_{j_r}$ as

$$\bigcup_{r=1}^{k} C_{j_r} = \bigcup_{r=1}^{k} \bigcup_{(j_1, \ldots, j_r)} D_{j_1, \ldots, j_r},$$

where the $D_{j_1, \ldots, j_r}$'s are mutually exclusive: $D_{j_1}, \ldots, D_{j_k}$ are the index sets belonging to one column of $X_n$ only; the $D_{j_1, j_2}$'s are the index sets common to columns $j_1$ and $j_2$ only; the $D_{j_1, j_2, j_3}$'s are the index sets common to columns $j_1$, $j_2$ and $j_3$ only; and so on. In general some of the $D_{j_1, \ldots, j_r}$'s are empty. Let $|D_{j_1, \ldots, j_r}|$ be the cardinality of $D_{j_1, \ldots, j_r}$, and $\gamma_{j_1, \ldots, j_r} = |D_{j_1, \ldots, j_r}|/n$ $(r = 1, \ldots, k)$.
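A small sketch of this decomposition, grouping each deleted row index by the set of columns deleting it; the set labels and indices below are those of Example 2 further down (1-based rows), and the function name is illustrative.

```python
# A small sketch of decomposing possibly overlapping deletion sets C_{j_r}
# into the mutually exclusive pieces D_{j_1,...,j_r}, keyed by column pattern.
def decompose(C_sets):
    """C_sets: dict mapping column label -> set of deleted row indices."""
    D = {}
    all_rows = set().union(*C_sets.values())
    for i in all_rows:
        pattern = tuple(sorted(j for j, C in C_sets.items() if i in C))
        D.setdefault(pattern, set()).add(i)
    return D   # e.g. D[(1, 3)] is D_{1,3}: rows deleted in columns 1 and 3 only

# The three deletion sets of Example 2 below (n = 1000):
C = {1: set(range(101, 301)) | set(range(651, 751)),
     2: set(range(201, 351)),
     3: set(range(251, 301)) | set(range(701, 801))}
gammas = {p: len(rows) / 1000 for p, rows in decompose(C).items()}
```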

By examining the proof of Theorem 1, we obtain the following corollary, which gives the result in this more general case.

Corollary 1: Under H 0 , we have

$$\Lambda_n = 2[l_n(\hat{\beta}) - l_n(\hat{\beta}^-)] \xrightarrow{D} \sum_{r=1}^{k} \sum_{(j_1, \ldots, j_r)} \gamma_{j_1, \ldots, j_r} \chi_{j_1, \ldots, j_r}^2,$$

where the $\chi_{j_1, \ldots, j_r}^2$'s are all independent chi-squared random variables with $r$ degrees of freedom $(r = 1, \ldots, k)$.

Below we give two examples to illustrate the use of Theorem 1 and Corollary 1.

Example 1: $n = 1000$, $d = 5$, $k = 3$. Columns $(1, 2, 4)$ have partial deletions, with index sets $C_1 = \{201, 202, \ldots, 300\}$, $C_2 = \{351, 352, \ldots, 550\}$, $C_3 = \{601, 602, \ldots, 850\}$; the $C_j$'s have no overlap, and $\gamma_1 = 1/10$, $\gamma_2 = 1/5$, $\gamma_3 = 1/4$. So by Theorem 1, under $H_0$ we have

$$2[l_n(\hat{\beta}) - l_n(\hat{\beta}^-)] \xrightarrow{D} \frac{1}{10}\chi_1^2 + \frac{1}{5}\chi_2^2 + \frac{1}{4}\chi_3^2,$$

where all the chi-squared random variables are independent, each with 1 degree of freedom.

Example 2: $n = 1000$, $d = 5$, $k = 3$. Columns $(1, 2, 4)$ have partial deletions, with index sets $C_1 = \{101, 102, \ldots, 300;\ 651, 652, \ldots, 750\}$, $C_2 = \{201, 202, \ldots, 350\}$, $C_3 = \{251, 252, \ldots, 300;\ 701, 702, \ldots, 800\}$. In this case the $C_j$'s overlap, so Theorem 1 cannot be used directly and we use Corollary 1. Here $D_1 = \{101, 102, \ldots, 200;\ 651, 652, \ldots, 700\}$, $D_2 = \{301, 302, \ldots, 350\}$, $D_3 = \{751, 752, \ldots, 800\}$, $D_{1,2} = \{201, 202, \ldots, 250\}$, $D_{1,3} = \{701, 702, \ldots, 750\}$, $D_{2,3} = \phi$, $D_{1,2,3} = \{251, 252, \ldots, 300\}$; $\gamma_1 = 3/20$, $\gamma_2 = 1/20$, $\gamma_3 = 1/20$, $\gamma_{1,2} = 1/20$, $\gamma_{1,3} = 1/20$, $\gamma_{2,3} = 0$, $\gamma_{1,2,3} = 1/20$. So by Corollary 1, under $H_0$ we have

$$2[l_n(\hat{\beta}) - l_n(\hat{\beta}^-)] \xrightarrow{D} \frac{3}{20}\chi_1^2 + \frac{1}{20}\chi_2^2 + \frac{1}{20}\chi_3^2 + \frac{1}{20}\chi_{1,2}^2 + \frac{1}{20}\chi_{1,3}^2 + \frac{1}{20}\chi_{1,2,3}^2,$$

where all the chi-squared random variables are independent, with $\chi_1^2$, $\chi_2^2$ and $\chi_3^2$ each having 1 degree of freedom, $\chi_{1,2}^2$ and $\chi_{1,3}^2$ each having 2 degrees of freedom, and $\chi_{1,2,3}^2$ having 3 degrees of freedom.
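The cutoff for this general mixture can again be computed by Monte Carlo; a sketch extending the earlier `mixture_quantile` to arbitrary degrees of freedom, using the weights and degrees of freedom worked out above:

```python
# A sketch of the Monte Carlo cutoff for the general mixture in Corollary 1,
# where each component chi-squared may have more than 1 degree of freedom.
import numpy as np

def general_mixture_quantile(weights, dofs, alpha=0.05, n_draws=200_000, seed=0):
    rng = np.random.default_rng(seed)
    total = np.zeros(n_draws)
    for w, d in zip(weights, dofs):
        total += w * rng.chisquare(df=d, size=n_draws)  # independent components
    return np.quantile(total, 1 - alpha)

# Example 2: weights (3/20, 1/20, 1/20) with 1 df, (1/20, 1/20) with 2 df, 1/20 with 3 df.
q = general_mixture_quantile([0.15, 0.05, 0.05, 0.05, 0.05, 0.05],
                             [1, 1, 1, 2, 2, 3])
```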

Next, we discuss the consistency of the estimator $\hat{\beta}^-$ under the null hypothesis $H_0$. Let $x^- = x_r^-$ with probability $\gamma_r$ $(r = 0, 1, \ldots, k)$, where $x_r^-$ is an i.i.d. copy of the $x_i$'s whose components have deletion index in $C_{j_r}$; in particular, $C_{j_0}$ is the index set for those covariates without partial deletion, with $\gamma_0 = 1 - \sum_{r=1}^{k} \gamma_r$.

Theorem 2: Under the conditions of Theorem 1,

i) $\hat{\beta}^- \to \beta_0$ (a.s.);

ii) $\sqrt{n}(\hat{\beta}^- - \beta_0) \xrightarrow{D} N(0, \Omega),$

where

$$\Omega = E_{\beta_0}[\dot{l}(\beta_0)\, \dot{l}'(\beta_0)] = E[(x^- - \mu)(x^- - \mu)'] \int \frac{\dot{f}^2(\epsilon)}{f(\epsilon)}\, d\epsilon.$$

To extend the results of Theorem 2 to the general case, we need some more notation. Let $x^- = x_{j_1, \ldots, j_r}^-$ with probability $\gamma_{j_1, \ldots, j_r}$ $(r = 0, 1, \ldots, k)$, where $x_{j_1, \ldots, j_r}^-$ is an i.i.d. copy of the $x_i$'s whose components have deletion index in $D_{j_1, \ldots, j_r}$.

Corollary 2: Under the conditions of Corollary 1, the results of Theorem 2 hold with $x^-$ as given above.

Computationally, $E[(x^- - \mu)(x^- - \mu)']$ is well approximated by

$$E[(x^- - \mu)(x^- - \mu)'] \approx \sum_{r=0}^{k} \frac{|D_{j_1, \ldots, j_r}|}{n} \frac{1}{|D_{j_1, \ldots, j_r}|} \sum_{(i,j) \in D_{j_1, \ldots, j_r}} (x_{i,j} - \hat{\mu}_{j_1, \ldots, j_r})(x_{i,j} - \hat{\mu}_{j_1, \ldots, j_r})',$$

where the notation $\sum_{(i,j) \in D_{j_1, \ldots, j_r}}$ means summation over those $x_{i,j}$'s with deletion index in $D_{j_1, \ldots, j_r}$, and $\hat{\mu}_{j_1, \ldots, j_r} = \frac{1}{|D_{j_1, \ldots, j_r}|} \sum_{(i,j) \in D_{j_1, \ldots, j_r}} x_{i,j}$.
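One reading of this plug-in approximation, sketched below under the assumption that the deletion mask is available, groups the rows of $X_n^-$ by their deletion pattern and weights each within-group sample covariance by its relative size; the function name is illustrative.

```python
# A sketch of the plug-in approximation to E[(x^- - mu)(x^- - mu)'], computed
# as a size-weighted average of within-group sample covariances, where rows
# are grouped by deletion pattern (mask[i, j] is True if x_ij was deleted).
import numpy as np

def approx_cov(X_minus, mask):
    n, d = X_minus.shape
    out = np.zeros((d, d))
    groups = {}
    for i in range(n):
        groups.setdefault(tuple(mask[i]), []).append(i)
    for rows in groups.values():
        G = X_minus[rows]
        centered = G - G.mean(axis=0)            # subtract mu_hat for this group
        out += (len(rows) / n) * (centered.T @ centered) / len(rows)
    return out
```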

Simulation Study and Application


Simulation study

We illustrate the proposed method with two examples, Examples 3 and 4 below. The former rejects the null hypothesis $H_0$ while the latter accepts it. In each case we simulate $n = 1000$ i.i.d. observations with response $y_i$ and covariates $x_i = (x_{i1}, x_{i2}, x_{i3}, x_{i4}, x_{i5})'$ $(i = 1, \ldots, n)$. We first generate the covariates, sampling the $x_i$'s from the 5-dimensional normal distribution with mean vector $\mu = (3.1, 1.8, 0.5, 0.7, 1.5)'$ and a given covariance matrix $\Gamma$.

Then, given the covariates, the responses $y_i$ are generated as

$$y_i = x_i'\beta_0 + \epsilon_i, \qquad (i = 1, \ldots, n),$$

where $\beta_0 = (0.42, 0.11, 0.65, 0.83, 0.72)'$ and the $\epsilon_i$'s are i.i.d. $N(0, 1)$.

A hypothesis test is conducted to examine whether the partial deletion is valid. The significance level is set at $\alpha = 0.05$. The experiment is repeated 1000 times, and Prop denotes the proportion of replicates with $\Lambda_n > Q(1-\alpha)$, where $Q(1-\alpha)$ is the $(1-\alpha)$-th quantile of the distribution $\sum_{j=1}^k \gamma_j \chi_j^2$ given in Theorem 1, computed via simulation.
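A sketch of one replicate of this experiment follows; the covariance $\Gamma$ is taken as the identity purely for illustration, and under $N(0,1)$ errors $\Lambda_n$ reduces to a difference of residual sums of squares.

```python
# A sketch of one simulation replicate: generate (y, X), partially delete the
# entries with |x_ij| < 1/10 as in Example 3, and compute Lambda_n, which under
# unit-variance Gaussian errors equals the difference of residual sums of squares.
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 5
mu = np.array([3.1, 1.8, 0.5, 0.7, 1.5])
beta0 = np.array([0.42, 0.11, 0.65, 0.83, 0.72])
X = rng.multivariate_normal(mu, np.eye(d), size=n)   # Gamma = I_5 assumed here
y = X @ beta0 + rng.normal(size=n)

mask = np.abs(X) < 0.1                 # candidate noise entries
X_minus = np.where(mask, 0.0, X)       # partial deletion
gammas = mask.mean(axis=0)             # deleted proportion per column

def rss(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

lam = rss(y, X_minus) - rss(y, X)      # Lambda_n under N(0,1) errors
# Compare lam with mixture_quantile(gammas[gammas > 0]) from the earlier sketch.
```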

Example 3: In this example, five data sets are generated according to the method above, with five different values of $\beta_0$. We are interested in whether covariate values with $|x_{ij}| < \frac{1}{10}$ can be deleted. The proportions $\gamma = (\gamma_1, \ldots, \gamma_k)$ of $x_{ij}$'s with $|x_{ij}| < \frac{1}{10}$ are shown for each data set in Table 1, whose five rows correspond to the five data sets. For each data set, the parameter $\beta$ is estimated, the test is conducted using the given $\gamma$, $\Lambda_n$ is computed, $Q(1-\alpha)$ is given, and the corresponding p-value is provided. Note that for our problem, a p-value smaller than $\alpha$ means a significant value of $\Lambda_n$, i.e., a significant difference between the regression coefficients of the original covariates and those of the covariates after partial deletion, which in turn implies that the null hypothesis should be rejected and the partial deletion should not be conducted (Table 1).

Table 1: Simulation results: $\gamma$, $\Lambda_n$, $Q(1-\alpha)$ and p-value for each $\beta_0$.

We see that the p-values for rejecting $H_0$ are all smaller than 0.05 for the five sets of $\beta_0$. This suggests that covariate values with $|x_{ij}| < \frac{1}{10}$ should not be deleted at significance level $\alpha = 0.05$.

Example 4: In this example the original $X$ is as in Example 3, but we now replace the entries in the first 100 rows and first three columns by noise $\epsilon \sim N(0, \frac{1}{9})$. The deletion proportion $\gamma = (0.1, 0.1, 0.1)$ is fixed, with the $x_{ij}$'s whose absolute values fall below the corresponding 0.1 quantile being deleted. We are interested in whether these noise values can be deleted, i.e., whether $H_0$ is accepted or not. The results are shown in Table 2.

Table 2: Simulation results: $\gamma$, $\Lambda_n$, $Q(1-\alpha)$ and p-value for each $\beta_0$.

We see that the p-values for rejecting $H_0$ are all greater than 0.95 for the five sets of $\beta_0$. This suggests that the data provide strong evidence that the deleted values are noise and are not necessary to the analysis at the 0.05 significance level.

Application to a real data problem

We analyze a data set from the Deprenyl and Tocopherol Antioxidative Therapy of Parkinsonism study, obtained from the National Institutes of Health (NIH) (for a detailed description and data link, see https://www.ncbi.nlm.nih.gov/pubmed/2515723). It is a multi-center, placebo-controlled clinical trial that aimed to determine whether treatment of patients with early Parkinson's disease could prolong the time before levodopa therapy was required. The number of patients enrolled was 800. The selected subjects were untreated patients who had had Parkinson's disease (stage I or II) for less than five years and who met other eligibility criteria. They were randomly assigned, according to a two-by-two factorial design, to one of four treatment groups: 1) Placebo; 2) Active tocopherol; 3) Active deprenyl; 4) Active deprenyl and tocopherol. Patients were observed for 14 ± 6 months and reevaluated every 3 months. At each visit, the Unified Parkinson's Disease Rating Scale (UPDRS), including its motor, mental and activities of daily living components, was evaluated. The statistical analysis was based on the 800 subjects. The results revealed no beneficial effect of tocopherol, while deprenyl was found to significantly prolong the time before levodopa therapy was required, reducing the risk of disability by 50 percent as measured by the UPDRS.

Our goal is to examine whether some of the covariates can be partially deleted. If traditional variable selection methods such as stepwise regression or the Lasso were used, some covariates would be removed wholly from the analysis. This is not very reasonable, since some of the removed covariates may be partially effective, and removing all of their values may yield misleading results, or at least lose information. We use the proposed method to examine three response variables, PDRS, TREMOR and PIGD, with three covariates, Age, Motor and ADL, for each response. The deleted covariate values are the ones below the $\gamma$-th data quantile, with $\gamma = 0.01, 0.02, 0.03$ and $0.05$. We examine each response and covariate one by one. The results are shown in Table 3, Table 4 and Table 5 below.
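Concretely, the per-covariate test used here can be sketched as follows, zeroing one covariate's values below its $\gamma$-th sample quantile; `rss` and `mixture_quantile` are the illustrative helpers from the earlier sketches.

```python
# A sketch of the per-covariate partial deletion test for the real data:
# values of column `col` below its gamma-th sample quantile are zeroed, then
# Lambda_n is compared with the simulated cutoff Q(1 - alpha).
import numpy as np

def test_quantile_deletion(y, X, col, gamma, alpha=0.05):
    cut = np.quantile(X[:, col], gamma)
    mask = X[:, col] < cut
    X_minus = X.copy()
    X_minus[mask, col] = 0.0
    lam = rss(y, X_minus) - rss(y, X)           # rss() as in the earlier sketch
    q = mixture_quantile([mask.mean()], alpha)  # one partially deleted column
    return lam, q, lam < q                      # True: deletion valid at level alpha
```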

Table 3: Response TREMOR: $\Lambda_n$ values and estimated regression coefficients.

Table 4: Response PIGD: $\Lambda_n$ values and estimated regression coefficients.

Table 5: Response PDRS: $\Lambda_n$ values and estimated regression coefficients.

In Table 3, the response TREMOR is examined. For the covariate Age, the likelihood ratio $\Lambda_n$ is larger than the cut-off point $Q(1-\alpha)$ at all the deletion proportions, suggesting that for Age no partial deletion at these proportions should be performed. For the covariate Motor, $\Lambda_n$ is smaller than the cut-off point $Q(1-\alpha)$ at the 0.01 proportion, so this covariate can be partially deleted at this proportion; in other words, the values of Motor below its 1% quantile have no impact on the analysis, can be treated as noise, and should be removed. For the covariate ADL, with deletion proportions 0.01-0.05, the likelihood ratio $\Lambda_n$ is smaller than $Q(1-\alpha)$, which suggests that the lower 1%-5% of this covariate's values have no impact on the analysis and should be deleted. After removing the corresponding proportions of Motor and ADL, the model is re-fitted to obtain the parameter estimates shown there. These estimates are more meaningful than the ones based on the whole covariate data, since the noise values of the covariates are removed and only the effective covariate values enter the analysis. If traditional variable selection methods such as stepwise regression or the Lasso were used instead, the whole covariate Motor, ADL, or both might be removed, leading to loss of information or even misleading results.

In Table 4, the response PIGD is investigated. For the covariate Age, $\Lambda_n$ is larger than the cut-off point $Q(1-\alpha)$ at the 0.02, 0.03 and 0.05 proportions, suggesting that partial deletion at these proportions is not appropriate. For the covariate Motor, $\Lambda_n$ is smaller than the cut-off point $Q(1-\alpha)$ at the deletion proportions 0.02 and 0.03, suggesting that the lower 2%-3% of its values should be deleted from the analysis. For the covariate ADL, $\Lambda_n$ is larger than the cut-off point $Q(1-\alpha)$ at the deletion proportions 0.02, 0.03 and 0.05, hence partial deletion at these proportions is not valid. After deleting the 3% smallest values of Motor, the model is re-fitted to obtain the parameter estimates shown in Table 4. The new estimates are more meaningful since the non-effective values of the covariate Motor are removed from the analysis.

In Table 5, the response is PDRS. The likelihood ratios $\Lambda_n$ for Age, Motor and ADL are all larger than $Q(1-\alpha)$ at the deletion proportions 0.01, 0.02, 0.03 and 0.05. Thus the null hypothesis is rejected at all these proportions, i.e., no deletion is valid, and the analysis should be based on the original full data, with the parameter estimates shown in the tables (Table 3, Table 4, and Table 5).

Note that the coefficient for Age is insignificant, and hence the corresponding $\Lambda_n$ values at the various deletion proportions are not meaningful.

Concluding Remarks


We have proposed a method for partial variable deletion, in which only some proportion(s) of covariate values are deleted. This contrasts with existing methods, which either select or delete entire variables; the method is thus new and generalizes existing variable selection. The question is motivated by practical problems: the method can be used to find the effective ranges of the covariates, or to remove possible noise in the covariates, so that the corresponding estimated effects are more interpretable. The proposed test statistic is a generalization of the Wilks likelihood ratio statistic; its asymptotic distribution is generally a chi-squared mixture, and the corresponding cut-off point can be computed by simulation. Simulation studies were conducted to evaluate the performance of the method, and it was applied to the analysis of a real Parkinson's disease data set as illustration. A drawback of the current version of the method is that the proportions of possible deletions must be specified for each variable, so the optimal proportions are not easy to find. In our next step of research we will implement an algorithm that finds the optimal proportions automatically and is easier to use. As suggested by a reviewer, simulation studies should be performed to compare, with statistical significance tests, the proposed method against existing variable selection method(s) in order to quantify its contribution; this is a direction for our future research (Appendix).

Acknowledgment


This research was supported by the Intramural Research Program of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References


  1. Wilks SS (1938) The large-sample distribution of the likelihood ratio for testing composite hypotheses. Annals of Mathematical Statistics 9: 60-62.

  2. Akaike H (1974) A new look at the statistical model identification. IEEE Transactions on Automatic Control 19: 716-723.

  3. Schwarz G (1978) Estimating the dimension of a model. Annals of Statistics 6: 461-464.

  4. Kolmogorov A (1963) On tables of random numbers. Sankhya 25: 369-375.

  5. Hansen M, Yu B (2001) Model selection and the principle of minimum description length. Journal of the American Statistical Association 96: 746-774.

  6. Tibshirani R (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B 58: 267-288.

  7. May RJ, Maier HR, Dandy GC, Fernando TG (2008) Non-linear variable selection for artificial neural networks using partial mutual information. Environmental Modelling and Software 23: 1312-1326.

  8. Mehmood T, Liland KH, Snipen L, Saebo S (2012) A review of variable selection methods in partial least squares regression. Chemometrics and Intelligent Laboratory Systems 118: 62-69.

  9. Wang L, Liu X, Liang H, Carroll R (2011) Estimation and variable selection for generalized additive partial linear models. Annals of Statistics 39: 1827-1851.

  10. Liu X, Wang L, Liang H (2011) Estimation and variable selection for semiparametric additive partial linear models. Stat Sin 21: 1225-1248.

  11. Tibshirani R (1997) The lasso method for variable selection in the Cox model. Statistics in Medicine 16: 385-395.

  12. Fan J, Li R (2001) Variable selection via non-concave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96: 1348-1360.

  13. Fan J, Li R (2002) Variable selection for Cox's proportional hazards model and frailty model. Annals of Statistics 30: 74-99.

  14. Wang HX, Leineweber C, Kirkeeide R, Svane B, Schenck-Gustafsson K, et al. (2007) Psychosocial stress and atherosclerosis: family and work stress accelerate progression of coronary disease in women. The Stockholm Female Coronary Angiography Study. J Intern Med 261: 245-254.

  15. Shara NM, Wang H, Valaitis E, Pehlivanova M, Carter EA, et al. (2011) Comparison of estimated glomerular filtration rates and albuminuria in predicting risk of coronary heart disease in a population with high prevalence of diabetes mellitus and renal disease. Am J Cardiol 107: 399-405.

  16. Wallace CS, Boulton DM (1968) An information measure for classification. Computer Journal 11: 185-194.

  17. Rissanen J (1996) Fisher information and stochastic complexity. IEEE Transactions on Information Theory 42: 40-47.

  18. Stat 701 (2002) Proof of Wilks' Theorem on LRT.

  19. Bickel PJ, Klaassen CA, Ritov Y, Wellner JA (1993) Efficient and Adaptive Estimation for Semiparametric Models. The Indian Journal of Statistics 62: 157-160.