Statistical inference on restricted linear regression models with partial distortion measurement errors
Zhenghong WEI, Yongbin FAN and Jun ZHANG*
Abstract
We consider statistical inference for linear regression models when some variables are distorted with errors by some unknown functions of commonly observable confounding variables. The proposed estimation
procedure is designed to accommodate undistorted as well as distorted variables. To test a hypothesis on the
parametric components, a restricted least squares estimator is proposed for unknown parameters under some
restricted conditions. Asymptotic properties for the estimators are established. A test statistic based on the
difference between the residual sums of squares under the null and alternative hypotheses is proposed, and we
also obtain the asymptotic properties of the test statistic. A wild bootstrap procedure is proposed to calculate
critical values. Simulation studies are conducted to demonstrate the performance of the proposed procedure and
a real example is analysed for an illustration.
KEY WORDS: Distortion measurement errors; Local linear smoothing; Restricted estimator; Bootstrap procedure.
Short Title: linear regression models with partial distortion measurement errors
1 Corresponding Author: Jun ZHANG, email: [email protected].
Zhenghong WEI (E-mail: [email protected]), College of Mathematics and Computational Science, Institute of Statistical Sciences, Shenzhen University, Shenzhen, China. Yongbin FAN (E-mail: [email protected]), College of Mathematics and Computational Science, Shenzhen University, Shenzhen, China. Jun ZHANG (E-mail: [email protected]), Institute of Statistical Sciences, Shenzhen-Hong Kong Joint Research Center for Applied Statistical Sciences, College of Mathematics and Computational Science, Shenzhen University, Shenzhen, China.
1. INTRODUCTION

In some applications, variables may not be directly observed because of certain contamination, such as in health science and medicine research. It is well known that measurement errors in covariates may cause large, sometimes serious, bias in the estimated regression coefficients if we ignore the measurement error. As such, measurement error models have received much attention. Some literature on measurement error models includes Kneip et al. (2015); Li and Xue (2008); Saleh and Shalabh (2014); Stefanski et al. (2014); Xu and Zhu (2014); Yang et al. (2014); Zhang et al. (2011) and others. Carroll et al. (2006) systematically summarized recent research developments for linear and non-linear models as well as for non-parametric and semiparametric models.
Our interest is in quantifying the following linear regression models with partial distortion measurement errors:

\[
Y = \beta_0^{\tau} X + \gamma_0^{\tau} Z + \varepsilon, \qquad
\tilde{Y} = \phi(U)\,Y, \qquad \tilde{X} = \psi(U)\,X,
\tag{1.1}
\]

where Y is an unobservable response, X = (X_1, X_2, ..., X_p)^τ is an unobservable continuous predictor vector (the superscript τ denotes the transpose operator throughout this paper), Z = (Z_1, Z_2, ..., Z_q)^τ is an observed predictor vector, and β_0, γ_0 are the unknown coefficient parameters for the linear regression of Y on X, Z. Ỹ and X̃ are the distorted but observed response and predictors, and ψ(·) is a p × p diagonal matrix diag(ψ_1(·), ..., ψ_p(·)), where φ(·) and ψ_r(·) are unknown continuous distorting functions. The confounding variable U ∈ R^1 is independent of (X^τ, Z^τ, Y)^τ. The diagonal form of ψ(·) indicates that the confounding variable U distorts the unobserved X_r, r = 1, ..., p, in a multiplicative fashion. The model error ε is independent of (X^τ, Z^τ, U)^τ.
This type of measurement error model was revealed by Şentürk and Nguyen (2009), who studied the Pima Indians diabetes data. It is of interest to investigate which covariates have an impact on diabetes. To analyse these data, Şentürk and Nguyen (2009) suggested the covariate 'body mass index' (BMI) as a potential confounding variable. Kaysen et al. (2002) also treated BMI as the confounder for hemodialysis patients, and they further realised that the fibrinogen level and serum transferrin level should be divided by BMI in order to eliminate the contamination possibly caused by BMI. From a practical point of view, since the exact relationship between the confounder and the primary variables is unknown, naively dividing by BMI may be incorrect and may lead to an inconsistent estimator of the parameter. As a remedy, Şentürk and Müller (2005, 2006) introduced a flexible multiplicative adjustment by using unknown smooth distorting functions φ(·), ψ_r(·). Recently, there has been much literature on the statistical modelling of distortion measurement error data. Şentürk and Müller (2005, 2006) proposed parametric covariate-adjusted models with a one-dimensional confounding variable in the setting in which the observed Ỹ and X̃ are related through a varying coefficient model (Xu and Guo, 2013; Xu and Zhu, 2013), using the binning method (Fan and Zhang, 2000). Şentürk and Nguyen (2006) further proposed a local linear estimator to deal with partial distortion measurement errors-in-variables. Nguyen et al. (2008) proposed an estimation procedure for linear mixed effects models with an application to longitudinal data. Cui et al. (2009) proposed a direct-plug-in estimation procedure. Later on, this direct-plug-in method was developed for the case of multivariate confounders (Zhang et al., 2013a, 2014c, 2012a) and for some semi-parametric models, for example, partial linear models (Li et al., 2010), partial linear single index models (Zhang et al., 2013b), and dimension reduction models (Zhang et al., 2012b). Zhang et al. (2014a) studied an efficient estimator for the correlation coefficient between two variables that are both observed with distortion measurement errors. Li et al. (2014) employed the smoothly clipped absolute deviation (SCAD; Fan and Li, 2001; Liang and Li, 2009) penalized least squares method to simultaneously select variables and estimate the coefficients for high-dimensional covariate-adjusted linear regression models. Recently, Zhang et al. (2015) proposed a residual-based empirical process test statistic to solve the problem of model checking for parametric distortion measurement error regression models.
In this article, we investigate estimation and statistical inference for model (1.1). To estimate the unknown parameters β_0, γ_0, we use the direct-plug-in method proposed by Cui et al. (2009) and further investigate the asymptotic properties of the estimators. To make statistical inference on β_0, γ_0, i.e., to test whether the true parameters β_0, γ_0 satisfy some linear restriction conditions or not, we further develop a restricted estimator by introducing Lagrange multipliers under the null hypothesis. The associated asymptotic properties of the restricted estimator are also revealed. Finally, a test statistic based on the difference between the residual sums of squares under the null and alternative hypotheses is proposed. The limiting distribution of the test statistic is shown to be a weighted sum of independent standard chi-squared distributions under the null hypothesis. To mimic the null distribution of the test statistic, a wild bootstrap procedure is proposed to define p-values. We conduct Monte Carlo simulation experiments to examine the performance of the proposed procedures. Our simulation results show that the proposed methods perform well both in estimation and hypothesis testing. We also apply our estimation and testing procedures to analyze the Pima Indians diabetes data set.

The paper is organized as follows. In Section 2, we describe the direct-plug-in estimation procedure for the parameters β_0, γ_0 and present the associated asymptotic results. In Section 3, we provide a test statistic for the testing problem and give a restricted estimator under the null hypothesis, together with their theoretical properties. A wild bootstrap procedure is also proposed to mimic the null distribution of the test statistic. In Section 4, we report the results of simulation studies and present the results of our statistical analysis of a diabetes study. All the technical proofs of the asymptotic results are given in the Appendix.
2. ESTIMATION PROCEDURES FOR β_0, γ_0
2.1. Covariate Calibration
In this subsection, a calibration estimation procedure is proposed to estimate the unobserved Y, X. Using the observed i.i.d. samples {Ỹ_i, X̃_i, U_i}_{i=1}^n, we follow Cui et al. (2009)'s estimation procedure to estimate the unknown distorting functions φ(·), ψ_r(·). To ensure identifiability, Şentürk and Müller (2005, 2006) assumed that

\[
E\!\left[\frac{\tilde{Y}}{E[\tilde{Y}]}\,\Big|\,U\right] = \phi(U), \qquad
E\!\left[\frac{\tilde{X}_r}{E[\tilde{X}_r]}\,\Big|\,U\right] = \psi_r(U), \qquad r = 1, \ldots, p.
\tag{2.2}
\]

By (1.1) and (2.2), we have E[Ỹ] = E[Y], E[X̃_r] = E[X_r] and

\[
E[\phi(U)] = 1, \qquad E[\psi_r(U)] = 1.
\tag{2.3}
\]
Using these equations, the unobserved {Y_i, X_i}_{i=1}^n can be estimated as

\[
\hat{Y}_i = \frac{\tilde{Y}_i}{\hat{\phi}(U_i)}, \qquad
\hat{X}_{ri} = \frac{\tilde{X}_{ri}}{\hat{\psi}_r(U_i)},
\tag{2.4}
\]
i = 1, ..., n, r = 1, ..., p. The estimators φ̂(U_i), ψ̂_r(U_i) used in (2.4) are the Nadaraya-Watson estimators, which are defined as

\[
\hat{\phi}(U_i) = \frac{n^{-1}\sum_{j=1}^{n} K_h(U_j - U_i)\,\tilde{Y}_j}{n^{-1}\sum_{j=1}^{n} K_h(U_j - U_i)\,\bar{\tilde{Y}}}, \qquad
\hat{\psi}_r(U_i) = \frac{n^{-1}\sum_{j=1}^{n} K_h(U_j - U_i)\,\tilde{X}_{rj}}{n^{-1}\sum_{j=1}^{n} K_h(U_j - U_i)\,\bar{\tilde{X}}_r},
\tag{2.5}
\]

i = 1, ..., n, where \bar{\tilde{Y}} = \frac{1}{n}\sum_{i=1}^{n}\tilde{Y}_i and \bar{\tilde{X}}_r = \frac{1}{n}\sum_{i=1}^{n}\tilde{X}_{ri}. Here K_h(·) = h^{-1}K(·/h), K(·) denotes a kernel density function, and h is a positive-valued bandwidth.
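The calibration steps (2.4)-(2.5) are straightforward to implement. The following is an illustrative NumPy sketch, not the authors' code; the function and variable names (nw_distortion, calibrate, Y_tilde, X_tilde) are ours, and the Epanechnikov kernel is one admissible choice under Condition (A5).

```python
import numpy as np

def nw_distortion(V_tilde, U, h):
    """Estimate a distorting function at each U_i as in (2.5): a
    Nadaraya-Watson smooth of the distorted variable, divided by the
    same kernel smooth of its sample mean."""
    K = lambda t: 0.75 * np.maximum(1.0 - t**2, 0.0)   # Epanechnikov kernel
    W = K((U[None, :] - U[:, None]) / h)               # W[i, j] ∝ K_h(U_j - U_i)
    num = W @ V_tilde                                  # Σ_j K_h(U_j - U_i) V~_j
    den = W.sum(axis=1) * V_tilde.mean()               # Σ_j K_h(U_j - U_i) · V~bar
    return num / den                                   # phi_hat or psi_hat at U_i

def calibrate(Y_tilde, X_tilde, U, h):
    """Recover Y_hat and X_hat as in (2.4)."""
    Y_hat = Y_tilde / nw_distortion(Y_tilde, U, h)
    X_hat = np.column_stack([X_tilde[:, r] / nw_distortion(X_tilde[:, r], U, h)
                             for r in range(X_tilde.shape[1])])
    return Y_hat, X_hat
```

Because identical kernel weights appear in the numerator and denominator of (2.5), the common factor h^{-1} of K_h cancels and is omitted in the code.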
2.2. A least squares estimator
In this subsection, we introduce the estimation procedure for the true parameters β_0, γ_0. In what follows, A^{⊗2} = AA^τ and A^{⋆2} = A^τ A for any matrix or vector A. The estimators of β_0 and γ_0 are obtained by a least squares estimation method:

\[
\begin{pmatrix} \hat{\beta} \\ \hat{\gamma} \end{pmatrix}
= \left( \frac{1}{n}\sum_{i=1}^{n} \hat{T}_i^{\otimes 2} \right)^{-1}
  \left( \frac{1}{n}\sum_{i=1}^{n} \hat{T}_i \hat{Y}_i \right),
\tag{2.6}
\]

where \hat{T}_i = (\hat{X}_i^{\tau}, Z_i^{\tau})^{\tau} = (\hat{X}_{1i}, \ldots, \hat{X}_{pi}, Z_i^{\tau})^{\tau}, i = 1, ..., n. We now present an asymptotic expression of (\hat{\beta}^{\tau}, \hat{\gamma}^{\tau})^{\tau}.
THEOREM 2.1. Under Conditions (A1)-(A6) given in the Appendix, we have

\[
\begin{pmatrix} \hat{\beta} \\ \hat{\gamma} \end{pmatrix}
- \begin{pmatrix} \beta_0 \\ \gamma_0 \end{pmatrix}
= \left[E\big(T^{\otimes 2}\big)\right]^{-1} \frac{E[TY]}{E[Y]} \frac{1}{n}\sum_{i=1}^{n}\big(\tilde{Y}_i - Y_i\big)
+ \left[E\big(T^{\otimes 2}\big)\right]^{-1} \frac{1}{n}\sum_{i=1}^{n} T_i \varepsilon_i
- \left[E\big(T^{\otimes 2}\big)\right]^{-1} \sum_{r=1}^{p} \frac{E[TX_r]}{E[X_r]} \frac{1}{n}\sum_{i=1}^{n}\big(\tilde{X}_{ri} - X_{ri}\big)\beta_{0,r}
+ o_P(n^{-1/2}).
\tag{2.7}
\]

Remark 1. In this asymptotic expression, the second term [E(T^{⊗2})]^{-1}(1/n)Σ_{i=1}^{n} T_i ε_i is the usual asymptotic expression for the least squares estimator when the data are observed without errors. The other terms in (2.7) are caused by the distorting functions φ(·), ψ_r(·) and the confounding variable U.
The proposed procedure involves the bandwidth h to be selected. It is worthwhile to point out that under-smoothing is necessary for estimating X̂_i, Ŷ_i. This is because Condition (A6) requires the bandwidth h to satisfy nh⁴ → 0. To meet Condition (A6), Carroll et al. (1997) suggested that the bandwidth h can be chosen of order O(n^{-1/5}) × n^{-2/15} = O(n^{-1/3}). Thus, a useful and reasonable choice for h is the rule of thumb suggested by Silverman (1986), namely, h = σ̂_U n^{-1/3}, where σ̂_U is the sample standard deviation of U. See also Zhang et al. (2014b); Zhou and Liang (2009).
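With the calibrated Ŷ_i and X̂_i in hand, (2.6) is ordinary least squares of Ŷ_i on T̂_i = (X̂_i^τ, Z_i^τ)^τ. A minimal sketch, assuming the calibration step has produced arrays Y_hat and X_hat (our names), together with the rule-of-thumb bandwidth:

```python
import numpy as np

def dpi_least_squares(Y_hat, X_hat, Z):
    """Least squares estimator (2.6) based on calibrated data:
    (beta_hat, gamma_hat) = (Σ T_i T_i^t)^{-1} Σ T_i Y_hat_i."""
    T = np.column_stack([X_hat, Z])                 # rows are T_hat_i
    theta = np.linalg.solve(T.T @ T, T.T @ Y_hat)   # solve the normal equations
    p = X_hat.shape[1]
    return theta[:p], theta[p:]                     # beta_hat, gamma_hat

def rule_of_thumb_bandwidth(U):
    """h = sigma_hat_U * n^{-1/3}, the under-smoothing choice above."""
    return U.std(ddof=1) * len(U) ** (-1.0 / 3.0)
```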
3. A HYPOTHESIS TEST FOR β_0, γ_0
In the previous section, we discussed the estimation of β_0, γ_0 rather than testing. Another interesting problem is to evaluate whether certain explanatory variables in the parametric components influence the response significantly. As in many important statistical applications, in addition to sample information, we may have some prior information on the regression parameter vector which can be used to improve the parametric estimators. Specifically, this study can be used to select the unobserved underlying variables X for distortion measurement error data. We consider the following linear hypothesis:

\[
H_0: A\theta_0 = b \quad \text{vs.} \quad H_1: A\theta_0 \neq b,
\tag{3.8}
\]
where θ_0 = (β_0^τ, γ_0^τ)^τ, A is a k × (p + q) matrix of known constants, and b is a k-vector of known constants. We shall also assume that rank(A) = k ≤ p + q.
3.0.1. A restricted estimator and its asymptotic normality
Under the null hypothesis H_0, the restriction condition Aθ_0 = b should be used to obtain an estimator of θ_0 without losing the restriction information involved in the null hypothesis H_0. The information on the regression coefficients involved in this null hypothesis may improve the efficiency of the estimator. Following Wei and Wang (2012), we construct a restricted estimator by using the Lagrange multiplier technique:

\[
S(\theta, \lambda) = \sum_{i=1}^{n} \left[\hat{Y}_i - \hat{T}_i^{\tau}\theta\right]^2 + 2\lambda^{\tau}(A\theta - b),
\tag{3.9}
\]
where λ is a k × 1 vector of Lagrange multipliers. Differentiating S(θ, λ) with respect to θ and λ, it is easily seen that

\[
\begin{cases}
\dfrac{\partial S(\theta, \lambda)}{\partial \theta}
= -2\displaystyle\sum_{i=1}^{n} \hat{T}_i\left[\hat{Y}_i - \hat{T}_i^{\tau}\theta\right] + 2A^{\tau}\lambda = 0, \\[2ex]
\dfrac{\partial S(\theta, \lambda)}{\partial \lambda} = 2(A\theta - b) = 0.
\end{cases}
\tag{3.10}
\]

Solving (3.10) with respect to θ and λ, the restricted least squares estimator of β_0, γ_0 can be obtained as

\[
\begin{pmatrix} \hat{\beta}_R \\ \hat{\gamma}_R \end{pmatrix}
= \begin{pmatrix} \hat{\beta} \\ \hat{\gamma} \end{pmatrix}
- \left[\sum_{i=1}^{n}\hat{T}_i^{\otimes 2}\right]^{-1} A^{\tau}
  \left( A\left[\sum_{i=1}^{n}\hat{T}_i^{\otimes 2}\right]^{-1} A^{\tau} \right)^{-1}
  \left( A\begin{pmatrix} \hat{\beta} \\ \hat{\gamma} \end{pmatrix} - b \right),
\tag{3.11}
\]

where the estimator β̂, γ̂ used in (3.11) is obtained from (2.6). In the following, we present the asymptotic normality of the restricted estimator β̂_R, γ̂_R.

THEOREM 3.1. Under Conditions (A1)-(A6) given in the Appendix, we have

\[
\sqrt{n}\left(
\begin{pmatrix} \hat{\beta}_R \\ \hat{\gamma}_R \end{pmatrix}
- \begin{pmatrix} \beta_0 \\ \gamma_0 \end{pmatrix}
\right)
\stackrel{L}{\longrightarrow}
N\!\left(0,\; \Omega_A \Gamma^{-1}\Omega_A^{\tau}\sigma_{\varepsilon}^2 + \Omega_A Q_{\beta_0}\Omega_A^{\tau}\right),
\tag{3.12}
\]

where Γ = E(T^{⊗2}), Ω_A = I_{p+q} − Γ^{-1}A^τ(AΓ^{-1}A^τ)^{-1}A, I_{p+q} is an identity matrix of size p + q, and

\[
Q_{\beta_0}
= \sum_{l=1}^{p}\sum_{r=1}^{p} \beta_{0,l}\beta_{0,r}\,
  \Gamma^{-1}\Lambda_{X_l}\Lambda_{X_r}^{\tau}\Gamma^{-1}\,
  \frac{E[X_l X_r]}{E[X_l]E[X_r]}\,\mathrm{Cov}\big(\psi_l(U), \psi_r(U)\big)
- \sum_{l=1}^{p} \beta_{0,l}\,
  \Gamma^{-1}\big(\Lambda_{X_l}\Lambda_{Y}^{\tau} + \Lambda_{Y}\Lambda_{X_l}^{\tau}\big)\Gamma^{-1}\,
  \frac{E[X_l Y]}{E[X_l]E[Y]}\,\mathrm{Cov}\big(\psi_l(U), \phi(U)\big)
+ \Gamma^{-1}\Lambda_{Y}^{\otimes 2}\Gamma^{-1}\,
  \frac{E[Y^2]}{(E[Y])^2}\,\mathrm{Var}\big(\phi(U)\big),
\]

and Λ_{X_r} = E[TX_r], r = 1, ..., p, Λ_Y = E[TY], and σ_ε² = Var(ε).

Remark 2. The first term Ω_A Γ^{-1}Ω_A^τ σ_ε² in the asymptotic variance (3.12) is the usual asymptotic covariance matrix of the restricted least squares estimator under the null hypothesis H_0 when the data can be directly observed, i.e., φ(·) ≡ 1 and ψ_r(·) ≡ 1. The second term Ω_A Q_{β_0} Ω_A^τ is an extra term caused by the distortion in the covariates.
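The closed form (3.11) corrects the unrestricted estimate toward the constraint Aθ = b. A hedged sketch (our notation; T stacks the vectors T̂_i as rows):

```python
import numpy as np

def restricted_ls(theta_hat, T, A, b):
    """Restricted least squares estimator (3.11): adjust the unrestricted
    estimate theta_hat onto the affine constraint A theta = b, in the
    metric of S_n = sum_i T_i T_i^t."""
    S = T.T @ T                                  # Σ_i T_i^{(x)2}
    S_inv_At = np.linalg.solve(S, A.T)           # S^{-1} A^t
    middle = np.linalg.solve(A @ S_inv_At, A @ theta_hat - b)
    return theta_hat - S_inv_At @ middle
```

By construction the returned estimate satisfies the restriction exactly, whatever the unrestricted input.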
3.0.2. A test statistic and its asymptotic result
After obtaining the restricted least squares estimator β̂_R, γ̂_R, we can define the residual sum of squares under the null hypothesis H_0:

\[
RSS(H_0) = \sum_{i=1}^{n}\left[\hat{Y}_i - \hat{X}_i^{\tau}\hat{\beta}_R - Z_i^{\tau}\hat{\gamma}_R\right]^2.
\tag{3.13}
\]

Similarly, we can define the residual sum of squares under the alternative hypothesis:

\[
RSS(H_1) = \sum_{i=1}^{n}\left[\hat{Y}_i - \hat{X}_i^{\tau}\hat{\beta} - Z_i^{\tau}\hat{\gamma}\right]^2.
\tag{3.14}
\]

Our test statistic for the null hypothesis H_0 is based on the difference between the residual sums of squares under the null and alternative hypotheses:

\[
T_n = RSS(H_0) - RSS(H_1).
\tag{3.15}
\]
Intuitively, if the null hypothesis H_0 is true, the value of T_n is small. If the null hypothesis H_0 is not true, there should be a significant difference between RSS(H_0) and RSS(H_1), and the value of T_n is large. A large value of T_n thus indicates that the null hypothesis H_0 should be rejected. The following theorem gives the asymptotic distribution of T_n under the null hypothesis H_0.
THEOREM 3.2. Under Conditions (A1)-(A6) given in the Appendix, we have:

• Under the null hypothesis H_0: Aθ_0 = b,

\[
T_n \stackrel{L}{\longrightarrow} \ell_1\chi_{1,1}^2 + \ell_2\chi_{2,1}^2 + \cdots + \ell_k\chi_{k,1}^2,
\]

where χ²_{i,1}, i = 1, ..., k, are independent standard chi-squared random variables with one degree of freedom, and the ℓ_i's are the eigenvalues of the matrix A(Γ^{-1}σ_ε² + Q_{β_0})A^τ(AΓ^{-1}A^τ)^{-1}. Here the matrices Γ and Q_{β_0} are defined in Theorem 3.1.

• Under the local alternative hypothesis H_{1n}: Aθ_0 = b + n^{-1/2}c, the test statistic T_n follows a generalised chi-squared distribution (Sheil and O'Muircheartaigh, 1977), namely,

\[
T_n \stackrel{L}{\longrightarrow}
\left(M + \Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}c\right)^{\tau}
\left(M + \Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}c\right),
\tag{3.16}
\]

where M is a random vector following a multivariate normal distribution N(0, D) with

\[
D = \Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}
A\big(\Gamma^{-1}\sigma_{\varepsilon}^2 + Q_{\beta_0}\big)A^{\tau}
\big(A\Gamma^{-1}A^{\tau}\big)^{-1}A\Gamma^{-1/2}.
\]
3.0.3. A wild bootstrap procedure
To apply Theorem 3.2 under the null hypothesis H_0, one way is to estimate the unknown weights ℓ_i, 1 ≤ i ≤ k, by estimating Γ and Q_{β_0} consistently; then one can use the distribution of ℓ̂_1χ²_{1,1} + ℓ̂_2χ²_{2,1} + ... + ℓ̂_kχ²_{k,1} or a Welch-Satterthwaite approximation to define the p-values. However, the asymptotic variances obtained in Theorem 3.1 are very complex. Especially for Q_{β_0}, its estimator may not be precise in finite samples. To overcome this problem, we suggest applying a wild bootstrap (Escanciano, 2006; Stute et al., 1998; Wu, 1986) technique to mimic the distribution of the test statistic T_n under the null hypothesis H_0.
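For reference, if consistent estimates ℓ̂_i of the weights were available, the weighted chi-squared null distribution of Theorem 3.2 could be simulated directly to obtain a p-value. A sketch of this plug-in route (our own illustration; the wild bootstrap below avoids estimating the weights at all):

```python
import numpy as np

def weighted_chisq_pvalue(t_obs, weights, n_mc=100000, seed=0):
    """Monte Carlo p-value for T_n under the weighted chi-squared limit
    sum_i ell_i * chi^2_{i,1} of Theorem 3.2, given estimated weights."""
    rng = np.random.default_rng(seed)
    # Each row: independent chi^2_1 draws, one per weight; combine linearly.
    draws = rng.chisquare(df=1, size=(n_mc, len(weights))) @ np.asarray(weights)
    return float(np.mean(draws >= t_obs))
```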
The procedure for calculating the critical values based on the bootstrap test statistic is as follows:

Step 1: Compute the test statistic T_n defined in (3.15).

Step 2: Generate i.i.d. variables ς_{ib}, i = 1, ..., n, b = 1, ..., B, from the two-point distribution taking the values (1 − √5)/2 and (1 + √5)/2 with probabilities (5 + √5)/10 and (5 − √5)/10, respectively. Let ε̂_i = Ŷ_i − X̂_i^τ β̂ − Z_i^τ γ̂ and ε̂_i^{(b)} = ε̂_i ς_{ib}, and further obtain

\[
\hat{Y}_i^{(b)} = \hat{X}_i^{\tau}\hat{\beta} + Z_i^{\tau}\hat{\gamma} + \hat{\varepsilon}_i^{(b)}.
\]

Step 3: For each b, using the bootstrap sample Ŷ_i^{(b)}, X̂_i, Z_i, we re-calculate the bootstrap estimators β̂^{(b)}, γ̂^{(b)} and β̂_R^{(b)}, γ̂_R^{(b)} by

\[
\begin{pmatrix} \hat{\beta}^{(b)} \\ \hat{\gamma}^{(b)} \end{pmatrix}
= \left(\frac{1}{n}\sum_{i=1}^{n}\hat{T}_i^{\otimes 2}\right)^{-1}
  \left(\frac{1}{n}\sum_{i=1}^{n}\hat{T}_i\hat{Y}_i^{(b)}\right),
\]
\[
\begin{pmatrix} \hat{\beta}_R^{(b)} \\ \hat{\gamma}_R^{(b)} \end{pmatrix}
= \begin{pmatrix} \hat{\beta}^{(b)} \\ \hat{\gamma}^{(b)} \end{pmatrix}
- \left[\sum_{i=1}^{n}\hat{T}_i^{\otimes 2}\right]^{-1} A^{\tau}
  \left( A\left[\sum_{i=1}^{n}\hat{T}_i^{\otimes 2}\right]^{-1} A^{\tau} \right)^{-1}
  \left( A\begin{pmatrix} \hat{\beta}^{(b)} \\ \hat{\gamma}^{(b)} \end{pmatrix} - b \right),
\]

and we then calculate the bootstrap test statistic

\[
T_n^{(b)} = RSS^{(b)}(H_0) - RSS^{(b)}(H_1),
\]
where
\[
RSS^{(b)}(H_0) = \sum_{i=1}^{n}\left[\hat{Y}_i^{(b)} - \hat{X}_i^{\tau}\hat{\beta}_R^{(b)} - Z_i^{\tau}\hat{\gamma}_R^{(b)}\right]^2, \qquad
RSS^{(b)}(H_1) = \sum_{i=1}^{n}\left[\hat{Y}_i^{(b)} - \hat{X}_i^{\tau}\hat{\beta}^{(b)} - Z_i^{\tau}\hat{\gamma}^{(b)}\right]^2.
\]

Step 4: We calculate the 1 − κ quantile of the bootstrap test statistics T_n^{(b)}, b = 1, ..., B, as the κ-level critical value.
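Steps 1-4 can be sketched as follows. This is an illustrative implementation in our own notation, assuming calibrated inputs Y_hat and X_hat; the two-point multiplier distribution is the one specified in Step 2.

```python
import numpy as np

def wild_bootstrap_pvalue(Y_hat, X_hat, Z, A, b, B=1000, seed=0):
    """Wild bootstrap for T_n = RSS(H0) - RSS(H1), following Steps 1-4,
    returning a bootstrap p-value instead of a fixed-level critical value."""
    rng = np.random.default_rng(seed)
    n = len(Y_hat)
    T = np.column_stack([X_hat, Z])

    def fit(y):
        theta = np.linalg.solve(T.T @ T, T.T @ y)          # unrestricted (2.6)
        S_inv_At = np.linalg.solve(T.T @ T, A.T)
        theta_R = theta - S_inv_At @ np.linalg.solve(A @ S_inv_At, A @ theta - b)
        return theta, theta_R                              # restricted via (3.11)

    def Tn(y):
        theta, theta_R = fit(y)
        return np.sum((y - T @ theta_R) ** 2) - np.sum((y - T @ theta) ** 2)

    theta_hat, _ = fit(Y_hat)
    resid = Y_hat - T @ theta_hat                          # residuals of Step 2
    t_obs = Tn(Y_hat)

    # Two-point multiplier distribution of Step 2
    vals = np.array([(1 - 5 ** 0.5) / 2, (1 + 5 ** 0.5) / 2])
    probs = np.array([(5 + 5 ** 0.5) / 10, (5 - 5 ** 0.5) / 10])

    t_boot = np.empty(B)
    for bi in range(B):
        zeta = rng.choice(vals, size=n, p=probs)
        y_b = T @ theta_hat + resid * zeta                 # bootstrap response
        t_boot[bi] = Tn(y_b)
    return float(np.mean(t_boot >= t_obs))                 # bootstrap p-value
```

The multipliers have mean zero and unit variance, so the bootstrap residuals reproduce the conditional variance of the fitted residuals.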
4. A SIMULATION STUDY
In this section, we present some numerical results to evaluate the performance of our proposed estimators. In the following, the Epanechnikov kernel K(t) = 0.75(1 − t²)_+ is used. As noted in Remark 1, the rule-of-thumb bandwidth h = σ̂_U n^{-1/3} is used. In Example 1, a comparison is made between the direct-plug-in (DPI) estimation method and the local linear approximation (LLA) method proposed in Şentürk and Nguyen (2006). The simulation results are reported in Table 2. In Example 2, a comparison is made between the usual least squares estimator β̂, γ̂ and the restricted estimator β̂_R, γ̂_R. The simulation results are reported in Table 3. We also investigate the performance of the test statistic T_n and its bootstrap estimators. The values of the power calculations are reported in Table 4.
Example 1. We generate 500 realizations from the following model (4.17), and the sample size is chosen as n = 500 and 1000:

\[
Y = \beta_{0,1}X_1 + \beta_{0,2}X_2 + \gamma_{0,1}Z_1 + \varepsilon,
\tag{4.17}
\]

where (β_{0,1}, β_{0,2}, γ_{0,1}) = (1, 3, −2). The predictors are generated from normal distributions, X_1 ∼ N(2, 1.2²), X_2 ∼ N(2, 0.5²) and Z_1 ∼ N(0.5, 0.25²), and the correlation matrix of (X_1, X_2, Z_1) is chosen as

\[
\begin{pmatrix}
1 & -0.5 & 0.25 \\
-0.5 & 1 & -0.5 \\
0.25 & -0.5 & 1
\end{pmatrix}.
\]

The model error ε is independent of (X_1, X_2, Z_1), and ε ∼ N(0, σ²) with σ = 0.5 and 1.0. The confounding covariate U is generated from the Uniform(0, 1) distribution, independent of (X_1, X_2, Z_1, ε). The distorting functions for X_1 and X_2 are chosen as ψ_1(U) = 1 + 0.3 sin(2πU) and ψ_2(U) = 1 + 3(U − 0.5)³, respectively. The distorting function for Y is chosen as φ(U) = 0.5 + U.
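The data-generating design of Example 1 can be reproduced as follows (a sketch; we draw (X_1, X_2, Z_1) jointly from a normal distribution with the stated means, standard deviations and correlation matrix):

```python
import numpy as np

def generate_example1(n, sigma=0.5, seed=0):
    """Simulate one sample from model (4.17) with the distortions of Example 1."""
    rng = np.random.default_rng(seed)
    corr = np.array([[1.0, -0.5, 0.25],
                     [-0.5, 1.0, -0.5],
                     [0.25, -0.5, 1.0]])
    scales = np.array([1.2, 0.5, 0.25])             # SDs of X1, X2, Z1
    means = np.array([2.0, 2.0, 0.5])
    cov = corr * np.outer(scales, scales)           # correlation -> covariance
    X1, X2, Z1 = rng.multivariate_normal(means, cov, size=n).T
    eps = rng.normal(0.0, sigma, n)
    U = rng.uniform(0.0, 1.0, n)
    Y = 1.0 * X1 + 3.0 * X2 - 2.0 * Z1 + eps        # (beta1, beta2, gamma1) = (1, 3, -2)
    # Observed, distorted versions; note E[phi(U)] = E[psi_r(U)] = 1, as in (2.3)
    Y_tilde = (0.5 + U) * Y
    X_tilde = np.column_stack([(1 + 0.3 * np.sin(2 * np.pi * U)) * X1,
                               (1 + 3 * (U - 0.5) ** 3) * X2])
    return Y_tilde, X_tilde, Z1, U
```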
In Table 2, we report the simulation results for the sample mean (Mean), the sample standard deviation (SD) and the sample mean squared error (MSE). We can see that both the DPI estimator and the LLA estimator are close to the true value. As the sample size increases or σ decreases, their means are generally closer to the true values, the SD and MSE of both estimators decrease, and the biases of the DPI estimator are much smaller than those of the LLA estimator. In addition, by comparing the MSE, we can see that our DPI estimator is slightly more efficient than the LLA estimator, which suggests that the DPI estimation procedure is indeed worthy of recommendation.
Example 2. We first consider the restricted estimator under two restriction conditions, A[1] = (1, 1, 2)^τ (i.e., β_{0,1} + β_{0,2} + 2γ_{0,1} = 0) and A[2] = (−1, 1, 1)^τ (i.e., −β_{0,1} + β_{0,2} + γ_{0,1} = 0). The covariates X_1, X_2, Z_1, ε, U and the choices of distorting functions are the same as in Example 1. We generate 500 realizations from model (4.17), the sample size is chosen as n = 500, 800 and 1000, and the σ for the error term is chosen as σ = 0.25, 0.5 and 1.0 in this example.
In Table 3, we can see that the values of the MSE for the restricted estimator of β_{0,1} are slightly larger than those obtained in Table 2. However, the MSE values for the restricted estimator of β_{0,2} are generally smaller than those in Table 2, and those for the restricted estimator γ̂_R are much smaller than those in Table 2, which indicates that both A[1] and A[2] can improve the estimation efficiency for β_{0,2}, γ_{0,1} without losing much estimation efficiency for β_{0,1}.
Next, we consider a hypothesis testing problem:

\[
H_0: \gamma_{0,1} = 0 \quad \text{vs.} \quad H_1: \gamma_{0,1} = c,
\tag{4.18}
\]

where c = 0, ±0.1, ±0.2, ±0.3, ±0.4, ±0.5, ±0.6. The parameter (β_{0,1}, β_{0,2}) is set to be (1, 3). In this test
problem (4.18), it is noted that the alternative hypothesis H_1 becomes the null hypothesis H_0 when c = 0. 1000 bootstrap samples are generated in each simulation to calculate the powers. In Table 4, all empirical levels obtained by the bootstrap test statistics are close to the four nominal levels when c = 0, which indicates that the bootstrap statistic T_n^{(b)} gives proper Type I errors. As the absolute value of c increases, the power increases rapidly to one.
Table 1: The results of the hypothesis testing for the Pima Indian diabetes data

Null hypothesis                          Alternative hypothesis                    p-values
H_0^{[1]}: β_{0,1} = 0.2                 H_1^{[1]}: β_{0,1} ≠ 0.2                  0.922
H_0^{[2]}: γ_{0,2} = 0.6                 H_1^{[2]}: γ_{0,2} ≠ 0.6                  0.790
H_0^{[3]}: γ_{0,2} − 3β_{0,1} = 0.0      H_1^{[3]}: γ_{0,2} − 3β_{0,1} ≠ 0.0       0.994
H_0^{[4]}: γ_{0,1} = 0.0                 H_1^{[4]}: γ_{0,1} ≠ 0.0                  0.636

5. A REAL DATA ANALYSIS
We apply our method to analyse the Pima Indian diabetes data for an illustration. The dataset is available on the web site http://www.ics.uci.edu/~mlearn/databases/. These data have been analysed by Şentürk and Nguyen (2009). They suggested the body mass index (BMI) as the potential confounding variable and further investigated a linear regression model between the unobserved response 'plasma glucose concentration' (Y), the unobserved covariate 'diastolic blood pressure' (X), the observed covariate 'triceps skin fold thickness' (Z_1) and the observed covariate 'age' (Z_2):

\[
Y = \beta_{0,0} + \beta_{0,1}X + \gamma_{0,1}Z_1 + \gamma_{0,2}Z_2 + \varepsilon.
\tag{5.19}
\]

We first present the patterns of φ̂(u) and ψ̂(u) in Figure 1 by using the local linear smoothing procedure (Fan and Gijbels, 1996). Those two plots indicate that φ(u) and ψ(u) are not constant (i.e., φ(u) ≢ 1, ψ(u) ≢ 1), which suggests that the variable 'BMI' is a potential confounder for the response Y and the covariate X. Next, we use our proposed direct-plug-in estimation procedure and obtain (β̂_{0,0}, β̂_{0,1}, γ̂_{0,1}, γ̂_{0,2}) = (84.0284, 0.2118, 0.0642, 0.6375). 1000 bootstrap samples are conducted for the testing problem, and the results are presented in Table 1. Based on the estimates β̂_{0,0}, β̂_{0,1}, γ̂_{0,1}, γ̂_{0,2}, we are interested in the null hypotheses
H_0^{[s]}, s = 1, 2, 3, 4, in Table 1. From this table, we cannot reject the hypothesis H_0^{[1]}: β_{0,1} = 0.2, as the associated p-value is 0.922, if we use the significance level α = 0.05. Similarly, we cannot reject the hypothesis H_0^{[2]}: γ_{0,2} = 0.6 either. We consider the hypothesis H_0^{[3]}: γ_{0,2} − 3β_{0,1} = 0.0; the associated p-value is 0.994, which indicates that the relationship in the null hypothesis H_0^{[3]} can be true. For the null hypothesis H_0^{[4]}, we cannot reject the null hypothesis H_0^{[4]}, as its p-value is 0.636, much larger than the significance level 0.05. This implies that the variable 'triceps skin fold thickness' may be excluded from model (5.19).
Acknowledgements. The authors thank the editor and the associate editor for their constructive suggestions that helped them to improve the early manuscript. The research work of Wei Zhenghong is supported by the Project of the Department of Education of Guangdong Province. The research work of Zhang Jun is supported by the National Natural Sciences Foundation of China (NSFC) grant No. 11401391, the NSFC Tianyuan fund for Mathematics grant No. 11326179, and the project of DEGP.
REFERENCES

Carroll, R. J., Fan, J., Gijbels, I. and Wand, M. P. (1997) Generalized partially linear single-index models. J. Amer. Statist. Assoc., 92, 477–489.
Carroll, R. J., Ruppert, D., Stefanski, L. A. and Crainiceanu, C. M. (2006) Nonlinear Measurement Error Models, A Modern Perspective. New York: Chapman and Hall, 2nd edn.
Cui, X., Guo, W., Lin, L. and Zhu, L. (2009) Covariate-adjusted nonlinear regression. Ann. Statist., 37, 1839–1870.
Escanciano, J. C. (2006) A consistent diagnostic test for regression models using projections. Econometric Theory, 22, 1030–1051.
Fan, J. and Gijbels, I. (1996) Local Polynomial Modelling and Its Applications. London: Chapman & Hall.
Fan, J. and Li, R. (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96, 1348–1360.
Fan, J. and Zhang, J. (2000) Two-step estimation of functional linear models with applications to longitudinal data. J. R. Stat. Soc. Ser. B Stat. Methodol., 62, 303–322.
Kaysen, G. A., Dubin, J. A., Müller, H.-G., Mitch, W. E., Rosales, L. M. and Nathan W. Levin the Hemo Study Group (2002) Relationships among inflammation nutrition and physiologic mechanisms establishing albumin levels in hemodialysis patients. Kidney International, 61, 2240–2249.
Kneip, A., Simar, L. and Van Keilegom, I. (2015) Frontier estimation in the presence of measurement error with unknown variance. Journal of Econometrics, 184, 379–393.
Li, F., Lin, L. and Cui, X. (2010) Covariate-adjusted partially linear regression models. Comm. Statist. Theory Methods, 39, 1054–1074.
Li, G. and Xue, L. (2008) Empirical likelihood confidence region for the parameter in a partially linear errors-in-variables model. Communications in Statistics - Theory and Methods, 37, 1552–1564.
Li, X., Du, J., Li, G. and Fan, M. (2014) Variable selection for covariate adjusted regression model. Journal of Systems Science and Complexity, 27, 1227–1246.
Liang, H. and Li, R. (2009) Variable selection for partially linear models with measurement errors. J. Amer. Statist. Assoc., 104, 234–248.
Mack, Y. P. and Silverman, B. W. (1982) Weak and strong uniform consistency of kernel regression estimates. Z. Wahrsch. Verw. Gebiete, 61, 405–415.
Nguyen, D. V., Şentürk, D. and Carroll, R. J. (2008) Covariate-adjusted linear mixed effects model with an application to longitudinal data. J. Nonparametr. Stat., 20, 459–481.
Saleh, A. E. and Shalabh (2014) A ridge regression estimation approach to the measurement error model. Journal of Multivariate Analysis, 123, 68–84.
Şentürk, D. and Müller, H.-G. (2005) Covariate adjusted correlation analysis via varying coefficient models. Scand. J. Statist., 32, 365–383.
Şentürk, D. and Müller, H.-G. (2006) Inference for covariate adjusted regression via varying coefficient models. Ann. Statist., 34, 654–679.
Şentürk, D. and Nguyen, D. V. (2006) Estimation in covariate-adjusted regression. Comput. Statist. Data Anal., 50, 3294–3310.
Şentürk, D. and Nguyen, D. V. (2009) Partial covariate adjusted regression. J. Statist. Plann. Inference, 139, 454–468.
Sheil, J. and O'Muircheartaigh, I. (1977) Algorithm AS106: The distribution of non-negative quadratic forms in normal variables. Applied Statistics, 26, 92–98.
Silverman, B. W. (1986) Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. London: Chapman & Hall.
Stefanski, L., Wu, Y. and White, K. (2014) Variable selection in nonparametric classification via measurement error model selection likelihoods. Journal of the American Statistical Association, 109, 574–589.
Stute, W., González Manteiga, W. and Presedo Quindimil, M. (1998) Bootstrap approximations in model checks for regression. J. Amer. Statist. Assoc., 93, 141–149.
Wei, C. and Wang, Q. (2012) Statistical inference on restricted partially linear additive errors-in-variables models. Test, 21, 757–774.
Wu, C.-F. J. (1986) Jackknife, bootstrap and other resampling methods in regression analysis. Ann. Statist., 14, 1261–1350.
Xu, W. and Guo, X. (2013) Nonparametric checks for a varying coefficient model with missing response at random. Metrika, 76, 459–482.
Xu, W. and Zhu, L. (2013) Testing the adequacy of varying coefficient models with missing responses at random. Metrika, 76, 53–69.
Xu, W. and Zhu, L. (2014) Nonparametric check for partial linear errors-in-covariables models with validation data. Annals of the Institute of Statistical Mathematics, 1–23.
Yang, Y., Li, G. and Peng, H. (2014) Empirical likelihood of varying coefficient errors-in-variables models with longitudinal data. Journal of Multivariate Analysis, 127, 1–18.
Zhang, J., Feng, Z. and Zhou, B. (2014a) A revisit to correlation analysis for distortion measurement error data. J. Multivariate Anal., 124, 116–129.
Zhang, J., Gai, Y. and Wu, P. (2013a) Estimation in linear regression models with measurement errors subject to single-indexed distortion. Comput. Statist. Data Anal., 59, 103–120.
Zhang, J., Li, G. and Feng, Z. (2015) Checking the adequacy for a distortion errors-in-variables parametric regression model. Comput. Statist. Data Anal., 83, 52–64.
Zhang, J., Wang, X., Yu, Y. and Gai, Y. (2014b) Estimation and variable selection in partial linear single index models with error-prone linear covariates. Statistics. A Journal of Theoretical and Applied Statistics, 48, 1048–1070.
Zhang, J., Yu, Y., Zhou, B. and Liang, H. (2014c) Nonlinear measurement errors models subject to additive distortion. J. Statist. Plann. Inference, 150, 49–65.
Zhang, J., Yu, Y., Zhu, L. and Liang, H. (2013b) Partial linear single index models with distortion measurement errors. Ann. Inst. Statist. Math., 65, 237–267.
Zhang, J., Zhu, L. and Liang, H. (2012a) Nonlinear models with measurement errors subject to single-indexed distortion. J. Multivariate Anal., 112, 1–23.
Zhang, J., Zhu, L. and Zhu, L. (2012b) On a dimension reduction regression with covariate adjustment. J. Multivariate Anal., 104, 39–55.
Zhang, W., Li, G. and Xue, L. (2011) Profile inference on partially linear varying-coefficient errors-in-variables models under restricted condition. Computational Statistics & Data Analysis, 55, 3027–3040.
Zhou, Y. and Liang, H. (2009) Statistical inference for semiparametric varying-coefficient partially linear models with error-prone linear covariates. Ann. Statist., 37, 427–458.
Figure 1: Plots for distorting functions. (Two panels: the estimated distorting functions for PGC and for DBP, each plotted against BMI.)
APPENDIX

In this appendix, we present the conditions, prepare several preliminary lemmas, and give the proofs of the main results.
5.1. Conditions

(A1) The matrix $\Gamma$ defined in Theorem 3.1 is positive definite, and $0 < E[\varepsilon^2] = \sigma_\varepsilon^2 < +\infty$.

(A2) The density function $f_U(u)$ of the confounding variable $U$ is bounded away from 0 and satisfies the Lipschitz condition of order 1 on $\mathcal{U}$, where $\mathcal{U}$ stands for the compact support of $U$. Moreover, $\inf_{u\in\mathcal{U}} f_U(u) \ge c_0$ for some $0 < c_0 < +\infty$.

(A3) $\phi(\cdot)$ and $\psi_r(\cdot)$ have three bounded and continuous derivatives. Moreover, $\phi(u)$ and $\psi_r(u)$ are nonzero on $\mathcal{U}$.

(A4) $E[Y]$ and $E[X_s]$ are bounded away from 0; moreover, $E[|Y|^3] < \infty$ and $E[|X_s|^3] < \infty$, $s = 1,\dots,p$.

(A5) The kernel function $K(\cdot)$ is a density function with compact support, symmetric about zero, satisfying a Lipschitz condition and having bounded derivatives. Furthermore, $\int u^2 K(u)\,du \ne 0$ and $\int_{-\infty}^{\infty} |u|^j K(u)\,du < \infty$ for $j = 1, 2, 3$.

(A6) As $n \to \infty$, the bandwidth $h$ satisfies $\log^2 n/(nh^2) \to 0$, $nh^2 \to \infty$, and $nh^4 \to 0$.

Table 2: Simulation results of Example 1. "Mean" is the simulation mean; "SD" is the standard deviation.

                          β_{0,1} = 1                β_{0,2} = 3                γ_{0,1} = -2
Sample size          Mean      SD      MSE      Mean      SD      MSE      Mean      SD      MSE
σ = 0.50
  DPI  n = 500     0.9858  0.0352  0.0014    2.9835  0.0692  0.0050   -1.9843  0.1042  0.0111
       n = 1000    0.9900  0.0248  0.0007    2.9888  0.0452  0.0022   -1.9815  0.0818  0.0070
  LLA  n = 500     0.9911  0.0386  0.0016    2.9855  0.0715  0.0053   -2.0017  0.1112  0.0123
       n = 1000    0.9925  0.0259  0.0007    2.9889  0.0471  0.0023   -1.9964  0.0894  0.0080
σ = 1.00
  DPI  n = 500     0.9883  0.0609  0.0038    2.9837  0.1336  0.0180   -1.9920  0.2056  0.0422
       n = 1000    0.9915  0.0410  0.0018    2.9879  0.0889  0.0080   -1.9939  0.1369  0.0187
  LLA  n = 500     0.9878  0.0624  0.0040    2.9727  0.1348  0.0188   -2.0181  0.2183  0.0478
       n = 1000    0.9901  0.0455  0.0022    2.9769  0.0933  0.0092   -2.0093  0.1487  0.0221

5.2. Technical lemmas

LEMMA 5.1. Suppose $E(W|U = u) = m_W(u)$ and its derivatives up to second order are bounded for all $u \in \mathcal{U}$, where $\mathcal{U}$ is defined in Condition (A2); that $E|W|^{s_0}$ exists; and that $\sup_{u\in\mathcal{U}} \int |w|^{s_0} f(u, w)\,dw < \infty$, where $f(u, w)$ is the joint density of $(U, W)^{\tau}$. Let $(U_i, W_i)^{\tau}$, $i = 1, 2, \dots, n$, be independent and identically distributed (i.i.d.) samples from $(U, W)^{\tau}$. If (A1)-(A3) hold, and $n^{2\delta-1}h \to \infty$ for $\delta < 1 - s_0^{-1}$, then
\[
\sup_{u\in\mathcal{U}}\Big|\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u)W_i - f_U(u)m_W(u) - \frac{h^2}{2}\big[f_U(u)m_W(u)\big]''\int K(u)u^2\,du\Big| = O(\tau_{n,h}), \quad a.s.,
\]
where $\tau_{n,h} = h^3 + \sqrt{\log n/(nh)}$.

Proof. Lemma 5.1 can be immediately proved from the result obtained by Mack and Silverman (1982).
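As a numerical illustration of the kernel average appearing in Lemma 5.1, the following toy sketch (all data-generating choices here are ours and purely illustrative, not from the paper) checks that $n^{-1}\sum_i K_h(U_i-u)W_i$ approaches $f_U(u)m_W(u)$, using an Epanechnikov kernel, which is compactly supported and symmetric as required by Condition (A5).

```python
import numpy as np

def epanechnikov(t):
    # Compactly supported, symmetric kernel consistent with Condition (A5).
    return 0.75 * (1.0 - t**2) * (np.abs(t) <= 1.0)

def kernel_average(u, U, W, h):
    # (1/n) sum_i K_h(U_i - u) W_i with K_h(t) = K(t/h)/h; by Lemma 5.1
    # this converges uniformly (a.s.) to f_U(u) * m_W(u).
    return np.mean(epanechnikov((U - u) / h) * W) / h

rng = np.random.default_rng(0)
n = 20000
U = rng.uniform(0.0, 1.0, n)               # f_U(u) = 1 on [0, 1]
W = 2.0 + U + rng.normal(0.0, 0.1, n)      # m_W(u) = 2 + u
est = kernel_average(0.5, U, W, h=0.05)    # target: f_U(0.5) * m_W(0.5) = 2.5
```

Since $m_W$ is linear and $f_U$ is flat here, the second-order bias term in Lemma 5.1 vanishes and the estimate is close to 2.5 for moderate bandwidths.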
LEMMA 5.2. Suppose that Conditions (A1)-(A6) hold. Let $S(x)$ be a continuous function satisfying $E S^2(X) < \infty$. Then, for $l = 1, \dots, p$,
\[
n^{-1}\sum_{i=1}^{n}\big(\hat Y_i - Y_i\big)S(X_i) = n^{-1}\sum_{i=1}^{n}\big(\tilde Y_i - Y_i\big)\frac{E[Y S(X)]}{E[Y]} + o_P(n^{-1/2}),
\tag{5.1}
\]
\[
n^{-1}\sum_{i=1}^{n}\big(\hat X_{li} - X_{li}\big)S(X_i) = n^{-1}\sum_{i=1}^{n}\big(\tilde X_{li} - X_{li}\big)\frac{E[X_l S(X)]}{E[X_l]} + o_P(n^{-1/2}).
\tag{5.2}
\]

Proof. Lemma 5.2 is the direct result of Lemma B.2 in Zhang et al. (2012a).

Table 3: Simulation results of Example 2. Restricted condition (I): β_{0,1} + β_{0,2} + 2γ_{0,1} = 0; restricted condition (II): -β_{0,1} + β_{0,2} + γ_{0,1} = 0. "Mean" is the simulation mean; "SD" is the standard deviation.

                          β_{0,1} = 1                β_{0,2} = 3                γ_{0,1} = -2
Sample size          Mean      SD      MSE      Mean      SD      MSE      Mean      SD      MSE
σ = 0.50
  (I)  n = 500     0.9826  0.0388  0.0018    2.9778  0.0611  0.0042   -1.9802  0.0469  0.0026
       n = 800     0.9902  0.0308  0.0010    2.9841  0.0510  0.0028   -1.9872  0.0384  0.0016
       n = 1000    0.9939  0.0263  0.0007    2.9910  0.0436  0.0020   -1.9924  0.0325  0.0011
  (II) n = 500     0.9851  0.0365  0.0015    2.9760  0.0593  0.0041   -1.9910  0.0352  0.0013
       n = 800     0.9913  0.0292  0.0009    2.9833  0.0492  0.0027   -1.9920  0.0292  0.0009
       n = 1000    0.9947  0.0247  0.0006    2.9904  0.0422  0.0018   -1.9957  0.0265  0.0007
σ = 0.75
  (I)  n = 500     0.9841  0.0497  0.0027    2.9776  0.0841  0.0076   -1.9809  0.0633  0.0044
       n = 800     0.9864  0.0402  0.0018    2.9818  0.0669  0.0048   -1.9841  0.0497  0.0027
       n = 1000    0.9919  0.0361  0.0013    2.9889  0.0597  0.0036   -1.9904  0.0451  0.0021
  (II) n = 500     0.9859  0.0477  0.0024    2.9764  0.0813  0.0071   -1.9905  0.0476  0.0023
       n = 800     0.9872  0.0385  0.0016    2.9817  0.0640  0.0044   -1.9945  0.0401  0.0016
       n = 1000    0.9926  0.0348  0.0012    2.9886  0.0573  0.0034   -1.9960  0.0342  0.0012
σ = 1.00
  (I)  n = 500     0.9846  0.0659  0.0045    2.9881  0.1108  0.0124   -1.9864  0.0819  0.0069
       n = 800     0.9924  0.0499  0.0025    2.9858  0.0883  0.0080   -1.9891  0.0646  0.0043
       n = 1000    0.9969  0.0468  0.0022    2.9958  0.0763  0.0058   -1.9964  0.0579  0.0034
  (II) n = 500     0.9872  0.0630  0.0041    2.9864  0.1060  0.0114   -1.9993  0.0672  0.0045
       n = 800     0.9931  0.0481  0.0023    2.9852  0.0847  0.0074   -1.9921  0.0530  0.0029
       n = 1000    0.9975  0.0451  0.0020    2.9953  0.0734  0.0053   -1.9977  0.0438  0.0019

Table 4: The simulation results of the power calculations.

                      Significance level
               0.10      0.05      0.025     0.01
c =  0.000   0.0975    0.0517    0.0243    0.0094
c = -0.100   0.1951    0.1220    0.1057    0.0488
c = -0.200   0.3902    0.2846    0.2114    0.1789
c = -0.300   0.6504    0.5203    0.4228    0.2927
c = -0.400   0.8374    0.7805    0.7236    0.6260
c = -0.500   0.9512    0.9187    0.8943    0.8537
c = -0.600   1.0000    1.0000    0.9919    0.9837
c =  0.100   0.2602    0.1789    0.1138    0.0732
c =  0.200   0.4959    0.3496    0.2602    0.1870
c =  0.300   0.7236    0.6423    0.5691    0.4472
c =  0.400   0.9268    0.8780    0.8049    0.7236
c =  0.500   1.0000    0.9919    0.9674    0.9268
c =  0.600   1.0000    1.0000    1.0000    1.0000

5.3. Proof of Theorems

Proof of Theorem 2.1. Step 1. Denote $\kappa_n = h^2 + \sqrt{\log n/(nh)}$. Using Lemma 5.1, it is easily seen that
\[
\sup_{u\in\mathcal{U}}\Big|\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u) - f_U(u)\Big| = O_P(\kappa_n), \quad a.s.,
\tag{5.3}
\]
\[
\sup_{u\in\mathcal{U}}\Big|\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u)\tilde Y_i - \phi(u)f_U(u)E[Y]\Big| = O_P(\kappa_n), \quad a.s.
\tag{5.4}
\]
Together with (5.3) and Condition (A2), we can have that
\[
\inf_{u\in\mathcal{U}}\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u) \ge \inf_{u\in\mathcal{U}}f_U(u) - \sup_{u\in\mathcal{U}}\Big|\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u) - f_U(u)\Big| \ge c_0 + O_P(\kappa_n).
\tag{5.5}
\]
Then, using (5.3), (5.5) and $\bar{\tilde Y} - E[Y] = O_P(n^{-1/2})$, we can have that
\[
\sup_{u\in\mathcal{U}}\big|\hat\phi(u) - \phi(u)\big|
\le \frac{\sup_{u\in\mathcal{U}}\Big|\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u)\tilde Y_i - \frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u)\phi(u)\bar{\tilde Y}\Big|}{\inf_{u\in\mathcal{U}}\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u)\bar{\tilde Y}}
\]
\[
\le \frac{\sup_{u\in\mathcal{U}}\Big|\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u)\tilde Y_i - \phi(u)f_U(u)E[Y]\Big|}{c_0 + O_P(\kappa_n)}
+ \frac{\sup_{u\in\mathcal{U}}\big|\phi(u)f_U(u)\big|\,\big|E[Y]-\bar{\tilde Y}\big|}{c_0 + O_P(\kappa_n)}
+ \frac{\sup_{u\in\mathcal{U}}|\phi(u)|\sup_{u\in\mathcal{U}}\Big|\frac{1}{n}\sum_{i=1}^{n}K_h(U_i-u) - f_U(u)\Big|\,\bar{\tilde Y}}{c_0 + O_P(\kappa_n)}
= O_P\big(\kappa_n + n^{-1/2}\big).
\tag{5.6}
\]
Similar to (5.6), we can show that
\[
\sup_{u\in\mathcal{U}}\big|\hat\psi_r(u) - \psi_r(u)\big| = O_P\big(\kappa_n + n^{-1/2}\big), \quad r = 1, \dots, p.
\tag{5.7}
\]
Step 2. Note that
\[
\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}
= \Big(\frac{1}{n}\sum_{i=1}^{n}\hat T_i^{\otimes2}\Big)^{-1}\frac{1}{n}\sum_{i=1}^{n}\hat T_i\big(\hat Y_i - \hat X_i^{\tau}\beta_0 - Z_i^{\tau}\gamma_0\big)
\]
\[
= \Big(\frac{1}{n}\sum_{i=1}^{n}\hat T_i^{\otimes2}\Big)^{-1}\frac{1}{n}\sum_{i=1}^{n}\hat T_i\big(\hat Y_i - Y_i\big)
+ \Big(\frac{1}{n}\sum_{i=1}^{n}\hat T_i^{\otimes2}\Big)^{-1}\frac{1}{n}\sum_{i=1}^{n}\hat T_i\varepsilon_i
+ \Big(\frac{1}{n}\sum_{i=1}^{n}\hat T_i^{\otimes2}\Big)^{-1}\frac{1}{n}\sum_{i=1}^{n}\hat T_i\big(X_i - \hat X_i\big)^{\tau}\beta_0.
\tag{5.8}
\]
Using (5.6)-(5.7) and Lemma 5.1, we have
\[
\frac{1}{n}\sum_{i=1}^{n}\hat X_{ri}\big(\hat Y_i - Y_i\big)
= \frac{1}{n}\sum_{i=1}^{n}X_{ri}\big(\hat Y_i - Y_i\big)
+ \frac{1}{n}\sum_{i=1}^{n}X_{ri}Y_i\,\frac{\big(\hat\psi_r(U_i)-\psi_r(U_i)\big)\big(\hat\phi(U_i)-\phi(U_i)\big)}{\phi(U_i)\psi_r(U_i)}
= \frac{1}{n}\sum_{i=1}^{n}\big(\tilde Y_i - Y_i\big)\frac{E[X_r Y]}{E[Y]} + o_P(n^{-1/2}) + O_P\big(\kappa_n^2 + n^{-1}\big).
\tag{5.9}
\]
As $nh^8 \to 0$ and $\log^2 n/(nh^2) \to 0$, it yields that $\kappa_n^2 = o(n^{-1/2})$. Using this fact, similar to (5.9), we can have that
\[
\frac{1}{n}\sum_{i=1}^{n}\hat T_i\big(\hat Y_i - Y_i\big)
= \frac{1}{n}\sum_{i=1}^{n}T_i\big(\hat Y_i - Y_i\big) + \frac{1}{n}\sum_{i=1}^{n}\binom{\hat X_i - X_i}{0}\big(\hat Y_i - Y_i\big)
= \frac{1}{n}\sum_{i=1}^{n}\big(\tilde Y_i - Y_i\big)\frac{E[T Y]}{E[Y]} + o_P(n^{-1/2}),
\]
where $E[T Y] = \big(E[X_1 Y], \dots, E[X_p Y], E[Z_1 Y], \dots, E[Z_q Y]\big)^{\tau}$, and
\[
\frac{1}{n}\sum_{i=1}^{n}\hat T_i\big(\hat X_i - X_i\big)^{\tau}\beta_0
= \sum_{r=1}^{p}\frac{1}{n}\sum_{i=1}^{n}\frac{E[T X_r]}{E[X_r]}\big(\tilde X_{ri} - X_{ri}\big)\beta_{0,r} + o_P(n^{-1/2}).
\]
By using $E[T\varepsilon] = 0$, we can have that $\frac{1}{n}\sum_{i=1}^{n}\big(\hat T_i - T_i\big)\varepsilon_i = o_P(n^{-1/2})$. Moreover, similar to (5.9), it is easily seen that $\frac{1}{n}\sum_{i=1}^{n}\hat T_i^{\otimes2} = \frac{1}{n}\sum_{i=1}^{n}T_i^{\otimes2} + o_P(1) = E\big[T^{\otimes2}\big] + o_P(1)$. As a result, (5.8) can be represented as
\[
\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}
= \big(E\big[T^{\otimes2}\big]\big)^{-1}\frac{1}{n}\sum_{i=1}^{n}\big(\tilde Y_i - Y_i\big)\frac{E[T Y]}{E[Y]}
+ \big(E\big[T^{\otimes2}\big]\big)^{-1}\frac{1}{n}\sum_{i=1}^{n}T_i\varepsilon_i
- \big(E\big[T^{\otimes2}\big]\big)^{-1}\sum_{r=1}^{p}\frac{1}{n}\sum_{i=1}^{n}\frac{E[T X_r]}{E[X_r]}\big(\tilde X_{ri} - X_{ri}\big)\beta_{0,r} + o_P(n^{-1/2}).
\tag{5.10}
\]
We complete the proof of Theorem 2.1.
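The estimation procedure behind Theorem 2.1 can be mimicked in a toy simulation: distort the response and one covariate by smooth functions of $U$ with unit mean, recover them by kernel-ratio calibration, and run least squares on the calibrated variables. Everything below (the model, the distorting functions, the bandwidth) is an illustrative assumption of ours, not the paper's implementation.

```python
import numpy as np

def nw_ratio(u_grid, U, V, h):
    # Kernel-ratio estimate of a distorting function g with E[g(U)] = 1:
    # g_hat(u) = sum_i K_h(U_i - u) V_i / (sum_i K_h(U_i - u) * mean(V)).
    K = lambda t: 0.75 * (1.0 - t**2) * (np.abs(t) <= 1.0)
    w = K((U[None, :] - u_grid[:, None]) / h)
    return (w @ V) / (w.sum(axis=1) * V.mean())

rng = np.random.default_rng(0)
n = 2000
U = rng.uniform(0.0, 1.0, n)
X = 1.0 + rng.uniform(0.0, 1.0, n)        # E[X] bounded away from 0 (cf. (A4))
Z = rng.normal(size=n)                    # undistorted covariate
Y = 2.0 * X - 1.0 * Z + 0.2 * rng.normal(size=n)   # E[Y] away from 0

phi = 1.0 + 0.3 * (U - 0.5)               # distorting functions, E[phi(U)] = 1
psi = 1.0 - 0.4 * (U - 0.5)               # and E[psi(U)] = 1
Y_t, X_t = phi * Y, psi * X               # observed distorted variables

h = 0.1
Y_hat = Y_t / nw_ratio(U, U, Y_t, h)      # calibrated response
X_hat = X_t / nw_ratio(U, U, X_t, h)      # calibrated covariate

T = np.column_stack([X_hat, Z])
theta = np.linalg.lstsq(T, Y_hat, rcond=None)[0]   # close to (2, -1)
```

The recovered coefficients are close to the true values (2, -1) despite the fact that only the distorted variables enter the fit.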
Proof of Theorem 3.1

Proof. Using (3.12) and the null hypothesis $H_0 : A\theta_0 = b$, we can have that
\[
\binom{\hat\beta_R}{\hat\gamma_R} - \binom{\beta_0}{\gamma_0}
= \Bigg\{I_{p+q} - \Big[\sum_{i=1}^{n}\hat T_i^{\otimes2}\Big]^{-1}A^{\tau}\Big(A\Big[\sum_{i=1}^{n}\hat T_i^{\otimes2}\Big]^{-1}A^{\tau}\Big)^{-1}A\Bigg\}
\Bigg\{\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}\Bigg\}
\]
\[
= \Big\{I_{p+q} - \Gamma^{-1}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}A\Big\}
\Bigg\{\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}\Bigg\} + o_P(n^{-1/2}).
\tag{5.11}
\]
Together with the asymptotic expression (5.10), we complete the proof of Theorem 3.1.
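A minimal numerical sketch of the restricted least squares estimator, using restriction (I) of Table 3 encoded as $A\theta = b$. The design matrix `T` stands in for the calibrated $(\hat X_i^{\tau}, Z_i^{\tau})^{\tau}$, and all numbers are illustrative assumptions, not output from the paper.

```python
import numpy as np

# Restriction (I) from Table 3: beta_{0,1} + beta_{0,2} + 2*gamma_{0,1} = 0.
A = np.array([[1.0, 1.0, 2.0]])
b = np.array([0.0])

def restricted_ls(T, Y, A, b):
    # Restricted least squares: correct the unrestricted estimator by
    # Gamma_n^{-1} A^T (A Gamma_n^{-1} A^T)^{-1} (A theta_hat - b),
    # with Gamma_n = n^{-1} sum_i T_i T_i^T.
    n = T.shape[0]
    Gn = T.T @ T / n
    theta = np.linalg.solve(Gn, T.T @ Y / n)       # unrestricted estimator
    Gi_At = np.linalg.solve(Gn, A.T)
    correction = Gi_At @ np.linalg.solve(A @ Gi_At, A @ theta - b)
    return theta, theta - correction

rng = np.random.default_rng(1)
n = 2000
T = rng.normal(size=(n, 3))
theta0 = np.array([1.0, 3.0, -2.0])                # satisfies restriction (I)
Y = T @ theta0 + 0.5 * rng.normal(size=n)
theta_hat, theta_R = restricted_ls(T, Y, A, b)

# Because of the normal equations, the quadratic form below equals the
# difference of the residual sums of squares under the null and alternative.
Tn = (theta_R - theta_hat) @ (T.T @ T) @ (theta_R - theta_hat)
rss = lambda th: np.sum((Y - T @ th) ** 2)
```

The restricted estimator satisfies the constraint exactly (up to floating point), and `Tn` coincides with `rss(theta_R) - rss(theta_hat)`, which is the residual-sum-of-squares form of the test statistic.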
Proof of Theorem 3.2

Proof. Step 3.1. Under the null hypothesis $H_0 : A\theta_0 = b$, and using the fact that $\sum_{i=1}^{n}\hat T_i\big(\hat Y_i - \hat X_i^{\tau}\hat\beta - Z_i^{\tau}\hat\gamma\big) = 0$, we can have that
\[
T_n = \binom{\hat\beta_R - \hat\beta}{\hat\gamma_R - \hat\gamma}^{\tau}\Big[\sum_{i=1}^{n}\hat T_i^{\otimes2}\Big]\binom{\hat\beta_R - \hat\beta}{\hat\gamma_R - \hat\gamma},
\qquad \hat T_i = \binom{\hat X_i}{Z_i}.
\]
Using (5.11) and recalling the definition of $\Omega_A$ used in Theorem 3.1, we know that
\[
\binom{\hat\beta_R - \hat\beta}{\hat\gamma_R - \hat\gamma}
= -\Big\{\Gamma^{-1}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}A\Big\}\Bigg\{\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}\Bigg\} + o_P(n^{-1/2}).
\tag{5.12}
\]
Together with (5.10) and the fact that $\frac{1}{n}\sum_{i=1}^{n}\hat T_i^{\otimes2} = \Gamma + o_P(1)$, we have
\[
T_n = \Bigg\{\sqrt{n}\,\big[\Gamma^{-1}\sigma_\varepsilon^2 + Q_{\beta_0}\big]^{-1/2}\Bigg[\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}\Bigg]\Bigg\}^{\tau}
\big[\Gamma^{-1}\sigma_\varepsilon^2 + Q_{\beta_0}\big]^{1/2}\big[I_{p+q} - \Omega_A\big]^{\tau}\,\Gamma\,\big[I_{p+q} - \Omega_A\big]\big[\Gamma^{-1}\sigma_\varepsilon^2 + Q_{\beta_0}\big]^{1/2}
\]
\[
\times \Bigg\{\sqrt{n}\,\big[\Gamma^{-1}\sigma_\varepsilon^2 + Q_{\beta_0}\big]^{-1/2}\Bigg[\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}\Bigg]\Bigg\}
\xrightarrow{L} \sum_{i=1}^{k}\ell_i\chi^2_{i,1},
\tag{5.13}
\]
where $\chi^2_{i,1}$, $i = 1, \dots, k$, are independent standard chi-squared random variables with one degree of freedom, and the $\ell_i$'s are the eigenvalues of the matrix $\big[\Gamma^{-1}\sigma_\varepsilon^2 + Q_{\beta_0}\big]\big[I_{p+q} - \Omega_A\big]^{\tau}\Gamma\big[I_{p+q} - \Omega_A\big]$.

Step 3.2. Under the local hypothesis $H_{1n} : A\theta_0 = b + n^{-1/2}c$, similar to (5.13), using (3.12), we have that
\[
\sqrt{n}\,\Gamma^{1/2}\binom{\hat\beta_R - \hat\beta}{\hat\gamma_R - \hat\gamma}
= -\Gamma^{1/2}\big(I_{p+q} - \Omega_A\big)\sqrt{n}\Bigg[\binom{\hat\beta}{\hat\gamma} - \binom{\beta_0}{\gamma_0}\Bigg]
- \Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}c + o_P(1)
\]
\[
\xrightarrow{L} N\Big(-\Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}c,\;
\Gamma^{1/2}\big(I_{p+q} - \Omega_A\big)\big[\Gamma^{-1}\sigma_\varepsilon^2 + Q_{\beta_0}\big]\big(I_{p+q} - \Omega_A\big)^{\tau}\Gamma^{1/2}\Big).
\tag{5.14}
\]
Using (5.14) and the fact that $\frac{1}{n}\sum_{i=1}^{n}\hat T_i^{\otimes2} = \Gamma + o_P(1)$, we can have that
\[
T_n = \Bigg[\sqrt{n}\,\Gamma^{1/2}\binom{\hat\beta_R - \hat\beta}{\hat\gamma_R - \hat\gamma}\Bigg]^{\tau}
\Bigg[\sqrt{n}\,\Gamma^{1/2}\binom{\hat\beta_R - \hat\beta}{\hat\gamma_R - \hat\gamma}\Bigg] + o_P(1)
\xrightarrow{L} \Big(M + \Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}c\Big)^{\tau}\Big(M + \Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}c\Big),
\tag{5.15}
\]
where $M$ follows the multivariate normal distribution $N(0, D)$ with
\[
D = \Gamma^{-1/2}A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}A\big[\Gamma^{-1}\sigma_\varepsilon^2 + Q_{\beta_0}\big]A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}A\Gamma^{-1/2}.
\]
Using the fact that $\big[I_{p+q} - \Omega_A\big]^{\tau}\Gamma\big[I_{p+q} - \Omega_A\big] = A^{\tau}\big(A\Gamma^{-1}A^{\tau}\big)^{-1}A$, we complete the proof of Theorem 3.2.
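The wild bootstrap calibration of $T_n$ proposed in the paper can be sketched as follows. This is an illustrative sketch under our own simplifying assumptions (plain least squares on a generic design, Rademacher perturbation weights), not the paper's exact algorithm: refit under the null, perturb the null residuals by random signs, and recompute $T_n$ on each bootstrap sample.

```python
import numpy as np

def tn_stat(T, Y, A, b):
    # Unrestricted and restricted LS fits and the statistic
    # Tn = (theta_R - theta_hat)^T [sum_i T_i T_i^T] (theta_R - theta_hat).
    G = T.T @ T
    theta = np.linalg.solve(G, T.T @ Y)
    Gi_At = np.linalg.solve(G, A.T)
    theta_R = theta - Gi_At @ np.linalg.solve(A @ Gi_At, A @ theta - b)
    d = theta_R - theta
    return d @ G @ d, theta_R

def wild_bootstrap_pvalue(T, Y, A, b, B=500, seed=0):
    # Wild bootstrap: regenerate responses from the null (restricted) fit
    # with Rademacher-perturbed residuals, then compare the bootstrap Tn's
    # with the observed one.
    rng = np.random.default_rng(seed)
    Tn, theta_R = tn_stat(T, Y, A, b)
    fitted, resid = T @ theta_R, Y - T @ theta_R
    count = 0
    for _ in range(B):
        V = rng.choice([-1.0, 1.0], size=len(Y))   # Rademacher weights
        Tn_b, _ = tn_stat(T, fitted + resid * V, A, b)
        count += Tn_b >= Tn
    return (1 + count) / (B + 1)

rng = np.random.default_rng(10)
n = 400
T = rng.normal(size=(n, 3))
A = np.array([[1.0, 1.0, 2.0]]); b = np.array([0.0])
Y_null = T @ np.array([1.0, 3.0, -2.0]) + 0.5 * rng.normal(size=n)  # H0 holds
Y_alt  = T @ np.array([1.0, 3.0, -1.0]) + 0.5 * rng.normal(size=n)  # H0 violated
p_null = wild_bootstrap_pvalue(T, Y_null, A, b, B=200, seed=1)
p_alt  = wild_bootstrap_pvalue(T, Y_alt, A, b, B=200, seed=2)
```

Under the null the p-value is roughly uniform on (0, 1); under a clear violation of the restriction it concentrates near its lower bound 1/(B + 1), mirroring the power pattern in Table 4.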