
International Journal of Basic & Applied Sciences IJBAS-IJENS Vol: 10 No: 06


Estimation for Multivariate Linear Mixed Models


I Nyoman Latra1, Susanti Linuwih2, Purhadi2, and Suhartono2

1 Doctoral Candidate at Department of Statistics FMIPA-ITS Surabaya.
Email: i_nyoman_l@statistika.its.ac.id
2 Promotor and Co-promotors at Department of Statistics FMIPA-ITS Surabaya.

Abstract: This paper discusses the estimation of the multivariate linear mixed model, or multivariate variance components model, with an equal number of replications. We focus on two estimation methods, namely Maximum Likelihood Estimation (MLE) and Restricted Maximum Likelihood Estimation (REMLE). The results show that the parameter estimation of the fixed effects yields unbiased estimators, whereas the estimation of the random effects or variance components yields biased estimators. Moreover, assuming that the likelihood and ln-likelihood functions satisfy certain regularity conditions, it can be proved that the estimators obtained as a solution set of the likelihood equations are strongly consistent, asymptotically normal, and asymptotically efficient for large sample sizes.

Keywords: Linear Mixed Model, Multivariate Linear Model, Maximum Likelihood, Asymptotic Normality and Efficiency, Consistency.

I. INTRODUCTION
Linear mixed models or variance components models
have been effectively and extensively used by
statisticians for analyzing data when the response is
univariate. Reference [12] discussed the latent variable
model for mixed ordinal or discrete and continuous
outcomes that was applied to birth defects data.
Reference [16] showed that maximum likelihood
estimation of variance components from twin data can be
parameterized in the framework of linear mixed models.
Specialized variance component estimation software that
can handle pedigree data and user-defined covariance
structures can be used to analyze multivariate data for
simple and complex models with a large number of
random effects. Reference [2] showed that Linear Mixed
Models (LMM) could handle data where the observations
were not independent or could be used for modeling data
with correlated errors. There are several technical terms for predictor variables in linear mixed models: (i) random effects, i.e. a categorical predictor whose observed levels are not exhaustive but are treated as a random sample of all possible levels (for example, a product variable whose values represent only 5 of a possible 42 brands); (ii) hierarchical effects, i.e. predictor variables measured at more than one level; and (iii) fixed effects, i.e. predictor variables for which all possible category values (levels) are measured.
In contrast, a number of papers have discussed linear models for multivariate cases. Reference [5] applied a multivariate linear mixed model to Scholastic Aptitude Test data and proposed Restricted Maximum Likelihood (REML) to estimate the parameters. Reference [6] used a multivariate linear mixed model, or multivariate variance components model with equal replication, to predict the sum of the regression mean and the random effects of the model. This prediction problem reduces to the estimation of the ratio of two covariance matrices; for the ratio matrix they obtained James-Stein type estimators based on the Bartlett decomposition, Stein type orthogonally equivariant estimators, and Efron-Morris type estimators. Recently, Reference [10] applied a linear mixed model in statistics for systems biology to account for both covariates and correlations between signals that follow non-stationary time series; they used an estimation algorithm based on Expectation-Maximization (EM) which involved dynamic programming for the segmentation step. Reference [11] discussed a joint model for multivariate mixed ordinal and continuous responses, in which the likelihood and modified Pearson residuals are derived; the model is applied to medical data obtained from an observational study on women.
Most previous papers about multivariate linear mixed models focused on the estimation method and did not discuss the asymptotic properties of the estimators. Under the assumption that the likelihood and ln-likelihood functions satisfy certain regularity conditions, a solution set of the likelihood equations is strongly consistent, asymptotically normal, and asymptotically efficient for large sample sizes.

II. MULTIVARIATE LINEAR MIXED MODEL


In this section, we first discuss the univariate linear mixed model and then the multivariate linear model without replications; the multivariate linear mixed model itself is treated in the next section. Linear mixed models are statistical models for continuous outcome variables in which the residuals are normally distributed but may not be independent or have constant variance. A linear mixed model is a parametric linear model for clustered, longitudinal, or repeated-measures data. It may include
both fixed-effect parameters associated with one or more continuous or categorical covariates and random effects associated with one or more random factors. The fixed-effect parameters describe the relationships of the covariates to the dependent variable for an entire population, and the random effects are specific to clusters or subjects within a population [4], [9], [15], [17].
In matrix and vector form, a (univariate) linear mixed population model is written as [4], [7], [9], [17]

y = Xβ + Zγ + e,   (1)

where Xβ is the fixed part and Zγ is the random part, y is an n × 1 vector of observations, X is an n × (p+1) matrix of known covariates, Z is an n × m known matrix, β is a (p+1) × 1 vector of unknown regression coefficients, usually known as the fixed effects, γ is an m × 1 vector of random effects, and e is an n × 1 vector of random errors. Both vectors γ and e are unobservable. The basic assumptions of Eq. (1) for γ and e are γ ~ N(0, G) and e ~ N(0, R), where G and R are variance-covariance matrices (henceforth called variance matrices). To simplify the estimation of the model parameters, the random effect Zγ is marginalized into the errors e to create e*, which is N(0, Σ), where Σ = R + ZGZ^T. After this marginalization, (1) can be written as

y = Xβ + e*   or   y ~ N(Xβ, Σ).   (2)

Equation (2) is called the marginal model or, sometimes, the population mean model.
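As a small numerical illustration of this marginalization step (a sketch only; the random-intercept design, cluster sizes, and variance values below are invented for the example), the marginal variance matrix Σ = R + ZGZ^T can be formed directly:

import numpy as np

# Hypothetical example: n = 6 observations in m = 2 clusters of 3,
# a random intercept per cluster (G) and i.i.d. residual errors (R).
n, m = 6, 2
Z = np.kron(np.eye(m), np.ones((3, 1)))   # n x m design of the random effects
G = 0.8 * np.eye(m)                       # variance matrix of the random effects
R = 0.5 * np.eye(n)                       # variance matrix of the errors

Sigma = R + Z @ G @ Z.T                   # marginal variance of y in Eq. (2)
print(Sigma)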
For the point estimation process, each variance component of the model can be denoted by an element of the vector θ = vech(Σ) = [θ_1, ..., θ_{n(n+1)/2}]^T, where vech(Σ) indicates the vector obtained by stacking the lower triangular elements of Σ by column. The Gaussian mixed model then has the joint probability density function

f(y, β, θ) = [1/√((2π)^n |Σ|)] exp{-(y - Xβ)^T Σ^{-1} (y - Xβ)/2},   (3)

where n is the dimension of the vector y. The ln-likelihood function of (3) is

l(β, θ) = ln L(y, β, θ) = -(n/2) ln(2π) - (1/2) ln|Σ| - (1/2)(y - Xβ)^T Σ^{-1} (y - Xβ).   (4)
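For reference, the ln-likelihood (4) translates directly into code (a sketch; the caller supplies y, X, beta, and Sigma):

import numpy as np

def lnlik(y, X, beta, Sigma):
    """Ln-likelihood (4) of the marginal model y ~ N(X beta, Sigma)."""
    n = y.shape[0]
    r = y - X @ beta                            # residual vector y - X beta
    sign, logdet = np.linalg.slogdet(Sigma)     # ln|Sigma|, computed stably
    quad = r @ np.linalg.solve(Sigma, r)        # (y - X beta)' Sigma^{-1} (y - X beta)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)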

Taking the partial derivatives of (4) with respect to β and θ_q and setting each equal to zero, we obtain the likelihood equations whose solutions are

β̂ = (X^T Σ^{-1} X)^{-1} X^T Σ^{-1} y,   (5)

y^T P (∂Σ/∂θ_q) P y = tr(Σ^{-1} ∂Σ/∂θ_q);   q = 1, 2, ..., n(n+1)/2,   (6)

where P = Σ^{-1} - Σ^{-1} X (X^T Σ^{-1} X)^{-1} X^T Σ^{-1}.
Equation (5) has a closed form, whereas (6) does not; therefore [4], [17] proposed three algorithms that can be used to compute the estimates of θ_q, namely EM (expectation-maximization), N-R (Newton-Raphson), and Fisher scoring. Because the MLE is consistent and asymptotically normal with asymptotic covariance matrix equal to the inverse of the Fisher information matrix, we need the components of the Fisher information matrix. In both the univariate and multivariate linear mixed models, the regression coefficients and the covariance matrix components are collected in a single vector τ = (β^T, θ^T)^T. The Fisher information matrix is then

I(τ) = Var(∂l/∂τ) = -E[∂²l/∂τ ∂τ^T],   (7)

under the assumption that the second partial derivatives of l exist. The expected values of the second partial derivatives with respect to the fixed components and the variance components are [4]

E[∂²l/∂β ∂β^T] = -X^T Σ^{-1} X,
E[∂²l/∂β ∂θ_q] = 0;   1 ≤ q ≤ n(n+1)/2,
E[∂²l/∂θ_q ∂θ_{q'}] = -(1/2) tr(Σ^{-1} (∂Σ/∂θ_q) Σ^{-1} (∂Σ/∂θ_{q'}));   1 ≤ q, q' ≤ n(n+1)/2.

Based on (5) and a result of Theorem 1 in [15], the ML iterative algorithm can be used to compute the MLE of the unknown parameters in model (3). The iteration process yields the variance component estimators as the elements of vech(Σ̂) = θ̂, so that the estimator of the covariance matrix can be written as

Σ̂ = Σ(θ̂),   (8)

which is generally biased. Substituting (8) into (5) yields β̂, with Var(β̂) = (X^T Σ̂^{-1} X)^{-1}, which is biased because (8) is biased. The bias in MLE can be eliminated by Restricted Maximum Likelihood Estimation (REMLE).
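For concreteness, here is a minimal Fisher-scoring sketch of the ML equations (5)-(7). It assumes, purely for illustration, the two-component structure Σ(θ) = θ_1 I + θ_2 ZZ^T (a random-intercept special case) rather than the paper's fully general parameterization θ = vech(Σ):

import numpy as np

def ml_fisher_scoring(y, X, Z, n_iter=100, tol=1e-8):
    """Alternate Eq. (5) for beta with a Fisher-scoring step for theta,
    assuming Sigma(theta) = theta[0]*I + theta[1]*Z Z' (illustrative only)."""
    n = y.shape[0]
    dS = [np.eye(n), Z @ Z.T]                 # dSigma/dtheta_q for q = 1, 2
    theta = np.array([1.0, 1.0])              # starting values of the variance components
    for _ in range(n_iter):
        Sigma = theta[0] * dS[0] + theta[1] * dS[1]
        Si = np.linalg.inv(Sigma)
        beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)    # Eq. (5)
        r = y - X @ beta
        # score vector and Fisher information of the variance components
        score = np.array([-0.5 * np.trace(Si @ D) + 0.5 * r @ Si @ D @ Si @ r
                          for D in dS])
        info = np.array([[0.5 * np.trace(Si @ D1 @ Si @ D2) for D2 in dS]
                         for D1 in dS])
        step = np.linalg.solve(info, score)
        theta = np.maximum(theta + step, 1e-10)               # keep variances positive
        if np.max(np.abs(step)) < tol:
            break
    return beta, theta

The EM or Newton-Raphson variants mentioned above could be substituted for the scoring step without changing the structure of the iteration.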
REML estimation uses a transformation y_1 = A^T y, where rank(X) = p + 1 and A is an n × (n - p - 1) matrix with rank(A) = n - p - 1 such that A^T X = 0. It is easy to show that y_1 ~ N(0, A^T Σ A). Furthermore, the joint probability density function of y_1 is

f_R(y_1) = [1/√((2π)^{n-p-1} |A^T Σ A|)] exp{-(1/2) y_1^T (A^T Σ A)^{-1} y_1}.   (9)

The ln-likelihood of (9) can be written as

l_R(θ) ∝ -(1/2)[ln|A^T Σ A| + y_1^T (A^T Σ A)^{-1} y_1].   (10)

The estimation process can be carried out by partially differentiating (10) with respect to θ_q; q = 1, ..., n(n+1)/2 (the function does not contain components of β):

∂l_R/∂θ_q = (1/2){y^T P (∂Σ/∂θ_q) P y - tr(P ∂Σ/∂θ_q)},   (11)

where P in (11) is A(A^T Σ A)^{-1} A^T, which is equal to the matrix P defined below (6), whose estimator P̂ appears in (6). The Fisher information matrix of (11) is

Var(∂l_R/∂θ) = -E[∂²l_R/∂θ ∂θ^T].   (12)

Assuming that the second derivatives of l_R in (10) exist, it can be proven that the components of its Fisher information are

E[∂²l_R/∂θ_q ∂θ_{q'}] = -(1/2) tr(P (∂Σ/∂θ_q) P (∂Σ/∂θ_{q'}));   1 ≤ q, q' ≤ n(n+1)/2.   (13)
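The claim below Eq. (11), that A(A^T Σ A)^{-1}A^T coincides with P = Σ^{-1} - Σ^{-1}X(X^T Σ^{-1}X)^{-1}X^T Σ^{-1} whenever A^T X = 0 and A has full column rank n - p - 1, is easy to verify numerically (a sketch with randomly generated X and Σ):

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, p1 = 8, 3                                  # n observations, p + 1 fixed-effect columns
X = rng.standard_normal((n, p1))
L = rng.standard_normal((n, n))
Sigma = L @ L.T + n * np.eye(n)               # an arbitrary positive definite Sigma

A = null_space(X.T)                           # n x (n - p - 1) basis satisfying A'X = 0
Si = np.linalg.inv(Sigma)
P = Si - Si @ X @ np.linalg.inv(X.T @ Si @ X) @ X.T @ Si
P_reml = A @ np.linalg.inv(A.T @ Sigma @ A) @ A.T
print(np.allclose(P, P_reml))                 # True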

The estimator θ̂ obtained from the REMLE method is unbiased. However, the linear mixed model still yields a biased Var(β̂), because Var(β̂) is taken from (5) by substituting Σ̂ for Σ. The bias of the variances of the fixed-effect estimators, which constitute the diagonal elements of Var(β̂), is a downward bias under both the MLE and REMLE methods [17].
The multivariate linear model, which does not depend on random factors, was discussed by [1], [9]. A multivariate linear model without replications can be written in matrix form as

Y = XB + E,   (14)

where Y = [Y_ij], E = [ε_ij], X = [x_ia], and B = [β_aj]; i = 1, ..., n; j = 1, ..., s; a = 0, ..., p; with intercepts x_i0 = 1 for all i. For individual i, (14) can be presented in matrix and vector form as

y_i = B^T x_i + e_i,   (15)

where y_i = [Y_i1, Y_i2, ..., Y_is]^T, x_i = [1, x_i1, ..., x_ip]^T, e_i = [ε_i1, ε_i2, ..., ε_is]^T, and B is the (p+1) × s matrix with rows (β_01, β_02, ..., β_0s), (β_11, β_12, ..., β_1s), ..., (β_p1, β_p2, ..., β_ps).
The error vectors e_i in (15) are random vectors assumed to follow a multivariate normal distribution with zero mean vector. The rows of E are assumed independent, because each of them relates to a different observation, but the columns of E are still allowed to be correlated. Thus,

Cov(e_i, e_{i'}) = 0, i ≠ i';   Cov(e_i, e_i) = Σ_i   (16)

for all i = 1, 2, ..., n. The other assumption is that the s × s error variance matrix at each observation, Σ_i = Σ, is the same for all i and positive definite. Then (16) can be written as [13]

Cov(vec(E^T), vec(E^T)) = I_n ⊗ Σ,

where vec(E^T) indicates the vector obtained by stacking the elements of E by rows and I_n is the n × n identity matrix.
Moreover, estimation of B can be carried out by assuming that Σ is known. The likelihood function of (14) for n observations is

L(vec(B^T), vech(Σ)) = [1/√((2π)^{ns} |Σ|^n)] exp{-Q_MLM/2},   (17)

where Q_MLM = [vec(Y^T) - (X ⊗ I_s)vec(B^T)]^T (I_n ⊗ Σ)^{-1} [vec(Y^T) - (X ⊗ I_s)vec(B^T)].
Maximizing (17) under the constraint that Σ is known yields the estimator

B̂ = (X^T X)^{-1} X^T Y   or   vec(B̂^T) = ((X^T X)^{-1} X^T ⊗ I_s) vec(Y^T).   (18)

The estimation of Σ can be obtained by assuming that B is known. Maximizing (17) with B assumed known yields a biased variance estimator, i.e. Σ̂ = Y^T (I - X(X^T X)^{-1} X^T) Y / n. The unbiased estimator of the variance [1] is

S = Y^T (I - X(X^T X)^{-1} X^T) Y / (n - rank(X)).   (19)

Hence, the estimator in (18) is an unbiased estimator with variance matrix

Var(vec(B̂^T)) = (X^T X)^{-1} ⊗ Σ.   (20)
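The equivalence of the matrix and Kronecker forms of the estimator (18), together with the unbiased variance estimator (19), can be illustrated with simulated data (a sketch; all dimensions and values are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
n, p1, s = 20, 3, 2                            # n observations, p + 1 covariates, s responses
X = np.column_stack([np.ones(n), rng.standard_normal((n, p1 - 1))])
B = rng.standard_normal((p1, s))
Y = X @ B + rng.standard_normal((n, s))

M = np.linalg.solve(X.T @ X, X.T)              # (X'X)^{-1} X'
B_hat = M @ Y                                  # Eq. (18), matrix form
vecBt_hat = np.kron(M, np.eye(s)) @ Y.ravel()  # Eq. (18), vec/Kronecker form; Y.ravel() = vec(Y')
print(np.allclose(vecBt_hat, B_hat.ravel()))   # True: the two forms agree

H = X @ M                                      # hat matrix X (X'X)^{-1} X'
S = Y.T @ (np.eye(n) - H) @ Y / (n - np.linalg.matrix_rank(X))   # Eq. (19)
print(S)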

III. PARAMETER ESTIMATION FOR THE MULTIVARIATE LINEAR MIXED MODEL
The multivariate linear mixed model is an extension of the multivariate linear model. The model is constructed by adding a random component to (14) and assuming that each element of Y has a linear relation to the systematic part of the model. This model is called the multivariate linear mixed model without replication. There are three types of linear mixed model data, namely clustered, longitudinal, and replicated. This paper focuses on the linear mixed model without replication, i.e.

Y = XB + ZD + E,   (21)

where X and Z are known covariate matrices of size n × (p+1) and n × k respectively, B is a (p+1) × s matrix of unknown regression coefficients of the fixed effects, D is a k × s matrix of specific coefficients of the random effects, and E is an n × s matrix of errors. It is also assumed that the n rows of E are independent, that each row is N(0, Σ), and that the random effect D satisfies vec(D^T) ~ N(0, Ω). Thus, the distribution of the response Y in model (21) can be written in matrix and vector form as

vec(Y^T) ~ N((X ⊗ I_s)vec(B^T), (Z ⊗ I_s) Ω (Z ⊗ I_s)^T + (I_n ⊗ Σ)).   (22)

Let the variance matrix be V = (Z ⊗ I_s) Ω (Z ⊗ I_s)^T + (I_n ⊗ Σ), of size ns × ns; then the variance components of model (21) are the elements of θ = vech(V) = [θ_1, ..., θ_{ns(ns+1)/2}]^T.

The likelihood function of the probability density function (22) is

L(vec(B^T), vech(V)) = [1/√((2π)^{ns} |V|)] exp{-Q_MLMM/2},   (23)

where Q_MLMM = [vec(Y^T) - (X ⊗ I_s)vec(B^T)]^T V^{-1} [vec(Y^T) - (X ⊗ I_s)vec(B^T)].
Partially differentiating the ln-likelihood function of (23) with respect to vec(B^T) and θ_q and setting the derivatives equal to zero yields the estimator

vec(B̂^T) = [(X ⊗ I_s)^T V̂^{-1} (X ⊗ I_s)]^{-1} (X ⊗ I_s)^T V̂^{-1} vec(Y^T),   (24)

together with a nonlinear optimization for θ, with inequality constraints imposed on θ so that the positive definiteness requirements on the Ω and Σ matrices are satisfied. There is no closed-form solution for θ, so the estimate of θ is obtained by the Fisher scoring or Newton-Raphson algorithm. After an iterative computational process we have V̂ = V(θ̂), i.e.

V̂ = (Z ⊗ I_s) Ω̂ (Z ⊗ I_s)^T + (I_n ⊗ Σ̂).   (25)

The component estimators Ω̂ and Σ̂ can thus also be obtained from the iteration process for computing V̂. The estimator in (24) is an unbiased estimator with variance matrix

Var(vec(B̂^T)) = [(X ⊗ I_s)^T V̂^{-1} (X ⊗ I_s)]^{-1}.
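A small numerical sketch of the estimator (24) when Ω and Σ are taken as known (all dimensions and values below are invented for the illustration; Omega denotes Cov(vec(D^T)) as above):

import numpy as np

rng = np.random.default_rng(2)
n, p1, k, s = 12, 2, 3, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
Z = rng.standard_normal((n, k))
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])                 # s x s error variance matrix
Omega = 0.7 * np.eye(k * s)                    # Cov(vec(D')), assumed known here

Zk, Xk = np.kron(Z, np.eye(s)), np.kron(X, np.eye(s))
V = Zk @ Omega @ Zk.T + np.kron(np.eye(n), Sigma)          # variance matrix of vec(Y')
Vi = np.linalg.inv(V)

B = rng.standard_normal((p1, s))
y_vec = Xk @ B.ravel() + rng.multivariate_normal(np.zeros(n * s), V)   # simulate model (21)
vecBt_hat = np.linalg.solve(Xk.T @ Vi @ Xk, Xk.T @ Vi @ y_vec)         # Eq. (24)
var_vecBt = np.linalg.inv(Xk.T @ Vi @ Xk)                              # Var(vec(B-hat'))
print(vecBt_hat.reshape(p1, s))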

IV. ESTIMATOR PROPERTIES: CONSISTENCY, EFFICIENCY, AND ASYMPTOTIC NORMALITY
Let y_1, y_2, ..., y_n be vectors of random samples (observations) from a random vector y with distribution F_θ belonging to a family F = {F_θ, θ ∈ Θ}, where θ = [θ_1, ..., θ_k]^T. For θ_0 ∈ Θ, a subset N(θ_0) of Θ is a neighborhood of θ_0 iff N(θ_0) is a superset of an open set G containing θ_0 [8]. Assume Θ ⊂ R^k and that the following three regularity conditions hold on F [3], [14]:

(R1). For each θ ∈ Θ, the derivatives
∂l(y, θ)/∂θ_i,   ∂²l(y, θ)/∂θ_i ∂θ_{i'},   ∂³l(y, θ)/∂θ_i ∂θ_{i'} ∂θ_{i''}
exist for all y and i, i', i'' = 1, 2, ..., k.

(R2). For each θ_0 ∈ Θ, there exist functions g(y), h(y), and H(y) (possibly depending on θ_0) such that for θ in a neighborhood N(θ_0) the relations
|∂L(y, θ)/∂θ_i| ≤ g(y),   |∂²L(y, θ)/∂θ_i ∂θ_{i'}| ≤ h(y),   |∂³l(y, θ)/∂θ_i ∂θ_{i'} ∂θ_{i''}| ≤ H(y)
hold for all y, with
∫ g(y) dy < ∞,   ∫ h(y) dy < ∞,   E_θ{H(y)} < ∞   for θ ∈ N(θ_0).

(R3). For each θ ∈ Θ,
0 < ‖E_θ[∇∇^T l(y, θ)]‖ < ∞,
where ∇ = [∂/∂θ_1, ..., ∂/∂θ_k]^T denotes the gradient and ‖·‖ a matrix norm.

Consider random variables ξ_1, ξ_2, ..., and ξ on a probability space (Ω, A, P). The sequence ξ_m is said to converge with probability 1 (or strongly, almost surely, almost everywhere) to ξ if P(lim_{m→∞} ξ_m = ξ) = 1. This is written ξ_m →^{wp1} ξ; m → ∞. An equivalent condition for convergence wp1 is

lim_{m→∞} P(‖ξ_l - ξ‖ < ε, for all l ≥ m) = 1,   for every ε > 0 [14].   (26)
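The wp1 convergence just defined is the mode of convergence claimed for the ML solutions in the Theorem below. As a rough illustration of our own (not from the paper), a normal variance estimated from the first m observations of one long simulated sample path settles near the true value as m grows:

import numpy as np

rng = np.random.default_rng(3)
theta_true = 2.0
y = rng.normal(0.0, np.sqrt(theta_true), size=100_000)

for m in (10, 100, 1_000, 10_000, 100_000):
    theta_hat = np.mean((y[:m] - y[:m].mean()) ** 2)   # ML estimate of the variance from m observations
    print(m, round(float(theta_hat), 4))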

Lemma:
Let L(y, θ) and l(y, θ) be the likelihood and ln-likelihood functions, respectively, and assume that the regularity conditions (R1) and (R2) hold. Then the Fisher information is the variance of the random variable ∂l(y, θ)/∂θ; i.e.,

I(θ) = Var(∂l(y, θ)/∂θ) = -E[∂²l(y, θ)/∂θ ∂θ^T] = E[∇l(y, θ) ∇^T l(y, θ)].   (27)

Proof:
L(y, θ) is a likelihood function; thus it can be considered as a joint probability density function, so ∫ L(y, θ) dy = 1. Taking the partial derivative with respect to the vector θ gives

∫ ∂L(y, θ)/∂θ dy = 0.   (28)

Equation (28) can be written as ∫ [∂L(y, θ)/∂θ / L(y, θ)] L(y, θ) dy = 0, or

∫ [∂ ln L(y, θ)/∂θ] L(y, θ) dy = 0.   (29)

Writing (29) as an expectation, we have established

E[∂l(y, θ)/∂θ] = 0.   (30)

This means that the mean vector of the random variable ∂l(y, θ)/∂θ is 0. If (29) is partially differentiated again, it follows that

∫ {[∂² ln L(y, θ)/∂θ ∂θ^T] L(y, θ) + [∂ ln L(y, θ)/∂θ] [∂L(y, θ)/∂θ^T]} dy = 0.

This is equivalent to

∫ [∂² ln L(y, θ)/∂θ ∂θ^T] L(y, θ) dy + ∫ [∂ ln L(y, θ)/∂θ] [∂ ln L(y, θ)/∂θ^T] L(y, θ) dy = 0.

This last equation can be written as an expectation, i.e.

E[∂²l(y, θ)/∂θ ∂θ^T] + E[∇l(y, θ) ∇^T l(y, θ)] = 0.

By using (30) and the definition of variance, the latter expression can be rewritten as E[∂²l(y, θ)/∂θ ∂θ^T] + Var(∂l(y, θ)/∂θ) = 0, so that

I(θ) = Var(∂l(y, θ)/∂θ) = -E[∂²l(y, θ)/∂θ ∂θ^T] = E[∇l(y, θ) ∇^T l(y, θ)].
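The information identity (27) can be checked by Monte Carlo for a simple case of our own choosing, y ~ N(μ, 1) with θ = μ, where ∂l/∂μ = y - μ and ∂²l/∂μ² = -1, so both sides of (27) equal 1:

import numpy as np

rng = np.random.default_rng(4)
mu = 1.5
y = rng.normal(mu, 1.0, size=200_000)

score = y - mu                       # dl/dmu for y ~ N(mu, 1)
hessian = -np.ones_like(y)           # d^2 l / dmu^2
print(np.var(score))                 # approx. 1 = Var of the score
print(-np.mean(hessian))             # exactly 1 = -E[second derivative]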

Theorem:
Let the n observation vectors y_1, y_2, ..., y_n be iid with distribution F_θ for θ ∈ Θ, and assume that the regularity conditions (R1), (R2), and (R3) hold on the family F. Then the likelihood equations admit a sequence of solutions {θ̂_m} satisfying

a). strong consistency: θ̂_m →^{wp1} θ; m → ∞;   (31)

b). asymptotic normality and efficiency: θ̂_m →^d AN(θ, (1/m) I^{-1}(θ)),   (32)

where I(θ) = E_θ[∇l(y, θ) ∇^T l(y, θ)] and AN(·, ·) denotes a multivariate asymptotic normal distribution.

Proof:
Expand the function ∂l(y, θ)/∂θ in a second-order Taylor series about θ_0 and evaluate it at θ̂_m:

∂l(y, θ̂_m)/∂θ = ∂l(y, θ_0)/∂θ + {∇∇^T l(y, θ_0) + (1/2)[(θ̂_m - θ_0)^T ⊗ (1 1^T)] H(y, θ*_m)}(θ̂_m - θ_0),   (33)

where θ*_m lies between θ_0 and θ̂_m and H(y, θ*_m) = ∇(∇^T ∇) l(y, θ*_m) denotes the array of third partial derivatives. But ∂l(y, θ̂_m)/∂θ = 0, so the Taylor series (33) can be written as

{-(1/m) ∇∇^T l(y, θ_0) - (1/(2m))[(θ̂_m - θ_0)^T ⊗ (1 1^T)] H(y, θ*_m)} √m (θ̂_m - θ_0) = (1/√m) ∂l(y, θ_0)/∂θ.   (34)

Therefore, putting a_m = (1/√m) Σ_{i=1}^m ∂l(y_i, θ_0)/∂θ, we have
E_{θ_0}[a_m] = (1/√m) Σ_{i=1}^m E_{θ_0}[∂l(y_i, θ_0)/∂θ] = 0,
Var[a_m] = (1/√m)² m I(θ_0) = I(θ_0);
B_m = -(1/m) Σ_{i=1}^m ∇∇^T l(y_i, θ_0), so that E_{θ_0}[B_m] = I(θ_0); and
C_m = -(1/(2m)) Σ_{i=1}^m H(y_i, θ*_m), so that E_{θ*_m}[C_m] = -(1/(2m)) m E_{θ*_m}[H(y, θ*_m)] = -(1/2) E_{θ*_m}[H(y, θ*_m)].
Thus, (34) can be written as

{B_m + [(θ̂_m - θ_0)^T ⊗ (1 1^T)] C_m} √m (θ̂_m - θ_0) = a_m.   (35)

By applying the Law of Large Numbers (LLN), it can be shown that

(1/√m) a_m →^{wp1} 0,   B_m →^{wp1} I(θ_0),   C_m →^{wp1} -(1/2) E_{θ*_m}[H(y, θ*_m)].   (36)

Next, it will be seen that H(y, θ*_m) is bounded in probability. Let c_0 be a constant; ‖θ̂_m - θ_0‖ < c_0 implies ‖θ*_m - θ_0‖ < c_0, such that

(1/m) ‖H(y, θ*_m)‖ ≤ (1/m) Σ_{i=1}^m ‖H(y_i, θ*_m)‖ ≤ (1/m) Σ_{i=1}^m H(y_i, θ_0).   (37)

By condition (R2), E_θ{H(y)} < ∞, and by applying the LLN it can be shown that
(1/m) Σ_{i=1}^m H(y_i, θ_0) →^{wp1} E_{θ_0}[H(y, θ_0)].
Let ε > 0 be given and consider the bound 1 + E_{θ_0}[H(y, θ_0)]. Choose M_1 and M_2 so that

m ≥ M_1 implies P[‖θ̂_m - θ_0‖ < c_0] ≥ 1 - ε/2,   (38)

m ≥ M_2 implies P[|(1/m) Σ_{i=1}^m H(y_i, θ_0) - E_{θ_0}[H(y, θ_0)]| < 1] ≥ 1 - ε/2.   (39)

It follows from (37), (38) and (39) that
m ≥ max{M_1, M_2} implies P[(1/m) ‖H(y, θ*_m)‖ ≤ 1 + E_{θ_0}[H(y, θ_0)]] ≥ 1 - ε;
hence (1/m) H(y, θ*_m) is bounded in probability. From this boundedness, the convergence results in (36), and Eq. (35), it can be concluded that (θ̂_m - θ_0) →^{wp1} 0 for all θ_0 ∈ Θ. The existence of a sequence of solutions {θ̂_m} satisfying θ̂_m →^{wp1} θ; m → ∞ is thus proven.
Furthermore, by using the Central Limit Theorem (CLT), we have

a_m →^d AN(0, I(θ_0)).   (40)

By (36), B_m →^p I(θ_0), and by (40), a_m →^d AN(0, I(θ_0)). Finally, combining these results with Eq. (35) shows that
√m (θ̂_m - θ_0) →^d AN(0, I^{-1}(θ_0)).
This expression can be written as (θ̂_m - θ) →^d AN(0, (1/m) I^{-1}(θ)), or θ̂_m →^d AN(θ, (1/m) I^{-1}(θ)) [3], [14].

Now, let {τ̂_n} be a sequence of solutions to the likelihood function (23), where
τ̂_n = [(vec(B̂^T))^T, (vech(V̂))^T]^T = [τ̂_{n1}, ..., τ̂_{n(ps+s+ns(ns+1)/2)}]^T.
By the above Theorem, τ̂_n converges with probability 1 to τ, that is, strong consistency τ̂_n →^{wp1} τ; n → ∞, and it also enjoys efficiency and asymptotic normality, i.e.
√n (τ̂_n - τ) →^d AN(0, I^{-1}(τ))   or   τ̂_n →^d AN(τ, n^{-1} I^{-1}(τ)).

V. CONCLUSION
This paper has discussed the estimation of the multivariate linear mixed model, or multivariate variance components model, with an equal number of replications. The results show that the parameter estimation of the fixed effects yields unbiased estimators, whereas the estimation of the random effects or variance components yields biased estimators. Moreover, assuming that the likelihood and ln-likelihood functions satisfy certain regularity conditions, it can be proved that the estimators obtained as a solution set of the likelihood equations are strongly consistent, asymptotically normal, and asymptotically efficient for large sample sizes.
Based on the discussion in the previous sections, the following theoretical conclusions can be drawn:
1). The MLE estimators of the parameters in the multivariate linear mixed model Y = XB + ZD + E, with variance V = (Z ⊗ I_s) Ω (Z ⊗ I_s)^T + (I_n ⊗ Σ), are
vec(B̂^T) = [(X ⊗ I_s)^T V̂^{-1} (X ⊗ I_s)]^{-1} (X ⊗ I_s)^T V̂^{-1} vec(Y^T),
V̂ = (Z ⊗ I_s) Ω̂ (Z ⊗ I_s)^T + (I_n ⊗ Σ̂),
and
Var(vec(B̂^T)) = [(X ⊗ I_s)^T V̂^{-1} (X ⊗ I_s)]^{-1}.

2). Let {τ̂_n} be the sequence with τ̂_n = [(vec(B̂^T))^T, (vech(V̂))^T]^T; then by applying the proposed theorem it can be shown that τ̂_n is strongly consistent, i.e. τ̂_n →^{wp1} τ; n → ∞, and asymptotically normal and efficient, i.e. τ̂_n →^d AN(τ, n^{-1} I^{-1}(τ)).


REFERENCES
[1] R. Christensen, Linear Models for Multivariate, Time Series, and Spatial Data. New York: Springer-Verlag, 1991, ch. 1.
[2] G. D. Garson, "Linear mixed models: random effects, hierarchical linear, multilevel, random coefficients, and repeated measures models," lecture notes, Department of Public Administration, North Carolina State University, 2008, pp. 1-49.
[3] R. V. Hogg, J. W. McKean, and A. T. Craig, Introduction to Mathematical Statistics, 6th ed., Intern. ed., New Jersey: Pearson-Prentice Hall, 2005, ch. 6, 7.
[4] J. Jiang, Linear and Generalized Linear Mixed Models and Their Applications, New York: Springer Science+Business Media, LLC, 2007, ch. 2.
[5] H. A. Kalaian and S. W. Raudenbush, "A multivariate mixed linear model for meta-analysis," Psychological Methods, vol. 1(3), pp. 227-235, Mar. 1996.
[6] T. Kubokawa and M. S. Srivastava, "Prediction in multivariate mixed linear models," Discussion Papers CIRJE-F-180, pp. 1-24, Oct. 2002.
[7] L. R. LaMotte, "A direct derivation of the REML likelihood function," Statistical Papers, vol. 48, pp. 321-327, 2007.
[8] S. Lipschutz, Schaum's Outline of Theory and Problems of General Topology. New York: McGraw-Hill Book Co., 1965, ch. 1.
[9] K. E. Muller and P. W. Stewart, Linear Model Theory: Univariate, Multivariate, and Mixed Models, New Jersey: John Wiley & Sons, Inc., 2006, ch. 2-5, 12-17.
[10] F. Picard, E. Lebarbier, E. Budinska, and S. Robin, "Joint segmentation of multivariate Gaussian processes using mixed linear models," Statistics for Systems Biology Group, Research Report (5), pp. 1-10, 2007. http://genome.jouy.inra.fr/ssb/
[11] E. B. Samani and M. Ganjali, "A multivariate latent variable model for mixed continuous and ordinal responses," World Applied Sciences Journal, vol. 3(2), pp. 294-299, 2008.
[12] M. D. Sammel, L. M. Ryan, and J. M. Legler, "Latent variable models for mixed discrete and continuous outcomes," Journal of the Royal Statistical Society B, vol. 59(3), pp. 667-678, 1997.
[13] S. Sawyer, "Multivariate linear models," unpublished, pp. 1-29, 2008. www.math.wustl.edu/~sawyer/handouts/multivar.pdf, accessed 4/18/2008.
[14] R. J. Serfling, Approximation Theorems of Mathematical Statistics, New York: John Wiley & Sons, Inc., 1980, ch. 1, 4.
[15] C. Shin, "On the multivariate random and mixed coefficient analysis," Ph.D. dissertation, Dept. of Stat., Iowa State University, Ames, Iowa, 1995.
[16] P. M. Visscher, B. Benyamin, and I. White, "The use of linear mixed models to estimate variance components from data on twin pairs by maximum likelihood," Twin Research, vol. 7(6), pp. 670-674, 2004.
[17] B. T. West, K. B. Welch, and A. T. Galecki, Linear Mixed Models: A Practical Guide Using Statistical Software. New York: Chapman & Hall, 2007.
