
Hindawi Publishing Corporation

Journal of Applied Mathematics


Volume 2013, Article ID 296185, 8 pages
http://dx.doi.org/10.1155/2013/296185

Research Article
On the Kronecker Products and Their Applications

Huamin Zhang1,2 and Feng Ding1


1 Key Laboratory of Advanced Process Control for Light Industry of Ministry of Education, Jiangnan University, Wuxi 214122, China
2 Department of Mathematics and Physics, Bengbu College, Bengbu 233030, China

Correspondence should be addressed to Feng Ding; fding@jiangnan.edu.cn

Received 10 March 2013; Accepted 6 June 2013

Academic Editor: Song Cen

Copyright © 2013 H. Zhang and F. Ding. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.

This paper studies the properties of the Kronecker product related to the mixed matrix products, the vector operator, and the vec-permutation matrix and gives several theorems and their proofs. In addition, we establish the relations between the singular values of two matrices and those of their Kronecker product, and the relations between the determinant, the trace, the rank, and the polynomial matrix of the Kronecker products.

1. Introduction

The Kronecker product, named after the German mathematician Leopold Kronecker (December 7, 1823–December 29, 1891), is very important in the areas of linear algebra and signal processing. In fact, the Kronecker product should be called the Zehfuss product, because Johann Georg Zehfuss published a paper in 1858 which contained the well-known determinant conclusion |A ⊗ B| = |A|^n |B|^m for square matrices A and B of orders m and n [1].

The Kronecker product has wide applications in system theory [2–5], matrix calculus [6–9], matrix equations [10, 11], system identification [12–15], and other special fields [16–19]. Steeb and Wilhelm extended the exponential-function formulas and the trace formulas of exponential functions of the Kronecker products [20]. For estimating the upper and lower dimensions of the ranges of the two well-known linear transformations T1(X) = X − AXB and T2(X) = AX − XB, Chuai and Tian established some rank equalities and inequalities for the Kronecker products [21]. Corresponding to two different kinds of matrix partition, Koning, Neudecker, and Wansbeek developed two generalizations of the Kronecker product and two related generalizations of the vector operator [22]. The Kronecker product plays an important role in the theory of linear matrix equations, and the solution of the Sylvester and Sylvester-like equations is a hot research area. Recently, innovational and computationally efficient numerical algorithms based on the hierarchical identification principle for the generalized Sylvester matrix equations [23–25] and coupled matrix equations [10, 26] were proposed by Ding and Chen. On the other hand, iterative algorithms for the extended Sylvester-conjugate matrix equations were discussed in [27–29]. Other related work is included in [30–32].

This paper establishes a new result about the singular values of the Kronecker product and gives a definition of the vec-permutation matrix. In addition, we prove the mixed products theorem and the conclusions on the vector operator by a different method. This paper is organized as follows. Section 2 gives the definition of the Kronecker product. Section 3 lists some properties based on the mixed products theorem. Section 4 presents some interesting results about the vector operator and the vec-permutation matrices. Section 5 discusses the determinant, trace, and rank properties and the properties of polynomial matrices.

2. The Definition and the Basic Properties of the Kronecker Product

Let F be a field, such as R or C. For any matrices A = [a_ij] ∈ F^{m×n} and B ∈ F^{p×q}, their Kronecker product

(i.e., the direct product or tensor product), denoted as A ⊗ B, is defined by

A ⊗ B = [a_ij B] =
[ a_11 B   a_12 B   ⋯   a_1n B ]
[ a_21 B   a_22 B   ⋯   a_2n B ]
[    ⋮          ⋮                ⋮     ]
[ a_m1 B   a_m2 B   ⋯   a_mn B ]
∈ F^{(mp)×(nq)}.    (1)

It is clear that the Kronecker product of two diagonal matrices is a diagonal matrix and the Kronecker product of two upper (lower) triangular matrices is an upper (lower) triangular matrix. Let A^T and A^H denote the transpose and the Hermitian transpose of the matrix A, respectively, and let I_m denote the identity matrix of order m. The following basic properties are obvious:

(1) I_m ⊗ A = diag[A, A, ..., A],
(2) if α = [a_1, a_2, ..., a_m]^T and β = [b_1, b_2, ..., b_n]^T, then αβ^T = α ⊗ β^T = β^T ⊗ α ∈ F^{m×n},
(3) if A = [A_ij] is a block matrix, then for any matrix B, A ⊗ B = [A_ij ⊗ B],
(4) (μA) ⊗ B = A ⊗ (μB) = μ(A ⊗ B),
(5) (A + B) ⊗ C = A ⊗ C + B ⊗ C,
(6) A ⊗ (B + C) = A ⊗ B + A ⊗ C,
(7) A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C = A ⊗ B ⊗ C,
(8) (A ⊗ B)^T = A^T ⊗ B^T,
(9) (A ⊗ B)^H = A^H ⊗ B^H.

Property (2) indicates that α and β^T are commutative. Property (7) shows that A ⊗ B ⊗ C is unambiguous.
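As a quick numerical illustration, NumPy's `kron` implements exactly this block definition; the following sketch (with arbitrary example matrices) checks the block structure of definition (1) and properties (7) and (8):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# Definition (1): A ⊗ B places the block a_ij * B at block position (i, j).
K = np.kron(A, B)
assert K.shape == (A.shape[0] * B.shape[0], A.shape[1] * B.shape[1])
assert np.array_equal(K[:2, :2], 1 * B)   # a_11 * B block
assert np.array_equal(K[2:, 2:], 4 * B)   # a_22 * B block

# Property (7): associativity; property (8): transpose distributes over ⊗.
C = np.array([[1, 0], [2, 1]])
assert np.array_equal(np.kron(A, np.kron(B, C)), np.kron(np.kron(A, B), C))
assert np.array_equal(np.kron(A, B).T, np.kron(A.T, B.T))
```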
3. The Properties of the Mixed Products

This section discusses the properties based on the mixed products theorem [6, 33, 34].

Theorem 1. Let A ∈ F^{m×n} and B ∈ F^{p×q}; then

A ⊗ B = (A ⊗ I_p)(I_n ⊗ B) = (I_m ⊗ B)(A ⊗ I_q).    (2)

Proof. According to the definition of the Kronecker product and the matrix multiplication, we have

A ⊗ B = [a_ij B] = [a_ij I_p] · diag[B, B, ..., B] = (A ⊗ I_p)(I_n ⊗ B),

A ⊗ B = [a_ij B] = diag[B, B, ..., B] · [a_ij I_q] = (I_m ⊗ B)(A ⊗ I_q).    (3)

From Theorem 1, we have the following corollary.

Corollary 2. Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

A ⊗ B = (A ⊗ I_n)(I_m ⊗ B) = (I_m ⊗ B)(A ⊗ I_n).    (4)

This means that I_m ⊗ B and A ⊗ I_n are commutative for square matrices A and B.

Using Theorem 1, we can prove the following mixed products theorem.

Theorem 3. Let A = [a_ij] ∈ F^{m×n}, C = [c_ij] ∈ F^{n×p}, B ∈ F^{q×r}, and D ∈ F^{r×s}. Then

(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).    (5)

Proof. According to Theorem 1, we have

(A ⊗ B)(C ⊗ D) = (A ⊗ I_q)(I_n ⊗ B)(C ⊗ I_r)(I_p ⊗ D)
               = (A ⊗ I_q)[(I_n ⊗ B)(C ⊗ I_r)](I_p ⊗ D)
               = (A ⊗ I_q)(C ⊗ B)(I_p ⊗ D)
               = (A ⊗ I_q)[(C ⊗ I_q)(I_p ⊗ B)](I_p ⊗ D)
               = [(A ⊗ I_q)(C ⊗ I_q)][(I_p ⊗ B)(I_p ⊗ D)]
               = [(AC) ⊗ I_q][I_p ⊗ (BD)]
               = (AC) ⊗ (BD).    (6)
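The mixed products theorem (Theorem 3) is easy to check numerically; a small NumPy sketch with arbitrary random matrices of compatible sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Dimensions chosen so that both AC and BD are defined,
# matching Theorem 3: A (m×n), C (n×p), B (q×r), D (r×s).
A = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 2))
D = rng.standard_normal((2, 5))

lhs = np.kron(A, B) @ np.kron(C, D)   # (A ⊗ B)(C ⊗ D)
rhs = np.kron(A @ C, B @ D)           # (AC) ⊗ (BD)
assert np.allclose(lhs, rhs)
```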
Let A^{[1]} := A and define the Kronecker power by

A^{[k+1]} := A^{[k]} ⊗ A = A ⊗ A^{[k]},  k = 1, 2, ....    (7)

From Theorem 3, we have the following corollary [7].

Corollary 4. If the following matrix products exist, then one has

(1) (A_1 ⊗ B_1)(A_2 ⊗ B_2) ⋯ (A_p ⊗ B_p) = (A_1 A_2 ⋯ A_p) ⊗ (B_1 B_2 ⋯ B_p),
(2) (A_1 ⊗ A_2 ⊗ ⋯ ⊗ A_p)(B_1 ⊗ B_2 ⊗ ⋯ ⊗ B_p) = (A_1 B_1) ⊗ (A_2 B_2) ⊗ ⋯ ⊗ (A_p B_p),
(3) [AB]^{[k]} = A^{[k]} B^{[k]}.

A square matrix A is said to be a normal matrix if and only if A^H A = A A^H. A square matrix A is said to be a unitary matrix if and only if A^H A = A A^H = I. Straightforward calculation gives the following conclusions [6, 7, 33, 34].

Theorem 5. For any square matrices A and B,

(1) if A^{−1} and B^{−1} exist, then (A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1},
(2) if A and B are normal matrices, then A ⊗ B is a normal matrix,
(3) if A and B are unitary (orthogonal) matrices, then A ⊗ B is a unitary (orthogonal) matrix.
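The inverse property in Theorem 5(1), for instance, can be confirmed numerically; a sketch with two arbitrary invertible matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # invertible
B = np.array([[3.0, 0.0],
              [4.0, 1.0]])   # invertible

# Theorem 5(1): (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}.
lhs = np.linalg.inv(np.kron(A, B))
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))
assert np.allclose(lhs, rhs)
```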

Let λ[A] := {λ_1, λ_2, ..., λ_m} denote the eigenvalues of A and let σ[A] := {σ_1, σ_2, ..., σ_r} denote the nonzero singular values of A. According to the definition of the eigenvalue and Theorem 3, we have the following conclusions [34].

Theorem 6. Let A ∈ F^{m×m} and B ∈ F^{n×n}, and let k and l be positive integers. Then λ[A^k ⊗ B^l] = {λ_i^k μ_j^l | i = 1, 2, ..., m, j = 1, 2, ..., n} = λ[B^l ⊗ A^k]. Here, λ[A] = {λ_1, λ_2, ..., λ_m} and λ[B] = {μ_1, μ_2, ..., μ_n}.
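Theorem 6 can be checked by comparing eigenvalue multisets; restricting the sketch to symmetric matrices keeps all eigenvalues real, so the two sorted lists can be compared directly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); A = A + A.T   # symmetric → real eigenvalues
B = rng.standard_normal((2, 2)); B = B + B.T
k, l = 2, 3

lam = np.linalg.eigvalsh(A)
mu = np.linalg.eigvalsh(B)
# Theorem 6: the eigenvalues of A^k ⊗ B^l are all products λ_i^k · μ_j^l.
products = np.sort([x**k * y**l for x in lam for y in mu])

M = np.kron(np.linalg.matrix_power(A, k), np.linalg.matrix_power(B, l))
assert np.allclose(products, np.linalg.eigvalsh(M))
```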

According to the definition of the singular value and Theorem 3, for any matrices A and B, we have the next theorem.

Theorem 7. Let A ∈ F^{m×n} and B ∈ F^{p×q}. If rank[A] = r, σ[A] = {σ_1, σ_2, ..., σ_r}, rank[B] = s, and σ[B] = {ρ_1, ρ_2, ..., ρ_s}, then σ[A ⊗ B] = {σ_i ρ_j | i = 1, 2, ..., r, j = 1, 2, ..., s} = σ[B ⊗ A].
[0 0 ⋅⋅⋅ A] [(B)𝑝 ]
Proof. According to the singular value decomposition theorem, there exist unitary matrices U, V and W, Q which satisfy

A = U [Σ 0; 0 0] V,  B = W [Γ 0; 0 0] Q,    (8)

where Σ = diag[σ_1, σ_2, ..., σ_r] and Γ = diag[ρ_1, ρ_2, ..., ρ_s]. According to Corollary 4, we have

A ⊗ B = {U [Σ 0; 0 0] V} ⊗ {W [Γ 0; 0 0] Q}
      = (U ⊗ W) {[Σ 0; 0 0] ⊗ [Γ 0; 0 0]} (V ⊗ Q)
      = (U ⊗ W) [Σ ⊗ Γ 0; 0 0] (V ⊗ Q).    (9)

Since U ⊗ W and V ⊗ Q are unitary matrices and Σ ⊗ Γ = diag[σ_1 ρ_1, σ_1 ρ_2, ..., σ_1 ρ_s, ..., σ_r ρ_s], this proves the theorem.

According to Theorem 7, we have the next corollary.

Corollary 8. For any matrices A, B, and C, one has σ[A ⊗ B ⊗ C] = σ[C ⊗ B ⊗ A].
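Theorem 7 says the nonzero singular values of A ⊗ B are exactly the pairwise products σ_i ρ_j; a NumPy sketch (random rectangular matrices, generically of full rank):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 4))

# All pairwise products σ_i(A) · ρ_j(B), sorted in decreasing order.
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
products = np.sort(np.outer(sA, sB).ravel())[::-1]

sK = np.linalg.svd(np.kron(A, B), compute_uv=False)
r = len(products)            # rank[A] · rank[B] nonzero singular values
assert np.allclose(sK[:r], products)
assert np.allclose(sK[r:], 0)   # the remaining singular values vanish
```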
4. The Properties of the Vector Operator and the Vec-Permutation Matrix

In this section, we introduce a vector-valued operator and a vec-permutation matrix.

Let A = [a_1, a_2, ..., a_n] ∈ F^{m×n}, where a_j ∈ F^m, j = 1, 2, ..., n; then the vector col[A] is defined by

col[A] :=
[ a_1 ]
[ a_2 ]
[  ⋮  ]
[ a_n ]
∈ F^{mn}.    (10)

Theorem 9. Let A ∈ F^{m×n}, B ∈ F^{n×p}, and C ∈ F^{p×n}. Then

(1) (I_p ⊗ A) col[B] = col[AB],
(2) (A ⊗ I_p) col[C] = col[CA^T].

Proof. Let (B)_i denote the i-th column of the matrix B; we have

(I_p ⊗ A) col[B] = diag[A, A, ..., A] [(B)_1; (B)_2; ...; (B)_p]
                 = [A(B)_1; A(B)_2; ...; A(B)_p]
                 = [(AB)_1; (AB)_2; ...; (AB)_p] = col[AB].    (11)

Similarly, we have

(A ⊗ I_p) col[C] = [a_ij I_p] [(C)_1; (C)_2; ...; (C)_n]
                 = [a_11 (C)_1 + a_12 (C)_2 + ⋯ + a_1n (C)_n; ...; a_m1 (C)_1 + a_m2 (C)_2 + ⋯ + a_mn (C)_n]
                 = [C(A^T)_1; C(A^T)_2; ...; C(A^T)_m]
                 = [(CA^T)_1; (CA^T)_2; ...; (CA^T)_m] = col[CA^T].    (12)

Theorem 10. Let A ∈ F^{m×n}, B ∈ F^{n×p}, and C ∈ F^{p×q}. Then

col[ABC] = (C^T ⊗ A) col[B].    (13)

Proof. According to Theorems 9 and 1, we have

col[ABC] = col[(AB)C]
         = (C^T ⊗ I_m) col[AB]
         = (C^T ⊗ I_m)(I_p ⊗ A) col[B]
         = [(C^T ⊗ I_m)(I_p ⊗ A)] col[B]
         = (C^T ⊗ A) col[B].    (14)

Theorem 10 plays an important role in solving matrix equations [25, 35–37], system identification [38–54], and control theory [55–58].
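In NumPy terms, col[·] is column-major (Fortran-order) flattening, so Theorems 9 and 10 can be sketched as follows; `col` is a local helper name, and `reshape(..., order="F")` plays the role of the column-stacking operator:

```python
import numpy as np

def col(M):
    # col[M]: stack the columns of M into one long vector.
    return M.reshape(-1, 1, order="F")

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))

# Theorem 10: col[ABC] = (C^T ⊗ A) col[B].
assert np.allclose(col(A @ B @ C), np.kron(C.T, A) @ col(B))

# Theorem 9(1): (I_p ⊗ A) col[B] = col[AB], with B ∈ F^{n×p}.
p = B.shape[1]
assert np.allclose(np.kron(np.eye(p), A) @ col(B), col(A @ B))
```

This identity is the standard route from a matrix equation such as AXB = C to an ordinary linear system in col[X].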
Let e_in denote an n-dimensional column vector which has 1 in the i-th position and 0's elsewhere; that is,

e_in := [0, 0, ..., 0, 1, 0, ..., 0]^T.    (15)

Define the vec-permutation matrix

P_mn :=
[ I_m ⊗ e_1n^T ]
[ I_m ⊗ e_2n^T ]
[       ⋮       ]
[ I_m ⊗ e_nn^T ]
∈ R^{mn×mn},    (16)

which can be expressed as [6, 7, 33, 37]

∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm ⊗ e_kn)^T.    (17)

These two definitions of the vec-permutation matrix are equivalent; that is,

∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm ⊗ e_kn)^T = P_mn.    (18)

In fact, according to Theorem 3 and the basic properties of the Kronecker product, we have

∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm ⊗ e_kn)^T
  = ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm^T ⊗ e_kn^T)
  = ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn e_jm^T) ⊗ (e_jm e_kn^T)
  = ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm^T) ⊗ (e_jm ⊗ e_kn^T)
  = ∑_{j=1}^{m} ∑_{k=1}^{n} [e_kn ⊗ (e_jm^T ⊗ e_jm) ⊗ e_kn^T]
  = ∑_{k=1}^{n} [e_kn ⊗ ∑_{j=1}^{m} (e_jm^T ⊗ e_jm) ⊗ e_kn^T]
  = ∑_{k=1}^{n} [e_kn ⊗ I_m ⊗ e_kn^T]
  = [ I_m ⊗ e_1n^T ; I_m ⊗ e_2n^T ; ... ; I_m ⊗ e_nn^T ]
  = P_mn.    (19)

Based on the definition of the vec-permutation matrix, we have the following conclusions.

Theorem 11. According to the definition of P_mn, one has

(1) P_mn^T = P_nm,
(2) P_mn^T P_mn = P_mn P_mn^T = I_mn.

That is, P_mn is an (mn) × (mn) permutation matrix.
Proof. According to the definition of P_mn, Theorem 3, and the basic properties of the Kronecker product, we have

P_mn^T = [I_m ⊗ e_1n, I_m ⊗ e_2n, ..., I_m ⊗ e_nn]; writing out these blocks entrywise and regrouping them gives

P_mn^T = [ I_n ⊗ e_1m^T ; I_n ⊗ e_2m^T ; ... ; I_n ⊗ e_mm^T ] = P_nm,    (20)

P_mn P_mn^T = [ I_m ⊗ (e_in^T e_jn) ]_{i,j=1}^{n} = diag[I_m, I_m, ..., I_m] = I_mn,    (21)

P_mn^T P_mn = ∑_{i=1}^{n} (I_m ⊗ e_in)(I_m ⊗ e_in^T) = I_m ⊗ [∑_{i=1}^{n} e_in e_in^T] = I_m ⊗ I_n = I_mn.    (22)

For any matrix A ∈ F^{m×n}, we have col[A] = P_mn col[A^T].

Theorem 12. If A ∈ F^{m×n} and B ∈ F^{p×q}, then one has P_mp (A ⊗ B) P_nq^T = B ⊗ A.

Proof. Let B := [b_ij] = [ B^1 ; B^2 ; ... ; B^p ], where B^i ∈ F^{1×q} denotes the i-th row of B, i = 1, 2, ..., p, j = 1, 2, ..., q. According to the definition of P_mn and the Kronecker product, we have

P_mp (A ⊗ B) P_nq^T
  = [ I_m ⊗ e_1p^T ; ... ; I_m ⊗ e_pp^T ] [(A)_1 ⊗ B, (A)_2 ⊗ B, ..., (A)_n ⊗ B] P_nq^T
  = [ A ⊗ B^1 ; A ⊗ B^2 ; ... ; A ⊗ B^p ] [I_n ⊗ e_1q, I_n ⊗ e_2q, ..., I_n ⊗ e_qq]
  = [ b_ij A ]_{i=1,...,p; j=1,...,q}
  = B ⊗ A.    (23)

From Theorem 12, we have the following corollaries.

Corollary 13. If A ∈ F^{m×n}, then P_mr (A ⊗ I_r) P_nr^T = I_r ⊗ A.
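Theorems 11 and 12 can be verified by building P_mn directly from definition (16); a NumPy sketch (`perm` and `col` are local helper names, not library routines):

```python
import numpy as np

def perm(m, n):
    # P_mn from definition (16): stack the blocks I_m ⊗ e_in^T, i = 1..n.
    return np.vstack([np.kron(np.eye(m), np.eye(n)[i].reshape(1, -1))
                      for i in range(n)])

def col(M):
    return M.reshape(-1, 1, order="F")   # column-stacking operator

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 5))

P = perm(2, 3)
assert np.allclose(P.T, perm(3, 2))           # Theorem 11(1): P_mn^T = P_nm
assert np.allclose(P.T @ P, np.eye(6))        # Theorem 11(2): orthogonality
assert np.allclose(col(A), P @ col(A.T))      # col[A] = P_mn col[A^T]
assert np.allclose(perm(2, 4) @ np.kron(A, B) @ perm(3, 5).T,
                   np.kron(B, A))             # Theorem 12
```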
Corollary 14. If A ∈ F^{m×n} and B ∈ F^{n×m}, then

B ⊗ A = P_mn (A ⊗ B) P_nm^T = P_mn [(A ⊗ B) P_mn^2] P_nm^T.    (24)

That is, λ[B ⊗ A] = λ[(A ⊗ B) P_mn^2]. When A ∈ F^{n×n} and B ∈ F^{t×t}, one has B ⊗ A = P_nt (A ⊗ B) P_nt^T. That is, if A and B are square matrices, then A ⊗ B is similar to B ⊗ A.

5. The Scalar Properties and the Polynomial Matrix of the Kronecker Product

In this section, we discuss the properties [6, 7, 34] of the determinant, the trace, the rank, and the polynomial matrix of the Kronecker product.

For A ∈ F^{m×m} and B ∈ F^{n×n}, we have |A ⊗ B| = |A|^n |B|^m = |B ⊗ A|. If A and B are two square matrices, then we have tr[A ⊗ B] = tr[A] tr[B] = tr[B ⊗ A]. For any matrices A and B, we have rank[A ⊗ B] = rank[A] rank[B] = rank[B ⊗ A].
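These three scalar identities are straightforward to check; a NumPy sketch with arbitrary square matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 2
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
K = np.kron(A, B)

# |A ⊗ B| = |A|^n |B|^m
assert np.isclose(np.linalg.det(K),
                  np.linalg.det(A)**n * np.linalg.det(B)**m)
# tr[A ⊗ B] = tr[A] tr[B]
assert np.isclose(np.trace(K), np.trace(A) * np.trace(B))
# rank[A ⊗ B] = rank[A] rank[B]
assert np.linalg.matrix_rank(K) == np.linalg.matrix_rank(A) * np.linalg.matrix_rank(B)
```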
According to these scalar properties, we have the following theorems.

Theorem 15. (1) Let A, C ∈ F^{m×m} and B, D ∈ F^{n×n}. Then

|(A ⊗ B)(C ⊗ D)| = |A ⊗ B| |C ⊗ D| = (|A||C|)^n (|B||D|)^m = |AC|^n |BD|^m.    (25)

(2) If A, B, C, and D are square matrices, then

tr[(A ⊗ B)(C ⊗ D)] = tr[(AC) ⊗ (BD)] = tr[AC] tr[BD] = tr[CA] tr[DB].    (26)

(3) Let A ∈ F^{m×n}, C ∈ F^{n×p}, B ∈ F^{q×r}, and D ∈ F^{r×s}; then

rank[(A ⊗ B)(C ⊗ D)] = rank[(AC) ⊗ (BD)] = rank[AC] rank[BD].    (27)
Theorem 16. If f(x, y) := x^r y^s is a monomial and f(A, B) := A^{[r]} ⊗ B^{[s]}, where r, s are positive integers, one has the following conclusions.

(1) Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

|f(A, B)| = |A|^{r m^{r−1} n^s} |B|^{s m^r n^{s−1}}.    (28)

(2) If A and B are square matrices, then

tr[f(A, B)] = f(tr[A], tr[B]).    (29)

(3) For any matrices A and B, one has

rank[f(A, B)] = f(rank[A], rank[B]).    (30)

If λ[A] = {λ_1, λ_2, ..., λ_m} and f(x) = ∑_{i=1}^{k} c_i x^i is a polynomial, then the eigenvalues of

f(A) = ∑_{i=1}^{k} c_i A^i    (31)

are

f(λ_j) = ∑_{i=1}^{k} c_i λ_j^i,  j = 1, 2, ..., m.    (32)

Similarly, consider a polynomial f(x, y) in two variables x and y:

f(x, y) = ∑_{i,j=1}^{k} c_ij x^i y^j,  c_ij, x, y ∈ F,    (33)

where k is a positive integer. Define the polynomial matrix f(A, B) by the formula

f(A, B) = ∑_{i,j=1}^{k} c_ij A^i ⊗ B^j.    (34)

According to Theorem 3, we have the following theorems [34].

Theorem 17. Let A ∈ F^{m×m} and B ∈ F^{n×n}; if λ[A] = {λ_1, λ_2, ..., λ_m} and λ[B] = {μ_1, μ_2, ..., μ_n}, then the matrix f(A, B) has the eigenvalues

f(λ_r, μ_s) = ∑_{i,j=1}^{k} c_ij λ_r^i μ_s^j,  r = 1, 2, ..., m,  s = 1, 2, ..., n.    (35)

Theorem 18 (see [34]). Let A ∈ F^{m×m}. If f(z) is an analytic function and f(A) exists, then

f(I_n ⊗ A) = I_n ⊗ f(A),
f(A ⊗ I_n) = f(A) ⊗ I_n.

Finally, we introduce some results about the Kronecker sum [7, 34]. The Kronecker sum of A ∈ F^{m×m} and B ∈ F^{n×n}, denoted as A ⊕ B, is defined by

A ⊕ B = A ⊗ I_n + I_m ⊗ B.

Theorem 19. Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

exp[A ⊕ B] = exp[A] ⊗ exp[B],
sin(A ⊕ B) = sin(A) ⊗ cos(B) + cos(A) ⊗ sin(B),
cos(A ⊕ B) = cos(A) ⊗ cos(B) − sin(A) ⊗ sin(B).
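The exponential identity in Theorem 19 can be verified without any special libraries if we restrict the sketch to symmetric matrices, where the matrix exponential is computable from an eigendecomposition; `expm_sym` is a local helper, not a library routine:

```python
import numpy as np

def expm_sym(M):
    # Matrix exponential of a symmetric matrix via its eigendecomposition:
    # exp(M) = V diag(exp(w)) V^T with M = V diag(w) V^T.
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3)); A = (A + A.T) / 2
B = rng.standard_normal((2, 2)); B = (B + B.T) / 2

m, n = A.shape[0], B.shape[0]
ksum = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)   # A ⊕ B

# Theorem 19: exp[A ⊕ B] = exp[A] ⊗ exp[B]
# (the identity rests on A ⊗ I_n and I_m ⊗ B commuting; cf. Corollary 2).
assert np.allclose(expm_sym(ksum), np.kron(expm_sym(A), expm_sym(B)))
```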
6. Conclusions

This paper establishes some conclusions on the Kronecker products and the vec-permutation matrix. A new presentation of the properties of the mixed products and the vector operator is given. All these conclusions make the theory of the Kronecker product more complete.
Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61273194), the 111 Project (B12018), and the PAPD of Jiangsu Higher Education Institutions.

References

[1] H. V. Henderson, F. Pukelsheim, and S. R. Searle, "On the history of the Kronecker product," Linear and Multilinear Algebra, vol. 14, no. 2, pp. 113–120, 1983.
[2] X. L. Xiong, W. Fan, and R. Ding, "Least-squares parameter estimation algorithm for a class of input nonlinear systems," Journal of Applied Mathematics, vol. 2007, Article ID 684074, 14 pages, 2007.
[3] F. Ding, "Transformations between some special matrices," Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2676–2695, 2010.
[4] Y. Shi and B. Yu, "Output feedback stabilization of networked control systems with random delays modeled by Markov chains," IEEE Transactions on Automatic Control, vol. 54, no. 7, pp. 1668–1674, 2009.
[5] Y. Shi, H. Fang, and M. Yan, "Kalman filter-based adaptive control for networked systems with unknown parameters and randomly missing outputs," International Journal of Robust and Nonlinear Control, vol. 19, no. 18, pp. 1976–1992, 2009.
[6] A. Graham, Kronecker Products and Matrix Calculus: With Applications, John Wiley & Sons, New York, NY, USA, 1982.
[7] W.-H. Steeb and Y. Hardy, Matrix Calculus and Kronecker Product: A Practical Approach to Linear and Multilinear Algebra, World Scientific, River Edge, NJ, USA, 2011.
[8] P. M. Bentler and S. Y. Lee, "Matrix derivatives with chain rule and rules for simple, Hadamard, and Kronecker products," Journal of Mathematical Psychology, vol. 17, no. 3, pp. 255–262, 1978.
[9] J. R. Magnus and H. Neudecker, "Matrix differential calculus with applications to simple, Hadamard, and Kronecker products," Journal of Mathematical Psychology, vol. 29, no. 4, pp. 474–492, 1985.
[10] F. Ding and T. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54, no. 2, pp. 95–107, 2005.
[11] F. Ding and T. Chen, "On iterative solutions of general coupled matrix equations," SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269–2284, 2006.
[12] L. Jódar and H. Abou-Kandil, "Kronecker products and coupled matrix Riccati differential systems," Linear Algebra and its Applications, vol. 121, no. 2-3, pp. 39–51, 1989.
[13] D. Bahuguna, A. Ujlayan, and D. N. Pandey, "Advanced type coupled matrix Riccati differential equation systems with Kronecker product," Applied Mathematics and Computation, vol. 194, no. 1, pp. 46–53, 2007.
[14] M. Dehghan and M. Hajarian, "An iterative algorithm for solving a pair of matrix equations AYB = E, CYD = F over generalized centro-symmetric matrices," Computers & Mathematics with Applications, vol. 56, no. 12, pp. 3246–3260, 2008.
[15] M. Dehghan and M. Hajarian, "An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation," Applied Mathematics and Computation, vol. 202, no. 2, pp. 571–588, 2008.
[16] C. F. van Loan, "The ubiquitous Kronecker product," Journal of Computational and Applied Mathematics, vol. 123, no. 1-2, pp. 85–100, 2000.
[17] M. Huhtanen, "Real linear Kronecker product operations," Linear Algebra and its Applications, vol. 418, no. 1, pp. 347–361, 2006.
[18] S. Delvaux and M. van Barel, "Rank-deficient submatrices of Kronecker products of Fourier matrices," Linear Algebra and its Applications, vol. 426, no. 2-3, pp. 349–367, 2007.
[19] S. G. Deo, K. N. Murty, and J. Turner, "Qualitative properties of adjoint Kronecker product boundary value problems," Applied Mathematics and Computation, vol. 133, no. 2-3, pp. 287–295, 2002.
[20] W.-H. Steeb and F. Wilhelm, "Exponential functions of Kronecker products and trace calculation," Linear and Multilinear Algebra, vol. 9, no. 4, pp. 345–346, 1981.
[21] J. Chuai and Y. Tian, "Rank equalities and inequalities for Kronecker products of matrices with applications," Applied Mathematics and Computation, vol. 150, no. 1, pp. 129–137, 2004.
[22] R. H. Koning, H. Neudecker, and T. Wansbeek, "Block Kronecker products and the vecb operator," Linear Algebra and its Applications, vol. 149, pp. 165–184, 1991.
[23] F. Ding, P. X. Liu, and J. Ding, "Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle," Applied Mathematics and Computation, vol. 197, no. 1, pp. 41–50, 2008.
[24] L. Xie, Y. Liu, and H. Yang, "Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F," Applied Mathematics and Computation, vol. 217, no. 5, pp. 2191–2199, 2010.
[25] F. Ding and T. Chen, "Gradient based iterative algorithms for solving a class of matrix equations," IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
[26] J. Ding, Y. Liu, and F. Ding, "Iterative solutions to matrix equations of the form A_i X B_i = F_i," Computers & Mathematics with Applications, vol. 59, no. 11, pp. 3500–3507, 2010.
[27] A.-G. Wu, L. Lv, and G.-R. Duan, "Iterative algorithms for solving a class of complex conjugate and transpose matrix equations," Applied Mathematics and Computation, vol. 217, no. 21, pp. 8343–8353, 2011.
[28] A.-G. Wu, X. Zeng, G.-R. Duan, and W.-J. Wu, "Iterative solutions to the extended Sylvester-conjugate matrix equations," Applied Mathematics and Computation, vol. 217, no. 1, pp. 130–142, 2010.
[29] F. Zhang, Y. Li, W. Guo, and J. Zhao, "Least squares solutions with special structure to the linear matrix equation AXB = C," Applied Mathematics and Computation, vol. 217, no. 24, pp. 10049–10057, 2011.
[30] M. Dehghan and M. Hajarian, "SSHI methods for solving general linear matrix equations," Engineering Computations, vol. 28, no. 8, pp. 1028–1043, 2011.
[31] E. Erkmen and M. A. Bradford, "Coupling of finite element and meshfree methods for locking-free analysis of shear-deformable beams and plates," Engineering Computations, vol. 28, no. 8, pp. 1003–1027, 2011.
[32] A. Kaveh and B. Alinejad, "Eigensolution of Laplacian matrices for graph partitioning and domain decomposition approximate algebraic method," Engineering Computations, vol. 26, no. 7, pp. 828–842, 2009.
[33] X. Z. Zhan, The Theory of Matrices, Higher Education Press, Beijing, China, 2008 (Chinese).
[34] P. Lancaster and M. Tismenetsky, The Theory of Matrices: with Applications, Academic Press, New York, NY, USA, 1985.
[35] M. Dehghan and M. Hajarian, "An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices," Applied Mathematical Modelling, vol. 34, no. 3, pp. 639–654, 2010.
[36] M. Dehghan and M. Hajarian, "An efficient algorithm for solving general coupled matrix equations and its application," Mathematical and Computer Modelling, vol. 51, no. 9-10, pp. 1118–1134, 2010.
[37] N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1996.
[38] F. Ding, "Decomposition based fast least squares algorithm for output error systems," Signal Processing, vol. 93, no. 5, pp. 1235–1242, 2013.
[39] F. Ding, "Coupled-least-squares identification for multivariable systems," IET Control Theory and Applications, vol. 7, no. 1, pp. 68–79, 2013.
[40] F. Ding, X. G. Liu, and J. Chu, "Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle," IET Control Theory and Applications, vol. 7, pp. 176–184, 2013.
[41] F. Ding, "Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling," Applied Mathematical Modelling, vol. 37, no. 4, pp. 1694–1704, 2013.
[42] F. Ding, "Two-stage least squares based iterative estimation algorithm for CARARMA system modeling," Applied Mathematical Modelling, vol. 37, no. 7, pp. 4798–4808, 2013.
[43] Y. J. Liu, Y. S. Xiao, and X. L. Zhao, "Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model," Applied Mathematics and Computation, vol. 215, no. 4, pp. 1477–1483, 2009.
[44] Y. J. Liu, J. Sheng, and R. F. Ding, "Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems," Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2615–2627, 2010.
[45] J. H. Li, "Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration," Applied Mathematics Letters, vol. 26, no. 1, pp. 91–96, 2013.
[46] J. H. Li, R. F. Ding, and Y. Yang, "Iterative parameter identification methods for nonlinear functions," Applied Mathematical Modelling, vol. 36, no. 6, pp. 2739–2750, 2012.
[47] J. Ding, F. Ding, X. P. Liu, and G. Liu, "Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data," IEEE Transactions on Automatic Control, vol. 56, no. 11, pp. 2677–2683, 2011.
[48] J. Ding and F. Ding, "Bias compensation-based parameter estimation for output error moving average systems," International Journal of Adaptive Control and Signal Processing, vol. 25, no. 12, pp. 1100–1111, 2011.
[49] J. Ding, L. L. Han, and X. M. Chen, "Time series AR modeling with missing observations based on the polynomial transformation," Mathematical and Computer Modelling, vol. 51, no. 5-6, pp. 527–536, 2010.
[50] F. Ding, Y. J. Liu, and B. Bao, "Gradient-based and least-squares-based iterative estimation algorithms for multi-input multi-output systems," Proceedings of the Institution of Mechanical Engineers I, vol. 226, no. 1, pp. 43–55, 2012.
[51] F. Ding and Y. Gu, "Performance analysis of the auxiliary model-based least-squares identification algorithm for one-step state-delay systems," International Journal of Computer Mathematics, vol. 89, no. 15, pp. 2019–2028, 2012.
[52] F. Ding and Y. Gu, "Performance analysis of the auxiliary model-based stochastic gradient parameter estimation algorithm for state space systems with one-step state delay," Circuits, Systems and Signal Processing, vol. 32, no. 2, pp. 585–599, 2013.
[53] F. Ding and H. H. Duan, "Two-stage parameter estimation algorithms for Box-Jenkins systems," IET Signal Processing, 2013.
[54] P. P. Hu and F. Ding, "Multistage least squares based iterative estimation for feedback nonlinear systems with moving average noises using the hierarchical identification principle," Nonlinear Dynamics, 2013.
[55] H. G. Zhang and X. P. Xie, "Relaxed stability conditions for continuous-time T-S fuzzy-control systems via augmented multi-indexed matrix approach," IEEE Transactions on Fuzzy Systems, vol. 19, no. 3, pp. 478–492, 2011.
[56] H. G. Zhang, D. W. Gong, B. Chen, and Z. W. Liu, "Synchronization for coupled neural networks with interval delay: a novel augmented Lyapunov-Krasovskii functional method," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 1, pp. 58–70, 2013.
[57] H. W. Yu and Y. F. Zheng, "Dynamic behavior of multi-agent systems with distributed sampled control," Acta Automatica Sinica, vol. 38, no. 3, pp. 357–363, 2012.
[58] Q. Z. Huang, "Consensus analysis of multi-agent discrete-time systems," Acta Automatica Sinica, vol. 38, no. 7, pp. 1127–1133, 2012.