Research Article
On the Kronecker Products and Their Applications
Copyright © 2013 H. Zhang and F. Ding. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
This paper studies the properties of the Kronecker product related to the mixed matrix products, the vector operator, and the vec-permutation matrix and gives several theorems and their proofs. In addition, we establish the relations between the singular values of two matrices and their Kronecker product and the relations between the determinant, the trace, the rank, and the polynomial matrix of the Kronecker products.
The Kronecker product of A = [a_ij] ∈ F^{m×n} and B ∈ F^{p×q} (i.e., the direct product or tensor product), denoted as A ⊗ B, is defined by

A ⊗ B := [a_ij B] ∈ F^{mp×nq}.

Two of the basic properties listed for this product are

(4) (μA) ⊗ B = A ⊗ (μB) = μ(A ⊗ B),
(9) (A ⊗ B)^H = A^H ⊗ B^H.

3. The Properties of the Mixed Products

This section discusses the properties based on the mixed products theorem [6, 33, 34].

Theorem 1. Let A ∈ F^{m×n} and B ∈ F^{p×q}; then

A ⊗ B = (A ⊗ I_p)(I_n ⊗ B) = (I_m ⊗ B)(A ⊗ I_q). (2)

Proof. According to the definition of the Kronecker product and the matrix multiplication, the (i, j)th block of (A ⊗ I_p)(I_n ⊗ B) is Σ_k (a_ik I_p)(δ_kj B) = a_ij B, which is the (i, j)th block of A ⊗ B; the second equality follows in the same way.

From Theorem 1, we have the following corollary.

Theorem 3. Let A = [a_ij] ∈ F^{m×n}, C = [c_ij] ∈ F^{n×p}, B ∈ F^{q×r}, and D ∈ F^{r×s}. Then

(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).

Proof. According to Theorem 1, we have

(A ⊗ B)(C ⊗ D) = (A ⊗ I_q)(I_n ⊗ B)(C ⊗ I_r)(I_p ⊗ D)
= (A ⊗ I_q)[(I_n ⊗ B)(C ⊗ I_r)](I_p ⊗ D)
= (A ⊗ I_q)(C ⊗ B)(I_p ⊗ D)
= (A ⊗ I_q)[(C ⊗ I_q)(I_p ⊗ B)](I_p ⊗ D)
= [(A ⊗ I_q)(C ⊗ I_q)][(I_p ⊗ B)(I_p ⊗ D)]
= (AC ⊗ I_q)(I_p ⊗ BD)
= (AC) ⊗ (BD).
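The identities of Theorems 1 and 3 are easy to check numerically with NumPy's `np.kron`. The following sketch (dimensions and random seed chosen arbitrarily) verifies both:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions as in Theorem 3: A is m x n, C is n x p, B is q x r, D is r x s.
m, n, p, q, r, s = 2, 3, 4, 3, 2, 5
A = rng.standard_normal((m, n))
C = rng.standard_normal((n, p))
B = rng.standard_normal((q, r))
D = rng.standard_normal((r, s))

# Theorem 1 (stated here for B of size q x r):
#   A ⊗ B = (A ⊗ I_q)(I_n ⊗ B) = (I_m ⊗ B)(A ⊗ I_r).
AB = np.kron(A, B)
assert np.allclose(AB, np.kron(A, np.eye(q)) @ np.kron(np.eye(n), B))
assert np.allclose(AB, np.kron(np.eye(m), B) @ np.kron(A, np.eye(r)))

# Theorem 3 (mixed products): (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```

Note that the factors in Theorem 1 commute past each other precisely because each acts on a different "side" of the product, which is what the proof of Theorem 3 exploits.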
Journal of Applied Mathematics 3
Let A^[1] := A and define the Kronecker power by

A^[k+1] := A^[k] ⊗ A = A ⊗ A^[k], k = 1, 2, .... (7)

From Theorem 3, we have the following corollary [7].

Corollary 4. If the following matrix products exist, then one has

(1) (A_1 ⊗ B_1)(A_2 ⊗ B_2) ⋯ (A_p ⊗ B_p) = (A_1 A_2 ⋯ A_p) ⊗ (B_1 B_2 ⋯ B_p),
(2) (A_1 ⊗ A_2 ⊗ ⋯ ⊗ A_p)(B_1 ⊗ B_2 ⊗ ⋯ ⊗ B_p) = (A_1 B_1) ⊗ (A_2 B_2) ⊗ ⋯ ⊗ (A_p B_p),
(3) [AB]^[k] = A^[k] B^[k].

A square matrix A is said to be a normal matrix if and only if A^H A = A A^H. A square matrix A is said to be a unitary matrix if and only if A^H A = A A^H = I. Straightforward calculation gives the following conclusions [6, 7, 33, 34].

Theorem 5. For any square matrices A and B,

(1) if A^{-1} and B^{-1} exist, then (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1},
(2) if A and B are normal matrices, then A ⊗ B is a normal matrix,
(3) if A and B are unitary (orthogonal) matrices, then A ⊗ B is a unitary (orthogonal) matrix.

According to the definition of the singular value and Theorem 3, for any matrices A and B, we have the next theorem.

Theorem 7. Let the singular value decompositions of A and B be

A = U [Σ 0; 0 0] V, Σ = diag[σ_1, σ_2, ..., σ_r],
B = W [Γ 0; 0 0] Q, Γ = diag[ρ_1, ρ_2, ..., ρ_s],

where U, V, W, and Q are unitary matrices and the semicolon separates block rows. Then the nonzero singular values of A ⊗ B are σ_i ρ_j, i = 1, 2, ..., r, j = 1, 2, ..., s.

Proof. According to Theorem 3, we have

A ⊗ B = (U ⊗ W) {[Σ 0; 0 0] ⊗ [Γ 0; 0 0]} (V ⊗ Q)
= (U ⊗ W) [Σ ⊗ [Γ 0; 0 0] 0; 0 0] (V ⊗ Q)
= (U ⊗ W) [Σ ⊗ Γ 0; 0 0] (V ⊗ Q), (9)

where the last step rearranges rows and columns by permutations that can be absorbed into the unitary factors U ⊗ W and V ⊗ Q. Since U ⊗ W and V ⊗ Q are unitary matrices and Σ ⊗ Γ = diag[σ_1 ρ_1, σ_1 ρ_2, ..., σ_1 ρ_s, ..., σ_r ρ_s], this proves the theorem.

According to Theorem 7, we have the next corollary.

Corollary 8. For any matrices A, B, and C, one has σ[A ⊗ B ⊗ C] = σ[C ⊗ B ⊗ A].

4. The Properties of the Vector Operator and the Vec-Permutation Matrix

In this section, we introduce a vector-valued operator and a vec-permutation matrix.

Let A = [a_1, a_2, ..., a_n] ∈ F^{m×n}, where a_j ∈ F^m, j = 1, 2, ..., n; then the vector col[A] is defined by

col[A] := [a_1; a_2; ⋯; a_n] ∈ F^{mn}.

Theorem 9. Let A ∈ F^{m×n}. Then

(1) for C ∈ F^{n×p}, (I_p ⊗ A) col[C] = col[AC],
(2) for C ∈ F^{p×n}, (A ⊗ I_p) col[C] = col[CA^T].

Proof. Let (B)_i denote the ith column of a matrix B; we have

col[AC] = [A(C)_1; A(C)_2; ⋯; A(C)_p] = (I_p ⊗ A) col[C],

which proves (1); (2) follows by expanding col[CA^T] columnwise in the same way.
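The vectorization identities above can be checked with NumPy, taking col[·] as column-major (Fortran-order) flattening; the short sketch below uses arbitrary dimensions and seed:

```python
import numpy as np

rng = np.random.default_rng(1)

def col(M):
    # col[M]: stack the columns of M into a single long vector.
    return M.flatten(order="F")

m, n, p, q = 3, 4, 2, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, q))

# Theorem 9(1): (I_p ⊗ A) col[B] = col[AB] for B of size n x p.
assert np.allclose(np.kron(np.eye(p), A) @ col(B), col(A @ B))

# Theorem 9(2): (A ⊗ I_p) col[Cp] = col[Cp A^T] for Cp of size p x n.
Cp = rng.standard_normal((p, n))
assert np.allclose(np.kron(A, np.eye(p)) @ col(Cp), col(Cp @ A.T))

# Theorem 10: col[ABC] = (C^T ⊗ A) col[B].
assert np.allclose(col(A @ B @ C), np.kron(C.T, A) @ col(B))
```

The last identity is the workhorse for turning matrix equations such as AXB = C into ordinary linear systems in col[X].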
Theorem 10. Let A ∈ F^{m×n}, B ∈ F^{n×p}, and C ∈ F^{p×q}. Then

col[ABC] = (C^T ⊗ A) col[B]. (13)

Proof. According to Theorems 9 and 1, we have

col[ABC] = col[(AB)C]
= (C^T ⊗ I_m) col[AB]
= (C^T ⊗ I_m)(I_p ⊗ A) col[B]
= [(C^T ⊗ I_m)(I_p ⊗ A)] col[B]
= (C^T ⊗ A) col[B]. (14)

Theorem 10 plays an important role in solving the matrix equations [25, 35–37], system identification [38–54], and control theory [55–58].

Let e_in denote an n-dimensional column vector which has 1 in the ith position and 0's elsewhere; that is,

e_in := [0, 0, ..., 0, 1, 0, ..., 0]^T. (15)

Define the vec-permutation matrix (writing [X_1; X_2; ⋯] for blocks stacked vertically)

P_mn := [I_m ⊗ e_1n^T; I_m ⊗ e_2n^T; ⋯; I_m ⊗ e_nn^T] ∈ R^{mn×mn}. (16)

From the definitions of e_in and P_mn, direct computation gives

Σ_{j=1}^{m} Σ_{k=1}^{n} (e_kn ⊗ e_jm^T) ⊗ (e_jm ⊗ e_kn^T)
= Σ_{j=1}^{m} Σ_{k=1}^{n} [e_kn ⊗ (e_jm^T ⊗ e_jm) ⊗ e_kn^T]
= Σ_{k=1}^{n} [e_kn ⊗ (Σ_{j=1}^{m} e_jm^T ⊗ e_jm) ⊗ e_kn^T]
= Σ_{k=1}^{n} [e_kn ⊗ I_m ⊗ e_kn^T]
= [I_m ⊗ e_1n^T; I_m ⊗ e_2n^T; ⋯; I_m ⊗ e_nn^T]
= P_mn. (19)

Similarly,

P_mn^T = [I_m ⊗ e_1n, I_m ⊗ e_2n, ⋯, I_m ⊗ e_nn]
= [e_im^T ⊗ e_jn]_{i=1,...,m; j=1,...,n}
= [e_jn ⊗ e_im^T]_{i=1,...,m; j=1,...,n}
= [I_n ⊗ e_1m^T; I_n ⊗ e_2m^T; ⋯; I_n ⊗ e_mm^T]
= P_nm, (20)

and

P_mn P_mn^T = [I_m ⊗ e_1n^T; ⋯; I_m ⊗ e_nn^T][I_m ⊗ e_1n, ⋯, I_m ⊗ e_nn]
= [I_m ⊗ (e_in^T e_jn)]_{i,j=1,...,n}
= diag[I_m, I_m, ..., I_m]
= I_mn. (21)

Based on the definition of the vec-permutation matrix, we have the following conclusions.

Theorem 11. According to the definition of P_mn, one has

(1) P_mn^T = P_nm,
(2) P_mn P_mn^T = P_mn^T P_mn = I_mn. (22)

For any matrix A ∈ F^{m×n}, we have col[A] = P_mn col[A^T].

Theorem 12. If A ∈ F^{m×n} and B ∈ F^{p×q}, then one has P_mp(A ⊗ B)P_nq^T = B ⊗ A.

Proof. Let B := [b_ij] = [B_1; B_2; ⋯; B_p], where B_i ∈ F^{1×q} is the ith row of B, i = 1, 2, ..., p, j = 1, 2, ..., q, and let (A)_j denote the jth column of A. According to the definition of P_mn and the Kronecker product, we have

P_mp(A ⊗ B)P_nq^T
= [I_m ⊗ e_1p^T; I_m ⊗ e_2p^T; ⋯; I_m ⊗ e_pp^T][(A)_1 ⊗ B, (A)_2 ⊗ B, ⋯, (A)_n ⊗ B] P_nq^T
= [(A)_j ⊗ B_i]_{i=1,...,p; j=1,...,n} P_nq^T
= [A ⊗ B_1; A ⊗ B_2; ⋯; A ⊗ B_p][I_n ⊗ e_1q, I_n ⊗ e_2q, ⋯, I_n ⊗ e_qq]
= [A b_ij]_{i=1,...,p; j=1,...,q}
= B ⊗ A. (23)

From Theorem 12, we have the following corollaries.

Corollary 13. If A ∈ F^{m×n}, then P_mr(A ⊗ I_r)P_nr^T = I_r ⊗ A.
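The vec-permutation matrix of (16) and the statements of Theorems 11 and 12 can be verified numerically; in the NumPy sketch below the sizes are arbitrary, and column-major flattening plays the role of col[·]:

```python
import numpy as np

rng = np.random.default_rng(2)

def e(i, n):
    # e_in of (15): n-dimensional unit column vector with a 1 in position i.
    v = np.zeros((n, 1))
    v[i - 1, 0] = 1.0
    return v

def P(m, n):
    # Vec-permutation matrix of (16): block rows I_m ⊗ e_in^T stacked for i = 1..n.
    return np.vstack([np.kron(np.eye(m), e(i, n).T) for i in range(1, n + 1)])

m, n, p, q = 2, 3, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

# Theorem 11: P_mn^T = P_nm and P_mn P_mn^T = I_mn.
assert np.allclose(P(m, n).T, P(n, m))
assert np.allclose(P(m, n) @ P(m, n).T, np.eye(m * n))

# col[A] = P_mn col[A^T].
assert np.allclose(A.flatten(order="F"), P(m, n) @ A.T.flatten(order="F"))

# Theorem 12: P_mp (A ⊗ B) P_nq^T = B ⊗ A.
assert np.allclose(P(m, p) @ np.kron(A, B) @ P(n, q).T, np.kron(B, A))
```

In the literature P_mn is also known as the commutation matrix, since it "commutes" the two factors of a Kronecker product.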
Corollary 14. If A ∈ F^{m×n} and B ∈ F^{n×m}, then

B ⊗ A = P_mn (A ⊗ B) P_nm^T = P_mn [(A ⊗ B) P_mn]. (24)

That is, λ[B ⊗ A] = λ[(A ⊗ B) P²_mn]. When A ∈ F^{n×n} and B ∈ F^{t×t}, one has B ⊗ A = P_nt(A ⊗ B)P_nt^T; that is, if A and B are square matrices, then A ⊗ B is similar to B ⊗ A.
5. The Scalar Properties and the Polynomial Matrix of the Kronecker Product

In this section, we discuss the properties [6, 7, 34] of the determinant, the trace, the rank, and the polynomial matrix of the Kronecker product.

For A ∈ F^{m×m} and B ∈ F^{n×n}, we have |A ⊗ B| = |A|^n |B|^m = |B ⊗ A|. If A and B are two square matrices, then we have tr[A ⊗ B] = tr[A] tr[B] = tr[B ⊗ A]. For any matrices A and B, we have rank[A ⊗ B] = rank[A] rank[B] = rank[B ⊗ A]. According to these scalar properties, we have the following theorems.

Theorem 15. (1) Let A, C ∈ F^{m×m} and B, D ∈ F^{n×n}. Then

|(A ⊗ B)(C ⊗ D)| = |A ⊗ B| |C ⊗ D| = (|A| |C|)^n (|B| |D|)^m = |AC|^n |BD|^m. (25)

(2) If A, B, C, and D are square matrices, then

tr[(A ⊗ B)(C ⊗ D)] = tr[(AC) ⊗ (BD)] = tr[AC] tr[BD] = tr[CA] tr[DB]. (26)

(3) Let A ∈ F^{m×n}, C ∈ F^{n×p}, B ∈ F^{q×r}, and D ∈ F^{r×s}; then

rank[(A ⊗ B)(C ⊗ D)] = rank[(AC) ⊗ (BD)] = rank[AC] rank[BD]. (27)

Theorem 16. If f(x, y) := x^r y^s is a monomial and f(A, B) := A^[r] ⊗ B^[s], where r, s are positive integers, one has the following conclusions.

(1) Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

|f(A, B)| = |A|^{r m^{r-1} n^s} |B|^{s m^r n^{s-1}}. (28)

(2) If A and B are square matrices, then

tr[f(A, B)] = f(tr[A], tr[B]). (29)

(3) For any matrices A and B, one has

rank[f(A, B)] = f(rank[A], rank[B]). (30)

If λ[A] = {λ_1, λ_2, ..., λ_m} and f(x) = Σ_{i=1}^{k} c_i x^i is a polynomial, then the eigenvalues of

f(A) = Σ_{i=1}^{k} c_i A^i (31)

are

f(λ_j) = Σ_{i=1}^{k} c_i λ_j^i, j = 1, 2, ..., m. (32)

Similarly, consider a polynomial f(x, y) in two variables x and y:

f(x, y) = Σ_{i,j=1}^{k} c_ij x^i y^j, c_ij, x, y ∈ F, (33)

where k is a positive integer. Define the polynomial matrix f(A, B) by the formula

f(A, B) = Σ_{i,j=1}^{k} c_ij A^i ⊗ B^j. (34)

According to Theorem 3, we have the following theorems [34].

Theorem 17. Let A ∈ F^{m×m} and B ∈ F^{n×n}; if λ[A] = {λ_1, λ_2, ..., λ_m} and λ[B] = {μ_1, μ_2, ..., μ_n}, then the matrix f(A, B) has the eigenvalues

f(λ_r, μ_s) = Σ_{i,j=1}^{k} c_ij λ_r^i μ_s^j, r = 1, 2, ..., m, s = 1, 2, ..., n. (35)

Theorem 18 (see [34]). Let A ∈ F^{m×m}. If f(z) is an analytic function and f(A) exists, then

f(I_n ⊗ A) = I_n ⊗ f(A),
f(A ⊗ I_n) = f(A) ⊗ I_n.

Finally, we introduce some results about the Kronecker sum [7, 34]. The Kronecker sum of A ∈ F^{m×m} and B ∈ F^{n×n}, denoted as A ⊕ B, is defined by

A ⊕ B = A ⊗ I_n + I_m ⊗ B.

Theorem 19. Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

exp[A ⊕ B] = exp[A] ⊗ exp[B],
sin(A ⊕ B) = sin(A) ⊗ cos(B) + cos(A) ⊗ sin(B),
cos(A ⊕ B) = cos(A) ⊗ cos(B) − sin(A) ⊗ sin(B).

6. Conclusions

This paper establishes some conclusions on the Kronecker products and the vec-permutation matrix. A new presentation of the properties of the mixed products and the vector operator is given. All these conclusions make the theory of the Kronecker product more complete.
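The spectral statement of Theorem 17 and the exponential identity of Theorem 19 lend themselves to a quick numerical check. In the sketch below, the polynomial f(x, y) = xy + 2x²y and the matrix sizes are arbitrary choices, and the matrix exponential is computed by eigendecomposition (adequate for the random, almost surely diagonalizable matrices used here):

```python
import numpy as np

rng = np.random.default_rng(3)

def expm(M):
    # Matrix exponential via eigendecomposition; fine for the random
    # (almost surely diagonalizable) matrices of this sketch.
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real  # drop rounding-level imaginary parts

A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))

# Theorem 17 with f(x, y) = x y + 2 x^2 y:
#   f(A, B) = A ⊗ B + 2 A^2 ⊗ B has eigenvalues λ μ + 2 λ^2 μ.
fAB = np.kron(A, B) + 2.0 * np.kron(A @ A, B)
eigs = np.linalg.eigvals(fAB)
for lam in np.linalg.eigvals(A):
    for mu in np.linalg.eigvals(B):
        assert np.min(np.abs(eigs - (lam * mu + 2.0 * lam**2 * mu))) < 1e-8

# Theorem 19: exp[A ⊕ B] = exp[A] ⊗ exp[B], with A ⊕ B = A ⊗ I_n + I_m ⊗ B.
ksum = np.kron(A, np.eye(2)) + np.kron(np.eye(3), B)
assert np.allclose(expm(ksum), np.kron(expm(A), expm(B)))
```

The exponential identity holds because A ⊗ I_n and I_m ⊗ B commute, so the exponential of their sum factors.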
[34] P. Lancaster and M. Tismenetsky, The Theory of Matrices: with Applications, Academic Press, New York, NY, USA, 1985.
[35] M. Dehghan and M. Hajarian, “An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices,” Applied Mathematical Modelling, vol. 34, no. 3, pp. 639–654, 2010.
[36] M. Dehghan and M. Hajarian, “An efficient algorithm for solving general coupled matrix equations and its application,” Mathematical and Computer Modelling, vol. 51, no. 9-10, pp. 1118–1134, 2010.
[37] N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1996.
[38] F. Ding, “Decomposition based fast least squares algorithm for output error systems,” Signal Processing, vol. 93, no. 5, pp. 1235–1242, 2013.
[39] F. Ding, “Coupled-least-squares identification for multivariable systems,” IET Control Theory and Applications, vol. 7, no. 1, pp. 68–79, 2013.
[40] F. Ding, X. G. Liu, and J. Chu, “Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle,” IET Control Theory and Applications, vol. 7, pp. 176–184, 2013.
[41] F. Ding, “Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling,” Applied Mathematical Modelling, vol. 37, no. 4, pp. 1694–1704, 2013.
[42] F. Ding, “Two-stage least squares based iterative estimation algorithm for CARARMA system modeling,” Applied Mathematical Modelling, vol. 37, no. 7, pp. 4798–4808, 2013.
[43] Y. J. Liu, Y. S. Xiao, and X. L. Zhao, “Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model,” Applied Mathematics and Computation, vol. 215, no. 4, pp. 1477–1483, 2009.
[44] Y. J. Liu, J. Sheng, and R. F. Ding, “Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems,” Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2615–2627, 2010.
[45] J. H. Li, “Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration,” Applied Mathematics Letters, vol. 26, no. 1, pp. 91–96, 2013.
[46] J. H. Li, R. F. Ding, and Y. Yang, “Iterative parameter identification methods for nonlinear functions,” Applied Mathematical Modelling, vol. 36, no. 6, pp. 2739–2750, 2012.
[47] J. Ding, F. Ding, X. P. Liu, and G. Liu, “Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data,” IEEE Transactions on Automatic Control, vol. 56, no. 11, pp. 2677–2683, 2011.
[48] J. Ding and F. Ding, “Bias compensation-based parameter estimation for output error moving average systems,” International Journal of Adaptive Control and Signal Processing, vol. 25, no. 12, pp. 1100–1111, 2011.
[49] J. Ding, L. L. Han, and X. M. Chen, “Time series AR modeling with missing observations based on the polynomial transformation,” Mathematical and Computer Modelling, vol. 51, no. 5-6, pp. 527–536, 2010.
[50] F. Ding, Y. J. Liu, and B. Bao, “Gradient-based and least-squares-based iterative estimation algorithms for multi-input multi-output systems,” Proceedings of the Institution of Mechanical Engineers I, vol. 226, no. 1, pp. 43–55, 2012.
[51] F. Ding and Y. Gu, “Performance analysis of the auxiliary model-based least-squares identification algorithm for one-step state-delay systems,” International Journal of Computer Mathematics, vol. 89, no. 15, pp. 2019–2028, 2012.
[52] F. Ding and Y. Gu, “Performance analysis of the auxiliary model-based stochastic gradient parameter estimation algorithm for state space systems with one-step state delay,” Circuits, Systems and Signal Processing, vol. 32, no. 2, pp. 585–599, 2013.
[53] F. Ding and H. H. Duan, “Two-stage parameter estimation algorithms for Box-Jenkins systems,” IET Signal Processing, 2013.
[54] P. P. Hu and F. Ding, “Multistage least squares based iterative estimation for feedback nonlinear systems with moving average noises using the hierarchical identification principle,” Nonlinear Dynamics, 2013.
[55] H. G. Zhang and X. P. Xie, “Relaxed stability conditions for continuous-time T-S fuzzy-control systems via augmented multi-indexed matrix approach,” IEEE Transactions on Fuzzy Systems, vol. 19, no. 3, pp. 478–492, 2011.
[56] H. G. Zhang, D. W. Gong, B. Chen, and Z. W. Liu, “Synchronization for coupled neural networks with interval delay: a novel augmented Lyapunov-Krasovskii functional method,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 1, pp. 58–70, 2013.
[57] H. W. Yu and Y. F. Zheng, “Dynamic behavior of multi-agent systems with distributed sampled control,” Acta Automatica Sinica, vol. 38, no. 3, pp. 357–363, 2012.
[58] Q. Z. Huang, “Consensus analysis of multi-agent discrete-time systems,” Acta Automatica Sinica, vol. 38, no. 7, pp. 1127–1133, 2012.