
IEEE TRANSACTIONS ON AUTOMATIC CONTROL

A Hybrid Robust Non-Homogeneous Finite-Time Differentiator


Denis V. Efimov, Member, IEEE, and Leonid Fridman, Member, IEEE

Abstract—A variant of the super-twisting differentiator is proposed. A Lyapunov function is designed and an estimate on the finite time of derivative estimation is given. The differentiator is equipped with a hybrid adaptation algorithm that ensures global differentiation ability independently of the amplitude of the differentiated signal and of the measurement noise.

Index Terms—PID control.
I. INTRODUCTION

The problem of differentiator design is important and challenging [3], [5], [13]. Numerical differentiation finds many applications in control theory [14]. For example, many kinds of systems can be transformed to a canonical form in which the state vector is represented as a column of derivatives of the output function; in this case the problem of unmeasured state estimation is reduced to the computation of derivatives of the measured output signal [1]. Another example is the class of flat systems: the state and the input of such nonlinear systems are functions of the output and its derivatives [8], so their computation provides access to the system internal dynamics and to the input evaluation. Finally, despite the great success achieved in nonlinear control theory, PID control is still the most popular tool used in practical applications [20]. Realization of this control strategy requires estimation of the regulation error derivative.

In this work we are looking for on-line or real-time differentiation, and by a differentiator we mean an algorithm or a dynamical system that derives an estimate of the derivative of a given signal. There exist many approaches to differentiator design providing similar performance in applications [19], [21]. One of the most popular is the super-twisting differentiator [12], which ensures finite-time robust differentiation of noisy signals. A shortcoming of that algorithm consists in the complexity of designing a Lyapunov function (to prove explicitly its stability and performance) and of evaluating the convergence time of the estimation error [17]. In this work we develop the results of [12], proposing a variant of the super-twisting differentiator with simple estimates on the time of convergence and on the accuracy of derivative calculation. Another design goal is robustness of the differentiator against a nondifferentiable noise of any amplitude. Contrary to [12], the Lyapunov approach is chosen in this work to achieve these goals (as an alternative, in the book [2] the frequency-domain framework for the investigation of discontinuous systems is presented).

Manuscript received September 24, 2009; revised March 15, 2010, April 15, 2010, September 06, 2010, and January 03, 2011; accepted January 06, 2011. This work was supported by CONACyT (Consejo Nacional de Ciencia y Tecnología) under Grant 56819 and FONCICyT (Fondo de Cooperación Internacional en Ciencia y Tecnología) under Grant 93302. Recommended by Associate Editor D. Liberzon. D. Efimov is with the IMS-lab, Automatic Control Group, University of Bordeaux, Talence 33405, France (e-mail: efde@mail.ru). L. Fridman is with the Departamento de Ingeniería de Control y Robótica, Facultad de Ingeniería UNAM, Edificio T, Ciudad Universitaria D.F., México (e-mail: lfridman@unam.mx). Color versions of one or more of the figures in this technical note are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TAC.2011.2108590

II. THE CASE OF EXACT MEASUREMENTS

The following sliding-mode differentiator is designed:

ẋ₁ = −α √|x₁ − f(t)| sign[x₁ − f(t)] + x₂,  α > 0,   (1a)
ẋ₂ = −β sign[x₁ − f(t)] − χ sign(x₂) − κ x₂,  β > χ > 0, κ > 0,   (1b)
where x₁ ∈ R, x₂ ∈ R are the state variables of the system (1) and the function f: R → R has two continuous derivatives (we assume that the constants Lᵢ ∈ R₊, i = 1, 2, are given such that |f′(t)| ≤ L₁ and |f″(t)| ≤ L₂ for all t ∈ R). The variable x₁(t) serves as an estimate of the function f(t) and x₂(t) converges to f′(t). Therefore, (1) has the input f(t) and the output x₂(t). Compared with the conventional super-twisting differentiator [12], or other sliding-mode differentiators [16], the system (1) has two additional negative feedbacks in (1b). The appearance of x₂ in equation (1b) causes the loss of the homogeneity property. As we will show, the introduction of these feedbacks does not destroy the excellent differentiation abilities of the differentiator [12], while providing a hint for the Lyapunov function design, allowing evaluation of the time of convergence, and improving robustness. As in [12], the requirement of existence of the second derivative can be replaced with Lipschitz continuity of the first derivative; then L₂ is the corresponding Lipschitz constant. The system (1) is discontinuous; its solutions are understood in the Filippov sense [4], [7], [18].

Introducing the variables e₁ = x₁ − f, e₂ = x₂ − f′ we obtain

ė₁ = −α √|e₁| sign(e₁) + e₂,   (2a)
ė₂ = −β sign(e₁) − χ sign[e₂ + f′(t)] − κ[e₂ + f′(t)] − f″(t).   (2b)

Define ρ(t) = β + (κ f′(t) + f″(t) − χ{sign[e₂(t)] − sign[e₂(t) + f′(t)]}) sign[e₁(t)], which is a piecewise continuous function (for β > κL₁ + L₂ + 2χ it is strictly positive and 0 < ρ_min ≤ ρ(t) ≤ ρ_max, ρ_min = β − κL₁ − L₂ − 2χ, ρ_max = β + κL₁ + L₂ + 2χ); then

ė₁ = −α √|e₁| sign(e₁) + e₂,   (3a)
ė₂ = −ρ(t) sign(e₁) − χ sign(e₂) − κ e₂.   (3b)
All solutions of (2) are captured by the corresponding solutions of (3). For the system (3) the origin e = 0 is an invariant solution. We are going to show that the origin is finite-time stable.

Theorem 1: Let β > κL₁ + L₂ + 2χ and

α = 2[√8 + χ + κ + √((χ + κ)(ρ_max − ρ_min))]/(1.5χ + 0.5κ);

then the system (3) is finite-time stable for any initial conditions e(0) ∈ Ω₀,

Ω₀ = {e ∈ R²: (χ + κ)√|e₁(0)| + 0.5 e₂(0)² ≤ 2√2 α(χ + κ)(ρ_max − ρ_min)⁻¹},

with the time of convergence to zero

T₀ ≤ 2ν⁻¹ √((χ + κ)√|e₁(0)| + 0.5 e₂(0)²),  ν = min{ρ_min/√ρ_max, √2 χ}.

All proofs are presented in the Appendix. For parameters α, β, χ, κ chosen in accordance with the conditions of Theorem 1, the system (1) estimates the derivative f′(t), i.e., the variable x₂(t) converges to f′(t) for t ≥ T₀.


It is a semi-global result, since the size of the set Ω₀ can be assigned arbitrarily by a proper choice of α, β, χ, κ. The proof of global finite-time stability for the differentiator (1) is presented in [6]; it is based on the analysis of another Lyapunov function. The shortcoming of the estimate on T₀ consists in its dependence on the value e₂(0), which is unavailable for measurements. Fortunately, choosing the initial conditions and the parameters α, β and χ in a particular way, it is possible to compensate for this problem.

Corollary 1: Let x₁(0) = f(0), x₂(0) = 0 and

χ = 0.25√(2L₁) + δκ,  β ≥ κL₁ + L₂ + 3χ + δκ,
α = 2√2 [√(β + κL₁ + L₂ + 2χ) + (β + κL₁ + L₂ + 3χ)√(κL₁ + L₂ + 2χ)]/(2β − κL₁ − L₂ − 2χ)   (4)

for any δ ≥ 0; then T₀ ≤ L₁/(0.25√(2L₁) + δκ).

According to the corollary result, taking δ large enough it is possible to ensure any desired rate of estimation.

Example 1: Let f(t) = sin(ω₁t) + b sin(ω₂t), ω₁ = 0.5, ω₂ = 2, b = 0.3, Lᵢ = ω₁ⁱ + b ω₂ⁱ, i = 1, 2. Take δ = 1, κ = 1, and from (4) χ = 1.32, β = 7.53, α = 10.63, T₀ = 0.82. The results of simulation are shown in Fig. 1 (the step is 10⁻⁴ for the Euler method).

Fig. 1. Results of the system (1) simulation.

Remark 1: The coefficients α, β, χ and κ can be chosen independently of the amplitude of f(t) (f(t) can be unbounded).

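The following Python sketch reproduces the setting of Example 1 with the explicit Euler method (step 10⁻⁴), as used for Fig. 1. It is an illustration only, not the authors' simulation code: the gain values follow the reconstruction of Example 1 above, and the assignment of the reported numbers 10.63, 7.53, 1.32 to α, β, χ (with κ = 1) is an assumption.

import numpy as np

# Differentiator (1): x1 tracks f(t), x2 estimates f'(t).
# Gains follow the reconstruction of Example 1 (assumed mapping to alpha, beta, chi, kappa).
alpha, beta, chi, kappa = 10.63, 7.53, 1.32, 1.0
h = 1e-4                       # Euler step, as in Example 1
t = np.arange(0.0, 10.0, h)

w1, w2, b = 0.5, 2.0, 0.3
f  = np.sin(w1 * t) + b * np.sin(w2 * t)              # signal to differentiate
df = w1 * np.cos(w1 * t) + b * w2 * np.cos(w2 * t)    # true derivative, for comparison only

x1, x2 = f[0], 0.0             # initial conditions of Corollary 1: x1(0) = f(0), x2(0) = 0
x2_hist = np.empty_like(t)
for k in range(len(t)):
    e = x1 - f[k]
    dx1 = -alpha * np.sqrt(abs(e)) * np.sign(e) + x2             # (1a)
    dx2 = -beta * np.sign(e) - chi * np.sign(x2) - kappa * x2    # (1b)
    x1 += h * dx1
    x2 += h * dx2
    x2_hist[k] = x2

# After the predicted convergence time the estimate x2 should track f'(t).
print("max |x2 - f'| for t > 1:", np.abs(x2_hist[t > 1.0] - df[t > 1.0]).max())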
III. THE CASE OF NOISY MEASUREMENTS

Let the signal f̃(t) = f(t) + φ(t) be available for measurements, where f: R → R is a useful signal and φ: R → R is a noise. In this case the system (1) takes the form

ẋ₁ = −α √|x₁ − f̃(t)| sign[x₁ − f̃(t)] + x₂,   (5a)
ẋ₂ = −β sign[x₁ − f̃(t)] − χ sign(x₂) − κ x₂.   (5b)

The system (5) is discontinuous and affected by the disturbance φ. First, we would like to prove that the system has bounded trajectories. Second, we would like to show that the accuracy of derivative estimation depends continuously on the noise amplitude (at least for small measurement errors), which is not true in the general case [15]. Introducing the variables e₁ = x₁ − f, e₂ = x₂ − f′, the system (5) can be rewritten as follows:

ė₁ = −α √|e₁| sign(e₁) + e₂ + d₁(t),   (6a)
ė₂ = −ρ(t) sign(e₁) − χ sign(e₂) − κ e₂ + d₂(t),   (6b)
d₁(t) = α √|e₁| sign(e₁) − α √|e₁ − φ(t)| sign[e₁ − φ(t)],
d₂(t) = χ{sign(e₁) − sign[e₁ − φ(t)]},

where d₁, d₂ are the disturbances originated by the presence of the noise φ, and ρ(t) = β + (κ f′(t) + f″(t) − χ{sign[e₂(t)] − sign[e₂(t) + f′(t)]}) sign[e₁(t)] is the same as in (3). By definition, |d₁(t)| ≤ α√(2ε₀), d₂(t) = 0 for |e₁(t)| ≥ ε₀, |d₂(t)| ≤ 2χ and d₂(t) e₁(t) ≥ 0 for all t ∈ R.

A. Global Boundedness of Solutions

Lemma 1: Let the signal φ: R → R be Lebesgue measurable and |f′(t)| ≤ L₁, |f″(t)| ≤ L₂, |φ(t)| ≤ ε₀ for all t ∈ R; β > χ > 0, κ > 0. Then in (5), for all t ≥ t₀ ∈ R and any initial conditions x₁(t₀) ∈ R, x₂(t₀) ∈ R, the solutions are bounded; moreover, |x₂(t) − f′(t)| admits an explicit upper bound in terms of |x₂(t₀) − f′(t₀)|, L₁, L₂ and the gains β, χ, κ, while |x₁(t) − f(t)| admits an explicit upper bound that additionally involves |x₁(t₀) − f(t₀)|, ε₀ and α.

B. Case of Differentiable Noise

Further, for simplicity, assume that φ(t) also has two continuous derivatives and that the constants εᵢ ∈ R₊, i = 0, …, 2, are given such that |φ(t)| ≤ ε₀, |φ′(t)| ≤ ε₁ and |φ″(t)| ≤ ε₂ for all t ∈ R (define L̃ᵢ = Lᵢ + εᵢ, i = 1, 2, the corresponding constants for the function f̃(t)). It is required to estimate the signal f′(t) from the measured f̃(t). Introducing the variables e₁ = x₁ − f̃, e₂ = x₂ − f̃′ and ρ̃(t) = β + (κ f̃′(t) + f̃″(t) − χ{sign[e₂(t)] − sign[e₂(t) + f̃′(t)]}) sign[e₁(t)], the system (5) can be reduced to (3). The function ρ̃(t) is piecewise continuous and for β > κL̃₁ + L̃₂ + 2χ it is strictly positive, 0 < ρ̃_min ≤ ρ̃(t) ≤ ρ̃_max, ρ̃_min = β − κL̃₁ − L̃₂ − 2χ, ρ̃_max = β + κL̃₁ + L̃₂ + 2χ. The results of Theorem 1 can be trivially extended to the system (5).

Theorem 2: Let x₁(0) = f̃(0), x₂(0) = 0 and

χ = 0.25√(2L̃₁) + δκ,  β ≥ κL̃₁ + L̃₂ + 3χ + δκ,
α = 2√2 [√(β + κL̃₁ + L̃₂ + 2χ) + (β + κL̃₁ + L̃₂ + 3χ)√(κL̃₁ + L̃₂ + 2χ)]/(2β − κL̃₁ − L̃₂ − 2χ)

for any δ ≥ 0; then the corresponding solutions of the system (5) are bounded and for all t ≥ T₀

|x₁(t) − f(t)| ≤ ε₀,  |x₂(t) − f′(t)| ≤ ε₁,

and the finite time of convergence possesses the estimate T₀ ≤ T̃₀ = L̃₁/(0.25√(2L̃₁) + δκ).

This result means insensitivity of (5) to any constant noise.

Example 2: Let f(t) be as in the first example and φ(t) = r sin(ωt), r = 0.01, ω = 5ω₂; then ε₀ = r and εᵢ = r ωⁱ, i = 1, 2. Let δ = 1, κ = 1; then χ = 1.357, β = 8.72 and α = 12.008, so that T₀ = 0.884; the step of simulation was chosen 5·10⁻⁴ for the Euler method.
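A minimal way to examine this insensitivity numerically is to repeat the previous simulation with the perturbed measurement f̃ = f + φ of Example 2 and to monitor the errors ε₁ = x₁ − f and ε₂ = x₂ − f′ plotted in Fig. 2. The Python sketch below does so under the same assumptions as the earlier listing (reconstructed gain values, explicit Euler integration); it is an illustration, not the authors' code.

import numpy as np

alpha, beta, chi, kappa = 12.008, 8.72, 1.357, 1.0   # reconstructed gains of Example 2
h = 5e-4                                             # Euler step used in Example 2
t = np.arange(0.0, 10.0, h)

w1, w2, b = 0.5, 2.0, 0.3
r, w_noise = 0.01, 5 * w2
f   = np.sin(w1 * t) + b * np.sin(w2 * t)
df  = w1 * np.cos(w1 * t) + b * w2 * np.cos(w2 * t)
f_m = f + r * np.sin(w_noise * t)                    # measured signal f~ = f + phi

x1, x2 = f_m[0], 0.0
err1, err2 = np.empty_like(t), np.empty_like(t)
for k in range(len(t)):
    e = x1 - f_m[k]                                  # only the noisy measurement is used
    x1 += h * (-alpha * np.sqrt(abs(e)) * np.sign(e) + x2)
    x2 += h * (-beta * np.sign(e) - chi * np.sign(x2) - kappa * x2)
    err1[k], err2[k] = x1 - f[k], x2 - df[k]         # errors with respect to the noise-free signal

print("sup |x1 - f|  for t > 1:", np.abs(err1[t > 1.0]).max())
print("sup |x2 - f'| for t > 1:", np.abs(err2[t > 1.0]).max())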


Fig. 2. Results of the system (5) simulation with noise.

The results of the simulation of the system (5) are presented in Fig. 2, where ε₁(t) = x₁(t) − f(t) and ε₂(t) = x₂(t) − f′(t). The simulation results confirm the theoretical findings of the work.

C. Non-Differentiable Noise

Let the signal φ: R → R be Lebesgue measurable and |φ(t)| ≤ ε₀ for all t ∈ R.

Theorem 3: Let β > κL₁ + L₂ + 2χ, κ > 0 and α = 2[√8 + χ + κ + √((χ + κ)(ρ_max − ρ_min))]/(1.5χ + 0.5κ); then for any initial conditions e(0) ∈ Ω₀,

Ω₀ = {e ∈ R²: (χ + κ)√|e₁(0)| + 0.5 e₂(0)² ≤ 2√2 α(χ + κ)(ρ_max − ρ_min)⁻¹},

the trajectories of the system (6) satisfy, for all t ≥ T₀, the estimates

|e₁(t)| ≤ [(χ + 0.5κ)⁻¹(c₁√ε₀ + c₂ε₀)]²,  |e₂(t)| ≤ √(2(c₁√ε₀ + c₂ε₀)),

where c₁ > 0 and c₂ > 0 are constants (computed in the proof) that depend only on the gains α, β, χ, κ,

T₀ ≤ 4ν⁻¹ √((χ + κ)√|e₁(0)| + 0.5 e₂(0)²),  ν = min{ρ_min/√ρ_max, √2 χ},

provided that c₁√ε₀ + c₂ε₀ ≤ 2√2 α(χ + κ)(ρ_max − ρ_min)⁻¹.

Fig. 3. State space partition for the system (6).

The proof of Theorem 3 is based on the observation that d₂ (more precisely, the product e₂d₂) influences (6) negatively only on the set Π = {|e₁| < ε₀ ∧ 3√(2αε₀) < |e₂| < 2χκ⁻¹ ∧ e₁e₂ > 0} (see Fig. 3). The result of the theorem says that if the noise amplitude ε₀ is comparable with the chosen α, β, χ, κ (the constraint c₁√ε₀ + c₂ε₀ ≤ 2√2 α(χ + κ)(ρ_max − ρ_min)⁻¹ holds), then the estimate of the derivative f′ has an error proportional to ε₀^0.25 (theoretical limitations of improving this estimate are established in [10]). If the noise amplitude is very high, then the result of Lemma 1 applies, guaranteeing boundedness of the trajectories. It is worth stressing that the value 2√2 α(χ + κ)(ρ_max − ρ_min)⁻¹ can be made arbitrarily high by adjusting α, β, χ, κ.

Remark 2: It is a well-known fact that a system with discontinuous feedback is robust with respect to sufficiently small measurement noise if and only if the system admits a continuously differentiable Lyapunov function [11]. The Lyapunov function used in Theorem 1 is not continuously differentiable; therefore, the result of [11] cannot be applied here to establish robustness.

IV. HYBRID DIFFERENTIATOR

The shortcoming of the conditions of Theorem 2 (namely, β ≥ κL̃₁ + L̃₂ + 3χ + δκ and χ = 0.25√(2L̃₁) + δκ) consists in their dependence on the constants L̃ᵢ, i = 1, 2, which can be unknown. Instead of these constants we can use guess values L̃ᵢ⁰, i = 1, 2; their substitution into the conditions of Theorem 2 provides sample parameters for the system (5):

χ⁰ = 0.25√(2L̃₁⁰) + δκ,  β⁰ = 3χ⁰ + κL̃₁⁰ + L̃₂⁰ + δκ,
α⁰ = 2√2 [√(β⁰ + κL̃₁⁰ + L̃₂⁰ + 2χ⁰) + (β⁰ + κL̃₁⁰ + L̃₂⁰ + 3χ⁰)√(κL̃₁⁰ + L̃₂⁰ + 2χ⁰)]/(2β⁰ − κL̃₁⁰ − L̃₂⁰ − 2χ⁰),
T₀⁰ = L̃₁⁰/χ⁰,

where δ > 0 is a design constant. Taking x₁(t₀) = f̃(t₀), x₂(t₀) = 0 for some t₀ ≥ 0 and simulating the system (5) with the derived α⁰, β⁰, χ⁰, we may expect that x₁(t) = f̃(t) for all t ≥ t₀ + T₀⁰. If there exists a time instant t₁ ≥ t₀ + T₀⁰ such that x₁(t₁) ≠ f̃(t₁), then the values of L̃ᵢ⁰, i = 1, 2, were guessed wrongly. Next, it is worth increasing the guess values for the constants L̃ᵢ, i = 1, 2 (say, to L̃ᵢ¹ > L̃ᵢ⁰, i = 1, 2), recalculating α, β, χ, T₀ for these L̃ᵢ¹, i = 1, 2, and repeating the simulation. Formally this algorithm can be written as follows:

L̃ᵢʲ = Λᵢ(L̃ᵢʲ⁻¹, j),  i = 1, 2,  L̃ᵢ⁰ > 0 given,   (7)

χʲ = 0.25√(2L̃₁ʲ) + δκ,  βʲ = 3χʲ + κL̃₁ʲ + L̃₂ʲ + δκ,
αʲ = 2√2 [√(βʲ + κL̃₁ʲ + L̃₂ʲ + 2χʲ) + (βʲ + κL̃₁ʲ + L̃₂ʲ + 3χʲ)√(κL̃₁ʲ + L̃₂ʲ + 2χʲ)]/(2βʲ − κL̃₁ʲ − L̃₂ʲ − 2χʲ),   (8)

T₀ʲ = L̃₁ʲ/χʲ,   (9)

tⱼ₊₁ = arg inf_{t ≥ tⱼ + T₀ʲ} {x₁(t) ≠ f̃(t)},  t₀ = 0;  x₁(tⱼ) = f̃(tⱼ),  x₂(tⱼ) = 0,   (10)

where the functions Λᵢ, i = 1, 2, guarantee strict increase of the solutions of the system (7) for all j ≥ 1 (the concrete choice of the functions Λᵢ depends on the hypotheses available on f, φ and their derivatives; Λᵢ(L, j) = L + 1, for instance). Equation (7) defines the dynamics of the guess estimates L̃ᵢ, i = 1, 2. In (8) the parameters of (5) are derived, and equation (9) estimates the finite time of convergence if the sample parameters in (8) are chosen correctly. The simulation should be performed on the interval [tⱼ, tⱼ₊₁), where the instant of time tⱼ₊₁ is defined in (10); it is the instant of falsification of the current values L̃ᵢʲ, i = 1, 2 (if the constants L̃ᵢʲ, i = 1, 2, have been chosen correctly, then tⱼ₊₁ = +∞).
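The Python sketch below shows one way to organize the adaptation loop (7)–(10) around an Euler simulation of (5). It is an illustration only: the gain formulas inside compute_gains follow the reconstruction of (8)–(9) given above (their exact constants are assumptions), Λᵢ(L, j) = L + 1 is the example rule from the text, the falsification test uses a small threshold ε̄ on |x₁ − f̃| as in Example 3, and the helper names compute_gains and hybrid_differentiator are placeholders introduced here.

import numpy as np

def compute_gains(L1, L2, delta=2.0, kappa=1.0):
    """Sample parameters (8)-(9) from guessed bounds L1, L2 (reconstructed formulas)."""
    chi = 0.25 * np.sqrt(2.0 * L1) + delta * kappa
    beta = 3.0 * chi + kappa * L1 + L2 + delta * kappa
    s = kappa * L1 + L2 + 2.0 * chi
    alpha = 2.0 * np.sqrt(2.0) * (np.sqrt(beta + s) + (beta + s + chi) * np.sqrt(s)) / (2.0 * beta - s)
    T0 = L1 / chi                                   # convergence-time estimate (9)
    return alpha, beta, chi, kappa, T0

def hybrid_differentiator(f_meas, h, eps_bar=1e-3, L0=(0.5, 0.5)):
    """Simulate (5) with the hybrid adaptation (7)-(10); f_meas is the sampled noisy signal f~."""
    L1, L2 = L0
    alpha, beta, chi, kappa, T0 = compute_gains(L1, L2)
    t_reset, x1, x2 = 0.0, f_meas[0], 0.0
    x2_hist = np.empty(len(f_meas))
    for k in range(len(f_meas)):
        t = k * h
        e = x1 - f_meas[k]
        # falsification test (10): after t_reset + T0 the tracking error must stay below eps_bar
        if t >= t_reset + T0 and abs(e) > eps_bar:
            L1, L2 = L1 + 1.0, L2 + 1.0             # guess update (7) with Lambda_i(L, j) = L + 1
            alpha, beta, chi, kappa, T0 = compute_gains(L1, L2)
            t_reset, x1, x2 = t, f_meas[k], 0.0     # re-initialization of (5)
            e = 0.0
        x1 += h * (-alpha * np.sqrt(abs(e)) * np.sign(e) + x2)
        x2 += h * (-beta * np.sign(e) - chi * np.sign(x2) - kappa * x2)
        x2_hist[k] = x2
    return x2_hist

# Example 3 setting: f(t) = t + sin(0.5 t) + 0.3 sin(2 t), phi(t) = 0.1 sin(5 t)
h = 1e-3
t = np.arange(0.0, 100.0, h)
f_meas = t + np.sin(0.5 * t) + 0.3 * np.sin(2.0 * t) + 0.1 * np.sin(5.0 * t)
estimate = hybrid_differentiator(f_meas, h)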


Theorem 4: Let f̃(t) = f(t) + φ(t), where f: R → R and φ: R → R are twice continuously differentiable signals, and let there exist some Lᵢ ∈ R₊, i = 1, 2, and εᵢ ∈ R₊, i = 0, …, 2, such that |f′(t)| ≤ L₁, |f″(t)| ≤ L₂ for all t ∈ R₊ and |φ(t)| ≤ ε₀, |φ′(t)| ≤ ε₁, |φ″(t)| ≤ ε₂ for all t ∈ R₊. Then for the system (5) and the algorithm (7)–(10), for any δ > 0 and any L̃ᵢ⁰ > 0, i = 1, 2, there exists T₀ ≥ 0 such that

|x₁(t) − f(t)| ≤ ε₀,  |x₂(t) − f′(t)| ≤ ε₁   (11)

for all t ≥ T₀, providing that the discrete systems (7) have strictly increasing solutions for any j ≥ 1.

The result of Theorem 4 implies that the algorithm (5), (7)–(10) for the calculation of the parameters α, β, χ provides an estimate of the derivative f′ in finite time, independently of L̃ᵢ ∈ R₊, i = 1, 2. For the differentiator [12] a similar adaptation problem has been solved in [9], where a conventional continuous-time tuning technique is used (finiteness of the adjusted parameters is not guaranteed).

Example 3: Let f(t) = ct + sin(ω₁t) + b sin(ω₂t), c = 1, ω₁ = 0.5, ω₂ = 2, b = 0.3, and φ(t) = r sin(ω_φ t), r = 0.1. Let δ = 2, L̃ᵢ⁰ = 0.5 and Λᵢ(L, j) = L + 1, i = 1, 2; the corresponding trajectories of the differentiator (5) with the hybrid adaptation algorithm (7)–(10) for the cases ω_φ = 5 (dashed lines) and ω_φ = 15 (solid lines) are shown in Fig. 4. Simulations were performed on the interval t ∈ [0, 100] with the step 10⁻³ for the Euler method; the evolution of the error variable e₂(t) is plotted in logarithmic time scale in Fig. 4(a), and the growth of the adapted parameters is shown in Fig. 4(b). In the case ω_φ = 5 the algorithm needs 1 step; for the case ω_φ = 15 the algorithm stops after 3 steps; the derivative estimation was ensured after t₁ + T₀¹ = 0.439 and t₃ + T₀³ = 1.258, respectively. The asymptotic chattering depicted in Fig. 4(b) is caused by the Euler algorithm used for the computation of the solutions of the system (5) with the discretization step 10⁻³. Thus, during simulation the condition (10) was replaced with tⱼ₊₁ = arg inf_{t ≥ tⱼ + T₀ʲ} {|x₁(t) − f̃(t)| > ε̄}, ε̄ = 0.001. Note that f(t) is not bounded in this example.

Fig. 4. Results of the hybrid differentiator simulation.

V. CONCLUSION

The proposed hybrid differentiator ensures finite-time exact observation of the derivative f′ for any signal f (not necessarily bounded) with bounded derivatives f′, f″. The solutions of the differentiator stay bounded even for wrongly chosen parameters and nondifferentiable noise. If the noise amplitude is comparable with the values of the parameters α, β, χ, κ, then the derivative estimation error stays proportional to |φ|^0.25. A hybrid adaptation algorithm is proposed for the adjustment of the differentiator parameters, providing the estimation of the derivative f′ uniformly in the norms of f′ and f″.

An advantage of the differentiator (1) with respect to the standard super-twisting differentiator [12] consists in the existence of a Lyapunov function (which facilitates performance analysis in the presence of noise and allows us to evaluate the time of convergence). The disadvantages include the requirement of boundedness of the first two derivatives (in [12] only the second derivative has to be bounded) and the conservatism of the differentiation accuracy, which is proportional to the noise magnitude in the power 1/4 (in [12] this power is 1/2).

APPENDIX
Proof of Theorem 1: Consider for the system (3) the Lyapunov function

W(e) = Γ(e)√|e₁| + 0.5 e₂²,
Γ(e) = Λ(e) if |e₂| < σ√|e₁|,  Γ(e) = γ if |e₂| ≥ σ√|e₁|,
Λ(e) = 0.5 σ⁻¹[(γ − χ)|e₁|^(−1/2) e₂ sign(e₁) + (γ + χ)σ] if e₁e₂ ≥ 0,
Λ(e) = 0.5 σ⁻¹[(χ − γ)|e₁|^(−1/2) e₂ sign(e₁) + (γ + χ)σ] if e₁e₂ < 0,

where γ > 0 and σ > 0 are design parameters. From the definition of Γ we have γ_ ≤ Γ(e) ≤ γ̄ for all e ∈ R², with γ_ = min{γ, 0.5(γ + χ)} and γ̄ = max{γ, 0.5(γ + χ)}, hence

γ_√|e₁| + 0.5 e₂² ≤ W(e) ≤ γ̄√|e₁| + 0.5 e₂²,  √W ≤ √γ̄ |e₁|^(1/4) + √0.5 |e₂|.

The function W is continuous (an example of its contour plot is given in Fig. 5) but not continuously differentiable. For |e₂| ≥ σ√|e₁| we have W(e) = γ√|e₁| + 0.5 e₂², and differentiation along (3) gives

Ẇ = −0.5 γα + [0.5 γ |e₁|^(−1/2) − ρ(t)] e₂ sign(e₁) − χ|e₂| − κ e₂².

The case |e₂| < σ√|e₁| is more complicated: there W(e) = Λ(e)√|e₁| + 0.5 e₂² and the differentiation produces additional cross terms in e₁ and e₂. Taking into account Young's inequality √|e₁| sign(e₁) e₂ ≤ 0.5|e₁|^(−0.5) e₂² + 0.5|e₁|^(1.5), the bounds ρ_min ≤ ρ(t) ≤ ρ_max (valid for β > κL₁ + L₂ + 2χ) with γ ≤ ρ_min, and choosing

α = {4√2 + [γ + (χ + κ)σ⁻¹](ρ_max − γ)}/(1.5χ + 0.5κ),

one verifies in both cases that, under the constraint |e₁| ≤ [σ(ρ_min − γ)]⁻²,

Ẇ ≤ −ν√W,  ν = min{ρ_min/√ρ_max, √2 χ},

which gives the upper estimate T₀ ≤ 2ν⁻¹√W(e(0)) for the time of convergence. Since

γ_√|e₁(t)| ≤ γ_√|e₁(t)| + 0.5 e₂(t)² ≤ W(e(t)) ≤ W(e(0)) ≤ γ̄√|e₁(0)| + 0.5 e₂(0)²

for the initial conditions e(0) ∈ Ω₀, the constraint |e₁(t)| ≤ [σ(ρ_min − γ)]⁻² holds for all t ≥ 0 and the derived estimates are valid. The last thing to do is to optimize the values of the design parameters σ and γ: the choice σ = 4√2 does not change the value of ν, and the function α(γ) above reaches its minimum α = 2{√8 + χ + κ + √((χ + κ)(ρ_max − ρ_min))}/(1.5χ + 0.5κ) for γ = χ + κ, which leads to the estimates of the theorem.


Fig. 5. Contour plot of W for σ = 0.5, γ = 1, χ = 3.

Proof of Corollary 1: The value f(t), t ∈ R, is assumed to be accessible to the designer, thus e₁(0) = 0 is an admissible choice. The value e₂(0) belongs to [−L₁, L₁] if x₂(0) = 0, which gives the following estimate on the set of initial conditions: 0.5L₁² ≤ 2√2 α(χ + κ)(ρ_max − ρ_min)⁻¹. For α chosen according to (4) this inequality is satisfied, so the proposed choice is admissible for any nonnegative δ. In accordance with the result of Theorem 1,

ν = min{(β − κL₁ − L₂ − 2χ)/√(β + κL₁ + L₂ + 2χ), √2 χ}.

The first term under the minimum sign is an increasing function of β. If we are able to show that for the minimal admissible value β = κL₁ + L₂ + 3χ + δκ the first term is always bigger than √2 χ, then the expression for ν can be simplified. This is indeed the case for the value of χ fixed in (4), the corresponding ratio being a strictly decreasing function of (κL₁ + L₂)/χ bounded below by √2 χ; therefore

ν = √2 χ = √2 (0.25√(2L₁) + δκ),

and, since e₁(0) = 0 and |e₂(0)| ≤ L₁, the result of Theorem 1 gives T₀ ≤ 2ν⁻¹√(0.5 e₂(0)²) ≤ L₁/(0.25√(2L₁) + δκ), which is the required upper estimate for T₀.

Proof of Lemma 1: Let us start with the second equation of the system (6), considering the Lyapunov function U(e₂) = 0.5 e₂², whose derivative admits the estimate U̇ ≤ −κU + 0.5κ⁻¹[β + κL₁ + L₂ + 3χ]²; that gives the desired boundedness of e₂(t). Next consider U(e₁) = 0.5 e₁²; then

U̇ ≤ −α√|e₁| |e₁| + |e₁| (|e₂| + α√(2ε₀)).

Since e₂(t) is bounded, for |e₂(t)| + α√(2ε₀) ≤ 0.5 α√|e₁| we have U̇ ≤ −0.5 α√|e₁| |e₁| < 0, which implies the result.

Proof of Theorem 2: The proof follows from Theorem 1 and Corollary 1 under the observation that in the coordinates e₁ = x₁ − f̃, e₂ = x₂ − f̃′ the system (5) is reduced to (3). After the finite-time convergence of e to zero we have x₁ ≡ f̃ and x₂ ≡ f̃′, hence |x₁(t) − f(t)| = |φ(t)| ≤ ε₀ and |x₂(t) − f′(t)| = |φ′(t)| ≤ ε₁.

Proof of Theorem 3: Consider for the system (6) the same Lyapunov function W(e) as in the proof of Theorem 1. Several cases have to be analyzed.

1) The case |e₂| < σ√|e₁|. Repeating the computations of the proof of Theorem 1 and using Young's inequality √|e₁| sign(e₁) e₂ ≤ 0.5|e₁|^(−0.5) e₂² + 0.5|e₁|^(1.5), the disturbances enter the estimate of Ẇ through the additional terms (0.25γ + 0.75χ)|d₁| + e₂d₂.

2) The case |e₂| ≥ σ√|e₁|. Then W(e) = γ√|e₁| + 0.5 e₂², and the same computation as in the proof of Theorem 1 produces the additional terms γ|d₁| + e₂d₂.

In both cases the disturbance-free part of the estimate is bounded by −ν√W, as in the proof of Theorem 1, under the constraint |e₁| ≤ [σ(ρ_min − γ)]⁻², while |d₁(t)| ≤ α√(2ε₀) and |d₂(t)| ≤ 2χ hold by construction. The main issue with the last estimates is how to treat the disturbance d₂ while computing bounds on the trajectories that depend on ε₀ only. Fortunately, d₂ affects the system dynamics negatively in two compact sets only. In Fig. 3 the partition of the planar state space of the system (6) is shown: d₂ = 0 for |e₁| ≥ ε₀ (more precisely, for |e₁| ≥ |φ|); since always d₂e₁ ≥ 0 by construction of d₂, we have e₂d₂ ≤ 0 in the two quadrants where e₁e₂ ≤ 0; finally, e₂d₂ ≤ χ|e₂| + κe₂² holds for |e₂| ≥ 2χκ⁻¹ (because |d₂| ≤ 2χ), while |e₂d₂| ≤ 6χ√(2αε₀) for |e₂| ≤ 3√(2αε₀). Thus the appearance of the destructive amplitude 2χ of the disturbance d₂ is possible only inside the compact set Π = {|e₁| < ε₀ ∧ 3√(2αε₀) < |e₂| < 2χκ⁻¹ ∧ e₁e₂ > 0} (see Fig. 3). Consider all these subsets separately.

A) For the cases |e₁| ≥ ε₀, |e₂| ≥ 2χκ⁻¹ or e₁e₂ ≤ 0 we have Ẇ ≤ −ν√W + γα√(2ε₀).

B) For the case |e₂| ≤ 3√(2αε₀) we obtain Ẇ ≤ −ν√W + (γα + 6χ√α)√(2ε₀).

Combining these estimates with the one computed for the case |e₂| < σ√|e₁|, we conclude that outside the set Π, and under the constraint |e₁| ≤ [σ(ρ_min − γ)]⁻²,

Ẇ ≤ −ν√W + (γα + 6χ√α)√(2ε₀).


C) Now let us estimate the time that the solutions of the system (6) may spend inside the set Π. This set is composed of two disjoint subsets; in the first one the constraints 0 < e₁ < ε₀, 3√(2αε₀) < e₂ < 2χκ⁻¹, e₁e₂ > 0 hold. There ė₁ = −α√|e₁| sign(e₁) + e₂ + d₁ ≥ √(2αε₀), and the time T₁ of crossing the set Π by the trajectories of (6) is upper bounded as T₁ ≤ √(ε₀/(2α)) (the time of passing from 0 to ε₀ with the minimal rate √(2αε₀)). For the second subset, where 0 > e₁ > −ε₀, −3√(2αε₀) > e₂ > −2χκ⁻¹, e₁e₂ > 0, the same estimate is obtained similarly. The set Π can also be left in the direction of the variable e₂, but we are looking for the maximal time of stay of the trajectories inside Π, and the estimate on T₁ is sufficient (if the system exits from the set Π faster in the direction of e₂ than in e₁, then T₁ still bounds the time of stay in the set; if the time of crossing the set Π with respect to the variable e₂ is bigger than T₁, then it is not important, since the trajectories leave the set through the e₁ boundary in the time T₁ at the maximum). The following worst-case estimate holds inside Π:

Ẇ ≤ −0.5ν√W + γα√(2ε₀) + χ²(4κ)⁻¹,

where we used the fact that for |d₂| ≤ 2χ the inequality e₂d₂ − χ|e₂| − κe₂² ≤ χ²(4κ)⁻¹ holds for any e₂ ∈ R. Let us introduce, for any t ≥ t₀ ≥ 0, the variable

S(t) = W(t) − s(t),  s(t) = ∫ from t₀ to t of [γα√(2ε₀) + χ²(4κ)⁻¹] dτ,  s ∈ R₊,

for which, inside Π, Ṡ = Ẇ − ṡ ≤ −0.5ν√W ≤ −0.5ν√(max{S, 0}). Therefore, for any t ≥ t₀ we have

S(t) ≤ max{[√S(t₀) − 0.25ν(t − t₀)]², 0},
W(t) = S(t) + s(t) ≤ max{[√W(t₀) − 0.25ν(t − t₀)]², 0} + s(t).

For the trajectories inside the set Π we know that t ≤ t₀ + √(ε₀/(2α)); consequently

W(t) ≤ max{[√W(t₀) − 0.25ν(t − t₀)]², 0} + [γα√(2ε₀) + χ²(4κ)⁻¹] √(ε₀/(2α)).

Finally, taking the maximum over all the estimates obtained for the Lyapunov function W, we obtain, for all t ≥ 0,

W(t) ≤ max{[√W(0) − 0.25νt]², c₁√ε₀ + c₂ε₀},

where c₁ = 0.25χ²κ⁻¹(2α)^(−1/2) collects the contribution of the crossings of the set Π and c₂ = γ√α + 8ν⁻²(γα + 6χ√α)² collects the contribution of the disturbances outside Π. Since γ_√|e₁(t)| ≤ W(t) and 0.5 e₂(t)² ≤ W(t), the estimates of the theorem follow, with the convergence time T₀ ≤ 4ν⁻¹√W(e(0)). Moreover, W(e(t)) ≤ γ̄√|e₁(0)| + 0.5 e₂(0)² + c₁√ε₀ + c₂ε₀, so that for initial conditions e(0) ∈ Ω₀ and a measurement noise satisfying the smallness condition of the theorem the constraint |e₁(t)| ≤ [σ(ρ_min − γ)]⁻² holds for all t ≥ 0 and the derived estimates are valid. The last thing to do is to optimize the values of the design parameters σ and γ, as in the proof of Theorem 1: the choice σ = 4√2 does not change the value of ν, and the function α(γ) reaches its minimum α = 2{√8 + χ + κ + √((χ + κ)(ρ_max − ρ_min))}/(1.5χ + 0.5κ) for γ = χ + κ, which leads to the desired estimates.

Proof of Theorem 4: The dynamics of the system (5) can be reduced to (3), and if e₁(t) ≡ 0 for t ≥ 0, then necessarily e₂(t) ≡ 0 for t ≥ 0. From Lemma 1, even for wrongly chosen parameters of the system (5) the solutions are always bounded. According to Theorem 2, if L̃ᵢʲ ≥ L̃ᵢ, i = 1, 2, then for the initial conditions (10) in the system (5) it holds that e₁(t) = 0 for all t ≥ tⱼ + T₀ʲ and the desired estimates (11) hold. If a finite instant tⱼ₊₁ appears in (10), then the values L̃ᵢ, i = 1, 2, are increased in accordance with (7). Since L̃ᵢ ∈ R₊, i = 1, 2, either there exists N > 0 such that L̃ᵢᴺ ≥ L̃ᵢ, i = 1, 2, and the result of Theorem 2 holds, or the conditions (11) are satisfied after some step j ≥ 0 for all t ≥ tⱼ + T₀ʲ. In both cases there exists a finite time T₀ ≥ 0.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their helpful suggestions for improving the technical note.

REFERENCES

[1] G. Besançon, Ed., Nonlinear Observers and Applications, ser. Lecture Notes in Control and Information Science, vol. 363. Berlin, Germany: Springer-Verlag, 2007.
[2] I. Boiko, Discontinuous Control Systems: Frequency Domain Analysis and Design. Boston, MA: Birkhäuser, 2009.
[3] B. Carlsson, A. Ahlen, and M. Sternad, "Optimal differentiation based on stochastic signal models," IEEE Trans. Signal Processing, vol. 39, no. 2, pp. 341-353, Feb. 1991.



[4] F. H. Clarke, Optimization and Nonsmooth Analysis. New York: Wiley, 1983.
[5] A. M. Dabroom and H. K. Khalil, "Discrete-time implementation of high-gain observers for numerical differentiation," Int. J. Control, vol. 72, pp. 1523-1537, 1999.
[6] D. Efimov and L. Fridman, "A hybrid global robust finite-time differentiator," in Proc. 49th IEEE Conf. Decision Control, 2010, pp. 5114-5119.
[7] A. F. Filippov, Differential Equations with Discontinuous Right-Hand Sides. Dordrecht, The Netherlands: Kluwer, 1988.
[8] M. Fliess, J. Levine, P. Martin, and P. Rouchon, "Flatness and defect of nonlinear systems: Introductory theory and examples," Int. J. Control, vol. 61, no. 6, pp. 1327-1361, 1995.
[9] S. Kobayashi and K. Furuta, "Frequency characteristics of Levant's differentiator and adaptive sliding mode differentiator," Int. J. Syst. Sci., vol. 38, no. 10, pp. 825-832, 2007.
[10] A. N. Kolmogoroff, "On inequalities between upper bounds of consecutive derivatives of an arbitrary function defined on an infinite interval," Amer. Math. Soc. Transl., vol. 2, pp. 233-242, 1962.
[11] Y. S. Ledyaev and E. D. Sontag, "Lyapunov characterization of robust stabilization," Nonlin. Anal., vol. 37, pp. 813-840, 1999.
[12] A. Levant, "Robust exact differentiation via sliding mode technique," Automatica, vol. 34, no. 3, pp. 379-384, 1998.
[13] A. Levant, "Higher-order sliding modes, differentiation and output-feedback control," Int. J. Control, vol. 76, no. 9/10, pp. 924-941, 2003.

[14] M. Mboup, C. Join, and M. Fliess, "Numerical differentiation with annihilators in noisy environment," Numer. Algorithms, vol. 50, no. 4, pp. 439-467, 2009.
[15] J. A. Moreno and M. Osorio, "A Lyapunov approach to second-order sliding mode controllers and observers," in Proc. 47th IEEE Conf. Decision Control, 2008, pp. 2856-2861.
[16] A. Pisano and E. Usai, "Globally convergent real-time differentiation via second order sliding modes," Int. J. Syst. Sci., vol. 38, no. 10, pp. 833-844, 2007.
[17] A. Polyakov and A. Poznyak, "Reaching time estimation for super-twisting second order sliding mode controller via Lyapunov function designing," IEEE Trans. Autom. Control, vol. 54, no. 8, pp. 1951-1955, Aug. 2009.
[18] A. Poznyak, Advanced Mathematical Tools for Automatic Control Engineers: Deterministic Techniques. Oxford, U.K.: Elsevier, 2008.
[19] A. Stotsky and I. Kolmanovsky, "Application of input estimation techniques to charge estimation and control in automotive engines," Control Eng. Prac., vol. 10, pp. 1371-1383, 2002.
[20] K. K. Tan, Q.-G. Wang, and C. C. Hang, Advances in PID Control. London, U.K.: Springer-Verlag, 1999.
[21] L. K. Vasiljevic and H. K. Khalil, "Differentiation with high-gain observers in the presence of measurement noise," in Proc. 45th IEEE Conf. Decision Control, 2006, pp. 4717-4722.
