
2009 International Conference on Advances in Recent Technologies in Communication and Computing

FPGA Design and Implementation Issues of Artificial Neural Network Based PID Controllers
Vikas Gupta
Department of Electronics & Telecommunication, Vidyavardhanis College of Engineering, Vasai Road (W), Thane, India. E-mail: profvikasgupta@rediffmail.com, Mobile: +919892251610

Dr. K. Khare, Dr. R. P. Singh
Department of Electronics & Communication, Maulana Azad National Institute of Technology, Bhopal, India. E-mail: kavitakhare1@yahoo.com

Abstract— This paper discusses implementation issues of FPGA- and ANN-based PID controllers. FPGA-based reconfigurable computing architectures are well suited to hardware implementation of neural networks, yet FPGA realization of ANNs with a large number of neurons remains a challenging task. This paper examines the issues involved in implementing a multi-input neuron with linear/nonlinear excitation functions on an FPGA. It also discusses the advantages of error self-recurrent neural networks over back-propagation neural networks.

Keywords: FPGA, PID controller, artificial neural networks, error self-recurrent neural networks.

978-0-7695-3845-7/09 $26.00 © 2009 IEEE. DOI 10.1109/ARTCom.2009.182

I. INTRODUCTION
Most systems used in industry are nonlinear, and their dynamic properties vary with operating conditions. Under such conditions the PID controller is the common choice, but a PID controller with constant parameters cannot meet the demands of the system, because control precision declines markedly when the plant parameters change. In recent years, artificial neural networks have offered new approaches to the control of nonlinear systems, owing to their strong nonlinear mapping, parallel processing, and self-learning capabilities. The authors of [1] proposed a PID control method based on a feed-forward multilayer neural network (MNN) trained with the BP algorithm developed by Rumelhart [2]. Neural networks can be implemented in both analog and digital hardware. Digital systems are more advantageous than analog ones in the following respects: higher accuracy, better repeatability, lower noise sensitivity, better testability, higher flexibility, and compatibility with other types of preprocessors. For this reason, only digital systems are considered in the remainder of this paper. Broadly, digital NN hardware implementations are classified as (i) FPGA-based, (ii) DSP-based, and (iii) ASIC-based implementations [3], [4]. An FPGA can preserve the parallel architecture of the neurons in a layer while also offering flexibility through reconfiguration. We concentrate on FPGAs, since DSP-based implementations are serial and ASIC implementations do not offer reconfigurability. Issues related to the FPGA implementation of a multi-input neuron are discussed, and both linear and nonlinear excitation functions are considered.

The standard BP learning algorithm has several limitations. Above all, its long and unpredictable training process is the most troublesome: the rate of convergence, for example, is seriously affected by the initial weights and the learning rate. Many researchers have proposed modifications of the classical BP algorithm [2]-[4]. More recently, Scalero and Tepedelenlioglu derived a modified algorithm as an alternative to BP; it minimizes the mean square error between the desired output and the actual output with respect to the summation output.

II. PID CONTROLLERS
A tracking system must be protected against difficulties that can degrade or destabilize it, such as steady-state error, overshoot, slow rise time, poor frequency response, and disturbances. These difficulties can be minimized with a proportional-integral-derivative (PID) controller. The following observations lead to a better design: increasing the proportional gain decreases the steady-state error but can cause instability; increasing the integral gain slows the system and can likewise cause instability; increasing the derivative gain speeds up the system and improves the overshoot and rise time.

III. IMPLEMENTATION ISSUES OF FPGA- AND ANN-BASED PID CONTROLLERS
ANN models generally depend on massively parallel computation, so high-speed operation in real-time applications can be achieved only if the networks are implemented in parallel hardware architectures [5]. FPGAs have been used for ANN implementation because of their ease of access, fast reprogramming, and low cost, permitting fast and inexpensive implementation of the whole system. In addition, FPGA-based ANNs can be adapted to specific ANN configurations [6]. For hardware implementation it is considered important to separate the learning and recall phases of an ANN.

In general, an ANN architecture consists of a set of inputs and interconnected neurons. The neuron can be considered the basic processing element, and its design determines the complexity of the network. The fundamental problem limiting the size of FPGA-based ANNs is the cost of implementing the multiplications associated with the synaptic connections, because fully parallel ANNs require a large number of multipliers. Although prototyping can be accomplished on FPGAs that provide many hardware multipliers, the overall goal is to use as few resources as possible. Practical ANN implementations therefore either reduce the number of multipliers or reduce the complexity of each multiplier. One way to reduce the number of multipliers is to share a single multiplier across all neuron inputs [7]. In [6] another method of reducing the multiplication circuitry is proposed, based on bit-serial stochastic computing techniques. The complexity of the adder depends on the precision of the inputs from the synapses and on the number of inputs to each neuron; adders may be shared across inputs, with intermediate results stored in an accumulator. Of particular importance is the hardware implementation of the neuron activation function. The sigmoid function, traditionally used in ANNs, is not suitable for direct digital implementation, as it involves an infinite exponential series. Most implementations therefore approximate the sigmoid in hardware, typically by using lookup tables (LUTs) that store samples of the function; examples of this technique are reported in [5], [8]. However, the memory required for these tables can be quite large, especially if a close approximation is needed. Other implementations use adders, shift registers, and multipliers to realize a digital approximation of the sigmoid function.

A. Weight Precision
Selecting the weight precision is one of the important choices when implementing ANNs on FPGAs.
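The effect of the precision choice can be explored in software before committing to hardware. The sketch below uses a toy two-layer network of my own construction (tanh standing in for the sigmoid); it quantizes the weights to signed fixed-point grids of various fractional bit widths and measures the worst-case deviation of the network output from the full-precision reference:

```python
import numpy as np

def quantize(w, bits):
    """Round weights to a signed fixed-point grid with `bits` fractional bits."""
    scale = 2.0 ** bits
    return np.round(w * scale) / scale

rng = np.random.default_rng(0)
# Hypothetical network: 4 inputs -> 8 hidden nodes -> 1 output.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))
x = rng.normal(size=(4, 100))          # 100 random test vectors

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)                # tanh as the sigmoidal activation
    return W2 @ h

ref = forward(W1, W2, x)               # full-precision reference output
for bits in (4, 8, 12, 16):
    out = forward(quantize(W1, bits), quantize(W2, bits), x)
    print(bits, float(np.max(np.abs(out - ref))))
```

The printed error shrinks as the fractional bit width grows, which is exactly the trial-and-error procedure for picking the minimum precision a given problem tolerates.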
Weight precision trades off the capabilities of the realized ANN against the implementation cost. Higher weight precision means fewer quantization errors in the final implementation, while lower precision leads to simpler designs, greater speed, and reductions in area and power consumption. One way of resolving the trade-off is to determine the minimum precision through trial and error, by simulating the solution in software [9], [10].

B. Transfer Function Implementation
Direct implementation of the nonlinear sigmoid transfer function is very expensive. There are two practical approaches to approximating sigmoid functions with simple FPGA designs:

i. Piece-wise linear approximation. A combination of line segments of the form y = ax + b is used to approximate the sigmoid function. Note that if the coefficients of the lines are chosen to be powers of two, the sigmoid function can be realized by a series of shift and add operations. Many implementations of neuron transfer functions use such piece-wise linear approximations [12].

ii. Lookup tables. Uniform samples taken from the centre of the sigmoid function are stored in a table for lookup; the regions outside the centre are still approximated in a piece-wise linear fashion. In this scheme, the memory inside the FPGA is utilized to provide an efficient design for PID controllers.

iii. Data representation. An integer representation of the system equations can be retained in the implementation in order to reduce circuit area and computation time. Simulations in the Matlab environment can be used to set a suitable precision. Two rules may be settled: to work on the variables, and to improve the ANN coefficients (weight and bias coefficients).

IV. ERROR RECURRENT NEURAL NETWORKS
The ANN controller discussed above is based on a multilayer back-propagation neural network; it can be improved by using an error recurrent neural network. Suppose, as shown in Fig. 1, that all node inputs x_ik and summation outputs y_ik are specified. The problem then reduces to a linear one, i.e., a system of linear equations that relates the node inputs x_ik to the summation outputs y_ik through the weight vector w_ik.
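The reduction to a linear system can be made concrete with a toy example (all data synthetic and of my own construction): if the node inputs and the corresponding summation outputs are both known, the weight vector is recovered by a single least-squares solve, with no iterative back-propagation:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))          # 200 samples of a 5-dimensional node input x_ik
w_true = np.array([0.5, -1.0, 2.0, 0.0, 0.3])
y = X @ w_true                         # summation outputs y_ik produced by a known weight vector

# Solving the (overdetermined, consistent) linear system recovers the weights exactly.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(w_hat, w_true))      # → True
```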

Fig. 1 Structure of a Multilayer Feedforward Neural Network
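The layered computation in Fig. 1, where each node forms a weighted sum of the previous layer's outputs and passes it through an activation, can be sketched as follows (a minimal illustration; the dimensions are my own):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def mlp_forward(x, weights):
    """Return the summation outputs and node activations for each layer."""
    activations, summations = [x], []
    for W in weights:                  # one weight matrix per layer
        y = W @ activations[-1]        # y_ik: summation output of the layer's nodes
        summations.append(y)
        activations.append(sigmoid(y)) # node output fed to the next layer
    return summations, activations

rng = np.random.default_rng(2)
weights = [rng.normal(size=(6, 3)), rng.normal(size=(2, 6))]  # a 3-6-2 network
s, a = mlp_forward(rng.normal(size=3), weights)
print(a[-1].shape)                     # → (2,)
```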

The learning technique with Kalman filtering for the MNN builds upon the partitioning into linear and nonlinear parts proposed by Scalero et al. [5]. First, the desired summation outputs d_ik of the nonlinear portion are calculated; these are then used to update and optimize the weights of the linear part, between the summation outputs y_ik and the inputs x_ik, by the Kalman filtering method. Scalero's algorithm can be summarized in the following steps:

i) Calculate the desired summation outputs; d_ik is obtained by the steepest descent method:

e_k^L = o_k - f(y_k^L)   (1)
d_k^j = y_k^j + m e_k^j f'(y_k^j)   (2)

where m is the learning rate; f'(.) is the derivative of the activation function; j, k, and L identify the layer, node, and output layer, respectively; and o_k is the desired value at the output layer.

ii) For each layer j, from 1 through L, calculate the Kalman gain and update the covariance matrix:

K^j = P^j x^(j-1) / (l + (x^(j-1))^T P^j x^(j-1))   (3)
P^j <- (P^j - K^j (x^(j-1))^T P^j) / l   (4)

where x^(j-1) is the input vector of the jth layer and l is the forgetting factor.

iii) Update the weights:

w_k^L <- w_k^L + K^L (d_k^L - y_k^L)   (5)
w_k^j <- w_k^j + K^j (d_k^j - y_k^j), j = 1, ..., L-1   (6)

A. Error Self-Recurrent Neuron Model
In the conventional neuron model, shown in Fig. 4(a), the externally supplied bias or threshold has the effect of increasing or decreasing the net input of the activation function [11]. However, it is difficult to determine its magnitude and sign, and it is hard to attach a physical meaning to it.
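The Kalman gain, covariance, and weight updates of steps (ii)-(iii) above can be sketched for a single node's weight vector. The following is a generic recursive-least-squares illustration on synthetic data, not the authors' implementation; the variable names and the driving loop are my own:

```python
import numpy as np

def kalman_update(w, P, x, d, y, lam=0.99):
    """One Kalman/RLS step: move w so the summation output approaches d.

    w: weight vector, P: inverse-correlation matrix, x: layer input,
    d: desired summation output, y: actual summation output, lam: forgetting factor.
    """
    Px = P @ x
    k = Px / (lam + x @ Px)            # Kalman gain, cf. eq. (3)
    P = (P - np.outer(k, Px)) / lam    # covariance update, cf. eq. (4)
    w = w + k * (d - y)                # weight update, cf. eqs. (5)-(6)
    return w, P

rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])    # weights the node should learn
w = np.zeros(3)
P = 100.0 * np.eye(3)                  # large initial covariance
for _ in range(200):
    x = rng.normal(size=3)
    d = w_true @ x                     # desired summation output (noiseless)
    w, P = kalman_update(w, P, x, d, w @ x)
print(np.round(w, 3))
```

On this noiseless linear problem the weights converge to w_true within a few dozen updates, illustrating why the Kalman-filtered linear part trains much faster than gradient descent on the same weights.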

Fig. 4 (a) Conventional Neuron Model

Fig. 4 (b) Error Recurrent Neuron Model

In the novel neuron model, i) the input for the bias or threshold takes the value of the one-step-delayed error signal x_ik(t-1) between the desired output and the model output, and ii) a synaptic weight with a fixed value is attached to the recurrent error signal. This architecture is depicted in Fig. 4(b). The new neuron model determines the sign and magnitude of the threshold or bias automatically, instead of using a manually selected -1 or +1. Moreover, its error feedback is expected to make a direct contribution to fast tracking behavior, since it creates a new and meaningful input source that conventional neuron models lack; hence it induces a fast identification capability.

V. CONCLUSION
This paper has discussed the advantages of FPGA implementation of ANN-based PID controllers. When implementing an ANN on an FPGA, one must be clear about the role reconfiguration plays and develop strategies to exploit it effectively. The weight and input precision should not be set arbitrarily, as the required precision is problem-dependent. Topology-adaptation approaches, which take advantage of the features of the FPGA, were also mentioned. Continual advances in FPGA gate count, speed, cost, and reprogrammability make this approach a viable alternative to custom hardware for real-time applications, so FPGAs are the best candidates among the alternatives for neural network implementation. The paper has also discussed the error self-recurrent neuron model, whose bias input is the time-delayed error signal between the desired output and the model output instead of a fixed +1 or -1. By resolving the problem of selecting the learning rate, it makes it feasible to design a real-time controller based on a neural network model.

REFERENCES
[1] C. C. Shi and G. S. Zhang, "A New Method of PID Control Based on Improved BP Neural Network," in Proceedings of the Chinese Control Conference, Harbin, China, Aug. 2006, pp. 1167-1171.
[2] P. D. Wasserman, Neural Computing: Theory and Practice, Van Nostrand Reinhold, NY, 1989.
[3] Y. J. Chen and Du Plessis, "Neural Network Implementation on a FPGA," in Proceedings of IEEE Africon, vol. 1, pp. 337-342, 2002.
[4] S. S. Kim and S. Jung, "Hardware Implementation of a Real Time Neural Network Controller with a DSP and an FPGA," in IEEE International Conference on Robotics and Automation, vol. 5, pp. 3161-3165, April 2004.
[5] X. Yu and D. Dent, "Implementing neural networks in FPGAs," IEE Colloquium on Hardware Implementation of Neural Networks and Fuzzy Logic, vol. 61, pp. 1/1-1/5, 1994.
[6] S. L. Bade and B. L. Hutchings, "FPGA-based stochastic neural networks - implementation," in Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, Napa Valley, CA, USA, April 1994, pp. 189-198.
[7] D. Hammerstrom, "A VLSI architecture for high-performance, low-cost, on-chip learning," in Proceedings of the International Joint Conference on Neural Networks (IJCNN 90), vol. 2, pp. 537-544, San Diego, CA, USA, June 1990.
[8] D. F. Wolf, R. A. F. Romero, and E. Marques, "Using Embedded Processors in Hardware Models of Artificial Neural Networks," in Proceedings of SBAI 2001, pp. 78-83.
[9] J. L. Holt and T. E. Baker, "Back propagation simulations using limited precision calculations," in Proceedings of the International Joint Conference on Neural Networks, 1991, vol. 2, pp. 121-126.
[10] S. Draghici, "On the capabilities of neural networks using limited precision weights," Neural Networks, vol. 15, pp. 395-414, 2002.
[11] S. Singhal and L. Wu, "Training feedforward networks with the extended Kalman algorithm," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, May 1989, pp. 1187-1190.
