
4th International Conference on Computer Applications in Electrical Engineering - Recent Advances, February 19-21, 2010

Proceedings
4th International Conference on Computer Applications in Electrical Engineering - Recent Advances (CERA-09)

February 19-21, 2010

Organised by
DEPARTMENT OF ELECTRICAL ENGINEERING
Indian Institute of Technology Roorkee, INDIA

© 2010 Department of Electrical Engineering, Indian Institute of Technology Roorkee, India
Cover Page Design and Editing: Kapil Kumar Nagwanshi

4th International Conference on Computer Applications in Electrical Engineering - Recent Advances (CERA-09)

Table of Contents

S.No.  Title and Authors  Page

Preface

A1 Instrumentation
1. Realization of non-volatile SRAM based on magnetic tunnel junction - A. Alfred Kirubaraj, M. Malathi ... 1
2. Modeling and performance analysis of proportional integral derivative controller with PSS using Simulink - K. Gowrishankar, A. Ragavendiran, R. Gnanadass ... 5
3. Modified Smith predictor and controller for stable and unstable processes - Dola Gobinda Padhan, Somanath Majhi ... 9
4. RS485/MODBUS based intelligent building automation system using LABVIEW - Jignesh G. Bhatt, H. K. Verma ... 14
5. Fault diagnosis in benchmark process control system using probabilistic neural network - Tarun Chopra, Dr. Jayashri Vajpai ... 19
6. Classification of teas using spectroscopy and probabilistic neural networks - A. H. Kiranmayee, P. C. Panchariya ... 23
7. Position control of servo motor using fuzzy logic - Amol A. Kalage, Sarika V. Tade ... 27
8. Fuzzy logic tuned PID controller for Stewart platform manipulator - Dereje Shiferaw, Ranjit Mitra ... 31

A2 Electrical Drives
9. Losses reduction in indirect vector controlled IM drive using FLC - C. Srisailm, Anurag Trivedi ... 35
10. Design & simulation of bipolar microstepping drive for disc rotor type stepper motor - Archan B. Patel, A. N. Patel, S. H. Chetwani, K. S. Denpiya ... 39
11. Power control of induction generator using PI controllers for wind power applications - K. B. Mohanty, Swagat Pati ... 43
12. Implementation of an adaptive neuro-fuzzy based speed controller for VCIM drive - R. A. Gupta, Rajesh Kumar, Rajesh S. Surjuse ... 47
13. New voltage and frequency regulator for self-excited induction generator - D. K. Palwalia, S. P. Singh ... 51
14. Soft magnetic composite core: a new perspective for linear switched reluctance motor - N. C. Lenin, Dr. R. Arumugam ... 58
15. Voltage and frequency control with integrated voltage source converter and star/hexagon transformer for IAG based standalone wind energy conversion system - Bhim Singh, Shailendra Sharma, Ambrish Chandra, K. Al-Haddad ... 62

A3 Power System Operation, Control and Protection I
16. Online contingency analysis incorporating generation regulation and load characteristics using fuzzy logic - D. Thukaram, Rimjhim Agrawal, Shashank Karanth ... 67
17. Auxiliary controlled SVS for damping SSR in a series compensated power system - Narendra Kumar, Ritu, Narinder Huda, Vedpal ... 72
18. Simulation of surface and volume stress for high voltage transmission insulators - Subba Reddy B, Udaya Kumar ... 78
19. Stability analysis and frequency excursion of combined cycle plant including SMES unit - J. Raja, C. Christober Asir Rajan ... 82
20. Viability assessment of micro hydro power: a case study - P. K. Katti ... 87

A4 Computer Networks and Security
21. Data aggregation based cooperative MIMO diversity in wireless sensor networks - Vibhav Kumar Sachan, Syed A. Imam ... 93
22. Design of novel 3D code set based on algebraic congruency for OCDMA communication networks for data transfer - Shilpa Jindal, Neena Gupta ... 97
23. A correlation based robust work model formation for promoting global cyber security - Kapil Kumar Nagwanshi, Tilendra Shishir Sinha, Susanta Kumar Satpathy ... 100
24. Identification of vibration signature and operating frequency range of ball mill using wireless accelerometer sensor - Satish Mohanty, Vanam Upendranath, P. Bhanu Prasad ... 104
25. Mobile SMS to braille transcription, extending to Android mobiles: a new era of mobile for the blinds - Deepak Kumar, Saiful Islam, Farheena Khan ... 108

B1 Power System Operation, Control and Protection II
26. ANN-PSO technique for fault section estimation in power system - Prashant P. Bedekar, Sudhir R. Bhide, Vijay S. Kale ... 112
27. Fuzzy decision making for real and reactive power scheduling of thermal power generating units - Lakhwinder Singh, J. S. Dhillon ... 116
28. Modified robust pole assignment method for designing power system stabilizers - Ajit Kumar, Gurunath Gurrala, Indraneel Sen ... 120
29. A new digital differential relaying scheme for the protection of busbar - N. G. Chothani, B. R. Bhalja, R. P. Maheshwari ... 124
30. Overcoming directional relaying problem during current transformer saturation using data fusion technique - P. Jena, A. K. Pradhan ... 128
31. Digital voltage-mode controller design for zero-voltage turn-on boost converter - M. Veerachary, Sachin Devassy ... 132

B2 Biomedical Signal Processing
32. Synthesis and characterization of sensor for biological real time applications using conducting polymers, nanocomposites and signal processing technique - Smt. Usha A., Dr. B. Ramachandra, Dr. M. S. Dharmaprakash ... 137
33. HRV dynamics under meditation and random thoughts - Ramesh K. Sunkaria, Vinod Kumar, Suresh C. Saxena, Shirley Telles ... 141
34. Evaluation of spirometric pulmonary function test using principal component analysis and support vector machines - Kavitha A, Sujatha C M, Ramakrishnan S ... 146
35. Ectopic beats in Poincaré plot based HRV assessment - Butta Singh, Dilbag Singh ... 151
36. Background modeling and moving object classification for gait recognition - A. Jagan, C. Sasivarnan, M. K. Bhuyan ... 155
37. Calibration setup and technique for gait motion analysis using multiple camera approach - Binish Thomas, A. Martins Mathew, Neelesh Kumar, Gautam Sharma, Amod Kumar ... 159

B3 Power Quality
38. DSP-based capacitor voltage balancing strategy of a three-phase, three-level rectifier - A. H. Bhat, P. Agarwal, N. Langer ... 163
39. Performance analysis of CSI based unified power quality conditioner for different loading condition - K. Vadirajacharya, Pramod Agarwal, Hari Om Gupta ... 170
40. Simulation and design of controller for unity power factor front-end PWM converter - G. N. Khanduja ... 174
41. Active and reactive power control of voltage source converter based HVDC system operating at fundamental frequency switching - D. Madhan Mohan, Bhim Singh, B. K. Panigrahi ... 178
42. A new configuration of 12-pulse VSCs based STATCOM with constant DC link voltage - Bhim Singh, K. Venkata Srinivas, Ambrish Chandra, Kamal Al-Haddad ... 182
43. Selective current harmonic elimination in a current controlled DC-AC inverter drive system using LMS algorithm - S. Sangeetha, CH. Venkatesh, S. Jeevananthan ... 186
44. Performance evaluation of APF configurations used in HV system - M. M. Waware, Pramod Agarwal ... 190

B4 Artificial Intelligence I
45. Analysis and recognition of anomaly in foot through human gait using neuro-genetic approach - Rajkumar Patra, Rohit Raja, Tilendra Shishir Sinha ... 195
46. Order reduction of time domain models using Hankel matrix approach - C. B. Vishwakarma, R. Prasad ... 199
47. A fuzzy logic based multicast model for regulation of power distribution - A. Q. Ansari, V. K. Nangia ... 215
48. Model order reduction by optimal frequency matching using particle swarm optimization - Prayanta S., Mrinmoy Rit, Jayanta Pal ... 219
49. DVCC-based nonlinear feedback neural circuit for solving quadratic programming problems - Mohd. Samar Ansari, Syed Atiqur Rahman ... 224
50. Estimation of wind energy probability using Weibull distribution function - Narendra K., E. Fernandez ... 227

B5 Power Electronics Converters
51. Performance analysis of a coupled inductor bidirectional DC to DC converter - Narasimharaju B. L., Satya Prakash Dubey, Sajjan Pal Singh ... 231
52. Control of two-input integrated boost converter - Veerachary Mummadi, Dileep Ganta ... 236
53. A voltage-mode analog circuit for solving linear programming problems - Mohd. Samar Ansari, Syed Atiqur Rahman ... 241
54. Digital dead-beat controller for fourth order boost DC-DC converter - Anmol Ratna Saxena, Mummadi Veerachary ... 244
55. Digital dead-beat current controller design for coupled inductor boost converter - M. Veerachary, Firehun T. ... 248
56. Analytical analysis of SVC model - Rashmi Jain, C. P. Gupta, Majid Jamil, A. S. Siddiqui ... 253

B6 Artificial Intelligence II
57. Multi-area security constrained economic dispatch and emission dispatch with valve point effects using particle swarm optimization - K. K. Swarnkar, S. Wadhwani, A. K. Wadhwani ... 259
58. Application of bacteria foraging algorithm to economic load dispatch with combined cycle cogeneration plants - Mariappane, Thiagarajah, M. Sudhakaran, P. Ajay D. Vimal Raj ... 264
59. Bearing faults classification by SVM and SOM using complex Gaussian wavelet - Kalyan M. Bhavaraju, P. K. Kankar, S. C. Sharma, S. P. Harsha ... 268
60. Comparison of numerical and neural network forward kinematics estimations for Stewart platform manipulator - Dereje Shiferaw, R. Mitra ... 272
61. A novel technique for maximum power point tracking of photovoltaic energy conversion system - B. Chitti Babu, R. Vigneshwaran, R. Sudharshan Kaarthik, Nayan Kumar Dalei, Rabi Narayan Das ... 276
62. Noise cancellation in hand-free cellphone using neuro-fuzzy filter - A. Q. Ansari, Neeraj Gupta ... 280

B7 Signal Processing
63. Identification of uncertainties by iterative prefiltering - Dalvinder Kaur, Lillie Dewan ... 284
64. Hadamard based multicarrier polyphase radar signals - C. G. Raghavendra, N. N. S. S. R. K. Prasad ... 288
65. A novel amplitude detector for non-sinusoidal power system signal - Arghya Sarkar, Sawan Sen, S. Sengupta, A. Chakrabarti ... 292
66. A novel method for flaw detection and detectability of flaw using Kirchoff approximation - K. S. Aprameya, R. S. Anand, B. K. Mishra, S. Ahmed ... 296
67. OFC based insensitive voltage mode universal biquadratic filter using minimum components - T. Parveen, M. T. Ahmed ... 301
68. Mixture of Gaussian model for pedestrian tracking - M. K. Bhuyan, C. Sasivarnan, A. Jagan ... 304

B8 Power System Operation and Control under Deregulated Environment
69. Comparison of optimal and integral controllers for AGC in a restructured power system interconnected by parallel AC-DC tie lines - S. K. Sinha, R. N. Patel, R. Prasad ... 308
70. Concept of virtual flows for evaluation of source contributions to loads and line flows - D. Thukaram, Surendra S. ... 313
71. Load frequency stabilization of hydro dominated power system with TCPS and SMES in competitive electricity market - Praghnesh Bhatt, S. P. Ghoshal, Ranjit Roy ... 317
72. Impact of SSSC and UPFC in linear methods with PTDF for ATC enhancement - Naresh Kumar Yadav, Ibraheem ... 321
73. A new algorithm for the assessment of available transfer capability - R. Gnanadass, R. Rajathy, Narayana Prasad Padhy ... 325
74. Combined reactive power and transmission congestion management in deregulated environment - Kanwardeep Singh, N. P. Padhy, J. D. Sharma ... 329

C1 Image Processing
75. Modelling and simulation of face from side view for biometrical authentication using hybrid approach - Rohit Raja, Rajkumar Patra, Tilendra Shishir Sinha ... 333
76. Categorization and classification of directional and non-directional texture groups using multiresolution transforms - S. Arivazhagan, L. Ganesan, S. Selva Nidhyanandhan, R. Newlin Shebiah ... 337
77. Blocking artifact detection in compressed images - Jagroop Singh, Sukwinder Singh, Dilbag Singh, Moin Uddin ... 341
78. An improved license plate extraction technique based on gradient and prolonged Haar wavelet analysis - Chirag N. Paunwala, Suprava Patnaik ... 345
79. Relevance feedback in content based image retrieval - Pushpa B. Patil, Manesh Kokare ... 349
80. Offline handwritten signature identification using dual tree complex wavelet transforms - M. S. Shirdhonkar, Manesh Kokare ... 353
81. Edge preserved image enhancement by adaptively fusing the denoised images by wavelet and curvelet transform - G. G. Bhutada, R. S. Anand, S. C. Saxena ... 356
82. A new multiple range embedding for high capacity steganography - S. Arivazhagan, W. Sylvia Lilly Jebarani, S. Bagavath ... 362

C2 Robotics and Control
83. Auto-tuning of PI controller for an unstable FOPDT process - Utkal Mehta, Somanath Majhi ... 366
84. Order reduction of interval systems using Routh-Hurwitz array and continued fraction expansion method - Devender Kumar Saini, Rajendra Prasad ... 370
85. Model reduction of unstable systems using linear transformation and balanced truncation - Nidhi Singh, Rajendra Prasad ... 374
86. Discrete time sliding mode tracking control for uncertain systems - Sanjoy Mondal, Chitralekha Mahanta ... 378
87. SMES based multi-area AGC for restructured power system - Sandeep Bhongade, H. O. Gupta, Barjeev Tyagi ... 382
88. Velocity algorithm implementation of PID controller on FPGA - Dhirendra Kumar, Barjeev Tyagi ... 386
89. Optimization of PID controller using modified ant colony system algorithm - Shekhar Yadav, Gaurav Singh Uike, J. P. Tiwari ... 390

C3 Image and Video Processing
90. Low complexity image retrieval in DCT compressed domain with MPEG-7 edge descriptor - Amit Phadikar, Santi P. Maity ... 395
91. Enhancement of medical images using a novel Frobenius norm filtering method - Akash Tayal, Harshita Sharma, Twinkle Bansal ... 399
92. Texture analysis of ultrasound images for liver classification - Mandeep Singh, Sukhwinder Singh, Savita Gupta ... 403
93. Performance analysis of transform based coding algorithms for medical image compression - M. A. Ansari, R. S. Anand, Kuldeep Yadav ... 407
94. Scene based non-uniformity correction algorithm by equalizing output of different detectors - Parul Goyal ... 413
95. Rain rendering in videos - Abhishek Kumar Tripathi, Sudipta Mukhopadhyay ... 417
96. Detection and analysis of macula in retinal images using fuzzy and optimization based hybrid method - Sri Madhava Raja N, Kavitha G, Ramakrishnan S ... 421
97. Quality access control and tracking of illicit distribution of compressed gray scale image - Amit Phadikar, Santi P. Maity, Minakshi Chakraborty ... 425

C4 Soft Computing
98. High performance interface circuits between adiabatic and standard CMOS logic at 90 nm technology - B. P. Singh, Neha Arora ... 429
99. RSTDB & cache conscious techniques for frequent pattern mining - Vaibhav Kant Singh ... 433
100. An evolutionary programming algorithm for the RWA problem in survivable optical networks - Urmila Bhanja, Sudipta Mahapatra, Rajarshi Roy ... 437
101. Parallel particle swarm optimization for non-convex economic dispatch problem - C. Christopher Columbu, Sishaj P. Simon ... 441
102. Distributed coordinator model and license key management for software piracy prevention - S. A. M. Rizvi, Vineet Kumar Sharma, Anurag Singh Chauhan ... 447
103. Privacy preservation using Naïve Bayes classification and secure third party computation - Keshavamurthy B. N., Mitesh Sharma, Durga Toshniwal ... 451

Author Index

ORGANIZING COMMITTEE

Patron: Prof. S.C. Saxena, Director, I.I.T. Roorkee
Chairman: Prof. Vinod Kumar, Head, Elect. Engg. Dept.
Convener: Prof. S.P. Srivastava
Organising Secretary: Prof. R.S. Anand
Jt. Organising Secretary: Dr. S.P. Dubey
Treasurer: Er. Bharat Gupta

Preface
The impact of computers, particularly over the last four decades, has changed practices in electrical engineering beyond recognition, even though the process is not yet complete. CERA-09 aims to provide a platform for bringing together teachers, researchers, professionals, managers and policy makers not only to discuss recent advances in off-line and on-line computer application techniques, including the use of the associated fields of optimization, modeling, simulation, expert systems, artificial intelligence, neural networks, etc., but also to discuss the absorption of these areas into academic curricula and industrial practice, and to deliberate on the directions to be taken for the future course of R&D and training in these vital areas. Besides research papers, CERA-09 welcomes application-oriented papers, particularly those relating to industry, as well as papers highlighting applied problems for which appropriate solutions are currently being sought. Some of the themes identified for the conference are as under:

1. AI Techniques and Applications
2. Bio-signal Processing and Instrumentation
3. Computer Networking, Security and Communication
4. Distribution System Automation and Reforms
5. Digital Protection and SCADA
6. Design, Operation and Condition Monitoring of Power Apparatus
7. Intelligent and Virtual Instrumentation
8. Distributed Electronic Measurements
9. Digital Signal and Image Processing
10. Digital Control, Automation and Robotics
11. Flexible AC Transmission Systems
12. Power Electronics
13. Power Quality Issues
14. Power System Restructuring
15. Power Semiconductor Controlled Electric Drives
16. Renewable Energy Systems
17. System Modeling and Simulation
18. Telemedicine and Bio-informatics


SPEAKERS

1. Prof. S. S. Murthy, Department of Electrical Engineering, IIT Delhi - Venue-I, 19-02-2010, 04:30 pm - 05:00 pm
2. Prof. Bhim Singh, Department of Electrical Engineering, IIT Delhi - Venue-I, 20-02-2010, 11:00 am - 11:30 am
3. Prof. Nikhil S. Padhye, Research Associate Professor, University of Texas Health Science Center, Houston - Venue-II, 20-02-2010, 11:00 am - 11:30 am
4. Prof. Peter W. Macfarlane, Professor of Electrocardiology, University of Glasgow, UK - Venue-I, 20-02-2010, 04:30 pm - 05:00 pm
5. Prof. T. Gonsalves, Director, IIT Mandi - Venue-II, 20-02-2010, 04:30 pm - 05:00 pm
6. Prof. M.M. Gupta, Director, Intelligent Systems Research, Univ. of Saskatchewan, Canada - Venue-I, 21-02-2010, 11:00 am - 11:30 am
7. Prof. Pramod Kumar, Head, Department of Electrical Engineering, Delhi College of Engineering - Venue-II, 21-02-2010, 11:00 am - 11:30 am

Venue I: Auditorium, Department of Electrical Engineering
Venue II: Committee Room, Department of Electrical Engineering


ABOUT ROORKEE

Roorkee town (longitude 77° 53' 52" E, latitude 29° 52' 00" N; average elevation 268 metres above mean sea level) is situated 30 km south of the lower Himalayan range, in the district of Haridwar, Uttarakhand State. It is considered the gateway to renowned pilgrimage centres such as Haridwar and Rishikesh and to hill stations such as Mussoorie. Roorkee is well connected by road and rail and falls on the Dehradun-Saharanpur rail link. It is located 180 km northeast of New Delhi, the capital of India. There are frequent and regular bus services, deluxe and ordinary, throughout the day until midnight from the Inter-State Bus Terminus (ISBT), New Delhi. Taxi service is also available from Indira Gandhi International Airport, Ajmeri Gate and ISBT. Road transport services between Delhi and Dehradun and between Delhi and Haridwar run through Roorkee. It is also connected by rail from New Delhi: the fully air-conditioned superfast Shatabdi Express and the superfast Janshatabdi run between New Delhi and Dehradun with a halt at Roorkee. The minimum and maximum temperatures during February vary between 5°C and 22°C. Delegates are advised to bring warm clothing.
ABOUT THE INDIAN INSTITUTE OF TECHNOLOGY ROORKEE

The Indian Institute of Technology (IIT) Roorkee is the successor of the University of Roorkee and is the oldest technical institution of the Indian sub-continent. It was established as the Roorkee College in 1847 and rechristened the Thomson College of Civil Engineering in 1854. Recognizing its yeoman contribution to the development of the country for over 100 years, this temple of learning was elevated to the status of a University, the first technical university in India, in 1949. The University of Roorkee was converted to IIT Roorkee by the Government of India on September 21, 2001, thereby further elevating it to an institute of national importance. Over the years, it has built up and maintained an excellent academic reputation. The outstanding achievements of its alumni, especially in the field of water-related subjects, are a testimony to this. IIT Roorkee has academic departments in the areas of Engineering, Sciences, Architecture and Planning, Management Studies, and Humanities and Social Sciences, besides many centres of higher education and research.
ABOUT THE DEPARTMENT OF ELECTRICAL ENGINEERING

The Electrical Engineering Department was part of the Thomson College of Engineering from the year 1897, one of the earliest such specializations in the world at a time when the discipline itself was in its infancy. The first batch in Electrical Engineering passed out of the college in the year 1900. The department was closed down in the year 1923, following the recommendation of a special committee that the college be converted into a purely Civil Engineering institution. This decision was not reversed until the eve of the college being converted into a University: the Fortescu committee advised the resumption of instruction in Electrical Engineering, and the present Department of Electrical Engineering came into being in 1946. The first graduates of the new department offered courses with options in both Electrical and Telecommunication Engineering. Subsequently, in 1964, the department was bifurcated to form the two departments of Electrical Engineering and Electronics & Communication Engineering. The department also started a postgraduate programme in 1954, one of the earliest in the country, offering a nine-month diploma in Electrical Machine Design; a further three months of dissertation work converted the diploma into an ME degree. In 1963-64, two ME courses on Advanced Electrical Machines and Power System Engineering were added on the recommendation of the Thacker Committee, and the earlier one was dropped. Later, two more ME courses, on System Engineering & Operations Research and on Measurement & Instrumentation, were added. The renamed and reorganised courses now are:

1. Instrumentation & Signal Processing
2. Electric Drives & Power Electronics
3. Power System Engineering
4. Systems & Control

Also, in the year 2007, a new Integrated Dual Degree programme, B.Tech. in Electrical Engineering and M.Tech. in Power Electronics, was started. The department is a recognized centre for Ph.D. and M.Tech. programmes. It has been offering a Ph.D. programme since 1964, and more than 100 faculty members and research scholars have been awarded Ph.D. degrees so far. The Department is very active in industry-institution interaction and has been providing consultancy services to industry over the last 40 years. A number of prestigious sponsored and consultancy projects for different organisations have also been completed by the Department.

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Realization of Non-Volatile SRAM based on Magnetic Tunnel Junction

A.Alfred Kirubaraj
School of Electrical Sciences Karunya University Coimbatore, INDIA alfred.kirubaraj@gmail.com

M.Malathi
School of Electrical Sciences SRM University Chennai, INDIA mmalathi2000@yahoo.com

Abstract - Recently there has been considerable interest in MTJ-based MRAM because of its promising characteristics: high non-volatility combined with high density, radiation hardness, nondestructive readout (NDRO), higher write/erase endurance than FRAMs, and virtually unlimited power-off storage capability. This paper presents the design of a Static Random Access Memory (SRAM) cell followed by an MTJ that makes the SRAM cell non-volatile, in which the output of the SRAM cell, having logic state 0 or 1, changes the direction of the moment of the free magnetic layer that is used for information storage in the MTJ device.

Keywords - MTJ, SRAM, TMR, MRAM, SOC.

I. INTRODUCTION

Magnetoresistive random access memory (MRAM) is based on magnetic memory elements integrated with CMOS. Key attributes of MRAM technology are non-volatility and unlimited read and write endurance. Recent advances in magnetic tunnel junction (MTJ) materials give MRAM the potential for high speed, low operating voltage, and high density. Zhao et al. [1] reported a non-volatile flip-flop based on MRAM (Magnetic RAM) technology on standard CMOS and concluded that the flip-flop can also be used to replace all the registers in an SOC (System-on-Chip), thereby making such chips non-volatile and secure. The authors in [2] investigated a non-volatile RAM based on magnetic tunnel junction elements that provides high speed, low operating voltage, and high density. A macro model of the Magnetic Tunnel Junction (MTJ), the most commonly used magnetic component in CMOS circuits, was developed in [3]; it allows hybrid STT-based MTJ and CMOS architectures, including circuits such as look-up tables and flip-flops, to be simulated. The researchers in [4] described several fundamental technical and scientific aspects of MRAM, with emphasis on recent accomplishments that enabled the successful demonstration of a 256-kb memory chip, and demonstrated that conventional lithography and patterning of submicron structures can produce single-energy-barrier magnetic reversal behavior in MRAM elements. It can be seen that much research has not been done on the design of a non-volatile SRAM based on an MTJ (model). This research work provides an enhanced analysis of a non-volatile SRAM cell obtained by varying the MTJ supply voltage and the resistance (R1). The rest of the paper is organized as follows: Section II focuses on the Magnetic Tunnel Junction (MTJ) working principles and the corresponding magnetic operation; Section III presents the MOS SRAM architecture and cell; Section IV concentrates on the design of the non-volatile SRAM (model); Section V summarizes and Section VI concludes the research work.

II. PRINCIPLES OF MTJ

A magnetic tunnel junction consists of two ferromagnetic layers (M) separated by a non-magnetic insulating layer, usually an oxide. The layer with the lower value of coercivity is called the free layer and the one with the higher value is called the fixed or pinned layer; V is the voltage applied to the MTJ device. When a current is passed through these layers, perpendicular to their plane, a resistance is measured which depends upon the magnetic orientations of the two ferromagnetic electrodes relative to each other [5]. If the magnetizations of the two electrodes are oriented parallel to each other, the measured resistance is normally lower than if they are oriented with their magnetizations antiparallel. The basic principle governing the data storage mechanism is the use of the electrical polarization (or hysteresis) property [8]. The Tunnel Magnetoresistance (TMR) effect occurs when a current flows between two ferromagnetic layers separated by a thin (about 1 nm) insulator [3].
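The two-state behaviour described above can be illustrated with a minimal sketch. This is not a model from the paper; it is a hypothetical Python illustration, with resistance values borrowed from the measurements reported later in Section IV:

```python
# Minimal two-state model of the MTJ storage element described above: the
# device behaves as a resistor whose value depends on the relative orientation
# of the free and pinned layers. The resistance values are illustrative,
# taken from the measurements reported later in Section IV (341 Ohm / 1.23 kOhm).

class MTJ:
    def __init__(self, r_parallel=341.0, r_antiparallel=1230.0):
        self.r_p = r_parallel         # low-resistance (parallel) state
        self.r_ap = r_antiparallel    # high-resistance (antiparallel) state
        self.parallel = True          # current free-layer orientation

    def write(self, bit):
        """Store a bit by switching the free-layer orientation.
        Convention here: logic 1 -> parallel (low R), logic 0 -> antiparallel."""
        self.parallel = (bit == 1)

    @property
    def resistance(self):
        return self.r_p if self.parallel else self.r_ap

mtj = MTJ()
mtj.write(0)
print(mtj.resistance)   # 1230.0 Ohm: the stored bit is read back as a resistance
```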

III. SRAM TECHNOLOGY

A. MOS SRAM Cell
A basic six-transistor CMOS SRAM cell is shown in Fig. 1. The bit information is stored in the form of voltage levels in the cross-coupled inverters [6]. This circuit has two stable states, designated as logic 1 and logic 0. In Fig. 1, INV.BITIN denotes the inverse of BITIN. If, in logic state 1, point C5 is high and point C6 is low, then T1 is off and T2 is on; also, T3 is on and T4 is off. In the same way, the logic 0 state is the opposite, with point C5 low and C6 high. During read/write operations, the row address of the desired cell is routed to the row address decoder, which translates it and drives the word line of the addressed row high. This switches on transistors T5 and T6 in all cells of that row. The column address decoder translates the column address and connects to the BITIN and inverse BITIN lines of all cells in the addressed column [10]. During a READ operation, both BITIN and inverse BITIN are made high and the desired word line is selected.

Fig. 1. Basic six-transistor CMOS SRAM cell

At this time, the data in the cell pull one of the bit lines low. The differential signal developed on BITIN and inverse BITIN is amplified and read out through the OUTPUT line. If the cell is in state 1, then T1 is off and T2 is on; when the word line of the addressed column becomes high, a current starts to flow from inverse BITIN through T6 and T2 to ground. As a result, the level of BITIN becomes lower than that of inverse BITIN. This differential signal is detected by a differential amplifier connected to BITIN and inverse BITIN, amplified, and fed to the output buffer. The process for reading a 0 stored in the cell is the opposite, so that the current flows through T5 and T1 to ground and the bit line BITIN has a lower potential than inverse BITIN. The READ operation is nondestructive, and after reading, the logic state of the cell remains unchanged. During a WRITE operation, data are placed on BITIN and inverse BITIN and the word line is then activated. This forces the cell into the state represented by the bit lines, so that the new data are stored in the cross-coupled inverters. If a 1 is to be written into a cell, BITIN is driven high and inverse BITIN low; for logic state 0, BITIN is driven low and inverse BITIN high. The word line is then raised, causing the cell to flip into the configuration of the desired state.

B. Analysis of the basic non-volatile SRAM cell
In the design of the non-volatile SRAM cell based on an MTJ, the working principle is that a material's magnetoresistance changes in the presence of a magnetic field. In the simulation of the non-volatile SRAM, we combine the SRAM cell and the MTJ circuit in series. As shown in Fig. 2, we take the output of the SRAM cell (the OUTPUT port) and apply it as input data to the MTJ circuit; this changes the resistance of the MTJ according to the input, which makes the SRAM cell non-volatile. The output of the MTJ circuit is observed at the OUT port, as shown in Fig. 2.

Fig. 2. MTJ in SRAM
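The READ/WRITE logic of the 6T cell summarized in Section III-A can also be captured in a small behavioural sketch. This is not the paper's circuit model; it is a hypothetical logic-level illustration of how the stored state, word line and bit lines interact (the sign convention for the bit lines is illustrative):

```python
# Hypothetical logic-level model of the 6T SRAM cell described above.
# Only the behaviour is captured: the cross-coupled inverters hold a bit,
# the word line gates access, a READ develops a differential on the bit
# lines, and a WRITE overwrites the stored state.

class SramCell:
    def __init__(self):
        self.state = 0              # value held by the cross-coupled inverters

    def write(self, word_line, bitin, inv_bitin):
        """WRITE: drive BITIN / inverse BITIN, then raise the word line."""
        if word_line and bitin != inv_bitin:
            self.state = bitin      # cell flips to the state on the bit lines

    def read(self, word_line):
        """READ: precharge both bit lines high, then select the word line.
        Returns (BITIN, inverse BITIN) after the cell pulls one line low."""
        if not word_line:
            return (1, 1)           # unselected: both lines stay precharged
        return (1, 0) if self.state == 1 else (0, 1)

cell = SramCell()
cell.write(word_line=1, bitin=1, inv_bitin=0)
print(cell.read(word_line=1))       # differential sensed as logic 1
```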

Fig. 3. SRAM input bits (1100)

Fig. 4. SRAM output bits (0011)

Fig. 3 shows the input to the SRAM cell, where logic 0 and logic 1 are 0 V and 3 V respectively; the output of the SRAM cell gives 0.4 V and 2 V for logic 0 and logic 1, as shown in Fig. 4. The rise and fall of the voltage at the output of the SRAM cell is a result of the MOSFETs being used. Fig. 5 shows the simulated results of the non-volatile SRAM cell. The required voltage is supposed to be 0.2 V, as the MTJ is supplied with an external source of 0.2 V, but the simulated results give a voltage of 125 mV, due to the thickness of the oxide layer of the MOSFET (M5) and the length (L) and width (W) of the MOSFET (M5).

Fig. 5. Output Voltage of Non-Volatile SRAM cell

IV. ENHANCED NON-VOLATILE SRAM

The enhanced analysis of the non-volatile SRAM cell deals with changes in the supply voltage of the SRAM cell and the MTJ circuit, the length (L) and width (W) of MOSFET (M5), the threshold voltage (VT) of MOSFET (M5), and the resistance (R1) in the MTJ circuit.

A. Variation of voltages of SRAM and MTJ
In this section, the supply voltages of the SRAM cell and the MTJ circuit are changed to 3.8 V and 0.22 V instead of the 3 V and 0.2 V of Fig. 2. The length (L) and width (W) of MOSFET (M5) are made 3.2 µm and 60 µm, the resistance (R1) is made 0.8 kΩ, the thickness of the oxide layer is tOX = 300×10^-8 cm, and the threshold voltage of MOSFET (M5) is taken as 0 V. With these assumptions, the output of the SRAM cell (OUTPUT port) for logic 0 and logic 1 is 0.6 V and 2 V, as shown in Fig. 4. These voltages are given as input to the MTJ circuit, which exhibits a variable resistance (MTJ property), either low or high: for logic 0 to be stored in the MTJ the resistance value is R = 1.23 kΩ, and for logic 1 to be stored the resistance value is R = 341 Ω. The Tunnel Magnetoresistance (TMR) effect occurs when a current flows between two ferromagnetic layers separated by a thin (about 1 nm) insulator. The total resistance of the device, in which tunneling is responsible for the current flow, changes with the relative orientation of the two magnetic layers; the resistance is normally higher in the antiparallel case. The effect is similar to giant magnetoresistance, except that the metallic layer is replaced by an insulating tunnel barrier. The magnetoresistance of a tunnel junction is calculated by the following equation [3, 7]:

TMR = (RAP - RP) / RP    (1)

From equation (1), RAP is the resistance when the magnetizations of the two ferromagnetic layers are anti-parallel to each other and RP is the resistance when they are parallel to each other. The magnetoresistance changes with the applied magnetic field. The ratio between the two resistance values is named the Tunneling Magnetoresistance Ratio (TMR) and is defined by equation (1). The TMR is currently up to 70% in an Al-O barrier MTJ and 230% in an Mg-O barrier MTJ. If MTJs with much higher TMR and a sufficiently low resistance-area (RA) product can be engineered, the possibility opens for much lower critical current densities [9], which is encouraging for the future development of spin-transfer-switched MRAM. The magnetoresistance (TMR) ratio calculated using equation (1) is TMR = 260%. The values of logic 0 and logic 1 stored in the MTJ appear as a difference in resistance (low or high). The voltage signals at the output of the MTJ circuit representing logic 0 and logic 1 are 148 mV and 80 mV, which indicates an improvement in the readout voltages of the MTJ circuit, as shown in Fig. 6.

Fig. 6. Output Voltage of Non-Volatile SRAM

B. Variation of resistance (R1)
Here the resistance (R1) value is changed to 0.72 kΩ and the threshold voltage of MOSFET (M5) to 0.2 V, while the remaining parameters are kept constant as in Section A.

Fig. 7. Output Voltage of Non-Volatile SRAM

As shown in Fig. 7, due to the change in resistance (R1) the output voltages are now 160 mV and 86 mV, which are higher readout voltages than the 148 mV obtained in the analysis of Section A.

The resistance values obtained from equation (4) are R = 1.38 kΩ for logic 1 to be stored and R = 374 Ω for logic 0 to be stored in the MTJ device, and the TMR ratio is 268%.

C. Improved NVSRAM output readout voltage
In this section, the output readout voltage of the NVSRAM is improved by changing the resistance (R1) to 0.65 kΩ and the threshold voltage of MOSFET (M5) to 0.5 V, with the remaining parameters kept constant as in Section A.
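As a quick numerical cross-check of the TMR values quoted in Sections A-C, the following small Python sketch (mine, not the authors') evaluates equation (1) for each reported resistance pair; the results agree with the quoted ratios to within the rounding of the reported resistances:

```python
# Cross-check of the TMR ratios quoted in the text using equation (1):
# TMR = (R_AP - R_P) / R_P, with the antiparallel (high) and parallel (low)
# resistances reported for each case of the enhanced analysis.

cases = {
    "A (R1 = 0.80 kOhm)": (1230.0, 341.0),   # text quotes TMR ~ 260 %
    "B (R1 = 0.72 kOhm)": (1380.0, 374.0),   # text quotes TMR ~ 268 %
    "C (R1 = 0.65 kOhm)": (1410.0, 389.0),   # text quotes TMR ~ 262 %
}

for label, (r_ap, r_p) in cases.items():
    tmr = (r_ap - r_p) / r_p
    print(f"Case {label}: TMR = {tmr * 100:.0f} %")
```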

Fig. 8. Output Voltage of Non-Volatile SRAM

As shown in Fig. 8, due to the change in resistance (R1) the output voltages are now 170 mV and 95 mV, which are higher readout voltages than the 160 mV obtained in the analysis of Section B. The resistance values obtained from equation (4) are R = 1.41 kΩ for logic 1 to be stored and R = 389 Ω for logic 0 to be stored in the MTJ device, and the TMR ratio is 262%.

V. SUMMARY

In this paper, we have considered methods of enhancing the performance of a non-volatile SRAM cell by changing the supply voltage to both the SRAM and MTJ circuits and by varying the resistance (R1) in the MTJ circuit. It is seen that by increasing the supply voltages of the SRAM and MTJ shown in Fig. 2 to 3.8 V and 0.22 V, with (W/L) as 60 µm / 3.2 µm and the resistance (R1) as 0.8 kΩ, the output voltage of the MTJ increased to 148 mV, an increase in the output voltage of the non-volatile SRAM compared with the 125 mV of Fig. 5. In the second case, only the resistance value (R1) in the MTJ circuit of Fig. 2 was changed, to 0.72 kΩ, a reduction of almost 0.08 kΩ, with all other parameters including the (W/L) ratio remaining constant; the output voltage then increased to 160 mV compared with the previous analysis. In the third case, in order to improve the output readout voltage further, the resistance (R1) in the MTJ circuit was changed to 0.65 kΩ and the threshold voltage of MOSFET (M5) to 0.5 V. The output voltage of the MTJ increased to 170 mV, an increase in the output voltage of the non-volatile SRAM compared with the 160 mV of Fig. 9. The reduction in the resistance value also contributes to lower power consumption in the MTJ circuit, which eventually improves its efficiency and that of the system as a whole.

VI. CONCLUSIONS

From this paper it can be deduced that by reducing the resistance value (R1) by 10% in the MTJ circuit, the output voltage is increased by almost 3.75%. This indicates that the resistance value (R1) in the MTJ circuit has a significant effect on the output of the non-volatile SRAM. Further reduction in the resistance value (R1) could improve the output voltage; however, other factors such as the threshold voltage must be taken into consideration. It must also be pointed out that not only changing the R1 value but also changing the (W/L) ratio could improve the performance.

REFERENCES

[1] W. Zhao, E. Belhaire, V. Javerliac, C. Chappert, B. Dieny, "A non-volatile Flip-Flop in Magnetic FPGA chip," IEEE Transactions, vol. 4, 2006.
[2] M. Durlam, P. Naji, M. DeHerrera, S. Tehrani and K. Kyler, "Non-Volatile RAM based on magnetic tunnel junction elements," in Proc. IEEE Int. Solid State Circuits Conference Dig. Tech. Papers, Feb. 2000, pp. 130-131.
[3] W. Zhao, E. Belhaire, Q. Mistral, V. Javerliac, B. Dieny, E. Nicolle, "Macro-model of Spin-Transfer Torque based Magnetic Tunnel Junction device for hybrid Magnetic-CMOS design," IEEE Transactions, 2006.
[4] Brad N. Engel, Nicholas D. Rizzo, Jason Janesky, Jon M. Slaughter, Renu Dave, Mark DeHerrera, "The Science and Technology of Magnetoresistive Tunneling Memory," IEEE Transactions on Nanotechnology, vol. 1, no. 1, Mar. 2002.
[5] Brad N. Engel, Nicholas D. Rizzo, John Salter, et al., "Magnetoresistive Random Access Memory Using Magnetic Tunnel Junctions," IEEE Transactions, vol. 91, no. 5, May 2003.
[6] A. K. Sharma, Semiconductor Memories: Technology, Testing, and Reliability, Prentice-Hall, New Delhi, India, 2000.
[7] Bryan John Baker, "A model for the behavior of magnetic tunnel junctions," thesis, Iowa State University, 2003.
[8] Shrinivas Pandharpure, "Process Development for Integration of CoFeB/MgO-based Magnetic Tunnel Junction (MTJ) Device on Silicon," thesis.
[9] Hao Meng and Jian-Ping Wang, "Spin Transfer Effect in Magnetic Tunnel Junction with a Nano-Current-Channel Layer in Free Layer," IEEE Transactions on Magnetics, vol. 41, no. 10, Oct. 2005.
[10] Stephan De Beer, Monuko Du Plessis, and Evert Seevinck, "An SRAM Array Based on a Four-Transistor CMOS SRAM Cell," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, no. 9, Sept. 2003.

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Modelling and Performance Analysis of Proportional Integral Derivative Controller with PSS Using Simulink
K. Gowrishankar¹, A. Ragavendiran², R. Gnanadass²

¹ Department of ECE, Rajiv Gandhi College of Engineering & Technology, Pondicherry-607402, INDIA.
² Department of EEE, Pondicherry Engineering College, Pondicherry-605014, INDIA.
gowri200@yahoo.com


Abstract - In this paper, the operation of a Proportional Integral Derivative (PID) controller with a PSS is analysed in the Simulink environment. The development of a PID controller with a power system stabilizer in order to maintain stability and enhance the performance of a power system is described in detail. The application of the PID with PSS controller is investigated by means of simulation studies on a single machine infinite bus system. The functional blocks of the PID with PSS are developed in Simulink and simulation studies are carried out. A study case for the validation of the proposed Simulink mechanism is presented and analysed with a control application for a synchronous generator excitation system. The superior performance of this stabilizer in comparison with PSS alone, PID alone and the system without PSS proves the efficiency of the new PID with PSS controller. Comparison studies are carried out for quantities such as speed deviation, field voltage, rotor angle and load angle in the Simulink based MATLAB environment.

Keywords - Power System Stabilizer, PID Controller, Excitation System.

I. INTRODUCTION

In power systems, low frequency oscillations are generator rotor angle oscillations having a frequency between 0.1 and 3.0 Hz. These oscillations can be created by small disturbances in the system, such as changes in the load, and are normally analysed through the small signal stability of the power system. In the early sixties, most generators were becoming interconnected and the automatic voltage regulators (AVRs) were more efficient. With bulk power transfer on long and weak transmission lines and the application of high-gain, fast-acting AVRs, small oscillations of even lower frequencies were observed; these were described as inter-tie oscillations. The oscillations within the plant at slightly higher frequencies were termed intra-plant oscillations. The combined oscillatory behaviour of the system encompassing the three modes of oscillation is popularly called the dynamic stability of the system; in more precise terms it is known as the small signal oscillatory stability of the system. The oscillations, which are typically in the frequency range of 0.2 to 3.0 Hz, might be excited by disturbances in the system or, in some cases, might even build up spontaneously [1]. These oscillations limit the power transmission capability of a network and, sometimes, may even cause loss of synchronism and an eventual breakdown of the entire system. The problem, when first encountered, was solved by fitting the generators with a feedback controller which sensed the rotor slip or the change in terminal power of the generator and fed it back to the AVR reference input with proper phase lead and magnitude so as to generate an additional damping torque on the rotor [2]. This device came to be known as a Power System Stabilizer (PSS). A PSS provides a supplementary input signal, in phase with the synchronous rotor speed deviation, to the excitation system, resulting in generator stability. The principles of operation of this controller are based on the concepts of damping and synchronizing torques within the generator; a comprehensive analysis of these torques was given by Klofenstein in a landmark paper in 1971 [3]. These controllers have been known to work quite well in the field and are extremely simple to implement. However, the parameters of the PSS, such as the gain, washout and lead-lag time constants, play a vital role in damping out low frequency oscillations. Many researchers have concentrated on tuning these parameters through mathematical approaches for different operating conditions; the tuning of PSS was clearly defined by Larsen and Swann in 1981 [4]. In the last two decades, various types of PSS have been designed. For example, adaptive-controller based PSS have been used in many applications [5]. Most of these controllers are based on system identification and parameter estimation and are therefore time consuming from a computational point of view. It is evident from the various publications that interest in the application of fuzzy logic based PSS (FLPSS) has also grown in recent years [6]. Low computational burden, simplicity and robustness make the FLPSS suitable for stabilization purposes. Different methods for designing such devices have been proposed using genetic algorithms (GA) and artificial neural networks [7]. Corcau and Stoenescu developed a Simulink model for a fuzzy based PSS in which the application of the fuzzy logic controller as a power system stabilizer is investigated by means of simulation studies on a single machine infinite bus system [8]. This paper presents a simple MATLAB Simulink model of a synchronous machine to reduce low frequency oscillations during several operating conditions such as heavy load and vulnerable (fault) conditions. Here, a PID controller combined with a PSS can guarantee a robust minimum performance over a wide range of operating conditions. The efficacy of the proposed PID with PSS in damping out low frequency oscillations has been established by extensive simulation studies on a single machine infinite bus system.

II. POWER SYSTEM STABILIZER

A solution to the problem of oscillatory instability is to provide damping for the generator oscillations through Power System Stabilizers (PSS), which are supplementary controllers in the excitation systems.

Structure and Tuning of PSS:


The Generic Power System Stabilizer (PSS) block can be used to add damping to the rotor oscillations of the synchronous machine by controlling its excitation. The disturbances occurring in a power system induce electromechanical oscillations of the electrical generators. These oscillations, also called power swings, must be effectively damped to maintain system stability. The output signal of the PSS is used as an additional input (Vstab) to the Excitation System block. The PSS input signal can be either the machine speed deviation, Δω, or its acceleration power, Pa = Pm - Peo (the difference between the mechanical power and the electrical power). The Generic Power System Stabilizer is modeled by the nonlinear system shown in Fig. 1.
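As a minimal illustrative sketch only (the time constants and gain below are assumed, not taken from the paper, and the low-pass filter and output limiter are omitted), the linear part of this structure, gain, washout high-pass filter and two lead-lag stages, can be assembled as a transfer function:

```python
# Illustrative sketch of the linear part of a generic PSS:
# K * (s*Tw / (1 + s*Tw)) * ((1 + s*T1n)/(1 + s*T1d)) * ((1 + s*T2n)/(1 + s*T2d))
# All numerical values below are assumed for illustration only.
import numpy as np
from scipy import signal

K = 20.0                    # overall gain (text quotes a typical range of 20-200)
Tw = 10.0                   # washout time constant
T1n, T1d = 0.05, 0.02       # first lead-lag stage
T2n, T2d = 3.00, 5.40       # second lead-lag stage

def series(tf1, tf2):
    """Cascade two (num, den) transfer functions by polynomial multiplication."""
    return np.polymul(tf1[0], tf2[0]), np.polymul(tf1[1], tf2[1])

gain    = ([K], [1.0])
washout = ([Tw, 0.0], [Tw, 1.0])        # s*Tw / (1 + s*Tw)
lead1   = ([T1n, 1.0], [T1d, 1.0])      # (1 + s*T1n) / (1 + s*T1d)
lead2   = ([T2n, 1.0], [T2d, 1.0])

num, den = gain
for blk in (washout, lead1, lead2):
    num, den = series((num, den), blk)

pss = signal.TransferFunction(num, den)
print(pss)
```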

Fig. 1. Conventional Power System Stabilizer

The model consists of a low-pass filter, a general gain, a washout high-pass filter, a phase-compensation system, and an output limiter.

A. Gain
K is the overall gain of the generic power system stabilizer; it determines the amount of damping produced by the stabilizer. The gain K can be chosen in the range of 20 to 200.

B. Washout time constant
The washout high-pass filter eliminates the low frequencies present in the speed deviation signal and allows the PSS to respond only to speed changes. The time constant Tw can be chosen in the range of 1 to 2 s for local modes of oscillation. However, if interarea modes are also to be damped, Tw must be chosen in the range of 10 to 20 s.

C. Lead-lag time constants (phase compensation system)
These are the numerator time constants T1n, T2n and denominator time constants T1d, T2d, in seconds, of the first and second lead-lag transfer functions. The phase-compensation system is represented by a cascade of two first-order lead-lag transfer functions used to compensate the phase lag between the excitation voltage and the electrical torque of the synchronous machine.

D. Limiter
The output of the PSS must be limited to prevent the PSS from acting to counter the action of the AVR. PSS action in the negative direction must be limited more than in the positive direction. A typical value for the lower limit is -0.05, and the higher limit can vary between 0.1 and 0.2.

III. PROPORTIONAL INTEGRAL DERIVATIVE CONTROLLER

A PID controller is a simple three-term controller; the letters P, I and D stand for Proportional, Integral and Derivative. The transfer function of the most basic form of PID controller is

C(s) = KP + KI/s + KD*s

where KP is the proportional gain, KI the integral gain and KD the derivative gain. PID controllers are used in more than 95% of closed-loop systems and can be tuned by operators without an extensive background in control. The effects of increasing each of the controller parameters KP, KI and KD can be summarized as follows: use KP to decrease the rise time; use KD to reduce the overshoot and settling time; and use KI to eliminate the steady-state error. In this Simulink model, KP, KI and KD are selected as 50, 5 and 2, respectively, to reduce the overshoot and settling time in order to achieve an optimal solution.

IV. PROPORTIONAL INTEGRAL DERIVATIVE CONTROLLER WITH POWER SYSTEM STABILIZER

The advantages of the Proportional Integral Derivative controller and the power system stabilizer are combined to damp out low frequency oscillations during severe load and vulnerable conditions. The role of the PID with PSS is to reduce the overshoot and settling time. The transient stability limit of the synchronous machine can be improved by the PID with PSS, since the generator with PSS alone or PID alone will have more overshoot and a longer settling time when the load is increased to 600 MVA, while the one with PID with PSS will still remain stable. The damping characteristic of the PID with PSS is insensitive to load changes, whereas those of the PSS and PID controllers deteriorate as the load changes. Therefore the proposed PID with PSS is relatively simpler than the other controllers for practical implementation and also produces a better optimal solution.

Fig. 2. PID with PSS

V. MODELLING OF PID WITH PSS USING SIMULINK

To analyse the performance of the PSS, a simulation model was developed in the Simulink block set of MATLAB. The functional block set of the PSS in the Simulink environment is shown in Fig. 3.

Fig. 3. Block Diagram of PID with PSS

The effectiveness of the PID with PSS is investigated for various operating conditions using the Simulink model. Independent of the technique utilized in tuning stabilizer equipment, it is necessary to recognize the nonlinear nature of power systems and that the objective of adding power system stabilizers along with a PID controller is to extend power transfer limits by stabilizing system oscillations; adding damping is not an end in itself, but a means to extending power transfer limits.
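As an aside, the PID law of Section III, C(s) = KP + KI/s + KD*s with the gains quoted there, can be realized in discrete time. The following minimal sketch is my own illustration (the sampling period and the backward-difference derivative are assumptions, not the paper's Simulink settings):

```python
# Minimal discrete-time realization of C(s) = KP + KI/s + KD*s using the
# gains quoted in Section III. The sampling period and the backward-difference
# derivative are illustrative assumptions, not the paper's Simulink settings.

KP, KI, KD = 50.0, 5.0, 2.0
DT = 0.01                      # assumed sampling period [s]

class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return the control signal for the current error sample."""
        self.integral += error * self.dt                    # rectangular integration
        derivative = (error - self.prev_error) / self.dt    # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = DiscretePID(KP, KI, KD, DT)
print(pid.update(0.03))   # e.g. a 0.03 p.u. speed-deviation error sample
```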

VI. SIMULATION RESULTS AND DISCUSSIONS

The performance of the PSS, PID, PID with PSS and the system without PSS was studied in the Simulink environment for different operating conditions, and the following test cases were considered for the simulations.

Case i: For normal load without a vulnerable condition, the variations of speed deviation, field voltage, rotor angle and load angle were analysed for PSS, PID, PID with PSS and without PSS.

Case ii: The system was subjected to a vulnerable (fault) condition and the variations of the above-mentioned quantities were analysed.

Case iii: The variations of the above-mentioned quantities were analysed for the PSSs and without PSS when subjected to a different loading condition.

The cases below illustrate clearly how the controller reduces the overshoot and settling time to the nominal level for the system with PSS, without PSS, with PID and with PID plus PSS; the inferences from the simulation results are as follows.
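Before the case-by-case discussion, a side note: the comparisons below are stated in terms of percentage overshoot and settling time read from the Simulink traces. A minimal sketch of how such metrics can be computed from a sampled response (my own, with an assumed 2% settling band) is:

```python
# Helper sketch for the two metrics used in the comparisons below:
# percentage overshoot and settling time of a sampled step-type response.
# The 2 % settling band is an assumption; the paper reads values off plots.
import numpy as np

def overshoot_percent(y, y_final):
    """Peak excursion beyond the final value, as a percentage of it."""
    return max(0.0, (np.max(y) - y_final) / abs(y_final) * 100.0)

def settling_time(t, y, y_final, band=0.02):
    """First time after which the response stays within +/- band of y_final."""
    tol = band * abs(y_final)
    outside = np.abs(y - y_final) > tol
    if not outside.any():
        return t[0]
    last_violation = np.where(outside)[0][-1]
    return t[min(last_violation + 1, len(t) - 1)]

t = np.linspace(0.0, 10.0, 1001)
y = 1.0 + 0.3 * np.exp(-t) * np.cos(6.0 * t)     # toy oscillatory response
print(overshoot_percent(y, 1.0), settling_time(t, y, 1.0))
```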

Case I: Normal Load without Fault

Here, the synchronous machine is subjected to a normal load of 200 MVA without a fault condition in the transmission line, and the following observations on the stability of the system are made from the dynamic responses in Figs. 4 to 7. From Fig. 4 it is observed that the PID with PSS provides a better damping characteristic than the other cases: it reduces the overshoot to 10 MV and the system reaches steady state quickly compared with PID, PSS and without PSS. By this effect the field voltage remains stable, which in turn ensures system stability. In the speed deviation response of Fig. 5, the overshoot is reduced from 0.03 to 0.01 using PID with PSS, so the system reaches the stable state quickly. It is necessary to maintain the speed of the synchronous generator, and care should be taken to make the system reach steady state as early as possible; for that, PID with PSS gives a better optimal solution compared with the others. From Fig. 6, the load angle reaches the stable state at 30 degrees; it is also inferred that, after the inclusion of PID with PSS, the damping oscillation is reduced and the load angle is boosted from 10 to 30 degrees. According to Fig. 7, PID with PSS improves the rotor angle to the maximum extent, reaching settling before 2 s. Although the rotor angle is on the negative side, the PID with PSS improves the performance compared with the other PSSs.

Fig. 4. Field Voltage
Fig. 5. Speed Deviation
Fig. 6. Load Angle
Fig. 7. Rotor Angle

Case II: Fault Condition

This case illustrates the stability of the system during a vulnerable condition: a three-phase fault is assumed to occur on the transmission line. The fault persists in the system for 0.01 s and is cleared after 0.1 s. The parameters of the system during the fault condition are illustrated in Fig. 8 to Fig. 11. From Fig. 8, it is observed that the PSS produces more overshoot and settles at 7 s. The PID controller gives a better solution compared with the PSS, and the combination of PID with PSS further reduces the settling time to 4 s as well as the overshoot. According to Fig. 9, the overshoot is high for both PSS and PID, so the stability of the system is affected; the PID with PSS reduces the overshoot to 50% and makes the system reach steady state before 3 s. From this case it is inferred that PID with PSS supports the synchronous generator in maintaining synchronous speed even under severe fault conditions. During the fault condition, the PSSs and the system without PSS maintain the load angle around zero degrees. Normally, for a smart system the load angle should be maintained around 15 to 45 degrees; the PID with PSS provides the better solution by maintaining the load angle around 40 degrees. From Fig. 11, it is inferred that the acceleration of the rotor increases with the fault condition; however, with the help of the PID with PSS, the rotor angle is maintained at a normal level compared with PSS and PID.

Fig. 8. Field Voltage
Fig. 9. Speed Deviation
Fig. 10. Load Angle
Fig. 11. Rotor Angle

Case III: Heavy Load

In this case, the synchronous generator is subjected to a three-phase RLC load of 600 MVA on the transmission line. The performance characteristics of the system with PSS, PID, PID with PSS and without PSS are illustrated in Figs. 12 to 15. From Fig. 12, the PID with PSS provides a better solution by reducing the overshoot to 75% and the settling time to 3 s even in the heavy load condition. By this effect the field voltage remains stable, which in turn maintains system stability. According to Fig. 13, the overshoot is heavy for both PSS and PID, which in turn affects the stability of the system; the PID with PSS reduces the overshoot to 50% and makes the system reach steady state before 3 s. Therefore it is inferred that PID with PSS supports the synchronous generator in maintaining synchronous speed even under severe conditions, although with some negative damping. During the load condition, the PID with PSS makes the system settle at 1.5 s and also boosts the system to maintain the load angle at around 40 degrees; in this case too the proposed system maintains stability. From Fig. 15, it is inferred that the PID with PSS maintains stability with a small amount of damping compared with PSS, PID and without PSS; the PID with PSS reduces the overshoot to 25% and the system settles at 1.5 s.

Fig. 12. Field Voltage
Fig. 13. Speed Deviation
Fig. 14. Load Angle
Fig. 15. Rotor Angle

VII. CONCLUSION

The results from this study indicate that under large disturbance conditions, better dynamic responses can be achieved by using the proposed PID with PSS controller than with the other stabilizers. We also observed in all case studies, from the MATLAB/Simulink simulation, that the PID with PSS controller has an excellent response with small oscillations, while the responses of the other controllers show ripple in all case studies and some oscillations before reaching the steady-state operating point. It was shown that an excellent performance of the PID with PSS control, in contrast to the other controllers, could be achieved for the excitation control of synchronous machines. Modelling of the proposed controller in the Simulink environment gave accurate results when compared with the mathematical design approach. The simple structure of a PID controller and its widespread use in industry make the proposed stabilizer very attractive for stability enhancement.

ACKNOWLEDGEMENT
This project is financially supported through the Research Promotion Scheme of AICTE, New Delhi.

REFERENCES
[1] P. Kundur, Power System Stability and Control, McGraw-Hill, U.S.A., 1994.
[2] F. P. Demello and C. Concordia, "Concepts of Synchronous Machine Stability as Affected by Excitation Control," IEEE Transactions on Power Apparatus and Systems, vol. PAS-88, no. 4, pp. 316-329, April 1969.
[3] A. Klofenstein, "Experience with System Stabilizing Excitation Controls on the Generation of the Southern California Edison Company," IEEE Transactions on PAS, vol. 90, no. 2, pp. 698-706, March/April 1971.
[4] E. Larsen and D. Swann, "Applying Power System Stabilizers," IEEE Transactions on PAS, vol. 100, no. 6, pp. 3017-3046, 1981.
[5] A. Ghosh, G. Ledwich, O. P. Malik and G. S. Hope, "Power System Stabilizer Based on Adaptive Control Techniques," IEEE Transactions on PAS, vol. 103, pp. 1983-1989, March/April 1984.
[6] M. A. M. Hasan, O. P. Malik and G. S. Hope, "A Fuzzy Logic Based Stabilizer for a Synchronous Machine," IEEE Transactions on Energy Conversion, vol. 6, no. 3, pp. 407-413, September 1991.
[7] Jinyu Wen, Cheng, O. P. Malik, "A Synchronous Generator Fuzzy Excitation Controller Optimally Designed with a Genetic Algorithm," IEEE Transactions on Power Systems, vol. 13, no. 3, pp. 884-889, August 1998.
[8] Jenica Ileana Corcau, Eleonor Stoenescu, "Fuzzy Logic Controller as a Power System Stabilizer," International Journal of Circuits, Systems and Signal Processing, issue 3, volume 1, pp. 266-273, 2007.

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Modified Smith Predictor and Controller for Stable and Unstable Processes
Dola Gobinda Padhan and Prof. Somanath Majhi
Department of ECE, Indian Institute of Technology, Guwahati 781039, Assam, India e-mail: dola@iitg.ernet.in , smajhi@iitg.ernet.in
Matauek and Mici [6, 7] have shown that the disturbance response of

Abstract
A modified Smith predictor with a new structure is proposed for open-loop stable and unstable processes whose dynamics can be represented by first-order plus time delay (FOPTD) models. The modified Smith predictor has two controllers, namely, a set point tracking controller and a load disturbance rejection controller for obtaining good set point tracking and load disturbance rejection, respectively. The set point tracking controller is designed based on standard forms of the closed-loop system response with no zero and the disturbance rejection controller is considered as a proportional-integral (PI) controller and is designed using optimal gain and phase margin criteria. Illustrative examples show the superiority of the proposed controller design method over previously published approaches in terms of both closed-loop performance and robustness. Index TermsTime Delay, Modified Smith Predictor, Set point Weighting, Regulatory problems, PID controller, Stable process, Unstable process

Matausek and Micic [6, 7] have shown that the disturbance response of the Smith predictor can be controlled independently of the setpoint response for integrating processes by adding a P/PD controller in the feedback path from the difference of the plant output and the model output to the reference input. Majhi and Atherton [5] modified the original Smith predictor by providing an internal feedback loop across the delay-free part of the process model and sending the same amount of signal to the original process to stabilize the unstable process. Three controllers are used: a PI controller for setpoint tracking, a P/PD controller for disturbance rejection and another P controller for stabilizing unstable and integrating processes. The P/PD controller used for disturbance rejection was designed using the approach presented in [8]. The use of standard forms for controller design is a direct method which deserves due attention. Standard forms for an all-pole closed-loop transfer function have been derived by some authors. The conventional PI controller introduces a zero in the numerator of the standard form. Dorf and Bishop [9] obtained the optimum values for the transfer function with a single zero for the ITAE criterion with the input constrained to a ramp signal. As is well known, particularly in sampled data theory, design for a good ramp input response often produces a step response with quite a high overshoot, so the use of these optimum ramp input criteria is not necessarily appropriate for step response design when one has a zero in the closed-loop transfer function. In their recent edition [9], they have attempted to design phase lead and PID controllers by the standard form approach with cancellation of the closed-loop zero(s) by a prefilter. An advantage of using the standard form tuning formulas of Atherton and Majhi [10] is that they can be used to handle a closed-loop transfer function with a zero. A common technique when using a PI controller is to choose its zero to cancel a plant pole and to realise a closed-loop transfer function with no zero. This, however, often puts an unnecessary and inappropriate constraint on the design. In this paper, Section II introduces the modified Smith predictor structure, followed by Section III in which the controller design methods are discussed. Simulation results are given in Section IV and finally conclusions are drawn in Section V.

I. INTRODUCTION
Good control of plants with a long time delay may be difficult using a classical control method. This is because the additional phase lag contributed by the time delay tends to destabilize the closed-loop system. The stability problem can be solved by decreasing the controller gain, which results in a sluggish closed-loop response. The Smith predictor is a popular and very effective long dead-time compensator for stable processes. The main advantage of the Smith predictor method is that the time delay is effectively taken outside of the control loop in the transfer function relating process output to setpoint. However, this approach fails in a very significant way for an unstable process due to the problem of stabilization. Furukawa and Shimemura [1] augmented the scheme with an observer, Watanabe and Ito [11] deliberately replaced the known process by a mismatched process model, and De Paor's [3] approach has a severe constraint on the ratio of dead time to time constant even for the setpoint response of the predictor. Later on, De Paor and Egan [3] extended and partially optimized the modified Smith predictor and controller [4] to relax the constraint on the ratio, but there is no significant extension to the range of that ratio for which closed-loop asymptotic stability can be achieved. These cited authors have thus solved the stabilization problem by involving greater complexity than the Smith predictor. Watanabe and Ito [11] pointed out that if the process has poles near the origin in the left half plane, then the responses may be sluggish enough to be unacceptable. Since then, a number of methods have been proposed to overcome the problem of controlling a process with an integrator and long dead time. A new Smith predictor that isolates the setpoint response from the load disturbance response and gives improved closed-loop performance was proposed in [2].

II. MODIFIED SMITH PREDICTOR

The structure of the modified Smith predictor for controlling stable and unstable processes is shown in Fig. 1. In the figure, Gc is the set point tracking controller and Gc1 is the disturbance rejection controller in the inner closed loop for load disturbance rejection. Obviously, both Gc and Gc1 take independent responsibility for the overall system performance. Most importantly, by virtue of the enhanced structure, the set point response is decoupled from the load disturbance response in a simple manner. This is a dominant advantage of the proposed control structure. The closed-loop response to set point and disturbance inputs can be given by

Y(s) = Y_R(s) R(s) + Y_L(s) L(s)                                                        (1)

where

Y_R(s) = Gc Gp e^(-θs) (1 + Gm Gc1 e^(-θm s)) / [(1 + Gm)(1 + Gc1 Gp e^(-θs))]          (2)

Y_L(s) = Gp e^(-θs) / (1 + Gc1 Gp e^(-θs))                                              (3)

Fig. 1 New Smith predictor structure

Gm e^(-θm s) and Gp e^(-θs) are the transfer functions of the plant model and the plant, respectively. Based on the assumption that the model used perfectly matches the plant dynamics, that is Gm = Gp and θm = θ, (2) and (3) reduce to

Y_R(s) = Gc Gp e^(-θs) / (1 + Gp)                                                       (4)

Y_L(s) = Gp e^(-θs) / (1 + Gc1 Gp e^(-θs))                                              (5)

It is apparent from (4) and (5) that the new Smith predictor has decoupled the load response from the set point response. (5) shows that the response to a constant load disturbance will be unstable if Gc1 = 0 when Gp is an unstable transfer function. The delay-free part of Y_R(s) is compared with a closed-loop standard form with no zero for a minimum ISTE criterion as suggested by Atherton and Majhi, and this enables the parameters of Gc to be obtained.
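As a quick consistency check on the reduction from (2) to (4) under the perfect-match assumption, the following short symbolic sketch (not part of the original paper) treats Gc, Gc1 and Gp as free symbols and uses a single symbol E for the common dead-time term e^(-θs) once Gm = Gp and θm = θ:

```python
import sympy as sp

# Free symbols standing for the controller and plant blocks; E stands for the
# common dead-time term e^{-theta*s} once Gm = Gp and theta_m = theta.
Gc, Gc1, Gp, E = sp.symbols('Gc Gc1 Gp E')

# Equation (2) with Gm replaced by Gp and e^{-theta_m s} replaced by E
Y_R = Gc*Gp*E*(1 + Gp*Gc1*E) / ((1 + Gp)*(1 + Gc1*Gp*E))

# Equation (4): the expected set point transfer function after cancellation
Y_R_expected = Gc*Gp*E / (1 + Gp)

print(sp.simplify(Y_R - Y_R_expected))   # prints 0, confirming (2) -> (4)
```

The same substitution leaves (3) unchanged, which is why (5) is identical to (3).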

III. CONTROLLERS DESIGN

The modified Smith predictor structure requires the design of two controllers (Gc and Gc1). The design methods of these two controllers are explained in this section. The unstable process considered here is an unstable first-order plus time delay plant given by

Gp = Kp e^(-θs) / (T1 s - 1)                                                            (6)

and the unstable FOPTD model is considered to be

Gm = Km e^(-θm s) / (T1 s - 1)                                                          (7)

The stable process considered here is a stable first-order plus time delay plant given by

Gp = Kp e^(-θs) / (T1 s + 1)                                                            (8)

and the stable FOPTD model is considered to be

Gm = Km e^(-θm s) / (T1 s + 1)                                                          (9)

A. Design of Gc1

Gc1 is chosen as a PI controller of the form

Gc1 = Kc1 (1 + 1/(Ti s))                                                                (10)

Gc1 is designed based on gain and phase margin criteria. Even though Gc1 is meant for load disturbance rejection, it also takes part in stabilizing the unstable process with time delay; hence it is designed for stabilizing the process with time delay. From the definitions of gain margin and phase margin, the relations for magnitude and phase angle are given by

arg[Gc1(jωp) Gp(jωp)] = -π                                                              (11)

|Gc1(jωg) Gp(jωg)| = 1                                                                  (12)

Am = 1 / |Gc1(jωp) Gp(jωp)|                                                             (13)

φm = π + arg[Gc1(jωg) Gp(jωg)]                                                          (14)

where ωp and ωg are the phase and gain crossover frequencies, and Am and φm are the gain margin and phase margin respectively. For a given process and specified gain and phase margins, ωp and ωg are calculated from (11) and (12). (11) contains a trigonometric term (tan⁻¹) and thus the analytic expression for ωp cannot be calculated. Hence, to make the solution simple, the resulting trigonometric term in (11) is approximated. Using the approximation

tan⁻¹(x) ≈ π x / 4            (x ≤ 1)
tan⁻¹(x) ≈ π/2 - π/(4x)       (x > 1)                                                   (15)

for the stable process and

tan⁻¹(x) ≈ x                  (0 ≤ x ≤ 1)
tan⁻¹(x) ≈ π/2 - 1/x          (x > 1)                                                   (16)

for the unstable process. Usually the second approximation is more appropriate. It is found by numerical evaluation of x from (11)-(14) that this approximation is valid for all types of processes. The numerical solution of (11)-(14) is computationally intensive due to the severe non-linearity and the need for an appropriate initial guess. On the other hand, use of the above approximations for tan⁻¹(x), (15) and (16), still leads to nonlinear equations, but these are relatively easy to solve numerically. Once ωp and ωg are obtained in terms of Kc1 and Kp, for a specified gain and phase margin, the parameters of Gc1, i.e. Kc1 and Ti, are calculated for the different types of processes by using (13) and (14).
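To illustrate the remark about solving (11)-(14) numerically, the sketch below (not from the paper) sets up the exact, un-approximated crossover and margin relations for an assumed stable FOPTD test case (Kp = 1, T1 = 10, θ = 5, Am = 3, φm = 60°) and hands them to a general-purpose root finder; convergence depends on a reasonable initial guess, here taken from the approximate design.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed stable FOPTD test case: Gp = Kp*exp(-theta*s)/(T1*s + 1)
Kp, T1, theta = 1.0, 10.0, 5.0
Am, phim = 3.0, np.pi/3          # gain margin 3, phase margin 60 deg

def loop_mag(w, Kc1, Ti):
    # |Gc1(jw)Gp(jw)| for the PI controller Kc1*(1 + 1/(Ti*s))
    return Kc1*Kp*np.sqrt((w*Ti)**2 + 1) / (w*Ti*np.sqrt((w*T1)**2 + 1))

def loop_phase(w, Ti):
    # arg[Gc1(jw)Gp(jw)] for the stable plant
    return np.arctan(w*Ti) - np.pi/2 - w*theta - np.arctan(w*T1)

def equations(v):
    wp, wg, Kc1, Ti = v
    return [loop_phase(wp, Ti) + np.pi,          # (11): phase = -pi at wp
            loop_mag(wg, Kc1, Ti) - 1.0,         # (12): unity loop gain at wg
            Am*loop_mag(wp, Kc1, Ti) - 1.0,      # (13): gain margin Am
            loop_phase(wg, Ti) + np.pi - phim]   # (14): phase margin phim

# Initial guess taken from the approximate design (variables assumed positive)
wp0 = np.pi/(2*theta)
sol = fsolve(equations, [wp0, wp0/Am, 1.0, T1])
print("wp, wg, Kc1, Ti =", sol)   # should land close to the approximate design values
```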


For the stable process the loop transfer function is

Gc1(s) Gp(s) = Kc1 Kp (1 + s Ti) e^(-θs) / [s Ti (T1 s + 1)]                            (17)

Substituting (17) into (11)-(14) gives

tan⁻¹(ωp Ti) - tan⁻¹(ωp T1) - ωp θ + π/2 = 0                                            (18)

Am Kc1 Kp = ωp Ti √(ωp² T1² + 1) / √(ωp² Ti² + 1)                                       (19)

Kc1 Kp = ωg Ti √(ωg² T1² + 1) / √(ωg² Ti² + 1)                                          (20)

φm = π/2 + tan⁻¹(ωg Ti) - tan⁻¹(ωg T1) - ωg θ                                           (21)

The identity tan⁻¹(x) = π/2 - π/(4x), x > 1, from (15) is used for x > 1. The numerical solution of (18)-(21) shows that x >> 1, where x is one of ωp Ti, ωp T1, ωg Ti and ωg T1. This helps to approximate (18)-(21) as

Am Kc1 Kp = ωp T1                                                                       (22)

Kc1 Kp = ωg T1                                                                          (23)

π/2 - π/(4 ωp Ti) + π/(4 ωp T1) - ωp θ = 0                                              (24)

φm = π/2 - π/(4 ωg Ti) + π/(4 ωg T1) - ωg θ                                             (25)

Finally, solving for Kc1 and Ti in (22)-(25) gives

Kc1 = ωp T1 / (Am Kp)                                                                   (26)

Ti = 1 / (2 ωp - 4 ωp² θ/π + 1/T1)                                                      (27)

For the unstable process the loop transfer function is

Gc1(s) Gp(s) = Kc1 Kp (1 + s Ti) e^(-θs) / [s Ti (T1 s - 1)]                            (28)

Substituting (28) into (11)-(14) gives

tan⁻¹(ωp Ti) + tan⁻¹(ωp T1) - ωp θ - π/2 = 0                                            (29)

Am Kc1 Kp = ωp Ti √(ωp² T1² + 1) / √(ωp² Ti² + 1)                                       (30)

Kc1 Kp = ωg Ti √(ωg² T1² + 1) / √(ωg² Ti² + 1)                                          (31)

φm = tan⁻¹(ωg Ti) + tan⁻¹(ωg T1) - ωg θ - π/2                                           (32)

The identity tan⁻¹(x) = π/2 - 1/x, x > 1, from (16) is used for x > 1. The numerical solution of (29)-(32) shows that x >> 1, where x is one of ωp Ti, ωp T1, ωg Ti and ωg T1. This helps to approximate (29)-(32) as

Am Kc1 Kp = ωp T1                                                                       (33)

Kc1 Kp = ωg T1                                                                          (34)

π/2 - 1/(ωp Ti) - 1/(ωp T1) - ωp θ = 0                                                  (35)

φm = π/2 - 1/(ωg Ti) - 1/(ωg T1) - ωg θ                                                 (36)

Finally, for the unstable process, solving for Kc1 and Ti in (33)-(36) gives

Kc1 = ωp T1 / (Am Kp)                                                                   (37)

Ti = 1 / [ωp (π/2 - ωp θ) - 1/T1]                                                       (38)

Thus both the parameters of Gc1 are determined. This procedure is illustrated by the following examples.

B. Design of Gc

Gc is designed based on the standard forms with no zero in the numerator [13].

IV. SIMULATION STUDY

Example 1: Consider a stable first-order plus time delay process

Gp(s) = e^(-5s) / (10s + 1)

Setting Gc Gp / (1 + Gp) = 1/(s + 1) results in a simple controller

Gc = (10s + 2) / (s + 1)
The controller parameters of Gc1, Kc1 and Ti, are obtained from (26) and (27) respectively. Choosing φm = 60° and Am = 3, the phase crossover frequency ωp is 0.3142; (26) then gives Kc1 = 1.0473 and (27) gives Ti = 10.008. A unit-step set point is introduced at time t = 0 and a load disturbance L = 0.1 at time t = 70. The closed-loop response for this controller setting is shown in Fig. 2. It is apparent that the set point response and the disturbance rejection of the proposed method are superior to the earlier methods. Fig. 2 also shows the variation of the step response for some percentage changes in the plant model parameters; the results are quite robust to these variations.

Fig. 2 Responses of Example 1 (system output and control variable versus time, s): (i) +30% variation, -30% variation and no variation in θ; (ii) +30% variation, -30% variation and no variation in T1
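For readers who want to reproduce the Example 1 numbers, the short sketch below evaluates the stable-process design equations. It is not from the paper; it assumes that the approximated relations reduce to the closed-form crossover frequency ωp = [Am φm + π Am(Am - 1)/2] / [(Am² - 1) θ], which is consistent with the reported value ωp = 0.3142, and then applies (26) and (27):

```python
import numpy as np

# Example 1 plant and specifications: Gp = exp(-5s)/(10s + 1), Am = 3, phim = 60 deg
Kp, T1, theta = 1.0, 10.0, 5.0
Am, phim = 3.0, np.deg2rad(60)

# Assumed closed-form phase crossover frequency implied by the approximate relations
wp = (Am*phim + np.pi*Am*(Am - 1)/2) / ((Am**2 - 1)*theta)

Kc1 = wp*T1/(Am*Kp)                            # equation (26)
Ti = 1.0/(2*wp - 4*wp**2*theta/np.pi + 1/T1)   # equation (27)

print(f"wp = {wp:.4f}, Kc1 = {Kc1:.4f}, Ti = {Ti:.3f}")
# Expected to be close to the reported wp = 0.3142, Kc1 = 1.0473, Ti = 10.008
```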

Example 2: Consider an unstable first-order plus time delay process

Gp(s) = 4 e^(-2s) / (10s - 1)

Like the previous example, setting Gc Gp / (1 + Gp) = 1/(s + 1) results in

Gc = (2.5s + 0.75) / (s + 1)

The controller parameters of Gc1, Kc1 and Ti, can be obtained from (37) and (38) respectively. Choosing φm = 30° and Am = 3, the phase crossover frequency ωp is 0.6872; (37) then gives Kc1 = 0.5726 and (38) gives Ti = 28.6012. A unit-step set point is introduced at time t = 0 and a load disturbance L = 0.1 at time t = 70. The closed-loop response for this controller setting is shown in Fig. 3(i-vi) and is compared with the response of the method suggested by Rao et al. [12]. The plant parameters have been varied to illustrate the effectiveness of the proposed method in the face of plant parameter uncertainty. It is apparent that the set point response and the disturbance rejection of the proposed method are superior to those of Rao et al.
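An analogous sketch (again not from the paper, and again assuming the same closed-form expression for ωp, which matches the reported 0.6872) reproduces the Example 2 tuning via (37) and (38):

```python
import numpy as np

# Example 2 plant and specifications: Gp = 4*exp(-2s)/(10s - 1), Am = 3, phim = 30 deg
Kp, T1, theta = 4.0, 10.0, 2.0
Am, phim = 3.0, np.deg2rad(30)

# Assumed closed-form phase crossover frequency implied by the approximate relations
wp = (Am*phim + np.pi*Am*(Am - 1)/2) / ((Am**2 - 1)*theta)

Kc1 = wp*T1/(Am*Kp)                            # equation (37)
Ti = 1.0/(wp*(np.pi/2 - wp*theta) - 1/T1)      # equation (38)

print(f"wp = {wp:.4f}, Kc1 = {Kc1:.4f}, Ti = {Ti:.4f}")
# Expected to be close to the reported wp = 0.6872, Kc1 = 0.5726, Ti = 28.6012
```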

Fig. 3 Responses of Example 2 (system output and control variable versus time, s): — proposed method, ------- Rao et al.; (i) without variation in the parameters, (ii) +10% variation in θ, (iii) -10% variation in θ, (iv) +10% variation in T1, (v) -10% variation in T1, (vi) simultaneous variation of -10% in θ and +10% in T1 for the proposed method and simultaneous variation of +10% in θ and -10% in T1 for Rao et al.

V. CONCLUSIONS
The problem of controlling an unstable first-order process incorporating time delay has been tackled by proposing a new Smith predictor. It is also shown that the predictor is capable of successfully controlling a stable first-order process with time delay. The method uses two controllers, namely the set-point tracking controller and the disturbance rejection controller. With the proposed structure, the set-point tracking is decoupled from the load disturbance rejection under ideal conditions. From the simulation results it is seen that the designs are relatively robust to parameter variations.

VI. REFERENCES
[1] F. Furukawa and E. Shimemura, "Predictive control for systems with time delay," Int. J. Control, vol. 37, no. 2, pp. 399-412, 1990.
[2] K. J. Astrom, C. C. Hang, and B. C. Lim, "A new Smith predictor for controlling a process with an integrator and long dead time," IEEE Trans. Automatic Control, vol. 39, pp. 343-345, 1994.
[3] A. M. De Paor and R. P. K. Egan, "Extension and partial optimization of a modified Smith predictor and controller for unstable processes with time delay," Int. J. Control, vol. 50, no. 4, pp. 1315-1326, 1989.
[4] A. S. Rao, V. S. R. Rao, and M. Chidambaram, "Set point weighted modified Smith predictor for integrating and double integrating processes with time delay," ISA Transactions, vol. 46, pp. 59-71, 2007.
[5] S. Majhi and D. P. Atherton, "Modified Smith predictor and controller for processes with time delay," IEE Proc. Control Theory Appl., vol. 146, no. 5, pp. 359-366, 1999.
[6] M. R. Matausek and A. D. Micic, "A modified Smith predictor for controlling a process with an integrator and long dead time," IEEE Trans. Automatic Control, vol. 41, pp. 1199-1203, 1996.
[7] M. R. Matausek and A. D. Micic, "On the modified Smith predictor for controlling a process with an integrator and long dead-time," IEEE Trans. Automatic Control, vol. 44, pp. 1603-1606, 1999.
[8] A. M. De Paor and M. O'Malley, "Controllers of Ziegler-Nichols type for unstable process," Int. J. Control, vol. 49, pp. 1273-1284, 1989.
[9] R. C. Dorf and R. H. Bishop, Modern Control Systems, 7th edn., Addison-Wesley, Reading, MA, 1995.
[10] D. P. Atherton and S. Majhi, "Tuning of optimum PI-PD controllers," Proceedings of the international conference CONTROLO'98, Coimbra, Portugal, pp. 549-554.
[11] K. Watanabe and M. Ito, "A process-model control for linear systems with delay," IEEE Trans. Automatic Control, vol. 26, no. 6, pp. 1261-1269, 1981.
[12] A. S. Rao, V. S. R. Rao, and M. Chidambaram, "Simple analytical design of modified Smith predictor with improved performance for unstable first-order plus time delay (FOPTD) processes," Ind. Eng. Chem. Res., vol. 46, no. 13, pp. 4561-4571, 2007.
[13] D. P. Atherton and A. E. Boz, "Using standard forms for controller design," Proceedings of CONTROL'98, UKACC, Swansea, 1998, pp. 1066-1071.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

RS-485/MODBUS based Intelligent Building Automation System using LabVIEW


Jignesh G. Bhatt
Instrumentation & Control Engineering Department Faculty of Technology, Dharmsinh Desai University Nadiad 387 001, Gujarat, India e-mail: jigneshgbhatt@gmail.com

H.K. Verma
Electrical Engineering Department Indian Institute of Technology Roorkee Roorkee 247 667, Uttarakhand, India e-mail: hkvfee@gmail.com

Abstract: Building automation systems (BAS) are transforming from the legacy stand-alone security and safety systems to intelligent computerized network based solutions. This paper presents the design of a three-layer BAS for a residential building. Sensors and MODBUS compliant digital I/O DAQ modules are wired to form an RS-485 network at field level, which is connected to a PC based main controller. The design incorporates central servers for (i) OLE for Process Control (OPC), (ii) Camera, (iii) Graphical User Interfaces (GUIs) and (iv) Entertainment. A local or remote workstation working as a client can access all functionalities using the intranet or internet respectively. The RS-485 network is used for the time-critical process-alarm data exchange, and an Ethernet backbone is used for high speed client-server communication. Various functionalities of the BAS have been validated through laboratory implementation.

Keywords: Building Automation System, Intelligent Building, Virtual Instrumentation, LabVIEW, MODBUS, RS-485.

I. INTRODUCTION

A Building Automation System (BAS) is a data acquisition and control system that incorporates the various functionalities provided by the control system of a building. A modern BAS is a computerized, intelligent network of electronic devices, designed to monitor and control the lighting, internal climate and other systems in a building for creating optimized energy usage, safety, security, information, communication and entertainment facilities. The BAS maintains the internal climate of the building within a specified range by regulating temperature and humidity, regulates lighting based on parameters like occupancy, ambient light and timing schedule, monitors system performance and device failures, and generates audio-visual, email and/or text notifications to the building O&M staff. The BAS reduces building energy consumption and, thereby, reduces operational and maintenance costs as compared to an uncontrolled building. A building controlled by a BAS is often referred to as an Intelligent Building. Typically, functionalities like entertainment, communication and information exchange need high data rates, while features like automation, safety and security need low data rates but low latency, high network reliability and data security. Moreover, BAS designs for residential buildings must be cost-effective, affordable to common people and easy to operate without specialized training.

II. BAS SUBSYSTEMS

The BAS needs to be split into various subsystems according to the basic functionalities, as below:

A. Automation Subsystem
(i) Internal Climate Control: This includes ambient temperature and humidity control and allows the homeowner (or user) to control the building's heating and air conditioning systems locally as well as remotely. (ii) Lighting Control: Lighting of the building is a major contributor to the monthly electricity bill, which can be reduced significantly by controlling the switching frequency and ON/OFF timings of electric lights on a prefixed schedule or on the basis of ambient light intensity. (iii) Water-Level Control: Control of the water level of underground and/or overhead tanks is very crucial for routine activities and to tackle abnormal occurrences like fire. Hence, availability of sufficient water can be ensured by automatic water-level control that switches the filling pump and discharge pump ON/OFF as necessary.

B. Security Subsystem
Security camera channels can be selected for viewing, camera control and activity monitoring. A motion sensing feature can be configured that will detect unauthorized movement and generate alerts via audio-visual annunciation. Audio-video recording of areas under monitoring can also be done. Past recordings can be stored on the server and replayed on demand. Other major functions can be: 1. Detection of possible intrusion. 2. Detection of fire and gas leaks. 3. Medical alert. 4. Tele-assistance.

C. Entertainment Subsystem
The audio entertainment subsystem includes audio switching and distribution on user demand. Audio switching determines the selection of an audio channel. Audio distribution allows an audio source to be heard simultaneously in several locations of the building. This feature is often referred to as 'multi-zone' audio. Similarly, the video subsystem includes video switching and distribution on user demand, and the feature is often referred to as 'multi-zone' video. A video door entry system can be integrated with the TV screens, allowing the user to view the entrant as seen by a door camera.

D. Communication Subsystem
An intercom system allows communication via a microphone and speaker between multiple rooms. The following features can be added: 1. Remote control using Intranet / Internet / PDA with wireless connectivity. 2. Alarm annunciation. 3. Inter-person communications.

This paper presents a design of a BAS based on a wired network operating on the RS-485/MODBUS protocol for a residential building. For automation, the MODBUS protocol is used for data communication. Although not formally standardized, it is regarded as an open protocol [1][2]. The physical layer of the communication system is an RS-485 two-wire network, chosen for its simplicity, low cost and adequate data bandwidth. At the management level, high-bandwidth Ethernet technology is used. Remote operation of the BAS is provided by the TCP/IP-based Intranet/Internet. The next section presents a detailed design of a BAS for a residential building, including the design objectives, BAS functions and the hardware and software of the system. The design has been validated by implementing most of the functionalities and all the network technologies selected for the BAS in the laboratory, and its results are reported here. The paper ends with concluding notes and suggestions for value addition to the proposed work as future scope.

III. DESIGN OF BAS

A. Design Objectives
The following are the design objectives of the proposed BAS for residential buildings: 1. Energy conservation by switching off lights when not required. 2. Security of residents from intruders. 3. Safety against fire and gas leakage. 4. Remote operation (control) of any electric appliance from a central location within the building as well as from outside the building. 5. Connectivity to the Internet to meet the information, e-services and communication needs of the residents. 6. Access to a central digital entertainment library. 7. Simple and user-friendly Human Machine Interface (HMI). 8. Low cost of the solution. 9. Simplicity, future expandability and interoperability.

B. BAS Functions
The system has been designed to perform the functions mentioned below: 1. Reporting alarms and status of the different areas under coverage in the GUI. 2. Alarm reporting by audio-visual annunciation. 3. Control of various devices via the GUI. 4. Live video (with audio) captured by the IP camera, with live streaming as well as a recording facility. 5. An entertainment panel providing continuous broadcast of live radio via the GUI. 6. An entertainment panel providing a video-on-demand facility via the GUI. 7. Recording of alarms in MS Excel format and maintaining a history of events/alarms. 8. Remote monitoring and control facility from any PC on the Intranet / Internet.

C. BAS Hardware
The proposed BAS is modular in nature and has a 3-layer architecture [3]. The modules in each layer are described below; a minimal sketch of the field-level data exchange is given after Fig. 1.
1. Field Level Modules (Layer-1): These are also known as device level modules and consist of sensors, actuators and controllers. Such field devices are ordinary ones, which are inexpensive, easy to interface and readily available from a large number of vendors. They function directly in the physical environment and are deployed for data acquisition and control of the environment. Examples of such field level modules are smoke sensors, glass break detectors, PIR sensors, door-window sensors, LPG detectors, etc. Digital I/O modules, which perform the function of data acquisition from the sensors, are also included here.
2. Interconnecting Modules (Layer-2): Also known as interfacing modules, they link various networks and/or network segments together for meaningful application development. These modules enable various BAS modules to interact either with the same protocol or by conversion of protocol. They are also used to provide network range extension, isolation and interfacing at various levels of the ISO/OSI model. For example, the RS-485/RS-232 convertor used in the proposed BAS is an interconnecting module.
3. Management and Configuration Modules (Layer-3): The main purpose of these modules is to configure and manage the various functionalities of the BAS. They can be accessed locally or remotely. Such modules are deployed for monitoring, controlling, logging and archiving the processed data values. They function at the backbone level and provide a GUI for monitoring and control using the data collected from both types of modules described earlier. These modules also generate audio-visual alarm annunciation along with some other useful features for user comfort.
The RS-485 network is implemented to construct the physical layer, while MODBUS is chosen to implement the application layer for the laboratory implementation of the BAS. The GUI of this system has been prepared using LabVIEW software and is very user-friendly. The BAS is divided into three layers as indicated in Fig. 1, which shows only one residence with a room, a corridor, a water tank and a lawn with main entrance, monitored and controlled from the control room. The central servers for OPC, Camera, GUI and Entertainment as well as the remote workstation, which are at the top (Layer-3) forming the configuration or management modules, are connected to the Ethernet backbone and to the Internet. A PC based main controller and an IP camera are at the middle layer (Layer-2). The main controller has features like TCP/IP support, an integrated OPC server and multiple communication ports. An RS-485 to RS-232 convertor module is also at the same layer (Layer-2) as an interfacing module. This module performs the dual functions of protocol conversion and isolation. Various sensors and digital I/O modules form the third and last layer (Layer-1). This layer consists of digital input and output (I/O) modules connected on an RS-485 network in multi-drop configuration and further to the main controller via the RS-485 to RS-232 convertor module. These digital I/O modules perform data acquisition from the sensors and make the control outputs of the main controller available to the actuators.

Figure 1. Schematic of the Building Automation System.
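As an illustration of the kind of request the main controller exchanges with the Layer-1 digital I/O modules over RS-485, the sketch below builds a MODBUS RTU "Read Discrete Inputs" frame (function code 0x02) with its CRC-16. It is a generic illustration, not code from the paper; the slave address, starting input and count are hypothetical, and the serial-port write is only indicated in a comment.

```python
import struct

def modbus_crc16(frame: bytes) -> bytes:
    """CRC-16/MODBUS (poly 0xA001, init 0xFFFF), appended low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            lsb = crc & 1
            crc >>= 1
            if lsb:
                crc ^= 0xA001
    return struct.pack('<H', crc)

def read_discrete_inputs_request(slave: int, start: int, count: int) -> bytes:
    """Build a MODBUS RTU request for function 0x02 (Read Discrete Inputs)."""
    pdu = struct.pack('>BBHH', slave, 0x02, start, count)
    return pdu + modbus_crc16(pdu)

# Hypothetical field module: slave address 1, read 8 discrete inputs starting at 0
request = read_discrete_inputs_request(slave=1, start=0, count=8)
print(request.hex())
# A serial library (e.g. pyserial) would then write this frame on the RS-485 port:
# serial.Serial('/dev/ttyUSB0', 9600, parity='N', stopbits=1).write(request)
```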

D. BAS Software
The software used in the BAS is based on OPC, an acronym for OLE (Object Linking and Embedding) for Process Control. Its purpose is to provide a standards based, vendor independent infrastructure for data exchange [4][5]. OPC is based on a client-server architecture and enables a fully scalable solution for future changes and expansion. OPC specifications include OPC Data Access, Alarms and Events, Historical Data Access, OPC Data eXchange, access to OPC server data through web services, etc. There can be multiple local or remote OPC clients accessing the same data from single or multiple OPC servers and running the same program.

IV. LABORATORY IMPLEMENTATION

The proposed BAS design has been validated by implementing most of its functionalities in the laboratory. The laboratory implementation of the three-layer BAS is shown on a panel in Fig. 2. It provides the functions of automation, safety, security, communication and entertainment.

Figure 2. Front view of the BAS Panel.

At the top layer (Layer-3), PCs acting as local OPC clients are connected on an Ethernet LAN, and those acting as remote workstations are connected using the internet. At the middle layer (Layer-2), the BAS consists of a PC acting as the main controller, OPC server and OPC client, and an IP camera connected to the LAN. Data acquisition and control modules are connected on the RS-485 network in half duplex mode. At the device level, a smoke sensor, motion detector, glass break detector, door-window sensors, ambient light detector, occupancy sensor, water level sensor and LEDs are connected. Remote control and event notification facilities have also been implemented. The OPC server is configured in the NAPOPC [6] software supplied by ICPCON for data exchange, while a GUI based OPC client has been developed in LabVIEW [7] (Graphical Programming Environment from National Instruments) as shown in Figs. 3 and 4, developed for serving control and entertainment purposes respectively.

V. RESULTS

Figure 3. Front panel of LabVIEW GUI.

Figure 4. Entertainment panel of LabVIEW GUI.

VI. CONCLUSION

The design implemented in the laboratory employs high speed Ethernet at Layer-3 for establishing connections between servers and clients. Data acquisition and control in Layer-1 use the RS-485 and MODBUS protocols, which are industry standard. In this layer, the application is implemented using LabVIEW GUIs. For the BAS software, OPC is selected because of its interoperability, vendor independence and other merits. The BAS scheme presented here is based entirely on a wired network. Rewiring of existing buildings may be difficult and costly, and hence in such cases power line and wireless networks can be more suitable. In fact, a mixed-network protocol at the device level could be the best solution.

VII. FUTURE SCOPE

In modern contexts, BAS is a rapidly growing phenomenon. Emerging wired/wireless communication technologies are expected to transform the present partially networked houses into smart, intelligent and adaptive homes. Higher penetration of automation will lead to increased device intelligence, with devices performing their routine tasks in varied situations autonomously and without much human intervention. Such smart homes will provide a high level of energy optimization, security, communication and entertainment facilities. Enhancement of each and every facet is important for rapid growth and wide acceptance of BAS. The focus should be on making BAS more user-friendly and a part of the residents' routine life. Therefore, serious efforts should be made to make BAS rugged, reliable and cost-effective as well as user-friendly. This work can be further extended by: 1. connecting a GSM modem to enable sending of SMS alerts in case of an alarm/event; 2. adding more functionalities to the GUI to make it more attractive and user-friendly; 3. building a similar GUI for commercially available Wireless Sensor Network motes; 4. building a similar GUI for commercially available power line based automation devices and home appliances; 5. adding more security, communication and entertainment features.

REFERENCES
[1] MODBUS Specifications and Implementation Guidelines [Online]. Available: http://www.modbus.org
[2] W. Kastner, G. Neugschwandtner, S. Soucek, and H. M. Newman, "Communication Systems for Building Automation and Control," Proceedings of the IEEE, Vol. 93, Issue 6, June 2005, pp. 1178-1203.
[3] W. Granzer, W. Kastner, G. Neugschwandtner, and F. Praus, "A Modular Architecture for Building Automation Systems," Proceedings of the 6th IEEE International Workshop on Factory Communication Systems, Torino, Italy, June 28-30, 2006, pp. 99-102.
[4] OPC Overview [Online]. Available: http://www.opcfoundation.org
[5] Randy Kondor, "Integrating OPC into Building Automation" [Online]. Available: http://www.automatedbuildings.com/news/dec03/articles/matrikon/matri.htm
[6] Users Manual, NAPOPC DA Server [Online]. Available: http://www.icpdas.com/products/Software/NAPOPC/napopc.htm
[7] Online Help and Technical Support Documentation [Online]. Available: http://www.ni.com/support


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

FAULT DIAGNOSIS IN BENCHMARK PROCESS CONTROL SYSTEM USING PROBABILISTIC NEURAL NETWORK
__________________________________________________________________________________
Tarun Chopra (a), Dr. Jayashri Vajpai (b)
(a) Ph.D. Scholar, Department of Electrical Engineering, Faculty of Engineering, J.N.V. University, Jodhpur (India) - 342 011
(b) Associate Professor, Department of Electrical Engineering, Faculty of Engineering, J.N.V. University, Jodhpur (India) - 342 011

_________________________________________________________________________________
Abstract - In this paper, a Probabilistic Neural Network is used for fault diagnosis in a benchmark process control system. The performance of the proposed approach is demonstrated on the DAMADICS benchmark problem.
Keywords: Fault Diagnosis, Benchmark study, Control Valves

1. INTRODUCTION

The growing complexity of industrial installations, such as in the chemical, petrochemical, sugar and power industries, has resulted in serious problems in process control. The control devices currently in use to improve the overall performance of industrial processes involve both sophisticated digital system design techniques and complex hardware (sensors, actuators and processing units). This complexity means that the probability of fault occurrence can be significant, and an automatic supervisory control system should be used to detect and isolate anomalous working conditions (i.e. faults) as early as possible. These motivations have drawn a great amount of attention to fault diagnosis in dynamic processes. In the last three decades, artificial neural networks (ANN) have been applied to fault diagnosis [1-4]. ANN is very useful owing to its parallel distributed processing, training capacity, implicit knowledge representation, and pattern recognition capability. However, ANNs have some drawbacks, including the determination of the network architecture and the assignment of network parameters. When networks are applied in dynamic environments, especially for online applications, traditional networks can become the bottleneck in adaptive applications [5]. Considering these limitations, a probabilistic neural network (PNN) based fault diagnosis system is proposed in this paper.

2. PROBABILISTIC NEURAL NETWORK (PNN)

The Probabilistic Neural Network (PNN) is an effective neural network architecture, underpinned by a theoretically sound framework. It can effectively solve a variety of classification problems, it has the advantage of a fast learning process requiring only a single-pass network training stage without any iteration for adjusting weights, and it can adapt itself to architectural changes.

PNN is derived from the Radial Basis Function (RBF) network, which is an ANN using RBFs. An RBF is a bell-shaped function that scales the variable nonlinearly. PNN is adopted in this paper because it has many advantages [6]. Its training speed is many times faster than that of a BP network. PNN can approach a Bayes optimal result under certain easily met conditions [7]. Additionally, it is robust to noise. We choose it also for its simple structure and training manner. The most important advantage of PNN is that training is easy and instantaneous [7]. Weights are not trained but assigned. Existing weights are never altered; only new vectors are inserted into the weight matrices during training, so it can be used in real time. Since the training and running procedures can be implemented by matrix manipulation, the speed of PNN is very fast. The network classifies an input vector into a specific class because that class has the maximum probability to be correct. In this paper, the PNN has four layers: Input Layer, Hidden Layer, Sum Layer and Output Layer, as shown in Fig. 1. The input vector X = [x1, x2, ..., xi, ..., xn], i = 1, 2, 3, ..., n, is connected to the input layer. The number of hidden nodes Hk (k = 1, 2, 3, ..., K) is equal to the number of training data, while the number of summation nodes Sj and output nodes Oj (j = 1, 2, 3, ..., m) equals the number of classes.

Fig. 1: The network structure of PNN
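The following short sketch (not part of the paper) illustrates, under the layer structure just described, how a generic Gaussian-kernel PNN of this kind can be evaluated: each hidden node holds one training vector, each summation node sums the kernel responses of its class, and the output picks the class with the largest sum. The smoothing parameter sigma and the tiny data set are hypothetical.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.1):
    """Classify one input vector with a Gaussian-kernel PNN.

    train_X : (K, n) array, one row per hidden node (stored training vector)
    train_y : (K,)  array of class labels for each hidden node
    """
    # Hidden layer: one Gaussian kernel per stored training vector
    d2 = np.sum((train_X - x)**2, axis=1)
    hidden = np.exp(-d2 / (2.0*sigma**2))

    # Summation layer: add kernel outputs belonging to the same class
    classes = np.unique(train_y)
    sums = np.array([hidden[train_y == c].sum() for c in classes])

    # Output layer: the class with the maximum summed response wins
    return classes[np.argmax(sums)]

# Tiny synthetic example (two classes in 2-D), purely illustrative
train_X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.05, 0.1]), train_X, train_y))   # -> 0
print(pnn_classify(np.array([0.95, 1.0]), train_X, train_y))   # -> 1
```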

The weights w_ki^IH (connecting the k-th hidden node and the i-th input node) and w_jk^HS (connecting the j-th summation node and the k-th hidden node) are determined by the K input-output training pairs [8-9]. The final output of node Oj is obtained by selecting the class whose summation node has the maximum response.

The optimal value of the smoothing parameter can be determined so as to obtain the minimum misclassification error based on the testing data. An optimization method is used to adjust this parameter iteratively, and adjusting it refines the accuracy in a dynamic environment.

3. BENCHMARK PROCESS CONTROL SYSTEM

The determination of the development of small or incipient faults before they become serious clearly has an important influence on the predicted lifetime of an industrial actuator. Valve faults causing process disturbance and shutdown are of major economic concern and can sometimes be an issue of safety and environmental pollution. Furthermore, when actuators do not function correctly, the final product quality is affected. The monitoring of the development of incipient faults is therefore an issue not only for predicting maintenance schedules but also for monitoring the performance of the process under study. The sugar factory investigation has shown that electro-pneumatic actuators have sufficient complexity to warrant detailed diagnostic study. The study has shown that by simultaneous monitoring of several actuators and levels of system operation it is feasible to develop powerful diagnosis and supervision tools for the overall process under investigation. The DAMADICS benchmark flow control valve [10], shown in Fig. 2, has been chosen as the case study for this method. A model of the actuator developed in MATLAB-Simulink, using both an analytical description and engineering knowledge of a real industrial actuator, is shown in Fig. 4. A limited set of five available measurements and the control value signal have been considered for benchmarking purposes, as shown in Fig. 3: the process control external signal CV, the values of liquid pressure on the valve inlet P1 and outlet P2, the stem displacement X, the liquid flow rate F and the liquid temperature T1.

Fig. 2 Actuator with spring-and-diaphragm pneumatic servomotor, positioner and control valve

Fig. 3 Structure of the benchmark actuator system

Fig. 4 MATLAB-Simulink model of the actuator

4. THE SET OF ACTUATOR FAULTS

The 19 actuator faults which may occur in this actuator have been considered and classified under 4 broad categories as mentioned below:

Control valve faults (F1): 1 - valve clogging, 2 - valve or valve seat sedimentation, 3 - valve or valve seat erosion, 4 - increase of valve friction, 5 - external leakage (leaky bushing, covers, terminals), 6 - internal leakage (valve tightness), 7 - medium evaporation or critical flow.

Servomotor faults (F2): 8 - twisted servomotor stem, 9 - servomotor housing or terminal tightness, 10 - servomotor diaphragm perforation, 11 - servomotor spring fault.

Positioner faults (F3): 12 - electro-pneumatic transducer fault, 13 - stem displacement sensor fault, 14 - pressure sensor fault, 15 - positioner spring fault.

General faults/external faults (F4): 16 - positioner supply pressure drop, 17 - unexpected pressure change across the valve, 18 - fully or partly opened bypass valves, 19 - flow rate sensor fault.

5. FAULT DIAGNOSIS USING PNN

The diagnosis system has been implemented using samples with associated fault types; the records for the PNN are stored in a database, and the diagnosis system can further record newly generated samples. The PNN has four input nodes, five output nodes, N and F1-F4 (F1: Control valve faults, F2: Servomotor faults, F3: Positioner faults, F4: General faults/external faults), and a number of hidden nodes equal to the number of training data.

According to the various training patterns, the weights between the input nodes and hidden nodes are determined by the training data. The weights between the hidden nodes and summation nodes are the predicted outputs associated with each input pattern, encoded as 1 for Abnormal, when a training datum relates to its fault type, and 0 for Normal.

In the real world, training data would be collected from field data. When new training data are presented to the PNN, the corresponding hidden nodes continue to grow. This process results in very fast training, and the network is adaptive to data changes. A PNN has been created and simulated in MATLAB; it works as a classifier and is able to identify the types of the test samples.

6. RESULTS & DISCUSSIONS

The results of the PNN classifier are shown in the following table. It can be seen that, using the Probabilistic Neural Network (PNN), rapid recognition of the different faults was realized and the average correct recognition rate reached 97.5%.

Type of fault | Number of training samples | Number of test samples | Correct recognition (%)
F1 (valve clogging) | 30 | 70 | 100%
F2 (servomotor spring fault) | 30 | 70 | 95%
F3 (pressure sensor fault) | 30 | 70 | 97%
F4 (positioner supply pressure drop) | 30 | 70 | 98%

7. CONCLUSION

A diagnosis system using PNN has been developed in this paper. With field data, the diagnosis system provides a fast and easy manipulation tool to detect the fault types. The diagnosis system uses a minimal number of connections, requires less computation time for operation, and does not need additional weight settings.

8. REFERENCES

[1] Z. Wang, Y. Liu, and P. J. Griffin, "A combined ANN and Expert System Tool for Transformer Fault Diagnosis," IEEE Trans. on Power Delivery, Vol. 3, No. 4, pp. 1224-1229, 1998.
[2] L. Xu and Mo-Yuen Chow, "A Classification Approach for Power Distribution Systems Fault Cause Identification," IEEE Trans. on Power Systems, Vol. 21, No. 1, pp. 53-60, 2006.
[3] J. L. Guardado, J. L. Naredo, P. Moreno, and C. R. Fuerte, "A Comparative Study of Neural Network Efficiency in Power Transformer Diagnosis Using Dissolved Gas Analysis," IEEE Trans. on Power Delivery, Vol. 16, No. 4, pp. 643-647, 2001.
[4] P. S. Kulkarni, A. G. Kothari, and D. P. Kothari, "Combined Economic and Emission Dispatch Using Improved Back-propagation Neural Network," Electric Machines and Power Systems, 28:31-44, 2000.
[5] Chia-Hung Lin and Ming-Chieh Tsao, "Power Quality Detection with Classification Enhancible Wavelet-Probabilistic Network in a Power System," IEE Proceedings - Generation, Transmission, and Distribution, Vol. 152, No. 6, November 2005, pp. 969-976.
[6] T. Master, Practical Neural Network Recipes, New York: John Wiley, 1993.
[7] D. F. Specht, "Probabilistic neural networks," Neural Networks, vol. 3, 1990.
[8] Chia-Hung Lin and Ming-Chieh Tsao, "Power Quality Detection with Classification Enhancible Wavelet-Probabilistic Network in a Power System," IEE Proceedings - Generation, Transmission, and Distribution, Vol. 152, No. 6, November 2005, pp. 969-976.
[9] Whei-Min Lin, Chia-Hung Lin, and Zheng-Chi Sun, "Adaptive Multiple Fault Detection and Alarm Processing for Loop System with Probabilistic Network," IEEE Trans. on Power Delivery, Vol. 19, No. 1, January 2004, pp. 64-69.
[10] Michal Bartys, Ron Patton, Michal Syfert, Salvador de las Heras, and Joseba Quevedo, "Introduction to the DAMADICS actuator FDI benchmark study," Control Engineering Practice, Vol. 14, 2006, pp. 577-596.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Classification of Teas Using Spectroscopy and Probabilistic Neural Networks


AH KIRANMAYEE and PC PANCHARIYA
Digital Systems Group Central Electronics Engineering Research Institute (CEERI), Pilani-333031, India (A constituent laboratory of Council of Scientific and Industrial Research (CSIR), New Delhi) Tel: +91-1596-252267; Fax:+91-1596-242294 *E-mail: pcp@ceeri.ernet.in
Abstract: This paper describes a methodology towards the development of an optical tongue based on a low cost spectrophotometer coupled with chemometric techniques for the recognition of teas of different varieties. The application of spectroscopic measurements for identification of a particular tea variety can become a rapid analytical tool for the classification of teas. Six varieties of Indian tea were studied in the experiment. A probabilistic neural network (PNN) was used to construct the identification model based on Principal Component Analysis (PCA). The number of principal component factors (PCs) was optimized in constructing the model. The experimental results showed good performance of the PNN model. The optimal PNN model was achieved when three PCs were used, the identification rates being 100% in the training set and 91% in the prediction set. The overall results demonstrate that a low cost spectrophotometer with a suitable pattern recognition method can be successfully applied as a rapid method to identify Indian tea varieties.

Keywords: Indian tea; variety; recognition; pattern recognition

I. INTRODUCTION

Tea is among the most popular beverages worldwide, along with other beverages like coffee, cola, fruit juices and soft drinks. Tea is popular not only for its taste but also for its beneficial medicinal properties [1]. There are many different tea species or clones around the world. Just like grapes and wines, the taste of tea depends on where and how it is grown. Different varieties of tea have different taste, flavor and color too. In Asia, the main tea-producing countries are India, China, Sri Lanka and Japan; however, there are a few other countries like Kenya, Georgia, Bangladesh etc. Indian tea tastes quite different from that of China or Ceylon, and teas from Assam in Northern India differ in flavor from those from Nilgiri in the South. All teas are made from the same plant, Camellia sinensis. The different classifications depend on the way the tea leaves are processed. A number of tea leaf processing steps such as withering, CTC (Cutting, Tearing and Curling), rolling, oxidation/fermentation, drying etc. are involved once the leaves are plucked from the shoot. Each processing step has a significant role in the overall final product. There are hundreds of varieties of teas, but most fall into three main categories: black, green and oolong tea. Tea is categorized by the method used in processing the leaves. Tea quality depends mainly on the inherent properties of the leaf, i.e. leaf variety, morphology of the tea leaves, growing environment and manufacturing conditions. Quality is measured on the basis of various parameters such as appearance, aroma or flavour, and liquor properties (brightness, briskness, colour, etc.) [2-3]. The production of most branded teas involves blending several varieties to maintain consistency of taste. The differences between the tea varieties are recognized commercially and appreciated by the consumers; therefore, the identification of tea varieties is still a focus at present. Usually, the conventional identification of a tea variety is performed by sensory evaluation, where the human senses of flavor, taste and color are used as important tools for quality diagnosis [4]. Sensory evaluation of tea is one of the most difficult tasks in the overall assessment of tea attributes. It relies on information provided by selected and trained tasting panels, whose members may be influenced by physiological, psychological, and environmental factors. Therefore, the identification of tea varieties by sensory evaluation is inevitably subjective, inconsistent and expensive. Instrumental methods for the determination of odour, colour and taste, such as gas chromatography [5], gas chromatography/mass spectrometry (GC-MS) [6], high-performance liquid chromatography with mass spectroscopy (HPLC-MS) [7], colorimeters, inductively coupled plasma atomic emission spectrometry [8-9], and infrared spectroscopy (NIR) [10-13], are costly, require trained personnel, and are often of limited value and time-consuming. As a result, there has been an effort to investigate devices or instrumental methods for more objective and inexpensive analysis of tea that do not require specialist technicians. These devices would be less prone to drift and more consistent and accurate, but very expensive for the Indian industry. Compared to conventional analytical methods, NIR spectroscopy is a fast and accurate technique that can be used as a replacement for conventional sensory evaluation methods and time-consuming chemical methods. Budinova et al. reported an application of NIR for the assessment of the authenticity of tea [10], and similar work on green tea was reported by Luypaert et al. [11] and Chen et al. [12]. Luo et al. reported a study of ANN coupled with NIR for the determination of polyphenol and amylase in tea [13]. In the works mentioned above, near infrared spectroscopy techniques have often been used to quantitatively analyze the valid tea composition using chemometric methods such as Partial Least Squares (PLS) regression, Linear Discriminant Analysis (LDA) and Artificial Neural Networks (ANN). Recently, some researchers have applied an electronic tongue based on voltammetry [14] and an electronic nose [15] for discrimination of different teas.

As is known, different tea varieties have chemical characters which are due to different tea processes and different origins of the tea leaves; therefore, the flavor features of each tea variety are reasonably differentiable in the Vis-NIR region, the spectral differences providing sufficient qualitative spectral information for the identification. In this work, a low cost Vis-NIR spectrophotometer was used to identify six varieties of tea, coupled with a pattern recognition method. A Probabilistic Neural Network (PNN) was adopted to construct the identification model based on Principal Component Analysis (PCA), and the number of principal components was optimised in constructing the models.

II. EXPERIMENTAL

A. Tea Samples

All tea samples of six varieties were collected from the Indian market. All tea materials were purchased at the local super-market and they were all stored in air-tight containers. About 5 ± 0.1 g of air-dried tea leaves were weighed randomly as one sample.

B. Spectral Measurements

For each sample, three transmission spectra (190-1100 nm) were collected with a low cost spectrophotometer (Model SL-159, M/S ELICO Limited, Hyderabad, India), using RS-232 communication and the SpectraTreats software for Windows. The sample was placed in a cuvette with a 10 mm light path length, and the transmission from 190 nm to 1100 nm was measured at 1 nm intervals, with an average reading of 5 scans for each spectrum. The transmission spectra were first transformed into absorbance spectra; then the three absorbance spectra for each sample were averaged into one spectrum and transformed into ASCII format using the SpectraTreats software. The wavelength region of 190-1100 nm was employed for further treatment.

C. Data Processing

In our experiment, principal component analysis (PCA) was used as the feature extraction technique for the spectral response signals, and a probabilistic neural network (PNN) was used as the classifier with the extracted features as inputs to the network. The data obtained were evaluated using the following two commercial software packages: Unscrambler (version 9.5, 2007, CAMO ASA, Trondheim, Norway) and Matlab (version 7.1, Mathworks, USA) for programming the feature extraction and the pattern recognition techniques. The PCA was carried out on the sensor array data. PCA is a statistical method that can reduce dimensionality and give a preliminary view of the between-class similarity. It is also a projection method that allows an easy visualization of all the information contained in a data set. In addition, it helps to find out in what respect a sample is different from the others, and which variables contribute most to this difference. The PCA method intends to summarize almost all the variance contained in the response signal on a smaller number of axes (the PCs), with new coordinates called scores which are mutually orthogonal, obtained after data transformation. A few selected scores (which explain the maximum variance) can be used as inputs to any ANN for qualitative analysis. However, PCA can fail to preserve the nonlinearity of a data set, as it is a linear projection technique. If there are some nonlinear characteristics in the response signal, these will be considered as outliers or noise and will not be described by the first PCs as in a linear case. The data were preprocessed before analysis; the preprocessing included scaling and dividing by the standard deviation.
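As an illustration of this PCA-based feature extraction step (a generic sketch, not the authors' code; the random placeholder data, the class labels and the use of scikit-learn are assumptions), spectra stored as rows of a matrix can be standardized and reduced to a few score vectors as follows:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical data: one absorbance spectrum (190-1100 nm, 1 nm steps) per row
rng = np.random.default_rng(0)
X = rng.normal(size=(110, 911))        # 110 samples x 911 wavelengths (placeholder)
y = rng.integers(0, 6, size=110)       # 6 tea classes (placeholder labels for the classifier)

# Preprocessing: centre each wavelength and divide by its standard deviation
X_std = StandardScaler().fit_transform(X)

# Keep the first three principal component scores as features for the classifier
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("score matrix shape:", scores.shape)   # (110, 3), fed to the PNN together with y
```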

For classification purposes, a probabilistic neural network (PNN) [32] has been used. The PNN, first introduced by Donald F. Specht in 1990, is relatively new compared to other neural networks like back-propagation artificial neural networks (BP-ANN) and radial basis function networks (RBFN). It takes its basic concept from the Bayesian statistical classifier, which is the optimal statistical classifier, and maps the Bayes rule into a feed-forward multilayer neural network. PNN associates an unknown pattern with the class to which this pattern is most likely to belong. PNN constitutes a classification methodology that combines the computational power and flexibility of ANNs. The main advantages of PNN over other ANNs include its simplified architecture, which overcomes the difficulty of specifying an appropriate ANN model, as well as its easy implementation during training and testing. This type of network is composed of four layers. In the input layer there are three elements that correspond to the first three components of the PCA, which carry almost 100% of the information of the original variables. The next layer is the pattern layer. This layer has a number of neurons equal to the number of training pattern vectors, grouped by classes, where the distance between the test vector and a learning pattern is assessed. The purpose of this layer is to measure and weigh, with a radial function, the distance of the input layer vector to each training set element. The third layer, the summation layer, contains one neuron for each class. This layer adds the outputs of the pattern neurons belonging to the same class. Finally, the output layer is simply a thresholder that seeks the maximum value of the summation layer; the highest one is selected and takes the value one as a result, while the other outputs are set to 0. A schematic diagram of the PNN network is shown in Fig. 1. A validation method was applied to the network in order to check its performance. The method consisted of validating N distinct nets (in this case, N is the number of measurements) by using N - t training vectors and t testing vectors which were excluded from the training set for each group.

Fig. 1 A typical probabilistic neural network (PNN): input layer (X1, ..., Xp), pattern layer, summation layer and output layer (Class 1, ..., Class n)

III. RESULTS AND DISCUSSION

A. Vis-NIR spectral response of tea samples

Fig. 2 shows the average transmission spectra of the different varieties of tea samples. The trend of the spectra was quite similar but, on careful observation, some crossover between the different varieties exists. In order to make good use of the spectra, further pre-processing and treatment with the chemometric methods should be applied to discriminate the different tea samples.

Fig. 2 Transmission spectra of different teas (absorbance versus wavelength, 190-1100 nm)

B. Principal component analysis (PCA)

The spectral data obtained from the spectrophotometer for the six classes or brands of tea were analyzed using PCA. The PCA method can indicate the data trend in a visualizable dimension space based on the score plot of two components. The two-dimensional score plot of PCA for the complete data set of the six tea varieties is shown in Fig. 3. The first two principal components of the spectral data alone explain a total variance of 99.78% (PC1: 95.63% and PC2: 4.15%). The total explained variance of only two principal components shows the discrimination capability of the feature extraction technique.

Fig. 3. Score plot of different tea samples (PC1: 95.63% explained variance; PC2: 4.15% explained variance)

C. Classification of samples by PCA-PNNs

In this study, the PNN classification model was applied using the principal component vectors extracted by PCA. The PNN classifier is a three-layer, feed-forward, one-pass learning network that uses sums of Gaussian distributions to estimate the class probability density functions as learned from the training vector sets. In the recall mode, these functions are used to estimate the likelihood of an input feature vector being part of a learned category, or class. The learned patterns can also be combined, or weighted, with the a priori probability (also called the relative frequency) of each category to determine the most likely class for a given input vector. Before the input vectors enter the network they are normalized. In this paper, the PNN contains an input layer with as many neurons as the dimension of the feature vectors extracted by PCA; a pattern layer, which organizes the training set such that each input vector is represented by an individual processing neuron; and an output layer with as many processing neurons as there are classes to be recognized. In order to optimize the PNN model, PCA-PNN architectures with different numbers of inputs (1 PC to 10 PCs) were examined; finally, 3 PCs were selected as the inputs to the PNN model for all types of samples. The spread for the PNN was set at 0.099. In total, 38 samples were selected for training the network, and the PNN analysis shows an overall 100% classification success for the training sets for all the samples. A total of 72 samples were selected for testing the network, and the classification rate was 91.67% for the testing sets of the different tea varieties. The confusion matrix for all the tea classes is shown in Table 1.

Table 1. Confusion matrix for the tea classes (testing set); rows are the actual varieties, columns the predicted ones

Actual \ Predicted   Khushboo  Lipton  Marvel  Tatagold  Tajmahal  Tata
Khushboo (15)            13       0       2        0         0       0
Lipton (11)               0      11       0        0         0       0
Marvel (15)               2       0      13        0         0       0
Tatagold (11)             0       0       0       11         0       0
Tajmahal (12)             0       0       2        0        10       0
Tata (8)                  0       0       0        0         0       8

ACKNOWLEDGEMENTS
The project was supported by the Council of Scientific and Industrial Research (CSIR), New Delhi, through a supra-institutional project grant on the development of smart systems.

CONCLUSION
The overall results sufficiently demonstrate that spectral measurements in the VIS-NIR region coupled with pattern recognition have a high potential to identify tea varieties. Six varieties of tea were studied in this work. A Probabilistic Neural Network (PNN) was used as the pattern recognition method in the classification model, and the number of principal component factors (PCs) was optimized while building the models. The experimental results showed that the PNN model was able to predict well, with identification rates of 100% for the training set and 91.67% for the prediction set when the optimal number of principal component factors (PCs) equaled 3. It can be concluded that spectral measurements in the VIS-NIR region coupled with pattern recognition have a high potential to estimate the quality of other foods as well. A reliable overall characterization of a food product's quality may be obtained at low cost, and the approach may be applied to food quality control, process monitoring and rapid classification in industry. In comparison with subjective sensory assessment methods and time-consuming chemical methods, the results obtained by the developed spectroscopy model represent a considerable improvement in estimating food quality.

REFERENCES
[1] Weisburger JH, Tea and health: a historical perspective. Cancer Lett 114:315317 (1997). [2] Harbowy ME and Balentine DA, Tea chemistry. Crit Rev Plant Sci 16:415480 (1997). [3] Fernandez-Caceres PL, Martin MJ, Pablos F and Gonzalez AG, Differentiation of tea (Camellia sinensis) varieties and their geographical origin according to their metal content. JAgric Food Chem 49:47754779 (2001). [4] Borse BB, Jagan Mohan Rao L, Nagalakshmi S and Krishnamurthy N, Fingerprint of black teas from India: identification of the regio-specific characteristics. Food Chem 79:419424 (2002). [5] Togari N., Kobayashi A. and Aishima T.: Pattern recognition applied to gas chromatographic profiles of volatile component in three tea categories. Food Research International, 28: 495502 (1995). [6] Frank M, Ulmer H, Ruiz J, Visani P and Weimar U, Complementary analytical measurements based upon gas chromatographymass spectrometry, sensor system and human


sensory panel: a case study dealing with packaging materials. Anal Chim Acta 431:1129 (2001). [7] Huck C.W., Guggenbichler W. and Bonn G.K.: Analysis of caffeine, theobromine and theophylline in coffee by near infrared spectroscopy compared to high-performance liquid chromatography (HPLC) coupled to mass spectrometry. Journal of Pharmaceutical Biomedical Analysis, 538: 195203 (2005). [8] Herrador M.A. and Gonzalez A.G. : Pattern recognition procedures for differentiation of green, black and Oolong teas according to their metal content from inductively coupled plasma atomic emission spectrometry. Talanta, 53: 12491257, (2001). [9] Fernandez PL, Pablos F, Martin MJ and Gonzalez AG, Multielement analysis of tea beverages by inductively coupled plasma atomic emission spectrometry. Food Chem 76:483489 (2002). [10] Budinova G, Vlacil D, Mestek O and Volka K, Application of infrared spectroscopy to the assessment of authenticity of tea. Talanta 47:255260 (1998). [11] Luypaert J., Zhang M.H. and Massart D.L. : Feasibility study for the using near infrared spectroscopy in the qualitative and

quantitative analysis of green tea, Camellia sinensis (L). Analytica Chimica Acta, 487: 303-312 (2003).
[12] Chen Q.S., Zhao J.W., Zhang H.D. and Wang X.Y.: Feasibility study on qualitative and quantitative analysis in tea by near infrared spectroscopy with multivariate calibration. Analytica Chimica Acta, 572: 77-84 (2006).
[13] Luo Y.F., Guo Z.F., Zhu Z.Y., Wang C.P., Jiang H.Y. and Han B.Y.: Studies on ANN models of determination of tea polyphenol and amylose in tea by near-infrared spectroscopy. Spectroscopy and Spectral Analysis, 25: 1230-1233 (2005).
[14] Ivarsson P, Holmin S, Hojer N.E., Krantz-Rulcker C and Winquist F, Discrimination of tea by means of a voltammetric electronic tongue and different applied waveforms. Sens Actuat B 76: 449-454 (2001).
[15] Dutta R, Hines EL, Gardner JW, Kashwan KR and Bhuyan M, Tea quality prediction using a tin oxide-based electronic nose: an artificial intelligence approach. Sens Actuat B 94: 228-237 (2003).
[16] Gardiner WP, Statistical analysis methods for chemists, in Multivariate Analysis Methods in Chemistry, Royal Society of Chemistry, Cambridge, UK, pp. 293-307 (1997).


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

POSITION CONTROL OF SERVO MOTOR USING FUZZY LOGIC


Amol A. Kalage.
Electrical Engineering Department SND College of Engineering and Research Babhulgaon, Yeola. M.S. India amolkalage@rediffmail.com

Sarika V.Tade
Electrical Engineering Department SND College of Engineering and Research Babhulgaon, Yeola. M.S. India sarika.tade@rediffmail.com

Abstract - This paper presents a fuzzy logic controller (FLC) for servomotor used for position control. The application of FLCs to servo systems produces results superior to classical controllers and their robustness qualities are acknowledged. For instance, it is seen that, if there is a change in system parameters or load disturbances, the response of system due to proportional integral derivative (PID) controller is considerably affected and PID controller needs retuning. However, FLCs preserve the desired response over wide range of system parameters and load disturbances. In this work, the simulation and implementation of FLC for dc servomotor have been carried out under different working conditions. Keywords-PID controller, Fuzzy Logic Controller

I. INTRODUCTION

Servomotors are widely used in many automatic systems, including drive for printers, tape recorders, robotic manipulators, machine tools, rolling machines etc. Proportional integral derivative (PID) controllers usually control these motors. Such controllers will be effective enough if the speed and accuracy requirements of control systems are not critical under varying environments of the systems. Also, the usual way to optimize the control action is to tune the PID coefficients, but this cant cope up with varying control environments resulting from load disturbances, system non-linearties and change of plant parameters. The experience shows that FLC yields superior results to those obtained by conventional control algorithms [3], [4]. Also, FLC have, common feature of not requiring a detailed mathematical model and lead to much faster and accurate controllers for servo systems [5]. Fuzzy Logic Controllers are based on the fuzzy set and fuzzy logic theory originally advocated by Lotfi A. Zadeh [13]. Due to its heuristic nature, fuzzy logic control is much closer in spirit to human thinking and natural language than traditional logic systems. The FLC provides an algorithm, which can convert the linguistic control strategy, based on expert knowledge into an automatic control strategy. In particular, the methodology of FLC appears very useful when the plants are too complex for analysis by conventional quantitative techniques or when the available sources of information are interpreted qualitatively, inexactly or with uncertainty. The use of FLC significantly changes the approach to control of drives. A conventional controller adjusts the system

control parameters on the basis of a set of the differential equations, which describes the model of drive system. In a fuzzy controller, the adjustments are made by a fuzzy rule based expert system, which is a logical model of human behavior of the plant operator [7], [8]. An FLC usually gives better results than those of conventional controllers, in terms of the response time, settling time and particularly in robustness. The robustness of FLC is commendable feature in motor drive applications, where, the system parameters are widely varying during plant operation. Due to non-linear structure of the FLC, the main design problem lies in the determination of the consistent & complete rule set and the shape of membership functions. However, FLCs design is made easier by friendly and meaning tools of the fuzzy logic. In this paper, the concept of fuzzy logic has been used for position control, employing dc servomotor. Since the introduction of fuzzy set theory by Zadeh and the first invention of a fuzzy controller by Mamadani, fuzzy control has gained a wide acceptance, due to the closeness of inference logic to human thinking and has found applications in many power plants and power systems. It provides an effective means of converting the expert type control knowledge into an automatic control strategy. Two reasons most often signed for pursuing fuzzy control are the desire to incorporate linguistic control rules and the need to design a controller without developing a precise system model. The main advantages of the fuzzy control systems are as follows: 1. 2. 3. 4. 5. 6. It is not necessary to build a detailed mathematical model. Fuzzy controllers have a high strength and a high adjustment. They can operate with a more number of inputs. They can be adapted easily into nonlinear systems. The human knowledge can be easily applied. The process development time is relatively lower.

II. DESIGN OF FUZZY INFERENCE SYSTEM


The Fuzzy Logic Controller is designed to have two fuzzy state variables and one control variable for achieving position control of the DC servo motor. These two input variables are the error and the change in error. Figure 1 shows the basic block diagram of the fuzzy logic controller.


Figure 1: Block diagram of fuzzy logic controller

The error signal is the difference between the set point and the output position of the motor. The following seven linguistic terms are used for the fuzzy sets: negative big, negative medium, negative small, zero, positive small, positive medium and positive big, denoted NB, NM, NS, ZE, PS, PM and PB respectively. The fuzzy sets are defined by triangular membership functions. The same membership functions are also used for the change in error.

Table 2: System Parameters
Potentiometer sensitivity (Kp): 5.093 V/rad
Signal amplifier gain (Ka): 1
Back emf constant (K): 15x10-3 V/rad/s
Armature resistance (R): 2 ohm
Armature inductance (Lr): 1 mH
Motor torque constant (Kr): 15x10-3 N-m/A
Combined moment of inertia of motor shaft and load referred to the motor shaft side (Jm): 42.6x10-6 kg-m2
Viscous damping coefficient of the motor referred to the motor shaft side (Dm): 47.3x10-6 N-m/rad/s

III. OUTPUT OF FUZZY INFERENCE SYSTEM


For the membership functions of the output signal, the following seven linguistic terms are used for the fuzzy sets: negative big, negative medium, negative small, zero, positive small, positive medium and positive big, denoted NB, NM, NS, ZE, PS, PM and PB respectively. Table 1 shows the rule base for the FIS, indexed by the error e and the change in error, both taking the values NB, NM, NS, ZE, PS, PM and PB:

NB NM NM NB NM NB NB NM NM NM NS NS NS NS NS NS NS NS ZE NS NS NS ZE PS PS PS PS PS PS PS PS PS PS PS PM PM PB PM PM PM PB PB
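To illustrate how a rule base of this kind is evaluated, the sketch below implements a small Mamdani-style inference step over normalized universes, with seven symmetric triangular sets and centroid defuzzification. The few rules listed are placeholders only; they are not the full rule table of this paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

# Seven symmetric triangular sets (NB ... PB) on a normalized [-1, 1] universe
CENTERS = {"NB": -1.0, "NM": -2/3, "NS": -1/3, "ZE": 0.0,
           "PS": 1/3, "PM": 2/3, "PB": 1.0}

def membership(x, width=1/3):
    return {name: tri(x, c - width, c, c + width) for name, c in CENTERS.items()}

def flc_output(error, d_error, rules):
    """Min inference over (error, change-in-error) with centroid defuzzification."""
    mu_e, mu_de = membership(error), membership(d_error)
    num = den = 0.0
    for (e_term, de_term), out_term in rules.items():
        w = min(mu_e[e_term], mu_de[de_term])
        num += w * CENTERS[out_term]
        den += w
    return num / den if den else 0.0

# Illustrative subset of rules only; the controller uses the full 7x7 base of Table 1
RULES = {("NB", "NB"): "NB", ("ZE", "ZE"): "ZE",
         ("PS", "NS"): "ZE", ("PB", "PB"): "PB"}
print(flc_output(0.4, -0.1, RULES))
```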

V. REAL TIME CONTROL

The real-time execution is performed on a PCL-207 data acquisition card, a personal computer and the Turbo C programming tools. Depending on the error signal of the motor shaft, the potentiometer gives a voltage that is proportional to the position of the motor. This is then scaled by the interfacing electronic circuitry in order to generate the error rate. The control signal is computed by the fuzzy algorithm, written as a C program. Through the ADC/DAC card, the discrete output signal is converted into an analog voltage, which is signal conditioned and given to the input of the dc servomotor driver circuit. The hardware of the servo mechanism is shown in Fig. 4 and the detailed hardware implementation is shown in Fig. 5.

VI. CONCLUSION

It is seen that the fuzzy controller preserves the desired response even in the presence of load disturbances and varying control environments. This ensures the controller's robustness. The choice of rules and membership functions has a considerable effect on fuzzy controller performance, e.g., rise time, settling time and overshoot (Fig. 6). It is observed that, using the superposition of different consequents at a particular region of the domain for the same combination of antecedents, the performance of the FLC is improved considerably in terms of settling time and

IV. MOTOR PARAMETERS


In order to verify the effectiveness of the used controller, the following numerical simulations were performed with Matlab Simulink (fig.3).
theta_m(s) / Ea(s) = 31.0688 / (s^3 + 29.5009 s^2 + 61.57014 s + 31.0688)        (1)

The Transfer function of the servo system is defined in equation 1 and the system parameters are listed in Table 2.
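As a quick check of this model outside Simulink, the sketch below computes the open-loop step response of equation (1) with SciPy; the simulation horizon and plotting details are arbitrary choices for illustration.

```python
# Step response of theta_m(s)/Ea(s) = 31.0688 / (s^3 + 29.5009 s^2 + 61.57014 s + 31.0688)
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

servo = signal.TransferFunction([31.0688], [1.0, 29.5009, 61.57014, 31.0688])
t, theta = signal.step(servo, T=np.linspace(0.0, 10.0, 1000))

plt.plot(t, theta)
plt.xlabel("time (s)")
plt.ylabel("shaft position (rad)")
plt.title("Open-loop step response of equation (1)")
plt.show()
```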

V. OVERVIEW OF SERVO SYSTEM


Fig. 2 shows the overview of the servo system, where the set point is given through a personal computer. The computer program (FIS) output is given to the DAC card, which converts the digital signal to an analog one. This analog signal is given to the driver circuit of the servo amplifier, and from the driver circuit the output is given to the servo amplifier, which amplifies the current to the motor. A potentiometer is connected to the shaft of the motor and a voltage of 5 V is given to it, which is distributed over 360 degrees. Thus the motor shaft position is converted into a voltage signal and given back to the PC through the ADC card as the feedback signal. The error is calculated from the set point and the feedback signal, and the difference between the errors of two consecutive iterations gives the change in error. These are the two inputs of the FIS, which computes the output signal to achieve the desired position.
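A hypothetical outline of this loop is given below: the shaft position is read through the ADC, the error and change in error are formed, the FLC output is computed and written to the DAC. The functions read_adc, write_dac and flc_output stand in for the PCL-207 card access and the C implementation actually used.

```python
def control_loop(set_point, read_adc, write_dac, flc_output, steps=1000):
    """Discrete position-control loop as described above (names are illustrative)."""
    prev_error = 0.0
    for _ in range(steps):
        position = read_adc()                  # 0-5 V mapped over 360 degrees of shaft travel
        error = set_point - position
        d_error = error - prev_error           # difference of two consecutive errors
        write_dac(flc_output(error, d_error))  # control signal to the servo driver circuit
        prev_error = error
```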


overshoot. The combination of two sets of rules with the same antecedents but different consequents reduces the settling time and overshoot. Further, it is possible to find other combinations of antecedents and consequents that reduce the rise time to give the best performance, i.e., to rotate the shaft of the motor to a set point with a faster response and without overshoot. The experimental results are found to be consistent with the simulation results.

Fig 2: Block diagram of Servo System

Fig 3: Simulink diagram of Servo system using PID, Fuzzy PI and Fuzzy PID controller.


Fig. 4: Hardware of servo system

Fig.5:Real time implementation of FLC.

Intelligent Systems Design and Applications (ISDA'06), 2006. [3] Dongmei Yu, Qingding Guo and Qing Hu, Position Control of Linear Servo System Using Intelligent Feedback Controller, Sixth IEEE International Conference on Intelligent Systems Design and Applications (ISDA'06), 2006. [4] A.B.Patil, A.V.Salunkhe, Temperature controller Using Takagi-Sugeno model fuzzy logic Asian Conference on Intelligent Systems and Networks, 2006. [5] Mohd Fuaad Rahamat&Mariam md Ghazaly,Performance comparison between PID And Fuzzy Logic Controller in position Control system of DC Servomotor. Jurnal Teknologi Malayshia, 45 D), 2006. [6] K. S. Tang, Kim Fung Man, Guanrong Chen, and Sam Kwong, An Optimal Fuzzy PID Controller IEEE Transactions on Industrial Electronics, Vol. 48, No.4 , August 2001 [7] Maki K. Habib, Designing of fuzzy logic controllers for DC Servomotor supported by Fuzzy Logic control development Environment IECON01:The 27th Annual conference of the IEEE Industrial Electronics Society.2001. [8] C. C. Lee, Fuzzy Logic in Control Systems: Fuzzy Logic Controller, Part II IEEE Transactions on Systems, Man. and Cybernetics Vol. 20, No. 2. March/April 2000. [9] George K. I. Mann, Bao-Gang Hu, Raymond G. Analysis of Direct Action Fuzzy PID Controller Structures, IEEE Transactions on Systems, Man, and CyberneticsPart B: Cybernetics, Vol. 29, No. 3, June 1999. [10] Baogang Hu, George K. I. Mann, and Raymond G. Gosine, New Methodology for Analytical and Optimal Designof Fuzzy PID Controllers IEEE transactions on fuzzy system vol 7, Oct.BER 1999 . [11] M. K. Mishra A. G. Kothari Development Of A Fuzzy Logic Controller For Servo Systems IEEE, 2080 16, INDIA 0-7803-4886-9/98/ 1998 [12] Zuo Yusheng, Compared with PID, Fuzzy, PID Fuzzy Controller Hohai University,Nanking in Jiangsu Province. [13] M. Maeda and S. Murakami, Fuzzy gain scheduling of PID controllers, Fuzzy Sets Syst., vol. 51, pp. 2940, 1992.

Fig.6:Simulation result

VII. REFERENCES
I.J.Nagrath and M.Gopal, ControlSystem Engineering. Fifth Edition [2] Xiaodiao Huang and Liting Shi, Simulation on a Fuzzy-PID Position Controller of the CNC Servo System, Sixth IEEE International Conference on [1]


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Fuzzy Logic Tuned PID Controller for Stewart Platform Manipulator


Dereje Shiferaw Adama University Department of Electrical Engineering Adama, Ethiopia derejdec@iitr.ernet.in,dnderejesh@gmail.com
Abstract The Stewart platform manipulator is a 6DOF manipulator having higher stiffness and precision compared to serial manipulator. The control of this nonlinear MIMO system has been a hot research issue in the past few decades and various adaptive and robust controllers have been proposed. In this paper, we present a model less adaptive controller using fuzzy logic. The gains of the joint space PID controllers for each leg are tuned using fuzzy logic and hence the system can effectively handle the problem of variation of platform mass due to loading conditions. The input to the fuzzy logic is a weighted sum of errors in the leg lengths and hence the coupling between the legs, which is drawback of independent leg PID control, is solved. Simulation results show that the controller has a good adaptive performance and achieves a better tracking accuracy than simple PID controller.

Professor R. Mitra Indian Institute of Technology Roorkee Department of Electronics and Computer Engineering Roorkee, India rmtrafec@iitr.ernet.in

Keywords- Adaptive Controller, Fuzzy Logic, PID controller, Stewart platform manipulator )

I. INTRODUCTION

[3] the performance of a conventional PID controller tuned by Ziegler Nichols method, a fuzzy logic PID like controller and a PID controller whose gains are tuned by fuzzy logic have been compared and it was verified that tuning the gains of PID controller using fuzzy logic results in a better controller when the controlled system has nonlinearity. In [13] the use of fuzzy logic for tuning PID controller for non linear systems with H infinity tracking performance is well reported and the paper has shown that using fuzzy logic to tune PID controller gains will give a better tracking accuracy for nonlinear systems. In most of the above fuzzy logic based tuning algorithms, the fuzzy rule base is obtained from step response of the controlled system with the objective of minimizing settling time. In our implementation, the fuzzy logic is used to tune the gain of the PID online and moreover a weighted sum of errors is taken as input of the fuzzy logic to consider the coupling between the legs. Simulation results have shown that the controller drives the Stewart platform in a desired trajectory with a very good precision. The paper is organized as follows: section two discuss the kinematic modeling of the robot, section three gives a brief overview of fuzzy logic and the design of the controller. Section four is about simulation results and then conclusion follows.

The Stewart platform manipulator is a 6DOF parallel manipulator having a fixed base and a moveable platform connected together by six extensible legs, see Fig.1. It has advantages of high precision positioning capacity, high structural rigidity, and strong carrying capacity. Potential areas of application include flight and other motion simulators, light weight and high precision machining, data driven manufacturing, dexterous surgical robots and active vibration control systems for large space structures [4][14][15]. The control of this highly coupled nonlinear dynamic system has been a hot research issue in the last few decades. The various types of controllers designed for this platform can be broadly classified as model less and model based controllers [6][7]. The model based controllers such as computed torque controllers [1], sliding mode controllers [10] and passivity based controllers [15] have the potential of giving higher precision in trajectory tracking and global asymptotic stability can be achieved if the dynamic model parameters are identified exactly. But the complexity of the control algorithms has delayed their use in practical application and PID controllers are still in use. PID controllers, which are mostly implemented in joint space in SISO, do not guarantee high performance because of the highly nonlinear dynamics of the system and variation of the payload [1]. Therefore we propose here to use fuzzy logic to tune the gains of PID controller to improve its performance. Fuzzy logic controllers have been used for various applications and [3][8] [13][16] discus the use of fuzzy logic for tuning the gains of a PID controller. In

. II. GEOMETRIC AND KINEMATIC MODELING

For geometric and kinematic modeling, the method proposed by [12] is used. The centers of the universal and spherical joints are denoted by Bi (i = 1, 2, ..., 6) and Pi (i = 1, 2, ..., 6) respectively. Reference frames Fb and Fp are attached to the base and the platform as shown in Fig. 1. The position vector of the center of universal joint Bi in frame Fb is bi, and the position vector of the center of spherical joint Pi in frame Fp is pi. Let r = [rx, ry, rz] be the position of the origin Op with respect to Ob, and let R denote the orientation of frame Fp with respect to Fb. Thus the Cartesian-space position and orientation of the moveable platform, or end effector, is specified by r = [rx, ry, rz] together with the three rotation angles that constitute the transformation matrix R. The length of leg i is then the magnitude of the vector BiPi, which is given by

q_i = ||BiPi|| = ||R pi + r - bi||        (1)

This is the inverse geometric formula that gives the length of each leg for a given desired position and orientation of the end effector. The direct geometric model, which gives the position r = [rx, ry, rz] and the orientation angles for a given measured value of qi, i = 1, 2, ..., 6, is nonlinear and is solved using numerical methods [5].
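A small Python sketch of this inverse geometric model follows. The joint layout (six joints evenly spaced on circles of radius 0.8 m and 0.5 m, matching the radii listed in Table I) and the Euler-angle convention used to build R are assumptions made for the example.

```python
import numpy as np

def rotation_zyx(a, b, c):
    """Rotation matrix from three rotation angles (one common Z-Y-X convention;
    the specific sequence used in the paper is an assumption here)."""
    cz, sz, cy, sy, cx, sx = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(c), np.sin(c)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def leg_lengths(r, R, b, p):
    """Equation (1): q_i = ||R p_i + r - b_i|| for the six legs.
    b and p are 6x3 arrays of joint centres in the base and platform frames."""
    return np.linalg.norm((R @ p.T).T + r - b, axis=1)

# Hypothetical 6-6 geometry: joints evenly spaced on the base/platform circles
angles = np.deg2rad(np.arange(6) * 60.0)
b = 0.8 * np.c_[np.cos(angles), np.sin(angles), np.zeros(6)]
p = 0.5 * np.c_[np.cos(angles), np.sin(angles), np.zeros(6)]
q = leg_lengths(np.array([0.0, 0.0, 1.0]), rotation_zyx(0.0, 0.0, 0.0), b, p)
print(q)
```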


The inverse kinematic model gives the velocity of the active joints, q_dot, for a given end-effector linear and angular velocity:

q_dot = bJp [bvp ; bwp]        (2)

where bJp is the Jacobian matrix of the platform and is given by

bJp = [ a3i^T   ((R pi) x a3i)^T ],   i = 1, 2, ..., 6        (3)

and

a3i = BiPi / ||BiPi||        (4)

Figure 1. Generalized Stewart Platform

III. DESIGN OF CONTROLLER

A PID controller in its standard form is given as

u(t) = Kp [ e(t) + Td de(t)/dt + (1/Ti) integral_0^t e(t) dt ]        (5)

Multiplying the gain term outside the bracket by the derivative and integral time constants, the equation can also be written as

u(t) = Kp e(t) + KD de(t)/dt + KI integral_0^t e(t) dt        (6)

Hence the parameters to be tuned are the three gains Kp, KD and KI. There are different tuning algorithms designed for linear systems that aim at achieving control specifications such as set-point following, disturbance rejection, robustness to model uncertainty and rejection of measurement noise, and the traditional tuning algorithms try to achieve one of these specifications. In the current application the system is required to follow a desired trajectory, which means the control objective is not regulation but tracking, and moreover the system is required to operate at different payload conditions. Therefore the control system should be robust to model uncertainty, and for that we use fuzzy logic to tune the three gains. Fuzzy logic provides a formal methodology for representing, manipulating and implementing human heuristic knowledge about how to control a system. The heuristic knowledge is embedded into the controller in the form of IF-THEN rules; hence the fuzzy PID controller has IF-THEN rules and an inference engine, which uses the rules to produce an output for a given input. The rules are of the following type: IF E is PS and ER is NB THEN gamma is PS, where E and ER are the inputs to the fuzzy logic system, gamma is the output, and PS and NB are linguistic values of the inputs and output. The error E and the rate of error ER are the inputs to the controller, and the output is the change in the control gains. A decision is made about the current output value using an inference mechanism and the rule base containing the IF-THEN rules, and a final output interface converts the decision to a numerical value. The fuzzy logic tuned PID control system discussed in this paper is shown in Fig. 2. For the fuzzy logic gain tuning, the error E and the rate of error ER are each divided into five linguistic values: negative big (NB), negative small (NS), zero (Z), positive big (PB) and positive small (PS). The membership functions used are symmetric triangular functions and are shown in Fig. 3. The output of the fuzzy controller is used to change the gains of the PID controller according to the following algorithm:

Kp(k+1) = Kp(k) + gamma(k) e(k) Kp(0)        (7)
KD(k+1) = KD(k) + gamma(k) e_dot(k) KD(0)        (8)
KI(k+1) = KI(k) + gamma(k) e_sum(k) KI(0)        (9)

The starting values for the three gains are taken from a separate PID controller, tuned manually, which gives acceptable output.
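A minimal sketch of this adaptation step is shown below, with the fuzzy inference reduced to a placeholder function. The comments follow the reconstructed form of (7)-(9) given above, and all names are illustrative rather than the paper's implementation.

```python
def fuzzy_gamma(error, d_error):
    """Placeholder for the fuzzy inference of the tuning signal in [-1, 1];
    the paper uses a rule base over E and ER with triangular membership functions."""
    return max(-1.0, min(1.0, 0.5 * error + 0.2 * d_error))

def update_gains(gains, gains0, error, d_error, error_sum):
    """One step of the gain-adaptation law sketched in (7)-(9)."""
    Kp, Kd, Ki = gains
    Kp0, Kd0, Ki0 = gains0
    g = fuzzy_gamma(error, d_error)
    Kp = Kp + g * error * Kp0        # (7)
    Kd = Kd + g * d_error * Kd0      # (8)
    Ki = Ki + g * error_sum * Ki0    # (9), with the accumulated error
    return Kp, Kd, Ki
```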

IV. SIMULATION RESULTS

For the simulation study of the performance of the controller, a typical 6-6 geometry Stewart platform with the geometric parameters given in Table I is implemented using the SimMechanics toolbox of MATLAB. The fuzzy logic tuner is also implemented using the fuzzy logic toolbox of MATLAB, and three m-file S-functions were written to implement the discrete difference equations given in (7)-(9). The trajectory taken to test the performance of the controller is heave or


uplift motion in the z direction for 2.5 s followed by circular motion in the x-y plane. To make the trajectory twice differentiable, the trajectories are joined with a parabolic blend. The desired trajectories and the actual trajectories for the simple PID and the fuzzy tuned PID are given in Figs. 6-8. From the simulation results it can be seen that the controller can track the desired trajectory with a better degree of accuracy than the simple PID controller. The gains of the controller used for the simulation are KP = 8.5x10^7, KD = 9x10^5 and KI = 100, and the tracking error for these values is given in Figs. 9 and 10.

Figure 2. Fuzzy tuned joint space PID control of a single leg
Figure 4. Graph of membership functions for error rate, ER

Figure 5. Membership functions for output variable
Figure 3. Graph of membership functions for error E

TABLE I. Geometric Specifications of Stewart Platform
Base radius: 0.8 m
Platform radius: 0.5 m

CONCLUSION
A conclusion section is not required. Although a conclusion may review the main points of the paper, do not replicate the abstract as the conclusion. A conclusion might elaborate on the importance of the work or suggest applications and extensions
[1]

REFERENCES
A. Ghobakhloo, M. Eghtesad and M. Azadi, Position Control of Stewart-Gough Platform Using Inverse Dynamics with Full Dynamics, IEEE Conference 9th Int. Workshop on Advanced Motion Control (AMC06), PP.50-55, Istanbul-Turkey 2006. A.Visioli,, Tuning of PID Controllers Using Fuzzy Logic, IEE Proc.-Control Theory Appl., Vol. 148, No. I , PP.1-8, January 2001

[2]


[4]

B. Dasgupta and T. S. Mruthyunthaya, The Stewart Platform Manipulator: a review, Mechanism and Machine Theory, vol. 35, No. 1 PP. 15-40, 2000 Charles C. Nguyen, Zhen-Lie Zhou and Sami S. Antraz, Efficient Computation of Forward Kinematics and Jacobean Matrix of A Stewart Platform Based Manipulator, Proceedings of the 1997 IEEE Int. Conf. on Robotics and Automation vol. 1, PP.869-874. Dongya Zhao, Shaoyuan Li, and Feng Gao, Fully Adaptive Feedforward Feedback Synchronized Tracking Control for Stewart Platform Systems, International Journal of Control, Automation, and Systems, vol. 6, no. 5, PP. 689-701, October 2008 Fathi H. Ghorbel, Olivier Chtelat, Ruvinda Gunawardana, andRoland Longchamp, Modeling And Set Point Control Of Closed Chain Mechanisms, Theory And Experiment, IEEE Transactions On Control Systems Technology, Vol. 8, No. 5, PP. 801-815, September 2000 Hyeong-Pyo Hong, Suk-Joon Park, Sang-Joon Han, Kyeong-Young Cho, Young-Chul Lim, Jong-Kun Park, Tae-Gon Kim, A Design of Auto-Tuning PID Controller Using Fuzzy Logic, Lee Hyung San and Myug-Chul Han, The Estimation for Forward Kinematics of Stewart Platform Using the Neural Network, Proc. Of the 1999 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 501-506.

[5]

[6]

[7]

Figure 6 Three dimensional view of the desired trajectory

[8]

[9]

[10] Nag-In Kim and Chong-Won Lee, High Speed Tracking Control of Stewart Platform Manipulator Via Enhanced Sliding Model Controller, Proceeding of the 1998 International Conference on Robotics and Automation, PP. 2716-2721, Leuven Belgium, May 1998 [11] S. Iqbal, A.I. Bhatti, Dynamic Analysis and Robust Control Design for Stewart Platform With Moving Payloads, Proceedings of the 17th World Congress The International Federation of Automatic Control,PP. 5324-5329, Seoul, Korea, July 6-11, 2008 [12] Wisama Khalil & Ouarda Ibrahim, General Solution for the Dynamic Modeling of Parallel Robots, Journal of Intelligent Robot Systems, Vol. 49, PP.19-37,2007 [13] Wen-Shyong Yu, Adaptive Fuzzy PID Control for Nonlinear Systems with H Tracking Performance, 2006 IEEE International Conference on Fuzzy Systems, Sheraton Vancouver, Canada, PP. 1010-1015, July 16-21, 2006 [14] Y. X. Su and B. Y. Duan, The Application of t he Stewart Platform in Large Spherical Radio Telescopes, Journal of Robotic Systems vol.17 No.7, 2000, PP. 375-383. [15] Yung Ting, Yu-Shin Chen and Shih-Ming Wang, Task Space Control Algorithm for Stewart Platform, Proc. Of the 38 th Conf. on Decision and Control, PP. 3857-3862, Dec. 1999, Phoenix, ArizonaUSA

Figure 7 Task Space tracking errors for x, y and z directions

Figure 8 Joint space tracking performance


[3] A. Omran A. Omran, G. El-Bayiumi, M. Bayoumi, and A. Kassem,Genetic Algorithm Based Optimal Control for a 6DOF Non Redundant Stewart Platform, International Journal of Mechanical, Industrial and Aerospace Engineering, Vol.2, No.2, PP. 73-79, 2008

[16] Zhen-Yu Zhao, Masayoshi Tomizuka, and Satoru Isaka, Fuzzy Gain Scheduling of PID Controllers, IEEE Transactions On Systems, Man, And Cybernetics. Vol. 23, No. 5. September/october 1993


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Losses Reduction In Indirect Vector-Controlled IM Drive Using FLC


C. Srisailam
Electrical Engineering Department Jabalpur Engineering College, Jabalpur, Madhya Pradesh Email:chikondra007@gmail.com
Abstract- This paper describes the use of fuzzy logic controller for efficiency optimization control of a drive while keeping good dynamic response. At steady-state light-load condition, the fuzzy controller adaptively adjusts the excitation current with respect to the torque current to give the minimum total copper and iron loss. The measured input power such that, for a given load torque and speed, the drive settles down to the minimum input power, i.e., operates at maximum efficiency. The low-frequency pulsating torque due to decrementation of flux is compensated in a feed forward manner. If the load torque or speed commands changes, the efficiency search algorithm is abandoned and the rated flux is established to get the best dynamic response. The drive system with the proposed efficiency optimization controller has been simulated with lossy models of converter and machine, and its performance has been thoroughly investigated. Key words: Fuzzy logic controller (FLC), Fuzzy logic motor control (FLMC), Adjustable speed drives (ASDs).

Dr. Anurag Trivedi


Electrical Engineering Department Jabalpur Engineering College Jabalpur, Madhya Pradesh Email:dr.anuragtrivedi@yahoo.co.in on-line efficiency optimization control [ 1]-[3] on the basis of search, where the flux is decremented in steps until the measured input power settles down to the lowest value, is very attractive. The control does not require the knowledge of machine parameters, is completely insensitive to parameter changes, and the algorithm is applicable universally to any arbitrary machine. II. CONTROL SYSTEM DESCRIFTION Fig. 1, show the block diagram of an indirect vector controlled induction motor drive incorporating the proposed efficiency optimization controller. The feedback speed control loop generates the active or torque current command ( ) as indicated. The vector rotator receives the torque and excitation current commands and

respectively, from the two positions of a switch: the transient position (l), where the excitation current is established to the rated value ( ) and the speed loop feeds the torque current and the steady state position, where the excitation and torque currents are generated by the fuzzy efficiency controller and feed forward torque compensator which will be explained later.

I.INTRODUCTION Efficiency improvement in variable frequency drives has been getting a lot of attention in recent years. Higher efficiency is important not only from the viewpoints of energy saving and cooling system operation, but also from the broad perspective of environmental pollution control. In fact, as the use of variable speed drives continues to increase in areas traditionally dominated by constant speed drives, the financial and environmental payoffs reach new importance. A drive system normally operating at rated flux gives the best transient response. However, at light loads, rated flux operation causes excessive core loss, thus impairing the efficiency of the drive. Since drives operate at light load most of the time, optimum efficiency can be obtained by programming the flux. A number of methods for efficiency improvement through flux control have been proposed in the literature. They can be classified into three basic types. The simple precompiled flux program as a function of torque is widely used for light load efficiency improvement. The second approach consists in the real time computation of losses and corresponding selection of flux level that results in minimum losses. As the loss computation is based on a machine model, parameter variations caused by temperature and saturation effects tend to yield suboptimal efficiency operation. The

Fig.1. Efficiency optimization controller. Indirect vector controlled induction motor drive incorporating the FLC.


The fuzzy controller becomes effective at steady-state condition, i.e., when the speed loop error , approaches zero. Note that the dc link power , instead of input power, has been considered for the fuzzy controller since both follow symmetrical profiles. The principle of efficiency optimization control with rotor flux programming at a steady-state torque and speed condition is explained in Fig. 2. The rotor flux is decreased by reducing the magnetizing current, which ultimately results in a corresponding increase in the torque current (normally by action of the speed controller), such that the developed torque remains constant. As the flux is decreased, the iron loss decreases with the attendant increase of copper loss. However, the total system (converter and machine) loss decreases, resulting in a decrease of dc link power. The search is continued until the system settles down at the minimum input power point A, as indicated. Any excursion beyond the point A will force the controller to return to the minimum power point. A. Efficiency Optimization Control Fig.2explains the fuzzy efficiency controller operation. The input dc power is sampled and compared with the previous value to determine the increment . In addition, the last excitation current decrement (L is reviewed. On these bases, the decrement step of ) is

Fig.2. Principle of efficiency optimization control with rotor flux programming

these bases, the decrement step of

is generated from

fuzzy rules through fuzzy inference and defuzzification [4], as indicated. The adjustable gains and , generated by scaling factors computation block, convert the input variable and control variable, respectively, to per unit values so that a single fuzzy rule base can be used for any torque and speed condition. The input gain as a function of machine speed coefficients The output gain can be given as where the

generated from fuzzy rules through fuzzy inference and defuzzification [4], as indicated. The adjustable gains and , generated by scaling factors computation block, convert the input variable and control variable, respectively, to per unit values so that a single fuzzy rule base can be used for any torque and speed condition. The input gain as a function of machine speed can be given as (2) Where where the coefficients simulation studies. The output gain (3) was derived from is computed from

was derived from simulation studies. is computed from the machine speed ). The

and an approximate estimate of machine torque ( appropriate coefficients and

were derived from

simulation studies. A few words on the importance of the input and output gains are appropriate here. In the absence of input and output gains, the efficiency optimization controller would react equally to a specific value of , resulting from a past action , irrespective of operating speed. Since the optimal efficiency point A (see Fig. 2) is speed dependant, the control action could easily be too conservative, resulting in slow convergence, or excessive, yielding an overshoot in the search process with possible adverse impact on system stability. As both input and output gains are function of speed, this problem does not arise. Equation (2) also incorporates that a priori knowledge that the optimum value of , is a function of torque as well as machine speed. In this way, for different speed and torque conditions, the same (pu) will result in different , ensuring a fast convergence. One additional advantage of per unit basis operation is that the same fuzzy controller can be applied to any arbitrary

the machine speed and an approximate estimate of machine torque ( ). Efficiency Optimization Control Fig.3 explains the fuzzy efficiency controller operation. The input dc power is sampled and compared with the previous value to determine the increment . In addition, the last excitation current decrement (L ) is reviewed. On


machine, by simply changing the coefficients of input and output gains. The membership functions for the fuzzy efficiency controller are shown in Fig. 4. Due to the use of input and output gains, the universe of discourse for all variables are normalized in the [-1, 1] interval. It was verified that, while the control variable , required seven fuzzy sets to provide good control sensitivity, the past control action L (i.e., (k - 1)) needed only two fuzzy sets, since the main information conveyed by them is the sign. The small overlap of the positive (P) and negative (N) membership functions is required to ensure proper operation of the height defuzzification method [4], i.e., to prevent indeterminate result in case L approaches zero. An example of a fuzzy rule can be given as IF the power increment ( ) is negative medium (NM) and the last (L ) is negative (N), THEN the new ) is negative medium (NM). excitation increment (

B. Feed forward Pulsating Torque Compensation

Fig. 5. Feed forward pulsating torque compensator block diagram

As the excitation current is decremented with adaptive step size by the fuzzy controller, the rotor flux will decrease exponentially [5], which is given by: = (4)

The basic idea is that if the last control action indicated a decrease of dc link power, proceed searching in the same direction, and the control magnitude should be somewhat proportional to the measured dc link power change. In case the last control action resulted in an increase of ( > 0), the search direction is reversed, and the , step size is reduced to attenuate oscillations in the search process.
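A crisp sketch of this search rule is given below. In the drive itself the step is produced by fuzzy inference with per-unit scaling of the input and output, so the simple proportional and attenuation logic here, and the variable names, are only illustrative.

```python
def next_excitation_step(delta_p_dc, last_step, gain=1.0, shrink=0.5):
    """delta_p_dc: measured change in dc-link power since the last step.
    last_step: previous decrement applied to the excitation current command."""
    if delta_p_dc < 0:
        # Power went down: keep searching in the same direction,
        # with a magnitude roughly proportional to the measured change.
        direction = 1.0 if last_step >= 0 else -1.0
        return direction * gain * abs(delta_p_dc)
    # Power went up: reverse the search direction and attenuate the step
    return -shrink * last_step
```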

The decrease of flux causes loss of torque, which normally is compensated slowly by the speed control loop. Such pulsating torque at low frequency is very undesirable because it causes speed ripple and may create mechanical resonance. To prevent these problems, a feed forward pulsating torque compensator has been proposed. Under correct field orientation control, the developed torque is given by For an invariant torque, the torque current , should

be controlled to vary inversely with the rotor flux. This can be accomplished by adding a compensating signal (t) to the original to counteract the decrease in flux (t), where t [O,T] and T is the sampling (0) and

period for efficiency optimization control. Let (0) be the initial values for for the k-th step change of , and

, respectively,

. For a perfect compensation,

the developed torque must remain constant, and the following equality holds: Solving for yields (6) (7) Eq (7) is adapted to produce
Fig.4. Membership functions for efficiency controller. (a) Change of dc link power (pu)). (b) Last change in excitation current (L (pu)). (c) Excitation current control increment ( (pu)).

Compensated torque current step is computed by (8)
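The compensation expressions (5)-(8) did not survive extraction here, so the sketch below only illustrates the stated principle: with the developed torque proportional to the product of rotor flux and torque current, keeping the torque constant while the flux decays means scaling the torque current by the flux ratio. The function and variable names are assumptions.

```python
def compensated_torque_current(iqs_0, psi_r_0, psi_r_t):
    """Feed-forward idea described above: torque ~ psi_r * iqs, so for an
    invariant torque the torque current is scaled by psi_r(0) / psi_r(t).
    This only illustrates the principle; it is not the paper's equations (5)-(8)."""
    return iqs_0 * psi_r_0 / psi_r_t
```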


III. RESULTS AND DISCUSSION

Fig.6 Input and output power when FLC is not used

controls which optimize the adjustable speed drives on the basis of energy efficiency. It was observed that efficiency has been optimized by minimizing the input power and the output power is maintained constant with the help of Fuzzy controller. For a given power output of the induction motor; its input power is supplied as input to the fuzzy controller. Constant power output and a reduced input the graphs obtained as outputs from the MATLAB simulink. ACKNOWLEDGMENT We take this great opportunity to convey our sincere thanks to CERA 2009 organising committee for recognising our efforts by selecting our paper for oral presentation. We also extend our sincere gratitude to our head of department who gave inspiring suggestions while preparing this paper. REFERENCES
[1]. P. Famouri and J. J. Cathy, Loss minimization control of an induction motor drive, IEEE Trans. Ind. Applicant., vol. 27, no. 1, pp. 33-37,Jan./Feb. 1991. [2]. D. S. Kirschen, D. W. Novotny, and T. A. Lipo, On -line efficiency optimizations of a variable frequency induction motor drive, 1984 IEEEhnd. Applicant. Soc. Annul. Meeting Con$ Rec., pp. 488492. Optimal efficiency control of an induction motor drive, IEEE Trans. Energy Conversion, vol. 2, no. 1, pp. 70-76, March 1987. [3]. G. C. D. Sousa and B. K. Bose, A fuzzy set theory based control of a phase controlled converter dc machine drive, IEE Wind. Applicant. Soc. Annul. Meeting Con$ Rec., pp. 854-861, 1991. [4]. B. K. Bose, Power Electronics and AC Drives. Englewood Cliffs, NJ: Prentice Hall, 1986. [5]. B. K. Bose, Microcomputer-based efficiency optimization control for an induction motor. [6]. J. Cleland, W. Turner, P. Wang, T. Espy, J. Chappell, R. Spiegel, and B. K. Bose, Fuzzy logic control of ac induction motors, IEEE Int. Con$ Rec. on Fuzzy Syst. (FUZ-IEEE), pp. 843-850, March 1992. [7]. G. C. D. Sousa, Application of fuzzy logic for performance enhancement of drives, Ph.D. dissertation, Dec. 1993. Gilbert. [8]. RAMDANI, A. Y.BENDAOUD, A.MEROUFEL, A. Reglage par mode glissant dune machine asynchrone sans capteurmecanique, Rev. Roum. Sci. Techn. Electrotechn. et Energy. (2004), 406416. [9]. CIRSTEA, M. N.DINU, A.KHOR, J. G.Mc CORMICK, M: Neural and Fuzzy Logic Control of Drives and Power Systems, Newnes, Oxford, 2002. [10]. BOSE, B. K: Expert System, Fuzzy logic, and Neural Network Applications in Power Electronics and Motion Control, Proceedings of the IEEE 82 No. 8 (Aug 1994), 13031321. [11]. BUHLER, H: Reglage par logique floue, Presse Polytechniques et Universitaires Romandes, Lausanne, 1994. [12]. SPOONER, J.T.MAGGIORE, M.ORDONEZ, R.PASSINO, K. M. Stable Adaptative Control and Estimation for Nonlinear System, Neural and Fuzzy Approximator Techniques, Willey-Interscience,2002. .

Fig. 7. Input and output power when FLC is used

TABLE I: When FLC is not used with the drive
Load torque (N-m) w.r.t. F.L.    Input power (kW)    Output power (kW)    Efficiency (%)
6 (1/4th F.L.)                   8.4                 0.98                 11
8 (1/3rd F.L.)                   7.9                 1.2                  16
12 (1/2 F.L.)                    6.9                 1.88                 26
18 (3/4th F.L.)                  5.5                 2.8                  50

TABLE II: When FLC is used with the drive
Load torque (N-m) w.r.t. F.L.    Input power (kW)    Output power (kW)    Efficiency (%)
6 (1/4th F.L.)                   2.4                 1                    41
8 (1/3rd F.L.)                   2.5                 1.2                  50
12 (1/2 F.L.)                    3.5                 1.9                  57
18 (3/4th F.L.)                  4.5                 2.7                  60

IV. CONCLUSION

Adjustable speed drives, which allow control of the speed of rotation of an induction motor, provide significant savings in the energy requirements of the motor when operating at reduced speeds and torques. This was observed mainly through minimizing the input power at any speed and torque. The objective of this paper is to study fuzzy


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Design and Simulation of Bipolar Microstepping Drive for Disc Rotor Type Stepper Motor

Archan B. Patel & A.N. Patel


Department of Electrical Engineering Institute of Technology, NIRMA University Ahmedabad, INDIA e-mail: archanbpatel@gmail.com, amit.patel@nimauni.ac.in AbstractThe paper presents the design of the bipolar micro stepping drive for the disc rotor type permanent magnet stepper motor. Simulated results are also shown in the paper. Disc rotor type permanent magnet stepper motor shows the advantages of higher torque at high speed, high torque to weight ratio, very low moment of inertia, high torque to inertia ratio, low power consumption, ironless rotor and stator having minimum iron loss using SiFe laminations. Different from the available full-step and halfstep operation, the proposed controller simultaneously varies both the combination of phase excitations and the magnitude of applied voltages in such a way that the desired step size can be attained. Simulation results ensure the good performance characteristics and versatility of the driver.
Keywords- Disc rotor, Stepper motor, Motor control, permanent magnet, PIC microcontroller, PIC18F452.

S.H. Chetwani & K.S. Denpiya


Motor, Pumps & Engines Section Electrical Research & Development Association Vadodara, INDIA e-mail: lmpe@erda.org

I. INTRODUCTION

A disc rotor type permanent magnet stepper motor offers many advantages such as higher torque at high speed, high torque to weight ratio, very low moment of inertia, high torque to inertia ratio, low power consumption, ironless rotor and stator having minimum iron loss using SiFe laminations. They are extremely reliable and compact having better dynamic performance than other stepper motors. Fig.1 shows the view of disk rotor type stepper motor [1], [2]. This paper includes the micro stepping system description & development of the software for the disk magnet motor. A PIC microcontroller ic PIC18F452 is used as the controller as the DSP or FPGA based controller are costly [4]. PIC based controller offers coast effective solution. For these reasons, a drive system based on the PIC microcontroller and MOSFET is proposed. Development of the software is shown here. Flow chart for the interrupt routine is also shown in the development of the software. This paper also presents the results based on simulation carried out in MATLAB and capsoc simulation tool. Simulation results verify the proposed micro stepping operation. The disc rotor type stepper motor is very well suitable for high performance and high volume application such as computer peripherals, robotics, and CNC machines [2]. Exploded view of disc rotor type stepper motor is shown here in Fig.2.

Stepper motors have found a wide range of application and small size at low price are required. Stepper motor with an open loop position control are very well suited to many fields of application, but stepper motors give poor performance with respect to very precise motion control with high dynamic requirements.

Fig.1 Constructional view of disc magnet stepper motor

Fig.2 Exploded view of disc rotor type stepper motor


II. OPERATING PRINCIPLE OF THE DISC ROTOR TYPE STEPPER MOTOR

Rotor of such motors consists of a rare earth magnet having shape of thin disc axially magnetized. Fig. 3 shows the principle of stepping of the disc rotor type permanent magnet stepper motor.

Fig.4 Block diagram of circuit for microstepping special peripherals at an economical price include a 16-Bit Resolution Capture compare PWM module (CCPWM) with Programmable Dead-time Insertion, Master synchronous serial port module with two modes of operation and 1,00,000 erase/write cycle Enhanced FLASH program memory and a High-Speed 10Bit A/D Converter (HSADC). Based on the microcontroller PIC18F452, a new disc rotor type permanent magnet stepper motor driver system is designed as shown in Fig. 4. PIC18F452 is the most crucial element of the whole control system which receives the external signal such as the start or stop command, motor direction and pulse signals, mode of micro-stepping each of which will cause an interrupt in the main program. As shown in Fig. 5, a 20 MHZ crystal is used as the main oscillator. This operating frequency can effectively eliminates unwanted audible noise generated by the motor. A toggle switch connected to one of the inputs to PORTD (RD6) decides the direction of the stepping motor: forward or reverse. Each press of the switch will toggle the direction. A step switch is connected to one of the inputs of PORTB (RD7). If this switch is pressed the motor moves only one step (full, half or micro-step). Speed setting can be done through a potentiometer connected to one of the ADC channels of the PIC18F452. A DIP switch, connected to PORTD, is used to select the number of microsteps. DIP4 is used as the Enable switch. This has to be closed to run the motor with microsteps selected by DIP 1 -3. Theoretically, the number of microsteps can be even more than 32, but practically, that does not improve stepper motor performance. The motor can be driven in microsteps by changing the currents in both windings, as a function of sine and cosine, simultaneously. Alternatively, the current is kept constant in one winding, while it is varied in the other [5]. Appropriate values of the PWM duty cycle (proportional to the required coil current) for each step calculated. A table corresponding to the PWM duty cycle is stored in the program memory of PIC18F452. The Table Pointer (TBLRD instruction) of PIC18F452 is used to retrieve the value from the table and load it to the PWM registers to generate an accurate duty cycle. Only one look up table is needed because cosine is just an 90 degree offset of sine. Top half of the table is calculated by (1) and the second half of the table is simply the top half in reverse order.

Fig.3 Operating principle of the motor


When phase A is switched on the disc rotates to have its magnetic poles aligned with the stator segments of that phase. It can be seen that the poles under phase B are not directly aligned with the stator segments but there is an offset. This is achieved by having the 2 groups of stator at an angular shift. Now phase B is switched on and Phase A is off. The disc shifts right so that the magnetic south pole of the disc is aligned with the north pole of phase B. Again phase A is switched on but with opposite polarity this time. The rotor again shifts by half a pole pitch to get aligned along the stator segments. Hence this switching cycle continues causing rotary motion in the disc [1].

III. MICROSTEPPING SYSTEM DESCRIPTION

A. Configuration of the Controller


The block diagram of Disc rotor type stepper motor bipolar control in micro-stepping mode is shown in Fig.4. In this diagram the PIC18F452 microcontroller commands a two phase disc rotor type stepper motor in micro-stepping mode. To obtain this microstepping mode current has to be as much sinusoidal as possible in the two phases of the stepper motor [3],[5]. PIC18F452 is a 40-pin high performance, Enhance Flash microcontrollers developed by Microchip with nano Watt technology and its high computational performance with the additional excellent peripherals such as high speed A/D converter and I/O ports makes it an ideal choice for many high-performance, power control & motor control applications. PIC18F452 introduces

D(n) = sin((n x pi) / Ns) x 256        (1)

where n is the step number and Ns is the number of subdivisions.
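The following sketch builds such a duty-cycle lookup table, using the reading of (1) reconstructed above: a half-sine scaled to the PWM range, with the second half mirrored from the first. The use of 255 instead of 256, to stay within eight bits, is an assumption of the example.

```python
import math

def build_sine_table(ns):
    """Duty-cycle table for Ns microstep subdivisions: compute the rising half
    of the sine and mirror it to obtain the second half, as described above."""
    top = [round(math.sin(n * math.pi / ns) * 255) for n in range(ns // 2 + 1)]
    return top + top[-2::-1]

# The cosine-driven phase simply reads the same table with a 90 degree offset.
print(build_sine_table(8))
```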


Fig.5 Control schematic using PIC18F452

Fig.6 Interrupt Service routine flowchart

B. Software Development
MPLAB Integrated Development Environment (IDE) is Microchips free, integrated toolset for the development of PIC microcontroller and dsPIC digital signal controller embedded applications. MPLAB IDE runs as a 32-bit applications on MS windows, is easy to use and includes a host of free software components for fast application development and super-charged debugging. MPLAB IDE also serves as a single, unified graphical user interface for additional Microchip and third party software/hardware development tools. Moving between tools is easy and upgrading from the free simulator to MPLAB In-Circuit Debugger (ICD) 2 or the MPLAB In-Circuit Emulator (ICE) is effortless, since MPLAB IDE has the same user interface for all tools. The assembly code for the drive program of hybrid stepping motor will be written and built in MPLAB IDE, which use MPIDE ICD 2 to program the PIC18F452 for the test of application. In the main program, the global interrupt and peripheral interrupt are enabled firstly, then enable the INT2 interrupt and clear the interrupt flag bit. Next initialize the I/O ports: PORTA will be set to digital input to receive the number of subdivision from the DIP switch, and PORTB will be set to digital output to transfer the signal of current value to DAC which is accessed by the table-read instruction. At last enable the global interrupt and wait for the external stepping pulse, each of which will cause an INT2 interrupt and the main program will jump to the interrupt routine. Fig. 6 shows the flow chart of the interrupt routine.

IV. SIMULATION RESULTS

The Simulink model is shown in Fig. 7. An H-bridge is configured to allow current to flow in either direction through a winding. Referring to Fig. 7, current flows from left to right in winding 1 when MOSFETs S1 and S4 are turned on while S2 and S3 are off; current flows from right to left when S2 and S3 are on while S1 and S4 are off. The diodes in parallel with each MOSFET protect the MOSFETs from voltage spikes caused by switching the inductive winding. These diodes must be adequately sized in order to prevent damage to the MOSFET or to the diode itself.
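The H-bridge behaviour described above can be summarised as a small truth table. The sketch below is illustrative only, using the switch names S1-S4 from the text, and encodes which MOSFETs conduct for each current direction in one winding.

```python
# Switch states for one winding's H-bridge (True = MOSFET on).
# "positive" drives current left-to-right through the winding, "negative" the reverse,
# and "off" leaves all switches open so the freewheeling diodes carry the decaying current.
H_BRIDGE_STATES = {
    "positive": {"S1": True,  "S2": False, "S3": False, "S4": True},
    "negative": {"S1": False, "S2": True,  "S3": True,  "S4": False},
    "off":      {"S1": False, "S2": False, "S3": False, "S4": False},
}

def gate_signals(direction):
    """Return the gate pattern for the requested current direction."""
    return H_BRIDGE_STATES[direction]

assert gate_signals("positive")["S1"] and not gate_signals("positive")["S2"]
```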

Fig.7 Simulink model of the drive system


The disc rotor type stepper motor has the following parameters: step angle θs = 1.8° (200 steps/revolution); rated supply voltage = 24 V dc; current per phase = 5 A; resistance per phase = 1.013 Ω; inductance per phase = 3.1 mH.

Half step mode current waveforms are shown in Fig. 10. For the micro-stepping mode it is required to generate PWM signals; PWM generation is done through software. The simulated waveforms for 5 microsteps/step and 50 microsteps/step are shown in Fig. 11 and Fig. 12 respectively.

Fig.8 Output current for full step mode with only one phase ON at a time. Fig. 8 shows the simulated phase currents in full step mode when only one phase is ON at a time. The simulated currents with two phases ON at a time are shown in Fig. 9.

Fig.12 Phase currents in 50-microstepping mode
V. CONCLUSION

Based on the PIC18F452 microcontroller, a new two-phase disc rotor type stepper motor drive system with a micro-stepping technique has been proposed. Binary and decimal subdivisions are achieved by software programming in the PIC microcontroller, whose 20 MHz operating frequency effectively eliminates unwanted audible noise generated by the motor. A selectable micro-step resolution can be obtained, which makes the motor well suited for precise positioning systems and for low

speed drive systems due to minimized torque and speed ripple. Simulation results confirm the good performance characteristics and versatility of the drive.
ACKNOWLEDGMENT
Fig.9 Output current for full step mode with two phases ON at a time.
The project is sponsored by the Department of Science & Technology (DST), New Delhi. The authors wish to thank the Electrical Research & Development Association (ERDA) for permission to carry out the research work at the ERDA laboratory.

REFERENCES
[1] Keyur S. Denpiya, S. H. Chetwani, Amit N. Patel, M. K. Shah, Modeling of disc rotor type stepper motor, 3rd National Conference on Current Trends in Technology, NUCONE 2008, 27-29 November 2008.
[2] Reston Condit, Douglas W. Jones, Stepping motors fundamentals, Microchip Technology Inc., pp. 10-14, 2004.
[3] Sheng-Ming Yang, Feng-Chieh Lin, and Ming-Tsung Chen, Micro-stepping control of a two-phase linear stepping motor with three-phase VSI inverter for high-speed applications, IEEE Transactions on Industry Applications, Vol. 40, No. 5, pp. 1257-1264, September/October 2004.
[4] Daniel Carrica, Marcos A. Funes, and Sergio A. Gonzalez, Novel stepper motor controller based on FPGA hardware implementation, IEEE/ASME Transactions on Mechatronics, Vol. 8, No. 1, pp. 120-124, March 2003.
[5] M. F. Rahman, A. N. Poo, An application oriented test procedure for designing microstepping step motor controllers, IEEE Transactions on Industrial Electronics, Vol. 35, No. 4, pp. 542-546, November 1988.

Fig.10 Simulated Phase current in Half Step mode



Fig. 11 Simulated phase currents in 5 Micro stepping mode



Power control of induction generator using P-I controllers for wind power applications

K.B.Mohanty
Department of Electrical Engineering, National Institute of Technology Rourkela, India, kbmohanty@nitrkl.ac.in
Abstract: A line excited cage generator system, with PI
control for efficiency optimization and performance enhancement, is discussed in this paper. A squirrel cage induction generator driven by a prime mover feeds power to a utility grid through two double-sided pulse width modulated converters. The generation control system uses four PI controllers. The first PI controller tracks the generator speed against the reference speed for maximum power extraction. The second PI controller controls the direct axis current for reactive power control. The third PI controller controls the quadrature axis current for active power control. The fourth PI controller is used to maintain the dc link voltage constant. The performance of the PI controlled variable speed wind energy conversion system is evaluated through a simulation study in MATLAB. The closed loop speed response of the generator shows that a fast speed response can be obtained with well designed PI controllers.
Keywords- squirrel cage induction generator, indirect vector control, PI controller, PWM converter, wind energy generation.

Swagat Pati
Depart ment of electrical engineering National Institute of Technology Rourkela, India swagatiter@gmail.co m

I. INTRODUCTION

The global electrical energy consumption is rising and there is a steady increase in the demand on power generation, while the existing conventional energy sources are depleting. Investment in alternative energy sources is therefore becoming more important. Wind electrical power systems are currently receiving a lot of attention because they are cost competitive, environmentally clean and a safe renewable source. Of course, the main drawback of wind power is that its availability is somewhat statistical in nature, so it must be supplemented by additional sources to supply the demand curve. In the most basic type of wind electrical power system, a fixed speed wind turbine drives an induction generator directly connected to the grid. This system has a number of drawbacks, however: the reactive power, and therefore the grid voltage level, cannot be controlled, and the blade rotation causes power and voltage variations [1]. Most of these drawbacks are avoided when variable-speed wind turbines are used. The power production of variable speed turbines is higher than for fixed speed turbines, as they can rotate at the optimal rotational speed for each wind speed. Noise at low wind speeds is reduced. Other advantages of variable speed wind turbines are reduced mechanical stresses, reduced torque and power pulsations, and improved power quality [2].

The disadvantage of the variable speed turbine is a more complex electrical system, requiring power electronic converters to make the variable speed operation possible. But, the evolution of power semiconductors and variable frequency drive technology has aided the acceptance of variable speed wind generation systems. In spite of additional cost of converters and control, the total energy capture in a variable speed wind turbine system is larger and therefore the lifecy cle cost is lower than with fixed speed system. The advantages of cage induction machines are well known. These machines are relatively inexpensive, robust, and require low maintenance. When induction machines are operated using vector control techniques, fast dynamic response and accurate torque control are obtained. All of these characteristics are advantageous in variable speed wind energy conversion systems (WECS). Squirrel cage generators with shunt passive or active VAR (volt ampere reactive) generators was proposed in [3], which generate constant frequency power through a diode rectifier and line commutated thyristor inverter. Operation of several self excited induction generators connected to a common bus is analyzed in [4]. The control systems for the operation of indirect rotor flux oriented vector controlled induction machines for variable speed wind energy applications are discussed in [5]-[7]. Sensorless vector control scheme suitable to operate cage induction generator is discussed in [5]. In [6] cage induction machine is considered and a fuzzy control system is used to drive the WECS to the point of maximum energy capture for a given wind velocity. The induction machine is connected to the utility using back to-back converters. In this paper a variable speed wind turbine driven squirrel cage induction generator system with two double sided PWM converters is described. PI control is used to optimize efficiency and enhance performance. The control algorithms are evaluated by MATLAB simulation study.

II. INDUCTION GENERATOR ELECTRICAL MODEL

The basic configuration of a line excited induction generator is sketched in Fig. 1. Normally the stator is interfaced with the grid through a back-to-back PWM inverter configuration. The operating principle of a line excited induction generator can be analyzed using the classic theory of rotating fields and the well-known d-q model, as well as three-to-two and two-to-three axes transformations. In order to deal with the machine dynamic behavior, both the stator and rotor variables are referred to a synchronously rotating reference frame in the developed model. To express the induction machine electrical model in this reference frame, it is first necessary to perform the Clarke transformation from the three-phase to the d-q current and voltage systems through the following equation:

[Vqs^s; Vds^s; V0s^s] = (2/3) [cos θ, cos(θ − 120°), cos(θ + 120°); sin θ, sin(θ − 120°), sin(θ + 120°); 0.5, 0.5, 0.5] [Vas; Vbs; Vcs]    ...(1)

It is convenient to set θ = 0, i.e. the qs axis aligned with the as axis. Ignoring zero sequence components we get

Vqs^s = Vas    ...(2)
Vds^s = −(1/√3) Vbs + (1/√3) Vcs    ...(3)

The general convention applied in this model is similar to the motor convention, i.e. the stator currents are positive when flowing towards the machine, and real and reactive power are positive when fed from the grid. The equations describing the asynchronous machine in terms of phase variables were derived to develop the model with all rotor variables referred to the stator. The equations were then transformed into the d-q axis reference frame with axes rotating at synchronous speed using the transformations given above. When deriving the model, the q-axis was assumed to be 90° ahead of the d-axis in the direction of rotation, and the d-axis was chosen such that it coincides with the maximum of the stator flux. Therefore Vqs equals the terminal voltage and Vds equals zero.

Stator voltages:
Vqs = Rs iqs + dψqs/dt + ωe ψds    ...(4)
Vds = Rs ids + dψds/dt − ωe ψqs    ...(5)

Rotor voltages:
Vqr = Rr iqr + dψqr/dt + (ωe − ωr) ψdr    ...(6)
Vdr = Rr idr + dψdr/dt − (ωe − ωr) ψqr    ...(7)

The flux linkages in these equations were calculated from:
ψqs = Ls iqs + Lm iqr,  ψqr = Lr iqr + Lm iqs
ψds = Ls ids + Lm idr,  ψdr = Lr idr + Lm ids

The final mathematical model for the squirrel cage induction machine is given (written here row by row in expanded matrix form) as

Vds = (Rs + sLs) ids − ωe Ls iqs + sLm idr − ωe Lm iqr
Vqs = ωe Ls ids + (Rs + sLs) iqs + ωe Lm idr + sLm iqr
Vdr = sLm ids − (ωe − ωr) Lm iqs + (Rr + sLr) idr − (ωe − ωr) Lr iqr
Vqr = (ωe − ωr) Lm ids + sLm iqs + (ωe − ωr) Lr idr + (Rr + sLr) iqr    ...(8)

where s is the Laplace operator and ωr is the rotor electrical speed. For a singly fed machine such as the cage machine, Vqr = Vdr = 0. The machine dynamics are given as

Te − Tturbine = J dωr/dt    ...(10)
Te = (3/2)(P/2) Lm (iqs idr − ids iqr)    ...(11)

The active and reactive power of the generator are given as

p = Vds ids + Vqs iqs    ...(12)
q = Vqs ids − Vds iqs    ...(13)

III. CONTROL STRUCTURE

The key feature of field oriented control is to keep the magnetizing current at a constant rated value. Thus the torque producing component of the current can be adjusted according to the active power demand with a better dynamic response, as in Fig. 2. With this assumption the mathematical formulation can be written as

ωsl = Rr iqs / (Lr ids)    ...(14)
Te = (3/2)(P/2)(Lm/Lr) ψdr iqs    ...(15)

where ωsl is the slip speed and ψdr is the d-axis rotor flux linkage. The basic configuration of the drive consists of a line excited squirrel cage induction generator interfaced with the grid through a back-to-back PWM inverter. The inverters are voltage controlled bidirectional voltage source inverters with a dc link between them. The rotor speed is fed back by a speed sensor and added to the slip speed to obtain the synchronous speed, which is further integrated to obtain the angular displacement θe from which the unit vectors are generated. The controller PI-1 keeps the rotor speed tracking the reference speed such that maximum power can be captured from the wind by the wind turbine. The controller PI-2 controls the direct axis current such that the rotor magnetizing flux remains constant, giving a better dynamic response. The controller PI-3 controls the quadrature axis current such that the given active power demand can be met. The controller PI-4 is used to keep the dc link voltage constant.
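To make the control structure concrete, the Python sketch below computes the current references and slip speed from equations (14) and (15) for one control step. The machine constants and gains are placeholders, not values from the paper.

```python
import math

# Placeholder machine constants (not from the paper)
P = 4                              # number of poles
Lm, Lr, Rr = 0.069, 0.071, 0.39    # H, H, ohm
psi_dr_ref = 0.8                   # rated rotor flux linkage (Wb), kept constant

def current_references(Te_ref):
    """Indirect field-oriented references from equations (14)-(15).

    ids* sets the rotor flux, iqs* sets the torque, and the slip speed of
    equation (14) added to the measured rotor speed gives the synchronous
    speed whose integral is the unit-vector angle theta_e.
    """
    ids_ref = psi_dr_ref / Lm                                        # flux component
    iqs_ref = Te_ref / ((3 / 2) * (P / 2) * (Lm / Lr) * psi_dr_ref)  # eq. (15) inverted
    w_slip = (Rr / Lr) * iqs_ref / ids_ref                           # eq. (14)
    return ids_ref, iqs_ref, w_slip

def electrical_angle(theta_e, w_r, w_slip, dt):
    """Integrate the synchronous speed to obtain the transformation angle."""
    return (theta_e + (w_r + w_slip) * dt) % (2 * math.pi)

if __name__ == "__main__":
    print(current_references(Te_ref=10.0))
```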


Fig. 1 Configuration of the squirrel cage induction generator system: gear-box coupled IG with speed sensor, interfaced to the grid through a bidirectional rectifier and PWM inverter
Fig. 2 Indirect vector control structure with PI controllers PI-1 to PI-4, vector rotator and unit vector generator
The PI controllers were initially tuned using the Ziegler-Nichols PID tuning rule. They were then further adjusted to obtain optimum results.
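A minimal discrete PI controller of the kind used for PI-1 to PI-4 is sketched below; the gains, limits and sampling time are illustrative values that would be set from the Ziegler-Nichols starting point and then refined, as described above.

```python
class DiscretePI:
    """Simple discrete PI controller with output clamping (anti-windup)."""

    def __init__(self, kp, ki, ts, out_min, out_max):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += self.ki * error * self.ts
        # clamp the integral so the output limit also bounds the accumulated term
        self.integral = max(self.out_min, min(self.out_max, self.integral))
        output = self.kp * error + self.integral
        return max(self.out_min, min(self.out_max, output))

# Example: speed loop PI-1 producing a torque command (placeholder numbers)
speed_pi = DiscretePI(kp=0.5, ki=20.0, ts=1e-4, out_min=-15.0, out_max=15.0)
torque_cmd = speed_pi.update(reference=150.0, measurement=148.2)
```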

IV. RESULTS AND DISCUSSION

The drive system was simulated with PI controllers under different operating conditions, such as a step change in the reference speed, and some sample results are presented in the following section. A step change command for torque is given at t = 2.5 s, which lasts for 0.5 s and then returns to the previous value. In open loop the machine takes around 0.13 s to reach steady state, but with PI control it takes approximately 0.05 s, as can be seen clearly in Fig. 3. Thus with PI controllers the active power, reactive power and line current values change only for a very short transient time and otherwise remain constant, improving the stability of the system. The results in the following sections show the improved performance of the generator system.

Fig. 3 Simulated torque response due to step change of turbine torque: (a) open loop, (b) PI
Fig. 4 Simulated speed response of the drive system due to step change in turbine torque: (a) open loop, (b) PI
Fig. 5 Simulated active power response due to step change of turbine torque: (a) open loop, (b) PI
Fig. 6 Simulated response due to step change of turbine torque: (a) open loop, (b) PI
REFERENCES

[1] G. Saccomando, J. Svensson, and A. Sannino, Improving voltage disturbance rejection for variable speed wind turbines, IEEE Trans. Egy.Conv., vol. 17, no. 3, pp. 422-428, Sep 2002. [2] J. Morren, and S. W. H. de Hann, Ridethrough of wind turbines with doubly-fed induction generator during a volta ge dip, IEEE Trans. Egy.Conv., vol. 20, no. 2, pp. 435-441, Jun 2005. [3] C. V. Nayar, and J. H. Bundell, Output power controller for a wind driven induction generator, IEEE Trans. Aerospace Electronic Systems,vol. 23, pp. 388-401, May 1987. [4] C. Chakraborty, S. N. Bhadra, and A. K. Chattopadhyay, Analysis of parallel operated self excited induction generators, IEEE Trans. Egy. Conv., vol. 14, no. 2, pp. 209-216, Jun 1999. [5] R. Cardenas, and R. Pena, Sensorless vector control of induction machines for variable speed wind energy applications, IEEE Trans. Egy.Conv., vol. 19, no. 1, pp. 196-205, Mar. 2004. [6] M. G. Simoes, B. K. Bose, R. J. Spiegel, Design and performance evaluation of a fuzzy logic based variable speed wind generation system, IEEE Trans. Ind. Appl., vol. 33, no. 4, pp. 956-965, Aug. 1997. [7] S. N. Bhadra, D. Kastha, and S .Banerjee, Wind Electrical Systems, Oxford University Press, New Delhi, 2005. [8] M.G.Simoes, B.K.Bose, and R.J.Spiegel, Fuzzy logic based intelligent control of a variable speed cage machine wind generation system, IEEE Trans. Power Electron ., vol. 12, pp. 87-95, Jan 1997. [9] Roberto Cardenas, Ruben Pena, Sensorless vector control of induction machinesfor variable speed wind energy applications, IEEE Trans. Energy conversion , vol. 19 , No. 1 , pp 196-205, Mar 2004. [10] A. T apia, G.T apia, J.Xabier Ostolaza, J.R.Saenz , Modellimg and control of a wind turbine driven doubly fed induction generator IEEE Trans. Energy conversion, vol.18, No. 2, pp. 194-204, June 2003. [11] Richard Gagnon, Gilbert Sybille, Serge Bernard, Daniel Par, Silvano Casoria, Christian Larose, Modeling and Real-Time Simulation of a Doubly- Fed Induction Generator Driven by a Wind T urbine, International Conference on Power Systems Transients (IPST05) , Montreal, Canada, Paper No. IPST 05-162, June 2005. [12] R.Pena, J.C. Clare, G.M.Asher, Doubly fed induction generator using back to back PWM converters and its application to variable speed wind energy generation, IEE Proc. Electr. Power Appl. , vol. 143, No. 3, pp. 231-241, May1996. [13] L.Holdsworth, X.G.Wu, J.B.Ekanayake, N.Jenkins, Comparison of fixed speed and doubly-fed induction wind turbines during power system disturbances, IEE Proc. Gener. Transm. Distrib. , vol.150, No. 3, pp 343-352, May2003.



Implementation of An Adaptive Neuro-Fuzzy Based Speed Controller for VCIM Drive


R. A. Gupta
Dept. of Electrical Engineering MNIT, Jaipur-302017, India e-mail: rag_mnit@rediffmail.com

Rajesh Kumar
Dept. of Electrical Engineering MNIT, Jaipur-302017, India e-mail: rkumar.ee@gmail.com

Rajesh S. Surjuse
Dept. of Electrical Engineering MNIT, Jaipur-302017, India e-mail:surjusemnit@rediffmail.com

Abstract: In this paper an adaptive neuro-fuzzy based speed controller for an induction motor drive is presented. It has been implemented using the dSPACE 1104 platform. Sensorless operation has been achieved using a conventional MRAS. The adaptive neuro-fuzzy based speed controller algorithm has been developed in the MATLAB/Simulink programming environment and implemented on the dSPACE 1104 kit. This paper shows the step by step implementation of the adaptive neuro-fuzzy based speed controller algorithm in dSPACE 1104. Selected experimental results are presented to investigate the performance of the induction motor drive.
Keywords- Vector Controlled Induction Motor (VCIM) Drives, dSPACE 1104 controller board, Adaptive Neuro-Fuzzy speed controller, Voltage Sensors, Current Sensors, Model Reference Adaptive System (MRAS)

I. INTRODUCTION

The dSPACE is a software and hardware real-time control platform based on MATLAB/Simulink. In this paper, dSPACE DS1104 controller board system is used. After creating control system structure in Simulink, the control system frame can be compiled and downloaded to dSPACE real-time hardware system by RTW/RTI. The motor drive performance can be monitored and controlled online to improve the systems performance with ControlDesk software. Conventional high performance vector controlled induction motor drives control use fixed gain PI controllers. These fixed gain PI controllers [1] are very sensitive to disturbances, parameter variations and system non-linearity. This give rise to design of intelligent controllers based on Artificial Intelligence (AI) which does not need the exact mathematical model of the system. Artificial Neural Network (ANN) and Fuzzy Logic Control (FLC) proved to be the best candidate for speed control of high performance IM drives. Fuzzy Logic Controller yields superior and faster control, but main design problem lies in the determination of consistent and complete rule set and shape of the membership functions. A lot of trial and error has to be carried out to obtain the desired response which is time consuming. On the other hand, ANN alone is insufficient if the training data are not enough to take care of all the operating modes. The draw-backs of Fuzzy Logic Control and Artificial Neural Network can be over come by the use of Adaptive

Neuro-Fuzzy Inference System (ANFIS) [2]-[3]. ANFIS is one of the best tradeoffs between neural and fuzzy systems, providing smooth control, due to the FLC interpolation, and adaptability, due to the NN back propagation. Some advantages of ANFIS are model compactness, a smaller required training set and faster convergence than a typical feed-forward NN. Since both fuzzy and neural systems are universal function approximators, their combination, the hybrid neuro-fuzzy system, is also a universal function approximator. The proposed scheme uses MRAS based speed estimation. MRAS is suitable for high performance speed sensorless control systems, especially when the dynamic characteristics of the system are not known [4]-[5]. It offers simpler implementation and requires less computational effort compared to other methods, and is therefore the most popular strategy. This paper presents a dSPACE implementation of an adaptive neuro-fuzzy based speed controller for a vector controlled induction motor drive. Speed sensorless operation is achieved with a conventional Model Reference Adaptive System (MRAS). The paper is arranged as follows. Section II presents the modeling of the proposed induction motor drive scheme. Section III describes the experimental setup. Section IV presents the implementation of the proposed scheme on the dSPACE platform. Section V and Section VI present the conclusion and references respectively.
II. MODELING OF PROPOSED INDUCTION MOTOR DRIVE

A. Induction Motor Modeling
The mathematical model of a three-phase squirrel cage induction motor in the synchronously rotating reference frame is given by equations (1)-(7) as follows:

Vds = Rs ids + p ψds + ωe ψqs    ...(1)
Vqs = Rs iqs + p ψqs − ωe ψds    ...(2)
0 = Rr idr + p ψdr − (ωe − ωr) ψqr    ...(3)
0 = Rr iqr + p ψqr + (ωe − ωr) ψdr    ...(4)

where ψqs = Ls iqs + Lm iqr; ψds = Ls ids + Lm idr; ψqr = Lr iqr + Lm iqs and ψdr = Lr idr + Lm ids, and the electromagnetic torque and mechanical dynamics are

Te = (3/2)(P/2) Lm (iqs idr − ids iqr)    ...(5)
Te = Jm dωr/dt + Bm ωr + TL    ...(6)
ωr = dθr/dt    ...(7)

where Vds, Vqs are the d-q axis stator voltages respectively; ids, iqs, idr, iqr are the d-q axis stator and rotor currents respectively; Rs, Rr are the stator and rotor resistances per phase respectively; Ls, Lr, Lm are the self inductances of the stator and rotor and the mutual inductance respectively; P is the number of poles; p is the differentiation operator (d/dt); ωe, ωr are the speed of the rotating magnetic field and the rotor speed respectively; Te, TL are the electromagnetic developed torque and the load torque respectively; Jm is the rotor inertia; Bm is the rotor damping coefficient and θr is the rotor position.

B. Vector Control
Vector control is based on the decoupling of the flux and torque producing components of the stator current. Under this condition, the q-axis component of the rotor flux is set to zero while the d-axis component reaches the nominal value of the magnetizing flux. The torque equation can also be written as

Te = (3/2)(P/2)(Lm/Lr)(iqs ψdr − ids ψqr)    ...(8)

Since ψqr is zero,

Te = (3/2)(P/2)(Lm/Lr) iqs ψdr = Kte iqs ψdr    ...(9)

where Kte = (3/2)(P/2)(Lm/Lr) is the torque constant. If the rotor flux linkage in equation (9) is maintained constant, then the torque is simply proportional to the torque producing component of the stator current. The other field oriented controller equations are

Tr (dψdr/dt) + ψdr = Lm ids    ...(10)
ωe = ωr + Lm iqs / (Tr ψdr)    ...(11)

where Tr denotes the rotor time constant.

C. Adaptive Neuro-Fuzzy Speed Controller
The design methodology used for the ANFIS based speed controller is expert control, which mimics a human expert. The control objective is to design a controller function such that the plant state can follow a desired trajectory as closely as possible. The speed error and the rate of change of the speed error are the inputs of the proposed neuro-fuzzy controller, which are given by

Input1 = Δω = ω* − ω    ...(12)
Input2 = Δe = [Δω(n) − Δω(n−1)] / T × 100%    ...(13)

where ω* is the command speed and T is the sampling time. A Sugeno fuzzy model with a five-layer ANN structure is used in the proposed controller. In this five-layer structure, the first layer represents the inputs, the second layer the fuzzification, the third and fourth layers the fuzzy rule evaluation, and the fifth layer the defuzzification [6]. In the proposed scheme a generalized bell function is used as the input membership function, given by equation (14):

μAi(x) = 1 / (1 + |(x − ci)/ai|^(2bi))    ...(14)

where {ai, bi, ci} is the parameter set.

The hybrid learning algorithm [6] is used in the proposed controller. It has two passes, a forward pass and a backward pass. In the forward pass, node outputs go forward up to layer four and the consequent parameters are identified by the sequential least squares method. In the backward pass, the error signals propagate backward and the premise parameters are updated by gradient descent, i.e. the back propagation learning method. The consequent parameters thus identified are optimal under the condition that the premise parameters are fixed; thus the hybrid approach converges much faster since it reduces the search space dimension of pure back propagation. In hybrid learning, for back propagation, the objective function to be minimized is defined by (15):

Ep = Σ (m = 1 to A) (Tm,p − Om,p)²    ...(15)

where Tm,p is the mth component of the pth target output vector and Om,p is the mth component of the actual output vector produced by the presentation of the pth input vector. Hence the overall error measure is given by (16):

E = Σ (p = 1 to P) Ep    ...(16)
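The generalized bell membership function of equation (14) and the error measures of equations (15)-(16) can be sketched directly in Python; the parameter values below are arbitrary illustrations, not the tuned controller.

```python
def bell_mf(x, a, b, c):
    """Generalized bell membership function of equation (14)."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def pattern_error(targets, outputs):
    """Squared error for one training pattern, equation (15)."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))

def total_error(target_set, output_set):
    """Overall error measure over all P patterns, equation (16)."""
    return sum(pattern_error(t, o) for t, o in zip(target_set, output_set))

# Example: membership grades of a normalised speed error of 0.2
params = [(0.5, 2, -1.0), (0.5, 2, 0.0), (0.5, 2, 1.0)]   # (a, b, c) per fuzzy set
grades = [bell_mf(0.2, a, b, c) for a, b, c in params]
```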


Induction Motor: Nominal power (P): 2.2KW; Voltage: 415V (L-L, rms); Phases: 3; Frequency: 50Hz; Stator resistance ( R s ): 13.47 ohms; Rotor resistance ( R r ): 9.833 ohms; Stator leakage reactance ( X ls ): 12.137 ohms; Rotor leakage reactance ( X lr ): 14.5 ohms; Mutual reactance ( X m ): 264.66 ohms; Rotor inertia (J): 0.002 Kg.m2; Number of pole (p): 4. DC-Machine: 2.2 Kw, 220 V, 9.5 A, 1500 rpm, separately excited dc generator with loading arrangement. B. Control Circuit The control circuit is based on the DS1104 Controller Board and CP1104 connector panel by dSPACE plugged in a computer. Its development software operates under Matlab/Simulink environment and is divided in two main components: Real Time Interface (RTI) which is the implementation software and ControlDesk which is the experimentation software. RTI is a Simulink toolbox which provides blocks to configure models. These blocks allow the users to access to the dSPACE hardware. ControlDesk allows the users to control and monitor the real-time operation by using a lot of virtual instruments and building a control window. Details of control circuit components are: PC: Intel(R)core(TM)2 CPU, 800 MHz, 1 GB of RAM with Windows XP. DS1104 Controller Board: Single-board PCI hardware for use in PCs, Power PC 603e/250 MHz, Texas Instruments DSP TMS320F240, with on-board I/Os. CP1104 Connector Panel: Access I/O signals with 8 ADC input, 8 DAC output, Digital I/O, Slave DSP I/O, Incremental encoder interfaces, Serial interfaces. C. Sensors Sensors used in experimental setup are current and voltage LEM transducers and a speed sensor. Their ratings are: Current Transducer LA 55-P: Closed loop current transducer using the Hall effect, Primary nominal current rms: 50 amperes, Secondary nominal current rms: 50 mA, Conversion ratio: 1:1000. Voltage Transducer LV 100-1000: Closed loop current transducer using the Hall effect, Primary nominal rms voltage: 1000 V, Secondary nominal current rms: 50 mA, Conversion ratio: 1000V/50mA. Speed Sensor: Rotary encoder (incremental type), 1000 pulse per revolution. IV. IMPLEMENTATION OF PROPOSE SCHEME ON DSPACE PLATFORM

Fig. 1. Proposed induction motor control scheme with an adaptive neuro-fuzzy based speed controller.
Fig. 2. Schematic of dSPACE based Implementation of induction motor drive control.

Learning rules can be derived as follows:

ai(n+1) = ai(n) − η_ai (∂E/∂ai)    ...(17)
bi(n+1) = bi(n) − η_bi (∂E/∂bi)    ...(18)
ci(n+1) = ci(n) − η_ci (∂E/∂ci)    ...(19)
where η_ai, η_bi and η_ci are the learning rates of the corresponding parameters. The derivatives in the above equations can be found by the chain rule. Fig. 1 shows the proposed induction motor control scheme with an adaptive neuro-fuzzy based speed controller.
III. EXPERIMENTAL SETUP

Experimental setup consists of three essential parts: the power circuit, the control circuit and the sensors. Fig. 2 shows schematic of experimental platform. A. Power Circuit The power circuit consists of a voltage Source inverter and two machines: one squirrel cage induction motor and a dcmachine with a separately excited field winding. These machines are mechanically coupled. The dc-machine is used as the load of the induction motor. Their ratings are: Voltage Source Inverter (VSI):3-phase IGBT based inverter; Input dc voltage: 600 volts; Output ac voltage: 415 volts; Output current: 30 amperes; Switching frequency (up to): 20 KHz.

The proposed scheme for IM drive is experimentally implemented using dSPACE 1104 through both hardware and software. The DSP board is interfaced to PC with uninterrupted communication capabilities through dual-port memory. The DSP has been supplemented by a set of on-


Fig. 3. Starting dynamics at reference speed of 140 rad/sec at no load (a) Speed response ( 70 rad/sec/div, 25 ms/div) (b) Stator current (2 amp/div, 25 ms/div).
Fig. 4. Speed reversal from 140 to -140 rad/sec at no load (a) Speed response ( 50 rad/sec/div, 2.5 sec/div) (b) Stator current (1 amp/div, 100 ms/div).
converted into digital signals. Therefore, an anti-aliasing filter is added to each current transducer. The cut off frequency of this filter is estimated at 500 Hz (an order of magnitude above the rated frequency of 50 Hz). For MRAS speed estimator reference voltages (as it has lesser harmonics and it avoids use of voltage transducers) and actual stator current are used. The estimated actual motor speed is compared to the command speed to generate the speed error signal. This speed error signal is used to generate command torque which is limited to a proper value in accordance to the motor rating. The command torque is used to generate torque component of stator current. The flux and the torque components of the stator current are used to generate their respective reference voltages. These reference voltages are given to the SVPWM block, which generate the pulses for the IGBTs driver circuit. The D/A channels are used to convert the generated pulses to analogue signals and to capture the necessary output signals in a digital storage oscilloscope. The complete IM drive is implemented through software by developing a program in Matlab/Simulink. Finally, the program is downloaded to the dSPACE controller board using loader program. The sampling frequency for experimental implementation of the proposed IM motor drive system is 5 kHz. Fig. 3. to Fig. 6. presents selected experimental results.
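A conceptual sketch of the rotor-flux MRAS speed estimator used here (Schauder-type, with the reference voltage model and the adaptive current model) is given below. It is written under assumptions: the inductances are derived from the reactances listed earlier at 50 Hz, the adaptation gains are placeholders, and the sign convention of the cross-product error may need adjustment for a particular flux-model orientation; it is not the authors' dSPACE code.

```python
import numpy as np

Rs, Rr = 13.47, 9.833          # stator/rotor resistance (ohm), from the motor data above
Ls, Lr, Lm = 0.88, 0.89, 0.84  # inductances (H) from X/(2*pi*50), approximate
sigma = 1.0 - Lm**2 / (Ls * Lr)
Tr = Lr / Rr
Ts = 1.0 / 5000.0              # 5 kHz sampling frequency, as stated in the text
kp, ki = 20.0, 4000.0          # adaptation PI gains (placeholders)
J = np.array([[0.0, -1.0], [1.0, 0.0]])

class MrasSpeedEstimator:
    def __init__(self):
        self.psi_v = np.zeros(2)   # rotor flux from the reference (voltage) model
        self.psi_i = np.zeros(2)   # rotor flux from the adaptive (current) model
        self.i_prev = np.zeros(2)
        self.integ = 0.0
        self.w_hat = 0.0

    def step(self, v_s, i_s):
        """One sampling step with stationary-frame voltage and current vectors."""
        di = (i_s - self.i_prev) / Ts
        self.i_prev = i_s.copy()
        # Reference model: depends on measured voltages and currents only
        self.psi_v += Ts * (Lr / Lm) * (v_s - Rs * i_s - sigma * Ls * di)
        # Adaptive model: depends on the estimated speed
        self.psi_i += Ts * ((Lm / Tr) * i_s - self.psi_i / Tr
                            + self.w_hat * (J @ self.psi_i))
        # Cross-product error between the two flux estimates drives a PI adaptation
        err = self.psi_v[0] * self.psi_i[1] - self.psi_v[1] * self.psi_i[0]
        self.integ += ki * err * Ts
        self.w_hat = kp * err + self.integ
        return self.w_hat
```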
V. CONCLUSIONS

Fig. 5. Speed reversal from 15 to -15 rad/sec at no load (a) Speed response (10 rad/sec/div, 5 sec/div) (b) Stator current (1 amp/div, 250 ms/div).

This paper has described an adaptive neuro-fuzzy based speed controller algorithm for a speed sensorless vector controlled induction motor drive based on the DS1104 DSP board. A hardware set-up of the integrated induction motor drive system has been developed in the laboratory, and the adaptive neuro-fuzzy based speed controller algorithm has been successfully implemented for the induction motor drive in real time by employing a dSPACE DS1104 controller. The experimental results show an efficient implementation of the proposed scheme on the dSPACE platform.
REFERENCES
[1] Z. Ibrahim and E. Levi, A comparative analysis of fuzzy logic and PI speed control in high-performance AC drives using experimental approach, IEEE Trans. on Ind. Appl., vol. 38, no. 5, pp. 1210-1218, Sep/Oct. 2002.
[2] M. N. Uddin and H. Wen, Development of a self-tuned neuro-fuzzy controller for induction motor drives, in Conf. Record of Industry Applications 2004, vol. 4, 3-7 Oct. 2004, pp. 2630-2636.
[3] Farzan Rashidi, Sensorless speed control of induction motor drives using a robust and adaptive neuro-fuzzy based intelligent controller, IEEE-ICIT 2004, pp. 617-627.
[4] C. Schauder, Adaptive speed identification for vector control of induction motors without rotational transducers, IEEE Trans. on Industry Applications, vol. 28, no. 5, pp. 1054-1061, 1992.
[5] R. S. Surjuse, R. Kumar and R. A. Gupta, Speed sensorless estimation schemes for induction motor drive: a survey, International Journal of Engineering Research and Industrial Applications (IJERIA), vol. 1, no. VI, pp. 173-193, 2008.
[6] J. S. R. Jang, C. T. Sun and E. Mizutani, Neuro-Fuzzy and Soft Computing - A Computational Approach to Learning and Machine Intelligence, Prentice-Hall of India Pvt. Ltd., New Delhi, 2006.

Fig. 6. Load torque impact of 8 Nm at 25 sec from no load (a) Speed response (50 rad/sec/div, 2.5 sec/div) (b) Stator current (0.75 amp/div, 50 ms/div).

board peripherals used in digital control systems, such as A/D, D/A converters, and incremental encoder interfaces. The dSPACE 1104 is also equipped with a TI TMS320C240 16-bit DSP processor. It acts as a slave processor and provides the necessary digital input/output (I/O) ports and powerful timer functions such as input capture, output capture, and pulse width modulation (PWM) waveform generation. In this study, the slave processor is used for digital I/O configuration. The actual motor currents are measured by the Hall-effect sensors, which have good frequency response and are fed to the dSPACE board through the A/D converter. The measured currents must be filtered in order to avoid aliasing when they will be




New Voltage and Frequency Regulator for Self Excited Induction Generator
D. K. Palwalia and S. P. Singh

Abstract--This paper presents design of equal time ratio control (ETRC) based load controller to regulate the voltage and frequency of self excited induction generator (SEIG), suitable for stand alone power mode employing unregulated turbine such as micro-hydro power generation. The SEIG can be used to generate constant voltage and frequency if the load is maintained constant at its terminals. Moreover, under such operation, SEIG requires constant capacitance for excitation resulting in a fixed-point operation. For this purpose, a suitable control scheme is to develop such that the load on the SEIG remains constant despite the change in the consumer load. The SEIG with such control scheme should provide quality power supply with minimum transients and harmonics. A new load controller has been developed based on equal time ratio control (ETRC) AC chopper controllable load. The transient behavior of SEIG-load controller system at different operating conditions such as application and removal of static(resistive and reactive) and dynamic load is investigated. Index Terms-- Self excited induction generator (SEIG), load controller, Micro-hydro generation, Equal time ratio control (ERTC).

I. INTRODUCTION

Induction machines generate power when enough excitation is provided and the rotor is driven at a speed higher than that of the stator magnetic field. The required excitation for a self excited induction generator (SEIG) is provided by connecting appropriate capacitors [1-3] across the generator terminals. The induced voltage and its frequency in the winding will increase up to a level governed by magnetic saturation in the machine. Traditionally, synchronous generators have been used for power generation, but induction generators are increasingly being used these days to harness renewable energy resources because of their relative advantages. These features include maintenance and operational simplicity, brushless and rugged construction, lower unit cost, good dynamic response, self protection against faults and the ability to generate power at varying speed. Also, the induction generator does not require a separate DC exciter and its related equipment such as a field breaker, rheostat and automatic voltage regulator, and therefore
D. K. Palwalia is with the Department of Electrical Engineering, Rajasthan Technical University Kota, Rajasthan 324010, INDIA (e-mail: dheerajpalwalia@gmail.com). S. P. Singh is with the Department of Electrical Engineering, Indian Institute of Technology Roorkee, Uttrakhand 247667 INDIA(e-mail: spseefee@iitr.ernet.in).

requires less maintenance. These advantages facilitate induction generator operation in stand-alone/isolated mode, in parallel with a synchronous generator for supplying local load, and in grid mode. The self excited induction generator has a major drawback of poor voltage regulation. The inherent poor voltage regulation of the SEIG, due to the difference between the reactive power supplied by the excitation capacitors and that demanded by the load and the machine, is a major bottleneck for its application in isolated mode. The generated voltage depends upon the speed, the capacitance, the load current and the power factor of the load. Several schemes for regulating the voltage of a SEIG with the help of a saturable core reactor [4, 5], switched capacitors [6] or a static VAR compensator (SVC) [7-9] are available in the literature. Effective voltage and frequency regulation can be obtained by using a load controller [10-19]. The authors [20, 21] have presented a DSP based mark space ratio controlled induction generator controller for a single phase self excited induction generator. The load controller keeps the total electrical load constant under variable consumer load through a dump load. A load controller with an uncontrolled rectifier and a series connected chopper switch under mark space ratio control gives unity power factor operation and requires only one dump load. However, such a load controller is nonlinear in nature and injects harmonics into the system. The harmonics generated are random in nature, and the SEIG performance is severely affected by them. In this paper, the design of a SEIG-load controller system based on an equal time ratio control (ETRC) AC chopper controllable load through a sample based controller is given for fixed point operation, which injects minimum harmonics into the system and makes the dump load control behave as a linear load.
II. SYSTEM CONFIGURATION AND CONTROL SCHEME
A schematic diagram of the SEIG-load controller system is shown in Figure 1. It consists of a three phase delta connected SEIG driven by a constant power prime mover. The excitation capacitors are connected at the terminals of the SEIG and have a fixed value chosen to give rated terminal voltage at full load. The consumer load and the load controller are connected in parallel at the generator terminals. The load controller consists of three pairs of anti-parallel connected static switches (IGBT1-IGBT6), connected in series with a resistive load RD (dump load). The SEIG feeds two loads in parallel such that the total


Figure 1: Schematic diagram of three phase SEIG system with load controller

power Pout = Pc + Pd is constant, where Pout is the generated power of the generator (which must be kept constant), Pc is the consumer load power, and Pd is the dump load power. The dump load power (Pd) may be used for non-priority loads such as heating, battery charging, cooking, etc. The amount of dump load power is controlled by the IGBT switches. The duty cycle of the gate pulses to the switches sets the average conduction period and hence the amount of power delivered to the dump load. An equal time ratio control (ETRC) AC chopper [22] approach has been adopted to control the dump load. IGBTs 1, 3 and 5 conduct for the positive half cycle and IGBTs 2, 4 and 6 conduct for the negative half cycle of the applied AC input. A symmetrical pulse width is applied for the conduction of the switches. The power conducted through the dump load is the difference between the generated power and the consumer load. The voltage amplitude is determined and compared with a reference voltage, which is taken as proportional to the rated terminal voltage of the SEIG. The error is scaled (gain) and algebraically added to the previous sample's pulse width reference. When the desired terminal voltage is achieved, the error signal becomes zero and the pulse width reference holds its level. The pulse width reference is compared with a symmetrical triangular wave to obtain a pulse of suitable width. The frequency of the symmetrical triangular wave (carrier frequency) is selected according to the static switching device. The pulse output is then given to the IGBT switches through the opto-isolation and pulse driver circuit.
III. MODELING AND SIMULATION
A. Determination of Excitation Capacitor
Figure 2 shows the per phase equivalent circuit of the three phase SEIG with load, where Rs, Xls, Rr, Xlr are the per phase stator and

rotor resistances and reactances; Xm and XC are the per phase magnetizing and capacitive reactances; Is, Ir, IL are the per phase stator, rotor and load currents; Vg is the air gap voltage; and F and ν are the per unit frequency and speed. Rotor parameters are referred to the stator and all reactances are at base frequency.
Figure 2 : Steady state equivalent circuit of SEIG with a balanced load

Applying Kirchhoff's voltage law we have:

(Z + ZL) IL = 0    ...(1)

where Z = ZC || (ZS + (ZM || Zr)) and ZC = -jXC/F, ZS = Rs + jFXls, Zr = Rr F/(F − ν) + jFXlr, ZM = jFXm || Rm. Under steady state conditions IL cannot be zero and hence

Z + ZL = 0    ...(2)

This can be expressed as a function of the capacitive reactance (XC) and the per unit frequency (F) as

f(XC, F) = Z + ZL = 0    ...(3)

This equation is solved by the sequential unconstrained minimization technique (SUMT) in conjunction with Rosenbrock's method of rotating coordinates for XC and F. The required capacitance is then calculated as

C = 1 / (2π fb XC)    ...(4)
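As a numerical illustration of solving equation (3) for XC and F (the paper uses SUMT with Rosenbrock's method; here a generic derivative-free minimiser stands in for it), the sketch below computes the required excitation capacitance for assumed per-unit machine and load parameters, which are not the paper's data.

```python
import math
from scipy.optimize import minimize

# Illustrative per-unit machine and load parameters (placeholders)
Rs, Xls, Rr, Xlr, Xm, Rm = 0.06, 0.1, 0.06, 0.1, 3.0, 150.0
nu = 1.0                      # per-unit rotor speed
ZL = complex(1.5, 0.0)        # per-unit consumer load impedance
f_base = 50.0                 # base frequency (Hz)

def par(a, b):
    return a * b / (a + b)

def loop_impedance(Xc, F):
    """Z + ZL of equations (1)-(3); reactances are defined at base frequency."""
    Zc = complex(0.0, -Xc / F)
    Zs = complex(Rs, F * Xls)
    Zr = complex(Rr * F / (F - nu), F * Xlr)
    ZM = par(complex(0.0, F * Xm), complex(Rm, 0.0))
    return par(Zc, Zs + par(ZM, Zr)) + ZL

def objective(x):
    Xc, F = x
    if abs(F - nu) < 1e-3:            # avoid the singular point F = nu
        return 1e9
    return abs(loop_impedance(Xc, F)) ** 2

# For realistic machine data the minimum approaches zero at the operating point.
res = minimize(objective, x0=[3.0, 0.97], method="Nelder-Mead")
Xc_opt, F_opt = res.x
C = 1.0 / (2 * math.pi * f_base * Xc_opt)    # equation (4), in per-unit terms
print(Xc_opt, F_opt, C)
```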

B. Controller Design
The schematic diagram of the controller is shown in Figure 3. To maintain the terminal voltage of the SEIG constant, the terminal voltage error e(k) at the kth sampling instant is computed as

e(k) = VP(k) − Vref    ...(5)

where VP(k) is the peak amplitude of the SEIG terminal voltage and Vref is the reference voltage at the kth sampling instant. The terminal voltage error is processed through the load controller. The output of the load controller at the kth sampling instant is given as:

Taking an over voltage of 10 % of rated ac voltage for transient condition, the peak voltage is given as: (12) Vpeak 1.10 326.60 359.3 V Dump load resistance is given as: R D (Vo ) 2 / P (163.3) 2 /1000 26.67 Where, P is the per phase power rating of SEIG. The AC rms current is given as I AC (Vo / R D ) m for 0<m<=1 The maximum AC current for m=1 is given as: (15) I AC (163.3/26.67) 1 6.12 A Taking a creast factor of 1.41 for ac wave and average distortion factor of 0.8 [22], the peak ac current is given as: (16) I AC,peak 1.41 6.12 / 0.8 10.79 A The maximum voltage and current ratings are 359.3 V and 10.79 A respectively. D. Simulation Digital simulation of proposed load controller been carried out with MATLAB Version 7.2 on Simulink Version 6.4. This Power System Blockset facilitates to simulate saturation of asynchronous machines. A 3.73 kW, 400 V, 50 Hz delta connected asynchronous machine is used as an SEIG including the machine saturation characteristics which is determined by conducting synchronous speed test. The synchronous speed test specifies the no load saturation curve for induction machine. The voltage and current of synchronous speed test is given in a 2-by-n matrix, where n is the number of points taken from the saturation curve to simulate saturation on MATLAB. The machine parameters are given in appendix. The dropping torque- speed characteristics of prime mover is given as Tsh= k1 - k2 r; Where, k1 and k2 are the prime mover constants given in Appendix. The terminals of the machines are connected with delta connected capacitor bank of 25 F per phase. The output is connected with a controlled dump load and consumer load of resistive and reactive nature. The voltage amplitude is calculated as 2 2 2 1/ 2 , where V , V , V are the line ab bc ca Vp {2 / 3(Vab Vbc Vca )} voltage and Vp is the peak amplitude. The peak amplitude is compared with the reference and processed through the proposed sample based controller. The output of the controller is compared with symmetrical triangular wave and through a relational operator to obtain a pulse of suitable duration (width). These pulses are given to gate of the IGBT switches. IV. RESULTS Figure 4 shows the transient waveform of SEIG voltage buildup, application of resistive load (ILoad_) and frequency response. The no load voltage generation is around 600 V and frequency is around 54 Hz. With the application of controlled dump load at t=0.5 seconds, the terminal voltage drop to its rated value and so as the frequency. At time t=0.7 seconds a resistive load is applied. The terminal voltage and frequency remains constant on application of load. Figure 5 and Figure 6 shows the transient waveform on (13)

(14)

IP(k) = A e(k) + IP(k−1)    ...(6)

where A is the gain of the controller to adjust the overshoot and IP(k−1) is the load controller reference at the (k−1)th sampling instant. Taking the z-transform:

IP(z) = A E(z) + z⁻¹ IP(z)    ...(7)

IP(z) / E(z) = A / (1 − z⁻¹)    ...(8)

Figure 3: Schematic diagram of proposed controller
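The sample-based control law of equations (5)-(8) amounts to an accumulator acting on the scaled voltage error. A compact Python sketch is given below; the gain, reference and saturation limits are illustrative, and the saturation block mirrors the one shown in the controller schematic.

```python
class EtrcLoadController:
    """Pulse-width reference generator of equations (5)-(8): IP(k) = A*e(k) + IP(k-1)."""

    def __init__(self, gain_a, v_ref, ip_min=0.0, ip_max=1.0):
        self.gain_a = gain_a
        self.v_ref = v_ref
        self.ip = 0.0
        self.ip_min, self.ip_max = ip_min, ip_max

    def update(self, v_peak):
        error = v_peak - self.v_ref                 # equation (5)
        self.ip = self.gain_a * error + self.ip     # equation (6)
        # saturation keeps the pulse-width reference (modulation index) in range
        self.ip = max(self.ip_min, min(self.ip_max, self.ip))
        return self.ip

# Illustrative use: rated peak phase voltage of about 326.6 V as reference
ctrl = EtrcLoadController(gain_a=0.8, v_ref=326.6)
m = ctrl.update(v_peak=340.0)   # over-voltage -> larger dump-load conduction
```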

C. Preliminary Design of Dump Load
The input AC phase voltage to the dump load is given as

v = VP sin(ωt)    ...(9)

where VP is the peak AC voltage and ω is the angular frequency. A MATLAB program has been generated which shows that with ETRC the average conduction period, or the rms voltage appearing at the load side, is a function of the modulation index (m) as

Vo = (VP / 2) m,  for 0 < m ≤ 1    ...(10)

where m = (amplitude of the modulating voltage Am) / (amplitude of the carrier triangular wave Ac).

For a three phase SEIG with a line-to-line voltage of 400 V, the phase voltage is 230.94 V and the peak voltage is VP = 230.94 × √2 = 326.60 V. Thus the rms output voltage for m = 1 is

VO = (326.60 / 2) × 1 = 163.3 V    ...(11)


V. HARMONIC ANALYSIS

The equal time ratio control (ETRC) AC chopper circuit is controlled by the switching function h(t), which is generated by comparing a control signal (IP(k)) with a triangular wave (vtri) as shown in Figure 8. As a result, the output voltage may be found from

vo(t) = h(t) · vs(t)    ...(17)

where vs(t) = VP sin(ωs t), VP is the peak AC voltage and ωs is the angular frequency. The switching function h(t) may be expressed in a Fourier series as

h(t) = C0 + Σ (n = 1 to ∞) Cn sin(n p ωc t + θn)    ...(18)

Figure 4: Voltage build up and application of main load

application and removal of 1500 W resistive load and 1000 W 0.8 lagging power factor reactive load respectively. The waveform from top to bottom are terminal Voltage (Vabc), generator winding current (IG, abc), load current (IL, abc), dump load current (ID,a), line current (Ia) and frequency (f) respectively. Figure 7 shows the transient waveform on application of dynamic load. On application of load, the average of dump load current decreases and main load current increases. However, the terminal voltage remains constant and so as the frequency. A slight increment in frequency is observed for loads of lagging power factor which is under acceptable limits.

where, c is the carrier triangular wave angular frequency and p is the frequency ratio of carrier triangular wave to source wave. The output voltage is given as

vo(t) = VP m sin(ωs t) + (VP/π) Σ (n = 1 to ∞) (1/n) sin(nπm) [ sin{(np + 1)ωs t + nπm} + sin{(1 − np)ωs t + nπm} ]    ...(19)

where m is the time ratio (modulation index).
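The harmonic features listed after equation (19) (odd harmonics only, sidebands around multiples of p·ωs) can be checked numerically by chopping a sine wave with an ETRC switching function and taking the FFT of the result. The sketch below uses a normalised ramp comparison for simplicity and illustrative values of the modulation index and sampling rate.

```python
import numpy as np

f_s, f_c = 50.0, 3000.0          # source and carrier frequency (carrier as chosen in the text)
m = 0.6                          # modulation index (illustrative)
fs_sample = 200_000.0            # numerical sampling rate
t = np.arange(0, 0.2, 1.0 / fs_sample)

v_source = np.sin(2 * np.pi * f_s * t)          # vs(t), unit amplitude
ramp = (t * f_c) % 1.0                          # normalised carrier phase
h = (ramp < m).astype(float)                    # ETRC switching function h(t), duty = m
v_out = h * v_source                            # equation (17)

spectrum = np.abs(np.fft.rfft(v_out)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1.0 / fs_sample)

# Strongest components: expect the fundamental (amplitude about m) plus
# sidebands near n*f_c +/- f_s, consistent with equation (19).
top = np.argsort(spectrum)[-8:][::-1]
for k in top:
    print(f"{freqs[k]:8.0f} Hz  amplitude {spectrum[k]:.3f}")
```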


Figure 5: SEIG-load controller system supplying resistive load



Figure 6: SEIG-load controller system supplying reactive load.



Figure 7: SEIG-load controller system supplying dynamic load.


has been developed for induction motor working as a self excited induction generator. The developed SEIG-load controller system acts as the voltage and frequency regulator for the SEIG. The duration of voltage transients on application and removal of static and dynamic load are found satisfactory. The THD of SEIG voltage is found within acceptable limits under average amount of consumer load. The load controller system works as a linear dump load. Only the odd harmonics are present in the system which may be easily filter out with passive filtration. It has also been observed that there is a small increase in frequency to compensate for lagging power factor loads.

Figure 8: Generation of ETRC output voltage

Equation (19) reveals the following features of this technique:
1. The fundamental component of vo(t) is in phase with vs(t).
2. Only odd harmonics are present.
3. Harmonics are sidebands of npωs with amplitude equal to VP Cn.
4. Increasing p shifts the lowest order harmonics far from the fundamental, making them easy to filter.
The amplitude of the fundamental component of vo(t) depends only on the DC value of h(t), i.e. the modulation index m. The developed ETRC AC chopper controlled SEIG-load controller behaves as a linear dump load compared with a mark space ratio controlled dump load; the dump load current profile follows the dump load voltage profile. The harmonics appearing in mark space ratio control are random in nature and the harmonic content is higher, while in the proposed scheme the harmonics appearing in the dump load voltage and current are of the same order and only odd harmonics are present. The THD appearing in the proposed scheme is lower than with mark space ratio control even though it does not involve any filter capacitor. A change in the main load changes the modulation index and consequently the pulse width and average conduction period of the load controller; the change in pulse width changes the THD appearing in the system. The dominant harmonics are of order 2P ± 1, where P is the number of pulses per half cycle. For the present system the carrier frequency is chosen as 3 kHz; hence the minimum order of harmonics appearing is the 29th and 31st. Table I summarizes the main load, terminal voltage, voltage THD and current THD of the main load, dump load and generator at different steps of resistive and reactive load application. The harmonic content is low in the proposed system even though it does not involve any filter.
VI. CONCLUSION
An ETRC AC chopper controlled SEIG-load controller

TABLE I: Terminal voltage, voltage THD and current THD at different main loads

S.No. | Main Load | Terminal Voltage (V) | Voltage THD (%) | Current THD (%): Main Load | Dump Load | Generator
1 | 0 W | 401.8 | 5.12 | 5.54 | 5.69 | 5.68
2 | 1000 W | 401.7 | 5.52 | 5.58 | 9.04 | 6.29
3 | 2000 W | 399.1 | 5.63 | 5.63 | 11.90 | 7.71
4 | 3000 W | 398.7 | 5.64 | 5.64 | 18.60 | 6.89
5 | 500 W, 0.8 pf lagging | 398.2 | 2.29 | 4.98 | 4.21 | 4.02
6 | 1000 W, 0.8 pf lagging | 397.6 | 3.92 | 3.76 | 5.69 | 7.98
7 | 1500 W, 0.8 pf lagging | 396.4 | 3.99 | 2.84 | 5.72 | 8.01

VII. APPENDIX
A. SEIG Parameters
The parameters of the 5 HP, 400 V, 3-phase, 50 Hz, 4-pole SEIG are: stator resistance (Rs) = 1.405 Ω, rotor resistance (Rr) = 1.39 Ω, stator leakage inductance (Ls) = 0.0078 H, rotor leakage inductance (Lr) = 0.0078 H, inertia (J) = 0.138 kg·m².
The magnetization of the SEIG is modeled as a nonlinear relation between the mutual inductance (Lm) and the magnetizing current (Im), expressed as Lm = A1 + A2·Im + A3·Im² + A4·Im³, where A1 = 0.1407, A2 = 0.0014, A3 = -0.0012, A4 = 0.00005.
B. Prime Mover Characteristics
Tsh = k1 − k2·ωr, where k1 = 600 and k2 = 3.5.
C. Controller Parameters
A = 0.8, sampling time (Ts) = 0.0001 s.
VIII. REFERENCES
[1] Chang, T. F. (1993) Capacitance requirements of self-excited induction generator, IEEE Trans. on Energy Conversion, Vol. 8, No. 2, pp. 304-310.
[2] Malik, N. H. and Mazi, A. A. (1987) Capacitance requirements for isolated self excited induction generators, IEEE Trans. on Energy Conversion, Vol. EC-2, No. 1, pp. 62-68.
[3] Singh, S. P., Singh, B. and Jain, M. P. (1993) Comparative study on the performance of a commercially designed induction generator with




4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Soft Magnetic Composite Core A New Perspective for Linear Switched Reluctance Motor
N.C.Lenin
Research Scholar, Department of Electrical Engineering, Anna University Chennai, India nclenin@gmail.com

Dr.R.Arumugam , Member IEEE


Retired Professor, Department of Electrical Engineering, Anna University, Chennai, India.

Abstract-This paper presents electromagnetic aspects of Soft Magnetic Composite (SMC) material application in a double sided longitudinal flux Linear Switched Reluctance Motor (LSRM) for the first time. The translator core of the LSRM is made of two types of material: the classical laminated silicon steel sheet (M-19) and the soft magnetic composite material. First, the translator core made of laminated steel is analyzed. The next step is to analyze the identical-geometry LSRM with the SMC material SOMALOY 500 for its translator core. Finite element analysis has been carried out to obtain the magnetic conditions, average force, maximum force, translator weight and inductances. The obtained simulation results show that the proposed translator structure using SMC material reduces the translator weight as well as the force ripple. Keywords - Soft magnetic composite, linear switched reluctance motor, force ripple, finite element analysis, core loss.

I. INTRODUCTION

The abundance of power electronics technology together with advances in material technology has led to the development of special machines for myriad applications in recent times. One of these special machines, the Linear Switched Reluctance Motor (LSRM), is gaining recognition in the linear electric drives market due to its simple and rugged construction, low expected manufacturing cost, fault tolerance capability and high force to inertia ratio [1]. Innovative practices in powder metallurgy technology have led to the development of Soft Magnetic Composite (SMC) materials that have unique properties and that may offer a revolution in the design and manufacture of electrical machines [2]-[4]. The unique properties include three dimensional (3D) isotropic ferromagnetic behavior, very low eddy current loss, relatively low total iron loss at medium and high frequencies, possibilities for improved thermal characteristics, flexible machine design and assembly, and a prospect for greatly reduced production cost [5], [6]. One possible approach is to replace the classical laminated magnetic circuit of electric machines made from silicon iron sheets with soft magnetic composite or iron powder material [8]-[11]. The results of using these materials are shorter magnetic circuit manufacturing time, a smaller number of components and less scrap. The use of SOMALOY 500 [6], a soft magnetic composite material developed by Hoganas of Sweden, for the stator core of an 8/6 rotary SRM offered a cost effective measure for many low performance applications of this special machine. Despite many advantages, the LSRM has some drawbacks: it requires an electronic control and a translator position sensor, a large capacitor is needed in the dc link, and the doubly salient structure causes force ripple, which leads to vibration and acoustic noise.

This paper presents the analysis of magnetic conditions in a four phase LSRM using SMC materials for the first time. First, both the stator core and the translator core are made of laminated silicon steel sheet; in the second case the stator core is laminated silicon steel sheet and the translator core is made of soft magnetic composite material. The investigation is focused on the influence of the constructional parameters on the electromagnetic force characteristics, translator weight, percentage of force ripple and core loss.

II. LINEAR SWITCHED RELUCTANCE MOTOR

A linear switched reluctance motor is an electrical machine in which the force is developed by the tendency of the translator to occupy a position that minimizes the reluctance of the magnetic path of the excited phase winding. The LSRM is a doubly salient but singly excited machine wherein the translator carries the winding while the stator is simply made of stacked silicon steel laminations. This lends a simpler geometry to the LSRM, as evidenced from the two dimensional (2D) CAD model of a double sided LSRM shown in figure.1.

Figure.1 2D CAD model of a LSRM using M-19 steel translator.

III.

SOFT MAGNETIC COMPOSITE MATERIALS IN ELECTRICAL MACHINES

Electrical steel lamination is the most commonly used core material in electrical machines. Electrical steels are typically classified into grain-oriented electrical steels and non-oriented electrical steels. Typical applications for grain-oriented steels are power transformer cores, whereas non-oriented steels are broadly used in different kinds of rotating and linear electrical machines. Electromechanical steels currently used in the manufacture of electrical machines possess a high saturation induction (Bs of about 2 T) and low coercive force (Hc < 100 A/m), and they are characterized by low total losses. Electrical sheet steels have been the dominant choice for the soft iron components in electrical machines subject to time varying magnetic fields. The new soft iron powder metallurgy materials can be considered as an alternative for the magnetic core of electrical machines. The basis for soft magnetic composites is bonded iron powder as shown in


figure.2. The powder is coated, pressed into a solid material using a die, and heat-treated to anneal and cure the bond.

The force profile and the inductance profile of both the structures obtained from FEA are compared in figure.5 and figure.6 respectively. The entire comparisons of the two structures are tabulated in table II.

Figure.2 Picture of SMC material

Figure.4 2-D CAD model of LSRM with SMC translator

The B-H characteristics of M19 silicon steel and the SMC material SOMALOY 500 are shown in figure.3, which reveals that although the SMC has inferior relative permeability compared with lamination steel, it still possesses the following desirable characteristics:
a. reduced copper volume as a result of increased fill factor and reduced end winding length, and reduced copper loss as a result of the reduced copper volume;
b. reduced high frequency tooth ripple losses, since the SMC has essentially lower eddy current losses;
c. potential for reduced air gap length as a result of the tight tolerances maintained in manufacturing SMC material;
d. modular construction, allowing easy removal of an individual modular unit for quick repair or replacement;
e. easy recycling back into powder form with pressure, with the copper windings readily removed.

Figure.5 Force profile

IV.

2D FINITE ELEMENT ANALYSIS

The time stepped FEA is the most accurate method available to obtain the magnetic characteristics in an electromagnetic device. In this paper a two-dimensional FEA has been carried out on the two proposed machines depicted in table I using FEA based Computer Aided Design (CAD) package MagNet 6.26.2.
TABLE I
STUDIED STRUCTURES
Structure 1: Laminated steel stator (M19), laminated steel translator (M19)
Structure 2: Laminated steel stator (M19), Soft Magnetic Composite (SMC) translator

Figure.3 B-H characteristics of laminated steel and SMC (B-H curves of M19 steel and SOMALOY 500)

Figure.6 Inductance profile

The inductance profile obtained at the operating current indicates that the electromagnetic force in the LSRM is a function of the variation of the excited phase inductance with the translator position and is expressed mathematically as

F = (i²/2)·(dL/dx)   (N)                                                  (1)

where i is the rated current in amperes and dL/dx is the rate of change of inductance with respect to the translator position. The average electromagnetic force is given by

Favg = (1/n)·Σ(i = 1 to n) Fi   (N)                                       (2)

where Fi is the instantaneous force and n is the number of translator positions.
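As a rough illustration of (1) and (2), the sketch below computes a force profile from a sampled inductance-versus-position curve and averages it. The aligned and unaligned inductance values are taken from Table II, but the inductance shape and the current value are made-up placeholders, not data from the paper.

```python
import numpy as np

def force_profile(positions_m, inductance_H, current_A):
    """Electromagnetic force F = 0.5 * i^2 * dL/dx, per equation (1)."""
    dL_dx = np.gradient(inductance_H, positions_m)   # numerical slope of L(x)
    return 0.5 * current_A**2 * dL_dx                # force in newtons

# Hypothetical inductance profile over one translator stroke (shape is illustrative)
x = np.linspace(0.0, 0.03, 31)                                  # positions, m
L = 0.0324 + (0.0872 - 0.0324) * np.sin(np.pi * x / 0.03)**2    # H, unaligned -> aligned -> unaligned
i_rated = 10.0                                                  # assumed rated current, A

F = force_profile(x, L, i_rated)
print(f"Peak force {F.max():.1f} N, average force {F.mean():.1f} N")   # equation (2) average
```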

V.

FORCE RIPPLE OF LINEAR SWITCHED RELUCTANCE MOTOR

The projected model of the machine when the translator core is the SMC material SOMALOY 500 is shown in figure.4.

One of the inherent problems in the LSRM is force ripple, due to the switched nature of the force production. Force ripple may be determined from the variations in the output force.


TABLE II
SUMMARY OF THE COMPARISON OF THE TWO PROPOSED STRUCTURES

Type of structure            Peak Propulsion   Average Propulsion   Translator    Aligned          Unaligned
                             Force (N)         Force (N)            weight (kg)   Inductance (H)   Inductance (H)
Translator with M19 steel    247.5             191.8                3.456         0.0872           0.0324
Translator with SMC          222.2             184.1                3.328         0.0821           0.0318

The force dip is the distance between the peak value and the common point of overlap in the force characteristics of two consecutive phases, as illustrated in figure.7. Calling the maximum value of the static force (the peak static force) Fmax and the minimum value that occurs at the intersection point Fmin, the percentage force ripple may be defined as

%Force Ripple = [(Fmax - Fmin) / Favg] × 100                              (3)

The force dip is an indirect indicator of force ripple in the machine: the smaller the force dip, the smaller the force ripple. The analysis of the two graphs indicates that there is indeed a reduction in force ripple in the SMC cored LSRM, but with a reduction in the average force, as summarized in table III.

VI. CORE LOSSES
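A minimal sketch of the force-ripple metric in (3); only the Fmax/Fmin/Favg arithmetic reflects the text, and the headline figures used below are those reported for the M19 translator in Tables II and III.

```python
def percent_force_ripple(f_max, f_min, f_avg):
    """Percentage force ripple = (Fmax - Fmin) / Favg * 100, per equation (3)."""
    return (f_max - f_min) / f_avg * 100.0

# M19 translator: Fmax = 247.5 N, Fmin = 139.5 N, Favg = 191.8 N
print(f"M19 ripple: {percent_force_ripple(247.5, 139.5, 191.8):.2f} %")   # ~56.3 %
# SMC translator: Fmax = 222.2 N, Fmin = 128.3 N, Favg = 184.1 N
print(f"SMC ripple: {percent_force_ripple(222.2, 128.3, 184.1):.2f} %")   # ~51.0 %
```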

Core losses are proportional to the frequency of the flux, the magnitude of the flux density and the weight of material traversed by the flux. Dynamic FEA simulations of the LSRMs with an asymmetric converter are performed and the average flux densities for each stator and translator section are calculated for every millimeter (mm) of movement. The average flux densities are normalized and approximated using fourth-order polynomial equations. Eddy current losses and hysteresis losses can then be directly estimated using the equations described in [12], [13]. The total core losses for both machines are given in Table IV.
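The estimation step can be sketched as below. The polynomial fit mirrors the fourth-order approximation mentioned above, while the loss expressions are generic Steinmetz-type placeholders rather than the paper's actual formulas from [12], [13]; the waveform, coefficients and frequency are all illustrative assumptions.

```python
import numpy as np

# Hypothetical normalized flux-density samples for one core section vs. translator travel (mm)
travel_mm = np.arange(0, 31)
b_norm = 0.5 + 0.5 * np.sin(2 * np.pi * travel_mm / 30.0)   # placeholder waveform

poly = np.polyfit(travel_mm, b_norm, 4)                     # fourth-order polynomial fit
b_fit = np.polyval(poly, travel_mm)

# Generic Steinmetz-style specific losses (illustrative coefficients, W/kg)
f_e = 50.0                          # assumed excitation frequency, Hz
B_pk = 1.2 * b_fit.max()            # assumed peak flux density, T
k_h, k_e = 0.02, 1e-4               # assumed hysteresis / eddy coefficients
p_core = k_h * f_e * B_pk**2 + k_e * (f_e * B_pk)**2

mass_kg = 3.328                     # SMC translator mass from Table II
print(f"Illustrative core loss estimate: {p_core * mass_kg:.1f} W")
```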
TABLE III
SUMMARIZED RESULTS OF FORCE RIPPLE

Type of structure            Minimum Propulsion Force (N)   % Force ripple
Translator with M19 steel    139.5                          56.31%
Translator with SMC          128.3                          51.1%

Figure.7 Force vs translator position characteristics showing force dip

The force dip of both the machines has been computed by FEA and the results are shown in figure.8 and figure.9 respectively.

TABLE IV
CORE LOSS ESTIMATE FOR THE TWO STRUCTURES

Type of structure            Core loss (watts)
Translator with M19 steel    123.42
Translator with SMC          109.65

The core losses for the translator with SMC material are 11.2% lower than for the translator with M19 steel, owing to the lower eddy current losses of the SMC material.

Figure.8 Force vs translator position of laminated steel cored LSRM

Figure.9 Force vs translator position of SMC cored LSRM

VII. CONCLUSION

This paper addressed the influence of a soft magnetic composite material used for the translator core of a four phase LSRM on the electromagnetic characteristics of the machine. Time stepped two dimensional FEA, carried out using the finite element based CAD package MagNet 6.26.2 to obtain the static force characteristics, resulted in a 9.43% reduction in the force ripple as well as a 3.7% reduction in translator weight, but with a reduction in average force, when the translator core of a conventional LSRM was replaced with the SMC material SOMALOY 500. The core losses for the translator with SMC material are 11.2% lower than for the translator with M19 steel. The future course of research may involve study of the machine's dynamic performance, thermal characteristics and vibration behavior.


REFERENCES
[1] R.Arumugam, D.A.Lowther, R.Krishnan and J.F.Lindsay, Magnetic field analysis of a switched reluctance motor using a two dimensional finite element model, IEEE Trans. Magn., vol. MAG-21, no. 5, pp. 1883-1885, Sep. 1985.
[2] R.Rabinovici, Force ripple, vibrations, and acoustic noise in switched reluctance motors, HAIT Journal of Science and Engineering B, vol. 2, issues 5-6, pp. 776-786, July 2005.
[3] M.Persson, P.Jansson, A.G.Jack, B.C.Mecrow, Soft magnetic composite materials use for electrical machines, 7th International Conference on Electrical Machines and Drives, Durham, England, Sep. 1995.
[4] A.G.Jack, Experience with the use of soft magnetic composites in electrical machines, International Conference on Electrical Machines, Istanbul, Turkey, 1998, pp. 1441-1448.
[5] Guo, Y.G., Zhu, J.G., Watterson, P.A., and Wu, W., Design and analysis of a transverse flux machine with soft magnetic composite core, 6th International Conference on Electrical Machines and Systems, Beijing, China, Aug. 2003.
[6] The latest development in soft magnetic composite technology from Hoganas metal powders, Hoganas Manual, 1999-2003.
[7] MagNet finite element software, Version 6.22, user manual, Infolytica, Canada.
[8] L.M.M.Sitsha, M.J.Kamper, Some design aspects of tapered and straight stator pole 6:4 switched reluctance machine, in Proc. IEEE International Conference (AFRICON '99), pp. 683-686.
[9] M.Moallem, C.M.Ong and L.E.Unnewehr, Effect of rotor profiles on the torque of a switched-reluctance motor, IEEE Trans. Ind. Applicat., vol. 28, no. 2, pp. 364-369, Mar./Apr. 1992.
[10] K.Bienkowski, J.Szczypior, B.Bucki, A.Biernat and A.Rogalski, Influence of geometrical parameters of switched reluctance motor on electromagnetic torque, Proceedings of XVI International Conference on Electrical.
[11] C.Neagoe, A.Foggia and R.Krishnan, Impact of pole tapering on the electromagnetic torque of the switched reluctance motor, in Conf. Rec. IEEE Electric Machines and Drives Conference, 1997, pp. WA1/2.1-WA1/2.3.
[12] Y.Hayashi and T.Miller, A new approach to calculating core losses in the SRM, IEEE Transactions on Industry Applications, vol. 31, no. 5, pp. 1039-1046, Sept.-Oct. 1995.
[13] P.Vijayaraghavan, Design of switched reluctance motor and development of a universal controller for switched reluctance and permanent magnet brushless drives, PhD dissertation, Virginia Tech, 2001.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Voltage and Frequency Control with Integrated Voltage Source Converter and Star/Hexagon Transformer for IAG Based Stand-alone Wind Energy Conversion System
Bhim Singh*, Shailendra Sharma*, Ambrish Chandra$ and K. Al-Haddad$

*Electrical Engineering Department, IIT Delhi, New Delhi 110016, India
$Electrical Engineering Department, Ecole de Technologie Superieure, 1100 Notre-Dame Ouest, Montreal, Quebec, H3C 1K3, Canada
(Email: bhimsinghr@gmail.com, ssharma.iitd@gmail.com, ambrish.chandra@etsmtl.ca, kamal.al-haddad@etsmtl.ca)

Abstract This paper deals with the control of voltage and frequency of a stand-alone wind energy conversion system (WECS) based on isolated asynchronous generator (IAG) feeding three-phase four- wire loads. The proposed control scheme is realized using synchronous reference frame (SRF) theory. The proposed voltage and frequency controller (VFC) consists of a reduced switch two leg voltage source converter (VSC) and split capacitors with a battery at its dc bus. The reduced switch VSC is connected to the IAG through star/hexagon transformer. The VFC is capable of fulfilling the demand of reactive power of an IAG along with the function of load leveling, harmonic elimination, voltage regulation and load balancing. The simulated results are presented to demonstrate the capabilities of the proposed VFC for IAG in WECS. The complete IAG system is modeled and simulated in the MATLAB using the Simulink and the sim power system (SPS) toolboxes. Index Terms- Asynchronous generator,Frequency Control, Voltage Control, Voltage source converter, Wind energy conversion system, Star/hexagon Transformer.

I. INTRODUCTION

THE rapid development in power electronics and DSPs has made it possible to use modern control techniques in wind energy conversion systems (WECS). Along with grid connected systems, stand-alone power generation with wind and micro-hydro driven turbines is gaining popularity in remote places where grid connection is not yet feasible due to geographic and economic issues [1]. Isolated asynchronous generators (IAG) are increasingly being used with these prime movers because of their relative advantages such as brushless and rugged construction, low cost, maintenance and operational simplicity, self-protection against faults, good dynamic response, and capability to generate power at varying speeds. The latter feature facilitates IAG operation in stand-alone mode to supply far flung and remote areas where extension of the grid is not economically viable. However, a fundamental problem with the IAG is its inability to control voltage under varying loads and wind speed; to maintain constant voltage, a variable reactive power support is a must [1-2]. When the IAG is coupled with a wind turbine, the input power varies depending on the wind speed, which leads to variation in the terminal voltage and frequency [2]. Substantial work has been reported on the control of IAG in WECS [3-4]. In [4] a voltage and frequency controller (VFC) is used with a 3-phase capacitor excited IAG feeding 3-phase 4-wire loads. In that scheme, three single phase voltage source converters (VSC) with a battery at the dc link have been used with the IAG through three single phase transformers.

In this paper, a reduced switch two-leg VSC with a battery at its dc bus is used with a star/hexagon transformer as a VFC of an IAG to feed three-phase four-wire loads. The synchronous reference frame (SRF) [5] theory based reference generator current detection method is used in the control of the VFC. A phase locked loop (PLL) is used to detect the angle of the system phase voltage for the SRF theory and for estimating the corresponding WECS frequency [6]. The use of the star/hexagon transformer further reduces the VSC kVA rating by compensating the zero sequence fundamental and harmonic currents [7].

II. SYSTEM CONFIGURATION AND PRINCIPLE OF OPERATION

Fig. 1 shows the schematic diagram of the proposed IAG along with its VFC and consumer loads. A reduced switch two-leg VSC with split capacitors as a third leg is used with a battery energy storage system (BESS) as a VFC. An RC ripple filter is used at the point of common coupling (PCC) to suppress the high frequency switching ripple of the VSC. The VSC supplies the reactive power to the IAG to build the rated voltage and further regulates the voltage at varying loads. With the battery at its dc bus, active power control takes place by exchanging active power under varying wind speeds and loads. The battery absorbs the surplus power when the WECS frequency is above the rated frequency and delivers the deficit power to the common bus when the WECS frequency goes below the rated value. The VSC consists of an IGBT based two-leg module and split capacitors as a third leg along with a BESS. The two legs comprise two IGBTs each, and the common point of each leg is connected to the individual phases b and c of the secondary windings of the star/hexagon transformer through an interfacing inductor. The mid-point of the third leg of split capacitors is connected to phase a of the star/hexagon transformer. In practice the majority of small loads are single-phase loads;


therefore, to take this into consideration, a three-phase four-wire system is investigated in this WECS. The load neutral terminal is provided by the primary winding of the star/hexagon transformer.

Fig. 1 Schematic diagram of VFC for isolated wind energy conversion system

III. CONTROL STRATEGY

As shown in Fig. 2, the control strategy of the proposed VFC is realized using SRF theory for estimation of the reference IAG currents. The load currents are sensed and transformed into the d-q rotating frame, and the required transformation angle is obtained using a PLL over the PCC voltages. The direct axis load current is filtered using a low pass filter (LPF) to obtain the active power component of the load currents. This active power component, along with the output of the frequency PI controller, provides the active power component of the IAG currents. The reference q-axis IAG current is estimated as the algebraic difference of the filtered quadrature axis component of the load currents and the output of the voltage PI controller. The basic equations of the control scheme used in modeling the proposed controller are as follows.

A. Synchronous Reference Frame Theory

In the SRF, load currents are transformed into d-q current components as
[iLd]         [ cos(θ)   cos(θ - 2π/3)   cos(θ + 2π/3) ] [iLa]
[iLq] = (2/3) [ sin(θ)   sin(θ - 2π/3)   sin(θ + 2π/3) ] [iLb]            (1)
[iL0]         [  1/2          1/2             1/2      ] [iLc]

The frequency error is

fe(n) = frf(n) - f(n)                                                     (2)

where frf is the reference frequency (i.e. 50 Hz in this case) and f is the frequency of the terminal voltage of the IAG. The instantaneous value of f is estimated using the PLL at the PCC voltages. The output of the frequency PI controller is

ifd(n) = ifd(n-1) + kpf {fe(n) - fe(n-1)} + kif fe(n)                     (3)

Similarly, the error between the two dc capacitor voltages is

Vdce(n) = Vdc1(n) - Vdc2(n)                                               (4)

where Vdc1 is the voltage across the upper capacitor at the dc bus and Vdc2 is the voltage across the lower capacitor. The output of the dc voltage PI controller is

Idcp(n) = Idcp(n-1) + kpf {Vdce(n) - Vdce(n-1)} + kif Vdce(n)             (5)

Fig. 2 Proposed control scheme of VFC for IAG in WECS
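A compact sketch of the incremental PI updates in (3) and (5). The class below is a generic discrete PI form, not code from the paper; the gains and measurements are illustrative (the frequency-loop gains happen to match the Kpf and Kif values listed in the appendix).

```python
class DiscretePI:
    """Incremental PI: u(n) = u(n-1) + kp*(e(n) - e(n-1)) + ki*e(n)."""
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.u_prev, self.e_prev = 0.0, 0.0

    def update(self, error):
        u = self.u_prev + self.kp * (error - self.e_prev) + self.ki * error
        self.u_prev, self.e_prev = u, error
        return u

# Frequency loop, equation (3): output ifd from fe = frf - f
freq_pi = DiscretePI(kp=5.0, ki=150.0)
i_fd = freq_pi.update(50.0 - 49.95)        # assumed measured frequency 49.95 Hz

# DC-bus balancing loop, equation (5): output Idcp from Vdc1 - Vdc2
dc_pi = DiscretePI(kp=0.5, ki=5.0)         # illustrative gains
i_dcp = dc_pi.update(301.0 - 299.0)        # assumed split-capacitor voltages, V
print(i_fd, i_dcp)
```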

The instantaneous d-axis component of the reference IAG currents is

igd* = if + iLddc                                                         (6)

where if = idcp + ifd.

C. Quadrature Axis Component of Reference IAG Current

The difference of the fundamental reactive power component of the loads and the output of the ac voltage PI controller is used to generate the q-axis reference IAG current. The ac voltage error is

Ve(n) = Vtr(n) - V(n)                                                     (7)

where Vtr(n) is the amplitude of the reference ac terminal phase voltage and V(n) is the amplitude of the sensed three-phase ac voltage at the terminals of the IAG at the nth instant. The amplitude of the PCC voltages Vt(n) is given as

Vt = sqrt{ 2 (va² + vb² + vc²) / 3 }                                      (8)

The output of the ac voltage PI controller is expressed as

ivq(n) = ivq(n-1) + kpa {Ve(n) - Ve(n-1)} + kia Ve(n)                     (9)

where kpa and kia are the proportional and integral gain constants of the PI controller, Ve(n) and Ve(n-1) are the voltage errors in the nth and (n-1)th sampling instants, and ivq(n) and ivq(n-1) are the outputs of the voltage PI controller in the nth and (n-1)th instants. The quadrature axis reference generator current is given as

igq* = ivq + iLqdc                                                        (10)

where (iLd, iLq, iL0) denote the transformed load currents and (iLa, iLb, iLc) are the three-phase load currents. In the SRF, θ is a time varying angle that represents the angular position of the reference frame, rotating at constant speed in synchronism with the three-phase voltages; a PLL over the PCC voltages is used to obtain it. In the SRF, fundamental currents in the abc reference frame are transformed to dc values, and harmonics appear as ripples. It is desired that the IAG should deliver only the fundamental component of the direct axis current and the fundamental component of the quadrature axis current required for the reactive power support; harmonics in the direct and quadrature axis currents are taken care of by the VSC. A set of LPFs is used to extract the dc components from the direct and quadrature axis load currents as shown in Fig. 2.

B. Direct Axis Component of Reference IAG Current

The direct axis component of the reference IAG current is estimated as the algebraic difference of the dc component of the d-axis load current and the output of the frequency proportional-integral (PI) controller. The frequency error fe is defined in (2).


D. Estimation of Reference IAG Currents

For the two-leg VSC, only two phase currents need to be controlled. Therefore the fundamental reference active power components of the IAG currents for the two phases b and c are computed. The reference d-q axis IAG currents are transformed into two-phase currents using the inverse Park's transformation as

[igb*]         [ sin(θ - 2π/3)   cos(θ - 2π/3)   1/2 ] [igd*]
[igc*] = (2/3) [ sin(θ + 2π/3)   cos(θ + 2π/3)   1/2 ] [igq*]             (11)
                                                       [ig0*]
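A sketch of the transformation pair in (1) and (11) under the 2/3-scaled convention reconstructed above; the PLL angle and the sample load currents are arbitrary illustrative values.

```python
import numpy as np

def abc_to_dq0(i_abc, theta):
    """Equation (1): three-phase load currents to d-q-0 components."""
    a = 2.0 * np.pi / 3.0
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta), np.cos(theta - a), np.cos(theta + a)],
        [np.sin(theta), np.sin(theta - a), np.sin(theta + a)],
        [0.5,           0.5,               0.5],
    ])
    return T @ np.asarray(i_abc)

def dq0_to_bc(i_dq0, theta):
    """Equation (11): reference currents for phases b and c only (two-leg VSC)."""
    a = 2.0 * np.pi / 3.0
    T = (2.0 / 3.0) * np.array([
        [np.sin(theta - a), np.cos(theta - a), 0.5],
        [np.sin(theta + a), np.cos(theta + a), 0.5],
    ])
    return T @ np.asarray(i_dq0)

theta = 0.7                                   # rad, from the PLL (arbitrary here)
i_dq0 = abc_to_dq0([5.0, -2.0, -3.0], theta)  # arbitrary sample load currents, A
print(dq0_to_bc(i_dq0, theta))
```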

E. PWM Signal Generator

The reference IAG currents (i*gb, i*gc) are compared with the sensed IAG currents (igb, igc). These current errors are amplified using a proportional controller of gain K, and the amplified signals are compared with a fixed-frequency (10 kHz) triangular carrier wave to generate the gating signals for the VSC switches.

IV. MATLAB BASED MODELING

The MATLAB based simulation model of the proposed IAG system consists of the IAG, the SRF based VFC and consumer loads. The simulation is carried out in MATLAB version 7.5 using the sim power system (SPS) toolbox and a discrete step solver. The detailed modeling description of each part is given in the following sections.

A. Modeling of VFC

The proposed VFC consists of a two-leg current controlled reduced switch VSC with BESS. The mid-point of each half bridge is connected individually to one phase of the secondary windings of the star/hexagon transformer through an inductor. The star/hexagon transformer isolates the VSC from the IAG bus. The primary windings of the star/hexagon transformer provide a path for the zero sequence component currents present in the loads. The neutral terminal of the load is connected to the star point of the star/hexagon transformer. In Fig. 1 the Thevenin equivalent circuit of a battery-based model is shown at the dc link of the controller [8]. The terminal voltage of the battery is

Vbb = (2√2/√3)·VL                                                         (12)

where VL is the line rms voltage of the IAG. The battery is an energy storage unit whose energy is represented in kilowatt-hours (kWh). When a capacitor is used to model the battery unit, its value Cb can be determined as

Cb = (kWh × 3600 × 10³) / [0.5 (voc max² - voc min²)]                     (13)
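A simplified rendering of the carrier-comparison logic described for the PWM signal generator; the gain, currents and time window are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

def gating_signal(i_ref, i_meas, t, K=5.0, f_carrier=10e3):
    """Amplify the current error and compare with a 10 kHz triangular carrier."""
    error = K * (i_ref - i_meas)
    # Unit-amplitude triangular carrier built from the sawtooth phase
    carrier = 2.0 * np.abs(2.0 * (t * f_carrier - np.floor(t * f_carrier + 0.5))) - 1.0
    return error > carrier        # True -> upper switch gated on, False -> lower switch

t = np.linspace(0, 2e-4, 2000)                         # two carrier periods
s_upper = gating_signal(i_ref=0.2, i_meas=0.05, t=t)   # arbitrary currents, A
print(f"Duty ratio over window: {s_upper.mean():.2f}")
```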

If Va, Vb and Vc are the per phase voltages across each winding and Vca is the resultant voltage, then

Vca = k1·Va - k2·Vc                                                       (14)

where k1 and k2 are the fractions of the windings in the phases. Considering Va = V∠0° and Vca = √3·V∠30°, then from (14)

√3·V∠30° = k1·V∠0° - k2·V∠-120°                                           (15)

from which K1 = K2 = 1 is obtained. The line voltage is 230 V, therefore three single phase transformers, each rated 133/60/60 V, are selected.

C. Modeling of the RC Ripple Filter

A high pass filter tuned at half the switching frequency is used to filter out the switching ripple of the VFC at the PCC. The switching frequency of the VSC of the VFC is selected as 10 kHz. Choosing a low impedance of 8.09 Ω at a frequency of 5 kHz, a value of 5 µF is obtained for the ripple filter capacitor. A series resistance of 5 Ω is connected in series with the filter capacitor. This offers an impedance of 636.64 Ω at the fundamental voltage.
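The ripple filter numbers quoted above can be cross-checked with the basic capacitor-impedance relation; this is only the arithmetic implied by the text, not an additional design procedure.

```python
import math

f_sw = 10e3                      # VSC switching frequency, Hz
f_tune = f_sw / 2.0              # filter tuned at half the switching frequency
Z_target = 8.09                  # targeted capacitor impedance at f_tune, ohms

C = 1.0 / (2.0 * math.pi * f_tune * Z_target)
print(f"C = {C * 1e6:.2f} uF (rounded to 5 uF in the text)")

Z_50 = 1.0 / (2.0 * math.pi * 50.0 * 5e-6)
print(f"Impedance of 5 uF at 50 Hz = {Z_50:.1f} ohms")   # ~636.6, matching the quoted 636.64
```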

Fig. 3 Star/hexagon Transformer and Phasor diagram at secondary windings

V. RESULTS AND DISCUSSION The proposed SRF based VFC along with fixed pitch wind turbine driven IAG is tested with linear/non-linear balanced/unbalanced three-phase four-wire loads. The performance of the VFC is tested in two cases. In first case it is considered that the balanced three-phase lagging pf loads are connected at IAG bus and the wind speed is variable. In second case it is considered that balanced/unbalanced nonlinear loads are connected at fixed wind speed. The results are shown in Figs. 4-5. The waveforms of the IAG voltage (Vgabc), IAG current (igabc), load currents for three phases (iLa), (iLb) and (iLc), VSC currents (ica), (icb) and (icc), frequency f, terminal voltage (Vt), the wind speed (Vw), active power transfer through the IAG, the load and the battery (P)are shown during different dynamic conditions. A. Performance of VFC at Balanced Linear Loads and Variation in Wind Speed Fig. 4 demonstrates the performance of the VFC at varying wind speeds. Till 2.1s, the wind speed is 10 m/sec, and three phase balanced 3.7kW, 0.94 pf lagging load is connected at the IAG bus. The load demand is meeting through IAG and the battery keeps its floating state. At 2.1s, the wind speed is decreased to 6 m/s, there is a reduction in the IAG power. With the action of a frequency PI controller, the deficit load power is supplied from the battery, keeping frequency at set value of 50 Hz. The wind speed is increased to 10 m/s at 2.25s. It is observed that voltage and frequency are almost constant during variation in wind speed.


where voc max and voc min are the maximum and minimum open circuit voltages of the battery under fully charged and fully discharged conditions respectively. In the equivalent model, Rin is the equivalent series resistance of the parallel/series combination of the battery, which is usually a small value. The parallel circuit of Cb and Rb describes the stored energy and the resistance responsible for self discharge.

B. Design of Star/hexagon Transformer

Fig. 3 shows the star/hexagon transformer. The windings are designed for mmf balance to mitigate the neutral current. The phasor diagram of the secondary windings, also shown in Fig. 3, gives the following relations to find the turns ratio of the windings.


B. Performance of the VFC at Balanced/ Unbalanced nonLinear Load Fig. 5 shows the performance of the VFC at balanced/ unbalanced non-linear loads at a wind speed of 10 m/s. Three single phase diode rectifiers with LC filter based non-linear

The harmonic spectra of the IAG voltage, the IAG current and the load current are shown in Table I to demonstrate the capability of the VFC. It is clear that VFC also provides the function of harmonic elimination. The total harmonic distortion (THD) of the IAG voltage is obtained to be an order of 1.36% and for the IAG current it is 3.38%. While the load current THD is as 84.97%. According to IEEE 519 standard, these THDs of IAG voltages and currents are well within 5% limits. It is clear that VFC also provides the function of harmonic elimination. VI. CONCLUSION A voltage and frequency control has been demonstrated for IAG in a stand-alone WECS. It has been shown that reduced switch two-leg VSC with BESS can work as a VFC. The star/hexagon transformer reduces the VSC ratings and compensates the neutral current. It has also been shown that proposed VFC has the capabilities to work as a load leveler, a load balancer, a harmonic eliminator and a voltage regulator. Simulation results have verified the control capabilities of the VFC under dynamic conditions.
TABLE I
THDs OF IAG VOLTAGES, IAG CURRENTS AND LOAD CURRENTS

THD             Phase a   Phase b   Phase c
IAG Voltage     1.36%     1.53%     1.75%
IAG Current     3.72%     3.38%     3.59%
Load Current    87.04%    84.97%    85.16%

Fig. 4 Performance of VFC under varying wind speed at balanced linear loads

APPENDIX
IAG Data: 3.7 kW, 230 V, 50 Hz, Y-connected, 4-pole, Rs = 0.3939 Ω, Rr = 0.4791 Ω, Xls = 0.633 Ω, Xlr = 0.7897 Ω, Xm = 23.87 Ω, J = 0.05 kg-m2
Wind Turbine Data: 3.7 kW, R = 3 m, Cpmax = 0.48, λm = 8.1, C1 = 0.5176, C2 = 116, C3 = 0.4, C4 = 5, C5 = 21, C6 = 0.0068, C7 = 0.008, C8 = 0.035
VFC Parameters: Lf = 2 mH, Rf = 0.1 Ω, Cdc = 4000 µF, Kpa = 0.01, Kia = 0.1, Kpf = 5, Kif = 150
Battery Parameters: Cb = 12000 F, Rb = 10 kΩ, Rs = 0.1 Ω, Voc = 300 V
Star/hexagon Transformer: three single phase transformers with 133/60/60 V windings

loads with C = 300 µF, L = 2 mH and Rl = 35 Ω are connected between each phase and the neutral terminal. At 2.1 s and 2.15 s the loads on phases a and b are removed, and the load remains unbalanced from 2.1 s to 2.28 s. It is observed that the IAG voltage and frequency remain almost constant and the IAG currents are also balanced. The star/hexagon transformer provides the path for the neutral current.

REFERENCES
[1] [2] M.G. Simos and F. A. Ferret, Renewable Energy systems. Orlando, Fl: CRC, 2004. J. Faiz, Design and implementation of a solid state controller for regulation of output voltage of a wind driven self-excited three phase squirrel cage induction generator, in Proc. IEEE 8th Int. Conf. Elect. Mech. Syst., Sep.2005, vol.3, pp. 2384-2388. S.S. Murthy, B. Singh, S. Gupta and B.M. Gulati, General steady-state analysis of three-phase self-excited induction generator feeding threephase unbalanced load/single-phase load for stand-alone applications, IEE Proc. Gen., Trans. & Distrib. vol. 150, no. 1, pp. 49 55, Jan. 2003. Bhim Singh and G. Kasal, Voltage and Frequency Controller for a Three Phase Four Wire Autonomous Wind Energy Conversion System, IEEE Trans. on Energy Conversion, vol. 23, no. 2, pp. 509518, June 2008. S. Bhattacharya, D.M. Divan and B. Banerjee, Synchronous Frame Harmonics Isolator using Active Series Filter, in Proc. European Power Electronics Conference EPE91, pp.3030-3035 Firenzi Italy. P. Verdelho and G.D. Marques, Active Power Filter Control Circuit with Phase Locked Loop Phase angle Determination, in Proc. PEMC98, Sep.8-10, 1998.

[3]

[4]

[5]

[6] Fig. 5 Performance of VFC with balanced/unbalanced non-linear loads


[7]

[8]

P. Jayaprakash, Bhim Singh and D.P. Kothari, Three -Phase 4-Wire DSTATCOM Based on H Bridge VSC with a Star/hexagon Transformer for Power Quality Improvement, in IEEE region 10th Colloquium and 3rd International conf. on Industrial and Infromation Systems,Kharagpur, pp.1-6,2008. Z.M. Salameh, M.A. Casacca and W.A. Lynch, A mathematical model for lead acid batteries, IEEE Trans. Energy Conversion, vol. 7, no. 1, pp. 93-97, Mar. 1992.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Online Contingency Analysis Incorporating Generation Regulation and Load Characteristics Using Fuzzy Logic
D. Thukaram and Rimjhim Agrawal
Department of Electrical Engineering Indian Institute of Science Bangalore, India e-mail: dtram@ee.iisc.ernet.in, rimjhim@ee.iisc.ernet.in
Abstract Contingency screening and ranking is one of the important components of on-line security assessment. Suitable preventive control actions can be implemented considering severity of the contingencies that are likely to affect the power system performance. This paper presents an approach for online contingency analysis incorporating generation regulation and load characteristics using Fuzzy Logic. In the proposed approach, in addition to real power loadings and bus voltage violations, voltage stability indices at the load buses are also used as the postcontingent quantities to evaluate the generation/ network contingency ranking. The fuzzy logic is used to combine multiple criteria and to eliminate the masking effect. The proposed method has been tested on typical practical systems and results for a 24bus equivalent EHV network of a practical power system are presented to illustrate the proposed approach. Keywords- Contingency ranking, power system security, voltage stability, fuzzy logic.

Shashank Karanth
Department of Electrical Engineering National Institute of Technology Surathkal, India e-mail: shashankkaranth54@gmail.com
Since the human experts play an important role in security evaluation, an approach based on fuzzy logic is proposed in this paper. The post-contingent quantities are expressed in fuzzy set notation and the fuzzy rules employed by system operators in contingency ranking are compiled to reach the overall system severity index [5]. The proposed model has been used on a 24-bus equivalent EHV network of a power system and results for various cases considering normal conditions and outages (generator, line and line and generator) with three different types of load models have been presented to illustrate the proposed approach.

II.

DESCRIPTION OF POWER FLOW MODEL

I.

INTRODUCTION

Contingency analysis, ranking and selection are acceptably considered as crucial activities in power security assessment and normally conducted in line with the voltage stability analysis. Most of contingency analysis algorithms are meant to perform the contingency selection in order to identify and filter out worst contingency cases for further detailed analysis once the preventive and corrective measures have been identified [1]. Contingencies caused by line, generator and transformer outages are identified as the most common contingencies that could violate the voltage stability condition of the entire system. . The conventional methods of steady state power flow analysis using the swing bus concept are based on many assumptions. So from the security point of view it is necessary to determine the frequency, voltage and presence of overload or under-load in various components of the system. Thus in the present approach the power flow model considered overcomes the above deficiencies of conventional load flow model [2]. The static area control error, comprising both steady state frequency deviation and tie-line exchange deviations is introduced as an unknown parameter in this model. This makes the model adequate for representing tie-line controls and also suitable for incorporating the various control strategies such as (i) generators prime mover response, (ii) automatic generator control (AGC), (iii) generators tripping, (iv) load shedding and (v) characteristics of loads. With increased loading of existing power transmission systems, the problem of voltage stability and voltage collapse has become a major concern in power system operation. It has been observed that voltage magnitudes do not give a good indicator of proximity to a voltage stability limit and voltage collapse [3, 4]. Hence voltage stability index is used as an indicator of voltage stability.

Consider a system with n buses in total, where buses 1, 2, ..., g are generator buses and buses g+1, ..., n are the remaining (n-g) buses. The static area control error, comprising both the steady state frequency deviation and the tie-line exchange deviations, is introduced as an unknown parameter in this model. The generator prime mover responses and automatic generation control (AGC) action are included as primary and secondary controls in the generation model. The active power generation at a bus is considered as

PGi = PGsch i + PGcont i ;   (PGi min ≤ PGi ≤ PGi max)                    (1)

PGcont i = -(1/ri)·Δf + αi·ΔG                                             (2)

Δf = f - f0                                                               (3)

ΔG = ΔPT + B·Δf                                                           (4)

ΔPT = PT - PT0                                                            (5)

with Σi αi = 1.00, where
PGi: p.u. active power generation at bus i
PGsch i: p.u. active power schedule at bus i
PGcont i: p.u. active power generation due to primary and secondary controls at bus i
ri: speed-droop setting of the turbine governor in the generating plant connected to bus i
Δf: steady-state frequency deviation
αi: participation factor of the generator connected to bus i
ΔG: static area control error
B: bias factor setting of the AGC control regulator, a constant for the area load frequency characteristic (p.u. MW / p.u. Hz).

The parameters αi can be either the grid's own generator participation factors or the participation factors agreed by the generation companies. The three possible modes of automatic generation control are: (i) flat frequency control (FFC), (ii) flat tie line control (FTC), and (iii) flat tie line frequency bias control (TBC). Active and reactive power loads are modeled as functions of the voltage at the bus. The functions considered are


PLi = PLoi (1 + F1·Δf)(A1 + A2·V + A3·V² + A4·V^ep)                       (6a)

QLi = QLoi (1 + F2·Δf)(R1 + R2·V + R3·V² + R4·V^eq)                       (6b)

where PLi, QLi are the actual active and reactive loads at bus i; PLoi, QLoi are the nominal active and reactive loads at bus i; F1, F2, A1, A2, A3, A4, R1, R2, R3, R4, ep and eq are constants; A1 + A2 + A3 + A4 = 1.0 and R1 + R2 + R3 + R4 = 1.0.

Assuming bus 1 as the reference for the voltage phase angle calculations of the other buses, the linearized equations of the decoupled Newton-Raphson iterative solution relate the real power mismatches at buses 2, ..., N (together with the unknown X) to the corrections in the voltage angles and X, and the reactive power mismatches at the load buses to the corrections in the voltage magnitudes; the expanded Jacobian forms constitute equations (7) and (8). In compact form the relations can be written as

[ΔP]N×1 = [J1]N×N [Δδ  ΔX]ᵀ                                               (9)
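A small sketch evaluating the voltage- and frequency-dependent load model of (6a)-(6b). The three coefficient sets are the load models listed later in the case studies; the operating point (V, Δf) is arbitrary.

```python
def load_power(p_nom, v, df, f1, a, ep=0.0):
    """P = P_nom * (1 + F1*df) * (A1 + A2*V + A3*V^2 + A4*V^ep), per (6a)/(6b)."""
    a1, a2, a3, a4 = a
    return p_nom * (1.0 + f1 * df) * (a1 + a2 * v + a3 * v**2 + a4 * v**ep)

# Load models 1-3 from Section VI (constant-power and polynomial mixes)
models = {
    "Load Model 1": dict(f1=0.00, a=(1.0, 0.0, 0.0, 0.0)),
    "Load Model 2": dict(f1=0.04, a=(0.5, 0.3, 0.2, 0.0)),
    "Load Model 3": dict(f1=0.04, a=(0.2, 0.3, 0.5, 0.0)),
}
v_pu, df_pu = 0.97, -0.01          # arbitrary p.u. voltage and frequency deviation
for name, m in models.items():
    print(name, round(load_power(1.0, v_pu, df_pu, m["f1"], m["a"]), 4))
```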

For a given operating condition,

[IG]   [YGG  YGL] [VG]
[IL] = [YLG  YLL] [VL]                                                    (17)

where IG, IL, VG and VL represent the complex current and voltage vectors at the generator nodes and load nodes, and [YGG], [YGL], [YLG] and [YLL] are the corresponding partitioned portions of the network Y-bus matrix. Rearranging (17) gives

[VL]   [ZLL  FLG] [IL]
[IG] = [KGL  YGG] [VG]                                                    (18)

Where [FLG]=[YLL]-1[YLG] Fji are the complex elements of [FLG] matrix. For stability, the index Lj must not be violated (maximum limit=1) for any of the nodes j. Hence, the global indicator L describing the stability of the complete subsystem is given by L = maximum of Lj for all j (load buses). An L-index value away from 1 and close to 0 indicates an improved system security.
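A sketch of the L-index computation from the partitioned Y-bus, following (17)-(18) and the L-index expression, using the sign convention FLG = -[YLL]^-1[YLG] so that Lj = |1 - Σ Fji·Vi/Vj|. The 3-bus admittances and voltages below are toy values, not the 24-bus study system.

```python
import numpy as np

def l_indices(Ybus, gen_idx, load_idx, V):
    """Voltage stability L-index Lj = |1 - sum_i Fji * Vi / Vj| at each load bus."""
    YLL = Ybus[np.ix_(load_idx, load_idx)]
    YLG = Ybus[np.ix_(load_idx, gen_idx)]
    FLG = -np.linalg.solve(YLL, YLG)          # F_LG with the sign convention noted above
    VG, VL = V[gen_idx], V[load_idx]
    return np.abs(1.0 - (FLG @ VG) / VL)

# Toy 3-bus system: bus 0 generator, buses 1-2 loads (illustrative line admittances, p.u.)
y = 1.0 / (0.02 + 0.10j)
Ybus = np.array([[2 * y, -y, -y], [-y, 2 * y, -y], [-y, -y, 2 * y]])
V = np.array([1.02 + 0j, 0.97 - 0.03j, 0.95 - 0.05j])   # complex bus voltages, p.u.

L = l_indices(Ybus, gen_idx=[0], load_idx=[1, 2], V=V)
print("Lj:", np.round(L, 3), " L =", round(float(L.max()), 3))
```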

[ΔQ](N-G)×1 = [J2](N-G)×(N-G) [ΔV](N-G)×1                                 (10)

ΔP^k = P_specified - P_calculated                                         (11)

ΔQ^k = Q_specified - Q_calculated                                         (12)

X^(k+1) = X^k + ΔX^k                                                      (13)

δ^(k+1) = δ^k + Δδ^k                                                      (14)

V^(k+1) = V^k + ΔV^k                                                      (15)

where k is the iteration count within the power flow solution and X is either ΔG or Δf as the unknown. The estimated bus voltages, X (Δf or ΔG) and the calculated powers are used to evaluate the elements of the Jacobian matrices [J1] and [J2], and Δδ, ΔX and ΔV are solved for by the iterative process.

III. VOLTAGE STABILITY L-INDEX COMPUTATION

Several important contributing factors to voltage collapse are: a stressed power system, inadequate fast reactive power resources, load characteristics and tap-changer responses. Different voltage stability and voltage collapse prediction methods are: (i) V-P (nose) curves, (ii) sensitivity indices, (iii) minimum singular value decomposition (SVD), and (iv) the voltage stability index (Zii/Zi). As all the above methods for voltage stability margin improvement have some limitations, the voltage stability L-index method has been used in the present approach. For a given system operating condition, using the load flow (state estimation) results, the voltage stability L-index is computed as [5]

Lj = | 1 - Σ(i = 1 to g) Fji·(Vi / Vj) |                                  (16)

where j = g+1, ..., n and all the terms within the sigma on the RHS of (16) are complex quantities. The values of Fji are obtained from the network Y-bus matrix.

IV. FUZZY REPRESENTATION OF POST CONTINGENT QUANTITIES

The post-contingent quantities must first be expressed in fuzzy set notation before they can be processed by the fuzzy reasoning rules.

A. Line Loadings

Each post-contingent percentage line MVA loading is divided into five categories using fuzzy set notations: Very Lightly Loaded (VLL) 0-40%, Lightly Loaded (LL) 40-60%, Normally Loaded (NL) 60-85%, Fully Loaded (FL) 85-100%, and Over Loaded (OL) above 100%. Fig. 1 shows the correspondence between line loading and the five linguistic variables; it shows the ranges of loadings, expressed as the ratio of actual flow to rated MVA loading, covered by the linguistic variables.

Figure 1. Line Loadings and Corresponding Linguistic variables.

Figure 2. Severity Membership functions for Line Loadings.

The output membership functions to evaluate the severity of a post-contingent quantity are also divided into five categories using fuzzy set notations: Very Less Severe (VLS), Less Severe (LS),


Below Severe (BS), Above Severe (AS) and More Severe (MS) as shown in Fig. 2. The fuzzy rules, which are used for evaluation of severity indices of post -contingent quantities of line loadings, are IF Line Loading is VLL then severity is VLS IF Line Loading is LL then severity is LS IF Line Loading is NL then severity is BS IF Line Loading is FL then severity is AS IF Line Loading is OL then severity is MS After obtaining the severity indices (SILL) of all the lines the Overall Severity Index (OSILL) of the line loading for a particular line outage is obtained using the expression. OSI LL = SI LL Where, SILL: Line loading severity index of a post-contingent quantity OSILL: Overall severity index reflects the actual severity of the system for a contingency, based on the line loadings.

stability indices the Overall Severity Index (OSIVSI) of the bus voltage stability index for a particular line outage is obtained using the expression: OSI VSI = SIVSI

V.

FUZZY LOGIC APPROACH FOR CONTINGENCY RANKING

The membership function for each post-contingent quantity of line flows, bus voltages and voltage stability indices is established and with these membership functions at hand, the overall severity index is obtained using the fuzzy inference system, shown in Figure 3, for the contingency. The inputs are line loading, voltage profiles and voltage stability L - indices and the outputs are severity indices, which are evaluated using the fuzzy rules.

A. Line Loadings
The approach for evaluating the overall severity index (SILL) for line loading is as follows. If the percentage line loading of a line is less than 40% then the severity index (SILL), which is evaluated using the above fuzzy rules, is 5.0.Similarly If the percentage line loading of a line is 40-60%, 60-85%, 85-100% and above 100%, then the severity indices are 10.0, 15.0, 90.0 and 100.0 respectively. Overall Severity Index (OSILL) of the line loadings for a particular line outage is obtained using the following expression: OSI LL = SI LL

B. Bus Voltage Profiles


In this case each post-contingent bus voltage profile is divided five categories using fuzzy set notations: Very Low Voltage (VLV) below 0.85pu, Low Voltage (LV) 0.85-0.9pu, Below Normal Voltage (BNV) 0.9-0.95pu, Normal Voltage (NV) 0.951.05pu and Over Voltage (OV) above 1.05pu. The output membership functions used to evaluate the severity of a post -contingent quantity are also divided into five categories using fuzzy set notations: More Severe (MS2), Above Severe (AS), Below Severe (BS), Less Severe (LS) and More Severe (MS1). The fuzzy rules, which are used for evaluation of severity indices of, post -contingent quantities of voltage profiles are IF Voltage Profile is VLV then severity isMS2 IF Voltage Profile is LV then severity is AS IF Voltage Profile is BNV then severity is BS IF Voltage Profile is NV then severity is LS IF Voltage Profile is OV then severity is MS1 After obtaining the severity indices (SIVP) of all the voltage profiles the Overall Severity Index (OSIVP) of the bus voltage profiles for a particular line outage is obtained using the expression: OSI VP = SIVP

B. Voltage profiles
If the voltage of the load bus is less than 0.85 p.u, then the severity index (SIVP), which is evaluated using the above fuzzy rules, is 100.0.Similarly if the voltage of the load bus is 0.85-0.9 p.u, 0.9-0.95 p.u, 0.95-1.05 p.u, and above 1.05 p.u, then the severity indices (SIVP) are 80.0, 50.0, 5.0 and 100.0 respectively. Overall Severity Index (OSIVP) of the voltage profile for a particular line outage is obtained using the following expression: OSI VP = SIVP

C. Voltage Stability Indices


If the voltage stability L - index of the load bus is less than 0.2, then the severity index (SIVSI), which is evaluated using the above fuzzy rules, is 5.0.Similarly if the voltage stability L-index of the load bus is 0.2-0.4, 0.4-0.6, 0.6-0.8, and above 0.8, then the severity indices (SIVSI) are 10.0, 50.0, 90.0 and 100.0 respectively. Overall Severity Index (OSIVSI) of the voltage profile for a particular line outage is obtained using the following expression: OSI VSI = SIVSI Using these Overall Severity Indices, respective ranks are calculated. In the Fig. 3, blocks RLL, RVP RVSI represents the process of ranking. Now these three ranks corresponding to each outage are averaged to get overall contingency ranking.
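The piecewise severity mappings described in subsections A-C can be sketched as simple lookup functions; the thresholds and index values are taken from the text, while the sample post-contingency data below are invented.

```python
def severity_line_loading(pct):          # thresholds from subsection A
    if pct < 40:   return 5.0
    if pct < 60:   return 10.0
    if pct < 85:   return 15.0
    if pct <= 100: return 90.0
    return 100.0

def severity_voltage(v_pu):              # thresholds from subsection B
    if v_pu < 0.85:  return 100.0
    if v_pu < 0.90:  return 80.0
    if v_pu < 0.95:  return 50.0
    if v_pu <= 1.05: return 5.0
    return 100.0

def severity_l_index(l):                 # thresholds from subsection C
    if l < 0.2: return 5.0
    if l < 0.4: return 10.0
    if l < 0.6: return 50.0
    if l < 0.8: return 90.0
    return 100.0

# Invented post-contingency snapshot for one outage
line_loadings = [35, 72, 96, 104]        # percent of rated MVA
bus_voltages  = [0.99, 0.93, 0.88]       # p.u.
l_values      = [0.15, 0.45, 0.62]

osi_ll  = sum(map(severity_line_loading, line_loadings))
osi_vp  = sum(map(severity_voltage, bus_voltages))
osi_vsi = sum(map(severity_l_index, l_values))
print(osi_ll, osi_vp, osi_vsi)           # the three OSIs are then ranked and averaged
```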

C. Voltage Stability L indices


Each post-contingent voltage stability L - index is divided into five categories using fuzzy set notations: Very Low L - Index (VLI) 0-0.2, Low L - Index (LI) 0.2-0.4, Medium L - Index (MI) 0.4-0.6, High L - Index (HI) 0.6-0.8 and Very High L - Index (VHI) 0.8 above. The output membership functions to evaluate the severity of a post -contingent quantity are also divided into five categories using fuzzy set notations: Very Less Severe (VLS), Less Severe (LS), Below Severe (BS), Above Severe (AS) and More Severe (MS). The fuzzy rules, which are used for evaluation of severity indices of, post -contingent quantities of line loadings are, IF Voltage Stability Index is VLI then severity is VLS IF Voltage Stability Index is LI then severity is LS IF Voltage Stability Index is MI then severity is BS IF Voltage Stability Index is HI then severity is AS IF Voltage Stability Index is VHI then severity is MS After obtaining the severity indices (SIVSI) of all the voltage

Figure 3. Parallel operated Fuzzy inference system.


VI.

TEST SYSTEM STUDIES AND RESULTS



TABLE II.

NUMBER OF BUSSES UNDER VOLTAGE PROFILE SEVERITY AND VOLTAGE STABILITY SEVERITY


A. Line 24-23 Outage Contingency

Figure 4. 24-Bus equivalent EHV power system network.

The approach is applied on a practical system of 24-bus equivalent EHV power system network shown in Fig. 4. The system has 4 generator buses and 20 other buses. Generators are connected at buses 1, 2, 3 and 4. Speed droop setting on the turbine governor for all the generators is taken as 5%.A Tie line at bus 1 is considered with the neighboring system. The load is represented at 220 kV sides of 400 kV / 200 kV at 8 number of buses. The load characteristics at all the buses are considered as described by the relations (6a) and (6b).Based on the available load behavior data each substation load can be modeled using appropriate coefficients. In this study the numerical values for the coefficients for the three types of load models considered are as follows: (1) Load Model 1: F1=0, A1=1, A2=0, A3=0, A4=0 F2=0, R1=1, R2=0, R3=0, R4 =0 (2) Load Model 2: F1=0.04, A1=0.5, A2=0.3, A3=0.2, A4=0 F2=0, R1=0.5, R2=0.3, R3=0.2, R4 =0 (3) Load Model 3: F1=0.04, A1=0.2, A2=0.3, A3=0.5, A4 =0 F2=0, R1=0.2, R2=0.3, R3=0.5, R4 =0 Several cases for contingencies caused by line, generator and line-generator outage for 3 load models, with both Flat frequency control and Flat tie line control are considered. The ranking can be easily verified from the Table I and Table II, which shows the number of lines / buses under different category of severity. From the Table I and Table II, it can be seen that the line 21-20 outage contingency has more number of lines/buses under the MS severity category. Thus it has been assigned rank_1. Some of the typical cases are presented below:
TABLE I. NUMBER OF LINES UNDER LINE LOADING SEVERITY

Figure 5. Voltage Profiles and Voltage Stability L-indices of the busses for case A.

In this case firstly the system is intended to operate with load model 1,2 and 3 at nominal frequency of 50 Hz (FFC), that is F=0.0 when a single line outage occurs on double circuit line between buses 24 and 23. The results of this case show that if the system were to operate at nominal frequency 50.0 Hz (FFC), the tie-line exchange error at bus 1 is 17.50 MW, -27.57 MW and 42.99 MW for load model 1, 2 and 3 respectively. Secondly, for the system with load model 1, 2 and 3 the tie-line powers are considered to be fixed (FTC).The results of this case show that the system operates at 49.989 Hz, 50.018 Hz and 50.029 Hz respectively. The voltage profiles and voltage stability L-indices of the busses are shown in Fig. 5.

B. Line 21-19 Outage Contingency

Figure 6. Voltage Profiles and Voltage Stability L-indices of the busses for case B.


In this case firstly the system is intended to operate with load model 1,2 and 3 at nominal frequency of 50 Hz (FFC), that is F=0.0 when a single line outage occurs on double circuit line between buses 21 and 19. The results of this case show that if the system were to operate at nominal frequency 50.0 Hz (FFC), the tie-line exchange error at bus 1 is 8.92 MW, -14.77 MW and 38.284MW for load model 1, 2 and 3 respectively. Secondly, for the system with load model 1, 2 and 3 the tie-line powers are considered to be fixed (FTC).The results of this case show that the system operates at 49.994 Hz, 50.01 Hz and 50.025 Hz respectively. The voltage profiles and voltage stability L-indices of the busses are shown in Fig. 6.

In this case firstly the system is intended to operate with load model 1,2 and 3 at nominal frequency of 50 Hz (FFC), that is F=0.0 when a 130MW unit generator outage occurs at bus 4 and a single line outage occurs on double circuit line between buses 21 and 19. The results of this case show that if the system were to operate at nominal frequency 50.0 Hz(FFC),the tie-line exchange errors at bus 1 is 153.46 MW,108.98MW and 186.426MW respectively for load model 1,2 and 3.Secondly, for the system with load model 1,2 and 3 the tie-line powers are considered to be fixed (FTC).The results of this case show that the system operates at 49.897 Hz ,49.924 Hz and 49.939 Hz respectively The voltage profiles and voltage stability L-indices of the busses are shown in Fig. 8.

C. Generator 4 Outage Contingency VII. CONCLUSIONS


In this paper, an approach for generation/network contingencies for online application is proposed. A more realistic model is proposed to consider the AGC, load frequency and tie line controls. In addition to MVA line loadings and bus voltages, the voltage stability L - indices at the load buses are also used as the postcontingent quantities to evaluate the network contingency ranking. These post-contingent quantities are expressed in fuzzy set notation. Then the fuzzy rules employed in contingency ranking are compiled to reach the overall system severity index. The proposed contingency ranking method eliminates the masking effect effectively. The proposed fuzzy approach has been tested on large networks and results indicate a clear ranking of contingencies. The results for a 24-bus equivalent EHV network of a practical power system are presented for illustration purpose. The proposed approach will be helpful in developing suitable preventive control actions in a day to day operation of power system.

Figure 7. Voltage Profiles and Voltage Stability L-indices of the busses for case C.

In this case firstly the system is intended to operate with load model 1,2 and 3 at nominal frequency of 50 Hz(FFC), that is F=0.0 when a 130MW generator unit outage at bus 4 occurs. The results of this case show that if the system were to operate at nominal frequency 50.0 Hz (FFC), the tie-line exchange error at bus 1 is 143.4 MW, 127.95MW and 123.67MW respectively for load model 1, 2 and 3. Secondly, for the system with load model 1, 2 and 3 the tie-line powers are considered to be fixed (FTC).The results of this case show that the system operates at 49.904 Hz, 49.911 Hz and 49.913 Hz respectively The voltage profiles and voltage stability L-indices of the busses are shown in Fig. 7.

VIII. REFERENCES
[1] Zhihong Jia and B Jeyasurya, Contingency Ranking for OnLine Voltage Stability Assessment, IEEE Transactions on Power Systems, vol. 15, Aug. 2000, pp. 1093-1097, doi: 10.1109/59.871738. [2] D Thukaram, K Parthasarathy, H P khincha and B S Ramakrishna Iyengar, Steady State Power Flow Analysis Incorporating Load and Generation Regulation Characteristics, J. Inst. Eng. (India), vol. 64, Apr. 1984, pp. 274-279. [3] Udupa A N, Thukaram D, Parthasarathy K, An expert fuzzy control approach to voltage stability enhancement, International Journal of Electrical Power & Energy Systems, vol. 21, May. 1999, pp. 279-287, doi: 10.1016/S01420615(98)00049-0. [4] Bansilal, Thukaram D, Parthasarathy K. Optimal reactive power dispatch algorithm for voltage stability improvement, International Journal of Electrical Power & Energy Systems, vol. 18, Oct. 1996, pp. 461-468, doi: 10.1016/01420615(96)00004-X . [5] D. Thukaram, Lawrence Jenkins, H. P. Khincha, K. Visakha and B Ravikumar, Fuzzy logic application for network contingency ranking using composite criteria, Engineering Intelligent System, vol. 15, Dec. 2007, pp. 205212.

D. Generator 4 & Line19-21 Outage Contingency

Figure 8. Voltage Profiles and Voltage Stability L-indices of the busses for case D.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Auxiliary controlled SVS for Damping SSR in a Series Compensated Power System
Narendra Kumar, Narender Hooda, Vedpal
AbstractThe paper presents a new SVS control strategy for damping torsional oscillations due to subsynchronous resonance (SSR) in a series compensated power system. The proposed SVS control strategy utilizes the effectiveness of combined reactive power and voltage angle (CRPVA) SVS auxiliary control signals. A digital computer simulation study, using a nonlinear system model, has been carried out to illustrate the performance of the proposed SVS controller under large disturbance. It is found that the torsional oscillations are effectively damped out and the transient performance of the series compensated power system is greatly improved. Index terms- Combined reactive power and voltage angle (CRPVA), Static VAR System (SVS), Subsynchronous resonance (SSR), and Transient performance.

I. INTRODUCTION

In recent years the static VAR system (SVS) has been employed to an increasing extent in modern power systems [1] due to its capability to work as a VAR generation and absorption system. Besides voltage control and improvement of transmission capability, an SVS in coordination with auxiliary controllers [2] can be used for damping of power system oscillations. Damping of power system oscillations plays an important role not only in increasing the power transmission capability but also in stabilizing power system conditions after critical faults, particularly in weakly coupled networks. S. K. Gupta and Narendra Kumar [3, 4] developed a double order SVS auxiliary controller in combination with continuously controllable series compensation and an induction machine damping unit (IMDU) for damping torsional modes in a series compensated power system. The proposed scheme was able to damp out the torsional modes over a wide range of series compensation; however, the control scheme was complex as it utilized three different controllers. G. N. Pillai, A. Ghosh et al. [6] designed a power flow controller, proposing an integral state feedback controller which reduces the adverse effect of the power flow controller on torsional interactions, and compared the torsional characteristics of SSSC compensated power systems.

In this paper a new SVS control strategy for damping torsional oscillations due to subsynchronous resonance (SSR) in a series compensated power system has been developed. The proposed SVS control strategy utilizes the effectiveness of combined reactive power and voltage angle (CRPVA) SVS auxiliary control signals. It is found that the torsional oscillations are effectively damped out and the transient performance of the series compensated power system is greatly enhanced. The strength of the proposed scheme is that it derives its control signals from the point of location of the SVS, so the scheme is easily implementable. The SVS is considered located at the middle of the transmission line due to its optimum performance there.

II. SYSTEM MODEL

The study system (Fig. 1) consists of a steam turbine driven synchronous generator supplying bulk power to an infinite bus over a long transmission line. An SVS of switched capacitor and thyristor controlled reactor type is considered located at the middle of the transmission line, which provides continuously controllable reactive power at its terminals in response to the bus voltage and the combined reactive power and voltage angle (CRPVA) SVS auxiliary control signals. The series compensation is applied on the sending end side of the SVS along the line.

Fig. 1. Study system

(Author affiliations: Narendra Kumar is with the Department of Electrical Engineering, Delhi College of Engineering, Delhi-110042, University of Delhi, India, e-mail: dnk_1963@yahoo.com. Narender Hooda is with DCRUST, Murthal, and Vedpal is with the Noida Development Authority, Noida.)

A. Generator

In the detailed machine model [7] used here, the stator is represented by a dependent current source in parallel with an inductance. The generator model includes the field winding f and a damper winding h along the d-axis, and two damper windings g and k along the q-axis. The IEEE Type-1 excitation system is used for the generator. In the mechanical model, detailed shaft torque dynamics [7, 8] have been considered for the analysis of torsional modes due to SSR. The rotor flux linkages associated with the different windings result in the rotor equations:


dψf/dt = a1·ψf + a2·ψh + b1·Vf + b2·Id
dψh/dt = a3·ψf + a4·ψh + b3·Id
dψg/dt = a5·ψg + a6·ψk + b5·Iq
dψk/dt = a7·ψg + a8·ψk + b6·Iq

where Vf is the field excitation voltage. The electromagnetic torque is

Te = X″d·(iD·IQ − iQ·ID)

The above equations have been linearized and the state space model is obtained as follows:

dΔXR/dt = [AR]ΔXR + [BR1]ΔUR1 + [BR2]ΔUR2 + [BR3]ΔUR3    (1)

where ΔXR = [Δψf Δψh Δψg Δψk]^t, ΔUR1 = [Δδ], ΔUR2 = ΔVf and ΔUR3 = [ΔiD ΔiQ]^t. The prefix Δ indicates incremental values. The output equations are

ΔYR1 = [CR1]ΔXR + [DR1]ΔUR1
ΔYR2 = [CR2]ΔXR + [DR2]ΔUR1 + [DR3]ΔUR2 + [DR4]ΔUR3    (2)

where ΔYR1 = [dΔID/dt dΔIQ/dt]^t and ΔYR2 = [ΔID ΔIQ]^t. The matrices AR, BR1, BR2, BR3, CR1, CR2, DR1, DR2, DR3 and DR4 are given in Appendix 2.

B. Mechanical System

The six spring-mass model as used in the IEEE first benchmark model [2] describes the mechanical system, as shown in Fig. 2: the HP, IP, LPA, LPB, GEN and EXC masses are coupled through shaft sections of stiffness K12, K23, K34, K45 and K56, with mechanical input torques Tm1-Tm4 applied to the turbine masses and the electrical torque Te acting on the generator mass. The governing equations and the state and output equations are as follows:

Fig. 2. Six spring-mass model of the mechanical system

dδi/dt = ωi,  i = 1, 2, ..., 6
dω1/dt = (1/M1)[−(D11+D12)·ω1 + D12·ω2 − K12(δ1−δ2) + Tm1]
dω2/dt = (1/M2)[D12·ω1 − (D12+D22+D23)·ω2 + D23·ω3 − K12(δ2−δ1) − K23(δ2−δ3) + Tm2]
dω3/dt = (1/M3)[D23·ω2 − (D23+D33+D34)·ω3 + D34·ω4 − K23(δ3−δ2) − K34(δ3−δ4) + Tm3]
dω4/dt = (1/M4)[D34·ω3 − (D34+D44+D45)·ω4 + D45·ω5 − K34(δ4−δ3) − K45(δ4−δ5) + Tm4]
dω5/dt = (1/M5)[D45·ω4 − (D45+D55+D56)·ω5 + D56·ω6 − K45(δ5−δ4) − K56(δ5−δ6) − Te]
dω6/dt = (1/M6)[D56·ω5 − (D56+D66)·ω6 − K56(δ6−δ5)]

where δ1, ..., δ6 are the angular displacements and ω1, ..., ω6 are the angular velocities of the different shaft segments shown in Fig. 2. Linearizing the above equations, the state space model is derived as follows:

dΔXM/dt = [AM]ΔXM + [BM1]ΔUM1 + [BM2]ΔUM2    (3)
ΔYM = [CM]ΔXM    (4)

where ΔXM is the vector of incremental angular displacements and velocities, ΔUM1 = [ΔID ΔIQ]^t and ΔUM2 = [ΔiD ΔiQ]^t. The matrices AM, BM1, BM2 and CM are given in Appendix 2.

C. Excitation Systems

The IEEE Type I model shown in Fig. 3 represents the excitation system. Vg represents the generator terminal voltage and SE is the saturation function, SE = f(Vf); Vpss is set to zero.

Fig. 3. IEEE Type I excitation system model (voltage transducer 1/(1+sTR), regulator KA/(1+sTA) with limits Vrmax and Vrmin, exciter 1/(KE+sTE) and stabilizing feedback sKF/(1+sTF))

The excitation system is described by the following equations:

dVf/dt = −[(KE+SE)/TE]·Vf + (1/TE)·Vr
dVS/dt = −[KF(KE+SE)/(TE·TF)]·Vf − (1/TF)·VS + [KF/(TE·TF)]·Vr
dVr/dt = −(KA/TA)·VS − (1/TA)·Vr − (KA/TA)·Vg + (KA/TA)·Vref

The state and output equations of the linearized system are

dΔXE/dt = [AE]ΔXE + [BE]ΔUE    (5)
ΔYE = [CE]ΔXE    (6)

where ΔXE = [ΔVf ΔVS ΔVr]^t, ΔUE = ΔVg and ΔYE = ΔVf; AE, BE and CE are described in Appendix 2.
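As a cross-check on the mechanical model above, the sketch below assembles a linearized six-mass state matrix of the same δ/ω form as (3)-(4) and reads the torsional natural frequencies off its eigenvalues. The inertia, damping and stiffness numbers are placeholders (the actual study-system data are in the paper's appendix), so the printed frequencies are only indicative of the procedure.

```python
# Sketch: assemble the linearized spring-mass state matrix in the form used in
# (3)-(4) and compute torsional mode frequencies from its eigenvalues.
# M, D and K below are placeholder values, not the study-system data.
import numpy as np

M = np.array([0.10, 0.20, 0.85, 0.85, 0.87, 0.04])        # inertias M1..M6
D_self = np.zeros(6)                                        # self dampings D11..D66
D_mut = np.zeros(5)                                         # mutual dampings D12..D56
K = np.array([20.0, 35.0, 50.0, 70.0, 3.0])                 # shaft stiffnesses K12..K56

A = np.zeros((12, 12))
A[:6, 6:] = np.eye(6)                                       # d(delta_i)/dt = omega_i
for i in range(6):
    # spring torques K_{i-1,i}(delta_{i-1}-delta_i) and K_{i,i+1}(delta_{i+1}-delta_i)
    if i > 0:
        A[6 + i, i - 1] += K[i - 1] / M[i]
        A[6 + i, i]     -= K[i - 1] / M[i]
        A[6 + i, 6 + i - 1] += D_mut[i - 1] / M[i]
    if i < 5:
        A[6 + i, i + 1] += K[i] / M[i]
        A[6 + i, i]     -= K[i] / M[i]
        A[6 + i, 6 + i + 1] += D_mut[i] / M[i]
    damp = D_self[i] + (D_mut[i - 1] if i > 0 else 0.0) + (D_mut[i] if i < 5 else 0.0)
    A[6 + i, 6 + i] -= damp / M[i]

eig = np.linalg.eigvals(A)
freqs = sorted(set(round(abs(e.imag), 3) for e in eig if abs(e.imag) > 1e-6))
print("torsional mode frequencies (rad/s):", freqs)
```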
D. Network

The transmission line is represented by a lumped parameter circuit as shown in Fig. 4. The network has been represented by its α-axis equivalent circuit, which is identical with the positive sequence network. The governing nonlinear differential equations of the network are derived as follows:


Fig. 4. α-axis representation of the network

di1/dt = (1/LT2)(V2 − V1)
di2/dt = (1/L)(V3 − V2) − (R/L)·i2
di4/dt = (1/L)(V4 − V3) − (R/L)·i4
di/dt = −(Ra/LA)·i + (1/LA)·V4 − (1/LA)·V5 − (L″d/LA)·dI/dt
dV2/dt = (1/C)(i2 − i1)
dV3/dt = (1/Cn)(i2 − i3 − i4)
dV4/dt = (1/C)(i − i4)
dV5/dt = (1/Cse)·i    (7)

where LA = LT1 + L″d, Cn = 2C + CFC, and I is the current of the dependent current source. Similarly, the equations can be derived for the β-network. The α-β network equations are then transformed to the synchronously rotating D-Q frame of reference.

E. Static VAR System

Fig. 5 shows a small signal model of a general SVS. The terminal voltage perturbation ΔV3 and the SVS incremental current Δi3, weighted by the factor KD representing current droop, are fed to the reference junction. TM represents the measurement time constant, which for simplicity is assumed to be equal for both voltage and current measurements. The voltage regulator is assumed to be a proportional-integral (PI) controller. Thyristor control action is represented by an average dead time TD and a firing delay time TS. ΔB is the variation in TCR susceptance. ΔVF represents the incremental auxiliary control signal.

Fig. 5. SVS control system with auxiliary feedback

The currents entering the TCR from the network are expressed as:

di3D/dt = (1/LS)·V3D − (RS/LS)·i3D + ω0·i3Q
di3Q/dt = (1/LS)·V3Q − (RS/LS)·i3Q − ω0·i3D    (8)

where RS and LS represent the TCR resistance and inductance, respectively. The other equations describing the SVS model are:

dZ1/dt = Vref − Z2 + ΔVF
dZ2/dt = (1/TM)(ΔV3 + KD·Δi3) − (1/TM)·Z2
dZ3/dt = (KI/TS)·Z1 − (KP/TS)·Z2 − (1/TS)·Z3 + (KP/TS)·Vref + (KP/TS)·ΔVF
dΔB/dt = (1/TD)(Z3 − ΔB)    (9)

where ΔV3 and Δi3 are the incremental magnitudes of the SVS voltage and current, respectively, obtained by linearising V3² = V3D² + V3Q² and i3² = i3D² + i3Q².

III. SVS CONTROL STRATEGY

In the proposed SVS control strategy, two auxiliary signals, namely the reactive power and voltage angle deviations, are used in combination in addition to the main voltage control loop of the SVS. This combination has been designated as the combined reactive power and voltage angle (CRPVA) SVS auxiliary controller. It has been applied for damping power oscillations due to subsynchronous resonance (SSR) in a series compensated power system.

A. SVS Auxiliary Controller

The auxiliary signal UC is implemented through a first-order auxiliary controller transfer function G(s), as shown in Fig. 6, which is assumed to be:

G(s) = ΔVF/UC = KB(1 + sT1)/(1 + sT2)    (10)

Fig. 6. General first-order auxiliary controller

This can be equivalently written as:

G(s) = KB[T1/T2 + (1 − T1/T2)/(1 + sT2)]    (11)

Also,

ZC = KB[(1 − T1/T2)/(1 + sT2)]·UC    (12)


From (12),

dZC/dt = −(1/T2)·ZC + (KB/T2)(1 − T1/T2)·UC    (13)

or, in state space form,

dXC/dt = AC·XC + BC·UC    (14)
YC = CC·XC + DC·UC    (15)

where XC = ZC, AC = −1/T2, BC = (KB/T2)(1 − T1/T2), ΔVF = ZC + KB(T1/T2)·UC, YC = ΔVF, CC = 1 and DC = KB·T1/T2.
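The lead-lag controller of (10)-(15) is simple enough to verify numerically. The sketch below builds the state-space realization of (13)-(15), using the voltage-angle controller parameters later listed in Table 2, and steps it with a forward-Euler rule; the discretization and step size are illustrative choices, not part of the paper's method.

```python
# Sketch: state-space realization (13)-(15) of the first-order auxiliary
# controller G(s) = KB(1 + sT1)/(1 + sT2), stepped with forward Euler.
# The discretization is illustrative; the parameters are the voltage-angle
# controller values listed in Table 2.
import numpy as np

KB, T1, T2 = -0.38, 0.29, 0.05
AC = -1.0 / T2
BC = (KB / T2) * (1.0 - T1 / T2)
CC, DC = 1.0, KB * T1 / T2

def controller_output(uc, dt=1e-3):
    """Return VF(t) for an input sequence uc sampled every dt seconds."""
    zc, vf = 0.0, []
    for u in uc:
        vf.append(CC * zc + DC * u)       # YC = VF = ZC + KB*(T1/T2)*UC   (15)
        zc += dt * (AC * zc + BC * u)     # dZC/dt = AC*ZC + BC*UC         (13)-(14)
    return np.array(vf)

t = np.arange(0.0, 1.0, 1e-3)
uc = np.ones_like(t)                       # unit step in the auxiliary signal
vf = controller_output(uc)
print(vf[0], vf[-1])                       # initial value KB*T1/T2, final value ~KB
```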

B. Combined Reactive Power and Voltage Angle (CRPVA) Auxiliary Signal

The auxiliary controller signal in this case is the combination of the line reactive power and the voltage angle signals, with the objective of utilizing the beneficial contribution of both signals towards improving the dynamic and transient performance of the system. The control scheme for the composite controller is illustrated in Fig. 7. The auxiliary control signals UC1 and UC2 correspond, respectively, to the line reactive power and the voltage angle deviations, which are derived at the SVS bus.

Fig. 7. Control scheme for the CRPVA auxiliary controller

C. Reactive Power Auxiliary Signal

The auxiliary control signal is the deviation in the line reactive power entering the SVS bus. The reactive power entering the SVS bus can be expressed as:

Q3 = V3D·i4Q − V3Q·i4D    (16)

where i4D, i4Q and V3D, V3Q are the D-Q axis components of the line current i4 and the SVS bus voltage V3, respectively. Linearizing eqn. (16) gives the deviation in the reactive power, ΔQ3, which is taken as the auxiliary control signal UC1:

ΔQ3 = V3D0·Δi4Q + i4Q0·ΔV3D − V3Q0·Δi4D − i4D0·ΔV3Q    (17)

D. Voltage Angle Auxiliary Signal

The voltage angle is given as:

θ3 = tan⁻¹(V3Q/V3D)    (18)

Linearizing eqn. (18) gives the deviation in the voltage angle, which is taken as the auxiliary control signal UC2:

Δθ3 = (V3D0/V30²)·ΔV3Q − (V3Q0/V30²)·ΔV3D    (19)

where the subscript 0 represents operating point (steady state) values. The state and output equations for the CRPVA auxiliary controller are obtained as follows:

dXC1/dt = AC1·XC1 + BC1·UC1
dXC2/dt = AC2·XC2 + BC2·UC2
YC1 = CC1·XC1 + DC1·UC1
YC2 = CC2·XC2 + DC2·UC2    (20)

where AC1, BC1, CC1 and DC1 are the coefficients of the line reactive power auxiliary controller, and AC2, BC2, CC2 and DC2 are those of the voltage angle auxiliary controller. The auxiliary control signals UC1 and UC2, as obtained from (16) and (18), are implemented as shown in Fig. 7. The overall nonlinear model of the system is obtained by combining the nonlinear differential equations of each constituent subsystem model developed above. The order of the system model is 35 with the CRPVA SVS auxiliary controller, and 33 without the auxiliary controller.

IV. A CASE STUDY

The system considered for analysis is similar to the IEEE first benchmark model [9]. It consists of two synchronous generators, each of 555 MVA, which are represented by a single equivalent unit of 1110 MVA at 22 kV. The electrical power is supplied to an infinite bus over a 400 kV, 600 km long transmission line. The system data and torsional spring-mass system data are given in [4, 5]. The SVS rating for the line has been chosen as 100 MVAR inductive to 300 MVAR capacitive by performing a load flow study. About 40% series compensation is used at the sending end of the transmission line.
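Before turning to the case-study results, the short sketch below shows how the incremental CRPVA auxiliary signals UC1 = ΔQ3 and UC2 = Δθ3 of (17) and (19) could be formed from the D-Q components measured at the SVS bus. The operating-point numbers are hypothetical and serve only to exercise the formulas.

```python
# Sketch: incremental CRPVA auxiliary signals from D-Q quantities at the SVS
# bus, per (17) and (19). Operating-point numbers are hypothetical.

# steady-state (operating point) values, per unit
V3D0, V3Q0 = 0.98, 0.20          # SVS bus voltage components
i4D0, i4Q0 = 0.60, -0.10         # line current components
V30_sq = V3D0**2 + V3Q0**2

def uc1(dV3D, dV3Q, di4D, di4Q):
    """Reactive-power deviation, eq. (17)."""
    return V3D0*di4Q + i4Q0*dV3D - V3Q0*di4D - i4D0*dV3Q

def uc2(dV3D, dV3Q):
    """Voltage-angle deviation, eq. (19)."""
    return (V3D0/V30_sq)*dV3Q - (V3Q0/V30_sq)*dV3D

print(uc1(0.01, -0.005, 0.02, 0.015), uc2(0.01, -0.005))
```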


A. Dynamic Performance

The eigenvalues have been computed for the system without and with the CRPVA SVS damping scheme for a wide range of power transfer. Table 1 presents the eigenvalues for the system at generator powers PG = 200, 500 and 800 MW without the proposed scheme, and at 800 MW with the proposed scheme. When the damping scheme is not applied, one mechanical mode of frequency 4.9157 rad/sec is found to be unstable at PG = 800 MW; at PG = 500 and 200 MW this mode is stable. When the CRPVA SVS damping scheme is applied, the unstable mechanical mode (4.9157 rad/sec) at 800 MW is effectively stabilized, and at 500 and 200 MW the damping is also considerably enhanced. As the system is already stable at 500 and 200 MW, the eigenvalues with the proposed scheme for these cases are not presented in the table for the sake of brevity. Table 2 presents the parameters of the SVS reactive power and voltage angle auxiliary controllers; these parameters have been optimally chosen by performing an exhaustive root locus study.

TABLE 1. SYSTEM EIGENVALUES WITHOUT AND WITH THE CRPVA SVS DAMPING SCHEME

Without CRPVA (200 MW) | Without CRPVA (500 MW) | Without CRPVA (800 MW) | With CRPVA SVS scheme (800 MW)
0 ± j298.1 | 0 ± j298.1 | 0 ± j298.1 | 0 ± j298.1
.0436 ± j202.73 | .058 ± j202.73 | .0879 ± j202.72 | -.0247 ± j202.83
.0136 ± j160.55 | .007 ± j160.54 | -.0047 ± j160.52 | -.0026 ± j160.47
-.0007 ± j126.97 | -.001 ± j126.97 | -.0027 ± j126.96 | -.007 ± j126.96
-.016 ± j98.86 | -.0064 ± j98.83 | .0042 ± j98.74 | -.045 ± j98.65
-.4096 ± j4.47 | -.1644 ± j4.93 | .178 ± j4.97 | -2.47 ± j6.84
-.9822 ± j.82 | -.8177 ± j.85 | -.885 ± j.91 | -.5768 ± j.83
-37.25 | -37.72 | -38.61 | -32.06
-28.2 | -32.28 | -33.23 | -6.182
-2.42 | -2.83 | -3.04 | -2.948
-25.7248 ± j23.62 | -25.65 ± j23.91 | -25.73 ± j24.1 | -25.63 ± j23.76
-3.4341 ± j3507.33 | -3.43 ± j3507.6 | -3.27 ± j3499.08 | -3.27 ± j3499.0
-3.4345 ± j2879.33 | -3.44 ± j2879.6 | -3.27 ± j2871.08 | -3.27 ± j2871.08
-13.35 ± j2523.96 | -13.35 ± j2524.62 | -13.22 ± j2495.34 | -13.24 ± j2495.36
-14.92 ± j1895.97 | -14.92 ± j1896.62 | -14.92 ± j1867.35 | -14.89 ± j1867.31
-11.742 ± j1314.0 | -11.99 ± j1307.79 | -12.69 ± j1137.9 | -15.459 ± j1139.5
-15.209 ± j686.93 | -15.58 ± j680.64 | -18.90 ± j510.1 | -6.451 ± j511.12
-12.46 ± j446.83 | -12.64 ± j445.78 | -12.92 ± j443.94 | -13.17 ± j445.82
-7.22 ± j311.7521 | -6.32 ± j311.66 | -5.0831 ± j311.45 | -4.01 ± j308.9
-9.295 ± j186.64 | -9.87 ± j188.09 | -10.49 ± j190.84 | -9.79 ± j207.09
-545.89 ± j81.51 | -545.23 ± j81.57 | -545.29 ± j74.41 | -546.74 ± j82.75
-55.8214 ± j71.20 | -53.23 ± j69.84 | -49.90 ± j74.74 | -7.477 ± j32.34
 | | | -141.6
 | | | -6.96

VI. CONCLUSION

In this paper a new SVS control strategy for damping torsional oscillations due to subsynchronous resonance (SSR) in a series compensated power system has been developed. The proposed SVS control strategy utilizes the effectiveness of combined reactive power and voltage angle (CRPVA) SVS auxiliary control signals. A digital computer simulation study, using the nonlinear system model, has been carried out to illustrate the performance of the proposed SVS controller under large disturbance. It is found that the torsional oscillations due to SSR are effectively damped out and the transient performance of the series compensated power system is greatly improved. The proposed SVS controller can easily be implemented as it utilizes signals derivable from the SVS bus itself. The SVS is considered located at the middle of the transmission line due to its optimal performance at this location.

ACKNOWLEDGMENT

The work presented in this paper has been performed under the AICTE R&D project "Enhancing the power system performance using FACTS devices" in the FACTS Research Laboratory, Delhi Technological University, Bawana Road, Delhi-110042.
REFERENCES
[1] J. C. Balda, E. Eitelberg and R. G. Harley, "Optimal Output Feedback Design of Shunt Reactor Controller for Damping Torsional Oscillations," Electric Power Systems Research, vol. 10, pp. 25-33, 1986.
[2] Narendra Kumar and M. P. Dave, "Application of Auxiliary Controlled Static Var System for Damping Subsynchronous Resonance in Power Systems," Electric Power Systems Research, vol. 37, pp. 189-201, 1996.
[3] S. K. Gupta, Narendra Kumar et al., "Controlled Series Compensation in Coordination with Double Order SVS Auxiliary Controller and Induction Machine for Repressing the Torsional Oscillations in Power Systems," Electric Power Systems Research, vol. 62, pp. 93-103, 2002.
[4] S. K. Gupta and N. Kumar, "Damping Subsynchronous Resonance in Power Systems," IEE Proc. Gener. Transm. Distrib., vol. 149, no. 6, pp. 679-688.
[5] G. N. Pillai, Arindam Ghosh and A. Joshi, "Torsional Oscillation Studies in an SSSC Compensated Power System," Electric Power Systems Research, vol. 55, pp. 57-64, 2000.
[6] N. Yang, Q. Liu and J. D. McCalley, "TCSC Controller Design for Damping Inter-Area Oscillations," IEEE Trans. on Power Systems, vol. 13, no. 4, pp. 1304-1310, 1998.
[7] R. S. Ramsaw and K. R. Padiyar, "Generalized System Model for Slip Ring Machines," IEE Proc., vol. 120, no. 6, 1973.
[8] K. R. Padiyar and R. K. Varma, "Damping Torque Analysis of Static Var System Controllers," IEEE Trans. on Power Systems, vol. 6, no. 2, pp. 458-465, 1991.
[9] IEEE Special Stability Control Working Group, "Static var compensator models for power flow and dynamic performance simulation," IEEE Trans. Power Syst., vol. 9, no. 1, pp. 229-240, 1994.

Dr. Narendra Kumar was born in 1963 in Aligarh (India). He completed B.Sc. (Electrical Engineering) and M.Sc. (Power Systems and Drives) in 1984 and 1986 respectively from Aligarh Muslim University, Aligarh (India). He obtained his Ph.D. (Power Systems) from the University of Roorkee in 1996. He has been working as Professor (Electrical Engineering) at Delhi Technological University (formerly Delhi College of Engineering), Delhi-110042 (India), since Jan 1, 2004. His areas of interest include power system operation and control, flexible AC transmission systems, automatic generation control, etc.

TABLE 2. AUXILIARY CONTROLLER PARAMETERS

SVS auxiliary signal | KB | T1 | T2
Voltage angle | -0.38 | 0.29 | 0.05
Reactive power | -0.056 | 0.39 | 0.2

B. Transient Performance

A digital time domain simulation of the system under large disturbance conditions has been carried out on the basis of the nonlinear differential equations, with all nonlinearities and limits considered. A load flow study is carried out to calculate the operating point. The fourth order Runge-Kutta method has been used for solving the system nonlinear differential equations. The natural damping of the system has been taken as zero so that the effect of the control scheme can be examined exclusively. The disturbance is simulated by a 30% sudden increase in input torque for 0.1 sec. Fig. 8 shows the transient responses of the system without any auxiliary controller; the oscillations are sustained and growing and the system is unstable. Fig. 9 shows the dynamic response curves when the proposed CRPVA SVS auxiliary controller is applied; the torsional oscillations due to subsynchronous resonance (SSR) are effectively damped out and the system becomes stable.
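The transient study above integrates the nonlinear state equations with the classical fourth-order Runge-Kutta method. The sketch below shows a generic RK4 stepper of that kind; the two-state oscillator used as the right-hand side is a stand-in for the full 35th-order model, and its parameters (apart from the 30% torque pulse for 0.1 s) are illustrative.

```python
# Sketch: classical fourth-order Runge-Kutta stepping of a nonlinear state
# model dx/dt = f(t, x), as used for the transient simulation. The two-state
# test system below merely stands in for the full SVS/turbine-generator model.
import numpy as np

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

def f(t, x):
    # lightly damped oscillator with a 30% input-torque pulse for 0.1 s
    tm = 1.3 if t < 0.1 else 1.0
    delta, omega = x
    return np.array([omega, tm - 1.0 - 0.02*omega - 2.0*np.sin(delta)])

x = np.array([np.arcsin(0.5), 0.0])      # start at the pre-disturbance equilibrium
h, t = 1e-3, 0.0
for _ in range(int(8.0 / h)):            # 8 s of simulated response
    x = rk4_step(f, t, x, h)
    t += h
print("final state:", x)
```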


[Figs. 8 and 9 plot, against time (sec): terminal voltage (p.u.), SVS bus voltage (p.u.), power angle (rad), SVS susceptance (p.u.), shaft torques T(H-I) and T(B-G) (p.u.) and speed deviation (W-Wo).]

Fig. 8. Response curves without any auxiliary controller

Fig. 9. Response curves with the (CRPVA) SVS auxiliary controller



Simulation of Surface and Volume Stress for High Voltage Transmission Insulators
Subba Reddy B and Udaya Kumar
High Voltage Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India. E-mail: reddy@hve.iisc.ernet.in

Abstract: Presently, transmission of bulk power at high voltages over very long distances has become essential. The dual task of mechanically supporting and electrically isolating the live phase conductors from the support tower is performed by disc insulators. Whether under clean or polluted conditions, the electrical stress distribution along the insulators governs the possible flashover, which is quite detrimental to the system. In the present work an attempt has been made to study accurately the field distribution for various types of porcelain/ceramic insulators (normal and anti-fog discs) used for high voltage transmission/distribution. The surface charge simulation method is employed for the field computation to simulate potential, electric field, and surface and bulk/volume stress.

Keywords: Electric field, potential, transmission line insulators, surface/volume stress.

I. INTRODUCTION

The performance of insulators used in overhead transmission, distribution and outdoor substations is one of the critical factors governing the reliability of power delivery systems. Pollution deposited on the insulator surface from air-borne particulates causes a reduction in the performance of outdoor insulators. It is impractical in most situations to prevent the formation of such a layer, and therefore insulators must be designed such that the flashover performance remains high enough to withstand all types of anticipated voltage stresses. This becomes more and more critical at higher transmission voltages. It has often been stated that the upper limit of open-air transmission voltage is set by the pollution performance of insulator strings [1].

Failure at any single point can bring down the entire system; the following example amply demonstrates the consequence of such a failure. Recent reports [2, 3] on grid disturbances in India indicate the loss of five thousand million rupees and 97% of interconnected generation on 2nd January 2001. Similar disturbances of lesser magnitude were also observed during December 2002 and 2005, February and December 2006, January/February 2007 and March 2008. One of the major causes identified was pollution induced flashovers. These events have demonstrated that the performance of overhead transmission line string insulators, and of those in outdoor substations, is a critical factor governing the reliability of power systems. As the pollution induced flashover occurs at normal working voltages, it obviously deserves very serious consideration. Though much experimental and modelling work has been carried out, a means of minimizing pollution induced flashovers is still not available; it remains the subject of continuing research publications and IEEE and CIGRE meetings [4].

The design of porcelain insulators has evolved by intuition and experience, supported by some analysis. Due consideration of the pollution induced flashover phenomenon suggests that the non-uniform electric field profile over the surface of the insulator is the root cause of the phenomenon. However, the electric field profile for string insulators, which is the key for any insulation design, is not supplied with the insulators. It is also probable that many manufacturers may not have accurate field profiles for their own string insulators.
Very early literature resorted to less accurate methods, such as the electrolytic tank, for the required field mapping. In recent years several numerical methods [4-10] have been suggested and employed for computing the associated field. Considering the above, the present work was taken up and is described below.

II. PRESENT WORK

The present work aims to deduce accurate quantitative data on the electrical stress distribution on some commonly employed porcelain insulator strings. Both the volume stress and the surface stress along the porcelain-air and cement-air interfaces are dealt with, and a quantitative comparison of the stresses across commonly employed types of insulators is made. Employing analytical methods for the required field solution is highly impractical, as the geometry does not fit into any of the orthogonal curvilinear coordinate systems; hence a numerical approach is envisaged. For the problem in hand, the governing field is electrostatic under clean conditions and steady conduction under polluted conditions. Therefore the governing field equation is Laplacian, with complex permittivity for combined fields. As the problem is of the open geometry type with multiple dielectrics, boundary based methods are more suitable and hence the Surface Charge Simulation Method (SCSM) [5, 6] is adopted.


III. MODELLING AND SIMULATION

In the present case, the earth is modelled by an image method and the HV conductor, cross arm, arcing horns and corona control (CC) rings are approximated by axisymmetric geometry; otherwise the amount of discretisation involved in the required 3D modelling would be unmanageable. More importantly, under the laboratory pollution tests, which are deemed to be representative of various field conditions, the dominant steady conduction field is axisymmetric. Therefore a 2D axisymmetric approximation is fully justifiable. In an electrostatic field, the applied excitation induces real charges on the conductor surfaces and apparent (polarisation) charges on dielectric interfaces (for linear media). The resulting field distribution is equivalent to that produced by surface charge distributions on the conductor boundaries and fictitious surface charge distributions at the dielectric interfaces, with the dielectrics replaced by vacuum. The SCSM attempts to simulate these real and fictitious charges by piecewise-defined surface charge distributions; in other words, the SCSM involves discretisation of the conductor surfaces and dielectric interfaces. As a consequence, the solution satisfies the governing differential equation exactly, but satisfies the boundary conditions only approximately. The present work employs segments with a linearly varying charge distribution for the discretisation and Galerkin's method for deriving the SCSM equations. The above mathematical steps are fully coded in C along with many error checking routines. The resulting matrices are inverted in Matlab and the obtained charge densities are stored. Two more C programs are run to obtain the data for the equipotential and interface potential distributions. The typical run times for the execution of the programs are about 2 hours for a single disc and 6 hours for 15 discs (corresponding to a 220 kV string).
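For orientation, the sketch below shows the boundary-matching step that charge-simulation style methods reduce to: fictitious charges are placed inside the conductor, the applied potential is enforced at boundary points, and the resulting dense linear system is solved, here checked against the analytic capacitance of an isolated sphere. It is a much simplified stand-in for the paper's SCSM, which uses linearly varying surface-charge segments and Galerkin weighting but likewise ends in a dense matrix solve.

```python
# Much-simplified charge-simulation sketch of the boundary-matching step:
# fictitious point charges inside a sphere are chosen so that the applied
# potential is met at boundary points, giving a dense linear system of the
# same kind the paper assembles (there with surface-charge segments and
# Galerkin weighting) and inverts.
import numpy as np

eps0 = 8.854e-12
V_applied = 1.0                    # conductor potential (V)
R, r_in = 1.0, 0.7                 # sphere radius and fictitious-charge radius (m)
n = 200

# quasi-uniform directions on a sphere (Fibonacci lattice)
k = np.arange(n)
z = 1 - 2*(k + 0.5)/n
phi = np.pi*(1 + 5**0.5)*k
dirs = np.stack([np.sqrt(1 - z**2)*np.cos(phi),
                 np.sqrt(1 - z**2)*np.sin(phi), z], axis=1)

charges_xyz = r_in * dirs          # fictitious charge locations
points_xyz = R * dirs              # potential matching points on the surface

d = np.linalg.norm(points_xyz[:, None, :] - charges_xyz[None, :, :], axis=2)
P = 1.0 / (4*np.pi*eps0*d)         # point-charge potential coefficients

q = np.linalg.solve(P, np.full(n, V_applied))
print("simulated total charge  :", q.sum())
print("analytic 4*pi*eps0*R*V :", 4*np.pi*eps0*R*V_applied)
```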

IV. SIMULATION RESULTS

Simulations were carried out with the intention of finding the maximum bulk and surface stress on various types of ceramic insulators (normal and anti-fog types). The simulation results obtained for the potential and electric field distributions along the air-porcelain interface for a single disc and for 14/15 disc insulator strings are presented below. The results for a single disc are indicative of the intrinsic voltage distribution of the disc, which is expected to be modulated by the interaction of the other discs in the case of a string insulator. Fig. 1 presents the discretised model of the normal type insulator and Fig. 2 presents the contour plot of the bulk/volume stress distribution for a single disc. Based on this distribution, it can be expected that stress concentration occurs near the top of the pin region, i.e. in the cement region between pin and cap, followed by concentrations at the bottom plate and the CC rings. Similarly, Fig. 3 and Fig. 4 present contour plots of the volume electrical stress for 6-disc and 14-disc strings respectively. The stress concentration on top of the pin is further portrayed in these figures; this is readily understood as this region corresponds to the minimum gap region.

Figs. 5 and 7 present the potential plots for a single disc and a 6-disc string, while Figs. 6 and 8 show the simulated surface field plots for a single disc and a 6-disc string respectively. Due to the relatively small radius of the pin, there is a sharp drop in potential at the cement-air interface, which is followed by the oscillating potential along the sheds/petticoats. As the cap is approached there is a considerable fall in potential. This variation is partly due to the influence of the CC ring, which holds up the potential in the petticoat region. Accordingly, the field plot shows the highest stress at the pin region, followed by local stress intensification at the sheds and at the cap. As experienced in laboratory tests, a considerable voltage drop can be seen on the disc next to the high voltage conductor. Except for the pin of the first disc and the cap of the last disc, all other caps and pins form floating conductors dictating a capacitive voltage distribution; consequently the capacitance (arising between cap and pin) of the first disc needs to carry all the required current, which imposes a large voltage drop across it. Individually, the voltage distribution around the remaining discs is similar in pattern; however, due to the reduced influence of the CC ring, the spatial oscillation of the potential diminishes. The field plot, which is an image of the potential plot, shows a much larger surface field at the first disc as compared to the single disc. In fact, the first three discs have higher surface fields than the remaining discs in a string.

A comparison of the results obtained for the bulk and surface stress for single discs and 15 disc strings of the different types of insulators employed in the present study is presented in Fig. 9. It is evident that all the insulators have similar operating stresses and that stress concentration occurs at the top corner of the pin for the bulk stress and at the cement-air interface for the surface stress. Similar field concentration can be expected to prevail during polluted conditions, during which the surface conduction field dominates over the dielectric field. Consequently, the surface profile of the insulator assumes prime importance. The ratio of maximum to average surface stress under polluted conditions would be a quantitative indicator for comparing different types of discs for their pollution performance. In view of this, an attempt


was made to ascertain the relative ratios of maximum and average surface stress for the six types of discs considered in this study (refer to Fig. 10). For this, the normalised local surface resistance is considered. It may be worth recalling here that for the surface conduction field, this normalised surface resistance is directly proportional to the resulting electric field and therefore can be used to depict the field. Fig. 10 compares the maximum and average normalised surface resistance for the six types of insulators considered. The maximum normalised resistance is very similar in some of the cases, and this can be traced to their identical pin radius.

V. SUMMARY AND CONCLUSIONS

The performance of string insulators used in overhead power transmission lines is very critical, and it is dictated by the electric field distribution prevailing under different operating contingencies. However, reliable electric field distribution data on commonly used disc insulators are rather scarce. Considering this, in the present work the potential and electric field profiles for six different types of commonly used porcelain insulators are investigated. A single disc and 3, 6, 9 and 15 disc strings (corresponding to 11, 33, 66, 132 and 220 kV lines) were evaluated. Both volume and surface electrical stresses were deduced and presented. The prevailing stress during laboratory pollution tests is also evaluated. The simulated data is believed to be quite useful for both designers and utilities.

REFERENCES
[1] David C. Jolly, "Contamination Flashover, Part I: Theoretical Aspects," IEEE Trans. on PAS, vol. PAS-91, no. 6, Nov. 1972, pp. 2437-2442.
[2] Rebati Dass, "Grid disturbance in India on 2nd January 2001," Electra, no. 196, pp. 6-15, June 2001.
[3] CEA Enquiry Committee report of the grid incident of the Northern region, 2007.
[4] CIGRE Task Force 33-04-01, "Polluted Insulators: A review of current knowledge," 2000.
[5] Eric H. Allen and Peter L. Levin, "Two dimensional and Axi-symmetric Boundary value problems in Electrostatics," Computational Fields Lab, Dept. of Elect. & Computer Engg., Worcester Polytechnic Institute, Worcester, MA, USA, 1993.
[6] D. Beatovic et al., "A Galerkin formulation of the Boundary Element Method for two dimensional and axi-symmetric problems in Electrostatics," IEEE Trans. on Electrical Insulation, vol. 27, no. 1, Feb. 1992, pp. 135-143.
[7] Udaya Kumar and M. Vasu, "Studies on Voltage distribution in ZnO Surge Arrester," IEE Proceedings, Generation, Transmission & Distribution, vol. 149, no. 4, July 2002.
[8] O. W. Andersen, "Finite element solution of complex electric fields," IEEE Trans., vol. PAS-96, no. 4, pp. 1156-1160, 1977.
[9] H. El Kishky and R. S. Gorur, "Electric potential and field computation along ac HV insulators," IEEE Trans. on DEIS, vol. 1, no. 6, pp. 982-990, Dec. 1994.
[10] S. Chakravorti and H. Steinbigler, "Boundary Element studies on insulator shape and electric field around HV insulators with or without pollution," IEEE Trans. DEIS, vol. 7, no. 2, pp. 169-176, 2000.
[11] Subba Reddy B and Udaya Kumar, "Potential and Electric Field Profiles for Transmission line Insulators," 4th IASTED Conference on Power & Energy Systems, paper no. 606-115, Langkawi, Malaysia, April 2008.
[12] Muhsin Tunay Gencoglu and Mehmet Cebeci, "The Pollution flashover on high voltage insulators," Electric Power Systems Research, vol. 8, issue 11, Nov. 2008, pp. 1914-1921.
[13] Subba Reddy B and Udaya Kumar, "Field Reduction Electrodes for the possible improvement in the pollution flashover performance of Porcelain Insulators," IEEE 9th International Conference on Properties & Applications of Dielectric Materials (ICPADM-2009), Harbin, China, 19-23 July 2009.

Fig.1 Discretised model for normal type disc

Fig.2 contour plot for Bulk/Volume stress-single disc

Fig.3 Bulk/Volume stress plot for 6-disc string

Fig.4 Bulk/Volume stress plot for 14-disc string


[Figs. 5-8 plot the potential and the surface gradient (kV/cm) against the creepage distance measured from the pin end (m); Figs. 9 and 10 compare the disc types N1-N4 (normal) and AF1, AF2 (anti-fog).]

Fig. 5. Potential plot for single disc
Fig. 6. Surface field plot for single disc
Fig. 7. Potential plot for 6-disc string
Fig. 8. Surface field plot for 6-disc string
Fig. 9. Comparison of bulk and surface stress for various discs (maximum bulk/volume stress for a single disc, maximum surface stress for a single disc and for a 15 disc string, in kV/cm)
Fig. 10. Comparison of normalised resistance (maximum and average normalised surface resistance)



Stability Analysis and Frequency Excursion of Combined Cycle Plant Including SMES Unit
J. Raja, EEE, Sri Manakula Vinayagar Engg. College, Puducherry, India. E-mail: rajaj1980@rediff.com
Dr. C. Christober Asir Rajan, EEE, Pondicherry Engg. College, Puducherry, India. E-mail: asir_70@yahoo.com
Abstract: This paper presents a MATLAB SIMULINK based dynamic model of a combined cycle stand-alone gas power plant and develops the transfer function model of the load frequency and temperature control loops with superconducting magnetic energy storage. The aim of the proposed work is to regulate the frequency and temperature in order to maintain the system stability and the safety of the power plant. Since most loads are of induction type and induction motor speed is directly proportional to frequency, the frequency has to be maintained within limits. If the temperature is low, it causes low efficiency of the heat recovery boiler, and maintaining a temperature higher than allowed will reduce the life of the equipment; hence the temperature has to be regulated for safe operation of the power plant. Considering this background, it is important to study the dynamic behavior of combined cycle plants. Here we develop the dynamic model for a single shaft combined cycle plant and analyse its response to electrical load and frequency transients with the effects of superconducting magnetic energy storage.

Keywords: frequency control, gas turbine, SMES, stability analysis, steam turbine, temperature control.

I. INTRODUCTION

The energy mix of the electricity grid in India has changed greatly over the last decade, with an increasing proportion of combined cycle gas turbine (CCGT) units now being utilized for ancillary services such as frequency response. Historically, the provision of frequency response has been dominated by coal fired stations that have proven to be very reliable, but a large number of these units will be retired in the coming decade. With these changes in plant mix and an increasing proportion of renewable generation expected by 2010, a firm understanding of the collective dynamic behavior of generator response is required.

The basic principle of the combined cycle is simple: burning gas in a gas turbine produces not only power, which can be converted to electric power by a coupled generator, but also fairly hot exhaust gases. Routing these gases through a water-cooled heat exchanger produces steam, which can be turned into electric power with a coupled steam turbine and generator. This set-up of gas turbine, waste-heat boiler, steam turbine and generators is called a combined cycle. This type of power plant is being installed in increasing numbers around the world where there is access to substantial quantities of natural gas; it produces high power outputs at high efficiencies and with low emissions. It is also possible to use the steam from the boiler for heating purposes, so such power plants can also operate to deliver electricity alone. During the last decades there has been continuous development of combined cycle power plants due to their increased efficiency and low emissions. Efficiencies are very wide ranging, depending on the lay-out and size of the installation, and vary from about 40-56% for large new natural gas fired stations.

The dynamic response of such power plants to load and frequency transients is rather problematic, since the compressor and the fuel supply system are both attached to the shaft of the unit. Thus rotor speed and frequency have a direct effect on the air and fuel supply [3], which introduces a negative effect on system stability (CIGRE, 2003). In addition, combined cycle power plants operate close to their temperature limits (above a relatively low power level) so as to achieve the best efficiency in the steam generator (CIGRE, 2003). This fact raises further issues relating to the response of combined cycle power plants (CCPP) during frequency drops or variations in load power. Temperature should be maintained (apart from the first 20 seconds of the disturbance) below certain limits for the protection of the plant. This paper is based on the modeling proposed [4] in Kakimoto and Baba (2003) and Rowen [8] (1983), while the model developed is integrated into an educational and research simulation package developed in the Electrical Energy Systems Lab of NTUA (Vournas et al., 2004). Other similar models are presented in Kunitomi et al. [5] (2003), Lalor et al. (2005) and Zhang and So (2000), which, however, are slightly different. For instance, in Lalor and O'Malley (2003) [7] the structure of the steam turbine is more detailed, while in this paper we use a simplified steam turbine model.

In this paper we analyze the dynamic behavior of a combined cycle plant for frequency drops. Several dynamic models of the combined cycle plant have been proposed [1]-[8]. We combine some of them and build a model for a single-shaft combined cycle plant, and execute numerical simulations to see how the combined cycle plant behaves when the system frequency drops.

The gas turbine (Brayton) cycle is one of the most efficient cycles for the conversion of gas fuels to mechanical power or electricity. The use of distillate liquid fuels, usually diesel, is also common where the cost of a gas pipeline cannot be justified. Gas turbines have long been used in simple cycle mode for peak lopping in the power generation industry, where natural gas or distillate liquid fuels have been used, and where their ability to start and shut down on demand is essential. Gas turbines have also been used in simple cycle mode for base load mechanical power and electricity generation in the oil and gas industries, where natural gas and process gases have been used as fuel. Gas fuels give reduced maintenance costs compared with liquid fuels, but the cost of natural gas supply pipelines is generally only justified for base load operation.

The combined cycle power plant model is shown in Fig. 3.0 together with the basic structure of the gas power plant; it consists of the power generation units and the control branches. The thermodynamic part, giving the available thermal power to the gas turbine and the steam turbine, is modeled by the algebraic equations given in Appendix I, corresponding to the adiabatic compression and expansion as well as to the heat exchange in the recovery boiler. These equations correspond to the block "Algebraic equations of energy transform" in the subsystem of Fig. 2, and are presented in Spalding and Cole et al. [9]. The operation of the steam turbine is based on the thermodynamic principle that when the steam expands, its temperature diminishes and its internal energy decreases. This reduction of internal energy becomes mechanical energy accelerating the steam particles, which makes a great quantity of energy directly available. When the steam expands, the reduction of its internal energy can produce an increase of the speed of the


particles. At these speeds the available energy is very high, although the particles are very light.

II. REQUIREMENT FOR SPEED CONTROL

The basic structure of the load-frequency loop has one main control loop during normal operating conditions. This loop involves the speed governor, which detects the frequency deviation from the nominal value and determines the fuel demand signal (Fd) so as to balance the difference between generation and load. Autonomous operation is assumed, so power imbalances will cause electrical frequency deviations. In single shaft combined cycle power plants, the fuel control system as well as the fans which provide air to the compressor are attached to the generator shaft, so their performance is directly linked to rotor speed. This affects the stability of the system, as will be demonstrated in this analysis. Without speed control, a decrease in frequency will result in a decrease in air and fuel flow (W and Wf), as the shaft of the plant will reduce its speed. The decrease of these two parameters will cause a decrease in power generation and consequently the frequency will continue to drop. So even if only a small transient disturbance is applied, it will eventually take the uncontrolled unit out of service, as at some point the under-frequency protection of the plant will be activated. The response of frequency to such a temporary load increase without speed control is shown in Fig. 3 together with the disturbance in electrical load. Therefore, it becomes obvious that speed regulation is absolutely necessary for the stability of the combined cycle plant.
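A minimal sketch of the speed-governor action just described, in which the frequency deviation sets the fuel demand signal Fd through a droop characteristic, is given below. The 4% droop and the initial fuel demand are illustrative assumptions; the 1.10 p.u. ceiling echoes the 110% Fd limit mentioned later in the paper.

```python
# Sketch of the speed-governor action: frequency deviation, through a droop
# characteristic, sets the fuel demand signal Fd. The 4% droop and the 0.85
# initial demand are illustrative, not the plant data.
def fuel_demand(freq_pu, f_nom_pu=1.0, droop=0.04, fd0=0.85, fd_max=1.10, fd_min=0.0):
    """Return Fd (p.u.) for the measured per-unit frequency."""
    delta_f = f_nom_pu - freq_pu            # positive when frequency has dropped
    fd = fd0 + delta_f / droop              # proportional (droop) response
    return max(fd_min, min(fd_max, fd))     # actuator limits

print(fuel_demand(1.000))   # nominal frequency  -> Fd stays at its initial value
print(fuel_demand(0.997))   # 0.3% frequency dip -> Fd rises
print(fuel_demand(0.970))   # 3% drop            -> Fd saturates at the ceiling
```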

III. REQUIREMENT FOR TEMPERATURE CONTROL

The temperature control loop consists of two branches. The normal temperature control branch acts through the air supply control: when the temperature of the exhaust gases (Te) exceeds its reference value (Tr), this controller acts on the air valves to increase the air flow so as to decrease the exhaust gas temperature. In certain situations, however, this normal temperature control is not enough to maintain safe temperatures; thus, in cases of a severe overheat, the fuel control signal is reduced through a low value select function (LVS) that determines the actual fuel flow into the combustion chamber. The reference temperature (Tr) is the parameter defined by the supervisory control for the exhaust gas temperature (Te); this control branch reacts by decreasing Tr when the gas turbine inlet temperature (Tf) exceeds its nominal value. Low value select: the inputs to the LVS are the fuel demand signal determined by the speed governor and an overheat control variable, which decreases from an initial ceiling value when the exhaust temperature exceeds its reference. During the operation of the unit, only one of the control branches is active: the one whose control variable has the lowest value. Fig. 1 shows the block diagram of the load frequency control loop of the combined cycle power plant, which combines the gas turbine and steam turbine with the speed and temperature control loops.
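The low value select behaviour described above can be captured in a few lines; the sketch below simply passes on the lower of the speed-governor demand Fd and the overheat control variable Tc, and reports which branch is in control. The sample values are illustrative.

```python
# Sketch of the low-value-select (LVS) logic: the fuel command sent to the
# combustion chamber is the lower of the speed-governor demand Fd and the
# overheat-control variable Tc. Sample values are illustrative.
def low_value_select(fd, tc):
    """Return the active fuel command and which branch is in control."""
    if tc < fd:
        return tc, "temperature (overheat) control"
    return fd, "speed control"

print(low_value_select(fd=1.05, tc=1.10))   # normal operation: speed loop active
print(low_value_select(fd=1.08, tc=0.95))   # overheat: temperature loop limits fuel
```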

IV. SMES SYSTEM

The SMES unit contains a DC superconducting coil and a converter, which are connected by a star-delta/delta-star transformer. Control of the converter firing angle allows the DC voltage Ed appearing across the inductor to be continuously varied within a certain range of positive and negative values. The inductor is initially charged to its rated current Ido by applying a small positive voltage; once the current reaches the rated value, it is maintained constant by reducing the voltage across the inductor to zero, since the coil is superconducting [24-25]. Neglecting the transformer and converter losses, the DC voltage is given by

Ed = 2·Vdo·cos(α) − 2·Id·Rc    (1)

where Ed is the DC voltage applied to the inductor (kV), α is the firing angle (deg), Id is the current flowing through the inductor (kA), Rc is the equivalent commutating resistance (ohm) and Vdo is the maximum circuit bridge voltage (kV). Charge and discharge of the SMES unit are controlled through the change of the commutation angle α: if α is less than 90 deg the converter acts in converter mode, and if α is greater than 90 deg the converter acts in inverter mode.

In combined cycle power plant operation, the DC voltage Ed across the superconducting coil is continuously controlled depending on the sensed error signal. In this study, as in recent literature, the inductor voltage deviation of the SMES unit of each area is based on the error of the same area in the power system. Moreover, the inductor current deviation is used as a negative feedback signal in the SMES control loop, so that the current variable of the SMES unit settles to its steady state value. If the load demand changes suddenly, this feedback provides prompt restoration of the current; the inductor current must be restored to its nominal value quickly after system disturbances, so that it can respond to the next load disturbance immediately. As a result, the equations of the inductor voltage deviation and current deviation for each area in the Laplace domain are as follows:

ΔEdi(s) = [Koi/(1 + s·Tdci)]·Δfi(s) − [KIdi/(1 + s·Tdci)]·ΔIdi(s)    (2)
ΔIdi(s) = [1/(s·Li)]·ΔEdi(s)    (3)

where KIdi is the gain of the ΔIdi feedback, Tdci is the converter time delay, Koi (kV/unit) is the gain constant and Li (H) is the inductance of the coil. The deviation in the inductor real power of the SMES unit is expressed in the time domain as

ΔPsmi(t) = ΔEdi·Idio + ΔIdi·ΔEdi    (4)

This value is assumed positive for transfer from the AC grid to DC. The energy stored in the SMES at any instant is

Wsmi(t) = Li·Idi²/2 (MJ)    (5)
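A minimal discrete-time sketch of the incremental SMES model (2)-(5) is given below: the inductor voltage deviation follows the frequency error through the first-order converter lag with inductor-current feedback, and the current, power and energy quantities follow from (3)-(5). The gains, time constants, frequency-error input and the forward-Euler step are illustrative assumptions, not the study data.

```python
# Sketch: discrete-time (forward Euler) simulation of the incremental SMES
# model (2)-(5). Gains, time constants and the frequency-error input are
# illustrative values, not the study data.
import numpy as np

Ko, Kid = 50.0, 0.2      # gain constant (kV/unit) and current-feedback gain
Tdc = 0.03               # converter time delay (s)
L = 2.65                 # coil inductance (H)
Ido = 4.5                # rated (initial) inductor current (kA)
dt, T = 1e-3, 2.0

dEd = dId = 0.0
t = np.arange(0.0, T, dt)
df = -0.01*np.exp(-t/0.5)                 # decaying frequency-error input (p.u.)

for k in range(len(t)):
    # (2): first-order converter lag driven by Ko*df and -Kid*dId
    dEd += dt*((Ko*df[k] - Kid*dId - dEd)/Tdc)
    # (3): inductor current deviation is the integral of dEd / L
    dId += dt*dEd/L

dPsm = dEd*Ido + dId*dEd                  # (4): incremental SMES power (kV*kA = MW)
Wsm = L*(Ido + dId)**2/2                  # (5): stored energy (MJ)
print(dEd, dId, dPsm, Wsm)
```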

V. STABILITY OF CONTROL LOOPS

The low-value-select (LVS) function acts like a switch that activates one of the two control loops (frequency or overheat) by selecting the lower value of the two control variables (Tc or Fd). Since, as discussed above, the speed control is vital for the stability of the system, its temporary interruption by the LVS during overheat conditions is critical. The calculation of the linearized system eigenvalues is done separately, with and without SMES, for the cases where the speed control branch or the temperature control branch is active. The calculated eigenvalues are presented in Tables I and II. Table I shows that when the speed loop is active the system is stable, whereas with the LVS switched to overheat control the system becomes unstable, with a positive eigenvalue corresponding to the shaft speed. Thus, a necessary condition for the system to achieve a steady state after a disturbance is that the LVS reactivates the speed control soon enough. The other eigenvalues of Table II demonstrate a satisfactory behavior (relatively fast and without undamped oscillations) of the combined cycle plant with SMES. This type of control (switching between stable and unstable systems) is typical of sliding mode control systems (Utkin et al.,


1999). It should be noted that a switching system could become temporarily unstable and still maintain its overall stability if the switching device (LVS in our model) performs properly. At this point it is necessary to note that the airflow control branch, which performs the normal temperature control, has a major influence on the overall stability of the system.

TABLE I. STABILITY ANALYSIS: EIGENVALUES

S. No | Open Loop | Speed Control Loop | Temperature Control Loop
1 | -72.5 | -73.8646 | -70.4523
2 | -19.99 | -19.995 | -19.994
3 | 8.0031 | -3.9985 | -2.0001
4 | 0.0026 | -0.0062 | 0.0016
5 | 0.2405 | -1.6406 | 0.1256
6 |  | -0.5095 + 0.6308i | 0.4095 + 0.5830i
7 |  | -0.0003 | -0.0003
8 |  | -0.0755 | -0.0551

TABLE II. STABILITY AND COMPARATIVE ANALYSIS: EIGENVALUES

S. No | Open Loop | Closed Loop Without SMES | Closed Loop With SMES
1 | -72.5 | -76.8646 | -76.8646
2 | -19.99 | -19.9985 | -33.2545
3 | 8.0031 | -5.0062 | -19.9985
4 | 0.0026 | -1.6406 | -5.0062
5 | 0.2405 | -0.5095 + 0.6308i | -1.6406
6 |  | -0.5095 - 0.6308i | -0.5095 + 0.6308i
7 |  | -0.0280 | -0.5095 - 0.6308i
8 |  | -0.0001 | -0.0000
9 |  | -0.2400 | -0.0755
10 |  |  | -0.0280
11 |  |  | -0.2400
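The stability test behind Tables I and II amounts to computing the eigenvalues of the linearized state matrix and checking for any with a positive real part. The sketch below does exactly that for a small hypothetical matrix standing in for the plant model.

```python
# Sketch of the stability check behind Tables I and II: eigenvalues of a
# linearized state matrix A are computed and any with a positive real part
# marks the operating mode as unstable. A is a small hypothetical example.
import numpy as np

A = np.array([[-72.5,   1.0,    0.0],
              [  0.0, -19.99,   5.0],
              [  0.3,   0.0,    0.0026]])

eig = np.linalg.eigvals(A)
unstable = [e for e in eig if e.real > 0]
print("eigenvalues:", np.round(eig, 4))
print("system is", "unstable" if unstable else "stable",
      "- modes with positive real part:", np.round(unstable, 4))
```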

Figure 1 MATLAB SIMULINK Model of Single shaft Combined Cycle Model with Temperature and Speed Control Loop

When this branch is active, it is possible to increase the power generation without overheating, as a proportional increase of fuel and air will maintain the combustion temperature constant according to Appendix II. Thus, with proper airflow control, even if the system passes temporarily into unstable operation, it can eventually end up in a stable steady state. During a disturbance (e.g. a decrease of frequency or an increase of the load), due to the high gain and the small time constant of the speed control loop, a quick increase of the fuel demand signal is observed. Thus, soon after the disturbance, the LVS function activates the temperature control to avoid overheating. When the air control functions properly, it allows the increase of the power produced at constant temperature. In this way the frequency error decreases, resulting in a decrease of the fuel demand Fd, and the system returns to the stable operation of frequency control and ends up in steady state. If, on the other hand, the generation is not increased, the system does not return to the stable loop of speed control and the plant will go out of service. The normal response of the plant for an instantaneous 3% frequency drop is shown in Figs. 3 and 4 for an initial operating point corresponding to 85% of nominal power output. The frequency drop, through the speed governor control, immediately results in an increase of Fd, causing the value of this parameter to exceed the ceiling of 110%, while temperatures increase and the overheat control is activated.

VI. RESULT ANALYSIS

There are many power plants in practical power systems. In this study, we consider a small system composed of a combined cycle plant and a step/ramp load; their models have been described above. We consider a single-shaft combined cycle plant. Its rated output is 32.5 MW (gas turbine 22.9 MW, steam turbine 9.6 MW). For the gas turbine we use the typical parameters provided in [4]; the steam turbine parameters are taken from [19] and the gas turbine parameters from [20]. The parameters are listed in Appendix II. Fig. 3 shows the open loop response of the combined cycle power plant without a controller in the presence of a step input signal. The response shows that an increase in load decreases the turbine speed N and frequency f and increases the fuel flow Wf and exhaust temperature Te, and the power system loses its stability. So without the temperature and speed control loops the system becomes unstable.

Figure 2 - Sub System

Figure 3 - Open Loop Response of Combined Cycle Plant


First we consider the cases in which this system is separated from the large system, with a step load following step disturbances. Fig. 4 shows the dynamic response of the combined cycle power plant with speed control and without temperature control for a step input; in this case once again the power system becomes unstable, but the magnitude of the oscillations is reduced. In Fig. 4 the first dynamic response shows Fd versus time: during the step period Fd suddenly increases and, as time increases, the temperature reaches its steady state value. Similarly, the second response shows temperature versus time.

Figure 7 - Dynamic Response of Combined Cycle Plant with N and Temperature Control Loops: (i) without SMES, (ii) with SMES

Figure 4 - Open Loop Response of Combined Cycle Power Plant with SMES

Fig. 5 shows the dynamic response of the combined cycle power plant with speed control and without temperature control for a ramp input; in this case once again the power system becomes unstable, but the magnitude of the oscillations is reduced.

Figure 5 - Dynamic Response of Combined Cycle Plant with N Control Loop, without Temperature Loop and SMES

The temperature control is a crucial factor in the above phenomenon. The exhaust temperature is determined by the fuel flow and the air flow, and it determines whether the temperature control operates or not. In this section we examine the phenomenon in terms of fuel flow and air flow. Figs. 6 and 7 show the dynamic response of the combined cycle power plant with both controllers present for a step input signal.

Figure 6 - Dynamic Response of Combined Cycle Plant with N Control and SMES Loops, without Temperature Loop

The results show that N, Fd, T and the air flow are maintained within limits and the system becomes stable; the power system operates in a stable region.

VII. CONCLUSIONS

This model, including the generator unit and the SMES unit together, represents the realistic performance of the power system in the presence of a step signal. The obtained results can be summarized as follows:
(i) Following a frequency drop, the temperature control soon overrides the speed control and restricts the fuel flow to about its initial value. The fuel flow then slowly increases as the inlet guide vanes open, which correspondingly affects the stability of the power plant. If the inlet guide vanes open fully, the temperature control and the frequency determine the fuel flow and accordingly the power output.
(ii) The simulation model and dynamic responses reveal that the speed control loop is necessary for the stability of the gas power plant, as the frequency feedback in the fuel flow and air flow renders the plant very sensitive to disturbances. The model and the stability of a single shaft combined cycle plant, as well as its control loops, are analyzed. Furthermore, we examined the behavior of such a plant in the presence of step and ramp signals. Without speed control the plant is unstable; by implementing the speed control loop the stability of the plant is improved and the system ends up in a stable state.
(iii) The low value select function plays a major role in the response of the model. In case of overheat it enables the temperature control, which limits the fuel supply to the turbine; otherwise it enables the speed control loop, which is essential for system stability. Without frequency control the plant is unstable, so in order to end up in steady state after step and ramp signals a necessary condition is for the LVS to switch back to speed control.
(iv) The temperature control designed here is used to measure and control the turbine exhaust temperature. By implementing the temperature control loop, the exhaust temperature is regulated and safe operation of the power plant is obtained; hence the stability of the power plant is also increased. Air control contributes significantly to the stability of the plant, as it helps to control the turbine temperature and also allows the power generation to be increased.
(v) The proposed model is nonlinear. Since the model response is relatively slow, some blocks with small time constants could be ignored in order to reduce the order of the model and the nonlinearity and to simplify the calculations. The plant designed in this project is a small single shaft combined cycle plant with SMES. We gathered information from PPCL and from various books, articles, papers and the internet. Now that the project is finished and compared with existing works, it is found from this study that this work may seem

85

to be quite simple and easy to analyze a real gas power plant. (PONDICHERRY POWER CORPORATION LIMITED. (PPCL) is the combined power plant which comprises gas and steam power plant. In PPCL they are generating 32.5 MW. The output from the gas power plant is about 22.9 MW and from the steam power plant is about 9.6MW. GAIL (Gas Authority Of India Limited) providing the natural gas for power plant supplying the input for PPCL. The gas consumption per day is 190000m3.The fuel cost per month is about 2 crore).

REFERENCES
[1] J. Raja and C. Christober Asir Rajan, "Dynamic Model of Interaction between Frequency and Temperature Control in a Combined Cycle Plant," Proc. 1st International Conference on Advances in Energy Conservation Technologies (ICAECT-2010), MIT, Manipal, Karnataka, India, Jan. 7-10, 2010.
[2] J. Raja and C. Christober Asir Rajan, "Stability Analysis and Dynamic Modeling of Frequency Excursion and Temperature Control in a Combined Cycle Plant," Proc. 1st International Conference on Research Advances in Energy Systems (ICRAES'10), KSR College of Engineering, Tiruchengode, Tamil Nadu, India, Jan. 2010.
[3] CIGRE, "Modeling Gas Turbines and Steam Turbines in Combined-Cycle Power Plants," International Conference on Large High Voltage Electric Systems, Technical Brochure, 2003.
[4] N. Kakimoto and K. Baba, "Performance of Gas Turbine-Based Plants During Frequency Drops," IEEE Transactions on Power Systems, vol. 18, no. 3, pp. 1110-1115, 2003.
[5] K. Kunitomi, A. Kurita, Y. Tada, S. Ihara, W. W. Price, L. M. Richardson and G. Smith, "Modeling Combined-Cycle Power Plant for Simulation of Frequency Excursions," IEEE Transactions on Power Systems, vol. 18, no. 2, pp. 724-729, 2003.
[6] G. Lalor and M. O'Malley, "Frequency Control on an Island Power System with Increasing Proportions of Combined Cycle Gas Turbines," IEEE Bologna Power Tech Conference, 2003.
[7] G. Lalor, J. Ritchie, D. Flynn and M. J. O'Malley, "The Impact of Combined Cycle Gas Turbine Short-Term Dynamics on Frequency Control," IEEE Transactions on Power Systems, vol. 20, no. 3, pp. 1456-1464, 2005.
[8] W. I. Rowen, "Simplified Mathematical Representation of Heavy Duty Gas Turbines," Trans. Amer. Soc. Mech. Eng., vol. 105, pp. 865-869, 1983.
[9] D. B. Spalding and E. H. Cole, Engineering Thermodynamics, Edward Arnold (Publishers), London, 1973.
[10] V. Utkin, J. Guldner and J. Shi, Sliding Mode Control in Electromechanical Systems, Taylor & Francis, London, 1999.
[11] C. D. Vournas, E. G. Potamianakis, C. Moors and T. Van Cutsem, "An Educational Simulation Tool for Power System Control and Stability," IEEE Transactions on Power Systems, vol. 19, pp. 48-55, 2004.
[12] Q. Zhang and P. L. So, "Dynamic Modeling of a Combined Cycle Plant for Power System Stability Studies," IEEE Power Engineering Society Winter Meeting, vol. 2, pp. 1538-1543, 2000.
[13] "Large-scale blackout in Malaysia," in Kaigai Denryoku (Foreign Power), Japan Electric Power Information Center, Inc., 1996, pp. 103-104.
[14] T. Inoue, Y. Sudo, A. Takeuchi, Y. Mitani and Y. Nakachi, "Development of a Combined Cycle Plant Model for Power System Dynamic Simulation Study," Trans. Inst. Elect. Eng. Jpn., vol. 119-B, no. 7, pp. 788-797, 1999.
[15] S. Suzaki, K. Kawata, M. Seckoguchi and M. Goto, "Combined Cycle Plant Model for Power System Dynamic Simulation Study," Trans. Inst. Elect. Eng. Jpn., vol. 120-B, no. 8/9, pp. 1146-1152, 2000.

[16] K. Kunitomi, A. Kurita, H. Okamoto, Y. Tada, S. Ihara, P. Pourbeik, W. W. Price, A. B. Leirbukt and J. J. Sanchez-Gasca, "Modeling Frequency Dependency of Gas Turbine Output," Proc. IEEE Power Eng. Soc. Winter Meeting, Jan. 2001.
[17] F. P. de Mello and D. J. Ahner, "Dynamic Models for Combined Cycle Plants in Power System Studies," IEEE Trans. Power Syst., vol. 8, pp. 152-158, Feb. 1992.
[18] L. N. Hannet and A. Khan, "Combustion Turbine Dynamic Model Validation from Tests," IEEE Trans. Power Syst., vol. 8, pp. 152-158, Feb. 1992.
[19] C. Concordia and S. Ihara, "Load Representation in Power System Stability Studies," IEEE Trans. Power Apparat. Syst., vol. PAS-101, pp. 969-977, Apr. 1982.
[20] W. I. Rowen and R. L. Van Housten, "Gas Turbine Airflow Control for Optimum Heat Recovery," Trans. Amer. Soc. Mech. Eng. Power, vol. 105, pp. 72-79, Jan. 1983.
[21] C. Concordia, S. Ihara, N. Simons and W. Waldron, "Load Behavior Observed in LILCO and RG&E Systems," IEEE Trans. Power Apparat. Syst., vol. PAS-101, pp. 969-977, Apr. 1982.
[23] C.-F. Lu, C.-C. Liu and C. Wu, "Effect of Battery Energy Storage System on Load Frequency Control Considering Governor Deadband and Generation Rate Constraints," IEEE Trans. Power Syst., vol. 10, no. 3, pp. 555-561, 1995.
[24] Y. Mitani, K. Tsuji and Y. Murakami, "Application of Superconducting Magnetic Energy Storage to Improve Power System Dynamic Performance," IEEE Trans. Power Syst., vol. PWRS-3, no. 4, pp. 1418-1425, 1988.
[25] S. C. Tripathy, R. Balasubramanian and P. S. Chandramohanan Nair, "Effect of Superconducting Magnetic Energy Storage on AGC Considering Governor Deadband and Boiler Dynamics," IEEE Trans. Power Syst., vol. 3, no. 7, pp. 1266-1273, 1992.
[26] A. Demiroren and E. Yesil, "Automatic Generation Control with Fuzzy Logic Controllers in the Power System Including SMES Unit."

Appendix I
Value of model parameters: ti = 303 K, td0 = 390 °C, tf0 = 1085 °C, te0 = 532 °C, Pr0 = 11.5, γ = 1.4, ηc = 0.85, ηt = 0.85, R = 0.04, Tt = 0.4699, Tcmax = 1.1, Tcmin = 0, Fdmax = 1.5, Fdmin = 0, Tv = 0.05, TF = 0.4, gmax = 1.001, gmin = 0.73, Tw = 0.4669, Tcd = 0.2, Toff = 0.01, K0 = 0.00303, K1 = 0.000428, Tg = 0.05, K4 = 0.8, K5 = 0.2, T3 = 15, T4 = 2.5, T5 = 3.3, K3 = 0.77, K6 = 0.23, T6 = 60, Tm = 5, Tb = 20, TI = 18.5.

Appendix - II
X = Pr^((γ-1)/γ), with Pr = Pr0 · W
ηc = (td,is - ti)/(td - ti), so that td = ti [1 + (X - 1)/ηc]
Wf = W (tf - td)/(tf0 - td0), i.e. tf = td + (tf0 - td0) · Wf / W
ηt = (tf - te)/(tf - te,is), so that te = tf [1 - (1 - X^(-1)) ηt]
Eg = K0 [(tf - te) - (td - ti)] · W
Es = K1 · te · W
Tf = (tf - 273)/tf0
Te = (te - 273)/te0
P = Eg + Es
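As a quick numerical check of these relations, the following minimal Python sketch evaluates them with the Appendix-I constants. It is an illustration only: the per-unit air flow W and fuel flow Wf and the unit handling are assumptions, and none of the control loops, SMES dynamics or time constants of the Simulink model are represented.

```python
# Minimal sketch of the reconstructed Appendix-II steady-state relations,
# evaluated with the Appendix-I constants. Inputs W (air flow) and Wf (fuel
# flow) are per-unit values assumed for illustration.
GAMMA, ETA_C, ETA_T = 1.4, 0.85, 0.85
T_I, TD0, TF0, TE0 = 303.0, 390.0, 1085.0, 532.0
PR0, K0, K1 = 11.5, 0.00303, 0.000428

def combined_cycle_point(W, Wf):
    x = (PR0 * W) ** ((GAMMA - 1.0) / GAMMA)      # compressor pressure-ratio term
    td = T_I * (1.0 + (x - 1.0) / ETA_C)          # compressor discharge temperature
    tf = td + (TF0 - TD0) * Wf / W                # combustor (firing) temperature
    te = tf * (1.0 - (1.0 - 1.0 / x) * ETA_T)     # turbine exhaust temperature
    eg = K0 * ((tf - te) - (td - T_I)) * W        # gas turbine output
    es = K1 * te * W                              # steam turbine output
    return {"td": td, "tf": tf, "te": te, "Eg": eg, "Es": es, "P": eg + es,
            "Tf": (tf - 273.0) / TF0, "Te": (te - 273.0) / TE0}

print(combined_cycle_point(W=1.0, Wf=1.0))
```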



VIABILITY ASSESSMENT OF MICROHYDRO POWER A CASE STUDY


P K Katti
1. ABSTRACT
Renewable energy sources are increasingly being considered as viable alternatives to conventional supply systems in the provision of electricity to isolated communities, especially in rural areas. Traditionally, rural loads are supplied either from stand-alone diesel power stations or via grid extensions. Such sources are sometimes not cost-effective due to the high fuel and maintenance costs of diesel stations, as well as the long distances from the grid to the loads. Provision of electricity to rural areas is considered uneconomical by many utility companies because of the low consumption and poor load factors. On the other hand, renewable energy sources, such as solar, wind and mini-hydro, are suitable for supplying small loads operating independently. They do have some limitations, such as the high initial capital cost of the energy conversion equipment and the variable power output. However, if appropriately applied, individually or in combination, renewable energy sources have the potential to be effective sources for some rural loads. An assessment of the mini-hydro energy source shows that, in areas where grid supply is uneconomical or not available, mini-hydro sources can be used to supply groups of households, or to support the existing system during the load-shedding portions of the load profile.
Key words: renewable source, small hydro, waterpower, hydropower, hydroelectricity

2. INTRODUCTION Rising costs and unsustainability of conventional energy sources, as well as, the need for environmentally friendly energy supplies, are considered to be the main factors that have stimulated interest into studies for practical applications of renewable energy sources. Renewable energy supplies are considered to be partly or wholly generated in the course of an annual solar cycle. They are extremely variable in time and space, and are generally more difficult to store, compared to conventional energy supplies. Examples of renewable energy sources are biomass, wind, hydro and direct solar radiation. Such sources are usually site specific (e.g. hydro), appear in low concentrations (e.g. wind and solar), and hence require capital-intensive equipment to convert into more useable forms of energy (e.g. electricity). However, renewable energy sources have relatively low maintenance costs, are well suited for meeting
Dr. P K Katti; Professor (Electrical Engineering) Email: pk_katti2003@yahoo.com Dr Babasaheb Ambedkar Technological University, - Lonere- Raigad Maharashtra India -4021013

energy requirements of sparsely populated, isolated and remote consumers, such as rural communities. The impetus for rural electrification has mainly been due to the growing realisation of the importance of providing reliable and clean energy supplies to such areas, for their socio-economic development and environmental protection. The major household energy requirements in rural areas, for cooking, lighting and heating, are met mainly from wood-fuel and kerosene. Utilisation of these resources places pressure on forests and subjects individuals to pollution due to inhalation of smoke in households. Provision of electricity to rural areas would give people an alternative, cleaner energy source. Availability of electricity also enables people to use electrical equipment and machinery, which has a positive impact on the quality of life of an individual household and promotes socio-economic development in the community [3]. Electricity consumption in rural areas is generally very low compared to urban settlements. Electricity supply to rural areas is mainly by grid extensions or stand-alone diesel power stations, which are either publicly or privately owned. Both types of supply systems have associated technical and economic problems, which often make it difficult to provide electricity to rural areas. So, in most cases, utility companies consider rural electrification schemes uneconomical, because it is difficult to design and operate cost-effective systems. In many countries, rural electrification schemes are initially subsidised to stimulate demand, which is usually low due to factors such as poverty and lack of awareness among the potential consumers. Renewable energy (RE) sources offer an attractive alternative to the traditional rural supply sources because they are available in abundance in rural areas, their implementation utilises local human and material resources, and they are environmentally friendly. RE-based supplies can also be configured to meet the demands of scattered, remote and isolated loads, giving consumers complete ownership and control over the supplies. Widespread practical application of renewable energy sources will depend on their cost-effectiveness compared to grid extensions and stand-alone sources. This paper is based on the findings of a study conducted in the Konkan area of Maharashtra to assess the viability of mini-hydro sources to supply rural


loads. The majority of the consumers are domestic and small commercial premises, with individual maximum demands of less than 5 kW. The average load factor is about 30%. Commercial consumers included small shops, craftsmen and artisans, who were mostly located at community centres. Commercial activities tended to take place mainly during the middle part of the day, and this would help in improving the daily load factor.

3. CHARACTERISTICS OF SMALL HYDRO SYSTEM AS RENEWABLE ENERGY SOURCE
Small hydro power plants are hydraulic works which use water to generate electricity. They are further classified into the following categories on the basis of power output: micro hydro (up to 100 kW), mini hydro (100-500 kW) and small hydro (500 kW-2500 kW). Small-scale hydro schemes normally require low investments, have short construction periods and low operating costs. Maintenance costs are usually about 1.5% of the capital costs [5]. Their technology is relatively simple, reliable and well proven for application in developing countries. Local materials can be used in the construction and manufacture of generation equipment, and local people can easily be trained to operate and maintain the equipment. The range of power outputs for the different schemes is quite sufficient to meet the energy requirements of domestic, commercial and industrial consumers in a rural community.

3.1 SOME SIGNIFICANT CHARACTERISTICS OF HYDROPOWER
The most important characteristics of hydropower can be summarised as follows:
- Its resources are widely spread around the world. Potential exists in about 150 countries, and approximately two-thirds of the economically feasible potential remains to be developed. This is mostly in developing countries, where the capacity is most urgently required.
- It is a proven and well-advanced technology, with more than a century of experience. Modern power plants provide extremely efficient energy conversion.
- It plays a major role in reducing greenhouse gas emissions in terms of avoided generation by fossil fuels. Hydro is a relatively small source of atmospheric emissions compared with fossil-fired generating options.
- The production of peak load energy from hydropower allows the best use to be made of base load power from other less flexible electricity sources. Its fast response time can add substantially to the reliability and quality of the electrical system.
- It has the lowest operating costs and longest plant life compared with other large-scale generating options. Once the initial investment has been made in the necessary civil works, the plant life can be extended economically by relatively cheap maintenance and periodic replacement of the electromechanical equipment.
- As hydro plants are often integrated within multipurpose developments, the projects can help to meet other fundamental human needs (for example, irrigation for food supply, domestic and industrial water supply, and flood protection). The reservoir water may also be used for other functions such as fisheries, discharge regulation downstream for navigation improvements, and recreation. Hydropower plants can help to finance these multipurpose benefits, as well as some environmental improvements in the area, such as the creation of wildlife habitats.
- The fuel (water) is renewable, and is not subject to fluctuations in market conditions. Hydro can also represent energy independence for many countries [3, 4].

4. COMPARISON OF MICRO-HYDRO POWER PROJECT WITH OTHER POPULAR OPTIONS OF ITS CATEGORY
A comparison of the hydro power system with other RE sources shows its simplicity and ease of implementation; some significant points with regard to PV and wind generation systems are worth noting, as specified below.
A. Photovoltaic Cells
They need solar cells, inverters and special control devices to convert solar power into useful energy. Being weather dependent, the actual energy generated per year is only 10-20% of the maximum capacity [1].
B. Wind Mills
They are location specific, and a wind electric generator is capable of producing only 21-30% of the total amount of available energy; it needs specialised equipment suited to the given location [1].
5. A CASE STUDY FOR VIABILITY ASSESSMENT OF A MICRO HYDRO PROJECT
In this section a case study of a small hydropower project, proposed on a small reservoir dam at the village named Panhalghar, 8 km from NH-17 at Mangaon in the state of Maharashtra, has been carried out for its


viability studies. The study includes the rainfall in the region, the availability of water, the design of the penstock, and consideration of the suitable equipment for the possible load profile to be met in the area. The rainfall and other specifications of the reservoir are presented in Annexure 1. Figure 1 shows the various components that constitute a typical micro-hydro scheme.

Figure 1: Typical micro-hydro installation with its components.

6. METHODOLOGY
The methodology of devising a micro-hydro generation scheme comprises consideration of the following:

A. Water Availability [Annexure 1]
Total yield in the dam under case study = 8.22 Mm3
Evaporation losses = 0.39 Mm3
Dead storage = 0.08 Mm3
Percolation loss = 2.75 Mm3
The net yield = 8.22 - 0.39 - 0.08 - 2.75 = 5 Mm3
A survey has been carried out in the nearby village named Ambarle, having about 150 houses with a population of around 250, including men, women and children. The power requirement of the village is 66 kW. Thus it is proposed to determine the parameters and water utility for a power station of 66 kW or more that can fulfil the demand during the required hours. The stepwise design methodology to find proper estimates of the parameters for the proposed location is given below.

B. Design of Penstock [8]
Length of penstock = 2000 m
Gross head available = 19 m
Clearance for dead storage = 1 m
Height of nozzle from ground = 2 m
Spillway clearance = 2 m
The net head available = 19 - 1 - 1 - 2 = 15 m
The discharge required to get an output of 66 kW is
Q = P/(8H) = 66/(8 × 15) = 0.55 m3/s
where P is the capacity of the plant in kW and H is the net head in m. For the required power output a Kaplan turbine will be suitable. Therefore the velocity of flow of water through the penstock can be calculated using the relation [7]
Q = (π/4)(D0² - Db²) V
where Q = discharge through penstock in m3/s, D0 = diameter of runner = 1 m, Db = diameter of hub = 0.4 m, and V = velocity of flow in m/s. Therefore V = 0.8336 m/s.

The diameter of the penstock can be calculated using the relation Q = AV, where A = cross-sectional area of the penstock = πD²/4. Therefore the diameter D of the penstock = 916 mm.
Head loss in penstock = (4 f L V²)/(2gD), where f = friction coefficient of the penstock material = 0.005 (for a new and plane surface), L = length of penstock in m, and g = acceleration due to gravity = 9.81 m/s². Thus the head loss in the penstock = 1.54 m.
Head loss due to a bend = f V²/(2g) = 0.005 × 0.8336²/(2 × 9.81) = 0.177 mm. Bends are provided at three positions on the penstock, so the loss of head due to the three bends = 0.177 × 3 = 0.5313 mm.
Therefore the net head available = 15 - 1.54 - 5.313 × 10⁻⁴ = 13.46 m.
Since the net head available is less than that considered in the discharge calculation, the discharge must be increased to keep the output power constant. The increased value of discharge is
Q = 66/(8 × 13.46) = 0.6129 m3/s
Since the discharge has changed, the velocity of flow will change. To maintain the velocity constant, the


diameter of either runner or that of hub must be changed. We choose to change the diameter of runner. The new diameter of the runner will be D0 = 1046 mm
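For reference, the penstock and runner sizing steps above can be reproduced with a short script. This is an illustrative sketch of the same arithmetic (rated power, the rule-of-thumb P = 8QH used in the text, friction and bend losses); the numbers are the paper's own inputs, not new design data.

```python
# Sketch of the penstock/runner sizing arithmetic described above.
# Uses the rule-of-thumb P(kW) = 8*Q*H quoted in the text; values are
# the paper's own inputs, reproduced only to show the calculation flow.
import math

P_kW, H_gross = 66.0, 15.0          # required output and initial net head (m)
D0, Db = 1.0, 0.4                   # runner and hub diameters (m)
L, f, g = 2000.0, 0.005, 9.81       # penstock length, friction factor, gravity

Q = P_kW / (8.0 * H_gross)                          # discharge, ~0.55 m3/s
V = Q / (math.pi / 4.0 * (D0**2 - Db**2))           # flow velocity, ~0.83 m/s
D = math.sqrt(4.0 * Q / (math.pi * V))              # penstock diameter, ~0.916 m
h_friction = 4.0 * f * L * V**2 / (2.0 * g * D)     # ~1.54 m
h_bends = 3.0 * f * V**2 / (2.0 * g)                # three bends, ~0.5 mm total
H_net = H_gross - h_friction - h_bends              # ~13.46 m
Q_new = P_kW / (8.0 * H_net)                        # corrected discharge, ~0.613 m3/s
D0_new = math.sqrt(4.0 * Q_new / (math.pi * V) + Db**2)  # new runner diameter, ~1.046 m
hours_per_day = 5e6 / (240 * 3600 * Q_new)          # ~9.4 h from the 5 Mm3 net yield
print(D, H_net, Q_new, D0_new, hours_per_day)
```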

The speed of the turbine n = V/(2πr) = 20 rpm, where r = radius of the blade circle = 0.4 m; N = synchronous speed of the alternator, assumed to be 500 rpm.

D. Design of Alternator [5, 6]

With the discharge of 0.6129 m3/s, the number of hours per day for which the power plant can be run for 240 days (the rainy season is excluded, since the dam will be full of water in those days) is
Number of hours per day = (5 × 10⁶)/(240 × 3600 × 0.6129) = 9 hours 26 minutes

C. Design of Turbine [7]
The selection criteria for turbines depend upon the net head available along with other parameters such as the discharge through the turbine, rotational speed, cavitation problems and cost. Turbines for the available head ranges are given in Table 2.
Table 2. Turbines for range of head
Head range in meters    Type of turbine
300 or more             Pelton
150 to 300              Pelton or Francis
60 to 150               Francis or Deriaz
Less than 60            Kaplan or Propeller
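Where such a selection needs to be automated in a sizing script, the rule in Table 2 can be encoded directly; the snippet below is an illustrative sketch only.

```python
# Illustrative head-based turbine selection following Table 2.
def turbine_for_head(head_m: float) -> str:
    if head_m >= 300:
        return "Pelton"
    if head_m >= 150:
        return "Pelton or Francis"
    if head_m >= 60:
        return "Francis or Deriaz"
    return "Kaplan or Propeller"

print(turbine_for_head(13.46))   # -> "Kaplan or Propeller", as chosen in the text
```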

Three-phase alternating current generators are used in normal practice. Depending on the characteristics of the network supplied, the producer can choose between a synchronous generator and an asynchronous generator. Synchronous generators are equipped with a DC excitation system associated with a voltage regulator, to provide voltage, frequency and phase angle control before the generator is connected to the grid, and to supply the reactive energy required by the power system when the generator is tied into the grid. A synchronous generator can run isolated from the grid and produce power, since its excitation is not grid dependent. In an asynchronous generator, by contrast, there is no possibility of voltage regulation and the generator runs at a speed directly related to the system frequency. Asynchronous generators draw their excitation current from the grid, absorbing reactive energy for their own magnetisation; they cannot generate when disconnected from the grid because they are incapable of providing their own excitation current. The output power of a three-phase alternator is given by
P = 3 V I cosφ (watts)
where V, I and cosφ are the voltage, the current in amperes and the power factor respectively. The generator voltage depends upon the electrical design and on factors such as the distance to transformers, the rating of the generator, etc. The permissible peripheral speed of the rotor determines the diameter of the rotor. Walker has suggested the following relation between the peripheral speed Uperi and the number of poles Pn [5]:
Uperi = k1/Pn + k2
where the upper limits of k1 and k2 are 180 and 50 respectively. The number of poles Pn is assumed to be 12. Therefore Uperi = 102 rpm.

The turbine suitable for the 13.46 m head is the Kaplan turbine. Figure 2 shows the schematic arrangement of the Kaplan turbine.

Figure 2: Kaplan turbine

Dimensions of the turbine are as follows:
a) Diameter of runner = 1046 mm
b) Diameter of hub = 400 mm
Specific speed of the turbine: ns = (N √P)/H^(5/4) = 39.4 rpm, where N is the speed of the turbine in rpm.

The rotor diameter Dr is given by:
Dr = (60 Uperi)/(π N) = 3.9 m
The outside diameter of the stator is approximately:
Ds = Dr + 1.2 m = 5.1 m
The generator pit diameter Dp, considering clearances and the space for the stator frame, is approximately:
Dp = Dr + 4.25 m = 8.15 m
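The alternator dimensioning above is a chain of short empirical relations; the sketch below reproduces it for reference. Uperi and N are the values quoted in the text, and the result is illustrative only.

```python
# Sketch of the empirical alternator-dimension relations quoted above.
# Uperi and N are the values used in the text; treat as an illustration only.
import math

Uperi = 102.0   # permissible peripheral speed, as quoted
N = 500.0       # synchronous speed, rpm

Dr = 60.0 * Uperi / (math.pi * N)   # rotor diameter, ~3.9 m
Ds = Dr + 1.2                       # stator outside diameter, ~5.1 m
Dp = Dr + 4.25                      # generator pit diameter, ~8.15 m
print(round(Dr, 2), round(Ds, 2), round(Dp, 2))
```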


The height of the generator depends upon the core length. The total height H may be taken as H = Hc + 2.3 m, where Hc is the core length, a function of the rotor diameter and the number of poles, given by the empirical equation Hc = K3 (Dr/Pn), with K3 lying between 5.5 and 12.5. The weight of the alternator is obtained from the empirical formula W = K (kVA/n) - 85, where kVA = kVA rating of the generator = 73.33 kVA and K is a constant.

E. Design of Transformer
Power generation is proposed to be 66 kW at 415 V, 50 Hz; transmission is proposed at 415 V/11000 V and distribution at 11000 V/415 V with a four-wire system. The kVA rating of the generator is 73.33 kVA (66 kW at 0.9 p.f.). For this, a 3-phase 30 kVA transformer would serve the purpose. If we assume the energy loss in the winding to be 3% of the total energy output and the energy loss in the core of the transformer to be 2% of the total energy output, the all-day efficiency of the transformer comes out to be 95%.

F. Switchgear and Protection Equipment
Switchgear is required to control the generator and to interface it with the grid or with an isolated load. It must provide protection for the generator, the main transformer and the station service transformer. The generator breaker, either air, magnetic or vacuum operated, is used to connect or disconnect the generator from the power grid. Instrument transformers, both potential transformers and current transformers, are used to transform high voltages and currents down to more manageable levels for metering. The generator control equipment is used to control the generator voltage, power factor and circuit breakers. The asynchronous generator protection must include, among other devices, a reverse-power relay giving protection against motoring, differential current relays against internal faults in the generator stator winding, and a ground fault relay providing system backup as well as generator ground fault protection. The power transformer protection includes an

instantaneous over-current relay and a timed over-current relay to protect the main transformer when a fault is detected in the bus system or an internal fault occurs in the main power transformer.

G. Drive System
Micro-hydro schemes usually prefer a pair of pulleys with industrial V-belts as the drive system. In this case the gearing ratio is set by the ratio of the diameter of the larger pulley to that of the smaller pulley.

H. Crane
Considering the size and weight of parts such as the turbine runner and the alternator, a crane of suitable size and capacity must be selected for erection, maintenance and repair purposes.

I. Power Station
A suitable power station has to be established to accommodate all the equipment, such as the turbine, alternators, switchgear and crane. The dimensions of the power station can be determined from the dimensions of the various equipment along with the necessary clearances.

J. Commissioning and Testing
The final stage in the installation process of a micro-hydro project involves performance tests. These tests are carried out to check the function of the different components of the scheme and to measure overall system performance against the figures arrived at during the design stage. Regular maintenance enhances the performance of a micro-hydro scheme.

7. CONCLUSION
Based on the methodology of viability assessment of small hydro plants, it can be seen that with proper design considerations, along with suitable equipment, site and other parameters, the installed plant is likely to be a success. The focus of this paper, which takes into account the important parameters to be considered while establishing a micro-hydro scheme, is therefore relevant. The following conclusions can be drawn considering the Indian scenario:
1) Small hydro power plants may help to reduce transmission and distribution losses by supplementing the existing transmission system. They would also be useful at times when load shedding takes place in the area.
2) In the case of an isolated rural area with small loads, such a plant can serve as a stand-alone system providing power at the time of need.
3) In remote and inaccessible areas, where getting power from a grid system is a difficult task,


the micro-hydro scheme working as a stand-alone system will be helpful in providing power at the time of need.
4) A huge network of irrigation canals exists in India and hence there is scope for thousands of micro-hydro schemes.
5) Micro-hydro schemes can be integrated into the project in which India is planning to interlink its major rivers.
The dream of India becoming a developed nation by 2020 cannot be achieved without a satisfactory contribution from the power sector. Thus micro-hydro schemes are going to be among the most influential schemes that can play a significant role in the development of India.

Annexure 1:
River: Panhalghar
Catchment area: 2.74 sq. km
Average annual rainfall: 3699 mm
Total yield in the dam: 8.22 Mm3
Total losses: 0.47 Mm3
A) Evaporation: 0.39 Mm3
B) Dead storage: 0.08 Mm3
C) Percolation loss: 2.75 Mm3

8. ACKNOWLEDGEMENT
The author would like to acknowledge the cooperation extended by various departments of irrigation engineering at Kolad and Mahad of Raigad district in carrying out this project work.

REFERENCES
[1] C. Dragu, T. Sels and R. Belmans, "Small hydro power - state of the art and applications." Available: http://www.kuleuven.ac.be/ei/Public/Publications
[2] P. K. Katti, "Fostering the use of low impact renewable energy technologies with integrated operation is the key for sustainable energy system," PowerCon 2008, available on IEEE Xplore.
[3] "Small scale hydro in India," an OPET (Organisation for the Promotion of Energy Technologies) briefing note. Available: www.teriin.org/opt/articles/art3.htm
[4] N. M. Ijumba and C. W. Wekesah, "Application potential of solar and mini hydro energy sources in rural electrification," 0-7803-3019-6/96, 1996 IEEE, pp. 720-723.
[5] H. Ramos and A. Betâmio de Almeida, "Small hydro as one of the oldest renewable energy sources," Technical University of Lisbon.
[6] M. M. Dandekar and K. N. Sharma, Water Power Engineering.
[7] Layman's Handbook on How to Develop a Small Hydro Site.
[8] P. N. Modi and S. N. Seth, Hydraulic Machines.
[9] Practical report on the Kumbhe Hydro-Electric Project.
[10] Rainfall data, Raigad Irrigation Division, Kolad.



Data Aggregation Based Cooperative MIMO Diversity in Wireless Sensor Networks


Vibhav Kumar Sachan
Electronics & Comm. Engg. Deptt. Krishna Institute of Engineering & Technology Ghaziabad, U.P., INDIA Email: vibhavsachan@gmail.com

Dr. Syed A. Imam


Electronics & Comm. Engg. Deptt. Jamia Millia Islamia New Delhi, INDIA Email: imam_jmi@yahoo.co.in

Abstract— To prolong the lifetime of a wireless sensor network, an energy-efficient transmission method is required so that energy consumption is minimized while satisfying given throughput and delay requirements. In this context, we propose an energy model for wireless sensor networks based on cooperative multiple-input multiple-output (MIMO) communication, taking into consideration both the transmission energy and the data aggregation energy. In this paper, two cooperative wireless sensor network schemes, namely the MIMO approach and the SISO approach, are compared. We also show that over some distance ranges the cooperative MIMO transmission approach outperforms the SISO approach. Simulation results show that jointly considering cooperative MIMO diversity and data aggregation further reduces the total energy consumption.

Keywords— wireless sensor network (WSN), cooperative multiple-input multiple-output, energy efficiency, Alamouti diversity schemes, data aggregation.

I. INTRODUCTION
Recent hardware advances in wireless communications and digital electronics have enabled the development of low power, low cost, multifunctional sensor nodes that are small in size and communicate over short distances. Sensor nodes, consisting of sensing, data processing and communicating components, leverage the idea of sensor networks. A sensor network is composed of a large number of sensor nodes that are densely deployed either inside the phenomenon or very close to it. Sensor nodes are fitted with an onboard processor. Instead of sending raw data to the nodes responsible for the fusion, they use their processing abilities to locally carry out simple computations and transmit only the required and partially processed data. A wireless sensor network has its own design and resource constraints. Resource constraints include a limited amount of energy, short communication range, low bandwidth, and limited processing and storage in each node. Design constraints are application dependent and are based on the monitored environment. Motivated by information-theoretic predictions on the large spectral efficiency of multiple-input multiple-output (MIMO) systems in fading channels [1]-[2], it has been shown that MIMO systems can support higher data rates under the same transmit power budget and bit error rate performance as a single-input single-output (SISO) system [6]-[7]. For the same throughput requirements, MIMO systems require less transmission energy than SISO systems [5]. However, direct application of MIMO techniques to sensor networks is impractical due to the limited physical size of a sensor node, which typically can support only a single antenna. Therefore, by allowing individual single-antenna nodes to cooperate on information transmission and reception, a cooperative MIMO system can be constructed such that energy-efficient MIMO schemes can be developed [3]-[4]. The rest of the paper is organized as follows. By integrating data aggregation energy into a cooperative MIMO communication system, a detailed energy analysis of a point-to-point communication system is presented in Section II. In Section III, energy efficiency is compared over different transmission distances, with the assumption that Alamouti diversity codes are used for the MIMO systems. Finally, Section IV summarizes our conclusions.

II. ENERGY EFFICIENCY OF COOPERATIVE MIMO SYSTEMS WITH DATA AGGREGATION
In a typical energy-limited wireless sensor network, data collected by sensors are transmitted to a remote node (sink) through multi-hop relaying. Given that in a WSN nearby sensors generate highly correlated data, the raw sensor data can be aggregated in-network to reduce the transmission energy [5]. Therefore, we consider the data aggregation energy in our model, which, together with the transmission energy, composes the overall energy. We consider a communication link connecting two wireless sensor nodes, which can be MIMO or SISO. The energy dissipation of data aggregation depends on the algorithm complexity. According to the high-level software energy macro-model given by [8], if the data aggregation algorithm complexity is O(n2), where n is an algorithm-related parameter, the energy dissipation is estimated as

EDF = C0 + C1 L + C2 L2     (1)

where L is the number of transmission bits and C0, C1 and C2 are coefficients depending on the software and CPU parameters. If the data aggregation complexity is O(n), the energy dissipation can be formulated as EDF(L) = C0 + C1 L. Hence the energy consumption per bit for data aggregation is C0/L + C1; when L is large, C0/L is negligible and thus the energy consumption per bit of an algorithm with complexity O(n) is approximately constant. Finally, the total energy is given by summing the total energy consumption along the entire signal path and the total energy dissipation of data aggregation. In this paper, the energy consumption of cooperative MIMO transmission and data aggregation is jointly considered. As illustrated in Fig. 1, we assume there are N sensor nodes in a data aggregation cluster.


Each of the N nodes has Ni bits to transmit, where i = 1, 2, 3, ..., N. Data collected by multiple local sensors will first be transmitted to a cluster head node for aggregation. Then the aggregated data will be transmitted to a remote destination node under a cooperative MIMO approach or a non-cooperative SISO approach.

Figure 1 Cooperative MIMO Communication with Data Aggregation Scheme in a Sensor Network

For the MIMO approach, four parts of energy consumption are considered, according to the four stages of the scheme. In the first stage, raw sensor data are transmitted to the cluster head for aggregation. In the second stage, the aggregated data are transmitted by the cluster head to Mt-1 nodes, which compose the distributed antenna array. After each node receives all the bits, these Mt nodes (including the cluster head) encode the transmission sequence according to the Alamouti diversity scheme [6]. Then, in the third stage, each node transmits the sequence according to its preassigned index i. Finally, in the fourth stage, the destination node and its Mr-1 nearby assisting nodes jointly receive the signal, quantize each symbol they receive into nr bits, and then transmit these by uncoded MQAM to the destination node for joint detection. According to the four-stage communication method described above, the total energy can be written as

E_DF-MIMO = Σ(i=1..N-1) Ni Ei^t + Ebf Σ(i=1..N) Ni + Σ(j=1..Mt-1) Ej^to Σ(i=1..N) βi Ni + Eb^r Σ(i=1..N) βi Ni + Σ(h=1..Mr-1) Eh^r nr Ns     (2)

where, for i = 1, 2, ..., N-1, Ei^t denotes the local transmission energy cost per bit for aggregation on the transmission side and Ebf denotes the energy cost per bit for data aggregation. For j = 1, 2, ..., Mt-1, Ej^to denotes the local transmission energy cost per bit for cooperative communication, βi is the percentage of remaining data after aggregation, which reflects the correlation between the data of different sensors, and Eb^r denotes the energy cost per bit for long-haul MIMO transmission. For h = 1, 2, ..., Mr-1, Eh^r denotes the local transmission energy cost per bit for joint detection on the receiving side, and Ns is the total number of symbols received by each node at the receiving side, with bm as the constellation size used in the space-time code.

For the SISO approach, the cluster head transmits all the aggregated data directly to the destination node without any cooperation, so the total energy consumption can be written as

E_DF-SISO = Σ(i=1..N-1) Ni Ei^t + Ebf Σ(i=1..N) Ni + Eo Σ(i=1..N) βi Ni     (3)

where Eo denotes the long-haul SISO transmission energy cost per bit.

Communication scheme A with data aggregation is illustrated in Fig. 2 in order to analyse the energy consumption of cooperative communications. There are two nodes on the transmission side in the 2x1 MISO system. During the transmission, one of the two nodes (say n1) will first transmit its data to the other node (say n2) for data aggregation to eliminate redundancy. Because both nodes should have the aggregated data in order to perform the cooperative transmission, while node n1 does not have the data of node n2, node n2 needs to transmit some data back to node n1 so that node n1 has the same aggregated data as node n2. However, in most cases, as shown in scheme A, node n2 need not transmit all its data to n1: after the data aggregation, node n2 already knows which data are redundant and thus does not transmit the overlapped data, saving energy. In a real-time system, where the data aggregation algorithm is complex and time consuming, node n2 will transmit all its data to node n1, as shown in Fig. 3 of communication scheme B, to meet the critical delay requirements.

Figure 2 Communication Scheme A with Data Aggregation Using Cooperative MIMO Technique
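To make the bookkeeping of equations (1)-(5) concrete, the sketch below evaluates the per-bit aggregation energy and the scheme A, scheme B and SISO totals for a two-node cluster. All per-bit energy values and the aggregation coefficients are placeholder assumptions chosen only to illustrate the structure of the reconstructed equations; they are not parameters from the paper.

```python
# Illustrative evaluation of the aggregation-energy model (1) and the
# two-node totals (4), (5) and the SISO form (3), as reconstructed above.
# Every numeric constant here is a placeholder assumption, not a paper value.

C0, C1 = 1e-6, 5e-9          # assumed aggregation-energy coefficients, J and J/bit
N1 = N2 = 20_000             # bits per node (Ni = 20 Kb as in Section III)
beta = 0.75                  # fraction of data remaining after aggregation

e_t, e_to, e_br, e_o = 1e-7, 1e-7, 8e-7, 1.2e-6   # assumed per-bit energies, J/bit

def eda_per_bit(L):
    # O(n) aggregation: EDF(L)/L = C0/L + C1, which tends to C1 for large L
    return C0 / L + C1

total = N1 + N2
Ebf = eda_per_bit(total)
agg = beta * total                                  # bits left after aggregation

E_A = N1*e_t + Ebf*total + e_to*(N2 - (1 - beta)*total) + e_br*agg   # scheme A, eq. (4)
E_B = N1*e_t + Ebf*total + e_to*N2 + e_br*agg                        # scheme B, eq. (5)
E_SISO = N1*e_t + Ebf*total + e_o*agg                                # eq. (3), two nodes
print(E_A, E_B, E_SISO)
```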


TABLE I: SYSTEM PARAMETERS
fc = 2.5 GHz        GtGr = 5 dBi         B = 10 kHz
η = 0.35            σ2 = -174 dBm/Hz     β = 1
Psyn = 50 mW        Pmix = 30.3 mW       Pfilt = 2.5 mW
PLNA = 20 mW        Pb = 10^-3           Nf = 10 dB
ML = 40 dB          Ts = 1/B

Figure 3 Communication Scheme B with Data Aggregation Using Cooperative MIMO Technique

The total energy consumption in scheme A and scheme B is given by

E_A = N1 E1^t + Ebf Σ(i=1..2) Ni + E2^to [N2 - (1 - β)(N1 + N2)] + Eb^r β Σ(i=1..2) Ni     (4)

E_B = N1 E1^t + Ebf Σ(i=1..2) Ni + E2^to N2 + Eb^r β Σ(i=1..2) Ni     (5)

where (1 - β) is the percentage of redundant data in the total raw data.

III. PERFORMANCE EVALUATION AND SIMULATION EXPERIMENTS

A. The Simulation Environment
We assume a flat Rayleigh fading channel, i.e., the channel gain between each transmitter antenna and each receiver antenna is a scalar. Therefore, the fading factors of the MISO channel can be represented as a scalar matrix. In addition, the path loss is modeled as a power falloff proportional to the distance squared. In other words, on top of the square-law path loss, the signal is further attenuated by a scalar fading matrix H, in which each entry is a zero-mean circularly symmetric complex Gaussian (ZMCSCG) random variable with unit variance [1]. The fading is assumed constant during the transmission of each Alamouti codeword. For the simulation experiments, we assume that dm = 1 meter, β = 75%, B = 10 kHz, nr = 10, and all the transmitting nodes have the same number of bits to transmit, i.e. Ni = 20 Kb. The related circuit and system parameters are defined in Table I.

B. Simulation Results

Figure 4 Total Energy Consumption (J) over Transmission Distances (m) for the different approaches: MIMO + data aggregation (scheme A), MIMO + data aggregation (scheme B), and SISO + data aggregation

In Fig. 4, we compare the total energy consumption against transmission distance for MIMO with data aggregation (scheme A), MIMO with data aggregation (scheme B), and SISO with data aggregation. The results show that SISO performs better than MIMO with data aggregation for small distances; at larger distances, greater than 18 meters, MIMO with data aggregation in communication scheme A has lower energy consumption than SISO with data aggregation.


IV. CONCLUSION
In this paper, we integrated the energy model for data aggregation into cooperative MIMO schemes to further optimize the total energy consumption in wireless sensor networks. Through computer simulation, it is shown that the proposed system is more energy efficient than a non-cooperative approach such as SISO for long-haul transmission. By applying the energy consumption model of systems with data aggregation under cooperative MIMO communication techniques, we have developed a total energy consumption model and analyzed the energy efficiency of point-to-point communication systems. Simulation results show that the proposed cooperative MIMO with data aggregation communication scheme A can offer substantial energy savings in wireless sensor networks, provided that the system is designed judiciously, e.g. with careful consideration of the transmission distance.

ACKNOWLEDGMENT
We express our sincere thanks to REAR ADMIRAL Dr. AJAY SHARMA, Director General, K.I.E.T, Ghaziabad for his useful guidance in carrying out this work

REFERENCES
[1] A. J. Paulraj, D. A. Gore and R. U. Nabar, Introduction to Space-Time Wireless Communications, Cambridge University Press, Cambridge, UK, 2003.
[2] C. Schurgers, O. Aberthorne and M. B. Srivastava, "Modulation scaling for energy aware communication systems," International Symposium on Low Power Electronics and Design, pp. 96-99, 2001.
[3] R. Min and A. Chandrakasan, "A framework for energy-scalable communication in high-density wireless networks," International Symposium on Low Power Electronics and Design, pp. 123-129, 2002.
[4] S. Cui, A. J. Goldsmith and A. Bahai, "Modulation optimization under energy constraints," Proceedings of ICC'03, Alaska, USA, pp. 234-239, May 2003.
[5] S. Cui, A. J. Goldsmith and A. Bahai, "Energy-constrained modulation optimization for coded systems," Proceedings of Globecom'03, San Francisco, USA, pp. 137-142, December 2003.
[6] S. M. Alamouti, "A simple transmit diversity technique for wireless communications," IEEE J. Select. Areas Commun., vol. 16, pp. 1451-1458, 1998.
[7] A. J. Paulraj, D. A. Gore and R. U. Nabar, "An overview of MIMO communications - a key to gigabit wireless," Proc. IEEE, vol. 92, no. 2, pp. 198-218, Feb. 2004.
[8] T. K. Tan, A. Raghunathan, G. Lakshminarayana and N. K. Jha, "High-level software energy macro-modeling," in Proc. Design Automation Conf., pp. 605-610, 2005.



Design of Novel 3D code set based on Algebraic Congruency for OCDMA Communication Networks for Data Transfer
Shilpa Jindal Dept of Electronics Electrical & Comm Engg, Punjab Engineering College, Deemed university Sector 12 Chandigarh, India.
Ji_shilpa@ yahoo.co.in
Neena Gupta, MIEEE
Dept of Electronics Electrical & Comm Engg, Punjab Engineering College, Deemed University, Sector 12, Chandigarh, India

Abstract: In this paper, the Quadratic Algebraic Congruent Operator from algebraic coding theory is used to generate 3-D codes with an at-most one-pulse-per-wavelength (AM-OPPW) restriction for optical CDMA systems, in order to increase the number of potential users. The codes have been generated through an algorithm implemented in Matlab.

Keywords— OCDMA: Optical Code Division Multiple Access; 2D: 2-Dimensional; 3D: 3-Dimensional; OOC: optical orthogonal codes; AM-OPPW: At Most One Pulse Per Wavelength; p: prime number; P: polarization state.

1 INTRODUCTION
Recently there has been an upsurge of interest in applying code division multiple access (CDMA) techniques to optical networks. As in conventional CDMA, each of the users in an optical CDMA system is assigned a unique spreading sequence that enables the user to distinguish his signal from that of other users. These spreading codes are termed optical orthogonal codes (OOC). Traditionally, the spreading has been carried out in time, or in time and wavelength due to advancements in WDM technologies, and these classes are referred to as one-dimensional OOCs (1-D OOCs) and two-dimensional OOCs (2-D OOCs). The drawback of these codes is overcome by 3-D codes, which support a larger number of users at a lower chip rate. Further, Optical Code Division Multiple Access (OCDMA) is a highly flexible technique to achieve high-speed connectivity with large bandwidth. According to dimensions, OOC codes are classified as:
1) 1D Optical Orthogonal Codes.
2) 2D Optical Orthogonal Codes [6, 7].
3) 3D Optical Orthogonal Codes [6, 7].

PRELIMINARIES
In fiber-optic CDMA LANs each information bit from a user m is encoded into a signal um(t) that corresponds to a code sequence of N chips representing the address of that user. In order for the receiver of the CDMA system to be able to correctly distinguish each of the possible addresses, the following conditions must be satisfied [1]:
1) The peak of the autocorrelation function ∫ u(t) u(t - T) dt of u(t) should be maximized.
2) The side lobes of the autocorrelation function of u(t) should be minimized.
3) The cross-correlation function of any two signals u(t) and p(t) in the system should be minimized.

II CODE CONSTRUCTION METHODOLOGIES
Optical orthogonal codes can be constructed with tools from [2]:
1. Projective geometry
2. Greedy algorithm
3. Iterative constructions
4. Block design
5. Various combinatorial disciplines
6. Algebraic coding theory
Calculation of the codes based on iterative methods, intersection properties of lines in projective space and combinatorial techniques is complex. In algebraic coding theory there are various algebraic congruent operators within a Galois field, based on which the codes are classified as:
a) Prime Sequence (PS) codes
b) Congruence (CC) codes: Linear, Hyperbolic and Quadratic
c) Extended Quadratic Congruence (EQC) codes
d) Costas array (CA) codes
Codes constructed through prime sequences have good cross-correlation but poor autocorrelation properties, which renders prime sequences unusable for a fully asynchronous system. Congruence codes can be generated from algebraic congruent operators within a Galois field, where the operator is denoted yi(k) and is defined within the Galois field GF(p) = {0, 1, 2, 3, ..., p-1}; here i is the index of the sequence in the family and k indicates the position of the time slot in the sequence. Algebraic coding theory analyses three properties of the codes: code word length, total number of valid code words, and minimum distance between two valid codes. The congruent operators are classified as:
1) Linear Congruent Operator: yi(k) = ik mod (p)
2) Hyperbolic Congruent Operator:


yi(k) = (i/k + m) mod (p)
3) Quadratic Congruent Operator: yi(k) = i k(k+1)/2 mod (p)
In this paper, we have used the quadratic congruent operator yi(k) = i k(k+1)/2 mod (p) to construct 3D codes. First we generated the 2D quadratic code sequences and then, based on mathematical expressions, these codes were converted to 3D. For every prime p there are p-1 different sequences, i.e. a maximum of p-1 users of the OCDMA system; for p = 11 the number of code words generated in 2D is p-1 = 10. Table 1 shows three such code sequences out of the 10.

Table 1: Three Code Sequences of 2D Codes Based on the Quadratic Congruent Operator with AM-OPPW Restriction
User 1:
10000000000
00010000000
00000000001
00000000001
00010000000
10000000000
01000000000
00000010000
00001000000
00000010000
01000000000
User 2:
10000000000
00000010000
00000000010
00000000010
00000010000
10000000000
00100000000
01000000000
00000000100
01000000000
00100000000
User 3:
10000000000
00000000010
00000000100
00000000100
00000000010
10000000000
00010000000
00000001000
01000000000
00000001000
00010000000

Practical considerations place restrictions on the placement of pulses within an array [5]:
1) Arrays with a one-pulse-per-wavelength (OPPW) restriction: each row of every (Λ x T) code array in C must have Hamming weight = 1.
2) Arrays with an at-most one-pulse-per-wavelength (AM-OPPW) restriction: each row of each (Λ x T) code in C must have Hamming weight <= 1.
3) Arrays with a one-pulse-per-time-slot (OPPTS) restriction: each column of every (Λ x T) code array in C must have Hamming weight = 1.
4) Arrays with an at-most one-pulse-per-time-slot (AM-OPPTS) restriction: each column of each (Λ x T) code in C must have Hamming weight <= 1.

Three-Dimensional Optical Orthogonal Codes: The advent of polarization maintaining fibers, Wavelength Division Multiplexing (WDM) and Dense WDM technology has made it possible to spread in the wavelength, time and space domains. The corresponding codes are variously called wavelength, time, and polarization hopping spreading codes. Here we will refer to these codes as three-dimensional OOCs (3-D OOCs) [3]. We take advantage of the fact that light can be transmitted on two orthogonal polarization states and use polarization as the third degree of freedom while generating the OCDMA user spreading sequences. These codes use the two polarization states along with chip times and discrete wavelengths. A 3-D (Λ x T x P) OOC C is a family of {0,1} Λ x T x P arrays of constant weight w, where Λ is the wavelength parameter, T is the time parameter and P is the polarization parameter, which can take one of the following values: P = 1 if the polarization is perpendicular or P = 0 if the polarization is parallel [3]. It can be shown that it is possible to construct a 3-D (Λ x T x P = 11 x 11 x 2, w = 6) OOC for 10 users with the prime number 11. An AM-OPPW array of codes is constructed. Fig. 1 shows the 3-D code set structure, and Table 2 shows the comparison of the number of users and code length for the 2D and 3D systems with increasing prime number p.

Table 2: Comparison of Number of Users and Code Length for 2D and 3D Systems with Increasing Prime Number p
S.No   Prime No. p   Users (2D)   Users (3D)   Code length 2D/3D
1      3             2            2            9/18
2      5             4            12           25/50
3      7             6            30           49/98
4      11            10           90           121/242
5      13            12           132          169/338
6      17            16           240          289/578
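The 2-D placements in Table 1 follow directly from the quadratic congruent operator. The sketch below is an illustrative Python re-implementation (not the authors' Matlab code): for each user i it evaluates y_i(k) = i·k(k+1)/2 mod p over all k in GF(p) and prints the corresponding 0/1 rows. How these rows are then grouped and mapped onto wavelengths and the two polarization states for the weight-6 3-D codes is left to the construction described above.

```python
# Illustrative generation of quadratic-congruence placements
# y_i(k) = i*k*(k+1)/2 mod p for users i = 1..p-1 (here p = 11).
p = 11

def quadratic_code(i, p):
    """Return p binary rows of length p; row k has a single pulse at y_i(k)."""
    rows = []
    for k in range(p):
        pos = (i * k * (k + 1) // 2) % p    # quadratic congruent operator
        rows.append("".join("1" if c == pos else "0" for c in range(p)))
    return rows

for user in (1, 2, 3):                       # the three users shown in Table 1
    print(f"User {user}:")
    for row in quadratic_code(user, p):
        print(" ", row)
```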

Fig 1: 3D Codes Pictorial View. 111 ARCHITECTURE OF OCDMA The block diagram in Fig 2 illustrates system design with six channels WDM utilizing mode locked lasers and has been used for generating the carrier signal. This carrier signal is fed to individual encoder, which modulates the signal and encodes it according to the code signature sequence. This signal is used to modulate the


PRBS data using an external modulator. After modulation, the encoder has optical filters and a delay line array whose time increments are integral multiples of the chip time, TC = TB/S, where TB is the bit period (given by the inverse of the bit rate) and S is the number of time slots. The respective optical filters allow transfer of a single wavelength out of the six wavelengths, and the delay lines provide the appropriate time delay. An 8x1 multiplexer is used to

combine the signals coming from all the encoders; the combined signal is then propagated through a nonlinear fiber, and the decoder decodes the signal back by employing inverse delay lines. A compound optical receiver is used to receive the desired signal. For analyzing the received signal, various analyzers were used, such as a BER tester, eye diagram analyzer and signal analyzer [4].


Figure 2: Block Diagram of OCDMA

IV CONCLUSION
The codes generated through Matlab result in a larger number of potential users that can transmit data at a high rate in an OCDMA network.

REFERENCES
[1] P. R. Prucnal, M. A. Santoro and T. R. Fan, "Spread spectrum fiber-optic local area network using optical processing," Journal of Lightwave Technology, vol. LT-4, no. 5, pp. 547-554, May 1986.
[2] F. R. K. Chung, J. A. Salehi and V. K. Wei, "Optical orthogonal codes: design, analysis and applications," IEEE Transactions on Information Theory, vol. 35, no. 3, pp. 595-604, May 1989.
[3] J. E. McGeehan, S. M. R. Motaghian Nezam, P. Saghari, T. H. Izadpanah, A. E. Willner, R. Omrani and P. V. Kumar, "3D time-wavelength-polarization OCDMA coding for increasing the number of users in OCDMA LANs," Optical Fiber Communication Conference 2004 (OFC 2004), vol. 2, p. 3, Feb. 23-27, 2004.
[4] Shilpa Jindal and Neena Gupta, "Analysis of 3D FO-CDMA system with varying data rates," National Conference on Emerging Trends in Engineering and Management (ETEM-2008), p. 6, IEEE Student Branch, Bombay Section, India.
[5] Reza Omrani and P. Vijay Kumar, "Improved constructions and bounds for 2-D optical orthogonal codes," in Proceedings of the International Symposium on Information Theory (ISIT 2005), pp. 127-131.
[6] Shilpa Jindal and Neena Gupta, "Simulated transmission analysis of 2D and 3D OOC for increasing the number of potential users," in Proceedings of the 10th Anniversary International Conference on Transparent Optical Networks (ICTON 2008), pp. 302-305.
[7] Shilpa Jindal and Neena Gupta, "Design and implementation of a novel set of 3D codes for OCDMA LAN system at 5 Gbps," in Proceedings of the International Conference on Optics and Photonics (ICOP 2009), October 30 - November 1, 2009, p. 172.

ACKNOWLEDGMENT
We acknowledge the Optical Communication Lab at the Department of Electronics Electrical and Communication Engineering, Punjab Engineering College, Deemed University, Sector 12, Chandigarh, where this work has been carried out.



A Co-relation Based Robust Work Model Formation for Promoting Global Cyber Security

Kapil Kumar Nagwanshi


Department of Computer Science & Engineering Rungta College of Engineering & Technology Bhilai, India kapilkn@gmail.com

Tilendra Shishir Sinha


Department of Computer Science & Engineering SSCET Bhilai, India tssinha1968@gmail.com

Susanta Kumar Satpathy


Department of Computer Science & Engineering Rungta College of Engineering & Technology Bhilai, India kapil@teachers.org
Abstract— Watermarking in the hidden domain is being used to ensure a high degree of confidentiality and security in digital media. The present paper deals with a methodology for hidden watermarking by means of a modified correlational CDMA transform of a pseudo-random noise sequence. Most previous work has been carried out using the Least Significant Bit (LSB) technique. Watermarking based on LSB modification is successful only in noise-free channels where no alteration is expected; the LSB technique has also been found unsuitable when the image undergoes minor alteration, whether accidental or intentional. The present paper replaces the LSB notion with a correlational CDMA work model. This technique shows good noise immunity at high as well as low energy levels of images, and resistance to JPEG attack. The approach adopted in this work has been tested successfully on 10 different 256x256 bmp images. In the present paper two algorithms are proposed. These algorithms provide watermarking with an adequate degree of robustness in a noisy and vulnerable environment.

Keywords— CDMA, copyright, correlation, cryptography, extraction, image security, insertion, LSB, watermark.

I. INTRODUCTION
The expansion of internet multimedia systems has increased the need for image copyright protection [1]-[3]. It is very hard to identify whether an image is the original or an altered one. One key approach used to solve this dilemma is to add an imperceptible structure to an image that can be used to mark it. Such structures are known as digital watermarks [4]. Digital watermarking [5]-[9] deals with the process of embedding a watermark into a multimedia object, such as an image, text document, audio or video stream, such that the watermark can be detected or extracted later to make an assertion concerning the object. The embedded piece of information acts as a fingerprint for the owner, allowing safeguarding of the copyright, authentication [10]-[14] of the data and tracing of illegal copies. A simple example of a digital watermark would be a visible seal placed over an image to identify the copyright. Another good example is the shadi.com seal placed over the images of a person. The watermark might contain additional information, including the identity of the customer of a particular copy of the object. According to Human Visual System perceptibility, a watermark can be visible or hidden. Here we use the hidden watermarking approach. The paper has been organized in the following manner: Section II proposes the problem formulation and solution methodology, Section III describes the experimental results and discussion, and Section IV gives concluding remarks and further work, followed by the references used in the present work.

II. PROBLEM FORMULATION AND SOLUTION METHODOLOGY
Watermarking is very similar to steganography in a number of respects [15]. Both seek to embed information inside a cover message with little to no degradation of the cover object. Watermarking, however, adds the additional constraint of robustness. An ideal steganographic system would embed a large amount of information, perfectly securely, with no visible degradation of the cover object. An ideal watermarking system, however, would embed an amount of information that could not be removed or altered without making the cover object entirely unusable [16]. As a side effect of these different requirements, a watermarking system will often trade capacity and perhaps even some security for additional robustness.

A. Proposed Solution Methodology


The proposed method uses the correlation properties of additive pseudo-random noise patterns that are added to an image. Recovering the watermark values from blocks in the spatial domain can be improved by employing CDMA spread-spectrum techniques to scatter each of the message bits randomly over the whole cover image, which increases capacity and improves resistance to cropping.

B. Mathematical Analysis 1) Insertion


The watermark is initially configured as a vector rather than a 2D image. For each value of the watermark, a {-1, 0, 1} pattern is generated by a PN sequence as P(x,y) = round(2*(rand(Mw, Nw) - 0.5)), as shown in equation (4). The seeds could either be stored or themselves be generated through PN methods.


The summation of all of these PN sequences represents the watermark, which is then scaled and added to the cover image. The {-1, 0, 1} pattern itself is generated as

P(x,y) = round(2 * (rand(Mw, Nw) - 0.5))                                (4)

and a pseudo-random noise (PN) pattern P(x,y) is added to the cover image I(x,y) according to

I_w(x,y) = I(x,y) + k * P(x,y)                                          (1)

Summed over the whole message vector, the embedding can be written as

I_w(x,y) = I(x,y) + k * Σ_{i=1..L} P_i(x,y),  where P_i(x,y) = round(2 * (rand(Mc, Nc) - 0.5)) and the sum runs over positions with message bit b_i = 0          (2)

In equation (1), k denotes a gain factor and I_w the resulting watermarked image. Increasing k increases the robustness of the watermark at the expense of the quality of the watermarked image. In equation (2), L is the length of the message vector, obtained by reshaping the message matrix into a vector; a PN pattern is added only for the positions where the corresponding message bit is 0.

The above procedure is then performed once for each pattern, and the pattern giving the higher resulting correlation is used. This increases the probability of a correct detection, even after the image has been subjected to attack. Because the embedding depends on the values of the transformed coefficients, the embedding process is adaptive, storing the majority of the watermark in the larger coefficients. The CDMA work model proves resistant to JPEG compression, cropping, and other typical attacks.

C. Proposed Algorithm

Two algorithms are proposed: IMCC-INS for insertion of the watermark and IMCC-EXT for its extraction, as shown below.

IMCC-INS algorithm
Start
Step 1. Set the gain factor k for embedding.
Step 2. Read the main (cover) image I(x,y) into which the watermark is to be inserted.
Step 3. Read the message image m(x,y) and reshape it into the message vector.
Step 4. Read the key image Ik for the pseudo-random noise generator.
Step 5. Wherever the message contains a '0', add the PN sequence P with gain k to the cover image.
Step 6. Write the watermarked image I_w(x,y) to a file.
End

IMCC-EXT algorithm
Start
Step 1. Set the gain factor k used for embedding.
Step 2. Read the watermarked object I_w(x,y) from which the watermark is to be extracted.
Step 3. Read the original watermark W(x,y).
Step 4. Read the key image Ik for the pseudo-random noise generator and reset the PN generator to the key state.
Step 5. Initialise the message vector to all ones.
Step 6. Generate the {-1, 0, 1} PN sequence P(x,y) = round(2*(rand(Mw, Nw) - 0.5)).
Step 7. Calculate the correlation vector C for the image.
Step 8. If the correlation exceeds the threshold, set the corresponding message bit low (0).
Step 9. Reshape the message vector and display the recovered watermark of size Mo x No.
End
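As an illustration of the IMCC-INS/IMCC-EXT steps listed above, the following is a minimal NumPy sketch of seeded spread-spectrum embedding and correlation-based extraction. It is a sketch under assumptions, not the paper's implementation (which is in MATLAB): the function names, the default gain value, and the use of the mean correlation as the detection threshold are choices made here for illustration.

```python
import numpy as np

def embed_cdma(cover, message_bits, key=42, k=2.0):
    """Spread-spectrum (CDMA) embedding sketch: for every message bit equal to 0,
    a full-size {-1, 0, 1} PN pattern is scaled by the gain k and added to the
    cover image, mirroring steps 4-6 of IMCC-INS."""
    rng = np.random.RandomState(key)              # PN generator seeded with the key
    marked = cover.astype(np.float64).copy()
    for bit in message_bits:
        pn = np.round(2.0 * (rng.rand(*cover.shape) - 0.5))   # values in {-1, 0, 1}
        if bit == 0:                              # only '0' bits are embedded
            marked += k * pn
    return np.clip(marked, 0, 255)

def extract_cdma(marked, n_bits, key=42):
    """IMCC-EXT sketch: regenerate the same PN patterns from the key, correlate
    each with the watermarked image, and set a bit to 0 when its correlation
    exceeds the threshold (here: the mean of all correlations, an assumption)."""
    rng = np.random.RandomState(key)
    centred = marked - marked.mean()
    corr = []
    for _ in range(n_bits):
        pn = np.round(2.0 * (rng.rand(*marked.shape) - 0.5))
        corr.append(np.mean(centred * pn))        # one entry of the correlation vector C
    corr = np.array(corr)
    bits = np.ones(n_bits, dtype=int)             # message vector initialised to all ones
    bits[corr > corr.mean()] = 0                  # correlation above threshold -> bit 0
    return bits
```

For a 256x256 cover array and a small binary watermark reshaped to a bit vector, these two functions round-trip the bits provided the gain k is large enough relative to the image content.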

2) Peak Signal to Noise Ratio


The only loosely defined metric used here is PSNR, shown below in equation (3). The main reason for this is that no rigorously defined metric has been proposed that takes the effect of the Human Visual System (HVS) into account, so PSNR provides only a rough approximation of the quality of the watermarked image. The phrase peak signal-to-noise ratio is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale.

PSNR = 10 * log10( X*Y * max_{x,y} p_{x,y}^2 / Σ_{x,y} (p_{x,y} - p'_{x,y})^2 )          (3)

Here max_{x,y} p_{x,y}^2 is the maximum possible pixel value of the image and X*Y is the number of pixels. The denominator is most easily defined via the mean squared error (MSE) between two m x n monochrome images p_{x,y} and p'_{x,y}, where one of the images is considered a noisy approximation of the other.
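A minimal sketch of equation (3) in NumPy, assuming grey-scale arrays; the function name is illustrative and the peak term uses the maximum pixel value of the reference image, as in the formula above.

```python
import numpy as np

def psnr(original, distorted):
    """PSNR per equation (3): peak signal power X*Y*max(p)^2 divided by the
    total squared error between the two images, on a decibel scale."""
    original = original.astype(np.float64)
    distorted = distorted.astype(np.float64)
    sse = np.sum((original - distorted) ** 2)        # sum of squared errors
    if sse == 0:
        return float("inf")                          # identical images
    peak = original.size * np.max(original) ** 2     # X*Y * max p^2
    return 10.0 * np.log10(peak / sse)
```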

3) Extraction
To recover the watermark, consider equation (4): each seed is used to generate its PN sequence, which is then correlated with the entire image. If the correlation exceeds the threshold, that bit in the watermark is taken as 0, otherwise as 1 (matching step 8 of IMCC-EXT). The process is repeated for all values of the watermark. CDMA improves the robustness of the watermark significantly, but requires several orders of magnitude more computation. In the notation used here, I is the main image, I_w the watermarked image, m the message image (rounded unsigned integer), b the message vector, k the gain factor, size(.) the size operator, reshape(X, M, N) the operator that returns the M-by-N matrix whose elements are taken column-wise from X, and C the correlation vector for the image.

III. EXPERIMENTAL RESULTS AND DISCUSSIONS


This algorithm has been tested on 10 different 256x256 BMP images in the 2D domain. The images were tested for gain factors from 0.5 dB to 10 dB. The method gives good results for low-energy as well as high-energy channels. A JPG attack changes the file size in different ways, and the proposed algorithm also shows immunity towards the JPG attack. All images were tested on an AMD Sempron 32-bit mobile processor with 512 MB RAM and Fedora 6 as the operating system. The main algorithms are written and


tested in Matlab R14 based environment. The results are shown below in table I:
TABLE I. CHARACTERISTICS OF TESTED IMAGES
| Original image | PSNR | Insertion time (s) | Extraction time (s) | Extraction time for attacked object (s) | Image size after JPG attack |
| Bill.bmp | 25.9597 | 4.5938 | 9.9531 | 9.9063 | 16.3 KB |
| barb.bmp | 25.2042 | 4.8125 | 9.8906 | 9.875 | 20.9 KB |
| bridge.bmp | 25.5618 | 4.6406 | 9.875 | 9.8438 | 22.9 KB |
| clown.bmp | 25.9941 | 4.8125 | 9.9375 | 9.9375 | 18.7 KB |
| couple.bmp | 25.4036 | 4.6875 | 10.4375 | 9.9375 | 20.3 KB |
| crowd.bmp | 25.5063 | 4.8281 | 10.5156 | 9.9375 | 21.1 KB |
| glface.bmp | 24.0168 | 4.8281 | 10.0781 | 9.6563 | 17.8 KB |
| lena.bmp | 25.0484 | 4.6406 | 9.7656 | 9.5 | 19.2 KB |
| tank.bmp | 23.8991 | 4.5938 | 10.0313 | 10 | 18.7 KB |
| truck.bmp | 22.2386 | 4.6406 | 9.8906 | 10.0625 | 19.0 KB |

Figure 3. Watermarked image (1 dB Gain)

Table I lists the results for each test image; the image of Bill Gates stored in the file Bill.bmp is shown under different conditions. Fig. 1 is the original image and Fig. 2 the histogram of the original image. The watermark is applied with 1 dB noise gain, giving the image shown in Fig. 3. Good noise immunity as well as immunity to the JPG attack can be observed from Fig. 2 and Fig. 4. Finally, an authorised person is able to recover the security information even when noise or a JPG attack is present in the received image, as shown in Fig. 6.

Figure 4. Histogram of watermarked image without attack (1 dB gain)

Figure 1. Original Bill Gates Image downloaded from internet. {Courtesy Wikipedia.org}

Figure 5. Histogram of watermarked image with JPG attack (1 dB gain)


Figure 2. Histogram of original Bill Gates Image

Figure 6. (a). Watermark to be inserted (b). Extracted watermark without attack (2 dB gain factor) (c) (d) (e). Extracted watermark with attack (0.5 dB, 2 dB, 5 dB gain factor)


IV. CONCLUDING REMARKS AND FURTHER WORK


The experiments show that the proposed algorithms remain robust when the image is operated upon or attacked; hidden watermarking thus proves its usefulness in the present digital world. Most researchers in this area have adopted the LSB technique, but it fails under attack. Hence, in the present paper a newer technique, modified co-relation based CDMA in the correlation domain, has been implemented to meet the robustness criteria for watermarked images. The modified co-relation based CDMA domain is also found to be highly resistant to both compression and noise, with minimal visual degradation. It may be one of the most promising domains for digital watermarking and may be analysed further for promoting global cyber security using steganography techniques.

REFERENCES
[1] Ingemar J. Cox and Matt L. Miller, "The first 50 years of electronic watermarking", Journal of Applied Signal Processing, 2, pp. 126-132, 2002.
[2] Seitz, J. and Jahnke, T., "Digital watermarking: an introduction", in Digital Watermarking for Digital Media, Information Science Publishing, pp. 19-20, 2005.
[3] Yongdong Wu, "On the security of an SVD-based ownership watermarking", IEEE Transactions on Multimedia, Vol. 7, No. 4, August 2005.
[4] Dong Zheng, Yan Liu, Jiying Zhao, and Abdulmotaleb El Saddik, "A survey of RST invariant image watermarking algorithms", ACM Computing Surveys, 39(2):5, 2007.
[5] Cox, M. Miller, and J. Bloom, "Watermarking applications and their properties", in Proc. of Int. Conf. on Information Technology: Coding and Computing 2000, pp. 6-10, March 27-29, 2000.
[6] Arnold Michael, Schmucker Martin, and Wolthusen Stephen D., Techniques and Applications of Digital Watermarking and Content Protection, Artech House, Inc., 2007.
[7] Petitcolas, F.A.P., Anderson, R.J., and Kuhn, M.G., "Attacks on copyright marking systems", Second Workshop on Information Hiding, 1998.
[8] Seitz, J., Digital Watermarking for Digital Media, Information Science Publishing, London, 2007.
[9] Lacy, J., et al., "Intellectual property systems and digital watermarking", Proc. of the 2nd International Workshop on Information Hiding, Portland, Oregon, USA, 15-17 April 1998, Lecture Notes in Computer Science, Vol. 1525, Springer-Verlag, David Aucksmith (Ed.).
[10] Nagwanshi, Kapil K., Mehta, Kamal K., and Tiwari, Rajesh K., "Hidden watermarking technique in wavelet domain", IEEE International Advance Computing Conference (IACC'09), Thapar University, IEEE Delhi Section and IEEE Computer Society Chapter, IEEE-APPL-1277, March 6-7, 2009.
[11] Nagwanshi, Kapil K. and Arora, Rajesh K., "Digital image security using fuzzy logic approach and watermarking", Int. Conf. of Academy of Physical Sciences, p. 68, January 12-14, 2008.
[12] Nagwanshi, Kapil K. and Arora, Rajesh K., "Watermark for digital images", National Conf. on Recent Trends & Development in Computer Technology, p. 1.1, October 26-27, 2007.
[13] Nagwanshi, Kapil K. and Arora, Rajesh K., "Ad-hoc watermark technique for digital image", National Conference on Opportunities, Achievements and Challenges in Electronics & Telecommunication Technology and its Applications, p. 84, October 26-27, 2007.
[14] Nagwanshi, Kapil K., "Bioinformatics and data mining", workshop at IIT Kanpur, April 4-8, 2005.
[15] Langelaar, G., Setyawan, I., and Lagendijk, R., "Watermarking digital image and video data", IEEE Signal Processing Magazine, 17:20-46, 2000.
[16] Henry S. Baird and Kris Popat, "Human interactive proofs and document image analysis", IAPR 2002: Workshop on Document Analysis Systems, Princeton, NJ, 2002.



Identification of Vibration Signature and Operating Frequency Range of Ball Mill Using Wireless Accelerometer Sensor
Satish Mohanty, Vanam Upendranath and P. Bhanu Prasad
Digital Systems Group Central Electronics Engineering Research Institute (CEERI)/Council of Scientific & Industrial Research (CSIR) Pilani 333031, Rajasthan, India satish.moha@gmail.com, vanam.upendranath@gmail.com, bhanu@ceeri.ernet.in
Abstract - This paper identifies the operating frequency of a ball mill using a tri-axial wireless accelerometer sensor. The output from mechanical equipment is usually a mixture of the required signal pattern and noise; if the frequency and strength of the signals are not considered carefully, the noise can be mistaken for the dominant signal. To analyse the vibration of the mill, various experiments were conducted to determine the dominant frequency from a set of frequencies. The experiments were conducted with the ball mill both rotating and stationary. The direction of propagation of the vibration and the signal loss due to the wall of the ball mill, once an impact is applied to the mill under fixed and rotating conditions, are also discussed.

Keywords - Impact, Vibration, Wireless accelerometer sensor

I. INTRODUCTION

The ball mill is a mechanical device used to grind different types of crushed ores. While grinding, the mechanism generates different patterns of impact; the vibration signature discussed here is the pattern of impact observed when the required level of grinding is achieved. This has a significant effect on material grinding, since the impact helps in analysing the level of grinding of the ores. A wireless accelerometer is used to measure the vibration and transmit the signature to a base station connected to a computer. The response of the accelerometer under static and dynamic mill conditions is considered and analysed in this paper, together with the significant effect of the impact strength on selecting the operating frequency of the mill. The impact of balls on the material bed produces vibration of a certain frequency and strength at different levels of grinding. The motion of the charge inside the ball mill [1] is an important factor in calculating the strength of the signal, and the strength of the output signal also depends on the type of grinding (dry or wet). Different approaches have been tried to estimate the grinding condition of the mill, using acoustic signals [2] and vibration signals [3]. Since acoustic estimation is quite difficult in a noisy industrial environment, and the acoustic signal is attenuated inside the mill before reception, the estimation here is carried out using an accelerometer sensor. The vibration signal is acquired using a tri-axial wireless accelerometer sensor with a 10g range, and is further analysed to obtain the vibration signature under rotating and stationary conditions of the ball mill. When the balls fall due to the centrifugal action, the frequency response is larger along the X and Z directions and smallest along the Y direction; this can be analysed by resolving the force acting on the mill at a particular instant. Independent experiments with a sampling rate of 4000 samples/sec/channel were performed in order to predict the highest frequency of the vibration, so as to select a sensor with a suitable sampling rate for the rotating condition. Figure 1 shows the position as well as the direction of the sensor along the X, Y and Z channels. Whenever an impact is applied to the mill, the frequency of the resulting vibration depends on the type of mill and the material it is made of, while the strength of the signal depends on the total mass of the balls at that instant and the velocity with which they fall. The experiments are conducted by increasing the number of balls progressively at different instances, so that the static-mill frequency can be used as a reference for the dynamic mill.

Figure 1. Accelerometer setup on the mill

II. METHODOLOGY

The phenomenon of vibration in a ball mill is used to analyse the grinding level of materials in an industrial ball mill. As mentioned earlier, experiments conducted with different counts of balls (keeping the mill static) give an idea of the vibration signal. The identification of the sensor positions and of the impact along the mill is discussed in [4]. The ball mill periphery is divided into four sections (1, 2, 3 and 4), and the direction of the channels at each section is as shown in Figure 1. The response of the accelerometer sensor for the X and Z channels is shown in Figure 2. Position 3 in the figure is considered the point of highest impact, so the maximum signal strength occurs at position 3 and along the Z channel (the impact is normal to the Z channel). Initial experiments are conducted on the static ball mill with the accelerometer mounted on top of the mill.


Figure 2. Z and X channel response of rotating accelerometer sensor


Under the static condition the frequency is observed and used as a reference when analysing the response at that frequency in the dynamic condition (the rotational speed of the mill is 70% of the critical speed given in equation (1)). This is to confirm whether the vibration is due to the type of the mill or to the type of balls (the materials they are made of). The vibration experiments on the static ball mill are done with different types of balls.

N_c = 42.3 / sqrt(D)                                                    (1)

where D is the diameter of the mill. We observed that the force exerted on the accelerometer can be written mainly as a function of the parameters in the equation given below:

F(sensor) = f(LG, T, QI)                                                (2)

Where LG =Level of grinding, T =Types of ore, QI (wt %) = Quantity (wt %) of balls at an instant of impact and F= the force exerted as the function of different parameters.
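As a quick numerical illustration of equation (1) as reconstructed above (with the usual convention of D in metres and the critical speed in rpm), the short Python snippet below computes the critical speed and the 70% operating speed used for the rotating experiments; the 1 m diameter is purely illustrative.

```python
import math

def critical_speed_rpm(diameter_m):
    """Critical speed of the mill from equation (1), N_c = 42.3 / sqrt(D),
    assuming the diameter D is given in metres."""
    return 42.3 / math.sqrt(diameter_m)

nc = critical_speed_rpm(1.0)          # about 42.3 rpm for a 1 m mill
operating_speed = 0.7 * nc            # about 29.6 rpm (70 % of critical speed)
print(f"critical speed = {nc:.1f} rpm, operating speed = {operating_speed:.1f} rpm")
```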
Figure 4. Time and Frequency domain plot along Y-channel (outside top)

III. RESULTS AND ANALYSIS

The time and frequency domain signals are obtained for both the stationary condition (4000 samples/sec/channel) and the rotating condition (2000 samples/sec/channel) of the ball mill. The behaviour of the signal at different instances is characterised for both conditions. The results obtained in the time domain are transformed into their frequency-domain equivalents using the FFT; both are shown in the figures below. In the frequency-domain plots, the X-axis represents the frequency of vibration and the Y-axis the strength of the vibration signal.
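A small NumPy sketch of this time-to-frequency analysis is given below. The synthetic decaying 850 Hz burst and the function name are illustrative only, not data from the experiments; the sampling rates match those quoted in the text.

```python
import numpy as np

def dominant_frequency(samples, fs):
    """Transform a vibration record to the frequency domain with the FFT and
    return (frequency, magnitude) of the strongest component; fs is the
    sampling rate (4000 S/s for the static tests, 2000 S/s when rotating)."""
    samples = np.asarray(samples, dtype=np.float64)
    samples = samples - samples.mean()               # remove the DC offset
    spectrum = np.abs(np.fft.rfft(samples))          # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / fs)
    peak = np.argmax(spectrum[1:]) + 1               # skip the residual DC bin
    return freqs[peak], spectrum[peak]

# Illustrative use with a synthetic impact-like burst sampled at 4000 S/s:
t = np.arange(0, 0.5, 1.0 / 4000.0)
signal = np.exp(-20 * t) * np.sin(2 * np.pi * 850 * t)
print(dominant_frequency(signal, 4000))              # close to 850 Hz, inside the 800-900 Hz band
```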

A. Results at stationary condition (sampling rate 4000)


Figures 3, 4 and 5 show the time and frequency domain components of the vibration signals obtained when the outer wall of the mill is struck with an iron ball (the significant parameter here is frequency rather than amplitude). The frequency response along the Y channel, at around 400-450 Hz, is not significant. The experiments in the stationary condition are performed manually, with a maximum sampling rate of 4000 samples/sec/channel.

Figure 5. Time and Frequency domain along Z-channel (outside top)

Figure 3. Time and Frequency domain along X-channel (outside top)

Figure 6. Time and Frequency domain along X-channel (inside top)


B. Analysis at stationary condition (sampling rate 4000)

Figures 3, 4 and 5 show that the signal strength is highest around 800-900 Hz and that the frequency of vibration is higher for the X and Z channels. When the mill is struck by a ball on the top of the wall, the signal strength is higher than when it is struck inside the mill (Figures 6, 7 and 8). There is a loss of vibration strength, but the frequency remains constant; the loss of signal strength is due to the thickness of the mill wall and the type of material it is made of. To distinguish between the vibration frequency and its strength, the above experiments were repeated using other materials. It was found that the strength of the signal depends on the velocity of the projectile and its mass, but the frequency of vibration is independent of these parameters. The dominant frequency is found to be around 800-900 Hz, which allows the vibration signature to be tracked in the 800-900 Hz band. The sampling rate is fixed at around 2000 samples/sec/channel to analyse the radial and tangential components of the vibration signals along the Z and X channels of the sensor for the rotating condition of the mill.

Figure 7. Time and Frequency domain along Y-channel (inside top)

Figure 8. Time and Frequency domain along Z-channel (inside top)

C. Results at rotating condition (sampling rate 2000)

It is easy to monitor the frequency response of a system in static mode, and the static-condition frequency is kept as a reference for the experiments in dynamic mode. The significant changes in the received signals depend on the speed of rotation of the platform on which the sensor is mounted and on the time delay between the transmitter and the base station. The signal from a sensor mounted on an industrial ball mill is weak because the diameter of the mill is quite large; when a sensor is mounted on such a large structure, the signal strength suffers from line-of-sight problems and multipath effects between the transmitter and receiver. The behaviour of the signals discussed in [5, 6] cannot be applied directly to the ball mill, but it can be taken as a method for evaluating the system response in the rotating condition. The experiments are conducted with an increasing number of balls in order to reach the significant frequency band obtained in the static condition. Figures 9 and 10 show the time domain and frequency domain response of the ball mill as the number of balls is increased. The signals from the accelerometer sensor are transmitted directly to the base station connected to a PC, and the vibration signal is transformed to the frequency domain using the FFT. The frequencies obtained are analysed with reference to the signals from the experiments performed in the static condition. These experiments help in classifying the behaviour of the vibration of the material inside the mill.

Figure 9. Time and Frequency domain with 4 balls rotating


Figure 10. Time and Frequency domain with 12 balls rotating

D. Analysis at rotating condition

When the noise level dominates the signal level, the noise is easily mistaken for the signal. The signal therefore has to be examined in the region of the natural frequency in order to obtain the signal strength there. The time-domain plots of Figures 3, 4 and 5 in the static condition look like impulses, whereas in the dynamic case they are sinusoidal; this is due to the behaviour of the accelerometer during rotation, as shown in Figure 2. Figure 9 shows that the noise around 100-150 Hz is quite high compared with the signal level around 650-750 Hz. If proper frequency selection is not done prior to the rotating experiments, the 100-150 Hz components may be mistaken for the original signal. The same experiments were conducted with an increasing ball count, and Figure 10 shows a significant improvement of the signal strength over the noise level.

It is also observed that as the number of balls is increased, the dominant frequency approaches the natural frequency. The frequency observed in the static condition of the mill is 800-900 Hz, while in the rotating condition it is lower and is significant at 650-750 Hz. The loss in frequency may be due to the multipath and Doppler shift effects present in the rotating wireless transmission; in general, this loss becomes a significant issue once the mill starts rotating. In the practical case the diameter of the mill is larger, so signal transmission and reception become poorer. The analysis of the signal quality is to be carried out for different positions of the sensor in the static condition and for various rpm.

ACKNOWLEDGMENT

The work reported here is part of the network project NWP-31, Development of Advanced Eco-Friendly, Energy Efficient Processes for Utilization of Iron Ore Resources of India, funded by CSIR. The authors wish to thank the staff members of the Mineral Processing Division at IMMT for helping us to conduct the experiments.

REFERENCES
[1] B. K. Mishra and R. K. Rajamani, "Simulation of charge motion in ball mills", International Journal of Mineral Processing, 40 (1994), pp. 171-186.
[2] D. G. Almond and W. Valderrama, "Performance enhancement tool for grinding mills", International Platinum Conference "Platinum Adding Value", The South African Institute of Mining and Metallurgy, 2004.
[3] Karl S. Gugel and Rodney M. Moon, "Automated mill control using vibration signal processing", IEEE Charleston World Cement Conference, Charleston, 2007.
[4] V. Upendranath, S. Mohanty, Jagdish Raheja, K. Sai Prasad and P. Bhanu Prasad, "Autotracking the behaviour of ball mill using wireless sensor", International Seminar on Mineral Processing Technology, IMMT, Bhubaneswar, 2009, unpublished.
[5] Kuang-Ching Wang, Lei Tang and Yong Huang, "Wireless sensors on rotating structures: performance evaluation and radio link characterization", ACM WiNTECH, Montreal, Canada, 2007.
[6] Lei Tang, Kuang-Ching Wang and Yong Huang, "Performance evaluation and reliable implementation of data transmission for wireless sensors on rotating mechanical structures", Structural Health Monitoring, Sage Publications, 2009.



Mobile SMS to Braille Transcription Extending to Android Mobiles


A New Era of Mobile for the Blinds
Deepak Kumar
ZHCET, Aligarh Muslim University Aligarh, India deepakkumar@zhcet.ac.in

Saiful Islam
ZHCET, Aligarh Muslim University Aligarh, India saifulislam@zhcet.ac.in

Farhena Khan
ZHCET, Aligarh Muslim University Aligarh, India farhenakhan@zhcet.ac.in

Abstract - In this paper we describe a software application that converts mobile SMS messages into Braille script, with special emphasis on special symbols, so that visually impaired people can also read mobile messages. The application filters the message text out of its special file format, converts the text into Braille script and sends it to the parallel port, where it can be embossed on paper by a Braille embosser or read by an electronic Braille reader attached to the port. We have extended the work to Android mobiles by creating an ASCII-Braille encoding lookup table in the SQLite database provided in the Android software stack. The application increases the availability of information and the use of technology for visually impaired individuals.

Keywords - Mobile SMS, Braille, ASCII-Braille encoding, Braille transcription, applications

I. INTRODUCTION

According to Department of Health figures, 153,000 people in the UK were on the register of blind (severely sight impaired) people at 31 March 2008, an increase of around 500 (0.3%) from the end of March 2006 [15]. In India, 10,634,000 people are blind, of whom 5,732,000 are male and 4,902,000 are female. There is therefore a strong need for technological development in this field. Mobile SMS, the Short Message Service, is becoming very popular since it provides a cheap means of communication; today an average of 60% of mobile communication is done by means of SMS. But how is a blind person to use this service? To address this problem we have implemented SMS-to-Braille transcription software that filters the mobile message out of its special format into plain text and then transcribes this text into Braille. The message in Braille is displayed on the screen, from where it can be sent to the parallel port. We have also implemented a piece of hardware with which the Braille output on the parallel port can be verified with the help of glowing LEDs that represent the Braille notation, and we have proposed how this system can be implemented on Android phones.

II. BACKGROUND

A. A BRIEF VIEW OF BRAILLE

Braille is a system of encodings of print in embossed dot patterns used for reading and writing by the blind [1]. Each Braille character occupies a cell of fixed size. It consists of two columns of dots, numbered 1, 2, 3 and 4, 5, 6 from top to bottom. (There are also Braille codes using two columns with four dots [8, 9].)

Figure 1. Braille cell representation

We have listed the Braille representations of the first ten characters of the alphabet which, when preceded by the indicator for numbers, also denote the ten digits.

Figure 2. Some Braille characters [4]

There are 2^6 = 64 Braille characters altogether, with one of them used as the space character. The dimensions of the Braille cell, according to the Library of Congress standards, are given in [5]. Certain Braille printers also permit a graphics mode; however, in this paper we do not consider Braille graphics at all. Given that only 64 Braille characters are available, it is clear that special encoding rules had to be developed for different applications [3]. In this paper we restrict our attention to the conversion of message text to Braille. Grade I Braille, for instance, renders text with all details concerning punctuation, capitalization, spelling and numerals, whereas grade II Braille employs a system of contractions and abbreviations [6, 10]. We have employed an encoding technique that transcribes messages into grade I Braille so as to cover a wide range of symbols in Braille.

B. A VIEW OF BRAILLE TRANSCRIPTION

Before the introduction of computers into the Braille production process, Braille transcription worked essentially as follows: a person trained in the relevant Braille codes would take a printed copy of the text in question and produce a Braille copy using a Braille writing machine. Typically, such a machine has six keys for embossing a Braille character, one for each of the dots of the Braille cell, and a few more keys for operations like space and backspace. In the case of a single copy, the Braille is produced directly onto heavy stock paper. The Braille version would have to be proofread and corrected, and corrections could require complete pages to be re-done. Braille transcription is difficult, time intensive and costly; the training time for a transcriber is given as between six months and a year [11], and a page of print typically results in about two pages of Braille. Given these conditions, it is clear that only a very small part of printed publications can actually be transcribed. With the introduction of computers into the transcription process, certain simplifications became feasible. In a first step, the text is put into machine-readable form; the task of translating the input into Braille is then left to a computer program. Several programs exist that afford the conversion from ASCII to (literary) grade II Braille (see [12, 13, 14], for example), and some successful attempts at computer-aided transcription of mathematics have also been made [8]. However, to our knowledge, automatic transcription into Braille of mobile SMS messages has not been offered by any software package up to now, although it is essential today and must be available for the blind.

III. SYSTEM DESCRIPTION

Mobile SMS to Braille Transcription is a software application that produces Braille for mobile messages. It filters the text message out of its special file format (such as .VMG), converts it into Braille format and displays it on the screen along with the text message. The message in Braille format can then be sent to the parallel port; most Braille output devices, such as Braille embossers and refreshable Braille displays, can be connected to the computer through the parallel port, so the data on the port can be used directly by the attached Braille output device. The software has been developed using Visual Basic 6.0 Enterprise Edition, the C language, MS Access 2003, some DOS shell commands and some dynamic link libraries that interact with the parallel port.

Figure 3. Complete system interfacing

There are basically three major modules in this software application.

A. Message File to Text Filtration

To use this application, the message files first have to be transferred from the mobile to the computer. This can be done with the mobile's supporting software, such as PC Suite, connecting the mobile to the PC via a data cable or Bluetooth. These message files are in a special format (for example, Nokia mobiles store messages in .VMG format) which cannot be converted into Braille directly, and they contain a lot of unwanted information along with the message text. This unwanted information needs to be filtered out. For this purpose we have implemented a program in the C language which takes the message file name as a command line argument, reads the message file and filters out the unwanted information, storing the useful message text in a text file. This text file then contains only the message text, which can be further processed and converted into Braille.

Figure 4. Control flow for message filtration module

The figure above shows the flow of execution: the VB program takes the message file name as input and writes a DOS command into convert2text.bat to execute VMGTOTEXT.EXE, which takes the message file in its native format as input and writes the message text to a text file.

B. ASCII to Braille Conversion

To convert ASCII characters into Braille characters we use an encoding scheme known as ASCII-Braille encoding. In this scheme the six dots of a Braille character are treated as six bits: raised dots are taken as bit 1 (high) and lowered dots as bit 0 (low).

Figure 5. Braille encoding of character 't'

As shown in Figure 5, in the Braille representation of the character 't' the dots numbered 2, 3, 4 and 5 are raised and are taken as bit 1, while dots 1 and 6 are lowered and taken as bit 0. In ASCII, 't' is represented as 01110100, and its corresponding Braille value is XX011110, in which the two MSBs are don't-cares and only the six LSBs are significant. Using this encoding we have created a lookup table in MS Access 2003. The VB program reads the text message character by character, fetches the corresponding Braille notation from the lookup table, and displays the message on the screen in Braille format.

TABLE I. ASCII-BRAILLE ENCODING FOR SOME CHARACTERS

| Input | Input binary | Output binary |
| v | 01110110 | XX100111 |
| w | 01110111 | XX111010 |
| x | 01111000 | XX101101 |
| y | 01111001 | XX111010 |
| z | 01111010 | XX110101 |
| 1 | 00110001 | XX000010 |
| 2 | 00110010 | XX000110 |
| 3 | 00110011 | XX010010 |
| 4 | 00110100 | XX110010 |
| 5 | 00110101 | XX100010 |
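The short Python sketch below illustrates the kind of lookup and bit packing described above. The dot numbers and the dot-to-bit mapping (dot i to bit i-1 of the low six bits) are assumptions chosen so that the worked example for 't' (dots 2, 3, 4, 5 giving XX011110) is reproduced; the paper itself stores the table in MS Access and SQLite rather than in code, and only a few letters are listed here.

```python
BRAILLE_DOTS = {            # character -> raised dot numbers (grade I letters)
    't': (2, 3, 4, 5),
    'v': (1, 2, 3, 6),
    'w': (2, 4, 5, 6),
    'x': (1, 3, 4, 6),
    'z': (1, 3, 5, 6),
}

def dots_to_cell(dots):
    """Pack raised dot numbers into the six-bit cell value sent to the port."""
    cell = 0
    for dot in dots:
        cell |= 1 << (dot - 1)      # assumed mapping: dot i -> bit i-1
    return cell                      # the two MSBs of the byte stay 0 ("don't care")

def transcribe(text):
    """Return the six-bit cell values for the characters present in the table."""
    return [dots_to_cell(BRAILLE_DOTS[ch]) for ch in text if ch in BRAILLE_DOTS]

print(format(dots_to_cell(BRAILLE_DOTS['t']), '08b'))   # 00011110, i.e. XX011110
```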


TABLE II. ASCII-BRAILLE ENCODING OF SOME SPECIAL SYMBOLS

| Input (special symbol) | Input binary | Output binary |
| Colon | 00111010 | XX010010 |
| Semicolon | 00111011 | XX000110 |
| Exclamation | 00100001 | XX010110 |
| Full stop | 00101110 | XX110010 |
| Question mark | 00111111 | XX100110 |
| Double quotation | 00100010 | XX110100 |
| Single quotation | 00100111 | XX000010 |
| Open bracket | 00101000 | XX110110 |
| Close bracket | 00101001 | XX110110 |
| Hyphen | 00101101 | XX100100 |

C. Sending Data on the Parallel Port


Braille output devices can be connected to the computer through one of three types of port: the parallel port, RS232, or USB. Here we assume that the Braille output device is interfaced through the D-type, 25-pin parallel port. We have implemented a module to send a message in Braille to the parallel port, using inpout32.dll to write data to the port and hwinterface.dll to determine the parallel port address from the system BIOS.
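For illustration, a hedged Python ctypes sketch of writing one Braille cell to the parallel port data lines is shown below. It assumes a Windows machine where inpout32.dll is installed and exports the commonly documented Out32(port, value) entry point, and it uses the traditional LPT1 base address 0x378 only as a placeholder; the real address should be the one read from the BIOS, as the paper describes.

```python
import ctypes

def write_cell(cell_value, port_address=0x378):
    """Drive the six parallel-port data lines with one Braille cell value
    (sketch; Windows-only, same DLL the VB module uses)."""
    inpout = ctypes.WinDLL("inpout32.dll")          # assumed to be on the DLL search path
    inpout.Out32(port_address, cell_value & 0x3F)   # keep only the six significant bits
```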

D. Testing Hardware
We have designed a testing hardware to verify that the correct Braille output is received at the port. This hardware represents a single Braille character cell in which the dots are represented by six LEDs: when a data line carries a 1 the LED glows, and when it carries a 0 it does not. Braille output devices work on the same principle. The aim of this hardware is only to verify the Braille output.

Figure 6. Circuit design of testing hardware

Figure 9. Hardware representing a single Braille cell at output

E. Screen Shots of Software

Figure 7. Path of message file

Figure 8. Message in text and Braille format

IV. BRAILLE TRANSCRIPTION ON ANDROID MOBILES

The system for SMS to Braille transcription described in the previous section can also be implemented on Android mobiles. Mobile phones have generally been closed environments built on proprietary operating systems that require proprietary development tools, but Android is an open-source software stack backed by more than 40 Open Handset Alliance (OHA) members and surrounded by significant industry interest, so databases and other native tools can be used for the implementation. For this reason we have also implemented SMS to Braille transcription on Android phones, creating the ASCII-Braille encoding lookup table in the SQLite database provided in Android. Whenever an SMS is received, our Braille transcription software is alerted by the broadcast receiver and asks whether to convert it into Braille. If Braille transcription is chosen, the message is converted into Braille by the same method as described in Section III (the ASCII-Braille encoding table). We provide two methods for accessing the Braille output device, USB or Bluetooth connectivity; today most Braille output devices are accessible through a USB port or Bluetooth. In this way an Android mobile can interact with Braille output devices, so that a blind person can either read the message or emboss it on paper.

V. PRACTICAL APPLICATIONS

The practical applications of this software include the reading and writing of SMS messages by blind people through dedicated devices. The device used by the blind for reading is known as a refreshable Braille display, and the device used for writing (embossing) Braille is known as a Braille embosser. A refreshable Braille display is a device that allows a blind person to read the contents of a display one text line at a time in the form of a line of Braille characters. Each Braille character consists of six movable pins in a rectangular array. The pins can rise and fall


depending on the electrical signals they receive. This simulates the effect of the raised dots of Braille impressed on paper. When currents or voltages are applied to points in each six-pin array, various combinations of elevated and retracted pins produce the effect of raised dots or dot-absences in paper Braille

Figure 10. Refreshable Braille display

A Braille embosser is a printer used for printing Braille; it embosses Braille characters on a sheet of paper with relative ease. Examples of Braille embossers are the Romeo Attach, Tiger Braille Embosser, Juliet Classic Braille and Braille Express 150, among many others.

Figure 11. Braille Embosser

VI. RESULT AND CONCLUSION

We convert text SMS messages to Braille and simulate Braille output devices through the attached hardware. The results have been tested for over one hundred SMS messages of various types.

TABLE III. TESTING RESULTS

| Type of SMS | Number of SMS | Expected result |
| Simple text | 30 | 100% |
| Text with special symbols | 30 | 97% |
| Text with picture | 30 | 75% |
| MMS | 30 | Fails |

As shown in the table, the result for plain text messages is 100% correct. Since we do not consider graphics at all, the result for a text message with a picture depends on the ratio of text to picture in the message: if the text part is larger than the picture part the accuracy is high, and our aggregate result in such cases is about 75%. For MMS the software fails. Some Braille transcription packages are available that read Word documents, text files, HTML pages and e-mails, but according to the articles we have referred to there is no Braille transcription software for mobile SMS. We have therefore introduced Braille transcription of mobile SMS with a relatively easy and fast conversion method, and have tried to a great extent to produce correct Braille through this software. This is a fully practical application which will help the blind to use SMS technology, and it increases the availability of information for them. It also provides a record: some mobile speech software is available, but once messages are deleted they are gone, whereas with Mobile SMS to Braille Transcription records of messages can be kept and used whenever required. We hope that this software will be very helpful for the blind.

ACKNOWLEDGMENT

We are grateful to Mr. Inamullah, Mr. Izharuddin and Mr. Ankur Varshney for their valuable opinions during this project work. We would also like to thank Mr. Masood Ashraf Khan, principal of the Ahmadi School for the Blind, Aligarh Muslim University, Aligarh, for guiding us in understanding Braille and for verifying the experimental results.

REFERENCES
[1] Paul Blenkhorn, "A system for converting print into Braille", IEEE Transactions on Rehabilitation Engineering, Vol. 5, No. 2, June 1997.
[2] Pradip K. Das, Rina Das and Atal Chaudhuri, "A computerized Braille transcription for visually handicapped", Proceedings RC IEEE-EMBS & 14th BMESI, 1995.
[3] Paul Blenkhorn and Gareth Evans, "Automated Braille production from word-processed documents", IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 9, No. 1, March 2001.
[4] R. Arrabito and H. Jurgensen, "Computerized Braille typesetting: another view of mark-up standards", Electronic Publishing, Vol. 1(2), pp. 117-131, September 1988.
[5] A. M. Goldberg, E. M. Schreier, J. D. Leventhal and J. C. DeWitt, "A look at five Braille printers", Journal of Visual Impairment and Blindness, 81, No. 6, p. 272, 1987.
[6] English Braille, American Edition, 1959, revised 1962, 1966, 1968, 1970, 1972, with changes of 1980, American Printing House for the Blind, Louisville, Kentucky, 1984.
[7] E. Freud, Leitfaden der deutschen Blindenkurzschrift, Marburg, 1973.
[8] W. Schweikhardt, Die Stuttgarter Mathematikschrift fur Blinde, Vorschlag fur eine 8-Punkt-Mathematikschrift fur Blinde, Report 9/1983, Inst. f. Informatik, Universitat Stuttgart, 1983.
[9] O. Sueda, "Eight-dot Braille code and automatic Braille translation boards", Computerised Braille Production, Proceedings of the 5th International Workshop, Winterthur, 1985.
[10] M. B. Dorf and E. R. Scharry, Instruction Manual for Braille Transcribing, Division for the Blind and Physically Handicapped, Library of Congress, Washington, D.C., 1979.
[11] P. A. Fortier, "A computer aid for the visually handicapped: Braille production at the University of Manitoba", Proceedings of the First International Conference on Computers and Applications, IEEE Computer Society Press, Silver Spring, Maryland, 1984, pp. 208-214.
[12] Braille and Computers, Aids and Appliances Review 11, Carroll Centre for the Blind, 770 Centre Street, Newton, Massachusetts, 1984.
[13] D. McHale, A Simple Algorithm for Automated Braille Translation, Department of Computer Science, University of Washington, Seattle, Washington, Technical Report No. 85-0804, 1985.
[14] B. Eickenscheidt, W. A. Slaby and H. Werner, "Automatische Ubertragung von Texten in Blindenschrift", Textverarbeitung und Informatik, Fachtagung der GI, Bayreuth, Mai 1980, P. R. Wossidlo (Ed.), Informatik Fachberichte 30, Springer-Verlag, Berlin, 1980, pp. 50-63.
[15] RNIB, Supporting Blind and Partially Sighted People website [online], Statistics - numbers of people with sight problems. Available: http://www.rnib.org.uk/xpedio/groups/public/documents/publicwebsite/public_researchstats.hcsp



ANN-PSO Technique for Fault Section Estimation in Power System


Prashant P. Bedekar (Research Scholar, bedekar_pp@rediffmail.com), Sudhir R. Bhide (Associate Professor, srbhide@yahoo.com), Vijay S. Kale (Asst. Professor, tovijay_kale@rediffmail.com)
Electrical Engineering Department, Visvesvaraya National Institute of Technology, Nagpur (India) 440 010
Abstract - Fault section estimation (FSE) is the identification of faulted components in a power system using information on the operation of the protective relays and circuit breakers. In this paper the particle swarm optimization (PSO) method is employed to estimate the fault section, making use of an objective function obtained using the supervised Hebbian learning rule of an artificial neural network (ANN). Hebb's rule gives linear algebraic equations that represent the targets in terms of the status of relays and circuit breakers. The proposed approach is tested on sample power systems and is found to give correct results in all cases. The results show that the method can find the solution efficiently even in the case of multiple faults or failure of relays or circuit breakers.

Keywords - Artificial neural network; fault section estimation; Hebb's rule; particle swarm optimization.

I. INTRODUCTION

Modern society depends heavily upon the continuous and reliable availability of electricity, so it has become important to maintain a continuous supply round the clock. At the same time, no power system can be designed so that it never fails, and one has to live with failures (faults) [1]. To enhance service reliability and to reduce power outages, rapid restoration of the power system is needed, and for restoration of supply the fault section should be estimated quickly and accurately. Fault section estimation identifies faulted components in a power system using information on the operation of the protective relays and circuit breakers. This task is difficult, especially in cases where a relay or circuit breaker fails to operate, and in cases of multiple faults [2-4]. Several methods have been developed for FSE, such as expert systems [5], artificial neural networks [6,7], the genetic algorithm (GA) [2], evolutionary programming (EP) [3,4] and a combined ANN-GA technique [8]. In the expert system based methods, the maintenance of a large knowledge base is difficult and time consuming; the correctness of FSE results obtained using ANNs cannot be proved theoretically [2-4]; and although the GA and EP optimization methods can be used for FSE, the objective function is usually a high-order polynomial. This paper uses an ANN-PSO approach for FSE in electrical power systems. The problem is divided into two parts: (i) obtaining linear algebraic equations that represent the targets in terms of the status of relays and circuit breakers, from which the objective function can be formed, and (ii) estimating the fault section using this objective function and the information obtained from the relays and circuit breakers. ANNs learn by example; they can therefore be trained with known examples of a problem to acquire knowledge about it [9,10], and can be used effectively for function approximation, with an objective function obtained from known sets of inputs and outputs. PSO is a global optimization method that mimics the swarming behaviour of animals such as birds and bees [11]. Here the PSO method is employed to estimate the fault section properly and quickly, using the objective function obtained by the ANN.

II. FAULT SECTION ESTIMATION

To provide an uninterrupted power supply of a certain quality is the primary aim of a power system. To enhance service reliability and to reduce damage to equipment, rapid restoration of the power system is required after the occurrence of a fault, and as a first step of restoration the fault section should be estimated accurately and as soon as possible. In practice there are difficulties in locating fault sections. If there is some failure in the operation of primary relays and circuit breakers, the fault is supposed to be removed by back-up protection; in such a case the disconnected area is very large and it is difficult for dispatchers to judge the situation and find the fault section. If there are two faults at almost the same time, the situation becomes very complicated and difficult to judge. Prior to dealing with a fault section estimation problem, the following assumptions are made [6]:
1. All relays and breakers have reached their final states, i.e. any operations that are going to occur have taken place. The on-off states of the relays and breakers can therefore be represented by binary values and regarded as a binary symptom pattern identifying the fault.
2. No relay operates by itself without a fault in its protection zone, and no breaker opens by itself without the associated relay's tripping signal.
3. The operation of a relay or breaker can fail, and the failed relay or breaker should be identified to inform the dispatcher.

III. ARTIFICIAL NEURAL NETWORK

Neural networks are simplified models of the biological nervous system. A neural network is a massively parallel distributed processor made up of simple processing units which has a natural propensity for storing knowledge and making it available for use. It resembles the brain in two respects: (i) knowledge is acquired by the network from its environment through a learning process, and (ii) inter-neuron connection strengths, known as synaptic weights, are used to store the acquired knowledge [9,10]. An ANN learns by example; it can therefore be trained with known examples of a problem to acquire knowledge


about it. Once appropriately trained, the network can be put to effective use in solving unknown or untrained instances of the problem. In supervised learning the teacher is assumed to be present during the learning process, i.e. the network aims to minimize the error between the target (desired) output presented by the teacher and the computed output, to achieve better performance [9]. A simple linear function approximation can be made using Hebb's learning law. The supervised Hebb rule can be written in matrix notation as

W_new = W_old + t_q p_q^T                                   (1)

where W is the weight matrix and t_q is the target corresponding to pattern p_q. If we assume that the weight matrix is initialized to zero and each of the Q input/output pairs is applied once to equation (1), we can write

W = t_1 p_1^T + t_2 p_2^T + ... + t_Q p_Q^T                 (2)

which can be presented in matrix form as

W = T P^T                                                   (3)

When the prototype input patterns are not orthogonal, the Hebb rule produces some errors. Several procedures can be used to reduce these errors; the pseudoinverse rule is one of them [9]. Using this rule the weight matrix is calculated as

W = T P^+                                                   (4)

where P^+ is the Moore-Penrose pseudoinverse. The pseudoinverse of a real matrix is a unique matrix and can be computed as

P^+ = (P^T P)^{-1} P^T                                      (5)

The rule of equation (4) is used in this paper to determine the weight matrix.
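A minimal NumPy sketch of the pseudoinverse rule of equations (4)-(5) is given below: columns of P are input patterns, columns of T the corresponding targets, and W = T P+ maps a new status vector to its targets. The small two-pattern example is illustrative only and is not the six-pattern data of Table I.

```python
import numpy as np

def hebb_pseudoinverse(P, T):
    """Weight matrix by the pseudoinverse rule, W = T P+ (equation (4));
    numpy's pinv gives the Moore-Penrose pseudoinverse, which reduces to
    (P^T P)^-1 P^T of equation (5) when P has full column rank."""
    return T @ np.linalg.pinv(P)

# Illustrative 3-input, 2-output, 2-pattern example:
P = np.array([[1, 0],
              [0, 1],
              [1, 1]], dtype=float)   # each column is one input pattern
T = np.array([[1, 0],
              [0, 1]], dtype=float)   # each column is the matching target
W = hebb_pseudoinverse(P, T)
print(np.round(W @ P, 6))             # reproduces T for the training patterns
```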

IV. PARTICLE SWARM OPTIMIZATION

PSO is an extremely simple algorithm that is effective for optimizing a wide range of functions. The thought process behind the algorithm was inspired by the social behaviour of animals, such as bird flocking or fish schooling [11-13]. Conceptually, PSO lies somewhere between GA and EP, and like EP it is highly dependent on stochastic processes [12]. PSO begins with a random population matrix whose rows, called particles, contain the variable values. Each particle flies through the search space with a velocity that is dynamically adjusted according to its own flying experience and that of its companions. A path through the components of PSO is shown as a flowchart in Fig. 1. The ith particle is represented as Xi = (xi1, xi2, ..., xiD). The best previous position (the position giving the best fitness value) of the ith particle is represented as Pi = (pi1, pi2, ..., piD), and the index of the best particle among all particles in the population is denoted by g.

Fig. 1. Flow chart for particle swarm optimization (define the test function and the number of parameters; set the stopping criteria; initialize the PSO parameters, create the initial swarm and initialize the local minimum for each particle; find the best particle in the initial swarm; then iterate: update velocities, update particle positions, evaluate costs for the new swarm, update the best local position of each particle and the index g of the best particle; when the stopping criteria are satisfied, display the output)


The rate of position change (velocity) for particle i is represented as Vi = (vi1, vi2, ..., viD). The particles are manipulated according to the following equations [13,14]:

vid = w*vid + c1*rand()*(pid - xid) + c2*Rand()*(pgd - xid)          (6)

xid = xid + vid                                                      (7)

where c1 and c2 are two positive constants, rand() and Rand() are two random functions in the range [0,1], and w is the inertia weight. The first part of equation (6) is the previous velocity. The second is the cognition part, representing the exploitation of the particle's own experience, where c1 is the individual factor. The third part is the social part, representing the shared information and mutual cooperation among the particles, where c2 is the societal factor [13].
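The following is a bare-bones Python sketch of the PSO loop of Fig. 1 implementing the velocity and position updates of equations (6) and (7). The parameter values (inertia weight, c1, c2, swarm size, iteration count) are illustrative defaults for the sketch, not the settings used in the paper.

```python
import numpy as np

def pso_minimise(cost, dim, n_particles=20, iters=100, w=0.7, c1=2.0, c2=2.0, seed=0):
    """Minimise cost(x) over dim variables with a simple PSO loop."""
    rng = np.random.RandomState(seed)
    x = rng.rand(n_particles, dim)                 # initial swarm in [0, 1]^dim
    v = np.zeros((n_particles, dim))
    pbest = x.copy()                               # best position of each particle
    pbest_cost = np.array([cost(p) for p in x])
    g = int(np.argmin(pbest_cost))                 # index of the best particle
    for _ in range(iters):
        r1 = rng.rand(n_particles, dim)
        r2 = rng.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)   # eq. (6)
        x = x + v                                                       # eq. (7)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better] = x[better]
        pbest_cost[better] = costs[better]
        g = int(np.argmin(pbest_cost))
    return pbest[g], pbest_cost[g]
```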

V. SAMPLE SYSTEM

A simple one-bus distribution system [6], as shown in Fig. 2, is used to demonstrate fault section estimation using ANN and PSO. The distribution system has three overcurrent relays (R1, R2 and R3), one back-up relay (R4) and three circuit breakers (B1, B2 and B3). There are three sections: one busbar and two feeders.

Fig. 2. The sample system (a one-bus distribution system with circuit breakers B1, B2, B3, relays R1 to R4, and Sections 1, 2 and 3 supplying the loads)

The information received from the relays and circuit breakers is presented in the form of 0 and 1. For a breaker, 1 means the breaker has opened and 0 that it is closed; for a relay, 1 means the relay has operated and 0 that it has not. The sections are represented by S1, S2 and S3; for a section, 1 indicates that it is a fault section and 0 that it is not. Six different combinations of the status of relays and circuit breakers, and the corresponding fault section(s), are shown in Table I.

TABLE I. STATUS OF RELAYS AND CIRCUIT BREAKERS, AND CORRESPONDING FAULT SECTION

| S.N. | R1 | R2 | R3 | R4 | B1 | B2 | B3 | S1 | S2 | S3 | Remark |
| 1. | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | No failure |
| 2. | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | No failure |
| 3. | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | No failure |
| 4. | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | Failure of B1 |
| 5. | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | Failure of B2 |
| 6. | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | Failure of R1 or R2 or both |

The pattern matrix is formed from these six patterns, each pattern containing seven inputs, so the pattern matrix is a 7 x 6 matrix. There are three targets (corresponding to S1, S2 and S3), hence the target matrix is of dimension 3 x 6 and the corresponding weight matrix is of dimension 3 x 7. A simple program is developed to determine the weight matrix [15]. The weight matrix, found using the rule given in equation (4), is given in equation (8); its columns correspond to the inputs (R1, R2, R3, R4, B1, B2, B3):

W = [  0   -1  -1/3   2/3   1   1   1/3
      -1    0  -1/3   2/3   1   1   1/3
       0    0   2/3  -1/3   0   0   1/3 ]                              (8)

The functions for the targets can thus be written as

T1 = (-R2 + B1 + B2) + (1/3) * (-R3 + 2*R4 + B3)                       (9)
T2 = (-R1 + B1 + B2) + (1/3) * (-R3 + 2*R4 + B3)                       (10)
T3 = (1/3) * (2*R3 - R4 + B3)                                          (11)

Using these functions we define the objective function, which is then used to determine the fault section by PSO. In the case of a large system with many relays, circuit breakers and sections, all possible combinations need not be taken; a subset of combinations is sufficient to form the target functions [8]. A program is developed in MATLAB to identify the fault section using PSO. S1, S2 and S3 (representing sections 1, 2 and 3 respectively) are the three design variables. An initial swarm consisting of 20 particles is selected randomly, and the input pattern (the data received from the operation of relays and breakers in the form of 0 and 1) is given to the program. The objective function is defined as the sum of squared errors,

f = Σ_{i=1..n} (Ti - Si)^2                                             (12)

where n is the number of sections (three in this case). The values of Ti are obtained using equations (9) to (11), and the values of Si are taken from the particles of the swarm.

VI. RESULTS AND DISCUSSION

The values of Si that give the minimum value of the objective function (the least-squares error) are found by performing the iterations, and the best values of Si are taken after the iterations finish. If Si >= 0.5, the ith section is considered a fault section. The results obtained using the program are shown in Table II for five cases, together with a brief description of each case.

TABLE II.

The system can be easily established by collecting information which associates various sections and their protective relays and breakers. REFERENCES
[1] [2] Y. G. Paithankar, and S. R. Bhide, Fundamentals of power system protection, Prentice-Hall of India Pvt. Ltd., New Delhi, 2007. F. Wen, and Z. Han, Fault section estimation in power system using a genetic algorithm, Electric Power System Research, 34, 1995, pp. 165172. L. L. Lai, A. G. Sichinie, and B. J. Gwyn, Comparison between evolutionary programming and a genetic algorithm for fault-section estimation, IEE Proc.- Generation, Transmission and Distribution, Vol. 145, No. 5, September 1998, pp. 616-620. Loi Lei Lai, Intelligent system applications in power engineering evolutionary programming and neural networks, John Wiley & sons, Inc., Publication, 1998. C. Fukui, and J. Kawakami, An expert system for fault section estimation using information from protective relays and circuit breakers, IEEE Transactions on Power Delivery, Vol. PWRD-1, No. 4, October 1986, pp. 83-90. H. Yang, W. Chang, and C. Huang, A new neural networks approach to on-line fault section estimation using information of protective relays and circuit breakers, IEEE Transactions on Power Delivery, Vol. 9, No. 1, January 1994, pp. 220-230. Z.E. Aygen, S. Seker, M. Bagriyanik, F.G. Bagriyanik, and E. Ayaz, Fault section estimation in electrical power system using artificial neural network approach, IEEE Transmission and Distribution Conference, 11-16 April 1999, pp. 466-469. P. P. Bedekar, S. R. Bhide, and V. S. Kale, Fault Section Estimation in Power System using Neuro-Genetic Approach, proceedings of the Third International Conference on Power Systems (ICPS-2009), Indian Institute of Technology, Kharagpur, INDIA, 27-29 December 2009, pp. TS3-57/ 1-6. M. T. Hagan, H. B. Demuth, and Mark Beale, Neural network design, Cengage Learning India Pvt. Ltd., New Delhi, 2008. S. Kumar, Neural networks a classroom approach, Tata McGraw Hill Publishing company ltd., New Delhi, 2008. R. L. Haupt, and S. E. Haupt, Practical genetic algorithms, second edition, John Wiley and sons, Inc., Publication Hoboken, New Jersey, 2004. J. Kennedy, and R. Eberhart, Particle Swarm Optimization, Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942-1948. M.P. Song, and G.C. Gu, Research on Particle Swarm Optimization: A Review, Proceedings of the Third International Conference on Machine Learning and Cybernatics, Shanghai, 2004, pp. 2236-2240. Y. Shi, and R. Eberhart, A Modified Particle Swarm Optimizer, Proceedings of the IEEE International Conference on Evolutionary Computation, PPiscataway, 1998, pp. 69-73. P. P. Bedekar, S. R. Bhide, and V. S. Kale, Fault Section Estimation using Neural Network and Genetic Algorithm, Proceedings of the International Conference on Emerging & Futuristic System and Technology (ICE-FST 09), Laxmidevi Institute of Engineering and Technology, Alwar (Rajasthan), INDIA, 9-11 April 2009, pp. 484-490.

[3]

Case

Actuated Relays

R1

Actuated Circuit breakers B1

Estimated Fault Section S1

Remark
[4]

II

R1, R4

B3

S1

III

R4

B3

S1 or S2 or both.

IV

R1, R3

B1, B3

S1 and S3

R2, R4

B2, B3

S1 and S2

Single fault & No failure Single fault & failure of breaker B1 Single or multiple faults and failure of relay R1 or R2 or both. Multiple faults and no failure Multiple faults and failure of relay R1

[5]

[6]

[7]

[8]

[9] [10] [11]

VII. CONCLUSION Combined ANN-PSO technique for estimating fault section in power system is discussed in this paper. The supervised Hebbian learning law is used to form the objective function. The PSO technique is then applied to determine the most possible fault section using information from relays and circuit breakers. The proposed system has been tested for various sample power systems. One simple system is presented in this paper. Results obtained show that the system works nicely in all the cases. In case of failure of multiple relays and circuit breakers, it is possible that, some sections may be wrongly estimated as fault section, but even in this case the actual fault sections are not missed and are correctly estimated as fault section.

[12]

[13]

[14]

[15]

115

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Fuzzy decision making for real and reactive power scheduling of thermal power generating units
Lakhwinder Singh
Department of Electrical Engineering Baba Banda Singh Bahadur Engineering College, Fatehgarh Sahib -140 407, Punjab, India email: lakhwinder1968@yahoo.com Abstract-This paper deals with the dual objective optimization problem relating to real and reactive power scheduling of thermal power generating units to meet the system constraints. Keeping in view the requirements of the 1990 amendments to the clean air act, it is necessary to include environmental emissions in the economic dispatch problem of thermal power plants. In this paper, only NOx emission is considered along with fuel cost, because it is a significant issue at the global level and both the objectives are minimized simultaneously. Fuzzy decision making theory is exploited to decide the generation schedule whereas weighting method is employed to generate noninferior solutions. Regression analysis is performed between membership functions of the objectives and simulated weights, to decide the optimal operating point. The validity of the proposed method is demonstrated on small sample system.
Keywords-Economic-emission load dispatch; fuzzy decision making; weighting method; regression analysis

Jaspreet Singh Dhillon


Department of Electrical & Instrumentation Engg. Sant Longowal Institute of Engg & Technology, Longowal -148 106, Distt. Sangrur, Punjab, India email: jsdhillonp@yahoo.com emission load dispatch through an interactive fuzzy satisfying method. Ramanathan [6] has presented a methodology to include emission constraints in classical economic dispatch, which contains an efficient weights estimation technique. Basu [7] has used the Hopfield neural networks to solve fuel constrained economic-emission load dispatch problem. Chen et al. [8] have presented a direct Newton-Raphson economic-emission dispatch method which considers the line flow constraints. An approach based on the strength Pareto evolutionary algorithm to solve environmental economic power dispatch optimization problem has been presented by Abido [9] in which fuzzy based mechanism is employed to extract the best compromise solution over the trade-off curve. The intent of the paper is to solve dual objective economic-emission load dispatch problem having objectives i.e. the economic index and impact on environment due to NOx emission interactively. The dual objective optimization problem is converted into single objective optimization applying weighting method. Non-inferior solutions are obtained by simulating the weights. Fuzzy decision making theory is exploited to decide the generation schedule. Regression analysis is performed between membership functions of the objectives and simulated weights to decide the optimal operating point. The validity of the proposed method is demonstrated on 5-bus, 7-lines system, comprising three generators. II.

I.

INTRODUCTION

In a large number of real-life decision problems, a decision maker (DM) is faced with difficulties to take a decision, when conflicting objectives are present in the decision and these are to be simultaneously achieved. The levels of attainment of these goals are to be expressed in the form of quantitative performance criteria. Some of which can be selected as the optimal objectives. The situation is formulated as a dual objective optimization problem in which the goal is to maximize or minimize the objective functions simultaneously. Many approaches and methods have been proposed to solve such optimization problems [14]. So far, the main purpose of the optimal power scheduling problem has mainly confined to minimize the generation cost of a power system. However, to meet the environmental regulations enforced in recent years, emission control has become one of the important operational objectives. The pollution of the earths atmosphere caused by three principal gaseous pollutants, oxides of carbon (COx), oxides of sulphur (SOx) and oxides of nitrogen (NOx) from thermal generators, is of greater concern to power utility and communities. In this paper, authors are only concerned with measurable volume of environmentally harmful emission of NOx from thermal generators. Hota et al. [5] have solved the economic-

PROBLEM FORMULATION

The problem formulation treats dual objective mathematical programming problem in which the attempt is to minimize conflicting objectives, cost and NOx emission function simultaneously, while satisfying equality and inequality constraints. Generally the problem is formulated as: Minimize operating cost
2 F1 ( PGi ) = ai PGi + bi PGi + ci $ / h i =1 Ng

(1)

Minimize NOx emission


2 F2 ( PGi ) = di PGi + ei PGi + fi kg / h i =1 Ng

(2)

116

Subject to constraints Power balance constraints: The total real and reactive power generated must meet the total demand and losses in the system.

respectively. Rij is the real component of impedance bus matrix. X ij is the reactive component of impedance bus matrix [2]. III. WEIGHTING METHOD To generate the non-inferior solutions, dual objective optimization problem is converted in scalar optimization problem and is given as: Minimize

P
i =1
Ng i =1

Ng

Gi

= PDi + PL
i =1

Nb

(3) (4)

Gi

= QDi + QL
i =1

Nb

Generator capacity constraints: Each generating unit is restricted by its lower and upper limits of real and reactive power outputs to ensure stable operation.
min max PGi PGi PGi

w F
k =1 k

(9)

Subject to: i)

w
k =1

= 1.0 ,

w k 0. 0

(10)

; i = 1, 2,, Ng ; i =1, 2,, Ng

(5) (6)

ii) Eq. (3) to (6) where wk are the levels of the weighting coefficients. L is the number of objectives. This approach yields meaningful results to the decision maker when solved many times for different values of wk; k =1, 2. The values of the weighting coefficients vary from 0 to 1. To find the solution, constrained optimization problem is converted into an unconstrained optimization problem. Equality and inequality constraints are clubbed with objective function using Lagrange multipliers and penalty method, respectively. The generalized augmented function is formed as: L(PGi, QGi,

min Gi

QGi Q

max Gi

where ai, bi and ci are the cost coefficients. di, ei and fi are the NOx emission coefficients. Ng is number of generators. th PGi is the real power output of i generator and act as a control variable. Nb is the number of buses in the system. PGi and QGi are the real and reactive powers, respectively. PDi and QDi are real and reactive demands, respectively. PL and QL are real and reactive power losses in the
max min transmission lines, respectively. PGi and PGi are the minimum and maximum values of real power output of max min and QGi are the minimum and i th unit, respectively. QGi

p , q )

w j F j - p ( PGi PDi PL ) j =1
i =1 i =1

Ng

Nb

maximum values of reactive power output of i th unit, respectively. Power transmission losses: The real and reactive power transmission losses, PL and QL are given by following equations:
PL =

Ng Nb 2 2 Ng min max q ( QGi QDi QL ) + 1 P P Gi ) + ( P Gi P Gi ) k ( Gi r i =1 i =1

i=1

[A
Nb Nb i =1 j =1

ij

( Pi P j + Q i Q j ) + B ij ( Q i P j Pi Q j )

(7)

Ng 2 2 min max +1 (11) Q QGi + ( QGi QGi ) ) k ( Gi r i =1 where p , q are Lagrange multipliers, r k is penalty factor

QL =

[C
Nb Nb i =1 j =1

ij

( Pi P j + Q i Q j ) + D ij ( Q i P j Pi Q j )

(8)

and should have small value. The Newton-Raphson algorithm [8] is applied to obtain the non-inferior solutions for simulated weight combinations, to achieve the necessary conditions. IV. FUZZY MEMBERSHIP FUNCTION Considering the imprecise nature of the DMs judgment, it is natural to assume that the DM may have fuzzy or imprecise goals for each objective function [9]. The fuzzy sets are defined by equations called membership functions, which represent the degree of membership in values from 0 to 1. The membership value 0 indicates incompatibility with the sets, while 1 means full compatibility. By taking account of the minimum and maximum values of each objective function together with the rate of increase of membership satisfaction, the DM must determine the membership function (Fi) in a subjective manner.

where Pi + jQi = ( PGi PDi ) + j (QGi QDi )


Aij = Rij Vi V j
X ij Vi V j

cos( i j ) ;

Bij =

Rij Vi V j
X ij Vi V j

sin( i j )

C ij =

cos( i j ) ;

Dij =

sin( i j )

with Aij , Bij , Cij and Dij are loss coefficients, and are evaluated from line data by performing load flow analysis. i and j are load angles at i th and j th buses, respectively.
th Vi and V j are voltage magnitude at i and j th buses,

117

min 1 ; Fi Fi max (12) Fi Fi min max (Fi ) = max ; Fi Fi Fi min Fi Fi max ; Fi Fi 0 where Fi min and Fi max are minimum and maximum values of

9.

f = Min [Max { ( Fi ) }; i =1, 2, , L]


VI. TEST SYSTEM AND RESULTS

i th objective function in which the solution is expected. The value of the membership function indicates that how much a noninferior solution has satisfied the Fi objective.
V. ALGORITHM Stepwise procedure to compute optimal weights is given below: 1. 2. Input the system data. Compute loss coefficients, real and reactive power losses by performing load flow using decoupled method. Find the minimum and maximum values of each objective Fi min and Fi max ; j = 1, 2. Simulate weight combinations by varying in a step size of 0.1, such that their sum remains one. Generate non-inferior solutions by solving Eq. (11), using Newton-Raphson method. Compute membership functions of the obtained noninferior solutions using Eq. (12). Perform linear and quadratic regression analysis between minimum values of membership functions of the objectives and simulated weights. Normalize the calculated weights. The best set of decision vector is found by solving the following problem
2 Minimize w* F j j j =1

The validity of the proposed method is illustrated on 5-bus, 7-lines system, comprising three generators [10]. Line data and Load data is depicted in Table 1 and Table 2. Minimum and maximum values of the functions are shown in Table 3. The dual objective optimization problem is solved for two cases: Case-I: Linear regression analysis Case-II: Quadratic regression analysis
TABLE 1: LINE DATA Line 1 2 3 4 5 6 7 From 1 1 2 2 2 3 4 To 2 3 3 4 5 4 5 R (p.u.) 0.02 0.08 0.06 0.06 0.04 0.01 0.08 X (p.u.) 0.06 0.24 0.18 0.18 0.12 0.03 0.24 B (p.u.) 0.030 0.025 0.020 0.020 0.015 0.010 0.025

3. 4. 5. 6. 7.

TABLE 2: LOAD DATA Bus 1 2 3 4 5 P (MW) 0.00 0.20 0.45 0.40 0.60 Q (MVAR) 0.00 0.10 0.15 0.05 0.10

8.

TABLE 3: MINIMUM AND MAXIMUM VALUES OF OBJECTIVES

F1min = 2510.1890 $/h F2min = 288.9621 kg/h


TABLE 4: NON-INFERIOR SOLUTIONS

F1max = 2630.7660 $/h F2max = 472.0815 kg/h

Subject to: Eq. (3) to (6)


Weights Objectives Membership Functions

w1
1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0

w2
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0

F1 $/h
2510.1890 2512.9840 2519.5700 2528.4620 2540.0210 2552.8300 2566.7110 2581.4590 2597.9300 2614.1650 2630.7660

F2 kg/h
472.0815 420.5560 382.9547 354.7825 333.6010 317.7413 306.1468 298.0304 292.7366 289.8492 288.9621

( F1 )
1.0 0.9768243 0.9222018 0.8484571 0.7525877 0.6463594 0.5312364 0.4089232 0.2723252 0.1376812 0.0

( F2 )
0.0 0.2813764 0.4867141 0.6405602 0.7562307 0.8428393 0.9061556 0.9504788 0.9793876 0.9951557 1.0

118

In both the cases weights are simulated with the step size of 0.1, so that their sum remains equal to one. Noninferior solutions obtained from simulated weights are shown in Table 4. Linear and quadratic regression analysis has been performed to solve the problem. Achieved results of linear and quadratic regression analysis are shown in Table 5. It has been observed from

the results that the overall membership function is more in Case-II as compared to Case-I. Multiple correlation coefficient is also more in quadratic regression analysis as compared to linear regression analysis. The best power generation schedule and voltage profile obtained at each bus by the proposed method is given in Table 6.

TABLE 5: COMPARISON OF RESULTS Weights Objectives Membership Function 0.6807518 0.8183873 0.7369482 0.7712197 Standard Error Multiple Correlation Coefficient 0.986935 0.919912 0.993510 0.999641

Case-I

Cost ($\h) NOx emission (kg/h)

0.531403 0.468597 0.584679 0.415321

2548.6830 322.2189 2541.9070 330.8562

0.003532 0.012301 0.011931 0.026760

Case-II

Cost ($\h) NOx emission (kg/h)

TABLE 6:BEST POWER GENERATION SCHEDULE AND VOLTAGE PROFILE OF PROPOSED METHOD Generation Bus 1 2 3 4 5 PG 0.6038 0.3028 0.7602 0.00 0.00 QG -0.0641 0.0887 0.1137 0.00 0.00 PD 0.00 0.20 0.45 0.40 0.60 Load QD 0.00 0.20 0.15 0.05 0.10 P 0.6040 0.1021 0.3109 -0.4001 -0.5999 [3] Injected Power Q -0.0703 -0.0111 -0.0364 -0.0499 -0.1000 V 1.06000 1.05184 1.05077 1.04591 1.02883 Voltage Profile

0.00 -0.2551 -0.2864 -0.4105 -0.7370

VII. CONCULSIONS In the dual objective optimization problem, it has been realized that cost and NOx emission are conflicting in nature. The solution set of the problem is non-inferior due to conflicting nature of the objectives and has been obtained through weighting method. Fuzzy decision making methodology is exploited to decide the best generation schedule. Linear and quadratic regression analysis is performed between membership functions of the objectives and simulated weights to decide the optimal operating point. Cost and NOx emission are calculated at the optimal values of the weights. The validity of the proposed method is demonstrated on 5bus, 7-lines system, comprising three generators. REFERENCES
[1] A. A. EL-Keib, H. Ma and J. L. Hart, Environmentally constrained economic dispatch using the LaGrangian relaxation method, IEEE Trans. on Power Systems, vol. 9, no. 4, pp. 1723-1729, 1994. L. Singh and J.S.Dhillon, Secure multiobjective real and reactive power allocation of thermal power units, International Journal of Electrical Power and Energy Systems, vol. 30, no. 10, pp. 594-602, 2008.

[2]

T. Yalcinoz and O. Koksoy, A multiobjective optimization method to environmental economic dispatch, International Journal of Electrical Power and Energy Systems, vol. 29, no. 1, pp. 42-50, 2007. [4] J.S.Dhillon and D.P.Kothari, The surrogate worth trade-off approach for multiobjective thermal power dispatch problem, Electric Power Systems Research, vol. 56, pp. 103-110, 2000. [5] P.K. Hota, R. Chakrabarti and P.K. Chattopadhyay, Economic emission load dispatch through an interactive fuzzy satisfying method, Electric Power Systems Research, vol. 54, pp. 151-157, 2000. [6] R. Ramanathan, Emission constrained economic dispatch, IEEE Trans. on Power Systems, vol. 9, no. 4, pp. 1994-2000, 1994. [7] M. Basu, Fuel constrained economic emission load dispatch using Hopfield neural networks, Electric Power Systems Research, vol. 63, pp. 51-57, 2002. [8] S.D. Chen and J.F. Chen, A direct NewtonRaphson economic emission dispatch, International Journal of Electrical Power and Energy Systems, vol. 25, pp. 411-417, 2003. [9] M.A. Abido, Environmental/ economic power dispatch using multiobjective evolutionary algorithms IEEE Trans. on Power Systems, vol. 18, no. 4, pp. 1529-1537, 2003. [10] L. Singh and J.S.Dhillon, Fuzzy satisfying interactive multiobjective thermal power dispatch: SWT approach, Journal of Systems Science and Systems Engineering, vol. 16, no. 1, pp. 88-106, 2007.

119

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Modied Robust Pole Assignment method for Designing Power System Stabilizers
Ajit Kumar , Gurunath Gurrala , Indraneel Sen
Department of Electrical Engineering Indian Institute of Science Bangalore-560012 , India Email: ajit@ee.iisc.ernet.in ; gurunath@ee.iisc.ernet.in ; sen@ee.iisc.ernet.in

AbstractThe design of robust power system stabilizers for a Single Machine Innite Bus power system model is described in this paper. A robust control approach based on the Modied Nevanlinna-Pick theory is implemented to provide desirable damping to the lightly damped or unstable mechanical modes.The proposed approach takes into account various uncertainties involved in the power system operation and modeling, and thus provides adequate damping for the system dynamics over a wide range operating conditions. The proposed approach determines the stabilizer transfer function and a washout at the same time. The simulation results presented in the paper conrms that the performance of the proposed PSS is better than that could be obtained by a tuned conventional stabilizer over a wide range of operating condition.

Controller vRef C(s)

Plant P(s)

Fig. 1.

A unity feedback system.

keywords: Power system stabilizer, robust control. I. I NTRODUCTION In the day to day operation of power systems one of the major challenges faced is the small-signal oscillatory instability caused by insufcient natural damping in the system. The most cost effective way of countering this instability is to use auxiliary controllers called power system stabilizers (PSS) to produce additional damping in the system [1], [2]. Power systems are nonlinear dynamic systems and this means that as the operating condition changes, so does the system dynamics. Hence, damping controller should ensure that the oscillations are well damped under all operating conditions that obey the other constraints on system performance [4]. Electric power systems constantly experience changes in generation,transmission and load conditions.These factors are some of the sources of uncertainties in power system models and must be taken into consideration in the design of PSSs. Different stabilizer design approaches have been proposed to overcome the problem of uncertainties caused by changes in operating point [1][3]. In [4] power system stabilizer were installed to add damping to local oscillatory modes, which were destabilized by high gain, fast acting exciter. Most of the work in this area of PSS designing has been based on heuristic/expericence based methods for the selection of parameters like PSS gain and wash out time constants. A powerful controller design based on Nevanlinna-Pick theory to shift the unstable mechanical poles to the desire location is reported in [10]. Here Nevanlinna-Pick theory was proposed only for unstable mechanical poles . In power

systems application we generally come across lightly damped modes. The method proposed for the PSS design in the present paper is applicable for both lightly and unstable modes using Modied Nevanlinna-Pick theory as reported in [6] for the desire pole placement. II. ROBUST C ONTROL T HEORY The robust control is dened as the control of uncertain plants that is, systems with uncertain dynamics or unknown disturbance signals, using xed controllers [7]. The rst part of this section deals with robust stability. Here the formulation of uncertain plant, as well as the condition for robust stabilizability are discussed based on the H norm [5]. A. Robust Stability Suppose Pn (s) is the nominal plant transfer function which is xed. A transfer function P(s) is said to be in the class P(Pn (s), R(s)) if P(s) has the same number of the unstable/lightly damped poles as that of Pn (s) and the following relations hold |P(j ) Pn (j )| |R(j )| |R(j )| > 0 (1) (2)

Where R(s) is a stable proper rational function, with its relative degree either zero or one, characterizing the uncertainty plant. Any R(s) with the same values on the imaginary axis, s = j denes the same class of P(s) since our concern is the uncertainty of the frequency response. The denition of a class of transfer function is used to present the concept of the robust stability. Consider the feedback system of Fig. 1 where P(s) is the perturbed plant transfer function and C(s) is the controller transfer function . C(s) is a robust stabilizer for the class P(s) if the closed loop system of Fig. 1 is stable for every P(s) P(Pn (s), R(s)). Referring

120

to Fig. 1 the controller C(s) can robustly stabilize P(s) if and only if the closed loop system is stable for P(s) = Pn (s) and |R(s)Q(s)| < 1 where Q(s) =
C(s) 1+Pn (s)C(s)

Eq
X d

Vt t
Rt + jX t

RL + jXL RL + jXL

Eb0

(3)

(4)

Pg + jQg
Generator
Fig. 2.

Infinite Bus

The condition (3) is closely related to the standard Nyquist criterion [11], [12]. B. The Modied Nevanlinna-Pick Theory The procedure for Modied NPP algorithm [6] is as follows: 1) A Solution to interpolation problem with BR functions exists if and only if the Youla indices [6] associated with i are positive [13] when ai is complex; or |i | 1 when ai is real. Where i is computed by (5). uj (ai ) = i j=n,n-1,...,1 (5)

Re + jXe
A single machine innite-bus system
1+i 2

I1 (ai ) = I3 (ai ) =

Ri /i Xi /i Ri /i +Xi /i , 2 Ri /i Xi /i , 1

i I2 (ai ) = Ri /1 i +Xi /i 1 I4 = I1

2|

(s) = Q(s)D(s) 5. Using Q [10] in the relation u(s) = Q(s)rm (ai ) [5]. We can get Q(s) as: Q(s) =
D(s) rm (ai ) u(s),

(10)

2) Select an arbitrary BR function uj +1 (s) and use the interpolation formula uj (s) =
uj +1 (s)j (s)+j (s) uj +1 (s)j (s)+1

6. The robust controller transfer function C (s) is obtained from (4) and (10). C (s) = Q(s)(1 Pn (s)Q(s))1 (11)

j=n,n-1,...,1

(6)

to compute uj (s) as reported in [6].The function uj +1 (s) is an arbitrary function. Therefore the solution is not unique . When it is desirable for u(s) to have specic property uj +1 (s) can be chosen in such a way that the desired condition on u(s) is satised. Then a unique solution will be obtained. j (s), j (s) and j (s) are some functions which depend on ai and i . The NPP is solvable if the so called Youla indices are positive [6]. This condition limits the choice of i i.e plant perturbation rm (ai ) as shown in (9). C. Applying the Modied NPP to Robust Stability Step by Step procedure as reported in [10] is as follows: 1. Dene a function D(s) with a magnitude less than or equal unity for all values of s = j as D(s) =
(sa1 )...(san ) (sa11 )...(sa1n )

(7)

where a1 , .., an are the unstable/lightly damped poles of the nominal system and a11 , .., a1n are the corresponding desired poles of the system with robust controller. 2. Dene a stable proper transfer function as n (s) = Pn (s)D(s), P
rm (ai ) n (ai ) , P

III. M ODELING OF P OWER S YSTEM For small-signal Stability analysis, dynamic modeling is required for the major components of the power system. It includes the synchronous generator, excitation system, automatic voltage regulator (AVR) etc. A Single machine Innite Bus (SMIB) power system model as shown in Fig 2 is used to obtain the linearized dynamic model (Heffron Phillips or K-constant model). The generator is connected to a double circuit line through a transformer. The line is connected to the rest of the power system which may be an innite bus or another machine. The small signal stability of the system can be improved by applying the output of PSS transfer function to the system having input as speed signal . The schematic block diagram of the system with the PSS can be represented by a unity feedback system as shown in Fig. 1 The system transfer function, P (s) = /ref in terms of K-parameters, can be found by using fourth-order linearized model [9] as shown in Fig 3 is given below
K2K3 s Pn (s) = K AM (12) (s) 4 W here M(s) = 2HTdo K3 TA s + (2HTdo K3 + DTdo K3 TA + 2HTA )s3 + (b K1 K3 TA Tdo + 2HK3 K2 KA + DTdo K3 + DTA +2H )s2 +(b K2 K3 K4 TA + b K1 K3 Tdo + DK3 K6 KA + b K1 TA + D)s b K2 K3 K5 KA + b K1 K3 K6 KA b K2 K3 K4 + b K1

(8)

3. For a given perturbation rm (ai ). We can get i as follows. i = i = 1, ..., n, (9)

4. For any unstable/lightly damped pole ai we can get i satisfying positiveness of Youla indices [6]. Then from (6) we can get u(s) in terms of arbitrary function uj +1 (s). Youla indexes [14] associated with i i.e., let ai = i + ji and 1|i |2 2 i Ri = | 1i |2 , Xi = |1i |2 then the Youla indexes are given as

For a particular SMIB system [15] with parameters as given in the Appendix, The linearized fourth order model of the nominal plant transfer function is found using (12) as: Pn (s) =
43.727s s4 +34.274s3 +409.05s2 +1537.1s+14927.7

121

Torque Balance
Te1

Mechanical Part

TABLE I E IGEN VALUES Open-loop Poles -16.976 8.249i Closed-loop Poles -16.953 8.294i -17 8.205i -4.999 6.471i -7.114 -2.404 CPSS closed loop Poles -9.7852 12.91i -1.8862 5.7023i -30.924 -0.33996

K1
1 2 Hs
S m

Tm

Te 2

wB s

-0.1606 6.471i
PSS
x 10
-4

K4

K5

K2
Eq

K3 K3 1 + sTdo

E fd

KA 1 + sTA

Vref

6 4 2

Proposed PSS CPSS

K6
Electrical Part Field and Exciter

Fig. 3.

Linearized model of a single machine innite-bus system

Sm
0 -2 1 2 Time

(sec) 3

The lightly damped mechanical modes can be obtained from the roots of the above characteristic equation are a1 = 0.1606 + 6.471i, a2 = 0.1606 6.471i. IV. PSS D ESIGN The theory developed in the preceding section is applied in this section for the design of power system stabilizers for nominal SMIB power system. In order to ensure better transient response of the closed-loop system, the weak damping poles of the open-loop system at 0.1606 6.471i is reassigned to the locations a11 = 5 + 6.471i, a12 = 5 6.471i such that it satisfy the criteria as given in step 1. For the desire poles assignment and examining the positiveness of Youla indices (I1 = 4.686; I2 = 2.772; I3 = 43.859; I4 = 0.213) the minimum allowable perturbation is rm (ai ) = 0.0113 (step 3). Now the solution of the NPP can be found in terms of the arbitrary function uj +1 (s). In this case the function is u2 (s) because generator has only one pair of lightly damped poles under nominal operating condition. Selecting u2 (s) = 0 the PSS transfer function acts as a controller as well as a washout [10]. Replacing u2 (s) = 0 in (6) the solution of NPP is u(s) =
4.47s2 41.09s 4.90s2 +46.63s+83.81

Fig. 4.

Nominal Loading Sm for increase in Tm by 0.05pu

V. S IMULATION R ESULTS AND O BSERVATIONS The dynamic performance of the system with the proposed PSS is compared with the performance of conventional PSS using simulation results. Proposed PSS is designed for the nominal operating point (S = P + jQ = 0.4 + j 0.084p.u.,Xe = 0.37p.u) for the given system [15]. The performance of the proposed PSS is shown at varying operating and system conditions has been studied to conrm the robustness and small single transient stability of the stabilizer. Only few representative examples have been included in this paper. Fig.4 shows the system response at the nominal operating point following a 0.05pu step increase in mechanical input (Tm ) of the generator. Fig.5 and Fig.6 depicts the system response in terms of rotor angle and slip speed following a 3 fault of 3 cycle duration at one of the transmission line. Fault is cleared by tripping
-3

x 10 3

Following the procedure describe in this paper PSS transfer function is:
s (s+9.187)(s +33.952s+356.239) C (s) = 80.786 (s+1 .618)(s+8.875)(s2 +42.656s+677.162)
2

CPSS Proposed PSS

The term s in the controller function cancel out the input signal to the PSS (speed signal) in the cases of no system oscillations. However it does not adversely affect the PSS behavior during the oscillation since it is an essential part of the PSS as well. Closed-loop poles for both Proposed PSS and Conventional PSS (CPSS) along with open-loop poles of the linearized system is given in table I. After the addition of the Proposed PSS closed-loop poles are exactly at the desired pole locations.

1 Sm 0 -1 0 1 2 Time 3 (sec) 4 5 6

Fig. 5.

System response for a 3 fault, Nominal system

122

35

CPSS Proposed PSS


(deg) 30

VI. C ONCLUSION A Robust Controller for Single machine Innite Bus (SMIB) power system model has been designed based on a pole assignment technique using the Modied Nevanlinna-Pick theory. A systematic procedure of designing the robust power system stabilizer for the class of perturbed system is presented. The Proposed PSS is physically realizable with real coefcients and includes a wash-out. In the proposed controller design unlike in the conventional PSS design there is no need for the computation of appropriate gain of the controller. The performance of the proposed robust stabilizer is consistently better than that of a conventional PSS under all operating conditions for the different types of disturbances. A PPENDIX Machine Data: Xd = 1.863; Xq = Xd = 0.657; Tdo = 6.9; H = 4; D = 5; fB = 50Hz ; Eb = 1.0 pu.; Vt = 1.02 pu; Xt = 0.127; XL = 0.612; Xe = 0.369; Rt = RL = 0.0; Model 1.0 is considered for the synchronous machine. Parameters of AVR/CPSS: KA = 200; TA = 0.05s; T1 = 0.20; T2 = 0.05; Kpss = 20; Tw = 3; Ef d output limits 6p.u.; P SS output limits 0.10p.u. R EFERENCES
[1] F. P. de Mello, C. Concordia, Concept of Synchronous Machine Stability as Affected by Excitation Control, IEEE Trans. Vol. PAS-88, April 1969, pp. 316-329. [2] E. V. Larsen, D. A. Swann, Applyimg Power System Stabilizer-I-III, IEEE Trans. Vol. PAS-100, June 1981, pp. 3017-3046. [3] P. Kundur, M. Klein, G. J. Rogers, M. S. Zywno, Application of Powr System Stailizer for Enhancement of Overall System Stability, IEEE Trans. Vol. PWRS-4, May 1989, pp. 614-626. [4] G.Rogers, Power System Oscillations. Kluwer Academic Publishers, 2000. [5] H. Kimura, Robust Stabilization for a Class of Transfer Functions, IEEE Trans. Vol. AC-29, Sept. 1984, pp. 788-793. [6] P. Dorato, Yunzhi Li, A Modication of the Classical Nevanlinna-Pick Interpolation Algorithm with Application to Robust Stabilization,IEEE Trans. Vol. AC-31, July 1986, pp. 645-648. [7] P. Dorato and R.K. Yedavalli, 1990, Recent Advances in Robust Control, IEEE Press. [8] K. R. Padiyar, POWER SYSTEM DYNAMICS Stability and Control. John Wiley; Interline Publishing, 1996. [9] W. G. Heffron and R.A. Phillips, Effect of a modern amplidyne voltage regulator on underexcited operation of large turbine generators, Power Apparatus and Systems, Part III. Transactions of the American Institute of Electrical Engineers, vol. 71, Issue 1, Part III, pp. 692697, 1952. [10] Sadegh Vaez-Zadeh, Robust Power System Stabilizers for Enhancement of Dynamic Stability Over a Wide Operating Range. Eletric Power Components and Systems, 29:645-657,2001 [11] J. C. Doyle, B. Francis, A. Tannenbaum, Feedback Control Theory, Book, Macmillan Publishing company, 1992. [12] J. C. Doyle, G. Stein, Multivariable Feedback Design: Concepts for a Classical/Modern Synthesis, IEEE Trans., Vol. AC-26, No. 1, Feb. 1981, pp. 4-16. [13] D. C. Youla and M Saito, Interpolation with positive-real functions, J. Franklin Inst., vol. 284 no. 2, pp. 77-108, 1967. [14] A. C. Youla and N. J. Young, Numerical algorithm for the NevannlinaPick problem, Numerische Mathematik, vol. 42, pp. 125-145, 1983. [15] C.Zhu, R.Zhou, and Y. Wang, A new nonlinear voltage controller for power systems, Int. J. Electr. Power and Energy Syst., vol. 19, pp. 1927, 1997.

Rotor angle

25

20 0

3 Time (sec)

Fig. 6.
x 10
-4

System response for a 3 fault, Nominal system

8 6 4 2 Sm 0

CPSS Proposed PSS

-2 -4 0 1 2 3 Time (sec) 4 5 6

Fig. 7.

Strong system slip speed (Sm ) for increase in Tm by 0.05pu

one of the parallel lines. Fig.7 shows the system response at the Strong operating condition (S = P + jQ = 0.3 + j 0.081p.u.,Xe = 0.3p.u) following a 0.05pu step increase in mechanical input (Tm ) of the generator. Fig.8 shows the system response at the weak operating condition (S = P + jQ = 0.4 + j 0.09p.u.,Xe = 0.8p.u) following a 0.1pu step increase at Vref input of the generator.
-4

6 4 2 Sm 0

x 10

Proposed PSS CPSS

-2 -4 -6 -8 -10 0 1 2 3 Time (sec) 4 5 6

Fig. 8.

Weak system slip speed (Sm ) for increase in Vref by 0.1pu

123

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

A NEW DIGITAL DIFFERENTIAL RELAYING SHCHEME FOR THE PROTECTION OF BUSBAR


N. G. Chothani, B. R. Bhalja and R.P. Maheshwari

Abstract-- The protection of busbar demands high integrity, high standards and high speed relaying scheme. This paper presents a new differential relaying scheme for the protection of busbar. The proposed scheme depends on the difference of the instantaneous value of incoming and outgoing line currents of the respective phase. In this paper, authors have presented a new overcurrent relaying scheme which adopts the characteristic of differential relay. Various power system components such as generator, transmission lines, relay, etc. have been modeled in PSCAD/EMTDC software package. In order to validate the proposed scheme, fault data have been generated by modeling a part of Indian power system network. The proposed scheme successfully discriminates between internal and external faults, even when high resistance fault occurs. Moreover, the proposed scheme remains stable during CT saturation in case of external fault. Index Terms--busbar fault, differential protection, protective relay simulation

compared with line faults which are over 60% [3]-[5]. It has been observed from the widely published literature that most of the bus faults are ground faults having 67% are single line to ground faults, 15% are double line to ground faults and 19% are triple line to ground faults [6]. Protection of substation busbars is almost universally accomplished by differential relaying. For an external fault, the current leaving the busbar is equal to the summation of all of the currents entering the busbar. For an internal fault, the summation of all of the currents entering the busbar is equal to the total fault current. Fig.1 shows single line diagram of a typical configuration of a differential busbar protection system. During pre fault as well as external fault condition, the summation of all the branch currents at the bus is almost zero. On the other hand, during in zone fault the above relation is not true.

I. INTRODUCTION ODERN electric power system is spreaded over large geographical area with a great complexity. It must be protected against flow of heavy short-circuit currents by disconnecting the faulty section of the system by means of circuit breaker and protective relaying. Protection of power system busbars is one of the most critical relaying applications. Busbars are areas in the power systems where the level of fault currents may be very high. Whenever bus fault occurs it results in severe disturbances as clearance of this fault requires tripping of all the breakers of the lines connected to the faulted bus. However, the damage resulting from unclear bus fault, because of the concentration of high fault MVA may be very severe, up to the complete loss of the station by fire. Thus, in order to prevent hazard to the switchgear and entire power system, busbar protection scheme is designed and implemented in such a way that unwanted tripping should not occur [1]-[2]. Hence, a maximum security and high speed operation of the bus protection is essential. Bus faults have been observed to be relatively rare around 6-7% of all faults This work was funded by Gujarat Council on science and Technology, Government of Gujarat, India, under project no. GUJCOST/MRP/201409/2009-10 Bhavesh Bhalja and N. G. Chothani are with Department of Electrical Engineering, ADIT Gujarat 388121, India. (email: brb77dee@iitr.ernet.in) R. P. Maheshwari is with Department of Electrical Engineering, IIT Roorkee 247 667, India.(e-mail: rudrafee@iitr.ernet.in)

Fig. 1 Bus differential relay principle The conventional busbar differential relaying scheme may maloperate in case of CT saturation condition during an external fault [7]. However, it has been observed from the literature that CT doesnt saturate before one cycle immediately after the external fault. Hence, in order to rectify the above problem authors have designed a new overcurrent relaying scheme which adopts the characteristic of differential relay. The proposed scheme remains stable against CT saturation problem as it reduces the sensitivity of relay in case of an external fault. Verification of the proposed scheme has been done on simulated data of an existing part of an Indian power system network using PSCAD / EMTDC software package. The proposed scheme provides sensitivity and fast protection in case of internal fault and maintains stability in case of external fault. The proposed scheme has shown

124

satisfactory performance under various fault conditions, especially for high resistance fault. II. BUSBAR PROTECTION SCHEMES During the last two decade, remarkable success has been achieved in the field of microprocessor as well as digital/numerical based relaying schemes. Power system busbars vary significantly as to the size (number of circuits connected), complexity (number of sections, tie-breakers, auxiliary switches etc.) and voltage level (transmission, distribution). The above technical aspects combined with economic factors yield a number of solutions for busbar protection [8]. A. Non-unit Protection Scheme A simple protection for busbars can be accomplished as a nonunit protection scheme. Overcurrent relays are placed on all the incoming and outgoing feeders. The non-unit protection scheme is governed by back-up over current, earth fault and distance relays. The high fault levels associated with busbars require that the protection must be very fast. Typical fault clearing time should be less than 100 ms and with fast breakers it should be about 15 to 30 ms [9]. In order to minimize the interruption to the plant, the protection system must correctly identify the area of the fault and open only the necessary and minimum number of breakers. B. Differential Protection Scheme Protection of busbars is almost universally accomplished by differential protection scheme. In practice, this type of protection is mainly accomplished by different schemes such as, biased percentage differential protection scheme, low impedance/high impedance voltage scheme. Percent differential relays create a restraining signal in addition to the differential signal and apply a percent (biased) characteristic. The choice of the characteristic includes typically single-slope and double-slope characteristic [10]. The low-impedance approach does not require dedicated CTs. Further, this approach can tolerate substantial CT saturation and provides for high-speed tripping. High impedance voltage scheme is used to overcome the problem of spill current due to CT saturation in case of heavy external fault. But it requires dedicated CTS (a significant cost associated) and cannot be easily applied to re-configurable buses. C. Directional Comparison Scheme For better security, the relay uses a current directional protection principle to dynamically supervise the main current differential function. The main theme of this scheme is that if the power flow in one or more circuits is away from the bus, an external fault exists whereas for the power flow in all of the circuits into the bus, an internal bus fault exists [11]. This scheme is categorized in three parts namely Series Trip Directional Scheme, Directional Blocking Scheme and Directional Comparison Scheme using voltage restraint ohm relay.

D. Protection Based on Artificial Intelligent Technique Digital/Numerical relays provide significant benefits for industrial as well as commercial systems where an economical but effective busbar protection scheme is required. The common practice in busbar protection is to use numerical busbar protection scheme which provides complete protection for all types of extra/ultra high voltage busbar configurations. The commercially available relays use innovative techniques such as CT saturation detection in less than 2 ms and dynamic topology processing algorithms which provides a unique combination of security, speed (operating time is 15 ms) and sensitivity [12], [13]. Many digital busbar relaying schemes have been developed by the designers and researchers using microprocessors, microcontrollers, digital signal processors, Artificial Neural Network (ANN), traveling waves based, Wavelet Transform based and employing various artificial intelligence techniques [14]-[17]. III. SIMULATION AND MODELING A part of 230 kV Indian power system as shown in Fig. 1, has been simulated and tested in order to validate the proposed scheme. The components of power system such as generators, generator transformers (GT) and transmission lines (TL) etc. are designed according to the collected data and specifications. Test data for verifying the proposed scheme have been generated by modeling the complete system of Fig. 1 using the PSCAD/EMTDC software package [18]. Each source was modeled as three separate generators with an equivalent circuit. The transmission line is represented using the Bergeron line model. The bus differential relay, as shown in Fig. 1, is located at the main bus. The difference of the instantaneous value of current in each incoming and outgoing line of the respective phase has been given as an input to the relay. The performance of the proposed scheme has been evaluated for different types of in zone and out zone faults. Relay responses for some special case such as high resistance fault was also investigated. Fig. 2 shows the tripping logic for differential relays connected at the main bus. In this logic, differential principle is accomplished by comparing the instantaneous value of CT secondary current of the respective phase of all branches connected to the main bus (for exp. phase A). As shown in Fig. 2, the summing block generates a differential current signal (Isa) as per the direction and magnitude of fault current. Then, this differential current signal is given to the respective phase relay unit. If it (Isa) exceeds the pick-up setting (0.5 pu), it generates a spike, which will be made constant by HysteresisBuffer. Thereafter, the output of each phase relay is given to OR gate. Depending upon the output (logic 1) of the relay of the respective phase, OR gate generates a final tripping signal (BRK) which will be given to all the circuit breakers connected to the main bus. The same logic is applicable to other two phases.

125

Fig. 2 Tripping logic of differential relays connected at bus IV. SIMULATION RESULTS The system shown in Fig. 1 was subjected to various types of fault. The output of all branch currents, along with the differential current of CT secondaries and the status of circuit breakers are represented graphically in this paper. The performance of the proposed technique was evaluated for different types of internal and external faults. In this paper, due to space limitation, authors have shown simulation results for single phase to ground faults (phase A) only. The effect of high resistance fault was also investigated. A. Internal Fault Fig. 3 shows the relay response for a single line-to-ground fault in phase A at point F (Fig. 1). With reference to Fig. 3, the first, second and third window indicate the value of branch current, the differential current and the status of circuit breakers respectively. When differential current exceeds the predetermine threshold value, the respective relay generates a signal followed by OR gate. Finally, OR gate generates a tripping signal which will be further given to all the circuit breakers connected to the main bus. From fig. 3, it has been observed that the differential relay operates successfully within 11ms. B. External Fault Fig. 4 shows the relay response for a single line-to-ground fault in phase A at point F1. It has been observed that the differential current is lower than the predetermine threshold value. This led to the final decision that the fault was outside the bus-protection zone. In this condition, no trip signal is generated by the relay. Hence, the relay remains stable in case of an external fault. A wide variation of external faults at different locations with various fault resistance have been investigated and results are found to be satisfactory.

Fig. 3 Relay response during internal fault at F on phase A with Rf = 0.01 C. Effect of High Resistance Fault Many algorithms related to bus protection fails to detect fault with a considerable value of fault resistance. Although, high resistance fault reduces the magnitude of the fault current, the proposed scheme has high sensitivity and capability against high resistance faults. To analyse the said situation, a single line-to-ground fault with fault resistance of 200 has been simulated at F within the zone of relay. Although, there is a delay in the action of crossing the threshold boundary of the relay, as shown in Fig. 5, the proposed relay operates within 15ms. The impact of high resistance fault is the delayed operation of the relay. Various values of earth faults were also tested through fault resistances up to 500 for line-to ground, double lineto-ground and triple line-to-ground fault. It has been observed that the trip time was within 25ms after the inception of fault.

126

V. CONCLUSION A new differential relaying scheme for the busbar protection is proposed in this paper. The proposed scheme was tested extensively by using realistic data that was generated by modeling an existing power system using PSCAD/EMTDC software package. The proposed scheme has the ability to detect all types of in-zone faults and remain stable for out of zone faults. An average tripping time for most of the internal faults is within 11ms. Further, the technique is also sensitive in the event of high resistance fault. VI. REFERENCES
[1] [2] [3] [4] [5] [6] [7] M.A. Date, B.A. Oza, N.C. Nair, Power System protection, India: Bharti Prakason, 2002, p. 318. Y. G. Paithankar and S. R. Bhide, Fundamentals of Power System Protection, 2003, p. 101. Walter Elmore, Protective Relaying: Theory and Applications, New York, Marcel Dekker Inc; 1994. A. R. Van C. Warrington, Protective Relays: Their theory and practice vol. 1 & 2, U.K.: Chapman & Hall Ltd., 1962. The Electricity Council, Power System Protection, Vol. 1, Principals and Components, Peter Peregrinus Ltd; London, 1981. GEC Measurment, Protective Relays Application Guide, London: June 1987, p. 225. M. E. Mohammed, High speed differential busbar protection using wavelet packet transform, Proceeding IET Generation, Transmission & Distribution, Vol. 152, No. 6, November 2005, pp. 927-933. Bogdan Kasztenny and Kazik Kuras, A New Algorithm for Digital Low-Impedance Protection of Busbars, GER-4000, GE Power Management, Ontario, Canada, pp. 1-6. Bus Protection for 400 kV and 275 kV Double Busbar Switching Stations, National Grid Technical Specification NGTS 3.6.3, Issue 3, December 1996, pp. 1-19. M. Thompson, R. Folkers and A. Sinclair, Secure application of transformer differential relays for bus protection, 58 th Annual Conference for Protective Relay Eingeers, April 5-7, 2005, pp. 158-168. S.H. Horiwitz and A.G. Phadke, Power System Relaying, New York: John Wiley & Sons, 1996, p. 125. P740 Numerical Busbar Protection, Technical Data Sheet, Publication No. P740/EN TD/G22, AREVA T&D, pp. 1-22. Basic Busbar Protection by Reverse Interlocking, Siemens PTD EA: Application of SIPROTEC Protection Relays 2005, pp. 1-2. Bai-Lin Qin, A. Guzman, Edmud O. A New Method for Protection Zone Selection in Microprocessor-Based Bus Relays, IEEE Transactions on Power Delivery, Vol. 15, No. 3, July 2000, pp. 876-887. C. Li-jun, The research of the sampling method for CT saturation for numerical busbar protection, In Proceedings of 8th Inst. Elect. Eng; International Conference on Developments in Power System Protection, April 2004, Vol. 1, pp. 384-386. K. Feser, U. Braun, F. Engler and A. Maier, Application of neural networks in numerical busbar protection systems (NBPS), In Proceedings of the 1st International Forum on Applications of Neural Networks to Power Systems, 23-26 July 1991, pp. 117-121. M. M. Eissa, A novel wavelet approach to busbar protection during CT saturation and ratio-mismatch, Electric Power System Research, Vol. 72, 2004, pp. 41-48. PSCAD/EMTDC Manual, Getting Started, Manitoba HVDC Research Centre Inc., January 2001

[8]

Fig. 4 Relay response during external fault at F1 on phase A with Rf = 0.01

[9]

[10]

[11] [12] [13] [14]

[15]

[16]

[17]

[18]

Fig. 5 Relay response during internal fault at F on phase A with Rf = 200 conclusion

127

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Overcoming Directional Relaying Problem during Current Transformer Saturation using Data Fusion Technique
Premalata Jena
Department of Electrical Engineering Indian Institute of Technology Kharagpur West Bengal, India premalatajena@yahoo.com
Abstract: Current Transformers (CTs) are used in power system protection schemes to scale down the current signal to an allowable level and for galvanic isolation. At severe faults, due to primary dc transient offsets, the current may saturate the CT-core leading to secondary current not following the primary. Besides current magnitude based relays, this results in maloperation of a directional relay as the phase angle of fault current seen by the relay will be different than expected value. In this paper a solution to CT saturation problem for directional relaying applied to line protection is proposed where the currents of other lines are used to estimate the faulty line current. A simple feature level fusion technique is applied for the purpose. Keywords- Data fusion, Feature Level Fusion, Power system relaying, Digital relays, Current Transformer, Fault detection, Phasor measurement unit.

Ashok Kumar Pradhan


Department of Electrical Engineering, Indian Institute of Technology Kharagpur, West Bengal, India. akpradhan@ee.iitkgp.ernet.in

I. INTRODUCTION
Recent developments in power networks and the introduction of deregulation into power markets have placed more challenging demands on protection schemes. Distributed generation (DG) is increasing day by day and the protection system is becoming more complex. With DG, the over-current relay needs directional relaying for selectivity [1]. Designing a proper directional relaying scheme for such a system is a challenging task, especially in the case of severe CT saturation. In power system protection schemes CTs are used to scale down the system current signals to an allowable level and to provide galvanic isolation. The available CT-core characteristics are such that saturation may occur at many fault conditions, which affects protection decisions [2]-[5]. When a fault occurs on a high-voltage system, the fault current is associated with a dc transient component that decays exponentially to zero within a few cycles of the power frequency. If the CT core saturates, the secondary current is distorted significantly and the estimated phasor becomes erroneous both in magnitude and phase. Thus directional relaying may malfunction because of CT saturation [6]. Directional relaying is widely applied in line protection; it improves the sensitivity and overall reliability of the protection schemes [7]-[9]. CT saturation may also delay directional relay tripping [6]. For an over-current relay with IDMT characteristics, CT saturation leads to unnecessary delay and coordination may be compromised [2]. A solution to the over-current relay problem for feeder protection is proposed in [10].

Data fusion is an emerging technology which is being applied in military and non-military applications [11]. Multisensor data fusion effectively fuses data collected by multiple sources installed on a process to provide a more robust and accurate estimate of the required signal [12]-[16]. It is always advantageous to fuse the data of different sensors instead of relying on a single sensor alone. There are different data fusion strategies: data level fusion, feature level fusion and decision level fusion. In [13] the fault diagnosis of the main transformer in a marine application is achieved using a multi-layered data fusion technique.

In directional relaying the voltage and current phasors, or the derived sequence components, are used to estimate the fault direction with respect to the relay location [6]. Generally, the phase angle between the positive sequence components of the fault current and voltage gives the fault direction with respect to the relay location. When a CT saturates, the estimated phase angle of the fault current may be incorrect. In this paper a feature level fusion technique is proposed for estimating an improved phase angle of the fault current during CT saturation for directional relaying. The proposed technique assumes that there are a number of incoming and outgoing lines at the relay bus (a multi-input multi-output system). The logic of the approach is based on the fact that the current for a fault in an outgoing line is shared by the incoming lines. At times, the CT of an outgoing line may saturate during a fault whereas the incoming line CTs may not saturate, or may saturate less severely. Instead of using the feature from the CT of the outgoing line directly, the corresponding feature is estimated from the other available features at the bus using the feature level fusion technique. Results show the strength of the proposed technique.

II. DIFFERENT SCHEMES IN DATA FUSION
There are mainly three levels of fusion used in the data fusion technique: (i) data level fusion, (ii) feature level fusion and (iii) decision level fusion. In data level fusion, the raw measurements from each sensor are combined directly.


This is achieved by extracting a feature vector from the fused data, and a transformation is made between the feature vector and a declaration of identity. The raw sensor data can be combined directly only if the sensor data are commensurate. In feature level fusion, each sensor provides observational data from which a feature vector of representative features is extracted; these features are concatenated into a single feature vector, which in turn is the input to a decision process. In decision level fusion, each sensor converts the observed target attributes into a preliminary declaration of the target identity, and the identity declarations provided by the individual sensors are combined using decision level fusion techniques such as classical inference, Bayesian inference and weighted decision methods.

III. PROPOSED SCHEME WITH DATA FUSION APPROACH
(i) The basics
The power system under consideration is shown in Fig. 1. In the proposed feature level fusion technique, the appropriate feature is the phasor of each feeder current. Each phase current is acquired at a sampling rate of 1 kHz and a one-cycle DFT is used to extract the phasor of each line current. Consider the power system of Fig. 1, where there are three incoming lines and three outgoing lines at a relay bus, and hence six CTs. If a three-phase fault is created on line 4, the secondary current of CT4 is distorted the most because of the high fault current. Using the proposed feature level fusion technique, the features of the other CTs at the bus which have not gone into severe saturation are fused to find the estimated secondary current for CT4.

Fig. 1. A multi-input multi-output system (incoming line currents I1, I2, I3 with CT1, CT2, CT3 and outgoing line currents I4, I5, I6 with CT4, CT5, CT6 at the relay bus).

The one-cycle DFT is taken to find the phasor of each phase current. For any operating condition, the line currents of any phase obey the current law relation
I1 + I2 + I3 = I4 + I5 + I6    (1)
where I1, I2, I3, I4, I5 and I6 are the features extracted from the respective line currents using the one-cycle DFT. Hence, for the CTs of that phase, the primary current relation is
Ip1 + Ip2 + Ip3 = Ip4 + Ip5 + Ip6    (2)
where the suffix p indicates the primary side of the CT. During normal conditions the secondary current is proportional to the primary current, and the relation for the secondary currents referred to the primary becomes
Is1 + Is2 + Is3 = Is4 + Is5 + Is6    (3)
where the suffix s represents the secondary side of the CTs, and Is1, Is2, Is3, Is4, Is5 and Is6 are the extracted secondary-current features multiplied by their respective turns ratios (secondary referred to primary).

In the event of a fault in any phase of line 4 (Fig. 1), CT4 in that phase may saturate at times, whereas CT5 and CT6 see less current and remain unsaturated. The fault current is shared by the three incoming lines, so CT1, CT2 and CT3 may not be in saturation. In that case, exploiting the feature level fusion technique, the derived secondary current phasor for CT4 is given by
Is4d = Is1 + Is2 + Is3 - Is5 - Is6    (4)
Here Is4d is the derived secondary current for CT4, calculated using the fusion technique; the derived secondary current of line 4 is obtained by dividing Is4d by its turns ratio. In the above relation, as the corresponding CTs are not in saturation, the Is4d phasor is a more accurate representation of the fault current than Is4 obtained from CT4 (at saturation). In the proposed fusion approach, therefore, when a fault is detected in any outgoing line, the corresponding current should be computed using a relation like (4) instead of using the current from the CT of the faulty line directly.

The block diagram of the proposed scheme is depicted in Fig. 2. At the first stage the CT data are acquired at a sampling frequency of 1 kHz. After proper signal conditioning, the CT data are digitized, a one-cycle DFT is taken to find the phasor of each phase current, and finally the feature level fusion technique is used to find the estimated secondary current.

Fig. 2. Feature level fusion technique for estimation of the secondary current (CT inputs S1, S2, ..., S8 pass through signal conditioning, ADC, phasor estimation and feature level fusion to give the estimated secondary current).
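A minimal sketch of the two computational steps just described, the one-cycle DFT phasor extraction and the KCL-based fusion of relation (4), is given below. The 50 Hz / 1 kHz figures follow the text; the function names, the dictionary-style interface and the example labels are illustrative choices, not the authors' implementation.

```python
# Sketch of one-cycle DFT phasor extraction and feature level fusion (eq. (4)).
# Assumptions (not from the paper): 50 Hz system, 1 kHz sampling (20 samples
# per cycle); the non-linear CT model itself is not reproduced here.
import numpy as np

FS = 1000.0          # sampling frequency (Hz)
F0 = 50.0            # power frequency (Hz)
N = int(FS / F0)     # samples per cycle = 20

def one_cycle_dft_phasor(samples):
    """Fundamental-frequency phasor from the most recent one-cycle window."""
    window = np.asarray(samples[-N:], dtype=float)
    n = np.arange(N)
    # Correlate with the fundamental; scaling returns the peak-value phasor.
    return (2.0 / N) * np.sum(window * np.exp(-1j * 2 * np.pi * n / N))

def derived_secondary_phasor(sec_phasors, ratios, faulty, incoming, outgoing):
    """Derived phasor for the (possibly saturated) CT of the faulty outgoing
    line, per relation (4): refer the healthy CT secondaries to the primary
    through their turns ratios, apply KCL at the bus, and refer the result
    back through the faulty CT's own ratio."""
    primary_sum = sum(sec_phasors[k] * ratios[k] for k in incoming)
    primary_sum -= sum(sec_phasors[k] * ratios[k] for k in outgoing if k != faulty)
    return primary_sum / ratios[faulty]

# Illustrative call for the six-CT bus of Fig. 1 (phasors and ratios are dicts
# keyed 'CT1'..'CT6'); CT4 is assumed saturated and is replaced by the fusion:
# i4d = derived_secondary_phasor(phasors, ratios, 'CT4',
#                                incoming=['CT1', 'CT2', 'CT3'],
#                                outgoing=['CT4', 'CT5', 'CT6'])
```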

IV. RESULTS
To test the algorithm, the power system shown in Fig. 3 is considered and the results are presented below. In all the cases a non-linear CT model is used and the phase-a voltage is taken as the reference.

Case-1: Power System I, an 11 kV non-radial system
An 11 kV power system as shown in Fig. 3 is considered, where the power flow in lines 1 through 4 may be bidirectional. In this case the turns ratios of CT1, CT2, CT3 and CT4 are 250:5, 500:5, 300:5 and 450:5, respectively. The focus is on the angle of the secondary fault currents of CT1 and CT3, as faults are created in line 1 and line 3 at the Fx and Fy positions, respectively. The directional relay R1 is placed in line 1 and operates on the phase angle difference between the positive sequence components of fault current and voltage. This phase angle difference is positive for a reverse fault (at Fx) and negative for a forward fault (at Fy) with respect to the relay location.

Fig. 3. Single line diagram of the 11 kV, 50 Hz system (sources at both ends; lines 1-4 with CT1-CT4; relay R1 on line 1; fault positions Fx on line 1 and Fy on line 3).

In this work it is found that during severe CT saturation the phase angle of the positive sequence fault current changes differently in comparison with the unsaturated case. As the directional relay takes its decision with the help of the phase angle of the fault current, it may maloperate. To overcome this problem a feature level fusion technique is proposed to derive a more accurate angle of the fault current. The power flow direction in a line is necessary to obtain the governing equation for a system; it can be obtained using device 32 (ANSI) for power direction, or its algorithm.

Here a line-to-ground fault (AG type) is created on line 1 (fault at Fx) at t = 0.04 s. The positive sequence phasors of voltage and current are calculated and given in Table I. From Table I, the angle difference between fault current and voltage is found to be 1.15 rad when CT1 has not gone into saturation, whereas it is 0.87 rad when CT1 has saturated. The difference of 0.28 rad between the saturated and unsaturated cases is due to saturation in the core of CT1; it may fall on the boundary of the trip and block zones of the directional relay, leading to unnecessary tripping. To overcome such a situation, the measured secondary current of CT1 is replaced by the derived secondary current explained in (5):
Is1d = Is3 + Is4 - Is2    (5)
The derived secondary current is 4.94∠-0.82 A, as shown in the last row of Table II, which matches the phasor of the line 1 current without CT saturation (given in Table I). It is therefore better to take the phase angle of the derived secondary current, instead of the phase angle of the actual secondary current, for directional relaying.

TABLE I. RESULTS FOR AG FAULT AT LINE 1 AT FX
Fault position | Fault voltage phasor at busbar, Mag / Angle (rad) | Fault current phasor of line 1, Mag (A) / Angle (rad) | Angle difference V-I (rad)
Fx (without saturation) | 61.98 / -1.97 | 4.94 / -0.82 | 1.15
Fx (with saturation) | 61.98 / -1.97 | 1.20 / -1.10 | 0.87

TABLE II. RESULTS FOR DERIVED SECONDARY CURRENT FOR AG FAULT AT FX AT LINE 1
CTs | CT1 | CT2 | CT3 | CT4
Primary current (kA) | 2.19∠-0.82 | 1.48∠-1.82 | 4.27∠-1.62 | 1.22∠-1.38
Secondary current (A) | 1.20∠-1.10 | 2.97∠-1.82 | 3.06∠-1.62 | 3.12∠-1.38
Expected secondary current (A) | 4.94∠-0.82 | 2.97∠-1.82 | 3.06∠-1.62 | 3.12∠-1.38
Derived secondary current (A) | Is1d = 4.94∠-0.82 A, obtained from (5) using the secondary currents of CT2, CT3 and CT4 referred through their turns ratios

A line-to-ground fault (AG type) is next created at t = 0.04 s on line 3 (fault at the Fy position shown in Fig. 3). The corresponding results are given in Table III and Table IV. It is found that without CT saturation the phase angle between the positive sequence components of fault current and voltage is -0.18 rad, whereas during severe CT saturation the corresponding phase angle is 0.17 rad. The mismatch between the phase angle differences is due to the saturation in the CT core and may mislead the decision of the directional relay. To overcome this situation the phase angle of the faulty line current (here the line 3 current) is replaced by the phase angle of the derived current obtained from the other available CT current phasors using (6):
Is3d = Is1 + Is2 - Is4    (6)
The derived secondary current is 4.47∠-2.15 A, as shown in the last row of Table IV, which matches the phasor of the line 3 current without CT saturation (given in Table III).

TABLE III. RESULTS FOR AG FAULT AT LINE 3 AT FY
Fault position | Fault voltage phasor at busbar, Mag / Angle (rad) | Fault current phasor of line 3, Mag (A) / Angle (rad) | Angle difference V-I (rad)
Fy (without saturation) | 62.44 / -1.97 | 4.47 / -2.15 | -0.18
Fy (with saturation) | 62.44 / -1.97 | 3.09 / -1.80 | 0.17

TABLE IV. RESULTS FOR DERIVED SECONDARY CURRENT FOR AG FAULT AT FY AT LINE 3
CTs | CT1 | CT2 | CT3 | CT4
Primary current (kA) | 2.19∠-1.71 | 1.48∠-1.75 | 4.27∠-2.15 | 1.22∠-1.46
Secondary current (A) | 2.63∠-1.71 | 2.99∠-1.75 | 3.09∠-1.80 | 3.01∠-1.46
Expected secondary current (A) | 2.63∠-1.71 | 2.99∠-1.75 | 4.47∠-2.15 | 3.01∠-1.46
Derived secondary current (A) | Is3d = [2.63∠-1.71 x (50) + 2.99∠-1.75 x (100) - 3.01∠-1.46 x (90)] / (60) = 4.47∠-2.15 A
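The directional decision itself reduces to a sign test on the angle difference reported in Tables I and III. The snippet below encodes that test under the convention stated for relay R1 (positive difference for a reverse fault at Fx, negative for a forward fault at Fy); the zero threshold and the absence of any trip/block security margin are simplifications for illustration only.

```python
# Sign test on the positive-sequence V/I angle difference for relay R1.
import cmath
import math

def fault_direction(v1, i1):
    """v1, i1: positive-sequence voltage and (preferably derived) current phasors."""
    diff = cmath.phase(i1) - cmath.phase(v1)
    diff = (diff + math.pi) % (2 * math.pi) - math.pi   # wrap to (-pi, pi]
    return "reverse (Fx side)" if diff > 0 else "forward (Fy side)"

# Worked check with the unsaturated Table I phasors: 61.98 at -1.97 rad and
# 4.94 at -0.82 rad give a difference of about +1.15 rad -> reverse fault.
v1 = cmath.rect(61.98, -1.97)
i1 = cmath.rect(4.94, -0.82)
print(fault_direction(v1, i1))
```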

A double line-to-ground fault (ABG type) is created at t = 0.045 s on line 3 (fault at the Fy position shown in Fig. 3). The corresponding results are given in Table V and Table VI. It is found that without CT saturation the phase angle between the positive sequence components of fault current and voltage is -0.39 rad, whereas during severe CT saturation the corresponding phase angle is 0.35 rad. The mismatch between the phase angle differences is due to the saturation in the CT core. To overcome this situation the phase angle of the faulty line current (here the line 3 current) is replaced by the phase angle of the derived current obtained from the other available CT current phasors. The derived secondary current is 15.73∠-0.87 A, as shown in the last row of Table VI, which matches the phasor of the line 3 current without CT saturation (given in Table V).

TABLE V. RESULTS FOR ABG FAULT AT LINE 3 AT FY
Fault position | Fault voltage phasor at busbar, Mag / Angle (rad) | Fault current phasor of line 3, Mag (A) / Angle (rad) | Angle difference V-I (rad)
Fy (without saturation) | 56.99 / -0.48 | 15.73 / -0.87 | -0.39
Fy (with saturation) | 56.99 / -0.48 | 7.76 / -0.13 | 0.35

TABLE VI. RESULTS FOR DERIVED SECONDARY CURRENT FOR ABG FAULT AT FY AT LINE 3
CTs | CT1 | CT2 | CT3 | CT4
Primary current (kA) | 0.45∠-0.50 | 0.57∠-0.58 | 0.94∠-0.87 | 0.30∠1.28
Secondary current (A) | 4.90∠-0.50 | 5.72∠-0.58 | 7.76∠-0.13 | 3.36∠1.28
Expected secondary current (A) | 4.90∠-0.50 | 5.72∠-0.58 | 15.73∠-0.87 | 3.36∠1.28
Derived secondary current (A) | Is3d = [4.90∠-0.50 x (50) + 5.72∠-0.58 x (100) - 3.36∠1.28 x (90)] / (60) = 15.73∠-0.87 A

V. CONCLUSION
This paper proposes a feature level fusion method to improve directional relaying performance during CT saturation. The derived angle of the fault current of a faulty feeder is computed by fusing the corresponding phase currents of all the other feeders. This provides a more accurate estimated secondary current phasor than the available secondary current phasor of the faulty line. Results obtained from a DG system support this concept. Such a scheme may also be exploited in other protection schemes.

REFERENCES
[1] D. Yuan, X. Dong, S. Chen, Z. Q. Bo, B. R. J. Caunce and A. Klimek, "A directional comparison scheme for distributed line protection," http://www.arevatd.com/solutions/liblocal/docs/1169471847868-Paper_13.pdf.
[2] L. A. Kojovic, "Impact of current transformer saturation on overcurrent protection operation," IEEE Power Engineering Society Summer Meeting 2002, pp. 1078-1083, 2002.
[3] A. Y. Wu, "The analysis of current transformer transient response and its effect on current relay performance," IEEE Trans. on Ind. Appl., vol. IA-21, no. 4, pp. 793-802, 1985.
[4] J. Pan, K. Vu and Yi Hu, "An efficient compensation algorithm for current transformer saturation effects," IEEE Trans. on Power Delivery, vol. 19, no. 4, pp. 1623-1628, 2004.
[5] H. Khorashadi-Zadeh and M. Sanaye-Pasand, "Correction of saturated current transformers secondary current using ANNs," IEEE Trans. on Power Delivery, vol. 21, no. 1, pp. 73-79, 2006.
[6] J. Mooney, "Distance element performance under conditions of CT saturation," http://www.selinc.com/WorkArea/DownloadAsset.aspx?id=3496.
[7] J. Roberts and A. Guzman, "Directional element design and evaluation," www.selinc.com/techpprs/6009.pdf.
[8] Y. Q. Xia, J. L. He and K. K. Li, "A reliable digital directional relay based on compensated voltage comparison for E.H.V transmission lines," IEEE Trans. on Power Del., vol. 7, no. 4, pp. 1955-1962, Oct. 1992.
[9] L. G. Perez and A. J. Urdaneta, "Optimal computation of distance relays second zone timing in a mixed protection scheme with directional overcurrent relays," IEEE Trans. on Power Del., vol. 16, no. 3, pp. 385-388, Jul. 2001.
[10] P. Jena, A. K. Pradhan and A. Routray, "A solution towards current transformer saturation problem in power system protection," Proc. International Conference on Power Systems, CPRI Bangalore, India, 12-14 Dec. 2007.
[11] D. L. Hall and J. Llinas, "An introduction to multisensor data fusion," Proceedings of the IEEE, vol. 85, no. 1, pp. 6-22, Jan. 1997.
[12] A. Mahajan, K. Wang and P. K. Ray, "Multisensor integration and fusion model that uses a fuzzy inference system," IEEE/ASME Transactions on Mechatronics, vol. 6, no. 2, pp. 188-196, Jun. 2001.
[13] Z. M. Fan, L. Tong, D. Liu, H. Zhang and W. S. Wang, "A fault diagnosis method of the main transformer in the power train using compound data fusion method," Proceedings of the Second International Conference on Machine Learning and Cybernetics, pp. 1369-1393, Nov. 2003.
[14] J. Kittler, "Multisensor integration and decision level fusion," Centre for Vision, Speech and Signal Processing, published by IEE, UK.
[15] P. K. Varshney, "Multisensor data fusion," Electronics and Communication Journal, pp. 245-253, Dec. 1997.
[16] P. Wide, F. Wingquist, P. Bergsten and M. Patriu, "The human-based multisensor fusion method for artificial nose and tongue sensor data," IEEE Transactions on Instrumentation and Measurement, vol. 47, no. 5, pp. 1072-1077, Oct. 1997.


Digital Voltage-mode Controller Design for Zero-Voltage Turn-ON Boost Converter


Mummadi Veerachary
Dept. of Electrical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi - 110016

Abstract - In this paper the analysis and digital controller design for a zero-voltage turn-ON boost converter (ZVTBC) are presented. The steady-state performance and the zero-voltage switch-ON methodology are analyzed, and state-space models are then formulated to obtain the small-signal models. These linearized models are used in the design of a digital voltage-mode controller. A loopgain is defined and then adapted while designing the digital controller, which is designed through the digital redesign procedure and a pole-zero placement technique. The closed-loop performance of the ZVTBC is analyzed. The results of the controller design and closed-loop analysis are first illustrated through MATLAB computer simulations for a 42 V application and then verified using time-domain dynamic response analysis. A 75 W, 24 V to 42 V, 50 kHz laboratory prototype closed-loop converter has been constructed and then tested to validate the proposed digital controller design of the ZVTBC.
Keywords- Soft-switching boost converter, Zero-voltage turn-ON, Voltage-mode controller.

Sachin Devassy
Dept. of Electrical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi - 110016

I. INTRODUCTION
The application of high-frequency switching converters in low-power compact electronic circuits has been increasing in recent years. As power conversion systems become miniaturized, increasing the power density is one of the challenging issues for power supply designers. One of the main orientations in switch-mode conversion [1]-[4] is to reduce the size of the magnetic elements, avoiding the use of transformers wherever isolation is not required; another important concern is reducing the electromagnetic interference (EMI). Light weight, small size and high power density are possible nowadays through the use of high-frequency switching. Several different kinds of power circuit configurations have been reported in the literature to meet the load demand, and they are broadly classified into (i) buck, (ii) boost and (iii) buck-boost topologies. These converters and their derivatives find application in several different areas, the most important of which are: (i) power supplies for miniaturized integrated circuits, (ii) the automotive industry, (iii) internet services/LANs and WANs, and (iv) the telecom industry.

The dc-dc boost converter is the most popular choice for delivering a higher load voltage from a given low-voltage source. Although the conventional boost converter is capable of stepping up the voltage and meeting the load demand at a predefined voltage level, its full-load efficiency is low on account of high switching losses. Recently, soft-switching techniques have been developed to overcome the excessive switching losses occurring in conventional hard-switched dc-dc converters and to realize higher converter efficiencies at full-load conditions. One such soft-switching boost converter topology with a zero-voltage turn-ON (ZVT-ON) feature is reported in the literature [8]-[9]. However, there is not enough literature covering the development of controllers for such converters. In order to bridge this gap, this paper presents some investigations on digital controller design, which ensures load voltage regulation while rejecting source and load disturbances.

Although analogue controllers are well established for SMPSs [1]-[4], digital controllers offer many advantages over their analogue counterparts. Owing to recent advances in microcontrollers and digital signal processors, there has been growing interest in the application of digital controllers to high-frequency conversion systems and low-to-medium power dc-dc converters (DDC), due to the low price-to-performance ratio for implementing complex control strategies [5]-[7]. Several compensator design approaches have been reported in the literature for op-amp or IC based analogue controllers. In the case of digital controller design, however, two main approaches are widely used: (i) the digital redesign method (DRM) and (ii) the direct digital design method (DDDM). In the first case, the compensator is designed in the conventional way using s-domain transfer functions together with linear system theory, and the resulting compensator is transformed into the digital domain using an appropriate z-transformation. In DDDM, on the other hand, the compensator design is carried out in the z-domain itself, and hence there is no need for an s-to-z-domain transformation. The main difficulty with DDDM is that if the plant model is approximate, the digital controller design becomes a tedious task. In view of this, the DRM is used here for the digital controller design.


II. STATE-SPACE MODELING OF THE ZVT BOOST CONVERTER
The proposed ZVTBC, shown in Fig. 1, has an additional resonant network realized by means of a switch and two diodes. The main and auxiliary switches together with the diodes form a bridge network, and the resonant inductor is connected between the junction points formed by the switch and diode combinations. To understand the features of the converter, the following mathematical analysis, based on the state-space averaging technique [4], is performed. Although several operating states are possible depending on the current status of the inductor (L1), the analysis is given here for the continuous inductor current mode (CICM) only. In CICM the converter goes through two topological stages in each switching period, and its power stage dynamics can be described by a set of state equations given by:

x' = Ak x + Bk u;  v0 = Ck^T x    (1)

where x = [iL1  vc  v0]^T and u = [vg], and k = 1 to 6 for mode-1 to mode-6, respectively. The state model matrices are given below.

A discrete-time model of the converter is obtained from the known state-space models given above by integration of the continuous-time small-signal model over a switching period [6] as:

x((n+1)Ts) = [Phi] x(nTs) + [gamma][u]    (2)

where
Phi = exp(A1 d1 Ts) exp(A2 d2 Ts);
gamma = exp(A2 d2 Ts) A1^(-1) (exp(A1 d1 Ts) - I) B1 + A2^(-1) (exp(A2 d2 Ts) - I) B2.

From this model the discrete-time control-to-output transfer function is obtained as

Gvd(z) = v0(z) / d(z) = E (zI - Phi)^(-1) gamma    (3)

which is useful for designing the voltage-mode compensator. Knowing Gvd(z), the loopgain is obtained as

TL(z) = Gvd(z) Gp(z) Gc(z)    (4)

Upon substitution of the various matrices and after simplification, the following control-to-output transfer function is obtained:

Gvd(z) = (0.0065 z + 0.0095) / (z^2 - 1.88 z + 0.89)    (5)
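As a quick numerical companion to (5), the discrete transfer function can be evaluated directly on the unit circle; the short script below does only that sampling (it is not the authors' MATLAB model), with the 50 kHz switching/sampling rate taken from the converter data.

```python
# Frequency-response check of the control-to-output transfer function (5):
# evaluate Gvd(z) at z = exp(j*2*pi*f*Ts) over a log-spaced frequency grid.
import numpy as np

num = np.array([0.0065, 0.0095])       # 0.0065 z + 0.0095
den = np.array([1.0, -1.88, 0.89])     # z^2 - 1.88 z + 0.89
Ts = 1.0 / 50e3                        # sampling period at fs = 50 kHz

def gvd(f_hz):
    z = np.exp(1j * 2 * np.pi * np.asarray(f_hz) * Ts)
    return np.polyval(num, z) / np.polyval(den, z)

f = np.logspace(1, 4, 200)             # 10 Hz ... 10 kHz
h = gvd(f)
mag_db = 20 * np.log10(np.abs(h))
phase_deg = np.degrees(np.unwrap(np.angle(h)))
print(mag_db[0], phase_deg[0])         # low-frequency gain (dB) and phase (deg)
```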

III. CONTROLLER DESIGN METHODOLOGY
Once the power stage transfer functions are known, the feedback compensation can easily be designed to obtain the desired closed-loop performance. The digital compensator design can be carried out along the same lines as conventional controller design, and the standard linear control theory tools are equally applicable. In this controller design methodology the compensator is designed on the basis of s-domain transfer functions and then transformed into the discrete-time domain using appropriate transformation methods, ignoring the time delays involved in the analog-to-digital conversion (ADC) and digital pulse width modulation (DPWM) generation. Fig. 2 shows the block diagram representation of this method, where Gp(s) is the converter transfer function (in this case Gvd(s)), Gc(s) is the controller to be designed, K is the voltage feedback factor and Fm is the PWM modulator gain. Taking the loopgain into account, the controller Gc(s) is designed so that it ensures the stability margins, i.e. the gain margin (GM) and phase margin (PM). Once a suitable Gc(s) is designed in the s-domain, it is converted into the discrete-time domain using any of the standard transformations; the bilinear transformation is the most widely used for the s-domain to discrete-time domain conversion.

Fig. 1. Digital control of zero voltage turn-ON boost converter.


Fig. 2. Control-loop block diagram, s-domain representation.

The following are the design steps used in the compensator design.
Step-1: Plot the frequency response (Bode plot) of the open-loop transfer function. Choose the desired crossover frequency, depending on the transient response requirements, and then determine the phase shift and gain of the transfer function at this frequency.
Step-2: Choose the desired phase margin depending on the maximum allowable overshoot requirement.
Step-3: Depending on the phase boost and gain requirement, determine the type and order of the compensator and its pole-zero configuration.
Step-4: Place the poles and zeros in the z-plane for a digital controller (or in the s-plane for an analog controller) such that the loopgain frequency response meets the required PM, GM and bandwidth (BW).
Step-5: From the pole-zero locations determine the compensator transfer function and find the equivalent digital controller.
A few important points to be followed while placing the poles and zeros of the compensator are:
- The loop-gain crossover frequency fc should lie in the range of 1/5 to 1/10 of the switching frequency fs.
- Select the frequencies f2 and f3 such that fc is their geometric mean, i.e. fc = sqrt(f2 f3).
- Place the two compensator zeros below the crossover frequency (f1 < f2 < fc).
- Place the two compensator poles above the crossover frequency (fc < f3 < f4).
Using the above procedure the compensator design can be carried out with the help of any software program; the MATLAB [11] platform is a good choice as it provides all the control-related functions. Check the closed-loop converter performance specifications and, if the design does not fulfil the requirements, repeat Step-1 to Step-5 by changing the location of the poles and zeros of the compensator. For the ZVTBC a second order digital compensator is designed and its final z-transfer function is given in Table I; a worked discretization of this redesign step is sketched below.
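One way to sketch the final s-to-z step of the digital redesign method is with the bilinear (Tustin) mapping. The two-zero/two-pole s-domain compensator used in the example, together with its gain and corner frequencies, is a placeholder chosen only to make the example runnable; it is not the compensator designed for the ZVTBC.

```python
# Digital-redesign step sketched with the bilinear (Tustin) transformation.
# All compensator parameters below (gain k, zero/pole corner frequencies)
# are assumed placeholder values, not the paper's design.
import numpy as np
from scipy import signal

fs_sw = 50e3                 # switching / sampling frequency (Hz)
fz1, fz2 = 150.0, 250.0      # assumed zero frequencies, below fc
fp1, fp2 = 2e3, 5e3          # assumed pole frequencies, above fc
k = 10.0                     # assumed mid-band gain

wz1, wz2, wp1, wp2 = [2 * np.pi * f for f in (fz1, fz2, fp1, fp2)]
# Gc(s) = k (1 + s/wz1)(1 + s/wz2) / ((1 + s/wp1)(1 + s/wp2))
num_s = k * np.polymul([1 / wz1, 1.0], [1 / wz2, 1.0])
den_s = np.polymul([1 / wp1, 1.0], [1 / wp2, 1.0])

num_z, den_z = signal.bilinear(num_s, den_s, fs=fs_sw)   # Tustin mapping
print("Gc(z) numerator:  ", num_z)
print("Gc(z) denominator:", den_z)
```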

TABLE I. CONVERTER AND COMPENSATOR PARAMETERS
Power stage: Lm = 200 µH, Lr = 10 µH, C0 = 100 µF, Cr = 40 µF, R = 25 Ω, Vg = 24 V, fs = 50 kHz
Digital compensator: Gc(z) = (z^2 - 1.5 z + 0.56) / (z^2 + 1.5 z - 0.5)

IV. DISCUSSION OF SIMULATION AND EXPERIMENTAL RESULTS
To verify the developed modeling and controller design, a 75 W ZVTBC system was designed to supply a constant load voltage of 42 V from a source voltage of 24 V; the power stage and controller parameters designed to meet these specifications are listed in Table I. The frequency-domain Bode plots of the open-loop converter Gvd(z), the compensator Gc(z) and the loopgain TL(z) are shown in Fig. 3. For designing the controller, a trade-off crossover frequency of 348 Hz was found to give the better results. The effect of the controller pole-zero locations on the stability margins (phase and gain margin) was also studied, and the corresponding time-domain closed-loop step response is shown in Fig. 4; it indicates a peak overshoot of 15% and a settling time of 2.65 ms. The pole-zero plot of the final closed-loop system is shown in Fig. 5, which clearly shows that all the poles lie within the unit circle, ensuring stability at this operating point. For the parameters given in Table I, the regulation capability of the closed-loop converter system is tested for: (i) a load perturbation of 50% about the nominal 23 Ω load, and (ii) a supply voltage change of 20% about 24 V. The PSIM [12] power electronics simulator was used for the simulations. The simulation results for the above cases are shown in Fig. 7. Fig. 6 shows the steady-state waveforms, in which the switch voltage and current transitions during the turn-ON process clearly indicate the ZVT-ON process. Fig. 7 shows the load voltage regulation against load and supply voltage perturbations; in both cases the load voltage settles within 3 ms. In order to show the validity of the digital controller design, a 75 W laboratory prototype converter was constructed and the steady-state and dynamic responses were measured for the same cases as in the simulations. The steady-state waveforms are shown in Fig. 8, while the dynamic response is shown in Fig. 9. The steady-state waveforms showing ZVT-ON are in close agreement with the simulation results plotted in Fig. 6. The measured dynamic responses are also in close agreement with the simulated results, thus validating the theoretical analysis.
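The crossover-frequency and phase-margin check referred to above (fc of about 348 Hz, PM of about 53 degrees) can be reproduced in principle by sampling the loopgain on the unit circle. In the sketch below the loopgain is approximated by Gvd(z) of (5) scaled by an assumed flat gain, standing in for the full compensated loopgain, so the printed numbers are illustrative rather than the paper's.

```python
# Numerical crossover and phase-margin estimate for a discrete loopgain TL(z).
import numpy as np

Ts = 1.0 / 50e3

def freqresp(num, den, f_hz):
    z = np.exp(1j * 2 * np.pi * np.asarray(f_hz) * Ts)
    return np.polyval(num, z) / np.polyval(den, z)

def margins(num, den, f_grid):
    h = freqresp(num, den, f_grid)
    idx = int(np.argmin(np.abs(np.abs(h) - 1.0)))        # nearest point to 0 dB
    fc = f_grid[idx]
    pm_deg = 180.0 + np.degrees(np.angle(h[idx]))        # phase margin at crossover
    return fc, pm_deg

num_T = 40.0 * np.array([0.0065, 0.0095])   # assumed flat loop-gain scaling
den_T = np.array([1.0, -1.88, 0.89])
f = np.logspace(1, 4, 2000)
fc, pm = margins(num_T, den_T, f)
print(f"crossover ~ {fc:.0f} Hz, phase margin ~ {pm:.0f} deg")
```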


Fig. 3. Bode plots of the open-loop converter Gvd(z) (red), the compensator Gc(z) (blue) and the loopgain TL(z) (green); GM = 17 dB, PM = 53 deg, fc = 348 Hz.

Fig. 4. Step response of the closed-loop converter system (time axis in ms).

Fig. 5. Pole-zero plot of the closed-loop converter system (with a zoomed view near the unit circle).

Fig. 6. Steady-state waveforms showing ZVT-ON.

Fig. 7. Dynamic response of the load voltage against perturbation: (a) load resistance perturbation (R: 23 Ω to 12.5 Ω); (b) source voltage perturbation (Vg: 24 V to 30 V).


Fig. 8. Steady-state switch voltage and current waveforms.

Fig. 9. Experimental dynamic response of the load voltage against perturbation: (a) load resistance perturbation (R: 23 Ω to 12.5 Ω); (b) source voltage perturbation (Vg: 24 V to 30 V).

CONCLUSIONS
A digital voltage-mode controller for the ZVTBC was designed. Mathematical models were developed using the state-space method and the digital compensator design was then discussed. A second-order compensator, using the digital redesign approach, was designed to ensure load voltage regulation. The validity of the designed compensator was verified through PSIM simulations. The effect of the controller pole-zero locations on the stability margins was also discussed. These studies reveal that the crossover frequency should not be excessively high in the digital control scheme.

REFERENCES
[1] M. Veerachary, "Two-loop voltage-mode control of coupled inductor step-down buck converter," IEE Proc. on Electric Power Applications, vol. 152, no. 6, pp. 1516-1524, 2005.
[2] Jian Liu, Zhiming Chen and Zhong Du, "A new design of power supplies for pocket computer system," IEEE Trans. on Ind. Electronics, vol. 45, no. 2, pp. 228-234, 1998.
[3] A. Barrado, A. Lazaro et al., "Linear-non-linear control for DC-DC buck converters: stability and transient response analysis," IEEE Applied Power Electronics Conference (APEC), 2004, pp. 1329-1335.
[4] R. D. Middlebrook and S. Cuk, "A general unified approach to modelling switching converter power stages," IEEE Power Electronics Specialists Conference, 1976, pp. 13-34.
[5] Yuan Kui and Wang Cong-Qing, "A new approach to digital control implementation of continuous-time system," Proceedings 1993 IEEE Region 10 Conference on Computer, Communication, Control and Power Engineering (TENCON 1993), pp. 386-389.
[6] M. Veerachary and B. Krishna Mohan, "Robust digital controller for fifth order boost converter," 31st International Telecommunications Energy Conference, INTELEC 2009, pp. 1-6.
[7] S. Choudhury, "DSP implementation of an average current mode controlled power factor correction converter," International Power Electronics Technology Conference Proceedings, 2003.
[8] Gil-Ro, Sang-Hoon Park, Chang-Yuen Won, Yong-Chae Jung and Sang-Hoon Song, "High efficiency soft switching boost converter for photovoltaic system," EPE-PEMC 2008, CD-ROM Proceedings.
[9] Jianping Xu and C. Q. Lee, "Unified averaging technique for the modeling of quasi-resonant converters," IEEE Transactions on Power Electronics, vol. 13, no. 3, pp. 556-563, 1998.
[10] J. Y. Choi, B. H. Cho, H. F. Van Landingham, H. S. Mok and H. H. Song, "System identification of power converters based on a black-box approach," IEEE Trans. on Circuits and Systems I: Fund. Theory and App., vol. 45, no. 11, pp. 1148-1158, 1998.
[11] MATLAB, user manual, 2005.
[12] PSIM, user manual, 2007.


SYNTHESIS AND CHARACTERIZATION OF SENSOR FOR BIOLOGICAL REAL TIME APPLICATIONS USING CONDUCTING POLYMERS, NANOCOMPOSITES AND SIGNAL PROCESSING TECHNIQUE
Smt Usha.A, Dr.B.Ramachandra and Dr.M.S. Dharmaprakash
Department of E & E Engg, BMSCE, Bangalore. E-mail:ushaajoshi@yahoo.com Department of E&E Engg, PESCE, MANDYA. E-mail:bramachandra1@yahoo.co.in Department of Chemistry, BMSCE, Bangalore. E-mail:msdharmaprakash@gmail.com Abstract - Sensors are used to collect measurement data from structures to assist in system identification and health monitoring. The new concept presented in this paper focuses on optimizing the sensor film design for biological real time applications and processing of bio-signals of the order of few nano amperes using DSP processor. The sensitivity of noise and transients in measurement of biosignals accessed by the sensor film can be further conditioned, filtered, processed and displayed using a suitable signal conditioning circuitry and digital signal processor. Conducting thin polymer nano materials are widely employed in recent years due to their tunable conductivity from non-conducting to metallic form, good environmental stability and potential applications in electronic devices and bio-sensors, etc. Some important aspects of the Conducting Polymers nano-materials formation will be investigated to track the factors which control the morphology of the nano-materials. These nanomaterials are further subject to detailed electron microscopic, wide angle X-ray diffraction and fluorescence spectroscopic analysis to study the importance of conducting polymer chain characterization and nature of the morphology on the properties of nano particles. The sensor and signal conditioning circuitry proposed will be highly novel and have potential applications in bio-medical electronic devices. This paper will focus on design, synthesis, characterization and testing of sensor film for biological applications using conducting polymers and nanocomposites. The proposed sensor and novel signal conditioning circuit have several advantages such as high sensitivity, large dynamic range, wide frequency bandwidth, small foot print, immunity to electro-magnetic radiation, distributed sensing, accessing and processing of the bio-signals.
Keywords: Biosensor, Nanoparticles, Conducting polymers, Signal processing.

I. INTRODUCTION
In earlier days, the traditional method of counting leukocytes was the stain method, but nowadays auto-analyzers are used for the same purpose. For bio-sensing, labeling is a basic requirement for detecting and analyzing biomolecules and bio-reactions; labeling is, however, an expensive and time-consuming process. The ability of label-free detection, scalability to allow massive parallelization and sensitivity over the required detection range are the important requirements for a future generation of bio-sensors. Therefore, researchers are trying to develop label-free, non-invasive bio-sensing techniques. For label-free detection there are currently three popular candidates: surface plasmon resonance (SPR), optical methods design (OMD) and paramagnetic based (PMB) bio-sensors. SPR can analyze only one sample at a time and is not economical; moreover, detection of small molecules is another of its limitations. Although more sensitive than SPR, PMB bio-sensors have limited scalability because of their construction and a usability that depends strongly on the sensor surface. The electrochemical bio-sensor therefore appears to be the right choice for the optimized design of a bio-sensor for leukocyte counts in the sample or analyte. An attempt is made here to fabricate the sensor using conducting polymers and doping it with nano-composites.

Polymers as a group are relatively light and easy to process. The conducting type is, in addition, mechanically and electrically compatible with conductors such as copper and semiconductors such as silicon. Typical conductivities of the newest conducting polymers run anywhere between 10^-4 and 10^7 siemens/meter, as compared with 1.1x10^7 S/m for copper and 10^-2 to 1 S/m for lightly doped silicon, both at room temperature. The electrical conductivity of any conducting polymer can be varied over a wide range, depending on the amount and reactivity of the dopant used. To start with, the conductivity increases dramatically with dopant content, until the effect saturates or reaches a maximum at high doping levels. The doping process creates spinless charge carriers called polarons and bipolarons at energy levels within the band gap. This situation contrasts with metallic conductors, in which the charge carriers (electrons) have spin. Conduction in a macroscopic sample is, overall, thermally activated and depends exponentially on temperature, much as in semiconductors but unlike metals. Stability and processing characteristics have been the main barriers to the commercialization of these materials; stability pertains primarily to reactivity toward oxygen or moisture. Doped polyacetylene, one of the earliest conducting polymers, has a conductivity of 1.5x10^7 S/m at room temperature, but as it is also very reactive towards oxygen and moisture, it rapidly suffers an irreversible loss of conductivity in the atmosphere. Several other polymers exhibit much greater stability toward ambient conditions, including polypyrrole, polyaniline and polythiophene; while not as conductive as polyacetylene, they do well enough for several other applications. Conducting polymers are prepared in two steps, which can be simultaneous or sequential. First, the polymer is formed from its base material by a conventional chemical polymerization process. The molecular structure of such polymers typically has considerable delocalization of electrons along the polymer chains; this structure is also conducive to the formation of energy bands, from and to which charge carriers can easily be removed or added. The second step in the preparation of conducting polymers is


the creation of charge carriers by reaction with a chemical oxidizing or reducing agent. Among the common dopants are halogens (like iodine or bromine), organic oxidizing agents (like chloranil or dichlorodicyanoquinone) and alkali metals like (sodium or potassium). Doping can also be accomplished by adding or removing electrons electrochemically. In this case, charge is balanced by incorporating ions from the electrolyte. The choice of dopant and doping method determines the polymers magnetic, electrical and optical properties and in some cases also influences its processing characteristics. For this specific application, polypyrrole with a and polyacetylene conductivity (2.0X105)s/m 7 (1.5X10 )s/m appears to be the right choice because of its conductivity and performance characteristics at room temperature and pressure. A glass substrate, (SiO2) forms the base of the sensor, above which (ITO) Indium Coated Tin Oxide layer provides the proper contact between the base and the sensor. Polypyrrole and polyacetylene mixture in different proportions along with doping nano materials as either gold or silver is the optimized composition for the bio sensor. Gold nano materials are added as dopant in different proportions (10ppb or 100ppb or 1000 ppb). Specific enzyme or oxidize or antibody has been coated to stabilize and to ease the immobilization of enzyme and to enhance the concentration of enzyme redox centers. Using electrochemical technique, thin film of the sensor is prepared which constitutes mainly the novel conducting polymers such as polypyrrole and ployacetylene bi-layer composite film with nano materials i.e., either gold or silver or carbon nano tubes. With maximum surface area, i.e., more number of active sites which can assist the enzyme or antibody to enhance the immobilization of enzyme while interacting with the specific analyte or the sample. The analyte employed here is blood sample, in which the WBC or Leukocyte counts are initially traced and identified by observing and analyzing various performance characteristics of the bio sensor at room temperature by using the four-probe technique. Essentially this scheme is more advanced than the traditional or earlier dye or stain method of WBC counting in blood sample. Further this technique appears to be more efficient with less human intervention and reduced time consumption by using only a small amount (1or 2 ml) of the sample or analyte. In future, the same technique can also be employed for the analysis of differential Leukocyte counts (Europhiles, Lymphocytes and Eosinophils) in blood sample and hence the Leukemia patients can be treated in advance at most appropriate time which in turn is a boon to the public or society.

Figure 1. Sensor film structure: blood sample/analyte on an enzyme/oxidase/antibody layer, gold or silver nano-material (oxidizing agent/dopant), polypyrrole + polyacetylene conducting polymer bi-layer, ITO (indium tin oxide) contact layer and glass substrate.


II. DEVICE DESIGN
The substrate preparation for ultra-thin film deposition is an important task for obtaining the desired number of layers. To obtain the required number of conducting polymer layers, either the electrochemical (ECM) or the chemical vapor deposition (CVD) technique can be employed, and different thicknesses of the in-situ, self-assembled polypyrrole-polyacetylene composite can be fabricated. The indium tin oxide (ITO) coated glass is also treated separately before deposition of the films. The glass substrate is first washed with distilled water to ensure that no dust or other airborne particles, which may interact with the highly conductive polyacetylene, affect the performance characteristics of the designed bio-sensor. Such substrates are preserved in de-ionized water before the deposition of the conducting polymer films. Gold or silver nano-composites are then deposited on the prepared polypyrrole-polyacetylene bi-layer film to increase the available surface area, which in turn eases the immobilization of the enzyme and enhances the concentration of enzyme redox centers.

III. METHODOLOGY AND SIGNAL PROCESSING CIRCUIT
In biological and real-time applications it is always desirable to perform pre-processing on the raw sensor output before data acquisition. Based on this requirement, the DSP-based biosensor developed here has in-situ signal conditioning electronics that eliminate the need for an additional on-board regulator, temperature compensation and amplifier circuit. The proposed signal conditioning circuitry mainly consists of an ASIC with associated components that process the raw output obtained from the bio-sensor. The ASIC works with low voltage and low current over a wide operating temperature range; this matters because the proposed biosensor, which is basically made of conducting polymers and nano-composites, is very sensitive to temperature. The selected ASIC enables output adjustment, signal conditioning, amplification and processing of the acquired sensor data. It operates from a +5 V supply and provides an output of 0.1 to 5 V for an input current signal ranging from 10 to 300 nA, released by the bio-sensor as the leukocyte count varies in the blood; a simple model of this mapping is sketched below. Efforts are presently under way to incorporate a high-resolution ADC with the signal processing electronics so as to give a serial digital output displayed using an LCD and a TMS320C54XX DSP processor.
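A toy model of the ASIC transfer characteristic mentioned above (10-300 nA in, 0.1-5 V out) is given below; a straight-line mapping is assumed purely for illustration, since the actual calibration curve of the ASIC is not specified here.

```python
# Assumed linear conditioning of the bio-sensor current into the ASIC output
# range, and the inverse mapping used after digitization. Illustrative only.
I_MIN, I_MAX = 10e-9, 300e-9     # input current range (A)
V_MIN, V_MAX = 0.1, 5.0          # output voltage range (V)

def current_to_voltage(i_amp):
    """Ideal linear conditioning of the bio-sensor current."""
    i = min(max(i_amp, I_MIN), I_MAX)
    return V_MIN + (i - I_MIN) * (V_MAX - V_MIN) / (I_MAX - I_MIN)

def voltage_to_current(v_out):
    """Inverse mapping used after the ADC to recover the sensed current."""
    return I_MIN + (v_out - V_MIN) * (I_MAX - I_MIN) / (V_MAX - V_MIN)

print(current_to_voltage(150e-9))   # mid-range current -> roughly mid-range volts
```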


Our approach focuses on the fabrication, characterization and application of polypyrrole-polyacetylene-gold nano composite film for leukocyte counts sensing. Initially, polypyrrole-polyacetylene mixture are deposited as bilayer at different proportions (for example: PPY+ polyacetylene; 50%+50%, 40%+60%, 70%+30%, 60%+40%) to which gold nano composites are added as 10ppb or 100ppb 0r 1000ppb to the prepared bi-layer to enhance the existing surface area of the film to accommodate more active sites during reaction and in turn sensing of the device with the required analyte or sample. Once the thin film is fabricated, then it is characterized for conductivity, impedance, resonance, XRD (X-ray detector), UV-Vis, SEM (scanning Electron Microscopy) and AFM (Atomic Force Microscopy) techniques. The effect of temperature and atmosphere may affect the sensing on film properties. Hence usually the fabrication of the film is carried out at clean room or ultra clean room, using sophisticated equipments to reduce the error to the maximum extent during the characterization of the synthesized film and application of the sensor in Real Time Bio-Medical Systems. Initially the surface morphology of the film was investigated by an atomic force microscope (AFM), which was a quesant instrument working, at a constant contact force. We want to show the characteristics of the film at different temperature and its stability. The uniformity in the deposition process and surface morphology of the films can be observed by AFM studies. At room temperature and pressure, perform Four-Probe conductivity measurements. The biosensor synthesized consists of interdigitated electrodes made from copper or platinum on quartz plate. We can find the result of the conductivity change of the sensor films at room temperature. The film is connected to a digital multi-meter, with the change in resistance or conductance or impedance can be recorded as WBC count varies in the sample. In future we can also use both the fourprobe gold electrodes deposited on (AL) alumina substrate for online and real time bio-medical applications. The real time resistance changes of the film are monitored with computer interfaced electro-meter after applying analyte on the film. This paper presents the change in the resistivity of the film at room temperature. The change in resistance (R) values obtained for three different types of (ppy-p.acetylene+nano composites) sensor films can be recorded which can be analyzed and processed also. The total change of the resistance (R) is calculated from the subtraction of resistance at normal WBC count in the blood sample (standard or reference value) with each change of the resistance after introduction of affected blood sample. Further it should be marked that the total change in the resistance or current or voltage value versus WBC counts concentration in the blood sample. Here, we want to study and analyze the effect on sensor membrane after the introduction of 1 or 2ml of sample or required analyte. Then change in the resistance or current is observed on the (conducting polymer + nanocomposites) film as soon as the sample is introduced in the measuring unit.
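The resistance-change readout described in this section can be summarised in a few lines of code; the reference resistance and the (count, resistance) pairs below are hypothetical placeholders, not measured data.

```python
# Sketch of the dR readout: the total change is the difference between the
# film resistance with the test sample and the resistance recorded for a
# reference (normal WBC count) sample. All numbers are illustrative.
def delta_r(r_sample_ohm, r_reference_ohm):
    """Total change of resistance after introducing the affected blood sample."""
    return r_sample_ohm - r_reference_ohm

reference_r = 1.20e6   # assumed film resistance for the reference sample (ohm)
# Hypothetical (WBC count, measured film resistance) pairs:
readings = [(4.5e3, 1.21e6), (7.0e3, 1.26e6), (11.0e3, 1.34e6)]
response_curve = [(count, delta_r(r, reference_r)) for count, r in readings]
print(response_curve)   # (count, dR) pairs forming the sensor response curve
```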

Figure 2. Signal processing circuit.

IV. CONCLUSION
In this work, an attempt has been made to fabricate conducting polymer-nano-composite hybrid structure on glass, ITO coated glass plate with interdigitated electrodes. The film treated at different temperatures will be characterized by various physical techniques. The leukocyte counts sensing properties at room temperature and pressure is studied for various samples. This proposed technique for measurement of leukocyte counts in blood sample is quite fast, rapid, cost effective, low power, less human intervention which can be easily installed. In order to be useful as a sensor, the device and the technique should be able to detect the WBC counts at relevant very low concentrations. An effort has been made on the use of highly organized conducting polymer/ nano-composite films for sensing leukocyte counts in the blood sample. We can use, UV-Vis Spectroscopy, SEM (Scanning Electron Microscopy), AFM (Atomic Force Microscopy), conductivity and AC impedance measurements to quantify the different characteristics of the nano-composite polymer films to ensure reproductability, stability, reliability and adaptability for bio-medical real time applications which in turn is useful to the people or public for the welfare of the modern society. Enzyme or antibody based bio-sensors in future are known to be widely used for detection or identification of various virus or bacteria present in the blood sample of human beings and also people can be treated at the earliest, in turn damage to human beings can be prevented to the greater extent.


This proposed new innovative concept details the advanced signal conditioning and processing electronics involved in the design of bio-sensor for leukocyte counts in blood sample for real time applications using conducting polymers, nano-composites and DSP Processor which can implement the application software for various bio-signals within the set deadlines and in turn it will be a great contribution to the people or society.




HRV dynamics under meditation and random thoughts

Ramesh K. Sunkaria

Vinod Kumar

Electronics & Communication Engineering Department Dr. B.R.Ambedkar National Institute of Technology Jalandhar, India. e-mail: rksundee@iitr.ernet.in
3

Electrical Engineering Department Indian Institute of Technology Roorkee Uttaranchal, India. e-mail: vinodfee@iitr.ernet.in
4

Suresh C. Saxena

Shirley Telles

Electrical Engineering Department Indian Institute of Technology Roorkee Uttaranchal, India. e-mail: director@iitr.ernet.in

Yoga Research Centre, Patanjali Yogpeeth, Haridwar, India. e-mail: shirleytelles@gmail.com

Abstract - This study reports the quantification and comparison of heart rate variability (HRV) dynamics during the mental states of random thoughts and meditation. The HRV of thirty experienced yoga practitioners (mean age 28 years) was analyzed in the pre-, during- and post-sessions of random thoughts and meditation using optimized autoregressive modeling. It is observed that the sympathetic tone (LF) (p<0.05) and the sympatho-vagal balance (LF/HF ratio) (p<0.01) decrease significantly, whereas the parasympathetic (HF) tone becomes significantly stronger (p<0.05) under meditation. In contrast, the sympathetic tone and the sympatho-vagal balance become higher under the mental state of random thoughts. The mean heart rate is observed to increase during random thoughts, whereas it decreases during meditation. Thus, heart rate variability is observed to improve under meditation, whereas it is lowered under the state of random thoughts.

Keywords - heart rate variability; autoregressive modeling; yoga; cancalata; dhyana

I. INTRODUCTION
Recent investigations into the influence of yoga, breathing exercises and meditation have shown a strong impact on autonomic cardiac regulation. K. Howorka et al. in 1995 demonstrated, in a preliminary study, an immediate decrease of sympathetic and increase of parasympathetic activity after yoga [1]. P. Raghuraj et al., for the two selected breathing techniques of kapalabhati and nadisuddhi, showed a significant increase in low frequency (LF) power and LF/HF ratio while high frequency (HF) power was lowered following kapalabhati, whereas there were no significant changes following nadisuddhi [2]. C. K. Peng et al. in 1999 reported extremely prominent heart rate (HR) oscillations during Chinese Chi and Kundalini yoga meditation; later on they conducted a study on heart rate (HR) dynamics and cardiopulmonary interactions [4]. S. Vladimir and B. Ofer emphasized the sympathetic nervous system dynamics in countering mental stress and other disturbances affecting visceral homeostasis [5]. S. Patil and S. Telles, in their study of two yogic relaxation effects on HRV in cyclic meditation (CM) and supine rest (SR), showed a decrease in LF and LF/HF ratio during and after CM, whereas HF decreased [6].

The present study evaluates HRV dynamics during the mental states of random thoughts (cancalata) and meditation (dhyana); cancalata and dhyana are the names of these mental states in the Sanskrit language. The changes in HR dynamics during these mental states may give further insight into the influence of these yogic mental states on cardiovascular functioning.

II. METHODS
2.1 Subjects
Thirty normal and healthy volunteers were selected from Swami Vivekanand Research Centre, Bangalore, India for electrocardiogram (ECG) recording. The selected volunteers, in the age range of 22.5 to 38.4 years (mean age = 32.73 years, mean height = 164.57 cm), were regular yoga practitioners having 3-12 years of meditation experience. Informed written consent was obtained from each subject prior to the ECG recording.
2.2 Recording protocols
The subjects underwent thirty-two minutes of lead-II ECG recording under the mental states of random thoughts (cancalata) and meditation (dhyana). The ECG signal was recorded at a sampling frequency of 250 Hz using a 4-channel RMS Polyrite ECG recorder under the following protocols:
I. Random thoughts: This mental state refers to the scattered energy of random and directionless thoughts. The subjects were asked to sit cross-legged in a relaxed condition in a pre-session of duration 5 minutes. During the two-minute refreshing gap, the slides of pictures


During a two-minute refreshing gap, slides of pictures pertaining to mountains, rivers, etc., with no interlinking among the pictures, were shown to prepare the subjects for the cancalata sessions. During the cancalata sessions D1, D2, D3 and D4 (each of five-minute duration) the intervention comprised listening to an FM radio play with dialogues having no logical connection between them. In the post-cancalata session the intervention was withdrawn and the subjects were asked to sit and relax while watching the same kind of unrelated picture slides.

II. Meditation: This state refers to continuous focusing of the mind on a single object; when this becomes effortless, it is termed the state of dhyana or meditation. The subjects were asked to sit in a relaxed condition during a five-minute pre-session. During the two-minute refreshing gap, the subjects were asked to scan an orange-coloured Om picture from top to bottom with closed eyes and then merge into it by visualization. During the dhyana sessions D1, D2, D3 and D4 (each of five-minute duration), the volunteers were asked to imagine the Om picture with closed eyes while chanting Om mentally (session D1). The Om chanting was then gradually slowed, with the gap between chants increasing (D2, D3), finally merging into the object of meditation (D4). The subjects were asked to sit in a relaxed mode in the post-dhyana session.

2.3 Spectral analysis of the HRV signal
R-peaks were detected using a wavelet-based R-peak detection algorithm. Detection using the newly designed wavelet function with low-pass coefficients Li = [0.30 0.25 0.85 0.25 0.30] and high-pass coefficients Hi = [-0.30 0.25 0.85 0.25 -0.30] has been demonstrated to have an accuracy of the order of 99.99% [8-9]. The resulting RR tachogram is resampled at 4 Hz and the HRV indices are quantified using optimized autoregressive (AR) modeling in three frequency bands: very low frequency (VLF, 0.004-0.04 Hz), low frequency (LF, 0.04-0.15 Hz) and high frequency (HF, 0.15-0.40 Hz) [10-13]. An AR model whose order is chosen with Akaike's FPE (final prediction error) criterion has the disadvantage that it gives HRV indices which differ from those evaluated with the fast Fourier transform (FFT) technique. Short-term HRV indices evaluated with the FFT technique on data segments of 256 s duration, Hann windowed with 50% overlap and with averaging of spectra, provide the best HRV indices [10]. For this study, therefore, the model order has been optimized and evaluated on the basis of the total prediction error (TPE). It was observed experimentally that if the highest model order is chosen corresponding to a TPE 1% above the least square error (LSE) computed with the FPE criterion, or up to the point where the trend of the LSE starts increasing, then this model order provides HRV indices which closely match those evaluated with the FFT, without sacrificing the inherent advantages of AR modeling. However, if the model order is chosen corresponding to more than this 1% margin over the least square error, the HRV indices again start deviating from those evaluated with the FFT.

The group values (n=30) of the various HRV indices for both the cancalata and dhyana mental states are computed as mean ± S.D. and are summarized in Tables 1 and 2. The index values during the mental states (D1 to D4) are presented as mean values Dmean (mean of D1, D2, D3 and D4) with their standard deviation (S.D.).

2.4 Time domain analysis
The time-domain indices of heart rate variability, SDNN (ms), RMSSD (ms), SDSD (ms) and HR (bpm), have also been used to support the HRV prognosis based on the spectral methods. SDNN reflects total variability, while the RMSSD and SDSD parameters are correlated with HF power in spectral analysis [13].

TABLE 1. HRV INDICES UNDER CANCALATA MENTAL STATE (mean ± S.D.)

Variable | Pre (5 min) | Dmean (5 min) | Post (5 min)
Spectral domain parameters
LF n.u. * | 56.75 ± 17.23 | 67.37 ± 10.85 | 68.57 ± 14.62
HF n.u. ** | 43.25 ± 17.23 | 32.63 ± 10.85 | 31.43 ± 14.62
VLF n.u. * | 83.77 ± 44.64 | 82.72 ± 37.26 | 87.20 ± 72.57
LF/HF * | 1.94 ± 1.93 | 2.71 ± 1.60 | 3.16 ± 2.67
Ptotal (ms²) | 1030.2 ± 635.8 | 1363.8 ± 1205.4 | 1688.0 ± 650.1
Time-domain parameters
SDNN (ms) ** | 57.92 ± 20.31 | 63.30 ± 21.57 | 70.10 ± 30.21
RMSSD (ms) ** | 45.84 ± 21.33 | 45.24 ± 19.32 | 48.63 ± 20.69
SDSD (ms) * | 45.91 ± 21.37 | 45.31 ± 19.36 | 48.70 ± 20.72
HR (bpm) * | 72.92 ± 8.28 | 74.60 ± 8.34 | 73.95 ± 8.08
* p ≤ 0.05 - significant, ** p ≤ 0.01 - very significant
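The index definitions of Sections 2.3 and 2.4 can be illustrated with a short sketch. The fragment below is not the authors' implementation: it resamples a synthetic RR series at 4 Hz, estimates the band powers with Welch's method as a stand-in for the optimized AR spectrum described above, and computes the time-domain indices; the example RR array and function names are illustrative only.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_indices(rr_s, fs_resample=4.0):
    """Spectral and time-domain HRV indices from an RR-interval series (seconds)."""
    t = np.cumsum(rr_s)                                   # beat times (s)
    t_new = np.arange(t[0], t[-1], 1.0 / fs_resample)     # evenly resample at 4 Hz
    rr_even = interp1d(t, rr_s, kind='cubic')(t_new)
    rr_detr_ms = 1000.0 * (rr_even - rr_even.mean())      # detrended tachogram in ms
    # Welch PSD used here as a stand-in for the optimized AR spectrum of Section 2.3
    f, pxx = welch(rr_detr_ms, fs=fs_resample, nperseg=256)
    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(pxx[m], f[m])
    vlf, lf, hf = band_power(0.004, 0.04), band_power(0.04, 0.15), band_power(0.15, 0.40)
    lf_nu, hf_nu = 100 * lf / (lf + hf), 100 * hf / (lf + hf)
    # Time-domain indices (Section 2.4)
    rr_ms = np.asarray(rr_s) * 1000.0
    diff = np.diff(rr_ms)
    return dict(VLF=vlf, LF=lf, HF=hf, LFnu=lf_nu, HFnu=hf_nu, LF_HF=lf / hf,
                Ptotal=vlf + lf + hf,
                SDNN=np.std(rr_ms, ddof=1),
                RMSSD=np.sqrt(np.mean(diff ** 2)),
                SDSD=np.std(diff, ddof=1),
                HR=60.0 / rr_ms.mean() * 1000.0)

# Illustrative use with a synthetic 300-beat RR series (seconds)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * np.arange(300)) + 0.02 * np.random.randn(300)
print(hrv_indices(rr))
```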

2.5 Statistical analysis
To determine the significance of the variation in the index values during a mental state (cancalata or dhyana) and in the post-session, with the pre-session as baseline, p-values were determined using t-tests. These p-values (* p ≤ 0.05, significant; ** p ≤ 0.01, very significant) were evaluated for all of the listed HRV indices.

III. RESULTS
The effects of the yoga-based mental states of cancalata and dhyana on heart rate variability dynamics are presented in the form of spectral-domain and time-domain HRV indices in Tables 1 and 2. The spectral parameters LF, HF and VLF are presented in normalized units (n.u.). The percentage change in some of the significant HRV index values during both mental states and in the post-sessions, with reference to baseline, is given in Table 3; positive values of the percentage change represent a rise in the index value, whereas negative values represent a decrease with reference to the respective baseline (pre-session) value. These changes are also shown in Figure 1 for the during-session values and in Figure 2 for the post-session values, to appreciate the effect of the interventions.

As per the observations in Tables 1 and 2, the sympathetic tone (LF) is significantly high (p ≤ 0.05) during the cancalata state, while it is lower in the dhyana state. The parasympathetic tone (HF) is very significantly dominant (p ≤ 0.01) during the dhyana sessions, while in the cancalata state the parasympathetic tone is weakest, suggesting low control over the autonomic balance. The autonomic balance indicated by the LF/HF ratio is very significantly (p ≤ 0.01) within normal limits in the case of dhyana, whereas it is high during the cancalata state. Moreover, the total absolute power is higher in the dhyana state. This trend is maintained in the time-domain variable SDNN (p ≤ 0.01). The heart rate during the dhyana sessions is low, whereas in the cancalata sessions it is high. In the post-sessions, LF is higher in the post-cancalata session than in the post-dhyana session with reference to baseline, whereas HF is lower in the post-cancalata session than in the post-dhyana session. The LF/HF ratio is higher in the post-cancalata session than in the post-dhyana session. The heart rate (HR) remains lowest in the post-dhyana session, while in the post-cancalata session it is observed to be highest. These trends of the index values during and after the cancalata and dhyana sessions are shown as percentage changes in Table 3.

TABLE 2. HRV INDICES UNDER DHYANA MENTAL STATE (mean ± S.D.)

Variable | Pre (5 min) | Dmean (5 min) | Post (5 min)
Spectral domain parameters
LF n.u. * | 58.84 ± 16.10 | 63.51 ± 15.22 | 68.23 ± 12.33
HF n.u. ** | 41.16 ± 16.10 | 36.49 ± 15.22 | 31.77 ± 12.33
VLF n.u. * | 80.84 ± 45.07 | 51.14 ± 23.10 | 63.78 ± 62.86
LF/HF ** | 2.08 ± 1.92 | 2.37 ± 1.47 | 2.67 ± 1.49
Ptotal (ms²) * | 1014.3 ± 451.6 | 1437.0 ± 538.8 | 1432.8 ± 932.72
Time-domain parameters
SDNN (ms) ** | 56.55 ± 18.17 | 68.17 ± 17.74 | 62.40 ± 15.48
RMSSD (ms) * | 44.39 ± 17.32 | 55.52 ± 19.16 | 48.94 ± 18.84
SDSD (ms) * | 44.47 ± 17.36 | 55.61 ± 19.20 | 49.01 ± 18.87
HR (bpm) * | 74.77 ± 9.02 | 72.52 ± 9.14 | 73.87 ± 8.75
* p ≤ 0.05 - significant, ** p ≤ 0.01 - very significant

TABLE 3. PERCENTAGE CHANGE IN MEAN HRV INDICES WITH PRE-SESSION VALUES AS BASELINE

Variable | During cancalata | Post cancalata | During dhyana | Post dhyana
LF n.u. | 18.9 | 20.9 | 7.9 | 15.9
HF n.u. | -24.5 | -27.3 | -11.3 | -22.8
LF/HF | 39.7 | 62.9 | 13.9 | 28.3
Ptotal (ms²) | 32.4 | 63.8 | 41.7 | 41.3
SDNN (ms) | 9.3 | 21.0 | 20.5 | 10.3
HR (bpm) | 2.3 | 1.4 | -3.0 | -1.2
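As a small worked example of how the entries of Table 3 and the p-values of Section 2.5 are obtained, the sketch below computes the percentage change of an index relative to its pre-session baseline and a paired t-test across subjects; the arrays are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Illustrative per-subject index values (not the study data): pre-session vs session mean
pre = np.array([52.1, 61.3, 55.0, 58.7, 49.9])
during = np.array([60.4, 70.2, 62.8, 66.1, 58.3])

pct_change = 100.0 * (during.mean() - pre.mean()) / pre.mean()   # Table-3-style entry
t_stat, p_value = stats.ttest_rel(during, pre)                   # paired t-test (Section 2.5)
print(f"%change = {pct_change:.1f}, p = {p_value:.3f}")
```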
Figure 1. Percentage change in the HRV indices (LF n.u., HF n.u., LF/HF, Ptotal, SDNN, HR) during the cancalata and dhyana sessions with reference to baseline (pre-session).

Figure 2. Percentage change in the HRV indices in the post-cancalata and post-dhyana sessions with reference to baseline (pre-session).

IV. DISCUSSIONS
The sympathetic tone (LF) dominates during both mental states, but it is more dominant during the cancalata sessions with respect to their pre-session values, which may be due to the increasingly illogical nature of the radio play activating more sympathetic discharge; the increase in LF in the dhyana sessions is moderate. The percentage rise in LF with respect to the baseline (pre-session) value is larger during and after the sessions in the case of cancalata, which is indicative of higher sympathetic dominance. The parasympathetic tone (HF power) decreases more in the cancalata sessions than in the dhyana sessions; the parasympathetic tone is stronger under the influence of dhyana than under cancalata, which is indicative of an increase in heart rate variability. This is borne out by the percentage fall in HF values during and after the interventional sessions. The LF/HF ratio increases more during the cancalata sessions than during the dhyana sessions, and this trend persists in the post-sessions: the ratio becomes even higher after the sessions and is highest in the post-cancalata state. Thus HRV is reduced under random thoughts, and this trend remains in the post-cancalata session. Under dhyana and in its post-session the LF/HF ratio tilts more towards its normal limits (i.e., the LF/HF ratio of normal, healthy subjects), so heart rate variability becomes higher under dhyana.

The other important observation is that the thermoregulatory variations represented by VLF are higher under the cancalata state of mind, whereas they are reduced during the dhyana sessions. The total variability represented by the total spectral power Ptotal (ms²) becomes higher during both states of mind, but the increase in total power with reference to the pre-session value is larger during the dhyana sessions than during the cancalata sessions. In the post-sessions the increase in total power is even higher, and it is larger in the post-cancalata session than in the post-dhyana session. The SDNN also rises more during the dhyana sessions than during the cancalata sessions with respect to their pre-session values, while the post-cancalata SDNN rise is higher than that of the post-dhyana session. However, the standard deviations of the total power Ptotal (ms²) and of SDNN are very high during and after the cancalata sessions, which may be accounted for by increased inter-visceral dynamics resulting from the intervention of this mental state. The relative rise in the RMSSD value is higher in the dhyana sessions than in the cancalata sessions; RMSSD corresponds to the HF power among the spectral HRV parameters and is another parameter indicative of increased HRV in the meditative state of mind. It is noticeable that the heart rate decreases during the dhyana sessions, which results in less frequent ventricular contractions, whereas the mean heart rate increases during the cancalata sessions, reflecting more frequent ventricular contractions to counter the increased neuro-humoral dynamics caused by the state of random thoughts.

V. CONCLUSIONS
The sympathetic tone (LF power) becomes higher under the state of random thoughts (cancalata).

This sympathetic dominance persists even in the post-cancalata session, but to a lesser extent in the post-dhyana session. The autonomic balance (LF/HF ratio) is dominated by sympathetic tone during cancalata. The parasympathetic activity remains stronger during and even after the dhyana sessions in comparison with the during- and post-cancalata sessions. This leads to reduced HRV under the cancalata state of mind, whereas HRV is higher under dhyana (Om meditation) than under cancalata. The thermoregulatory variations are increased under random thoughts (cancalata) and reduced under meditation (dhyana); in the post-cancalata session the thermoregulatory variations remain higher, whereas in the post-dhyana session they become low in comparison with their pre-session magnitudes. The mean heart rate is reduced under dhyana and becomes higher under cancalata, so the neuro-humoral regulation of the body's viscera is under better control in the state of dhyana. In the post-dhyana session the mean heart rate is reduced, whereas in the post-cancalata session it becomes higher with respect to the pre-session mean heart rate.

REFERENCES
[1] K. Howorka, J. Pumprla, G. Heger, H. Thoma, J. Opavsky and J. Salinger, "Computerized assessment of autonomic influences of yoga using spectral analysis of heart rate variability," Proceedings RC IEEE-EMBS & 14th BMESI, pp. 1.61-1.62, 1995.
[2] P. Raghuraj, A. G. Ramakrishnan, H. R. Nagendra and S. Telles, "Effect of two selected yogic breathing techniques on heart rate variability," Indian J Physiol Pharmacol, vol. 42, no. 4, pp. 467-472, 1998.
[3] C. K. Peng, J. E. Mietus, Y. Liu, G. Khalsa, P. S. Douglas, H. Benson and A. L. Goldberger, "Exaggerated heart rate oscillations during two meditation techniques," International Journal of Cardiology, vol. 70, pp. 101-107, 1999.
[4] C. K. Peng, I. C. Henry, J. E. Mietus, J. M. Hausdorff, G. Khalsa, H. Benson and A. L. Goldberger, "Heart rate dynamics during three forms of meditation," International Journal of Cardiology, vol. 95, pp. 19-27, 2004.
[5] V. Shusterman and O. Barnea, "Sympathetic nervous system activity in stress and biofeedback relaxation," IEEE Engineering in Medicine and Biology, pp. 52-57, March/April 2005.
[6] S. Patil and S. Telles, "Effects of two yoga based relaxation techniques on heart rate variability (HRV)," International Journal of Stress Management, vol. 13, no. 4, pp. 1-16, 2006.
[7] F. Travis, D. A. F. Haaga, J. Hagelin, M. Tanner, S. Nidich, C. Gaylord-King, S. Grosswald, M. Rainforth and R. H. Schneider, "Effects of Transcendental Meditation practice on brain functioning and stress reactivity in college students," International Journal of Psychophysiology, article in press, 2008.
[8] S. C. Saxena, V. Kumar and S. T. Hamde, "QRS detection using new wavelets," Journal of Medical Engineering & Technology, vol. 26, no. 1, pp. 7-15, January 2002.
[9] B. U. Kohler, C. Henning and R. Orglmeister, "The principles of software QRS detection, reviewing and comparing this important ECG waveform," IEEE Engineering in Medicine and Biology, vol. 21, no. 1, pp. 42-57, January 2002.
[10] D. Singh, V. Kumar, S. C. Saxena and K. K. Deepak, "Effects of RR segment duration on HRV spectrum estimation," Physiological Measurement, vol. 25, pp. 721-735, 2004.
[11] G. Baselli, D. Bolis, S. Cerutti and C. Freschi, "Autoregressive modeling and power spectral estimate of R-R interval time series in arrhythmic patients," Computers and Biomedical Research, vol. 18, pp. 510-530, 1985.
[12] J. L. A. Carvalho, A. F. Rocha, I. dos Santos, C. Itoki, L. F. Junqueira and F. A. O. Nascimento, "Study on the optimal order for the auto-regressive time-frequency analysis of heart rate variability," Proceedings of the 25th Annual International Conference of the IEEE EMBS, September 2003.
[13] Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, "Heart rate variability: standards of measurement, physiological interpretation and clinical use," European Heart Journal, vol. 17, pp. 354-381, 1996.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Evaluation of Spirometric Pulmonary Function Test using Principal Component Analysis and Support Vector Machines
Kavitha A, Sujatha C M * and Ramakrishnan S# Department of Electronics and Instrumentation Engineering, Sri Sairam Engineering College, Chennai, India * Department of Electronics and Communication Engineering, Anna University Chennai, India # Department of Instrumentation Engineering Madras Institute of Technology, Anna University Chennai, India,

ramki@mitindia.edu, ramki@annauniv.edu

Abstract—In this work, an attempt has been made to classify the pulmonary function data obtained from flow-volume spirometers using Principal Component Analysis (PCA) and Support Vector Machines (SVM). For this study, flow-volume curves (N = 50) were generated with spirometers under a standard recording protocol. A method based on PCA is used to select the most significant parameters of the pulmonary function data; the parameters are ranked according to their magnitudes in the principal components obtained using PCA. SVM is then used to classify the spirometric data based on the reduced parameter set. The selected parameters were trained (training = 30) and tested (testing = 20) using SVM, and four different kernels were chosen for the classification. The results demonstrate that the parameters selected by PCA were capable of classifying the spirometric data using SVM. The performance estimators obtained in the classification process show that the quadratic and RBF kernels performed better in terms of sensitivity and specificity. The polynomial kernel was found to have comparable specificity values but was less sensitive; among all the kernels, it employed the fewest support vectors for the classification. Further tuning of the kernel parameters using various optimization algorithms would yield better classification performance. It appears that this method of parameter reduction and classification using PCA and SVM on spirometric datasets could be useful in reducing the computational complexity of the analysis of respiratory abnormalities.

Index Terms—Spirometry, feature selection, principal component analysis, support vector machines.

I. INTRODUCTION

SPIROMETRY is a measure of dynamic lung volumes and capacities as a function of time during forced expiration and inspiration. It quantifies how quickly and effectively the lungs can be emptied and filled. It is used for the measurement and interpretation of ventilatory function in clinical practice. The spirometer has been proposed as an important instrument in the early diagnosis of respiratory impairments.

It is useful in distinguishing respiratory from cardiac disease as the cause of breathlessness and can be used to screen for respiratory diseases in mass screening programmes [1, 2]. A spirometer generates about 20 parameters in general, and the number of parameters increases when different bronchodilation tests are performed along with normal pulmonary function testing. Classification of respiratory abnormalities based on these parameters is a complex process. Not all the parameters obtained from the spirometer are highly significant, and computation based on a reduced set of significant parameters lowers the computational complexity of classification. A method based on PCA is used to select the most significant parameters of the pulmonary function data; the parameters are ranked based on their magnitudes in the principal components obtained using PCA. PCA is the best-known unsupervised linear feature extraction algorithm; it is a linear mapping which uses the eigenvectors with the largest eigenvalues, and its goal is to minimize the projection error [3-5]. PCA has been successfully employed in assessing chest wall movement in pathologies such as chronic obstructive pulmonary disease, emphysema and muscular dystrophy, and it has been used in the assessment of lung function to explore the relationships between outcomes and classes of asthma medication and to identify appropriate treatment routines [6, 7]. Support vector machines perform well compared with other learning algorithms because they are effective in controlling the classifier's capacity and the associated potential for overfitting [8]. Support vector machines, proposed by Vapnik, find a maximum-margin hyperplane for classification by mapping the data to a higher dimensional space using a kernel function. The kernel function maps the input data into a higher dimensional space without computing all of its elements and connects the input space and the higher dimensional space directly. A maximum soft-margin separating hyperplane is then chosen in this higher dimensional space, which separates the training instances by their classes [9]. In this work, Principal Component Analysis is employed for ranking the most significant features from the available spirometric parameter set. Further, the reduced parameter set is subjected to Support Vector Machines for classifying the spirometric data into normal and abnormal, and the performance of the different kernels in the classification process is compared.


II. METHODOLOGY
For the present study, spirometric data obtained from 50 adult volunteers were considered. Spirometric measurements were made with a volumetric transducer, as it has proven accuracy and stability. The acceptability and reproducibility criteria were adopted as per the recommendations of the American Thoracic Society [10].

The spirometric data with all the measured parameters are subjected to PCA. PCA transforms the input space into a new, lower-dimensional space whose axes (or components) are linear combinations of the original axes. The primary axis, or first principal component, is selected to represent the direction of maximum variation in the data; the secondary axis, orthogonal to the primary axis, represents the direction of the next largest variation in the data, and so on. PCA thus uncovers combinations of the original variables which describe the dominant patterns and the main trends in the data. This is done through an eigenvector decomposition of the covariance matrix of the original variables. The extracted latent variables are orthogonal and are sorted according to their eigenvalues [11, 12]. The high-dimensional space described by the matrix X is modeled using PCA as

X = T P^T + E                                                          (1)

where T is the score matrix composed of the principal components, P is the loading matrix composed of the eigenvectors of the covariance matrix, and E is the residual matrix (the variance that was not captured by the model). The loadings provide information on how the original variables are linearly combined to form the latent variables [13-15].

The dataset obtained from the spirometer, containing 20 parameters, is subjected to PCA. The principal components obtained from PCA are analyzed for ranking the most significant features. The percentage variances explained for the normal and abnormal subjects are estimated, the principal components that explain the maximum percentage variance are chosen, and the corresponding component magnitudes are analyzed. The parameters with the highest magnitudes in the loadings of these principal components are chosen for further classification using SVM.

The SVM is a linear machine with one output y(x), working in a high-dimensional feature space formed by the nonlinear mapping of the N-dimensional input vector x into a K-dimensional feature space (K > N) through the use of a nonlinear function φ(x). The number of hidden units (K) is equal to the number of support vectors, which are the learning data points closest to the separating hyperplane. The SVM is a linear classifier in the parameter space, but it becomes a nonlinear classifier as a result of the nonlinear mapping of the space of the input patterns into the high-dimensional feature space. Training the SVM is a quadratic optimization problem: the construction of a hyperplane w^T x + b = 0 (w is the vector of hyperplane coefficients, b is a bias term) such that the margin between the hyperplane and the nearest point is maximized can be posed as a quadratic optimization problem. SVMs provide high generalization ability. A proper kernel function for a given problem depends on the specific data, and there is no general method for choosing a kernel function.

For a set of training vectors belonging to two separate classes,

D = {(x1, y1), ..., (xl, yl)},  x ∈ R^n,  y ∈ {-1, 1}                  (2)

with a hyperplane

⟨w, x⟩ + b = 0                                                         (3)

the set of vectors is said to be optimally separated by the hyperplane if it is separated without error and the distance between the closest vector and the hyperplane is maximal. Without loss of generality it is appropriate to consider a canonical hyperplane, where the parameters w, b are constrained by

min_i |⟨w, xi⟩ + b| = 1.

In the case where a linear boundary is inappropriate, the SVM can map the input vector x into a high-dimensional feature space z. By choosing a nonlinear mapping a priori, the SVM constructs an optimal separating hyperplane in this higher dimensional space. An inner product in feature space has an equivalent kernel in input space,

K(x, x') = ⟨φ(x), φ(x')⟩                                               (4)

A polynomial mapping is a popular method for nonlinear modeling,

K(x, x') = ⟨x, x'⟩^d                                                   (5)

where d is the degree of the polynomial. Radial basis functions have received significant attention, most commonly with a Gaussian of the form

K(x, x') = exp(-‖x - x'‖² / (2σ²)),

which produces a piecewise linear solution that can be attractive when discontinuities are acceptable [16]. The reduced dataset with the parameters selected using PCA is classified using the RBF and polynomial kernels, and their performances are compared using measures such as sensitivity, specificity, accuracy and the total number of support vectors employed in the classification model.

III. RESULTS AND DISCUSSION
The typical responses of a spirometer, showing the variation of airflow with lung volume for normal, obstructive and restrictive subjects, are presented in Figures 1(a), 1(b) and 1(c) respectively. The normal flow-volume curve was found to have a rapid peak expiratory flow rate with a gradual decline in flow. In restrictive subjects, the shape of the flow-volume loop was relatively unaffected, but the overall size of the curve appears smaller when compared with normal. In obstruction, there was a rapid peak expiratory flow but the curve descends more quickly than normal and takes on a concave shape.
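A minimal sketch of the parameter-ranking and classification pipeline described in Section II is given below, using scikit-learn. It is not the authors' code: the feature matrix, label vector and PCA/SVC parameters are placeholders, and the ranking simply orders features by the magnitude of their loadings on the first principal component, as explained above.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder spirometric data: 50 subjects x 20 measured parameters, binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
y = rng.integers(0, 2, size=50)

# Rank parameters by the magnitude of their loadings on PC1 (Section II)
Xs = StandardScaler().fit_transform(X)
pca = PCA().fit(Xs)
pc1_loadings = np.abs(pca.components_[0])
top5 = np.argsort(pc1_loadings)[::-1][:5]      # five most significant parameters

# Classify the reduced parameter set with RBF and polynomial SVM kernels
X_train, X_test, y_train, y_test = train_test_split(
    Xs[:, top5], y, train_size=30, random_state=0)
for kernel in ("rbf", "poly"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, "accuracy:", clf.score(X_test, y_test),
          "support vectors:", clf.n_support_.sum())
```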

Fig. 1. Flow-volume curves of normal, obstructive and restrictive subjects.

TABLE I. DESCRIPTIVE STATISTICS (MEAN ± S.D.) OF ALL THE PARAMETERS OBTAINED FROM THE SPIROMETER FOR NORMAL (N, n=18), RESTRICTIVE (R, n=20) AND OBSTRUCTIVE (O, n=12) SUBJECTS

S.No. | Parameter | N (18) | R (20) | O (12)
1 | Age | 44.8 ± 17.7 | 39.7 ± 13.1 | 49.3 ± 14.1
2 | Height | 160.4 ± 10.2 | 161 ± 10.2 | 167 ± 10.2
3 | Weight | 61 ± 12.9 | 65 ± 13.6 | 64.8 ± 17.7
4 | FVC | 2.5 ± 0.9 | 2.6 ± 0.9 | 2.3 ± 0.9
5 | FVC Pred | 3.3 ± 0.9 | 3.0 ± 0.6 | 3.4 ± 0.9
6 | % Pred | 76.1 ± 16.8 | 83.8 ± 17.7 | 67.1 ± 19.4
7 | FEV1 | 1.7 ± 0.6 | 2.0 ± 0.7 | 1.2 ± 0.6
8 | FEV1 Pred | 2.7 ± 0.7 | 2.6 ± 0.5 | 2.7 ± 0.8
9 | % Pred | 61.5 ± 14.3 | 78.5 ± 18.2 | 42.1 ± 12.5
10 | FEV1% | 67.1 ± 2.4 | 78.8 ± 4.0 | 51.6 ± 6.2
11 | FEV1% Pred | 83.1 ± 2.9 | 84.4 ± 2.5 | 81.9 ± 1.9
12 | % Pred | 80.9 ± 3.02 | 63 ± 5.8 | 63 ± 6.6
13 | PEF | 4.1 ± 1.67 | 4.9 ± 1.84 | 2.8 ± 1.7
14 | PEF Pred | 7.4 ± 1.7 | 6.8 ± 1.3 | 7.9 ± 1.4
15 | % Pred | 54.1 ± 15.3 | 72 ± 21.1 | 33.9 ± 18.4
16 | FEF25-75% | 1.2 ± 0.4 | 1.9 ± 0.8 | 0.7 ± 0.3
17 | FEF25-75% Pred | 3.4 ± 0.9 | 3.3 ± 0.7 | 3.4 ± 1
18 | % Pred | 32.9 ± 9.4 | 58.1 ± 19.3 | 19.1 ± 4.6
19 | FEV6 | 2.5 ± 0.9 | 2.5 ± 0.9 | 2.3 ± 0.9
20 | (FEV1/FEV6) % | 66.1 ± 3.9 | 78.9 ± 3.7 | 52.5 ± 6.1
N - Normal, R - Restrictive, O - Obstructive

The pulmonary function parameters available from the spirometer are listed in Table I. It is seen from this statistical representation that the mean and standard deviation values vary widely for obstructive subjects compared with the normal and restrictive subjects. Since a large number of parameters is involved in the analysis of pulmonary function tests, identifying the most significant ones is essential. The contributions of the various principal components to the percentage variance for the three subject groups are shown in Table II. It is observed that the first five principal components explain more than 95% of the total variance irrespective of the subject group, so an analysis of the loadings of these five components is sufficient for ranking the features; moreover, more than 50% of the variance is explained by the first principal component, PC1. Table III lists the component magnitudes of the eigenvector corresponding to PC1, which has the largest eigenvalue; from these, the parameter responsible for the maximum variance in the data is readily identified. The magnitudes of the five largest components of PC1 were examined: component PC1-7 had the largest magnitude, followed by PC1-13, PC1-4, PC1-10 and PC1-19.

TABLE II. CONTRIBUTION OF THE PRINCIPAL COMPONENTS TO THE PERCENTAGE VARIANCE FOR NORMAL (N), RESTRICTIVE (R) AND OBSTRUCTIVE (O) SUBJECTS

Principal component | N | R | O
PC 1 | 51.5 | 54.7 | 69.4
PC 2 | 19.6 | 26.4 | 12.4
PC 3 | 14.1 | 8.8 | 10.4
PC 4 | 7.7 | 7.8 | 4.6
PC 5 | 4.3 | 1.2 | 2.8

The higher magnitude of PC1-7 denotes the similarity in direction of PC1 with that basis vector of the original feature space. Thus feature PC1-7 was the most sensitive parameter, followed by PC1-13, PC1-4, PC1-10 and PC1-19. As a result, the presented selection scheme was able to rank the five parameters FVC, FEV1, FEV1%, PEF and FEV6. The PCA approach was thus applied to select the most representative features extracted from the original data, to improve the effectiveness of abnormality identification.

TABLE III. EIGENVECTOR COMPONENT MAGNITUDES IN THE LOADINGS OF PRINCIPAL COMPONENT 1

Component | Magnitude | Feature
PC 1-4 | 0.29 | FVC
PC 1-7 | 0.32 | FEV1
PC 1-10 | 0.29 | FEV1%
PC 1-13 | 0.30 | PEF
PC 1-19 | 0.28 | FEV6

Figs. 2(a) and 2(b) represent the sub-analysis of the SVM classification performed on two significant features chosen from the PCA-ranked features, applying the RBF and polynomial kernels respectively. The sensitivity and specificity values are either higher than or comparable with the values obtained for the dataset without PCA reduction. The number of support vectors chosen by both kernels using the PCA-reduced parameter set is, however, higher than for the complete measured parameter set.

Fig. 2. Sub-analysis of the SVM classification (FEV1 versus FVC, showing training points, classified points and support vectors) using (a) the RBF and (b) the polynomial kernel.

TABLE IV. PERFORMANCE OF THE SPIROMETRIC DATA CLASSIFICATION WITHOUT AND WITH PCA

Performance estimator | Without PCA (RBF) | Without PCA (Polynomial) | With PCA (RBF) | With PCA (Polynomial)
Sensitivity | 1 | 50 | 1 | 70
Specificity | 20 | 60 | 50 | 60
Accuracy | 60 | 65 | 75 | 65

The various performance measures estimated after the support vector classification are listed in Table IV. It is observed from the results that both kernels perform well with the parameters selected using PCA. The sensitivity and specificity values, which reflect the true positives and false negatives of the classification, are high in the SVM models utilizing PCA-based parameter selection.
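For reference, the performance estimators reported in Table IV can be computed from a confusion matrix as in the short sketch below; the predicted and true label arrays are placeholders, not the study's data.

```python
import numpy as np

def performance_estimators(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = abnormal)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

print(performance_estimators([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```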

IV. CONCLUSION
Spirometry is a simple and informative technique for characterizing pulmonary function and is well established in clinical medicine. It is useful in both the diagnosis and the monitoring of lung diseases, but it is an effort-dependent test that requires the cooperation of the investigated subject; it has been found that nearly 40% of spirometric results are unacceptable due to the inability of the person performing the test [17, 18]. Thus, the reliability of spirometric tests depends on the accuracy, the performance of the subject, and the measured and predicted values of the parameters. With a large number of parameters to assess and the inability of some patients to complete the test, classification of spirometric data becomes difficult. Despite this, spirometry is considered one of the most clinically useful pulmonary function tests and is commonly employed in everyday health care [19, 20]. In this work, PCA is used to identify the most significant parameters in the available dataset, and support vector machines are then used to classify the abnormalities based on the reduced parameter set. It appears that PCA is efficient for parameter reduction when its output is further processed with SVM-based classification.


REFERENCES
[1] P. J. David and R. Pierce, "Spirometry: The measurement and interpretation of ventilatory function in clinical practice," Spirometry Handbook, 3rd edition, pp. 1-24, 2008.
[2] M. R. Miller, J. Hankinson, V. Brusasco, F. Burgos, R. Casaburi, A. Coates, R. Crapo, P. Enright, C. P. M. van der Grinten, P. Gustafsson, R. Jensen, D. C. Johnson, N. MacIntyre, R. McKay, D. Navajas, O. F. Pedersen, R. Pellegrino, G. Viegi and J. Wanger, "Standardisation of spirometry," Eur. Resp. J., vol. 26, pp. 319-338, 2005.
[3] H. Gao, W. Hong, J. Cui and Y. Xu, "Optimization of principal component analysis in feature extraction," Proc. IEEE Int. Conf. Mechat. Aut., China, pp. 3128-3132, 2007.
[4] M. Arnaz and X. G. Robert, "PCA-based feature selection scheme for machine defect classification," IEEE Trans. Inst. Meas., vol. 53, no. 6, pp. 1517-1525, 2004.
[5] C. G. Daniel and D. T. Jonathan, "Clinical review: Respiratory mechanics in spontaneous and assisted ventilation," Crit. Care, vol. 9, pp. 472-484, 2005.
[6] G. Ferrigno and P. Carnevali, "Principal component analysis of chest wall movement in selected pathologies," Med. Biol. Engg. Comput., vol. 36, pp. 445-451, 1998.
[7] C. R. Jenkins, F. C. K. Thien, J. R. Wheatley and H. K. Reddel, "Traditional and patient centred outcomes with three classes of asthma medication," Eur. Resp. J., vol. 26, pp. 36-44, 2005.
[8] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge: Cambridge University Press.
[9] V. N. Vapnik, Statistical Learning Theory, New York: John Wiley & Sons.
[10] B. G. Cooper and F. Madsen, "European respiratory buyers guide," vol. 3, pp. 40-43, 2000.
[11] M. P. S. Chawla, "A comparative analysis of principal component and independent component techniques for electrocardiograms," Neur. Comput. Appl., vol. 18, pp. 539-556, 2009.
[12] B. Heisele, T. Serre, S. Prentice and T. Poggio, "Hierarchical classification and feature reduction for fast face detection with support vector machines," Patt. Recog., vol. 36, pp. 2007-2017, 2003.
[13] D. Aguado, T. Montoy, L. Borras, A. Seco and J. Ferrer, "Using SOM and PCA for analysing and interpreting data from a P-removal SBR," Eng. App. Art. Int., vol. 21, pp. 919-930, 2008.
[14] B. Igor, "Principal component analysis is a powerful instrument in occupational hygiene enquiries," Ann. Occup. Hyg., vol. 48, pp. 655-661, 2004.
[15] G.-D. Samanwoy, A. Hojjat and D. Nahid, "Principal component analysis-enhanced cosine radial basis function neural network for robust epilepsy and seizure detection," IEEE Trans. Biomed. Engg., vol. 50, no. 2, pp. 512-518, 2008.
[16] S. Gunn, "Support Vector Machines for Classification and Regression," Technical report, Faculty of Engineering, Science and Mathematics, School of Electronics and Computer Science, University of Southampton, pp. 1-28, 1998.
[17] W. T. Ulmer, "Lung function - clinical importance, problems and new results," J. Physiol. Pharmacol., vol. 54, pp. 11-13, 2003.
[18] A. Feyrouz, M. Reena and J. M. Peter, "Interpreting pulmonary function tests: Recognise the pattern, and the diagnosis will follow," Cleveland Clinical J. Med., vol. 70, pp. 866-880, 2003.
[19] C. M. Sujatha and S. Ramakrishnan, "Prediction of forced expiratory volume in pulmonary function test using radial basis neural networks and k-means clustering," J. Med. Syst., vol. 33, pp. 347-351, 2009.
[20] D. Sahin, E. D. Ubeyli, G. Ilbay, M. Sahin and A. B. Yasar, "Diagnosis of airway obstruction or restrictive spirometric patterns by multiclass support vector machines," J. Med. Syst., DOI 10.1007/s10916009-9312-9317, 2009.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Ectopic Beats in Poincaré Plot Based HRV Assessment

Butta Singh
Department of Electronics and Communication Engineering, Guru Nanak Dev University Regional Campus, Jalandhar, India bsl.khanna@gmail.com

Dilbag Singh
Department of Instrumentation and Control Engineering, National Institute of Technology, Jalandhar, India singhd@nitj.ac.in

Abstract—Ectopic beats, originating from a site other than the normal one, are artifacts that impose a serious limitation on heart rate variability (HRV) analysis. The Poincaré plot is a promising technique for the visual identification of ectopic beats. The aim of this study is to assess the effect of ectopic beats on the Poincaré plot indices SD1 and SD2. Normal RR interval time series of lengths 900, 500 and 300, for each of 10 healthy subjects, were analysed after inserting low (α=0.8), medium (α=0.4) and high (α=0.1) level artificial ectopic beats. SD1 was found to be more sensitive to ectopic beats than SD2. Even the addition of a single ectopic beat leads to an increase in SD1 and SD2. The effect of ectopic beats on these indices was found to be a function of the amount and level of the ectopic beats and of the length of the RR interval series.

Keywords - HRV; Ectopic beats; Poincaré plot

I. INTRODUCTION

The study of variations in the beat-to-beat timing of the heart is known as heart rate variability (HRV). HRV is a non-invasive technique to study sympathovagal interactions in physiological and pathological conditions, and HRV analysis has been shown to provide an assessment of cardiovascular disease and of the functioning of the autonomic nervous system (ANS) [1]-[3]. Variation in heart rate may be evaluated by time-domain, frequency-domain and nonlinear analysis [4]-[9]. Most ECG recordings include technical or biological artefacts that bias the reliable measurement of HRV. Technical artefacts may result from poorly fastened electrodes or from motion during ambulatory recording; ectopic beats, atrial fibrillation and ventricular tachycardia are examples of biological artefacts in ECG recordings. Ectopic beats are generated by additional electrical impulses imposed by local pacemakers. In terms of timing, ectopic beats are defined as those which have intervals less than or equal to 80% of the previous sinus cycle length [10], [11]. The presence of ectopic beats perturbs the impulse pattern initiated by the sinoatrial node, and the autonomic modulation of the sinoatrial node is temporarily lost [10]. Recently, Peltola et al. [12] performed a study to clarify how ectopic beats influence fractal correlation properties, and Clifford et al. [11] studied the effect of ectopic beats on spectral estimation.

A similar study was performed by Salo et al. [13] on the effect of ectopic beats on the linear parameters of HRV. HRV analysis relies on the beat-to-beat variation in sinus cycle length to detect changes in the activity of the autonomic nervous system. A number of measures employing both time-domain and frequency-domain analysis have been proposed and evaluated in a variety of patient populations likely to have a substantial amount of spontaneous ectopic beats [11]-[13]. Poincaré plot analysis is an emerging quantitative visual technique for identifying the presence of ectopic beats. The Poincaré plot is becoming popular due to its simple visual interpretation and its proven clinical ability as a predictor of disease and cardiac dysfunction [14]-[16]. However, there have been no systematic studies of the effect of ectopic beats on Poincaré plot indices. Despite the Task Force recommendations for ectopic editing [4], the effect of ectopic beats on the different Poincaré plot indices needs attention for the widespread use of HRV in clinical situations. The aim of this study was to quantify the differences in the Poincaré plot parameters of HRV estimates caused by inserting artificial ectopic beats into normal RR interval series of different lengths.

II. PATIENTS AND METHODS

This study involves 10 healthy subjects. Ectopic-free RR interval series of length N=900 (long-term data), 500 (medium-length data) and 300 (short-term data) for each subject were derived from lead-II ECG recordings with a sampling frequency of 500 Hz, resulting in a total of 30 ectopic-free RR interval series. The data were acquired on a Biopac MP100 system at the Biomedical Instrumentation Laboratory, NIT Jalandhar. Artificial ectopic beats were inserted into the normal RR interval time series to study the effect of different amounts of ectopic beats. The insertion of an ectopic beat corresponds to the replacement of two beats as follows [11]: the nth and (n+1)th intervals RRn and RRn+1 are replaced by

RR'n = α·RRn,   RR'n+1 = RRn+1 + (1 − α)·RRn,

where the ectopic beat's duration is the fraction α of the previous RR interval; α thus specifies the level of the ectopic beat. In each of the ectopic-free RR interval segments of the 10 subjects, ectopic beats were inserted at each of three levels (α=0.1, high level of ectopy; α=0.4, medium level of ectopy; α=0.8, low level of ectopy).


The number of randomly placed ectopic beats was gradually increased from 1 to 10 in each normal RR interval series.
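The sketch below illustrates the ectopic-beat insertion rule described above. It is not the authors' code, and it assumes that the beat following the inserted ectopic is lengthened so that the total duration of the two replaced intervals is preserved; the RR array and the choice of insertion positions are placeholders.

```python
import numpy as np

def insert_ectopics(rr, n_ectopics, alpha, rng=None):
    """Insert artificial ectopic beats into an RR interval series (seconds).

    Each insertion replaces the pair (RR[n], RR[n+1]) by
    (alpha*RR[n], RR[n+1] + (1-alpha)*RR[n]): the ectopic beat's duration is
    the fraction alpha of the previous interval, and the following interval is
    lengthened so the total time is unchanged (an assumed compensatory pause).
    """
    rr = np.array(rr, dtype=float)
    rng = rng or np.random.default_rng()
    positions = rng.choice(len(rr) - 1, size=n_ectopics, replace=False)
    for n in positions:
        shortened = alpha * rr[n]
        rr[n + 1] += rr[n] - shortened
        rr[n] = shortened
    return rr

# Example: 3 high-level (alpha = 0.1) ectopics in a synthetic 300-beat series
rr_clean = 0.85 + 0.03 * np.random.randn(300)
rr_ectopic = insert_ectopics(rr_clean, n_ectopics=3, alpha=0.1)
```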

The Poincaré plot is a visual tool in which each RR interval is plotted as a function of the previous RR interval. Analysis of RR intervals with the use of standard deviations, histograms and spectral techniques provides an assessment of overall variability but obscures instantaneous beat-to-beat changes. The Poincaré plot, however, provides summary information as well as detailed beat-to-beat information on the behaviour of the heart: beat-to-beat variation can easily be displayed for visual assessment by graphing each RR interval against the subsequent RR interval. The problem with the use of the Poincaré plot has been the lack of obvious quantitative measures that characterize its salient features. To characterize the shape of the plot mathematically, most researchers have adopted the technique of fitting an ellipse to the plot, as shown in Fig. 1. A set of axes oriented with the line of identity is defined [14]; the axes of the Poincaré plot are related to this new set of axes by a rotation of θ = π/4 radians:

x1 = cos(θ)·RRn + sin(θ)·RRn+1,   x2 = −sin(θ)·RRn + cos(θ)·RRn+1        (1)

In the reference system of the new axes, the dispersion of the points around the x1-axis is measured by the standard deviation denoted SD1. This quantity measures the width of the Poincaré cloud and therefore indicates the level of short-term HRV [17]. The length of the cloud along the line of identity measures the long-term HRV and is quantified by SD2, the standard deviation around the x2-axis [14], [17]. These measures are related to the standard HRV measures as follows:

SD1² = Var(x1) = (1/2)·Var(RRn − RRn+1) = (1/2)·SDSD²                    (2)

where Var(x1) denotes the variance of the x1 sequence and SDSD denotes the standard deviation of the successive differences of the RR interval series. Thus the SD1 measure of Poincaré width is equivalent to the standard deviation of the successive differences of the intervals, except that it is scaled by 1/√2. Further, SD1 can be related to the autocovariance function by

SD1² = φRR(0) − φRR(1)                                                   (3)

where φRR(0) and φRR(1) are the autocovariance function values at lags 0 and 1. Also,

φRR(0) = E[(RRn − R̄R)²] = SDRR²,

i.e., the variance of the RR intervals. Similarly,

SDSD² = E[(RRn − RRn+1)²],

since for stationary intervals E[RRn] = E[RRn+1], so that E[RRn − RRn+1] = 0. Thus

SDSD² = E[(RRn − RRn+1)²] = 2φRR(0) − 2φRR(1).

With a similar argument, it may be shown that the length of the Poincaré cloud is related to the autocovariance function by

SD2² = φRR(0) + φRR(1)                                                   (4)

By adding (3) and (4) together, we get

SD1² + SD2² = 2·SDRR²                                                    (5)

where SDRR denotes the standard deviation of the RR interval series. Finally,

SD2² = 2·SDRR² − (1/2)·SDSD²                                             (6)

Thus (6) expresses SD2 in terms of existing indices of HRV. Fitting an ellipse to the Poincaré plot therefore does not generate indices that are independent of the standard time-domain HRV indices.

Figure 1. A Poincaré plot with the ellipse-fitting technique: a coordinate system (x1, x2) is established at π/4 radians to the normal axes, and the standard deviation of the distance of the points from each axis determines the width (SD1) and length (SD2) of the ellipse.
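A minimal sketch of the SD1/SD2 computation implied by relations (2) and (6) is given below; it is an illustration rather than the authors' implementation, and the example series is synthetic.

```python
import numpy as np

def poincare_sd1_sd2(rr):
    """SD1 and SD2 of a Poincaré plot, using relations (2) and (6)."""
    rr = np.asarray(rr, dtype=float)
    diff = np.diff(rr)                                   # successive differences
    sdsd = np.std(diff, ddof=1)                          # SD of successive differences
    sdrr = np.std(rr, ddof=1)                            # SD of the RR series
    sd1 = np.sqrt(0.5 * sdsd ** 2)                       # width of the cloud, eq. (2)
    sd2 = np.sqrt(2.0 * sdrr ** 2 - 0.5 * sdsd ** 2)     # length of the cloud, eq. (6)
    return sd1, sd2

rr = 0.85 + 0.03 * np.random.randn(900)                  # synthetic ectopic-free series (s)
print(poincare_sd1_sd2(rr))
```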

IV. RESULTS AND DISCUSSION
The SD1 and SD2 of the ectopic-free RR interval series of different lengths were taken as reference values. SD1 and SD2 of the RR interval series containing artificial ectopic beats, with ectopic levels α=0.1, 0.4 and 0.8, were computed and compared with the reference values to assess the effect of the ectopic beats. Fig. 2 confirms that the presence of ectopic beats, even at the low level of ectopy (α=0.8), has a significant effect on SD1 and SD2 and requires some form of ectopic correction to be applied. SD1 was found to be more sensitive to ectopic beats than SD2.


Fig. 3 shows that by inserting even a single ectopic beat, SD1 increased by 83% for α=0.1, 44% for α=0.4 and 6% for α=0.8 in the RR interval series of N=900; by 135% for α=0.1, 71% for α=0.4 and 10% for α=0.8 for N=500; and by 186% for α=0.1, 106% for α=0.4 and 20% for α=0.8 for the RR interval series of length N=300. Similarly, SD2 increased by 8% for α=0.1, 4% for α=0.4 and 0.5% for α=0.8 for N=900; by 15% for α=0.1, 7% for α=0.4 and 0.6% for α=0.8 for N=500; and by 25% for α=0.1, 12% for α=0.4 and 1.5% for α=0.8 for the RR interval series of N=300. This means that even one ectopic beat may significantly affect both SD1 and SD2. The difference in these indices further increases as a function of the number of ectopic beats. RR interval series of shorter length (N=300) were found to be more sensitive than RR interval series of larger length (N=900): ten high-level ectopic beats (α=0.1) may increase SD1 by 750% and SD2 by 150% for N=300, whereas for N=900 the increases were 400% and 65% respectively; ten low-level ectopic beats (α=0.8) may increase SD1 by 120% and SD2 by 15% for N=300, whereas for N=900 the increases were 50% and 5% respectively.

V. CONCLUSION
The major findings of this study are that the Poincaré plot is not only a visual tool to identify ectopic beats, but that ectopic beats, even at a low level, result in an overestimation of the Poincaré plot indices SD1 and SD2 and require some form of ectopic correction to be applied. Short-term variability is more sensitive to ectopic beats than long-term variability. The variation in SD1 and SD2 depends upon the amount and level of the ectopic beats and the length of the RR interval series. The findings of the present study can be used in part as a reference for the difference between the Poincaré plot indices SD1 and SD2 of RR series with ectopic beats and those of ectopic-free RR interval series of different lengths. Poincaré plot based HRV quantification is not a reliable indicator for variability signals containing ectopies; however, the method certainly has an edge over the commonly used time- and frequency-domain methods and can be preferred for tachograms containing ectopic beats. The application of signal processing techniques for the quantitative interpretation of physiological systems and vital-signs management, and further innovation in this area, might lead to the development of modern and advanced techniques for better diagnosis and treatment of patients.

REFERENCES
[1] M. A. Conny, L. A. A. Kollee, C. W. Hopman, G. B. A. Stoelinga and H. P. Van Geijn, "Heart rate variability," Annals of Internal Medicine, vol. 118, pp. 436-447, 1993.
[2] D. Singh, K. Vinod and S. C. Saxena, "Sampling frequency of RR interval time-series for spectral analysis of heart rate variability," Journal of Medical Engineering and Technology, vol. 28, no. 6, 2004.
[3] K. K. Deepak, R. K. Sharma, R. K. Yadav, R. K. Vyas and N. Singh, "Autonomic activity in different postures: A non-linear analysis approach," Indian J. Physiol Pharmacol Suppl., vol. 47, no. 5, p. 29, 2003.
[4] Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, "Heart rate variability: standards of measurement, physiological interpretation and clinical use," European Heart Journal, vol. 17, pp. 354-381, 1996.
[5] D. Singh, K. Vinod, S. C. Saxena and K. K. Deepak, "An improved windowing technique for heart rate variability power spectrum estimation," Journal of Medical Engineering and Technology, vol. 29, no. 2, pp. 95-101, 2005.
[6] D. Singh, K. Vinod, S. C. Saxena and K. K. Deepak, "Spectral evaluation of aging effects on blood pressure and heart rate variations in healthy subjects," Journal of Medical Engineering and Technology, vol. 30, no. 3, pp. 145-150, 2006.
[7] G. Merati, M. Di Rienzo, G. Parati, A. Veicsteinas and P. Castiglioni, "Assessment of the autonomic control of heart rate variability in healthy and spinal-cord injured subjects: Contribution of different complexity based estimators," IEEE Transactions on Biomedical Engineering, vol. 53, no. 1, pp. 43-52, 2006.
[8] B. Sayers, "Analysis of heart rate variability," Ergonomics, vol. 16, no. 1, pp. 17-32, 1973.
[9] G. G. Berntson, J. T. Bigger, D. L. Eckberg, P. Grossman, P. G. Kaufmann, M. Malik, H. N. Nagaraja, S. W. Porges, J. P. Saul, P. H. Stone and M. W. van der Molen, "Heart rate variability: Origins, methods, and interpretive caveats," Psychophysiology, vol. 34, pp. 623-648, 1997.
[10] M. V. Kamath and F. L. Fallan, "Correction of the heart rate variability signal for ectopic and missing beats," in Heart Rate Variability, M. Malik and A. J. Camm, Eds., Futura, pp. 75-85, 1995.
[11] G. D. Clifford and L. Tarassenko, "Quantifying errors in spectral estimates of HRV due to beat replacement and resampling," IEEE Transactions on Biomedical Engineering, vol. 52, no. 4, pp. 630-638, 2005.
[12] M. A. Peltola, T. Seppanen, T. H. Makikallio and H. V. Huikuri, "Effects and significance of premature beats on fractal correlation properties of R-R interval dynamics," A.N.E., vol. 9, no. 2, pp. 127-135, 2004.
[13] M. A. Salo, V. Heikki and T. Seppanen, "Ectopic beats in heart rate variability analysis: Effects of editing on time and frequency domain measures," A.N.E., vol. 6, no. 1, pp. 5-17, 2001.
[14] M. Brennan, M. Palaniswami and P. Kamen, "Do existing measures of Poincaré plot geometry reflect nonlinear features of heart rate variability?" IEEE Transactions on Biomedical Engineering, vol. 48, no. 11, pp. 1342-1346, 2001.
[15] P. W. Kamen, H. Krum and A. M. Tonkin, "Poincaré plot of heart rate variability allows quantitative display of parasympathetic nervous activity," Clin. Sci., vol. 91, pp. 201-208, 1996.
[16] M. A. Woo, W. G. Stevenson, D. K. Moser, R. B. Trelease and R. H. Harper, "Patterns of beat-to-beat heart rate variability in advanced heart failure," Amer. Heart J., vol. 123, pp. 704-710, 1992.
[17] M. Brennan, M. Palaniswami and P. Kamen, "Poincaré plot interpretation using a physiological model of HRV based on a network of oscillators," American Journal of Physiology: Heart and Circulatory Physiology, vol. 283, pp. H1873-H1886, 2002.


Figure 2. Effect of the insertion of artificial ectopic beats on SD1 (a. α=0.1, c. α=0.4, e. α=0.8) and SD2 (b. α=0.1, d. α=0.4, f. α=0.8) of ectopic-free RR interval series of length N=900, 500 and 300.

Figure 3. Percentage difference in SD1 (a. α=0.1, c. α=0.4, e. α=0.8) and SD2 (b. α=0.1, d. α=0.4, f. α=0.8) with respect to the reference values after the addition of artificial ectopic beats to ectopic-free RR interval series of length N=900, 500 and 300.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Background Modeling and Moving Object Classification for Gait Recognition

A. Jagan, C. Sasivarnan, M.K. Bhuyan
Department of Electronics and Communication Engineering, Indian Institute of Technology Guwahati, India. Email: {a.jagan, sasivarnan, mkb}@iitg.ernet.in
Abstract—This paper deals with background modeling and moving object classification for gait recognition. Current image-based human recognition methods, such as the fingerprint, face and iris biometric modalities, generally require a cooperative subject and controlled views. These methods cannot reliably recognize non-cooperating individuals at a distance in the real world under changing environmental conditions. In such conditions, recognition of a person using gait has a clear advantage. The first step in gait recognition is background subtraction/modeling; this is a crucial step, since it identifies the moving objects in the background scene, and accurate background subtraction is essential for a high recognition rate. The next step is the separation of human beings from other moving objects (e.g., cars, trees, etc.). In this paper, we use a modified background subtraction algorithm and subsequently a feature-based classification of pedestrians from other moving objects. Experimental results demonstrate the effectiveness of the proposed method.

Fig. 1. Block diagram of the gait recognition system: input video frames → background subtraction → feature extraction → recognition (against a database) → decision / recognized image sequence.

Keywords - Gaussian Mixture Model, Temporal consistency, Classification metric operator.

I. INTRODUCTION
The generally used biometric recognition techniques have some disadvantages: they require the cooperation of the individual to be identified. Gait, which concerns recognizing individuals by the way they walk, is a relatively new biometric without these disadvantages. Background subtraction/modeling is the first and most crucial step in gait recognition. Fig. 1 shows the general block diagram of a gait recognition system. The main challenges involved in gait recognition are imperfect foreground segmentation and changes in the clothing of the subject, so background subtraction plays a vital role: it identifies the moving objects in the background scene. There are generally two types of background subtraction techniques, recursive and non-recursive. Background subtraction using the frame-difference method is the simplest: the previous frame acts as the background model for the current frame, and moving objects are estimated by calculating the difference between the two frames. It fails, however, in uncontrolled environments; this method is of the non-recursive type. Another existing method is the single Gaussian model, of the recursive type, in which each pixel is modeled by a single, separate Gaussian. In this paper, a Gaussian Mixture Model (GMM) is used for background subtraction, to which we have added an additional step of image filtering with a median filter in order to remove noise caused by imperfect background subtraction. The next step is the classification of human beings from other moving objects; a classification metric, calculated from the shape and boundary information of the objects, is used for this classification.

II. GAUSSIAN MIXTURE MODEL
In the Gaussian Mixture Model, each pixel is modeled by a mixture of Gaussians (states). Each pixel (corresponding to an object surface) that comes into view is represented by one of a set of states k ∈ {1, 2, ..., K}. Among these K states, some correspond to the background and the rest represent the foreground. Each state k is modeled by a parameter set θk and a weight p(k), the probability of surface k appearing at that pixel. By observing the pixel values we estimate the states. The pixel values are samples of a random variable X, so each state can be modeled by a Gaussian density

f_{X|k}(X | k, θk) = (2π)^(-n/2) |Σk|^(-1/2) exp( -(1/2) (X - μk)^T Σk^(-1) (X - μk) )          (1)

Here θk is the Gaussian density parameter set, θk = {μk, Σk}, where μk and Σk are the mean and covariance of the kth density. The total parameter set is Θ = {p(1), ..., p(K), θ1, ..., θK}. All K densities are disjoint, so the distribution of X can be modeled as

f_X(X | Θ) = Σ_{k=1}^{K} p(k) · f_{X|k}(X | k, θk)                                              (2)

The first step in the GMM is to estimate the current state: among all K states, the most likely state is estimated for a given sample X = x. This is done by finding the posterior probability of each of the K distributions, given by Bayes' theorem:

p(k | X, Θ) = p(k) · f_{X|k}(X | k, θk) / f_X(X | Θ)                                            (3)

The state k which gives the maximum posterior probability is deemed to be the current state. The maximum posterior estimate is given by

k̂ = argmax_k p(k | X, Θ) = argmax_k w_k · f_{X|k}(X | k, θk)                                    (4)

where w_k = p(k) denotes the weight of the kth state.

The next step is to determine whether the current state is foreground or background. First the states are arranged (ranked) according to the value w_k / σ_k; background surfaces generally have higher probability and lower variance. To estimate the foreground objects, a prior probability T is provided, and the first B of the ranked states whose accumulated probability accounts for T are deemed to be background; the rest of the states are foreground:

B = argmin_b ( Σ_{k=1}^{b} w_k > T )                                                            (5)

If the estimated current state is among these first B states then the pixel is background; otherwise it is foreground. The recursive relations used to update the Gaussian parameters are given by

w_{k,t} = (1 - α_t) w_{k,t-1} + α_t                                                             (6)
ρ_t = α_t · f_{X|k}(X_t | k, θk)                                                                (7)
μ_{k,t} = (1 - ρ_t) μ_{k,t-1} + ρ_t X_t                                                         (8)
σ²_{k,t} = (1 - ρ_t) σ²_{k,t-1} + ρ_t (X_t - μ_{k,t})²                                          (9)

where α_t is the learning rate.
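A compact per-pixel sketch of the mixture-model update described by (1)-(9) is shown below. It is a simplified single-channel illustration, not the authors' implementation: it matches a pixel to a component within 2.5 standard deviations, applies the recursive updates (using the α/w approximation for ρ rather than the density of (7)), and labels the pixel with the ranked-weight background criterion; all parameter values are assumptions.

```python
import numpy as np

class PixelGMM:
    """Single-pixel, single-channel Gaussian mixture background model (illustrative)."""
    def __init__(self, K=3, alpha=0.01, T=0.7, init_var=225.0):
        self.w = np.full(K, 1.0 / K)        # state weights p(k)
        self.mu = np.linspace(0, 255, K)    # state means
        self.var = np.full(K, init_var)     # state variances
        self.alpha, self.T = alpha, T

    def update(self, x):
        sd = np.sqrt(self.var)
        match = np.abs(x - self.mu) <= 2.5 * sd          # within 2.5 standard deviations
        self.w = (1 - self.alpha) * self.w + self.alpha * match
        if match.any():
            k = np.argmax(match * self.w / sd)           # best-matching state
            rho = self.alpha / max(self.w[k], 1e-6)      # learning factor (one common choice)
            self.mu[k] = (1 - rho) * self.mu[k] + rho * x
            self.var[k] = (1 - rho) * self.var[k] + rho * (x - self.mu[k]) ** 2
        else:                                            # replace the weakest state
            k = np.argmin(self.w / sd)
            self.mu[k], self.var[k], self.w[k] = x, 225.0, 0.05
        self.w /= self.w.sum()
        # Rank states by w/sigma; the first B states with cumulative weight > T are background
        order = np.argsort(-self.w / np.sqrt(self.var))
        background = order[:np.searchsorted(np.cumsum(self.w[order]), self.T) + 1]
        return k not in background                       # True => foreground pixel

model = PixelGMM()
labels = [model.update(v) for v in [120, 122, 119, 240, 121]]   # per-frame foreground flags
print(labels)
```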

Calculating the posterior probability is a time-consuming process, so instead of computing it, a check is made for the pixel value falling within 2.5 standard deviations of the mean of one of the Gaussian densities; the Gaussian density which satisfies this relation is taken as the current state. Any previously unseen foreground object coming into view can be accommodated by assigning it a small prior probability. After background subtraction, some noise is present along with the foreground objects due to imperfect background subtraction; this can be eliminated using image filtering techniques, and here a median filter is used to remove this noise.

III. MOVING TARGET CLASSIFICATION
The next step in gait recognition is moving object (target) classification. The moving objects obtained from background subtraction are classified into human, vehicle and background clutter. A human, with its more complex shape, will have more dispersedness than a vehicle. A classification metric operator ID(x) and the notion of temporal consistency are used for the classification. The metric is based on the knowledge that humans are in general smaller than vehicles. Here dispersedness is taken as the classification metric, given by

Dispersedness = Perimeter² / Area                                                               (10)

Each one of these potential targets must be observed in subsequent frames. Each previous motion region Pn1 (i) is matched to the spatially closest current motion region Rn (j ). Any previous potential targets have not been matched to current regions removed from the list, and any current motion regions Rn which have not been matched are added to the list. At each frame, their new classications are used to update the classication hypothesis. After this a simple application of MLE is employed for classication. IV. E XPERIMENTAL R ESULTS In this, Gaussian Mixture Model is implemented to get background subtraction results. The performance of the this has been evaluated based on PETS2001 data set [6] and a surveillance video collected in brisbane railway station. PETS2001 data set contains more complex scenes with clouds motion, shadows and small illumination changes. It has the frame rate of 30 frames per second and with a frame size of 768 x 576. The surveillance video has frame rate of 18 frames for second and frame size of 704 x 576. In this video more illumination changes are there. After background subtraction some noises may present. Median lter is used to remove these noises. Following Figs. 3-4. shows the obtained experimental results for background subtraction. Second row in these Figs shows background subtraction result without median lter. It is seen that some noises are present. Third row shows the result after applying median lter. Next step is the classication of human beings (pedestrian) from other moving objects. In this, one classication metric (dispersedness) and temporal consistency is used for this classication. Figs. 5-8. shows classication results.

Fig. 2. shows the typical dispersedness values for a human and a vehicle. The rst step in this is to record all Nn potential targets Pn (i) = Rn (i). These regions are classied

156
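A rough sketch of the dispersedness metric (10) and the temporal-consistency bookkeeping described in Section III is shown below; the decision threshold, the majority-vote stand-in for the MLE step and all names are illustrative assumptions, not taken from [5].

```python
from collections import Counter

def dispersedness(perimeter, area):
    """Classification metric of (10)."""
    return perimeter ** 2 / area

def classify(perimeter, area, threshold=70.0):
    """Label a motion region as 'human' or 'vehicle' by its dispersedness.
    The threshold is an assumed illustrative value."""
    return "human" if dispersedness(perimeter, area) > threshold else "vehicle"

def update_hypothesis(history, new_label):
    """Keep a per-target history of labels and return the majority label,
    a simple stand-in for the MLE step over the classification hypothesis."""
    history.append(new_label)
    return Counter(history).most_common(1)[0][0]

history = []
for perimeter, area in [(260, 900), (300, 950), (150, 1400)]:
    label = update_hypothesis(history, classify(perimeter, area))
print(label)   # majority label over the observed frames -> "human"
```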

Fig. 3. Background subtraction results. The rst row shows input video frames (Pets2001: http://www.cvg.cs.rdg.ac.uk/PETS2001/pets2001dataset.html). Second row shows background subtracted frames. Third row shows the background subtraction result after median lter.

Fig. 4. Background subtraction results. The first row shows input video frames. Second row shows background subtracted frames. Third row shows the background subtraction result after median filter.

Fig. 5. Input video frame.

Fig. 6. Classification result.

Fig. 7. Input video frame.

Fig. 8. Classification results.


V. CONCLUSION

In this paper, background subtraction and moving target classification are presented. Background subtraction identifies the moving objects in a scene from the background. A Gaussian Mixture Model together with a median filter is used for background subtraction; compared with previous methods, good results are obtained with this approach. The next step of the gait recognition system, the classification of moving objects, has also been carried out. Shape and boundary information of the objects is used for classification; specifically, one classification metric (dispersedness) and temporal consistency are used. The next step in gait recognition is recognition/classification. A Hidden Markov Model is proposed for recognition, with the key frames considered as the states of the model. The results of the proposed background modeling and object classification are visually convincing enough to apply them in the HMM framework.

REFERENCES
[1] L. Lee and W. E. L. Grimson, "Gait analysis for recognition and classification," Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002.
[2] R. Cucchiara, M. Piccardi, and A. Prati, "Detecting moving objects, ghosts, and shadows in video streams," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1337-1342, Oct. 2003.
[3] A. Elgammal, D. Harwood, and L. Davis, "Non-parametric model for background subtraction," Proceedings of the IEEE ICCV'99 Frame-rate Workshop, Sept. 1999.
[4] P. Wayne Power and Johann A. Schoonees, "Understanding background mixture models for foreground segmentation," Proceedings of the IEEE International Conference on Image and Vision Computing, 2002.
[5] Alan J. Lipton, Hironobu Fujiyoshi, and Raju S. Patil, "Moving target classification and tracking from real-time video," Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision (WACV'98).
[6] PETS2001 dataset, Computational Vision Group, University of Reading. http://www.cvg.rdg.ac.uk/PETS2001datsset.html


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Calibration setup and Technique for Gait Motion Analysis using Multiple Camera Approach
Binish Thomas, Dr. (Mrs.) A. Martins Mathew
VIT University, Vellore, Tamil Nadu, India contactbinish@yahoo.com

Neelesh Kumar, Gautam Sharma, Amod Kumar


Central Scientific Instruments Organisation (CSIO), Chandigarh, India

Abstract
Calibration of multiple motion cameras using different techniques for kinematic measurement of gait analysis was performed. The change in angles of stationary objects and anatomical angles for dynamic gait motion help in the calibration process. Single camera provides only 2D view, which has its own limitations, so an effort has been made to obtain 3D images, using a pair of cameras.

Key words: Calibration, Stationary, Dynamic, 3D analysis

1. INTRODUCTION

Gait analysis is the systematic study of human walking, augmented by instrumentation for measuring body movements, body mechanics and the activity of the muscles [1]. Spatio-temporal parameters of the human gait are measured for identifying gait disorders. Methods of kinematic measurement for gait analysis have evolved from very primitive visual observation to data analysis using high-end camera based systems. To accurately measure the motion of the body in 3D space and to obtain a comprehensive overview of the human gait, 3D motion analysis is the most commonly used procedure. While many parameters and events can be measured using a single video camera and 2D motion analysis, 3D motion analysis offers all this and much more. Even though 3D motion analysis is much easier these days than 2D motion analysis, the huge cost associated with setting up a 3D gait analysis system has been a major hurdle to the easy acceptance of such systems [2, 3]. The objective is the generation of 3D structure from 2D source imagery. This basically involves calibration of the camera system for various objects. For this, a stationary object is first calibrated on level ground and then the calibration is performed for dynamic gait motion on a treadmill. A treadmill is chosen because it is more convenient to study gait while the subject walks on a treadmill rather than over ground: the volume in which the measurements need to be made is much smaller, and the speed can be accurately monitored and easily varied as per the requirement [4]. A two-camera system is used to obtain video images in AVI format, which are synchronized in the acquisition stage using a common external switch acting as a trigger to the data acquisition system connected to both cameras individually. The obtained and analysed data help in drawing conclusions regarding the deviations in the angle observed from two different camera positions. The 3D data can also be useful for virtual reality applications [5].

2. METHODS AND MATERIALS

The whole calibration is done in a sequential manner, starting with a stationary object as viewed from two different camera arrangements and then proceeding to treadmill walking for dynamic motion.

A. SET UP FOR STATIONARY CALIBRATION

a) T-SHAPED STRUCTURE

Calibration is performed using a T-shaped rod and an arrangement as shown in fig.1 and fig.2. Active LED markers were placed on the three extreme points of the T-shaped rod. The rod was placed at position 1, giving a straight view for camera 1 (C1), and then correspondingly at position 2, giving a straight view for camera 2 (C2).

b) PARALLELOGRAM STRUCTURE

Secondly, the following set-up was made with cameras C1 and C2 placed at diametrically opposite regions and the parallelogram structure shifted through positions 1, 2 and 3. The images were acquired and analysed for the corresponding angles using NI Vision software.


B. CALIBRATION FOR DYNAMIC MOTION ON TREADMILL


The camera calibration is further conducted for the dynamic motion of the subject, where the walk occurs at a predetermined, marked area of the treadmill belt of a motorized treadmill.

Fig.1: Calibration at first position (cameras C1 and C2)

Fig.2: Calibration at second position

Fig.4: Dynamic Gait Motion. a) Max Camera position (cameras 3.5 feet apart); b) Midway Camera position (cameras 1.75 feet apart)

The cameras, as shown in fig.4 (a), are placed at a distance of 3.5 feet apart, with the subject walking in the middle region between the two cameras, the distance between the subject and the cameras being 8.2 feet (Max Camera Position). As shown in fig.4 (b), camera C1 is later moved to a position at a distance of 1.75 feet from camera C2. Thus the system consists of a set of cameras in which camera C1 is in a straight line to the subject and camera C2 is at an inclined angle while acquiring images. The set-up is used to calculate the knee flexion angle for the treadmill walk. The subject is made to walk on the treadmill at three predetermined speeds: 2.8 m/s (slow), 3.2 m/s (normal) and 4 m/s (fast). The subject is fitted with a set of active markers on the hip, knee and ankle. The gait lab is darkened and the markers are observed by the cameras. The angles seen by the two cameras are found, the differences between the two are measured, and the data for the measured variations are shown below. These methods assist in finding the deviations occurring between the two cameras when images are being acquired, helping in the process of obtaining 3D images from two cameras.
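As a rough illustration of how a joint angle can be computed from the three marker positions located in a video frame, the following minimal sketch finds the knee angle from hip, knee and ankle marker coordinates; the coordinates and function names are assumed for illustration and are not taken from the NI Vision analysis used in the paper.

```python
import numpy as np

def knee_flexion_angle(hip, knee, ankle):
    """Angle (degrees) at the knee marker formed by the hip and ankle markers.

    hip, knee, ankle are (x, y) pixel coordinates of the active markers
    as located in one video frame (e.g. by pattern matching)."""
    hip, knee, ankle = map(np.asarray, (hip, knee, ankle))
    thigh = hip - knee
    shank = ankle - knee
    cos_theta = thigh @ shank / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: compare the angle seen by the two cameras at the same instant
angle_c1 = knee_flexion_angle((120, 80), (130, 160), (128, 240))
angle_c2 = knee_flexion_angle((310, 78), (318, 158), (320, 238))
print(f"C1: {angle_c1:.2f} deg, C2: {angle_c2:.2f} deg, "
      f"difference: {abs(angle_c1 - angle_c2):.2f} deg")
```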

Fig. 3: Calibration setup for active markers on parallelogram structure at three positions


3. RESULTS AND OBSERVATIONS

The results for the dynamic motion over one gait cycle are given below, as analysed using NI Vision software with a pattern matching technique. The following observations were made for the various set-ups.
Table.1 Angles made by T-structure at two positions

            POSITION 1 (C1)   POSITION 2 (C1)   POSITION 1 (C2)   POSITION 2 (C2)
ANGLE 1     82.31             86.48             80.13             82.40
ANGLE 2     82.62             77.81             86.10             83.02

Table.3 Angles of slow and fast walk from C1 and C2 for Maximum camera position during one Gait cycle

IMAGE   FAST C1   FAST C2   SLOW C1   SLOW C2
1       189.49    195.76    192.53    186.94
2       196.01    201.6     193.65    180.51
3       192.06    181.74    192.31    191.21
4       204.14    179.99    190.52    204.01
5       234.07    190.73    200.46    223.46
6       188.13    192.69    205.57    212.32
7       197.63    182.95    195.94    206.67
8       -         194.48    187.93    201.46
9       -         200.77    189.11    188.60
10      -         226.71    195.2     192.29
11      -         215.01    203.26    -
12      -         209.28    228.23    -
13      -         198.04    -         -
14      -         193.75    -         -
15      -         191.75    -         -

Table.2 Angles made by parallelogram structure at three positions by C1 and C2

            POSITION 1          POSITION 2          POSITION 3
            C1        C2        C1        C2        C1        C2
ANGLE 1     61.89     59.81     62.77     1.71      60.89     61.58
ANGLE 2     59.47     65.41     60.83     61.35     65.75     60.61
ANGLE 3     58.64     54.78     56.37     56.94     53.36     57.80

From the above data it is seen that there is an angle difference of about 3 degrees when the cameras are placed 3.5 feet apart and the distance of the object from the cameras is 8.2 feet.

Table.4 Angles of slow and fast walk from C1 and C2 for Midway Camera position during one Gait cycle

IMAGE   FAST C1   FAST C2   SLOW C1   SLOW C2
1       175.46    195.68    183.85    183.72
2       179.29    205.87    182.75    187.89
3       196.01    184.34    182.01    180.70
4       193.05    190.75    182.32    180.10
5       192.06    193.12    187.26    180.05
6       204.14    202.98    208.14    186.29
7       210.13    223.32    219.04    186.36
8       234.07    208.15    198.94    180.24
9       190.28    218.88    184.51    181.34
10      188.13    221.70    181.65    191.80
11      -         241.53    182.05    193.64
12      -         231.97    181.38    217.66
13      -         230.16    191.68    224.91
14      -         -         -         204.56
15      -         -         -         187.64

4. CONCLUSIONS

As per the observations made for the static images, the variation in angle is 3-4 degrees when the object is viewed from the two different cameras simultaneously using the T-structure, and 3-5 degrees for the parallelogram structure. For dynamic motion, similar graphical trends are obtained between the fast and slow walking speeds as viewed from the maximum and midway camera positions. Thus, these calibrated values can be used as standard constants while acquiring images for gait analysis using two or more cameras, and this can be further extended to the calibration of a larger number of cameras.

REFERENCES
[1] Davide Scaramuzza, Agostino Martinelli, Roland Siegwart, "A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion," Proceedings of the Fourth IEEE International Conference on Computer Vision Systems, p. 45, 04-07, 2006, doi:10.1109/ICVS.2006.3.
[2] Dieter Koller, Gudrun Klinker, Eric Rose, David Breen, Ross Whitaker, Mihran Tuceryan, "Automated Camera Calibration and 3D Egomotion Estimation for Augmented Reality Applications," International Conference on Computer Analysis of Images and Patterns (CAIP-97), Kiel, Germany, September 10-12, 1997.
[3] S. Bougnoux, "From projective to Euclidean space under any practical situation, a criticism of self-calibration," in Proceedings of the 6th International Conference on Computer Vision, Jan. 1998, pp. 790-796.
[4] MacKay-Lyons M., "Central pattern generation of locomotion: a review of the evidence," Physical Therapy, 2002; 82:69-83.
[5] Jozef Novak, "Application of Virtual Reality Modeling Language for Design of Automated Workplaces," World Academy of Science, Engineering and Technology 31, 2007.
[6] Myron Z. Brown, Darius Burschka, and Gregory D. Hager, "Advances in Computational Stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, August 2003.
[7] Zhengyou Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, November 2000.

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

DSP-Based Capacitor Voltage Balancing Strategy of a Three-Phase, Three-Level Rectifier


A. H. Bhat, P. Agarwal and N. Langer
Abstract - In this paper, a DSP-based implementation of a DC-bus capacitor voltage balancing strategy for a three-phase neutral-point clamped rectifier is presented. The proposed modulation strategy first identifies the switching states responsible for the unbalancing of the capacitor voltages and then, based on the knowledge of the polarity of the capacitor voltage difference and the direction of power flow, uses the appropriate redundant switching vectors to achieve balancing of the capacitor voltages. The proposed control algorithm is validated through experimental results obtained on a laboratory prototype of the converter. The performance of the converter is found to be excellent in terms of the line-side and load-side power quality, and a perfect balancing of the DC-bus capacitor voltages is achieved. A DSP of dSPACE has been used for real-time simulation and implementation of the proposed control algorithm.

Index Terms - Space vector modulation, Redundant switching states, Digital signal processor, dSPACE, Real-time simulation

I. INTRODUCTION

In recent years, the popularity of medium voltage converters has grown and this has invited the attention of industry [1,4,5]. Among the various types of PWM converters, the three-level neutral-point clamped (NPC) converter has received special attention because of its attractive features, such as the use of lower blocking-voltage devices, a better harmonic spectrum of the line current compared with a two-level converter at the same switching frequency, lower di/dt and dv/dt stresses, lower EMI emissions and suitability for operation at higher voltages. As rectifiers, they meet international standards such as IEEE 519 [3] and IEC 61000-3-2/4. A major drawback of three-level PWM rectifiers is unbalance in the DC-bus capacitor voltages, which is further aggravated under unbalanced load conditions. The DC-bus capacitor voltage unbalancing is created by irregular charging and discharging of the capacitors, which originates from the asymmetric connections produced by the power semiconductor devices in the converter. The capacitor voltage unbalancing has become a burning issue and has recently invited intensive research to address this problem in multilevel converters [1,5]. The key aspect in the behaviour of multilevel rectifiers is to avoid unbalance in the DC-bus capacitor voltages. In this paper, a linear (PI) controller is used to control the total DC-bus voltage in a similar fashion as in two-level PWM rectifiers, and a non-linear controller is proposed to achieve the balancing of the DC-bus capacitor voltages. In fact, the mechanism which generates the DC-bus capacitor voltage unbalancing is studied and is used to develop the proposed control algorithm. The redundant switching vectors are used for achieving the capacitor voltage balancing.

II. SYSTEM DESCRIPTION AND MATHEMATICAL MODELING

The adopted three-phase power factor correction converter based on the neutral point clamped scheme is shown in Fig. 1. The input terminals of the rectifier a, b and c are connected to the terminals of the three-phase source A, B and C through the filter inductances Ls. Each power switch has a voltage stress of half the dc bus voltage instead of the full dc bus voltage as in two-level PFCs. The power switches are commutated at a high switching frequency to generate the PWM voltages Va, Vb and Vc. Using the definitions of space vectors:
v = (2/3) · (vA + a·vB + a²·vC)    (1)
v' = (2/3) · (va + a·vb + a²·vc)    (2)
iL = (2/3) · (iLA + a·iLB + a²·iLC)    (3)

The following vector equation can be written:

v = L · (diL/dt) + v'    (4)

A. H. Bhat is with the Department of Electrical Engineering, National Institute of Technology, Kashmir, India (e-mail: bhatdee@nitsri.net). P. Agarwal is with the Department of Electrical Engineering, Indian Institute of Technology Roorkee, India (e-mail: pramgfee@iitr.ernet.in). N. Langer is with the Department of Electrical Engineering, MIET, Jammu, India (e-mail: nitin_langer@yahoomail.com)

This equation can be expressed in a rotating reference frame (d-q), with the d-axis oriented in the direction of source voltage vector v as depicted in Fig. 2. Thus (4) can be written as,

vd = L · (diLd/dt) − ωL·iLq + vd'    (5)
vq = 0 = L · (diLq/dt) + ωL·iLd + vq'    (6)

where ω is the angular frequency of the three-phase voltage, and vd, vd', iLd and vq, vq', iLq are the components of v, v' and iL in the d- and q-axis respectively. Equations (5) and (6) show that the behaviour of the currents iLd and iLq can be controlled using the voltages vd' and vq' generated by the rectifier. In this way, the active as well as the reactive power delivered by the mains to the rectifier can be controlled.
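As a quick numerical illustration of the space-vector definitions (1)-(3), the short sketch below computes the complex space vector of a balanced three-phase set; the function name and sample values are assumptions made only for illustration.

```python
import cmath
import math

A = cmath.exp(1j * 2 * math.pi / 3)   # the complex operator a

def space_vector(x_a, x_b, x_c):
    """Space vector of three phase quantities per definitions (1)-(3)."""
    return (2.0 / 3.0) * (x_a + A * x_b + A**2 * x_c)

# Balanced three-phase set of unit amplitude at angle 0.3 rad
theta = 0.3
v = space_vector(math.cos(theta),
                 math.cos(theta - 2 * math.pi / 3),
                 math.cos(theta + 2 * math.pi / 3))
print(abs(v), cmath.phase(v))   # approximately (1.0, 0.3)
```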
Fig. 1. Adopted three-phase Neutral Point Clamped Converter.

Fig. 2. Vector diagram of the converter.

Fig. 3. Block diagram of controller of SVPWM Rectifier.

Fig. 4. Space Vectors generated by SVPWM Converter.

III. PROPOSED CONTROL STRATEGY

The control strategy of this rectifier is the same as that employed in two-level PWM rectifiers using SVM [2]. Fig. 3 depicts the block diagram of the closed-loop control of the converter. A PI controller is used to control the converter output voltage Vc. The output of this controller, iLd*, is used as the reference for an inner closed loop that controls the direct current iLd. The current in the q-axis, iLq, is controlled by a similar loop with reference iLq* = 0, to obtain operation at unity power factor. It is to be emphasized that this method only controls the total dc-bus voltage Vo and does not ensure the balance of the capacitor voltages VC1 and VC2; for proper converter operation, VC1 = VC2. The current controllers deliver the reference values for the voltages in the d- and q-axis, Vd* and Vq* respectively. Using a coordinate transformation, we obtain Vα* and Vβ* in the stationary reference frame (α, β). The voltages Vα* and Vβ* are delivered as inputs to the space vector modulator, which generates the control pulses for the converter switches using the Nearest Three Vector (NTV) approach of SVM. Applying the definition of (2) to all the 27 possible conduction states of the power semiconductors, the converter generates 19 different space vectors, as shown in Fig. 4. This figure also depicts the commutation states used to generate each space vector. The complex plane is divided into 24 triangles.

Four types of space vectors can be identified in Fig. 4:

Zero Vector (V0): generated by three different conduction states connecting the terminals a, b and c simultaneously to the same bus bar. Therefore, the zero space vector has double redundancy.

Internal Vectors (V1 to V6): generated by connecting the three terminals a, b and c to only one capacitor, C1 or C2. They have simple redundancy.

Medium Vectors (V8, V10, V12, V14, V16 and V18): generated by connecting the terminals a, b and c to different bus bars (+, 0 and -). Each vector can be generated by a unique commutation state, which means that they have no redundancy.

External Vectors (V7, V9, V11, V13, V15 and V17): generated when the terminals a, b and c are connected to the positive and negative bus-bars. These vectors have no redundancy.

A. Voltage unbalance generation

Assuming that C1 = C2 and that initially VC1 = VC2, the neutral current io is defined by

io = iC1 − iC2    (7)

When the neutral current is zero, iC1 = iC2 and the capacitor voltages remain balanced. The generation of an internal space vector V1 by two different switching states is depicted in Fig. 5. One of these states uses only capacitor C1, connecting the three-phase source to terminals + and 0; this commutation state can be called the positive commutation state V1+. On the other hand, the negative commutation state V1- is obtained by connecting the three-phase source to terminals - and 0. As shown in Fig. 5, the commutation state V1+ produces

io = iLR    (8)

and the commutation state V1- produces

io = −iLR    (9)

This means that these commutation states produce neutral currents of opposite polarity and, consequently, generate dc-bus capacitor voltage unbalance of opposite polarity. This fact forms the basis of the voltage unbalance compensation.

Fig. 5. Switching states of V1: a) Positive switching state V1+ (+,0,0); b) Negative switching state V1- (0,-,-).

B. Influence of direction of power flow

The direction of power flow between the source and the converter changes the polarity of the capacitor voltage unbalance, as clearly depicted in Fig. 5. If the three-phase source delivers active power to the converter in commutation state V1+, this power is received by the capacitor C1, causing an increase in VC1. The action of the voltage controller is to keep the total dc-bus voltage Vo constant, implying that the capacitor C2 voltage VC2 will be reduced by the closed-loop voltage control action. On the other hand, if the converter in conduction state V1+ regenerates active power to the three-phase source, this power will be delivered by the capacitor C1, thus causing a reduction in voltage VC1 and an increase in voltage VC2. If the three-phase source delivers active power to the converter in commutation state V1-, this power is received by the capacitor C2, thus causing an increase in VC2 and a reduction in VC1. On the other hand, if the converter in state V1- delivers active power to the three-phase source, this power comes from capacitor C2, thus causing a reduction in VC2 and an increase in VC1. This analysis is valid for all the internal space vectors.

C. Capacitor voltage unbalance control strategy

The dc-bus capacitor voltage difference is given by

ΔV = VC1 − VC2    (10)

The polarity of the current iLd determines the direction of power flow between the converter and the source. Since the actual current iLd has more ripple than the reference current iLd*, the direction of power flow can be determined advantageously from iLd*. Thus, according to the polarity of ΔV and iLd* in each modulation period T, the reference vector vref is generated using the redundant states of the internal vectors so as to compensate the voltage unbalance. As shown in Fig. 5, if ΔV > 0 then VC1 > VC2, and consequently VC2 must increase and VC1 must reduce to compensate the dc-bus capacitor voltage unbalance. In this case, if power flows from the source to the converter (iLd* > 0), a negative redundant state like V1-, as shown in Fig. 5(b), must be chosen for the internal space vectors. On the other hand, if power flows from the converter to the source (iLd* < 0), the power must be delivered by the capacitor C1 to cause a reduction of VC1 and an increase in VC2; therefore a positive redundant state like V1+, as shown in Fig. 5(a), must be chosen for the internal space vectors. The strategy to select a redundant switching state is implemented using the following control logic: if ΔV is positive and iLd* is positive, select a negative redundant state, otherwise (for negative iLd*) select a positive redundant state; if ΔV is negative and iLd* is positive, select a positive redundant state, otherwise (for negative iLd*) select a negative redundant state.

IV. PERFORMANCE INVESTIGATION

The performance of the three-phase neutral-point clamped converter with the proposed control algorithm is thoroughly investigated on a DSP-based laboratory prototype of the converter, whose block diagram is depicted in Fig. 6. The following system parameters are chosen for the experimentation of the converter: the mains phase voltage is chosen as 40 V peak with a frequency of 50 Hz; the capacitance of each of the two dc-bus capacitors is 4700 μF and the boost inductance is 4.5 mH for each line; a sampling frequency of 5 kHz is chosen; a passive load of 15 Ω and 5 mH is used for the experimentation; the desired dc-link voltage of the proposed rectifier is set at 100 V.
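The redundant-state selection logic just described can be sketched as follows; this is a minimal illustration in Python, and the function name and string labels are assumptions, not the authors' DSP code.

```python
def select_redundant_state(delta_v, i_ld_ref):
    """Redundant-state selection for internal space vectors.

    delta_v  : VC1 - VC2, the sensed capacitor voltage difference
    i_ld_ref : reference d-axis current iLd*, whose sign gives the
               direction of power flow (positive: source -> converter)
    """
    if delta_v >= 0:
        # VC1 > VC2: discharge C1 / charge C2
        return "negative" if i_ld_ref >= 0 else "positive"
    else:
        # VC1 < VC2: discharge C2 / charge C1
        return "positive" if i_ld_ref >= 0 else "negative"

# Example: VC1 exceeds VC2 by 2 V while power flows from source to converter
print(select_redundant_state(2.0, 5.0))   # -> "negative"
```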


The simulink model of the converter control using the proposed voltage balancing control algorithm is designed using MATLAB/Simulink software. Both the sensed DC-link capacitor voltages are fed to the DSP (dSPACE) hardware via two ADCs for processing. Using the Real-Time Workshop (RTW) of MATLAB and Real-Time Interface (RTI) of dSPACE, the optimized code for the control algorithm is automatically generated and downloaded onto the dSPACE hardware. The redundant switching states are chosen according to the sensed capacitor voltages and the polarity of reference d-component of line-current id*. Control pulses for all the 12 devices of three-level converter bridge are generated in real-time upon the execution of the control algorithm and are connected to the IGBT drivers via the dSPACE Master-Bit I/Os.

Fig. 7 shows the two DC-bus capacitor voltages (Vc1 and Vc2) of rectifier. It is to be noted that only positive redundant states of the internal vectors are chosen which produces a large capacitor voltage unbalance. Fig. 8(a) shows the source voltage and current waveforms with this large capacitor voltage unbalance. It can be observed from this figure that the capacitor voltage unbalance causes distortion in the source current. The frequency spectrum of source current in Fig. 8(b) shows an unacceptably large THD of 21.9%. The implementation of the proposed voltage balancing strategy using DSP causes the capacitor voltages to balance perfectly. The effect of capacitor voltage balancing on the supply current waveforms is observed in Fig. 9(a), which shows that the source current is sinusoidal at nearly unity input power factor.

Fig. 6 Block diagram of dSPACE-Based NPC Rectifier Implementation

Fig. 7 DC-Bus capacitor voltages, Vc1 & Vc2. X-axis: time 5 ms/div; Y-axis: Vc1 & Vc2 25 volts/div.

Fig. 8 (a) Source voltage and current waveforms for unbalanced capacitor voltages. X-axis: time 5 ms/div; Y-axis: va 12 volts/div & ia 10 A/div. (b) Frequency spectrum of source current for unbalanced capacitor voltages.

Fig. 9 (a) Phase A voltage, line-current and load voltage waveforms with voltage balancing strategy. X-axis: time 5 ms/div; Y-axis: va 20 volts/div, ia 10 A/div & Vo 25 volts/div. (b) Frequency spectrum of source current with voltage balancing strategy.

Fig. 10 (a) DC-Bus capacitor voltages before and after applying the voltage balancing strategy. X-axis: time 5 ms/div; Y-axis: Vc1 & Vc2 35 volts/div. (b) Behaviour of capacitor voltages after withdrawing the voltage balancing strategy. X-axis: time 5 ms/div; Y-axis: Vc1 & Vc2 30 volts/div.

The THD of line current is well below the limit imposed by IEEE 519. This is shown in the frequency spectrum of linecurrent in Fig. 9(b). The Fig. 10(a) shows the behaviour of the DC-bus capacitor voltages before and after the application of the proposed control algorithm. It is observed that the control action takes place quickly as soon as the control algorithm is implemented and the capacitor voltages are perfectly balanced. The behaviour of the capacitor voltages has also been observed by suddenly withdrawing the proposed control algorithm while the system is in running condition. This is depicted in Fig. 10(b), which shows that as soon as the control algorithm is withdrawn, the DC-bus capacitor voltages again tend to become unbalanced and show an increasing trend of unbalance with time. This shows the effectiveness of the proposed algorithm in balancing the capacitor voltages.
V. CONCLUSION

A space-vector modulated three-phase power factor correction converter based on the NPC configuration is proposed to achieve sinusoidal supply currents at nearly unity input power factor, low voltage and dv/dt stresses, low EMI emissions, a well-regulated dc output voltage with an almost level waveform, and a balanced neutral point voltage. The proposed control scheme is based on capacitor voltage balancing using space vector modulation. In the proposed control algorithm, the influence of each space vector on the capacitor voltages has been studied and a relationship between the direction of power flow and the voltage unbalance has been clearly established. Based on this, a non-linear control technique to balance the dc-bus capacitor voltages by using the redundant switching states has been developed. Thus the proposed converter is a useful power-quality-improving ac/dc converter suitable for various types of applications, including domestic, commercial and industrial applications.

VI. REFERENCES
[1] P. M. Bagwat and V. R. Stefanovic, "Generalized structure of a Multilevel PWM Inverter," IEEE Trans. Ind. Applicat., vol. IA-19, no. 6, pp. 1057-1069, Nov./Dec. 1983.
[2] J. W. Dixon, "Boost type PWM rectifiers for high power applications," Ph.D. dissertation, Dept. Elect. Comput. Eng., McGill Univ., Montreal, QC, Canada, Jun. 1988.
[3] IEEE Recommended Practices and Requirements for Harmonics Control in Electric Power Systems, IEEE Std. 519, 1992.
[4] Mahfouz A., Holtz J., and El-Tobshy A., "Development of an integrated high voltage 3-level converter-inverter system with sinusoidal input-output for feeding 3-phase induction motors," Power Electronics and Applications, Fifth European Conference, vol. 4, Sep. 1993, pp. 134-139.
[5] J. S. Lai and F. Z. Peng, "Multilevel Converters: A New Breed of Power Converters," IEEE Trans. Ind. Applicat., vol. IA-32, no. 3, May/June 1996, pp. 509-517.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Simulation and Design of controller for Unity Power Factor Front-End PWM Converter
G.N. Khanduja
Assistant Professor, Electrical Engineering Department, Laljibhai Chaturbhai Institute of Technology, Bhandu, Mehsana, Gujarat, India. e-mail: luckykhanduja@gmail.com

Abstract - The paper compares the performance of a single-phase diode bridge rectifier and the front-end converter in terms of the total harmonic distortion of the line current. It also presents the design of the voltage control loop by two methods, named the Unity Modulus method and the Ziegler-Nichols method. The parameters obtained with both methods are compared using a mathematical model in Simulink (MATLAB). A program file is also generated in MATLAB, from which step responses and Nyquist plots are obtained for the system control loops.
Keywords- Front-End Converter, Unity Modulus Method, Ziegler-Nichols Method, SPWM, Unity Power Factor and Low % THD

Fig.1. Waveforms of Vs, vdc, and Is of single phase diode bridge converter

Nomenclature
Ls = Input source inductance
G = Maximum peak converter input voltage
Tr = Triangular carrier period
K1 = Current loop gain
Ki = Current feedback loop gain
Kn, Kp = PI controller gain
Tn, Ti = Time parameter of PI controller
Kv = Voltage feedback loop gain
Kcr = Critical gain of the system
Pcr = Period of sustained oscillation

I. INTRODUCTION

There are several conventional methods by which the output dc voltage can be controlled, e.g. a diode bridge with a tap-changing transformer or an auto-transformer. The method is simple but suffers from demerits due to the size, weight and cost of the transformer and its poor dynamic response [1]. In the case of an ac-to-dc phase-controlled switching (line-commutated rectifier) bridge, no transformer is required; thus the size, weight and cost are reduced and the efficiency is high. The waveforms of the supply voltage, dc link voltage and line current for the single-phase bridge type ac-dc converter are shown in fig.1. The normalized harmonic spectrum of the line current for the single-phase diode bridge type ac-to-dc converter is shown in fig.2; this harmonic content causes power quality problems. In this paper a unity power factor front-end converter is proposed, which maintains unity power factor under various load conditions. The proposed converter has significantly lower % THD compared to the single-phase diode bridge rectifier. The P and PI controller parameters are obtained with the help of the Magnitude Optimum method and the Ziegler-Nichols method, and accordingly simulation results are obtained for various conditions.

Fig.2. Normalized harmonic spectrum of line current for single phase diode bridge converter

II. PROPOSED CONVERTER AND PWM SWITCHING SCHEME


The proposed front-end converter with an 8-IGBT structure is shown in fig.3. In this scheme two single-phase converters are connected in parallel to supply the load, both to reduce the harmonic currents in the mains and to improve the ripple content at the dc link. Sine-triangular PWM is used for the control of the converters. For each converter, the leg-B triangular waveform is phase shifted by 180° compared to that of leg-A of the same converter. For leg-A of converter-2 the triangular wave is phase shifted by 90° compared to that of converter-1, so that the harmonic frequencies around the second multiple of the carrier wave get cancelled. The PWM switching scheme waveforms of the proposed converter are shown in fig.4 and fig.5. This is because harmonics at twice the carrier frequency are 180° phase shifted and hence the fluxes produced by these cancel each other at the input transformer secondary side. For the proposed converter the carrier frequency is taken as 11 times the frequency of the sinusoidal reference, which is 60 Hz. So harmonic currents at twice the carrier frequency will not flow on the primary side, and hence the dominating harmonics present on the primary side will be at the sidebands of four times the carrier frequency (i.e., 4 x 11 x 60 Hz = 2640 Hz), with much reduced amplitudes as compared to the fundamental component [2][3].
Fig.3. Power Schematic of proposed front-end converter

Fig.4. Voltage waveform VA1B1 = VA10 - VB10 (i.e. Vr1)

Fig.5. Voltage waveform VAB = VA1B1 + VA2B2 (i.e. Vr)

III. DESIGN OF CONTROL LOOPS


There are two control loops in the proposed converter, i.e. a current control loop and a voltage control loop [4]. The inner current control loop is faster and the outer loop is the voltage control loop. The output dc voltage is controlled by matching the input power from the converter to the output power demand of the load, while maintaining unity power factor at all loads. Another technique, named the harmonic resistance emulator [5], can be used for single-phase power factor correction. The loading conditions include the reverse power flow (regenerative) mode of operation as well. A stationary reference frame model is used for the simulation study, as it involves only fixed-frequency operation. The outer voltage loop works with dc quantities and the inner current loop works with sinusoidal quantities; the reference sine wave for the inner current control is derived from the input mains. The block schematics of the current and voltage control loops are shown in fig.6 and fig.7 respectively. In designing the control loops, two different methods are used for finding the parameters of the current control loop and the voltage control loop. The cases considered are the under damped, critically damped and over damped system. The methods considered are the Unity Modulus method [6] and the Ziegler-Nichols method [7]-[9]; a brief numerical sketch of the Ziegler-Nichols rule is given after Table I. The parameters of the P and PI [10] controllers obtained (based on the mathematical model of the system) from both methods are tabulated in Table I.

Fig.6. Block schematic of current control loop

Fig.7. Block schematic of voltage control loop


TABLE I. COMPARISON OF PI CONTROLLER PARAMETERS FOR VOLTAGE CONTROL LOOP

Type of the System                        Unity Modulus Method      Ziegler-Nichols Method
                                          Kn        Tn              Kn        Tn
Under Damped System (ζ = 0.707)           9.06      0.01214         4.08      0.0914
Critically Damped System (ζ = 1)          4.5307    0.02428         1.1989    0.0647
Over Damped System (ζ = 1.2)              3.1463    0.03496         0.4917    0.0539
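The sketch below shows the classical closed-loop Ziegler-Nichols PI tuning rule, which uses the critical gain Kcr and the period of sustained oscillation Pcr defined in the Nomenclature. The example Kcr and Pcr values are assumptions; the numbers in Table I come from the authors' plant model, so this only illustrates the formula, not their calculation.

```python
def ziegler_nichols_pi(k_cr, p_cr):
    """Classical Ziegler-Nichols PI tuning from the critical gain Kcr
    and the period of sustained oscillation Pcr."""
    kn = 0.45 * k_cr        # proportional gain
    tn = p_cr / 1.2         # integral time constant
    return kn, tn

# Example with assumed values of Kcr and Pcr (not taken from the paper)
kn, tn = ziegler_nichols_pi(k_cr=2.0, p_cr=0.05)
print(f"Kn = {kn:.3f}, Tn = {tn:.4f} s")
```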

IV. SIMULATION AND PROGRAMMING RESULTS


The simulation of the proposed front-end converter is performed by building a mathematical model in MATLAB Simulink. The results are obtained by both methods, i.e. the Unity Modulus method and the Ziegler-Nichols method, under transient, dynamic and fault conditions. During a fault condition only one converter is capable of taking the full load current when the other converter fails; the simulation results are obtained considering that converter 2 is working and converter 1 is faulty. A general MATLAB program (M-file) is prepared for the calculation of the various parameters of the current and voltage control loops of the proposed front-end converter. The flow chart representation of the MATLAB program for the proposed front-end converter is shown in fig.8. The simulation results are obtained for the forward power flow mode of operation and the reverse power flow (regenerative) mode of operation, and are shown in fig.8 to fig.16. Various plots are also produced with the help of the program for the under damped system, as shown in fig.17 and fig.18.

Fig.11. Wave forms of dc link Voltage for critically damped system

Fig.12. Wave forms of dc link Voltage for over damped system

Fig.8. Flow chart Representation of MATLAB Program for Front-End Converter

Fig.13. Waveforms for Vdc, Vs and Is for prototype converter

Fig.9. Vdc, Vs and Is wave for under motoring to regenerative mode of operation

Fig.10. Wave forms of dc link Voltage for under damped system

Fig.14. Vdc, Vs and Is waveforms from full load to no load under motoring mode of operation


The simulation results are also obtained for the fault condition and transition mode of operation. It is also observed that unity power factor is maintained in all the cases for proposed converter during the various load conditions. The results obtained show the desired performance of the proposed converter.

APPENDIX
Specifications of the front-end converter are:

Specification                              For proposed converter    For prototype converter
Input Voltage (ac)                         1432 V                    30 V
Output Voltage (dc)                        2800 V                    60 V
Supply Frequency                           60 Hz                     50 Hz
Switching Frequency                        660 Hz                    1.65 kHz
Rated power                                1400 kW                   750 W
Assumed % efficiency of converter          98                        80
Max. modulation index                      0.8                       0.8
DC Link Capacitor                          10000 μF                  9.37 mF
Boost inductor                             1.81 mH                   1.6171 mH

Fig.15. Vdc, Vs and Is waveforms during fault condition under motoring mode of operation

Fig.16. Normalized harmonic spectrum of combined current reflected to primary (i.e. Is)

ACKNOWLEDGMENT

The author would like to thank Dr. P. N. Tekwani, Professor, Electrical Engineering Department, Institute of Technology, Nirma University, Ahmedabad, Gujarat, for his guidance and encouragement during the work.

REFERENCES
[1] M. S. Jamil Asghar, Power Electronics, Prentice Hall of India Pvt. Ltd., New Delhi, 2004, pp. 187.
[2] P. N. Tekwani, Dhaval Patel, "Design and Simulation of a PWM Regenerative Front-End Converter," Proceedings of National Conference on Current Trends in Technology, NUCONE 2007, November 29 - December 1, Ahmedabad, 2007, pp. 79-85.
[3] Thiyagarajah, K., Ranganathan, V. T., Iyengar, B. S. R., "A high switching frequency IGBT PWM rectifier/inverter system for AC motor drives operating from single phase supply," Power Electronics Specialists Conference, PESC '90 Record, 21st Annual IEEE, 11-14 Jun 1990, pp. 663-671.
[4] P. N. Tekwani, G. N. Khanduja, "Design of Controller for Unity Power Factor Front-End PWM Regenerative Converter," National Conference on Current Trends in Technology, NUCONE 2008, November 27-29, Ahmedabad, pp. 293-299.
[5] Swami, H., "Harmonic resistance emulator technique for single phase unity power factor correction," IEEE International Conference on Industrial Technology, ICIT 2005, pp. 1315-1318.
[6] Deur J., Peric N., Stajic D., "Design of reduced-order feedforward controller," UKACC International Conference on CONTROL'98, 1-4 September 1998, pp. 207-212.
[7] Ya-Gang Wang, Hui-He Shao, "Automatic tuning of optimal PI controllers," Proceedings of the 38th IEEE Conference on Decision and Control, 1999, vol. 4, pp. 3802-3803.
[8] Hang, C. C., Astrom, K. J., Ho, W. K., "Refinements of the Ziegler-Nichols tuning formula," IEE Proceedings of Control Theory and Applications, 1991, vol. 138, issue 2, pp. 111-118.
[9] Katsuhiko Ogata, Modern Control Engineering, 4th Ed., Prentice Hall of India Pvt. Ltd., New Delhi, 2007, pp. 682-686.
[10] Ramakant A. Gayakwad, Op-Amp and Linear Integrated Circuits, 3rd Ed., Prentice Hall of India Pvt. Ltd., New Delhi, 1999, pp. 535-537.

Fig.17. Step response of voltage control loop for under damped system

Fig.18. Nyquist plot of Voltage control loop for under damped system

V. CONCLUSION
The parameters of the control loops for the proposed topology are found with the help of the Unity Modulus and Ziegler-Nichols methods. Using the various values of the current and voltage control loop parameters, simulation is carried out in Simulink (MATLAB). For all the cases, i.e. transient and dynamic conditions, the results from the Unity Modulus method are found more suitable than those from the Ziegler-Nichols method. The % THD of the line current in the case of the single-phase diode bridge rectifier is found to be 328.50 %, and it is 4.078 % in the case of the proposed front-end converter.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Active and Reactive Power Control of Voltage Source Converter Based HVDC System operating at Fundamental Frequency Switching
D. Madhan Mohan, Bhim Singh and B. K. Panigrahi
Department of Electrical Engineering Indian Institute of Technology Delhi New Delhi, India dmadhanmohan@gmail.com, bhimsinghr@gmail.com, bkpanigrahi@ee.iitd.ac.in
Abstract - This paper deals with the active and reactive power flow control of a Voltage Source Converter (VSC) based High Voltage Direct Current (HVDC) system operating at fundamental frequency switching. The HVDC system consists of two converter stations fed from two different ac systems. The reactive power is independently controlled in each converter station. A coordinated control algorithm is proposed for both the rectifier and the inverter stations for bidirectional active power flow. This results in a substantial reduction in switching losses and avoids the need for a reactive power plant. MATLAB based simulations are carried out to verify and validate the proposed control method of the VSC based HVDC system for bidirectional active power flow and independent reactive power control.

Keywords - Voltage Source Converter, Fundamental Frequency Switching, HVDC System, Power Flow Control, Reactive Power Control, Power Quality

I. INTRODUCTION

HVDC transmission systems are providing economic solutions for special kinds of transmission such as bulk, long-distance, back-to-back and underwater transmission lines [1]. The voltage source converter (VSC) technology has made it best suited for such kinds of applications, with additional features and improved performance compared to the conventional HVDC scheme based on the line-commutated thyristor converter. The VSC technology offers benefits in HVDC transmission technology by which a number of drawbacks of the conventional HVDC system using thyristor converters are overcome. VSC converters used for power transmission or voltage support, combined with energy storage sources, offer continuous and independent control of active and reactive power [2-4]. The VSC technology is used in HVDC systems for both large and medium power systems. Insulated Gate Bipolar Transistors (IGBTs) and Gate Turn Off thyristors (GTOs) are the semiconductor devices used in VSC based HVDC technology, for medium and high power respectively. The IGBT based HVDC system for medium power applications with PWM (Pulse Width Modulation) control is also known as HVDC Light [5], and there are some installations around the world of high power HVDC systems using PWM at nine times the system frequency [6-9]. In the case of high power HVDC systems the PWM technology may not be suitable, as it increases the switching losses in the VSC [5-7]. Therefore, it is not advisable to use PWM control for high power applications. In view of these factors it is considered proper to operate the converter at a low switching frequency. This paper proposes fundamental frequency switching (FFS) operation of a voltage source converter (VSC) based HVDC system. In such a system each device is switched once per cycle and conducts for a 180° duration; therefore, the device is fully utilized. A 12-pulse converter topology is commonly used in HVDC installations to reduce the harmonics in the ac system. A new configuration of a 12-pulse voltage source converter based HVDC system is designed, modeled and its simulated performance is given to demonstrate the improved power quality of the HVDC system [8, 9].

II. OPERATING PRINCIPLE

The real and reactive power flow in a VSC based HVDC system is given as

P = (Vs Vc sin δ) / X    (1)
Q = Vs (Vs − Vc cos δ) / X    (2)

where Vs is the magnitude of the grid voltage, Vc is the voltage magnitude at the primary winding of the converter transformer, XL is the interface reactance between the two voltages, and δ is the phase angle difference between the supply voltage (Vs) and the converter voltage (Vc). From eqns (1) and (2), it is seen that independent control of real and reactive power can be achieved by controlling both the amplitude and the phase of the ac voltages Vs and Vc. The real power can be controlled by varying the phase angle of the converter voltage, and the reactive power is controlled by varying the amplitude of the converter voltage. Using a single VSC bridge it is difficult to control the converter voltage magnitude with a constant dc voltage (Vdc), because the dc voltage is a function of the peak magnitude of the converter voltage (Vcm); the DC link voltage would have to be varied for reactive power control. To overcome this issue, two VSC bridges are used in place of a single VSC bridge and are considered as one unit. The two VSC bridges are separately connected to two isolated secondary windings of the transformer. The gating pulses of the two bridges are generated in such a way that the upper bridge voltage (Vc1) leads the supply voltage and the lower bridge voltage (Vc2) lags the supply voltage. A synthesis of these two voltages (Vc1 and Vc2) gives the converter primary voltage (Vc). By this method the dc voltage is maintained constant and is independent of the magnitude of Vc; the magnitudes of Vc1 and Vc2 are varied without depending upon the dc voltage (Vdc). An angle (β) is the phase angle between the voltages of the secondary windings of the converter transformer (Vc1 or Vc2), as shown in Fig. 2.
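As a quick numerical illustration of (1) and (2), the minimal sketch below evaluates the power transfer for assumed per-unit values; the function name and example numbers are not taken from the paper.

```python
import math

def vsc_power_flow(vs, vc, delta_rad, x):
    """Real and reactive power transfer across the interface reactance X
    per (1) and (2)."""
    p = vs * vc * math.sin(delta_rad) / x
    q = vs * (vs - vc * math.cos(delta_rad)) / x
    return p, q

# Example with assumed per-unit quantities
p, q = vsc_power_flow(vs=1.0, vc=0.98, delta_rad=math.radians(10), x=0.15)
print(f"P = {p:.3f} pu, Q = {q:.3f} pu")
```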

An angle (α) is the phase angle difference between the supply voltage (Vs) and the converter primary voltage (Vc). The angle (β) is responsible for the reactive power control and the angle (α) is responsible for the active power control. The phasor diagrams of real and reactive power control for the VSC based HVDC system are shown in Fig. 1; these three phasor diagrams are drawn with respect to the rectifier station. Fig. 1 (i) shows the condition when both the real and reactive power transfer at the rectifier station is zero. In this case the converter voltage (Vc) is in phase with the supply voltage (Vs) and their magnitudes are equal; the two converter voltages (Vc1 and Vc2) are equally displaced from the supply voltage (Vs) by an angle (β). Similarly, Figs. 1 (ii) and (iii) show the conditions when the real power is flowing from the rectifier to the inverter and the reactive power is supplied by the converter station, and when the real power is flowing from the inverter to the rectifier and the reactive power is supplied to the converter station. The relation between the dc voltage (Vdc) and the converter fundamental rms voltage (Vc) is given as

Vc = (8 n √6 Vdc cos β) / π    (3)

where n is the turns ratio of the transformer, decided based on the dc voltage requirement and the multipulse operation. In all the cases, the two converter voltages (Vc1, Vc2) maintain the same angle (β) from the converter voltage (Vc), but with respect to the supply voltage the angle varies according to the operating mode.

Fig. 1 Phasor diagrams for real and reactive power control: (i), (ii), (iii)

Fig. 3 Transformer connection for two-level double bridge 12-pulse VSC based HVDC system

III. CIRCUIT CONFIGURATION


Fig. 2 shows the proposed system configuration of the 100 MW, 12-pulse, two-level GTO-VSC based HVDC system connected to a 33 kV system, where two sets of two elementary 6-pulse converters (2x6-pulse), operated at displacement angles of 0° and 30° under fundamental frequency switching gate control, are connected in parallel on the dc side with an energy-storing dc capacitor. The ac sides of all four converters are connected to delta-connected secondary windings. An elementary 6-pulse GTO-VSC produces a square-wave or quasi-square-wave voltage output in the fundamental frequency switching mode of GTO gate control.

The winding turns of the delta winding (secondary) are made √3 times those of the corresponding winding (primary winding) of the other transformer to maintain equal line voltage on the converter side. Accordingly, harmonic voltages of the order 5th, 7th, 17th, 19th, ... cancel out in the primary windings connected to the ac grid system. In total, four transformers are used in this 12-pulse VSC system. All secondary windings are connected in delta configuration, isolated from each other, and connected to the VSC bridge converters. The primary windings of the 1st and 2nd transformers are connected in series, and the primary windings of the 3rd and 4th transformers are connected in delta configuration; these two sets of windings are connected in series configuration. The first and third converters give leading converter operation and the second and fourth converters give lagging converter operation.

IV. CONTROL SYSTEM OF VSC BASED HVDC SYSTEM

A. DC Voltage Controller


The dc voltage controller is common to both the rectifier and the inverter stations. The sensed dc voltage is compared with the reference dc voltage and the voltage error is processed by a PI (Proportional-Integral) controller, which gives the d-axis reference currents (id1*, id2*) for the rectifier and the inverter stations respectively. The d-axis reference currents id1* and id2* are given as

id1* = (P*/3Vs) + {Kp1 (Vdc* − Vdc) + Ki1 ∫(Vdc* − Vdc) dt}    (4)
id2* = (−P*/3Vs) + {Kp1 (Vdc* − Vdc) + Ki1 ∫(Vdc* − Vdc) dt}    (5)

where P* is the reference real power to be transmitted from one side to the other, Kp1 and Ki1 are the proportional and integral gain constants, Vs is the rms supply voltage, Vdc* is the reference dc voltage and Vdc is the actual dc voltage. The q-axis reference currents (iq1*, iq2*) for the rectifier and the inverter stations are calculated from the reference reactive power of each station:

iq1* = Q1*/3Vs    (6)
iq2* = Q2*/3Vs    (7)
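A minimal numerical sketch of (4)-(7) is given below; the discrete integrator, the function name and all variable names are assumptions made for illustration, not the authors' simulation model.

```python
def dq_reference_currents(p_ref, q1_ref, q2_ref, vs, vdc_ref, vdc,
                          kp1, ki1, integral_state, dt):
    """Reference d- and q-axis currents for the two stations per (4)-(7).

    integral_state accumulates the dc-voltage error integral between calls."""
    error = vdc_ref - vdc
    integral_state += error * dt                 # discrete approximation of the integral
    pi_term = kp1 * error + ki1 * integral_state
    id1_ref = p_ref / (3 * vs) + pi_term         # rectifier station, (4)
    id2_ref = -p_ref / (3 * vs) + pi_term        # inverter station, (5)
    iq1_ref = q1_ref / (3 * vs)                  # (6)
    iq2_ref = q2_ref / (3 * vs)                  # (7)
    return id1_ref, id2_ref, iq1_ref, iq2_ref, integral_state
```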

Fig. 2 A two-level double bridge 12-pulse VSC based HVDC system

The transformer winding connection used for the 12-pulse VSC is shown in Fig. 3. The ac output converter voltage waveform, synthesized by Fourier series expansion, contains harmonics of the order 6n ± 1, where n = 1, 2, 3, etc. In the absence of triplen harmonics in the 3-phase voltage waveforms, the (6n+1)th harmonics constitute harmonic positive-sequence components and the (6n−1)th harmonics constitute harmonic negative-sequence components. The two identical 6-pulse GTO-VSC bridges are connected in parallel across the dc capacitor and operated with a displacement angle of 30° (lagging), with the ac output terminals of the two converters connected to the delta-connected secondary windings of the two transformers, whose primary windings are connected in a series manner as shown in Fig. 3.

In general, the instantaneous active power and reactive power drawn from the utility grid are expressed as

P = vsd1·id1 + vsq1·iq1    (8)
Q = vsd1·iq1 − vsq1·id1    (9)

where id1, iq1 are the d-axis and q-axis components of the supply current is1, and vsd1, vsq1 are the d-axis and q-axis components of the supply voltage (vs) of the rectifier station [10-11].

B. Current Controllers

The output of the dc voltage controller is given as an input to the current controller. The d-axis and q-axis reference currents for the rectifier and the inverter stations are compared with the sensed d-axis and q-axis currents, and the current controller generates the reference d-axis and q-axis voltages (Vd*, Vq*). The actual d-axis and q-axis voltages and currents are calculated as

vsd1 = (2/3)·{va sin(ωt) + vb sin(ωt − 2π/3) + vc sin(ωt + 2π/3)}    (10)
vsq1 = (2/3)·{va cos(ωt) + vb cos(ωt − 2π/3) + vc cos(ωt + 2π/3)}    (11)
id1 = (2/3)·{ia sin(ωt) + ib sin(ωt − 2π/3) + ic sin(ωt + 2π/3)}    (12)
iq1 = (2/3)·{ia cos(ωt) + ib cos(ωt − 2π/3) + ic cos(ωt + 2π/3)}    (13)

The operation of the inner current controller for the rectifier station can be explained using the following equations:

Vd1* = vsd1 − (kp2 id1* + ki2 ∫ id1* dt) + ωL1 iq1    (14)
Vq1* = vsq1 − (kp3 iq1* + ki3 ∫ iq1* dt) + ωL1 id1    (15)

where vsd1, vsq1 are the d-q axis components of the supply voltage and L1 is the ac link reactance of the rectifier station. From the above equations the angles (α) and (β) are calculated for the rectifier and inverter stations as

α1 = tan⁻¹ (Vq1*/Vd1*)    (16)

V. RESULTS AND DISCUSSION


Fig. 5a shows the dynamic performance of a rectifier station of a 12pulse two-level voltage source converter based HVDC system. It shows the supply voltages (Vabc), supply currents (Iabc), active power (P), reactive power (Q), dc voltage (Vdc) of the rectifier station. A 100 MW of real power is transferred from the rectifier station to the an inverter station. The reactive power is maintained at zero value. The dc voltage is maintained at 3 kV. Initially a 100 MW of real power is transferred from a rectifier to the inverter then it is reversed from an inverter to the rectifier at 0.6 sec. The reactive power also simultaneously controlled in the rectifier station from 50 Mvar to -60 Mvar. Fig. 5b shows the dynamic performance of the inverter station during simultaneous control of real and reactive power control carried out at the rectifier station. During this operation a real of 100 MW received from rectifier station then at 0.6 sec the inverter starts supply the real power to rectifier station. The reactive power at a inverter station is maintained at zero value, but the reactive power at an rectifier station is changed from 0 Mvar to -60 Mvar then -60 Mvar to 0 Mvar then 0 to 60 Mvar and then the finally to 0 Mvar. The variation for angles () and () are also shown in Figs. 5a and 5b for the rectifier and the inverter stations respectively. Fig. 6 shows the harmonic spectra and waveform of the converter voltage and supply current. The THD of converter voltage at a rectifier and an inverter stations are observed as 8.16% and 7.97% and the supply current is observed as 2.24% and 3.14%. The THD level of supply current is well within the IEEE standard [14]. The sysem parameters used for simulation is shown in Table-I.

δ1 = tan⁻¹{ √[(2vc1)² - (Vd1*² + Vq1*²)] / √(Vd1*² + Vq1*²) }    (17)
α2 = tan⁻¹(Vq2* / Vd2*)    (18)
δ2 = tan⁻¹{ √[(2vc2)² - (Vd2*² + Vq2*²)] / √(Vd2*² + Vq2*²) }    (19)

The calculated angles (α and δ) are used to generate the gating signals for the GTOs of the VSC bridges. The leading converter set is gated at an angle of (α - δ) and the lagging converter set is gated at an angle of (-α - δ) [11-13]. The coordinated control scheme of the VSC based HVDC system is shown in Fig. 4.
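A short numeric illustration of equations (16)-(17) follows (all values are assumed for illustration and are not taken from the paper): given the reference voltages Vd1*, Vq1* and the per-bridge fundamental voltage vc1, the phase angle α1 and the dead angle δ1 follow directly.

```python
import math

# Assumed illustrative values (not from the paper)
Vd1_ref, Vq1_ref = 26.0e3, 2.5e3      # d- and q-axis reference voltages, V
vc1 = 14.5e3                          # fundamental voltage of one converter set, V

alpha1 = math.degrees(math.atan2(Vq1_ref, Vd1_ref))              # eq. (16)
Vmag2 = Vd1_ref**2 + Vq1_ref**2
delta1 = math.degrees(math.atan2(math.sqrt((2*vc1)**2 - Vmag2),  # eq. (17)
                                 math.sqrt(Vmag2)))
print(f"alpha1 = {alpha1:.2f} deg, delta1 = {delta1:.2f} deg")
```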

Fig. 5a Dynamic performance of a 12-pulse VSC rectifier station.

Fig. 4 Control scheme for GTO-VSC-HVDC system

Fig. 5b Dynamic performance of a 12-pulse VSC inverter station.


Fig. 6 Waveform and harmonic spectra of (a) converter voltage (b) supply current of rectifier station

TABLE I  SYSTEM PARAMETERS OF TWO-LEVEL 12-PULSE VSC BASED HVDC SYSTEM
Design Parameters                           Design Values
Rated active power P                        100 MW
Line-to-line supply voltage Vs              33 kV
Frequency f1/f2                             50 Hz / 60 Hz
AC inductance L1/L2                         6.9 mH / 5.7 mH
DC link voltage Vdc                         3 kV
DC link capacitance C1/C2                   0.125 F / 0.125 F
Transformation ratio n
DC voltage controller gains (Kp1, Ki1)      0.1, 0.15
Current controller gains (Kp2, Ki2)         1.66, 3.33
Current controller gains (Kp3, Ki3)         0.824, 15.82

REFERENCES
[1] R. Rudervall, J. Charpentier, and R. Sharma, "High voltage direct current (HVDC) transmission systems technology review paper," in Proc. of Energy Week, Washington, D.C., USA, Mar. 2000, pp. 1-15.
[2] Gunnar Asplund, Kjell Eriksson and Kjell Svensson, "DC Transmission based on Voltage Source Converters," in Proc. of CIGRE SC14 Colloquium in South Africa, 1997, pp. 1-7.
[3] Michael P. Bahrman, Jan G. Johansson and Bo A. Nilsson, "Voltage Source Converter Transmission Technologies - The Right Fit for the Applications," in Proc. of IEEE-PES General Meeting, Toronto, Canada, July 2003, pp. 1840-1847.
[4] Xiao Wang and Boon-Teck Ooi, "High Voltage Direct Current Transmission System Based on Voltage Source Converter," in IEEE PESC'90 Record, vol. 1, pp. 325-332.
[5] "HVDC Light DC Transmission based on Voltage Source Converter," ABB Review Manual, 1998, pp. 4-9.
[6] Y. H. Liu, R. H. Zhang, J. Arrillaga and N. R. Watson, "An Overview of Self-Commutating Converters and their Application in Transmission and Distribution," in Conf. Proc. of IEEE/PES T&D Conf. & Exhibition, Asia and Pacific, Dalian, China, 2005, pp. 1-7.
[7] Tatsuhito Nakajima and Shoichi Irokawa, "A Control System for HVDC Transmission by Voltage Sourced Converters," in Proc. of IEEE PES Summer Meeting, vol. 2, 1999, pp. 1113-1119.
[8] "Murray Link - the world's longest underground power link," ABB Manual, www.abb.com/hvdc.
[9] Hirokazu Suzuki, Tatsuhito Nakajima, Kunikazu Mino, Sigeyuki Sugimoto, Yoshiaki Mino and Hideyuki Abe, "Development and Testing of Prototype Models for a High-Performance 300 MW Self-Commutated AC/DC Converter," IEEE Trans. on Power Delivery, vol. 12, no. 4, pp. 1589-1601, Oct. 1997.
[10] Makoto Hagiwara, Hideaki Fujita and H. Akagi, "Performance of a Self-Commutated BTB HVDC Link System under a Single-Line-to-Ground Fault Condition," IEEE Trans. on Power Electronics, vol. 18, no. 1, pp. 278-285, Jan. 2003.
[11] Makoto Hagiwara and Hirofumi Akagi, "An Approach to Regulating the DC-Link Voltage of a Voltage-Source BTB System during Power Flow Line Faults," IEEE Trans. on Industry Applications, vol. 41, no. 5, pp. 1263-1271, Sep./Oct. 2005.
[12] Ruihua Song, Chao Zheng, Ruomei Li and Xiaoxin Zhou, "VSC based HVDC and its control strategy," in Proc. IEEE/PES T&D Conf. & Exhibition: Asia and Pacific, China, pp. 1-6, 15-18 Aug. 2005.
[13] C. Du, A. Sannino, and M. H. J. Bollen, "Analysis of the control algorithms of voltage-source converter HVDC," in Proc. of IEEE PowerTech 2005, pp. 1-7.
[14] IEEE Recommended Practices and Requirements for Harmonics Control in Electric Power Systems, IEEE Std. 519, 1992.

VI. CONCLUSION
A voltage source converter based HVDC system operating at fundamental frequency switching has been proposed using a 12-pulse converter, and it has been successfully tested for independent control of active and reactive powers. A control algorithm has been developed for the power flow between the two stations. With the proposed control, the converter has been successfully operated in all four quadrants of active and reactive power. The power quality of the HVDC system is also improved with 12-pulse converter operation. This 12-pulse GTO voltage source converter can be a good alternative to the conventional HVDC system.



A New Configuration of 12-Pulse VSCs Based STATCOM with Constant DC Link Voltage
Bhim Singh, K. Venkata Srinivas, Ambrish Chandra and Kamal-Al-Haddad
Bhim Singh and K. Venkata Srinivas are with the Department of Electrical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India (e-mail: bhimsinghr@gmail.com; sri.iitd@gmail.com). Ambrish Chandra and Kamal-Al-Haddad are with the Department of Electrical Engineering, Ecole de Technologie Superieure (ETS), 1100 Notre-Dame Ouest, Montreal, Quebec, Canada, H3C 1K3 (e-mail: ambrish.chandra@etsmtl.ca; kamal-al-haddad@etsmtl.ca).

Abstract - This paper presents a new configuration for the STATCOM (static synchronous compensator) with constant dc link voltage. The proposed STATCOM configuration consists of four six-pulse VSC (voltage source converter) bridges. Two of the VSCs are operated in the lagging mode and the other two in the leading mode to obtain an ac converter output voltage of controlled magnitude, which in turn controls the reactive power. The two VSCs operating in the lagging mode form one 12-pulse converter set and the two VSCs operating in the leading mode form another 12-pulse converter set. The proposed STATCOM model is developed using MATLAB/Simulink SimPowerSystems (SPS) toolboxes. The dynamic performance of the STATCOM is presented by varying the reference reactive power.

Index Terms - Voltage Source Converters (VSCs), Static Synchronous Compensator (STATCOM), Flexible Alternating Current Transmission System (FACTS).

I. INTRODUCTION

STATCOM (static synchronous compensator), a static reactive power compensating device belonging to the family of FACTS (flexible alternating current transmission system) devices, is capable of generating or absorbing reactive power whenever required without depending on the ac system voltage [1-2]. It has the benefits of fast response, power system stability improvement and symmetric lead-lag capability, and it can be interfaced with energy storage devices [3-4]. A VSC (voltage source converter), a combination of a self-commutating device with an anti-parallel diode, is the elementary unit of the STATCOM. The commercially available Gate Turn-Off (GTO) thyristor, with its high power handling capability, is a well suited self-commutating device for VSCs in high voltage and high power applications. There are several VSC configurations for STATCOM applications, such as multipulse VSC, multilevel VSC [5-6], PWM VSC and combined multipulse and multilevel VSC configurations [7].

In this paper, a new STATCOM configuration with constant dc link voltage is proposed using two sets of double-way [8-9] VSCs. In each double-way VSC there are two six-pulse VSCs, one operated in the leading mode and the other in the lagging mode. The two double-way VSCs are arranged so as to obtain a 12-pulse VSC performance. The operation of the STATCOM can be visualized as varying the phase angle difference between two 12-pulse VSCs so as to obtain a voltage of controllable magnitude while keeping the dc link voltage constant. The proposed STATCOM configuration is modeled using MATLAB/SimPowerSystems (SPS) toolboxes and the developed model is used to simulate its dynamic performance by varying the reference reactive power.

II. CONFIGURATION AND OPERATION OF THE STATCOM

The configuration of the proposed STATCOM is shown in Fig. 1. It consists of four six-pulse VSC bridges 1, 2, 3 and 4 and six single-phase transformers. Converters 1 and 3 form one double-way VSC and converters 2 and 4 form the other double-way VSC. Converters 1 and 3 are connected through the open windings of transformers a1-a1', b1-b1' and c1-c1', whose primary windings are connected in the star configuration. Converters 2 and 4 are connected through the open windings of transformers a2-a2', b2-b2' and c2-c2', whose primary windings are connected in the delta configuration. Converters 1 and 2, operating in the lagging mode with a phase difference of 30° between them, form one 12-pulse VSC configuration, and converters 3 and 4, operating in the leading mode with a phase difference of 30° between them, form the other 12-pulse VSC configuration. The ac converter output voltage magnitude is varied by the phase angle δ between the two six-pulse VSCs of a double-way VSC, i.e., one six-pulse VSC is operated at a lagging angle of δ/2 and the other at a leading angle of δ/2. The net effect of the two double-way VSCs with a phase difference of 30° (for 12-pulse VSC operation) between them is the same as varying the phase angle between two 12-pulse VSC waveforms. Fig. 2(a) shows the concept of obtaining an ac converter output voltage of controlled magnitude and Fig. 2(b) shows its harmonic spectrum. The ac converter output voltage vc1 of controlled magnitude can be written as,
vc1 = 2v1 cos(δ/2)    (1)
where v1 is the fundamental voltage of a 12-pulse VSC.


For maintaining the dc link voltage at a constant value, a phase angle α is introduced between the system voltage vs and the ac converter output voltage vc1. By varying the phase angle α, the ac converter output voltage can be made less than, greater than or equal to the system voltage. When the ac converter output voltage is greater than the system voltage, reactive power flows from the STATCOM to the ac system. When the ac converter output voltage is less than the system voltage, reactive power flows from the ac system to the STATCOM. When the ac converter output voltage equals the system voltage, there is no reactive power exchange between the system and the STATCOM. Fig. 3 shows the system configuration considered for demonstrating the dynamic performance by varying the reference reactive power.
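As a brief numeric illustration of equation (1), the following sketch evaluates the controlled converter voltage for a few dead-angle settings; the values of δ are chosen only for illustration (the 15° case matches the condition of Fig. 2(b)) and v1 is taken as 1 p.u., which is an assumption, not a design value from the paper.

```python
import math

# vc1 = 2*v1*cos(delta/2), eq. (1); v1 taken as 1.0 p.u. for illustration
v1 = 1.0
for delta_deg in (0.0, 15.0, 30.0, 60.0):
    vc1 = 2 * v1 * math.cos(math.radians(delta_deg) / 2)
    print(f"delta = {delta_deg:4.1f} deg -> vc1 = {vc1:.3f} p.u.")
# delta = 15 deg gives vc1 ~ 1.983 p.u., slightly below the delta = 0 maximum of 2 p.u.
```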
Fig. 1 Configuration of the proposed STATCOM

Fig. 2(a) AC converter output voltage of controlled magnitude

Fig. 2(b) AC converter output voltage of controlled magnitude for δ = 15°

Fig. 3 System configuration to demonstrate dynamic performance by varying reference reactive power

III. CONTROL ALGORITHM

The control algorithm of the proposed STATCOM configuration is shown in Fig. 4.

Fig. 4 Block diagram of the control algorithm

A. DC Link Voltage Controller
The dc link voltage controller is used to maintain the dc voltage vdc at the reference value v*dc. A PI (proportional-integral) controller is used for this purpose. The dc link voltage controller produces the reference d-axis current i*d,
i*d = Kpdc (v*dc - vdc) + KIdc ∫(v*dc - vdc) dt    (2)
This reference d-axis current i*d is fed to the decoupled current controller.

B. Decoupled Current Controller
The reactive power is calculated by equation (3) as,
Q = vsd·iq - vsq·id    (3)
where vsd and vsq are the d-axis and q-axis components of vsabc (system voltage). These are expressed as,
vsd = (2/3){vsa sinθ + vsb sin(θ - 120°) + vsc sin(θ + 120°)}    (4)
vsq = (2/3){vsa cosθ + vsb cos(θ - 120°) + vsc cos(θ + 120°)}    (5)
id and iq are the d-axis and q-axis components of isabc (STATCOM current),
id = (2/3){isa sinθ + isb sin(θ - 120°) + isc sin(θ + 120°)}    (6)
iq = (2/3){isa cosθ + isb cos(θ - 120°) + isc cos(θ + 120°)}    (7)
A Phase Locked Loop (PLL) is used to obtain cosθ and sinθ from the system voltages. The reference voltage commands v*cd and v*cq are calculated by equations (8) and (9),
v*cd = vsd - Kpd (i*d - id)    (8)
v*cq = vsq - Kpq (i*q - iq)    (9)
where Kpd and Kpq are the proportional gains of the d-axis and q-axis current controllers respectively. The reference reactive current i*q is calculated from the required reference reactive power of the STATCOM (Q*) as,
i*q = Q*/vs    (10)

C. Phase Angle Calculation
The phase command α* for maintaining the dc link voltage is calculated as,
α* = tan⁻¹(v*cq / v*cd)    (11)
The phase command δ*/2 for varying the reactive power is calculated as,
δ*/2 = tan⁻¹{ √[(2v1)² - (v*cd² + v*cq²)] / √(v*cd² + v*cq²) }    (12)
where v1 is the fundamental voltage of a 12-pulse VSC waveform given by,
v1 = n (p√6/6π) cos(π/12) vdc = 46.91 vdc    (13)
where n is the turns ratio and p is the pulse number.

D. Converter Gating
The lagging converters are gated at an angle of (-α* - δ*/2) and the leading converters are gated at an angle of (-α* + δ*/2). The gating pulses are synchronized with the system voltages using a PLL as shown in Fig. 3.

IV. DYNAMIC PERFORMANCE OF THE STATCOM

The proposed STATCOM configuration is modeled using MATLAB/Simulink SimPowerSystems (SPS) toolboxes and the dynamic performance is presented as shown in Fig. 5.
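The following Python sketch strings together the Section III control steps: the dc link PI loop (2), the decoupled current controller (8)-(10) and the phase-angle calculations (11)-(12). It is illustrative only: the signal values, the PLL angle, the per-unit scaling, the value of v1 and the controller integral state are assumptions, while the gains are taken from the Appendix.

```python
import math

def dq(xa, xb, xc, theta):
    """d-q components per eqs (4)-(7); theta is supplied by a PLL."""
    xd = (2/3)*(xa*math.sin(theta) + xb*math.sin(theta - 2*math.pi/3) + xc*math.sin(theta + 2*math.pi/3))
    xq = (2/3)*(xa*math.cos(theta) + xb*math.cos(theta - 2*math.pi/3) + xc*math.cos(theta + 2*math.pi/3))
    return xd, xq

# Assumed per-unit operating point (not from the paper)
theta = 0.35                       # PLL angle, rad
vs = 1.0                           # system voltage magnitude, p.u.
vsa, vsb, vsc = (vs*math.sin(theta + k) for k in (0, -2*math.pi/3, 2*math.pi/3))
isa, isb, isc = (0.4*math.sin(theta + k + 0.2) for k in (0, -2*math.pi/3, 2*math.pi/3))

vsd, vsq = dq(vsa, vsb, vsc, theta)
id_, iq = dq(isa, isb, isc, theta)

# DC link PI loop, eq. (2); gains from the Appendix, error and integral state assumed
Kpdc, KIdc = 4.0, 120.0
vdc_err, vdc_err_int = 0.02, 0.001
id_ref = Kpdc*vdc_err + KIdc*vdc_err_int

# Decoupled current controller, eqs (8)-(10)
Kpd, Kpq = 30.7, -120.0
iq_ref = -1.0/vs                   # Q* = -1.0 p.u. (capacitive), eq. (10)
vcd_ref = vsd - Kpd*(id_ref - id_)
vcq_ref = vsq - Kpq*(iq_ref - iq)

# Phase-angle calculation, eqs (11)-(12); v1 assumed in the same per-unit base
v1 = 0.6
alpha = math.atan2(vcq_ref, vcd_ref)
vmag2 = vcd_ref**2 + vcq_ref**2
half_delta = math.atan2(math.sqrt(max((2*v1)**2 - vmag2, 0.0)), math.sqrt(vmag2))
print(f"alpha = {math.degrees(alpha):.2f} deg, delta/2 = {math.degrees(half_delta):.2f} deg")
```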

The reference reactive power (Q*) is varied to observe the dynamic performance. Initially, from t = 0.1 s to t = 0.2 s, the reference reactive power is zero, shown by the dotted line in Fig. 5. At t = 0.2 s, the reference reactive power is changed to 100 MVAR and the actual reactive power (Q) follows it without any major overshoot. When Q* is set to 100 MVAR the phase angle δ increases; as a result the ac converter output voltage decreases and reactive power flows from the ac system to the STATCOM. Q* is again made zero at t = 0.4 s. From t = 0.4 s to t = 0.6 s, Q* is set to zero and the angle is varied such that the magnitude of the ac converter output voltage matches the system voltage, so the reactive power exchange is zero. At t = 0.6 s, the reference reactive power is changed to -100 MVAR and the actual reactive power (Q) again follows it without any major overshoot. When Q* is set to -100 MVAR the phase angle δ decreases; as a result the ac converter output voltage increases and reactive power flows from the STATCOM to the ac system. Q* is again made zero at t = 0.8 s. The %THD of the STATCOM current is less than 5% during both the inductive and capacitive operations, as shown in Fig. 6.

Fig. 5 Dynamic performance of the STATCOM by varying the reference reactive power


APPENDICES
GTO-VSCs based two-level 12-pulse 100 MVAR STATCOM:
GTO-VSC converters: 4
Number of pulses: 12
Nominal AC voltage: 132 kV
Nominal DC link voltage: 3 kV
GTO gating frequency: 50 Hz
DC capacitor: 0.25 F
Single-phase transformer rating: 17 MVA, 50 Hz, 66 kV/2.12 kV, 15% (X)
DC voltage controller: Kpdc = 4, KIdc = 120
Decoupled current controller: d-axis Kpd = 30.7, q-axis Kpq = -120

REFERENCES
[1] A. T. Johns, A. Ter-Gazarian and D.F. Warne, Flexible AC transmission systems (FACTS), IEE Power Energy Series, the Institute of Electrical Engineers, London, UK, 1999. [2] N. G. Hingorani and L. Gyugyi, Understanding FACTS: concepts and technology of flexible AC transmission systems, IEEE Press, 2000. [3] V. K. Sood, HVDC and FACTS controllers: applications of static converters in power systems, Kluwer Academic Publishers, USA, 2004. [4] K. R. Padiyar, FACTS controllers in power transmission and distribution, New Age International (P) Limited Publishers, India, 2007. [5] M. Aredes, A. F. C. Aquino and G. Santos Jr, Multipulse converters and controls for HVDC and FACTS systems, Journal of Electrical Engineering, Springer Berlin / Heidelberg, vol. 83, no. 4, pp. 137-145, May 2001. [6] B. Singh and R. Saha, Modeling of 18-pulse STATCOM for power system applications, Journal of Power Electron., vol. 7, no. 2, pp. 146-158, 2006. [7] B. Geethalakshmi, P. Dananjayan, and K. D. Babu, A Combined Multipulse-Multilevel Voltage Source Inverter Configuration for STATCOM Applications, in Proc. of Power System Technology and IEEE Power India Conference, POWERCON, 12-15 Oct. 08, pp.1-5. [8] B. D. Beford and R. G. Hoft, Principles of Inverter Circuits, John Wiley & Sons, Inc., New York, USA, 1964. [9] L. H. Walker, 10MW GTO Converter for Battery Peaking Service, IEEE Trans. on Industry Applications, vol. 26, no. 1, pp. 63-72, Jan/Feb-1990. [10] IEEE Standard 519-1992, IEEE Recommended Practices and Requirements for Harmonic Control in Electrical Power Systems, IEEE Inc., New York, 1992.

Fig. 6(a) STATCOM current waveform during inductive operation

Fig. 6(b) STATCOM current harmonic spectrum during inductive operation

Fig. 6(c) STATCOM current waveform during capacitive operation

Fig. 6(d) STATCOM current harmonic spectrum during capacitive operation

V. CONCLUSION
A new configuration for the STATCOM with constant dc link voltage has been proposed. The dynamic performance of the STATCOM has been evaluated by changing the reference reactive power. The simulation results have validated the inductive and capacitive operations of the STATCOM with constant dc link voltage. The %THD of the STATCOM current during the inductive and capacitive operations has been found to be well below 5%, satisfying the IEEE-519 standard [10]. The proposed STATCOM can be extended to HVDC and battery energy storage applications with slight modifications.



Selective Current Harmonic Elimination in a Current Controlled DC-AC Inverter Drive System using LMS Algorithm
S. Sangeetha¹, CH. Venkatesh², S. Jeevananthan³
¹Research Scholar, Department of Electrical and Electronics Engineering, Jawaharlal Nehru Technological University, Hyderabad, INDIA. e-mail: sangeetha.hk@gmail.com
²PG Scholar, Pondicherry Engineering College, Pondicherry, INDIA
³Assistant Professor, Department of Electrical and Electronics Engineering, Pondicherry Engineering College, Pondicherry, INDIA. e-mail: drsj_eee@pec.edu
Abstract: This paper suggests an adaptive filtering scheme based on the least mean square (LMS) algorithm to eliminate selected current harmonics in three-phase inverters. The proposed adaptive selective harmonic elimination (ASHE) algorithm adapts the filter weights and produces a notch filter action at the selected frequencies. Harmonic elimination in this approach is achieved by adding weighted sine and cosine components of the respective selected frequencies to match the amplitudes and phase angles of the harmonics present in the line current, and then subtracting this sum from the line current. The effectiveness of the algorithm is validated using the MATLAB/SIMULINK toolbox.

Keywords: Adaptive Selective Harmonic Elimination (ASHE), LMS algorithm, PWM inverter.

I. INTRODUCTION

The function of an inverter is to produce a sinusoidal ac output whose magnitude and frequency can be controlled. The pulse width modulation (PWM) technique is the most commonly used control scheme for a voltage source inverter (VSI); it moves the unwanted harmonics to a higher frequency region, which reduces the size of the output filters but incurs higher switching losses. Recent years have seen the development of numerous PWM pattern generation techniques for improving the performance of the voltage source inverter (VSI) [1-5]. A higher quality spectrum, even at low switching frequency, can be obtained by eliminating specific lower order harmonics. Selective harmonic elimination (SHE) aims at complete elimination of some low-order harmonics from a PWM waveform, while maintaining the amplitude of the fundamental component at a pre-specified value. When employed in an adjustable speed drive (ASD), elimination of lower order harmonics from the inverter output leads to a great reduction of lower order harmonic torque ripples. SHE offers several advantages compared to traditional modulation methods, including acceptable performance at low switching frequency, direct control over output waveform harmonics, and the ability to leave triplen harmonics uncontrolled to take advantage of the circuit topology in three-phase systems. The elimination of specific low-order harmonics from a given voltage/current waveform generated by a voltage/current source inverter, using what is widely known as optimal, programmed or SHE-PWM techniques, has been dealt with in numerous papers and for various converter topologies, systems and applications. Harmonic elimination has been a research topic since the early 1960s; it was first examined in [6] and developed into a mature form in [7-9] during the 1970s.

Vassilios G. Agelidis et al. have proposed a method in which direct minimization of the nonlinear transcendental trigonometric Fourier functions is made possible in combination with a random search, thereby reducing challenges such as the task of finding all the multiple sets of solutions of the switching angles for a given problem [10]. A state-of-the-art optimization tool based on the foraging behavior of a colony of ants has been employed for selective harmonic elimination in a pulse width modulated inverter by Kinattingal Sundareswaran et al. [11].

The area of adaptive signal processing has had a significant impact on a wide variety of signal processing applications, including inverse filtering, signal modeling, prediction, channel equalization, echo cancellation, noise cancellation, system identification and control, line enhancement, adaptive notch filtering, and beam forming. Algorithms such as the least mean squares (LMS) algorithm and the recursive least squares (RLS) algorithm are very common. The gradient algorithm, which minimizes a squared error criterion, is popular in the learning of neural networks, similar to the LMS algorithm in linear adaptive filtering. The convergence properties of the least mean square algorithm have been well established [12]. An adaptive filter based on the LMS algorithm has been effectively used for frequency estimation in power systems [13].

Many algorithms have been proposed in the technical literature to find the desired solutions, but each suffers from one or another limitation, such as dependency on the initial guess of the starting values, involvement of high order polynomials, probabilistic nature, slow convergence, multiple sets of solutions, or local optima. This paper presents an adaptive algorithm to eliminate unwanted dominant harmonic components with only knowledge of their order. Moreover, the proposed approach can eliminate an even or odd arbitrary number of harmonics (e.g., fundamental current control and elimination of the 5th, 7th, and 11th harmonics) without any penalty in the switching frequency, and it is also suitable for implementation in any power converter (e.g., phase controlled rectifiers, ac choppers, cycloconverters, etc.). The proposed adaptive selective harmonic elimination (ASHE) algorithm is tested on a three-phase voltage source inverter through MATLAB/SIMULINK simulation for the elimination of the characteristic fifth and seventh harmonics.

II. LEAST MEAN SQUARE ALGORITHM


The LMS algorithm was introduced by Widrow and Hoff in the year 1959. It is a simple and robust algorithm which does not require correlation function calculation or matrix inversions. It is an adaptive algorithm that uses a gradient-based method of steepest descent, employing estimates of the gradient vector from the available data. It incorporates an


iterative procedure that makes successive corrections to the weight vector in the direction of the negative of the gradient vector, which eventually leads to the minimum mean square error. The pot-like structure shown in Fig. 1 can help in understanding the LMS algorithm. Initially the weights are assigned at the point indicated by the arrow mark; as the LMS algorithm follows the steepest gradient, the final weights and the corresponding error reach the bottom point of the pot. The filter, shown in Fig. 2, consists of a combiner, the LMS algorithm, and a summing point. It operates as follows. (i) The reference signal, with two orthogonal components cosine and sine (X1 and X2), has the selected frequency (ωo = 2πfo) which is to be eliminated from the line current Dk; T is the sampling period and k is the discrete time index. (ii) The reference inputs (X1 and X2) are multiplied by the corresponding weights (W1 and W2). The weighted sine and cosine components of the reference signal are added together to match the amplitude and phase angle of the interfering sinusoid in the line current Dk; the adaptation process adjusts the weights to exactly match the amplitude and phase of the interference [14]. (iii) The signal Yk created by the combiner is subtracted from the line current Dk (the signal to be filtered), so that the selected component is eliminated from the filter output εk.

Fig. 1 LMS Algorithm

Fig. 2 Structure of single frequency ASHE filter

The LMS algorithm can be formulated as follows. The error estimation is given by
εk = Dk - Yk = Dk - XkT Wk    (1)
The reference and weight vectors are defined by
XkT = [X1 X2]    (2)
WkT = [W1 W2]    (3)
The gradient ∇k is estimated as
∇k = [∂εk²/∂W1  ∂εk²/∂W2]T = -2εk Xk    (4)
The weight updating formula is given by
Wk+1 = Wk - μ∇k = Wk + 2μεk Xk    (5)
where μ is the adaptation gain constant. The gradient estimate contains a substantial amount of noise, which is attenuated by the adaptive process represented by (5).

III. LMS ALGORITHM FOR SHE PROBLEM

Selective elimination of multiple harmonics is illustrated in Fig. 3, where the fifth and seventh harmonics are eliminated from the output current of the three-phase inverter. More harmonic components can be eliminated by including further blocks similar to the fifth and seventh harmonic blocks.

Fig. 3 Selective elimination of multiple harmonics

Using block 1, which contains the ASHE structure discussed in Fig. 2, the first harmonic is taken out of the current input; although not strictly necessary, this considerably reduces the noise in the gradient estimation in blocks 5 and 7 (for elimination of the fifth and seventh harmonics) and makes the adaptation process faster. The output of block 1, with the first harmonic filtered out and the fifth and seventh harmonic components still present, is introduced to the error inputs of blocks 5 and 7. The reference signals of the fundamental, fifth and seventh harmonics are represented as X1 and X2, X3 and X4, and X5 and X6 respectively, as illustrated in Fig. 6. The current Dk = Uc + Udis, where Uc is the control input and Udis is the undesirable harmonic component of the inverter output. The complete system of ASHE for eliminating the fifth and seventh harmonics in a three-phase inverter is shown in Fig. 4.
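A minimal Python sketch of the single-frequency ASHE filter of Fig. 2 and equations (1)-(5) is given below; the test signal, the adaptation gain and the sampling rate are assumptions for illustration and are not values from the paper.

```python
import numpy as np

fs, f1 = 5000.0, 50.0                 # assumed sampling rate and fundamental frequency
t = np.arange(0, 0.4, 1/fs)
# assumed test current: fundamental plus a 5th-harmonic "interference"
d = 1.0*np.sin(2*np.pi*f1*t) + 0.2*np.sin(2*np.pi*5*f1*t + 0.3)

def ashe(d, t, f_sel, mu=0.02):
    """Single-frequency ASHE: adapt W1, W2 so W1*cos + W2*sin cancels f_sel in d."""
    w = np.zeros(2)
    err = np.empty_like(d)
    for k in range(len(d)):
        x = np.array([np.cos(2*np.pi*f_sel*t[k]),      # X1
                      np.sin(2*np.pi*f_sel*t[k])])     # X2
        y = w @ x                                      # combiner output Yk
        err[k] = d[k] - y                              # error, eq. (1)
        w = w + 2*mu*err[k]*x                          # weight update, eqs (4)-(5)
    return err

out = ashe(d, t, 5*f1)
# after convergence the 5th harmonic is largely removed from the filter output
residual = 2*abs(np.mean(out[-1000:]*np.exp(-2j*np.pi*5*f1*t[-1000:])))
print("residual 5th-harmonic amplitude:", round(residual, 4))
```

Extending this to the multi-frequency scheme of Fig. 3 amounts to running one such block per selected harmonic and summing their outputs, as described above.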


Fig. 4 LMS algorithm based ASHE for a three-phase VSI

As shown in the figure, the system uses a PI controller for dc bus voltage control and two PI regulators in the synchronous reference frame for current control (Iq1 and Id1). The reference angle for generation of the sine and cosine functions at the fundamental frequency, the fifth harmonic and the seventh harmonic is created by a phase locked loop (PLL) block. The sine and cosine components at the fundamental frequency are phase locked with the utility voltage and are used for the stationary to synchronous (and vice versa) reference frame transformations. The sine and cosine components at five and seven times the fundamental frequency are used for selective harmonic elimination. The sampled line currents (Ia, Ib, Ic) are transformed from the stationary (a,b,c) reference frame to the two-phase (q,d) stationary reference frame (block 3/2) and then into the synchronous frame Iq, Id (block s/e). The conventional part of the control works as follows: the voltage regulator U_reg creates an active current reference Iq* depending on the dc bus voltage error. For unity power factor, the reactive current reference Id* is kept at zero. The PI current regulators maintain the average values of the feedback currents Iqe and Ide equal to the average values of the corresponding references. The outputs of the current regulators are transformed first from the synchronous to the stationary reference frame (block e/s) and then from the two-phase (q,d) to the three-phase (a,b,c) system, and are passed to the PWM block that controls the inverter. The components contributed to the PWM by the ASHE blocks create a voltage at the output of the inverter with the amplitudes and phase angles needed to cancel the harmonic components from the load currents.

Fig. 5 Current and harmonic spectrum without ASHE

Fig.6. Filtering action on inverter current with ASHE

IV. SIMULATION STUDY


The proposed ASHE algorithm is simulated in MATLAB/SIMULINK software. The three-phase PWM voltage source inverter was simulated with uncompensated dead time, both without and with the ASHE algorithm. The carrier frequency is 10 kHz, the dead time is 4 μs and the sampling rate is 5 kHz. Fig. 9 indicates the output of the MF-ASHE block. The performance of the system without the ASHE algorithm is shown in Fig. 10. Fig. 11 shows the effectiveness of the LMS-based ASHE in shaping the current. The improved waveforms of the line currents and the harmonic spectrum after application of ASHE are shown in Fig. 12. Table 1 compares the various lower order current harmonics and the total harmonic distortion (THD). It is evident that almost all the harmonic components, including the THD, are lower with ASHE filtering. The variations of the weights are shown in Fig. 13, Fig. 14 and Fig. 15 for the fundamental, the fifth harmonic and the seventh harmonic respectively.

Fig.7 Current and harmonic spectrum with ASHE


TABLE 1  COMPARISON OF HARMONICS WITH AND WITHOUT ASHE

Method          I1     I5      I7      I11    I17    THD
Without ASHE    100    19.59   14.09   6.44   5.09   28.15
With ASHE       100    8.85    1.66    3.69   1.59   17.52

Fig. 8 Weights for the fundamental component

Fig. 9 Weights for the fifth harmonic

Fig. 10 Weights for the seventh harmonic

V. CONCLUSION

The generic adaptive digital signal processing algorithm for noise cancelling has been effectively applied to the selective harmonic elimination problem in power conversion systems. The algorithm features unconstrained SHE and has no load or circuit dependency. For any selected frequency, the approach uses an iteratively adapted weighted combination of sine and cosine components to match the amplitude and phase angle of the harmonic present in the line current; this sum is then subtracted from the line current, eliminating the harmonic from the final output. The algorithm for adaptive cancelling of selected harmonic components in PWM inverters has been successfully implemented, and its performance has been simulated using the MATLAB toolbox for a case study eliminating the fifth and seventh harmonics. The results have been analyzed on the basis of the THD and the amplitudes of the eliminated harmonics. The effectiveness of the approach has been demonstrated by eliminating the fifth and the seventh harmonics in the line currents.

ACKNOWLEDGMENT

The authors wish to express their thanks to Dr. P. Ramesh Babu, Associate Professor, Department of Electronics and Instrumentation Engineering, Pondicherry Engineering College, for introducing them to the potential of adaptive filtering algorithms.

REFERENCES
[1] Kyu Min Cho, Won Seok Oh, Young Tae Kim, and Hee Jun Kim, A new switching strategy for pulse width modulation (PWM) power converters, IEEE Transactions on Industrial Electronics, Vol. 54, No.1, February 2007, pp.330-337. [2] S Jeevananthan, P Dananjayan, and S Venkatesan, A Novel Modified Carrier PWM Switching Strategy for Single-Phase Full-Bridge Inverter, Iranian Journal of Electrical and Computer Engineering, Summer Fall -Special Section on Power Engineering, Vol.4, No.2, pp.101-108, Tehran, Iran, 2005. [3] Prasad N.Enjeti, Phoivos D.Ziogas and James F.Lindsay, Programmed PWM Techniques to Eliminate Harmonics: A Critical Evaluation, IEEE Transactions on Industry Applications, vol.26, no.2, pp.302-316, march/April, 1990. [4] A.M.Hava, R.Kerkman and T.A.Lipo, A High-Performance Generalized Discontinuous PWM Algorithm, IEEE Transactions on Industry applications, vol.34, no.5, September/October 1998. [5] G.Narayanan and V.T.Ranganathan, Triangle-comparison approach and space-vector modulation, Journal of Indian Institute of Science. vol.80, pp.409-427, Sep.-Oct.2000. [6] F. G. Turnbull, "Selected harmonic reduction in static dc-ac inverters," IEEE Transactions on Communication and Electronics, vol. 83, pp. 374-378, 1964. [7] H.S.Patel and R.G.Hoft, "Generalized techniques of harmonic elimination and voltage control in thyristor inverters: Part IIVoltage control Techniques," IEEE Transactions on Industry Applications, vol.IA-10, pp. 666-673, 1974. [8] H.S.Patel and R.G.Hoft, "Generalized techniques of harmonic elimination and voltage control in thyristor inverters: Part IHarmonic elimination," IEEE Transactions on Industry Applications, vol. IA-9, pp.310-317, 1973. [9] I. J. Pitel, S. N. Talukdar, and P. Wood, "Characterization of programmed-waveform pulsewidth modulation," IEEE Transactions on Industry Applications, vol. IA-16, pp. 707-715, 1980. [10] Vassilios G. Agelidis, Anastasios I. Balouktsis, and Calum Cossar, On Attaining the Multiple Solutions of Selective Harmonic Elimination PWM Three-Level Waveforms Through Function Minimization, IEEE Tractions on Industrial Electronics, vol.55, no.3, pp.996-1004, March 2008. [11] Kinattingal Sundareswaran, Krishna Jayant, and T. N. Shanavas, Inverter Harmonic Elimination Through a Colony of Continuously Exploring Ants, IEEE Transactions on Industrial Electronics, vol.54, no.5, pp.2558- 2565, October 2007. [12] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985. [13] Daniel Barbosa, Renato Machado Monaro, Denis Vinicius Coury and Mrio Oleskovicz, A Modified Least Mean Square Algorithm for Adaptive Frequency Estimation in Power Systems, Proceedings of IEEE International Conference, 2008. [14] Vladimir Blasko, A Novel Method for Selective Harmonic Elimination in Power Electronic Equipment, IEEE Transactions on Power Electronics, vol. 22, no.1, pp.223-228, January 2007.




Performance Evaluation of APF Configurations Used in HV System.


Madhukar Waware
Department of Electrical Engineering, Indian Institute of Technology Roorkee, India-247667. Email: waware.madhukar@gmail.com

Pramod Agarwal, Member, IEEE

Department of Electrical Engineering, Indian Institute of Technology Roorkee, India-247667. Email: pramgfee@iitr.ernet.in

Abstract: In this paper, different configurations of active filters to suppress harmonics in high voltage systems are presented. The shunt active power filter (APF) is an effective solution to eliminate the harmonics produced by non-linear loads in low voltage systems. Implementation of a shunt APF in a high voltage system requires a high rating transformer, which increases the losses and the cost. The application of a multilevel inverter in the active filter eliminates the use of this bulky and costly transformer and also improves the performance. In this paper, the performance of different multilevel inverter configurations is compared. For the multilevel inverter based APF, the p-q theory is used to generate the reference signals and carrier phase shifted PWM is used to generate the switching signals.
Keywords: Multilevel inverter (MLI), Active Power Filter (APF)

I. INTRODUCTION

Nowadays, solid state control of ac power using thyristors and other semiconductor switches is widely employed to feed controlled electrical power to loads such as adjustable speed drives, furnaces, computer power supplies and HVDC systems. As non-linear loads, these solid state converters draw harmonic and reactive power components of current from the ac mains and lead to power quality problems. A number of devices are available to control harmonic distortion. Classically, shunt passive filters, consisting of tuned LC filters, are used to suppress the harmonics. They have the advantages of simplicity, low cost and easy maintenance, but their disadvantages are bulky passive elements, fixed compensation characteristics, and series and parallel resonance. Active power filters (APF) are the most effective solution for harmonic compensation [1][2]. An APF acts as a harmonic current source which injects a current of anti-phase but equal magnitude to the harmonic and reactive load current, to eliminate the harmonic and reactive components of the supply current. Active filters provide harmonic compensation, reactive power compensation, power factor correction and voltage regulation. However, these active filters have limitations in medium and high voltage applications due to semiconductor rating constraints. There are two ways to apply the shunt APF in the high voltage range. One way is by using a transformer, but in this case there are the problems of a high VA rating of the transformer and poor harmonic elimination; also, as the voltage decreases on the LV side, the current increases, which means the converter is required to handle heavy currents. Another way is to use series-connected semiconductor switches, but series operation faces the problems of unbalanced static and dynamic voltage sharing due to the spread of device characteristics and the mismatch of drive circuits. In this paper, simulated results of a two-level inverter APF with transformer and of five-level and seven-level cascade type multilevel inverter based active power filters are presented, using the p-q theory for reference generation and the carrier phase shifted pulse width modulation (CPS-PWM) method.

II. TWO LEVEL INVERTER WITH TRANSFORMER

Fig. 1 Two Level Inverter Based APF

Fig. 1 shows the configuration of a two-level inverter based APF for a high voltage system with a step-up transformer. This scheme aims at matching the magnitude of the harmonic compensation to that on the supply side. However, it suffers from the disadvantage that higher harmonic compensation is blocked by the transformer, due to its inductive reactance. Simulation results show that the reduction in harmonics is not within the IEEE THD limits. This disadvantage can be circumvented by using a higher rating transformer, in turn reducing the effective transformer reactance, but this is an uneconomical proposition. In a high voltage system a very large transformation ratio is required,


due to which the primary current will be large and the devices used in the inverter have to handle heavy currents, so the losses increase. Multilevel inverters are effective in high voltage applications as they provide high output voltages with the same voltage rating of the individual devices and also eliminate the need for a transformer [3][4]. Multilevel inverters include an array of power semiconductors and capacitors. The commutation of the switches permits the summation of the capacitor voltages, which reach high values at the output, while the power semiconductors must withstand only reduced voltages. In multilevel inverters, on increasing the number of levels, the output voltage has a staircase waveform with reduced harmonic distortion. There are three basic types of multilevel inverter, viz. diode clamped, flying capacitor and cascade connected inverters. The diode clamped inverter needs a large number of clamping switches, diodes and capacitors. Similarly, in the flying capacitor inverter the number of capacitors required is large and the inverter control becomes very complicated [5][6].

III. TOPOLOGIES OF MULTILEVEL INVERTER

Cascade multilevel inverters are increasingly used in high power applications. A cascade multilevel inverter uses multiple H-bridge power cells connected in series to produce high ac output voltages. An m-level cascade multilevel inverter consists of (m-1)/2 single-phase full bridges per phase, in which each bridge has its own separate dc source. This inverter can generate an almost sinusoidal voltage waveform with only one switching per cycle as the number of levels increases. In Fig. 2 a five-level cascade multilevel inverter based APF is shown. It consists of two single-phase full bridges connected in series in each leg. In this configuration, if S is the number of full bridge cells, then the number of output levels is (2S+1). With S = 2 there are five levels in the phase voltage and the peak magnitude is S·Vdc [7]. In Fig. 2, Is is the ac source current, IL is the load current and If is the compensating current from the APF, so that Is = IL + If.
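As a quick illustration of the cascade topology relations quoted above (the Vdc value and the instantaneous currents are assumed, not taken from the paper), the following sketch evaluates the level count 2S+1 and the peak phase voltage S·Vdc for a few cell counts, together with the compensation identity Is = IL + If.

```python
# Cascade multilevel inverter: S series H-bridge cells per phase
Vdc = 1000.0                       # assumed dc source voltage per cell, V
for S in (1, 2, 3):
    levels = 2*S + 1               # number of output voltage levels
    peak = S*Vdc                   # peak phase voltage
    print(f"S={S}: {levels}-level output, peak = {peak:.0f} V")

# Shunt APF compensation: the source current is the sum of load and filter currents
IL, If = 12.4, -3.1                # assumed instantaneous load and filter currents, A
Is = IL + If                       # Is = IL + If
print(f"Is = {Is:.1f} A")
```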

In the operation of the APF, the harmonic component of the load current is derived through a harmonic detection circuit and reversed to form the reference compensating current. The switching signals for the multilevel inverter are then generated such that the ac side output current of the APF correctly traces the reference current, so that the harmonic component of the load current is supplied by the filter current and the source current becomes free from harmonics, approaching a pure sinusoid. In this application, a separate dc source for each converter bridge is not required as there is no need for active power output from the inverter; it provides only harmonic currents. The APF draws a small amount of power from the source to compensate for the switching losses and capacitor losses. For that purpose, the capacitor voltages are sensed, compared with the reference, and fed to a PI controller to generate the loss component of the APF. This component is added to the fundamental load power component; the current components are then obtained using the two-phase to three-phase transformation, and by comparing with the source current the reference current is generated. Fig. 3 shows the configuration of the seven-level multilevel inverter based APF, in which three single-phase full bridges are connected in series in each leg. As S = 3, there are seven levels in the output phase voltage. Compared with the five-level multilevel inverter, the output voltage waveform is improved in the seven-level case.

Fig.3. Multilevel Inverter Based APF

Fig.2. Five Level Multilevel Inverter Based APF

IV. REFERENCE CURRENT GENERATION
Estimation of the compensating signal is the most important part of the active filter control. It has a great impact on the compensation objectives, the rating of the active filter, and its transient as well as steady state performance. Here the p-q theory is used to calculate the instantaneous real and imaginary (reactive) power components. The p-q theory is based on a transformation which transforms the three-phase voltages


and currents into the stationary reference frame [8]. The three-phase voltages and currents are transformed into α-β orthogonal coordinates according to equations (1) and (2) [9][10].


[vα; vβ] = √(2/3) · [1  -1/2  -1/2; 0  √3/2  -√3/2] · [va; vb; vc]    (1)

and

[iα; iβ] = √(2/3) · [1  -1/2  -1/2; 0  √3/2  -√3/2] · [ia; ib; ic]    (2)

The instantaneous active and reactive powers in α-β coordinates are calculated by the following expressions,

p(t) = vα·iα + vβ·iβ,    q(t) = vβ·iα - vα·iβ    (3)

From this instantaneous active power, the fundamental active power component is extracted using a low pass filter and the APF loss component is added. The compensating currents in the α-β plane are derived by using equation (4),

[iα*; iβ*] = 1/(vα² + vβ²) · [vα  vβ; vβ  -vα] · [p; q]    (4)

Then the three-phase reference currents are obtained by the following two-phase to three-phase transformation,

[ia*; ib*; ic*] = √(2/3) · [1  0; -1/2  √3/2; -1/2  -√3/2] · [iα*; iβ*]    (5)
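A compact Python sketch of the reference current generation of equations (1)-(5) is given below. It is illustrative only: the test waveforms, the load model, the cutoff of the simple first-order low-pass filter and the choice of compensated components (the oscillating part of p plus the whole of q, one common option) are assumptions, not values taken from the paper.

```python
import numpy as np

fs, f = 20000, 50
t = np.arange(0, 0.2, 1/fs)
# assumed three-phase voltages and a distorted (5th-harmonic) load current
va, vb, vc = (np.sin(2*np.pi*f*t + k) for k in (0, -2*np.pi/3, 2*np.pi/3))
ia, ib, ic = (np.sin(2*np.pi*f*t + k) + 0.2*np.sin(5*(2*np.pi*f*t + k))
              for k in (0, -2*np.pi/3, 2*np.pi/3))

C = np.sqrt(2/3) * np.array([[1, -0.5, -0.5],
                             [0, np.sqrt(3)/2, -np.sqrt(3)/2]])   # eqs (1)-(2)
v_ab = C @ np.vstack([va, vb, vc])
i_ab = C @ np.vstack([ia, ib, ic])

p = v_ab[0]*i_ab[0] + v_ab[1]*i_ab[1]                             # eq. (3)
q = v_ab[1]*i_ab[0] - v_ab[0]*i_ab[1]

# fundamental active power via a simple first-order low-pass filter (cutoff assumed)
alpha = 1/(1 + fs/(2*np.pi*10))
p_avg = np.copy(p)
for k in range(1, len(p)):
    p_avg[k] = p_avg[k-1] + alpha*(p[k] - p_avg[k-1])

den = v_ab[0]**2 + v_ab[1]**2
p_c, q_c = p - p_avg, q            # compensate harmonic active power and all reactive power
i_ab_ref = np.vstack([(v_ab[0]*p_c + v_ab[1]*q_c)/den,            # eq. (4)
                      (v_ab[1]*p_c - v_ab[0]*q_c)/den])
i_abc_ref = C.T @ i_ab_ref                                        # eq. (5): C.T inverts the map
print("peak reference filter current:", round(np.abs(i_abc_ref).max(), 3))
```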

These reference currents are compared with the source currents and the error is processed through a PI controller to generate the reference currents for the APF. The dc voltage of each converter unit should be balanced to ensure that the system works in its normal state and that the losses of the APF are provided by the source; for this, the dc voltage of each converter is sensed, the average voltage of each leg is compared with the reference, and the error is given to a PI controller which generates the loss component of the APF. This loss component is added to the fundamental load component.

V. CARRIER PHASE SHIFTED PWM

Carrier phase shifted pulse width modulation is used for the multilevel inverter to generate the switching signals. In the phase shifted multicarrier modulation, triangular carriers of the same frequency and the same peak-to-peak amplitude are used [12][13]. There is a phase shift between any two adjacent carrier waves, of magnitude given by

φcr = 360° / (m - 1)

where m is the number of voltage levels of the multilevel inverter. Gate signals are generated by comparing the reference current signals with the carrier waves. For example, the carriers Tw1, Tw2 and Tw3 shown in Fig. 4 are used to generate the gating for the upper switches in the left legs of power cells H1, H2 and H3 respectively, for the seven-level multilevel inverter of Fig. 3. The inverted signals are used for the upper switches in the right legs. The gate signals for all lower switches operate in a complementary manner with respect to their corresponding upper switches. In this PWM method the equivalent switching frequency of the whole converter is (m-1) times the switching frequency of each power device. This means that CPS-PWM can achieve a high equivalent switching frequency effect at a very low real device switching frequency, which is most useful in high power applications [8].

Fig. 4 Carrier Waves TW1, TW2 and TW3 for Seven Level MLI
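The sketch below (the carrier frequency and the time grid are assumed, not values from the paper) generates the (m-1) phase-shifted triangular carriers used in CPS-PWM and reports the per-carrier phase shift of 360°/(m-1).

```python
import numpy as np

m = 7                          # seven-level MLI -> m-1 = 6 carriers
fc = 2000.0                    # assumed carrier frequency, Hz
phase_step = 360.0/(m - 1)     # phase shift between adjacent carriers

t = np.arange(0, 2/fc, 1e-6)

def triangle(t, fc, phase_deg):
    """Unit-amplitude triangular carrier with the given phase shift."""
    x = fc*t + phase_deg/360.0
    return 2*np.abs(2*(x - np.floor(x + 0.5))) - 1

carriers = [triangle(t, fc, k*phase_step) for k in range(m - 1)]
print(f"{m-1} carriers, adjacent phase shift = {phase_step:.1f} deg")
# each carrier is compared with the same reference to gate one H-bridge cell
```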

VI. SIMULATION RESULTS

The performance of the different active power filter topologies is evaluated by simulating the proposed schemes in MATLAB Simulink. The simulation parameters for all configurations are:
AC source voltage = 6600 Vrms, 50 Hz
Source inductance = 0.5 mH
Source resistance = 0.5 Ω
AC side filter inductance = 3 mH
DC capacitor = 5000 μF
Carrier frequency = 10000 Hz
Fig. 5 shows the supply phase voltage. In Fig. 6 the waveforms for the two-level inverter based active power filter are shown. Fig. 7 and Fig. 8 show the waveforms of the load current, filter current and source current after compensation for the five-level and seven-level multilevel inverter based APF respectively. Fig. 9a is the spectrum of the load current IL, where the THD is 26.17%. Fig. 9b is the spectrum of the ac source current Is after compensation in the case of the two-level inverter based APF with transformer, where the THD is 8.17%. Fig. 9c and Fig. 9d show the spectrum of the source


current for the five-level and seven-level multilevel inverter based APF, where the THD is 3.09% and 2.11% respectively. Thus the simulation results show that the MLI based APF systems have a good compensation effect at a low device switching frequency, which is an important aspect for high voltage applications.

Fig.5.Supply Phase Voltage.


Fig.8. Waveforms for Seven Level MLI based APF. (a) Load Current, (b) Filter Current, (c) Source current after Compensation

Fig.6. Waveforms for Two Level Inverter Based APF with Transformer. (a) Load Current (b) Filter Current (c) Source Current after compensation

(a)Spectrum of Load Current IL

(b) Spectrum of Source Current for Two Level Inverter Based APF with Transformer after Compensation.

Fig.7. Waveforms for Five Level MLI Based APF. (a) Load Current, (b) Filter Current (c)Source current after Compensation


(c) Spectrum of Source Current Is for Five Level MLI Based APF after Compensation

(d) Spectrum of Source Current Is for Seven Level MLI Based APF after Compensation Fig.9 Spectrums of load current and source current in Different Configurations of APF.

VII. CONCLUSION
In this paper different configurations of active power filter have been simulated. The two-level inverter based APF needs a step-up transformer for injection of the compensating current into the high voltage system, but this configuration has poor performance. The application of a multilevel inverter in the high voltage system improves the performance of the APF by reducing the THD to a low value, within the IEEE THD limits, and also eliminates the need for a costly transformer in high voltage systems. The reduction in THD is greater for the seven-level cascade inverter than for the five-level inverter based APF. The p-q theory is a good choice for generating the reference current, and the carrier phase shifted PWM method keeps the individual device switching frequency low while still providing a high equivalent switching frequency at the converter output.
[1] Bhim Singh, Kamal Al-Haddad and A. Chandra, A New
Control Approach to Three Phase Harmonics and reactive power Compensation, IEEE Trans. On power systems Vol. 13, No 1 Feb 1998 pp. 133-138 [2] Bhim Singh, Kamal Al-Haddad and A. Chandra, A Review of Active Filters for Power Quality Improvement, IEEE Trans. On Industrial Electronics Vol. 46, No 5 Oct 1999 pp. 960-971 [3] W. Liqiao ,L Ping Z. Zhongchao study on Shunt Active Power filter based On cascade Multilevel Converters, in

35th Annual IEEE power electronics Specialists Conference 2004. Pp.3512- 3516 [4] H.Miranda V. Cardens J Perez,A Hybrid Multilevel Inverter for Shunt active Filter Using Space Vector Control. in 35th Annual IEEE power electronics Specialists Conference 2004, pp 3541-3546. [5] J. Rodriguez . Sheng Lai , F.Z.Peng Multilevel Inverters: A Survey Of Topologies ,Controls, and Applications, IEEE Trans. On Ind. Electronics Vol.49, no 4 Aug 2002 pp. 724-738 [6] J.S Lai and F.Z. Peng, Multilevel Converters-A New Breed Of Power Converters,IEEE/PESC Ann.Mtg Vol.2 pp.1121-1126 June 1997. [7] Y.S Lai and F.S. Shyu, Topology for Hybrid Multilevel Inverter, IEE Proc. Electr.Power Appl. Vol.149, No.6, Nov.2002 pp 449-458. [8] Akagi H, Nabae A , Control strategy of active power filters using multiple voltage source PWM converters , IEEE Trans on industry applications ,vol IA 22 No 3 may/June 1986 pp460-465. [9] Akagi H, Nabae A , instantaneous reactive power compensators Comprising switching devices without energy storage components.IEEE Trans.on industry application, vol.IA 20.No 3 may/June 1984 pp-625-630. [10] Hirofumi Akagi , Trends in Active Power Line Conditioner , IEEE Trans. On Power Electronics, Vol.9. No.3 May 1994 pp 263268 [11] D.G Holmea,et al. Opportunities for Harmonic Cancellation with carrier-Based PWM for Two-Level and Multilevel Cascaded Inverters, IEEE Trans. On Industry Applications, vol.37,No 2 pp 574-582, 2001. [12] Gong Maozhong, Liu Hankui, A Novel method of Calculating Current Reference for Shunt Active Power Filters without Hardware Synchronization , IEEE 2002 [13] Wang Liqiao, Lin Ping, Study on Shunt Active Power Filter Based on Cascade Multilevel Converters, in Proc. Of 35th Annual IEEE Power Electronics Specialists conference, Aachen Germany,2004 [14] Mauricio Aredes, F.C Monteiro, A Control Strategy for shunt Active Filter in proc. IEEE, Harmonics and Quality of Power Conf. Vol 2 pp.472-477 Oct 2002 [16] A.M Massoud S.J.Finney Seven Level Shunt active power Filter,11th International conference on Harmonics and Quality of Power Proc. 2004, pp136-140. Madhukar Waware completed B.E in Electrical Engineering in 1999 and M.E in Control System in 2003 from Walchand College of Engineering (WCE) Sangli, Maharashtra. He is faculty member in EED in WCE Sangli and presently pursuing Ph.D program in Electrical Engineering Department ,Indian Institute of Technology Roorkee, India. Pramod Agarwal obtained B.E, M.E and Ph.D degree in Electrical Engineering from University of Roorkee, now Indian Institute of Technology Roorkee (IITR), India. Currently he is a professor in the department of Electrical Engineering, IITR. His fields of interest include Electrical Machines, Power Electronics, Power Quality, Microprocessors and microprocessor-controlled drives, Multilevel converters and applications of dSPACE for the control of Converters. He is a member of IEEE.



Analysis and Recognition of Anomaly in Foot through Human Gait using Neuro-Genetic Approach
Raj Kumar Patra
Department of Computer Science & Engineering Shri Shankaracharya College of Engineering and Technology, Bhilai, India
patra.rajkumar@gmail.com

Rohit Raja
Department of Computer Science & Engineering Shri Shankaracharya College of Engineering and Technology, Bhilai, India rohitraja4u@gmail.com

Tilendra Shishir Sinha


Department of Computer Science & Engineering Shri Shankaracharya College of Engineering and Technology, Bhilai, India tssinha1968@gmail.com
Abstract - The present paper deals with Recognition of Anomalous Foot (RAF) through a knowledge-based model of GAIT. Most of the work in this area has been carried out for the detection of problems like short and long foot problems, ankle and knee angle variations, detection of walking speed after hitting an object, and recovery type problems using GAIT analysis. Very little work has been carried out with knowledge-based or model-based approaches using soft-computing tools like the Artificial Neural Network (ANN) and the Genetic Algorithm (GA). In the present paper, a Neuro-Genetic approach has been applied for RAF. The work has been carried out in two phases. In the first phase, a knowledge-based model called GAIT_MODEL has been formed. The model has been formed by considering an input gait image that has been enhanced and compressed for the removal of distortion with loss-less information. Later on it has been segmented along with the detection of the region of interest (ROI), and the relevant GAIT parameters have been extracted from that ROI. Each parameter has been stored in the form of a corpus. For this, an Artificial Neural Network (ANN) has been employed. In the second phase, the GAIT_MODEL deals with the understanding of RAF. For this, an algorithm called NGBRAF (Neuro-Genetic Based Recognition of Anomalous Foot) has been proposed and has also been tested with 15 subjects of varying age groups on a flat and plain surface. The results have been found very satisfactory with the tested data sets.

Keywords - Artificial Neural Network (ANN), Genetic Algorithm (GA), Recognition of Anomalous Foot (RAF), Principal Component Analysis (PCA), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT)

I. INTRODUCTION

GAIT analysis is the systematic study of human walking. It is used for diagnosis and the planning of treatment in people with medical conditions that affect the way they walk. Biometrics such as automatic face and voice recognition continue to be a subject of great interest. Gait [1] is a new biometric aimed at recognizing subjects by the way they walk. However, functional assessment, or outcome measurement, is but one small role that quantitative gait analysis can play in the science of rehabilitation. If gait analysis is expanded, then the ultimately complex relationships between normal and abnormal gait can be easily understood [2]-[3]. The use of quantitative gait analysis in the rehabilitation setting has increased only in recent times. For the past five decades, work has been carried out on gait abnormality treatment, and many medical practitioners, with the help of scientists and engineers [1]-[4], have carried out experimental work in this area. It has been found from the literature that two major factors, time and effort, play a vital role. These two factors have been kept as a major objective in the present work. In the present paper, a unique strategy has been adopted for further analysis of gait using the above two factors for Recognition of Anomalous Foot (RAF). Until 1980, researchers from the engineering field had carried out little work on GAIT analysis. In the year 1983, Garrett and Luckwill [5] carried out work on maintaining the speed of walking through Electromyography (EMG) and gait analysis. In the year 1984, Berger and his colleagues [6] detected angle movement and disturbances during walking. In the year 1990, Yang et al. [7] further carried out experimental work for the detection of short and long steps during walking. In the year 1993, Grabiner et al. [8] investigated how much time is taken by a subject to recover normal walking after hitting an obstacle placed in the subject's path. In 1994 the work was carried out by Eng et al. [9] with little modification for the detection of foot angles. In the year 1996, Schillings et al. [10] modified the work of Grabiner et al. [8] and calculated the angles of movement.

Figure 1: Schematic diagram for GAIT_MODEL


In the year 1999, Schillings et al. [11] further carried forward the work that was done in 1996 with little modification. In the year 2001, Smeesters et al. [12] calculated the trip duration and its threshold value using GAIT analysis. From the literature, it has been observed that very little work has been carried out using a Neuro-Genetic approach for RAF through human GAIT. The schematic diagram for the formation of the knowledge-based model, that is, the GAIT_MODEL, is shown in Fig. 1. As shown in Fig. 1, a known image is fed as input and is then pre-processed for enhancement and segmentation. The enhancement has been done for filtering any noise present in the image. Later on it has been segmented using the connected component method [13]-[14]. The Discrete Cosine Transform (DCT) has been employed for loss-less compression. The DCT has a strong energy compaction property; its main property is that it considers real values and provides a better approximation of an image with fewer coefficients. Segmentation is a fundamental step in any image processing work. It has been carried out for detecting the boundaries of the objects present in the image and also for detecting connected components between pixels. Hence the Region of Interest (ROI) has been detected and the relevant GAIT features have been extracted. The relevant features that have been selected and extracted in the present paper are based on the physical characteristics of the GAIT of the subject. The GAIT and physical characteristics that have been extracted are: foot_angle, step_length, knee to ankle (K_A) distance, foot_length and shank width. Based on these features, relevant parameters have been extracted. The relevant parameters are: mean, median, standard deviation, range of the parameter (lower and upper bound), power spectral density (psd), auto-correlation and discrete wavelet transform (DWT) coefficients, eigen_vector and eigen_value. These feature-based parameters are fed as input to an Artificial Neuron (AN) as depicted in Fig. 2, below.

Figure 2: An Artificial Neuron (AN)

Figure 3: Schematic diagram for understanding the GAIT_MODEL for RAF (blocks: Unknown Gait Image -> Pre-processing (Enhancement, Segmentation) -> Feature Selection & Extraction -> NGBRAF Algorithm <-> Corpus -> Classification -> Normal Foot / Abnormal Foot)

From Fig. 2, each neuron has input and output characteristics and performs a computation of the form given in equation (1):

O_i = f(S_i),   S_i = W^T X      (1)

where X = (x_1, x_2, x_3, ..., x_m) is the input vector to the neuron and W is the weight matrix, with w_ij the weight (connection strength) of the connection between the j-th element of the input vector and the i-th neuron; f(.) is an activation (non-linear) function, usually a sigmoid; O_i is the output of the i-th neuron; and S_i is the weighted sum of the inputs. A single neuron by itself is not a very useful tool for forming the GAIT_MODEL; the real power comes when single neurons are combined into a multi-layer structure called a neural network. A neuron has a set of nodes, called synapses, that connect it to the inputs, the output or other neurons. A linear combiner is a function that takes all inputs and produces a single value. Let the input sequence be {X_1, X_2, ..., X_N} and the synaptic weights be {W_1, W_2, W_3, ..., W_N}; the output Y of the linear combiner is then given by equation (2):

Y = Σ_{i=1}^{N} X_i W_i      (2)

An activation function takes any input from minus infinity to infinity and squeezes it into the range -1 to +1 or 0 to 1. The activation function is usually taken to be a sigmoid, as given in equation (3):

f(Y) = 1 / (1 + e^{-Y})      (3)

The threshold defines the internal activity of the neuron and has been kept fixed at 1; in general, for the neuron to fire, the weighted sum should be greater than the threshold value. The learning capability results from the ability of the network to modify its weights through a learning rule. In the present work a feed-forward network is used as the topology and backpropagation as the learning rule for the formation of the corpus, i.e. the knowledge-based model GAIT_MODEL. The use of this model for RAF is shown in Fig. 3.
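A minimal sketch of the single artificial neuron described by equations (2) and (3) follows, assuming the fixed threshold of 1 stated in the text. The helper name and the example inputs are hypothetical.

```python
import numpy as np

def neuron_output(x, w, threshold=1.0):
    """Single artificial neuron: weighted sum (eq. 2) squashed by a sigmoid (eq. 3)."""
    y = float(np.dot(w, x))        # Y = sum_i X_i * W_i
    fired = y > threshold          # fixed threshold of 1, as stated in the text
    return 1.0 / (1.0 + np.exp(-y)), fired

activation, fired = neuron_output(x=[0.4, 0.7, 0.1], w=[0.9, 0.3, 0.5])
print(activation, fired)
```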

This model is then optimised for the best match of features using a Genetic Algorithm (GA) for Recognition of Anomalous Foot (RAF). A GA has been adopted because it is a powerful search technique based on the mechanics of natural selection, reproduction, crossover and mutation, combining survival of the fittest with a randomised information exchange. In every generation a new set of artificial feature strings is created and tried as a new measure after best-fit matching; GAs are theoretically and computationally simple, operating only on fitness values. The crossover operation combines the information of the selected chromosomes (gait features) and generates offspring; the mutation and reproduction operations modify the offspring values after selection and crossover to reach the optimal solution. In the present work the GAIT_MODEL represents the population of genes, i.e. the gait parameters.

The paper is organised as follows: Section II proposes an algorithm for RAF, Section III presents the experimental results and discussions, Section IV gives the conclusions and further scope of the work, and the final section lists the references.
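The sketch below illustrates, in plain Python, the GA operations named in the text (single-point crossover, mutation, and survival of the fittest over chromosomes of gait parameters). It is only a schematic of the operators, not the authors' implementation; all names and values are ours.

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover of two gait-parameter chromosomes."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.05, scale=0.01):
    """Perturb a few genes at random; models the mutation step described in the text."""
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in chromosome]

def best_fit(population, fitness):
    """Survival of the fittest: keep the chromosome with the highest fitness value."""
    return max(population, key=fitness)

# Example usage with made-up gait-parameter chromosomes and a dummy fitness.
pop = [[42.0, 0.61, 21.3], [41.2, 0.72, 20.9]]
child = mutate(crossover(pop[0], pop[1]))
print(best_fit(pop + [child], fitness=lambda c: -abs(c[0] - 42.0)))
```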


II. AN ALGORITHM FOR RECOGNITION OF ANOMALOUS FOOT


1. Read the unknown gait image of twelve frames, with left-leg and right-leg movements alternating.
2. Set the frame counter, fcount = 1.
3. Do while fcount <= 12:
4. Read gait_image[fcount].
5. Enhance the image.
6. Apply loss-less compression.
7. Compute the connected components for segmenting the image.
8. Crop the image to locate the Region of Interest.
9. Compute the step_length, K_A (knee-to-ankle) distance, foot_length, shank_width, foot_angle and walking speed.
10. Compute the genetic parameters using the relations UB = (((m_max - m_mean)/2) * A) + m_mean and LB = (((m_mean - m_min)/2) * A) + m_min, where A is the pre-emphasis coefficient, m_max, m_mean and m_min are the maximum, mean and minimum values, and UB and LB are the upper- and lower-bound values (see the sketch after this list).
11. Store the feature vectors in a look-up table.
12. Perform the best-fit matching with the data set of GAIT_MODEL (stored in a master file) using the genetic algorithm.
13. Employ classification for a two-class problem and then make a decision.
14. End do.
15. Display the result as NORMAL FOOT or ABNORMAL FOOT.
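A small Python sketch of step 10 follows, computing the UB and LB genetic parameters from a list of feature values. The value of the pre-emphasis coefficient A is not specified in the paper, so the default used here is an assumption.

```python
def genetic_bounds(values, pre_emphasis=0.95):
    """UB and LB from step 10 of NGBRAF: UB = ((m_max - m_mean)/2)*A + m_mean,
    LB = ((m_mean - m_min)/2)*A + m_min, with A the pre-emphasis coefficient (assumed)."""
    m_max, m_min = max(values), min(values)
    m_mean = sum(values) / len(values)
    ub = ((m_max - m_mean) / 2.0) * pre_emphasis + m_mean
    lb = ((m_mean - m_min) / 2.0) * pre_emphasis + m_min
    return ub, lb

print(genetic_bounds([41.7, 42.0, 42.6, 42.9, 43.1]))
```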

III. EXPERIMENTAL RESULTS AND DISCUSSIONS

The proposed algorithm, NGBRAF (Neuro-Genetic Based Recognition of Anomalous Foot), has been tested with the necessary test data for 15 subjects. The efficiency of recognising foot abnormalities has been kept at a threshold range of 90% to 100% and verified against the trained data stored in the master file. For each unknown gait image the distance measures are computed against the data set (GAIT_MODEL), and the genetic algorithm is then employed for the best fit or match. Fig. 4 shows the original image of one subject along with the segmented portion of the gait in standing mode.

Figure 4: Segmented image in standing mode

Fig. 5 shows the original image of one subject along with the segmented portion of the gait in walking mode.

Figure 5: Segmented image in walking mode

After segmentation of the gait image in walking mode, the features are extracted using the relevant mathematical analysis. Some of the distance measures of subject #1 are shown in Fig. 6.

Figure 6: GAIT feature measure in walking mode of subject #1

Similarly, the distance measures of subject #2 are shown in Fig. 7.

Figure 7: GAIT feature measure in walking mode of subject #2

The relevant gait features step_length and knee-to-ankle distance are shown in Fig. 8.

Figure 8: Step_length and Knee to Ankle measure in walking mode of the subject

The probability distribution of the extracted gait features during the Abnormal Foot Recognition process is plotted in Fig. 9.

Figure 9: Probability distribution of GAIT features extracted during walking mode on a plain surface without carrying weight, for one subject


Table 1, below, describes the GAIT features extracted in standing and walking mode of subject #1. From Table 1 it is observed that only minimal variations occur from one frame to the next. These values are plotted in Fig. 10 for graphical analysis.

TABLE 1. GAIT FEATURES EXTRACTED IN STANDING AND WALKING MODE OF SUBJECT #1

Figure 10: Graphical representation of GAIT features extracted in standing and walking mode of subject #1

From Fig. 10 it is observed that during the standing mode of the subject there is very little variation, while during the transition from standing mode to walking mode there is some variation; during walking alone the variation is nearly constant. In this work, odd frames (left leg at the back, right leg at the front) and even frames (left leg at the front, right leg at the back) are considered separately for the analysis. With the proposed NGBRAF algorithm, NORMAL FOOT is obtained in most cases because the variations are minimal; if large variations are detected, ABNORMAL FOOT may result.

IV. CONCLUSIONS AND FURTHER SCOPE OF THE WORK

An algorithm has been proposed in this paper to recognize anomalies in the foot of a subject walking on a plain surface without carrying any weight. Only a limited number of gait features has been extracted; a few more features should be extracted to achieve better results, and other environments such as rough and slippery surfaces should be considered for further study and analysis.

REFERENCES
[1] D. Cunado, M. S. Nixon and J. N. Carter, "Using gait as a biometric, via phase-weighted magnitude spectra," Proc. First International Conference, AVBPA'97, pp. 95-102, Crans-Montana, Switzerland, March 1997.
[2] P. S. Huang, C. J. Harris and M. S. Nixon, "Recognizing humans by gait via parametric canonical space," Artificial Intelligence in Engineering, vol. 13, no. 4, pp. 359-366, October 1999.
[3] P. S. Huang, C. J. Harris and M. S. Nixon, "Human gait recognition in canonical space using temporal templates," IEE Proc. Vision, Image and Signal Processing, vol. 146, no. 2, pp. 93-100, 1999.
[4] W. I. Scholhorn, B. M. Nigg, D. J. Stephanshyn and W. Liu, "Identification of individual walking patterns using time discrete and time continuous data sets," Gait and Posture, vol. 15, pp. 180-186, 2002.
[5] M. Garrett and E. G. Luckwill, "Role of reflex responses of knee musculature during the swing phase of walking in man," Eur. J. Appl. Physiol. Occup. Physiol., 52(1), pp. 36-41, 1983.
[6] W. Berger, V. Dietz and J. Quintern, "Corrective reactions to stumbling in man: neuronal co-ordination of bilateral leg muscle activity during gait," The Journal of Physiology, 357, pp. 109-125, 1984.
[7] J. F. Yang, D. A. Winter and R. P. Wells, "Postural dynamics of walking in humans," Biological Cybernetics, 62(4), pp. 321-330, 1990.
[8] M. D. Grabiner and B. L. Davis, "Footwear and balance in older men," Journal of the American Geriatrics Society, 41(9), pp. 1011-1012, 1993.
[9] J. J. Eng, D. A. Winter and A. E. Patla, "Strategies for recovery from a trip in early and late swing during human walking," Experimentation Cerebrale, 102(2), pp. 339-349, 1994.
[10] A. M. Schillings, B. M. Van Wezel and J. Duysens, "Mechanically induced stumbling during human treadmill walking," Journal of Neuroscience Methods, 67(1), pp. 11-17, 1996.
[11] A. M. Schillings, B. M. Van Wezel, T. Mulder and J. Duysens, "Widespread short-latency stretch reflexes and their modulation during stumbling over obstacles," Brain Research, 816(2), pp. 480-486, 1999.
[12] C. Smeesters, W. C. Hayes and T. A. McMahon, "The threshold trip duration for which recovery is no longer possible is associated with strength and reaction time," Journal of Biomechanics, 34(5), pp. 589-595, 2001.
[13] X. D. Yang, "An improved algorithm for labeling connected components in a binary image," TR 89-981, March 1989.
[14] R. Lumia, "A new three-dimensional connected components algorithm," Computer Vision, Graphics, and Image Processing, vol. 23, pp. 207-217, 1983.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Order Reduction of Time Domain Models using Hankel Matrix Approach


C. B. Vishwakarma* and R. Prasad Department of Electrical Engineering Indian Institute of Technology Roorkee, Roorkee Uttarakhand-247667, India

ABSTRACT — The authors present modified order-reduction methods for linear single-input single-output (SISO) and multi-input multi-output (MIMO) time-domain models. In the proposed methods, modified Hankel matrices are considered for the minimal realization of the reduced-order models. The algorithms of the proposed methods are conceptually simple and easy to program, and their viability is illustrated with numerical examples taken from the literature. Keywords: Hankel matrix, Time domain model, Stability, Order reduction, Time-moment, Markov parameter.

INTRODUCTION

The analysis of any physical system starts with the building of a model. Once a physical phenomenon has been adequately modeled so as to be a faithful representation of reality, all further analysis can be done on the model, and experimentation on the process is no longer required. The analysis, simulation and controller design can therefore be carried out conveniently if low-order models are available which provide a good approximation to the original high-order system. Several methods based on the Hankel matrix have been suggested for deriving a low-order time-domain model from a complex system described by its transfer function or state model. The problem of minimal realization of a rational state model based on the Hankel matrix
*C. B. Vishwakarma is a Research Scholar in the Department of Electrical Engineering, IIT Roorkee (e-mail: cbv_gkv@yahoo.co.in).

approach has drawn major attention from several authors [1]-[8] over the last few decades. Rozsa et al. [4], Hickin and Sinha [7], Shrikhande et al. [8] and others have suggested reduction methods based on the Hankel matrix approach in which the Hankel matrix is converted into Hermite normal form using outer products; the realization of the reduced model can then be achieved in a fixed number of operations on the Hankel matrix. Shamash [5] proposed a method based on the Hankel matrix approach in which conversion into Hermite normal form is not required; instead, using Silverman's algorithm [9] and the theory developed in [10], the reduced-order time-domain models are obtained by minimal realization. The method suggested by Shamash is applicable to linear SISO and MIMO dynamic systems.

In this paper, two new methods for reducing the order of large-scale time-domain models are proposed. In both methods, modified Hankel matrices consisting of both time-moments and Markov parameters are considered, and modified algorithms are suggested for obtaining the lower-order time-domain models. The proposed methods are applicable to linear stable SISO and MIMO systems and are illustrated with the help of numerical examples.

II DESCRIPTION OF THE METHOD

Let the nth-order original linear dynamic SISO system be expressed as

Ẋ(t) = A X(t) + B U(t),   Y(t) = C X(t)      (1)

or

G(s) = C (sI - A)^{-1} B

Let the k-th order (k < n) reduced model of the original system (1) be expressed as

Ẋ_k(t) = A_k X_k(t) + B_k U(t),   Y_k(t) = C_k X_k(t)      (2)

or

R_k(s) = C_k (sI - A_k)^{-1} B_k

The objective of the proposed method is to find the k th order reduced model (2) of the original system (1) such that it retains the important qualitative properties of the original system and approximates its response as closely as possible.


The following two methods are suggested for reducing the order of the system.

(a) First method for order reduction of linear SISO systems

The time-moments and Markov parameters of the original nth-order system are defined as

T_i = C A^{-i} B      (3)

M_i = C A^{i-1} B      (4)

where i = 1, 2, 3, ...
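Under the definitions (3)-(4) as reconstructed above, the time-moments and Markov parameters of a SISO triple (A, B, C) can be computed as in the following sketch (a hypothetical helper, not part of the paper):

```python
import numpy as np

def moments_and_markov(A, B, C, n_terms):
    """Time-moments T_i = C A^{-i} B and Markov parameters M_i = C A^{i-1} B."""
    A = np.atleast_2d(A)
    B = np.atleast_2d(B).reshape(-1, 1)
    C = np.atleast_2d(C)
    A_inv = np.linalg.inv(A)
    T = [(C @ np.linalg.matrix_power(A_inv, i) @ B).item() for i in range(1, n_terms + 1)]
    M = [(C @ np.linalg.matrix_power(A, i - 1) @ B).item() for i in range(1, n_terms + 1)]
    return T, M
```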

The modified Hankel matrix H_{n×n} is taken as

H_{n×n} = [ M_1      T_1      ...  T_{k-1}    ...  T_{n-1}
            T_1      T_2      ...  T_k        ...  T_n
            ...      ...      ...  ...        ...  ...
            T_{k-1}  T_k      ...  T_{2k-2}   ...  T_{n+k-2}
            ...      ...      ...  ...        ...  ...
            T_{n-1}  T_n      ...  T_{n+k-2}  ...  T_{2n-2} ]      (5)

The next (shifted) Hankel matrix is written as

H̄_{n×n} = [ T_1      T_2      ...  T_k        ...  T_n
             T_2      T_3      ...  T_{k+1}    ...  T_{n+1}
             ...      ...      ...  ...        ...  ...
             T_k      T_{k+1}  ...  T_{2k-1}   ...  T_{n+k-1}
             ...      ...      ...  ...        ...  ...
             T_n      T_{n+1}  ...  T_{n+k-1}  ...  T_{2n-1} ]      (6)

The first row of the Hankel matrix (5) is taken as

H_{1×n} = [ M_1  T_1  ...  T_{k-1}  ...  T_{n-1} ]      (7)

The minimal realization of the original system G(s) is now given by the triple of matrices (A_k, B_k, C_k), which are obtained from the above matrices as follows:


A_k = H̄_{k×k} (H_{k×k})^{-1}
B_k = (H_{1×k})^T      (8)
C_k = H_{1×k} (H_{k×k})^{-1}

where H_{k×k}, H̄_{k×k} and H_{1×k} are the leading k×k (and 1×k) submatrices of (5), (6) and (7). Thus, the k-th order reduced model is obtained as

Ẋ_k = A_k X_k + B_k U,   Y = C_k X_k      (9)

or

R_k(s) = C_k (sI - A_k)^{-1} B_k      (10)
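The sketch below illustrates the first method for a SISO system, building the Hankel matrices of (5)-(7) from the scalar sequence {M_1, T_1, T_2, ...} and forming (A_k, B_k, C_k) from the realization formulas (8) as reconstructed above. It is an illustrative sketch, not the authors' code.

```python
import numpy as np

def reduced_model_method1(seq, k):
    """Reduced-order realization from the sequence seq = [M_1, T_1, T_2, ...]."""
    H    = np.array([[seq[i + j]     for j in range(k)] for i in range(k)])  # H_kk, eq. (5)
    Hbar = np.array([[seq[i + j + 1] for j in range(k)] for i in range(k)])  # shifted, eq. (6)
    H_inv = np.linalg.inv(H)
    Ak = Hbar @ H_inv                                        # A_k = Hbar_kk H_kk^-1
    Bk = np.array(seq[:k], dtype=float).reshape(-1, 1)       # B_k = (first row of H)^T
    Ck = np.array(seq[:k], dtype=float).reshape(1, -1) @ H_inv  # C_k = H_1k H_kk^-1
    return Ak, Bk, Ck

# Example usage with an arbitrary sequence (at least 2k terms are required).
Ak, Bk, Ck = reduced_model_method1([5.0, -3.9, 5.2, -9.0, 16.9, -32.8], k=2)
```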

Extension to MIMO systems

Let the nth-order MIMO system G(s), with p inputs and q outputs, be reduced to lower-order models of order k = 1, 2, 3, ..., n-1. The block Hankel matrix H_{ij}, of dimension iq × jp (each block M_1, T_r being a q×p matrix), is taken in the following form:

H_{ij} = [ M_1      T_1      T_2      ...  T_{j-1}
           T_1      T_2      T_3      ...  T_j
           ...      ...      ...      ...  ...
           T_{i-2}  T_{i-1}  T_i      ...  T_{i+j-3}
           T_{i-1}  T_i      T_{i+1}  ...  T_{i+j-2} ]      (11)

with i = ⌈n/q⌉ and j = ⌈(n+p)/p⌉, where ⌈·⌉ denotes the smallest integer greater than or equal to the quantity inside.

H̄_{ij} = [ T_1      T_2      T_3      ...  T_j
            T_2      T_3      T_4      ...  T_{j+1}
            ...      ...      ...      ...  ...
            T_{i-1}  T_i      T_{i+1}  ...  T_{i+j-2}
            T_i      T_{i+1}  T_{i+2}  ...  T_{i+j-1} ]      (12)

The triple of matrices (A_k, B_k, C_k) is obtained from the above block Hankel matrices as

A_k = H̄_{k×k} (H_{k×k})^{-1}
B_k = H_{k×p}      (13)
C_k = H_{q×k} (H_{k×k})^{-1}

where H_{k×p}, H_{q×k} and H_{k×k} are taken element-wise from the Hankel matrix H_{ij}, and H̄_{k×k} is taken element-wise from the Hankel matrix H̄_{ij}.

(b) Second method for order reduction

The following Hankel matrices are considered for obtaining the low-order models. The Hankel matrix H_{n×n} for this algorithm is built from the time-moments taken in descending order followed by the Markov parameters, i.e. from the sequence {T_n, T_{n-1}, ..., T_1, M_1, ..., M_{n-1}}:

H_{n×n} = [ T_n      T_{n-1}  ...  T_2      T_1
            T_{n-1}  T_{n-2}  ...  T_1      M_1
            ...      ...      ...  ...      ...
            T_2      T_1      ...  M_{n-3}  M_{n-2}
            T_1      M_1      ...  M_{n-2}  M_{n-1} ]      (14)

The Hankel matrix H̄_{n×n} is built from the same sequence shifted by one position, {T_{n+1}, T_n, ..., T_1, M_1, ..., M_{n-2}}:

H̄_{n×n} = [ T_{n+1}  T_n      ...  T_3      T_2
             T_n      T_{n-1}  ...  T_2      T_1
             ...      ...      ...  ...      ...
             T_3      T_2      ...  M_{n-4}  M_{n-3}
             T_2      T_1      ...  M_{n-3}  M_{n-2} ]      (15)

The first row of the Hankel matrix H_{n×n} is taken as

H_{1×n} = [ T_n  T_{n-1}  ...  T_k  ...  T_2  T_1 ]      (16)

The minimal realization of the original system is given by the triple of matrices (A_k, B_k, C_k) obtained from the above Hankel matrices:

A_k = H̄_{k×k} (H_{k×k})^{-1}
B_k = (H_{1×k})^T      (17)
C_k = H_{1×k} (H_{k×k})^{-1} A_k^{k-1}

The k-th order reduced model is then obtained as

Ẋ_k = A_k X_k + B_k U,   Y = C_k X_k      (18)

Extension to MIMO systems

For a MIMO system, the size of the block Hankel matrix H_{ij} is calculated as

i = ⌈n/q⌉   and   j = ⌈(n+p)/p⌉

where ⌈·⌉ denotes the smallest integer greater than or equal to the quantity inside, and p, q and n are the numbers of inputs and outputs and the order of the original system, respectively. The following Hankel matrices are taken for the algorithm.

H_{ij} = [ T_j        T_{j-1}    ...  T_2      T_1
           T_{j-1}    T_{j-2}    ...  T_1      M_1
           ...        ...        ...  ...      ...
           T_{j-k+1}  T_{j-k}    ...  M_{k-2}  M_{k-1}
           ...        ...        ...  ...      ...
           T_{j-i+1}  T_{j-i}    ...  M_{i-2}  M_{i-1} ]      (19)

H̄_{ij} = [ T_{j+1}    T_j        ...  T_3      T_2
            T_j        T_{j-1}    ...  T_2      T_1
            ...        ...        ...  ...      ...
            T_{j-k+2}  T_{j-k+1}  ...  M_{k-3}  M_{k-2}
            ...        ...        ...  ...      ...
            T_{j-i+2}  T_{j-i+1}  ...  M_{i-3}  M_{i-2} ]      (20)

Both matrices are of dimension iq × jp, each block being a q×p matrix.

The triple of matrices (A_k, B_k, C_k) is obtained as

A_k = H̄_{k×k} (H_{k×k})^{-1}
B_k = H_{k×p}      (21)
C_k = H_{q×k} (H_{k×k})^{-1} A_k^{k-1}

where H_{k×p}, H_{q×k}, H_{k×k} and H̄_{k×k} are taken element-wise from the Hankel matrices H_{ij} and H̄_{ij}, as explained in the algorithm for SISO system reduction.

III NUMERICAL EXAMPLES

Two numerical examples are taken from the literature to illustrate the proposed methods. To check the goodness of the reduced-order models, the integral square error (ISE) [11] and the relative integral square error (RISE) [12] between the original and reduced-order models are calculated using Matlab/Simulink. These performance indices are defined as

ISE = ∫_0^∞ [y(t) - y_k(t)]^2 dt      (22)

RISE = ∫_0^∞ [y(t) - y_k(t)]^2 dt / ∫_0^∞ [y(t) - y(∞)]^2 dt      (23)

where y(t ) and yk (t ) are the unit step responses of the original and reduced order models respectively and y() is the steady-state value of the step response of the original high order system.
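For sampled step responses on a common time grid, the indices (22)-(23) can be evaluated numerically as in this small sketch (trapezoidal integration; y(∞) is approximated by the last sample):

```python
import numpy as np

def ise_rise(t, y, yk):
    """ISE (22) and RISE (23) from sampled step responses y (original) and yk (reduced)."""
    y, yk = np.asarray(y, dtype=float), np.asarray(yk, dtype=float)
    e2 = (y - yk) ** 2                 # squared output error
    d2 = (y - y[-1]) ** 2              # squared deviation from the steady-state value
    ise = np.trapz(e2, t)
    rise = ise / np.trapz(d2, t)
    return ise, rise
```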

205

Example 1: Consider a SISO system from Lucas [13], converted into state-model form with the following matrices:
A = [ -10.5   -2.625   -0.6758   -0.2891   -0.07813
       16      0        0         0         0
        0      8        0         0         0
        0      0        2         0         0
        0      0        0         1         0 ]

B = [ 4  0  0  0  0 ]^T,   C = [ 1.25  0.6406  0.2578  0.1802  0.07617 ]

The proposed method 1 is applied to obtain the reduced-order models. The Hankel matrices obtained from the above system are

H_{5×5} = [  5.0000   -3.8997    5.2040   -8.9900    16.8976
            -3.8997    5.2040   -8.9900   16.8976   -32.7754
             5.2040   -8.9900   16.8976  -32.7754    64.5383
            -8.9900   16.8976  -32.7754   64.5383  -128.0823
            16.8976  -32.7754   64.5383 -128.0823   255.2206 ]

H̄_{5×5} = [ -3.8997    5.2040   -8.9900   16.8976   -32.7754
              5.2040   -8.9900   16.8976  -32.7754    64.5383
             -8.9900   16.8976  -32.7754   64.5383  -128.0823
             16.8976  -32.7754   64.5383 -128.0823   255.2206
            -32.7754   64.5383 -128.0823  255.2206  -509.6027 ]

and

H_{1×5} = [ 5.0000  -3.8997  5.2040  -8.9900  16.8976 ]


The following 1st , 2nd , and 3rd order reduced models are obtained using the proposed method and are given as

Ẋ_1 = -1.2821 X_1 + 5 U,   Y = X_1

or

R_1(s) = 5 / (s + 1.282)

Ẋ_2 = [ -3.0910  -1.3555 ] X_2 + [  5      ] U,   Y = [ 1  0 ] X_2
      [  1        0      ]       [ -3.8997 ]

or

R_2(s) = (5s + 5.286) / (s^2 + 3.091s + 1.355)

and

Ẋ_3 = [ -4.0168  -3.8424  -1.0380 ] X_3 + [  5      ] U,   Y = [ 1  0  0 ] X_3
      [  1        0        0      ]       [ -3.8997 ]
      [  0        1        0      ]       [  5.2040 ]

or

R_3(s) = (5s^2 + 9.582s + 4.048) / (s^3 + 4.017s^2 + 3.842s + 1.038)

The 1st, 2nd and 3rd order reduced models obtained using the proposed method are compared graphically in Fig. 1, from which it is observed that the 2nd and 3rd order models closely match the original system. The ISE and RISE between the original and reduced systems are given in Table-1. From these comparisons it may be concluded that the proposed method is computationally simple and gives qualitatively good results.

207

Fig. 1 Step response comparison of the reduced order models

Table-1 Comparison of the reduced order models for example-1

Reduced model     ISE             RISE
R_1(s)            0.8253          0.09777
R_2(s)            2.523 x 10^-4   2.989 x 10^-5
R_3(s)            1.143 x 10^-4   1.354 x 10^-5

Example-2 Consider a MIMO system taken from Shamash [14].

G(s) = [ (s + 20) / ((s + 1)(s + 10))
         (s + 10) / ((s + 2)(s + 5)) ]


Let the 2nd and 3rd order reduced models be synthesized using the first proposed method. The dimensions of the block Hankel matrix are calculated as

i = ⌈n/q⌉ = ⌈4/2⌉ = 2,   j = ⌈(n + p)/p⌉ = ⌈(4 + 1)/1⌉ = 5

The block Hankel matrices H_{2×5} and H̄_{2×5}, of dimension 4×5, are written as

H_{2×5} = [ M_1  T_1  T_2  T_3  T_4 ]
          [ T_1  T_2  T_3  T_4  T_5 ]

        = [  1.0000  -2.0000   2.1000  -2.1100   2.1110
             1.0000  -1.0000   0.6000  -0.3200   0.1640
            -2.0000   2.1000  -2.1100   2.1110  -2.1111
            -1.0000   0.6000  -0.3200   0.1640  -0.0828 ]

H̄_{2×5} = [ T_1  T_2  T_3  T_4  T_5 ]
           [ T_2  T_3  T_4  T_5  T_6 ]

         = [ -2.0000   2.1000  -2.1100   2.1110  -2.1111
             -1.0000   0.6000  -0.3200   0.1640  -0.0828
              2.1000  -2.1100   2.1110  -2.1111   2.1111
              0.6000  -0.3200   0.1640  -0.0828   0.0416 ]

The 2nd and 3rd order reduced models are obtained and are given as

Ẋ_2 = [ -1.5556   2.1111 ] X_2 + [ 1 ] U,   Y = [ 1  0 ] X_2
      [ -0.4444  -0.1111 ]       [ 1 ]          [ 0  1 ]

and

Ẋ_3 = [  0       -10       -11     ] X_3 + [ 1 ] U,   Y = [ 1  0  0 ] X_3
      [ -7.4035   -1.6667   -7.3684 ]       [ 1 ]          [ 0  1  0 ]
      [  0         0         1      ]       [ 2 ]


The step responses of the 2nd and 3rd order reduced models are compared with the original system and shown in the Fig.2 and the error between the original and reduced systems is

209

calculated and given in the Table-2, from which it is seen that the 3rd order model exactly matches the original system.

Fig. 2 Step responses of the original and reduced system

Table-2 Comparison of the reduced order models for example-2

ISE = ∫_0^∞ [g_ij(t) - r_ij(t)]^2 dt,   i = 1, 2;  j = 1

Element    2nd order model R_2(s)    3rd order model R_3(s)
r_11(t)    5.536 x 10^-2             0.0
r_21(t)    1.614 x 10^-2             1.429 x 10^-4

The same multivariable system [14] is solved by the second proposed method as follows

The Hankel matrices H_{2×5} and H̄_{2×5} are obtained as

H_{2×5} = [ T_5  T_4  T_3  T_2  T_1 ]
          [ T_4  T_3  T_2  T_1  M_1 ]

        = [ -2.1111   2.1110  -2.1100   2.1000  -2.0000
            -0.0828   0.1640  -0.3200   0.6000  -1.0000
             2.1110  -2.1100   2.1000  -2.0000   1.0000
             0.1640  -0.3200   0.6000  -1.0000   1.0000 ]

and

H̄_{2×5} = [ T_6  T_5  T_4  T_3  T_2 ]
           [ T_5  T_4  T_3  T_2  T_1 ]

         = [  2.1111  -2.1111   2.1110  -2.1100   2.1000
              0.0416  -0.0828   0.1640  -0.3200   0.6000
             -2.1111   2.1110  -2.1100   2.1000  -2.0000
             -0.0828   0.1640  -0.3200   0.6000  -1.0000 ]

Initially the 3rd order reduced model is synthesized to illustrate the proposed method. The minimal realization of the original system is given by the triple of matrices (A_3, B_3, C_3) obtained from the following equations:

A_3 = H̄_{3×3} (H_{3×3})^{-1}
B_3 = H_{3×1}
C_3 = H_{2×3} (H_{3×3})^{-1} A_3^2

where H_{3×3}, H_{3×1} and H_{2×3} are chosen from the Hankel matrix H_{4×5}, and H̄_{3×3} from H̄_{4×5}, and are given as

H_{3×3} = [ -2.11   2.1   -2 ]        H̄_{3×3} = [  2.111  -2.11   2.1 ]
          [ -0.32   0.6   -1 ]                    [  0.164  -0.32   0.6 ]
          [  2.1   -2      1 ]                    [ -2.11    2.1   -2   ]

H_{3×1} = [ -2.11  -0.32  2.1 ]^T,    H_{2×3} = [ -2.11   2.1   -2 ]
                                                [ -0.32   0.6   -1 ]

Hence, the 3rd order reduced model is obtained and given as

211

Ẋ_3 = [ 0        1        0      ] X_3 + [ -2.11 ] U,   Y = [   0        111       110      ] X_3
      [ 1.3816  -1.8750   1.3882 ]       [ -0.32 ]          [ 169.7667   -6.5918   169.8087 ]
      [ 0      -11       10      ]       [  2.1  ]

Similarly, the 2nd and 1st order reduced models are obtained and are given as follows:

Ẋ_2 = [ -1.0438   0.3199 ] X_2 + [ 2.1 ] U,   Y = [ 1.0679  -0.7916 ] X_2
      [ -0.0673  -1.4310 ]       [ 0.6 ]          [ 0.1666   2.0262 ]

and

Ẋ_1 = -0.9524 X_1 - 2 U,   Y = [ -0.9524  -0.4762 ]^T X_1

The step response of the 2nd order reduced model is compared with the original system and shown in the Fig. 3.

Fig. 3 Step response comparison of R_2(s) and G(s)


IV CONCLUSIONS

Two modified order-reduction methods for linear dynamic SISO and MIMO systems have been proposed in this paper. The proposed methods are conceptually simple, computationally easy and efficient, although the minimal realization may sometimes generate an unstable reduced model. The algorithms have been tested on two numerical examples; the reduced-order models are compared using the performance indices ISE and RISE, which are tabulated, and their unit step responses are compared in the figures.

REFERENCES
[1] Ho B. L. and Kalman R. E., "Effective construction of linear state variable models from input output data," Proc. 3rd Inter. Conf. Circuits and Systems, pp. 449-459, 1965.
[2] Tether A. J., "Construction of minimal linear state variable models from finite input output data," IEEE Trans. Automatic Control, Vol. AC-15, pp. 427-436, 1970.
[3] Therapos C. P., "Balanced minimal realization of SISO systems," Electronics Letters, Vol. 19, No. 11, pp. 424-426, 1983.
[4] Rosza P. and Sinha N. K., "Efficient algorithm for irreducible realization of a rational matrix," Int. Journal of Control, Vol. 20, pp. 739-751, 1974.
[5] Shamash Y., "Model reduction using minimal realization algorithms," Electronics Letters, Vol. 11, No. 16, pp. 385-387, 1975.
[6] Parthasarathy R. and Singh H., "Minimal realization of a symmetric transfer function matrix using Markov parameters and moments," Electronics Letters, Vol. 11, No. 15, pp. 324-326, 1975.
[7] Hickin J. and Sinha N. K., "New method of obtaining reduced order models for linear multivariable systems," Electronics Letters, Vol. 12, pp. 90-92, 1976.
[8] Shrikhande V. L., Harpreet Singh and Ray L. M., "On minimal realization of transfer function matrices using Markov parameters and moments," Proc. IEEE, Vol. 65, No. 12, pp. 1717-1719, 1978.
[9] Silverman L. M., "Realization of linear dynamical systems," IEEE Trans. Automatic Control, Vol. 16, pp. 554-567, 1971.
[10] Shamash Y., "Minimal realization of differential systems," Int. Conf. Systems and Control, PSG College of Technology, Coimbatore, India, 1973.
[11] Singh V., Chandra D. and Kar H., "Improved Routh-Pade approximants: a computer aided approach," IEEE Trans. Automatic Control, Vol. 49, No. 2, pp. 292-296, 2004.
[12] Lucas T. N., "Further discussion on impulse energy approximation," IEEE Trans. Automatic Control, Vol. 32, No. 2, pp. 189-190, 1987.
[13] Lucas T. N., "Linear system reduction by the modified factor division method," Proc. IEE, Pt. D, Vol. 133, pp. 293-296, 1986.
[14] Shamash Y., "Model reduction using minimal realization algorithms," Electronics Letters, Vol. 11, No. 16, pp. 385-387, 1975.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

A Fuzzy Logic-based Multicast Model for Regulation of Power Distribution


A. Q. Ansari Department of Electrical Engineering Jamia Millia Islamia, New Delhi, India
aqansari@ieee.org
Abstract--Multimedia streaming applications consume a significant amount of server and network resources. The network layer in the protocol stack is concerned with routing of the data in an efficient manner with minimal duplication of data to the various receivers. The data transmitted needs to be transferred from the sender(s) to the receivers. The sender(s) and receivers are mostly end-hosts. Intermediate nodes are the routers, which route/direct the data from the sender(s) to the receivers. A spanning tree has been considered one of the most efficient and viable mechanisms to perform the data transmission in such a scenario. The Electrical Distribution System networks are akin to data communication tree network in many ways. By regulating at each distribution station (node/ router) for optimal loading of the distribution links, an efficient and trouble free distribution system is made possible as is done by the multicast routing of data traffic thus allowing the saving on network bandwidth. Many multicasting algorithms have been proposed over the years. On similar lines, using multicasting as a model, we propose here a fuzzy logic-based algorithm which allows for intervention-free readjustment of electrical loads even before a breakdown takes place. This can also be used to relieve some links for scheduled preventive maintenance by shifting load by preplanning the load distributions without a shutdown, thus providing a better quality of service. Keywords: Multicast; Routing; Electrical Power Distribution; Fuzzy Logic Algorithm, Load Balancing; Parallel & Distributed Processing

V. K. Nangia* Department of Computer Sc. & Engg. Maharaja Surajmal Inst of Technology New Delhi, India
hodcse@msit.in
*Corresponding Author

I. INTRODUCTION

There are many similarities between tree-structured data communication networks, with their many routers and links, and electrical distribution networks with central control centres, high-voltage feeders, substations, control rooms and secondary links down to the low-voltage consumer ends. The savings in bandwidth and in the loading of communication links achieved by router-level multicasting techniques can also be applied to electrical distribution systems. A fuzzy logic-based method is proposed for this purpose which can anticipate load changes and indicate redistribution of the load onto alternative links without human intervention, thus preventing supply breakdowns.

A. Multicast Communication

Data communication in the Internet can be performed by the unicast, broadcast, anycast or multicast mechanism. Unicast is a point-to-point communication mechanism; broadcast forwards data to all the hosts in the network; anycast transmits data to any one member selected from a group; and multicast transfers data to only a group of hosts on a network. In the age of multimedia and high-speed networks, multicast is one of the viable mechanisms by which the power of the Internet can be further harnessed in an efficient manner [1]. Although unicasting is the traditional method of data transport on the Internet, when more than one receiver is interested in receiving a transmission from a single sender or a set of senders, multicast is the most efficient and viable mechanism. In the protocol stack, multicast is best implemented in the network layer in the form of a multicast routing protocol that selects the best path for the transmission.

Multimedia streaming applications can consume a significant amount of server and network resources. Periodic broadcast and patching are two approaches that use multicast transmission and client buffering in innovative ways to reduce server and network load, while at the same time allowing asynchronous access to multimedia streams by a large number of clients. A number of techniques have been proposed to implement multicast in the Internet and intranets, each with its pros and cons and its suitability for a given multicast scenario. Additional features can be implemented at other layers of the protocol stack, such as reliability in the transport layer, intranet multicast in the data-link layer, and session information and log maintenance in the application layer, but multicast is most efficiently implemented and handled at the network layer. Current research in this area has focused primarily on the algorithmic aspects of these approaches, with evaluation performed via analysis or simulation [2]. In network-layer multicasting, routing protocols maintain the state information and use the routing algorithm to select the most appropriate route; features such as reliability and group management are added at layers other than the network layer. Multicast provides efficient communication and transmission, optimizes performance and enables truly distributed applications. Copies of a message are made only when paths diverge at a router, that is, when the message is to be


transferred to another route in the path to the receiver or when a receiver is attached to the router. The optimal multicast path is computed as a tree or a group of trees. The quality of the tree is determined by low delay, low cost and light traffic concentration. The first effort at quantifying the cost advantage in using multicast was by Chuang and Sirbu [2]. It focuses on link cost such as bandwidth quantification and ignores node cost such as routing table memory, CPU usage etc. The data transmitted needs to be transferred from the sender(s) to the receivers. The sender(s) and receivers are mostly end-hosts. Intermediate nodes are the routers, which route/direct the data from the sender(s) to the receivers. A spanning tree has been considered one of the most efficient and viable mechanisms to perform the data transmission in such a scenario, since it minimizes duplication of packets in the network. Messages are duplicated only when the tree branches and this ensures data communication is loop-free. An efficient multicast routing algorithm will aim to build a Minimal Spanning Tree (MST). Different type of trees such as a source tree, a shared tree etc. are used depending on whether receivers are sparsely or densely distributed throughout the network; the number of receivers does not matter. The receivers might have a set of requirements like the cost or a given amount of delay that it can tolerate in the receipt of data.

Figure 3. Different levels of electrical grids

B. Electrical Power Distribution Systems


Electrical infrastructure consists of four main parts (Fig. 2):
1. Generation: a prime mover, typically the force of water, steam, or hot gasses on a turbine, spins an electromagnet, generating large amounts of electrical current at a generating station.
2. Transmission: the current is sent at very high voltage (hundreds of thousands of volts) from the generating station to substations closer to the customers, over long distances of hundreds of kilometers.
3. Primary Distribution: electricity is sent at mid-level voltage (tens of thousands of volts) from substations to local transformers over cables called feeders, usually 10-20 km long and with a few tens of transformers per feeder. Feeders are composed of many feeder sections connected by joints and splices.
4. Secondary Distribution: electricity is sent at normal household voltages from local transformers to individual customers within a few kilometers' radius.
The distribution grid of a large city is organized into networks, each composed of a substation, its attached primary feeders, and a secondary grid (Fig. 3). The networks are largely electrically isolated from each other to limit the cascading of problems. The feeders of the primary grid are critical and have a significant failure rate (mean time between failures of less than 400 days) [3,4], and thus

much of the daily work of the field workforce involves the monitoring and maintenance of primary feeders, as well as their speedy repair on failure. Substations reduce the voltage to 22 kV or less, and underground primary distribution feeders then locally distribute the electricity to distribution transformers. From there the secondary network, operating at 220 V/440 V, delivers electricity to customers. The distribution network effectively forms a 3-edge-connected graph; in other words, any two components can fail without disrupting the delivery of electricity to customers in a network. Most feeder failures result in automatic isolation, and many occur in summer, when power use for air conditioning adds to the load. When such a disruption occurs, the load that had been carried by the failed feeder must shift to adjacent feeders, further stressing them and putting networks, control centers and field crews under considerable stress, especially during the summer, at an enormous annual cost in Operations and Maintenance (O&M) expenses [5]. We need to study the power distribution multiplexing infrastructure from the parallel and distributed processing perspective to address issues such as Quality of Service (QoS). We examine, with a broader outlook, the scope of parallel and distributed processing in Section II. The rest of the paper is organized as follows: Section III introduces the fuzzy logic approach to load balancing in the power distribution infrastructure; Section IV includes conclusions and suggestions for future work.

II PARALLEL AND DISTRIBUTED PROCESSING CONSIDERATIONS IN POWER DISTRIBUTION ENVIRONMENT


A typical small scale power distribution infrastructure may be operational in a city or in multiple cities. In the past, power distribution infrastructures were examined more as standalone infrastructural local units for their performance. However, with the next generation corporate practices, the need to apply simulation and modeling studies for performance analysis using parallel and distributed computing techniques is on the rise for multiplexing of power distribution infrastructures. Such power distribution multiplexing need not be restricted to a single country, but spread over multiple countries also. In this context, the uncertainty in the global state is fuzzy in nature, since the actual global state in a power distribution multiplexed environment cannot be measured. A distributed application in power distribution infrastructure consists of many components that get executed on one or multiple nodes. A proper scheduling needs to be undertaken to ensure uniform

Figure 2. Electrical Power Distribution system


loading of the links. Simulation and modeling techniques have to assess the effectiveness of the load balancing algorithms from dynamic behavior and for scalable enhanced performance. Scalability indicates the successful functioning of an algorithm that is independent of physical topology as well as size of system links [6]. Software methodologies supporting Parallel Virtual Machine (PVM), Message Passing Interface (MPI) and Distributed Shared Memory (DSM), are being widely used in parallel computing domains to treat heterogeneous network of computers as a parallel machine [7,8].

III APPLICATION OF FUZZY LOGIC IN ELECTRICAL LOAD BALANCING FOR POWER DISTRIBUTION INFRASTRUCTURE

There have been a number of efforts to improve the efficiency of complex systems by having a computer interpret a stream of sensor data. However, these systems generally use human-constructed expert or rule-based systems [6,9,10]. In contrast, we have opted for a machine learning system that learns its models entirely from data and hence does not require frequent human intervention. Load balancing decisions in a power distribution environment need to be examined from two separate angles, viz. the global perspective (advocating a master or central control) and the local viewpoint. While both options can be supported (as summarized in Table 1), it is suggested that power distribution infrastructure operators be given the choice of selection, either as a static practice or as an organizational culture where no fixed bias is shown for any decision methodology. In fact, it is advisable to treat it as a matter of negotiation between the agency awarding the power distribution contract and the organization generating power.

TABLE 1. LOAD BALANCING DECISION IN A POWER DISTRIBUTION ENVIRONMENT

Particulars    Local                                               Global
Control        Exchange control data locally with neighbors;       Master-oriented approach where loads are assessed
               built-in non-periodicity characteristic of the      for ideal distribution; periodic; characteristics
               algorithm; small control                            of master
Performance    After each load balancing phase there is no         Re-mapping involves large data movement; poorly
               optimal global mapping; re-mapping requires less    tailored for scalability due to central control
               data movement; scalable to a larger population
               of processors

A. Simple Fuzzy Control Technique

Out of the four parameters indicated above for power distribution load balancing, we discuss the network traffic flow through a simple example of controlling the flow of packets on a communication link. One of the important considerations in a Quality of Service (QoS) algorithm is to support a consistent amount of link bandwidth; the rate at which packets are released should be adjusted depending upon the link utilization. The following fuzzy rule can solve the problem [10]:

Fuzzy Rule: If the application utilization of the link is HIGH (LOW), then REDUCE (INCREASE) the rate of flow of packets through the gate for the application.

In this example we have two fuzzy linguistic terms, HIGH and LOW, for the single fuzzy variable Link Utilization [12]. The membership function for the link utilization (network traffic) is shown in Figure 4. Mathematically, the two membership functions are

LOW  = 1                  if rate < 80
     = (100 - rate)/20    if 80 <= rate <= 100
     = 0                  if rate > 100

HIGH = 0                  if rate < 100
     = (rate - 100)/20    if 100 <= rate <= 120
     = 1                  if rate > 120

Since the base rate is to be increased (i.e. a summation) when the utilization is LOW and reduced (i.e. a subtraction) when the utilization is HIGH, the new rate is obtained from the existing rate as

rate = rate + (LOW x delta) - (HIGH x delta)

where delta is a tunable parameter (delta = 10 assumed). To exercise the fuzzy control, assume that at a particular time the rate of packets is 110; the above computation then gives an updated rate of 105. When the existing rate is 105 the updated rate is 102.5, and if the rate is 80 the updated rate is 90.
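A minimal Python sketch of this rate-control rule is given below; it reproduces the worked numbers above. The function names are ours.

```python
def low_membership(rate):
    """LOW membership for link utilization, as given in the text."""
    if rate < 80:
        return 1.0
    if rate <= 100:
        return (100.0 - rate) / 20.0
    return 0.0

def high_membership(rate):
    """HIGH membership for link utilization, as given in the text."""
    if rate < 100:
        return 0.0
    if rate <= 120:
        return (rate - 100.0) / 20.0
    return 1.0

def update_rate(rate, delta=10.0):
    """rate <- rate + LOW*delta - HIGH*delta (delta is the tunable parameter, 10 assumed)."""
    return rate + low_membership(rate) * delta - high_membership(rate) * delta

for r in (110, 105, 80):
    print(r, "->", update_rate(r))   # 105, 102.5 and 90, as in the worked example
```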

B. Fuzzy Load Balancing


In principle, the load balancing nodes attempt to organize the total cluster system load by transferring (or commencing) processes on the idle or lightly loaded links, in preference to heavily loaded ones. In order to overcome the limitations of the static load balancing approach, the dynamic load balancing uses the current system state information to improve performance in the


heterogeneous applications prevailing in a power distribution multiplexing infrastructure. Heuristic load balancing is an accepted methodology [11]. The computation of the load of a particular link requires processing of fuzzy rules to infer the load, using fuzzy-set representations for each of the four parameters in the Load Information Vector. The modeling methodology is thus a powerful mechanism that allows individual nodes (control centres) to incorporate flexible decisions; the fuzzy sets can be along the lines of the representation given in Section A for packet flow and network traffic. A load-sharing algorithm partitions a system into domains consisting of sets of nodes and links. The load state of a link is represented as a fuzzy term using the fuzzy linguistic variables IDLE, LOW, NORMAL and HIGH, and a similarity relation is used at node computation to decide which other nodes to include in a domain depending on the load state of the link. We use a mix of triangular and trapezoidal representations. A typical fuzzy rule set is of the following form:

Rule 1: If the load utilization of the link is VERY LOW, then the link Load is IDLE.
Rule 2: If the load utilization is LOW AND the Network Traffic is LOW, then the link Load is LOW.
Rule 3: If the link utilization is MEDIUM AND the Network Traffic is MEDIUM, then the link Load is NORMAL.
Rule 4: If the link utilization is VERY HIGH OR the Network Traffic is HIGH, then the link Load is HIGH.

While there is no unique method for describing the fuzzy membership functions of the individual parameters, triangular (and/or mixed triangular and trapezoidal) representations are preferred over S- and pi-type curves, especially from the perspective of computational simplicity [9]. We use standard hedges like "very" and "more-or-less" as concentration (squaring) and dilatation (square-rooting) operations [12,13,14].
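As a sketch of how Rules 1-4 might be evaluated (min for AND, max for OR), consider the following; the membership values passed in are assumed to come from a separate fuzzifier, which is not shown, and the numbers in the example are made up.

```python
def link_load_state(utilization, traffic):
    """Evaluate Rules 1-4 on fuzzified inputs (dicts of term -> membership degree)."""
    degree = {
        "IDLE":   utilization.get("VERY_LOW", 0.0),                                  # Rule 1
        "LOW":    min(utilization.get("LOW", 0.0), traffic.get("LOW", 0.0)),         # Rule 2
        "NORMAL": min(utilization.get("MEDIUM", 0.0), traffic.get("MEDIUM", 0.0)),   # Rule 3
        "HIGH":   max(utilization.get("VERY_HIGH", 0.0), traffic.get("HIGH", 0.0)),  # Rule 4
    }
    return max(degree, key=degree.get), degree

state, _ = link_load_state({"LOW": 0.2, "MEDIUM": 0.7}, {"MEDIUM": 0.6, "HIGH": 0.1})
print(state)  # NORMAL
```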

IV. CONCLUSIONS AND FUTURE WORK

This paper has focused on the application of fuzzy logic to load balancing. In our analysis we have deployed a Type-1 fuzzy set representation, where the degree of membership (i.e. belongingness) is indicated by a crisp number in the range 0 to 1. A number of researchers have shown that in certain applications a Type-2 representation can outperform Type-1. Type-2 fuzzy sets are recommended for situations where (i) the data-generating system is known to be time varying but the mathematical description of the time variability is unknown, and (ii) linguistic terms are used that have a non-measurable domain. It will be interesting to explore parallel and distributed computing applied to the power distribution environment using a Type-2 fuzzy set representation.

ACKNOWLEDGEMENTS

The encouragement and guidance received from Dr. Arun B. Patki, Senior Director & Scientist G of DIT, Min. of Comm. & IT, GOI, in pursuing specialization in multicasting is highly appreciated. The suggestions and the feedback on the initial manuscript received from Dr. Vinay Nangia, Univ. of Guelph, Canada, have helped in improving the depth and coverage of this paper. And, lastly, the facilities and conducive environment provided by MSIT cannot be appreciated enough.

REFERENCES
[1] X. Defago, A. Schiper and P. Urban, "Total Order Broadcast and Multicast Algorithms: Taxonomy and Survey," ACM Computing Surveys, Vol. 36, No. 4, pp. 372-421, Dec. 2004.
[2] M. Hosseini, D. T. Ahmed, S. Shirmohammadi and N. D. Georganas, "A Survey of Application-Layer Multicast Protocols," IEEE Communications Surveys & Tutorials, Vol. 9, No. 3, 3rd Quarter 2007.
[3] J. Bialek, "Tracing the Flow of Electricity," IEE Proc.-Gener. Transm. Distrib., Vol. 143, No. 4, July 1996.
[4] P. Gross, A. Boulanger, M. Arias, D. Waltz, P. M. Long, C. Lawson and R. Anderson, "Predicting Electricity Distribution Feeder Failures Using Machine Learning Susceptibility Analysis," Proc. 18th Conference on Innovative Applications in Artificial Intelligence, 2006.
[5] M. Chetty and R. Buyya, "Weaving Computational Grids: How Analogous Are They with Electrical Grids?," Grid Computing, IEEE, pp. 61-71, Aug. 2002.
[6] O. Kremien, J. Kramer and J. Magee, "Scalable, Adaptive Load Sharing for Distributed Systems," IEEE Parallel and Distributed Technology: Systems and Applications, 1(3), pp. 62-70, 1993.
[7] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek and V. Sunderam, PVM: Parallel Virtual Machine - A User's Guide and Tutorial for Networked Parallel Computing, MIT Press, 1994.
[8] W. Gropp, E. Lusk and A. Skjellum, Using MPI, MIT Press, 1994.
[9] S. Hongjie, F. Binxing and Z. Hongli, "A Distributed Architecture for Network Performance Measurement and Evaluation System," Proc. Sixth International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'05), 2005.
[10] G. Klir and T. Folger, Fuzzy Sets, Uncertainty and Information, Prentice Hall, January 1988.
[11] T. Kunz, "The Influence of Different Workload Descriptions on a Heuristic Load Balancing Scheme," IEEE Transactions on Software Engineering, 17(7), pp. 725-730, 1991.
[12] L. A. Zadeh, "Fuzzy Sets," Information and Control, (8), pp. 338-353, 1965.
[13] L. A. Zadeh, "A Fuzzy-Algorithmic Approach to the Definition of Complex or Imprecise Concepts," International Journal of Man-Machine Studies, (8), pp. 249-291, 1976.
[14] T. Patki and A. B. Patki, "Innovative Technological Paradigms for Corporate Offshoring," Journal of Electronic Commerce in Organizations, Vol. 5, Issue 2, pp. 57-76, April-June 2007.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Model Order Reduction by Optimal Frequency Matching using Particle Swarm Optimization
P. Sarkar and M. Rit
Department of Electrical NITTTR, Kolkata Block FC, Sector III, Salt Lake City, Kolkata 700 106, e-mail: sarkarprasant@yahoo.com , mrinmoyrt@yahoo.co.in

J. Pal
Department of Electrical IIT, Kharagpur Kharagpur 721 302 e-mail:jpal@ee.iitkgp.ernet.in

Abstract—In this paper reduced-order modeling of higher-order systems is considered. The reduced model is developed by matching a number of frequency points in the low-frequency range, which is a variant of moment matching. These frequency points are chosen optimally by minimizing a scalar performance index (PI) developed between the norms of the higher-order model and the reduced-order model. Particle Swarm Optimization (PSO) is used to minimize this PI and hence to select the optimal frequency points (OFP), which are then utilized to find the parameters of the reduced model using the least-squares technique. A couple of examples are presented to show the efficacy of the proposed method. Keywords—Higher Order Model, Model Order Reduction, Optimal Frequency Matching, Particle Swarm Optimization, Performance Index, Optimal Frequency Points, Reduced Order Model.

I. INTRODUCTION

Reduced-order modeling is an important area in system theory. Its methodology is to reduce a large-order model to a small-order model that is sufficient to facilitate analysis and control-system design while retaining the prominent characteristics of the higher-order model in its reduced-order counterpart. A large volume of work has been reported over the past few decades addressing the problem of reduced-order modeling [1-4]. Two major stems can be distinguished among the several methodologies: (i) time-domain balanced and Hankel-norm methods [5-6], and (ii) complex-domain moment-matching methods [7-10]. Balanced and Hankel-norm approximation methods are built upon a family of ideas with very close connection to the singular value decomposition; they preserve asymptotic stability and allow for global error bounds, but they do not scale well in terms of computational efficiency and numerical stability when applied to large-scale problems, as they rely upon dense matrix computations. Moment-matching methods are based principally on Pade-like approximations and, for large-scale problems, have led naturally to the use of Krylov and rational Krylov subspace projection methods; these methods generally enjoy greater efficiency and numerical stability, though maintaining asymptotic stability in the reduced-order model cannot be guaranteed and may therefore be problematic at times, and no global error bounds exist.

With the above background, the work proposed in this paper is based on a variant of the moment-matching philosophy in which an error bound is imposed using Particle Swarm Optimization (PSO). PSO was introduced by Kennedy and Eberhart in 1995 [11-16] and is motivated by the social behavior of organisms such as birds flocking and fish schooling: each particle studies its own previous best solution to the optimization problem and its group's previous best, and then adjusts its position accordingly; the optimal value is found by repeating this process.

The paper is organized as follows. Section I gives the introduction. In Section II the problem of reduced-order modeling using approximate frequency fitting is presented. A brief introduction to Particle Swarm Optimization is given in Section III, while in Section IV the algorithm for optimal frequency matching is introduced. In Section V a couple of higher-order models are taken up to show how close the reduced models are to the higher-order systems. Finally, Section VI gives the conclusions.

II. REDUCED ORDER MODELING USING APPROXIMATE FREQUENCY FITTING

Let the transfer function of the single-input single-output (SISO) higher-order model (HOM) be represented by

G(s) = N(s)/D(s) = K (1 + b_1 s + b_2 s^2 + ... + b_m s^m) / (1 + a_1 s + a_2 s^2 + ... + a_n s^n)      (1)

where m <= n and it is assumed that G(s) is irreducible. Let the transfer function of the reduced-order model (ROM) be represented by

R(s) = N_r(s)/D_r(s) = K_r (1 + β_1 s + β_2 s^2 + ... + β_p s^p) / (1 + α_1 s + α_2 s^2 + ... + α_q s^q)      (2)

where q < n and p < q. The order of the reduced model is assumed to be q; this necessitates the computation of at least 2q-1 free parameters of the reduced-order model. In the approximate frequency fitting, the frequency response of the HOM and that of the ROM are approximately matched, such that

G(s)|_{s=jω} ≈ R(s)|_{s=jω}      (3)

Writing ROM in (2) in its structural form and equating real and imaginary part, we obtain the following expressions for (i) Real part


Σ_{i=1}^{q-1} β_i R_i(ω) + Σ_{i=1}^{q} α_i S_i(ω) = T(ω)      (4)

(ii) Imaginary part

Σ_{i=1}^{q-1} β_i U_i(ω) + Σ_{i=1}^{q} α_i V_i(ω) = W(ω)      (5)

The left-hand sides of equations (4) and (5) are real functions of ω with unknown coefficients β_i and α_i. These may be written in matrix form as

A x = b      (6)

where A is a matrix of dimension 2N x (2q-1), evaluated at the chosen frequency points, given by

A = [ R_{1,1}    R_{1,2}    ...  R_{1,q-1}    S_{1,1}    ...  S_{1,q}
      U_{1,1}    U_{1,2}    ...  U_{1,q-1}    V_{1,1}    ...  V_{1,q}
      ...
      R_{i,1}    R_{i,2}    ...  R_{i,q-1}    S_{i,1}    ...  S_{i,q}
      U_{i,1}    U_{i,2}    ...  U_{i,q-1}    V_{i,1}    ...  V_{i,q}
      ...
      R_{q+1,1}  R_{q+1,2}  ...  R_{q+1,q-1}  S_{q+1,1}  ...  S_{q+1,q}
      U_{q+1,1}  U_{q+1,2}  ...  U_{q+1,q-1}  V_{q+1,1}  ...  V_{q+1,q} ]

x = [ β_1, β_2, ..., β_i, ..., β_{q-1}, α_1, α_2, ..., α_k, ..., α_q ]^T

b = [ T_1, T_2, ..., T_i, ..., T_{q+1}, W_1, W_2, ..., W_k, ..., W_{q+1} ]^T

Finally, the parameter vector x is computed as

x = (A^T A)^{-1} A^T b

It may be seen that the parameter estimate of the reduced model depends largely on the choice of the frequency points ω, and normally trial-and-error methods are resorted to for this selection. This problem is addressed by the proposed optimal frequency matching using PSO, discussed in the next section.

III. PARTICLE SWARM OPTIMIZATION

In PSO, the particles are placed in the search space of some problem or function, and each particle evaluates the objective function at its current location. The movement of a particle is based on the effect of two elastic forces: an attraction of random magnitude to the fittest location found by the particle itself, and an attraction of random magnitude to the best location found by the swarm. In PSO, individuals are called particles and the population is called a swarm. Each individual is composed of three D-dimensional vectors, where D is the dimension of the search space: the current position X_i, the previous best position P_i and the velocity V_i. The PSO algorithm is simple and easy to implement; the procedure for implementing PSO is as follows.

1. In the d-dimensional search space, the i-th particle of the swarm is represented by the vector X_i = (x_i1, x_i2, x_i3, ..., x_id).
2. The velocity of the particle is V_i = (v_i1, v_i2, v_i3, ..., v_id), where d is the dimension of the search space.
3. For each particle, evaluate the fitness function f(X_i) in the d variables.
4. Initialise the best visited position of the particle, P_i^best = (p_i1, p_i2, p_i3, ..., p_id), and compare the fitness evaluation with it: if f(X_i) < f(P_i^best), then set f(P_i^best) = f(X_i) and P_i^best = X_i.
5. Initialise the global best position P_g^best = (p_g1, p_g2, p_g3, ..., p_gd) and identify the particle in the neighbourhood with the best success so far: if f(X_i) < f(P_g^best), then set f(P_g^best) = f(X_i) and P_g^best = X_i.
6. The velocity and position of the particle are updated by the following equations:

V_i(t+1) = w * V_i(t) + c_1 * R_1 * (P_i^best - X_i) + c_2 * R_2 * (P_g^best - X_i)      (7)

X_i(t+1) = X_i(t) + V_i(t+1)      (8)

where c_1 and c_2 are positive constants known as acceleration coefficients, and R_1 and R_2 are two uniformly distributed random variables.
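A compact sketch of one PSO iteration implementing the updates (7)-(8), with the velocity clamped to a maximum magnitude, is given below. Minimization of the performance index is assumed; particle initialization, the inertia-weight schedule and the stopping test are left out, and all names are ours.

```python
import random

def pso_step(positions, velocities, pbest, gbest, fitness, w, c1=2.0, c2=2.0, vmax=100.0):
    """One PSO iteration: velocity update (7), position update (8), best-position bookkeeping."""
    for i, x in enumerate(positions):
        r1, r2 = random.random(), random.random()
        velocities[i] = [max(-vmax, min(vmax,
                             w * v + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)))
                         for v, xi, pb, gb in zip(velocities[i], x, pbest[i], gbest)]
        positions[i] = [xi + v for xi, v in zip(x, velocities[i])]
        if fitness(positions[i]) < fitness(pbest[i]):
            pbest[i] = list(positions[i])
            if fitness(pbest[i]) < fitness(gbest):
                gbest[:] = pbest[i]
    return positions, velocities, pbest, gbest
```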

The velocity component of the particle is limited to a range:

V_i(t) = V_min if V_i(t) < V_min,   V_i(t) = V_max if V_i(t) > V_max

The inertia weight w in (7) is decreased linearly with time as per the following equation:

w = (w_start - w_end) * (MAXITER - t) / MAXITER + w_end      (9)

where w_start and w_end are the initial and final values respectively. A large inertia weight favours exploration by the particles, while a small inertia weight favours exploitation; t is the current iteration number and MAXITER is the maximum number of allowed iterations.

Flow chart of PSO: initialize the location and velocity of the swarm -> examine the fitness of the swarm -> update the velocity of the swarm -> update the position of the swarm -> if the present value is better than Pbest, set Pbest = present -> if the present value is better than Gbest, set Gbest = present -> if iteration = Maxiteration, output Gbest, otherwise repeat.

IV. ALGORITHM FOR OPTIMAL FREQUENCY MATCHING

Determination of the optimal frequency points by the particle swarm optimization technique:

Step 1: Select the HOM using (1).
Step 2: Assume the order of the ROM using (2).
Step 3: Invoke PSO to perform:
  1. Initialization: the positions of the particles are initialized in a search space, together with the parameter values of (7) such as the inertia weight w and the non-negative constants c_1 and c_2; the positions and initial velocities of the particles are initialized randomly.
  2. Evaluation: evaluate the fitness function for each particle and save it in vector form.
  3. Floating: (a) initialize the best-positions matrix and the corresponding function values; (b) find the best particle in the initial population; (c) define social neighbourhoods for all the particles.
  4. Selection and update of the particles: (a) update the inertia weight of all particles by (9); (b) update the velocity and position of all particles by (7) and (8) respectively; (c) update the best position of each particle; (d) update the global best position.
  5. Repeat sub-steps 2 to 4 until the maximum number of iterations is reached.
Step 4: To find the matrices A, x and b, determine the values of R_i(ω), U_i(ω), S_i(ω), V_i(ω), T(ω) and W(ω).
Step 5: Check the polynomial 1 + Σ_{i=1}^{q} α_i s^i for stability.
Step 6: If it is stable, stop; otherwise go to Step 3 after adjusting the search space.
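The least-squares frequency-fitting step (Step 4 together with equation (6)) can be sketched as below. Note that this sketch fits a slightly different parameterisation (numerator coefficients b_0 ... b_{q-1} with the denominator constant fixed to 1) rather than the authors' K_r-scaled form, so it is illustrative only.

```python
import numpy as np

def fit_reduced_model(omegas, G_vals, q):
    """Levy-style least-squares fit of R(jw) to samples G(jw) at given frequency points."""
    rows, rhs = [], []
    for w, g in zip(omegas, G_vals):
        s = 1j * w
        num_cols = [s ** i for i in range(q)]               # multiplies b_0 .. b_{q-1}
        den_cols = [-g * s ** i for i in range(1, q + 1)]   # multiplies a_1 .. a_q
        row = num_cols + den_cols
        rows.extend([[c.real for c in row], [c.imag for c in row]])
        rhs.extend([g.real, g.imag])
    A, b = np.array(rows), np.array(rhs)
    x = np.linalg.solve(A.T @ A, A.T @ b)                   # x = (A^T A)^-1 A^T b, as in (6)
    return x[:q], np.concatenate(([1.0], x[q:]))            # numerator, denominator coefficients
```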

V. SIMULATION STUDY AND RESULTS

Example 1: The transfer function of the higher order system is

$G(s) = \dfrac{35s^7 + 1086s^6 + 13285s^5 + 82402s^4 + 278376s^3 + 511812s^2 + 482964s + 194480}{s^8 + 33s^7 + 437s^6 + 3017s^5 + 11870s^4 + 27470s^3 + 37492s^2 + 28880s + 9600}$

This system was reduced to a 2nd order model using the following PSO settings: swarm size = 30, maximum velocity = 100, acceleration constant c1 = 2, acceleration constant c2 = 2, initial inertia weight (w_start) = 0.95, final inertia weight (w_end) = 0.3, maximum iterations = 500. The reduced order model of the system is

$R(s) = \dfrac{35.0241\,s + 798.824}{s^2 + 25.0051\,s + 39.4105}$

The root mean square error is 0.36224.

Fig. 1: Step response of the continuous-time high and reduced order systems.

Fig. 2: Nyquist plot of the continuous-time high and reduced order systems.

In Example 1, an 8th order model is considered and a 2nd order reduced model is obtained by minimizing the error bound between the HOM and the ROM through a cost function developed using PSO. The PSO is used to optimally select the frequency points by minimizing the cost function. The step responses of the HOM and the ROM show similar response characteristics.

Example 2: The transfer function of the higher order system is

$G(s) = \dfrac{-7.56s^{12} - 495.3312s^{11} - 14036.35s^{10} - 224441.7s^{9} - 2185157s^{8} - 12880660s^{7} - 40898330s^{6} - 22737720s^{5} + 340107400s^{4} + 1347239000s^{3} + 2248851000s^{2} + 1728803000s + 468169300}{s^{13} + 66.9s^{12} + 2079.212s^{11} + 42464.86s^{10} + 608685.1s^{9} + 6151530s^{8} + 43734380s^{7} + 218409600s^{6} + 761257600s^{5} + 1824391000s^{4} + 2936616000s^{3} + 3052796000s^{2} + 1871543000s + 511970900}$

This system was reduced to a 3rd order model using the following PSO settings: swarm size = 30, maximum velocity = 10, acceleration constant c1 = 2, acceleration constant c2 = 2, initial inertia weight (w_start) = 0.95, final inertia weight (w_end) = 0.3, maximum iterations = 400. The reduced order model of the system is

$R(s) = \dfrac{-7.4591\,s^2 - 90.8915\,s + 498.2321}{s^3 + 11.1029\,s^2 + 195.115\,s + 519.07}$

The root mean square error is 0.13755271752903.

Fig. 3: Step response of the continuous-time high and reduced order systems.

Fig. 4: Nyquist plot of the continuous-time high and reduced order systems.
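A quick numerical cross-check of the Example 1 reduction is sketched below: it compares the step responses of the 8th-order model and the reported 2nd-order model and prints an RMS error. This is not the authors' code, and the exact number depends on the time grid used, but it should be of the same order as the value quoted above.

```python
# Sketch: step-response comparison of HOM and ROM for Example 1.
import numpy as np
from scipy import signal

G = signal.TransferFunction(
    [35, 1086, 13285, 82402, 278376, 511812, 482964, 194480],
    [1, 33, 437, 3017, 11870, 27470, 37492, 28880, 9600])
R = signal.TransferFunction([35.0241, 798.824], [1, 25.0051, 39.4105])

t = np.linspace(0, 10, 2000)
_, y_hom = signal.step(G, T=t)
_, y_rom = signal.step(R, T=t)
print("RMS step-response error:", np.sqrt(np.mean((y_hom - y_rom) ** 2)))
```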

VI. CONCLUSIONS

The optimal frequency matching developed in this work is a variant of the traditional moment matching technique. The major drawback of moment matching is the non-existence of global error bounds. This problem is solved by introducing an error bound in the form of a scalar cost function between the norms of the HOM and of the ROM. The cost function is minimised to choose the optimal frequency points. These OFPs are then used to compute the parameters of the ROM using the least squares technique. Two examples are attempted to show the efficacy of the method developed. The time and frequency responses of the ROM are very close to those of its higher order counterpart. This highlights the usefulness of the method presented.

REFERENCES
[1]. Mahmoud M. S., and Singh M. G., Large Scale System Modeling and Control, Pergamon Press, Oxford, 1981.
[2]. Jamshidi M., Large Scale Systems: Modeling and Control, North Holland, New York, 1983.

[3]. Fortuna, L., Nunnari, G and Gallo, A., Model Order Reduction Techniques with Applications in Electrical Engg, Springer -Verlag, London, 1992. [4] Antoulas, A. Lectures on the approximation of large-scale dynamical systems. SIAM, 2004. [5] B C Moore Principal component analysis in linear systems, controllability, observability and model reduction, IEEE Trans. Autom. Control vol. 26, pp 17-32 [6]P Benner, E S Quintana-Orti and G Quintana-Orti Balanced Truncation Model Reduction of Large Scale Dense Systems on Parallel Computers Mathematical and Computer Modeling of Dynamical Systems, vol.6 (2000) , pp 383-405 [7] Lai M. and Mitra R., "Simplification of large system dynamics using a moment evaluation algorithm," IEEE Trans, on Automatic Control, vol. AC-19, no. 5, pp. 602-603, 1974. [8] Zakian V., "Simplification of linear time invarient systems by moment approximants," International Journal of Control, vol. 18, no. 3, pp. 455-460, 1973. [9] Shih Y.P. and Shieh C.S., "Model reduction of continuous and discrete multivariable systems by moment matching," Computer and Chemical Engineering., vol. 2, pp. 127-132, 1978. [10] Bosley M. J., Kropholler H. W. and Lees F.P., "On the relation between the continued fraction expansion and moments matching methods of model reduction," International Journal of Control, vol. 18, no. 3, pp. 461-474, 1973 [11]. J. Kennedy, and R. C. Eberhart, Particle swarm optimization, IEEE Int. Conf. on Neural Networks, IV, 1942-1948, Piscataway, NJ, 1995. [12].Maurice Clerc, Particle Swarm Optimization, ISTE Ltd, First South Asian Edition,2007. [13] P. J. Angeline, "Evolutionary Optimization versus Particle Swarm Optimization: Philosophy and Performance Differences", Evolutionary Programming VII (1998), Lecture Notes in Computer Science 1447, 601-610. Springer. [14] .J. Vesterstrm and J. Riget, "Particle Swarms Extensions for improved local, multi-modal, and dynamic search in numerical optimization." M.S. thesis, Dept. Computer Science, Univ Aarhus, Aarhus C, Denmark, 2002. [15] T. Krink, J. S. Vesterstrm and J. Riget, "Particle Swarm Optimization with Spatial Particle Extension", in Proc. 2002 Congress on Evolutionary Computation (CEC), Conf, 2002 IEEE World Congress on Computational Intelligence, Hawaiian, Honolulu, 2002. [16]. J. Kennedy, and R. C. Eberhart, Swarm intelligence, San Francisco: Morgan Kaufmann Publishers, 2001.



DVCC-Based Non-Linear Feedback Neural Circuit for Solving Quadratic Programming Problems
Mohd. Samar Ansari Department of Electronics Engineering Aligarh Muslim University Aligarh, India
email: mdsamar@gmail.com

Syed Atiqur Rahman Department of Electronics Engineering Aligarh Muslim University Aligarh, India
email: syed.atiq.amu@gmail.com

Abstract This paper presents a non-linear feedback neural network for solving quadratic programming problems. The objective is to minimize a quadratic cost function subject to linear constraints. The proposed circuit employs non-linear feedback, in the form of unipolar comparators realized using DVCCs and diodes, to introduce transcendental terms in the energy function ensuring fast convergence to the solution. PSPICE simulation results are presented for a chosen optimization problem and are found to agree with the algebraic solution. Keywords Neural Networks, Non Linear circuits, Feedback Neural Networks, Quadratic Programming, Dynamical Systems

I. INTRODUCTION

uadratic Programming problems are very important in the field of optimization. They arise in many applications such as constrained least mean square estimation and the classical newsvendor problem. Besides its wide applications, quadratic programming is also of theoretic meaning, because it forms a basis for solving some general nonlinear programming problems. In the discipline of constrained optimization, problems with nonlinear objective functions are usually approximated by a second-order (quadratic) system and solved approximately by a standard quadratic programming technique. Traditional methods for solving quadratic programming problems typically involve an iterative process, but long computational time limits their usage. There is an alternative approach to solution of this problem. It is to exploit the artificial neural networks (ANN's) which can be considered as an analog computer relying on a highly simplified model of neurons [1]. ANN's have been applied to several classes of constrained optimization problems and have shown promise for solving such problems more effectively. For example, the Hopfield neural network has proven to be a powerful tool for solving some of the optimization problems. Tank and Hopfield first proposed a neural network for solving mathematical programming problems, where a linear programming problem was mapped into a closed-loop network [1]. Later, the dynamical approach was extended for solving quadratic programming problems. Over the past two decades several neural-network architectures for solving quadratic programming problems have been proposed by Kennedy and Chua [2], Maa and Shanblatt [3], Wu et al. [4], Xia [5]. These networks depend on the network parameters or use expensive analog multipliers. More recently,

Malek and Alipour [6] proposed a recurrent neural network that is able to solve quadratic programming problems. This architecture is advantageous in the sense that there is no need to set network parameters. Moreover, the number of analog multipliers is reduced. This reduces the hardware complexity to a certain extent. However, the network still has a complex hardware in the sense that the main processing element (alpha) would be cumbersome to implement in hardware. In this paper, we present a hardware solution to the problem of solving a quadratic programming problem subject to linear constraints. The proposed architecture uses non-linear feedback which leads to a new energy function that involves transcendental terms. This transcendental energy function is fundamentally different from the standard quadratic form associated with Hopfield network and its variants. The hardware complexity of the proposed circuit compares favourably with the existing hardware implementations. The remainder of this paper is arranged as follows: SectionII outlines the basic problem and the mathematical formulation on which the development of the proposed network will be based. SectionIII contains the circuit implementation of the proposed network for a set of sample problem in two variables. SPICE simulation results of the proposed circuit are also presented. Issues that are expected to arise in actual monolithic implementations are discussed in SectionIV. Concluding remarks are presented in SectionV.

II. PROPOSED NEURAL NETWORK


The symbolic diagram of the DVCC is shown in Fig. 1. The device can be characterized by the following port relations:

$I_{Y1} = I_{Y2} = 0,\qquad V_X = V_{Y1} - V_{Y2},\qquad I_{Z+} = +I_X,\qquad I_{Z-} = -I_X$   (1)

Fig. 1 Symbolic Representation of DVCC

Although both Z+ and Z- types of current outputs are mentioned in (1), the DVCCs used in the proposed network use only Z+ type outputs. For the DVCC, comparator action can be achieved by putting $V_X = 0$, i.e. by directly grounding the X-terminal [7]. For such a case, the current in the X-port saturates and can be written as

$I_X = I_m \tanh\!\big[A\,(V_{Y1} - V_{Y2})\big]$   (2)

where $A$ is the open-loop gain of the comparator (practically very high) and $\pm I_m$ are the saturated output current levels of the comparator. Equation (2) can also be written in the equivalent notation

$I_X = g_m V_B \tanh\!\big[A\,(V_{Y1} - V_{Y2})\big]$   (3)

where $g_m$ is the transconductance from the input voltage ports (Y1 and Y2) to the output current port (X) and $\pm V_B$ are the biasing voltages of the DVCC-based comparator. By virtue of DVCC action, this current is transferred to the Z+ ports as

$I_{Z+} = s\,I_X = s\,I_m \tanh\!\big[A\,(V_{Y1} - V_{Y2})\big]$   (4)

where $s$ is the current scaling factor introduced during the current conveying process from the X-port to the Z+ port. Ideally, $s = 1$. However, by altering the aspect ratios of the transistors used in the output stages of the DVCC, $I_{Z+}$ can be made a scaled replica of $I_X$; in that case the value of $s$ will be different from unity. This property is utilized in the design of the multi-output DVCCs needed for the proposed circuit.
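The short sketch below simply evaluates the saturating comparator characteristic of (2) numerically; the gain and saturation current used are arbitrary placeholders, not device parameters from the paper.

```python
# Illustration of the DVCC-based comparator characteristic (2): the X-port
# current swings between the two saturation levels as V_Y1 - V_Y2 crosses zero.
import numpy as np

A_ol, I_sat = 1e4, 50e-6                      # assumed open-loop gain and saturation current
v_diff = np.linspace(-2e-3, 2e-3, 9)          # differential input V_Y1 - V_Y2 (V)
i_x = I_sat * np.tanh(A_ol * v_diff)          # comparator output current at port X
for v, i in zip(v_diff, i_x):
    print(f"V_Y1 - V_Y2 = {v:+.4f} V  ->  I_X = {i * 1e6:+.1f} uA")
```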

Let the function to be minimized be

$f(V_1, V_2) = a_{11} V_1^2 + a_{12} V_1 V_2 + a_{22} V_2^2$   (5)

subject to

$c_{11} V_1 + c_{12} V_2 \le b_1$   (6)

$c_{21} V_1 + c_{22} V_2 \le b_2$   (7)

where $V_1$ and $V_2$ are the variables and $a_{ij}$, $b_i$ and $c_{ij}$ are constants. The proposed neural-network based circuit to minimize (5) subject to (6, 7) is presented in Fig. 2.

As can be seen from Fig. 2, individual equations from the set of constraints are passed through non-linear synapses which are realized using DVCC-based unipolar comparators. The outputs of the comparators are fed to neurons having weighted inputs. These weighted neurons are realized by using opamps, where the scaled currents coming from the various comparators act as weights. Weighting is also provided by the self-feedback resistances (R11 and R22 for neurons 1 and 2 respectively) and the cross-feedback resistances (R21 and R12 for neurons 1 and 2 respectively). Rpi and Cpi are the input resistance and capacitance of the opamp corresponding to the i-th neuron; these parasitic components are included to model the dynamic nature of the opamp. The node equations at nodes A and B give the equations of motion of the first and second neurons as

$C_{p1}\dfrac{du_1}{dt} = I'_{11} + I'_{21} + \dfrac{V_1}{R_{11}} + \dfrac{V_2}{R_{21}} - \dfrac{u_1}{R_1}$   (8)

where $I'_{11}$ and $I'_{21}$ represent the currents $s_{11} I_{X1}$ and $s_{21} I_{X2}$ after passing through the diodes shown in Fig. 2, and

$\dfrac{1}{R_1} = \dfrac{1}{R_{p1}} + \dfrac{1}{R_{11}} + \dfrac{1}{R_{21}}$   (9)

and

$C_{p2}\dfrac{du_2}{dt} = I'_{12} + I'_{22} + \dfrac{V_1}{R_{12}} + \dfrac{V_2}{R_{22}} - \dfrac{u_2}{R_2}$   (10)

where $I'_{12}$ and $I'_{22}$ represent the currents $s_{12} I_{X1}$ and $s_{22} I_{X2}$ after passing through the diodes shown in Fig. 2, and

$\dfrac{1}{R_2} = \dfrac{1}{R_{p2}} + \dfrac{1}{R_{12}} + \dfrac{1}{R_{22}}$   (11)

In equations (8) and (10), $u_i$ is the internal state of the i-th neuron and $s_{ji}$ is the current scaling factor at the j-th output of the i-th DVCC. As is shown later in this section, these weights are governed by the constraint inequalities (6, 7). Using (6) and (7) in (8) and (10) results in (12) and (13) given below.
$C_{p1}\dfrac{du_1}{dt} = \dfrac{s_{11} I_m}{2}\big[\tanh(c_{11}V_1 + c_{12}V_2 - b_1) + 1\big] + \dfrac{s_{21} I_m}{2}\big[\tanh(c_{21}V_1 + c_{22}V_2 - b_2) + 1\big] + \dfrac{V_1}{R_{11}} + \dfrac{V_2}{R_{21}} - \dfrac{u_1}{R_1}$   (12)

$C_{p2}\dfrac{du_2}{dt} = \dfrac{s_{12} I_m}{2}\big[\tanh(c_{11}V_1 + c_{12}V_2 - b_1) + 1\big] + \dfrac{s_{22} I_m}{2}\big[\tanh(c_{21}V_1 + c_{22}V_2 - b_2) + 1\big] + \dfrac{V_1}{R_{12}} + \dfrac{V_2}{R_{22}} - \dfrac{u_2}{R_2}$   (13)

Moreover, it can be shown that the network in Fig. 2 can be associated with an energy function E of the form

$E = \dfrac{1}{2}a_{11}V_1^2 + a_{12}V_1V_2 + \dfrac{1}{2}a_{22}V_2^2 + \dfrac{I_m}{2}\big[\ln\cosh(c_{11}V_1 + c_{12}V_2 - b_1) + \ln\cosh(c_{21}V_1 + c_{22}V_2 - b_2)\big] + \dfrac{I_m}{2}\big[(c_{11} + c_{21})V_1 + (c_{12} + c_{22})V_2\big]$   (14)

Fig. 2 The proposed feedback neural circuit to solve a quadratic programming problem in 2 variables with 2 linear constraints

From (14), it follows that

$\dfrac{\partial E}{\partial V_1} = a_{11}V_1 + a_{12}V_2 + \dfrac{I_m}{2}\,c_{11}\big[\tanh(c_{11}V_1 + c_{12}V_2 - b_1) + 1\big] + \dfrac{I_m}{2}\,c_{21}\big[\tanh(c_{21}V_1 + c_{22}V_2 - b_2) + 1\big]$   (15)

Also, if E is the energy function, it must satisfy the following condition [8]:

$\dfrac{\partial E}{\partial V_i} = -K\,\dfrac{du_i}{dt}$   (16)

where K is a constant of proportionality and has the dimensions of resistance. Comparing (12) and (15) according to (16) yields

$s_{11} = c_{11}, \qquad s_{21} = c_{21}$   (17)

$R_{11} = \dfrac{1}{g_m\,a_{11}}, \qquad R_{21} = \dfrac{1}{g_m\,a_{12}}$   (18)

The values of the current scaling factors ($s_{ji}$) of Fig. 2 can easily be calculated by setting the value of K equal to 1 and then using (17). The values of the weight resistances connected to the first neuron, R11 and R21, can be obtained by using (18). Analysis on similar lines can be performed to obtain the values of the synaptic weights for the second neuron.

III. SIMULATION RESULTS

This section deals with the application of the proposed network to the task of minimizing the objective function

$3V_1^2 + 4V_1V_2 + 5V_2^2$   (19)

subject to

$V_1 - V_2 \le -1, \qquad V_1 + V_2 \le 1$   (20)
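The sketch below cross-checks the chosen problem (19)-(20) numerically, assuming the constraints read V1 − V2 ≤ −1 and V1 + V2 ≤ 1 (a reading consistent with the algebraic solution quoted later in this section). It only confirms the minimiser; it does not model the circuit.

```python
# Quick check of the algebraic solution of (19)-(20) with scipy.
import numpy as np
from scipy.optimize import minimize

f = lambda v: 3 * v[0] ** 2 + 4 * v[0] * v[1] + 5 * v[1] ** 2
cons = ({'type': 'ineq', 'fun': lambda v: -1 - (v[0] - v[1])},   # V1 - V2 <= -1
        {'type': 'ineq', 'fun': lambda v:  1 - (v[0] + v[1])})   # V1 + V2 <= 1
res = minimize(f, x0=[0.0, 0.0], method='SLSQP', constraints=cons)
print(res.x)    # approximately [-0.583, 0.417]
```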
The values of the resistances acting as weights on the neurons are obtained from (18). Towards that end, the value of $g_m$ was first computed for the DVCC and was found to be 1.091 milli-mhos. The corresponding values of the weight resistances are: R11 = 305.53 Ω, R21 = 229.14 Ω, R12 = 229.14 Ω and R22 = 183.318 Ω. The values of the current scaling factors were found to be: s11 = 1, s12 = -1, s21 = 1 and s22 = 1.

The circuit for the DVCC was taken from [7] and standard 0.5 micron CMOS parameters were used for simulation purposes. For the opamp, use was made of the LMC7101A CMOS opamp from National Semiconductor; the sub-circuit file for this opamp is available in the OrCAD model library. Routine mathematical analysis of (19, 20) yields V1 = -0.584, V2 = 0.416. The results of the PSPICE simulation are presented in Fig. 3. From the plots of the neuron output voltages, it can be seen that V(1) = -0.58 V and V(2) = 0.41 V, which are very near to the algebraic solution, thereby confirming the validity of the approach.

Fig. 3 Simulation results for the chosen 2-variable problem

IV. CONCLUSIONS

In this paper we have described a CMOS-compatible approach to solve a quadratic programming problem in 2 variables subject to 2 linear constraints, which uses 2 neurons and 2 synapses. Each neuron requires one opamp and each synapse is implemented using a dual-output DVCC. This results in a significant reduction in hardware over the existing schemes [2-6]. The proposed network was tested on a sample problem of minimizing a quadratic function in 2 variables and the simulation results confirm the validity of the approach.

REFERENCES
[1] D.W. Tank, J.J. Hopfield, "Simple neural optimization networks: an A/D converter, signal decision network, and linear programming circuit," IEEE Trans. Circ. Syst., CAS-33, 533-541, 1986.
[2] M.P. Kennedy, L.O. Chua, "Neural networks for nonlinear programming," IEEE Trans. Circ. Syst., 35, 554-562, 1988.
[3] C.Y. Maa, M.A. Shanblatt, "A two-phase optimization neural network," IEEE Trans. Neural Networks, 3 (6), 580-594, 1992.
[4] X.Y. Wu, Y.S. Xia, J. Li, W.K. Chen, "A high performance neural network for solving linear and quadratic programming problems," IEEE Trans. Neural Networks, 7 (3), 643-651, 1996.
[5] Y. Xia, "A new neural network for solving linear and quadratic programming problems," IEEE Trans. Neural Networks, 7 (6), 1544-1547, 1996.
[6] A. Malek, M. Alipour, "Numerical solution for linear and quadratic programming problems using a recurrent neural network," Appl. Math. Computation, 192, 27-39, 2007.
[7] S. Maheshwari, "A canonical voltage-controlled VM-APS with a grounded capacitor," Cir. Syst. Sig. Process., 27, 123-132, 2008.
[8] S. A. Rahman, Jayadeva, S. C. Dutta Roy, "Neural network approach to graph colouring," Electronics Letters, 35, 1173-1175, 1999.


ESTIMATION OF WIND ENERGY PROBABILITY USING WEIBULL DISTRIBUTION FUNCTION

Narendra. K,
M. Tech. Student, Elect. Engg. Dept. Indian Institute of Technology Roorkee Roorkee-247667, India e-mail: narenpee@iitr.ernet.in
AbstractWind energy is renewable and environment friendly. The most important parameter of the wind energy is the wind speed. Statistical methods are useful for estimating wind speed because it is a random phenomenon. For this reason, wind speed probabilities can be estimated by using probability distributions. For an accurate determination of probability distribution, wind speed values are very important in evaluating wind speed energy potential of a region. The main objective of this paper is to calculate the wind energy probability by using Weibull distribution function for Indian Institute of Technology (IIT) Roorkee region. In this study, first, we tried to determine appropriate theoretical probability density function (pdf) by comparing measured wind speed data with Weibull function for IIT Roorkee region. In determining proper pdf , an approach consisting of fit tests and fitted graphics by MATLAB software have been used. The energy from wind speed for one month is equal with the area under power curve, multiply with the number of days for that month. To estimate the wind energy probability the wind speed data is taken from Department of Hydrology, IIT Roorkee, India. Keywords-Weibull distribution, wind energy, hydrological data, MATLAB software.

E. Fernandez
Associate Professor, Elect. Engg. Dept. Indian Institute of Technology Roorkee Roorkee-247667, India e-mail: eugfefee@iitr.ernet.in

I. INTRODUCTION
Since the oil crisis in the early 1970s, utilization of solar and wind power has became increasingly significant, attractive and costeffective. In recent years, wind energy has become viable alternatives to meet environmental protection requirement and electricity demands. With the efficient characteristics wind energy present becomes an unbeatable option for the supply of small electrical loads at remote locations where no utility grid power supply. Since they can offer a high reliability of power supply, their applications and investigations gain more concerns nowadays. Due to the stochastic behavior of both wind energy, the major aspects in the design of wind system are the reliable power supply of the consumer under varying atmospheric conditions and the cost of the kWh of energy. Energy, an essential need for every individual and for economic development, has always been particularly lacking in rural areas of developing countries, were rural areas are defined as sparsely separated, faraway from large cities and in many cases, in difficult terrain [1]. The benefit of wind power generation lies in providing clean energy and saving fossil fuels thereby reducing emissions and import costs. Energy generation by wind power plants is without fuel cost but is stochastic in output [3]. This necessitates that the energy generated by wind be supplied fully to the grid to minimize the overall economics of generation. In recent years, renewable energy is

being used for large scale power generation in the form of wind energy conversion systems (WECS) and Wind energy penetration levels in power system are increasing worldwide. A wind turbine installed at a potential site will generate electric energy to about 7085% of its maximum capacity most of the time, but seldom at rated power output. The power output characteristics of wind energy conversion system are different from that of conventional generation systems. Due to this reason, conventional power plants can be represented as binary units, either fully available or not at all unlike wind power plants where the output varies between zero and rated capacity. When the power generation through wind is inadequate or not available, conventional sources must meet the load demand during those lean periods. Reliability is an important performance index and can be used for assessing the performance of wind farms [4]. The worldwide installed capacity of wind power reached 1,21,188 MW by the end of 2008. USA (25,170 MW), Germany (23,903 MW), Spain (16,754 MW) and China (12,210 MW) are ahead of India in fifth position [8]. The short gestation periods for installing wind turbines, and the increasing reliability and performance of wind energy machines has made wind power a favored choice for capacity addition in India. The development of wind power in India began in the 1990s, and has significantly increased in the last few years. The "Indian Wind Turbine Manufacturers Association (IWTMA)" has played a leading role in promoting wind energy in India [9]. Although a relative newcomer to the wind industry compared with Denmark or the US, a combination of domestic policy support for wind power and the rise of Suzlon (a leading global wind turbine manufacturer) have led India to become the country with the fifth largest installed wind power capacity in the world. As of November 2008 the installed capacity of wind power in India was 9587.14 MW, mainly spread across Tamil Nadu (4132.72 MW), Maharashtra (1837.85 MW), Karnataka (1184.45 MW), Rajasthan (670.97 MW), Gujarat (1432.71 MW), Andhra Pradesh (122.45 MW), Madhya Pradesh (187.69 MW), Kerala (23.00 MW), West Bengal (1.10 MW), other states (3.20 MW) It is estimated that 6,000 MW of additional wind power capacity will be installed in India by 2012. Wind power accounts for 6% of India's total installed power capacity, and it generates 1.6% of the country's power.

II. DETERMINATION OF THE AVAILABLE WIND POWER RESOURCE

A. Weibull Distribution

In probability theory and statistics, the Weibull distribution is a continuous probability distribution. It is named after Waloddi Weibull who described it in detail in 1951, although it was first


identified by Fréchet (1927) and first applied by Rosin & Rammler (1933) to describe the size distribution of particles [2]. The probability density function of a Weibull random variable x is:

$f(x; \lambda, k) = \dfrac{k}{\lambda}\Big(\dfrac{x}{\lambda}\Big)^{k-1} e^{-(x/\lambda)^k}$ for $x \ge 0$, and $f(x; \lambda, k) = 0$ for $x < 0$   (1)

where k > 0 is the shape parameter and λ > 0 is the scale parameter of the distribution. Its complementary cumulative distribution function is a stretched exponential function. The Weibull distribution is related to a number of other probability distributions; in particular, it interpolates between the exponential distribution (k = 1) and the Rayleigh distribution (k = 2).

Figure 1. Weibull (2-parameter) probability density and cumulative distribution functions.

The Weibull distribution is often used in the field of life data analysis due to its flexibility: it can mimic the behavior of other statistical distributions such as the normal and the exponential. If the failure rate decreases over time then k < 1, if the failure rate is constant over time then k = 1, and if the failure rate increases over time then k > 1. It is very important for the wind industry to be able to describe the variation of wind speeds. Turbine designers need the information to optimize the design of their turbines, so as to minimize generating costs. Turbine investors need the information to estimate their income from electricity generation. If wind speeds are measured throughout a year, it is noticeable that in most areas strong gale-force winds are rare, while moderate and fresh winds are quite common. The wind variation for a typical site is usually described using the so-called Weibull distribution. The wind speed variation is best described by the Weibull probability distribution function [7]. There are two important parameters, the shape parameter k and the scale parameter c. The probability of the wind speed being v during any time interval is given by:

$h(v) = \dfrac{k}{c}\Big(\dfrac{v}{c}\Big)^{k-1} e^{-(v/c)^k}$ for $0 < v < \infty$   (2)

The average wind speed and the most probable wind speed are then given by:

$\bar{v} = c\,\Gamma\Big(1 + \dfrac{1}{k}\Big), \qquad v_{mp} = c\Big(1 - \dfrac{1}{k}\Big)^{1/k}$   (3)

and the probability of the most probable wind speed is

$prob(v_{mp}) = \dfrac{k}{c}\Big(\dfrac{v_{mp}}{c}\Big)^{k-1} e^{-(v_{mp}/c)^k}$   (4)

B. Wind power

The wind power can be determined using the equation [6]:

$P_{wind} = \dfrac{1}{2}\,\rho_{air}\,v_{wind}^3$   (5)

where $P_{wind}$ is the specific wind power [W/m²], $v_{wind}$ is the average wind speed [m/s] and $\rho_{air}$ is the density of dry air [kg/m³].

The wind speed was measured with the help of an anemometer (AN1) placed at a height of 10 m. Mean values were recorded and stored from day to day using a data logger (DL2).

TABLE 1: AVERAGE WIND SPEED (M/S) DATA OF IIT ROORKEE FOR YEAR 2008
Source: Hydrological Data, Department of Hydrology, I.I.T Roorkee, Roorkee, India.

III. CALCULATION OF THE WIND ENERGY PROBABILITY USING MATLAB SOFTWARE

The focus here is to estimate the wind energy probability using the Weibull distribution function in MATLAB. The wind speed was measured with the help of an anemometer (AN1) placed at a height of 10 m; day-to-day mean values were recorded and stored using a data logger (DL2). The characteristic curves of the model were obtained in MATLAB using the Weibull distribution and the appropriate fitting option of the program [5]. The steps followed in MATLAB for the wind speed distribution and the wind power calculation are summarised in Fig. 2.

Figure 2. Algorithm for Estimation of wind energy probability using Weibull distribution function.
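For readers without MATLAB, the fitting step can be reproduced along the following lines in Python; the wind speeds below are made-up placeholders standing in for the Table 1 data, and scipy's weibull_min (with the location fixed at zero) plays the role of the MATLAB fitting option.

```python
# Sketch: two-parameter Weibull fit of measured wind speeds.
import numpy as np
from math import gamma
from scipy import stats

wind_speeds = np.array([1.8, 2.3, 3.1, 2.7, 4.0, 3.5, 2.2, 1.6, 2.9, 3.3])  # m/s, illustrative only
k, loc, c = stats.weibull_min.fit(wind_speeds, floc=0)   # shape k and scale c, loc fixed at 0
print(f"shape k = {k:.2f}, scale c = {c:.2f} m/s")
print("mean wind speed from the fit:", c * gamma(1 + 1 / k), "m/s")   # equation (3)
```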

IV. RESULTS AND DISCUSSION

The results consist of wind energy probability graphs for each month (year 2008, IIT Roorkee). The wind energy values are plotted and the wind energy variation during the whole year can be studied. The calculation principle for each month is the same as described in the program above; only the wind speed data differ, according to the wind potential of each month.
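A rough sketch of how a monthly energy figure can be formed from the fitted distribution is given below: the specific wind power of equation (5) is weighted by the Weibull pdf h(v) of equation (2), integrated over speed, and scaled by the hours in the month and an assumed rotor area. The shape, scale and area values are placeholders, not results from the paper.

```python
# Sketch: monthly wind-energy estimate from a fitted Weibull distribution.
import numpy as np
from scipy import stats

k, c, rho, area, hours = 2.0, 3.2, 1.225, 3.0, 30 * 24   # assumed values
v = np.linspace(0.0, 25.0, 2000)                          # wind speed grid, m/s
p_specific = 0.5 * rho * v**3                             # W/m^2, equation (5)
pdf = stats.weibull_min.pdf(v, k, scale=c)                # h(v), equation (2)
energy_kwh = np.trapz(p_specific * pdf, v) * area * hours / 1000.0
print(f"estimated monthly energy: {energy_kwh:.1f} kWh")
```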

Figure 3. Wind energy % probability vs Speed graphs (year 2008, IIT Roorkee).

Figure 4. Wind Energy (kWh) probability for year 2008, IIT Roorkee, Roorkee, India.


TABLE 2: AVAILABLE WIND POTENTIAL ENERGY FOR YEAR 2008 IIT ROORKEE, ROORKEE, INDIA.

V. CONCLUSIONS

The decision on whether or not to invest in a wind plant planned for a region is made with the help of wind speed data measured for that region. The distribution of wind speeds in the IIT Roorkee region was measured with the help of an anemometer (AN1) placed at a height of 10 m; mean values were recorded and stored from day to day using a data logger (DL2) for the year 2008. Since wind speeds are random events, they are expected to fit a probability distribution, and therefore the distribution that the wind speed fits was investigated. In this paper the wind energy probability was estimated using the Weibull distribution function in MATLAB. The annual energy that a 1.1 kW wind turbine can harvest from the wind is 81.604 kWh. This value can be raised by using the optimal combination of energy sources, batteries, wind turbine and panels, allowing the load requirements to be met at the lowest possible cost.

REFERENCES
[1] Billinton, Roy and Bai, Gaung, "Generating capacity adequacy associated with wind energy," IEEE Trans. Energy Convers., vol. 19, no. 3, pp. 641-64, Sep. 2004.
[2] Sagias, Nikos C. and Karagiannidis, George K., "Gaussian class multivariate Weibull distributions: theory and applications in fading channels," IEEE Transactions on Information Theory, vol. 51, no. 10, pp. 3608-3619, 2005.
[3] Wang, Peng and Billinton, Roy, "Reliability benefit analysis of adding WTG to a distribution system," IEEE Trans. Energy Convers., vol. 16, no. 2, pp. 134-139, Jun. 2001.
[4] Billinton, Roy and Karki, Rajesh, "Cost effective wind energy utilization for reliable power supply," IEEE Trans. Energy Convers., vol. 19, no. 2, pp. 435-440, Jun. 2004.
[5] Billinton, R., Hua Chen and Ghajar, R., "A sequential simulation technique for adequacy evaluation of generating systems including wind energy," IEEE Trans. Energy Convers., vol. 11, no. 4, pp. 728-734, Dec. 1996.
[6] Ubeda, J.R. and Rodriguez Garcia, M.A.R., "Reliability and production assessment of wind energy production connected to the electric network supply," IEE Proc. Gener. Transm. Distrib., vol. 146, no. 2, pp. 169-175, 1999.
[7] Voorspools, Kris R. and D'haeseleer, William D., "Critical evaluation of methods for wind-power appraisal," Renewable and Sustainable Energy Reviews, vol. 11, no. 1, pp. 78-97, 2007.
[8] World Energy Wind Report 2008, World Wind Energy Association, Germany, February 2009.
[9] www.planettark.com, "Clean Energy Brings Windfall to Indian Village," 2007.


Performance Analysis of a Coupled Inductor Bidirectional DC to DC Converter


Narasimharaju.B.L Satya Prakash Dubey Sajjan Pal Singh
Department of Electrical Engineering Indian Institute of Technology Roorkee, India-247 667 E-mail: spseefee@iitr.ernet.in Department of Electrical Engineering Department of Electrical Engineering Indian Institute of Technology Indian Institute of Technology Roorkee, India-247 667 Roorkee, India-247 667 E-mail: narasimharaju.bl@gmail.com E-mail: spdubfee@iitr.ernet.in

Abstract-- Energy storage system is one of the major concern in the applications such as DC UPS systems (DUPS) for battery discharge/charge, renewable energy source conversions, and hybrid electric vehicles etc. A Bidirectional DC-DC Converter is capable of providing power flow in either direction while maintaining the polarities of the voltage on either side unchanged. Also, it proves to be scaled down energy storage system that replaces the bulk storage devices. The BDCs employed in DUPS system incorporating battery bank has attracted special interest as promising load balancing system, and to meet failure situations of the mains power. Because battery bank voltage of this system is low, it is important to use BDC suitable for large step-up/ step-down ratio. This paper is intended to provide improved changes of BDC topology appropriate for DUPS system, its theoretical analysis and simulation performances using MATLAB/SIMULINK. Also provides selection factors of BDC to specific applications.
Keywords- Energy Storage System (ESS), Bidirectional DCDC converter (BDC) and coupled inductor

when charging. Hence, small LB systems require bidirectional converters suitable for large step-up and stepdown ratios. The authors propose a circuit topology with a bidirectional chopper suited to such applications, and described analytical and simulation results.
In earlier days bi-directional power flow for charging and discharging operation with two unidirectional converters was in practice. Due to the advancements in semiconductor technologies and recent innovative ideas it is possible to achieve bi-directional power flow using single DC-DC converter. It offers the solutions in the reduction of the capability of input power source by bidirectional operational mode. These BDCs will functions in two modes of operation. One is discharge mode during which the BDC is used to boost the battery voltage to a suitable higher level DC bus voltage. Other one is the charging mode during which the BDC is used to buck the DC bus voltage to a suitable low level battery voltage. These converters with constant regulated output DC voltage/current at vast varying power level is required in many applications; such as in DC UPS (DUPS) system for battery charging/discharging, low-rating adjustable speed drives (ASDs) in fans and AC grid interfacing system etc. Based on the specific type of application and power ratings; these BDCs are categorized as non-isolated BDCs [1]-[7] and isolated BDCs. An isolated BDCs are further classified into three categories as half-bridge [9, 10], full-bridge [8, 12, 13], and flyback BDCs [14, 15]. Likewise, various BDC converter topologies were introduced recently. The literature also presents topologies of converters soft-switching techniques in order to reduce the switching loss and stress problems in [2, 5, 7, 9, 12, 13, 14]. However, such isolated and soft switching converters have the disadvantages such as more complicated, higher switch stress, instability due to high frequency transformer saturation, and less utilization of switch due to soft switching. The aforementioned complexities are not really encouraged to use in low or medium power applications [6]. This paper is includes five sections. Starting with Section- I, the others sections cover the design analysis of Bidirectional DC-DC topologies in Section-II, simulation performance analysis in section-III, conclusive observations in Section-IV and references in Section-V.

N Electrical Energy Storage (EES) with anticipated unit cost reductions and hence their practical applications look very attractive for energy demands on future timescales of only a few years. Characteristics of small energy storage systems using storage batteries include the ability to install them in proximity to users, the promise of cost reduction through mass production, the ease of securing installation space, and the ability to combine UPS functions. For this reason they are considered prime candidates for peaking power, standby reserves and load-balancing (LB) systems or peak-cut systems when used in conjunction with traditional sources of energy [16]. For example, appropriate power output when such systems are installed in residences ranges from several hundred VA to about 3 kVA, while the capacity of storage batteries is in the comparatively small range of several kWh to approximately 10 kWh. Generally it is more economical for storage batteries to use a few largecapacity cells than many small-capacity cells, and for that reason the number of cells used in small LB systems is limited. This makes it impossible to connect many cells in series, making battery voltage low. But as discussed below, because inverters used in LB systems require comparatively high input voltages, batteries need a high voltage step-up ratio when discharging, and a large voltage step-down ratio

I.

INTRODUCTION

II.

DESIGN ANALYSIS OF BIDIRECTIONAL DC-DC CONVERTERS

A. Conventional Bidirectional DC-DC Converter In fact, the boost converter is a proven topology and is extensively used in industry as the unidirectional DC-DC converter stage for many high frequency inverter products, but not yet in bi-directional manner. The conventional BDC circuit derived from unidirectional boost DC-DC converter as shown in Fig. 1 is preferred for charge/discharge operation in DC UPS system due to the presence of fewer parts and simple controls [2, 3]. Also, it is preferred to operate the


converter in CCM to get a better dynamic response and also a tight regulation of output voltage for the entire load variation. Fig. 1 illustrates the circuit configuration suitable for a small DC UPS system that may be installed in residential and other locations. Also, it is a transformer less type so as to reduce size and weight to improve economy [11]. Because it interacts to commercial power system using single-phase three-wire utility distribution, inverter input voltage needs a high DC voltage (VDC) of about 240V, which necessitates voltage step up when discharging batteries, and step down when charging them. In Fig. 1, during discharging (forward) mode, S1 operates and the converter acts as a boost converter; while during charging (reverse) mode, S2 operates and the converter acts as a buck converter [1, 2, 3, 7]. As noted previously, a small LB system needs low battery voltage because it cannot use a large number of series-connected batteries. It must therefore have a bidirectional converter with very large step-up and step-down ratios, and operate with a very large S1 duty and a very small S2 duty, which brings down efficiency.
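The back-of-the-envelope check below shows why the conventional BDC ends up with such extreme duty ratios when it links a 24 V battery to a bus of around 200 V (the bus level assumed later in Table-1); the ideal boost and buck duty cycles reproduce the 88 % / 12 % figures listed there. This is a sketch, not design code.

```python
# Ideal duty ratios of the conventional bidirectional boost/buck converter.
V_bat, V_bus = 24.0, 200.0
d_boost = 1 - V_bat / V_bus          # S1 duty while discharging (step-up)
d_buck = V_bat / V_bus               # S2 duty while charging (step-down)
print(f"boost duty = {d_boost:.0%}, buck duty = {d_buck:.0%}")   # 88%, 12%
```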
L
Rint
+ VL _

LB systems. In addition to the duties of S1 and S2, this circuit can also set for step-up and step-down ratios with the turn ratios of magnetically coupled L1 and L2, and make S1 and S2 operate at appropriate duty values [17]. This proposed topology is quite similar to one reported in [1, 2, 3, 6]. The proposed converter with coupled inductors appreciably minimizes the voltage and current stress of the power switches. Thus, this paper discusses the operating and design principles, and other aspects of the proposed converter shown in Fig. 2. Fig. 3(a) and Fig. 3(b) illustrate the ideal current wave forms of inductor (L1) and switches S1 and S2 for discharge (boost) mode and charge (buck) mode respectively. Using this proposed bidirectional DC-DC converter topology one can retain all the benefits of conventional boost topology; such as its simplicity high conversion efficiency, reduced switching stress and more importantly less components count. 1) Design principles of Proposed BDC Converter a) Design of Duty ratio and Inductor: During forward discharge (boost) operation from the equivalent circuits shown in Fig. 4(a) and Fig. 4(b) the duty cycle of the switch S1is developed as follows; When S1 is ON (2)
(3)

S2 + D2

+ VBat _ Ci D1 S1

CO

VBus

DC UPS/ Inverter

When S1 is OFF

Bidirectional Converter

Average voltage across the inductor L1 is zero over a period (T=TON+TOFF); by simplifying we get
DC UPS

Figure 1. Bidirectional DC-DC converter feeding to DCUPS/inverter

(4) During forward charge (buck) operation, from equivalent circuits shown in Fig. 4(c) and Fig. 4(d) the duty cycle of the switch S2 is developed as follows; When S2 is ON (5)
When S2 is OFF (6)
1

For the conventional BDC shown in Fig. 1, the duty ratio of S1 when stepping up, and the duty ratio ( of S2 ( when stepping down are given by
(1a)
(1b) The currents of S1 and D2 during discharging mode are equal to battery discharge current, while the currents of S2 and D1 during charging mode are equal to battery charging current, which means that S1, S2, Dl, and D2 must be able to carry large currents. Both when charging and discharging, the applied voltages of S1, S2, D1, and D2 are equal to VBus (i.e. DC bus high-voltage). S1, S2, D1, and D2 must therefore have high withstand voltages. Thus, the conventional BDC shown in Fig. 1, power switches are suffering from high voltage and current stress, and also, low conversion efficiency. B. Proposed Bidirectional DC-DC Converter In contrast to conventional BDC, Fig. 2 shows the circuit configuration of the bidirectional chopper proposed for small

If duty ratio of S2ON is

Average voltage across the inductor L1 is zero over a period (T=TON+TOFF); by simplifying we get
(7) Where and L2.
is turns ratio of coupled inductors L1

By considering the critical current conduction mode (i.e. 0 ; minimum inductance is obtained as (8)

232

Rint + VBat _

VL

b) Current and voltage relations for the devices: Based on the power balance relation with negligible losses; (9) I = = 1 1 (10) (11)

S1ON
Ci

(a) When S1ON & D2OFF (Discharge mode)


(VBat-IBAvRint) L1 (n1 ) IBAv

L2 (n2 ) + _

Where, IBAV and IBuAv are the average battery discharge current and DC-Bus (HV) current respectively. By simplifying the equations (9) to (11) Peak current through S1/D1 and S2/D2 are obtained as follows: (12) (13) OFF state voltages of S1and D1 is

Rint + VBat
_

+ VL

Ci

NVBat +VBus 1+N

(b) When S1OFF & D2ON (Discharge mode)


(VBat-IBAvRint)
IBAv

L1 (n1 )

L2 (n2 ) + _

Rint +

+ VL
Ci

(14) OFF state voltages of S2 and D2 is (15)


L2 (n2 )
S2

VBat
_

NVBat +VBus 1+N

(c) When S2ON & D1OFF (Charge mode)


(VBat-IBAvRint) L1 (n1 ) IBAv
+

L1 (n1 )
Rint + VBat _

0V
_

2 L2 (n2 ) - n VBat 1

n
_

VL

Ci

D1 S1

Battery

Bi directional DC-DC Converter

Figure 2.
IL1

Proposed Bi-directional DC/DC converter


IL1

IT1

IT2

(a) Discharge (Boost) Mode

(b) Charge (Buck) Mode

Figure 3.

Ideal waveforms (a) In boost mode (b) In buck mode

Rint + VBat
_

+ VL
Ci

D2 CO VBus

D1ON

(d) When S2OFF & D1ON (Charge Mode)


Figure 4. Equivalent circuits of discharge (Boost) and charge (Buck) mode

From the above equations, comparisons of the conventional and proposed topology in terms of various parameters are described in table-1.

TABLE-1: COMPARISON OF VOLTAGE, CURRENT AND DUTY FOR VBAT = 24 V, VBUS = 200 V AND N = 2

Boost mode     Buck mode      Conventional    Proposed
S1 voltage     D1 voltage     200 V           82.67 V
S1 current     D1 current     16.67 A         20.67 A
S2 voltage     D2 voltage     200 V           248 V
S2 current     D2 current     16.67 A         6.89 A
S1 duty        ---            88 %            71 %
---            S2 duty        12 %            29 %
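The Table-1 entries can be cross-checked with the short sketch below, assuming a coupled-inductor boost gain of (1 + N·D)/(1 − D) and an off-state device stress of (N·VBat + VBus)/(1 + N); these closed forms reproduce the tabulated 71 % duty and the 82.67 V / 248 V stresses, but they are inferred for illustration rather than copied from the paper's equations.

```python
# Cross-check of the proposed-converter figures in Table-1.
V_bat, V_bus, N = 24.0, 200.0, 2.0

d_conv = 1 - V_bat / V_bus                  # conventional boost duty  -> 0.88
M = V_bus / V_bat                           # required voltage gain
d_prop = (M - 1) / (M + N)                  # from (1 + N*d)/(1 - d) = M  -> ~0.71
v_s1 = (N * V_bat + V_bus) / (1 + N)        # S1/D1 off-state voltage -> 82.67 V
v_d2 = N * V_bat + V_bus                    # S2/D2 off-state voltage -> 248 V
print(round(d_conv, 2), round(d_prop, 2), round(v_s1, 2), v_d2)
```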


For continuous current conduction vice versa for discontinuous current conduction

and

(VBat-IBAvRint) L1 (n1 ) IBAv


_

0V

2 L2 (n2 ) - n V B at 1

VBus
+

CO VBus _

VBus

VBus

D2ON

CO VBus _

VBus

VBus

S2ON

CO VBus _

VBus
+

CO VBus _

III. SIMULATION PERFORMANCE ANALYSIS

The operating principle and the theoretically designed values of both the conventional and the proposed converters are analyzed in simulation using SIMULINK. Fig. 5 and Fig. 6 illustrate the schematics of the conventional and proposed BDC respectively. For a realistic simulation of the battery charging/discharging application, the common simulation parameters were: Vin = 24±2 V, Vout = 200 to 218 V, Pout = 400 W, CIN = COUT = 600 µF. The other parameter values are as follows: L = 20 µH for the conventional BDC, and L1 = 10 µH, L2 = 40 µH for the proposed BDC. The percentage duty ratios for both BDCs are described in Table-1. Fig. 7 and Fig. 8 illustrate the steady-state simulation results of the conventional BDC for varying loads. Fig. 9 to Fig. 11 describe the steady-state response of the proposed BDC under different load conditions. The simulated results are in close agreement with the theoretically designed parameter values. These analyses show the improvement of the proposed topology in providing a high-level DC-link voltage of 200 V from a low-level voltage of 24 V, and vice versa, with minimum current and voltage stresses, and justify the improved utilization of the switches. This confirms that the proposed converter realizes significantly lower stresses than the conventional converter. From the analysis, it is also possible to understand the effect of a change in load on the converter voltage gain and the efficiency.
Continuous powergui
+ -v

Figure 7. Waveforms of VPulse, VSwitch I(Switch), VDiode IL and VLoad with RLoad=1.5 (Buck Mode) and RLaod=100 (Boost Mode)

Vbat IL
+ i -

IGBTC
m E g C
NOT Logical Operator 1

MCCScope

+ i 2

IBus
+ i 2 2 c 1

IBatC B1
2

c 1

B4
c 1

L Diode1
g C

B2 Cbus
c 1

B3 Rbus1

Cbat Rbat VBat IGBTD


m E

Diode VDC BusRbus


+ - v

MCDscope

VBus

Figure 8. Waveforms of VPulse, VSwitch I(Switch), VDiode, IL and VLoad with RLoad=2 (Buck Mode) and RLaod=120 (Boost Mode)
NOT Logical Operator Delay _Pulse1
In1Out1 In2Out2

Gate _Pulses Scope

Figure 5. Simulink schematics of the conventional BDC

Figure 6. Simulink schematics of the proposed BDC

Figure 9. Waveforms of proposed BDC: VPulse, VSwitch I(Switch), VDiode IL1 and VLoad with RLoad=1.5 (Buck Mode) and RLaod=400 (Boost Mode)


support the loads during power failures without disturbing the systems schedules. V.
[1]

REFERENCES

Figure 10. Waveforms of proposed BDC: VPulse, VSwitch I(Switch), VDiode, IL1 and VLoad with RLoad=1.5 (Buck Mode) and RLoad=800 (Boost Mode)

Figure 11. Waveforms of proposed BDC: VPulse, VSwitch, I(Switch), VDiode, IL1 and VLoad with RLoad=2 (Buck Mode) and RLaod=1000 (Boost Mode)

IV.

CONCLUSION

A. Jusoh, A. J. Forsyth and Z. Salam, Analysis and Control of the Unloaded Bi-directional DC/DC Converter to Perform an Active Damping Function, Jurnal Teknologi, 44(D) (JTjun44D[5]CRC.pmd), pp. 65-84, June 2006. [2] F. A. Himmelstoss and M. E. Ecker, Analyzes of Bi-directional DC/DC half-bridge converter with zero voltage switching, IEEE Transactions, pp. 603-60, 2005. [3] Chang Gyu Y, Woo-Cheol L, Kyu-Chan L and Bo H Cho, Transient Current Suppression Scheme for Bidirectional DC-DC Converter in 42V Automotive Power systems, Conf. Rec. of IEEE 2005, pp.16001604. [4] S. Waffler and I.W. Kolar, A Novel Low-Loss Modulation Strategy for High-Power Bi-directional Buck Boost Converters, 7Th International Conf. on Power Electronics-07, PP.889-894, Oct 2007. [5] Yuang-Shung Lee, and Guo-Tian Cheng, Quasi-Resonant ZeroCurrent-Switching Bidirectional Converter for Battery Equalization Applications, IEEE TRANS on POWER ELECTRONICS, vol.21, no.5, Sept-2006, PP-1213-1224 [6] Wei Li, Geza Joos and Chad Abbey, A Parallel Bidirectional DC/DC Converter Topology for Energy Storage Systems in Wind Applications, in Proc. of IEEE 2007, pp.179-185, 2007. [7] Zenon.R.M and Biswajit Ray, Bidirectional dc-dc Power conversion using constant frequency multiresonant topology converters, IEEE1994 vol.5, pp-991-997 [8] K. Yamamoto, E Hiraki, T Tanaka, M Nakaoka and T Mishima, Bidirectional DC-DC Converter with Full-bridge/Push-pull circuit for Automobile Electric Power Systems, Proce. of IEEE 37th annual Power Electronics Specialists Conference (PESC-06), June-22, 2006. [9] T. Mishima, E. Hiraki and T.Tanaka, A ZCS Lossless Snubber Cells-Applied Half-Bridge Bidirectional DC-DC Converter for Automotive Electric Power Systems. Proceedings of the IEEE 37th annual Power Electronics Specialists Conference (PESC '06), June 18-22, 2006. [10] Manu. Jain, M. Daniele and Praveen. K. Jain, A bi-directional DCDC converter topology for low power application, IEEE TRANS. On Power Electronics, vol. 15, No. 4, pp. 595-606, July 2000. [11] Kazunon Nishimura, Keihachiro Tachibana, Katsuya Hirachi and

The bidirectional DC-DC converters are essentially needed in wide range of applications to provide flow of power in either direction. The detailed design analysis of proposed bidirectional DC-DC topologies and its theoretical design comparisons are discussed. The steady-state analysis of conventional and proposed BDCs under varying loads has been made using simulation. The simulation results and analysis confirms the improvement in device stresses and switch utilization factor of the proposed BDC than that of conventional BDC. Simulated results have a close agreement with the theoretically designed parameter values. Therefore, the proposed coupled inductor is the good choice for low/medium applications, particularly where isolation is not required like in DC-UPS systems. The selection criterions of BDCs for specific applications is discussed. The BDCs are considered to be a better alternative for efficient ESS by means of power quality improvement because of reduced size of overall converter, higher efficiency, lower cost, and enhanced reliability. These converters provide improved performance of ESSs and

Mutsuo Nakaoka, Trends of Development of Inverter System Interconnected Solar Photovoltaic Power Conditioner, The

[12]

[13]

[14]

[15]

[16] [17]

Journal of the Institute of Electrical Installation Engineers of Japan, V01.20, No.2, pp.88-92, 2000. Lizhi Zhu, A Novel Soft-Commutating Isolated Boost Full-bridge ZVS-PWM DC-DC Converter for Bidirectional High Power Applications, in Proc. 35th Annual IEEE Power Electronics Specialists Conference, pp.2141-2146, 2004. R. Farrington, M.M. Jananovic and F.C. Lee, A new family of isolated converters that uses the magnetizing inductance of the transformer to achieve zero-voltage switching, IEEE Trans. Power Electronics, vol.8, no. 4 pp. 535-545, Oct. 1993. Henry Shu- Hung Chung, Wai-Leung Cheung and K.S. Tang, A ZCS Bidirectional Flyback DC/DC Converter, IEEE TRANS on Power Electronics, vol.19, no.6, pp 1426-1434, Nov 2004. K.Venkatesan, Current mode controlled bidirectional flyback converter, in Proc. IEEE Power Electron. Spec. Conf., pp. 835-842, June/July 1989. Baker.J.N and Collinson, A Electrical energy storage at the turn of the millennium, Power Eng 1999, pp.107-12. Yoshihiko Yamakata, Makoto Yatsu, Eiki Iwabuchi and Kazuyuki Yoda, Development of New Series MlNI-UPS, Tran. of Japan Society for Power Electronics, V01.25, No.1, pp.81-89, 1999



Control of Two-Input Integrated Boost Converter


Mummadi Veerachary
Dept. of Electrical Engineering Indian Institute of Technology Delhi Hauz Khas, New Delhi - 110016 Abstract The analysis and control of two-input integrated dc-dc converter is presented in this paper. The integrated converter is essentially a combination of two boost converters. However, on account of integration only three switching devices, instead of four, are sufficient enough for performing the power conversion. In order to have simple control strategy as well as simpler compensator design a single loop control scheme, voltage-mode and current-limit control, are proposed here for the power distribution. Closed-loop converter performance of this converter is simulated and compared with the theoretical predictions. Experimental measurements are provided for validating the control concept.
Keywords- Integrated boost converter, current-limit control, Voltage-mode controller.

Dileep Ganta
Dept. of Electrical Engineering Indian Institute of Technology Delhi Hauz Khas, New Delhi - 110016 even in the automotive industries. In order to efficiently and economically utilize renewable energy resources, PV [5]-[6] and FC, storage of energy is mandatory. In recent days electric double layer capacitors (EDLCs) are coming up in the energy storage systems in addition to the conventional battery systems. However, in all these cases the power conversion efficiency and its control is major challenge for the power supply designer. The efficiency improvement, from the steady-state point of view, of the HDGS is one of the considerations for the designer. The other constraint while designing such systems is to evolve simple control strategy. To meet some of these concerns, multi-input converters [7]-[12] with different topology combinations are coming up in the recent days. Although several power conversion topology configurations can easily be developed, but an integrated converter with boosting feature is more suitable in the HDGS. In view of this a two-input integrated boost converter (TIBC) and its control features are analyzed in this paper. Several controlling methods, including single-loop and multi-loop strategies, have been reported for the dc-dc converters. Each of these control schemes has their own advantages and limitations. However, simple and yet cost effective single-loop control strategies, voltage-mode controller for the first boost converter and current control for the second boost converter, are proposed here for the TIBC and their details presented in the following Sections. II. MODELING OF THE TWO-INPUT INTEGRATED BOOST CONVERTER

I. INTRODUCTION

High frequency switching converters application in the dc power distribution is increasing in the recent years. Increasing the power density is one of challenging issue for the power supply designers to have compact power supply system. For the last one decade the main emphasis in power electronics is increasing in the direction of development of switching-mode converters with higher power density and low electromagnetic interference (EMI). Light weight, small size and high power density are also some of the key design parameters [1]-[4]. Several different types of switch-mode dc-dc converters (SMDC), belongs to buck, boost and buckboost topologies, have been developed and reported in the literature to meet variety of application specific demands. Major concern in the recent dc distribution systems, such as in automotive and telecom power supply systems, is to meet the increased power demand and reducing the burden on the primary energy source, i.e. built-in battery. This is possible by adding additional power sources in parallel to the existing battery source. The additional power sources can be: (i) renewable energy sources such as photovoltaic (PV) or wind (WD), (ii) fuel cell (FC) storage power. Power sources supplementing other resources are normally categorized as hybrid power source (HPS) and the corresponding scheme is called hybrid distributed generation systems (HDGS). Integration of renewable energy sources to form a HDGS is an excellent option for the hybrid vehicles and

The proposed TIBC, shown in Fig. 1, is essentially a parallel combination of two boost converters that uses only three switching devices. Just paralleling two converters at the load terminals demands more number of inductors/switching devices and also more load voltage filtering requirements. However, to reduce the number of switching devices the two boost converters are arranged in such a manner so that only three switching devices would be sufficient for processing the power. The main advantage of having such integrated topology is that the efficiency of power conversion is more. The circuit can actually operate either in continuous or discontinuous inductor current mode of operation. However, its operation in discontinuous mode (DCM) will not provide benefits for the power conversion, and also on account of higher power demands the current


flowing, for almost all loading conditions, is continuous. In view of this, the circuit operation is discussed here only for the continuous inductor current mode (CICM). In CICM the TIBC goes through three topological stages in each switching period, and its power stage dynamics can be described by a set of state-space equations [16] given by:

$\dot{x} = A_k x + B_k u, \qquad v_0 = C_k x$   (1)

where $x = [\,i_{L1}\;\; i_{L2}\;\; v_c\,]^T$, $u = [\,v_{g1}\;\; v_{g2}\,]^T$, and k = 1, 2 and 3 for mode-1, mode-2 and mode-3, respectively. Here the circuit operation depends on the type of controlling signal used for the switching devices S1 and S2. In any case, for proper functioning of the integrated converter, the gate control signals for the switching devices need to be synchronized, either in the form of trailing- or leading-edge modulated pulses. Further, the operating modes depend on the duty ratios of the switching devices, d1 < d2 or d1 > d2, and in any case only three modes will repeat in one switching cycle. Applying state-space averaging analysis and simplifying results in the average model $\dot{x} = Ax + Bu$, where A = (A1 d1 + A2 d2 + A3 d3) and B = (B1 d1 + B2 d2 + B3 d3).

In this TIBC the diodes will be the integral part of both boost converters, while the switching devices are unique to the individual converters. The load and its filtering capacitor are common to both converters. The first boost converter is formed by S1, L1, S3 and R, while the second boost converter is formed by S2, L2, S3 and R. The steady-state load voltage can easily be established, either by employing volt-sec balance or through the state-space model steady-state solution $[x] = -A^{-1}Bu$, as

$V_o = \dfrac{V_{g1} + V_{g2}}{1 - d_1}$   (2)

Fig. 1. Control of Two-input Integrated Boost Converter.

III. CONTROL STRATEGY FOR THE TWO-INPUT INTEGRATED BOOST CONVERTER

Since there are two duty ratios that need to be controlled, one for current control and the second for voltage regulation, variation of one control quantity reflects on the other control variable. To study the system response against any of these changes, we need to derive the small-signal transfer functions of the load voltage and inductor current with respect to the control signals d1 and d2; their matrix form is

$\begin{bmatrix} \hat{v}_o \\ \hat{i}_{L2} \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} \hat{d}_1 \\ \hat{d}_2 \end{bmatrix}$   (3)

where

$G_{11}(s) = C\,(sI - A)^{-1}\big[(A_2 - A_1)\bar{x} + (B_2 - B_1)V_g\big] + (C_2 - C_1)\bar{x}$

$G_{12}(s) = C\,(sI - A)^{-1}\big[(A_2 - A_3)\bar{x} + (B_2 - B_3)V_g\big] + (C_2 - C_3)\bar{x}$

and $G_{21}(s)$ and $G_{22}(s)$ are obtained in the same way with the inductor-current output matrix P in place of C.


Fig. 2. Small-signal block diagram of the TIBC.

If the effect of the cross-coupling transfer functions G12(s) and G21(s) is significant, then the individual controllers must be designed on the basis of the combined plant shown in Fig. 2. However, if their effect is negligible, the two loops can easily be designed in a decoupled manner. In the case of the TIBC the cross-coupling is very small, and hence the current and voltage controllers are designed in a decoupled manner, as explained in the following paragraphs. Several different types of control strategies are reported in the literature; they are broadly classified into: (i) single-loop voltage-mode, (ii) single-loop current-mode, (iii) two-loop current-mode control, and (iv) multi-loop schemes. Single-loop strategies are simple to implement, but their dynamic response is slower. Two-loop current-mode schemes are popular in power supply applications, but their compensator design becomes complex and is sometimes the limiting factor. In this paper two interdependent single-loop control schemes are proposed for the TIBC. This structure is capable of maintaining load voltage regulation while ensuring the load distribution between the individual sources. The control schemes can be interchanged from one source to the other depending on their power supplying capacity. To illustrate the control principle, a current control-loop for the low voltage source (LVS) and a voltage control-loop for the high voltage source (HVS) are shown in Fig. 1. The current controller is realized by a one-zero, two-pole structure, while the voltage controller uses a two-zero, two-pole structure [17]. The generalized block diagram for the control-loop design is shown in Fig. 4, where Gd(s) is replaced with G11(s) for the voltage control-loop design and with G22(s) for the current control-loop design, Gc(s) is the compensator, Fm is the PWM generator transfer function, K is the load voltage sensing gain, and the loopgain is TL(s) = K Fm Gc(s) Gd(s). Both compensators were designed in the frequency domain using a pole-zero placement technique. Fine tuning of the compensators was performed using the MATLAB [18] SISO tool to ensure the stability margins, i.e. gain margin > 6 dB and 0° < phase margin < 75°. For the specifications and converter parameters given in Table I, various compensator design Bode plots were generated and the important ones are shown in Fig. 5.
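A rough numerical counterpart of the loop-gain and margin check described above is sketched below with the python-control package. The voltage compensator Gcv(z) is taken from Table I; the plant G11(z) and the gains K and Fm are placeholders (assumed values, since they are not listed explicitly here), so the printed margins are illustrative only.

```python
import control as ct

Ts = 1.0 / 50e3                        # 50 kHz switching / sampling period

# Voltage compensator from Table I: Gcv(z) = 2(z - 0.92)^2 / ((z - 1)(z - 0.132))
Gcv = ct.tf([2.0, -3.68, 1.6928], [1.0, -1.132, 0.132], Ts)

# Placeholder plant and gains, assumed only for illustration
G11 = ct.tf([0.5], [1.0, -0.9], Ts)    # hypothetical control-to-output response
K, Fm = 0.1, 1.0                       # assumed sensing and modulator gains

TL = K * Fm * Gcv * G11                # loop gain TL(z) = K*Fm*Gc(z)*Gd(z)
gm, pm, wcg, wcp = ct.margin(TL)       # gain/phase margins and crossover frequencies
print(f"gain margin = {gm:.2f} (abs), phase margin = {pm:.1f} deg")
```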

Fig. 3. Block diagram of two-loop controlled TIBC.


Fig. 4. Block diagram of loopgain.



(a) Voltage loop design frequency response characteristics



(b) Current loop design frequency response characteristics
Fig. 5. Compensator design Bode plots.


IV. DISCUSSION OF SIMULATION AND EXPERIMENTAL RESULTS

To verify the developed modeling and controller design, a 100 W TIBC system was designed to supply a constant dc bus/load voltage of 48 V ± 1% from two different dc sources: (i) a high voltage source of 24 V, and (ii) a low voltage source of 12 V. A switching frequency of 50 kHz is used for driving both switching devices with leading-edge modulation, wherein the turn-off instants of the converter switching devices are synchronized. The TIBC and controller parameters designed to meet these specifications are listed in Table I. In this TIBC the load voltage is a function of both source voltages (Eq. 2), and hence the duty ratios must be properly controlled to ensure load voltage regulation as well as the load distribution between the input sources. In order to confirm the controller design analysis discussed in the preceding sections, simulation studies have been carried out on the TIBC. PSIM is used for the simulations [19], and the converter parameters used in these studies are given in Table I. With these parameters, the closed-loop converter regulation and load-distribution capability are tested for the following cases: (i) load disturbance R: 24 Ω → 12 Ω, (ii) supply voltage perturbation of 12 → 14 V; the corresponding results are plotted in Figs. 6 and 7. When there is a change in load demand, the load voltage undergoes a transient before settling back to its regulated value, and during this period the control loops act such that the load distribution takes place according to the predefined references. For constant load demand, if there is a change in one of the source voltages, the controllers come into action to ensure load voltage regulation and re-distribution of the load between the input sources. To support the simulation studies, an experimental prototype [20], with the same parameters as in simulation, has been built and tested. The measured response for the supply voltage perturbation is shown in Fig. 8. The measurement shows the load voltage regulation together with the load distribution on the LV and HV sources. In all these cases the experimental trends are the same as in the simulations. However, in simulation the processing time is almost zero and hence the response time is shorter, while the experimental system needs a finite amount of time for: (i) sensing and converting the real-time voltage and current signals, (ii) establishing operating conditions identical to those used in simulation, and (iii) computing the control loop. On account of this additional processing time, there is a slight mismatch in the dynamic response times of the simulation and experimental results.

Fig. 6. Dynamic response of load voltage against load change (R: 24 Ω → 12 Ω).

Fig. 7. Dynamic response of load voltage (Vg1: 12 → 14 V).

Fig. 8. Experimental dynamic response of load voltage against change in Low voltage source variation(Vg1= 7 to 14 V).


CONCLUSIONS

An integrated boost converter was proposed for two input dc sources, and single-loop controllers were designed for it. The validity of the single-loop control strategies, voltage-mode and current-mode, has been tested for load voltage regulation and power distribution. The decoupled controllers were designed using a pole-placement technique to ensure minimum stability margins. The closed-loop converter design was verified using the PSIM simulator and the results were in line with the theoretical discussions. The experimental results were also in agreement with the theoretical predictions. Extensive analysis and detailed experimental results on the power distribution between the two input sources will be given in the final paper.
TABLE I. CONVERTER AND CONTROLLER PARAMETERS
Power stage: L1 = 200 µH, L2 = 200 µH, Fs = 50 kHz, C = 220 µF, r = 0.047 Ω, R = 20 Ω
Current controller: Gcc(z) = 0.6(z − 0.95)/(z − 1)
Voltage controller: Gcv(z) = 2(z − 0.92)(z − 0.92)/[(z − 1)(z − 0.132)]

REFERENCES
[1] Jian Liu, Zhiming Chen and Zhong Du, "A new design of power supplies for pocket computer system," IEEE Trans. on Ind. Electronics, 1998, Vol. 45, pp. 228-234.
[2] Liping Guo, John Y. Hung, R. M. Nelms, "Evaluation of DSP-Based PID and Fuzzy Controllers for DC-DC Converters," IEEE Trans. on Ind. Electronics, 2009, Vol. 56(6), pp. 2237-2248.
[3] M. Veerachary, Krishna Mohan B., "Robust digital controller for fifth order boost converter," 31st International Telecommunications Energy Conference, INTELEC 2009, pp. 1-6.
[4] Alejandro R. Oliva, Simon S. Ang, Gustavo E. Bortolotto, "Digital control of a voltage-mode synchronous buck converter," IEEE Trans. on Power Electronics, 2006, Vol. 21(1), pp. 157-162.
[5] Masafumi Miyatake, Nabil A. Ahmed, A. K. Al-Othman, "Hybrid Solar Photovoltaic/Wind Turbine Energy Generation System with Voltage-based Maximum Power Point Tracking," Electric Power Components and Systems, 2009, Vol. 37, pp. 43-60.
[6] Masafumi Miyatake, Nabil A. Ahmed, A. K. Al-Othman, "Power fluctuations suppression of stand-alone hybrid generation combining solar photovoltaic/wind turbine and fuel cell systems," Energy Conversion and Management, 2008, Vol. 49, pp. 2711-2719.
[7] Hirofumi Matsuo, Wenzhong Lin, Fujio Kurokawa, Tetsuro Shigemizu, Nobuya Watanabe, "Characteristics of the Multiple-Input DC-DC Converter," IEEE Trans. on Ind. Electronics, 2004, Vol. 51(3), pp. 625-631.
[8] K. P. Yalamanchili, M. Ferdowsi, Keith Corzine, "New double input dc-dc converters for automotive applications," IEEE Applied Power Electronics Conference (APEC), 2006, CD-ROM proceedings.
[9] M. Veerachary, Kamlesh K. Sawant, "Control of multi-input integrated buck-boost converter," Third International Conference on Industrial and Information Systems, ICIIS 2008, pp. 1-6.
[10] Rajkumar Copparapu, Donald S. Zinger, Anima Bose, "Energy Storage Analysis of a Fuel Cell Hybrid Vehicle with Constant Force Acceleration Profile," Proc. of IEEE, 2006, pp. 43-47.
[11] Yuan-Chuan Liu and Yaow-Ming Chen, "A Systematic Approach to Synthesizing Multi-Input DC/DC Converters," IEEE Power Electronics Specialists Conference, PESC 2007, pp. 2626-2632.
[12] Yaow-Ming Chen, Yuan-Chuan Liu, Sheng-Hsien Lin, "Double-Input PWM DC/DC Converter for High-/Low-Voltage Sources," IEEE Trans. on Ind. Electronics, 2006, Vol. 53(5), pp. 1538-1545.
[13] D. Maksimovic and R. Zane, "Small-Signal Discrete Time Modeling of Digitally Controlled PWM Converters," IEEE Trans. on Power Electronics, 2007, Vol. 22(6), pp. 2552-2556.
[15] R. W. Erickson and D. Maksimovic, Fundamentals of Power Electronics, 2nd edition, Springer International Edition, 2006.
[16] R. D. Middlebrook, S. Cuk, "A general unified approach to modeling switching converter power stages," IEEE Power Electronics Specialists Conference, 1976, pp. 13-34.
[17] M. Veerachary, M. Suresh, "Digital Voltage-mode Control of Higher Order Boost Converter," IEEE Conference Proc. on Power System Technology and IEEE Power India Conference, 2008, pp. 1-5.
[18] MATLAB Reference Manual, 2008.
[19] PSIM User Manual, 2005.
[20] dsPIC30F Family Reference Manual.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

A Voltage-Mode Analog Circuit for Solving Linear Programming Problems


Mohd. Samar Ansari Department of Electronics Engineering Aligarh Muslim University Aligarh, India
email: mdsamar@gmail.com

Syed Atiqur Rahman Department of Electronics Engineering Aligarh Muslim University Aligarh, India
email: syed.atiq.amu@gmail.com

Abstract: This paper presents a neural circuit for solving linear programming problems. The objective is to minimize a first-order cost function subject to linear constraints. The dynamic analog circuit, consisting of N identical units for an N-variable problem, can solve the general LPP and always converges to the optimal solution, irrespective of the initial conditions, in a constant time of the order of its time constant. The proposed circuit employs non-linear feedback, in the form of unipolar comparators, to introduce transcendental terms in the energy function, ensuring fast convergence to the solution. Further, the use of resistors to generate weighted inputs to the neurons is avoided; instead, transconducting elements are utilized to directly generate the required scaled currents. PSPICE simulation results are presented for a chosen optimization problem and are found to agree with the algebraic solution. Index Terms: Neural network applications, Neural network hardware, Nonlinear circuits, Linear Programming, Optimization

I. INTRODUCTION

Linear programming is a considerable field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution [1-6]. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. In an LP problem, a linear function is optimized subject to certain linear constraints. The general LP problem can be stated as:

Minimize $c^T v$ \qquad (1)
Subject to $A v \ge b$ \qquad (2)

where $v \in R^N$ is a column vector of decision variables, $c \in R^N$ is a column vector of cost coefficients, $b \in R^M$ is a column vector of constant bounds, $A \in R^{M \times N}$ is a constraint coefficient matrix, M ≤ N, and the superscript T denotes the transpose operator. The M constraints specified by (2) define the feasible region in the N-dimensional Euclidean space in which the cost function is required to be minimized [2]. Traditional methods for solving linear programming problems typically involve an iterative process, but long computational times limit their usage. There is an alternative approach to the solution of this problem: to exploit artificial neural networks (ANNs), which can be considered as analog computers relying on a highly simplified model of neurons [1]. ANNs have been applied to several classes of constrained optimization problems and have shown promise for solving such problems more effectively. For example, the Hopfield neural network has proven to be a powerful tool for solving some optimization problems. Tank and Hopfield first proposed a neural network for solving mathematical programming problems, where a linear programming problem was mapped into a closed-loop network [2]. Over the past two decades several neural-network architectures for solving linear programming problems have been proposed by Wang [3], Xia [4], Xia, Wang & Hung [5] and Malek and Yari [6]. Wang's realization of a neural network for solving linear programming problems utilizes 3n amplifiers, n²+3n resistances and n capacitors. The circuit of [4] requires 2m²+4mn amplifiers, 2m²+4mn+3m+3 summers, n+m integrators, and n simple limiters. The circuit of [5] consists of m²+2mn amplifiers, 3n summers, n integrators, and n simple limiters. Although no circuit implementation of the proposed scheme is given in [6], it is evident that it would not be simple to realize in hardware. Therefore, the search still continues for a working circuit to solve LPPs which is feasible to implement in integrated form and is fast enough for real-time applications where the time-to-solve needs to be of the order of tenths of milliseconds [5]. In this paper, a hardware solution to the problem of solving a linear programming problem subject to linear inequality constraints is presented. The proposed circuit employs only n amplifiers, m comparators and mn+m+n resistances. It is evident that the circuit complexity of the proposed scheme is much reduced as compared to existing ones [3-6]. The paper is organized as follows. Section II describes the proposed neural network to minimize a first-order polynomial in 2 variables subject to 2 linear constraints. Section III contains results of PSPICE simulation. Some concluding remarks appear in Section IV.
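For reference, the class of problem targeted by (1)-(2) can be cross-checked numerically in a few lines; the sketch below uses scipy's linprog with the cost coefficients of the example in Section III and an assumed constraint set (the constraints are not fully legible in the extracted text, so this is illustrative only).

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 2.0])           # cost coefficients, as in (20)
A_ub = np.array([[-1.0, -1.0]])    # -(v1 + v2) <= -1, i.e. v1 + v2 >= 1 (assumed)
b_ub = np.array([-1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)              # -> [0. 1.], objective 2.0
```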

II. PROPOSED NETWORK


Let the function to be minimized be

$\phi = c_1 V_1 + c_2 V_2$ \qquad (3)

Subject to

$a_{11} V_1 + a_{12} V_2 \ge b_1$ \qquad (4)
$a_{21} V_1 + a_{22} V_2 \ge b_2$ \qquad (5)

where V1 and V2 are the variables and the aij, bi and ci are constants.
The proposed neural-network based circuit to minimize (3) subject to (4, 5) is presented in fig. 1.



Where ui is the internal state of the i-th neuron and gcji is the voltage-to-current conversion factor of the j-th transconductance block for the i-th output current. As is shown later in this section, these weights (gcji) are governed by the coefficients of (4, 5). Moreover, it can be shown that the network of fig. 1 may be associated with an energy function of the form


$E = c_1 V_1 + c_2 V_2 + \dfrac{V_m}{2\beta}\big[\ln\cosh\beta(a_{11}V_1 + a_{12}V_2 - b_1) + \ln\cosh\beta(a_{21}V_1 + a_{22}V_2 - b_2)\big] + \dfrac{V_m}{2}\big[a_{11}V_1 + a_{12}V_2 + a_{21}V_1 + a_{22}V_2\big]$ \qquad (13)

From (11), it follows that


1

Fig. 1 The proposed neural network for solving a linear programming problem in 2 variables subject to 2 linear constraints

$\dfrac{\partial E}{\partial V_1} = c_1 + \dfrac{V_m}{2}a_{11}\big[1 + \tanh\beta(a_{11}V_1 + a_{12}V_2 - b_1)\big] + \dfrac{V_m}{2}a_{21}\big[1 + \tanh\beta(a_{21}V_1 + a_{22}V_2 - b_2)\big]$ \qquad (14)

As can be seen from Fig. 1, the individual equations from the set of constraints are passed through non-linear synapses, which are realized using voltage-mode unipolar comparators followed by multi-output transconducting cells. From Fig. 1, the unipolar comparator outputs can be modeled as

Also, if E is the Energy Function, it must satisfy the following condition [7].
1

= 1

(15)

1 = 2 =

2 2

11 1 + 12 2 + 1 21 1 + 22 2 + 1

(6) (7)

Where K is a constant of proportionality and has the dimensions of resistance. Comparing (9) and (14) according to (15) using (6) yields

11 =

11

where β is the open-loop gain of the comparator (practically very high), Vm denotes the output voltage level of the comparator, and V1 and V2 are the neuron outputs. The outputs of the comparators are fed to neurons having weighted inputs. The neurons are realized using opamps and the weights are implemented using transconductance amplifiers. The currents arriving at the neuron from the various synapses are summed at the input of the neuron. Rpi and Cpi are the input resistance and capacitance of the opamp corresponding to the i-th neuron. These parasitic components are included to model the dynamic nature of the opamp. The node equation for node A gives the equation of motion of the first neuron in the state space as

, 21 = 1 =

21

(16) (17)

The values of the transconductance factors (gji) of Fig. 1 can be easily calculated by choosing a suitable value of K and then using (16). Analysis on similar lines can be performed to obtain the values of the synaptic weights for the second neuron and are presented in (18) and (19) below.

12 =

12

, 22 = 2 =

22

(18) (19)

= 1 11 + 2 21 +
1 1

1 1

1 11 +
(8)

III. SIMULATION RESULTS

This section deals with the application of the proposed network to task of minimizing the objective function

On simplification, (3) yields

$\phi = 5V_1 + 2V_2$ \qquad (20)

subject to

= 1 11 + 2 21 +
1 1

1
1 1

1 1

(9)

1 + 2 1 2 1

(21)

Where

1 1

+ 11 + 21 +

(10)

Similarly, for node B we can write

The values of the resistances acting as the weights on the neurons are obtained from (17, 19). For the purpose of simulation, the value of K was chosen to be 1 kΩ. Using K = 1 kΩ in (16) through (19) gives

= 1 12 + 2 22 +
1 2

2 2

2
1
2

1 2

(11)

gc11 = gc21 = gc22 = 1/K = 1 mA/V;  gc12 = 0;  R1 = 1 kΩ, R2 = 1 kΩ \qquad (22)

Where

1 2

+ 12 + 22 +

(12)

For the purpose of PSPICE simulations, the voltage comparator and the multi-output transconductance amplifier for each neuron were realized as a single block using a slightly modified circuit of the differential-input, high-gain, active-loaded transconductance amplifier [8] shown in Fig. 2. Standard BSIM3 0.35 µm parameters were used for the simulations. The supply voltages were set to VDD = 15 V, VSS = 0 V and VBB = 1 V. The aspect ratios of the NMOS and PMOS transistors were taken to be 1.4 µm/1.4 µm and 2.8 µm/1.4 µm respectively. Further, to obtain the output currents according to (22), the W/L ratios of transistors M6, M7, M8 and M9 were set to provide the required current scaling.

IV. CONCLUSION

In this paper, a CMOS-compatible approach to solve a linear programming problem in 2 variables subject to 2 linear constraints, using 2 neurons and 2 synapses, is described. Each neuron requires one opamp and each synapse is implemented using a differential transconductance element. This results in a significant reduction in hardware over the existing schemes [3-6]. The proposed network was tested on a sample problem of minimizing a linear function in 2 variables and the simulation results confirm the validity of the approach.

REFERENCES
[1] J. J. Hopfield, D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybern., 52, 141-152, 1985.
[2] D. W. Tank and J. J. Hopfield, "Simple Neural Optimization Networks: An A/D Converter, Signal Decision Circuit, and A Linear Programming Circuit," IEEE Trans. Circuits and Systems, Vol. CAS-33, No. 5, pp. 533-541, May 1986.
[3] J. Wang, "Analysis and Design of a Recurrent Neural Network for Linear Programming," IEEE Trans. Circuits and Systems-I: Fundamental Theory and Applications, Vol. 40, No. 9, pp. 613-618, Sept. 1993.
[4] Y. Xia, "A New Neural Network for Solving Linear and Quadratic Programming Problems," IEEE Trans. Neural Networks, Vol. 7, No. 6, pp. 1544-1547, Nov. 1996.
[5] Y. Xia, J. Wang and D. L. Hung, "Recurrent Neural Networks for Solving Linear Inequalities and Equations," IEEE Trans. Circuits and Systems-I: Fundamental Theory and Applications, Vol. 46, No. 4, pp. 452-462, April 1999.
[6] A. Malek and A. Yari, "Primal-dual solution for the linear programming problems using neural networks," Applied Mathematics and Computation, 167, 198-211, 2005.
[7] S. A. Rahman, Jayadeva, S. C. Dutta Roy, "Neural network approach to graph colouring," Electronics Letters, 35, 1173-1175, 1999.
[8] M. S. Ansari and S. A. Rahman, "A non-linear neural circuit for solving system of simultaneous linear equations," Proc. IMPACT-2009, AMU, Aligarh, March 2009.

Fig. 2 CMOS implementation of the comparator and transconductance blocks.

Routine mathematical analysis of (20, 21) yields: V1 = 0, V2 = 1. The results of the PSPICE simulation are presented in Fig. 3. From the plots of the neuron output voltages, it can be seen that V(1) = 0 V and V(2) = 1 V, which correspond exactly to the algebraic solution, thereby confirming the validity of the approach.
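The reported convergence can also be reproduced with a crude numerical integration of the network's gradient dynamics (a sketch only, not the transistor-level circuit). The constraint set v1 + v2 ≥ 1, v2 ≤ 1 is assumed here because the constraint equations are not fully legible in the extracted text; it is, however, consistent with the reported optimum V1 = 0, V2 = 1. Vm, β and the step size are arbitrary illustrative values.

```python
import numpy as np

c = np.array([5.0, 2.0])                 # cost coefficients of (20)
A = np.array([[-1.0, -1.0],              # -(v1 + v2) <= -1  (assumed constraint)
              [ 0.0,  1.0]])             #        v2  <=  1  (assumed constraint)
b = np.array([-1.0, 1.0])

beta, Vm, dt = 50.0, 10.0, 1e-3          # comparator gain, output level, Euler step
v = np.array([0.5, 0.2])                 # arbitrary initial neuron outputs

for _ in range(20000):
    # unipolar comparator outputs: high only when a constraint is violated
    x = 0.5 * Vm * (1.0 + np.tanh(beta * (A @ v - b)))
    grad = c + A.T @ x                   # gradient of the penalized cost
    v = v - dt * grad                    # steepest-descent neuron dynamics

print(v)                                 # settles close to the optimum [0, 1]
```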


Fig. 3 Simulation results for the proposed circuit


Digital Deadbeat Controller For Fourth-Order Boost DC-DC Converter


Dept. of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi India 110016
AbstractIn this paper a digital dead-beat controller is proposed for a fourth-order boost dc-dc converter to achieve zero steady-state error and for tracking reference within two switching cycles. Digital deadbeat control law is established for the inductor-1 current (iL1) from the state-space averaging model using ZOH discretization technique. A digital PI controller is designed for outer voltage loop in-order to regulate the set point reference voltage. The closed loop performance in time domain and stability of the converter with (a) inner deadbeat control for inductor current-1, and (b) inner deadbeat control with outer PI controller is analyzed through computer simulations. Simulation results are then validated experimentally on a 28 V, 30 W laboratory prototype converter. It is observed that simulation results are in close agreement with experimental results. Keywords-component; dc-dc converter; small-signal model; digital deadbeat control
I.

Anmol Ratna Saxena

Dept. of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi India 110016
making some software modifications [4] thereby, making digital current mode control a viable option. In this paper a small-signal model, derived using state-space averaging technique [6], is used to design a digital deadbeat controller for inductor-1 current of the fourth order boost dc-dc converter. The source as well as load current of the converter is smooth due to the presence of inductors both at input and output. It finds application in space power supplies where constant input and load currents are desired in-order to facilitate the battery charging process from solar arrays. Also current and voltage dynamics should be tightly regulated to respond for transients. The deadbeat control for input inductor provides tight regulation of the input current for optimally utilizing the solar energy.
II.

Mummadi Veerachary

SMALL-SIGNAL MODEL FORMULATION FOR FOURTH ORDER BOOST CONVERTER

INTRODUCTION

Modern power supplies, demand fast and accurate dynamic regulation of load voltage so as to provide reliable power to sophisticated equipments/instruments such as space-crafts, computer processors, communication system, critical medical equipments, hybrid vehicles, electronic goods and gadgets etc.[1]-[3]. Although, analog controllers have effectively served the purpose for decades, they have many limitations which bring out the need for the implementation of advanced digital control techniques. Digital techniques have several advantages over their analog counterpart [1][2], also the availability of high performance, low cost DSPs/FPGAs has further paved the way for the implementation of high performance digital control laws in power supplies. One of the limitations of digital control is that it introduces some additional delay in the control loop, thereby deteriorating the overall performance [1], [3]. The closed loop performance of the power converters can be improved by implementing two-loop control strategies which are most commonly termed as multi-loop or current-mode control [4]. In current mode control, outer loop uses load voltage and the inner loop uses inductor current. Double loop control strategies respond faster even for source side disturbances as compared to single loop strategies. The basic concept used here is that current loop responds faster than voltage loop, so in-order to attain faster dynamic response inductor current must be tightly controlled such that it follows its reference value within short period of time, preferably few switching cycles. A control scheme where steady-state error is made zero in finite time or few switching cycle has gained much importance in power supplies and is termed as deadbeat control. Digital deadbeat control offers much faster dynamic response as compared to conventional current mode control and is yet another good way to optimize digital control performance. Deadbeat response means attaining zero steady-state error response in few switching cycles [5]. The control loop delay can be reduced with this strategy. Digital current mode control also overcomes the well known instability problem of analog current mode control (for d 0.5) merely by

Averaged models are not only very accurate in low frequency region but are also easy to formulate thereby making the overall design process simple. The continuous time small-signal model is discretized using approximation technique zoh and deadbeat controller is then directly designed in z-domain. A digital PI controller is designed for the outer voltage loop, to ensure load voltage regulation. Small-signal models have been extensively used for the control loop design of dc-dc converters [6], these models are accurate and easy to formulate. In this section, state-space averaging technique is used to develop small-signal models of the converter under consideration. Averaging approach results in continuous time, large signal, non-linear model, which are then perturbed and linearized around some operating point to obtain linear, time invariant and smallsignal models. The circuit diagram of the fourth order boost dc-dc converter is shown in Fig.1 and its parameters used in the design are given in Table-I. The time-domain analysis of the converter has been carried out to derive the steady-state gain of the converter, which is given by equation (1).

$\dfrac{V_o}{V_{in}} = \dfrac{1}{1-D}$ \qquad (1)

Fig.1. Circuit diagram of the fourth order boost converter.


$\dfrac{di_1}{dt} = \dfrac{1}{L_1}\big(v_g + d_2 v_{c1}\big), \qquad \dfrac{di_2}{dt} = \dfrac{1}{L_2}\big(v_g - d_1 v_{c1} - v_{c2}\big)$ \qquad (4)

$i_{L1} = I_{L1} + \hat{i}_{L1},\; v_{c1} = V_{c1} + \hat{v}_{c1},\; v_g = V_g + \hat{v}_g,\; d_1 = D_1 + \hat{d}_1,\; d_2 = D_2 + \hat{d}_2$ \qquad (5)

$\dfrac{d\hat{i}_1}{dt} = \dfrac{1}{L_1}\big(\hat{v}_g + D_2\hat{v}_{c1} + V_{c1}\hat{d}_2\big), \qquad \dfrac{d\hat{i}_2}{dt} = \dfrac{1}{L_2}\big(\hat{v}_g - D_1\hat{v}_{c1} - V_{c1}\hat{d}_1 - \hat{v}_{c2}\big)$ \qquad (6)
As the switching period is very small, we can consider that the source voltage and load voltage are constant throughout the period, thereby modifying the above equation to (7), which is used for the design of the digital deadbeat controller for the inductor-1 current of the converter.

(b) Fig.2. Equivalent circuit during (a) switch ON time (b) switch OFF time.

Dynamics of the dc-dc converters can be accurately modeled up to half of the switching frequency by applying state-space averaging (SSA) approach. For the derivation of state-space averaging model, we assume that converter is operating in continuous inductor current mode (CCM), where it exhibits only two modes of operation in one switching cycle depending on switch ON and OFF time. In each mode of operation circuit is assumed to be linear time invariant (LTI). The equivalent circuit of the converter under each mode of operation is shown in Fig.2. State equations for inductors-1&2 during mode-1 and mode-2 operation are given by equation (2) & (3), respectively. Energy storage elements are considered ideal during model formulation. MODE-I

Considering $\hat{v}_{c1} = \hat{v}_{c2} = \hat{v}_0 = 0$ and $\hat{v}_g = 0$,

$\dfrac{d\hat{i}_1}{dt} = \dfrac{1}{L_1}\big(\hat{d}\,V_{c1}\big), \qquad \dfrac{d\hat{i}_2}{dt} = -\dfrac{1}{L_2}\big(\hat{d}\,V_{c1}\big)$ \qquad (7)
III. DESIGN OF DIGITAL DEADBEAT CONTROLLER


A. Basics of Digital Deadbeat Control Theory
Deadbeat control law for any system is designed such that all the poles of the closed loop system are placed at origin [7]. Therefore the pulse transfer function of the closed-loop system to attain deadbeat response should be of the form given in (8) i.e. a finite polynomial in the terms of power of z-1.

$F(z) = \sum_{j=1}^{N} a_j z^{-j}$ \qquad (8)

$\dfrac{di_1}{dt} = \dfrac{v_g}{L_1}, \qquad \dfrac{di_2}{dt} = \dfrac{1}{L_2}\big(v_g - v_{c1} - v_{c2}\big)$ \qquad (2)

where the coefficients aj, j=1,2N, constitute the impulse response sequence of the system and N is the order of the overall system. For deadbeat response to step inputs these coefficients must satisfy

$\sum_{j=1}^{N} a_j = 1$, ensuring that the closed-loop system dc gain is unity [5].

MODE-II

$\dfrac{di_1}{dt} = \dfrac{1}{L_1}\big(v_g + v_{c1}\big), \qquad \dfrac{di_2}{dt} = \dfrac{1}{L_2}\big(v_g - v_{c2}\big)$ \qquad (3)

The state-equations (2) & (3) are now averaged considering d1+d2=1, thereby giving the averaged model (4). In-order to obtain the small-signal model, each and every variable in (4) is perturbed around steady-state operating point as per (5). Neglecting second and higher order terms, and retaining first order terms results in linear model which is given by (6).

Fig.3. Block diagram of the deadbeat current controlled converter.


TABLE-I. Design Parameters of the Laboratory Prototype Converter
Source Voltage: 12 V; Load Voltage: 28 V; Inductor L1: 148 µH; Load-side Inductor L2: 293 µH; Capacitor C1: 40 µF; Output Capacitor C2: 200 µF; Load: 26 Ω; Switching Frequency: 50 kHz

$F(z) = \dfrac{\hat{i}_{L1}(z)}{\hat{i}_{ref}(z)} = \dfrac{G_c(z)G_i(z)}{1+G_c(z)G_i(z)H(z)} = \sum_{j=1}^{N} a_j z^{-j}$ \qquad (13)

$\hat{i}_e(z) = \hat{i}_{ref}(z) - \hat{i}_{L1}(z) = \dfrac{1-F(z)}{1-z^{-1}}$ \qquad (14)

The general block diagram representation of the system for digital deadbeat control is shown in Fig.3 and its closed-loop transfer function is given by (9).

$G_c(z) = \dfrac{\hat{d}(z)}{\hat{i}_e(z)} = \dfrac{K\,z^{-(j-1)}}{\sum_{i=0}^{j-1} z^{-i}}$ \qquad (15)

$F(z) = \dfrac{G_c(z)G_i(z)}{1+G_c(z)G_i(z)H(z)}$ \qquad (9)

A general expression derived for the pulse transfer function of the digital deadbeat controller from eq. (8) & (9) with H (z) =1 is given by (10).

The deadbeat controller (15) forces the inductor current to follow the reference current in fixed number of switching cycles but does not regulate load voltage. Thus a digital PI controller of the form given in (16) is designed for the outer voltage loop in order to ensure load voltage regulation. Here a & b are the location of zero and pole respectively. This outer loop generates the reference current for inner deadbeat control.

$G_c(z) = \dfrac{1}{G_i(z)}\cdot\dfrac{F(z)}{1-F(z)} = \dfrac{1}{G_i(z)}\cdot\dfrac{1}{z^{j}-1}$ \qquad (10)

$G_v(z) = \dfrac{K(z-a)}{(z-1)(z-b)}$ \qquad (16)

IV. SIMULATION AND EXPERIMENTAL RESULTS


In this section, stability and performance of the digital deadbeat controller for the fourth order boost converter is analyzed and validated by means of computer simulation and experimental results. The closed-loop performance of the converter with digital deadbeat controller, designed for j=2 is simulated in time domain using circuit simulation software PSIM [8]. Digital deadbeat controller for 28 volts, 30 watt experimental prototype converter is realized by means of Analog Devices dSP (digital signal processor) ADMC-401 [9]. Deadbeat controller (15) needs some rearrangement and simplification before it can be realized / implemented in a dSP. The controller for j=2 is given by (17) which after rearrangement gives (18).

B. Design of Digital Deadbeat controller for inductor-1 current


Digital deadbeat controller for the fourth order boost converter can be designed for either of the two inductor currents as per requirement. However, here we proceed with the design of deadbeat controller design for inductor L1 current tracking. The first-order continuous-time control-to-inductor-1 current transfer function is obtained using eq. (7) and is given by (11), but a discrete-time transfer function is needed for the design of digital controller. Thus discretizing this continuous-time transfer function using zero order hold zoh approximating gives the discrete-time control-to-inductor1 current transfer function (12).

$G_i(s) = \dfrac{\hat{i}_1(s)}{\hat{d}(s)} = \dfrac{V_{c1}}{sL_1}$ \qquad (11)

$G_i(z) = \dfrac{\hat{i}_1(z)}{\hat{d}(z)} = \dfrac{1}{K}\cdot\dfrac{z^{-1}}{1-z^{-1}}$ \qquad (12)

$G_c(z) = \dfrac{\hat{d}(z)}{\hat{i}_e(z)} = \dfrac{K\,z^{-1}}{1+z^{-1}}$ \qquad (17)

$\hat{d}(n) = K\,\hat{i}_{error}(n-1) - \hat{d}(n-1)$ \qquad (18)

where $K = \dfrac{L_1}{V_{c1}T_s}$. Now, the digital deadbeat controller is designed

for the above plant based on the number of switching cycles it takes to attain the zero steady-state error. As the system cannot respond immediately for an input due to inherent delay of one switching cycle, it is not feasible to design deadbeat controller for j=1. For j=2 and j=3 deadbeat controller will make the steady-state error in inductor-1 current zero in two & three switching cycles respectively, which will thereafter follow the current command iref. The general expression for closed loop pulse transfer function in terms of j is given by (13). From block diagram shown in Fig.3 the error between the sampled inductor-1 current and reference current Iref for a step change in Iref is given by (14). On solving eq. (13) & (14), a general expression for discrete-time deadbeat controller in terms of the order of overall system is obtained, and is given by (15).

Substituting, following small-signal perturbations in (18) and simplifying gives the digital deadbeat control law (19) which can be directly implemented in a dSPs. Similarly the control law (20) is formulated for outer voltage loop using the controller transfer function (16).

$\hat{d}(n) = d(n) - D, \quad \hat{d}(n-1) = d(n-1) - D, \quad \hat{i}_{error}(n-1) = i_{ref}(n-1) - i_{L1}(n-1)$

$d(n) = K\,i_e(n-1) - d(n-1) + 2D, \quad \text{where } i_e = i_{ref} - i_{L1}$ \qquad (19)

$i_{ref}(n) = (1+b)\,i_{ref}(n-1) - b\,i_{ref}(n-2) + K\,v_e(n-1) - K\,a\,v_e(n-2)$ \qquad (20)
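Written out as difference equations, the inner deadbeat law (19) and the outer PI law (20) reduce to the two updates sketched below. The PI gain is written Kv here only to distinguish it from the deadbeat gain K; all numeric values are placeholders, not design values from the paper.

```python
def deadbeat_duty(K, D, ie_prev, d_prev):
    """Eq. (19): d(n) = K*ie(n-1) - d(n-1) + 2*D, with ie = iref - iL1."""
    return K * ie_prev - d_prev + 2.0 * D

def pi_current_reference(Kv, a, b, iref_1, iref_2, ve_1, ve_2):
    """Eq. (20): iref(n) from the two previous references and voltage errors."""
    return (1.0 + b) * iref_1 - b * iref_2 + Kv * ve_1 - Kv * a * ve_2

# one illustrative update with made-up numbers
iref_n = pi_current_reference(Kv=0.05, a=0.9, b=0.2,
                              iref_1=1.0, iref_2=1.0, ve_1=0.5, ve_2=0.4)
d_n = deadbeat_duty(K=0.2, D=0.571, ie_prev=0.1, d_prev=0.57)
print(iref_n, d_n)
```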

The constant K as given in (12) depends on the value of the inductance L1, which may not always be constant and may affect the system stability and its overall closed-loop performance. To assess



Fig.4. Closed loop system (a) pole-zero location (b) step response

the system stability against inductance variation, the actual inductance is expressed as a fraction α of the nominal inductance, such that Lactual = α·Lnominal. Fig. 4(a) shows the pole-zero locations of the overall closed-loop system for variation in α. It is observed that the complex poles start moving towards the unit circle for any deviation in α. Pole-zero cancellation between plant and controller is observed for α = 1. The system shows better stability for α > 1, but it tends to become marginally stable at α = 0.5 and unstable for α < 0.5. Fig. 4(b) shows the step response of the closed-loop system with the deadbeat controller, where it is seen that the step change takes place after a delay of 2 switching cycles for j = 2 and 3 switching cycles for j = 3. The same can be observed from the simulated time-response of the inductor-1 current with the deadbeat controller for j = 2 for a step change in the command current Iref = 1.0 → 1.6 A, shown in Fig. 5(a). The experimental results are shown in Fig. 5(b). From the above results it is observed that the steady-state error becomes zero after exactly 2 switching cycles in both simulation and experiment, and thereafter the current follows the command current. Fig. 6 shows the simulated dynamic response of the load voltage and inductor-1 current with inner deadbeat current control and outer PI control for a step change in load from R = 26 Ω → 18 Ω. It is seen that the inductor-1 current exactly follows the reference current and the load voltage reverts back to 28 V in about 2 ms.
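The inductance-sensitivity trend described above can be checked with a simplified calculation: using the ideal first-order plant (12) and the j = 2 controller (17), a mismatch Lactual = α·Lnominal scales the loop gain by 1/α, and the closed-loop characteristic equation reduces to z² + (1/α − 1) = 0. The sketch below evaluates the resulting pole radius; it is a simplified check consistent with the reported behaviour, not the full model used for Fig. 4.

```python
import numpy as np

for alpha in (2.0, 1.0, 0.8, 0.5, 0.4):
    poles = np.roots([1.0, 0.0, 1.0 / alpha - 1.0])   # z^2 + (1/alpha - 1) = 0
    radius = np.abs(poles).max()
    verdict = "stable" if radius < 1.0 else "marginal/unstable"
    print(f"alpha = {alpha:3.1f}: max |pole| = {radius:.3f}  ({verdict})")
```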

(b) Fig. 6. Simulated dynamic response of load voltage and inductor-1 current for a step change in load from R = 26 Ω → 10 Ω.

V. CONCLUSIONS

A generalized expression for digital deadbeat controller, based on the continuous-time small-signal model has been developed for both inductor currents of the fourth order boost dc-dc converter. It is then implemented to attain the deadbeat response for inductor-1 current which is then verified through computer simulation and experimental results. It is observed that steady-state error becomes zero after two switching cycles for which controller was designed. A digital PI controller is then designed for outer voltage loop in order to attain load voltage regulation. The two loop control performance i.e. with inner deadbeat control for inductor-1 current and PI control for outer voltage loop is simulated and experimentally verified. It is observed that a better dynamic response is attained as compared to normal average current mode control.

Fig. 5. Dynamic response of inductor-1 current with j = 2 for a step change in Iref = 1.0 → 1.6 A: (a) simulation results, (b) experimental results.

REFERENCES
[1] T. W. Martin, S. S. Ang, "Digital Control of Switching Converters," International Symposium on Industrial Electronics, 1995, pp. 480-484.
[2] Y. Duan, H. Jin, "Digital Controller Design for switch mode power converters," IEEE Applied Power Electronics Conference, 1999, Vol. 2, pp. 480-484.
[3] Yan-Fei Liu, P. C. Sen, "Digital Control of Switching Power Converters," 2005 IEEE Conference on Control Applications, Toronto, 2005, pp. 635-640.
[4] Raymond Ridley et al., "Analysis and Interpretation of Loop Gains of Multi-loop controlled Switching Regulators," IEEE Transactions on Power Electronics, Vol. 3, No. 4, Oct. 1988, pp. 489-498.
[5] Mummadi Veerachary, Gundu Satish, "First-order Pseudo Dead-Beat Current Controller for Buck Converter," 31st International Telecommunications Energy Conference, INTELEC 2009, pp. 1-6.
[6] Y. S. Jung, "Small-signal model-based design of digital current-mode control," IEE Proceedings - Electric Power Applications, 2005, Vol. 152(8), pp. 871-877.
[7] S. Bibian and H. Jin, "High performance predictive deadbeat digital controller for DC power supplies," IEEE Ann. Power Electron. Conf. Rec., 2001, pp. 67-73.
[8] PSIM User Guide, 2008.
[9] ADMC-401 User Manual, 2006.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Digital Deadbeat Current Controller Design for Coupled Inductor Boost Converter

Mummadi Veerachary
Dept. of Electrical Engineering Indian Institute of Technology Delhi Hauz Khas, New Delhi - 110016

Firehun. T
Dept. of Electrical Engineering Indian Institute of Technology Delhi Hauz Khas, New Delhi - 110016

Abstract: Analysis and deadbeat current control of the coupled inductor boost dc-dc converter are proposed in this paper. Mathematical models are formulated using the state-space technique, and a discrete compensator is then designed using deadbeat control theory. Salient features of the coupled inductor topology are compared with the conventional boost topology. The compensator design is validated, through simulations, when the converter is subjected to disturbances such as source and load fluctuations. Furthermore, the deadbeat controller effectiveness is investigated under inductance parameter variation. For demonstration, simulation and experimental results are presented for a 12 to 36 V, 30 W, 50 kHz converter system. These studies show that the proposed digital controller results in better dynamic response together with robust performance over a given range of parameter variation.
Keywords- Coupled inductor boost converter, deadbeat controller, state-space model.

I.

INTRODUCTION

Switch-mode power supplies (SMPS) are becoming an essential part of many electronic systems and the dc-dc converters, used in these power control applications, development is taking place in order to (i) improve performance, (ii) achieve better reliability and (iii) increasing the power density, etc. The aspect of increasing the power density is something related to converter design and packaging. Performance improvement of the converter topologies include: (i) steady-state performance, (ii) dynamic performance. Among these two the dynamic performance mainly depends on the nature of the controller used. Application of analog controllers and their design is well established. Recently, digital control application to SMPS is gaining momentum as they are more versatile than those with conventional integrated-circuit analog controllers. In general, the digital control offers several advantages, over the analog control [1]-[8], but few of them are as follows: (i) digital components are less susceptible to aging and environment variations, (ii) they are less sensitive to noise, (iii) changing a controller does not

require an alternation in the experiment, (iv) they provide improved sensitivity to parameter variations, etc. Conventional boost converter (CBC) topologies are well established for applications requiring higher load voltages. However, in the applications where the voltage requirement is higher, not possible to realize with single boost converter, then there are two possible solutions, which are: (i) cascading the boost converters, (ii) using basic boost converter with coupling among the inductors. Although it is possible to realize higher transformation ratios using first method, mentioned above, but more number of components are required for their realization and also results in lesser efficiency. Some times to realize the predefined transformation ratio the converter needs to be operated at the extreme duty ratios where-in the device utilization is poor. In order to eliminate some of these shortcomings coupled inductor boost converter (CIBC) topologies are coming up [9]-[11]. The coupled inductor has the benefits that the duty ratio of the converter at the operating point can be adjusted to a value at which its device utilization is improved. In this paper investigations on the design of digital deadbeat controller (DDC) for the coupled inductor boost converter is discussed and then its validity is demonstrated through simulation and experiments. II. MODELING OF COUPLED INDUCTOR BOOST CONVERTER

The proposed coupled inductor boost converter is shown in Fig. 1, wherein the switch is connected to the center-tap of the coupled inductor. If the tap is 100 % then this converter is identical to the conventional boost converter. By adjusting this tap position it is possible to realize higher voltage transformation ratios. This converter can be operated in several operating modes depending on the load, supply voltage and switching frequency. However, the continuous inductor current mode of operation (CICM) has several advantages and hence only this mode is analyzed in this paper. In the CICM case the circuit has two operating modes; Mode-1: S-ON (0<t<dT); Mode-2: S-OFF (dT<t<T). In each mode of operation the circuit is linear and its behavior can easily be described by the state-space model [7] given by


$\dot{x} = A_k x + B_k u; \qquad v_0 = C_k x$ \qquad (1)

where $x = [\,i_m \;\; v_c\,]^T$, $u = [V_g]$, $k = 1, 2$.

$A_1 = \begin{bmatrix} 0 & 0 \\ 0 & -\frac{1}{RC} \end{bmatrix}; \quad A_2 = \begin{bmatrix} 0 & -\frac{1}{L(1+N)} \\ \frac{1}{C} & -\frac{1}{RC} \end{bmatrix}; \quad B_1 = \begin{bmatrix} \frac{1}{L} \\ 0 \end{bmatrix}; \quad B_2 = \begin{bmatrix} \frac{1}{L(1+N)} \\ 0 \end{bmatrix}; \quad C_1 = C_2 = [\,0 \;\; 1\,]$

TABLE I. DISCRETE-TIME SMALL-SIGNAL MODEL FORMULATIONS
Audiosusceptibility: $\hat{v}_0(z)/\hat{v}_g(z) = E(zI-\Phi)^{-1}\Gamma + F$
Control-to-Output: $\hat{v}_0(z)/\hat{d}(z) = E(zI-\Phi)^{-1}\delta$
Input Admittance: $\hat{i}_{in}(z)/\hat{v}_g(z) = P\,(zI-\Phi)^{-1}\Gamma$
Output Impedance: $\hat{v}_0(z)/\hat{i}_0(z) = E(zI-\Phi)^{-1}\zeta + J$
Fig. 1. Digital deadbeat controlled coupled inductor boost converter.

III. DEADBEAT CONTROLLER DESIGN The two most common forms of control schemes used in dc-dc switching power converters (DCSPC) are: (i) single loop voltage mode control, (ii) current-mode control. It is very well known that the single loop voltage mode control slow in its response time. On the other hand, the inner current mode control together with outer voltage loop results in faster dynamic response. Furthermore, each of these control methods have its own advantages and limitations as well. In the power supplies context faster dynamic response requirement necessitates to go in for the current mode control and particularly, the digital current mode control (DCMC) offers many advantages over the analog current controllers. The Digital deadbeat controllers provide viable solution, where the faster dynamic response is the primary concern for the digitally controlled switchmode power supplies (DCSPCs). Deadbeat control has traditionally been one of the most stimulating subjects of discrete time control theory. By definition, deadbeat response means zero steady-state error response at the sampling instants after a specified finite settling time, irrespective of the response of the system between samples. The deadbeat controller ensures the quantity to be controlled reaching to the reference set point within a time period equal to the system time delay. As the digital control involves sampling the signal and then initiating the control effort, which normally takes place in digital domain at periodic time intervals, the system will take finite number of switching cycles to reach the reference quantity. Let us assume G(z):system open-loop z-transfer function, Gd(z): deadbeat controller, for the unity gain feedback system, as shown in Fig. 2, the closed-loop pulse transfer function becomes

Fig. 2. Block diagram of the current controlled converter.

A discrete-time model of the converter is obtained, from the known state-space models given above, by integration of the continuous-time small-signal model over a switching period [7] as:

$x\big((n+1)T_s\big) = \Phi\, x(nT_s) + \Gamma\, u$ \qquad (2)

where $\Phi = e^{A_2 d_2 T_s}\, e^{A_1 d_1 T_s}$ and $\Gamma = e^{A_2 d_2 T_s} A_1^{-1}\big(e^{A_1 d_1 T_s} - I\big)B_1 + A_2^{-1}\big(e^{A_2 d_2 T_s} - I\big)B_2$.
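Numerically, Φ and Γ of (2) can be evaluated directly from the mode matrices with matrix exponentials, as sketched below using the Table II parameter values. The operating duty ratio D is an assumed value, and an augmented-exponential helper is used because A1 is singular and cannot be inverted as in the closed-form expression.

```python
import numpy as np
from scipy.linalg import expm

L, C, R, N = 110e-6, 200e-6, 30.0, 1.0
Ts, D = 1.0 / 40e3, 0.6                  # 40 kHz switching; assumed duty ratio
d1, d2 = D, 1.0 - D

A1 = np.array([[0.0, 0.0], [0.0, -1.0 / (R * C)]])
A2 = np.array([[0.0, -1.0 / (L * (1 + N))], [1.0 / C, -1.0 / (R * C)]])
B1 = np.array([[1.0 / L], [0.0]])
B2 = np.array([[1.0 / (L * (1 + N))], [0.0]])

def zoh_input(A, B, T):
    """Integral of expm(A*tau) @ B over [0, T], via an augmented matrix exponential."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    return expm(M * T)[:n, n:]

E1, E2 = expm(A1 * d1 * Ts), expm(A2 * d2 * Ts)
Phi = E2 @ E1
Gamma = E2 @ zoh_input(A1, B1, d1 * Ts) + zoh_input(A2, B2, d2 * Ts)
print(np.round(Phi, 4))
print(Gamma)
```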

Various formulations for obtaining the small-signal discrete-time transfer functions are tabulated in Table I. These transfer functions are useful for obtaining system stability information and for designing a suitable controller. As indicated in Fig. 2, the loopgain provides a useful means to analyse the closed-loop system stability, wherein its characteristic equation contains the converter operating conditions and its parameters, the type of PWM strategy, and the controller coefficients.


$F(z) = \dfrac{C(z)}{R(z)} = \dfrac{G_d(z)G(z)}{1+G_d(z)G(z)}$ \qquad (3)

If the closed-loop system, F(z), is settling within a finite time with zero steady-state error, then such system must exhibit a finite impulse response, and it is represented mathematically as

$F(z) = \sum_{j=1}^{N} a_j z^{-j}$ \qquad (4)

To verify the theoretical analysis and the validity of the controller design, simulation studies have been made using the PSIM simulator with the parameter values listed in Table II. The steady-state voltage gain of the CIBC is (1 + D·N) times that of the CBC. For identical source and load voltages the CIBC needs to be driven at a lower duty ratio than the CBC, i.e. with lower duty ratios it is possible to realize higher step-up ratios. Further, the step-up ratio can also be modified by choosing a suitable value for the turns ratio N, as evidenced by Fig. 3.
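As a small numerical illustration of this comparison (using only the gain relation quoted above), the snippet below tabulates the conventional boost gain 1/(1 − D) against the CIBC gain (1 + D·N)/(1 − D) for N = 1.

```python
def gain_cbc(D):
    """Conventional boost converter steady-state gain."""
    return 1.0 / (1.0 - D)

def gain_cibc(D, N=1.0):
    """Coupled inductor boost gain: (1 + D*N) times the CBC gain."""
    return (1.0 + D * N) / (1.0 - D)

for D in (0.3, 0.5, 0.6, 0.7):
    print(f"D = {D:.1f}:  CBC = {gain_cbc(D):5.2f},  CIBC (N=1) = {gain_cibc(D):5.2f}")
```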

where the coefficients aj, j = 1, 2, ..., N, constitute the impulse response sequence of the system and N is the order of the overall system. For deadbeat response to step inputs these coefficients must satisfy

$\sum_{j=1}^{N} a_j = 1$. The physical significance of this condition is that the closed-loop system dc gain must be unity [9]. Re-arranging eqn. (3) gives the control law Gd(z) as

$G_d(z) = \dfrac{1}{G(z)}\cdot\dfrac{F(z)}{1-F(z)}$ \qquad (5)

The closed-loop controlled CIBC is defined by eqn. (6):

$G_{Cl}(z) = \dfrac{G_c(z)\,G_{ZOH}\,G_p(z)}{1+G_c(z)\,G_{ZOH}\,G_p(z)}$ \qquad (6)

Fig. 3. Variation of steady-state voltage gain with duty ratio.

Here m is the system delay, and it must be an integer multiple of the switching period (Ts) of the converter. Since the digital current control involves sampling the current signal and then applying the new duty ratio to minimize the error signal, which requires at least one PWM cycle, m ≥ 1. The CIBC has the closed-loop z-transfer function given by eqn. (6). If the closed loop should follow the deadbeat response, GCl(z) = GDBTR(z), then a suitable controller can be designed, and it can easily be established that the digital deadbeat duty-ratio control law, applied at the nth switching period, is obtained as

$d(n) = 2D + K\,[\,i_{ref}(n-1) - i(n-1)\,] - d(n-1)$ \qquad (7)

The DDC reference tracking capability is tested by changing the inductor current reference from 2 → 4 A, and the corresponding inductor current dynamic response is plotted in Fig. 4. It is clear that the DDC tracks the inductor current within two switching cycles, which is in accordance with the initial design parameter m = 2. The duty ratio variation during this reference tracking is shown in Fig. 5. It is clear from this figure that the DDC applies the required control effort, in this case the duty ratio, at the beginning of the disturbance itself and in the subsequent cycles it reaches the steady-state value. The current-control effectiveness with the deadbeat-type controller has been analyzed extensively in MATLAB. The current-control closed-loop bode and pole-zero plots are shown in Fig. 6. There are four poles and two zeros, of which two poles are on the unit circle and are cancelled by the zeros. The remaining two poles are located in the neighborhood of the origin of the unit circle. As the proposed deadbeat control law depends on the converter parameters, here the inductance, any change in them will affect the control performance. To verify this, the inductance is varied from 90 to 50 µH and the pole-zero movement is plotted in Fig. 7. This plot clearly indicates that the poles located in the neighborhood of the origin move away from it, and if the inductance variation exceeds a threshold value these poles will cross the unit circle and the current control-loop will become unstable. In order to prevent this instability, one has to check the inductance variation for the given source voltage and load power demand, and the inductance must be designed such that under worst-case conditions its variation stays within the permissible limits, in this case L1 > 65 µH.

where K = L1(V0+NV)/(fs*V0+N).

IV. DISCUSSION OF SIMULATION AND EXPERIMENTAL RESULTS

To demonstrate the proposed converter salient features and its controlling capability a 30 W 12 to 36 V converter is considered here. State-space models, derived in Section II, have been used for obtaining the z-domain discrete transfer functions and then used in MATLAB[12], for the converter parameter given in Table II, for the CIBC performance analysis.


(b) Pole-zero movement of Gcl(z) Fig. 6. Closed-loop bode plot/ PZ map with constant inductance.
Fig. 4. Simulated dynamic response of the inductor current with DDC (constant inductance: L = 90 µH; m = 2).

Fig. 7. Pole-zero movement of Gcl(z) with decrease in inductance.

Fig. 5. Simulated duty ratio variation under deadbeat control.



(a) Steady-state waveforms


(a) Frequency response bode plots of Gid(z), Gd(z) and loopgain.

TABLE II. CONVERTER PARAMETERS
L1 = 110 µH; L2 = 90 µH; C = 200 µF; R = 30 Ω; Fs = 40 kHz; Vg = 12 V; N = 1

(b) Experimental dynamic response of the inductor current with DDC Fig. 8. Experimental waveforms of the CIBC.

In order to verify the theoretical analysis and simulations an experimental prototype CIBC was fabricated and experimental measurements were recorded. The steady-state performance indicating measurements are provided in Fig. 8a, while Fig. 8b shows the dynamic response of the inductor current with deadbeat control strategy.



CONCLUSIONS

A digital deadbeat controller for the CIBC was developed and tested. Mathematical models were developed using the state-space method and a digital controller was then designed to yield a stable current-control loop. Theoretical analysis of the proposed deadbeat controller's effectiveness under parameter variation has been carried out. These investigations revealed that the proposed digital deadbeat controller ensures a stable current control-loop as long as the inductance variation is within limits. The designed compensator's validity was verified through PSIM simulations as well as experimental results.

REFERENCES
[1] A. J. Forsyth, S. C. Mollov, "Modelling and control of dc-dc converters," IEE Power Engineering Journal, 2005, Vol. 12(5), pp. 783-792.
[2] M. Veerachary, "Two-loop voltage-mode control of coupled inductor step-down buck converter," Proc. of IEE Electric Power Applications, 2005, Vol. 152(6), pp. 1516-1524.
[3] M. Veerachary, T. Narasa Reddy, "Voltage-mode control of hybrid switched-capacitor converters," IEEE Conference on IECON, 2006, pp. 2450-2453.
[4] M. Veerachary, S. Balasudhakar, "Peak current-mode control of hybrid switched-capacitor converter," IEEE Conference on PEDES, 2006, pp. 1-6.
[5] M. Veerachary, Deepen Sharma, "Adaptive hysteretic control of 3rd order buck converter," IEEE Conference on PEDES, 2006, pp. 1-6.
[6] Mummadi Veerachary, Gundu Satish, "First-order Pseudo Dead-Beat Current Controller for Buck Converter," 31st International Telecommunications Energy Conference, INTELEC 2009, pp. 1-6.
[7] A. R. Oliva, S. S. Ang, G. E. Bortolotto, "Digital control of a voltage-mode synchronous buck converter," IEEE Trans. on Power Electronics, Vol. 21(1), pp. 157-163.
[8] Liping Guo, John Y. Hung and R. M. Nelms, "Digital Controller Design for Buck and Boost Converters Using Root Locus Techniques," IEEE Trans. on Power Electronics, 2003, Vol. 2, pp. 1864-1869.
[9] Rong-Jong Wai and Rou-Yong Duan, "High Step-Up Converter With Coupled-Inductor," IEEE Trans. on Power Electronics, 2005, Vol. 20(5), pp. 1025-1035.
[10] P. D. Evans, W. M. Chew, "Reduction of proximity losses in coupled inductors," IEE Proceedings-B, 1991, Vol. 138(2), pp. 51-58.
[11] Suman Dwari, Saurabh Jayawant, Troy Beechner, Stephanie K. Miller, Anu Mathew, Min Chen, Jonathan Riehl, and Jian Sun, "Dynamic Characterization of Coupled-Inductor Boost DC-DC Converters," IEEE Proceedings on COMPEL, 2006, pp. 264-269.
[12] MATLAB user manual, 2004.
[13] PSIM user manual, 2006.
[14] ADMC401 family reference manual.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Analytical Analysis of SVC Model

Rashmi Jain Department of Electrical Engg YMCA University of Science & Technology Faridabad, India
rashmiagarwal70@yahoo.co.in

C P Gupta Department of Electrical Engg Indian Institute of Technology Roorkee, India


cpg_umist@yahoo.co.in

Majid Jamil Department of Electrical Engg Jamia Millia Islamia University New Delhi, India
majidjamil@hotmail.com
Abstract: This paper presents a linear state-space model of a Static Var Compensator (SVC). The model consists of three subsystem models: an ac system, an SVC model and a controller model, each of which is stable; however, the total system is unstable. The modelling approach is based on Park's transformation, which enables convenient coupling between the three-phase main circuit side of the SVC and the control circuit. There are cases where the subsystem eigenvalues are all stable while the total system eigenvalues show instability. Using sensitivities or participation factors from the subdivided system, the critical eigenvalues in each subsystem which cause the instability are identified. One interesting result is that the eigenvalues that interact are not the eigenvalues that would have been expected from the classical understanding of this problem. The model is analysed and verified using MATLAB. The proposed study may be helpful in modelling different FACTS devices using the SVC unit. Key words: SVC, PLL, eigenvalue and matrix partitioning.

A.S. Siddiqui Department of Electrical Engg Jamia Millia Islamia University New Delhi, India
anshi@yahoo.co.in
heavily on the analysis of the linearized system [1]. Stability studies for power system planning, operation and control rely immensely on computer based power system simulation tools. Simulation tools use mathematical models that predict the dynamic performance of the system.[2] It is crucial that these power system models be modelled accurately to predict the actual performance of the system. that these power system models be modelled accurately Small signal stability is the ability of the system to maintain synchronism under disturbances which occur continually on the system due to variation in load and generation in the system. Linear analysis is a powerful technique to study whether a power system is stable or not. The eigenvalues obtained from linear analysis give a complete picture of the stability of the system [3]-[5]. The dynamic characteristics of a power system under small disturbances are usually studied using eigenvalues and eigenvectors.[1] When the system is very large, this method becomes difficult both for computing and interpreting the results. The purpose of this paper is to present the application of matrix partitioning in Eigen value analysis to system and individually modelling them with d-q transformation and matrix coupling them. Facts Devices like SVC, TCSC, UPFC are rapidly becoming a part of modern Power transmission system. The new thyristor based technology has enabled the speed of response and controllability character of these elements to be far superior to any of conventionally used voltage controlling power system elements [7]. In this study, we attempt to derive a suitable linear continuous SVC model in state-space form. The model should have reasonable accuracy for the dynamic studies in the sub-synchronous range and it should incorporate common control elements including phase locked loop (PLL). The modelling approach is based on Parks Transformation [8] that enables convenient coupling between the three phase main circuit side of SVC and control circuit. The model in this form has advantage by using more complex AC system or advanced controller like fuzzy controller.

I . INTRODUCTION

Power systems are geographically wide distributed, capital intensive big complex systems. It is thus of high risk and often difficult to conduct experiments on such complex systems. With the increase in demand of electricity power systems engineers are forced to operate these systems at their limits with very narrow stability margins which often require the installation of special stabilizing controls whose design rely

253

II. MODEL USED

The test system for the study is a long transmission line compensated by an SVC and connected to a firm voltage source on each side. Fig. 1 shows a single-line diagram of the test system, where the transmission line is represented by a lumped resistance and inductance in accordance with the usual approach for sub-synchronous resonance studies. Each phase of the SVC is composed of a fixed capacitor in parallel with a thyristor-controlled reactor (TCR). The SVC is controlled by varying the phase delay of the thyristor firing pulses, synchronized through a PLL to the line current waveform. The controller is of PI type with a feedback filter and a series compensator. The modelling approach adopted here uses two independently developed models: a main circuit model (in the rotating coordinate frame) and a general control circuit model (in the static coordinate frame). These models are joined together using Park's transformation. The dynamic model of the main circuit represents all network dynamic elements, including the SVC inherent dynamics. The control circuit model comprises the voltage controller and a detailed phase-locked loop (PLL) model. The proportional and integral gains are changed to make the system stable.

Figure 1: Test system model

III. THE DIAGONAL SUBSYSTEM FORM

To study the stability of a power system under small disturbances, the system equations are linearized around an operating point and the eigenvalues of the resulting state matrix are found. This set of linear equations can be written as z' = Gz, where G is the state matrix of the system. The vector z is made up of the state variables of the system: the various voltages and currents in the transmission network, the thyristor firing angle, and the PLL and PI controller outputs. G is partitioned in the following way:

G = [ G11  G12 ;  G21  G22 ]                                   (1)

where each diagonal block Gii represents the model of a separate area (or subsystem) which is part of the larger system, and the off-diagonal blocks Gij, i != j, represent the coupling between the subsystems. Using the method described in [1], this matrix can be transformed into

A = [ λ1   0    0    a14  a15  a16 ;
      0    λ2   0    a24  a25  a26 ;
      0    0    λ3   a34  a35  a36 ;
      a41  a42  a43  λ4   0    0  ;
      a51  a52  a53  0    λ5   0  ;
      a61  a62  a63  0    0    λ6 ]                            (2)

A has the same eigenvalues as G because a similarity transform was used to obtain it. The λ's on the diagonal of A are the eigenvalues of the diagonal blocks Gii and thus of the subsystems. The relationship between these subsystem eigenvalues and the eigenvalues of the total system (the eigenvalues of G, or equivalently of A) is examined in the analysis below.

IV. ANALYTICAL MODEL

A. Model structure

The test system model platform is developed in the manner described in [6]; a brief overview is given here. The system is divided into three subsystems: an AC subsystem, an SVC model and a controller model. Each subsystem is modelled as a standalone state-space model, linked with the remaining two subsystems and with the outside signals.

B. AC system model

A state transformation produces another state-space representation of the same system. If the system is linear, the new state-space representation is also linear, and the state transformation is a linear transformation in which the original state vector is premultiplied by a constant transformation matrix, yielding a new state vector. For any phase of the study system, the states are chosen to be the instantaneous values of the currents in the inductors and the voltages across the capacitors.

A single-phase dynamic model is developed first, using the instantaneous circuit variables as the states. The phase model, having one input link, one output link and one D matrix, is

x'aca = Aaca xaca + Bacaco uacaco                              (3)
yacaco = Cacaco xaca + Dacaco uacaco                           (4)

where the subscript a denotes the phase of the system. Using the single-phase model and assuming ideal system symmetry, a complete three-phase model in the rotating coordinate frame is readily created. To enable dynamic analysis over a wider frequency range and coupling with the static coordinate frame, this model is converted to the static d-q frame using Park's transformation and state transformations.

The main circuit dynamic equations for the AC system in the s-domain are written as:

sL1 iL1 = v1 - R1 iL1 - R2 (R3 iL1 + v) / (R2 + R3)            (5)
sC1 v   = (R2 iL1 - v) / (R2 + R3) - itcr - iL2                (6)
sL2 iL2 = v - R4 iL2                                           (7)

The test system therefore uses a third-order model with iL1, iL2 and v as the states, as shown in the above equations. In matrix notation, the AC system model can be written in the form

[ x'1 ]   [ -R1/L1 - R3 R2/((R3+R2) L1)   -R2/(L1 (R2+R3))    0      ] [ x1 ]
[ x'2 ] = [  1/C1 - R3/(C1 (R2+R3))       -1/((R2+R3) C1)    -1/C1   ] [ x2 ]      (8)
[ x'3 ]   [  0                             1/L2               -R4/L2 ] [ x3 ]

C. SVC model

With reference to Figure 2, the SVC model can be represented in the state-space domain as follows:

s it   = (1/Lt) v1 - (1/Lt) v2                                 (9)
s v2   = (1/CSVC) it - (1/CSVC) itcr - (1/(CSVC Rcp)) v2       (10)
s itcr = (1/Ltcr(α)) v2                                        (11)

where Lt is the transformer reactance, Ltcr is the equivalent TCR reactance and CSVC and Rcp are the TSC parameters.

Figure 2: SVC electrical circuit model

Equation (11) is non-linear, since the TCR reactance depends on the firing angle α obtained from the controller model. This equation cannot be directly linearised, because the SVC model is developed in the AC frame with oscillating variables (i.e. v2 = V2 cos(ωt + θ)), whereas the firing angle signal is derived in the controller reference frame (i.e. as a non-oscillating signal). To link the SVC model with the controller model, the approach of an artificial rotating susceptance is adopted. It is first presumed that the AC terminal voltage has the following value in the steady state: v2o = V2o cos(ωt + θo), where the superscript o denotes the steady-state variable, i.e. V2o is a constant magnitude, θo is a constant angle and v2o is a rotating vector of constant magnitude and angle. The susceptance value in the steady state is 1/Ltcr,o. Assuming small perturbations around the steady state, the terms are represented as:

v2 = v2o + Δv2                                                 (12)
1/Ltcr = 1/Ltcr,o + Δ(1/Ltcr)                                  (13)
Ltcr = π L / ((2π - 2α) - sin(2π - 2α))                        (14)

Small perturbations are justified assuming an effective voltage control at the nominal value. Multiplying the terms in (12) and (13), substituting them in (11) and neglecting the small terms, equation (11) is written as:

s Δitcr = v2o KSVC Δα + Δv2 / Ltcr,o,   where KSVC = Δ(1/Ltcr)/Δα        (15)

The first term on the right side of (15) is an artificial oscillating variable (susceptance) that has a varying magnitude and a constant angle equal to the voltage nominal angle. In this way, the SVC model (9), (10), (15) has all oscillating variables, which are converted to d-q variables. Subsequently, using the d-q components of the inputs and outputs, this model is linked with the other model units [10].

D. Controller model

The control system model includes a PI controller, a second-order feedback filter and a small-signal PLL model. This subsystem comprises the PLL model, the TCR voltage controller and the interaction equations.

PLL model: when the PLL is set for fast tracking of the system dynamic changes, the PLL output closely follows the AC voltage angle changes, and the actual firing angle closely follows the firing angle ordered from the controller. The PLL model consists of the following units: the vector transformer model, the PLL controller model and the VCO model. Considering the dynamics of the vector transformer, the PLL controller and the VCO, the PLL model is derived in the state-space domain as:

s X1 = kIPLLtc X2 + kIPLLtc Δθ
s X2 = -kIsPLLtc kpPLLtc X2 + kIsPLLtc X1 + kIsPLLtc kpPLLtc Δθ          (16)

where kIPLLtc and kpPLLtc are the PLL controller integral and proportional constant gains.

TCR voltage controller:

s X3 = kItc (Vref - X5)                                        (17)
Tf s X4 = X2 + X3 - X4 + kptc Vref - kptc V                    (18)

where kItc and kptc are the integral and proportional gain constants of the SVC voltage controller and Tf = 1/6000.

Voltage transducer dynamics:

Tf s X5 = -X5 + V                                              (19)

or, in matrix notation,

s X = Aco X + Bcac uac + Bcinp uinp,     Ycac = Ccac X          (20)

where Bcac is the controller input matrix for coupling with the AC system, Ccac is the controller output matrix for coupling with the AC system, and the model states are X = [x1 x2 x3 x4 x5]^T, with x1 = kIPLLtc(.)/s the PLL integrator state and x3 = kItc(Vref - X5)/s the voltage-controller integrator state; V denotes the measured AC voltage obtained after the feedback filter. The inputs of the above model are

ucac = [ΔV  Δθ]^T,     uinp = ΔVref                            (21)

where ΔV is the AC voltage magnitude, Δθ is the AC voltage angle and ΔVref is the reference value for the AC voltage; the model output is the actual firing angle:

Ycac = X4 = Δα                                                 (22)

E. Final model

The above three models are linked to form a single system model in state-space form:

x's = As xs + Bs uout
yout = Cs xs + Ds uout                                         (23)

where the subscript s labels the overall system. The matrix As has the subsystem matrices on the main diagonal, with the other submatrices representing the interactions between the subsystems; the model matrices are

As = [ Aco           Bcotc Ctcco    Bcoac Cacco ;
       Btcco Ccotc   Atc            Btcac Cactc ;
       Bacco Ccoac   Bactc Ctcac    Aac         ]

Bs = [ Bcoout ; Btcout ; Bacout ],   Cs = [ Ccoout  Ctcout  Cacout ],   Ds = 0        (24)

All the subsystem D matrices are assumed to be zero.

V. EIGENVALUE ANALYSIS

Stability problems, such as inter-area oscillations, have become increasingly common in large interconnected power systems. Eigenvalue and modal analysis provides an extension of the analytical methods used to examine these oscillations. Eigenvalue and modal analysis describe the small-signal behaviour of the system, i.e. the behaviour linearized around one operating point, and do not take into account the non-linear behaviour of, for instance, controllers during large perturbations. Therefore, time-domain simulation and modal analysis in the frequency domain complement each other in the analysis of power systems. Eigenvalue analysis investigates the dynamic behaviour of a power system at its different characteristic frequencies (modes). In a power system it is required that all modes be stable; moreover, it is desired that all electromechanical oscillations be damped out as quickly as possible. The results of an eigenvalue analysis are given as a frequency and a relative damping for each oscillatory mode. Modal analysis allows a much deeper examination by interpreting not only the eigenvalues but also the eigenvectors of the system: the right eigenvector gives information about the observability of an oscillation, the left eigenvector gives information about its controllability, and the combination of the right and left eigenvectors (the residues) indicates the best location for controllers.

One problem in using the right and left eigenvectors individually for identifying the relationship between the states and the modes is that the elements of the eigenvectors depend on the units and scales associated with the state variables. As a solution to this problem, a matrix of participation factors, which combines the right and left eigenvectors [1], is used as the measure of the association between state variables and modes:

pi = [ p1i ; p2i ; ... ; pni ] = [ φ1i ψi1 ; φ2i ψi2 ; ... ; φni ψin ]

The element pki = φki ψik is termed the participation factor. It is a measure of the relative participation of the k-th state variable in the i-th mode, and vice versa.

VI. RESULT ANALYSIS

The example to be considered is the system given in Figure 1. The model was implemented in MATLAB and its eigenvalues and participation factors were calculated. Selected eigenvalues of the subsystems and of the total system are shown in Table 1 and Table 2 respectively. In both tables the eigenvalues are listed in order of their real part, and their position in the total list is shown as the row number in the first column. All of the subsystem eigenvalues have negative real parts, and thus all subsystems would be considered stable by themselves. For the total system, eigenvalues with positive real parts appear, showing that the total system is unstable.
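To make the partitioning and participation-factor machinery above concrete, the following MATLAB fragment is a minimal sketch (not the authors' code) of how a block-partitioned state matrix can be assembled from subsystem matrices and how participation factors are obtained from the right and left eigenvectors. The subsystem blocks Aco, Atc, Aac and the coupling blocks below are placeholder values used purely for illustration.

% Minimal illustration (not the paper's actual data): assemble a block
% state matrix from three subsystem models and compute participation factors.
n1 = 2; n2 = 2; n3 = 3;                       % assumed subsystem orders
Aco = -diag([25 50]);                         % placeholder controller block
Atc = [-1 -100; 100 -1];                      % placeholder SVC block
Aac = [-10 -377 0; 377 -10 -5; 0 5 -20];      % placeholder AC block
C12 = 0.1*randn(n1,n2); C13 = 0.1*randn(n1,n3);
C21 = 0.1*randn(n2,n1); C23 = 0.1*randn(n2,n3);
C31 = 0.1*randn(n3,n1); C32 = 0.1*randn(n3,n2);
As  = [Aco C12 C13; C21 Atc C23; C31 C32 Aac];   % coupled system matrix (cf. eq. (24))

% Right and left eigenvectors: W'*As = D*W' in MATLAB's convention.
[V, D, W] = eig(As);
lambda = diag(D);

% Participation factor p(k,i) = phi(k,i)*psi(i,k), normalised per mode.
P = V .* conj(W);
P = P ./ sum(P, 1);

% The state with the largest participation in the least-damped mode
% points to the subsystem responsible for that mode.
[~, iCrit] = max(real(lambda));
[~, kCrit] = max(abs(P(:, iCrit)));
fprintf('Critical eigenvalue %.3f%+.3fj, dominant state index %d\n', ...
        real(lambda(iCrit)), imag(lambda(iCrit)), kCrit);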

Table 1: Eigenvalues of the subsystems

Row No   AC system     SVC model     Controller
1        -3.76e+002    -5.13e-017    -2.50e+001
2        -3.76e+002    -3.48e-016    -2.50e+001
3        -3.76e+002    -3.48e-016    -2.50e+001
4        -2.26e+003    -2.19e-003    -5.00e+001
5        -2.26e+003    -2.19e-003    -5.97e+003
6        -2.26e+003    -2.19e-003    -5.97e+003
7        -2.26e+003    -2.19e-003    -6.00e+003
8        -2.26e+003    -2.19e-003    -6.00e+003
9        -2.26e+003    -2.19e-003    -6.00e+003

Table 2: Eigenvalues of the unstable and the stable system

         Unstable system              Stable system
Row No   Eigenvalue    Freq. (rad/s)  Eigenvalue    Freq. (rad/s)
1        7.34e+004     7.34e+004      -1.70e+003    7.52e+004
2        -7.68e+004    7.68e+004      -1.70e+003    7.52e+004
3        -2.23e+003    3.14e+003      -2.23e+003    3.14e+003
4        -2.23e+003    3.14e+003      -2.23e+003    3.14e+003
5        -2.26e+003    2.86e+003      -2.26e+003    2.86e+003
6        -2.26e+003    2.86e+003      -2.26e+003    2.86e+003
7        -1.51e+003    1.51e+003      -1.51e+003    1.51e+003
8        -2.51e+002    3.81e+002      -2.51e+002    3.81e+002
9        -2.51e+002    3.81e+002      -2.51e+002    3.81e+002
10       -3.01e+002    3.01e+002      -3.01e+002    3.01e+002
11       -2.50e+001    3.16e+001      -6.00e+003    6.00e+003
12       -2.50e+001    3.16e+001      -6.00e+003    6.00e+003
13       -6.00e+003    6.00e+003      -6.00e+003    6.00e+003
14       -6.00e+003    6.00e+003      -6.00e+003    6.00e+003
15       -6.00e+003    6.00e+003      -2.50e+001    3.16e+001
16       -6.00e+003    6.00e+003      -2.50e+001    3.16e+001
17       2.40e-005     1.25e+000      -4.86e-003    1.28e+000
18       2.40e-005     1.25e+000      -4.86e-003    1.28e+000
19       -2.02e-003    4.50e+001      -2.02e-003    4.49e+001
20       -2.02e-003    4.50e+001      -2.02e-003    4.49e+001
21       -2.02e-003    4.50e+001      -2.02e-003    4.49e+001
22       -2.02e-003    4.50e+001      -2.02e-003    4.49e+001
23       -2.50e+001    3.16e+001      -1.91e-003    1.91e-003
24       -2.50e+001    3.16e+001      -7.50e-004    7.50e-004
25       -2.50e+001    3.16e+001      -2.00e-010    2.00e-010
26       -2.50e+001    3.16e+001      -2.50e+001    3.16e+001
27       2.02e-003     2.02e-003      -2.50e+001    3.16e+001
28       1.62e-012     1.62e-012      -2.50e+001    3.16e+001
29       -2.12e-012    2.12e-012      -2.50e+001    3.16e+001
30       0.00e+000     0.00e+000      0.00e+000     0.00e+000
31       -6.00e+003    6.00e+003      -6.00e+003    6.00e+003
32       0.00e+000     0.00e+000      0.00e+000     0.00e+000
33       -6.00e+003    6.00e+003      -6.00e+003    6.00e+003

Figure 3 Pole zero plot of unstable and stable system

All of the subsystem eigenvalues have negative real parts, and thus all subsystems would be considered stable by themselves. For the total system, some eigenvalue pairs have positive real parts, showing that the total system is unstable. From the eigenvalues alone it is not possible to decide which parameters have to be changed to make the whole system stable. The usual approach is to compare frequencies, the expectation being that eigenvalues of the same frequency interact and lead to instability; a common explanation of subsynchronous resonance is that such frequencies interact to cause the instability. This is not so, as is shown in the remainder of this paper.

a. From participation factors

The larger elements of the participation factors for the selected eigenvalue 7.34e+004 are shown in Table 3. From Table 3 it is found that the participation factor of row 20 has the maximum value; this row corresponds to SVC model parameters. Taking Xt = 0.18 p.u. instead of 0.17 p.u., stability is again achieved, and the plot shows all the eigenvalues moving towards the negative side (Fig. 3).

Table 3: Participation factors of the unstable eigenvalue 7.34e+004

Row No   Participation factor
16       4.01e-6
20       0.5118
30       0.1629
31       9.81e-9

b. From physical parameter variation

The participation factors discussed above show the influence of each subsystem eigenvalue on the system eigenvalues. While the subsystem eigenvalues characterize the subsystems to which they belong, they cannot be changed independently within the subsystems. Several changes of the system parameters were therefore tried to see how they influence the subsystem and system eigenvalues; one case is discussed below.

In this test, if the load impedance, represented by L2 in Fig. 1, changes its value from 0.2 H to 2 H, the system parameters have to be changed to regain stability. From the participation factors (Table 4) it is found that, for the eigenvalue 4.35e-013, rows 6 and 7 show the maximum participation, and these rows correspond to PLL parameters. Changing Kp (the PLL proportional gain) to 25 and Ki (the PLL integral gain) to 300 makes the system stable again, as shown by the negative eigenvalues.

Table 4: Stable and unstable eigenvalues, with the participation factors of the eigenvalue 4.35e-013

Row No   Unstable eigenvalue   Stable eigenvalue   Participation factor
1        -1.70e+003            -1.70e+003          1.27e-18
2        -1.70e+003            -1.70e+003          1.2e-18
3        -1.76e+003            -1.76e+003          1.72e-29
4        -1.76e+003            -1.76e+003          1.05e-21
5        -1.55e+003            -1.55e+003          6.04e-34
6        -1.55e+003            -1.55e+003          0.8762
7        -6.00e+003            1.00e+000           0.8762
8        -6.00e+003            -6.00e+003          0
9        -5.15e+001            -5.15e+001          3.7e-19
10       -5.15e+001            -5.15e+001          0
11       -3.91e+002            -3.91e+002          0.0481
12       -1.39e+002            -1.39e+002          0.0481
13       -2.50e+001            -6.00e+003          0
14       -2.50e+001            -6.00e+003          2.13e-20
15       -6.00e+003            -5.49e-003          0
16       -6.00e+003            -5.49e-003          2.5e-28
17       -2.64e-003            -1.25e+001          4.2e-28
18       -2.64e-003            -1.25e+001          4.9e-27
19       -2.02e-003            -2.02e-003          6.7e-31
20       -2.02e-003            -2.02e-003          1.5e-30
21       -2.02e-003            -2.02e-003          1.98e-28
22       -2.02e-003            -2.02e-003          4.35e-29
23       -2.50e+001            -2.01e-003          9.4e-29
24       -2.50e+001            -1.25e+001          1.9e-27
25       -2.50e+001            -1.25e+001          0
26       -2.50e+001            -1.25e+001          0
27       -1.85e-003            -1.25e+001          0
28       4.35e-013             -4.98e-014          0
29       -1.02e-013            -4.98e-014          0
30       0.00e+000             0.00e+000           0
31       -6.00e+003            -6.00e+003          1.75e-30
32       0.00e+000             0.00e+000           7.5e-33
33       -6.00e+003            -6.00e+003          1.4e-30

VII. CONCLUSION

This paper has shown, through an example, how a large system can be partitioned into subsystems characterized by their own eigenvalues. Through the use of a special form of the participation factors, and then through tests made by varying actual parameters of the system, it was shown that the eigenvalues that interact are not necessarily those that would have been expected. It is hoped that this type of analysis can be extended to other similar problems. The model in this form has advantages in flexibility: if the SVC is connected to a more complex AC system, only the AC system matrix and the corresponding input and output matrices need modification. Different FACTS devices can be modelled using the TCR/SVC model unit. Similarly, more advanced controllers can be developed using modern control theory (e.g. a fuzzy controller) and implemented directly by replacing the controller matrix.

REFERENCES

[1] J. E. Van Ness and F. D. Dean, "Interaction between subsystems of a power system", American Control Conference, Maryland, 1994.
[2] M. Ntombela, K. K. Kaberere, K. A. Folly and A. I. Petroianu, "An investigation into the capabilities of MATLAB Power System Toolbox for small signal stability analysis in power systems", Inaugural IEEE PES 2005 Conference and Exposition in Africa, Durban, South Africa, 11-15 July 2005.
[3] P. Kundur, Power System Stability and Control, McGraw-Hill, 1994.
[4] J. G. Slootweg, J. Persson, A. M. van Voorden, G. C. Paap and W. L. Kling, "A study of the eigenvalue analysis capabilities of power system dynamics simulation software", 14th PSCC, Sevilla, 24-28 June 2002.
[5] K. R. Padiyar, Power System Dynamics: Stability and Control, John Wiley & Sons, 1996.
[6] D. Jovcic and G. N. Pillai, "Discrete system model of a six-pulse SVC", in Conf. Proc. EuroPES 2003, Marbella, Spain, Sep. 2003.
[7] T. J. E. Miller, Reactive Power Control in Electric Systems, John Wiley & Sons, 1982.
[8] D. Jovcic, N. Pahalawaththa, M. Zavahir and H. Hassan, "SVC dynamic analytical model", IEEE Trans. Power Del., vol. 18, no. 4, pp. 1455-1461, Oct. 2003.
[9] N. Hingorani and Laszlo Gyugyi, Understanding FACTS, IEEE Press, 2003.
[10] D. Jovcic, "Control of high voltage dc and flexible ac transmission systems", Ph.D. dissertation, Univ. Auckland, Auckland, New Zealand, 1999.

APPENDIX: TEST SYSTEM DATA

AC System (System 1)
R1 = 0.6,  R2 = 200,  R3 = 0.1,  R4 = 300,  L1 = 0.3 H,  L2 = 0.2 H,  Z(MVA) = 72 540,  V1 = 120 kV

Controller Data (System 1)
kp = 8.33e-4 rad/kV,  ki = 0.05 rad/kV,  Tf = 0.5,  f = 753 rad/s,  KISPLL = 1,  PLL kp = 50,  PLL ki = 950/s

SVC Data (+167/-100 MVA, System 1)
Total reactive MVA = 100 MVA,  Total capacitive MVA = 167 MVA,  Transformer voltage = 120 kV,  Transformer rating = 200 MVA,  Transformer Xt = 0.17 p.u.,  Resistance Rcp = 167


Multi-Area Security Constrained Economic Dispatch and Emission Dispatch with Valve Point Effects Using Particle Swarm Optimization
K.K.Swarnkar, Dr.S.Wadhwani and Dr.A.K.Wadhwani

Abstract - This paper presents an efficient and reliable Particle Swarm Optimization (PSO) approach for solving the security constrained Economic Dispatch and Emission Dispatch with Valve Point Effects problem in an interconnected power system. The main objective of Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects is to determine the generation allocation of each committed unit in the system and the power exchange between areas so as to minimize the total generation production cost (GPC) without violating the tie-line security constraints. The proposed approach has been evaluated on an IEEE 30-bus interconnected three-area system with eight generators. The investigation reveals that the proposed method provides accurate solutions with fast convergence.

Index Terms - Economic Dispatch; Emission Dispatch; Power System; Particle Swarm Optimization; Valve Point Effects.

I. Introduction

The modern power system needs to operate its generating units in parallel to meet the specific load demand under variable operating conditions. The operating engineer is always under pressure to ensure economic dispatch of the load, and has to plan which generators will remain in operation at a given time and under a given load condition. Electrical power systems are interconnected because interconnection gives maximum reliability, improved stability and lower production cost than operation as isolated systems. The power dispatch problem can be divided into several stages. Economic load dispatch has only one objective, the minimization of fuel cost. With the increasing awareness of environmental protection in recent years, emission/economic dispatch has been proposed as an alternative that simultaneously minimizes fuel cost and pollutant emission [1]. At the same time, only limited work has been carried out on multi-area economic dispatch, where power is dispatched across multiple areas [2, 3, 5, 6, 8, 9]. Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects is an optimization scheme that determines the best generation schedule for a given load with minimum generation cost, maximum pollution control and tie-line security constraints. The Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects problem is thus a large-scale non-linear optimization problem with both linear and non-linear constraints.

In this paper, we further extend the concept of emission/economic dispatch to the multi-area economic dispatch scenario, and a new concept termed Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects is proposed, in which the pollutant emission is also minimized in the multi-area economic dispatch context. The problem is first presented, and then an enhanced multi-objective particle swarm optimization (MPSO) algorithm is developed to handle it. In the problem formulation, the tie-line transfer capacities are treated as a set of design constraints to increase the system security. A three-area test power system is then used as an application example to verify the effectiveness of the proposed method through numerical simulations. A comparative study is also carried out to illustrate the different solutions obtained with the different problem formulations. The remainder of the paper is organized as follows: Section II formulates the Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects problem; Section III discusses the inner working of the PSO algorithm; Section IV presents the proposed solution method; Section V gives the simulation results and analysis; finally, conclusions are drawn.

Kuldeep Kumar Swarnkar is with the faculty of M.P.C.T., Gwalior (M.P.) (E-mail: kuldeepkumarsony@yahoo.co.in). Dr. S. Wadhwani and Dr. A. K. Wadhwani are Readers at M.I.T.S., Gwalior (M.P.).

II. Problem Formulation

The optimal Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects problem can be modeled as a bicriteria optimization problem. The two conflicting objectives, i.e. operating cost and pollutant emission, should be minimized simultaneously while fulfilling certain system constraints.

1. The objective of Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects is to determine the generation levels and the interchange power between areas that minimize the system operating cost. The total fuel cost F(Pg) in $/hr can be represented as follows:

F(Pg) = Σm Σn Fmn(Pgmn)                                                      (1)
Fmn(Pgmn) = amn + bmn Pgmn + cmn Pgmn^2 + | emn sin( fmn ( Pgmn,min - Pgmn )) |   (2)

where Fmn(Pgmn) is the fuel cost of generator n in area m, Pgmn is the power output of generator n in area m, amn, bmn, cmn are the fuel cost coefficients, and the rectified-sine term with coefficients emn, fmn models the valve point effect.

1.1 The inequality constraints on the active power generation Pgmn of each generator n:

Pgmn,min <= Pgmn <= Pgmn,max                                                 (3)

where Pgmn,min and Pgmn,max are, respectively, the minimum and maximum values of active power generation allowed at generator n in area m.
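As an illustration of how the fuel-cost objective of eqs. (1)-(3) can be evaluated, the following MATLAB fragment is a minimal sketch, not the authors' implementation. The quadratic coefficients and limits are taken from the first three units of Table 3 later in the paper; the valve-point coefficients e, f and the candidate dispatch Pg are placeholders introduced only for illustration.

% Minimal sketch: fuel cost with valve-point term for a few units of one area.
a = [0 0 0];                  % constant cost coefficients (Table 3)
b = [2.00 1.75 1.00];         % linear cost coefficients (Table 3)
c = [0.00375 0.01750 0.06250];% quadratic cost coefficients (Table 3)
e = [18 16 14];               % placeholder valve-point amplitudes
f = [0.037 0.038 0.040];      % placeholder valve-point frequencies
Pmin = [50 20 15];  Pmax = [200 80 50];

Pg = [150 60 30];             % hypothetical candidate dispatch (MW)
Pg = min(max(Pg, Pmin), Pmax);% enforce the generation limits of eq. (3)

% Eq. (2): quadratic cost plus rectified-sine valve-point term, summed as in eq. (1).
Fmn = a + b.*Pg + c.*Pg.^2 + abs(e.*sin(f.*(Pmin - Pg)));
F   = sum(Fmn);
fprintf('Total fuel cost = %.2f $/hr\n', F);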


1.2 The cost is optimized subject to the following power system balance constraint:

Σm Σn Pgmn - PL - Σm Σn PDmn = 0                                         (4)

where Pgmn is the power output of generator n in area m in MW, PL is the transmission loss of the system in MW, and PDmn is the load at bus n in area m.

1.3 The tie-line power flow limits:

|Pt| <= Pt,max,   t = 1, ..., Nt                                         (5)

where Pt is the active power flow on tie line t and Nt is the number of tie lines.

1.4 Security constraints:

Vmn,min <= Vmn <= Vmn,max,   n = 1, ..., NB                              (6)
|Sfmn| <= Sfmn,max,          n = 1, ..., NL                              (7)

where Vmn,min and Vmn,max are the minimum and maximum voltage magnitude limits at bus n in area m, NB is the number of buses excluding the slack bus, NL is the number of branches in the system, Sfmn is the power flowing through branch n in area m, and Sfmn,max is the maximum loading limit of branch n in area m.

2. Emission Dispatch Formulation - The emission function can be expressed as the sum of all types of emissions considered, such as NOx, SO2, CO2, particulates and thermal emissions, etc., with a suitable price or weighting on each pollutant emitted [26]. In the present study only one type of emission (NOx) is taken into account. The NOx emission of the system is expressed as follows:

Min E = Σm Σn Emn(Pgmn)                                                  (8)

The emission function Emn(Pgmn) in Rs/hr is usually expressed as a quadratic polynomial [26]:

Emn(Pgmn) = αmn Pgmn^2 + βmn Pgmn + γmn                                  (9)

where E is the total NOx emission of the system and αmn, βmn, γmn are the emission coefficients of the nth unit in area m.

III. Particle Swarm Optimization

Particle swarm optimization (PSO) is inspired by the collective behaviour exhibited by swarms of social insects [4]. It has turned out to be an effective optimizer for a broad variety of engineering design problems. In PSO, a swarm is made up of many particles, and each particle represents a potential solution (i.e., an individual). A particle has its own position and flight velocity, which keep being adjusted during the optimization process based on the following rules:

Vt+1 = ω Vt + c1 rand (Pig,t - Xt) + c2 Rand (Pg,t - Xt)                 (10)
Xt+1 = Xt + Vt+1                                                         (11)

where Vt+1 is the updated particle velocity in the next iteration, Vt is the particle velocity in the current iteration, ω is the inertia weight, which indicates the impact of the particle's own experience on its next movement, c1*rand is a uniformly distributed number within the interval [0, c1], which reflects how the neighbours of the particle affect its flight, Pig,t is the neighbourhood best position, Xt is the current position of the particle, c2*Rand is a uniformly distributed number within the interval [0, c2], which indicates how much the particle trusts the global best position, Pg,t is the global best position, and Xt+1 is the updated position of the particle. Under the guidance of these two updating rules, the particles are attracted to move toward the best positions found thus far, and the optimal solutions can be sought out through this driving force.

IV. The Proposed Solution Method

We have proposed an enhanced multi-objective particle swarm optimization (MOPSO) algorithm and used it to successfully solve both deterministic and stochastic EED problems [7]. In this section, the algorithm is adopted to deal with the optimal Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects problem.

Encoding scheme - The power output of each generating unit and the tie-line flow are selected as the genes that constitute an individual, which is a potential solution of the optimal Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects problem. The genes are all real-coded, and the i-th individual Pi can be represented as follows:

Pi = [Pgmn, PT],   i = 1, ..., PS                                        (12)

where PS is the population size.

Optimization procedure -The computational flow of the proposed optimization procedure is laid out as follows: Step 1: Specify the lower and upper bound generation power of each unit as well as the tie-line transfer limits; specify the area loads and reserves. Step 2: Randomly initialize the individuals of the population. Step 3: Evaluate each individual Pi in the population based on the concept of Pareto-dominance. Step 4: Store the non-dominated members found thus far in the archive. Step 5: Initialize the memory of each particle where a single local best pbest is stored. The memory is contained in another archive. Step 6: Increase the iteration counter. Step 7: Choose the personal best position pbest for each particle based on the memory record; choose the global best gbest from the fuzzified region using binary tournament selection. The niching and fitness sharing mechanism is also applied throughout this process for enhancing solution diversity.


Step 8: Update the member velocity v of each individual Pi based on (10) as follows:

v(t+1)_id = w v(t)_id + c1 rand() (pbest_id - P(t)_id) + c2 Rand() (gbest_d - P(t)_id),
i = 1, ..., PS;   d = 1, ..., (NG + 2 TLN)

where NG is the total number of generators and TLN is the number of tie lines.

Step 9: Modify the member position of each individual Pi based on (11) as follows:
P(t+1)_id = P(t)_id + v(t+1)_id                                          (13)
Step 10: Update the archive which stores the non-dominated solutions according to the four selection criteria [7].
Step 11: If the current individual is dominated by the pbest in the memory, then keep the pbest in the memory; otherwise, replace the pbest in the memory with the current individual.
Step 12: If the maximum number of iterations is reached, go to Step 13; otherwise, go to Step 6.
Step 13: Output the set of Pareto-optimal solutions from the archive as the final solutions.

Table 1: System description of the case study

S. No.   Variable                     30-bus system
1        Buses                        30
2        Generators                   06
3        Branches                     41
4        Shunt reactors               02
5        Tap-changing transformers    04
6        Generator buses              06

Table 2: PSO parameters for best results

S. No.   Parameter                    Value
1        Population size              550
2        Number of iterations         100
3        Acceleration constants       C1 = 2, C2 = 2
4        Inertia weight factor        Wmax = 0.9, Wmin = 0.4
5        Upper limit of velocity      +0.5 PD
6        Lower limit of velocity      -0.5 PD
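The velocity and position updates of Steps 8 and 9 (eqs. (10)-(11)) can be sketched in MATLAB as below. This is an illustrative fragment, not the authors' code: it uses the acceleration constants and the linearly decreasing inertia weight of Table 2, with a placeholder single-objective fitness function and placeholder bounds in place of the full Pareto-based MOPSO bookkeeping.

% Minimal PSO update-loop sketch (placeholder fitness; bounds are illustrative).
fitness = @(x) sum(x.^2);           % hypothetical objective for illustration
D = 8;  PS = 30;  maxIter = 100;    % dimensions, swarm size, iterations
c1 = 2; c2 = 2; wmax = 0.9; wmin = 0.4;
lb = -5*ones(1,D);  ub = 5*ones(1,D);

X = lb + rand(PS,D).*(ub - lb);     % initial positions
V = zeros(PS,D);                    % initial velocities
pbest = X;  pbestF = arrayfun(@(k) fitness(X(k,:)), 1:PS)';
[gbestF, g] = min(pbestF);  gbest = pbest(g,:);

for t = 1:maxIter
    w = wmax - (wmax - wmin)*t/maxIter;            % Table 2: w from 0.9 down to 0.4
    for i = 1:PS
        % Eq. (10): velocity update with personal and global attraction terms.
        V(i,:) = w*V(i,:) + c1*rand(1,D).*(pbest(i,:) - X(i,:)) ...
                          + c2*rand(1,D).*(gbest    - X(i,:));
        % Eq. (11): position update, clipped to the variable bounds.
        X(i,:) = min(max(X(i,:) + V(i,:), lb), ub);
        f = fitness(X(i,:));
        if f < pbestF(i), pbest(i,:) = X(i,:); pbestF(i) = f; end
        if f < gbestF,    gbest = X(i,:);      gbestF = f;    end
    end
end
fprintf('Best objective found: %.4e\n', gbestF);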

V. Simulation Results

For examining the proposed method, a network of three interconnected areas is constructed as shown in Fig. 1. This test system is the same as that used in [13]. Area A1 is an IEEE 30-bus system; it is extended to a 32-bus system by connecting two more buses for Area 2 (A2) and Area 3 (A3). There are six generators in Area A1 with different fuel and emission characteristics, which are shown in Table 3 and Table 4 respectively. There is one generator each in Area A2 and Area A3, with different fuel and emission characteristics, shown in Tables 5.1 and 5.2. The tie-line transfer limits are shown in Table 6.

Table 3: Generator cost coefficients for the IEEE 30-bus system (Area A1)

Bus No.   Pmin (MW)   Pmax (MW)   a    b      c
1         50          200         0    2.00   0.00375
2         20          80          0    1.75   0.01750
5         15          50          0    1.00   0.06250
8         10          35          0    3.25   0.00834
11        10          30          0    3.00   0.02500
13        12          40          0    3.00   0.02500

where amn, bmn, cmn are the fuel cost coefficients.

Table 4: Pollution (emission) coefficients for the IEEE 30-bus system (Area A1)

Bus   α·10^-2   β·10^-4   γ·10^-6   d·10^-4   e·10^-2
1     4.041     -5.554    6.490     2.00      2.857
2     2.543     -6.047    5.638     5.00      3.333
5     4.258     -5.094    4.586     0.001     8.000
8     5.326     -3.550    3.380     20.00     2.000
11    4.258     -5.094    4.586     0.001     8.000
13    6.131     -5.555    5.151     10.00     6.667

where α, β, γ are the emission coefficients of the nth unit in area m.

Figure 1: Three-area 32-bus system with eight generators

Table 5.1: Generator cost coefficients of Area 2 and Area 3

Bus No.   Pmin (MW)   Pmax (MW)   a    b     c
31        10          100         0    650   325
32        10          100         0    30    100

Table 5.2: Pollution (emission) coefficients for Area 2 and Area 3

Bus   α·10^-2   β·10^-4   γ·10^-6   d·10^-4   e·10^-2
31    0.0       0.0       0.0       0.00      0.00
32    0.0       0.0       0.0       0.00      0.00

Table 6: Tie-line transfer limits (p.u.)

Tie line n-m   Lower limit   Upper limit
P2-31          0.1           0.598
P8-32          0.1           0.501
P31-27         0.1           0.590
P32-21         0.1           0.249
P31-32         0.1           0.499
P31-2          0.1           0.598
P32-8          0.1           0.501
P27-31         0.1           0.590
P21-32         0.1           0.249
P32-31         0.1           0.499

The optimum solutions are given in Table 7 and Table 8 for Multi-Area Economic and Emission Dispatch without and with security constraints, respectively. The results for the security constrained Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects are compared with the results obtained by the ANN approach [13]. The tie-line flows corresponding to the optimum schedules are also given in Table 7 and Table 8.

Table 7: Optimum generation schedule for MAEED without security constraints

Algorithm                 PSO (p.u.)   EMEP (p.u.)   TS (p.u.)   FGTU (p.u.)
Area A1
  Pg1                     1.64         1.64          1.63        1.63
  Pg2                     0.41         0.41          0.42        0.41
  Pg3                     0.19         0.19          0.18        0.18
  Pg4                     0.23         0.22          0.22        0.22
  Pg5                     0.16         0.16          0.16        0.16
  Pg6                     0.15         0.15          0.15        0.14
Area A2
  Pg31                    0.1          0.1           0.1         0.1
Area A3
  Pg32                    0.9          0.9           0.9         0.9
Total gen. power          3.78         3.77          3.76        3.76
Losses                    0.083        0.083         0.082       0.082
Total fuel cost ($/hr)    921.65       921.33        920.45      921.33
Tie-line power
  P2-31                   0.49         0.49          0.49        0.49
  P8-32                   0.25         0.26          0.26        0.26
  P31-27                  -0.17        -0.17         -0.17       -0.17
  P32-21                  0.28         0.28          0.28        0.28
  P31-32                  0.11         0.11          0.11        0.11
  P31-2                   0.49         0.49          0.49        0.49
  P32-8                   0.25         0.26          0.26        0.26
  P27-31                  -0.17        -0.17         -0.17       -0.17
  P32-27                  0.28         0.28          0.28        0.28
  P32-31                  0.11         0.11          0.11        0.11
Number of iterations      90           60            65          60
Emission (ton/hr)         0.368        n.a.          n.a.        n.a.

Figure 2: The generation levels after the optimization process

Table 8: Optimum generation schedule for MAEED with security constraints

Algorithm                 PSO (p.u.)   EMEP (p.u.)   TS (p.u.)   FGTU (p.u.)
Area A1
  Pg1                     1.60         1.65          1.63        1.63
  Pg2                     0.43         0.46          0.46        0.46
  Pg3                     0.20         0.16          0.16        0.16
  Pg4                     0.29         0.16          0.16        0.16
  Pg5                     0.14         0.20          0.20        0.20
  Pg6                     0.13         0.16          0.16        0.16
Area A2
  Pg31                    0.1          0.1           0.1         0.1
Area A3
  Pg32                    0.9          0.9           0.9         0.9
Total gen. power          3.79         3.79          3.78        3.78
Losses                    0.083        0.083         0.082       0.082
Total fuel cost ($/hr)    923.03       923.03        922.83      922.80
Tie-line power
  P2-31                   0.46         0.48          0.47        0.47
  P8-32                   0.18         0.17          0.17        0.17
  P31-27                  -0.14        -0.14         -0.13       -0.13
  P32-21                  0.23         0.24          0.24        0.24
  P31-32                  0.08         0.08          0.08        0.08
  P31-2                   0.46         0.48          0.47        0.47
  P32-8                   0.18         0.17          0.17        0.17
  P27-31                  -0.14        -0.14         -0.13       -0.13
  P32-27                  0.23         0.24          0.24        0.24
  P32-31                  0.08         0.08          0.08        0.08
Number of iterations      140          110           95          75
Emission (ton/hr)         0.368        n.a.          n.a.        n.a.

VI. Conclusion

The main advantages of PSO over other modern heuristics are its modelling flexibility, its sure and fast convergence, and its lower computational time compared with other heuristic methods. In this paper a new concept termed Multi-Area Economic Dispatch and Emission Dispatch with Valve Point Effects is proposed, and an enhanced multi-objective Particle Swarm Optimization (MOPSO) algorithm is used to derive a set of Pareto-optimal solutions. The MOPSO algorithm requires only a few parameters to be tuned, which makes it attractive from the implementation viewpoint. The tie-line transfer limits between areas are considered to ensure power system security. The flexibility of the proposed algorithm is demonstrated on an IEEE 30-bus interconnected three-area system with eight generators.

VII References
[1] Farag, A., Al-Baiyat, S., and Cheng, T. C. (1995). Economic load dispatch multiobjective optimization procedures using linear programming techniques, IEEE Transactions on Power Systems, Vol. 10, pp. 731738. [2] Chen, C.-L. and Chen, N. (2001). Direct search method for solving economic dispatch problem considering transmission capacity constraints, IEEE Trans. on Power Systems, Vol. 16, No. 4, pp. 764769. [3] Jayabarathi, T., Sadasivam, G., and Ramachandran, V. (2000). Evolutionary programming-based multi-area economic dispatch with tie line constraints, Electric Machines and Power Systems, Vol. 28, pp. 11651176.

[4] Kennedy, J. and Eberhart, R. (1995). Particle swarm optimization, IEEEProceedings of the International Conference on Neural Networks, Perth, Australia, pp. 19421948. [5] Streiffert, D. (1995). Multi-area economic dispatch with tie line constraints, IEEE Trans. on Power Systems, Vol. 10, No. 4, pp. 1946 1951. [6] Wang, C. and Shahidepour, S. M. (1992). A decomposition approach to non-linear multi-area generation scheduling with tie line constraints using expert systems, IEEE Trans. on Power Systems, Vol. 7, No. 4, pp.14091418. [7] Wang, L. F. and Singh, C. (2006). Multi-objective stochastic power dispatch through a modified particle swarm optimization algorithm, Special Session on Applications of Swarm Intelligence to Power Systems,Proceedings of IEEE Swarm Intelligence Symposium, Indianapolis. [8] Yalcinoz, T. and Short, M. J. (1998). Neural neworks approach for solving economic dispatch problems with transmission capacity constraints, IEEETrans. Power Systems, Vol. 13, No. 2, pp. 307313. [9] Zhu, J. Z. (2003). Multiarea power systems economic power dispatch using a nonlinear optimization neural network approach, Electric Power Components and Systems, Vol. 31, No. 6, pp. 553563. [10] Shoults R.R, Chang S.K, Helmick.S and Grady W.M. A Practical Approach to unit commitment, economic dispatch and savings allocation for Multi-Area pool operation with Import / Export constraints. IEEE Transactionson Power Apparatus and systems, 1980.Vol. 99. PP625- 635. [11] C.E.Lin and C.Y.Chou. Hierarchical Economic Dispatch for multiarea Power Systems Electric power system Research, 1991.Vol. 10.PP193 -203. [12] Wang.C. and Shahidehpour.S.M. A decomposition approach to non-linear multi area generation scheduling with tie line constrains using expert systems. IEEE Trans on Power systems, 1992. Vol. 7. PP14091418. [13] Streiffert. D. Multi-area economic dispatch with tie-line constraints. IEEE Trans on Power systems, 1995. Vol. 10. No.4. PP1946 - 1951 [14] Tseng, C.L.Guan, X. Svoboda. A.J. Multiarea unit commitment for large-scale power systems. Generation, Transmission and Distribution. IEE Proceedings, 1998. Vol. 4. PP415 421. [15] K.P.Wong, J.Yureyevich. Evolutionary programming based economic dispatch for units with non-smooth fuel cost functions.IEEE trans. Power systems, 1998. Vol. 11. PP301-306. [16] T. Jayaharathi, G.sadasivam, Ramachandran. Evolutionary programming based economic dispatch of generators with prohibited operating zones. Electric power system research, 1999. Vol. 52, PP261266. [17] Somasundaram.P, K.Kuppusamy. Application of evolutionary programming to security constrained economic dispatch. Electrical Energy systems, 2005. Vol. 27. PP343-351. [18]Mauriozio, Denna, Giancarlo Mauri, Anna Maria Zanaboni. Learning fuzzy rules with Tabu- search an application to control. IEEETransactions on fuzzy systems, 1999. Vol. 7. No.2.PP295- 318. [19]S.Khansawang, S.Pothiya, C.Boonseng.Solving the Economic Dispatch Problem with Tabu Search Algorithm. IEEE Conference , CIT,2002.Thailand. PP274-278. [20] W.Ongsakul, S.Dechanupaprittha and I.Ngamroo. Parallel Tabu Search Algorithm forconstrained economic dispatch. IEE ProcGeneration - transmission distribution, 2004. Vol. 151. No.2. PP157165. [21] S.Khansawang, S.Pothiya,C. Boonseng. Distributed Tabu Search Algorithm for Solving the Economic Dispatch Problem. 2004 IEEE Transactions on Power systems, PP484-487. [22] J.Z.Zhu. Multi-Area Power systems economic power Dispatch Using a Non linear Optimization Neural Network Approach. 
2003.Taylor and Francis. PP553-563. [23] M.Y.El-sharkh, A.A.El-Keib,H.Chen. A fuzzy evolutionary programming based solution methodology for security-constrained generation maintenance scheduling. Electric Power system Research, 2003. Vol. 67. PP67-72. [24] M Basu. An interactive fuzzy satisfying method based on evolutionary programming technique for multi objective short-term hydro thermal scheduling. Electric Power system Research, 2004.Vol. 69. PP277-285. [25] T.Aruldoss Albert Victoire, A Ebenezer Jeyakumar. A tabu search based hybrid optimization approach for a fuzzy modeled unit commitment problem. Electric Power system Research, 2006. Vol. 76. PP413-425. [26] T.S.Prasanna, P.Somasundaram, Fuzzy Mutated Evolutionary Programming basedalgorithm for combined economic and emission dispatch. IEEE international Conference,TENCON-2008, November 2008 ,in press.



Application of Bacteria Foraging Algorithm to Economic Load Dispatch with Combined Cycle Cogeneration Plants
Mr. E. Mariappane, Professor, EEE Department, Dr. Pauls Engg College, Villupuram (Dist), E-mail: dev_mari@Rediffmail.com

Dr. K. Thyagarajah, Principal, K S R College of Tech, Tiruchengode, Tamilnadu, E-mail: principal@ksrct.ac.in

Dr. M. Sudhakaran, Assistant Professor, EEE Department, Pondicherry Engg College, Pondicherry-605014, E-mail: karan_mahalingam@yahoo.com

Dr. P. Ajay-D-Vimal Raj, Assistant Professor, EEE Department, Pondicherry Engg College, Pondicherry-605014, E-mail: ajayvimal@yahoo.com

Abstract - This paper presents an approach based on the Bacteria Foraging Algorithm to solve the economic load dispatch (ELD) problem with losses for a three-unit system, and also for the case where one plant in the three-unit thermal system is a combined cycle cogeneration plant. The Bacteria Foraging Algorithm (BFA) is a recently proposed algorithm from the family of evolutionary computation, based on the foraging behaviour of the E. coli bacteria present in the human intestine. The approach was tested for a three-thermal-plant system and extended to the case of one combined cycle cogeneration plant in the three-plant system. The performance of the Bacteria Foraging Algorithm is compared with the classical Kirchmayer method and the Genetic Algorithm method. The results reveal that the proposed BFA provides optimal solutions compared with the results of other existing techniques.

Keywords - Optimization; Bacteria Foraging Algorithm; Cogeneration plant; Chemotaxis; Swarming

I. INTRODUCTION

Economic load dispatch (ELD) is a sub-problem of the optimal power flow (OPF) with the objective of fuel cost minimization. The classical solutions for ELD problems have used the equal incremental cost criterion for lossless systems and penalty factors for considering the system losses. The lambda iterative method has been used for ELD, and many other methods such as the gradient method, Newton's method, linear and quadratic programming, etc. have also been applied to the solution of ELD problems [1]. However, all these methods are based on the assumption of continuity and differentiability of the cost functions; hence the cost functions have been approximated in differentiable, mostly quadratic, form. These methods also suffer on two main counts: their inability to provide a global optimal solution, getting stuck at local optima, and the difficulty of handling integer or discrete variables.

Bacteria Foraging Algorithms (BFAs) have become increasingly popular in recent years in science and engineering disciplines [8]. The Bacteria Foraging Algorithm is a search technique based on the evolution of biological systems and is a recently proposed algorithm from the family of evolutionary computation. The algorithm is based on the foraging behavior of the E. coli bacteria. The idea of the Bacteria Foraging Algorithm rests on the fact that natural selection tends to eliminate animals with poor foraging strategies and to favor those having successful foraging strategies; after many generations, poor foraging strategies are either eliminated or reshaped into good ones. The search starts with the random generation of an initial population, or initial set of solutions, whose characteristics improve in terms of cost from generation to generation. The algorithm uses payoff information for finding feasible near-global solutions to evaluate optimal generations, and utilizes the natural selection of the globally optimal bacterium having successful foraging strategies as the cost function. The proposed method easily takes care of the transmission losses, dynamic operation constraints (ramp rate limits) and prohibited operating zones, and also accounts for the non-smoothness of the cost function arising from valve point loading effects.

Combined cycle cogeneration plants (CCCP) [3, 4] have the following advantages over thermal plants: (i) higher overall thermal efficiency; (ii) minimum air pollution by NOx, dust, etc.; (iii) independent operation of the gas turbine for peak loads; (iv) quick start-up and lower capital cost per kW; and (v) lower water requirement per unit of electrical output. The fuel consumption and cost characteristics of such plants are, in general, not differentiable; discontinuity of these curves may also be observed in steam-based power plants due to valve point loading [5]. This paper proposes the application of BFA to solve the economic load dispatch of the following example problems: (i) a three-unit thermal plant system, for which the results are compared with the conventional method and the GA method, and (ii) a system with two thermal plants and a third plant that is a combined cycle cogeneration plant, for which the results are compared with the GA method.

A. Classic Economic Load Dispatch Problem

The objective of the ELD problem [1] is to minimize the total fuel cost of the thermal plants:

OBJ = Σ_{i=1}^{n} Fi(Pi)                                   (1)

subject to the equality constraint of the real power balance,

Σ_{i=1}^{n} Pi - Pl - Pd = 0                               (2)

and the inequality constraints of the real power limits on the generator outputs,

Pi,min <= Pi <= Pi,max                                     (3)


where Fi(Pi) is the individual generation production cost in terms of its real power generation Pi, Pi is the output generation of unit i, n is the number of generators in the system, Pd is the total system load demand, and Pl is the total system transmission loss. A thermal plant can be expressed by an input-output model (cost function), where the input is the fuel cost and the output is the power output of each unit. In practice, the cost function can be represented by a quadratic function:

Fi(Pi) = Ai Pi^2 + Bi Pi + Ci                              (4)

The incremental cost curve data are obtained by taking the derivative of the unit input-output equation, resulting in the following equation for each generator:

dFi(Pi)/dPi = 2 Ai Pi + Bi                                 (5)
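Since the classical solution mentioned in the introduction rests on the equal incremental cost criterion of eq. (5), a minimal MATLAB sketch of a lossless lambda-iteration dispatch is given below for comparison purposes. It is an illustration only: losses and the Bmn matrix are ignored, the load demand Pd is an assumed value, and the cost data are those quoted later in Section G.

% Minimal lambda-iteration sketch for a lossless three-unit ELD
% (equal incremental cost criterion of eq. (5); losses neglected).
A = [0.0156 0.0194 0.0482];          % quadratic cost coefficients (Section G data)
B = [7.92   7.85   7.97];            % linear cost coefficients
Pmin = [100 100 50];  Pmax = [600 400 200];
Pd = 850;                            % assumed load demand in MW (illustrative)

lamLo = 7; lamHi = 30;               % bracketing values of incremental cost
for k = 1:60                         % bisection on lambda
    lam = 0.5*(lamLo + lamHi);
    P = (lam - B) ./ (2*A);          % from dFi/dPi = 2*Ai*Pi + Bi = lambda
    P = min(max(P, Pmin), Pmax);     % respect generator limits, eq. (3)
    if sum(P) > Pd, lamHi = lam; else, lamLo = lam; end
end
fprintf('lambda = %.4f Rs/MWh, P = [%s] MW, total = %.1f MW\n', ...
        lam, sprintf('%.1f ', P), sum(P));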

on the fact that natural selection tends to eliminate animals with poor foraging strategies and favor those having successful foraging strategies. After many generations, poor foraging strategies are either eliminated or reshaped into good ones. The search starts with random generation of initial population or initial set of solutions. The characteristics of initial set of solutions improve in terms of costs from generation to generation [10]. This algorithm uses payoff information for finding feasible near global solutions to evaluate optimal generations. The approach utilizes the natural selection of global optimum bacterium having successful foraging strategies as the cost function. This proposed method easily takes care of the transmission losses, dynamic operation constraints (ramp rate limits) and prohibited operating zones and also accounts for non-smoothness of cost function arising due to valve point loading effects.

Transmission losses are a function of the unit generations and of the system topology. Solving the ELD equations for a specified system requires an iterative approach, since all unit generation allocations are embedded in the equation for each unit. In practice, the loss penalty factors are usually obtained using on-line power flow software, and this information is updated to ensure accuracy. The losses can also be calculated directly using the Bmn matrix loss formula:

PL = Σi Σj Pi Bij Pj                                       (6)

where the Bij are loss coefficients, constant for certain operating conditions.
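A minimal MATLAB sketch of evaluating the Bmn loss formula of eq. (6) is given below, using the loss coefficient matrix quoted later in Section G and an illustrative dispatch; it sketches only the loss evaluation, not the full penalty-factor iteration.

% Transmission loss from the Bmn coefficient matrix, PL = P' * B * P  (eq. (6)).
B = [0.0000750 0.000005 0.0000075;
     0.0000050 0.000015 0.0000100;
     0.0000075 0.000010 0.0000450];      % Bmn matrix from Section G
P = [400; 300; 150];                      % illustrative unit outputs in MW

PL = P' * B * P;                          % total transmission loss in MW
fprintf('Transmission loss = %.2f MW for P = [%s] MW\n', PL, sprintf('%g ', P));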

Process Involved In BFA Chemotaxis:


The process is achieved by swimming and tumbling. Depending upon the rotation of the flagella in each bacterium, it decides whether it should move in a predefined direction (swimming) or an altogether different direction (tumbling), in the entire lifetime of the bacterium. Both the process are decided by rotation of the flagella in each bacterium. To represent a tumble, a unit length random direction, (j) say, is generated; this will be used to define the direction of movement after a tumble

B. Economic Load Dispatch with Valve Point Loading


Many researchers have applied GAs to ELD problems. In 1992, Walters and Sheble showed how GAs utilize payoff information, such as a discontinuous heat rate curve, effectively in ELD problems. They also proposed modelling the valve-point effect by adding a rectified sinusoidal contribution to the conventional quadratic cost curve [5]: Fi(Pi) = [conventional quadratic Fi(Pi)] + | ei sin( fi ( Pi,min - Pi )) |   (7)

θ^i(j+1, k, l) = θ^i(j, k, l) + C(i) φ(j)                  (8)

where θ^i(j, k, l) denotes the i-th bacterium at the j-th chemotactic, k-th reproductive and l-th elimination-and-dispersal step, C(i) is the step size, and φ(j) is the unit-length random direction generated for the tumble.

Swarming:
It is always desired that the bacterium that has searched the optimum path of food should try to attract other bacteria so that they reach the desired place more rapidly. Swarming makes the bacteria congregate into groups and hence move as concentric pattern of groups with high bacterial density. It can be mathematically represented as

C. Economic Load Dispatch with Combined Cycle Cogeneration Plant (CCCP)


The eastern region of electricity generating authority of Thailand (EGAT) has a large amount of thermal plants including combined cycle cogeneration plants [4]. From the survey the cost characteristics of CCCP (two 50MW gas turbines and one 50MW steam turbine) in eastern region of EGAT is obtained and the cost characteristics of such plants are not differentiable. Till date only limited work has been reported in the area of CCCP economic load dispatch.

Jcc(θ, P(j, k, l)) = Σ_{i=1}^{S} Jcc^i(θ, θ^i(j, k, l))    (9)

Reproduction:
The least healthy bacteria die, and other healthy bacteria each split into two bacteria. These are placed in same location which makes the population of bacteria constant. There by stagnation of bacteria is eliminated.

D. Bacteria Foraging Algorithms


Foraging Algorithms (BFAs) have become increasingly popular in recent years in science and engineering disciplines. Bacteria Foraging Algorithm is a search technique based on the evolution of biological systems. Bacteria Foraging Algorithm (BFA) is recently proposed new algorithm from the family of Evolutionary Computation. The algorithm is based on the foraging behavior of E.coli bacteria. The idea of Bacteria Foraging Algorithm is based

E. Elimination And Dispersal


It is possible that in the local environment, the life of a population of bacteria changes either gradually by consumption of nutrients or suddenly due to some other influence. Events can kill


or disperse all the bacteria in a region. They have the effect of possibly destroying the chemo tactic progress, but in contrast, they also assist it, since dispersal may place bacteria near good sources. Elimination and dispersal helps in reducing the behavior of stagnation(i.e., being trapped in a premature solution point or local optima).

F. Steps Involved In Bacteria Foraging Algorithm

The main aim of the simulation is to provide a feasible solution in the population pool of each generation and to achieve the objective of minimization through these feasible solutions in further evolutions [8].

Step 1: Initialization. The following variables are initialized:
- Number of bacteria (S) used in the search
- Number of parameters (p) to be optimized
- Swimming length Ns
- Number of iterations in a chemotactic loop Nc
- Number of reproduction steps Nre
- Number of elimination and dispersal events Ned
- Probability of elimination and dispersal Ped
- Location of each bacterium
- d_attract, w_attract, h_repellent and w_repellent (fixed values)

Step 2: Iterative algorithm for optimization. This step models the bacterium population chemotaxis, swarming, reproduction, and elimination and dispersal:
Elimination-dispersal loop: l = l + 1
Reproduction loop: k = k + 1
Chemotaxis loop: j = j + 1

Step 3: Chemotaxis loop. For i = 1, 2, ..., S:
(a) Calculate the cost function value of bacterium i and record the location of the bacterium corresponding to the global minimum cost, using

J_sw(i, j, k, l) = J(i, j, k, l) + J_cc(θ^i(j, k, l), P(j, k, l))    (10)

Let J_last = J_sw(i, j, k, l) to save this value, since a better value may be found.
(b) Tumble: generate a random vector Δ(i) and move the bacterium according to

θ^i(j+1, k, l) = θ^i(j, k, l) + C(i) Δ(i)/√(Δ(i)ᵀ Δ(i))    (11)

then compute the cost function at the new location.
(c) Swim: let m = 0; while m < Ns, let m = m + 1; if J_sw(i, j+1, k, l) < J_last, let J_last = J_sw(i, j+1, k, l) and continue moving in the same direction; else let m = Ns.
(d) Go to the next bacterium (i + 1) if i < S.
If j < Nc, go to Step 3; in this case chemotaxis continues, since the life of the bacteria is not over.

Step 4: Reproduction. For a given k and l, and for each i = 1, 2, ..., S, let Ji_health be the health of bacterium i, taken as the minimum of J_sw. The Sr = S/2 bacteria with the worst J_health die and the other Sr bacteria with the best values each split into two. If k < Nre, go to Step 2, i.e. the chemotaxis loop.

Step 5: Elimination and dispersal. For i = 1, 2, ..., S, eliminate and disperse each bacterium with probability Ped; an eliminated bacterium is simply dispersed to a random location in the optimization domain. The objective function is evaluated and the process is repeated until no further improvement in the objective can be obtained.

G. Simulation Results and Performance: Three Thermal Plant System

To focus on the evaluation of the proposed BFA, a three-unit power system is used. The data used in this paper are obtained from Sheble and Britting [6] and are as follows:

F1 = 0.0156 P1² + 7.92 P1 + 561 Rs/h
F2 = 0.0194 P2² + 7.85 P2 + 310 Rs/h
F3 = 0.0482 P3² + 7.97 P3 + 78 Rs/h

Bmn matrix:
Bmn = [ 0.0000750  0.000005  0.0000075
        0.0000050  0.000015  0.0000100
        0.0000075  0.000010  0.0000450 ]

The unit operating ranges for this example are
100 MW < P1 < 600 MW
100 MW < P2 < 400 MW
50 MW < P3 < 200 MW

The parameters used in the BFA are:
Number of bacteria: 20
Number of iterations in the chemotactic loop: 5
Swimming length: 4
Number of reproduction steps: 2
Probability of elimination and dispersal: 0.25
d_attract, w_attract, h_repellent, w_repellent: 0.25, 3e-6, 0.25, 15e-5
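As a concrete illustration (not the authors' MATLAB implementation), the following sketch runs the tumble-and-swim chemotaxis of Steps 1-3 on the three-unit data above. The power-balance constraint is handled with an assumed quadratic penalty, the step size Cstep is an assumed constant, and the swarming term, reproduction and elimination-dispersal loops are omitted for brevity.

% Minimal BFA chemotaxis sketch for the three-unit dispatch of Section G.
a  = [0.0156 0.0194 0.0482];          % cost coefficients a_i (Rs/MW^2 h)
b  = [7.92   7.85   7.97];            % cost coefficients b_i (Rs/MWh)
c  = [561    310    78];              % cost coefficients c_i (Rs/h)
Pmin = [100 100 50];  Pmax = [600 400 200];
Bmn = [0.0000750 0.000005 0.0000075;
       0.0000050 0.000015 0.0000100;
       0.0000075 0.000010 0.0000450];
PD = 812.57;                          % demand (MW)

S = 20; Nc = 5; Ns = 4;               % bacteria, chemotactic steps, swim length
Cstep = 5;                            % assumed step size (MW)
penalty = 1e4;                        % assumed penalty weight for the power balance

cost = @(P) sum(a.*P.^2 + b.*P + c) + penalty*(sum(P) - P*Bmn*P' - PD)^2;

theta = Pmin + rand(S,3).*(Pmax - Pmin);   % random initial population in the limits
for j = 1:Nc
    for i = 1:S
        Jlast = cost(theta(i,:));
        delta = randn(1,3);  delta = delta/norm(delta);   % tumble direction
        for m = 1:Ns                                      % swim while improving
            cand = min(max(theta(i,:) + Cstep*delta, Pmin), Pmax);
            if cost(cand) < Jlast
                theta(i,:) = cand;  Jlast = cost(cand);
            else
                break
            end
        end
    end
end
[Jbest, ibest] = min(arrayfun(@(k) cost(theta(k,:)), 1:S));
fprintf('Best dispatch: %.1f %.1f %.1f MW, cost %.1f Rs/h\n', theta(ibest,:), Jbest);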

H. Two Thermal Plants and One CCCP System


In the three thermal plant system, the third thermal plant is replaced by a combined cycle cogeneration plant (CCCP) consisting of two 75 MW gas turbines and one 50 MW steam turbine. The fuel cost characteristics of this plant are as shown in Fig. 1. BFA claims to provide a near-optimal or optimal solution for computationally intensive problems, so the effectiveness of BFA solutions should always be evaluated by experimental results. For the economic load dispatch problem, the BFA, implemented in MATLAB, was tested on the three thermal plant system and then extended to the case where one plant of the three thermal plant system is a combined cycle cogeneration plant. The performance of the proposed algorithm is compared with the classical Kirchmayer method and GA in Table I, and the optimal outputs are shown in Table II. It is observed that this method is accurate and may effectively replace the conventional practices presently performed in different central load dispatch centres. This approach can also be used for cogeneration plants for which it is not possible to solve the ELD problem by the conventional technique; the corresponding comparison is given in Table III, with the optimal outputs in Table IV.
TABLE I. Comparison of test results obtained by GA, classical Kirchmayer method and BFA (three-unit system)

Total Load PD (MW)   Classical Kirchmayer method [7] (Rs/h)   Genetic Algorithm [7] (Rs/h)   Proposed BFA (Rs/h)
812.57               7986.093                                 7986.068                       7985.850
585.33               5890.063                                 5890.094                       5889.910
869.00               8522.450                                 8522.875                       8522.289

TABLE II. Optimal scheduling of generators by BFA (three-unit system)

Total Load PD (MW)   PG1 (MW)   PG2 (MW)   PG3 (MW)   PL (MW)   Total cost (Rs/h)
812.57               352.562    370.736    129.845    13.573    7985.850
585.33               233.423    267.802    91.071     6.967     5889.910
869.00               347.969    396.893    139.677    15.540    8522.289

TABLE III. Comparison of test results obtained by GA and BFA for economic loading of two thermal plants and one CCCP

Total Load PD (MW)   GA [7] total cost (Rs/h)   BFA total cost (Rs/h)
680                  6639.47                    6588.38
750                  7267.93                    7235.09
869                  8398.07                    8346.84

TABLE IV. Optimal scheduling of generators in the two thermal plants and one CCCP system by the BFA algorithm

Total Load PD (MW)   PG1 (MW)   PG2 (MW)   PG3 (MW)   PL (MW)   Total cost (Rs/h)
680                  287.43     319.52     82.87      9.84      6588.38
750                  272.62     311.85     176.62     11.11     7235.09
869                  329.53     377.98     176.62     15.14     8346.84

I. CONCLUSION

This paper attempted to solve the economic load dispatch problem of power system networks containing combined cycle cogeneration plants using the Bacteria Foraging Algorithm. Results are obtained for a three thermal plant system and are extended to the case where one plant of the three thermal plant system is a combined cycle cogeneration plant. The solution obtained by the proposed algorithm is compared with other methods, and the comparison of the results shows that the proposed algorithm is capable of solving such nonlinear optimization problems.

REFERENCES
[1] A. J. Wood and B. F. Wollenberg, Power Generation, Operation and Control, New York: John Wiley and Sons, 1984, pp. 23-62.
[2] D. E. Goldberg and J. H. Holland, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, 1989.
[3] Y. H. Song and Q. Y. Xuan, "Combined Heat and Power Economic Dispatch using Genetic Algorithm based Penalty Function Method", International Journal of Electric Machines and Power Systems, vol. 26, 1988, p. 363.
[4] Economic Load Dispatch, M.Tech Thesis Report, Asian Institute of Technology, Thailand.
[5] K. P. Wong and Y. W. Wong, "Genetic and Genetic/Simulated Annealing Approaches to Economic Load Dispatch", IEE Proceedings - Generation, Transmission and Distribution, vol. 141, no. 5, September 1994, p. 507.
[6] G. B. Sheble and K. Britting, "Refined Genetic Algorithm: Economic Dispatch Example", IEEE Transactions on Power Systems, vol. 10, no. 1, 1995, p. 117.
[7] P. Venkatesh, P. S. Kannan and M. Sudhakaran, "Application of Computational Intelligence to Economic Load Dispatch", Journal of the Institution of Engineers (India), vol. 81, no. EL/2, September 2000, pp. 39-43.
[8] A. K. Barisal, P. K. Hota and R. Chakrabarti, "Economic Load Dispatch by Modified Bacteria Foraging Algorithm", Proceedings of the International Conference PSACO-2008.
[9] Tripathy and S. Mishra, "Bacteria Foraging-Based Solution to Optimize Both Real Power Loss and Voltage Stability Limit", IEEE Transactions on Power Systems, vol. 22, no. 1, February 2007.


Bearing Faults Classification by SVM and SOM Using Complex Gaussian Wavelet

Kalyan M Bhavaraju
Mechanical & Industrial Engineering Department Indian Institute of Technology Roorkee, India kmanohar87@gmail.com

P.K. Kankar
Mechanical & Industrial Engineering Department, Indian Institute of Technology Roorkee, India, kankar@rediffmail.com

S.P. Harsha
Mechanical & Industrial Engineering Department, Indian Institute of Technology Roorkee, India, surajfme@iitr.ernet.in

S.C.Sharma
Mechanical & Industrial Engineering Department Indian Institute of Technology Roorkee, India sshefme@iitr.ernet.in

Abstract- Bearing failure is one of the foremost causes of breakdown in rotating machines, resulting in costly system downtime. This paper presents a feature-recognition system for rolling element bearing fault diagnosis using the continuous wavelet transform (CWT). Bearing faults are classified using two artificial intelligence (AI) techniques, Support Vector Machines (SVM) and Self Organizing Maps (SOM). To extract the most appropriate features from the raw vibration signatures and to classify the faults effectively, the raw vibration signals are decomposed using the complex Gaussian wavelet. A total of 150 signals of healthy and defective bearings at rotor speeds of 250, 500, 1000, 1500 and 2000 rpm with three loading conditions are considered. Defects such as spalls in the outer race, the inner race and the ball are considered in the present study. The 1-D continuous wavelet coefficients of these samples are calculated at the seventh level of decomposition (2^7 scales for each sample). The maximum energy criterion (MEC) is used to determine the scale corresponding to the characteristic defect frequency, and statistical features are extracted from the wavelet coefficients of the scale satisfying the MEC. The test results show that the SVM identifies the fault categories of rolling element bearings more accurately and has a better diagnosis performance than the SOM.

Keywords- wavelets; support vector machine; self organizing maps

I. INTRODUCTION

Rolling element bearings are widely used in electric machines to support and locate the rotor, to keep the air gap small and consistent and to transfer loads from the shaft to the motor frame. The bearings should enable high- and low-speed operation, minimize friction and save power. Because of this close relationship between electric motors and bearing assembly performance, it is difficult to imagine the progress of modern machines without the wide application of bearings. The faults arising in electric machines are often linked with bearing faults, and heavy bearing vibration can even cause the entire system to function incorrectly, resulting in downtime for the system and economic loss to the customer. Machine condition monitoring is therefore an important tool for preventive maintenance. Artificial Neural Networks (ANN), Support Vector Machines (SVM), fuzzy logic classifiers and other soft computing techniques are widely used tools to classify the faults [1-2]. Lebaroud et al. [3] proposed a new method for induction motor fault detection based on time-frequency classification of the current waveforms. Saravanan et al. [4] carried out a comparative study on the classification by SVM and PSVM of features extracted using the Morlet wavelet for fault diagnosis of a spur bevel gear box. This paper emphasizes bearing fault classification by the SVM and SOM approaches. Vibration data are collected by sensors as time-domain signals for different faulty and healthy bearings. The 1-D continuous wavelet coefficients of these signals are then calculated using the complex Gaussian as the base wavelet at the seventh level of decomposition (2^7 scales per signal). Statistical features are extracted from the wavelet coefficients of the scale satisfying the maximum energy criterion. These features are fed as input to the SVM and SOM classifiers for diagnosing the bearing condition, and the classification results of the two classifiers are compared. The flow diagram of the proposed fault diagnosis procedure is shown in Fig. 1.

II. MACHINE LEARNING TECHNIQUES

Machine learning is an approach of using examples (data) to synthesize programs. In the particular case when the examples are input/output pairs it is called supervised learning. If there are no output values and the learning task is to gain some understanding of the process that generated the data, the learning is said to be unsupervised. In the present study, two supervised machine learning techniques, SVM and ANN, are considered, and SOM is considered as the unsupervised machine learning technique. Pattern recognition and classification using machine learning techniques are described here in brief; a more detailed description can be found in [1].

A. Support Vector Machine
SVM is a supervised machine learning method based on statistical learning theory. It is a very useful method for classification and regression in small-sample cases such as fault diagnosis. The SVM tries to place a boundary between the two different classes and orient it in such a way that the margin is maximized, which results in the least generalization error. The nearest data points used to define the margin are called support vectors. This is implemented by reducing it to a convex optimization problem: minimizing a quadratic function under linear inequality constraints [4]. Consider a training sample set {(xi, yi)}, i = 1 to N, where N is the total number of samples. The hyperplane f(x) = 0 that separates the given data can be obtained as a solution to the following optimization problem:

minimize (1/2)||w||²    (1)

subject to y_i (w·x_i + b) - 1 ≥ 0,  i = 1, 2, ..., N    (2)

Rewriting the above optimization problem in terms of Lagrange multipliers leads to the dual problem

maximize Σ_{i=1..N} α_i - (1/2) Σ_{i=1..N} Σ_{j=1..N} α_i α_j y_i y_j (x_i·x_j)    (3)

subject to 0 ≤ α_i ≤ C,  Σ_{i=1..N} α_i y_i = 0,  i = 1, 2, ..., N    (4)

where C is a constant representing the error penalty. The Sequential Minimal Optimization (SMO) algorithm gives an efficient way of solving the dual problem arising from the derivation of the SVM.

B. Self-Organizing Maps
Self-organizing maps are a special class of ANN based on competitive learning. In a self-organizing map, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional. The neurons become selectively tuned to various input patterns or classes of input patterns in the course of a competitive learning process. The locations of the neurons so tuned (the winning neurons) become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice. A SOM is therefore characterized by the formation of a topographic map of the input patterns in which the spatial locations of the neurons in the lattice are indicative of intrinsic statistical features contained in the input patterns, hence "self-organizing map". The type of SOM used in the present study is the Kohonen model shown in Fig. 2.

Figure 2. Kohonen model of SOM (two-dimensional array of postsynaptic neurons, winning neuron and bundle of synaptic connections from the input).

III. WAVELET BASED FEATURE EXTRACTION METHODOLOGY

A. Experimental setup
The problem of predicting the degradation of the working condition of bearings before they reach the alarm or failure threshold is extremely important in industry, in order to fully utilize the machine production capacity and to reduce plant downtime. In the present study, an experimental test rig is used and the vibration responses of a healthy bearing and of bearings with faults are obtained. Table I shows the dimensions of the ball bearings taken for the study. Accelerometers are used for picking up the vibration signals from various stations on the rig. As a first step, the machine was run with a healthy bearing to establish the baseline data; data were then collected for different fault conditions. A variety of faults were simulated on the rig at rotor speeds of 250, 500, 1000, 1500 and 2000 rpm. The following bearing conditions are considered in the study:
Bearing with no fault (BNF)
Spall on outer race (SOR)
Spall on inner race (SIR)
Spall on ball (SOB)
Combination of bearing component faults (CFB)
Time responses are obtained at various speeds and different load conditions, with all cases considered in a phased manner.

TABLE I. PARAMETERS OF BEARING
Parameter             Value
Outer race diameter   28.262 mm
Inner race diameter   18.738 mm
Ball diameter         4.762 mm
Ball number           8
Contact angle         0°
Radial clearance      10 µm

Figure 1. Flow diagram of the fault diagnosis procedure (machinery fault simulator -> raw vibration signals -> 1-D CWT with complex Gaussian base wavelet -> MEC scale selection -> statistical features -> SVM/SOM training and testing -> bearing fault diagnosis).
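For concreteness, the following is a minimal MATLAB sketch (assuming the Optimization Toolbox function quadprog; it is not the authors' training code, which used the bearing features of Section III) of solving the linear SVM dual (3)-(4) on a small synthetic two-class set. The data, the penalty C and the tolerance are assumptions for illustration.

% Linear soft-margin SVM dual solved with quadprog on toy data.
rng(1);
X = [randn(20,2)+2; randn(20,2)-2];       % two separable clusters
y = [ones(20,1); -ones(20,1)];            % class labels +1 / -1
N = numel(y);  C = 10;                    % error penalty C

H   = (y*y').*(X*X');                     % quadratic term of the dual (3)
f   = -ones(N,1);                         % maximise sum(alpha) -> minimise -sum
Aeq = y';  beq = 0;                       % sum(alpha_i * y_i) = 0, cf. (4)
lb  = zeros(N,1);  ub = C*ones(N,1);      % 0 <= alpha_i <= C, cf. (4)

alpha = quadprog(H, f, [], [], Aeq, beq, lb, ub);

sv = alpha > 1e-6;                        % support vectors
w  = X'*(alpha.*y);                       % primal weight vector
b  = mean(y(sv) - X(sv,:)*w);             % bias estimated from the support vectors

pred = sign(X*w + b);
fprintf('Training accuracy: %.0f%%, support vectors: %d\n', ...
        100*mean(pred == y), nnz(sv));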


B. Maximum energy criterion
When applying the wavelet transform to a signal, if a major frequency component corresponding to a particular scale exists in the signal, then the wavelet coefficients at that scale have relatively high magnitudes at the times when this major frequency component occurs. As a result, the energy related to that frequency component is extracted from the signal. For the purpose of bearing health diagnosis, the more energy is extracted from the defect-induced transient vibrations, the more effective the wavelet-based signal processing will be. Therefore, the energy content is used as the criterion for selecting the scale carrying the information about the defect frequency. The energy content of the wavelet coefficients of the n-th scale is given by

E(n) = Σ_{i=1..m} |C_{n,i}|²    (5)

where m is the number of wavelet coefficients and C_{n,i} is the i-th wavelet coefficient of the n-th scale.

Figure 3. Energy vs. scale number for Complex Gaussian coefficients.
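The following is a minimal MATLAB sketch of the scale selection of (5) together with the statistical features used in the next subsection. The coefficient matrix here is a random stand-in for the complex Gaussian CWT of a vibration signal, and the moment-based feature definitions are the standard ones, assumed rather than copied from (6)-(8); using the real part of the selected-scale coefficients is also an assumption.

% Maximum energy criterion and statistical features from CWT coefficients.
scales = 2^7;                                            % 128 scales (seventh level)
coeffs = randn(scales, 4096) + 1i*randn(scales, 4096);   % placeholder CWT coefficients

E = sum(abs(coeffs).^2, 2);          % energy of each scale, Eq. (5)
[~, nSel] = max(E);                  % scale satisfying the maximum energy criterion
c = real(coeffs(nSel, :));           % coefficients of the selected scale (assumption)

m  = mean(c);
sd = sqrt(mean((c - m).^2));         % standard deviation
sk = mean((c - m).^3)/sd^3;          % skewness
ku = mean((c - m).^4)/sd^4;          % kurtosis

fprintf('Selected scale %d: std %.3f, skewness %.3f, kurtosis %.3f\n', nSel, sd, sk, ku);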

C. Feature extraction
When applying the wavelet transform to a signal, if a major frequency component corresponding to a particular scale exists in the signal, the wavelet coefficients at that scale have relatively high magnitudes and, as a result, the energy at that scale is high. The continuous wavelet coefficients (CWC) of all 150 signals are calculated with the complex Gaussian as the base wavelet at the seventh level of decomposition (2^7 scales), and the scale having the maximum energy is selected. The statistical features of the CWC corresponding to the selected scale are then calculated. For a bearing with a spall on the ball under the zero-loader condition running at 250 rpm rotor speed, the energy versus scale number plot is shown in Fig. 3; scale number 24 has the maximum energy, so the CWC of this scale are used to calculate the statistical parameters. The statistical features considered in the present study are:
1) Kurtosis: a statistical measure used to describe the distribution of the observed data around the mean; kurtosis is the degree to which a statistical frequency curve is peaked (6).
2) Skewness: skewness characterizes the degree of asymmetry of a distribution around its mean and can be negative or positive (7).
3) Standard deviation: the standard deviation is a measure of the energy content in the vibration signal (8).

IV. RESULTS AND DISCUSSIONS

In the present study, training and testing of the SVM and SOM classifiers are carried out. The results on a test set in a multi-class prediction are displayed as a two-dimensional confusion matrix with a row and a column for each class [6]; each matrix element shows the number of test examples for which the actual class is the row and the predicted class is the column. A total of 75 instances and 8 features are used for the study, comprising the statistical features of the horizontal and vertical responses, the rotor speed and the number of loaders used. Tables II and III show the test results as confusion matrices for the two techniques, SVM and SOM. Of the 75 instances, 15 cases each of SOB, SIR, CFB, BNF and SOR are considered. Table IV shows the accuracy of each technique in classifying the faults. The percentages of correctly classified instances for SVM and SOM are 97.3333% and 73.333% respectively. The results show that SVM is effective in predicting bearing faults well in advance of an impending catastrophic failure. It is also observed that SOM, being an unsupervised learning technique, still gives good classification efficiency.

TABLE II. CONFUSION MATRIX OF SVM
SOB  SIR  CFB  BNF  SOR   Classified as
14   0    1    0    0     SOB
0    15   0    0    0     SIR
0    0    15   0    0     CFB
0    0    0    15   0     BNF
0    0    0    1    14    SOR

TABLE III. CONFUSION MATRIX OF SOM
SOB  SIR  CFB  BNF  SOR   Classified as
13   0    1    0    1     SOB
0    11   2    2    0     SIR
0    3    12   1    1     CFB
4    2    0    9    0     BNF
2    1    1    1    10    SOR

TABLE IV. EVALUATION OF THE SUCCESS OF NUMERIC PREDICTION
Parameters                          SVM             SOM
Correctly classified instances      73 (97.3333%)   55 (73.333%)
Incorrectly classified instances    2 (2.6667%)     20 (26.6667%)
Kappa statistic                     0.9667          0.6667
Total no. of instances              75              75
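The summary statistics of Table IV can be recomputed from the confusion matrices. The following MATLAB fragment (not the authors' code) does this for the SVM matrix of Table II, taking rows as the actual classes and columns as the predicted classes in the order SOB, SIR, CFB, BNF, SOR.

% Accuracy and Cohen's kappa from the SVM confusion matrix.
C = [14  0  1  0  0;
      0 15  0  0  0;
      0  0 15  0  0;
      0  0  0 15  0;
      0  0  0  1 14];

N  = sum(C(:));                    % total number of instances (75)
po = trace(C)/N;                   % observed agreement (accuracy)
pe = sum(sum(C,2).*sum(C,1)')/N^2; % chance agreement from the marginals
kappa = (po - pe)/(1 - pe);        % kappa statistic

fprintf('Correctly classified   : %d (%.4f%%)\n', trace(C), 100*po);
fprintf('Incorrectly classified : %d (%.4f%%)\n', N - trace(C), 100*(1-po));
fprintf('Kappa statistic        : %.4f\n', kappa);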



V. CONCLUSIONS

This study presents a procedure for the detection of bearing defects in electric machines by classifying them using SVM and SOM. The procedure incorporates the most appropriate features extracted from the wavelet coefficients of the raw vibration signals. The complex Gaussian wavelet is considered for the fault diagnosis; based on the maximum energy criterion, the wavelet coefficients are selected for the extraction of statistical features. The percentages of correctly classified instances for SVM and SOM are 97.3333% and 73.333% respectively. The performance of SVM is found to be the best, due to its inherent generalization capability. The results show the potential of the proposed algorithm, together with machine learning techniques, for the development of on-line fault diagnosis systems for electric machine condition monitoring.

REFERENCES
[1] B. Samantha and K. R. Al-Balushi, "Artificial neural networks based fault diagnostics of rolling element bearings using time domain features", Mechanical Systems and Signal Processing, vol. 17(2), pp. 317-328, 2003.
[2] A. Lebaroud and G. Clerc, "Accurate diagnosis of induction machine faults using optimal time-frequency representations", Engineering Applications of Artificial Intelligence, vol. 22, pp. 825-832, 2009.
[3] N. Saravanan, V. N. S. Kumar Siddabattuni and K. I. Ramachandran, "A comparative study on classification of features by SVM and PSVM extracted using Morlet wavelet for fault diagnosis of spur bevel gear box", Expert Systems with Applications, vol. 35, pp. 1351-1366, 2008.
[4] N. Cristianini and N. J. Shawe-Taylor, An Introduction to Support Vector Machines. Cambridge: Cambridge University Press, 2000.
[5] H. Ian and F. Eibe, Data Mining: Practical Machine Learning Tools and Techniques. San Francisco, CA: Morgan Kaufmann Publishers, 2005.
[6] S. Haykin, Neural Networks - A Comprehensive Foundation. Delhi: Pearson Prentice Hall, 2005.

271


Comparison of Numerical and Neural Network Forward Kinematics Estimations for Stewart Platform Manipulator

Dereje Shiferaw
Department of Electrical Engineering, Adama University, Adama, Ethiopia
dnderejesh@gmail.com

R. Mitra
Department of Electronics and Computer Engineering, IIT Roorkee, Roorkee, India
rmtrafec@iitr.ernet.in

Abstract- The Stewart platform manipulator is a 6 degree of freedom (DOF) manipulator with higher stiffness and precision than serial manipulators. Unlike serial manipulators, however, its forward kinematics is not direct; it is nonlinear and complicated. The present work exhaustively compares the performance of two methods, the Newton Raphson numerical method and a feed forward neural network, in terms of the estimation error for position and orientation and the average computation time, over various trajectories. The simulation results show that the numerical algorithm, irrespective of the initial conditions taken, always performs worse than the neural network for trajectories containing pitch motion. Moreover, the numerical algorithm takes a longer average time, while the three-layer feed forward neural network takes less average time with a uniform estimation error for all trajectories.

Keywords- forward kinematics, Newton Raphson method, neural network, Stewart platform manipulator

I. INTRODUCTION

THE Stewart platform manipulator is a 6 degree of freedom (DOF) parallel manipulator having a fixed base and a moveable platform connected by six extensible legs [2]. It has the advantages of high-precision positioning capacity, high structural rigidity and strong carrying capacity. Potential areas of application include flight and other motion simulators, light-weight and high-precision machining, data-driven manufacturing, dexterous surgical robots and active vibration control systems for large space structures [2][6][14]. The dynamic modeling, and hence the control, of the manipulator can be done either in the joint space, in terms of the lengths of the legs, or in the task space, in terms of the Cartesian position and orientation of the moveable platform [15]. Due to the nonlinear coordinate transformation, dynamic modeling in the joint space is complex and the task space approach is preferred. This approach calls for the use of the forward kinematics [15]. The forward kinematics problem in the Stewart platform is to find the actual position and orientation of the moveable platform for a given set of leg lengths. Because of its importance in task space control, this problem has been a research issue for more than two decades and various solutions have been reported [2]-[16]. Whereas numerical solutions like Newton Raphson [3] have the problem of dependence on the initial guess, analytical solutions [8]-[12] give multiple solutions and are also computationally complex.

To solve the problems of the numerical and analytical solutions, feed forward neural networks [5][7] and the Cerebellar Model Arithmetic Computer (CMAC) network [16] have been tried. The rationale for using a neural network to solve the forward kinematics problem lies in its nonlinear function estimation capacity. The network is trained offline using input-output data obtained from the inverse kinematics and is then used for online estimation in a feedback loop. After the network is trained, it has to be tested for its generalization capacity. In [5][16] the training performance is given, but generalization is not tested with various trajectories. In [4] the output of a feed forward neural network is used as the initial guess for the Newton Raphson method, but the Newton Raphson algorithm may take a long time to converge to a solution, whatever the initial guess, when a root is at an inflectional tangent; moreover, using a neural network and Newton Raphson in cascade increases the total estimation time. In this paper we report the performance of a feed forward neural network with a sufficient number of hidden layers and neurons and show that its performance is better for the trajectories where the Newton Raphson method fails to give a small error. The outline of the paper is as follows: Section II discusses the kinematic modeling, Sections III and IV present the numerical algorithm and the neural network implementation and training respectively, Section V is the discussion, and the last section is the conclusion.

II. KINEMATIC MODELING

A general 6 DOF Stewart platform manipulator with hexagonal base and platform is shown in Fig. 1. It consists of a base with joints Bi (i = 1, 2, ..., 6) and a platform with joints Pi (i = 1, 2, ..., 6) joined by six extendable legs. Each leg is attached to the base by a universal joint and to the platform by a spherical joint, and the length of each leg is controlled by an actuated prismatic joint. A reference frame Fb (Ob; Xb, Yb, Zb) and a coordinate frame Fp (Op; Xp, Yp, Zp) are attached to the base and the platform respectively. The position vector of the center of the universal joint Bi in frame Fb is bi = [bix, biy, biz] and the position vector of the center of the spherical joint Pi in frame Fp is pi = [pix, piy, piz]. Let r = [rx, ry, rz] be the position of the origin Op with respect to Ob, and let R denote the orientation of frame Fp with respect to Fb. The Cartesian-space position and orientation of the platform is then specified by X = [rx, ry, rz, ψ, θ, φ], where ψ, θ, φ are the yaw-pitch-roll rotation angles that constitute the transformation matrix R. For Fig. 1 the vector equation

ObBi + BiPi = ObOp + OpPi    (1)

can be written, where ObBi is the vector bi, ObOp is the vector r and OpPi is the vector pi. Substituting these symbols and writing all vectors in the base frame,

BiPi = R pi + r - bi    (2)

The magnitude of BiPi is the length of the i-th leg. Hence the inverse kinematic equation, which gives the length li of the legs for a given platform position and orientation, is

li = ||R pi + r - bi||    (3)

The solution of (3) is unique for a given platform position r and orientation R and can be calculated directly; this constitutes the solution of the inverse kinematics problem. The forward kinematics problem is to find the actual Cartesian-space position and orientation X = [rx, ry, rz, ψ, θ, φ] given a set of leg lengths l1, l2, ..., l6. This is a nonlinear problem and it has no direct solution.

Figure 1. Generalized Stewart platform.

III. NUMERICAL METHOD

Taking the three orientation angles as the standard yaw-pitch-roll angles, the transformation matrix can be written as

R = Rz(ψ) Ry(θ) Rx(φ)    (4)

Let rij be the element in the i-th row and j-th column of the transformation matrix; then, multiplying out and taking the magnitude of the vector, (2) becomes

li² = (r11 pix + r12 piy + r13 piz + rx - bix)² + (r21 pix + r22 piy + r23 piz + ry - biy)² + (r31 pix + r32 piy + r33 piz + rz - biz)²    (5)

In the numerical method, an estimated Cartesian position (rx, ry, rz) and estimated orientation angles (ψ, θ, φ) are first taken. The corresponding leg lengths are then calculated using the inverse kinematics equation (5), and the error between the calculated leg lengths and the measured values is used to adjust the estimated values; the iteration continues until a tolerable error value is achieved. In the Newton Raphson method [3][14], the next estimate is calculated using

x(k+1) = x(k) + J⁻¹ F(x(k))    (6)

where

F(x(k)) = [f1(x(k)), f2(x(k)), ..., f6(x(k))]ᵀ    (7)

fi(x(k)) = li - sqrt( (r11 pix + r12 piy + r13 piz + rx - bix)² + (r21 pix + r22 piy + r23 piz + ry - biy)² + (r31 pix + r32 piy + r33 piz + rz - biz)² )    (8)

and J is the Jacobian matrix obtained by differentiating (7) with respect to the Cartesian position and orientation variables; its i-th row is

Ji = [Ji1  Ji2  Ji3  Ji4  Ji5  Ji6]    (9)

where
Ji1 = (r11 pix + r12 piy + r13 piz + rx - bix)/li
Ji2 = (r21 pix + r22 piy + r23 piz + ry - biy)/li
Ji3 = (r31 pix + r32 piy + r33 piz + rz - biz)/li
and Ji4, Ji5 and Ji6 are formed from Ji1, Ji2 and Ji3 together with the partial derivatives of the rotation matrix entries with respect to the three orientation angles (10)-(13).

The Newton Raphson numerical algorithm converges to a solution in four or five iterations in most cases, but this depends on the initial guess taken and on the trajectory followed, as can be seen from the simulation results given in Section V.
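To make the iteration of (6) concrete, the following is a minimal MATLAB sketch (not the authors' implementation): it uses the inverse kinematics (3) to evaluate the leg-length residual and a finite-difference Jacobian in place of the analytic expressions (9)-(13). The joint coordinates, the true pose and the initial guess are placeholder values chosen only so the script runs stand-alone.

% Newton-Raphson forward kinematics sketch for a Stewart platform.
rb = 2.8; rp = 2.5;                        % base and platform radii (Table I)
ang = (0:5)*pi/3;                          % placeholder joint angles
B = [rb*cos(ang); rb*sin(ang); zeros(1,6)];                % base joints b_i
P = [rp*cos(ang + pi/6); rp*sin(ang + pi/6); zeros(1,6)];  % platform joints p_i

legs  = @(x) invkin(x, B, P);              % inverse kinematics (3)
xtrue = [0.05 -0.02 2.2 0.02 0.03 -0.01]'; % pose used to generate "measured" lengths
lmeas = legs(xtrue);

x = [0 0 2.0 0 0 0]';                      % initial guess
for k = 1:10
    F = lmeas - legs(x);                   % leg-length residual, cf. (6)-(8)
    J = zeros(6,6); h = 1e-6;              % finite-difference Jacobian
    for c = 1:6
        e = zeros(6,1); e(c) = h;
        J(:,c) = (legs(x+e) - legs(x-e))/(2*h);
    end
    x = x + J\F;                           % Newton-Raphson update, cf. (6)
    if norm(F) < 1e-9, break, end
end
disp([xtrue x])                            % true pose vs. recovered pose

function l = invkin(x, B, P)
    cy = cos(x(4)); sy = sin(x(4));        % yaw
    cp = cos(x(5)); sp = sin(x(5));        % pitch
    cr = cos(x(6)); sr = sin(x(6));        % roll
    Rz = [cy -sy 0; sy cy 0; 0 0 1];
    Ry = [cp 0 sp; 0 1 0; -sp 0 cp];
    Rx = [1 0 0; 0 cr -sr; 0 sr cr];
    R  = Rz*Ry*Rx;                         % R = Rz Ry Rx, Eq. (4)
    l  = zeros(6,1);
    for i = 1:6
        l(i) = norm(R*P(:,i) + x(1:3) - B(:,i));   % Eq. (3)
    end
end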

IV. FEED FORWARD NEURAL NETWORK

Neural networks are massively parallel distributed processing systems made up of highly interconnected processing elements that have the ability to learn, acquire knowledge and make it available for use. They are efficient in problems like the forward kinematics problem, where input-output data are readily available but it is difficult to obtain simple, workable mathematical relations. In the forward kinematics problem, input-output data can easily be generated using the inverse kinematics (3). The classical theory of function approximation supports the use of neural networks for function approximation: a feed forward network with a sufficient number of neurons in its hidden layers can generally approximate any continuous function to any desired accuracy. In the forward kinematics problem the required functional mapping is from the 6 measured joint (leg length) values to the three Cartesian position and three orientation angle values. The effectiveness of a neural network in solving such a problem is measured by the complexity of the network, indicated by the number of neurons and weights, relative to the complexity of the problem itself. Fig. 2 shows a fully connected feed forward neural network with three inputs and two outputs.

Figure 2. Three-layer feed forward neural network.

V. DISCUSSION AND SIMULATION

For the simulation study, a Stewart platform manipulator with planar base and platform having the geometric specifications given in Table I is used [14].

TABLE I. Geometric specifications of the Stewart platform
Base radius: 2.8 m
Platform radius: 2.5 m
(base and platform joint positions as given in [14])

TABLE II. Cartesian position and orientation limits of the center of the platform
x: [-0.25 m, 0.25 m]
y: [-0.25 m, 0.25 m]
z: [1.8 m, 3 m]
yaw, pitch, roll: [-π/18, π/18] rad each

A. Numerical Method
The Newton Raphson numerical algorithm is implemented in MATLAB, similarly to [3][9], and is tested for different trajectories. The maximum error and the time taken to converge for the different trajectories are tabulated in Table IV. The results show that the numerical algorithm gives a very small maximum error for trajectories in which there is no pitch motion, but the time taken to converge is relatively long.

B. Feed Forward Neural Network
Two-layer and three-layer feed forward neural networks trained with the backpropagation algorithm are used to estimate the forward kinematics. To select the most suitable network size, networks of different sizes and training data are trained offline. The training data are generated by taking random Cartesian-space positions and orientations and calculating the corresponding leg lengths using the inverse kinematics formula (3). The range of values for the workspace is given in Table II. The performance of the different networks with respect to training time and mean squared error is given in Table III. Generally, the three-layer networks perform better than the two-layer networks but have longer training times. Among the three-layer networks taken, the training performance improves as the number of hidden-layer neurons increases, as expected, but the trend stops after some point: the MSE of the last network, having 30 and 35 hidden neurons, is larger than that of the network having 25-35 neurons. One reason is that, for good training performance, the ratio of the number of tunable parameters to the training data size has to be very small; here the network size has increased while the training data size is the same. For the last network the number of tunable parameters is 1511 (for the 6-30-35-6 structure, (6+1)x30 + (30+1)x35 + (35+1)x6 = 1511 weights and biases) and the ratio is 0.252. Increasing the training data size increases the training time and the required memory drastically. Therefore, for the rest of the simulations the network with 25-35 hidden-layer neurons is used. Fig. 3 shows the training performance of this three-layer network. This network is then tested and its estimation performance compared with that of the numerical method when the manipulator moves along different trajectories; Table IV gives the results. Trajectories 1-3 are circular, 4-6 are helical and 7 is linear, and they are shown in Fig. 4 and Fig. 5. The numerical algorithm gives a better result, in terms of estimation error, for all trajectories except 2, 5 and 7; these trajectories contain pitching motion of the platform and have larger errors. The algorithm has been checked with different initial conditions and was also tested with a different geometry, as given by [6], but the result was the same. The trained neural network gives a more or less constant estimation error for all trajectories, and the time taken is always less than that of the numerical method. Therefore the neural network can estimate the forward kinematics better than the numerical method for applications that need millimetre accuracy.

TABLE III. Comparison of different networks based on their training performance
Network used      Network size   MSE        Training time (s)
2-layer network   10-6           1.38e-5    1.58e3
                  15-6           1.64e-6    2.49e3
                  40-6           9.1e-7     8.5e3
3-layer network   10-10-6        2.90e-6    3.1e3
                  15-15-6        5.89e-7    3.5e3
                  15-25-6        2.90e-7    1.19e4
                  17-35-6        6.25e-8    2.06e4
                  25-35-6        3.64e-9    2.89e4
                  30-35-6        2.35e-7    4.38e4

TABLE IV. Comparison between the performance of the numerical and NN methods with respect to trajectory tracking
                 Numerical error*        NN error*               Average time (ms)
Trajectory       Max error1  Max error2  Max error1  Max error2  Numerical  NN
1                0           0           0.12        0.15        15.2       9.4
2                5.8         32.5        0.53        0.38        20.5       10.6
3                0           0           1.4         0.9         18         9.8
4                0           0           1.1         0.9         20         11.6
5                5.6         31.6        1.1         0.9         20         11.8
6                0           0           1.2         1.0         19.1       11.8
7                5.4         31.4        0.086       0.052       21.8       11.0
* errors in mm and 10^-3 rad; error1 and error2 are the position error and the orientation angle error respectively.

CONCLUSION

The forward kinematics problem of a general Stewart platform manipulator with planar base and platform has been formulated and two important solutions, the numerical Newton Raphson method and the neural network method, have been compared using seven different trajectories. The numerical algorithm performs very well for trajectories that do not contain pitching motion but gives a larger error when the trajectory has pitching motion; the neural network, on the other hand, gives a consistent estimation error for all trajectories. When the average time taken is compared, the neural network responds faster than the numerical algorithm, which shows that the neural network can be used for faster manipulators. Hence a feed forward neural network can be used to estimate the forward kinematics of the Stewart platform manipulator for faster applications.

REFERENCES
[1] A. Ghobakhloo, M. Eghtesad and M. Azadi, "Position Control of Stewart-Gough Platform Using Inverse Dynamics with Full Dynamics", IEEE 9th Int. Workshop on Advanced Motion Control (AMC'06), pp. 50-55, Istanbul, Turkey, 2006.
[2] B. Dasgupta and T. S. Mruthyunthaya, "The Stewart Platform Manipulator: a review", Mechanism and Machine Theory, vol. 35, no. 1, pp. 15-40, 2000.
[3] Charles C. Nguyen, Zhen-Lie Zhou and Sami S. Antraz, "Efficient Computation of Forward Kinematics and Jacobian Matrix of a Stewart Platform Based Manipulator", Proc. of the 1997 IEEE Int. Conf. on Robotics and Automation, vol. 1, pp. 869-874.
[4] M. Durali and Shameli, "Full Order Neural Velocity and Acceleration Observer for a General 6-6 Stewart Platform", Proc. of the 2004 IEEE Int. Conf. on Networking, Sensing and Control, Taipei, Taiwan, March 21-23, 2004, pp. 333-338.
[5] H. Sadjadian, H. D. Taghirad and A. Fatehi, "Neural Network Approaches for Computing the Forward Kinematics of Hydraulic Shoulder", International Journal of Computational Intelligence, vol. 2, no. 1, pp. 40-47.
[6] Kai Liu, John M. Fitzgerald and Frank L. Lewis, "Kinematic Analysis of a Stewart Platform Manipulator", IEEE Trans. on Industrial Electronics, vol. 40, no. 2, pp. 282-293, April 1993.
[7] Lee Hyung San and Myung-Chul Han, "The Estimation for Forward Kinematics of Stewart Platform Using the Neural Network", Proc. of the 1999 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 501-506.
[8] Mahmoud Tarokh, "Real Time Forward Kinematics Solution for General Stewart Platform", 2007 IEEE Int. Conf. on Robotics and Automation, pp. 901-906, 10-14 April 2007, Rome, Italy.
[9] Oliver Didrit, Michel Petitot and Eric Walter, "Guaranteed Solution of Direct Kinematic Problem for General Configuration of Parallel Manipulators", IEEE Trans. on Robotics and Automation, vol. 14, no. 2, pp. 259-266, April 1998.
[10] Ping Ji and Hongtao Wo, "Closed Form", IEEE Trans. on Robotics and Automation, vol. 17, no. 24, pp. 522-526, August 2001.
[11] Prabjot Nanua, Kenneth J. Waldron and Vasudeva Murthy, "Direct Kinematic Solution of a Stewart Platform", IEEE Trans. on Robotics and Automation, vol. 6, no. 4, pp. 438-444, August 1990.
[12] Se-Kyong Song and Dong-Soo Kwon, "Efficient Formulation Approach for the Forward Kinematics of the 3-6 Stewart-Gough Platform", Proc. of the 2001 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1688-1693, Oct. 29-Nov. 03, 2001, Maui, Hawaii, USA.
[13] Y. X. Su and B. Y. Duan, "The Application of the Stewart Platform in Large Spherical Radio Telescopes", Journal of Robotic Systems, vol. 17, no. 7, pp. 375-383, 2000.
[14] Yamin Wan and Sunan Wang, "Kinematic Analysis and Simulation System Realization of Stewart Platform Manipulator", The Fourth Int. Conf. on Control and Automation (ICCA'03), pp. 780-784, 10-12 June 2003, Montreal, Canada.
[15] Yung Ting, Yu-Shin Chen and Shih-Ming Wang, "Task Space Control Algorithm for Stewart Platform", Proc. of the 38th Conf. on Decision and Control, pp. 3857-3862, Dec. 1999, Phoenix, Arizona, USA.
[16] Z. Gang and L. Hynes, "Neural Network Solution for Forward Kinematics Problem of a Stewart Platform", IEEE Int. Conf. on Robotics and Automation, Sacramento, California, 1991, pp. 2650-2655.


A Novel Technique for Maximum Power Point Tracking of Photo-Voltaic Energy Conversion System
B.Chitti Babu, R.Vigneshwaran, R. Sudharshan Kaarthik, Nayan Kumar Dalei, Rabi Narayan Das Department of Electrical Engineering, National Institute of Technology, Rourkela, INDIA E-mail: bcbabu@nitrkl.ac.in,vignesh.nitrkl@gmail.com,sudharshan.kaarthik@gmail.com, nayandalei@gmail.com,
Abstract: This paper presents a novel technique for maximum power point tracking of a photovoltaic (PV) energy conversion system. To obtain the maximum power from a PV array, two parameters are usually considered, namely solar irradiation and temperature, and most research work has been carried out considering these two parameters. In this paper a new technique is proposed in which maximum power is extracted when the load current is 0.9 times the short-circuit current. The proposed technique gives optimum utilization of the PV array and enhances the application of PV systems for both stand-alone and grid-connected systems. The study has been carried out in the MATLAB-Simulink environment via the graphical user interface, and validation of the simulated results against the theoretical results shows close agreement.

Keywords- Converter, Maximum power point tracking (MPPT), photovoltaic, solar irradiation

I. INTRODUCTION

IN India there are about 300 clear sunny days in a year and solar energy is available in most parts of the country, including the rural areas. Still, we have miles to cover before solar power is effectively utilized to replace fossil fuels and becomes a cheap and effective solution for domestic and commercial applications. With the growing demand for renewable sources of energy, the manufacturing of solar cells and photovoltaic arrays has advanced dramatically in recent years, and their efficient usage has led to an increasing role of photovoltaic technology as a scalable and robust means of renewable energy. This paper presents a photovoltaic energy conversion system for converting solar power into usable DC at 5 V for charging low-power devices such as mobile phones. The energy obtained from the photovoltaic module is directly usable (approximately 12 V), but charging applications require a 5 V DC supply; the 12 V DC obtained from the PV module is therefore stepped down to 5 V by a DC-DC buck converter. Fig. 1 shows the complete structure of the PV energy conversion system, which comprises the PV array with a DC-DC buck converter. The inductor design of the buck converter circuit, a significant part of the converter design, is discussed in detail. For efficient usage of the photovoltaic energy conversion system it is essential to design a maximum power point tracking (MPPT) system, so this paper also presents the modeling of the PV cell along with MPPT with a buck converter. The concept of MPPT is to automatically vary the PV array's operating point so that it produces its maximum output power; this is necessary because a PV cell has a non-linear current-voltage relation. The power delivered by the array increases to a maximum as the current drawn rises, and then the voltage falls suddenly, making the power zero. A boost converter is not preferred here because it cannot track the maximum power point at low radiation levels, as this point is located in its non-operating region. Simulation of the whole system has been carried out in the MATLAB-Simulink environment via the graphical user interface.

II. PV SYSTEM CONFIGURATION

Fig.1. Block diagram of the PV system (photovoltaic module feeding the load through a DC-DC converter).

A. Photovoltaic Array Simulation
By arranging PV solar cells in series and parallel combinations, the complete desired PV array is formed. These cells are generally represented by a simplified equivalent-circuit model such as the one given in Fig. 2.

Fig.2. Simplified equivalent circuit of a PV cell (photocurrent source, diode and series resistance Rs).

The PV cell output voltage is a function of the photocurrent, which is mainly determined by the load current and the solar radiation level during operation:

(1)

where the symbols are defined as follows:
e: electron charge (1.602 x 10^-19 C)
k: Boltzmann constant (1.38 x 10^-23 J/K)
Ic: cell output current (A)
Iph: photocurrent, a function of irradiation level and junction temperature (5 A)
I0: reverse saturation current of the diode (0.0002 A)
Rs: series resistance of the cell (0.001 ohm)
Tc: reference cell operating temperature (20 °C)
Vc: cell output voltage (V)

The voltage obtained from Eq. (1) is the voltage of a single solar cell; it is multiplied by the number of cells connected in series to calculate the total array voltage. For a certain cell operating temperature and the corresponding solar insolation level, the cell current is obtained by dividing the array current by the number of cells connected in parallel. Changes in temperature and insolation are reflected in the voltage and current outputs of the PV array, so the effects of these changes are taken into consideration in the final PV array model. A methodology has been devised to handle the effects of changes in temperature and insolation: a model is obtained for a known temperature and solar insolation level and is later modified to handle changes in temperature and insolation level. Using correction factors, the new values of the cell output voltage and photocurrent are obtained for the new temperature and solar irradiation as follows:

(2)
(3)

where the correction factors are built from the temperature coefficients and from the benchmark reference cell output voltage and reference cell photocurrent.
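Since the exact form of Eq. (1) is not reproduced here, the following MATLAB sketch uses a standard single-diode relation only to illustrate the kind of model described above; the curve-fitting (ideality) factor Af and the current sweep are assumptions, while the constants follow the symbol list.

% Illustrative single-diode PV cell model (standard form, assumed).
e   = 1.602e-19;          % electron charge (C)
k   = 1.38e-23;           % Boltzmann constant (J/K)
Tc  = 20 + 273.15;        % reference cell temperature (K)
Iph = 5;                  % photocurrent (A)
I0  = 0.0002;             % diode reverse saturation current (A)
Rs  = 0.001;              % series resistance (ohm)
Af  = 1.6;                % assumed curve-fitting (ideality) factor

Ic = linspace(0, 0.99*Iph, 200);                       % cell output current sweep
Vc = (Af*k*Tc/e)*log((Iph + I0 - Ic)/I0) - Rs*Ic;      % cell output voltage
Pc = Vc.*Ic;                                           % cell output power

[Pmp, idx] = max(Pc);
fprintf('MPP of one cell: %.3f W at %.3f V, I/Isc = %.2f\n', ...
        Pmp, Vc(idx), Ic(idx)/Iph);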

B. The Buck Converter
These converters produce a lower average output voltage than the DC input voltage. Fig. 3 shows a step-down (buck) converter. It consists of a DC input voltage source Vs, an inductor L, a controlled switch (MOSFET), a diode D, a filter capacitor C and a load resistance R.

Fig.3. Buck converter.

III. INDUCTOR DESIGN

The inductor of the buck converter is designed at the maximum input voltage, which represents the worst case for all the inductor parameters: the core loss, the peak or RMS inductor current, the copper loss, the temperature rise, the energy the inductor must handle and the peak flux density. D is defined as the duty cycle and r as the ripple current ratio; r is related to the inductance through equation (4), in which the applied volt-seconds, the maximum rated load current (A) and the inductance L (H) appear. The duty cycle is given by (5), which accounts for the diode forward voltage drop and for the drop across the switch when it is ON plus any parasitic voltage.

Fig.4. Inductor current waveform (continuous mode at maximum load and the critical boundary of discontinuous mode; intervals DT and (1-D)T).

In Fig. 4, Io is the current delivered to the load; since the average current through the output capacitor in steady state is zero (as for any capacitor), the peak inductor current is Io + ΔI/2, and it determines the peak energy in the core (e = 0.5 L I_peak²), which in turn is directly related to the peak field the core must withstand without saturating. The trough current Io - ΔI/2 determines the constant residual level of current (energy) in the inductor; it depends on the load even though it is not itself transferred to the load. The AC component of the inductor current is ΔI = I_peak - I_trough (6), and the DC component is the load current Io, the maximum rated load for the case shown in Fig. 4. The ripple ratio r so defined is a constant for a given converter/application (it is calculated only at maximum load) and is defined only for continuous conduction mode (7). A high inductance reduces ΔI and results in a lower r (and a lower RMS current in the output capacitor), but may lead to a very large and impractical inductor; practically, for this buck converter, r is chosen in the range 0.25-0.5 at the maximum rated load. From the general rule V = L dI/dt, applied during the ON time of the converter, we obtain (8), where f is the switching frequency in Hz and the switch ON time is given by (9). Solving for ΔI, r can be written as (10), and L is therefore obtained from (11); this value will guarantee continuous conduction for all D. The inductor is designed for the specific scheme used in this paper.

IV. MAXIMUM POWER POINT TRACKING

The photovoltaic system displays an inherently nonlinear current-voltage (I-V) relationship, requiring an online search for, and identification of, the optimal maximum power operating point. The MPPT controller is a power-electronic DC/DC chopper or DC/AC inverter system inserted between the PV array and its electric load to achieve optimum characteristic matching, so that the PV array is able to deliver the maximum available power, which is also necessary to maximize the photovoltaic energy utilization. A PV cell has a single operating point where the values of the current and voltage of the cell result in a maximum power output; these values correspond to a particular resistance, equal to V/I as specified by Ohm's law. The PV cell also has an exponential relationship between current and voltage, so the maximum power point (MPP) occurs at the knee of the curve, where the resistance is equal to the negative of the differential resistance. Any additional current drawn from the array results in a rapid drop-off of the cell voltage, thus reducing the array power output. The aim of the MPPT subsystem is to determine just where that point is and to regulate the current accordingly, allowing the converter circuit to extract the maximum power available from the cell. The control methodology presented in this paper adopts an approach in which the power converter design uses the relationship between the short-circuit current Isc and the MPP current Impp. Simulations with various sample data for Isc and Impp show that the ratio of Impp to Isc remains nearly constant at 0.9. One such control scheme is shown in Fig. 5. Determining the MPP for a specific insolation condition and operating the converter at this condition is the critical part of the design of the PV conversion system. Initially the short-circuit current is measured and then the actual load current is adjusted so that it equals the desired fraction (0.9) of Isc.
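To make the converter sizing of Section III and the 0.9·Isc rule of Section IV concrete, the following MATLAB sketch uses standard buck-converter relations (assumed rather than taken verbatim from Eqs. (4)-(11)); the switching frequency, load current, diode and switch drops and the short-circuit current are assumed values.

% Textbook buck inductor sizing for the 12 V -> 5 V stage, plus MPPT reference.
Vin  = 12;       % PV module voltage (V)
Vout = 5;        % charging output voltage (V)
Vd   = 0.5;      % assumed diode forward drop (V)
Vsw  = 0.2;      % assumed MOSFET on-state drop (V)
Io   = 1.0;      % assumed maximum rated load current (A)
f    = 100e3;    % assumed switching frequency (Hz)
r    = 0.4;      % ripple current ratio, chosen in the 0.25-0.5 range

D    = (Vout + Vd)/(Vin - Vsw + Vd);        % duty cycle including the drops
dI   = r*Io;                                % peak-to-peak ripple current
L    = (Vin - Vsw - Vout)*D/(dI*f);         % from V = L dI/dt over the ON time
Ipk  = Io + dI/2;                           % peak inductor current
Epk  = 0.5*L*Ipk^2;                         % peak energy handled by the core

Isc  = 5;                                   % assumed measured short-circuit current (A)
Iref = 0.9*Isc;                             % MPPT current reference (Section IV)

fprintf('D = %.3f, L = %.1f uH, Ipeak = %.2f A, Epeak = %.1f uJ, Iref = %.2f A\n', ...
        D, L*1e6, Ipk, Epk*1e6, Iref);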
Fig.5. Control scheme for MPPT (the sensed short-circuit current is low-pass filtered, scaled by 0.9 and compared with the filtered load current to drive the controller and the MOSFET driver).

Measurement of the current is done by switching ON the MOSFET in Fig. 5 for an extended interval, which in effect momentarily shorts the panel. In this approach an extended pulse is applied to the MOSFET once in several switching cycles, so that the short-circuit current is sensed at that switching instant.

V. RESULTS AND DISCUSSION

The complete PV power conversion system was developed and simulated in MATLAB-Simulink. Power MOSFET switches were used in developing the power converter along with the MPPT. Fig. 6 is the complete simulation block for PV with MPPT.

Fig.6. Simulation block of PV with MPPT.
Fig.7. Current/power vs. voltage plot with MPPT.

Fig. 7 shows the simulation output curves and clearly shows the operating-point condition. The I-V and P-V curves are plotted for a typical PV array exposed to the standard solar insolation level (intensity of 1 kW/m²). The standard design approach shows that an increased number of cells can provide a nominal level of usable charging current for the normal range of solar insolation. In Fig. 8, zero current indicates the open-circuit condition, so the voltage at that point gives the open-circuit voltage of the PV array; similarly, zero voltage indicates the short-circuit condition, and the current at this point is used to determine the optimum value of current drawn for maximum power. The value of the maximum current increases with temperature.

Fig.8. Plot for output current vs. voltage.

Fig. 9 is the plot of power versus voltage of the PV module, plotted for different values of temperature. From this curve it is seen that the maximum power decreases as the temperature increases.

Fig.9. Plot for output power vs. voltage.

Figs. 10 and 11 show the output curves of the buck converter: Fig. 10 shows the output current versus time (in milliseconds) and Fig. 11 shows the buck converter output voltage versus time (in milliseconds).

Fig.10. Response of the output current of the buck converter.
Fig.11. Response of the output voltage of the buck converter.

VI. CONCLUSION

The modeling of the photovoltaic array has been validated against the theoretical results obtained, and a GUI has been developed in the Simulink environment. An attempt has been made to develop an economical photovoltaic energy conversion system for charging low-power devices. The proposed inductor design has shown significant output in accordance with the desired performance. A robust MPPT methodology has been devised, which can prove more useful in terms of providing usable voltage with very little fluctuation.

VII. ACKNOWLEDGEMENTS

We thank our honourable Director, NIT Rourkela, for giving financial assistance to carry out this work successfully, and we also thank our project guide Prof. B. Chitti Babu for his guidance, support, motivation and encouragement throughout the period this work was carried out.

REFERENCES
[1] S. Yuvarajan, Dachuan Yu and Shanguang Xu, "A novel power converter for photovoltaic applications", Elsevier Journal of Power Sources, June 2004.
[2] Y. Kuo, et al., "Maximum-power-point-tracking controller for photovoltaic energy conversion system", IEEE Trans. Ind. Electron., vol. 48, no. 3, pp. 594-601, 2001.
[3] D. L. King, J. H. Dudley and W. E. Boyson, "PVSIM: a simulation program for photovoltaic cells, modules and arrays", in: Proceedings of the 25th IEEE Photovoltaic Specialists Conference, May 1996.
[4] I. H. Altas and A. M. Sharaf, "A Photovoltaic Array Simulation Model for Matlab-Simulink GUI Environment", 1-4244-0632-3/07, IEEE.
[5] C. G. Steyn, "Analysis and optimization of regenerative linear snubbers", IEEE Trans. Power Electron., vol. 4, no. 3, pp. 362-370, 1989.
[6] I. H. Altas and A. M. Sharaf, "A Novel On-Line MPP Search Algorithm for PV Arrays", IEEE Transactions on Energy Conversion, vol. 11, no. 4, December 1996.
[7] M. Buresch, Photovoltaic Energy Systems: Design and Installation, McGraw-Hill, New York, 1983.


Noise Cancellation in Hand Free Cellphone Using Neurofuzzy Filter


A.Q. Ansari
Department of Electrical Engg., Jamia Millia Islamia, New Delhi 110025
aqansari@ieee.org

Neeraj Gupta
Department of Electrical Engg., Krishna Institute of Engg. & Technology, Ghaziabad 201206
neeraj_gupta2000@yahoo.com

Abstract- In this paper a neurofuzzy filter (NFF) is presented which is based on fuzzy if-then rules (structure learning) and on the tuning of the parameters of the membership functions (parameter learning). In the structure learning, fuzzy rules are found based on the matching of input-output clusters. In the parameter learning, the consequent parameters are tuned optimally by either the least mean square (LMS) or the recursive least squares (RLS) algorithm, and the precondition parameters are tuned by the backpropagation algorithm. Both structure and parameter learning are performed simultaneously as the adaptation proceeds. Noise cancellation in a hand-free cellphone by the proposed NFF is illustrated; good performance is achieved, with a root mean square error (RMSE) of 0.0440 and a maximum instantaneous error of 0.02.

Keywords- Structure learning; Parameter learning; Neurofuzzy filters; Noise cancellation.

I. INTRODUCTION

Adaptive filtering has achieved widespread application and success in many areas such as control, image processing and communications [1]. In classical filtering theory, signal estimation based on a priori knowledge of both the desired signal and the noise largely depends on the linearity of the model and on stationary mathematical methods, but in the case of high nonlinearity the estimation performance may become unsatisfactory. For such cases, neurofuzzy filtering may provide a better solution to the noise filtering problem [2-5]. Neural networks are composed of a large number of highly interconnected processing elements (nodes), which are connected through weights. When looking into the structure and parameter learning of neural networks, many points in common with the methods used in adaptive signal processing can be found: the backpropagation algorithm used to train a neural network is a generalized Widrow least mean square (LMS) algorithm and can be contrasted with the LMS algorithm usually used in adaptive filtering. Many kinds of nonlinear filters designed using neural networks have been proposed. One of them is the neural filter, whose learning algorithm is shown to be more efficient than Lin's adaptive stack filtering algorithm [6]; this class of neural filters is based on threshold decomposition and neural networks, and is divided into hard neural filters (whose activation functions are unit steps) and soft neural filters (whose activation functions are sigmoid functions). Another kind of neural filter is the recursive filter obtained by training a recurrent multilayer perceptron (RMLP) [7]. Other applications of neural networks in adaptive filtering include nonlinear channel equalizers [8,9] and noisy speech recognition [10-12], where neural networks are used to map noisy input features into clean output features for recognition. A problem encountered in the design of neural filters is that the internal layers of neural networks are always opaque to the user, so it is not easy to determine the structure and size of a network. To enable a neural network to learn from numerical data as well as from expert knowledge expressed as fuzzy if-then rules, several approaches have been proposed [13-20]. To overcome the shortcomings encountered in neural filters, while still keeping their advantages, a neurofuzzy filter (NFF) is presented in this paper. The neurofuzzy filter learns the system behaviour from system input-output data and so does not use any mathematical modeling. After learning the system's behaviour, the neurofuzzy filter automatically generates fuzzy rules and membership functions, thus solving the key problem of fuzzy logic and reducing the design cycle very significantly. The neurofuzzy filter then verifies the solution (the generated rules and membership functions) and also optimizes the number of rules and membership functions. The fuzzy logic solution developed by the neurofuzzy approach solves the implementation problem associated with neural nets. Unlike conventional fuzzy logic, the neurofuzzy filter uses new defuzzification, rule inferencing and antecedent processing algorithms, which provide more reliable and accurate solutions; these new algorithms are developed based on the neural net structure. Finally, an automatic code converter converts the optimized solution (rules and membership functions) into embedded-controller assembly code.
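For reference, the following is a minimal MATLAB sketch of the classical LMS adaptive noise canceller that the introduction contrasts with neurofuzzy filtering; the signal shapes, the noise path, the filter order M and the step size mu are assumed values for illustration only.

% Classical LMS adaptive noise canceller (illustrative, not the NFF).
fs = 8000; t = (0:2*fs-1)'/fs;            % 2 s of "speech-band" signal
s  = sin(2*pi*300*t).*sin(2*pi*2*t);      % clean (desired) signal
v  = 0.5*randn(size(t));                  % noise source seen by a reference input
d  = s + filter([0.6 0.3 0.1], 1, v);     % primary input: signal + filtered noise

M  = 8;  mu = 0.01;                       % filter order and LMS step size
w  = zeros(M,1);  e = zeros(size(d));
x  = [zeros(M-1,1); v];                   % zero-padded reference for easy indexing
for n = 1:numel(d)
    xn   = x(n+M-1:-1:n);                 % latest M reference samples
    y    = w'*xn;                         % estimate of the noise in the primary input
    e(n) = d(n) - y;                      % error = cleaned signal estimate
    w    = w + mu*e(n)*xn;                % LMS weight update
end
fprintf('RMSE after cancellation: %.4f\n', sqrt(mean((e - s).^2)));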

II. STRUCTURE OF NEUROFUZZY FILTER

Fig. 1 shows the functional diagram of the proposed neurofuzzy filter (NFF). The neurofuzzy filter possesses the advantages of both neural and fuzzy systems: it brings the low-level learning and computational power of neural networks into fuzzy systems, and it brings the high-level, human-like thinking and reasoning of fuzzy systems into neural networks.

Figure 1. Functional diagram of the proposed NFF


Structure and parameter learning are used concurrently for the construction of the adaptive neurofuzzy filter. The structure learning includes the pre-condition and consequent structure identification. The pre-condition structure corresponds to the input space partitioning and is formulated as a combinational optimization problem with the objective of minimizing the number of rules generated and the number of fuzzy sets. The main task of consequent structure identification is to decide when to generate a new membership function and which significant terms (input variables) should be added to the consequent parts. Either the LMS or the recursive least squares (RLS) algorithm adjusts the parameters of the consequent parts, and the pre-condition parameters are adjusted by the backpropagation algorithm to minimize the cost function. During the learning process the system can be used at any time for normal operation without repeated training on the input-output patterns. Rules are created dynamically on receiving online training data by performing the following learning processes: 1. Input/output space partitioning; 2. Fuzzy rule and membership function generation; 3. Fuzzy rule verification and optimization; 4. Parameter learning. The first three steps belong to structure learning and the last step belongs to parameter learning. The various blocks of the adaptive neurofuzzy filter can be explained as follows:

A. Input / Output Space Partitioning

This block determines the number of rules extracted from the training data as well as the number of fuzzy sets on the universe of discourse of each input variable. A rule corresponds to a cluster in the input space, with $m_i$ and $D_i$ representing the center and the variance of the cluster. For an incoming pattern $x$, the strength with which a rule is fired can be interpreted as the degree to which the incoming pattern belongs to the corresponding cluster. For computational efficiency, the firing strength can be written as

$$F^{i}(x) \equiv u^{i} = \exp\!\big(-[D_{i}(x - m_{i})]^{T}[D_{i}(x - m_{i})]\big) \qquad (1)$$

where $F^{i}(x) \in [0,1]$ and the argument of the exponential is the distance between $x$ and the cluster center $m_i$. This criterion can be used for the generation of a new fuzzy rule. Let $x(t)$ be the newly arriving pattern and find

$$J = \arg\max_{1 \le j \le c(t)} F^{j}(x) \qquad (2)$$

where $c(t)$ is the number of existing rules at time $t$. If $F^{J} \le \bar{F}(t)$, a new rule is generated, where $\bar{F}(t) \in (0,1)$ is a prespecified threshold. In the system the cluster width is taken into account in this degree measure, so that in the vicinity of a cluster with larger width fewer rules will be generated than near a cluster with smaller width. Once a new rule is generated, the next step is to assign the initial center and width of the corresponding membership functions, which can be set as follows:

$$m_{[c(t)+1]} = x \qquad (3)$$

$$D_{[c(t)+1]} = \mathrm{diag}\!\left(\frac{-1}{\ln F^{J}}, \ldots, \frac{-1}{\ln F^{J}}\right) \qquad (4)$$

B. Fuzzy Rules and Membership Function Generator

The generation of a new input cluster corresponds to the generation of a new fuzzy rule, with its pre-condition part constructed by the learning algorithm in process 1. At the same time, the consequent part of the generated rule is decided. The algorithm is based on the fact that different preconditions of different rules may be mapped to the same consequent fuzzy set. Since only the center of each output membership function is used for defuzzification, the consequent part of each rule may simply be regarded as a singleton.

C. Fuzzy Rule Verifier and Optimizer

The generated fuzzy rules and membership functions can be verified by using the neurofuzzy rule verifier. Both a training set and a test set should be used for the verification process. If the generated rules and membership functions do not produce a satisfactory result, one can easily manipulate the appropriate parameters (e.g., more data, a smaller error criterion, the learning rate) so that the neural net learns more about the system behavior and finally produces a satisfactory solution. The number of rules and membership functions can also be optimized using the neurofuzzy rule verifier, which is another very important feature. This reduces the memory requirement and increases execution speed, two very desirable features for many applications. The optimization process may lose some accuracy, so a trade-off can be made between accuracy and cost.

D. Parameter Learning

After the network structure is adjusted according to the current training pattern, the network enters the parameter learning phase to adjust the parameters of the network optimally based on the same training pattern. Parameter learning is performed on the whole network after structure learning, no matter whether the nodes (links) are newly added or existed originally. Using backpropagation for supervised learning, the error function is

$$E = \frac{1}{2}\big[\,y(t) - y^{d}(t)\,\big]^{2} \qquad (5)$$

where $y^{d}(t)$ is the desired output and $y(t)$ is the current output. For each training data set, starting at the input nodes, a forward pass is used to compute the activity levels of all the nodes in the network to obtain the current output $y(t)$. Then, starting at the output nodes, a backward pass is used to compute $\partial E/\partial w$ for all the hidden nodes. Taking $w$ as the adjustable parameter in a node, the update rule used is

$$\Delta w = -\eta\,\frac{\partial E}{\partial w} \qquad (6)$$

$$w(t+1) = w(t) + \Delta w \qquad (7)$$

where $\eta$ is the learning rate and

$$\frac{\partial E}{\partial w} = \frac{\partial E}{\partial a}\cdot\frac{\partial a}{\partial w}$$

where $a$ is the activation function.
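As an illustration (not part of the original paper), the following Python sketch implements the cluster-based rule generation of Eqs. (1)-(4). The rule-generation threshold, the two-dimensional inputs, and the width assigned to the very first rule are assumptions chosen only for the demonstration.

```python
import numpy as np

# Minimal sketch of the structure-learning step of Eqs. (1)-(4).
# F_bar, the 2-D random inputs and the first-rule width are illustrative assumptions.

F_bar = 0.5                      # rule-generation threshold, F_bar in (0, 1)
centers, widths = [], []         # cluster centers m_i and diagonal widths D_i

def firing_strengths(x):
    """Eq. (1): F_i(x) = exp(-||D_i (x - m_i)||^2) for every existing rule."""
    return np.array([np.exp(-np.sum((D * (x - m)) ** 2))
                     for m, D in zip(centers, widths)])

def update_structure(x):
    """Eqs. (2)-(4): create a new rule when no cluster fires strongly enough."""
    if centers:
        F = firing_strengths(x)
        J = int(np.argmax(F))                          # Eq. (2)
        if F[J] > F_bar:
            return J                                   # x is covered by rule J
        D_new = np.full_like(x, -1.0 / np.log(F[J]))   # Eq. (4), as reconstructed
    else:
        D_new = np.ones_like(x)                        # first rule: assumed unit width
    centers.append(np.array(x, dtype=float))           # Eq. (3): m_new = x
    widths.append(D_new)
    return len(centers) - 1

rng = np.random.default_rng(0)
for x in rng.uniform(-1, 1, size=(200, 2)):
    update_structure(x)
print("rules generated:", len(centers))
```

Raising F_bar makes the input space partitioning finer (more rules), which is exactly the trade-off that the rule verifier and optimizer of block C is meant to manage.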


E. Automatic Code Converter


After a satisfactory solution is obtained, the automatic code converter converts the solution to an embedded processor's assembly code. Various options can be provided to have the code optimized for accuracy, speed, and/or memory.

III. NOISE CANCELLATION

The adaptive noise cancelling using NFF is shown in Fig. 2. An input signal contaminated by the noise is transmitted to the receiver.

Figure 2. Adaptive noise cancelling using NFF

The principle of the adaptive noise cancellation technique is to adaptively process (by adjusting the filter's weights) the reference noise c1(t) to generate a replica of c(t), and then subtract the replica of c(t) from the primary input x(t) to recover the desired signal h(t). We denote the replica of c(t), i.e., the adaptive filter output, as ĉ(t). With reference to the diagram, the received signal x(t) can be described as

$$x(t) = h(t) + c(t) = h(t) + f\big(n(t), n(t-1), \ldots, n(t-k)\big) \qquad (8)$$

where f represents the passage channel of the noise source n(t). The basic principle of adaptive noise cancellation is to estimate the desired signal from the corrupted one. The noise cancelling is based on the following assumptions [1]: 1) h, c, and the noise signal n are zero-mean processes (statistically stationary with zero means); 2) h is uncorrelated with n and c; 3) n and c are correlated through the function f. The recovered signal r(t), which serves as the error signal in the adaptive process, is given by

$$r(t) = h(t) + c(t) - \hat{c}(t) \qquad (9)$$

where ĉ(t) is the estimated output from the filter. By squaring, the expectation of (9) is given as

$$E[r(t)^2] = E[h(t)^2] + E\big[(c(t) - \hat{c}(t))^2\big] + 2E\big[h(t)\,(c(t) - \hat{c}(t))\big]. \qquad (10)$$

Based on the second assumption, the third term of (10) can be removed:

$$E[r(t)^2] = E[h(t)^2] + E\big[(c(t) - \hat{c}(t))^2\big]. \qquad (11)$$

In the nature of the application, the signal power of h(t), E[h(t)^2], remains unaffected while the adaptive filter is used to minimize the power of the recovered signal. In other words, in the adaptation of the error signal, the power difference between the contaminated signal and the filter output is minimized, i.e.

$$E_{\min}[r(t)^2] = E[h(t)^2] + E_{\min}\big[(c(t) - \hat{c}(t))^2\big]. \qquad (12)$$

From (12), the conclusion is deduced that minimizing the power of the signal r(t) is equivalent to minimizing the power of the noise. In this simulation, the neurofuzzy filter is trained to remove the noise in the adaptive noise cancelling process. Traditionally, the design of adaptive filters for the aforementioned noise cancelling problem is based upon a linear filter adapted by the LMS or RLS algorithm. In real situations, the environment between n(t) and c(t), or between n(t) and c1(t), is so complex that c(t) or c1(t) is in fact a nonlinear function of n(t) [21-23]. Higher noise cancellation performance by using a nonlinear filter can thus be expected. In order to demonstrate the capability and effectiveness of the NFF, a hands-free cellphone simulation is carried out.

IV. EXPERIMENTAL RESULT

An important application of the NFF involves a cellphone mounted on the vehicle control panel. The noise created by the engine affects the ongoing conversation, since it adds to the voice signal and enters the microphone. Thus the net output electrical signal of the microphone contains the two components noted above. Consider the human input source

$$h(t) = \sin\!\big(\sin(20\pi t)\, t/200\big) \qquad (13)$$

as shown in Fig. 3(a), and the noise generated by the engine given as

$$n(t) = \sin\!\left(\frac{20\pi t}{\big(0.2\,\sin(100\pi t + 100)\big)^{2}}\right). \qquad (14)$$

The input signal is the sum of the voice signal h(t) and the engine noise n(t). A neurofuzzy filter has been used as the adaptive noise filter for extracting the signal. The NFF achieves good performance with an RMSE of 0.0440 and a maximum instantaneous error of 0.02. The extracted signal obtained through the use of the trained NFF is much closer to the useful voice signal, as shown in Fig. 3.

Figure 3. Illustrations of the hands-free cellphone simulation using NFF: (a) useful voice signal h; (b) the engine noise n; (c) noisy signal x = h + 0.833 n; (d) extracted signal; (e) extraction error.
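To show how the cancellation loop of Fig. 2 and Eqs. (8)-(12) behaves numerically, the sketch below runs the same primary/reference structure with a normalized-LMS FIR filter standing in for the NFF. The waveforms, the sampling rate, the filter length and the step size are all illustrative assumptions; they are not the paper's signals or its neurofuzzy filter.

```python
import numpy as np

# Sketch of the adaptive noise-cancelling setup of Fig. 2 / Eqs. (8)-(12).
# An NLMS FIR filter stands in for the NFF; the voice and engine-noise waveforms,
# sampling rate, filter length and step size are illustrative assumptions.

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
h = np.sin(2 * np.pi * 5 * t)                                   # stand-in voice signal
n = 0.6 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 100 * t + 1.0)  # stand-in engine noise
x = h + 0.833 * n                  # primary input, scaled as in Fig. 3(c)
ref = n                            # reference noise c1(t) picked up near the engine

L, mu, eps = 8, 0.5, 1e-6          # FIR length, NLMS step, regularizer (assumed)
w = np.zeros(L)
r = np.zeros_like(x)               # recovered signal r(t) = x(t) - c_hat(t)
for k in range(len(x)):
    u = ref[max(0, k - L + 1):k + 1][::-1]      # most recent reference samples first
    u = np.pad(u, (0, L - len(u)))
    c_hat = w @ u                               # replica of c(t)
    r[k] = x[k] - c_hat                         # error signal, Eq. (9)
    w += mu * r[k] * u / (u @ u + eps)          # NLMS weight update

print("RMSE between recovered and clean signal:", np.sqrt(np.mean((r - h) ** 2)))
```

When the noise path f(.) is strongly nonlinear, such a linear stand-in degrades, which is the motivation the paper gives for using the NFF instead.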


V. CONCLUSION

In this paper a neurofuzzy filter has been presented. The NFF can be trained by numerical data and by linguistic information expressed as fuzzy if-then rules. This feature makes the incorporation of a priori knowledge into the design of filters possible. Another important feature of the NFF is that, without any given initial structure, the NFF can construct itself automatically from numerical training data. A hands-free cellphone simulation has been demonstrated; the NFF achieves good performance with an RMSE of 0.0440 and a maximum instantaneous error of 0.02. The proposed approach has wide application in many fields, such as noisy speech recognition, noisy image filtering, nonlinear adaptive control, intelligent agents and performance analysis of dynamical systems. Future implementations in various engineering problems are under consideration.

REFERENCES

[1] B. Widrow and S. D. Stearns, Adaptive Signal
Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985. [2] C. F. Juang and C. T. Lin, Noisy speech processing by recurrently adaptive fuzzy filters, IEEE Transactions on Fuzzy Systems, vol. 9, no. 1, pp.139-152, February 2001. [3] A. I. Hanna and D. P. Mandic, Nonlinear FIR adaptive filters with a gradient adaptive amplitude in the nonliearity, IEEE Signal Processing Letters, vol. 9, no. 8, August 2002. [4] K. P. Seng, Z. Man, and H. R. Wu, Lyapunov-theorybasedradial basis function networks for adaptive filtering, IEEE Transactions on circuits and systems-I: Fundamental Theroy and Applications, vol. 49, no. 8, August 2002. [5] . M. ojbaic and V. D. Nikoli, An approach to neuro-fuzzy filtering for communications and control, 5th International conference on telecommunications in modern satellite, cable and broadcasting service, vol. 2, pp. 719-722, September 2001. [6] L. Yin, J. Astola, and Y. Neuvo, A new class of nonlinear filtersneural filters, IEEE Trans. Signal Processing, vol. 41, pp.12011222, Mar. 1993. [7] T. H. J. Lo, Synthetic approach to optimal filtering, IEEE Trans. Neural Networks, vol. 5, pp. 803811, Sept. 1994. [8] G. J. Gibson, S. Siu, and C. F. N. Cowan, The application of nonlinear structures to the reconstruction of binary signals, IEEE Trans. Signal Processing, vol. 39, pp. 18771884, Aug. 1991. [9] S. Chen, G. J. Gibson, C. F. N. Cowan, and P. M. Grant, Reconstruction of binary signals using an adaptive radial-basis-function equalizer, Signal Process., vol. 22, pp. 7793, 1991. [10] S. Moon and T. N. Hwang, Coordinated training of noise removing networks, Proc. Int. Conf. Acoust., Speech, Signal Processing, pp. 573576, 1993. [11] S. Tamura and A. Waibel, Noise reduction using connectionist models, in Proc. Int. Conf. Acoust., Speech, Signal Processing, New York, pp. 553556, Apr.1988. [12] H. B. D. Sorensen, A cepstral noise reduction multilayer neural network, in Proc. Int. Conf. Acoust., Speech, Signal Processing, pp. 933936, 1991. [13] G. G. Towell, J. W. Shavlik, and M. O. Noordewier, Refinement of approximate domain theories by knowledge-based neural networks, in Proc. AAAI, pp. 861866,1990.

[14] Q. Yang and V. K. Bhargava, Building expert systems by a modified perceptron network with rule-transfer algorithms, in Proc. IJCNN, San Diego, CA, vol. 2, pp. 7782, 1990. [15] R. C. Lacher, S. I. Hruska, and D. C. Kuncicky, Backpropagation learning in expert networks, IEEE Trans. Neural Networks, vol. 3, pp.6272, Jan. 1992. [16] S. Horikawa, T. Furuhashi, and Y. Uchikawa, On fuzzy modeling using fuzzy neural networks with the backpropagation algorithm, IEEE.Trans. Neural Networks, vol. 3, pp. 801806, Sept. 1992. [17] C. T. Lin and C. S. G. Lee, Neural-network-based fuzzy logic control and and decision system, IEEE Trans. Comput., vol. 40, no. 12, pp.13201336, 1991. [18] J. S. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Trans. Syst., Man, Cybern., vol. 23, pp. 665685, May 1993. [19] L. X. Wang and J. M. Mendel, Back propagation fuzzy systems as nonlinear dynamic system identifiers, in Proc. IEEE Int. Conf. Fuzzy Systems, San Diego, CA, pp. 14091418,1992. [20] H. Ishibuchi, R. Fujioka, and H. Tanaka, Neural networks that learn from fuzzy if-then rules, IEEE Trans. Fuzzy Syst., vol. 1, pp. 8587, May 1993. [21] M. J. Coker and D. J. Simkins, A nonlinear adaptive noise canceller, IEEE Int. Conf. Acoust., Speech, Signal Processing, pp. 470473, 1980. [22] J. C. Stapleton and G. Ramponi, Adaptive noise cancellation for a class of nonlinear dynamic reference channels, in Int. Symp. Circuits Systems, pp. 268271, 1984. [23] S. A. Billings and F. A. Alturki, Performance monitoring in nonlinear adaptive noise cancellation J. Sound Vibration, vol. 157, no. 1, pp.161175, 1992.



IDENTIFICATION OF UNCERTAINTIES BY ITERATIVE PREFILTERING


Dalvinder Kaur
Electronics and Instrumentation, TITS, Bhiwani, India
mangaldalvinder@yahoo.com

Lillie Dewan, LMSSI
Electrical Engineering Department, NIT Kurukshetra, Kurukshetra, India
l_dewanin@yahoo.com

Abstract - Iterative pre-filtering and non-arithmetic filtering techniques are commonly used for identification. In this paper an attempt has been made to compare both methods for the identification of system parameters in the presence of additive uncertainties. Although the regression equations used in iterative pre-filtering for the optimal set of coefficients are highly nonlinear and intractable, it is shown that the problem can be reduced to the repeated solution of a related linear problem. Successful convergence within 5-10 iterations has been obtained for the plant with uncertainty, as compared to the recursive prediction error algorithms using the median smoothing non-arithmetic filtering technique.

Index Terms - Linear prediction, Median smoothing, Iterative prefiltering, Kalman estimate, Uncertainty structures.

I. INTRODUCTION

Finite impulse response (FIR) methods were proposed to increase the signal-to-noise ratio of the original measurement before the singular value decomposition based Prony's method was applied to estimate the location of a single pole. By FIR pre-filtering not only the AR but also the MA coefficient functions are estimated. Liu and Doraiswami in [1] have emphasized moving-average FIR prefiltering. The coefficients of the numerator and denominator polynomials of the transfer function completely characterize the signal and are called the linear predictive code (LPC), a non-parametric description; a scheme for online monitoring of the LPCs of short time records of power system signals was developed. The estimation of signal parameters can be broadly classified into non-parametric and parametric methods. The non-parametric methods make no assumption about how the data were generated and the estimates are based entirely on a finite record of data. A finite record causes spectral leakage and poor frequency resolution, which can only be alleviated by increasing the record length. However, a large record length may not be suitable for on-line application, especially when the signal is non-stationary or transient. The parametric method, on the other hand, assumes a model for the signal, and the parameters of the model are estimated from the observed data. A parametric method based on an LPC approach proposed by Kumaresan and Tufts, referred to as the KT method, is used to estimate the coefficients [2]. They presented techniques based on linear prediction (LP) and singular value decomposition (SVD). The poles of the system are accurately determined by the KT method and the zeros are obtained subsequently using Shanks' method, which is equivalent to Prony's method. Prony's method is well known to perform poorly when the signal is embedded in noise, and it has been shown to be actually inconsistent [3]. A new iterative version [4] of Prony's method was presented and shown to be exceptionally effective in finding ARMA models for acoustic data in the time domain. The iterative Prony method retains the separation of the solution for poles and residues inherent in the classical Prony method but iterates these steps to reduce the error in the representation of the signal. One of the best known iterative methods is iterative pre-filtering [5, 6], which is generally capable of results similar to the iterative Prony method. The iterative pre-filtering algorithm simultaneously estimates poles and zeros and seems to converge in practice after 5 to 10 iterations, using the initial estimate of the denominator coefficients obtained by the Kalman estimate [7]. In this paper an attempt has been made to compare the Mode 1 iterative pre-filtering procedure [5] with the robust adaptive median smoothing non-arithmetic filtering technique. In robust-control-oriented system identification, the choice of an appropriate model uncertainty structure is important. An amplitude bounded uncertainty test can equivalently be described in terms of an additive, Youla-parameter and ν-gap uncertainty [8]. A brief introduction to the various uncertainty structures is given in Section II. In Section III the algorithms of the iterative prefiltering and non-arithmetic filtering techniques are given, the results obtained by both algorithms are compared in Section IV, and conclusions follow in Section V.

II. UNCERTAINTY STRUCTURES

Consider single-input single-output linear time-invariant finite-dimensional systems G(s) and controllers C(s). Co-prime factorizations of plants and controllers are defined as $G = N D^{-1}$ and $C = N_c D_c^{-1}$, where $N, D, N_c, D_c$ satisfy the usual conditions (Vidyasagar, 1985). The factorizations are normalized if they additionally satisfy $N^{*}N + D^{*}D = 1$, where $(\cdot)^{*}$ denotes the complex conjugate transpose. Three model sets, each based on a specific uncertainty structure, are defined.

Additive uncertainty:
$$\mathcal{G}_{add}(\gamma) = \big\{\, G : G = G_x + W\,\Delta_A,\ |\Delta_A| \le \gamma \,\big\} \qquad (1)$$
with $G_x$ a nominal model and $W$ a weighting function.

Youla uncertainty:
$$\mathcal{G}_{Y}(\gamma) = \big\{\, G : G = (N_x + D_c\,\Delta_Y)\,(D_x - N_c\,\Delta_Y)^{-1},\ |\Delta_Y| \le \gamma \,\big\} \qquad (2)$$
with $G_x = N_x D_x^{-1}$ a nominal model, $C = N_c D_c^{-1}$ a present controller, and stable, stably invertible weighting functions reflecting the freedom in choosing the co-prime factorizations of $G_x$ and $C$ [12]; an additional weighting can be provided through $\Delta_Y$. The Youla parameter is uniquely determined by
$$\Delta_Y = D_c^{-1}\,(G - G_x)\, D_x\, (1 + C G)^{-1}.$$

ν-gap uncertainty (Vinnicombe, 2001):
$$\mathcal{G}_{\nu}(\gamma) = \big\{\, G : \kappa(G, G_x) \le \gamma \,\big\} \qquad (3)$$
with $\kappa(G, G_x)$ the chordal distance between a plant $G = N D^{-1}$ and the nominal model $G_x = N_x D_x^{-1}$, defined by
$$\kappa(G, G_x) = \frac{|G - G_x|}{\sqrt{1 + |G|^{2}}\,\sqrt{1 + |G_x|^{2}}}.$$

Note that at this point pole/zero conditions are not yet imposed on $N, D, N_c, D_c$ or $\Delta$ as required when studying robust stability conditions [13]. The focus here lies with the properties of the frequency responses of the sets. Of the three uncertainty structures defined above, the first two have been considered.
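A minimal numerical sketch of how the additive error of (1) and the chordal distance underlying (3) can be evaluated pointwise from frequency-response samples is given below. The plant G and nominal model G_x used here are illustrative discrete-time examples, not the systems identified in Section IV, and the printed maximum of the chordal distance bounds the ν-gap only when the usual winding-number condition holds.

```python
import numpy as np
from scipy import signal

# Pointwise additive error of (1) and chordal distance of (3) from frequency responses.
# G and G_x below are assumed illustrative models, not the paper's identified systems.

w = np.linspace(1e-3, np.pi, 512)                                # rad/sample grid
_, G  = signal.freqz([0.05, -0.40], [1.0, -1.10, 0.25], worN=w)  # "true" plant (assumed)
_, Gx = signal.freqz([0.05, -0.38], [1.0, -1.05, 0.24], worN=w)  # nominal model (assumed)

additive_error = np.abs(G - Gx)                                  # |Delta_A| profile in (1)
chordal = np.abs(G - Gx) / (np.sqrt(1 + np.abs(G) ** 2) * np.sqrt(1 + np.abs(Gx) ** 2))

print("max additive error          :", additive_error.max())
print("max chordal distance (kappa):", chordal.max())
```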


III. Algorithms for filtering


Iterative prefiltering procedure: When a high-speed digital computer is available, a linear sampled-data model [7, 10] is of special utility, which assumes that the input and output samples are related by a rational z-transform

$$H(z) = \frac{N(z)}{D(z)} = \frac{N_0 + N_1 z^{-1} + \cdots + N_{n-1} z^{-(n-1)}}{1 + D_1 z^{-1} + \cdots + D_n z^{-n}}.$$

The coefficients are chosen to minimize the sum of squared equation errors

$$E = \sum_{j} e_j^{2}, \qquad e_j = \sum_{i=0}^{n} D_i\, d_{j-i} - \sum_{i=0}^{n-1} N_i\, u_{j-i}, \quad D_0 = 1, \qquad (4)$$

where $u_j$ and $d_j$ are the input and output records. If the coefficient vector $c$ and the input-output vector $q_j$ are defined by

$$c = [\,N_0, \ldots, N_{n-1},\; -D_1, \ldots, -D_n\,]', \qquad q_j = [\,u_j, \ldots, u_{j-n+1},\; d_{j-1}, \ldots, d_{j-n}\,]',$$

this becomes

$$e_j = d_j - c'\, q_j, \qquad (5)$$

where the prime denotes the transpose. Summing on j over the record length and taking the gradient with respect to c yields

$$\nabla_c E = 2\sum_{j} e_j\,\frac{\partial e_j}{\partial c} = -2\sum_{j} q_j\, e_j = 0. \qquad (6)$$

Substitution of (5) into (6) gives $\sum_j q_j q_j'\, c = \sum_j q_j d_j$. If the $2n \times 2n$ correlation matrix is defined as $R = \sum_j q_j q_j'$ and the 2n correlation vector as $p = \sum_j q_j d_j$, the solution to the original minimization problem can be written as

$$c = R^{-1} p. \qquad (7)$$

The idea of the iterative procedure is shown diagrammatically in Fig. 1, except that $q_j$ is defined in terms of the pre-filtered records. First the minimization problem (4) is solved using (7) and the original input-output records. The result is a first estimate of N(z) and D(z), the Kalman estimate; call these N1(z) and D1(z). The original input and output records, u and d1, are then filtered by the digital filter 1/D1(z), yielding new, pre-filtered input and output records. These pre-filtered records are then used in place of the original input-output records in (5), and new estimates N2(z) and D2(z) are obtained. The digital filter 1/D2(z) is then used to pre-filter u and d1, and so forth. At each stage the previous denominator is used to pre-filter the input and output records, so that Ni(z) and Di(z) are found by minimizing the prefiltered equation error at that stage.

Fig. 1 The iterative pre-filtering scheme.
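The loop of Eqs. (4)-(7) and Fig. 1 can be prototyped in a few lines, as in the sketch below. The second-order test plant, the noise level and the iteration count are illustrative assumptions made only for this demonstration.

```python
import numpy as np
from scipy import signal

# Sketch of the iterative pre-filtering loop of Eqs. (4)-(7) for n = 2.
# The test plant, noise level and iteration count are illustrative assumptions.

rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
d = signal.lfilter([0.05, -0.4], [1.0, -1.1314, 0.25], u) + 0.01 * rng.standard_normal(2000)

n = 2
D = np.array([1.0, 0.0, 0.0])                 # current denominator estimate, D0 = 1
for it in range(10):
    uf = signal.lfilter([1.0], D, u)          # pre-filter records with 1/D_{i-1}(z)
    df = signal.lfilter([1.0], D, d)
    rows, rhs = [], []
    for j in range(n, len(u)):                # regression of Eq. (5): e_j = df_j - c' q_j
        q = np.concatenate((uf[j - n + 1:j + 1][::-1], df[j - n:j][::-1]))
        rows.append(q)
        rhs.append(df[j])
    Q, dvec = np.asarray(rows), np.asarray(rhs)
    c = np.linalg.solve(Q.T @ Q, Q.T @ dvec)  # Eq. (7): c = R^{-1} p
    Nz = c[:n]                                # numerator estimate N_0, N_1
    D = np.concatenate(([1.0], -c[n:]))       # denominator estimate (sign per Eq. (5))

print("N(z) coefficients:", Nz)
print("D(z) coefficients:", D)
```

The first pass (D fixed at 1) is exactly the Kalman estimate; the subsequent passes are the pre-filtered refinements that the paper reports converging in 5-10 iterations.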

Non-Arithmetic Filtering: In this algorithm adaptive ARMA system identification is considered. Adaptive ARMA system identification is usually realized by the adaptive equation error and output error algorithms [16, 17]. For robustness of the adaptive system identification the M-estimate is used.

(a) Robust adaptive equation error system identification algorithm: A single-input single-output ARMA system can be written as

$$d(t) = \sum_{i=1}^{N} a_i\, d(t-i) + \sum_{i=0}^{M} b_i\, u(t-i) + n(t) \qquad (8)$$

where d(t) and u(t) are the output and input respectively and n(t) denotes the additive noise. Considering the ARMA system (8), its predictor for the equation error is of the form

$$\hat{d}(t) = \sum_{i=1}^{N} \hat{a}_i\, d(t-i) + \sum_{i=0}^{M} \hat{b}_i\, u(t-i), \quad\text{or}\quad \hat{d}(t) = \hat{\theta}'(t-1)\,\varphi(t), \qquad (9)$$

where $\varphi(t) = [\,d(t-1), \ldots, d(t-N),\; u(t), \ldots, u(t-M+1)\,]'$ is the regressor vector and $\hat{\theta} = [\,\hat{a}_1, \ldots, \hat{a}_N,\; \hat{b}_0, \ldots, \hat{b}_{M-1}\,]'$ is the parameter vector of the adaptive estimated system [16], chosen such that

$$\min_{\hat{\theta}}\; E\big[e^{2}(t)\big]. \qquad (10)$$

According to (9) and (10), the robust adaptive equation error system identification algorithm is derived as

$$\hat{\theta}(t) = \hat{\theta}(t-1) + \mu\, R(t)\, \varphi(t)\, e(t) \qquad (11a)$$
$$e(t) = d(t) - \hat{d}(t) \qquad (11b)$$
$$\hat{d}(t) = \hat{\theta}'(t-1)\,\varphi(t) \qquad (11c)$$
$$\varphi(t) = [\,d(t-1), \ldots, d(t-N),\; u(t), \ldots, u(t-M+1)\,]' \qquad (11d)$$

where R(t) is the median smoothing influence function which cancels the influence of wild points in n(t):

$$R(t) = \begin{cases} 1, & |e(t)| \le T \\ 0, & \text{otherwise} \end{cases} \qquad (11e)$$

where T is the threshold, selected as $T = (3\!\sim\!5)\,\hat{\sigma}$.

Analyzing the adaptive algorithm (11a)-(11e): if R(t) ≡ 1, this adaptive algorithm reduces to the algorithm presented in [17], and its stability and convergence properties are equivalent to those of an adaptive FIR filter. If R(t) takes the form of (11e), the ARMA system output d(t) contaminated by wild points is pre-filtered, or smoothed, through the nonlinear median smoothing, and the influence of the wild points is cancelled.

(b) Robust adaptive output error system identification algorithm: Again considering the ARMA system (8), its predictor for the output error is of the form

$$\hat{d}(t) = \sum_{i=1}^{N} \hat{a}_i\, \hat{d}(t-i) + \sum_{i=0}^{M} \hat{b}_i\, u(t-i). \qquad (12)$$

Considering the output error $e(t) = d(t) - \hat{d}(t)$ and using this error, the robust adaptive output error ARMA system identification algorithm is derived as

$$\hat{\theta}(t) = \hat{\theta}(t-1) + 2\,\mu(t)\, R(t)\, \psi(t)\, e(t) \qquad (13a)$$
$$e(t) = d(t) - \hat{d}(t) \qquad (13b)$$
$$\hat{d}(t) = \hat{\theta}'(t-1)\,\psi(t) \qquad (13c)$$
$$\psi(t) = [\,\hat{d}(t-1), \ldots, \hat{d}(t-N),\; u(t), \ldots, u(t-M+1)\,]' \qquad (13d)$$
$$\mu(t) = \frac{1}{2\,\psi'(t)\,\psi(t)} \qquad (13e)$$

where R(t) is the median smoothing influence function which cancels the influence of wild points in n(t):

$$R(t) = \begin{cases} 1, & |e(t)| \le T \\ 0, & \text{otherwise} \end{cases} \qquad (13f)$$

and T is the threshold, selected as $T = (3\!\sim\!5)\,\hat{\sigma}$. From the adaptive algorithm (13a)-(13f), the predictor (13c) of the ARMA system is of a recursive form; thus, the system may be unstable unless the roots of the estimated denominator lie inside the unit circle during the adaptation. The wild points present in d(t) may influence the parameter estimate greatly and make the adaptive algorithm diverge if d(t) is not pre-filtered by the median smoothing. Hence, it is important to cancel the influence of the wild points using median smoothing to keep the adaptive algorithm stable. If R(t) takes the form of (13f), the ARMA system output d(t) contaminated by wild points is pre-filtered [18, 19], or smoothed, through the nonlinear median smoothing, and the influence of the wild points is cancelled.

IV. CASE STUDY

Random numbers were generated and used for the input record u; these were filtered through a known plant with uncertainty. The ARMA system used in the simulation [20, 21] is of the form

$$d(t) = \frac{B(q^{-1})}{A(q^{-1})}\, u(t) = \frac{0.05 - 0.4\, q^{-1}}{1 - 1.1314\, q^{-1} + 0.25\, q^{-2}}\, u(t). \qquad (14)$$

The amplitude bounded additive uncertainty model structure is of the form [14] $G = G_x + W\Delta$, where

$$W(q^{-1}) = \frac{0.32\,(1 - 0.5\, q^{-1})}{1 + 0.7\, q^{-1}},$$

with $\gamma = 1$ for additive uncertainty [14] and $\gamma = 0.968$ for Youla uncertainty [15]. The additive noise takes the following form:

$$n(t) = \begin{cases} n_1(t), & \xi < 0.9 \\ n_2(t), & \text{otherwise} \end{cases}$$

where $n_1(t)$ is Gaussian white noise with variance 0.01, $n_2(t)$ is also white noise, with variance 1, and $\xi$ is a random variable uniformly distributed within (0, 1). The Kalman estimates are generated by the iterative prefiltering procedure described in Section III for the plant given by (14), including additive and Youla uncertainty. The convergence of the error between the given plant output and the estimated plant output gives the convergence of the plant parameters. The results obtained are compared with the prediction error algorithms described in [20, 21] using the median smoothing influence function for the same uncertainty model of the plant given by (14). It is observed that convergence is obtained earlier in the case of iterative pre-filtering as compared to the non-arithmetic median smoothing. The graphical results are shown below:

Fig. 2 Equation error formulation of additive uncertainty with median smoothing
Fig. 3 Equation error formulation of Youla uncertainty with median smoothing
Fig. 4 Output error formulation of additive uncertainty with median smoothing
Fig. 5 Output error formulation of Youla uncertainty with median smoothing
Fig. 6 Iterative prefiltering with additive uncertainty
Fig. 7 Iterative prefiltering with Youla uncertainty
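A compact sketch of the case-study setup is given below: the plant of Eq. (14) is driven by random input, corrupted by the mixed-Gaussian noise model, and identified with the robust equation-error recursion (11a)-(11e). The hard-threshold influence function, the step size and the threshold value are assumptions chosen for illustration, not the paper's exact tuning.

```python
import numpy as np
from scipy import signal

# Robust equation-error identification (11a)-(11e) on the plant of Eq. (14).
# The influence function, step size mu and threshold T are illustrative assumptions.

rng = np.random.default_rng(2)
Ns = 4000
u = rng.standard_normal(Ns)
noise = np.where(rng.uniform(size=Ns) < 0.9,
                 0.1 * rng.standard_normal(Ns),     # variance 0.01
                 1.0 * rng.standard_normal(Ns))     # variance 1 (wild points)
d = signal.lfilter([0.05, -0.4], [1.0, -1.1314, 0.25], u) + noise

Na, Nb = 2, 2
theta = np.zeros(Na + Nb)           # [a1, a2, b0, b1]
mu, T = 0.02, 0.5                   # assumed step size and rejection threshold
for t in range(max(Na, Nb), Ns):
    phi = np.concatenate((d[t - Na:t][::-1], u[t - Nb + 1:t + 1][::-1]))  # (11d)
    e = d[t] - theta @ phi                                                # (11b)-(11c)
    R = 1.0 if abs(e) <= T else 0.0                                       # (11e), hard rejection
    theta += mu * R * phi * e                                             # (11a)

print("estimated [a1, a2, b0, b1]:", np.round(theta, 4))
```

Setting R to 1 everywhere reproduces the non-robust algorithm of [17] and lets the wild points drag the estimate, which is the comparison the figures above illustrate.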


V. Conclusion
From the above results it can be observed that the iterative identification method has significant improvement over the median smoothing. Successful convergence within 5-10 iterations has been obtained. As the iterative pre-filtering algorithm proceeds by first obtaining an estimate D(z). Further investigation in this field can be done by taking initial estimate of D (z) obtained by linear prediction and adding v-gap uncertainty. References 1. R. Doraiswami and W. Liu, "Real-time estimation of the parameters of power system small signal oscillations," IEEE Power Engineering Society IEEEIPES 1992 Winter Meeting, New York, USA. 2. R.Kumaresan and D.W.Tufts, "Estimating the parameters of exponentially damped sinusoids and pole zero modelling in noise,'' IEEE Trans. ASSP, vol. ASSP-30, pp. 833 840, Dec. 1982. 3. Kahn, M., Mackisack, M.S., Osborne, M. R., and Smyth, G.K., On the consistency of Pronys method and related algorithms,Journal of Computational and Graphical Statistics, 1, 329-349, 1992. 4. Charles W. Therrien and Carlos H. Velasco, An iterative prony method for ARMA signal modeling, IEEE Trans. on signal processing, vol. 43, NO. 1, pp. 358-361, Jan. 1995. 5. K. Steiglitz and L. E. McBride, A technique for the identification of linear systems, IEEE Trans. Automaic. Control, Vol. AC-10, pp. 461-464, Oct. 1965. 6. Kenneth Steiglitz, On the simultaneous estimation of poles and zeros in speech analysis, IEEE Trans. on acoustics, speech, and signal processing, vol. ASSP-25, No. 3, pp. 229234, June 1977. 7. R. E. Kalman, Design of a self-optimizing control system, Trans. ASME, vol. 80, pp. 468-478, Feb. 1958. 8. Sippe G. Douma, Paul M.J. Van den Hof , Relations between uncertainty structures in identification for robust control, Automatica 41, pp. 439-457, Jan. 2005. 9. Dai xianbua, He zhenya, Robust adaptive systems identifications by using median smoothing, IEEE Trans., pp. 564-567. 10. G. G. Lendaris, The identification of linear systems, Trans. AIEE (Applications and industry), pt. 2, vol. 81, pp. 231-242, Sep. 1962. 11. M. J. Levin, Optimum estimation of impulse response in the presence of noise, IRE Trans. On circuit theory, vol. CT-7, pp. 50-56, March 1960. 12. Vidyasagar, M., Control system synthesis:a factorization approach," Cambridge, MA, USA: MIT Press. 1985. 13. Vinnicombe, G., Uncertainty and feedback: H loopshaping and the -gap metric, London, UK: Imperial College Press, 2001.

14. Richard C. Younce and Charles E. Rohrs, Identification with non-parametric uncertainty, Proceedings of the 29th Conference on decision and control, Honolulu, Hawali, pp. 3154-3161, Dec. 1990. 15. Sippe G. Douma, Paul M.J. Van den Hof, Okko H. Bosgra, Controller tuning freedom under plant identification uncertainty: double Youla beats gap in robust stability, Proceedings of the American control conference, Arlington, pp. 3153-3158, VA June 2001. 16. Wahlberg, Bo, Lennart Ljung, Design Variables for Bias Distribution in Transfer Function Estimation, IEEE Trans. On Automatic Control, vol. Ac-31, no. 2, pp. 34-144, Feb. 1986 17. J. J. Shynk, Adaptive IIR filtering, IEEE magazine, April, 1989. A SSP

18. Kosut, Robert L., Adaptive Control via Parameter Set Estimation, Int. Journal of Adaptive Control, vol. 2, pp. 371399, 1988. 19. L. EI Ghaoui and G. Calafiore, Robust filtering for discrete time systems with bounded noise and parametric uncertainty, IEEE Trans. Autom. Control, vol. 46. and 7, pp. 1084 1089, Jul, 2001. 20. Kaur Dalvinder, Dewan Lillie,System Identification in the presence of uncertainties International Journal of Electronics Engineering(accepted). 21. Kaur Dalvinder, Dewan Lillie,System Identification using output error method in the presence of uncertainties International Journal of Electronics Engineering.



Hadamard Based Multicarrier Polyphase Radar Signals


C.G. Raghavendra
M.S. Ramaiah Institute of Technology, MSRIT Post, Bangalore-560054, India email-cgraagu@rediffmail.com

N.N.S.S.R.K Prasad
Aeronautical Development Agency (ADA), PB No 1718, Vimanapura Post, Bangalore-560017, India

Abstract:- This paper presents a novel multifrequency signal structure for use in radar systems for target detection using different phasing schemes. In radar systems for detecting targets, single carrier modulation schemes are mainly used. Multi-carrier modulation, or multifrequency operation, is a different approach, investigated here for target detection using multiple sub-carriers. The characteristic of the signal structure is that N sequences are modulated on N subcarriers, each different from the others. The signal structure employs all the sub-carriers concurrently. Each subcarrier is modulated with different phases which constitute a complementary set. Such a set can be constructed from the N cyclic shifts of a sequence of length N (e.g., P3 and P4 signals). Each subcarrier is separated by the inverse of the duration of a phase element, yielding the Orthogonal Frequency Division Multiplexing (OFDM) feature well known in the communication field. The performance of the multifrequency radar signal using multiple carriers with P3, P4 and Hadamard based phasing is analyzed and supported by extensive simulations. Key words:- Multifrequency Radar, Multicarrier Modulation, Target Detection, Orthogonal Frequency Division Multiplexing, Polyphase Code, Complementary Code.

I. INTRODUCTION

Communication systems are primarily designed for some specific application, such as speech on mobile or high-rate data in wireless LANs. Such systems carry large data rates, which require a careful choice of modulation technique to save bandwidth. A very suitable choice of modulation scheme is OFDM (Orthogonal Frequency Division Multiplexing) [1]. In radar communication [2] systems the efficient utilization of bandwidth and power are the major issues of concern. Bandwidth [3] and power are the two primary communication resources, and it is therefore essential that they are used with care in the design of most radar systems. To cope with the bandwidth problem one solution is to use a multicarrier modulation [4] scheme, and hence OFDM is suitable for this purpose. OFDM is a special case of multicarrier transmission, based on spreading the data to be transmitted over a large number of carriers, each being modulated at a smaller rate. The carriers are made orthogonal to each other by appropriately choosing the frequency spacing between them.

The multicarrier or multifrequency radar signal was described by Levanon [5]. A multicarrier signal employs N subcarriers simultaneously; each subcarrier is phase modulated by one of N different code sequences obtained from the polyphase [6] codes P3 and P4. A complementary set can easily be designed from the N cyclic shifts of a phase coded sequence of length N, and another phase coded set is obtained by utilizing Hadamard matrices. Each subcarrier is equally spaced by the inverse of the duration of a phase element, tb, which is the basis of the OFDM concept. Each subcarrier is of fixed amplitude, but the entire complex signal exhibits a variable amplitude. A single multifrequency signal can be generated with different sequence orderings of the phases on the subcarriers; for each sequence ordering, a different amplitude variation can be observed in the complex signal. This paper focuses on a multifrequency signal structure designed using Hadamard matrices, compared with the P3 and P4 types and supported by extensive simulations.

II. MULTIFREQUENCY SIGNAL STRUCTURE

Figure 1: Multifrequency Signal Structure

The schematic diagram of the N x N multifrequency signal structure is shown in Figure 1. It shows N subcarriers modulated by N sequences; each sequence comprises different phases obtained from the P3 or P4 type. All the phase modulated subcarriers are transmitted simultaneously. The mathematical expression of the complex envelope of the transmitted signal is given by


$$u(t) = \begin{cases} \dfrac{1}{N}\displaystyle\sum_{n=1}^{N} \exp\!\Big[\,j\, 2\pi\Big(n - \tfrac{N+1}{2}\Big) f_s\, t\,\Big] \sum_{m=1}^{N} u_{n,m}\big(t - (m-1)t_b\big), & 0 \le t \le N t_b \\[2mm] 0, & \text{elsewhere} \end{cases} \qquad (1)$$

where

$$u_{n,m}(t) = \begin{cases} \exp(j\,\phi_{n,m}), & 0 \le t \le t_b \\ 0, & \text{elsewhere.} \end{cases} \qquad (2)$$

Here $\phi_{n,m}$ is the mth phase element of the nth sequence. By comparing equations (1) and (2), equation (2) can be rewritten as

$$u_{n,m}\big(t - (m-1)t_b\big) = \begin{cases} \exp(j\,\phi_{n,m}), & (m-1)\,t_b \le t \le m\, t_b \\ 0, & \text{elsewhere.} \end{cases} \qquad (3)$$

Using the P3 and P4 types, the phase elements of each subcarrier can be obtained as given in equations (4) and (5) respectively:

$$\phi_m = \frac{\pi}{N}(m-1)^{2}, \qquad m = 1, 2, \ldots, N \qquad (4)$$

$$\phi_m = \frac{\pi}{N}(m-1)^{2} - \pi(m-1), \qquad m = 1, 2, \ldots, N. \qquad (5)$$

Table 1 shows the 4 x 4 phase coded complementary set [7]. Each sequence is obtained by cyclically shifting the phase elements obtained from the P3 type. Each sequence, comprising different phases, is modulated on one of the subcarriers.

Table 1: 4 x 4 phase coded complementary set (degrees)
0     45    180   405
45    180   405   0
180   405   0     45
405   0     45    180

Figure 2: Real envelope of the 4 x 4 multifrequency signal with 2134 sequence ordering

Figure 3: Autocorrelation of the 4 x 4 multifrequency signal with 2134 sequence ordering

Figure 4: Real envelope of the 4 x 4 multifrequency signal with 2341 sequence ordering

Figures 2 and 3 show the magnitude and the autocorrelation of a 4 x 4 multifrequency signal for the sequence ordering 2134. For the sequence ordering 2341, shown in Figure 4, the variation of amplitude is less compared to the 2134 sequence ordering of Figure 2.
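The construction of Eqs. (1)-(5) and Table 1 can be reproduced with the short sketch below. The oversampling factor, the normalized bit duration and the 0-based encoding of the 2134 ordering are illustrative assumptions made only for this demonstration.

```python
import numpy as np

# Sketch of the N x N multicarrier complementary signal of Eqs. (1)-(5) / Table 1
# for N = 4 with P3 phases.  Oversampling and unit bit duration are assumptions.

N = 4
tb = 1.0                                   # phase-element duration (normalized)
os = 64                                    # samples per phase element (assumed)
fs_sub = 1.0 / tb                          # subcarrier spacing = 1/tb (OFDM condition)

m = np.arange(1, N + 1)
p3 = np.pi * (m - 1) ** 2 / N              # Eq. (4): 0, 45, 180, 405 degrees for N = 4
phases = np.array([np.roll(p3, -n) for n in range(N)])   # cyclic shifts -> Table 1

order = [1, 0, 2, 3]                       # "2134" sequence ordering (0-based)
t = np.arange(N * os) * (tb / os)
u = np.zeros_like(t, dtype=complex)
for n in range(N):                          # Eq. (1): sum over subcarriers
    carrier = np.exp(1j * 2 * np.pi * (n - (N - 1) / 2) * fs_sub * t)
    idx = np.minimum((t / tb).astype(int), N - 1)    # active phase element, Eq. (3)
    u += np.exp(1j * phases[order[n]][idx]) * carrier
u /= N

acf = np.abs(np.correlate(u, u, mode="full"))
acf /= acf.max()
print("normalized |ACF| at lags 0, tb, 2tb:",
      acf[len(u) - 1], acf[len(u) - 1 + os], acf[len(u) - 1 + 2 * os])
```

The same construction applies to the Hadamard-based binary phases of the next section by replacing the P3 cyclic-shift matrix with rows of a Hadamard matrix mapped to 0 and 180 degrees.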


III. MULTIFREQUENCY USING HADAMARD MATRICES

The P3 and P4 phase sequences used so far to construct the multifrequency complementary set are polyphase sequences. There are also two-valued phase sequences that exhibit perfect periodic autocorrelation [8] and can be used to construct a complementary set; such an alternative is described by Golomb [9]. Binary complementary sets can be constructed from Hadamard matrices. Equation (6) shows the general recursive construction of Hadamard matrices and equation (7) shows the 4 x 4 Hadamard matrix:

$$H(k) = \begin{bmatrix} H(k-1) & H(k-1) \\ H(k-1) & -H(k-1) \end{bmatrix}, \qquad k = 1, 2, \ldots, n \qquad (6)$$

$$H(2) = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}. \qquad (7)$$

Here 0 and 180 degrees are the two phase elements, used to represent the +1 and -1 entries of the Hadamard matrix of equation (7). Figures 5 and 6 show the Hadamard based 4 x 4 multifrequency complex signal and its autocorrelation respectively, for the sequence ordering 1234. It is observed that the amplitude variation remains the same for all permutations of the sequence ordering in the case of the Hadamard based multifrequency signal.

Figure 5: Real envelope of the Hadamard based 4 x 4 multifrequency signal with 1234 sequence ordering

Figure 6: Autocorrelation of the Hadamard based 4 x 4 multifrequency signal

IV. COMPARISON

In this paper multifrequency signals of size 4 x 4 based on the P3 type and on Hadamard matrices are analyzed. The mainlobe to sidelobe ratio for the 3 x 3 multifrequency signal (P4 type) is 3.055 for all sequence orderings. For the 4 x 4 multifrequency signal (P3 type) it is 7.870 for the sequence ordering 2341. In the case of the Hadamard based 4 x 4 multifrequency signal it is about 5.42 for all sequence orderings.

V. CONCLUSION

A multifrequency radar signal is intended for use in systems such as radar for detecting a remote target. The multifrequency radar signal is a different approach to detecting a target, which adopts a multicarrier transmission technique, namely OFDM. Issues in using a 4 x 4 multifrequency signal based on P3, P4 and Hadamard phasing are compared. The following conclusions are drawn on the multifrequency radar signal. Multifrequency radar signals use multiple subcarriers simultaneously. The signal can be constructed by combining fixed amplitude subcarrier signals; the resulting signal, however, is of variable amplitude. The subcarriers are phase modulated by different sequences that constitute a complementary set. The subcarriers are separated by the inverse duration of a phase element, yielding OFDM. Simulations are carried out for sequence orderings 2134 and 2341 for the 4 x 4 multifrequency signal based on the P3 type, wherein 2341 has a smaller amplitude variation than 2134. In the case of the Hadamard based multifrequency signal the amplitude variation remains the same for all sequence orderings. At the same time the mainlobe to sidelobe ratio is lower compared to the P3 based signal.

ACKNOWLEDGEMENT

The authors are grateful to Dr. S. Sethu Selvi, HOD E&C, M.S. Ramaiah Institute of Technology, Bangalore, and to the Aeronautical Development Agency (ADA), Bangalore.

REFERENCES

[1] Hermann R., Thomas M., Karsten B., Broad band OFDM radio transmission for multimedia applications, Proc. of IEEE, vol. 87, no. 10, Oct. 1999.
[2] Janakiraman M., Wessels B. J. and P. van Genderen, System design and verification of PANDORA multifrequency radar, Proc. International Conference on Radar Systems, Brest, France, 17-21 May 1999.
[3] N.N.S.S.R.K. Prasad, V. Shameem, U.B. Desai and S.N. Merchant, Improvement in target detection performance of pulse coded Doppler radar based on multicarrier modulation with fast Fourier transform (FFT), IEE Proc.-Radar Sonar Navig., vol. 151, no. 1, February 2004.
[4] Bingham J.A.C., Multicarrier modulation for data transmission: an idea whose time has come, IEEE Comm. Mag., pp. 5-14, May 1990.
[5] Levanon N., Multifrequency complementary phase coded radar signal, IEE Proc.-Radar, Sonar Navig., no. 6, December 2000.
[6] Kretschmer F.F. and Lewis B.L., Doppler properties of polyphase coded pulse compression waveforms, IEEE Trans. Aerosp. Electron. Syst., AES-19, (4), pp. 521-531, 1983.


[7] Tseng, C. C., and Liu, C. L., Complementary sets of sequences, IEEE Transactions on Information Theory, IT-18, no. 5, pp. 644-652, Sep. 1972.
[8] Popovic, B. M., Complementary sets based on sequences with ideal periodic autocorrelation, Electronics Letters, vol. 26, no. 18, pp. 1428-1430, Aug. 1990.
[9] Golomb, S. W., Two-valued sequences with perfect periodic autocorrelation, IEEE Trans. Aerosp. Electron. Syst., 28, (2), pp. 383-386, 1992.

C.G. Raghavendra did his B.E. in Electronics & Communication Engg. from JMIT Chitradurga in 2000 and his M.Tech in Digital Electronics & Communication Engineering from M.S. Ramaiah Institute of Technology, Bangalore, in 2004. He is presently working as Senior Lecturer in the Dept. of E&C, M.S.R.I.T, Bangalore. He worked as an Engineer in the Aeronautical Development Agency (ADA), Bangalore, in the Avionics Integration Group from 2001 to 2002. He has presented papers in national and international symposiums. His area of interest is signal processing.

Dr. N.N.S.S.R.K. Prasad is presently working as Deputy Project Director (Avionics Systems) in the Aeronautical Development Agency (ADA) for the LCA project. He joined ADA in 1998 from SAMEER, Mumbai (1986-1998), where he was working as Head, Signal Processing Cell. He did his B.Tech in Electronics & Communication Engineering and M.Tech in Controls and Instrumentation from JNTU College of Engineering, Kakinada (AP) in 1985 and 1987 respectively, and his PhD in Electrical Engineering from IIT Bombay, Mumbai, in 2003. He has been working in the field of signal processing for radar and communication applications since he joined SAMEER, and worked on the design and development of the MST Radar for atmospheric research and applications. He also worked on the Integrated Opto-electronics project of the Ministry of Information Technology during his tenure at SAMEER. He is currently working on another prestigious project for the nation, the Indian Light Combat Aircraft (TEJAS), and its Multimode Radar. He has more than 50 publications in national and international conferences, symposia and journals. He is a Fellow of IETE, Senior Member of IEEE and Member of IEE, IE, OSI, ISI, SEMCE & AeSI.



A Novel Amplitude Detector for Non-sinusoidal Power System Signal


Arghya Sarkar1
Electronics and Communication Engineering MCKV Institute of Engineering
Howrah -711204, India sarkararghya@yahoo.co.in

Sawan Sen2
Electrical Engineering Department Techno India Saltlake, India sen_sawan@rediffmail.com

S. Sengupta3
Department of Applied Physics University of Calcutta 92, A. P. C. Road, Kolkata - 700009, India samarsgp@rediffmail.com

A. Chakrabarti4
Department of Electrical Engineering, Bengal Engineering and Science University Shibpur, Howrah, India a_chakraborti55@yahoo.com

Abstract This paper presents development and implementation of a novel digital signal processing algorithm for on-line estimation of peak value of the fundamental component of a nonsinusoidal power system signal. Compared with the wellestablished technique such as the enhanced phase-locked-loop (EPLL) based system, the proposed algorithm provides higher degree of immunity and insensitivity to harmonics and noise, and faster response. Based on simulation studies, performances of the proposed algorithm at different operating conditions have been presented and its accuracy and response time have been compared with the EPLL systems. Keywords Algorithm, amplitude estimation, digital filter, harmonics, Simpson integrator

I. INTRODUCTION

The peak value of a sinusoidal signal is an important piece of information, which has applications in control, protection, and monitoring functions in power systems. While this is straightforward under sinusoidal conditions, the task of amplitude estimation becomes challenging and interesting in the presence of harmonic distortion. A variety of techniques and algorithms have been proposed in the literature to estimate amplitude, for example, the discrete Fourier transform (DFT) based approach [1], Kalman filtering [2], the least mean square (LMS) algorithm [3], a numerical method [4], the enhanced phase-locked-loop (EPLL) [5] and orthogonal peak detectors [6]. However, most of the aforementioned methods involve a trade-off between accuracy and speed. In this paper, an integration based digital signal processing algorithm is proposed to measure the amplitude of the fundamental component of a distorted power system signal. The simulation results obtained by the proposed algorithm confirm an advantage in faster response time and better immunity and insensitivity to harmonics and noise than the conventional EPLL based system.

II. PEAK DETECTION SCHEME

Let s(t) represent a continuously measured non-sinusoidal signal, expressed as

$$s(t) = \sum_{k} S_{\max k} \sin(k\omega t + \phi_k) \qquad (1)$$

where ω is the fundamental angular frequency, S_max k are the peak values and φ_k are the phase angles of the kth harmonic.

Pre-filtration of the non-sinusoidal signal defined in (1) can eliminate harmonics, keeping the fundamental component almost unaltered, and leads to the following observation model

$$s_F(t) = S_{\max 1} \sin(\omega t + \phi_F) \qquad (2)$$

where φ_F is the new phase angle. If all the parameters remain constant, the integration of (2) takes the following form

$$s_{FI}(t) = \int_0^t s_F(\tau)\, d\tau = \frac{S_{\max 1}}{\omega}\cos(\phi_F) - \frac{S_{\max 1}}{\omega}\cos(\omega t + \phi_F). \qquad (3)$$

Effective filtration can eliminate the constant term of (3) and provides the following signal:

$$s_{FFI}(t) = -\frac{S_{\max 1}}{\omega}\cos(\omega t + \phi_F). \qquad (4)$$

Further integration of (4) and elimination of the constant term gives

$$s_{FSI}(t) = -\frac{S_{\max 1}}{\omega^{2}}\sin(\omega t + \phi_F). \qquad (5)$$

If the distorted power system signal is uniformly sampled at a sampling frequency f_s greater than the Nyquist rate, so that aliasing of spectra does not occur, then (2), (4) and (5) can be discretized at any arbitrary sample instant n as

$$s_F[n] = S_{\max 1}\sin(\omega n + \phi_F) \qquad (6)$$

$$s_{FFI}[n] = -\frac{S_{\max 1}}{\omega}\cos(\omega n + \phi_F) \qquad (7)$$

$$s_{FSI}[n] = -\frac{S_{\max 1}}{\omega^{2}}\sin(\omega n + \phi_F). \qquad (8)$$

The fundamental amplitude can easily be obtained from (6)-(8) as

$$S_{\max 1}[n] = \sqrt{r[n]} \qquad (9)$$

where

$$r[n] = \frac{s_F^{2}[n]\, s_{FSI}[n] - s_F[n]\, s_{FFI}^{2}[n]}{s_{FSI}[n]}. \qquad (10)$$

Clearly, r[n] depends on the region where the readings s_FSI[n] are taken and would thus tend to be worst where s_FSI[n] is small, i.e., in the region of a zero crossing. Hence, the following theorem is used [7]:

If $\dfrac{a}{b} = \dfrac{c}{d} = \dfrac{e}{f} = \cdots > 0$, (11) then $\dfrac{a + c + e + \cdots}{b + d + f + \cdots} = \dfrac{a}{b}$. (12)

Accordingly, r[n] can be modified as

$$r[n] = \frac{\displaystyle\sum_{j=0}^{A-1}\Big(s_F^{2}[n-j]\, s_{FSI}[n-j] - s_F[n-j]\, s_{FFI}^{2}[n-j]\Big)}{\displaystyle\sum_{j=0}^{A-1} s_{FSI}[n-j]} \qquad (13)$$

where A is the number of consecutive estimations. Hence, the fundamental amplitude can be estimated from (9) and (13) as

$$S_{\max 1}[n] = \sqrt{\frac{\displaystyle\sum_{j=0}^{A-1}\Big(s_F^{2}[n-j]\, s_{FSI}[n-j] - s_F[n-j]\, s_{FFI}^{2}[n-j]\Big)}{\displaystyle\sum_{j=0}^{A-1} s_{FSI}[n-j]}}. \qquad (14)$$
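The identity behind (9)-(14) can be checked numerically with the short sketch below, which builds ideal samples of s_F, s_FFI and s_FSI directly from the closed forms (6)-(8). The amplitude, phase, frequencies and the short window used here are assumptions for this check only; Section III-B recommends setting A to one full fundamental cycle in the actual algorithm.

```python
import numpy as np

# Numerical check of Eqs. (9)-(14) using ideal samples built from Eqs. (6)-(8).
# S1, phi, the frequencies and the short check window A are illustrative assumptions.

f, fs = 50.0, 6400.0                  # fundamental and sampling frequency (as in Sec. IV)
S1, phi = 1.3, 0.3                    # assumed fundamental amplitude and phase
w = 2 * np.pi * f / fs                # digital fundamental frequency (rad/sample)

n = np.arange(256)
sF   = S1 * np.sin(w * n + phi)                 # Eq. (6)
sFFI = -(S1 / w) * np.cos(w * n + phi)          # Eq. (7)
sFSI = -(S1 / w ** 2) * np.sin(w * n + phi)     # Eq. (8)

num = sF ** 2 * sFSI - sF * sFFI ** 2           # numerator of Eq. (10) per sample
A, k = 32, 100                                  # short window and estimation instant (assumed)
r = num[k - A + 1:k + 1].sum() / sFSI[k - A + 1:k + 1].sum()   # Eq. (13)
print("estimated amplitude:", np.sqrt(r), " true:", S1)        # Eqs. (9)/(14)
```

Because every per-sample ratio in (13) already equals S_max1 squared, the windowed sum simply keeps the estimate well conditioned near zero crossings of s_FSI, which is exactly the role of the theorem (11)-(12).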

III. DESIGN CONSIDERATION

For a practical implementation of the fundamental amplitude estimation algorithm, it is necessary (a) to design fast-response, low-order, stable band-pass first-degree and second-degree digital integrators and (b) to choose a suitable value of A.

A. Design of band-pass first degree and second degree digital integrators

The basic requirement is to design suitable first-degree and second-degree digital integrators so that there is a π/2 phase difference between s_F[n] and s_FFI[n], and a π phase difference between s_F[n] and s_FSI[n]. At the same time s_FFI[n] and s_FSI[n] should not have any dc offset. The proposed design procedure starts from the conventional Simpson rule based first-degree digital integrator (FDDI) and then extends it to the second-degree case. The detailed design procedure is explained below. The transfer function of a Simpson FDDI is given by [8]

$$H_{FDDI}(z) = \frac{T_s}{3}\cdot\frac{1 + 4z^{-1} + z^{-2}}{1 - z^{-2}} \qquad (15)$$

where T_s is the sampling interval. In order to remove the dc component, the proposed FDDI is cascaded with an appropriate band-pass filter (BPF). A Chebyshev I digital BPF of order 4 and pass-band 45-55 Hz has been selected for that purpose. The simplest way to determine the transfer function of the double integrator is to cascade two FDDIs. Hence, using (15), the transfer function of the second-degree digital integrator (SDDI) can be obtained as

$$H_{SDDI}(z) = \frac{T_s^{2}}{9}\cdot\frac{1 + 8z^{-1} + 18z^{-2} + 8z^{-3} + z^{-4}}{(1 - z^{-2})^{2}}. \qquad (16)$$

In order to maintain a constant phase difference between s_F[n] and s_FSI[n], the same 4th-order Chebyshev I BPF as utilized with the FDDI has been cascaded with (16). Since the BPF is an integral part of the designed stable FDDI and SDDI, the distorted power system signal s[n] can directly be fed to the FDDI and SDDI to obtain s_FFI[n] and s_FSI[n], respectively. These design considerations lead to the construction of the block diagram (shown in Fig. 1) for the fundamental amplitude estimation algorithm.

Fig. 1 Block diagram of the fundamental amplitude estimation algorithm.

B. Choice of A

The choice of A plays an important role in the accuracy and computational complexity of the proposed algorithm. Simulation results show that A is the best choice in terms of computational burden and smoothing criteria when it is equal to the number of samples per fundamental cycle.

IV. PERFORMANCE ANALYSIS

A set of simulation tests has been performed in the MATLAB environment to assess the validity and performance of the proposed algorithm under different operating conditions. The fundamental amplitude for each of the cases has been computed by the proposed algorithm and compared with the results obtained from the enhanced phase-locked-loop (EPLL) based system [5]. The block diagram of the EPLL structure has been presented in Fig. 1 of [5]. For a 1-p.u., 50-Hz input signal, the parameter values K = K_P = 200 and K_i = 10,000 have been selected [5], and the sampling frequency is kept constant at 6.4 kHz.

A. Static sinusoidal test

In this test, the input test signal is a 1 p.u., 50 Hz pure sinusoidal signal. Simulation results show that both the proposed method and the EPLL system provide zero steady-state error. Another important observation is that the pass-band ripple factor, RP, of the Chebyshev I band-pass filter of the proposed method significantly affects the algorithm's convergence and accuracy.
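As a numerical check of the integrators introduced in Section III, the sketch below evaluates the Simpson FDDI and SDDI of Eqs. (15)-(16) at the 50 Hz fundamental and defines the dc-blocking Chebyshev I band-pass filter that the method cascades with them. The sampling rate matches Section IV, while the 0.5 dB ripple value is an assumption.

```python
import numpy as np
from scipy import signal

# Frequency-response check of the Simpson integrators of Eqs. (15)-(16) at 50 Hz.
# fs follows Section IV; the 0.5 dB pass-band ripple is an assumed value.

fs = 6400.0
Ts = 1.0 / fs
w50 = 2 * np.pi * 50.0 / fs                          # 50 Hz in rad/sample

b_fddi = (Ts / 3) * np.array([1.0, 4.0, 1.0])        # Eq. (15)
a_fddi = np.array([1.0, 0.0, -1.0])
b_sddi = (Ts ** 2 / 9) * np.array([1.0, 8.0, 18.0, 8.0, 1.0])   # Eq. (16)
a_sddi = np.convolve(a_fddi, a_fddi)                 # (1 - z^-2)^2

_, H1 = signal.freqz(b_fddi, a_fddi, worN=[w50])
_, H2 = signal.freqz(b_sddi, a_sddi, worN=[w50])
print("|H_FDDI| vs ideal 1/omega   :", abs(H1[0]), 1 / (2 * np.pi * 50))
print("|H_SDDI| vs ideal 1/omega^2 :", abs(H2[0]), 1 / (2 * np.pi * 50) ** 2)

# 4th-order Chebyshev I band-pass (45-55 Hz) cascaded with both integrators.
b_bp, a_bp = signal.cheby1(2, 0.5, [45.0, 55.0], btype="bandpass", fs=fs)
_, Hbp = signal.freqz(b_bp, a_bp, worN=[w50])
print("|H_BPF| at 50 Hz (pass-band):", abs(Hbp[0]))
```

At 50 Hz the Simpson integrators track the ideal 1/ω and 1/ω² gains closely, which is what allows the closed forms (6)-(8) to be used in the estimator of Section II.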


Hence, like the EPLL system, the proposed algorithm also provides a high degree of control over the speed and accuracy of estimation by tuning RP.

B. Static harmonic test

To study the effect of harmonics on the performance of the proposed amplitude estimator, an input signal having a fundamental frequency of 50 Hz, a third harmonic component in the range from 0% to 50%, and a fifth harmonic component equal to half of the third component has been utilized. The absolute maximum errors for the proposed technique in comparison with the EPLL system are shown in Fig. 2, from which it is observed that much higher accuracy can be achieved with the proposed scheme.

D. Dynamic response during step variation of amplitude A sinusoid with fixed frequency of 50 Hz has been utilized for this test of which amplitude abruptly drops from 1.6 to 1 p.u at 2s. Corresponding to the original signal amplitude, Fig. 4 shows the amplitude tracking of the proposed algorithm and the EPLL system during step amplitude change. The simulation results indicates that the proposed scheme provides faster convergence (response time is about 50ms) than the EPLL system (response time is about 80ms).

Fig. 4 Amplitude estimation during step change in signal amplitude.

Fig. 2 Absolute maximum percent errors in terms of the third harmonic.

C. Noise test Sinusoidal 1 p.u. 50-Hz signals with the superimposed white zero-mean Gaussian noise has been utilized as input test signals. A range from a highly noisy signal (SNR=20 dB) to a low noisy signal (SNR=60 dB) is covered. The absolute peak of oscillating steady state error has been depicted in Fig. 3, which exhibits that the proposed algorithm provides higher accuracy and better immunity to noise than the EPLL system.

E. Dynamic response during step variation of frequency As the response of a sudden increase in the frequency of 1 p.u sinusoids at 2s from 50 to 40 Hz, tracking of the fundamental amplitude has been depicted in Fig.5, which exhibits that the proposed approach again provides faster convergence (transient time = 80ms) than the EPLL system (transient time = 100ms).

Fig. 5 Amplitude estimation during step frequency change.

Fig. 3 Absolute maximum percent errors in terms of SNR.

V. CONCLUSIONS

A novel digital signal-processing algorithm for estimation of amplitude of the fundamental component of a distorted power system signal has been presented and its performance is evaluated by means of simulation studies. Higher accuracy and


insensitivity to harmonics and noise, and faster response have been observed compared to the conventional EPLL based system. Structural simplicity of the proposed estimator makes it suitable for digital implementation in both software environment, e.g., a DSP, and a digital hardware environment, e.g., FPGA or ASIC. ACKNOWLEDGMENT The authors would like to thank Prof. M. KarimiGhartemani and Prof. M. R. Iravani, for their useful suggestions to build SIMULINK model of the EPLL systems. REFERENCES
A. Oppenheim, and R. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1989. [2] I. Kamwa and R. Grondin, Fast adaptive schemes for tracking voltage phasor and local frequency in power transmission and distribution systems, IEEE Trans. Power Delivery, vol. 7, pp. 789795, Apr. 1992. [3] T. Lobos, Nonrecursive methods for real-time determination of basic waveforms of voltages and currents, Proc. Inst. Elect. Eng., vol. 136, no. 6, pp. 3473551, Nov. 1989. [4] H.-L. Jou et al., A shortest data window algorithm for detecting the peak value of sinusoidal signals, IEEE Trans. Ind. Electron., vol. 37, pp. 424425, Oct. 1990. [5] M. Karimi-Ghartemani and M. R. Iravani, Robust and frequencyadaptive measurement of peak value, IEEE Trans. Power Delivery, vol. 19, no. 2, pp. 481 489, Apr. 2004. [6] C.-T. Pan and M.-C. Jiang, A quick response peak detector for variable frequency three-phase sinusoidal signals, IEEE Trans. Ind. Electron., vol. 41, pp. 434440, Aug. 1994. [7] Q. Q. Xu, J. L. Suonan and Y. Z. Ge, Real-time measurement of mean frequency in two-machine system during power swings, IEEE Trans. Power Delivery, vol. 19, pp.1018-1023, 2004. [8] M. A. Al-Alaoui, A class of second order integrators and lowpass differentiators, IEEE Trans. Circuits Syst. I, vol. 42, pp. 220-223, Apr. 1995. [1]



A Novel Method for Flaw Detection and Detectability of Flaw using Kirchoff Approximation
K.S.Aprameya*, R.S.Anand#, B.K.Mishra##, S.Ahmed+
* Dept of Electrical Engg, University BDT College of Engg, Davangere, India (aprameya_ks@yahoo.com)
# Dept of Electrical Engg, Indian Institute of Technology, Roorkee, India (anandfee@iitr.ernet.in)
## Dept of Mech and Indust Engg, Indian Institute of Technology, Roorkee, India (bhanufme@iitr.ernet.in)
+ Senior Scientist/Engineer, Naval Surface Warfare Center, Maryland, USA (salahuddin.ahmed@navy.mil)

Abstract - The aim of this paper is to determine the flaw response in an ultrasonic pulse echo simulation. A polycrystalline material is considered for the study, and one of the standard scatterers, a spherical void, is considered. A theoretical method has been developed to predict the flaw response; this method is suitable for small flaws only. The incident ultrasound beam is modeled by the well known multi Gaussian beam approach. The flaw response is expressed both in the frequency domain and in the time domain. In this paper, an experiment is not performed to validate the predicted response. However, the predicted flaw response is compared with the grain noise signal so that flaw detectability can be estimated. Keywords- wave number, nondestructive evaluation, flaw response, scattering, kirchoff approximation

these material properties as well as the influence of micro structural inhomogeneities and the effects of interfaces on ultrasonic wave propagation have to be taken into account [10]. In this respect, mathematical modelling provides an efficient method of assisting analysis. In this regard, mathematical modelling may be used to make the analysis simpler and to arrive at optimized experimental setups. In this paper, ultrasound wave propagation through polycrystalline material is considered. A polycrystalline material is composed of numerous discrete grains, each having a regular crystalline atomic structure. The elastic properties of grains are anisotropic and their crystallographic axes are oriented differently. When an ultrasonic wave propagates through such a polycrystalline aggregate, it is attenuated due to scattering at the grain boundaries [5,12]. The energy scattered by a flaw when subjected to an incident wave field and received by a transducer (flaw response) depends on the size, shape and material properties of the flaw. The incident ultrasound beam is modeled by Multi Gaussian Beam (MGB) model as proposed by Wen and Breazeale [8]. In this paper, the particle velocity due to passage of ultrasonic wave for different flaw locations is determined using MGB model and is used to determine the flaw response. present work is a spherical void of radius b and distance ZF from the contact circular transducer. at a

I.

INTRODUCTION

Ultrasonic testing is one of the most versatile non destructive evaluation methods and is applicable to most materials metallic or non-metallic. Nowadays, the modelling of nondestructive evaluation has gained profound importance than experimental setups. The simulation of ultrasonic testing using appropriate models allows us to perform parametric studies and to obtain quantitative simulated results [1, 10-11].

Also, it is known that many modern structural materials exhibit anisotropic elastic behavior leading to complicated wave propagation phenomena. To ensure the reliability of ultrasonic nondestructive testing techniques, II. THE PROBLEM FORMULATION

A pulse echo simulation as shown in Fig (1). Fig.1(a) shows the location of flaw in near field and Fig.1(b) shows the same in far field. The flaw considered in the


Fig. 1 Pulse echo simulation (a) Flaw in near field (b) Flaw in far field

III. THEORY

This section explains the theory briefly in two parts. In the first part the theory for the voltage received at the transducer is explained, and the other part deals with the determination of the grain noise signal.

I. Determination of flaw response V_R(\omega)

The voltage received at the transducer is determined by considering the flaw at different locations. The flaw response is determined for two cases. In the first case, the material is assumed to be homogeneous and isotropic. In the second case the material is assumed to be microscopically inhomogeneous. For an ultrasonic wave propagating in the z direction, and assuming the flaw is small, the voltage received at the transducer is given by [6]

V_R(\omega) = \beta(\omega)\left[\frac{2}{ik\,S_T}\right][V(\omega)]^2 A(\omega)   (1)

The different variables of Eqn. 1 are explained in the references [6, 9]. The determination of the particle velocity due to the passage of the ultrasonic wave is explained in reference [9]. This requires the coefficients A_n and B_n, which are used in evaluating the Gaussian beam description of the field of a rigid piston radiator; these coefficients are given in reference [8]. The expression for the particle velocity is used here also and is given by [9]

v(\omega) = \sum_{n=1}^{10} \frac{A_n \exp(ikz)}{1 + \dfrac{izB_n}{z_R}}\left(1 - \dfrac{B_n}{kz_R\left(1 + \dfrac{izB_n}{z_R}\right)}\right)   (2)

In Eqn. 1, \beta(\omega) is the transducer efficiency, given by the ratio of the outgoing ultrasonic power to the incident electrical power in the transducer cable, and it is assumed that \beta(\omega) follows a Gaussian distribution. Moreover, the far-field scattering amplitude A(\omega) can be evaluated for different flaw geometries. Using the Kirchhoff approximation, the scattering amplitude A(\omega) as reported by Kim et al. [6] is given by:

A(\omega) = -\frac{b}{2}\exp(-ikb)\left[\exp(-ikb) - \frac{\sin(kb)}{kb}\right]   (3)

Using Eqn. 3 in Eqn. 1, we get

V_R(\omega) = -\frac{\beta(\omega)\,b}{ik\,S_T}\,[V(\omega)]^2\left[e^{-2ikb} - e^{-ikb}\frac{\sin(kb)}{kb}\right]   (4)

In this paper the flaw response is determined using the Kirchhoff approximation. Using Eqn. 2 and Eqn. 4, the flaw response is determined. In the first case the material is considered to be homogeneous, so that the wave number k in Eqns. 1-4 is a real quantity, and the flaw response is determined from Eqn. 2 and Eqn. 4 by varying k uniformly. This is called Case A. In the second case the material is considered to be microscopically inhomogeneous. The effect of scattering of the ultrasonic wave on the received flaw signal is analyzed. Scattering of the ultrasonic wave occurs because most metals are not truly homogeneous. Scattering is highly dependent on the ratio of crystallite size to ultrasonic wavelength. The actual k values used in the calculation of the flaw response are given by:

k = \mathrm{Re}[kd]/d + i\,\mathrm{Im}[kd]/d   (5)

Here, d is the mean diameter of the grains and d = 100 microns has been assumed in the present work.


The frequency response (DFT) samples and the corresponding time domain response (IDFT) are obtained for N = 1024 points. Hence, using the computed k values and substituting Eqn. 5 in Eqn. 2 and Eqn. 4, the flaw response is determined. This is called Case B.

II. Determination of grain noise signal

The probability of detection of a flaw of a given size involves the determination of the backscattered power and the average grain noise spectra. To quantitatively describe the grain-induced backscattered signals at the receiving transducer, we employ the formalism of Rose [7]. Starting from the electro-mechanical reciprocity relations formulated by Auld [2], the expected backscattered power is

P(\omega) = \frac{\eta^2 k^4 U_o^4\,\langle \delta C_{3333}\,\delta C_{3333}^{*}\rangle}{4\pi^2}\int W(\vec{s})\,e^{2i\vec{k}\cdot\vec{s}}\,d^3 s

with the superscript * denoting the complex conjugate; after carrying out algebraic manipulations this can be written as

P(\omega) = \frac{\eta^2 k^4\,\langle \delta C_{3333}\,\delta C_{3333}^{*}\rangle}{4(\pi a^2 \rho v_L^2)^2}\int W(\vec{s})\,e^{2i\vec{k}\cdot\vec{s}}\,d^3 s   (6)

Fig. 2 Near field flaw response, z = 10 mm (Case A)

Fig. 3 Far field flaw response, z = 20 mm (Case A)

After simplification, it can be shown that

N(\omega) = \frac{D}{2a}   (7)

It is observed from Eqn. 7 that the grain noise signal is a function of frequency.
IV RESULTS AND DISCUSSIONS

A circular transducer of diameter 20 mm with a center frequency of 1 MHz is considered. For an ultrasonic wave propagating in the steel medium with a velocity of 5.9 mm/us, the length of the near field corresponds to 16.94 mm. By considering the flaw in the near and far field, the flaw response is obtained in the time and frequency domains. The pulse echo response is plotted in the frequency range of 0 to 10 MHz in steps of 0.009765625 MHz. The radius of the spherical void is taken to be 1 mm. The flaw response in the frequency domain is considered as Discrete Fourier Transform (DFT) coefficients here. The time domain response is obtained by taking the Inverse Discrete Fourier Transform (IDFT). The number of points considered for the DFT and IDFT is 1024. To obtain the flaw response a Matlab program is written. Fig. 2 and Fig. 3 show the flaw response for Case A, and Fig. 4 and Fig. 5 illustrate it for Case B.
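As a rough illustration of the sampling described above, the following short Python sketch computes the near-field length from the stated transducer data, builds the 1024-point frequency grid, and takes the IDFT; the flaw-response spectrum itself is only a placeholder array, since the real spectrum would come from the measurement model of Eqns. 1-5.

```python
import numpy as np

# Stated parameters: 20 mm diameter, 1 MHz transducer, steel (5.9 mm/us)
D = 20.0           # transducer diameter, mm
f0 = 1.0e6         # center frequency, Hz
c = 5.9e3          # longitudinal velocity, m/s (5.9 mm/us)

wavelength = c / f0 * 1e3              # wavelength in mm
near_field = D**2 / (4 * wavelength)   # classical near-field length, mm
print(f"Near-field length = {near_field:.2f} mm")   # about 16.95 mm

# Frequency grid: 0-10 MHz in N = 1024 steps of 0.009765625 MHz
N = 1024
f = np.arange(N) * 10e6 / N

# Placeholder spectrum: a Gaussian band around f0 standing in for V_R(f)
VR = np.exp(-((f - f0) / 0.4e6) ** 2)

# Time-domain response via IDFT of the sampled spectrum
v_t = np.fft.ifft(VR, n=N)
dt = 1.0 / 10e6                        # time step = 1 / total bandwidth
print(f"{v_t.size} time samples, dt = {dt*1e6:.2f} us")
```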

Fig.4 Near field response z = 10 mm (Case B)

Fig. 5 Far field response, z = 20 mm (Case B)

From Fig. 2 and Fig. 3 it is observed that the amplitude of the flaw response decreases as the distance z is increased. The flaw response in the frequency domain is centered on the center frequency of the transducer, which is selected as 1 MHz. The time domain response is shifted


towards the right as the distance z from the flaw to the transducer is increased. Fig. 4 and Fig. 5 show the flaw response for Case B. Here the variation of the flaw response is similar to that of Fig. 2 and Fig. 3, but the amplitude of the flaw response is clearly reduced in Case B, since the effects of ultrasonic attenuation and phase velocity dispersion due to grain scattering are considered in this case. Further, the variation of the flaw response against the mean diameter of the grains is studied. It is found that the flaw response is reduced with an increase in the mean diameter of the grains [9]. This result agrees with the physics of ultrasonic wave propagation.

I. Variation of grain noise signal

Using Eqn. 7, the RMS value of the grain noise is determined for a flaw of given size and shape. The backscatter coefficient is determined using the material properties. The grain noise signal is considered in the frequency domain. Fig. 6 shows the variation of the grain noise signal in the frequency domain for a flaw of diameter 4 mm. In Fig. 7, PFS means predicted flaw signal and GNS means grain noise signal. From Eqn. 7, it is seen that the grain noise signal is a function of the diameter of the flaw. Thus, for an increase in the radius of the flaw, the grain noise signal will increase, as is evident in Fig. 7. From Fig. 7, it follows that when the flaw response is predicted using the Kirchhoff approximation, for the case when z is up to 10 mm a flaw of size more than 0.5 mm can be detected, and when z is in the range of 14 mm a flaw of size more than 1.5 mm can be detected. Similarly, when z is within the limit of 20 mm, a flaw of size more than 2.5 mm can be detected. For values of z higher than 30 mm, the predicted flaw response is very much reduced compared with the grain noise signal, making flaw detection difficult.

Fig. 7 Predicted flaw response and grain noise signal

V CONCLUSION

In this paper, a beam propagation model based on the multi-Gaussian beam superposition method (MGB) is used to study the interaction of plane ultrasonic waves with the microstructure. This method is computationally efficient and gives a more realistic realization of the incident beam. The results obtained in this paper agree with the physics of ultrasonic wave propagation. The flaw response predicted using the Kirchhoff approximation is compared with the grain noise signal. Finally, it is concluded that if the grain noise signal is more than the predicted flaw response, the flaw cannot be detected from the received signal without the use of a complex signal processing methodology.

Acknowledgements

The authors are highly grateful to the Department of Electrical Engineering, Indian Institute of Technology, Roorkee for providing the necessary facilities to carry out this research work. For K. S. Aprameya this research work was supported by Davangere University, BDT College of Engineering, Davangere, Karnataka, through the AICTE New Delhi QIP research program.

Fig. 6 Grain noise signal in frequency domain

II. Comparison of predicted flaw response with grain noise signal

For a given distance z, the amplitude of the flaw response for different dimensions of the spherical void is determined using the Kirchhoff approximation. The frequency domain plots of the flaw response are considered. These values are compared with those of the grain noise N(\omega). Fig. 7 shows the variation of the grain noise signal and the predicted flaw response against the varying radius of the voids. In the Kirchhoff approximation the radii of the spherical void selected for study are 0.5 mm, 1 mm, 2.5 mm and 5 mm. Fig. 7 shows the variation of the predicted flaw response and the grain noise signal against the radius of the void.


VI REFERENCES
[1] A. Minachi, F.J. Margetan and R.B. Thompson, Reconstruction of a piston transducer beam using multi Gaussian beams and its applications, Review of progress in QNDE, vol. 17A, edited by D.O. Thompson and D.E. Chimenti, pp. 907-914, Plenum Press, New York, NY,1998. [2] B. A. Auld, General electromechanical reciprocity relations applied to the calculation of elastic wave scattering coefficients, Journal of Wave Motion, vol.1, no.1, pp. 3-10, Jan 1979. [3] F. E. Stanke and G.S. Kino, A unified theory for elastic wave propagation in polycrystalline materials, J. Acoust. Soc.Am , vol.75, no. 3, pp. 665-681, March 1984. [4] F. E. Stanke, Unified theory and measurements of elastic waves in polycrystalline materials, Ph.D. thesis, Stanford University, Stanford, CA(1983). [5] F. J. Margetan, T.A. Gray and R.B. Thompson, A technique for quantitatively measuring microstructurally induced ultrasonic noise, Review of progress in QNDE, vol. 10B, edited by D.O. Thompson and D.E. Chimenti, pp. 1721-1728, Plenum Press, New York, NY, 1991. [6] H. J. Kim, S.J. Song and L.W. Schmerr, An ultrasonic measurement model using a multi-Gaussian beam model for a rectangular transducer, Ultrasonics , vol. 44, pp. e969-e974, 2006. [7] J.H. Rose, Ultrasonic backscattering from polycrystalline aggregates using time domain linear response theory, Review of progress in QNDE, vol. 10B, 1991, pp.1715-1720. [8] J. J.Wen and M.A.Breazeale, A diffraction field expressed as the superposition of Gaussian beams, J. Acoust. Soc.Am, vol.83, no. 5, pp. 1752-1756, May 1988. [9] K.S. Aprameya, R.S. Anand, B.K. Mishra and S. Ahmed, Prediction of flaw response in an ultrasonic pulse echo simulation using Born approximation, Journal of Nondestructive Evaluation,Taylor and Francis, vol.24,no.3,pp.289-300,Sept.2009. [10] M. Spies, Analytical methods for modelling of ultrasonic nondestructive testing of anisotropic media, Ultrasonics , vol.42, pp.213-219, 2004. [11] M. Spies, Modelling of transducer fields in homogeneous anisotropic materials using Gaussian beam superposition, Journal of NDT and E , vol.33, pp.155-162, 2000. [12] R. Huang, L.W. Schmerr and A. Sedov, Improving the Born approximation for the scattering of ultrasound in elastic solids, Ultrasonics , vol. 44, pp. e981-e984, 2006. [13] S. Ahmed and R.B. Thompson, Attenuation and dispersion of ultrasonic waves in rolled aluminium, Review of progress in QNDE, vol. 17B, edited by D.O. Thompson and D.E. Chimenti, pp. 1640-1655, Plenum Press, New York, NY, 1998. [14] S. Ahmed and R.B. Thompson, Effect of preferred grain orientation and grain elongation on ultrasonic wave propagation in stainless steel, Review of progress in QNDE, vol. 11B, edited by D.O. Thompson and D.E. Chimenti, pp. 1999-2006, Plenum Press, New York, NY, 1992. [15] S. Ahmed and R.B. Thompson, Influence of columnar microstructure on ultrasonic backscattering, Review of progress in QNDE, vol. 14, edited by D.O. Thompson and D.E. Chimenti, pp. 1617-1624, Plenum Press, New York, NY, 1995. [16] S. Ahmed, R.B. Thomopson and P.D. Panetta , A formal approach to include multiple scattering in the estimation of ultrasonic backscattered signals, Review of Progress in QNDE, vol. 22, edited by D.O. Thompson and D.E. Chimenti, American Institute of Physics, pp. 79-82, 2003


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

OFC based insensitive voltage mode universal biquadratic filter using minimum components

T. Parveen
Associate Professor, Electronics Engineering Department, A.M.U., Aligarh, 202002, India. E-mail: tahiraparveen@rediffmail.com

M. T. Ahmed
Principal, Z. H. College of Engineering & Technology, A.M.U., Aligarh, 202002, India. E-mail: muslimahmed@hotmail.com

Abstract- A new multi inputs and single output (MISO) VM universal biquadratic filter (UBF) is realized employing only two low voltage Operational Floating Conveyors (OFCs) along with two resistors and two capacitors. The proposed circuit realizes six standard biquadratic responses, viz., low pass, high pass, non-inverting band pass, inverting band pass, band elimination and all pass. The filter enjoys attractive features, such as low active and passive component count, low sensitivity performance, and operation at a low supply voltage (0.75 V). The filter was also designed and verified using PSPICE with convincing results.

Keywords- Operational Floating Conveyors, universal biquadratic filter, analog signal processing.

I. INTRODUCTION

There is a growing interest in designing filters with multi inputs and single output (MISO) [1-5], because of the following advantages: (i) realization of different filter functions from the same circuit based on the selection of input excitations, (ii) reduced component count of both active and passive components as compared to single input and single output (SISO) MBFs, (iii) greater versatility and simplicity in design, (iv) reduced cost and (v) superior performance. Recently, many voltage mode universal biquadratic filters with multi inputs and single output (MISO) structures have been proposed [1-5, 8]. The circuits in [2, 5] have the drawback of using an excessive number of active and passive components. The circuits of Ref. [3, 4] require an excessive number of passive components and have complex matching constraints. In [1, 5], an inverting voltage signal is additionally required to realize the all pass function, which adds one more active device. Recently in [6-9], VM universal biquadratic filters have been reported which employ large numbers of active and passive components. In Ref. [8], an OFC-based VM multi inputs and single output UBF is realized using two OFCs along with seven passive components. Recently in Ref. [9], an OFC-based voltage mode single input and multi output (SIMO) UBF is realized using four OFCs and ten grounded and floating passive components.

In applications where power consumption and IC implementation are important, the number of active elements employed becomes important. Taking this into consideration, a novel voltage mode universal biquadratic filter with multi inputs and single output is presented. It uses only two DO-OFCs, along with two resistors and two capacitors, in its realization. The filter uses a low count of active and passive components and provides five standard responses through appropriate selection of inputs. The OFC-based UBF enjoys attractive features, such as low active and passive component count, low sensitivity performance and realization of all standard biquadratic responses, over previously reported circuits [1-9].

II. CIRCUIT DESCRIPTION

The proposed circuit of the MISO universal biquadratic filter is shown in Fig. 1. It employs only two dual output operational floating conveyors (DO-OFCs), along with two capacitors and two resistors. The DO-OFC is characterized by:

i_y = 0,\quad v_x = v_y,\quad v_w = z_t\,i_x,\quad i_{z+} = +\,i_w,\quad i_{z-} = -\,i_w   (1)

Routine analysis of the UBF gives the voltage transfer function as:

V_o = \dfrac{s^2 V_3 + s\,\dfrac{1}{R_1 C_2}\,V_1 - s\,\dfrac{1}{R_2 C_2}\,V_2 + \dfrac{1}{R_1 R_2 C_1 C_2}\,V_1}{D(s)}   (2)

where the denominator polynomial, D(s), is given by:

D(s) = s^2 + s\,\dfrac{1}{R_1 C_2} + \dfrac{1}{R_1 R_2 C_1 C_2}


Fig. 1 DO-OFC based MISO universal biquadratic filter

(i) High pass filter: The high pass response is obtained by selecting V3 = Vin, V1 = V2 = 0; the resulting voltage transfer function is given by

T_{HP}(s) = \dfrac{V_{HP}}{V_{in}} = \dfrac{s^2}{D(s)}   (3)

(ii) Inverting band pass filter: If we select V2 = Vin, V1 = V3 = 0, this results in the following inverting band pass (IBP) filter transfer function:

T_{IBP}(s) = \dfrac{V_{IBP}}{V_{in}} = \dfrac{-\,s\,\dfrac{1}{R_2 C_2}}{D(s)}   (4)

(iii) Low pass filter: If we select V1 = V2 = Vin, V3 = 0, and R1 = R2, this results in the following low pass (LP) filter transfer function:

T_{LP}(s) = \dfrac{V_{LP}}{V_{in}} = \dfrac{\dfrac{1}{R_1 R_2 C_1 C_2}}{D(s)}   (5)

(iv) Band elimination filter: If we select V1 = V2 = V3 = Vin, and R1 = R2, this results in the following band elimination (BE) filter transfer function:

T_{BE}(s) = \dfrac{V_{BE}}{V_{in}} = \dfrac{s^2 + \dfrac{1}{R_1 R_2 C_1 C_2}}{D(s)}   (6)

(v) All pass filter: If we select V1 = V2 = V3 = Vin, and R1 = 2R2, this results in the following all pass (AP) filter transfer function:

T_{AP}(s) = \dfrac{V_{AP}}{V_{in}} = \dfrac{s^2 - s\,\dfrac{1}{R_1 C_2} + \dfrac{1}{R_1 R_2 C_1 C_2}}{D(s)}   (7)

It may be noted that the realizations of the HP and IBP responses do not require any matching constraints [cases (i) and (ii)]. The constraints in the LP, BE and AP cases are also simple to satisfy through design, particularly in monolithic technologies, where inherently matched devices are available. The pole frequency (\omega_o) and the pole-Q of the proposed UBF are obtained as

\omega_o = \dfrac{1}{\sqrt{R_1 R_2 C_1 C_2}}, \qquad Q = \sqrt{\dfrac{R_1 C_2}{R_2 C_1}}   (8)

The gains of the UBF are given by:

H_{HP} = 1,\quad H_{LP} = 1,\quad H_{IBP} = \dfrac{R_1}{R_2},\quad H_{BE} = 1,\quad H_{AP} = 1   (9)

III. NON-IDEAL ANALYSIS

Taking the non-idealities of the DO-OFC into account, the port relationships are characterized by:

i_y = 0,\quad v_x = \beta\,v_y,\quad v_w = z_t\,i_x,\quad i_{z+} = \alpha_1 i_w,\quad i_{z-} = -\,\alpha_2 i_w   (10)

With such non-idealities, the characteristic polynomial of the transfer functions is modified to

D'(s) = s^2 + s\,\dfrac{\alpha_1\beta_1}{R_1 C_2} + \dfrac{\alpha_1\alpha_2\beta_1\beta_2}{R_1 R_2 C_1 C_2}   (11)

The non-ideal filter parameters become:

\omega_o' = \sqrt{\dfrac{\alpha_1\alpha_2\beta_1\beta_2}{R_1 R_2 C_1 C_2}}, \qquad Q' = \dfrac{1}{\beta_1}\sqrt{\dfrac{\alpha_2\beta_1\beta_2\,R_1 C_2}{\alpha_1\,R_2 C_1}}   (12)

Also, the non-ideal gains of the UBF are given by:

H_{HP} = 1,\quad H_{LP} = 1,\quad H_{IBP} = \dfrac{\alpha_2\beta_2}{\alpha_1\beta_1}\,\dfrac{R_1}{R_2},\quad H_{BE} = 1,\quad H_{AP} = 1   (13)

It may be seen that in the low to medium frequency ranges, the non-idealities do not have a significant effect on the filter parameters. The gains remain invariant, but there is a slight reduction in the pole-\omega_o and pole-Q values.

IV. SENSITIVITY STUDY

The sensitivities of the filter parameters, pole-\omega_o and pole-Q, are evaluated with respect to the active and passive components and are summarized below:

S^{\omega_o}_{R_1,R_2,C_1,C_2} = -\tfrac{1}{2}, \qquad S^{\omega_o}_{\alpha_1,\alpha_2,\beta_1,\beta_2} = \tfrac{1}{2}

S^{Q}_{R_2,C_1,\alpha_1} = -\tfrac{1}{2}, \qquad S^{Q}_{R_1,C_2,\alpha_2,\beta_2} = \tfrac{1}{2}, \qquad S^{Q}_{\beta_1} = -1   (14)
302

From eqn. (14), it is clear that all the active and passive sensitivity figures are equal to half in magnitude, which is an attractive feature of the circuit. Only S^{Q}_{\beta_1} is slightly higher, being unity in magnitude.

V. DESIGN AND SIMULATION

To demonstrate the performance of the universal biquadratic filter, the circuit is simulated using the OFC model of [10]. Initially the UBF was designed for f_o = 3 MHz at Q = 0.707 for a gain of unity. For R1 = R2 = R = 1.59 kOhm, eqn. (8) yields C1 = 47 pF and C2 = 23 pF. The simulated UBF response is shown in Fig. 2, along with the simulated parameters. It shows close agreement with the theory. The UBF was then tuned by controlling the resistor R2. The BP responses corresponding to fo = 300 kHz, fo = 500 kHz, and fo = 1 MHz at a constant Q of 5 are shown in Fig. 3, along with the simulated values of the filter parameters. The results once again give close agreement with the theory.
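As a quick cross-check of the design values quoted above, the following short Python sketch solves eqn. (8) for C1 and C2 given f_o, Q and R1 = R2 = R; the variable names are illustrative only and the script is not part of the PSPICE verification.

```python
import math

# Design targets stated in the text
f0 = 3.0e6        # pole frequency, Hz
Q = 0.707         # pole quality factor
R = 1.59e3        # R1 = R2 = R, ohms

w0 = 2 * math.pi * f0

# From eqn. (8) with R1 = R2 = R:
#   w0 = 1 / (R * sqrt(C1 * C2))   and   Q = sqrt(C2 / C1)
# Solving the two relations for C1 and C2:
C1 = 1.0 / (w0 * R * Q)
C2 = Q / (w0 * R)

print(f"C1 = {C1 * 1e12:.1f} pF")   # about 47 pF
print(f"C2 = {C2 * 1e12:.1f} pF")   # about 24 pF
```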

VI. CONCLUSION

A new multi inputs and single output voltage mode universal biquadratic filter employing only two low voltage DO-OFCs along with four passive components has been presented. The circuit realizes five biquadratic responses, viz., low pass, high pass, inverting band pass, band elimination and all pass. The filter enjoys attractive features, such as low active and passive component count, very low sensitivity performance, and reliable operation at a low supply voltage of 0.75 V.

REFERENCES
[1] J. W. Horng, C. C. Tsai, and M. H. Lee, Novel universal voltage mode biquad filter with three inputs and one output using only two current conveyors, Int. J. Electronics, Vol. 80, No. 4, 1996, pp. 543-546.
[2] S. Ozoguz and E. O. Gunes, Universal filter with three inputs using CCII+, Electron. Lett., Vol. 32, No. 23, 1996, pp. 2134-2135.
[3] S. I. Liu and J. I. Lee, Voltage mode universal filter using two current conveyors, Int. J. Electronics, 1997, pp. 145-149.
[4] C. M. Chang and S. H. Tu, Universal voltage mode biquadratic filter with four inputs and one output using two CCII+, Int. J. Electron., Vol. 86, No. 3, 1999, pp. 305-309.
[5] J. W. Horng, High input impedance voltage mode universal biquadratic filter using three plus type CCIIs, IEEE Trans. Circuits and Systems II, Vol. 48, No. 10, 2001, pp. 996-997.
[6] J. W. Horng, Voltage mode universal biquadratic filters using CCIIs, IEICE Trans. Fundamentals, E87-A, 2004, pp. 406-409.
[7] J. W. Horng, C. L. Hou, C. M. Chang, and W. Y. Chung, Voltage mode universal biquadratic filters with one input and five outputs, Analog Integrated Circuits and Signal Processing, Vol. 47, No. 1, 2006, pp. 73-83.
[8] Y. H. Ghallab, M. Abou El-Ela and M. Elsaid, A novel universal voltage mode filter with three inputs and single output using only operational floating current conveyor, Proceedings of the International Conference on Microelectronics, IMC 2000, 2000, pp. 95-98.
[9] Y. H. Ghallab, W. Badawy, K. V. I. S. Kaler, M. Abou El-Ela and M. Elsaid, A new second-order active universal filter with single input and three outputs using operational floating current conveyor, The 14th International Conference on Microelectronics, 2002, pp. 42-45.
[10] S. S. Rajput and S. S. Jamuar, Advanced Applications of Current Conveyors: A Tutorial, J. of Active and Passive Electronic Devices, Vol. 2, 2007, pp. 143-164.

Simulated results for Fig. 2: fo = 3.1 MHz, QBP = 0.703, QBE = 0.701, HLP = 1.00, HBP = 1.00, HHP = 1.00, HBE = 1.00, HAP = 1.01.

Fig. 2 The simulated OFC based UBF response at fo = 3 MHz

Simulated results for Fig. 3: fo = 300.02 kHz, Q = 4.65, HBP = 1.00; fo = 500.03 kHz, Q = 4.48, HBP = 1.01; fo = 1.04 MHz, Q = 4.42, HBP = 1.01.

Fig. 3 Frequency tuning of the BPF at constant Q = 5


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Mixture of Gaussian Model for Pedestrian Tracking


C. Sasivarnan, A. Jagan and M.K. Bhuyan
Department of Electronics and Communication Engineering, Indian Institute of Technology Guwahati, India. Email: {sasivarnan, a.jagan, mkb}@iitg.ernet.in
Abstract- The problem of visual inspection of outdoor environments (e.g., airports, railway stations, roads, etc.) has received growing attention in recent times. The purpose of intelligent visual surveillance systems is to automatically perform surveillance tasks by applying cameras in place of human eyes. The two important steps in visual sensing are object detection and tracking. Moving objects are detected by background subtraction. The numerous approaches to this problem differ in the type of background model used. This paper discusses the modeling of each pixel as a mixture of Gaussian distributions. Subsequently, an on-line approximation is used to update the model. We propose to employ median filtering along with connected component labeling. These two steps ensure a fine object model for the tracking framework. Next, objects in the background subtracted image are classified into relevant categories on the basis of shape and boundary information. Our experimental results demonstrate the effectiveness of the proposed methods.

Keywords - video surveillance, mixture of Gaussians, median filter, connected components, metric operator.

I. INTRODUCTION

Due to the decreasing costs and increasing miniaturization of video cameras, the use of digital video based surveillance is rapidly increasing. Visual surveillance in dynamic scenes is currently one of the most active research areas in image processing and computer vision. The main concern is to increase safety and security in indoor and outdoor environments, by replacing traditional video surveillance systems with intelligent visual surveillance systems. The main challenges associated with intelligent video surveillance systems are quick illumination changes, non-stationary backgrounds, initialization with moving objects, similarity of people in shape, size and color, occlusion, and shadows. So there is a need to overcome all these challenges by using appropriate techniques. In a video surveillance system, the main steps involved are object detection, classification and tracking. Moving objects in video frames are detected by different background modeling techniques. In object classification, objects are classified into relevant categories. Tracking is used to find the appearance and location of a particular object in a sequence of frames. Earlier, the frame differencing [4] method was used for background subtraction, in which the current frame is simply subtracted from the previous frame and, if the difference in pixel values for a given pixel is greater than a particular threshold, the pixel is considered as foreground. Even though it is highly adaptive to changes in the background, the problem is that objects have to move continuously. In median filtering [1], previous frames are buffered and the background is calculated as the median of the buffered frames. Then the background is subtracted from the current frame and thresholded to

detect the foreground. But this method requires a large amount of memory for storing and processing video frames. Kalman filters are employed when there are slow and gradual changes in the background. In [2], a single Gaussian model is proposed, in which each pixel is modeled as a single and separate Gaussian distribution. It cannot work well in dynamic environments since it can only handle unimodal distributions. In this paper, a mixture of Gaussian model together with median filtering and connected component labeling is proposed. It is more robust compared to the previous methods, since it can handle multi-modal distributions and is able to model complex non-static backgrounds with small illumination changes. In this, a mixture of Gaussian distributions is used instead of a single Gaussian to model each pixel, and an on-line approximation is used to update the model. The assumption taken is that the background is visible more frequently than any foregrounds and that it has modes with relatively narrow variance. The noise that is present after background subtraction is removed by median filtering. The boundaries of objects are specified by the connected component labeling process. After that, objects are classified mainly into human and vehicle on the basis of shape and boundary information.

II. MIXTURE OF GAUSSIAN MODEL

This is one of the efficient methods for background modeling, in which each pixel is modeled as a mixture of Gaussians [3] and an on-line approximation is used to update the model. The Gaussian distributions in the mixture model are evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. The algorithm is created by taking the assumption that the background is visible more frequently and has less variance compared to the foreground.

A. Modeling of Each Pixel by a Mixture of Gaussian Distributions

In this, each surface which comes into the view of a given pixel is represented by one of a set of states k in {1, 2, ..., K}, where K is an assumed constant, taken as 3 (usually between 3 and 7). Some states correspond to the background and the rest are foregrounds. The process k which generates the state at each frame time t = 1, 2, ... is modeled by a set of K parameters \omega_k = P(k), k = 1, 2, ..., K, each representing the a priori probability of surface k appearing in the pixel view, with \sum_{k=1}^{K}\omega_k = 1. The surface process k is hidden, so it is indirectly observed through the associated pixel value X.


The pixel value process X is modeled by a mixture of K Gaussian densities [3] with parameter sets \theta_k, one for each state k:

f_{X|k}(X|k,\theta_k) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_k|^{1/2}} \exp\!\left(-\frac{1}{2}(X-\mu_k)^{T}\Sigma_k^{-1}(X-\mu_k)\right)   (1)

where \mu_k is the mean and \Sigma_k is the covariance matrix of the kth density. An assumption is taken that the dimensions of X are independent, so that \Sigma_k is diagonal and is represented by the n-dimensional variance \sigma_k^2. Here \theta_k = \{\mu_k, \sigma_k\}, and the mean \mu_k, variance \sigma_k and weight \omega_k of the kth density are initialized.

B. Estimating the Current State

For estimating the current state, every new pixel value X_t is checked against the existing K Gaussian distributions until a match is found. A match is defined as a pixel value within 2.5 standard deviations of a distribution. The parameters \mu and \sigma of the distribution which matches the new observation are updated as

\mu_{k,t} = (1-\rho_{k,t})\,\mu_{k,t-1} + \rho_{k,t}\,X_t   (2)

\sigma_{k,t}^2 = (1-\rho_{k,t})\,\sigma_{k,t-1}^2 + \rho_{k,t}\,(X_t-\mu_{k,t})^{T}(X_t-\mu_{k,t})   (3)

where

\rho_{k,t} = \alpha_t / \omega_{k,t}   (4)

If none of the K distributions match the current pixel value, the least probable distribution is replaced with a distribution with the current value as its mean value, an initially high variance, and low prior weight. For unmatched distributions the parameters \mu and \sigma remain the same. The prior weights of the K distributions at time t, \omega_{k,t}, are adjusted as

\omega_{k,t} = (1-\alpha)\,\omega_{k,t-1} + \alpha\,(M_{k,t})   (5)

where \alpha is the learning rate and M_{k,t} is 1 for the model which matched and zero for the remaining models. After this the weights are renormalized. 1/\alpha defines the time constant which determines the speed at which the distribution's parameters change.

C. Segmenting Foreground

After the estimation of the current state k, a decision has to be taken whether it belongs to foreground or background. The surface is said to be background with higher probability if it occurs frequently (high \omega_k) and does not vary much (low \sigma_k). The K states are then ranked by the criterion \omega_k/\sigma_k, so the most likely background distributions remain on top and less probable background distributions gravitate towards the bottom and are eventually replaced by new distributions. The first B of the ranked states are deemed to be background, where

B = \arg\min_b \left( \sum_{k=1}^{b}\omega_k > T \right)   (6)

where T is a minimum portion of the data that should be accounted for by the background. The rest are foregrounds.

III. MEDIAN FILTERING

For reducing noise from the background subtracted image, a 5 x 5 neighborhood is considered to compute the median. Median filters are non-linear spatial filters [5], whose response is based on ordering the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ordering result.

IV. CONNECTED COMPONENT LABELING

It is used to detect connected components [5] between pixels, so that boundaries of objects can be specified and the objects can be counted. Generally there are two types of connectivity: 4-connectivity and 8-connectivity. The former considers the 4 direct neighbors of a pixel and the latter considers both the 4 direct and 4 diagonal neighbors of a pixel. Here 8-connectivity is employed for the labeling process, after the median filtering of the background subtracted image.

V. OBJECT CLASSIFICATION BASED ON SHAPE INFORMATION

After background subtraction, objects are classified based on shape parameters. To classify objects into human and vehicle, a classification metric operator ID(x) [4] is employed. The metric is based on the fact that humans have more complex shapes and smaller size compared to vehicles. A bivariate approach, with the target's total area on one axis and its dispersedness [4] on the other, is used. Dispersedness is based on the shape parameters of the object, and is given by

Dispersedness = Perimeter^2 / Area   (7)

So classification in frames is based on the dispersedness of objects. Humans, with their complex shape, will have larger dispersedness than vehicles. In any single frame, the instance of a particular motion region may not be representative of its true character. That means a partly occluded vehicle may appear as a human. For this, a multiple hypothesis approach is used. The first step is to record all N_n potential targets P_n(i) = R_n(i) from some initial frame. These regions are classified according to the classification metric operator ID(x) and the result is recorded as a classification hypothesis \chi(i) for each one:

\chi(i) = \{ID(P_n(i))\}   (8)

Then these potential targets are observed in subsequent frames to determine whether they persist or not, and to continue classifying them. So for new frames, each previous motion region P_{n-1}(i) is matched to the spatially closest current motion region R_n(j). After this process, any previous potential targets P_{n-1} which have not been matched to current regions are removed from the list, and any current motion regions R_n which have not been matched to previous ones are considered new potential targets. At each frame, their new classifications are used to update the classification hypothesis:

\chi(i) = \{\chi(i)\} \cup \{ID(P_n(i))\}   (9)
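To make the per-pixel update of Eqns. (2)-(6) above concrete, here is a minimal Python sketch for a single grayscale pixel; K, the learning rate and the background threshold are assumed example values, not prescriptions from the paper.

```python
import numpy as np

K, ALPHA, T_BG = 3, 0.01, 0.7          # assumed example values
mu = np.array([0.0, 128.0, 255.0])     # means of the K Gaussians
var = np.full(K, 900.0)                # variances (initially high)
w = np.full(K, 1.0 / K)                # prior weights

def update_pixel(x):
    """One mixture-of-Gaussians update for a scalar pixel value x.
    Returns True if x is classified as background."""
    global mu, var, w
    d = np.abs(x - mu)
    matched = d < 2.5 * np.sqrt(var)               # match = within 2.5 std devs
    if matched.any():
        k = np.argmin(np.where(matched, d, np.inf))
        rho = ALPHA / max(w[k], 1e-6)              # Eqn (4)
        mu[k] = (1 - rho) * mu[k] + rho * x        # Eqn (2)
        var[k] = (1 - rho) * var[k] + rho * (x - mu[k]) ** 2   # Eqn (3)
        w = (1 - ALPHA) * w
        w[k] += ALPHA                              # Eqn (5), M = 1 for the match
    else:
        k = np.argmin(w / np.sqrt(var))            # replace least probable state
        mu[k], var[k], w[k] = x, 900.0, 0.05
    w /= w.sum()                                   # renormalize weights
    order = np.argsort(-(w / np.sqrt(var)))        # rank states by w/sigma
    B = np.searchsorted(np.cumsum(w[order]), T_BG) + 1   # Eqn (6)
    return bool(matched.any()) and int(np.where(order == k)[0][0]) < B

print(update_pixel(130.0))   # likely background for this toy model
```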


Fig. 1. Background subtraction results. The first row shows the input video frames (PETS2001: http://www.cvg.cs.rdg.ac.uk/PETS2001/pets2001dataset.html). The second row shows the background subtracted frames after median filtering. The third row shows the results of the connected component labeling process.

Fig. 2. Background subtraction results. The first row shows the input video frames. The second row shows the background subtracted frames after median filtering. The third row shows the results of the connected component labeling process.

After classifying into the relevant category, objects are tracked by using a combination of blob matching and particle filtering. This method would be efficient for pedestrian tracking, by eliminating some of the disadvantages.

VI. EXPERIMENTAL RESULTS

The performance of the mixture of Gaussian (MoG) model has been evaluated based on the PETS2001 dataset [6] and a surveillance video collected from the railway station in Brisbane, Australia. As shown in Fig. 1, the PETS2001 dataset contains more complex scenes with cloud motion, shadows and small illumination changes. In this, the second row shows the perfect background subtracted image after median filtering. The third row shows the result after the connected component labeling process. The frame rate is 30 frames per second and the size of the frame is 768 x 576. Fig. 2 shows the results of background subtraction of the video taken from the railway station, with median filtering and connected component labeling. The result in the first column is not perfect due to large illumination changes. Here the size of the frame is 704 x 576 and the frame rate is 30 frames per second. The performance of the object classification has been evaluated based on the BEHAVE dataset [7]. Figs. 3-5 show the results for object classification. Fig. 3 shows the input video frame and Fig. 4 shows the corresponding background subtracted image. These objects are classified into human and vehicle using shape parameters, which is shown in Fig. 5. The frame size is 640 x 480 and the frame rate is 30 frames per second.
Fig. 3. Input video frame for classification [7].

Fig. 4. Background subtracted image of the above frame.

Fig. 5. Classification result: objects are classified into human and vehicle.


VII. CONCLUSION

The paper has presented a robust, probabilistic method for background subtraction and has shown how an implementation of the mixture of Gaussian model can be assembled from an understanding of the underlying theory. It listed all the model parameters that are necessary for practical use of the algorithm. The use of median filtering on the background subtracted image improved the results considerably. The boundaries of the moving objects have been specified by the connected component labeling process. Objects have been classified based on shape and boundary information. Results on surveillance videos showed that the proposed methods are reliable and robust, and will help produce better results for pedestrian tracking.

REFERENCES
[1] M. Piccardi, Background subtraction techniques: a review, Proc. IEEE SMC, vol. 4, pp. 3099-3104, 2004.
[2] C. R. Wren, A. Azarbayejani, T. Darell, and A. P. Pentland, Pfinder: real-time tracking of the human body, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, pp. 780-785, 1997.
[3] C. Stauffer and W. E. L. Grimson, Adaptive background mixture models for real-time tracking, Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 246-252, 1999.
[4] A. J. Lipton, H. Fujiyoshi, and R. S. Patil, Moving target classification and tracking from real-time video, Proc. IEEE Workshop Applications of Computer Vision, pp. 8-14, 1998.
[5] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison Wesley, 1992.
[6] PETS2001 dataset. Computational Vision Group, University of Reading. http://www.cvg.rdg.ac.uk/PETS2001/pets2001dataset.html.
[7] BEHAVE dataset. Engineering and Physical Science Research Council (EPSRC), UK. http://groups.inf.ed.ac.uk/vision/BEHAVEDATA.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Comparison of Optimal and Integral Controllers for AGC in a Restructured Power System interconnected by Parallel AC-DC Tie Lines
S. K. Sinha
Electrical Engineering Department, College of Engineering Roorkee Roorkee, India

R. N. Patel
Shri Shankaracharya College of Engineering & Technology, Bhilai Bhilai, India

R. Prasad
Electrical Engineering Department IIT Roorkee, Roorkee, India

Abstract- A comparison of two controllers, viz. optimal and conventional integral controllers, has been carried out for Automatic Generation Control (AGC) of a two-area restructured power system interconnected by parallel ac-dc tie lines. The performance of the two controllers is compared in terms of the time taken to damp out oscillations of frequency in each area, the tie line power exchange between the two areas connected by parallel ac and dc lines, and the power generated by each generator when there is a change in load, for two cases: when the tie line consists of parallel ac-dc lines and when only the ac link is present. The simulation results indicate that the results are the best in terms of overshoot and settling time when the tie line consists of a parallel ac-dc line and the controller used is the optimal controller.

Keywords- automatic generation control; frequency deviation; integral controller; optimal controller; tie line power deviation; deregulation.

I. INTRODUCTION

The use of an HVDC link in parallel with an ac tie line as the interconnection between two areas improves the dynamic performance of the system, and controlling the frequency and tie line power exchange becomes easier in the wake of a load perturbation in any area. In this paper an Automatic Generation Control scheme has been designed for a two-area restructured power system interconnected with parallel ac-dc lines. The two areas have a bilateral contract. Deregulation or restructuring has become a trend all over the world. Generation, Transmission and Distribution Companies, called GENCOs, TRANSCOs and DISCOs respectively, come in as market players and no single utility company monopolizes as in the case of a traditional power system, which is supervised by an Independent System Operator (ISO) [2]. In this system, there are various types of contracts between GENCOs and DISCOs, and bilateral is one of the types. Two types of controllers, viz. optimal and conventional integral controllers, have been used for controlling the frequency and tie-line power exchanges. An attempt has been made to compare the performance of the two types of controllers in terms of the time taken to damp out oscillations, overshoot, power exchange between the two areas and power generated by different GENCOs for two cases: a) when the tie line consists of parallel ac-dc lines and b) when only the ac line is present.

II. DISCO PARTICIPATION FACTOR AND SYSTEM MODEL

The DISCO participation matrix (DPM) is the matrix with the number of rows equal to the number of GENCOs and the number of columns equal to the number of DISCOs, describing the load contract between them. Each entry in this matrix is a fraction of the total load contracted by the DISCOs (column) toward the GENCOs (row). The sum of all entries in any column is unity. The following matrix shows the contract participation factors of various DISCOs with different GENCOs [3-4]:

DPM = \begin{bmatrix} cpf_{11} & cpf_{12} & cpf_{13} & cpf_{14} \\ cpf_{21} & cpf_{22} & cpf_{23} & cpf_{24} \\ cpf_{31} & cpf_{32} & cpf_{33} & cpf_{34} \\ cpf_{41} & cpf_{42} & cpf_{43} & cpf_{44} \end{bmatrix}

where each element (cpf) is the contract participation factor of a DISCO with a GENCO. The block diagram of the two-area system in the deregulated environment is shown in Fig. 1. The change in load demanded by a DISCO is reflected as a local load in the area to which it belongs and hence appears at the point of input to the power system block. The area control error (ACE) signal is distributed in proportion to the GENCOs' participation in AGC and hence is shown by the ACE participation factors (apf). GENCOs follow the load demanded by the DISCOs and hence signals carrying this information are reflected in the dynamics of the system. The demands are specified by the elements of the DPM and the pu MW load of a DISCO [2, 5-6].

III. HVDC LINK MODEL AND DESIGN OF OPTIMAL CONTROLLER

For the HVDC link, a simple first order model in the form of the transfer function

\Delta P_{dc} = \frac{K_{dc}}{1 + sT_{dc}}\,\Delta f

can be used [1, 8], where K_{dc} is the gain associated with the dc link and T_{dc} is the dc link time constant. In order to design an optimal controller for the two-area thermal-thermal system, modern control theory is utilized. The control inputs u1 and u2 are created by a linear combination of all the system states. The state variables are defined as the outputs of all blocks having either an integrator or a time constant. There are thirteen state variables for the considered system [2]. The incremental power flows through the dc links, \Delta P_{dc1} and \Delta P_{dc2}, have not been considered as state variables. The state model is formulated by writing the differential equations representing each individual block of the model in terms of the state variables. In the block diagram of the system (Fig. 1), the following are defined:

x_1 = \Delta f_1;\; x_2 = \Delta P_{g1};\; x_4 = \Delta f_2;\; x_5 = \Delta P_{g3};\; x_{10} = \Delta P_{g2};\; x_{12} = \Delta P_{g4};
w_1 = \Delta P_{L1} + \Delta P_{L2} + \Delta P_{UCL1} + \Delta P_{UCL2};\quad w_2 = \Delta P_{L3} + \Delta P_{L4} + \Delta P_{UCL3} + \Delta P_{UCL4}

\dot{x}_1 = -\frac{1}{T_{ps1}}x_1 + \frac{K_{ps1}}{T_{ps1}}x_2 - \frac{K_{ps1}}{T_{ps1}}x_7 + \frac{K_{ps1}}{T_{ps1}}x_{10} - \frac{K_{ps1}}{T_{ps1}}w_1   (1)

\dot{x}_2 = -\frac{1}{T_{t1}}x_2 + \frac{1}{T_{t1}}x_3   (2)

\dot{x}_3 = -\frac{1}{R_1 T_{sg1}}x_1 - \frac{1}{T_{sg1}}x_3 + \frac{1}{T_{sg1}}u_{11}   (3)

where u_{11} = apf_1 u_1 + cpf_{11}\Delta P_{L1} + cpf_{12}\Delta P_{L2} + cpf_{13}\Delta P_{L3} + cpf_{14}\Delta P_{L4}

\dot{x}_4 = -\frac{1}{T_{ps2}}x_4 + \frac{K_{ps2}}{T_{ps2}}x_5 + \frac{a_{12}K_{ps2}}{T_{ps2}}x_7 + \frac{K_{ps2}}{T_{ps2}}x_{12} - \frac{K_{ps2}}{T_{ps2}}w_2   (4)

\dot{x}_5 = -\frac{1}{T_{t3}}x_5 + \frac{1}{T_{t3}}x_6   (5)

\dot{x}_6 = -\frac{1}{R_3 T_{sg3}}x_4 - \frac{1}{T_{sg3}}x_6 + \frac{1}{T_{sg3}}u_{21}   (6)

where u_{21} = apf_3 u_2 + cpf_{31}\Delta P_{L1} + cpf_{32}\Delta P_{L2} + cpf_{33}\Delta P_{L3} + cpf_{34}\Delta P_{L4}

\dot{x}_7 = 2\pi T_{12} x_1 - 2\pi T_{12} x_4   (7)

\dot{x}_8 = b_1 x_1 + x_7   (8)

\dot{x}_9 = b_2 x_4 - a_{12} x_7   (9)

\dot{x}_{10} = -\frac{1}{T_{t2}}x_{10} + \frac{1}{T_{t2}}x_{11}   (10)

\dot{x}_{11} = -\frac{1}{R_2 T_{sg2}}x_1 - \frac{1}{T_{sg2}}x_{11} + \frac{1}{T_{sg2}}u_{12}   (11)

where u_{12} = apf_2 u_1 + cpf_{21}\Delta P_{L1} + cpf_{22}\Delta P_{L2} + cpf_{23}\Delta P_{L3} + cpf_{24}\Delta P_{L4}

\dot{x}_{12} = -\frac{1}{T_{t4}}x_{12} + \frac{1}{T_{t4}}x_{13}   (12)

\dot{x}_{13} = -\frac{1}{R_4 T_{sg4}}x_4 - \frac{1}{T_{sg4}}x_{13} + \frac{1}{T_{sg4}}u_{22}   (13)

where u_{22} = apf_4 u_2 + cpf_{41}\Delta P_{L1} + cpf_{42}\Delta P_{L2} + cpf_{43}\Delta P_{L3} + cpf_{44}\Delta P_{L4}

The variables used in the above equations have the following nomenclature: f = nominal frequency; \Delta f_1, \Delta f_2 = deviations in frequency of areas 1 and 2; \Delta P_{tie} = tie-line power deviation; b_1, b_2 = frequency bias constants of areas 1 and 2; R_1, R_2, R_3, R_4 = governor speed regulation parameters; T_{sg1}, T_{sg2}, T_{sg3}, T_{sg4} = governor time constants; T_{t1}, T_{t2}, T_{t3}, T_{t4} = turbine time constants; T_{ps1}, T_{ps2} = power system time constants; K_{ps1}, K_{ps2} = power system gains; K_1, K_2 = optimal controller gains of areas 1 and 2; K_{I1}, K_{I2} = integral controller gains of areas 1 and 2; \Delta P_{Lj} = load demanded by DISCO j; \Delta P_{UCLj} = uncontracted load demanded by DISCO j; \Delta P_{gi} = incremental generation change by GENCO i.

Fig. 1. Block diagram of the two-area restructured power system with optimal controller.


K_{dc1}, K_{dc2} = gains associated with the dc links; T_{dc1}, T_{dc2} = dc link time constants.

Equations (1) to (13) can be organized in the vector-matrix form as [2, 3, 7]:

\dot{x} = Ax + BU + \Gamma P + Fp   (14)

The matrices A, B, \Gamma and F are obtained as below:
The non-zero elements of the 13 x 13 system matrix A, the 13 x 2 input matrix B, and the disturbance matrices \Gamma (13 x 4) and F (13 x 2) follow directly from the coefficients of Eqns. (1)-(13), with

U = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix};\qquad P = \begin{bmatrix} \Delta P_{L1} \\ \Delta P_{L2} \\ \Delta P_{L3} \\ \Delta P_{L4} \end{bmatrix};\qquad p = \begin{bmatrix} \Delta P_{UCL1} \\ \Delta P_{UCL2} \\ \Delta P_{UCL3} \\ \Delta P_{UCL4} \end{bmatrix}

where U is the control vector, which gives the feedback to the governors of the GENCOs. Similarly, the vectors P and p can be seen as the disturbance vectors, which essentially indicate the change in contracted load and the uncontracted load demanded by the DISCOs respectively. In the optimal control scheme the control inputs u1 and u2 are generated by means of feedback from all the thirteen states, with feedback constants to be determined in accordance with an optimal criterion. The control vector U is constructed by a linear combination of all states. The feedback matrix K is to be determined so that a certain performance index J is minimized. A convenient form of the performance index J is as follows:

J = \frac{1}{2}\int_{0}^{\infty}\left( x'^{T} Q\, x' + u'^{T} R\, u' \right) dt   (15)
where x' and u' are the transient-state (deviation) terms. In the expression of J, Q and R can be obtained through the following design considerations: (i) excursions of the ACEs about the steady values are minimized; (ii) excursions of \int ACE\,dt about the steady values are minimized; (iii) excursions of the control vector about the steady values are minimized. For the two-area system model of Fig. 1, J can be written as:

J = \frac{1}{2}\int_{0}^{\infty}\left[ (b_1 x_1' + x_7')^2 + (b_2 x_4' - a_{12} x_7')^2 + (x_8'^2 + x_9'^2) + k\,(u_1'^2 + u_2'^2) \right] dt   (16)

From the above expression of J, Q and R can be obtained as follows: Q is a 13 x 13 symmetric matrix whose only non-zero elements are Q_{11} = b_1^2, Q_{44} = b_2^2, Q_{77} = 1 + a_{12}^2, Q_{88} = Q_{99} = 1, Q_{17} = Q_{71} = b_1 and Q_{47} = Q_{74} = -a_{12} b_2, and R = kI is a 2 x 2 symmetric matrix.

The determination of the feedback matrix K which minimizes the value of J is an important optimization problem. The value of K is obtained from the solution of the reduced matrix Riccati equation [3]. This value has been found using MATLAB. The values of the matrices A, B, Q and R can be obtained as discussed previously. Once the value of K is obtained, the state-feedback control law u = Kx is supplied to the model (as shown in Fig. 1), which minimizes the cost function J.
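For readers who want to reproduce this step numerically, a hedged Python sketch using SciPy's continuous-time algebraic Riccati solver is given below; the small matrices are placeholders standing in for the 13-state A, B and the Q, R defined in Eqns. (14)-(16), and the paper itself uses MATLAB for this computation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder 2-state, 1-input system standing in for the 13-state AGC model
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])    # state weighting
R = np.array([[0.05]])     # control weighting (R = k*I in the paper)

# Solve the algebraic Riccati equation A'S + SA - S B R^-1 B' S + Q = 0
S = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain K = R^-1 B' S
K = np.linalg.solve(R, B.T @ S)
print("Feedback gain K =", K)
```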


IV. THE INTEGRAL CONTROLLER


In order to compare the dynamic performance of the proposed optimal controller, a conventional integral controller is also designed for the two-area restructured power system. In the model of Fig. 1, the optimal controllers are now replaced with integral controllers in both areas. The optimization of the gain of the integral controller has been done using the ISE technique. A 1% step perturbation is considered in area 1. For different values of gain, the new cost function J, given as

J = \int \left( \Delta f_1^{\,2} + \Delta f_2^{\,2} + \Delta p_{tie}^{\,2} \right) dt

is calculated. The value of the integral controller gain K_{I1} for which J has the minimum value is the optimum value of the integral controller gain for area 1. The controller gain of area 2 is found in a similar manner; it is the same as that of area 1 because the two areas are similar.
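A minimal sketch of this ISE-based gain sweep is shown below; simulate_system is a hypothetical stand-in for the two-area model (the paper uses a Simulink/state-space simulation), so only the gain-selection logic is illustrated.

```python
import numpy as np

def simulate_system(Ki, t_end=20.0, dt=0.01):
    """Hypothetical placeholder: should return (t, df1, df2, dptie) for a
    1% step load perturbation in area 1 with integral gain Ki."""
    t = np.arange(0.0, t_end, dt)
    # Dummy decaying oscillation so the sketch runs stand-alone.
    df1 = 0.01 * np.exp(-Ki * t) * np.cos(2 * np.pi * 0.5 * t)
    df2 = 0.5 * df1
    dptie = 0.2 * df1
    return t, df1, df2, dptie

best_Ki, best_J = None, np.inf
for Ki in np.linspace(0.05, 1.0, 20):            # candidate integral gains
    t, df1, df2, dptie = simulate_system(Ki)
    J = np.trapz(df1**2 + df2**2 + dptie**2, t)  # ISE cost from the equation above
    if J < best_J:
        best_Ki, best_J = Ki, J

print(f"Optimum K_I1 ~= {best_Ki:.3f} (J = {best_J:.3e})")
```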

V. RESULTS AND DISCUSSIONS


For the two-area thermal-thermal system example in hand, the following data are taken [3, 10]: f = 60 Hz; T_{sg1} = T_{sg2} = T_{sg3} = T_{sg4} = 0.08 s; T_{t1} = T_{t2} = T_{t3} = T_{t4} = 0.3 s; H = 5 s; D = 8.33 x 10^{-3} pu MW/Hz; R = 2.4 Hz/pu MW; P_{r1} = P_{r2} = 2000 MW; P_{tie,max} = 200 MW; K_{dc1} = K_{dc2} = 1.0; T_{dc1} = T_{dc2} = 0.2 s; b_1 = b_2 = D + (1/R) = 0.425; K_{ps} = 120; T_{ps} = 20 s.

The simulation study of AGC of the two-area interconnected system in the deregulated environment has been done with the following considerations. DISCOs contract with the GENCOs according to the following DPM:

DPM = \begin{bmatrix} 0.5 & 0.25 & 0.0 & 0.3 \\ 0.2 & 0.25 & 0.0 & 0.0 \\ 0.0 & 0.25 & 1.0 & 0.7 \\ 0.3 & 0.25 & 0.0 & 0.0 \end{bmatrix}

Each DISCO demands 0.02 pu MW power from the GENCOs according to the DPM matrix. GENCOs participate in AGC according to the following ACE participation factors: apf1 = 0.75, apf2 = 0.25, apf3 = 0.50 and apf4 = 0.50. Uncontracted loads are assumed to be zero. The computer solution for the feedback matrix K was obtained with the help of MATLAB as:

K = \begin{bmatrix} 0.2888 & 0.5138 & 0.1286 & -0.1181 & -0.1339 & -0.0085 & -0.0492 & -0.0141 & 0.3121 & 0.5467 & 0.1139 & 0.9994 & 0.0360 \\ 0.5138 & 0.1504 & -0.0360 & 0.9994 & -0.0492 & -0.0282 & 0.1406 & 0.1286 & 0 & 0 & -0.0141 & 0 & 0 \end{bmatrix}

The optimum values of gains for the integral controllers were found by the ISE technique for both areas. The two-area restructured power system interconnected with parallel ac and dc tie lines was simulated with the two different controllers, namely the conventional integral controller and the optimal controller, for both cases: when ac-dc parallel lines are connected and when the dc link is not connected. Comparisons of the performance of the integral and optimal controllers in both cases are shown in the plots of Figs. 2 to 8 for a step load perturbation. Figs. 2 and 3 show the frequency responses in areas 1 and 2 respectively. In the case of the integral controller, the settling time is more as compared to that of the optimal controller, and it improves when the dc link is connected. The peak overshoot is also significantly less in the latter case. Fig. 4 shows the tie line deviation on a step load perturbation. It shows the response of the two controllers used for AGC in the deregulated environment. The tie line deviation and its settling time are both less when the optimal controller is used, and they reduce further when the dc link is connected, but the performance with the optimal controller is still better. Figs. 5 to 8 compare the power generated by the different GENCOs when the optimal and integral controllers are used. The results obtained with the optimal controller and the dc link connected are the best, as is also evident from the plots. Comparison of the magnitude of the frequency deviation, the time taken to damp out oscillations, the tie-line power deviation and the power generated by the different GENCOs clearly reveals that the optimal controller provides better results than the conventional integral controller, and these results improve further for both controllers when the dc link is connected as a parallel tie line. The overshoots as well as the settling times of the system responses are significantly less in the case of the optimal controller as compared to those of the integral controller.

Fig. 2. Change in frequency of Area 1.

Fig. 3. Change in frequency of Area 2.
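As a small illustration of how the DPM translates DISCO demands into scheduled GENCO generation (\Delta P_{gi} = \sum_j cpf_{ij}\,\Delta P_{Lj}), the following Python sketch uses the DPM as reconstructed above together with the 0.02 pu MW demands quoted in the text; it is only a bookkeeping check, not part of the controller design.

```python
import numpy as np

# DPM as reconstructed above (rows = GENCOs, columns = DISCOs)
DPM = np.array([[0.5, 0.25, 0.0, 0.3],
                [0.2, 0.25, 0.0, 0.0],
                [0.0, 0.25, 1.0, 0.7],
                [0.3, 0.25, 0.0, 0.0]])

dPL = np.full(4, 0.02)                     # each DISCO demands 0.02 pu MW
apf = np.array([0.75, 0.25, 0.50, 0.50])   # ACE participation factors

# Scheduled (contracted) steady-state generation of each GENCO
dPg = DPM @ dPL
print("Column sums of DPM:", DPM.sum(axis=0))              # should all be 1.0
print("Scheduled GENCO generation (pu MW):", dPg)
print("Generation matches total demand:", np.isclose(dPg.sum(), dPL.sum()))
```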

Fig. 4. Change in tie line power.


Fig. 5. Power generated by GENCO 1.

VI. CONCLUSIONS

In the present work a two-area thermal-thermal system interconnected by a parallel ac-dc tie line has been studied with a view to designing an optimal controller in a restructured environment and comparing its performance with that of an integral controller. The dynamic performance of the system has been compared for two cases: a) when the tie line consists of a parallel ac-dc link and b) when the dc link is absent and only the ac line is present. The computer simulation shows better control performance in terms of overshoot and settling time with the optimal controller as compared to the conventional integral controller in both cases. The best results are obtained with the optimal controller when the tie line consists of an ac-dc parallel tie line.


0.025

REFERENCES
Ibrahim, P.Kumar, Study of dynamic performance of power systems with asynchronous tie-lines considering parameter uncertainties Journal of Institution of Engineers(I), vol.85,pp. 35-42, June 2004. [2] V. Donde, M.A. Pai and I.A. Hiskens, Simulation and optimization in an AGC system after deregulation, IEEE Trans. on Power Systems, vol. 16, pp.481-489, 2001. [3] D. P. Kothari, and I. J Nagrath, Power System Engineering, 2nd ed. McGraw Hill, 2007. [4] D. Das, Electrical Power Systems, New Age International Publishers, 2008. [5] J. Kumar, Ng. Kah-Hoe and G. Sheble, AGC simulator for price based operation part I: a model, IEEE Trans. on Power Systems, vol. 12, no. 2, pp.527-532, 1997. [6] J. Kumar, Ng. Kah-Hoe and G. Sheble, AGC simulator for price based operation part II : case study results, IEEE Trans. on Power Systems, vol. 12, no. 2, pp. 533538, 1997. [7] C.Srinivasa Rao, S.Siva Nagaraju and P.Sangameswara Raju Improvement of Dynamic Performance of AGC under Open Market Scenario Employing TCPS and A.CD.C Parallel Tie Line International Journal of Recent Trends in Engineering, Vol 1, No. 3, May 2009. [8] H.D.Mathur, H.V.Manjunath, Study of dynamic performance of thermal units with asynchronous tie-lines using fuzzy based controller Journal of Electrical Systems, vol.3-3, pp.124-130, 2007. [9] S. K. Sinha, R. Prasad and R. N. Patel, Design of optimal and integral controllers for AGC of two area interconnected power system, Proc. of the National Systems Conference, NSC 2008, IIT Roorkee, pp. 236240, December 17-19, 2008. [10] J. Nanda, and A. Mangla, Automatic generation control of an interconnected hydro-thermal system using conventional integral and fuzzy logic controller, IEEE International Conference on Electric Utility Deregulation and Power Technologies, pp. 372-377, 2004. [11] S. K. Sinha, R. N. Patel and R. Prasad, Automatic generation control of restructured power systems with combined intelligent techniques, to appear in International Journal of Bio-Inspired Computation (IJBIC), Vol.2, No.4, 2010. [1]








Concept of Virtual flows for Evaluation of Source Contributions to Loads and Line flows
D. Thukaram, SM, IEEE, Surendra S
Department of Electrical Engineering, Indian Institute of Science Bangalore - 560 012, India dtram@ee.iisc.ernet.in ; surendra18@ee.iisc.ernet.in
Abstract: In the context of deregulation, unbundling of utility services and open access to transmission facilities are becoming the order of the day. In order to bring transparency and fairness from the point of view of the participants involved, it is necessary to obtain the contribution of each generator (source) to the loads, to the flows in the network and to the losses of the system. This paper evaluates the real and reactive power flows in the network due to individual sources, and their contributions to the loads, using the principle of superposition, which is fairly simple and straightforward. As a result, in a multilateral transaction environment, it is possible to fairly allocate charges for real and reactive power usage. In addition, the extent of network loading caused by each individual generator is computed, which indicates how to manage network congestion in an effective manner. A sample system illustrating the proposed approach is presented.
Keywords: Congestion, current injection, open access transmission, source contribution, superposition.

I. INTRODUCTION
Deregulation is the principle adopted by most power industries around the world; it aims at unbundling services in order to bring revolutionary changes in the way they are operated. Contracted transactions need data on the actual usage/path of the power flowing from each source to each sink across the interconnected system. This is essential, since it determines the lines involved in providing transmission service for that transaction. In order to allocate charges based on usage, it is imperative to know the extent of the transmission facility used by a particular generator and in what proportion it is shared among the different loads present in that entity.
Determination of contributions is not a new concept. The work in [1], [2] used a topological approach and matrix inversion to determine individual source contributions. References [3], [4] suggest the domains of generators, commons, links and state graphs for this purpose. A graph-theory-based power flow tracing method, justified by two lemmas, is proposed in [5]. The proportional sharing principle is applied in one way or another in the techniques suggested by the above references, although the concept of proportional sharing is neither proved nor disproved. The use of network Z-bus information for allocating losses to buses is given in [6], but it provides no information regarding the extent of line usage by sources. Evaluation of line utility factors, determined by obtaining the dominion of each source and sink, is presented in [7]. It is claimed that line utility factors calculated from power flow studies remain constant irrespective of changes in generation and load for a given network, as long as the network topology remains unchanged. An optimization approach considering the multiplicity of the solution space for the real power tracing problem is explored in [8].
In the day-to-day operation of large integrated grids it is often observed that the flows in certain lines not only change significantly but also change direction. This is because some lines carry counter flows from different sources. Thus there is a need to develop more accurate approaches for evaluating network usage and the contributions from sources. This work aims at evaluating the contribution of each generator to every load and line flow using the principle of superposition. Each transaction is evaluated with one source at a time but with all loads connected. All the individual transactions are then superposed to obtain the resultant state of the network. The results can be used as a guideline for adjustment/re-dispatch of generators in order to relieve line congestion or to minimize losses, as the purpose may be. The technique works equally well with both real and reactive power output from generators.
II. PROPOSED APPROACH USING SUPERPOSITION
The principle of superposition is a basic concept from circuit theory, applicable to linear, bilateral networks consisting of sources and load impedances. The approach is simple and straightforward but requires the solution of the network equation using the sparse Y-bus, repeated as many times as there are sources. The preliminary data required for this computation come from a power flow analysis or from the output of an online state estimator, so the network is observable. The bus voltage phasors (p.u.), the injected real and reactive power (MW, MVAR) at all generator buses, the connected loads with shunt compensation, the shunt reactors and the network parameters are obtained. The principle can be summarized as follows:
step 1. Perform load flow / state estimation of the network.
step 2. Read the bus voltage phasors, the real and reactive power injections at generator buses, the loads and the network parameters.
step 3. Convert all loads to equivalent admittances and modify the network Y-bus matrix to include the loads as admittances.


step 4. For i = 1 to g (number of sources): inject the equivalent current of one source at a time into its bus and obtain the corresponding bus voltage profile; determine all the resulting branch currents for the voltage profile obtained from this source.
step 5. For i = 1 to g, determine the complex power contributions from each source to the network flows and the loads.
step 6. Obtain normalized coefficients of the individual (virtual) source contributions to the line flows.
step 7. Use these coefficients in the context of deregulated power systems for various transactional applications. Repeat all steps at suitable intervals or when there is a significant change in the load/generation pattern or a contingency.
By the principle stated earlier, the bus voltage contribution of the ith source injecting its equivalent current is obtained by solving the modified network equation

V^(i) = [Y_bus,mod]^(-1) I^(i),   i = 1, ..., g      (1)

The vector sum of the complex bus voltages developed by the individual sources for their equivalent current injections matches the original bus voltage profile of the network. Similarly, the line currents in the branches due to the individual source actions sum up to the line current profile obtained from the initial condition; equation (2) states these claims:

V_k = Σ_{i=1..g} V_k^(i)   and   I_jk = Σ_{i=1..g} I_jk^(i)      (2)

The contribution of the ith source to the network flows and loads is calculated by combining the bus voltages determined from the initial load flow with the branch current vector of that source as determined in step 4:

S_jk,i = P_jk,i + jQ_jk,i = V_j (I_jk,i)*      (3)

The losses in the lines due to a given source are calculated from the respective forward and reverse power flows. The total loss contribution of the ith source to the network is obtained by summing the losses in all the lines due to its individual action on the network,

S_loss,i = Σ_{k=1..nl} S_loss,k,i      (4)

where S_loss,k,i is the loss in the kth line due to the ith source. A negative loss in a line indicates a counter-flow contribution from that source, which reduces the net loss accordingly. The contribution to a particular load from a source is given by the summation of the power flows in all the lines incident on the bus to which the load is connected. The contribution from the ith generator to the jth load is thus

S_Lj,i = Σ_{k in K_j} S_k,i      (5)

where K_j is the set of lines incident on node j. For a given generation and load pattern, the individual source contributions to the lines are obtained using the proposed approach. These contributions can be used to evaluate a normalized coefficient T_k,i of the source contributions to the lines, called the T-coefficient, evaluated as

T_k,i = (MVA flow in line k due to the source at node i) / (MVA injected at source node i)      (6)

Figures 1, 2 and 3 depict the steps involved in determining the contributions from each source to the network flows and loads as a function of the initial power flow bus voltage profile and line currents.

Figure 1. Network showing the initial condition: source power injections due to multilateral transactions along with loads, bus voltages, net flows and branch current vectors.
Figure 2. Injection of the equivalent current of one source into the system and the resulting bus voltage and branch current vectors.
Figure 3. Power injection into the system due to a single source and the resulting branch flow vector and contributions to loads.

III. ILLUSTRATIVE SAMPLE SYSTEM

A. Normal case without contingency
A sample 6-node test system is considered to illustrate the proposed concept. The single line diagram of the test system is shown in Figure 4. The bus generations/loads and voltage profiles, and the flows in the lines and the losses, obtained from the power flow/state estimator as the initial condition, are shown in Table I and Table II respectively (bus 1 is the reference). When the source at a given bus alone injects its equivalent current into the network, the resulting bus voltage profiles are as shown in Table III. The sum of the network bus voltages contributed by the individual action of each source matches exactly the original bus voltage profile obtained from the initial condition, as can be observed in the last column of Table III. The contributions to the line flows from each source, determined and represented in complex form (MW and MVAR), are shown in Table IV (forward flow only).
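As a rough illustration of steps 3-6 (a minimal sketch, not the authors' implementation), the following assumes a small network described by a line list, per-source equivalent current injections and load admittances; helper names such as build_ybus are illustrative only. Summing the per-source voltages and flows over all results should reproduce the initial power flow solution, which is the consistency check behind equation (2).

```python
import numpy as np

def build_ybus(n_bus, lines, load_admittance):
    """Y-bus with loads folded in as shunt admittances (step 3)."""
    Y = np.zeros((n_bus, n_bus), dtype=complex)
    for i, j, z in lines:                      # series impedance z between buses i, j
        y = 1.0 / z
        Y[i, i] += y; Y[j, j] += y
        Y[i, j] -= y; Y[j, i] -= y
    for b, yld in load_admittance.items():     # equivalent load admittances
        Y[b, b] += yld
    return Y

def source_contributions(Y, injections, lines, V0):
    """Per-source bus voltages, branch flow contributions and T-coefficients (steps 4-6)."""
    results = []
    for bus, I in injections.items():          # one source at a time
        Ivec = np.zeros(Y.shape[0], dtype=complex)
        Ivec[bus] = I
        V = np.linalg.solve(Y, Ivec)           # eq. (1): V^(i) = Y^-1 I^(i)
        flows, tcoef = {}, {}
        s_inj = V0[bus] * np.conj(I)           # MVA injected by this source
        for i, j, z in lines:
            i_br = (V[i] - V[j]) / z           # branch current due to this source
            s_ij = V0[i] * np.conj(i_br)       # eq. (3): flow uses the initial bus voltage
            flows[(i, j)] = s_ij
            tcoef[(i, j)] = abs(s_ij) / abs(s_inj)   # eq. (6): T-coefficient
        results.append({'bus': bus, 'V': V, 'flows': flows, 'T': tcoef})
    return results
```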


Figure 4. Single line diagram of the sample 6-node test system.

TABLE I. GENERATION/LOAD AND BUS VOLTAGE PROFILES
Bus | Generation/Load (MW, MVAR) | Voltage (p.u.)
1 | 891.0+j111.3 | 1.025+j0.0
2 | 350.0-j17.9 | 0.999-j0.045
3 | 450.0-j112.0 | 0.998-j0.06
4 | -550.0-j231.0 | 0.973-j0.128
5 | -528.0-j210.0 | 0.967-j0.137
6 | -594.0-j252.0 | 0.947-j0.15
(Loads are represented as injections with negative complex power at the respective bus.)

TABLE II. GROSS FLOW IN THE SYSTEM WITH ALL SOURCES IN ACTION
Line | Forward Flow | Reverse Flow | Loss
1-6 | 334.76+j69.07 | -329.41-j135.36 | 5.34-j66.29
1-3 | 105.62-j46.96 | -104.93-j106.75 | 0.70-j153.71
1-4 | 450.58+j89.19 | -444.69-j112.15 | 5.89-j22.97
6-3 | -264.60-j116.64 | 266.97+j57.42 | 2.37-j59.22
3-4 | 132.54-j42.15 | -131.76-j85.50 | 0.79-j127.65
3-5 | 216.66+j6.40 | -215.18-j80.82 | 1.48-j74.43
3-2 | -61.16-j26.93 | 61.23-j36.74 | 0.07-j63.68
5-4 | -26.39-j56.48 | 26.41-j33.32 | 0.03-j89.81
2-5 | 288.74+j18.80 | -286.44-j72.69 | 2.30-j53.89
Total losses: 18.97-j711.64

TABLE III. COMPLEX CURRENT INJECTIONS (P.U.) FROM EACH SOURCE AND RESULTING NETWORK BUS VOLTAGE (P.U.) CONTRIBUTIONS
Equivalent current injections (p.u.), one source at a time: Bus-1: 8.70-j1.08, Bus-2: 3.50+j0.02, Bus-3: 4.55-j0.85.
Bus | Contribution from source 1 | from source 2 | from source 3 | Sum (individual bus voltages)
1 | 0.539+j0.003 | 0.208-j0.026 | 0.277+j0.023 | 1.024+j0.0
2 | 0.508-j0.134 | 0.215+j0.044 | 0.2766+j0.045 | 0.999-j0.045
3 | 0.512-j0.118 | 0.211-j0.004 | 0.274+j0.061 | 0.997-j0.061
4 | 0.499-j0.112 | 0.203-j0.029 | 0.271+j0.012 | 0.973-j0.129
5 | 0.490-j0.151 | 0.206-j0.008 | 0.271+j0.022 | 0.967-j0.137
6 | 0.486-j0.123 | 0.196-j0.037 | 0.265+j0.010 | 0.947-j0.15

TABLE IV. VIRTUAL CONTRIBUTIONS (MW & MVAR) FROM ALL THE SOURCES TO LINE FLOWS FOR THE NETWORK UNDER THE NORMAL CASE (FORWARD FLOW ONLY)
Line | Source 1 | Source 2 | Source 3 | Total Flow
1-6 | 279.19+j54.02 | 27.09+j9.78 | 28.48+j5.27 | 334.76+j69.07
1-3 | 208.84-j18.15 | -36.91-j17.82 | -66.28-j10.99 | 105.63-j46.97
1-4 | 402.97+j75.43 | 9.82+j8.04 | 37.80+j5.72 | 450.59+j89.19
6-3 | -28.52-j87.13 | -95.64-j26.14 | -140.42-j3.38 | -264.58-j116.65
3-4 | -0.92-j10.43 | 47.28-j6.24 | 86.16-j25.47 | 132.54-j42.14
3-5 | 100.79+j24.19 | 12.34+j1.39 | 103.51-j19.17 | 216.64+j6.41
3-2 | 69.33-j7.29 | -194.32+j4.76 | 63.81-j24.42 | -61.18-j26.95
5-4 | -104.12-j26.03 | 55.58-j13.10 | 22.13-j17.34 | -26.40-j56.47
2-5 | 62.80+j27.05 | 157.56-j2.39 | 68.39-j5.83 | 288.74+j18.82

Table V gives the losses in the lines of the network and the total loss contribution of each source when acting independently on the network. The contribution from each source to all the loads connected in the system is calculated using (5) and is shown in Table VI. The sum of the individual line flows caused by the sources, as indicated in Table IV, adds up to the net flow determined from the initial condition and indicated in Table II. The T-coefficient for the line between buses 1 and 6 with the generation at source bus 1, using (6), is T_{1-6,1} = [279.19+j54.02] / [891.0+j111.3] = 0.3168. These coefficients form a matrix of order l x g and find application in many transactional aspects of the operation of deregulated power systems. In some lines the contribution from a source can be negative, which is indicated by a negative real power loss/flow in the opposite direction. This is a counter flow, and it can be concluded that the given source contributes in such a way as to reduce the line loading/loss. In all the tables, values marked with a # sign indicate a counter flow in the given line.

TABLE V. VIRTUAL CONTRIBUTIONS FROM ALL THE SOURCES TO LINE LOSSES FOR THE NETWORK UNDER THE NORMAL CASE
Line | Source 1 | Source 2 | Source 3 | Total line loss
1-6 | 10.57-j18.08 | 0.59-j20.35 | -5.81-j27.87# | 5.35-j66.28
1-3 | 10.52-j71.16 | 0.47-j35.61 | -10.29-j46.95# | 0.7-j153.72
1-4 | 9.65+j10.40 | -0.48-j15.78# | -3.28-j17.59# | 5.89-j22.97
6-3 | 0.58-j40.60 | 1.69-j8.27 | 0.11-j10.36 | 2.38-j59.23
3-4 | 7.26-j71.27 | -0.11-j25.36# | -6.36-j31.03# | 0.79-j127.66
3-5 | 6.57-j39.24 | -1.97-j18.27 | -3.12-j16.91# | 1.48-j74.72
3-2 | 6.53-j34.35 | -1.88-j10.74# | -4.58-j18.59# | 0.07-j63.68
5-4 | 6.34-j45.67 | -1.22-j19.42# | -5.09-j24.72# | 0.03-j89.81
2-5 | 5.06-j35.78 | 0.66-j2.35 | -3.43-j15.75# | 2.29-j53.88
Total | 63.10-j345.74 | -2.25-j156.14# | -41.86-j209.77# | 18.99-j711.65

Since the complex bus voltage contributions, the complex currents in all the lines and the complex line flows resulting from the individual action of the sources sum up to the values obtained from the initial power flow analysis, these quantities can be regarded as purely virtual. If there are no counter flows in a given network, the flows are actual contributions from the sources rather than virtual contributions. From the observations in Table VI it can be inferred that, virtually, each source contributes to each and every load of the system in some proportion, which may or may not be significant. The contribution to the loads from a given source depends on the network topology, the line parameters, the generation-load pattern and the bus voltage profile.

TABLE VI. VIRTUAL POWER CONTRIBUTIONS (MW & MVAR) FROM ALL THE SOURCES TO THE LOADS UNDER THE NORMAL CASE
 | Load-Bus 6 | Load-Bus 4 | Load-Bus 5 | Total
G1 | -297.14-j159.23 | -274.68-j145.51 | -256.07-j152.29 | -827.89-j457.03
G2 | -174.72-j36.52 | -160.83-j36.25 | -156.31-j25.00 | -491.86-j97.77
G3 | -122.14-j56.26 | -114.49-j49.25 | -115.62-j32.73 | -352.25-j693.06
Total | -594.0-j252.02 | -550-j231.02 | -528.0-j210.02 | ------


B. Contingency Analysis
A single line outage between buses 1 and 4 is considered as a contingency case; the real power schedules of the sources at buses 2 and 3 and the loads remain unchanged. The load flow for this situation is carried out as usual and, using superposition, the line flow contributions due to each source are evaluated and listed in Table VII. The network flow conditions change, with a modified network bus voltage profile for the contingency case. The normal loading of every line is 500 MVA. Due to the line outage contingency the line flows are redistributed, and the line between buses 1 and 6 is loaded to 107.14% of its normal rating. From the analysis developed in the previous section, the virtual flow contributions from each source are obtained. In this case the proportion contributed by source 1 to line 1-6 is larger than that of the other two sources. These contributions can be used as a guiding factor to alleviate the overloading, and a suitable optimization problem can be formulated for various other applications. The virtual contributions to the loads from each source also change.

C. Overload alleviation for the line outage case (line 1-4)
The line between buses 1 and 6 is overloaded to some extent. The extent of usage of this line by each source is obtained from superposition and is presented for analysis. The contributions to the flow in this overloaded line, in MW and MVAR, are 469.45+j48.63 from the source at bus 1, 24.84+j22.80 from the source at bus 2 and 30.86+j34.58 from the source at bus 3. If the generation at bus 1 is reduced, there is a corresponding reduction in the line flow. In this line the share of the source at bus 2 or bus 3 is relatively small, and hence an increase in generation can be considered from either of these two sources, from both, or through a more detailed merit-order approach, to meet the load demand and relieve the overloading. The real power injected by the source at bus 2 is increased from 350 MW to 650 MW and the real power injection from the source at bus 1 is correspondingly adjusted. With this real power rescheduling the line loading is brought within limits. The modified line flows are shown in Table VIII; the flow in line 1-6 is reduced to 407.36 MVA (81.48%) from 535.7 MVA (107.14%). The extent of generation rescheduling should be decided based on participation factors and other related issues. Care should be taken that no additional lines become overloaded due to this rescheduling; this requires framing and solving an optimization problem considering the constraints imposed. The type of load is taken care of during the power flow computation/state estimation and the subsequent linearization by equivalent impedances. The technique is simple enough to be adopted for real-life power systems. Since transactions are evaluated at regular time intervals agreed upon by the parties involved, and with the computational facilities available today, the method is justified for its purpose. The technique has also been tested on the 72-bus Indian southern grid system. Though the flows computed by the proposed approach are virtual, the line flows and counter flows give information on the extent of line usage by each source. This is valuable for re-dispatch of generation and overload alleviation based on economics, environmental issues or any other criterion. Potential applications of this technique include transmission pricing, bilateral or multilateral transaction evaluation, and loss allocation.
TABLE VII. VIRTUAL POWER CONTRIBUTIONS (MW & MVAR) FROM ALL THE SOURCES TO LINE FLOWS UNDER THE LINE 1-4 OUTAGE CONTINGENCY
Line | Source 1 | Source 2 | Source 3 | Total Flow
1-6 | 469.45+j48.63 | 24.84+j22.80 | 30.86+j34.58 | 525.15+j106.01
1-3 | 440.85+j22.77 | -24.84-j22.80 | -30.86-j34.58 | 385.15-j34.61
6-3 | 106.21-j42.82 | -81.82-j44.79 | -105.77-j72.44 | -81.38-j160.05
3-4 | 214.17+j15.13 | 52.56+j19.28 | 101.83+j54.23 | 368.56+j88.64
3-5 | 226.84+j13.23 | 15.05+j9.22 | 108.46+j56.07 | 350.36+j78.52
3-2 | 143.14-j14.39 | -190.78-j44.12 | 73.41+j24.22 | 25.76-j34.29
5-4 | 86.18+j1.97 | 61.66+j12.05 | 41.88+j19.80 | 189.72+j33.82
2-5 | 148.24+j18.13 | 158.70+j39.24 | 68.81+j41.29 | 375.75+j98.66

TABLE VIII. VIRTUAL POWER CONTRIBUTIONS (MW & MVAR) FROM ALL THE SOURCES TO LINE FLOWS UNDER THE LINE 1-4 OUTAGE CONTINGENCY WITH REAL POWER RESCHEDULING
Line | Source 1 | Source 2 | Source 3 | Total Flow
1-6 | 309.68+j23.04 | 54.28+j28.46 | 35.91+j26.31 | 399.87+j77.81
1-3 | 291.22+j6.86 | -54.3-j28.46 | -35.91-j26.31 | 201.02-j47.91
6-3 | 72.55-j25.74 | -162.7-j50.98 | -111.36-j54.20 | -201.53-j130.92
3-4 | 140.40+j21.50 | 100.8+j21.02 | 103.53+j42.75 | 344.78+j85.27
3-5 | 148.96+j20.99 | 30.3+j13.14 | 110.19+j43.97 | 289.47+j78.10
3-2 | 95.22-j1.74 | -357.1-j27.46 | 73.27+j16.49 | -188.60-j12.69
5-4 | 56.41+j7.24 | 114.7+j7.84 | 41.86+j16.10 | 213.01+j31.18
2-5 | 95.64+j25.11 | 296.5+j43.97 | 68.56+j37.14 | 460.68+j106.22
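A minimal sketch of how the T-coefficients could guide the rescheduling discussed in subsection C (illustrative only; the coefficient values, injections and the 500 MVA limit are treated as given inputs, and the MVA loading is approximated by adding per-source magnitudes):

```python
def estimate_line_mva(t_coeff, injections_mva):
    """Approximate MVA loading of one line from per-source T-coefficients.

    t_coeff[i] is T_{k,i} for this line and injections_mva[i] the MVA injected
    by source i (eq. (6) rearranged: per-source flow ~ T_{k,i} * injection)."""
    return sum(t_coeff[i] * injections_mva[i] for i in t_coeff)

def shift_generation(injections_mva, from_bus, to_bus, delta_mva):
    """Move delta_mva of injection between two sources (a simple redispatch step)."""
    new = dict(injections_mva)
    new[from_bus] -= delta_mva
    new[to_bus] += delta_mva
    return new

# Illustrative usage: if line 1-6 exceeds its 500 MVA rating, shift output from
# the heavily contributing source at bus 1 to the lightly contributing source
# at bus 2 and re-check the estimated loading of every monitored line.
```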

REFERENCES
[1] J. Bialek, "Tracing the flow of electricity," IEE Proceedings - Generation, Transmission and Distribution, vol. 143, no. 4, pp. 313-320, July 1996.
[2] J. Bialek, "Topological generation and load distribution factors for supplementary charge allocation in transmission open access," IEEE Trans. on PWRS, vol. 12, no. 3, pp. 1185-1193, Aug. 1997.
[3] Daniel Kirschen, Ron Allan and Goran Strbac, "Contribution of individual generators to loads and flows," IEEE Trans. on PWRS, vol. 12, no. 1, pp. 52-60, Feb. 1997.
[4] Daniel Kirschen and Goran Strbac, "Tracing active and reactive power between generators and loads using real and imaginary currents," IEEE Trans. on PWRS, vol. 14, no. 4, pp. 1312-1318, Nov. 1999.
[5] Felix F. Wu, Yixin Ni and Ping Wei, "Power transfer allocation for open access using graph theory - fundamentals and applications in systems without loop flow," IEEE Trans. on PWRS, vol. 15, no. 3, Aug. 2000.
[6] Antonio J. Conejo, Francisco D. Galiana and Ivana Kockar, "Z-bus loss allocation," IEEE Trans. on PWRS, vol. 16, no. 1, pp. 105-110, Feb. 2001.
[7] K. Visakha, D. Thukaram and Lawrence Jenkins, "An approach for evaluation of transmission costs of real power contracts in deregulated systems," Electric Power Systems Research, vol. 70, pp. 141-151, 2004.
[8] A. R. Abhyankar, S. A. Soman and S. A. Khaparde, "Optimization approach to real power tracing: application to transmission fixed cost allocation," IEEE Trans. on PWRS, vol. 21, no. 3, pp. 1350-1361, Aug. 2006.



Load Frequency Stabilization of Hydro Dominated Power System with TCPS and SMES in Competitive Electricity Market
Praghnesh Bhatt, Research Scholar, SVNIT, Surat, Gujarat, India
S. P. Ghoshal, Department of Electrical Engineering, National Institute of Technology, Durgapur, India
Ranjit Roy, Department of Electrical Engineering, SVNIT, Surat, Gujarat, India

Abstract: This paper presents the analysis of Automatic Generation Control (AGC) of a two-area interconnected multiple-unit hydro-hydro power system in a competitive electricity market. The non-minimum phase characteristic of the hydro turbine produces an initial power surge opposite to that desired in the event of a frequency disturbance. A step load perturbation to such a system results in heavy frequency oscillations, and the system is unable to regain a stable state; the positive real parts of some eigenvalues confirm the inherent dynamic instability of the system. In order to stabilize such a dynamically and transiently unstable system, the impacts of a Thyristor Controlled Phase Shifter (TCPS) in series with the tie-line and a Superconducting Magnetic Energy Storage system (SMES) at the area terminal have been investigated. The parameters of the TCPS and SMES and the integral gains of the AGC loop are optimized by a craziness-based particle swarm optimization algorithm in order to obtain the optimal transient response of the system under different transactions in the competitive electricity market.
Keywords: AGC, bilateral contracts, deregulation, SMES, TCPS

I. INTRODUCTION

Controlling the frequency has always been a major concern in electrical power system operation, and it is becoming much more significant with the increasing size, changing structure and complexity of interconnected power systems. Since the liberalization of the electricity supply industry, however, the resources required to achieve this control have been treated as ancillary services that the system operator has to obtain from other industry participants. One of these ancillary services is frequency regulation, or load following, utilized for Load Frequency Control (LFC). LFC in a deregulated electricity market should be designed to consider different types of possible transactions, such as poolco-based transactions, bilateral transactions, and a combination of the two [1]-[2]. In this new paradigm a Disco can contract individually with a Genco for power, and these transactions are carried out under the supervision of the ISO or the RTO. Hence, situations may arise where Discos contract only with hydro generating units in the competitive market place. In this paper we analyze load frequency control in a deregulated power system where the contracts of the Discos exist only with Gencos based on hydro generation.
II. LINEARIZED MODEL OF THE TEST SYSTEM IN THE COMPETITIVE ELECTRICITY MARKET
Consider a two-area interconnected power system in which each area has two GENCOs based on hydro generation and

two DISCOs. Let GENCO1, GENCO2, DISCO1 and DISCO2 be in area I, and GENCO3, GENCO4, DISCO3 and DISCO4 be in area II, as shown in Fig. 1. The linearized model of the two-area interconnected power system in the competitive electricity market is shown in Fig. 2; the system parameters are given in [3]. For the transient analysis of the system, a 10% step load perturbation is considered in area 1. It is observed that under such a load change the system frequency is heavily perturbed from its normal operating point and the system does not regain a stable state, as shown in Fig. 3. Positive real parts of some eigenvalue pairs (highlighted in Table I) indicate that the system itself is inherently dynamically (small-signal) unstable. To overcome this situation and to make the system dynamically and transiently stable against any load disturbance, the coordination of a Thyristor Controlled Phase Shifter (TCPS) [4] with a Superconducting Magnetic Energy Storage system (SMES) [5], and the coordination of SMES with SMES, are considered. The structures of the TCPS and the SMES as frequency stabilizers are shown in Figs. 4 and 5 respectively. The TCPS is located in series with the tie-line near area 1 and the SMES is placed at the area terminal.
III. MATHEMATICAL PROBLEM FORMULATION
The objective of AGC is to reestablish primary frequency regulation, restore the frequency to its nominal value as quickly as possible and minimize the tie-line power flow oscillations between neighbouring control areas. In order to satisfy these requirements, the gains K_I1, K_I2 of the integral controllers in the AGC loop, the TCPS parameters (K_TCPS, T_TCPS) and the SMES parameters (K_SMES, T_SMES, T1, T2, T3 and T4) are to be optimized. In the present work an Integral Square Error (ISE) criterion is used, minimizing the objective function defined as the Figure of Demerit (FDM):
FDM = Σ (Δf1^2 + Δf2^2 + ΔPtie^2) T      (1)

where T is a given time interval for taking samples, Δf is the incremental change in frequency and ΔPtie is the incremental change in tie-line power. The objective function is minimized with the help of the CRPSO-based optimization technique.
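A small illustrative computation of the FDM of (1) from sampled deviations (a minimal sketch assuming uniformly sampled signals; not the authors' code):

```python
import numpy as np

def figure_of_demerit(df1, df2, dptie, T):
    """FDM of eq. (1): sum of squared sampled deviations weighted by the
    sampling interval T. Inputs are equal-length arrays of samples."""
    df1, df2, dptie = map(np.asarray, (df1, df2, dptie))
    return float(np.sum(df1**2 + df2**2 + dptie**2) * T)

# Example: three short sampled records with a 0.01 s sampling interval.
print(figure_of_demerit([0.2, 0.1, 0.0], [0.1, 0.05, 0.0], [0.02, 0.01, 0.0], T=0.01))
```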
Fig. 1. Schematic diagram of the two-area system in the restructured environment (Area 1: GENCO1, GENCO2, DISCO1, DISCO2; Area 2: GENCO3, GENCO4, DISCO3, DISCO4).


Fig. 2. Linearized model of the two-area interconnected hydro-hydro power system in the competitive electricity market (contract participation factors cpf, governors, transient droop compensation, hydro turbines, integral controllers, TCPS, SMES and the tie-line).

IV. CRAZINESS-BASED PARTICLE SWARM OPTIMIZATION
PSO was first introduced by Kennedy and Eberhart [6]. It is an evolutionary computational model, a stochastic search technique based on swarm intelligence. The velocity updating equation is:

v_i^{k+1} = v_i^k + c1 r1 (pBest_i - x_i^k) + c2 r2 (gBest - x_i^k)      (2)

Fig. 3. Frequency deviations Δf1 and Δf2 (Hz) of the test system after a 10% step load disturbance in area 1.

Fig. 4. Structure of the TCPS as a frequency stabilizer (gain block K_TCPS/(1+sT_TCPS) acting with the tie-line synchronizing coefficient T12 to produce ΔP_TCPS).


The position updating equation is
x_i^{k+1} = x_i^k + v_i^{k+1}      (3)
The following modifications of the velocity help to enhance the global search ability of PSO, as observed in CRPSO [7].
(i) Velocity updating as proposed in [7] may be stated as
v_i^{k+1} = r2 v_i^k + (1 - r2) c1 r1 (pBest_i - x_i^k) + (1 - r2) c2 (1 - r1) (gBest - x_i^k)      (4)
Local and global searches are balanced by the random number r2. A change of direction of the velocity may be modelled as
v_i^{k+1} = r2 sign(r3) v_i^k + (1 - r2) c1 r1 (pBest_i - x_i^k) + (1 - r2) c2 (1 - r1) (gBest - x_i^k)      (5)

Fig. 5. Structure of the SMES as a frequency stabilizer (gain K_SMES, lead-lag blocks (1+sT1)/(1+sT2) and (1+sT3)/(1+sT4), and the SMES lag 1/(1+sT_SMES) producing ΔP_SMES).
TABLE I. EIGENVALUE ANALYSIS OF THE TEST SYSTEM
-6.9788 ± 0.0046i; 0.0163 ± 1.0763i (unstable); 0.0483 ± 1.1034i (unstable); -0.0245 ± 0.0174i; -5; -5; -2; -2; -0.2475; -0.2083; -0.0525; -0.0261; -0.0261

In (5), sign(r3) is defined as sign(r3) = -1 if r3 <= 0.05 and sign(r3) = 1 if r3 > 0.05.


(ii) Inclusion of craziness: the diversity of direction in birds flocking or fish schooling may be handled in traditional PSO with a predefined craziness probability. Before updating its position, a particle may be "crazed" in accordance with
v_i^{k+1} = v_i^{k+1} + Pr(r4) sign(r4) v_i^{craziness}      (6)
where Pr(r4) and sign(r4) are defined respectively as
Pr(r4) = 1 if r4 <= Pcraz, and Pr(r4) = 0 if r4 > Pcraz      (7)

sign(r4) = 1 if r4 >= 0.5, and sign(r4) = -1 if r4 < 0.5      (8)
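A compact sketch of one CRPSO velocity/position update implementing (3)-(8) as reconstructed above (an illustration only, not the authors' code; the bounds, fitness evaluation and stopping criteria of the full algorithm are omitted):

```python
import numpy as np

def crpso_update(x, v, pbest, gbest, c1=1.65, c2=1.65,
                 pcraz=0.2, v_craziness=0.3, rng=np.random.default_rng()):
    """One CRPSO step for a population x (n particles x d dims) with velocities v."""
    r1, r2 = rng.random(), rng.random()
    r3, r4 = rng.random(), rng.random()
    sign_r3 = -1.0 if r3 <= 0.05 else 1.0                  # eq. (5) direction change
    v_new = (r2 * sign_r3 * v
             + (1 - r2) * c1 * r1 * (pbest - x)
             + (1 - r2) * c2 * (1 - r1) * (gbest - x))     # eqs. (4)-(5)
    if r4 <= pcraz:                                        # eq. (7): craziness applied
        sign_r4 = 1.0 if r4 >= 0.5 else -1.0               # eq. (8)
        v_new = v_new + sign_r4 * v_craziness              # eq. (6)
    return x + v_new, v_new                                # eq. (3)
```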

During the simulation of PSO, certain parameters require proper selection, since PSO is quite sensitive to its input parameters. The chosen maximum population size is 50, the maximum number of iteration cycles is 100, and the best Pcraz = 0.2 (chosen after several experiments); the best values of c1 and c2 are c1 = c2 = 1.65. The choice of c1 and c2 strongly affects the execution of PSO. The value of v_i^{craziness} lies between 0.25 and 0.35. The novelty of CRPSO lies in its faster convergence to the true optimal solution compared with its counterparts.
V. SIMULATION RESULTS AND DISCUSSION
A DISCO participation matrix (DPM) is used to make the visualization of contracts easier in the deregulated market. The DPM is a matrix whose number of rows equals the number of GENCOs and whose number of columns equals the number of DISCOs in the system [1]-[2]:
DPM = [cpf11 cpf12 cpf13 cpf14; cpf21 cpf22 cpf23 cpf24; cpf31 cpf32 cpf33 cpf34; cpf41 cpf42 cpf43 cpf44]      (9)
For TCPS-SMES coordination, the TCPS is placed in series with the tie-line near area 1 and the SMES at the terminal of area 2, whereas for SMES-SMES coordination SMES units are placed at the terminals of both areas.
Case 1: PoolCo-based transaction. Consider a case where the GENCOs in each area participate equally in AGC; the ACE participation factors are apf1 = 0.5, apf2 = 1 - apf1, apf3 = 0.5, apf4 = 1 - apf3. Assume that the load change occurs only in area I, so the load is demanded only by DISCO1 and DISCO2; let this load demand be 0.1 pu MW for each of them:
DPM = [0.5 0.5 0 0; 0.5 0.5 0 0; 0 0 0 0; 0 0 0 0]      (10)
The transient response of the system for the transaction given by (10) is shown in Fig. 6. It is clearly observed that the coordination of SMES-SMES and of TCPS-SMES effectively suppresses the area frequency and tie-line power oscillations and stabilizes the system.

The optimized parameters for both coordinated schemes are given in Table II. The power generation responses in Fig. 6 show that the initial power surge is in the direction opposite to that desired, due to the non-minimum phase characteristic of the hydro turbine [3]. Hence the LFC problem with only hydro generating units becomes very critical and needs a special coordinated control of TCPS-SMES or SMES-SMES. Since only the DISCOs in area I, viz. DISCO1 and DISCO2, have non-zero load demands, the transient dip in the frequency of area I is larger than that of area II. Since the off-diagonal blocks of the DPM are zero, i.e. there are no power contracts between a GENCO in one area and a DISCO in the other area, the scheduled steady-state power flow over the tie line is zero, and the actual tie-line power goes to zero.
Case 2: Bilateral contracts. Consider a case where all the DISCOs contract with the GENCOs for power as per the following DPM:
DPM = [0.40 0.25 0 0.25; 0.30 0.20 0 0.25; 0.15 0.20 1 0.30; 0.15 0.35 0 0.20]
It is assumed that DISCO1, DISCO2, DISCO3 and DISCO4 demand 0.15 pu, 0.05 pu, 0.05 pu and 0.15 pu power from the GENCOs as defined by the cpfs in the DPM, and each GENCO participates in AGC as defined by the following apfs: apf1 = 0.75, apf2 = 0.25, apf3 = 0.5, apf4 = 0.5. The dynamic responses are shown in Fig. 7; in this case also the coordination of SMES-SMES and of TCPS-SMES works satisfactorily.
Case 3: Contract violation. It may happen that a DISCO violates a contract by demanding more power than specified in the contract. This excess power is not contracted out to any GENCO, so it must be supplied by the GENCOs in the same area as the DISCO; it is reflected as a local load of the area and not as contract demand. Consider case 2 again with the modification that a DISCO demands 0.1 pu MW of uncontracted excess power. Fig. 8 shows the stabilized transient response in the event of contract violation with the coordinated device operation.
TABLE II. OPTIMIZED PARAMETERS
SMES-SMES coordination:
Area 1: K_SMES1 = 0.30, T_SMES1 = 0.04, T1 = 0.12, T2 = 0.03, T3 = 0.56, T4 = 0.3, Ki1 = -0.4
Area 2: K_SMES2 = 0.30, T_SMES2 = 0.04, T1 = 0.3, T2 = 0.02, T3 = 0.65, T4 = 0.30, Ki2 = -0.13
TCPS-SMES coordination:
Area 1: K_TCPS = 3.00, T_TCPS = 0.09, Ki1 = -0.4
Area 2: K_SMES2 = 0.3, T_SMES2 = 0.04, T1 = 0.3, T2 = 0.025, T3 = 0.8, T4 = 0.22, Ki2 = -0.1
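A short sketch of how the DPM of (9)-(10) maps DISCO demands to contracted GENCO generation and to the scheduled tie-line flow (illustrative only; the area assignment of GENCOs and DISCOs follows Fig. 1, and the scheduled-flow expression is the usual contract bookkeeping, not taken verbatim from the paper):

```python
import numpy as np

def contracted_generation(dpm, disco_demand):
    """Contracted generation of each GENCO: dP_Gi = sum_j cpf_ij * dP_Lj."""
    return np.asarray(dpm) @ np.asarray(disco_demand)

def scheduled_tie_flow(dpm, disco_demand):
    """Scheduled area-1 -> area-2 tie flow: power contracted by area-1 GENCOs
    (rows 0-1) to area-2 DISCOs (cols 2-3) minus the reverse."""
    dpm = np.asarray(dpm); d = np.asarray(disco_demand)
    export_1_to_2 = dpm[:2, 2:] @ d[2:]
    export_2_to_1 = dpm[2:, :2] @ d[:2]
    return float(export_1_to_2.sum() - export_2_to_1.sum())

# Case 1 of the paper: only DISCO1 and DISCO2 demand 0.1 pu each.
dpm_case1 = [[0.5, 0.5, 0, 0], [0.5, 0.5, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(contracted_generation(dpm_case1, [0.1, 0.1, 0, 0]))   # [0.1, 0.1, 0, 0]
print(scheduled_tie_flow(dpm_case1, [0.1, 0.1, 0, 0]))      # 0.0, as stated in the text
```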


Fig. 6. Case 1: dynamic responses (Δf1, Δf2, ΔPtie and the GENCO power outputs ΔPg1-ΔPg4) with TCPS-SMES and SMES-SMES coordination.
Fig. 7. Case 2: dynamic responses with TCPS-SMES and SMES-SMES coordination.
Fig. 8. Case 3: dynamic responses with TCPS-SMES and SMES-SMES coordination.

Fig. 9. Convergence characteristic (FDM versus iteration number) obtained by CRPSO.
Fig. 9 shows the convergence characteristic of the objective function given in (1) obtained with the CRPSO algorithm.
VI. CONCLUSION
The two-area interconnected multiple-unit hydro-hydro power system shows dynamically and transiently unstable behaviour. The optimized coordination of SMES-SMES and of TCPS-SMES, tuned with the help of CRPSO, effectively suppresses the transients in the area frequencies and tie-line power exchange for step load disturbances and for different contract variations in the competitive electricity market. The SMES-SMES coordination yields the smallest value of the Figure of Demerit and shows less undershoot and overshoot in the dynamic responses of the frequencies and tie-line power exchange than the TCPS-SMES coordination.
REFERENCES
[1] J. Kumar, K. Ng, and G. Sheble, "AGC simulator for price-based operation part 1: a model," IEEE Trans. Power Syst., vol. 12, no. 2, pp. 527-532, May 1997.
[2] V. Donde, M. A. Pai, and I. A. Hiskens, "Simulation and optimization in an AGC system after deregulation," IEEE Trans. Power Syst., vol. 16, no. 3, pp. 481-489, Aug. 2001.
[3] P. Kundur, Power System Stability and Control, McGraw-Hill, 1994.
[4] Rajesh Joseph Abraham, D. Das, and Amit Patra, "AGC of a hydrothermal system with Thyristor Controlled Phase Shifter in the tie-line," IEEE Power India Conference, 2006.
[5] S. C. Tripathy and K. P. Juengst, "Sampled data automatic generation control with superconducting magnetic energy storage in power systems," IEEE Transactions on Energy Conversion, vol. 12, no. 2, pp. 187-191, 1997.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, pp. 1942-1948, 1995.
[7] R. Roy and S. P. Ghoshal, "A novel crazy optimized economic load dispatch for various types of cost functions," Int. J. Electrical Power Energy Syst., vol. 30, no. 4, pp. 242-253, 2008.
APPENDIX
B= 41;12 1;T12 0.0866;



Impact of SSSC and UPFC in Linear Methods with PTDF for ATC Enhancement
Naresh Kumar Yadav
Electrical Engineering Department, D.C.R. University of Science & Technology, Murthal, Haryana, INDIA. nkyadav7619@gmail.com
Abstract: Open access to the transmission system places an emphasis on the intensive yet reliable use of the interconnected network, which requires knowledge of the network capability. Fast, accurate algorithms to compute network capabilities are indispensable for transfer-based electricity markets. Available Transfer Capability (ATC) is a measure of the remaining power transfer capability of the transmission network for further transactions. One of the limitations of linear ATC is the error produced by neglecting the effect of reactive power flows on line loading. This paper presents the determination of shunt reactive power compensation with Flexible AC Transmission System (FACTS) devices, the Static Synchronous Series Compensator (SSSC) and the Unified Power Flow Controller (UPFC), to improve the transfer capability of a power system while incorporating the reactive power flows in the ATC calculations. By redistributing the power flow, the ATC is improved. Studies on a sample 5-bus power system model illustrate the effect of shunt compensation along with line flow control.
Keywords: Linear ATC, reactive power, FACTS devices

Prof. Ibraheem
Electrical Engineering Department J.M.I University New Delhi, INDIA

I. INTRODUCTION
Available transfer capability (ATC) determines the size of the largest transfer that can be implemented in a certain direction across the power grid without violating security constraints [1], [2]. The determination of ATC requires continuation power flow, steady-state stability, voltage stability and transient stability simulations. The transfer direction is specified by means of participation factors of the source and sink buses. In the simplest case, this is a real power transaction between bus s and bus i while holding all other power flow injections fixed at their base-case levels [3]-[13]. This paper presents the application of one type of Flexible AC Transmission System (FACTS) device, the Static Synchronous Series Compensator (SSSC), to improve the transfer capability of a power system while incorporating the reactive power flows in the ATC calculations.
II. MULTICONTROL FUNCTIONAL MODEL OF THE SSSC
A. Operation Principles of the SSSC
An SSSC usually consists of a coupling transformer, an inverter and a capacitor. As shown in Fig. 1, the SSSC is connected in series with a transmission line through the coupling transformer. In principle, the SSSC can generate and insert a series voltage, which can be regulated to change the impedance (more precisely, the reactance) of the transmission line. In this way the power flow of the transmission line, or the voltage of the bus to which the SSSC is connected, can be controlled.
Figure 1. SSSC operating principles.
B. Equivalent Circuit and Power Flow Constraints of the SSSC
An equivalent circuit of the SSSC, shown in Fig. 2, can be derived from its operating principle. In the equivalent circuit the SSSC is represented by a voltage source in series with the transformer impedance. In practical operation of the SSSC, Vse can be regulated to control the power flow of line i-j. It is proposed to improve the performance of the system in the presence of the SSSC by using all of its advantages. The steady-state equivalent circuit of the SSSC is shown in Fig. 2.
Figure 2. SSSC equivalent circuit.

In the equivalent circuit, with Vse = Vse∠θse, Vi = Vi∠θi and Vj = Vj∠θj, the power flow constraints of the SSSC are
P_ij = Vi^2 g_ii - Vi Vj (g_ij cos(θi - θj) + b_ij sin(θi - θj)) - Vi Vse (g_ij cos(θi - θse) + b_ij sin(θi - θse))
Q_ij = -Vi^2 b_ii - Vi Vj (g_ij sin(θi - θj) - b_ij cos(θi - θj)) - Vi Vse (g_ij sin(θi - θse) - b_ij cos(θi - θse))
P_ji and Q_ji take the same form with i and j interchanged and with the sign of the Vse terms reversed, where g_ij + jb_ij = 1/Z_se, g_ii = g_ij, b_ii = b_ij, g_jj = g_ij and b_jj = b_ij. The operating constraint of the SSSC (the active power exchange via the dc link) is
PE = Re(Vse I_ji*) = 0, where Re(Vse I_ji*) = -Vi Vse (g_ij cos(θi - θse) - b_ij sin(θi - θse)) + Vj Vse (g_ij cos(θj - θse) - b_ij sin(θj - θse)).
The bus voltage control constraint is
Vi - Vi^{specified} = 0      (1)
where Vi^{specified} is the specified bus voltage.
III. THE UNIFIED POWER FLOW CONTROLLER (UPFC)
The basic operating principle of the UPFC is described in [3]. A UPFC consists of two switching converters based on gate-turn-off (GTO) valves, connected by a common DC link. The series inverter is coupled to the transmission line via a series transformer, and the shunt inverter via a shunt-connected transformer. The shunt inverter can generate or absorb controllable reactive power and can provide active power exchange to the series inverter to satisfy operating control requirements. In steady-state operation, the main objective of the UPFC is to control voltage and power flow. The equivalent circuit of the UPFC is given in Fig. 3.
Figure 3. UPFC equivalent circuit.
According to the equivalent circuit shown in Fig. 3, with Vsh = Vsh∠θsh, Vse = Vse∠θse, Vi = Vi∠θi and Vj = Vj∠θj, the power flow equations of the UPFC can be found in [7]. The operating constraint of the UPFC (the active power exchange between the two inverters via the DC link) is
PE = Re(Vsh I_sh* - Vse I_ij*) = 0
The equivalent voltage injections are bounded by the constraints
Vsh^{min} <= Vsh <= Vsh^{max}      (2)
θsh^{min} <= θsh <= θsh^{max}      (3)
Vse^{min} <= Vse <= Vse^{max}      (4)
θse^{min} <= θse <= θse^{max}      (5)
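A small numerical sketch of the series-injection flow expressions as reconstructed above (illustrative only; the circuit is evaluated directly rather than through the expanded g/b terms, and the voltage and impedance values are made up):

```python
import cmath

def sssc_line_flows(Vi, Vj, Vse, Zse):
    """P_ij + jQ_ij, P_ji + jQ_ji and the dc-link power PE for the SSSC branch:
    a series voltage source Vse in series with impedance Zse between buses i and j."""
    Iij = (Vi - Vse - Vj) / Zse           # current flowing from bus i towards bus j
    Sij = Vi * Iij.conjugate()            # complex power entering the line at bus i
    Sji = Vj * (-Iij).conjugate()         # complex power entering the line at bus j
    PE = (Vse * (-Iij).conjugate()).real  # active power exchanged by the series source
    return Sij, Sji, PE

# Example flows for a small series injection; in operation the SSSC control
# drives PE to zero, which is the dc-link constraint stated in the text.
Vi, Vj = cmath.rect(1.02, 0.0), cmath.rect(0.98, -0.1)
print(sssc_line_flows(Vi, Vj, 0.05j, 0.01 + 0.1j))
```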

IV. AVAILABLE TRANSFER CAPABILITY
A. Linear Static ATC
Based on the dc power flow model, linear ATC calculations typically assume a lossless system, where changes in the line j-k real power flows are linearly related to changes in the net real power injections. For illustration, we assume a point-to-point transfer from the slack bus s to any bus i, and we would like to maximize this transfer without exceeding the flow limit of any line or transformer. The key to the linear power flow solution is the use of power transfer distribution factors (PTDFs), expressed here as sensitivities of the line real power flows to the bus injections:
φ_jk,i = ΔP_jk / ΔP_s = ΔP_jk / ΔP_i      (6)
These PTDFs are essentially current dividers in linear circuit theory. As such, they are large-change sensitivities and can be used to predict the change in the flow of line j-k due to a transfer from bus s to bus i as
ΔP_jk = φ_jk,i ΔP_s = φ_jk,i ΔP_i      (7)
Note that ΔP_s = ΔP_i is the amount of transferred power from s to i. For a given positive line flow limit P_jk^max, assumed equal to the line MVA rating, and an initial positive line flow P_jk^0, the size of the transfer that drives the line to its limit is equal to


ΔP_s^{jk} = (P_jk^max - P_jk^0) / φ_jk,i   if φ_jk,i > 0;   ΔP_s^{jk} = (-P_jk^max - P_jk^0) / φ_jk,i   if φ_jk,i < 0      (8)
In order to determine the ATC, the minimum value of ΔP_s^{jk} among all lines in the system is taken:
ATC_si = min{ΔP_s^{jk} : all lines jk}      (9)
Since, in general, the complex flows at the sending and receiving ends of a transmission line are different, there is a corresponding k-end circle given by the equation
(P_kj - P̂_kj)^2 + (Q_kj - Q̂_kj)^2 = (Ŝ_kj)^2      (10)
where in general P_jk ≠ P_kj and Q_jk ≠ Q_kj; the radii of the two circles, however, have the same value.
V. INCORPORATION OF REACTIVE FLOWS
A. Maximum Complex Flow and Linear ATC
Since the transmission line complex flow is constrained to lie on the operating circle and inside the limiting circle, the maximum complex flow of line j-k corresponds to the point (P_jk*, Q_jk*) in Fig. 4. Depending on the sign of the distribution factor, two different solutions for P_jk* can be found. In order to compute P_jk* and Q_jk*, the following system of equations must be solved:
(P_jk - P̂_jk)^2 + (Q_jk - Q̂_jk)^2 = (Ŝ_jk)^2      (11)
(P_jk)^2 + (Q_jk)^2 = (S_jk^max)^2      (12)
where (P̂_jk, Q̂_jk) and Ŝ_jk denote the centre and radius of the operating circle of line j-k.
Figure 4. Operating and limiting circles.
Expanding the first equation and subtracting the second, we obtain
Q_jk = [-2 P̂_jk P_jk + (S_jk^max)^2 - M^2] / (2 Q̂_jk)      (13)
where M^2 = Ŝ_jk^2 - P̂_jk^2 - Q̂_jk^2. Substituting back into the limiting circle equation, the following quadratic expression in P_jk is obtained:
(P̂_jk^2 + Q̂_jk^2) P_jk^2 - P̂_jk ((S_jk^max)^2 - M^2) P_jk + (1/4)((S_jk^max)^2 - M^2)^2 - Q̂_jk^2 (S_jk^max)^2 = 0      (14)
Defining the corresponding constant coefficients
a = P̂_jk^2 + Q̂_jk^2;   b = -P̂_jk ((S_jk^max)^2 - M^2);   c = (1/4)((S_jk^max)^2 - M^2)^2 - Q̂_jk^2 (S_jk^max)^2      (15)
the solution for the maximum complex flow is obtained as
P_jk* = (-b ± sqrt(b^2 - 4ac)) / (2a);   Q_jk* = sqrt((S_jk^max)^2 - (P_jk*)^2)      (16)
The sign in the previous equation is chosen positive if the PTDF of line j-k is positive, and negative otherwise. In order to incorporate the maximum active flow P_jk* in the ATC, the only change required is to replace P_jk^max by P_jk*. This differs from linear ATC, where P_jk^max was assumed equal to the MVA rating of the line; P_jk* represents a better approximation to the actual maximum active line flow due to the transfer because it accounts for the reactive power component. The process of computing linear ATC including the effect of reactive flows is summarized as follows:
a) compute the distribution factors φ_jk,i;
b) compute P_jk* using (15) and (16);
c) replace P_jk^max by P_jk* and compute the necessary transfer ΔP_s^{jk} to overload each line end using (8);
d) obtain the minimum among all line ends.
Since the incorporation of reactive flows into the algorithm resides only in computing a new line flow limit, all the advantages of the linear ATC method are preserved.
VI. SIMULATION RESULTS
A. Simulation of a 5-bus system
A sample 5-bus system is considered to illustrate the implementation of the SSSC and the UPFC for ATC enhancement. The system consists of two generator buses, three load buses and seven transmission lines. The SSSC is located in the line connected between buses 3 and 4; its real and reactive power settings are 40 MW and 2 MVAR. The results for the 5-bus system are given in Tables I to III.
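A compact sketch of the linear ATC computation of (8)-(9) together with the reactive-flow correction of (13)-(16) as reconstructed above (illustrative only, not the authors' code; PTDFs, operating-circle data and initial flows are taken as given inputs):

```python
import math

def max_active_flow(p_hat, q_hat, s_hat, s_max, ptdf_positive=True):
    """P_jk* from eqs. (13)-(16): intersection of the operating circle
    (centre (p_hat, q_hat), radius s_hat) with the MVA limiting circle."""
    m2 = s_hat**2 - p_hat**2 - q_hat**2
    d = s_max**2 - m2
    a = p_hat**2 + q_hat**2
    b = -p_hat * d
    c = 0.25 * d**2 - q_hat**2 * s_max**2
    root = math.sqrt(max(b**2 - 4 * a * c, 0.0))
    sign = 1.0 if ptdf_positive else -1.0
    return (-b + sign * root) / (2 * a)

def linear_atc(lines, use_reactive=False):
    """ATC_si = min over lines of the transfer that hits a limit, eqs. (8)-(9).

    Each entry of `lines`: dict with ptdf, p0, p_max and (for the reactive
    correction) the operating-circle data p_hat, q_hat, s_hat."""
    transfers = []
    for ln in lines:
        limit = ln['p_max']
        if use_reactive:
            limit = max_active_flow(ln['p_hat'], ln['q_hat'], ln['s_hat'],
                                    ln['p_max'], ln['ptdf'] > 0)
        if ln['ptdf'] > 0:
            transfers.append((limit - ln['p0']) / ln['ptdf'])
        elif ln['ptdf'] < 0:
            transfers.append((-limit - ln['p0']) / ln['ptdf'])
    return min(transfers)
```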


TABLE I. ATC RESULTS FOR THE FIVE-BUS SYSTEM WITHOUT SSSC (S_jk^max = 1.0 FOR ALL LINES)
Transfer Direction | Limiting Line (Linear) | Limiting Line (Reactive) | PTDF (Linear) | PTDF (Reactive) | Min{ΔP_s^jk} (Linear) | Min{ΔP_s^jk} (Reactive)
1-2 | 3-1 | 1-2 | 0.5115 | -0.6544 | 2.7423 | 0.0165
1-3 | 1-3 | 1-3 | 0.3340 | 0.3340 | 2.9714 | 2.9609
2-3 | 2-3 | 2-4 | 0.0722 | -0.2459 | 1.4780 | 0.2427
2-4 | 2-4 | 2-4 | 0.3294 | 0.3294 | 3.0266 | 3.0106
2-5 | 2-5 | 2-5 | 0.7053 | 0.7053 | 1.4101 | 1.4092
3-4 | 3-4 | 2-4 | -0.7755 | 0.6783 | 1.0395 | 0.0830
4-5 | 2-5 | 3-4 | 0.2045 | 0.0046 | 2.2171 | 0.1895

TABLE II. ATC RESULTS FOR THE FIVE-BUS SYSTEM WITH SSSC (S_jk^max = 1.0 FOR ALL LINES)
Transfer Direction | Limiting Line (Linear) | Limiting Line (Reactive) | PTDF (Linear) | PTDF (Reactive) | ATC (Linear) | ATC (Reactive)
1-2 | 3-6 | 1-2 | -0.4563 | -0.2037 | 0.9870 | 0.2576
1-3 | 1-3 | 1-3 | 0.4603 | 0.4603 | 1.0866 | 0.4806
2-3 | 3-6 | 2-3 | -0.6694 | -0.5038 | 2.3149 | 0.0952
2-4 | 3-6 | 4-2 | -0.1006 | 0.0410 | 4.4788 | 1.0155
2-5 | 5-2 | 2-5 | 0.7382 | -0.7382 | 1.9872 | 0.0577
3-4 | 3-4 | 3-4 | 0.6358 | 0.6358 | 0.9437 | 0.0226
4-5 | 5-4 | 4-5 | 0.1895 | -0.1895 | 5.5965 | 0.1256

TABLE III. ATC RESULTS FOR THE FIVE-BUS SYSTEM WITH UPFC
Transfer Direction | Limiting Line | PTDF | Linear ATC | Reactive ATC
1-2 | 2-1 | -0.85166 | 1.174168 | 1.173516
1-3 | 2-1 | -5.93333 | 0.168539 | 0.168446
2-3 | 2-1 | -5.08166 | 0.196786 | 0.196677
2-4 | 4-2 | -0.44555 | 2.244389 | 2.235192
2-5 | 5-2 | -0.74166 | 1.348315 | 1.345912
3-4 | 1-2 | -5.20500 | 0.192123 | 0.192273
4-5 | 5-2 | -0.51916 | 1.926164 | 1.192035

REFERENCES
[1] NERC Transmission Transfer Capability Task Force, Available Transfer Capability Definitions and Determination, Princeton, NJ: North American Electric Reliability Council, 1996.
[2] P. W. Sauer, "Alternatives for calculating transmission reliability margin (TRM) in available transfer capability (ATC)," in Proc. Thirty-First Annu. Hawaii Int. Conf. Syst. Sci., vol. III, Kona, HI, Jan. 6-9, 1998, p. 89.
[3] G. C. Ejebe, J. G. Waight, M. Santos-Nieto, and W. F. Tinney, "Fast calculation of linear available transfer capability," IEEE Trans. Power Syst., vol. 15, pp. 1112-1116, Aug. 2000.
[4] M. Pavella, D. Ruiz-Vega, J. Giri, and R. Avila-Rosales, "An integrated scheme for on-line static and transient stability constrained ATC calculations," in IEEE Power Eng. Soc. Summer Meeting, vol. 1, 1999, pp. 273-276.
[5] S. Repo, "Real-time transmission capacity calculation in voltage stability limited power systems," in Proc. Bulk Power Syst. Dyn. Control IV - Restructuring, Santorini, Greece, Aug. 24-28, 1998.
[6] M. H. Gravener and C. Nwankpa, "Available transfer capability and first order sensitivity," IEEE Trans. Power Syst., vol. 14, pp. 512-518, May 1999.
[7] V. Ajjarapu and C. Christy, "The continuation power flow: a tool for steady state voltage stability analysis," IEEE Trans. Power Syst., vol. 7, pp. 416-423, Feb. 1992.
[8] G. T. Heydt, Computer Analysis Methods for Power Systems, New York, NY: Macmillan, 1986.
[9] A. J. Wood and B. F. Wollenburg, Power Generation, Operation, and Control, New York, NY: Wiley, 1996.
[10] B. Stott and J. L. Marinho, "Linear programming for power system network security applications," IEEE Trans. Power Apparat. Syst., vol. PAS-98, pp. 837-848, May-June 1979.
[11] G. L. Landgren and S. W. Anderson, "Simultaneous power interchange capability analysis," IEEE Trans. Power Apparat. Syst., vol. PAS-92, pp. 1973-1986, Nov.-Dec. 1973.
[12] S. Grijalva and P. W. Sauer, "Reactive power considerations in linear ATC computation," in Proc. 32nd Annu. Hawaii Int. Conf. Syst. Sci., HICSS-32, 1999, pp. 1-11.
[13] S. Grijalva, Complex Flow-Based Non-Linear ATC Screening, Ph.D. dissertation, Univ. of Illinois at Urbana-Champaign, Urbana, IL, 2002.
[14] Naresh Kumar Yadav, Prof. Ebraheem, K. Anand Kumar, R. Srinu Naik, and Prof. K. Vaisakh, "Identification of overloading line using linear and reactive ATC methods with PTDF," IJEEE Journal, Summer Edition 2009, vol. 04, no. 06, pp. 86-96.
[15] R. Srinu Naik, Prof. K. Vaisakh, K. Anand Kumar, Naresh Kumar Yadav, and Prof. Ebraheem, "ATC enhancement with TCSC using linear and reactive methods," IJEEE Journal, Summer Edition 2009, vol. 03, no. 04, pp. 12-22.



A New algorithm for the Assessment of Available Transfer Capability

R. Gnanadass(1), R. Rajathy(1), Harish Kumar(1), Narayana Prasad Padhy(2)
(1) Department of Electrical and Electronics Engineering, Pondicherry Engineering College, Puducherry, INDIA
(2) Department of Electrical and Electronics Engineering, Indian Institute of Technology Roorkee, INDIA

Abstract: In a competitive electric power market, knowledge of the available transfer capability (ATC) can help power marketers, sellers and buyers in planning, operation and reserving transmission services. The inclusion of the Transmission Reliability Margin (TRM) and the Capacity Benefit Margin (CBM) makes the computation of ATC more accurate. In the present work the Total Transfer Capability (TTC) is evaluated as an optimization problem using the Differential Evolution (DE) technique. The Transmission Reliability Margin is included by introducing contingencies such as single line, multi-line and generator outages. The Capacity Benefit Margin between the source and load points is obtained from the appropriate LOLE, which is evaluated by the Monte Carlo simulation method. The performance of the proposed method is tested on an Indian utility 62-bus system. From the results obtained it can be inferred that the margins, and the methods of incorporating them, give the load dispatcher a clear idea of the additional power that can be transmitted between the areas.
Keywords: Total Transfer Capability (TTC), Differential Evolution (DE), Capacity Benefit Margin (CBM), Loss-Of-Load Expectation (LOLE).

I. INTRODUCTION
In a competitive electric power market, knowledge of the available transfer capability (ATC) can help power marketers, sellers and buyers in planning, operation and reserving transmission services. ATC is the additional amount of power that may flow across an interface, over and above the base flows, without jeopardizing system security [1]. It is an indicator useful for the operator in realizing how far the stability limits are from the operating point; it also indicates the amount by which inter-area power transfers can be increased without compromising system security. Mathematically, ATC is defined as the total transfer capability less the transmission reliability margin, less the sum of the existing transmission commitments and the capacity benefit margin [2]. Total Transfer Capability (TTC) is defined as the amount of electric power that can be transferred over the interconnected transmission network in a reliable manner while meeting all of a specific set of defined pre- and post-contingency system conditions [1, 2]. A wide variety of mathematical methods and algorithms have been developed for calculating TTC; they can be divided into four types: the Linear Available Transfer Capability (LATC) method, the Continuation Power Flow (CPF) method, the Repetitive Power Flow (RPF) method and Optimal Power Flow (OPF) based methods [3-9]. Transmission Reliability Margin (TRM) is the amount of transmission capability necessary to ensure that the transmission network is secure under a reasonable range of uncertainties in system conditions. Sensitivity analysis and probabilistic methods are used to consider the impact of uncertainty in future system topology, load demand and power transactions [10-14].
CBM is defined as the amount of transmission transfer capability reserved by load-serving entities to meet their generation reliability requirements. At present there are two methods of calculating CBM: a) the deterministic method and b) the reliability index method [15]. In the deterministic method CBM is assumed to be a fixed percentage of TTC; this method has less precision and cannot reflect the effect of changes of system state on the margin. CBM can be incorporated into ATC as a firm or a non-firm transfer. ATC that considers CBM as a non-firm transfer is determined by subtracting the CBM and the base-case power transfer from the TTC; ATC that considers CBM as a firm transfer refers to the amount of transfer capability that takes into account the modification of the generation outputs of all areas according to the CBM. In the reliability index method, the generation deficiency or outage for a given hourly load in a year is evaluated and expressed as the loss of load expectation (LOLE). When the LOLE index of a particular area is above a specified level (for example, 2.4 hours/year), that area requires an additional amount of external generation capacity, imported from other areas, to compensate for the shortage of local generation in order to serve the load entities. If the LOLE index of an area is below the specified level, the area is capable of exporting generation capacity to other areas. Thus, to allow the transfer of power between areas, a transmission provider must reserve a certain amount of CBM, which depends on the level of LOLE. Generation system reliability is studied using the cumulative probability method in [16-18]. This paper proposes a new algorithm for the calculation of ATC. The maximum power transfer optimal power flow in the deregulated market is solved using Differential Evolution [19-21] as the optimization tool, and the CBM of each area is evaluated using Differential Evolution along with the Monte Carlo method [22-23].

II. PROBLEM FORMULATION

A. DETERMINATION OF TOTAL TRANSFER CAPABILITY


In the present study, TTC problem is formulated as an OPF problem. The objective function is to maximize the power that can be transferred from a specific set of generators in a source area to loads in a sink area, subjected to the constraints of load flow equation and system operation limits. The sum of real power loads in the sink area at the maximum power transaction is defined as the TTC value. The objective function for the OPF-based TTC calculation is expressed mathematically as
$$\max \; F = \sum_{i=1}^{ND\_SNK} P_{Di} \qquad (1)$$

subject to

$$P_{Gi} - P_{Di} - \sum_{j=1}^{N} V_i V_j Y_{ij} \cos(\delta_i - \delta_j - \theta_{ij}) = 0 \qquad (2)$$

$$Q_{Gi} - Q_{Di} - \sum_{j=1}^{N} V_i V_j Y_{ij} \sin(\delta_i - \delta_j - \theta_{ij}) = 0 \qquad (3)$$

$$P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max}, \qquad i = 1, 2, \ldots, NG \qquad (4)$$

$$Q_{Gi}^{\min} \le Q_{Gi} \le Q_{Gi}^{\max}, \qquad i = 1, 2, \ldots, NG \qquad (5)$$

$$V_i^{\min} \le V_i \le V_i^{\max}, \qquad i = 1, 2, \ldots, N \qquad (6)$$

$$S_{Li} \le S_{Li}^{\max}, \qquad i = 1, 2, \ldots, NL \qquad (7)$$

where F is the total load of the load buses in the sink area; $P_{Gi}$ and $Q_{Gi}$ are the real and reactive power generation at bus i; $P_{Di}$ and $Q_{Di}$ are the real and reactive load at bus i; $V_i$ and $V_j$ are the voltage magnitudes at buses i and j; $Y_{ij}$ and $\theta_{ij}$ are the magnitude and angle of element (i, j) of the bus admittance matrix; $\delta_i$ and $\delta_j$ are the voltage angles of buses i and j; $P_{Gi}^{\min}$ and $P_{Gi}^{\max}$ are the lower and upper limits of real power generation at bus i; $Q_{Gi}^{\min}$ and $Q_{Gi}^{\max}$ are the lower and upper limits of reactive power generation at bus i; $V_i^{\min}$ and $V_i^{\max}$ are the lower and upper limits of the voltage magnitude at bus i; $S_{Li}$ is the loading of the i-th line or transformer and $S_{Li}^{\max}$ is the corresponding loading limit; N is the total number of buses, NG is the number of generators, NL is the number of branches and ND_SNK is the number of load buses in the sink area.

For the optimization problem, only the generators in the source area are allowed to change their power output. Similarly, only the loads in the sink area are changed while the others remain constant. During the optimization, all OPF constraints have to be enforced; all equality constraints are satisfied by verifying the convergence of the power flow. The detailed algorithm using Differential Evolution is explained in Figure 1.
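As a companion to Figure 1, the following is a minimal, self-contained sketch of a DE/rand/1/bin loop of the kind described above. It is not the authors' implementation: ttc_fitness is a placeholder where the real evaluation would run an AC power flow for the trial transfer level and return the served sink-area load, penalized by any violated limits from eqs. (2)-(7).

```python
# Minimal DE/rand/1/bin loop written from the paper's description only.
# `ttc_fitness` stands in for the real evaluation (solve the power flow at the
# trial source-generation / sink-load scaling and penalize limit violations).
import numpy as np

def ttc_fitness(x):
    # Placeholder objective; replace with the power-flow-based TTC evaluation.
    return -np.sum((x - 0.7) ** 2)          # dummy concave objective

def differential_evolution(fitness, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # DE/rand/1 mutation
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True                # force one crossed gene
            trial = np.where(cross, mutant, pop[i])            # binomial crossover
            f_trial = fitness(trial)
            if f_trial >= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmax(fit))
    return pop[best], fit[best]

# Example: maximize a 4-variable dummy objective over [0, 1]^4.
x_best, f_best = differential_evolution(ttc_fitness, np.array([[0.0, 1.0]] * 4))
print(x_best, f_best)
```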

Figure 1: Flowchart for calculating TTC with contingency.

B. DETERMINATION OF CAPACITY BENEFIT MARGIN

Mathematically, the objective function, which is the total generation for each area, is defined as

$$P_G^{new} = \sum_{g=1}^{NG} P_{G_{base},g} + CBM \qquad (8)$$

subject to

$$CBM = \max\left(P_{G_{ext}}\right) \qquad (9)$$

$$LOLE\!\left(P_G^{new}, \sum_{i=1}^{NL} P_{D_{base},i}\right) \le \text{estimated LOLE of 2.4 hour/year} \qquad (10)$$

where $P_G^{new}$ is the new amount of total generation in MW; $P_{G_{base},g}$ is the base-case generation (MW) at generator bus g; CBM is the maximum amount of total external generation in MW; LOLE is the loss of load expectation corresponding to $P_G^{new}$ and the sum of $P_{D_{base},i}$; g = 1, 2, 3, ..., NG and i = 1, 2, 3, ..., NL; NG is the total number of generator buses and NL is the total number of load buses.

The value of LOLE for each area is calculated using the Monte Carlo simulation method [22, 23]. The procedure for estimating the effective CBM of each area using Differential Evolution is described below.
1. Calculate the LOLE for each area.
2. Identify the area having the highest LOLE value that violates the criterion.
3. Find the maximum value of $P_{G_{ext}}$ (or CBM) using Differential Evolution with LOLE as the fitness function.
4. Modify $P_{G_{base}}$ to $P_G^{new}$ according to eq. (8).
5. Excluding the area identified in step 2, $P_G^{new}$ for the other areas with LOLE less than 2.4 hour/year is reduced by $P_{G_{ext}}$.
6. Determine the new LOLE in conjunction with the reduced $P_G^{new}$.
7. Return to step 2 to identify the next area with LOLE above 2.4 hrs/yr and repeat the process until all areas satisfy the LOLE criterion.
8. Record the CBM value of each respective area.
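A minimal sketch of the Monte Carlo LOLE evaluation referred to above, assuming a two-state (in service / forced out) model for each generating unit. The unit ratings, forced outage rates and load profile below are illustrative placeholders, not the 62-bus system data.

```python
# Monte Carlo LOLE sketch with a two-state unit model (assumed data).
import numpy as np

def lole_monte_carlo(unit_mw, unit_for, hourly_load_mw, n_years=500, seed=1):
    """Loss-of-load expectation in hours/year for one area."""
    rng = np.random.default_rng(seed)
    unit_mw = np.asarray(unit_mw, dtype=float)
    unit_for = np.asarray(unit_for, dtype=float)
    hours = len(hourly_load_mw)
    loss_hours = 0
    for _ in range(n_years):
        # Sample unit availability independently for every hour of the year.
        up = rng.random((hours, len(unit_mw))) >= unit_for     # True = unit in service
        capacity = up @ unit_mw                                # available MW per hour
        loss_hours += int(np.sum(capacity < hourly_load_mw))
    return loss_hours / n_years

if __name__ == "__main__":
    units = [400, 300, 300, 200, 150]          # MW ratings (illustrative)
    fors = [0.05, 0.04, 0.04, 0.03, 0.03]      # forced outage rates (illustrative)
    load = np.random.default_rng(0).uniform(600, 1150, size=8760)   # hourly load (MW)
    print(f"LOLE = {lole_monte_carlo(units, fors, load):.2f} hours/year")
```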

IV. CASE STUDY AND RESULTS


An Indian utility 62-bus system [24] is used to demonstrate the effectiveness of the proposed method in determining the CBM for each area. The system has 19 generators, 89 (220 kV) transmission lines and 11 tap-changing transformers, with a power demand of 2909 MW. The system is divided into 3 areas, with six generators each in area 1 and area 3, whereas area 2 has seven generators. In this system, area 1 and area 2 are connected by four tie lines. Power transfer between area 1 and area 2 is considered, in which the generation in area 1 is changed randomly between its base value and its maximum value, while the load of area 2 is varied simultaneously. It is found that the generation of area 2 reaches its maximum limit when the load of the area is 1608.7906 MW, which is taken as the TTC value. In order to incorporate the effect of the Transmission Reliability Margin, contingencies such as single-line, multi-line and generator outages are included in the system. The tie lines between area 1 and area 2, and the largest generator among the two areas, are taken out of service individually, and a multi-line contingency is introduced by the simultaneous outage of lines 49-50 and 58-60 of areas 1 and 2. The value of TTC is found to be 1564.2711 MW during the multi-line contingency, with line 43 (31-32) as the limiting element. Figure 2 shows the convergence characteristics of DE for the above case. The line flows in MVA for the transaction from area 1 to area 2 are shown in Figure 3. It is observed that lines 43 (31-32), 78 (55-58) and 58 (39-42) are the most violated lines during the various contingencies. During contingencies there is some variation of the system load voltages from their base values, which can lead to system instability. Therefore, to ensure system stability, the variation of the load voltages during the multi-line outage is observed for a period of 10 seconds; it is seen from Figure 4 that all load bus voltages of the given system remain within the permissible limits, and hence the stability of the system is ensured. The load for each hour is generated randomly within its limits and the LOLE of each area is calculated. Figure 5 shows the convergence graph of LOLE (hrs/yr) for each area; the numerical values are found to be 3.4356 hrs/yr, 1.5248 hrs/yr and 2.0811 hrs/yr respectively. Since the value of LOLE for area 1 exceeds the permissible limit, the CBM for area 1 has to be optimized. The CBM for area 1 is estimated as 93.3443 MW while maintaining LOLE fairly below 2.4 hrs/yr, as given in Table 1. This implies that, during an emergency caused by generation deficiency, area 1 needs to import 61.1448 MW from area 2 and 32.1955 MW from area 3, as indicated by the negative values of CBM in Table 1. As a result, the generation of area 1 is increased to 1093.3997 MW and the generation of areas 2 and 3 is decreased to 2438.8512 MW and 1967.8045 MW respectively. Figure 6 shows the convergence characteristics of the LOLE values for each area after including CBM using DE. It is observed that the LOLE values for areas 2 and 3 increase as power is exported, whereas the LOLE value for area 1 decreases as power is imported. Table 2 shows the comparison of the values of ATC obtained by considering CBM as a firm and as a non-firm transfer using Differential Evolution for the area 1-2 transaction.
From Table 2, it is concluded that when CBM is taken as a firm transfer, slightly larger values of ATC are obtained than when CBM is taken as a non-firm transfer; furthermore, the value of ATC obtained without considering CBM would lead to the loss of some ATC in future power transfer contracts.

Figure 2: Convergence characteristics of TTC including contingencies of Indian utility system .

Figure 3: Line flows including various contingency of Indian utility system

Figure 4: Variation of load bus voltages for multiline contingency of Indian utility system

Figure 5: LOLE of each area


TABLE 1. OPTIMIZED NEW GENERATION, CBM AND LOLE OF EACH AREA USING DE

| Area | Optimized generation after importing, PG_new (MW) | CBM (MW) (imported gen. cap.) | LOLE (hrs/year) (after importing) |
| 1 | 1093.3997 | 93.3997 | 1.1563 |
| 2 | 2438.8512 | -61.1488 | 2.2776 |
| 3 | 1967.8045 | -32.1955 | 2.2776 |


Figure 6: LOLE for each area after including CBM


TABLE 2. COMPARISON OF ATC CONSIDERING CBM AS A FIRM AND AS A NON-FIRM TRANSFER

| Areas of transfer | TTC including TRM (MW) | New ATC (MW): non-firm transfer | New ATC (MW): firm transfer |
| From Area 1 to Area 2 | 1564.2711 | 1503.1223 | 1523.4205 |

V. CONCLUSION

This paper proposes a new algorithm to calculate the values of ATC taking CBM into consideration. The proposed algorithm makes use of Differential Evolution and the Monte Carlo sampling technique. The practical Indian utility 62-bus system is used for the case study. In the calculation of ATC, two methods of incorporating CBM (as a firm and as a non-firm transfer) are adopted. The results show that the consideration of CBM plays a significant role in the calculation of ATC: without the incorporation of CBM, ATC may be overestimated, leading to risk during practical implementation. Hence the calculated ATC gives the system operator a clear signal to permit transactions within the system limits.

VI. ACKNOWLEDGEMENT

The authors acknowledge the financial support given by All India Council for Technical Education, New Delhi through Research Promotion Scheme (F. No. 8023/BOR/RPS 104/ 2008-09).
VII. REFERENCES

[1] North American Electric Reliability Council (NERC), Transfer Capability Definitions and Determination, NERC Report, May 1995. [2] North American Electric Reliability Council (NERC), Available Transfer Capability Definitions and Determination, NERC Report, Princeton, New Jersey, June 1996. [3] G.Hamoud, Assessment of Available Transfer capability Transmission systems, IEEE Trans. power systems, vol. 15, no.1, pp. 27-32, Feb. 2000. [4] H. Chiang, A. H. Flueek, K. S. Shah and N. Balu, CPFLOW: A practical tool for tracing power system steady-state stationary behavior due to load and generation variations, IEEE Trans. Power systems, vol. 10, no.2, pp. 623-634, May 1995. [5] G. C. Ejebe, J.G. Waight, S. N. Manuel, and W. F. Tinney Fast Calculation of linear available transfer capability, IEEE Trans. Power System, vol. 15, no.3, pp. 1112-1116, Aug.2000.

[6] G. C. Ejebe, J. Tong, J. G. Waight, J. G. Frame, X. Wang, W. F. Tinney, Available transfer capability calculation IEEE Trans. Power System, vol.13, no.4, pp. 1521-527, Nov.1998. [7] Mohammad Shabaan, Weixing Li, Zheng yan Yixin Ni, Felix F. Wu, Calculation of Total transfer capability incorporating the effect of reactive power, Int. J. Electric power systems Research, vol.64, no.2, pp.181-188, May 2003. [8] V.Ajjarapu, C.Christie, The continuation power flow: a practical tool for tracing power system steady-state stationary behavior due to the load and generation variations, IEEE Trans. power systems, vol.7, no.1, pp.416-423, May 1992. [9] Weerakorn Ongsakul and Peerapol Jirapong, Calculation of Total Transfer Capability by Evolutionary programming. IEEE Region 10 Conf, vol. 3, pp. 492-495, March 2004. [10] Leita da silva, A. M., de carvalho costa, J. G. da Fonseca Manso, Transmission capacity; availability, maximum transfer and reliability, IEEE Trans. power system, vol. 17, no.3, pp.843-849, Aug. 2002. [11] Yan Ou, Chanan Singh, Calculation of risk and statistical indices associated with available transfer capability, 2003 IEE proc. Generation, Transmission and Distribution , pp.239-244. [12] P. W. Sauer, Alternatives for calculating transmission reliability margin (TRM) in Available Transfer Capability (ATC), in proc.1998 31 th Hawaii International Conference system science , vol. 3, pp.89. [13] M. Gravener and C. Nwanpka, Available transfer capability and first order sensitivity analysis, IEEE Trans power systems, vol.14, no. 2, pp.512-518, May 1999. [14] M. M. Othman and A. Mohammed, Transmission Reliability Margin Assessment of the Malaysian Power system, National power and Energy conf, Bangi, ,Malaysia, 2003. [15] Rongfu Sun, Lin Chen ,Yong-hua Song and Yuan-zhang Sun, Capacity Benefit Margin Assessment Based on Multi-area Reliability Exponential Analytic Model, 2008 Power and Energy Society General Meeting Conversion and Delivery of Electrical Energy in the 21 st Century , pp. 20-14.. [16] Yan Ou, Chanan Singh, Assessment of Available Transfer capability and Margins IEEE Trans power systems, vol.17, no. 2, pp.463-468, May 2002. [17] M. M. Othman and A. Mohammed, Evolutionary Programming Method in the Capacity Benefit Margin Assessment of Deregulated Power System, 2004 IEEE proc. National Power and Energy Conference , pp. 343-346. [18] M. M. Othman, A. Mohammed, and A. Hussain, Available transfer capability assessment using evolutionary programming based capacity benefit margin, Int. J. Electrical Power &Energy Systems,vol.28,no..3, pp. 166-176, Mar. 2006. [19] K. Price and M.Storn, An introduction to Differential Evolution New optimization,(Eds), London: McGraw Hill International (UK) Limited, 1999, pp.79 -108. [20] R. Rajathy, R. Gnanadass, K. Manivannan and Harish Kumar Riskconstrained optimal bidding strategies using Differential evolution International Journal of Power, Energy, Automation and Intelligence, Vol.2 No.1, pp.96-103, January 2009. [21] R. Gamperie, S. Muller, A parameter study for Differential Evolution, Advances in Intelligent systems, Fuzzy systems, Evolutionary computation, WSEAS Press, 2002, pp.293 - 298. [22] R. Billinton, W. Li, Reliability Assessment of Electrical Power Systems using Monte-Carlo methods, New York: Plenum Press, 1994. [23] R. Billinton and R. N. Allan, Reliability Evaluation of power systems, New York: Plenum press, pp 400-440, 1994. [24] R. Gnanadass, Narayana Prasad Padhy, K. 
Manivannan, Assessment of available transfer capability for practical power systems with combined economic emission dispatch, Int.J. Electric Power Systems and Research, vol.69, no.2-3, pp.267-276, May 2004.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Combined Reactive Power and Transmission Congestion Management in Deregulated Environment


Kanwardeep Singh, N. P. Padhy, and J. D. Sharma
Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee, India

Abstract: Management of reactive power and transmission congestion are fundamental aspects of maintaining a secure and reliable electrical transmission system. This paper proposes a combined strategy for the management of reactive power and transmission congestion in a deregulated environment. Transmission congestion management is achieved by rescheduling the day-ahead schedule of generator companies so as to minimize the congestion management cost subject to operational and security constraints. In tandem, reactive power is also managed by providing an incentive to generator companies for any loss of profit due to the reduction in their real power sales associated with increased reactive power supply above the scheduled values. The effectiveness of the proposed methodology is tested on the IEEE 30-bus system and a discussion is presented to validate the results obtained.

Keywords: congestion management; deregulation; capability curve; lost opportunity cost

I. NOMENCLATURE

Abbreviations of commonly used symbols in the paper are as follows.

N, NL, G : set of all buses, lines and GenCos
$C_i^{up}$, $C_i^{dn}$ : incremental and decremental cost of the i-th GenCo for rescheduling
$LOC_i$ : lost opportunity cost of the i-th GenCo for supply of reactive power
$P_{gi}^{0}$, $\Delta P_{gi}^{u}$, $\Delta P_{gi}^{d}$ : scheduled value, increment and decrement of real power generation at the i-th bus
$Q_{gi}$ : reactive power generation at the i-th bus after congestion management
$V_i \angle \delta_i$, $Y_{ij} \angle \theta_{ij}$ : bus voltage phasor at the i-th bus and the (i, j)-th element of the Y-bus
$P_{di}$, $Q_{di}$ : real and reactive power demand at the i-th bus
$S_b(\cdot)$, $S_b^{max}$ : apparent power flow and its maximum limit in the b-th line
$P_{gi}^{max}$, $P_{gi}^{min}$ : maximum and minimum real power generation limits of the i-th GenCo
$Q_{gi}^{max}$, $Q_{gi}^{min}$ : maximum and minimum reactive power generation points in the capability curve of the i-th generator
$V_i^{max}$, $V_i^{min}$ : maximum and minimum bus voltage limits at the i-th bus

II. INTRODUCTION
With the advent of deregulation in electricity sector, a number of generation companies (GenCos) and distribution companies (DistCos) have emerged into the picture and replaced the vertically integrated structure of electric utilities. However, the interconnected transmission system is considered to be an inherent monopoly so as to avoid the duplicity, problems of right-of-the-way, ecological issues and huge investment for new infrastructure, and to take the advantage of the interconnected network. By interconnecting many distribution systems and supply sources, the transmission grid allows for the efficient and reliable use of resources and reduces risk of power failures. In order to ensure a healthy competitive environment, an independent system operator (ISO) has emerged as a generic operator, which sets rules and protocols for open and nondiscriminatory access of transmission services. The open access transmission framework results in more intensive use of the transmission system by the market participants, and in turn may lead to the situations in which transmission network is not able to accommodate all the desired transactions due to violation of some system constraints. This phenomenon is commonly termed as congestion. The congestion in transmission system is one of the serious hindrances in front of perfect competition in deregulated environment. Thus, management of transmission congestion is one of the important functions of ISO [1-3]. Congestion is not a new phenomenon as it was also there in transmission networks of pre-deregulation periods, but then its management was relatively less problematic, since all the generation together with transmission (and sometimes distribution also) were regulated by a unique government company. After the deregulation, congestion and its management have become very complex and sensitive issues, because on one hand it can have a direct impact on real time implementation of power transactions of some/ all of the market participants, and on the other hand there may be existing some smart market entities that may exploit system congestion to enhance their own profits [3]. In recent years, a wide literature on congestion management methodologies and related issues has been published. However, a universally acceptable approach for congestion management has yet to be evolved. A bibliographical survey on congestion management approaches in competitive power markets has been presented in [4]. Bompard et al. [5] have tried to develop a unified framework to illustrate different congestion management approaches used by England & Wales, Norway, Sweden, PJM and California markets. Congestion management problem in pool and bilateral models is considered in [6], [7]. A work on congestion relief in multilateral transaction framework is presented in [8]. Conejo et al. [9] have developed a congestion management technique based on voltage stability margins. Gedra [10] presented a tutorial on congestion rent and its hedging by the participants in nodal pricing based market. Yesuratnam and Thukaram [11] have used the concept of relative

329

electrical distances for congestion management and improvement of voltage stability. In electrical power systems, reactive power maintains a nominal voltage profile at system buses and helps in transportation of real power from suppliers to consumers. The Federal Energy Regulatory Commission (FERC) of USA under Order No. 888 has recognized reactive power and voltage control as one of ancillary services that transmission providers have to procure from GenCos and VAR sources [12-13]. The use of lost opportunity cost based on generators capability curve [14-16] and valuation of reactive power support from various VAR sources [17-18] have been explored for the management of reactive power. However, in the literature, management of reactive power and transmission congestion are generally treated separately, although both problems are interlinked with each other. Kumar et al. [19] have presented a zonal congestion management approach by considering the rescheduling of real and reactive powers. However, the authors have not included the effect of reduction in real power sales of GenCos in response to a call for increase in reactive power during rescheduling. In the present paper a combined strategy for the management of reactive power and transmission congestion has been proposed in deregulated environment. The management of transmission congestion is achieved by rescheduling of day-ahead schedule of GenCos for minimization of congestion management cost subject to operational and security constraints. The reactive power management is considered along with congestion management by providing incentive to GenCos for any loss of profit due to their reduction in real power sales associated with increased reactive power supply from the scheduled values. The remaining paper is organized as follows. Section III presents reactive power management cost in terms of lost opportunity cost of GenCos for supply of reactive power. Section IV presents problem formulation for combined reactive power and transmission congestion management. Results and discussion are presented in Section V, followed by concluding remarks in Section VI.

III. REACTIVE POWER MANAGEMENT COST

Reactive power management cost is the cost incurred for the supply of reactive power. Under normal operating conditions, reactive power is supplied by the generators without any additional expenditure. But under congested conditions, if the ISO calls on some generators to increase their reactive power supply above the scheduled values, it may cost them a reduction in their real power supply. The loss in profit of GenCos due to the reduced sale of real power as a result of the increase in reactive power supply is termed the lost opportunity cost (LOC). LOC can be obtained from the capability curve of the generators, which represents the limits on real and reactive power generation bounded by the armature heating limit, the field heating limit and the under-excitation limit. Figure 1 shows the capability curve of a typical generator with the above three limits. In this paper, an approximate capability curve is used to develop LOC. The approximate capability curve (also shown in Figure 1) is bounded by a circular arc BC (armature heating limit) with centre at point O and radius equal to the maximum real power generation ($P_g^{max}$), a straight line AB (field heating limit) where OA represents the maximum reactive power generation ($Q_g^{max}$), and a straight line CD (under-excitation limit) where OD represents the minimum reactive power generation ($Q_g^{min}$). Point B($P_g^{z}$, $Q_g^{z}$) is the intersection of the armature heating limit and the field heating limit on the approximate capability curve. Similarly, point C($P_g^{z'}$, $Q_g^{z'}$) is the intersection of the armature heating limit and the under-excitation limit on the approximate capability curve. These points can be determined from the real and reactive power relationships under the various limits as given in [16].

Figure 1. Determination of LOC from the Generator's Capability Curve (exact and approximate curves with the armature heating, field heating and under-excitation limits).

Consider that point P($P_g$, $Q_g$) represents the operating point of the generator after congestion management. Then $P_g^{cap}$, the maximum real power generation corresponding to $Q_g$, can be obtained from the approximate capability curve as follows:

$$P_g^{cap} = \begin{cases} \sqrt{(P_g^{max})^2 - Q_g^2}, & \text{if } Q_g^{z'} \le Q_g \le Q_g^{z} \\[4pt] P_g^{min} + \dfrac{P_g^{z} - P_g^{min}}{Q_g^{max} - Q_g^{z}}\,(Q_g^{max} - Q_g), & \text{if } Q_g > Q_g^{z} \\[4pt] P_g^{min} + \dfrac{P_g^{z'} - P_g^{min}}{Q_g^{z'} - Q_g^{min}}\,(Q_g - Q_g^{min}), & \text{if } Q_g < Q_g^{z'} \end{cases} \qquad (1)$$

The generator is entitled to receive LOC if $P_g^{cap}$ is less than the scheduled value of real power generation. So,

$$LOC = C^{dn}\,(P_g^{0} - P_g^{cap})^{2}, \quad \text{if } P_g^{cap} < P_g^{0} \qquad (2)$$
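A small numerical sketch of eqs. (1)-(2), using the bus-2 row of Table II and treating the decremental bid as the quadratic coefficient implied by its $/MW^2h units (an interpretation, not stated explicitly in the text):

```python
import math

def pg_cap(qg, pg_max, pg_min, pg_z, qg_z, pg_zp, qg_zp, qg_max, qg_min):
    """Maximum real power output available at reactive output qg, eq. (1)."""
    if qg_zp <= qg <= qg_z:                       # armature heating arc BC
        return math.sqrt(pg_max**2 - qg**2)
    if qg > qg_z:                                 # field heating line AB
        return pg_min + (pg_z - pg_min) * (qg_max - qg) / (qg_max - qg_z)
    return pg_min + (pg_zp - pg_min) * (qg - qg_min) / (qg_zp - qg_min)   # line CD

def loc(pg0, qg, c_dn, **curve):
    """Lost opportunity cost of eq. (2), quadratic in the curtailed MW (assumed)."""
    cap = pg_cap(qg, **curve)
    return c_dn * (pg0 - cap) ** 2 if cap < pg0 else 0.0

# Bus-2 data from Tables I and II: Pg0 = 35.51 MW, required Qg = 34.83 MVAR, Cdn = 0.37 $/MW^2h.
bus2 = dict(pg_max=140.0, pg_min=0.0, pg_z=139.64, qg_z=10.0,
            pg_zp=139.64, qg_zp=-10.0, qg_max=40.0, qg_min=-40.0)
print(pg_cap(34.83, **bus2))            # ~24.06 MW, i.e. an 11.45 MW reduction
print(loc(35.51, 34.83, 0.37, **bus2))  # ~48.5 $/h, close to the Table III value
```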

IV. COMBINED REACTIVE POWER AND TRANSMISSION CONGESTION MANAGEMENT PROBLEM FORMULATION
The combined reactive power and transmission congestion management problem is formulated as to minimize congestion management cost (determined from incremental and decremental bids of GenCos for rescheduling of their generators) and LOC (2) subject to real and reactive power balance, line flow limits, and bounds on real and reactive power generations. Mathematically, the problem can be stated as follows


$$\min \; \sum_{i \in G} \left[ C_i^{up}\,(\Delta P_{gi}^{u})^{2} + C_i^{dn}\,(\Delta P_{gi}^{d})^{2} + LOC_i \right] \qquad (3)$$

Subject to the following constraints:

1) Power flow equations as determined by Kirchhoff's laws:

$$P_{gi}^{0} + \Delta P_{gi}^{u} - \Delta P_{gi}^{d} - P_{di} - \sum_{j=1}^{N} V_i V_j Y_{ij} \cos(\delta_i - \delta_j - \theta_{ij}) = 0; \quad i \in N \qquad (4)$$

$$Q_{gi} - Q_{di} - \sum_{j=1}^{N} V_i V_j Y_{ij} \sin(\delta_i - \delta_j - \theta_{ij}) = 0; \quad i \in N \qquad (5)$$

2) The apparent power flows in the transmission lines are limited by their respective MVA limits:

$$S_b(\delta, V) \le S_b^{max}; \quad b \in NL \qquad (6)$$

3) The real and reactive power generations and the bus voltages are bounded by minimum and maximum limits:

$$P_{gi}^{min} \le P_{gi}^{0} + \Delta P_{gi}^{u} - \Delta P_{gi}^{d} \le P_{gi}^{max}; \quad i \in G \qquad (7)$$

$$Q_{gi}^{min} \le Q_{gi} \le Q_{gi}^{max}; \quad i \in G \qquad (8)$$

$$V_i^{min} \le V_i \le V_i^{max}; \quad i \in N \qquad (9)$$

4) The real and reactive power generations are bounded by the armature heating, field heating and under-excitation limits. These limits are implemented through (10)-(12) by making use of the approximate capability curve.

Armature heating limit:

$$P_{gi}^{2} + Q_{gi}^{2} \le (P_{gi}^{max})^{2}; \quad i \in G \qquad (10)$$

Field heating limit:

$$Q_{gi} \le Q_{gi}^{max} - \frac{Q_{gi}^{max} - Q_{gi}^{z}}{P_{gi}^{z} - P_{gi}^{min}}\,(P_{gi} - P_{gi}^{min}); \quad i \in G \qquad (11)$$

Under-excitation limit:

$$Q_{gi} \ge Q_{gi}^{min} + \frac{Q_{gi}^{z'} - Q_{gi}^{min}}{P_{gi}^{z'} - P_{gi}^{min}}\,(P_{gi} - P_{gi}^{min}); \quad i \in G \qquad (12)$$

V. RESULTS AND DISCUSSION

The proposed combined reactive power and transmission congestion management problem has been solved for a case study of the IEEE 30-bus system by making use of the KNITRO solver in the AMPL environment [20]. The KNITRO solver makes use of robust interior-point and active-set methods for solving nonlinear programming problems.

A. IEEE 30-Bus System

The system is considered to be stressed by increasing the real and reactive power bus loading by 20%. Line 2 (1-3) (i.e. line number 2, connected between buses 1 and 3) is considered to be congested, with a maximum power flow limit of 70 MVA only. The day-ahead real and reactive power schedule is obtained by running an optimal power flow (OPF) under the above system conditions and is given in Table I. The shadow prices associated with the real power balance equations of the OPF represent the marginal prices of real bus powers, known as locational marginal prices (LMPs). In a perfectly competitive environment, GenCos would bid for rescheduling their real power output in accordance with their LMPs. The incremental/decremental bids (in $/MW^2h) of the GenCos, as derived from their LMP values, are given in Table I. The congested system conditions are simulated by considering the outage of lines 6 (2-6) and 24 (19-20), which causes the overloading of line 2 (1-3). It has been observed by performing a load flow study (considering bus 1 as the slack bus) that the congested line 2 (1-3) would be overloaded to 84.47 MVA if the day-ahead schedule given in Table I were implemented under the assumed contingencies. This shows that the day-ahead schedule needs to be rescheduled for congestion management. The maximum and minimum bus voltage limits at all buses are taken to be 1.06 p.u. and 0.94 p.u., respectively. The assumed data corresponding to the capability curves of the GenCos are given in Table II.

TABLE I. DAY-AHEAD SCHEDULE AND INCREMENTAL/DECREMENTAL BIDS

| Bus No. | Pg0 (MW) | Qg0 (MVAR) | Incremental bid ($/MW^2h) | Decremental bid ($/MW^2h) |
| 1 | 199.13 | 10.0 | 0.35 | 0.33 |
| 2 | 35.51 | 10.0 | 0.38 | 0.37 |
| 5 | 58.50 | 40.0 | 0.42 | 0.40 |
| 8 | 28.01 | 40.0 | 0.41 | 0.38 |
| 11 | 22.96 | 21.53 | 0.41 | 0.37 |
| 13 | 7.50 | 23.15 | 0.40 | 0.40 |

TABLE II. CAPABILITY CURVE DATA

| Bus No. | Pgmax (MW) | Pgz (MW) | Qgz (MVAR) | Pgz' (MW) | Qgz' (MVAR) | Pgmin (MW) | Qgmax (MVAR) | Qgmin (MVAR) |
| 1 | 360.0 | 359.86 | 10.0 | 360.0 | 0.0 | 0.0 | 40.0 | -40.0 |
| 2 | 140.0 | 139.64 | 10.0 | 139.64 | -10.0 | 0.0 | 40.0 | -40.0 |
| 5 | 100.0 | 98.87 | 15.0 | 98.87 | -15.0 | 0.0 | 40.0 | -40.0 |
| 8 | 100.0 | 98.87 | 15.0 | 98.87 | -15.0 | 0.0 | 40.0 | -40.0 |
| 11 | 100.0 | 99.50 | 10.0 | 100.0 | 0.0 | 0.0 | 24.0 | -6.0 |
| 13 | 100.0 | 99.50 | 10.0 | 100.0 | 0.0 | 0.0 | 24.0 | -6.0 |
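The capability-curve constraints (10)-(12) can be tested pointwise. The sketch below assumes, as in the reading of Table II above, that line AB runs from (Pg_min, Qg_max) to (Pg_z, Qg_z) and line CD from (Pg_min, Qg_min) to (Pg_z', Qg_z'); it is an illustration, not the AMPL model.

```python
# Feasibility check for one generator under eqs. (10)-(12), using Table II data.
def within_capability(pg, qg, row):
    armature = pg**2 + qg**2 <= row["pg_max"]**2                               # eq. (10)
    field = qg <= row["qg_max"] - (row["qg_max"] - row["qg_z"]) \
        * (pg - row["pg_min"]) / (row["pg_z"] - row["pg_min"])                 # eq. (11)
    under = qg >= row["qg_min"] + (row["qg_zp"] - row["qg_min"]) \
        * (pg - row["pg_min"]) / (row["pg_zp"] - row["pg_min"])                # eq. (12)
    return armature and field and under

bus2 = dict(pg_max=140.0, pg_z=139.64, qg_z=10.0, pg_zp=139.64, qg_zp=-10.0,
            pg_min=0.0, qg_max=40.0, qg_min=-40.0)
print(within_capability(35.51, 32.37, bus2))   # True: point sits on the field-heating line
print(within_capability(35.51, 34.83, bus2))   # False: hence the 11.45 MW redispatch at bus 2
```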

The results obtained for congestion management, including the lost opportunity cost for the procurement of reactive power, are given in Table III. It is clear from the results that, due to the congestion in line 2 (1-3), the power generation at bus 1 has been reduced and the power demand at the system buses has to be met by power flows through alternative routes. That is why the power generation at buses 11 and 13 increases significantly. At bus 2, the scheduled real power generation was 35.51 MW, which corresponds to a maximum reactive power generation of 32.37 MVAR, as determined from the capability curve data for bus 2 in Table II. However, the reactive power generation required at bus 2 for congestion management is 34.83 MVAR. In order to produce this additional reactive power, the generator has to reduce its real power generation by 11.45 MW.
TABLE III. CONGESTION MANAGEMENT RESULTS

| Bus No. | dPg_u (MW) | dPg_d (MW) | Qg (MVAR) | C_up / C_dn ($/h) | LOC ($/h) |
| 1 | 0.0 | 21.99 | 10.0 | 159.57 | 0.0 |
| 2 | 0.0 | 11.45 | 34.83 | 48.51 | 48.51 |
| 5 | 0.26 | 0.0 | 25.14 | 0.03 | 0.0 |
| 8 | 3.41 | 0.0 | 32.06 | 4.77 | 0.0 |
| 11 | 10.91 | 0.0 | 19.23 | 48.80 | 0.0 |
| 13 | 17.76 | 0.0 | 20.45 | 126.17 | 0.0 |
| Total congestion management cost | | | | 436.36 $/h | |

Figure 2. Improvement in Bus Voltage Profile after Congestion Management (bus voltages in p.u. for buses 1-30, without and with congestion management).

Hence, the generator at bus 2 is entitled to receive LOC. The total congestion management cost (including the LOC payment of 48.51 $/h) amounts to 436.36 $/h, as given in Table III. After rescheduling, the flow in the congested line is restricted to its maximum limit. Figure 2 shows the comparison of the bus voltage profiles without and with congestion management. It is clear from Figure 2 that without congestion management, if the day-ahead schedule were implemented, the bus voltages at certain buses could fall below the nominal value under the assumed contingency conditions. With the application of the proposed methodology, the bus voltages are well within the nominal limits after congestion management, as shown in Figure 2.
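As a quick sanity check (not part of the paper), the cost column of Table III can be reproduced from the Table I bids if they are read as quadratic coefficients in $/MW^2h:

```python
# Cross-check of the Table III cost column from the Table I bids (assumed quadratic).
incr = {1: 0.35, 2: 0.38, 5: 0.42, 8: 0.41, 11: 0.41, 13: 0.40}
decr = {1: 0.33, 2: 0.37, 5: 0.40, 8: 0.38, 11: 0.37, 13: 0.40}
dp_up = {1: 0.0, 2: 0.0, 5: 0.26, 8: 3.41, 11: 10.91, 13: 17.76}   # MW, Table III
dp_dn = {1: 21.99, 2: 11.45, 5: 0.0, 8: 0.0, 11: 0.0, 13: 0.0}     # MW, Table III

resched = sum(incr[b] * dp_up[b] ** 2 + decr[b] * dp_dn[b] ** 2 for b in incr)
loc_bus2 = 0.37 * 11.45 ** 2          # lost opportunity cost of bus 2, eq. (2)
print(round(resched + loc_bus2, 2))   # -> 436.36, matching the Table III total ($/h)
```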

VI. CONCLUSION

A combined methodology for reactive power and transmission congestion management in a deregulated environment has been presented in this paper. The reactive power management cost has been obtained from the lost real power sales of GenCos supplying reactive power. The management of transmission congestion and reactive power is achieved by rescheduling real and reactive power generation so as to minimize the congestion management and lost opportunity costs. It has been demonstrated with a case study on the IEEE 30-bus system that the power flow over the congested line as well as the bus voltages are brought within the nominal limits after the application of the proposed methodology.

ACKNOWLEDGMENT

Kanwardeep Singh is grateful to Guru Nanak Dev Engineering College, Ludhiana and A.I.C.T.E., New Delhi for sponsoring him to pursue a Ph.D. at IIT Roorkee, India, under the Q.I.P. Scheme.

REFERENCES

[1] Y. Song and X. Wang, Operation of Market-Oriented Power Systems, London, UK: Springer, 2003.
[2] D. Shirmohammadi et al., "Transmission dispatch and congestion management in the emerging energy market structures," IEEE Trans. Power Syst., vol. 13, Nov. 1998, pp. 1466-1474.
[3] R. D. Christie, B. F. Wollenberg, and I. Wangensteen, "Transmission management in the deregulated environment," Proc. IEEE, vol. 88, Feb. 2000, pp. 170-195.
[4] A. Kumar, S. C. Srivastava, and S. N. Singh, "Congestion management in competitive power market: a bibliographical survey," Elect. Power Syst. Res., vol. 76, Jul. 2005, pp. 153-164.
[5] E. Bompard, P. Correia, G. Gross, and M. Amelin, "Congestion management schemes: a comparative analysis under a unified framework," IEEE Trans. Power Syst., vol. 18, Feb. 2003, pp. 346-352.
[6] H. Singh, S. Hao, and A. Papalexopoulos, "Transmission congestion management in competitive electricity markets," IEEE Trans. Power Syst., vol. 13, May 1998, pp. 672-680.
[7] R. S. Fang and A. K. David, "Transmission congestion management in an electricity market," IEEE Trans. Power Syst., vol. 14, Aug. 1999, pp. 877-883.
[8] S. Tao and G. Gross, "A congestion-management allocation mechanism for multiple transaction networks," IEEE Trans. Power Syst., vol. 17, Aug. 2002, pp. 826-833.
[9] A. Conejo, F. Milano, and R. Bertrand, "Congestion management ensuring voltage stability," IEEE Trans. Power Syst., vol. 21, Feb. 2006, pp. 357-364.
[10] T. W. Gedra, "On transmission congestion and pricing," IEEE Trans. Power Syst., vol. 14, Feb. 1999, pp. 241-248.
[11] G. Yesuratnam and D. Thukaram, "Congestion management in open access based on relative electrical distances using voltage stability criteria," Elect. Power Syst. Res., vol. 77, 2007, pp. 1608-1618.
[12] E. Hirst and B. Kirby, Ancillary Services, Oak Ridge National Laboratory, Technical Report ORNL/CON 310, February 1996.
[13] I. El-Samahy, K. Bhattacharya, and C. A. Canizares, "A unified framework for reactive power management in deregulated electricity markets," in Proc. IEEE Power Eng. Soc. Power Systems Conf. Expo. (PSCE), Atlanta, GA, Oct. 2006.
[14] S. Hao, "A reactive power management proposal for transmission operators," IEEE Trans. Power Syst., vol. 18, Nov. 2003, pp. 1374-1381.
[15] K. Bhattacharya and J. Zhong, "Reactive power as an ancillary service," IEEE Trans. Power Syst., vol. 16, Nov. 2001, pp. 294-300.
[16] M. J. Rider and V. L. Paucar, "Application of a nonlinear reactive power pricing model for competitive electric markets," IEE Proc. Gener. Transm. Distrib., vol. 151, May 2004, pp. 407-414.
[17] W. Xu, Y. Zhang, L. C. P. Da Silva, and P. Kundur, "Assessing the value of generator reactive power support for transmission access," IEE Proc. Gener. Transm. Distrib., vol. 148, July 2001, pp. 337-342.
[18] W. Xu, Y. Zhang, L. C. P. Da Silva, P. Kundur, and A. A. Warrack, "Valuation of dynamic reactive power support services for transmission access," IEEE Trans. Power Syst., vol. 16, Nov. 2001, pp. 719-728.
[19] A. Kumar, S. C. Srivastava, and S. N. Singh, "A zonal congestion management approach using real and reactive power rescheduling," IEEE Trans. Power Syst., vol. 19, Feb. 2004, pp. 554-562.
[20] R. H. Byrd, J. Nocedal, and R. A. Waltz, "KNITRO: an integrated package for nonlinear optimization," in Large-Scale Nonlinear Optimization, G. Pillo and M. Roma, Eds. Springer, 2006, pp. 35-59.

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Modelling and Simulation of face from side-view for Biometrical Authentication using Hybrid Approach
Rohit Raja
Department of Computer Science & Engineering Shri Shankaracharya College of Engineering & Technology, Bhilai, India rohitraja4u@gmail.com

Raj Kumar Patra


Department of Computer Science & Engineering Shri Shankaracharya College of Engineering & Technology, Bhilai, India patra.rajkumar@gmail.com

Tilendra Shishir Sinha


Department of Computer Science & Engineering Shri Shankaracharya College of Engineering & Technology, Bhilai, India tssinha1968@gmail.com
Abstract In the recent times frontal face images have been used for the biometrical authentication. The present paper incorporates the frontal face images only for the formation of corpus. But for the biometrical authentication, side-view of the face has been analysed using hybrid approach, which means the combination of Artificial Neural Network (ANN) and Genetic Algorithm (GA). The work has been carried out in two phases. In the first phase, formation of the FACE_MODEL as a corpus using frontal face images of the different subjects have been done. In the second phase, the model or the corpus has been used at the back-end for Biometrical Authentication using a proposed algorithm called HABA (Hybrid Approach Biometrical Authentication). The authentication process has been carried out with the help of a known ninety-degree oriented image. Hence relevant geometrical features with reducing orientation in image from ninety-degree to lower degree with five-degree change have been matched with the corpus. The classification process of acceptance and rejection has been done after best-fit matching. The proposed algorithm has been tested with 10 subjects of varying age groups. The result has been found very satisfactory with the data sets and may be helpful in bridging the gap between computer and authorized subject for more system security. Keywords Artificial Neural Network (ANN), Genetic Algorithm (GA), Abnormal Face Recognition (AFR), Principal Component Analysis (PCA), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) Modelling phase, a knowledge-based model or corpus has been framed using ANN by extracting relevant geometrical features of the known frontal image. In the Understanding phase, this knowledge-based model or corpus has been utilized for Biometrical Authentication of Face using proposed algorithm call HABA (Hybrid Approach Biometrical Authentication). Many researchers like Zhao and Chellappa [5] proposed a Shape from shading (SFS) method for pre-processing of 2D images. Hu et al. [6] have modified this work by proposing 3D model approach and creating synthetic images under different poses. Lee et al. [7] has proposed a similar idea and given a method where edge model and color region are combined for face recognition after synthetic image were created by a deformable 3D model. Xu et al. [8] proposed a surface based approach that uses Gaussian moments. Chua et al. [9]-[10] introduced point signature to describe the 3D landmark that is used to describe the position of forehead, nose and eyes. From the literature survey it has been observed that still there is a scope in face recognition using ANN and GA (Hybrid approach). The paper has been organized in the following manner, section II proposes the problem formulation and solution methodology, section III proposes the results and discussions, section IV gives the conclusions and further scope of the work and final section gives all the references made in completing the present work. .

II. PROBLEM FORMULATION AND SOLUTION METHODOLOGY


In the present paper the problem has been formulated in five stages. The first stage of the problem is the detection of noises and its removal from the image for better performance of the system. The second stage of the problem is the detection of boundaries or contours of the image. The third stage of the problem is the selection and extraction of relevant features from the enhanced and segmented image. The fourth stage of the problem is framing of knowledge-based model as corpus. The fifth stage of the problem is the recognition problem, given a known face image; the goal is to formulate an algorithm that matches the best pattern stored in the knowledge-based model for classification. The objective of this paper is to investigate and develop a new method for Biometric authentication using Hybrid Approach (ANN and GA) within a single framework. The paper aims to analyze and discuss experimentally the aforesaid problem in the subsequent subsections.

I. INTRODUCTION
BIOMETRICAL authentication plays a vital role in system security in recent times. Most of the researchers have done the work so far using frontal images of face for biometrical study. The relevant features have been extracted from the frontal images and the template matching has been employed for face recognition [1][4]. Little work has been done in the area of face recognition by extracting features from the side-view of the face. When frontal images are tested for its recognition with minimum orientation in the face or the image boundaries, the performance of the recognition system degrades. In the present paper, side-view of the face has been considered with 90-degree orientation. After enhancement and segmentation of the image relevant geometrical features have been extracted. These features have been matched using an evolutionary algorithm called Genetic Algorithm. To throw more light the present work has been carried out in two phases: Modeling phase and Understanding phase. In the

A. Mathematical Preliminaries


Based on the assumption that the original image is additive with noise. To compute the approximate shape of the wavelet (i.e., Any real valued function of time possessing some structure), in a noisy image and also to estimate its time of occurrence, two methods are available, first one is a simple structural analysis and the second one is the template matching technique. For the detection of wavelets in noisy image, assume a class of wavelets, Si(t), I = 0,...N-1, all having some common structure. Based on this assumption that noise is additive, then the corrupted image has been modeled by the equation -: X(m,n)=i(m,n)+Gd(m,n) (1) where i(m,n) is the clean image, d(m,n) is the noise and G is the term for signal-to-noise ratio control. To de-noise this image, wavelet transform has been applied. Let the mother wavelet or basic wavelet be (t), which yields to, (2) (t)=exp(j2ftt2/2) Further as per the definition of Continuous Wavelet transform CWT (a,), the relation yields to, CWT (a,) = (1/ a ) The parameters obtained in equation (3) has been discretized, using Discrete Parameter Wavelet transform, DPWT (m, n), by substituting a = form results to an equation (4), DPWT (m, n) = 2-m/2 k l x(k , l ) (2-mk n)

Due to squaring, the dynamic range of the values in the spectrum has been found very large. Thus to normalize this, logarithmic transformation has been applied in equation (6). Thus it, yields.

I (u , v )

normalize

= log( 1 + I ( u , v ) )

(9)

The expectation value of the enhanced image has been computed and it yields to the relation, E[I(u,v)]= 1 MN
M 1 u=0

N 1 v=0

I (u , v )

(10)

where E denotes expectation. The variance of the enhanced image has been computed by using the relation, Var[I(u,v)] = E{[I(u,v) I(u,v)]2} (11)

The auto-covariance of an enhanced image has also been computed using the relation, Cxx(u,v)=E{[I(u,v)I(u,v)][I(u,v)I(u,v)]} (12)

x(t ) {(t-)/a}dt
a
m 0

(3)

Then the power spectrum density has been computed from equation (12), PE ( f ) =
M 1 N 1 m=0 n=0

m 0

, = n 0

. Thus equation (3) in discrete (4)

xx

(m, n)W(m, n) exp( j2f (m+ n))

(13)

where m and n are the integers, a0 and 0 are the sampling intervals for a and , x(k,l) is the enhanced image. The wavelet coefficient has been computed from equation (4) by substituting a0 = 2 and 0 = 1. Further the enhanced image has been sampled at regular time interval T to produce a sample sequence {i (mT, nT)}, for m = 0,1,2, M-1 and n=0,1,2,N-1 of size M x N image. After employing Discrete Fourier Transformation (DFT) method, it yields to the equation of the form, I(u,v)=
M 1 N 1 m=0 n=0

where Cxx(m,n) is the auto-covariance function with m and n samples and W(m,n) is the Blackman window function with m and n samples. The data compression has been performed using discrete cosine transform (DCT), given below, DCT(u,v)=
M 1 N 1 m=0 n =0

I (m, n) cos

2T (m + n) MN

(14)

i(m,n)exp ( j2(um/ M + vn/ N)


for u=0,1,2,,M-1 and v = 0, 1, 2, ..,N-1

(5)

Further for the computation of principal components (i.e., Eigenvalues and the corresponding Eigenvectors), a pattern vector p n , which can be represented by another vector q n of lower dimension, has been formulated using (5) by linear transformation. Thus p n = [M

qn

(15) for m= 0 to M-1 and n = 0 to N-1.

In order to compute the magnitude and power spectrum along with phase angle, conversion from time domain to frequency domain has been done. Mathematically, this can be formulated as, Let R(u,v) and A(u,v) represent the real and imaginary components of I(u,v) respectively. The Fourier or Magnitude spectrum yields.

where [M ] = and

[I (m, n)]

qn

= min([M]), such that

qn > 0

Taking the covariance of equation (15), it yields, the corresponding Eigenvector, given in equation (16),

I (u , v ) = R 2 (u , v ) + A 2 (u , v )

1/ 2

(6)

P = cov( p n )
and thus P . M i
= i . M i

(16) (17)

The phase angle of the transform is defined as,

( u , v ) = tan

A (u , v ) R (u , v )

(7)

where I are the corresponding Eigenvalues. For the detection of boundaries in an image mathematically, let pix at coordinates (x,y) has two horizontal and two vertical neighbors, whose coordinates are (x+1,y), (x-1,y), (x,y+1) and (x,y-1). The arrangement has been shown in figure4-1. This forms a set of 4-neighbors of pix, denoted as N4(pix). The four diagonal neighbors of pix have coordinates (x+1,y+1),(x+1,y-1),(x-1,y+1) and (x-1,y-1), denoted as ND(pix). The union of N4(pix) and ND(pix), yields 8-neighbors of pix. Thus,

The power spectrum is defined as the square of the magnitude spectrum. Thus squaring equation (6), it yields,

P(u, v) = I (u, v) = R2 (u, v) + A2 (u, v)

(8)


$$N_8(pix) = N_4(pix) \cup N_D(pix) \qquad (18)$$
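A toy illustration of the neighbourhood definitions above, using an 8-connected flood fill to label connected components; this is a generic sketch, not the authors' implementation.

```python
# Minimal 8-connected component labelling built on the N4/ND neighbourhoods of eq. (18).
from collections import deque
import numpy as np

N4 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
ND = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
N8 = N4 + ND                                  # eq. (18): N8 = N4 union ND

def label_components(binary_img):
    """Label 8-connected foreground regions of a 0/1 image."""
    img = np.asarray(binary_img)
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(img)):
        if labels[y, x]:
            continue
        current += 1
        queue = deque([(y, x)])
        labels[y, x] = current
        while queue:                          # breadth-first flood fill
            cy, cx = queue.popleft()
            for dy, dx in N8:
                ny, nx = cy + dy, cx + dx
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and img[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

toy = np.array([[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, 0, 1]])
print(label_components(toy)[1])               # -> 2 connected components
```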

Step2. Set the frame counter, fcount = 90 Step3. Set the flag for best fit as fbest = 1 Step4. Do while fbest <> 0 Read the face_image[fcount] Enhance the image using DCT Compute the connected components Locate ROI and Crop the image Compute the relevant geometrical features Perform the best-fit matching using Genetic algorithm Compute the efficiency of matching of parameters If true then fbest = 0 and display ACCEPT else display REJECT

A path between pixels pix1 and pixn is a sequence of pixels pix1, pix2, pix3, ..., pixn-1, pixn such that pixk is adjacent to pixk+1 for 1 <= k < n. A connected component is thus defined by the path formed from a set of pixels, and it depends upon the adjacency of the pixels along that path. In order to compute the orientation using the reducing strategy, the phase angle must first be calculated for the original image. Hence, considering equation (7), the following modelling is obtained. Let Ik be the side-view of an image with orientation k; if k = 90, then I90 is the image with the actual side-view. Let the real and imaginary components of this oriented image be Rk and Ak. For an orientation of k degrees,

$$\left|I_k\right| = \left[R_k^2 + A_k^2\right]^{1/2} \qquad (19)$$

For k = 90 degrees orientation,

$$\left|I_{90}\right| = \left[R_{90}^2 + A_{90}^2\right]^{1/2} \qquad (20)$$

End do

Thus the phase angle of the image with orientation k = 90 is

$$\phi_k = \tan^{-1}\!\left(\frac{A_k}{R_k}\right) \qquad (21)$$

III. EXPERIMENTAL RESULTS AND DISCUSSIONS


In order to form a FACE_MODEL, a known frontal face image as depicted in Fig. 1 has been analyzed for the extraction of relevant features. Fig. 2 shows the enhanced image, the histogram has been plotted in Fig. 3. The relevant ROI of different face parts have been shown in Fig. 4. The methods that have been applied for the above analysis are connected component, DCT and ANN.

If k = k - 5 (applying the reducing strategy), equation (21) yields

$$\phi_{k-5} = \tan^{-1}\!\left(\frac{A_{k-5}}{R_{k-5}}\right) \qquad (22)$$

From equations (21) and (22) there will be a lot of variation in the output. Hence it has been normalized by applying a logarithm to both (21) and (22):

$$\phi_{normalize} = \log\left(1 + (\phi_k - \phi_{k-5})\right) \qquad (23)$$

Taking the covariance of (23) yields the perfect orientation between the two side-views of the image, i.e. I90 and I85:

$$I_{perfect\ orientation} = \mathrm{Cov}\left(\phi_{normalize}\right) \qquad (24)$$
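A minimal sketch of the phase-angle comparison of eqs. (21)-(24), using the 2-D FFT. The random arrays stand in for the 90-degree and 85-degree views, and the absolute value inside the logarithm is an added safeguard that is not present in eq. (23).

```python
# Phase-angle comparison sketch for two orientations of a face image.
import numpy as np

def phase_angle(img):
    """Phase angle of the 2-D Fourier transform, as in eq. (21)."""
    spectrum = np.fft.fft2(img)
    return np.arctan2(spectrum.imag, spectrum.real)

def orientation_similarity(img_k, img_k_minus_5):
    """Covariance of the log-normalized phase difference, eqs. (23)-(24)."""
    diff = phase_angle(img_k) - phase_angle(img_k_minus_5)
    normalized = np.log1p(np.abs(diff))        # log(1 + |phi_k - phi_{k-5}|)
    return float(np.cov(normalized.ravel()))   # scalar variance-style score

rng = np.random.default_rng(0)
face_90 = rng.random((64, 64))                            # placeholder 90-degree view
face_85 = 0.9 * face_90 + 0.1 * rng.random((64, 64))      # placeholder 85-degree view
print(orientation_similarity(face_90, face_85))
```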

The distance between the connected components have been computed using Euclidean distance method. A perfect matching has been done with the corpus with best-fit measures using genetic algorithm. If the matching fails, then the orientation is reduced further by 50, that is k = k-5 and the process is repeated till k = 450. This model has been optimized for the best match of features using Genetic Algorithm (GA) for Recognition of face from side. GA has been adopted because it is the best search algorithm based on the mechanics of natural selection, crossover, reproduction and mutation. In other words, GAs are theoretically and computationally simple on fitness values. The crossover operation has been performed by combining the information of the selected chromosomes (FACE features) and generates the offspring. The mutation and reproduction operation has been utilized by modifying the offspring values after selection and crossover for the optimal solution.
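The GA-based best-fit matching described above can be illustrated with a toy run in which the chromosomes are candidate feature vectors and the fitness is the negative Euclidean distance to a stored template; the population size, rates and template are invented for the example and are not the paper's settings.

```python
# Toy GA matcher: selection, crossover and mutation toward a stored feature template.
import numpy as np

rng = np.random.default_rng(42)
template = rng.random(8)                       # stand-in for stored FACE features

def fitness(pop):
    return -np.linalg.norm(pop - template, axis=1)

def ga_match(pop_size=40, gens=100, crossover_rate=0.9, mutation_rate=0.05):
    pop = rng.random((pop_size, template.size))
    for _ in range(gens):
        fit = fitness(pop)
        # Tournament selection of parents.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        children = parents.copy()
        # Single-point crossover on consecutive pairs.
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                cut = rng.integers(1, template.size)
                children[i, cut:], children[i + 1, cut:] = \
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        # Random-reset mutation.
        mask = rng.random(children.shape) < mutation_rate
        children[mask] = rng.random(np.count_nonzero(mask))
        pop = children
    best = pop[np.argmax(fitness(pop))]
    return best, np.linalg.norm(best - template)

_, dist = ga_match()
print(f"best-fit distance to template: {dist:.4f}")
```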

Figure 1: (a) Original Image (b) Enhanced / Noise-Free Image (c) Segmented Image with connected component (d) Difference of CC and SI

B. Solution Methodology with proposed algorithm The proposed algorithm called HABA (Hybrid Approach Biometrical Authentication) has been depicted below,
Step1. Read the unknown 90-degree oriented face image.

Figure 2: (a) Original Image (b) Motion Blurred (c) Blurred Image (d) Sharpened Image


Figure 7: Graphical representation of FACE features extracted with different orientations of subject #1 Figure 3: Shows the histogram of Original Image and Sharpened Image

From Fig. 7, it has been observed that during 90-degree orientation of the subject, there are lots of variations. But once switching with reductions in orientation by five-degree, there are similar correlations with the feature vectors stored in the corpus. The plot shows almost linear (shown in horizontal lines) when maximum parameters have been matched with best-fit rate.
Figure. 4: (a) Original image (b) ROI Eye Image (c) ROI Nose Image

IV. CONCLUSIONS AND FURTHER SCOPE OF THE WORK


In the present paper, the authentication process from side-view with 90-degree orientation has been performed using a proposed algorithm called HABA, which has been tested with the corpus developed and the result has been found very satisfactory. For further study, the proposed algorithm has to be tested with the image of the same subject by changing the getups and extracting the side-view geometrical features for biometrical authentication

Next for Biometrical authentication testing with the FACE_MODEL, a known face image has been fed as input and relevant geometrical features have been extracted. Fig 5, shows the basic measurements from side-view with 90-degree orientation.

REFERENCES
[1] Kover, T., Vigh, D and Vamossy, Z., MYRA- Face Detection and Face Recognition system, in the proceedings of Fourth International Symposium on Applied Machine Intelligence, SAMI 2006, Herlany, Slovakia, pp. 255-265, 2006. [2] Rein-Lien Hsu, Mohammad Abdel-Mottaleb and Anil K. Jain, Face detection in color images, in IEEE transactions of Pattern Analysis and Machine Intelligence, Vol. 24, No. 5, pp. 696-706, May 2002. [3] Zhang, J., Yan, Y., and Lades, M., Face recognition: eigenface, elastic matching and neural nets, in the proceedings of IEEE, Vol. 85, No. 9, pp. 1493-1435, September 1997. [4] Turk, M. A., and Pentland, A. P., Face recognition using eigenfaces, in the proceedings of IEEE Computer society conference on computer vision and pattern recognition, pp. 586-591, Maui, Hawaii 1991. [5] Zhao, W.Y., Chellapa ,R., SFS Based View Synthesis for Robust Face Recognition in the proceedings of IEEE international Automatic Face and Gesture recognition 2000 [6] Hu., Y., Jaing, D., Yan, S., Zhang, L., and Zhang, H., Automatic 3d reconstruction for face recognition, in the proceedings of IEEE International Conferences on Automatic Face and Gesture Recognition 2000. [7] Lee, C.H., Park, S.W. , Chang , W., Park, J.W. Improving the performance of Multiclass SVMs in face recognition with nearest neighbours rule in the proceedings of IEEE International conference on tools with Artificial Intelligence 2000. [8] Xu, C., Wang, Y., Tan, T., Quan, L., Automatic 3D Face recognition combining global geometric features with local shape variation information, in the proceedings of IEEE International Conference for Automatic Face and Gesture Recognition 2004. [9] Chua, C.S., Jarvia, R., Point Signature : A new Representation for 3D Object Recognition, International Journal on Computer Vision vol 25, 1997. [10] Chua, C.S., Han, F, Ho, Y.K., 3D Human face recognition using point signature, in the proceedings of IEEE International Conference on Automatic Face and Gesture Recognition 2000.

Figure 5: Geometrical face feature measurement with 90-degree orientation. Fig. 6 shows the basic geometrical measurements from the side-view with eighty-five-degree orientation.

Figure 6: Geometrical face feature measurement with eighty-five degree orientation

Table 1, depicted below describes the side FACE features extracted at different orientations with minimum variations in the output. This has been plotted in Fig. 7, below, for the graphical analysis.
TABLE 1. SIDE FACE GEOMETRICAL FEATURES EXTRACTED WITH DIFFERENT ORIENTATIONS OF SUBJECT #1


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Categorization and Classification of Directional and Non-Directional Texture Groups using Multiresolution Transforms
S.Arivazhagan1, L.Ganesan2, S.Selva Nidhyanandhan1, R.Newlin Shebiah1
Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi - 626 005, Tamilnadu, India. 2 Department of Computer Science and Engineering, Alagappa Chettiar College of Engineering and Technology, Karaikudi -623 004, Tamilnadu, India.
Abstract - The directionality of a texture is one of its important visual characteristics. The goal of texture categorization is often to group texture in to different categories, so that, the texture analysis methods try to describe the properties of the textures in a proper way. In this paper the texture images are categorized in to directional vs. non-directional texture images using the correlation sequence derived from the Wavelet packet subbands. Then classification of texture groups is done based on the statistical and co-occurrence features derived from transformed sub bands. Discrete wavelet transform, and Curvelet transform are the two multiresolution transformations used for classification. Experimental evaluation is done using the Brodatz database. The transform that is best suitable for the particular texture group has been identified. Keywords: Texture; Discrete Wavelet Transform; Wavelet Packet Transform; Ridgelet Transform; Curvelet Transform; Statistical; Co-occurrence features number of directional elements, independent of scale. Curvelets and ridgelets [7-10] take the form of basis elements which exhibit high directional sensitivity and are highly anisotropic. Texture classification is done from the statistical and co-occurrence features extracted from the image decomposition done by ridgelet transform and curvelet transform[11,12]. This paper has been organized as follows: In section 2, the Multi-resolution transforms, which includes Discrete Wavelet Transform and Curvelet Transform are explained. In section 3, texture categorization methodology is explained. In section 4, the feature extraction and the texture classification system are discussed. The experimental results and discussion are given in section 5. Finally, concluding remarks are given in section 6.
I. INTRODUCTION

Texture is an important spatial cue in a wide range of natural images, and in fact serves as a reasonable and reliable tool for the identification or classification of image objects or regions. A region in an image has a constant texture if a set of local statistics or other local properties of the picture function are constant, slowly varying or approximately periodic [1]. Texture consists of texture primitives called texels. A texel contains several pixels, whose placement may be periodic, quasi-periodic or random. The features used to make machine recognition possible are found in the tone and structure of the texture. Tone is based mostly on pixel intensity properties in the primitives, while structure depends on the spatial relationship between primitives. Analyzing images with textured regions often leads to a qualitative distinction between classes of images: micro vs. macro textures, periodic vs. non-periodic textures, directional vs. non-directional textures and deterministic vs. random textures [2]. The directionality of a texture indicates how the textural patterns are repeated in the texture, and it has often been used as a measure for texture discrimination at the structural level. In the last decade, wavelet theory has been widely used for texture classification purposes. Here, the image is decomposed using the wavelet transform, and statistics such as the mean and standard deviation are derived from the decomposed sub-bands and used as features for classification [3-5]. Apart from these statistical features, co-occurrence features are extracted from the wavelet-decomposed sub-bands in order to increase the correct classification rate [6]. Wavelets rely on a dictionary of roughly isotropic elements occurring at all scales and locations, do not describe highly anisotropic elements well, and contain only a fixed number of directional elements, independent of scale. Curvelets and ridgelets [7-10] take the form of basis elements which exhibit high directional sensitivity and are highly anisotropic. Texture classification is done from the statistical and co-occurrence features extracted from the image decomposition obtained by the ridgelet transform and the curvelet transform [11, 12]. This paper has been organized as follows: in section 2, the multiresolution transforms, which include the Discrete Wavelet Transform and the Curvelet Transform, are explained. In section 3, the texture categorization methodology is explained. In section 4, the feature extraction and the texture classification system are discussed. The experimental results and discussion are given in section 5. Finally, concluding remarks are given in section 6.

II. MULTIRESOLUTION TRANSFORMS

A. Wavelet Transform

The Discrete Wavelet Transform [13-15] provides a fast, local, sparse, multiresolution analysis of real-world signals and images. Wavelets are a special kind of function which exhibit oscillatory behaviour for a short period of time and then die out. Unlike Fourier series, in wavelets a single function and its dilations and translations are used to generate a set of orthonormal basis functions to represent a signal. The discrete wavelet transform used here is identical to a hierarchical sub-band system, where the sub-bands are logarithmically spaced in frequency and represent an octave-band decomposition. The decomposition results in a two-dimensional array of coefficients in four bands, each labelled LL (low-low), LH (low-high), HL (high-low) and HH (high-high). The LL band can be decomposed once again in the same manner to obtain the second level of DWT-decomposed sub-bands. The one-level and two-level DWT-decomposed sub-bands are shown in Fig. 1(a) and 1(b) respectively.
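A numpy-only sketch of the one-level decomposition of Fig. 1 together with the mean/standard-deviation sub-band features mentioned above. The Haar filters and the LH/HL naming convention are one common choice (not necessarily the wavelet used in the paper), the code assumes even image dimensions, and the random array stands in for a Brodatz texture tile.

```python
# One-level Haar DWT producing LL/LH/HL/HH bands plus simple sub-band statistics.
import numpy as np

def haar_dwt2(img):
    img = np.asarray(img, dtype=float)
    # 1-D Haar averaging/differencing along rows, then along columns.
    lo_r = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi_r = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0     # low-low (approximation)
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0     # low-high detail
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0     # high-low detail
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0     # high-high detail
    return ll, lh, hl, hh

def subband_features(img):
    """Mean and standard deviation of each one-level sub-band."""
    return [stat for band in haar_dwt2(img) for stat in (band.mean(), band.std())]

texture = np.random.default_rng(0).random((128, 128))   # stand-in for a Brodatz tile
print(np.round(subband_features(texture), 4))
```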

Fig 1: Wavelet decomposition of an image: (a) one level (sub-bands LL1, LH1, HL1, HH1); (b) two level (LL1 further decomposed into LL2, LH2, HL2, HH2).
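As a quick illustration of this decomposition (not part of the original paper), the sketch below uses the PyWavelets library; the Haar filter, the random stand-in image and the use of pywt are assumptions made only for illustration.

```python
import numpy as np
import pywt

# Two-level 2-D DWT: each level splits the low-pass band into LL and three detail bands.
image = np.random.rand(512, 512)                 # stand-in for a Brodatz texture

coeffs = pywt.wavedec2(image, "haar", level=2)
ll2, (lh2, hl2, hh2), (lh1, hl1, hh1) = coeffs   # detail-band naming is illustrative

for name, band in [("LL2", ll2), ("LH2", lh2), ("HL2", hl2), ("HH2", hh2),
                   ("LH1", lh1), ("HL1", hl1), ("HH1", hh1)]:
    # mean and standard deviation of each sub-band are the statistical
    # features used later in the paper
    print(name, band.shape, round(band.mean(), 4), round(band.std(), 4))
```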


B. Curvelet Transform
Candes and Donoho developed a new multiscale transform called the Curvelet Transform [9-10], with strong directional characteristics. Curvelets were designed to represent edges and other singularities along curves much more efficiently than traditional transforms, i.e. using many fewer coefficients for a given reconstruction accuracy. Unlike the wavelet transform, the curvelet pyramid contains elements with a very high degree of directional specificity. In addition, the Curvelet Transform is based on an anisotropic scaling principle, which is quite different from the isotropic scaling of wavelets. The Curvelet Transform overcomes a difficulty of the Ridgelet Transform: the Ridgelet Transform can handle straight-line singularities but fails on curved singularities, which are common in images. However, at sufficiently fine scales a curved edge is almost straight, so curved edges can be captured by deploying ridgelets in a localized manner at sufficiently fine scales.

Implementation of the Curvelet Transform involves the following steps: (i) sub-band decomposition, (ii) spatial windowing, (iii) renormalization and (iv) ridgelet analysis.

Sub-band Decomposition: The image is first decomposed into log2(M) wavelet sub-bands (M is the size of the image), and curvelet sub-bands are formed by performing partial reconstruction from these wavelet sub-bands at levels j = 2s, 2s+1.

Spatial Windowing: Each sub-band array is localized into squares according to windows wQ whose width is about twice the width of the associated dyadic square.

Renormalization: The partitioning introduces redundancy, as a pixel belongs to four neighbouring blocks, so each square resulting from the previous stage is renormalized to unit scale.

Ridgelet Analysis: The Ridgelet Transform is performed on each square resulting from the previous stage.

Ridgelet Transform: Ridgelet analysis can be viewed as a form of wavelet analysis in the Radon domain. In two dimensions, points and lines are related via the Radon transform, and hence the wavelet and ridgelet transforms are linked via the Radon transform. The Radon transform is defined as

R_f(θ, t) = ∬ f(x1, x2) δ(x1 cos θ + x2 sin θ − t) dx1 dx2        (1)

where δ is the Dirac delta function. A discrete Ridgelet Transform can be obtained via a discrete Radon transform. The finite Radon transform is defined as the summation of image pixels over a certain set of lines. To implement it for digital data, the following steps are used.

2D FFT: Compute the two-dimensional Fast Fourier Transform F(u,v) of the input image f(x,y).

Cartesian to polar conversion: Using an interpolation scheme, substitute the sampled values of the Fourier transform obtained on the square lattice with sampled values on a polar lattice.

1D IFFT: Compute the one-dimensional inverse Fast Fourier Transform along each radial line, i.e. for each value of the angular parameter.

1D Wavelet Transform: Finally, a one-dimensional non-orthogonal wavelet transform is applied along each of the resulting lines.

When the Ridgelet Transform is applied to an image of size n×n, it produces an output array of ridgelet coefficients of size 2n×n. This increase in size is due to the Cartesian-to-polar conversion, which uses 2n radial lines.
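The four Radon/ridgelet steps above can be made concrete with a rough sketch (not from the paper) built on numpy, scipy and PyWavelets; the function name, the db1 wavelet, the bilinear interpolation and the neglect of boundary effects are all illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import map_coordinates

def ridgelet_sketch(img, n_angles=None, wavelet="db1"):
    """2-D FFT -> polar resampling -> 1-D inverse FFT along each radial line
    (projection-slice theorem gives Radon projections) -> 1-D wavelet transform."""
    n = img.shape[0]
    n_angles = n_angles or 2 * n                       # roughly 2n radial lines
    F = np.fft.fftshift(np.fft.fft2(img))              # step 1: 2-D FFT
    c = n // 2
    radii = np.arange(-c, c)
    coeffs = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        # step 2: sample the square-lattice spectrum along a radial line
        rows = c + radii * np.sin(theta)
        cols = c + radii * np.cos(theta)
        slice_re = map_coordinates(F.real, [rows, cols], order=1)
        slice_im = map_coordinates(F.imag, [rows, cols], order=1)
        radial = slice_re + 1j * slice_im
        # step 3: 1-D inverse FFT -> Radon projection at this angle
        projection = np.fft.ifft(np.fft.ifftshift(radial)).real
        # step 4: 1-D wavelet transform along the projection
        cA, cD = pywt.dwt(projection, wavelet)
        coeffs.append(np.concatenate([cA, cD]))
    return np.array(coeffs)                            # about 2n x n coefficients

print(ridgelet_sketch(np.random.rand(64, 64)).shape)
```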
III. TEXTURE CATEGORIZATION


Directionality is indicated to be the most important dimension of human texture perception. To understand the role of the statistical properties of a texture, the vagueness of human texture perception must be translated into "hard numbers". Here an attempt is made to discriminate directional and non-directional textures based upon the wavelet packet decomposition and the correlation sequence derived from the horizontal and vertical sub-bands [16]. The block diagram is shown in Fig 2. Wavelet packet decomposition is a generalization of the classical wavelet decomposition: unlike the Discrete Wavelet Transform, which only decomposes the low-frequency components, discrete wavelet packet analysis decomposes both the low-frequency and the high-frequency components.
Fig 2: Block diagram for categorizing directional and non-directional textures: Input Texture → Normalization → Wavelet Packet Decomposition → Horizontal / Vertical Normalized Correlation (C1(i,j), C2(i,j)) → Horizontal / Vertical Correlation Sequence (Ck1(i), Ck2(i)) → Horizontal / Vertical Directionality (d1, d2) → Directionality of the image d = (d1 + d2)/2.

In this work, a two-level wavelet packet decomposition is performed, which leads to 16 sub-bands as shown in Fig 3. Before decomposition, each texture image T is normalized to zero mean and unit variance:

T_N = ( T − E(T) ) / σ(T)        (2)

where E(·) is the expected value and σ(·) is the standard deviation.


Fig 3: (a) Input image; (b) wavelet packet decomposition of the image.

The directionality function along the horizontal or vertical axis is based on the correlation between the wavelet coefficients of successive columns or rows of the horizontal and vertical sub-bands respectively. The normalized correlation Corr(i,j) between rows and columns is given by

Corr(i, j) = Σ_k d_i(k) d_j(k) / ( sqrt(Σ_k d_i(k)²) · sqrt(Σ_k d_j(k)²) )        (3)

where d_i and d_j are the sequences of wavelet coefficients within the rows (i) or columns (j) respectively. A correlation sequence, measuring the correlation between successive rows or columns of a sub-band, is then computed as

C_k(i) = Corr_k(i, i+1),  k = 1, …, K        (4)

where K is the number of horizontal or vertical sub-bands used and Corr_k denotes the normalized correlation computed within the k-th sub-band. An estimate of directionality is obtained by integrating each sequence and summing the result:

d = Σ_{k=1,2,3} Σ_i C_k(i)        (5)

The sub-bands H0, A1 and H1 are used to estimate the horizontal directivity d1, and the sub-bands V0, A2 and V2 are used to estimate the vertical directivity d2, since the sensitivity of the human visual system is highest along the horizontal and vertical directions.
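A compact sketch of this categorization step (an illustration under the reconstructed equations above, not the authors' code): the Haar wavelet, the wavelet-packet nodes standing in for H0/A1/H1 and V0/A2/V2, and the 0.5 decision threshold are all assumptions.

```python
import numpy as np
import pywt

def normalized_corr(a, b):
    # Eq. (3): normalized correlation between two coefficient sequences
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def directionality(image, wavelet="haar", threshold=0.5):
    t = (image - image.mean()) / (image.std() + 1e-12)        # Eq. (2)
    wp = pywt.WaveletPacket2D(t, wavelet=wavelet, maxlevel=2)

    def band_score(node_names, axis):
        score = 0.0
        for name in node_names:
            band = wp[name].data
            band = band if axis == "rows" else band.T
            corr = [normalized_corr(band[i], band[i + 1])      # Eq. (4)
                    for i in range(band.shape[0] - 1)]
            score += float(np.sum(corr))                       # Eq. (5): integrate + sum
        return score

    d1 = band_score(["h", "ah", "hh"], "rows")   # horizontal directivity (assumed nodes)
    d2 = band_score(["v", "av", "vv"], "cols")   # vertical directivity (assumed nodes)
    d = (d1 + d2) / 2.0
    return ("directional" if abs(d) > threshold else "non-directional"), d

print(directionality(np.random.rand(128, 128)))
```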

IV. TEXTURE CLASSIFICATION SYSTEM

The texture classification system involves two phases: training and classification. Each texture image of size 512×512 is divided into equal-sized blocks of 128×128 and 256×256, giving a total of 20 blocks. In the training phase, 25% of the blocks are decomposed using a multiresolution transform (the Wavelet Transform or the Curvelet Transform). The mean and standard deviation of the approximation and detail sub-bands of the decomposed images are then calculated as features and stored in a feature library [6]. The co-occurrence matrix is constructed from each sub-band by estimating the pairwise statistics of pixel intensity. The use of the co-occurrence matrix is based on the hypothesis that the same grey-level configuration is repeated in a texture; this pattern varies more in fine textures than in coarse textures. Further, co-occurrence features such as contrast, energy, local homogeneity, cluster shade and cluster prominence are calculated from the co-occurrence matrix C(i,j) derived from the transformed sub-bands and stored in the feature library [6]. In the classification phase, the remaining 75% of the blocks are decomposed using the DWT/Curvelet transform, a similar set of statistical and co-occurrence features is extracted, and these are compared with the corresponding feature values stored in the feature library. Classification is done using the minimum distance criterion given in Eqn. (6):

D(i) = Σ_{j=1..n} | f_j(x) − f_j(i) |        (6)

where f_j(x) is the j-th feature of the unknown texture image x, f_j(i) is the j-th feature of the known i-th texture image, and n is the number of features. The known texture image with the minimum distance from the test image gives the assigned class. The success of classification is measured using the classification gain G, calculated using Eqn. (7):

G (%) = ( C_corr / M ) × 100        (7)

where C_corr is the number of images correctly classified and M is the total number of images belonging to the particular texture group.

V. EXPERIMENTAL RESULTS AND DISCUSSION

Experiments are conducted with 112 monochrome texture images, each of size 512×512, obtained from the Brodatz database [17]. These images are categorized into directional and non-directional textures. Among the 112 images, 85 are grouped under directional textures and the remaining 27 under non-directional textures, using the procedure described in section III. Some of the sample images categorized into the directional and non-directional texture groups are shown in Fig. 4. A separate database is maintained for each texture group.

Fig. 4: (a) Directional textures; (b) non-directional textures.

Each texture image in a particular group is subdivided into sixteen 128×128 and four 256×256 non-overlapping image regions, which are used for training and classification. Table 1 shows the results of texture classification for the Brodatz database. The training and classification datasets are chosen in a non-overlapping manner: either 25% or 50% of the image regions are used for training, while the remaining regions are used for classification.
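To make the feature extraction and minimum-distance rule of Section IV concrete, here is a minimal sketch (not the authors' implementation); the Haar wavelet, the 8-level quantization, the horizontal distance-1 co-occurrence offset and the omission of cluster shade/prominence are simplifying assumptions.

```python
import numpy as np
import pywt

def cooccurrence(band, levels=8):
    """Grey-level co-occurrence matrix of horizontally adjacent pixels."""
    edges = np.linspace(band.min(), band.max() + 1e-9, levels + 1)
    q = np.digitize(band, edges[1:-1])
    C = np.zeros((levels, levels))
    np.add.at(C, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return C / max(C.sum(), 1)

def block_features(block, wavelet="haar"):
    """Mean/std plus a few co-occurrence features per DWT sub-band (Section IV)."""
    coeffs = pywt.wavedec2(block, wavelet, level=2)
    bands = [coeffs[0]] + [b for level in coeffs[1:] for b in level]
    feats = []
    for band in bands:
        C = cooccurrence(band)
        i, j = np.indices(C.shape)
        feats += [band.mean(), band.std(),
                  np.sum(C * (i - j) ** 2),          # contrast
                  np.sum(C ** 2),                    # energy
                  np.sum(C / (1.0 + (i - j) ** 2))]  # local homogeneity
    return np.array(feats)

def classify(test_block, train_blocks, train_labels):
    """Minimum-distance criterion of Eqn. (6)."""
    f = block_features(test_block)
    dists = [np.sum(np.abs(f - block_features(b))) for b in train_blocks]
    return train_labels[int(np.argmin(dists))]

blocks = [np.random.rand(128, 128) for _ in range(4)]
print(classify(blocks[0], blocks[1:], ["D9", "D12", "D15"]))
```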


TABLE I: RESULTS OF TEXTURE CLASSIFICATION FOR THE BRODATZ DATABASE

Multiresolution Transform      Training / Classification set    Classification rate (%)
                                                                Directional    Non-Directional
Discrete Wavelet Transform     25% / 75%                        87.2548        88.2716
Discrete Wavelet Transform     50% / 50%                        88.9706        89.1203
Curvelet Transform             25% / 75%                        88.1863        85.6478
Curvelet Transform             50% / 50%                        91.0294        88.1944

From Table I, it is observed that the DWT performs better than the Curvelet Transform for non-directional textures, while the Curvelet Transform gives improved classification for directional textures. This is mainly due to the strongly directional characteristics of the Curvelet Transform.

VI. CONCLUSION
From the analysis of the texture classification results obtained using the Discrete Wavelet Transform and the Curvelet Transform, it is inferred that the Discrete Wavelet Transform improves the percentage of correct classification for the non-directional texture group, while the Curvelet Transform does so for the directional texture group. Hence, no single approach performs best for all images.

ACKNOWLEDGMENT

The authors gratefully acknowledge the financial support of the Department of Science and Technology, New Delhi, for carrying out this research project. The authors also express their sincere thanks to the Management and Principal of Mepco Schlenk Engineering College, Sivakasi, for their constant encouragement and support.

REFERENCES

[1] Sklansky, J., "Image Segmentation and Feature Extraction", IEEE Transactions on Systems, Man, and Cybernetics, SMC-8, pp. 237-247, 1978.
[2] Davis, L., Johns, S., and Aggarwal, J., "Texture analysis using generalized co-occurrence matrices", IEEE Trans. PAMI, vol. 3, pp. 251-259, 1979.
[3] Smith, J. and Chang, S. F., "Transform features for texture classification and discrimination in large image databases", Proc. ICIP, vol. 3, pp. 407-411, 1994.
[4] Sebe, N. and Lew, M. S., "Wavelet based texture classification", IEEE Multimedia, pp. 947-950, 2000.
[5] DeBrunner, V. and Kadiyala, M., "Texture classification using wavelet transform", Circuits and Systems, 42nd Midwest Symposium, vol. 2, pp. 1053-1056, 2000.
[6] Arivazhagan, S. and Ganesan, L., "Texture classification using wavelet transform", Pattern Recognition Letters, vol. 24 (9-10), pp. 1513-1521, 2003.
[7] Candes, E. J., "Ridgelets: theory and applications", Ph.D. thesis, Department of Statistics, Stanford University, 1998.
[8] Candes, E. J. and Donoho, D. L., "Ridgelets: a key to higher-dimensional intermittency", Phil. Trans. R. Soc. Lond. A, pp. 2495-2509, 1999.
[9] Donoho, D. L. and Duncan, M. R., "Digital Curvelet Transform: Strategy, Implementation and Experiments", Stanford University, 1999.
[10] Starck, J. L., Candes, E., and Donoho, D. L., "The Curvelet Transform for Image Denoising", IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670-684, 2002.
[11] Arivazhagan, S., Ganesan, L., and Subash Kumar, T. G., "Texture Classification using Ridgelet Transform", Pattern Recognition Letters, vol. 27 (16), pp. 1875-1883, 2006.
[12] Arivazhagan, S., Ganesan, L., and Subash Kumar, T. G., "Texture Classification using Curvelet Transform", International Journal on Wavelets, Multiresolution and Information Processing (IJWMIP), vol. 5, no. 3, pp. 451-464, 2007.
[13] Daubechies, I., "Orthonormal bases of compactly supported wavelets", Comm. Pure Applied Math., vol. XLI, pp. 909-996, 1988.
[14] Strang, G. and Nguyen, T., Wavelets and Filter Banks, Wellesley-Cambridge Press, Wellesley, MA, 1996.
[15] Burrus, C. S., Gopinath, R. A., and Guo, H., Introduction to Wavelets and Wavelet Transforms: A Primer, Prentice Hall, 1998.
[16] Balmelli, L. and Mojsilovic, A., "Wavelet Domain Features for Texture Description, Classification and Replicability Analysis", Proc. IEEE Intl. Conf. Image Processing, no. 4, pp. 440-444, 1999.
[17] Brodatz, P., Textures: A Photographic Album for Artists and Designers, New York: Dover, 1966.



BLOCKING ARTIFACT DETECTION IN COMPRESSED IMAGES Jagroop Singh1, Sukwinder Singh2, Dilbag Singh3 and MoinUddin4 . 1 Faculty Department of Elec. & Comm. Engg., DAVIET, Jalandhar, Punjab, India 2 Faculty Department of Computer Science & Engg., UIET, Chandigarh, Punjab, India 3 Faculty Dept. of Elec. & Comm. Engg., Dr. B.R.Ambedkar N.I.T , Jalandhar, Punjab, India 4 Director, Dr. B.R.Ambedkar N.I.T , Jalandhar, Punjab, India

ABSTRACT - It is well known that low bit rate block-based discrete cosine transform coded images exhibit visually annoying coding artifacts. It is of interest to be able to assess the degree of blocking artifacts numerically, as this plays an important role in the design, optimization and assessment of image and video coding systems. A novel algorithm for image blocking artifact detection is presented in this paper. Our experimental results show that the proposed method of measuring blocking artifacts performs satisfactorily compared with other post-processing methods and is very efficient and stable, since the signal need not be compressed/decompressed.

Keywords: Block discrete cosine transform, blocking artifacts, JPEG.

1. INTRODUCTION

Transform coding is at the heart of several industry standards for image and video compression. In particular, the discrete cosine transform (DCT) is the basis of the JPEG image coding standard [1], the MPEG video coding standard [2], and the ITU-T H.261 [3] and H.263 [4] recommendations for real-time visual communication. However, the block DCT (BDCT) has a major drawback, usually called blocking artifacts. In order to reduce blocking artifacts, their measurement is necessary. Several methods have been proposed to measure blocking artifacts in compressed images [5-11]. In [5], a model was obtained that gives a numerical value depending on the visibility of the blocking artifacts in compressed images; it therefore requires the original image for comparison with the reconstructed image, but in practice the original image is not available. In [6], the blocky image is modeled as a non-blocky image interfering with a pure blocky signal, and blocking artifact measurement is accomplished by estimating the power of the blocky signal. The weakness of [8] is the assumption that the difference of pixel values at the block boundary is caused only by blocking artifacts; this assumption decreases the computational complexity, but the measured value is not accurate for two adjacent blocks with a gradual change in pixel value. In [9], [10] and [11], the variation of pixel value across the block boundary was modeled as a linear function; this is not accurate, especially for adjacent blocks with a large change in

pixel value across the block boundary. In this paper we propose a blind but accurate measurement algorithm that takes into account that the change in pixel value across a block boundary is large compared with the change between adjacent pixels as we move away from the block boundary.

2. BLOCKING ARTIFACTS MEASUREMENT SYSTEM

Blocking artifacts are introduced in the horizontal and vertical directions. Let us consider two adjacent blocks c1 and c2. Here we study the case of horizontally adjacent blocks; for vertically adjacent blocks the same principles apply.

Fig. 1: Illustration of constituting the new shifted block b from the right half of c1 and the left half of c2.

Let the right half of c1 and the left half of c2 form a block denoted b. Block b is the 8x8 block which contains the boundary pixels; if any blocking artifact occurs between c1 and c2, the pixel values in b change abruptly. In this paper a novel DCT-domain method for blind measurement of blocking artifacts is proposed by modeling this abrupt change in b. Assume that the change in pixel value across the block boundary is very large compared with the variation in pixel value as we move away from the block boundary. The change in pixel value in block b can then be modeled by a two-dimensional function f(x,y), given in (1), where x, y = 0, ..., N-1; in (1), f(x,y) is constant in the vertical direction and anti-symmetric in the horizontal direction.
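As a small illustration (not the authors' code) of the shifted block b and why the DCT domain is convenient, the numpy/scipy sketch below builds b from two flat 8x8 blocks separated by an artificial intensity step and shows that the energy of its 8x8 DCT concentrates in the first row; the flat test blocks and the use of scipy.fft.dctn are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def shifted_block(c1, c2):
    """Form block b from the right half of c1 and the left half of c2 (Fig. 1)."""
    return np.hstack([c1[:, 4:], c2[:, :4]])

# two flat blocks with an artificial intensity jump at their common boundary
c1 = np.full((8, 8), 100.0)
c2 = np.full((8, 8), 120.0)
b = shifted_block(c1, c2)

# 8x8 DCT of b: for a purely horizontal step (constant vertically)
# the energy sits entirely in the first row of the coefficients
B = dctn(b, norm="ortho")
print(np.round(B[0, :], 2))          # first row carries the step
print(np.round(np.abs(B[1:, :]).sum(), 6))  # remaining rows are (numerically) zero
```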

{{


{{

Thus the eight pixels values on the function f(x, y) can be obtained as The 2-D 8x8 block g can be constituted by simply stacking the vector k row by row, i.e., Note that the block g is anti-symmetric horizontally and constant in the vertical direction. " B " # # $ $ % % & & ' '
{{{ {{ {{ {{ {{ {{ {{ {{{

. - JJ

{{

Let the slope of me right half of c is given by



.

The slope in block b can be computed by averaging IJ .


{ . {

{ {

{%{

-
#

Therefore the, 8x8 DCT transform of f has only four non -zero elements in the first row.

Once IJ are calculated the next part I{ { composed of J{ { and J{ { can be obtained by I{ { I{ { . { { J{ { - J{ { {{

("

# (#

{ { -

{ . {

( $

{%{

{ {

{{

Using the blocking artifact can be measured/estimated quantitatively. 3. FAST DCT DOMAIN ALGORITHM The BDCT of c1, c2 and b are denoted respectively by C1, C2 and B. Let us define two matrices q1 and q2 as follows:Fig.2 Illustration of replacing the step function with a function f(x,y) in the 1-D case The blocking artifacts between blocks c1 and c2 can be regarded as a 2-D step function in the block b given by J{ { .
#
#

J#

Where I is identity matrix and O is zero matrix I I# J# - I$ J$ (12) {{

H&0&

&0& D J$

@ &0&

H&0& D

Let the slope of f(x, y) and block b can be modeled as I{ {

JJ

JJ

{{

In DCT domain equation can be written as The 8x8 BDCT transform of the block I { { is given by 0 U[_ I { { # # - $ $ I I

be the Amplitude of f(x,y).Then, J{ { - J{ { {{

Where is the average value of b representing local brightness and J{ { represents the white Gaussian noise with zero mean [1112]. Figure 2 shows a 1-D model of the pixel value difference across the block boundary. Let the difference between two pixels in horizontal direction be denoted by Let the slope of left half of b is given by

.

- { { -

{ {

I{ { . I{ . {

{{
{{

Where k=1, 2 and I I

{ - { { - { 0 U[_

(" ("

I { {

{{

{ {

JJ

JJJ

{{


Assume that the variation in pixel value of ck is modeled by { { Here represents the slope of 2-D function g(x, y) in ck. For u = 0 and v= 1 Substituting { { for I { { in (14) gives I { { %
("

. @

The value can be easily obtained by using the above equation (17) with less computational complexity as only two DCT values are required. Let us denote the first row of the 8x8 BDCT transform of g(x,y) by {" # { . To find we first compute block by subtracting and g(x,y) from B as given below. { { .  IJ IJ { { JJJ (18)

{" #{

{" #{

Av. blocking artifacts

Where = -18.2241 according to park [10] .Then, by averaging the slopes of 2-D function f(x,y) in C1 and C2 we estimate as

I { {

0 U[_

. D
{$

#{

(16)

Note that the 8x8 DCT of the 2-D step function defined in (4) has only four non-zero elements in its first row. Let the vector v = [v0, v1, ..., v7] be the first row of the 8x8 DCT of the 2-D step function; then v0 = v2 = v4 = v6 = 0. By the unitary property of the DCT, the sum of squares of the non-zero elements equals the energy of the step function, and hence the step amplitude can be computed from the corresponding DCT coefficients.

Because of the sparseness of the DCT coefficients in the DCT block, the proposed method is far more efficient than conventional IDCT-DCT methods such as [6]. It should be noted that if the magnitude of the blocking artifact is very small compared with the original variation of pixel values across the block boundary, the blocking artifact may not be observed.

4. DISCUSSION

In the proposed method, several 512x512 images are coded at different bit rates. Fig. 3 shows the comparative results of the blocking artifact measurement obtained by the proposed method and by the method in [10]. In addition, the results are compared with the true blocking artifacts, which are obtained by measuring the original pixel variation across the block boundary. As shown in Fig. 3, the measured (average) values of the proposed method are closer to the true blocking artifacts than those of the method in [10]. It should be noted that at low Q values the blocking artifact is relatively small; as the Q factor increases, the blocking artifact (horizontal, vertical as well as average) also increases. Fig. 4 shows the average deviation at different values of the Q factor. The average deviation is different for different images, and the deviation remains constant at low values of the Q factor. The results indicate that the proposed method can measure blocking artifacts more accurately than the method in [10].

Fig. 3: Average blocking artifacts vs. Q factor for (a) Bridge, (b) Lena and (c) Baboon, comparing the true (Original) values, the method of Park [10] and the proposed method.


5. CONCLUSION

In this paper, a DCT-domain blind measurement method for blocking artifacts is proposed which is stable and can be applied to a wide variety of images in both the pixel and DCT domains. Experimental results indicate that the proposed method gives better results than the method in [10]. The proposed method can be used to improve the performance of existing algorithms for reducing blocking artifacts. Due to its low computational cost, the technique can be integrated into real-time image/video applications.

Fig. 4: Average deviation (measure of deviation vs. Q factor) for the Lena, Bridge and Baboon images.

6. REFERENCES

[1] W. Pennebaker and J. Mitchell, JPEG Still Image Data Compression Standard. New York: Van Nostrand, 1993.
[2] J. L. Mitchell, W. B. Pennebaker, C. E. Fogg and D. J. LeGall, MPEG Video Compression Standard. New York: Chapman & Hall, 1997.
[3] Video Codecs for Audiovisual Services at p x 64 kbit/s, ITU-T Rec. H.261, Mar. 1993.
[4] Video Coding for Low Bitrate Communication, ITU-T Rec. H.263, 1998.
[5] S. A. Karunasekera and N. G. Kingsbury, "A distortion measure for blocking artifacts in images based on human visual sensitivity", IEEE Trans. Image Processing, vol. 4, pp. 713-724, 1995.
[6] Z. Wang and A. C. Bovik, "Blind measurement of blocking artifacts in images", in Proc. IEEE Conf. Image Processing, Vancouver, Canada, pp. 981-984, 2000.
[7] S. D. Kim and J. B. Ra, "Efficient DCT domain prefiltering inside a video encoder", in Proc. SPIE Visual Communication and Image Processing, vol. 4067, pp. 1579-1588, June 2000.
[8] C. Wang, W. J. Zhang and X. Z. Fang, "Adaptive reduction of blocking artifacts in DCT domain for highly compressed images", IEEE Trans. Consum. Electron., vol. 50, pp. 647-654, 2004.
[9] S. Z. Liu and A. C. Bovik, "Efficient DCT-domain blind measurement and reduction of blocking artifacts", IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 12, pp. 1139-1149, 2002.
[10] Chun-Su Park, Jun-Hyung Kim and Sung-Jea Ko, "Fast blind measurement of blocking artifacts in both pixel and DCT domains", J. Math. Imaging Vis., vol. 28, pp. 279-284, 2007.
[11] S. Singh, V. Kumar and H. K. Verma, "Reduction of blocking artifacts in JPEG compressed images", Digital Signal Processing, vol. 17, pp. 225-243, 2007.
[12] A. C. Bovik, Handbook of Image & Video Processing. San Diego, CA: Academic Press, 2000.
[13] B. A. Wandell, Foundations of Vision. Sunderland, MA: Sinauer Associates, 1994.


An Improved License Plate Extraction Technique Based on Gradient and Prolonged Haar Wavelet Analysis

Chirag N. Paunwala
Department of Electronics & Communication Sarvajanik College of Engg.& Tech. Surat, India cpaunwala@gmail.com

Dr. Suprava Patnaiak


Department of Electronics Engineering SVNIT Surat, India ssp@eced.svnit.ac.in

Abstract - In a vehicle license plate extraction system, plate region detection is the key step before the final recognition. This paper presents a license plate detection algorithm for complex backgrounds based on gradient analysis and a prolonged Haar wavelet transform. First, the license plate region is approximately detected using gradient analysis and the top-hat transformation of the horizontal projection with an appropriate threshold. Accurate detection is then obtained from the multiresolution features of the candidates using the prolonged Haar wavelet transform. Due to the limitations of the Haar wavelet, we use an expanded version of it to get a better location of the license plate without using morphological operations. Finally, the accurate vertical position of the license plate is detected and the license plate is extracted from the image. Test images taken from various scenes were employed, including diverse angles and different lighting conditions. The experiments show that the proposed method can quickly and correctly detect the license plate region.

Keywords - Gradient analysis, license plate recognition, top-hat transformation, prolonged Haar wavelet analysis, mathematical morphology, Otsu's binarization.

I. INTRODUCTION

Computer vision and character recognition algorithms for LPR are used as core modules of intelligent infrastructure systems such as electronic payment systems and freeway and arterial management systems for traffic surveillance [1]. Various approaches exist to extract the license plate, such as the Hough transform [2], statistical analysis [3] and hierarchical representation [4]. However, these approaches have certain limitations. The candidate regions that can contain the license plate, as identified using the Hough transform, often include regions other than those containing just the license plate; detecting actual vertical lines with the Hough transform is also more difficult, as they are more prone to noise than horizontal lines. Statistical analysis and hierarchical representation usually take more computation time, and their rate of false detection is also higher. Region-growing algorithms require more memory, as they involve recursion, and perform satisfactorily only if the images are taken in good ambient lighting conditions [5]. Color-based LPD algorithms are usually computationally expensive and memory-consuming; in addition, their detection rate is inherently unreliable because of their sensitivity to illumination variation, and they are sometimes not useful for gray scale images [6].

This paper mainly deals with two parts. First, candidate region extraction is done using vertical gradient analysis, the top-hat transformation and its horizontal projection. In the second part, using properties of the prolonged Haar wavelet transform and a row-column adjustment, the exact license plate is extracted. The method can detect license plates with shadowing, night flash and blurring.

II. ALGORITHM OF LPR

A. Candidate Region Extraction

1) Pre-processing of the image: One of the features of license plates is that most plates are composed of two colors, so the gray levels of the character pixels and of the background pixels are different. We can make use of this feature of the license plate to detect its region. Images acquired in real environments may show corrupted or stained license plates, different distances from the CCD, uncontrolled illumination, thick shadows and low contrast, so before license plate detection a pre-processing step is applied to the images. The operation is shown in (1):

A(x) = c · tanh(x / σ)        (1)

In this equation, x represents the intensity value of the input image, σ is the variance of the image, and c is a constant which, from experimentation, varies in the range 0.8 to 1. Since most license plate images acquired in real environments are color images, they have to be converted to gray images before processing, using (2):
f(i, j) = 0.114·A(i, j, 1) + 0.587·A(i, j, 2) + 0.299·A(i, j, 3)        (2)

where f(i,j) is the gray image array and A(i,j,1), A(i,j,2), A(i,j,3) are the R, G and B values of the original image respectively. Fig. 1 gives an overview of the proposed algorithm.
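A minimal sketch of this pre-processing (illustrative only): the exact placement of σ inside the tanh of Eq. (1) is an assumption, and the R/G/B weights are taken exactly as written in Eq. (2).

```python
import numpy as np

def preprocess(rgb, c=0.9):
    """Contrast normalization in the spirit of Eq. (1), then gray conversion of Eq. (2)."""
    x = rgb.astype(float)
    sigma = x.var()                          # the paper uses the image variance
    A = c * np.tanh(x / (sigma + 1e-12))     # assumed form of Eq. (1)
    # Eq. (2) with A(:,:,1)=R, A(:,:,2)=G, A(:,:,3)=B as stated in the text
    gray = 0.114 * A[..., 0] + 0.587 * A[..., 1] + 0.299 * A[..., 2]
    return gray

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
print(preprocess(img).shape)
```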


Figure 1. Overview of the proposed algorithm

2) Acquire the vertical gradient from the gray image: By calculating the average gradient variance and comparing the values, the larger intensity variations, which indicate the position of the license plate region, can be found. So we can roughly locate the horizontal position candidates of the license plate from the gradient values. The vertical gradient is calculated using (3), and the result is shown in Fig. 2:

g_v(i, j) = f(i, j+1) − f(i, j)        (3)

Figure 2. Original gray image and its vertical gradient

3) Get the horizontal projection: From the last step, one can observe that regions with larger vertical gradient values roughly represent the region of the license plate, so the license plate region tends to have a large value in the horizontal projection of the vertical gradient variance. According to this feature, we calculate the horizontal projection of the gradient variance using (4); the result of this horizontal projection is shown in Fig. 3:

TH(i) = Σ_{j=1..n} g_v(i, j)        (4)

Figure 3. Horizontal projection of the top-hat transformation

From Fig. 3, the horizontal position of the license plate corresponds to a wave ridge of the projection. Our main aim is to search the ridges to get the possible horizontal positions of the license plate. However, there are many burrs in the horizontal projection of Fig. 3. In order to get rid of these burrs, we introduce a Gauss filter; in practice, for the discrete curve, (5) is used:

T'_H(i) = [ TH(i) + Σ_{j=1..w} ( TH(i−j) h(j,σ) + TH(i+j) h(j,σ) ) ] / k,  with  h(j,σ) = e^(−j²σ²/2)  and  k = 2 Σ_{j=1..w} h(j,σ) + 1        (5)

In (5), TH(i) represents the original projection value, T'_H(i) the filtered projection value, and i changes from 1 to m; w is the width of the smoothing region, h(j,σ) is the Gauss filter and σ is its parameter. After many experiments, practicable values of the Gauss filter parameters have been established: in our algorithm we choose w = 6 and σ = 0.05. The result of smoothing the horizontal projection with the Gauss filter is shown in Fig. 4.

Figure 4. Horizontal projection after smoothing

As shown in Fig. 4, one of the wave ridges must represent the horizontal position of the license plate, so the apices and valleys should be checked and identified.
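Before the thresholding step that follows, here is a compact sketch (not the authors' code) of Eqs. (3)-(6): the vertical gradient, its row-wise projection, a Gaussian smoothing of the assumed form above, and the wt·M threshold; the absolute value on the gradient and the use of np.convolve are assumptions.

```python
import numpy as np

def horizontal_projection(gray, w=6, sigma=0.05):
    """Vertical gradient (Eq. 3), row-wise projection (Eq. 4), Gauss smoothing (Eq. 5)."""
    g_v = np.abs(np.diff(gray.astype(float), axis=1))   # |f(i,j+1) - f(i,j)| (abs assumed)
    TH = g_v.sum(axis=1)                                 # Eq. (4): projection per row i

    j = np.arange(-w, w + 1)
    h = np.exp(-(j ** 2) * sigma ** 2 / 2.0)             # assumed Gauss weight form
    h /= h.sum()                                         # normalization factor k
    return np.convolve(TH, h, mode="same")               # Eq. (5) as a 1-D filtering

def candidate_rows(TH_smooth, wt=1.15):
    """Threshold of Eq. (6): rows whose projection exceeds wt * mean are marked 1."""
    T = wt * TH_smooth.mean()
    return (TH_smooth >= T).astype(int)

gray = np.random.rand(240, 320) * 255
mask = candidate_rows(horizontal_projection(gray))
print(int(mask.sum()), "candidate rows")
```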


Many vehicles have poster signs in the back window or on other parts of the vehicle that could deceive the algorithm; therefore, we use a threshold T to locate the candidates for the horizontal position of the license plate. The threshold is calculated by (6), where M represents the mean of the filtered projection values T'_H(i) and wt is a weight parameter. The candidate regions obtained in this step may be more than one in some images.

T = wt · M        (6)

In our algorithm, after experiments, wt = 1.15. If T'_H(i) is larger than or equal to T, we set a mark by letting stack(1, i) = 1; otherwise we let stack(1, i) = 0. The result of the above algorithm is shown in Fig. 5. In Fig. 5(a), the continuous marks of 1s represent the candidates for the horizontal position of the license plate. It is possible to merge some regions and to discard regions that are too wide or too narrow. Fig. 5(b) shows the result of the candidate region extraction.
Figure 5. (a) Stack information of the RoI; (b) candidate license plate region

B. Exact Detection

1) Haar Wavelet: In our work, we applied a wavelet transformation to the extracted candidate region. It is well understood that when an image undergoes a wavelet transform, the resulting image has four components: the approximation (low frequency), the horizontal and vertical details (mid-range frequencies) and the diagonal detail (high frequency). We make use of the Haar wavelet transform because of its simplicity, compact support, symmetry and orthogonality, and because it is efficient and feasible for real-time edge detection [8].

2) Prolonged Haar Wavelet: However, the Haar wavelet has a limitation which can be a problem. Since the Haar wavelet transform performs an average and difference on a pair of values and then shifts over by two values to calculate another average and difference on the next pair, it cannot detect a big change that takes place from an odd index value to an even index value [9,10]. For example, consider the one-dimensional signal with twenty-two elements shown in Fig. 6, in which (a) and (b) show a sharp fall and a sharp rise respectively. After the first-stage wavelet transform, the result of the high-frequency computation is shown in Fig. 7: for the large fall at (a) the Haar wavelet is not able to detect any change, whereas the sharp rise at (b) is detected by the high-frequency analysis of the simple Haar wavelet. The problem can be corrected by expanding the filter coefficients to length three instead of two as in the original Haar wavelet transform. Thus the coefficients of the high pass filter become [1/2, 0, -1/2]; likewise the new expanded low pass filter coefficients become [1/3, 1/3, 1/3], so that it can still act as a low pass filter. The result of the high-frequency computation using the new coefficients is shown in Fig. 8; now the large drop at (a) is also detected. Notice that this prolonged Haar wavelet looks very similar to a gradient-based edge detector. Using the prolonged Haar coefficients, the horizontal and vertical detail filters become those shown in (7) and (8), where vector A holds the low pass filter coefficients and vector B the high pass filter coefficients.

Figure 6. One-dimensional signal with 22 elements

Figure 7. Detail coefficients of the simple Haar wavelet for the signal of Fig. 6

Figure 8. Detail coefficients using the prolonged (expanded) Haar filter

h_x = [A]^T [B] = (1/6) [ 1 0 -1 ; 1 0 -1 ; 1 0 -1 ]        (7)

h_y = [B]^T [A] = (1/6) [ 1 1 1 ; 0 0 0 ; -1 -1 -1 ]        (8)
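A small sketch (an illustration, not the authors' code) of applying these prolonged Haar detail filters to a gray image with scipy; the convolution mode and the random test image are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

# Prolonged Haar detail filters from Eqs. (7) and (8)
A = np.array([[1/3, 1/3, 1/3]])     # expanded low-pass coefficients
B = np.array([[1/2, 0.0, -1/2]])    # expanded high-pass coefficients
hx = A.T @ B                        # horizontal detail filter, Eq. (7)
hy = B.T @ A                        # vertical detail filter, Eq. (8)

gray = np.random.rand(128, 128)     # stand-in for the candidate plate region

# mid-frequency detail bands; strong character strokes of a plate show up
# as large responses in these bands
WH = convolve2d(gray, hx, mode="same", boundary="symm")
WV = convolve2d(gray, hy, mode="same", boundary="symm")
edge_strength = np.abs(WH) + np.abs(WV)
print(round(float(edge_strength.max()), 4))
```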

We can use the LH or HL band for edge detection, as shown in Fig. 9, but by using the difference of the three bands other than the horizontal band, as in (9), we obtain better edge information, as shown in Fig. 10.


dif = | W_D(m, n+1) − W_D(m, n) | + | W_V(m, n+1) − W_V(m, n) | + | W_H(m, n+1) − W_H(m, n) |        (9)

License Plate Recognition From Still Images and Video Sequences: A Survey IEEE transactions on intelligent transportation systems, vol. 9, no. 3, pp. 377-391

[2] Tran Duc Duan, Duong Anh Duc, Tran Le Hong Du, 2004,
Combining Hough Transform and Contour Algorithm for detecting Vehicles License-Plates, Intelligent Multimedia, Video and Speech Processing, pp: 747-750

Figure 9. Edge information using expanded HL band only

[3] H.J. Lee, S.-Y. Chen, and S.-Z. Wang 2004 Extraction and
recognition of license plates of motorcycles and vehicles on highways in Proc. ICPR, pp. 356359

From the above steps, we can get the row and column position of the license plate. In order to make the location more accurate, we adjust the row and column positions and then extract the license plate image from the original image. For that, Otsu's method [11] is used to binarize the image; g(x,y) denotes the binarized image, and with some row and column restrictions we obtain the extracted license plate image shown in Fig. 11.

Figure 10. Edge information obtained using the difference of the three detail bands, Eq. (9)

[4] R. Zunino and S. Rovetta, 2000, Vector quantization for license-plate


location and image coding, IEEE Trans. Ind. Electron., vol. 47, no. 1, pp. 159167

[5] P. V. Suryanarayana, Suman K. Mitra, Asim Banerjee and Anil K.


Roy 2005, A Morphology Based Approach for Car License Plate Extraction, IEEE Indicon 2005 Conference, vol.-1, pp: 24-27

[6] H.J. Wang, X.N. Wang, and W.J. Li. 2007 Color prior knowledgebased license plate location algorithm IEEE Second Workshop on Digital Media and its Application in Museum & Heritage, Vol. 6, pp 43-52

[7] R.C. Gonzalez, R.E. Woods 2006, Digital Image Processing, PHI,
second edd

Figure 11. Extracted license plate

The proposed method was implemented on a personal computer with an Intel Pentium Dual-Core 1.73 GHz CPU and 1 GB DDR2 RAM, using Matlab v7.6. As a first step toward this goal, a large image data set of license plates was collected from [12] and grouped according to several criteria, such as type and color of the plates, illumination conditions, various angles of vision, and indoor or outdoor scenes. We obtained an accurate license plate from 374 of the 381 test images, with 7 failures; the success rate is 98.16%. Table I shows the analysis of the proposed algorithm on various images and its comparison against the algorithm of [13], which shows its robustness in the case where the borders of the license plate do not exhibit much variation from the surrounding pixels (the results are strongly dependent upon the image database used).

TABLE I: RESULTS OF LICENSE PLATE LOCATION FOR THE PROPOSED ALGORITHM

[8] Weijuan Wen, Xianglin Huang, Lifang Yang, Zhao Yang, Pengju
Zhang, The Vehicle License Plate Location Method Based-on Wavelet Transform, International Joint Conference on Computational Sciences and Optimization, pp:381-384, 2009

[9] M. Mokji, S.A.R. Abu-Bakar 2004 Fingerprint Matching Based on


Directional Image Constructed using Expanded Haar Wavelet Transform, International Conference on Computer Graphics, Imaging and Visualization, pp:149-152

[10] S. Mallat, 1993, A theory for multi-resolution signal decomposition:


the wavelet representation, IEEE Trans actions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7, pp. 674-693

[11] N.Otsu, 1979, A Threshold Selection Method from Gray-Level


Histograms, IEEE Trans. Sys., Man and Cybernetics, Vol.SMC-9, No.1, pp.62-66

[12] http://www.medialab.ntua.gr/research/LPRdatabase.html [13] Feng Yang, Fan Yang, Detecting License Plate Based on Top-hat
Transform and Wavelet Transform , ICALIP, pp:998-2003,2008

Method            Total Images    Final Detection    Failure    Success Rate
Method of [13]    381             320                41         89.23%
Proposed Method   381             374                7          98.16%

III.

CONCLUSIONS

From the experimental results, it is shown that the proposed algorithm extracts the license plate region efficiently and quickly, but it still requires some modification for plate extraction under different weather conditions, poor resolution and blurred images.

REFERENCES
[1] Christos-Nikolaos E. Anagnostopoulos, Ioannis E. Anagnostopoulos,
Ioannis D. Psoroulas, Vassili Loumos, and Eleftherios Kayafas 2008



Relevance Feedback in Content Based Image Retrieval

Pushpa B. Patil
Dept. of Computer Science and Engineering, B.L.D.E. As, College of Engineering, and Technology, Bijapur, India E-mail: pushpa_ms@rediffmail.com
Abstract - We address the challenge of semantic gap reduction for image retrieval through an improved Support Vector Machine (SVM) based interactive relevance feedback. The approach uses a linear kernel, which works effectively when the number of training samples is smaller than the feature dimensionality. For image representation we use Dual Tree Complex Wavelet Transform (DT-CWT) visual features and compare them with the Discrete Wavelet Transform (DWT). Experimental evaluation shows that the use of DT-CWT visual features with our relevance feedback significantly improves the quality of the results.

Keywords - Content-based image retrieval (CBIR), Relevance Feedback (RF), dual tree complex wavelet, image retrieval

Manesh Kokare
Dept. of Electronics and Communication, S.G.G.S Institute of Engineering and Technology, Nanded, India. E-mail: mbkokare@sggs.ac.in

I. INTRODUCTION

Due to advancements in web technology, organizing, retrieving and searching multimedia data has become necessary. Earlier, keyword-based methods were popular for this, but the use of keywords alone raises several problems: manual annotation is a time-consuming and tedious task. These difficulties have pushed researchers towards CBIR, which uses visual representations such as color, texture and shape. However, visual representations have their own limitations, because they cannot always capture information in accordance with human perception. In other words, there is a large gap between low-level features and high-level concepts; in CBIR this is called the semantic gap. To overcome this, relevance feedback has been proposed [1-4]. Relevance feedback is an interactive online process that refines retrieval results based on user feedback. In a typical relevance feedback system, the user indicates whether the retrieval results are relevant or not, and in the semantic learning process the system captures what the user wants and produces another refined result set; this is repeated until the user is satisfied or the result remains the same. Earlier relevance feedback techniques were based on heuristic weighting adjustment, such as query vector movement [5] and weight vector updating [3][6]. Recently there has been rapid growth in the field of RF using popular machine learning techniques such as Bayesian learning, ANN, decision trees, nearest-neighbour classification and support vector machines [7-10]; a good review of RF is given in [11]. The main contribution of this paper is a new SVM-based semantic learning approach which gives better retrieval performance.

The rest of the paper is organized as follows. We briefly discuss complex wavelet transforms in section II. In section III, we explain the pseudo code describing the proposed RF using SVM. In section IV, the image retrieval method is discussed. In section V, the experimental results are discussed. Finally, the conclusion is given in section VI.

II. COMPLEX WAVELET TRANSFORMS

A. DWT

The real discrete wavelet transform (DWT) extracts features in only three directions (0°, 90°, 45°), as shown in Fig. 1, and it is shift variant. These drawbacks are overcome by the complex wavelet transform (CWT), which introduces limited redundancy into the transform. However, it still suffers from a problem: no perfect reconstruction is possible using CWT decomposition beyond level 1, when the input to each level becomes complex. To overcome this, Kingsbury [12] proposed a new transform which provides perfect reconstruction along with the other advantages of complex wavelets.

Figure 1. 2D DWT

B. DT-CWT

The DT-CWT uses a dual tree of real wavelet transforms instead of complex coefficients. This introduces a limited amount of redundancy and provides perfect reconstruction along with the other advantages of complex wavelets. The DT-CWT is implemented using separable transforms and by combining sub-band signals appropriately. Specifically, the 1-D DT-CWT is implemented using two filter banks operating in parallel on the same data. For d-dimensional input, an L-scale DT-CWT outputs an array of real scaling coefficients corresponding to the lowpass sub-bands in each dimension. The total redundancy of the transform is independent of L. The mechanism of the DT-CWT is not covered here; see [13] and [14] for a comprehensive explanation of the transform and details of the filter design for the trees. A complex-valued wavelet ψ(t) can be obtained as

ψ(x) = ψ_h(x) + j ψ_g(x)        (1)

where ψ_h(t) and ψ_g(t) are both real-valued wavelets. The impulse responses of the six wavelets associated with the 2-D complex wavelet transform are illustrated in Fig. 2 as gray-scale images.

Figure 2. 2D DT-CWT

III. RELEVANCE FEEDBACK

The fundamental concept of RF is to establish interaction between the user and the retrieval system and to refine retrieval based on feedback provided by the user. Generally speaking, RF is designed to bridge the semantic gap and enhance retrieval performance.

A. Proposed method

In this paper, we propose a new relevance feedback approach based on an SVM trained with relevant and irrelevant examples. It uses a linear kernel, which gives better performance even though the number of training samples is smaller than the dimensionality of the feature space, because a non-linear mapping does not improve the performance in this setting. We found that the average accuracy of the retrieval system using the proposed relevance feedback is 82.6% for the Corel database using DT-CWT features. The system block diagram is shown in Fig. 3. In our system, we first obtain the retrieval results from the CBIR system. Then we allow the user to indicate whether the results are relevant or irrelevant. If the results are irrelevant, the feedback loop is repeated until the user is satisfied. In every feedback iteration, the positive and negative images are marked in the results. This labelled data is treated as training data for the SVM, and the feature database is used as the testing data. The SVM classifies this data as relevant or irrelevant. Next, only the relevant (positive) images among the testing data are selected. Finally, the Canberra distance measure is applied to this set to get the top 20 images. The pseudo code describing the proposed method is given in Table I.

TABLE I. PSEUDO CODE DESCRIBING THE PROPOSED METHOD

Input:  query - user query; DB - image database; P - relevant images; N - irrelevant images
Output: Result
Begin
  Result = CBIR(DB, query);
  Repeat until user satisfaction or result remains the same
    (P, N) = Labeling(Result);
    T = P U N;
    (PI, NI) = SVMLearner(T, DB);
    For each x in PI do
      D(x) = CanberraDist(x, query);
    End
    SortDist(D);
    Result = DisplayTop20(D);
    DB = DB - NI;
  Repeat
End

Figure 3. System Architecture: query image → feature extraction → similarity measure → results; if the user is not satisfied, the labelled results are classified by the SVM, the relevant images are selected, and the similarity measure is applied again using the feature database; the loop stops when the user is satisfied.
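A condensed sketch of one round of the Table I loop using scikit-learn's linear SVM (an illustration only; the paper does not name an implementation, and the feature vectors here are random stand-ins).

```python
import numpy as np
from sklearn.svm import SVC

def relevance_feedback_round(query_feat, db_feats, pos_idx, neg_idx, top_k=20):
    """Train a linear SVM on the user-labelled images, keep only images
    predicted relevant, then rank them by Canberra distance to the query."""
    X = np.vstack([db_feats[pos_idx], db_feats[neg_idx]])
    y = np.array([1] * len(pos_idx) + [0] * len(neg_idx))
    clf = SVC(kernel="linear").fit(X, y)                 # linear kernel as in the paper

    relevant = np.where(clf.predict(db_feats) == 1)[0]   # PI: predicted-relevant images
    canberra = lambda a, b: np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + 1e-12))
    ranked = sorted(relevant, key=lambda i: canberra(db_feats[i], query_feat))
    return ranked[:top_k]

# toy usage with random features; indices 0-4 marked relevant, 5-9 irrelevant
feats = np.random.rand(1000, 36)
print(relevance_feedback_round(feats[0], feats, list(range(5)), list(range(5, 10))))
```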

IV. IMAGE RETRIEVAL

A. Feature Database Creation

To construct the feature vector of each image in the database, we decompose each image using the DT-CWT up to the third level. The energy and standard deviation (STD) are computed separately on each sub-band, and the feature vector is formed from these two parameter values. The retrieval performance obtained by combining these two feature parameters always outperformed that obtained using either feature individually. The energy E_k and standard deviation σ_k of the k-th sub-band are computed as

E_k = ( 1 / (M×N) ) Σ_{i=1..M} Σ_{j=1..N} | W_k(i, j) |        (2)

σ_k = [ ( 1 / (M×N) ) Σ_{i=1..M} Σ_{j=1..N} ( W_k(i, j) − μ_k )² ]^(1/2)        (3)

where W_k(i,j) is the k-th decomposed sub-band, M×N is the size of the sub-band, and μ_k is the mean of the k-th sub-band. The feature vectors are constructed using these two parameters. If n is the total number of sub-bands, the length of the feature vector equals n × (number of feature measures used in combination). For the creation of the feature database, the above procedure is repeated for all the images in the database, and the resulting feature vectors are stored in the feature database.

B. Image Matching

As a query image, we randomly select any one of the 1000 images of the Corel database, and it is processed to compute its feature vector as in section IV-A. We use the Canberra distance metric as the similarity measure. If x and y are the feature vectors of a database image and the query image respectively, both of dimension d, the Canberra distance is given by

Canb(x, y) = Σ_{i=1..d} | x_i − y_i | / ( x_i + y_i )        (4)

Figure 4. Average accuracy vs. number of feedback iterations for DWT and DT-CWT features

Figure 5. Result of CBIR using DT-CWT (9/20)
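The feature computation of Eqs. (2)-(3) and the Canberra ranking of Eq. (4) might be sketched as follows; the third-party dtcwt package, the use of coefficient magnitudes and the small epsilon in the distance are assumptions, not the authors' code.

```python
import numpy as np
import dtcwt   # third-party DT-CWT implementation (assumed; pip install dtcwt)

def dtcwt_features(image, levels=3):
    """Energy (Eq. 2) and standard deviation (Eq. 3) per oriented sub-band."""
    pyramid = dtcwt.Transform2d().forward(image.astype(float), nlevels=levels)
    feats = []
    for level_bands in pyramid.highpasses:        # one array of 6 orientations per level
        for k in range(level_bands.shape[-1]):
            w = np.abs(level_bands[..., k])       # magnitude of complex coefficients
            feats.append(w.mean())                # energy in the spirit of Eq. (2)
            feats.append(w.std())                 # standard deviation (of the magnitudes)
    return np.array(feats)

def canberra(x, y):
    """Canberra distance of Eq. (4)."""
    return float(np.sum(np.abs(x - y) / (np.abs(x) + np.abs(y) + 1e-12)))

def top_k(query_feats, db_feats, k=20):
    """db_feats: list of (image_id, feature_vector) pairs built with dtcwt_features."""
    ranked = sorted(db_feats, key=lambda item: canberra(query_feats, item[1]))
    return ranked[:k]

print(dtcwt_features(np.random.rand(64, 64)).shape)
```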

V.

EXPERIMENTS AND DISCUSSION

To evaluate the performance of the proposed system, accuracy is used as the performance metric. Accuracy [15] is defined as the percentage of relevant images out of the total number of retrieved images; the average accuracy is simply the average of the accuracies measured over all the randomly selected test queries.

A. Image Database

We perform experiments on 1000 images from the Corel image database, which contains 1000 color photographs of resolution 384x256 pixels covering a wide range of semantic categories, from natural scenes to artificial objects [15][16]. The database is partitioned into ten categories, each with 100 photographs. The proposed retrieval system has been implemented using MATLAB on a Pentium IV machine with 1 GB of RAM.

B. Retrieval Performance

For each experiment, one image was selected at random as the query image from each category, and the retrieved images were obtained. The users were then asked to identify the images related to their expectations among the retrieved images; these selected images were used as feedback images for the next iteration. We performed ten such experiments, so 10x10 = 100 images were tested for accuracy. The feedback process was performed 4 times. The results in terms of accuracy are shown in Fig. 4. We use an example to illustrate the performance improvement of the proposed approach using DT-CWT features in Figs. 5-9: Fig. 5 shows the initial query image, which belongs to the Elephants category, and Figs. 6-9 show the retrieved results for each feedback iteration.

Figure 6. Result of after first relevance feedback iteration (15/20)

Figure 7. Result of after second relevance feedback iteration (15/20)


REFERENCES
[1] I.. J. Cox, M.L. Miller, T.P. Minka, T.V.Papathomas, and P.N.Yianilos, The Bayesian Image Retieval System, PicHunter: Theory, Implementation and Psychophysical Experiments, IEEE Tran. on Image Processing Vol .9, Issue 1 , pp.20-37 Jan. 2000. Rui, Y. Huang, T. Ortega, M. Mehrotra,S. Relevance Feedback : A Power Tool In Interactive Content-Based Image Retrieval, IEEE Transactions on Circuits and Systems for Video Technology , Vol. 8(5), pp. 644-655, 1998. Y. Ishikawa, R. Subramanya, and C. Fatloutsos, MindReader: Querying databases through multiple examples, proc. VLDB Conf., 1998. Rui, Y., Huang, T.S., and Mehrotra,S. Content-based Image Retrieval with Relevance Feedback in MARS, in Proc. IEEE Int. Conf. on Image proc., 1997 Baice Li and Senmiao Yuan , A Novel Relevance Feedback Method in Content-Based Image Retrieval, in Proc of the International Conference on Information Technology: Coding and Computing, 2004 Esin Guldogan, Moncef Gabbouj, Dynamic Feature Weights with Relevance Feedback in Content-Based image Retrieval, in METU, Northern Cyprus Campus, pp. 56-59. September 2009. S. D. MacArthur, C. E. Brodley, and C. R. Shyu, Relevance Feedback Decision Trees in Content-based Image Retrieval, in Proc. IEEE Work-shop Content-based Access of Image and Video Libraries, pp.68-72, Jun 2000. Z. Su H. Zhang, S. Li, and S. Ma, Relevance Feedback in Contentbased Image Retrieval: Bayesian framework, Feature Subspaces, and Progressive Learning, IEEE Trans. Image Process., vol. 12, no. 8 , pp. 924-936, Aug 2003 Tong S. and Chang E., Support Vector Machines Active Learning for Image Retrieval Proc. ACM Multimedia, 2001 K. Tieu and P. Viola, Boosting image retrieval, in Proc. IEEE Conf. Computer Vision Pattern Recognition, pp. 228-235. Zhou, X. S. and Huang, T. S., Relevance Feedback in image retrieval: A Comprehensive review, Multimedia systems, 8, 6, pp. 536-544, 2003 N.G. Kingsbury, Image processing with complex wavelet, Phil. Trans. Roy. Soc. London A, vol. 357, pp. 2543-2560,sep.1999 N. G. Kingsbury, Complex wavelets for shift invariant analysis and filtering of signals, J.App. Comput. Harmon. Anal., vol. 10, no.3, pp.234-253, May 2001. I. Selesnick, R. Baraniuk, and N. Kingsbury, The dual-tree complex wavelet transform, IEEE Signal Process. Mag., vol.22, no.6, pp.123151, Nov. 2005. J. Li, J.Z. Wang, and G. Wiederhold, IRM: Integrated Region Matching for Image Retrieval, in Proc. 8th ACM Multimedia Conf. (MM00), Los Angeles, CA, pp. 147-156, 2000. Zoran Stejic, Yasufumi T., and Kaoru Hirota, Relevance FeedbackBased Image Retrieval Interface Incorporating Region and Feature Saliency Pattterns as Visualizable Image similarity Criteria, IEEE Trans. on Industrial electronics, vol. 50, No. 5, Oct 2003

[2]

[3] Figure 8. Result of after third relevance feedback iteration (16/20)

[4]

[5]

[6]

[7] Figure 9. Result of after fourth relevance feedback iteration (18/20) [8]

VI.

CONCLUSIONS
[9] [10] [11]

In this paper, we have shown that the performance of relevance feedback in a content-based image retrieval system depends both on the feature vectors that represent the images in the database and on the interactive relevance feedback method using the proposed learning algorithm. We propose a new relevance feedback approach based on an SVM trained with relevant and irrelevant examples. It uses a linear kernel, which gives better performance even though the number of training samples is smaller than the dimensionality of the feature space, because a non-linear mapping does not improve the performance. We have used the dual-tree complex wavelet transform for feature extraction, which extracts features in six strongly oriented directions. We found that these features not only improve the retrieval accuracy for texture images, but also give better retrieval performance for semantic categories when used with the proposed relevance feedback approach.

[12] [13]

[14]

[15]

[16]



Off-Line Handwritten Signature Identification Using Dual Tree Complex Wavelet Transforms

M. S. Shirdhonkar
Dept. of Computer Science and Engineering, B.L.D.E.As College of Engineering and Technology Bijapur, India E-mail:ms_shirdhonkar@rediffmail.com
Abstract - In this paper, a new method for handwritten signature identification based on the wavelet transform is proposed. We propose the use of the dual tree complex wavelet transform (DT-CWT) for signature feature extraction. In the identification phase, the Canberra distance measure is used. The proposed method is compared with the discrete wavelet transform (DWT); from the experimental results it is found that the signature identification rate of the DT-CWT is superior to that of the DWT.

Keywords - Signature identification, complex wavelet transform, discrete wavelet transform, person identification.

Manesh Kokare
Dept. of Electronics and Telecommunication, S.G.G.S Institute of Engineering and Technology Nanded, India E-mail: mbkokare@sggs.ac.in identification rate with respect to static mode. The identification rate of dynamic mode is higher than static mode, but dynamic mode has main disadvantage: it is online, so it cannot be used for some important application that the signer could not be presented in singing place. B. Related works. Signature verification contain two areas: off-line signature verification ,where signature samples are scanned into image representation and on-line signature verification, where signature samples are collected from a digitizing tablet which is capable of pen movements during the writing .In our work, we survey the offline signature identification. In 2009, Ghandali and Moghaddam have proposed an off-line Persians signature identification and verification based on Image registration, DWT (Discrete Wavelet Transform) and fusion. They used DWT for features extraction and Euclidean distance for comparing features. It is language dependent method [1].In 2008, Larkins and Mayo have introduced a person dependent off-line signature verification method that is based on Adaptive Feature Threshold (AFT) [2].AFT enhances the method of converting a simple feature of signature to binary feature vector to improve its representative similarity with training signatures. They have used combination of spatial pyramid and equimass sampling grids to improve representation of a signature based on gradient direction. In classification phase, they used DWT and graph matching methods. In another work, Ramachandra et al [3], have proposed cross-validation for graph matching based off-line signature verification (CSMOSV) algorithm in which graph matching compares signatures and the Euclidean distance measures the dissimilarity between signatures. In 2007, Kovari et. al presented an approach for off-line signature verification, which was able to preserve and take usage of semantic information[4].They used position and direction of endpoints in features extraction phase. Porwik [5] introduced a three stages method for offline signature recognition. In this approach the hough transform ,center of gravity and horizontal-vertical signature histogram have been employed, using both static and dynamic features that were processed by DWT has been addressed in[6].The verification phase of this method is based on fuzzy net using the enhanced version of the MDF(Modified Direction feature)extractor has been presented by Armand et.al [7].The different neural classifier such as Resilient Back Propagation(RBP), Neural network and Radial Basis

I. INTRODUCTION

A. Motivation
A signature appears on many types of documents in daily life, such as bank cheques and credit slips; thus the signature has great importance in a person's life. Automatic bank cheque processing is an active topic in the field of document analysis and processing, and confirming the validity of the signature on different documents is one of the important problems in automatic document processing. Nowadays, person identification and verification are very important in security and resource access control. The first and simplest way to do this is to use a Personal Identification Number (PIN), but a PIN code may be forgotten. An interesting alternative for identification and verification is the biometric approach [1]. A biometric is a measure for identification that is unique for each person; it is always with the person, cannot be forgotten and usually cannot be misused. There are many applications for signature identification: banking, user login to a computer or personal digital assistant (PDA), and access control. There are two modes for signature identification and verification: static or off-line, and dynamic or on-line. In static mode, the input of the system is a 2-D image of the signature. In contrast, in dynamic mode the input is the signature trace in the time domain: the person signs on an electronic tablet with an electronic pen and his/her signature is sampled. Each sample has three attributes: the x and y coordinates in two dimensions, and t, the time of occurrence of the sample. So, in dynamic mode, the time attribute of each sample is used to extract useful information such as start and stop points and the velocity and acceleration of the signature strokes. Some electronic tablets, in addition to time sampling, can digitize the pressure. This additional information existing in dynamic mode increases the identification rate with respect to static mode. The identification rate of dynamic mode is higher than that of static mode, but dynamic mode has a main disadvantage: it is on-line, so it cannot be used for some important applications where the signer cannot be present at the signing place.

B. Related Works
Signature verification covers two areas: off-line signature verification, where signature samples are scanned into an image representation, and on-line signature verification, where signature samples are collected from a digitizing tablet capable of capturing pen movements during writing. In our work, we survey off-line signature identification. In 2009, Ghandali and Moghaddam proposed an off-line Persian signature identification and verification method based on image registration, DWT (Discrete Wavelet Transform) and fusion; they used the DWT for feature extraction and the Euclidean distance for comparing features, and the method is language dependent [1]. In 2008, Larkins and Mayo introduced a person-dependent off-line signature verification method based on Adaptive Feature Thresholding (AFT) [2]. AFT enhances the conversion of a simple signature feature into a binary feature vector to improve its representative similarity with the training signatures; they used a combination of spatial pyramid and equimass sampling grids to improve the representation of a signature based on gradient direction, and DWT and graph matching methods in the classification phase. In another work, Ramachandra et al. [3] proposed the cross-validation for graph matching based off-line signature verification (CSMOSV) algorithm, in which graph matching compares signatures and the Euclidean distance measures the dissimilarity between signatures. In 2007, Kovari et al. presented an approach for off-line signature verification which was able to preserve and make use of semantic information [4]; they used the position and direction of endpoints in the feature extraction phase. Porwik [5] introduced a three-stage method for off-line signature recognition employing the Hough transform, the centre of gravity and horizontal-vertical signature histograms. A scheme using both static and dynamic features processed by the DWT has been addressed in [6]; the verification phase of this method is based on a fuzzy net. A method using the enhanced version of the MDF (Modified Direction Feature) extractor has been presented by Armand et al. [7]; different neural classifiers such as the Resilient Back Propagation (RBP) neural network and the Radial Basis Function (RBF) network have been used in the verification phase of this method.


In 2005, Chen and Srihari [8] described an approach that obtains an exterior contour of the image to define a pseudo writing path; to match two signatures, a dynamic time warping (DTW) method is employed to segment the signature into curves.
The main contribution of this paper is that we propose off-line handwritten signature identification using the dual tree complex wavelet transform, which captures information in six different directions for identification. In the identification phase, the Canberra distance measure is used. The proposed method is language independent. The experimental results of the proposed method are satisfactory, and it is found to give better results compared with the related works. The rest of the paper is organized as follows: Section II discusses the feature extraction phase; the signature identification approach is presented in Section III; the experimental results and the selection of training samples are given in Section IV; and finally Section V concludes the work.

II. FEATURE EXTRACTION PHASE
The major task of feature extraction is to reduce the image data to a much smaller amount of data which represents the important characteristics of the image. In signature identification, edge information is very important in characterizing signature properties. Therefore we propose the use of the dual tree complex wavelet transform, which captures information in six different directions. The performance of the system is compared with the standard discrete wavelet transform, which captures information in only three directions.

A. Discrete Wavelet Transform Features
The multiresolution wavelet transform decomposes a signal into low pass and high pass information. The low pass information represents a smoothed version and the main body of the original data, while the high pass information represents data of sharper variations and details. The discrete wavelet transform decomposes the image into four sub-images when one level of decomposition is used: one of these sub-images is a smoothed version of the original image corresponding to the low pass information, and the other three are high pass information representing the horizontal, vertical and diagonal edges of the image, respectively. When two images are similar, their difference exists mainly in the high-frequency information. A DWT with N decomposition levels has 3N+1 frequency bands, of which 3N are high-frequency bands [9]. The impulse responses associated with the 2-D discrete wavelet transform are illustrated in Fig. 1 as gray-scale images.

Figure 1. Impulse responses of the DWT wavelet filters.

B. Dual Tree Complex Wavelet Transform Features
The drawbacks of the DWT are overcome by the complex wavelet transform (CWT), which introduces limited redundancy into the transform. However, the CWT still suffers from the problem that no perfect reconstruction is possible for CWT decomposition beyond level 1, where the input to each level becomes complex. To overcome this, Kingsbury [11] proposed a new transform, the DT-CWT, which provides perfect reconstruction along with the other advantages of complex wavelets. The DT-CWT uses a dual tree of real wavelet transforms instead of complex coefficients; this introduces a limited amount of redundancy and provides perfect reconstruction along with the other advantages of complex wavelets. The DT-CWT is implemented using separable transforms and by combining sub band signals appropriately. Even though it is non-separable, it inherits the computational efficiency of separable transforms. Specifically, the 1-D DT-CWT is implemented using two filter banks operating in parallel on the same data. For d-dimensional input, an L-scale DT-CWT outputs an array of real scaling coefficients corresponding to the low pass sub bands in each dimension. The mechanism of the DT-CWT is not covered here; see [10], [12-13] for a comprehensive explanation of the transform and details of the filter design for the trees. A complex-valued wavelet ψ(t) can be obtained as

ψ(x) = ψ_h(x) + j ψ_g(x)   (1)

where ψ_h(x) and ψ_g(x) are both real-valued wavelets. The six wavelets associated with the 2-D complex wavelet transform are illustrated in Fig. 2 as gray-scale images.

Figure 2. Impulse responses of the six wavelet filters of the DT-CWT.

C. Feature Database Creation
To conduct the experiments, each signature from the database is decomposed using the DT-CWT up to the 6th level, and two different sets of features are computed using Algorithm 1, which uses the DWT and the DT-CWT respectively. To construct the feature vector of each signature in the database, the Energy and Standard Deviation (STD) are computed separately on each sub band and the feature vector is formed using these two parameter values. The Energy E_k and Standard Deviation σ_k of the kth sub band are computed as follows:

E_k = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} |W_k(i, j)|   (2)

σ_k = [ (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} ( W_k(i, j) - μ_k )² ]^{1/2}   (3)

where W_k(i, j) is the kth decomposed sub band, M x N is the size of the wavelet-decomposed sub band, and μ_k is the mean of the kth sub band. The resulting feature vectors using energy and standard deviation are f_E = [E_1 E_2 ... E_n] and f_σ = [σ_1 σ_2 ... σ_n] respectively, so the combined feature vector is

f = [E_1 E_2 ... E_n σ_1 σ_2 ... σ_n]   (4)
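The computation in (2)-(4) can be illustrated with a short sketch. This is a hypothetical helper, not the authors' code: the pywt package, the wavelet family and the function names are assumptions. It builds one energy/standard-deviation feature vector from an ordinary DWT decomposition; the paper applies the same construction to the six directional sub bands of the DT-CWT.

import numpy as np
import pywt  # assumed DWT implementation; the paper's DT-CWT code is not specified

def energy_std_features(image, wavelet="db4", levels=6):
    """Energy (Eqn. 2) and standard deviation (Eqn. 3) per sub band,
    concatenated into one feature vector (Eqn. 4)."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
    # coeffs[0] is the approximation band; coeffs[1:] are (H, V, D) detail tuples
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    energies, stds = [], []
    for band in bands:
        m, n = band.shape
        energies.append(np.sum(np.abs(band)) / (m * n))                    # Eqn. (2)
        stds.append(np.sqrt(np.sum((band - band.mean()) ** 2) / (m * n)))  # Eqn. (3)
    return np.array(energies + stds)                                        # Eqn. (4)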

The step-by-step procedure for feature database creation using DWT and DT-CWT is explained in Algorithm 1.

Algorithm 1: Feature database creation using DWT and DT-CWT
Input: Signature image database DB; filters LF, HF; handwritten signature Si
Output: Feature database FV
Begin
  For each Si in DB do
    Decompose Si by applying the low pass LF and high pass HF filters up to the sixth level
    Calculate the energy E and standard deviation SD for each sub band in each level using (2) and (3) respectively
    Feature vector F = [E U SD]
    FV = FV U F
  End for
End

III. SIGNATURE IDENTIFICATION PHASE

There are several ways to work out the distance between two points in multidimensional space; the most commonly used is the Euclidean distance, which can be considered the shortest distance between two points. Here we have used the Canberra distance metric as the similarity measure. If x and y are the feature vectors of the database signature and the query signature, respectively, both of dimension d, then the Canberra distance is given by

Canb(x, y) = Σ_{i=1}^{d} |x_i - y_i| / (|x_i| + |y_i|)   (5)

The step-by-step procedure of identification is as follows.

Algorithm 2: Handwritten signature identification
Input: Test signature St; feature database FV
Output: Distance vector Dist
Begin
  Calculate the feature vector of the test signature using Algorithm 1
  For each fv in FV do
    Dist = distance between the test signature and fv using (5)
  End for
  Display the minimum distance signature from the Dist vector
End
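A minimal sketch of the matching step of Algorithm 2 follows. The function and variable names are hypothetical, and the small epsilon added to the denominator is only a guard against division by zero, which the paper does not discuss.

import numpy as np

def canberra(x, y, eps=1e-12):
    # Canberra distance of Eqn. (5); eps only avoids 0/0 for all-zero components
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(np.abs(x - y) / (np.abs(x) + np.abs(y) + eps)))

def identify(test_feature, feature_db):
    """feature_db: mapping from signature id to its stored feature vector.
    Returns the id of the minimum-distance (best matching) signature."""
    distances = {sig_id: canberra(test_feature, fv) for sig_id, fv in feature_db.items()}
    return min(distances, key=distances.get)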

IV. EXPERIMENTAL RESULTS

A. Image Database
The signatures were collected using either black or blue ink (no pen brands were taken into consideration) on white A4 sheets of paper, with eight signatures per page. A scanner subsequently digitized the eight signatures contained on each page with a resolution of 256 grey levels. Afterwards the images were cut and pasted into rectangular areas of size 256x256 pixels. The sample signature database for 24 persons is shown in Fig. 3. A group of 24 persons, each providing 16 specimen signatures, gives a total of 24x16 = 384 signatures in the database.

B. Identification Performance
For each person, 12 signatures are used for training and 4 for testing, which makes a total of 4x24 = 96 test signatures. The identification rate is 89.58 % using DT-CWT and 61.45 % using DWT. Fig. 4 shows the comparison between DWT and DT-CWT; the signature identification rate of DT-CWT is superior to that of DWT.

Figure 3. Sample Signature Images Database


Figure 4. Comparison between DWT and DT-CWT (identification rate in % versus person number, for DWT and DT-CWT).

V. CONCLUSION
In this paper, we introduced an approach for the identification of off-line signatures. The proposed approach uses the DT-CWT for extracting details and the Canberra distance for comparing features. From the experimental results, it is found that the signature identification rate of the DT-CWT is superior to that of the DWT. The proposed method is language independent.

REFERENCES
[1] S. Ghandali and M. E. Moghaddam, Off-Line Persian Signature Identification and Verification Based on Image Registration and Fusion, In: Journal of Multimedia, vol. 4, 2009, pages 137-144.
[2] R. Larkins and M. Mayo, Adaptive Feature Thresholding for Off-Line Signature Verification, In: Image and Vision Computing New Zealand, 2008, pages 1-6.
[3] A. C. Ramachandra, K. Pavitra, K. Yashasvini, K. B. Raja, K. R. Venugopal and L. M. Patnaik, Cross-Validation for Graph Matching Based Off-Line Signature Verification, In: INDICON 2008, India, 2008, pages 17-22.
[4] B. Kovari, Z. Kertesz and A. Major, Off-Line Signature Verification Based on Feature Matching, In: Intelligent Engineering Systems, 2007, pages 93-97.
[5] P. Porwik, The Compact Three Stages Method of the Signatures Recognition, In: 6th International Conference on Computer Information Systems and Industrial Management Applications, 2007, pages 282-287.
[6] Wei Tian, Yizheng Qiao and Zhiqiang Ma, A New Scheme for Off-Line Signature Verification Using DWT and Fuzzy Net, In: Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, 2007, pages 30-35.
[7] S. Armand, M. Blumenstein and V. Muthukkumarasamy, Off-Line Signature Verification Using the Enhanced Modified Direction Feature and Neural-Based Classification, In: Neural Networks, IJCNN 2006, pages 684-691.
[8] S. Chen and S. Srihari, Use of Exterior Contours and Shape Features in Off-Line Signature Verification, In: Eighth International Conference on Document Analysis and Recognition, 2005, pages 1280-1284.
[9] G. Pajares and J. M. de la Cruz, A Wavelet-Based Image Fusion Tutorial, Pattern Recognition, vol. 37, no. 9, Sept. 2004, pages 1855-1872.
[10] M. Kokare, P. K. Biswas and B. N. Chatterji, Texture Image Retrieval Using New Rotated Complex Wavelet Filters, IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 35, no. 6, Dec. 2005.
[11] N. G. Kingsbury, Image Processing with Complex Wavelets, Phil. Trans. Roy. Soc.
[12] N. G. Kingsbury, Complex Wavelets for Shift Invariant Analysis and Filtering of Signals, J. Appl. Comput. Harmon. Anal., vol. 10, no. 3, pages 234-253, May 2001.
[13] I. Selesnick, R. Baraniuk and N. Kingsbury, The Dual-Tree Complex Wavelet Transform, IEEE Signal Process. Mag., vol. 22, no. 6, pages 123-151, Nov. 2005.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Edge Preserved Image Enhancement by Adaptively Fusing the Denoised Images by Wavelet and Curvelet Transform
G. G. Bhutada1*, R. S. Anand2, S. C. Saxena3
1 - Research Scholar, Electrical Engineering, Indian Institute of Technology Roorkee, India
2, 3 - Professor, Electrical Engineering, Indian Institute of Technology Roorkee, India
* Corresponding author, Email: ggbhutada@gmail.com

Abstract
In this paper, a new approach is proposed which utilizes the features of the wavelet and curvelet transforms separately and adaptively in the homogeneous, non-homogeneous and neither homogeneous nor non-homogeneous regions, identified by a total variational approach. In this approach, the image is denoised using the wavelet and curvelet transforms separately. The edgy information which is lost (the remnant) in wavelet denoising is extracted back by the curvelet transform. This extracted information is used as the base structure for fusing the classified regions of the images denoised by the wavelet and curvelet transforms. The image enhanced by such a spatially adaptive fusion technique preserves the edgy information. It also removes or smoothens the fuzzy edges developed during the denoising process in the curvelet domain, because it has better smoothness in the background, i.e. the homogeneous or non-edgy region.
Keywords: Curvelet transform, Edge preservation, Image denoising, Image fusion, Remnant, Wavelet transform.

1. Introduction
The distinct types of noise and artifacts in imaging modalities degrade the image quality. In practice, the most common degradation in images varies from additive (Gaussian) noise to multiplicative (speckle) noise. Such degradation can have a significant impact on the image quality and, as a result, affects human interpretation as well as the accuracy of computer-assisted methods in the case of medical imaging. Additionally, feature extraction, analysis, recognition and quantitative measurements become difficult and unreliable due to the poor quality of the images. Thus, the denoising and enhancement of these images become prime requirements for many practical applications. Initial efforts in this area started with simple ideas based on statistical filters in the spatial domain [1], followed by a lot of work in transform domains such as the wavelet and curvelet transform domains, owing to their primary properties of sparsity and multi-scale decomposition of the coefficients [2]. Using these properties, it becomes easy to represent the main energy of the signal by a few large coefficients, with many other small coefficients representing a small amount of the signal energy. As most of the noise power spreads into many small coefficients, it is necessary to modify these coefficients by a certain rule to enhance the signal power and suppress the noise power. To improve this denoising process, researchers have tried to develop better and better thresholding functions, owing to their effective and simple implementation in the wavelet and curvelet transform domains. The wavelet shrinkage methodology was proposed by Donoho et al. [3-4] for classifying wavelet

coefficients of real world noisy data, and has been further modified to increase the signal to noise ratio. The subsequent literature [5-7] has focused on developing and implementing thresholding functions in these domains. In the paper by Fodder and Kamath [8], an empirical study on denoising by wavelet shrinkage methods such as soft, hard, garrote and semisoft has been presented, which reported that SureShrink and BayesShrink produce better results. Later efforts in this area have suggested that substantial improvements in perceptual quality can be obtained by using proper shrinkage followed by suitable contrast enhancement techniques [9, 10]. Further, it has also been reported that the improvement in perceptual quality of the image can be achieved by proper shrinkage using an optimum threshold value determined by sub-band adaptive methods [11]. Some researchers [12-17] have used statistical approaches such as the Bayesian approach with various noise models like Gaussian, Rayleigh [15], Maxwell [16] and Fisher-Tippett [17] for the distribution of the noisy wavelet coefficients. The dependency of these methods on a specific noise type decreases their flexibility of usage. Recently, in 2009, Nasri and Pour [18] introduced a new adaptive thresholding function based on a wavelet transform based thresholding neural network (WT-TNN) methodology. They reported that their methodology outperforms various other thresholding methodologies like soft, hard and garrote, and other existing WT-TNN methodologies. Further, they claimed that the suggested methodology suppresses the noise regardless of its distribution and without modeling the distribution of the image wavelet coefficients. Though the denoising methodology proposed by Nasri and Pour outperforms other thresholding methodologies in the wavelet domain, it cannot yield better denoising in edgy regions due to the poor directional sparsity of the wavelet coefficients along curves [19]. In 2004, the curvelet transform was first introduced by E. J. Candes and D. L. Donoho [20]. In their subsequent paper [21] it has been reported that the curvelet transform is simpler and more convenient for thresholding in denoising applications aimed at preserving edges. Since then there has been further research in this domain [22-28]. The paper by Candes et al. [28] introduced two versions of the discrete curvelet transform. The application of these discrete curvelet transforms for image denoising tends to add some extra edges (fuzzy edges) in the homogeneous region of the denoised images during the thresholding of the coefficients [26]. From the above discussion, it is clear that the wavelet transform yields better denoising particularly in the homogeneous region, whereas it does not give better results in the edgy region because large wavelet coefficients are generated even at fine scales and repeated scale after scale all along the important edges in the image. On the other hand, the curvelet transform is known for its anisotropic

feature, which results in expertise particularly in edgy-region denoising, although it adds some extra edges (fuzzy edges) in the homogeneous region. Therefore, the expertise of these transforms in particular regions leaves great potential to use their attributes separately in their regions of expertise. This fact is explored in the proposed remnant approach. Section 2 presents the proposed approach with a brief introduction to the denoising methodologies used in it, namely the methodology based on the WT and the methodology based on the curvelet transform. The results and the related observations are presented in sections 3 and 4. Finally, conclusions are drawn in section 5.

2. The Proposed Remnant Approach
In this denoising approach, it is proposed to fuse the information of the images denoised by the wavelet and curvelet transforms on the basis of the edgy information recovered from the remnant of the WT denoised image. Fig. 1(a)-(c) presents the remnants of the images denoised by WT-TNN, by the curvelet transform with hard thresholding, and by the curvelet transform with the cycle spinning algorithm [28]. It can be observed in Fig. 1(a)-(c) that the edgy information lost in the remnant of the WT-TNN denoised image is more than that in the remnant of the curvelet denoised image. From Fig. 1(a), it is clear that the WT-TNN denoising methodology does not restore long edges even with the latest thresholding function; at most, it contains directional information limited to the horizontal, vertical and diagonal directions. This limitation arises because large wavelet coefficients are generated even at fine scales and repeated scale after scale all along the important edges (curves) in the image. At the same time, curvelet denoising restores the directional information in a better way but adds additional fuzzy edges in the homogeneous region. Hence it is evident that these transforms have their own regions of expertise, leaving great potential to use their attributes in their respective regions. This idea is explored in the proposed novel remnant approach, which fuses the denoised images obtained from the WT-TNN and curvelet transforms adaptively, as described in section 2.3. A brief introduction to the denoising methodologies based on WT-TNN and the curvelet transform is given in sections 2.1 and 2.2.

Fig. 1 The information lost in the denoised images: (a) WT-TNN, (b) CT hard thresholding, (c) CT cycle spinning algorithm.

2.1 Wavelet denoising with thresholding neural network (WT1): This denoising methodology is basically a modification of the wavelet transform with sub-band adaptive soft thresholding (WT) methodology, and uses the adaptive differentiable thresholding function η_{λ,k,m}(w) of Eqn. (1) proposed in [18], which is defined piecewise for coefficients with |w| ≥ λ and |w| < λ. This thresholding function varies from hard to soft thresholding by adjusting the parameter k ∈ (0, 1]. The parameter m decides the shape of the thresholding function, λ is the threshold value, w is the coefficient in the wavelet domain and η is the thresholding function which returns the thresholded coefficients in the wavelet transform domain. In this technique, the values of λ, m and k are obtained for optimized performance by the thresholding neural network.

2.2 Curvelet denoising: The curvelet transform, like the wavelet transform, is a multi-scale transform with frame elements indexed by scale and location parameters [20]. This transform is very effective for denoising images, particularly from the edge-preservation point of view, because the curvelet pyramid contains elements with a very high degree of directional specificity, which yields an optimal sparse representation of objects with edges along curves. If the squared error (SE) of the m-term approximation, i.e. the squared difference between the whole expansion f and the best m-term approximations f_m^F, f_m^W and f_m^C of the Fourier, wavelet and curvelet coefficients respectively, is compared as in Eqns. 2-4 [19], then such m-term expansions obey:

||f - f_m^F||² ∝ 1/m^{1/2}   (m-term SE approximation for the Fourier transform)   (2)
||f - f_m^W||² ∝ 1/m        (m-term SE approximation for the wavelet transform)   (3)
||f - f_m^C||² ∝ 1/m²       (m-term SE approximation for the curvelet transform)   (4)

From Eqns. 2-4 it is clear that the curvelet transform represents the image with better sparsity. Such an optimal sparse representation makes it convenient to preserve the edges in the denoised images [19]. In this transform domain, two approaches were developed by E. J. Candes et al. [28]: in one of these, the transform is used with a hard thresholding function (CT), whereas in the other it is used with the cycle spinning algorithm (CT1).

2.3 The proposed adaptive fusing algorithm: The noisy image with Gaussian noise of standard deviation 20 (as an example), shown in Fig. 2(a), is initially denoised by the sub-band adaptive soft thresholding (WT) methodology of the level-dependent wavelet transform using the biortho6.8 wavelet filter [29], and the remnant of this denoised image is obtained as shown in Fig. 2(b). The edgy information presented in Fig. 2(c) is extracted back by denoising the remnant of the WT methodology with the curvelet transform methodology. It is not possible to add this information directly to the wavelet denoised image, even though the extract itself is obtained from the remnant of WT, as it does not tend to give better performance in the non-edgy or near-edgy regions. Therefore, for efficient edge preservation, it is proposed to fuse the denoised images of both these transforms by the adaptive fusing algorithm, which has the following steps:

Fig. 2 (a) Noisy image, (b) Remnant of the WT denoised image, (c) Image extracted from the remnant of WT.

Step 1: The denoised images obtained from WT and CT are converted into normalized variational images (VIs) by taking the 3x3 block variation, and these VIs are thresholded to convert them into binary form.

Step 2: The common black regions in the binary form of the VIs of the denoised images obtained by WT and CT are identified as the homogeneous region. Similarly, the common white regions are identified as the edgy region. The regions which are not common between the white and black regions are identified as a third region, denoted the neither region. The original standard Lena image and its noisy version with standard deviation 20 are shown in Fig. 3. All three regions of the noisy Lena image are shown in Fig. 4.

Fig. 3 Lena image, original (left) and noisy (right).

Fig. 4 Classified regions of the noisy Lena image: (a) homogeneous region, (b) non-homogeneous region, (c) neither region.

Step 3: The extracted image shown in Fig. 2(c) is used as the basis for the fusion of the spatial information of the other denoised images obtained from WT and CT. For adaptive fusion, the pixels in the three classified regions (homogeneous, non-homogeneous and neither region) are further grouped on the basis of the gray level of the pixels of the extracted image.

Step 4: The pixel values in each group of each region are adaptively fused by the proposed Eqns. 5-7 for the homogeneous, non-homogeneous and neither regions, respectively (a short code sketch of this fusion rule is given after these steps):

F_K^h(l) = α_K^h W_K^h(l) + (1 - α_K^h) [ β_K^h C_K^h(l) + (1 - β_K^h) C1_K^h(l) ]   (5)

F_K^nh(l) = α_K^nh W_K^nh(l) + (1 - α_K^nh) [ β_K^nh C_K^nh(l) + (1 - β_K^nh) C1_K^nh(l) ]   (6)

F_K^ne(l) = α_K^ne W_K^ne(l) + (1 - α_K^ne) [ β_K^ne C_K^ne(l) + (1 - β_K^ne) C1_K^ne(l) ]   (7)

where l runs from 1 to N_K and N_K is the number of pixels in the Kth group of the respective region; α_K^h and β_K^h are the weights applied to the pixel values of the wavelet and curvelet denoised images of the Kth group of the homogeneous region; similarly, (α_K^nh, β_K^nh) and (α_K^ne, β_K^ne) are the corresponding pairs for the non-homogeneous and neither regions, respectively; W_K^h is the Kth group of pixels of the homogeneous region of the image denoised by the wavelet-based soft thresholding methodology; C_K^h is the Kth group of pixels of the homogeneous region of the image denoised by the curvelet-based hard thresholding methodology; C1_K^h is the Kth group of pixels of the homogeneous region of the image denoised by the curvelet-based cycle spinning methodology; W_K^nh, C_K^nh and C1_K^nh are the similar terms for the non-homogeneous region, and W_K^ne, C_K^ne and C1_K^ne are the similar terms for the neither region. In these equations, different weights are applied to the pixel values; the weights are learnt by a supervised neural-network learning method so as to maximize the SNR.

Step 5: The three classified regions of the images denoised by the existing methodologies are shown in Fig. 5.

Fig. 5 Different regions of the Lena image denoised by various approaches: (i) wavelet transform with sub-band adaptive soft thresholding (WT), (ii) wavelet transform based TNN (WT1), (iii) CT hard thresholding (CT), (iv) CT cycle spinning algorithm (CT1). The first row of images shows the homogeneous region, the second row the edgy region and the last row the neither region.

Step 6: Using steps 3 and 4 of the fusing algorithm, the respective regions shown in Fig. 5 are fused. The denoised images of the three regions obtained by the proposed fusing approach are shown in Fig. 6(a)-(c).

Step 7: Lastly, all modified pixels in the three classified regions are combined by Eqn. 8 and presented as the final denoised image shown in Fig. 6(d):

I = F^h + F^nh + F^ne   (8)

where F^h, F^nh and F^ne are the denoised images of the three regions obtained by Eqns. 5-7, respectively.
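The sketch below illustrates the fusion rule of Eqns. 5-8 under stated assumptions: the α and β symbols follow the reconstruction given above, the region masks, pixel groups and learnt weights are taken as precomputed inputs, and all names are hypothetical rather than the authors' code.

import numpy as np

def fuse_region(w_img, c_img, c1_img, groups, alpha, beta):
    """Fuse one classified region, pixel-group by pixel-group (Eqns. 5-7).
    w_img, c_img, c1_img : images denoised by WT (soft), CT (hard) and CT (cycle spinning)
    groups : integer label array giving the group index K for every pixel of the region (-1 outside)
    alpha, beta : per-group weight arrays, assumed learnt beforehand to maximize SNR"""
    fused = np.zeros_like(w_img, dtype=float)
    for k in np.unique(groups[groups >= 0]):
        mask = groups == k
        curvelet_part = beta[k] * c_img[mask] + (1.0 - beta[k]) * c1_img[mask]
        fused[mask] = alpha[k] * w_img[mask] + (1.0 - alpha[k]) * curvelet_part
    return fused

# Final image (Eqn. 8): the three disjoint region results are simply added, since
# every pixel belongs to exactly one of the homo / non-homo / neither regions:
# denoised = fuse_region(... homo ...) + fuse_region(... non-homo ...) + fuse_region(... neither ...)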

Fig. 6 Different regions of the Lena image denoised by the proposed approach: (a) homogeneous region, (b) edgy region, (c) neither region, (d) full denoised image.

3. Assessment Parameters
To investigate the effectiveness of the proposed method, commonly used performance indices of noise suppression such as the Signal-to-Noise Ratio (SNR) and the Peak-Signal-to-Noise Ratio (PSNR) are used. Though SNR and PSNR measure noise suppression, they are not sufficient to reflect the edge-preservation capability. Therefore, a parameter such as the figure of merit (FOM) [30] is also used for testing the edge-preservation capability of the denoised image. These indices are defined as follows.

(i) Signal-to-Noise Ratio (SNR): This performance index measures the noise-suppression quality; the higher the SNR, the better the noise suppression. If i is the original image and i_r is the denoised image, then

SNR = 10 log10( σ_i² / MSE(i, i_r) )   (9)

where σ_i² is the variance of i and

MSE(i, i_r) = (1/(M·N)) Σ_{x=1}^{M} Σ_{y=1}^{N} ( i(x, y) - i_r(x, y) )²

(ii) Peak Signal-to-Noise Ratio (PSNR): This is the standard performance index mostly used for 8-bit gray-level images denoised for Gaussian random noise:

PSNR = 20 log10( 255 / √MSE(i, i_r) )   (10)

(iii) Figure of merit (FOM): The most commonly used standard performance index for measuring edge preservation is the figure of merit (FOM) [30], in which γ = 1/9 is used as a penalization factor for edges displaced from their original location; the typical value of this scalar multiplier is 1/9. The maximum value of the FOM is 1; a value closer to 1 indicates better edge preservation and less edge displacement in the denoised image.

FOM = ( 1 / max(n_d, n_r) ) Σ_{j=1}^{n_d} 1 / (1 + γ d_j²)   (11)

where n_d and n_r are the numbers of detected and reference edge pixels, respectively, and d_j is the Euclidean distance between the jth detected edge pixel and the nearest reference edge pixel.
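A minimal sketch of the FOM of Eqn. (11) follows, assuming the detected and reference edge maps are already available as boolean arrays (the edge detection itself, e.g. by the Sobel operator, is not shown) and that scipy.ndimage is available for the distance transform; the names are illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt

def figure_of_merit(detected, reference, gamma=1.0 / 9.0):
    """Pratt's figure of merit (Eqn. 11) between two boolean edge maps."""
    n_d, n_r = int(detected.sum()), int(reference.sum())
    if max(n_d, n_r) == 0:
        return 0.0
    # Euclidean distance from every pixel to the nearest reference edge pixel
    d = distance_transform_edt(~reference)
    return float(np.sum(1.0 / (1.0 + gamma * d[detected] ** 2)) / max(n_d, n_r))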

4. Results and Discussions
The effectiveness of the proposed remnant approach with the adaptive fusing algorithm is analyzed by denoising standard images such as Lena in the different classified denoised regions (already shown in Fig. 6) using the proposed algorithm. The quantitative comparative performance against the various other techniques (reported earlier), on the basis of SNR and PSNR, is presented in Table 1. It can be observed from Fig. 7 that there is considerable noise suppression in the homogeneous region as compared to the edgy regions of the image. This fact is evident from Table 1, in which the SNR of the homogeneous region is enhanced to 26.94 from 25.95 (which was the highest among the methodologies used here). A similar improvement is also reflected in the PSNR. From this result for the homogeneous region it is clear that the proposed approach removes or smoothens the fuzzy edges developed during the denoising process in the curvelet domain; thus, the denoising by the proposed approach results in better smoothness in the background. While denoising the edgy, i.e. non-homogeneous, region, the main emphasis is on preserving the edges. Therefore much improvement cannot be seen in the SNR and PSNR of the proposed approach as compared to the curvelet transform based denoising methodology, which is characterized by edge preservation. From this result for the non-homogeneous region it is clear that the edge information present in the curvelet transform based denoising methodology is not lost while suppressing the noise. The denoising result for the neither region is basically a compromise between the preservation of true edges and the removal of the fuzzy edges present in the curvelet transform methodology. From Table 1, it can be stated that the proposed approach also outperforms the other methods in this region.

Table 1. Comparison of denoised images in the different classified regions with performance indices SNR and PSNR (Lena image, 512 x 512, Gaussian noise of std. dev. 20).

                        SNR Comparison                              PSNR Comparison
Region                  WT     WT1    CT     CT1    Proposed        WT     WT1    CT     CT1    Proposed
Homo region             25.74  25.11  25.86  25.95  26.94           36.58  35.95  36.70  36.80  37.79
Non-homo region         17.18  19.72  20.25  21.59  21.69           31.55  34.55  34.61  35.95  36.05
Neither region          21.32  22.73  22.10  23.16  23.81           35.79  37.20  36.57  37.63  38.28
Combined fused image    14.73  16.25  16.54  17.43  17.96           29.27  30.82  31.08  31.97  32.49

Similarly, for comparing the edge preservation performance, the edges of the images denoised by the various techniques are obtained by applying the Sobel, Prewitt and Robert edge operators, and the FOM is then calculated as shown in Table 2. It can be seen from Tables 1 and 2 that the proposed approach outperforms the other methodologies used here in noise suppression as well as in the preservation of edges for all three edge operators mentioned above.

Table 2. Edge preservation comparison by the performance index Figure of Merit (FOM x 10^-4), Lena image, noise std. dev. 20.

Edge operator    WT     WT1    CT     CT1    Proposed
Sobel            5858   7183   7442   8342   8374
Prewitt          5904   7239   7452   8363   8429
Robert           6386   7303   7537   8374   8746

5. Conclusions
In the proposed approach, the features of the wavelet and curvelet transforms are utilized separately and adaptively for the homogeneous, non-homogeneous and neither homogeneous nor non-homogeneous classified regions. Further, the information of these different regions has been fused adaptively; therefore the edges are not affected in the denoising process. It is observed that the improvement in SNR is not at the cost of blurring the edges of the denoised image. There is also considerable improvement in noise suppression while keeping the edge information preserved.

References
1. Gonzalez, R.C., Woods, R.E., Digital Image Processing, second ed., Pearson Prentice-Hall, Inc., 2002.
2. S. G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Transactions on Pattern Analysis and Machine Intelligence 2 (7) (1989) 674-694.
3. D.L. Donoho, I.M. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika 81 (3) (1994) 425-455.
4. D.L. Donoho, I.M. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, Journal of Amer. Statistic. Assoc. 90 (432) (1995) 1200-1224.
5. D.L. Donoho, Denoising by soft thresholding, IEEE Trans. on Information Theory 41 (1995) 613-627.
6. H. Gao, A.G. Bruce, WaveShrink with firm shrinkage, Stat. Sin. 7 (1997) 855-874.
7. H. Gao, Wavelet shrinkage denoising using the non-negative garrote, Journal of Computer Graph. Stat. 7 (1998) 469-488.
8. I.K. Fodder and C. Kamath, Denoising through wavelet shrinkage: an empirical study, Journal of Electronic Imaging, July 2001.
9. B.S. Hashim, B.M. Norliza, B.W. Junaidy, Contrast resolution enhancement based on wavelet shrinkage and gray level mapping technique, IEEE Proceedings (2000) 165-170.
10. Q. Zao, L. Zhunag, D. Zhang, B. Zheng, Denoise and contrast enhancement of ultrasound speckle image, ICSP Conference Proceedings (2002) 1500-1503.
11. S. Chang, B. Yu, M. Vetterli, Adaptive wavelet thresholding for image denoising and compression, IEEE Trans. Image Process. 9 (2000) 1532-1546.
12. A. Achim, A. Bezerianos, P. Tsakalides, Novel Bayesian multiscale method for speckle removal in medical ultrasound images, IEEE Trans. Medical Imaging 20 (8) (2001) 772-783.
13. A. Achim, E. Kuruoghlu, Image denoising using alpha-stable distributions in the complex wavelet domain, IEEE Signal Processing Letters 12 (1) (2005) 17-20.
14. M. Mohamad, M. Hamid, Ultrasound speckle suppression using heavy tailed distribution in the dual tree complex wavelet domain, IEEE Conference Proceeding on Wavelet Diversity and Design (2007) 65-68.
15. S. Gupta, R.C. Chauhan, S.C. Saxena, Locally adaptive wavelet domain Bayesian processor for denoising medical ultrasound images using speckle modeling based on Rayleigh distribution, IEE Proc.-Vis. Image Signal Process. 152 (1) (2005) 129-135.
16. M.I.H. Bhuiyan, M.O. Ahmad, M.N.S. Swamy, New spatial adaptive wavelet based method for the despeckling of medical ultrasound images, IEEE Proceeding (2007) 2347-2350.
17. O.V. Michailovich, A. Tannenbaum, Despeckling of ultrasound images, IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 53 (1) (2006) 64-78.
18. M. Nasri, H.N. Pour, Image denoising in the wavelet domain using a new adaptive thresholding function, Neurocomputing 72 (2009) 1012-1025.
19. E.J. Candes, D.L. Donoho, New tight frames of curvelets and objects with piecewise C2 singularities, Comm. on Pure and Applied Maths 57 (2004) 219-266.
20. E.J. Candes, D.L. Donoho, Curvelets [Online]. Available: http://www-stat.stanford.edu/~donoho/Reports/1999/curvelet.pdf
21. J.L. Starck, E.J. Candes, D.L. Donoho, The curvelet transform for image denoising, IEEE Transactions on Image Processing 11 (6) (2002) 670-684.
22. J.L. Starck, F. Murtagh, E.J. Candes, D.L. Donoho, Gray and color image contrast enhancement by the curvelet transform, IEEE Trans. on Image Processing 12 (6) (2003) 706-716.
23. B. Saevarsson, J. Sveinsson, J.A. Benediktsson, Time invariant curvelet denoising, Proceeding of NSPS (2004) 117-120.
24. L. Parthiban, R. Subramanian, Speckle noise removal using contourlet, IEEE Proceeding of ICIA (2006) 250-253.
25. R. Sivakumar, Denoising of computer tomography images using curvelet transform, ARPN Journal of Engineering and Applied Sciences 2 (1) (2007) 21-26.
26. Q.W. Hong, F.C. Sun, N.C. Yan, T.Z. Zong, Edge enhanced speckle suppression using curvelet transform with an optimal soft thresholding, IEEE Proceedings of ICWA and PR (2007) 204-209.
27. J. Sveinsson, J.A. Benediktsson, Combined wavelet and curvelet denoising of SAR images using TV segmentation, IEEE Proceeding (2007) 503-506.
28. E.J. Candes, L. Demanet, D.L. Donoho and L. Ying, Fast discrete curvelet transforms, Applied and Computational Mathematics (2006).
29. A. Thakur, R.S. Anand, Image quality based comparative evaluation of wavelet filters in ultrasound speckle reduction, Digital Signal Processing 15 (2005) 455-465.
30. Pratt, W.K., Digital Image Processing, third edition, John Wiley and Sons, 2006.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

A New Multiple Range Embedding for High Capacity Steganography


S.Arivazhagan
s_arivu@yahoo.com

W.Sylvia Lilly Jebarani


vivimishi@yahoo.co.in

S.Bagavath
bagavath85@gmail.com

Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi 626 005, Tamil Nadu, India

Abstract - Steganographic algorithms mainly focus on the security of the hidden data rather than on the amount of data that can be hidden. A tradeoff between the level of security and the data capacity is inevitable. The proposed algorithm works to relax this tradeoff to an acceptable level. In order to embed high capacity data into the cover without any degradation in the security level, a new Multiple Range Embedding concept is proposed. The number of bits embedded per pixel position is not fixed, which provides a fine layer of security for the embedded data. The usage of the Arnold transform for the secret image, and embedding in the transform domain, provide an additional layer of security for the embedded secret. Keywords - Arnold Transform, Multiple Range, Stego Image

I. INTRODUCTION Steganography is the art and science of writing hidden messages in such a way that no one apart from the sender and intended recipient even realizes there is a hidden message. By contrast, cryptography obscures the meaning of a message, but it does not conceal the fact that there is a message. Information hiding techniques have recently become important in a number of application areas. Digital audio, video, and pictures are increasingly furnished with distinguishing but imperceptible

marks, which may contain a hidden copyright notice or serial number or even help to prevent unauthorized copying directly [1]. Minimum error replacement and error diffusion techniques were used in LSB based high capacity image and audio steganography in order to maintain a minimal error difference [2], [3]. Adaptive embedding, where the number of bits embedded per pixel position is not fixed, increases the level of security in high capacity algorithms [4]. Embedding in colour images added the further advantage of randomizing the embedding of bits among the three colour planes [5]. Chaotic principles [6] were used in data embedding techniques to improve robustness, which showed better results [7], [8]. These steganographic algorithms focus either on the security of the hidden message or on its capacity. The proposed algorithm tunes both the capacity and the security of the hidden message.

II. ALGORITHM
A. Data Embedding
The proposed algorithm works towards embedding high capacity data into the cover image.

Fig. 1. Embedding process. The cover image is decomposed by the DWT into its sub-components (a - approximate, h - horizontal, v - vertical, d - diagonal) for each colour plane; the secret image is Arnold transformed, DWT decomposed and converted into a bit stream; after selection of the embedding position and colour plane, the embedding algorithm is applied and the IDWT produces the stego image.


The base algorithm is abstracted from the Rongrong & Qiuqi (2002) model, where embedding is carried out with one bit per position; here the algorithm embeds more than one bit per pixel position. The entire embedding process is given in Fig. 1. Initially, the secret colour image to be embedded is subjected to the Arnold transform and one of the Arnold transformed images is chosen for embedding; the use of the Arnold transform is to increase the robustness of the secret image. This image is converted into a stream of 1s and 0s, which is embedded inside the colour cover image. The colour cover image is first split into its respective colour planes (R, G and B), and each plane is then transformed using the Discrete Wavelet Transform to obtain the approximation and detail sub bands. The obtained transform coefficients are given by equation (1):

Y = Y_{k,l,o}(x, y), 0 < x < N, 0 < y < N, where k ∈ {R, G, B}, l ∈ {1, 2, 3}, o ∈ {a, h, v, d}   (1)

and R, G, B denote the colour planes, l represents the level of decomposition and o indicates the approximate and detail sub bands. The secret bit stream to be embedded is split into blocks of bits; the number and size of the blocks depend on the number of bands available for embedding, which in turn depends on the level of decomposition. Once the bit stream is ready, the embedding procedure is initialized in a sequential manner. As the cover image is a colour image, three planes, namely R, G and B, are available, and the algorithm chooses only one plane for embedding, depending on the pixel values. So different planes are chosen for embedding at each instance of embedding, which in turn adds a layer of security to the embedded secret. The plane selection is done by arranging the coefficient values in ascending order as given by equation (2), and the plane giving the median coefficient value (Yk2) is chosen for embedding. Once the coefficient is chosen, embedding can be done.

Y_{k1,l,o}(x, y) < Y_{k2,l,o}(x, y) < Y_{k3,l,o}(x, y)   (2)

where k1, k2, k3 ∈ {R, G, B}. The range between Yk1 and Yk3 is split into Q parts. The value of Q depends on the number of bits embedded per pixel position: for embedding n bits per pixel position, Q = 2^n; if, say, the bits to be embedded per pixel position are 4, then the entire range is split into 16 parts. Range splitting with Q = 16 is illustrated in Fig. 2. Each line represents a value corresponding to the bit patterns 0000 to 1111. Let the value of Yk2 be 18. Based on the bits to be embedded, the value of Yk2 is replaced with the value corresponding to the respective line. For example, if the bits to be embedded are 0010, then Yk2 is replaced with the value corresponding to the line representing 0010, i.e. 12; the difference is 6, i.e. 12~18.

Fig. 2. Range splitting for the high capacity model (Q = 16): the interval between Yk1 = 10 and Yk3 = 25 is divided into 16 lines labelled with the bit patterns 0000 to 1111.

In the above mentioned case, the whole range is split into 2^n parts, i.e. 16 parts. The main drawback is that if the bits to be embedded are, for example, 1111, then the shift in the value of Yk2 will be large. Such cases lead to artifacts in the stego image besides a reduction in the PSNR value. To overcome this, Multiple Range embedding is introduced: the range between Yk1 and Yk3 is split into multiple sub ranges depending on the available range Yk1 ~ Yk3. By doing so, the shift in Yk2 is minimized, thus avoiding artifacts. The concept of Multiple Range is explained with the help of Fig. 3.
Fig. 3. Multiple range splitting for the high capacity model: the range between Yk1 and Yk3 is divided into sub ranges (Sub Range 1, Sub Range 2, ...), each of which is split into Q parts labelled 0000 to 1111.

Here the entire range is split into several sub ranges, each with Q parts. From each sub range, the value corresponding to the data to be embedded is determined as before. Among these values, the one giving the minimum difference from Yk2 is selected and Yk2 is replaced with it (see the sketch below). Considering the above example, using a single range Yk2 is replaced with 12; introducing the Multiple Range concept, Yk2 can be replaced with either 11 or 18.5. The minimum difference is given by 18.5, and hence Yk2 is replaced with 18.5.
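The selection of the replacement value can be sketched as follows. This is a hypothetical illustration: the exact placement of the Q levels inside each sub range is an assumption, so the numbers it produces only approximate the worked example above.

def embed_bits(yk1, yk2, yk3, bits, n_sub_ranges=2):
    """Multiple-range embedding: replace the median coefficient yk2 by the
    level that encodes `bits` with the smallest shift from yk2.
    The per-sub-range level placement below is an assumed discretization."""
    q = 2 ** len(bits)                    # e.g. 4 bits -> Q = 16 levels per sub range
    value = int(bits, 2)                  # bit pattern to embed, e.g. "0010" -> 2
    sub_width = (yk3 - yk1) / n_sub_ranges
    candidates = []
    for s in range(n_sub_ranges):
        lo = yk1 + s * sub_width
        step = sub_width / q
        candidates.append(lo + (value + 0.5) * step)   # level encoding `bits` in sub range s
    return min(candidates, key=lambda c: abs(c - yk2)) # minimal shift from yk2

# For the worked example (Yk1 = 10, Yk2 = 18, Yk3 = 25, bits "0010") this picks
# the candidate from the second sub range, i.e. the level closest to Yk2.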

363

The number of sub ranges selected depends on the total available range Yk1 ~ Yk3. If the total range is large, more sub ranges are selected so that the shift in Yk2 remains minimal; otherwise, a minimum number of sub ranges is enough to provide a minimal shift of Yk2. This use of multiple ranges at each pixel position also introduces yet another layer of security.

B. Data Extraction
The colour stego image is split into its R, G and B planes respectively. The three planes are transformed using the same Discrete Wavelet Transform used during embedding. The resultant coefficient values Y are given by equation (3):

Y = Y_{k,l,o}(x, y), 0 < x < N, 0 < y < N, where k ∈ {R, G, B}, l ∈ {1, 2, 3}, o ∈ {a, h, v, d}   (3)

where R, G, B denote the colour planes, l represents the level of decomposition and o indicates the approximate and detail sub bands. For extracting the data, the coefficients are rearranged in ascending order as given by equation (4), and the range between Yk1 and Yk3 is split into Q parts as a whole, or into multiple sub ranges each with Q parts, depending on the total available range Yk1 ~ Yk3.

Y_{k1,l,o}(x, y) < Y_{k2,l,o}(x, y) < Y_{k3,l,o}(x, y)   (4)

where k1, k2, k3 ∈ {R, G, B}. The median coefficient Yk2 represents a binary combination in the split range, and this is the embedded data. Once all the data are extracted, the bit streams are processed to recover the three colour planes of the secret. The recovered secret is still in Arnold transformed form and hence has to be Arnold transformed further to get back the original secret.

III. Results and Discussion
Experimentation is conducted using the proposed Multiple Range Steganographic Model for all detail and approximation bands. As this new model embeds a very high capacity, comparisons with the base model of embedding one bit per pixel are drawn in Table 1 to differentiate the two. Also, to bring out the necessity of the Multiple Range concept, the drawbacks associated with using a single range in the high capacity model are discussed with the results.

The cover and secret images used for testing are shown in Fig. 4. A cover image with little detail information is chosen so that artifacts, if any, can be clearly observed in the stego image. Initially, a secret image of size 38 x 38 is embedded into a cover of size 256 x 256 using the base embedding algorithm. The resultant stego image is free from artifacts and is shown in Fig. 5.

Fig. 4 Cover Image & Secret Image

Fig. 5 Cover & Stego Image Base Embedding Algorithm (PSNR: 63.4751 dB)

Now, using the High Capacity Steganographic model with a single range alone, the experiment is carried out with the same cover size but with a secret of size 64 x 64; the secret to be embedded here has three times the capacity of the previous one. The resultant stego image is shown in Fig. 6. The presence of artifacts can be observed in the stego image, and the PSNR value decreases from 63.4751 dB to 47.9682 dB. This is for two reasons: first, there is a three-fold increase in secret size; second, due to the use of a single range during embedding, the shift in Yk2 may be large at certain positions. For the same images with the same sizes, the Multiple Range concept is then introduced and the embedding is repeated. The stego image resulting from embedding with the Multiple Range concept is shown in Fig. 7; the absence of artifacts and the increase in PSNR value can be observed.


Fig. 6 High Capacity Steganographic Model, Single Range: Cover & Stego Image (PSNR: 47.9682 dB)

Fig. 7 High Capacity Steganographic Model, Multiple Range: Cover & Stego Image (PSNR: 62.9700 dB)

Table 1 High Capacity Steganographic Model (four bits per pixel position) vs. Base Model

                       Base (Rongrong & Qiuqi),         High Capacity Steganographic Model,
                       one bit per pixel position       four bits per pixel position
Cover                  256 x 256                        256 x 256
Secret                 38 x 38                          64 x 64
Embedded bits          38 x 38 x 3 x 9 = 38988 bits     64 x 64 x 3 x 9 = 110592 bits
PSNR                   63.4751 dB                       62.9700 dB
Increase in capacity: 3 times; PSNR: almost the same.

Table 1 compares the results obtained by the base algorithm with those of the proposed multiple range embedding concept. It shows that a three-fold increase in embedding capacity is achieved without any significant loss in PSNR value.

IV. CONCLUSION
The high capacity steganographic algorithm proposed in this paper embeds voluminous data into a colour image without any degradation in the PSNR of the stego image. The multiple range concept reduces the artifacts and helps in maintaining the PSNR value. The usage of the DWT domain and the Arnold transform makes the algorithm more secure.

REFERENCES
[1] F. A. Petitcolas, R. J. Anderson, M. G. Kuhn, Information Hiding - A Survey, Proceedings of the IEEE, Special Issue on Protection of Multimedia Content, vol. 87(7), pp. 1062-1078, 1999.
[2] Nedeljko Cvejic, Tapio Seppanen, Increasing the Capacity of LSB based Audio Steganography, IEEE Workshop on Multimedia Signal Processing, pp. 336-338, Dec. 2002.
[3] Lee, Yeuan-Kwen and Ling Hwei Chen, High Capacity Image Steganographic Model, Vision, Image and Signal Processing, IEEE Proceedings, vol. 147(3), pp. 288-294, 2000.
[4] Lee, Yeuan-Kwen and Ling Hwei Chen, An Adaptive Image Steganographic Model Based on Minimum-Error LSB Replacement, Ninth National Conference on Information Security, pp. 8-15, 1999.
[5] Kundur, D., Hatzinakos, D., Digital Watermarking Using Multiresolution Wavelet Decomposition, IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 5, pp. 2969-2972, 1998.
[6] Der-Chyuan Lou, Chia-Hung Sung, A Steganographic Scheme for Secure Communications Based on the Chaos and Euler Theorem, IEEE Transactions on Multimedia, vol. 6(3), June 2004.
[7] NI Rongrong, RUAN Qiuqi, Embedding Information into Colour Images Using Wavelet, TENCON'02, Proceedings, IEEE Region 10 Conference on Computers, Communications, Control and Power Engineering, vol. 1, pp. 28-31, Oct. 2002.
[8] Bo Yang, Beixing Deng, Steganography in Gray Images Using Wavelet, ISCCSP 2006, Morocco, 2006.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Autotuning of PI controller for an unstable FOPDT process


Utkal Mehta and Prof. Somanath Majhi
Department of Electronics and Communication Engineering, Indian Institute of Technology Guwahati, Guwahati-781039, India. {utkal, smjhi} @iitg.ernet.in

Abstract - The aim of this paper is to present an improved tuning method for the identification and control of an unstable first order plus dead time (FOPDT) process model. The process model parameters are first calculated from a few measurements made on the plots obtained from a single relay feedback test; only a half limit cycle is needed to estimate the model of the process dynamics. Using a PI-PD control tuning method based on the explicit process model, the controller settings are re-tuned non-iteratively to achieve enhanced performance. Simulation examples are given to show the performance of the method.
KeyWords-- Identification; PI-PD Controllers; Relay feedback; Unstable FOPDT.

I. INTRODUCTION

For the purpose of identification and automatic tuning, many industrial controllers are equipped with relay feedback identification. A number of relay based identification methods have been proposed in the literature to obtain process models in terms of transfer functions. Several authors [1-5] have used the approximate describing function (DF) method, with or without modified relays, for estimating process transfer function models. It is observed that while many of the autotuning methods use the DF approximation to a relay in their analysis, very few have applied the exact relay feedback expressions, which involve more complexity. Time domain approaches to estimate the parameters of first order and second order plus dead time (SOPDT) models have also been proposed in the literature [6-9] using symmetrical and asymmetrical relay tests. However, it has been observed that several relay based exact identification methods require solving iterative algorithms or a set of nonlinear algebraic equations simultaneously to estimate the plant model parameters. The accuracy of the relay experiment is adversely affected if measurement noise is present in the system output; in particular, spurious switching of the relay can be experienced. Recently, Mehta and Majhi [10] proposed an identification method based on half limit cycle data obtained from a symmetrical relay test; the method is robust against the chattering problem owing to the hysteresis used in the relay. It has been reported in the literature that, unlike stable processes, an unstable FOPDT process under relay feedback produces a limit cycle only when the ratio of time delay to unstable time constant is less than 0.693. Majhi and Atherton [11] have shown that this ratio can be extended up to 1 by providing an inner feedback proportional controller during the auto-tuning test. Unlike stable processes, it is very difficult to control unstable processes with time delay. Park et al. [12] have proposed an enhanced PID control strategy including an inner feedback loop. Recently, Vivek and Chidambaram [13] extended the set point weighting method for the control of unstable processes; the set point weight reduces the overshoot significantly but does not improve the load disturbance response.

In this paper, unstable FOPDT process model parameters are estimated with less effort and without solving a set of nonlinear equations. The improved identification of the model parameters gives better performance in the model based controller tuning method. Here, the PI controller is tuned based on a modification of the formulas proposed in [14] for PI-PD controller design. The proportional-derivative (PD) controller in the inner feedback loop plays an important role in changing an open-loop unstable process into an open-loop stable process. The proposed control strategy shows better control performance compared with some previous methods.

II. IDENTIFICATION OF PROCESS MODEL

Fig. 1 Structure for identification and control

Consider the structure shown in Fig. 1 for the identification and control of an unstable FOPDT process under relay control. The relay is assumed to have amplitude h and hysteresis width ε. The process dynamics, expressed in the time domain controllable canonical form, has the state and output equations

ẋ(t) = A x(t) + b u(t)   (1)
y(t) = c x(t)   (2)

where A is an n x n square matrix, b is an n x 1 column vector and c is a row vector of dimension 1 x n. Assume that there exists a symmetrical limit cycle with half period T, obtained with the initial condition x(t0). The relay switches from h to -h (as shown in Fig. 2) and provides two different piecewise constant input signals during a half period of the process output. Let the unstable FOPDT process model transfer function be
Gp(s) = k e^{-θs} / (T1 s - 1)   (3)


Fig. 2 shows the waveforms of the process input and output signals when the identification test is conducted with a symmetrical relay. When the FOPDT model transfer function is expressed in state space form, its state equation constants become

A = 1/T1,  b = k/T1,  c = 1   (4)

The output expression derived for an unstable FOPDT process model [10] for the time range t0 to tp becomes

y(t) = e^{(t - t0)/T1} y(t0) - kh(1 - e^{(t - t0)/T1})   (5)

The first derivative of (5) gives

ẏ(t) = (kh/T1) e^{(t - t0)/T1}   (6)

Since the derivatives of ẏ(t) are discontinuous at time t = θ = tp - t0, its first derivative has a large magnitude at that time. Therefore, taking the natural logarithm of (6), one obtains

ln ẏ(t) = ln(kh/T1) + (t - t0)/T1   (7)

Now, at time t = t0, i.e. at the start of the half limit cycle waveform, (7) becomes

ln ẏ(t0) = ln(kh/T1)   (8)

Fig. 2 Half cycle of the relay response curve (relay output u(t - θ) switching between h and -h, process output y(t) with peak amplitude Ap, and the instants t0 and tp) and the plot of ln ẏ(t) for a FOPDT model (slope -1/T1 for a stable FOPDT and 1/T1 for an unstable FOPDT, with initial value ln ẏ(t0)).

Fig. 2 shows the corresponding plot of ln ẏ(t) for the time range t0 ≤ t ≤ T. Initially, the time constant T1 of the FOPDT model can be obtained from the slope of the plot, and the time delay by measuring t0 and tp from the plot as θ = tp - t0. Then, using (8), the steady state gain is estimated as

k = (T1/h) e^{ln ẏ(t0)}   (9)

from the measurement of the initial value of the plot of ln ẏ(t). Thus, all the parameters of the process model are calculated from the relay response data, unlike the previously proposed methods which need to solve a set of nonlinear equations or are based on the approximate DF method.

III. TUNING RULES BASED ON PROCESS MODEL PARAMETERS

Let the PI controller be represented by the transfer function

Gc1(s) = Kc (1 + 1/(Ti s))   (10)

and the PD controller in the feedback loop be expressed as

Gc2(s) = Kf (Tf s + 1)   (11)

Here, the PD controller has an important role in changing the open-loop unstable process Gp(s) into a stable process Gv(s):

Gv(s) = Gp(s) / (1 + Gp(s) Gc2(s))   (12)

Since the characteristic equation of (12) should have negative poles for open-loop stability, it is recognized from the Routh-Hurwitz stability criterion that Tf = θn/2 and Kf = (1/k)(2/θn) will be satisfied [11]. Thus, Gc2 yields an open-loop stable process Gv(s),

Gv(s) = k̄ e^{-θn s} / (T̄1 s + 1)   (13)

where θn = θ/T1, k̄ = k/(2/θn - 1) and T̄1 = θn/2.

Now, the PI controller gains are re-tuned according to the stable process Gv(s) instead of the original unstable process Gp(s). Here, the balanced tuning formulas for the PI controller proposed by Klan and Gorez [14], based on the ITAE performance index, are modified to tune the unstable processes. Simple empirical formulae for the PI controller gains are derived in terms of the unstable process model parameters as

367

2 1.5 Ti n + n + 1.414 n = 0.5 T1 2 n + 1.414 n

(14)

kK c =

0.5 Ti (1.414 n ) 1.5 0.707 n + n

(15)

IV.

SIMULATION STUDIES

To demonstrate the simplicity and effectiveness of the proposed scheme, typical processes used in the relevant literature have been considered. Additionally, noise effect may result in apparently random switching of the relay and failure of the relay feedback test. To overcome the possible failure, the width of the hysteresis of the relay is set to twice of the standard deviation of the noise and the relay height will sufficient to produce a limit cycle with acceptable amplitude level. For ease in presentation of simulation results, the symmetrical relay with height h = 1 and hysteresis width = 0.1 has been set in the following examples. Example 1 Consider an unstable FOPTD system studied by Vivek and Chidambaram [13] is

Fig. 4. (a) Control signal by the proposed method, (b) by Vivek et al.'s method Example 2 Consider the process transfer function studied by Park et al. [12] as

G1 ( s ) =

e 0.2 s s 1

(16)

G2 ( s ) =

4e2 s 4s 1

(17)

Fig. 3. (a) Setpoint response by the proposed method, (b) by Vivek et al. 's method They have identified the process with parameters k = 0.9614 , T1 = 0.9903 and = 0.2044 and suggested PID controller

Fig. 5. (a) Setpoint response by the proposed method, (b) by Park et al. 's method Using the identification procedure given in section II, the parameters are calculated as k = 4.0 , T1 = 4.0 and = 2 indicating high accuracy of the proposed method. From these values, the PI-PD control gains are obtained as K f = 0.5 ,

is 4.9616(1 + 1 / 1.1330 s + 0.1064 s ) . Using the proposed method,

an FOPDT model is estimated with k = 1.0 , T1 = 1.0 and these values, the PI-PD controller gains are obtained as K f = 3.1623 , T f = 0.1 , Ti = 0.3550 and K c = 1.4868 . Figs. 3 and 4 illustrate significant performances of the proposed tuning methods compared to the results obtained by Vivek et al.

= 0.2 to within 103 of these values given in G1 ( s ) . Based on

T f = 0.25 , Ti = 2.5 and K c = 0.1563 . The PID-P controller


parameters of Park et al. are K c = 0.068 , Ti = 1.885 ,

Td = 4.296 and K f = 0.350 .

368

[13] S. Vivek, M. Chidambaram, An improved relay auto tuning of PID controllers for unstable FOPTD systems, Comp. Chem. Eng. 29 (2005) 2060 2068. [14] P. Klan, R. Gorez, Simple analytic rules for balanced tuning of PI controllers, Proceedings 2nd IFAC Conference on Control System Design (2004) 4752.

Fig. 6. (a) Control signal by the proposed method, (b) by Park et al.'s method Figs. 5 and 6 show the responses of both design methods for comparison. The magnitude of step load disturbance is -0.1. It is evident that our proposed design method is not only simple, but also gives a much improved performance.

V.

CONCLUSION

Our identification method estimates exact model parameters from only half limit cycle data for an unstable FOPDT process. Simple tuning rules for PI-PD controllers have been presented based on model parameters show the excellent performance in controlling unstable processes compared with several previous strategies. To prevent the system from oscillating at high frequency and relay switching at wrong instants, relay hysteresis is adopted. Simulation examples are given to illustrate the potential advantages of the proposed method.

REFERENCES
[1] W. L. Luyben, Derivation of transfer functions for highly nonlinear distillation columns, Ind. Eng. Chem. Res. 26 (1987) 24902495. [2] S. H. Shen, J. H. Wu, C. C. Yu, Use of biased-relay feedback for system identification, AIChE 42 (1996) 11741180. [3] G. Marchetti, C. Scali, D. R. Lewin, Identification and control of open loop unstable processes by relay methods, Automatica 37 (2001) 2049 2055. [4] K. Srinivasan, M. Chidambaram, Modified relay feedback method for improved system identification, Comp. Chem. Eng. 27 (2003) 727 732. [5] I. Kaya, D. P. Atherton, Parameter estimation from relay autotuning with asymmetric limitcycle data, J. Proc. Contr. 11 (2001) 429439. [6] Q.G.Wang, C. C. Hang, B. Zou, Low order modeling from relay feedback systems, Ind. Eng. Chem. Res. 36 (1997) 375281. [7] S.Majhi, D.P.Atherton, Autotuning and controller design for processes with small time delay, IEE Proc.-CTA 146 (1999) 415 424. [8] S. Vivek, M. Chidambaram, Identification using single symmetrical relay feedback test, Comp. Chem. Eng. 29 (2005) 16251630. [9] R. C. Panda, C. C. Yu, Analytical expressions for relay feedback responses, J. Proc. Contr. 13 (2003) 489501. [10] U. Mehta, S. Majhi, Estimation of process model parameters based on half limit cycle data, J. Syst. Science Engg. 17(2) (2008) 1321. [11] S. Majhi, D. P. Atherton, Online tuning of controllers for an unstable fopdt process, IEE Proc. - Contr. Theory Appl 147(4) (2000) 421427. [12] J. H. Park, S. W. Sung, I. Lee, An enhanced PID control strategy for unstable processes, Automatica 34(6) (1998) 751756.

4
369

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Order Reduction of Interval Systems Using Routh Hurwitz Array and Continued Fraction Expansion Method
Devender Kumar Saini, (EE), Dr. Rajendra Prasad, (EE)
..(1) G (s) = Abstract-- This paper presents a method for the reduction of the D( s) order of interval systems. Interval arithmetic is used to construct a generalized Routh array for determining the denominator polynomial of the reduced system by Routh Hurwitz array. The 2 n 1 + + + + numerator is obtained by matching the quotients of Cauer second G ( s ) = [c 0 , c 0 ] + [c1 , c1 ]s + [c 2 , c 2 ]s + + [ c n 1 , c n 1 ]s form of Continued Fraction Expansion (CFE). A numerical [ d 0 , d 0+ ] + [ d 1 , d 1+ ]s + [ d 2 , d 2+ ]s 2 + + [ d n , d n+ ]s n example illustrates the proposed procedure.
.. (2)

N ( s)

Keywords Model Reduction, Interval system, Interval arithmetic, Continued Fraction Expansion. I. INTRODUCTION The analysis and design of practical control systems become complex when the order of the system increases. In many situations it is desirable to replace the high-order system by a lower order model, which is sufficient approximate representation of the higher order system. In recent decades, much effort has been made in the field of model reduction for fixed systems and several methods like: Aggregation method [1], Pade approximation [2], Routh approximation [3] , Moment matching [4], Continued fraction expansion [5], technique have been proposed. The Pade approximation technique has many useful features such as computational simplicity and fitting of time moments. A serious disadvantage of this method, however, is that it often leads to an unstable reduced order model even though the original system is stable. In general, the practical systems have uncertainties about its parameters. Thus practical systems will have coefficients that may vary and it is represented by interval. Bandyopadhyay and O. Ismail have presented a procedure by mixing the Routh approximation and Pade approximation [6, 7]. The research on interval systems has received a great deal of attention since the pioneering work of Kharitonov [8]. In this note, model reduction of interval system is attempted. The denominator of interval-reduced model is obtained from the Routh Hurwitz array. The numerator is obtained by matching the quotients Cauer second form of CFE of the original interval system and its reduced model. II. PROBLEM FORMULATION

Where

[c , c ], i = 0,1,2,..., n 1 and [ d i , d i+ ], i = 0,1,2,..., n are the interval coefficient of


The corresponding r order reduced model is
+ + 2 [a0 , a0 ] +[a1 , a1+ ]s +[a2 , a2 ]s + +[ar1, ar+1]sr1 R(s) = + + 2 [b0 ,b0 ] +[b1 , b1+ ]s +[b2 ,b2 ]s + +[br ,br+ ]sk
th

+ i

higher order numerator and denominator polynomials respectively.

(3) Where
i + i

[ a i , a i+ ], i = 0,1,2....r 1

and

are the interval coefficients of lower order numerator and denominator polynomials, respectively. The rules of the interval arithmetic have been defined as follows. Let [a, b] and [c, d] be two intervals. Addition: [a, b]+[c, d] = [a + b, b + d] Subtraction: [a, b]-[c, d] = [a -d, b - c] Multiplication: [a, b][c, d] = [Min (ac, ad, bc, bd), Max (ac, ad, bc, bd)] Division:

[b , b ], i = 0,1,2,..., r

1 1 [ a , b] = [a, b][ , ], [ c, d ] d c Provide 0 [c, d ]. Consider a higher order SISO uncertain system represented by
the transfer function as

370

A. Determination of reduced denominator Constructing the Routh array from the denominator terms of given in equation (2). TABLE I GENERALIZED ROUTH ARRAY FOR DENOMINATOR

+ + + + [ A11 , A11 ] [ A12 , A12 ] [ A13 , A13 ] [ A14 , A14 ] L + + + L L [ A21 , A21 ] [ A22 , A22 ] [ A23 , A23 ] + + [ A31 , A31 ] [ A31 , A31 ] + L [ A41 , A41 ] + [ A51 , A51 ] M

L L

sn s n 1 s n2 s n3 s s M
n4 n5

+ [d n , dn ] [ d n 2 , d n+ 2 ]

[ d n 4 , d n+ 41 ]

+ + + [ d n1 , d n 1 ] [ d n 3 , d n 3 ] [ d n 5 , d n 5 ]
+ + [ d 31 , d 31 ] [ d 32 , d 32 ] KK

Where

Ai , j =
And

Ai 1,1 * Ai 2, j +1 Ai 2, j * Ai 1, j +1 Ai 1,1

(7)

+ + [d 41 , d 42 ] [d 42 , d 42 ] KK
+ [ d 51 , d 51 ] KK

hp =
Let

A p ,1 A p +1,1

;p=1,2,3

.. (8)

order reduced denominator is


+ + 2 [a0 , a0 ] +[a1, a1+ ]s +[a2 , a2 ]s +L+[ar1, ar+1]sr1 + r + r1 + r2 +[dn +L [dn +1r,1, dn+1r,1]s +[dn+2r,2 , dn+2r,1]s +1r,2 , dn+1r,2 ]s

M
Where:

R(s) =

. (9)

d i, j =

d i 1,1 * d i 2, j +1 d i 2, j * d i 1, j +1 d i 1,1

.. (4)

After finding the denominator polynomial the d numerator coefficients are determined by matching the h quotients of Cauer second form of CFE.
TABLE II

Order reduced denominator is:

ROUTH ARRAY FOR FINDING NUMERATOR TERMS OF

Dr (s) = [d

n+1r,1

,d

+ n+1r,1

]s +[d
r

n+2r,1

,d

+ n+2r,1

]s +
[d 0 , d 0+ ]
(5)

r1

REDUCED MODEL.

+ r2 [dn +KK +1r,2 , dn+1r,2 ]s

[ d 1 , d 1+ ]

[d 2 , d 2+ ]

[d 3 , d 3+ ]

B. Determination of reduced numerator The numerator of the reduced model is obtained by matching the quotients of Cauer second form of the original model and reduced model R ( s ) . Let G (s) =
n 1 + + + [c 0 , c0 ] + [c1 , c1+ ]s + [c2 , c2 ]s 2 + + [cn 1 , c n 1 ]s + + [d 0 , d 0+ ] + [d1 , d1+ ]s + [d 2 , d2 ]s 2 + + [d n , dn ]s n

+ + + + , A12 ] [ A13 , A13 ] [ A14 = [ A11 , A14 ] L , A11 ] [ A12 + + + [a0 , a0 ] [ a1 , a1 ] [a 2 , a 2 ] L L + + + L L = [ A21 , A21 ] [ A22 , A22 ] [ A23 , A23 ] + + L [ A31 , A31 ] [ A31 , A31 ] + L L [ A41 , A41 ] + [ A51 , A51 ] M

n1 + + + + [ A , A21 ] + [ A22 , A22 ]s + [ A23 , A23 ]s 2 +L+ [ A2 n, A 2n ]s 21 n + + + 2 [ A11 , A11 ] + [ A12 , A12 ]s + [ A13 , A13 ]s +L+ [ A1n+1, A1+ n+1 ]s

Where:

Ai , j =
So

Ai 1,1 * Ai 2, j +1 Ai 2, j * Ai 1, j +1 Ai 1,1

.... (10)

(6) To evaluate the quotients

[h , h ] ,

+ 1

[h , h ] ,

+ 2

+ [a 0 , a0 ] = [ d 0 , d 0+ ] [ h1 , h1+ ]

[h3 , h3+ ] . constructing Routh array from numerator &


denominator polynomials.

Likewise:
+ + + [a0 , a0 ] [d1 , d1+ ] [a 0 , a0 ] [a0 , a0 ] [a , a ] = + + [d 0 , d 0 ] [h2 , h2 ]
1 + 1

371

To minimize the steady state error the Zeros are adjusted by multiplying the numerator polynomial with the gain correction factor . It can be calculated using the relation

[20.5,21.5]

[35,36]

[17,18] [2,3] [2,3]

G (s) R (s) s =0

. (11)

[15,16] [17.5,18.5] [7.95,14.48]

For interval systems is calculated after converting the interval coefficients of G(s) and R(s) into the fixed coefficients by taking their means. Thus the gain correction factor is
0 0 = a d 0 0

[h1 , h1+ ] =
[h2 , h2+ ] =

[20.5,21.5] = [1.28,1.37] [15,16]


[15,16] = [1.04,2.01] [7.95,14.48]

c b

. (12)

Where
+ + + + c0 + c0 d0 + d0 b0 + b0 a0 + a0 c0 = ; d0 = ; b0 = ; a0 = 2 2 2 2

Constructing the Routh array for matching h coefficient from reduced model

[16.92,26.05] [29.47,35.71] [17,18]


+ [a0 , a0 ] + [ A31 , A31 ]

(13) III. NUMERICAL EXAMPLE Let the transfer function of a third order interval system be given as

[a1 , a1+ ]

G(s) =

[2,3]s2 + [17.5,18.5]s + [15,16] [2,3]s3 + [17,18]s2 + [35,36]s + [20.5,21.5]

+ [a0 , a0 ]=

[16.92,26.05] = [12.35,20.35] [1.28,1.37]

[a1 , a1+ ] = [2.01,38.46]


After steady state gain correction:

N ( s) . D(s)

The Routh array for determination of reduced denominator as

R2 (s) = R2 (s) =

[1.949,37.30]s + [11.978,19.737] [17,18]s 2 + [29.47,35.71]s + [16.92,26.05]

s3

[2,3] [35,36]

s2

[17,18] [20.5,21.5]
[29.47,35.71]

s1

IV. SIMULATION RESULTS


Step Response
0.8

s0

[16.92,26.05]

The second order reduced denominator

0.7
0.6
0.5

D2 ( s ) = [17,18]s 2 + [29.47,35.71]s + [16.92,26.05] D2 ( s) is a stable interval polynomial because D ( s ) is a


Amplitude

stable interval polynomial. Thus the second order model becomes

0.4
0.3
0.2
0.1
0
-0.1
0

R( s) =

+ [a0 , a0 ] + [a1 , a1+ ]s [16.92,26.05] + [29.47,35.71]s + [17,18]s 2

+ [a 0 , a0 ] is obtained by equating the quotients of Cauer

Original model

second form of the system and reduced model. Constructing Routh array for determine quotients of Cauer from original system.

Reduced model
1

Time (sec)

372

Step Response
1.4
1.2
1

system in advance. The reduced order models shows excellent step as well as frequency responses. The method is computationally simple and produces stable reduced order models for stable systems.

REFERENCES
[1] M. Aoki: Control of large-scale dynamic system by aggregation, IEEE Trans. On Automatic Control, Vol. AC-13, 1968. Y. Shamash; Stable reduced order models using Pade type approximation, IEEE Trans. On Automatic Control, Vol. 19, 1974, pp. 615-616. M.F. Hutton, B. Friendland: Routh approximation for reducing order of linear time varying systems, IEEE Trans. Automat. Control. Vol. 20, 1975, pp.. 329-337. N. K. Sinha, B. Kuszta: Modeling and identification of dynamic systems, 133-163, New York: Van Nostrand Reinhold, 1983. CHEN, C. F., and SHIEL, L.S.: A novel approach to linear model simplification, Int. j. Control, 1968, 8, pp. 561-570. B.Bandyopadhyay. et. Al.: Routh-Pade approximation for interval systems, IEEE Trans. Automat. Control, Vol. 39, 1994, pp. 2454-2456. B. Bandyopadhyay, A.Upadhye, O. Ismail: Routh approximation for interval system, IEEE Trans. Automat. Control, Vol.42, Aug. 1997, pp. 1127-1130. V. L. Kharitonov: Asymptotoc stability of an equilibrium position of a family of system of linear differential equation. Differentzialnye Uravneiya, Vol. 14, 1978, pp. 2086-2088.

Am plitu de

0.8
[2]

0.6
[3]

0.4
0.2
0
0

Original model

[4] [5] [6] [7]

Reduced model
1

Time (sec)
Bode Diagram
10
0

[8]

Magnitude (dB)

-10
-20
-30
-40 45

Original model
Phase (deg)
0
-45
-90
-135
10
-2

Reduced model

10

-1

10

10

10

Frequency (rad/sec)

Fig.1. Step response for upper (i) & lower limit (ii) and Bode plot for
original and reduced model (iii).

V. CONCLUSION A simple mixed method is proposed to interval polynomials for obtaining the stable reduced order system. The method is based on Routh Hurwitz array & CFE and is computationally better then some existing methods in the literature. The proposed method requires only the formation of Routh table and computation of quotients of CFE of second Cauer form. Indirectly it matches the time moments and does not require the calculation of time moments of the original high-order

373

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Model Reduction of Unstable Systems using Linear Transformation and Balanced Truncation
Nidhi Singh and Rajendra Prasad

Abstract A simple method of model reduction based on the


concept that linear transformation in the s-plane preserves the input-output properties of a system is presented in this paper. The proposed algorithm consists of the application of linear transformation on the unstable system which is reduced by the balanced truncation method. The reduced model is inverted back by using reverse linear transformation. Linear transformation parameter a is chosen based on some heuristic criteria. The method is illustrated with examples. Index Terms unstable systems, balanced truncation, linear transformation, model reduction

In this paper balanced realization method coupled with the linear transformation is used to reduce the unstable systems. Section 2 of this paper describes the balanced truncation method and section 3 explains how to choose the general point a for linear transformation. The proposed algorithm is given in section 4 and is illustrated with the help of two examples in section 5. Consider a continuous time, linear time invariant system described by a state space realization

& = Ax + Bu x y = Cx

(1)

I. INTRODUCTION The advantage of obtaining a reduced model is seen, when one is confronted with a complex high order system as theoretical analysis is too complex for practical implementations. Hence, the simplification of a high order system is highly desirable for analysis and control design. This paper deals with the open loop model reduction of unstable LTI control systems. A wide range of methods for model reduction have been proposed in last years described in both time and frequency domain for SISO and MIMO systems[1,2]. Many of the existing model reduction methods developed are suitable for stable systems but not for unstable systems. To control an unstable system, it is required to stabilize the system. The typical approach for this is to decompose an unstable system into stable and unstable subsystems. The reduction method is applied on stable subsystem and then directly added to the decomposed unstable part. Among so many methods developed for model reduction, balanced realization and optimal Hankel norm approximation methods have drastically changed the status of model reduction theory[3]. A survey of balancing related model reduction methods is available in [4]. The hankel norm approximation method, which minimizes the hankel (s) , norm G ( s ) G a measure of most

where x R , u R , y R
n p

with p inputs and q

outputs and ( A, B, C ) are constant matrices of appropriate dimensions. The system of (1) can be described equivalently by Y ( s ) = G ( s )U ( s ) (2) where the nth order transfer function G(s) is given by

G ( s ) = C ( sI A) 1 B

(3)

With the original system described above, the problem is to find a reduced order model such that the reduced model retains the important characteristics of the original system and approximates its response as closely as possible for the same type of inputs. The corresponding rth (r<n) reduced order model in state space form is as

& r = Ar x r (t ) + Br u (t ) x

y r = C r x r (t )
r q p

(4)

controllable/observable state was developed in [5].


Nidhi Singh, Professor, Electronics Engineering Department, RAIT, Dr. D. Y. Patil, Vidyanagar, Nerul, Navi Mumbai. Email: nidhidee@iitr.ernet.in Rajendra Prasad, Professor, Electrical Engineering Department, IIT Roorkee, Uttarakhand. Email: rpdeefee@iitr.ernet.in

where x r R , u R , y r R and Ar , Br ,Cr are reduced matrices of appropriate dimensions such that yr(t) is a close approximation of y(t) for all inputs u(t). The corresponding reduced transfer function can be obtained by using eqn. (3).

374

II. BALANCED TRUNCATION METHOD In [3], Moore developed the balanced realization method of model order reduction. This method has many attractive features but it has a shortcoming that it is applicable only for stable systems. The necessary and sufficient conditions for balancing unstable minimal MIMO systems were developed in [6], while a balanced realization algorithm for nonminimal MIMO systems is proposed in [7]. Low frequency approximation using balancing method for unstable system is given in [8]. The balanced truncation technique is based essentially on simultaneous diagonalization of controllability grammian (Wc) and observability grammian (Wo) matrices using appropriate similarity transformations. In this technique the reduced order model is obtained by direct elimination of weak subsystem (least controllable and least observable states), whose contribution to the impulse response of the system is negligible. The most controllable and observable part is then used as low order approximation for the model. For a linear nth order SISO asymptotically stable linear time invariant system described by eqn (1) Wc and Wo are given by the solution of Lyapunov equations-

A A = 11 A21

A12 B , B = 1 , C = [C1 A22 B2

C2 ]

(8)

The rth order reduced order model is (A11, B1, C1).

III. CHOICE OF

PARAMETER A

Concept of shifting of jw axis was appeared in [10] to deal with unstable systems using balanced structures. Here the value of shift parameter a is a scalar number larger then the real part of the largest unstable mode of a given system. It is known that for complex systems poles and zeros cluster in a particular zone of s-plane. To take care of this clustering phenomenon a centroid like point a is chosen based on the heuristic criteria as given below [11]a may be chosen as arithmetic mean (AM) of the magnitude of real part of the poles pi given by

W cA + AWc = BB
T

a=
i =1

pi n

(9)

W oA + AT Wo = C T C

(5)

The realization S(A,B,C) is said to be internally balanced [9] if the grammian matrices are diagonal and equal Wc = Wo = W= The matrix is diagonal and the diagonal positive elements

After several experimentations it has been found that for systems having a wide spread of poles but dominated by small magnitude poles the value of a from the relation (9) becomes very large and may eventually lead to an unstable reduced order model. For such cases a may be chosen to be harmonic mean (HM) of p i
n 1 1 n = a i =1 p i

(10)

are called second order modes or singular values

of the system.

= diag ( 1 ............ n )
where

(6)

a could also be chosen to be the geometric mean(GM) of

pi
a = ( pi
n i =1

1 > 2 > ............ > r >> r +1 > .......... > n

(11)

The matrix can be partitioned into two submatrices 1 and 2 in the following way

= 1 0
Where

0 2

(7)

1 = diag ( 1 ............ r ) 2 = diag ( r +1 ............ n ) .

and

It should be noted that the selection of shifting value a is critical for a particular model reduction scheme. Frequency response of the original system can be utilized to select a proper shifting value such that the error bound between the original and reduced model is satisfactory for a given frequency range.

IV. PROPOSED ALGORITHM Then A, B, C matrices are partitioned accordingly The proposed algorithm of model reduction is as follows-

375

Step Response

( s ) = G ( s + a) , where a 1. Transform G(s) into G


will be either of AM, HM or GM. 2. 3.

0.04 0.03 original system reduced system

Amplitude

( s ) by balanced Obtain the reduced order model R truncation method. Apply back transformation to obtain the reduced order model of the original system. i.e. (s a) R( s) = R
V. ILLUSTRATIVE EXAMPLES

0.02 0.01 0 -0.01 -0.02 -0.03 -0.04 -0.05

Example :1 Consider an unstable system given in [12]G ( s) = 2.5s 3 + 1.48s 2 + 2.56s + 1.83 s 7 + 5.3s 6 + 62.94s 5 + 261.16s 4 + 1063.1s 3 + 3080.9s 2 + 5870.8s + 2283.6

-0.06

4 Time (sec)

Fig. 1. Step response of original and reduced system

Values of AM, HM and GM are respectively 0.9285, 0.4814 and 0.6269. After applying linear transformation with a = 0.4814, transfer function is (s) = G 2.5s 3 + 5.091s 2 + 5.723s + 3.684 s + 8.67 s + 83.12s 5 + 435s 4 + 1726s 3 + 5054s 2 + 9711s + 5958
7 6

32.09 0.76257 3.2509 0.022567 36.617 10.8897 9.2572e 5 1.8997 0.98312 7.2562e 4 0.1708 4.9652e 3 0.012338 22.396 2.6316 8.7582e 4 31.604 11.720 A= 0 0 1 0 0 0 0 0 0 0 0 30 0 0 0 0 0 30 0 BT = 0 0 C= 0 0 0 0 0 30 0 0 0 30 0 1 0 0 0 0 0 0 1 0 30

Reducing the system to 4th order with balanced truncation gives


3 2 ( s ) = 0.002466 s 0.03465s + 0.3726 s + 0.08925 R s 4 + 2.229 s 3 + 51.28s 2 + 42.17 s + 554

Values of AM, HM and GM are respectively 11.2248, 0.8748 and 2.9548. The linear transformation is done using a = 0.8748. The transformed system is then reduced with the balanced truncation method.
1.836s 2 + 77.62s + 74.21 3.546s 2 112.9s 107.5 ( s) = 1 R (s ) 901s 2 + 405.8s + 300.4 2.455s 2 97.18s 346.8 D

Again applying back linear transformation reduced model comes out to be-

R( s) =

0.002466 s 3 0.03821s 2 + 0.4077 s 0.09844 s 4 + 0.3037 s 3 + 49.45s 2 6.097 s + 545.3

where

(s ) = s 3 + 31.45s 2 + 11.57 s + 2.057 D

The step response of original and reduced model is shown in figure 1. It is clear from the figure that the reduced model approximates the original system very closely.

And after applying back transformation, the reduced system is


R( s) = 1 1.836s 2 + 80.83s + 4.906 3.546s 2 106.7 s 11.47 D(s ) 901s 2 1171s + 634.8 2.455s 2 101.5s 259.9

Example:2 Consider a MIMO system of NASA HIMAT data taken from [13]. where

D(s ) = s 3 + 28.82s 2 41.15s + 15.33

376

The step response of original and reduced model is shown in figure 2 and it is observed that the reduced order model obtained is giving the same response as given by the original high order system.
Step Response From: In(1) 100 50 T o: O ut(1) 0 -50 -100 Amplitude -150 400 200 To: O ut(2) 0 -200 -400 0 1 2 3 0 Time (sec) 1 2 3 original system reduced system From: In(2)

[5]

[6]

[7]

[8]

[9]

[10]

[11]

[12]

[13]

Fig. 2. Step response of original and reduced system.

[14]

Glover Keith, All optimal Hankel-norm approximations of linear multivariable systems and their L error bounds, Int. J. Control, vol39, pp 1115-1193, 1984. J C. Kenny & G.Hewer, Necessary and sufficient conditions for balancing unstable systems IEEE Trans. Automatic Control, vol-AC32, No-2, pp 157-160, 1987. C.P.Therapos, Balancing transformations for unstable nonminimal linear systems IEEE Trans. Automatic Control, vol-34, No-4, pp455457, April 1989. Tai-Yih Chin, Model reduction by the low frequency approximation balancing method for unstable systems IEEE Trans. Automatic Control, vol-41, No-7, pp 995-997, July 1996. P.T. Kabamba, Balanced gains and their significance for L-2 model reduction IEEE Trans. Automatic Control, AC-30, No. 7, pp 690-693, July 1985. A. Zilouchian, Balanced structures and model reduction of unstable systems, IEE Proc. Of Southeastcon91, vol-2, pp 1198-1201, 7-10 April 1991. Rajendra Prasad, Jayanta Pal & A. K. Pant, Linear Multivariable system reduction by continued fraction expansion about a general point a, Advances in Modelling and Simulation, AMSE Press, vol-19, No-4, pp 47-58, 1990. Jie Yang, C.S. Chen, J.A. De Abreu-Garcia & Yangsheng Xu, Model reduction of unstable systems. Int. J. System Sc., vol-24, No-12, pp 2407-2414, 1993. S.K. Nagar & S.K.Singh, An Algorithmic approach for system decomposition and balanced realized model reduction. J. of the Franklin Institute, 341, pp-615-630, 2004. Robust Control Toolbox, MATLAB, version 6.5, Release 13, June 2002.

VI. CONCLUSION In this paper, linear transformation coupled with the balanced truncation method is used to reduce the unstable systems. It is shown that inspite of choosing the transformation parameter a randomly, it can be chosen by the heuristic criteria based on calculating a centroid like point given by AM, HM and GM of the original system. The system decomposition and model reduction algorithms are fully implementable using MATLAB package [14]. The effectiveness of the algorithm is illustrated with the help of two examples. The results for both SISO and MIMO systems are quite good for the a chosen on the basis of above criteria. The proposed method may also be used for controller reduction.

REFERENCES

[1]

[2]

[3]

[4]

Genesio R & Milanese, A note on the derivation and use of reduced order models, IEEE Trans. on Auto. Control, Vol-AC-21, pp 118-122, 1976. Bonvin D. & Mellichamp D.A, A unified derivation and critical review of modal approaches to model reduction, Int J.Control, vol-35, No-5, pp 829848, 1982. Moore B.C, Principal Component Analysis in linear systems: Controllability, Observability and model reduction, IEEE Trans. Automatic Control, vol-AC-26, No-1, pp 17-31, Feb-1981. Serkan Gugercin and Athanasios C Antoulas, A survey of model reduction by balanced truncation and some new results, Int. J. Control, vol-77, No-8, pp 748-766, May 2004.

377

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Discrete-Time Sliding Mode Tracking Control for Uncertain systems

Sanjoy Mondal and Dr. Chitralekha Mahanta


Department of Electronics and Communication Engineering Indian Institute of Technology Guwahati Guwahati-781039,India {m.sanjoy, chitra}@iitg.ac.in

AbstractThis paper proposes an approach for tracking of a discrete-time linear uncertain system with matched disturbance, using multirate output feedback control technique. The proposed discrete-time sliding mode control feedback scheme does not need any information about all the states as it takes only the output samples and past input for designing the controller. The tracking performance of the controlled systems with its asymptotic stability is found to be satisfactory. The reaching law used in designing the controller eliminates high frequency chattering. The tracking result is verified by simulation. KeyWords-Sliding Mode Control (SMC),discrete-time systems, multirate output feedback, uncertain systems. I. INTRODUCTION

which use multirate output feedback technique. In this paper, a robust sliding mode tracking controller is proposed based on multirate output feedback [8] design, for uncertain discrete time systems. In this technique, the output is sampled at a rate faster than the control input. Consequently, the control algorithm is based on output feedback and at the same-time, is applicable to all controllable and observable systems.

II.

MULTIRATE OUTPUT FEEDBACK

The concept of sliding mode control (SMC) has received much attention in the past few decades. SMC is a technique in which an appropriate input is applied so that the states are confined to a desired sliding manifold. The concept of SMC was proposed by Emelyanov [1] and Utkin [2], who showed that sliding mode could be achieved by changing the controller structure. The state trajectory is forced to move along a chosen manifold in the state space, called the sliding manifold, by the use of an appropriate variable structure control signal. The use of digital computers in controller design in recent years, has made the use of discrete-time representation is more suitable than continuous time representation [3]. Sliding mode control strategy is one of the robust control techniques, which is invariant against the matched parameter variations and disturbance. In case of discrete-time sliding mode control, the measurement and control signal application are performed only at after regular intervals of time and the control signal is held constant in between these instants. There has been a considerable amount of research done in discrete sliding mode controller. The above mentioned sliding mode control strategy is the state feedback control and it is assumed that all the states of the systems are available. But in real case this is not always true. In this paper we used sliding mode based multirate output feedback technique to design a tracking controller. The multirate output feedback scheme was originally proposed by Werner [4] . Then afterwards it has been used to stabilize systems [3]. In [5] Saaj et al. presents fast output sampling feedback technique to stabilize a linear system. In [6] Janardhanan and Bandyopadhyay proposed a controller which uses the Bartoszewicz [7] reaching law to stabilize linear systems with disturbances. In this paper the chattering effect is neglected by avoiding the switching component in the reaching law. In [8] Janardhnan and Kariwala proposed a optimal sliding surface to stabilize a linear system with disturbances. But till now there are not many tracking controllers

In case of designing a state feedback controller for sliding mode, all the states of the system must be available. But in practice all the states are seldom available. These difficulties can be avoided by designing proper observer for the system, but this increases the complexity in the system. In case of multirate output feedback, the error between the computed state and the actual state of the system goes to zero in definite time interval , once the output measurement is available. However, in case of an observer, the error between the estimated states and the actual states decreases asymptotically, but generally goes to zero only as time approaches infinity. Even the best designed Luenberger observer can assure zero error only after infinite sampling instants [3]. Multirate output feedback technique [9] can be used to design the tracking controller, which yeilds the desired performance as well as maintaining the overall stability of the system. In multirate output feedback scheme the output is sampled faster than the input. Consider the system described by the following equation[10]

 (t ) = Ax(t ) + Bu (t ) + Bd (t ) x y (t ) = C1 x(t )

(1)

Let the above continuous system be sampled at samples per second, assuming that the disturbance does not change during the sample time. So the discrete equivalent of the above plant is given as

x(k + 1) = x(k ) + u (k ) + d (k ) y (k ) = C1 x(k )

(2)

Where,

x(k ) R n , u ( k ) R m , y ( k ) R p
are respectively the state, input and output of the system ,

, C1

are the matrices of appropriate dimensions and

d (k ) is

the matched uncertainty. The value of above matrices can be calculated as follows in which the output is discredited at a sample frequency ,

378

= e A
= e A( t ) Bdt
0
( k +1)

d (k ) =

e A(( k +1) t ) d (t )dt

(3)

We make the following assumptions, Disturbance is slowly varying and remains unchanged during the sample period . Matrices Matrices

0 C C C , D0 = = C0 # # N 2 i N 1 C C i =0 From (7) the value of x( k ) can be written

in terms of output samples, delayed control input and disturbance vector


T T x(k ) = (C0 C0 ) 1 C0 ( yk +1 D0 )u (k ) D0 d (k ))

The input is applied with a sampling period , and the output is

( , ) are controllable. ( , C1 ) are observable.

(8)

From (6) and (7) the state can be expressed as a function of multirate output samples, past control input and past disturbance signal,

applied with a faster sampling period = / N , where N is an integer greater than or equal to the observability index [3] .When the system (2) discretized at a higher sample period , the state equation becomes

x(k ) = Ly yk + Lu u (k 1) + Ld d (k 1)
Where,
T T Ly = (C0 C0 ) 1 C0

(9)

x(k + 1) = x(k ) + u (k ) + d (k ) y (k ) = C1 x(k )

Lu = Ly D0

(4)

Ld = Ly D0
III.

(10)

the

system is independent of or and on the value of N . It can be shown that [5] = N


= i
i =0 N 1

By simple matrix manipulation, it can be shown that the relationship between the system parameters of the system and is only dependent

DESIGN OF SLIDING SURFACE AND CONTROL LAW

Considering the system (1), our objective is to design a control law that will asymptotically track the reference signal xd ( k ) . In this proposed method we choose a sliding surface

N 1 D = i D i =0

s (k ) = cT e(k ) , where e(k ) = x(k ) xd (k ) x(k ) the states and xd (k ) reference signal. We have, s (k ) = cT ( x(k ) xd (k )) s (k + 1) = cT ( x(k + 1) xd (k + 1)) x(k + 1) = x(k ) + u (k ) + d (k ) in
Putting the value of (11) , the

(5)

Then the multirate output feedback representation of the system with sampled at an interval and the input sampled at an interval , can be written as,

(11)

x(k + 1) = x(k ) + u (k ) + d (k )

(6)

expression becomes

The lifted output equation which is sampled at a higher rate given by

is
get

s(k + 1) = cT ( x(k ) + u (k ) + d (k ) xd (k + 1)) (12)


Using (7) and replacing the value of the state

Furthermore, represented as

yk +1 = C0 x(k ) + D0u (k ) + D0 d (k ) (7) the past N multirate sampled outputs are

x(k ) in (12), we

s (k + 1) = cT ( Ly yk + Lu u (k 1) + Lu d (k 1))

y ((k 1) ) y ((k 1) + ) yk = # y (k ) Where the coefficients C0 and D0 can be expressed as,

+cT u (k ) + cT D (k ) cT xd (k + 1) (13)
To reach the sliding surface in one sampling instant reaching law becomes s (k + 1) = 0 , but the expression (13) contains the uncertain term c Lu d1 ( k 1) , so the control law cannot be implemented, Let all the uncertainties are bounded, whose upper and lower bound are known [11]. Let
T

 (k 1) and cT D = d  (k ) cT Lu d1 (k 1) = d 2   s (k + 1) = d1 (k 1) d1m + d 2 (k ) d 2 m

(14)

379

Where, d1m and d 2 m are the mean values and d1s and d 2 s are the spread values of the bounded uncertainties, thus the mean and spread of the system are given by [6],

the (12) the control law can be derived as follows,

d + d1u d d , d1s = 1u 1l d1m = 1l 2 2 d 2l + d 2u d 2u d 2l (15) d2m = , d2s = 2 2 dl = Lower bound of the disturbance, du = Upper bound of disturbance. Using the reaching law (14) and s ( k + 1) from
cT ( Ly yk + Lu u (k 1) + lu d (k 1))

u (k ) = [0.79 1]xd (k + 1) [0.58 1.3683] yk 0.0017u (k 1) 1

(20)

Which shows that the control is independent of the states, it is only dependent on output samples and delayed input. Also it is independent of any switching function. Thus the control signal is free from high frequency chattering. The simulation results show that tracking error goes to zero Fig.1. Thus it indicates the stability of the sliding surface which asymptotically decays to zero.

+ cT u (k ) + cT D (k ) cT xd (k + 1)  (k 1) d + d  (k ) d =d
1 1m 2 2m

(16)

Thus the control law becomes

u (k ) = (cT ) 1 (cT xd (k + 1) cT Ly yk

cT Lu u (k 1) d1m d 2 m )

(17)

The above control law is freeing of uncertain terms so can be implemented to get the desired tracking performances.

IV.

ILLUSTRATIVE EXAMPLE

In this section an example is taken to validate the proposed method [3]. For these a linear system with uncertainties has chosen which is then discretized and the proposed control law (17) is applied to produce the required tracking performance. The continuous time model of the system is given by,

6.046 12.092  (t ) = x x(t ) + 12.092 6.046 6.046 6.046 6.046 u (t ) + 6.046 d (t ) y (t ) = [1 0]x(t ) (18) Where d ( k ) is the bounded state independent uncertainty. If the system is sampled with a sample time = 0.1sec , the
discretized model of the above plant can be expressed as,

Fig.2. and Fig.3. Shows the states

x1 and x2

respectively

0 1 0 0 x(k + 1) = x(k ) + u (k ) + d (k ) (19) 1 1 1 1


Choosing a sliding surface

s (k ) = [0.79 1]( x(k ) xd (k )) = 0 and N = 2 and , = 0.1 . The resultant control (17) is designed to produce a refercence tracking xd ( k ) = 2 . Let d1m = 0.5 and d 2 m = 0.5 and all the initial conditions of the different states are assumed to be [3 0] respectively. The resultant control law
satisfying the state feedback reaching law (14) can be formulated as

The combined response of the state Fig.4. Which shows that state

x1 and xd (k ) is shown in

x1 exactly tracks the reference signal

xd (k ) .

380

V.

CONCLUSION

A multirate output feedback based sliding mode controller for discrete-time linear time-invariant systems with uncertainty has been proposed. The proposed control algorithm makes use of only the past input and output samples of the systems to provide reasonably good tracking performance. Hence, it is more practical as compared to a state feedback based control algorithm where an observer is designed or it is based on the assumption that all the states are available. The reaching law is free from any switching function, so chattering is avoided. The proposed control algorithm was simulated through a numerical example and the results were found to be satisfactory.

REFERENCES
[1] S. V. Emelyanov, Variable structure control systems, Nauka, vol. 35, pp. 120125, 1967. [2]V. I. Utkin and K. K. D. Young, Methods for constructing discontinuity planes in multidimensional variable structure systems, Automatic Remote Control., vol. 39, pp. 14661470, 1978. [3] S. Janardhanan and B. Bandyopadhyay, Multirate output feedback based robust quasi-sliding mode control of discrete-time systems, IEEE Transactions on Automatic Control, vol. vol. 52, no. 3, p. 499503, 2007. [4] H. Werner, Multimodel robust control by fast output sampling an lmi approach, Automatica, vol. 34, pp. 16251630, 1998. [5] B. B. C. M. Saaj and H. Unbehauen, A new algorithm for discretetime sliding mode control using fast output sampling feedback, IEEE Transactions on Industrial Electronics, vol. 49, pp. 518523, 2002. [6] S. Janardhanan and B. Bandyopadhyay, Output feedback sliding-mode control for uncertain systems using fast output sampling technique, IEEE Transactions on Industrial Electronics, vol. 53,no.5, pp. 16771682, 2006. [7] A. Bartoszewicz, Discrete-time quasi-sliding-mode control strategies. ieee trans. on ind. electron., 45(1):633637, 1998, IEEE Transactions on Industrial Electronics, vol. 45(1), pp. 633637, 1998. [8] Janardhanan and Kariwala, Multirate-output-feedback-based lqoptimal discrete-time sliding mode control, IEEE Transactions on Automatic Control, vol. 53 No.1, pp. 367373, 2008. [9] S. Janardhanan and B. Bandyopadhyay, On discretization of continuous-time terminal sliding mode, IEEE Transactions on Automatic Control, vol. 51 no 9, pp. 15321536, 2006. [10] K. J. Astrom and B. Wittenmark., Computer-Controlled Systems, Theory and Design. Prentice-Hall, 1990. [11] B. B. G. D. Reddy and A. P. Tiwari, Multirate output feedback based sliding mode spatial control for a large phwr, IEEE Trans. Nucl. Sci, vol. 54 No.6, pp. 26772686, 2007.

The control input derived from (20) is plotted in Fig.5. Which shows the magnitude of the effort. The reaching law considered in this paper completely removes the chattering phenomenon.

381

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

SMES Base Multi-area AGC for Restructured Power System


Sandeep Bhongade#1,Prof. H. O. Gupta#2,Dr.BarjeevTyagi#3 # Electrical Engineering Department, Indian Institute of Technology, Roorkee, India 1 bhongadesandeep@gmail.com 2 harifee@iitr.ernet.in,3btygfee@iitr.ernet.in Abstract-- In this paper a general method for tuning of Proportional-Integral (PI) controller including Superconducting Energy Magnetic Storage (SEMS) unit for a single area and multi-area Automatic Generation Control Scheme in a deregulated electricity market has been proposed. The effect of the boiler system and governor dead band effects are neglected for simplicity. Parameters of the controller have been tuned by minimizing the Area Control Error (ACE) using genetic algorithm. Index Terms - Deregulated market, Superconducting Energy Magnetic Storage unit, Genetic Algorithm, PID controller, Automatic Generation Control I. INTRODUCTION this work fixed participation factor has been considered for the Gencos. The performance studies have been carried out by using the MATLAB SIMULINK. II. AUTOMATIC GENERATION CONTROL SCHEME A complete block diagram representation of isolated power system comprising turbine, generator, governor and load is obtained by combining the block diagram of individual components. The block diagram with feedback loop is as shown in Fig. 1 [3].

THE objective of providing an Automatic Generation Control (AGC) has been to maintain the system frequency at nominal value and the power interchange between different areas at their scheduled values. Maintenance of system frequency at the nominal value is the joint responsibility of all the generators owned by the utilities. Control area concept has been used to implement AGC scheme. The concepts of the conventional AGC are well discussed in [1], [2], [3] and [4]. Conventional schemes of automatic generation control (AGC) have evolved over the past several decades and are in use on interconnected systems. In a deregulated electricity market, there will be many market players such as generating companies (Gencos), distribution companies (Discos), and transmission companies (Transcos) and system operator (SO). For stable and secure operation of power system, SO has to provide number of ancillary services. One of the ancillary services is the 'frequency regulation' or 'load following' based on the concept of Load Frequency Control. A detailed discussion on Load Frequency Control issues in power system operation after deregulation is reported in reference [5]. unit turbine. Frequency transients must be eliminated as rapidly as possible. An SMES unit is placed in the power system to fulfill this requirement [10]. In this work, Genetic Algorithm based controller has been developed for multi area AGC scheme. The developed GA based controller has been tested on a practical power system network representing 39- bus New England system. A deregulated electricity market scenario has been assumed in the 39- bus system, which has been divided into two control areas. The participation factor of different Gencos can be determined by using the bids of the generators [5]. But in

Fig. 1 Linear model of isolated system Where, f Change in frequency of the generator, PG Change in governor output power, P V Change in valve output power, PT Change in turbine output power, P D Electrical load variation, Pref Change in reference power setting TH Hydraulic time constant, TT Time delay between the control valves is opened and the steam flow will reach the turbine cylinder, KP Transfer function gain of generator TP Time constant of generator, R is A constant term called regulation or droop. The problems of frequency control of inter-connected areas, or power pool are more important than isolated area system. Practically, all power systems today are interconnected. The advantage of interconnected system is that in normal operating condition each pool member or control member carry its own load. But in abnormal condition they share the load mutually. The advantages of belonging to a pool are particularly evident under emergency condition. The problems of frequency control of inter-connected areas, or power pool are more important than isolated area system. The controller shown in Fig. 1 and Fig. 2 may be either an integral controller having a st K (1) C ( s ) Kp i sK
s
d

Where, C(s) Transfer function of PID controller, K P = Proportional gain, Ki = Integral gain ,Kd = Derivative gain. In competitive electricity market environment there may be number of Discos in one area. Lets consider l number of Discos in area-i and, PD1 , PD 2 , ....... are their load variations. Net load variation i PDK

....... PDl n ith area the system is P D1 P D 2 ...... P DK ..... P Dl .


To meet out these load changes Discos contract with their own area Gencos, other areas Gencos and system operator. In a deregulated electricity market various types of

382

transactions, such as poolco based transactions, bilateral transactions and a combination of both are possible. The block diagram with feedback loop is as shown in Fig. 2 [3].

factor (apf) of the area. If there are k number of Gencos in area-i, then

apf
i 1

In a practical multi area power system, a control area is interconnected to its neighboring areas with tie lines, all forming part of the overall power pool. If P ij is the tie line real power flow from an area-i to another area- j and m is the total number of areas, the net tie line power flow from area-i will be Fig. 2 AGC Block Diagram For Area-i In Poolco based transaction the Discos and Gencos of the same area participate in the frequency regulation through system operator. System operator (SO) accepts bids (volume and price) from power producers (Gencos) who are willing to quickly (with in about 10-15 minutes) increase or decrease their level of production. Consumers (Discos) also can submit bids to SO for increasing or decreasing their level of consumption. When regulation is needed, the SO activates the most favorable bid. In Bilateral transaction the Discos of one area may contract with the Gencos of the same area or other area. In such contracts, a Genco changes its power to follow the predicted load as long as it does not exceed the contracted value. In order to meet the bilateral transactions, Disco participation Matrix (DPM) [8] has been used. This matrix has number of rows equal to the number of Gencos involved in the bilateral transactions and columns same as the number of Discos having bilateral contracts.
cpf11 cpf 21 .... DPM cpf i1 .... cpf m1 cpf12 cpf 22 .... cpf i 2 .... cpf m 2 .... .... .... .... .... .... cpf1 j cpf 2 j .... cpf ij .... cpf mj .... .... .... .... .... .... cpf1l cpf 2 j .... cpf il .... cpf ml

P tie i

p
j 1 j i

ij

In a conventional AGC formulation,

Ptiei is generally

maintained at a fixed value. However, in a deregulated electricity market, a Disco may have contracts with the Gencos in the same area as well as with the Gencos in other areas, too. Hence, the scheduled tie-line power of any area will change as the demand of the Disco changes. Thus, the net scheduled steady-state power flow on the tie line from an area-i can be expressed as
P i scheduled P i D ji D ji
j 1 j i j 1 j i m m

Where Dij is the demand of Discos in area-j from Gencos in area-i, and D ji is the demand of Discos in area- i from Gencos in area-j. During the transient period, at any given time, the tie-line power error is given as

P tie i error P tie i actual P tie i scheduled


This error can be used to generate the ACE signal as

ACEi Bi fi P tie i error


Where Bij the frequency bias factor, and frequency deviation in area-i. III.SMES SYSTEM The schematic diagram in Fig. 3 shows the configuration of a thyristor controlled SMES unit [3]. Control of the converter firing angle provides the DC voltage Ed appearing across the inductor to be continuously varied between a wide range of positive and negative values. The inductor is initially charged to its rated current Id0 by applying a low positive voltage. Once the current reaches the rated value, it is maintained constant by reducing the voltage across the inductor to zero since the coil is superconducting [5, 7]. Neglecting the transformer and the converter losses, the DC voltage is given by Ed = 2 Udo cos- 2 IdRc (2) where Ed is DC voltage applied to the inductor (kV), is firing angle (degrees), Id is current flowing through the inductor (kA), R, is equivalent commutating resistance () and UdO is maximum circuit bridge voltage (kV).

f i is the

Where cpf ij is the contract participation factor between Genco-i and Disco-j.The rating of area-i is different from the rating of area-j. As the Bilateral signal coming of a disco of area-i in per unit (pu) with respect to base of area-i. By taking pu with respect base of area-j the signal magnitude is different. Therefore the bilateral signal coming to area-j from area-i that is demand of Discos of area-i to Gencos of Area-j is not go directly but is multiplied by a factor aij , Where aij

Rating of Area-j Rating of Area-i

The factor aij is required to equalize the net load exchange in MW between both areas. In case of Poolco transition tieline power between area-i and area-j is settled at zero value. But in case of bilateral transition the tie-line power is not settled at zero value but settled according to the bilateral contract between Gencos of area-i and Discos of area-j. The Gencos share the load according to the ACE participation

383

studies a load change of 20% has been considered for each case. Fig. 5 shows the results of frequency deviation in an isolated system with and without SMES unit tuned with ISACE criterion respectively.
0.1 0 -0.1

Change in frequency

-0.2 -0.3 -0.4 -0.5 -0.6 -0.7 without SMES unit with SMES unit

10

20

30

40

50 Time(sec)

60

70

80

90

100

Fig 5. Change in frequency in single area system with and without SMES unit Fig.3 SMES unit

The inductor current deviation is used as a negative feedback signal in the SMES control loop. If the load demand changes suddenly, the feedback provides quickly restoration of current. In Fig. 4, the block diagram representation of SMES control scheme is shown.

B.Two Area System The 39-bus system has been divided into two control areas and it is assumed that each control area has two Discos. A general purpose Governor-Turbine model has been used, which is taken from [9]. The number of Gencos and rating of each area is given in Table I. Table I: Control areas in 39-bus New England power system
Control Area AREA-1 Area Rating(MW) 400 500 Market Participants Genco 1,2,3, 4,5 Disco 1,2 Genco 6,7,8,9,10 Disco 3,4

Fig.4 SMES control scheme.

AREA-2

The SMES control strategy is derived by utilizing the signals PSM and Ido instead of f, and f2 (see Fig. 4). This has the advantage (i) that this technique works equally well for load change in either area (ii) it does not depend whether the SMES unit is in area 1 or area 2.With the use of SMES in both the areas, frequency deviations in each area are effectively suppressed. However, it may not be economically feasible to use SMES in every area of a multiarea interconnected power system. Therefore, it is advantageous if an SMES located in an area is available for the control of frequency of other interconnected areas. In this work, MATLAB/SIMULINK has been used to tune the controller with GA toolbox.Fig.4 SMES control scheme. IV. SYSTEM STUDIES The proposed method of controller tuning described in previous section has been tested on a single area as well as on an interconnected two area power system in a deregulated electricity market, where, the transactions between the different players are assumed to be fixed. MATLAB/SIMULINK software has been used for simulation studies. A. Single Area System For the isolated system shown in Fig.1,with some modifications like isolated single area is modeled in a deregulated electricity market scenario with and without SMES unit, the parameters are taken from ref. [3]. And the data used for SMES unit is taken from [10]. GA has been run to determine the value of Kp and Ki. The optimal value of the controller parameters for the fitness function, given by equation (2) has been obtained using the GA. The values of Kp, and Ki determined using ISACE fitness criterion with SMES unit are Kp=0.87741, Ki=, 0.42989and without SMES unit are Kp=0.79404, Ki=0.14885.For simulation

For two area system as shown in figure 2, for the ith area, where i=2, i.e. Two area system, the parameters are taken as from ref. [3]. For simulation first K p and Ki value has been determined using GA and then PI controller parameters have been determined for the fitness functions given by equation (3). TABLE II
Area1 Area2 Kp -0.31457 0.21922 Kp 0.54081 0.34187 Ki -0.41698 -0.51162 Ki -0.65973 -0.67418 Kd 0 0 Kd 0 0

TABLE III
Area1 Area2

The 39-bus system has been used for simulation and a step load change of 0.2 pu has been assumed in each areas. A.Poolco Based Transactions In poolco transaction a step load change of 0.2 pu has been assumed in each areas. The participation factor of different Gencos can be determined by using the bids of the generators [5].But in this work fixed participation factor has been considered for the Gencos. Participation factor considered for different Gencos of different areas are given below in Table IV. Table IV: Participation factor considered AREA-1 AREA-2 G1- 0.25 G6- 0.10 G2- 0.10 G7- 0.35 G3- 0.30 G8- 0.0 G4-0.35 G9- 0.05 G5-0.0 G10- 0.5 The symbol G specifies for Genco of the area. It is assumed in the study that Discos are not participating in the frequency regulation, therefore gencos of each area share the load demand of their area as per their participation factors. It is assumed in the study that Discos are not

A typical calculation of the change in generation, for Genco G1, is given below: change in generation of Genco G1 = participation factor of Genco G1 * net system load change = 0.25*0.4 = 0.1 pu. Similarly, for generators G2, G3, G4, G5, G6, G7, G8, G9 and G10 the net changes in generation are 0.04 pu, 0.12 pu, 0.14 pu, 0.0 pu, 0.04 pu, 0.14 pu, 0.0 pu, 0.02 pu and 0.2 pu, respectively. Typical results for the change in generation of all the Gencos of area-1 and area-2 with and without the SMES unit using the GA-PI controller are shown in figures 6(c), (d), (e) and (f). Comparative results of the tie-line power deviation in area-1 and area-2 with and without the SMES unit using the GA-PI controller are shown in figures 6(g) and (h).
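The sharing rule above can be checked with a few lines of code (illustrative only, not part of the authors' simulation; the factors and the 0.4 pu net load change are taken from Table IV and the text):

```python
# Participation factors from Table IV and the 0.4 pu net system load change.
participation = {"G1": 0.25, "G2": 0.10, "G3": 0.30, "G4": 0.35, "G5": 0.0,
                 "G6": 0.10, "G7": 0.35, "G8": 0.0, "G9": 0.05, "G10": 0.5}
net_load_change = 0.4  # pu (0.2 pu step in each of the two areas)

# Each Genco picks up its share of the net load change.
for genco, factor in participation.items():
    print(f"{genco}: {factor * net_load_change:.2f} pu")
```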
Fig. 6(a) Change in frequency in area-1 and area-2
Fig. 6(b) Change in gen. output in area-1 with and without SMES unit
Fig. 6(c) Change in gen. output in area-2 with and without SMES unit
Fig. 6(d) Tie-line power output with and without SMES unit

B. Combination of Poolco and Bilateral Based Transactions
In another study a combination of poolco and bilateral transactions has been considered. The first contract is between Disco 1 in area-1 and Gencos G2 and G7 in area-1 and area-2. The second contract is between Disco 3 in area-2 and Gencos G2 and G6 in area-1 and area-2. According to the above contracts the DPM becomes
DPM = [0 0 0.2 0 0 0 0 0 0 0 0 0 0 0 0 0.1 0 0 0 0.1 0 0 0 0 0 0 0 0]
Change in generation of generator G1 = change in generation due to the poolco transaction + change in generation due to the bilateral contract = 0.112 pu. Similarly, the changes in generation of generators G2, G3, G4, G5, G6, G7, G8, G9 and G10 can be calculated as shown above. Comparative results of the frequency deviation in area-1 and area-2 using the conventional PID controller tuned with GA are shown in Fig. 7(a) and Fig. 7(b). The change in generation of all the Gencos of area-1 and area-2 with the SMES unit, due to the area control error input, using the conventional PID controller tuned with GA is shown in Fig. 7(c) and Fig. 7(d).

Fig. 7(a) Change in frequency in area-1 and area-2
Fig. 7(b) Change in gen. output in area-1 with and without SMES unit
Fig. 7(c) Change in gen. output in area-2 with and without SMES unit

CONCLUSION
It has been shown in this work that the presence of a relatively small SMES unit can improve the dynamic response of isolated single-area and multi-area AGC systems in a deregulated environment. By using the area control error and the inductor current as the control signals, the SMES unit is effective in damping out the oscillations that persist in the system following a disturbance. In this work an algorithm to tune the parameters of a PI controller used for an isolated as well as a multi-area AGC scheme, with and without the SMES unit, has been proposed. The ISACE criterion has been utilized as the fitness function for the proposed GA.

REFERENCES
[1] O. I. Elgerd and C. Fosha, "Optimum Megawatt-Frequency Control of Multi-Area Electric Energy Systems," IEEE Transactions on Power Apparatus and Systems, vol. PAS-89, no. 4, pp. 556-563, April 1970.
[2] O. I. Elgerd, Electric Energy Systems Theory: An Introduction, McGraw Hill, 1982.
[3] V. Donde, M. A. Pai, and I. A. Hiskens, "Simulation and optimization in an AGC system after deregulation," IEEE Transactions on Power Systems, vol. 16, no. 3, pp. 481-489, Aug. 2001.
[4] P. M. Anderson and A. A. Fouad, Power System Control and Stability, Iowa State University Press, 1984.
[5] S. C. Tripathy, R. Balasubramaniam, and P. S. Chandramohanan Nair, "Effect of Superconducting Magnetic Energy Storage on Automatic Generation Control Considering Governor Deadband and Boiler Dynamics," IEEE Transactions on Power Systems, vol. 7, no. 3, pp. 1266-1273, 1992.
[6] N. Sinha, L. L. Lai, and V. G. Rao, "GA optimized PID controllers for automatic generation control of two area reheat thermal systems under deregulated environment," in Proc. Electric Utility Deregulation and Restructuring and Power Technologies, 2008, pp. 1186-1191.

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Velocity Algorithm Implementation of PID Controller on FPGA


Dhirendra Kumar
Electronics & Instrumentation Engg. Deptt. Galgotias College of Engg. and Technology Gr Noida, India E-mail: dhirendra100@gmail.com
Abstract - This paper describes the design of an FPGA-based discrete proportional-integral-derivative (PID) controller implemented using the velocity algorithm. The plant used in this regard is disturbance control in a DC motor. The system-level design tool System Generator from Xilinx, within the Matlab/Simulink environment, is utilized for this purpose. The implementation results in low power consumption and uses a small number of resources, which is desirable for an embedded control system.
Keywords - Proportional-Integral-Derivative (PID) controller, Field Programmable Gate Array (FPGA), velocity algorithm, discrete controller.

Dr. Barjeev Tyagi


Department. of Electrical Engineering. Indian Institute of Technology Roorkee Roorkee, India E-mail: btyagfee@iitr.ernet.in

I. INTRODUCTION

The PID algorithm is the most popular feedback controller used within the process industries. It is a robust, easily understood algorithm that can provide excellent control performance despite the varied dynamic characteristics of process plants. A PID controller consists of three basic modes, namely the proportional, integral and derivative modes, where the proportional term determines the reaction to the current error, the integral term determines the reaction based on the sum of recent errors, and the derivative term determines the action based on the rate of change of the error. Three types of PID controller are commonly defined: the parallel controller, the serial controller and the mixed controller. Modern discrete control systems require increasingly powerful and faster computation; an FPGA gives the advantage of implementing multiple controllers in a single device and also satisfies the sampling-time requirement. Discrete feedback controllers have earlier been used in various applications, for example temperature control [1], robotic control [2], DC motor control [3], AC/DC converters [4], variable-speed drives, vibration reduction in aircraft/aerospace structures and helicopter fuselages [5], PWM inverters and anti-windup compensation.

The algorithm used here for designing the PID controller is the velocity algorithm, which gives the velocity (rate of change) of the control variable; the control variable itself is obtained by integrating this velocity, and the implementation of this algorithm is therefore also called the incremental algorithm. An FPGA is an electronic circuit that can be configured by the user to realize a particular logic function. FPGAs were introduced in 1985 by Xilinx; since then many companies, among them Actel, Altera, Plessey, AMD, QuickLogic, Algotronix, Concurrent Logic and Cross Point Solutions, have produced FPGAs. An FPGA consists of a 2D array of logic blocks that can be connected by general interconnection resources. The interconnections are programmable switches that enable one to connect the logic blocks to wire segments, or one wire segment to another. Logic circuits are implemented in an FPGA by partitioning the logic into individual logic blocks and then interconnecting the blocks as required via the switches; the logic blocks together with the interconnection resources thus facilitate the implementation of a large number of discrete logic circuits. Certain advantages of using FPGAs are lower prototype cost, shorter production time, the capability to execute concurrent operations, lower power consumption, high speed, the capability for complex functionality, and reprogrammability. The organization of this paper is as follows: Section I gives a brief idea of the previous work done; Section II presents the velocity algorithm used and the block diagram to be implemented using the System Generator blockset; Section III gives the results obtained from the closed-loop negative-feedback control system, taking disturbance control in a DC motor as the plant model, with the PID controller implemented on a Spartan-3E FPGA; Section IV describes conclusions and future prospects; Section V lists the references; and the responses from the design are given at the end of the paper.


II. DESIGNING OF PID CONTROLLER MODULES FOR THE FPGA ENVIRONMENT

The problem used in this paper is disturbance control in a DC motor [11], where the PID controller is implemented in the Matlab/Simulink environment using System Generator from Xilinx. As shown in Fig. 1, the digital PID controller is implemented in the closed loop. Here Uref is the reference input and Y is the previous output of the plant, which is fed back; the error e from the subtractor is fed to the PID controller through an analog-to-digital converter, and finally the control action u from the PID controller is fed to the plant through a digital-to-analog converter.

Fig.1 Closed Loop Discrete Control System

A. Velocity algorithm for implementation of the PID controller
To obtain an equation that can be implemented on a computer, it is necessary to replace continuous-time operations like differentiation and integration by discrete-time operations in the z-domain. The velocity algorithm has been used in this regard. Advantages of the velocity algorithm are wind-up protection and easy, bumpless parameter changes. The common PID controller equation is

m(t) = K_c \left[ e(t) + \frac{1}{T_I} \int_0^t e(\tau)\, d\tau + T_D \frac{de(t)}{dt} \right] + M_{ss}    (1)

where M_ss is the steady-state output of the controller when the error signal is zero, m(t) is the final output and e(t) is the final error in the continuous-time domain [12], [22]. Differentiating (1) we get

\frac{dm(t)}{dt} = K_c \left[ \frac{de(t)}{dt} + \frac{1}{T_I} e(t) + T_D \frac{d^2 e(t)}{dt^2} \right]    (2)

This equation no longer contains the steady-state output value. Discretizing (2) by the backward-difference method at time t = kT gives

\frac{m(k) - m(k-1)}{T} = K_c \left[ \frac{e(k) - e(k-1)}{T} + \frac{e(k)}{T_I} + \frac{T_D}{T^2} \{ e(k) - 2e(k-1) + e(k-2) \} \right]    (3)

Rearranging (3) we get

m(k) = m(k-1) + K_c [e(k) - e(k-1)] + \frac{K_c T}{T_I} e(k) + \frac{K_c T_D}{T} [e(k) - 2e(k-1) + e(k-2)]    (4)

Taking the z-transform of (4),

(1 - z^{-1}) M(z) = K_c (1 - z^{-1}) E(z) + \frac{K_c T}{T_I} E(z) + \frac{K_c T_D}{T} (1 - 2z^{-1} + z^{-2}) E(z)    (5)

Again rearranging the terms in (5) we get

(1 - z^{-1}) M(z) = \left\{ K_c + \frac{K_c T}{T_I} + \frac{K_c T_D}{T} \right\} E(z) - \left\{ K_c + \frac{2 K_c T_D}{T} \right\} z^{-1} E(z) + \left\{ \frac{K_c T_D}{T} \right\} z^{-2} E(z)    (6)

This gives

\frac{M(z)}{E(z)} = \frac{A + B z^{-1} + C z^{-2}}{1 - z^{-1}}    (7)

where A = K_c + \frac{K_c T}{T_I} + \frac{K_c T_D}{T}, B = -\left[ K_c + \frac{2 K_c T_D}{T} \right] and C = \frac{K_c T_D}{T}.
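The incremental form (4)/(7) derived above can be expressed compactly in software. The sketch below is only an illustration of the difference equation, not the authors' fixed-point System Generator/FPGA design; the signal names in the usage comment are hypothetical.

```python
class VelocityPID:
    """Incremental (velocity-form) PID: m(k) = m(k-1) + A*e(k) + B*e(k-1) + C*e(k-2)."""

    def __init__(self, kc, ti, td, t_sample):
        # Coefficients A, B, C as in equation (7) above.
        self.a = kc * (1.0 + t_sample / ti + td / t_sample)
        self.b = -kc * (1.0 + 2.0 * td / t_sample)
        self.c = kc * td / t_sample
        self.m_prev = 0.0          # m(k-1)
        self.e_prev1 = 0.0         # e(k-1)
        self.e_prev2 = 0.0         # e(k-2)

    def update(self, error):
        m = self.m_prev + self.a * error + self.b * self.e_prev1 + self.c * self.e_prev2
        self.m_prev = m
        self.e_prev2, self.e_prev1 = self.e_prev1, error
        return m

# Usage (hypothetical plant interface):
#   pid = VelocityPID(kc=1.0, ti=0.5, td=0.1, t_sample=0.01)
#   u = pid.update(u_ref - y_measured)
```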

B. State Transition Signal Flow Diagram Implemented
Fig. 2 below shows the state transition signal flow diagram (STSFD) for equation (7). The STSFD gives the basic idea for the implementation of the PID controller; since it cannot be implemented on a computer directly, it has to be realized using the commonly available blocks of the software platform. The STSFD is therefore implemented in block-diagram form, as shown in Fig. 3, using basic blocks available with the software platform.

Fig.2. STSFD for Velocity Algorithm

C. Block diagram to be implemented in simulation
Fig.3. Block diagram implementation for the PID controller

Using the System Generator blockset, this PID controller has been tested in a negative-feedback closed loop. The load disturbance of a DC motor is taken as the process to be controlled, as shown in Fig. 4 below.
Fig.4 System Generator based discrete-time PID controller with closed-loop negative-feedback control system

III. RESULTS
A disturbance Td = -0.1 N.m is introduced from time t = 20 s to t = 40 s; Fig. 5 below shows the disturbance introduced. Responses were obtained from the PID controller implemented in a closed loop with the plant model taken as disturbance control in a DC motor. Fig. 6 shows the response from the continuous-time PID controller, Fig. 7 shows the simulation response from the System Generator embedded-multiplier-based PID controller (implemented using the velocity algorithm), and Fig. 8 shows the results obtained from the Spartan-3E hardware. The results obtained from the System Generator based simulation and from the FPGA were almost the same. The simulation gives the results shown in Table 1, where Tr is the rise time, Ts the settling time, TDI the time to recover when the disturbance is introduced, and TDO the time required to recover when the disturbance is over. Table 1 shows that, because of the embedded-multiplier-based PID controller available in the Spartan-3E FPGA, the time required to recover from the disturbance was slightly longer than that of the continuous-time controller. Implementation of the PID controller using the velocity algorithm gives the advantage of occupying a smaller number of logic slices on the Spartan-3E kit and causing lower power consumption, as shown in Table 2.

TABLE-1 COMPARISON OF RESULTS FROM SIMULATION AND HARDWARE CO-SIMULATION BASED PID CONTROLLER
                              Rise time Tr(s)   Settling time Ts(s)   TDI(s)   TDO(s)
Continuous-time PID           1.514             2.56                  3.53     3.39
Embedded-multiplier PID       2.382             4.34                  4.54     4.19

TABLE-2 DETAILS OF DESIGN ON FPGA (velocity-algorithm-based PID controller)
Components utilized: 2278
Power consumption: 83 mW
Clock frequency: 45.89 MHz

IV. CONCLUSIONS
In this paper the PID controller implementation using the velocity algorithm on the FPGA Spartan-3E platform has been tested on a DC motor plant, with the load disturbance of the plant taken as an input to the motor. The PID controller developed on the FPGA gives the advantage of flexibility in changing the design for lower power consumption and higher-speed performance. The velocity-algorithm implementation gives the advantages of wind-up protection and bumpless parameter changes. The design has been implemented on the Spartan-3E FPGA from Xilinx; the PID response was taken from the FPGA, fed to Simulink-based blocks and observed on a scope. Further work can be done to improve the time response, i.e. the rise time, the settling time and the time to recover from the disturbance, so as to be comparable to the continuous-time simulation.


V. REFERENCES
[1] Yuen Fong Chan, M. Moallem, and Wei Wang, "Design and Implementation of Modular FPGA-Based PID Controllers," IEEE Transactions on Industrial Electronics, vol. 54, no. 4, August 2007.
[2] Wei Zhao, Byung Hwa Kim, Amy C. Larson, and Richard M. Voyles, "FPGA Implementation of Closed-Loop Control System for Small-Scale Robot," University of Minnesota, Minneapolis, MN, updated 10 May 2005.
[3] Mohamed Abedelati, "FPGA-Based PID Controller Implementation," The Islamic University of Gaza, Palestine, updated 22 Jan. 2009.
[4] Andres Upegui and Eduardo Sanchez, "Evolving Hardware by Dynamically Reconfiguring Xilinx FPGAs," Ecole Polytechnique Federale de Lausanne (EPFL), 1015 Lausanne, Switzerland, updated 1 June 2005.
[5] Shashikala Prakash, D. V. Venkatasubramanyam, Bharath Krishnan, and R. Nagendra, "Compact Field Programmable Gate Array (FPGA) Controller for Aircraft/Aerospace Structures," National Aerospace Laboratories, Bangalore, India, updated 26-28 June 2008.
[6] M. Gopal, Digital Control and State Variable Methods: Conventional and Neuro-Fuzzy Control Systems, 2nd ed., Tata McGraw-Hill Publishing Company Limited, pp. 168-169.
[7] Dejan Markovic, Chen Chang, Brian Richards, Hayden So, Borivoje Nikolic, and Robert W. Brodersen, "ASIC Design and Verification in an FPGA Environment," University of California, Berkeley, USA.
[8] Alireza Rezaee, "FPGA Implementation of Digital PID," Amirkabir University, Tehran, Iran, updated 26 April 2005.
[9] N. Masmoudi, L. Samet, M. W. Kharrat, and L. Kamoun, "Implementation of a PID Controller on FPGA," updated 25 April 2003.
[10] Joao Lima, Ricardo Menotti, Joao M. P. Cardoso, and Eduardo Marques, "A Methodology to Design FPGA-based PID Controllers," updated 30 June 2006.
[11] DC Motor Control [Online]. Available: http://www.mathworks.com/products/control/demos.html?file=/products/demos/shipping/control/dcdemo.html
[12] Charles L. Phillips and H. Troy Nagle, Digital Control System Analysis and Design, 3rd ed., Prentice Hall, 1994.
[13] I. J. Nagrath and M. Gopal, Control System Engineering, 4th ed., New Age International, 2006.
[14] M. Morris Mano, Digital Design, 3rd ed., Pearson Education, 2008.
[15] William Stallings, Computer Organization and Architecture: Designing for Performance, 6th ed., Pearson Education, 2005.
[16] System Generator online help in MATLAB 7.3.
[17] Xilinx Spartan-3E reference manual [Online]. Available: http://www.xilinx.com/support/documentation/spartan3e_board_and_kit_documentation.htm
[18] Discretized PID Controllers [Online]. Available: http://lorien.ncl.ac.uk/ming/digicont/digimath/dpid1.htm (accessed 18/10/08).
[19] PID controller [Online]. Available: http://en.wikipedia.org/wiki/PID_controller (accessed 18/11/09).
[20] PID-Controller [Online]. Available: http://www.answers.com/topic/pid-controller (accessed 20/11/08).
[21] Christian Schmid, The classical three-term PID controller [Online]. Available: http://virtual.cvut.cz/dynlabmodules/ihtml/dynlabmodules/syscontrol/node61.html
[22] Velocity and Full Value forms of the PID algorithm [Online]. Available: http://www.controlviews.com/articles/QandA/velocityfullvalue.html (accessed Dec. 2008).

Fig.5 Disturbance introduced

Fig.6 Response from continuous time simulation

Fig.7 Response from the Embedded multiplier based simulation

Fig.8 Response from Spartan 3e FPGA.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Optimization of PID Controller using Modified Ant Colony System Algorithm


Shekhar Yadav
Control System, Dept. of Electrical Engg. IT-BHU, Varanasi, India shekhar.yadav.eee08@itbhu.ac.in

Gaurav Singh Uike


Control System, Dept. of Electrical Engg. IT-BHU, Varanasi, India gaurav.singh.eee08@itbhu.ac.in

J.P.Tiwari* Dept. of Electrical Engg. IT-BHU, Varanasi, India. jptiwari.eee@itbhu.ac.in

Abstract - Despite the popularity, the tuning of PID controllers is a challenge for researchers and plant operators. Here optimization of PID controller with self tuning parameter is proposed based on the modified Ant Colony System (ACS) algorithm. By testing different control systems with typical characteristic such as higher order and time delay, the proposed ACSPID algorithm has been demonstrated to have an adaptive property and robust stability in searching for the optimal PID controller parameters. This technique of optimization for PID controller gives better control performance as compared to other algorithms such as differential evolution(DE), genetic algorithm(GA) and simulated annealing(SA). Keywords - PID control, Modified ACS, Performance Indices.

I. Introduction
The proportional-integral-derivative (PID) controller is by far the most commonly used controller in process control applications [1, 2]. It has been a crucial problem to tune the gains of the PID controller properly, because many industrial plants are often burdened with characteristics such as higher order, time delay and nonlinearities. Considering the limitations of the Ziegler-Nichols formula [3] and of some empirical techniques in raising the performance of PID controllers, optimization techniques [5] such as simulated annealing [4], genetic algorithms [6, 7] and differential evolution [8] have recently been employed to improve the performance of PID controllers. In this paper, a novel intelligent design method for the PID controller is proposed based on the modified Ant Colony System (ACS) algorithm, called the ACS-PID algorithm. This paper describes in detail how to use the ACS-PID algorithm to obtain the optimal PID controller parameters, and examines the adaptation and robustness of the ACS algorithm by testing several typical control systems. The ACS algorithm is an improvement on the ant system (AS) algorithm [9], the first generation of ant algorithms. Compared with the AS algorithm, the ACS algorithm is more robust, faster, and has a higher probability of obtaining the globally optimal solution.

Fig.1 PID controller

II. Proposed Method for Optimal Design of the PID Controller
To optimize the performance of a PID-controlled system, the PID gains of the system are adjusted to optimize a certain performance index. The objective of the performance index is to encompass in a single number a quality measure for the performance of the system. Various objective functions can be written based on error performance criteria. The performance index is calculated over a time interval T, normally in the region 0 <= T <= ts, where ts is the settling time of the system. To emphasize the effectiveness of the proposed method, the ITAE performance criterion is adopted in this paper. Its formula is

ITAE = \int_0^T t \, |e(t)| \, dt    (1)

A. Generation of Nodes and Paths using the ACS-PID Algorithm
Consider the PID controller shown in Fig. 1, with parameters Kp, Ki and Kd whose optimized values are to be generated for the given plant. Assume that the value of each of them has six valid digits. In the six digits of Kp, assume that there are two digits before the decimal point and four digits after the decimal point; in the six digits of Ki and Kd, assume that there is one digit before the decimal point and five digits after the decimal point. When using the ACS algorithm, a discrete solution space is needed, because the path selections of an ant in each step are limited. In order to use the ACS algorithm conveniently, it was decided to express the values of Kp, Ki and Kd on the plane O-XY. As shown in Fig. 2, we first draw eighteen lines L1, L2, ..., L18, which have equal length and equal separation and are perpendicular to the X axis. L1~L6, L7~L12 and L13~L18 represent the first to the sixth digits of Kp, Ki and Kd, respectively; the x-coordinates of these lines are represented by the numbers 1~18. Then we divide each of these lines equally into nine portions, so that ten nodes are generated on each line. The ten nodes on each line represent the numbers 0~9, which are the possible values of the digit corresponding to that line. Let an ant depart from the origin O of plane O-XY. When it moves to any node of line L18, it completes a tour. Its moving path can be represented by Path = {O, Node(x1, y1j), Node(x2, y2j), ..., Node(x18, y18j)}. The values of Kp, Ki and Kd represented by the path can be computed by the following formulas:

Kp = y_{1j} 10^{1} + y_{2j} 10^{0} + y_{3j} 10^{-1} + y_{4j} 10^{-2} + y_{5j} 10^{-3} + y_{6j} 10^{-4}
Ki = y_{7j} 10^{0} + y_{8j} 10^{-1} + y_{9j} 10^{-2} + y_{10j} 10^{-3} + y_{11j} 10^{-4} + y_{12j} 10^{-5}    (2)
Kd = y_{13j} 10^{0} + y_{14j} 10^{-1} + y_{15j} 10^{-2} + y_{16j} 10^{-3} + y_{17j} 10^{-4} + y_{18j} 10^{-5}

Fig.2 Diagram of generating nodes and paths
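The digit-to-parameter mapping of equation (2) amounts to a weighted sum of the eighteen selected node values. The following sketch is ours (variable and function names are illustrative, not the authors'):

```python
def path_to_pid(digits):
    """digits: list of 18 integers in 0..9 chosen by an ant (one per line L1..L18).
    Returns (Kp, Ki, Kd) according to equation (2)."""
    assert len(digits) == 18 and all(0 <= d <= 9 for d in digits)
    kp = sum(d * 10 ** (1 - i) for i, d in enumerate(digits[0:6]))    # 2 digits before the point
    ki = sum(d * 10 ** (-i) for i, d in enumerate(digits[6:12]))      # 1 digit before the point
    kd = sum(d * 10 ** (-i) for i, d in enumerate(digits[12:18]))     # 1 digit before the point
    return kp, ki, kd

# Example: the first six digits 1,2,3,4,5,6 give Kp = 12.3456
print(path_to_pid([1, 2, 3, 4, 5, 6] + [0] * 12))
```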

B. Transition Rule
When all the ants have moved to one line, say line L_i, let m_j (j = 0~9) be the number of ants at node j of that line; the total number of ants is then m = \sum_{j=0}^{9} m_j. Let \tau(x_i, y_{ij}) be the concentration of pheromone at Node(x_i, y_{ij}), and assume that initially all the nodes have the same amount of pheromone. In the moving process, an ant k (k = 1~m) on line L_{i-1} (i = 1~18) will select a node j from the ten nodes of the next line L_i to move to according to the following transition rule:

j = \arg\max_{u \in \Omega_i} \{ \tau(x_i, y_{iu}) \, \eta(x_i, y_{iu}) \}, if q <= q_0    (3)

and j = J if q > q_0, where q is a random variable uniformly distributed over [0,1], q_0 is a tunable parameter, \Omega_i contains all of the nodes on line L_i, and J is a node that is randomly selected according to the probability

p(x_i, y_{iJ}) = \frac{\tau(x_i, y_{iJ}) \, \eta(x_i, y_{iJ})}{\sum_{u \in \Omega_i} \tau(x_i, y_{iu}) \, \eta(x_i, y_{iu})}    (4)

In formulas (3) and (4), \eta(x_i, y_{ij}) is the visibility of node (x_i, y_{ij}). For the computation of the visibility the following formula is used:

\eta(x_i, y_{ij}) = \frac{10 - |y_{ij} - B_i|}{10}    (5)

where the reference values B_i (i = 1~18) are set in the following way. In the first iteration of the ACS-PID algorithm they are set to the vertical coordinates of the eighteen nodes obtained by mapping the values of the PID parameters Kp0, Ki0 and Kd0 onto Fig. 2, where Kp0, Ki0 and Kd0 are obtained using the classical Ziegler-Nichols tuning formula. In each of the following iterations, they are set to the vertical coordinates of the eighteen nodes obtained by mapping the PID parameters Kp*, Ki* and Kd* onto Fig. 2, where Kp*, Ki* and Kd* are the PID controller parameters corresponding to the best tour generated since the beginning of the trial.

C. Global Updates of the Pheromone Concentration
When all of the ants in the colony have completed their tours once in the ACS-PID algorithm, i.e. when they arrive on line L18, the pheromone concentration of each node belonging to the best tour since the beginning of the trial is updated by the following formulas:

\tau(x_i, y_{ij}) \leftarrow (1 - \rho) \, \tau(x_i, y_{ij}) + \rho \, \Delta\tau(x_i, y_{ij})    (6)
\Delta\tau(x_i, y_{ij}) = Q / ITAE^*    (7)

where the Node(x_i, y_{ij}) are the nodes belonging to the best tour since the beginning of the trial; \rho is the parameter which governs the pheromone decay; ITAE* is the value of the ITAE performance criterion corresponding to the best tour since the beginning of the trial; and Q is a positive constant which can be determined in the following way: for a given control system, first tune its PID controller parameters using the classical Ziegler-Nichols tuning formula, then compute the ITAE performance criterion of the system with the obtained PID controller parameters, denote the obtained value by ITAE0, and let Q be equal to ITAE0. Obviously, as the value of ITAE* becomes smaller and smaller, the value of Q/ITAE* becomes greater and greater, which helps to increase the pheromone concentration of the nodes on the best tour since the beginning of the trial and results in finding the best solution within the maximum number of iterations allowed.

D. Local Updates of the Pheromone Concentration
The local update is performed as follows: when, while performing a tour, ant k is on line L_{i-1} and selects node j on line L_i, the pheromone concentration of Node(x_i, y_{ij}) is updated by the following formula:

\tau(x_i, y_{ij}) \leftarrow (1 - \rho) \, \tau(x_i, y_{ij}) + \rho \, \tau_0    (8)

The value \tau_0 is the same as the initial value of the pheromone concentration. When an ant visits a node, the application of the local update rule makes the pheromone level of the node diminish. This has the effect of making the visited nodes less and less attractive for other ants, thus indirectly favoring the exploration of not yet visited nodes.
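Rules (3), (4), (6), (7) and (8) above can be summarized in a small sketch. This is an illustration only, not the authors' implementation: the pheromone and visibility values are passed in as plain lists, and the parameter defaults (q0, rho, tau0) are placeholders.

```python
import random

def select_node(tau_line, eta_line, q0=0.9):
    """Pseudo-random-proportional rule (3)-(4) for one line:
    tau_line[j], eta_line[j] are pheromone and visibility of the ten candidate nodes j = 0..9."""
    scores = [t * e for t, e in zip(tau_line, eta_line)]
    if random.random() <= q0:                      # exploit: best tau*eta node, rule (3)
        return max(range(len(scores)), key=scores.__getitem__)
    total = sum(scores)                            # explore: roulette wheel on rule (4)
    r, acc = random.random() * total, 0.0
    for j, s in enumerate(scores):
        acc += s
        if r <= acc:
            return j
    return len(scores) - 1

def local_update(tau_line, j, rho=0.1, tau0=1.0):
    """Local pheromone update, rule (8), applied to the node just visited."""
    tau_line[j] = (1 - rho) * tau_line[j] + rho * tau0

def global_update(tau, best_tour, best_itae, q_const, rho=0.1):
    """Global pheromone update, rules (6)-(7), applied to the best tour's nodes.
    tau[i][j] is the pheromone of node j on line i; best_tour is a list of 18 node indices."""
    delta = q_const / best_itae
    for i, j in enumerate(best_tour):
        tau[i][j] = (1 - rho) * tau[i][j] + rho * delta
```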

III. Simulation Experiments, Results and Discussion

A. Typical Transfer Functions
In order to examine the adaptation and robustness of the ACS-PID algorithm, three typical control systems were tested. The transfer functions of the plants in the three control systems are given as follows:

Case 1 (second-order system): G_1(s) = \frac{1}{s^2 + 1.6s + 1}; 0 <= Kp <= 5, 0 <= Ki <= 3, 0 <= Kd <= 3.

Case 2 (high-order system): G_2(s) = \frac{1}{(s+1)(0.2s+1)(0.05s+1)(0.01s+1)}; 0 <= Kp <= 20, 0 <= Ki <= 10, 0 <= Kd <= 10.

Case 3 (time-delay system): G_3(s) = \frac{e^{-Ls}}{Ts+1}; L = 1, T = 5; 0 <= Kp <= 5, 0 <= Ki <= 3, 0 <= Kd <= 3.

In addition, to examine the adaptation and robustness of the ACS-PID algorithm in different PID control systems, the performance of the ACS-PID controller was also measured by comparing it with the DE-PID, GA-PID and SA-PID controllers.

B. Unit Step Response
In order to obtain the optimal PID controller parameters, 50 iterations were performed repeatedly for each study case using the ACS-PID algorithm, the DE algorithm and the GA algorithm; in SA, the Markov chain length and the number of generations were each set to 10. The best solutions of each study case, which have the optimal or near-optimal PID controller parameters, are summarized in Table 1. Using these PID controller parameters, the unit step responses of each study case were obtained as shown in Fig. 3.

Table 1. Summary of simulation results.
Fig.3(a) Unit step response of case 1
Fig.3(b) Unit step response of case 2
Fig.3(c) Unit step response of case 3

C. Comparison of PID Controllers
Observing the performance indices in Table 1, we find that the ACS-PID controller has an excellent unit step response in each study case, with better or equivalent performance compared with the DE-PID, GA-PID and SA-PID controllers. Therefore, we conclude that the ACS-PID algorithm has the adaptive property and robustness needed for solving the optimal tuning problem of PID controller parameters.

IV. Conclusions
This paper presents a new and intelligent optimal design method for the PID controller based on the ACS-PID algorithm. By testing different control systems with typical characteristics, namely a second-order system, a high-order system and a time-delay system, the proposed ACS-PID algorithm has been demonstrated to have the adaptive property and robust stability in searching for the optimal PID controller parameters. The proposed ACS-PID controller has been demonstrated to be better than, or equivalent to, other techniques used for the optimization of PID controllers in terms of control performance.

Acknowledgment
We are grateful to Prof. J. P. Tiwari, Department of Electrical Engineering, IT-BHU, for giving us the opportunity to work on this topic under his able guidance. We express our sincere gratitude to him for his precious inspiration and suggestions.

References
[1] G. J. Silva, A. Datta, et al., "New results on the synthesis of PID controllers," IEEE Transactions on Automatic Control, vol. 47, no. 2, pp. 241-252, 2002.
[2] Dong Hwa Kim, "Robust PID controller tuning of multivariable system using clonal selection and fuzzy logic," in Proceedings of the SICE 2005 Annual Conference, Okayama, 2005, pp. 734-739.
[3] S. Y. Chu and C. C. Teng, "Tuning of PID controllers based on gain and phase margin specifications using fuzzy neural network," Fuzzy Sets and Systems, vol. 101, no. 1, pp. 21-30, 1999.
[4] S. Chen, "Adaptive simulated annealing for designing finite-precision PID controller structures," in Proceedings of the 1998 IEEE Colloquium on Optimization in Control: Methods and Applications, London, UK, 1998, pp. 3/1-3/3.
[5] P. B. de Moura Oliveira, "Modern heuristics review for PID control systems optimization: A teaching experiment," in Proceedings of the 5th International Conference on Control and Automation (ICCA'05), 2005, pp. 828-833.
[6] D. P. Kwok and F. Sheng, "Genetic algorithm and simulated annealing for optimal robot arm PID control," in Proceedings of the First IEEE Conference on Evolutionary Computation, Orlando, FL, USA, 1994, pp. 707-713.
[7] R. A. Krohling and J. P. Rey, "Design of optimal disturbance rejection PID controllers using genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 5, no. 1, pp. 78-82, 2001.
[8] S. L. Cheng and C. Hwang, "Designing PID controllers with a minimum IAE criterion by a differential evolution algorithm," Chemical Engineering Communications, vol. 170, pp. 83-115, 1998.
[9] M. Dorigo, E. Bonabeau, and G. Theraulaz, "Ant algorithms and stigmergy," Future Generation Computer Systems, vol. 16, no. 8, pp. 851-871, 2000.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Low Complexity Image Retrieval in DCT Compressed Domain with MPEG-7 Edge Descriptor
Amit Phadikar
Department of Information Technology MCKV Institute of Engineering Liluah, Howrah 711204, India. amitphadikar@rediffmail.com
Abstract - In this paper, we propose an image retrieval scheme in the discrete cosine transform (DCT) compressed domain. A combination of three image features, i.e. the color histogram, color moments and edge histogram, is extracted directly from the compressed domain and used for similarity matching with the Euclidean distance. The color histogram and color moments are extracted from the DC components of the image blocks, while the edge feature is obtained by analyzing the relationships of the AC coefficients. The extraction of image features directly from the compressed domain not only improves the processing efficiency but also reduces the demands on computer resources. The results obtained are compared with a related scheme and found to be superior.
Keywords - CBIR; compressed domain; color histogram; color moments; MPEG-7 edge descriptor.

Santi P. Maity
Department of Information Technology Bengal Engineering and Science University Shibpur, Howrah 711 103, India. santipmaity@it.becs.ac.in
I. INTRODUCTION
With the exploration of the World Wide Web (WWW), and due to the availability of scanners, printers and digital media, people now have access to hundreds of thousands of images. This trend is likely to continue and will provide more and more people with access to increasingly large image databases. How, then, can one find a particular image in a sea of images? Content based image retrieval (CBIR) has been proposed as a way to tackle this tough issue. The need for CBIR has received widespread attention and a great many solutions have been proposed [1]-[4]. The work in [1] uses a mixture of the color, texture and edge-density descriptors of the MPEG-7 standard, while in [2] the edge histogram is used. In [3] color and texture features are used for image retrieval. A considerable amount of work has also been done for medical images, where texture is used as a prominent image feature [4]. To make image retrieval faster, several indexing structures have also been designed; the most popular ones are the 2D-S tree, graph-based structures, the containment tree, fuzzy-based structures, relationship trees, etc. Nowadays most of the images on the web are stored in compressed form such as JPEG and, more recently, JPEG-2000. However, most of the existing CBIR techniques extract visual features (like color, texture, shape etc.) of compressed data in the spatial domain, which increases the computational complexity due to unnecessary decoding and re-encoding of the compressed data [5]. If we extract the image features (like color, texture, shape etc.) from the compressed domain, this avoids many constraints that occur in the spatial domain, such as unnecessary decoding, signal quality degradation due to re-encoding, and expensive processing in the uncompressed domain. Moreover, extracting image features in the compressed domain makes the scheme efficient for searching images in real time. Thus, a CBIR technique that works directly in the compressed domain has recently become a hotspot, and a great many works have appeared in the open literature [5, 6, 7]. The scheme proposed in [5] uses only texture information for indexing; however, this is not optimal when one considers a colored image database. In contrast, the proposed scheme uses both color features and the MPEG-7 edge descriptor for efficient indexing and retrieval. In this paper we propose an image retrieval scheme in the DCT compressed domain. We focus our attention on the DCT compressed domain because more than 80% of image and video data are still available in DCT compressed form, i.e. JPEG and MPEG. The main contribution is the extraction of image features that are commonly used in CBIR, i.e. the color histogram [1], color moments and edge histogram, directly from the DCT compressed domain. The color histogram and color moment features are extracted from the DC components of the image blocks, while the edge feature is obtained by analyzing the relationships of the AC coefficients. The results obtained are compared with a related scheme and found to be superior. The rest of the paper is organized as follows: Section II describes the image features used in the proposed CBIR system, while Section III describes the proposed CBIR system itself. The performance evaluation of the proposed scheme is demonstrated in Section IV. Conclusions are drawn in Section V.

II. BASIC IMAGE FEATURES
In our proposed scheme we use image features, namely the color histogram, color moments and edge histogram, that are extracted from the DCT compressed domain. A brief description of these image features is given below.

A) Color Histogram: One way of representing the color information of an image is through color histograms [1]. A color histogram is a type of bar graph, where each bar represents a particular color of the used color space. The bars in a color histogram are referred to as bins and they represent the x-axis; the number of bins depends on the number of colors present in an image, and the y-axis denotes the number of pixels in each bin. For a three-channel image, we have three such histograms. The histograms are normally divided into bins to coarsely represent the content and to reduce the dimensionality of the subsequent matching phase. A feature vector is formed by concatenating the three color-channel histograms into one vector. The color histogram owes its popularity to its rotation, translation and scaling invariance.

B) Color Moments: The color moments descriptor is a compact representation that includes the first, second and third moment of each color channel [8]. The first moment is the average color of the image, the second moment is the standard deviation of each color channel, and the third moment is the third root of each color channel. The various moments are defined as [8]:

E_i = \frac{1}{F} \sum_{j=1}^{F} P_{ij}    (1)

\sigma_i = \left( \frac{1}{F} \sum_{j=1}^{F} (P_{ij} - E_i)^2 \right)^{1/2}    (2)

s_i = \left( \frac{1}{F} \sum_{j=1}^{F} (P_{ij} - E_i)^3 \right)^{1/3}    (3)

where P_ij is the value of the i-th color channel at the j-th image pixel, E_i is the average color of the i-th color channel, \sigma_i is the standard deviation of the i-th color channel, s_i is the third root of the i-th color channel, and F is the total number of pixels.

C) Edge Histogram: The spatial distribution of edges in an image is a useful texture descriptor for similarity search and retrieval [2]. A large change in image brightness over a short spatial distance indicates the presence of an edge. To extract edges using the descriptors provided by MPEG-7, the conventional approaches (like convolution masks) first need to decode the compressed image to the pixel domain; feature extraction in that way is time-consuming as well as computationally expensive. To improve the efficiency of retrieval of compressed images, the scheme uses the fast algorithm proposed in [2]. Before classifying the edge orientation of a given block, the scheme [2] determines whether the block contains an edge or not. By Parseval's theorem, the variance of the block in the spatial domain can be expressed identically in the DCT domain through the squared average of the AC coefficients [2]:

\sigma^2 = \frac{1}{N^2} \sum_{u=0}^{7} \sum_{v=0}^{7} X(u,v)^2, \quad (u,v) \neq (0,0)    (4)

where X(u,v) is the DCT coefficient of a block and N is the number of rows/columns of a block. Instead of using the squared summation of Eq. (4), it is possible to use the following relationship with less computation [2]:

\sum_{u=0}^{7} \sum_{v=0}^{7} |X(u,v)|, \quad (u,v) \neq (0,0)    (5)

If the variance of the given block is greater than a certain threshold value, the block can be taken to contain a high-activity region, in other words an edge. In that case the classification of the edge orientation is performed based on the relationship of the AC coefficients. The value of AC10 essentially depends on the intensity difference in the vertical direction between the upper and lower parts of the given block; in other words, it provides information on the edge strength in the horizontal direction. Similarly, the value of AC01 provides information on the edge strength in the vertical direction for a given block. The following ratios are used:

R1 = AC01 / AC10,    R2 = AC10 / AC01

The method for calculating the edge histogram in the DCT domain can be summarized as follows:

Inputs: DCT image blocks (8x8).
Outputs: numbers of vertical (V), horizontal (H), 45-degree diagonal (D45), 135-degree diagonal (D135) and non-directional (ND) edge blocks, and of non-edge (NE) blocks.
Initialization: V=0, H=0, D45=0, D135=0, ND=0, NE=0
for each DCT block (8x8) do
  if (edge block) = true
    if |AC01| > |AC10|
      if |AC01| > TH2            // vertical dominant edge
        ND = ND + 1
      else
        if (R1 > TH1) and (same polarity)  V = V + 1  else  D45 = D45 + 1  end
        if (R1 > TH1) and (different polarity)  V = V + 1  else  D135 = D135 + 1  end
      end
    else
      if |AC10| > TH2            // horizontal dominant edge
        ND = ND + 1
      else
        if (R2 > TH1) and (same polarity)  H = H + 1  else  D45 = D45 + 1  end
        if (R2 > TH1) and (different polarity)  H = H + 1  else  D135 = D135 + 1  end
      end
    end
  else
    NE = NE + 1
  end
end
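As a small, self-contained illustration of the moment features (1)-(3) above (ours, not the authors' MATLAB code; the signed cube root is one way to handle the 1/3 power of a possibly negative third central moment):

```python
import numpy as np

def color_moments(channel):
    """Color moments of one channel, per equations (1)-(3):
    mean, standard deviation, and cube root of the third central moment."""
    p = np.asarray(channel, dtype=float).ravel()
    e = p.mean()                                   # equation (1)
    sigma = np.sqrt(np.mean((p - e) ** 2))         # equation (2)
    third = np.mean((p - e) ** 3)                  # equation (3), kept signed
    s = np.sign(third) * np.abs(third) ** (1.0 / 3.0)
    return e, sigma, s

# Example on a toy 4x4 "DC image" channel (values are arbitrary placeholders).
print(color_moments([[10, 12, 11, 13], [9, 14, 10, 12], [11, 13, 12, 10], [14, 9, 13, 11]]))
```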

III. PROPOSED SCHEME
The proposed CBIR scheme consists of three modules, namely partial decoding of the compressed data, feature extraction from the partially decoded data, and similarity matching. The block diagram representation of the proposed scheme is shown in Fig. 1.

Fig.1: The retrieval process for the proposed scheme.

Step 1: Partial Decoding of Compressed Data: It is very difficult to extract image features directly from the compressed binary bit sequence. This, in other words, suggests that partial decoding of the compressed data is necessary to extract the image features. To achieve that goal, Huffman decoding, inverse quantization and inverse zigzag scanning are performed on the bit sequence to obtain the DCT blocks.
Step 2: Feature Extraction: The color histogram and color moments are calculated from the subsampled image that is generated by considering only the DC coefficients of the (8x8) blocks. The color histogram and color moments are calculated as described in Sections II(A) and II(B), respectively, while the edge histogram feature is calculated using the method described in Section II(C).
Step 3: Similarity Measure for the Various Image Features: In our scheme we use three types of image features for CBIR. The distance metrics for the different image features are described as follows.
A) Similarity Metric for Color Histogram: The Euclidean distance between two color histograms is defined as [9]:

d_{CH}(Q,I) = \left[ \sum_{j=1}^{3} \sum_{i=1}^{N} \left( H_Q^j(i) - H_I^j(i) \right)^2 \right]^{1/2}    (6)

where Q is the query image and I is the image stored in the database; N is the number of bins in the color histogram; and H_Q^j(i) and H_I^j(i) are the values of the i-th bin of the j-th color band for the images Q and I, respectively. The number of color bands is three, i.e. Y, Cb and Cr. The greater the number of bins, the better the matching performance, with an increase in complexity. After calculating d_CH, the resultant value is normalized between 0 and 1 by dividing it by the maximum of d_CH over the entire group of images in the database; let d^n_CH(Q,I) represent the normalized distance between Q and I.

B) Similarity Metric for Color Moments: The Euclidean distance for color moments is defined as [9]:

d_{CM}(Q,I) = \sum_{i=1}^{3} (E_i^Q - E_i^I)^2 + \sum_{i=1}^{3} (\sigma_i^Q - \sigma_i^I)^2 + \sum_{i=1}^{3} (s_i^Q - s_i^I)^2    (7)

where Q is the query image and I is the image stored in the database; E_i^Q and E_i^I are the average color information, \sigma_i^Q and \sigma_i^I are the standard deviations, and s_i^Q and s_i^I are the third roots, i.e. the skewness, of the i-th color bands of the images Q and I, respectively. After calculating d_CM, the resultant value is normalized between 0 and 1 by dividing it by the maximum of d_CM over the entire group of images in the database; let d^n_CM(Q,I) represent the normalized distance between Q and I.

C) Similarity Metric for Edge Histogram: The Euclidean distance between two edge histograms is defined as [9]:

d_{EH}(Q,I) = \left[ \sum_{j=1}^{3} \sum_{i=1}^{N} \left( T_Q^j(i) - T_I^j(i) \right)^2 \right]^{1/2}    (8)

where Q is the query image and I is the image stored in the database; N represents the number of bins in the edge histogram, i.e. 6 in our scheme; and T_Q^j(i) and T_I^j(i) are the values of the i-th bin of the j-th color band for Q and I, respectively. After calculating d_EH, the resultant value is normalized between 0 and 1 by dividing it by the maximum value over the entire group of images in the database; let d^n_EH(Q,I) represent the normalized distance between Q and I.

D) Calculation of the Final Similarity Metric: The final distance is calculated as the weighted sum of the three normalized distances described above:

d(Q,I) = w_1 d^n_{CH}(Q,I) + w_2 d^n_{CM}(Q,I) + w_3 d^n_{EH}(Q,I)    (9)

where w_1, w_2 and w_3 are the positive weight factors of d^n_CH, d^n_CM and d^n_EH, respectively. Each weighting factor represents how important each distance is during CBIR; for example, if all distances are equally important, all factors should be 1/3 (the relationship w_1 + w_2 + w_3 = 1.0 must hold).

IV. PERFORMANCE EVALUATIONS
The performance of the proposed algorithm is tested over an image database containing 1000 images. Fig. 2 shows some of the decoded query images in the database. All of the test images are 24 bit/pixel color images of size (256x256). All tests are conducted on a Pentium IV, 2.80 GHz processor with 504 MB RAM using MATLAB 7. The present study uses the precision factor (PF) as a metric for measuring the performance of the proposed scheme. The precision factor is defined as

PF = R / N    (10)

where R and N are the number of relevant retrieved images and the number of retrieved images, respectively.
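The final matching step, equation (9), reduces to a weighted sum of the normalized distances. The sketch below is illustrative only: the per-database maxima used for normalization are assumed to be precomputed, and the example numbers are made up.

```python
def final_distance(d_ch, d_cm, d_eh, max_ch, max_cm, max_eh,
                   w=(1/3, 1/3, 1/3)):
    """Weighted sum of the normalized feature distances, equation (9).
    max_* are the maxima of each raw distance over the image database."""
    d_ch_n = d_ch / max_ch      # normalization to [0, 1], as described in Section III
    d_cm_n = d_cm / max_cm
    d_eh_n = d_eh / max_eh
    assert abs(sum(w) - 1.0) < 1e-9   # w1 + w2 + w3 = 1.0 must hold
    return w[0] * d_ch_n + w[1] * d_cm_n + w[2] * d_eh_n

# Example with made-up distances and database maxima:
print(final_distance(0.8, 2.5, 1.2, max_ch=2.0, max_cm=5.0, max_eh=3.0))
```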

Fig. 2. Some of the query images: (a) Food, (b) Bus, (c) Dinosaurs, (d) Flower.

Table I shows the performance of the proposed scheme for different image features. From Table I it is clear that when the three image features, namely CH, CM and EH, are used together, the system returns better performance in terms of PF. One may incorporate more image features to increase the performance of the proposed scheme, but that will also increase the execution time. Moreover, from Table I it is also clear that the PF for the image feature CH alone is quite low. This is natural, as the proposed scheme uses only 6 bins to represent the color histogram, which results in a low PF in CBIR. The value of PF in the case of CH can be improved by increasing the number of bins, at the cost of increased computational complexity. Fig. 3 shows the results visually for the test image Food. From Fig. 3 it is clear that only Fig. 3(10) is a dissimilar image retrieved from the image database, while the others are images of the same type as the query image Food.

TABLE 1. PRECISION FACTOR RESULTS (CH: color histogram; CM: color moments; EH: edge histogram)
Images      CH     CM     EH     CH+CM+EH (Proposed)
Food        0.40   0.80   0.80   0.933
Bus         0.39   0.76   0.82   0.921
Dinosaurs   0.35   0.75   0.79   0.912
Flowers     0.52   0.81   0.78   0.923

Fig. 3: Results when the image features are color histogram, color moments and edge histogram.

We have also studied the computational advantages of our proposed scheme. As noted before, the reason for extracting image features directly from the compressed domain when designing a CBIR system is that it requires less computation than the traditional system. Feig [10] pointed out that the complexity of one DCT or IDCT is O(n log2 n), where n indicates the signal length, and that a DCT/IDCT takes 54 multiplications for an (8x8) block. So, for the traditional system that calculates the feature vector of compressed data in the spatial domain, the extra operations for the inverse DCT amount to approximately

T_0 = N \cdot B \cdot n \log_2 n    (11)

where N is the total number of images in the database, B is the number of blocks in an image and n is the number of signal points in a block. Apparently T_0 is a linear function of N and B. That is why, for an image database having a large number of images of large size, the computational complexity will be quite high. However, in our scheme this complexity T_0 is totally avoided, through the extraction of image features in the compressed domain. Table II lists comparative results between [11] and the proposed algorithm. The numerical values in Table II indicate that the proposed scheme returns quite effective results even though the image features are extracted from the compressed domain. The average accuracy of the image classifier is 88.38% for the scheme proposed in [11], while in our scheme it is above 90%.

TABLE II. COMPARATIVE PERFORMANCE MEASURE (accuracy rate, %)
Images    H. N. Pour et al. [11]   Proposed
Food      89                       94
Bus       94                       93
Flowers   96                       93

V. CONCLUSIONS AND SCOPE OF FUTURE WORKS
In this paper, we propose an image retrieval scheme in the DCT compressed domain. In this study we have used a combination of color histogram, color moments and edge histogram as the feature set, but one may use any other feature set as well. Performance evaluation is done using standard benchmarks such as the precision factor. The scheme extracts image features from the partially decoded compressed image without decreasing the efficiency in terms of PF. This technology will be extremely useful for content-based image indexing systems where processing speed is a primary concern. Future work may address the hardware implementation of the proposed CBIR scheme on an application-specific integrated circuit (ASIC) or a field programmable gate array (FPGA).

REFERENCES
[1] B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, "Color and texture descriptors," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 6, pp. 703-715, 2001.
[2] M. Eom and Y. Choe, "Fast extraction of edge histogram in DCT domain based on MPEG7," Proc. of World Academy of Science, Engineering and Technology, vol. 9, pp. 209-212, 2005.
[3] P. S. Hiremath and J. Pujari, "Content based image retrieval using color, texture and shape features," Proc. of the 15th International Conference on Advanced Computing and Communications, IEEE Computer Society, pp. 780-784, 2007.
[4] T. Glatard, J. Montagnat, and I. E. Magnin, "Texture-based medical image indexing and retrieval on grids," Medical Imaging Technology, vol. 25, no. 5, pp. 333-338, 2007.
[5] A. Armstrong and J. Jiang, "An efficient image indexing algorithm in JPEG compressed domain," Proc. of the International Conference on Consumer Electronics, pp. 350-351, 2001.
[6] J. Bracamonte, M. Ansorge, F. Pellandini, and P. A. Farine, "Efficient compressed domain target image search and retrieval," Lecture Notes in Computer Science, vol. 3568, pp. 154-163, 2005.
[7] G. Voulgaris and J. Jiang, "Quad tree based image indexing in wavelets compressed domain," Proc. of the 20th Eurographics UK Conference, pp. 89, 2002.
[8] M. Stricker and M. Orengo, "Similarity of color images," in Storage and Retrieval of Image and Video Databases III, vol. 2, pp. 381-392, 1995.
[9] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, Pearson Education, 2005.
[10] E. Feig, "A fast scaled DCT algorithm," Proc. of SPIE Image Processing Algorithms and Techniques, vol. 1224, pp. 2-13, 1990.
[11] H. N. Pour and S. Saryazdi, "Object-based image indexing and retrieval in DCT domain using clustering techniques," Proc. of World Academy of Science, Engineering and Technology, vol. 3, pp. 98-101, 2005.

4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Enhancement of Medical Images Using a Novel Frobenius Norm Filtering Method


Mr.Akash Tayal 1 Harshita Sharma 2 Twinkle Bansal3

Electronics and Communication Engineering Department Indira Gandhi Institute of Technology, Guru Gobind Singh Indraprastha University Delhi, India. E-mail: 1akashtayal786@gmail.com, 2harshiita.s@gmail.com, 3twinkleigit2006@gmail.com
Abstract- The aim of Medical Image Enhancement is to improve the perception and interpretability of information in medical images for viewers; as such images are often degraded due to non-ideal acquisition processes like motion blurring, out-of-focus lens, poor illumination, coarse quantization etc. In this paper, we describe a novel two dimensional spatial-domain filtering approach using the Frobenius Matrix Norm for improvement of visual quality of medical images. The experimental results prove the accomplishment of the proposed image enhancement method. Keywords- Image processing, Image enhancement, SpatialDomain Filter, Vector Norm, Frobenius Matrix Norm.

I. INTRODUCTION

Image enhancement is an image processing method which highlights some special information of an image and simultaneously weakens or removes the one not required. Image visual quality assessment includes objective metrics which might not always match subjective scores. Unfortunately, there is no general theory for determining what a good image enhancement is when it comes to human perception, as human vision system is the ultimate judge. They can be divided into two broad categories: Spatial domain methods, which operate directly on pixels, and frequency domain methods, which operate on the Fourier transform of an image [1], [2]. Image enhancement techniques are used as pre-processing tools for other image processing techniques. The last few decades have seen the development of Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Digital Subtraction Angiography, Doppler Ultrasound Imaging, Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and other technologies are still emerging in the field of medical imaging. Medical Image Processing deals with the development of problem-specific approaches to enhance raw medical data for the purpose of selective visualization and further analysis [3]. General image enhancement is applied to fields like robotics, computer vision, medicine etc. that use methods including imaging geometry, linear transforms, frequency domain techniques, segmentation, histogram analysis, inverse filtration and Wiener filtration, which apply to any image modality and any application. These approaches are not suitable for medical image processing as several limitations such as the movement of a patient, distortion, blurring associated with relatively long acquisition time due

to anatomical motion, reconstruction errors due to noise of imaging devices, beam hardening etc exist, accounting for the differences between medical and non-medical approaches to processing and analysis. Generally, MRI images are corrupted by Rician noise, which arises from complex Gaussian noise in the original frequency domain (kspace) measurements. The quality of images is poor and to improve it, to enhance edges, to see clearly enough critical details, and to reduce the noise for diagnostic purposes, specific methods of enhancement must be used. Among the widely used spatial domain linear and nonlinear techniques are Average, Median and Wiener filters. Neighborhood averaging or Gaussian smoothing will tend to blur edges because the high frequencies in the image are attenuated [7]. Median filter is an order-statistic nonlinear filter that can be successfully used to suppress impulsive noise while preserving edges. It performs well, but fails when the probability of impulse noise occurrence is high [8], [9]. It also fails to provide sufficient smoothing of nonimpulsive noise. Classical Wiener filter which is a linear technique provides mathematical simplicity but has the disadvantage of blurring edges [10]. In this paper, we propose image enhancement based on a new spatial domain filtering method using the Frobenius Matrix norm. The paper is organized as follows. Section II explains the technique used, section III describes the experiments performed and results obtained, and section IV discusses the concluding remarks and future work of the study. II. A. THEORETICAL BACKGROUND

II. THEORETICAL BACKGROUND

A. Image Enhancement through Spatial-Domain Filtering

Linear filtering of an image is accomplished through convolution, a neighborhood operation in which each output pixel is the weighted sum of neighboring input pixels. The matrix of weights is called the convolution kernel, also known as the filter [1], [2]. The filtering process is described in Fig. 1 and (1).

Figure 1. Image Degradation and Filtering process.

$g(x,y) = \sum_{s=-a}^{a}\sum_{t=-b}^{b} w(s,t)\, f(x+s,\, y+t)$   (1)

where a filter mask of size m x n is used such that m = 2a+1 and n = 2b+1.

B. Common Spatial Domain Filtering Techniques

a) Average filter: The center pixel in the window of size m x n is replaced by the average of the pixels. Common weighting functions include rectangular, triangular and Gaussian weighting functions. Smoothing reduces or attenuates the higher frequencies, and hence details, in the image [7]. The most common averaging filter can be defined by the expression:

$\hat{f}(x,y) = \frac{1}{mn}\sum_{s=-a}^{a}\sum_{t=-b}^{b} g(x+s,\, y+t)$   (2)

b) Median Filter: This filter is based on ranking the pixels in a neighborhood of size m x n within the input window and replacing the center pixel in the window by the median of the pixels [8], [9]. It is described by:

$\hat{f}(x,y) = \operatorname{median}_{(s,t)\in S_{xy}}\{\, g(s,t) \,\}$   (3)

where S_xy is a rectangular subimage of size m x n.

c) Wiener Filter: Within the local region, the spatial domain Wiener filter is the one that minimizes the mean square error (MSE) between the original image f(x,y) and the enhanced image s(x,y), given as:

$e^{2} = E\{\, (f(x,y) - s(x,y))^{2} \,\}$   (4)

where E{ } is the expectation value of the stochastic process [10].
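For illustration only (this is not the paper's proposed Frobenius-norm method), the following minimal sketch applies the three classical spatial-domain filters above to a synthetic noisy image using SciPy; the image, kernel sizes and noise level are placeholder assumptions.

```python
import numpy as np
from scipy import ndimage, signal

rng = np.random.default_rng(0)
# Synthetic test image: a bright square on a dark background, plus Gaussian noise.
clean = np.zeros((128, 128), dtype=float)
clean[32:96, 32:96] = 1.0
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

# a) Average (mean) filter over a 3x3 neighborhood: attenuates high frequencies, blurs edges.
avg = ndimage.uniform_filter(noisy, size=3)

# b) Median filter: order-statistic filter, suppresses impulsive noise while preserving edges.
med = ndimage.median_filter(noisy, size=3)

# c) Wiener filter: local linear MMSE estimate computed from local mean and variance.
wie = signal.wiener(noisy, mysize=3)

for name, img in [("average", avg), ("median", med), ("wiener", wie)]:
    mse = np.mean((img - clean) ** 2)
    print(f"{name:8s} MSE vs clean: {mse:.4f}")
```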

C. Norms

A norm is a real-valued function that provides a measure of the size of multi-component mathematical entities such as vectors and matrices. Norms exist for the set of real numbers R and the set of complex numbers C. They can be classified as vector norms and matrix norms [5].

a) General Vector Norms: A vector norm for a real or complex vector space V is a function || . || mapping V into R that satisfies the following conditions:

$\|x\| \ge 0$ and $\|x\| = 0 \iff x = 0$   (5)
$\|\alpha x\| = |\alpha|\,\|x\|$ for all scalars $\alpha$   (6)
Triangle inequality: $\|x + y\| \le \|x\| + \|y\|$   (7)

where x, y ∈ R^{n×1} or x, y ∈ C^{n×1}. One such vector norm is the Euclidean vector norm, which is further elaborated in [5].

b) General Matrix Norms: A matrix norm is a function || . || mapping the set of all real or complex matrices (of all finite orders) into R that satisfies the following properties:

$\|A\| \ge 0$ and $\|A\| = 0 \iff A = 0$, with $\|\alpha A\| = |\alpha|\,\|A\|$ for all scalars $\alpha$   (8)
Triangle inequality: $\|A + B\| \le \|A\| + \|B\|$ for matrices of the same size   (9)
Cauchy-Bunyakovskii-Schwarz (CBS) inequality: $\|AB\| \le \|A\|\,\|B\|$ for all conformable matrices   (10)

where A, B ∈ R^{m×n} or A, B ∈ C^{m×n}. The Frobenius norm, defined below, satisfies the above definition.

D. Frobenius Matrix Norm

Matrix multiplication distinguishes matrix spaces from more general vector spaces, but the vector-norm properties above say nothing about products, so an extra property that relates ||AB|| to ||A|| and ||B|| is needed; the Frobenius norm suggests the nature of this property. The Frobenius norm is one of the simplest notions of a matrix norm. It is also called the Hilbert-Schmidt norm or the Schur norm. If C^{m×n} is the complex vector space of dimension m x n, the Frobenius norm of A ∈ C^{m×n} is defined by (11):

$\|A\|_F = \Big(\sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij}|^{2}\Big)^{1/2}$   (11)

The CBS inequality ensures that

$\|Ax\|_2 \le \|A\|_F\,\|x\|_2$   (12)

and we can say that the Frobenius matrix norm || . ||_F and the Euclidean vector norm || . ||_2 are compatible. The compatibility condition implies that, for all conformable matrices A and B,

$\|AB\|_F \le \|A\|_F\,\|B\|_F$   (13)

Equation (13) suggests that this sub-multiplicative property should be added to the vector-norm properties to define a general matrix norm.
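As a minimal numerical check of the Frobenius norm definition (11) and the sub-multiplicative property (13), the following NumPy sketch compares a hand-written computation with NumPy's built-in Frobenius norm; the matrices are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))

def frobenius(M):
    # ||M||_F = square root of the sum of squared entries, as in (11).
    return np.sqrt(np.sum(np.abs(M) ** 2))

print(frobenius(A), np.linalg.norm(A, "fro"))           # identical values
print(frobenius(A @ B) <= frobenius(A) * frobenius(B))  # sub-multiplicativity (13): True
```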

III. EXPERIMENTS AND RESULTS

Grayscale MRI images (256 x 256) were used as test images in the experiments. Noisy images were obtained by adding Rician noise and uncorrelated Additive White Gaussian Noise (AWGN) to the original images at different variances. These images were enhanced using the spatial domain Average, Median and Wiener filters and the proposed Frobenius norm filter. The experimental results obtained for the noisy test images are shown in Figs. 2-3. The Signal to Noise Ratio (SNR) of the noisy images and of all the results obtained by the different filters is given in Table I and in the bar chart of Fig. 4. The increase in SNR for the Frobenius method further validates its effectiveness.
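The sketch below shows one common way to generate such noisy test images and to measure SNR; the Rician construction from complex Gaussian noise is standard, the energy-ratio SNR definition is an assumption (the paper does not spell out its exact formula), and the synthetic image is a placeholder for the MRI test images.

```python
import numpy as np

rng = np.random.default_rng(2)

def add_awgn(img, sigma):
    """Additive white Gaussian noise."""
    return img + rng.normal(scale=sigma, size=img.shape)

def add_rician(img, sigma):
    """Rician noise: magnitude of the image plus complex Gaussian noise,
    as it arises from k-space measurements."""
    n_re = rng.normal(scale=sigma, size=img.shape)
    n_im = rng.normal(scale=sigma, size=img.shape)
    return np.sqrt((img + n_re) ** 2 + n_im ** 2)

def snr_db(reference, test):
    """Energy-ratio SNR in dB (one common definition)."""
    err = reference - test
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

# Synthetic 256x256 test image in [0, 1] standing in for an MRI slice.
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
img = np.exp(-(x ** 2 + y ** 2) * 4)

noisy = add_rician(img, 0.1)
print(f"SNR of noisy image: {snr_db(img, noisy):.2f} dB")
```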


Figure 3. MRI of fatal stroke: a) Original image acquired by non-ideal acquisition process, b) Noisy image obtained by adding AWGN, c) Image enhanced by Average filter, d) Image enhanced by Median filter, e) Image enhanced by Wiener filter, f) Image enhanced by Frobenius Norm filter.

TABLE I. SNR OF RESULTS FOR DIFFERENT METHODS (IN dB)

Image and type of noise        Noisy Image   Average filter   Median filter   Wiener filter   Frobenius filter
Normal MRI + Rician Noise      12.36         12.13            11.71           12.60           13.40
Fatal stroke MRI + AWGN        12.55         13.03            12.34           12.49           13.94

Figure 2. MRI: a) Original image acquired by non-ideal acquisition process, b) Noisy image obtained using Rician noise, c) Image enhanced by Average filter, d) Image enhanced by Median filter, e) Image enhanced by Wiener filter, f) Image enhanced by Frobenius Norm filter.


Figure 4. Histogram of results (SNR in dB of the Normal MRI corrupted with Rician noise and the Fatal stroke MRI corrupted with AWGN, for the noisy image and the Average, Median, Wiener and Frobenius filters; values as in Table I).

IV. SUMMARY AND FUTURE WORK

The proposed spatial domain filtering method provides better performance than the generally used methods. It has a reasonable computational complexity of about 20 seconds and provides better quality for medical images obtained by non-ideal image acquisition processes, with less blurring of the edges and less loss of important image details. The other spatial domain filters (mean, median and Wiener) do not provide satisfactory results and cause important details of the enhanced image to be lost. The experimental results demonstrate the effectiveness of the proposed image enhancement method. Although satisfactory results have been achieved at this stage, the filtering method carries the potential for further improvement. It can also be used in the pre-processing stage of other image processing applications; we look forward to evaluating its performance in Content Based Image Retrieval (CBIR) systems.

ACKNOWLEDGMENTS

There are no words which can adequately express our gratitude for the invaluable guidance, encouragement and technical direction we have been so fortunate to receive from Prof. Ashwani Kumar, Principal, IGIT, GGSIPU. Further, we are extremely grateful to the Electronics and Communication Department faculty of IGIT for providing us the opportunity and the infrastructure to complete the allotted work.

REFERENCES
[1] William K. Pratt, Digital Image Processing, Wiley, 1991.
[2] A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, 1989.
[3] A. Khare and U. S. Tiwary, "Soft-thresholding for denoising of medical images - a multiresolution approach," International Journal of Wavelets, Multiresolution and Information Processing, World Scientific, vol. 3, no. 3, 2005, in press.
[4] P. Zamperoni, "Image enhancement," Advances in Imaging and Electron Physics, vol. 92, pp. 1-77, 1995.
[5] R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
[6] W. M. Morrow, R. B. Paranjape, R. M. Rangayyan, and J. E. L. Desautels, "Region-based contrast enhancement of mammograms," IEEE Trans. on Medical Imaging, vol. 11, no. 3, pp. 392-406, 1992.
[7] I. Pitas and A. N. Venetsanopoulos, "Nonlinear mean filters in image processing," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-34, no. 3, June 1986.
[8] H. Hwang and R. A. Haddad, "Adaptive median filters: new algorithms and results," IEEE Transactions on Image Processing, vol. 4, no. 4, April 1995.
[9] H. Ibrahim, N. S. P. Kong, and T. F. Ng, "Simple adaptive median filter for the removal of impulse noise from highly corrupted images," IEEE Transactions on Consumer Electronics, vol. 54, no. 4, Nov. 2008.
[10] A. Khireddine, K. Benmahammed and W. Puech, "Digital image restoration by Wiener filter in 2D case," Advances in Engineering Software, vol. 38, no. 7, pp. 513-516, July 2007.
[11] W. C. Ogle, H. N. Nguyen, M. A. Tinston, J. S. Goldstein, P. A. Zulch, and M. C. Wicks, "A multistage non homogeneity detector," Proceedings of the 2003 IEEE Radar Conference, 2003.
[12] M. Sezgin and B. Sankur, "Survey over image thresholding techniques and quantitative performance evaluation," Journal of Electronic Imaging, vol. 13, no. 1, pp. 146-165, 2004.
[13] S. Aghagolzadeh and O. K. Ersoy, "Transform image enhancement," Optical Engineering, vol. 31, no. 3, pp. 614-626, 1992.
[14] IEEE Trans. on Medical Imaging, vol. 13, no. 4, pp. 573-586, 1994.
[15] Fatma Arslan and A. M. Grigoryan, "Fast splitting alpha rooting method of image enhancement," IEEE Trans. on Image Processing, vol. 15, no. 11, pp. 3375-3384, Nov. 2006.
[16] D. Wang and A. H. Vagnucci, "Digital image enhancement," Computer Vision, Graphics, and Image Processing, vol. 24, pp. 363-381, 1981.
[17] D. Phillips, Image Processing in C: Analyzing and Enhancing Digital Images, R&D Publications, Lawrence, 1994.
[18] R. N. Bracewell, Two-Dimensional Imaging, Prentice-Hall, Englewood Cliffs, 1995.
[19] J. L. C. Sanz, Image Technology: Advances in Image Processing, Multimedia, and Machine Vision, Springer, Berlin, 1995.
[20] A. V. Oppenheim, Applications of Digital Signal Processing, Prentice Hall, Englewood Cliffs, 1978.
[21] G. Winkler, Image Analysis, Random Fields, and Dynamic Monte Carlo Methods, Springer, Berlin, 1995.
[22] F. T. Marchese, Understanding Images: Finding Meaning in Digital Imagery, Springer, Berlin, 1995.
[23] W. Niblack, An Introduction to Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, pp. 115-116, 1986.
[24] I. Pitas and A. N. Venetsanopoulos, Nonlinear Digital Filters: Principles and Applications, Kluwer Academic, 1990.
[25] Kenneth R. Castleman, Digital Image Processing, Prentice Hall, 1996.



Texture analysis of Ultrasound images for Liver classification


Mandeep Singh
Dept. of Electrical & Instr. Engg., Thapar University, Patiala, INDIA
mdsingh@thapar.edu

Sukhwinder Singh
Dept. of Computer Sci. & Engg., UIET, Panjab Univ., Chandigarh, INDIA
sukhdalip@yahoo.com

Savita Gupta
Dept. of Computer Sci. & Engg., UIET, Panjab Univ., Chandigarh, INDIA
savita2k8@yahoo.com

Abstract- This paper presents the characterization of liver ultrasound images through texture analysis. Various liver images are studied for classification into Fatty and Normal. The role of texture features based on SGLCM, GLDS, first order statistics and fractals is explored for liver classification.

Keywords- Liver Ultrasound, Texture analysis, Liver classification

Introduction

Ultrasound as an imaging modality has become a widely popular technique because of its ability to visualize many human soft tissues and organs without any deleterious effect. The liver, kidneys, developing fetus and abdomen are among the major organs diagnosed through ultrasound imaging. Since ultrasound images suffer from speckle noise, it is difficult to relate an organ's appearance to its specific pathological changes. However, the granular structure of the tissue can be studied and analyzed to characterize it. This specific granular pattern of Normal Liver and Fatty Liver can be described as texture, and thus texture analysis for tissue characterization may provide crucial information that cannot be obtained through visual interpretation of ultrasound images.

Previous work: In the past few years, many statistical features which describe the physical properties of the tissue have been published to distinguish between normal and abnormal ultrasonic liver images [1, 2]. A number of approaches to the texture classification problem have been developed over the years. The most commonly used texture features that have been applied successfully to real-world textures are the spatial gray-level dependence matrices (SGLDM) by Haralick [3], the Fourier power spectrum (FPS) [4], and the Laws texture energy measures (TEM) [5]. The fractal concept developed by Mandelbrot [6] provides an excellent representation of the roughness of natural surfaces. It has been successfully applied to geographical simulations [7], texture analysis [8, 9], and X-ray medical images [10]. An intensity surface of an ultrasonic liver image can also be viewed as the end result of random walks, so the Mandelbrot model can be suitable for the analysis of ultrasonic liver images; hence, the fractal dimension can measure the roughness and the granular structure of an image [11]. Recently, some work has been reported on liver classification using texture analysis through non-separable wavelet transforms [12], fuzzy logic [13], a neural-network SOM [14] and a Bayesian classifier by Ricardo [15]. Fractal analysis has also been explored with the k-means classification method for liver classification. The use of quantitative analysis of the backscattered ultrasound signal as a means of detecting diffuse diseases in various organs has also been reported [16].

I. LIVER PHYSIOLOGY AND CHARACTERISTICS

Liver is the largest internal organ of the human body, which plays a vital role in various body metabolisms. Liver pathologies can be classified into two main categories according to the degree of dispersion of the disease. The first category is the focal or localized liver diseases in which the pathology is concentrated in small spot(s) in one or both of the liver lobes while the rest of the liver tissue remains normal. The second category is the diffused liver diseases in which at least one complete lobe of the liver is affected by the disease or, in other words, the disease is distributed over the whole liver volume. Focal diseases can be identified by differences in echogenicity between normal and diseased areas. In the presence of diffuse disease, the entire organ may be affected. In that situation, there is no change in echo intensity on which basis a diagnosis can be undertaken. Visual criteria for diagnosing diffused liver diseases are in general confusing and highly subjective. It depends upon the ability of the Radiologist to observe certain textural characteristics from the image and to compare them with those developed for different pathologies in order to determine the type of the disease. Examples for these features are texture homogeneity and texture echogenicity where their description can be widely debated among experienced Radiologists especially in marginal cases. In this paper an attempt is made to help the radiologist to diagnose the diffused liver disease in general and fatty liver


in specific. Visual classification of diffuse fatty infiltration disease from ultrasound images is usually based on two main features: a) an increase in liver parenchyma echogenicity, and b) a decrease in acoustic penetration with a corresponding loss of visualization of the diaphragm and hepatic vessels. In this condition, the pixel intensities decay strongly with image depth (y axis). Diagnostic accuracy using only visual interpretation is currently estimated to be around 72% [17, 18]. This has led to the use of an objective method based on quantitative analysis for the characterization of the liver. II. MATERIALS AND METHODOLOGY

A. Image Acquisition

All images were acquired with a GE Medicare Logic 3 PRO machine using an angular array probe at 3.6 MHz. The following criteria were adopted throughout the analysis.

ROI location: To avoid distorting effects in the ultrasonic wave pattern, such as side lobes and grating lobes, the region should be selected each time along the center line of the image. Also, the depth of the region of interest needs to be chosen such that the distorting effects of reverberations in the shallow parts and attenuation in the deep parts are negligible.

ROI size and shape: To get reliable analysis results, the number of pixels in the ROI must be at least 800 to provide reliable statistics [19]. Keeping this criterion, a square ROI containing 3600 pixels is selected.

Fasting condition of the patient: It has been suggested that patients should fast for eight hours before any scan to avoid the effects of changes in liver glycogen and water storage on ultrasound attenuation [20]. This particular issue may not be too critical due to the sound differences between patients having the same pathological condition. The effect of abdominal wall thickness and composition is not considered here. The texture features used for analysis are the spatial gray-level dependence matrices (SGLDM), first order statistics (FoS), Grey Level Difference Statistics (GLDS), the Statistical Feature Matrix (SFM) and Fractals.

B. Texture Features

There is no work in the literature that directly links texture parameters measured from ultrasound signals to any physical properties of tissues. However, Thijssen et al. [21] examined the correlation between texture parameters and parameters extracted from the RF signal that are based on scattering properties of the underlying tissue. A similar study has also been done by Insana et al. [2], which reported that the Spatial Grey Level Co-occurrence Matrix (SGLCM) is an optimal way to analyse texture features; this method is a measure of the variability in the specular component of the RF signal, normalized by the diffuse component.

SGLCM: The Spatial Grey Level Co-occurrence Matrix (SGLCM) texture features, as proposed by Haralick et al. [3], are the most frequently used texture features. They are based on the estimation of the second-order joint conditional probability density functions, where two pixel pairs (k, l) and (m, n), separated by distance d in the direction specified by the angle q, have intensities of gray level g and gray level f. Based on these probability density functions, the following texture measures are computed: ASM, contrast, correlation, IDM, sum average, variance and entropy. For a chosen distance d (usually one pixel) and for angles q = 0, 45, 90 and 135 degrees, four values of each of the above texture measures are computed and then averaged, for all 30 Normal and 20 Fatty liver images.

GLDS parameters: The Grey Level Difference Statistics (GLDS) algorithm uses first-order statistics of local property values, based on absolute differences between pairs of gray levels or of average gray levels, to extract the following texture measures: Contrast, ASM, Entropy and Mean.

Fractal feature: The Hurst coefficients H(k) [11] are computed for different image resolutions; a smooth texture surface is described by a large value of H(k), whereas the reverse applies for a rough texture surface. In this study the resolution parameter k is taken as 1 and 2 to obtain the Hurst coefficients.
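The following hedged sketch shows how averaged SGLCM features of the kind described above can be computed with scikit-image's graycomatrix/graycoprops (names as in recent scikit-image versions); entropy is computed directly from the co-occurrence matrix since graycoprops does not provide it, and the random ROI merely stands in for a real 60x60 liver ROI.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
# Placeholder for a ~3600-pixel square ROI cropped along the image center line.
roi = rng.integers(0, 256, size=(60, 60), dtype=np.uint8)

# SGLCM for d = 1 pixel and angles 0, 45, 90 and 135 degrees.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {}
for prop in ("ASM", "contrast", "correlation", "homogeneity"):
    # Average the four directional values, as described above.
    features[prop] = graycoprops(glcm, prop).mean()

# Entropy is computed from the normalized co-occurrence probabilities.
p = glcm[glcm > 0]
features["entropy"] = -np.sum(p * np.log2(p))

print(features)
```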

III. RESULTS AND DISCUSSION

Texture features were studied on 50 liver images whose pathological results were known (30 Normal and 20 Fatty livers). In this paper, a total of 22 texture features are extracted and analyzed for the classification purpose. All the Normal and Fatty liver images are analyzed quantitatively for the 22 texture parameter values, and their class-wise mean and standard deviation are reported below in Tables I to VI.

Statistical Feature Matrix (SFM) parameters: The four Statistical Feature Matrix parameters [22] measured are Periodicity, Roughness, Contrast and Coarseness. Periodicity and Roughness lie in overlapping ranges and thus are not discriminating features, whereas Contrast and Coarseness are far different in Fatty than in Normal liver images. Therefore these can be considered potential discriminative features for liver classification.

Fractal feature: The Hurst coefficients H(k) [11] are computed for two image resolutions, k=1 and k=2. A smooth texture surface is described by a large value of the parameter H(k), whereas


a low value indicates a rough surface. As is evident from Table II, Fatty livers have a higher value of the Hurst coefficient than Normal livers at both resolutions. This indicates that the Fatty liver area is smoother than that of a normal liver. The ranges in the two categories are very distinctive, so both can be considered discriminative features for classification. SGLCM features: The Spatial Grey Level Co-occurrence Matrix based texture features calculated are Angular Second Moment (ASM), Contrast, Variance and Correlation, reported in Table III. Inverse Difference Moment (IDM), Sum of Variances, Sum of Averages and Entropy are also calculated and reported in Tables III and IV. From the SGLCM Tables III and IV, it is clear that only ASM, Contrast, Correlation and Entropy are discriminative features.
TABLE I. SFM PARAMETERS FOR FATTY AND NORMAL LIVER

Feature                     Period    Rough     Cont      Coars
Fatty Liver    Mean         0.7171    2.3928    6.4545    20.3887
               Std.dev      0.01      0.08      0.56      2.51
Normal Liver   Mean         0.7250    2.3079    10.5847   10.1896
               Std.dev      0.05      0.14      0.76      1.36

TABLE II. HURST COEFFICIENT FOR K=1 AND K=2 FOR FATTY AND NORMAL LIVER

Hurst coeff.                k=1       k=2
Fatty Liver    Mean         0.4484    0.3187
               Std.dev      0.008     0.06
Normal Liver   Mean         0.3834    0.2663
               Std.dev      0.002     0.01

TABLE III. SGLCM BASED TEXTURE FEATURES

Feature                     ASM       Cont.     Correl.   Variance
Fatty Liver    Mean         0.5595    0.0759    0.8312    5.1096
               Std.dev      0.21      0.02      0.02      0.83
Normal Liver   Mean         0.3622    0.1321    0.8981    6.2887
               Std.dev      0.06      0.02      0.02      2.41

TABLE IV. SGLCM BASED TEXTURE FEATURES (CONTINUED)

Feature                     IDM       Sm_Avg    Sm_Vari   Entrpy
Fatty Liver    Mean         0.9621    4.4377    17.4739   0.3927
               Std.dev      0.02      0.43      2.73      0.08
Normal Liver   Mean         0.9346    4.8027    19.9347   0.6366
               Std.dev      0.02      0.77      7.56      0.11

ASM represents homogeneity, which is higher in Fatty livers, and the higher value of Entropy in Normal livers shows that echogenicity is increased in the Fatty liver.

GLDS and FoS parameters: The Grey Level Difference Statistics based parameters Energy, Homogeneity, Entropy and Skewness are reported in Table V, whereas Table VI contains the values of the First order Statistics parameters Mean, Median, Kurtosis and Contrast. From Tables V and VI, it is clear that the two GLDS parameters Energy and Homogeneity can be used in the classification process, and from FoS the Mean and Contrast features can be used as discriminative features for liver classification. This Contrast is different from the one calculated using the SGLCM.

TABLE V. GLDS PARAMETERS FOR FATTY AND NORMAL LIVER

Feature                     Engy      Hom       Ent       Skew
Fatty Liver    Mean         0.1465    0.38      2.2221    1.3631
               Std.dev      0.01      0.03      0.21      0.22
Normal Liver   Mean         0.0908    0.247     2.6982    1.3075
               Std.dev      0.05      0.02      0.24      0.25

TABLE VI. FOS PARAMETERS FOR FATTY AND NORMAL LIVER

Feature                     Mean      Kurt      Med       Cont
Fatty Liver    Mean         2.9552    5.7388    39.9637   21.0925
               Std.dev      0.48      0.8       3.88      3.44
Normal Liver   Mean         5.0921    6.255     47.6098   59.7103
               Std.dev      0.51      1.51      7.63      6.42

IV. CONCLUSION

Texture analysis is very useful for discriminating Fatty Liver from Normal Liver images. In this paper a total of 22 texture features are extracted and analyzed for classification. The analysis has been done on 50 images (30 Normal and 20 Fatty livers). All the Normal and Fatty liver images are analyzed quantitatively for the 22 texture features, and their class-wise mean and standard deviation are calculated. After manual comparison, it is found that, out of the 22 texture features used, 12 features have discriminating power. Out of these 12 features, the best discriminating features for classification of Fatty Liver are Homogeneity from GLDS, Entropy and Contrast from SGLCM, Coarseness from SFM, Mean and Contrast from FoS, and the Fractal Hurst coefficient at k=1.


FUTURE SCOPE

The present work will be extended in the near future to more texture features and to a larger number of ultrasound images. Discriminators will then be selected using more sophisticated techniques.

ACKNOWLEDGMENT

This work is sponsored by the project under SERC Fast-track scheme no. SR/FT/ETA-065/2008 funded by DST (Govt. of INDIA), New Delhi.

REFERENCES
[1] R. Momenan, M.H. Loew, M.F. Insana, R.F. Wagner, and B.S. Garra, Application of pattern recognition techniques in ultrasound tissue characterization, 10th Int. Cont Pattern Recognition, vol. 1, pp. 608-612, 1990. [2] M. F. Insana, R. F. Wagner, B. S. Gam, R. Momenan, and T. H. Shawker, Pattern recognition methods for optimizing multivariate tissue signatures in diagnostic ultrasound, Ultrasonic Imaging, vol. 8, pp. 165- 180, 1986. [3] R. M. Haralick, K. Shanmugan, and 1. H. Dinstein, Texture features for image classification, IEEE Trans. Syst., Man, Cyber., vol. SMC-3, pp. 610-621, 1973. [4] G. O. Lendaris and G. L. Stanley, Diffraction pattern sampling for automatic pattern recognition, Proc. IEEE, vol. 58, pp. 198216, 1970. [5] K. I. Laws, Texture energy measures, in Proc. Image Understanding Workshop, pp. 47-51, 1979. [6] B. B. Mandelbrot, The Fractal Geometry of Nature, San Francisco, CA, Freeman, 1982. [7] A. Fournier, D. Fussell, and L. Carpenter, Computer rendering of stochastic models, ACM Commun., vol. 25, pp. 371 -384, 1982. [8] S. Peleg, J. Naor, R. Hartley, and D. Avnir, Multiple resolution analysis and classification, IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, pp. 518-523, 1984. [ 9] A. Pentland, Fractal-based description of nature scenes, IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, pp. 667674, 1984. [10] T. Lundahl, W. J. Ohley, S. M. Kay, and R. Siffert, Fractional Brownian motion: A maximum likelihood estimator and its application to image texture, IEEE Transactions on Medical Imaging, vol. MI-5, pp. 152-161, 1986. [11] Wu, C.M., Chen, Y.C. and Hsieh K.S., Texture features for classification of Ultrasonic Liver images , IEEE Transactions on Medical Imaging, Vol. 11, No. 2, 1992. [12] Aleksandra Mojsilovi, Miodrag Popovi, Characterization of visually similar Diffuse diseases from B-Scan Liver images using Non-separable Wavelet transform, IEEE Transactions on Medical Imaging, Vol. 17, NO. 4, 1998. [13] Ahmed M. Badawi, Ahmed S. Derbala , Abou-Bakr M. Youssef Fuzzy Logic Algorithm for quantitative Tissue characterization of diffuse Liver diseases from Ultrasound Images, International Journal of Medical Informatics vol. 55, pp 135147, 1999. [14] Mukherjee S., Chakravorty A., Ghosh K., Roy M., Adhikari A., Mazumdar S., Corroborating the Subjective Classification of

Ultrasound Images of Normal and Fatty Human Livers by the Radiologist through Texture Analysis and SOM, Advanced Computing and Communications, International Conference, pp 197 202, 2007. [15] Ricardo Ribeiro and Joao Sanches, Fatty Liver Characterization and Classification by Ultrasound, IBPRIA 2009, LNCS 5524 Springer-Verlag, pp. 354361, 2009. [16] Loew, M.H., Mia, R., Guo, An approach to image classification in ultrasound Applied Imagery Recognition Workshop, pp193199, 2000. [17] Chen et al, An automatic diagnosis system for CT Liver image classification, IEEE Transaction on Biomedical Engineering, Vol. 45, No. 6, pp 783-794, 2007. [18] K.J. Foster, K.C. Dewbury, A.H. Griffith, R. Wright, The Accuracy of Ultrasound in the Detection of Fatty Infiltration of the Liver, British Journal of. Radiology, Vol. 53, pp 440-442, 1980. [19] Kadah, Y.M., Farag, A.A., Zurada, J.M., Badawi, A.M., Youssef, A.M., Classification Algorithms for Quantitative Tissue. Characterization of Diffuse Liver Disease from Ultrasound Images, IEEE Transactions On Medical Imaging, Vol. 15, No. 4, 1996. [20] T.A. Tuthill, R.B. Baggs and K.J. Parker, Liver Glycogen and Water Storage: Effect on Ultrasound Attenuation, Ultrasound in Medicine & Biology, Vol. 15, No. 7 pp 621-627, 1989. [21] J. M. Thijssen, Frank M. J. and Valckx, Characterization of Echographic image texture by Cooccurrence Matrix parameters, Ultrasound in Medicine & Biology, Vol. 23. No. 4, pp. 559-571, 1997. [22] Wu, C.M. and Chen, Y.C., "Statistical Feature Matrix for Texture Analysis", CVGIP: Computer Vision Graphical Models and Image Processing, Vol 54, No 5, pp 407-419, Sep 1992.

Figure 1. Two sets of Normal and Fatty Liver images used for analysis, with the ROI marked on the Fatty Liver image and the Normal Liver image.



Performance Analysis of Transform Based Coding Algorithms for Medical Image Compression
M.A. Ansari1 * Member IEEE, R.S. Anand2 and Kuldeep Yadav3 1. Prof. & Head, EED, GCET, Gr. Noida, 2. Professor, EED, IITR 3. SL, CS, COER & RS, UTU, Dehradun
e_mail: 1. ma.ansari@ieee.org 2. anadfee@iitr.ernet.in 3. kul82_deep@rediffmail.com
Abstract- Medical image compression is of paramount importance because of transmission bandwidth and storage constraints. In order to achieve optimum storage, fast transmission, higher compression rates and improved diagnostic image quality, different compression algorithms are implemented here and compared to find an optimal image coding technique. Among the three techniques implemented, namely DCT, DWT and SPIHT, the scheme for encoding wavelet coefficients termed set partitioning in hierarchical trees (SPIHT) yields significantly better compression ratios and image quality than the Joint Photographic Experts Group (JPEG) coder based on the discrete cosine transform (DCT) and the general Discrete Wavelet Transform (DWT) method. Three different types of medical images have been used for testing these algorithms, and the subjective and objective performance comparison is made on the basis of visual quality, compression ratio, peak signal to noise ratio, mean square error, correlation coefficient and mean opinion scores judged by medical experts, in order to test the compression capabilities.

Keywords- DCT, DWT, SPIHT, subjective & objective performance.

I. INTRODUCTION

A lot of research has been done on medical image compression to optimize the storage and transmission of medical imagery since the establishment of the JPEG standard [1]. It is intended to complement, not replace, the current JPEG standard with other standard methods like DWT and SPIHT. While DCT-based image coders perform very well at moderate bit rates, at higher CRs image quality degrades because of the artifacts resulting from the block-based DCT scheme [2,3]. Wavelet-based coding, on the other hand, provides substantial improvement in picture quality at low bit rates because of the overlapping basis functions and better energy compaction property of wavelet transforms [4,5]. Because of their inherent multi-resolution nature, wavelet based coders facilitate progressive transmission of images, thereby allowing variable bit rates. Comparatively, wavelet based coding proves to be an effective technique for medical image compression, giving significantly better results than the JPEG standard with comparable computational efficiency and performance parameters. The standard steps in wavelet compression are to perform the DWT, quantize the resulting wavelet coefficients and losslessly encode the quantized coefficients [6,7]. These coefficients are usually encoded by performing vector quantization within the various sub-blocks. The SPIHT technique based on the wavelet transform, given by Said and Pearlman [8], yields significantly better performance than conventional wavelet compression with similar computational complexity. In addition to providing efficient compression, it also transmits the compressed bitstream progressively, so that approximations of the most important coefficients are transmitted first and the remaining information which yields the largest distortion reduction is transmitted next [9].

II. DCT BASED COMPRESSION

The JPEG compression process is a mathematical transformation based on the DCT [2]. It first transforms image data into the frequency domain using the DCT, which is a lossless process. It takes a set of points from the spatial domain and transforms them into an identical representation in the frequency or spectral domain, with the x and y axes representing the frequencies of the two signals in two different directions. The implementation used here is based on the code distributed by the JPEG users group, and the default quantization tables and Huffman coding tables are used [1]. The steps of the DCT based compression algorithm are shown in the block diagram of Fig. 1, where the various stages of the compression process are given.
Figure 1. DCT based Encoder and Decoder (encoder: input image, 8x8 block extractor, forward DCT, normalizer/quantizer, symbol encoder, store/transmit; decoder: symbol decoder, denormalizer, inverse DCT, 8x8 block merger, reconstructed image).

A. Algorithm for the DCT based compression


In the DCT based image compression algorithm, the input image is divided into pixel blocks of size of 8x8 and the 2D-DCT is computed for each block which is performed by way of matrix multiplication. The DCT coefficients are then quantized and encoded. The resulting matrix is then processed in a quantization stage in which a user specified quality factor, usually between 1 and

100, is utilized to define the quantum steps. The implementation of the DCT algorithm works as follows:
1. Subdivide the input image into nonoverlapping pixel blocks of size 8 x 8.
2. Process the blocks from left to right, top to bottom. As each 8x8 block or subimage is processed, its 64 pixels are level shifted by subtracting 2^(m-1), where 2^m is the number of gray levels.
3. Compute the 2D DCT of the block; the resulting coefficients are simultaneously normalized and quantized.
4. The quantization is done according to the formula
$\hat{T}(u,v) = \operatorname{round}\big( T(u,v) / Z(u,v) \big)$
where T(u,v) is the DCT transformed image and Z(u,v) is the normalization matrix.
5. After each block's DCT coefficients are quantized, the elements of $\hat{T}(u,v)$ are reordered in accordance with a zig-zag pattern, and the non-zero AC coefficients are coded using a variable length code.
6. The decoding process simply inverts the operation of encoding.
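A minimal sketch of the block-DCT and quantization steps above and their inversion, using SciPy's DCT; note that a flat quantization step stands in for the JPEG quantization table and the random image is a placeholder.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
# Stand-in for an 8-bit grayscale medical image whose sides are multiples of 8.
img = rng.integers(0, 256, size=(64, 64)).astype(float)

step = 16.0   # placeholder quantization step (JPEG uses fixed luminance tables)
m = 8         # bits per pixel, so pixels are level shifted by 2**(m-1) = 128

rec = np.zeros_like(img)
for r in range(0, img.shape[0], 8):
    for c in range(0, img.shape[1], 8):
        block = img[r:r + 8, c:c + 8] - 2 ** (m - 1)      # step 2: level shift
        coeff = dctn(block, norm="ortho")                  # step 3: forward 2D DCT
        quant = np.round(coeff / step)                     # step 4: quantize
        # Decoder side: de-quantize, inverse DCT, undo the level shift.
        rec[r:r + 8, c:c + 8] = idctn(quant * step, norm="ortho") + 2 ** (m - 1)

print("max abs reconstruction error:", np.max(np.abs(rec - img)))
```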

III. WAVELET BASED COMPRESSION

Since the input image needs to be divided into blocks in DCT based compression [1], correlation across the block boundaries is not eliminated. This results in blocking artifacts, particularly at low bit rates. In wavelet coding, in contrast, there is no need to block the input image, and its basis functions have variable length; hence wavelet coding schemes at higher compression avoid blocking artifacts. The 9/7-tap biorthogonal filters [10], which produce floating point wavelet coefficients, are widely used in image compression techniques to generate the wavelet transform [4,5,6]. The wavelet coefficients are uniformly quantized by dividing by a user specified parameter and rounding off to the nearest integer. Typically, a large majority of coefficients with small values are quantized to zero by this step. The zeroes in the resulting sequence are run-length encoded, and Huffman and arithmetic coding are performed on the resulting sequence. The various subband blocks of coefficients are coded separately, which improves the overall compression [7]. If the quantization parameter is increased, more coefficients are quantized to zero, the remaining ones are quantized more coarsely, the representation accuracy decreases and the CR consequently increases.

A. Wavelet Based Coding Technique

Wavelet coding techniques are based on the idea that the coefficients of a transform which de-correlates the pixels of an image can be coded more efficiently than the original pixels themselves [7]. The computed transform converts a large portion of the original image to horizontal (H), vertical (V) and diagonal (D) decomposition coefficients with zero mean and Laplacian-like distribution. The main difference between the wavelet-based and DCT-based transform coding systems is the omission of the transform coder's sub-image processing stages. Because wavelet transforms are both computationally efficient and inherently local (i.e. their basis functions are limited in duration), subdivision of the original image is not required, and the removal of the subdivision step eliminates the blocking artifact. The main steps of the wavelet encoder are shown in Fig. 2 below.

Figure 2. Wavelet Encoder - Compression Steps (input image f(x,y), forward wavelet transform, quantizer, symbol encoder, compressed image data).

B. Two Dimensional (2D) Wavelet Transform


As the 1D wavelet transform does not serve the purpose by itself, the extension given by Mallat to the 2D wavelet transform proves more useful for image analysis. In 2D wavelet analysis, as in the one-dimensional case, a scaling function $\varphi(x, y)$ is defined such that

$\varphi(x, y) = \varphi(x)\,\varphi(y)$   (1)

where $\varphi(x)$ is a one-dimensional scaling function. Let $\psi(x)$ be the one-dimensional wavelet associated with the scaling function. Then the three 2D wavelets are defined as

$\psi^{H}(x, y) = \psi(x)\,\varphi(y)$   (2)
$\psi^{V}(x, y) = \varphi(x)\,\psi(y)$   (3)
$\psi^{D}(x, y) = \psi(x)\,\psi(y)$   (4)

where H, V and D stand for horizontal, vertical and diagonal respectively. Figs. 3 and 4 show the 2D wavelet transform decomposition of an image. The 2D multi-resolution analysis (MRA) decomposition is completed in two steps. First, using $\varphi(x)$ and $\psi(x)$ in the x direction, f(x, y) (an image) is decomposed into two parts, a smooth approximation and a detail. Next, the two parts are analyzed in the same way using $\varphi(y)$ and $\psi(y)$ in the y direction. As a result, four channel outputs are produced: one channel is A1 f(x, y), the level one smooth approximation of f(x, y), obtained through $\varphi(x)\varphi(y)$ processing, and the other three channels are D(H)1 f(x, y), D(V)1 f(x, y) and D(D)1 f(x, y), the details of the image. Level two results are obtained after decomposing A1 f(x, y) progressively. The algorithm of 2D wavelet decomposition and reconstruction is the same as that for the 1D wavelet; Mallat's pyramid scheme is also used in this case. Fig. 3 shows a level three wavelet decomposition and Fig. 4 a multilevel wavelet filter decomposition.

Figure 3. Level three decomposition in DWT

Figure 4. Multilevel wavelet filter decomposition
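As an illustration of the separable 2D DWT of (1)-(4), one decomposition level can be computed with PyWavelets; 'bior4.4' is assumed here as PyWavelets' 9/7-type biorthogonal filter pair, and the test pattern is arbitrary.

```python
import numpy as np
import pywt

x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
img = np.cos(8 * np.pi * x) + np.cos(8 * np.pi * y)   # simple test pattern

# One level of the separable 2D DWT: approximation A1 plus the horizontal,
# vertical and diagonal detail channels.
A1, (H1, V1, D1) = pywt.dwt2(img, "bior4.4")
print(A1.shape, H1.shape, V1.shape, D1.shape)  # each roughly half-size in both directions
```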


C. Algorithm for Wavelet Transform Based Encoding


1. The first step of the wavelet transform encoding process is to level shift the pixels of the image by subtracting 2^(m-1), where 2^m is the number of gray levels in the image.
2. The one-dimensional DWT of the rows and the columns of the image is then computed.
3. After the DWT has been computed, the total number of transform coefficients is equal to the number of samples in the original image, but the important visual information is concentrated in a few coefficients.
4. The final step of the encoding process is to code the quantized coefficients arithmetically on a bit plane basis.
5. After decoding the arithmetically coded coefficients, a user-selected number of the original image's subbands are reconstructed.
6. The de-normalized coefficients are then inverse transformed and level shifted to yield an approximation of the original image.
7. The decoding process simply inverts the operation of encoding.
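The following hedged sketch covers only the transform-plus-uniform-quantization part of such an encoder with PyWavelets (the arithmetic bit-plane coding of steps 4-5 is omitted); the image, wavelet name and quantization step are placeholder assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
img = rng.normal(size=(256, 256))   # stand-in for a level-shifted image

# Multi-level 2D DWT (3 levels, 9/7-type biorthogonal filters).
coeffs = pywt.wavedec2(img, "bior4.4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Uniform quantization by a user-specified step; most small coefficients become zero,
# which is what the later entropy coding exploits.
step = 2.0
q = np.round(arr / step)
print("fraction of zero coefficients:", np.mean(q == 0))

# Decoder: de-quantize and inverse transform, giving an approximation of the image.
rec = pywt.waverec2(pywt.array_to_coeffs(q * step, slices, output_format="wavedec2"),
                    "bior4.4")
print("MSE of approximation:", np.mean((rec - img) ** 2))
```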

IV. THE SPIHT CODING

The SPIHT, an example of a progressive image compression algorithm, is an extension of Shapiro's EZW method [11]. The SPIHT algorithm uses the 9/7-tap biorthogonal filter in the DWT [10]. The fully embedded nature of the output bit stream makes SPIHT an excellent choice for progressive transmission. SPIHT uses three principles, namely: 1) exploitation of the hierarchical structure of the wavelet transform, by using a tree-based arrangement of the coefficients; 2) partial ordering of transformed coefficients by magnitude, with the ordering data not explicitly transmitted but recalculated by the decoder; and 3) ordered bit plane transmission of refinement bits for the coefficient values [9]. This leads to a compressed bitstream in which the most important coefficients are transmitted first, the values of all coefficients are progressively refined, and the relationship between coefficients representing the same location at different scales is fully exploited.

A. The SPIHT Algorithm

It is essentially a wavelet transform-based embedded bit-plane encoding technique. It partitions transformed coefficients into spatial orientation tree sets, as shown in Fig. 5, based on the structure of the multi-resolution wavelet decomposition [11]. In SPIHT, to take advantage of the self-similarity among wavelet coefficient magnitudes at different scales, the coefficients are grouped into tree structures called zerotrees. The organization of wavelet coefficients into a zerotree is based on relating each coefficient at a given scale (parent) to a set of four coefficients with the same orientation at the next finer scale (children). Zerotrees allow the prediction of insignificance of the coefficients across scales (i.e. if the parent is insignificant with respect to a given threshold, its children are also likely to be insignificant) and represent this efficiently by coding the entire tree at once. The SPIHT algorithm transmits the wavelet coefficients in bit-plane order, with the most significant bit plane first. For each bit plane there are two passes. In the first pass, called the dominant pass, coefficients which are significant with respect to the current threshold are found and coded using the set partitioning method. In the second pass, the subordinate pass, the precision of all previously significant coefficients is increased by sending the next bit from the binary representation of their values. Such refinement allows for progressive-approximation quantization and produces a fully embedded code, i.e., the transmission of the encoded bit stream can be stopped at any point and a lower rate image can still be decompressed and reconstructed. Additionally, a target bit rate or target distortion can be met exactly.

Figure 5. Parent-Children Relationship of the SPIHT algorithm

V. PERFORMANCE EVALUATION PARAMETERS

The main parameters used to measure the performance of a compression algorithm are the CR, MSE, PSNR, CoC and the bit rate. The CR is defined as CR = (size of original image in bits) / (size of compressed image in bits). For a gray scale image, n is 8. The MSE of the reconstructed image is defined as:

$\mathrm{MSE} = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \big[ f(x,y) - \hat{f}(x,y) \big]^{2}$   (5)

where f(x,y) is the original and $\hat{f}(x,y)$ the reconstructed image pixel value, and M x N is the size of the n-bit image (bits per pixel). For each filtering operation, the ability to reduce the noise is measured by the PSNR, given by:

$\mathrm{PSNR} = 20 \log_{10} \frac{2^{n}-1}{\sqrt{\mathrm{MSE}}} \ \mathrm{dB}$   (6)

Another important parameter to judge the quality of an image is the correlation coefficient (CoC), defined below, which correlates the original and the reconstructed pixels:

$\mathrm{CoC} = \dfrac{\sum_{x=1}^{M} \sum_{y=1}^{N} f(x,y)\, \hat{f}(x,y)}{\sqrt{\; \sum_{x=1}^{M} \sum_{y=1}^{N} f(x,y)^{2} \; \sum_{x=1}^{M} \sum_{y=1}^{N} \hat{f}(x,y)^{2} \;}}$   (7)
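A small sketch of the quality metrics (5)-(7) as reconstructed above; note that this CoC is a normalized cross-correlation without mean removal, matching the reconstructed form of (7), and the images are synthetic placeholders.

```python
import numpy as np

def mse(f, frec):
    return np.mean((f.astype(float) - frec.astype(float)) ** 2)

def psnr(f, frec, n_bits=8):
    return 20.0 * np.log10((2 ** n_bits - 1) / np.sqrt(mse(f, frec)))

def coc(f, frec):
    f = f.astype(float)
    frec = frec.astype(float)
    return np.sum(f * frec) / np.sqrt(np.sum(f ** 2) * np.sum(frec ** 2))

rng = np.random.default_rng(6)
orig = rng.integers(0, 256, size=(256, 256))
recon = np.clip(orig + rng.normal(scale=5.0, size=orig.shape), 0, 255)
print(f"MSE={mse(orig, recon):.2f}  PSNR={psnr(orig, recon):.2f} dB  CoC={coc(orig, recon):.6f}")
```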

VI. RESULTS AND DISCUSSION


In the present performance analysis, we have taken three cases of different compression algorithms as given above and compared the results at various CRs from 10 to 50 and bpp from 0.8 to 0.15. We considered three types of test images for the case study i.e. Ultrasound (US), MRI and the X-ray images. All the images are 8-bit images (gray scale). The performance parameters of the compression algorithms based on the CR, MSE, PSNR, CoC and bpp are presented in the tabular and the graphical form given through Table 1, Table 2, Table 3, Fig.6, Fig.7, Fig.8, Fig.9 and Fig.10. Among the three cases of study, as the CR increases from 10 to 50 and the bit rate (bpp) decreases from 0.8 to 0.15, the MSE


increases rapidly in the case of JPEG and the wavelet transform, whereas in the case of SPIHT it increases slowly and remains the lowest of the three (e.g. 604.18 for the US image). For the ultrasound image, at CR 50 and 0.15 bpp, the MSE for JPEG (1450.54) is almost three times that of SPIHT (604.18), and for the wavelet transform it is 1357.93. Similarly, for the MRI and X-ray images the MSE is lower for SPIHT and higher for JPEG and the WT. For the PSNR, at CR 50 and 0.15 bpp it is 20.32 dB for SPIHT, compared with 15.92 dB for JPEG and 16.80 dB for the wavelet transform for the US image, and similarly for the other case studies. The CoC values for SPIHT are also higher than for the other cases; for example, at 0.15 bpp the CoC is 0.998291 for SPIHT, 0.977792 for the WT and 0.974651 for JPEG. In all three cases (US, X-ray and MRI), the MSE is minimum and the CoC and PSNR are maximum (which is required) for SPIHT as compared to JPEG and the wavelet transform. However, the performance of the WT is better than that of JPEG in all cases and for all the IQM parameters. From the mean opinion score analysis, the visible image quality is superior for SPIHT as compared to the other two cases. Also, as the CR increases and the bit rate decreases, the change in performance quality (subjective and objective) is gradual and consistent for SPIHT, decays somewhat faster for the WT, and decays rapidly for JPEG. Hence, in all three case studies (JPEG, WT and SPIHT), the SPIHT results outperform the JPEG and wavelet transform based compression results.

TABLE I : MSE & PSNR PARAMETERS (FOR US IMAGE)

S.No.  CR       bpp   MSE: JPEG  Wavelet  SPIHT   PSNR (dB): JPEG  Wavelet  SPIHT   CoC: JPEG  Wavelet   SPIHT
1      10.0007  0.80  81.90      46.38    23.91   29.00            31.47    34.35   0.991482   0.996570  0.999938
2      20.0028  0.40  218.74     129.28   68.89   24.73            27.02    29.75   0.987839   0.993977  0.999490
3      30.7692  0.26  390.06     260.50   136.19  22.22            23.97    26.79   0.985439   0.990814  0.999118
4      40.0120  0.20  762.86     712.36   336.36  19.31            19.60    22.86   0.982209   0.988557  0.998696
5      50.3547  0.15  1450.54    1357.93  604.18  15.92            16.80    20.32   0.974651   0.977792  0.998291

TABLE II : MSE & PSNR PARAMETERS (FOR MRI IMAGE)

S.No.  CR       bpp   MSE: JPEG  Wavelet  SPIHT   PSNR (dB): JPEG  Wavelet  SPIHT   CoC: JPEG  Wavelet   SPIHT
1      10.0007  0.80  13.32      12.75    8.88    36.89            37.08    38.65   0.988185   0.999719  0.999974
2      20.0028  0.40  20.61      19.89    13.62   34.99            35.15    36.79   0.986110   0.997691  0.999916
3      30.7692  0.26  29.81      30.36    17.98   33.38            33.31    35.58   0.984367   0.994666  0.999825
4      40.0120  0.20  56.40      40.20    26.21   30.62            32.09    33.95   0.982212   0.993504  0.999712
5      50.3547  0.15  100.80     52.71    34.11   28.10            30.91    32.80   0.981262   0.978869  0.999596

TABLE III : MSE & PSNR PARAMETERS (FOR X-RAY IMAGE)

S.No.  CR       bpp   MSE: JPEG  Wavelet  SPIHT   PSNR (dB): JPEG  Wavelet  SPIHT   CoC: JPEG  Wavelet   SPIHT
1      10.0007  0.80  42.51      36.48    26.83   31.85            32.51    33.85   0.996208   0.999114  0.999884
2      20.0028  0.40  63.04      67.40    46.65   30.14            29.84    31.44   0.995283   0.998702  0.999772
3      30.7692  0.26  80.46      86.68    62.89   29.08            28.75    30.15   0.994421   0.998118  0.999675
4      40.0120  0.20  109.83     110.88   85.19   27.72            27.68    28.83   0.993569   0.997735  0.999584
5      50.3547  0.15  133.87     127.92   99.80   26.00            26.50    28.14   0.992873   0.987160  0.999500

Figure 6. JPEG Compression Results (US Image): (a) Original US Image (500x500), (b) Compressed Image at CR=10, (c) Compressed at CR=30, (d) Compressed at CR=50.


Figure 7. Wavelet Compression Results (US Image): (a) One level wavelet decomposition, (b) Two level decomposition, (c) Compressed Image (CR=30), (d) Compressed Image (CR=50).

Figure 8. SPIHT Compression Results (US Image): (a) Compressed at CR=10, (b) Compressed at CR=20, (c) Compressed at CR=30, (d) Compressed at CR=50.

Figure 9. Histogram Analysis of the reconstructed US image: (a) Histogram plot of the original US image, (b) Histogram plot of the reconstructed US image at CR=20.0028.

Figure 10. Comparative Graphical Results (US Image): (a) CR vs MSE for JPEG, Wavelet & SPIHT, (b) CR vs PSNR for JPEG, Wavelet & SPIHT, (c) CR vs CoC for JPEG, Wavelet & SPIHT, (d) PSNR vs Image Sequence for JPEG, Wavelet & SPIHT.

VII. CONCLUSIONS

What truly matters, of course, is the quality of the reconstructed image after compression as judged by experts in specific tasks such as medical image diagnosis. The results obtained here clearly show that the SPIHT based algorithm gives better performance than DCT and DWT in terms of mean and maximum pixel differences, performance and transmission parameters, and visual image quality at higher CRs. Therefore, because of its markedly better performance than the DCT and DWT algorithms for medical image compression, the SPIHT algorithm may be very useful in practical implementations of teleradiology, telemedicine and PACS. However, current trends in medical image compression are moving towards context based compression because of the diagnostic importance and the legal implications of the medical image data to be compressed and transmitted efficiently.

REFERENCES
[1]. Wallace GK, The JPEG still picture compression standard, Comm of the ACM, vol. 34, pp.30-44, 1991. [2]. Yung-Gi Wu, Medical image compression by sampling DCT coefficients, IEEE Transactions on Information Technology in Biomedicine, Volume:6, Issue 1, Pages: 86-94. March 2002. [3]. Ahn C.B., Kim I.Y., Han S.W.,Medical Image Compression Using JPEG Progressive Coding, IEEE Conf. on Nuclear Science Symposium and Medical Imaging ., pp. 13361339, 31 Oct.-6 Nov. 1993. [4]. Yelland M.R., Aghdasi F., Wavelet transform for medical image compression, IEEE AFRICON, Vol.1, pp. 303308,28 Sept.-1 Oct. 1999. [5]. Yao-Tien Chen, Din-Chang Tseng, Pao-Chi Chang, Wavelet-based medical image compression with adaptive prediction, IEEE Proceedings of International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS05), Pages:825 828, 13-16 Dec. 2005. [6]. Da-Zeng Tian, Ming-Hu Ha, Applications of wavelet transform in medical image processing, IEEE Proc. of Int. Conf. on Machine Learning and Cybernetics, Volume 3, pp.1816 1821, 26-29 Aug. 2004. [7]. Yue Wang, Huao Li, Jianhua Xuan, Lo, S.-C.B., Mun, S.K., Modeling of wavelet coefficients in medical image compression, Proceedings of International Conference on Image Processing, Vol.1, Pages:644 647, 26-29 Oct. 1997.

[8]. Said A, Pearlman WA, lmage compression using the spatial orientation tree, IEEE Int. Symp on Circuits and System,s pp.279 -282, May 1993. [9]. Ali M., Medical image compression using set partitioning in hierarchical trees for (military) telemedicine applications, IEE Seminar on Time-scale and Time-Frequency Analysis and Applications, Pages: 22/1 22/5, 29 Feb. 2000. [10]. Antonini M, Barlaud M, Mathieu P, Daubechies I, Image coding using wavelet transform, IEEE Trans Image Proc. Vol 1, pp. 205 -220, 1992. [11]. Shapiro JM, Embedded image coding using zerotrees of wavelet coefficients, IEEE Trans Signal Proc. vol. 41, pp.3445-3462, 1993.



Scene based non uniformity correction algorithm by equalizing output of different detectors
Parul Goyal
Department of Electronics & Communication Engineering Uttaranchal Institute of Technology Dehradun, India Email parulgoyal1973@gmail.com

Abstract-Infrared imaging systems are widely used for a variety of applications such as astronomy, defence, surveillance. Many infrared detectors are based on Infrared Focal Plane Array technology. A problem of this technology is that every detector in the array can have a different response to the same stimulus causing fixed pattern noise in the resulting images. To obtain a noise-free image, the nonuniformity has to be removed. Scene based NUC algorithm to remove non uniformity by equalizing output of different detectors is proposed in this paper. It matches pixels exposed to the same information in an IR video sequence to estimate the non-uniformity correction parameters. The proposed technique uses masks during registration and estimation process. The registration mask is used to exclude the contribution of the misleading areas such as occlusions, local motions and smooth regions from the motion estimation. The estimation mask prevents estimating NUC parameters in local motion or occlusion areas, where the motion is not able to produce correct pixel correspondences. The proposed algorithm was applied on a video sequence with simulated FPN as well as on a video sequence with real FPN. Results proved that reduction in FPN is achieved. Keywords-Algorithm, Correction, Detectors, IRFPA, Motion, NUC

I. INTRODUCTION

A variety of non-uniformity correction (NUC) techniques have been proposed. The simplest and most accurate NUC procedures use uniform blackbody infrared sources as a reference to estimate detector parameters and compensate for the non-uniformity. IRFPA sensors are commonly characterized by the following linear model:

$y_{ij} = g_{ij}\, x_{ij} + o_{ij}$   (1)

where $x_{ij}$, $y_{ij}$, $g_{ij}$ and $o_{ij}$ are the incident radiance, output, gain and offset of a detector (i, j), respectively. A two-point calibration technique that uses two different temperature reference targets to calculate the gain and offset parameters of each detector is also used. Methods based on temperature references require interrupting vision while the detector array is calibrated, which may be undesirable in many applications. Moreover, since the non-uniformity tends to drift over time as a function of ambient temperature, the NUC has to be performed on a regular basis, which makes the problem of the vision interruption even worse. Therefore, considerable research has been focused on developing adaptive scene-based NUC methods that need no temperature reference targets.
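A small sketch of the two-point (blackbody) calibration described above for the linear model (1), on simulated per-detector gains and offsets; the array size, radiance levels and noise-free sensor model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (128, 160)                        # illustrative detector array size
gain = rng.uniform(0.95, 1.05, shape)     # per-detector gain non-uniformity
offset = rng.uniform(-0.05, 0.05, shape)  # per-detector offset non-uniformity

def sensor(x):
    """Linear IRFPA model of (1): y = g*x + o for a uniform irradiance x."""
    return gain * x + offset

# Two uniform blackbody references at low and high irradiance levels.
x_lo, x_hi = 0.2, 0.8
y_lo, y_hi = sensor(x_lo), sensor(x_hi)

# Two-point calibration: solve y = g*x + o per detector.
g_est = (y_hi - y_lo) / (x_hi - x_lo)
o_est = y_lo - g_est * x_lo

# Correct a new frame: x_hat = (y - o) / g.
y_new = sensor(0.5)
x_hat = (y_new - o_est) / g_est
print("max residual non-uniformity:", np.max(np.abs(x_hat - 0.5)))
```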

II. PROPOSED ALGORITHM

The outline of the proposed algorithm is shown in Fig. 1. It is composed of a number of steps. Preliminary motion estimation is performed on subsequent frames, Fig. 1(a). However, the preliminary motion estimation may contain errors due to possible occlusions and local motions that usually don't obey the motion model. Moreover, in the presence of FPN, smooth areas in which the FPN variance is stronger than the signal variance may bias the motion estimation towards zero motion, since the best match in these regions is obtained for zero motion. Therefore, in order to achieve a more accurate registration, a registration mask is created, Fig. 1(b). It is then used to exclude the problematic regions from the motion estimation. Once the improved registration is achieved, it provides more reliable pixel correspondences in regions that obey the motion model, Fig. 1(c). Now the NUC parameters can be estimated based on these pixel correspondences. It is important to note that we do include smooth regions in the NUC parameter estimation process, since smooth regions obey the motion model. All regions that participate in the estimation process are identified by a parameter estimation mask, Fig. 1(d). Then the NUC parameter estimation is performed iteratively by the LMS algorithm.

Figure 1: Proposed algorithm outline

A. Robust motion estimation

In this section we describe the robust motion estimation algorithm in detail. The preliminary motion estimation, as well as the improved motion estimation, is performed using the registration algorithm, according to which the transformation between two consequent images Ik(i, j) and Ik+1(i, j) is modeled as follows:

(2)

where a and b are the horizontal and vertical shifts, the third parameter is the rotation angle, and k is the frame number. These parameters are estimated by minimizing the following cost function:

(3) where the summation is over the set of all pixel indices in the frame. As noted above, the minimization of J may lead to registration errors because of local motions, occlusions and FPN in smooth areas. To neutralize the effect of these areas, we propose a modified cost function:


J′(a, b, θ) = Σ_{(i,j)∈Ω′} [ Ĩ_{k+1}(i, j) − I_k(i, j) ]²                             (4)

Here the set of summation pixel indices Ω in (3) is replaced by a set Ω′, in which local motion regions and smooth regions are excluded. These regions are identified by a registration mask M_r, which is obtained as follows:

M_r(i, j) = M_s(i, j) · M_l(i, j)                                                     (5)

Here M_s identifies smooth areas of the image, and M_l identifies occlusions and local motion areas. An example of M_s and M_l obtained in a video sequence used in our simulations is shown in Fig. 2. Pixels with values equal to 0 belong to regions that M_s and M_l intend to exclude, whereas the rest of the image pixels receive the value of 1. The generation of M_s and M_l is based on the motion estimation results and is performed as follows. First, a preliminary motion is estimated between two subsequent frames I_{k+1} and I_k by minimizing the cost function J of (3). Then, we calculate a difference image D_k as follows:

D_k(i, j) = | Ĩ_k(i, j) − I_k(i, j) |                                                 (6)

where Ĩ_k is a warped version of I_{k+1} in the coordinate system of I_k. On one hand, since the local motion and occlusion regions do not obey the global model, they are expected to have high values in D_k. On the other hand, the smooth regions have low values in D_k. Therefore, local motions and occlusions are excluded by applying a high threshold equal to 1.5σ_k:

M_l(i, j) = 0 if D_k(i, j) > 1.5σ_k, and 1 otherwise                                  (7)

whereas smooth regions are excluded by applying a low threshold equal to 0.5σ_k:

M_s(i, j) = 0 if D_k(i, j) < 0.5σ_k, and 1 otherwise                                  (8)

Here σ_k denotes a robust estimate of the noise, obtained as follows:

σ_k = 1.4826 · MAD(D_k)                                                               (9)

where MAD denotes the median absolute deviation, used for robust standard deviation estimation:

MAD(D_k) = median_{(i,j)} | D_k(i, j) − median(D_k) |                                 (10)

The resulting motion estimation is expected to be robust to errors that may be contributed by the presence of local motion, occlusion and smooth areas in the video sequence.
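A minimal NumPy sketch of the mask construction just described, assuming a warping step has already resampled I_{k+1} into the coordinate system of I_k; the 1.5σ_k and 0.5σ_k thresholds follow the text, while the MAD-to-standard-deviation scaling constant is an assumption.

```python
import numpy as np

def registration_mask(I_k, I_k1_warped):
    """Build the smooth-area mask Ms, the local-motion/occlusion mask Ml and
    the combined registration mask Mr from the difference image Dk.
    Excluded pixels are 0 and kept pixels are 1, as in Fig. 2."""
    D = np.abs(I_k1_warped - I_k)                          # difference image Dk
    mad = np.median(np.abs(D - np.median(D)))              # median absolute deviation
    sigma = 1.4826 * mad                                   # robust noise estimate (scaling assumed)
    Ml = (D <= 1.5 * sigma).astype(np.uint8)               # drop local motion / occlusions (high residual)
    Ms = (D >= 0.5 * sigma).astype(np.uint8)               # drop smooth regions (very low residual)
    Mr = Ml * Ms                                           # combined mask: elementwise AND of the two
    return Ms, Ml, Mr
```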

Figure 2: (a) original image with FPN (b) smooth areas mask M_s (c) local motions and occlusions mask M_l (d) combined mask M_r; the black areas in each mask denote the excluded pixels

B. NUC parameters estimation

In this section we describe the proposed NUC parameter estimation process. As already noted above, a simple and accurate characterization of the sensor response is based on the following linear model:

y_ij = g_ij · x_ij + o_ij                                                             (11)

where x_ij, y_ij, g_ij, o_ij are the incident radiance, sensor response, gain and offset of a detector at position (i, j) in the array, respectively. This model may be equivalently rewritten as:

x_ij = w_ij · y_ij + b_ij                                                             (12)

where the parameters w_ij and b_ij are related to the gain and offset g_ij and o_ij as:

w_ij = 1 / g_ij,   b_ij = − o_ij / g_ij                                               (13)

The parameter estimation is performed by minimizing the following function:


where x(i, j) and x̃(i, j) are corresponding pixels from two subsequent frames, receiving identical incident radiance. According to the LMS algorithm proposed in [9] for the parameter estimation, w and b are updated as follows:

(14)

where μ is a fixed learning rate parameter, which controls the convergence rate of the algorithm. Smaller values of μ give a more accurate but slower convergence. As mentioned above, the motion estimation may produce incorrect pixel correspondences in regions of local motion or occlusions. Wrong pixel correspondences may introduce errors into the estimation of w and b, which makes the parameter estimation in these pixels undesirable. In order to avoid parameter updating in these areas, we use a mask M_p obtained as follows:

M_p(i, j) = 0 if D_k′(i, j) > 2σ_k, and 1 otherwise                                   (15)

where D_k′ is a new difference frame generated after the improved motion estimation:

D_k′(i, j) = | Î_k(i, j) − I_k(i, j) |                                                (16)

where Î_k is obtained by warping the frame I_{k+1} to the coordinate system of I_k. Local motions and occlusions produce high values in the difference frame, therefore a threshold of 2σ_k is chosen. An example of the estimation mask is shown in Fig. 3.
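A hedged sketch of the masked parameter update of Section II-B. The exact update rule of (14) is not recoverable from the text, so the standard LMS gradient step is used here; x_ref stands for the registered reference values and Mp suppresses the update in unreliable pixels.

```python
import numpy as np

def lms_update(w, b, y, x_ref, Mp, mu=0.0025):
    """One masked LMS iteration: the corrected output x_hat = w*y + b is pulled
    towards the registered reference x_ref, but only where the parameter
    estimation mask Mp allows it (Mp = 0 suppresses the update)."""
    x_hat = w * y + b                 # corrected frame with the current parameters
    e = (x_ref - x_hat) * Mp          # masked estimation error
    w = w + mu * e * y                # LMS step for the gain-related term
    b = b + mu * e                    # LMS step for the offset-related term
    return w, b
```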

Figure 3: Parameter estimation mask M_p. Zero-valued areas (black areas) correspond to local motions and occlusions, in which the parameter update is suppressed

III. RESULTS

In order to evaluate the proposed algorithm we use the video sequence "coastguard", which consists of 300 frames and was duplicated back and forth to create a video sequence of 3300 frames. The simulated FPN was obtained by generating uniformly distributed gains g ~ U[0.95, 1.05] and offsets o ~ U[−0.05, 0.05], where U[a, b] denotes the uniform distribution on the interval [a, b]. Before correcting the NUC parameters, the video sequence was linearly adjusted by bringing the maximum value to 1. The results of the proposed algorithm were compared to the results of the ANN algorithm [9]. The last frame from the original image sequence is shown in Fig. 4(a). A version of the same frame containing synthetic FPN is shown in Fig. 4(b). The frame corrected by the proposed algorithm is shown in Fig. 4(d), whereas the frame corrected by the ANN algorithm is shown in Fig. 4(e). It can be noticed that the ANN algorithm produces blurring and ghosting effects in the area surrounding the boat and behind the boat, which are successfully eliminated by the proposed algorithm, whereas in smooth regions, such as the water and coastline, similar correction is attained by both algorithms.

Figure 4: (a) Original image (b) Noisy image (c) Real IR image with FPN (d) Corrected image with μ = 0.0025 (e) Corrected image with the ANN algorithm using μ = 0.00035 and a block of 8 (f) Corrected image with the proposed algorithm, μ = 0.02


In Fig. 5, we compare the results of both algorithms in terms of SNR improvement as a function of frame number. Both algorithms were supplied with the same simulated FPN, corresponding to an initial SNR of 24 dB. The proposed algorithm managed to overcome the changes in the image (mainly resulting from local motion), achieving most of the correction after 1500 frames and a total SNR improvement of 8 dB. As can be seen in Fig. 5, the ANN achieved a poorer SNR improvement of 1.5 dB, since it was unable to adjust the NUC parameters very well in the regions that contain edges.

Figure 5: SNR results for the synthetic video sequence corrected with the proposed algorithm (blue line), the motion algorithm with no masks (red dotted line) and the ANN algorithm (green dashed line)

In order to examine the effectiveness of using masks in the proposed algorithm, a version of the proposed approach without masks was also examined. In Fig. 5, one can see that the algorithm without masks converges faster, achieving better results during the first 1000 frames, but at the same time it is much more sensitive to local motions entering the scene. The algorithm without masks achieved a 5.5 dB SNR improvement, which is approximately 2 dB less than what was obtained by the proposed algorithm. The proposed algorithm was also applied to a real data sequence of 1000 frames obtained from an 8-12 micron IR camera. Frame 1000 from the original image sequence is displayed in Fig. 4(c), versus the same frame after correction by the proposed algorithm, shown in Fig. 4(f); the noise lines appearing throughout Fig. 4(c) and the strong noise in the middle of the frame vanish in Fig. 4(f), giving a smooth clean image, especially in the lower part of the image.

IV. CONCLUSION

A scene-based NUC algorithm that removes non-uniformity by equalizing the outputs of different detectors is proposed in this paper. It matches pixels exposed to the same information in an IR video sequence to estimate the non-uniformity correction parameters. The proposed technique uses masks during the registration and estimation processes. The registration mask is used to exclude the contribution of misleading areas such as occlusions, local motions and smooth regions from the motion estimation. The estimation mask prevents estimating NUC parameters in local motion or occlusion areas, where the motion is not able to produce correct pixel correspondences. The proposed algorithm was applied on a video sequence with simulated FPN as well as on a video sequence with real FPN. The results show that a reduction in FPN is achieved.

REFERENCES

[1] Douglas L. Jones, The LMS Adaptive Filter Algorithm, Connexions website, http://cnx.org/content/m11829/latest/, Feb 2007.
[2] V. K. Ingle and J. G. Proakis, Digital Signal Processing using Matlab, Thomson Learning, 2007.
[3] R. J. Schilling and S. L. Harris, Fundamentals of Digital Signal Processing using Matlab, Thomson Learning, 2007.
[4] B. M. Ratliff, M. M. Hayat, and J. S. Tyo, Generalized algebraic scene-based non-uniformity correction algorithm, Journal of the Optical Society of America A, vol. 22, no. 2, pp. 239-249, 2005.
[5] L. Fausett, Fundamentals of Neural Networks: Architecture, Algorithms, and Applications, Pearson Education, 2005.
[6] S. Lim and A. El Gamal, Gain FPN Correction via Optical Flow, IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 4, April 2004.
[7] B. M. Ratliff, M. M. Hayat, and R. C. Hardie, An algebraic algorithm for non-uniformity correction in focal-plane arrays, Journal of the Optical Society of America A, vol. 19, no. 9, pp. 1737-1747, 2002.
[8] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, Pearson Education, 2002.
[9] E. Vera and S. Torres, Fast Adaptive Non-uniformity Correction for Infrared Focal-Plane Array Detectors, EURASIP Journal on Applied Signal Processing, 2005:13, pp. 1994-2004, 2005.
[10] B. M. Ratliff, M. M. Hayat, and R. C. Hardie, Algebraic scene-based non-uniformity correction in focal plane arrays, in Proceedings of SPIE: Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XII, G. C. Holst, ed., vol. 4372, pp. 114-124, Orlando, FL, Sept. 2001.
[11] A. K. Jain, Fundamentals of Digital Image Processing, PHI, 2001.
[12] S. N. Torres and M. M. Hayat, Kalman filtering for adaptive non-uniformity correction in infrared focal-plane arrays, Journal of the Optical Society of America A, vol. 20, no. 3, pp. 470-480, 2003.



Rain Rendering in Videos


Abhishek Kumar Tripathi
Department of Electronics and Electrical Communication Engineering Indian Institute of Technology Kharagpur 721302 INDIA Email: aktripathy@ece.iitkgp.ernet.in

Sudipta Mukhopadhyay
Department of Electronics and Electrical Communication Engineering Indian Institute of Technology Kharagpur 721302 INDIA Email: smukho@ece.iitkgp.ernet.in

Abstract - In this paper a novel, efficient, real-time and more realistic rain rendering algorithm is proposed. Rendering snow and fog in video is straightforward, but rain rendering requires knowledge of the weather conditions, the mechanics of falling raindrops, the raindrop shape distribution and various other parameters. Here a relation between the direction of a falling raindrop and the wind velocity is developed. In the proposed algorithm blurring and darkening effects are considered, which produce more realism. For the darkening effect an empirical expression is introduced. The proposed algorithm works only on intensity values without affecting chrominance values, which makes it fast and easy to implement. Here the density of rain, the direction and length of streaks, the size and shape of raindrops, and the ambient light can be configured. The proposed algorithm introduces the concepts of spatial independence and the visual persistence property. It uses a simple photometric model, together with the spatial independence and visual persistence properties of rain, which makes it useful for real-time applications. Simulation results show that the proposed algorithm is able to render rain in a realistic way and the output video looks natural.
Index Terms - Rain rendering, Newtonian mechanics, raindrop shape, outdoor vision, statistical modeling, rainfall.

I. INTRODUCTION

Rendering of rain in videos is an interesting task. A common approach to rain rendering deals with the modeling of particle systems [4], [10], [8]. We analyzed rain appearance in terms of time and space. Rain is a dynamic feature [6]; it is not easy to recognize rain by just looking at a single video frame. If we shuffle the video frames of a rain video with a static background, the resulting video looks as natural as the original. This shows the temporal independence between the rain frames [1]. One more interesting property of rain is that it gives positive fluctuations in the intensity values while the chrominance values remain unaffected [2]. Thus the proposed algorithm works on a single intensity plane rather than on all three red, green and blue planes, which reduces the time and complexity significantly. Here independent rain masks are generated, assuming that raindrops are uniformly distributed in space, and these masks are added independently and adaptively to the original rain-free video frames. Based on Newtonian dynamics [9], an expression has been derived which shows how the orientation of falling raindrops changes due to the wind. In the proposed algorithm the density of rain, raindrop size and shape, wind velocity, and streak size can be controlled. When rain starts, it darkens the environment due to clouds; such a darkening effect is simulated by a power law transformation. Here an empirical expression for obtaining the darkening effect is introduced. This expression adaptively darkens the current video frame according to its mean intensity. This darkening effect produces more visual realism. Rain rendering algorithms have a wide range of applications. They can be used in the entertainment industry, in game programming, and as a software tool. Several methods of rain rendering in videos have been developed in computer graphics, some of which are used in many software packages such as Maya. Recently, rain rendering algorithms have been developed by Starik and Werman [1]; Garg and Nayar [5]; Yang, Zhu, Meil, and Chen [11]; and Rousseau, Jolivet, and Ghazanfarpour [3]. Rain rendered videos can be used for research work on rain detection and removal. This paper is organized as follows: in Sec. II the statistical model of the raindrop is explained. In Sec. III a rain rendering algorithm is proposed and it is explained how this algorithm can be used for real-time application. In Sec. IV simulation results are shown and Sec. V concludes this paper.

II. STATISTICAL MODEL OF RAINDROP

A statistical model of a raindrop provides a deeper understanding of the raining effect. Here it is shown how the wind velocity changes the orientation of rain, how the shape of a drop varies according to its size, and what the size distribution of raindrops in space is.

A. Dynamics of raindrop

Here the relation between the wind velocity and the direction of the rainfall is shown. Consider a raindrop of radius R falling under gravity g with a velocity v, while the wind velocity is u. This raindrop is subjected to a buoyant force F_B = ρVg, where ρ is the density of air, V is the volume of the raindrop, and g is the acceleration due to gravity. A drag force F_d = 6πηRv, where η is the dynamic viscosity, will also act opposite to the direction of propagation.

Fig. 1: Free-body diagram of falling raindrop


If a_x and a_y are the accelerations acting in the horizontal and vertical directions respectively, then

Horizontal forces:  F_x = m·a_x = 6πηR(u − v_x)                                       (1)

Vertical forces:    F_y = m·a_y = mg − ρVg − 6πηR·v_y                                 (2)

From eq. (1),

dv_x/dt = (6πηR/m)(u − v_x)

∫₀^{v_x} dv_x / (u − v_x) = ∫₀^{t} (6πηR/m) dt

v_x = u [ 1 − e^{−6πηRt/m} ]                                                          (3)

From eq. (2),

dv_y/dt = g − ρVg/m − (6πηR/m)·v_y

∫₀^{v_y} dv_y / ( g − ρVg/m − (6πηR/m)·v_y ) = ∫₀^{t} dt

v_y = (m/6πηR)·( g − ρVg/m )·[ 1 − e^{−6πηRt/m} ]                                     (4)

The rainfall direction is θ = tan⁻¹( v_y / v_x ). From eq. (3) and eq. (4),

θ = tan⁻¹[ (m/6πηR)·( g − ρVg/m )·(1/u) ]                                             (5)

A free-falling object achieves its terminal velocity [9] when the net force on the object is zero, resulting in an acceleration of zero. Thus the raindrop achieves its terminal velocity when m·a_x = 0 and m·a_y = 0, which gives the terminal velocities as

v_x = u                                                                               (6)

v_y = ( mg − ρVg ) / 6πηR                                                             (7)
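A small sketch, assuming the Stokes-drag model adopted above, that evaluates the streak direction of Eq. (5) from the terminal velocities of Eqs. (6)-(7); the physical constants are illustrative SI values and are not taken from the paper.

```python
import math

def streak_angle(R, u, rho_air=1.2, rho_water=1000.0, eta=1.8e-5, g=9.81):
    """Direction of a falling drop (angle of the velocity vector measured from
    the horizontal wind axis), using the terminal velocities vx = u and
    vy = (m*g - rho_air*V*g) / (6*pi*eta*R) from Eqs. (5)-(7)."""
    V = (4.0 / 3.0) * math.pi * R ** 3          # drop volume
    m = rho_water * V                           # drop mass
    vy = (m * g - rho_air * V * g) / (6.0 * math.pi * eta * R)
    vx = u                                      # horizontal terminal velocity equals the wind speed
    return math.degrees(math.atan2(vy, vx))
```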
B. Shape and size of raindrop

Analysis shows that small raindrops are nearly spherical, and as the size of the drop increases it gets flattened at the bottom. Beard and Chuang [7] suggested a parametric equation which describes the shape of a raindrop as a weighted sum of cosines up to the tenth order:

r(θ) = a ( 1 + Σ_{n=0}^{10} c_n cos(nθ) )

where a is the radius of the undistorted sphere, θ is the polar angle of elevation with θ = 0 corresponding to the direction of the rainfall, and c_n are the shape coefficients [7] as given in Table I. The shapes of drops of various sizes (1-5 mm) are shown in Fig. 2.

Fig. 2: Shapes of raindrops

C. Visual effects

The visual appearance of rain consists of two effects: (1) a darkening effect, and (2) a blurring effect. When rain starts, it changes the lighting condition and the atmosphere gets darker. This darkening effect contracts the histogram of the image or the current video frame. This can be done by the power law transformation s = c·r^γ, where s is the output intensity, r is the input intensity, and c and γ are positive constants. An empirical expression for γ is introduced by us after analyzing various images, which is γ = 2.8194 + 13.4818·μ − 5.8635·μ², where μ is the mean intensity of the normalized intensity plane. The coefficients of the expression for γ were set experimentally. In addition, when rain starts, it causes some blurring [1] due to the motion and the lens effect of raindrops. This blurring effect can be produced by convolving the image with a blurring mask.

III. RAIN RENDERING ALGORITHM

Here the rain rendering algorithm is proposed. For each video frame an independent rain mask is generated and this mask is adjusted over the video frame by considering only the intensity plane, because raindrops give only a positive change in intensity values without affecting the chrominance values.

A. Photometric model

According to Starik and Werman [1], the change in intensity due to rain follows the assumption of inverse dependence of the rain colour intensity on the general image brightness, which means that for a dark background there should be more rain colour and less for a bright background. According to Garg and Nayar [2], the change in intensity due to rain is ΔY = βY + α, where 0 < β < 0.039 and α = τ·E_r, with τ = 1.18 ms and E_r the irradiance due to the raindrop. The proposed algorithm makes use of this simple linear constraint of the photometric model to save complexity and computational time. Here α is replaced by the intensity of rain.

B. Spatial Independence

Rain being a complex phenomenon, each raindrop appears to be independent of the other drops even within the same frame. To test this, a simple experiment is conducted on a real rain video. The rain mask is extracted using a temporal median filter. Each frame of the generated rain mask is divided into tiles and rearranged to form a new mask video. When this new mask is used on the rain-removed original video, the synthesized rain video cannot be distinguished from the original rain video. Hence, a rain mask frame can be divided into smaller tiles and these tiles can be tessellated to form a number of temporally independent rain mask frames.


TABLE I: Shape coefficients c_n for cosine distortion (table entries are c_n × 10^4)

a     | n=0  | n=1  | n=2   | n=3  | n=4 | n=5 | n=6 | n=7 | n=8 | n=9 | n=10
1 mm  | -28  | -30  | -83   | -22  | -3  | 2   | 1   | 0   | 0   | 0   | 0
2 mm  | -131 | -120 | -376  | -96  | -4  | 15  | 5   | 0   | -2  | 0   | 1
3 mm  | -282 | -230 | -779  | -175 | -21 | 46  | 11  | -6  | -7  | 0   | 3
4 mm  | -458 | -335 | -1211 | -227 | -83 | 89  | 12  | -13 | -21 | 1   | 8
5 mm  | -644 | -416 | -1629 | -246 | 176 | 131 | 2   | -18 | -44 | 9   | 14
6 mm  | -840 | -480 | -2034 | -237 | 297 | 166 | -21 | -19 | -72 | 24  | 23
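A short sketch that evaluates the Beard-Chuang contour r(θ) of Sec. II-B using the 2 mm row of Table I as reconstructed above; the 10^-4 scaling follows the table header and the sampling step is arbitrary.

```python
import math

# c_n coefficients for a 2 mm drop from Table I (entries are c_n * 1e4)
C_2MM = [-131, -120, -376, -96, -4, 15, 5, 0, -2, 0, 1]

def drop_radius(theta, a=1.0e-3, coeffs=C_2MM):
    """Distorted drop radius r(theta) = a*(1 + sum_n c_n*cos(n*theta)),
    with theta = 0 pointing along the fall direction."""
    distortion = sum(c * 1e-4 * math.cos(n * theta) for n, c in enumerate(coeffs))
    return a * (1.0 + distortion)

# sample the contour, e.g. for plotting the flattened-bottom shape
contour = [drop_radius(math.radians(d)) for d in range(360)]
```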

C. Visual Persistence

Since raindrops are randomly distributed in 3D space and fall at high velocity, their perspective projections between two consecutive frames form the rain streaks. Analysis of original rain videos shows that there is no correlation between the rain streaks in any two frames. Assuming the temporal independence property of rain, a stack of rain masks is prepared. Hence, for a video of N frames, a stack of N independent rain masks would be required. However, human visual memory is very short. Hence, it is experimentally validated that the human eye cannot perceive the difference if the rain mask is repeated after a certain interval. This property of visual perception can be used to our advantage in the preparation of the rain mask.

D. Real-Time application

Processing over one plane rather than all three planes reduces the complexity to one-third. Using the spatial independence property, the rain mask can be generated from a set of smaller tiles. These smaller tiles reduce the memory and complexity requirements. Using the visual persistence property of the eye, the same rain mask stack can be repeated without any visual degradation. This helps to reduce the computation and storage in mask generation. Once the stack of masks is prepared, the system only has to render these masks over each video frame, which makes it easy for real-time implementation.

E. Mask Generation

The steps for mask generation are as follows (a small sketch follows this list):
Step 1: Assign the wind velocity and rain density. Assign different values of raindrop intensities and different diameters. Depending on the values of the diameters, assign different shapes to the raindrops. Assign the length of the rain streaks.
Step 2: Generate a random binary matrix, of the size of the chosen tile, having values 1 and 0. Here 1 denotes the presence of a raindrop and 0 denotes its absence. The number of 1s in the mask is set according to the rain density. These 1s are uniformly randomly distributed. Generate raindrops centred on the 1s according to the assigned diameters and intensities.
Step 3: Motion blur this tile over a certain length and at a certain angle. This angle gives the direction of rain, which is computed according to the wind velocity. The length of a streak is the distance travelled by the drop during the exposure time, but here, for simplification, it is assumed that the length of the blurred pixels decides the length of the streak.
Step 4: Normalize this tile to [0, 1]. This normalized tile is our rain tile (RT).
Step 5: Repeat Steps 1 to 4 for each tile and generate an adequate number of independent rain tiles.
Step 6: Randomly tessellate all the generated rain tiles to form a rain mask (RM) of size M × N; the pixel value of this mask is the rain intensity.
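A minimal sketch of the tile-based mask generation of Steps 1-6, under simplifying assumptions: drops are single pixels, the motion blur is a purely vertical smear (i.e. the wind angle is ignored), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rain_tile(size=64, density=0.01, intensity=0.6, length=7):
    """One normalized rain tile (Steps 1-4): sprinkle drops as random ones,
    then smear them along the (here purely vertical) fall direction to form
    streaks; the smear stands in for the directional motion blur."""
    drops = (rng.random((size, size)) < density).astype(float) * intensity
    streaks = sum(np.roll(drops, s, axis=0) for s in range(length)) / length
    peak = streaks.max()
    return streaks / peak if peak > 0 else streaks      # normalize to [0, 1]

def rain_mask(frame_shape, n_tiles=8, tile=64):
    """Tessellate randomly chosen independent tiles into a full-frame
    rain mask RM (Steps 5-6); the pixel value is the rain intensity."""
    rows, cols = frame_shape
    tiles = [rain_tile(tile) for _ in range(n_tiles)]
    mask = np.zeros((rows, cols))
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            t = tiles[rng.integers(n_tiles)]
            mask[r:r + tile, c:c + tile] = t[:rows - r, :cols - c]
    return mask
```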

F. Rendering mask in video frame

The steps for rendering the rain mask in a video frame are as follows:
Step 1: Convert the RGB video frame into the YIQ colour space.
Step 2: Normalize the intensity plane (Y) and find the mean intensity value μ.
Step 3: Darken the intensity plane according to the power law transformation s = c·r^γ, where s is the output intensity, r is the input intensity, c = 1, and γ = 2.8194 + 13.4818·μ − 5.8635·μ², with μ the mean intensity of the normalized intensity plane.
Step 4: Blur this darkened, normalized intensity plane with a simple averaging filter (3 × 3 convolution mask), because falling raindrops produce a blurring effect. This blurred image is our new intensity image (Y1).
Step 5: For each pixel (i, j), compute a ΔY1 mask according to ΔY1(i, j) = β·(Y1(i, j) + RM(i, j)), where the first factor β is a constant and the second term (Y1(i, j) + RM(i, j)) gives the change in intensity due to rain as defined in Sec. III-A.
Step 6: Add this ΔY1 mask to the intensity image (Y1) to obtain the rain intensity image: Y = Y1 + ΔY1.
Step 7: Scale the range of this rain intensity image to [0, 255] and convert the YIQ image back into the RGB colour space.
Step 8: Repeat Steps 1 to 7 for each video frame.

IV. SIMULATION RESULTS

In our experiments all possible types of videos, i.e. videos with a static background and videos with a dynamic background, are used. All video results are given at http://smukho.googlepages.com/rainrenderingdemo. The proposed algorithm automatically adjusts the darkening effect by using an empirical equation. The direction as well as the amount of rain (light or heavy) are configurable. Results for different configurable parameters are shown in Fig. 3. The results obtained demonstrate that the proposed algorithm is able to produce realistic-looking rain videos. Rain rendering in static and dynamic videos is shown in Fig. 4 and Fig. 5 respectively.

V. CONCLUSION

In this paper, an algorithm to render artificial rain into a video is proposed by considering a statistical model of a raindrop. A relation has been developed which shows that the direction of the falling drops depends upon the size of the drop and the velocity of the wind. The proposed algorithm is configurable: the user can configure various parameters such as the intensity of rain, the density of rain, drop size and shape, wind velocity, ambient light, and streak size as per requirement, or otherwise can use the default values of these parameters for ease of rain rendering.


The proposed algorithm works only on the intensity plane rather than on all three planes, uses a simple photometric model, and uses the spatial independence and visual persistence properties, which reduces the rendering time and complexity. The low complexity makes the proposed algorithm useful for real-time applications. The proposed algorithm is useful for applications such as video editing, game programming, and film post-production. In future, our focus will be on various other effects of rain, such as reflections, dripping off object surfaces, water ripples and puddles.

Fig. 3: Simulation of the proposed algorithm (a) original rain video (b) rain rendered video (c) change of angle due to wind in the rain rendered video (d) light rain rendered (e) heavy rain rendered

Fig. 4: Rain rendering in a static background video (a) without rain (b) with rain

Fig. 5: Rain rendering in a dynamic background (Indian football match) video (a) without rain (b) with rain

REFERENCES

[1] S. Starik and M. Werman, Simulation of rain in videos, Proceedings of Texture: The 3rd International Workshop on Texture Analysis and Synthesis, France, pp. 95-100, 2003.
[2] K. Garg and S. K. Nayar, Vision and Rain, International Journal of Computer Vision, Vol. 75, No. 1, pp. 3-27, 2007.
[3] P. Rousseau, V. Jolivet, and D. Ghazanfarpour, Realistic real-time rain rendering, Computers and Graphics, Vol. 30, No. 4, pp. 507-518, 2007.
[4] P. Fearing, Computer modeling of falling snow, in Computer Graphics (SIGGRAPH Conference Proceedings), pp. 37-46, 2000.
[5] K. Garg and S. K. Nayar, Photorealistic rendering of rain streaks, Proceedings of ACM SIGGRAPH, Vol. 25, No. 3, pp. 996-1002, 2006.
[6] S. G. Narasimhan and S. K. Nayar, Vision and the Atmosphere, International Journal of Computer Vision, Vol. 48, No. 3, pp. 233-254, 2002.
[7] K. V. Beard and C. Chuang, A new model for the equilibrium shape of raindrops, Journal of the Atmospheric Sciences, Vol. 44, No. 11, pp. 1509-1524, 1987.
[8] G. Zhang, J. Vivekanandan, and E. Brandes, A Method For Estimating Rain Rate And Drop Size Distribution From Polarimetric Radar Measurements, IEEE Transactions on Geoscience and Remote Sensing, Vol. 39, No. 4, pp. 830-841, 2001.
[9] B. Crowell, Newtonian Physics, Light and Matter, Second Edition, 2000.
[10] W. T. Reeves, Particle systems - a technique for modeling a class of fuzzy objects, ACM Trans. Graph., Vol. 2, No. 2, pp. 91-108, 1983.
[11] Y. Yang, X. Zhu, J. Meil, and D. Chen, Design and real-time simulation of rain and snow based on LOD and fuzzy motion, Third International Conference on Pervasive Computing and Applications, Vol. 10, pp. 510-513, 2008.



Detection and Analysis of Macula in Retinal Images using Fuzzy and Optimization Based Hybrid Method
Sri Madhava Raja N, Kavitha G* and Ramakrishnan S# Department of Electronics and Instrumentation Engineering, St. Josephs College of Engineering, Chennai, India # Department of Instrumentation Engineering, * Department of Electronics Engineering, Madras Institute of Technology, Anna University Chennai, India

ramki@mitindia.edu,ramki@annauniv.edu

Abstract - In this work, a combined approach to detect the optic disc and macula in normal and abnormal images is proposed. Normal and diabetic retinopathy images are used for this study. The fundus retinal images are subjected to a region dividing approach based on a fuzzy mean filter to precisely detect the macula and its centre. An Ant Colony Optimization (ACO) based method is used to identify the optic disc in these images. The results show that this combined approach for macula detection demonstrates improved performance compared to any stand-alone algorithm. As the identification of the optic disc and macula is important for pathological assessment, these studies seem to be clinically relevant.
Index Terms - Retina, optic disc, macula, fuzzy, ant colony optimization

I. INTRODUCTION

The retina is a thin multi-layered sensory tissue that lies at the back of the eye. The outlying parts of the retina are responsible for peripheral vision while the central area is responsible for central vision. The human retina is the only part of the central nervous system that can be imaged directly. Fundus imaging provides ophthalmologists with digitised data that can be used for the detection of eye diseases based on image processing and pattern recognition technology [1]. A computer-aided fundus image analysis provides an immediate detection and characterisation of retinal features prior to specialist inspection. Images of the retina are used to diagnose and monitor the progress of a variety of diseases such as diabetic retinopathy, age-related macular degeneration and glaucoma. The macula is a round area in the central region of the retina. The fovea is the central part of the macula that provides the sharpest vision, and the centre of the fovea is usually located at a distance of approximately 2.5 times the diameter of the optic disc from the centre of the optic disc. The fovea corresponds to the region of the retina with the highest sensitivity. The macula

region is generally darker than the surrounding retinal tissue, due to the higher density of carotenoid pigments in the retina and of pigment granules in the retinal pigment epithelial layer beneath the retina. It exhibits a non-specific structure and varies greatly across individuals due to variations in the levels of pigment associated with factors such as ethnicity, age, diet, and disease state. The loss of peripheral vision may go unnoticed for some time, but damage to the macula will result in loss of central vision, which may have serious effects. There is a need not only for the detection of retinal objects but also to describe their spatial locations and the spatial relationships between them. The detection of the macula has been carried out by matching correlation [2], by locating the darkest pixel in the coarse resolution image following a-priori geometric criteria based on the eye's anatomy [3], by mathematical morphology based methods (Zana et al. [4]), by using a probabilistic segmentation [5] and by point-distribution models [6]. Ibanez and Simo [7] applied Bayesian methodology to detect the fovea contour. Goldbaum et al. [8] fixed the position of the fovea relative to the optic disc. Li et al. [9] presented a model based approach where information from the active shape model was used to find the macula centre. The macula has also been extracted as the connected region around the fovea by using a region growing algorithm [10]. Recent methods model the likely position of the fovea with respect to the position of the optic disc and vascular geometry features [11]. This paper describes a hybrid method to detect the complex macula object together with the optic disc. In this work, a top-down region dividing (TDRD) based approach [12] is used for detection and analysis.

II. METHODOLOGY The images needed for the study were obtained from nearby hospitals. Images of similar sizes were considered for analysis. The TDRD based segmentation consists of two steps: region dividing and sub-region evaluation procedures, to iteratively segment an input region into several sub-regions by evaluating


both feature and spatial properties. The algorithm consists of the following steps:
i. The input image is considered as one big region and added to a dividing list.
ii. Region dividing procedure: segment each region in the dividing list into several sub-regions.
iii. Sub-region evaluation procedure: for each divided sub-region, determine whether the sub-region needs to be divided further. If it does, add the sub-region to the dividing list.
iv. Repeat steps 2 and 3 until the dividing list is empty.
v. Output the segmentation result by collecting all the sub-regions.
The region dividing procedure combines the advantages of histogram-based and region-based segmentation methods. It includes three steps: suspicious intensity determination, suspicious pixel determination, and final intensity determination (FID). The suspicious intensities are determined by comparing the histograms of two transformed images obtained using histogram equalization. The input grayscale image of 256 levels is reduced to a binary image of two levels, or to a three-level image. Let OriGray and NewGray indicate the original and the translated intensities, respectively. For the bi-level image, NewGray = 0 (black) if OriGray ≤ i, and NewGray = 1 (white) otherwise. For the tri-level image, NewGray = 0 (black) if OriGray ≤ j, NewGray = 1 (gray) if j < OriGray ≤ k, and NewGray = 2 (white) if OriGray > k. A pixel is considered a suspicious pixel if its intensity is one of the suspicious intensities. In a histogram-based approach it is difficult to determine the proper threshold; in order to avoid this difficulty, a range of suspicious intensities is first determined based on the histogram analysis. Once the suspicious pixels are derived, the local spatial information is used to determine the final intensity of the suspicious pixels: the pixels and their neighbours are analysed to determine the final intensity. The final intensity determination is based on the Weighted Fuzzy Mean (WFM) filter [13]. The steps involved are as follows:
1. Set up L-R type fuzzy subsets FM_i, FM_j, FM_k, and FM_FE using arrays. For example, FM_i = array fm_i[h], where 0 ≤ h ≤ 255 and 0 ≤ array fm_i[h].
2. For each suspicious pixel (n, m), calculate Fuzzy_i(n, m), Fuzzy_j(n, m), Fuzzy_k(n, m) and Fuzzy_FE(n, m).
3. If (|Fuzzy_j(n, m) − Fuzzy_FM(n, m)| < |Fuzzy_i(n, m) − Fuzzy_FM(n, m)|) and (|Fuzzy_j(n, m) − Fuzzy_FM(n, m)| < |Fuzzy_k(n, m) − Fuzzy_FM(n, m)|) then Fin(n, m) = 0;
else if (|Fuzzy_i(n, m) − Fuzzy_FM(n, m)| ≤ |Fuzzy_j(n, m) − Fuzzy_FM(n, m)|) and (|Fuzzy_i(n, m) − Fuzzy_FM(n, m)| ≤ |Fuzzy_k(n, m) − Fuzzy_FM(n, m)|) then Fin(n, m) = 2;
else Fin(n, m) = 1.

4. Repeat steps 1-3 to determine the final intensity of all suspicious pixels.
The results obtained from the suspicious pixels by the fuzzy based method are then combined with the ACO method [14] for the detection of the optic disc as well as the macula in a single image. Their relationship could be further used for the classification of normal and abnormal images. (A short sketch of the intensity remapping described above is given below.)
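A small sketch of the bi-/tri-level intensity remapping used in the region dividing step; the thresholds j and k are assumed to come from the histogram analysis and are passed in explicitly here for illustration.

```python
import numpy as np

def trilevel_map(gray, j, k):
    """Map an 8-bit grayscale image to three levels, as in the region dividing
    step: 0 (black) for OriGray <= j, 1 (gray) for j < OriGray <= k, and
    2 (white) for OriGray > k. The bi-level map is the special case j == k."""
    out = np.full(gray.shape, 1, dtype=np.uint8)   # default: middle (gray) level
    out[gray <= j] = 0                             # dark pixels
    out[gray > k] = 2                              # bright pixels
    return out
```

Suspicious pixels are then those whose level assignment disagrees between the bi-level and tri-level views of the same histogram-equalized image.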

III. RESULTS AND DISCUSSION

Typical original retinal images and the images processed by the proposed methods are shown in Figures 1 and 2. Figures 1(a) and 2(a) show the original normal and abnormal images, respectively. Figure 1(b) shows the normal image subjected to bi-level thresholding; in this image the large blood vessels are clearly seen while the smaller vessels and the optic disc are not detected. This result is subjected to further processing to find the suspicious pixels, having varying shades of black, white and gray, and the results are shown in Figure 1(c). For the precise location of the macula and its centre, the processed image is subjected to the fuzzy mean filter to find the final intensity determination, as shown in Figure 1(d). Edge detection using the Canny approach is then performed on this processed image. The results show a clearly delineated macula boundary, isolated from other parts, as shown in Figure 1(e). The result obtained after the final intensity determination using the fuzzy approach is combined with the ACO method. Figure 1(f) shows the final result obtained for the identification of the macula and optic disc in a single image. The optic disc along with the macula is distinctly seen in this hybrid approach. Further, the centre of the macula, the fovea, is also clearly seen. A typical diabetic retinopathy image was also considered for this analysis and the corresponding results are shown in Figures 2(a) to 2(f). Similar steps were carried out and observations were recorded.

IV. CONCLUSION

The characterization of the optic disc and macula is considered important for clinical inference of eye disorders. Many of the earlier reports describe the dependence on the optic disc to characterize the macula, and they are based on a single algorithm. It has already been shown that the ACO based method is effective in identifying the optic disc [15]. In this work, a hybrid approach is employed to detect the optic disc and macula in a single image. The results demonstrate that this combined approach is effective in detecting the optic disc and macula together in a single image.


Fig. 1 (a) Original normal image (b) Bi-level thresholding (c) Suspicious pixels determination (d) Final intensity determination using fuzzy (e) After applying Canny edge detection to (d) (f) Combined ACO and fuzzy results

Fig. 2 (a) Original abnormal image (b) Bi-level thresholding (c) Suspicious pixels determination (d) Final intensity determination using fuzzy (e) After applying Canny edge detection to (d) (f) Combined ACO and fuzzy results


The fuzzy based region dividing approach is efficient for the detection of the macula and its centre. As identification of individual objects and extraction of features are important for assessment of eye disorders these studies seem to be clinically relevant. REFERENCES
[1] J. S. Duncan and N. Ayache, Medical image analysis: progress over two decades and the challenges ahead, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 85-106, 2000.
[2] C. Sinthanayothin, J. Boyce, H. Cook, and T. Williamson, Automatic localization of Optic Disc, Fovea, and Retinal Blood Vessels from Digital Colour Fundus Images, British Journal of Ophthalmology, vol. 83, pp. 902-910, 1999.
[3] L. Gagnon, M. Lalonde, M. Beaulieu, and M.-C. Boucher, Procedure to Detect Anatomical Structures in Optical Fundus Images, Proceedings of SPIE Medical Imaging: Image Processing, vol. 4322, pp. 1218-1225, 2001.
[4] F. Zana, I. Meunier, and J. C. Klein, A region merging algorithm using mathematical morphology: application to macula detection, in Proceedings of the International Symposium on Mathematical Morphology, pp. 423-430, 2001.
[5] C.-H. Wu and G. Agam, Probabilistic retinal vessel segmentation, in Proceedings of SPIE, pp. 6512131-6512138, 2007.
[6] M. Niemeijer, M. D. Abramoff, and B. V. Ginneken, Segmentation of the optic disc, macula and vascular arch in fundus photographs, IEEE Trans. Medical Imaging, vol. 26, no. 1, pp. 116-127, 2007.
[7] M. Ibanez and A. Simo, Bayesian detection of the fovea in eye fundus angiographies, Pattern Recognition Letters, pp. 229-240, 1999.
[8] M. Goldbaum, S. Moezzi, S. Taylor, S. Chatterjee, J. Boyd, E. Hunter, and R. Jain, Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images, in Proceedings of the IEEE International Conference on Image Processing, vol. 3, pp. 695-698, 1999.
[9] H. Li and O. Chutatape, Automated feature extraction in color retinal images by a model based approach, IEEE Trans. Biomedical Engineering, vol. 51, no. 2, pp. 246-254, 2004.
[10] M. Mancas, B. Gosselin, and B. Macq, Segmentation Using a Region Growing Thresholding, in Image Processing: Algorithms and Systems IV, E. R. Dougherty, J. T. Astola, and K. O. Egiazarian, eds., Proceedings of the SPIE, vol. 5672, pp. 388-398, 2005.
[11] K. Tobin, E. Chaum, V. Govindasamy, and T. Karnowski, Detection of anatomic structures in human retinal imagery, IEEE Trans. on Medical Imaging, vol. 26, no. 12, pp. 1729-1739, 2007.
[12] Yi-Ta Wu, Frank Y. Shih, Jiazheng Shi, and Yih-Tyng Wu, A top-down region dividing approach for image segmentation, Pattern Recognition, vol. 41, pp. 1948-1960, 2008.
[13] C. S. Lee, Y. H. Kuo, and P. T. Yu, Weighted fuzzy mean filters for image processing, Fuzzy Sets and Systems, vol. 89, pp. 157-180, 1997.
[14] T. Jing, Y. Weiyu, and X. Shengli, Ant colony optimization algorithm for image edge detection, IEEE Congress on Evolutionary Computation, pp. 751-756, 2008.
[15] G. Kavitha and S. Ramakrishnan, An Approach to identify optic disc in human retinal images using Ant Colony Optimization method, Journal of Medical Systems, 2009 (DOI 10.1007/s10916-009-9295-4).



Quality Access Control and Tracking of Illicit Distribution of Compressed Gray Scale Image
Amit Phadikar
Department of Information Technology MCKV Institute of Engineering Liluah, Howrah 711204, India. amitphadikar@rediffmail.com

Santi P. Maity, Minakshi Chakraborty


Department of Information Technology Bengal Engineering and Science University Shibpur, Howrah 711 103, India. santipmaity@it.becs.ac.in, minakshi.bec@gmail.com

Abstract - This paper proposes a scheme that integrates quality access control and tracking of illicit distribution of digital image(s) in a single platform. The goal is achieved by (1) modulating some of the valuable coefficients of the DCT (discrete cosine transform) compressed data such that an unauthorized user will not be able to see the image properly, and (2) embedding a binary watermark as tracking information in the compressed and modulated data using quantization index modulation (QIM), for the tracking of illicit distribution by an authorized user of the decrypted image. The coefficients that are modulated are selected pseudo-randomly using a secret key (K). Before watermark embedding, the watermark is encoded using convolution coding to increase robustness, so that the extracted watermark is recognizable even after bit errors due to data transmission over noisy wireless channels. The simulation results show the validity of the above claims without decreasing the compatibility of the standard JPEG coding scheme.
Keywords - Access Control; Illicit Distribution; Encryption; Compressed Domain; Data Hiding; DCT.

I. INTRODUCTION
Nowadays, due to the highly developed network technology, digital media such as images and videos can be distributed rapidly and widely via the World Wide Web (WWW) in a cost-efficient manner. The sale of high value products and services through the Internet is also on the rise. However, due to the lack of security in the communication network, vendors may be unwilling to distribute digital data such as digital images over the Internet, as they can be accessed illegally, tampered with, and also redistributed illegally. Access control techniques may find their usage in providing a kind of security, either to deny fully or to allow partial accessing of the digital content [1-2]. On the other hand, in the context of tracking of illicit distribution, watermarking can provide a useful mechanism to track such illicit copies or to attach property rights information to the material. In other words, the watermark as tracking information is used to keep honest people honest. The need for access control and tracking of illicit distribution has received widespread attention and a number of solutions have been proposed in [3-9]. Typically two classes of techniques, namely data encryption and data hiding, either separately or in combined form, are used to meet the goal of access control [1-6]. Recently, Phadikar et al. [6] proposed a quality access control scheme for colour images in the DCT compressed domain. The goal is achieved by modulating selected AC coefficients of the compressed data. The encryption-based schemes discussed above cannot protect the digital media after decryption, while decryption is necessary in order to exhibit the digital document. Once decrypted, the data can be redistributed illegally. Within this context, watermarking can provide a useful mechanism to track illicit copies or to attach property rights

information to the multimedia content [7]. In [8] the authors propose a forward system architecture for copyright protection and tracking based on digital watermarking and mobile agents. Bloom et al. [9] propose a scheme for secured delivery of motion pictures to the projector. The scheme attaches forensic tracking information to the motion picture content that provides persistent tracking beyond the projector. The review of the previous works reveals that most of the works discussed so far focus on an individual application. Moreover, it is now rare to encounter multimedia signals of any kind in a raw, uncompressed format. Therefore, it is highly desirable to develop a data-hiding algorithm that works entirely in the compressed domain. Although a new compression standard like JPEG-2000 has been introduced and wavelets have become appealing, in reality more than 80% of image and video data are still available in DCT compressed form [6]. Moreover, data hiding in compressed data is also challenging, as the two operations are antagonistic in character: while data hiding uses the very redundancy present in the host signal to make the embedding imperceptible, compression removes the redundancy and leaves no or little room for data insertion. So the development of an efficient quality access control scheme that also tracks illicit distribution of DCT compressed images is quite demanding and needs attention from the research community. However, algorithms developed and reported in the literature in this domain are, to the best of our knowledge, also few in number. To take into account the wide use of DCT compression, and the security and illicit distribution problems, this work integrates quality access control and tracking of illicit distribution of digital image(s) in the DCT compressed domain. The goal is achieved by modulating some of the valuable coefficients of the compressed data so that an unauthorized user will not be able to see the image properly, which leads to access control. Moreover, a binary watermark as tracking information is embedded in the compressed and modulated data using quantization index modulation (QIM) [10], for tracking of illicit distribution by an authorized user of the decrypted image. Before watermark embedding, the watermark is encoded using convolution coding to increase robustness against bit errors in the transmission channel, which is mainly corrupted by additive white Gaussian noise (AWGN). The simulation results show that a user having full knowledge of the key can get the best copy of the image, while all other users can only access the image up to a certain level of quality. Moreover, the use of channel coding increases the tracking performance of illicit distribution.

II. PROPOSED SCHEME


The proposed joint image quality access control and tracking scheme consists of two modules, namely image encoding and image decoding.


A) Image Encoding Process: The inputs to the encoding process are the gray scale image, the owner key (K), and a set of dithers used for watermark bit embedding through QIM. The output of the encoding process is the modulated compressed watermarked image. The encoding process consists of the following steps.
Step 1: Image Blocking: The host image of size (N × N) is partitioned into non-overlapping blocks of size (n × n). We select (8 × 8) blocks in order to make the scheme compliant with the JPEG codec.
Step 2: Image Preprocessing and Transformation: Each block is level shifted by subtracting 2^(m−1), where m is the number of bits required to represent the pixel values of the host image. This preprocessing operation not only prevents numerical overflow but also makes arithmetic coding, context specification, etc. simpler. In particular, this makes the compression more efficient with absolutely no or low loss in image quality.
Step 3: Quantization of DCT Coefficients: The resulting coefficients are quantized using the standard quantization table of JPEG compression and are ordered in a zigzag pattern.
Step 4: Block Based Modulation of DCT Coefficients: The modulation of selected DCT coefficients reduces the understandability of the images and plays the key role in quality access control. The coefficients that are modulated are selected pseudo-randomly depending upon the secret key (K). The modulation can be done either by changing the sign bit (using Eq. 1) or by encrypting the selected DCT coefficients with a public key cryptography scheme such as RSA. In our scheme, we select the horizontal and vertical AC coefficients for the modulation purpose. The energy of the horizontal and vertical AC components represents the horizontal and vertical edge content of the block. The modulation of the selected coefficients by modification of the sign bit can be described as:

C^e = (−1) · C                                                                        (1)

The modulation of the selected coefficients by RSA can be described as:

C^e = RSA_E(C, P)                                                                     (2)

where C and C^e are the quantized selected AC coefficients in a block before and after modulation, RSA_E is the RSA encryption function and P is the public key.
Step 5: Encoding of Watermark: The watermark (W), which is a binary image in our scheme, is encoded using convolution coding [11] with rate 1/R; the increase in redundancy leads to greater robustness against bit errors in the transmission channel corrupted by additive white Gaussian noise (AWGN). Let W′ denote the watermark generated by the convolution encoding of W.
Step 6: Generation of Binary Dither: Two dither sequences of length L are generated pseudo-randomly using a key, with step size (Δ), as follows:

d_q(0) = rand(key) · Δ − Δ/2,   0 ≤ q ≤ L−1                                           (3)

d_q(1) = d_q(0) + Δ/2  if d_q(0) < 0;   d_q(0) − Δ/2  if d_q(0) ≥ 0                    (4)

where rand(key) is a random number generator. In the present scheme, the value of L is made 63, as one block may contain a maximum of 63 AC coefficients. The scheme leaves the DC coefficient unchanged, as use of the DC component for watermark embedding reduces the image fidelity of the watermarked image. The distance between the corresponding elements of the two dither levels d_q(0) and d_q(1) is Δ/2. The sequences d_q(0) and d_q(1) are used, respectively, for embedding the bit 0 and the bit 1.
Step 7: Watermark Insertion: The bits of the encoded watermark (W′) are now embedded into the DCT coefficients by applying QIM. The q-th watermarked DCT coefficient S_q is obtained as follows:

S_q = Q{X_q + d_q(0), Δ} − d_q(0)   if W′(i, j) = 0
S_q = Q{X_q + d_q(1), Δ} − d_q(1)   if W′(i, j) = 1                                    (5)

where X_q is the original q-th DCT coefficient and Q is a uniform quantizer (and dequantizer) with step Δ.
Step 8: Efficient Representation of Bit Symbols: Each non-zero coefficient and each run of zero coefficients is replaced by a Huffman code. The codes are written to a file and constitute the compressed and modulated data.

B) Image Decoding Process: The inputs to the decoder are the modulated compressed Huffman bit sequence, the key and the set of dithers used at the time of encoding. The output will be a good quality image if it is decoded by an authorized user; the result will be the reverse otherwise. The extracted watermark is used as forensic information that can track the illicit distribution of the decoded image. The steps of the decoding process are described as follows.
Step 1: Huffman Decoding and Watermark Bit Extraction: Huffman decoding is performed on the watermarked image to get the quantized DCT coefficients. The watermark bit extraction method uses the principle of minimum distance decoding to determine which quantizer was used at the encoder side. A watermark bit Ŵ(i, j) is decoded by examining the group of coefficients using the following rule:

A = Σ_{q=0}^{L−1} | Q(Y_q + d_q(0), Δ) − d_q(0) − Y_q |                                (6a)

B = Σ_{q=0}^{L−1} | Q(Y_q + d_q(1), Δ) − d_q(1) − Y_q |                                (6b)

where Y_q is the q-th DCT coefficient of the received signal. The watermark bit Ŵ(i, j) is now decoded using the following rule:

Ŵ(i, j) = 0 if A < B, and 1 otherwise                                                  (7)

The extracted watermark bits are then decoded by Viterbi decoding. Let W_d be the extracted watermark after Viterbi decoding. The Viterbi decoding technique has become the dominant technique because of its highly satisfactory bit error performance, high speed of operation, ease of implementation, low cost and fixed decoding time [11].
Step 2: Calculation of Bit Error Rate (BER): We calculate the bit error rate (BER) between the original watermark image (W) and the decoded watermark (W_d) in order to decide whether a given watermark exists or not. The BER is defined by Eq. 8 below:

BER(W, W_d) = (1 / (m × m)) Σ_{i=1}^{m} Σ_{j=1}^{m} W(i, j) ⊕ W_d(i, j)                (8)


where the symbol m is the number of rows/columns of the watermark and ⊕ is the XOR operator. The lower the BER value, the better the system performance in terms of tracking of illicit distribution of the decoded image by an authorized user.
Step 3: Block Based Demodulation of DCT Coefficients: This is the inverse of the block based modulation step of the encoding process. The demodulation can be done either by changing the sign or by decrypting the selected AC coefficients that were used at the time of encoding. The demodulation of the selected coefficients by modification of the sign bit can be described as:

C^d = (−1) · C^e                                                                      (9)

The demodulation of the selected coefficients by decryption can be described as:

C^d = RSA_D(C^e, S)                                                                   (10)

where C^e and C^d are, respectively, the quantized DCT coefficients before and after the demodulation process, RSA_D is the RSA decryption function and S is the secret key.
Step 4: Reconstruction of Image: Reverse zigzag scan, denormalization, inverse DCT and post-processing are performed on the resultant demodulated quantized coefficients to reconstruct the image.
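A compact sketch of the dithered QIM embedding of Step 7 and the minimum-distance extraction of decoding Step 1, following Eqs. (3)-(7) as reconstructed above; the absolute-distance measure and the choice Δ = 16 are assumptions consistent with the text.

```python
import numpy as np

def qim_embed(X, bit, d0, d1, delta=16.0):
    """Dithered QIM embedding of one watermark bit into a block of AC
    coefficients X (Eq. 5): quantize with the dither of the chosen bit."""
    d = d0 if bit == 0 else d1
    return delta * np.round((X + d) / delta) - d

def qim_extract(Y, d0, d1, delta=16.0):
    """Minimum-distance extraction (Eqs. 6-7): re-quantize with both dithers
    and pick the bit whose reconstruction is closer to the received block."""
    A = np.abs(delta * np.round((Y + d0) / delta) - d0 - Y).sum()
    B = np.abs(delta * np.round((Y + d1) / delta) - d1 - Y).sum()
    return 0 if A < B else 1

# dither pair as in Eqs. (3)-(4): d1 is d0 shifted by +/- delta/2
rng = np.random.default_rng(42)
d0 = rng.random(63) * 16.0 - 8.0
d1 = d0 + np.where(d0 < 0, 8.0, -8.0)
```

With the correct key (and hence the correct dither pair), the quantization residual pattern identifies the embedded bit; with a wrong key the two distances A and B become statistically indistinguishable, which is what makes the extraction key-dependent.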

III. PERFORMANCE EVALUATIONS

The performance of the proposed scheme is evaluated over a large number of benchmark images. All of the test images are (256 × 256), 8-bit/pixel gray scale images. The present study uses the peak-signal-to-noise-ratio (PSNR) and the mean structural similarity index measure (MSSIM) [12] as distortion measures for the watermarked image under inspection with respect to the original image. Figs. 1(a)-1(e) show one of the host images, the decompressed image, the decompressed watermarked image, the original watermark and the extracted watermark, respectively. Table I shows the various results for the same. The results in Table I show that the data is embedded without affecting compatibility (in terms of bit rate and compression ratio) with the standard JPEG coding scheme. The value of the step size (Δ) taken into consideration is 16 (i.e. a watermark power (WP) of 13.29 dB). The watermark power is defined as:

WP = 10 log10( Δ² / 12 ) dB                                                           (11)

We also study the effect of different watermark powers (WP) on the relative gain/loss in image fidelity. Table II shows the image fidelity in terms of PSNR and MSSIM for different watermark powers. From Table II it is clear that an increase in watermark power decreases the fidelity of the watermarked image.

Fig. 1. (a) Host image Lena; (b) Decompressed image; (c) Decompressed watermarked image; (d) Watermark image; (e) Extracted watermark.

TABLE I: RESULTS OF IMAGES WITHOUT AND WITH WATERMARKING (B.R.: BIT RATE IN BITS/PIXEL, C.R.: COMPRESSION RATIO)

                      Images  | B.R. | C.R.     | PSNR (dB) | MSSIM
Without watermarking  Lena    | 2.43 | 3.29 : 1 | 37.17     | 0.96
                      Boat    | 2.46 | 3.25 : 1 | 37.63     | 0.96
                      Pepper  | 3.13 | 2.56 : 1 | 35.42     | 0.95
                      Baboon  | 4.52 | 1.77 : 1 | 31.84     | 0.93
With watermarking     Lena    | 2.62 | 3.05 : 1 | 36.97     | 0.96
                      Boat    | 2.62 | 3.06 : 1 | 36.70     | 0.95
                      Pepper  | 3.32 | 2.41 : 1 | 34.74     | 0.95
                      Baboon  | 4.81 | 1.66 : 1 | 31.36     | 0.92

TABLE II: PSNR AND MSSIM OF THE WATERMARKED IMAGE FOR DIFFERENT WATERMARK POWERS

WP (dB)        | 9.21  | 13.29 | 15.23
Step size (Δ)  | 10    | 16    | 20
PSNR (dB)      | 40.32 | 36.97 | 32.62
MSSIM          | 0.97  | 0.96  | 0.89

A) Quality Access Control: In the proposed scheme, quality access control is achieved by changing the sign bit and also by applying public key cryptography like RSA to some selected AC coefficients. Table III shows a comparative study of the image degradation of a compressed digital image due to the change of sign bit and RSA, respectively. From the results it is clear that the change of sign bit is more efficient compared to RSA, because changing only the sign bit of the selected coefficients keeps the compression ratio unchanged and keeps the modulation process low in cost. In terms of security, however, RSA is more secure than the former.

TABLE III: RESULTS DUE TO MODULATION

Modulated coefficients                                   | PSNR before deco. | MSSIM before deco. | PSNR after deco. | MSSIM after deco. | Compression rate (bits/pixel) | Compression ratio
AC coefficients (vertical + horizontal), change of sign  | 20.84             | 0.68               | 36.96            | 0.96              | 2.62                          | 3.05 : 1
AC coefficients (vertical + horizontal), RSA             | 8.65              | 0.16               | 36.96            | 0.96              | 2.28                          | 3.01 : 1

B) Noise Resilience Capability: In our scheme, noise resilience means how well the scheme decodes the encrypted message if the encrypted message is corrupted by noise such as AWGN. Table IV shows the quality of the decoded image after adding noise with various noise variances (σ²), when both the horizontal and vertical coefficients are modulated by change of sign and by RSA, respectively. We consider the vertical and horizontal coefficients as those coefficients are responsible for the horizontal and vertical edges, respectively. From the results it is clear that the noise resilience due to the change of sign is better than that of RSA. In other words, access control using the change of sign is more robust than RSA in terms of noise resilience.

TABLE IV: RESULTS FOR THE DECODED IMAGE UNDER AWGN

Noise variance | Change of sign bit: PSNR (dB) | MSSIM | RSA: SNR (dB) | MSSIM
18.74          | 15.35                         | 0.35  | 2.93          | 0.04
5.92           | 20.27                         | 0.55  | 3.62          | 0.05
1.87           | 24.99                         | 0.74  | 4.12          | 0.06
0.59           | 29.47                         | 0.86  | 5.53          | 0.11
0.18           | 33.04                         | 0.92  | 8.96          | 0.26


C) Tracking of Illicit Distribution: We have studied the usefulness of convolution coding for the tracking performance. The simulation is done by adding AWGN with different watermark-to-noise ratios (WNR). The WNR is defined by the following equation:

WNR = 10 log10( Δ² / (12·σ²) ) dB                                                     (12)

The use of convolution coding reduces the bit error in the received signal when the signal is transmitted through the noisy channel or manipulated by some user. The results shown in Table V highlight the superiority of our method.

Fig. 2. (a) Effect of the convolution coding rate on BER (curves: without convolution coding and with rates R = 1/2, 1/4 and 1/6); (b) and (c): decoded image using a fake key, (b) due to change of sign (PSNR: 20.34 dB) and (c) due to RSA (PSNR: 8.63 dB).
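As a quick numerical illustration of (12), take for example Δ = 16 (the step size used above) together with σ² = 5.92, one of the noise variances listed in Table IV; the pairing of these two values is ours, for illustration only:

WNR = 10 log10( 16² / (12 × 5.92) ) = 10 log10( 256 / 71.04 ) ≈ 5.6 dB

i.e. a moderately positive WNR on the horizontal axis of Fig. 2(a).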

V. CONCLUSIONS AND SCOPE OF THE FUTURE WORK


In this paper, we propose a joint quality access control and tracking-of-illicit-distribution scheme for digital images in the compressed domain using data hiding and modulation. Experimental results show that the use of convolution coding improves the tracking performance against AWGN. Experimental results also show that the tracking information, embedded as a watermark, does not decrease compatibility (in terms of bit rate and compression ratio) with the standard JPEG coding functionality. The scheme is simple and easy to implement. It is also a secure scheme: only the one with the correct key can decrypt the data for quality access control. Future work can concentrate on further performance improvement of the proposed scheme and on extending it to audio data.
REFERENCES
[1] S. Imaizumi, O. Watanabe, M. Fujiyoshi, and H. Kiya, "Generalized hierarchical encryption of JPEG 2000 code streams for access control," Proc. IEEE International Conference on Image Processing, pp. 1094-1097, 2005.
[2] Y. G. Won, T. M. Bae, and Y. M. Ro, "Scalable protection and access control in full scalable video coding," LNCS, vol. 4283, pp. 407-421, 2006.
[3] E. Bertino, J. Fan, E. Ferrari, M. S. Hacid, A. K. Elmagarmid, and X. Zhu, "Hierarchical access control model for video database systems," ACM Transactions on Information Systems, vol. 21, pp. 155-191, 2003.
[4] F. C. Chang, H. C. Huang, and H. M. Hang, "Layered access control schemes on watermarked scalable media," Journal of VLSI Signal Processing (Springer Netherlands), vol. 49, pp. 443-455, June 2007.
[5] J. L. Liu, "Efficient selective encryption for JPEG 2000 images using private initial table," Pattern Recognition, vol. 39, pp. 1509-1517, 2006.
[6] A. Phadikar and S. P. Maity, "Quality access control of compressed color images," AEU - International Journal of Electronics and Communications, Elsevier, in press.
[7] C. Neubauer and J. Herre, "Audio watermarking of MPEG-2 AAC bit streams," Proc. of the 108th Convention, pp. 5101-5121, 2000.
[8] L. Quan and L. Hong, "Application of digital watermark and mobile agent in copyright protection system," Proc. of the International Conference on Computer Science and Information Technology, Singapore, pp. 3-6, 2008.
[9] J. A. Bloom, "Security and rights management in digital cinema," Proc. of the International Conference on Multimedia and Expo, Maryland, USA, pp. 16-21, 2003.
[10] B. Chen and G. W. Wornell, "Quantization index modulation: a class of provably good methods for digital watermarking and information embedding," IEEE Transactions on Information Theory, vol. 47, pp. 1423-1443, 2001.
[11] R. Bose, Information Theory, Coding and Cryptography. Tata McGraw-Hill, 2005.
[12] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error measurement to structural similarity," IEEE Transactions on Image Processing, vol. 13, pp. 1-14, 2004.
[13] A. A. Zaidee, S. Fazdliana, and B. J. Adznan, "Content access control for JPEG images using CRND zigzag scanning and QBP," Proc. IEEE International Conference on Computer and Information Technology (CIT'06), p. 146, 2006.


Fig. 2. (a) Effect of convolution coding rate on BER (no coding and R = 1/2, 1/4, 1/6); (b) & (c) decoded image using a fake key: (b) due to change of sign (PSNR: 20.34 dB); (c) due to RSA (PSNR: 8.63 dB).


C) Tracking of Illicit Distribution: We have studied the usefulness of convolution coding on the tracking performance. The simulation is done by adding AWGN at different watermark-to-noise ratios (WNR). The WNR is defined by the following equation:

WNR = 10 log10(Δ² / (12 σ²))  dB                    (12)

where Δ is the step size and σ² is the noise variance. Fig. 2(a) shows the BER performance for different code rates such as R = 1/2, 1/4, 1/8. Each point on the curve is obtained as the average value of 100 independent experiments conducted over a large number of benchmark images with varied characteristics. From Fig. 2(a) it is clear that increasing the coding rate decreases the BER, which indirectly increases the efficiency of the tracking system. This is because, when the BER is lower, the extracted watermark is more accurate and the location of the leak in the illegal distribution can be found more reliably.

D) Security: We have also studied the security of the proposed scheme by decoding the encoded image using a false key. Figs. 2(b) and 2(c) show the decoded image when access control is done by change of sign bit and by RSA, respectively. From Figs. 2(b) and 2(c) it is clear that the decoded image obtained with a fake key is of poor quality and hence the scheme is secure. It is also clear that RSA is more secure than the former; the different image qualities of Figs. 2(b) and 2(c) support this claim.
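A companion check of equation (12) is sketched below; the step size of 16 matches the value used in the paper, while the noise variances are illustrative assumptions.

```python
# Watermark-to-noise ratio for a step size and an AWGN variance, per equation (12).
import math

def wnr_db(step, noise_variance):
    """WNR = 10 log10(step^2 / (12 * sigma^2)) in dB."""
    return 10.0 * math.log10(step ** 2 / (12.0 * noise_variance))

for sigma2 in (0.2, 2.0, 21.3, 213.0):          # illustrative noise variances
    print(f"sigma^2 {sigma2:7.1f}  ->  WNR {wnr_db(16, sigma2):7.2f} dB")
```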
TABLE V  COMPARISON OF RESULTS: PSNR AND MSSIM OF THE DECODED IMAGE WHEN BER IS 10^-4

Algorithm                          PSNR    MSSIM
Zaidee [13]                        28.84   0.91
Proposed (Change of Sign Bit)      32.05   0.94
Proposed (RSA)                     29.05   0.92


The robustness of the proposed scheme over noisy channels is compared with the results obtained in [13] in terms of PSNR, MSSIM, and BER (bit error rate), where BER measures the error introduced in the received signal.




High Performance Interface Circuits between Adiabatic and Standard CMOS logic at 90 nm Technology

Dr. B.P. Singh


Faculty of Engineering and Technology Mody Institute of Technology and Science Lakshmangarh, India bpsingh@ieee.org
Abstract- Adiabatic circuits and standard CMOS logic are widely employed in low-power VLSI chips to achieve high system performance. The power saving of an adiabatic circuit can reach more than 90% compared to conventional static CMOS logic. The clocking schemes and signal waveforms of adiabatic circuits are different from those of standard CMOS circuits. This paper investigates the design of low-power interface circuits in terms of energy dissipation. Several low-power interface circuits that convert signals between adiabatic logic and standard CMOS circuits are presented. With BSIM3v3 90 nm CMOS technology, the proposed interface circuits show relatively large power savings over a wide range of frequencies. This paper also investigates the power delay product over a wide range of supply voltages. Simulation has been done with the Tanner EDA tool at BSIM3v3 90 nm technology.
Keywords- Standard CMOS logic, Adiabatic circuit, Interfaces.

Ms. Neha Arora


Faculty of Engineering and Technology Mody Institute of Technology and Science Lakshmangarh, India neha.241986@gmail.com
One approach is based on peak sampling and the other is based on comparators. At 90 nm technology the peak sampling technique is inferior because it shows higher energy dissipation than the other techniques. This paper investigates the comparison between these interfaces at 90 nm technology.

A. Peak Sampling Based Technique


The logic value of the adiabatic signal can be sampled and then held using a master-slave flip-flop to obtain a standard CMOS output. Fig. 1 shows an adiabatic-CMOS interface based on peak sampling. A positive-edge-triggered C2MOS flip-flop with a master-slave configuration is used to sample the peak voltage of the adiabatic signal. The pc signal is the rectangle-wave clock that comes from the synchronous power-clock generator. The sampling operation should be switched in the proper phase, i.e. only when the adiabatic signal inclk is in the hold state. The power delay product is evaluated at different values of Vdd: as Vdd increases the PDP also increases, while the delay remains constant for all values of Vdd. This interface does not show appreciable power saving over a range of frequencies at 90 nm technology, and it provides a static CMOS output.

I. INTRODUCTION

With the development of VLSI technology the power dissipation is increasing dramatically. Low power has become one of the crucial design constraints, especially for portable and battery operated systems. In standard CMOS logic circuits, each switching event causes an energy transfer from the power supply to the output node or from the output node to ground. Compared with conventional low power approaches, power dissipation can be significantly reduced by using adiabatic computation. Adiabatic logic circuits utilize AC voltage supplies (power-clocks) to recycle the energy of circuit nodes. During the recovery phase, the energy of the circuit nodes is recovered to the power source instead of being dissipated as heat. In adiabatic circuits, circuit nodes are charged and discharged by AC voltage supplies, thus their output signals are clocked AC signals (adiabatic signals). Since the AC supplies control the working rhythm of the circuits, they are also called power-clocks. As is well known, a DC power supply and rectangle-wave clocks are used in standard CMOS circuits, thus the outputs of standard CMOS circuits are typical rectangle-wave signals (CMOS signals). In order to utilize the strengths of various low-power approaches, we consider that both adiabatic logic and standard CMOS circuits co-exist on a single chip.
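As a rough illustration of the adiabatic saving mentioned above (not taken from the paper): a conventional CMOS node dissipates about C·Vdd²/2 per charging event, whereas ramped adiabatic charging over a time T dissipates roughly (RC/T)·C·Vdd² when T >> RC. The R, C, Vdd and T values below are assumed, illustrative numbers.

```python
# Back-of-the-envelope comparison of conventional vs. adiabatic charging energy.
R = 10e3        # effective switch resistance (ohms), assumed
C = 10e-15      # node capacitance (farads), assumed
VDD = 1.2       # supply voltage (volts), assumed
T = 10e-9       # power-clock ramp time (seconds), assumed

e_conventional = 0.5 * C * VDD ** 2          # energy dissipated per conventional charge event
e_adiabatic = (R * C / T) * C * VDD ** 2     # approximate adiabatic dissipation for T >> RC

saving = 1.0 - e_adiabatic / e_conventional
print(f"conventional: {e_conventional:.2e} J, adiabatic: {e_adiabatic:.2e} J, saving: {saving:.1%}")
```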

Fig. 1 Peak Sampling based Interface

B. Comparator Based Techniques


The adiabatic-CMOS interface can also be realized by using comparators. A Schmitt inverter has been used for the adiabatic-CMOS interface; the schematic is shown in Fig. 2. The Schmitt circuit responds to a slowly changing input waveform with a fast transition at the output terminal. Thus, the short-circuit loss of the next-stage circuit can be reduced, while the Schmitt circuit itself has a larger short-circuit current because of its positive-feedback configuration. The circuit is simulated using 90 nm CMOS technology. At 90 nm technology this circuit shows more power dissipation than the peak sampling based technique. For all values of Vdd the delay remains constant and the power dissipation increases as Vdd increases. This circuit provides a dynamic CMOS output.

II. ADIABATIC-CMOS INTERFACE CIRCUITS

The adiabatic-CMOS interfaces can be implemented using two approaches: one is based on peak sampling techniques and the other on comparators.


Fig. 2 Schmitt Inverter based Interface

In order to convert the adiabatic signal into a static CMOS one, an improved power-clocked CMOS (IPC2MOS) inverter is shown in Fig. 5. The second-stage inverter is used for shaping the output signal. The power-clocks clk0, clk3, and clk2 drive the gates of the transistors, and they are used to avoid short-circuit current and to control the comparison time of the IPC2MOS inverter. The IPC2MOS inverter does not have short-circuit current. A standard static CMOS signal can be obtained, because the comparison is carried out only during the peak of the adiabatic signal.

An inverter is itself a comparator. It can be used as an adiabatic-CMOS interface circuit as shown in Fig. 3. A second-stage inverter can be used to shape the output. The delay remains constant for all supply voltages and the power dissipation grows as the supply voltage increases. This circuit shows a larger short-circuit current. At the maximum operational frequency, the circuit shows the minimum energy loss per cycle.

Fig. 5 Improved power-clocked CMOS (IPC2MOS) Comparator

Fig. 3 CMOS Inverter based Interface circuit

The above two circuits have a large short-circuit current because of the gradually rising and falling adiabatic signal. To eliminate the short-circuit current, a power-clocked CMOS (PC2MOS) inverter is shown in Fig. 4. The second-stage inverter is used for shaping the output signal. The structure of the PC2MOS is similar to clocked CMOS circuits, but the gates of the P-type and N-type transistors are controlled by the power-clock instead of the rectangle-wave clock used in CMOS circuits. The PC2MOS has only a charging current for the output node A when the input signal (in) falls. The introduction of the power-clock clk does not add a large energy loss because it operates in a fully adiabatic manner for the gates of the transistors. It is verified that the interface based on the PC2MOS comparator has low energy loss because the short-circuit dissipation has been completely eliminated. For all values of Vdd the delay remains constant; the power dissipation gradually increases with Vdd, and the energy loss per cycle decreases as the operational frequency increases. This circuit provides a dynamic CMOS output.

Although this circuit provides very good results at 180 nm TSMC technology, it gives the worst results at 90 nm technology: it shows very high power dissipation and energy loss per cycle. So, some modifications to this circuit have been made to achieve lower power dissipation. In the proposed circuit, different clocking schemes are applied to the improved power-clocked CMOS inverter reported in the literature. This circuit does not show any short-circuit dissipation because one of the five transistors always remains off, so there is no direct path from Vdd to ground. The proposed design shows the best results at 90 nm technology. The proposed IPC2MOS interface is shown in Fig. 6. This circuit shows the least power dissipation and the least energy loss per cycle, and it provides a static CMOS output, as the improved power-clocked CMOS inverter does.

Fig. 6 Proposed circuit for adiabatic-CMOS interface

III. CMOS-ADIABATIC INTERFACE CIRCUITS

Fig. 4 Power clocked CMOS Comparator based interface circuit.

A number of CMOS-adiabatic interface circuits are shown in Figs. 7, 8 and 9. The first interface is shown in Fig. 7. It consists of a signal converter, a CMOS edge-triggered flip-flop, and two CMOS inverters.


The signal converter converts the signals (inclk0 and inbclk0) to the adiabatic signals (out and outb). In order to avoid deformation of the adiabatic signals, the input signals (inclk0 and inbclk0) of the converter should only be switched during wait states, thus a flip-flop is used for synchronization. The rectangle-wave clock (pc0) is generated by using the CMOS inverter comparing clk0 with VDD/2. Thus, the outputs of the flip-flop are only switched during wait states, so that the inputs of the converter have the proper phase. This circuit has been simulated at 90 nm technology; the power delay product at various values of Vdd and the energy loss per cycle over a wide range of frequencies have been calculated. Comparatively more energy loss is exhibited by this circuit than by the other interfaces.

The PCFF is used for synchronization. Its structure and operation are similar to CMOS transmission-gate flip-flops with a master-slave configuration, except that it uses 4-transistor transmission gates driven by power-clocks instead of rectangle-wave clocks.
Fig. 9 CMOS-adiabatic interface based on power-clocked flip-flop

Because the above interface does not need the comparator and the energy of its clock input capacitances can be well recovered, lower energy dissipation can be expected; this circuit shows a large power saving at 90 nm technology except at 150 MHz.
Fig. 7 CMOS-adiabatic interface based on static flip-flop and comparator using two CMOS inverters

One source of energy loss is the comparator's large short-circuit dissipation. The interface based on the power-clocked CMOS inverter is shown in Fig. 8. The rectangle-wave clock is generated using the PC2MOS to reduce the short-circuit current, and the signal converter uses a buffer to reduce its input capacitances. The simulations reveal that this circuit also shows a large power saving over a wide range of frequencies at 90 nm technology.

IV. COMPARISONS OF ENERGY DISSIPATIONS

In Fig. 10, Series 1 represents the Schmitt inverter based interface, Series 2 the CMOS inverter based interface, Series 3 the power-clocked CMOS inverter based interface and Series 4 the proposed interface at 90 nm technology. The proposed interface shows the least energy dissipation over a wide range of frequencies (in MHz).
Fig. 10 Energy dissipation comparisons of various adiabatic-CMOS interfaces


Fig. 8 CMOS-adiabatic interface based on static flip-flop and comparator using power-clocked CMOS (PC2MOS)

The CMOS flip-flop used in the CMOS-adiabatic interfaces has a large energy loss compared with adiabatic circuits, because it does not operate in an adiabatic manner. A power-clocked flip-flop (PCFF) that reduces this energy loss is shown in Fig. 9. The interface consists of an FPAL buffer that converts the CMOS signal to an adiabatic one, and a PCFF.

In Fig. 11, Series 1 represents the CMOS-adiabatic interface based on static flip-flop and comparator using two CMOS inverters, Series 2 the CMOS-adiabatic interface based on static flip-flop and comparator using power-clocked CMOS (PC2MOS), and Series 3 the CMOS-adiabatic interface based on power-clocked flip-flop. The CMOS-adiabatic interface based on the power-clocked flip-flop shows its energy efficiency over a wide range of frequencies (in MHz), except at 150 MHz.



Fig. 11 Energy dissipation comparisons of various CMOS-adiabatic interfaces

V. COMPARISONS OF POWER DELAY PRODUCTS

The power delay product of the various adiabatic-CMOS interfaces has been compared over a wide range of supply voltages (in volts). In Fig. 12, Series 1 represents the peak sampling based interface, Series 2 the Schmitt inverter based interface, Series 3 the CMOS inverter based interface, Series 4 the power-clocked CMOS inverter based interface and Series 5 the proposed interface at 90 nm technology. The proposed design shows the best results.
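For reference, the two figures of merit compared in this section can be computed as sketched below; the power and delay numbers are placeholders, not the simulated interface results.

```python
# Power-delay product and energy loss per cycle from average power, delay and frequency.
def power_delay_product(avg_power_w, delay_s):
    return avg_power_w * delay_s            # joules

def energy_per_cycle(avg_power_w, freq_hz):
    return avg_power_w / freq_hz            # joules per cycle

p, d = 5e-6, 50e-12                         # assumed 5 uW average power, 50 ps delay
for f in (50e6, 100e6, 200e6, 300e6):       # operating frequencies in Hz
    print(f"f = {f/1e6:5.0f} MHz  PDP = {power_delay_product(p, d):.2e} J  "
          f"E/cycle = {energy_per_cycle(p, f):.2e} J")
```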

VI. CONCLUSION

Integrated circuits can be designed with both adiabatic and conventional CMOS logic on the same chip. Different types of interface circuits are available in the literature. The power-clocked CMOS comparator shows relatively large power savings over a wide range of frequencies for a clocked CMOS output. The proposed improved power-clocked CMOS comparator with different clocking schemes shows relatively large power savings over a wide range of frequencies for a static CMOS output. The CMOS-adiabatic interface based on the power-clocked flip-flop and the CMOS-adiabatic interface based on the static flip-flop and comparator using power-clocked CMOS show the best performance among all the reported circuits in terms of energy dissipation over a wide range of frequencies. They are more suitable for low-power embedded systems, where both adiabatic logic and standard CMOS circuits co-exist on a single chip to attain ultra low-power design. All these interface circuits are simulated at BSIM3v3 90 nm technology in the Tanner EDA tool.

Fig.12 PDP comparisons of various adiabatic-CMOS interfaces



The power delay product of the various CMOS-adiabatic interfaces has been compared over a wide range of supply voltages (in volts). In Fig. 13, Series 1 represents the CMOS-adiabatic interface based on static flip-flop and comparator using two CMOS inverters, Series 2 the CMOS-adiabatic interface based on static flip-flop and comparator using power-clocked CMOS, and Series 3 the CMOS-adiabatic interface based on power-clocked flip-flop. The interface based on the power-clocked flip-flop shows the best results.


Fig. 13 PDP comparisons of various CMOS -adiabatic interfaces



RSTDB & Cache Conscious Techniques for Frequent Pattern Mining


Vaibhav Kant Singh
Department of Computer Science & Engineering Samrat Ashok Technological Institute Vidisha (Madhya Pradesh) Vidisha, India vibhu200427@yahoo.com

Abstract- In this paper we give a brief overview of the proposed algorithm RSTDB, which we have implemented to reduce the multiple scans of the database by using both the upward and downward closure properties. Our earlier goal was to find an algorithm that is efficient in removing the flaws of the basic approach, i.e. Apriori. In this paper, along with the previous approach, we take a brief look at the advances in processor technology and the lack of proper facilities to exploit processor capability. The proposed algorithm has a special module to combat the inefficiency of Apriori. The module prepared is named the heuristic function; the concept is derived from the heuristic functions used in Artificial Intelligence to make tracking of the goal state in blind search algorithms efficient. In this paper we also take a brief look at cache optimization techniques that give better processor performance for evaluating frequent patterns and, in turn, make frequent pattern mining more efficient.
Keywords-Cache conscious optimization, Data mining, Frequent Pattern mining.

Cache-conscious mechanisms help in making optimal use of the resources for a better outcome. Together with the proposed approach, the efficiency of frequent pattern mining is expected to improve by some amount.
A. Emergence of Data mining
The emergence of this technology was a result of the support of three technologies, namely massive data collection, high performance computing and data mining algorithms. The first two factors have a very vital role in the advent of this technology, which has had a very significant effect on the information processing accomplished in today's environment. There are several factors that have contributed to bring data mining to the forefront. Some of the basic factors are the untapped value in databases, the concept of data warehousing and the drop in the cost/performance ratio.
B. Data mining an Overview
Data mining is the exploration and analysis of large datasets in order to discover meaningful patterns and rules. Data mining is a component of a wider process called Knowledge Discovery from Databases (KDD). Before a data set is mined, it first has to be cleaned; this removes errors, ensures consistency and takes missing values into account. Data mining may use quite simple or highly sophisticated data analysis. Data mining is a component of data warehousing, but it can be a stand-alone process for data analysis, even in the absence of a data warehouse. Data mining is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to analyze important information in a data warehouse. A data warehouse is a subject-oriented, integrated, time-varying, non-volatile collection of data in support of the management's decision-making process. Data warehousing is the process of integrating enterprise-wide corporate data into a single repository, from which end-users can easily run queries, make reports and perform analysis. Data warehousing is a new approach to enterprise-wide computing at the strategic or architectural level.

I. INTRODUCTION

The growth of computer technology in the current scenario has majorly affected business data processing and scientific computing: low storage cost in the former case, and fast and efficient computational facilities in the latter. Due to widespread computerization and affordable storage facilities, there is an enormous wealth of information embedded in the huge databases belonging to different enterprises. In the domain of scientific computing, the major problem is to infer valuable information from observed data. The key idea of data mining is to find effective ways to combine the computer's power to process data with the human eye's ability to detect patterns. The techniques of data mining are designed for, and work best with, large data sets. Today, processor speed is underutilized due to improper localization of the various parameters; if these parameters were properly localized, the performance of the system could be improved a lot. This can be done using several cache-conscious mechanisms that help in making optimal use of the resources.


C. Data mining V/S KDD
The data mining process can be coupled with a DBMS in tightly coupled mode or loosely coupled mode. Data mining techniques support automatic exploration of data: data mining attempts to source out trends or patterns in the data and infers rules from these patterns. The term data mining refers to the finding of relevant and useful information from databases. Data mining and knowledge discovery in databases is a new interdisciplinary field, merging ideas from statistics, machine learning and parallel computing. Data mining is only one of the many steps involved in KDD. The various steps involved in the KDD process include data selection, data cleaning and preprocessing, data transformation and reduction, data mining algorithm selection, and finally the post-processing and interpretation of the discovered knowledge. The KDD process tends to be highly iterative and interactive. KDD is the process of identifying a valid, potentially useful and ultimately understandable structure in data. The structures that are outcomes of the data mining process must meet certain conditions to be considered knowledge: validity, understandability, utility, novelty and interestingness.

II. PREVIOUS APRIORI APPROACH

To find Lk, a set of candidate k-itemsets is generated by joining Lk-1 with itself. This set of candidates is denoted Ck.
2) Prune Step: Ck is a superset of Lk; that is, its members may or may not be frequent, but all of the frequent k-itemsets are included in Ck. A scan of the database to determine the count of each candidate in Ck would result in the determination of Lk. Ck, however, can be huge, and so this could involve heavy computation. To reduce the size of Ck, the Apriori property is used as follows: any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset. Hence, if a (k-1)-subset of a candidate k-itemset is not in Lk-1, then the candidate cannot be frequent either and so can be removed from Ck.

III. PROPOSED WORK

Apriori is an influential algorithm for mining frequent itemsets for Boolean association rules. The name of the algorithm is based on the fact that the algorithm uses prior knowledge of frequent itemset properties. Apriori employs an iterative approach known as a level-wise search, where k-itemsets are used to explore (k+1)-itemsets. First, the set of frequent 1-itemsets is found; this set is denoted L1. L1 is used to find L2, the set of frequent 2-itemsets, which is used to find L3, and so on, until no more frequent k-itemsets can be found. Finding each Lk requires one full scan of the database. To improve the efficiency of the level-wise generation of frequent itemsets, an important property called the Apriori property is used to reduce the search space: all nonempty subsets of a frequent itemset must also be frequent. This property is based on the following observation. By definition, if an itemset I does not satisfy the minimum support threshold min_sup, then I is not frequent, that is, P(I) < min_sup. If an item A is added to the itemset I, then the resulting itemset I ∪ A cannot occur more frequently than I. Therefore, I ∪ A is not frequent either, that is, P(I ∪ A) < min_sup. There are two steps for understanding how Lk-1 is used to find Lk:
1) The Join Step

A. RSTDB Algorithm
In this paper we propose an algorithm called RSTDB, which reduces the number of scans involved in Apriori. For this we use a heuristic function that estimates, before the iterations actually start, the overall number of levels to be scanned; this reduces the number of passes required for frequent set estimation.
1) RSTDB Algorithm:
STEP 1: Calculate the size of each transaction in the transaction database.
STEP 2: Evaluate the maximum transaction size.
STEP 3: Check for the largest transaction size whose frequency (support) exceeds the given threshold. Set this transaction size as the maximum value up to which scanning and candidate generation have to proceed; this is the maximum value of k up to which iteration is done: HeuFn[no. of items, TDB size] = Max_k. Steps 4 and 5 iterate until k = Max_k, with k taking values between 1 and Max_k.
STEP 4: Candidate generation (gen_cand_itemsets) with the given Lk-1, as follows:
  Ck = ∅
  for all itemsets I1 ∈ Lk-1 do
    for all itemsets I2 ∈ Lk-1 do
      if I1[1] = I2[1] ∧ I1[2] = I2[2] ∧ ... ∧ I1[k-1] < I2[k-1] then
        c = (I1[1], I1[2], ..., I1[k-1], I2[k-1])
        Ck = Ck ∪ {c}
STEP 5: Candidate set pruning, Prune(Ck):
  for all c ∈ Ck
    for all (k-1)-subsets d of c do


      if d ∉ Lk-1 then Ck = Ck \ {c}
Here, after Step 5, each time the value of Lk is updated with the itemsets in Ck having support equal to or greater than the minimum threshold. k is the number of passes required, I is an itemset present in the TDB, Lk is the set of frequent k-itemsets, and Ck, a superset of Lk, is the candidate set.
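An illustrative sketch (not the authors' code) of the level-wise loop described above is given below. It uses one reading of Steps 1-3 for the heuristic bound Max_k (the largest transaction size whose frequency meets the support threshold), the join of Step 4 and the subset-based pruning of Step 5; support counting is done by plain scans for clarity.

```python
# Level-wise frequent itemset mining with a heuristic bound on the number of levels.
from itertools import combinations

def heuristic_max_k(transactions, min_support):
    """Largest transaction size whose frequency meets the support threshold (Steps 1-3)."""
    sizes = [len(t) for t in transactions]
    return max((s for s in set(sizes) if sizes.count(s) >= min_support), default=0)

def prune(cands, prev_frequent, k):
    """Prune step: drop candidates with an infrequent (k-1)-subset."""
    return {c for c in cands
            if all(s in prev_frequent for s in combinations(c, k - 1))}

def gen_candidates(prev_frequent, k):
    """Join step: merge (k-1)-itemsets sharing their first k-2 items, then prune."""
    cands = set()
    for a in prev_frequent:
        for b in prev_frequent:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                cands.add(a + (b[-1],))
    return prune(cands, prev_frequent, k)

def mine(transactions, min_support):
    max_k = heuristic_max_k(transactions, min_support)
    items = sorted({i for t in transactions for i in t})
    level = {(i,) for i in items
             if sum(i in t for t in transactions) >= min_support}
    frequent = set(level)
    for k in range(2, max_k + 1):
        counted = {c for c in gen_candidates(level, k)
                   if sum(set(c) <= set(t) for t in transactions) >= min_support}
        if not counted:
            break
        frequent |= counted
        level = counted
    return frequent

# Example on the ten single-item transactions of Table 3.1 (min support = 3):
tdb = [{'A'}, {'B'}, {'C'}, {'D'}, {'E'}, {'B'}, {'B'}, {'A'}, {'B'}, {'C'}]
print(sorted(mine(tdb, 3)))   # -> [('B',)]  (only B appears in at least 3 transactions)
```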

Figure 3.2: Time complexity graph for TDB1 (time vs. support threshold %, Apriori vs. RSTDB)

B. RSTDB V/S Apriori
For evaluating the performance difference between the Apriori algorithm and RSTDB, we consider the example transaction database below, which contains the five items A, B, C, D, E and 10 transactions. We show mathematically the difference between the two approaches.

Transaction ID   Items
100              A
101              B
102              C
103              D
104              E
105              B
106              B
107              A
108              B
109              C

It is assumed that each scan requires one unit of time, and each counter is assumed to occupy two bytes of space.
CACHE CONSCIOUS OPTIMIZATIONS

When we apply the two algorithms to the above transaction database, they show a slight difference in terms of memory utilization and execution time for different threshold values. For this TDB, the two approaches RSTDB and Apriori give the following results in terms of space and execution time (Figs. 3.1 and 3.2).

There are several mechanisms for improving the efficiency of frequent pattern mining using cache-conscious optimizations. Some of them are prefix trees, the FP-Growth algorithm, spatial-locality related enhancements, prefetching and temporal-locality related enhancements.
A. Prefix Trees
A prefix tree is a data structure that provides a compact representation of a transaction data set. Each node of the tree stores an item label and a count, with the count representing the number of transactions which contain all the items on the path from the root node to the current node.
B. FP-Growth Algorithm
FP-Growth is a prefix-tree based approach to frequent pattern mining that helps in optimizing the space-time gap for optimal processor utilization. The FP-Growth approach is very efficient, as the content required for processing is present in the cache, which improves performance. This is possible due to the adopted data structuring.
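A minimal prefix-tree sketch of the compact representation described above (each node holds an item label and a count) is given below; this is a generic trie illustration, not an FP-Growth implementation, and the example transactions are made up.

```python
# Prefix tree (trie) of transactions: shared prefixes are stored once, with per-node counts.
class PrefixNode:
    def __init__(self, item=None):
        self.item = item          # item label stored at this node
        self.count = 0            # transactions containing the path root -> this node
        self.children = {}        # item -> PrefixNode

    def insert(self, transaction):
        """Insert one transaction (items in a fixed, e.g. frequency, order)."""
        node = self
        for item in transaction:
            node = node.children.setdefault(item, PrefixNode(item))
            node.count += 1

root = PrefixNode()
for t in (['B', 'A'], ['B', 'C'], ['B', 'A', 'D']):
    root.insert(t)
print(root.children['B'].count, root.children['B'].children['A'].count)   # -> 3 2
```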

Figure 3.1: Space complexity graph for TDB1 (space vs. support threshold %, Apriori vs. RSTDB)

C. Spatial Locality Related Enhancements
A cache-conscious prefix tree is a data structure designed to significantly improve cache performance through spatial locality. It is a modified prefix tree which accommodates fast bottom-up traversals and improves cache line usage.
D. Prefetching
Cache line prefetching is a popular technique for reducing the effect of cache line misses, particularly when applications do not perform a significant amount of computation per cache line.
E. Temporal Locality Related Enhancements
Temporal locality states that recently accessed memory locations are likely to be accessed again in the near future. Cache designers assume that programs will exhibit good temporal locality, and store recently accessed data in the cache accordingly. Therefore, it is imperative to find any existing temporal locality in the algorithm and restructure the computation to exploit it.

V. CONCLUSION

Apriori is the basic algorithm used for mining frequent patterns from a transaction database. To reduce the number of scans required for extracting the frequent sets, the proposed algorithm uses both the upward and downward closure properties. Mining frequent patterns from large databases plays an essential role in many data mining tasks and has broad applications. Most of the previously proposed methods adopt Apriori-like candidate-generation-and-test approaches. However, those methods may encounter serious challenges when mining datasets with prolific patterns and/or long patterns. RSTDB, although not dramatically more efficient, increases the efficiency of frequent pattern mining by some amount; it is mainly concerned with reducing the number of database scans involved in the mining process. In this work, several optimization techniques that improve data mining system performance through proper cache utilization are also mentioned.
ACKNOWLEDGMENT
Last but not least, we present our sincere thanks to Dr. Vinay Kumar Singh, Department of MCA, GGU Bilaspur, and also to Dr. L. P. Pateria, Professor and Head, Department of Management Studies, Guru Ghasidas Central University Bilaspur (C.G.), India, for their help in our research work.


An Evolutionary Programming Algorithm for the RWA Problem in Survivable Optical Networks
Urmila Bhanja, Sudipta Mahapatra, Rajarshi Roy
Department of Electronics & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Kharagpur, India e-mail: urmilabhanja@gmail.com, sudipta@ece.iitkgp.ernet.in, royr@ece.iitkgp.ernet.in

Abstract- We propose and use an evolutionary programming algorithm for solving the survivable dynamic routing and wavelength assignment (DRWA) problem in optical wavelength-division multiplexing (WDM) networks under the wavelength continuity constraint. We assume an ideal physical channel and therefore neglect the blocking of connection requests due to physical impairments. The main contribution of our work is to develop an evolutionary programming algorithm for the survivable DRWA problem (protection scheme) that provides a dedicated backup path, along with the primary path, for each connection request with a minimum number of resources (minimum number of wavelengths). Our algorithm is based on an initial population of a single individual, and takes into account path length (number of hops), path cost, number of free wavelengths, and set-up time of each lightpath to solve the DRWA problem. The proposed algorithm is shown to give a lower blocking probability and a lower mean execution time than the existing bio-inspired algorithms available in the literature. Two wavelength assignment techniques, First-fit and Round Robin, have been investigated here. The ability to guarantee a low blocking probability without any wavelength converters, as well as a small delay, makes the improved algorithm attractive for current optical switching networks.

Keywords-Dedicated path protection, Evolutionary programming algorithm, Mean blocking probability, Set up time, Mutation, Mean execution time.

There are many papers in the literature that solve the survivable DRWA problem. Heuristics-based strategies and ILP techniques have been advocated for making lightpath networks survivable to link failures. In [2], [3], and [4], the authors have used heuristics for solving the survivable DRWA problem with both dedicated path protection and shared path protection, with the objective of providing resource-efficient solutions in both types of protection schemes. Similarly, Kennington et al. [5] have proposed an ILP for the network survivability problem, but this solution works only for small-size problems. Bisbal et al. [6] have proposed the use of a genetic algorithm, known as FTGRWA, to provide protection at the optical layer for the WDM optical network. Their algorithm uses a backup multiplexing technique to use the network resources efficiently. The cost of the primary path and the cost of the backup path are different in their algorithm; the number of hops traversed by the backup path and the wavelength sharing mechanism mainly decide the cost of the backup path. They have compared their mean blocking probability with the alternate routing method for routing primary-backup lightpath pairs (AR-PIBWA) proposed by Mohan et al. [7]. Vinh Trong Le et al. [8] have proposed an algorithm based on the combination of a mobile agent technique and a genetic algorithm, in which they propose a new fitness function that aims at utilizing the network resources more efficiently when establishing a protected lightpath. Recently Kavian et al. [9] have proposed a genetic algorithm for a survivable wavelength-routed optical network, trying to minimize the number of wavelengths for both the working lightpath and the spare lightpath.

I. INTRODUCTION

The optical networks employing wavelength-division multiplexing (WDM) technology have emerged as the potential candidate for the next generation backbone networks. Therefore, the design of fault tolerant optical core networks with protection and restoration schemes is an important issue and has received much attention in recent years [1]. The routing and wavelength assignment problem for the survivable WDM optical network is NP-hard [2]. The network service, which gets disrupted after a link failure in a WDM network, can be brought back to the normal condition by utilizing either the protection scheme where the dedicated backup resources are reserved during a lightpath set up, or the restoration scheme, in which the backup lightpaths are explored dynamically after the occurrence of a link failure.

II. PROBLEM FORMULATION

The 14-node NSF network [6] is considered here to illustrate the proposed algorithm. The network can be modeled as a graph G(V, E), where V is the set of nodes, representing routers or optical cross-connect switches (OXCs), and E is the set of fiber links representing physical connectivity between the nodes. The link existing between a pair of nodes is bidirectional in nature. We set a variable I_ij^lp to 1 when the lightpath lp uses the link (i,j); otherwise, I_ij^lp is set to 0. LP is the set of all the lightpaths. S and D are the source and destination nodes respectively. Eight wavelengths are assumed to be provided per fiber. We have considered only eight wavelengths here to test the efficiency of the algorithm for a smaller number of resources; however, the algorithm can also be tested for a higher number of wavelengths. The proposed algorithm tries to maximize the fitness function or


objective function as described below, subject to the lightpath flow conservation constraint, the wavelength assignment constraints and the link-disjoint constraint. The optimization problem is stated as follows:

Maximize  f_x = W_x / ( Σ_{j=1}^{k(x)-1} C_{g_x(j), g_x(j+1)} ) + W_x / ( Σ_{(i,j)∈E} H_ij ) + W_x / T_x        (1)

subject to

Σ_{(i,j)∈E} I_ij^lp − Σ_{(j,i)∈E} I_ji^lp = 1 if i = S,  −1 if i = D,  0 if i ≠ S, i ≠ D,   ∀ lp ∈ LP        (2)

Σ_{(i,j)∈E} I_ij^lp ≤ 1 if i ≠ D,  and  Σ_{(i,j)∈E} I_ij^lp = 0 if i = D,   ∀ lp ∈ LP        (3)

I_ij^lp = Σ_{w=0}^{W-1} I_ijw^lp,   ∀ (i,j) ∈ E        (4)

I_ijw^{lp(x,y)} ≤ I_ijw^lp,   ∀ (i,j), (x,y), w        (5)

Σ_{lp(x,y)} I_ijw^{lp(x,y)} ≤ 1,   ∀ (i,j), w        (6)

Σ_{w=0}^{W-1} I_ijw^{lp(x,y)} − Σ_{w=0}^{W-1} I_ijw^{lp(y,x)} = I_ij^lp if y = j,  −I_ij^lp if y = i,  0 if y ≠ i, y ≠ j        (7)

Equation 4 implies that the wavelength used by a lightpath is unique. Equation 5 ensures that the wavelength continuity constraint is adhered to. Equation 6 ensures that two lightpaths using the same link cannot be assigned identical wavelengths. Equation 7 expresses the conservation of wavelengths at the end nodes of physical links on a lightpath. We ensure that the paths (primary and backup paths) corresponding to any request are link disjoint by imposing the constraint

I_ij^{lp1} + I_ij^{lp2} ≤ 1   for lp1, lp2 ∈ LP, lp1 ≠ lp2,  ∀ (i,j) ∈ E        (8)

Equation 8 implies that the paths (a primary and a backup path) corresponding to any request are link disjoint.

In equation 1, f_x represents the fitness value of the x-th chromosome or x-th individual, which represents a possible path between the source and the destination. The nodes in the path correspond to the genes of the chromosome. The denominator of the first term represents the total cost of a path, the denominator of the second term the total hop count of the path, and the denominator of the third term the set-up time of the lightpath or request. The numerator of all three terms is the free wavelength factor for a request or lightpath. k(x) is the length of the x-th chromosome, g_x(j) represents the gene at the j-th locus of the x-th chromosome and g_x(j+1) the gene at the (j+1)-th locus. C_ij represents the link cost between the nodes i and j. If a wavelength-continuous path is available, the free wavelength factor W_x is one; otherwise, it is set to zero. Equations 2 and 3 respectively give the flow conservation constraint [10] and ensure the absence of loops in the lightpath. Suppose I_ijw^{lp(x,y)} is the lightpath wavelength link indicator that is one when a lightpath uses wavelength w on link (i,j) between nodes x and y, and l(x,y) is the physical link between nodes x and y. Then the wavelength constraints are as given in equations 4-7 above [11].

III. THE EP ALGORITHM APPROACH

[Fig. 1 flowchart: LP = 0, Count = 0 → request arrival → routing using the EP technique → λ-assignment available? If yes, the best route is found and LP = LP + 1; while LP < 2 the topology is changed (the links of the found route are cut) and the routing step is repeated for the backup path. If a lightpath cannot be established the request is blocked; otherwise the next request is processed.]

Fig. 1 Flow chart of Survivable DRWA algorithm using EP technique
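A minimal, illustrative sketch of the dedicated-protection flow summarized in Fig. 1 is given below. It is not the authors' code: the EP search is heavily simplified (random loop-free paths, a tail-regrowing mutation, and hop count as a stand-in for the fitness of equation (1)), and the adjacency-set graph and helper names are assumptions.

```python
# Primary + link-disjoint backup path search with a toy EP routing step.
import copy
import random

def random_path(graph, src, dst, rng):
    """Grow a random loop-free path from src; return it if dst is reached, else None."""
    path, node = [src], src
    while node != dst:
        choices = [n for n in graph[node] if n not in path]
        if not choices:
            return None
        node = rng.choice(choices)
        path.append(node)
    return path

def ep_route(graph, src, dst, offspring=15, generations=30, rng=random.Random(1)):
    """Toy EP loop: one parent, `offspring` mutants per generation, keep the fittest."""
    parent = None
    for _ in range(1000):                                 # bounded retries for the initial path
        parent = random_path(graph, src, dst, rng)
        if parent is not None:
            break
    if parent is None:
        return None
    for _ in range(generations):
        candidates = [parent]
        for _ in range(offspring):
            cut = rng.randrange(len(parent) - 1)          # mutate: regrow the tail of the path
            tail = random_path(graph, parent[cut], dst, rng)
            if tail is not None:
                cand = parent[:cut] + tail
                if len(set(cand)) == len(cand):           # keep loop-free candidates only
                    candidates.append(cand)
        parent = min(candidates, key=len)                 # stand-in fitness: fewer hops
    return parent

def remove_path_links(graph, path):
    """Copy the graph with the links of `path` cut, so the backup search is link disjoint."""
    g = copy.deepcopy(graph)
    for u, v in zip(path, path[1:]):
        g[u].discard(v)
        g[v].discard(u)
    return g

def serve_request(graph, src, dst):
    """Return (primary, backup) link-disjoint paths, or None to block the request."""
    paths, g = [], graph
    for _ in range(2):                                    # LP < 2: primary, then backup
        p = ep_route(g, src, dst)
        if p is None:
            return None
        paths.append(p)
        g = remove_path_links(g, p)
    return tuple(paths)

# Example on a toy 5-node ring topology (adjacency sets):
net = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(serve_request(net, 0, 2))
```

A real implementation would replace the hop-count selection with equation (1) and add the λ-assignment check of Fig. 1 before accepting each path.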


In this paper we represent the lightpaths between a source and destination pair as a set of chromosomes. Each chromosome is a sequence of nodes, randomly generated, satisfying the physical connectivity of the particular network topology. We use variable-length chromosomes. A chromosome is a lightpath encoded from the source node S to the destination node D; our chromosome encoding process is similar to that of reference [10]. The EP technique we use consists of the following four steps: (i) random initialization to generate a single individual as the initial population, (ii) mutation of the parent chromosome to generate 15 offspring (the mutation process is also similar to reference [10]), (iii) evaluation of the fitness of all the chromosomes, and (iv) selection of the chromosome having the best fitness value.

In the proposed algorithm we try to generate a pair of disjoint paths, one primary path and one secondary or backup path, for a single request. In the process of disjoint path generation, first the primary path is found using the basic steps of the evolutionary programming algorithm above, and then its links are all temporarily removed. After the removal of the links, the secondary backup path is found using the same basic evolutionary programming steps. We consider a dedicated backup path protection scheme without any wavelength sharing with other backup paths. This approach provides a dedicated backup path along with the primary path for each request; in other words, each dynamic request must be assigned two distinct lightpaths that do not share any common links, so that the network is able to recover from any link failure. In this scheme a request is blocked if the secondary lightpath does not find a wavelength-continuous channel even if the primary lightpath finds one; that is, two disjoint wavelength-continuous lightpaths must be found for every request. The flowchart of the proposed algorithm is given in Fig. 1.

Two alternative wavelength assignment schemes, first-fit and round robin [12], have been used to implement the wavelength assignment block (λ-assignment), and their performance has been reported; both rules are sketched in the code below.

First-fit technique: The first-fit approach always tries to use the first free wavelength, thereby using the later wavelengths rarely. The probability of higher-indexed wavelengths being available is higher in this technique, and the wavelengths are not utilized uniformly [1].

Round robin technique: In this strategy, the wavelengths are numbered or indexed. The assignment starts from the lowest numbered wavelength for the first request; for the next request, the next consecutive higher wavelength is assigned, and this process is continued in a round robin manner. In our implementation, if the wavelength continuity constraint is not satisfied, the assignment approach tries to find a free wavelength in a round robin manner just once; if a wavelength-continuous path is not found, the request is blocked. The next request starts from its scheduled assigned wavelength. In this approach the wavelengths are utilized uniformly [12].
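The two λ-assignment rules described above can be sketched as follows. This is one reading of the round-robin rule (a pointer that advances after every assignment), with an assumed per-link usage table `used[(u, v)][w]`; the names are illustrative only.

```python
# First-fit vs. round-robin wavelength assignment over all links of a path.
def _free_on_path(used, path, w):
    """True if wavelength w is unused on every link of the path."""
    return all(not used[(u, v)][w] for u, v in zip(path, path[1:]))

def first_fit(used, path, W):
    """Scan wavelengths 0..W-1 and return the first one free on the whole path."""
    for w in range(W):
        if _free_on_path(used, path, w):
            return w
    return None

class RoundRobin:
    def __init__(self, W):
        self.W, self.next_w = W, 0
    def assign(self, used, path):
        for k in range(self.W):                     # try W wavelengths starting at the pointer
            w = (self.next_w + k) % self.W
            if _free_on_path(used, path, w):
                self.next_w = (w + 1) % self.W      # next request starts after this one
                return w
        return None

# Toy usage: a 3-node path over 2 links, W = 4 wavelengths, wavelength 0 busy on one link.
W = 4
used = {(0, 1): [True, False, False, False], (1, 2): [False, False, False, False]}
print(first_fit(used, [0, 1, 2], W))          # -> 1
rr = RoundRobin(W)
print(rr.assign(used, [0, 1, 2]))             # -> 1 (pointer now at 2)
```

In the flow of Fig. 1, one of these functions would be called for each of the two lightpaths, and the request blocked when either returns None.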

Since the algorithm is a stochastic one involving many probabilistic processes, to correctly assess the results we have executed our program 50 times. In our calculation, the mean blocking probability is the average of the mean blocking probability over these 50 runs, and the mean execution time is the average of the mean execution time over these 50 runs. Fig. 2 and Fig. 3 respectively depict the performance of the proposed algorithm in terms of the mean blocking probability and the mean execution time for a fixed network load of 60 Erlang. It is observed from Fig. 2 that the round robin technique offers a lower mean blocking probability than the first-fit technique. Our EP routing approach with the round robin technique offers a mean blocking probability of around 0.000442, and with the first-fit approach around 0.004114, which are much lower than the mean blocking probabilities of 0.066 [6] and 0.038 [8] quoted by the authors of paper [8] for a network load of 56 Erlang, though in a different simulation environment. The mean execution times for the survivable DRWA problem are estimated and plotted in Fig. 3 for the first-fit and round-robin wavelength assignment techniques; the first-fit approach has a lower mean execution time than the round robin technique. For a network load of 60 Erlang our approach gives a mean execution time of 0.294551 ms for the first-fit technique and 0.301113 ms for the round robin approach, which are much smaller than the mean execution times of 20.57 ms [6] and 12.15 ms [8] quoted by the authors in paper [8]. Fig. 4 depicts the variation of the mean blocking probability with the applied load, comparing the round-robin wavelength assignment technique with the first-fit technique; again the round robin technique offers a lower mean blocking probability. The mean execution time for the survivable DRWA problem for different network loads is shown in Fig. 5. From these figures we conclude that the proposed algorithm with the round robin wavelength assignment technique offers a much lower mean blocking probability than the first-fit technique. Though round robin incurs a higher mean execution time than first-fit, the simulation experiments show that the difference is not large, so the round robin approach can be used for real-time applications.


IV. SIMULATION RESULTS

The proposed technique is simulated using Microsoft Visual C++ on an Intel Centrino processor (1.6 GHz clock, 512 MB RAM, 20 GB HDD). The cost values of the links are assigned following reference [6]. We have calculated both the mean blocking probability and the mean execution time of the algorithm. We have simulated our algorithm to generate approximately 10^6 to 10^7 requests. Since our algorithm is a stochastic one involving many probabilistic processes, the reported results are averaged over repeated runs.


Fig.2 Mean blocking probability with W=8 and a traffic load of 60 Erlang



The proposed algorithm gives a low blocking probability while incurring a low execution time. Also, we observed that the round robin wavelength assignment technique offers the least mean blocking probability and the first-fit wavelength assignment technique offers the least mean execution time. However, at higher generations the mean execution time of the round robin technique is approximately the same as that of the first-fit technique. As we aim to minimize both the blocking of requests and the execution time of each request, the round robin wavelength assignment technique is concluded to be the best wavelength assignment technique for the survivable DRWA problem. A still lower mean blocking probability can be achieved with the proposed algorithm by assuming a larger number of wavelengths, such as 16 or 32 wavelengths per fiber.

Mean execution time(ms)

0.05

REFERENCES
1 2 3 4 5 6 7 8

Number of generations

Fig.3 Mean execution time with W=8 and a traffic load of 60 Erlang
Fig. 4 Mean blocking probability with W=8 and various traffic loads

Fig. 5 Mean execution time with W=8 and various traffic loads

V. CONCLUSION

We have tested the proposed EP technique with two different wavelength assignment techniques. The proposed algorithm is found to give a low blocking probability while incurring a low execution time. We also observed that the round-robin wavelength assignment technique offers the lowest mean blocking probability, while the first-fit wavelength assignment technique offers the lowest mean execution time. However, at higher generations the mean execution time of the round-robin technique is approximately the same as that of the first-fit technique. As we aim to minimize both the blocking of requests and the execution time of each request, round-robin is concluded to be the best wavelength assignment technique for the survivable DRWA problem. A still lower mean blocking probability can be achieved with the proposed algorithm by assuming a larger number of wavelengths per fiber, such as 16 or 32.

REFERENCES

[1] J. Zhang and B. Mukherjee, "A review of fault management in WDM mesh networks: basic concepts and research challenges," IEEE Network, vol. 18, pp. 41-48, 2004.
[2] Y. Xin and G. N. Rouskas, "A study of path protection in large-scale optical networks," Photonic Network Communications, vol. 7, no. 3, pp. 267-278, 2004.
[3] X. Yang, L. Shen, and B. Ramamurthy, "Survivable lightpath provisioning in WDM mesh networks under shared path protection and signal quality constraints," Journal of Lightwave Technology, vol. 23, pp. 1556-1564, 2005.
[4] A. Jaekel and Y. Chen, "Distributed dynamic lightpath allocation in survivable WDM networks," IWDC 2005, LNCS 3741, pp. 189-194, 2005.
[5] J. L. Kennington, E. V. Olinicka, and G. Spiride, "Basic mathematical programming models for capacity allocation in mesh-based survivable networks," OMEGA, The International Journal of Management Science, vol. 35, pp. 1-16, 2006.
[6] D. Bisbal, I. D. Miguel, F. Gonzalez, J. Blas, J. C. Aguado, P. Fernandez, J. Duran, R. Duran, R. M. Lorenzo, E. J. Abril, and M. Lopez, "Dynamic routing and wavelength assignment in optical networks by means of genetic algorithms," Photonic Network Communications, vol. 7, no. 1, pp. 43-58, 2004.
[7] G. Mohan and C. S. Ramamurthy, "Lightpath restoration in WDM optical networks," IEEE Network, vol. 14, no. 6, pp. 24-32, Nov./Dec. 2000.
[8] V. T. Le, S. H. Ngo, X. Jiang, S. Horiguchi, and Y. Inoguchi, "A hybrid algorithm for dynamic lightpath protection in survivable WDM optical networks," Proceedings of the 8th International Symposium on Parallel Architectures, Algorithms and Networks (ISPAN'05), 2005.
[9] Y. S. Kavian, W. Ren, M. Naderi, M. S. Leeson, and E. L. Hines, "Survivable wavelength-routed optical network design using genetic algorithms," European Transactions on Telecommunications, vol. 19, pp. 247-255, 2008.
[10] C. W. Ahn and R. S. Ramakrishna, "A genetic algorithm for shortest path routing problem and the sizing of populations," IEEE Transactions on Evolutionary Computation, vol. 6, no. 6, pp. 566-579, 2002.
[11] P. P. Sahu, "A new shared protection scheme in optical network," Current Science, vol. 91, no. 9, Nov. 2006.
[12] P. Bijja, "Wavelength assignment for all optical networks for mesh topologies," M.S. thesis, Louisiana State University, 2003.



4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Parallel Particle Swarm Optimization for Non Convex Economic Dispatch Problem
C. Christopher Columbus, Sishaj P Simon
Department of Electrical and Electronics Engineering, National Institute of Technology, Trichy, Tamil Nadu, India.
Email: rushtoccc@yahoo.co.in, sishajpsimon@nitt.edu

Abstract- This paper proposes a parallel particle swarm optimization (PPSO) technique to solve the non-convex economic load dispatch problem (NCED) with non-smooth cost functions. The optimization process is implemented on a clustered environment to reduce computational time. The workstations are connected in parallel and tested with star and peer-to-peer networking topologies. The NCED problem is tested and validated on an IEEE 30-bus system.

Keywords- parallel particle swarm optimization, economic dispatch, parallel processing, networking of workstations.

I. Introduction
The power scenario around the globe is changing rapidly from a regulated to a deregulated environment, thereby introducing more competition among power utilities. Both the customers and the power industry should strive to benefit from each other's requirements. Therefore, both advanced technologies and fast, intelligent decision-making models should be adopted in the restructured, complex power system environment. The economic load dispatch problem is one of the important problems in power system planning and operation. There are many conventional and non-conventional methods available in the literature to solve the economic dispatch problem [1]. Conventional methods assume quadratic or piecewise quadratic cost curves of power generators; however, they may fail when non-linearities due to the valve-point effect are considered [2]. Non-conventional methods such as genetic algorithms (GA), particle swarm optimization (PSO), simulated annealing (SA), tabu search (TS), differential evolution (DE) and artificial immune systems (AIS) [3-5] are free from convexity assumptions and succeed in achieving near-global solutions. Faster decisions are always appreciated in a power system management environment. Therefore, this paper investigates both accuracy of solution and reduction of computational time using PSO by networking of workstations. In order to reduce the computational time, parallel implementation of PSO is carried out simultaneously by different processors. In general, parallel processing is classified as 1) distributed parallel processing and 2) concentrated parallel processing or multiprocessing [5-7]. In the first method, the different tasks that constitute the problem are solved in a parallel or distributed approach in a computer cluster environment. The second method incorporates many processors in a single computer; each processor may also have several cores (core 2 duo, dual core, etc.). Though the ELD problem using PSO can be solved either in a distributed or a multiprocessing environment [3, 8], in this paper distributed computation using different topologies is carried out by a parallel cluster with four processors. Here four processors are used for the star topology (Figure 1) and two of them for the peer-to-peer model (Figure 2).

In the star topology there is a central connection point called the hub, or sometimes just a switch. The advantage of a star topology is that during a cable failure only one computer is affected and not the entire network. The processors are connected in the client-server paradigm using a star network. The client-server model assigns asymmetric roles to two collaborating processes: one process, the server, plays the role of service provider, waiting passively for the arrival of requests, while the client issues specific requests to the server and waits for the server's response. Figure 2 represents the client-server paradigm. The client-server model provides an efficient abstraction for the delivery of network services. The operations required include those for a server process to listen for and accept requests, and for a client process to issue requests and accept responses. By assigning asymmetric roles to the two sides, event synchronization is simplified: the server process waits for requests, and the client in turn waits for responses. Each participant in the client-server arrangement has two specific roles and executes as per the requirement. As a server, a participant waits to hear an announcement from the first auctioneer (i.e. the work allotted by the server) when the session starts; the second auctioneer (the client) receives a message whenever there is an update on the current highest bid, and the session ends. The client then sends a request that announces the acknowledgement of the tasks given by the server. After this, to accept new tasks the client sends a new request to the server, and the server accepts the new request and updates the current highest bid. In this star topology one of the processors acts as the master and the others act as slaves. The topology consists of four processors and each of them comprises two workers.

Figure 1 Star topology

Figure 2 Client-server paradigm

Peer-to-peer is a communication model in which each node has the same capabilities and any of the nodes can initiate a communication session. Peer-to-peer communication can be implemented with both server and client capabilities, as shown in Figure 3. Here the workers perform equal roles, with equivalent capabilities and responsibilities: a worker may issue a request to another worker and receive a response, as shown in Figure 4. The peer-to-peer model is implemented exclusively as a master-slave model.

Figure 3 Peer-to-peer model

Figure 4 Communication of workers in the peer-to-peer model
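The master-worker (star) arrangement described above can be pictured with a small sketch. The snippet below is only an illustrative analogue in Python: it divides a list of candidate solutions among worker processes, lets each worker evaluate its share and has the master collect the results, mirroring how the master workstation allots tasks to the workers. The function names and the toy cost function are hypothetical; the paper's actual implementation runs in MATLAB on networked workstations.

```python
# Illustrative sketch only: a master process farms out fitness evaluations to
# worker processes and gathers the results, analogous to the star topology
# where the master workstation allots tasks to the worker machines.
# The cost function below is a placeholder, not the paper's dispatch model.
from multiprocessing import Pool
import random

def evaluate(candidate):
    # Placeholder cost: sum of squares of the decision variables.
    return sum(x * x for x in candidate)

def master(candidates, num_workers=4):
    # The master splits the work among num_workers processes (the "slaves")
    # and collects one fitness value per candidate.
    with Pool(processes=num_workers) as pool:
        return pool.map(evaluate, candidates)

if __name__ == "__main__":
    swarm = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(80)]
    fitness = master(swarm)
    print(min(fitness))
```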

II. Non convex Economic dispatch (NCED)


The economic load dispatch (ELD) allocates the power demand among the available generators in the most economical manner while satisfying the physical and operational constraints. The cost of power generation is very high and economic dispatch helps in saving a significant amount of revenue. Conventional methods like lambda iteration, base point and participation factors, gradient methods, etc. rely heavily on the convexity assumption of generator cost curves and hence approximate these curves using quadratic or piecewise quadratic monotonically increasing cost functions. However, practical generators have a variety of non-linearities and discontinuities in their characteristics due to prohibited operating zones, ramp-rate limits and valve-point loading effects. Heat-run tests are carried out for constructing the generator cost functions. Normally, large turbine generators have a number of fuel admission valves which are opened one by one when the unit is called upon to increase production. When a valve is opened, the throttling losses increase rapidly, as a result of which the incremental heat rate rises suddenly. The valve-point effects introduce ripples in the heat-rate curves and make the objective function discontinuous, non-convex and multi-modal. If valve-point effects are neglected, the generator cost curve can be approximated by a quadratic function, but for accurate modeling the valve-point loading effect needs to be included. One way of including the valve-point effect is to model the generator cost curve by a piecewise quadratic function; another way is to add a rectified sinusoidal term to the cost function. In this paper, the latter model has been used for evaluating the fitness function [2]. The fuel input-power output cost function of the jth unit is given as

F_j(P_j) = a_j P_j^2 + b_j P_j + c_j + | e_j sin( f_j (P_j,min - P_j) ) |    (1)

where a_j, b_j, c_j are the fuel cost coefficients of the jth unit, and e_j and f_j are the fuel cost coefficients of the jth unit with valve-point effects. The NCED problem is to determine the generated powers P_j of the units for a total load demand P_D so that the total fuel cost F_T of the n generating units is minimized subject to the power balance constraint and the unit upper and lower operating limits. The objective function is given as follows:

Minimize F_T = Σ_{j=1}^{n} F_j(P_j)    (2)

where F_T is the total fuel cost, n is the number of online generating units and F_j(P_j) is the operating fuel cost of generating unit j. The minimization of the NCED problem is subjected to the following constraints.

1. Generator constraint:

P_j,min ≤ P_j ≤ P_j,max    (3)

2. Power balance constraint:

Σ_{j=1}^{n} P_j = D + P_L    (4)

where the transmission loss is expressed through the loss coefficients as

P_L = Σ_{j=1}^{n} Σ_{i=1}^{n} P_j B_ji P_i + Σ_{j=1}^{n} B_oj P_j + B_oo    (5)

where
D - total load demand,
P_L - total transmission line loss,
P_j,min - minimum power output of unit j,
P_j,max - maximum power output of unit j,
B_ji - element of the loss coefficient square matrix,
B_oj - element of the loss coefficient vector,
B_oo - loss coefficient constant.

III. Particle Swarm Optimization

Particle swarm optimization (PSO) is an optimization technique inspired by bird flocking, developed by Eberhart and Kennedy in 1995 [10]. It is a population-based algorithm where each individual (particle) in the population is a potential solution that flies in the D-dimensional problem space with a velocity which is dynamically adjusted according to its own flying experience and that of its colleagues.

A) Standard PSO algorithm

Suppose that the search space is D-dimensional and m particles form the colony. The ith particle is represented by a D-dimensional vector X_i = (x_i1, x_i2, ..., x_iD), i = 1, 2, ..., m, which gives the position of the ith particle in the search space. The position of each particle is a potential result; the fitness of a particle is calculated by putting its position value into a designated objective function, and the higher the fitness, the better the corresponding X_i. The ith particle's flying velocity is also a D-dimensional vector, denoted as V_i = (v_i1, v_i2, ..., v_iD). Denote the best position found so far by the ith particle as P_i = (p_i1, p_i2, ..., p_iD) and the best position found by the colony as P_g = (p_g1, p_g2, ..., p_gD), respectively. The PSO algorithm can be performed by the following equations (6) and (7):

V_id(k+1) = V_id(k) + c_1 r_1 (P_id(k) - x_id(k)) + c_2 r_2 (P_gd(k) - x_id(k))    (6)

x_id(k+1) = x_id(k) + V_id(k+1)    (7)

where k represents the iteration number and c_1, c_2 are learning factors, usually c_1 = c_2 = 2; r_1, r_2 are random numbers in (0, 1). The termination criterion for the iterations is determined according to whether the maximum generation or a designated value of the fitness of P_g is reached.
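To make the cost model and the velocity and position updates of equations (1), (6) and (7) concrete, here is a minimal sketch in Python. The fuel-cost coefficients used below are made-up placeholder values, not the IEEE 30-bus data, and the inertia weight w multiplying the previous velocity is an assumption added for illustration (the paper introduces w only later, through the weighting function of equation (11)).

```python
# Minimal sketch of the valve-point cost (1) and the PSO updates (6)-(7).
# The coefficients a, b, c, e, f and the limits are placeholder values only.
import math
import random

def fuel_cost(p, a, b, c, e, f, p_min):
    # Eq. (1): quadratic cost plus the rectified-sinusoid valve-point term.
    return a * p * p + b * p + c + abs(e * math.sin(f * (p_min - p)))

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    # Eqs. (6)-(7) applied component-wise to one particle.
    # The inertia weight w on the previous velocity is an illustrative assumption.
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v

# Example: cost of a single 100 MW unit with placeholder coefficients.
print(fuel_cost(100.0, a=0.002, b=10.0, c=200.0, e=150.0, f=0.063, p_min=50.0))
```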


B) Parallel PSO algorithm

The Parallel PSO (PPSO) algorithm is implemented as per the flowchart in Figure 5. The initial population is generated as in equation (8). Here the standard PSO algorithm for NCED is parallelized in such a way as to reduce the computational time. PPSO works through the particles to maximize fitness; hence the fitness function is evaluated as the reciprocal of the function (10). The initial particles evolve using the standard PSO. The implementation consists of the following steps.

Step 1: Initialization of the swarm. For a population size m, the particles are randomly generated and normalized between the maximum and the minimum operating limits of the generators. The generated particles are divided equally by the master among the workers. The particles are initialized as per equation (8); if there are N units, the ith particle is represented as

P_i = (P_i1^n, P_i2^n, ..., P_iN^n)    (8)

The jth element (i.e., generator) dimension of the ith particle is normalized to P_ij^n as given below to satisfy the constraint given by (3); here r ∈ [0, 1]:

P_ij^n = P_ij,min + r (P_ij,max - P_ij,min)    (9)

Step 2: Defining the evaluation function. The merit of each individual particle in the swarm is found using a fitness function called the evaluation function. The evaluation function should be such that the cost is minimized while the constraints are satisfied. The popular penalty function method [10] employs functions composed of squared or absolute violations to reduce the fitness of a particle in proportion to the magnitude of the violation. Large values of the penalty parameters ill-condition the penalty function, while very small values do not allow the violations to contribute effectively to penalizing a particle. Therefore, the penalty parameters are chosen carefully to distinguish between feasible and infeasible solutions: an infeasible solution is awarded a fitness worse than the weakest feasible string [9]. Since two infeasible strings are not treated equally, the string further away from the feasibility boundary is more heavily penalized. Thus, a constrained optimization problem is converted to an unconstrained optimization problem. The evaluation function f(P_j) is defined to minimize the non-smooth cost function given by equation (2) for a given load demand P_D while satisfying the constraints given by equations (3) and (4):

f(P_j) = Σ_{j=1}^{N} F_j(P_j) + λ (Σ_{j=1}^{N} P_j - P_D - P_L)^2    (10)

where λ is the penalty parameter for not satisfying the load demand.

Step 3: Initialization of pbest and gbest in the workers. The fitness values obtained above for the initial particles of the swarm are set as the initial pbest values of the particles. The best pbest values are calculated by each worker and each worker's best pbest value is sent to the master.

Step 4: Selection of gbest and updating of the velocities and positions of the particles. The best value among all the pbest values of the workers is identified as gbest by the master. The velocity used to modify the position of each individual in the next stage is obtained from equation (6). The weighting function is defined as follows:

w = w_max - ((w_max - w_min) / iter_max) · iter    (11)

where
iter - current iteration number,
w_max, w_min - initial and final weights,
iter_max - maximum iteration number.

To control excessive roaming of the particles, the velocity is restricted between -V_max and +V_max. The maximum velocity limit for the jth generating unit is computed as follows:

V_j,max = (P_j,max - P_j,min) / R    (12)

The particle position vector is updated using equation (7), and the values of the evaluation function are calculated for the updated positions of the particles.

Step 5: Calculation of new pbest and gbest. If the new value is better than the previous pbest, the new value is set as the new pbest. Similarly, the value of gbest is also updated if the best pbest is better than the stored value of gbest.

Step 6: Selection of the global gbest. The master receives the gbest from its workers and compares the new values with the previous values. Among these the global gbest is selected.

Step 7: Stopping criteria. The iterations are stopped based on the tolerance limit or the maximum number of iterations. In this paper the maximum number of iterations is chosen as the stopping criterion, after which the positions of gbest are stored as the optimal solution.

Figure 5: Flow chart of Parallel PSO
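A short sketch of Steps 2 and 4 may help: it evaluates the penalty-based function (10), applies the linearly decreasing weight (11) and clamps velocities per (12). The B-coefficient loss computation is omitted (P_L is passed in), the generator data are placeholders, and the constant R in (12) is treated as a user-chosen divisor, which is an assumption.

```python
# Sketch of the penalised objective (10), the weight schedule (11) and the
# velocity clamp (12). Generator data and R are illustrative placeholders.

def penalised_objective(p, cost_fns, pd, pl, lam=1000.0):
    # Eq. (10): total fuel cost plus a squared power-balance violation.
    total_cost = sum(fc(pj) for fc, pj in zip(cost_fns, p))
    violation = sum(p) - pd - pl
    return total_cost + lam * violation * violation

def inertia_weight(it, it_max, w_max=0.9, w_min=0.4):
    # Eq. (11): weight decreases linearly from w_max to w_min.
    return w_max - (w_max - w_min) * it / it_max

def clamp_velocity(v, p_min, p_max, r_div=10.0):
    # Eq. (12): v_max = (P_max - P_min) / R, then restrict v to [-v_max, +v_max].
    v_max = (p_max - p_min) / r_div
    return max(-v_max, min(v_max, v))

# Example: inertia weight at iteration 50 of 100, with w in [0.4, 0.9].
print(inertia_weight(50, 100))
```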

IV. Computational efficiency

The normal PSO and the parallel PSO algorithms are implemented in the MATLAB environment as per the software and hardware specifications given in Table 1. Both algorithms are validated on the IEEE 30-bus test system. Here the star topology consists of four processors A, B, C and D, each having two workers, namely AW1, AW2, BW1, BW2, CW1, CW2, DW1 and DW2. Of these four processors, two are used for the peer-to-peer topology (A and B, each allotted two workers, namely AW1, AW2, BW1 and BW2). Among these workers, AW1 acts as the master for both the star and the peer-to-peer topologies. The master allots different tasks to each worker and gets their responses. The computational times are noted and tabulated in Table 2. The PSO parameters are set by a trial and error approach: total number of particles m = 80, number of iterations 100, c1 = c2 = 0.15, w varied from 0.9 to 0.4, and maximum velocity 2. The best computational times achieved by the parallel PSO algorithm are given in Table 2. In the star topology, the execution time taken by PPSO is 1.92 sec, whereas the standard PSO algorithm took 2.62 sec. In the peer-to-peer topology, the execution time taken by PPSO is 1.58 sec, where the standard PSO took 2.62 sec. The reason the time reduction is not more significant is that only 20 to 30% of the code is executed in parallel mode, because only independent operations can be executed in parallel. Nevertheless, a 30 to 40% reduction in time is achieved in both the star and the peer-to-peer network topologies.

Table 1 Hardware and software specification

CPU               Intel Pentium 4, 3.2 GHz
RAM               1 GB
HDD               40 GB
Operating system  Windows XP Professional, Version 2002, Service Pack 3
Software          MATLAB R2007b

Table 2 Computational time (star / peer-to-peer topology)

Topology       Processor       Worker   Execution time (sec)
Star           A (AW1 master)  AW1      1.92
                               AW2      1.30
               B               BW1      1.43
                               BW2      1.27
               C               CW1      1.65
                               CW2      1.25
               D               DW1      1.27
                               DW2      1.60
Peer-to-peer   AW1 (master)    AW1      1.58
                               AW2      1.01
               Slave           BW1      1.22
                               BW2      0.89

Figure 6 shows the total time taken for executing the parallel codes in the workers of the peer-to-peer topology, and Figure 7 shows the time taken by the master for execution and waiting in the peer-to-peer topology using parallel PSO.

Figure 6: Time execution for the parallel codes

Figure 7: Communication time between workers in peer-to-peer topology

Table 3 shows the best results of economic dispatch of the IEEE 30-bus, 6-unit system with losses calculated using the Bmn coefficients [3]. An average cost of 820 $/hr is obtained out of 15 trials.

Table 3 Minimum power dispatch of IEEE 30-bus system

Generating unit   Power (MW)
P1                167.52
P2                30.30
P3                16.40
P4                11.71
P5                24.00
P6                46.60
PL (loss)         13.13
Total cost        819.2 $/hr

V. Conclusion

The proposed PPSO algorithm for NCED helps in the reduction of computation time, which is achieved by executing the workstations in parallel mode. Both star and peer-to-peer topologies are considered. This approach can be used where faster decisions are required, such as bidding strategy in a deregulated power system environment. The proposed approach can also be extended to large power system networks where a larger number of workstations is involved.

VI. References

1. Allen J. Wood and Bruce F. Wollenberg, Power Generation Operation and Control, Second Edition, A Wiley-Interscience publication, John Wiley & Sons, Inc.
2. Krishna Teerth Chaturvedi, Manjaree Pandit, and Laxmi Srivastava, "Particle swarm optimization with crazy particles for non-convex economic dispatch," Applied Soft Computing, vol. 9, pp. 962-969, 2009.
3. S. Khamsawang and S. Jiriwibhakorn, "Solving the economic dispatch problem using novel particle swarm optimization," International Journal of Electrical, Computer, and Systems Engineering, 3:1, 2009.
4. Jarurote Tippayachai, Weerakorn Ongsakul, and Issarachai Ngamroo, "Parallel micro genetic algorithm for constrained economic dispatch," IEEE Transactions on Power Systems, vol. 17, no. 3, August 2002.
5. N. Misra and Y. Baghzouz, "Implementation of the unit commitment problem on supercomputers," IEEE Transactions on Power Systems, vol. 9, no. 1, February 1994.
6. Kai Hwang, Advanced Computer Architecture: Parallelism, Scalability, Programmability, Tata McGraw-Hill edition.
7. G. Gutierrez-Alcaraz, F. Martinez-Cardenas, and Rodrigo Trasvina, "Lagrangian relaxation unit commitment by parallel processing," Electronics, Robotics and Automotive Mechanics Conference, 2008.
8. Michael Gerndt, "Parallel programming models, tools and performance analysis," in Modern Methods and Algorithms of Quantum Chemistry, Proceedings, Second Edition, J. Grotendorst (Ed.), John von Neumann Institute for Computing, Julich, NIC Series, Vol. 3, ISBN 3-00-005834-6, pp. 27-45, 2000.
9. Nambo Jin and Yahya Rahmat-Samii, "Parallel particle swarm optimization and finite-difference time-domain algorithms for multiband and wide-band patch antenna designs," IEEE Transactions on Antennas and Propagation, vol. 33, no. 11, November 2005.
10. J. Kennedy and R. C. Eberhart, "Particle swarm optimization," IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Distributed Coordinator Model and License Key Management for Software Piracy Prevention
Dr. S.A.M Rizvi
Prof & Head Dept of CS Jamia Millia Islamia New Delhi, India samsam_rizvi@yahoo.com

Vineet Kumar Sharma


Associate Professor, Dept of CSE. Krishna Institute of Engg. & Tech. Ghaziabad, India vineet_sharma@kiet.edu

Anurag Singh Chauhan


Student B.Tech. IV yr. Dept of CSE. Krishna Institute of Engg. & Tech. Ghaziabad, India asc.kiet03@gmail.com

Abstract It has been a great advantage to computer users that they get some software free with the operating system. But what if the user would have been charged for these software, the problem of software piracy would have been more grave. Today the software technologies have evolved it to the extent that now a customer can have free and open source software available in the market. But with this evolution the menace of software piracy has also evolved. Unlike other things a customer purchases, the software applications and fonts bought don't belong to the specified user. Instead, the customer becomes a licensed user means the customer purchases the right to use the software on a single computer, and can't put copies on other machines or pass that software along to colleagues. Software piracy is the illegal distribution and/or reproduction of software applications for business or personal use. Whether software piracy is deliberate or not, it is still illegal and punishable by law. The major reasons of piracy include the high cost of software and the rigid licensing structure which is becoming even less popular due to inefficient software utilization. Various software companies are inclined towards the research of techniques to handle this problem of piracy. Many defense mechanisms have been devised till date but the hobbyists or the black market leaders (so called software pirates) have always found a way out of it. This paper identifies the types of piracies and licensing mechanisms along with the flaws in the existing defense mechanisms and examines social and technical challenges associated with handling software piracy prevention. The goal of this paper is to design, implement and empirically evaluate a comprehensive framework for software piracy prevention and optimal utilization of the software.

Keywords: distributed system, distributed coordinator, sub-coordinator

I. INTRODUCTION

Most retail programs are licensed for use at just one computer site or for use by only one user at any time. By buying the software, the customer becomes a licensed user rather than an owner. Customers are allowed to make copies of the software for backup purposes, but it is against the law to give copies to friends and colleagues. Software piracy is all but impossible to stop, although software companies are launching more and more lawsuits against major infractions. Originally, software companies tried to stop software piracy by copy-protecting their software. This strategy failed, however, because it was inconvenient for

users and was not 100 percent foolproof. Most software now requires some sort of registration, which may discourage pirates, but doesn't really stop software piracy. An entirely different approach to software piracy, called shareware, acknowledges the futility of trying to stop people from copying software and instead relies on people's honesty. Commercial programs that are made available to the public illegally are often called warez. The main objective is to prevent intellectual property of individuals and organizations. Software piracy prevention is important because the legal actions are lengthy and it is difficult to change the moral standards of the people. Among the various approaches that have been explored recently to counteract the problem of software piracy, some are of legal, ethical and technical means. The concept of software license was developed by the software industry since its early inspection. Software license is that the software publisher grants a license to use one or more copies of software, but that ownership of those copies remains with the software publisher. One consequence of this feature of proprietary software licenses is that virtually all rights regarding the software are reserved by the software publisher. Only a very limited set of well-defined rights are conceded to the end-user. Most licenses were limited to operating systems and development tools. Enforcement of licenses was relatively trivial and painless. Any software customer had certain rights and expectations from the software and developer. Some software was licensed only to one user or one machine, while some software may have been licensed to a site specifying the maximum number of machines or concurrent instances of the program in execution (processes). These are also known as End User License Agreements (EULAs). The terms of each EULA may vary but the general purpose is the same establish the terms of contract between software developer and user of software product. II. OPTIMAL USE OF SOFTWARE

For optimal use of the software a model in an organization is used which tries to keep the information about the specified software on a single machine (considered as coordinator) and the complete management of the dynamic


distribution of that software and its license is to be done on the same machine. The selection of the co-ordinator is done arbitrary or by executing the election algorithms. If in any case the co-ordinator goes down than any other machine is voluntary elected as the co-ordinator to provide uninterrupted functioning for dynamic or electronic distribution of the software license. Here the software and license key management is done dynamically by the coordinator machine. The co-ordinator machine is responsible to make an account for all those machines which are executing the software. In this methodology the organization cannot use the software on the number of computers, exceeding the number of license purchased but this methodology provides an ethical way for optimal uses of the software on the network of an organization. Therefore it prevents organizational piracy and supports optimal use of software on the network of an organization, for e.g. there are 500 users on a network and software is used by at most 300 users at a time then it is better to take 300 licenses and use it with the prevention of piracy. In this scheme a machine known as co-ordinator is dedicated for dynamic software and license management. A. Distributed Co-Ordinator Model The approach of single coordinator model can be enhanced in terms of efficiency and performance. In single coordinator model there is only one coordinator which manages the distribution of software license keys among the client machines when there is the requirement of execution of the software by that client. Now in the present approach the concept of Super Coordinator is used. Now in this approach of Super Coordinator a tree based approach is used. In this-tree based approach the root is the super coordinator and its children are the sub coordinators. Various client machines communicate with their corresponding sub coordinators only. Here, sub coordinator takes up the job of providing the license keys to the client machines. The super coordinator performs the crucial task of assigning the particular number of license keys to the sub coordinators as per their requirement. For example, super coordinator has 500 license keys and then it assigns 100 license keys to sub coordinator 1, 150 license keys to sub coordinator 2, 100 license keys to sub coordinator 3, and 80 license keys to sub coordinator 4. The remaining 70 license keys are kept with the super coordinator itself which can be used in case of need and can be assigned to a particular sub coordinator on demand. In case sub coordinator 3 is finished executing the software and is in no need of more than 40 license keys then the remaining license keys (that is, 60) is returned back to the super coordinator and can be used in future need by any of the sub coordinator. The operation of this system of approach is quite similar as that of single coordinator approach. If a client machine needs a license key then communication between the client and its sub coordinator occurs. If the sub coordinator has a

free license key (free license key is the key that is not being used by any machine for the execution of the software) then it is been sent to the client and the client starts its execution of the software. On the contrary, if the sub coordinator does not have any free license key then it requests the super coordinator to provide it with the license key which it could provide to its client. If the super coordinator has a free license key then it is provided to the sub coordinator and then to the client machine, otherwise the sub coordinator along with the client machine goes into waiting queue and waits for the license key. B. Role of sub-coordinator When a user wants to execute the software application it broadcasts port specific UDP message in its sub network to the sub coordinator (this is the only sub coordinator which is listening request messages on that port). On receiving the request the sub coordinator checks for the available license keys in the data bank, if keys are available then it is delivered to the client and makes its entry in the active client list. On the contrary if the sub coordinator does not have requisite number of keys then it maintains a waiting queue and if it crosses a particular threshold value then it sends request message to the super coordinator. Now there arises two cases: Case:1- If the super coordinator has available number of keys then it grants those keys to the subcoordinator. Case:2-If the super coordinator possess lesser keys or does not possess any available key messages are transmitted to each sub-coordinators and then the subcoordinators surrender their unused keys to the super coordinator. C. Fault tolerance On peaceful termination of the client the keys available with it are surrendered back to its sub-coordinator along with the deletion of its entry from the active client list available with other clients. In case of abnormal termination of the client there is a procedure of dual check, in this process an Is Alive message is repetitively sent to each client by the sub coordinators, if a response is received from the client then it means that the client is functioning. On the other hand if there is no response from the client then the sub coordinator checks it for second time by sending the message again. If this request is also without the response then the client is supposed to be dead and it results in surrendering of its keys to the sub coordinator and deletion of its entry from the active client list of every alive clients. Now there arises a case of sub-coordinator crash, in such cases there occurs a voluntary selection of a client as the sub-coordinator and then this information is passed to all the available clients by updating the active client list of every client. D. Evaluation and Performance This approach is better than the single coordinator approach in terms of efficiency. The efficiency of such systems is measured in terms of number of messages that are passed


between the communicating entities (that is, client machine and the coordinator). More the number of messages transmitted between the communicators, less is the efficiency and vice versa. Therefore, efficiency is inversely proportional to the number of messages transmitted between the communicators. In this approach the number of messages to be communicated is decreased to a great extent. As the technique used is a tree based approach, a client demanding the license key broadcasts the request messages in its own sub network only. This request packet is captured by the sub coordinator working in the same sub network. In case of a single coordinator the broadcast packets transmitted by a client are destined to all the machines of the entire network and this is how number of messages is reduced in the hierarchical coordinator environment. Thus the overhead of messages on a single coordinator is distributed on many sub coordinators. Similarly, the messages received and sent by super coordinator are reduced as no client machine communicates directly with the super coordinator but it communicates with the sub coordinator which in turn communicates with the super coordinator. The major aspect of efficiency of this system is based on its fault tolerance mechanism. This approach is very viable in terms of failure of the coordinator. If a sub coordinator fails then the client machines which falls under its scope becomes functionless while other client machines under other sub coordinators work without any disturbance. This is how the complete system is prevented against absolute failure. On the other hand if the super coordinator fails then the sub coordinators are still functioning along with their respective client machines which are under their scope. III. CONCEPT OF KEY DISTRIBUTION

A hierarchical approach is used, consisting of three entities at different levels: the super-coordinator at level 0, the sub-coordinators at level 1 and the clients at level 2. A client that wants to execute the software first broadcasts a Search UDP port-specific message in its own sub-network. This message packet is captured by the sub-coordinator working in the same sub-network, which sends a response packet back to the client. Because a UDP packet contains the IP addresses of both sender and receiver, the client obtains the IP address of the sub-coordinator. The same client then unicasts a Request UDP port-specific message to its sub-coordinator, requesting a license key for execution of the software. The sub-coordinator receives the request message and checks the availability of license keys. If license keys are available with the sub-coordinator, one is provided to the requesting client and an entry for the client is made in the current active list of the sub-coordinator. If the sub-coordinator does not have any available key, it asks the client whether it is ready to wait or wants to quit. If the client is ready to wait for a license key, the sub-coordinator makes its entry in the waiting client list; otherwise the client quits.

If the length of the waiting client list exceeds the threshold limit (TH), the sub-coordinator sends a Request message to the super-coordinator demanding TH license keys. The super-coordinator then checks the availability of license keys. If the super-coordinator possesses at least TH license keys, it transmits TH keys to the sub-coordinator; otherwise the super-coordinator unicasts a Surrender Back UDP message to every sub-coordinator except the one to which it intends to provide license keys, requesting them to surrender their unused license keys. Each sub-coordinator receives this message and returns some of its unused license keys to the super-coordinator. The availability of license keys is then checked again by the super-coordinator, and if it is still less than TH the same steps are repeated until the available keys become greater than or equal to TH or the available keys do not increase at all. These keys are transmitted to the sub-coordinator, which then serves them to the clients in the waiting client list on the FIFO principle.

IV. TERMINATION PROCEDURE

A. Peaceful termination of a client
On the peaceful termination of its software application, the client sends a message to the sub-coordinator, which deletes the entry of that client from its active client list and makes one more license key available.

B. Abnormal termination of a client
The sub-coordinator sends a regular Is-Alive UDP message to the clients in its active client list. When clients receive these messages they respond to the sub-coordinator, indicating their active state. If the sub-coordinator does not receive a response from a specific client, it performs a dual check by sending the message again. If the same client again does not respond, the sub-coordinator treats this as an abnormal termination of that client, deletes its entry from the current active client list and increases the number of available license keys by one.
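The Is-Alive dual check described above can be sketched in a few lines. The snippet below is only an illustrative sketch in Python, not the paper's implementation: the message strings, port number and timeout are assumptions, and real code would also maintain the active client list and key counts described earlier.

```python
# Illustrative sketch of the dual-check heartbeat: the sub-coordinator sends
# "IS_ALIVE" twice before declaring a client dead and reclaiming its key.
# Port, timeout and message strings are assumed for illustration only.
import socket

def is_client_alive(client_ip, port=50005, timeout=2.0, retries=2):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):               # dual check: two attempts
            sock.sendto(b"IS_ALIVE", (client_ip, port))
            try:
                reply, _ = sock.recvfrom(1024)
                if reply == b"ALIVE":
                    return True                # client responded; keep its key
            except socket.timeout:
                continue                       # no reply; try once more
        return False                           # treat as abnormal termination
    finally:
        sock.close()

def reap_dead_clients(active_clients, free_keys):
    # Reclaim the keys of clients that failed the dual check.
    for ip in list(active_clients):
        if not is_client_alive(ip):
            active_clients.remove(ip)
            free_keys += 1
    return free_keys
```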


C. Abnormal termination of a sub-coordinator
The sub-coordinator periodically transmits the active and waiting client lists, the number of available unused license keys and the IP address of the super-coordinator to all the active clients. In this way all the active clients keep the latest information held by the sub-coordinator. In case of the abnormal termination of the sub-coordinator, the active clients will not receive the Is-Alive messages. On the expiration of the timer running at the client machines, a client can infer the abnormal termination of the sub-coordinator. The clients getting the information about the absence of the sub-coordinator wait for a random amount of time and then transmit message packets to all other active clients announcing themselves as the new sub-coordinator. Because an active client whose timer has expired has to wait for a random time, contention in the election of multiple coordinators is very unlikely. Still, if two or more clients take part in the process of becoming the new coordinator, they settle the issue by a voting technique. When a new coordinator is elected, it sends a New Sub-coordinator UDP message to all the current and waiting clients. The new sub-coordinator also updates the super-coordinator about itself.

V. TIGHTER SECURITY BY COMBINING HARDWARE AND SOFTWARE TOKEN TYPES

This approach relates to security mechanisms which prevent unauthorized use of software, and in particular to mechanisms which prevent the unauthorized use of software on more than one computer. Various security mechanisms have heretofore been devised for preventing the use of software without the authorization of the software supplier. These have included hardware security devices, which must be attached to a computer before the software can run on it. Typically, the software that is to run includes an inquiry which looks for an indication that the hardware device has been installed. Such hardware security devices ensure that the software will only execute on one computer at any one time. These hardware devices can, however, be relatively expensive and moreover need to be adaptable to the various types of computers to which they are to be attached. One of the best examples of this tight security provided by hardware and software together is seen in HSBC banks. The bank provides its users with the software security of a login and password, and along with it also provides a hardware key which produces a random number each time it is turned on. The user is supposed to use this identification number generated by the hardware key along with the login and password in order to access the account. The numbers generated by the key follow a particular sequence according to some algorithm, which is checked by the bank before it allows the user to access the account.

VI. CONCLUSION

Nowadays the businesses of software companies depend on flexible software applications for changing market environments, but the rigid licensing structures for software distribution, as used with most legacy systems, are becoming a hurdle. The paper presents the types of piracy and how piracy is done. Besides this, it provides a new and ethical technique for the optimal utilization of the software resources of an organization in its own network environment by achieving a better degree of prevention of software piracy. Its strength is the dynamic distribution of software license keys to the clients with the help of hierarchical coordinators. The chance of piracy is reduced to a remarkable extent because no static measures to prevent piracy are used. In terms of efficiency, the technique of hierarchical coordinators is better than the single-coordinator technique.

REFERENCES

[1] C. Collberg and C. Thomborson, "Software watermarking: Models and dynamic embeddings," in Principles of Programming Languages, pp. 311-324, 1999.
[2] Mukesh Singhal and Niranjan G. Shivratri, "Voting and election algorithms," in Advanced Concepts in Operating Systems, pp. 209 and 343, 2002.
[3] George Coulouris, Jean Dollimore, and Tim Kindberg, "Election algorithms, bully algorithm and ring-based algorithm," in Distributed Systems, pp. 445-448, 2006.
[4] Leili Noorian and Mark Perry, "Autonomic Software License Management System: an implementation of licensing patterns," Fifth International Conference on Autonomic and Autonomous Systems, IEEE, 978-0-7695-3584-5/09, 2009.
[5] Mathias Dalheimer and Franz-Josef Pfreundt, "License management for grid and cloud computing environments," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, IEEE, 978-0-7695-3622-4/09.
[6] Mikhail J. Atallah and Jiangtao Li, "Enhanced smart-card based license management," IEEE International Conference on E-Commerce (CEC'03), 0-7695-1969-5/03, IEEE, 2003.
[7] Yawei Zhang, Lei Jin, Xiaojun Ye, and Dongqing Chen, "Software piracy prevention: Splitting on client," 2008 International Conference on Security Technology, IEEE, 978-0-7695-3486-2/08.
[8] Daniel Ferrante, "Software licensing models: What's out there?" 1520-9202/06, IEEE, 2006.
[9] Dinesh R. Bettadapur, "Software licensing models in the EDA industry," 0-7803-4425-1/98, IEEE, 1998.
[10] Sathiamoorthy Manoharan and Jesse Wu, "Software licensing: A classification and case study," Proceedings of the First International Conference on the Digital Society (ICDS'07), 0-7695-2760-4/07, IEEE, 2007.
[11] Zhengxiong Hou, Xingshe Zhou, and Yunlan Wang, "Software license management optimization in the campus computational grid environment," Third International Conference on Semantics, Knowledge and Grid, 0-7695-3007-9/07, IEEE, 2007.
[12] Petar Djekic and Claudia Loebbecke, "Software piracy prevention through digital rights management systems," Proceedings of the Seventh IEEE International Conference on E-Commerce Technology (CEC'05), 1530-1354/05, IEEE, 2005.


4th International Conference On Computer Applications In Electrical Engineering Recent Advances CERA-09, February 19-21, 2010

Privacy Preservation Using Naïve Bayes Classification and Secure Third Party Computation
Keshavamurthy B. N., Mitesh Sharma and Durga Toshniwal
Electronics & Computer Engineering, Indian Institute of Technology Roorkee, Uttarakhand, India
kesavpec@iitr.ernet.in, mitusuec@iitr.ernet.in, durgafec@iitr.ernet.in
Abstract- Privacy preservation is an important area of concern in the present times. Many real world applications have data being generated at various locations, and all the sites generating the data form part of a distributed database. The collaborating parties are generally interested in finding global patterns from their integrated data. In order to facilitate privacy-preserving mining of global data in a distributed database scenario, we propose to use a secure trusted third party which performs the mining on the data.

I. INTRODUCTION

In recent years, due to the advancement of computing and storage technology, digital data can be easily collected. It is very difficult to analyze the entire data manually, so a lot of work is going on for mining and analyzing such data. In many real world applications, such as hospital, retail-shop and university databases, data is distributed across different sources. A distributed database is comprised of horizontal, vertical or arbitrary fragments. In case of horizontal fragmentation, each site has complete information on a distinct set of entities, and an integrated dataset consists of the union of these datasets. In case of vertical fragmentation, each site has partial information on the same set of entities, and an integrated dataset would be produced by joining the data from the sites. Arbitrary fragmentation is a hybrid of the previous two.

The key goal of privacy-preserving data mining is to allow computation of aggregate statistics over an entire data set without compromising the privacy of the private data of the participating data sources. The key methods used in privacy preservation are randomization [2], secure multiparty computation, k-anonymity [14], l-diversity [15] and t-closeness [16]. Most of the methods for privacy computation apply some transformation on the data in order to perform the privacy preservation. One of the methods in a distributed computing environment which uses the secure sum multiparty computation technique of privacy preservation is Naïve Bayes classification [3].

A lot of research papers have discussed privacy-preserving mining across distributed databases. One important drawback of the existing methods of computation is that the global pattern computation is done at one of the data sources itself. This paper addresses this issue efficiently by using a trusted third party for this purpose.

The rest of the paper is organized as follows: Section 2 reviews related research work. Section 3 presents the Naïve Bayes classification using trusted third party computation. Section 4 presents experimental results. Section 5 includes the conclusion.

II. RELATED WORK

Initially, randomization methods were used for privacy-preserving data mining; the randomization method has traditionally been used in the context of distorting data by probability distribution [1] [10]. In [4] [5], it was discussed how to use this approach for classification. [4] discusses privacy protection and knowledge preservation using the method of anonymization: it anonymizes data by randomly breaking links among attribute values in records through data randomization, and by doing so it maintains the statistical relations among the data so as to preserve knowledge. In [5] it was discussed how to use the approach for classification; more specifically, it gives the building block to obtain random forests classification with enhanced prediction performance for the classical homomorphic election model, for supporting multi-candidate elections. A number of other techniques [6] [7] have also been proposed for privacy preservation which work on different classifiers. [6] combines the two strategies of data transformation and data hiding to propose a new randomization method, Randomized Response with Partial Hiding (RRPH), for distorting the original data; an effective Naive Bayes classifier is then presented to predict the class labels of unknown samples from the data distorted by RRPH. [7] proposes optimal randomization schemes for privacy-preserving density estimation. Techniques have also been proposed for improving the efficiency of classifiers. The work in [8] [9] describes methods of improving the effectiveness of classification: [8] proposes two algorithms, BiBoost (Bipartite Boosting) and MultBoost (Multipart Boosting), which allow two or more participants to construct a boosting classifier without explicitly sharing their data sets, and analyzes both the computational and the security aspects of the algorithms. In [9] a method is proposed which eliminates the privacy breach (how much an adversary learns from the published data) and increases the utility (accuracy of the data mining task) of the released database.
In a distributed environment, the most widely used technique in privacy-preserving mining is secure sum computation [13]. Here, when there are n data sources DS_0, DS_1, ..., DS_{n-1} such that each DS_i has a private data item d_i, i = 0, 1, ..., n-1, the parties want to compute Σ_{i=0}^{n-1} d_i privately, without revealing their private data d_i to each other. The following method was presented. We assume that Σ_{i=0}^{n-1} d_i is in the range [0, m-1] and DS_j is the protocol initiator:

1. At the beginning, DS_j chooses a uniform random number r within [0, m-1].
2. Then DS_j sends the sum d_j + r (mod m) to the data source DS_{j+1 (mod n)}.
3. Each remaining data source DS_i does the following: upon receiving a value x, the data source DS_i sends the sum d_i + x (mod m) to the data source DS_{i+1 (mod n)}.
4. Finally, when party DS_j receives a value from the data source DS_{j-1 (mod n)}, it will be equal to the total sum r + Σ_{i=0}^{n-1} d_i. Since r is known only to DS_j, it can find the sum Σ_{i=0}^{n-1} d_i and distribute it to the other parties.

In [3], Naïve Bayes is applied to learning tasks where each instance x is described by a conjunction of attribute values and the target function f(x) can take on any value from some finite set C. A set of training examples of the target function is provided, and a new instance is presented, described by the tuple of attribute values a1, a2, ..., an. The learner is asked to predict the target value, or classification, for this new instance. The Bayesian approach to classifying the new instance is to assign the most probable target value, cMAP, given the attribute values a1, a2, ..., an that describe the instance:

cMAP = argmax_{cj ∈ C} P(cj | a1, a2, ..., an)    (1)

Using Bayes theorem,

cMAP = argmax_{cj ∈ C} P(a1, a2, ..., an | cj) P(cj) / P(a1, a2, ..., an) = argmax_{cj ∈ C} P(a1, a2, ..., an | cj) P(cj)    (2)

The Naïve Bayes classifier makes the further simplifying assumption that the attribute values are conditionally independent given the target value. Therefore,

cNB = argmax_{cj ∈ C} P(cj) Π_i P(ai | cj)    (3)

where cNB denotes the target value output by the Naïve Bayes classifier. The conditional probabilities P(ai | cj) need to be estimated from the training set; the prior probabilities P(cj) also need to be decided by some heuristics. Probabilities are computed differently for nominal and numerical attributes. For a nominal attribute in horizontally partitioned data, the conditional probability P(C = c | A = a) that an instance belongs to class c, given that the instance has an attribute value A = a, is given by

P(C = c | A = a) = P(C = c ∧ A = a) / P(A = a) = nac / na    (4)

where nac is the number of instances in the (global) training set that have the class value c and an attribute value of a, while na is the number of instances in the global training set which simply have an attribute value of a. The necessary parameters are simply the counts of instances nac and na. Due to the horizontal partitioning of the data, each party has partial information about every attribute. Each party can locally compute the local count of instances; the global count is given by the sum of the local counts, so securely computing a global count is straightforward. Assuming that the total number of instances is public, the required probability can be computed by dividing the appropriate global sums, and the local numbers of instances are not revealed. For an attribute a with l different attribute values and a total of r distinct classes, l·r different counts need to be computed, one for each combination of attribute value and class value. For each attribute value a total instance count also has to be computed, which gives l additional counts [3].
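A tiny sketch of the secure sum ring described above may be useful. This is a plain simulation in Python of the message passing, with made-up values; it is not a networked or cryptographically hardened implementation.

```python
# Simulation of the secure sum ring: the initiator masks its value with a
# random r, each party adds its private value mod m, and the initiator
# removes r at the end. The values below are made up for illustration.
import random

def secure_sum(private_values, m):
    n = len(private_values)
    j = 0                                   # party DS_j is the initiator
    r = random.randrange(m)                 # step 1: uniform random mask
    running = (private_values[j] + r) % m   # step 2: DS_j sends d_j + r (mod m)
    for i in range(1, n):                   # step 3: each DS_i adds d_i (mod m)
        running = (running + private_values[(j + i) % n]) % m
    return (running - r) % m                # step 4: initiator removes r

print(secure_sum([12, 7, 30, 5], m=1000))   # prints 54
```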


III. NAÏVE BAYES CLASSIFICATION USING SECURE THIRD PARTY COMPUTATION

So far, in most of the research works, one of the data sources itself computes the global patterns. We propose to use secure trusted third party computation instead.

Algorithm

Assumptions: n parties; r class values; z attributes, with the jth attribute containing lj different values; S - supervisor (the trusted third party); P - parties; C_i^{l,r} - number of instances with party Pi having class value r and attribute value l; N_i^r - number of instances with party Pi having class value r.

Method:
S: Select one party randomly among the n different parties and send it the random values.
P: For all class values r do
     For i = 1, 2, ..., n do
       For all attribute values l, party Pi locally computes C_i^{l,r}
       Party Pi locally computes N_i^r
     Endfor
   Endfor
Add using secure sum, starting at the party selected by S:

C^{l,r} = Σ_{i=1}^{n} C_i^{l,r}    (5)

N^r = Σ_{i=1}^{n} N_i^r    (6)

The parties calculate the required probabilities from C^{l,r} and N^r, and on that basis predict the class of new instances; the data sources thus obtain the global model.

Here each party first calculates the local instance counts nac and na for all possible combinations of attributes and classes. When all these calculations are done, the parties need the global values of these variables to obtain the results. We assume that

Σ_{i=0}^{n-1} d_i    (7)

is in the range [0, m-1] and the trusted third party is the protocol initiator. At the beginning, the trusted third party chooses a uniform random number r within [0, m-1] and randomly selects one of the parties p_x, 0 ≤ x ≤ n-1, as the starting party. The starting party adds its private value to r and sends the running sum (mod m) to its neighbour p_{(x+1) mod n}; each subsequent party adds its own private value (mod m) and forwards the result, and this continues until party p_{(x+n-1) mod n}. Finally, when the trusted third party receives the value from the last party, it can compute the sum Σ_{i=0}^{n-1} d_i, since r is known only to the trusted third party.
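As an illustration of the count aggregation in equations (5) and (6), the following sketch has each party compute its local class and attribute-value counts and a trusted third party add them up; the secure-sum masking shown earlier is omitted here for brevity. The toy records and attribute values are invented for the example.

```python
# Sketch of the trusted-third-party aggregation of Naive Bayes counts.
# Each party reports only local counts; the third party sums them (eqs 5-6)
# and forms the conditional probabilities n_ac / n_a of eq. (4). Data invented.
from collections import Counter

def local_counts(records):
    # records: list of (attribute_value, class_value) pairs held by one party.
    joint = Counter(records)                      # counts of (a, c), i.e. C_i^{l,r}
    per_class = Counter(c for _, c in records)    # counts of c, i.e. N_i^r
    per_attr = Counter(a for a, _ in records)     # local counts of a
    return joint, per_class, per_attr

def third_party_aggregate(all_parties):
    joint, per_class, per_attr = Counter(), Counter(), Counter()
    for records in all_parties:
        j, c, a = local_counts(records)
        joint += j
        per_class += c
        per_attr += a
    # Global conditional probabilities P(C=c | A=a) = n_ac / n_a.
    return {(a, c): joint[(a, c)] / per_attr[a] for (a, c) in joint}

party1 = [("priority", "recommend"), ("not_recom", "not_recom")]
party2 = [("priority", "recommend"), ("priority", "not_recom")]
print(third_party_aggregate([party1, party2]))
```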

IV EXPERIMENTAL RESULTS

The dataset used for the purpose of experimentation is the Nursery dataset [12]. The Nursery database was derived from a hierarchical decision model originally developed to rank applications for nursery schools. It was used during the 1980s, when there was excessive enrollment in the nursery schools in Ljubljana, Slovenia, and the rejected applications frequently needed an objective explanation. The final decision depended on three subproblems: occupation of the parents and the child's nursery; family structure and financial standing; and the social and health picture of the family.

The nursery-school applications are evaluated according to the following concept structure:

[i] Evaluation of applications for nursery schools
1. EMPLOY: Employment of parents and child's nursery
   1.1 parents: Parents' occupation
   1.2 has_nurs: Child's nursery
2. STRUCT_FINAN: Family structure and financial standing
   2.1 form: Form of the family
   2.2 children: Number of children
   2.3 housing: Housing conditions
   2.4 finance: Financial standing of the family
3. SOC_HEALTH: Social and health picture of the family
   3.1 social: Social conditions
   3.2 health: Health conditions

[ii] Attribute information:
form: complete, completed, incomplete, foster
children: 1, 2, 3, more
housing: convenient, less_conv, critical
finance: convenient, inconv
social: non-prob, slightly_prob, problematic
health: recommended, priority, not_recom
class values: not_recom, recommend, very_recom, priority, spec_prior

We have conducted an experiment for a distributed scenario in which the data is horizontally fragmented across three parties, party1, party2 and party3, holding 4000, 4000 and 4960 records respectively. We also conducted an experiment for a non-distributed scenario on the same dataset of 12960 records. The accuracy of Naïve Bayes classification using trusted third party computation in both cases is given in Table I.

TABLE I: Classification Analysis Accuracy in Percentage

Sl. No. | Description                                        | Number of Parties | Records Per Party                                                | % Accuracy
1       | Non Distributed Scenario with Trusted Third Party | 1                 | 12960                                                            | 49.32
2       | Distributed Scenario with Trusted Third Party      | 3                 | Party1: 4000 records; Party2: 4000 records; Party3: 4960 records | 49.32
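A rough Python sketch of this experimental setup (not the authors' code): the Nursery records are horizontally partitioned into three parties of 4000, 4000 and 4960 records, the per-party counts are pooled (a plain sum stands in for the secure-sum step), and a global Naïve Bayes model is evaluated. The local file name nursery.data, the evaluation on the full set and the add-one smoothing are assumptions for illustration and are not meant to reproduce the figures in Table I:

import csv
from collections import Counter, defaultdict

with open("nursery.data") as f:                        # comma-separated records; the last
    rows = [r for r in csv.reader(f) if r]             # field is the class value

splits = [rows[:4000], rows[4000:8000], rows[8000:]]   # Party1, Party2, Party3

N, C = Counter(), defaultdict(int)                     # global class and (attribute, class) counts
for part in splits:                                    # each party counts locally; the sums
    for *attrs, cls in part:                           # here stand in for secure-sum aggregation
        N[cls] += 1
        for j, a in enumerate(attrs):
            C[(j, a, cls)] += 1

def predict(attrs):
    total = sum(N.values())
    scores = {}
    for cls, n_cls in N.items():
        p = n_cls / total                              # P(c)
        for j, a in enumerate(attrs):
            p *= (C[(j, a, cls)] + 1) / (n_cls + 1)    # P(a_j | c) with simple add-one smoothing
        scores[cls] = p
    return max(scores, key=scores.get)

correct = sum(predict(r[:-1]) == r[-1] for r in rows)
print("accuracy: %.2f%%" % (100.0 * correct / len(rows)))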
CONCLUSION

When legal or commercial reasons make it impossible to share data, it may still be necessary to share the models generated from the data. We have implemented the Naïve Bayes algorithm on a distributed system, which overcomes the privacy problem for a horizontally partitioned distributed database. We used the semi-honest model, in which each party is assumed to follow the protocol but may try to infer information from the messages it sees. We also used the secure sum algorithm, which helps in preserving privacy while obtaining the final result.

REFERENCES

[1] Liew C.K., Choi U.J. and Liew C.J., A data distortion by probability distribution, ACM TODS, 1985, pp. 395-411.
[2] Agrawal R. and Srikant R., Privacy-Preserving Data Mining, ACM SIGMOD Conference, 2000.
[3] Jaideep Vaidya, Murat Kantarcioglu and Chris Clifton, Privacy-preserving Naïve Bayes classification, The VLDB Journal - The International Journal on Very Large Data Bases, Vol. 17, No. 4, pp. 879-898, July 2008.
[4] Agrawal R. and Srikant R., Privacy-preserving data mining, Proceedings of the ACM SIGMOD Conference, 2005.
[5] Agrawal D. and Aggarwal C.C., On the design and quantification of privacy-preserving data mining algorithms, ACM PODS Conference, 2002, pp. 1224-1236.
[6] Zhang P., Tong Y. and Tang D., Privacy preserving Naïve Bayes classifier, Lecture Notes in Computer Science, Vol. 3584, 2005.
[7] Zhu Y. and Liu L., Optimal randomization for privacy-preserving data mining, ACM KDD Conference, 2004.
[8] Gambs S., Kégl B. and Aïmeur E., Privacy preserving boosting, journal, to appear.
[9] Poovammal E. and Poonavaikko, An improved method for privacy preserving data mining, IEEE IACC Conference, Patiala, India, 2009, pp. 1453-1458.
[10] Warner S.L., Randomized response: A survey technique for eliminating evasive answer bias, Journal of the American Statistical Association, 1965, pp. 63-69.
[11] J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2006.


[12] A. Asuncion and D.J. Newman, UCI Machine Learning Repository [http://www.ics.uci.edu/~mlearn/MLRepository.html], Irvine, CA: University of California, School of Information and Computer Science, 2007.
[13] Andrew C. Yao, Protocols for secure computations, Proc. IEEE Symposium on Foundations of Computer Science, 1982, pp. 160-164.
[14] Ciriani V., De Capitani di Vimercati S., Foresti S. and Samarati P., k-Anonymity, in Security in Decentralized Data Management, ed. Jajodia S. and Yu T., Springer, 2006.
[15] Machanavajjhala A., Gehrke J., Kifer D. and Venkitasubramaniam M., l-Diversity: Privacy beyond k-Anonymity, ICDE, 2006.
[16] Li N., Li T. and Venkatasubramanian S., t-Closeness: Privacy beyond k-anonymity and l-diversity, ICDE Conference, 2007.


AuthorIndex
AQAnsari215 A.Chakrabarti292 A.H.Bhat163 A.Jagan155 A.Jagan304 A.K.Pradhan128 A.MartinsMathew159 A.Hkiranmayee23 A.K.Wadhwani259 A.N.Patel39 A.Q.Ansari280 A.Ragavendiran5 A.S.Siddiqui253 AbhishekKumarTripathi417 AjitKumar120 AkashTayal399 AlfredKirubaraj1 AmbrishChandra182 AmbrishChandra62 AmitPhadikar395 AmitPhadikar425 AmodKumar159 AmolA.Kalage27 AnmolRatnaSaxena244 AnuragSinghChauhan447 AnuragTrivedi35 ArchanB.Patel39 ArghyaSarkar292 AsirRajan82 B.K.Panigrahi178 B.P.Singh429 B.R.Bhalja124 B.ChittiBabu276 B.K.Mishra296 B.Ramachandra137 BarjeevTyagi382 BarjeevTyagi386 BhimSingh178 BhimSingh182 BhimSingh62 BinishThomas159 Buttasingh151 CPGupta253 C.B.Vishwakarma199 C.Sasivarnan155 C.Sasivarnan304 C.Christober82 C.ChristopherColumbu441 C.G.Raghavendra288 C.Srisailm35 CH.Venkatesh186 ChiragN.Paunwala345 ChitralekhaMahanta378 DVimalRaj264 D.K.Palwalia51 D.Thukaram313 D.Thukaram67 D.MadhanMohan178 DalvinderKaur284 DeepakKumar108 DerejeShiferaw272 DerejeShiferaw31 Devenderkumarsaini370 DhirendraKumar386 Dilbagsingh151 DilbagSingh,MoinUddin341 DileepGanta236 DolaGobindaPadhan9 DurgaToshniwal451 E.Fernandez227 FarheenaKhan108 Firehun.T248 G.G.Bhutada356 G.N.Khanduja174 GauravSinghUike390 GautamSharma159 GurunathGurrala120 H.K.Verma14 H.O.Gupta382 HariOmGupta170 HarshitaSharma399 Ibraheem321 IndraneelSen120 J.D.Sharma329 J.Raja82 J.P.Tiwari390 J.S.Dhillon116 JagroopSingh341 Jayantapal219 JayashriVajpai19 JigneshG.Bhatt14 K.Vadirajacharya170 K.VenkataSrinivas182 K.B.Mohanty43 K.Gowrishankar5 K.K.Swarnkar259 K.S.Aprameya296 K.S.Denpiya39 KAlHaddad62 KalyanMBhavaraju268 KamalAlHaddad182 KanwardeepSingh329 KapilKumarNagwanshi100 KavithaA146 KavithaG421 KeshavamurthyB.N451 KuldeepYadav407 L.Ganesan337 LakhwinderSingh116 LillieDewan284 M.Malathi1 M.Sudhakaran264 M.T.Ahmed301 M.Veerachary132

M.A.Ansari407 M.K.Bhuyan155 M.K.Bhuyan304 M.M.Waware190 M.SDharmaprakash137 M.S.Shirdhonkar353 M.Veerachary248 MajidJamil253 MandeepSingh403 ManeshKokare349 ManeshKokare353 MariappaneThiagarajah264 MinakshiChakraborty425 MiteshSharma451 Mohd.SamarAnsari224 Mohd.SamarAnsari241 MrinmoyRit219 MummadiVeerachary244 N.G.Chothani124 N.Langer163 N.P.Padhy329 N.C.Lenin58 N.N.S.S.R.KPrasad288 Narasimharaju.B.L,231 NarayanaPrasadPadhy325 NarendraKumar72 Narendra.K227 NareshKumarYadav321 NarinderHuda72 NayanKumarDalei276 NeeleshKumar159 NeenaGupta97 NeerajGupta280 NehaArora429 NidhiSingh374 PAjay264 PKKatti87 P.Agarwal163 P.BhanuPrasad104 P.Jena128 P.CPanchariya23 P.K.Kankar268 ParulGoyal413

PraghneshBhatt317 PramodAgarwal170 PramodAgarwal190 PrashantP.Bedekar112 PrayantaS219 PushpaB.Patil349 R.A.Gupta47 R.Gnanadass325 R.Mitra272 R.N.Patel308 R.Prasad199 R.Prasad308 R.Rajathy325 R.SudharshanKaarthik276 R.Arumugam58 R.Gnanadass5 R.NewlinShebiah337 R.P.Maheshwari124 R.S.Anand356 R.S.Anand407 R.S.Anand296 R.Vigneshwaran276 RabiNarayanDas276 RajarshiRoy437 RajendraPrasad370 RajendraPrasad374 RajeshKumar47 RajeshS.Surjuse47 RajkumarPatra195 RajkumarPatra333 RamakrishnanS146 RamakrishnanS421 RameshK.Sunkaria141 RanjitMitra31 RanjitRoy317 RashmiJain253 RimjhimAgrawal67 Ritu72 RohitRaja195 RohitRaja333 S.K.Sinha308 S.P.Singh51 S.Sengupta292

S.A.MRizvi447 S.Ahmed296 S.Arivazhagan337 S.Arivazhagan362 S.Bagavath362 S.C.Saxena356 S.C.Sharma268 S.H.Chetwani39 S.Jeevananthan186 S.P.Ghoshal317 S.P.Harsha268 S.Sangeetha186 S.SelvaNidhyanandhan337 S.Wadhwani259 SachinDevassy132 SaifulIslam108 SajjanPalSingh231 SandeepBhongade382 SanjoyMondal378 SantiP.Maity395 SantiP.Maity425 SarikaV.Tade27 SatishMohanty104 SatyaPrakashDubey231 SavitaGupta403 SawanSen292 ShailendraSharma62 ShashankKaranth67 ShekharYadav390 ShilpaJindal97 ShirleyTelles141 SishajPSimon441 SomanathMajhi366 SomanathMajhi9 SriMadhavaRajaN421 SubbaReddyB78 SudhirR.Bhide112 SudiptaMahapatra437 SudiptaMukhopadhyay417 SujathaCM146 SukhwinderSingh403 SukwinderSingh341 SupravaPatnaiak345

SurendraS.313 SureshC.Saxena141 SusantaKumarSatpathy100 SwagatPati43 SyedA.Imam93 SyedAtiqurRahman SyedAtiqurRahman241 T.Parveen301 TarunChopra19

TilendraShishirSinha100 TilendraShishirSinha195 TilendraShishirSinha333 TwinkleBansal399 UdayaKumar78 UrmilaBhanja437 Usha.A137 UtkalMehta366 V.K.Nangia215

VaibhavKantSingh433 VanamUpendranath104 Vedpal72 VeeracharyMummadi236 VibhavKumarSachan93 VijayS.Kale112 VineetKumarSharma447 VinodKumar141 W.SylviaLillyJebarani362



PROGRAMME

Schedule CERA-09

19-02-2010
Registration: 08:30 am - 10:00 am
Inaugural Tea (Venue: O P Jain Auditorium): 10:00 am - 10:30 am
Inaugural & Keynote Address (Venue: O P Jain Auditorium): 10:30 am - 01:00 pm
Lunch: 01:00 pm - 02:00 pm
Session 1 (02:30 pm - 04:00 pm)
  Venue-I: Instrumentation (C: Prof. Kailash Chandra, IITR; Co: Prof. Marc Brecht, Germany)
  Venue-II: Electrical Drives (C: Prof. S. S. Murthy, IITD; Co: Dr. M. K. Pathak, IITR)
Intersession Tea: 04:00 pm - 04:30 pm
Plenary Talk (04:30 pm - 05:00 pm)
  Venue-I: Speaker: Prof. S. S. Murthy, IITD
Session 2 (05:00 pm - 06:30 pm)
  Venue-I: Power System Operation, Control and Protection-I (C: Prof. K. B. Mishra, IIT KGP; Co: Prof. B. Das, IITR)
  Venue-II: Computer Networks and Security (C: Prof. M. K. Vasantha, IITR; Co: Prof. Manoj Mishra, IITR)
Dinner: 08:00 pm - 09:30 pm

20-02-2010
Session 3 (09:00 am - 10:30 am)
  Venue-I: Power System Operation, Control and Protection-I (C: Prof. M. P. Dave; Co: Dr. V. Pant, IITR)
  Venue-II: Biomedical Signal Processing (C: Prof. Vinod Kumar, IIT Roorkee; Co: Dr. Satish Hamde, SGGS Nanded)
Intersession Tea: 10:30 am - 11:00 am
Plenary Talk (11:00 am - 11:30 am)
  Venue-I: Speaker: Prof. Bhim Singh, IITD
  Venue-II: Speaker: Prof. Nikhil S. Padhye, USA
Session 4 (11:30 am - 01:00 pm)
  Venue-I: Power Quality (C: Prof. Bhim Singh, IITD; Co: Prof. P. Agrawal, IITR)
  Venue-II: Artificial Intelligence I (C: Prof. Nikhil S. Padhye, USA; Co: Dr. G. N. Pillai, IITR)
Lunch: 01:00 pm - 02:30 pm
Session 5 (02:30 pm - 04:00 pm)
  Venue-I: Power Electronics Convertors (C: Prof. V. K. Verma, Prof. S. P. Singh / Prof. G. K. Singh, IITR)
  Venue-II: Artificial Intelligence II (C: Prof. M. K. Vasantha, IITR; Co: Prof. B. Tyagi / R. Prasad, IITR)
Intersession Tea: 04:00 pm - 04:30 pm
Plenary Talk (04:30 pm - 05:00 pm)
  Venue-I: Speaker: Prof. Peter W. Macfarlane, UK
  Venue-II: Speaker: Prof. T. Gonsalves, Director, IIT Mandi
Session 6 (05:00 pm - 06:30 pm)
  Venue-I: Signal Processing (C: Prof. Peter W. Macfarlane, UK; Co: Prof. S. Mukharjee)
  Venue-II: Power System Operation and Control under Deregulated Environment (C: Dr. B. Das, IITR; Co: Dr. E. Fernandez, IITR)
Cultural Program: 07:00 pm - 08:30 pm
Conference Banquet: 08:30 pm - 09:30 pm

21-02-2010
Session 7 (09:00 am - 10:30 am)
  Venue-I: Image Processing (C: Prof. R. C. Joshi, IITR; Co: Dr. Indra Gupta, IITR)
  Venue-II: Robotics and Control (C: Prof. M. K. Vasantha, IITR; Co: Prof. R. Prasad, IITR)
Intersession Tea: 10:30 am - 11:00 am
Plenary Talk (11:00 am - 11:30 am)
  Venue-I: Speaker: Prof. M. M. Gupta, Canada
  Venue-II: Speaker: Prof. Pramod Kumar, DCE
Session 8 (11:30 am - 01:00 pm)
  Venue-I: Image and Video Processing (C: Prof. R. Mitra, IITR; Co: Dr. D. Singh, IITR)
  Venue-II: Soft Computing (C: Prof. Rama Bhargav, IITR; Co: Dr. C. P. Gupta, IITR)
Lunch: 01:00 pm - 02:00 pm
Valedictory Function (Auditorium, Department of Electrical Engineering): 02:30 pm - 04:30 pm

Venue I - Auditorium, Department of Electrical Engineering
Venue II - Committee Room, Department of Electrical Engineering

SPECIAL THANKS TO OUR SPONSORS


Department of Science & Technology
Department of Science & Technology (DST) was established in May 1971, with the objective of promoting new areas of Science & Technology and of playing the role of a nodal department for organizing, coordinating and promoting S&T activities in the country.

All India Council for Technical Education


The AICTE was constituted in 1945 as an advisory body in all matters relating to technical education. Even though it had no statutory powers, it played a very important role in the development of technical education in the country.

Council of Scientific & Industrial Research


The Council of Scientific & Industrial Research (CSIR), the premier industrial R&D organization in India, was constituted in 1942 by a resolution of the then Central Legislative Assembly. It is an autonomous body registered under the Registration of Societies Act of 1860. CSIR aims to provide industrial competitiveness, social welfare, a strong S&T base for strategic sectors and the advancement of fundamental knowledge.

OUR CO-SPONSOR
Dell Inc., together with its subsidiaries, engages in the design, development, manufacture, marketing, sale, and support of computer systems and services worldwide. It offers desktop PCs and workstations; notebook computers; servers and networking products; and storage solutions, including storage area networks, network-attached storage, direct-attached storage, disk and tape backup systems, and removable disk backup. The company also offers third party software products, which comprise operating systems, business and office applications, anti-virus and related security software, and entertainment software; peripheral products, including software titles, printers, televisions, notebook accessories, networking and wireless products, digital cameras, power adapters, scanners, and other products; and displays, including flat panel monitors and projectors. In addition, it provides infrastructure consulting services, deployment services, asset recovery and recycling services, training services, support services, and managed services. Further, the company provides a range of financial services, including originating, collecting, and servicing customer receivables related to the purchase of Dell products; and financing alternatives, asset management services, and other customer financial services for business and consumer customers. Its customers comprise large corporate, government, healthcare, and education accounts, as well as small and medium businesses, and individual consumers. The company sells its products and services directly to customers through sales representatives, telephone-based sales, and online at www.dell.com, as well as through various indirect sales channels. It has strategic alliance agreements with Perot Systems Corp., which provides integrated IT solutions; and Kingsoft Co. Ltd. Dell Inc. was founded in 1984 and is headquartered in Round Rock, Texas.

OTHER SPONSORS
National Instruments transforms the way engineers and scientists around the world design, prototype, and deploy systems for test, control, and embedded design applications. Using NI open graphical programming software and modular hardware, customers at more than 30,000 companies annually simplify development, increase productivity, and dramatically reduce time to market. From testing next-generation gaming systems to creating breakthrough medical devices, NI customers continuously develop innovative technologies that impact millions of people. Designing and testing increasingly complex products to meet tight time-to-market demands requires a highly efficient, tightly integrated platform. The NI graphical system design platform for test, control, and embedded design spans the entire product design cycle, dramatically increasing efficiency and improving the bottom line. NI complements its industry-leading software and hardware with an extensive collection of services and support solutions from the planning and development phases through deployment and ongoing maintenance. For more information contact: Avik Neogi, Regional Sales Manager, NI Delhi, avik.neogi@ni.com, mob: 9717370303, off: 01140548892.

HCL has always had the uncanny ability to read ahead of any market inflexion point and adapt itself to derive maximum advantage. The result is today's HCL Enterprise - one of the pioneers in technology, transforming organizations across the world. HCL's product range includes Digital Copiers, Printers, High-speed Master Printers, A-0 size Printers/Scanners, Projectors, Digital Surveillance Systems, EPABX Systems, Library Management Systems, AVSI Systems, Virtual Class Room Systems, etc. For more information contact: Mr. Suresh Kumar, Regional Manager, Mobile: 9557791012, 9719871500; V. N. Sharma, Territory Manager, Mobile: 9557791014; Email: suresh.vk@hcl.in, vnsharma@hcl.in.

HP is a technology company that operates in more than 170 countries around the world. We explore how technology and services can help people and companies address their problems and challenges, and realize their possibilities, aspirations and dreams. We apply new thinking and ideas to create more simple, valuable and trusted experiences with technology, continuously improving the way our customers live and work.

Gentech Marketing and Distribution Pvt. Limited (Research for Life Sciences), B-116 & B-117, Plot No. 2, D.B. Gupta Road, Motia Khan, Pahar Ganj, New Delhi-110055. Tel: 91-11-47563575, 91-11-23550371; Fax: 91-11-47563576; Email: gentechmd@gmail.com.

INTECH INFOSYS is a leading company specializing in the supply and support of IT products, solutions, M-CAD/E-CAD/CAM/CAE, image processing and GIS software of reputed brands in this region. We deal with the following IT giants for the sale and support of their products, as their authorized Dealer/Reseller/Business Partner/Authorized Representative and Service Provider: M/s HP India Sales Pvt. Ltd., Gurgaon; M/s NIIT GIS Ltd. (ESRI INDIA), New Delhi; M/s Microsoft Corporation (India) Pvt. Ltd; M/s Ansys India Pvt. Ltd. OTHER SOLUTIONS: Pro-E, CATIA, ABAQUS, ENVI, ERDAS IMAGINE, ERDAS APOLLO SERVER, ORACLE, PANASONIC TOUGHBOOKS, SOLIDWORKS, SOLIDEDGE, NASTRAN, RED HAT LINUX, SUSE LINUX, etc.

FOR MORE DETAILS, PLEASE CONTACT: INTECH INFOSYS, A402/15, 32, Civil Lines, Roorkee 247667. Ph: +91-1332-271117, Tele/Fax: +91-1332-273461, Mobile: +91-9837000572. E-mail: intechrk@sancharnet.in, intech_rke@bsnl.in, intech@intechrk.com. Visit: http://mail.intechrk.com

2010 by Department of Electrical Engineering, IIT Roorkee, India
