
Electronics and Tele-Communication Seminar Topics

1. High Speed Packet Access HSPA
2. Paper Battery
3. HawkEye
4. Bio Battery
5. Mobile Train Radio Communication
6. Face Recognition Using Neural Network
7. Data Loggers
8. Concentrating Collectors
9. Bluetooth Network Security
10. Artificial Intelligence In Power Station
11. Embedded System in Automobiles
12. Third Generation Solid State Drives
13. Security In Embedded Systems
14. Securing Underwater Wireless Communication Networks
15. Secure Electronic Voting System Based on Image Steganography
16. Lunar Reconnaissance Orbiter Miniature RF Technology Demonstration
17. Bubble Power
18. Vehicle-to-Grid V2G
19. E-Waste
20. Super Capacitor
21. Smart Antenna
22. Black-Box
23. Adaptive Missile Guidance Using GPS
24. Autonomous Underwater Vehicle
25. Hydrogen Super Highway
26. Silicon on Plastic
27. BlueStar
28. Intervehicle Communication
29. Intelligent Wireless Video Camera
30. Image Coding Using Zero Tree Wavelet
31. Human-Robot Interaction
32. Wireless LAN Security
33. Smart Note Taker
34. Embedded Web Technology
35. Electrooculography
36. Distributed COM
37. Remote Access Service
38. Wireless Charging Of Mobile Phones Using Microwaves
39. 3-Dimensional Printing
40. Humanoids Robotics
41. Transparent Electronics

42. Thermography
43. Surface Plasmon Resonance
44. Microwave Superconductivity
45. Memristor
46. Earthing transformers For Power systems
47. Direct Current Machines
48. Optical Ethernet
49. DD Using Bio-robotics
50. Clos Architecture in OPS
51. 4G Wireless Systems
52. Wearable Bio-Sensors
53. Poly Fuse
54. Non Visible Imaging
55. Nuclear Batteries-Daintiest Dynamos
56. MILSTD 1553B
57. Micro Electronic Pill
58. MOBILE IPv6
59. Chip Morphing
60. Challenges in the Migration to 4G
61. CAN
62. BIT for Intelligent system design
63. A 64 Point Fourier Transform Chip
64. Anthropomorphic Robot hand: Gifu Hand II
65. ANN for misuse detection
66. Adaptive Optics in Ground Based Telescopes
67. Aluminum Electrolytic Capacitors
68. IBOC Technology
69. Honeypots
70. Immersion Lithography
71. Grating Light Valve Display Technology
72. Fractal Antennas
73. HART Communication
74. E-Textiles
75. Electro Dynamic Tether
76. FPGA in Space
77. DV Libraries and the Internet
78. Co-operative cache based data access in ad hoc networks
79. Mesh Topology
80. Mesh Radio
81. Metamorphic Robots
82. Low Energy Efficient Wireless Communication Network Design
83. Indoor Geolocation
84. Wireless DSL
85. Wireless Microserver
86. User Identification Through Keystroke Biometrics

87. Ultrasonic Motor
88. Virtual Retinal Display
89. Spectrum Pooling
90. Signaling System
91. Ultra Conductors
92. Self Phasing Antenna Array
93. Role of Internet Technology in Future Mobile Data System
94. Service Aware Intelligent GGSN
95. Push Technology
96. GMPLS
97. Fluorescent Multi-layer Disc
98. Compact peripheral component interconnect (CPCI)
99. Datalogger
100. Wideband Sigma Delta PLL Modulator
101. Voice morphing
102. VISNAV
103. Speed Detection of moving vehicle using speed cameras
104. Optical Switching
105. Optical Satellite Communication
106. Optical Packet Switching Network
107. SATRACK
108. Crusoe Processor
109. Radio Frequency Light Sources
110. QoS in Cellular Networks Based on MPT
111. Project Oxygen
112. Polymer Memory
113. Navbelt and Guidicane
114. Multisensor Fusion and Integration
115. MOCT
116. Mobile Virtual Reality Service
117. Smart Pixel Arrays
118. Adaptive Blind Noise Suppression
119. An Efficient Algorithm for iris pattern
120. Analog-Digital Hybrid Modulation
121. Artificial Intelligence Substation Control
122. Speech Compression - a novel method
123. Class-D Amplifiers
124. Digital Audio's Final Frontier-Class D Amplifier
125. Optical Networking and Dense Wavelength Division Multiplexing
126. Optical Burst Switching
127. Bluetooth Based Smart Sensor Networks
128. Laser Communications
129. CorDECT
130. E-Intelligence
131. White LED

132. Carbon Nanotube Flow Sensors
133. Cellular Positioning
134. Iontophoresis
135. Dual Energy X-ray Absorptiometry
136. Pervasive Computing
137. Passive Millimeter-Wave
138. RAID
139. Holographic Data Storage
140. Organic Display
141. Symbian OS
142. Ovonic Unified Memory
143. Spintronics
144. E-Commerce
145. Bio-Molecular Computing
146. Code Division Duplexing
147. Orthogonal Frequency Division Multiplexing
148. Utility Fog
149. VLSI Computations
150. Tunable Lasers
151. HAAPS
152. Daknet
153. Digital Light Processing
154. Free Space Laser Communications
155. Millipede
156. Distributed Integrated Circuits
157. AC Performance Of Nanoelectronics
158. High Performance DSP Architectures
159. FinFET Technology
160. Stream Processor
161. GPRS
162. Free Space Optics
163. FDDI
164. E-Nose
165. Embryonics Approach Towards Integrated Circuits
166. Embedded Systems and Information Appliances
167. Electronic Data Interchange
168. DSP Processor
169. Direct to Home Television (DTH)
170. Digital Subscriber Line
171. Digital HUBUB
172. Crusoe
173. Bio-metrics
174. Augmented Reality
175. Asynchronous Transfer Mode
176. Artificial Eye

177. AI for Speech Recognition
178. Treating Cardiac Disease With Catheter-Based Tissue Heating
179. Surround Sound System
180. Space Time Adaptive Processing
181. Real Time System Interface
182. Radio Frequency Identification (RFID)
183. Quantum Dot Lasers
184. Plasma Antennas
185. Organic Light Emitting Diode
186. Narrow Band & Broad Band ISDN
187. Nanotechnology
188. Led Wireless
189. Laser Communication Systems
190. Josephson Junction
191. Introduction to the Internet Protocols
192. Imagine
193. Cellular Communications
194. Heliodisplay
195. Optical Mouse
196. Time Division Multiple Access
197. Element Management System
198. Extended Markup Language
199. Synchronous Optical Network
200. Dig Water
201. CRT Display
202. Satellite Radio TV System
203. Robotics
204. Wireless Application Protocol
205. Synchronous Optical Networking
206. Cellular Radio
207. Optic Fibre Cable
208. Infinite Dimensional Vector Space
209. Low Voltage Differential Signal
210. Plasma Display
211. GPRS
212. Landmine Detection Using Impulse Ground Penetrating Radar
213. NRAM
214. GSM
215. Wireless Intelligent Network
216. Integrated Voice and Data
217. MEMS
218. Smart Quill
219. Automatic Number Plate Recognition
220. Optical Camouflage
221. Smart Fabrics

222. Java Ring
223. Internet Protocol Television
224. FireWire
225. Night Vision Technology
226. RD RAM
227. Implementation Of Zoom FFT in Ultrasonic Blood Flow Analysis
228. Military Radars
229. Modern Irrigation System Towards Fuzzy
230. Smart Cameras in Embedded Systems
231. Spin Valve Transistor
232. Moletronics- an invisible technology
233. Laser Communications
234. Solar Power Satellites
235. MIMO Wireless Channels
236. Fractal Robots
237. Stereoscopic Imaging
238. Ultra-Wideband
239. Home Networking
240. Digital Cinema
241. Face Recognition Technology
242. Universal Asynchronous Receiver Transmitter
243. Automatic Teller Machine
244. Wavelength Division Multiplexing
245. Object Oriented Concepts
246. Frequency Division Multiple Access
247. Real-Time Obstacle Avoidance
248. Delay Tolerant Networking
249. EDGE
250. Psychoacoustics
251. Integer Fast Fourier Transform
252. Worldwide Interoperability for Microwave Access
253. Code Division Multiple Access
254. Optical Coherence Tomography
255. Symbian OS
256. Home Networking
257. Guided Missiles
258. AC Performance Of Nanoelectronics
259. Acoustics
260. BiCMOS technology
261. Fuzzy based Washing Machine
262. Low Memory Color Image Zero Tree Coding
263. Stealth Fighter
264. Border Security Using Wireless Integrated Network Sensors
265. A Basic Touch-Sensor Screen System
266. GSM Security And Encryption

267. Design of 2-D Filters using a Parallel Processor Architecture
268. Software-Defined Radio
269. Smart Dust
270. Adaptive Blind Noise Suppression
271. An Efficient Algorithm for iris pattern
272. Significance of real-time transport Protocol in VOIP
273. Storage Area Networks
274. Quantum Information Technology
275. Money Pad, The Future Wallet
276. Buffer overflow attack: A potential problem and its Implications
277. Robotic Surgery
278. Swarm intelligence & traffic Safety
279. Smart card
280. Cellular Through Remote Control Switch
281. Terrestrial Trunked Radio
282. HVAC
283. Electronics Meet Animal Brains
284. Satellite Radio
285. Search For Extraterrestrial Intelligence
286. Line-Reflect-Reflect Technique
287. Low Power UART Design for Serial Data Communication
288. Light emitting polymers
289. Cruise Control Devices
290. Boiler Instrumentation and Controls
291. SPECT
292. Sensors on 3D Digitization
293. Asynchronous Chips
294. Optical packet switch architectures
295. Digital Audio Broadcasting
296. Cellular Neural Network
297. FRAM
298. Wireless Fidelity
299. Synthetic Aperture Radar System
300. Touch Screens
301. Tempest and Echelon
302. VoCable
303. Data Compression Techniques
304. Fractal Image Compression
305. Computer Aided Process Planning
306. Space Shuttles and its Advancements
307. Space Robotics
308. Welding Robots
309. Sensotronic Brake Control
310. Mobile IP
311. Power System Contingencies

312. Lightning Protection Using LFAM
313. Wideband Sigma Delta PLL Modulator
314. Bioinformatics
315. Extreme Ultraviolet Lithography
316. Animatronics
317. Molecular Electronics
318. Cellonics Technology
319. Cellular Digital Packet Data
320. CT Scanning
321. Continuously variable transmission (CVT)
322. High-availability power systems Redundancy options
323. IGCT
324. Iris Scanning
325. ISO Loop magnetic couplers
326. LWIP
327. Image Authentication Techniques
328. Seasonal Influence on Safety of Substation Grounding
329. Wavelet transforms
330. Cyberterrorism
331. Ipv6 - The Next Generation Protocol
332. Driving Optical Network Evolution
333. Radio Network Controller
334. Wireless Networked Digital Devices
335. 3-D IC's
336. Sensors on 3D Digitization
337. Fuzzy Logic
338. Simputer
339. Wavelet Video Processing Technology
340. IP Telephony
341. RPR
342. PH Control Technique using Fuzzy Logic
343. Multisensor Fusion and Integration
344. Integrated Power Electronics Module
345. H.323
346. GMPLS

All the Contents are the collection of webpage information from www.seminarsonly.com. This Collection is by Gowrav L. Mail Gowrav at gowrav.hassan@gmail.com


High Speed Packet Access - HSPA

Abstract of HSPA
High Speed Packet Access (HSPA) is the most widely used mobile broadband technology in the communications world; the GSM family of technologies to which it belongs already accounts for more than 3.8 billion connections. The term HSPA refers to both High Speed Downlink Packet Access (HSDPA, 3GPP Release 5) and High Speed Uplink Packet Access (HSUPA, 3GPP Release 6). Evolved HSPA, or HSPA+, is the evolution of HSPA that extends operators' investments before the next-generation technology, 3GPP Long Term Evolution (LTE, 3GPP Release 8). HSPA is implemented on third-generation (3G) UMTS/WCDMA networks and is accepted as the leader in mobile data communication. HSDPA optimizes the downlink, whereas HSUPA, using the Enhanced Dedicated Channel (E-DCH), improves uplink performance. Products supporting HSUPA became available in 2007, and the combination of HSDPA and HSUPA is called HSPA. Adopting these technologies improved throughput, latency and spectral efficiency: introducing HSPA increased overall throughput by approximately 85% on the uplink and raised user throughput by more than 50%. As of the first quarter of 2009, available HSPA rates are 1 to 4 Mbps on the downlink and 500 kbps to 2 Mbps on the uplink; the theoretical bit rates are 14 Mbps on the downlink and 5.8 Mbps on the uplink in a 5 MHz channel. Latency is also notably reduced: in an improved network it is less than 50 ms, and with the introduction of the 2 ms Transmission Time Interval (TTI) it is expected to drop to about 30 ms.

High Speed Downlink Packet Access
The main idea of the HSDPA concept is to increase packet access throughput using methods known from the Global System for Mobile Communications (GSM)/Enhanced Data Rates for Global Evolution (EDGE) standards, namely link adaptation and fast physical layer (L1) retransmission combining. The need to keep memory requirements feasible and to bring link adaptation control closer to the air interface led to the High Speed Downlink Shared Channel (HS-DSCH). HSDPA works as follows: the quality of every HSDPA user is estimated at the Node-B, based for example on power control, the ACK/NACK ratio and HSDPA-specific user feedback; scheduling and link adaptation are then carried out immediately, depending on the active scheduling algorithm and the user prioritization scheme. With HSDPA, fundamental WCDMA features such as the variable spreading factor (SF) and fast power control are switched off and replaced by adaptive modulation and coding (AMC), extensive multicode operation, and a fast, spectrally efficient retransmission strategy. The power control dynamic range is 20 dB in the downlink and 70 dB in the uplink. Intra-cell interference (interference between users on parallel code channels) and the Node-B implementation limit the downlink dynamics: the power transmitted to a user near the Node-B cannot be reduced as far as power control would otherwise allow, and reducing power by more than 20 dB would in any case have little effect on capacity. With HSDPA, this property is handled by the link adaptation function: AMC chooses a coding and modulation combination that requires a higher Ec/Io, which is available to a user near the Node-B, and this increases that user's throughput. Using up to 15 multicodes in parallel enables a large dynamic range for HSDPA link adaptation while maintaining good spectral efficiency. With more robust coding, fast Hybrid Automatic Repeat reQuest (HARQ) and multicode operation, the variable SF is no longer necessary. To profit from short-term channel variations, scheduling decisions are made in the Node-B, so capacity can be allocated to one user for a short time when conditions are favourable. Physical layer packet combining means that the terminal stores received data packets in soft memory; if decoding fails, the new transmission is combined with the old one before channel decoding. The retransmission can be identical to the first transmission or can carry different bits of the channel encoder output from those received in the previous transmission. With this incremental redundancy strategy, a diversity gain and improved decoding efficiency can be achieved.
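The soft-combining idea above can be illustrated with a toy numerical sketch. This is not the 3GPP algorithm, just a minimal illustration of Chase-type combining, where soft values (here, noisy BPSK samples) from repeated transmissions are accumulated before a hard decision; all numbers are invented for the example.

```python
import random

def awgn_bpsk(bits, noise_std):
    """Transmit bits as BPSK symbols (+1/-1) over a noisy channel; return soft values."""
    return [(1.0 if b else -1.0) + random.gauss(0.0, noise_std) for b in bits]

def hard_decision(soft):
    return [1 if s > 0 else 0 for s in soft]

random.seed(1)
data = [random.randint(0, 1) for _ in range(10_000)]
noise = 1.2  # deliberately poor channel so single transmissions often fail

# First transmission and one retransmission of the same packet.
rx1 = awgn_bpsk(data, noise)
rx2 = awgn_bpsk(data, noise)

# Without combining: decide from the retransmission alone.
errs_single = sum(d != b for d, b in zip(data, hard_decision(rx2)))

# Chase combining: accumulate soft values from both transmissions, then decide.
combined = [a + b for a, b in zip(rx1, rx2)]
errs_combined = sum(d != b for d, b in zip(data, hard_decision(combined)))

print(f"bit errors, single transmission: {errs_single}")
print(f"bit errors, after soft combining: {errs_combined}")
```

Combining the two noisy copies roughly halves the noise variance per symbol, which is why the retransmission strategy improves decoding efficiency even when each individual copy is poor.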

The Physical Layer Operation Procedure
The physical layer operation of HSDPA proceeds in the following steps:
1. The scheduler in the Node-B evaluates, for each user, the channel conditions, the amount of data pending in the buffer, the time elapsed since the user was last served, and so on.
2. Once a TTI is assigned to a terminal, the HS-DSCH parameters are determined.
3. To inform the terminal of the necessary parameters, the Node-B transmits the HS-SCCH two slots before the corresponding HS-DSCH TTI.
4. The terminal monitors the given HS-SCCHs; after decoding Part 1 of an HS-SCCH intended for that terminal, it decodes the rest of the HS-SCCH and buffers the necessary codes from the HS-DSCH.
5. Once the HS-SCCH Part 2 parameters are decoded, the terminal can determine to which HARQ process the data belongs and whether it needs to be combined with data already in the soft buffer.
6. After the (potentially combined) data is decoded, the terminal sends an ACK/NACK indicator in the uplink direction.
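The per-TTI decision loop in steps 1 and 2 can be sketched as follows. This is an illustrative scheduler and CQI-to-modulation table, not the 3GPP mapping; the user data and table values are invented for the example.

```python
# Illustrative per-TTI scheduling and link-adaptation loop, not the 3GPP algorithm.
MCS_TABLE = [  # (minimum CQI, modulation and coding, nominal transport block bits per 2 ms TTI)
    (1,  "QPSK,  rate 1/4",  1200),
    (8,  "QPSK,  rate 3/4",  3600),
    (16, "16QAM, rate 1/2",  7200),
    (22, "16QAM, rate 3/4", 10800),
]

def pick_mcs(cqi):
    """Choose the highest-rate entry whose CQI threshold the user satisfies."""
    chosen = MCS_TABLE[0]
    for entry in MCS_TABLE:
        if cqi >= entry[0]:
            chosen = entry
    return chosen

# users: (name, reported CQI, bits waiting in the Node-B buffer)
users = [("UE-1", 5, 40_000), ("UE-2", 24, 90_000), ("UE-3", 15, 0)]

# A max-CQI scheduler: among users with pending data, serve the best channel.
eligible = [u for u in users if u[2] > 0]
name, cqi, backlog = max(eligible, key=lambda u: u[1])
threshold, modulation, tb_bits = pick_mcs(cqi)

print(f"TTI grant -> {name}: CQI {cqi}, {modulation}, {tb_bits} bits "
      f"({backlog - tb_bits} bits left in buffer)")
```

A real Node-B scheduler also weighs fairness and user priorities, but the structure shown here (estimate channel quality, pick the user, pick the modulation and coding) is the one described above.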

If transmission to the same terminal continues in consecutive TTIs, the same HS-SCCH is used.

Paper Battery

Definition
A paper battery is a flexible, ultra-thin energy storage and production device formed by combining carbon nanotubes with a conventional sheet of cellulose-based paper. A paper battery acts as both a high-energy battery and a supercapacitor, combining two components that are separate in traditional electronics. This combination allows the battery to provide both long-term, steady power production and bursts of energy. Non-toxic, flexible paper batteries have the potential to power the next generation of electronics, medical devices and hybrid vehicles, allowing for radical new designs and medical technologies. Paper batteries may be folded, cut or otherwise shaped for different applications without any loss of integrity or efficiency: cutting one in half halves its energy production, and stacking them multiplies the power output. Early prototypes of the device are able to produce 2.5 volts of electricity from a sample the size of a postage stamp.

Paper battery offers future power
The researchers have produced a sample slightly larger than a postage stamp that can store enough energy to illuminate a small light bulb, but the ambition is to produce reams of paper that could one day power a car. Professor Robert Linhardt, of the Rensselaer Polytechnic Institute, said the paper battery was a glimpse into the future of power storage. The team behind the versatile paper, which stores energy like a conventional battery, says it can also double as a capacitor capable of releasing sudden energy bursts for high-power applications.
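The cut-and-stack behaviour described above is easy to express numerically. The sketch below assumes a hypothetical per-cell capacity alongside the reported 2.5 V prototype voltage, purely to show how series stacking adds voltage while paralleling adds capacity; it is an illustration, not measured data.

```python
# Hypothetical single-cell figures for illustration only.
CELL_VOLTAGE_V = 2.5      # reported prototype voltage for a postage-stamp sample
CELL_CAPACITY_MAH = 10.0  # invented capacity, purely for the arithmetic

def stack(series_cells, parallel_strings):
    """Voltage, capacity and stored energy of a series/parallel stack of identical cells."""
    voltage = CELL_VOLTAGE_V * series_cells          # series stacking adds voltage
    capacity = CELL_CAPACITY_MAH * parallel_strings  # paralleling adds capacity
    energy_mwh = voltage * capacity                  # mWh = V * mAh
    return voltage, capacity, energy_mwh

for s, p in [(1, 1), (4, 1), (4, 8)]:
    v, c, e = stack(s, p)
    print(f"{s} in series x {p} in parallel: {v:.1f} V, {c:.0f} mAh, {e:.0f} mWh")
```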

How a paper battery works
While a conventional battery contains a number of separate components, the paper battery integrates all of the battery components in a single structure, making it more energy efficient.

Integrated devices
The research appears in the Proceedings of the National Academy of Sciences (PNAS). "Think of all the disadvantages of an old TV set with tubes," said Professor Linhardt, from the New York-based institute, who co-authored a report into the technology. "The warm-up time, power loss, component malfunction; you don't get those problems with integrated devices. When you transfer power from one component to another you lose energy. But you lose less energy in an integrated device." The battery contains carbon nanotubes, each about one millionth of a centimetre thick, which act as an electrode. The nanotubes are embedded in a sheet of paper soaked in ionic liquid electrolytes, which conduct the electricity. The flexible battery can function even if it is rolled up, folded or cut. Although the power output is currently modest, Professor Linhardt said that increasing the output should be easy.

Construction and Structure
Construction
A very brief explanation of the parts is given below.
Cathode: Carbon Nanotube (CNT)
Anode: Lithium metal (Li+)
Electrolyte: All electrolytes (including bio-electrolytes such as blood, sweat and urine)
Separator: Paper (Cellulose)

The process of construction can be understood in the following steps. First, a common Xerox paper of the desired shape and size is taken. Next, using a simple Mayer-rod conformal coating method, a specially formulated ink with suitable substrates (known as CNT ink) is spread over the paper sample. The strong capillary force in paper gives a large contact area between the paper and the nanotubes once the solvent is absorbed and dried out in an oven. A thin lithium film is then laminated over the exposed cellulose surface, which completes the paper battery. The paper battery is then connected to aluminium current collectors, which connect it to the external load. The working of a paper battery is similar to that of an electrochemical battery, apart from the constructional differences. The paper battery uses a paper-thin sheet of cellulose (the major constituent of regular paper, among other things) infused with aligned carbon nanotubes. The nanotubes act as electrodes, allowing the storage device to conduct electricity. The battery provides a low, steady power output, as well as a supercapacitor's quick bursts of energy. While a conventional battery contains a number of separate components, the paper battery integrates all of the battery components in a single structure, making it more energy efficient and lighter.


HawkEye

Abstract of HawkEye
In the 1970s, the world hockey champions had a coach who inspired them by insisting that they start every match imagining they were 0-3 down: a goal for their weaknesses, another for their opponents' strengths, and a third for umpiring errors. In the past few decades, scepticism about umpiring follies hasn't abated. In the world of sports, where the stakes increase by the minute and an erroneous line call can mean a change of fortunes, there is an increasing reliance on technology to ensure that all arbitrations are unbiased. The component of human error in crucial decisions often turns out to be decisive; it is not uncommon to see matches turn from interesting to one-sided because of a couple of bad umpiring decisions. There is thus a need to bring in technology to minimize the chances of human error in such decision making. Teams across the world are becoming more and more professional in the way they play the game. Teams now have official strategists and technical support staff who help players study their past games and improve, and devising strategies against opposing teams or specific players is also very common in modern sport. All this has become possible due to the advent of technology: technological developments have been harnessed to collect various data very precisely and use it for various purposes. Hawk-Eye is one such technology, considered to be truly top notch in sports. The basic idea is to monitor the trajectory of the ball for the entire duration of play. This data is then processed to produce lifelike visualizations showing the paths that the ball took. Such data has been used for various purposes, popular uses including LBW decision-making software and colourful wagon wheels showing various statistics. Hawk-Eye is one of the most commonly used technologies in the game of cricket today. It has been put to a variety of uses, such as providing a way to collect interesting statistics, generating suggestive visual representations of the game play, and even helping viewers to better understand umpiring decisions, especially in the case of LBWs.

Why Hawk-Eye?
Hawk-Eye is the most sophisticated officiating tool used in any sport. It is accurate, reliable and practical: fans now expect and demand it to be a part of every event. Hawk-Eye first made its name in cricket broadcasting, yet the brand has diversified into tennis, snooker and coaching, and is currently developing a system for football. In tennis the technology is an integral part of the ATP, WTA and ITF tours, featuring at the Masters Cup in Shanghai, the US Open, the Australian Open and so on. Hawk-Eye is the only ball-tracking device to have passed stringent ITF testing measures. Hawk-Eye offers a unique blend of innovation, experience and accuracy that has revolutionized the sporting world. The system is the most technologically advanced cricket coaching system in the world. It provides valuable information to players, coaches and umpires, enabling them to identify faults, measure performance and improvement, focus on specific areas, improve tactical awareness and achieve a level of realism never before possible in a net environment.

Hawk-Eye technology has, since its beginning, gained huge popularity due to its highly innovative, state-of-the-art features. Though it was initially made to help umpires with decisions in cricket, it is now also used in tennis, snooker, video games and even for enhancing military capability. While the system provides things we see every day on television, there is very impressive technology going into it, of which many of us are oblivious. All Hawk-Eye systems are based on the principle of triangulation, using the visual images and timing data provided by at least four high-speed video cameras located at different positions and angles around the area of play. The system rapidly processes the video feeds with a high-speed video processor and ball tracker. A data store contains a predefined model of the playing area and includes data on the rules of the game. In each frame sent from each camera, the system identifies the group of pixels that corresponds to the image of the ball. It then calculates, for each frame, the 3D position of the ball by comparing its position in at least two of the physically separate cameras at the same instant in time. A succession of frames builds up a record of the path along which the ball has travelled. The system also "predicts" the future flight path of the ball and where it will interact with any of the playing-area features already programmed into the database, and it can interpret these interactions to decide infringements of the rules of the game. The system generates a graphic image of the ball path and playing area, which means that information can be provided to judges, television viewers or coaching staff in near real time.

Principle of Hawk-Eye
A Hawk-Eye system is based on the geometric principle of triangulation. Triangulation is the process of determining the location of a point by measuring angles to it from known points at either end of a fixed baseline: the coordinates of, and distance to, the point follow from the law of sines applied to the triangle formed by that point and the two known reference points.

Therefore, if the two reference cameras are separated by a baseline of length d, and the ball subtends angles α and β at the two ends of that baseline, the law of sines gives the perpendicular distance from the baseline to the ball as l = d · sin α · sin β / sin(α + β). These formulas apply in flat (Euclidean) geometry.
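As a small worked example of this formula, the sketch below locates a point from the two angles measured at the ends of a known baseline. The camera separation and angles are made-up numbers, chosen only to illustrate the calculation.

```python
import math

def triangulate(baseline_m, alpha_deg, beta_deg):
    """Perpendicular distance from the baseline to the target, given the two baseline angles."""
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    return baseline_m * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Two cameras 20 m apart; the ball is seen at 52 degrees from one end and
# 63 degrees from the other (illustrative values).
d = 20.0
alpha, beta = 52.0, 63.0
h = triangulate(d, alpha, beta)

# Position along the baseline of the point directly below the ball (measured from camera A).
x = h / math.tan(math.radians(alpha))
print(f"perpendicular distance: {h:.2f} m, offset along baseline: {x:.2f} m")
```

Hawk-Eye performs this kind of calculation with several camera pairs per frame and in three dimensions, but the per-pair geometry is exactly this.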

Step-by-step details of the Hawk-Eye system
In this section we go into the technical details of the steps involved in the Hawk-Eye system. The process can be broken down into the following steps (we concentrate mainly on the working of Hawk-Eye on a cricket field).
1. The cameras: Typically, for a cricket field, six cameras are used. These cameras are placed around the field at roughly the positions indicated in the diagram below:

Fig: The position of cameras around the field
As one can see, the six cameras in use are positioned at roughly sixty degrees from each other. They are placed high in the stands, so that there is less chance of their view being blocked by the fielders. Two of the cameras look directly side-on at the wickets, one for each end. The six cameras are calibrated according to their distance from the pitch. In order to get good accuracy, one needs to restrict the view of each camera to a smaller region: each camera image then shows a more prominent picture of the ball, and hence the ball can be located more accurately. However, the whole field of play has to be covered by just the six available cameras, which limits how restricted each camera's view can be. Nevertheless, the accuracy obtained with six cameras is acceptable by the standards prevalent today.

Bio Battery

Definition
When a glucose solution is poured into the white cubes, the Walkman begins to play; when an isotonic drink is poured in, a propeller starts to spin. In the summer of 2007, the Sony-developed bio battery was announced in newspapers, magazines and TV reports, and evoked a strong response. Carbohydrates (glucose) are broken down to release energy and generate electricity. This bio battery, which is based on mechanisms used in living organisms, is not only friendly to the environment but also has great potential for use as an energy source. The prototype bio battery has achieved the world's highest power output, 50 mW, for a passive-type system. These research results were published at the 234th American Chemical Society National Meeting & Exposition in August 2007 and earned respect from an academic point of view. Sony successfully demonstrated bio-battery-powered music playback with a memory-type Walkman and passive speakers (which operate on power supplied by the Walkman) by connecting four bio battery units in series. The case of this bio battery, which is made from an organic plastic (polylactate), is designed to be reminiscent of a living cell.

Plants create both carbohydrates and oxygen by photosynthesis from carbon dioxide and water. Animals take up those carbohydrates and oxygen, utilize them as an energy source, and release carbon dioxide and water, and the cycle starts again. Since the carbon dioxide is recycled in this system, the amount of carbon dioxide in the atmosphere does not increase. If electrical energy could be acquired directly from this cycle, we could obtain more environmentally friendly energy than that from fossil fuels. Furthermore, renewable energy sources such as glucose (which is present in plants and therefore abundantly available) have an extremely high energy density: one bowl of rice (about 100 grams) is equivalent to about 160 kilocalories, which corresponds to the energy of about 64 AA alkaline dry cells.

Energy for activity, that is the ATP and thermal energy used in living organisms, is obtained from the exchange of electrons and protons through two enzymatic reactions. To take advantage of this mechanism, the energy for activity must be extracted from the organism as electrical energy: when the electrons and protons move from enzyme to enzyme, it is necessary to divert just the electrons through a separate path. Sony therefore used an electron-transport mediator so that electrons could be exchanged smoothly between the enzymes and the electrodes that form the entrance and exit of that detour. The principles of the bio battery are based on the energy conversion mechanism in living organisms; however, in order to create the bio battery, several technologies needed to be developed, including the immobilization of enzymes that are normally incompatible with carbon and metal electrodes, electrode structures, and electrolytes. A battery based on mechanisms used in living organisms is not only friendly to the environment but is also likely to be of practical use as an energy source. Sony has focused on these advantages since 2001 and has developed an electrical power generation device that uses mechanisms similar to those in living organisms.
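The rice-versus-batteries comparison above is simple arithmetic, sketched below. The usable energy assumed for one AA alkaline cell is an approximation (it varies with load), so the result is indicative only.

```python
# Rough check of the "one bowl of rice ~ 64 AA cells" comparison.
RICE_KCAL = 160          # energy in ~100 g of rice, from the text
KCAL_TO_KJ = 4.184

# Assumed usable energy of one AA alkaline cell:
# roughly 1.5 V x 1.9 Ah ~ 2.9 Wh ~ 10.4 kJ (varies with load).
AA_CELL_KJ = 10.4

rice_kj = RICE_KCAL * KCAL_TO_KJ
print(f"rice energy: {rice_kj:.0f} kJ")
print(f"equivalent AA cells: {rice_kj / AA_CELL_KJ:.0f}")
```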

Mobile Train Radio Communication


Definition
Each mobile uses a separate, temporary radio channel to talk to the cell site, and the cell site talks to many mobiles at once, using one channel per mobile. Channels use a pair of frequencies for communication: one for transmitting from the cell site (the forward link), and one for the cell site to receive calls from the users (the reverse link). Communication between mobile units can be either half-duplex or full-duplex. In half-duplex operation, transmission and reception do not happen at the same time, i.e. a user cannot talk and listen simultaneously. In full-duplex operation, transmission and reception are simultaneous, i.e. one can talk and listen at the same time. When communication between mobile units takes place within a cell, half-duplex operation requires only one frequency pair, whereas full-duplex operation requires two frequency pairs. When a mobile unit communicates with a mobile unit outside the cell, one frequency pair per cell is required for both half-duplex and full-duplex communication. Hence system resources are used more heavily when mobile units communicate with each other in full-duplex mode (a short numerical sketch of this channel accounting appears after the list of factors below).

MOBILE TRAIN RADIO SYSTEMS
Present Day Scenario
The choice of mobile system for a given set-up is governed mainly by the following factors:
Coverage area
Number of subscribers to be catered for
Frequency spectrum available
Nature of the terrain
Type of application, i.e. voice or data or both
Integration with other systems
Future technological migration capability
Cost of the system
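As referenced above, the sketch below tallies frequency-pair usage for a mix of calls under the half-duplex and full-duplex rules just described. The traffic mix is invented for illustration.

```python
# Frequency-pair accounting for intra-cell and inter-cell calls, per the rules above.
# Each call: (description, intra_cell?, full_duplex?) - an invented traffic mix.
calls = [
    ("driver to guard, same cell",    True,  False),  # half-duplex, intra-cell
    ("driver to control, other cell", False, True),   # via the cell site
    ("maintenance crew, same cell",   True,  True),   # full-duplex, intra-cell
    ("shunting crew, same cell",      True,  False),
]

def pairs_needed(intra_cell, full_duplex):
    if intra_cell:
        return 2 if full_duplex else 1  # intra-cell: 1 pair half-duplex, 2 pairs full-duplex
    return 1                            # to another cell: 1 pair per cell either way

total = 0
for name, intra, full in calls:
    n = pairs_needed(intra, full)
    total += n
    print(f"{name}: {n} pair(s)")
print(f"total frequency pairs used in this cell: {total}")
```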

Railways' Present Day Requirements
The train mobile system's present-day requirement is not just voice transmission: along with voice, the system must be capable of handling data as well. Typical applications for a modern train mobile system are:
Text and status message transmission
Automatic train operation's critical alarms
Train status and alarm information
Passenger information system control
Train passenger emergency system
Closed circuit TV system

Face Recognition Using Neural Network

Abstract of Face Recognition Using Neural Network
A neural network is a powerful data-modelling tool that is able to capture and represent complex input/output relationships. In the broader sense, a neural network is a collection of mathematical models that emulate some of the observed properties of biological nervous systems and draw on the analogies of adaptive biological learning. It is composed of a large number of highly interconnected processing elements that are analogous to neurons and are tied together with weighted connections that are analogous to synapses. To make this clearer, consider the model of a neural network shown in Figure 1. The most common neural network model is the multilayer perceptron (MLP). It is composed of hierarchical layers of neurons arranged so that information flows from the input layer to the output layer of the network. The goal of this type of network is to create a model that correctly maps the input to the output using historical data, so that the model can then be used to produce the output when the desired output is unknown. A neural network is a sequence of neuron layers. A neuron, the building block of a neural net, is very loosely based on the brain's nerve cell: neurons receive inputs via weighted links from other neurons, those inputs are processed according to the neuron's activation function, and signals are then passed on to other neurons. More practically, neural networks are made up of interconnected processing elements called units, which are equivalent to the brain's counterpart, the neurons. (A minimal forward-pass sketch of such a network is given after the lists below.) A neural network can be considered an artificial system that performs "intelligent" tasks similar to those performed by the human brain. Neural networks resemble the human brain in the following ways:
1. A neural network acquires knowledge through learning.
2. A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.
3. Neural networks modify their own topology, just as neurons in the brain can die and new synaptic connections can grow.

Why choose face recognition over other biometrics?
There are a number of reasons to choose face recognition, including the following:
1. It requires no physical interaction on behalf of the user.
2. It is accurate and allows for high enrolment and verification rates.
3. It does not require an expert to interpret the comparison result.
4. It can use your existing hardware infrastructure; existing cameras and image capture devices will work with no problems.
5. It is the only biometric that allows you to perform passive identification in a one-to-many environment (e.g. identifying a terrorist in a busy airport terminal).
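Referring back to the multilayer perceptron described above, here is a minimal NumPy sketch of a forward pass through one hidden layer. The layer sizes and random weights are arbitrary; it is meant only to show how weighted links and activation functions combine, not to be a trained face recognizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """A common activation function: squashes each neuron's weighted sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Arbitrary sizes: a 64-value input (e.g. a tiny 8x8 face patch), 16 hidden units,
# and 2 outputs (say, "match" vs "non-match" scores).
n_in, n_hidden, n_out = 64, 16, 2

# Weighted connections ("synaptic weights") and biases, randomly initialized.
W1, b1 = rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.1, (n_out, n_hidden)), np.zeros(n_out)

def forward(x):
    hidden = sigmoid(W1 @ x + b1)    # each hidden neuron: weighted sum plus activation
    return sigmoid(W2 @ hidden + b2)

x = rng.random(n_in)                 # stand-in for a preprocessed face patch
print("output scores:", forward(x))
```

Training adjusts W1, W2, b1 and b2 from labelled examples; that learning step is what the text means by the network acquiring knowledge.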

The face is an important part of who you are and how people identify you. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up. For face recognition there are two types of comparison. The first is verification: the system compares the given individual with who that individual says they are and gives a yes or no decision. The second is identification: the system compares the given individual to all the other individuals in the database and gives a ranked list of matches. All identification or authentication technologies operate using the following four stages:
1. Capture: a physical or behavioural sample is captured by the system during enrolment and also in the identification or verification process.
2. Extraction: unique data is extracted from the sample and a template is created.
3. Comparison: the template is then compared with a new sample.
4. Match/non-match: the system decides whether the features extracted from the new sample are a match or a non-match.
Face recognition starts with a picture, attempting to find a person in the image. This can be accomplished using several methods, including movement, skin tones, or blurred human shapes. The face recognition system locates the head and finally the eyes of the individual. A matrix is then developed based on the characteristics of the individual's face. The method of defining the matrix varies according to the algorithm (the mathematical process used by the computer to perform the comparison). This matrix is then compared to matrices that are in a database, and a similarity score is generated for each comparison. Artificial intelligence is used to simulate human interpretation of faces. In order to increase accuracy and adaptability, some kind of machine learning has to be implemented.
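The compare-and-rank step just described can be sketched in a few lines. The "matrices" here are stand-ins: random vectors play the role of face templates, and cosine similarity is one common choice of similarity score (the actual score used depends on the algorithm).

```python
import numpy as np

rng = np.random.default_rng(42)

def cosine_similarity(a, b):
    """One possible similarity score between two face templates."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy enrolled database of face templates (128-dimensional stand-ins).
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol", "dave"]}

# A probe template; make it a noisy copy of one enrolled person so it should rank first.
probe = database["carol"] + rng.normal(scale=0.3, size=128)

# Identification: score the probe against every enrolled template and rank the matches.
scores = {name: cosine_similarity(probe, tmpl) for name, tmpl in database.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:+.3f}")
```

Verification is the same computation against a single claimed identity, followed by a threshold test instead of a ranking.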

There are essentially two methods of capture: video imaging and thermal imaging. Video imaging is more common, as standard video cameras can be used. The precise position and angle of the head and the surrounding lighting conditions may affect system performance. The complete facial image is usually captured, and a number of points on the face can then be mapped: the positions of the eyes, mouth and nostrils, for example. More advanced technologies make a 3-D map of the face, which multiplies the possible measurements that can be made. Thermal imaging has better accuracy, as it uses facial temperature variations caused by vein structure as the distinguishing traits. As the heat pattern is emitted from the face itself, without any source of external radiation, these systems can capture images regardless of the lighting conditions, even in the dark. The drawback is high cost: thermal cameras are more expensive than standard video cameras. Face recognition technologies have generally been associated with very costly, top-security applications. Today the core technologies have evolved and the cost of equipment is dropping dramatically due to integration and increasing processing power. Certain applications of face recognition technology are now cost-effective, reliable and highly accurate. As a result there are no technological or financial barriers to stepping from pilot projects to widespread deployment.

Data Loggers

Abstract of Data Loggers
A data logger (also datalogger or data recorder) is an electronic device that records data over time, or in relation to location, either with built-in instruments or sensors or via external instruments and sensors. Increasingly, but not entirely, they are based on a digital processor (or computer). They are generally small, battery-powered, portable, and equipped with a microprocessor, internal memory for data storage, and sensors. Some data loggers interface with a personal computer and use software to activate the data logger and to view and analyse the collected data, while others have a local interface device (keypad, LCD) and can be used as stand-alone devices. Data loggers vary from general-purpose types for a range of measurement applications to very specific devices for measuring in one environment or application type only. It is common for general-purpose types to be programmable; however, many remain static machines with only a limited number of changeable parameters, or none at all. Electronic data loggers have replaced chart recorders in many applications. One of the primary benefits of using data loggers is the ability to collect data automatically on a 24-hour basis. Upon activation, data loggers are typically deployed and left unattended to measure and record information for the duration of the monitoring period. This allows for a comprehensive, accurate picture of the environmental conditions being monitored, such as air temperature and relative humidity.

Cold Chain Distribution: iMini Plus PDF Data Logger
The new iMini Plus PDF data logger from Escort generates a comprehensive PDF report on receipt by the receiver, without the need for proprietary software or an interface. The iMini Plus PDF data logger is an ideal instrument for monitoring temperature-sensitive shipments to distant destinations where proprietary software and interfaces are unavailable. Every data logger is supplied with a simple USB-to-USB cable for PC connection, which travels with the logger in the consignment. The generated report includes a summary of transport conditions, time spent out of specification and trip statistics. In addition, the report provides the date, time and temperature of all readings, plus a graph of the temperature over the trip. The PDF report can be validated in the Console software by emailing the PDF report and binary files to the supplier, thus meeting the requirements of 21 CFR Part 11.

For complete solutions for complex real-time processing and control applications, contact CAS DataLoggers' Solution Analysts for application-specific designs. Pricing is based on individual application needs and required results.

Block Diagram of Data Logger

Sensors are used to take readings or measurements of their environment at regular intervals. The sensors could be collecting data on a wide range of things, such as temperature, humidity, pressure, wind speed, water currents, electrical voltage or pH. The sensors may be either analogue or digital. If they take analogue readings, an analogue-to-digital converter (ADC) is needed to convert the signal into digital data which the computer can understand. As the sensor takes a reading, the data is sent through a cable or wireless link to the data logger. The data logger usually stores the data for a period of time before sending it in a large batch to a computer, which will process and analyse it. A data logger is often a hand-held, battery-operated device with a large amount of memory. Using the data logger software, the user downloads the data to the PC. Most data logger software packages allow the user to view data in a number of different formats. The most common formats are graphical and tabular. The graphical format, as shown in Figure 3, allows the data to be viewed pictorially as a graph. This format provides the user with a quick means of getting a feel for what is happening and observing trends. The tabular format, shown in Figure 4, provides the user with the raw data.

Data in this format can be exported to a spreadsheet application, such as Microsoft Excel, for further manipulation. Once the data is in the PC, it can be saved as a file on the computer or a floppy disk for recall at a later date, or printed. MadgeTech data loggers are small, battery-powered, intelligent electronic devices that record measurements of physical parameters in the world for later retrieval by a computer. As the technology improves, significant achievements in performance can be made. MadgeTech has defined its mission as incorporating the latest technology into its data loggers as soon as it becomes available. At MadgeTech, we are taking the lead in the industry when it comes to price/performance, as we have an unrelenting commitment to pursue performance improvements in our existing data loggers as well as designing and developing new data loggers.
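A minimal sketch of the sample-store-export cycle described in this section is shown below. It simulates a temperature sensor (random values standing in for real hardware), logs readings at a fixed interval, and writes them to a CSV file that a spreadsheet can open; the file name, interval and sensor are all assumptions made for the example.

```python
import csv
import random
import time
from datetime import datetime

def read_temperature_c():
    """Stand-in for a real sensor driver; returns a plausible room temperature."""
    return round(random.uniform(18.0, 26.0), 2)

def log_readings(path="templog.csv", interval_s=1.0, samples=5):
    """Take a reading every `interval_s` seconds and append it to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "temperature_C"])
        for _ in range(samples):
            writer.writerow([datetime.now().isoformat(timespec="seconds"),
                             read_temperature_c()])
            time.sleep(interval_s)
    return path

print("wrote", log_readings())
```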

Concentrating Collectors

Abstract of Concentrating Collectors
Concentrating, or focusing, collectors intercept direct radiation over a large area and focus it onto a small absorber area. These collectors can provide high temperatures more efficiently than flat-plate collectors, since the absorbing surface area is much smaller. However, diffuse sky radiation cannot be focused onto the absorber. Most concentrating collectors require mechanical equipment that constantly orients the collectors toward the sun and keeps the absorber at the point of focus. There are many types of concentrating collectors [2].

Solar energy is an abundant and renewable energy source: the annual solar energy incident at the ground in India is about 20,000 times the current electrical energy consumption, yet the use of solar energy in India has been very limited. The average daily solar energy incident in India is 5 kWh/m² per day, so energy must be collected over large areas, resulting in high initial capital investment; it is also an intermittent energy source. Hence solar energy systems must incorporate storage in order to meet energy needs during nights and on cloudy days, which further increases the capital cost of such systems. Solar power is the conversion of sunlight into electricity, either directly using photovoltaics (PV) or indirectly using concentrated solar power (CSP). Concentrated solar power systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam; photovoltaics convert light into electric current using the photoelectric effect. The term "solar collector" is applied to solar hot-water panels, but may also be used to denote more complex installations such as solar parabolic collectors, solar troughs and solar towers, or simpler installations such as solar air heaters. The more complex collectors are generally used in solar power plants, where solar heat is used to generate electricity by heating water to produce steam that drives a turbine connected to an electrical generator. The simpler collectors are typically used for supplemental space heating in residential and commercial buildings.

A collector is a device for converting the energy in solar radiation into a more usable or storable form. The energy in sunlight is in the form of electromagnetic radiation from the infrared (long) to the ultraviolet (short) wavelengths. The solar energy striking the Earth's surface depends on weather conditions, as well as the location and orientation of the surface, but overall it averages about 1,000 watts per square metre under clear skies with the surface directly perpendicular to the sun's rays. Solar collectors fall into two general categories: non-concentrating and concentrating. In the non-concentrating type, the collector area (the area that intercepts the solar radiation) is the same as the absorber area (the area absorbing the radiation), and the whole solar panel absorbs the light. Flat-plate and evacuated-tube solar collectors are used to collect heat for space heating, domestic hot water, or cooling with an absorption chiller. The parabolic troughs, dishes and towers described here are used almost exclusively in solar power generating stations or for research purposes. Although simple, these solar concentrators are quite far from the theoretical maximum concentration. For example, the parabolic trough concentration is about 1/3 of the theoretical maximum for the same acceptance angle, that is, for the same overall tolerances of the system.
Approaching the theoretical maximum may be achieved by using more elaborate concentrators based on nonimaging optics.
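The theoretical maximum concentration mentioned above depends only on the acceptance half-angle. The sketch below evaluates the standard nonimaging-optics limits for linear (2D, trough-like) and point-focus (3D, dish-like) concentrators and compares a trough running at one third of its limit; the acceptance angle is an arbitrary value chosen for the example.

```python
import math

def max_concentration(acceptance_half_angle_deg):
    """Thermodynamic concentration limits for linear (2D) and point-focus (3D) concentrators."""
    s = math.sin(math.radians(acceptance_half_angle_deg))
    return 1.0 / s, 1.0 / s**2           # (2D limit, 3D limit)

theta = 1.0                               # acceptance half-angle in degrees (illustrative)
c2d, c3d = max_concentration(theta)
trough = c2d / 3.0                        # text: troughs reach roughly 1/3 of the 2D limit

print(f"acceptance half-angle: {theta} deg")
print(f"2D (trough) limit: {c2d:.0f}x, 3D (dish/tower) limit: {c3d:.0f}x")
print(f"practical trough at ~1/3 of limit: about {trough:.0f}x")
```

The sun's angular radius is about 0.27 degrees, so real systems need acceptance angles at least that large, plus a margin for tracking and surface errors.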

Types of Concentrating Collectors
There are four basic types of concentrating collectors:

Parabolic trough system
Parabolic dish
Power tower
Stationary concentrating collectors

Parabolic trough system
Parabolic troughs are devices that are shaped like the letter U. The troughs concentrate sunlight onto a receiver tube positioned along the focal line of the trough. Sometimes a transparent glass tube envelops the receiver tube to reduce heat loss [3]. This type of collector is generally used in solar power plants. A trough-shaped parabolic reflector is used to concentrate sunlight on an insulated tube (Dewar tube) or heat pipe, placed at the focal line, containing coolant which transfers heat from the collectors to the boilers in the power station. Parabolic troughs often use single-axis or dual-axis tracking.
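To give a feel for the numbers, the sketch below estimates the thermal power gathered by a single trough from its aperture area, the direct irradiance, and an overall optical/thermal efficiency. All three input values are assumptions made for illustration, not data from this text.

```python
# Rough thermal-output estimate for one parabolic trough (illustrative values).
aperture_width_m = 5.0          # width of the reflector opening
trough_length_m = 100.0         # length of one collector row
direct_irradiance_w_m2 = 850.0  # direct normal irradiance on a clear day (assumed)
overall_efficiency = 0.60       # combined optical and receiver thermal efficiency (assumed)

aperture_area = aperture_width_m * trough_length_m
thermal_power_kw = aperture_area * direct_irradiance_w_m2 * overall_efficiency / 1000.0

print(f"aperture area: {aperture_area:.0f} m^2")
print(f"estimated thermal output: {thermal_power_kw:.0f} kW")
```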

Parabolic dish

A parabolic dish system uses a computer to track the sun and concentrate the sun's rays onto a receiver located at the focal point in front of the dish. In some systems, a heat engine, such as a Stirling engine, is linked to the receiver to generate electricity. Parabolic dish systems can reach 1,000 °C at the receiver, and achieve the highest efficiencies for converting solar energy to electricity in the small-power-capacity range. This is the most powerful type of collector: it concentrates sunlight at a single focal point via one or more parabolic dishes, arranged in a similar fashion to the way a reflecting telescope focuses starlight or a dish antenna focuses radio waves. This geometry may be used in solar furnaces and solar power plants. There are two key phenomena to understand in order to comprehend the design of a parabolic dish. The first is that the shape of a parabola is defined such that incoming rays which are parallel to the dish's axis are reflected toward the focus, no matter where on the dish they arrive. The second is that the light rays from the sun arriving at the Earth's surface are almost completely parallel. So if the dish can be aligned with its axis pointing at the sun, almost all of the incoming radiation will be reflected towards the focal point of the dish; most losses are due to imperfections in the parabolic shape and imperfect reflection. Losses due to the atmosphere between the dish and its focal point are minimal, as the dish is generally designed to be small enough that this factor is insignificant on a clear, sunny day. Compared with some other designs, though, this can be an important factor: if the local weather is hazy or foggy, it may reduce the efficiency of a parabolic dish significantly. In dish Stirling power plant designs, a Stirling engine coupled to a dynamo is placed at the focus of the dish; it absorbs the heat of the incident solar radiation and converts it into electricity. Engines currently under consideration include Stirling and Brayton cycle engines. Several prototype dish/engine systems, ranging in size from 7 to 25 kW, have been deployed in various locations in the USA. High optical efficiency and low start-up losses make dish/engine systems the most efficient of all solar technologies: a Stirling engine/parabolic dish system holds the world record for converting sunlight into electricity. In 1984, a 29% net efficiency was measured at Rancho Mirage, California.

References
[1] http://aloisiuskolleg.www.de/schule/fachbereiche/comenius/charles/solar.html
[2] http://www.tpub.com/utilities/index.html
[3] http://www.canren.gc.ca/tech.appl/index.asp
[4] http://www.geocities.com/dieret/re/Solar/solar.html
[5] http://www.wikipedia.org

Bluetooth Network Security

Abstract of Bluetooth Network Security
Wireless communications offer organizations and users many benefits, such as portability and flexibility, increased productivity, and lower installation costs. Wireless local area network (WLAN) devices, for instance, allow users to move their laptops from place to place within their offices without the need for wires and without losing network connectivity. Ad hoc networks, such as those enabled by Bluetooth, allow users to:
Synchronize data with network systems and share applications between devices.
Eliminate cables for printer and other peripheral device connections.
Synchronize personal databases.
Access network services such as wireless e-mail, web browsing, and Internet access.
However, risks are inherent in any wireless technology. The loss of confidentiality and integrity and the threat of denial of service (DoS) attacks are risks typically associated with wireless communications. Specific threats and vulnerabilities of wireless networks and handheld devices include the following:
All the vulnerabilities that exist in a conventional wired network apply to wireless technologies.
Malicious entities may gain unauthorized access to an agency's computer network through wireless connections, bypassing any firewall protections.
Sensitive information that is not encrypted (or that is encrypted with poor cryptographic techniques) and that is transmitted between two wireless devices may be intercepted and disclosed.
Sensitive data may be corrupted during improper synchronization.
Data may be extracted without detection from improperly configured devices.

Security Aspects in Bluetooth
The Bluetooth system provides security at two levels:
At the link layer
At the application layer

Link layer security
Four different entities are used for maintaining security at the link layer: a Bluetooth device address, two secret keys, and a pseudo-random number that is regenerated for each new transaction.

The four entities and their sizes are summarized below:
Bluetooth device address (BD_ADDR): 48 bits
Private authentication key: 128 bits
Private encryption key: 8 to 128 bits (configurable)
Random number (RAND): 128 bits

L2CAP: enforces security for cordless telephony.
RFCOMM: enforces security for dial-up networking.
OBEX: file transfer and synchronization.
The encryption key in Bluetooth changes every time encryption is activated, while whether the authentication key changes depends on the running application. The encryption key is derived from the authentication key during the authentication process. The time required to refresh the encryption key is 2^28 Bluetooth clock ticks, which is approximately 23 hours (a short check of this figure follows the list of security modes below). RAND, the random number generator, is used for generating the encryption and authentication keys; each device should have its own random number generator. It is also used in pairing (the process of authentication by entering two PIN codes) for the keys passed during the authentication process.

Security modes in Bluetooth
In Bluetooth there are three security modes:
Mode 1: Non-secure.
Mode 2: Service-level security, which distinguishes between:

  Trusted devices
  Un-trusted devices
  Unknown devices
Mode 3: Link-level security.
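As referenced above, the roughly 23-hour figure for the encryption-key refresh interval follows from a one-line calculation. It assumes the standard Bluetooth clock rate of 3.2 kHz (a 312.5 microsecond tick), which comes from the Bluetooth specification rather than from this text.

```python
# 2^28 Bluetooth clock ticks at 3.2 kHz (312.5 us per tick).
ticks = 2 ** 28
tick_seconds = 1.0 / 3200.0

seconds = ticks * tick_seconds
print(f"{seconds:.0f} s = {seconds / 3600:.1f} hours")  # about 23.3 hours
```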

A trusted device is a device that has been connected before; its link key is stored and it is flagged as a trusted device in the device database. Un-trusted devices are devices that have also previously connected and authenticated, and whose link key is stored, but which are not flagged as trusted devices. Unknown devices are devices that have not connected before. At the Bluetooth service level there are three types of service with regard to security:
Services that require authentication and authorization: access is granted automatically to trusted devices, while un-trusted devices require manual authorization.
Services that require authentication only: in this case the authorization step is not necessary.
Services open to all devices: neither authentication nor authorization is required.

Attack Tools & Programs
Hardware used: Dell XPS, Nokia N95, Nokia 6150, HP iPAQ hx2790b.
Operating systems: Ubuntu, BackTrack, Windows Vista, Symbian OS, Windows Mobile.
Software used: Bluebugger, Bluediving, Bluescanner, Bluesnarfer, BTscanner, Redfang, Blooover2, Ftp_bt.
The Dell laptop ran Windows Vista when acting as the target to be broken into and for scanning, and Linux when attempting the attacks; the Pocket PC was used as a target, and of the two mobile phones one was used for attacking and one as a target.

Artificial Intelligence In Power Station

Abstract of Artificial Intelligence In Power Station Recently, owing to the liberalization of electricity supply, deregulation, and concern about the global impact on the environment, securing a reliable power supply has become an important social need worldwide. To fulfil this need, detailed investigations and developments are in progress on power distribution systems and the monitoring of apparatus. These concern (1) digital technology based on the application of high-speed semiconductor elements, (2) intelligent substations applying IT (information technology), and (3) system configurations aimed at high-speed communication. Incorporated in these are demands for the future intelligent control of substations, and for protection, monitoring, and communication systems that offer high performance, functional distribution, information sharing and integrated power distribution management. Today's conventional apparatus also requires streamlining of functions, improvements in sensor technology, and standardized interfaces. By promoting these developments, the following savings for the whole system can be expected: (1) reduced costs of remote surveillance in the field of apparatus monitoring, operation, and maintenance, (2) reduced maintenance costs based on the integrated management of equipment, and (3) reduced costs due to the space saved by miniaturizing equipment.
Concept Of Intelligent Substations
In a conventional substation, the substation apparatus (such as switchgear and transformers) and the control, protection and monitoring equipment are independent of one another, and connections are made by signals carried over cables. An intelligent substation, by contrast, shares all information on apparatus, control, protection, measurement and apparatus-monitoring equipment over one bus by applying digital technology and IT-related technology. Moreover, high efficiency and miniaturization can be achieved because the local cubicle contains unified control/protection and measurement equipment integrated into one system (see Fig. 1). Since an optical bus shares the information between the apparatus and the equipment, the amount of cable is sharply reduced. Furthermore, as international standards (IEC 61850 and 61375, etc.) are adopted and the system conforms to telecommunications standards, equipment specifications can be standardized across different vendors.
Apparatus Monitoring System
All the data from each monitoring and measuring device is transmitted over an optical bus and used by a higher-level monitoring system. The required data can be accessed through an intranet or the Internet at the maintenance site of an electricity supply company or a manufacturer, so the apparatus can be monitored from a remote location. The construction, analysis and diagnosis of the database, including trend management and history management, also become possible. As a result, signs of abnormalities can be detected well in advance, and prompt action can be taken in times of emergency. Maintenance plans can also be drafted to ensure reliability; by reviewing revision descriptions and managing parts, efficient maintenance planning and reliability maintenance are realized at the same time.
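As a small illustration of the trend-based monitoring just described (a minimal sketch in Python; the window size, threshold and the transformer-temperature example are arbitrary assumptions):

from statistics import mean, stdev

def flag_abnormal(readings, window=20, k=3.0):
    # Flag a reading that deviates from the trailing-window mean by more than
    # k standard deviations: a simple "sign of abnormality" check on trend data.
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flags.append(i)
    return flags

# Example: slowly rising transformer oil temperature samples with one sudden jump.
samples = [55.0 + 0.1 * i for i in range(40)] + [72.0]
print(flag_abnormal(samples))   # flags the final sample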

Power System Controls
Power system controls can be broadly classified into two categories: local and area (regional/system-wide). The boundary between these two categories is not precise, as area controls are often implemented by optimally adjusting local control parameters and set points. The main characteristic of area controls is the need to process information gathered at various points of the network and to model the behavior of large parts of the power system. This type of control is usually not limited to the automatic feedback type but often includes strategies based on empirical knowledge and human intervention. Local control, on the other hand, is typically implemented using conventional automatic control rules, such as PID control, which are believed to offer adequate performance in most applications. Still, this is not to discount the usefulness of new intelligent methodologies, such as fuzzy logic controllers, for local controls.
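For reference, a minimal sketch of the kind of discrete PID loop used for such local controls (a generic Python example with made-up gains and a toy first-order plant, not a power-system-specific implementation):

def pid_step(error, state, kp=2.0, ki=2.0, kd=0.05, dt=0.1):
    # One update of a discrete PID controller; 'state' carries the integral and last error.
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a crude first-order plant toward a set point of 1.0.
setpoint, y, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(100):
    u, state = pid_step(setpoint - y, state)
    y += 0.1 * (u - y)              # toy plant model
print(round(y, 3))                  # should settle very close to the set point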

For convenience, power system higher-level controls are classified here as follows. Generation scheduling and automatic control includes unit commitment, economic dispatch, and automatic generation control; in the past, well-established control methods were used, but this situation has been changing to deal with the new scenario created by power industry restructuring. Voltage control is mostly of the local type, but some systems have already moved to a higher coordinated secondary control to allow more effective use of reactive power sources and to increase stability margins. Preventive security control aims to detect insecure operating points and to suggest corrective actions; the grand challenges in this area are on-line Dynamic and Voltage Security Assessment (DSA and VSA). Emergency control manages the problem of controlling the system after a large disturbance; it is an event-driven type of control and includes special protection schemes. Restorative control re-energizes the system after a major disturbance followed by a partial or total blackout. Intelligent system techniques may be of great help in the implementation of area power system controls. Most of these applications require large quantities of system information, which can be provided by modern telecommunications and computing technology, but they require new processing techniques able to extract salient information from these large sets of raw data. Importantly, such large data sets are never error free and often contain various types of uncertainty. Finally, control actions may be based on operating strategies specified in qualitative form, which need to be translated into quantitative decisions.

Embedded System in Automobiles

Abstract of Embedded System in Automobiles There are several tasks in which real-time operating systems beat their desktop counterparts hands down. A common application of embedded systems in the real world is in automobiles, because these systems are cheap, efficient and problem free. Almost every car that rolls off the production line these days makes use of embedded technology in one form or another. RTOSs are preferred in this area because of their fast response times and minimal system requirements. Most of the embedded systems in automobiles are rugged in nature, as most of them are built around a single chip. Other factors aiding their use are the low costs involved, ease of development, and the fact that embedded devices can be networked to act as sub-modules in a larger system. No driver clashes or system-busy conditions occur in these systems. Their compact profiles enable them to fit easily under the cramped hood of a car. Embedded systems can be used to implement features ranging from adjustment of the suspension to suit road conditions and the octane content of the fuel, to anti-lock braking systems (ABS) and security systems. Closer to home, the computer chip that controls fuel injection in a Hyundai Santro, or the one that controls the activation of an air bag in a Fiat, is nothing but an embedded system. From brakes to automatic traction control to air bags and fuel/air mixture controls, there may be up to 30-50 embedded systems within a present-day car. And this is just the beginning.
Introduction of Embedded System in Automobiles
Embedded systems can also make driverless vehicle control a reality. Major automobile manufacturers are already engaged in work on these concepts. One such technology is Adaptive Cruise Control (ACC). ACC allows cars to keep safe distances from other vehicles on busy highways. The driver can set the speed of his car and the distance between his car and others. When traffic slows down, ACC alters vehicle speed using moderate braking. This ensures that a constant distance is maintained between cars. As soon as traffic thins out, ACC accelerates back to the cruise speed set by the driver. The driver can override the system at any time by braking. Each car with ACC has a microwave radar unit or laser transceiver fixed at the front to determine the distance and relative speed of any vehicle in its path. The ACC computer (what else but an embedded system, or a group of embedded systems) constantly controls the throttle and brakes of the car. This makes sure that the set cruise speed, or the adapted speed of the traffic at that time, is not exceeded.

The Working Principle Of Adaptive Cruise Control
As already mentioned, each car with ACC has a microwave radar unit fixed at the front to determine the distance and relative speed of any vehicle in its path. The principle behind the working of this type of radar is the Doppler effect. The Doppler effect is the change in frequency of a wave when there is relative motion between the transmitting and receiving units. The two figures below illustrate the Doppler effect. (Figure: higher-pitch sound)

In this case the vehicle is speeding towards a stationary listener. The distance between the listener and the car is decreasing, so the listener hears a higher-pitched sound from the car; that is, the frequency of the sound is increased. (Figure: lower-pitch sound)

In this case the vehicle is moving away from the listener. The distance between the listener and the car is increasing, so the listener hears a lower-pitched sound from the car; that is, the frequency of the sound is decreased. That is the Doppler effect in the case of sound waves. Similarly, the radar unit in ACC continuously transmits radio waves. These are reflected, and the echo signals (reflected waves) have the same or a different frequency depending on the speed and position of the object from which they originate. If the echo signals have the same frequency, there is no relative motion between the transmitting and receiving ends. If the frequency is increased, the distance between the two is decreasing, and if the frequency is decreased, the distance is increasing. The figure below shows a car with ACC transmitting and receiving radio waves.

In the case shown, the gun transmits waves at a given frequency toward an oncoming car. The reflected waves return to the gun at a different frequency, depending on how fast the tracked car is moving. A device in the gun compares the transmission frequency with the received frequency to determine the speed of the car; here, the higher frequency of the reflected waves indicates that the motorist in the left car is speeding. The embedded system is connected to the radar unit, and its output is sent to the braking and accelerating units. As mentioned earlier, an embedded system is a device controlled by instructions stored on a chip, so the ACC chip can be designed with an algorithm that produces an output only when the measured distance falls below the corresponding safe-distance value. Only when the distance between the car and the object in front of it is less than the safe-distance value does the embedded system send output to the braking and accelerating units. Thus the safe distance is always maintained. That is how ACC works.
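A minimal sketch in Python of the two calculations just described: recovering the relative speed from the Doppler shift of the echo and deciding whether to command braking. The radar carrier frequency, the safe-distance threshold and the function names are illustrative assumptions:

C = 3.0e8               # speed of light, m/s
F_TX = 77.0e9           # assumed automotive radar carrier frequency, Hz

def relative_speed(doppler_shift_hz):
    # Radar Doppler relation: the echo is shifted by 2 * v * f_tx / c,
    # so v = shift * c / (2 * f_tx). Positive speed means the gap is closing.
    return doppler_shift_hz * C / (2 * F_TX)

def acc_command(distance_m, closing_speed_mps, safe_distance_m=40.0):
    # Send a braking command only when the gap is below the safe-distance value.
    if distance_m < safe_distance_m and closing_speed_mps > 0:
        return "brake"
    return "hold speed"

v = relative_speed(5132.0)                                              # example echo shift in Hz
print(round(v, 1), acc_command(distance_m=35.0, closing_speed_mps=v))   # about 10.0 m/s, 'brake'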

Third Generation Solid State Drives


Abstract The explosion of flash memory technology has dramatically increased storage capacity and decreased the cost of non-volatile semiconductor memory. The technology has fueled the proliferation of USB flash drives and is now poised to replace magnetic hard disks in some applications. A solid state drive (SSD) is a non-volatile memory system that emulates a magnetic hard disk drive (HDD). SSDs do not contain any moving parts, however, and depend on flash memory chips to store data. With proper design, an SSD is able to provide high data transfer rates, low access time, improved tolerance to shock and vibration, and reduced power consumption. For some applications, the improved performance and durability outweigh the higher cost of an SSD relative to an HDD. Using flash memory as a hard disk replacement is not without challenges. The nano-scale of the memory cell is pushing the limits of semiconductor physics. Extremely thin insulating glass layers are necessary for proper operation of the memory cells. These layers are subjected to stressful temperatures and voltages, and their insulating properties deteriorate over time. Quite simply, flash memory can wear out. Fortunately, the wear-out physics are well understood, and data management strategies are used to compensate for the limited lifetime of flash memory. Flash memory was invented by Dr. Fujio Masuoka while working for Toshiba in 1984. The name "flash" was suggested because the process of erasing the memory contents reminded him of the flash of a camera. Flash memory chips store data in a large array of floating-gate metal-oxide-semiconductor (MOS) transistors. Silicon wafers are manufactured with microscopic transistor dimensions, now approaching 40 nanometers. Intel Corporation introduced its highly anticipated third-generation solid-state drive (SSD), the Intel Solid-State Drive 320 Series. Based on its industry-leading 25-nanometer (nm) NAND flash memory, the Intel SSD 320 replaces and builds on its high-performing Intel X25-M SATA SSD. Delivering more performance and uniquely architected reliability features, the new Intel SSD 320 offers new higher-capacity models while taking advantage of the cost benefits of its 25 nm process, with up to a 30 percent price reduction over the current generation.
Floating Gate Flash Memory Cells
SSDs depend mainly on flash memory chips to store data. In flash memory, thin insulating glass layers are necessary for proper operation of the memory cells; these layers are subjected to stressful temperatures and voltages, their insulating properties deteriorate over time, and the memory can therefore wear out. A floating-gate memory cell is a type of metal-oxide-semiconductor field-effect transistor (MOSFET). Silicon forms the base layer, or substrate, of the transistor array. Areas of the silicon are masked off and infused with different types of impurities in a process called doping. Impurities are carefully added to adjust the electrical properties of the silicon. Some impurities, for example phosphorus, create an excess of electrons in the silicon lattice. Other impurities, for example boron,

create an absence of electrons in the lattice. The impurity levels and the proximity of the doped regions are set out in a lithographic manufacturing process. In addition to the doped silicon regions, layers of insulating silicon dioxide glass (SiO2) and conducting layers of polycrystalline silicon and aluminum are deposited to complete the MOSFET structure. MOS transistors work by forming an electrically conductive channel between the source and drain terminals. When a voltage is applied to the control gate, an electric field causes a thin negatively charged channel to form at the boundary of the SiO2, between the source and drain regions. When this N-channel is present, current is easily conducted from the source to the drain terminal. When the control voltage is removed, the N-channel disappears and no conduction takes place. The MOSFET operates like a switch, either in the on or the off state.

In addition to the control gate, there is a secondary floating gate that is not electrically connected to the rest of the transistor. The voltage at the control gate required for N-channel formation can be changed by modifying the charge stored on the floating gate. Even though there is no electrical connection to the floating gate, electric charge can be added to and removed from it. A quantum physical process called Fowler-Nordheim tunneling coaxes electrons through the insulation between the floating gate and the P-well. When electric charge is removed from the floating gate, the cell is considered to be in the erased state; when electric charge is added to the floating gate, the cell is considered to be in the programmed state. Charge added to the floating gate will remain there for a long period of time. It is this process of adding, removing and storing electric charge on the floating gate that turns the MOSFET into a memory cell. Erasing the contents of a memory cell is done by placing a high voltage on the silicon substrate while holding the control gate at zero. The electrons stored in the floating gate tunnel through the oxide barrier into the positive substrate. Thousands of memory cells are etched onto a common section of the substrate, forming a single block of memory. All of the memory cells in the block are simultaneously erased when the substrate is flashed to a positive voltage. An erased memory cell

will allow N-channel formation at a low control gate voltage because all of the charge in the floating gate has been removed. This is referred to as logic level 1 in a single-level cell (SLC) flash memory cell.
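A minimal sketch of the read-out logic this implies: an erased cell conducts at the normal read voltage and is read as logic 1, while a programmed cell (described next) has a raised threshold and is read as logic 0. The voltage values are illustrative assumptions, not datasheet figures:

READ_VOLTAGE = 2.5          # assumed control-gate voltage applied during a read, in volts

def read_slc_cell(threshold_voltage):
    # Erased cell: low threshold, conducts at the read voltage -> logic 1.
    # Programmed cell: charge on the floating gate raises the threshold -> logic 0.
    return 1 if threshold_voltage < READ_VOLTAGE else 0

print(read_slc_cell(1.0))   # erased cell, reads 1
print(read_slc_cell(4.0))   # programmed cell, reads 0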

The cell is programmed by placing a high voltage on the control gate while holding the source and drain regions at zero. The high electric field causes the N-channel to form and allows electrons to tunnel through the oxide barrier into the floating gate. Programming is performed one word at a time, and usually an entire page is programmed in a single operation. A programmed memory cell inhibits the control gate from forming an N-channel at normal voltages because of the negative charge stored on the floating gate; to form the N-channel in the substrate, the control gate voltage must be raised to a higher level. This is referred to as logic level 0 in an SLC flash memory cell.
References
1. http://www.intel.com/pressroom/.html
2. http://www.intel.com/go/ssd/.html
3. http://www.physorg.com/news/.html
4. http://www.storageview.com/.html
5. www.ieeexplore.ieee.org

Security In Embedded Systems


Abstract An embedded system is a special-purpose computer system that is completely encapsulated in the device it controls. It has specific requirements and performs pre-defined tasks, unlike a general-purpose personal computer. PDAs, mobile phones, household appliances, and monitoring and control systems for industrial automation are examples of embedded systems. Even though embedded system technology is improving rapidly and devices are becoming more and more advanced, very little attention is paid to their security. The main reason is cost sensitivity: even a small rise in production cost makes a big difference when millions of units are built. Embedded systems often have significant energy constraints, and many are battery powered. Some embedded systems can get a fresh battery charge daily, but others must last months or years on a single battery. Protecting an embedded system is quite different from protecting a typical computer because of the particular environments in which such systems operate. Building a secure embedded system also involves designing a robust application that can deal with internal failures; no level of security is useful if the system crashes and is rendered unusable. It is obvious that the higher the level of security, the greater the cost of the product. For this reason the manufacturer should carry out a risk analysis to determine what the cost of a successful attack against the product would be and what class of attacker the product must be protected from. Once the possible loss is known, the candidate security measures and their costs can be identified.
Counter Measures to Avoid Attacks
Software methods: Whenever a system has to communicate with another device, the data passes through a number of untrusted intermediate points. The secure data must therefore be scrambled in such a way that it is useless or unintelligible to anyone who gains unauthorized access to it. This can be achieved with the help of cryptographic methods, digital signatures and digital certificates.
Data Encryption
Encryption is the process of scrambling any amount of data using a (secret) key so that only the recipient, who has access to the key, is able to decrypt the data. The algorithm used for the encryption can be any publicly available algorithm, such as DES, 3DES or AES, or any algorithm proprietary to the device manufacturer. The Data Encryption Standard (DES) is a block cipher (a method for encrypting information) based on a symmetric-key algorithm that uses a 56-bit key. A block cipher takes a fixed-length string of plaintext bits and transforms it through a series of complicated operations into a ciphertext bit string of the same length; in the case of DES, the block size is 64 bits. DES uses a key to customize the transformation, so that decryption can supposedly only be performed by those who

know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity and are thereafter discarded, so the effective key length is 56 bits. If publicly available algorithms are used, the security of the transferred data depends entirely on the secrecy of the keys used for the encryption.
Public-key Key Agreement Algorithm
Key agreement algorithms exchange some public information between two parties so that each can calculate a shared secret key. However, they do not exchange enough information for eavesdroppers on the conversation to calculate the same shared key. A key agreement algorithm has a private key and an associated public key. The private key is generally a random number of hundreds or a few thousand bits, and the public key is derived from the private key using the one-way function specified by the key agreement algorithm.

The key generation algorithm 'Generate Key' is such that the keys generated at devices A and B are the same, that is, the shared secret KA = KB = K(PA, PB, C). This protocol has a deficiency: nothing in the key agreement protocol prevents someone from impersonating another party. So even though the transmissions of the public keys do not need to be encrypted, they should be signed for maximum safety.
Digital Signature
As mentioned before, public-key cryptography is inefficient. In order to make public-key authentication practical, the digital signature was invented. A digital signature is simply a hash of the data to be sent, encrypted using the public-key authentication method. Some digital signature algorithms are DSA, Full Domain Hash and RSA-PSS.
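A minimal sketch of the key agreement idea just described, using textbook Diffie-Hellman with deliberately tiny, insecure parameters (real private keys are hundreds or thousands of bits long, as noted above, and the exchanged public values should be signed as discussed next):

import secrets

# Public parameters agreed in advance (toy values, far too small for real use).
P = 4294967291          # a small prime modulus
G = 5                   # public base

def generate_keypair():
    private = secrets.randbelow(P - 2) + 1       # random private key
    public = pow(G, private, P)                  # one-way function: modular exponentiation
    return private, public

a_priv, a_pub = generate_keypair()               # device A
b_priv, b_pub = generate_keypair()               # device B

# Each side combines its own private key with the peer's public key.
k_a = pow(b_pub, a_priv, P)
k_b = pow(a_pub, b_priv, P)
assert k_a == k_b                                # KA = KB: both sides hold the same shared secret
print(hex(k_a))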

(Fig. 2: Digital signature algorithm)
For establishing a shared secret using the key agreement algorithm, it is important for a device to receive an authenticated public key from its peer. For the authenticated exchange of public keys, digital signatures are used. By encrypting only the fixed-size message hash, we remove the inefficiency of the public-key algorithm and can efficiently authenticate any arbitrary amount of data. In digital signatures, as in the key agreement algorithm, a device uses a pair of keys, a 'sign private-key' and a 'sign public-key'. Only the device knows its sign private-key, whereas the sign public-key is distributed to all the communicating devices. A device signs a message using a signature algorithm with its sign private-key to generate a signature, and any device that has access to the sign public-key of the signing device can verify the data against the signature using the signature verification algorithm. If any third party modifies the data or the signature, the verification fails.
Digital Certificate
Even when a digital signature algorithm is used, the 'sign public-key' of a peer device has to be obtained in an authenticated way to ensure the authenticity of a received message. For key agreement or digital signatures, the authenticated transfer of public keys in a large network is difficult or even impossible without a centralized trusted authority. This centralized authority is trusted by all the devices in the network and is generally known as the trusted Certificate Authority (CA). The CA signs the public-keys of devices, along with the device IDs, using the CA's private-key to generate the signature.
Security Needs Within The Device
Security is not all about encryption; it is also about policy, procedure, and implementation. Case in point: encryption based on a secret key is only as good as the policy that controls access to the key. Secure code alone does not make a secure system. Security must be considered at each phase of the process, from requirements to design to testing, and even support. Whether it is the private-key of a public-key algorithm or a previously negotiated shared secret between the devices, the security of the transferred data depends on the secrecy of these keys. These secret keys and secret values are stored in the device (some even for the lifetime of the device) and require protection from unauthorized exposure. The hardware and software security measures implemented in the device must defeat any attempt at unauthorized access to these secret keys.
References
1. J. Lizarraga, R. Uribeetxeberria, U. Zurutuza, M. Fernandez, "Security in Embedded Systems", Computer Science Department, Mondragon University, Spain.
2. L. Ricci (eMVP), L. McGinnes (CPL), Applied Data Systems / COACT White Paper, "Embedded System Security: Designing Secure Systems with Windows CE".
3. www.ieeexplore.ieee.org

Securing Underwater Wireless Communication Networks


Abstract Underwater wireless communication networks (UWCNs) include sensors and autonomous underwater vehicles (AUVs) that interact to perform specific applications such as underwater monitoring. Coordination and information sharing between sensors and AUVs make the provision of security challenging. The unique characteristics of the underwater acoustic channel, and the differences between such networks and their ground-based counterparts, require the development of efficient and reliable security mechanisms. The aquatic environment is particularly vulnerable to malicious attacks because of the high bit error rates, the large and variable propagation delay, and the low bandwidth of acoustic channels in water. Achieving reliable inter-vehicle and sensor-AUV communication is especially difficult owing to the mobility of AUVs and the movement of sensors with water currents. These characteristics of UWCNs give rise to several security issues, such as packet errors, eavesdropping, modification of packets, and many more. Also, since power consumption in underwater communications is higher than in terrestrial radio communications, energy-exhaustion attacks can reduce network life. The possible attacks include jamming, wormholes, selective forwarding and Sybil attacks, and defences against them are discussed: jamming can be countered by spread spectrum techniques, wormhole detection can be done with visual modelling using Dis-VoW, and the other attacks can be countered by authentication, verification, and positioning. Open research challenges for secure localization, routing and time synchronization are also mentioned. In this paper UWCNs are discussed, with emphasis on the possible attacks, the countermeasures, and further opportunities and scope for development to improve the security of such networks.
Overview of Underwater Wireless Communication Networks
Underwater wireless communication networks (UWCNs) consist of sensors and autonomous underwater vehicles (AUVs) that interact, coordinate and share information with each other to carry out sensing and monitoring functions. A pictorial representation is shown below. In the last several years, underwater communication networks have found increasing use in a widespread range of applications, such as coastal surveillance systems, environmental research, autonomous underwater vehicle (AUV) operation, oil-rig maintenance, collection of data for water monitoring, and linking submarines to land, to name a few.

By deploying a distributed and scalable sensor network in a three-dimensional underwater space, each underwater sensor can monitor and detect environmental parameters and events locally. Hence, compared with remote sensing, UWCNs provide a better sensing and surveillance technology for acquiring data to understand the spatial and temporal complexities of underwater environments. Present underwater communication systems involve the transmission of information in the form of sound, electromagnetic (EM), or optical waves, and each of these techniques has advantages and limitations. Based on their applications, there are three types of UWSNs (underwater sensor networks): 1) mobile UWSNs for long-term non-time-critical applications (M-LT-UWSNs); 2) static UWSNs for long-term non-time-critical applications (S-LT-UWSNs); and 3) mobile UWSNs for short-term time-critical applications (M-ST-UWSNs). Besides the UWSNs mentioned above, underwater networks also include sparse mobile AUV (autonomous underwater vehicle) or UUV (unmanned underwater vehicle) networks, where vehicles/nodes can be spaced several kilometres apart. These types of networks have their own unique communication requirements. Among the three types of waves, acoustic waves are used as the primary carrier for underwater wireless communication systems because of their relatively low absorption in underwater environments. The security requirements to be met in UWCNs are:
Authentication: Authentication is the proof that the data received was sent by a legitimate sender. This is essential in military and safety-critical applications of UWCNs.
Confidentiality: Confidentiality means that information is not accessible to unauthorized third parties. It needs to be guaranteed in critical applications such as maritime surveillance.
Integrity: Integrity ensures that information has not been altered by any adversary. Many underwater sensor applications for environmental preservation, such as water quality monitoring, rely on the integrity of information.

Availability: The data should be available when needed by an authorized user. Lack of availability due to denial-of-service attacks would especially affect time-critical aquatic exploration applications such as the prediction of seaquakes.
Some common terminology used here is defined below.
Attack: An attempt to gain unauthorized access to a service, resource, or information, or the attempt to compromise integrity, availability, or confidentiality.
Attacker, intruder, adversary: The originator of an attack.
Vulnerability: A weakness in system security design, implementation, or limitations that could be exploited.
Threat: Any circumstance or event (such as the existence of an attacker and vulnerabilities) with the potential to adversely impact a system through a security breach.
Defence: An idea, system or model that counters an attack.
Jamming and Spread Spectrum Technique to Counter Jamming
Jamming is deliberate interference with radio reception to deny the target's use of a communication channel. For single-frequency networks it is simple and effective, rendering the jammed node unable to communicate or coordinate with others in the network.

A jamming attack consists of interfering with the physical channel by putting up carriers on the frequencies used by nodes to communicate. Since it requires a lot of energy, attackers usually attack in sporadic bursts. Since underwater acoustic frequency bands are narrow (from a few to hundreds of kilohertz), UWCNs are vulnerable to narrowband jamming. Localization is affected by the replay attack, in which the attacker jams the communication between a sender and a receiver and later replays the same message with stale information (an incorrect reference) while posing as the sender. In frequency hopping, a device transmits a signal on one frequency for a short period of time, changes to a different frequency, and repeats; the transmitter and receiver must be coordinated. Direct-sequence spreading spreads the signal over a wide band using a pseudo-random bit stream; a receiver must know the spreading code to distinguish the signal from noise.
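A minimal sketch of the coordination needed for the frequency hopping just described: both ends derive the same pseudo-random hop sequence from a shared seed, so a jammer without the seed cannot predict the next channel (the channel count and seed are illustrative assumptions):

import random

def hop_sequence(shared_seed, num_channels=32, length=10):
    # Both transmitter and receiver run this with the same seed and therefore
    # hop through the same pseudo-random list of channels.
    rng = random.Random(shared_seed)
    return [rng.randrange(num_channels) for _ in range(length)]

tx_hops = hop_sequence(shared_seed=0xC0FFEE)
rx_hops = hop_sequence(shared_seed=0xC0FFEE)
assert tx_hops == rx_hops           # coordinated hopping
print(tx_hops)                      # a jammer without the seed sees an unpredictable pattern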

Frequency-hopping schemes are somewhat resistant to interference from an attacker who does not know the hopping sequence. However, the attacker may be able to jam a wide band of the spectrum, or even follow the hopping sequence by scanning for the next transmission and quickly tuning the transmitter.
References
1. Mari Carmen Domingo, "Securing Underwater Wireless Communication Networks", IEEE Wireless Communications, February 2011.
2. W. Wang, J. Kong, B. Bhargava and M. Gerla, "Visualization of Wormholes in Underwater Sensor Networks: A Distributed Approach", International Journal of Security and Networks, vol. 3, no. 1, 2008, pp. 10-23.
3. www.ieeexplore.ieee.org

Secure Electronic Voting System Based on Image Steganography


Abstract As information technology evolves over time, the need for a better, faster, more convenient and secure electronic voting system is relevant, since traditional election procedures cannot satisfy all of voters' demands. To increase the number of voters and to offer an enhanced procedure in an election, many researchers have been introducing novel and advanced approaches to secure electronic voting systems. This paper proposes a secure electronic voting system that provides enhanced security by implementing cryptography and steganography in Java. As a preliminary investigation, two steganography techniques are considered and their performances are compared for different image file formats. Voting has played a major role in democratic societies. The traditional voting procedure uses a paper-based ballot; however, this approach is costly, inconvenient and time consuming for voters, and many people nowadays prefer a more instant way to vote. With the evolution of computer technology, many researchers are proposing secure, reliable and convenient electronic voting systems as a substitute for the traditional voting procedure, which helps to encourage each voter to make use of their right to vote. Such systems have to be designed to satisfy the following requirements.
Completeness: All valid votes are counted correctly.
Soundness: A dishonest voter cannot disrupt the voting.
Privacy: All votes must be secret.
Unreusability: No voter can vote twice.
Eligibility: No one who is not allowed to vote can vote.
Fairness: Nothing must affect the voting; no one can indicate the tally before the votes are counted.
Verifiability: No one can falsify the result of the voting.
Robustness: The result reflects all submitted and well-formed ballots correctly, even if some voters and (or) possibly some of the entities running the election cheat.
Uncoercibility: No voter should be able to convince any other participant of the value of his or her vote.
Receipt-freeness: Voters must neither obtain nor be able to construct a receipt proving the content of their vote.
Mobility: The voter can vote anytime and anywhere through the Internet.
Convenience: The system must allow voters to cast their votes quickly, in one session and with minimal equipment or special skills.
In recent years, researchers have focused more on developing a new technology which can support uncoercibility, receipt-freeness and also universal verifiability. Many end-to-end verifiable (E2E) systems have been proposed and are being widely used. In principle, an E2E voting system offers assurance to the voters as they cast their vote by issuing a receipt of the vote, which can be used for verification against the overall tabulation of the collected votes. On the other hand, this receipt cannot be used as proof in vote buying or vote coercion, even though all of the receipts are posted publicly on a secured append-only bulletin board once the voter has finished the voting process. Therefore, the E2E system still protects the voter's privacy.
Visual Cryptography

This scheme was introduced by Naor and Shamir. In the cryptography field, visual cryptography requires less computational effort than other cryptography schemes, which rely on complex cryptographic algorithms to protect a secret. It encrypts visual information, for example pictures or text, in a particular way and produces a set of shares as the result. These shares need to be stacked together using a visual cryptography tool to reveal the hidden secret. Visual cryptography is thus a method for protecting image-based secrets that has a computation-free decryption process.
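A minimal sketch of the (2, 2) scheme of Naor and Shamir for a row of binary pixels: each secret pixel is expanded into two subpixels per share, with identical subpixel patterns in the two shares for a white pixel and complementary patterns for a black pixel, so stacking (OR-ing) the shares reveals the secret while either share alone looks random. The flat-list representation is an illustrative simplification of a real image:

import random

def make_shares(secret_bits):
    # secret_bits: list of 0 (white) / 1 (black). Returns two shares, each a list of
    # 2-subpixel patterns. Either share on its own is uniformly random.
    share1, share2 = [], []
    for bit in secret_bits:
        pattern = random.choice([(0, 1), (1, 0)])
        share1.append(pattern)
        if bit == 0:                                      # white: same pattern in both shares
            share2.append(pattern)
        else:                                             # black: complementary pattern
            share2.append((1 - pattern[0], 1 - pattern[1]))
    return share1, share2

def stack(share1, share2):
    # Overlaying transparencies acts like OR: a secret black pixel becomes fully black,
    # a secret white pixel stays only half black.
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
print(stack(s1, s2))    # (1, 1) pairs where the secret was black, mixed pairs where it was white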

Figure 1. Data flow diagram of how visual cryptography is applied in the system
Visual cryptography can be considered a convenient and reliable tool for secret protection, or even for the verification (authentication) process, because it is not time-consuming, has a low computational cost and can be carried out without any external devices. An illustration of how visual cryptography works in the electronic voting system is given in Fig. 1.
Image Steganography
Steganography is the art of hiding information in ways that prevent the detection of hidden messages [12]. It was introduced as a branch of information security to provide an enhanced technique for hiding secret information. As information technology evolves, more threats arise and a simple encryption method is no longer sufficient to protect the secrecy of data. Encrypted data can easily arouse suspicion, since it is clearly recognizable as such. Steganography, on the other hand, offers a less suspicious way of hiding a secret. It is therefore proposed as the main tool in this paper to secure data communication in the election procedure, since its purpose is to maintain secret communication between two parties. The scheme can be applied to various types of data, such as text, images, audio, video and protocol file formats. The methods of steganography vary from invisible inks, microdots, character arrangement, digital signatures and covert channels to spread-spectrum communications. Unlike cryptography, the input (cover) and output data (stego-object) of steganography look identical, so it is difficult to recognize and interpret the hidden secret in the stego-object. However, some steganography schemes, such as text steganography, are limited in data-encoding capacity.

Thus, they are not completely feasible for use in the system. Image steganography, on the other hand, provides a better encoding technique, as it can securely hide the secret message. It supports the data transmission process by securely transferring a hidden secret inside a digital image file. For this reason, image steganography is applied in this system together with the secret-ballot receipt scheme and visual cryptography in the voting stage, and is also used throughout the system for data transmission.
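A minimal sketch of the most common image-domain technique, least-significant-bit (LSB) embedding, on a flat list of 8-bit pixel values (a stand-in for real image I/O; the message framing is an illustrative assumption):

def embed(pixels, message):
    # Hide message bytes in the least significant bit of successive pixel values.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit           # overwrite only the lowest bit
    return stego

def extract(stego, num_bytes):
    # Recover num_bytes of hidden data from the LSBs.
    out = bytearray()
    for b in range(num_bytes):
        byte = 0
        for i in range(8):
            byte |= (stego[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = [120, 121, 119, 118, 125, 130, 128, 127] * 10   # fake 8-bit grayscale pixel values
stego = embed(cover, b"YES")
print(extract(stego, 3))                                 # b'YES'; pixel values change by at most 1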

Fig. 2 and Fig. 3 show the comparison of an original image and a stego-image with a barely noticeable difference: Fig. 2 is the original image, while Fig. 3 has been encoded with random encrypted words. Image steganography can be separated into two types based on its compression method: the image (spatial) domain and the transform (frequency) domain [13]. In the image domain, a message is embedded directly into a source image, which is then compressed with lossless compression, so none of the embedded information is altered in the compression phase. In the transform domain, on the other hand, the message is embedded into the image between the compression phases, with both lossy and lossless compression. In general, the transform domain is more robust than the image-domain technique because it eliminates the possibility of the message being destroyed during the compression process, when excess image data is removed (lossy compression).
Secret-ballot Receipts
The principle of the secret-ballot receipt lies in the concept that privacy violations should not occur at all in the election, to ensure its integrity and to prevent altered votes due to vote buying or selling and vote coercion [8], [14]. The technique provides a direct assurance of

each voter's vote. The implementation of the secret-ballot receipt scheme helps to reduce the need for physical security, audit and observation in several stages of the system, as it does not require any external hardware to complement its functionality. The original flow of the secret-ballot receipt scheme requires each voter to use an external hardware device (a printer) in order to retrieve and verify their cast ballot on a printed receipt. It also applies the same concept as Naor and Shamir's visual cryptography [9], where a secret, in this case a vote, is hidden in two separate layers of pixel symbols; the secret is revealed only when the layers are overlaid on one another. However, to minimize external hardware usage in the election process, instead of generating a printed receipt the system generates a digital receipt in an image format for each voter. Chaum also proposed using a mix-net scheme in conjunction with the secret-ballot receipt to ensure the integrity and privacy of the tallying process [8]. Here, the voter's chosen layer is passed in turn among a few trustees, who generate intermediate batches based on the voters' receipt batch as input. The final product of this process is a tally batch in the form of ballot images. However, the process is time-consuming because different trustees (servers) have to be included in the election procedure. Hence, a simple amendment to the applied mix-net scheme is introduced in this paper in order to fulfil the system's requirements by implementing a threshold decryption scheme.
References
1. H. Wei, Z. Dong, C. Ke-fei, "A receipt-free punch-hole ballot electronic voting scheme", Signal-Image Technologies and Internet-Based Systems (SITIS 2007), Third International IEEE Conference, pp. 355-360, Shanghai Jiaotong University, China, Dec. 16-18, 2007.
2. L. Langer, A. Schmidt, R. Araujo, "A pervasively verifiable online voting scheme", Informatik 2008, LNI, vol. 133, pp. 457-462, 2008.
3. www.ieeexplore.ieee.org

Lunar Reconnaissance Orbiter Miniature RF Technology Demonstration

Abstract of Lunar Reconnaissance Orbiter The Miniature Radio Frequency (Mini-RF) system is manifested on the Lunar Reconnaissance Orbiter (LRO) as a technology demonstration and an extended-mission science instrument. Mini-RF represents a significant step forward in spaceborne RF technology and architecture. It combines synthetic aperture radar (SAR) at two wavelengths (S-band and X-band) and two resolutions (150 m and 30 m) with interferometric and communications functionality in one lightweight (16 kg) package. Previous radar observations (Earth-based, and one bistatic data set from Clementine) of the permanently shadowed regions of the lunar poles seem to indicate areas of high circular polarization ratio (CPR) consistent with volume scattering from volatile deposits (e.g. water ice) buried at shallow (0.1-1 m) depth, but only at unfavorable viewing geometries, and with inconclusive results. The LRO Mini-RF utilizes a new wideband hybrid-polarization architecture to measure the Stokes parameters of the reflected signal. These data will help to differentiate true volumetric ice reflections from false returns due to angular surface regolith. Additional lunar science investigations (e.g. pyroclastic deposit characterization) will also be attempted during the LRO extended mission. LRO's lunar operations will be contemporaneous with India's Chandrayaan-1, which carries the Forerunner Mini-SAR (S-band wavelength and 150-m resolution), and bistatic radar (S-band) measurements may be possible. On-orbit calibration procedures for LRO Mini-RF have been validated using Chandrayaan-1 and ground-based facilities (the Arecibo and Green Bank radio observatories).
Why do we require Mini-RF technology?
The LRO Mini-RF payload will address key science questions during the LRO primary and extended missions. These include exploring the permanently shadowed polar regions and probing the lunar regolith in other areas of scientific interest. The nature and distribution of the permanently shadowed polar terrain of the Moon has been the subject of considerable controversy. Previous Earth-based observations show that when an incident circularly polarized radar wave is backscattered off an interface, the polarization state of the wave changes. For most surfaces, this leads to a return with more of the opposite-sense (OC) polarization than the same-sense (SC) polarization, so the circular polarization ratio (CPR = SC/OC) remains less than 1.
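A minimal numeric illustration of the CPR definition used above (the power values are invented for the example):

def circular_polarization_ratio(sc_power, oc_power):
    # CPR = same-sense / opposite-sense received power.
    return sc_power / oc_power

# Typical dry regolith returns more opposite-sense power, so CPR < 1; coherent
# backscatter from ice-rich (or very rough) terrain can push CPR above 1.
for sc, oc in [(0.4, 1.0), (1.3, 1.0)]:
    cpr = circular_polarization_ratio(sc, oc)
    print(cpr, "possible volume scattering" if cpr > 1 else "surface-like scattering")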

However, in weakly absorbing media (such as water ice) the radar signal can undergo a series of forward-scattering events off small imperfections in the material, each of which preserves the polarization properties of the signal. Those portions of the wavefront that are scattered along the same path but in opposite directions combine coherently to produce an increase in the SC radar backscatter. This coherent backscatter effect leads to large returns in the same-sense (SC) polarization and values of CPR that can exceed unity. Note that CPR > 1, while diagnostic of water ice, is not a unique signature of water ice: very rough, dry surfaces have also been observed to have CPRs that exceed unity. The CPR may increase in such regions due to double-bounce scattering between the surface and rock faces. Mini-RF technology is helpful in distinguishing between true ice and false ice signatures.
Mini-RF Instrument Description
The Mini-RF observations are made possible within the mass and power constraints imposed by LRO through the application of a number of technologies. Two key technologies are a wideband Microwave Power Module (MPM) based transmitter and a lightweight broadband antenna and polarization design. The Mini-RF instrument comprises the following elements: (1) antenna, (2) transmitter, (3) digital receiver/quadrature detector waveform synthesizer, (4) analog RF receiver, (5) control processor, (6) interconnection module, and (7) supporting harness, RF cabling, and structures. The Mini-RF architecture is unique in planetary radar: it transmits right-circularly polarized radiation and receives the horizontal (H) and vertical (V) polarization components coherently, which are then reconstructed as Stokes parameters during the data processing step. Both the communications and the radar astronomical objectives impose a requirement for circular polarization of the transmitted field. A conventional radar that measures CPR would have to be dual-circularly polarized on receive. The hybrid-polarity approach provides weight savings by eliminating circulator elements in the receiver paths, which reduces mass, increases RF efficiency, and minimizes cross-talk and other self-noise aspects of the received data. The H and V signals are passed directly to the ground-based processor.
References
1. www.spudislunarresources.com
2. www.lunar.gsfc.nasa.gov
3. www.ieeexplore.ieee.org


Vehicle-to-Grid V2G

Abstract of Vehicle-to-Grid V2G Electric-drive vehicles can be thought of as mobile, self-contained and, in the aggregate, highly reliable power resources. "Electric-drive vehicles" (EDVs) include three types: battery electric vehicles, the increasingly popular hybrids, and fuel-cell vehicles running on gasoline, natural gas, or hydrogen. All these vehicles contain power electronics which generate clean, 60 Hz AC power, at power levels from 10 kW (for the Honda Insight) to 100 kW (for GM's EV1). When vehicle power is fed into the electric grid, we refer to it as "vehicle-to-grid" power, or V2G. With the popularization of electric vehicles and the construction of charging stations, people's understanding of the electric vehicle and the charging station is no longer confined to transportation and the "gas station"; more extensive applications are being explored. The concept of V2G was first brought out by Willett Kempton of the University of Delaware. The initial goal of V2G was to provide peak power: electric vehicle owners charge their vehicles during low-load periods at a lower price and discharge them during peak-load periods at a higher price, so the vehicle owners can profit from the V2G project. The functions of the vehicle in the power grid were then expanded, and the conclusion was reached that the benefit of providing peak power is significantly less than that of providing ancillary services to the power grid. V2G research has also been carried out in other countries such as Denmark, Britain and Germany.
Introduction of Vehicle-to-Grid V2G
The first V2G requirement is the power connection. Battery vehicles must already be connected to the grid in order to recharge their batteries; adding V2G capability requires little or no modification to the charging station and no modification to the cables or connectors, and if the on-board power electronics are designed for this purpose the connection comes at essentially "zero incremental cost." The second requirement for V2G is control, so that the utility or system operator can request vehicle power exactly when needed. This is essential because vehicle power has value greater than the cost to produce it only if the buyer (the system operator) can determine the precise timing of dispatch. The automobile industry is moving towards making real-time communications a standard part of vehicles. This field, called "telematics", has already begun with luxury vehicles, and over a period of time it will be available for most new car models. Whether using built-in vehicle telematics or, in the interim, add-on communications, the vehicle could receive a radio signal from the grid operator indicating when power is needed. The third element, precise, certified, tamper-resistant metering, measures exactly how much power or ancillary service a vehicle provided, and at which times. The telematics could again be used to transmit meter readings back to the buyer for credit to the vehicle owner's account. Thinking about the metering of V2G expands the usual concept of a "utility meter." Electronic metering and telematics appear to have efficiency advantages in eliminating the meter reader, the transfer of billing data to the central computer, and the monthly meter-read cycle. More unnerving, electronic metering and telematics also eliminate the service address! An onboard meter would transmit its own serial number or account number with its readings, via telematics, and presumably

this would be billed in conjunction with a traditional metered account with a service address. A large-scale V2G system would automate the accounting and reconciliation of potentially millions of small transactions, similar to the recording and billing of calls from millions of cellular phone customers. Thus the mobile metered kWh or ancillary services would be added to or subtracted from the amount registered on the fixed meter to reconcile both billing amounts.
Concept Of V2G

The figure schematically illustrates connections between vehicles and the electric power grid. Electricity flows one way from generators through the grid to electricity users, and flows back to the grid from EDVs; with battery EDVs the flow is two-way (shown in the figure as lines with two arrows). The control signal from the grid operator (labelled ISO, for Independent System Operator) could be a broadcast radio signal, or travel through a cell-phone network, a direct Internet connection, or a power-line carrier. In any case, the grid operator sends requests for power to a large number of vehicles. The signal may go directly to each individual vehicle (shown schematically in the upper right of the figure), or to the office of a fleet operator, which in turn controls the vehicles in a single parking lot (lower right of the figure), or through a third-party aggregator of dispersed individual vehicles' power (not shown). (The grid operator also dispatches power from traditional central-station generators, using a voice telephone call or a T1 line, not shown in the figure.)
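A minimal sketch of the control element described above: a vehicle-side handler that answers a grid operator's power request only when the vehicle is plugged in and has charge to spare. All names, fields and thresholds are illustrative assumptions, not part of any real V2G protocol:

from dataclasses import dataclass

@dataclass
class Vehicle:
    plugged_in: bool
    battery_kwh: float          # energy currently stored
    reserve_kwh: float = 10.0   # energy the owner wants to keep for driving
    max_power_kw: float = 10.0  # limit of the on-board power electronics

def respond_to_dispatch(vehicle, requested_kw, duration_h):
    # Return the power (kW) the vehicle offers back to the grid for this request.
    if not vehicle.plugged_in or duration_h <= 0:
        return 0.0
    spare_energy = max(vehicle.battery_kwh - vehicle.reserve_kwh, 0.0)
    return min(requested_kw, vehicle.max_power_kw, spare_energy / duration_h)

ev = Vehicle(plugged_in=True, battery_kwh=24.0)
print(respond_to_dispatch(ev, requested_kw=8.0, duration_h=1.0))   # offers 8.0 kW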

E-Waste

Abstract of E-Waste Electronic waste, or e-waste, is any broken or unwanted electrical or electronic appliance. E-waste includes computers, entertainment electronics, mobile phones and other items that have been discarded by their original users. E-waste is the inevitable by-product of a technological revolution. Driven primarily by faster, smaller and cheaper microchip technology, society is experiencing an evolution in the capability of electronic appliances and personal electronics. For all its benefits, innovation brings with it the by-product of rapid obsolescence. According to the EPA, nationally, an estimated 5 to 7 million tons of computers, televisions, stereos, cell phones, electronic appliances, toys and other electronic gadgets become obsolete every year. According to various reports, electronics comprise approximately 1-4 percent of the municipal solid waste stream, and the electronic waste problem will continue to grow at an accelerated rate. In short, e-waste refers to electronic products discarded by consumers.
Introduction of E-Waste
E-waste is the most rapidly growing waste problem in the world. It is a crisis not of quantity alone but also a crisis born of toxic ingredients, posing a threat to occupational health as well as to the environment. Rapid technology change, low initial cost and a high obsolescence rate have resulted in a fast-growing problem around the globe. A legal framework and a proper collection system are missing, imports regularly arrive at the recycling markets, and recycling often takes place under inhuman working conditions. Between 1997 and 2007, nearly 500 million personal computers became obsolete, almost two computers for each person. 750,000 computers are expected to end up in landfills this year alone. In 2005, 42 million computers were discarded: 25 million went into storage, 4 million were recycled, 13 million were landfilled and 0.5 million were incinerated.

IT and telecom are the two fastest-growing industries in the country. India, by 2008, was expected to achieve a PC penetration of 65 per 1,000 people, up from the existing 14 per 1,000 (MAIT). At present, India has 15 million computers, with a target of 75 million computers by 2010. Over 2 million old PCs are ready for disposal in India, and the life of a computer has been reduced from 7 years to 3-5 years.
E-Waste: Growth
There are over 75 million mobile users at present, expected to increase to 200 million by the end of 2007. Memory devices, MP3 players, iPods, etc. are the newer additions. Preliminary estimates suggest that total WEEE generation in India is approximately 1,46,000 (146,000) tonnes per year.
E-Waste: Its Implications
Electronic products often contain hazardous and toxic materials that pose environmental risks if they are landfilled or incinerated. Televisions and video and computer monitors use cathode ray tubes (CRTs), which contain significant amounts of lead. Printed circuit boards contain primarily plastic and copper, and most have small amounts of chromium, lead solder, nickel, and zinc. In addition, many electronic products have batteries that often contain nickel, cadmium, and other heavy metals. Relays and switches in electronics, especially older ones, may contain mercury. Also, capacitors in some types of older and larger equipment that is now entering the waste stream may contain polychlorinated biphenyls (PCBs). You can reduce the environmental impact of your e-waste by making changes in your buying habits and looking for ways to reuse, including donating or recycling. Preventing waste in the first place is the preferred waste management option. Consider, for example, upgrading or repairing instead of buying new equipment, to extend the life of your current equipment and perhaps save money. If you must buy new equipment, consider donating your still-working, unwanted electronic equipment. This reuse extends the life of the products and allows non-profits, churches, schools and community organizations to have equipment they otherwise may not be able to afford. In South Carolina, for example, Habitat for Humanity Resale Stores, Goodwill and other similar organizations may accept working computers. When buying new equipment, check with the retailer or manufacturer to see if they have a "take-back program" that allows consumers to return old equipment when buying new equipment. Dell Computers, for example, became the first manufacturer to set up a program to take back any of its products anywhere in the world at no charge to the consumer. And, when buying, consider products with longer warranties as an indication of long-term quality.

SuperCapacitor

Abstract of SuperCapacitor A supercapacitor, also known as an electric double-layer capacitor (EDLC), super condenser, pseudocapacitor, electrochemical double-layer capacitor or ultracapacitor, is an electrochemical capacitor with relatively high energy density. Compared to conventional electrolytic capacitors the energy density is typically on the order of hundreds of times greater. In comparison with conventional batteries or fuel cells, EDLCs also have a much higher power density. In this article the use of supercapacitors as a hybrid power supply for various applications is presented. The main application is in the field of automation. The specific power of supercapacitors and their long lifetime (about one million cycles) make them very attractive for the start-up of automobiles. Unfortunately, the specific energy of this component is very low; for that reason the technology is combined with a battery to supply the starter-alternator. Introduction of Super Capacitor Supercapacitors, also known as electric double-layer capacitors, electrochemical double-layer capacitors (EDLCs) or ultracapacitors, are electrochemical capacitors that have an unusually high energy density when compared to common capacitors, typically on the order of thousands of times greater than a high-capacity electrolytic capacitor. For instance, a typical electrolytic capacitor will have a capacitance in the range of tens of millifarads. A supercapacitor of the same size would have a capacitance of several farads, an improvement of about two or three orders of magnitude in capacitance but usually at a lower working voltage. Larger, commercial electric double-layer capacitors have capacities as high as 5,000 farads.

In a conventional capacitor, energy is stored by the removal of charge carriers, typically electrons, from one metal plate and their deposition on another. This charge separation creates a potential between the two plates, which can be harnessed in an external circuit. The total energy stored in this fashion

increases with both the amount of charge stored and the potential between the plates. The amount of charge stored per unit voltage is essentially a function of the size, the distance and the material properties of the plates and of the material in between the plates (the dielectric), while the potential between the plates is limited by breakdown of the dielectric. The dielectric controls the capacitor's voltage, and optimizing the material leads to higher energy density for a given size of capacitor. EDLCs do not have a conventional dielectric. Rather than two separate plates separated by an intervening substance, these capacitors use "plates" that are in fact two layers of the same substrate, and their electrical properties, the so-called "electrical double layer", result in the effective separation of charge despite the vanishingly thin (on the order of nanometers) physical separation of the layers. The lack of need for a bulky layer of dielectric permits the packing of plates with much larger surface area into a given size, resulting in high capacitances in practical-sized packages. Supercapacitor technology is based on the electric double-layer phenomenon, which has been understood for over a hundred years; however, it has only been exploited in commercial applications for about ten years. As in a conventional capacitor, in an ultracapacitor two conductors and a dielectric generate an electric field where energy is stored. The double layer is created at a solid electrode-solution interface: it is, essentially, a charge separation that occurs at the interface between the solid and the electrolyte. Two charge layers are formed, with an excess of electrons on one side and an excess of positive ions on the other side. The polar molecules that reside in between form the dielectric. In most ultracapacitors, the electrode is carbon combined with an electrolyte. The layers that form the capacitor plates' boundaries, as well as the small space between them, create a very high capacitance. In addition, the structure of the carbon electrode, which is typically porous, increases the effective surface area to about 2000 m²/g.
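These relationships can be summarised with the standard capacitor formulas (a back-of-the-envelope illustration using a typical EDLC cell voltage, not figures taken from the original text):

C = \varepsilon_0 \varepsilon_r \frac{A}{d}, \qquad Q = C V, \qquad E = \frac{1}{2} C V^2 = \frac{Q^2}{2C}

so the stored energy E grows with both the charge Q and the potential V, while the capacitance C grows with the plate area A and shrinks with the plate separation d. Assuming a typical EDLC cell voltage of about 2.7 V, a 5,000 F supercapacitor stores roughly 0.5 x 5000 x 2.7^2 ≈ 18 kJ (about 5 Wh), which illustrates the combination of very high capacitance and low working voltage noted above.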

Green Technology Super Capacitors: The activated carbon currently used is unsustainable and expensive. Biochar is viewed as a green alternative to the activated carbon used in supercapacitor electrodes. Unlike activated carbon, biochar is a by-product of the pyrolysis process used to produce biofuels; it is non-toxic and will not pollute the soil when it is discarded. Biochar costs almost half as much as activated carbon, and is more sustainable because it reuses the waste from biofuel production, a process with sustainable intentions to begin with.

Smart Antenna

Definition A smart antenna consists of several antenna elements whose signals are processed adaptively in order to exploit the spatial domain of the mobile radio channel. Usually the signals received at the different antenna elements are multiplied by complex weights w and then summed; the weights are chosen adaptively. It is not the antenna itself but the whole antenna system, including the signal processing, that is called "adaptive". Types of Smart Antenna Systems: Terms commonly heard today that embrace various aspects of smart antenna system technology include intelligent antennas, phased arrays, SDMA, spatial processing, digital beamforming, adaptive antenna systems and others. Smart antenna systems are customarily categorized, however, as either switched-beam or adaptive array systems. The following are the distinctions between the two major categories of smart antennas regarding the choice of transmit strategy: Switched beam - a finite number of fixed, predefined patterns or combining strategies (sectors). Adaptive array - an infinite number of patterns (scenario-based) that are adjusted in real time. Smart Antennas for TDMA: In a conventional time division multiple access (TDMA) or frequency division multiple access (FDMA) cellular system, carrier frequencies that are used in one cell cannot be reused in the neighboring cells, because the resulting co-channel interference would be too strong. Rather, those frequencies are reused at a greater distance. The distance (related to the cell radius) between two base stations which use the same carrier frequency is called the reuse distance D/R. The number of cells that have to use different carrier frequencies is called the cluster size N or reuse factor. Typically, a signal-to-noise-and-interference ratio (SNIR) of 10 dB is required for each user, resulting in a cluster size of 3 or more (N ≥ 3) for sector cells.

Smart Antennas for TDMA (2): The increase in capacity can now be accomplished in different ways. One possibility is so-called spatial filtering for interference reduction (SFIR), whereby base stations using the same carrier frequencies can be placed closer together.
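To make the weight-and-sum operation described at the start of this section concrete, here is a minimal Python sketch of a narrowband beamformer: the element signals are multiplied by complex weights and summed. For simplicity the weights are fixed delay-and-sum weights steered toward a chosen direction; a truly adaptive array would update them from the received data (e.g., with an LMS or MMSE criterion). The array geometry, element count and signal model are illustrative assumptions, not taken from the text.

import numpy as np

def steering_vector(n_elements, d_over_lambda, theta_deg):
    # Phase progression across a uniform linear array for a plane wave
    # arriving from angle theta measured from broadside.
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg)))

def beamform(x, weights):
    # x: (n_elements, n_samples) element signals; output is the weighted sum.
    return weights.conj() @ x

# Example: 8 elements, half-wavelength spacing, desired user at 20 degrees.
n_el, n_samp = 8, 1000
a_desired = steering_vector(n_el, 0.5, 20.0)
weights = a_desired / n_el                      # delay-and-sum (non-adaptive) weights

rng = np.random.default_rng(0)
user = np.exp(2j * np.pi * 0.01 * np.arange(n_samp))          # desired narrowband signal
noise = (rng.standard_normal((n_el, n_samp)) +
         1j * rng.standard_normal((n_el, n_samp))) / np.sqrt(2)
x = np.outer(a_desired, user) + 0.3 * noise                    # received element signals
y = beamform(x, weights)                                       # array output, spatial gain toward 20 degrees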

Black-Box
Abstract of Black-Box As technology progresses, the speed of travel also increases, and source and destination have become much closer to each other. One of the main advances in travel is the airplane, a major achievement of technology. But as speed increases, the horror of an air crash is also introduced: a crash from a height of 2,000 m or more is a terror for anybody. So, to take feedback from the various activities that happen in a plane and record them, engineers needed a mechanism to record such activities. With any airplane crash, there are many unanswered questions as to what brought the plane down. Investigators turn to the airplane's flight data recorder (FDR) and cockpit voice recorder (CVR), also known as "black boxes," for answers. In Flight 261, the FDR contained 48 parameters of flight data, and the CVR recorded a little more than 30 minutes of conversation and other audible cockpit noises. Introduction of Black-Box In almost every commercial aircraft, there are several microphones built into the cockpit to track the conversations of the flight crew. These microphones are also designed to track any ambient noise in the cockpit, such as switches being thrown or any knocks or thuds. There may be up to four microphones in the plane's cockpit, each connected to the cockpit voice recorder (CVR).

Any sounds in the cockpit are picked up by these microphones and sent to the CVR, where the recordings are digitized and stored. There is also another device in the cockpit, called the associated control unit, that provides pre-amplification for audio going to the CVR.

Here are the positions of the four microphones:


Pilot's headset
Co-pilot's headset
Headset of a third crew member (if there is a third crew member)
Near the center of the cockpit, where it can pick up audio alerts and other sounds

Most magnetic-tape CVRs store the last 30 minutes of sound. They use a continuous loop of tape that completes a cycle every 30 minutes. As new material is recorded, the oldest material is replaced. CVRs that use solid-state storage can record two hours of audio; similar to the magnetic-tape recorders, solid-state recorders also record over old material. Flight Data Recorders: The flight data recorder (FDR) is designed to record the operating data from the plane's systems. There are sensors wired from various areas on the plane to the flight-data acquisition unit, which is wired to the FDR. When a switch is turned on or off, that operation is recorded by the FDR. In the United States, the Federal Aviation Administration (FAA) requires that commercial airlines record a minimum of 11 to 29 parameters, depending on the size of the aircraft. Magnetic-tape recorders have the potential to record up to 100 parameters, while solid-state FDRs can record more than 700. On July 17, 1997, the FAA issued a Code of Federal Regulations rule that requires the recording of at least 88 parameters on aircraft manufactured after August 19, 2002. Here are a few of the parameters recorded by most FDRs:

Time
Pressure altitude
Airspeed
Vertical acceleration
Magnetic heading
Control-column position
Rudder-pedal position
Control-wheel position
Horizontal stabilizer
Fuel flow

Solid-state recorders can track more parameters than magnetic tape because they allow for a faster data flow. Solid-state FDRs can store up to 25 hours of flight data, and each additional parameter recorded by the FDR gives investigators one more clue about what happened. Source: http://travel.howstuffworks.com/faa.htm
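The "record over the oldest material" behaviour of both the magnetic-tape loop and the solid-state recorders described above is essentially a circular buffer. Below is a minimal Python sketch of that idea; the capacity and the use of one sample per minute are illustrative simplifications, not the actual CVR format.

from collections import deque

class LoopRecorder:
    """Keeps only the most recent `capacity` samples, overwriting the oldest ones."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def record(self, sample):
        # Once the deque is full, appending silently drops the oldest sample.
        self.buffer.append(sample)

    def dump(self):
        # What investigators recover: only the last `capacity` samples.
        return list(self.buffer)

# Example: a 30-"minute" loop, where one sample stands in for one minute of audio.
cvr = LoopRecorder(capacity=30)
for minute in range(45):              # 45 minutes of flight
    cvr.record(f"audio@min{minute}")
print(cvr.dump()[0])                  # -> audio@min15: the first 15 minutes are gone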

Adaptive Missile Guidance Using GPS

Abstract of Adaptive Missile Guidance Using GPS In the modern-day theatre of combat, the need to be able to strike at targets that are on the opposite side of the globe has strongly presented itself. This has led to the development of various types of guided missiles. These guided missiles are self-guiding weapons intended to maximize damage to the target while minimizing collateral damage; the buzzword in modern-day combat is "fire and forget". GPS-guided missiles, using the exceptional navigational and surveying abilities of GPS, could after launch deliver a warhead to any part of the globe via the interface of the onboard computer in the missile with the GPS satellite system. Many modern-day guided weapons were designed under this principle; laser-guided missiles, by contrast, use a laser of a certain frequency bandwidth to acquire their target. GPS/inertial weapons are oblivious to the effects of weather, allowing a target to be engaged at the time of the attacker's choosing. GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles and precision-guided munitions. Artillery projectiles with embedded GPS receivers able to withstand accelerations of 12,000 g have been developed for use in 155 mm artillery. GPS signals can also be affected by multipath issues, where the radio signals reflect off surrounding terrain: buildings, canyon walls, hard ground, etc. These delayed signals can cause inaccuracy. A variety of techniques, most notably narrow correlator spacing, have been developed to mitigate multipath errors. Multipath effects are much less severe in moving vehicles: when the GPS antenna is moving, the false solutions using reflected signals quickly fail to converge and only the direct signals result in stable solutions. Multiple independently targetable re-entry vehicle (MIRV) ICBMs with many sub-missiles were developed in the late 1960s. The cruise missile has wings like an airplane, making it capable of flying at low altitudes. In summary, GPS-INS guided weapons are not affected by harsh weather conditions or restricted by a wire, nor do they leave the gunner vulnerable to attack. GPS-guided weapons, with their technological advances over previous generations, are the superior weapon of choice in modern-day warfare. Introduction of Adaptive Missile Guidance Using GPS Guided missile systems have evolved at a tremendous rate over the past four decades, and recent breakthroughs in technology ensure that smart warheads will have an increasing role in maintaining our military superiority. On ethical grounds, one prays that each warhead deployed during a sortie will strike only its intended target, and that innocent civilians will not be harmed by a misfire. From a tactical standpoint, our military desires weaponry that is reliable and effective, inflicting maximal damage on valid military targets and ensuring our capacity for lightning-fast strikes with pinpoint accuracy. Guided missile systems help fulfill all of these demands. Many of the early guidance systems used in missiles were based on gyroscope models, many of which used magnets in their gyroscopes to increase the sensitivity of the navigational array. In modern-day warfare, the inertial measurements of the missile are still controlled by a gyroscope in one form or another, but the method by which the missile approaches the target bears a technological edge. On the battlefield of today, guided missiles are guided to or acquire their targets by using:

Radar signals
Wires
Lasers
Most recently, GPS
The central idea behind the design of DGPS/GPS/inertial guided weapons is that of using a 3-axis gyro/accelerometer package as an inertial reference for the weapon's autopilot, and correcting the accumulated drift error in the inertial package by using GPS PPS/P-code. Such weapons are designated "accurate" munitions, as they will offer CEPs (Circular Error Probable) of the order of the accuracy of GPS P-code signals, typically about 40 ft. Global Positioning System used in ranging navigation guidance: The next incremental step is then to update the weapon before launch with a DGPS-derived position estimate, which will allow it to correct its GPS error as it flies to the target. Such weapons are designated "precise" and will offer accuracies better than laser- or TV-guided weapons, potentially CEPs of several feet. For an aircraft to support such munitions, it will require a DGPS receiver, a GPS receiver and interfaces on its multiple ejector racks or pylons to download target and launch-point coordinates to the weapons. The development of purely GPS/inertial guided munitions will produce substantial changes in how air warfare is conducted.
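The idea of correcting the inertial package's accumulated drift with GPS fixes can be illustrated with a very simple one-dimensional complementary filter. Real systems integrate full 3-axis inertial navigation with Kalman filtering; the update rate, noise levels and blending gain below are purely illustrative assumptions.

import random

def ins_gps_blend(true_positions, dt=0.1, gps_period=10, alpha=0.2):
    """Dead-reckon from a noisy velocity sensor, then nudge toward occasional GPS fixes."""
    est = true_positions[0]
    estimates = []
    for k in range(1, len(true_positions)):
        true_vel = (true_positions[k] - true_positions[k - 1]) / dt
        measured_vel = true_vel + random.gauss(0.0, 0.5)   # noisy inertial velocity measurement
        est += measured_vel * dt                           # inertial prediction: error accumulates
        if k % gps_period == 0:                            # periodic GPS fix: noisy but bounded error
            gps_fix = true_positions[k] + random.gauss(0.0, 2.0)
            est = (1 - alpha) * est + alpha * gps_fix      # blend away the accumulated drift
        estimates.append(est)
    return estimates

# Example: a straight track at 30 m/s for 60 s, sampled every 0.1 s.
truth = [30.0 * 0.1 * k for k in range(600)]
print(ins_gps_blend(truth)[-1])       # stays close to truth[-1] instead of drifting away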

Unlike a laser-guided weapon, a GPS/inertial weapon does not require that the launch aircraft remain in the vicinity of the target to illuminate it for guidance. GPS/inertial weapons are true fire-and-forget weapons which, once released, are wholly autonomous and all-weather capable with no degradation in accuracy, whereas existing precision weapons require an unobscured line of sight between the weapon and the target for the optical guidance to work. The proliferation of GPS and INS guidance is a double-edged sword. On the one hand, this technology promises a revolution in air warfare not seen since the laser-guided bomb, with single bombers being capable of doing the task of multiple aircraft packages. In summary, GPS-INS guided weapons are not affected by harsh weather conditions or restricted by a wire, nor do they leave the gunner vulnerable to attack. GPS-guided weapons, with their technological advances over previous generations, are the superior weapon of choice in modern-day warfare.

Autonomous Underwater Vehicle

Abstract of Autonomous Underwater Vehicle The demand for a more sophisticated underwater robotic technology that minimizes cost, eliminates the need for a human operator and is therefore capable of operating autonomously has become apparent. These requirements led to the development of Autonomous Underwater Vehicles (AUVs). A key problem with autonomous underwater vehicles is being able to navigate in a generally unknown environment. The available underwater sensor suites have a limited capability to cope with such a navigation problem; in practice, no single sensor in the underwater environment can provide the level of accuracy, reliability and coverage of information necessary to perform underwater navigation with complete safety. In order to navigate accurately an AUV needs to employ a navigation sensor with a high level of accuracy and reliability. It is therefore necessary to use a number of sensors and combine their information to provide the necessary navigation capability. To achieve this, a multisensor data fusion (MSDF) approach, which combines data from multiple sensors and related information from associated databases, can be used. The aim of this paper is to survey previous work and recent developments in AUV navigation and to introduce MSDF techniques as a means of improving the AUV's navigation capability. Introduction of Autonomous Underwater Vehicle Dead reckoning is a mathematical means to determine position estimates when the vehicle starts from a known point and moves at known velocities; the present position is equal to the time integral of the velocity. Measurement of the vector velocity components of the vehicle is usually accomplished with a compass (to obtain direction) and a water-speed sensor (to obtain magnitude). The principal problem is that the presence of an ocean current can add a velocity component to the vehicle which is not detected by the speed sensor. An Inertial Navigation System (INS) is a dead-reckoning technique that obtains position estimates by integrating the signal from an accelerometer, which measures the vehicle's acceleration; the vehicle position is obtained by double integration of the acceleration. The orientation of the accelerometer is governed by means of a gyroscope, which maintains either a fixed or turning position as prescribed by some steering function. The orientation may also, in principle, be determined by integration of the angular rates of the gyroscope. Both the accelerometer and the gyroscope depend on inertia for their operation. A dead-reckoning navigation system is attractive mainly because it uses sensors that are able to provide fast dynamic measurements. Unfortunately, in practice this integration leads to unbounded growth in position error with time due to the noise associated with the measurements and the nonlinearity of the sensors, and there is no built-in method for reducing this error. Two types of dead-reckoning sensors have been widely employed in AUVs: Inertial Measurement Units (IMUs) and Doppler velocity sonar (DVS). DVS sensors provide measurement of a velocity vector with respect to the sea floor. However, these results can only be achieved when the speed of

sound in the AUV's area of operation does not vary significantly as a result of changes in the salinity, temperature and density of the water. Radio Navigation: Radio navigation systems mainly use the Global Positioning System (GPS). GPS is a satellite-based navigational system that provides the most accurate open-ocean navigation available. GPS consists of a constellation of 24 satellites that orbit the earth in 12 hours. The GPS-based navigation system is used extensively in surface vessels, as these vehicles can directly receive the signals radiated by the GPS satellites. Unfortunately, these signals have a limited water-penetrating capability. Therefore, to receive the signals, an antenna associated with an AUV employing a GPS system must be clear and free of water. There are three possible antenna configurations to meet this requirement: fixed, retractable or expendable antennas. A fixed antenna is a non-moving antenna placed on the outside of the AUV; the AUV has to surface to expose this antenna and stay surfaced until the required information has been received and processed adequately. A retractable antenna is one that the AUV deploys while still submerged; when the required information is received, the antenna is retracted back into the AUV. The expendable antenna works along the same principle as the retractable antenna, except that it is used once and discarded. When required, another antenna is deployed.
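A minimal sketch of the dead-reckoning computation described at the start of this section: heading from a compass, speed from a water-speed sensor, and position obtained by integrating the velocity over time. The sample data, the flat x/y frame and the neglect of ocean currents are illustrative assumptions.

import math

def dead_reckon(x0, y0, samples, dt=1.0):
    """samples: list of (speed_m_s, heading_deg) pairs; returns the estimated track."""
    x, y = x0, y0
    track = [(x, y)]
    for speed, heading_deg in samples:
        heading = math.radians(heading_deg)
        x += speed * math.sin(heading) * dt   # east component of velocity, integrated
        y += speed * math.cos(heading) * dt   # north component of velocity, integrated
        track.append((x, y))
    return track

# Example: 10 s at 2 m/s due north, then 10 s at 2 m/s due east.
samples = [(2.0, 0.0)] * 10 + [(2.0, 90.0)] * 10
print(dead_reckon(0.0, 0.0, samples)[-1])     # ~(20.0, 20.0); an undetected current would bias this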

Silicon-on-Plastic

Abstract of Silicon on Plastic Plastic substrates are thinner, lighter, shatterproof, flexible, rollable and foldable, making silicon-on-plastic an enabling technology for new applications and products. This paper studies the development of silicon-on-plastic technology. Advances in poly-silicon technology have expanded TFT (thin film transistor) technology to high-speed electronics applications such as smart cards, RFID tags, portable imaging devices, photovoltaic devices, solid-state lighting and other integrated circuit functions. The challenge of silicon-on-plastic technology is to overcome the fact that plastic melts at the temperature required to build transistors in conventional TFT processes. Technological innovations have been made to accommodate silicon processing at low temperatures. This paper describes an innovative ultra-low-temperature poly-silicon TFT process on plastic substrates. Key technologies include near-room-temperature silicon and oxide deposition steps, laser crystallization and dopant activation. Manufacturing issues related to plastic material compatibility in a TFT process are reviewed, and lamination and de-lamination of plastic wafers to glass carrier wafers for manufacturability is discussed. An active-matrix TFT backplane will be fabricated with an OLED (Organic Light Emitting Diode) display to demonstrate this technology. Introduction of Silicon on Plastic Currently, amorphous silicon thin film transistors (TFTs) on glass are predominantly used in the flat panel display industry for notebook computers, mobile phones, PDAs (Personal Digital Assistants) and other handheld devices. Today, flat panels made with amorphous TFT technology are replacing desktop computer CRT (Cathode Ray Tube) monitors at an ever-increasing rate. Amorphous TFT technology applications are limited due to its inherently low electron mobility. Applications that require integration of display drivers, such as hand-held camcorder and cell phone displays, are using poly-silicon-based TFTs for cost and space savings; this eliminates the need for costly assembly of conventional silicon chips onto the amorphous TFT display panels. Advances in poly-silicon technology have expanded TFT technology to high-speed electronics applications such as smart cards, RFID tags and other integrated circuit functions. Recently developed ultra-low-temperature poly-silicon TFT technology can be applied on both glass and plastic substrates. The plastic substrates are thinner, lighter, shatterproof, flexible, rollable and foldable, making silicon-on-plastic an enabling technology for new applications and products. Some of the possibilities are roll-up/down displays; lightweight, thin wall-mounted TVs; electronic newspapers; and wearable display/computing devices. Moreover, plastic substrates offer the potential of roll-to-roll (R2R) manufacturing, which can reduce manufacturing cost substantially compared to conventional plate-to-plate (P2P) methods. Other possibilities include smart cards, RFID tags, portable imaging devices, photovoltaic devices and solid-state lighting. The challenge of silicon-on-plastic technology is to overcome the fact that plastic melts at the temperature required to build transistors in conventional TFT processes. The ultra-low-temperature process is compatible with plastic substrates and offers good TFT performance. Technological innovations have been made to accommodate silicon processing at low temperatures.

Low temperature (< 100 °C) gate oxide deposition: A proprietary deposition machine and a compatible process were developed to deposit high-quality TFT gate oxides at sub-100 °C temperatures. It is a special PECVD (Plasma-Enhanced Chemical Vapor Deposition) system with an added plasma source configuration akin to ECR (Electron Cyclotron Resonance) to generate high-density plasma at low temperature. The process is optimized to provide high-density plasma for silicon dioxide deposition using SiH4 and O2. The gate oxide film at 100 nm thickness has a breakdown voltage of more than 70 V, while the gate leakage current density is less than 60 nA/cm2 at 20 V bias. The as-deposited gate oxide shows good C-V characteristics, with a small amount of hysteresis observed before annealing takes place. A pre-oxidation plasma treatment step, using a mixture of H2 and O2 to grow a very thin oxide with acceptable interface states at the interface between the deposited silicon and the gate oxide, was added to the process flow. Sufficiently high-density plasma must be generated in order to grow oxide of any significant thickness. The chuck is cooled to 20 °C to keep the plastic temperature below 100 °C during the entire pre-oxidation and deposition process. The cleanliness of the Si surface is critical prior to the oxidation process. The results show the difference between gate oxides with and without pre-oxidation: with pre-oxidation, we obtain an oxide C-V curve very close to the one calculated theoretically.
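As a quick sanity check on the figures quoted above (a back-of-the-envelope calculation, not additional measured data), the electric fields implied by a 100 nm oxide are

E_{\mathrm{bd}} > \frac{70\ \mathrm{V}}{100\ \mathrm{nm}} = 7\ \mathrm{MV/cm}, \qquad E_{\mathrm{op}} = \frac{20\ \mathrm{V}}{100\ \mathrm{nm}} = 2\ \mathrm{MV/cm},

i.e. the low-temperature film withstands a breakdown field above 7 MV/cm while leaking less than 60 nA/cm2 at an operating field of 2 MV/cm, which supports the claim that it can serve as a TFT gate dielectric.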

A XeCl excimer laser is used to crystallize sputtered silicon on plastic, thereby forming large poly-silicon grains for TFTs with much higher mobility than their amorphous counterparts. The extremely short laser pulses provide sufficient energy to melt the deposited Si, while the subsequent cooling forms a polycrystalline structure. This crystallization technique is similar to poly-silicon formation on glass; the challenge with plastic substrates is to melt the deposited silicon while preserving the structural quality of the underlying base material.

BlueStar

Abstract of BlueStar An important challenge in defining the BlueStar architecture is that both Bluetooth and WLANs employ the same 2.4 GHz ISM band and can therefore degrade each other's performance. The interference generated by WLAN devices over the Bluetooth channel is called persistent interference, while the presence of multiple piconets in the vicinity creates interference referred to as intermittent interference. To combat both of these interference sources and provide effective coexistence, the authors propose a hybrid approach combining adaptive frequency hopping (AFH) with a new mechanism called Bluetooth carrier sense (BCS) in BlueStar. AFH seeks to mitigate persistent interference by scanning the channels during a monitoring period, while BCS takes care of intermittent interference by sensing the channel before transmission. BlueStar takes advantage of the widely available WLAN installed base, as it is advantageous to use pre-existing WLAN infrastructure; this can easily support long-range, large-scale mobility as well as provide uninterrupted access to Bluetooth devices. Most recent solutions do not tackle the issue of simultaneous operation of Bluetooth and WLANs; that is, either Bluetooth or WLANs - but not both - can access the wireless medium at a time. The proposed architecture enables simultaneous operation by using existing WLAN hardware infrastructure. BlueStar enables Bluetooth devices, belonging to either a piconet or a scatternet, to access the WAN through the BWG without the need for any fixed Bluetooth access points, while utilizing the widely deployed base of IEEE 802.11 networks. Introduction of BlueStar BlueStars produces a mesh-like connected scatternet with multiple routes between pairs of nodes. It is a distributed solution: all the nodes participate in the formation of the scatternet, but they do so with minimal, local topology knowledge (nodes only know about their one-hop neighbors). BlueStars, a scatternet formation protocol for multi-hop Bluetooth networks, overcomes the drawbacks of previous solutions in that it is fully distributed, does not require each node to be in the transmission range of every other node, and generates a scatternet whose topology is a mesh. The protocol proceeds in three phases: 1. The first phase, topology discovery, concerns the discovery of neighboring devices. This phase allows each device to become aware of its one-hop neighbors' IDs and weights. By the end of this phase, neighboring devices acquire a "symmetric" knowledge of each other.

2. The second phase takes care of BlueStar (piconet) formation. Given that each piconet is formed by one master and a limited number of slaves that form a star-like topology, we call this phase of the protocol the BlueStars formation phase. Based on the information gathered in the previous phase, namely the ID, the weight, and the synchronization information of the discovered neighbors, each device performs the protocol locally. A device decides whether it is going to be a master or a slave depending on the decisions made by the neighbors with bigger weights. By the end of this phase, the whole network is covered by disjoint piconets.

3. The final phase concerns the selection of gateway devices to connect multiple BlueStars. The purpose of the third phase of our protocol is to interconnect neighboring BlueStars by selecting inter-piconet gateway devices so that the resulting scatternet is connected whenever physically possible. The main task accomplished by this phase of the protocol is gateway selection and interconnection.

Bluetooth carrier sense (BCS): BlueStar employs carrier sense so that intermittent-like interference can be avoided. Carrier sensing is fundamental to any efficient interference mitigation with other technologies using the same ISM frequency band, and among Bluetooth piconets themselves. The authors incorporate BCS into Bluetooth without any modifications to the current slot structure. Before starting packet transmission, the next channel is checked (i.e., sensed) during the turnaround time of the current slot. If the next channel is busy or becomes busy during the sense window, the sender simply withholds any attempt at packet transmission, skips the channel and waits for the next chance; otherwise, packet transmission is carried out. A direct consequence of this approach is that, eventually, an ARQ (automatic retransmission request) packet will be sent when the slot is clear and the communication is carried out. Future work on BlueStar includes defining a more elaborate capacity allocation algorithm. In addition, the authors plan to investigate the correlation among the various simulation parameters in order to assess their impact on BCS and AFH. Mobility of both IEEE 802.11 and Bluetooth devices and its impact on both systems is also part of their future research.
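A minimal sketch of the BCS rule described above: sense the upcoming hop channel during the turnaround time of the current slot and transmit only if it is clear. The channel and packet abstractions here are invented for illustration; they are not the actual Bluetooth baseband API.

def send(channel, packet):
    """Placeholder radio primitive (assumed, not a real API call)."""
    print(f"TX on channel {channel}: {packet}")

def bcs_transmit(packet, hop_sequence, channel_is_busy, max_slots=10):
    """Try to send `packet`, skipping any hop whose channel is sensed busy."""
    for slot, channel in enumerate(hop_sequence[:max_slots]):
        # Sense the next channel during the turnaround time of the current slot.
        if channel_is_busy(channel):
            continue                      # withhold transmission, wait for the next hop
        send(channel, packet)             # channel clear: transmit in this slot
        return slot                       # ARQ/acknowledgement then proceeds as usual
    return None                           # gave up within max_slots

# Example: channels 3 and 7 are suffering persistent WLAN interference.
busy = {3, 7}
slot_used = bcs_transmit("data", hop_sequence=[3, 7, 21, 40],
                         channel_is_busy=busy.__contains__)   # transmits on channel 21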

Intervehicle Communication

Abstract of Intervehicle Communication Intervehicle Communication (IVC) is attracting considerable attention from the research community and the automotive industry, where it is beneficial in providing intelligent transportation systems (ITS) as well as driver and passenger assistance services. ITS aim to streamline the operation of vehicles, manage vehicle traffic, and assist drivers with safety and other information, along with providing convenience applications for passengers such as automated toll collection systems, driver assist systems and other information provisioning systems. In this context, Vehicular Ad hoc NETworks (VANETs) are emerging as a new class of wireless network, spontaneously formed between moving vehicles equipped with wireless interfaces that could have similar or different radio interface technologies, employing short-range to medium-range communication systems. A VANET is a form of mobile ad hoc network, providing communications among nearby vehicles and between vehicles and nearby fixed equipment on the roadside. Vehicular networks are a novel class of wireless networks that have emerged thanks to advances in wireless technologies and in the automotive industry. Vehicular networks are spontaneously formed between moving vehicles equipped with wireless interfaces that could be of homogeneous or heterogeneous technologies. These networks, also known as VANETs, are considered one of the real-life applications of ad hoc networks, enabling communications among nearby vehicles as well as between vehicles and nearby fixed equipment, usually described as roadside equipment. Introduction of Intervehicle Communication Vehicular networks can be deployed by network operators and service providers or through integration between operators, providers and a governmental authority. Recent advances in wireless technologies and the current and advancing trends in ad hoc network scenarios allow a number of deployment architectures for vehicular networks in highway, rural and city environments. Such architectures should allow communication among nearby vehicles and between vehicles and nearby fixed roadside equipment.

Figure 1 illustrates the reference architecture. This reference architecture is proposed within the C2C-CC and distinguishes three domains: the in-vehicle, ad hoc and infrastructure domains [6]. The in-vehicle domain refers to a local network inside each vehicle, logically composed of two types of units: an on-board unit (OBU) and one or more application units (AUs). An OBU is a device in the vehicle having communication capabilities (wireless and/or wired), while an AU is a device executing a single application or a set of applications while making use of the OBU's communication capabilities. An AU can be an integrated part of a vehicle and be permanently connected to an OBU, or it can be a portable device such as a laptop or PDA that can dynamically attach to (and detach from) an OBU. The AU and OBU are usually connected with a wired connection, while a wireless connection is also possible (using, e.g., Bluetooth, WUSB or UWB). This distinction between AU and OBU is logical, and they can also reside in a single physical unit. The ad hoc domain is a network composed of vehicles equipped with OBUs and road side units (RSUs) that are stationary along the road. The OBUs of different vehicles form a mobile ad hoc network (MANET), where an OBU is equipped with communication devices, including at least a short-range wireless communication device dedicated to road safety. OBUs and RSUs can be seen as nodes of an ad hoc network, mobile and static nodes respectively. An RSU can be attached to an infrastructure network, which in turn can be connected to the Internet. RSUs can also communicate with each other directly or via multihop, and their primary role is the improvement of road safety, by executing special applications and by sending, receiving or forwarding data in the ad hoc domain. Two types of infrastructure domain access exist: RSUs and hot spots. RSUs may allow OBUs to access the infrastructure, and consequently to be connected to the Internet. OBUs may also communicate with the Internet via public, commercial or private hot spots (Wi-Fi hot spots). In the absence of RSUs and hot spots, OBUs can utilize the communication capabilities of cellular radio networks (GSM, GPRS, UMTS, WiMax and 4G) if these are integrated in the OBU.

Active Road Safety Applications: Active road safety applications are those that are primarily employed to decrease the probability of traffic accidents and the loss of life of the occupants of vehicles [7]. A significant percentage of the accidents that occur every year in all parts of the world are associated with intersection, head-on, rear-end and lateral vehicle collisions. Active road safety applications primarily provide information and assistance to drivers to avoid such collisions with other vehicles. This can be accomplished by sharing information between vehicles and road side units, which is then used to predict collisions. Such information can represent vehicle position, intersection position, speed, distance and heading. Moreover, information exchange between the vehicles and the road side units is used to locate hazardous locations on roads, such as slippery sections or potholes. Some examples of active road safety applications are given below. Intersection collision warning: In this use case, the risk of lateral collisions for vehicles that are approaching road intersections is detected by vehicles or road side units. This information is signaled to the approaching vehicles in order to lessen the risk of lateral collisions. Lane change assistance: The risk of lateral collisions for vehicles that are accomplishing a lane change, for example into the blind spot of trucks, is reduced. Overtaking vehicle warning: Aims to prevent collision between vehicles in an overtaking situation, where one vehicle, say vehicle1, is willing to overtake a vehicle, say vehicle3, while another vehicle, say vehicle2, is already performing an overtaking maneuver on vehicle3. Collision between vehicle1 and vehicle2 is prevented when vehicle2 informs vehicle1 to stop its overtaking procedure.
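As an illustration of how shared position/speed/heading information can be used to predict a collision (for example at an intersection), here is a simple time-to-collision comparison between two vehicles approaching the same point. The kinematic model and the warning threshold are illustrative assumptions, not part of any VANET standard.

import math

def time_to_point(position, speed_m_s, point):
    """Seconds until a vehicle travelling straight toward `point` reaches it."""
    distance = math.dist(position, point)
    return distance / speed_m_s if speed_m_s > 0 else float("inf")

def intersection_warning(veh_a, veh_b, intersection, window_s=2.0):
    """Warn if both vehicles will occupy the intersection within `window_s` seconds of each other."""
    ta = time_to_point(veh_a["pos"], veh_a["speed"], intersection)
    tb = time_to_point(veh_b["pos"], veh_b["speed"], intersection)
    return abs(ta - tb) < window_s

# Example: beacons received over the ad hoc domain from two approaching vehicles.
a = {"pos": (0.0, -80.0), "speed": 14.0}    # heading north toward (0, 0)
b = {"pos": (-90.0, 0.0), "speed": 16.0}    # heading east toward (0, 0)
if intersection_warning(a, b, (0.0, 0.0)):
    print("Intersection collision warning: signal both drivers")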

Intelligent Wireless Video Camera

Abstract of Intelligent Wireless Video Camera The Intelligent Wireless Video Camera described in this paper is designed as a wireless video monitoring system for detecting the presence of a person inside a restricted zone. This type of automatic wireless video monitor is quite suitable for isolated restricted zones where tight security is required. The principle of remote sensing is utilized to detect the presence of any person who comes very near to a reference point within the zone. A video camera collects the images from the reference points and converts them into electronic signals: the collected images are converted from visible light into invisible electronic signals inside a solid-state imager, and these signals are transmitted to the monitor. Each reference point is arranged with two infrared LEDs and one lamp. This arrangement is made to detect the presence of a person who is near the reference point; the reference point is nothing but the restricted area. When any person comes near to any reference point, that particular reference point's output immediately goes high and this high signal is fed to the computer. The computer then energizes that reference point's lamp and rotates the video camera towards that reference point to collect images there. To rotate the video camera towards the interrupted reference point, a stepper motor is used. Introduction of Intelligent Wireless Video Camera The video surveillance unit is designed for the widest possible viewing range and portability. The unit consists of a stepper motor, which drives the camera towards reference points automatically under computer control, and a transmitter that transmits the images collected by the camera to a distant end. Thus an automatically controlled wireless camera is very useful for surveillance of places where the location makes a wired operation of the unit inconvenient or impractical. The robotic action of the stepper motor attached to the camera allows surveillance of the maximum area with one single camera. Such a setup can be very flexible for the user and can save valuable company resources. Several types of security systems exist in the market; one of the most common is CCTV (Closed Circuit Television). A CCTV system consists of a video surveillance camera used as a security-monitoring device, which plays a major role in security systems. One reason for this is the fact that a picture is worth more than a thousand words. This is especially true in a court of law, where an eyewitness is required who can place the criminal at the scene of a crime. CCTV systems are also helpful in the residential security market. They allow homeowners to see their callers, thus establishing their identity before they open an outside entrance door. This is an important feature because, otherwise, they might open their door to a criminal. The infrared sensing circuit consists of two infrared LEDs, one for transmitting the signal and one for receiving it. The transmitting LED emits the signal in a narrow line, like a laser beam; the radiated signal is invisible and harmless. Whenever a human body interrupts the transmitted signal, the signal is reflected, and this reflected signal is received by the infrared receiving LED.

Stepper Motor Drive Circuit: The stepper motor used here has four windings, and these windings are energized one after another in a sequence according to the code produced by the computer through the motor drive circuit. The motor rotates stepwise and the step angle is 1.8°. Varying the pulse rate varies the speed of the motor; the pulses produced by the computer can be controlled through the program, by which the motor speed can be varied. The stepper motor used in this project work is capable of driving up to a 3 kg load. Each reference point is arranged with two infrared sensors; both sensors are arranged side by side within a distance of approximately one inch and both are directed towards the space within the zone. The energization sequence for the forward and backward movements of the stepper motor is fed from the computer with the help of software written in the 'C' language. The ratings of the motor employed are 12 V DC, 4-coil. The driving circuit of the motor is designed using a BC547 (low-power transistor) and a 2N5296 (medium-power transistor). These two transistors are configured in emitter-follower mode to amplify the current. Each motor winding, when energized, consumes approximately 350 mA.
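The original project drives the windings from a program written in C; the sequencing logic itself is simple and is sketched below in Python for illustration. The port-write function is a hypothetical placeholder for whatever parallel-port or GPIO call the real drive circuit uses, and a one-winding-at-a-time (wave drive) sequence is assumed.

import time

# One winding energized at a time; reverse the sequence to run backwards.
STEP_SEQUENCE = [0b0001, 0b0010, 0b0100, 0b1000]

def write_port(pattern):
    """Hypothetical placeholder for the port write feeding the driver transistors."""
    print(f"windings = {pattern:04b}")

def rotate(degrees, step_angle=1.8, delay_s=0.01, forward=True):
    """Rotate by `degrees`; the pulse delay sets the speed, as described above."""
    steps = int(round(abs(degrees) / step_angle))
    sequence = STEP_SEQUENCE if forward else list(reversed(STEP_SEQUENCE))
    for i in range(steps):
        write_port(sequence[i % len(sequence)])
        time.sleep(delay_s)      # shorter delay -> higher pulse rate -> faster rotation

rotate(90)      # 50 steps of 1.8 degrees toward the interrupted reference point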

Image Coding Using Zero Tree Wavelet

Abstract of Image Coding Using Zero Tree Wavelet Image compression is very important for efficient transmission and storage of images. The Embedded Zerotree Wavelet (EZW) algorithm is a simple yet powerful algorithm with the property that the bits in the stream are generated in order of their importance. Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. For image compression it is desirable that the selected transform reduce the size of the resultant data set compared to the source data set. EZW is computationally very fast and is among the best image compression algorithms known today. This paper proposes a technique for image compression which uses wavelet-based image coding. A large number of experimental results show that this method saves a lot of bits in transmission and further enhances the compression performance. This paper aims to determine the best threshold for compressing a still image at a particular decomposition level using an Embedded Zerotree Wavelet encoder. The Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR) are determined for different threshold values ranging from 6 to 60 at decomposition level 8. Introduction of Image Coding Using Zero Tree Wavelet Most natural images have smooth colour variations, with the fine details being represented as sharp edges in between the smooth variations. Technically, the smooth variations in colour can be termed low-frequency variations and the sharp variations high-frequency variations. The low-frequency components (smooth variations) constitute the base of an image, and the high-frequency components (the edges which give the detail) add upon them to refine the image, thereby giving a detailed image. Hence, the smooth variations demand more importance than the details. Separating the smooth variations and details of the image can be done in many ways. One such way is the decomposition of the image using a Discrete Wavelet Transform (DWT). Wavelets are being used in a number of different applications, and the practical implementation of wavelet compression schemes is very similar to that of subband coding schemes. As in the case of subband coding, the signal is decomposed using filter banks. In a discrete wavelet transform, an image can be analyzed by passing it through an analysis filter bank followed by a decimation operation. This analysis filter bank, which consists of a low-pass and a high-pass filter at each decomposition stage, is commonly used in image compression. When a signal passes through these filters, it is split into two bands. The low-pass filter, which corresponds to an averaging operation, extracts the coarse information of the signal. The high-pass filter, which corresponds to a differencing operation, extracts the detail information of the signal. The output of the filtering operations is then decimated by two. A two-dimensional transform can be accomplished by performing two separate one-dimensional transforms. First, the image is filtered along the x-dimension using low-pass and high-pass analysis filters and decimated by two. Low-pass filtered coefficients are stored on the left part of the matrix and high-pass filtered coefficients on the right. Because of the decimation, the total size of the transformed image is the same as the original image. This is followed by filtering the sub-image along the y-dimension and decimating by two. Finally, the image

has been split into four bands, denoted LL, HL, LH and HH, after one level of decomposition. The LL band is then subjected to the same procedure again. Quantization: Quantization refers to the process of approximating the continuous set of values in the image data with a finite, preferably small, set of values. The input to a quantizer is the original data and the output is always one among a finite number of levels. The quantizer is a function whose set of output values is discrete and usually finite. Obviously, this is a process of approximation, and a good quantizer is one which represents the original signal with minimum loss or distortion. There are two types of quantization: scalar quantization and vector quantization. In scalar quantization, each input symbol is treated individually in producing the output, while in vector quantization the input symbols are clubbed together in groups called vectors and processed together to give the output. This clubbing of data, treating it as a single unit, increases the optimality of the vector quantizer, but at the cost of increased computational complexity. Image coding utilizing scalar quantization on hierarchical structures of transformed images has been a very effective and computationally simple technique. Shapiro was the first to introduce such a technique with his EZW [13] algorithm. Different variants of this technique have appeared in the literature which provide an improvement over the initial work. Said & Pearlman [1] successively improved the EZW algorithm by extending this coding scheme, and succeeded in presenting a different implementation based on a set-partitioning sorting algorithm. This new coding scheme, called SPIHT [1], provided an even better performance than the improved version of EZW.
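A minimal sketch of the one-level decomposition described above, using the simple Haar filters (averaging for the low-pass band, differencing for the high-pass band) followed by the kind of hard significance threshold EZW applies at each pass. This only illustrates the filtering/decimation and thresholding ideas; it is not the full zerotree coder, and the 8x8 test block is an invented example.

import numpy as np

def haar_split(a, axis):
    """One-level Haar analysis along `axis`: averaging (low) and differencing (high), each decimated by two."""
    even = np.take(a, range(0, a.shape[axis], 2), axis=axis)
    odd = np.take(a, range(1, a.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def dwt2_one_level(image):
    """Filter along x then y, producing the LL, HL, LH and HH bands (same total size as the input)."""
    low_x, high_x = haar_split(image, axis=1)
    ll, lh = haar_split(low_x, axis=0)
    hl, hh = haar_split(high_x, axis=0)
    return ll, hl, lh, hh

def hard_threshold(band, t):
    """EZW-style significance test: coefficients smaller than the threshold are treated as zero."""
    return np.where(np.abs(band) >= t, band, 0.0)

# Example: a smooth 8x8 ramp; the detail bands carry almost nothing significant.
block = 100.0 + 4.0 * np.add.outer(np.arange(8.0), np.arange(8.0))
ll, hl, lh, hh = dwt2_one_level(block)
print(np.count_nonzero(hard_threshold(hh, t=16)))   # 0: no HH coefficient is significant at this threshold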

Human-Robot Interaction

Abstract of Human-Robot Interaction A very important aspect in developing robots capable of Human-Robot Interaction (HRI) is research into natural, human-like communication and, subsequently, the development of a research platform with multiple HRI capabilities for evaluation. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has the potential to support intuitive usage and understanding of a robot; e.g., human-like facial expressions and deictic gestures can be produced as well as understood by the robot. As a consequence of our effort to create an anthropomorphic appearance and to come close to a human-human interaction model for a robot, we decided to use human-like sensors, i.e., two cameras and two microphones only, in analogy to human perceptual capabilities. Despite the challenges resulting from these limits with respect to perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time is presented. The tracking approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images from a video camera are available. To easily implement different interaction capabilities such as deictic gestures, natural adaptive dialogs and emotion awareness on the robot, we apply a modular integration approach utilizing XML-based data exchange. The paper focuses on our efforts to bring together different interaction concepts and perception capabilities integrated on a humanoid robot to achieve comprehensive human-oriented interaction. Introduction of Human-Robot Interaction For face detection, a method originally developed by Viola and Jones for object detection is adopted. Their approach uses a cascade of simple rectangular features that allows a very efficient binary classification of image windows into either the face or non-face class. This classification step is executed for different window positions and different scales to scan the complete image for faces. We apply the idea of a classification pyramid, starting with very fast but weak classifiers to reject image parts that are certainly not faces. With increasing complexity of the classifiers, the number of remaining image parts decreases. The training of the classifiers is based on the AdaBoost algorithm, which iteratively combines the weak classifiers into stronger ones until the desired level of quality is achieved.

As an extension to the frontal view detection proposed by Viola and Jones, we additionally classify the horizontal gazing direction of faces, as shown in Fig. 4, by using four instances of the classifier pyramids described earlier, trained for faces rotated by 20°, 40°, 60° and 80°. For classifying left- and right-turned faces, the image is mirrored at its vertical axis, and the same four classifiers are applied again. The gazing direction is evaluated for activating or deactivating the speech processing, since the robot should not react to people talking to each other in front of the robot, but only to communication partners facing the robot. Subsequent to the face detection, face identification is applied to the detected image region using the eigenface method to compare the detected face with a set of trained faces. For each detected face, the size, center coordinates, horizontal rotation and results of the face identification are provided at a real-time capable frequency of about 7 Hz on an Athlon64 2 GHz desktop PC with 1 GB RAM. Voice Detection: As mentioned before, the limited field-of-view of the cameras demands alternative detection and tracking methods. Motivated by human perception, sound localization is applied to direct the robot's attention. The integrated speaker localization (SPLOC) realizes both the detection of possible communication partners outside the field-of-view of the camera and the estimation of whether a person found by face detection is currently speaking. The program continuously captures the audio data from the two microphones. To estimate the relative direction of one or more sound sources in front of the robot, the direction of the sound toward the microphones is considered. Depending on the position of a sound source in front of the robot, the run-time difference t results from the run times tr and tl at the right and left microphones. SPLOC compares the recorded audio signals of the left and the right microphone using a fixed number of samples for a cross power spectrum phase (CSP) to calculate the temporal shift between the signals. Taking the distance of the microphones dmic and a minimum range of 30 cm to a sound source into account, it is possible to estimate the direction of a signal in 2-D space. For multiple sound source detection, not only the main energy value of the CSP result is taken, but also all values exceeding an adjustable threshold. In 3-D space, the distance and height of a sound source are needed for an exact detection.

This information can be obtained from the face detection when SPLOC is used to check whether a found person is speaking or not. For coarsely detecting communication partners outside the field-of-view, standard values are used that are sufficiently accurate to align the camera properly and get the person hypothesis into the field-of-view. The position of a sound source (a speaker's mouth) is assumed at a height of 160 cm for an average adult. The standard distance is adjusted to 110 cm, as observed during interactions with naive users.
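A minimal sketch of the geometry behind SPLOC's direction estimate: the left/right time difference gives the bearing via an arcsine. For simplicity the delay is found here with a plain cross-correlation instead of the CSP mentioned above, and the microphone spacing, sample rate and test signal are illustrative assumptions.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature

def estimate_delay(left, right, sample_rate_hz):
    """Lag (in seconds) by which `left` trails `right`, via plain cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / sample_rate_hz

def bearing_from_delay(delay_s, mic_distance_m):
    """Angle of arrival in degrees (0 = straight ahead) implied by the time difference."""
    x = np.clip(SPEED_OF_SOUND * delay_s / mic_distance_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(x)))

# Example: a source 30 degrees to the right, microphones 0.2 m apart, 48 kHz audio.
fs, d = 48000, 0.2
true_delay = d * np.sin(np.radians(30.0)) / SPEED_OF_SOUND
shift = int(round(true_delay * fs))                       # about 14 samples at 48 kHz
rng = np.random.default_rng(0)
right = rng.standard_normal(2400)                         # 50 ms of "speech" (white noise here)
left = np.concatenate([np.zeros(shift), right[:-shift]])  # left microphone hears it slightly later
print(round(bearing_from_delay(estimate_delay(left, right, fs), d), 1))   # ~30.0 degrees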

Wireless LAN Security

Wireless local area networks (WLANs) based on the Wi-Fi (wireless fidelity) standards are one of today's fastest growing technologies in businesses, schools and homes, for good reasons. They provide mobile access to the Internet and to enterprise networks so users can remain connected away from their desks. These networks can be up and running quickly when there is no available wired Ethernet infrastructure, and they can be made to work with a minimum of effort without relying on specialized corporate installers. Some of the business advantages of WLANs include:
Mobile workers can be continuously connected to their crucial applications and data.
New applications based on continuous mobile connectivity can be deployed.
Intermittently mobile workers can be more productive if they have continuous access to email, instant messaging and other applications.
Impromptu interconnections among arbitrary numbers of participants become possible.
But having provided these attractive benefits, most existing WLANs have not effectively addressed security-related issues. THREATS TO WLAN ENVIRONMENTS All wireless computer systems face security threats that can compromise their systems and services. Unlike in a wired network, the intruder does not need physical access in order to pose the following security threats: Eavesdropping: This involves attacks against the confidentiality of the data that is being transmitted across the network. In a wireless network, eavesdropping is the most significant threat because the attacker can intercept the transmission over the air from a distance, away from the premises of the company. Tampering: The attacker can modify the content of the intercepted packets from the wireless network, and this results in a loss of data integrity. Unauthorized access and spoofing: The attacker can gain access to privileged data and resources in the network by assuming the identity of a valid user. This kind of attack is known as spoofing. To overcome this attack, proper authentication and access control mechanisms need to be put in place in the wireless network.


Smart Note Taker



Definition
The Smart NoteTaker is a helpful product that satisfies the needs of people in today's technological and fast-paced life. This product can be used in many ways. The Smart NoteTaker provides fast and easy note taking for people who are busy with something else. With the help of the Smart NoteTaker, people will be able to write notes in the air while being busy with their work. The written note will be stored on the memory chip of the pen and can be read in a digital medium after the job is done. This saves time and facilitates life. The Smart NoteTaker is also helpful for blind people who want to think and write freely. Another place where the product can play an important role is when two people talk on the phone. The subscribers are apart from each other while they talk, and they may want to use figures or text to understand each other better. It is also useful for instructors in presentations. The instructor may not want to present the lecture in front of the board; the drawn figure can be processed and sent directly to the server computer in the room. The server computer can then broadcast the drawn shape through the network to all of the computers present in the room. In this way, lectures are intended to be more efficient and fun. This product will be simple but powerful. The product will be able to sense 3D shapes and motions that the user tries to draw. The sensed information will be processed, transferred to the memory chip and then monitored on the display device. The drawn shape can then be broadcast to the network or sent to a mobile device. There will be an additional feature of the product which will display the notes that were taken before in the application program used on the computer. This application program can be a word processor or an image editor. The sensed figures that were drawn in the air will be recognized, and with the help of the software program we will write, the desired character will be printed in the word document. If the application program is a paint-related program, then the most similar shape will be chosen by the program and printed on the screen. Since a JAVA applet is suitable for both drawings and strings, all these applications can be put together by developing a single JAVA program. The JAVA code that we develop will also be installed on the pen so that the processor inside the pen can type and draw the desired shape or text on the display panel.

Embedded Web Technology

Abstract of Embedded Web Technology
Embedded Web Technology (EWT) is regarded as the 'marriage' of Web technologies with embedded systems; in other words, the software developed for embedded systems is used and accessed over the Internet. Embedded technology has been around for a long time and its use has gradually expanded into the PC market. Speed, accuracy, and reliability were the reasons embedded technology entered computers. With a market expected to be worth billions in the coming years, the future is embedded. Embedded systems contain processors, software, input sensors, and output actuators, which act as the controls of a device and are subject to constraints. These embedded systems may not have disk drives, keyboards, or display devices, and they are typically restricted in terms of power, memory, GUIs, and debugging interfaces. Their central building blocks are microcontrollers, i.e., microprocessors integrated with memory units and specific peripherals for the observation and control of the embedded system. Web technologies, on the other hand, employ client-server models.

Introduction of Embedded Web Technology
The embedded Web system works on the same principle as traditional Web request-response systems. Web pages from the embedded system (the server) are transmitted to the Web browser (the client), which implements the user interface (presentation layer). In other cases, the embedded system dynamically generates the pages to convey its current state to the user at a centralized location. End users can also use the Web browser to send information to the embedded system in order to configure and control the device. Web-enabled devices use the standard HTTP (Hyper Text Transfer Protocol) to transmit Web pages from the embedded system to the Web browser, and to transmit HTML (Hyper Text Markup Language) form data from the browser back to the device. The devices require a network interface such as Ethernet, TCP/IP software, embedded Web server software, and the Web pages (both static and generated) that make up the device-specific GUI. The HTTP protocol engine receives the request from the Web browser over TCP/IP, parses it, and passes it to the embedded application for processing. After producing the results, the embedded application generates the HTML code and feeds it to the HTTP engine, which sends it back to the client over TCP/IP.
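As a rough illustration of the request-response flow just described (and not NASA's EWT stack itself), the sketch below shows an embedded-style web interface in Python; the device state, URL paths and form field names are hypothetical examples.

```python
# Minimal sketch of an embedded web interface using Python's standard http.server.
# The device state, the paths and the form field names are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

device_state = {"temperature_c": 21.5, "threshold_c": 30.0}  # pretend sensor data

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Dynamically generate a page reflecting the current device state.
        body = (f"<html><body><h1>Device status</h1>"
                f"<p>Temperature: {device_state['temperature_c']} C</p>"
                f"<form method='POST' action='/config'>"
                f"Threshold: <input name='threshold_c' "
                f"value='{device_state['threshold_c']}'>"
                f"<input type='submit'></form></body></html>").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # HTML form data sent back from the browser reconfigures the device.
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        device_state["threshold_c"] = float(fields["threshold_c"][0])
        self.send_response(303)
        self.send_header("Location", "/")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeviceHandler).serve_forever()
```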

Figure 1: Web-enabled devices use the standard HTTP protocol to transmit Web pages from the embedded system to the Web browser, and to transmit HTML form data from the browser back to the device.

Embedded Web Technology is an enabling, or platform, technology. This means that it is relevant to a wide variety of applications, many of which have not yet been identified. We at NASA have promoted EWT through workshops, participation in shows, and one-to-one consultations with our partners.

Embedded Software: The Internet is the dominant method of information access. People use universal clients such as Web browsers and email readers to connect to any system, from anywhere, at any time. With embedded Internet technology, innovative companies are building products that let people use these same universal clients to manage embedded devices. Using Web or email technologies in a networked device gives the user control from any Web browser or email client. This approach eliminates the need to build custom management applications and provides access to the device using the Internet tools everyone is familiar with. The embedded software space is vast and wide open. Newer embedded systems can require different software-based applications: database applications, Internet applications, mobile office productivity tools, and personal applications.

Developing and running these applications requires tools and supporting software platforms. All these embedded software requirements can be broadly classified into "embedded databases", "embedded language extensions", "embedded development tools", and "embedded applications"; their subclassification would be a long list of specific areas.

Electrooculography

Abstract of Electrooculography
Electrooculography (EOG) is a technique in which electrodes are placed on the user's forehead and around the eyes to record eye movements. It is based on recording the polarization potential, or corneal-retinal potential (CRP), which is the resting potential between the cornea and the retina. This potential is commonly known as the electrooculogram: a very small electrical potential that can be detected using electrodes. The EOG ranges from 0.05 to 3.5 mV in humans and is linearly proportional to eye displacement. Compared with electroencephalography (EEG), EOG signals have the following characteristics: the amplitude is relatively small (15-200 uV), the relationship between the EOG and eye movement is linear, and the waveform is easy to detect. Because of these characteristics, EOG-based HCI has become a hotspot of bio-based HCI research in recent years. Basically, EOG is a bio-electrical skin potential measured around the eyes, but first we have to understand the eye itself.

Introduction of Electrooculography
The electrooculogram (EOG) is the electrical signal produced by the potential difference between the retina and the cornea of the eye. This difference is due to the large presence of electrically active nerves in the retina compared to the front of the eye. Many experiments show that the cornea acts as a positive pole and the retina as a negative pole in the eyeball. Eye movement generates voltages of up to roughly 16 uV and 14 uV per degree of rotation in the horizontal and vertical directions, respectively. The typical EOG waveforms generated by eye movements are shown in Fig 3.2: the top part of the figure shows the three types of eye movements and the bottom part shows the original EOG waveform. Positive or negative pulses are generated when the eyes roll upward or downward. The amplitude of the pulse increases with the rolling angle, and the width of the positive (or negative) pulse is proportional to the duration of the eyeball's rolling motion. When the eyes are stationary, or looking straight ahead, there is no considerable change in potential and the amplitude of the signal obtained is approximately zero.

Fig 3.2: EOG generation from eye movements and the resulting EOG waveform.

When the eyes move upwards, an action potential results which, when measured, gives a value of about -0.06 V to +0.06 V. Similarly, a downward movement of the eyes gives a similar voltage with opposite polarity to that obtained for the upward movement. An important property of the EOG signal is that it does not fall within the amplitude or frequency range of the EMG signal; thus, during the measurement of EOG signals, the head or other parts of the body can be moved, as these muscular activities do not interfere with the EOG and can be filtered out easily. The ECG signal can also be removed from the EOG signal using a low-pass filter, as the ECG occupies a higher frequency band. A further point regarding the ECG is that it tends not to interfere with the EOG at all: when the EOG is measured with precision electrodes placed near the eye, the ECG generated by the heart is not picked up by those electrodes.

EOG Detection: The primary step in EOG signal estimation and processing is the detection of the EOG signals. The detection takes place as shown below; the figure shows the method of detecting EOG signals using electrodes.

Fig 5.1: Electrode placements for EOG detection.

As can be seen from the figure above, four to five electrodes are required for detecting the EOG signals. In the detection process, the electrodes act as transducers, converting the ion current at the skin into an electron current. The EOG is derived by placing two electrodes on the outer sides of the eyes to detect horizontal movement, and another pair above and below one eye to detect vertical movement.
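To make the channel derivation and filtering steps concrete, here is a minimal numerical sketch; the 250 Hz sampling rate, the 30 Hz cut-off and the 16 uV-per-degree scaling are assumed illustrative values, not calibrated figures.

```python
# Sketch of EOG channel derivation and low-pass filtering (illustrative values).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0            # assumed sampling rate, Hz
UV_PER_DEGREE = 16.0  # assumed horizontal sensitivity, uV per degree of rotation

def eog_channels(left, right, above, below):
    """Differential horizontal/vertical EOG from the four electrodes (arrays in uV)."""
    horizontal = np.asarray(left) - np.asarray(right)
    vertical = np.asarray(above) - np.asarray(below)
    return horizontal, vertical

def lowpass(signal, cutoff_hz=30.0, order=4):
    """Remove higher-frequency EMG/ECG content; the EOG itself is a low-frequency signal."""
    b, a = butter(order, cutoff_hz / (FS / 2.0), btype="low")
    return filtfilt(b, a, signal)

def gaze_angle_deg(horizontal_uv):
    """Rough linear mapping from filtered EOG amplitude to horizontal gaze angle."""
    return lowpass(horizontal_uv) / UV_PER_DEGREE
```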

Distributed COM

Abstract of Distributed COM
Microsoft Distributed COM (DCOM) extends the Component Object Model (COM) to support communication among objects on different computers: on a LAN, a WAN, or even the Internet. With DCOM, your application can be distributed across the locations that make the most sense to your customer and to the application. Because DCOM is a seamless evolution of COM, the world's leading component technology, you can take advantage of your existing investment in COM-based applications, components, tools, and knowledge to move into the world of standards-based distributed computing. As you do so, DCOM handles the low-level details of network protocols so you can focus on your real business: providing great solutions to your customers.

Introduction of Distributed COM
DCOM is an extension of the Component Object Model (COM). COM defines how components and their clients interact. This interaction is defined such that the client and the component can connect without the need for any intermediary system component; the client calls methods in the component without any overhead whatsoever.

In today's operating systems, processes are shielded from each other. A client that needs to communicate with a component in another process cannot call the component directly, but has to use some form of interprocess communication provided by the operating system. COM provides this communication in a completely transparent fashion: it intercepts calls from the client and forwards them to the component in another process. Figure 2 illustrates how the COM/DCOM run-time libraries provide the link between client and component.

When client and component reside on different machines, DCOM simply replaces the local interprocess communication with a network protocol. Neither the client nor the component is aware that the wire that connects them has just become a little longer.

Figure 3 shows the overall DCOM architecture: The COM run-time provides object-oriented services to clients and components and uses RPC and the security provider to generate standard network packets that conform to the DCOM wire-protocol standard.

Components and Reuse: Most distributed applications are not developed from scratch and in a vacuum. Existing hardware infrastructure, existing software, and existing components, as well as existing tools, need to be integrated and leveraged to reduce development and deployment time and cost. DCOM directly and transparently takes advantage of any existing investment in COM components and tools. A huge market for off-the-shelf components makes it possible to reduce development time by integrating standardized solutions into a custom application. Many developers are familiar with COM and can easily apply their knowledge to DCOM-based distributed applications. Any component that is developed as part of a distributed application is a candidate for future reuse. Organizing the development process around the component paradigm lets you continuously raise the level of functionality in new applications and reduce time-to-market by building on previous work. Designing for COM and DCOM assures that your components are useful now and in the future.

With DCOM, deployment constraints such as these are fairly easy to work around, because the details of deployment are not specified in the source code. DCOM completely hides the location of a component, whether it is in the same process as the client or on a machine halfway around the world. In all cases, the way the client connects to a component and calls the component's methods is identical. Not only does DCOM require no changes to the source code, it does not even require that the program be recompiled: a simple reconfiguration changes the way components connect to each other. DCOM's location independence greatly simplifies the task of distributing application components for optimum overall performance. Suppose, for example, that certain components must be placed on a specific machine or at a specific location. If the application has numerous small components, you can reduce network loading by deploying them on the same LAN segment, on the same machine, or even in the same process. If the application is composed of a smaller number of large components, network loading is less of a problem, so you can put them on the fastest machines available, wherever those machines are.
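DCOM itself is a Windows facility, but the location-independence idea can be illustrated in a language-neutral way. The sketch below uses Python's standard xmlrpc module, not DCOM: the client calls the same method whether the "component" lives in-process or behind a network proxy. The Calculator class and the port number are hypothetical.

```python
# Not DCOM: a generic sketch of location transparency using Python's xmlrpc.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class Calculator:
    def add(self, a, b):
        return a + b

def serve(host="localhost", port=8001):
    server = SimpleXMLRPCServer((host, port), allow_none=True)
    server.register_instance(Calculator())   # expose the component's methods
    server.serve_forever()

def get_component(remote_url=None):
    # Configuration, not source code, decides where the component runs.
    return ServerProxy(remote_url) if remote_url else Calculator()

# Client code is identical in both cases:
#   comp = get_component()                          # in-process
#   comp = get_component("http://server:8001/")     # across the network
#   print(comp.add(2, 3))
```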

Remote Access Service

Abstract of Remote Access Service
Users connecting to RAS services through a modem can be limited to accessing only that server, or can be given access to the entire network. Effectively, this is the same as a local connection to the network, except that any type of data transfer runs significantly slower. You will need to select the connection option appropriate to your access requirements, available support, and budgetary constraints. In the current business environment, organizations are under pressure to reduce costs, increase efficiency, and maximize performance from the existing infrastructure. The growth of the Internet, together with new global business opportunities, makes it imperative that organizations provide secure 24x7 network access to employees and locations around the world.

Introduction of Remote Access Service
In most networks, clients are connected directly to the network. In some cases, however, remote connections are needed for your users, and Microsoft provides Remote Access Services (RAS) to set up and configure such client access. As noted above, users dialing in to a RAS server can be limited to that server or granted access to the entire network, and the connection option should be chosen to fit access requirements, available support, and budget.

The RAS API is designed for use by C/C++ programmers, although Microsoft Visual Basic programmers may also find it useful; programmers should be familiar with networking concepts. Some of the functions in the RAS API are supported only on network servers and others only on network clients. For more specific information about which operating systems support a particular function, see the reference documentation.

RAS Common Dialog Boxes: Windows provides a set of functions that display the RAS dialog boxes provided by the system. These functions make it easy for applications to present a familiar user interface so that users can perform RAS tasks, such as establishing and monitoring connections or working with phone-book entries. Windows 95 does not support these functions. The RasPhonebookDlg function displays the main Dial-Up Networking dialog box. From this dialog box, the user can dial, edit, or delete a selected phone-book entry, create a new phone-book entry, or specify user preferences. The RasPhonebookDlg function uses the RASPBDLG structure

to specify additional input and output parameters. For example, you can set members of the structure to control the position of the dialog box on the screen. You can use the RASPBDLG structure to specify a RasPBDlgFunc callback function that receives notifications of user activity while the dialog box is open. For example, RAS calls your RasPBDlgFunc function if the user dials, edits, creates, or deletes a phone-book entry. You can use the RasDialDlg function to start a RAS connection operation without displaying the main Dial-Up Networking dialog box. With RasDialDlg, you specify a phone number or phone-book entry to call. The function displays a stream of dialog boxes that indicate the state of the connection operation. The RasDialDlg function uses a RASDIALDLG structure to specify additional input and output parameters, such as position of the dialog box and the phone-book subentry to call.
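As a hedged illustration of driving these dialogs from a script rather than from C/C++, the sketch below calls RasDialDlg through Python's ctypes. The RASDIALDLG field layout is written from memory of the Win32 RAS headers and should be verified against rasdlg.h before use, and the phone-book entry name is hypothetical.

```python
# Hedged sketch: calling the Win32 RasDialDlg dialog from Python via ctypes.
# Verify the RASDIALDLG field order/types against rasdlg.h before relying on it.
import ctypes
from ctypes import wintypes

class RASDIALDLG(ctypes.Structure):
    _fields_ = [
        ("dwSize", wintypes.DWORD),
        ("hwndOwner", wintypes.HWND),
        ("dwFlags", wintypes.DWORD),
        ("xDlg", wintypes.LONG),      # requested on-screen position of the dialog
        ("yDlg", wintypes.LONG),
        ("dwSubEntry", wintypes.DWORD),
        ("dwError", wintypes.DWORD),  # filled in by RAS if the dial attempt fails
        ("reserved", ctypes.c_size_t),
        ("reserved2", ctypes.c_size_t),
    ]

def dial_entry(entry_name="Office VPN"):   # hypothetical phone-book entry
    rasdlg = ctypes.WinDLL("rasdlg")
    info = RASDIALDLG()
    info.dwSize = ctypes.sizeof(RASDIALDLG)
    ok = rasdlg.RasDialDlgW(None, entry_name, None, ctypes.byref(info))
    if not ok and info.dwError:
        print("RAS error code:", info.dwError)
    return bool(ok)
```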

Wireless Charging Of Mobile Phones Using Microwaves

Abstract of Wireless Charging Of Mobile Phones Using Microwaves
With mobile phones becoming a basic part of life, recharging mobile phone batteries has always been a problem. Mobile phones vary in their talk time and battery standby according to their manufacturer and batteries, but all of them, irrespective of manufacturer and battery make, have to be recharged after the battery has drained. The main objective of this proposal is to make the recharging of mobile phones independent of their manufacturer and battery make. This paper proposes recharging the mobile phone automatically as you talk on it, using microwaves. The microwave signal is transmitted from the transmitter along with the message signal, using a special kind of antenna called a slotted waveguide antenna, at a frequency of 2.45 GHz. Only minimal additions have to be made to the mobile handset: a sensor, a rectenna, and a filter. With this setup, the need for separate chargers for mobile phones is eliminated and charging becomes universal; the more you talk, the more your mobile phone is charged. With this proposal, manufacturers would be able to remove talk time and battery standby from their phone specifications.

Introduction of Wireless Charging Of Mobile Phones Using Microwaves
The basic addition to the mobile phone is the rectenna. A rectenna is a rectifying antenna, a special type of antenna that directly converts microwave energy into DC electricity. Its elements are usually arranged in a mesh pattern, giving it a distinct appearance from most antennas. A simple rectenna can be constructed from a Schottky diode placed between antenna dipoles; the diode rectifies the current induced in the antenna by the microwaves. Rectennas are highly efficient at converting microwave energy to electricity: in laboratory environments, efficiencies above 90% have been observed with regularity. Some experimentation has been done with the inverse rectenna, converting electricity into microwave energy, but efficiencies are much lower, only in the area of 1%. With the advent of nanotechnology and MEMS, the size of these devices can be brought down to the molecular level. It has been theorized that similar devices, scaled down to the proportions used in nanotechnology, could be used to convert light into electricity at much greater efficiencies than what is currently possible with solar cells. This type of device is called an optical rectenna. Theoretically, high efficiencies can be maintained as the device shrinks, but experiments funded by the United States National Renewable Energy Laboratory have so far obtained only roughly 1% efficiency while using infrared light. Another important part of the receiver circuitry is a simple sensor.

Receiver Design:


The receiver's key addition is the rectenna, which rectifies received microwaves into DC current: it comprises a mesh of dipoles and diodes that absorb microwave energy from a transmitter and convert it into electric power. A simple rectenna can be built from a Schottky diode placed between antenna dipoles, as shown in the figure; the diode rectifies the current that the microwaves induce in the antenna, and laboratory efficiencies above 90% are observed with regularity. In the future, rectennas may be used to generate large-scale power from microwave beams delivered by orbiting SPS (solar power satellite) spacecraft. The sensor circuitry is a simple circuit that detects whether the mobile phone is receiving a message signal; this is required because the phone has to be charged for as long as the user is talking, and a simple F-to-V (frequency-to-voltage) converter serves this purpose. In India, the operating frequency of mobile phone operators is generally 900 MHz or 1800 MHz for the GSM mobile communication system.
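As a back-of-the-envelope check on the link budget (with assumed, purely illustrative transmit power, antenna gains, distance and rectenna efficiency, none of which come from the proposal), the Friis transmission equation gives the DC power such a receiver could harvest at 2.45 GHz:

```python
# Illustrative link-budget sketch using the Friis transmission equation.
# All numeric inputs below are assumed example values, not figures from the proposal.
import math

C = 3.0e8               # speed of light, m/s
FREQ_HZ = 2.45e9        # ISM-band carrier used in the proposal

def received_dc_power(p_tx_w, g_tx, g_rx, distance_m, rectenna_eff=0.8):
    lam = C / FREQ_HZ                                   # wavelength, ~0.122 m
    p_rx = p_tx_w * g_tx * g_rx * (lam / (4 * math.pi * distance_m)) ** 2
    return p_rx * rectenna_eff                          # DC power after rectification

if __name__ == "__main__":
    # e.g. 10 W radiated, 20 dBi (100x) transmit gain, 6 dBi (4x) rectenna gain, 100 m away
    print(f"{received_dc_power(10, 100, 4, 100) * 1e6:.1f} microwatts of DC power")
```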

3-Dimensional Printing

Abstract of 3-Dimensional Printing
3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by laying down successive layers of material. Also known as rapid prototyping, it is a mechanized method whereby 3D objects are quickly made on a reasonably sized machine connected to a computer containing blueprints for the object. The 3D printing concept of custom manufacturing is exciting to nearly everyone. This revolutionary method for creating 3D models with inkjet technology saves time and cost by eliminating the need to design, print, and glue together separate model parts; a complete model can be created in a single process. The basic principles include material cartridges, flexibility of output, and translation of code into a visible pattern. 3D printers are machines that produce physical 3D models from digital data by printing layer by layer. They can make physical models of objects either designed with a CAD program or scanned with a 3D scanner. 3D printing is used in a variety of industries including jewelry, footwear, industrial design, architecture, engineering and construction, automotive, aerospace, dental and medical industries, education, and consumer products.

Introduction of 3-Dimensional Printing
Stereolithographic 3D printers (known as SLAs, or stereolithography apparatus) position a perforated platform just below the surface of a vat of liquid photocurable polymer. A UV laser beam then traces the first slice of an object on the surface of this liquid, causing a very thin layer of photopolymer to harden. The perforated platform is then lowered very slightly and another slice is traced out and hardened by the laser. Slice after slice is created this way until a complete object has been printed, which can then be removed from the vat of photopolymer, drained of excess liquid, and cured.

Fused deposition modeling: here a hot thermoplastic is extruded from a temperature-controlled print head to produce fairly robust objects to a high degree of accuracy.

In the inkjet-based process, the model to be manufactured is built up a layer at a time. A layer of powder is automatically deposited in the model tray. The print head then applies resin in the shape of the model; the layer dries solid almost immediately. The model tray then moves down by the thickness of a layer, another layer of powder is deposited, and the print head again applies resin in the shape of the model, binding it to the layer below. This sequence repeats, one layer at a time, until the model is complete.

The Algorithm: The algorithm used in inkjet 3-D printing is depicted in the figure mentioned below.

The workflow can be easily understood with the help of the flowchart given below. A 3-D prototype of a desired object is created in three basic steps:
- Pre-process
- 3-D printing
- Post-process
The 3D printer runs automatically, depositing material in layers roughly 0.003 in. thick, about the thickness of a human hair or a sheet of paper. The time it takes to print a given object depends primarily on the height of the design, but most designs take a minimum of several hours. The average cost of printing a full-color prototype is somewhere between $50 and $100. 3D printing has a bright future, not least in rapid prototyping (where its impact is already highly significant), but also in medicine, the arts, and outer space. Desktop 3D printers for the home are already a reality if you are prepared to pay for one and/or build one yourself. 3D printers capable of outputting in color and in multiple materials also exist and will continue to improve, to a point where functional products can be produced. As devices that provide a solid bridge between cyberspace and the physical world, and as an important manifestation of the Second Digital Revolution, 3D printing is likely to play some part in all of our futures.
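A quick sanity check of the build-time arithmetic above, assuming the ~0.003 in. layer thickness quoted and a per-layer time chosen purely for illustration:

```python
# Rough build-time estimate from layer thickness; the seconds-per-layer figure
# is an assumed illustrative value, not a printer specification.
LAYER_THICKNESS_IN = 0.003      # ~thickness of a sheet of paper, as noted above

def estimate_print(height_in, seconds_per_layer=40.0):
    layers = int(round(height_in / LAYER_THICKNESS_IN))
    hours = layers * seconds_per_layer / 3600.0
    return layers, hours

if __name__ == "__main__":
    layers, hours = estimate_print(height_in=4.0)   # a 4-inch-tall hypothetical model
    print(f"{layers} layers, roughly {hours:.1f} hours")
```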

Humanoids Robotics
Definition
The field of humanoid robotics is widely recognized as the current challenge for robotics research. Humanoid research is an approach to understanding and realizing the complex real-world interactions between a robot, an environment, and a human. Humanoid robotics motivates social interactions, such as gesture communication or cooperative tasks, in the same context as the physical dynamics. This is essential for three-term interaction, which aims at fusing physical and social interaction at a fundamental level. People naturally express themselves through facial gestures and expressions. Our goal is to build a facial-gesture human-computer interface for use in robot applications. The system does not require special illumination or facial make-up. By using multiple Kalman filters we accurately predict and robustly track facial features, and since we reliably track the face in real time, we are also able to recognize motion gestures of the face. Our system can recognize a large set of gestures (13), ranging from "yes", "no", and "maybe" to detecting winks, blinks, and sleeping.

Integration of vision and touch in edge tracking

In order to validate the anthropomorphic model of sensory-motor coordination in grasping, a module was implemented to perform visual and tactile edge tracking, considered the first step of sensory-motor coordination in grasping actions. The proposed methodology applies the reinforcement-learning paradigm to back-propagation neural networks, in order to replicate the human capability of creating associations between sensory data and motor schemes, based on the results of attempts to perform movements. The resulting robot behavior consists of coordinating the movement of the fingertip along an object edge by integrating visual information on the edge, proprioceptive information on the arm configuration, and tactile information on the contact, and by processing this information in a neural framework based on the reinforcement-learning paradigm. The goal of edge tracking is pursued by a strategy that starts from a totally random policy and evolves via rewards and punishments.

The Vision System: An MEP tracking system is used to implement the facial gesture interface. This vision system is manufactured by Fujitsu and is designed to track, in real time, multiple templates in frames of an NTSC video stream. It consists of two VME-bus cards, a video module and a tracking module, which can track up to 100 templates simultaneously at video frame rate (30 Hz for NTSC). The tracking of objects is based on template (8x8 or 16x16 pixels) comparison in a specified search area. The video module digitizes the video input stream and stores the digital images in dedicated video RAM, which the tracking module also accesses. The tracking module compares the digitized frame with the tracking templates within the bounds of the search windows. This comparison uses a correlation-style measure that sums the absolute differences between corresponding pixels of the template and the frame.

The result of this calculation is called the distortion, and it measures the similarity of the two images being compared: low distortion indicates a good match, while high distortion results when the two images are quite different.
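The distortion measure described above (a sum of absolute pixel differences between the template and a patch of the frame) can be sketched in a few lines of NumPy. This is an illustration of the idea, not the Fujitsu MEP tracker itself; the 16x16 template size matches the description, and the search range is an example value.

```python
# Sketch of the template-matching "distortion": the sum of absolute differences
# between a template and each candidate position within a search window.
import numpy as np

def distortion_map(frame, template, top_left, search=8):
    """Return SAD distortion for every offset within +/-search pixels of top_left."""
    th, tw = template.shape
    y0, x0 = top_left
    scores = np.full((2 * search + 1, 2 * search + 1), np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            patch = frame[y:y + th, x:x + tw]
            if patch.shape == template.shape:
                scores[dy + search, dx + search] = np.abs(
                    patch.astype(int) - template.astype(int)).sum()
    return scores   # the smallest value marks the best match (lowest distortion)
```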

Transparent Electronics
Definition
Researchers at Oregon State University and Hewlett-Packard have reported their first example of an entirely new class of materials which could be used to make transparent transistors that are inexpensive, stable, and environmentally benign. This could lead to new industries and a broad range of new consumer products. The possibilities include electronic devices produced so cheaply they could almost be one-time "throw away" products, better large-area electronics such as flat-panel screens, and flexible electronics that could be folded up for ease of transport. The new class of "thin-film" materials is called amorphous heavy-metal cation multicomponent oxides.

This is a significant breakthrough in the emerging field of transparent electronics, experts say. The new transistors are not only transparent, but they work extremely well and could have other advantages that will help them transcend carbon-based transistor materials, such as organics and polymers, that have been the focus of hundreds of millions of dollars of research around the world. Compared to organic or polymer transistor materials, these new inorganic oxides have higher mobility, better chemical stability, and ease of manufacture, and they are physically more robust. Oxide-based transistors in many respects are already further along than organics or polymers are after many years of research, and this may blow some of them right out of the water.

Advances Made In Transparent Electronics
Researchers have made significant advances in the emerging science of transparent electronics, creating transparent "p-type" semiconductors that have more than 200 times the conductivity of the best materials available for that purpose a few years ago. This basic research is opening the door to new types of electronic circuits that, when deposited onto glass, are literally invisible. The studies are so cutting edge that the products which could emerge from them haven't yet been invented, although they may find applications in everything from flat-panel displays to automobiles or invisible circuits on visors.

Most materials used to conduct electricity are opaque, but some invisible conductors of electricity are already in fairly common use, the scientists said. More complex types of transparent electronic devices, however, are a far different challenge: they require the conduction of electricity via both electrons and "holes", which are positively charged entities that can be thought of as missing electrons. These "p-type" materials will be necessary for the diodes and transistors that are essential to more complex electronic devices. Only a few laboratories in the world are working in this area, mostly in Japan, the OSU scientists said. As recently as 1997, the best transparent p-type conductive materials could conduct only about 1 siemens per centimeter (S/cm), a measure of electrical conductivity. The most sophisticated materials recently developed at OSU now conduct 220 S/cm. These are all copper-oxide-based compounds; right now copper chromium oxide is the most successful. Researchers continue to work with these materials to achieve higher transparency and even greater conductivity.

Thermography
Definition
Thermography is a non-contact, non-destructive test method that utilizes a thermal imager to detect, display, and record thermal patterns and temperatures across the surface of an object. Infrared thermography may be applied to any situation where knowledge of thermal profiles and temperatures will provide meaningful data about a system, object, or process. Since infrared radiation is emitted by all objects according to their temperature, following the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, so thermography allows one to see variations in temperature (a small numerical sketch of this dependence follows the lists below). Thermography is widely used in industry for predictive maintenance, condition assessment, quality assurance, and forensic investigations of electrical, mechanical, and structural systems. Other applications include, but are not limited to, law enforcement, firefighting, search and rescue, and medical and veterinary sciences.

What makes thermography useful?
1. It is non-contact
 - Uses remote sensing
 - Keeps the user out of danger
 - Does not intrude upon or affect the target at all
2. It is two-dimensional
 - Comparison between areas of the target is possible
 - The image allows an excellent overview of the target
 - Thermal patterns can be visualized for analysis
3. It is real time
 - Enables very fast scanning of stationary targets
 - Enables capture of fast-moving targets
 - Enables capture of fast-changing thermal patterns

Fact: Temperature is the number one form of measurement used in any process control application. As we get better at non-contact measurement and customers gain confidence, the technology will expand.

Advantages:
 - No cabinets to open and no downtime required to de-energize circuits for safety reasons, using the Spyglass and Viewport technology
 - Reduces loss of revenue due to downtime
 - Prevents premature failures
 - Quickly locates problems without interrupting service
 - Reduces time spent running on backup generator systems
 - Reduces the time spent in low-power operations
 - Reduces man-hours
 - Reduces overall operating cost
 - Reduces insurance cost
 - Increases safety
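Because every object radiates according to its temperature, the contrast a thermal imager sees can be estimated with the Stefan-Boltzmann law; the emissivity and temperatures in the sketch below are illustrative example values.

```python
# Stefan-Boltzmann sketch of why thermal contrast is visible to an imager:
# radiated power grows as the fourth power of absolute temperature.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temp_c, emissivity=0.95):
    t_kelvin = temp_c + 273.15
    return emissivity * SIGMA * t_kelvin ** 4   # W per square metre

if __name__ == "__main__":
    cool, hot = radiant_exitance(25.0), radiant_exitance(75.0)
    print(f"25 C surface: {cool:.0f} W/m^2, 75 C surface: {hot:.0f} W/m^2")
```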

Surface Plasmon Resonance

Definition
Surface plasmon resonance (SPR) is a phenomenon occurring at metal surfaces (typically gold and silver) when an incident light beam strikes the surface at a particular angle. Depending on the thickness of a molecular layer at the metal surface, the SPR phenomenon results in a graded reduction in the intensity of the reflected light. Biomedical applications take advantage of the exquisite sensitivity of SPR to the refractive index of the medium next to the metal surface, which makes it possible to measure accurately the adsorption of molecules on the metal surface and their eventual interactions with specific ligands. The last ten years have seen a tremendous development of SPR use in biomedical applications. The technique is applied not only to the real-time measurement of the kinetics of ligand-receptor interactions and to the screening of lead compounds in the pharmaceutical industry, but also to the measurement of DNA hybridization, enzyme-substrate interactions, polyclonal antibody characterization, epitope mapping, protein conformation studies, and label-free immunoassays. Conventional SPR is applied in specialized biosensing instruments. These instruments use expensive sensor chips of limited reuse capacity and require complex chemistry for ligand or protein immobilization. SPR has also been applied successfully with colloidal gold particles in buffered solutions, an approach that offers many advantages over conventional SPR.

SPR Resonance Wavelength Factors
1. The metal
2. The structure of the metal's surface
3. The nature of the medium in contact with the surface

Metal
To be useful for SPR, a metal must have conduction-band electrons capable of resonating with light at a suitable wavelength. The visible and near-infrared parts of the spectrum are particularly convenient because optical components and high-performance detectors appropriate for this region are readily available. A variety of metallic elements satisfy this condition, including silver, gold, copper, aluminum, sodium, and indium. There are two critical limitations on the selection of a metal for sensor construction. First, the surface exposed to light must be pure metal: oxides, sulfides, and other films formed by atmospheric exposure interfere with SPR. Second, the metal must be compatible with the chemistries needed to perform assays; specifically, the chemical attachment of antibodies or other binding molecules to the metal surface must not impair the resonance.

Surface
The resonance condition that permits energy transfer from photons to plasmons depends upon a quantum mechanical criterion related to the energy and momentum of the photons and plasmons: both the energy and the momentum of the photons must exactly match the energy and momentum of the plasmons.

For a flat metal surface, there is no wavelength of light that satisfies this constraint, and hence there can be no surface plasmon resonance. However, there are three general configurations of SPR devices that alter the momentum of the photons in a way that fulfills the resonance criterion: prisms, gratings, and optical waveguide-based SPR systems (Figure 3). All three have been used to generate SPR.

Medium
Plasmons, although composed of many electrons, behave as if they were single charged particles. Part of their energy is expressed as oscillation in the plane of the metal surface. Their movement, like the movement of any electrically charged particles, generates an electrical field, which extends about 100 nanometers perpendicularly above and below the metal surface. The interaction between the plasmon's electrical field and the matter within the field determines the resonance wavelength. Any change in the composition of the matter within range of the plasmon's field causes a change in the wavelength of light that resonates with the plasmon, and the magnitude of this change in resonance wavelength, the SPR shift, is directly and linearly proportional to the change in composition.
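For the prism-based configuration, the resonance condition amounts to matching the in-plane photon momentum to the plasmon momentum, which fixes the resonance angle. The sketch below evaluates this condition with rough textbook values for gold near 633 nm, water, and a glass prism; these numbers are illustrative assumptions, not calibrated sensor parameters.

```python
# Sketch of the Kretschmann (prism) resonance condition:
#   n_prism * sin(theta) = Re[ sqrt(eps_metal*eps_dielectric / (eps_metal + eps_dielectric)) ]
# The gold permittivity (~633 nm), water index and prism index are rough example values.
import cmath, math

def spr_angle_deg(eps_metal=-11.6 + 1.2j, n_dielectric=1.33, n_prism=1.515):
    eps_d = n_dielectric ** 2
    k_ratio = cmath.sqrt(eps_metal * eps_d / (eps_metal + eps_d)).real
    return math.degrees(math.asin(k_ratio / n_prism))

if __name__ == "__main__":
    print(f"Resonance angle for a gold/water interface: ~{spr_angle_deg():.1f} degrees")
```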

Microwave Superconductivity
Definition
Superconductivity is a phenomenon occurring in certain materials, generally at very low temperatures, characterized by exactly zero electrical resistance and the exclusion of the interior magnetic field (the Meissner effect). It was discovered by Heike Kamerlingh Onnes in 1911. By applying the principle of superconductivity in the microwave and millimeter-wave (mm-wave) regions, components with superior performance can be fabricated. The major problem in the earlier days was that the cryogenic burden was perceived as too great compared to the performance advantage that could be realized. There were very specialized applications, such as low-noise microwave and mm-wave mixers and detectors for the highly demanding radio-astronomy applications, where the performance gained was worth the effort and complexity. With the discovery of high-temperature superconductors such as the copper oxides, rapid progress was made in the field of microwave superconductivity.

Microwave Superconductivity
According to BCS theory, Cooper pairs are formed in the superconducting state and have an energy slightly less than that of the normal electrons, so there exists a superconducting energy gap between the normal electrons and the Cooper pairs. The gap E is related to the transition temperature by

E (at T = 0 K) = 3.52 k_B Tc

where k_B is Boltzmann's constant and Tc is the critical temperature; 3.52 is the value for an ideal superconductor and varies from about 3.2 to 3.6 for most real superconductors. If a microwave or millimeter-wave photon with energy greater than the superconducting energy gap is incident on a sample and absorbed by a Cooper pair, the pair is broken, two normal electrons are created above the energy gap, and the material loses its zero-resistance property. This behavior is shown in the figure below. For an ideal superconductor with a transition temperature of Tc = 1 K, the frequency of the mm-wave photon with energy equal to the superconducting energy gap at T = 0 K would be about 73 GHz. For practical superconductors the photon energy corresponding to the energy gap scales with Tc; for niobium (Tc = 9.2 K), the most common material in LTS devices and circuits, the frequency corresponding to the energy gap is about 670 GHz. The zero-resistance property of the superconductor holds strictly at dc (f = 0); at finite frequencies there are finite, but usually very small, electrical losses. The origin of these losses at non-zero frequency is the presence of two types of charge carrier in the superconductor. Although the Cooper pairs move without resistance, the carriers in the normal state, those above the energy gap, behave like electrons in a normal conductor. As long as the operating frequency is below the energy gap, the equivalent circuit for the superconductor is simply the parallel combination of a resistor and an inductor, where the resistor represents the normal electrons and the inductor the Cooper pairs. These two kinds of carrier contribute separately to the screening of fields.

The characteristic decay length of fields into a superconductor, as determined by the Cooper-pair current, is the superconducting penetration depth. The penetration depth increases with temperature, but only slightly until the temperature gets close to Tc.
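The gap frequencies quoted above (about 73 GHz for Tc = 1 K and about 670 GHz for niobium) can be checked directly from E = 3.52 k_B Tc with f = E/h:

```python
# Check of the gap frequencies quoted in the text: f = 3.52 * k_B * Tc / h.
K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def gap_frequency_ghz(tc_kelvin, ratio=3.52):
    return ratio * K_B * tc_kelvin / H / 1e9

if __name__ == "__main__":
    print(f"Tc = 1 K   -> ~{gap_frequency_ghz(1.0):.0f} GHz")   # ~73 GHz
    print(f"Tc = 9.2 K -> ~{gap_frequency_ghz(9.2):.0f} GHz")   # close to the ~670 GHz quoted
```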

Memristor

Definition
Memristor theory was formulated and named by Leon Chua in a 1971 paper. Chua strongly believed that a fourth device existed to provide conceptual symmetry with the resistor, inductor, and capacitor. This symmetry follows from the description of basic passive circuit elements as defined by a relation between two of the four fundamental circuit variables. A device linking charge and flux (themselves defined as time integrals of current and voltage), which would be the memristor, was still hypothetical at the time. It would not be until thirty-seven years later, on April 30, 2008, that a team at HP Labs led by the scientist R. Stanley Williams announced the discovery of a switching memristor. Based on a thin film of titanium dioxide, it has been presented as an approximately ideal device. The reason the memristor is radically different from the other fundamental circuit elements is that, unlike them, it carries a memory of its past. When you turn off the voltage to the circuit, the memristor still remembers how much was applied before and for how long. That is an effect that cannot be duplicated by any circuit combination of resistors, capacitors, and inductors, which is why the memristor qualifies as a fundamental circuit element.

Need For Memristor
A memristor is one of four basic electrical circuit components, joining the resistor, capacitor, and inductor. The memristor, short for "memory resistor", was first theorized by Leon Chua in the early 1970s. He developed mathematical equations to represent the memristor, which he believed would balance the functions of the other three types of circuit element. The three known fundamental circuit elements, the resistor, capacitor, and inductor, relate four fundamental circuit variables: electric current, voltage, charge, and magnetic flux. One relation was missing, the one between charge and magnetic flux, and that is where the need for a fourth fundamental element comes in. This element has been named the memristor. Memristance (memory + resistance) is a property of an electrical component that describes the variation in the resistance of a component with the flow of charge. Any two-terminal electrical component that exhibits memristance is known as a memristor. Memristance is becoming more relevant and necessary as we approach smaller circuits; at some point, when we scale into nanoelectronics, memristance will have to be taken into account in our circuit models to simulate and design electronic circuits properly. An ideal memristor is a passive two-terminal electronic device built to express only the property of memristance (just as a resistor expresses resistance and an inductor expresses inductance). In practice, however, it may be difficult to build a 'pure' memristor, since a real device may also have a small amount of some other property, such as capacitance (just as any real inductor also has resistance). A common analogy for a resistor is a pipe that carries water: the water is analogous to electrical charge, the pressure at the input of the pipe is similar to voltage, and the rate of flow of the water through the pipe is like electrical current. Just as with an electrical resistor, the flow of water through the pipe is faster if the pipe is shorter and/or has a larger diameter.
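A minimal simulation sketch of memristive behavior is given below, using the linear ionic-drift model commonly associated with the HP titanium-dioxide device; the parameter values are illustrative assumptions, not measured figures.

```python
# Linear-drift memristor model (illustrative parameters): the device resistance
# depends on the charge that has flowed through it, so it "remembers" its past.
import numpy as np

R_ON, R_OFF = 100.0, 16e3     # assumed limiting resistances, ohms
D = 10e-9                     # assumed film thickness, m
MU_V = 1e-14                  # assumed dopant mobility, m^2 s^-1 V^-1

def simulate(t, v):
    """Integrate the internal state w(t) for a driving voltage v(t); return current i(t)."""
    w = 0.5 * D                                   # initial doped-region width
    i_out = np.zeros_like(v)
    for k in range(1, len(t)):
        m = R_ON * w / D + R_OFF * (1 - w / D)    # memristance M(w)
        i = v[k] / m
        w += MU_V * R_ON / D * i * (t[k] - t[k - 1])
        w = min(max(w, 0.0), D)                   # keep the state inside the film
        i_out[k] = i
    return i_out

t = np.linspace(0, 2.0, 20000)
v = 1.0 * np.sin(2 * np.pi * 1.0 * t)             # 1 Hz, 1 V sine drive
i = simulate(t, v)                                # plotting i against v shows the pinched hysteresis loop
```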

Earthing transformers For Power systems


Definition
Power systems and networks normally operate under variable and complex stresses. Faults in power systems are not avoidable even when the utmost care is taken at every stage, from planning to maintenance. Grounding a circuit reduces potential stresses under fault conditions. Power fed from delta-delta transformers, or from star-connected transformers whose neutral is not accessible, is occasionally shorted to ground, and such unintentional grounding can occur anywhere from the feeding system to the utilization equipment. The main objective of grounding the neutral is to make the short-circuit current sufficient in magnitude for the protective relays to act. This article is restricted to zig-zag type, oil-filled earthing transformers. A neutral point is usually available at every voltage level from the generator to the transformers; in the absence of a power transformer of suitable capacity, connection, and design, a separate grounding transformer can be used. Grounding transformers are inductive devices intended primarily to provide a neutral point for grounding purposes.

Rating and its inter-related parameters of an earthing transformer
The earthing transformer has a short-time rating (10 seconds to 1 minute). The rating of an earthing transformer is entirely different from that of a power transformer: power transformers are designed to carry their total load continuously, whereas an earthing transformer carries no load and supplies current only if one of the lines becomes grounded. It is usual to specify the single-phase earth-fault current that the earthing transformer must carry for a specified time. Since it effectively operates at no load, it must have low iron losses, and because it is a short-time device, its size and cost are less than those of a continuous-duty transformer of equal kVA rating. The kVA rating of a three-phase earthing transformer or bank is the product of the normal line-to-neutral voltage (kV) and the neutral or ground amperes that the transformer is designed to carry under fault conditions for a specified time. With I the total earth-fault current and V the line-to-line voltage, the short-time rating therefore works out to V·I/√3 (a numerical example follows the list of parameters below). When specifying the rating of an earthing transformer, the important parameters are:

Voltage: the line-to-line voltage of the system.
Current: the maximum neutral current to be carried for a specified duration. In a grounded system it depends on the type of grounding; depending on their duration, several short-time current ratings may be assigned.
Time: the transformer is designed to carry rated current for a short duration, i.e., 10 to 60 seconds, depending on the time setting of the protective gear on the system and the location of the transformer. A 10-second rating is typical for protection duty and 60 seconds for feeder applications.
Reactance: this quantity is a function of the initial symmetrical three-phase short-circuit kVA. It also depends on the type of grounding and on the application of lightning arresters and transient overvoltages.
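A quick numerical sketch of the short-time rating arithmetic described above; the 11 kV system voltage and 1000 A earth-fault current are hypothetical example figures.

```python
# Short-time kVA rating of an earthing transformer, taken (per the text) as the
# line-to-neutral voltage times the neutral/ground current, i.e. V_LL * I / sqrt(3).
# The 11 kV system and 1000 A earth-fault current are hypothetical examples.
import math

def earthing_short_time_kva(line_voltage_kv, neutral_fault_a):
    line_to_neutral_kv = line_voltage_kv / math.sqrt(3)
    return line_to_neutral_kv * neutral_fault_a     # kVA for the specified duration

if __name__ == "__main__":
    print(f"{earthing_short_time_kva(11.0, 1000):.0f} kVA (short-time)")  # ~6351 kVA
```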

Direct Current Machines


Definition
A d.c. machine is a highly versatile energy conversion device. In normal d.c. machines the stator poles are not laminated, while the armature core is always laminated to reduce eddy-current losses; d.c. machines are widely used in control systems. These machines are mainly of two types, generators and motors, and their principles go back to Michael Faraday's work in the 1820s on electromagnetic rotation and magnetic induction.

Applications
The DC generator system is designed and optimized to deliver the high currents at low voltages required for battery charging and operating DC loads. No battery chargers or power supplies are required, and DC generators do not require a transfer switch (transfer switches lower system reliability). In prime-power applications the DC generator lowers the overall cost of the system. Certain AC generators and switch-mode power supplies are incompatible: such AC generators have voltage regulators that cannot regulate the voltage under the pulsing current load of switch-mode power supplies. Polar's DC generators, when connected to a battery, do not suffer this incompatibility. DC generators are also more fuel-efficient, and site operators want the longest run time with the least amount of fuel on site.

Polar's DC generators are simpler in design, require considerably less maintenance, and are more reliable than AC generators. Propane carburetion and electronic speed governors require frequent calibration and testing; if the propane carburetion, ignition system, or governor speed control develops a problem, alternator voltage regulation and frequency control will fail, and some of the equipment powered by the generator will be damaged even if other equipment survives. Polar's DC generators have a current-limit control to prevent the alternator from overheating and the engine from stalling during shorts or overloads. This feature is extremely important in battery charging, because a battery in a low state of charge can demand more power than the generator or battery charger can manage; under current-limit control the generator continues to supply power, allowing the battery to increase its charge and drop its current demand. An AC generator uses a fuse or circuit breaker to protect against shorts and overcurrent, so the battery fails to get charged if it is overly discharged, and at remote sites a person must then visit the site to replace or reset the fuse or breaker and devise a means of bringing the batteries up to a state of charge where they can take over.

Polar's DC generators can be connected in parallel and share load; paralleling small AC generators of this kind is not practical. In many projects there are concerns about future site expansion, and given load estimates are sometimes understated. DC generators use smaller engines that can be lifted by hand and transported to a shop for repair, so high-level generator mechanics, and their expensive travel time to the site, are not required. AC generators are typically oversized to handle the starting currents of motors and to keep engine loads light enough to facilitate speed regulation; engines that are lightly loaded build up carbon around the valves and exhaust lines ("wet stacking"), which creates additional engine maintenance.

Optical Ethernet

Definition
The most formidable adversary to overcome is the issue of high-speed access and transfer capability for managing the huge amount of voice and data traffic spread across a wide geographical area. On a smaller scale, 80 percent of the traffic in corporate intranets today runs over Ethernet. While Ethernet has been the simplest and most reliable of the technologies used for local area networking, which largely removes bandwidth as an issue inside the LAN, the primary concern has been reaching out to the core network that connects to the backbone. Hence the idea of extending the capabilities of the local area network (LAN) over a core network.

With the telecom sector being deregulated in India, many incumbent and emerging carrier networks have taken up the issue of bandwidth seriously. Optical fiber technology has provided access to virtually unlimited capacity in the core network. In light of the recent debacles of the dot-coms, when the world saw a plethora of dot-coms mushrooming without proper business structures and then saw them closing operations equally soon, the recently opened-up telecom sector needs to be treated with care. While most of the existing and emerging carriers are hollering about bandwidth, which is in fact a core issue, they have nevertheless not focused on providing their subscribers value-added services. Providing secure point-to-point connectivity at gigabit speeds is one area that ought to be given a lot of thought. Service providers addressing such issues have come up with options such as leased-line connections and wireless, which are also means to this end. It would be much simpler and more cost-effective if the power of Ethernet in its native form were exploited through the entire journey from the LAN, to the MAN, to the backbone.

In the wake of deregulation, most of the aspiring and existing telcos are looking only at providing basic telephony and WLL. Most of them forget that, for a long-term player, providing value-added utility services is what the competitive environment demands. Just as important is providing a fast core network facility and the first- and last-mile connectivity, which, unfortunately, is suffering. This is where the problem lies, as traffic jams at the core network entry points are the essence of the bandwidth problem. What is necessary is a robust, cost-effective, scalable end-to-end network based on one common language: the Ethernet. As more and more businesses upgrade LANs to Fast Ethernet (100 Mbps) and Gigabit Ethernet (1000 Mbps), and look to extend mission-critical e-business extranets at native speeds to the MAN and WAN, a great opportunity opens up for various players. IDC reports suggest that by 2003, Ethernet-based technologies will account for more than 97 percent of the world's network connection shipments. This means that the market opportunity for service providers could reach $5 billion in that time frame. These users will want to interconnect their LANs at native speed throughout the network rather than having to go through service adaptations. The respite for them comes from what is called "Optical Ethernet".

This technology attempts to combine the power of optical technology with the utility of Ethernet via an integrated business and service-provider optical network based on one common language, Ethernet. By eliminating the need for translations between Ethernet and other transport protocols, such as T1, DS3, and ATM, Optical Ethernet effectively extends an organization's LAN beyond its four walls, enabling a radical shift in the way computing and network resources are deployed. The idea is to capitalize on the de facto global LAN standard to network end to end. Ethernet is no longer just a LAN technology; it has grown from 1 Gbps to 10 Gbps, with more to come. Thus, by marrying the capacity of optical technology with the reliability, simplicity, and cost-effectiveness of Ethernet, Optical Ethernet does more than just answer entry-point logjams.

DD Using Bio-robotics
In order to measure quantitatively the neuro-psychomotor condition of an individual, with a view to subsequently detecting his or her state of health, it is necessary to obtain a set of parameters such as reaction time, speed, strength, and tremor. By processing these parameters using fuzzy logic it is possible to monitor an individual's state of health, i.e., whether he or she is healthy or affected by a particular pathology such as Parkinson's disease, dementia, etc. The set of parameters obtained is useful not only for diagnosing neuro-motor pathologies (e.g., Parkinson's disease), but also for assessing general everyday health or monitoring sports performance. Moreover, continuous use of the device for health monitoring not only allows detection of the onset of a particular pathology but also provides greater awareness of how lifestyle or certain habits affect psycho-physical well-being. Since an individual's state of health should be monitored continually, it is essential that he or she can manage the test autonomously without the emotional state being influenced: autonomous testing is important, as the individual is likely to be more relaxed, which avoids emotional bias. The new system has been designed with reference to the biomechanical characteristics of the human finger.

The disease detector (DDX) is a new bio-robotic device built around a fuzzy-based control system for the detection of neuro-motor and psychophysical health conditions. The initial experimental system (DD1) and the current system (DD2) are not easily portable and, even though they are very reliable, cannot estimate the patient's health beyond the typical parameters of Parkinson's disease, nor can they transmit such diagnoses remotely. The new bio-robotic system is designed to provide an intelligent and reliable detector in a very small and portable device, with a simple joystick with a few buttons, a liquid-crystal display (LCD), and a simple interface for remote communication of the diagnosis. Because of its portability it may be adopted for terrestrial and space applications, in order to measure reactions to external stimuli. The DDX control system consists of a small board with an internal fuzzy microcontroller that acquires, through the action on a button on the joystick, several important parameters (reaction time, motion speed, force of the finger on the button, and tremor) and analyses them with fuzzy rules in order to determine the patient's disease class. The new device also includes a system to detect vocal reaction. The resulting output can be shown on a display or transmitted over a communication interface.

BACKGROUND
Reaction time, speed, force, and tremor are parameters used to obtain a quantitative, instrumental determination of a patient's neuro-psychophysical health. These parameters have been used in studying the progression of Parkinson's disease, a particularly degenerative neural process, but they can also be useful in assessing the wellness of a healthy person. As a matter of fact, these measurements turn out to be an excellent method of detecting alterations in reactive parameters due not only to a pathology but also, for example, to the use of alcohol, recreational drugs, medications used in the treatment of mental conditions, or other substances that could affect a person's reactive and coordination capabilities.
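The fuzzy-classification step can be sketched as follows; the membership breakpoints and the single rule shown are invented purely for illustration and are not the DDX rule base.

```python
# Illustrative fuzzy-logic sketch (not the actual DDX rule base): turn measured
# reaction time and tremor into a crude "risk" score via triangular memberships.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess(reaction_time_s, tremor_amp):
    # Membership breakpoints below are assumed example values.
    slow_reaction = tri(reaction_time_s, 0.35, 0.8, 1.5)
    strong_tremor = tri(tremor_amp, 0.2, 0.7, 1.2)
    # One example rule: IF reaction is slow AND tremor is strong THEN risk is high.
    return min(slow_reaction, strong_tremor)

print(f"High-risk membership: {assess(0.9, 0.8):.2f}")
```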

Clos Architecture in OPS


The need to transmit information in large volumes and in more compact forms is felt these days more than ever before. To provide the bandwidth necessary to fulfill the ever-increasing demand, copper networks have been upgraded and nowadays to a great extent replaced with optical fiber networks. Though initially these were deployed as point-to-point interconnections, real optical networking using optical switches is possible today. Since the advent of optical amplifiers allowed the deployment of dense wavelength division multiplexing (DWDM), the bandwidth available on a single fiber has grown significantly.

Optical communication can take place in one of two ways: circuit switching or packet switching. In circuit switching, the route and bandwidth allocated to the stream remain constant over the lifetime of the stream. The capacity of each channel is divided into a number of fixed-rate logical channels, called circuits. Optical cross connects (OXCs) switch wavelengths from their input ports to their output ports. To the client layer of the optical network, the connections realized by the network of OXCs are seen as a virtual topology, possibly different from the physical topology (containing WDM links). To set up the connections, as in the old telephony world, a so-called control plane is necessary to allow for signaling. Enabling automatic setup of connections through such a control plane is the focus of the work in the automatically switched optical network (ASON) framework. Since the light paths that have to be set up in such an ASON will have a relatively long lifetime (typically in the range of hours to days), the switching time requirements on OXCs are not very demanding. It is clear that the main disadvantage of such circuit switched networks is that they are not able to adequately cope with highly variable traffic. Since the capacity offered by a single wavelength ranges up to a few tens of gigabits per second, poor utilization of the available bandwidth is likely.

A packet switched concept, where bandwidth is effectively consumed only when data is being sent, clearly allows more efficient handling of traffic that greatly varies in both volume and communication endpoints, such as the currently dominant Internet traffic. In packet switching, the data stream originating at the source is divided into packets of fixed or variable size. In the last decade, various research groups have focused on optical packet switching (OPS), aimed at more efficiently using the huge bandwidths offered by such networks. The idea is to use optical fiber to transport optical packets, rather than continuous streams of light. Optical packets consist of a header and a payload. In an OPS node, the transported data is kept in the optical domain, but the header information is extracted and processed using mature control electronics, as optical processing is still in its infancy. To limit the amount of header processing, client layer traffic (e.g., IP traffic) will be aggregated into fairly large packets. To unlock the possibilities of OPS, several issues arise and are being solved today. To be competitive with other solutions, the cost of an OPS node needs to be limited, and the architectures should be future proof (i.e., scalable). In this context, the work of Clos on multistage architectures has been inspiring.
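The appeal of the Clos construction can be shown with a small back-of-the-envelope calculation. The sketch below is a generic illustration of Clos's classical result, not of any particular OPS node design: it compares the crosspoint count of a single N x N crossbar with that of a three-stage Clos network built from r input switches of size n x m, m middle switches of size r x r, and r output switches of size m x n, using the strict-sense non-blocking condition m >= 2n - 1. The port counts chosen below are arbitrary examples.

# Crosspoint count: single crossbar vs. three-stage strict-sense non-blocking Clos
# network (generic textbook formulas; not tied to a specific OPS architecture).

def crossbar_crosspoints(N):
    return N * N

def clos_crosspoints(n, r):
    """Three-stage Clos network with N = n*r ports and m = 2n - 1 middle switches."""
    m = 2 * n - 1                      # strict-sense non-blocking condition
    input_stage = r * (n * m)          # r switches of size n x m
    middle_stage = m * (r * r)         # m switches of size r x r
    output_stage = r * (m * n)         # r switches of size m x n
    return input_stage + middle_stage + output_stage

N = 1024
n, r = 32, 32                          # N = n * r
print("crossbar    :", crossbar_crosspoints(N))    # 1048576
print("3-stage Clos:", clos_crosspoints(n, r))     # 193536

The multistage structure needs far fewer switching elements for the same port count, which is exactly why it remains attractive for scalable switch architectures.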

4G Wireless Systems

Definition
A fourth generation wireless system is a packet switched wireless system with wide area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wideband (UWB) radio, and millimeter wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long term channel prediction, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and the system gives the ability for worldwide roaming, with access to a cell anywhere.

4G wireless systems abstract: Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in-between" service that serves as a stepping stone to 3G. Whereas 2G communication is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, which made their appearance in late 2002 and in 2003, are designed for voice and paging services, as well as interactive media use such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wideband (UWB) radio, millimeter wireless and smart antennas. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The frequency band is 2-8 GHz, and the system gives the ability for worldwide roaming, with access to a cell anywhere.
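The OFDM transmit path mentioned above is, at its core, an inverse FFT over a block of modulated subcarriers followed by the insertion of a cyclic prefix. The NumPy sketch below is a minimal, generic illustration of that principle; the 64 subcarriers, QPSK mapping and prefix length are illustrative choices, not parameters of any specific 4G proposal.

import numpy as np

# Minimal OFDM symbol generation: QPSK-modulated subcarriers -> IFFT -> cyclic prefix.
# Parameter values are illustrative only.
rng = np.random.default_rng(0)
n_subcarriers = 64
cp_len = 16

# Map random bits to QPSK symbols (one complex symbol per subcarrier).
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# The IFFT turns the frequency-domain subcarriers into a time-domain OFDM symbol.
time_symbol = np.fft.ifft(qpsk)

# Prepend the last cp_len samples as a cyclic prefix to absorb multipath delay spread.
ofdm_symbol = np.concatenate([time_symbol[-cp_len:], time_symbol])

# Receiver side (ideal channel): drop the prefix and FFT back to the subcarriers.
recovered = np.fft.fft(ofdm_symbol[cp_len:])
print(np.allclose(recovered, qpsk))   # True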

Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi hop networks (the strict delay requirements of voice make multi hop network service a difficult problem)
o Better spectral efficiency
o Seamless network of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN)
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development

Wearable Bio-Sensors

Definition
Wearable sensors and systems have evolved to the point that they can be considered ready for clinical application. The use of wearable monitoring devices that allow continuous or intermittent monitoring of physiological signals is critical for the advancement of both the diagnosis and the treatment of diseases. Wearable systems are totally non-obtrusive devices that allow physicians to overcome the limitations of ambulatory technology and provide a response to the need for monitoring individuals over weeks or months. They typically rely on wireless miniature sensors enclosed in patches or bandages, or in items that can be worn, such as a ring or a shirt. The data sets recorded using these systems are then processed to detect events predictive of possible worsening of the patient's clinical situation, or they are explored to assess the impact of clinical interventions.

One example is a pulse oximetry sensor that allows one to continuously monitor heart rate and oxygen saturation in a totally unobtrusive way. The device is shaped like a ring and thus can be worn for long periods of time without any discomfort to the subject. The ring sensor is equipped with a low power transceiver that accomplishes bi-directional communication with a base station and can upload data at any point in time.

Each time the heart muscle contracts, blood is ejected from the ventricles and a pulse of pressure is transmitted through the circulatory system. This pressure pulse, when traveling through the vessels, causes vessel wall displacement which is measurable at various points. In order to detect pulsatile blood volume changes by the photoelectric method, photoconductors are used: normally photoresistors, with phototransistors used where amplification is required. Light is emitted by an LED and transmitted through the artery, and the resistance of the photoresistor is determined by the amount of light reaching it. With each contraction of the heart, blood is forced to the extremities and the amount of blood in the finger increases. This alters the optical density, with the result that the light transmission through the finger reduces and the resistance of the photoresistor increases accordingly. The photoresistor is connected as part of a voltage divider circuit and produces a voltage that varies with the amount of blood in the finger. This voltage closely follows the pressure pulse.
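The signal chain described above, a photoresistor in a voltage divider producing a voltage that follows the pressure pulse, lends itself to a simple software illustration. The sketch below estimates heart rate from the dominant frequency of the pulsatile component of such a voltage waveform; the synthetic signal, sampling rate and amplitudes are assumptions made up for the example, not the ring sensor's actual firmware or data.

import numpy as np

# Illustrative pulse-rate estimation from a photoplethysmographic (PPG) voltage signal.
# The synthetic waveform below stands in for the photoresistor/voltage-divider output;
# all values (sampling rate, amplitudes, 72 bpm) are assumptions for the example.
fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
pulse_hz = 72 / 60.0                         # simulate a 72 bpm pulse
signal = 2.5 + 0.05 * np.sin(2 * np.pi * pulse_hz * t)               # volts: DC + pulse
signal += 0.005 * np.random.default_rng(1).standard_normal(t.size)   # sensor noise

# The pulse rate is the dominant spectral component of the AC part of the signal.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
bpm = 60.0 * freqs[np.argmax(spectrum)]
print(round(bpm))                            # ~72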

Poly Fuse
Polyfuse is a new standard for circuit protection. It is resettable. Many manufacturers also call it polyswitch or multifuse. Polyfuses are not fuses but polymeric positive temperature coefficient (PPTC) thermistors. Current limiting can be accomplished by using resistors, fuses, switches or positive temperature coefficient devices. Resistors are rarely an acceptable solution because the high power resistors that are usually required are expensive. One-shot fuses can be used, but they might fatigue, and they must be replaced after a fault event. Ceramic PTC devices tend to have high resistance and power dissipation characteristics. The preferred solution is a PPTC device, which has low resistance in normal operation and high resistance when exposed to a fault. Electrical shorts or electrically overloaded circuits can cause over-current and over-temperature damage. Like traditional fuses, PPTC devices limit the flow of dangerously high current during fault conditions. Unlike traditional fuses, PPTC devices reset after the fault is cleared and the power to the circuit is removed.

THE BASICS
Technically, polyfuses are not fuses but polymeric positive temperature coefficient (PPTC) thermistors. For thermistors characterized as positive temperature coefficient, the device resistance increases with temperature. These comprise thin sheets of conductive plastic with electrodes attached to either side. The conductive plastic is basically a non-conductive crystalline polymer loaded with a highly conductive carbon to make it conductive. The electrodes ensure even distribution of power throughout the device. Polyfuses are usually packaged in radial, axial, surface-mount, chip, disk or washer form, and are available in voltage ratings of 30 to 250 volts and current ratings of 20 mA to 100 A.

OPERATING PARAMETERS FOR POLYFUSES
1) INITIAL RESISTANCE: The resistance of the device as received from the factory.
2) OPERATING VOLTAGE: The maximum voltage a device can withstand without damage at the rated current.
3) HOLDING CURRENT: The safe current through the device.
4) TRIP CURRENT: The current at which the device interrupts the circuit.
5) TIME TO TRIP: The time it takes for the device to trip at a given temperature.
6) TRIPPED STATE: The transition from the low resistance state to the high resistance state due to an overload.
7) LEAKAGE CURRENT: A small value of stray current flowing through the device after it has switched to the high resistance mode.
8) TRIP CYCLE: The number of trip cycles (at rated voltage and current) the device sustains without failure.
9) TRIP ENDURANCE: The duration of time the device sustains its maximum rated voltage in the tripped state without failure.
10) POWER DISSIPATION: The power dissipated by the device in its tripped state.
11) THERMAL DERATING: The influence of ambient temperature.
12) HYSTERESIS: The period between the beginning of the signaling of the device to trip and the actual tripping of the device.
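A toy model helps make the hold/trip behaviour described above concrete. The sketch below is purely illustrative (real PPTC resistance-versus-temperature curves are strongly non-linear and device specific, and the resistance values and current ratings here are assumed): it switches between a low normal-state resistance and a high tripped-state resistance around the trip current, and latches the tripped state until power is removed.

# Toy behavioural model of a PPTC resettable fuse (illustrative values only).

class PolyFuse:
    def __init__(self, hold_current, trip_current, r_normal=0.05, r_tripped=5000.0):
        self.hold_current = hold_current      # safe continuous current, A
        self.trip_current = trip_current      # current at which the device trips, A
        self.r_normal = r_normal              # low resistance in normal operation, ohms
        self.r_tripped = r_tripped            # high resistance in the tripped state, ohms
        self.tripped = False

    def resistance(self, current):
        """Return the device resistance for the given load current."""
        if current >= self.trip_current:
            self.tripped = True               # transition to the high-resistance state
        return self.r_tripped if self.tripped else self.r_normal

    def remove_power(self):
        """PPTC devices reset only after the fault is cleared and power is removed."""
        self.tripped = False

fuse = PolyFuse(hold_current=1.0, trip_current=2.0)
print(fuse.resistance(0.5))    # 0.05   -> normal operation
print(fuse.resistance(3.0))    # 5000.0 -> fault, device trips
print(fuse.resistance(0.5))    # 5000.0 -> stays tripped (leakage-current regime)
fuse.remove_power()
print(fuse.resistance(0.5))    # 0.05   -> reset after power removal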

Non Visible Imaging


Near infrared light consists of light just beyond visible red light (wavelengths greater than 780 nm). Contrary to popular thought, near infrared photography does not allow the recording of thermal radiation (heat); far-infrared thermal imaging requires more specialized equipment. Infrared images exhibit a few distinct effects that give them an exotic, antique look. Plant life looks completely white because it reflects almost all infrared light (because of this effect, infrared photography is commonly used in aerial photography to analyze crop yields, pest control, etc.). The sky is a stark black because no infrared light is scattered. Human skin looks pale and ghostly. Dark sunglasses all but disappear in infrared because they don't block any infrared light, and it's said that you can capture the near infrared emissions of a common iron.

Infrared photography has been around for at least 70 years, but until recently has not been easily accessible to those not versed in traditional photographic processes. Since the charge-coupled devices (CCDs) used in digital cameras and camcorders are sensitive to near-infrared light, they can be used to capture infrared photos. With a filter that blocks out all visible light (also frequently called a "cold mirror" filter), most modern digital cameras and camcorders can capture photographs in infrared. In addition, they have LCD screens, which can be used to preview the resulting image in real time, a tool unavailable in traditional photography without using filters that allow some visible (red) light through.

Near-infrared (1000 - 3000 nm) spectrometry, which employs an external light source for determination of chemical composition, has previously been utilized for industrial determination of the fat content of commercial meat products, for in vivo determination of body fat, and in our laboratories for determination of lipoprotein composition in carotid artery atherosclerotic plaques. Near-infrared (IR) spectrometry has been used industrially for several years to determine the saturation of unsaturated fatty acid esters (1). Near-IR spectrometry uses a tunable light source external to the experimental subject to determine its chemical composition. Industrial utilization of near-IR will allow for the in vivo measurement of the tissue-specific rate of oxygen utilization as an indirect estimate of energy expenditure. However, assessment of regional oxygen consumption by these methods is complex, requiring a high level of surgical skill for implantation of indwelling catheters to isolate the organ under study.
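Quantitative near-IR spectrometry of the kind mentioned above ultimately rests on relating measured light attenuation to concentration. As a generic illustration (not the specific calibration used in the cited fat or lipoprotein studies), the Beer-Lambert law A = epsilon * l * c can be inverted to recover a concentration from a measured absorbance, given the molar absorptivity epsilon and path length l; the numeric values below are made up for the example.

import math

# Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l).
# All numeric values below are illustrative assumptions, not real calibration data.

def absorbance(intensity_in, intensity_out):
    """Absorbance from incident and transmitted intensity: A = log10(I0 / I)."""
    return math.log10(intensity_in / intensity_out)

def concentration(absorbance_value, epsilon, path_length_cm):
    """Concentration (mol/L) from absorbance, molar absorptivity and path length."""
    return absorbance_value / (epsilon * path_length_cm)

A = absorbance(intensity_in=1.00, intensity_out=0.35)    # measured attenuation
c = concentration(A, epsilon=120.0, path_length_cm=1.0)  # epsilon in L/(mol*cm)
print(round(A, 3), round(c, 5))                          # 0.456 0.0038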

Nuclear Batteries-Daintiest Dynamos

Micro electro mechanical systems (MEMS) comprise a rapidly expanding research field with potential applications varying from sensors in air bags, wrist-worn GPS receivers, and matchbox size digital cameras to more recent optical applications. Depending on the application, these devices often require an on-board power source for remote operation, especially in cases requiring operation for an extended period of time. In the quest to boost micro scale power generation, several groups have turned their efforts to well known energy sources, namely hydrogen and hydrocarbon fuels such as propane, methane, gasoline and diesel. Some groups are developing micro fuel cells that, like their macro scale counterparts, consume hydrogen to produce electricity. Others are developing on-chip combustion engines, which actually burn a fuel like gasoline to drive a minuscule electric generator. But all these approaches have difficulties regarding low energy densities, elimination of by-products, down scaling and recharging. These difficulties can be overcome to a large extent by the use of nuclear micro batteries.

Radioisotope thermoelectric generators (RTGs) exploited the extraordinary potential of radioactive materials for generating electricity. RTGs are particularly used for generating electricity in space missions; they use a process known as the Seebeck effect. The problem is that RTGs don't scale down well, so scientists had to find other ways of converting nuclear energy into electric energy. They have succeeded by developing nuclear batteries.

NUCLEAR BATTERIES
Nuclear batteries use the incredible amount of energy released naturally by tiny bits of radioactive material, without any fission or fusion taking place inside the battery. These devices use thin radioactive films that pack in energy at densities thousands of times greater than those of lithium-ion batteries. Because of the high energy density, nuclear batteries are extremely small in size. Considering the small size and shape of the battery, the scientists who developed it fancifully call it the "DAINTIEST DYNAMO" (the word 'dainty' means pretty). Scientists have developed two types of micro nuclear batteries: one is the junction type battery and the other is the self-reciprocating cantilever. The operation of both is explained below, one by one.

JUNCTION TYPE BATTERY
This kind of nuclear battery directly converts the high-energy particles emitted by a radioactive source into an electric current. The device consists of a small quantity of Ni-63 placed near an ordinary silicon p-n junction - a diode, basically.

WORKING
As the Ni-63 decays it emits beta particles, which are high-energy electrons that spontaneously fly out of the radioisotope's unstable nucleus. The emitted beta particles ionize the diode's atoms, creating unpaired electrons and holes that are separated in the vicinity of the p-n interface. These separated electrons and holes stream away from the junction, producing a current.
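The scale of a junction-type battery can be illustrated with a rough, order-of-magnitude estimate. The sketch below uses textbook-style approximate figures (an average Ni-63 beta energy of roughly 17 keV, about 3.6 eV per electron-hole pair in silicon) together with an assumed source activity and collection efficiency, purely for illustration; real device numbers differ.

# Order-of-magnitude estimate of the output of a Ni-63 / silicon junction battery.
# All numbers are approximate, illustrative assumptions.

ELECTRON_CHARGE = 1.602e-19          # coulombs
activity_bq = 3.7e9                  # assumed source activity (~100 mCi), decays/s
mean_beta_energy_ev = 17e3           # average Ni-63 beta energy, roughly 17 keV
pair_creation_energy_ev = 3.6        # energy per electron-hole pair in silicon, ~3.6 eV
collection_efficiency = 0.05         # assumed fraction of pairs collected as current

pairs_per_decay = mean_beta_energy_ev / pair_creation_energy_ev
current_a = activity_bq * pairs_per_decay * collection_efficiency * ELECTRON_CHARGE
power_w = current_a * 0.2            # assume ~0.2 V developed across the junction

print(f"current ~ {current_a * 1e6:.2f} microamps")
print(f"power   ~ {power_w * 1e6:.2f} microwatts")

The sub-microwatt result matches the general picture painted above: tiny, long-lived power sources rather than high-power generators.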

MILSTD 1553B
The digital data bus MILSTD 1553B was designed in the early 1970s to replace analog point-to-point wire bundles between electronic instrumentation. MILSTD 1553B has four main elements: 1. a bus controller to manage information flow; 2. remote terminals to interface one or more subsystems; 3. a bus monitor for data bus monitoring; 4. data bus components, including bus couplers, cabling, terminators and connectors. This is a differential serial bus used in military and space equipment, comprising multiple redundant bus connections, and it communicates at 1 Mbps. The bus has a single active bus controller and up to 31 remote terminals. Data transfers use 16-bit data words.

INTERFACE DESCRIPTION
MILSTD 1553 is a military standard that defines the electrical and protocol characteristics for a data bus. A data bus is used to provide a medium for the exchange of data between various systems. It is similar to a LAN in the personal computer and office automation industry. A data transmission medium which would allow all systems and subsystems to share a common set of wires was needed. So the MILSTD 1553 standard defines TDM (time division multiplexing) as the transmission of data from several signal sources through one communication system, with different signal samples staggered in time to form a composite pulse train.

HISTORY
In 1968 the SAE, a technical body of military and industrial members, established a subcommittee to define a serial data bus to meet the needs of the military avionics community, under the project name A2-K. This subcommittee developed its first draft in 1970. In 1973 MILSTD 1553 was released and first used on the F-16 fighter plane. MILSTD 1553A was then released in 1975 and used on the Air Force's F-16 and the Army's AH-64A Apache attack helicopter. 1553B was released in 1978. The SAE decided to freeze the standard to allow the component manufacturers to develop real-world 1553 products.

APPLICATIONS
The bus is applied to satellites as well as in the space shuttle. It is used in large transporters, aerial refuelers and bombers, tactical fighters and helicopters. It is even contained in missiles and may act as the primary interface between aircraft and missiles. The Navy has applied this data bus to surface and subsurface applications, and the Army has put 1553 into tanks. Commercial applications have applied the standard to systems like manufacturing production lines and BART (Bay Area Rapid Transit). The UK has issued standard 00-18P and NATO has published STANAG 3838 AVS, both in accordance with 1553B. MILSTD 1760A, the aircraft/store interconnect, has 1553B embedded in it.
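As a concrete illustration of the word-oriented protocol, the sketch below packs the fields of a 1553-style command word (a 5-bit remote terminal address, a transmit/receive bit, a 5-bit subaddress and a 5-bit word count) into a 16-bit value. The field widths follow the common description of MIL-STD-1553 command words; the sync pattern and Manchester encoding used on the actual bus are not modelled, and the example values are arbitrary.

# Pack a 1553-style command word:
#   5-bit RT address | T/R bit | 5-bit subaddress | 5-bit word count.
# (The sync field and Manchester bit encoding on the physical bus are not modelled.)

def command_word(rt_address, transmit, subaddress, word_count):
    assert 0 <= rt_address <= 31, "remote terminal address is 5 bits (0-31)"
    assert 0 <= subaddress <= 31, "subaddress is 5 bits"
    assert 1 <= word_count <= 32, "a message carries 1 to 32 data words"
    wc_field = 0 if word_count == 32 else word_count   # 0 conventionally encodes 32
    return (rt_address << 11) | (int(transmit) << 10) | (subaddress << 5) | wc_field

word = command_word(rt_address=5, transmit=True, subaddress=3, word_count=4)
print(f"{word:016b}")   # 00101 1 00011 00100 -> 0010110001100100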

Micro Electronic Pill


The invention of the transistor enabled the first use of radiotelemetry capsules, which used simple circuits for the internal study of the gastro-intestinal (GI) tract [1]. Their use was limited because they could transmit from only a single channel and because of the size of the components. They also suffered from poor reliability, low sensitivity and the short lifetimes of the devices. This led to the application of single-channel telemetry capsules for the detection of disease and abnormalities in the GI tract, where the restricted area prevented the use of traditional endoscopy. They were later modified, as they had the disadvantage of using laboratory-type sensors such as glass pH electrodes, resistance thermometers, etc., and were also of very large size. The later modification is similar to the above instrument but is smaller in size due to the application of existing semiconductor fabrication technologies. These technologies led to the formation of the "MICROELECTRONIC PILL". The microelectronic pill is basically a multichannel sensor used for remote biomedical measurements using micro technology. It is used for the real-time measurement of parameters such as temperature, pH, conductivity and dissolved oxygen. The sensors are fabricated using electron beam and photolithographic pattern integration and are controlled by an application specific integrated circuit (ASIC).

BLOCK DIAGRAM
The microelectronic pill consists of four sensors (2) mounted on two silicon chips (Chip 1 and Chip 2), a control chip (5), a radio transmitter (STD type 1: 7, crystal type 2: 10) and silver oxide batteries (8). The remaining numbered items are: 1 - access channel, 3 - capsule, 4 - rubber ring, 6 - PCB chip carrier.

BASIC COMPONENTS
A. Sensors
There are basically four sensors mounted on two chips, Chip 1 and Chip 2. On Chip 1 (shown in Fig. 2), a silicon diode temperature sensor (4), a pH ISFET sensor (1) and a dual electrode conductivity sensor (3) are fabricated. Chip 2 comprises a three-electrode electrochemical cell oxygen sensor (2) and an optional NiCr resistance thermometer.
1) Sensor chip 1: An array consisting of both the temperature sensor and pH sensor platforms was cut from the wafer and attached onto a 100-µm-thick glass cover slip cured on a hot plate. The cover slip acts as a temporary carrier to assist handling of the device during level 1 of lithography, when the electrical connection tracks, electrodes and bonding pads are defined. The bonding pads provide electrical contact to the external electronic circuit.
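To make the multichannel idea concrete, the sketch below frames one sample from each of the four sensor channels (temperature, pH, conductivity and dissolved oxygen) into a compact binary packet of the sort a control chip might hand to the radio transmitter. The packet layout, scale factors and checksum here are hypothetical illustrations, not the actual ASIC's data format.

import struct

# Hypothetical telemetry frame for the four sensor channels of the microelectronic pill.
# Layout (assumed, not the real ASIC format): 1-byte capsule ID, 4 x 16-bit raw readings,
# 1-byte XOR checksum.

def frame_packet(capsule_id, temperature_c, ph, conductivity_ms, dissolved_o2_pct):
    # Scale physical values into 16-bit raw counts (scale factors are arbitrary).
    raw = [
        int(temperature_c * 100),        # 0.01 degC resolution
        int(ph * 1000),                  # 0.001 pH resolution
        int(conductivity_ms * 100),      # 0.01 mS/cm resolution
        int(dissolved_o2_pct * 100),     # 0.01 % resolution
    ]
    body = struct.pack(">B4H", capsule_id, *raw)
    checksum = 0
    for byte in body:
        checksum ^= byte
    return body + bytes([checksum])

packet = frame_packet(capsule_id=1, temperature_c=37.2, ph=7.1,
                      conductivity_ms=5.4, dissolved_o2_pct=21.0)
print(packet.hex())   # 9-byte body plus 1 checksum byte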

MOBILE IPv6
Mobile IP is the IETF proposed standard solution for handling terminal mobility among IP subnets and was designed to allow a host to change its point of attachment transparently to an IP network. Mobile IP works at the network layer, influencing the routing of datagrams, and can easily handle mobility among different media (LAN, WLAN, dial-up links, wireless channels, etc.). Mobile IPv6 is a protocol being developed by the Mobile IP Working Group (abbreviated as MIP WG) of the IETF (Internet Engineering Task Force). The intention of Mobile IPv6 is to provide functionality for handling terminal, or node, mobility between IPv6 subnets. Thus, the protocol was designed to allow a node to change its point of attachment to the IP network in such a way that the change does not affect the addressability and reachability of the node. Mobile IP was originally defined for IPv4, before IPv6 existed. MIPv6 is currently becoming a standard due to the inherent advantages of IPv6 over IPv4 and will therefore soon be ready for adoption in 3G mobile networks. Mobile IPv6 is a highly feasible mechanism for implementing static IPv6 addressing for mobile terminals. Mobility signaling and security features (IPsec) are integrated in the IPv6 protocol as header extensions.

LIMITATIONS OF IPv4
The current version of IP (known as version 4 or IPv4) has not changed substantially since RFC 791, which was published in 1981. IPv4 has proven to be robust, easily implemented and interoperable. It has stood up to the test of scaling an internetwork to a global utility the size of today's Internet. This is a tribute to its initial design. However, the initial design of IPv4 did not anticipate:
o The recent exponential growth of the Internet and the impending exhaustion of the IPv4 address space. Although the 32-bit address space of IPv4 allows for 4,294,967,296 addresses, previous and current allocation practices limit the number of public IP addresses to a few hundred million (see the short snippet after this list). As a result, IPv4 addresses have become relatively scarce, forcing some organizations to use a Network Address Translator (NAT) to map a single public IP address to multiple private IP addresses.
o The growth of the Internet and the ability of Internet backbone routers to maintain large routing tables. Because of the way that IPv4 network IDs have been (and are currently) allocated, there are routinely over 85,000 routes in the routing tables of Internet backbone routers today.
o The need for simpler configuration. Most current IPv4 implementations must be either manually configured or use a stateful address configuration protocol such as the Dynamic Host Configuration Protocol (DHCP). With more computers and devices using IP, there is a need for a simpler and more automatic configuration of addresses and other configuration settings that does not rely on the administration of a DHCP infrastructure.
o The requirement for security at the IP level. Private communication over a public medium like the Internet requires cryptographic services that protect the data being sent from being viewed or modified in transit. Although a standard now exists for providing security for IPv4 packets (known as Internet Protocol Security, or IPsec), this standard is optional for IPv4 and proprietary security solutions are prevalent.
o The need for better support for real-time delivery of data, also called quality of service (QoS).
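The scale of the addressing problem that motivates IPv6, and hence Mobile IPv6, is easy to quantify. The short snippet below is a generic illustration using Python's standard ipaddress module; it simply contrasts the 32-bit IPv4 space mentioned above with the 128-bit IPv6 space (the example address uses the 2001:db8:: documentation prefix).

import ipaddress

# Contrast of the IPv4 and IPv6 address spaces (generic illustration).
print(2 ** 32)     # 4294967296 addresses in IPv4
print(2 ** 128)    # about 3.4e38 addresses in IPv6

# The ipaddress module handles both families; an IPv6 address written out in full:
addr = ipaddress.ip_address("2001:db8::1")
print(addr.exploded)          # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.version)           # 6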

Chip Morphing
1.1. The Energy Performance Tradeoff
Engineering is a study of tradeoffs. In computer engineering the tradeoff has traditionally been between performance, measured in instructions per second, and price. Because of fabrication technology, price is closely related to chip size and transistor count. With the emergence of embedded systems, a new tradeoff has become the focus of design. This new tradeoff is between performance and power or energy consumption. The computational requirements of early embedded systems were generally more modest, and so the performance-power tradeoff tended to be weighted towards power. "High performance" and "energy efficient" were generally opposing concepts. However, new classes of embedded applications are emerging which not only have significant energy constraints, but also require considerable computational resources. Devices such as space rovers, cell phones, automotive control systems, and portable consumer electronics all require or can benefit from high-performance processors. The future generations of such devices should continue this trend. Processors for these devices must be able to deliver high performance with low energy dissipation. Additionally, these devices evidence large fluctuations in their performance requirements. Often a device will have very low performance demands for the bulk of its operation, but will experience periodic or asynchronous "spikes" when high-performance is needed to meet a deadline or handle some interrupt event. These devices not only require a fundamental improvement in the performance power tradeoff, but also necessitate a processor which can dynamically adjust its performance and power characteristics to provide the tradeoff which best fits the system requirements at that time.

1.2. Fast, Powerful but Cheap, and Lots of Control
These motivations point to three major objectives for a power conscious embedded processor. Such a processor must be capable of high performance, must consume low amounts of power, and must be able to adapt to changing performance and power requirements at runtime. The objective of this seminar is to define a micro-architecture which can exhibit low power consumption without sacrificing high performance. This will require a fundamental shift to the power-performance curve presented by traditional microprocessors. Additionally, the processor design must be flexible and reconfigurable at run-time so that it may present a series of configurations corresponding to different tradeoffs between performance and power consumption.

1.3. MORPH
These objectives and motivations were identified during the MORPH project, a part of the Power Aware Computing / Communication (PACC) initiative. In addition to exploring several mechanisms to fundamentally improve performance, the MORPH project brought forth the idea of "gear shifting" as an analogy for run-time reconfiguration. Realizing that real world applications vary their performance requirements dramatically over time, a major goal of the project was to design microarchitectures which could adjust to provide the minimal required performance at the lowest energy cost. The MORPH project explored a number of microarchitectural techniques to achieve this goal, such as morphable cache hierarchies and exploiting bit-slice inactivity. One technique, multi-cluster architectures, is the direct predecessor of this work. In addition to microarchitectural changes, MORPH also conducted a survey of realistic embedded applications which may be power constrained.
Also, design implications of a power aware runtime system were explored.

Challenges in the Migration to 4G


Second-generation (2G) mobile systems were very successful in the previous decade. Their success prompted the development of third generation (3G) mobile systems. While 2G systems such as GSM, IS-95, and cdmaOne were designed to carry speech and low-bit-rate data, 3G systems were designed to provide higher-data-rate services. During the evolution from 2G to 3G, a range of wireless systems, including GPRS, IMT-2000, Bluetooth, WLAN, and HiperLAN, have been developed. All these systems were designed independently, targeting different service types, data rates, and users. As all these systems have their own merits and shortcomings, there is no single system that is good enough to replace all the other technologies. Instead of putting effort into developing new radio interfaces and technologies for 4G systems, which some researchers are doing, we believe establishing 4G systems that integrate existing and newly developed wireless systems is a more feasible option.

Researchers are currently developing frameworks for future 4G networks. Different research programs, such as Mobile VCE, MIRAI, and DoCoMo, have their own visions of 4G features and implementations. Some key features (mainly from the user's point of view) of 4G networks are stated as follows:
o High usability: anytime, anywhere, and with any technology
o Support for multimedia services at low transmission cost
o Personalization
o Integrated services
First, 4G networks are all-IP based heterogeneous networks that allow users to use any system at any time and anywhere. Users carrying an integrated terminal can use a wide range of applications provided by multiple wireless networks. Second, 4G systems provide not only telecommunications services, but also data and multimedia services. To support multimedia services, high-data-rate services with good system reliability will be provided. At the same time, a low per-bit transmission cost will be maintained. Third, personalized service will be provided by this new-generation network. It is expected that when 4G services are launched, users in widely different locations, occupations, and economic classes will use the services. In order to meet the demands of these diverse users, service providers should design personal and customized services for them. Finally, 4G systems also provide facilities for integrated services. Users can use multiple services from any service provider at the same time.

Just imagine a 4G mobile user, Mary, who is looking for information on movies shown in nearby cinemas. Her mobile may simultaneously connect to different wireless systems. These wireless systems may include a Global Positioning System (GPS) (for tracking her current location), a wireless LAN (for receiving previews of the movies in nearby cinemas), and a code-division multiple access (CDMA) network (for making a telephone call to one of the cinemas). In this example Mary is actually using multiple wireless services that differ in quality of service (QoS) levels, security policies, device settings, charging methods and applications. It will be a significant revolution if such highly integrated services are made possible in 4G mobile applications. To migrate current systems to 4G with the features mentioned above, we have to face a number of challenges. In this article these challenges are highlighted and grouped into various research areas. An overview of the challenges in future heterogeneous systems is provided, each area of challenges is examined in detail, and the article is then concluded.

CAN
The development of CAN began when more and more electronic devices were implemented in modern motor vehicles. Examples of such devices include engine management systems, active suspension, ABS, gear control, lighting control, air conditioning, airbags and central locking. All this means more safety and more comfort for the driver and, of course, a reduction of fuel consumption and exhaust emissions. To improve the behavior of the vehicle even further, it was necessary for the different control systems to exchange information. This was usually done by discrete interconnection of the different systems. The requirement for information exchange then grew to such an extent that a cable network with a length of up to several miles and many connectors was required. This produced growing problems concerning material cost, production time and reliability.

The solution to the problem was the connection of the control systems through a serial bus system. This bus had to fulfill some special requirements due to its usage in a vehicle. With the use of CAN, point-to-point wiring is replaced by one serial bus connecting all control systems. This is accomplished by adding some CAN-specific hardware to each control unit that provides the "rules", or the protocol, for transmitting and receiving information via the bus. CAN, or Controller Area Network, is an advanced serial bus system that efficiently supports distributed control systems. It was initially developed for use in motor vehicles by Robert Bosch GmbH, Germany, in the late 1980s, which also holds the CAN license. CAN is most widely used in the automotive and industrial market segments. Typical applications for CAN are motor vehicles, utility vehicles, and industrial automation. Other applications are trains, medical equipment, building automation, household appliances, and office automation.

FEATURES OF CAN
o Multimaster concept
o No node addressing; the message identifier specifies contents and priority
o Easy connection and disconnection of nodes
o Broadcast and multicast capability
o Sophisticated error detection
o NRZ code plus bit stuffing for synchronization
o Bus access via CSMA

MULTIMASTER CONCEPT
CAN is a multi-master bus with an open, linear structure with one logic bus line and equal nodes. The number of nodes is not limited by the protocol.

ADDRESSING NODES
In CAN protocol, the bus nodes do not have a specific address. Instead, the address information is contained in the identifiers of the transmitted messages, indicating the message content and the priority of the message.
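Bus access via CSMA with bitwise arbitration on the identifier follows directly from the two points above: because a dominant bit overwrites a recessive one, the pending message with the numerically lowest identifier wins the bus without any data being destroyed. The sketch below is a simplified software illustration of that arbitration rule; the node names, identifiers and payloads are arbitrary example values.

# Simplified CAN arbitration: among nodes starting to transmit simultaneously, the
# frame with the lowest identifier (most dominant bits first) wins the bus.
# Identifiers and payloads are arbitrary example values.

pending_frames = [
    {"node": "engine",    "identifier": 0x100, "data": b"\x12\x34"},
    {"node": "abs",       "identifier": 0x0A0, "data": b"\x01"},
    {"node": "door_lock", "identifier": 0x3F0, "data": b"\xff"},
]

def arbitrate(frames):
    """Return the frame that would win bitwise arbitration (lowest identifier)."""
    return min(frames, key=lambda f: f["identifier"])

winner = arbitrate(pending_frames)
print(f"node '{winner['node']}' transmits first (ID 0x{winner['identifier']:03X})")
# The losing nodes automatically retry once the bus becomes idle again.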

BIT for Intelligent system design


The principle of built-in test (BIT) and self-test has been widely applied to the design and testing of complex, mixed-signal electronic systems, such as integrated circuits (ICs) and multifunctional instrumentation [1]. A system with BIT is characterized by its ability to identify its operating condition by itself, through the testing and diagnosis capabilities built into its structure. To ensure reliable performance, testability needs to be incorporated into the early stages of system and product design. Various techniques have been developed over the past decades to implement BIT. In the semiconductor industry, the objective of applying BIT is to improve the yield of chip fabrication, enable robust and efficient chip testing and better cope with the increasing circuit complexity and integration density. This has been achieved by having an IC chip generate its own test stimuli and measure the corresponding responses from the various elements within the chip to determine its condition. In recent years, BIT has seen increasing applications in other branches of industry, e.g. manufacturing, aerospace and transportation, for the purposes of system condition monitoring. In manufacturing systems, BIT facilitates automatic detection of tool wear and breakage and assists in corrective actions to ensure part quality and reduce machine downtime.

2. BIT TECHNIQUES
BIT techniques are classified as: a) on-line BIT and b) off-line BIT.
On-line BIT: This includes concurrent and nonconcurrent techniques. Testing occurs during normal functional operation. In concurrent on-line BIST, testing occurs simultaneously with the normal operation mode; usually coding techniques or duplication and comparison are used [3]. In nonconcurrent on-line BIST, testing is carried out while a system is in an idle state, often by executing diagnostic software or firmware routines.
Off-line BIT: The system is not in its normal working mode; it usually uses on-chip test generators and output response analysers or micro-diagnostic routines. Functional off-line BIT is based on a functional description of the component under test (CUT) and uses functional high-level fault models. Structural off-line BIT is based on the structure of the CUT and uses structural fault models.

3. BIT FOR THE IC INDUSTRY
ICs entering the market today are more complex in design, with a higher integration density. This leads to increased vulnerability of the chip to problems such as crosstalk, noise contamination, and internal power dissipation. These problems reduce the reliability of the chip. Furthermore, with increased chip density, it becomes more difficult to access test points on a chip for external testing. Also, testing procedures currently in use are time consuming, presenting a bottleneck for higher productivity [2]. These factors have led to the emergence of BIT in the semiconductor industry as a cost effective, reliable, and efficient quality control technique. Generally, adding test circuitry onto the same IC chip increases the chip area requirement, conflicting with the need for system miniaturization and power consumption reduction. On the other hand, techniques have been developed to allow the circuit under test (CUT) to be tested using existing on-chip hardware, thus keeping the area overhead to a minimum [1]. Also, the built-in test functions obviate the need for expensive external testers. Furthermore, since the chip testing procedure is generated and performed on the chip itself, it takes less time compared to an external testing procedure.
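On-chip test generators and output response analysers are commonly built around linear feedback shift registers (LFSRs): one LFSR produces pseudo-random test patterns for the circuit under test, and a second register compresses the responses into a short signature that is compared against a known-good value. The sketch below illustrates the idea with a small 8-bit LFSR; the tap positions, the toy "circuit under test" and the compression scheme are illustrative assumptions, not any particular product's BIST logic.

# Minimal BIST illustration: an 8-bit LFSR as pattern generator and an LFSR-style
# register as a serial signature analyser. Taps and the toy CUT are illustrative only.

def lfsr_step(state, taps=(7, 5, 4, 3), width=8):
    """Advance a Fibonacci LFSR one step; 'taps' are bit positions XORed into the feedback."""
    feedback = 0
    for t in taps:
        feedback ^= (state >> t) & 1
    return ((state << 1) | feedback) & ((1 << width) - 1)

def circuit_under_test(pattern):
    """Toy combinational 'circuit under test': any 8-bit function would do."""
    return (pattern ^ (pattern >> 3)) & 0xFF

def run_bist(n_patterns=100, seed=0x5A):
    pattern, signature = seed, 0
    for _ in range(n_patterns):
        pattern = lfsr_step(pattern)                 # pseudo-random stimulus
        response = circuit_under_test(pattern)       # apply it to the CUT
        signature = lfsr_step(signature ^ response)  # compress responses into a signature
    return signature

golden = run_bist()                   # recorded once from a known-good circuit
print("pass" if run_bist() == golden else "fail")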

A 64 Point Fourier Transform Chip


Fourth generation wireless and mobile systems are currently the focus of research and development. Broadband wireless systems based on orthogonal frequency division multiplexing will allow packet based high data rate communication suitable for video transmission and mobile Internet applications. Considering this fact, we proposed a data path architecture using dedicated hardware for the baseband processor. The most computationally intensive parts of such a high data rate system are the 64-point inverse FFT in the transmit direction and the Viterbi decoder in the receive direction. Accordingly, an appropriate design methodology for constructing them has to be chosen, considering: a) how much silicon area is needed; b) how easily the particular architecture can be made flat for implementation in VLSI; c) how many wire crossings and how many long wires carrying signals to remote parts of the design are necessary in the actual implementation; d) how small the power consumption can be. This paper describes a novel 64-point FFT/IFFT processor which has been developed as part of a large research project to develop a single chip wireless modem.

ALGORITHM FORMULATION
The discrete Fourier transform A(r) of a complex data sequence B(k) of length N, where r, k = {0, 1, ..., N-1}, can be described as

A(r) = Σ_{k=0}^{N-1} B(k) W_N^{rk}, where W_N = e^{-2πj/N}.

Let us consider that N = MT, r = s + Tt and k = l + Mm, where s, l ∈ {0, 1, ..., 7} and m, t ∈ {0, 1, ..., T-1}. Applying these values in the first equation, we get

A(s + Tt) = Σ_{l=0}^{M-1} W_M^{tl} [ W_N^{sl} Σ_{m=0}^{T-1} B(l + Mm) W_T^{sm} ].

This shows that it is possible to realize the FFT of length N by first decomposing it into one M-point and one T-point FFT, where N = MT, and combining them. But this results in a two-dimensional instead of a one-dimensional structure for the FFT. We can formulate the 64-point FFT by considering M = T = 8.

This shows that it is possible to express the 64-point FFT in terms of a two-dimensional structure of 8-point FFTs plus 64 complex inter-dimensional constant multiplications. At first, appropriate data samples undergo an 8-point FFT computation; as discussed below, each such 8-point FFT requires no non-trivial multiplications. Eight such computations are needed to generate a full set of 64 intermediate data, which, after multiplication by the inter-dimensional constants, once again undergo a second 8-point FFT operation. As with the first stage, eight 8-point computations are required for the second stage. Proper reshuffling of the data coming out of the second 8-point FFT stage generates the final output of the 64-point FFT.

Fig. Signal flow graph of an 8-point DIT FFT.

Realization of the 8-point FFT using the conventional DIT structure does not need any multiplication operation. The constants to be multiplied in the first two columns of the 8-point FFT structure are either 1 or j. In the third column, the multiplications by the constants are actually addition/subtraction operations followed by a multiplication by 1/√2, which can easily be realized using only a hardwired shift-and-add operation. Thus an 8-point FFT can be carried out without using any true digital multiplier, and this provides a way to realize a low-power 64-point FFT at reduced hardware cost, since a basic 8-point FFT does not need a true multiplier. On the other hand, the number of non-trivial complex multiplications for the conventional 64-point radix-2 DIT FFT is 66. Thus the present approach results in a reduction of about 26% in complex multiplications compared to that required in the conventional radix-2 64-point FFT. This reduction of arithmetic complexity further enhances the scope for realizing a low-power 64-point FFT processor. However, the arithmetic complexity of the proposed scheme is almost the same as that of the radix-4 FFT algorithm, since the radix-4 64-point FFT algorithm needs 52 non-trivial complex multiplications.
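The decomposition described above is easy to check numerically. The NumPy sketch below is a functional illustration of the 8 x 8 decomposition, not a model of the chip's fixed-point data path: it performs the inner 8-point FFTs, applies the 64 inter-dimensional twiddle factors, performs the outer 8-point FFTs and compares the result with a direct 64-point FFT.

import numpy as np

# Numerical check of the 64-point FFT built from two 8-point FFT stages plus the
# 64 inter-dimensional twiddle factors (functional model only, not the chip data path).
rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

M = T = 8
B = x.reshape(M, T, order="F")                      # B[l, m] = x[l + 8*m]
inner = np.fft.fft(B, axis=1)                       # first 8-point FFT stage (over m)

l = np.arange(M)[:, None]
s = np.arange(T)[None, :]
twiddle = np.exp(-2j * np.pi * l * s / 64)          # 64 inter-dimensional constants
outer = np.fft.fft(inner * twiddle, axis=0)         # second 8-point FFT stage (over l)

X = outer.reshape(64)                               # X[s + 8*t] = outer[t, s]
print(np.allclose(X, np.fft.fft(x)))                # True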

Anthropomorphic Robot hand


This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four degrees of freedom (DOF); the other fingers have four joints with 3 DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with a six-axis force sensor at each fingertip and a newly developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.

INTRODUCTION
It is highly expected that forthcoming humanoid robots will execute various complicated tasks via communication with a human user. The humanoid robots will be equipped with anthropomorphic multifingered hands very much like the human hand. We call this a humanoid hand robot. Humanoid hand robots will eventually supplant human labor in the execution of intricate and dangerous tasks in areas such as manufacturing, space, the seabed, and so on. Further, the anthropomorphic hand will be provided as a prosthetic application for handicapped individuals. Many multifingered robot hands (e.g., the Stanford-JPL hand by Salisbury et al. [1], the Utah/MIT hand by Jacobsen et al. [2], the JPL four-fingered hand by Jau [3], and the Anthrobot hand by Kyriakopoulos et al. [4]) have been developed. These robot hands are driven by actuators that are located in a place remote from the robot hand frame and connected by tendon cables. The elasticity of the tendon cable causes inaccurate joint angle control, and the long wiring of tendon cables may obstruct the robot motion when the hand is attached to the tip of a robot arm. Moreover, these hands have been problematic as commercial products, particularly in terms of maintenance, due to their mechanical complexity. To solve these problems, robot hands in which the actuators are built into the hand (e.g., the Belgrade/USC hand by Venkataraman et al. [5], the Omni hand by Rosheim [6], the NTU hand by Lin et al. [7], and the DLR hand by Liu et al. [8]) have been developed. However, these hands present a problem in that their movement is unlike that of the human hand, because the number of fingers and the number of joints in the fingers are insufficient. Recently, many reports on the use of tactile sensors [9]-[13] have been presented, all of which attempted to realize adequate object manipulation involving contact with the finger and palm. The development of a hand which combines a 6-axis force sensor attached at the fingertip and a distributed tactile sensor mounted on the hand surface has been slight. Our group developed the Gifu hand I [14], [15], a five-fingered hand driven by built-in servomotors. We investigated the hand's potential, basing the platform of the study on dexterous grasping and manipulation of objects. Because it had a non-negligible backlash in the gear transmission, we redesigned the anthropomorphic robot hand based on finite element analysis to reduce the backlash and enhance the output torque. We call this version the Gifu hand II.

ANN for misuse detection


Because of the increasing dependence of companies and government agencies on their computer networks, protecting these systems from attack is critical. A single intrusion into a computer network can result in the loss, unauthorized utilization or modification of large amounts of data, and cause users to question the reliability of all of the information on the network. There are numerous methods of responding to a network intrusion, but they all require the accurate and timely identification of the attack.

Intrusion Detection Systems
The timely and accurate detection of computer and network system intrusions has always been an elusive goal for system administrators and information security researchers. The individual creativity of attackers, the wide range of computer hardware and operating systems, and the ever-changing nature of the overall threat to target systems have contributed to the difficulty in effectively identifying intrusions. While the complexities of host computers already made intrusion detection a difficult endeavor, the increasing prevalence of distributed network-based systems and insecure networks such as the Internet has greatly increased the need for intrusion detection. There are two general approaches which intrusion detection technologies use to identify attacks: anomaly detection and misuse detection. Anomaly detection identifies activities that vary from established patterns for users or groups of users. Anomaly detection typically involves the creation of knowledge bases that contain the profiles of the monitored activities. The second general approach to intrusion detection is misuse detection. This technique involves the comparison of a user's activities with the known behaviors of attackers attempting to penetrate a system. While anomaly detection typically utilizes threshold monitoring to indicate when a certain established metric has been reached, misuse detection techniques frequently utilize a rule-based approach. When applied to misuse detection, the rules become scenarios for network attacks. The intrusion detection mechanism identifies a potential attack if a user's activities are found to be consistent with the established rules. The use of comprehensive rules is critical in the application of expert systems for intrusion detection.

Current approaches to intrusion detection systems
Most current approaches to the process of detecting intrusions utilize some form of rule-based analysis. Rule-based analysis relies on sets of predefined rules that are provided by an administrator, automatically created by the system, or both. Expert systems are the most common form of rule-based intrusion detection approach. The early intrusion detection research efforts realized the inefficiency of any approach that required a manual review of a system audit trail. While the information necessary to identify attacks was believed to be present within the voluminous audit data, an effective review of the material required the use of an automated system. The use of expert system techniques in intrusion detection mechanisms was a significant milestone in the development of effective and practical detection-based information security systems. An expert system consists of a set of rules that encode the knowledge of a human "expert". These rules are used by the system to make conclusions about the security-related data from the intrusion detection system.
Expert systems permit the incorporation of an extensive amount of human experience into a computer application that then utilizes that knowledge to identify activities that match the defined characteristics of misuse and attack.
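As a toy illustration of how a neural network can stand in for hand-written rules, the sketch below trains a small multi-layer perceptron on synthetic "connection records" (the three features and the labelling scheme are invented for the example) and then flags new records as normal or attack. It uses scikit-learn purely for brevity; the point is the learned decision boundary replacing an explicit rule base, not a realistic misuse-detection data set.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy misuse-detection classifier. Features (per connection record) are invented:
# [failed logins, bytes sent / 1e4, distinct ports touched]. Label 1 = attack.
rng = np.random.default_rng(0)

normal = np.column_stack([
    rng.poisson(0.2, 500),               # few failed logins
    rng.normal(1.0, 0.3, 500),           # moderate traffic volume
    rng.poisson(2, 500),                 # few ports touched
])
attack = np.column_stack([
    rng.poisson(6, 500),                 # many failed logins
    rng.normal(3.0, 1.0, 500),           # unusually heavy traffic
    rng.poisson(20, 500),                # port-scan-like behaviour
])
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
clf.fit(X, y)

# Score two new records: one benign-looking, one scan-like.
print(clf.predict([[0, 1.1, 3], [7, 4.0, 25]]))   # expected: [0 1]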

Aluminum Electrolytic Capacitors


Aluminium electrolytic capacitors are widely used in the power supply circuitry of electronic equipment, as they offer several advantages over other types of capacitor. Selecting a capacitor for an application without knowing the basics may result in unreliable performance of the equipment due to capacitor problems. It may lead to customer dissatisfaction and damage the market potential or the image of a reputed company. Aluminium electrolytic capacitors are suitable when a large capacitance value is required in a very small size. The volume of an electrolytic capacitor is more than ten times smaller than that of a film capacitor of the same rated voltage and capacitance. The cost per microfarad is also lower than for all other capacitors.

CONSTRUCTION
An aluminium electrolytic capacitor is composed of a high-purity, thin aluminium foil (0.05 to 0.1 mm thick) having a dielectric oxide layer formed on its surface by anodization to prevent current flow in one direction. This acts as the anode. Between this foil and a second aluminium foil (the cathode foil) is an electrolyte-impregnated paper, which acts as a spacer and carries the electrolyte. Since the capacitance is inversely proportional to the dielectric thickness, and the dielectric thickness is proportional to the forming voltage, the relationship between capacitance and forming voltage is: Capacitance x Forming Voltage = Constant. Aluminium tabs attached to the anode and cathode foils act as the positive and negative leads of the capacitor, respectively. The entire element is sealed into an aluminium can using rubber, bakelite or phenolic plastic.

The construction of an aluminium electrolytic capacitor is as follows.
The anode (A): The anode is formed by an aluminium foil of extreme purity. The effective surface area of the foil is greatly enlarged (by a factor of up to 200) by electrochemical etching in order to achieve the maximum possible capacitance values.
The dielectric (O): The aluminium foil (A) is covered by a very thin oxidised layer of aluminium oxide (Al2O3). This oxide is obtained by means of an electrochemical process; the oxide thickness grows at typically about 1.2 nm per volt of forming voltage. The oxide withstands a high electric field strength and has a high dielectric constant. Aluminium oxide is therefore well suited as a capacitor dielectric in a polar capacitor. The Al2O3 has a high insulation resistance for voltages lower than the forming voltage. The oxide layer constitutes a non-linear, voltage-dependent resistance: the current increases more steeply as the voltage increases.
The electrolyte, paper and cathode (C, K): The negative electrode is a liquid electrolyte absorbed in a paper. The paper also acts as a spacer between the positive foil carrying the dielectric layer and the opposite aluminium foil (the negative foil), which acts as a contact medium to the electrolyte. The cathode foil serves as a large contact area for passing current to the operating electrolyte.

Bipolar aluminium electrolytic capacitors are also available. In this design both the anode foil and the cathode foil are anodized. The cathode foil has the same capacitance rating as the anode foil. This construction allows operation at direct voltage of either polarity as well as operation on purely alternating voltages. Since it causes internal heating, the applied alternating voltage must be kept considerably below the direct voltage rating. Since we have a series connection of two capacitor elements, the total capacitance is equal to only half the individual capacitance value. So compared to a polar capacitor, a bipolar capacitor requires up to twice the volume for the same total capacitance.
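The relation "Capacitance x Forming Voltage = Constant" follows directly from the parallel-plate formula once the oxide thickness is taken as proportional to the forming voltage. The short calculation below illustrates this with round, illustrative numbers (a relative permittivity for Al2O3 of about 9, the roughly 1.2 nm-per-volt oxide growth mentioned above, and an arbitrary example foil area).

# Illustration of: capacitance x forming voltage ~ constant for an Al electrolytic anode.
# Numbers are rough, illustrative values.
EPS0 = 8.854e-12          # F/m
EPS_R = 9.0               # approximate relative permittivity of Al2O3
GROWTH_NM_PER_V = 1.2     # oxide thickness grows ~1.2 nm per volt of forming voltage
AREA_M2 = 0.5             # effective (etched) foil area, arbitrary example

def capacitance(forming_voltage):
    thickness_m = GROWTH_NM_PER_V * forming_voltage * 1e-9
    return EPS0 * EPS_R * AREA_M2 / thickness_m      # parallel-plate formula C = eps*A/d

for v in (10, 50, 100):
    c = capacitance(v)
    print(f"{v:4d} V forming -> {c * 1e6:8.1f} uF, C x V = {c * v * 1e6:.1f} uF*V")
# The product C x V stays the same, while C itself falls as the forming voltage rises.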

IBOC Technology
The engineering world has been working on the development and evaluation of IBOC transmission for some time. The NRSC began evaluation proceedings of general DAB systems in 1995. After the proponents merged into one, Ibiquity was left in the running for potential adoption. In the fall of 2001, the NRSC issued a report on Ibiquity's FM IBOC. This comprehensive report runs 62 pages of engineering material plus 13 appendices, and covers the entire system, including its blend-to-analog operation as signal levels change. The application of FM IBOC has been studied by the NRSC and appears to be understood and accepted by radio engineers. AM IBOC has recently been studied by an NRSC working group as a prelude to its adoption for general broadcast use; it was presented during the NAB convention in April.

The FM report covers eight areas of vital performance concern to the broadcaster and listener alike: audio quality, service area, acquisition performance, durability, auxiliary data capacity, behavior as the signal degrades, stereo separation and flexibility. The open question is whether all of these concerns can be met as successfully by AM IBOC, and whether receiver manufacturers will rally to develop and produce the necessary receiving equipment. The FM report paid strong attention to the use of SCA services on FM IBOC; about half of all operating FM stations employ one or more SCAs for reading services for the blind or similar services. Before going on to the description of the FM IBOC system, it is important to discuss the basic principles of digital radio and IBOC technology, which are covered in the following sections.

2. BASIC PRINCIPLES OF DIGITAL RADIO
WHAT IS DIGITAL RADIO?
Digital radio is a new method of assembling, broadcasting and receiving communications services using the same digital technology now common in many products and services such as computers, compact discs (CDs) and telecommunications. Digital radio can:
o Provide for better reception of radio services than current amplitude modulation (AM) and frequency modulation (FM) radio broadcasts;
o Deliver higher quality sound than current AM and FM radio broadcasts to fixed, portable and mobile receivers; and
o Carry ancillary services, in the form of audio, images, data and text, providing program information associated with the station and its audio programs (such as station name, song title, artist's name and record label), other information (e.g. Internet downloads, traffic information, news and weather), and other services (e.g. paging and global satellite positioning).
A fundamental difference between analog and digital broadcasting is that digital technology involves the delivery of digital bit streams that can be used not only for sound broadcasting but for all manner of multimedia services.

Honeypots
The Internet is growing fast, doubling its number of websites every 53 days, and the number of people using the Internet is also growing. Hence, global communication is getting more important every day. At the same time, computer crimes are also increasing. Countermeasures are developed to detect or prevent attacks - most of these measures are based on known facts and known attack patterns. Countermeasures such as firewalls and network intrusion detection systems are based on prevention, detection and reaction mechanisms; but is there enough information about the enemy? As in the military, it is important to know who the enemy is, what kind of strategy he uses, what tools he utilizes and what he is aiming for. Gathering this kind of information is not easy, but it is important. By knowing attack strategies, countermeasures can be improved and vulnerabilities can be fixed. Gathering as much information as possible is one main goal of a honeypot. Generally, such information gathering should be done silently, without alarming an attacker. All the gathered information gives an advantage to the defending side and can therefore be used on production systems to prevent attacks. A honeypot is primarily an instrument for information gathering and learning. Its primary purpose is not to be an ambush for the blackhat community, to catch them in action and to press charges against them. The focus lies on a silent collection of as much information as possible about their attack patterns, used programs, purpose of attack and the blackhat community itself. All this information is used to learn more about blackhat proceedings and motives, as well as their technical knowledge and abilities. This is just the primary purpose of a honeypot; there are many other possibilities - diverting hackers from production systems or catching a hacker while conducting an attack are just two examples. Honeypots are not the perfect solution for solving or preventing computer crimes. They are hard to maintain and they need operators with good knowledge of operating systems and network security. In the right hands, a honeypot can be an effective tool for information gathering; in the wrong, inexperienced hands, a honeypot can become another infiltrated machine and an instrument for the blackhat community. This paper will present the basic concepts behind honeypots and also their legal aspects. HONEYPOT BASICS Honeypots are an exciting new technology with enormous potential for the security community. The concepts were first introduced by several icons in computer security, specifically Cliff Stoll in the book "The Cuckoo's Egg" and Bill Cheswick in his paper "An Evening with Berferd". Since then, honeypots have continued to evolve, developing into the powerful security tools they are today. Honeypots are neither like firewalls, which are used to limit or control the traffic coming into the network and to deter attacks, nor like intrusion detection systems (IDSs), which are used to detect attacks; however, they can be used along with both. Honeypots do not solve a specific problem as such; they can be used to deter attacks, to detect attacks, to gather information, to act as early warning or indication systems, and so on. They can do everything from detecting encrypted attacks in IPv6 networks to capturing the latest in on-line credit card fraud. It is this flexibility that gives honeypots their true power. It is also this flexibility that can make them challenging to define and understand.
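Before turning to formal definitions, the mechanics are easy to sketch in code: at its simplest, a honeypot is a listener on a port with no legitimate service behind it, so every connection it receives is suspect and worth recording. The following is a minimal, hypothetical low-interaction sketch in Python; the decoy port and the log file name are assumptions made purely for illustration and are not taken from any particular honeypot product.

```python
# Minimal, hypothetical low-interaction honeypot sketch: listen on a port that has
# no legitimate service, accept connections silently, and log whatever arrives.
import socket
import datetime

LISTEN_PORT = 2323         # assumed decoy port (e.g. posing as a telnet-like service)
LOG_FILE = "honeypot.log"  # assumed log destination

def run_honeypot() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen(5)
        while True:
            conn, addr = srv.accept()
            with conn, open(LOG_FILE, "a") as log:
                conn.settimeout(5.0)
                try:
                    data = conn.recv(4096)
                except socket.timeout:
                    data = b""
                # Record who connected, when, and what they sent -- nothing is answered,
                # so a legitimate user should never end up in this log.
                log.write(f"{datetime.datetime.utcnow().isoformat()} "
                          f"{addr[0]}:{addr[1]} {data!r}\n")

if __name__ == "__main__":
    run_honeypot()
```

A real deployment would typically emulate a convincing service banner and ship its logs off the machine, so that a successful intruder cannot tamper with the evidence.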
The basic definition of a honeypot is:

Immersion Lithography
OPTICAL LITHOGRAPHY The dramatic increase in performance and reduction in cost in the electronics industry are attributable to innovations in the integrated circuit and packaging fabrication processes. ICs are made using optical lithography. The speed and performance of the chips, their associated packages and, hence, the computer systems are dictated by the lithographic minimum printable size. Lithography, which replicates a pattern rapidly from chip to chip, wafer to wafer, or substrate to substrate, also determines the throughput and the cost of electronic systems. From the late 1960s, when integrated circuits had linewidths of 5 µm, to 1997, when minimum linewidths reached 0.35 µm in 64 Mb DRAM circuits, optical lithography has been used ubiquitously for manufacturing. This dominance of optical lithography in production is the result of a worldwide effort to improve optical exposure tools and resists. A lithographic system includes the exposure tool, mask, resist, and all of the processing steps to accomplish pattern transfer from a mask to a resist and then to devices. Light from a source is collected by a set of mirrors and light pipes, called an illuminator, which also shapes the light. Shaping of the light gives it a desired spatial coherence and intensity over a set range of angles of incidence as it falls on a mask. The mask is a quartz plate onto which a pattern of chrome has been deposited; it contains the pattern to be created on the wafer. The light patterns that pass through the mask are reduced by a factor of four by a focusing lens and projected onto the wafer, which is made by coating a silicon wafer with a layer of silicon nitride followed by a layer of silicon dioxide and finally a layer of photoresist. The photoresist that is exposed to the light becomes soluble and is rinsed away, leaving a miniature image of the mask pattern at each chip location. Regions unprotected by photoresist are etched by gases, removing the silicon dioxide and the silicon nitride and exposing the silicon. Impurities are added to the etched areas, changing the electrical properties of the silicon as needed to form the transistors. As early as the 1980s, experts were already predicting the demise of optical lithography, as the wavelength of the light used to project the circuit image onto the silicon wafer was too large to resolve the ever-shrinking details of each new generation of ICs. Shorter wavelengths are simply absorbed by the quartz lenses that direct the light onto the wafer. Although lithography system costs (which are typically more than one third of the cost of processing a wafer to completion) increase as the minimum feature size on a semiconductor chip decreases, optical lithography remains attractive because of its high wafer throughput. RESOLUTION LIMITS FOR OPTICAL LITHOGRAPHY The minimum feature that may be printed with an optical lithography system is determined by the Rayleigh equation: W = k1·λ/NA, where k1 is the resolution factor, λ is the wavelength of the exposing radiation and NA is the numerical aperture.
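As a worked example of the Rayleigh criterion, the short Python sketch below compares a conventional ("dry") exposure with a water-immersion exposure at the 193 nm ArF wavelength. The k1 factor and the two NA values are assumed, illustrative numbers rather than the specifications of any particular tool.

```python
# Worked example of the Rayleigh resolution criterion W = k1 * wavelength / NA.
# The numbers below are illustrative assumptions, not tool specifications.

def min_feature_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Minimum printable feature size in nanometres."""
    return k1 * wavelength_nm / na

if __name__ == "__main__":
    k1 = 0.35            # assumed aggressive process factor
    wavelength = 193.0   # ArF excimer laser wavelength in nm
    print(f"dry exposure,   NA=0.93: W ~ {min_feature_nm(k1, wavelength, 0.93):5.1f} nm")
    # Immersion raises the effective NA above 1 by filling the lens-wafer gap with water,
    # so the same wavelength prints a smaller minimum feature.
    print(f"water immersion, NA=1.35: W ~ {min_feature_nm(k1, wavelength, 1.35):5.1f} nm")
```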

Grating Light Valve Display Technology


The cathode ray tube (CRT) TV set has ruled the consumer electronics world for decades. Nowadays everyone wants big screens for their entertainment rooms, but the bigger a CRT screen gets, the deeper the glass tube must be, and the set becomes impossibly heavy and unwieldy once the diagonal measurement of the screen goes beyond about 36 inches. Thus the CRT is destined for a slow but sure decline, giving way to many new technologies. All of us must have noted the red colour of the hero's suit in the movie 'Spider-Man'; the blues of the sky and water scenes were captivating. The dark-horse technology behind this is GRATING LIGHT VALVE (GLV) TECHNOLOGY. The original GLV device concepts were developed at Stanford University. Silicon Light Machines was later founded, in 1994, to develop and commercialize a range of products based on this technology. The GLV device is a type of optical micro-electromechanical system, or MEMS: essentially a movable, light-reflecting surface created directly on a silicon wafer, utilizing standard semiconductor processes and equipment. FUNDAMENTAL CONCEPT A Grating Light Valve (GLV) device consists of parallel rows of reflective ribbons. Alternate rows of ribbons can be pulled down approximately one-quarter wavelength to create diffraction effects on incident light (see figure 1). When all the ribbons are in the same plane, incident light is reflected from their surfaces. By blocking light that returns along the same path as the incident light, this state of the ribbons produces a dark spot in a viewing system. When the (alternate) movable ribbons are pulled down, however, diffraction produces light at an angle that is different from that of the incident light. Unblocked, this light produces a bright spot in a viewing system.

The Grating Light Valve uses reflection and diffraction to create dark and bright image areas. If an array of such GLV elements is built and subdivided into separately controllable picture elements, or pixels, then a white-light source can be selectively diffracted to produce an image of monochrome bright and dark pixels. By making the ribbons small enough, pixels can be built with multiple ribbons, producing greater image brightness. If the up-and-down ribbon switching can be made fast enough, then modulation of the diffraction can produce many gradations of gray and/or colors. There are several means for displaying color images using GLV devices. These include color filters with multiple light valves, field-sequential color, and sub-pixel color using "turned" diffraction gratings.
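The quarter-wavelength figure quoted above can be put into numbers with a short sketch. The visible wavelengths used are standard values; the idea that alternate ribbons are deflected by roughly λ/4 comes from the description above, and nothing here should be read as a vendor specification.

```python
# Illustrative calculation: the ribbon deflection needed for the diffractive (bright)
# state is roughly one quarter of the illuminating wavelength. Wavelengths are standard
# visible-light values; the GLV figures here are not taken from any datasheet.

def quarter_wave_deflection_nm(wavelength_nm: float) -> float:
    return wavelength_nm / 4.0

if __name__ == "__main__":
    for colour, wl in (("blue", 465.0), ("green", 532.0), ("red", 640.0)):
        print(f"{colour:5s} ({wl:5.1f} nm): pull alternate ribbons down "
              f"~{quarter_wave_deflection_nm(wl):6.1f} nm")
```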

Fractal Antennas

There has been an ever-growing demand, in both the military and the commercial sectors, for antenna designs that possess the following highly desirable attributes: 1. compact size 2. low profile 3. conformal 4. multi-band or broadband. There are a variety of approaches that have been developed over the years which can be utilized to achieve one or more of these design objectives. Recently, the possibility of developing antenna designs that exploit in some way the properties of fractals to achieve these goals, at least in part, has attracted a lot of attention. The term fractal, which means broken or irregular fragments, was originally coined by Mandelbrot to describe a family of complex shapes that possess an inherent self-similarity or self-affinity in their geometrical structure. The original inspiration for the development of fractal geometry came largely from an in-depth study of the patterns of nature. For instance, fractals have been successfully used to model such complex natural objects as galaxies, cloud boundaries, mountain ranges, coastlines, snowflakes, trees, leaves, ferns, and much more. Since the pioneering work of Mandelbrot and others, a wide variety of applications for fractals continue to be found in many branches of science and engineering. One such area is fractal electrodynamics, in which fractal geometry is combined with electromagnetic theory for the purpose of investigating a new class of radiation, propagation, and scattering problems. One of the most promising areas of fractal-electrodynamics research is in its application to antenna theory and design. Traditional approaches to the analysis and design of antenna systems have their foundation in Euclidean geometry. There has been considerable recent interest, however, in the possibility of developing new types of antennas that employ fractal rather than Euclidean geometric concepts in their design. We refer to this new and rapidly growing field of research as fractal antenna engineering. Because fractal geometry is an extension of classical geometry, its recent introduction provides engineers with the unprecedented opportunity to explore a virtually limitless number of previously unavailable configurations for possible use in the development of new and innovative antenna designs. There are primarily two active areas of research in fractal antenna engineering: 1) the study of fractal-shaped antenna elements, and 2) the use of fractals in the design of antenna arrays. The purpose of this article is to provide an overview of recent developments in the theory and design of fractal antenna elements, as well as fractal antenna arrays. The related area of fractal frequency-selective surfaces will also be considered. WHAT ARE FRACTALS? WHAT IS FRACTAL GEOMETRY? The term "fractal" linguistically means "broken" or "fractured", from the Latin "fractus". Benoit Mandelbrot, a French mathematician, introduced the term about 20 years ago in his book "The Fractal Geometry of Nature". However, many of the fractal functions go back to classical mathematics. Names like G. Cantor (1872), G. Peano (1890), D. Hilbert (1891), Helge von Koch (1904), W. Sierpinski (1916), Gaston Julia (1918) and other personalities played an important role in Mandelbrot's concepts of a new geometry.
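As a concrete taste of the geometry involved, the sketch below generates successive iterations of the Koch curve, one of the classical self-similar shapes mentioned above and a popular starting point for fractal antenna elements. The code is purely illustrative geometry, not an antenna design tool.

```python
# Sketch of a Koch-curve generator, the kind of self-similar geometry used for
# fractal antenna elements (e.g. a Koch monopole).
import math

def koch_iteration(points):
    """Replace every segment with the four-segment Koch generator."""
    new_pts = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
        a = (x1 + dx, y1 + dy)              # 1/3 point
        b = (x1 + 2 * dx, y1 + 2 * dy)      # 2/3 point
        # apex of the equilateral bump: rotate the middle third by 60 degrees
        ang = math.atan2(dy, dx) - math.pi / 3.0
        seg = math.hypot(dx, dy)
        apex = (a[0] + seg * math.cos(ang), a[1] + seg * math.sin(ang))
        new_pts.extend([(x1, y1), a, apex, b])
    new_pts.append(points[-1])
    return new_pts

def total_length(points):
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

if __name__ == "__main__":
    pts = [(0.0, 0.0), (1.0, 0.0)]   # start from a straight 1-unit element
    for i in range(4):
        pts = koch_iteration(pts)
        # each iteration multiplies the wire length by 4/3 while the end-to-end span
        # stays fixed -- the property fractal antennas exploit for miniaturisation
        print(f"iteration {i + 1}: {len(pts):4d} points, length = {total_length(pts):.3f}")
```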

HART Communication
For many years, the field communication standard for process automation equipment has been a milliamp (mA) analog current signal. The milliamp current signal varies within a range of 4-20 mA in proportion to the process variable being represented. In typical applications a signal of 4 mA corresponds to the lower limit (0%) of the calibrated range and 20 mA corresponds to the upper limit (100%) of the calibrated range. Virtually all installed systems use this international standard for communicating process variable information between process automation equipment. The HART Field Communications Protocol extends this 4-20 mA standard to enhance communication with smart field instruments. The HART protocol was designed specifically for use with intelligent measurement and control instruments which traditionally communicate using 4-20 mA analog signals. HART preserves the 4-20 mA signal and enables two-way digital communication to occur without disturbing the integrity of the 4-20 mA signal. Unlike other digital communication technologies, the HART protocol maintains compatibility with existing 4-20 mA systems, and in doing so provides users with a uniquely backward-compatible solution. The HART Communication Protocol is well established as the existing industry standard for digitally enhanced 4-20 mA field communication. THE HART PROTOCOL - AN OVERVIEW HART is an acronym for "Highway Addressable Remote Transducer". The HART protocol makes use of the Bell 202 Frequency Shift Keying (FSK) standard to superimpose digital communication signals at a low level on top of the 4-20 mA signal. This enables two-way field communication to take place and makes it possible for additional information beyond just the normal process variable to be communicated to/from a smart field instrument. The HART protocol communicates at 1200 bps without interrupting the 4-20 mA signal and allows a host application (master) to get two or more digital updates per second from a field device. As the digital FSK signal is phase-continuous, there is no interference with the 4-20 mA signal. HART is a master/slave protocol, which means that a field (slave) device only speaks when spoken to by a master. The HART protocol can be used in various modes for communicating information to/from smart field instruments and central control or monitoring systems. HART provides for up to two masters (primary and secondary). This allows secondary masters such as handheld communicators to be used without interfering with communications to/from the primary master, i.e. the control/monitoring system. The most commonly employed HART communication mode is master/slave communication of digital information simultaneous with transmission of the 4-20 mA signal. The HART protocol permits all-digital communication with field devices in either point-to-point or multidrop network configurations. There is an optional "burst" communication mode in which a single slave device can continuously broadcast a standard HART reply message. HART COMMUNICATION LAYERS The HART protocol utilizes the OSI reference model. As is the case for most communication systems at the field level, the HART protocol implements only layers 1, 2 and 7 of the OSI model. Layers 3 to 6 remain empty since their services are either not required or are provided by the application layer (layer 7).
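The FSK mechanism described above is easy to visualise with a short sketch that builds a few bits of Bell 202 tone riding on an analog loop current. The mark/space frequencies and the bit rate are the Bell 202 values given above; the ±0.5 mA amplitude and the 12 mA example loop current are typical figures assumed for illustration.

```python
# Sketch of how HART superimposes a Bell 202 FSK signal on the 4-20 mA loop current.
# Mark (logic 1) = 1200 Hz, space (logic 0) = 2200 Hz, at 1200 bits per second.
import math

BIT_RATE = 1200.0          # bits per second
MARK_HZ, SPACE_HZ = 1200.0, 2200.0
FSK_AMPLITUDE_MA = 0.5     # assumed amplitude of the digital signal riding on the loop
SAMPLE_RATE = 48_000.0

def hart_waveform(bits, loop_current_ma=12.0):
    """Return (time, current) samples for an analog loop current plus HART FSK."""
    samples_per_bit = int(SAMPLE_RATE / BIT_RATE)
    t, i, phase = [], [], 0.0
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for _ in range(samples_per_bit):
            # phase-continuous FSK: advance the phase, never jump it, so the
            # superimposed signal averages to zero and leaves the 4-20 mA value intact
            phase += 2.0 * math.pi * freq / SAMPLE_RATE
            t.append(len(t) / SAMPLE_RATE)
            i.append(loop_current_ma + FSK_AMPLITUDE_MA * math.sin(phase))
    return t, i

if __name__ == "__main__":
    time, current = hart_waveform([1, 0, 1, 1, 0], loop_current_ma=12.0)
    print(f"{len(current)} samples, mean loop current ~ {sum(current) / len(current):.3f} mA")
```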

E-Textiles
THE scaling of device technologies has made possible significant increases in the embedding of computing devices in our surroundings. Embedded microcontrollers have for many years surpassed microprocessors in the number of devices manufactured. The new trend, however, is the networking of these devices and their ubiquity not only in traditional embedded applications such as control systems, but in items of everyday use, such as clothing, and in living environments. A trend deserving particular attention is that in which large numbers of simple, cheap processing elements are embedded in environments. These environments may cover large spatial extents, as is typically the case in networks of sensors, or may be deployed in more localized constructions, as in the case of electronic textiles. These differing spatial distributions also result in different properties of the networks constituted, such as the necessity to use wireless communication in the case of sensor networks and the feasibility of utilizing cheaper wired communications in the case of electronic textiles. Electronic textiles, or e-textiles, are a new, emerging interdisciplinary field of research, bringing together specialists in information technology, microsystems, materials, and textiles. The focus of this new area is on developing the enabling technologies and fabrication techniques for the economical manufacture of large-area, flexible, conformable information systems that are expected to have unique applications for both the consumer electronics and aerospace/military industries. They are naturally of particular interest in wearable computing, where they provide lightweight, flexible computing resources that are easily integrated or shaped into clothing. Due to their unique requirements, e-textiles pose new challenges to hardware designers and system developers, cutting across the systems, device, and technology levels of abstraction: the need for a new model of computation intended to support widely distributed applications, with highly unreliable behavior, but with stringent constraints on the longevity of the system; reconfigurability and adaptability with low computational overhead (e-textiles must rely on simple computing elements embedded into a fabric or directly into active yarns, and as operating conditions change - environmental, battery lifetime, etc. - the system has to adapt and reconfigure on-the-fly to achieve better functionality); and device and technology challenges imposed by embedding simple computational elements into fabrics, by building yarns with computational capabilities, or by the need for unconventional power sources and their manufacture in filament form. In contrast to traditional wearable computers, which are often a single monolithic computer or a small computer system that can be worn, e-textiles will be cheap, general-purpose computing substrates in the form of a woven fabric that can be used to build useful computing and sensing systems "by the yard" [1]. Techniques to program such networks are required that permit useful applications to be constructed over the defect- and fault-prone substrate. There is a need for a new model of computation to support distributed application execution with highly unreliable behavior at the device level, but with stringent constraints on longevity at the system level. Such a model should be able to support local computation and inexpensive communication among computational elements. In the classical design cycle (Fig.
1), the application is mapped onto a given platform architecture under specified constraints (performance, area, power consumption). When these constraints are met, the prototype is tested, manufactured, and used for running the application.

Electro Dynamic Tether


Tether is a word which is not heard often. The dictionary meaning of tether is 'a rope or chain used to fasten an animal so that it can graze within a certain limited area'. We can see animals like cows and goats 'tethered' to trees and posts. In space, too, tethers have an application similar to their word meaning; but instead of animals, there are spacecraft and satellites. If a tether is connected between two spacecraft (one at a lower orbital altitude and the other at a higher orbital altitude), momentum exchange can take place between them. The tether is then called a momentum exchange space tether. A tether is deployed by pushing one object up or down from the other. The gravitational and centrifugal forces balance each other at the center of mass. What happens then is that the lower satellite, which orbits faster, tows its companion along like an orbital water skier. The outer satellite thereby gains momentum at the expense of the lower one, causing its orbit to expand and that of the lower one to contract. This was the original use of tethers. But now tethers are being made of electrically conducting materials like aluminium or copper, and they provide additional advantages. Electrodynamic tethers, as they are called, can convert orbital energy into electrical energy. They work on the principle of electromagnetic induction, which can be used for power generation. Also, when the conductor moves through a magnetic field, charged particles experience an electromagnetic force perpendicular to both the direction of motion and the field. This can be used for orbit raising and lowering and for debris removal. Another application of tethers discussed here is artificial gravity inside spacecraft. NEED AND ORIGIN OF TETHERS Although space tethers have been studied theoretically since early in the 20th century, it wasn't until 1974 that Giuseppe Colombo came up with the idea of using a long tether to support a satellite from an orbiting platform. But that was a simple momentum exchange space tether. Now let us see what made scientists think of electrodynamic tethers. Every spacecraft on every mission has to carry all the energy sources required to get its job done, typically in the form of chemical propellants, photovoltaic arrays or nuclear reactors. The sole alternative - delivery service - can be very expensive. For example, the International Space Station (ISS) will need an estimated 77 metric tons of booster propellant over its anticipated 10-year life span just to keep itself from gradually falling out of orbit. Assuming a minimal price of $7000 a pound (dirt cheap by current standards) to get fuel up to the station's 360 km altitude, that works out to roughly $1.2 billion simply to maintain the orbital status quo. So scientists are taking a new look at the space tether, making it electrically conductive. In 1996, NASA launched a shuttle to deploy a satellite on a tether to study the electrodynamic effects of a conducting tether as it passes through the earth's magnetic field. As predicted by the laws of electromagnetism, a current was produced in the tether as it passed through the earth's magnetic field, the tether acting as an electrical generator. This was the origin of electrodynamic tethers.
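The induction effect described above can be estimated with a one-line formula, EMF ≈ B·L·v for a tether of length L moving at speed v across a magnetic field B. The sketch below plugs in rough low-Earth-orbit numbers, which are order-of-magnitude assumptions rather than mission data.

```python
# Back-of-the-envelope sketch of the motional EMF induced along a conducting tether,
# EMF = B * L * v. The field, speed and lengths below are rough LEO figures chosen
# for illustration only.

def tether_emf_volts(b_tesla: float, length_m: float, speed_m_s: float) -> float:
    return b_tesla * length_m * speed_m_s

if __name__ == "__main__":
    B = 3.0e-5        # Earth's magnetic field near LEO, tesla (order of magnitude)
    v = 7_700.0       # orbital speed, m/s
    for L_km in (1, 5, 20):
        emf = tether_emf_volts(B, L_km * 1000.0, v)
        print(f"{L_km:2d} km tether: EMF ~ {emf / 1000.0:5.2f} kV")
```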

FPGA in Space
A quiet revolution is taking place. Over the past few years, the density of the average programmable logic device has begun to skyrocket. The maximum number of gates in an FPGA is currently around 500,000 and doubling every 18 months. Meanwhile, the price of these chips is dropping. What all of this means is that the price of an individual NAND or NOR is rapidly approaching zero! And the designers of embedded systems are taking note. Some system designers are buying processor cores and incorporating them into system-on-a-chip designs; others are eliminating the processor and software altogether, choosing an alternative hardware-only design. As this trend continues, it becomes more difficult to separate hardware from software. After all, both hardware and software designers are now describing logic in high-level terms, albeit in different languages, and downloading the compiled result to a piece of silicon. Surely no one would claim that language choice alone marks a real distinction between the two fields. Turing's notion of machine-level equivalence and the existence of language-to-language translators long ago taught us all that that kind of reasoning is foolish. There are even now products that allow designers to create their hardware designs in traditional programming languages like C. So language differences alone are not enough of a distinction. Both hardware and software designs are compiled from a human-readable form into a machine-readable one. And both designs are ultimately loaded into some piece of silicon. Does it matter that one chip is a memory device and the other a piece of programmable logic? If not, how else can we distinguish hardware from software? Regardless of where the line is drawn, there will continue to be engineers like you and me who cross the boundary in our work. So rather than try to nail down a precise boundary between hardware and software design, we must assume that there will be overlap in the two fields. And we must all learn about new things. Hardware designers must learn how to write better programs, and software developers must learn how to utilize programmable logic. TYPES OF PROGRAMMABLE LOGIC Many types of programmable logic are available. The current range of offerings includes everything from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core (plus peripherals!). In addition to this incredible difference in size there is also much variation in architecture. In this section, I'll introduce you to the most common types of programmable logic and highlight the most important features of each type. PLDs At the low end of the spectrum are the original Programmable Logic Devices (PLDs). These were the first chips that could be used to implement a flexible digital logic design in hardware. In other words, you could remove a couple of the 7400-series TTL parts (ANDs, ORs, and NOTs) from your board and replace them with a single PLD. Other names you might encounter for this class of device are Programmable Logic Array (PLA), Programmable Array Logic (PAL), and Generic Array Logic (GAL).
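To make the PLD idea concrete, the sketch below evaluates a pair of made-up sum-of-products equations in software, the same AND-OR structure a small PLD realises in its programmable planes in place of several discrete 7400-series packages. The equations themselves are invented purely for illustration.

```python
# Illustrative sketch of what a small PLD does: evaluate fixed sum-of-products
# (AND-OR) logic equations that previously required several 7400-series parts.

def pld_outputs(a: int, b: int, c: int) -> dict:
    """Two example product-term equations, as a PLD's AND/OR planes would realise them."""
    y0 = (a & b) | (~a & c & 1)          # Y0 = A.B + /A.C
    y1 = (a & ~b & 1) | (b & c)          # Y1 = A./B + B.C
    return {"Y0": y0 & 1, "Y1": y1 & 1}

if __name__ == "__main__":
    print(" A B C | Y0 Y1")
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                out = pld_outputs(a, b, c)
                print(f" {a} {b} {c} |  {out['Y0']}  {out['Y1']}")
```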

DV Libraries and the Internet


The recent academic and commercial efforts in digital libraries have demonstrated the potential for wide-scale online search and retrieval of cataloged electronic content. By improving access to scientific, educational and historical documents and information, digital libraries create powerful opportunities for revamping education, accelerating scientific discovery and technical advancement, and improving knowledge. Furthermore, digital libraries go well beyond traditional libraries in storing and indexing diverse and complex types of material such as images, video, graphics, audio, and multimedia. Concurrent with the advancements in digital libraries, the Internet has become a pervasive medium for information access and communication. With the broad penetration of the Internet, network-based digital libraries can interoperate with other diverse networked information systems and provide around-the-clock, real-time access to widely distributed information catalogs. Ideally, the integration of digital libraries and the Internet completes a powerful picture for accessing electronic content. However, in reality, the current technologies underlying digital libraries and the Internet need considerable advancement before digital libraries supplant traditional libraries. While many of the benefits of digital libraries result from their support for complex content, such as video, many challenges remain for enabling efficient search and transport. Many of the fundamental problems with digital video libraries will gain new focus in the Next Generation Internet (NGI) initiative.

DIGITAL VIDEO LIBRARIES Digital video libraries deal with cataloging, searching, and retrieving digital video. Since libraries are designed to serve large numbers of users, digital video libraries have greatest utility when deployed online. In order to serve users effectively, digital video libraries need to handle both the search and the transport of video efficiently. The model for user interaction with digital video libraries is illustrated in the figure. Video is initially added to the digital video library in an accessioning process that catalogs, indexes and stores the video data. The user then searches the digital video library by querying the catalog and index data. The results are returned to and browsed by the user. The user then has options for refining the search, such as by relevance feedback, and for selecting items for delivery. The two prevalent modes for delivering video to the user are video retrieval and video streaming. In video streaming, the video is played back over the network to the user; in many cases the user can fast forward, reverse, pause, and so forth. In video retrieval, the video is downloaded over the network to the user's local terminal. In this case, the video may be viewed later or used for other applications. Other forms of video information systems, such as video-on-demand (VOD), video conferencing, and video database (VDB) systems, share characteristics with digital video libraries. These systems generally differ in their support for video storage, searching, cataloging, browsing, and retrieval. Video conferencing systems typically deal with the live, real-time communication of video over networks. VOD systems deliver high-bandwidth video to groups of users. VDBs deal with storing and searching the structured metadata related to video, but are not oriented towards video streaming or concurrent playback to large numbers of users.

Co-operative cache based data access in ad hoc networks


Introduction
A wireless ad hoc network is a collection of autonomous nodes or terminals that communicate with each other by forming a multihop radio network and maintaining connectivity in a decentralized manner. Since the nodes communicate over wireless links, they have to contend with the effects of radio communication, such as noise, fading, and interference. In addition, the links typically have less bandwidth than in a wired network. Each node in a wireless ad hoc network functions as both a host and a router, and the control of the network is distributed among the nodes. The network topology is in general dynamic, because the connectivity among the nodes may vary with time due to node departures, new node arrivals, and the possibility of having mobile nodes. Hence, there is a need for efficient routing protocols to allow the nodes to communicate over multihop paths consisting of possibly several links in a way that does not use any more of the network "resources" than necessary. TYPES OF AD HOC NETWORKS There are two major types of wireless ad hoc networks: a) Mobile Ad Hoc Networks and b) Smart Sensor Networks. Mobile Ad Hoc Networks (MANETs) In the next generation of wireless communication systems, there will be a need for the rapid deployment of independent mobile users. Significant examples include establishing survivable, efficient, dynamic communication for emergency or rescue operations, disaster relief efforts, and military networks. Such network scenarios cannot rely on centralized and organized connectivity, and can be conceived as applications of Mobile Ad Hoc Networks. A MANET is an autonomous collection of mobile users that communicate over relatively bandwidth-constrained wireless links. Since the nodes are mobile, the network topology may change rapidly and unpredictably over time. The network is decentralized: all network activity, including discovering the topology and delivering messages, must be executed by the nodes themselves, i.e., routing functionality is incorporated into the mobile nodes. (Figure 1: Cellular Network vs. Ad Hoc Network.) The set of applications for MANETs is diverse, ranging from small, static networks that are constrained by power sources to large-scale, mobile, highly dynamic networks. The design of network protocols for these networks is a complex issue. Regardless of the application, MANETs need efficient distributed algorithms to determine network organization, link scheduling, and routing. However, determining viable routing paths and delivering messages in a decentralized environment where the network topology fluctuates is not a well-defined problem. While the shortest path (based on a given cost function) from a source to a destination in a static network is usually the optimal route, this idea is not easily extended to MANETs. Factors such as variable wireless link quality, propagation path loss, fading, multiuser interference, power expended, and topological changes become relevant issues. The network should be able to adaptively alter the routing paths to alleviate any of these effects. Moreover, in a military environment, preservation of security, latency, reliability, resistance to intentional jamming, and recovery from failure are significant concerns. Military networks are designed to maintain a low probability of intercept and/or a low probability of detection. Hence, nodes prefer to radiate as little power as necessary and transmit as infrequently as possible, thus decreasing the probability of detection or interception.
A lapse in any of these requirements may degrade the performance and dependability of the network.
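The route-selection problem mentioned above, finding the shortest path under a given cost function, can be sketched in a few lines. The topology and link costs below are invented for illustration; in a real MANET the costs would capture link quality, residual energy and so on, and the computation would have to be repeated as the topology changes.

```python
# Sketch of shortest-path route selection over a small ad hoc topology using
# Dijkstra's algorithm. Node names and link costs are hypothetical.
import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbour: cost}}; returns {node: (cost, previous_hop)}."""
    best = {source: (0.0, None)}
    heap = [(0.0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best[node][0]:
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            new_cost = cost + w
            if nbr not in best or new_cost < best[nbr][0]:
                best[nbr] = (new_cost, node)
                heapq.heappush(heap, (new_cost, nbr))
    return best

def extract_path(best, target):
    path = []
    while target is not None:
        path.append(target)
        target = best[target][1]
    return list(reversed(path))

if __name__ == "__main__":
    topology = {  # hypothetical 5-node MANET snapshot
        "A": {"B": 1.0, "C": 4.0},
        "B": {"A": 1.0, "C": 2.0, "D": 5.0},
        "C": {"A": 4.0, "B": 2.0, "D": 1.0, "E": 3.0},
        "D": {"B": 5.0, "C": 1.0, "E": 1.0},
        "E": {"C": 3.0, "D": 1.0},
    }
    routes = dijkstra(topology, "A")
    print("A -> E:", extract_path(routes, "E"), "cost", routes["E"][0])
```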

Mesh Topology
According to the San Francisco-based market research and consulting firm, Internet traffic will have reached 350,000 terabytes per month as we pass into the new millennium. This is a significant milestone, as it indicates that data traffic has already surpassed that of the voice network. To keep pace with the seemingly insatiable demand for higher-speed access, a huge, complex network-building process is beginning. Decisions made by network architects today will have an immense impact on the future profitability, flexibility, and competitiveness of network operators. Despite the dominance of the synchronous optical network (SONET), a transport technology based on time division multiplexing (TDM), more and more operators are considering adopting a point-to-point strategy and an eventual mesh topology. This article highlights the key advantages of this new approach. With such strong demand for wideband access - 1.5 million households already have a cable or digital subscriber line (DSL) modem capable of operating at 1 Mbps - there is no doubt that the future for service providers is extremely bright. However, there are a number of more immediate challenges that must be addressed. At the top of the list is the fact that network investments must be made before revenues are realized. As a result, there is a need for less complex and more efficient network builds. In an effort to cut network costs, action is being taken across several fronts: consolidating network elements, boosting reliability, reducing component system costs, and slashing operational costs. As far as optical networks are concerned, the action likely to make the most positive impact is the development of new network architectures, such as point-to-point/mesh designs. Ring architectures will still be supported, but new Internet protocol (IP) and asynchronous transfer mode (ATM) networks will find that mesh, with its well-defined optical nodes, lends itself to robust optical rerouting schemes. 2. POINT-TO-POINT OR MESH TOPOLOGIES IN THE METRO OPTICAL NETWORK Definition In a point-to-point topology, one node connects directly to another node. Mesh is a network architecture that improves on point-to-point topology by providing each node with a dedicated connection to every other node. This article highlights the key advantages of adopting a point-to-point strategy and eventual mesh topology, a new approach in transport technology. Topology Topology is the method of arranging various devices in a network. Depending on the way in which the devices are interlinked to each other, topologies are classified into star, ring, bus, tree and mesh.

Of the above-mentioned types, the most popular and advantageous is the mesh topology.
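One reason a full mesh is costly to build in the physical layer is simple arithmetic: with n nodes, a full mesh needs n(n-1)/2 dedicated links and n-1 ports per node. A tiny sketch makes the growth obvious.

```python
# Quick arithmetic behind full-mesh cabling cost: a mesh of n nodes needs
# n*(n-1)/2 dedicated links, and each node needs n-1 ports.

def mesh_links(n: int) -> int:
    return n * (n - 1) // 2

if __name__ == "__main__":
    for n in (4, 8, 16, 32):
        print(f"{n:2d} nodes -> {mesh_links(n):4d} point-to-point links, "
              f"{n - 1:2d} ports per node")
```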

Mesh Radio
Introduction Governments are keen to encourage the roll-out of broadband interactive multimedia services to business and residential customers because they recognise the economic benefits of e-commerce, information and entertainment. Digital cable networks can provide a compelling combination of simultaneous services including broadcast TV, VOD, fast Internet and telephony. Residential customers are likely to be increasingly attracted to these bundles as the cost can be lower than for separate provision. Cable networks have therefore been implemented or upgraded to digital in many urban areas in the developed countries. ADSL has been developed by telcos to allow on-demand delivery via copper pairs. A bundle comparable to cable can be provided if ADSL is combined with PSTN telephony and satellite or terrestrial broadcast TV services, but incumbent telcos have been slow to roll it out and 'unbundling' has not proved successful so far. Some telcos have been accused of restricting ADSL performance and keeping prices high to protect their existing business revenues. Prices have recently fallen, but even now the ADSL (and SDSL) offerings are primarily targeted at provision of fast (but contended) Internet services for SME and SOHO customers. This slow progress (which is partly due to the unfavourable economic climate) has also allowed cable companies to move slowly.

A significant proportion of customers in suburban and semi-rural areas will only be able to have ADSL at lower rates because of the attenuation caused by the longer copper drops. One solution is to take fibre out to street cabinets equipped for VDSL, but this is expensive, even where ducts are already available. Network operators and service providers are increasingly beset by a wave of technologies that could potentially close the gap between their fibre trunk networks and a client base that is all too anxious for the industry to accelerate the rollout of broadband. While the established vendors of copper-based DSL and fibre-based cable are finding new business, many start-up operators, discouraged by the high cost of entry into wired markets, have been looking to evolving wireless radio and laser options. One relatively late entrant into this competitive mire is mesh radio, a technology that has quietly emerged to become a potential holder of the title 'next big thing'. Mesh Radio is a new approach to Broadband Fixed Wireless Access (BFWA) that avoids the limitations of point-to-multipoint delivery. It could provide a cheaper 'third way' to implement residential broadband that is also independent of any existing network operator or service provider. Instead of connecting each subscriber individually to a central provider, each is linked to several other subscribers nearby by low-power radio transmitters; these in turn are connected to others, forming a network, or mesh, of radio interconnections that at some point links back to the central transmitter.

Metamorphic Robots
Robots out on the factory floor pretty much know what's coming. Constrained as they are by programming and geometry, their world is just an assembly line. But for robots operating outdoors, away from civilization, both mission and geography are unpredictable. Here, robots with the ability to change their shape could adapt to constantly varying terrain. Metamorphic robots are designed so that they can change their external shape without human intervention. One general way to achieve such functionality is to build a robot composed of multiple, identical unit modules. If the modules are designed so that they can be assembled into rigid structures, and so that individual units within such structures can be relocated within and about the structure, then self-reconfiguration is possible. These systems claim to have many desirable properties including versatility, robustness and low cost. Each module has its own computer, a rich set of sensors and actuators, and communication links. However, practical application outside of research has yet to be seen. One outstanding issue for such systems is the increasing complexity of effectively programming a large distributed system with hundreds or even thousands of nodes in changing configurations. PolyBot, developed through a third generation at the Xerox Palo Alto Research Center, and the CONRO robot built at the Information Sciences Institute at the University of Southern California are examples of metamorphic robots. SELF-RECONFIGURATION THROUGH MODULARITY Modularity means being composed of multiple identical units called modules. The robot is made up of thousands of modules. The systems addressed here are automatically reconfiguring, and their hardware tends to be more homogeneous than heterogeneous. That is, the system may have different types of modules, but the ratio of the number of module types to the number of modules is very low. Systems with all of these characteristics are called n-modular, where n refers to the number of module types and n is small, typically one or two (e.g. a system with two types of modules is called 2-modular). The general philosophy is to simplify the design and construction of components while enhancing functionality and versatility through larger numbers of modules. Thus, the low heterogeneity of the system is a design leverage point, getting more functionality for a given amount of design. The analog in architecture is the building of a cathedral from many simple bricks, in which the bricks are of few types. In nature, the analogy is complex organisms like mammals, which have billions of cells but only hundreds of cell types. THREE PROMISES OF N-MODULAR SYSTEMS 1. Versatility Versatility stems from the many ways in which modules can be connected, much like a child's Lego bricks. The robot can shape itself into a dog, a chair or a house by reconfiguration. The same set of modules could connect to form a robot with a few long thin arms and a long reach, or one with many shorter arms that could lift heavy objects. For a typical system with hundreds of modules, there are usually millions of possible configurations, which can be applied to many diverse tasks. Modular reconfigurable robots with many modules have the ability to form a large variety of shapes to suit different tasks. Figure 2 shows a robot in the form of a loop rolling over flat terrain, Figure 3 shows an earthworm type slithering through obstacles, and Figure 4 shows a spider form striding over bumpy or hilly terrain.

Low Energy Efficient Wireless Communication Network Design


Energy-efficient wireless communication network design is an important and challenging problem. It is important because mobile units operate on batteries with a limited energy supply. It is challenging because there are many different issues that must be dealt with when designing a low-energy wireless communication system (such as amplifier design, coding, and modulation design), and these issues are coupled with one another. Furthermore, the design and operation of each component of a wireless communication system present trade-offs between performance and energy consumption. Therefore, the challenge is to exploit the coupling among the various components of a wireless communication system, and to understand the trade-offs between performance and energy consumption in each individual component, in order to come up with an overall integrated system design that has optimal performance and achieves low energy (power) consumption. The key observation is that constraining the energy of a node imposes a coupling among the design layers that cannot be ignored in performing system optimization. In addition, the coupling between layers requires simulation in order to accurately determine the performance. The purpose of this paper is to present a methodology for the design, simulation and optimization of wireless communication networks for maximum performance under an energy constraint. Before we proceed, we illustrate, through simple examples, a couple of issues that need to be addressed. To highlight the trade-offs between performance and energy consumption at individual components, consider the design and operation of an amplifier. The amplifier boosts the power of the desired signal so that the antenna can radiate sufficient power for reliable communications. However, typical power amplifiers have maximum efficiency in converting DC power into RF power when the amplifier is driven into saturation. In this region of operation, the amplifier voltage transfer function is nonlinear. Because of this nonlinearity, the amplifier generates unwanted signals (so-called intermodulation products) in the band of the desired signal and in adjacent bands. When the amplifier drive level is reduced significantly (large back-off), the amplifier voltage transfer characteristic becomes approximately linear. In this case it does not generate intermodulation products. However, with large back-off the amplifier is not able to efficiently convert DC power into RF power. Thus, there is considerable wasting of power at low drive levels, but at high drive levels more interfering signals are generated. To highlight the coupling among the designs of individual components of a wireless system, consider packet routing in a wireless network that contains no base station (i.e. an ad hoc network). For simplicity, consider a network with nodes A, B and C, as shown in the figure. If Node A wants to transmit a message to Node C, it has two options: transmit with power sufficient to reach Node C in a single transmission, or transmit first from A to B with smaller power, and then from B to C. Since the received signal power typically decays with distance as d^4, there is significantly smaller power loss due to propagation in the second option, because d_AC^4 > d_AB^4 + d_BC^4. However, even though Node A transmits with smaller output power, this does not necessarily decrease proportionally the amount of energy actually consumed, because of the amplifier effect discussed above.
Furthermore, besides the energy required for packet transmission, there are energy requirements for packet reception and information decoding. The packet error probability that is achieved depends on the energy allocated to the receiver. Consequently, there is a coupling among amplifier design, coding and modulation design, and decoding design, as well as the routing protocol.
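The relay-versus-direct example above can be put into numbers. The sketch below assumes a d^4 path-loss law and invented distances, receiver sensitivity and amplifier efficiencies; it is meant only to show how the radiated-power saving of relaying is qualified by amplifier efficiency (the relay's own receive and decode energy, not modelled here, would shrink the gap further).

```python
# Numerical sketch of the relayed-versus-direct trade-off described above, assuming
# received power falls off as d^4. All numbers are invented for illustration.

PATH_LOSS_EXPONENT = 4
P_RX_REQUIRED = 1.0e-9   # watts needed at each receiver (assumed sensitivity)

def tx_power(distance_m: float) -> float:
    """Radiated power needed so the receiver at distance d still sees P_RX_REQUIRED."""
    return P_RX_REQUIRED * distance_m ** PATH_LOSS_EXPONENT

def dc_power(radiated_w: float, amp_efficiency: float) -> float:
    """DC power drawn from the battery for a given radiated power and amplifier efficiency."""
    return radiated_w / amp_efficiency

if __name__ == "__main__":
    d_ac, d_ab, d_bc = 200.0, 120.0, 110.0   # metres: A-C direct vs A-B-C relay
    direct = tx_power(d_ac)
    relay = tx_power(d_ab) + tx_power(d_bc)
    print(f"radiated power: direct {direct:.3e} W, relayed (two hops) {relay:.3e} W")
    # The catch: at the lower drive level the amplifier converts DC to RF less efficiently,
    # so the battery-level saving is smaller than the radiated-power saving suggests.
    print(f"DC power, direct at 50% efficiency : {dc_power(direct, 0.50):.3e} W")
    print(f"DC power, relayed at 20% efficiency: {dc_power(relay, 0.20):.3e} W")
```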

Indoor Geolocation

Recently, there has been increasing interest in accurate location-finding techniques and location-based applications for indoor areas. The Global Positioning System (GPS) and wireless enhanced 911 (E-911) services also address the issue of location finding. However, these technologies cannot provide accurate indoor geolocation, which has its own independent market and unique technical challenges. Despite extraordinary advances in GPS technology, millions of square meters of indoor space are out of reach of GPS satellites. Their signals, originating high above the earth, are not designed to penetrate most construction materials, and no amount of technical wizardry is likely to help. So the greater part of the world's commerce, being conducted indoors, cannot be followed by GPS satellites. Consider some everyday business challenges. Perpetual physical inventory is needed for manufacturing control, as well as to keep assets from being lost or pilfered. Mobile assets, such as hospital crash carts, need to be on hand in an emergency. Costly and baroque procedures presently track and find manufacturing work-in-process. Nor is the office immune: loss of valuable equipment such as laptop computers has become a serious problem, and locating people in a large office takes time and disrupts other activities. What these situations share is a need to find and track physical assets and people inside buildings. The design differences between an efficient asset-tracking system and GPS are more basic. First and foremost, control of the situation shifts from users of GPS receivers, querying the system for a fix on their position, to overhead scanners checking up on the positions of many specially tagged objects and people. In GPS, each receiver must determine its own position in reference to a fixed infrastructure, whereas inside a building, the tracking infrastructure must keep tabs on thousands of tags.

Systems consulting for a health maintenance organization (HMO) sparked the interest in this technology. As patients' files were often impossible to find, doctors were forced to see one in five persons unaided by a medical record. Attempts to bar-code the records did not solve the problem, as HMO staff frequently forgot to scan critical files when passing them between offices. Not surprisingly, the files most often misplaced concerned complicated cases with multiple caregivers. The record room employees could be found in clinical areas most of the time, consulting lists of desperately needed records as they sifted through piles of paper. Several physicians wondered if there was anything like the GPS devices that they could use to track the records through the facility. Accurate indoor geolocation is an important and novel emerging technology for commercial, public safety and military applications. In commercial applications for residential and nursing homes there is an increasing need for indoor geolocation systems to track people with special needs, the elderly, and children who are away from visual supervision, to locate in-demand portable equipment in hospitals, and to find specific items in warehouses. In public safety and military applications, indoor geolocation systems are needed to track inmates in prisons, and to navigate policemen, firefighters and soldiers so that they can complete their missions inside buildings. These incentives have initiated interest in modeling the radio channel for indoor geolocation, the development of new technologies, and the emergence of first-generation indoor geolocation products. To help the growth of this emerging industry there is a need to develop a scientific framework to lay a foundation for the design and performance evaluation of such systems.


Wireless DSL

1.1 The Wireless Last Mile In October 2002, at Owensboro, a small Kentucky city on the Ohio River, after a five-month pilot program, the local electricity and water provider, Owensboro Municipal Utilities (OMU), rolled out a high-speed broadband service to the city's 58,000 residents at US $25 a month, just $2 more than what many were paying for low-speed dial-up access. In the six months since launching the service, OMU Online has connected more than 700 customers with broadband access. Currently, it has a backlog of several hundred connections, and expects to have a total of 1,500 customers by the end of the year. Two months earlier and 3000 km away, in Klamath Falls, Ore., a small start-up company, Always On Network Inc. (Chiloquin, Ore.), began serving up broadband to 30 test customers, converting them into paying customers a few months later. What makes these enterprises novel isn't the data rates, which aren't exceptional for broadband at 250-1000 Kb/s; it's the way that the bits are delivered: wirelessly, at least for the critical last mile to the home. Bypassing the copper wires that connect a phone company's central offices to its customers, these wireless Internet service providers can deliver broadband more cheaply than digital subscriber lines (DSL) and can reach out to rural homes and others not currently served at all except by dial-up. The two providers are destined to be midwives to the next generation of broadband: wireless metropolitan-area networks (MANs). Propelled in part by a new standard, IEEE 802.16, wireless MANs are expected to do for neighborhoods, villages, and cities what IEEE 802.11, the standard for wireless local-area networks, is doing for homes, coffee shops, airports, and offices. Notably, OMU and Always On represent opposing approaches to wireless MANs from two of the technology's top system vendors, Alvarion Ltd. (Tel Aviv, Israel, and Carlsbad, Calif.) and Soma Networks Inc. (San Francisco). Alvarion has embraced the 802.16 standard and is a founding member of the WiMax Forum (San Jose, Calif.), an industry consortium created to commercialize it and a corresponding standard from the European Telecommunications Standards Institute known as HIPERMAN. The institute is based in Sophia Antipolis, France. Alvarion's existing wireless last-mile products, and those of the other WiMax members, are designed for wireless Internet service providers (WISPs), making the connection between homes and the Internet backbone that lets end users bypass their telephone companies. Soma, reluctant to abandon or change a five-year odyssey of wireless MAN technology development, and believing its technology to be superior to anything its competitors have, is ready to stand apart from the standard and go it alone. Its system, while eminently usable by WISPs, is chock full of quality-of-service features that help it transport voice-over-Internet-protocol packets. That means it's especially well positioned to be adopted by the phone companies themselves. Andrew B. King of Web Site Optimization LLC (Ann Arbor, Mich.) predicts that 50 percent of all Internet access will be broadband by July 2004, and that this will climb to two-thirds just a year later. But if the future of telecommunications lies in broadband services, that only spells more trouble for phone companies. They've staked their broadband futures on DSL, which provides data rates 10-20 times as fast as dial-up on the same old copper phone lines. Creating a wireless network is relatively simple.
At its heart is a base station, which can be put on top of a building's roof, a cellular tower, or even a water tower. The base station is the bridge between the wired world of the Internet, on one end, and subscribers, with whom it is connected by radio waves, on the other. With each station generally serving a 10- to 15-km radius, base stations can be put up where, and only where, they're economically justified. Dan Stanton, Always On's chief operating officer, says he needs $4.2 million to pay for a base station and enough home devices for the 6000 households it plans to sign up within a 500-km2 area centered on Klamath Falls. Stanton says that the return on that investment would be $2.5 million per year. Even if that figure, and the 6000 subscribers, were to prove unrealistic, it is clear that the costs of serving a rural area are quite a bit lower than they are for wired DSL.

Wireless Microserver

Since the early days of the Web, server-side executable content has been an important ingredient of server technology. It has turned simple hypertext retrieval into real applications. Not surprisingly, the idea of remotely controlling devices through the Web [1, 2] has always seemed near at hand. Because hypertext user interfaces can run on any Web browser, UI development boils down to Web content creation. Furthermore, thanks to the HTTP standard's smart and scalable nature, we can fit embedded servers into simple 8-bit microcontrollers with only a few Kbytes of RAM and ROM. Ever since we started integrating hypertext browsers into mobile phones, people have proposed using mobile phones as remote controls. Now, with the provision of short-range wireless connectivity - for example, through Bluetooth - mobile phones and other handhelds might substantially change the way people interact with electronic devices. Here, we report on our effort to create a low-power wireless microserver with a very small form factor and to connect it to mobile devices using standard consumer technology.

EMBEDDING SERVERS INTO DEVICES There are several ways to make things visible on the Web. In the simplest case, a server hosts an item's Web presence without a physical connection to the item. A handheld device reads links between the item and its Web presence, connects to the respective URL, and retrieves information about the item. A well-known example of this approach is the Cooltown Museum [1], where small infrared transceivers are located close to the pictures. When visitors come close, their PDAs receive Web links that point to the information pages for the particular picture. Unfortunately, interacting with the item itself is impossible. Interaction with a device would be possible if the device had a wireless control interface to its internal logic. For example, the mobile terminal could download a device-specific user interface application from the Web and use it to control the device through a device-dependent protocol (see Figure A1). This approach might become feasible when we can download Java applications into mobile terminals with access to Bluetooth APIs. Accessing the device immediately and locally, without an Internet connection, would be possible only if the device contained an embedded Web server (see Figure A2). An execution environment, such as server-side scripting, would be required to interact with the device's logic. Short-range connectivity seems to be an obstacle, but it empowers location-aware applications through the wireless link's limited reach. If a user wants to adjust a microserver-equipped TV's volume, he or she does not want to accidentally interact with somebody else's TV. Therefore, short-range wireless radio links, preferably using unlicensed bands, are well suited for networking things and people.
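A minimal sketch of the embedded-server idea is shown below: a tiny HTTP service that exposes a device resource and accepts simple control commands, the sort of thing a phone's browser could talk to over a short-range link. The /lamp resource and its state are invented for illustration; this is ordinary desktop Python, not firmware for an 8-bit microcontroller.

```python
# Hypothetical "microserver" sketch: an HTTP endpoint that exposes a device's state
# and accepts simple control commands. A real microserver would drive actual device
# logic and reach the handset over something like Bluetooth.
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICE_STATE = {"lamp": "off"}   # stand-in for the device's internal logic

class MicroserverHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/lamp":
            body = f"lamp is {DEVICE_STATE['lamp']}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):
        # e.g. POST /lamp/on or POST /lamp/off from a phone's browser or script
        if self.path in ("/lamp/on", "/lamp/off"):
            DEVICE_STATE["lamp"] = self.path.rsplit("/", 1)[-1]
            self.send_response(204)
            self.end_headers()
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MicroserverHandler).serve_forever()
```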

User Identification Through Keystroke Biometrics

The increasing use of automated information systems together with our pervasive use of computers has greatly simplified our lives, while making us overwhelmingly dependent on computers and digital networks. Technological achievements over the past decade have resulted in improved network services, particularly in the areas of performance, reliability, and availability, and have significantly reduced operating costs due to the more efficient utilization of these advancements. Some recently developed authentication mechanisms require users to perform a particular action, and some behaviour of that action is then examined. The traditional method of signature verification falls in this category: handwritten signatures are extremely difficult to forge without the assistance of some copying aid.

A number of identification solutions based on verifying some physiological aspect, known as biometrics, have emerged. Biometrics, the physical traits and behavioral characteristics that make each of us unique, are a natural choice for identity verification. Biometrics is an excellent candidate for identity verification because, unlike keys or passwords, biometric traits cannot be lost, stolen, or overheard, and in the absence of physical damage they offer a potentially foolproof way of determining someone's identity. Physiological (i.e., static) characteristics, such as fingerprints, are good candidates for verification because they are unique across a large section of the population. Indispensable to all biometric systems is that they recognize a living person, and they encompass both physiological and behavioral characteristics. Biometrics is of two kinds: one deals with the physical traits of the user, and the other deals with the behavioral traits of the user. Retinal scanning, fingerprint scanning, face recognition, voice recognition and DNA testing come under the former category, while typing rhythm comes under the latter category. Physiological characteristics such as fingerprints are relatively stable physical features that are unalterable without causing trauma to the individual. Behavioral traits, on the other hand, have some physiological basis but also react to a person's psychological makeup.

Most systems make use of a personal identification code in order to authenticate the user. In these systems, the possibility of a malicious user gaining access to the code cannot be ruled out. However, combining the personal identification code with biometrics provides a robust user authentication system. Authentication using the typing rhythm of the user on a keyboard or keypad takes advantage of the fact that each user has a unique manner of typing the keys. It makes use of the inter-stroke gap that exists between consecutive characters of the user identification code. While considering any system for authenticity, one needs to consider the false acceptance rate and the false rejection rate. The False Acceptance Rate (FAR) is the percentage of unauthorised users accepted by the system, and the False Rejection Rate (FRR) is the percentage of authorised users not accepted by the system. An increase in one of these metrics decreases the other and vice versa. The level of error must be controlled in the authentication system by the use of a suitable threshold such that only the required users are accepted and those who are not authorised are rejected by the system.
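To illustrate the inter-stroke-gap idea, the following sketch, with made-up template values and threshold, compares the inter-key intervals of a login attempt against an enrolled profile and accepts the user only if the average deviation stays under the threshold; relaxing the threshold lowers the FRR at the cost of a higher FAR, and vice versa.

    # Illustrative keystroke-rhythm check (hypothetical numbers and threshold).
    def inter_stroke_gaps(key_times_ms):
        """Convert absolute key-press times into gaps between consecutive keys."""
        return [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]

    def verify(template_gaps, attempt_times_ms, threshold_ms=40.0):
        """Accept if the mean absolute deviation from the enrolled profile is small."""
        attempt_gaps = inter_stroke_gaps(attempt_times_ms)
        if len(attempt_gaps) != len(template_gaps):
            return False
        deviation = sum(abs(t - a) for t, a in zip(template_gaps, attempt_gaps)) / len(template_gaps)
        return deviation <= threshold_ms

    # Enrolled profile for a 6-character identification code (5 gaps, in ms).
    template = [120.0, 95.0, 180.0, 110.0, 140.0]
    attempt = [0, 115, 215, 400, 505, 650]          # key-press timestamps in ms
    print("accepted" if verify(template, attempt) else "rejected")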

Ultrasonic Motor

All of us know that a motor is a machine which produces or imparts motion; in detail, it is an arrangement of coils and magnets that converts electric energy into mechanical energy, and ultrasonic motors are the next generation of motors. In 1980, the world's first ultrasonic motor was invented. It utilizes the piezoelectric effect in the ultrasonic frequency range to provide its motive force, resulting in a motor with unusually good low-speed, high-torque and power-to-weight characteristics. Electromagnetism has always been the driving force behind electric motor technology, but these motors suffer from many drawbacks. The field of ultrasonics seems to be changing that driving force.

DRAWBACKS OF ELECTROMAGNETIC MOTORS
Electromagnetic motors rely on the attraction and repulsion of magnetic fields for their operation. Without good noise suppression circuitry, their noisy electrical operation will affect the surrounding electronic components. Surges and spikes from these motors can cause disruption or even damage in non-motor-related items such as CRTs and various types of receiving and transmitting equipment. Also, electromagnetic motors are notorious for consuming a high amount of power and creating high ambient motor temperatures. Both are undesirable from the efficiency point of view, as excessive heat energy is wasted as losses. Even an efficiently rated electromagnetic motor has a high input-to-output energy loss ratio. Replacing these by ultrasonic motors would virtually eliminate these undesirable effects. Electromagnetic motors also produce strong magnetic fields which cause interference; ultrasonic motors use the piezoelectric effect and hence cause no magnetic interference.

PRINCIPLE OF OPERATION: PIEZOELECTRIC EFFECT
Many polymers, ceramics and molecules are permanently polarized; that is, some parts of the molecules are positively charged, while other parts are negatively charged. When an electric field is applied to these materials, the polarized molecules align themselves with the electric field, resulting in induced dipoles within the molecular or crystal structure of the material. Furthermore, a permanently polarized material such as quartz (SiO2) or barium titanate (BaTiO3) will produce an electric field when the material changes dimensions as a result of an imposed mechanical force. These materials are piezoelectric, and this phenomenon is known as the piezoelectric effect. Conversely, an applied electric field can cause a piezoelectric material to change dimensions; this is known as electrostriction or the reverse piezoelectric effect. Current ultrasonic motor designs work from this principle, only in reverse.

Virtual Retinal Display

Information displays are the primary medium through which text and images generated by computers and other electronic systems are delivered to end-users. While early computer systems were designed and used for tasks that involved little interaction between the user and the computer, today's graphical and multimedia information and computing environments require information displays that have higher performance, smaller size and lower cost. The market for display technologies has also been stimulated by the increasing popularity of hand-held computers, personal digital assistants and cellular phones; interest in simulated environments and augmented reality systems; and the recognition that an improved means of connecting people and machines can increase productivity and enhance the enjoyment of electronic entertainment and learning experiences.

For decades, the cathode ray tube has been the dominant display device. The cathode ray tube creates an image by scanning a beam of electrons across a phosphor-coated screen, causing the phosphors to emit visible light. The beam is generated by an electron gun and is passed through a deflection system that scans the beam rapidly left to right and top to bottom, a process called rastering. A magnetic lens focuses the beam to create a small moving dot on the phosphor screen. It is these rapidly moving spots of light ("pixels") that raster or "paint" the image on the surface of the viewing screen. Flat panel displays are enjoying widespread use in portable computers, calculators and other personal electronic devices. Flat panel displays can consist of hundreds of thousands of pixels, each of which is formed by one or more transistors acting on a crystalline material. In recent years, as the computer and electronics industries have made substantial advances in miniaturization, manufacturers have sought lighter-weight, lower-power and more cost-effective displays to enable the development of smaller portable computers and other electronic devices. Flat panel technologies have made meaningful advances in these areas. Both cathode ray tubes and flat panel display technologies, however, pose difficult engineering and fabrication problems for more highly miniaturized, high-resolution displays because of inherent constraints in size, weight, cost and power consumption. In addition, both cathode ray tubes and flat panel displays are difficult to see outdoors or in other settings where the ambient light is brighter than the light emitted from the screen. Display mobility is also limited by size, brightness and power consumption.

As display technologies attempt to keep pace with miniaturization and other advances in information delivery systems, conventional cathode ray tube and flat panel technologies will no longer be able to provide an acceptable range of performance characteristics, particularly the combination of high resolution, high level of brightness and low power consumption, required for state-of-the-art mobile computing or personal electronic devices.

Spectrum Pooling
The success of future wireless systems will depend on concepts and technology innovations in architecture and in the efficient utilization of spectral resources. There will be a substantial need for more bandwidth as wireless applications become more and more sophisticated. This need will not be satisfied by the existing frequency bands allocated for public mobile radio, even with very evolved and efficient transmission techniques. At the same time, wide ranges of potential spectral resources are used only very rarely. In the approach presented here, called spectrum pooling, different spectrum owners (e.g. military, trunked radio etc.) bring their frequency bands into a common pool from which rental users may rent spectrum. Spectrum pooling reflects the need for a completely new way of radio resource management, and there are interesting aspects to the spectral efficiency gain that is obtained with its deployment. A potential rental system needs to be highly flexible with respect to the spectral shape of the transmitted signal, because spectral ranges that are accessed by licensed users have to be spared from transmission power. OFDM modulation is a candidate for such a system, as it is possible to leave a set of subcarriers unmodulated, thus providing a flexible spectral shape that fills the spectral gaps without interfering with the licensed users. A schematic example of this method is given in Fig. 1. Furthermore, spectrum pooling systems are not supposed to compete with existing and upcoming 2G and 3G standards; they are rather meant to be a complement in hot-spot areas with a high demand for bandwidth (e.g. airports, convention centers etc.). Hence, it is straightforward to apply modified versions of OFDM-based wireless LAN standards like IEEE 802.11a and HIPERLAN/2. There are many modifications to consider in order to make wireless LANs capable of spectrum pooling, ranging from the front end via baseband processing to higher-layer issues. One important task when implementing spectrum pooling is the periodic detection of idle subbands of the licensed system, delivering a binary allocation vector as shown in Fig. 1. A detailed description of how to perform this in an optimal fashion is given elsewhere. We propose an approach in which every associated mobile terminal of the rental system conducts its own detection. This detection is the first step in a whole protocol sequence that is illustrated in Fig. 2. Having finished the detection cycle, the results are gathered at the access point as visualized in Fig. 2(b). The received information is processed by the access point, which basically means that the individual binary (allocated/deallocated) detection results are logically combined by an OR operation. Thereafter, a common pool allocation vector, which is mandatory for every mobile terminal, is broadcast in a last phase as shown in Fig. 2(c). It can be shown that this distributed technique is more reliable and yields a higher system throughput than having only the access point conduct a spectral detection. However, if the collection of the detection results is realized by sending a MAC-layer data packet for each mobile terminal, the signaling overhead will be very high, as the number of mobile terminals can be as high as 250 in the considered wireless LAN systems. One could, of course, reduce the number of detecting mobile terminals. Unfortunately, this approach has several drawbacks: the random choice of the detecting rental users would not guarantee an optimal spatial distribution of the detecting mobile terminals.
The transmission of these results would still take a lot of time, and their correct reception is disturbed by licensed users that have accessed their subbands since the last detection cycle. One further problem is the redundancy in the measurement data: several mobile terminals can encounter the same constellation of licensed user accesses. We investigated techniques like the adaptive tree walk protocol to reduce the number of measurement data packets, but none of them was satisfactory with respect to duration and robustness.
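As a sketch of the combining step described above (the number of terminals and subbands is invented for illustration), the access point simply ORs together the binary allocation vectors reported by the terminals to obtain the common pool allocation vector:

    # Illustrative OR-combining of per-terminal detection results.
    # 1 = subband detected as allocated by a licensed user, 0 = idle.
    def combine_allocation_vectors(reports):
        """Logical OR across all terminals: a subband counts as occupied
        if any terminal detected a licensed user on it."""
        pooled = [0] * len(reports[0])
        for vector in reports:
            pooled = [p | v for p, v in zip(pooled, vector)]
        return pooled

    reports = [
        [0, 1, 0, 0, 1, 0, 0, 0],   # terminal 1
        [0, 1, 0, 0, 0, 0, 1, 0],   # terminal 2
        [0, 0, 0, 0, 1, 0, 1, 0],   # terminal 3
    ]
    print(combine_allocation_vectors(reports))  # -> [0, 1, 0, 0, 1, 0, 1, 0]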

Signaling System
Signaling System 7 (SS7) is an architecture for performing out-of-band signaling in support of the call-establishment, billing, routing, and information-exchange functions of the public switched telephone network (PSTN). It identifies the functions to be performed by a signaling-system network and a protocol to enable their performance.

What is Signaling?
Signaling refers to the exchange of information between call components required to provide and maintain service. As users of the PSTN, we exchange signaling with network elements all the time. Examples of signaling between a telephone user and the telephone network include dialing digits, providing dial tone, accessing a voice mailbox, and sending a call-waiting tone. SS7 is a means by which elements of the telephone network exchange information. Information is conveyed in the form of messages. SS7 messages can convey information such as:
- "I'm forwarding to you a call placed from 212-555-1234 to 718-555-5678. Look for it on trunk 067."
- "Someone just dialed 800-555-1212. Where do I route the call?"
- "The called subscriber for the call on trunk 11 is busy. Release the call and play a busy tone."
- "The route to XXX is congested. Please don't send any messages to XXX unless they are of priority 2 or higher."
- "I'm taking trunk 143 out of service for maintenance."
SS7 is characterized by high-speed packet data and out-of-band signaling.

What is Out-of-Band Signaling?
Out-of-band signaling is signaling that does not take place over the same path as the conversation. We are used to thinking of signaling as being in-band: we hear dial tone, dial digits, and hear ringing over the same channel on the same pair of wires, and when the call completes, we talk over the same path that was used for the signaling. Traditional telephony used to work in this way as well. The signals to set up a call between one switch and another always took place over the same trunk that would eventually carry the call; signaling took the form of a series of multi-frequency (MF) tones, much like touch-tone dialing between switches. Out-of-band signaling establishes a separate digital channel for the exchange of signaling information. This channel is called a signaling link. Signaling links are used to carry all the necessary signaling messages between nodes. Thus, when a call is placed, the dialed digits, the trunk selected, and other pertinent information are sent between switches using their signaling links rather than the trunks which will ultimately carry the conversation. Today, signaling links carry information at a rate of 56 or 64 kbps. It is interesting to note that while SS7 is used only for signaling between network elements, the ISDN D channel extends the concept of out-of-band signaling to the interface between the subscriber and the switch. With ISDN service, signaling that must be conveyed between the user station and the local switch is carried on a separate digital channel called the D channel, while the voice or data which comprise the call is carried on one or more B channels.

Why Out-of-Band Signaling?
Out-of-band signaling has several advantages that make it more desirable than traditional in-band signaling:
- It allows for the transport of more data at higher speeds (56 kbps can carry data much faster than MF outpulsing).
- It allows for signaling at any time during the entire duration of the call, not only at the beginning.
- It enables signaling to network elements to which there is no direct trunk connection.
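Purely as an illustration of signaling being carried as structured messages rather than in-band tones (the field names below are simplified stand-ins, not the actual SS7/ISUP message encoding), the first example message above might be modeled like this:

    # Simplified, hypothetical model of an SS7-style call-setup message.
    # Real SS7 (e.g. ISUP) uses a compact binary encoding, not this layout.
    from dataclasses import dataclass

    @dataclass
    class CallSetupMessage:
        calling_number: str
        called_number: str
        trunk: int          # trunk that will eventually carry the voice path

        def describe(self) -> str:
            return (f"Forwarding a call from {self.calling_number} to "
                    f"{self.called_number}. Look for it on trunk {self.trunk:03d}.")

    msg = CallSetupMessage("212-555-1234", "718-555-5678", 67)
    print(msg.describe())   # sent over a 56/64-kbps signaling link, not the trunk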

Ultra Conductors

1.1 Superconductivity
Superconductivity is the phenomenon in which a material loses all its electrical resistance, allowing electric current to flow without dissipation or loss of energy. The atoms in materials vibrate due to the thermal energy contained in the materials: the higher the temperature, the more the atoms vibrate. An ordinary conductor's electrical resistance is caused by these atomic vibrations, which obstruct the movement of the electrons forming the current. If an ordinary conductor were to be cooled to a temperature of absolute zero, atomic vibrations would cease, electrons would flow without obstruction, and electrical resistance would fall to zero. A temperature of absolute zero cannot be achieved in practice, but some materials exhibit superconducting characteristics at higher temperatures. In 1911, the Dutch physicist Heike Kamerlingh Onnes discovered superconductivity in mercury at a temperature of approximately 4 K (-269 °C). Many other superconducting metals and alloys were subsequently discovered but, until 1986, the highest temperature at which superconducting properties were achieved was around 23 K (-250 °C), with the niobium-germanium alloy (Nb3Ge). In 1986 Georg Bednorz and Alex Müller discovered a metal oxide that exhibited superconductivity at the relatively high temperature of 30 K (-243 °C). This led to the discovery of ceramic oxides that superconduct at even higher temperatures. In 1988, an oxide of thallium, calcium, barium and copper (Tl2Ca2Ba2Cu3O10) displayed superconductivity at 125 K (-148 °C), and in 1993 a family based on copper oxide and mercury attained superconductivity at 160 K (-113 °C). These "high-temperature" superconductors are all the more noteworthy because ceramics are usually extremely good insulators. Like ceramics, most organic compounds are strong insulators; however, some organic materials known as organic synthetic metals do display both conductivity and superconductivity. In the early 1990s, one such compound was shown to superconduct at approximately 33 K (-240 °C). Although this is well below the temperatures achieved for ceramic oxides, organic superconductors are considered to have great potential for the future. New superconducting materials are being discovered on a regular basis, and the search is on for room-temperature superconductors, which, if discovered, are expected to revolutionize electronics. Room-temperature superconductors (ultraconductors) are being developed for commercial applications by Room Temperature Superconductors Inc. (ROOTS). Ultraconductors are the result of more than 16 years of scientific research, independent laboratory testing and eight years of engineering development. From an engineering perspective, ultraconductors are a fundamentally new and enabling technology. These materials are claimed to conduct electricity at least 100,000 times better than gold, silver or copper.

1.2 Technical introduction
Ultraconductors are patented [1] polymers being developed for commercial applications by Room Temperature Superconductors Inc. (ROOTS). The materials exhibit a characteristic set of properties including conductivity and current-carrying capacity equivalent to superconductors, but without the need for cryogenic support.

Self Phasing Antenna Array


Many fixed-link wireless communication systems require the accurate pointing of high-gain antennas. In a mobile wireless situation, the ability to perform this pointing action automatically using adaptive antenna techniques has been previously demonstrated and has led to new possibilities for frequency reuse and increased traffic capacity for a given bandwidth usage. A self-phased, or retrodirective, array performs this steering action automatically by virtue of the array architecture used. In a retrodirective antenna array each array element is independently phased; this phasing is derived directly from the signals each array element receives. Thus the antenna array continuously adapts its phase response to track an incoming signal without prior knowledge of the spatial position of the incoming signal and, in general, the self-phasing behaviour of the antenna array can compensate for inhomogeneities in the propagation characteristics of the environment through which the signal has to travel. In principle a self-phasing antenna array can acquire an incoming signal in a very short time period, automatically compensate for propagation path variation and antenna array misalignment, and return the incoming signal back in the direction of the incoming wavefront. The basic principle on which a retrodirective antenna array operates relies on the phase conjugation of the incoming wavefront, i.e. the phase of the signal retransmitted from each element in the array is advanced as much as the phase received by that element was delayed, and vice versa. A well-known example of a passive retrodirective antenna, used mostly in the marine environment, is the dihedral corner reflector. Here its purpose is to enhance the radar cross-section of the boat on which it is mounted in order to improve its radar visibility to other shipping.

CORNER REFLECTOR
Fig. 1 shows a retrodirective corner reflector. This reflector consists of two flat conducting plates set at 90° to each other. An incident signal arriving at an angle θ with respect to plate 'Y' will be reflected back at its Snell angle as shown in Fig. 1. The reflected signal from plate 'Y' will arrive at an angle 90° - θ with respect to plate 'X' and will again be reflected at its Snell angle. Here the angle of this reflected signal with respect to axis a-a', which is parallel to plate 'Y', is the same as the angle of the incident wave with respect to plate 'Y'. This indicates that the outgoing signal is returned parallel to the incoming signal, i.e. the incoming wave returns back in the direction from which it came. A signal arriving at plate 'X' will be reflected back from plate 'Y' in a similar manner. Other configurations, such as the triangular trihedral reflector, exist which can provide three-dimensional retrodirective responses as compared to the two-dimensional coverage of a simple dihedral corner reflector.

VAN ATTA ARRAY
In 1959 L.C. Van Atta patented an array variant of the corner reflector. This apparatus, known as the Van Atta array, is shown in its simplest embodiment in Fig. 2. In this arrangement, samples of the incoming wavefront received by the antenna array elements to the right of the centre of the array are retransmitted from the mirror-image element on the left side of the array. Each wavefront sample experiences the same time delay, achieved by interconnecting the elements of each mirror-image pair by a transmission line, the lengths of the transmission lines being equal.
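A minimal numerical sketch of the phase-conjugation principle follows; the element spacing, carrier frequency and incidence angle are arbitrary choices for illustration. Each element retransmits with the negated phase it received (using one consistent phase convention for receive and transmit), and the retransmitted array factor peaks back in the direction of the incoming wave.

    # Phase-conjugating (retrodirective) array sketch with made-up parameters.
    import numpy as np

    wavelength = 0.03            # 10 GHz carrier, metres (illustrative)
    d = wavelength / 2           # element spacing
    n_elements = 8
    theta_in = np.deg2rad(25.0)  # direction of the incoming wavefront

    k = 2 * np.pi / wavelength
    positions = d * np.arange(n_elements)

    # Phase of the incoming plane wave at each element.
    received_phase = k * positions * np.sin(theta_in)

    # Retrodirective rule: retransmit the conjugate (negated) phase.
    retransmit_phase = -received_phase

    # Far-field array factor of the retransmitted signal versus angle.
    angles = np.deg2rad(np.linspace(-90, 90, 1801))
    steering = np.exp(1j * (k * positions[:, None] * np.sin(angles[None, :])
                            + retransmit_phase[:, None]))
    array_factor = np.abs(steering.sum(axis=0))

    peak = np.rad2deg(angles[np.argmax(array_factor)])
    print(f"Retransmitted beam peaks near {peak:.1f} deg "
          f"(incoming wave arrived from 25.0 deg)")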

Role of Internet Technology in Future Mobile Data System


Next Generation Internet
The Internet has dramatically changed the way America communicates and does business. Between 1991 and 1999, the number of domain names with an IP address rose from almost zero to about 45,000,000 [1]. From the consumer's standpoint, the Internet offers the ability to interact with health practitioners online and easily access health-related information. It is no wonder, then, that more people use the Internet to gather information about health-related topics than any other subject. However, there are numerous barriers that might inhibit telehealth growth on the Internet, including growing delays, costs, and lack of security, reliability and availability on a worldwide basis. The development of Internet2 might help address some of these barriers. Internet2 is a joint venture by academia, the federal government and industry. This group is using a new high-speed backbone network with a core subnetwork consisting of a 2.4-Gbps, 13,000-mile research network to test Internet applications (for example, Internet Protocol (IP) multicasting, differentiated service levels, and advanced security). It will also allow researchers to test and resolve problems such as bandwidth constraints and quality and security issues.

In the telehealth industry, wireless technology is most commonly used for telemetry and emergency medical services. However, in countries that have adopted digital wireless phone systems faster than the United States, the future of wireless technology is already available. For example, in Japan, Nippon Telegraph and Telephone will provide Internet e-mail access via its wireless phone services to 1 million customers. This year, Japanese companies will also introduce to their local markets a mobile videophone that can transmit live video at 32 kbps. In the Netherlands, Nokia has already introduced the Nokia 9110 Communicator, which can link to a digital camera, store images, and then e-mail them. Nokia's Communicator will be available in the United States within the next year, but mobile videophones may not be available for several years. Companies in the United States have already introduced wireless handheld computers, such as the Palm series and its competitors. More recently, mobile phone providers, such as Sprint PCS, have introduced products with the ability to access limited Web pages for text information, but direct access to the Web and its graphics is not yet possible without appropriate technical standards. However, a standard called the Wireless Application Protocol (WAP) is already under development. WAP is a way of converting information on Internet Web sites into a form that can be displayed on a mobile hand-held phone device. The advent of so-called microbrowsers may still be a few years away, because mobile systems currently do not have the capacity to support high-speed connections with the Internet. Once faster speeds are available, WAP proponents believe that consumers will be able to get message notification and call management, electronic mail, mapping and location services, weather and traffic alerts, sports and financial services, address book and directory services, and corporate intranet applications on their handheld devices.

Service Aware Intelligent GGSN


GPRS operators can transform their data networks from a simple IP traffic medium to a rich service delivery channel by deploying an intelligent GGSN.

INTRODUCTION
For many years, mobile network technology - the Global System for Mobile communication (GSM) and Code Division Multiple Access (CDMA) - has been the dominant means of making voice calls when away from home or the office. Today, mobile packet data networks are just starting to be deployed and have yet to be widely adopted. However, there are ambitious hopes for this technology. World standards bodies, such as the Third Generation Partnership Project (3GPP), have defined the General Packet Radio Service (GPRS) architecture for second-generation GSM, and the packet domain architecture for the Third Generation Universal Mobile Telecommunications System (3G UMTS) [1]. Unlike telephony, which has been enhanced by numerous revenue-generating supplementary services, such as call transfer, voice mail and the Short Message Service (SMS), packet data transmission provides few supplementary services, making it necessary for operators to earn revenue primarily from basic data transport. Although the standard has defined mechanisms for providing supplementary packet services based on intelligent network architecture and the Customized Applications for Mobile network Enhanced Logic (CAMEL) protocol, their functionality remains limited, particularly when it comes to differentiated charging systems. In addition, a growing number of service providers are interacting directly with users in a manner that is totally transparent to the network, based on the Internet philosophy of locating intelligence at the ends, thereby requiring the minimum of network services. Examples include downloading video and audio clips, games, ring tones, screen savers and MP3 files. The GPRS operator has no part to play and is thus reduced to the role of a simple "pipe" supplier. The challenge for cellular operators will be to evolve to become suppliers of high added-value services. Naturally, mobile operators can offer their own "end-to-end" services. In addition, because of their position as infrastructure managers, they can offer profitable enhancements linked to data transport. The Gateway GPRS Support Node (GGSN) can help mobile packet network operators to achieve this aim.

Evolution Paths for the GGSN
The GGSN can play a major part in the mobile operator's strategy of evolving from simply being a "pipe" provider to a deliverer/enabler of mobile data services.

Part of the subscriber's data experience
Some new services undoubtedly belong to the mobile operator's domain (e.g. content servers and applications). Nevertheless, a major part of the content will be created by third parties. Content services, such as gaming, music, and audio and video streaming, are best provided by content partners whose core business is built around creating and/or managing compelling content. Network operators' profitability can partly come from offering an intelligent delivery channel to a large number of content partners.

Push Technology
Push technology reverses the Internet's content delivery model. Before push, content publishers had to rely upon the end-user's own initiative to bring them to a web site or download content. With push technology the publisher can deliver content directly to the user's PC, thus substantially improving the likelihood that the user will view it. Push content can be extremely timely and delivered fresh several times a day; information keeps coming to the user whether he asked for it or not. The most common analogy for push technology is a TV channel: it keeps sending us material whether we care about it or not. Push was created to alleviate two problems facing users of the net. The first problem is information overload. The volume and dynamic nature of content on the Internet is an impediment to users and has become an ease-of-use issue. Without push, applications can be tedious, time-consuming, and less than dependable: users have to manually hunt down information, search out links, and monitor sites and information sources. Push applications and technology building blocks narrow that focus and add considerable ease of use. The second problem is that most end-users are restricted to low-bandwidth Internet connections, such as 33.3-kbps modems, thus making it difficult to receive multimedia content. Push technology provides a means to pre-deliver much larger packages of content. Push technology enables the delivery of multimedia content on the Internet through the use of local storage and transparent content downloads. Like a faithful delivery agent, push, often referred to as broadcasting, delivers content directly to the user transparently and automatically. It is one of the Internet's most promising technologies.

Already a success, push is being used to pump data in the form of news, current affairs, sports and so on to many computers connected to the Internet. Updating software is one of the fastest-growing uses of push; it is a new and exciting way to manage software update and upgrade hassles. Computer programming is an inexact art, and there is a huge need to quickly and easily get bug fixes, software updates, and even whole new programs out to people. Using the Internet today without the aid of a push application can be tedious, time-consuming, and less than dependable.

2. THE PUSH PROCESS
For the end user, the process of receiving push content is quite simple. First, an individual subscribes to a publisher's site or channel by providing content preferences. The subscriber also sets up a schedule specifying when information should be delivered. Based on the subscriber's schedule, the PC connects to the Internet, and the client software notifies the publisher's server that the download can occur. The server collates the content pertaining to the subscriber's profile and downloads it to the subscriber's machine, after which the content is available for the subscriber's viewing.

WORKING
Interestingly enough, from a technical point of view, most push applications are pull and only appear to be 'push' to the user. In fact, a more accurate description of this process would be 'automated pull'. The web currently requires the user to poll sites for new or updated information. This manual polling and downloading process is referred to as 'pull' technology. From a business point of view, this process provides little information about the user, and even less control over what information is acquired. It is the user who has to keep track of the location of the information sites, and the user who has to continuously search for informational changes - a very time-consuming process. The 'push' model alleviates much of this tedium.
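The following sketch shows what such 'automated pull' amounts to in practice; the channel URL, cache file and polling schedule are hypothetical placeholders, not part of any particular push product.

    # Illustrative "automated pull" client: what most push systems do under
    # the hood. The channel URL and schedule below are hypothetical.
    import time
    import urllib.request

    CHANNEL_URL = "http://example.com/channel/news.xml"   # placeholder feed
    POLL_INTERVAL_S = 6 * 60 * 60                          # subscriber's schedule

    last_seen = None
    while True:
        with urllib.request.urlopen(CHANNEL_URL) as resp:
            content = resp.read()
        if content != last_seen:
            # New or updated content: store it locally so it is available
            # for viewing even when the user is offline.
            with open("channel_cache.xml", "wb") as f:
                f.write(content)
            last_seen = content
            print("Channel updated; new content cached for offline viewing.")
        time.sleep(POLL_INTERVAL_S)   # wait until the next scheduled pull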

GMPLS

Introduction
The emergence of optical transport systems has dramatically increased the raw capacity of optical networks and has enabled new sophisticated applications such as network-based storage, bandwidth leasing and data mirroring. Add/drop multiplexers (ADM), dense wavelength division multiplexing (DWDM) systems, optical cross-connects (OXC), photonic cross-connects (PXC), and multiservice switching platforms are some of the devices that may make up an optical network and are expected to be the main carriers for the growth in data traffic.

Multiple Types of Switching and Forwarding Hierarchies
Generalized MPLS (GMPLS) differs from traditional MPLS in that it supports multiple types of switching, i.e. the addition of support for TDM, lambda, and fiber (port) switching. The support for the additional types of switching has driven GMPLS to extend certain base functions of traditional MPLS and, in some cases, to add functionality. These changes and additions impact basic LSP properties, how labels are requested and communicated, the unidirectional nature of LSPs, how errors are propagated, and the information provided for synchronizing the ingress and egress LSRs. The interface types are listed below; a small sketch of how a control plane might represent them follows the list.
1. Packet Switch Capable (PSC) interfaces: interfaces that recognize packet boundaries and can forward data based on the content of the packet header. Examples include interfaces on routers that forward data based on the content of the IP header and interfaces on routers that forward data based on the content of the MPLS "shim" header.
2. Time-Division Multiplex Capable (TDM) interfaces: interfaces that forward data based on the data's time slot in a repeating cycle. An example of such an interface is that of a SDH/SONET cross-connect (XC), terminal multiplexer (TM), or add-drop multiplexer (ADM).
3. Lambda Switch Capable (LSC) interfaces: interfaces that forward data based on the wavelength on which the data is received. An example of such an interface is that of a photonic cross-connect (PXC) or optical cross-connect (OXC) that can operate at the level of an individual wavelength. Additional examples include PXC interfaces that can operate at the level of a group of wavelengths, i.e. a waveband.
4. Fiber-Switch Capable (FSC) interfaces: interfaces that forward data based on the position of the data in real-world physical space. An example of such an interface is that of a PXC or OXC that can operate at the level of a single fiber or multiple fibers.
The diversity and complexity of managing these devices have been the main driving factors in the evolution and enhancement of the MPLS suite of protocols to provide control not only for packet-based domains, but also for time, wavelength, and space domains. GMPLS further extends the suite of IP-based protocols that manage and control the establishment and release of label switched paths (LSPs) that traverse any combination of packet, TDM, and optical networks. GMPLS adopts all the technology in MPLS.
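As a purely illustrative sketch (the class, its ordering and the nesting check are our own shorthand, not structures defined by the GMPLS specifications), a control plane could represent the four interface switching capabilities and their coarse-to-fine forwarding hierarchy like this:

    # Illustrative representation of GMPLS interface switching capabilities.
    # The ordering encodes the usual nesting: packet LSPs can be nested into
    # TDM LSPs, which can be nested into lambda LSPs, which fit into fibers.
    from enum import IntEnum

    class SwitchingCapability(IntEnum):
        PSC = 1   # Packet Switch Capable  (forwards on packet/shim header)
        TDM = 2   # Time-Division Multiplex Capable (forwards on time slot)
        LSC = 3   # Lambda Switch Capable  (forwards on wavelength)
        FSC = 4   # Fiber Switch Capable   (forwards on physical port/fiber)

    def can_nest(inner: SwitchingCapability, outer: SwitchingCapability) -> bool:
        """A finer-grained LSP may be carried inside a coarser-grained one."""
        return inner < outer

    print(can_nest(SwitchingCapability.PSC, SwitchingCapability.LSC))  # True
    print(can_nest(SwitchingCapability.FSC, SwitchingCapability.TDM))  # False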

Fluorescent Multi-layer Disc

Introduction
Requirements for removable media storage devices (RMSDs) used with personal computers have changed significantly since the introduction of the floppy disk in 1971. At one time, desktop computers depended on floppy disks for all of their storage requirements. Even with the advent of multi-gigabyte hard drives, floppy disks and other RMSDs are still an integral part of most computer systems, providing:
- transport of data files and software between computers;
- backup to preserve data from the hard drive;
- a way to load the operating system software in the event of a hard drive failure.
Data storage devices currently come with a variety of different capacities, access times, data transfer rates and costs per gigabyte. The best overall performance figures are currently achieved using hard disk drives (HDD), which can be integrated into RAID systems (redundant arrays of inexpensive disks) at costs of $10 per GByte (1999). Optical disc drives (ODD) and tapes can be configured in the form of jukeboxes and tape libraries, with costs of a few dollars per GByte for the removable media. However, the complex mechanical library mechanism limits data access time to several seconds and affects reliability adversely. Most information is still stored in non-electronic form, with very slow access and excessive costs (e.g., text on paper, at a cost of $10,000 per GByte). Some RMSD options available today are approaching the performance, capacity, and cost of hard-disk drives. Considerations for selecting an RMSD include capacity, speed, convenience, durability, data availability, and backward compatibility. Technology options used to read and write data include:
- magnetic formats that use magnetic particles and magnetic fields;
- optical formats that use laser light and optical sensors;
- magneto-optical and magneto-optical hybrids that use a combination of magnetic and optical properties to increase storage capacity.
The introduction of the Fluorescent Multi-layer Disc (FMD) smashes the barriers of existing data storage formats. Depending on the application and the market requirements, the first generation of 120-mm (CD-sized) FMD ROM discs will hold 20 - 100 gigabytes of pre-recorded data on 12 - 30 data layers with a total thickness of under 2 mm. In comparison, a standard DVD disc holds just 4.7 gigabytes. With C3D's (Constellation 3D) proprietary parallel reading and writing technology, data transfer speeds can exceed 1 gigabit per second, again depending on the application and market need.

WHY FMD?
Increased Disc Capacity
FMD offers DVD data density (4.7 GB) on each layer of the data carrier, with up to 100 layers. Initially, the FMD disc will hold anywhere from 25 - 140 GB of data depending on market need. Eventually a terabyte of data on a single disc will be achievable.
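A quick back-of-the-envelope check of the capacity claim, using the per-layer DVD density and the layer counts quoted above (the exact first-generation figures naturally depend on the real per-layer density used):

    # Rough capacity check: per-layer DVD density times the quoted layer counts.
    PER_LAYER_GB = 4.7
    for layers in (12, 30, 100):
        print(f"{layers:3d} layers x {PER_LAYER_GB} GB ~= {layers * PER_LAYER_GB:.0f} GB")
    # ~56 GB, ~141 GB and ~470 GB respectively, in the same range as the
    # tens-to-hundreds-of-gigabytes first-generation figures quoted above.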

Quick Parallel Access and Retrieval of Information
Reading from several layers at a time and from multiple tracks at a time, which is nearly impossible using the reflective technology of a CD/DVD, is easily achieved in FMD. This will allow for retrieval speeds of up to 1 gigabyte per second.

Media Tolerances
By using incoherent light to read data, the FMD/FMC media will have far fewer restrictions in temperature range, vibration and air cleanliness during manufacturing, and will provide a considerably more robust data carrier than existing CDs and DVDs.

Compact peripheral component interconnect (CPCI)

Introduction
Compact peripheral component interconnect (CPCI) is an adaptation of the peripheral component interconnect (PCI) specification for industrial computer applications requiring a smaller, more robust mechanical form factor than the one defined for the desktop. CompactPCI is an open standard supported by the PCI Industrial Computer Manufacturers Group (PICMG). CompactPCI is best suited for small, high-speed industrial computing applications where transfers occur between a number of high-speed cards. It is a high-performance industrial bus that uses the Eurocard form factor and is fully compatible with the Enterprise Computer Telephony Forum (ECTF) computer telephony (CT) Bus H.110 standard specification. CompactPCI products make it possible for original equipment manufacturers (OEMs), integrators, and resellers to build powerful and cost-effective solutions for telco networks while using fewer development resources. CompactPCI products let developers scale their applications to the size, performance, maintenance, and reliability demands of telco environments by supporting the CT Bus, hot swap, administrative tools such as the simple network management protocol (SNMP), and extensive system diagnostics. The move toward open, standards-based systems has revolutionized the computer telephony (CT) industry. There are a number of reasons for these changes. Open systems have benefited from improvements in personal computer (PC) hardware and software, as well as from advances in digital signal processing (DSP) technology. As a result, flexible, high-performance systems are scalable to thousands of ports while remaining cost-effective for use in telco networks. In addition, fault-tolerant chassis, distributed software architecture, and N+1 redundancy have succeeded in meeting the demanding reliability requirements of network operators. One of the remaining hurdles facing open CT systems is serviceability. CT systems used in public networks must be extremely reliable and easy to repair without system downtime. In addition, network operation requires first-rate administrative and diagnostic capabilities to keep services up and running.

The CompactPCI Standard
The PCI Industrial Computer Manufacturers Group (PICMG) developed the compact peripheral component interconnect (CompactPCI) specification in 1994. CompactPCI is a high-performance industrial bus based on the peripheral component interconnect (PCI) electrical standard. It uses the Eurocard form factor first popularized by VersaModule Eurocard (VME). Compared to a standard PCI desktop computer, CompactPCI supports twice as many PCI slots (eight) on a single system bus. In addition, CompactPCI boards are inserted from the front of the chassis and can route input/output (I/O) through the backplane to the back of the chassis. These design considerations make CompactPCI ideal for telco environments.

CompactPCI offers a substantial number of benefits for developers interested in building telco-grade applications. CompactPCI systems offer the durability and maintainability required for network applications. At the same time, they can be built using standard, off-the-shelf components and can run almost any operating system and thousands of existing software applications without modification. Other advantages of CompactPCI are related to its Eurocard form factor, durable and rugged design, hot swap capability, and compatibility with the CT Bus.

Datalogger

Introduction
A data logger (or datalogger) is an electronic instrument that records data over time or in relation to location. Increasingly, but not necessarily, data loggers are based on a digital processor (or computer). They may be small, battery-powered and portable, and they vary from general-purpose types for a range of measurement applications to very specific devices for measuring in one environment only. It is common for general-purpose types to be programmable. Standardisation of protocols and data formats is growing in the industry, and XML is increasingly being adopted for data exchange. The development of the Semantic Web is likely to accelerate this trend. A smart protocol, SDI-12, exists that allows some instrumentation to be connected to a variety of data loggers. The use of this standard has not gained much acceptance outside the environmental industry. SDI-12 also supports multi-drop instruments. Some datalogging companies are now also supporting the MODBUS standard; this has traditionally been used in the industrial control area, and there are many industrial instruments which support this communication standard. Some data loggers utilize a flexible scripting environment to adapt themselves to various non-standard protocols. Another multi-drop protocol which is now starting to become more widely used is based upon CAN bus (ISO 11898); this bus system was originally developed by Robert Bosch for the automotive industry. This protocol is ideally suited to higher-speed logging: the data is divided into small, individually addressed 64-bit packets of information with a very strict priority scheme. This standard from the automotive/machine area is now seeping into more traditional data-logging areas, and a number of newer players as well as some of the more traditional players have loggers supporting sensors with this communications bus.

DATA LOGGING VERSUS DATA ACQUISITION
The terms data logging and data acquisition are often used interchangeably. However, in a historical context they are quite different. A data logger is a data acquisition system, but a data acquisition system is not necessarily a data logger.
- Data loggers typically have slower sample rates. A maximum sample rate of 1 Hz may be considered very fast for a data logger, yet very slow for a typical data acquisition system.
- Data loggers are implicitly stand-alone devices, while a typical data acquisition system must remain tethered to a computer to acquire data. This stand-alone aspect of data loggers implies on-board memory that is used to store acquired data. Sometimes this memory is very large, to accommodate many days, or even months, of unattended recording. This memory may be battery-backed static random access memory, flash memory or EEPROM. Earlier data loggers used magnetic tape, punched paper tape, or directly viewable records such as strip chart recorders.
- Given the extended recording times of data loggers, they typically feature a time- and date-stamping mechanism to ensure that each recorded data value is associated with a date and time of acquisition. As such, data loggers typically employ built-in real-time clocks whose published drift can be an important consideration when choosing between data loggers.
- Data loggers range from simple single-channel input to complex multi-channel instruments. Typically, the simpler the device, the less programming flexibility. Some more sophisticated instruments allow for cross-channel computations and alarms based on predetermined conditions.

The newest data loggers can serve web pages, allowing numerous people to monitor a system remotely.
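A minimal stand-alone logging loop might look like the sketch below; the read_sensor() hook is a hypothetical stand-in for real instrumentation, and the sample rate and file name are arbitrary.

    # Minimal stand-alone logging loop with time- and date-stamping.
    import csv
    import random
    import time
    from datetime import datetime, timezone

    def read_sensor() -> float:
        # Placeholder for an actual ADC / SDI-12 / MODBUS read.
        return 20.0 + random.uniform(-0.5, 0.5)

    SAMPLE_PERIOD_S = 1.0        # 1 Hz is already "fast" for many data loggers

    with open("log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(10):                       # a short demonstration run
            # Stamp every value, as a real logger's real-time clock would.
            timestamp = datetime.now(timezone.utc).isoformat()
            writer.writerow([timestamp, f"{read_sensor():.3f}"])
            f.flush()                             # keep data if power is lost
            time.sleep(SAMPLE_PERIOD_S)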

Wideband Sigma Delta PLL Modulator

Introduction
The proliferation of wireless products has been increasing rapidly over the past few years. New wireless standards such as GPRS and HSCSD have brought new challenges to wireless transceiver design. One pivotal component of a transceiver is the frequency synthesizer. Two major requirements in mobile applications are efficient utilization of the frequency spectrum by narrowing the channel spacing and fast switching for high data rates. This can be achieved by using a fractional-N PLL architecture. Such synthesizers are capable of synthesizing frequencies at channel spacings less than the reference frequency, which allows the reference frequency to be increased and also reduces the PLL's lock time. A fractional-N PLL has the disadvantage that it generates spurious tones at multiples of the channel spacing. Using digital sigma-delta modulation techniques, we can randomize the frequency division ratio so that the quantization noise of the divider is transferred to high frequencies, thereby eliminating the spurs.

Conventional PLL
The advantages of the conventional PLL modulator are that it offers small frequency resolution, wide tuning bandwidth and fast switching speed. However, it has insufficient bandwidth for current wireless standards such as GSM, so it cannot be used as a closed-loop modulator for the Digital Enhanced Cordless Telecommunications (DECT) standard. For a sufficiently small loop bandwidth it efficiently filters out quantization noise and reference feedthrough.

Wide-Band PLL
For wider-bandwidth applications the loop bandwidth is increased, but this results in residual spurs. This is due to the fact that the requirement for the quantization noise to be uniformly distributed is violated: since the sigma-delta techniques are used for frequency synthesis, the input to the modulator is a dc input, which results in tones being produced even when higher-order modulators are used. With a single-bit output the level of quantization noise is lower, but with multi-bit outputs the quantization noise increases, so the range of stability of the modulator is reduced, which results in a reduction of the tuning range. Moreover, the hardware complexity of the modulator is higher than that of a MASH modulator. In this feedback-feedforward modulator the loop bandwidth was limited to nearly three orders of magnitude less than the reference frequency, so if it is to be used as a closed-loop modulator the power dissipation will increase. In order to widen the loop bandwidth, the close-in phase noise must be kept within tolerable levels and the rise of the quantization noise must be limited to meet the phase noise requirements at high frequency offsets. At low frequencies or dc, the modulator transfer function has a zero which results in additional phase noise. For that reason the zero is moved away from dc to a frequency equal to some multiple of the fractional division ratio. This introduces a notch at that frequency which reduces the total quantization noise. The quantization noise of the modified modulator is then 1.7 times and 4.25 times smaller than that of the MASH modulator. At higher frequencies, quantization noise causes distortion in the response. This is because the step size of the multi-bit modulator is the same as that of the single-bit modulator, so more phase distortion occurs in multi-bit PLLs. To reduce the quantization noise at high frequencies, the step size is reduced by producing fractional division ratios. This is achieved by using a phase-selection divider instead of the control logic in the conventional modulator. This divider produces phase shifts of the VCO signal and changes the division ratio by selecting different phases from the VCO; this type of divider can produce quarter division ratios.
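To illustrate how sigma-delta control of the divider yields a fractional average division ratio, here is a first-order, accumulator-based sketch with made-up numbers; note that a first-order loop produces a periodic pattern rather than true randomization, which is why practical synthesizers use higher-order or MASH structures.

    # First-order sigma-delta (accumulator) control of a fractional-N divider.
    # The divider toggles between N and N+1; the long-term average equals N + K/M.
    def fractional_n_divide_ratios(n_int, k, modulus, cycles):
        """Yield the instantaneous division ratio for each reference cycle."""
        acc = 0
        ratios = []
        for _ in range(cycles):
            acc += k
            if acc >= modulus:          # accumulator overflow -> divide by N+1
                acc -= modulus
                ratios.append(n_int + 1)
            else:                       # otherwise divide by N
                ratios.append(n_int)
        return ratios

    N, K, M = 100, 3, 8                 # target ratio 100 + 3/8 = 100.375
    ratios = fractional_n_divide_ratios(N, K, M, 8000)
    print(sum(ratios) / len(ratios))    # ~100.375: channel spacing below f_ref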

Voice morphing

Definition
Voice morphing means the transition of one speech signal into another. Like image morphing, speech morphing aims to preserve the shared characteristics of the starting and final signals while generating a smooth transition between them. Speech morphing is analogous to image morphing: in image morphing the in-between images all show one face smoothly changing its shape and texture until it turns into the target face, and it is this feature that a speech morph should possess. One speech signal should smoothly change into another, keeping the shared characteristics of the starting and ending signals but smoothly changing the other properties. The major properties of concern in a speech signal are its pitch and envelope information. These two reside in a convolved form in a speech signal, hence some efficient method for extracting each of them is necessary. We have adopted an uncomplicated approach, namely cepstral analysis, to do this. Pitch and formant information in each signal is extracted using the cepstral approach. The processing necessary to obtain the morphed speech signal includes methods such as cross-fading of envelope information, Dynamic Time Warping to match the major signal features (pitch), and signal re-estimation to convert the morphed speech signal back into an acoustic waveform.
INTROSPECTION OF THE MORPHING PROCESS

Speech morphing can be achieved by transforming the signal's representation from the acoustic waveform, obtained by sampling the analog signal and with which many people are familiar, to another representation. To prepare the signal for the transformation, it is split into a number of 'frames' - sections of the waveform. The transformation is then applied to each frame of the signal. This provides another way of viewing the signal information. The new representation (said to be in the frequency domain) describes the average energy present in each frequency band. Further analysis enables two pieces of information to be obtained: pitch information and the overall envelope of the sound. A key element in the morphing is the manipulation of the pitch information. If two signals with different pitches were simply cross-faded, it is highly likely that two separate sounds would be heard. This occurs because the signal would have two distinct pitches, causing the auditory system to perceive two different objects. A successful morph must exhibit a smoothly changing pitch throughout. The pitch information of each sound is compared to provide the best match between the two signals' pitches. To do this match, the signals are stretched and compressed so that important sections of each signal match in time. The interpolation of the two sounds can then be performed, which creates the intermediate sounds in the morph. The final stage is then to convert the frames back into a normal waveform.
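As a rough sketch of the cepstral pitch-extraction step described above (the frame length, sampling rate and synthetic test signal are our own choices, not taken from the text), the pitch period appears as a peak in the real cepstrum of a voiced frame:

    # Cepstral pitch estimation on a synthetic voiced frame (illustrative only).
    import numpy as np

    fs = 16000                              # sampling rate, Hz
    f0 = 250.0                              # true pitch of the synthetic frame, Hz
    n = 512                                 # 32-ms analysis frame
    t = np.arange(n) / fs

    # Crude "voiced speech" stand-in: harmonics of f0 with a decaying envelope.
    frame = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 32))
    frame *= np.hamming(n)

    # Real cepstrum: inverse FFT of the log-magnitude spectrum.
    log_spectrum = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
    cepstrum = np.fft.irfft(log_spectrum)

    # Low quefrencies carry the spectral envelope (formants); the pitch shows up
    # as a peak at the quefrency equal to the pitch period in samples.
    min_q = int(fs / 400)                   # search only below a 400-Hz pitch
    peak_q = min_q + np.argmax(cepstrum[min_q:n // 2])
    print(f"Estimated pitch: {fs / peak_q:.1f} Hz (true pitch {f0:.1f} Hz)")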

VISNAV
Abstract

The VISNAV system uses a Position Sensitive Diode (PSD) sensor for six-degree-of-freedom (6-DOF) estimation. The output currents from the PSD sensor determine the azimuth and elevation of a light source with respect to the sensor. By placing four or more light sources, called beacons, at known positions in the target frame, the six-degree-of-freedom data associated with the sensor can be calculated. The beacon channel separation and demodulation are done on a fixed-point digital signal processor (DSP), the Texas Instruments TMS320C55x [2], using digital down-conversion, synchronous detection and multirate signal processing techniques. The demodulated sensor currents due to each beacon are communicated to a floating-point DSP, the Texas Instruments TMS320VC33 [2], for the subsequent navigation solution by the use of collinearity equations. Among competing systems [3], a differential global positioning system (GPS) is limited to mid-range accuracies and lower bandwidth, and requires complex infrastructure. Sensor systems based on differential GPS are also limited by geometric dilution of precision, multipath errors, receiver errors, etc. These limitations can be overcome by using the DSP-embedded VISNAV system.

Signal Processing

[Figure: Block diagram of the DSP-embedded VISNAV system]
This is the general block diagram of the VISNAV system. A sinusoidal carrier of approximately 40 kHz is applied to modulate each beacon LED drive current. The resulting induced PSD signal currents then vary sinusoidally at approximately the same frequency and are demodulated to recover the currents that are proportional to the beacon light centroid.

The output of the PSD is very weak, so these signals have to be amplified using a preamplifier. After amplification the signals are fed to a four-channel analog-to-digital converter, which converts the four channels of analog data into digital form; the digital data is then fed to the TMS320C55x DSP [2] to demodulate the signal. After demodulation the four-channel data is fed to the six-degree-of-freedom estimator, which uses the DSP for estimation; from this point we obtain the sensor coordinates. As discussed earlier, to control the beacons and avoid the problem of saturation, we use the beacon control data provided by the TMS320VC33 DSP [2]. This control data is in digital form, and a radio link is used to communicate it from the sensor electronics module to the beacon controller module.
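The synchronous detection step can be sketched as follows; the sampling rate, noise level and centroid current are invented for illustration. Multiplying the PSD channel signal by the reference carrier and averaging recovers the current that is proportional to the beacon light centroid.

    # Synchronous detection of one beacon channel (illustrative parameters).
    import numpy as np

    fs = 200_000                 # ADC sampling rate, Hz
    fc = 40_000                  # beacon carrier frequency, Hz
    i_centroid = 0.35            # "true" PSD centroid current to be recovered
    t = np.arange(4000) / fs     # 20-ms observation window

    # PSD channel current: carrier scaled by the centroid current, plus noise.
    rng = np.random.default_rng(0)
    signal = i_centroid * np.sin(2 * np.pi * fc * t) + 0.05 * rng.standard_normal(t.size)

    # Multiply by the reference carrier and low-pass (average) the product:
    # the result is proportional to the centroid current (factor of 1/2).
    reference = np.sin(2 * np.pi * fc * t)
    recovered = 2.0 * np.mean(signal * reference)
    print(f"Recovered centroid current: {recovered:.3f} (true {i_centroid})")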

Speed Detection of moving vehicle using speed cameras

Definition
Although road safety performance is good, the number of people killed and injured on our roads remains unacceptably high. A road safety strategy was therefore published to support new casualty reduction targets. The road safety strategy includes all forms of intervention based on engineering, education and enforcement, and recognizes that there are many different factors that lead to traffic collisions and casualties. The main factor is the speed of the vehicle. Traffic lights and other traffic management measures are used to reduce speed; one of these measures is the speed camera. Speed cameras are placed at the side of urban and rural roads, usually to catch transgressors of the stipulated speed limit for that road. The speed cameras serve solely to identify and prosecute those drivers that pass by them while exceeding the stipulated speed limit. At first glance this seems reasonable: ensuring that road users do not exceed the speed limit must be a good thing because it increases road safety, reduces accidents and protects other road users and pedestrians. So speed limits are a good idea. To enforce these speed limits, laws are passed making speeding an offence and signs are erected to indicate the maximum permissible speeds. The police cannot be everywhere to enforce the speed limit, and so enforcement cameras are deployed to do this work; no one with an ounce of common sense deliberately drives through a speed camera in order to be fined and penalized. So nearly everyone slows down for the speed camera, and we finally have a solution to the speeding problem. Now, if we are to assume that speed cameras are the only way to make drivers slow down, and that they work efficiently, then we would expect there to be a great number of them everywhere, and that they would be highly visible and identifiable in order to make drivers slow down.

Optical Switching

Definition
Explosive information demand in the internet world is creating enormous needs for capacity expansion in next-generation telecommunication networks. It is expected that data-oriented network traffic will double every year. Optical networks are widely regarded as the ultimate solution to the bandwidth needs of future communication systems. Optical fiber links deployed between nodes are capable of carrying terabits of information, but the electronic switching at the nodes limits the bandwidth of a network. Optical switches at the nodes will overcome this limitation. With their improved efficiency and lower costs, optical switches provide the key both to managing the new capacity of Dense Wavelength Division Multiplexing (DWDM) links and to gaining a competitive advantage in the provision of new bandwidth-hungry services. However, in an optically switched network the challenge lies in overcoming signal impairment and network-related parameters. Let us discuss the present status, advantages, challenges and future trends in optical switches. A fiber consists of a glass core and a surrounding layer called the cladding. The core and cladding have carefully chosen indices of refraction to ensure that the photons propagating in the core are always reflected at the interface with the cladding. The only way the light can enter and escape is through the ends of the fiber. A transmitter, either a light-emitting diode or a laser, sends electronic data that have been converted to photons over the fiber at a wavelength of between 1,200 and 1,600 nanometers. Today fibers are pure enough that a light signal can travel for about 80 kilometers without the need for amplification. But at some point the signal still needs to be boosted. Electronic amplifiers were replaced by stretches of fiber infused with ions of the rare-earth element erbium. When these erbium-doped fibers are zapped by a pump laser, the excited ions can revive a fading signal. They restore a signal without any optical-to-electronic conversion and can do so for very high speed signals carrying tens of gigabits a second. Most importantly, they can boost the power of many wavelengths simultaneously. Now, to increase the information rate, as many wavelengths as possible are jammed down a fiber, with each wavelength carrying as much data as possible. The technology that does this has a name that is a paragon of technospeak: dense wavelength division multiplexing (DWDM). Switches are needed to route the digital flow to its ultimate destination. The enormous bit conduits will flounder if the light streams are routed using conventional electronic switches, which require a multi-terabit signal to be converted into hundreds of lower-speed electronic signals. Finally, the switched signals would have to be reconverted to photons and reaggregated into light channels that are then sent out through a designated output fiber.
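The arithmetic behind these figures can be sketched as follows. The 80 km amplifier spacing comes from the text; the route length, channel count and per-channel bit rate are assumptions used only for illustration.

# Rough long-haul link arithmetic (illustrative numbers only).
link_length_km = 800           # assumed route length
span_km = 80                   # distance a signal travels before it needs boosting (from the text)
inline_amplifiers = link_length_km // span_km - 1

wavelengths = 80               # assumed DWDM channel count
rate_gbps = 10                 # assumed per-wavelength bit rate
capacity_tbps = wavelengths * rate_gbps / 1000

print(f"{inline_amplifiers} in-line EDFAs, aggregate capacity {capacity_tbps:.1f} Tb/s per fiber")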

Optical Satellite Communication

Definition
The European Space Agency (ESA) has programmes underway to place satellites carrying optical terminals in GEO orbit within the next decade. The first is the ARTEMIS technology demonstration satellite, which carries both a microwave and a SILEX (Semiconductor-laser Inter-satellite Link EXperiment) optical inter-orbit communications terminal. SILEX employs direct detection and GaAlAs diode laser technology; the optical antenna is a 25 cm diameter reflecting telescope. The SILEX GEO terminal is capable of receiving data modulated onto an incoming laser beam at a bit rate of 50 Mbps and is equipped with a high-power beacon for initial link acquisition, together with a low-divergence (and unmodulated) beam which is tracked by the communicating partner. ARTEMIS will be followed by the operational European Data Relay System (EDRS), which is planned to have data relay satellites (DRS). These will also carry SILEX optical data relay terminals. Once these elements of Europe's space infrastructure are in place, there will be a need for optical communications terminals on LEO satellites which are capable of transmitting data to the GEO terminals. A wide range of LEO spacecraft is expected to fly within the next decade, including earth observation and science, manned and military reconnaissance systems. The LEO terminal is referred to as a user terminal, since it enables real-time transfer of LEO instrument data back to the ground to a user with access to the DRS. LEO instruments generate data over a range of bit rates depending upon the function of the instrument; a significant proportion have data rates falling in the region around and below 2 Mbps, and such data would normally be transmitted via an S-band microwave inter-orbit link (IOL). ESA initiated a development programme in 1992 for a LEO optical IOL terminal targeted at this segment of the user community. This is known as the SMALL OPTICAL USER TERMINAL (SOUT), with features of low mass, small size and compatibility with SILEX. The programme is in two phases. Phase 1 was to produce a terminal flight configuration and perform detailed subsystem design and modelling. Phase 2, which started in September 1993, is to build an elegant breadboard of the complete terminal.
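A back-of-envelope geometric-spreading estimate gives a feel for why a high-power beacon and a low-divergence beam matter on such a link. The 25 cm receiving telescope is taken from the text; the range and beam divergence are assumptions, and real link budgets include many more terms.

import math

range_m = 40_000_000           # assumed LEO-to-GEO slant range in metres
divergence_rad = 10e-6         # assumed full beam divergence
rx_aperture_m = 0.25           # 25 cm reflecting telescope (from the text)

beam_diameter_m = divergence_rad * range_m                 # spot size at the receiver
collected_fraction = (rx_aperture_m / beam_diameter_m) ** 2
spreading_loss_db = -10 * math.log10(collected_fraction)

print(f"beam spreads to about {beam_diameter_m:.0f} m; geometric spreading loss ~ {spreading_loss_db:.0f} dB")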

Optical Packet Switching Network

Definition
Within today's Internet, data is transported using wavelength division multiplexed (WDM) optical fiber transmission systems that carry 32-80 wavelengths modulated at 2.5 Gb/s and 10 Gb/s per wavelength. Today's largest routers and electronic switching systems need to handle close to 1 Tb/s to redirect incoming data from deployed WDM links. Meanwhile, next-generation commercial systems will be capable of single-fiber transmission supporting hundreds of wavelengths at 10 Gb/s, and experiments worldwide have demonstrated 10 Tb/s transmission. The ability to direct packets through the network when single-fiber transmission capacities approach this magnitude may require electronics to run at rates that outstrip Moore's law. The bandwidth mismatch between fiber transmission systems and electronic routers becomes more acute when we consider that future routers and switches will potentially terminate hundreds of wavelengths, and the increase in bit rate per wavelength will head beyond 40 Gb/s to 160 Gb/s. Even with significant advances in electronic processor speed, electronic memory access times only improve at a rate of approximately 5% per year, an important data point since memory plays a key role in how packets are buffered and directed through a router. Additionally, opto-electronic interfaces dominate the power dissipation, footprint and cost of these systems, and do not scale well as the port count and bit rate increase. Hence it is not difficult to see that the process of moving a massive number of packets through the multiple layers of electronics in a router can lead to congestion and exceed the performance of the electronics and its ability to efficiently handle the dissipated power. In this article we review the state of the art in optical packet switching and, more specifically, the role optical signal processing plays in performing key functions. We describe how all-optical wavelength converters can be implemented as optical signal processors for packet switching, in terms of their processing functions, wavelength-agile steering capabilities, and signal regeneration capabilities. Examples of how wavelength-converter-based processors can be used to implement asynchronous packet switching functions are reviewed. Two classes of wavelength converters will be touched on: those based on monolithically integrated semiconductor optical amplifiers (SOAs) and those based on nonlinear fiber.
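The bandwidth mismatch described above can be made concrete with simple arithmetic; the channel count and electronic port rate below are assumptions in line with the figures quoted in the text.

# Illustration of the fiber-versus-electronics mismatch (illustrative values).
wavelengths = 200              # "hundreds of wavelengths" per fiber
rate_gbps = 160                # projected per-wavelength bit rate (from the text)
fiber_capacity_tbps = wavelengths * rate_gbps / 1000

electronic_port_gbps = 10      # assumed line rate of one electronic router port
ports_needed = wavelengths * rate_gbps // electronic_port_gbps

print(f"one fiber ~ {fiber_capacity_tbps:.0f} Tb/s, equivalent to {ports_needed} electronic ports at {electronic_port_gbps} Gb/s")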

SATRACK

Definition
According to the dictionary, guidance is the 'process of guiding the path of an object towards a given point, which in general may be moving'. The process of guidance is based on the position and velocity of the target relative to the guided object. Present-day ballistic missiles are all guided using the Global Positioning System (GPS). GPS uses satellites as instruments for sending signals to the missile during flight and guiding it to the target. SATRACK is a system that was developed to provide an evaluation methodology for the guidance systems of ballistic missiles. It was developed as a comprehensive test and evaluation program to validate the integrated weapon system design for ballistic missiles launched from nuclear-powered submarines, and it is based on the tracking signals received at the missile from the GPS satellites. SATRACK has the ability to receive, record, rebroadcast and track the satellite signals. The SATRACK facility also has the great advantage that the whole data set obtained from the test flights can be used to obtain a guidance error model. The recorded data, along with the simulation data from the models, can produce a comprehensive guidance error model. This results in the solution that is the best flight path for the missile. The signals for GPS satellite navigation are two L-band frequency signals, called L1 and L2. L1 is at 1575.42 MHz and L2 at 1227.60 MHz. The modulations used for these GPS signals are 1. a narrow-band clear/acquisition (C/A) code with 2 MHz bandwidth, and 2. a wide-band encrypted P code with 20 MHz bandwidth. L1 is modulated using the narrow-band C/A code only; this signal gives an accuracy of only about 100 m. L2 is modulated using the P code, which gives a higher accuracy, close to 10 m, and that is why it is encrypted. The parameters that a GPS signal carries are latitude, longitude, altitude and time. The modulations applied to each frequency provide the basis for epoch measurements used to determine the distances to each satellite. Tracking of the dual-frequency GPS signals provides a way to correct measurements for the effect of refraction through the ionosphere. An alternate frequency, L3 at 1381.05 MHz, was also used to compensate for ionospheric effects.
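The standard dual-frequency (ionosphere-free) combination used in GPS processing illustrates how tracking both L1 and L2 removes most of the refraction error. The carrier frequencies are those quoted in the text; the pseudorange values are made up purely for illustration.

# Dual-frequency ionosphere-free combination (standard GPS relation).
f1 = 1575.42e6        # L1 carrier frequency in Hz (from the text)
f2 = 1227.60e6        # L2 carrier frequency in Hz (from the text)

p1 = 20_000_123.4     # pseudorange measured on L1 in metres (made-up value)
p2 = 20_000_131.9     # pseudorange measured on L2 in metres (made-up value)

# The ionospheric delay scales as 1/f^2, so this combination cancels it to first order.
p_iono_free = (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)
print(f"ionosphere-free pseudorange: {p_iono_free:.2f} m")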

Crusoe Processor

Definition
Mobile computing has been the buzzword for quite a long time. Mobile computing devices like laptops, webslates & notebook PCs are becoming common nowadays. The heart of every PC whether a desktop or mobile PC is the microprocessor. Several microprocessors are available in the market for desktop PCs from companies like Intel, AMD, Cyrix etc.The mobile computing market has never had a microprocessor specifically designed for it. The microprocessors used in mobile PCs are optimized versions of the desktop PC microprocessor. Mobile computing makes very different demands on processors than desktop computing, yet up until now, mobile x86 platforms have simply made do with the same old processors originally designed for desktops. Those processors consume lots of power, and they get very hot. When you're on the go, a power-hungry processor means you have to pay a price: run out of power before you've finished, run more slowly and lose application performance, or run through the airport with pounds of extra batteries. A hot processor also needs fans to cool it; making the resulting mobile computer bigger, clunkier and noisier. A newly designed microprocessor with low power consumption will still be rejected by the market if the performance is poor. So any attempt in this regard must have a proper 'performance-power' balance to ensure commercial success. A newly designed microprocessor must be fully x86 compatible that is they should run x86 applications just like conventional x86 microprocessors since most of the presently available software's have been designed to work on x86 platform. Crusoe is the new microprocessor which has been designed specially for the mobile computing market. It has been designed after considering the above mentioned constraints. This microprocessor was developed by a small Silicon Valley startup company called Transmeta Corp. after five years of secret toil at an expenditure of $100 million. The concept of Crusoe is well understood from the simple sketch of the processor architecture, called 'amoeba'. In this concept, the x86-architecture is an ill-defined amoeba containing features like segmentation, ASCII arithmetic, variable-length instructions etc. The amoeba explained how a traditional microprocessor was, in their design, to be divided up into hardware and software. Thus Crusoe was conceptualized as a hybrid microprocessor that is it has a software part and a hardware part with the software layer surrounding the hardware unit. The role of software is to act as an emulator to translate x86 binaries into native code at run time. Crusoe is a 128-bit microprocessor fabricated using the CMOS process. The chip's design is based on a technique called VLIW to ensure design simplicity and high performance. Besides this it also uses Transmeta's two patented technologies, namely, Code Morphing Software and Longrun Power Management. It is a highly integrated processor available in different versions for different market segments.
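To make the idea of a software layer translating x86 binaries at run time concrete, here is a deliberately tiny toy picture of dynamic translation with a translation cache. It is not Transmeta's Code Morphing software; the instruction strings and the "translation" are placeholders.

# Toy picture of dynamic binary translation with a translation cache.
translation_cache = {}

def translate(x86_block):
    # Stand-in for translating a block of x86 instructions into native VLIW code.
    return tuple("VLIW_" + insn for insn in x86_block)

def execute(x86_block):
    key = tuple(x86_block)
    if key not in translation_cache:       # translate only on first encounter
        translation_cache[key] = translate(x86_block)
    return translation_cache[key]          # a real system would now run this natively

print(execute(["mov eax, 1", "add eax, 2"]))
print(execute(["mov eax, 1", "add eax, 2"]))   # second call is served from the cache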

Radio Frequency Light Sources

Definition
RF light sources follow the same principles of converting electrical power into visible radiation as conventional gas discharge lamps. The fundamental difference between RF lamps and conventional lamps is that RF lamps operate without electrodes. The presence of electrodes in conventional fluorescent and High Intensity Discharge lamps has put many restrictions on lamp design and performance and is a major factor limiting lamp life. Recent progress in semiconductor power switching electronics, which is revolutionizing many sectors of the electrical industry, together with a better understanding of RF plasma characteristics, has made it possible to drive lamps at high frequencies. The very first proposal for RF lighting, as well as the first patent on RF lamps, appeared about 100 years ago, a half century before the basic principles of gas-discharge lighting technology had been developed. Discharge tubes: a discharge tube is a device in which a gas conducting an electric current emits visible light. It is usually a glass tube from which virtually all the air has been removed (producing a near vacuum), with electrodes at each end. When a high-voltage current is passed between the electrodes, the few remaining gas atoms (or some deliberately introduced ones) ionize and emit coloured light as they conduct the current along the tube. The light originates as electrons change energy levels in the ionized atoms. By coating the inside of the tube with a phosphor, invisible emitted radiation (such as ultraviolet light) can produce visible light; this is the principle of the fluorescent lamp. We will consider different kinds of RF discharges and their advantages and restrictions for lighting applications.

QoS in Cellular Networks Based on MPT

Definition
In recent years, there has been a rapid increase in wireless network deployment and mobile device market penetration. With vigorous research that promises higher data rates, future wireless networks will likely become an integral part of the global communication infrastructure. Ultimately, wireless users will demand the same reliable service as today's wire-line telecommunications and data networks. However, there are some unique problems in cellular networks that challenge their service reliability. In addition to problems introduced by fading, user mobility places stringent requirements on network resources. Whenever an active mobile terminal (MT) moves from one cell to another, the call needs to be handed off to the new base station (BS), and network resources must be reallocated. Resource demands could fluctuate abruptly due to the movement of high data rate users. Quality of service (QoS) degradation or even forced termination may occur when there are insufficient resources to accommodate these handoffs. If the system had prior knowledge of the exact trajectory of every MT, it could take appropriate steps to reserve resources so that QoS could be guaranteed during the MT's connection lifetime. However, such an ideal scenario is very unlikely to occur in real life. Instead, much of the work on resource reservation has adopted a predictive approach. One approach uses pattern matching techniques and a self-adaptive extended Kalman filter for next-cell prediction based on cell sequence observations, signal strength measurements, and cell geometry assumptions. Another approach proposes the concept of a shadow cluster: a set of BSs to which an MT is likely to attach in the near future. The scheme estimates the probability of each MT being in any cell within the shadow cluster for future time intervals, based on knowledge about individual MTs' dynamics and call holding patterns.
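A minimal sketch of the shadow-cluster idea is to reserve bandwidth in each neighbouring cell in proportion to the estimated probability that the mobile terminal will hand off to it. The probabilities and the call bandwidth below are made-up values, not part of the cited scheme.

def reserve(shadow_cluster, call_bandwidth_kbps):
    # shadow_cluster maps each candidate cell to an estimated handoff probability
    return {cell: prob * call_bandwidth_kbps for cell, prob in shadow_cluster.items()}

shadow_cluster = {"cell_A": 0.6, "cell_B": 0.3, "cell_C": 0.1}   # assumed probabilities
print(reserve(shadow_cluster, call_bandwidth_kbps=128))          # kb/s to reserve per cell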

Project Oxygen

Definition
Oxygen enables pervasive, human-centered computing through a combination of specific user and system technologies. Oxygen's user technologies directly address human needs. Speech and vision technologies enable us to communicate with Oxygen as if we were interacting with another person, saving much time and effort. Automation, individualized knowledge access, and collaboration technologies help us perform a wide variety of tasks that we want to do in the ways we like to do them. Oxygen's system technologies dramatically extend our range by delivering user technologies to us at home, at work, or on the go. Computational devices, called Enviro21s (E21s), embedded in our homes, offices, and cars sense and affect our immediate environment. Hand-held devices, called Handy21s (H21s), empower us to communicate and compute no matter where we are. Dynamic networks (N21s) help our machines locate each other as well as the people, services, and resources we want to reach. The Oxygen technologies work together and pay attention to several important themes: distribution and mobility, for people, resources, and services; semantic content, what we mean, not just what we say; adaptation and change, essential features of an increasingly dynamic world; and information personalities, the privacy, security, and form of our individual interactions with Oxygen. Oxygen is an integrated software system that will reside in the public domain. Its development is sponsored by DARPA and the Oxygen Alliance industrial partners, who share its goal of pervasive, human-centered computing. Realizing that goal will require a great deal of creativity and innovation, which will come from researchers, students, and others who use Oxygen technologies for their daily work during the course of the project. The lessons they derive from this experience will enable Oxygen to better serve human needs.

Polymer Memory

Definition
Polymers are organic materials consisting of long chains of single molecules. Polymers are highly adaptable materials, suitable for myriad applications. Until the 1970s and the work of Nobel laureates Alan J. Heeger, Alan G. MacDiarmid and Hideki Shirakawa, polymers were only considered to be insulators. Heeger et al. showed that polymers could be conductive. Electrons were removed from, or introduced into, a polymer consisting of alternating single and double bonds between the carbon atoms. As these holes or extra electrons are able to move along the molecule, the structure becomes electrically conductive. Thin Film Electronics has developed a specific group of polymers that are bistable and thus can be used as the active material in a non-volatile memory. In other words, the Thin Film polymers can be switched from one state to the other and maintain that state even when the electrical field is turned off. This polymer is "smart", to the extent that functionality is built into the material itself, like switchability, addressability and charge storage. This is different from silicon and other electronic materials, where such functions typically are only achieved by complex circuitry. "Smart" materials can be produced from scratch, molecule by molecule, allowing them to be built according to design. This opens up tremendous opportunities in the electronics world, where "tailor-made" memory materials represent unknown territory. Polymers are essentially electronic materials that can be processed as liquids. With Thin Film's memory technology, polymer solutions can be deposited on flexible substrates with industry-standard processes like spin coating in ultra-thin layers. Digital memory is an essential component of many electronic devices, and memory that takes up little space and electricity is in high demand as electronic devices continue to shrink. Researchers from the Indian Association for the Cultivation of Science and the Italian National Research Council used positive and negative electric charges, or space charges, contained within plastic to store binary numbers. A polymer retains space charges near a metal interface when there is a bias, or electrical current, running across the surface. These charges come either from electrons, which are negatively charged, or from the positively charged holes vacated by electrons. We can store space charges in a polymer layer, and conveniently check for the presence of the space charges to know the state of the polymer layer. Space charges are essentially differences in electrical charge in a given region. They can be read using an electrical pulse, because they change the way the devices conduct electricity.
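A toy model of a bistable, non-volatile cell captures the behaviour described above: a write pulse above some threshold sets the state, the state is retained with no field applied, and a read does not disturb it. The threshold and polarities are illustrative assumptions, not properties of any particular polymer.

class PolymerCell:
    WRITE_THRESHOLD_V = 3.0             # assumed switching threshold

    def __init__(self):
        self.state = 0                  # retained even with no field applied

    def write(self, pulse_volts):
        if pulse_volts >= self.WRITE_THRESHOLD_V:
            self.state = 1
        elif pulse_volts <= -self.WRITE_THRESHOLD_V:
            self.state = 0

    def read(self):
        return self.state               # non-destructive read

cell = PolymerCell()
cell.write(+5.0)
print(cell.read())                      # -> 1, and it stays 1 until rewritten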

Navbelt and Guidecane

Definition
Recent revolutionary achievements in robotics and bioengineering have given scientists and engineers great opportunities and challenges to serve humanity. This seminar is about the NavBelt and the GuideCane, two computerised devices based on advanced mobile robotic navigation for obstacle avoidance, useful for visually impaired people. This is "bioengineering for people with disabilities". The NavBelt is worn by the user like a belt and is equipped with an array of ultrasonic sensors. It provides acoustic signals via a set of stereo earphones that guide the user around obstacles or display a virtual acoustic panoramic image of the traveller's surroundings. One limitation of the NavBelt is that it is exceedingly difficult for the user to comprehend the guidance signals in time to allow fast walking. A newer device, called the GuideCane, effectively overcomes this problem. The GuideCane uses the same mobile robotics technology as the NavBelt but is a wheeled device pushed ahead of the user via an attached cane. When the GuideCane detects an obstacle, it steers around it. The user immediately feels this steering action and can follow the GuideCane's new path easily without any conscious effort. The mechanical, electrical and software components, the user-machine interface and the prototypes of the two devices are described.
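A much-simplified view of the obstacle-avoidance step is to steer toward the sector with the most clearance in an array of ultrasonic range readings. The real NavBelt/GuideCane software is far more sophisticated; the sector layout and ranges here are assumptions.

def pick_steering_sector(ranges_cm, min_clearance_cm=80):
    best = max(range(len(ranges_cm)), key=lambda i: ranges_cm[i])
    if ranges_cm[best] < min_clearance_cm:
        return None                     # nowhere safe to go: stop
    return best                         # index of the clearest sector

# ten sensors sweeping from left (index 0) to right (index 9)
readings = [45, 60, 220, 310, 90, 70, 55, 150, 40, 35]
print(pick_steering_sector(readings))   # -> 3, steer toward that sector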

Multisensor Fusion and Integration

Introduction
A sensor is a device that detects or senses the value, or changes in the value, of the variable being measured. The term sensor is sometimes used instead of the terms detector, primary element or transducer. The fusion of information from sensors with different physical characteristics, such as light, sound, etc., enhances the understanding of our surroundings and provides the basis for planning, decision making, and control of autonomous and intelligent machines. Sensors Evolution: A sensor is a device that responds to some external stimulus and then provides some useful output. With the concept of input and output, one can begin to understand how sensors play a critical role in both closed and open loops. One problem is that sensors are not perfectly selective; in other words, they tend to respond to a variety of stimuli applied to them without being able to differentiate one from another. Nevertheless, sensors and sensor technology are necessary ingredients in any control-type application. Without the feedback from the environment that sensors provide, the system has no data or reference points, and thus no way of understanding what is right or wrong with its various elements. Sensors are especially important in automated manufacturing, particularly in robotics. Automated manufacturing is essentially the procedure of removing the human element as far as possible from the manufacturing process. Sensors in the condition measurement category sense various types of inputs, conditions, or properties to help monitor and predict the performance of a machine or system. Multisensor Fusion and Integration: Multisensor integration is the synergistic use of the information provided by multiple sensory devices to assist in the accomplishment of a task by a system. Multisensor fusion refers to any stage in the integration process where there is an actual combination of different sources of sensory information into one representational format. Multisensor Integration: The diagram represents multisensor integration as being a composite of basic functions. A group of n sensors provides input to the integration process. In order for the data from each sensor to be used for integration, it must first be effectively modelled. A sensor model represents the uncertainty and error in the data from each sensor and provides a measure of its quality that can be used by the subsequent integration functions.
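A minimal example of combining different sources of sensory information into one representational format is inverse-variance weighting of two readings of the same quantity; the readings and variances below are made-up values.

def fuse(measurements):
    # measurements: list of (value, variance) pairs from the individual sensor models
    weights = [1.0 / var for _, var in measurements]
    value = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

ultrasonic = (2.10, 0.04)    # distance in metres and its variance (assumed)
infrared = (2.25, 0.09)
print(fuse([ultrasonic, infrared]))   # fused estimate lies closer to the better sensor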

Magneto-optical current transformer technology (MOCT)

Definition
An accurate electric current transducer is a key component of any power system instrumentation. To measure currents, power stations and substations conventionally employ inductive-type current transformers with core and windings. For high-voltage applications, porcelain insulators and oil-impregnated materials have to be used to provide insulation between the primary bus and the secondary windings. The insulation structure has to be designed carefully to avoid electric field stresses, which could eventually cause insulation breakdown. The electric current path of the primary bus has to be designed properly to minimize the mechanical forces on the primary conductors for through faults. The reliability of conventional high-voltage current transformers has been questioned because of their violent destructive failures, which have caused fires and impact damage to adjacent apparatus in switchyards, electric damage to relays, and power service disruptions. With the short-circuit capabilities of power systems getting larger and the voltage levels going higher, conventional current transformers become more and more bulky and costly. Also, the saturation of the iron core under fault current and the low frequency response make it difficult to obtain accurate current signals under power system transient conditions. In addition to these concerns, with computer control techniques and digital protection devices being introduced into power systems, conventional current transformers have caused further difficulties, as they are likely to introduce electromagnetic interference through the ground loop into the digital systems. This has required the use of an auxiliary current transformer or optical isolator to avoid such problems. It appears that the newly emerged magneto-optical current transformer technology provides a solution for many of the above mentioned problems. The MOCT measures the electric current by means of the Faraday effect, which was first observed by Michael Faraday 150 years ago. The Faraday effect is the phenomenon that the orientation of polarized light rotates under the influence of a magnetic field, and the rotation angle is proportional to the strength of the magnetic field component in the direction of the optical path.
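The measurement principle can be sketched with the proportionality stated above: for a sensing path that closes around the conductor N times, the rotation is proportional to N times the enclosed current. The effective constant and the turn count below are illustrative assumptions, not values for a particular MOCT design.

import math

EFFECTIVE_CONSTANT = 4.6e-6     # assumed rotation in radians per ampere-turn
TURNS = 10                      # assumed number of optical loops around the bus

def current_from_rotation(rotation_rad):
    return rotation_rad / (EFFECTIVE_CONSTANT * TURNS)

measured_rotation = math.radians(2.0)      # example measured polarisation rotation
print(f"estimated bus current ~ {current_from_rotation(measured_rotation):.0f} A")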

Mobile Virtual Reality Service (VRS)

Definition
A mobile virtual reality service (VRS) will make the presence and presentation of the sounds and sights of an actual physical environment virtually available everywhere in real time through the use of mobile telecommunication devices and networks. Furthermore, the VRS is the conversion of a physical system into its digital representation in a three-dimensional (3D) multimedia format. This paper addresses one aspect of the notion of bringing an actual multimedia environment to its virtual presence everywhere in real time. An International Telecommunication Union (ITU) recommendation document, containing ITU's visions on mostly forward-looking and innovative services and network capabilities, addresses the capability needed in a telecommunication system to allow mobile access to real-time sights and sounds of an actual physical environment in the context and form of a VRS episode. Presently, the availability of a VRS is limited to fixed-access phenomena in non-real time, for example entertainment machines and various simulation equipment. There are also some limited fixed-access and real-time services that require low data transmission rates, such as net meetings. In the latter case, a user can experience a limited real-life environment, as opposed to the former case of a non-real-life, computer-generated environment. These existing virtual reality services do not allow user control in viewing 3D environments, and they are generally limited to viewing images on a monitor in two dimensions. VRS-capable systems, however, will allow true 3D representations of remote real-life environments. For instance, a passenger in a train or in a car could become a participant in a conference call in a 3D environment, or become virtually present among the audience in a concert hall or sports stadium viewing a live concert or event.

Smart Pixel Arrays (SPAs)

Definition
High speed smart pixel arrays (SPAs) hold great promise as an enabling technology for board-toboard interconnections in digital systems. SPAs may be considered an extension of a class of optoelectronic components that have existed for over a decade, that of optoelectronic integrated circuits (OEICs). The vast majority of development in OEICs has involved the integration of electronic receivers with optical detectors and electronic drivers with optical sources or modulators. In addition, very little of this development has involved more than a single optical channel. But OEICs have underpinned much of the advancement in serial fiber links. SPAs encompass an extension of these optoelectronic components into arrays in which each element of the array has a signal processing capability. Thus, a SPA may be described as an array of optoelectronic circuits for which each circuit possesses the property of signal processing and, at a minimum, optical input or optical output (most SPAs will have both optical input and output). The name smart pixel is combination of two ideas, "pixel" is an image processing term denoting a small part, or quantized fragment of an image, the word "smart" is coined from standard electronics and reflects the presence of logic circuits. Together they describe a myriad of devices. These smart pixels can be almost entirely optical in nature, perhaps using the non-linear optical properties of a material to manipulate optical data, or they can be mainly electronic, for instance a photoreceiver coupled with some electronic switching. Smart pixel arrays for board-to-board optical interconnects may be used for either backplane communications or for distributed board-to-board communications, the latter known as 3-D packaging. The former is seen as the more near-term of the two, employing free-space optical beams connecting SPAs located on the ends of printed circuit boards in place of the current state-of-the-art, multi-level electrical interconnected boards. 3-D systems, on the other hand, are distributed board-to-board optical interconnects, exploiting the third dimension and possibly employing holographic interconnect elements to achieve global connectivity (very difficult with electrical interconnects).

Adaptive Blind Noise Suppression in some Speech Processing Applications


In many applications of speech processing the noise reveals some specific features. Although the noise may be quite broadband, there are a limited number of dominant frequencies which carry most of its energy. This fact implies the usage of narrow-band notch filters that must be adaptive in order to track changes in the noise characteristics. In the present contribution, a method and a system for noise suppression are developed. The method uses adaptive notch filters based on a second-order Gray-Markel lattice structure. The main advantages of the proposed system are that it has very low computational complexity, is stable in the process of adaptation, and has a short time of adaptation. For comparable SNR improvement, the proposed method adjusts only 3 coefficients against 250-450 for conventional adaptive noise cancellation systems. A framework for a speech recognition system that uses the proposed method is suggested. INTRODUCTION: The existence of noise is inevitable in real applications of speech processing. It is well known that additive noise negatively affects the performance of speech codecs designed to work with noise-free speech, especially codecs based on linear prediction coefficients (LPC). Another application strongly influenced by noise is hands-free phones, where the background noise reduces the signal-to-noise ratio (S/N) and the speech intelligibility. Last but not least is the problem of speech recognition in a noisy environment. A system that works well in noise-free conditions usually shows considerable degradation in performance when background noise is present. It is clear that a strong demand exists for reliable noise cancellation methods that efficiently separate the noise from the speech signal. The endeavours in designing such systems can be traced back some 20 years. The core of the problem is that in most situations the characteristics of the noise are not known a priori, and moreover they may change in time. This implies the use of adaptive systems capable of identifying and tracking the noise characteristics. This is why the application of adaptive filtering for noise cancellation is widely used. The classical systems for noise suppression rely on the usage of adaptive linear filtering and the application of digital filters with finite impulse response (FIR). The strong points of this approach are the simple analysis of linear systems in the process of adaptation and the guaranteed stability of FIR structures. It is worth mentioning the existence of relatively simple and well investigated adaptive algorithms for such systems, namely the least mean squares (LMS) and recursive least squares (RLS) algorithms. The investigations in the area of noise cancellation reveal that in some applications nonlinear filters outperform their linear counterparts. That fact is a good motivation for a shift towards the usage of nonlinear systems in noise reduction. Another approach is based on a microphone array instead of the two microphones, reference and primary, that are used in the classical noise cancellation scheme. A brief analysis of all the mentioned approaches leads to the conclusion that they try to model the noise path either by a linear or by a nonlinear system. Each of these methods has its strengths and weaknesses.
For example, for the classical noise cancellation with two microphones it is the need for a reference signal; for the neural filters, the fact that as a rule they are slower than classic adaptive filters and are efficient only for noise suppression on relatively short data sequences, which is not the case in speech processing; and finally for microphone arrays, the need for precise spatial alignment. In the present contribution the approach is slightly different.

The basic idea is that in many applications, for instance hands-free cellular phones in a car environment, howling control in hands-free phones, or noise reduction in an office environment, the noise reveals specific features that can be exploited. In most instances, although the noise might be quite wide-band, there are always, as a rule, no more than two or three regions of its frequency spectrum that carry most of the noise energy, and the removal of these dominant frequencies results in a considerable improvement of the S/N ratio. This brings the idea to use notch adaptive filters capable of tracking the noise characteristics. In this paper a modification of all-pass structures is used. They are recursive and, at the same time, are stable during the adaptive process. The approach is called "blind" because there is no need for a reference signal.
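The following crude sketch conveys the underlying idea of tuning a two-pole, two-zero notch to the dominant noise frequency by minimising output power. It is not the Gray-Markel lattice adaptation of the paper (which adapts its coefficients recursively), and the test signal, pole radius and search grid are assumptions.

import numpy as np

fs = 8000
t = np.arange(0, 0.25, 1 / fs)
speechlike = 0.3 * np.random.randn(t.size)       # stand-in for the wanted signal
x = speechlike + np.sin(2 * np.pi * 1234.0 * t)  # dominant narrow-band disturbance

def notch(x, f0, fs, rho=0.98):
    c = np.cos(2 * np.pi * f0 / fs)              # notch frequency coefficient
    y = np.zeros_like(x)
    for n in range(2, len(x)):
        y[n] = (x[n] - 2 * c * x[n - 1] + x[n - 2]
                + 2 * rho * c * y[n - 1] - rho ** 2 * y[n - 2])
    return y

candidates = np.arange(200, 3800, 20)            # coarse search grid in Hz
best_f0 = min(candidates, key=lambda f0: np.var(notch(x, f0, fs)))
print(f"estimated dominant noise frequency ~ {best_f0} Hz")   # should land near 1234 Hz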

An Efficient Algorithm for iris pattern Recognition using 2D Gabor Wavelet Transformation in Matlab
Wavelet analysis has received significant attention because its multi-resolution decomposition allows efficient image analysis. It is widely used for varied applications such as noise reduction and data compression. In this paper we introduce and apply the concept of the two-dimensional Gabor wavelet transform to a biometric iris recognition system. The application of this transform in encoding the iris image for pattern recognition proves to achieve increased accuracy and processing speed compared to other methods. With a strong scientific approach and mathematical background we have developed an algorithm to facilitate the implementation of this method under the MATLAB platform. IMAGES - An introduction: A dictionary defines an image as a "reproduction or representation of the form of a person or thing". The inherent association of a human with the visual senses predisposes one to conceive an image as a stimulus on the retina of the eye, in which case the mechanisms of optics govern the image formation, resulting in continuous-range, multi-tone images. A digital image can be defined as a numerical representation of an object or, more strictly, a sampled, quantized function of two dimensions which has been generated by optical means, sampled in an equally spaced rectangular grid pattern, and quantized in equal intervals of gray level. The world is crying out for simpler access controls for personal authentication systems, and it looks like biometrics may be the answer. Instead of carrying a bunch of keys or all those access cards and passwords around with you, your body can be used to uniquely identify you. Furthermore, when biometric measures are applied in combination with other controls, such as access cards or passwords, the reliability of authentication controls takes a giant step forward. BIOMETRICS - AN OVERVIEW: Biometrics is best defined as measurable physiological and/or behavioural characteristics that can be utilized to verify the identity of an individual. They include iris scanning, facial recognition, fingerprint verification, hand geometry, retinal scanning, signature verification and voice verification. ADVANTAGES OF IRIS IDENTIFICATION: the iris is a highly protected internal organ of the eye; iris patterns possess a high degree of randomness (variability: 244 degrees of freedom; entropy: 3.2 bits per square millimetre; uniqueness: set by combinatorial complexity); and the patterns are apparently stable throughout life. IRIS - An introduction: The iris is a colored ring that surrounds the pupil and contains easily visible yet complex and distinct combinations of corona, pits, filaments, crypts, striations, radial furrows and more. The iris is called the "living password" because of its unique, random features. It is always with you and cannot be stolen or faked. As such it makes an excellent biometric identifier.
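A minimal sketch of the 2D Gabor demodulation idea follows: build a complex Gabor kernel, correlate it with an iris-texture patch, and keep the phase quadrant of the response as two bits of the iris code. The kernel parameters and the random stand-in patch are illustrative assumptions, not values from a particular implementation.

import numpy as np

def gabor_kernel(size=16, wavelength=6.0, sigma=4.0, theta=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)        # carrier direction
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * 2 * np.pi * xr / wavelength)
    return envelope * carrier

rng = np.random.default_rng(0)
iris_patch = rng.random((16, 16))                     # stand-in for a normalised iris region

response = np.sum(iris_patch * gabor_kernel())        # complex filter response
code_bits = (bool(response.real > 0), bool(response.imag > 0))
print(code_bits)                                      # two bits of the iris code for this patch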

Analog-Digital Hybrid Modulation for Improved Efficiency over Broadband Wireless Systems
This paper seeks to present ways to eliminate the inherent quantization noise component in digital communications, instead of conventionally making it minimal. It deals with a new concept of signaling called the Signal Code Modulation (SCM) technique. The primary analog signal is represented by two parts: a sample which is quantized and encoded digitally, and an analog component which is a function of the quantization error of the digital sample. The advantages of such a system are two-fold, offering the advantages of both analog and digital signaling. The presence of the analog residual allows the system performance to improve when excess channel SNR is available. The digital component provides increased SNR and makes it possible for coding to be employed to achieve near error-free transmission. Introduction: Let us consider the transmission of an analog signal over a band-limited channel. This is possible with two conventional techniques: analog transmission and digital transmission, of which the latter uses sampling and quantization principles. Analog modulation techniques such as frequency and phase modulation provide significant noise immunity, provide SNR improvement proportional to the square root of the modulation index, and are thus able to trade off bandwidth for SNR. The SCM Technique: An Analytical Approach. Suppose we are given a band-limited signal of bandwidth B Hz, which needs to be transmitted over a channel of bandwidth Bc with Gaussian noise of spectral density N0 watts per Hz. Let the transmitter have an average power of P watts. We consider that the signal is sampled at the Nyquist rate of 2B samples per second to produce a sampled signal x(n). Next, let the signal be quantized to produce a discrete-amplitude signal of M = 2^b levels, where b is the number of bits per sample of the digital symbol D which is to be encoded. More explicitly, let the values of the 2^b levels be q1, q2, q3, ..., qM, distributed over the range [-Δ, +Δ], where Δ is a proportionality factor determined relative to the signal. Given a sample x(n), we find the nearest level qi(n). Here qi(n) is the digital symbol and xa(n) = x(n) - qi(n) is the analog representation. The exact representation of the analog signal is given by x(n) = qi(n) + xa(n). We can accomplish the transmission of this information over the noisy channel by dividing it into two channels: one for the analog information and another for the digital information. The analog channel bandwidth is Ba = βaB and the digital channel bandwidth is Bd = βdB, where Ba + Bd = Bc, the channel bandwidth. Let β = Bc/B be the bandwidth expansion factor, i.e. the ratio of the bandwidth of the channel to the bandwidth of the signal. Similarly, the variables βa and βd are the ratios Ba/B and Bd/B. Here we will assume that βa = 1, so that βd = β - 1. The total power is also divided between the two channels, with fraction pa for the analog channel and fraction pd for the digital one, so that pa + pd = 1.
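The sample split described above can be illustrated directly: each sample is represented by the nearest quantizer level (the digital symbol) plus the analog residual. The level spacing and the test samples are made-up values.

import numpy as np

b = 3                                    # bits per digital symbol, M = 2**b levels
levels = np.linspace(-1.0, 1.0, 2 ** b)  # q1 ... qM spread over [-Δ, +Δ] with Δ = 1 (assumed)

def scm_split(x):
    idx = int(np.argmin(np.abs(levels - x)))
    q = levels[idx]                      # digital component q_i(n)
    return idx, q, x - q                 # symbol index, level, analog residual x_a(n)

for x in (0.37, -0.81):
    idx, q, residual = scm_split(x)
    print(f"x={x:+.2f} -> symbol {idx}, level {q:+.3f}, residual {residual:+.3f}, sum {q + residual:+.2f}")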

Artificial Intelligence Substation Control


Controlling a substation with a fuzzy controller speeds up the response time and diminishes the possibility of risks normally related to human operations. The automation of electric substations is an area under constant development. Our research has focused on the selection of the magnitudes to be controlled, the definition and implementation of the soft computing techniques, and the elaboration of a programming tool to execute the control operations. It is possible to control the desired status while supervising some important magnitudes, such as the voltage, power factor and harmonic distortion, as well as the present status. The status of the circuit breakers can be controlled by using a knowledge base that relates some of the operation magnitudes, mixing status variables with time variables and fuzzy sets. The number of magnitudes necessary to supervise and control a substation can be very high; in the present research work many magnitudes were not included, to avoid an extensive number of required rules. Nevertheless, controlling a substation by a fuzzy controller has the advantage that it can speed up the response time and diminish the possibility of risks normally related to human operations. Introduction: Electric substations are facilities in charge of voltage transformation to provide safe and effective energy to the consumers. This energy supply has to be carried out with sufficient quality and should guarantee the equipment's security. The cost associated with ensuring quality and security during the supply in substations is high. Automatic mechanisms are generally used to a greater or lesser extent, although they mostly operate according to an individual control and protection logic related to the equipment itself and not to the topology of the whole substation at a given moment. The automation of electric substations is an area under constant development. Nevertheless, the control of a substation is a very complex task due to the great number of related problems and, therefore, of decision variables that can influence the substation performance. Under such circumstances, the use of learning control systems can be very useful. Many papers on applications of artificial intelligence (AI) techniques to power systems have been published in recent years. The difficulties associated with the application of this technique include: selection of the magnitude to be controlled; definition and implementation of the soft techniques; elaboration of a programming tool to execute the control operations; selection, acquisition and installation of the measurement and control equipment; and interfacing with this equipment and applying the controlling technique in existing substations. Even when all the magnitudes to be controlled cannot be included in the analysis (mostly due to the great number of measurements and status variables of the substation and, therefore, to the rules that would be required by the controller), it is possible to control the desired status while supervising some important magnitudes, such as the voltage, power factor and harmonic distortion, as well as the present status.
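A very small illustration of the fuzzy idea is given below: fuzzify one measured magnitude (the power factor), apply a single rule, and turn the result into a switching decision. The membership breakpoints and the rule are illustrative assumptions, not those of the controller described here.

def mu_low_pf(pf):
    # membership of "power factor is low": 1 below 0.85, 0 above 0.95, linear in between
    if pf <= 0.85:
        return 1.0
    if pf >= 0.95:
        return 0.0
    return (0.95 - pf) / 0.10

def capacitor_steps(pf, max_steps=4):
    # Rule: IF power factor is low THEN connect capacitor steps,
    # scaled by the degree to which the premise holds.
    return round(mu_low_pf(pf) * max_steps)

for pf in (0.80, 0.90, 0.97):
    print(pf, "->", capacitor_steps(pf), "capacitor step(s)")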

Speech Compression - a novel method


This paper illustrates a novel method of speech compression and transmission. This method saves a considerable amount of the transmission bandwidth required for the speech signal. The scheme exploits the low-pass nature of the speech signal. The method applies equally well to any signal which is low-pass in nature; speech, being the most widely used in real-time communication, is highlighted here. As per this method, the low-pass signal (speech) at the transmitter is divided into a set of packets, each containing, say, N samples. Of the N samples per packet, only a smaller number of samples, say αN, are transmitted. Here α is less than unity, so compression is achieved. The N samples per packet are subjected to an N-point DFT. Since only low-pass signals are considered here, the number of significant values in the set of DFT samples is very limited. Transmitting these significant samples alone suffices for reliable transmission. The number of samples which are transmitted is determined by the parameter α. The parameter α is almost independent of the source of the speech signal. In other methods of speech compression, specific characteristics of the source, such as pitch, are important for the algorithm to work. An exact reverse process at the receiver reconstructs the samples. At the receiver, the N-point IDFT of the received signal is performed after the necessary zero padding. Zero padding is necessary because at the transmitter only αN of the N samples are transmitted, but at the receiver N samples are again needed to faithfully reconstruct the signal. Hence this method is efficient, as only a portion of the total number of samples is transmitted, thereby saving bandwidth. Since frequency samples are transmitted, the phase information also has to be transmitted. Here again, by exploiting the property of signals and their spectra, the PHASE INFORMATION CAN BE EMBEDDED WITHIN THE MAGNITUDE SPECTRUM by using simple mathematics, without any heavy computations or any increase in bandwidth. The simulation results of this method also show that the smaller the size of the packet, the more faithful is the reproduction of the received signal, which is again an advantage as the computation time is reduced. The reduction in computation time is due to the fact that the transmitter has to wait until N samples are obtained before starting the transmission: if N is small, the transmitter has to wait for a shorter duration of time, and a smaller value of N achieves a better reconstruction at the receiver. Thus this scheme provides a more efficient method of speech compression, and it is also very easy to implement with the help of available high-speed processors. Transmitting the spectrum of the signal instead of the original signal is far more efficient. Because the energy of the speech signal above 4 kHz is negligible, we can compute the spectrum of the signal and transmit only the samples that correspond to 4 kHz of the spectrum, irrespective of the sampling frequency. By this type of transmission we can save the bandwidth required for transmission considerably. Also, it is not necessary to transmit all the samples corresponding to the 4 kHz band, as it is sufficient to transmit a fraction of the samples without any degradation in quality. Since the spectrum is considered in the above method, both the magnitude and phase information must be transmitted to reproduce the signal without any error. But this requires twice the actual bandwidth. Exploiting the properties of real and even signals can solve this problem. The samples are real, and evenness is artificially introduced so that their spectra are also real and even. Thus by simple mathematics the complete phase information is embedded within the magnitude spectrum, and it is only necessary to send αN samples instead of 2N samples of the spectra (magnitude and phase). Adopting all these procedures and embedding the phase information in the magnitude spectrum, a MATLAB simulation was performed to determine the optimum values of α and N. The results of the simulation are also provided.
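A simplified sketch of the packet-wise DFT truncation is shown below: keep only a fraction α of the low-frequency spectral samples, zero-pad at the receiver and invert the transform. It uses a real-input FFT for brevity rather than the paper's even-extension phase-embedding trick, and the packet size, α and test tone are assumptions.

import numpy as np

N = 256
alpha = 0.25
t = np.arange(N) / 8000.0
packet = np.sin(2 * np.pi * 440.0 * t)            # stand-in for one speech packet

spectrum = np.fft.rfft(packet)                    # N/2 + 1 complex samples for a real packet
keep = int(alpha * spectrum.size)
transmitted = spectrum[:keep]                     # only about alpha*N spectral samples are sent

received = np.zeros_like(spectrum)                # receiver zero-pads before the inverse DFT
received[:keep] = transmitted
reconstructed = np.fft.irfft(received, n=N)

error = np.sqrt(np.mean((packet - reconstructed) ** 2))
print(f"RMS reconstruction error: {error:.4f}")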

Class-D Amplifiers
Class D amplifiers present us with a revolutionary solution which could help us eliminate the loss and distortion caused by the conversion of digital signals to analog while amplifying signals before sending them to the speakers. This piece of knowledge could prove instrumental in improving and redefining the essence of sound and taking it to a different realm. This type of amplifier does not require the use of D-A conversion and hence reduces the costs incurred in developing state-of-the-art output technology. The digital output from sources such as CDs, DVDs and computers can now be sent directly for amplification without the need for any conversion. Another important feature of this unique and novel kind of amplifier is that it gives us a typical efficiency of 90%, compared to normal amplifiers which give us an efficiency of 65-70%. This obviously means less dissipation, which in turn means lower-rated heat sinks and less wasted energy. This makes the use of Class D amplifiers in miniature and portable devices all the more apt. All these years Class D amplifiers have been used for purposes where efficiency was the key, whereas now developments in this technology have made their entry possible into other domains that are less hi-fi, showing up in MP3 players, portable CD players, laptop computers, cell phones, and even personal digital assistants.
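The dissipation arithmetic behind those efficiency figures is straightforward; the 50 W output level is an assumed example.

def dissipated_watts(output_power_w, efficiency):
    # input power = output / efficiency; whatever is not delivered to the load becomes heat
    return output_power_w / efficiency - output_power_w

audio_power = 50.0                       # assumed amplifier output in watts
for name, eff in (("Class D", 0.90), ("conventional", 0.70)):
    print(f"{name}: ~{dissipated_watts(audio_power, eff):.1f} W lost as heat")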

Digital Audio's Final Frontier-Class D Amplifier

Digital technology continues its march from media like CDs and DVDs toward your audio speakers. Today, amplifiers based on digital principles are already having a profound effect on equipment efficiency and size. They are also beginning to set the standard for sound quality. An old idea, the Class D amplifier has taken on new life as equipment manufacturers and consumers redefine the musical experience to be as likely to occur in a car, on a personal stereo, or on an airplane as in a living room. For most consumers today, portability and style outweigh other factors in the choice of new audio gear. Class D amplifiers are ideally suited to capitalize on the trend. They are already starting to displace conventional high-fidelity amplifiers, particularly in mobile and portable applications, where their high efficiency and small size put them in a class by themselves. For example, they are fast becoming the dominant technology for entertainment systems in cars, where passengers are now apt to watch a DVD and expect from the vehicle's compact, ill-ventilated electronics the same rousing surround-sound experience they get at home. The new amplifiers can provide it. They are typically around 90 percent efficient at rated power, versus 65-70 percent for conventional audio amps. Such high efficiency means, for one thing, that the amplifiers can get by with much smaller heat sinks to carry away the energy they waste. Also, portable devices like MP3 players can go much longer on a battery charge or can be powered by tinier, lighter batteries. Class D amplifiers have been used for decades in industrial and medical applications where high efficiency is key. They have been applied with great success in devices as small as hearing aids and as large as controllers for hefty motors and electromagnets. They blossomed as a significant force in high-fidelity audio a few years ago, when Class D power amplifier chips were released by companies like Tripath Technology, Texas Instruments, and Cirrus Logic in the United States; Philips and STMicroelectronics (partnering with ApogeeDDX) in Europe; and Sanyo (partnering with Bang & Olufsen) in Japan. More recently, Class D amps have expanded beyond the hi-fi niche, showing up in MP3 players, portable CD players, laptop computers, cellphones, even personal digital assistants (PDAs). At the same time, they have been making forays into the world of home audio in the form of products based on those new chips. Notable entries include amplifiers from Bel Canto Design Ltd. (Minneapolis, Minn.) and PS Audio (Boulder, Colo.).

Optical Networking and Dense Wavelength Division Multiplexing


This paper deals with the twin concepts of optical networking and dense wavelength division multiplexing. It describes the various optical network architectures and the components of an all-optical network, such as optical amplifiers, optical add/drop multiplexers and optical splitters. Important optical networking concepts like wavelength routing and wavelength conversion are explained in detail. Finally, the paper deals with industry-related issues: the gap between research and industry, the current and projected market for optical networking and DWDM equipment, and future directions of research in this field.

INTRODUCTION
One of the major issues in the networking industry today is the tremendous demand for more and more bandwidth. Before the introduction of optical networks, the limited availability of fibers had become a big problem for network providers. However, with the development of optical networks and the use of Dense Wavelength Division Multiplexing (DWDM) technology, a new and probably very crucial milestone is being reached in network evolution. The existing SONET/SDH network architecture is better suited to voice traffic than to today's high-speed data traffic, and upgrading it to handle such traffic is very expensive; hence the need for an intelligent all-optical network. Such a network will bring intelligence and scalability to the optical domain by combining the intelligence and functional capability of SONET/SDH, the tremendous bandwidth of DWDM and innovative networking software to spawn a variety of optical transport, switching and management products.

Optical Networking
Optical networks are high-capacity telecommunications networks based on optical technologies and components that provide routing, grooming and restoration at the wavelength level as well as wavelength-based services. The origin of optical networks is linked to Wavelength Division Multiplexing (WDM), which arose to provide additional capacity on existing fibers. The optical layer, whose standards are still being developed, will ideally be transparent to the SONET layer, providing restoration, performance monitoring and provisioning of individual wavelengths instead of electrical SONET signals. In essence, many network elements will be eliminated and the amount of electrical equipment reduced. Networks can be classified into three generations depending on the physical-level technology employed. First-generation networks use copper-based or microwave technologies, e.g., Ethernet and satellite links. In second-generation networks, these copper or microwave links are replaced with optical fibers; however, such networks still switch data in the electronic domain even though transmission is done in the optical domain. Finally, third-generation networks employ Wavelength Division Multiplexing technology and perform both the transmission and the switching of data in the optical domain. This has resulted in the availability of a tremendous amount of bandwidth, and the use of non-overlapping channels allows each channel to operate at peak speed.

1.2 Dense Wavelength Division Multiplexing (DWDM)
Dense Wavelength Division Multiplexing (DWDM) is a fiber-optic transmission technique that multiplexes many different wavelength signals onto a single fiber, so each fiber carries a set of parallel optical channels, each using a slightly different light wavelength. It employs light wavelengths to transmit data parallel-by-bit or serial-by-character. DWDM is a very crucial component of optical networks that allows the transmission of data (voice, video, IP, ATM and SONET/SDH) over the optical layer. Hence, with the development of WDM technology, the optical layer provides the only means for carriers to integrate the diverse technologies of their existing networks into one physical infrastructure. For example, even if a carrier operates both ATM and SONET networks, with DWDM it is not necessary for the ATM signal to be multiplexed up to the SONET rate to be carried on the DWDM network. Carriers can therefore quickly introduce ATM or IP without having to deploy an overlay network for multiplexing.
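As a rough illustration of how a DWDM system packs parallel channels onto one fiber, the short Python sketch below lists the centre wavelengths of channels spaced 100 GHz apart around the commonly used 193.1 THz anchor frequency; the channel count and spacing chosen here are illustrative assumptions, not figures taken from this paper.

# Sketch: centre wavelengths of DWDM channels on a 100 GHz grid.
# The anchor frequency (193.1 THz) follows the common ITU convention;
# the channel count below is just an example.

C = 299_792_458.0  # speed of light, m/s

def channel_wavelengths_nm(num_channels: int, spacing_ghz: float = 100.0,
                           anchor_thz: float = 193.1) -> list[float]:
    """Return the centre wavelength (nm) of each channel above the anchor."""
    wavelengths = []
    for n in range(num_channels):
        f_hz = (anchor_thz * 1e12) + n * spacing_ghz * 1e9
        wavelengths.append(C / f_hz * 1e9)  # metres -> nanometres
    return wavelengths

if __name__ == "__main__":
    for i, wl in enumerate(channel_wavelengths_nm(8)):
        print(f"channel {i}: {wl:.2f} nm")  # ~1552.5 nm down to ~1546.9 nm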

Optical Burst Switching


Optical burst switching (OBS) is a promising solution for all-optical WDM networks. It combines the benefits of optical packet switching and wavelength routing while taking into account the limitations of current all-optical technology. In OBS, the user data is collected at the edge of the network, sorted based on destination address, and grouped into variable-sized bursts. Prior to transmitting a burst, a control packet is created and immediately sent toward the destination in order to set up a bufferless optical path for its corresponding burst. After an offset delay time, the data burst itself is transmitted without waiting for a positive acknowledgement from the destination node. The OBS framework has been widely studied in the past few years because it achieves high traffic throughput and high resource utilization.

Introduction:
Optical communication has been used for a long time and became much more popular with the invention of wavelength-division multiplexing (WDM). Current WDM works over point-to-point links, where optical-to-electrical-to-optical (OEO) conversion is required at each step. The elimination of OEO conversion in all-optical networks (AONs) allows for unprecedented transmission rates. AONs can further be categorized as wavelength-routed networks (WRNs), optical burst switched networks (OBSNs), or optical packet switched networks (OPSNs). Here we discuss optical burst switching (OBS). In OBS, data is transported in variable-sized units called bursts. Due to the great variability in the duration of bursts, the OBS network can be viewed as lying between OPSNs and WRNs. That is, when all burst durations are very short, equal to the duration of an optical packet, an OBSN can be seen as resembling an OPSN; on the other hand, when all the burst durations are extremely long, the OBSN may resemble a WRN. In OBS there is a strong separation between the data and control planes, which allows for greater network manageability and flexibility. In addition, its dynamic nature leads to high network adaptability and scalability, which makes it quite suitable for the transmission of bursty traffic. In general, the OBS network consists of interconnected core nodes that transport data from various edge users. The users consist of an electronic router and an OBS interface, while the core OBS nodes require an optical switching matrix, a switch control unit, and routing and signaling processors. OBS has received considerable attention in the past few years, and various solutions have been proposed and analyzed in an attempt to improve its performance. Here we describe the various OBS architectures by grouping the material logically per OBS design parameter.

Burst aggregation:
OBS collects upper-layer traffic, sorts it based on destination address, and aggregates it into variable-size bursts. The exact algorithm for creating the bursts can greatly impact the overall network operation, because it allows the network designers to control the burst characteristics and therefore shape the burst arrival traffic. The burst assembly algorithm has to consider a preset timer and maximum and minimum burst lengths. The burst aggregation algorithm may use bit-padding and the differentiation of class traffic, and may create classes of service by varying the preset timers and maximum/minimum burst sizes.

One of the most interesting benefits of burst aggregation is that it shapes the traffic by reducing the degree of self-similarity, making it less bursty in comparison to the flow of the original higher-layer packets. Traffic is considered bursty if busy periods with a large number of arrivals are followed by long idle periods. The term self-similar traffic refers to an arrival process that exhibits burstiness when viewed at varying time scales: milliseconds, seconds, minutes, hours, even days and weeks. Self-similar traffic is characterized by longer queuing delays and therefore degrades network performance. Reducing self-similarity is thus a desirable feature of the burst assembly process, and studies have concluded that traffic is indeed less self-similar after assembly.
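The burst assembly behaviour described above (sort packets by destination, close a burst when a preset timer fires or a maximum size is reached, and pad bursts shorter than the minimum) can be sketched in a few lines of Python. The threshold values, class names and method names below are assumptions for illustration only, not part of any standard OBS implementation.

# Minimal sketch of a timer- and size-based burst assembly algorithm.
import time
from dataclasses import dataclass, field

@dataclass
class Burst:
    destination: str
    packets: list = field(default_factory=list)
    size_bytes: int = 0

class BurstAssembler:
    def __init__(self, max_burst_bytes=64_000, min_burst_bytes=8_000, timeout_s=0.005):
        self.max_burst_bytes = max_burst_bytes    # upper bound on burst length
        self.min_burst_bytes = min_burst_bytes    # pad up to this if the timer fires early
        self.timeout_s = timeout_s                # preset assembly timer
        self.pending = {}                         # destination -> (Burst, first-arrival time)

    def add_packet(self, destination, payload: bytes):
        """Sort an upper-layer packet by destination; return a burst when one is full."""
        burst, t0 = self.pending.get(destination, (Burst(destination), time.monotonic()))
        burst.packets.append(payload)
        burst.size_bytes += len(payload)
        self.pending[destination] = (burst, t0)
        if burst.size_bytes >= self.max_burst_bytes:
            return self._emit(destination)
        return None

    def poll_timers(self):
        """Emit any bursts whose preset timer has expired, bit-padding short ones."""
        ready = []
        now = time.monotonic()
        for dest, (burst, t0) in list(self.pending.items()):
            if now - t0 >= self.timeout_s:
                if burst.size_bytes < self.min_burst_bytes:
                    burst.packets.append(b"\x00" * (self.min_burst_bytes - burst.size_bytes))
                    burst.size_bytes = self.min_burst_bytes
                ready.append(self._emit(dest))
        return ready

    def _emit(self, destination):
        burst, _ = self.pending.pop(destination)
        return burst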

Bluetooth Based Smart Sensor Networks

Definition
The communications capability of devices and continuous, transparent information routes are indispensable components of future-oriented automation concepts. Communication is increasing rapidly in the industrial environment, even at the field level. In any industry the process can be realized through sensors and controlled through actuators. In Distributed Control Systems (DCS), the process is monitored from the central control room by receiving signals through a pair of wires from each field device. With the advent of networking concepts, the cost of wiring is saved by networking the field devices. The latest trend, however, is the elimination of wires altogether, i.e., wireless networks. Wireless sensor networks are networks of small devices equipped with sensors, a microprocessor and wireless communication interfaces. In 1994, Ericsson Mobile Communications, the global telecommunication company based in Sweden, initiated a study to investigate the feasibility of a low-power, low-cost radio interface and to find a way to eliminate cables between devices. The engineers at Ericsson named the new wireless technology "Bluetooth" to honour the 10th-century king of Denmark, Harald Bluetooth (940 to 985 A.D.). The goals of Bluetooth are unification and harmony as well, specifically enabling different devices to communicate through a commonly accepted standard for wireless connectivity.

BLUETOOTH
Bluetooth operates in the unlicensed ISM band at 2.4 GHz and uses a frequency-hopping spread-spectrum technique. A typical Bluetooth device has a range of about 10 meters, which can be extended to 100 meters. The communication channel supports a total bandwidth of 1 Mb/s. A single connection supports a maximum asymmetric data transfer rate of 721 kbps, with a maximum of three channels.

BLUETOOTH NETWORKS
In Bluetooth, a piconet is a collection of up to 8 devices that frequency-hop together. Each piconet has one master, usually the device that initiated establishment of the piconet, and up to 7 slave devices. The master's Bluetooth address is used to define the frequency-hopping sequence, and the slave devices use the master's clock to synchronize their own clocks so that they can hop simultaneously. When a device wants to establish a piconet, it has to perform an inquiry to discover other Bluetooth devices in range. The inquiry procedure is defined in such a way as to ensure that two devices will, after some time, visit the same frequency at the same time; when that happens, the required information is exchanged and the devices can use the paging procedure to establish a connection. When more than 7 devices need to communicate, there are two options. The first is to put one or more devices into the park state. Bluetooth defines three low-power modes: sniff, hold and park. When a device is in the park mode it disassociates from the piconet, but still maintains timing synchronization with it. The master of the piconet periodically broadcasts beacons to invite the slave to rejoin the piconet or to allow the slave to request to rejoin. The slave can rejoin the piconet only if there are fewer than seven slaves already in the piconet; if not, the master has to 'park' one of the active slaves first.

All these actions cause delay, which for some applications can be unacceptable, e.g., process control applications that require an immediate response from the command centre (central control room). A scatternet consists of several piconets connected by devices participating in multiple piconets. These devices can be slaves in all piconets, or master in one piconet and slave in the others. Using scatternets, higher throughput is available and multi-hop connections between devices in different piconets are possible. However, a unit can communicate in only one piconet at a time, so it jumps from piconet to piconet depending upon the channel parameters.
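The piconet membership rule just described (one master, at most seven active slaves, extra devices moved to the park state) can be illustrated with a small Python sketch. The class and method names below are invented for this illustration and do not correspond to any Bluetooth API.

class Piconet:
    MAX_ACTIVE_SLAVES = 7

    def __init__(self, master_addr: str):
        self.master_addr = master_addr     # master's address defines the hop sequence
        self.active_slaves = []            # currently hopping with the master
        self.parked_slaves = []            # synchronized but not active

    def join(self, slave_addr: str):
        """Admit a slave; park it if the active set is already full."""
        if len(self.active_slaves) < self.MAX_ACTIVE_SLAVES:
            self.active_slaves.append(slave_addr)
        else:
            self.parked_slaves.append(slave_addr)

    def unpark(self, slave_addr: str):
        """Bring a parked slave back; the master parks an active one first if full."""
        if slave_addr not in self.parked_slaves:
            return
        if len(self.active_slaves) >= self.MAX_ACTIVE_SLAVES:
            self.parked_slaves.append(self.active_slaves.pop(0))  # park the oldest active slave
        self.parked_slaves.remove(slave_addr)
        self.active_slaves.append(slave_addr)

if __name__ == "__main__":
    net = Piconet("master-00")
    for i in range(9):
        net.join(f"sensor-{i:02d}")
    print(len(net.active_slaves), "active,", len(net.parked_slaves), "parked")  # 7 active, 2 parked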

Laser Communications

Definition
Laser communications offer a viable alternative to RF communications for inter-satellite links and other applications where high-performance links are a necessity. High data rate, small antenna size, narrow beam divergence, and a narrow field of view are characteristics of laser communications that offer a number of potential advantages for system design. Lasers have been considered for space communications since their realization in 1960. Specific advancements were needed in component performance and system engineering, particularly for space-qualified hardware. Advances in system architecture, data formatting and component technology over the past three decades have made laser communications in space not only viable but also an attractive approach for inter-satellite link applications. Information-transfer demands are driving the requirements to higher data rates, matched by rapid growth in laser cross-link technology, global development activity, and increased hardware and design maturity. Most important in space laser communications has been the development of a reliable, high-power, single-mode laser diode that can serve as a directly modulated laser source. This technological advance gives the space laser communication system designer the flexibility to design very lightweight, high-bandwidth, low-cost communication payloads for satellites whose launch costs are a very strong function of launch weight. This feature substantially reduces blockage of the fields of view of the most desirable areas on satellites. The smaller antennas, with diameters typically less than 30 centimeters, create less momentum disturbance to any sensitive satellite sensors. Fewer on-board consumables are required over the long lifetime because there are fewer disturbances to the satellite compared with heavier and larger RF systems. The narrow beam divergence affords interference-free and secure operation. Laser communication systems offer many advantages over radio frequency (RF) systems. Most of the differences between laser communication and RF arise from the very large difference in wavelength: RF wavelengths are thousands of times longer than those at optical frequencies. This high ratio of wavelengths leads to some interesting differences between the two systems. First, the beamwidth attainable with the laser communication system is narrower than that of the RF system by the same ratio at the same antenna diameter (the telescope of the laser communication system is frequently referred to as an antenna). For a given transmitter power level, the laser beam is brighter at the receiver by the square of this ratio, due to the very narrow beam that exits the transmit telescope. Taking advantage of this brighter beam, or higher gain, permits the laser communication designer to come up with a system that has a much smaller antenna than the RF system and, further, needs to transmit much less power than the RF system for the same received power. However, since it is much harder to point, acquisition of the other satellite terminal is more difficult. Some advantages of laser communications over RF are smaller antenna size, lower weight, lower power and minimal integration impact on the satellite. Laser communication is capable of much higher data rates than RF. The laser beam width can be made as narrow as the diffraction limit of the optics allows: beam width = 1.22 times the wavelength of light divided by the diameter of the output beam aperture. The antenna gain is proportional to the reciprocal of the beam width squared.
To achieve the potential diffraction-limited beam width, a single-mode, high-beam-quality laser source is required, together with very high-quality optical components throughout the transmitting subsystem. The achievable antenna gain is restricted not only by the laser source but also by any of the optical elements. In order to communicate, adequate power must be received by the detector to distinguish the signal from the noise. Laser power, transmitter and optical system losses, pointing-system imperfections, transmitter and receiver antenna gains, receiver losses and receiver tracking losses are the factors that establish the received power. The required optical power is determined by the data rate, detector sensitivity, modulation format, noise and detection method.
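As a hedged worked example of the two relations quoted above, the sketch below evaluates the diffraction-limited divergence and the corresponding antenna gain for an assumed 25 cm transmit telescope and a 1550 nm laser. Both numbers are illustrative choices, not values from the text, and the gain expression G = 16 / theta^2 is the usual narrow-beam approximation of "gain proportional to the reciprocal of the beam width squared".

import math

def beam_divergence_rad(wavelength_m: float, aperture_diameter_m: float) -> float:
    """Full divergence angle to the first diffraction null, theta ~ 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_diameter_m

def antenna_gain_db(divergence_rad: float) -> float:
    """Approximate on-axis gain for a narrow conical beam, G ~ 16 / theta^2."""
    gain_linear = 16.0 / (divergence_rad ** 2)
    return 10.0 * math.log10(gain_linear)

if __name__ == "__main__":
    theta = beam_divergence_rad(1550e-9, 0.25)            # 25 cm telescope, 1550 nm laser
    print(f"divergence ~ {theta * 1e6:.1f} microradians")  # ~7.6 urad
    print(f"gain ~ {antenna_gain_db(theta):.0f} dB")       # roughly 114 dB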

CorDECT

Definition
This report describes a new wireless local loop system for rapid expansion of telecom services, developed under a joint project involving Indian scientists from the Indian Institute of Technology, Chennai, Midas Technology and Analog Devices Inc., USA. The new system, called corDECT, is based on a microcellular architecture and uses a modest bandwidth of 20 MHz to provide voice, fax, and data communication in low as well as very high subscriber-density environments. The high capacity at a modest bandwidth is made possible, without prior frequency planning, through a completely decentralized channel allocation procedure called dynamic channel selection. This technology provides cost-effective, simultaneous, high-quality voice and data connectivity: voice communication using 32 kbps ADPCM and Internet connectivity at 35/70 kbps. This report discusses the relevance of corDECT in the context of current trends towards wireless systems, contrasts the microcellular architecture of corDECT with existing wireless systems based on macrocellular architectures, and outlines its market potential.

INTRODUCTION/OVERVIEW
A new wireless local loop system to eliminate the physical connections between telephone exchanges and subscribers has just hit the market after a two-year-long joint research effort by Indian and US engineers. The new system, called corDECT, is said to offer significant cost savings, rapid installation, and improved reliability over traditional connections based on copper cables. It is based on a microcellular architecture that is said to offer cost and operational advantages over wireless/mobile telephone systems based on macrocellular architectures. The corDECT system is based on the European Digital Enhanced Cordless Telecommunications (DECT) standard, which uses a modest bandwidth of 20 MHz in the 1880-1900 MHz range and does not require the prior frequency planning necessary in conventional mobile cellular systems. The corDECT technology uses relatively low-cost, easy-to-install subsystems and can serve relatively high subscriber-density environments of several thousand subscribers per square kilometer. Four Indian companies have bought the technology for domestic manufacture. Its developers believe there is a large market potential in the Asia-Pacific region and in other developing countries. This report will describe the corDECT wireless local loop system and its subsystems and compare the microcellular architecture of corDECT with the macrocellular architectures employed in many wireless telephone systems.

Need for a wireless local loop system:
The telephone and the Internet have changed the way we deliver and receive information and the way we use it for business, entertainment, planning and living. Unfortunately, only 15 percent of the world's population is believed to have access to the Internet, and more than 80 percent of the people in the world are believed to have never even heard a dial tone. This digital divide - between the information 'haves' and 'have-nots' - is widening. In India, the problem is acute: among 1,000 million people, there are fewer than 35 million phone connections and around two million Internet connections. There is an urgent need to bridge the gap. The biggest reason for this is high cost. The existing per-line cost of a telephone network is Rs. 30,000, which most people in India cannot afford; this has to be reduced by a factor of three to four. In order to reduce the cost, we must consider the factors responsible for the overall system cost.
The telecom network consists of two components.

1. A backbone network consisting of routers, switches and the interconnection of exchanges and routers, including intercity and international connections.
2. An access network that includes the connection of the exchange to the office and home.

Fortunately, the cost of the backbone network is reducing rapidly each successive year with improvements in technology. In order to reduce overall costs, there is a need to focus on the cost of the access network, that is, the cost of the local loop. By reducing this cost, it is possible to reduce the overall per-line cost by Rs. 12,000 to Rs. 16,000. For nearly a century, these connections have relied on pairs of copper cables. But laying out wired local loops has been an expensive, time-consuming process that also requires detailed planning and intensive labor. According to a projection by the International Telecommunications Union, developing countries alone will require 35 million pair-kilometers of copper cable by the turn of the century just to maintain existing waiting lists. The increasing cost of copper, the operational problems associated with wired lines, and the demand for mobility are factors fueling the move towards wireless local loops.

Importance of corDECT in the present scenario:
Internet connections today, for the most part, use a modem to connect a computer to a telephone line. In this case, Internet traffic passes through the telecom network, which overloads it. It is necessary to develop an access network technology that separates Internet data from voice and prevents it from interfering with the telephone network. This would also make it possible to use the telephone and the Internet on the same line simultaneously.
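The dynamic channel selection procedure mentioned above is not specified in detail in this report; the sketch below is only a minimal illustration of the decentralized idea, in which the radio scans the candidate channels and picks the least-interfered one, so no prior frequency planning is needed. The function name and the RSSI figures are invented for the example.

def select_channel(rssi_dbm_per_channel: dict[int, float]) -> int:
    """Return the channel with the lowest measured interference power."""
    return min(rssi_dbm_per_channel, key=rssi_dbm_per_channel.get)

if __name__ == "__main__":
    # interference measured by one base station on ten candidate channels (made-up values)
    scan = {ch: -100.0 + 3.0 * ((ch * 7) % 10) for ch in range(10)}
    print("selected channel:", select_channel(scan))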

E-Intelligence

Definition
As corporations move rapidly toward deploying e-business systems, the lack of business intelligence facilities in these systems prevents decision-makers from exploiting the full potential of the Internet as a sales, marketing, and support channel. To solve this problem, vendors are rapidly enhancing their business intelligence offerings to capture the data flowing through e-business systems and integrate it with the information that traditional decision-making systems manage and analyze. These enhanced business intelligence, or e-intelligence, systems may provide significant business benefits to traditional brick-and-mortar companies as well as new dot-com ones as they build e-business environments. Organizations have been successfully using decision-processing products, including data warehouse and business intelligence tools, for the past several years to optimize day-to-day business operations and to leverage enterprise-wide corporate data for a competitive advantage. The advent of the Internet and corporate extranets has propelled many of these organizations toward the use of e-business applications to further improve business efficiency, decrease costs, increase revenues and compete with the new dot-com companies appearing in the marketplace. The explosive growth in the use of e-business has led to the need for decision-processing systems to be enhanced to capture and integrate business information flowing through e-business systems, and to apply business intelligence techniques to this captured business information. These enhanced decision-processing systems, or e-intelligence systems, have the potential to provide significant business benefits to both traditional bricks-and-mortar companies and new dot-com companies as they begin to exploit the power of e-business processing.

E-INTELLIGENCE FOR BUSINESS
E-intelligence systems provide internal business users, trading partners, and corporate clients rapid and easy access to the e-business information, applications, and services they need in order to compete effectively and satisfy customer needs. They offer many business benefits to organizations exploiting the power of the Internet. For example, e-intelligence systems give the organization the ability to:
1. Integrate e-business operations into the traditional business environment, giving business users a complete view of all corporate business operations and information.
2. Help business users make informed decisions based on accurate and consistent e-business information that is collected and integrated from e-business applications. This information helps business users optimize Web-based offerings (products offered, pricing and promotions, service and support, and so on) to match marketplace requirements and analyze business performance with respect to competitors and the organization's business performance objectives.
3. Assist e-business applications in profiling and segmenting e-business customers. Based on this information, businesses can personalize their Web pages and the products and services they offer.
4. Extend the business intelligence environment outside the corporate firewall, helping the organization share internal business information with trading partners. Sharing this information lets it optimize the product supply chain to match the demand for products sold through the Internet and minimizes the costs of maintaining inventory.
5. Extend the business intelligence environment outside the corporate firewall to key corporate clients, giving them access to business information about their accounts. With this information, clients can analyze and tune their business relationships with other organizations, improving client service and satisfaction.
6. Link e-business applications with business intelligence and collaborative processing applications, allowing internal and external users to move seamlessly among different systems.

INTELLIGENT E-SERVICES
The building blocks of new, sophisticated, intelligent data warehousing applications are now intelligent e-services. An e-service is any asset made available via the Internet to drive new revenue streams or create new efficiencies. What makes e-services valuable is not only the immediacy of the service, but also the intelligence behind it. While traditional data warehousing meant simple business rules, simple queries and proactive work to take advantage of the Web, e-intelligence is much more sophisticated and enables the Web to work on our behalf. Combining intelligence with e-services promises exciting business opportunities.

White LED
Until recently, though, the price of an LED lighting system was too high for most residential use. With sales rising and prices steadily decreasing, it has been said that whoever makes the best white LED will open a goldmine. White LED lighting has been used for years by the RV and boating crowd, running off direct-current (DC) battery systems. It then became popular in off-the-grid houses powered by photovoltaic cells. It used to be that white LED light was possible only with "rainbow" groups of three LEDs -- red, green, and blue -- controlling the current to each to yield an overall white light. Now a blue indium gallium nitride chip with a phosphor coating is used to create the wavelength shift necessary to emit white light from a single diode. This process is much less expensive for the amount of light generated. Each diode is about 1/4 inch and consumes about ten milliamps (a tenth of a watt). Lamps come in various arrangements of diodes on a circuit board. Standard arrays are three, six, 12, or 18 diodes, or custom sizes -- factories can incorporate these into custom-built downlights, sconces and surface-mounted fixtures. With an inexpensive transformer, they run on standard 120-volt alternating current (AC), albeit with a slight (about 15% to 20%) power loss. They are also available as screw-in lamps to replace incandescent bulbs. A 1.2-watt white LED light cluster is as bright as a 20-watt incandescent lamp.

Carbon Nanotube Flow Sensors

Introduction
Direct generation of measurable voltages and currents is possible when a fluid flows over a variety of solids, even at the modest speed of a few meters per second. In the case of gases, the underlying mechanism is an interesting interplay of Bernoulli's principle and the Seebeck effect: pressure differences along streamlines give rise to temperature differences across the sample, and these in turn produce the measured voltage. The electrical signal is quadratically dependent on the Mach number M and proportional to the Seebeck coefficient of the solid. This discovery was made by Professor Ajay Sood and his student Shankar Ghosh of IISc Bangalore; they had previously discovered that the flow of liquids, even at low speeds ranging from 10^-1 m/s to 10^-7 m/s (that is, over six orders of magnitude), through bundles of atomic-scale straw-like tubes of carbon known as nanotubes, generated tens of microvolts across the tubes in the direction of the flow of the liquid. The results of the experiments done by Professor Sood and Ghosh show that gas flow sensors and energy conversion devices can be constructed based on the direct generation of electrical signals. The experiment was done on single-wall carbon nanotubes (SWNTs). The effect is not confined to nanotubes alone; it is also observed in doped semiconductors and metals. The observed effect immediately suggests a technology application, namely gas flow sensors that measure gas velocities from the electrical signal generated. Unlike existing gas flow sensors, which are based on heat transfer from an electrically heated sensor to the fluid, a device based on this newly discovered effect would be an active gas flow sensor that gives a direct electrical response to the gas flow. One possible application is in the field of aerodynamics: several local sensors could be mounted on the aircraft body or aerofoil to measure streamline velocities and the effect of drag forces. Energy conversion devices can also be constructed based on the direct generation of electrical signals, i.e., if one is able to cascade millions of these tubes, electrical energy can be produced. As the state of the art moves towards atomic scales, sensing presents a major hurdle. The discovery of carbon nanotubes by Sumio Iijima at NEC, Japan in 1991 has provided new channels towards this end. A carbon nanotube (CNT) is a sheet of graphene which has been rolled up and capped with fullerene-like ends. Nanotubes are exceptionally strong, have excellent thermal conductivity, are chemically inert and have interesting electronic properties which depend on their chirality. The main reason for the popularity of CNTs is their unique properties: nanotubes are very strong, mechanically robust, and have a high Young's modulus and aspect ratio. These properties have been studied experimentally as well as with numerical tools. The band gap of CNTs is in the range of 0~100 meV, and hence they can behave as both metals and semiconductors. Many factors, such as the presence of a chemical species, mechanical deformation and magnetic fields, can cause significant changes in the band gap, which consequently affect the conductance of the CNTs. These unique electronic properties, coupled with their strong mechanical strength, are exploited in various sensors. The recent discovery by two Indian scientists of a new property, flow-induced voltage in nanotubes, has added another dimension to micro-sensing devices.
CNT Electronic Properties
Electrically, CNTs are either semiconducting or metallic in nature, which is determined by the type of nanotube, its chiral angle, its diameter and the relation between the tube indices. The description of their electronic structure and properties is based on the two-dimensional structure of graphene. For instance, if the tube indices n and m satisfy the condition n - m = 3q, where q is an integer, the tube behaves as a metal, in the sense that it has zero band-gap energy. In the armchair case (where n = m) the bands cross at the Fermi level, i.e., the band gap vanishes. Otherwise the tube is expected to behave as a semiconductor. Table 1 summarizes the observations of experiments done on nanotubes with the scanning tunneling microscope (STM) and scanning tunneling spectroscopy (STS).
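The chirality rule just stated translates directly into a small classification function; the sketch below applies the n - m = 3q condition, treating armchair tubes (n = m) as a special case. The example index pairs are arbitrary.

def cnt_character(n: int, m: int) -> str:
    """Classify a carbon nanotube from its chiral indices (n, m)."""
    if n == m:
        return "armchair (metallic)"
    if (n - m) % 3 == 0:
        return "metallic (zero band gap)"
    return "semiconducting"

if __name__ == "__main__":
    for indices in [(10, 10), (9, 0), (10, 0), (13, 6)]:
        print(indices, "->", cnt_character(*indices))
    # (10,10) armchair, (9,0) metallic, (10,0) and (13,6) semiconducting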

Fluid Flow Through Carbon Nanotubes

Recently there has been extensive study of the effect of fluid flow through nanotubes, which is part of an ongoing effort worldwide to have a representative in the microscopic nano-world of all the sensing elements in our present macroscopic world. The Indian Institute of Science has made a major contribution in this regard. It was theoretically predicted that the flow of a liquid medium would lead to the generation of a flow-induced voltage, and this was experimentally established by two Indian scientists at IISc. Only the effect of liquids had been theoretically investigated and experimentally established; the effect of gas flow over nanotubes was not investigated until A. K. Sood and Shankar Ghosh of IISc studied it experimentally and provided a theoretical explanation for it. The same effect as in the case of liquids was observed, but for an entirely different reason. These results have interesting applications in biotechnology and can be used in sensing applications. Micro devices can be powered by exploiting these properties.

Cellular Positioning

Introduction
Location-related products are the next major class of value-added services that mobile network operators can offer their customers. Not only will operators be able to offer entirely new services to customers, but they will also be able to offer improvements on current services such as location-based prepaid or information services. The deployment of location-based services is being spurred by several factors:

Competition. The need to find new revenue-enhancing and differentiating value-added services has been increasing and will continue to increase over time.

Regulation. The Federal Communications Commission (FCC) of the USA adopted a ruling in June 1996 (Docket no. 94-102) that requires all mobile network operators to provide location information on all calls to "911", the emergency services. The FCC mandated that by 1 October 2001, all wireless 911 calls must be pinpointed within 125 meters, 67% of the time. On December 24, 1998, the FCC amended its ruling to allow terminal-based solutions as well as network-based ones (CC Docket No. 94-102, Waivers for Handset-Based Approaches). There are a number of regulations that location-based services must comply with, not least of all to protect the privacy of the user. Mobile Streams believes that it is essential to comply with all such regulations fully. However, such regulations are only the starting point for such services: there are possibilities for a high degree of innovation in this new market that should not be overlooked.

Technology. There have been continuous improvements in handset, network and positioning technologies. For example, in 1999, Benefon, a Finnish GSM and NMT terminal vendor, launched the ESC! GSM/GPS mapping phone.

Needs of Cellular Positioning
There are a number of reasons why it is useful to be able to pinpoint the position of a mobile telephone, some of which are described below.

Location-sensitive billing. Different tariffs can be provided depending upon the position of the cell phone. This allows an operator without a copper-cable-based PSTN to offer competitive rates for calls from home or office.

Increased subscriber safety. A significant number of emergency calls, such as US 911 calls, come from cell phones, and in most cases the caller cannot provide accurate information about their position. As a real-life example, take the following incident: in February 1997 a person became stranded along a highway during a winter blizzard (Associated Press, 1997). She used her cellular phone to call for help but could not provide her location due to white-out conditions. To identify the caller's approximate position, the authorities asked her to tell them when she could hear the search plane flying above. Forty hours elapsed from the time of her first call before a ground rescue team reached her. An automatic positioning system would have allowed rescuers to reach her far sooner.

Positioning Techniques
There are a variety of ways in which position can be derived from the measurement of signals, and these can be applied to any cellular system including GSM. The important measurements are the Time of Arrival (TOA), the Time Difference of Arrival (TDOA), the Angle of Arrival (AOA) and the carrier phase. Each of these measurements puts the object to be positioned on a particular locus. Multiple measurements give multiple loci, and the point of their intersection gives the position. If the density of base stations is such that more measurements can be made than are required, then a least-squares approach can be used. If the measurements are too few in number, the loci will intersect at more than one point, resulting in an ambiguous position estimate. In the following discussion we assume that the mobile station and base stations lie in the same plane. This is approximately true for most networks unless the geography includes hilly topology or high-rise buildings.

Time of Arrival (TOA)
In a remote positioning system this involves the measurement of the propagation time of a signal from the mobile phone to a base station. Each measurement fixes the position of the mobile on a circle. With two stations there will be two circles, and they can intersect in a maximum of two points. This gives rise to an ambiguity, which is resolved by including a priori information about the trajectory of the mobile phone or by making a propagation-time measurement to a third base station. The TOA measurement requires exact time synchronization between the base stations, and the receiver should have an accurate clock so that it knows the exact time of transmission and an exact TOA measurement can be made.
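As a hedged illustration of the TOA approach described above, the sketch below converts measured propagation times into range circles around each base station and fits the mobile's position to them with a simple damped gradient-descent least-squares loop. The base-station coordinates and the solver details are invented for the example, not taken from the text.

import math

C = 299_792_458.0  # propagation speed, m/s

def toa_position(base_stations, toas_s, iterations=50):
    """Least-squares fit of (x, y) to the ranges r_i = c * t_i (damped gradient descent)."""
    ranges = [C * t for t in toas_s]
    x, y = 0.0, 0.0                         # crude initial guess at the origin
    for _ in range(iterations):
        grad_x = grad_y = 0.0
        for (bx, by), r in zip(base_stations, ranges):
            d = math.hypot(x - bx, y - by) or 1e-9
            err = d - r                     # residual between predicted and measured range
            grad_x += err * (x - bx) / d
            grad_y += err * (y - by) / d
        x -= 0.5 * grad_x                   # damped step toward lower squared error
        y -= 0.5 * grad_y
    return x, y

if __name__ == "__main__":
    stations = [(0.0, 0.0), (3000.0, 0.0), (0.0, 3000.0)]   # metres
    true_pos = (1200.0, 800.0)
    toas = [math.hypot(true_pos[0] - bx, true_pos[1] - by) / C for bx, by in stations]
    print(toa_position(stations, toas))    # converges near (1200, 800)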

Iontophoresis

Iontophoresis is an effective and painless method of delivering medication to a localized tissue area by applying electrical current to a solution of the medication. The delivered dose depends on the current flowing and its duration. Iontophoresis is a recognized therapeutic method for delivering ionic compounds, i.e. drugs, into and through the skin by applying electrical current. It has proven to be a beneficial treatment for many localized skin disorders such as nail diseases, herpes lesions, psoriasis, eczema, and cutaneous T-cell lymphoma. The method has also been reported to be useful for topical anesthesia of the skin prior to cut-down for artificial kidney dialysis, insertion of tracheotomy tubes and infiltration of lidocaine into the skin prior to venipuncture. Treatment of various musculoskeletal disorders with anti-inflammatory agents has also been reported in the literature. Iontophoresis enhances the transdermal delivery of ionized drugs through the skin's outermost layer (the stratum corneum), which is the main barrier to drug transport. The absorption rate of the drug is increased; however, once the drug passes through the skin barrier, natural diffusion and circulation are required to shuttle the drug to its proper location. The mechanism by which iontophoresis works is based upon the knowledge that like electrical charges repel. Application of a positive current from an electrode to a solution applied to the skin surface will drive the positively charged drug ions away from the electrode and into the skin. Negatively charged ions behave in the same manner under a negative current.
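The statement that the delivered dose depends on the current and its duration is commonly expressed as a dose in milliampere-minutes; the sketch below simply multiplies the two. The target dose and current values are illustrative only and are not clinical guidance.

def dose_ma_min(current_ma: float, duration_min: float) -> float:
    """Delivered charge-based dose in mA-min."""
    return current_ma * duration_min

def time_for_dose(target_ma_min: float, current_ma: float) -> float:
    """Minutes of treatment needed to reach a target dose at a given current."""
    return target_ma_min / current_ma

if __name__ == "__main__":
    print(dose_ma_min(4.0, 10.0))        # 4 mA for 10 min -> 40 mA-min
    print(time_for_dose(40.0, 2.0))      # 2 mA would need 20 min for the same dose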

Dual Energy X-ray Absorptiometry

Definition
The basic principles of Dual Energy X-ray Absorptiometry are discussed in this presentation. DEXA is an instrumental technique used to measure bone mineral density (BMD), which is the widely accepted indicator of bone strength. The DEXA scanner is the most widely used modern electronic machine to diagnose osteoporosis, the thinning of bones. Since the human body is a heterogeneous system, a dual-energy rather than a single-energy X-ray source is necessary for scanning. The interaction of the sample with the X-ray beams results in a reduction, or attenuation, of the energy of the X-ray beam. The extent to which the photon energy is attenuated is a function of the initial energy of the X-ray photon, the mass per unit area (M) of the absorber material and the mass attenuation coefficient (U) of the absorber. For a given absorber material, U (which is a measure of the degree of attenuation) is constant at any given photon energy; U increases with the density of the absorber material and decreases with the energy of the X-ray beam. U can be used to calculate the mass per unit area (M) of a homogeneous absorber irradiated at a specific incident X-ray energy. Consider, for example, a one-square-centimeter area on a leg: the mass of bone and soft tissue 'below' this square represents the mass per unit area of the absorber, viz., the leg. For instance, if there are 100 grams of bone and soft tissue below this square, the mass per unit area (M) would be 100 g/cm^2. Knowledge of M for the human body components, especially bone, is important in determining the possibility of osteoporosis. The calculations of M for the various components of the body are discussed in detail. From knowledge of the mass attenuation coefficient (U) of the absorber and the energy of the incident X-ray beam (E0) and of the emerging beam (E), we can calculate M of a homogeneous absorber from the relationship connecting these properties. Dual Energy X-ray Absorptiometry (DEXA) is an instrumental technique used to measure bone mineral density (BMD) at sites that include the hip and spine, compared to SXA (Single Energy X-ray Absorptiometry), which measures only the wrist or heel bone. BMD is the widely accepted indicator of bone strength. DEXA (the whole-body scanner) uses low-dose X-rays to give information on bone content and density. It is currently the most widely used machine in the clinical setting to diagnose osteoporosis. DEXA is the most commonly used modern technique to determine bone density and hence bone strength. The DEXA results help to predict the patient's risk factors for osteoporosis. It is a fast, accurate, and relatively inexpensive technique, and it exposes the patient to a lower amount of radiation, so the risk is reduced to a great extent. Studies using DEXA scanning have shown that women with osteoporosis have substantially lower bone density measurements than normal, age-matched women. Bone mineral density is widely accepted as a good indicator of bone strength. Thus low values can be compared against standard bone density measurements and help predict a patient's risk for fracture based upon the DEXA scan measurements.
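The relationship referred to above is not reproduced in the text; the sketch below assumes the standard exponential attenuation (Beer-Lambert) form, E = E0 * exp(-U * M), which inverts to M = ln(E0 / E) / U. The numerical values used are illustrative only.

import math

def mass_per_unit_area(e_incident, e_emerging, mass_atten_coeff_cm2_per_g):
    """Mass per unit area M (g/cm^2) of a homogeneous absorber, M = ln(E0/E) / U."""
    return math.log(e_incident / e_emerging) / mass_atten_coeff_cm2_per_g

if __name__ == "__main__":
    # e.g. the transmitted beam is 40% of the incident beam and U = 0.2 cm^2/g
    print(round(mass_per_unit_area(1.0, 0.4, 0.2), 2))   # -> ~4.58 g/cm^2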

Pervasive Computing

Definition
Pervasive computing refers to embedding computers and communication in our environment. Pervasive computing provides an attractive vision for the future of computing. The idea behind pervasive computing is to make computing power disappear into the environment while always being there whenever needed; in other words, it means availability and invisibility. These invisible computers won't have keyboards or screens, but will watch us, listen to us and interact with us. Pervasive computing makes the computer operate in the messy and unstructured world of real people and real objects. Distributed devices in this environment must have the ability to dynamically discover and integrate other devices. The prime goal of this technology is to make human life more simple, safe and efficient by using the ambient intelligence of computers. Pervasive computing environments involve the interaction, coordination, and cooperation of numerous, casually accessible, and often invisible computing devices. These devices will connect via wired and wireless links to one another as well as to the global networking infrastructure to provide more relevant information and integrated services. Existing approaches to building distributed applications, including client/server computing, are ill suited to meet this challenge. They are targeted at smaller and less dynamic computing environments and lack sufficient facilities to manage changes in network configurations. Networked computing devices will proliferate in the user's landscape, being embedded in objects ranging from home appliances to clothing. Applications will have greater awareness of context, and thus will be able to provide more intelligent services that reduce the burden on users to direct and interact with applications. Many applications will resemble agents that carry out tasks on behalf of users by exploiting the rich sets of services available within computing environments. Mobile computing and communication is one of the major parts of the pervasive computing system. Here data and computing resources are shared among the various devices, and the coordination between these devices is maintained through communication, which may be wired or wireless. With the advent of Bluetooth and ad hoc networking technologies, wireless communication has overtaken its wired counterpart. The reduction in the size and cost of processor chips has made it possible to embed processors in every field of life; nowadays about 99% of the processors made are for embedded devices rather than PC applications. Voice and gesture recognition, along with steerable interfaces, will make the interaction with and use of these devices more user-friendly. Efficient security and privacy policies, along with power management, can enhance the performance of such systems.

Current Embedded Technology


Embedded technology is the process of introducing computing power to various appliances. These devices are intended to perform certain specific jobs, and the processors providing the computing power are designed in an application-oriented way. Computers are hidden in numerous information appliances which we use in our day-to-day life. These devices find their application in every segment of life, such as consumer electronics, avionics, biomedical engineering, manufacturing, process control, industrial applications, communication and defence. Based on their functionality and performance requirements, embedded systems are basically categorized as:

i. Stand-alone systems
ii. Real-time systems
iii. Networked systems
iv. Mobile devices

Passive Millimeter-Wave

Definition
Passive millimeter-wave (PMMW) imaging is a method of forming images through the passive detection of naturally occurring millimeter-wave radiation from a scene. Although such imaging has been performed for decades (or more, if one includes microwave radiometric imaging), new sensor technology in the millimeter-wave regime has enabled the generation of PMMW imagery at video rates and has renewed interest in this area. This interest is, in part, driven by the ability to form images during the day or night; in clear weather or in low-visibility conditions, such as haze, fog, clouds, smoke, or sandstorms; and even through clothing. This ability to see under conditions of low visibility that would ordinarily blind visible or infrared (IR) sensors has the potential to transform the way low-visibility conditions are dealt with. For the military, low visibility can become an asset rather than a liability. In the commercial realm, fog-bound airports could be eliminated as a cause for flight delays or diversions. For security concerns, imaging of concealed weapons could be accomplished in a nonintrusive manner with PMMW imaging. Like IR and visible sensors, a camera based on PMMW sensors generates easily interpretable imagery in a fully covert manner; no discernible radiation is emitted, unlike radar and lidar. However, like radar, PMMW sensors provide penetrability through a variety of low-visibility conditions (moderate or heavy rainfall is an exception). In addition, the underlying phenomenology that governs the formation of PMMW images leads to two important features. First, the signature of metallic objects is very different from natural and other backgrounds. Second, the clutter variability is much less in PMMW images than in other sensor images. Both of these characteristics lead to much easier automated target detection with fewer false alarms. The wide range of military imaging missions that would benefit from an imaging capability through low-visibility conditions, coupled with its inherent covertness, includes surveillance, precision targeting, navigation, aircraft landing, refueling in clouds, search and rescue, metal detection in a cluttered environment, and harbor navigation/surveillance in fog. Similarly, a number of civilian missions would benefit, such as commercial aircraft landing aid in fog, airport operations in fog, harbor surveillance, highway traffic monitoring in fog, and concealed weapons detection in airports and other locations. This article introduces the concept of PMMW imaging, describes the phenomenology that defines its performance, explains the technology advances that have made these systems a reality, and presents some of the missions in which these sensors can be used.

Overview of millimeter wave radiometry:


The regime of the electromagnetic spectrum where it is possible for humans to see is that part where the sun's radiance peaks (at about 6,000 K): the visible regime. In that regime, the human eye responds to different wavelengths of scattered light by seeing different colors. In the absence of sunlight, however, the natural emissions from Earth objects (at about 300 K) are concentrated in the IR regime. Advances in IR-sensor technology in the last 30 years have produced detectors sensitive in that frequency regime, making night vision possible. The exploitation of the millimeter-wave regime (defined to lie between 30 and 300 GHz, with corresponding wavelengths between 10 and 1 mm) follows as a natural progression in the quest to expand our vision. The great advantage of millimeter-wave radiation is that it can be used not only in day and night conditions, but also in fog and other poor visibility conditions that normally limit the "seeing" ability of both visual and IR sensors.
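The text above quotes the 30-300 GHz band and its corresponding 10-1 mm wavelengths; the short sketch below checks that correspondence and, as an additional standard radiometry relation not given in the text, evaluates the Rayleigh-Jeans spectral brightness of a roughly 300 K scene at an assumed 94 GHz imaging frequency.

import math

C = 299_792_458.0          # speed of light, m/s
K_B = 1.380649e-23         # Boltzmann constant, J/K

def wavelength_mm(frequency_ghz: float) -> float:
    """Free-space wavelength in millimetres, lambda = c / f."""
    return C / (frequency_ghz * 1e9) * 1e3

def rayleigh_jeans_brightness(temperature_k: float, frequency_ghz: float) -> float:
    """Spectral brightness 2*k*T*f^2/c^2 in W m^-2 Hz^-1 sr^-1 (Rayleigh-Jeans limit)."""
    f = frequency_ghz * 1e9
    return 2.0 * K_B * temperature_k * f * f / (C * C)

if __name__ == "__main__":
    print(wavelength_mm(30.0), wavelength_mm(300.0))   # ~10.0 mm and ~1.0 mm
    print(rayleigh_jeans_brightness(300.0, 94.0))      # brightness of a ~300 K scene at 94 GHz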

RAID

Definition
Information has become a commodity in today's world, and protecting that information has become mission critical. The Internet has helped push this information age forward. Popular websites process so much information, that any type of slowdown or downtime can mean the loss of millions of dollars. Clearly, just a bunch of hard disks won't be able to cut it anymore. So Redundant Array of Independent (or Inexpensive) Disks (RAID) was developed to increase the performance and reliability of data storage by spreading data across multiple drives. RAID technology has grown and evolved throughout the years to meet these ever-growing demands for speed and data security. A technique was developed to provide speed, reliability, and increased storage capacity using multiple disks, rather than single disk solutions. RAID takes multiple hard drives and allows them to be used as one large hard drive with benefits depending on the scheme or level of RAID being used. The better the RAID implementation, the more expensive it is. There is no one best RAID implementation. Some implementations are better than others depending upon the actual application. It used to be that RAID was only available in expensive server systems. However, with the advent of inexpensive RAID controllers, it seems it has pretty much reached the mainstream market.

The Array And Raid Controller Concept:


A drive array is a collection of hard disk drives that are grouped together. When we talk about RAID, there is often a distinction between physical drives and arrays and logical drives and arrays. Physical arrays can be divided or grouped together to form one or more logical arrays. These logical arrays can be divided into logical drives that the operating system sees. The logical drives are treated as single hard drives and can be partitioned and formatted accordingly. The RAID controller is what manages how the data is stored and accessed across the physical and logical arrays. It ensures that the operating system sees the logical drives only and need not worry about managing the underlying schema. As far as the system is concerned, it is dealing with regular hard drives. A RAID controller's functions can be implemented in hardware or software. Hardware implementations are better for RAID levels that require large amounts of calculations. With today's incredibly fast processors, software RAID implementations are more feasible, but the CPU still gets bogged down with large amounts of I/O. The basic concepts made use of in RAID are:

. Mirroring
. Parity
. ECC
. Exclusive OR
. Striping

Mirroring:
Mirroring involves having two copies of the same data on separate hard drives or drive arrays, so the data is effectively mirrored on another drive. The system writes data simultaneously to both hard drives. This is one of the two data-redundancy methods used in RAID to protect against data loss. The benefit is that when one hard drive or array fails, the system can still continue to operate, since there are two copies of the data. Downtime is minimal and data recovery is relatively simple: all you need to do is rebuild the data from the good copy. A RAID controller writes the same data blocks to each mirrored drive, which means that each drive or array holds the same information. We can add another level of complexity by introducing yet another technique called striping: if we have one striped array, we can mirror it at the same time on a second striped array. To set up mirroring, the number of drives has to be a multiple of two.
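Mirroring, as described above, keeps a complete second copy of every block. The parity and Exclusive-OR concepts listed earlier protect data differently; the sketch below illustrates the underlying XOR idea used in striped-with-parity RAID levels: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The block contents are arbitrary example values.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

if __name__ == "__main__":
    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"       # data blocks on three drives
    parity = xor_blocks([d0, d1, d2])            # parity stored on a fourth drive

    # drive holding d1 fails: rebuild its block from the remaining data and the parity
    rebuilt_d1 = xor_blocks([d0, d2, parity])
    print(rebuilt_d1 == d1)                      # True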

Holographic Data Storage


Mass memory systems serve computers' archival and backup needs. There are numerous applications in both the commercial and military sectors that require data storage with huge capacity, high data rates and fast access. To address such needs, 3-D optical memories have been proposed. Since the data are stored in a volume, they are capable of much higher storage densities than existing 2-D memory systems. In addition, this memory system has the potential for parallel access: instead of writing or reading a sequence of bits one at a time, entire 2-D data pages can be accessed in one go. With advances in the growth and preparation of various photorefractive materials, along with advances in device technologies such as spatial light modulators (SLMs) and detector arrays, the realization of such an optical system is becoming feasible. A hologram is a recording of the optical interference pattern that forms at the intersection of two coherent optical beams. Typically, light from a single laser is split into two paths, the signal path and the reference path. The beam that propagates along the signal path carries information, whereas the reference is designed to be simple to reproduce. A common reference beam is a plane wave: a light beam that propagates without converging or diverging. The two paths are overlapped on the holographic medium and the interference pattern between the two beams is recorded. A key property of this interferometric recording is that when it is illuminated by a readout beam, the signal beam is reproduced. In effect, some of the light is diffracted from the readout beam to "reconstruct" a weak copy of the signal beam. If the signal beam was created by reflecting light off a 3-D object, then the reconstructed hologram makes the 3-D object appear behind the holographic medium. When the hologram is recorded in a thin material, the readout beam can differ from the reference beam used for recording and the scene will still appear.

Volume Holograms
To make the hologram, the reference and object beams are overlapped in a photosensitive medium, such as a photopolymer or an inorganic crystal. The resulting optical interference pattern creates chemical and/or physical changes in the absorption, refractive index or thickness of the storage medium, preserving a replica of the illuminating interference pattern. Since this pattern contains information about both the amplitude and the phase of the two light beams, when the recording is illuminated by the readout beam, some of the light is diffracted to "reconstruct" a weak copy of the object beam. If the object beam originally came from a 3-D object, then the reconstructed hologram makes the 3-D object reappear. Since the diffracted wavefront accumulates energy from throughout the thickness of the storage material, a small change in either the wavelength or the angle of the readout beam generates enough destructive interference to make the hologram effectively disappear through Bragg selectivity. As the material becomes thicker, accessing a stored volume hologram requires tight tolerances on the stability and repeatability of the wavelength and incidence angle provided by the laser and readout optics. However, destructive interference also opens up a tremendous opportunity: a small storage volume can now store multiple superimposed holograms, each one distributed throughout the entire volume. The destructive interference allows each of these stored holograms to be independently accessed with its original reference beam. To record a second, angularly multiplexed hologram, for instance, the angle of the reference beam is changed sufficiently so that the reconstruction of the first hologram effectively disappears. The new incidence angle is used to record a second hologram with a new object beam. The two holograms can be independently accessed by changing the readout laser beam angle back and forth.

For a 2-cm hologram thickness, the angular sensitivity is only 0.0015 degrees. Therefore, it becomes possible to store thousands of holograms within the allowable range of reference-arm angles (typically 20-30 degrees). The maximum number of holograms stored at a single location to date is 10,000.
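As a quick arithmetic check of the figures just quoted, the sketch below divides the usable reference-arm angular range by the angular selectivity to estimate how many angle-multiplexed holograms one location could in principle hold.

def max_angle_multiplexed_holograms(angular_range_deg: float,
                                    selectivity_deg: float) -> int:
    """Rough upper bound: usable angular range divided by angular selectivity."""
    return int(angular_range_deg / selectivity_deg)

if __name__ == "__main__":
    print(max_angle_multiplexed_holograms(20.0, 0.0015))   # ~13,333
    print(max_angle_multiplexed_holograms(30.0, 0.0015))   # ~20,000
    # consistent in order of magnitude with the 10,000 holograms demonstrated to date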

Organic Display

With the imaging appliance revolution underway, the need for more advanced handheld devices that combine the attributes of a computer, PDA, and cell phone is increasing, and the flat-panel mobile display industry is searching for a display technology that will revolutionize the industry. The need for new lightweight, low-power, wide-viewing-angle displays has pushed the industry to revisit the flat-panel digital display technology currently used for mobile applications. Struggling to meet the needs of demanding applications such as e-books, smart networked household appliances, identity management cards, and display-centric handheld mobile imaging devices, the flat-panel industry is now looking at a new and revolutionary form of display known as the Organic Light Emitting Diode (OLED). OLEDs offer higher efficiency and lower weight than many other types of displays, and come in myriad forms that lend themselves to various applications. Many exciting virtual imaging applications will become a reality as new advanced OLED-on-silicon microdisplays enter the marketplace over the next few years. The field of semiconducting polymers has its roots in the 1977 discovery of the semiconducting properties of polyacetylene. This breakthrough earned Alan Heeger, Alan MacDiarmid, and Hideki Shirakawa the 2000 Nobel Prize in Chemistry for 'the discovery and development of conductive polymers'. The physical and chemical understanding of these novel materials has led to new device applications as active and passive electronic and optoelectronic devices ranging from diodes and transistors to polymer LEDs, photodiodes, lasers, and solar cells. Much of the interest in plastic devices derives from the opportunity to use clever control of polymer structure, combined with relatively economical polymer synthesis and processing techniques, to obtain simultaneous control over electronic, optical, chemical, and mechanical features. In preparing the latest materials against this onslaught of demand for lighter and less power-hungry display technologies, electrical engineers have even enlisted the help of the humble jellyfish in their efforts to develop better light-emitting diodes (LEDs). The jellyfish accomplishes its glow with great efficiency: its light comes from a substance dubbed green fluorescent protein (GFP), which collects the energy produced in a certain cellular chemical reaction and emits it as green light from a molecular package known as a chromophore. An OLED is an electronic device made by placing a series of organic thin films between two conductors. When electrical current is applied, a bright light is emitted. This process is called electrophosphorescence. Even with the layered structure, these devices are very thin, usually less than 500 nm (0.5 thousandths of a millimeter).

Symbian OS

Definition
Symbian OS is designed for the mobile phone environment. It addresses constraints of mobile phones by providing a framework to handle low memory situations, a power management model, and a rich software layer implementing industry standards for communications, telephony and data rendering. Even with these abundant features, Symbian OS puts no constraints on the integration of other peripheral hardware. This flexibility allows handset manufacturers to pursue innovative and original designs. Symbian OS is proven on several platforms. It started life as the operating system for the Psion series of consumer PDA products (including Series 5mx, Revo and netBook), and various adaptations by Diamond, Oregon Scientific and Ericsson. The first dedicated mobile phone incorporating Symbian OS was the Ericsson R380 Smartphone, which incorporated a flip-open keypad to reveal a touch screen display and several connected applications. Most recently available is the Nokia 9210Communicator, a mobile phone that has a QWERTY keyboard and color display, and is fully open to third-party applications written in Java or C++. The five key points - small mobile devices, mass-market, intermittent wireless connectivity, diversity of products and an open platform for independent software developers - are the premises on which Symbian OS was designed and developed. This makes it distinct from any desktop, workstation or server operating system. This also makes Symbian OS different from embedded operating systems, or any of its competitors, which weren't designed with all these key points in mind. Symbian is committed to open standards. Symbian OS has a POSIX-compliant interface and a Sun-approved JVM, and the company is actively working with emerging standards, such as J2ME, Bluetooth, MMS, SyncML, IPv6 and WCDMA. As well as its own developer support organization, books, papers and courses, Symbian delivers a global network of third-party competency and training centers - the Symbian Competence Centers and Symbian Training Centers. These are specifically directed at enabling other organizations and developers to take part in this new economy. Symbian has announced and implemented a strategy that will see Symbian OS running on many advanced open mobile phones. Small devices come in many shapes and sizes, each addressing distinct target markets that have different requirements. The market segment we are interested in is that of the mobile phone. The primary requirement of this market segment is that all products are great phones. This segment spans voice-centric phones with information capability to information-centric devices with voice capability. These advanced mobile phones integrate fully-featured personal digital assistant (PDA) capabilities with those of a traditional mobile phone in a single unit. There are several critical factors for the need of operating systems in this market. It is important to look at the mobile phone market in isolation. It has specific needs that make it unlike markets for PCs or fixed domestic appliances. Scaling down a PC operating system, or bolting communication capabilities onto a small and basic operating system, results in too many fundamental compromises. Symbian believes that the mobile phone market has five key characteristics that make it unique, and result in the need for a specifically designed operating system: 1) mobile phones are both small and mobile. 2) mobile phones are ubiquitous - they target a mass-market of consumer, enterprise and professional users. 
3) mobile phones are occasionally connected - they can be used when connected to the wireless phone network, locally to other devices, or on their own.

4) manufacturers need to differentiate their products in order to innovate and compete in a fast-evolving market.

5) mobile phones are open platforms - they enable independent software developers to create new applications for them.

Ovonic Unified Memory (OUM)

Definition
Ovonyx is developing a microelectronics memory technology called Ovonic Unified Memory (OUM). The technology was originally developed by Stanford Ovshinsky and is exclusively licensed from Energy Conversion Devices (ECD) Inc.; the name 'Ovonic' is derived from 'Ovshinsky' and 'electronic'. OUM is also known as phase-change memory because it uses a unique thin-film phase-change material to store information economically and with excellent solid-state memory properties. It is a candidate replacement for conventional memories such as Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FeRAM or FRAM), Dynamic Random Access Memory (DRAM), and Static Random Access Memory (SRAM). The same ovonic materials enable the rewriting of CDs and DVDs; CD and DVD drives read and write the ovonic material with a laser, whereas OUM uses electric current to change the phase of the material. The thin-film material is a phase-change chalcogenide alloy similar to the film used to store information on commercial CD-RW and DVD-RAM optical disks, based on proprietary technology originally developed by and exclusively licensed from Energy Conversion Devices.

Evolution Of OUM
Magnetic Random Access Memory (MRAM), a technology first developed in the 1970s but rarely commercialized, has attracted the backing of IBM, Motorola and others. MRAM stores information by flipping two layers of magnetic material into and out of alignment with an electric current. For reading and writing data, MRAM can be as fast as a few nanoseconds (billionths of a second), the best among the three next-generation memory candidates, and it promises to integrate easily with the industry's existing chip manufacturing process, since MRAM is built on top of silicon circuitry. The biggest problem with MRAM is the relatively small, and therefore difficult to detect, difference between its ON and OFF states. The second potential successor to flash, Ferroelectric Random Access Memory (FeRAM / FRAM), has actually been commercially available for nearly 15 years and has attracted the backing of Fujitsu, Matsushita, IBM and Ramtron. FRAM relies on the polarization of what amount to tiny electric dipoles inside certain materials such as perovskites. FRAM memory cells do not wear out until they have been read or written to billions of times. In manufacturing, MRAM and OUM would each require the addition of six to eight "masking" layers in the chip manufacturing process, just like flash, whereas FRAM might require as few as two extra layers. OUM is based on the information storage technology developed by Ovshinsky that allows the rewriting of CDs and DVDs. While CD and DVD drives read and write ovonic material with lasers, OUM uses electric current to change the phase of memory cells. These cells are either in a crystalline state, where electrical resistance is low, or in an amorphous state, where resistance is high. OUM cells can be read and written trillions of times, making their use essentially nondestructive, unlike MRAM or FRAM. OUM's dynamic range - the difference between the electrical resistance in the crystalline state and in the amorphous state - is wide enough to allow more than one set of ON and OFF values in a cell, dividing it into several bits and multiplying memory density by two, four, or potentially even 16 times. OUM is not as fast as MRAM. The OUM solid-state memory has cost advantages over conventional solid-state memories such as DRAM or flash due to its thin-film nature, very small active storage media, and simple device structure. OUM requires fewer steps in an IC manufacturing process, resulting in reduced cycle times, fewer defects, and greater manufacturing flexibility.
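Since the resistance window of a phase-change cell can be divided into several levels to store more than one bit, here is a minimal Python sketch of how a read-out might map a measured cell resistance to a stored symbol. All resistance values and thresholds are hypothetical, chosen only to illustrate the multi-level idea; real OUM devices differ.

# Illustrative multi-level read-out for a phase-change (OUM-style) cell.
# Four resistance bands encode two bits per cell; thresholds are made up.
THRESHOLDS = [3e3, 3e4, 3e5]          # ohms; boundaries between the 4 bands (hypothetical)
SYMBOLS = ["11", "10", "01", "00"]    # low resistance (crystalline) .. high (amorphous)

def read_cell(resistance_ohms: float) -> str:
    """Map a measured cell resistance to a 2-bit symbol."""
    for threshold, symbol in zip(THRESHOLDS, SYMBOLS):
        if resistance_ohms < threshold:
            return symbol
    return SYMBOLS[-1]

# Example: a mostly crystalline cell reads back as "11", a fully amorphous one as "00".
print(read_cell(1.2e3), read_cell(8e5))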

Spintronics
Spintronics may be a fairly new term, but the concept is not exotic. This technological discipline aims to exploit the subtle, mind-bending quantum property of electron spin to develop a new generation of electronic devices. The ability to exploit spin in semiconductors promises new logic devices, such as the spin transistor, with enhanced functionality, higher speed and reduced power consumption, and it might spark a revolution in the semiconductor industry. So far, the problem of injecting electrons with a controlled spin direction has held up the realization of such spintronic devices. Spintronics is an emergent technology that exploits the quantum propensity of electrons to spin as well as making use of their charge state. The spin itself is manifested as a detectable weak magnetic energy state characterised as "spin up" or "spin down". Conventional electronic devices rely on the transport of electrical charge carriers - electrons - in a semiconductor such as silicon. Device engineers and physicists are now trying to exploit the spin of the electron rather than its charge. Spintronic devices combine the advantages of magnetic materials and semiconductors. They are expected to be non-volatile, versatile, fast and capable of simultaneous data storage and processing, while at the same time consuming less energy. Spintronic devices are playing an increasingly significant role in high-density data storage, microelectronics, sensors, quantum computing and biomedical applications.

E-Commerce
In an effort to further the development of e-commerce, the federal Electronic Signatures Act (2000) established uniform national standards for determining the circumstances under which contracts and notifications in electronic form are legally valid. Legal standards were also specified regarding the use of an electronic signature ("an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record"), but the law did not specify technological standards for implementing the act. The act gave electronic signatures a legal standing similar to that of paper signatures, allowing contracts and other agreements, such as those establishing a loan or brokerage account, to be signed on line. Once consumers' worries eased about on-line credit card purchases, e-commerce grew rapidly in the late 1990s. In 1998 on-line retail ("e-tail") sales were $7.2 billion, double the amount in 1997. Online retail ordering represented 15% of nonstore sales (which included catalogs, television sales, and direct sales) in 1998, but this constituted only 1% of total retail revenues that year. Books are the most popular on-line product order-with over half of Web shoppers ordering books (one on-line bookseller, Amazon.com, which started in 1995, had revenues of $610 million in 1998)-followed by software, audio compact discs, and personal computers. Other on-line commerce includes trading of stocks, purchases of airline tickets and groceries, and participation in auctions.

Bio-Molecular Computing

Definition
Molecular computing is an emerging field to which chemistry, biophysics, molecular biology, electronic engineering, solid state physics and computer science contribute to a large extent. It involves the encoding, manipulation and retrieval of information at a macromolecular level, in contrast to current techniques, which accomplish these functions via IC miniaturization of bulk devices. Biological systems have unique abilities such as pattern recognition, learning, self-assembly and self-reproduction, as well as high-speed and parallel information processing. The aim of this article is to exploit these characteristics to build computing systems that have many advantages over their inorganic (Si, Ge) counterparts. DNA computing began in 1994 when Leonard Adleman proved that DNA computing was possible by solving a real problem - a Hamiltonian Path Problem, a close relative of the Traveling Salesman Problem - with a molecular computer. In theoretical terms, some scientists say the actual beginnings of DNA computation should be attributed to Charles Bennett's work. Adleman, now considered the father of DNA computing, is a professor at the University of Southern California and spawned the field with his paper "Molecular Computation of Solutions to Combinatorial Problems." Since then, Adleman has demonstrated how the massive parallelism of a trillion DNA strands can simultaneously attack different aspects of a computation to crack even the toughest combinatorial problems.
Adleman's problem: the objective is to find a path from a start vertex to an end vertex that goes through all the other vertices exactly once. This problem is difficult for conventional computers to solve because it is an NP (non-deterministic polynomial time) problem; such problems, when they involve large numbers of vertices, are intractable with conventional computers but can be attacked using massively parallel computers like DNA computers. The Hamiltonian Path problem was chosen by Adleman because it is a well-known problem. The following algorithm solves it (a software sketch of the same filtering steps follows this section): 1. Generate random paths through the graph. 2. Keep only those paths that begin with the start city (A) and conclude with the end city (G). 3. If the graph has n cities, keep only those paths with n cities (here n = 7). 4. Keep only those paths that enter all cities at least once. 5. Any remaining paths are solutions.
The key was using DNA to perform the five steps of this algorithm. Adleman's first step was to synthesize DNA strands of known sequences, each strand 20 nucleotides long. He represented each vertex of the graph by a separate strand, and further represented each edge between two consecutive vertices, such as 1 to 2, by a DNA strand consisting of the last ten nucleotides of the strand representing vertex 1 plus the first ten nucleotides of the vertex 2 strand. Then, through the sheer number of DNA molecules (3 x 10^13 copies of each edge in this experiment) joining together in all possible combinations, many random paths were generated. Adleman used well-established techniques of molecular biology to weed out the Hamiltonian path - the one that entered every vertex, from the start vertex to the end vertex. After generating the numerous random paths in the first step, he used the polymerase chain reaction (PCR) to amplify and keep only the paths that began on the start vertex and ended at the end vertex. The next two steps kept only those strands that passed through the required number of vertices, entering each vertex at least once. At this point, any paths that remained would code for a Hamiltonian path, thus solving the problem.
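The generate-and-filter algorithm above can be sketched in a few lines of Python, run on an ordinary computer rather than with DNA. The seven-city directed graph below is a hypothetical example, not Adleman's actual graph; it is chosen only so the filtering steps have something to work on.

import random

# Hypothetical directed graph on cities A..G (not Adleman's actual graph).
EDGES = {
    "A": ["B", "D"], "B": ["C", "E"], "C": ["D", "G"],
    "D": ["E"], "E": ["F", "C"], "F": ["G"], "G": [],
}
CITIES = list(EDGES)          # 7 cities
START, END = "A", "G"

def random_path(max_len=7):
    """Step 1: grow a random path by following random edges (the 'ligation' step)."""
    path = [START]
    while EDGES[path[-1]] and len(path) < max_len:
        path.append(random.choice(EDGES[path[-1]]))
    return path

# Generate a large brood of random paths, then filter them (steps 2-5).
candidates = [random_path() for _ in range(100_000)]
survivors = [p for p in candidates
             if p[0] == START and p[-1] == END      # step 2: correct endpoints
             and len(p) == len(CITIES)              # step 3: n cities long
             and set(p) == set(CITIES)]             # step 4: visits every city

print(survivors[0] if survivors else "no Hamiltonian path found")   # step 5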

Code Division Duplexing

Introduction
Reducing interference in a cellular system is the most effective approach to increasing radio capacity and transmission data rate in the wireless environment. Therefore, reducing interference is a difficult and important challenge in wireless communications. In every two-way communication system it is necessary to use separate channels to transmit information in each direction. This is called duplexing. Currently there exist only two duplexing technologies in wireless communications, Frequency division duplexing (FDD) and time division duplexing (TDD). FDD has been the primary technology used in the first three generations of mobile wireless because of its ability to isolate interference. TDD is seemingly a more spectral efficient technology but has found limited use because of interference and coverage problems. Code-division duplexing (CDD) is an innovative solution that can eliminate all kinds of interference. CDMA is the best multiple access scheme when compared to all others for combating interference. However, the codes in CDMA can be more than one type of code. A set of smart codes can make a high-capacity CDMA system very effective without adding other technologies. The smart code plus TDD is called CDD. This paper will elaborate on a set of smart codes that will make an efficient CDD system a reality. The CDMA system based on this is known as the LAS-CDMA, where LAS is a set of smart codes. LAS-CDMA is a new coding technology that will increase the capacity and spectral efficiency of mobile networks. The advanced technology uses a set of smart codes to restrict interference, a property that adversely affects the efficiency of CDMA networks. To utilize spectrum efficiently, two transmission techniques need to be considered: one is a multiple access scheme and the other a duplexing system. There are three multiple access schemes namely TDMA, FDMA and CDMA. The industry has already established the best multiple access scheme, code-division multiple access (CDMA), for 3G systems. The next step is to select the best duplexing system. Duplexing systems are used for two-way communications. Presently, there are only two duplexing systems used: frequency-division duplexing (FDD), and time-division duplexing (TDD). The former uses different frequencies to handle incoming and outgoing signals. The latter uses a single frequency but different time slots to handle incoming and outgoing signals. In the current cellular duplexing systems, FDD has been the appropriate choice, not TDD. Currently, all cellular systems use frequency-division duplexing in an attempt to eliminate interference from adjacent cells. The use of many technologies has limited the effects of interference but still certain types of interference remain. Time-division duplexing has not been used for mobile cellular systems because it is even more susceptible to different forms of interference. TDD can only be used for small confined area systems. Code-division duplexing is an innovative solution that can eliminate all kinds of interference. Eliminating all types of interference makes CDD the most spectrum efficient duplexing system.

CDMA overview
Interference and Capacity
One of the key criteria in evaluating a communication system is its spectral efficiency, or the system capacity for a given system bandwidth, or sometimes the total data rate supported by the system. For a given bandwidth, the system capacity of narrowband radio systems is dimension limited, while the system capacity of a traditional CDMA system is interference limited.

Traditional CDMA systems are all self-interference systems. Three types of interference are usually considered: ISI, or Inter-Symbol Interference, which is created by the multipath replicas of the useful signal itself; MAI, or Multiple Access Interference, which is the interference created by the signals of other users and their multipath replicas falling onto the useful signal; and ACI, or Adjacent Cell Interference, which comprises all the interfering signals from adjacent cells falling onto the useful signal.
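To make the multiple-access idea concrete, the following Python sketch spreads two users' bits with orthogonal Walsh codes and shows that a synchronous correlator recovers one user's data while the other user's signal cancels. This is a generic illustration of CDMA spreading and despreading, not of the LAS codes discussed above; code length and data bits are arbitrary.

import numpy as np

# Two orthogonal Walsh spreading codes of length 4 (chips are +/-1).
walsh = np.array([[ 1,  1,  1,  1],
                  [ 1, -1,  1, -1]])

bits_user0 = np.array([ 1, -1,  1])   # BPSK data for user 0
bits_user1 = np.array([-1, -1,  1])   # BPSK data for user 1

# Spread: each data bit is multiplied by the user's chip sequence, then the users add.
tx = np.kron(bits_user0, walsh[0]) + np.kron(bits_user1, walsh[1])

# Despread user 0: correlate each 4-chip block with user 0's code and take the sign.
blocks = tx.reshape(-1, 4)
decisions_user0 = np.sign(blocks @ walsh[0])

print(decisions_user0)   # recovers [ 1 -1  1 ]; user 1's contribution correlates to zero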

Orthogonal Frequency Division Multiplexing

Definition
Multi-carrier modulation is a technique for data transmission in which a high bit-rate data stream is divided into several parallel low bit-rate data streams, and these low bit-rate streams are used to modulate several carriers. Multi-carrier transmission has a number of useful properties, such as delay-spread tolerance and spectral efficiency, that encourage its use in untethered broadband communications. OFDM is a multi-carrier modulation technique with densely spaced subcarriers that has gained a lot of popularity in the broadband community in the last few years and has found immense application in communication systems. This report is intended to provide a tutorial-level introduction to OFDM modulation, its advantages and demerits, and some applications of OFDM.

History Of OFDM
The concept of using parallel data transmission by means of frequency division multiplexing (FDM) was published in the mid-1960s, and some early development can be traced back to the 1950s. A U.S. patent was filed and issued in January 1970. The idea was to use parallel data streams and FDM with overlapping subchannels to avoid the use of high-speed equalization, to combat impulsive noise and multipath distortion, and to make full use of the available bandwidth. The initial applications were in military communications. In the telecommunications field, the terms discrete multitone (DMT), multichannel modulation and multicarrier modulation (MCM) are widely used and are sometimes interchangeable with OFDM. In OFDM, each carrier is orthogonal to all other carriers; this condition is not always maintained in MCM, so OFDM can be regarded as an optimal version of multicarrier transmission. For a large number of subchannels, the arrays of sinusoidal generators and coherent demodulators required in a parallel system become unreasonably expensive and complex, and the receiver needs precise phasing of the demodulating carriers and sampling times to keep crosstalk between subchannels acceptable. Weinstein and Ebert applied the discrete Fourier transform (DFT) to the parallel data transmission system as part of the modulation and demodulation process. In addition to eliminating the banks of subcarrier oscillators and coherent demodulators required by FDM, a completely digital implementation could be built around special-purpose hardware performing the fast Fourier transform (FFT), and recent advances in VLSI technology have enabled high-speed chips that can perform large FFTs at affordable prices. In the 1980s, OFDM was studied for high-speed modems, digital mobile communications and high-density recording; one system used a pilot tone for stabilizing carrier and clock frequency control, trellis coding was implemented, and various fast modems were developed for telephone networks. In the 1990s, OFDM was exploited for wideband data communications over mobile radio FM channels, high-bit-rate digital subscriber lines (HDSL, 1.6 Mb/s), asymmetric digital subscriber lines (ADSL, 1.536 Mb/s), very-high-speed digital subscriber lines (VHDSL, 100 Mb/s), digital audio broadcasting (DAB) and HDTV terrestrial broadcasting.
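A minimal numpy sketch of the DFT-based idea attributed to Weinstein and Ebert above: QPSK symbols are placed on orthogonal subcarriers with an IFFT, a cyclic prefix is added, and the receiver recovers the symbols with an FFT. The subcarrier count, prefix length and constellation are arbitrary illustrative choices, and an ideal noiseless channel is assumed.

import numpy as np

N_SC, CP = 64, 16                       # subcarriers and cyclic-prefix length (arbitrary)
rng = np.random.default_rng(0)

# Random QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, (N_SC, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT maps the symbols onto orthogonal subcarriers,
# then the last CP samples are prepended as a cyclic prefix.
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-CP:], time_signal])

rx = tx                                 # ideal, noiseless channel assumed here

# Receiver: drop the cyclic prefix and apply the FFT.
recovered = np.fft.fft(rx[CP:])

print(np.allclose(recovered, symbols))  # True: the subcarriers stay orthogonal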

Utility Fog

Definition
Nanofog is a highly advanced nanotechnology, which the Technocratic Union has developed as the ultimate multi-purpose tool. It is a user-friendly, completely programmable collection of Avogadro numbers (about 6 x 10^23) of nanomachines that can form a vast range of machinery, from wristwatches to spaceships. It can simulate any material from gas to liquid to solid, and in sufficient quantities it can even be used to implement the ultimate in virtual reality. ITx researchers suggest that more complex applications could include uploading human minds into planet-sized collections of Utility Fog. An active, polymorphic material, Utility Fog can be designed as a conglomeration of 100-micron robotic cells called foglets. Such robots could be built with the techniques of molecular nanotechnology. Controllers with processing capabilities of 1000 MIPS per cubic micron and electric motors with power densities of one milliwatt per cubic micron are assumed. Utility Fog should be capable of simulating most everyday materials, dynamically changing its form and properties, and forming a substrate for integrated virtual reality and telerobotics. This paper will examine the concept and explore some of the applications of this material.

Introduction
Imagine a microscopic robot. It has a body about the size of a human cell and 12 arms sticking out in all directions. A bucketful of such robots might form a "robot crystal" by linking their arms up into a lattice structure. Now take a room, with people, furniture, and other objects in it it's still mostly empty air. Fill the air completely full of robots. With the right programming, the robots can exert any force in any direction on the surface of any object. They can support the object, so that it apparently floats in the air. They can support a person, applying the same pressures to the seat of the pants that a chair would. They can exert the same resisting forces that elbows and fingertips would receive from the arms and back of the chair. A program running in the Utility Fog can thus simulate the physical existence of an object. Although this class of nanotechnology has been envisioned by the technocracy since early times, and has been available to us for over twenty years, the name is more recent. A mundane scientist, J. Storrs Hall provided an important baseline examination of the issues involved in the application and design of Utility fog. He envisioned it as an active polymorphic material designed as a conglomeration of 100-micron robotic cells or foglets, built using molecular nanotechnology. An appropriate mass of Utility Fog could be programmed to simulate, to the same precision as measured by human senses, most of the physical properties, such as hardness, temperature, light, of any macroscopic object, including expected objects such as tables and fans, but also materials such as air and water. The major exceptions would be taste, smell, and transparency. To users, it would seem like the Star Trek Holodeck except that it would use atoms instead of holographic illusions. It is an indication of the degree to which our science and technology have permeated society that a non member could so accurately describe and visualise the way in which "Utility Fog" operates.

VLSI Computations

Definition
Over the past four decades the computer industry has experienced four generations of development, physically marked by the rapid change of building blocks from relays and vacuum tubes (1940-1950s) to discrete diodes and transistors (1950-1960s), to small- and medium-scale integrated (SSI/MSI) circuits (1960-1970s), and to large- and very-large-scale integrated (LSI/VLSI) devices (1970s and beyond). Increases in device speed and reliability and reductions in hardware cost and physical size have greatly enhanced computer performance. However, better devices are not the sole factor contributing to high performance. Ever since the stored-program concept of von Neumann, the computer has been recognized as more than just a hardware organization problem. A modern computer system is really a composite of such items as processors, memories, functional units, interconnection networks, compilers, operating systems, peripheral devices, communication channels, and database banks. To design a powerful and cost-effective computer system, and to devise efficient programs to solve a computational problem, one must understand the underlying hardware and software system structures and the computing algorithm to be implemented on the machine with some user-oriented programming language. These disciplines constitute the technical scope of computer architecture: a system concept integrating hardware, software, algorithms, and languages to perform large computations, and a good computer architect should master all of them. It is the revolutionary advances in integrated circuits and system architecture that have contributed most to the significant improvement of computer performance during the past 40 years. In this section, we review the generations of computer systems and indicate the general trends in the development of high-performance computers.
Generation of Computer Systems
The division of computer systems into generations is determined by the device technology, system architecture, processing mode, and languages used. We consider each generation to have a time span of about 10 years. Adjacent generations may overlap by several years, as demonstrated in the figure; the long time span is intended to cover both development and use of the machines in various parts of the world. We are currently in the fourth generation, while the fifth generation has not materialized yet. The future computers to be used in the 1990s may be the next generation: very-large-scale integrated (VLSI) chips will be used along with high-density modular design, and multiprocessors like the 16 processors of the S-1 project at Lawrence Livermore National Laboratory and of Denelcor's HEP will be required. The Cray-2 is expected to have four processors and to be delivered in 1985. More than 1000 million floating-point operations per second (megaflops) are expected of these future supercomputers.

Need For Parallel Processing
Achieving high performance depends not only on using faster and more reliable hardware devices, but also on major improvements in computer architecture and processing techniques. State-of-the-art parallel computer systems can be characterized into three structural classes: pipelined computers, array processors and multiprocessor systems. Parallel processing computers provide a cost-effective means to achieve high system performance through concurrent activities.

Tunable lasers

Definition
Tunable lasers as the name suggests are lasers whose wavelengths can be tuned or varied. They play an important part in optical communication networks. Recent improvements in tunable laser technologies are enabling highly flexible and effective utilization of the massive increases in optical network capacity brought by large-scale application of dense wavelength division multiplexing. Several tunable laser technologies have emerged, each with its own set of tradeoffs with respect to the needs of particular optical networking applications. Tunable lasers are produced mainly in 4 ways: The distributed feedback laser (DFB), the external cavity diode laser, the vertical cavity diode laser and the micro electro mechanical system (MEMS) technology. Tunable lasers help network administrators to save a lot of cost, by allowing them to efficiently manage the network with lesser number of spares. They also enable reliable functioning of the optical network. Changing traffic patterns, customer requirements, and new revenue opportunities require greater flexibility than static OADMs can provide, complicating network operations and planning. Incorporating tunable lasers removes this constraint altogether by allowing any channel to be added by the OADM at any time. In a wavelength-division multiplexed (WDM) network carrying 128 wavelengths of information, we have 128 different lasers giving out these wavelengths of light. Each laser is designed differently in order to give the exact wavelength needed. Even though the lasers are expensive, in case of a breakdown, we should be able to replace it at a moment's notice so that we don't lose any of the capacity that we have invested so much money in. So we keep in stock 128 spare lasers or maybe even 256, just to be prepared for double failures. What if we have a multifunctional laser for the optical network that could be adapted to replace one of a number of lasers out of the total 128 wavelengths? Think of the money that could be saved, as well as the storage space for the spares. What is needed for this is a "tunable laser," Tunable lasers are still a relatively young technology, but as the number of wavelengths in networks increases so will their importance. Each different wavelength in an optical network will be separated by a multiple of 0.8 nanometers (sometimes referred to as 100GHz spacing. Current commercial products can cover maybe four of these wavelengths at a time. While not the ideal solution, this still cuts your required number of spare lasers down. More advanced solutions hope to be able to cover larger number of wavelengths, and should cut the cost of spares even further. The devices themselves are still semiconductor-based lasers that operate on similar principles to the basic non-tunable versions. Most designs incorporate some form of grating like those in a distributed feedback laser. These gratings can be altered in order to change the wavelengths they reflect in the laser cavity, usually by running electric current through them, thereby altering their refractive index. The tuning range of such devices can be as high as 40nm, which would cover any of 50 different wavelengths in a 0.8nm wavelength spaced system. Technologies based on vertical cavity surface emitting lasers (VCSELs) incorporate moveable cavity ends that change the length of the cavity and hence the wavelength emitted. Current designs of tunable VCSELs have similar tuning ranges. Lasers are devices giving out intense light at one specific color. 
The kinds of lasers used in optical networks are tiny devices - usually about the size of a grain of salt. They are little pieces of semiconductor material, specially engineered to give out very precise and intense light. Within the semiconductor material are lots of electrons - negatively charged particles.
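As a quick check of the figures quoted above, the short Python sketch below relates the 0.8 nm channel spacing to its frequency equivalent near 1550 nm and counts how many such channels a 40 nm tuning range spans. The 1550 nm centre wavelength is an assumption typical of DWDM systems, not a figure from the text.

# Relating the DWDM grid figures quoted above (illustrative calculation).
C = 3.0e8                 # speed of light, m/s
center_wl = 1550e-9       # assumed DWDM centre wavelength, m
spacing_wl = 0.8e-9       # channel spacing, m
tuning_range = 40e-9      # tuning range of the laser, m

# Frequency spacing corresponding to 0.8 nm near 1550 nm: df = c * d_lambda / lambda^2
spacing_freq = C * spacing_wl / center_wl**2
channels_covered = int(tuning_range / spacing_wl)

print(f"0.8 nm near 1550 nm ~ {spacing_freq / 1e9:.0f} GHz spacing")   # ~100 GHz
print(f"Channels within a 40 nm tuning range: {channels_covered}")     # 50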

High Altitude Aeronautical Platforms

Definition
Affordable bandwidth will be as essential to the Information Revolution in the 21st century as inexpensive power was to the Industrial Revolution in the 18th and 19th centuries. Today's global communications infrastructure of landlines, cellular towers, and satellites is inadequately equipped to support the increasing worldwide demand for faster, better, and less expensive service. At a time when conventional ground and satellite systems are facing increasing obstacles and spiraling costs, a low-cost solution is being advocated. This paper focuses on airborne platforms - airships, planes, helicopters or some hybrid solutions - which could operate at stratospheric altitudes for significant periods of time, be low cost, and be capable of carrying sizable multipurpose communications payloads. This report briefly presents an overview of the internal architecture of a High Altitude Aeronautical Platform and the various HAAPS projects. High Altitude Aeronautical Platform Stations (HAAPS) is the name of a technology for providing wireless narrowband and broadband telecommunication services as well as broadcasting services with either airships or aircraft. The HAAPS operate at altitudes between 3 and 22 km. A HAPS should be able to cover a service area of up to 1,000 km diameter, depending on the minimum elevation angle accepted from the user's location. The platforms may be airplanes or airships (essentially balloons) and may be manned or unmanned, with autonomous operation coupled with remote control from the ground. While the term HAP may not have a rigid definition, we take it to mean a solar-powered and unmanned airplane or airship capable of long endurance on-station, possibly several years. Various platform options exist: SkyStation, the Japanese Stratospheric Platform Project, the European Space Agency (ESA) and others suggest the use of airships/blimps/dirigibles. These will be stationed at 21 km and are expected to remain aloft for about 5 years. Angel Technologies (HALO), AeroVironment/NASA (Helios) and the European Union (Heliplat) propose the use of high-altitude long-endurance aircraft. The aircraft are either engine or solar powered and are stationed at 16 km (HALO) or 21 km (Helios). Helios is expected to stay aloft for a minimum of 6 months, whereas HALO will have 3 aircraft flying in 8-hour shifts. Platforms Wireless International is implementing a tethered aerostat situated at about 6 km. A high-altitude telecommunication system comprises an airborne platform - typically at high atmospheric or stratospheric altitudes - with a telecommunications payload, and associated ground-station telecommunications equipment. The combination of altitude, payload capability, and power supply capability makes it ideal to serve new and metropolitan areas with advanced telecommunications services such as broadband access and regional broadcasting. The opportunities for applications are virtually unlimited, ranging from narrowband services such as paging and mobile voice to interactive broadband services such as multimedia and video conferencing. For future telecommunications operators such a platform could provide blanket coverage from day one, with the added advantage of not being limited to a single service. Where little or unreliable infrastructure exists, traffic could be switched through the air via the HAPS platform. Technically, the concept offers a solution to the propagation and rollout problems of terrestrial infrastructure and to the capacity and cost problems of satellite networks.
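To see roughly where the "up to 1,000 km diameter" coverage figure comes from, here is a rough Python estimate of the ground footprint of a platform at 21 km, both at the radio horizon and for a non-zero minimum elevation angle. The Earth radius value and the 5-degree example angle are assumptions used only for illustration.

import math

EARTH_R = 6371e3          # mean Earth radius, m (assumed)
h = 21e3                  # platform altitude, m

# Distance to the radio horizon (elevation angle ~0 degrees).
horizon = math.sqrt(2 * EARTH_R * h)
print(f"Horizon-limited coverage diameter: {2 * horizon / 1e3:.0f} km")   # ~1030 km

# With a minimum elevation angle the usable footprint shrinks considerably.
min_elev = math.radians(5)                    # example minimum user elevation angle
radius_5deg = h / math.tan(min_elev)          # flat-Earth approximation
print(f"Coverage diameter at 5 deg elevation: {2 * radius_5deg / 1e3:.0f} km")  # ~480 km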

Daknet

Introduction
Nowadays it is very easy to establish communication from one part of the world to another. Despite this, even now villagers in remote areas travel long distances to talk to family members or to obtain forms that citizens in developed countries can call up on a computer in a matter of seconds. Governments try to provide a telephone connection in every village in the mistaken belief that the ordinary telephone is the cheapest way to provide connectivity, but recent advances in wireless technology make running a copper wire to an analog telephone much more expensive than broadband wireless Internet connectivity. Daknet, an ad hoc network, uses wireless technology to provide digital connectivity. It takes advantage of the existing transportation and communication infrastructure, and its name derives from the Hindi word "Dak", for postal. Daknet combines a physical means of transportation with wireless data transfer to extend the Internet connectivity that an uplink, a cyber cafe or a post office provides. Real-time communications need large capital investment and hence a high level of user adoption to recover costs, and the average villager cannot afford even a personal communications device such as a telephone or computer; to recover costs, users must share the communication infrastructure. The real-time aspect of telephony can also be a disadvantage. Studies show that the current market for successful rural Information and Communication Technology (ICT) services does not appear to rely on real-time connectivity, but rather on affordability and basic interactivity. The poor not only need digital services, but they are willing and able to pay for them to offset the much higher costs of poor transportation, unfair pricing, and corruption. It is therefore useful to consider non-real-time infrastructures and applications such as voice mail, e-mail, and electronic bulletin boards. Technologies like store-and-forward or asynchronous modes of communication can be significantly lower in cost and do not necessarily sacrifice the functionality required to deliver valuable user services. In addition to non-real-time applications such as e-mail and voice messaging, providers can use asynchronous modes of communication to create local information repositories that community members can add to and query.

Wireless Catalyst
Advances in the IEEE 802 standards have led to huge commercial success and low pricing for broadband networks. These techniques can provide broadband access to even the most remote areas at low price. Important considerations in a WLAN are:
Security: In a WLAN, access is not limited to the wired PCs but is also open to all wireless network devices, making it easy for a hacker to breach the security of the network.
Reach: A WLAN should have optimum coverage and performance so that mobile users can seamlessly roam in the wireless network.
Interference: Interference and obstruction should be minimized by designing the wireless network with proper placement of wireless devices.
Interoperability: A wireless technology standard should be chosen that makes the WLAN a truly interoperable network, with devices from different vendors integrated into it.
Reliability: The WLAN should provide a reliable network connection in the enterprise network.

Manageability: A manageable WLAN allows network administrators to manage, make changes and troubleshoot problems with fewer hassles.
Wireless data networks based on the IEEE 802.11 or Wi-Fi standard are perhaps the most promising of the wireless technologies. Features of Wi-Fi include ease of setup, use and maintenance, relatively high bandwidth, and relatively low cost for both users and providers. Daknet combines a physical means of transportation with wireless data transfer to extend Internet connectivity: in this innovative scheme, vehicle-mounted access points using 802.11b-based technology provide broadband, asynchronous, store-and-forward connectivity in rural areas.
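A toy Python sketch of the store-and-forward exchange just described: a village kiosk queues outgoing messages, a vehicle-mounted access point picks them up when the bus passes, and drops them off at the Internet hub (and vice versa). The class and message names are invented purely for illustration and do not come from the Daknet project itself.

from collections import deque

class Node:
    """A kiosk or hub that queues messages until a mobile access point passes by."""
    def __init__(self, name):
        self.name = name
        self.outbox = deque()
        self.inbox = []

class MobileAccessPoint:
    """Vehicle-mounted AP: stores data from one stop and forwards it at the next."""
    def __init__(self):
        self.storage = []

    def visit(self, node):
        node.inbox.extend(self.storage)     # deliver everything carried so far
        self.storage = list(node.outbox)    # pick up the node's pending traffic
        node.outbox.clear()

kiosk, hub = Node("village kiosk"), Node("internet hub")
kiosk.outbox.append("e-mail: crop prices query")
hub.outbox.append("reply: market prices bulletin")

bus = MobileAccessPoint()
bus.visit(kiosk)   # bus passes the village and picks up the query
bus.visit(hub)     # reaches the hub, delivers it and picks up the reply
bus.visit(kiosk)   # next trip delivers the reply to the village

print(hub.inbox, kiosk.inbox)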

Digital Light Processing

Definition
Large-screen, high-brightness electronic projection displays serve four broad areas of application: (1) electronic presentations (e.g., business, education, advertising), (2) entertainment (e.g., home theater, sports bars, theme parks, electronic cinema), (3) status and information (e.g., military, utilities, transportation, public, sports) and (4) simulation (e.g., training, games). The electronic presentation market is being driven by the pervasiveness of software that has put sophisticated presentation techniques (including multimedia) into the hands of the average PC user. A survey of high-brightness (>1000 lumens) electronic projection displays was conducted to compare the three existing projection display technologies, namely oil film, CRT-LCD, and AM-LCD. Developed in the early 1940s at the Swiss Federal Institute of Technology and later at Gretag AG, oil film projectors (including the GE Talaria) have been the workhorse for applications that require projection displays of the highest brightness. But the oil film projector has a number of limitations, including size, weight, power, setup time, stability, and maintenance. In response to these limitations, LCD-based technologies have challenged the oil film projector. These LCD-based projectors are of two general types: (1) CRT-addressed LCD light valves and (2) active-matrix (AM) LCD panels. LCD-based projectors have not provided the perfect solution for the entire range of high-brightness applications. CRT-addressed LCD light valves have setup time and stability limitations. Most active-matrix LCDs used for high-brightness applications are transmissive and, because of this, heat generated by light absorption cannot be dissipated with a heat sink attached to the substrate. This limitation is mitigated by the use of large-area LCD panels with forced-air cooling. However, it may still be difficult to implement effective cooling at the highest brightness levels. In response to these and other limitations, as well as to provide superior image quality under the most demanding environmental conditions, high-brightness projection display systems have been developed based on Digital Light Processing (DLP) technology. DLP is based on a micro-electromechanical system (MEMS) device known as the Digital Micromirror Device (DMD). The DMD, invented in 1987 at Texas Instruments, is a semiconductor-based array of fast, reflective digital light switches that precisely control a light source using a binary pulse modulation technique. It can be combined with image processing, memory, a light source, and optics to form a DLP system capable of projecting large, bright, seamless, high-contrast color images.

The Mirror as a Switch


The DMD light switch is a member of a class of devices known as micro-electromechanical systems. Other MEMS devices include pressure sensors, accelerometers, and micro-actuators. The DMD is monolithically fabricated by CMOS-like processes over a CMOS memory. Each light switch has an aluminum mirror, 16 micrometers square, that can reflect light in one of two directions depending on the state of the underlying memory cell. Rotation of the mirror is accomplished through electrostatic attraction produced by voltage differences developed between the mirror and the underlying memory cell. With the memory cell in the on state, the mirror rotates to +10 degrees; with the memory cell in the off state, the mirror rotates to -10 degrees. (A close-up of DMD mirrors in operation can be seen in a scanning electron microscope, SEM.) By combining the DMD with a suitable light source and projection optics (Figure 6), the mirror reflects incident light either into or out of the pupil of the projection lens by a simple beam-steering technique. Thus, the on state of the mirror appears bright and the off state of the mirror appears dark. Compared to diffraction-based light switches, the beam-steering action of the DMD light switch provides a superior tradeoff between contrast ratio and the overall brightness efficiency of the system. By electrically addressing the memory cell below each mirror with the binary bit-plane signal, each mirror on the DMD array is electrostatically tilted to the on or off position. The technique that determines how long each mirror tilts in either direction is called pulse width modulation (PWM). The mirrors are capable of switching on and off more than 1000 times a second, and this rapid speed allows digital gray-scale and color reproduction. At this point, DLP becomes a simple optical system: after passing through condensing optics and a color filter system, the light from the projection lamp is directed at the DMD, and when the mirrors are in the on position, they reflect light through the projection lens and onto the screen to form a digital, square-pixel projected image.
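The binary pulse-width-modulation scheme described above can be sketched in a few lines of Python: an 8-bit gray value is split into bit planes, and each bit plane keeps the mirror in the on state for a time proportional to its binary weight. The frame rate and bit depth are illustrative assumptions, not figures from a specific DLP product.

# Illustrative binary PWM for one DMD pixel (assumed 8-bit gray, 60 Hz frame).
FRAME_TIME_US = 1_000_000 / 60          # one video frame, in microseconds
BITS = 8                                # assumed bit depth
unit = FRAME_TIME_US / (2**BITS - 1)    # duration of the least-significant bit plane

def mirror_schedule(gray: int):
    """Return (bit_plane, mirror_state, duration_us) entries for an 8-bit gray value."""
    schedule = []
    for bit in range(BITS - 1, -1, -1):             # most significant bit plane first
        state = "on" if (gray >> bit) & 1 else "off"
        schedule.append((bit, state, unit * (1 << bit)))
    return schedule

# A mid-gray pixel (130 = 10000010 binary) keeps the mirror on for the MSB plane
# and bit-plane 1, i.e. about 130/255 of the frame time.
total_on = sum(d for _, s, d in mirror_schedule(130) if s == "on")
print(f"on-time fraction: {total_on / FRAME_TIME_US:.3f}")   # ~0.510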

Free Space Laser Communications

Definition
Lasers have been considered for space communications since their realization in 1960. However, it was soon recognized that, although the laser had potential for the transfer of data at extremely high rates, specific advancements were needed in component performance and systems engineering, particularly for space-qualified hardware. Advances in system architecture, data formatting, and component technology over the past three decades have made laser communications in space not only a viable but also an attractive approach to intersatellite link applications. The high data rate and large information throughput available with laser communications are many times greater than in radio frequency (RF) systems. The small antenna size requires only a small increase in the weight and volume of the host vehicle. In addition, this feature substantially reduces blockage of the fields of view of the most desirable areas on satellites. The smaller antennas, with diameters typically less than 30 cm, create less momentum disturbance to any sensitive satellite sensors. Fewer onboard consumables are required over the long lifetime because there is less disturbance to the satellite compared with larger and heavier RF systems. The narrow beam divergence affords interference-free and secure operation.
Features Of Laser Communications System
A block diagram of a typical terminal is illustrated in Fig 1. Information, typically in the form of digital data, is input to data electronics that modulate the transmitting laser source. Direct or indirect modulation techniques may be employed depending on the type of laser used. The source output passes through an optical system into the channel. The optical system typically includes transfer, beam shaping, and telescope optics. The receive beam comes in through the optical system and is passed along to detectors and signal processing electronics. There are also terminal control electronics that must control the gimbals and other steering mechanisms, and servos, to keep the acquisition and tracking system operating in the designed modes of operation.
Operation
Free space laser communications systems are wireless connections through the atmosphere. They work similarly to fiber optic cable systems except that the beam is transmitted through open space. The carrier used for the transmission of the signal is generated by either a high-power LED or a laser diode. The laser systems operate in the near-infrared region of the spectrum, at a wavelength of between 780 and 920 nm. Two parallel beams are used, one for transmission and one for reception.
Acquisition And Tracking
There are three basic steps to laser communication: acquisition, tracking, and communications. Of the three, acquisition is generally the most difficult; angular tracking is usually the easiest. Communications depends on bandwidth or data rate, but is generally easier than acquisition unless very high data rates are required. Acquisition is the most difficult because laser beams are typically much smaller than the area of uncertainty. Satellites do not know exactly where they are or where the other platform is located, and since everything moves with some degree of uncertainty, they cannot take very long to search or the reference is lost. Instability of the platforms also causes uncertainty in time. In the ideal acquisition method, the beam width of the source is greater than the angle of uncertainty in the location of the receiver, and the receiver field of view includes the location uncertainty of the transmitter. Unfortunately, this ideal method requires a significant amount of laser power.

It is possible to operate a number of laser types at high peak power and low duty cycle to make acquisition easier. This is because a lower pulse rate is needed for acquisition than for tracking and communications. High peak-power pulses more easily overcome the receiver's set threshold and keep the false-alarm rate low. A low duty-cycle transmitter gives high peak power, yet requires less average power, and is thus a suitable source for acquisition. As the uncertainty area becomes smaller, it becomes more feasible to use a continuous source for acquisition, especially if the acquisition time does not have to be short.
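The narrow beam divergence mentioned above can be quantified with the standard diffraction-limited estimate of roughly 1.22 λ/D. The Python lines below use an 850 nm wavelength (within the 780-920 nm band above), a 25 cm aperture (consistent with the "less than 30 cm" antennas) and an assumed 2,000 km intersatellite range; the range and aperture values are illustrative assumptions, not figures from the text.

import math

wavelength = 850e-9     # m, within the 780-920 nm band mentioned above
aperture = 0.25         # m, assumed transmit telescope diameter (typically < 30 cm)
link_range = 2_000e3    # m, assumed intersatellite distance (illustrative)

# Diffraction-limited divergence estimate and the resulting beam footprint.
divergence = 1.22 * wavelength / aperture          # radians
footprint = divergence * link_range                # beam spread at the receiver

print(f"divergence ~ {divergence * 1e6:.1f} microradians")   # ~4.1 urad
print(f"footprint  ~ {footprint:.0f} m at 2000 km")          # ~8 m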

Millipede

Definition
Today data storage is dominated by the use of magnetic disks. Storage densities of more than 5 Gb/cm² have been achieved, and in the past 40 years areal density has increased by 6 orders of magnitude. But there is a physical limit: it has been predicted that superparamagnetic effects - the bit size at which stored information becomes volatile as a function of time - will limit the densities of current longitudinal recording media to about 15.5 Gb/cm². In the near future the nanometer scale will presumably pervade the field of data storage. In the magnetic storage used today, there is no clear-cut way to achieve the nanometer scale in all three dimensions, so new techniques like holographic memory and probe-based data storage are emerging. If an emerging technology is to be considered a serious candidate to replace an existing technology, it should offer long-term perspectives: any new technology with better areal density than today's magnetic storage should have long-term potential for further scaling, desirably down to the nanometer or even atomic scale. The only tool known today that is simple and yet offers these long-term perspectives is a nanometer-sharp tip, as in the atomic force microscope (AFM) and scanning tunneling microscope (STM). The simple tip is a very reliable tool that concentrates on one functionality: the ultimate local confinement of interaction. In local probe-based data storage we have a cantilever with a very small tip at its end. Small indentations are made in a polymer medium laid over a silicon substrate, and these indentations serve as data storage locations. A single AFM operates best on the microsecond time scale. Conventional magnetic storage, however, operates at best on the nanosecond time scale, making it clear that AFM data rates have to be improved by at least three orders of magnitude to be competitive with current and future magnetic recording. The "millipede" concept is a new approach for storing data at high speed and with ultrahigh density.
Millipede Concept
Millipede is a highly parallel scanning-probe-based data storage concept that offers areal storage densities far beyond superparamagnetic limits and data rates comparable to today's magnetic recording. At first glance, millipede looks like a conventional 14 x 7 mm² silicon chip. Mounted at the center of the chip is a miniature two-dimensional array of 1024 'V'-shaped cantilevered arms that are 70 µm long and 0.5 µm thick. A nano-sharp, fang-like tip, only 20 nm in diameter, hangs from the apex of each cantilever, and multiplex drivers allow each tip to be addressed individually. Beneath the cantilever array is a thin layer of polymer film deposited on a movable, three-axis silicon table. The 2-D AFM cantilever array storage technique called "millipede" is based on mechanical parallel x/y scanning of either the entire cantilever array chip or the storage medium. In addition, a feedback-controlled z-approach and leveling scheme brings the entire cantilever array chip into contact with the storage medium. The tip-medium contact is maintained and controlled while x/y scanning is performed for read/write. The millipede approach is not based on individual z-feedback for each cantilever; rather, it uses a feedback control for the entire chip, which greatly simplifies the system. However, this requires very good control and uniformity of tip height and cantilever bending.
Chip approach/leveling makes use of additionally integrated approaching cantilever sensors in the corners of the array chip to control the approach of the chip to the storage medium. Signals from these sensors provide feedback signals to adjust the z-actuators until contact with the medium is established. Feedback loops maintain the chip leveled and in contact with the surface while x/y scanning is performed for write/read operations.

Millipede Is Unique
Conventional data storage devices, such as disk drives and CDs/DVDs, are based on systems that sense changes in magnetic fields or light to perform the read/write/store/erase functions. Millipede is unique both in form and in the way it performs data storage tasks; it is based on a chip-mounted mechanical system that senses a physical change in the storage medium. The millipede's technology is actually closer, although on an atomic scale, to the archaic punched card than to more recent magnetic media. Using millipede, IBM scientists have demonstrated a data storage density of a trillion bits per square inch - 20 times higher than the densest magnetic storage available today. Millipede is dense enough to store the equivalent of 25 DVDs on a surface the size of a postage stamp. This technology may boost the storage capacity of handheld devices - personal digital assistants (PDAs) and cell phones - which are often criticized for their low storage capabilities.
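As a rough sanity check on the figures above, the Python lines below convert one trillion bits per square inch to Gb/cm² and estimate how many single-layer DVDs would fit on a postage-stamp-sized area. The 2.5 cm x 2.5 cm stamp size is an assumption chosen for illustration.

IN2_TO_CM2 = 6.4516                      # square inches to square centimetres
density_bits_per_in2 = 1e12              # ~1 Tb/in^2 demonstrated with millipede
density_bits_per_cm2 = density_bits_per_in2 / IN2_TO_CM2

stamp_area_cm2 = 2.5 * 2.5               # assumed postage-stamp area
stamp_bits = density_bits_per_cm2 * stamp_area_cm2

dvd_bits = 4.7e9 * 8                     # single-layer DVD, 4.7 GB
print(f"{density_bits_per_cm2 / 1e9:.0f} Gb/cm^2")       # ~155 Gb/cm^2
print(f"~{stamp_bits / dvd_bits:.0f} DVDs per stamp")    # ~26, consistent with '25 DVDs'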

Distributed Integrated Circuits

Definition
"Divide and conquer" has been the underlying principle used to solve many engineering and social problems. Over many years engineers have devised systematic ways to divide a design objective into a collection of smaller projects and tasks defined at multiple levels of abstraction. This approach has been quite successful in an environment where a large number of people with different types and levels of expertise work together to realize a given objective in a limited time. Communication system design is a perfect example of this process, where the communication system is initially defined atthe application level, then descried using system level terms, leading to an architecture using a number of cascaded sub blocks that can be implemented as integrated circuits. The integrated circuit design process is then divided further by defining the specifications for circuit building blocks and their interfaces that together form the system. The circuit designer works with the specifications at a lower level of abstraction dealing with transistors and passive components whose models have been extracted from the measurements, device simulations, or analytical calculations based on the underlying physical principles of semiconductor physics and electrodynamics. This process of breaking down the ultimate objective into smaller, more manageable projects and tasks has resulted in an increased in the number of experts with more depth yet in more limited sublevels of abstraction. While this divide-and-conquer process has been quite successful in streamlining innovation, the overspecialization and short time specifications associated with today's design cycles sometimes result in suboptimal designs in the grand scheme of things. Also, in any reasonably mature field many of the possible innovations leading to useful new solutions within a given level of abstraction have already been explored. Further advancements beyond these local optima can be achieved by looking at the problem across multiple levels of abstraction to find solutions not easily seen when one confines one's search space to one level ( e.g., transistor-level circuit design). This explains why most of today's research activities occurs at the boundaries between different levels of abstractions artificially created to render the problem more tractable. Distributed circuit design is a multilevel approach allowing a more integral co-design of the building blocks at the circuit and device levels. Unlike most conventional circuits, it relies on multiple parallel signal paths operating in harmony to achieve the design objective. This approach offers attractive solutions to some of the more challenging problems in high speed communication circuit design. Issues In High-Speed Integrated Communication Circuits Integration of high-speed circuits for wireless (e.g., cellular phones) and wired applications (e.g., optical fiber communications) poses several challenges. High-speed analog integrated circuits used in wireless and wired communication systems have to achieve tight and usually contradictory specifications. Some of the most common specifications are the frequency of operation, power dissipation, dynamic range, and gain. Once in a manufacturing setting, additional issues, such as cost, reliability, and repeatability, also come into play. To meet these specifications, the designer usually has to deal with physical and topological limitations caused by noise, device non-linearity, small power supply, and energy loss in the components. 
Frequency of operation is perhaps one of the most important properties of communication integrated circuits, since a higher frequency of operation is one of the more evident methods of achieving larger bandwidth, and hence higher bit rates, in digital communication systems. A transistor in a given process technology is usually characterized by its unity-gain frequency, fT. This is the frequency at which the current gain of the transistor drops to unity. While the unity-gain frequency of a transistor provides an approximate measure to compare transistors in different process technologies, circuits built using these transistors rarely operate close to fT; they usually operate at frequencies 3-100 times lower, depending on the complexity of their function. There are two main reasons for this behavior. First, analog building blocks and systems usually rely on closed-loop operation based on negative feedback to perform a given function independent of device parameter variations. An open-loop gain much higher than one is thus required for the negative feedback to be effective. Even if no feedback is present and open-loop operation is acceptable, a higher gain usually improves the noise and power efficiency of the circuits. Therefore the transistor has to operate at a frequency lower than fT to provide the desired gain. Second, passive devices (e.g. capacitors and inductors), necessary in most high-speed analog circuits, have their own frequency limitations due to parasitic components that can become the bottleneck of the design. The combination of these two effects significantly lowers the maximum frequency of reliable operation in most conventional circuit building blocks and provides a motivation to pursue alternative approaches to alleviate the bandwidth limitations.
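As a rough, hedged illustration of the first point above, the following sketch (the values are assumed, not taken from the text) applies the simple single-pole rule of thumb that the gain-bandwidth product of an amplifying stage is roughly constant, so the usable bandwidth is approximately fT divided by the required gain.

# Rough illustration of why circuits run well below fT: for a simple
# single-pole amplifier stage, gain x bandwidth is roughly constant,
# so usable bandwidth ~ fT / gain.  Both numbers below are hypothetical.
f_t = 100e9          # transistor unity-gain frequency, 100 GHz (assumed)
required_gain = 20   # open-loop gain needed for effective feedback (assumed)

usable_bandwidth = f_t / required_gain
print("Approximate usable bandwidth: %.1f GHz (%.0fx below fT)"
      % (usable_bandwidth / 1e9, f_t / usable_bandwidth))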

AC Performance Of Nanoelectronics

Definition
Nanoelectronic devices fall into two classes: tunnel devices and ballistic transport devices. In tunnel devices, single-electron effects occur if the tunnel resistance is larger than h/e² ≈ 25.8 kΩ. In ballistic devices with cross-sectional dimensions in the range of the quantum-mechanical wavelength of electrons, the resistance is also of order h/e² ≈ 25.8 kΩ. This high resistance may seem to restrict the operational speed of nanoelectronics in general. However, the capacitance values and drain-source spacing are typically small, which gives rise to very small RC times and transit times of the order of picoseconds or less. Thus the speed may be very large, up to the THz range. The goal of this seminar is to present models and performance predictions about the effects that set the speed limit in carbon nanotube transistors, which form the ideal test bed for understanding the high-frequency properties of nanoelectronics because they may behave as ideal ballistic 1-D transistors.

Ballistic Transport - An Outline
When carriers travel through a semiconductor material, they are likely to be scattered by any number of possible sources, including acoustic and optical phonons, ionized impurities, defects, interfaces, and other carriers. If, however, the distance traveled by the carrier is smaller than the mean free path, it is likely not to encounter any scattering events; it can, as a result, move ballistically through the channel. To first order, the existence of ballistic transport in a MOSFET depends on the value of the characteristic scattering length (i.e. mean free path) in relation to the channel length of the transistor. This scattering length, l, can be estimated from the measured carrier mobility μ as l = vth·τ, where τ = μm*/q is the average scattering time, m* is the carrier effective mass, and vth is the thermal velocity. Because scattering mechanisms determine the extent of ballistic transport, it is important to understand how they depend upon operating conditions such as the normal electric field and the ambient temperature.

Dependence On Normal Electric Field
In state-of-the-art MOSFET inversion layers, carrier scattering is dominated by phonons, impurities (Coulomb interaction), and surface roughness scattering at the Si-SiO2 interface. The relative importance of each scattering mechanism depends on the effective electric field component normal to the conduction channel. At low fields, impurity scattering dominates due to strong Coulombic interactions between the carriers and the impurity centers. As the electric field is increased, acoustic phonons begin to dominate the scattering process. At very high fields, carriers are pulled closer to the Si-SiO2 gate oxide interface; thus, surface roughness scattering degrades carrier mobility. A universal mobility model has been developed to relate the effective field strength to the effective carrier mobility limited by phonon and surface roughness scattering.

Dependence On Temperature
When the temperature is changed, the relative importance of each of the aforementioned scattering mechanisms is altered. Phonon scattering becomes less important at very low temperatures. Impurity scattering, on the other hand, becomes more significant because carriers are moving more slowly (the thermal velocity is decreased) and thus have more time to interact with impurity centers. Surface roughness scattering remains the same because it does not depend on temperature. At liquid nitrogen temperature (77 K) and an effective electric field of 1 MV/cm, the electron and hole mobilities are ~700 cm²/V·s and ~100 cm²/V·s, respectively.
Using the above equations, the scattering lengths are approximately 17 nm and 3.6 nm. These scattering lengths can be assumed to be worst-case scenarios, as large operating voltages (1 V) and aggressively scaled gate oxides (~10 Å) are assumed. Thus, actual scattering lengths will likely be larger than the calculated values.
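A hedged back-of-the-envelope version of this estimate is sketched below. The 77 K temperature and the mobilities come from the text; the effective masses, and hence the exact results (which come out of the same order as, but not identical to, the 17 nm and 3.6 nm quoted above), are assumptions for illustration.

import math

# First-order mean free path from mobility: l = v_th * tau, with tau = mu*m*/q.
# Effective masses below are assumed; results depend on the masses and on the
# exact thermal-velocity definition used.
q, k = 1.602e-19, 1.381e-23      # electron charge (C), Boltzmann constant (J/K)
m0 = 9.109e-31                   # free-electron mass (kg)
T = 77.0                         # temperature (K), from the text

def mean_free_path(mu_cm2, m_eff):
    mu = mu_cm2 * 1e-4                        # cm^2/V.s -> m^2/V.s
    tau = mu * m_eff / q                      # average scattering time
    v_th = math.sqrt(3 * k * T / m_eff)       # thermal velocity
    return v_th * tau

print("electrons: %.1f nm" % (mean_free_path(700, 0.26 * m0) * 1e9))
print("holes:     %.1f nm" % (mean_free_path(100, 0.39 * m0) * 1e9))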

Further device design considerations in maximizing this scattering length will be discussed in the last section of this paper. Still, the values calculated above are certainly in the range of transistor gate lengths currently being studied in advanced MOSFET research (<50nm). Ballistic carrier transport should thus become increasingly important as transistor channel lengths are further reduced in size. In addition, it should be noted that the mean free path of holes is generally smaller than that of electrons. Thus, it should be expected that ballistic transport in PMOS transistors is more difficult to achieve, since current conduction occurs through hole transport. Calculation of the mean scattering length, however, can only be regarded as a first-order estimation of ballistic transport. To accurately determine the extent of ballistic transport evident in a particular transistor structure, Monte Carlo simulation methods must be employed. Only by modeling the random trajectory of each carrier traveling through the channel can we truly assess the extent of ballistic transport in a MOSFET.

High Performance DSP Architectures

Definition
Digital Signal Processing is carried out by mathematical operations. Digital Signal Processors are microprocessors specifically designed to handle Digital Signal Processing tasks. These devices have seen tremendous growth in the last decade, finding use in everything from cellular telephones to advanced scientific instruments. In fact, hardware engineers use "DSP" to mean Digital Signal Processor, just as algorithm developers use "DSP" to mean Digital Signal Processing. DSP has become a key component in many consumer, communications, medical, and industrial products. These products use a variety of hardware approaches to implement DSP, ranging from the use of off-the-shelf microprocessors to field-programmable gate arrays (FPGAs) to custom integrated circuits (ICs). Programmable "DSP processors," a class of microprocessors optimized for DSP, are a popular solution for several reasons. In comparison to fixed-function solutions, they have the advantage of potentially being reprogrammed in the field, allowing product upgrades or fixes. They are often more cost-effective than custom hardware, particularly for low-volume applications, where the development cost of ICs may be prohibitive. In comparison to general-purpose processors, DSP processors often have an advantage in terms of speed, cost, and energy efficiency.

DSP Algorithms Mould DSP Architectures
From the outset, DSP algorithms have moulded DSP processor architectures. For nearly every feature found in a DSP processor, there are associated DSP algorithms whose computation is in some way eased by inclusion of this feature. Therefore, perhaps the best way to understand the evolution of DSP architectures is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors.

Fast Multipliers
The FIR filter is mathematically expressed as a sum of products of a vector of input data with a vector of filter coefficients. For each "tap" of the filter, a data sample is multiplied by a filter coefficient, with the result added to a running sum for all of the taps. Hence, the main component of the FIR filter algorithm is a dot product: multiply and add, multiply and add. These operations are not unique to the FIR filter algorithm; in fact, multiplication is one of the most common operations performed in signal processing - convolution, IIR filtering, and Fourier transforms also all involve heavy use of multiply-accumulate operations. Originally, microprocessors implemented multiplications by a series of shift and add operations, each of which consumed one or more clock cycles. As might be expected, faster multiplication hardware yields faster performance in many DSP algorithms, and for this reason all modern DSP processors include at least one dedicated single-cycle multiplier or combined multiply-accumulate (MAC) unit.

Multiple Execution Units
DSP applications typically have very high computational requirements in comparison to other types of computing tasks, since they often must execute DSP algorithms in real time on lengthy segments of signals sampled at 10-100 kHz or higher. Hence, DSP processors often include several independent execution units that are capable of operating in parallel; for example, in addition to the MAC unit, they typically contain an arithmetic-logic unit (ALU) and a shifter.

Efficient Memory Accesses
Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. To address the need for increased memory bandwidth, early DSP processors developed different memory architectures that could support multiple memory accesses per cycle. Often, instructions were stored in one memory bank, while data was stored in another. With this arrangement, the processor could fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches, thus enabling the processor to execute a MAC in a single cycle. High memory bandwidth requirements are often further supported via dedicated hardware for calculating memory addresses. These address generation units operate in parallel with the DSP processor's main execution units, enabling it to access data at new locations in memory without pausing to calculate the new address. Memory accesses in DSP algorithms tend to exhibit very predictable patterns; for example, in an FIR filter the filter coefficients are accessed sequentially from start to finish for each sample, then accesses start over from the beginning of the coefficient vector when processing the next input sample.
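To make the repeated multiply-accumulate pattern concrete, here is a minimal sketch of the FIR inner loop described above (plain Python, not DSP assembly; the 4-tap moving-average coefficients are just an example).

# Each output sample is a dot product of the most recent input samples
# with the filter coefficients: one multiply-accumulate (MAC) per tap.
def fir_filter(samples, coeffs):
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k in range(len(coeffs)):          # one MAC per tap
            if n - k >= 0:
                acc += coeffs[k] * samples[n - k]
        out.append(acc)
    return out

# Example: a simple 4-tap moving-average filter
print(fir_filter([1, 2, 3, 4, 5, 6], [0.25, 0.25, 0.25, 0.25]))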

FinFET Technology

Definition
Since the fabrication of the MOSFET, the minimum channel length has been shrinking continuously. The motivation behind this decrease has been an increasing interest in high-speed devices and in very large-scale integrated circuits. The sustained scaling of the conventional bulk device requires innovations to circumvent the barriers of fundamental physics constraining the conventional MOSFET device structure. The limits most often cited are control of the density and location of dopants, needed to provide a high Ion/Ioff ratio at a finite subthreshold slope, and quantum-mechanical tunneling of carriers through the thin gate oxide, from drain to source, and from drain to body. The channel depletion width must scale with the channel length to contain the off-state leakage Ioff. This leads to high doping concentrations, which degrade the carrier mobility and cause junction edge leakage due to tunneling. Furthermore, the dopant profile control, in terms of depth and steepness, becomes much more difficult. The gate oxide thickness tox must also scale with the channel length to maintain gate control, proper threshold voltage VT and performance. The thinning of the gate dielectric results in gate tunneling leakage, degrading the circuit performance, power and noise margin. Alternative device structures based on silicon-on-insulator (SOI) technology have emerged as an effective means of extending MOS scaling beyond bulk limits for mainstream high-performance or low-power applications. Partially depleted (PD) SOI was the first SOI technology introduced for high-performance microprocessor applications. The ultra-thin-body fully depleted (FD) SOI and the non-planar FinFET device structures promise to be the potential "future" technology/device choices. In these device structures, the short-channel effect is controlled by geometry, and the thin Si film limits the off-state leakage. For effective suppression of the off-state leakage, the thickness of the Si film must be less than one quarter of the channel length. The desired VT is achieved by manipulating the gate work function, such as the use of a midgap material or poly-SiGe. Concurrently, material enhancements, such as the use of a) high-k gate material and b) strained Si channel for mobility and current drive improvement, have been actively pursued. As scaling approaches multiple physical limits and as new device structures and materials are introduced, unique and new circuit design issues continue to be presented. In this article, we review the design challenges of these emerging technologies, with particular emphasis on the implications and impacts of individual device scaling elements and unique device structures on circuit design. We focus on the planar device structures, from continuous scaling of PD SOI to FD SOI, and on new materials such as the strained-Si channel and high-k gate dielectrics.

Partially Depleted [PD] SOI
The PD floating-body MOSFET was the first SOI transistor generically adopted for high-performance applications, primarily due to device and processing similarities to the bulk CMOS device. The PD SOI device is largely identical to the bulk device, except for the addition of a buried oxide ("BOX") layer. The active Si film thickness is larger than the channel depletion width, thus leaving a quasi-neutral "floating" body region underneath the channel. The VT of the device is completely decoupled from the Si film thickness, and the doping profiles can be tailored for any desired VT. The device offers several advantages for performance/power improvement:
1) Reduced junction capacitance,
2) Lower average threshold due to positive VBS during switching, and
3) Dynamic loading effects, in which the load device tends to be in the high-VT state during switching.
The performance comes at the cost of some design complexity resulting from the floating body of the device, such as 1) the parasitic bipolar effect and 2) hysteretic VT variation.

Stream Processor

Definition
For many signal processing applications, both programmability and efficiency are desired. With current technology, either programmability or efficiency is achievable, but not both. Conventionally, ASICs are used where highly efficient systems are desired. The problem with an ASIC is that once fabricated it cannot be enhanced or changed; a new ASIC is needed for each modification. The other option is a microprocessor-based or DSP-based implementation. These provide programmability, but not the required efficiency. Now, with stream processors, we can achieve both simultaneously. A comparison of the efficiency and programmability of stream processors and other techniques is presented. We will look into how efficiency and programmability are achieved in a stream processor. We will also examine the challenges faced by the stream processor architecture. Complex modern signal and image processing applications require hundreds of GOPS (giga, or billions, of operations per second) with a power budget of a few watts, i.e. an efficiency of about 100 GOPS/W (GOPS per watt), or 10 pJ/op (picojoules per operation). To meet this requirement, current media processing applications use ASICs that are tailor-made for a particular application. Such processors require significant design effort and are difficult to change when a new media processing application or algorithm evolves. The other alternative to meet the changing needs is to go for a DSP or microprocessor, which are highly flexible. But these do not provide the high efficiency needed by the application. Stream processors provide a solution to this problem by giving efficiency and programmability simultaneously. They achieve this by expressing signal processing problems as signal flow graphs with streams flowing between computational kernels. Stream processors have efficiency comparable to ASICs (200 GOPS/W), while being programmable in a high-level language. Many signal processing applications require both efficiency and programmability. The complexity of modern media processing, including 3D graphics, image compression, and signal processing, requires tens to hundreds of billions of computations per second. To achieve these computation rates, current media processors use special-purpose architectures tailored to one specific application. Such processors require significant design effort and are thus difficult to change as media-processing applications and algorithms evolve. Digital television, surveillance video processing, automated optical inspection, and mobile cameras, camcorders, and 3G cellular handsets have similar needs. The demand for flexibility in media processing motivates the use of programmable processors. However, very large-scale integration constraints limit the performance of traditional programmable architectures. In modern VLSI technology, computation is relatively cheap - thousands of arithmetic logic units that operate at multi-gigahertz rates can fit on a modestly sized 1 cm² die. The problem is that delivering instructions and data to those ALUs is prohibitively expensive. For example, only 6.5 percent of the Itanium 2 die is devoted to the 12 integer and two floating-point ALUs and their register files; communication, control, and storage overhead consume the remaining die area. In contrast, the more efficient communication and control structures of a special-purpose graphics chip, such as the NVIDIA GeForce4, enable the use of many hundreds of floating-point and integer ALUs to render 3D images.
Conventional signal processing solutions can provide high efficiency or programmability, but are unable to provide both at the same time. In applications that demand efficiency, a hardwired application-specific processor, an ASIC (application-specific integrated circuit) or ASSP (application-specific standard part), has an efficiency of 50 to 500 GOPS/W, but offers little if any flexibility. At the other extreme, microprocessors and DSPs (digital signal processors) are completely programmable but have efficiencies of less than 10 GOPS/W. DSP arrays and FPGAs (field-programmable gate arrays) offer higher performance than individual DSPs, but have roughly the same efficiency. Moreover, these solutions are difficult to program, requiring parallelization, partitioning, and, for FPGAs, hardware design. Applications today must choose between efficiency and programmability.
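The kernel-and-stream structure mentioned above can be illustrated with a small, purely conceptual sketch: ordinary Python generators stand in for hardware kernels, and the kernels and data are invented for the example (this is not the instruction set of any real stream processor).

# Data flows as streams between small computational kernels, so each kernel
# touches only local data and kernels can, in principle, run in parallel.
def kernel_scale(stream, gain):
    for x in stream:
        yield gain * x

def kernel_clip(stream, limit):
    for x in stream:
        yield max(-limit, min(limit, x))

def kernel_sum(stream):
    total = 0.0
    for x in stream:
        total += x
    return total

samples = range(-5, 6)                               # input stream (made-up data)
pipeline = kernel_clip(kernel_scale(samples, 2.0), 6.0)
print(kernel_sum(pipeline))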

General Packet Radio Service

Definition
Wireless phone use is taking off around the world. Many of us would no longer know how to cope without our cell phones. Always being connected offers us flexibility in our lifestyles, makes us more productive in our jobs, and makes us feel more secure. So far, voice has been the primary wireless application. But with the Internet continuing to influence an increasing proportion of our daily lives, and more of our work being done away from the office, it is inevitable that the demand for wireless data is going to ignite. Already, in those countries that have cellular-data services readily available, the number of cellular subscribers taking advantage of data has reached significant proportions. But to move forward, the question is whether current cellular-data services are sufficient, or whether the networks need to deliver greater capabilities. The fact is that with proper application configuration, use of middleware, and new wireless-optimized protocols, today's cellular data can offer tremendous productivity enhancements. But for those potential users who have stood on the sidelines, subsequent generations of cellular data should overcome all of their objections. These new services will roll out both as enhancements to existing second-generation cellular networks and as an entirely new third generation of cellular technology. In 1999, the primary cellular-based data services were Cellular Digital Packet Data (CDPD), circuit-switched data services for GSM networks, and circuit-switched data services for CDMA networks. All of these services offer speeds in the 9.6 Kbps to 14.4 Kbps range. The basic reason for such low speeds is that in today's cellular systems, data is allocated the same radio bandwidth as a voice call. Since voice encoders (vocoders) in current cellular networks digitize voice in the range of 8 to 13 Kbps, that's about the amount available for data. Back then, 9.6 Kbps was considered more than adequate. Today, it can seem slow with graphical or multimedia content, though it is more than adequate for text-based applications and carefully configured applications. There are two basic ways that the cellular industry is currently delivering data services. One approach is with smart phones, which are cellular phones that include a microbrowser. With these, you can view specially formatted Internet information. The other approach is through wireless modems, supplied either in PC Card format or by using a cell phone with a cable connection to a computer. GPRS services will mirror the existing GSM services, with the exception that GPRS offers a much higher transmission rate, which will improve most of the existing services and open the possibility of new services as operators and users (business and private) come to appreciate the newly introduced technology. Services such as the Internet, videoconferencing and on-line shopping will be as smooth as talking on the phone; moreover, we'll be able to access these services whether we are at work, at home or traveling. In the new information age, the mobile phone will deliver much more than just voice calls. It will become a multimedia communications device, capable of sending and receiving graphic images and video.

The most common methods used for data transfer are circuit switching and packet switching. With circuit-switched transmission, a dedicated circuit is first established across a sequence of links and then the whole channel is allocated to a single user for the whole duration of the call. With packet-switched transmission, the data is first cut into small parts called packets, which are then sent in sequence to the receiver, which reassembles them into the original data. This ensures that the same link resources can be shared at the same time by many different users. The link is used only when the user has something to send. When there is no data to be sent, the link is free to be used by another call. Packet switching is therefore ideal for bursty traffic such as data, whereas constant-rate traffic such as voice is well served by circuit switching.
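A toy sketch of this cut-and-reassemble idea follows (plain Python, with an arbitrary packet size and message; real GPRS framing is of course far more involved).

# The data is cut into fixed-size packets, each carrying a sequence number,
# and the receiver reassembles them in order even if they arrive out of order.
def packetize(data, size=4):
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    return b"".join(payload for _, payload in sorted(packets))

message = b"GPRS sends data only when there is something to send"
packets = packetize(message)
packets.reverse()                     # simulate out-of-order arrival
assert reassemble(packets) == message
print(len(packets), "packets reassembled correctly")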

Free Space Optics

Definition
Mention optical communication and most people think of fiber optics. But light travels through air for a lot less money. So it is hardly a surprise that clever entrepreneurs and technologists are borrowing many of the devices and techniques developed for fiber-optic systems and applying them to what some call fiber-free optical communication. Although it only recently, and rather suddenly, sprang into public awareness, free-space optics is not a new idea. It has roots that go back over 30 years, to the era before fiber-optic cable became the preferred transport medium for high-speed communication. In those days, the notion that FSO systems could provide high-speed connectivity over short distances seemed futuristic, to say the least. But research done at that time has made possible today's free-space optical systems, which can carry full-duplex (simultaneous bidirectional) data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers. FSO first appeared in the 1960s, for military applications. At the end of the 1980s, it appeared as a commercial option, but technological restrictions prevented it from succeeding. Low transmission reach, low capacity, severe alignment problems as well as vulnerability to weather interference were the major drawbacks at that time. Optical communication without wires, however, has evolved. Today, FSO systems offer 2.5 Gb/s rates with carrier-class availability. Metropolitan, access and LAN networks are reaping the benefits. The use of free-space optics is particularly interesting when we consider that the majority of customers do not have access to fiber, and that fiber installation is expensive and takes a long time. Moreover, right-of-way costs, difficulties in obtaining government licenses for new fiber installation, etc. are further problems that have turned FSO into the option of choice for short-reach applications. FSO uses lasers, or light pulses, to send packetized data in the terahertz (THz) spectrum range. Air, not fiber, is the transport medium. This means that urban businesses needing fast data and Internet access have a significantly lower-cost option.

FSO
FSO technology is implemented using laser devices. These laser devices, or terminals, can be mounted on rooftops, on corners of buildings, or even inside offices behind windows. FSO devices look like security video cameras. Low-power infrared beams, which do not harm the eyes, are the means by which free-space optics technology transmits data through the air between transceivers, or link heads, mounted on rooftops or behind windows. It works over distances of several hundred meters to a few kilometers, depending upon atmospheric conditions. Commercially available free-space optics equipment provides data rates much higher than digital subscriber lines or coaxial cables can ever hope to offer. And systems even faster than the present range of 10 Mb/s to 1.25 Gb/s have been announced, though not yet delivered.
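The range and weather dependence mentioned above is often estimated with a first-order link budget that combines geometric (beam-divergence) loss and atmospheric attenuation. The sketch below uses that common approximation; it is not taken from the text, and every number in it is a hypothetical example.

import math

# First-order free-space-optics link budget: received power falls off with
# beam divergence (geometric loss) and with atmospheric attenuation.
def received_power_dbm(p_tx_dbm, d_tx_m, d_rx_m, divergence_rad, range_m,
                       atten_db_per_km):
    beam_diameter = d_tx_m + divergence_rad * range_m
    geometric_loss_db = 20 * math.log10(beam_diameter / d_rx_m)
    atmospheric_loss_db = atten_db_per_km * range_m / 1000.0
    return p_tx_dbm - geometric_loss_db - atmospheric_loss_db

# 10 dBm transmitter, 2.5 cm / 8 cm apertures, 2 mrad divergence,
# 1 km link, 10 dB/km haze attenuation (all assumed for illustration)
print("Rx power: %.1f dBm" %
      received_power_dbm(10, 0.025, 0.08, 0.002, 1000, 10))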

Fiber Distributed Data Interface

Definition
The Fiber Distributed Data Interface (FDDI) standard was produced by the ANSI X3T9.5 standards committee in the mid-1980s. During this period, high-speed engineering workstations were beginning to tax the capabilities of existing local-area networks (LANs) (primarily Ethernet and Token Ring). A new LAN was needed that could easily support these workstations and their new distributed applications. At the same time, network reliability was becoming an increasingly important issue as system managers began to migrate mission-critical applications from large computers to networks. FDDI was developed to fill these needs. After completing the FDDI specification, ANSI submitted FDDI to the International Organization for Standardization (ISO). ISO has created an international version of FDDI that is completely compatible with the ANSI standard version. Today, although FDDI implementations are not as common as Ethernet or Token Ring, FDDI has gained a substantial following that continues to increase as the cost of FDDI interfaces diminishes. FDDI is frequently used as a backbone technology as well as a means to connect high-speed computers in a local area.

Technology Basics
FDDI specifies a 100-Mbps, token-passing, dual-ring LAN using a fiber-optic transmission medium. It defines the physical layer and the media-access portion of the link layer, and so is roughly analogous to IEEE 802.3 and IEEE 802.5 in its relationship to the Open System Interconnection (OSI) reference model. Although it operates at faster speeds, FDDI is similar in many ways to Token Ring. The two networks share many features, including topology (ring), media-access technique (token passing), reliability features (redundant rings, for example), and others. One of the most important characteristics of FDDI is its use of optical fiber as a transmission medium. Optical fiber offers several advantages over traditional copper wiring, including security (fiber does not emit electrical signals that can be tapped), reliability (fiber is immune to electrical interference), and speed (optical fiber has much higher throughput potential than copper cable). FDDI defines the use of two types of fiber: single-mode (sometimes called monomode) and multimode. Modes can be thought of as bundles of light rays entering the fiber at a particular angle. Single-mode fiber allows only one mode of light to propagate through the fiber, while multimode fiber allows multiple modes of light to propagate through the fiber. Because multiple modes of light propagating through the fiber may travel different distances (depending on the entry angles), causing them to arrive at the destination at different times (a phenomenon called modal dispersion), single-mode fiber is capable of higher bandwidth and greater cable run distances than multimode fiber. Due to these characteristics, single-mode fiber is often used for interbuilding connectivity, while multimode fiber is often used for intrabuilding connectivity. Multimode fiber uses light-emitting diodes (LEDs) as the light-generating devices, while single-mode fiber generally uses lasers.

E-Nose

Definition
In an ever-developing world, where electronic devices are duplicating every other sense of perception, the sense of smell is lagging behind. Yet, recently, there has been an urgent increase in the need for detecting odours, to replace the human job of sensing and quantification. Some of the most important applications fall in the category where human beings cannot afford to risk smelling the substance. Other important applications are continuous monitoring, medical applications, etc. These applications allow man to perform tasks that were once considered impossible. Fast-paced technology has helped develop sophisticated devices that have brought the electronic nose to miniature sizes and advanced capabilities. The trend is such that there will be accurate, qualitative and quantitative measurements of odour in the near future. Living beings interact with the surrounding environment through particular interfaces called senses, which can be divided into two groups: those detecting physical quantities and those detecting chemical quantities. Physical interfaces (which deal with acoustic, optical, temperature and mechanical interaction mechanisms) are sufficiently well known, and a wealth of successful studies to construct their artificial counterparts has been carried out in past years. On the other side, the chemical interfaces (bio-transducers of chemical species in air: olfaction, and in solution: taste), even if well described in the literature, present some aspects of their physiological working principle that are still unclear. A psychological difference between the two groups in human beings should also be remarked. Indeed, the information from the physical senses can be adequately elaborated, verbally expressed, firmly memorized and fully communicated. On the contrary, chemical information, coming from the nose and tongue, is surrounded by vagueness, and this is reflected in the poor description and memorization capacity in reporting olfactory and tasting experiences. Chemical information is of primary importance for most animals; for many of them, indeed, chemistry is the only realm with which they are concerned, while in human beings evolution has enhanced almost exclusively the physical interfaces, leaving little care for the chemical interfaces, if we exclude unconscious acquisition and secondary behaviours. Because of these intrinsic difficulties in understanding the nature of these senses, for many years only sporadic research on the possibility of fabricating artificial olfactory systems was performed. Only at the end of the eighties was a new and promising approach introduced. It was based on the assumption that an array of non-selective chemical sensors, matched with a suitable data processing method, could mimic the functions of olfaction. In the past decade, electronic nose instrumentation has generated much interest internationally for its potential to solve a wide variety of problems in fragrance and cosmetics production, food and beverage manufacturing, chemical engineering, environmental monitoring, and more recently, medical diagnostics and bioprocesses. Several dozen companies are now designing and selling electronic nose units globally for a wide variety of expanding markets. An electronic nose is a machine that is designed to detect and discriminate among complex odours using a sensor array. The sensor array consists of broadly tuned (non-specific) sensors that are treated with a variety of odour-sensitive biological or chemical materials.
An odour stimulus generates a characteristic fingerprint (or smell-print) from the sensor array. Patterns or fingerprints from known odours are used to construct a database and train a pattern recognition system so that unknown odours can subsequently be classified and identified. Thus, electronic nose instruments consist of hardware components to collect and transport odours to the sensor array, as well as electronic circuitry to digitise and store the sensor responses for signal processing.
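As a minimal sketch of that classification step: the sensor values, odour names and the nearest-neighbour rule below are invented for illustration, and real instruments use more elaborate pattern recognition.

import math

# Each odour produces a "smell-print" (a vector of sensor responses); known
# prints form a database, and an unknown print is classified by nearest neighbour.
database = {
    "coffee":  [0.8, 0.1, 0.4, 0.7],
    "banana":  [0.2, 0.9, 0.3, 0.1],
    "ethanol": [0.5, 0.4, 0.9, 0.2],
}

def classify(smell_print):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda name: distance(database[name], smell_print))

unknown = [0.75, 0.15, 0.45, 0.65]            # response of the 4-sensor array
print("Closest match:", classify(unknown))    # -> coffee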

Embryonics Approach Towards Integrated Circuits

Definition
Embryonics is embryonic electronics. The workings of multicellular organization in living beings suggest that concepts from biology can be applied to the development of new "embryonic" integrated circuits. The final objective is the development of VLSI circuits that can partially reconstruct themselves in case of a minor fault (self-repair) or completely reconstruct the original device in case of a major fault (self-replication). These features are advantageous for applications that depend on high reliability, like avionics and medical electronics. The basic primitive of the system is the molecule: the element of a new FPGA, essentially a multiplexer associated with a programmable connection network. A finite set of molecules comprises a cell, i.e., a very simple processor associated with some memory resources. A finite set of cells comprises an organism, i.e., an application-specific multiprocessor system. The organism itself can self-replicate, giving rise to a population of identical organisms. Self-repair and self-replication are achieved by providing spare cells. This seminar report tries to bring out the basic concepts in the embryonics approach to realizing VLSI circuits. The growth and operation of all living beings are directed by the interpretation, in each of their cells, of a chemical program, the DNA string or genome. This process is the source of inspiration for Embryonics (embryonic electronics), whose final objective is the design of highly robust integrated circuits, endowed with properties usually associated with the living world: self-repair (cicatrisation) and self-replication. The embryonics architecture is based on four hierarchical levels of organization:
1. The basic primitive of our system is the molecule, a multiplexer-based element of a novel programmable circuit.
2. A finite set of molecules makes up a cell, essentially a small processor with an associated memory.
3. A finite set of cells makes up an organism, an application-specific multiprocessor system.
4. The organism can itself replicate, giving rise to a population of identical organisms, capable of self-replication and repair.
Each artificial cell is characterized by a fixed architecture. Multicellular arrays can realize a variety of different organisms, all capable of self-replication and self-repair. In order to allow for a wide range of applications, we then introduce a flexible architecture, realized using a new type of fine-grained field-programmable gate array whose basic element, our molecule, is essentially a programmable multiplexer.

Toward Embryonics
A human being consists of approximately 60 trillion cells. At each instant, in each of these 60 trillion cells, the genome, a ribbon of 2 billion characters, is decoded to produce the proteins needed for the survival of the organism. The genome contains the ensemble of the genetic inheritance of the individual and, at the same time, the instructions for both the construction and the operation of the organism. The parallel execution of 60 trillion genomes in as many cells occurs ceaselessly from the conception to the death of the individual. Faults are rare and, in the majority of cases, successfully detected and repaired. This process is remarkable for its complexity and its precision. Moreover, it relies on completely discrete information: the structure of DNA (the chemical substrate of the genome) is a sequence of four bases, usually designated with the letters A (adenine), C (cytosine), G (guanine) and T (thymine).
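A toy sketch of the spare-cell self-repair idea follows. It is entirely invented here (it is not the actual Embryonics molecule/cell hardware) and only loosely follows the idea that every cell interprets the same genome while spare cells take over from faulty ones; the genome contents and fault position are made up.

# A row of cells shares one "genome"; each working cell expresses the gene for
# its logical position, and when a cell is marked faulty the logical positions
# shift so that a spare cell on the right takes over its gene.
genome = ["G1", "G2", "G3", "G4"]            # genes for a 4-cell organism
cells = 6                                    # 4 working cells + 2 spares
faulty = {2}                                 # physical cell 2 has failed

def expressed_genes(cells, faulty):
    mapping, logical = {}, 0
    for physical in range(cells):
        if physical in faulty:
            mapping[physical] = "dead"               # faulty cell is bypassed
        elif logical < len(genome):
            mapping[physical] = genome[logical]      # cell expresses its gene
            logical += 1
        else:
            mapping[physical] = "spare"
    return mapping

print(expressed_genes(cells, faulty))
# {0: 'G1', 1: 'G2', 2: 'dead', 3: 'G3', 4: 'G4', 5: 'spare'}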

Embedded Systems and Information Appliances

Definition
An embedded system is a combination of computer hardware, software and, perhaps, additional mechanical parts, designed to perform a specific function. Embedded systems are usually programmed in a high-level language that is compiled (and/or assembled) into executable ("machine") code. This is loaded into Read Only Memory (ROM) and called "firmware", "microcode" or a "microkernel". The microprocessor is typically 8-bit or 16-bit; the bit size refers to the amount of memory accessed by the processor. There is usually no operating system and perhaps 0.5 KB of RAM. The functions implemented normally have no priorities. As the need for features increases and/or as the need to establish priorities arises, it becomes more important to have some sort of decision-making mechanism be part of the embedded system. The most advanced systems actually have a tiny, streamlined operating system running the show, executing on a 32-bit or 64-bit processor. Such an operating system is called an RTOS (real-time operating system).

Embedded Hardware
Every embedded system has a microprocessor or microcontroller for processing of information and execution of programs, memory in the form of ROM/RAM for storing embedded software programs and data, and I/O interfaces for external interfacing. Any additional requirement in an embedded system depends on the equipment it is controlling. Very often these systems have a standard serial port, a network interface, an I/O interface, or hardware to interact with sensors and actuators on the equipment.

Embedded Software
C has become the language of choice for embedded programmers, because it has the benefit of processor independence, which allows the programmer to concentrate on algorithms and applications, rather than on the details of processor architecture. However, many of its advantages apply equally to other high-level languages as well. Perhaps the greatest strength of C is that it gives embedded programmers an extraordinary degree of direct hardware control without sacrificing the benefits of high-level languages. Compilers and cross-compilers for C are also available for almost every processor. Any source code written in C, C++ or assembly language must be converted into an executable image that can be loaded onto a ROM chip. The process of converting the source code representation of your embedded software into an executable image involves three distinct steps, and the system or computer on which these processes are executed is called a host computer. First, each of the source files that make up an embedded application must be compiled or assembled into distinct object files. Second, all of the object files that result from the first step must be linked into a final object file called the relocatable program.

DSP Processor

Definition
The best way to understand the requirements is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors. Let us consider one of the most common processing tasks: the finite impulse response (FIR) filter. For each tap of the filter, a data sample is multiplied by a filter coefficient, with the result added to a running sum for all of the taps. Hence the main component of the FIR filter is a dot product: multiply and add. These operations are not unique to the FIR filter algorithm; in fact, multiplication is one of the most common operations performed in signal processing - convolution, IIR filtering and Fourier transforms also involve heavy use of multiply-accumulate operations. Originally, microprocessors implemented multiplication by a series of shift and add operations, each of which consumed one or more clock cycles. First, therefore, a DSP processor requires hardware that can multiply in a single cycle. Most DSP algorithms require a multiply-accumulate (MAC) unit. In comparison to other types of computing tasks, DSP applications typically have very high computational requirements, since they often must execute DSP algorithms in real time on lengthy segments of data; therefore parallel operation of several independent execution units is a must - for example, in addition to the MAC unit, an ALU and a shifter are also required. Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth - higher than that of general-purpose microprocessors, which had a single bus connection to memory and could make only one access per cycle. The most common approach was to use two or more separate banks of memory, each of which was accessed by its own bus and could be written or read in a single cycle. This means programs are stored in one memory and data in another. With this arrangement, the processor could fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches, thus enabling the processor to execute a MAC in a single cycle. High memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address generation units operate in parallel with the DSP processor's main execution units, enabling it to access data at new locations in memory without pausing to calculate the new address. Memory accesses in DSP algorithms tend to exhibit very predictable patterns: for example, in an FIR filter the filter coefficients are accessed sequentially from start to finish for each sample, then accesses start over from the beginning of the coefficient vector when processing the next input sample. This is in contrast to other computing tasks, such as database processing, where accesses to memory are less predictable. DSP processor address generation units take advantage of this predictability by supporting specialized addressing modes that enable the processor to efficiently access data in the patterns commonly found in DSP algorithms.
The most common of these modes is register indirect addressing with post-increment, which automatically increments the address pointer for algorithms where repetitive computations are performed on a series of data stored sequentially in memory. Without this feature, the programmer would need to spend instructions explicitly incrementing the address pointer.
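As a rough illustration of what such an address generation unit does for free, the sketch below emulates register-indirect addressing with post-increment and circular (modulo) wrap-around over a 4-entry coefficient vector; the memory contents and sizes are invented.

# The pointer reads a value and then auto-increments, wrapping around the
# coefficient vector, so no extra instructions are spent updating it.
class CircularPointer:
    def __init__(self, base, length):
        self.base, self.length, self.offset = base, length, 0

    def read_postinc(self, memory):
        value = memory[self.base + self.offset]   # read, then auto-increment
        self.offset = (self.offset + 1) % self.length
        return value

memory = [0.5, 0.25, 0.125, 0.125, 9, 9, 9]       # coefficients, then other data
ptr = CircularPointer(base=0, length=4)
print([ptr.read_postinc(memory) for _ in range(6)])
# [0.5, 0.25, 0.125, 0.125, 0.5, 0.25]  -- wraps back to the start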

Direct to home television (DTH)

Definition
Direct to home (DTH) television is a wireless system for delivering television programs directly to the viewer's house. In DTH television, the broadcast signals are transmitted from satellites orbiting the Earth to the viewer's house. Each satellite is located approximately 35,700 km above the Earth in geosynchronous orbit. These satellites receive the signals from the broadcast stations located on Earth and rebroadcast them to the Earth. The viewer's dish picks up the signal from the satellite and passes it on to the receiver located inside the viewer's house. The receiver processes the signal and passes it on to the television. DTH provides more than 200 television channels with excellent quality of reception, along with teleshopping, fax and Internet facilities. DTH television is used in millions of homes across the United States, Europe and South East Asia. Direct to home television is a wireless system for delivering television programming directly to a viewer's house. Usually broadcasting stations use a powerful antenna to transmit radio waves to the surrounding area. Viewers can pick up the signal with a much smaller antenna. The main limitation of broadcast television is range. The radio signals used to broadcast television shoot out from the broadcast antenna in a straight line. In order to receive these signals, you have to be in the direct "line of sight" of the antenna. Small obstacles like trees or small buildings aren't a problem; but a big obstacle, such as the Earth, will reflect these waves. If the Earth were perfectly flat, you could pick up broadcast television thousands of miles from the source. But because the planet is curved, it eventually breaks the signal's line of sight. The other problem with broadcast television is that the signal is often distorted even in the viewing area. To get a perfectly clear signal like you find on cable, one has to be fairly close to the broadcast antenna, without too many obstacles in the path. DTH television solves both these problems by transmitting broadcast signals from satellites orbiting the Earth. Since satellites are high in the sky, there are a lot more customers in the line of sight. Satellite television systems transmit and receive radio signals using specialized antennas called satellite dishes. The television satellites are all in geosynchronous orbit, approximately 35,700 km above the Earth. In this way you have to direct the dish at the satellite only once, and from then on it picks up the signal without adjustment. More than 200 channels with excellent audio and video are made available. The dish required is quite small (30 to 95 cm in diameter).
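As a quick sanity check on the ~35,700 km figure, the altitude of a geosynchronous orbit follows from Kepler's third law; the short sketch below works it out using standard, rounded constants.

import math

# A geosynchronous orbit has a period of one sidereal day, so its radius is
# r = (GM T^2 / 4 pi^2)^(1/3); subtracting the Earth's radius gives the altitude.
GM = 3.986e14          # Earth's gravitational parameter, m^3/s^2
T = 86164.0            # sidereal day, s
R_earth = 6.371e6      # mean Earth radius, m

r = (GM * T ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
print("Altitude above the surface: %.0f km" % ((r - R_earth) / 1000))
# roughly 35,800 km, in line with the figure quoted above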

The Overall System


Early satellite TV viewers were explorers of sorts. They used their expensive dishes to discover unique programming that wasn't necessarily intended for mass audiences. The dish and receiving equipment gave viewers the tools to pick up foreign stations, live feeds between different broadcast stations, NASA activities and a lot of other stuff transmitted using satellites. Some satellite owners still seek out this sort of programming on their own, but today, most Direct to home TV customers get their programming through a direct broadcast satellite (DBS) provider, such as DirecTV or the Dish Network. The provider selects programs and broadcasts them to subscribers as a set package. Basically, the provider's goal is to bring dozens or even hundreds of channels to your television in a form that approximates the competition, cable TV.

Unlike earlier programming, the provider's broadcast is completely digital, which means it has much better picture and sound quality. Early satellite television was broadcast in C-band radio -- radio in the 3.4-gigahertz (GHz) to 7-GHz frequency range. Digital broadcast satellite transmits programming in the Ku frequency range (12 GHz to 14 GHz).

Digital Subscriber Line (DSL)

Definition
The accelerated growth of content-rich applications that demand high bandwidth has changed the nature of information networks. High-speed communication is now an ordinary requirement throughout business, government, academic, and "work-at-home" environments. High-speed Internet access, telecommuting, and remote LAN access are three services that network access providers clearly must offer. These rapidly growing applications are placing a new level of demand on the telephone infrastructure, in particular, the local loop portion of the network (i.e., the local connection from the subscriber to the local central office). The local loop facility is provisioned with copper cabling, which cannot easily support high-bandwidth transmission. This environment is now being stressed by the demand for increasingly higher bandwidth capacities. Although this infrastructure could be replaced by a massive rollout of fiber technologies, the cost to do so is prohibitive in today's business models. More importantly, the time to accomplish such a transition is unacceptable, because the market demand exists today! This demand for data services has created a significant market opportunity for providers that are willing and able to invest in technologies that maximize the copper infrastructure. Both incumbent and competitive Local Exchange Carriers (ILECs and CLECs) are capitalizing on this opportunity by embracing such technologies. The mass deployment of high-speed Digital Subscriber Line (DSL) has changed the playing field for service providers. DSL, which encompasses several different technologies, essentially allows the extension of megabit bandwidth capacities from the service provider central office to the customer premises. Utilizing existing copper cabling, DSL is available at very reasonable costs without the need for massive infrastructure replacement. These new DSL solutions satisfy the business need to provision the network in a fast, cost-effective manner, while both preserving the infrastructure and allowing a planned migration into newer technologies. DSL has the proven ability to meet the customer demand for high bandwidth right now, at costs that make sense. ADSL, or Asymmetric DSL, has emerged as the technology of choice for delivering greater throughput to the desktop. Currently, the ADSL Lite specification, also known as g.lite, is expected to be standardized by the end of June 1999 as a low-cost, easy-to-install version of ADSL specifically designed for the consumer marketplace. While g.lite is expected to become the predominant standard for consumer services, HDSL2 is becoming the protocol of choice for business services (more on HDSL2 to come).

The Telecommunications Infrastructure


The telecommunications industry has developed and deployed cost-effective technologies and created global, high-bandwidth, interoffice networks capable of supporting the demands of the information age. This network infrastructure, however, has been lacking one significant component: a ubiquitous low-cost, high-bandwidth access circuit for the local loop. This fact, more than any other, has slowed the growth and availability of high-bandwidth network services. The pervasive copper cable infrastructure deployed throughout the local loop was historically incapable of supporting the throughput required by growing consumer traffic. In response, the industry embraced DSL, which has proven to be the most significant technological development for solving the local loop demand for higher bandwidth.


Digital Hubbub

Definition
As far as consumer electronics is concerned, the latest talk of the town is the digital hub. This device is used as a hub to interconnect various home devices. Along with this interconnecting capability, the hub also incorporates several functions, such as recording and playing back data streams from the various electronic devices in the house. The electronic devices mentioned include TVs, VCRs, camcorders, personal computers, etc. It consists of both a software part and a hardware part. The hardware comes with a CPU, digital signal processing chips, memory, and different ports for interfacing. The software has three layers: an inner layer, a middle layer, and an outer layer. These layers are divided on the basis of the various functions they have to perform. Various companies are now trying to be the company that brings the digital hub to consumers. Moxi Digital (Palo Alto, California) demonstrated what it called a media center hub in January 2002 at the Consumer Electronics Show. It has a PVR, a CD/DVD player, innovative user-interface software and a wireless home distribution network. In February, Digeo Inc. (Kirkland, Washington), a start-up controlled by Microsoft cofounder Paul Allen, announced plans to build a strikingly similar hub in partnership with Motorola Inc. (Schaumburg, Ill.) and cable company Charter Communications Inc. (St. Louis, Mo.), also controlled by Allen. In March, Moxi and Digeo merged and took on the Digeo name. The merged company intends to roll out Moxi's software on Motorola's set-top box hardware; it is also moving forward with tests of Moxi media center prototypes among subscribers to Echostar Communications Corp. (Littleton, Colo.), a satellite TV service. Established makers of set-top boxes, including Royal Philips Electronics (Amsterdam, the Netherlands) and Pioneer Corp. (Tokyo) - and, of course, Motorola - are building boxes that include high-speed data connections and home-network capabilities, in addition to the digital TV decoders of ordinary cable systems. Entertainment is a major aspect of any human's life. This brings out the importance of consumer electronics. Consumer electronics play a very important role in today's household entertainment devices, such as TVs, VCRs, and music systems like CD and MP3 players. A lot of innovation is taking place in the field of consumer electronics. The latest talk is about a single device which can interconnect all these entertainment devices and provide many functions, such as recording, archiving, and playing back music and videos, organizing digital photo albums, and distributing digital media around the home. So companies gave birth to a new electronic device, which is known as the DIGITAL HUBBUB. To Apple and Microsoft, it looks like a computer. To cable and satellite companies like Charter, Echostar, or DirecTV and their suppliers, it's a set-top box. To consumer electronics companies like Philips or Samsung, it's a stereo component. The hub consists of a software part and a hardware part.

Hub As A Factotum
A factotum is an employee who does many different things. That is what the digital hub does: recording, archiving, and storing digital photos and albums, and distributing digital media around the home. This can be done from sources like a CD library, broadcast TV, and the Internet. Hubs will also be able to store and play video games. And they will organize all your media files in an easy-to-browse fashion and play them back on demand, making available features such as pause, rewind, and several varieties of skip and fast-forward. Typically, a hub might incorporate a high-capacity hard-disk drive, a CD/DVD player, a TV tuner, and inputs for digital cameras, digital video, and broadband digital data. Add a cable or digital subscriber line (DSL) modem on the front end of the hub, and the package is fairly complete. Generally the user interface would be a file browser like that of a PC desktop, modified for a TV screen, and a remote control.

Crusoe

Definition
The Crusoe processor solutions consist of a hardware engine logically surrounded by a software layer. The engine is a very long instruction word (VLIW) CPU capable of executing up to four operations in each clock cycle. The VLIW's native instruction set bears no resemblance to the x86 instruction set; it has been designed purely for fast, low-power implementation using conventional CMOS fabrication. The surrounding software layer gives x86 programs the impression that they are running on x86 hardware. The software layer is called Code Morphing software because it dynamically "morphs" x86 instructions into VLIW instructions. The Code Morphing software includes a number of advanced features to achieve good system-level performance. Code Morphing support facilities are also built into the underlying CPUs. In other words, the Transmeta designers have judiciously rendered some functions in hardware and some in software, according to the product design goals and constraints. Transmeta's Code Morphing technology changes the entire approach to designing microprocessors. By demonstrating that practical microprocessors can be implemented as hardware-software hybrids, Transmeta has dramatically expanded the design space that microprocessor designers can explore for optimum solutions. Upgrades to the software portion of a microprocessor can be rolled out independently from the chip. Finally, decoupling the hardware design from the system and application software that use it frees hardware designers to evolve and eventually replace their designs without perturbing legacy software.

Crusoe processor
Mobile computing has been a buzzword for quite a long time. Mobile computing devices like laptops, webslates, and notebook PCs are becoming common nowadays. The heart of every PC, whether a desktop or a mobile PC, is the microprocessor. Several microprocessors are available in the market for desktop PCs from companies like Intel, AMD, and Cyrix, but the mobile computing market has never had a microprocessor specifically designed for it; the microprocessors used in mobile PCs are optimized versions of desktop PC microprocessors. The concept of Crusoe is well understood from a simple sketch of the processor architecture, called the 'amoeba'. In this concept, the x86 architecture is an ill-defined amoeba containing features like segmentation, ASCII arithmetic, and variable-length instructions. The amoeba explains how a traditional microprocessor was, in Transmeta's design, to be divided up into hardware and software. Thus Crusoe was conceptualised as a hybrid microprocessor: it has a software part and a hardware part, with the software layer surrounding the hardware unit. The role of the software is to act as an emulator that translates x86 binaries into native code at run time. Crusoe is a VLIW microprocessor whose long instruction words are 128 bits wide, fabricated using a CMOS process. The chip's design is based on the VLIW technique to ensure design simplicity and high performance. Besides this, it also uses Transmeta's two patented technologies, namely Code Morphing Software and LongRun Power Management. It is a highly integrated processor available in different versions for different market segments.
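To make the Code Morphing idea concrete, the sketch below shows the translate-and-cache loop that any dynamic binary translator of this kind revolves around. It is only an illustration: the helper functions and data structures are hypothetical stand-ins, not Transmeta's actual software.

```python
# Minimal sketch (hypothetical helpers, not Transmeta's software) of the
# translate-and-cache loop behind Code Morphing: x86 basic blocks are
# translated once into native VLIW code, cached, and re-executed directly
# on later visits.

translation_cache = {}   # x86 address -> translated native code block

def code_morph_run(x86_pc, fetch_x86_block, translate_to_vliw, execute_native):
    """Run an x86 program by dynamic translation.

    fetch_x86_block, translate_to_vliw and execute_native are stand-ins for
    the decoder, translator and VLIW engine; execute_native returns the next
    x86 program counter (or None when the program exits).
    """
    while x86_pc is not None:
        native = translation_cache.get(x86_pc)
        if native is None:                       # first visit: translate and cache
            block = fetch_x86_block(x86_pc)
            native = translate_to_vliw(block)
            translation_cache[x86_pc] = native
        x86_pc = execute_native(native)          # later visits skip translation
```

Frequently executed blocks therefore pay the translation cost only once, which is why the approach can be competitive with a hardware x86 decoder for typical workloads.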

Bio-metrics

Definition
Biometrics refers to the automatic identification of a person based on his or her physiological or behavioral characteristics, such as a finger scan, retina scan, iris scan, voice scan, or signature scan. This method of identification is preferred over traditional methods involving passwords and PINs for various reasons: the person to be identified is required to be physically present at the point of identification, and identification based on biometric techniques obviates the need to remember a password or carry a token. With the increased use of computers as vehicles of information technology, it is necessary to restrict access to sensitive/personal data. By replacing PINs, biometric techniques can potentially prevent unauthorized access to or fraudulent use of ATMs, cellular phones, smart cards, desktop PCs, workstations, and computer networks. A biometric system is essentially a pattern recognition system, which makes a personal identification by determining the authenticity of a specific physiological or behavioral characteristic possessed by the user. An important issue in designing a practical system is to determine how an individual is identified. Depending on the context, a biometric system can be either a verification (authentication) system or an identification system. Biometrics is a rapidly evolving technology which is being widely used in forensics, such as criminal identification and prison security, and has the potential to be used in a large range of civilian application areas. Biometrics can be used to prevent unauthorized access to ATMs, cellular phones, smart cards, desktop PCs, workstations, and computer networks. It can be used during transactions conducted via telephone and Internet (electronic commerce and electronic banking). In automobiles, biometrics can replace keys with key-less entry devices. Biometrics technology allows determination and verification of one's identity through physical characteristics. To put it simply, it turns your body into your password. These characteristics can include face recognition, voice recognition, finger/hand print scans, iris scans, and even retina scans. Biometric systems have sensors that pick up a physical characteristic, convert it into a digital pattern, and compare it to stored patterns for identification.

Identification And Verification Systems


A person's identity can be resolved in two ways: identification and verification. The former involves identifying a person from all the biometric measurements collected in a database; this involves a one-to-many match, also referred to as a "cold search". "Do I know who you are?" is the inherent question this process seeks to answer. Verification involves authenticating a person's claimed identity against his or her previously enrolled pattern; this involves a one-to-one match. The question it seeks to answer is, "Are you who you claim to be?"

Verification
Verification requires comparing a person's fingerprint to one that was previously recorded in the system database. The person claiming an identity provides a fingerprint, typically by placing a finger on an optical scanner. The computer locates the previous fingerprint by looking up the person's identity. This process is relatively easy because the computer needs to compare only two fingerprint records (although most systems use two fingerprints from each person to provide a safety factor). The verification process is referred to as a closed search because the search field is limited. The second question is: who is this person? This is the identification function, which is used to prevent duplicate applications or enrollments. In this case a newly supplied fingerprint is compared to all others in the database. A match indicates that the person has already enrolled or applied.

Identification
The identification process, also known as an open search, is much more technically demanding. It involves many more comparisons and may require differentiating among several database fingerprints that are similar to the subject's.
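The distinction between the two modes can be illustrated with a short sketch. This is only a toy illustration under simplifying assumptions: real systems match fingerprint minutiae or similar templates rather than small feature vectors, and the names, distance metric, and threshold below are invented for the example.

```python
# Minimal sketch contrasting 1:1 verification and 1:N identification, using toy
# feature vectors in place of real fingerprint templates.
import numpy as np

enrolled = {                      # identity -> stored biometric template
    "alice": np.array([0.12, 0.80, 0.33]),
    "bob":   np.array([0.90, 0.10, 0.45]),
}
THRESHOLD = 0.2                   # maximum distance for a match (illustrative)

def verify(claimed_id, sample):
    """1:1 'closed search': compare the sample against one enrolled template."""
    return np.linalg.norm(sample - enrolled[claimed_id]) < THRESHOLD

def identify(sample):
    """1:N 'open search' (cold search): compare against every enrolled template."""
    best_id = min(enrolled, key=lambda i: np.linalg.norm(sample - enrolled[i]))
    return best_id if np.linalg.norm(sample - enrolled[best_id]) < THRESHOLD else None

probe = np.array([0.11, 0.82, 0.30])
print(verify("alice", probe))     # True:  "Are you who you claim to be?"
print(identify(probe))            # "alice": "Do I know who you are?"
```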

Augmented reality (AR)

Definition
Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time. Getting the right information at the right time and the right place is key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. This kind of interface minimizes the extra mental effort that a user has to expend when switching his or her attention back and forth between real-world tasks and a computer screen. In augmented reality, the user's view of the world and the computer interface literally become one.

Between the extremes of real life and Virtual Reality lies the spectrum of Mixed Reality, in which views of the real world are combined in some proportion with views of a virtual environment. Combining direct view, stereoscopic video, and stereoscopic graphics, Augmented Reality describes the class of displays that consist primarily of a real environment with graphic enhancements or augmentations. In Augmented Virtuality, real objects are added to a virtual environment; in Augmented Reality, virtual objects are added to the real world. An AR system supplements the real world with virtual (computer-generated) objects that appear to co-exist in the same space as the real world, whereas Virtual Reality is a fully synthetic environment. The overall requirements of AR can be summarized by comparing them against the requirements for Virtual Environments (VE) for the three basic subsystems that they require.

1) Scene generator: Rendering is not currently one of the major problems in AR. VE systems have much higher requirements for realistic images because they completely replace the real world with the virtual environment. In AR, the virtual images only supplement the real world. Therefore, fewer virtual objects need to be drawn, and they do not necessarily have to be realistically rendered in order to serve the purposes of the application.

2) Display device: The display devices used in AR may have less stringent requirements than VE systems demand, again because AR does not replace the real world. For example, monochrome displays may be adequate for some AR applications, while virtually all VE systems today use full color. Optical see-through HMDs with a small field of view may be satisfactory because the user can still see the real world with his peripheral vision; the see-through HMD does not shut off the user's normal field of view. Furthermore, the resolution of the monitor in an optical see-through HMD might be lower than what a user would tolerate in a VE application, since the optical see-through HMD does not reduce the resolution of the real environment.

3) Tracking and sensing: While in the previous two cases AR had lower requirements than VE, that is not the case for tracking and sensing. In this area, the requirements for AR are much stricter than those for VE systems. A major reason for this is the registration problem.

Asynchronous Transfer Mode (ATM)

Definition
Computers today span the entire spectrum from PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems for exchanging data, or between central servers and the associated host computer systems. The replacement of copper with fiber and the advances in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided us with a huge amount of transmission bandwidth, while the latter has made the transmission of all information, including voice and video, through a packet-switched network possible. With continuous work sharing over large distances, including international communication, the systems must be interconnected via wide area networks with increasing demands for higher bit rates. For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN. ATM is simply a data link layer protocol. It is asynchronous in the sense that the recurrence of the cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.
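Since the fixed-size cell is central to ATM, a small sketch may help. The format below is the standard 53-byte UNI cell (5-byte header plus 48-byte payload); the HEC byte is left as a placeholder rather than computing the real CRC-8, and the example field values are arbitrary.

```python
# Minimal sketch: packing a 53-byte ATM UNI cell (5-byte header + 48-byte payload).
# Field widths follow the standard UNI cell format: GFC(4), VPI(8), VCI(16),
# PT(3), CLP(1), HEC(8).

def atm_uni_cell(gfc: int, vpi: int, vci: int, pt: int, clp: int, payload: bytes) -> bytes:
    assert len(payload) == 48, "ATM payload is always exactly 48 bytes"
    header = bytearray(5)
    header[0] = ((gfc & 0xF) << 4) | ((vpi >> 4) & 0xF)        # GFC + VPI high nibble
    header[1] = ((vpi & 0xF) << 4) | ((vci >> 12) & 0xF)       # VPI low nibble + VCI bits 15..12
    header[2] = (vci >> 4) & 0xFF                              # VCI bits 11..4
    header[3] = ((vci & 0xF) << 4) | ((pt & 0x7) << 1) | (clp & 0x1)  # VCI low nibble + PT + CLP
    header[4] = 0x00                                           # HEC placeholder (CRC-8 over header[0..3])
    return bytes(header) + payload

cell = atm_uni_cell(gfc=0, vpi=1, vci=100, pt=0, clp=0, payload=bytes(48))
print(len(cell))  # 53
```

Because every cell has the same small size, switching hardware can be kept simple and fast, which is the design choice that lets ATM carry both bursty data and constant-rate voice on one network.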

Artificial Eye

Definition
The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of these cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, there are more than 10 million people worldwide affected by retinal diseases that lead to loss of vision. The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system. At present, two general strategies have been pursued. The "epiretinal" approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer and retinal ganglion cells. The information in this approach must be captured by a camera system before transmitting data and energy to the implant. The "subretinal" approach involves the electrical stimulation of the inner retina from the subretinal space by implantation of a semiconductor-based micro photodiode array (MPA) into this location. The concept of the subretinal approach is that electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner that produces formed images. Some researchers have developed an implant system in which a video camera captures images, a chip processes the images, and an electrode array transmits the images to the brain; these are called cortical implants.

The Visual System


The human visual system is a remarkable instrument. It features two mobile acquisition units, each with formidable preprocessing circuitry, placed at a remote location from the central processing system (the brain). Its primary tasks include transmitting images with a viewing angle of at least 140 degrees and a resolution of 1 arc minute over a limited-capacity carrier, the million or so fibers in each optic nerve; through these fibers the signals are passed to the so-called higher visual cortex of the brain. The nervous system can achieve this kind of high-volume data transfer by confining such capability to just part of the retinal surface: whereas the center of the retina has a 1:1 ratio between the photoreceptors and the transmitting elements, the far periphery has a ratio of 300:1. This results in a gradual shift in resolution and other system parameters. At the system's highest level, the visual cortex, an impressive array of feature-extraction mechanisms can rapidly adjust the eye's position in response to sudden movements in the peripheral field, even for objects too small to see when stationary. The visual system can resolve spatial depth differences by combining signals from both eyes with a precision less than one tenth the size of a single photoreceptor.

AI for Speech Recognition

Definition
AI is the study of how to make computers perform tasks that, at present, are done better by humans. AI is an interdisciplinary field where computer science intersects with philosophy, psychology, engineering, and other fields. Humans make decisions based upon experience and intention; the essence of AI is getting a computer to mimic this learning process. When you dial the telephone number of a big company, you are likely to hear the sonorous voice of a cultured lady who responds to your call with great courtesy, saying "Welcome to company X. Please give me the extension number you want." You pronounce the extension number, your name, and the name of the person you want to contact. If the called person accepts the call, the connection is made quickly. This is artificial intelligence at work: an automatic call-handling system is used without employing any telephone operator.

The Technology
Artificial intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (like computers, robots, etc.). AI is behaviour of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence. Natural language processing (NLP) refers to artificial intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action. The input words are scanned and matched against internally stored known words; identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language. No special commands or computer language are required, and there is no need to enter programs in a special language for creating software. VoiceXML takes speech recognition even further: instead of talking to your computer, you're essentially talking to a web site, and you're doing this over the phone. OK, you say, well, what exactly is speech recognition? Simply put, it is the process of converting spoken input to text; speech recognition is thus sometimes referred to as speech-to-text. Speech recognition allows you to provide input to an application with your voice. Just as clicking with your mouse, typing on your keyboard, or pressing a key on the phone keypad provides input to an application, speech recognition allows you to provide input by talking. In the desktop world, you need a microphone to be able to do this; in the VoiceXML world, all you need is a telephone. The speech recognition process is performed by a software component known as the speech recognition engine. The primary function of the speech recognition engine is to process spoken input and translate it into text that an application understands. The application can then do one of two things: it can interpret the result of the recognition as a command, in which case it is a command-and-control application, or, if it handles the recognized text simply as text, it is considered a dictation application.
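As a rough illustration of the keyword matching and the command-versus-dictation split described above, the sketch below routes recognized text either to a command handler or to a dictation buffer. The command table, actions, and phrases are invented for the example and do not correspond to any particular speech engine's API.

```python
# Minimal sketch of the keyword-matching step: recognized text is scanned
# against a small table of known command words; a match triggers an action
# (command-and-control), otherwise the text is kept as dictation.

COMMANDS = {
    "call": lambda args: print("dialling extension", *args),
    "open": lambda args: print("opening", *args),
    "stop": lambda args: print("stopping playback"),
}

dictation_buffer = []

def handle_recognized_text(text: str) -> None:
    words = text.lower().split()
    if words and words[0] in COMMANDS:          # keyword found: treat as a command
        COMMANDS[words[0]](words[1:])
    else:                                       # no keyword: treat as dictation
        dictation_buffer.append(text)

handle_recognized_text("call 4217")                   # command-and-control path
handle_recognized_text("meeting notes for Monday")    # dictation path
```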

The user speaks to the computer through a microphone; the system, in turn, identifies the words and sends them to the NLP component for further processing. Once recognized, the words can be used in a variety of applications such as display, robotics, commands to computers, and dictation.

Treating Cardiac Disease With Catheter-Based Tissue Heating

Definition
In microwave ablation, electromagnetic energy would be delivered via a catheter to a precise location in a coronary artery for selective heating of a targeted atherosclerotic lesion. Advantageous temperature profiles would be obtained by controlling the power delivered, the pulse duration, and the frequency. The major components of a microwave ablation apparatus would include a microwave source, a catheter/transmission line, and an antenna at the distal end of the catheter. The antenna would focus the radiated beam so that most of the microwave energy would be deposited within the targeted atherosclerotic lesion. Because of the rapid decay of the electromagnetic wave, little energy would pass into, or beyond, the adventitia. By suitable choice of the power delivered, pulse duration, frequency, and antenna design (which affects the width of the radiated beam), the temperature profile could be customized to the size, shape, and type of lesion being treated. For decades, scientists have been using electromagnetic and sonic energy to serve medicine. But, aside from electrosurgery, their efforts have focused on diagnostic imaging of internal body structures, particularly in the case of x-ray, MRI, and ultrasound systems. Lately, however, researchers have begun to see acoustic and electromagnetic waves in a whole new light, turning their attention to therapeutic rather than diagnostic applications. Current research is exploiting the ability of radio-frequency (RF) energy and microwaves to generate heat, essentially by exciting molecules. This heat is used predominantly to ablate cells. Of the two technologies, RF was the first to be used in a marketable device, and now microwave devices are entering the commercialization stage. These technologies have distinct strengths and weaknesses that will define their use and determine their market niches. The depth to which microwaves can penetrate tissue is primarily a function of the dielectric properties of the tissue and of the frequency of the microwaves. The tissue of the human body is enormously varied and complex, with innumerable types of structures, components, and cells. These tissues vary not only within an individual, but also among people of different gender, age, physical condition, and health, and even as a function of external inputs such as food eaten, air breathed, ambient temperature, or state of mind. From the point of view of RF and microwaves in the frequency range 10 MHz to 10 GHz, however, biological tissue can be viewed macroscopically in terms of its bulk shape and its electromagnetic characteristics: dielectric constant and electrical conductivity. These are dependent on frequency and very dependent on the particular tissue type. All biological tissue is somewhat electrically conductive, absorbing microwave power and converting it to heat as the wave penetrates the tissue. Delivering heat at depth is not only valuable for cooking dinner; it can be quite useful for many therapeutic medical applications as well. These include diathermy for mild orthopedic heating, hyperthermia cell killing for cancer therapy, microwave ablation, and microwave-assisted balloon angioplasty. The last two are the subject of this article. It should also be mentioned that, based on the long history of high-power microwave exposure in humans, it is reasonably certain that, barring overheating effects, microwave radiation is medically safe. There have been no credible reports of carcinogenic, mutagenic, or poisonous effects of microwave exposure.
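For orientation, the dependence of penetration depth on tissue properties and frequency can be made explicit with the standard plane-wave result for a lossy medium (quoted here as textbook background, not from the article itself). A wave of angular frequency \(\omega\) in tissue with permittivity \(\varepsilon\), permeability \(\mu\), and conductivity \(\sigma\) decays as \(e^{-\alpha z}\), with

\[
\alpha = \omega\sqrt{\frac{\mu\varepsilon}{2}\left[\sqrt{1+\left(\frac{\sigma}{\omega\varepsilon}\right)^{2}}-1\right]},
\qquad
\delta = \frac{1}{\alpha},
\]

so that higher frequencies and more conductive tissues give a smaller penetration depth \(\delta\), which is why the deposited energy can be confined largely to the targeted lesion.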

Surround Sound System

Definition
There are many surround systems available in the market, and they use different technologies to produce the surround effect. Some surround sound is based on using audio compression technology (for example Dolby Pro Logic or Dolby Digital AC-3) to encode and deliver a multi-channel soundtrack, and audio decompression technology to decode the soundtrack for delivery on a surround-sound five-speaker setup. Additionally, virtual surround sound systems use 3D audio technology to create the illusion of five speakers emanating from a regular set of stereo speakers, therefore enabling a surround sound listening experience without the need for a five-speaker setup. We are now entering the Third Age of reproduced sound. The monophonic era was the First Age, which lasted from Edison's invention of the phonograph in 1877 until the 1950s. During that time, the goal was simply to reproduce the timbre of the original sound; no attempt was made to reproduce directional properties or spatial realism. The stereo era was the Second Age. It was based on inventions from the 1930s, reached the public in the mid-'50s, and has provided great listening pleasure for four decades. Stereo improved the reproduction of timbre and added two dimensions of space: the left-right spread of performers across a stage and a set of acoustic cues that allow listeners to perceive a front-to-back dimension. In two-channel stereo, this realism is based on fragile sonic cues. In most ordinary two-speaker stereo systems, these subtle cues can easily be lost, causing the playback to sound flat and uninvolved. Multichannel surround systems, on the other hand, can provide this involving presence in a way that is robust, reliable, and consistent. The purpose of this seminar is to explore the advances and technologies of surround sound in the consumer market. Human hearing is binaural (based on two ears), yet we have the ability to locate sound spatially. That is, we can determine where a sound is coming from and, in most cases, from how far away. In addition, humans can distinguish multiple sound sources in relation to the surrounding environment. This is possible because our brains can determine the location of each sound in the three-dimensional environment we live in by processing the information received by our two ears. The principal localization cues used in binaural human hearing are the Interaural Intensity Difference (IID) and the Interaural Time Difference (ITD). IID refers to the fact that if a sound is closer to one ear than the other, its intensity at that ear is greater than at the other ear, which is not only farther away but also receives the sound shadowed by the listener's head. ITD is related to the fact that unless the sound is located at exactly the same distance from both ears (i.e., directly in front of or behind the listener), it arrives at one ear sooner than the other. If the sound reaches the right ear first, the source is somewhere to the right, and vice versa. By combining these two cues, and others related to the reflections of the sound as it travels to our eardrums, our brains are able to determine the position of an individual sound source. The principal format for digital discrete surround is the "5.1 channel" system.
The 5.1 name stands for five channels of full-bandwidth audio (20 Hz to 20 kHz), namely left, right, and centre in front, and left surround and right surround behind (see figure 1 below), plus a sixth channel which will, at times, contain additional bass information to maximize the impact of scenes such as explosions.
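To make the IID and ITD cues concrete, the following sketch pans a mono signal into a crude stereo image by delaying and attenuating the far-ear channel. It is a toy illustration, not a production spatializer: the head radius, the gain law, and the Woodworth-style delay approximation are stated assumptions.

```python
# Minimal sketch: approximating a source direction with the two binaural cues,
# interaural time difference (ITD) and interaural intensity difference (IID).
import numpy as np

FS = 48_000           # sample rate in Hz
HEAD_RADIUS = 0.0875  # metres, a commonly used average value
SPEED_OF_SOUND = 343  # m/s

def pan_binaural(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Return an (N, 2) stereo array with crude ITD and IID cues for a source
    at the given azimuth (0 = straight ahead, +90 = fully to the right)."""
    az = np.radians(azimuth_deg)
    # Woodworth-style ITD approximation: (a / c) * (sin(az) + az); illustrative only.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (np.sin(az) + az)
    delay_samples = int(round(abs(itd) * FS))
    near_gain, far_gain = 1.0, 1.0 - 0.6 * abs(np.sin(az))   # simple IID law
    delayed = np.concatenate([np.zeros(delay_samples), mono])[: len(mono)]
    if azimuth_deg >= 0:      # source on the right: left ear is farther and later
        left, right = far_gain * delayed, near_gain * mono
    else:
        left, right = near_gain * mono, far_gain * delayed
    return np.stack([left, right], axis=1)

tone = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)   # 1 s test tone
stereo = pan_binaural(tone, azimuth_deg=45)
```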

Space Time Adaptive Processing

Definition
Space-Time Adaptive Processing (STAP) refers to a class of signal processing techniques used to process the returns of an antenna-array radar system. It enhances the ability of radars to detect targets that might otherwise be obscured by clutter or jamming. The output of STAP is a linear combination, or weighted sum, of the input signal samples. The "adaptive" in STAP refers to the fact that the STAP weights are computed to reflect the actual noise, clutter, and jamming environment in which the radar finds itself. The "space" in STAP refers to the fact that the STAP weights (applied to the signal samples at each of the elements of the antenna array) at one instant of time define an antenna pattern in space. If there are jammers in the field of view, STAP will adapt the radar antenna pattern by placing nulls in the directions of those jammers, thus rejecting jammer power. The "time" in STAP refers to the fact that the STAP weights applied to the signal samples at one antenna element over the entire dwell define a system impulse response and hence a system frequency response. STAP is a multi-dimensional adaptive signal processing technique over spatial and temporal samples. In this approach, the input data collected from several antenna sensors has a cubic form. Depending on how this input data cube is processed, STAP is classified into Higher-Order Post-Doppler (HOPD), Element Space Pre-Doppler, Element Space Post-Doppler, Beam Space Pre-Doppler, and Beam Space Post-Doppler processing. STAP consists of three major computation steps. First, a set of rules called the training strategy is used to select the data which will be processed in the subsequent computation. The second step is weight computation; it requires solving a set of linear equations and is the most computationally intensive step. Finally, a thresholding operation is performed after applying the computed weights. In HOPD processing, Doppler processing (FFT computations) is followed by solving least-squares problems (QR decompositions).
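The three computation steps can be sketched in a few lines. This is a simplified illustration using the common sample-matrix-inversion formulation (estimate a covariance matrix from training snapshots, then solve a linear system for the weights); the array sizes, steering vector, and threshold are assumptions chosen only for the example.

```python
# Minimal sketch of the STAP weight computation and detection steps.
import numpy as np

n_elements, n_pulses = 8, 16          # spatial x temporal degrees of freedom
n_dof = n_elements * n_pulses
n_training = 4 * n_dof                # snapshots selected by the training strategy

rng = np.random.default_rng(0)
training = (rng.standard_normal((n_dof, n_training))
            + 1j * rng.standard_normal((n_dof, n_training)))   # stand-in clutter + noise data

# Steps 1-2: covariance estimate and weight computation (the costly linear solve).
R_hat = training @ training.conj().T / n_training
steering = np.ones(n_dof, dtype=complex)       # hypothetical target space-time steering vector
weights = np.linalg.solve(R_hat, steering)

# Step 3: apply the weights to a data snapshot and threshold the output power.
snapshot = rng.standard_normal(n_dof) + 1j * rng.standard_normal(n_dof)
output_power = np.abs(weights.conj() @ snapshot) ** 2
detected = output_power > 10.0                 # illustrative threshold
```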

Introduction To Radar
Radar is an electromagnetic system for the detection and location of objects; RADAR stands for RAdio Detection And Ranging. It operates by transmitting a particular type of waveform and detecting the nature of the echo signal. An elementary form of radar consists of a transmitting antenna emitting electromagnetic radiation generated by an oscillator of some sort, a receiving antenna, and an energy-detecting device or receiver. A portion of the transmitted signal is intercepted by a reflecting object (target) and is reradiated in all directions. It is the energy reradiated in the back direction that is of prime interest to the radar. The receiving antenna collects the returned energy and delivers it to a receiver, where it is processed to detect the presence of the target and to extract its location and relative velocity. The transmitter may be an oscillator such as a magnetron, which is pulsed by the modulator to generate a repetitive train of pulses. The waveform generated by the transmitter travels via a transmission line to the antenna, where it is radiated into space. A single antenna is generally used for both transmitting and receiving. The receiver must be protected from damage caused by the high power of the transmitter; this is the function of the duplexer. The duplexer also serves to channel the returned echo signals to the receiver and not to the transmitter. The receiver is usually of the superheterodyne type.
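Two standard relations, quoted here as textbook background rather than from the text above, summarize how the echo is used. The target range follows from the round-trip delay \(\Delta t\), and the received echo power falls off with the fourth power of range:

\[
R = \frac{c\,\Delta t}{2},
\qquad
P_r = \frac{P_t\,G_t\,G_r\,\lambda^{2}\,\sigma}{(4\pi)^{3}R^{4}},
\]

where \(P_t\) is the transmitted power, \(G_t\) and \(G_r\) are the antenna gains, \(\lambda\) is the wavelength, and \(\sigma\) is the target's radar cross-section.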

Real-Time Systems and Real-Time Operating Systems

Definition
Real-time systems play a considerable role in our society, and they cover a spectrum from the very simple to the very complex. Examples of current real-time systems include the control of domestic appliances like washing machines and televisions, the control of automobile engines, telecommunication switching systems, military command and control systems, industrial process control, flight control systems, and space shuttle and aircraft avionics. All of these involve gathering data from the environment, processing the gathered data, and providing a timely response. A concept of time is the distinguishing issue between real-time and non-real-time systems. While a usual design goal for non-real-time systems is to maximize the system's throughput, the goal for real-time system design is to guarantee that all tasks are processed within a given time. The taxonomy of time introduces special aspects for real-time system research. Real-time operating systems are an integral part of real-time systems. Future systems will be much larger, more widely distributed, and will be expected to perform a constantly changing set of duties in dynamic environments. This also sets more requirements for future real-time operating systems. Timeliness is the single most important aspect of a real-time system. These systems respond to a series of external inputs, which arrive in an unpredictable fashion. The real-time systems process these inputs, take appropriate decisions, and also generate the output necessary to control the peripherals connected to them. As defined by Donald Gillies, "A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints are not met, system failure is said to have occurred." It is essential that the timing constraints of the system are guaranteed to be met. Guaranteeing timing behaviour requires that the system be predictable. Most real-time systems interface with and control hardware directly. The software for such systems is mostly custom-developed. Real-time applications can be either embedded applications or non-embedded (desktop) applications. Real-time systems often do not have the standard peripherals associated with a desktop computer, namely the keyboard, mouse, or conventional display monitors. In most instances, real-time systems have a customized version of these devices.

Real-time Programs: The Computational Model


A simple real-time program can be defined as a program P that receives an event from a sensor every T units of time and, in the worst case, requires C units of computation time per event. Assume that the processing of each event must always be completed before the arrival of the next event (i.e., there is no buffering). Let the deadline for completing the computation be D. If D < C, the deadline cannot be met. If T < D, the program must still process each event in a time ≤ T if no events are to be lost. Thus the deadline is effectively bounded by T, and we need to handle those cases where C ≤ D ≤ T.
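The condition C ≤ D ≤ T can be checked mechanically for a task set, as in the small sketch below. The task names and timing values are invented for illustration, and the added utilisation check (the sum of C/T must not exceed 1) is a standard necessary condition rather than something stated in the text above.

```python
# Minimal sketch of the timing model above: for each periodic task with
# worst-case computation time C, deadline D, and period T, check C <= D <= T,
# plus a simple total-utilisation sanity check.

tasks = [
    {"name": "sensor_read", "C": 2, "D": 10, "T": 10},   # times in milliseconds
    {"name": "control_law", "C": 5, "D": 20, "T": 25},
    {"name": "logging",     "C": 1, "D": 50, "T": 50},
]

def feasible(task) -> bool:
    return task["C"] <= task["D"] <= task["T"]

for t in tasks:
    print(t["name"], "meets C <= D <= T:", feasible(t))

utilisation = sum(t["C"] / t["T"] for t in tasks)
print("total utilisation:", round(utilisation, 3), "(must not exceed 1.0)")
```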

Radio frequency identification (RFID)

Definition
Radio frequency identification (RFID) is a contactless form of automatic identification and data capture. Dating back to World War II, RFID transponders were used to identify friendly aircraft. An RFID system consists of a reader, a transponder, and an antenna, utilizing several frequency ranges. Over 40 million RFID tags were expected to be used in 1999, with sales projected to break the one-billion-dollar mark before 2003 (Frost & Sullivan, 1997). Radio frequency identification is used in access control, asset control, and animal identification. The advantages of RFID are the capability for multiple reads, the ability to be used in almost any environment, and its accuracy. The Automatic Identification Manufacturers, the International Standards Organization, and the American National Standards Institute are currently developing standards. Barcodes were developed in the railroad business to keep track of the various cars. Out of this system of identification grew the U.P.C. (Universal Product Code), which is now used on almost all manufactured goods. The UPC is used to store the manufacturer code as well as the product code in a form that can be easily read by various scanners, even from a distance. But there are limits to the use of barcodes. There must be a direct line of sight between the reader and the code, the barcode can be obscured (for example by paint), and one only has read access to the data, i.e., one cannot add new data without adding another label. This is the point where a relatively new technology comes in: RFID (Radio Frequency IDentification). In RFID, electronic chips are used to store data that can be broadcast via radio waves to the reader, eliminating the need for a direct line of sight and making it possible for "tags" to be placed anywhere on or in the product. One can even write to tags made of semiconductor chips, thus enabling updating of data. This write function introduces new capabilities, such as updating information about the manufacturing process of the attached item. RFID first appeared in tracking and access applications during the 1980s. These wireless AIDC systems allow for non-contact reading and are effective in manufacturing and other hostile environments where barcode labels could not survive. RFID has established itself in livestock identification and automated vehicle identification (AVI) systems because of its ability to track moving objects. To understand and appreciate the capabilities of RFID systems it is necessary to consider their constituent parts. It is also necessary to consider the data flow requirements that influence the choice of systems and the practicalities of communicating across the air interface. By considering the system components and their function within the data flow chain it is possible to grasp most of the important issues that influence the effective application of RFID.

The RFID reader is designed for fast and easy system integration without losing performance, functionality, or security. It consists of a real-time processor, an operating system, virtual portable memory, and a transmitter/receiver unit in one small self-contained module that is easily installed in the ceiling or in any other convenient location.

Quantum Dot Lasers

Definition
Quantum dot lasers can be considered a quantum leap in the development of lasers. Quantum dots fundamentally improve the laser emission, a property that is well utilized in fiber-optic communication, which is now a leading subject of research and development; quantum dots are thus widely used in fiber-optic communication applications. The remaining major division of the field of quantum electronics deals with the interactions of coherent light with matter and again leads to a wide range of all-optical and optoelectronic devices. Basically, quantum dots are made of InGaAs or simply GaAs structures. The possibility of extended-wavelength (>1.1 µm) emission from GaAs-based devices is also an important characteristic of quantum dots. The QDs are formed by an optimized growth approach of alternating sub-monolayer deposition of column III and column V constituents for optoelectronic device fabrication. Thus there is a large energy separation between states. The infrastructure of the Information Age has to date relied upon advances in microelectronics to produce integrated circuits that continually become smaller, better, and less expensive. The emergence of photonics, where light rather than electricity is manipulated, is poised to further advance the Information Age. Central to the photonic revolution is the development of miniature light sources such as quantum dots (QDs). Today, quantum dot manufacturing has been established to serve new datacom and telecom markets. Recent progress in microcavity physics, new materials, and fabrication technologies has enabled a new generation of high-performance QDs. This presentation will review commercial QDs and their applications as well as discuss recent research, including new device structures such as composite resonators and photonic crystals. Semiconductor lasers are key components in a host of widely used technological products, including compact disk players and laser printers, and they will play critical roles in optical communication schemes. The basis of laser operation depends on the creation of non-equilibrium populations of electrons and holes, and the coupling of electrons and holes to an optical field, which will stimulate radiative emission. Other benefits of quantum dot active layers include further reduction in threshold currents and an increase in differential gain, that is, more efficient laser operation. Since the 1994 demonstration of a quantum dot (QD) semiconductor laser, the research progress in developing lasers based on QDs has been impressive. Because of their fundamentally different physics, which stems from zero-dimensional electronic states, QD lasers now surpass the established planar quantum well laser technology in several respects. These include their minimum threshold current density, the threshold dependence on temperature, and the range of wavelengths obtainable in given strained-layer material systems. Self-organized QDs are formed by strained-layer epitaxy. Upon reaching suitable growth conditions, the growth front can spontaneously reorganize to form 3-dimensional islands. The greater strain relief provided by the 3-dimensionally structured crystal surface prevents the formation of dislocations. When covered with additional epitaxy, the coherently strained islands form the QDs that trap and isolate individual electron-hole pairs to create efficient light emitters.
Optimizing the QD characteristics for use as practical, commercial light sources is based on controlling their density, shape, and uniformity during epitaxy. In particular, the QD's shape plays a large role in determining its dynamic response, as well as the temperature sensitivity of the laser's characteristics. Their density, shape, and uniformity also establish the optical gain of a QD ensemble. All three physical characteristics can be engineered through the precise deposition conditions, in which temperature, growth rate, and material composition are carefully controlled.

Plasma Antennas

Definition
Plasma antennas are radio frequency antennas that employ plasma as the guiding medium for electromagnetic radiation. The concept is to use plasma discharge tubes as the antenna elements. When the tubes are energized, they become conductors and can transmit and receive radio signals; when they are de-energized, they revert to non-conducting elements and do not reflect probing radio signals. A plasma antenna can be "steered" electronically. Another feature of the plasma antenna is that it can be turned off rapidly, reducing ringing on pulse transmission. On earth we live upon an island of "ordinary" matter. The different states of matter generally found on earth are solid, liquid, and gas. Sir William Crookes, an English physicist, identified a fourth state of matter, now called plasma, in 1879. Plasma is by far the most common form of matter: plasma in the stars and in the tenuous space between them makes up over 99% of the visible universe and perhaps most of that which is not visible. Important to ASI's technology, plasmas are conductive assemblies of charged and neutral particles and fields that exhibit collective effects. Plasmas carry electrical currents and generate magnetic fields. When the Plasma Antenna Research Laboratory at ANU investigated the feasibility of plasma antennas as low radar cross-section radiating elements, Redcentre established a network between DSTO and ANU researchers, CEA Technologies, Cantec Australasia, and Neolite Neon for further development and future commercialization of this technology. The plasma antenna R&D project has proceeded over the last year at the Australian National University in response to a DSTO (Defence Science and Technology Organisation) contract to develop a new antenna solution that minimizes antenna detectability by radar. Since then, an investigation of the wider technical issues of existing antenna systems has revealed areas where plasma antennas might be useful. The project attracts the interest of industrial groups involved in such diverse areas as fluorescent lighting, telecommunications, and radar. Plasma antennas have a number of potential advantages for antenna design. When a plasma element is not energized, it is difficult to detect by radar. Even when it is energized, it is transparent to transmissions above the plasma frequency, which falls in the microwave region. Plasma elements can be energized and de-energized in seconds, which prevents signal degradation. When a particular plasma element is not energized, its radiation does not affect nearby elements. HF CDMA plasma antennas will have a low probability of intercept (LPI) and a low probability of detection (LPD) in HF communications.
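The plasma frequency mentioned above has a standard expression, quoted here as textbook background: for an electron density \(n_e\) (per cubic metre),

\[
\omega_p = \sqrt{\frac{n_e e^{2}}{\varepsilon_0 m_e}},
\qquad
f_p = \frac{\omega_p}{2\pi} \approx 8.98\,\sqrt{n_e}\ \text{Hz},
\]

so the ionization level of the tube sets the frequency below which the plasma behaves like a conductor (and can radiate or reflect) and above which it is essentially transparent.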

Plasma Antenna Technology


Since the discovery of radio frequency ("RF") transmission, antenna design has been an integral part of virtually every communication and radar application. Technology has advanced to provide unique antenna designs for applications ranging from general broadcast of radio frequency signals for public use to complex weapon systems. In its most common form, an antenna represents a conducting metal surface that is sized to emit radiation at one or more selected frequencies.

Antennas must be efficient so that the maximum amount of signal strength is expended in the propagated wave and not wasted in antenna reflection. Plasma antenna technology employs ionized gas enclosed in a tube (or another enclosure) as the conducting element of an antenna.

Organic Light Emitting Diodes (OLED)

Definition
Scientific research in the area of semiconducting organic materials as the active substance in light emitting diodes (LEDs) has increased immensely during the last four decades. Organic semiconductors were first reported in the 1960s, when the materials were considered merely a scientific curiosity. (They are named organic because they consist primarily of carbon, hydrogen, and oxygen.) However, when it was recognized in the eighties that many of them are photoconductive under visible light, industrial interest was attracted. Many major electronics companies, such as Philips and Pioneer, are today investing a considerable amount of money in the science of organic electronic and optoelectronic devices. The major reason for the great attention to these devices is that they could be much more efficient than today's components when it comes to power consumption and produced light. Common light emitters today, light emitting diodes (LEDs) and ordinary light bulbs, consume more power than organic diodes do, and the drive to decrease power consumption always matters. Another reason for the industrial attention is that organic full-color displays may eventually replace today's liquid crystal displays (LCDs) used in laptop computers and may even one day replace our ordinary CRT screens. Organic light-emitting devices (OLEDs) operate on the principle of converting electrical energy into light, a phenomenon known as electroluminescence. They exploit the properties of certain organic materials which emit light when an electric current passes through them. In its simplest form, an OLED consists of a layer of this luminescent material sandwiched between two electrodes. When an electric current is passed between the electrodes, through the organic layer, light is emitted with a color that depends on the particular material used. In order to observe the light emitted by an OLED, at least one of the electrodes must be transparent. When OLEDs are used as pixels in flat panel displays they have some advantages over backlit active-matrix LCD displays: greater viewing angle, lighter weight, and quicker response. Since only the part of the display that is actually lit up consumes power, the most efficient OLEDs available today use less power. Based on these advantages, OLEDs have been proposed for a wide range of display applications including magnified microdisplays, wearable head-mounted computers, digital cameras, personal digital assistants, smart pagers, virtual reality games, and mobile phones, as well as medical, automotive, and other industrial applications.

OLED Versus LED
Electronically, OLED is similar to old-fashioned LEDs: put a low voltage across them and they glow. But that is as far as the similarity goes. Instead of being made out of semiconducting metals, OLEDs are made from polymers, plastics, or other carbon-containing compounds. These can be made very cheaply and turned into devices without all the expensive palaver that goes with semiconductor fabrication. Light-emitting diodes, based upon semiconductors such as gallium arsenide, gallium phosphide, and, most recently, gallium nitride, have been around since the late '50s. They are mostly used as indicator lamps, although they were used in calculators before liquid crystals, and are used in large advertising signs, where they are valued for very long life and high brightness. Such crystalline LEDs are not inexpensive, and it is very difficult to integrate them into small high-resolution displays.

Nanotechnology

Definition
Nanotechnology is defined as the fabrication of devices with atomic or molecular scale precision. Devices with minimum feature sizes less than 100 nanometers (nm) are considered to be products of nanotechnology. A nanometer is one billionth of a meter (10^-9 m) and is the unit of length that is generally most appropriate for describing the size of single molecules. The nanoscale marks the nebulous boundary between the classical and quantum mechanical worlds; thus, the realization of nanotechnology promises to bring revolutionary capabilities. Fabrication of nanomachines, nanoelectronics, and other nanodevices will undoubtedly solve an enormous number of the problems faced by mankind today. Nanotechnology is currently in a very infantile stage. However, we now have the ability to organize matter on the atomic scale, and there are already numerous products available as a direct result of our rapidly increasing ability to fabricate and characterize feature sizes of less than 100 nm. Mirrors that don't fog, biomimetic paint with a contact angle near 180°, gene chips, and fat-soluble vitamins in aqueous beverages are some of the first manifestations of nanotechnology. However, imminent breakthroughs in computer science and medicine will be where the real potential of nanotechnology is first achieved. Nanoscience is an interdisciplinary field that seeks to bring about mature nanotechnology. Focusing on the nanoscale intersection of fields such as physics, biology, engineering, chemistry, computer science, and more, nanoscience is rapidly expanding. Nanotechnology centers are popping up around the world as more funding is provided and nanotechnology market share increases. The rapid progress is apparent from the increasing appearance of the prefix "nano" in scientific journals and the news. Thus, as we increase our ability to fabricate computer chips with smaller features and improve our ability to cure disease at the molecular level, nanotechnology is here.

History of Nanotechnology
The amount of space available to us for information storage (or other uses) is enormous. As first described in Richard P. Feynman's 1959 lecture 'There's Plenty of Room at the Bottom', there is nothing besides our clumsy size that keeps us from using this space. In his time, it was not possible for us to manipulate single atoms or molecules because they were far too small for our tools. Thus, his speech was completely theoretical and seemingly fantastic. He described how the laws of physics do not limit our ability to manipulate single atoms and molecules; rather, it was our lack of appropriate methods for doing so. However, he correctly predicted that the time would come when atomically precise manipulation of matter would inevitably arrive. Prof. Feynman described such atomic-scale fabrication as a bottom-up approach, as opposed to the top-down approach that we are accustomed to. The current top-down method of manufacturing involves the construction of parts through methods such as cutting, carving, and molding.

LED wireless

Definition
Billions of visible LEDs are produced each year, and the emergence of high-brightness AlGaAs and AlInGaP devices has given rise to many new markets. The surprising growth of activity in the relatively old LED technology has been spurred by the introduction of AlInGaP devices. Recently developed AlGaInN materials have led to improvements in the performance of bluish-green LEDs, which have luminous efficacy peaks much higher than those of incandescent lamps. This advancement has led to the production of large-area, full-color outdoor LED displays with diverse industrial applications. The novel idea of this article is to modulate light waves from visible LEDs for communication purposes. This concurrent use of visible LEDs for simultaneous signaling and communication, called iLight, leads to many new and interesting applications and is based on the idea of fast switching of LEDs and the modulation of visible-light waves for free-space communications. The feasibility of this approach has been examined and hardware has been implemented, with experimental results. The implementation of an optical link has been carried out using an LED traffic-signal head as the transmitter. The LED traffic light (Fig. 1) can be used for either audio or data transmission. Audio messages can be sent using the LED transmitter, and a receiver located around 20 m away can play back the messages with a speaker. Another prototype that resembles a circular speed-limit sign with a 2-ft diameter was built; its audio signal can be received in open air over a distance of 59.3 m or 194.5 ft. For data transmission, digital data can be sent using the same LED transmitter, and experiments were set up to send speed-limit or location ID information. The work reported in this article differs from the use of infrared (IR) radiation as a medium for short-range wireless communications. Currently, IR links and local-area networks are available, and IR transceivers for use as IR data links are widely available in the market. Some systems consist of IR transmitters that convey speech messages to small receivers carried by persons with severe visual impairments. The Talking Signs system is one such IR remote signage system, developed at the Smith-Kettlewell Rehabilitation Engineering Research Center; it can provide a repeating, directionally selective voice message that originates at a sign. However, there has been very little work on the use of visible light as a communication medium. The availability of high-brightness LEDs makes the visible-light medium even more feasible for communications: any product with visible-LED components (like an LED traffic signal head) can be turned into an information beacon. This iLight technology has many characteristics that are different from IR. The iLight transceivers make use of the direct line-of-sight (LOS) property of visible light, which is ideal in applications for providing directional guidance to persons with visual impairments; IR, on the other hand, has the property of bouncing back and forth in a confined environment. Another advantage of iLight is that the transmitter provides an easy target for LOS reception by the receiver, because the LEDs, being on at all times, also indicate the location of the transmitter. A user searching for information has only to look for light from an iLight transmitter. Very often, the device is concurrently used for illumination, display, or visual signage.
Hence, there is no need to implement an additional transmitter for information broadcasting. Compared with an IR transmitter, an iLight transmitter has to be concerned with even brightness: there should be no apparent difference to a user in the visible light emitted from an iLight device.

It has long been realized that visible light has the potential to be modulated and used as an information-carrying communication channel. The application has to make use of the directional nature of the communication medium, because the receiver requires a LOS path to the audio system or transmitter. The locations of the audio broadcasting system and the receiver are relatively stationary, and since the relative speed between the receiver and the source is much less than the speed of light, the Doppler frequency shift observed by the receiver can safely be neglected. The transmitter can broadcast with a viewing angle close to 180°. The ON and OFF periods used to transmit information are short enough to be imperceptible to humans, so the modulation does not affect traffic control. This article aims to present an application of high-brightness visible LEDs for establishing optical free-space links.
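The ON/OFF scheme described above is essentially on-off keying. The sketch below shows the idea in its simplest form; the bit rate, samples per bit, and noise level are illustrative assumptions, and in a real link the waveform would drive the LED and be recovered by a photodiode.

```python
# Minimal sketch of on-off keying (OOK) for a visible LED: the light switches
# at a rate far above human flicker perception, so it still looks steadily
# "on" while carrying a bit stream.
import numpy as np

BIT_RATE = 10_000        # bits per second, far above the flicker-fusion frequency
SAMPLES_PER_BIT = 10

def modulate_ook(bits):
    """Map each bit to an ON (1.0) or OFF (0.0) LED drive level."""
    return np.repeat(np.asarray(bits, dtype=float), SAMPLES_PER_BIT)

def demodulate_ook(samples):
    """Recover bits by averaging each bit period and thresholding at 0.5."""
    per_bit = samples.reshape(-1, SAMPLES_PER_BIT).mean(axis=1)
    return (per_bit > 0.5).astype(int)

message_bits = np.random.randint(0, 2, size=64)
waveform = modulate_ook(message_bits)
received = waveform + 0.1 * np.random.randn(waveform.size)   # crude receiver noise
recovered = demodulate_ook(received)
assert np.array_equal(recovered, message_bits)
```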

Laser Communication Systems

Definition
Lasers have been considered for space communications since their realization in 1960. Specific advancements were needed in component performance and system engineering, particularly for space-qualified hardware. Advances in system architecture, data formatting, and component technology over the past three decades have made laser communications in space not only viable but also an attractive approach for intersatellite link applications. Information transfer needs are driving the requirements to higher data rates, and laser cross-link technology has seen an explosion of global development activity, increased hardware availability, and greater design maturity. Most important in space laser communications has been the development of a reliable, high-power, single-mode laser diode as a directly modulated laser source. This technological advance offers the space laser communication system designer the flexibility to design very lightweight, high-bandwidth, low-cost communication payloads for satellites whose launch costs are a very strong function of launch weight. This feature substantially reduces blockage of the fields of view of the most desirable areas on satellites. The smaller antennas, with diameters typically less than 30 centimeters, create less momentum disturbance to any sensitive satellite sensors. Fewer on-board consumables are required over the long lifetime because there are fewer disturbances to the satellite compared with heavier and larger RF systems. The narrow beam divergence affords interference-free and secure operation.

Background
Until recently, the United States government was funding the development of an operational space laser cross-link system employing solid-state laser technology. NASA is developing technology and studying the applicability of space laser communication to its tracking and data relay network, both for cross-links and for user relay links. NASA's Jet Propulsion Laboratory is studying the development of large space- and ground-based receiving stations and payload designs for optical data transfer from interplanetary spacecraft. Space laser communication is beginning to be accepted as a viable and reliable means of transferring data between satellites. Ongoing hardware development efforts include ESA's satellite link experiment (SILEX) and the Japanese Laser Communication Experiment (LCE). The United States development programs ended with the termination of both the production of the laser cross-link subsystem and the FEWS satellite program. Satellite use of the spectrum must be regulated and shared on a worldwide basis. For this reason, the frequencies to be used by a satellite are established by a world body known as the International Telecommunication Union (ITU), with broadcast regulations controlled by a subgroup known as the World Administrative Radio Conference (WARC). An international consultative technical committee (CCIR) provides specific recommendations on satellite frequencies under consideration by WARC. The basic objective is to allocate particular frequency bands for different types of satellite services, and also to provide international regulations on maximum radiation levels from space, coordination with terrestrial systems, and the use of specific satellite locations in a given orbit. Within these allotments and regulations, an individual country can make its own specific frequency selections based on intended uses and desired satellite services.

Josephson Junction

Definition
The 20th century saw many developments in the field of electronics, basically for two reasons: (1) the development of the transistor, which forms the basis of everything that is electronics, and (2) the development of the IC, which enabled the fabrication of fast, compact and sophisticated electronic circuits. In the 21st century we are going to see some radical changes in the approach to electronics: (1) the replacement of semiconducting devices with superconducting devices, and (2) the use of non-classical theories of physics, such as relativity and quantum mechanics, to explain the phenomena, applications and working of electronic devices. The first step toward integrating the previously separate branches of electronics and superconductivity was taken by the physicist Brian Josephson with the invention of the Josephson junction (JJ) in 1962, for which he received the Nobel Prize in 1973. The analysis of the device is impossible using classical physics alone, and it has immense potential and numerous applications in almost all fields of applied electronics. The Josephson junction is basically an insulator sandwiched between two superconductor layers; hence the device is also called an SIS (superconductor-insulator-superconductor) junction. A tunneling phenomenon called Josephson tunneling takes place through the insulator when it is very thin (less than about 1.5 nm): charge carriers tunnel from the first superconductor to the second through the insulating barrier, so the junction as a whole behaves as a superconductor. To explain the working of the device we need to analyze the principles of superconductivity and of tunneling; superconductivity is explained in terms of the BCS theory, and tunneling in terms of the uncertainty principle.
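For reference, the behaviour sketched above is usually summarised by the two standard Josephson relations (they are not quoted in the text itself, but are the textbook form), linking the supercurrent through the junction and the voltage across it to the phase difference between the two superconductors:

\[ I = I_c \sin\varphi, \qquad V = \frac{\hbar}{2e}\,\frac{d\varphi}{dt} \]

Here \(I_c\) is the critical current of the junction and \(\varphi\) is the phase difference across the barrier; a current below \(I_c\) flows with zero voltage drop, which is the property exploited in most JJ applications.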

Superconductivity
It is a remarkable property in which a metal or alloy completely loses its resistivity, usually at temperatures close to absolute zero; it was discovered by Kamerlingh Onnes. As perfect conductors, superconductors carry current without resistive loss, i.e., an applied current will persist indefinitely without any loss of power. These materials are also perfect diamagnets, and a magnet placed above a superconductor will levitate, supported by its own magnetic field. Low-temperature superconductors exhibit the property at temperatures near -250°C. LBCO and certain alloys of La and Ba show the property near 35 K, while RBa2Cu3O7 and Bi2Sr2Ca2Cu3O10 can show it near 90 K. Thallium-based and mercury-based cuprates can show superconductivity at 134 K. Significant progress has been made in the development of high-temperature superconductivity, in particular with cuprate-based superconductors, and some organic compounds have lately been developed as superconductors.

Introduction to the Internet Protocols

Definition
TCP/IP is a set of protocols developed to allow cooperating computers to share resources across a network. It was developed by a community of researchers centered around the ARPAnet, and the ARPAnet is certainly the best-known TCP/IP network. However, as of June 1987, at least 130 different vendors had products that support TCP/IP, and thousands of networks of all kinds use it. First, some basic definitions. The most accurate name for the set of protocols we are describing is the "Internet protocol suite". TCP and IP are two of the protocols in this suite. Because TCP and IP are the best known of the protocols, it has become common to use the term TCP/IP or IP/TCP to refer to the whole family, and it is probably not worth fighting this habit. However, this can lead to some oddities; for example, I find myself talking about NFS as being based on TCP/IP, even though it doesn't use TCP at all. The Internet is a collection of networks, including the ARPAnet, NSFnet, regional networks such as NYSERnet, local networks at a number of university and research institutions, and a number of military networks. The term "Internet" applies to this entire set of networks. The subset of them that is managed by the Department of Defense is referred to as the "DDN" (Defense Data Network); this includes some research-oriented networks, such as the ARPAnet, as well as more strictly military ones. (Because much of the funding for Internet protocol development comes via the DDN organization, the terms Internet and DDN can sometimes seem equivalent.) All of these networks are connected to each other, and users can send messages from any of them to any other, except where there are security or other policy restrictions on access. Officially speaking, the Internet protocol documents are simply standards adopted by the Internet community for its own use. More recently, the Department of Defense issued a MILSPEC definition of TCP/IP, intended to be a more formal definition appropriate for use in purchasing specifications; however, most of the TCP/IP community continues to use the Internet standards, and the MILSPEC version is intended to be consistent with them. Whatever it is called, TCP/IP is a family of protocols. A few provide "low-level" functions needed for many applications; these include IP, TCP, and UDP. (These will be described in a bit more detail later.) Others are protocols for doing specific tasks, e.g. transferring files between computers, sending mail, or finding out who is logged in on another computer. Initially TCP/IP was used mostly between minicomputers or mainframes. These machines had their own disks and generally were self-contained. Thus the most important "traditional" TCP/IP services are:
- file transfer. The file transfer protocol (FTP) allows a user on any computer to get files from another computer, or to send files to another computer.
- remote login. The network terminal protocol (TELNET) allows a user to log in on any other computer on the network. You start a remote session by specifying a computer to connect to.
- computer mail. This allows you to send messages to users on other computers. Originally, people tended to use only one or two specific computers for mail.
- network file systems. This allows a system to access files on another computer in a somewhat more closely integrated fashion than FTP. A network file system provides the illusion that disks or other devices from one system are directly connected to other systems.

- remote printing. This allows you to access printers on other computers as if they were directly attached to yours. (The most commonly used protocol is the remote lineprinter protocol from Berkeley Unix).
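As a minimal concrete example of the kind of client built on this protocol family, the sketch below opens a plain TCP connection with Python's standard socket module and performs a trivial send-and-receive exchange. The host, port and request text are placeholders, not addresses taken from the text.

# Minimal TCP client; host, port and payload are placeholders.
import socket

HOST = "example.com"   # hypothetical server
PORT = 7               # hypothetical echo-style service

def tcp_exchange(payload: bytes) -> bytes:
    """Open a TCP connection, send the payload, and return the reply."""
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(payload)
        sock.shutdown(socket.SHUT_WR)      # signal that we are done sending
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:                   # remote side closed the connection
                break
            chunks.append(data)
    return b"".join(chunks)

if __name__ == "__main__":
    print(tcp_exchange(b"hello over TCP/IP\n"))

Application protocols such as FTP, TELNET and mail are all built as structured conversations over exactly this kind of reliable byte stream.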

Imagine

Definition
The focus of Imagine is to develop a programmable architecture that achieves the performance of special-purpose hardware on graphics and image/signal processing. This is accomplished by exploiting stream-based computation at the application, compiler, and architectural level. At the application level, we have cast several complex media applications such as polygon rendering, stereo depth extraction, and video encoding into streams and kernels. At the compiler level, we have developed programming languages for writing stream-based programs and software tools that optimize their execution on stream hardware. Finally, at the architectural level, we have developed the Imagine stream processor, a novel architecture that executes stream-based programs and is able to sustain tens of GFLOPS over a range of media applications with a power dissipation of less than 10 Watts.

Research Contributions
Stream Architecture
The Imagine Stream Architecture is a novel architecture that executes stream-based programs. It provides high performance with 48 floating-point arithmetic units and an area- and power-efficient register organization. A streaming memory system loads and stores streams from memory. A stream register file provides a large amount of on-chip intermediate storage for streams. Eight VLIW arithmetic clusters perform SIMD operations on streams during kernel execution. Kernel execution is sequenced by a micro-controller. A network interface is used to support multi-Imagine systems and I/O transfers. Finally, a stream controller manages the operation of all of these units.
Stream Programming Model
Applications for Imagine are programmed using the stream programming model. This model consists of streams and kernels. Streams are sequences of similar data records. Kernels are small programs which operate on a set of input streams and produce a set of output streams.
Software Tools
Imagine is programmed with a set of languages and software tools that implement the stream programming model. Applications are programmed in StreamC and KernelC. A stream scheduler maps StreamC to stream instructions for Imagine, and a kernel scheduler maps KernelC to VLIW kernel instructions for Imagine. Imagine applications have been tested using a cycle-accurate simulator, named ISim, and are currently being tested on a prototype board.
Programmable Graphics and Real-time Media Applications
The Imagine stream processor combines full programmability with high performance. This has enabled research into new real-time media applications such as programmable graphics pipelines.
VLSI Prototype
A prototype Imagine processor was designed and fabricated in conjunction with Texas Instruments and was received by Stanford on April 9, 2002. Imagine contains 21 million transistors and has a die size of 16 mm x 16 mm in a 0.15 micron standard cell technology.
Stream Processor Development Platform

A prototype development board was designed and fabricated in conjunction with ISI-East Dynamic Systems Division. This board has enabled experimental measurements of the prototype Imagine processor, experiments on performance of multi-Imagine systems, and additional application and software tool development.
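The stream programming model described above (streams of records transformed by small kernels) can be mimicked in ordinary software. The sketch below is only an illustrative analogue in Python, not StreamC or KernelC, and the two kernels shown (scale and threshold) are invented examples rather than kernels from the Imagine tool chain.

# Toy analogue of the stream programming model: kernels consume input
# streams record-by-record and produce output streams.
from typing import Iterable, Iterator

def scale_kernel(stream: Iterable[float], gain: float) -> Iterator[float]:
    """Kernel 1: multiply every record of the input stream by a gain."""
    for record in stream:
        yield record * gain

def threshold_kernel(stream: Iterable[float], limit: float) -> Iterator[int]:
    """Kernel 2: map each record to 1 or 0 depending on a threshold."""
    for record in stream:
        yield 1 if record > limit else 0

if __name__ == "__main__":
    pixels = [0.1, 0.4, 0.9, 0.7, 0.2]          # input stream of records
    out = list(threshold_kernel(scale_kernel(pixels, 2.0), 1.0))
    print(out)                                   # [0, 0, 1, 1, 0]

On Imagine the same structure is executed in hardware: the stream register file holds the intermediate streams, and each kernel runs as a VLIW program across the eight arithmetic clusters.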

Cellular Communications
Roke Manor Research is a leading provider of mobile telecommunications technology for both terminals and base stations. We add value to our clients' projects by reducing time-to-market and lowering production costs, and provide lasting benefits through building long-term relationships and working in partnership with our customers. We have played an active role in cellular communications technology since the 1980's, working initially in GSM and more recently in the definition and development of 3G (UMTS). Roke Manor Research has over 200 engineers with experience in designing hardware and software for 3G terminals and base stations and is currently developing technology for 4G and beyond. We are uniquely positioned to provide 2G, 3G and 4G expertise to our customers. The role of Roke Manor Research engineers in standardisation bodies (e.g. ETSI and 3GPP) provides us with intimate knowledge of all the 2G and 3G standards (GSM, GPRS, EDGE, UMTS FDD (WCDMA) and TD-SCDMA standards). Our engineers are currently contributing to the evolution of 3G standards and can provide up-to-the-minute implementation advice to customers.

Heliodisplay

The Heliodisplay is an interactive planar display. Though the image it projects appears much like a hologram, its inventors claim that it doesn't use holographic technology; it uses rear projection (not lasers, as originally reported) to project its image. It does not require any screen or substrate other than air to project its image, but it does eject a water-based vapour curtain for the image to be projected upon. The curtain is produced using ultrasonic technology similar to that used in foggers and comprises a number of columns of fog; this curtain is sandwiched between curtains of clean air to create an acceptable screen. Air drawn into the Heliodisplay moves through a dozen metal plates and then comes out again (the exact details of its workings are unknown, pending patent applications). It works as a kind of floating touch screen, making it possible to manipulate images projected in air with your fingers, and can be connected to a computer using a standard VGA connection. It can also connect to a TV or DVD player by a standard RGB video cable. Due to the turbulent nature of the curtain, however, it is not currently suitable as a workstation display. The Heliodisplay is an invention by Chad Dyner, who built it as a 5-inch prototype in his apartment before founding IO2 Technologies to further develop the product.

Optical Mouse
An optical mouse is an advanced computer pointing device that uses a light-emitting diode (LED), an optical sensor, and digital signal processing (DSP) in place of the traditional mouse ball and electromechanical transducer. Movement is detected by sensing changes in reflected light, rather than by interpreting the motion of a rolling sphere. The optical mouse takes microscopic snapshots of the working surface at a rate of more than 1,000 images per second. If the mouse is moved, the image changes. The tiniest irregularities in the surface can produce images good enough for the sensor and DSP to generate usable movement data. The best surfaces reflect but scatter light; an example is a blank sheet of white drawing paper. Some surfaces do not allow the sensor and DSP to function properly because the irregularities are too small to be detected; an example of a poor optical-mousing surface is unfrosted glass. In practice, an optical mouse does not need cleaning, because it has no moving parts. This all-electronic design also eliminates mechanical fatigue and failure. If the device is used with a proper surface, sensing is more precise than is possible with any pointing device using the old electromechanical design. This is an asset in graphics applications, and it makes computer operation easier in general.
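The displacement estimation hinted at above (comparing successive surface snapshots) can be illustrated with a brute-force block search. The sketch below, using NumPy, shifts one tiny synthetic frame against the next and picks the offset with the smallest difference; the frame size, search range and synthetic texture are arbitrary assumptions, and real optical-mouse DSPs use far more refined hardware correlators.

# Toy motion estimation between two "snapshots" of a surface.
import numpy as np

def estimate_motion(prev: np.ndarray, curr: np.ndarray, search: int = 3):
    """Return the (dy, dx) shift that best maps the previous frame onto the current one."""
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = float(np.sum((shifted - curr) ** 2))   # sum of squared differences
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((16, 16))                             # textured surface patch
    moved = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)   # mouse moved by (2, -1)
    print(estimate_motion(frame, moved))                     # expect (2, -1)

Summing these per-frame shifts over a thousand frames per second is what yields the smooth cursor motion reported to the computer.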

Time Division Multiple Access (TDMA)

TDMA, or Time Division Multiple Access, was one of the first digital cell phone standards available in the United States. It was the first successor to the original AMPS analog service that was popular throughout the country, and was in widespread service from the early-to-mid 1990s until roughly 2003, when the last of the TDMA carriers, Cingular and AT&T, switched to the GSM digital standard. TDMA was a significant leap over the analog wireless service that was in place at the time, and its chief benefit for carriers was that it used the available wireless spectrum much more efficiently than analog, allowing more phone calls to go through simultaneously. An additional benefit for carriers was that it virtually eliminated the criminal cell phone cloning that was common at the time, by encrypting the wireless signal. The primary benefit for wireless users of the era was dramatically increased call quality over the scratchy, frequently garbled or "underwater" sound that analog users had become accustomed to. All manufacturers produced TDMA handsets during this period, but Nokia's ubiquitous model 5165 is probably the most popular example of TDMA technology. TDMA was replaced by GSM to permit the use of advanced, data-intensive features such as text messaging and picture messaging, and to allow an even more efficient use of bandwidth.
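The efficiency gain mentioned above comes from dividing a single carrier into repeating time slots. The toy schedule below shows three calls sharing one channel in round-robin slots; the three-slot frame and the slot duration are illustrative simplifications, not the actual IS-136 frame structure.

# Toy TDMA frame: one carrier, repeating time slots, one call per slot.
SLOTS_PER_FRAME = 3
SLOT_MS = 6.67          # assumed slot duration in milliseconds

calls = ["call-A", "call-B", "call-C"]

def slot_owner(slot_index: int) -> str:
    """Return which call transmits in a given absolute slot number."""
    return calls[slot_index % SLOTS_PER_FRAME]

if __name__ == "__main__":
    for s in range(6):
        print(f"t={s * SLOT_MS:6.2f} ms  slot {s % SLOTS_PER_FRAME}: {slot_owner(s)}")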

Element Management System


An element management system (EMS) manages one or more of a specific type of telecommunications network element (NE). Typically, the EMS manages the functions and capabilities within each NE but does not manage the traffic between different NEs in the network. To support management of the traffic between itself and other NEs, the EMS communicates upward to higher-level network management systems (NMS) as described in the telecommunications management network (TMN) layered model. The EMS provides the foundation to implement TMN-layered operations support system (OSS) architectures that enable service providers to meet customer needs for rapid deployment of new services, as well as meeting stringent quality of service (QoS) requirements. The TeleManagement Forum common object request broker architecture (CORBA) EMS-to-NMS interface heralds a new era in OSS interoperability by making the TMN architecture a practical reality. This tutorial provides a comprehensive understanding of the role of the EMS in the telecommunications network; the functions that are within the domain of EMSs; and trade-offs between various approaches to element management. It is hoped that this information will assist readers by enhancing their basic understanding of these multifaceted components of the evolving network.

Extensible Markup Language

Many Web pages today are poorly written. Syntactically incorrect HTML code may work in most browsers even if it doesn't follow HTML rules. Browsers employ heuristics to deal with these flawed Web pages; however, Web-enabled wireless devices (such as PDAs) can't accommodate these hefty Web browsers. The next step in HTML's evolution comes in the form of XHTML (eXtensible HyperText Markup Language), which is basically a combination of HTML and XML. XML, the eXtensible Markup Language, is a successor to SGML. More general than HTML, it incorporates data inside the tags themselves and has unlimited descriptive capacity. The format of the display is kept independent and is given by another document, an XSLT stylesheet. The rules for creating tags are defined by yet another document, the DTD (Document Type Definition), which describes the grammar of the tags.
XML features:
- Meaningful tags based upon the content of the data.
- A separate document used for the presentation.
Why use XML? It is a standard and universal data format. It allows a presentation to be reused for different data, or different presentations to be used for the same data.
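The separation of content and presentation mentioned above can be made concrete with a tiny example: the snippet below builds a small XML document with Python's standard xml.etree library and then renders the same data two different ways. The element names (catalog, item, name, price) are invented for illustration.

# Same XML data, two presentations. Element names are invented examples.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<catalog>"
    "<item><name>Resistor</name><price>0.10</price></item>"
    "<item><name>Capacitor</name><price>0.25</price></item>"
    "</catalog>"
)

def render_as_list(root: ET.Element) -> str:
    """Presentation 1: a bulleted text list."""
    return "\n".join(f"- {i.findtext('name')}: ${i.findtext('price')}"
                     for i in root.iter("item"))

def render_as_csv(root: ET.Element) -> str:
    """Presentation 2: a CSV table built from the same elements."""
    rows = ["name,price"]
    rows += [f"{i.findtext('name')},{i.findtext('price')}" for i in root.iter("item")]
    return "\n".join(rows)

if __name__ == "__main__":
    print(render_as_list(doc))
    print(render_as_csv(doc))

In a full XML workflow the two render functions would be replaced by XSLT stylesheets, and a DTD (or schema) would constrain which tags may appear.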

Synchronous Optical Networking


The Synchronous Optical Network, commonly known as SONET, is a standard for communicating digital information using lasers or light-emitting diodes (LEDs) over optical fiber, as defined by GR-253-CORE from Telcordia. It was developed to replace the PDH system for transporting large amounts of telephone and data traffic and to allow interoperability between equipment from different vendors. The more recent Synchronous Digital Hierarchy (SDH) standard developed by the ITU (G.707 and its extension G.708) is built on the experience gained in the development of SONET. Both SDH and SONET are widely used today: SONET in the U.S. and Canada, SDH in the rest of the world. SDH is growing in popularity and is now the main focus, with SONET increasingly regarded as the regional variation. SONET differs from PDH in that the exact rates used to transport the data are tightly synchronized to network-based clocks, so an entire network can operate synchronously, though the presence of different timing sources allows different circuits within a SONET signal to be timed off different clocks (through the use of pointers and buffers). SDH was made possible by the existence of atomic clocks. Both SONET and SDH can be used to encapsulate earlier digital transmission standards, such as PDH, or used directly to support either ATM or so-called Packet over SONET networking. As such, it is inaccurate to think of SONET as a communications protocol in and of itself; rather, it is a generic, all-purpose transport container for moving both voice and data.

Digital Watermarking
With the rapid growth of the Internet and networking techniques, transferring and sharing multimedia data has become commonplace. Multimedia data is easily copied and modified, so the need for copyright protection is increasing. Digital watermarking, the imperceptible marking of multimedia data to "brand" ownership, has been proposed as a technique for copyright protection of multimedia data. Digital watermarking invisibly embeds copyright information into multimedia data, and has been used for copyright protection, fingerprinting, copy protection and broadcast monitoring. A watermarking algorithm requires both invisibility and robustness, which exist in a trade-off relation, so a good watermarking algorithm must satisfy both requirements. The process of digital watermarking involves the modification of the original multimedia data to embed a watermark containing key information such as authentication or copyright codes. The embedding method must leave the original data perceptually unchanged, yet should impose modifications that can be detected by using an appropriate extraction algorithm. Common types of signals to watermark are images, music clips and digital video; the application of digital watermarking to still images is concentrated on here. The major technical challenge is to design a highly robust digital watermarking technique, which discourages copyright infringement by making the process of watermark removal tedious and costly.
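As a minimal sketch of the embed/extract pair described above, the code below hides one watermark bit per pixel in the least significant bit of a grayscale image using NumPy. This is the simplest possible scheme and deliberately not robust (it would not survive compression or filtering); practical algorithms embed in transform-domain coefficients instead. The image and watermark here are synthetic examples.

# Least-significant-bit (LSB) watermarking sketch: imperceptible but fragile.
import numpy as np

def embed(image: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """Replace the LSB of each pixel with one watermark bit."""
    return (image & 0xFE) | (watermark_bits & 1)

def extract(marked: np.ndarray) -> np.ndarray:
    """Recover the watermark bits from the LSB plane."""
    return marked & 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # "image"
    mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)      # watermark bits
    marked = embed(cover, mark)
    assert np.array_equal(extract(marked), mark)
    # Each pixel changes by at most 1 gray level, so the mark is invisible.
    print("max pixel change:", int(np.max(np.abs(marked.astype(int) - cover.astype(int)))))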

CRT Display

CRT stands for cathode ray tube, the common picture tube used in TV sets for decades and still the most common display type today. Newer display types like plasma and LCD work differently and aren't as large or heavy as the CRT, but the picture isn't necessarily any better. A CRT produces its picture from a ray of electrons emanating from a cathode in the neck of the picture tube. The ray strikes the inner face of the picture tube, which is coated with lines of phosphor that light up when struck by the electron beam. The scan lines offer resolution greater than most LCD, plasma or DLP displays: for a fixed-pixel display to equal a CRT running 1080i, a resolution of 1920x1080 would be required, and only the most expensive LCD or plasma displays can reach this resolution.
Advantages of CRT
1. Over 50 years of engineering experience in CRT.
2. CRT TV sets are reliable and have a long life.
3. CRT rear-projection HDTVs are the least expensive large-screen TVs.
4. 1080i is equal to a fixed-pixel resolution of 1920x1080.
5. Direct-view CRT still gives the all-around best picture of all display types.
6. CRT TVs are inexpensive and better than ever before.
Disadvantages of CRT
1. CRTs are the biggest and heaviest of all TV types.
2. Direct-view picture tubes have a size limit of about 36 inches.
3. CRTs use more energy and generate more heat than DLP or LCD.
4. Rear-projection TVs have limited brightness when viewed at an angle.

Satellite Radio/TV System


Satellite systems are ideally suited for television and radio distribution, providing high-quality, high-reliability, low-maintenance, flexible alternatives to terrestrial systems. Unlike terrestrial microwave systems, there are no towers or repeaters to maintain, no radio fades to degrade performance, no extensive troubleshooting to diagnose problems and far less land to lease. Your capital investment for a satellite network is also much lower, especially in areas with difficult terrain. Receive stations can be deployed in a fraction of the time it would take to install a terrestrial system. With the advent of digital modulation and compression techniques, crystal clarity can be achieved with both video and audio, while at the same time minimizing transmission costs and ensuring the privacy of your network. The signals you receive are virtually identical to those generated at the studio. With newer-generation satellites, occupied satellite bandwidths can be as little as 9 MHz for a TV signal and its associated (stereo) audio channels. Stereo radio signals can be multiplexed with the TV signal or transmitted on separate narrowband digital carriers. Only stations designated by your control center will be able to decode your transmissions, thus ensuring privacy. Solid-state transmitter equipment is rapidly becoming the standard for new installations. Although initially more expensive, solid-state equipment enjoys the advantage of reduced maintenance costs for the life of the equipment.

Robotics

Over the course of human history the emergence of certain new technologies has globally transformed life as we know it. Disruptive technologies like fire, the printing press, oil, and television have dramatically changed both the planet we live on and mankind itself, most often in extraordinary and unpredictable ways. In pre-history these disruptions took place over hundreds of years; with the time compression induced by our rapidly advancing technology, they can now take place in less than a generation. We are currently at the edge of one such event. In ten years robotic systems will fly our planes, grow our food, explore space, discover life-saving drugs, fight our wars, sweep our homes and deliver our babies. In the process, this robotics-driven disruptive event will create a new 200-billion-dollar global industry and change life as you now know it, forever. Just as my children cannot imagine a world without electricity, your children will never know a world without robots. Come take a bold look at the future and the opportunities for Mechanical Engineers that await there.
The Three Laws of Robotics are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Wireless Application Protocol

Definition
The Wireless Application Protocol Forum is an industry group dedicated to the goal of enabling sophisticated telephony and information services on handheld wireless devices. These devices include mobile telephones, pagers, personal digital assistants (PDAs) and other wireless terminals. Recognizing the value and utility of the World Wide Web architecture, the WAP Forum has chosen to align its technology closely with the Internet and the Web. The WAP specification extends and leverages existing technologies, such as IP, HTTP, XML, SSL, URLs, scripting and other content formats. Ericsson, Motorola, Nokia and Unwired Planet founded the WAP Forum in June 1997. Since then, it has experienced impressive membership growth, with members joining from the ranks of the world's premier wireless service providers, handset manufacturers, infrastructure providers, and software developers. WAP Forum membership is open to all industry participants.

Goals of WAP Forum


The WAP Forum has the following goals:
- To bring Internet content and advanced data services to wireless phones and other wireless terminals.
- To create a global wireless protocol specification that works across all wireless network technologies.
- To enable the creation of content and applications that scale across a wide range of wireless bearer networks and device types.
- To embrace and extend existing standards and technology wherever possible and appropriate.
It is also very important for the WAP Forum to write its specifications in such a way that they complement existing standards. For example, the WAP 1.0 specification is designed to sit on top of existing bearer channel standards so that any bearer standard can be used with the WAP protocols to implement complete product solutions. When the WAP Forum identifies a new area of technology where a standard does not exist, or exists but needs modification for wireless, it works to submit its specifications to other industry standards groups.

WAP Protocol Stack


Any network is organized as a series of layers or levels, where each level performs a specific function. The set of rules that governs communication between peer entities within a layer is called a protocol, and the layers and protocols together form the protocol stack. A request from the mobile device is sent as a URL through the operator's network to the WAP gateway, which is the interface between the operator's network and the Internet.


Cellular Radio
Abstract of Cellular Radio
Cellular mobile radio systems aim to provide high-mobility, wide-ranging, two-way wireless voice communications. These systems accomplish their task by integrating wireless access with large-scale networks capable of managing mobile users. Cellular radio technology generally uses transmitter power at a level around 100 times that used by a cordless telephone (approximately 2 W for cellular).
Standards
Cellular radio has evolved into digital radio technologies, using the system standards of GSM (at 900 and 1800 MHz) in Europe, PDC in Japan, and IS-136A and IS-95A in the United States. Third-generation systems, such as wideband code division multiple access (WCDMA) and cdma2000, are currently under development.
Design Considerations
One of the most significant considerations in designing digital systems is the high cost of cell sites. This has motivated system designers to try to maximize the number of users per megahertz and users per cell site. Another important consideration is maintaining adequate coverage in areas of varying terrain and population density. For example, in order to cover sparsely populated regions, system designers have retained the high-power transmission requirement to provide maximum range from antenna locations. Communications engineers have also been developing very small coverage areas, or microcells. Microcells provide increased capacity in areas of high user density, as well as improved coverage of shadowed areas. Some microcell base stations are installed in places of high user concentration.
The mobile radio is a two-way communication device that operates over radio frequencies; as such, the channel for information and messages in a mobile radio is variable. Earlier versions of the mobile radio, once known as radiophones, were one-way communication systems used for broadcast. Contemporary mobile radio systems can have as many as a hundred channels and may be controlled by microprocessors; these types require software to encode channels and operate their integrated functions. The mobile radio, also known as a two-way radio system, allows the exchange of messages only with other mobile radios through push-to-talk (PTT) functions. A mobile radio also features wireless transceivers, making mobile radios portable. Mobile radio systems may be used for communications in aircraft, ships, automobiles, and other vehicles; the power supply on which a mobile radio runs depends on the type of vehicle it is mounted on. A mobile radio system is composed of a transceiver and a microphone with a push-to-talk key, and it has an antenna that links to the transceiver. Since most types of mobile radio are used in moving vehicles, where the surrounding noise can be loud, some come with an external speaker; other models have headsets and microphones with noise-reduction capabilities.
How does a mobile radio work?

Most mobile radios operate on a single frequency band. The radio transceiver contains transmit and receive frequencies. Very high frequency (VHF) and ultra high frequency (UHF) allow a mobile radio to operate with maximum coverage; the typical operating frequency range is from 150 to 470 MHz. To transmit a message, the PTT key must be pressed during talk time to allow the voice message to be dispatched by the sending party. During this period, the sending party cannot hear or receive any incoming messages on the mobile radio. Once the PTT button is released, the sender can hear the response of the receiving party.
Why do we need a mobile radio?
The use of mobile radio in transportation, security, and general operations makes communication fast, efficient and safe. It allows control centers to monitor the location of vehicles and dispatch announcements to several receivers simultaneously. Additionally, its area coverage is very wide and does not depend on a cellular network, which may fluctuate during emergency situations. Different types of mobile radio are portable and capable of withstanding shock and severe weather conditions. Most countries impose certain requirements on the manufacture, sale and use of two-way radio systems; this helps ensure that the communication device functions according to standards and that its use does not interfere with other communication systems.

Optic Fibre Cable


Optical fiber (or "fiber optic") refers to the medium and the technology associated with the transmission of information as light pulses along a glass or plastic wire or fiber. Optical fiber carries much more information than conventional copper wire and is in general not subject to electromagnetic interference or the need to retransmit signals. Most telephone company long-distance lines are now optical fiber. Transmission on optical fiber requires repeaters at distance intervals. The glass fiber requires more protection within an outer cable than copper. For these reasons, and because the installation of any new wiring is labor-intensive, few communities yet have optical fiber wires or cables from the phone company's branch office to local customers (known as local loops). Optical fiber consists of a core, cladding, and a protective outer coating, which guide light along the core by total internal reflection. The core and the lower-refractive-index cladding are typically made of high-quality silica glass, though they can both be made of plastic as well. An optical fiber can break if bent too sharply. Due to the microscopic precision required to align the fiber cores, connecting two optical fibers, whether done by fusion splicing or mechanical splicing, requires special skills and interconnection technology. The two main categories of optical fiber used in fiber-optic communications are multi-mode optical fiber and single-mode optical fiber. Multimode fiber has a larger core, allowing less precise, cheaper transmitters and receivers to connect to it, as well as cheaper connectors.

Infinite Dimensional Vector Space


We are familiar with the properties of finite dimensional vector spaces over a field. Many of the results that are valid in finite dimensional vector spaces can very well be extended to the infinite dimensional case, sometimes with slight modifications in definitions, but there are certain results that do not hold in the infinite dimensional case. Here we consolidate some of those results and present them in a readable form. We present the whole work in three chapters. All those concepts in vector spaces and linear algebra which we require in the sequel are included in the first chapter. In section I of chapter II we discuss the fundamental concepts and properties of infinite dimensional vector spaces, and in section II the properties of the subspaces of infinite dimensional vector spaces are studied; we will find that the chain conditions which hold in the finite case do not hold in the infinite case. In chapter III we study linear transformations on infinite dimensional vector spaces and introduce the concept of infinite matrices. We will show that every linear transformation corresponds to a row-finite matrix over the underlying field and vice versa, and will prove that the set of all linear transformations from an infinite dimensional vector space into another is isomorphic to the space of all row-finite matrices over the underlying field. In section II we consider the conjugate space of an infinite dimensional vector space, define its dimension and cardinality, and show that the dimension of the conjugate space is greater than that of the original space. Finally we will show that the conjugate space of the conjugate space of an infinite dimensional vector space cannot be identified with the original space.
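The correspondence mentioned above can be stated compactly; the formulation below is the standard one and is an interpretation of the text rather than a quotation from it. If \(V\) and \(W\) have bases \(\{e_i\}_{i\in I}\) and \(\{f_j\}_{j\in J}\), then a linear map \(T:V\to W\) is determined by

\[ T(e_i) = \sum_{j \in J} a_{ij}\, f_j, \]

where for each fixed \(i\) only finitely many \(a_{ij}\) are nonzero. The array \((a_{ij})\) is the associated row-finite matrix, and \(\operatorname{Hom}(V,W)\) is isomorphic to the space of all such matrices. For infinite dimensional \(V\) one also has \(\dim V^{*} > \dim V\), which is the inequality referred to in the closing sentences.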

Low-Voltage Differential Signaling (LVDS)


Low-Voltage Differential Signaling (LVDS) is a new technology addressing the needs of today's high-performance data transmission applications. The LVDS standard is becoming the most popular differential data transmission standard in the industry. This is driven by two simple features, captured in the slogan "Gigabits @ milliwatts!": LVDS delivers high data rates while consuming significantly less power than competing technologies. In addition, it brings many other benefits, which include:
a) Low-voltage power supply compatibility
b) Low noise generation
c) High noise rejection
d) Robust transmission signals
e) Ability to be integrated into system-level ICs
LVDS technology allows products to address high data rates ranging from hundreds of Mbps to greater than 2 Gbps. For all of the above reasons, it has been deployed across many market segments wherever the need for speed and low power exists. Consumers are demanding more realistic visual information in the office and in the home. This drives the need to move video, 3D graphics and photo-realistic image data from cameras to PCs and printers, and through LAN, phone, and satellite systems to home set-top boxes and digital VCRs. Solutions exist today to move this high-speed digital data both very short and very long distances, on printed circuit boards (PCBs) and across fiber or satellite networks. Moving this data from board to board or box to box, however, requires an extremely high-performance solution that consumes a minimum of power, generates little noise (it must meet increasingly stringent FCC/CISPR EMI requirements), is relatively immune to noise, and is cost-effective.

Plasma Display
A plasma display is a type of flat-panel display that works by sandwiching a neon/xenon gas mixture between two sealed glass plates with parallel electrodes deposited on their surfaces. The plates are sealed so that the electrodes form right angles, creating pixels. When a voltage pulse passes between two electrodes, the gas breaks down and produces a weakly ionized plasma, which emits UV radiation. The UV radiation activates color phosphors, and visible light is emitted from each pixel. Also called a "gas discharge display," this is a flat-screen technology that uses tiny cells lined with phosphor that are full of inert ionized gas (typically a mix of xenon and neon). Three cells make up one pixel (one cell has red phosphor, one green, one blue). The cells are sandwiched between x- and y-axis panels, and a cell is selected by charging the appropriate x and y electrodes. The charge causes the gas in the cell to emit ultraviolet light, which causes the phosphor to emit color. The amount of charge determines the intensity, and the combination of the different intensities of red, green and blue produces all the colors required. Today, plasma displays are becoming more and more popular. Compared to conventional CRT displays, plasma displays are about one-tenth the thickness (around 4 inches) and one-sixth the weight (less than 67 pounds for a 40-inch display). They use over 16 million colors and have a 160-degree viewing angle. Companies such as Panasonic, Fujitsu, and Pioneer manufacture plasma displays. Plasma displays were initially monochrome, typically orange, but color displays have become very popular and are used for home theater and computer monitors as well as digital signs. The plasma technology is similar to the way neon signs work, combined with the red, green and blue phosphor technology of a CRT. Plasma monitors consume significantly more current than LCD-based monitors.

Landmine Detection Using Impulse Ground Penetrating Radar

Definition
Landmines are affecting the lives and livelihoods of millions of people around the world. A video impulse ground penetrating radar (GPR) system for the detection of small and shallow buried objects has been developed. The hardware combines commercially available components with components specially developed or modified for use in the system. The GPR system has been designed to measure accurately the electromagnetic field backscattered from subsurface targets in order to allow identification of detected targets through the solution of the inverse scattering problem. The GPR has been tested in different environmental conditions and has proved its ability to detect small and shallow buried targets. Landmines and unexploded ordnance (UXO) are a legacy of war, insurrection, and guerilla activity. Landmines kill and maim approximately 26,000 people annually. In Cambodia, whole areas of arable land cannot be farmed due to the threat of landmines. United Nations relief operations are made more difficult and dangerous due to the mining of roads. Current demining techniques are heavily reliant on metal detectors and prodders.
Technologies used for landmine detection include:
- Metal detectors, capable of finding even low-metal-content mines in mineralized soils.
- Nuclear magnetic resonance, fast neutron activation and thermal neutron activation.
- Thermal imaging and electro-optical sensors, which detect evidence of buried objects.
- Biological sensors such as dogs, pigs, bees and birds.
- Chemical sensors such as thermal fluorescence, which detect the airborne and waterborne presence of explosive vapors.
In this seminar, we will concentrate on Ground Penetrating Radar (GPR). This ultra-wideband radar provides centimeter resolution to locate even small targets. There are two distinct types of GPR: time-domain and frequency-domain. Time-domain or impulse GPR transmits discrete pulses of nanosecond duration and digitizes the returns at GHz sample rates. Frequency-domain GPR systems transmit single frequencies either uniquely, as a series of frequency steps, or as a chirp; the amplitude and phase of the return signal are measured, and the resulting data is converted to the time domain. GPR operates by detecting dielectric contrasts in the soil, which allows it to locate even non-metallic mines. In this discussion we deal with buried anti-tank (AT) and anti-personnel (AP) landmines, which require close approach or contact to activate. AT mines range from about 15 to 35 cm in size. They are typically buried up to 40 cm deep, but they can also be deployed on the surface of a road to block a column of machinery. AP mines range from about 5 to 15 cm in size. AT mines are designed to impede the progress of, or destroy, vehicles, while AP mines are designed to kill and maim people.
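For the impulse (time-domain) variant described above, target detection amounts to finding echoes of the transmitted pulse in the digitised return. The sketch below runs a simple matched filter (cross-correlation with the known pulse shape) over a synthetic A-scan using NumPy; the pulse shape, echo delays, amplitudes and noise level are invented for illustration and are not taken from the system described in the text.

# Matched-filter detection on a synthetic impulse-GPR trace (A-scan).
import numpy as np

rng = np.random.default_rng(2)
n = 512
pulse = np.array([0.0, 0.5, 1.0, 0.5, -0.5, -1.0, -0.5, 0.0])  # toy wavelet

# Build a synthetic return: two buried reflectors plus receiver noise.
trace = 0.05 * rng.standard_normal(n)
for delay, amplitude in [(120, 1.0), (260, 0.8)]:
    trace[delay:delay + len(pulse)] += amplitude * pulse

# Matched filter = correlate the trace with the transmitted pulse shape.
score = np.correlate(trace, pulse, mode="valid")
peaks = np.argsort(score)[-2:]           # two strongest responses
print("estimated echo delays:", sorted(int(p) for p in peaks))  # near 120 and 260

Converting the estimated delays to depth requires the wave velocity in the soil, which depends on its dielectric constant; that conversion, and the inverse-scattering step used for target identification, are beyond this toy example.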

General Packet Radio Service

Definition
Wireless phone use is taking off around the world. Many of us would no longer know how to cope without our cell phones. Always being connected offers us flexibility in our lifestyles, makes us more productive in our jobs, and makes us feel more secure. So far, voice has been the primary wireless application. But with the Internet continuing to influence an increasing proportion of our daily lives, and more of our work being done away from the office, it is inevitable that the demand for wireless data is going to ignite. Already, in those countries that have cellular data services readily available, the number of cellular subscribers taking advantage of data has reached significant proportions. But to move forward, the question is whether current cellular data services are sufficient, or whether the networks need to deliver greater capabilities. The fact is that with proper application configuration, use of middleware, and new wireless-optimized protocols, today's cellular data can offer tremendous productivity enhancements. But for those potential users who have stood on the sidelines, subsequent generations of cellular data should overcome all of their objections. These new services will roll out both as enhancements to existing second-generation cellular networks and as an entirely new third generation of cellular technology. In 1999, the primary cellular-based data services were Cellular Digital Packet Data (CDPD), circuit-switched data services for GSM networks, and circuit-switched data services for CDMA networks. All of these services offer speeds in the 9.6 Kbps to 14.4 Kbps range. The basic reason for such low speeds is that in today's cellular systems, data is allocated the same radio bandwidth as a voice call. Since voice encoders (vocoders) in current cellular networks digitize voice in the range of 8 to 13 Kbps, that's about the amount available for data. Back then, 9.6 Kbps was considered more than adequate. Today, it can seem slow with graphical or multimedia content, though it is more than adequate for text-based and carefully configured applications. There are two basic ways that the cellular industry currently delivers data services. One approach is with smart phones, which are cellular phones that include a microbrowser; with these, you can view specially formatted Internet information. The other approach is through wireless modems, supplied either in PC Card format or by using a cell phone with a cable connection to a computer. GPRS services will mirror GSM services, with the exception that GPRS will offer a much higher transmission rate, which will improve most of the existing services and make possible the introduction of new services as operators and users (business and private) come to appreciate the newly introduced technology. Services such as the Internet, videoconferencing and on-line shopping will be as smooth as talking on the phone; moreover, we will be able to access these services whether we are at work, at home or traveling. In the new information age, the mobile phone will deliver much more than just voice calls. It will become a multimedia communications device, capable of sending and receiving graphic images and video.

The most common methods used for data transfer are circuit switching and packet switching. With circuit-switched transmission, a dedicated circuit is first established across a sequence of links, and the whole channel is then allocated to a single user for the entire duration of the call. With packet-switched transmission, the data is first cut into small parts called packets, which are sent in sequence to the receiver, which reassembles them. This ensures that the same link resources can be shared at the same time by many different users; the link is used only when a user has something to send, and when there is no data to be sent the link is free to be used by another call. Packet switching is therefore ideal for bursty traffic such as Web browsing or e-mail, whereas circuit switching suits constant-rate traffic such as voice.
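The packet-switching idea in the previous paragraph can be illustrated in a few lines of code: the sketch below cuts a message into fixed-size numbered packets and reassembles them at the receiver even if they arrive out of order. The 8-byte payload size and the (sequence number, payload) header layout are arbitrary choices for the example, not the GPRS frame format.

# Toy packetisation: split a message into numbered packets and reassemble.
PAYLOAD = 8  # bytes of user data per packet (arbitrary choice)

def packetise(message: bytes) -> list[tuple[int, bytes]]:
    """Cut the message into (sequence number, payload) packets."""
    return [(i, message[i * PAYLOAD:(i + 1) * PAYLOAD])
            for i in range((len(message) + PAYLOAD - 1) // PAYLOAD)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Order the packets by sequence number and concatenate the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

if __name__ == "__main__":
    msg = b"GPRS sends data only when there is data to send."
    pkts = packetise(msg)
    pkts.reverse()                      # simulate out-of-order arrival
    assert reassemble(pkts) == msg
    print(f"{len(pkts)} packets of up to {PAYLOAD} bytes each")

Because each packet occupies the radio link only while it is being sent, many users can interleave their packets on the same channel, which is the efficiency gain GPRS exploits.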

NRAM

Definition
Nano-RAM (NRAM) is a proprietary computer memory technology from the company Nantero; the related nanomotor was invented at the University of Bologna and California NanoSystems. NRAM is a type of nonvolatile random access memory based on the mechanical position of carbon nanotubes deposited on a chip-like substrate. In theory, the small size of the nanotubes allows for very high density memories. Nantero also refers to it as NRAM for short, but this acronym is also commonly used as a synonym for the more common NVRAM, which refers to all nonvolatile RAM memories. The nanomotor is a molecular motor which works continuously without the consumption of fuel; it is powered by sunlight. The research is federally funded by the National Science Foundation and the National Academy of Sciences.

Carbon Nanotubes
Carbon nanotubes (CNTs) are a recently discovered allotrope of carbon. They take the form of cylindrical carbon molecules and have novel properties that make them potentially useful in a wide variety of applications in nanotechnology, electronics, optics, and other fields of materials science. They exhibit extraordinary strength and unique electrical properties, and are efficient conductors of heat. Inorganic nanotubes have also been synthesized. A nanotube is a member of the fullerene structural family, which also includes buckyballs. Whereas buckyballs are spherical in shape, a nanotube is cylindrical, with at least one end typically capped with a hemisphere of the buckyball structure. Their name is derived from their size, since the diameter of a nanotube is on the order of a few nanometers (approximately 50,000 times smaller than the width of a human hair), while they can be up to several millimeters in length. There are two main types of nanotubes: single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs). Manufacturing a nanotube is dependent on applied quantum chemistry, specifically, orbital hybridization. Nanotubes are composed entirely of sp2 bonds, similar to those of graphite. This bonding structure, stronger than the sp3 bonds found in diamond, provides the molecules with their unique strength. Nanotubes naturally align themselves into "ropes" held together by Van der Waals forces. Under high pressure, nanotubes can merge together, trading some sp2 bonds for sp3 bonds, giving great possibility for producing strong, unlimited-length wires through high-pressure nanotube linking.

Fabrication Of NRAM
This nano-electromechanical memory, called NRAM, is a memory with actual moving parts, with dimensions measured in nanometers. Its carbon-nanotube-based technology takes advantage of van der Waals forces to create the basic on/off junctions of a bit. Van der Waals forces are interactions between atoms that enable noncovalent binding; they rely on electron attractions that arise only at the nanoscale as a force to be reckoned with. The company is using this property in its design to integrate nanoscale material properties with established CMOS fabrication techniques.

Storage In NRAM
NRAM works by balancing carbon nanotubes on ridges of silicon. Under differing electric charges, the tubes can be physically swung into one of two positions, representing ones and zeros. Because the tubes are so small, this movement is very fast and needs very little power, and because the tubes are a thousand times as conductive as copper, it is very easy to sense the position and read back the data. Once in position, the tubes stay there until a signal resets them.

The bit itself is not stored in the nanotube, but rather as the position of the nanotube: up is bit 0 and down is bit 1. Bits are switched between the states by the application of an electric field. The technology works by changing the charge placed on a latticework of crossed nanotubes. By altering the charges, engineers can cause the tubes to bind together or separate, creating the ones and zeros that form the basis of computer memory. If two nanotubes perpendicular to each other carry opposite charges, they will bend together and touch; if they carry like charges, they will repel. These two positions are used to store one and zero. The chip stays in the same state until another change is made in the electric field, so when the computer is turned off the memory is not erased. All data can be kept in the NRAM, giving the computer an instant boot.
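A toy model of the crossed-nanotube array described above is sketched below: each crossing point stores a bit as a persistent up/down mechanical state that is set by the applied charges and kept until rewritten. This is only a conceptual illustration of the addressing scheme, not a description of Nantero's actual cell design or sensing circuitry.

# Conceptual model of a crossed-nanotube memory array: each (row, column)
# crossing holds a persistent up/down state. Purely illustrative.
class NanotubeCrossbar:
    def __init__(self, rows: int, cols: int):
        # 0 = tubes apart (up, bit 0), 1 = tubes touching (down, bit 1)
        self.state = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        """Apply opposite charges (bit 1) or like charges (bit 0) at a crossing."""
        self.state[row][col] = bit & 1      # the position persists with no power

    def read(self, row: int, col: int) -> int:
        """Sense the junction: touching tubes conduct, separated tubes do not."""
        return self.state[row][col]

if __name__ == "__main__":
    mem = NanotubeCrossbar(4, 8)
    mem.write(2, 5, 1)
    print(mem.read(2, 5), mem.read(0, 0))   # 1 0 (state kept until rewritten)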

Global System for Mobile Communication (GSM)

Global System for Mobile communication (GSM) is a globally accepted standard for digital cellular communication. GSM is the name of a standardization group established in 1982 to create a common European mobile telephone standard that would formulate specifications for a pan-European mobile cellular radio system operating at 900 MHz. It is estimated that many countries outside of Europe will join the GSM partnership. Throughout the evolution of cellular telecommunications, various systems have been developed without the benefit of standardized specifications. This presented many problems directly related to compatibility, especially with the development of digital radio technology. The GSM standard is intended to address these problems. GSM provides recommendations, not requirements: the GSM specifications define the functions and interface requirements in detail but do not address the hardware. The reason for this is to constrain the designers as little as possible while still making it possible for operators to buy equipment from different suppliers. The GSM network is divided into three major systems: the switching system (SS), the base station system (BSS), and the operation and support system (OSS). The operations and maintenance center (OMC) is connected to all equipment in the switching system and to the BSC. The implementation of the OMC is called the operation and support system (OSS). The OSS is the functional entity from which the network operator monitors and controls the system. The purpose of the OSS is to offer the customer cost-effective support for the centralized, regional, and local operational and maintenance activities that are required for a GSM network. An important function of the OSS is to provide a network overview and support the maintenance activities of different operation and maintenance organizations.

Wireless Intelligent Network (WIN)


The wireless intelligent network (WIN) is a concept being developed by the Telecommunications Industry Association (TIA) Standards Committee TR45.2. The charter of this committee is to drive intelligent network (IN) capabilities, based on interim standard (IS)-41, into wireless networks. IS-41 is a standard currently being embraced by wireless providers because it facilitates roaming. Basing WIN standards on this protocol enables a graceful evolution to an IN without making current network infrastructure obsolete. Today's wireless subscribers are much more sophisticated telecommunications users than they were five years ago. No longer satisfied with just completing a clear call, today's subscribers demand innovative ways to use the wireless phone. They want multiple services that allow them to handle or select incoming calls in a variety of ways. Enhanced services are very important to wireless customers. They have come to expect, for instance, services such as caller ID and voice messaging bundled in the package when they buy and activate a cellular or personal communications service (PCS) phone. Whether prepaid, voice/data messaging, Internet surfing, or location-sensitive billing, enhanced services will become an important differentiator in an already crowded, competitive service-provider market. Enhanced services will also entice potentially new subscribers to sign up for service and will drive up airtime through increased usage of PCS or cellular services. As the wireless market becomes increasingly competitive, rapid deployment of enhanced services becomes critical to a successful wireless strategy.

Intelligent network (IN) solutions have revolutionized wireline networks. Rapid creation and deployment of services has become the hallmark of a wireline network based on IN concepts. Wireless intelligent network (WIN) will bring those same successful strategies into the wireless networks.

Micro-electro Mechanical Systems (MEMS)


The satellite industry could experience its biggest revolution since it joined the ranks of commerce, thanks to some of the smallest machines in existence. Researchers are performing experiments designed to convince the aerospace industry that microelectromechanical systems (MEMS) could open the door to low-cost, high-reliability, mass-produced satellites. MEMS combine conventional semiconductor electronics with beams, gears, levers, switches, accelerometers, diaphragms, microfluidic thrusters, and heat controllers, all of them microscopic in size. "We can do a whole new array of things with MEMS that cannot be done any other way," said Henry Helvajian, a senior scientist with Aerospace Corp., a nonprofit aerospace research and development organization in El Segundo, Calif. Microelectromechanical systems, or MEMS, are integrated micro devices or systems combining electrical and mechanical components. They are fabricated using integrated circuit (IC) batch processing techniques and can range in size from micrometers to millimeters. These systems can sense, control and actuate on the micro scale, and function individually or in arrays to generate effects on the macro scale. MEMS is an enabling technology, and current applications include accelerometers, pressure, chemical and flow sensors, micro-optics, optical scanners, and fluid pumps. A satellite generally consists of batteries, internal state sensors, communication systems and control units. All of these can be made with MEMS so that size and cost are considerably reduced. Small satellites can also be constructed by stacking wafers covered with MEMS and electronic components; these satellites are called 1 kg-class satellites, or picosats. Having higher resistance to radiation and vibration than conventional devices, they can be mass-produced, thereby reducing cost, and can be used for a variety of space applications.

Smart Quill

Definition
Lyndsay Williams of Microsoft Research's Cambridge (UK) lab is the inventor of the SmartQuill, a pen that can remember the words it is used to write and then transform them into computer text. The idea that "it would be neat to put all of a handheld-PDA type computer in a pen" came to the inventor in her sleep. "It's the pen for the new millennium," she says. Encouraged by Nigel Ballard, a leading consultant to the mobile computer industry, Williams took her prototype to the British Telecommunications Research Lab, where she was promptly hired and given money and institutional support for her project. The prototype, called SmartQuill, has been developed by world-leading research laboratories run by BT (formerly British Telecom) at Martlesham, eastern England. It is claimed to be the biggest revolution in handwriting since the invention of the pen. The sleek and stylish prototype pen differs from other electronic pens on the market in that users don't have to write on a special pad to record what they write; any surface can be used for writing, such as paper, a tablet, a screen or even the air. The SmartQuill isn't all space-age, though -- it contains an ink cartridge so that users can see what they write down on paper. SmartQuill contains sensors that record movement by using the earth's gravity, irrespective of the platform used. The pen records the information entered by the user. Your words of wisdom can also be uploaded to your PC through the "digital inkwell", while files that you might want to view on the pen are downloaded to SmartQuill as well. It is an interesting idea, and it even comes with one attribute that makes the entire history of pens pale by comparison: if someone else picks up your SmartQuill and tries to write with it, it won't work, because the user can train the pen to recognize a particular handwriting. Hence SmartQuill recognizes only the owner's handwriting. SmartQuill is a computer housed within a pen which allows you to do what a normal personal organizer does. It is truly mobile because of its small size and one-handed use. People could use the pen in the office to replace a keyboard, but the main attraction will be for users who usually take notes by hand on the road and type them up when returning to the office; SmartQuill will let them skip the step of typing up their notes.
WORKING

SmartQuill is slightly larger than an ordinary fountain pen. Users enter information into its applications by pushing a button on the pen and writing down what they would like to enter. The SmartQuill does not need a screen to work; the really clever bit of the technology is its ability to read handwriting not only on paper but on any flat surface, horizontal or vertical. There is also a small three-line screen for reading the information stored in the pen; users can scroll down the screen by tilting the pen slightly. The user trains the pen to recognize a particular handwriting style - no matter how messy it is, as long as it is consistent, the pen can recognize it. The handwritten notes are stored on the pen's hard disk. The pen is then plugged into an electronic "inkwell", and text data is transmitted to a desktop computer, printer, or modem, or to a mobile telephone to send files electronically. Up to 10 pages of notes can be stored locally on the pen. A tiny light at the tip allows writing in the dark, and when the pen is left idle for some time the power switches off automatically.
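The tilt-to-scroll behaviour mentioned above is easy to picture with a small sketch. This is purely illustrative: the sensor reading, the threshold and the screen size are assumptions of mine, not details of the actual SmartQuill firmware.

# Illustrative tilt-scrolling for a small three-line display.
# read_tilt_degrees would come from an accelerometer/tilt sensor; the +/-15 degree
# threshold and the 3-line window are arbitrary assumptions.

LINES_PER_SCREEN = 3

def scroll_position(current_top, tilt_degrees, total_lines, threshold=15):
    """Move the visible window down when the pen tilts forward, up when it tilts back."""
    if tilt_degrees > threshold:
        current_top += 1                      # tilt forward -> scroll down one line
    elif tilt_degrees < -threshold:
        current_top -= 1                      # tilt backward -> scroll up one line
    # Clamp so the window always stays inside the stored text.
    return max(0, min(current_top, total_lines - LINES_PER_SCREEN))

if __name__ == "__main__":
    notes = [f"note line {i}" for i in range(10)]
    top = 0
    for tilt in (20, 20, -20, 5):             # a fake sequence of tilt readings
        top = scroll_position(top, tilt, len(notes))
        print(notes[top:top + LINES_PER_SCREEN])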
FEATURES

- Display technology used in SmartQuill
- Handwriting recognition and signature verification
- Display scrolls using tilt sensors
- Communication with other devices
- Memory and power

Automatic Number Plate Recognition

Definition
Automatic Number Plate Recognition, or ANPR, is a technology that uses pattern recognition to 'read' vehicle number plates.
- ANPR systems can work by tracking vehicles' travel time between two fixed points, and therefore calculate the average speed.
- In simple terms, ANPR cameras 'photograph' the number plates of the vehicles that pass them. This 'photograph' is then fed into a computer system to find out details about the driver and owner of the vehicle, and details about the vehicle itself.
- ANPR consists of cameras linked to a computer.
- As a vehicle passes, ANPR 'reads' Vehicle Registration Marks - more commonly known as number plates - from digital images, captured through cameras located either in a mobile unit, built into traffic vehicles, or via Closed Circuit Television (CCTV).
- The digital image is converted into data, which is processed through the ANPR system.

ANPR is used for detecting crime through intelligence monitoring, identifying stolen vehicles, detecting vehicle document crime, electronic toll collection, and so on. There are six primary algorithms that the software requires for identifying a licence plate:
1. Plate localisation - responsible for finding and isolating the plate in the picture
2. Plate orientation and sizing - compensates for the skew of the plate and adjusts the dimensions to the required size
3. Normalisation - adjusts the brightness and contrast of the image
4. Character segmentation - finds the individual characters on the plate
5. Optical character recognition
6. Syntactical/geometrical analysis - checks characters and positions against country-specific rules
Factors that make recognition difficult include:
- Poor image resolution, usually because the plate is too far away but sometimes resulting from the use of a low-quality camera
- Blurry images, particularly motion blur, most likely on mobile units
- Poor lighting and low contrast due to overexposure, reflection or shadows
- An object obscuring the plate, quite often a tow bar, or dirt on the plate
- A different font, popular for vanity plates
- Circumvention techniques
Automatic number plate recognition is a mass surveillance method that uses optical character recognition on images to read the licence plates on vehicles. As of 2006, systems can scan number plates at around one per second on cars travelling up to 100 mph (160 km/h). They can use existing closed-circuit television or road-rule enforcement cameras, or ones specifically designed for the task. They are used by various police forces and as a method of electronic toll collection on pay-per-use roads, and for monitoring traffic activity such as red light adherence at an intersection.
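A minimal sketch of how the six stages above chain together, assuming grayscale images as NumPy arrays. Plate localisation and OCR are deliberately stubbed (a real system would use trained detectors and classifiers); only normalisation and character segmentation are worked out, so treat this as an illustration of the pipeline shape rather than a working recogniser.

import numpy as np

# Sketch of the ANPR pipeline described above. Localisation and OCR are placeholders;
# normalisation and character segmentation are simple, genuine examples.

def localise_plate(image):
    # Placeholder: a real system would detect the plate; here we assume it fills the frame.
    return image

def normalise(plate):
    # Stretch brightness/contrast to the full 0..255 range.
    lo, hi = plate.min(), plate.max()
    return ((plate - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

def segment_characters(plate, threshold=128):
    # Binarise (dark characters on a light background), then split on empty columns.
    ink = plate < threshold
    cols_with_ink = ink.any(axis=0)
    segments, start = [], None
    for x, has_ink in enumerate(cols_with_ink):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            segments.append(plate[:, start:x])
            start = None
    if start is not None:
        segments.append(plate[:, start:])
    return segments

def recognise(char_img):
    # Placeholder for OCR plus the syntactical/geometrical checks.
    return "?"

def read_plate(image):
    plate = normalise(localise_plate(image))
    return "".join(recognise(c) for c in segment_characters(plate))

if __name__ == "__main__":
    fake = np.full((20, 60), 220, dtype=np.uint8)
    fake[5:15, 10:14] = 30          # one fake "character"
    fake[5:15, 30:34] = 30          # another
    print(read_plate(fake))          # prints "??" - two character segments found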

Optical Camouflage

Definition
Optical camouflage is a hypothetical type of active camouflage currently only in a very primitive stage of development. The idea is relatively straightforward: to create the illusion of invisibility by covering an object with something that projects the scene directly behind that object. Although optical is a term that technically refers to all forms of light, most proposed forms of optical camouflage would only provide invisibility in the visible portion of the spectrum. Prototype examples and proposed designs of optical camouflage devices date back to the late eighties at least, and the concept began to appear in fiction in the late nineties. The most intriguing prototypes of optical camouflage so far have been created by the Tachi Lab at the University of Tokyo, under the supervision of professors Susumu Tachi, Masahiko Inami and Naoki Kawakami. Their prototype uses an external camera placed behind the cloaked object to record a scene, which it then transmits to a computer for image processing. The computer feeds the image into an external projector which projects the image onto a person wearing a special retroreflective coat. The result depends on the quality of the camera, the projector, and the coat, but by the late nineties convincing illusions had been created. The downside is the large amount of external hardware required, along with the fact that the illusion is only convincing when viewed from a certain angle. Creating complete optical camouflage across the visible light spectrum would require a coating or suit covered in tiny cameras and projectors, programmed to gather visual data from a multitude of different angles and project the gathered images outwards in an equally large number of different directions to give the illusion of invisibility from all angles. For a surface subject to bending, like a flexible suit, a massive amount of computing power and many embedded sensors would be necessary to continuously project the correct images in all directions. This would almost certainly require sophisticated nanotechnology, as our computers, projectors, and cameras are not yet miniaturized enough to meet these conditions. Although the suit described above would provide a convincing illusion to the naked eye of a human observer, more sophisticated machinery would be necessary to create perfect illusions in other electromagnetic bands, such as the infrared band. Sophisticated target-tracking software could ensure that the majority of computing power is focused on projecting false images in those directions where observers are most likely to be present, creating the most realistic illusion possible. Creating a truly realistic optical illusion would likely require phased-array optics, which would project light of a specific amplitude and phase and therefore provide even greater levels of invisibility. We may end up finding optical camouflage to be most useful in the environment of space, where any given background is generally less complex than earthly backdrops and therefore easier to record, process, and project.

Active Camouflage

Active camouflage is a group of camouflage technologies which allow an object to blend into its surroundings by use of panels or coatings capable of altering their appearance, color, luminance and reflective properties. Active camouflage has the potential to achieve perfect concealment from visual detection. It differs from conventional means of concealment in two important ways. First, it makes the object appear not merely similar to its surroundings, but invisible through the use of perfect mimicry. Second, active camouflage changes the appearance of the object in real time. Ideally, active camouflage mimics nearby objects as well as objects as distant as the horizon. The effect should be similar to looking through a pane of glass, making the camouflaged object practically invisible. Active camouflage has its origins in the diffused lighting camouflage first tested on Canadian Navy corvettes during World War II, and later in the armed forces of the United Kingdom and the United States of America. Current systems began with a United States Air Force program which placed low-intensity blue lights on aircraft. As night skies are not pitch black, a 100 percent black-colored aircraft might be rendered visible. By emitting a small amount of blue light, the aircraft blends more effectively into the night sky.
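The camera-projector idea is essentially image compositing: wherever the cloaked object appears in the observer's view, show the pixels of the scene captured behind it. The sketch below, with synthetic NumPy arrays standing in for the camera feed and the object mask, only illustrates that principle; it says nothing about the optics of the actual retroreflective system.

import numpy as np

# Illustrative "optical camouflage" compositing: replace the pixels covered by the
# cloaked object with the background image captured by a camera behind it.

def cloak(observer_view, background_behind_object, object_mask):
    """object_mask is True where the cloaked object appears in the observer's view."""
    out = observer_view.copy()
    out[object_mask] = background_behind_object[object_mask]
    return out

if __name__ == "__main__":
    h, w = 120, 160
    view = np.zeros((h, w, 3), dtype=np.uint8)
    view[:, :] = (40, 120, 40)                    # greenish scene
    view[40:80, 60:100] = (200, 0, 0)             # a red "object" blocking part of the scene
    behind = np.zeros((h, w, 3), dtype=np.uint8)
    behind[:, :] = (40, 120, 40)                  # what the rear-facing camera sees
    mask = np.zeros((h, w), dtype=bool)
    mask[40:80, 60:100] = True
    result = cloak(view, behind, mask)
    print("object pixels now match the background:",
          bool((result[40:80, 60:100] == behind[40:80, 60:100]).all()))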

Smart Fabrics

Definition
Based on the advances in computer technology, especially in the field of miniaturization, wireless technology and worldwide networking, the vision of wearable computers emerged. We already use a lot of portable electronic devices like cell phones, notebooks and organizers. The next step in mobile computing could be to create truly wearable computers that are integrated into our daily clothing and always serve as our personal assistant. This paper explores this from a textile point of view. Which new functions could textiles have? Is a combination of textiles and electronics possible? What sort of intelligent clothing can be realized? Necessary steps of textile research and examples of current developments are presented as well as future challenges.

Introduction
Today, the interaction of human individuals with electronic devices demands specific user skills. In future, improved user interfaces can largely alleviate this problem and push the exploitation of microelectronics considerably. In this context the concept of smart clothes promises greater user-friendliness, user empowerment, and more efficient services support. Wearable electronics responds to the acting individual in a more or less invisible way. It serves individual needs and thus makes life much easier. We believe that today the cost level of important microelectronic functions is sufficiently low and the enabling key technologies are mature enough to exploit this vision for the benefit of society. In the following, we present various technology components that enable the integration of electronics into textiles. Electronic textiles (e-textiles) are fabrics that have electronics and interconnections woven into them. Components and interconnections are a part of the fabric and thus are much less visible and, more importantly, not susceptible to becoming tangled together or snagged by the surroundings. Consequently, e-textiles can be worn in everyday situations where currently available wearable computers would hinder the user. E-textiles also have greater flexibility in adapting to changes in the computational and sensing requirements of an application. The number and location of sensor and processing elements can be dynamically tailored to the current needs of the user and application, rather than being fixed at design time. As the number of pocket electronic products (mobile phone, palm-top computer, personal hi-fi, etc.) is increasing, it makes sense to focus on wearable electronics and start integrating today's products into our clothes. The merging of advanced electronics and special textiles has already begun. Wearable computers can now merge seamlessly into ordinary clothing. Using various conductive textiles, data and power distribution as well as sensing circuitry can be incorporated directly into wash-and-wear clothing.

Java Ring

Definition
A Java Ring is a finger ring that contains a small microprocessor with built-in capabilities for the user, a sort of smart card that is wearable on a finger. Sun Microsystems' Java Ring was introduced at their JavaOne Conference in 1998 and, instead of a gemstone, contained an inexpensive microprocessor in a stainless-steel iButton running a Java virtual machine and preloaded with applets (little application programs). The rings were built by Dallas Semiconductor. Workstations at the conference had "ring readers" installed on them that downloaded information about the user from the conference registration system. This information was then used to enable a number of personalized services. For example, a robotic machine made coffee according to user preferences, which it downloaded when they snapped the ring into another "ring reader." Although Java Rings aren't widely used yet, such rings or similar devices could have a number of real-world applications, such as starting your car and having all your vehicle's components (such as the seat, mirrors, and radio selections) automatically adjust to your preferences. The Java Ring is an extremely secure Java-powered electronic token with a continuously running, unalterable real-time clock and rugged packaging, suitable for many applications. The jewel of the Java Ring is the Java iButton -- a one-million-transistor, single-chip trusted microcomputer with a powerful Java Virtual Machine (JVM) housed in a rugged and secure stainless-steel case. The Java Ring is a stainless-steel ring, 16 millimeters (0.6 inches) in diameter, that houses a 1-million-transistor processor, called an iButton. The ring has 134 KB of RAM, 32 KB of ROM, a real-time clock and a Java virtual machine, which is a piece of software that recognizes the Java language and translates it for the user's computer system. The Ring, first introduced at the JavaOne Conference, has been tested at Celebration School, an innovative K-12 school just outside Orlando, FL. The rings given to students are programmed with Java applets that communicate with host applications on networked systems. Applets are small applications that are designed to be run within another application. The Java Ring is snapped into a reader, called a Blue Dot receptor, to allow communication between a host system and the Java Ring. Designed to be fully compatible with the Java Card 2.0 standard, the processor features a high-speed 1024-bit modular exponentiator for RSA encryption, large RAM and ROM memory capacity, and an unalterable real-time clock. The packaged module has only a single electrical contact and a ground return, conforming to the specifications of the Dallas Semiconductor 1-Wire bus. Lithium-backed non-volatile SRAM offers high read/write speed and unparalleled tamper resistance through near-instantaneous clearing of all memory when tampering is detected, a feature known as rapid zeroization. Data integrity and clock function are maintained for more than 10 years. The 16-millimeter-diameter stainless-steel enclosure accommodates the larger chip sizes needed for up to 128 kilobytes of high-speed nonvolatile static RAM. The small and extremely rugged packaging of the module allows it to attach to an accessory of your choice to match individual lifestyles, such as a key fob, wallet, watch, necklace, bracelet, or finger ring. A Java Ring - and any related device that houses an iButton with a Java Virtual Machine - goes beyond a traditional smart card by providing real memory, more power, and a capacity for dynamic programming. On top of these features, the ring provides a rugged environment, wear-tested for 10-year durability. You can drop it on the floor, step on it, or forget to take it off while swimming, and the data remains safe inside. Today iButtons are primarily used for authentication and auditing types of applications. Since they can store data, have a clock for time-stamping, and support encryption and authentication, they are ideal for audit trails.

Internet Protocol Television (IPTV)

Definition
Over the last decade, the growth of satellite service, the rise of digital cable, and the birth of HDTV have all left their mark on the television landscape. Now, a new delivery method threatens to shake things up even more powerfully. Internet Protocol Television (IPTV) has arrived, and backed by the deep pockets of the telecommunications industry, it's poised to offer more interactivity and bring a hefty dose of competition to the business of selling TV. IPTV describes a system capable of receiving and displaying a video stream encoded as a series of Internet Protocol packets. If you've ever watched a video clip on your computer, you've used an IPTV system in its broadest sense. When most people discuss IPTV, though, they're talking about watching traditional channels on your television, where people demand a smooth, high-resolution, lag-free picture, and it's the telcos that are jumping headfirst into this market. Once known only as phone companies, the telcos now want to turn a "triple play" of voice, data, and video that will retire the side and put them securely in the batter's box. In this primer, we'll explain how IPTV works and what the future holds for the technology, though IP can (and will) be used to deliver video over all sorts of networks, including cable systems.

How It Works

First things first: the venerable set-top box, on its way out in the cable world, will make a resurgence in IPTV systems. The box will connect to the home DSL line and is responsible for reassembling the packets into a coherent video stream and then decoding the contents. Your computer could do the same job, but most people still don't have an always-on PC sitting beside the TV, so the box will make a comeback. Where will the box pull its picture from? To answer that question, let's start at the source. Most video enters the system at the telco's national head end, where network feeds are pulled from satellites and encoded if necessary (often in MPEG-2, though H.264 and Windows Media are also possibilities). The video stream is broken up into IP packets and dumped into the telco's core network, which is a massive IP network that handles all sorts of other traffic (data, voice, etc.) in addition to the video. Here the advantages of owning the entire network from stem to stern (as the telcos do) really come into play, since quality of service (QoS) tools can prioritize the video traffic to prevent delay or fragmentation of the signal. Without control of the network, this would be dicey, since QoS requests are not often recognized between operators. With end-to-end control, the telcos can guarantee enough bandwidth for their signal at all times, which is key to providing the "just works" reliability consumers have come to expect from their television sets. The video streams are received by a local office, which has the job of getting them out to the folks on the couch. This office is the place where local content (such as TV stations, advertising, and video on demand) is added to the mix, but it's also the spot where the IPTV middleware is housed. This software stack handles user authentication, channel change requests, billing, VoD requests, etc. - basically, all of the boring but necessary infrastructure. All the channels in the lineup are multicast from the national head end to local offices at the same time, but at the local office a bottleneck becomes apparent. That bottleneck is the local DSL loop, which has nowhere near the capacity to stream all of the channels at once. Cable systems can do this, since their bandwidth can be in the neighborhood of 4.5 Gbps, but even the newest ADSL2+ technology tops out at around 25 Mbps (and this speed drops quickly as distance from the DSLAM [DSL access multiplexer] grows).
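To see why the local DSL loop is the bottleneck, a little arithmetic helps. The per-channel bitrates below are assumptions chosen only for illustration (roughly typical MPEG-2 SD and H.264 HD figures), not numbers from the text.

# Back-of-the-envelope check of the DSL bottleneck described above.
# Assumed bitrates (illustrative only): MPEG-2 SD ~ 3.5 Mbps, H.264 HD ~ 8 Mbps.

ADSL2_PLUS_MBPS = 25.0      # "tops out at around 25 Mbps" per the text
SD_MBPS = 3.5               # assumed standard-definition stream
HD_MBPS = 8.0               # assumed high-definition stream

def simultaneous_streams(line_mbps, stream_mbps, reserved_for_data_mbps=2.0):
    """How many streams fit, leaving a little headroom for voice/data traffic."""
    return int((line_mbps - reserved_for_data_mbps) // stream_mbps)

if __name__ == "__main__":
    print("SD streams per ADSL2+ line:", simultaneous_streams(ADSL2_PLUS_MBPS, SD_MBPS))
    print("HD streams per ADSL2+ line:", simultaneous_streams(ADSL2_PLUS_MBPS, HD_MBPS))
    # Only a handful of streams fit - nowhere near a full channel line-up, hence
    # sending only the channels actually being watched down the loop.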

FireWire

Definition
FireWire, originally developed by Apple Computer, Inc., is a cross-platform implementation of a high-speed serial data bus - defined by the IEEE 1394-1995, IEEE 1394a-2000 and IEEE 1394b standards - that moves large amounts of data between computers and peripheral devices. It features simplified cabling, hot swapping and transfer speeds of up to 800 megabits per second. FireWire is a high-speed serial input/output (I/O) technology for connecting peripheral devices to a computer or to each other. It is one of the fastest peripheral standards ever developed and now, at 800 megabits per second (Mbps), it's even faster. Based on Apple-developed technology, FireWire was adopted in 1995 as an official industry standard (IEEE 1394) for cross-platform peripheral connectivity. By providing a high-bandwidth, easy-to-use I/O technology, FireWire inspired a new generation of consumer electronics devices from many companies, including Canon, Epson, HP, Iomega, JVC, LaCie, Maxtor, Mitsubishi, Matsushita (Panasonic), Pioneer, Samsung and Sony. FireWire has also been a boon to professional users because of the high-speed connectivity it has brought to audio and video production systems. In 2001, the Academy of Television Arts & Sciences presented Apple with an Emmy award in recognition of the contributions made by FireWire to the television industry. Now FireWire 800, the next generation of FireWire technology, promises to spur the development of more innovative high-performance devices and applications. This technology brief describes the advantages of FireWire 800 and some of the applications for which it is ideally suited.

TOPOLOGY

The 1394 protocol is a peer-to-peer network with a point-to-point signaling environment. Nodes on the bus may have several ports on them. Each of these ports acts as a repeater, retransmitting any packets received by other ports within the node. Figure 1 shows what a typical consumer may have attached to their 1394 bus. Because 1394 is a peer-to-peer protocol, a specific host isn't required, such as the PC in USB. In Figure 1, the digital camera could easily stream data to both the digital VCR and the DVD-RAM without any assistance from other devices on the bus. FireWire uses 64-bit fixed addressing, based on the IEEE 1212 standard. There are three parts to each packet of information sent by a device over FireWire:
- A 10-bit bus ID that is used to determine which FireWire bus the data came from
- A 6-bit physical ID that identifies which device on the bus sent the data
- A 48-bit storage area that is capable of addressing 256 terabytes of information for each node
The bus ID and physical ID together comprise the 16-bit node ID, which allows for 64,000 nodes on a system. Individual FireWire cables can run as long as 4.5 meters. Data can be sent through up to 16 hops for a total maximum distance of 72 meters. Hops occur when devices are daisy-chained together. Look at the example below: the camcorder is connected to an external hard drive connected to Computer A; Computer A is connected to Computer B, which in turn is connected to Computer C. It takes four hops for Computer C to access the camera. The 1394 protocol supports both asynchronous and isochronous data transfers.

Isochronous transfers: Isochronous transfers are always broadcast in a one-to-one or one-to-many fashion. No error correction or retransmission is available for isochronous transfers. Up to 80% of the available bus bandwidth can be used for isochronous transfers.

Asynchronous transfers: Asynchronous transfers are targeted to a specific node with an explicit address. They are not guaranteed a specific amount of bandwidth on the bus, but they are guaranteed a fair shot at gaining access to the bus when asynchronous transfers are permitted. This allows error-checking and retransmission mechanisms to take place.
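A small sketch of the 10/6/48-bit address split described above, using plain bit operations. The field layout follows the text; the helper names are mine.

# Pack and unpack the 64-bit FireWire address described above:
# 10-bit bus ID | 6-bit physical ID | 48-bit offset into the node's address space.

def pack_address(bus_id, phys_id, offset):
    assert 0 <= bus_id < (1 << 10) and 0 <= phys_id < (1 << 6) and 0 <= offset < (1 << 48)
    return (bus_id << 54) | (phys_id << 48) | offset

def unpack_address(addr):
    return (addr >> 54) & 0x3FF, (addr >> 48) & 0x3F, addr & ((1 << 48) - 1)

if __name__ == "__main__":
    addr = pack_address(bus_id=1, phys_id=5, offset=0x1234)
    print(hex(addr))                 # the full 64-bit address as one integer
    print(unpack_address(addr))      # (1, 5, 0x1234)
    # The top 16 bits (bus ID + physical ID) form the node ID mentioned in the text.
    print("node ID:", addr >> 48)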

Night Vision Technology

Definition
Night vision probably brings to mind a spy or action movie, in which someone straps on a pair of night-vision goggles to find someone else in a dark building on a moonless night. With the proper night-vision equipment, you can see a person standing over 200 yards (183 m) away on a moonless, cloudy night. Night vision can work in two very different ways, depending on the technology used.
- Image enhancement - This works by collecting the tiny amounts of light, including the lower portion of the infrared light spectrum, that are present but may be imperceptible to our eyes, and amplifying it to the point that we can easily observe the image.
- Thermal imaging - This technology operates by capturing the upper portion of the infrared light spectrum, which is emitted as heat by objects instead of simply reflected as light. Hotter objects, such as warm bodies, emit more of this light than cooler objects like trees or buildings.
To understand night vision technology we should first know a little about light. The amount of energy in a light wave is related to its wavelength: shorter wavelengths have higher energy. Of visible light, violet has the most energy and red has the least. Just next to the visible light spectrum is the infrared spectrum. Night vision technology consists of two major types: light amplification (or intensification) and thermal (infrared). Most consumer night vision products are light-amplifying devices. All ITT Night Vision products use light-amplifying technology. This technology takes the small amount of light that's in the surrounding area (such as moonlight or starlight) and converts the light energy (scientists call it photons) into electrical energy (electrons). These electrons pass through a thin disk that's about the size of a quarter and contains more than 10 million channels. As the electrons go through the channels, they strike the channel walls and thousands more electrons are released. These multiplied electrons then bounce off a phosphor screen, which converts the electrons back into photons and lets you see an impressive nighttime view even when it's really dark. In night vision, thermal imaging takes advantage of infrared emission. Thermal imaging works as follows:
1. A special lens focuses the infrared light emitted by all of the objects in view.
2. The focused light is scanned by a phased array of infrared-detector elements. The detector elements create a very detailed temperature pattern called a thermogram. It only takes about one-thirtieth of a second for the detector array to obtain the temperature information to make the thermogram. This information is obtained from several thousand points in the field of view of the detector array.
3. The thermogram created by the detector elements is translated into electric impulses.
4. The impulses are sent to a signal-processing unit, a circuit board with a dedicated chip that translates the information from the elements into data for the display.
5. The signal-processing unit sends the information to the display, where it appears as various colors depending on the intensity of the infrared emission. The combination of all the impulses from all of the elements creates the image.

Types Of Thermal Imaging Devices

Most thermal-imaging devices scan at a rate of 30 times per second. They can sense temperatures ranging from -4 degrees Fahrenheit (-20 degrees Celsius) to 3,600 F (2,000 C), and can normally detect changes in temperature of about 0.4 F (0.2 C).
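As a toy version of steps 3-5 above, the sketch below turns a thermogram (a 2-D array of temperatures) into 8-bit display intensities. The sensor range and 0.2 C resolution come from the paragraph above; everything else (grayscale output, NumPy representation) is an assumption for illustration.

import numpy as np

# Toy version of the thermal-imaging display chain: thermogram -> display intensities.
# The -20..2000 C range and ~0.2 C resolution are taken from the text above.

SENSOR_MIN_C, SENSOR_MAX_C, RESOLUTION_C = -20.0, 2000.0, 0.2

def thermogram_to_display(temps_c):
    """Clip to the sensor range, quantise to 0.2 C steps, scale to 0..255 grayscale."""
    clipped = np.clip(temps_c, SENSOR_MIN_C, SENSOR_MAX_C)
    quantised = np.round(clipped / RESOLUTION_C) * RESOLUTION_C
    scaled = (quantised - SENSOR_MIN_C) / (SENSOR_MAX_C - SENSOR_MIN_C)
    return (scaled * 255).astype(np.uint8)

if __name__ == "__main__":
    # A fake scene: 20 C background with a 36 C "warm body" patch.
    scene = np.full((8, 8), 20.0)
    scene[2:5, 2:5] = 36.0
    frame = thermogram_to_display(scene)
    print(frame[0, 0], frame[3, 3])   # the warm pixels map to slightly brighter values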

RD RAM

Definition
During the last two decades, there has been exponential growth in the operational speed of microprocessors, and RAM capacities have been improving at more than fifty percent per year. However, the speed and access time of memory have been improving at a much slower rate. To keep up with processor technology in performance and reliability, it is necessary to make considerable improvements in memory access time. The Rambus founders emerged with a memory technology to address this: RDRAM. RDRAM memory provides the highest bandwidth - 2.1 GB/sec from the fewest pins - at five times the speed of industry-standard DRAM. The RDRAM memory channel achieves its high-speed operation through several innovative techniques including separate control and address buses, a highly efficient protocol, low-voltage signaling, and precise clocking to minimize skew between clock and data lines. A single RDRAM device is capable of transferring data at 1066 Mb/sec per pin to Rambus-compatible ICs, and data rates per pin will increase beyond 1066 Mb/sec in the future.
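The two figures quoted above are consistent if one assumes a 16-bit-wide Rambus channel (a common configuration, but an assumption on my part): 1066 Mb/s per pin across 16 data pins is roughly 2.1 GB/s.

# Check that 1066 Mb/s per pin and ~2.1 GB/s per device are consistent,
# assuming a 16-bit-wide data channel (an assumption, not stated in the text).

PER_PIN_MBPS = 1066
DATA_PINS = 16

total_mbps = PER_PIN_MBPS * DATA_PINS          # megabits per second
total_gbytes = total_mbps / 8 / 1000           # gigabytes per second
print(f"{total_mbps} Mb/s  ~=  {total_gbytes:.1f} GB/s")   # ~2.1 GB/s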

Implementation Of Zoom FFT in Ultrasonic Blood Flow Analysis

An adequate blood supply is necessary for all organs of the body, so analysis of blood flow is important in the diagnosis of disease. There are many techniques for analyzing blood flow, but their high expense puts them beyond the reach of poorer patients. We have therefore implemented a technique called the Zoom FFT, which is simple and affordable, to detect blood clots and other abnormalities. Ultrasound scanning of internal structures, known as sonography, is used as an alternative to X-ray photography. In this paper, the method of zooming in on the scanned data using the Zoom FFT is discussed. It also explains the algorithm used to obtain the Zoom FFT and how it can be evaluated in simulation. Real-time experimentation and its applications, along with the basics of ultrasound scanning, are also explained. A specific application is dealt with here: an ultrasonic blood flow analyzer using the Zoom FFT. Blood flow analysis is done by passing a high-frequency ultrasonic wave into the blood vessels through a transmitting transducer. The reflected signal picked up by the receiving transducer has a different frequency due to the Doppler principle. This signal is passed to a DSP processor to find the frequency spectrum. Because of the high frequency of the ultrasonic wave, the resolution of the frequency spectrum output will not be good. Therefore we use the advanced Zoom FFT technique, in which a very small frequency change due to clot formation can be resolved with good resolution. It can be used to locate the initial presence of a blood clot. All of these tasks must be achieved with a single DSP chip in order for the system to be both cost-effective and power-efficient and thus widely accepted. This paper proposes:
1. Study of biomedical signal processing
2. Mixing the input signal down to baseband using the Hilbert transform
3. Down-sampling using the decimation process
4. Obtaining the spectrum output using the fast Fourier transform
5. Simulation in Matlab/C
6. Real-time implementation on a TMS320C5X/6X DSP processor
Steps involved:
- Sound generation: the ultrasonic sound is generated using a piezoelectric transducer
- The number of transducers may vary from one to many
- A narrow beam of ultrasound is to be fed in
- Continuous-mode operation with no timed switching is applied in real time to measure frequency and amplitude
- Doppler shift analysis of the frequency content is to be done
- Creation of the image, to plot in two dimensions
- Display using colour differentiation

REAL BLOOD FLOW ANALYSIS:

In an ultrasonic blood flow analysis, a beam of ultrasonic energy is directed through a blood vessel at a shallow angle and its transit time is then measured. More common are the ultrasonic analyzers based on the Doppler principle. An oscillator operating at a frequency of several megahertz excites a piezoelectric transducer. This transducer is coupled to the wall of an exposed blood vessel and sends an ultrasonic beam with a frequency F into the flowing blood. A small part of the transmitted energy is scattered back and is received by a second transducer arranged opposite the first one.
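A minimal Zoom FFT sketch following the steps listed above (analytic signal, mix to baseband, decimate, FFT). The carrier frequency, sampling rate and decimation factor are made-up illustration values, and SciPy's generic filters stand in for whatever the DSP implementation would actually use.

import numpy as np
from scipy.signal import hilbert, decimate

# Zoom FFT sketch: resolve a small Doppler shift near a carrier f0 with high resolution.
# All numeric values below are illustrative assumptions, not taken from the text.

fs = 200_000.0          # sampling rate, Hz
f0 = 40_000.0           # ultrasonic carrier we want to "zoom" around, Hz
doppler = 150.0         # small Doppler shift to detect, Hz
D = 50                  # decimation factor: frequency resolution improves by D

t = np.arange(0, 0.5, 1 / fs)
x = np.cos(2 * np.pi * (f0 + doppler) * t)            # toy, noise-free received echo

analytic = hilbert(x)                                 # step 1: analytic signal (Hilbert transform)
baseband = analytic * np.exp(-2j * np.pi * f0 * t)    # step 2: mix the band of interest down to 0 Hz
# step 3: low-pass filter and down-sample (real and imaginary parts filtered separately)
zoomed = decimate(baseband.real, D, ftype="fir") + 1j * decimate(baseband.imag, D, ftype="fir")

spectrum = np.fft.fftshift(np.fft.fft(zoomed))        # step 4: FFT of the decimated signal
freqs = np.fft.fftshift(np.fft.fftfreq(len(zoomed), d=D / fs))
print(f"estimated Doppler shift: {freqs[np.argmax(np.abs(spectrum))]:.1f} Hz")   # close to 150 Hz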

Military Radars

RADAR (Radio Detection and Ranging) is basically a means of gathering information about distant objects by transmitting electromagnetic waves at them and analyzing the echoes. Radar has been employed on the ground, in the air, on the sea and in space. Radar finds a number of applications, such as airport traffic control, military purposes, coastal navigation, meteorology and mapping. The development of radar technology took place during World War II, when it was used for detecting approaching aircraft, and then later for many other purposes, which finally led to the development of the advanced military radars in use today. Military radars have a highly specialized design to be highly mobile and easily transportable, by air as well as by ground.

INTRODUCTION

Military radar should provide early warning and alerting along with weapon control functions. It is specially designed to be highly mobile and should be such that it can be deployed within minutes. Military radar minimizes mutual interference between the tasks of air defenders and friendly air-space users. This results in an increased effectiveness of combined combat operations. The command and control capabilities of the radar, in combination with an effective ground-based air defence, provide maximum operational effectiveness with a safe, efficient and flexible use of the air space. The increased operational effectiveness is obtained by combining the advantages of centralized air defence management with decentralized air defence control.

ADVANCED FEATURES AND BENEFITS

A typical military radar has the following advanced features and benefits:
- All-weather, day and night capability.
- Multiple target handling and engagement capability.
- Short and fast reaction time between target detection and the ready-to-fire moment.
- Easy to operate, and hence low manning requirements and stress reduction under severe conditions.
- Highly mobile system, to be used in all kinds of terrain.
- Flexible weapon integration; an unlimited number of single air defence weapons can be provided with target data.
- High resolution, which gives excellent target discrimination and accurate tracking.
The identification of targets as friend or hostile is supported by IFF, which is an integral part of the system. During the short time when targets are exposed, accurate data must be obtained. A high antenna rotational speed assures early target detection and the high data update rate required for track accuracy. The radar can use linear (horizontal) polarization in clear weather. During rain, to improve the suppression of rain clutter, provision exists to change to circular polarization at the touch of a button from the display console.

Modern Irrigation System Towards Fuzzy


In the past few years, there has been increasing interest in the application of fuzzy set theory to many control problems. For many complex control systems, the construction of an ordinary model is difficult due to the nonlinear and time-varying nature of the system. Fuzzy control has been applied in traditional control systems with promising results; it is applied to processes that are too complex to be analyzed by conventional techniques or where the available information is uncertain. In fact, a fuzzy logic controller (FLC) is easier to prototype, simple to describe and verify, and can be maintained and extended with greater accuracy in less time. These advantages make fuzzy logic technology suitable for irrigation systems as well.

NEED FOR MODERN IRRIGATION SYSTEM

Water and electricity should be optimally utilized in an agricultural country like India. Developments in the field of science and technology should be appropriately used in the field of agriculture for better yields. Irrigation has traditionally resulted in excessive labour and non-uniformity of water application across the field. Hence, an automatic irrigation system is required to reduce labour cost and to give uniformity of water application across the field.

PHYSIOLOGICAL PROCESSING

In an irrigation system, plants take varying quantities of water at different stages of growth. Unless an adequate and timely supply of water is assured, the physiological activities taking place within the plant are bound to be adversely affected, resulting in reduced crop yield. The amount of water to be applied in an irrigation schedule depends upon the evapotranspiration (ET) from adjacent soil and from plant leaves at that specified time. The rate of ET of a given crop is influenced by its growth stage, environmental conditions and crop management. The consumptive use, or evapotranspiration, for a given crop at a given place may vary throughout the day, throughout the month and throughout the crop period. Values of daily or monthly consumptive use are determined for a given crop at a given place, and also vary from crop to crop. There are several climatological factors which influence the rate of evaporation; some of the important ones are radiation, temperature, humidity and wind speed. In this work, the input variables chosen for the system are the evapotranspiration and the rate of change of evapotranspiration, called the error, and the output variable is the water amount.

FUZZIFICATION UNIT

It converts a crisp process state into a fuzzy state so that it is compatible with the fuzzy set representation of the process required by the inference unit.

KNOWLEDGE BASE

The knowledge base consists of two components: a rule base, which describes the behaviour of the control surfaces and involves writing the rules that tie the input values to the output model properties (rules can be framed in discussion with experts), and a database, which contains the definitions of the fuzzy sets representing the linguistic terms used in the rules. The knowledge base is generally represented by a fuzzy associative memory.

INFERENCE UNIT

This unit is the core of the fuzzy controller. It generates fuzzy control actions by applying the rules in the knowledge base to the current process state. It determines the degree to which each measured value is a member of a given labeled group. A given measurement can be classified simultaneously as belonging to several linguistic groups. The degree of fulfillment (DOF) of each rule is determined by applying the rules of Boolean algebra to each linguistic group that is part of the rule. This is done for all the rules in the system. Finally, the net control action is determined by weighting the action associated with each rule by its degree of fulfillment.
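A minimal sketch of the fuzzify-infer-defuzzify loop described above, using triangular membership functions and a weighted-average defuzzification. Only the ET input is used, to keep the sketch short (the rate-of-change input would enter the same way), and the membership ranges, rules and output water amounts are invented for illustration; a real controller would use rules framed with irrigation experts.

# Toy fuzzy irrigation controller: input is evapotranspiration (ET, mm/day);
# output is a water amount. All ranges, rules and output levels are illustrative
# assumptions only.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_et(et):
    return {"low": tri(et, -1, 0, 4), "medium": tri(et, 2, 5, 8), "high": tri(et, 6, 10, 14)}

# Rule base: (ET label, water amount in mm). The DOF of each rule is the membership of its label.
RULES = [("low", 5.0), ("medium", 15.0), ("high", 30.0)]

def infer_water(et):
    memberships = fuzzify_et(et)
    dofs = [(memberships[label], amount) for label, amount in RULES]
    total = sum(dof for dof, _ in dofs)
    if total == 0:
        return 0.0
    # Weighted-average defuzzification: each rule's output weighted by its DOF.
    return sum(dof * amount for dof, amount in dofs) / total

if __name__ == "__main__":
    for et in (1.0, 5.0, 9.0):
        print(f"ET = {et:4.1f} mm/day  ->  irrigate {infer_water(et):5.1f} mm")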

Smart Cameras in Embedded Systems

A smart camera performs real-time analysis to recognize scenic elements. Smart cameras are useful in a variety of scenarios: surveillance, medicine, etc. We have built a real-time system for recognizing gestures. Our smart camera uses novel algorithms to recognize gestures based on low-level analysis of body parts as well as hidden Markov models for the moves that comprise the gestures. These algorithms run on a Trimedia processor. Our system can recognize gestures at a rate of 20 frames/second, and the camera can also fuse the results of multiple cameras.

Overview

Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification. Video processing has an insatiable demand for real-time performance. Fortunately, Moore's law provides an increasing pool of available computing power to apply to real-time analysis. Smart cameras leverage very large-scale integration (VLSI) to provide such analysis in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these systems run a wide range of algorithms to extract meaning from streaming video. Because they push the design space in so many dimensions, smart cameras are a leading-edge application for embedded system research.

Detection and Recognition Algorithms

Although there are many approaches to real-time video analysis, we chose to focus initially on human gesture recognition - identifying whether a subject is walking, standing, waving his arms, and so on. Because much work remains to be done on this problem, we sought to design an embedded system that can incorporate future algorithms as well as use those we created exclusively for this application. Our algorithms use both low-level and high-level processing. The low-level component identifies different body parts and categorizes their movement in simple terms. The high-level component, which is application-dependent, uses this information to recognize each body part's action and the person's overall activity based on scenario parameters.

Low-level processing

The system captures images from the video input, which can be either uncompressed or compressed (MPEG and motion JPEG), and applies four different algorithms to detect and identify human body parts.
Region extraction: The first algorithm transforms the pixels of an image into an M x N bitmap and eliminates the background. It then detects the body part's skin area using a YUV color model with down-sampled chrominance values. Next, the algorithm hierarchically segments the frame into skin-tone and non-skin-tone regions by extracting foreground regions adjacent to detected skin areas and combining these segments in a meaningful way.
Contour following: The next step in the process involves linking the separate groups of pixels into contours that geometrically define the regions. This algorithm uses a 3 x 3 filter to follow the edge of the component in any of eight different directions.
Ellipse fitting: To correct for deformations in image processing caused by clothing, objects in the frame, or some body parts blocking others, an algorithm fits ellipses to the pixel regions to provide simplified part attributes. The algorithm uses these parametric surface approximations to compute geometric descriptors for segments such as area, compactness (circularity), weak perspective invariants, and spatial relationships.
Graph matching: Each extracted region modeled with ellipses corresponds to a node in a graphical representation of the human body. A piecewise quadratic Bayesian classifier uses the ellipse parameters to compute feature vectors consisting of binary and unary attributes. It then matches these attributes to feature vectors of body parts, or meaningful combinations of parts, that are computed offline. To expedite the branching process, the algorithm begins with the face, which is generally easiest to detect.
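A rough sketch of the region-extraction idea: convert RGB pixels to YUV and threshold the chrominance channels to get a skin mask. The conversion matrix is the standard BT.601 one, but the chrominance thresholds are rough assumptions of mine, not the values used by the authors' system.

import numpy as np

# Skin-region extraction sketch: RGB -> YUV, then threshold the chrominance (U, V).
# The threshold ranges below are rough illustrative guesses, not the paper's values.

def rgb_to_yuv(rgb):
    m = np.array([[ 0.299,  0.587,  0.114],     # Y  (BT.601 coefficients)
                  [-0.147, -0.289,  0.436],     # U
                  [ 0.615, -0.515, -0.100]])    # V
    return rgb @ m.T

def skin_mask(rgb, u_range=(-40, 10), v_range=(5, 60)):
    yuv = rgb_to_yuv(rgb.astype(float))
    u, v = yuv[..., 1], yuv[..., 2]
    return (u > u_range[0]) & (u < u_range[1]) & (v > v_range[0]) & (v < v_range[1])

if __name__ == "__main__":
    frame = np.zeros((4, 4, 3))
    frame[1:3, 1:3] = (200, 150, 120)   # a skin-like patch
    print(skin_mask(frame).astype(int))  # 1s mark the skin-tone region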

Spin Valve Transistor

In a world with a ubiquitous presence of electronics, can you imagine any other field displacing it? It may seem peculiar, even absurd, but with the advent of spintronics it is turning into reality. In our conventional electronic devices we use semiconducting materials for logical operations and magnetic materials for storage, but spintronics uses magnetic materials for both purposes. These spintronic devices are more versatile and faster than present ones. One such device is the spin valve transistor. The spin valve transistor is different from a conventional transistor: here, conduction relies on the spin polarization of electrons, and only electrons with the correct spin polarization can travel successfully through the device. These transistors are used in data storage, signal processing, automation and robotics, with less power consumption and less heat. They also find application in quantum computing, in which qubits are used instead of bits.

INTRODUCTION

Two experiments in the 1920s suggested spin as an additional property of the electron. One was the closely spaced splitting of hydrogen spectral lines, called fine structure. The other was the Stern-Gerlach experiment, which showed in 1922 that a beam of silver atoms directed through an inhomogeneous magnetic field is forced into two beams. Both pointed towards magnetism associated with the electron. Spin is the root cause of magnetism and makes an electron a tiny magnet. Magnetism is already exploited in recording devices, where data is recorded and stored as tiny areas of magnetized iron or chromium oxide. To access that information, the head detects the minute changes in magnetic field, which induce corresponding changes in the head's electrical resistance - a phenomenon called magnetoresistance.

EVOLUTION OF SPINTRONICS

Spintronics came into the light with the advent of giant magnetoresistance (GMR) in 1988. GMR is 200 times stronger than ordinary magnetoresistance. It results from subtle electron-spin effects in ultra-thin multilayers of magnetic materials that cause a huge change in electrical resistance. The discovery of GMR in magnetic multilayers, the basis of the spin valve, has led to a large number of studies on GMR systems. Usually the resistance of a multilayer is measured with the current in plane (CIP); read-back magnetic heads, for instance, use this property. But this approach suffers from several drawbacks: shunting and channeling, particularly for uncoupled multilayers and thick spacer layers, diminish the CIP magnetoresistance, and diffusive surface scattering reduces the magnetoresistance for sandwiches and thin multilayers. In the spin valve transistor (SVT), electrons are injected into a metallic base across a Schottky barrier (the emitter side), pass through the spin valve and reach the opposite side (the collector side) of the transistor. When these injected electrons traverse the metallic base they are above the Fermi level, hence hot-electron magnetotransport must be considered in the SVT.

The transport properties of hot electrons are different from those of Fermi electrons. For example, the spin polarisation of Fermi electrons mainly depends on the density of states (DOS) at the Fermi level, while the spin polarisation of hot electrons is related to the density of unoccupied states above the Fermi level. For the preparation of the transistor we apply direct bonding, both to obtain device-quality semiconductor material for the emitter and to allow room-temperature processing. The starting material for both emitter and collector is a 380 um, 5-10 Ohm-cm, n-Si (100) wafer. After back-side n++ implantation, the wafer is dry-oxidised to anneal the implant and to form an SiO2 layer. After depositing a Pt ohmic contact onto the back side, the wafer is sawn into 10x10 mm collectors and 1.6x1.6 mm emitters. The collector is subsequently dipped in HNO3, in 2% HF to remove the native oxide on the silicon fragments, in 5% tetramethyl ammonium hydroxide at 90 °C, and in buffered HF to remove the thermal oxide. Following each step the collector is rinsed in demineralised water.

Moletronics- an invisible technology


As a scientific pursuit, the search for a viable successor to silicon computer technology has garnered considerable curiosity in the last decade. The latest idea, and one of the most intriguing, is known as molecular computing, or moletronics, in which single molecules serve as switches, "quantum wires" a few atoms thick serve as wiring, and the hardware is synthesized chemically from the bottom up. The central thesis of moletronics is that almost any chemically stable structure that is not specifically disallowed by the laws of physics can in fact be built. The possibility of building things atom by atom was first introduced by Richard Feynman in 1959. An "assembler", which is little more than a submicroscopic robotic arm, can be built and controlled. We can use it to secure and position compounds in order to direct the precise location at which chemical reactions occur. This general approach allows the construction of large, atomically precise objects by initiating a sequence of controlled chemical reactions. In order for this to function as we wish, each assembler requires a process for receiving and executing the instruction set that will dictate its actions. In time, molecular machines might even have onboard, high-speed RAM and slower but more permanent storage, along with communications capability and a power supply. Moletronics is expected to touch almost every aspect of our lives, right down to the water we drink and the air we breathe. Experimental work has already resulted in the production of molecular tweezers, a carbon nanotube transistor, and logic gates. Theoretical work is progressing as well. James M. Tour of Rice University is working on the construction of a molecular computer. Researchers at Zyvex have proposed an Exponential Assembly Process that might improve the creation of assemblers and products, before they are even simulated in the lab. We have even seen researchers create an artificial muscle using nanotubes, which may have medical applications in the nearer term. The Teramac computer has the capacity to perform 10^12 operations per second; it has 220,000 hardware defects and yet has performed some tasks 100 times faster than a single-processor machine. Its defect-tolerant computer architecture, and the implications of that architecture for moletronics, is the latest development in this technology, and the very fact that this machine worked suggests that we ought to take some time and learn from it. Such a 'defect-tolerant' architecture realised through moletronics could bridge the gap between the current generation of microchips and the next generation of molecular-scale computers.

Moletronic circuits - QCA basics

The interaction between cells is Coulombic, and provides the necessary computing power. No current flows between cells and no power or information is delivered to individual internal cells. Local interconnections between cells are provided by the physics of cell-cell interaction. The following describes the QCA cell and the process of building up useful computational elements from it. The discussion is mostly qualitative and based on the intuitively clear behavior of electrons in the cell.

Fundamental Aspects of QCA

A QCA cell consists of 4 quantum dots positioned at the vertices of a square and contains 2 extra electrons. The configuration of these electrons is used to encode binary information. The two electrons sitting on the diagonal sites of the square, from left to right or from right to left, are used to represent the binary "1" and "0" states respectively. For an isolated cell these two states have the same energy. However, for an array of cells, the state of each cell is determined by its interaction with neighboring cells through the Coulomb interaction.
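To make the cell-cell Coulomb interaction concrete, the sketch below places the four dots of two neighbouring cells on a grid, tries both diagonal electron configurations for the target cell, and picks the one with the lower electrostatic energy with respect to a fixed driver cell. The geometry and units are arbitrary illustration choices; this captures the intuition only, not a quantum-mechanical QCA simulation.

from itertools import product
import math

# Toy QCA cell-cell interaction: the target cell settles into whichever diagonal
# electron configuration has the lower Coulomb energy against its driver neighbour.
# Dot spacing and cell separation are arbitrary; energies are in relative units (q = k = 1).

def dots(cx, cy, half=1.0):
    """Corner positions of a cell centred at (cx, cy)."""
    return {"tl": (cx - half, cy + half), "tr": (cx + half, cy + half),
            "bl": (cx - half, cy - half), "br": (cx + half, cy - half)}

# Diagonal occupations: '1' = electrons on top-left/bottom-right, '0' = top-right/bottom-left.
CONFIG = {"1": ("tl", "br"), "0": ("tr", "bl")}

def coulomb(p, q):
    return 1.0 / math.dist(p, q)

def interaction_energy(driver_state, target_state, separation=3.0):
    d, t = dots(0.0, 0.0), dots(separation, 0.0)
    return sum(coulomb(d[a], t[b])
               for a, b in product(CONFIG[driver_state], CONFIG[target_state]))

def settle(driver_state):
    return min("01", key=lambda s: interaction_energy(driver_state, s))

if __name__ == "__main__":
    for drv in "01":
        print(f"driver = {drv}  ->  target settles to {settle(drv)}")
    # The target copies the driver's state: the basis of QCA wires and logic gates.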

Solar Power Satellites

The new millennium has brought increased pressure to find new renewable energy sources. The exponential growth in population has contributed to global crises such as global warming, environmental pollution, and the rapid depletion of fossil fuel reserves. The demand for electric power also rises much faster than other energy demands as the world becomes industrialized and computerized. Under these circumstances, research has been carried out into the possibility of building a power station in space that transmits electricity to Earth by way of radio waves: the Solar Power Satellite. A Solar Power Satellite (SPS) converts solar energy into microwaves and sends them as a focused beam to a receiving antenna on Earth for conversion back to ordinary electricity. SPS is a clean, large-scale, stable electric power source. The concept is known by a variety of other names, such as Satellite Power System, Space Power Station, Space Power System, Solar Power Station, and Space Solar Power Station. One of the key technologies needed to make SPS feasible is microwave Wireless Power Transmission (WPT). WPT is based on the energy transfer capacity of a microwave beam; that is, energy can be transmitted by a well-focused microwave beam. Advances in phased-array antennas and rectennas have provided the building blocks for a realizable WPT system.

Growth in global energy demand is likely to continue for many decades. Renewable energy is a compelling approach, both philosophically and in engineering terms. However, many renewable energy sources are limited in their ability to affordably provide the base-load power required for global industrial development and prosperity because of their inherent land and water requirements. The burning of fossil fuels has led to an abrupt decrease in their reserves; it has also led to the greenhouse effect and many other environmental problems. Nuclear power might seem to be an answer to global warming, but concerns about terrorist attacks on Earth-bound nuclear power plants have intensified environmentalist opposition to it. Switching instead to the natural fusion reactor, the Sun, yields energy with no waste products. Earth-based solar panels receive only a part of the available solar energy: they are affected by the day-night cycle and by factors such as clouds. It is therefore desirable to place the solar panels in space, where the solar energy is collected and converted into electricity, which is then converted into a highly directed microwave beam for transmission. This microwave beam, which can be directed to any desired location on the Earth's surface, is collected and converted back into electricity. This concept is more advantageous than conventional methods, and the microwave energy chosen for transmission can pass unimpeded through clouds and precipitation.

SPS - THE BACKGROUND

The concept of a large SPS placed in geostationary orbit was invented by Peter Glaser in 1968. The concept was examined extensively during the late 1970s by the U.S. Department of Energy (DOE) and the National Aeronautics and Space Administration (NASA), which put forward the SPS Reference System Concept in 1979. Its central feature was the creation of a large-scale power infrastructure in space, consisting of about 60 SPS delivering a total of about 300 GW. But as a result of the huge price tag, the lack of an evolutionary concept, and the subsiding energy crisis in 1980-1981, all U.S. SPS efforts were terminated with a view to re-assessing the concept after about ten years. During this time, international interest in SPS emerged, which led to WPT experiments in Japan.
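The practical appeal of the concept comes down to how much of the collected solar power survives the conversion chain (solar cells, DC-to-RF conversion, beam capture at the rectenna, RF-to-DC rectification). The short Python sketch below simply multiplies these stages together; the efficiency figures are placeholder assumptions for illustration, not values from any SPS study.

```python
# Rough end-to-end power chain for a microwave WPT link (illustrative only;
# the efficiency figures below are assumed, not taken from any SPS design).
def delivered_power(solar_input_w,
                    pv_eff=0.15,        # solar-cell conversion efficiency (assumed)
                    dc_to_rf_eff=0.80,  # DC-to-microwave conversion (assumed)
                    beam_capture=0.90,  # fraction of the beam collected by the rectenna (assumed)
                    rectenna_eff=0.85): # RF-to-DC conversion at the rectenna (assumed)
    """Return the DC power delivered to the grid for a given solar input."""
    return solar_input_w * pv_eff * dc_to_rf_eff * beam_capture * rectenna_eff

# Example: 1 GW of sunlight intercepted in orbit
print(f"{delivered_power(1e9) / 1e6:.1f} MW delivered")  # ~91.8 MW with these assumptions
```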

MIMO Wireless Channels: Capacity and Performance Prediction


Multiple-input multiple-output (MIMO) communication techniques make use of multi-element antenna arrays at both the TX and the RX side of a radio link and have been shown theoretically to drastically improve capacity over more traditional single-input multiple-output (SIMO) systems [2, 3, 5, 7]. SIMO channels in wireless networks can provide diversity gain, array gain, and interference-canceling gain, among other benefits. In addition to these advantages, MIMO links can offer a multiplexing gain by opening Nmin parallel spatial channels, where Nmin is the minimum of the number of TX and RX antennas. Under certain propagation conditions, capacity gains proportional to Nmin can be achieved [8]. Space-time coding [14] and spatial multiplexing [1, 2, 7, 16] (a.k.a. "BLAST") are popular signal processing techniques that make use of MIMO channels to improve the performance of wireless networks.

Previous work and open problems. The literature on realistic MIMO channel models is still scarce. For the line-of-sight (LOS) case, previous work includes [13]. In the fading case, previous studies have mostly been confined to i.i.d. Gaussian matrices, an idealistic assumption in which the entries of the channel matrix are independent complex Gaussian random variables [2, 6, 8]. The influence of spatial fading correlation on either the TX or the RX side of a wireless MIMO radio link has been addressed in [3, 15]. In practice, however, the realization of high MIMO capacity is sensitive not only to the fading correlation between individual antennas but also to the rank behavior of the channel. In the existing literature, high-rank behavior has been loosely linked to the existence of a dense scattering environment, and recent successful demonstrations of MIMO technologies have been in indoor-to-indoor channels, where rich scattering is almost always guaranteed.

Here we suggest a simple classification of MIMO channels and devise a MIMO channel model whose generality encompasses some important practical cases. Unlike the channel model used in [3, 15], our model suggests that the impact of spatial fading correlation and channel rank are decoupled, although not fully independent, which allows us, for example, to describe MIMO channels with uncorrelated spatial fading at the transmitter and the receiver but reduced channel rank (and hence low capacity). This situation typically occurs when the distance between transmitter and receiver is large. Furthermore, our model allows the description of MIMO channels with scattering at both the transmitter and the receiver. We use the new model to describe the capacity behavior as a function of the wavelength, the scattering radii at the transmitter and the receiver, the distance between the TX and RX arrays, antenna beamwidths, and antenna spacing. Our model suggests that the full MIMO capacity gain can be achieved for very realistic values of scattering radii, antenna spacing, and range. It shows, in contrast to the usual intuition, that large antenna spacing has only limited impact on capacity under fairly general conditions. Another case described by the model is the "pin-hole" channel, where spatial fading is uncorrelated and yet the channel has low rank and hence low capacity. We show that this situation typically occurs for very large distances between transmitter and receiver. In the 1 x 1 case (i.e., one TX and one RX antenna), the pin-hole channel yields capacities worse than the traditional Rayleigh fading channel. Our results are validated by comparison with a ray-tracing-based channel simulation.
We find a good match between the two models over a wide range of situations.
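As a concrete reference for the capacity gains discussed above, the following sketch (not taken from the paper) estimates the ergodic capacity C = E[log2 det(I + (SNR/Nt) H H^H)] under the idealized i.i.d. Gaussian channel assumption, showing the roughly Nmin-fold advantage of a 4x4 link over a 1x1 link at the same SNR.

```python
# Monte Carlo estimate of ergodic MIMO capacity for an i.i.d. Gaussian channel.
import numpy as np

def mimo_capacity(nt, nr, snr_linear, trials=2000, rng=np.random.default_rng(0)):
    caps = []
    for _ in range(trials):
        # i.i.d. complex Gaussian channel matrix, unit average power per entry
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        m = np.eye(nr) + (snr_linear / nt) * h @ h.conj().T
        caps.append(np.log2(np.linalg.det(m).real))
    return np.mean(caps)

snr = 10 ** (10 / 10)  # 10 dB
print(f"1x1: {mimo_capacity(1, 1, snr):.2f} bit/s/Hz")
print(f"4x4: {mimo_capacity(4, 4, snr):.2f} bit/s/Hz")  # roughly Nmin times the 1x1 figure
```

With these assumptions the 4x4 result comes out close to four times the 1x1 result, which is the Nmin scaling the abstract refers to; correlated or rank-deficient ("pin-hole") channels would reduce it.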

Fractal Robots

Definition
In order to respond to a rapidly changing environment and market, a manufacturing system must have capabilities such as flexibility, adaptability, and reusability. The fractal manufacturing system is one of the new manufacturing paradigms designed for this purpose. Its basic component, called a basic fractal unit (BFU), consists of five functional modules: an observer, an analyzer, an organizer, a resolver, and a reporter. Each module autonomously cooperates and negotiates with the others while processing its jobs using agent technology. The resulting architecture has a high degree of self-similarity, one of the main characteristics of a fractal; in this context it means that any part of the system, when examined, resembles the whole. Some of the fractal-specific characteristics are self-similarity, self-organization, and goal-orientation.

FRACTAL ROBOTS

Fractal robotics is a field that promises to change technology in a way that has not been witnessed before. Fractal robots are objects made from cubic bricks that can be controlled by a computer to change shape and to reconfigure themselves into objects of different shapes. These motorized cubic bricks can be programmed to move and shuffle themselves to change shape and make objects such as a house, potentially in a few seconds. It is much like children snapping together Lego bricks to make a toy house or a toy bridge, except that here a computer does the assembly. This technology has the potential to penetrate every field of human work, including construction, medicine, and research. Fractal robots could enable buildings to be erected within a day, help perform delicate medical operations, and assist in laboratory experiments. Fractal robots also have built-in self-repair, which means they can continue to work without human intervention, and the technology brings manufacturing costs down dramatically. A fractal robot resembles itself: wherever you look, any part of its body is similar to the whole object. The robot can be animated around its joints in a uniform manner. Such robots can take the form of straightforward geometric patterns or of shapes that look more like natural structures such as plants. This patented product, however, has a cubical structure. A fractal cube can be of any size; the smallest expected size is between 1,000 and 10,000 atoms wide. The cubes are embedded with computer chips that control their movement.

FRACTAL ROBOT MECHANISM
SIMPLE CONSTRUCTION DETAILS

Considerable effort has been spent on making the robotic cube as simple as possible after the invention was conceived. The design has the fewest possible moving parts so that the cubes can be mass produced. Material requirements have been kept as flexible as possible, so that the cubes can be built from metals and plastics, which are cheaply available in industrial nations, but also from ceramics and clays, which are environmentally friendlier and more readily available in developing nations. The cube is therefore hollow, and the face plates contain all the mechanisms. Each face plate has electrical contact pads that allow power and data signals to be routed from one robotic cube to another. The plates also have 45-degree petals that push out of the surface to engage the neighboring face, allowing one robotic cube to lock to its neighbors. The contact pads are arranged symmetrically around four edges to allow for rotational symmetry.
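As a purely illustrative toy model (not the patented mechanism), the sketch below treats a fractal robot as a set of unit cubes on an integer lattice, where a cube may shuffle into an empty, face-adjacent cell provided it can re-lock to at least one neighbour there.

```python
# Hypothetical toy model: cubes on an integer lattice that step one face at a time.
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

class FractalAssembly:
    def __init__(self, cells):
        self.cells = set(cells)          # occupied lattice positions

    def neighbours(self, cell):
        return [tuple(c + d for c, d in zip(cell, f)) for f in FACES]

    def can_move(self, cell, target):
        # target must be empty, face-adjacent, and touch some other cube so the
        # moving cube can re-lock its petals to a neighbour
        return (target not in self.cells
                and target in self.neighbours(cell)
                and any(n in self.cells and n != cell for n in self.neighbours(target)))

    def move(self, cell, target):
        if self.can_move(cell, target):
            self.cells.remove(cell)
            self.cells.add(target)

# An L-shaped cluster of four cubes walks its corner cube to the other end
robot = FractalAssembly({(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 1, 0)})
robot.move((0, 1, 0), (1, 1, 0))
robot.move((1, 1, 0), (2, 1, 0))
print(sorted(robot.cells))
```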

Stereoscopic Imaging

Definition
A stereoscopic motion or still picture is one in which the right component of a composite image, usually red in color, is superposed on the left component in a contrasting color to produce a three-dimensional effect when viewed through correspondingly colored filters in the form of spectacles. The mode of 3D presentation most people are familiar with is the paper glasses with red and blue lenses. The technology behind 3D, or stereoscopic, movies is actually quite simple: it recreates the way humans see normally. Since your eyes are about two inches apart, they see the same scene from slightly different angles, and your brain correlates these two images in order to gauge distance. This is called binocular vision; View-Masters and binoculars mimic the process by presenting each eye with a slightly different image.

The binocular vision system relies on the fact that our two eyes are spaced about 2 inches (5 centimeters) apart. Each eye therefore sees the world from a slightly different perspective, and the binocular vision system in your brain uses the difference to calculate distance. Your brain has the ability to correlate the images it sees in its two eyes even though they are slightly different. If you have ever used a View-Master or a stereoscopic viewer, you have seen your binocular vision system in action: each eye is presented with an image, and two cameras photograph the same scene from slightly different positions to create those images. Your eyes correlate the images automatically because each eye sees only one of them.

A 3D film viewed without glasses is a strange sight and may appear out of focus, fuzzy, or out of register. The same scene is projected simultaneously from two different angles in two different colors, red and cyan (or blue or green). This is where the glasses come in: the colored filters separate the two images so that each image enters only one eye. Your brain puts the two pictures back together, and suddenly you are dodging a flying meteor. 3D glasses make the movie or television show you are watching look like a three-dimensional scene happening right in front of you. With objects flying off the screen and careening in your direction, and creepy characters reaching out to grab you, wearing 3D glasses makes you feel like part of the action rather than just someone watching a movie. Considering their entertainment value, 3D glasses are surprisingly simple. The reason you wear them in a movie theater is to feed a different image into each eye, just as a View-Master does.
The screen actually displays two images, and the glasses cause one of the images to enter one eye and the other to enter the other eye. There are two common systems for doing this:

The red/green or red/blue (anaglyph) system is now mainly used for television 3D effects, and it was used in many older 3D movies. In this system, two images are displayed on the screen, one in red and the other in blue (or green). The filters on the glasses allow only one image to enter each eye, and your brain does the rest. Because color itself is used to provide the separation, you cannot really have a full-color movie, so the image quality is not nearly as good as with the polarized system, which is the other common approach.
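A minimal sketch of how a red/cyan anaglyph frame can be composed from two aligned views, assuming the left and right images are available as RGB arrays (the array shapes and the synthetic test frames below are illustrative assumptions).

```python
# Red channel from the left-eye view, green and blue from the right-eye view.
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Compose a red/cyan anaglyph from two aligned uint8 RGB frames."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]    # red   -> reaches the eye behind the red filter
    out[..., 1] = right_rgb[..., 1]   # green -> reaches the eye behind the cyan filter
    out[..., 2] = right_rgb[..., 2]   # blue
    return out

# Example with synthetic frames
h, w = 480, 640
left = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
right = np.roll(left, 8, axis=1)      # crude horizontal-parallax stand-in
anaglyph = make_anaglyph(left, right)
```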

Ultra-Wideband

Definition
UWB is a wireless technology that transmits binary data, the 0s and 1s that are the digital building blocks of modern information systems. It uses low-energy, extremely short duration (on the order of picoseconds) impulses or bursts of RF (radio frequency) energy spread over a wide spectrum of frequencies to transmit data over short to medium distances, say about 15-100 m, and it does not use a carrier wave. UWB is fundamentally different from existing radio frequency technology. For today's radios, picture someone watering a lawn with a garden hose, moving the hose up and down in a smooth vertical motion: you see a continuous stream of water in an undulating wave. Nearly all radios, cell phones, and wireless LANs are like that: a continuous signal overlaid with information by one of several modulation techniques. Now picture the same person watering the lawn with a swiveling sprinkler that shoots many fast, short pulses of water. That is what UWB is typically like: millions of very short, very fast, precisely timed bursts or pulses of energy, measured in nanoseconds and covering a very wide band. By varying the pulse timing according to a complex code, a pulse can represent either a zero or a one: the basis of digital communications.

Wireless technologies such as 802.11b and short-range Bluetooth radios could eventually be replaced by UWB products with a throughput capacity 1,000 times greater than 802.11b (11 Mbit/s). Those numbers mean UWB systems have the potential to support many more users, at much higher speeds and lower costs, than current wireless LAN systems. Current UWB devices can transmit data at up to 100 Mbps, compared with the 1 Mbps of Bluetooth and the 11 Mbps of 802.11b. Best of all, it costs a fraction of current technologies such as Bluetooth, WLANs, and Wi-Fi.

The concepts of communication and computation are so closely related that their tight connection is obvious even to the PR departments of major IT companies; quite often it makes no sense to separate them. Today, when we speak about the growing power of computing devices, we imply both the growing performance of their processors and the growing throughput of their communication channels. These channels include internal ones (caches, system buses, memory interfaces, and storage-device interfaces) and external ones (peripheral interfaces, wireless network channels, and wired network channels). External wired communication channels are developing mainly in two directions: cost reduction and increased availability of optical channels (top-down), and growth of throughput (bottom-up). However, the two physical carriers are not yet close enough (above all in price) to be in direct competition; in 90% of cases the character of the problem to be solved determines which technology is preferred. Internal wired channels are switching from specialized parallel interfaces to high-level serial packet interfaces (Serial ATA, 3GIO/PCI Express, HyperTransport). This fosters a convergence of external and internal communication technologies: in the future, the separate components inside a computer case will be combined into an ordinary network. It is quite a logical outcome; a modern chipset already works much like a network switch equipped with multiple interfaces, such as a DDR memory bus, a processor bus, and AGP/PCI.
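The "short pulses instead of a carrier" idea can be sketched numerically. The snippet below builds a train of Gaussian monocycle pulses and encodes bits by pulse-position modulation; the sample rate, frame length, and pulse width are assumed example values, not figures from any UWB specification.

```python
# Train of Gaussian monocycle pulses with pulse-position modulation: a small
# extra delay within each frame encodes a '1', no extra delay encodes a '0'.
import numpy as np

fs = 20e9            # 20 GS/s simulation sample rate (assumed)
frame = 100e-9       # one pulse per 100 ns frame (assumed)
tau = 0.5e-9         # pulse width parameter, sub-nanosecond (assumed)
ppm_shift = 2e-9     # extra delay that encodes a '1' (assumed)

def monocycle(t, t0):
    x = (t - t0) / tau
    return x * np.exp(-x ** 2)        # first derivative of a Gaussian pulse

bits = [1, 0, 1, 1, 0]
t = np.arange(0, len(bits) * frame, 1 / fs)
signal = np.zeros_like(t)
for k, b in enumerate(bits):
    signal += monocycle(t, k * frame + frame / 2 + b * ppm_shift)
```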

Home Networking
Home Networking is the collection of elements that process, manage, transport, and store information, enabling the connection and integration of multiple computing, control, monitoring, and communication devices in the home. The price of home computers keeps falling, while the advantages of being connected (online investing and shopping, keeping in touch with long-distance friends, and tapping the vast resources of the Internet) keep multiplying. No wonder an increasing number of households own two or more PCs. Until recently, the home network had been largely ignored. However, the rapid proliferation of personal computers (PCs) and the Internet in homes, advances in telecommunications technology, and progress in the development of smart devices have increasingly emphasized the need for in-home networking. As these growth and advancement trends continue, the need for simple, flexible, and reliable home networks will greatly increase.

Overview

The latest advances in Internet access technologies, the drop in PC prices, and the proliferation of smart devices in the house have dramatically increased the number of intelligent devices in the consumer's premises. Consumer electronics manufacturers are building more and more intelligence into their products, enabling those devices to be networked into clusters that can be controlled remotely. Advances in wireless communication technologies have introduced a variety of wireless devices, such as PDAs and Web pads, into the house. The advent of multiple PCs and smart devices, together with the availability of high-speed broadband Internet access, has created in-home networking needs to meet the following consumer requirements:
- Simultaneous Internet access for multiple home users
- Sharing of peripherals and files
- Home control/automation
- Multi-player gaming
- Connections to and from the workplace
- Remote monitoring/security
- Distributed video
The home networking requirement introduces into the market a new breed of products called Residential Gateways. A Residential Gateway (RG) provides the connectivity features needed for the consumer to exploit the advantages of a networked home. The RG also provides the framework for residential connectivity-based services to reach the home. Examples of such services include video on demand, IP telephony, home security and surveillance, remote home appliance repair and troubleshooting, utility/meter reading, virtual private network connectivity, and innovative e-commerce solutions. A reusable framework for the home service gateway architecture enables end-to-end product design and realization of residential gateways. Coupled with standards-based and ready-to-deploy home networking components and solutions (such as the Wipro Bluetooth stack, an IEEE 1394 core, voice-over-broadband infrastructure, and an embedded TCP/IP stack), such a framework gives customers a much-needed time-to-market advantage and competitive edge.

2. What is Home Networking?

We have all become very comfortable with networks. Local area networks (LANs) and Wide Area Networks (WANs) have become ubiquitous. The network hierarchy has been rapidly moving lower in the chain towards smaller and more personal devices. These days, Home Area Networks (HANs) and Personal Area Networks (PANs) are joining their larger brother as ever-present communications channels. Home Networking is the collection of elements that process, manage, transport, and store information, enabling the connection and integration of multiple computing, control, monitoring, and communication devices in the home.

Digital Cinema

Definition
Digital cinema encompasses every aspect of the movie-making process, from production and postproduction to distribution and projection. A digitally produced or digitally converted movie can be distributed to theaters via satellite, physical media, or fiber-optic networks. The digitized movie is stored on a computer/server, which "serves" it to a digital projector for each screening. Projectors based on DLP Cinema technology are currently installed in over 1,195 theaters in 30 countries worldwide and remain the first and only commercially available digital cinema projectors. When you see a movie digitally, you see it the way its creators intended: with incredible clarity and detail, in a range of up to 35 trillion colors. And whether you catch that movie on opening night or months later, it will always look its best, because digital movies are immune to the scratches, fading, pops, and jitter that film suffers with repeated screenings. The main advantages of digital movies are that expensive film prints and post-processing costs can be done away with, and because the movie is transmitted to computers in movie theaters, it can be released in a larger number of theaters.

Digital technology has already taken over much of the home entertainment market. It seems strange, then, that the vast majority of theatrical motion pictures are shot and distributed on celluloid film, just as they were more than a century ago. Of course, the technology has improved over the years, but it is still based on the same basic principles. The reason is simple: until recently, nothing could come close to the image quality of projected film. Digital cinema is simply a new approach to making and showing movies. The basic idea is to use bits and bytes (strings of 1s and 0s) to record, transmit, and replay images, rather than using chemicals on film. The main advantage of digital technology (such as a CD) is that it can store, transmit, and retrieve a huge amount of information exactly as it was originally recorded. Analog technology (such as an audio tape) loses information in transmission and generally degrades with each viewing. Digital information is also a lot more flexible than analog information: a computer can manipulate bytes of data very easily, but it cannot do much with a streaming analog signal, which is a completely different language. Digital cinema affects three major areas of movie-making:
- Production - how the movie is actually made
- Distribution - how the movie gets from the production company to movie theaters
- Projection - how the theater presents the movie

Production

With an $800 consumer digital camcorder, a stack of tapes, a computer, and some video-editing software, you could make a digital movie. But there are a couple of problems with this approach. First, your image resolution will not be that great on a big movie screen. Second, your movie will look like news footage, not a normal theatrical film. Conventional video has a completely different look from film, and just about anybody can tell the difference in a second.

Film and video differ a lot in image clarity, depth of focus and color range, but the biggest contrast is frame rate. Film cameras normally shoot at 24 frames per second, while most U.S. television video cameras shoot at 30 frames per second (29.97 per second, to be exact).

Face Recognition Technology

Definition
Humans are very good at recognizing faces and complex patterns. Even the passage of time does not affect this capability much, and it would therefore help if computers could become as robust as humans in face recognition. Machine recognition of human faces from still or video images has attracted a great deal of attention in the psychology, image processing, pattern recognition, neural science, computer security, and computer vision communities. Face recognition is probably one of the most non-intrusive and user-friendly biometric authentication methods currently available; a screensaver equipped with face recognition technology can automatically unlock the screen whenever the authorized user approaches the computer.

The face is an important part of who we are and how people identify us. It is arguably a person's most distinctive physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up. Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene, and compare it to a database of stored images. In order for this software to work, it has to know what a basic face looks like. Facial recognition software is designed to pinpoint a face and measure its features. Each face has certain distinguishable landmarks, which make up the different facial features; these landmarks are referred to as nodal points. There are about 80 nodal points on a human face. A few of the nodal points measured by the software are:
- Distance between the eyes
- Width of the nose
- Depth of the eye sockets
- Cheekbones
- Jaw line
- Chin
These nodal points are measured to create a numerical code, a string of numbers that represents the face in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt software to complete the recognition process.

Software

Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others. Besides facial recognition, biometric authentication methods include fingerprint scans, retina scans, and voice identification. Facial recognition methods generally involve a series of steps that serve to capture, analyze, and compare a face to a database of stored images. The basic processes used by the FaceIt system to capture and compare images are:
1. Detection - When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution; the system switches to a high-resolution search only after a head-like shape is detected.
2. Alignment - Once a face is detected, the system determines the head's position, size, and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. Normalization - The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.
4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.
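A hypothetical sketch of the faceprint idea described above: each face is reduced to a short vector of nodal-point measurements and matched against a database by nearest distance. The measurement values, threshold, and helper names are invented for illustration; this is not FaceIt's actual algorithm.

```python
# Toy faceprint matching: nearest Euclidean distance over a vector of
# assumed nodal-point measurements (values in millimetres, made up here), e.g.
# [eye distance, nose width, eye-socket depth, cheekbone span, jaw width, chin length].
import numpy as np

database = {
    "alice": np.array([63.0, 34.0, 12.5, 128.0, 110.0, 40.0]),
    "bob":   np.array([70.0, 38.5, 11.0, 135.0, 118.0, 45.5]),
}

def match(faceprint, db, threshold=5.0):
    """Return the closest identity, or None if nothing is within the threshold."""
    name, dist = min(((n, np.linalg.norm(faceprint - v)) for n, v in db.items()),
                     key=lambda item: item[1])
    return name if dist < threshold else None

probe = np.array([62.5, 34.4, 12.3, 127.1, 110.8, 40.2])
print(match(probe, database))   # -> 'alice'
```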

Universal Asynchronous Receiver Transmitter

Introduction
The Universal Asynchronous Receiver Transmitter (UART) is the most widely used serial data communication circuit ever. UARTs allow full-duplex communication over serial links such as RS-232. UARTs are available as inexpensive standard products from many semiconductor suppliers, making it unlikely that this specific design is useful by itself. The basic functions of a UART are microprocessor interfacing, double buffering of transmitter data, frame generation, parity generation, parallel-to-serial conversion, double buffering of receiver data, parity checking, and serial-to-parallel conversion. The data is transmitted asynchronously, one bit at a time, and there is no clock line. The frame format used by UARTs is a low start bit, 5-8 data bits, an optional parity bit, and 1 or 2 stop bits. A UART consists of a baud rate generator, a transmitter, and a receiver. The number of bits transmitted per second is called the baud rate, and the baud rate generator generates the transmitter and receiver clocks separately. The UART synchronizes the incoming bit stream with the local clock. The transmitter interfaces to the data bus with the transmitter data register empty (TDRE) and write signals. When transmitting, the UART takes eight bits of parallel data, converts them into a serial bit stream, and transmits them one bit at a time. The receiver interfaces to the data bus with the receiver-ready and read signals. When the UART detects the start bit, it receives the data serially and converts it into parallel form; when the stop bit (logic high) is detected, the data is recognized as valid.

UART Transmitter
The UART transmitter mainly consists of two eight-bit registers, the Transmit Data Register (TDR) and the Transmit Shift Register (TSR), along with the transmitter control logic. The transmitter control generates the TDRE and TSRE signals, which control data transmission through the UART transmitter. The write operation into the TDR is based on the signals generated by the microprocessor.
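A minimal sketch of the framing and baud-rate arithmetic described above, assuming an 8-data-bit frame with even parity and one stop bit (one common configuration among the 5-8 data bit, 1-2 stop bit options).

```python
# UART framing: the line idles high, a low start bit marks the frame,
# data goes out LSB first, followed by an optional parity bit and a stop bit.
def uart_frame(byte: int, parity: str = "even") -> list[int]:
    data = [(byte >> i) & 1 for i in range(8)]          # 8 data bits, LSB first
    bits = [0] + data                                    # start bit (low) + data
    if parity in ("even", "odd"):
        p = sum(data) % 2
        bits.append(p if parity == "even" else 1 - p)    # parity bit
    bits.append(1)                                       # stop bit (high)
    return bits

def baud_tick_period(clock_hz: int, baud: int) -> int:
    """Clock cycles per bit for a simple integer-divisor baud rate generator."""
    return clock_hz // baud

print(uart_frame(0x55))                      # [0, 1,0,1,0,1,0,1,0, 0, 1]
print(baud_tick_period(16_000_000, 9600))    # 1666 cycles per bit
```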

Asynchronous Transfer Mode (ATM)


Nowadays, most of us are surrounded by powerful computer systems with graphics-oriented input and output. These computers span the entire spectrum from PCs through professional workstations up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems for exchanging data, or between central servers and their associated host computer systems. The replacement of copper with fiber and the advancements in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided a huge amount of transmission bandwidth, while the latter has made possible the transmission of all information, including voice and video, through a packet-switched network. With work continuously shared over large distances, including international communication, systems must be interconnected via wide area networks with ever-increasing demands for higher bit rates.

For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format both for bursts of high-speed data and for the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN. ATM is simply a data link layer protocol. It is asynchronous in the sense that the recurrence of cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s. Looking to the future, photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.
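The fixed 53-byte cell is the defining feature of ATM: a 5-byte header followed by a 48-byte payload. The sketch below packs a UNI-style header (GFC, VPI, VCI, PT, CLP); the HEC byte is left as a placeholder rather than computing the real CRC-8.

```python
# ATM cell = 5-byte header + 48-byte payload = 53 bytes.
def atm_cell(vpi: int, vci: int, payload: bytes, gfc: int = 0, pt: int = 0, clp: int = 0) -> bytes:
    assert len(payload) <= 48
    payload = payload.ljust(48, b"\x00")        # pad the 48-byte payload
    header = bytes([
        (gfc << 4) | (vpi >> 4),                # GFC + VPI high nibble
        ((vpi & 0x0F) << 4) | (vci >> 12),      # VPI low nibble + VCI bits 15..12
        (vci >> 4) & 0xFF,                      # VCI bits 11..4
        ((vci & 0x0F) << 4) | (pt << 1) | clp,  # VCI low nibble + payload type + CLP
        0x00,                                   # HEC placeholder (normally CRC-8 of the header)
    ])
    return header + payload

cell = atm_cell(vpi=1, vci=32, payload=b"hello")
print(len(cell))   # 53
```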

Wavelength Division Multiplexing (WDM)


In a WDM system, each of the wavelengths is launched into the fiber, and the signals are demultiplexed at the receiving end. As with TDM, the resulting capacity is an aggregate of the input signals, but WDM carries each input signal independently of the others. This means that each channel has its own dedicated bandwidth; all signals arrive at the same time, rather than being broken up and carried in time slots. The difference between WDM and dense wavelength division multiplexing (DWDM) is fundamentally one of degree: DWDM spaces the wavelengths more closely than WDM does, and therefore has a greater overall capacity. The limits of this spacing are not precisely known, and have probably not been reached, though systems were available in mid-2000 with a capacity of 128 lambdas on one fiber. DWDM's other advantages include the ability to amplify all the wavelengths at once without first converting them to electrical signals, and the ability to carry signals of different speeds and types simultaneously and transparently over the fiber (protocol and bit-rate independence).

WDM increases the carrying capacity of the physical medium (the fiber) using a completely different method from TDM. WDM assigns incoming optical signals to specific frequencies of light (wavelengths, or lambdas) within a certain frequency band. This multiplexing closely resembles the way radio stations broadcast on different wavelengths without interfering with each other. Because each channel is transmitted at a different frequency, we can select among them using a tuner. Another way to think about WDM is that each channel is a different color of light; several channels then make up a "rainbow."

DWDM mesh networks, consisting of interconnected all-optical nodes, will require the next generation of protection. Where previous protection schemes relied upon redundancy at the system, card, or fiber level, redundancy will now migrate to the wavelength level. This means, among other things, that a data channel might change wavelengths as it makes its way through the network, due either to routing or to a switch in wavelength because of a fault. The situation is analogous to that of a virtual circuit through an ATM cloud, which can experience changes in its virtual path identifier (VPI)/virtual channel identifier (VCI) values at switching points. In optical networks, this concept is sometimes called a light path.
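The "each channel on its own wavelength" idea can be made concrete with a small channel-plan calculation on the ITU-T 100 GHz grid anchored at 193.1 THz; the channel count and spacing below are example choices, not a specific system design.

```python
# DWDM channel plan: evenly spaced optical frequencies and their wavelengths.
C = 299_792_458.0          # speed of light, m/s

def channel_plan(n_channels=8, spacing_ghz=100.0, anchor_thz=193.1):
    for k in range(n_channels):
        f_hz = (anchor_thz * 1e12) + k * spacing_ghz * 1e9
        wavelength_nm = C / f_hz * 1e9
        print(f"ch{k}: {f_hz / 1e12:.4f} THz  ->  {wavelength_nm:.3f} nm")

channel_plan()   # channels fall around 1552 nm, in the C-band
```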

Object-Oriented Concepts

Object-oriented techniques achieve further reusability through the encapsulation of programs and data. The techniques and mechanisms we shall discuss here are primarily concerned with paradigms for packaging objects in such a way that they can be conveniently reused without modification to solve new problems.

Instantiation and Object Classes

Instantiation is perhaps the most basic object-oriented reusability mechanism. Every programming language provides some built-in data types (like integers and floating-point numbers) that can be instantiated as needed. Objects may be either statically or dynamically instantiated. Statically instantiated objects are allocated at compile time and exist for the duration that the program executes. Dynamically instantiated objects require run-time support for allocation and for either explicit deallocation or some form of garbage collection. The next step is to provide a way for programmers to define and instantiate their own objects. This can be done by providing the programmer with a facility to define object classes, as is the case in Smalltalk. An object class specifies a set of visible operations, a set of hidden instance variables, and a set of hidden methods which implement the operations. The instance variables can only be modified indirectly by invoking the operations. When a new instance of an object class is created, it has its own set of instance variables, and it shares the operations' methods with other instances of its class. A simple example is the class ComplexNumber. The programmer would define an interface consisting of the arithmetic operations that complex numbers support, and provide the implementation of these operations and the internal data structures. It would be up to the programmer to decide, for example, whether to use a representation based on Cartesian or polar coordinates.

An alternative approach to instantiation is to use prototypical objects rather than object classes as the "template" from which new instances are forged. This is exactly what we do when we make a copy of a text file containing a document composed in a formatting language like TeX or troff: we reuse the structure of the old document, altering its contents, and possibly refining the layout. This approach is useful to avoid a proliferation of object classes in systems where objects evolve rapidly and display more differences than similarities.
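A minimal Python sketch of the ComplexNumber example from the text: the arithmetic operations form the visible interface, while the (Cartesian) representation stays hidden behind instance variables and could be swapped for polar coordinates without changing callers.

```python
# ComplexNumber: visible operations over hidden instance variables.
class ComplexNumber:
    def __init__(self, re: float, im: float):
        self._re = re          # hidden instance variables (Cartesian representation)
        self._im = im

    def add(self, other: "ComplexNumber") -> "ComplexNumber":
        return ComplexNumber(self._re + other._re, self._im + other._im)

    def multiply(self, other: "ComplexNumber") -> "ComplexNumber":
        return ComplexNumber(self._re * other._re - self._im * other._im,
                             self._re * other._im + self._im * other._re)

    def __repr__(self) -> str:
        return f"({self._re} + {self._im}i)"

# Each instantiation gets its own instance variables but shares the methods.
print(ComplexNumber(1, 2).multiply(ComplexNumber(3, -1)))   # (5 + 5i)
```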

Real-Time Obstacle Avoidance

Introduction
Real-time obstacle avoidance is one of the key issues for successful applications of mobile robot systems. All mobile robots feature some kind of collision avoidance, ranging from primitive algorithms that detect an obstacle and stop the robot short of it in order to avoid a collision, to sophisticated algorithms that enable the robot to detour around obstacles. The latter algorithms are much more complex, since they involve not only the detection of an obstacle but also some kind of quantitative measurement of the obstacle's dimensions. Once these have been determined, the obstacle avoidance algorithm needs to steer the robot around the obstacle and resume motion toward the original target. Autonomous navigation represents a higher level of performance, since it applies obstacle avoidance simultaneously with steering the robot toward a given target. Autonomous navigation in general assumes an environment with both known and unknown obstacles, and it includes global path planning algorithms [3] to plan the robot's path among the known obstacles, as well as local path planning for real-time obstacle avoidance. This article, however, assumes motion in the presence of unknown obstacles and therefore concentrates on local, real-time obstacle avoidance.

One approach to autonomous navigation is the wall-following method, in which the robot navigates by moving alongside walls at a predefined distance. If an obstacle is encountered, the robot regards the obstacle as just another wall, following the obstacle's contour until it can resume its original course. This kind of navigation is technologically less demanding, since one major problem of mobile robots (the determination of their own position) is largely facilitated. Naturally, robot navigation by the wall-following method is less versatile and is suitable only for very specific applications; one recently introduced commercial system uses this method on a floor-cleaning robot for long hallways. A more general and commonly employed method for obstacle avoidance is based on edge detection. In this method, the algorithm tries to determine the position of the vertical edges of the obstacle and consequently attempts to steer the robot around either edge. The line connecting the two edges is considered to represent one of the obstacle's boundaries. This method was used in our own previous research, as well as in several other research projects. A disadvantage of obstacle avoidance based on edge detection is the need for the robot to stop in front of an obstacle in order to allow a more accurate measurement. A further drawback of edge-detection methods is their sensitivity to sensor accuracy. Unfortunately, ultrasonic sensors, which are the sensors most often used in mobile robot applications, have several shortcomings in this respect:
1. Poor directionality, which limits the accuracy of determining the spatial position of an edge to 10-50 cm, depending on the distance to the obstacle and the angle between the obstacle surface and the acoustic beam.
2. Frequent misreadings, caused by either ultrasonic noise from external sources or stray reflections from neighboring sensors ("crosstalk"). Misreadings cannot always be filtered out, and they cause the algorithm to "see" nonexistent edges.
3. Specular reflections, which occur when the angle between the wave front and the normal to a smooth surface is too large. In this case the surface reflects the incoming ultrasound waves away from the sensor, and the obstacle is either not detected at all or (since only part of the surface is detected) "seen" as much smaller than it is in reality.

To reduce the effects listed above, we have decided to represent obstacles with the Certainty Grid method. This method of obstacle representation allows adding and retrieving data on the fly and enables easy integration of multiple sensors. The representation of obstacles by certainty levels in a grid model was suggested by Elfes, who used the Certainty Grid for off-line global path planning. Moravec and Elfes, and later Moravec, also describe the use of Certainty Grids for map building. Since our obstacle avoidance approach makes use of this method, the basic idea of the Certainty Grid is described briefly here. In order to create a Certainty Grid, the robot's work area is divided into many square elements (denoted as cells), which form a grid (in our implementation the cell size is 10 cm by 10 cm). Each cell (i, j) contains a certainty value C(i, j) that indicates the measure of confidence that an obstacle exists within the cell area. The greater C(i, j), the greater the level of confidence that the cell is occupied by an obstacle.
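A minimal sketch of the Certainty Grid bookkeeping described above, using the 10 cm cell size mentioned in the text. The grid dimensions and the single-cell update rule are simplifying assumptions; a full implementation would typically update cells along or across the whole sonar cone.

```python
# Certainty Grid: each sonar range reading raises C(i, j) for the cell it points at.
import numpy as np

CELL = 0.10                      # cell size in metres (10 cm x 10 cm, as in the text)
grid = np.zeros((100, 100))      # assumed 10 m x 10 m work area

def update(robot_xy, heading_rad, range_m, increment=1):
    """Raise C(i, j) for the cell where the sonar beam's axis reported an object."""
    x = robot_xy[0] + range_m * np.cos(heading_rad)
    y = robot_xy[1] + range_m * np.sin(heading_rad)
    i, j = int(x / CELL), int(y / CELL)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] += increment   # higher C(i, j) = more confidence the cell is occupied

update(robot_xy=(5.0, 5.0), heading_rad=0.0, range_m=1.25)
print(grid[62, 50])   # the cell ~1.25 m ahead of the robot now holds certainty 1
```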

Delay Tolerant Networking

Introduction
Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet, which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a "least common denominator" protocol that can operate successfully and (where required) reliably in multiple disparate environments would simplify the development and deployment of such applications. The highly successful architecture and supporting protocols of today's Internet are ill suited for this purpose, but Delay Tolerant Networking can overcome this bottleneck. This seminar examines the fundamental principles that would underlie a delay-tolerant networking (DTN) architecture and the main structural elements of that architecture, centered on a new end-to-end overlay network protocol called Bundling.

The US Defense Advanced Research Projects Agency (DARPA), as part of its "Next Generation Internet" initiative, has been supporting a small group at the Jet Propulsion Laboratory (JPL) in Pasadena, California, to study the technical architecture of an "Interplanetary Internet". The idea was to blend ongoing work in standardized space communications capabilities with state-of-the-art techniques being developed within the terrestrial Internet community, with the goal of facilitating a transition as the Earth's Internet moves off-planet. The "Interplanetary Internet" name was deliberately coined to suggest a far-future integration of space and terrestrial communications infrastructure to support the migration of human intelligence throughout the Solar System. Joining the JPL team in this work was one of the original designers of the Internet and co-inventor of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite. Support for the work has recently transitioned from DARPA to NASA.

An architecture based on a "least common denominator" protocol that can operate successfully and reliably in multiple disparate environments would simplify the development and deployment of the Interplanetary Internet. It is this analysis that led to the proposal of the Delay-Tolerant Networking (DTN) architecture, an architecture that can support deep-space applications, centered on a new end-to-end overlay network protocol called 'Bundling'. The architecture and protocols developed for the project could also be useful in terrestrial environments where real-time interactive communication is not possible. The Internet protocols are ill suited for this purpose, while the overlay protocol used in the DTN architecture serves to bridge between different stacks at the boundaries between environments in a standard manner, in effect providing a general-purpose application-level gateway infrastructure that can be used by any number of applications. DTN is an architecture based on Internet-independent middleware: use exactly those protocols at each layer that are best suited to operation within each environment, but insert a new overlay network protocol between the applications and the locally optimized stacks. Research on extending Earth's Internet into interplanetary space has been underway for several years as part of an international communications standardization body known as the Consultative Committee for Space Data Systems (CCSDS).
CCSDS is primarily concerned with communications standards for scientific satellites, with a focus on the needs of near-term missions. To extend this horizon into the future, and to involve the terrestrial Internet research and engineering communities, a special Interplanetary Internet study was proposed and subsequently sponsored in the United States by NASA's Jet Propulsion Laboratory (JPL) and DARPA's Next Generation Internet initiative.
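A toy sketch (not the actual Bundle Protocol) of the store-and-forward behaviour that the bundling overlay provides: a node keeps custody of a bundle until a contact to the next hop appears, rather than assuming a continuous end-to-end path exists.

```python
# Store-and-forward bundling, greatly simplified: bundles wait in a custody
# queue until a contact window opens toward the next hop.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Bundle:
    source: str
    destination: str
    payload: bytes
    lifetime_s: float = 3600.0        # bundles eventually expire instead of living forever

@dataclass
class DTNNode:
    name: str
    custody_queue: deque = field(default_factory=deque)

    def accept(self, bundle: Bundle):
        self.custody_queue.append(bundle)     # take custody; hold until a contact appears

    def contact(self, neighbour: "DTNNode"):
        while self.custody_queue:             # forward everything during the contact window
            neighbour.accept(self.custody_queue.popleft())

earth, relay, rover = DTNNode("earth"), DTNNode("relay"), DTNNode("rover")
earth.accept(Bundle("earth", "rover", b"command sequence"))
earth.contact(relay)      # link to the relay satellite comes up
relay.contact(rover)      # hours later, the relay sees the rover
print(len(rover.custody_queue))   # 1
```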

EDGE

Introduction
EDGE is the next step in the evolution of GSM and IS-136. The objective of the new technology is to increase data transmission rates and spectrum efficiency and to facilitate new applications and increased capacity for mobile use. With the introduction of EDGE in GSM Phase 2+, existing services such as GPRS and high-speed circuit-switched data (HSCSD) are enhanced by offering a new physical layer; the services themselves are not modified. EDGE is introduced within existing specifications and descriptions rather than by creating new ones. This paper focuses on the packet-switched enhancement for GPRS, called EGPRS. GPRS allows data rates of 115 kbps and, theoretically, of up to 160 kbps on the physical layer. EGPRS is capable of offering data rates of 384 kbps and, theoretically, of up to 473.6 kbps. A new modulation technique and error-tolerant transmission methods, combined with improved link adaptation mechanisms, make these EGPRS rates possible. This is the key to increased spectrum efficiency and enhanced applications, such as wireless Internet access, e-mail, and file transfers. GPRS/EGPRS will be one of the pacesetters in the overall wireless technology evolution, in conjunction with WCDMA. Higher transmission rates for specific radio resources enhance capacity by enabling more traffic for both circuit- and packet-switched services. As the Third Generation Partnership Project (3GPP) continues standardization toward the GSM/EDGE radio access network (GERAN), GERAN will be able to offer the same services as WCDMA by connecting to the same core network. This is done in parallel with means to increase spectral efficiency. The goal is to boost system capacity, both for real-time and best-effort services, and to compete effectively with other third-generation radio access networks such as WCDMA and cdma2000.

Technical differences between GPRS and EGPRS


Introduction

Regarded as a subsystem within the GSM standard, GPRS has introduced packet-switched data into GSM networks, and many new protocols and new nodes have been introduced to make this possible. EDGE is a method of increasing the data rates on the GSM radio link. Basically, EDGE only introduces a new modulation technique and new channel coding that can be used to transmit both packet-switched and circuit-switched voice and data services. EDGE is therefore an add-on to GPRS and cannot work alone. GPRS has a greater impact on the GSM system than EDGE has. By adding the new modulation and coding to GPRS and by making adjustments to the radio link protocols, EGPRS offers significantly higher throughput and capacity. GPRS and EGPRS have different protocols and different behavior on the base station system side. On the core network side, however, GPRS and EGPRS share the same packet-handling protocols and therefore behave in the same way. Reuse of the existing GPRS core infrastructure (the serving GPRS support node and gateway GPRS support node) emphasizes the fact that EGPRS is only an "add-on" to the base station system and is therefore much easier to introduce than GPRS. In addition to enhancing throughput for each data user, EDGE also increases capacity. With EDGE, the same time slot can support more users. This decreases the number of radio resources required to support the same traffic, thus freeing up capacity for more data or voice services. EDGE makes it easier for circuit-switched and packet-switched traffic to coexist while making more efficient use of the same radio resources. Thus, in tightly planned networks with limited spectrum, EDGE may also be seen as a capacity booster for data traffic.

EDGE technology

EDGE leverages the knowledge gained through use of the existing GPRS standard to deliver significant technical improvements. Although GPRS and EDGE share the same symbol rate, the modulation bit rate differs: EDGE can transmit three times as many bits as GPRS during the same period of time. This is the main reason for the higher EDGE bit rates. The differences between the radio and user data rates come from whether or not the packet headers are taken into consideration; these different ways of calculating throughput often cause misunderstanding within the industry about actual throughput figures for GPRS and EGPRS. The data rate of 384 kbps is often used in relation to EDGE. The International Telecommunication Union (ITU) has defined 384 kbps as the data rate limit required for a service to fulfill the International Mobile Telecommunications-2000 (IMT-2000) standard in a pedestrian environment. This 384 kbps data rate corresponds to 48 kbps per time slot, assuming an eight-time-slot terminal.
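A back-of-the-envelope check of the figures quoted above, using GSM's carrier symbol rate of about 270.833 ksymbol/s, with 1 bit per symbol for GMSK (GPRS) and 3 bits per symbol for 8PSK (EDGE); these are gross radio rates, before headers and coding.

```python
# Per-slot gross rates for GPRS (GMSK) and EDGE (8PSK), plus the 384 kbps figure.
SYMBOL_RATE = 270_833          # symbols per second on one GSM carrier
SLOTS_PER_FRAME = 8

def gross_rate_per_slot(bits_per_symbol):
    return SYMBOL_RATE * bits_per_symbol / SLOTS_PER_FRAME

print(f"GPRS (GMSK): {gross_rate_per_slot(1) / 1e3:.1f} kbps per slot")   # ~33.9
print(f"EDGE (8PSK): {gross_rate_per_slot(3) / 1e3:.1f} kbps per slot")   # ~101.6
# The 384 kbps IMT-2000 figure corresponds to 48 kbps of user data per slot:
print(f"8 slots x 48 kbps = {8 * 48} kbps")                               # 384
```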

Psychoacoustics

Introduction
Basics of Audio Compression

Advances in digital audio technology are fueled by two sources: hardware developments and new signal processing techniques. When processors dissipated tens of watts of power and memory densities were on the order of kilobits per square inch, portable playback devices like an MP3 player were not possible. Now, however, power dissipation, memory densities, and processor speeds have improved by several orders of magnitude. Advances in signal processing are exemplified by Internet broadcast applications: if the desired sound quality for an Internet broadcast used 16-bit PCM encoding at 44.1 kHz, such an application would require a 1.4 Mbps (2 x 16 x 44k) channel for a stereo signal. Fortunately, new bit-rate reduction techniques for audio of this quality are constantly being released. Increasing hardware efficiency and an expanding array of digital audio representation formats are giving rise to a wide variety of new digital audio applications, including portable music playback devices, digital surround sound for cinema, high-quality digital radio and television broadcast, the Digital Versatile Disc (DVD), and many others. This paper introduces digital audio signal compression, a technique essential to the implementation of many digital audio applications. Digital audio signal compression is the removal of redundant or otherwise irrelevant information from a digital audio signal, a process that is useful for conserving both transmission bandwidth and storage space. We begin by defining some useful terminology. We then present a typical "encoder" (as compression algorithms are often called) and explain how it functions. Finally, we consider some standards that employ digital audio signal compression and discuss the future of the field.

Psychoacoustics is the study of subjective human perception of sound; effectively, it is the study of acoustical perception. Psychoacoustic modeling has long been an integral part of audio compression. It exploits properties of the human auditory system to remove the redundancies inherent in audio signals that the human ear cannot perceive: more powerful signals at certain frequencies 'mask' less powerful signals at nearby frequencies by de-sensitizing the human ear's basilar membrane (which is responsible for resolving the frequency components of a signal). The entire MP3 phenomenon is made possible by the confluence of several distinct but interrelated elements: a few simple insights into the nature of human psychoacoustics, a whole lot of number crunching, and conformance to a tightly specified format for encoding and decoding audio into compact bitstreams.

Terminology
Audio Compression vs. Speech Compression

This paper focuses on audio compression techniques, which differ from those used in speech compression. Speech compression uses a model of the human vocal tract to express a particular signal in a compressed format. This technique is not usually applied in the field of audio compression because of the vast array of sounds that can be generated: models that represent audio generation would be too complex to implement. So instead of modeling the source of sounds, modern audio compression models the receiver, i.e., the human ear.

Lossless vs. Lossy

When we speak of compression, we must distinguish between two different types: lossless and lossy. Lossless compression retains all the information in a given signal; that is, a decoder can perfectly reconstruct a compressed signal. In contrast, lossy compression eliminates information from the original signal, so a reconstructed signal may differ from the original. With audio signals, the differences between the original and reconstructed signals only matter if they are detectable by the human ear. As we will explore shortly, audio compression employs both lossy and lossless techniques.

Basic Building Blocks

A generic encoder, or "compressor", takes blocks of sampled audio signal as its input. These blocks typically consist of between 500 and 1500 samples per channel, depending on the encoder specification; for example, the MPEG-1 Layer III (MP3) specification takes 576 samples per channel per input block. The output is a compressed representation of the input block (a "frame") that can be transmitted or stored for subsequent decoding.
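The bandwidth arithmetic behind these numbers is easy to reproduce; the MP3 bit rate chosen below is just a common example value, not part of the text.

```python
# Uncompressed 16-bit / 44.1 kHz stereo PCM versus a typical MP3 stream,
# plus the duration covered by one 576-sample input block.
channels, bits_per_sample, sample_rate = 2, 16, 44_100

pcm_bps = channels * bits_per_sample * sample_rate
print(f"Uncompressed PCM: {pcm_bps / 1e6:.3f} Mbps")          # 1.411 Mbps

mp3_bps = 128_000                                             # a common MP3 bit rate (example)
print(f"Compression ratio at 128 kbps: {pcm_bps / mp3_bps:.1f}:1")       # ~11:1

block_samples = 576                                           # MP3 input block per channel
print(f"One input block spans {block_samples / sample_rate * 1e3:.2f} ms")  # ~13.06 ms
```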

Integer Fast Fourier Transform

Introduction
The DSP world today is ruled by transforms, and the discrete Fourier transform is the most widely used; so is the fast Fourier transform (FFT), its faster implementation. Suppose we take the FFT of a signal: on taking the inverse FFT, we expect the output to be the same as the input. This is called the invertibility property. On most occasions, however, disappointment is the result, due to the finite word length of the registers used to store the samples and the coefficients. This seminar introduces a method called the integer fast Fourier transform (IntFFT), which gives the FFT the invertibility property without destroying its accuracy and speed. It first reviews the background needed to understand the concept, including the discrete Fourier transform, the split-radix FFT and its fixed-point implementation, and the lifting scheme. These basics are then applied to the split-radix FFT to construct the integer FFT. The accuracy and complexity of the proposed approach are then discussed, and the final section describes the use of the IntFFT in a noise reduction application and compares its performance with the fixed-point FFT (FxpFFT).

In this paper, a concept of the integer fast Fourier transform for approximating the discrete Fourier transform is introduced. Unlike the fixed-point FFT, the new transform is an integer-to-integer mapping, is power adaptable, and is reversible. The transform can be implemented using only bit shifts and additions and no multiplications, and a method for minimizing the number of additions required is presented. While preserving reversibility, the integer FFT is shown experimentally to yield the same accuracy as the fixed-point FFT when their coefficients are quantized to a certain number of bits. The complexity of the integer FFT is shown to be much lower than that of the fixed-point FFT in terms of the number of additions and shifts. Finally, the transforms are applied to noise reduction applications, where the integer FFT provides significant improvement over the fixed-point FFT at low power and maintains similar results at high power.

The discrete Fourier transform (DFT) is one of the most fundamental operations in digital signal processing. Because of the efficiency of its convolution property, the DFT is often used in linear filtering, found in many applications such as quantum mechanics, noise reduction, and image reconstruction. However, the computational requirements for computing the DFT of a finite-length signal are relatively intensive. In particular, if the input signal has length N, directly calculating its DFT requires N^2 complex multiplications (4N^2 real multiplications) and some additional additions. In 1965, Cooley and Tukey introduced the fast Fourier transform (FFT), which efficiently and significantly reduces the computational cost of calculating an N-point DFT from N^2 to N log2 N. Since then, there have been numerous further developments that extended Cooley and Tukey's original contribution, and many efficient structures for computing the DFT have been discovered by taking advantage of the symmetry and periodicity properties of the roots of unity, such as the radix-2 FFT, the radix-4 FFT, and the split-radix FFT.
The order of the multiplicative complexity is commonly used to measure and compare the efficiency of algorithms, since multiplications are intrinsically the most complicated of the operations. It is well known in the field of VLSI that, among the digital arithmetic operations (addition, multiplication, shifting, addressing, etc.), multiplication consumes most of the time and power required for the entire computation and therefore causes the resulting devices to be large and expensive. Reducing the number of multiplications in digital chip design is therefore usually a desirable goal. In this paper, utilizing the existing efficient structures, a novel structure for approximating the DFT is presented. This proposed structure is shown to be a reversible integer-to-integer mapping called the integer FFT (IntFFT). All coefficients can be represented by finite-length binary numbers. The complexity of the proposed IntFFT is compared with the conventional fixed-point implementation of the FFT (FxpFFT), and the performance of the new transform is also tested on a noise reduction problem. The invertibility of the DFT is guaranteed by orthogonality: the inverse (the IDFT) is just the conjugate transpose. In practice, fixed-point arithmetic is often used to implement the DFT in hardware, since it is impossible to retain infinite resolution of the coefficients and operations. The complex coefficients of the transform are normally quantized to a certain number of bits, depending on the tradeoff between the cost (or power) and the accuracy of the transform. However, direct quantization of the coefficients used in the conventional structures, including both direct and reduced-complexity (e.g., radix-2, radix-4) methods, destroys the invertibility of the transform.
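The reversibility argument can be illustrated with the lifting scheme the text refers to: a plane rotation (a butterfly twiddle factor) is factored into three lifting steps, and rounding the output of each step yields an integer-to-integer map that is exactly invertible, unlike direct quantization of the cos/sin coefficients. The sketch below shows one such rotation, not the full IntFFT flow graph.

```python
# One lifting-based "integer rotation": forward rounds each lifting step, and
# the inverse undoes the same rounded steps in reverse order, so the mapping
# is integer-to-integer yet perfectly reversible.
import math

def lift_rotate(x1, x2, theta):
    p = (math.cos(theta) - 1) / math.sin(theta)   # lifting multiplier
    s = math.sin(theta)
    y1 = x1 + round(p * x2)
    y2 = x2 + round(s * y1)
    y3 = y1 + round(p * y2)
    return y3, y2                                 # approximates (c*x1 - s*x2, s*x1 + c*x2)

def lift_rotate_inverse(y3, y2, theta):
    p = (math.cos(theta) - 1) / math.sin(theta)
    s = math.sin(theta)
    y1 = y3 - round(p * y2)
    x2 = y2 - round(s * y1)
    x1 = y1 - round(p * x2)
    return x1, x2

x = (123, -45)
y = lift_rotate(*x, theta=math.pi / 7)
print(y, lift_rotate_inverse(*y, theta=math.pi / 7))   # recovers (123, -45) exactly
```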

Worldwide Interoperability for Microwave Access (WiMAX)

Introduction
In recent years, broadband technology has rapidly become an established, global commodity required by a high percentage of the population. Demand has risen rapidly, with a worldwide installed base of 57 million lines in 2002 rising to an estimated 80 million lines by the end of 2003. This healthy growth curve is expected to continue steadily over the next few years and reach the 200 million mark by 2006. DSL operators, who initially focused their deployments in densely populated urban and metropolitan areas, are now challenged to provide broadband services in suburban and rural areas where new markets are quickly taking root. Governments are prioritizing broadband as a key political objective for all citizens in order to overcome the "broadband gap", also known as the "digital divide". Wireless DSL (WDSL) offers an effective, complementary solution to wireline DSL, allowing DSL operators to provide broadband service to additional areas and populations that would otherwise find themselves outside the broadband loop. Government regulatory bodies are realizing the inherent worth of wireless technologies as a means of solving digital-divide challenges in the last mile and have accordingly initiated a deregulation process in recent years for both licensed and unlicensed bands to support this application. Recent technological advancements and the formation of a global standard and interoperability forum - WiMAX - set the stage for WDSL to take a significant role in the broadband market. Revenues from services delivered via Broadband Wireless Access have already reached $323 million and are expected to jump to $1.75 billion. There are several ways to get a fast Internet connection to the middle of nowhere. Until not too long ago, the only answer would have been "cable" - that is, laying lines. Cable TV companies, who would be the ones to do this, had been weighing the costs and benefits, but it would have taken years for the investment to pay off. So while cable companies might be leading the market for broadband access for most people (of the 41% of Americans who have high-speed Internet access, almost two-thirds get it from their cable company), they do not serve rural areas as well. And governments that try to require cable companies to lay the wires find themselves battling to force the companies to take new customers. Would DSL be a means of achieving this requisite of broadband and bridging the digital divide? The lines are already there, but the equipment was not always the latest and greatest, even then. Carrying voice was not a big concern, but upgrading the system to handle DSL would mean upgrading the central offices that would have to handle the data coming from all those farms. More troubling, there are plenty of places in cities that cannot handle DSL, let alone the countryside. Despite this, we still read about new projects to lay cable out to smaller communities, either by phone companies, cable companies, or someone else. Is this a waste of money? Probably, because cables are on their way out. Another way to get broadband to rural communities is the way many folks get their TV: satellite, which offers download speeds of about 500 Kbps - faster than a modem, but at best half as fast as DSL - through a satellite dish. But you really, really have to want it. The system costs $600 to start, then $60 a month for the service provided by DIRECWAY in the US. There are other wireless ways to get broadband access.

MCI ("Microwave Communications Inc.") was originally formed to compete with AT & T by using microwave towers to transmit voice signals across the US. Unlike a radio (or a Wi-Fi connection), those towers send the signal in a straight line -unidirectional instead of omni directional. That's sometimes called fixed wireless or point-to-point wireless. One popular standard for this is called LMDS: local multipoint distribution system. Two buildings up to several miles apart would have microwave antennas pointing at each other. WiMAX: WiMax delivers broadband to a large area via towers, just like cell phones. This enables your laptop to have high-speed access in any of the hot spots. Instead of yet another cable coming to your home, there would be yet another antenna on the cell-phone tower. This is definitely a point towards broadband service in rural areas.

Code Division Multiple Access (CDMA)

Overview
Code division multiple access (CDMA) is a modulation and multiple-access scheme based on spread-spectrum communication. In this scheme, multiple users share the same frequency band at the same time, by spreading the spectrum of their transmitted signals, so that each user's signal is pseudo-orthogonal to the signals of the other users.
CDMA Signals
In a CDMA system, each signal consists of a different pseudorandom binary sequence (called the spreading code) that modulates a carrier, spreading the spectrum of the waveform. A large number of CDMA signals share the same frequency spectrum. If CDMA is viewed in either the frequency or time domain, the multiple access signals overlap with each other. However, the use of statistically orthogonal spreading codes separates the various signals in the code space.
CDMA Receivers
A CDMA receiver separates the signals by means of a correlator that uses the particular binary sequence to despread the signal and collect the energy of the desired signal. Other users' signals, whose spreading codes do not match this sequence, are not despread in bandwidth and, as a result, contribute only to the noise. These signals represent a self-interference generated by the system. The output of the correlator is sent to a narrow-bandwidth filter. The filter allows all of the desired signal's energy to pass through, but reduces the interfering signal's energy by the ratio of the bandwidth before the correlator to the bandwidth after the correlator. This reduction greatly improves the signal-to-interference ratio of the desired signal. This ratio is also known as the processing gain. The signal-to-noise ratio is determined by the ratio of the desired signal power to the sum of all of the other signal powers. It is enhanced by the processing gain or the ratio of spread bandwidth to baseband data rate.
CDMA Channel Assignments
A CDMA digital cellular waveform design uses a pseudorandom noise (PN) sequence to spread the spectrum. The sample rate of the spreading sequence (called the chip rate) is chosen so that the bandwidth of the filtered signal is several times the bandwidth of the original signal. A typical system might use multiple PN sequences. In addition, it might use repeated spreading codes of known lengths to ensure orthogonality between signals intended for different users. The channel assignment is essentially determined by the set of codes that are used for that particular link. Thus, the signal transmitted at any time in a logical channel is determined by:
* The frequency of operation for the base station
* The current symbol
* The specific orthogonal spreading code assigned for the logical channel
* The PN spreading code
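As a toy illustration of spreading and correlator despreading (not any particular cellular standard), the sketch below uses random ±1 chip sequences as stand-in spreading codes for two users; the chip count N plays the role of the processing gain, and the correlator recovers user A's bits even though both users occupy the same band at the same time.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                    # chips per data bit (processing gain)
code_a = rng.choice([-1, 1], size=N)      # illustrative spreading code, user A
code_b = rng.choice([-1, 1], size=N)      # illustrative spreading code, user B

bits_a = rng.choice([-1, 1], size=100)    # user A's data
bits_b = rng.choice([-1, 1], size=100)    # user B's data

# Spread: each data bit is multiplied by the user's full chip sequence,
# and both users transmit on the same band at the same time.
tx = np.concatenate([b * code_a for b in bits_a]) + \
     np.concatenate([b * code_b for b in bits_b])

# Despread user A: correlate each N-chip block with A's code only.
rx_blocks = tx.reshape(-1, N)
decisions = np.sign(rx_blocks @ code_a)

print("bit errors for user A:", np.sum(decisions != bits_a))
```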

Optical Coherence Tomography (OCT)


Optical Coherence Tomography (OCT) is an imaging technique that is similar in principle to ultrasound, but with superior resolution. It relies on exposing a sample to a burst of light and then measuring the reflective response from different depths, and is therefore capable of scanning noninvasively beneath the surface of the sample. In ultrasound imaging, it is relatively easy to measure the time delay of each reflected packet. However, for light pulses, interferometry must be used to measure the displacement with meaningful accuracy. The amount of light reflected from each point within the scanning window in the sample is plotted graphically as an OCT image. The goal of this investigation is to use Optical Coherence Tomography to image epileptic lesions on cortical tissue from rats. Such images would be immensely useful for surgical purposes. They would detail how deep the lesion is, allowing for precise removal that neither removes an insufficient amount of damaged tissue nor extracts too much healthy tissue. Though commercial OCT systems already exist, they typically do not scan very deeply beneath sample surfaces. For the purpose of this study, a system must be constructed that scans up to 2 millimeters into tissue [1]. Unfortunately, an increase in axial depth necessitates a decrease in transverse (along the surface of the sample) resolution due to focal restrictions of the objective lenses [2]. However, this loss is acceptable for this investigation, as the main goal is to determine lesion depth and not to achieve perfect image clarity. The ability to detect the positional delay of light reflecting from a tissue sample is at the heart of OCT. Low-coherence interferometry provides just that. A low-coherence light source has the potential to produce interference fringes only when integrated with light from the same source that has traveled nearly exactly the same distance [3]. This means that if light from such a source is split by a beam splitter into two equal parts, each part reflects off a different object, and the parts combine to form one beam again, they will produce an interference fringe pattern only if the distance they traveled while split was exactly the same.
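The fringe-envelope behaviour that makes depth ranging possible can be sketched numerically. The Gaussian source spectrum below (1300 nm center, 60 nm bandwidth) is an assumed, merely representative choice: summing the interference term over that spectrum shows full-strength fringes when the two arms are matched and essentially none once the path mismatch exceeds a few coherence lengths.

```python
import numpy as np

# Assumed low-coherence source: Gaussian spectrum, 1300 nm center, 60 nm FWHM.
lam0, dlam = 1300e-9, 60e-9
lc = (2 * np.log(2) / np.pi) * lam0**2 / dlam        # coherence length ~ 12 um

k0 = 2 * np.pi / lam0
dk = (np.pi / np.sqrt(np.log(2))) * dlam / lam0**2   # spectral width in k (approx)

# Interference term versus path-length mismatch dz: integrating cos(2 k dz)
# over the Gaussian spectrum gives fringes under a Gaussian envelope.
dz = np.linspace(-50e-6, 50e-6, 2001)
k = np.linspace(k0 - 3 * dk, k0 + 3 * dk, 400)
S = np.exp(-((k - k0) / dk) ** 2)
fringes = (S[None, :] * np.cos(2 * np.outer(dz, k))).sum(axis=1) / S.sum()

# Fringe amplitude is ~1 near dz = 0 and collapses once |dz| exceeds a few
# coherence lengths - this is what lets OCT localize reflections in depth.
print("coherence length (um):", lc * 1e6)
print("envelope at dz = 0    :", abs(fringes[len(dz) // 2]))
print("envelope at dz ~ 40 um:", abs(fringes[-200]))
```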

Symbian OS

Definition
Symbian OS is designed for the mobile phone environment. It addresses constraints of mobile phones by providing a framework to handle low-memory situations, a power management model, and a rich software layer implementing industry standards for communications, telephony and data rendering. Even with these abundant features, Symbian OS puts no constraints on the integration of other peripheral hardware. This flexibility allows handset manufacturers to pursue innovative and original designs. Symbian OS is proven on several platforms. It started life as the operating system for the Psion series of consumer PDA products (including Series 5mx, Revo and netBook), and various adaptations by Diamond, Oregon Scientific and Ericsson. The first dedicated mobile phone incorporating Symbian OS was the Ericsson R380 Smartphone, which incorporated a flip-open keypad to reveal a touch screen display and several connected applications. Most recently available is the Nokia 9210 Communicator, a mobile phone that has a QWERTY keyboard and color display, and is fully open to third-party applications written in Java or C++. The five key points - small mobile devices, mass-market, intermittent wireless connectivity, diversity of products and an open platform for independent software developers - are the premises on which Symbian OS was designed and developed. This makes it distinct from any desktop, workstation or server operating system. This also makes Symbian OS different from embedded operating systems, or any of its competitors, which weren't designed with all these key points in mind. Symbian is committed to open standards. Symbian OS has a POSIX-compliant interface and a Sun-approved JVM, and the company is actively working with emerging standards, such as J2ME, Bluetooth, MMS, SyncML, IPv6 and WCDMA. As well as its own developer support organization, books, papers and courses, Symbian delivers a global network of third-party competency and training centers - the Symbian Competence Centers and Symbian Training Centers. These are specifically directed at enabling other organizations and developers to take part in this new economy. Symbian has announced and implemented a strategy that will see Symbian OS running on many advanced open mobile phones. Small devices come in many shapes and sizes, each addressing distinct target markets that have different requirements. The market segment we are interested in is that of the mobile phone. The primary requirement of this market segment is that all products are great phones. This segment spans voice-centric phones with information capability to information-centric devices with voice capability. These advanced mobile phones integrate fully-featured personal digital assistant (PDA) capabilities with those of a traditional mobile phone in a single unit. There are several critical factors behind the need for operating systems in this market. It is important to look at the mobile phone market in isolation. It has specific needs that make it unlike markets for PCs or fixed domestic appliances. Scaling down a PC operating system, or bolting communication capabilities onto a small and basic operating system, results in too many fundamental compromises. Symbian believes that the mobile phone market has five key characteristics that make it unique, and result in the need for a specifically designed operating system:
1) mobile phones are both small and mobile.
2) mobile phones are ubiquitous - they target a mass-market of consumer, enterprise and professional users.
3) mobile phones are occasionally connected - they can be used when connected to the wireless phone network, locally to other devices, or on their own.

4) manufacturers need to differentiate their products in order to innovate and compete in a fast-evolving market.
5) mobile phone operating systems need to be open platforms for independent software developers.

Home Networking
Home Networking is the collection of elements that process, manage, transport, and store information, enabling the connection and integration of multiple computing, control, monitoring, and communication devices in the home. The price of home computers keeps falling, while the advantages for consumers of being connected - online investing and shopping, keeping in touch with long-distance friends, and tapping the vast resources of the Internet - keep multiplying. No wonder an increasing number of households own two or more PCs. Until recently, the home network has been largely ignored. However, the rapid proliferation of personal computers (PCs) and the Internet in homes, advancements in telecommunications technology, and progress in the development of smart devices have increasingly emphasized the need for in-home networking. Furthermore, as these growth and advancement trends continue, the need for simple, flexible, and reliable home networks will greatly increase.
Overview
The latest advances in Internet access technologies, the dropping of PC prices, and the proliferation of smart devices in the house have dramatically increased the number of intelligent devices on the consumer's premises. Consumer electronics equipment manufacturers are building more and more intelligence into their products, enabling those devices to be networked into clusters that can be controlled remotely. Advances in wireless communication technologies have introduced a variety of wireless devices, like PDAs and Web Pads, into the house. The advent of multiple PCs and smart devices in the house, and the availability of high-speed broadband Internet access, have resulted in in-house networking needs to meet the following requirements of consumers:
* Simultaneous Internet access for multiple home users
* Sharing of peripherals and files
* Home control/automation
* Multi-player gaming
* Connecting to/from the workplace
* Remote monitoring/security
* Distributed video
The home networking requirement introduces into the market a new breed of products called Residential Gateways. A Residential Gateway (RG) will provide the necessary connectivity features to enable the consumer to exploit the advantages of a networked home. The RG will also provide the framework for residential connectivity based services to reach the home. Examples of such services include: video on demand, IP telephony, home security and surveillance, remote home appliance repair and troubleshooting, utility/meter reading, virtual private network connectivity and innovative e-commerce solutions. Using a reusable framework for home service gateway architecture, we offer end-to-end product design and realization services for residential gateways. Coupled with our standards-based and ready-to-deploy home networking components and solutions (like the Wipro BlueTooth Stack, IEEE 1394 core, Voice Over Broadband Infrastructure, Embedded TCP/IP Stack etc.), our customers can enjoy the much-needed time-to-market advantage and competitive edge.
2. What is Home Networking?

We have all become very comfortable with networks. Local area networks (LANs) and Wide Area Networks (WANs) have become ubiquitous. The network hierarchy has been rapidly moving lower in the chain towards smaller and more personal devices. These days, Home Area Networks (HANs) and Personal Area Networks (PANs) are joining their larger brother as ever-present communications channels. Home Networking is the collection of elements that process, manage, transport, and store information, enabling the connection and integration of multiple computing, control, monitoring, and communication devices in the home.

Guided Missiles

Definition
Looking back into the history of rockets and guided missiles, we find that rockets were used in China and India around 1000 AD for fireworks as well as other purposes. During the 18th century, unguided rocket-propelled missiles were used by Hyder Ali and his son Tipu Sultan against the British. The current phase in the history of missiles began during the Second World War with the use of the V1 and V2 missiles by Germany. Since then there has been rapid growth in this field because of technological development.

Types Of Guided Missiles


Presently there are many types of guided missiles. They can be broadly classified on the basis of features such as type of target, method of launching, range, launch platform, propulsion, guidance and type of trajectory. On the basis of target they are classified as antitank/antiarmour, antipersonnel, antiaircraft, antiship/antisubmarine, antisatellite or antimissile. Another classification depends upon the method of launching: Surface-to-Surface Missiles (SSM), Surface-to-Air Missiles (SAM), Air-to-Air Missiles (AAM) and Air-to-Surface Missiles. Surface-to-surface missiles are commonly ground-to-ground ones, although they may also be launched from one ship against another; underwater weapons, which are launched from a submarine, also come under this classification. Surface-to-air missiles are an essential component of modern air defence systems, along with antiaircraft guns, and are used against hostile aircraft. Air-to-air missiles are for airborne battle among fighter or bomber aircraft; these are usually mounted under the wings of the aircraft and are fired against the target, with computer and radar networks controlling them. On the basis of range, missiles can be broadly classified as short-range, medium-range, intermediate-range and long-range missiles. This classification is mainly used in the case of SSMs: missiles which travel a distance of about 50 to 100 km are designated as short-range missiles, those with a range of 100 to 1500 km are called medium-range missiles, missiles having a range up to 5000 km are said to be intermediate-range missiles, and missiles which travel a distance of 12000 km are called long-range missiles. On the basis of launch platform, missiles can be termed shoulder-fired, land/mobile-fired, aircraft/helicopter-borne, or ship/submarine-launched. Based on guidance, missiles are broadly classified into command guidance, homing guidance, beam-rider guidance, inertial navigation guidance and stellar guidance. One more classification is based on the type of trajectory, where a missile is called either a ballistic missile or a cruise missile. By definition, a ballistic missile is one which covers a major part of its range outside the atmosphere, where the only external force acting on the missile is the gravitational force of the earth, while a cruise missile is one which travels its entire range within the atmosphere at a nearly constant height and speed. Another classification is based on the propulsion system provided in the missile: solid propulsion systems, liquid propulsion systems and hybrid propulsion systems.

AC Performance Of Nanoelectronics

Definition
Nanoelectronic devices fall into two classes: tunnel devices and ballistic transport devices. In tunnel devices, single-electron effects occur if the tunnel resistance is larger than h/e^2 ≈ 25.8 kΩ. In ballistic devices with cross-sectional dimensions in the range of the quantum mechanical wavelength of electrons, the resistance is of order h/e^2 ≈ 25.8 kΩ. This high resistance may seem to restrict the operational speed of nanoelectronics in general. However, the capacitance values and drain-source spacing are typically small, which gives rise to very small RC times and transit times on the order of picoseconds or less. Thus the speed may be very large, up to the THz range. The goal of this seminar is to present the models and performance predictions about the effects that set the speed limit in carbon nanotube transistors, which form the ideal test bed for understanding the high-frequency properties of nanoelectronics because they may behave as ideal ballistic 1-D transistors. Ballistic Transport - An Outline: When carriers travel through a semiconductor material, they are likely to be scattered by any number of possible sources, including acoustic and optical phonons, ionized impurities, defects, interfaces, and other carriers. If, however, the distance traveled by the carrier is smaller than the mean free path, it is likely not to encounter any scattering events; it can, as a result, move ballistically through the channel. To the first order, the existence of ballistic transport in a MOSFET depends on the value of the characteristic scattering length (i.e. mean free path) in relation to the channel length of the transistor. This scattering length, l, can be estimated from the measured carrier mobility as l = vth·τ, where τ = µm*/q is the average scattering time, m* is the carrier effective mass, and vth is the thermal velocity. Because scattering mechanisms determine the extent of ballistic transport, it is important to understand how these depend upon operating conditions such as normal electric field and ambient temperature. Dependence on Normal Electric Field: In state-of-the-art MOSFET inversion layers, carrier scattering is dominated by phonons, impurities (Coulomb interaction), and surface roughness scattering at the Si-SiO2 interface. The relative importance of each scattering mechanism is dependent on the effective electric field component normal to the conduction channel. At low fields, impurity scattering dominates due to strong Coulombic interactions between the carriers and the impurity centers. As the electric field is increased, acoustic phonons begin to dominate the scattering process. At very high fields, carriers are pulled closer to the Si-SiO2 gate oxide interface; thus, surface roughness scattering degrades carrier mobility. A universal mobility model has been developed to relate field strength with the effective carrier mobility due to phonon and surface roughness scattering. Dependence on Temperature: When the temperature is changed, the relative importance of each of the aforementioned scattering mechanisms is altered. Phonon scattering becomes less important at very low temperatures. Impurity scattering, on the other hand, becomes more significant because carriers are moving slower (thermal velocity is decreased) and thus have more time to interact with impurity centers. Surface roughness scattering remains the same because it does not depend on temperature. At liquid nitrogen temperature (77 K) and an effective electric field of 1 MV/cm, the electron and hole mobilities are ~700 cm2/Vsec and ~100 cm2/Vsec, respectively.
Using the above equations, the scattering lengths are approximately 17 nm and 3.6 nm. These scattering lengths can be assumed to be worst-case scenarios, as large operating voltages (1 V) and aggressively scaled gate oxides (10 Å) are assumed. Thus, actual scattering lengths will likely be larger than the calculated values.
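A rough back-of-the-envelope check of these figures can be made from l = vth·τ alone. The effective masses and thermal-velocity convention in the sketch below are assumptions not stated in the text, so it reproduces the order of magnitude (roughly ten nanometers for electrons and a few nanometers for holes) rather than the exact 17 nm and 3.6 nm values.

```python
# Rough check of the quoted scattering lengths: l = v_th * tau with tau = mu * m* / q.
# Effective masses and the 3kT thermal-velocity convention are assumptions; different
# choices shift the result, so only the order of magnitude is meaningful here.
q  = 1.602e-19        # C
kB = 1.381e-23        # J/K
m0 = 9.109e-31        # kg
T  = 77.0             # K (liquid nitrogen)

for carrier, mu_cm2, m_eff in [("electron", 700.0, 0.26 * m0),
                               ("hole",     100.0, 0.39 * m0)]:
    mu  = mu_cm2 * 1e-4                   # cm^2/Vs -> m^2/Vs
    tau = mu * m_eff / q                  # mean scattering time
    vth = (3 * kB * T / m_eff) ** 0.5     # thermal velocity
    print(carrier, "mean free path ~ %.1f nm" % (vth * tau * 1e9))
```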

Further device design considerations in maximizing this scattering length will be discussed in the last section of this paper. Still, the values calculated above are certainly in the range of transistor gate lengths currently being studied in advanced MOSFET research (<50nm). Ballistic carrier transport should thus become increasingly important as transistor channel lengths are further reduced in size. In addition, it should be noted that the mean free path of holes is generally smaller than that of electrons. Thus, it should be expected that ballistic transport in PMOS transistors is more difficult to achieve, since current conduction occurs through hole transport. Calculation of the mean scattering length, however, can only be regarded as a first-order estimation of ballistic transport. To accurately determine the extent of ballistic transport evident in a particular transistor structure, Monte Carlo simulation methods must be employed. Only by modeling the random trajectory of each carrier traveling through the channel can we truly assess the extent of ballistic transport in a MOSFET.

Acoustics
Human beings extract a lot of information about their environment using their ears. In order to understand what information can be retrieved from sound, and how exactly it is done, we need to look at how sounds are perceived in the real world. To do so, it is useful to break the acoustics of a real-world environment into three components: the sound source, the acoustic environment, and the listener.
1. The sound source: this is an object in the world that emits sound waves. Examples are anything that makes sound - cars, humans, birds, closing doors, and so on. Sound waves get created through a variety of mechanical processes. Once created, the waves usually get radiated in a certain direction. For example, a mouth radiates more sound energy in the direction that the face is pointing than to the side of the face.
2. The acoustic environment: once a sound wave has been emitted, it travels through an environment where several things can happen to it: it gets absorbed by the air (the high-frequency waves more so than the low ones; the absorption amount depends on factors like wind and air humidity); it can travel directly to a listener (direct path), bounce off an object once before it reaches the listener (first-order reflected path), bounce twice (second-order reflected path), and so on; each time a sound reflects off an object, the material that the object is made of affects how much of each frequency component of the sound wave gets absorbed and how much gets reflected back into the environment; sounds can also pass through objects such as water, or walls; finally, environment geometry like corners, edges, and small openings has complex effects on the physics of sound waves (refraction, scattering).
3. The listener: this is a sound-receiving object, typically a "pair of ears". The listener uses acoustic cues to interpret the sound waves that arrive at the ears, and to extract information about the sound sources and the environment.
How Virtual Surround Works: A 3D audio system aims to digitally reproduce a realistic sound field. To achieve the desired effect a system needs to be able to re-create portions or all of the listening cues discussed in the previous chapter: IID, ITD, outer ear effects, and so on. A typical first step in building such a system is to capture the listening cues by analyzing what happens to a single sound as it arrives at a listener from different angles. Once captured, the cues are synthesized in a computer simulation for verification. What is an HRTF? The majority of 3D audio technologies are at some level based on the concept of HRTFs, or Head-Related Transfer Functions. An HRTF can be thought of as a set of two audio filters (one for each ear) that contains in it all the listening cues that are applied to a sound as it travels from the sound's origin (its source, or position in space), through the environment, and arrives at the listener's ear drums. The filters change depending on the direction from which the sound arrives at the listener. The level of HRTF complexity necessary to create the illusion of realistic 3D hearing is subject to considerable discussion and varies greatly across technologies. HRTF Analysis: The most common method of measuring the HRTF of an individual is to place tiny probe microphones inside a listener's left and right ear canals, place a speaker at a known location relative to the listener, play a known signal through that speaker, and record the microphone signals.
By comparing the resulting impulse response with the original signal, a single filter in the HRTF set has been found. After moving the speaker to a new location, the process is repeated until an entire, spherical map of filter sets has been devised.
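Once such a map of filter pairs exists, rendering a sound from a given direction reduces to one convolution per ear. The impulse responses below are crude placeholders that fake only the interaural time and level differences, rather than measured HRTFs, so the sketch shows the signal flow, not production-quality spatialization.

```python
import numpy as np

fs = 44100  # sample rate in Hz (assumed)

def toy_hrtf_pair(azimuth_deg, n=128):
    """Placeholder HRTF pair. A real renderer would load measured impulse
    responses; here the far ear simply hears the sound slightly later (ITD)
    and slightly quieter (ILD)."""
    s = np.sin(np.radians(azimuth_deg))      # +1 = fully right, -1 = fully left
    h_l, h_r = np.zeros(n), np.zeros(n)
    h_l[int(round(max(s, 0) * 0.0006 * fs))] = 1.0 - 0.4 * max(s, 0)
    h_r[int(round(max(-s, 0) * 0.0006 * fs))] = 1.0 - 0.4 * max(-s, 0)
    return h_l, h_r

def spatialize(mono, azimuth_deg):
    """Binaural rendering = one convolution per ear with the HRTF filters."""
    h_l, h_r = toy_hrtf_pair(azimuth_deg)
    return np.convolve(mono, h_l), np.convolve(mono, h_r)

# Example: a short noise burst placed 60 degrees to the right of the listener.
burst = np.random.default_rng(1).standard_normal(fs // 10)
left, right = spatialize(burst, 60.0)
```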

BiCMOS Technology

Introduction
The history of semiconductor devices starts in the 1930s, when Lilienfeld and Heil first proposed the MOSFET. However, it took 30 years before this idea was applied to functioning devices used in practical applications, and by the late 1980s this trend had taken a turn as MOS technology caught up and there was a crossover between bipolar and MOS market share. CMOS was finding more widespread use due to its low power dissipation, high packing density and simple design, such that by 1990 CMOS covered more than 90% of total MOS sales. In 1983 a bipolar-compatible process based on CMOS technology was developed, and BiCMOS technology, with both the MOS and bipolar devices fabricated on the same chip, was developed and studied. The objective of BiCMOS is to combine bipolar and CMOS so as to exploit the advantages of both at the circuit and system levels. Since 1985, the state-of-the-art bipolar and CMOS structures have been converging. Today BiCMOS has become one of the dominant technologies used for high-speed, low-power and highly functional VLSI circuits, especially when the BiCMOS process has been enhanced and integrated into the CMOS process without any additional steps. Because the process steps required for both CMOS and bipolar are similar, these steps can be shared for both of them.

System On Chip (SOC) Fundamentals


The concept of system-on-chip (SOC) has evolved as the number of gates available to a designer has increased and as CMOS technology has migrated from a minimum feature size of several microns to close to 0.1 µm. Over the last decade, the integration of analog circuit blocks has become an increasingly common feature of SOC development, motivated by the desire to shrink the number of chips and passives on a PC board. This, in turn, reduces system size and cost and improves reliability by requiring fewer components to be mounted on a PC board. Power dissipation of the system also improves with the elimination of the chip input-output (I/O) interconnect blocks. Superior matching and control of integrated components also allows new circuit architectures to be used that cannot be attempted in multi-chip architectures. Driving PC board traces consumes significant power, both in overcoming the larger capacitances on the PC board and through larger signal swings to overcome signal crosstalk and noise on the PC board. Large-scale microcomputer systems with integrated peripherals, the complete digital processor of a cellular phone, and the switching system for a wireline data-communication system are some of the many applications of digital SOC systems. Examples of analog or mixed-signal SOC devices include analog modems; broadband wired digital communication chips, such as DSL and cable modems; wireless telephone chips that combine voice-band codecs with baseband modulation and demodulation functions; and ICs that function as the complete read channel for disc drives. The analog section of these chips includes wideband amplifiers, filters, phase-locked loops, analog-to-digital converters, digital-to-analog converters, operational amplifiers, current references, and voltage references. Many of these systems take advantage of the digital processors on an SOC chip to auto-calibrate the analog section of the chip, including canceling dc offsets and reducing linearity errors within data converters. Digital processors also allow tuning of analog blocks, such as centering filter cutoff frequencies. Built-in self-test functions for the analog blocks are also possible through the use of on-chip digital processors.

Analog or mixed-signal SOC integration is inappropriate for designs that will have low production volumes and low margins. In this case, the nonrecurring engineering costs of designing the SOC chip and its mask set will far exceed the design cost for a system with standard programmable digital parts, standard analog and RF functional blocks, and discrete components. Noise from the digital electronics can also limit the practicality of forming an SOC with high-precision analog or RF circuits. A system that requires power-supply voltages greater than 3.6 V in its analog or RF stages is also an unattractive candidate for an SOC, because additional process modifications would be required for the silicon devices to work above the standard printed circuit board interface voltage of 3.3 V ± 10%. Before a high-performance analog system can be integrated on a digital chip, the analog circuit blocks must have available critical passive components, such as resistors and capacitors. Digital blocks, in contrast, require only n-channel metal-oxide semiconductor (NMOS) and p-channel metal-oxide semiconductor (PMOS) transistors. Added process steps may be required to achieve characteristics for resistors and capacitors suitable for high-performance analog circuits. These steps create linear capacitors with low levels of parasitic capacitance coupling to other parts of the IC, such as the substrate. Though additional process steps may be needed for the resistors, it may be possible to alternatively use the diffusion steps, such as the N and P implants that make up the drains and sources of the MOS devices, as can the polysilicon gate used as part of the CMOS devices. The shortcomings of these elements as resistors, beyond their high parasitic capacitances, are their high temperature and voltage coefficients and the limited control of the absolute value of the resistance.

Fuzzy based Washing Machine


Fuzzy logic has played a pivotal part in this age of rapid technological development. In this paper we have elaborated on the automation process used in a washing machine. The paper focuses on two subsystems of the washing machine, namely the sensor mechanism and the controller unit. It also discusses the use of singletons for fuzzy sets and highlights the use of a fuzzy controller to give the correct wash time. The use of a fuzzy controller has the advantages of managing time, increasing equipment efficiency and diagnosing malfunctions. INTRODUCTION: Classical feedback control theory has been the basis for the development of simple automatic control systems. Its easily comprehensible principle and relatively simple implementation have been the main reasons for its wide application in industry. Such fixed-gain feedback controllers are insufficient, however, to compensate for parameter variations in the plant as well as to adapt to changes in the environment. The need to overcome such problems and to have a controller well-tuned not just for one operating point but for a whole range of operating points has motivated the idea of an adaptive controller. In order to illustrate some basic concepts in fuzzy logic, consider a simplified example of a thermostat controlling a heater fan, illustrated in fig. 1. The room temperature detected through a sensor is input to a controller which outputs a control force to adjust the heater fan speed. A conventional thermostat works like an ON/OFF switch. If we set it at 78F then the heater is activated only when the temperature falls below 75F. When it reaches 81F the heater is turned off. As a result the room temperature is either too cool or too warm. A fuzzy thermostat works in shades of gray, where the temperature is treated as a series of overlapping ranges. For example, 78F is 60% warm and 20% hot. The controller is programmed with simple if-then rules that tell the heater fan how fast to run. As a result, when the temperature changes the fan speed will continuously adjust to keep the temperature at the desired level. Our first step in designing such a fuzzy controller is to characterize the range of values for the input and output variables of the controller. Then we assign labels such as cool for the temperature and high for the fan speed, and we write a set of simple English-like rules to control the system. Inside the controller all temperature-regulating actions will be based on how the current room temperature falls into these ranges and the rules describing the system behavior. The controller's output will vary continuously to adjust the fan speed. The temperature controller described above can be defined in four simple rules:
If temperature is COLD then fan speed is HIGH
If temperature is COOL then fan speed is MEDIUM
If temperature is WARM then fan speed is LOW
If temperature is HOT then fan speed is ZERO
Here the linguistic variables cold, cool, warm, hot, high, etc. are labels which refer to sets of overlapping values. These overlapping, triangular-shaped sets are defined by membership functions.
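A minimal sketch of this controller is shown below. The numeric membership ranges and fan-speed singletons are assumptions (the text gives none), chosen only so that 78F comes out 60% WARM and 20% HOT as quoted; defuzzification is the weighted average of the fired singletons.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed membership ranges (degrees F) and fan-speed singletons (percent).
temperature_sets = {
    "COLD": lambda t: tri(t, 40, 55, 68),
    "COOL": lambda t: tri(t, 60, 70, 78),
    "WARM": lambda t: tri(t, 72, 82, 88),
    "HOT":  lambda t: tri(t, 75, 90, 105),
}
fan_singletons = {"HIGH": 100.0, "MEDIUM": 60.0, "LOW": 30.0, "ZERO": 0.0}

# The four rules from the text: "If temperature is X then fan speed is Y".
rules = [("COLD", "HIGH"), ("COOL", "MEDIUM"), ("WARM", "LOW"), ("HOT", "ZERO")]

def fan_speed(temperature):
    """Fire every rule, then defuzzify by the weighted average of the singletons."""
    fired = [(temperature_sets[t](temperature), fan_singletons[f]) for t, f in rules]
    total = sum(w for w, _ in fired)
    return sum(w * out for w, out in fired) / total if total else 0.0

print(fan_speed(78))   # 78F is 60% WARM and 20% HOT -> a low fan speed (22.5%)
```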

Low Memory Color Image Zero Tree Coding

This paper presents a zerotree coding method for color images that uses no lists during encoding and decoding, permitting the omission of the lists required by Said and Pearlman's Set Partitioning In Hierarchical Trees (SPIHT) algorithm [3]. Without the lists, the memory requirement in a VLSI implementation is reduced significantly. The coding algorithm is also developed to reduce the circuit complexity of an implementation. Our experimental results show only a minor reduction of PSNR values when compared with the PSNR values obtained by the SPIHT codec, illustrating well the trade-off between memory requirement and hardware simplicity. Introduction: Since being introduced by Shapiro [4], zerotree wavelet image coding has been a well-recognized image coding method, and several coding algorithms have been developed based on the zerotree theory. SPIHT is the most significant algorithm because it demonstrates a very simple and efficient way to code a discrete wavelet transformed (DWT) image. However, a SPIHT codec needs to maintain three lists during coding and decoding to store the coordinates of significant coefficients and subset trees in the sorting order. The three lists become drawbacks for a hardware implementation because a large amount of memory is needed to maintain them. For color image coding the memory demand increases significantly. For example, for a 512x512 color image, one single entry of the list needs 18 bits of memory to store the row and column coordinates. Given that the total number of list entries for a single color element is approximately twice the total number of coefficients, the total memory required is 3.375 MBytes (18 bits x 512 pixels x 512 lines x 3 colors x 2 / 8 bits / 1K / 1K = 3.375 MB), and the required memory will increase if the bit rate increases. This high memory requirement makes SPIHT not a cost-effective compression algorithm for VLSI implementation. In this paper we present a zerotree coding algorithm for color image coding called Listless Zerotree Coding (LZC). The advantage of LZC over SPIHT is that no lists are needed during coding and decoding. Instead, a color coefficient only needs a 3-bit flag if the coefficient is in the first wavelet transform level and a 6-bit flag if it is in any other transform level. Consequently, the amount of memory required by an LZC codec is only a fraction of the amount needed by a SPIHT codec. In common with SPIHT, LZC is a progressive coding algorithm. The color components are coded in the sequence of Y tree, V tree, then U tree, and the coding can stop at any point to give precise bit-rate control. The LZC coding algorithm and SPIHT are quite alike. However, since the usage of lists has been abandoned by LZC, a different tree structure and coding procedure were developed for LZC. The tree symbols of the LZC zerotree are explained as follows:
_ C(i,j): wavelet coefficient at the coordinate (i,j);
_ O(i,j): set of child coefficients of C(i,j), i.e. coefficients at coordinates (2i,2j), (2i,2j+1), (2i+1,2j), (2i+1,2j+1), except at the finest transform level (i.e. level 1);
_ D(i,j): set of descendant coefficients of C(i,j), i.e. all offspring of C(i,j);
_ FC(i,j): significance map of coefficient C(i,j);
_ FD(i,j): significance map of set D(i,j);

_ R(i,j): set of root coefficients in the LL band.
LZC's zerotree relations adopt Shapiro's zerotree relation. The positions of significant pixels are encoded by symbol C and symbol D. The maps used to indicate the significance of C and D (i.e. storing the temporary zerotree structure) are the FC map and the FD map, respectively, as shown in Figure 1(b). The FC map is the same size as the image, whereas the size of the FD map is only a quarter of the image because coefficients in level 1 do not have any descendants. Therefore, for a 512x512 color image, the total memory required to store the zerotree structure is only 120 KBytes for all bit rates. Compared to the 3.375 MBytes memory requirement of SPIHT, the memory requirement of LZC is reduced significantly.
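The arithmetic behind the two memory figures can be checked directly; the sketch below simply re-derives the 3.375 MB list memory and the 120 KB flag memory quoted above (one FC bit per coefficient, plus one FD bit per coefficient of the quarter-size map, per color component).

```python
# Memory bookkeeping behind the figures quoted above, for a 512x512 color image.
rows = cols = 512
colors = 3

# SPIHT: each list entry stores an 18-bit coordinate, and the number of entries
# is roughly twice the number of coefficients per color component.
spiht_bits = 18 * rows * cols * colors * 2
print("SPIHT list memory : %.3f MB" % (spiht_bits / 8 / 1024 / 1024))   # 3.375 MB

# LZC: one significance bit per coefficient for the FC map (full image) plus
# one bit per coefficient of the FD map (a quarter of the image, since level-1
# coefficients have no descendants), for each of the three color components.
lzc_bits = (rows * cols + (rows // 2) * (cols // 2)) * colors
print("LZC flag memory   : %.0f KB" % (lzc_bits / 8 / 1024))            # 120 KB
```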

Stealth Fighter

Definition
Stealth means 'low observable'. The very basic idea of stealth technology in the military is to 'blend in' with the background. The quest for a stealthy plane actually began more than 50 years ago during World War II, when RADAR was first used as an early warning system against fleets of bombers. As a result of that quest, stealth technology evolved. Stealth technology is used in the construction of mobile military systems such as aircraft and ships to significantly reduce their detection by the enemy, primarily by enemy RADAR. The way most airplane identification works is by constantly bombarding airspace with a RADAR signal. When a plane flies into the path of the RADAR, a signal bounces back to a sensor that determines the size and location of the plane. Other methods focus on measuring acoustic (sound) disturbances, visual contact, and infrared (heat) signatures. Stealth technologies work by reducing or eliminating these telltale signals. Panels on the plane are angled so that radar is scattered and no signal returns. Planes are also covered in a layer of absorbent materials that reduce any other signature the plane might leave. Shape also has a lot to do with the 'invisibility' of stealth planes. Extreme aerodynamics keeps air turbulence to a minimum and cuts down on flying noise. Special low-noise engines are contained inside the body of the plane. Hot fumes are then capable of being mixed with cool air before leaving the plane. This fools heat sensors on the ground and keeps heat-seeking missiles from getting any sort of lock on their targets. Stealth properties give such an aircraft the unique ability to penetrate an enemy's most sophisticated defenses and threaten its most valued and heavily defended targets. At a cost of $2 billion each, stealth bombers are not yet available worldwide, but military forces around the world will soon begin to attempt to mimic some of the key features of stealth planes, making the skies much more dangerous. HISTORY OF STEALTH AIRCRAFT: With the increasing use of early warning detection devices such as radar by militaries around the world in the 1930s, the United States began to research and develop aircraft that would be undetectable to radar detection systems. The first documented stealth prototype was built out of two layers of plywood glued together with a core of glue and sawdust. This prototype's surface was coated with charcoal to absorb radar signals so they would not be reflected back to the source, which is how radar detection systems detect items in the air. Jack Northrop built a flying wing in the 1940s. His plane was the first wave of stealth aircraft that actually flew. The aircraft proved to be highly unstable and hard to fly due to design flaws. The United States initially ordered 170 of these aircraft from Northrop but cancelled the order after finding that the plane had stability flaws. Then in 1964 came the SR-71, the first stealth airplane, well known as the 'Blackbird'. It is a jet-black aircraft with slanted surfaces, built to fly high and fast so as to bypass radar through its altitude and speed. HOW DOES STEALTH TECHNOLOGY WORK? The idea is for the radar antenna to send out a burst of radio energy, which is then reflected back by any object it happens to encounter. The radar antenna measures the time it takes for the reflection to arrive, and with that information can tell how far away the object is.
The metal body of an airplane is very good at reflecting radar signals, and this makes it easy to find and track airplanes with radar equipment. The goal of stealth technology is to make an airplane invisible to radar. There are two different ways to create invisibility: the airplane can be shaped so that any radar signals it reflects are reflected away from the radar equipment, and the airplane can be covered in materials that absorb radar signals.
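The round-trip timing mentioned above is a one-line calculation; the sketch below shows it with an assumed echo delay of 200 microseconds.

```python
C = 299_792_458.0        # speed of light in m/s

def radar_range_m(echo_delay_s):
    """Round-trip timing: the pulse travels out and back, so the target
    range is half the distance covered during the measured delay."""
    return C * echo_delay_s / 2.0

# Example: an echo received 200 microseconds after transmission
# corresponds to a target roughly 30 km away.
print(radar_range_m(200e-6) / 1000.0, "km")
```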

Border Security using Wireless Integrated Network Sensors

Definition
Wireless Integrated Network Sensors (WINS) now provide a new monitoring and control capability for monitoring the borders of the country. Using this concept we can easily identify a stranger or terrorists entering the border. The border area is divided into a number of nodes. Each node is in contact with the others and with the main node. The noise produced by the footsteps of the stranger is collected using the sensor. This sensed signal is then converted into a power spectral density and compared with a reference value of our choice. The compared value is then processed using a microprocessor, which sends appropriate signals to the main node. Thus the stranger is identified at the main node. A series of interface, signal processing, and communication systems have been implemented in micropower CMOS circuits. A micropower spectrum analyzer has been developed to enable low-power operation of the entire WINS system. Thus WINS require only microwatts of power, and they are much cheaper than other security systems such as RADAR currently in use. WINS are also suited to short-distance communication of less than 1 km, and they introduce little delay, so they are reasonably fast. On a global scale, WINS will permit monitoring of land, water, and air resources for environmental monitoring. On a national scale, transportation systems and borders will be monitored for efficiency, safety, and security. INTRODUCTION: Wireless Integrated Network Sensors (WINS) combine sensing, signal processing, decision capability, and wireless networking capability in a compact, low-power system. Compact geometry and low cost allow WINS to be embedded and distributed at a small fraction of the cost of conventional wireline sensor and actuator systems. On a local, wide-area scale, battlefield situational awareness will provide personnel health monitoring and enhance security and efficiency. Also, on a metropolitan scale, new traffic, security, emergency, and disaster recovery services will be enabled by WINS. On a local, enterprise scale, WINS will create a manufacturing information service for cost and quality control. The opportunities for WINS depend on the development of a scalable, low-cost sensor network architecture. This requires that sensor information be conveyed to the user at low bit rate with low-power transceivers. Continuous sensor signal processing must be provided to enable constant monitoring of events in an environment. Distributed signal processing and decision making enable events to be identified at the remote sensor. Thus, information in the form of decisions is conveyed in short message packets. Future applications of distributed embedded processors and sensors will require massive numbers of devices. In this paper we have concentrated on the most important application, border security. WINS SYSTEM ARCHITECTURE: Conventional wireless networks are supported by complex protocols that are developed for voice and data transmission for handhelds and mobile terminals. These networks are also developed to support communication over long range (up to 1 km or more) with link bit rates over 100 kbps. In contrast to conventional wireless networks, the WINS network must support large numbers of sensors in a local area with short-range and low average bit rate communication (less than 1 kbps). The network design must consider the requirement to service dense sensor distributions with an emphasis on recovering environment information.
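The footstep-detection step described above (estimate a power spectral density, integrate the power in a band of interest, compare it against a reference threshold) can be sketched as follows. The sampling rate, frequency band and threshold are placeholder values that a real node would have to calibrate in the field.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                      # sensor sampling rate in Hz (assumed)
rng = np.random.default_rng(2)

def detect_intruder(signal, band=(20, 80), threshold=1e-3):
    """Estimate the PSD of the ground-vibration signal and compare the power
    in an assumed footstep band against a reference threshold."""
    f, psd = welch(signal, fs=fs, nperseg=256)
    mask = (f >= band[0]) & (f <= band[1])
    band_power = psd[mask].sum() * (f[1] - f[0])
    return band_power > threshold, band_power

# Quiet background versus background plus a gated, footstep-like thump train.
t = np.arange(fs) / fs
quiet = 0.01 * rng.standard_normal(fs)
steps = quiet + 0.2 * np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 2 * t) > 0.9)
print(detect_intruder(quiet)[0], detect_intruder(steps)[0])   # False True
```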
Multihop communication yields large power and scalability advantages for WINS networks, and it therefore provides an immediate advance in capability for the WINS narrow-bandwidth devices. WINS multihop communication networks permit large power reductions and the implementation of dense node distributions. The multihop communication scheme is shown in figure 2, while figure 1 represents the general structure of the wireless integrated network sensors (WINS) arrangement.

A Basic Touch-Sensor Screen System

Introduction
The touch-sensor technology is about using our fingers or some other pointer, to view and manipulate information on a screen. On a conventional system, with every mouse click, the operating system registers a mouse event. With a touch-screen system, every time your finger touches the screen, a touch event is registered.

Working
A basic touch-screen system is made up of three components:
1. A touch sensor
2. A controller
3. A software driver
The touch sensor is a clear panel which, when touched, registers a voltage change that is sent to the controller. The controller processes this signal and passes the touch event data to the PC through a bus interface. The software driver takes this data and translates the touch events into mouse events. A touch-screen sensor uses any of the following five mechanisms: resistance, capacitance, acoustics, optics or mechanical force.
1. Resistance-based sensors. A resistive sensor uses a thin, flexible membrane separated from a glass or plastic substrate by insulating spacers. Both layers are coated with ITO (indium tin oxide). These metallic coatings meet when a finger or stylus presses against the screen, thus closing an electric circuit.
2. Capacitance-based sensors. Here voltage is applied to the corners of the screen, with electrodes spread uniformly across the field. When a finger touches the screen, it draws current from each corner proportionately. The frequency changes are measured to determine the X and Y coordinates of the touch event.
3. Acoustic sensors. These sensors detect a touch event when a finger touching the screen absorbs sound energy. Bursts of high-frequency (5 MHz) acoustic energy are launched from the edges of the screen. Arrays of reflectors at the edges divert the acoustic energy across the screen and redirect it to the sensors. Because the speed of sound in glass is constant, the energy arrival time identifies its path. A touch causes a dip in the received energy waveform for both axes. The timing of the dips indicates the X and Y touch point coordinates.
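A minimal sketch of the driver step described above is given below: raw touch-controller readings are translated into screen coordinates and reported as mouse-style events. The ADC range, calibration corners and screen size are assumed values for a generic 4-wire resistive panel, not any particular controller's interface.

```python
# Assumed 12-bit touch controller and assumed calibration corner readings.
ADC_MAX = 4095
CAL = {"x_min": 200, "x_max": 3900,
       "y_min": 250, "y_max": 3850}
SCREEN_W, SCREEN_H = 800, 480

def to_screen(raw_x, raw_y):
    """Linear calibration from raw ADC counts to pixel coordinates."""
    x = (raw_x - CAL["x_min"]) * (SCREEN_W - 1) // (CAL["x_max"] - CAL["x_min"])
    y = (raw_y - CAL["y_min"]) * (SCREEN_H - 1) // (CAL["y_max"] - CAL["y_min"])
    return max(0, min(SCREEN_W - 1, x)), max(0, min(SCREEN_H - 1, y))

def on_touch_event(raw_x, raw_y, pressed):
    """Driver callback: translate the touch event into a mouse-style event."""
    x, y = to_screen(raw_x, raw_y)
    print(("mouse down" if pressed else "mouse up"), "at", (x, y))

on_touch_event(2048, 2048, pressed=True)    # a touch near the middle of the panel
```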

GSM Security And Encryption


The motivations for security in cellular telecommunications systems are to secure conversations and signaling data from interception as well as to prevent cellular telephone fraud. With the older analog-based cellular telephone systems such as the Advanced Mobile Phone System (AMPS) and the Total Access Communication System (TACS), it is a relatively simple matter for a radio hobbyist to intercept cellular telephone conversations with a police scanner. A well-publicized case involved a potentially embarrassing cellular telephone conversation of a member of the British royal family being recorded and released to the media. Another security consideration with cellular telecommunications systems involves identification credentials such as the Electronic Serial Number (ESN), which are transmitted "in the clear" in analog systems. With more complicated equipment, it is possible to receive the ESN and use it to commit cellular telephone fraud by "cloning" another cellular phone and placing calls with it. Estimates for cellular fraud in the U.S. in 1993 are as high as $500 million. The procedure wherein the Mobile Station (MS) registers its location with the system is also vulnerable to interception and permits the subscriber's location to be monitored even when a call is not in progress, as evidenced by the recent highly publicized police pursuit of a famous U.S. athlete. The security and authentication mechanisms incorporated in GSM make it the most secure mobile communication standard currently available, particularly in comparison to the analog systems described above. Part of the enhanced security of GSM is due to the fact that it is a digital system utilizing a speech coding algorithm, Gaussian Minimum Shift Keying (GMSK) digital modulation, slow frequency hopping, and a Time Division Multiple Access (TDMA) time slot architecture. To intercept and reconstruct this signal would require more highly specialized and expensive equipment than a police scanner to perform the reception, synchronization, and decoding of the signal. In addition, the authentication and encryption capabilities discussed in this paper ensure the security of GSM cellular telephone conversations and subscriber identification credentials against even the determined eavesdropper. GSM (Group Special Mobile, or Global System for Mobile Communications) is the pan-European standard for digital cellular communications. The Group Special Mobile was established in 1982 within the European Conference of Post and Telecommunication Administrations (CEPT). A further important step in the history of GSM as a standard for digital mobile cellular communications was the signing of the GSM Memorandum of Understanding (MoU) in 1987, in which 18 nations committed themselves to implement cellular networks based on the GSM specifications. In 1991 the first GSM-based networks commenced operations. GSM provides enhanced features over older analog-based systems, which are summarized below: Total Mobility: The subscriber has the advantage of a pan-European system allowing him to communicate from everywhere and to be called in any area served by a GSM cellular network using the same assigned telephone number, even outside his home location. The calling party does not need to be informed about the called person's location because the GSM networks are responsible for the location tasks. With his personal chipcard he can use a telephone in a rental car, for example, even outside his home location.
This mobility feature is preferred by many business people who constantly need to be in touch with their headquarters. High Capacity and Optimal Spectrum Allocation: The former analog-based cellular networks had to combat capacity problems, particularly in metropolitan areas. Through a more efficient utilization of the assigned frequency bandwidth and smaller cell sizes, the GSM system is capable of serving a greater number of subscribers. The optimal use of the available spectrum is achieved through the application of Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), efficient half-rate and full-rate speech coding, and the Gaussian Minimum Shift Keying (GMSK) modulation scheme. Security: The security methods standardized for the GSM system make it the most secure cellular telecommunications standard currently available. Although the confidentiality of a call and the anonymity of the GSM subscriber are only guaranteed on the radio channel, this is a major step in achieving end-to-end security. The subscriber's anonymity is ensured through the use of temporary identification numbers. The confidentiality of the communication itself on the radio link is ensured by the application of encryption algorithms and frequency hopping, which can only be realized using digital systems and signaling. Services: The list of services available to GSM subscribers typically includes the following: voice communication, facsimile, voice mail, short message transmission, data transmission and supplemental services such as call forwarding.
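The challenge-response flow that underlies GSM authentication can be sketched at a high level: the network sends a random challenge, and the SIM derives both a signed response and a cipher key from its secret key. The real A3/A8 algorithms are operator-specific and are not reproduced here; an HMAC stands in for them purely to show the message flow under those assumptions.

```python
import hmac, hashlib, os

def a3_a8_stand_in(ki: bytes, rand: bytes):
    """Placeholder for the operator-specific A3/A8 algorithms: derive a 32-bit
    signed response (SRES) and a 64-bit cipher key (Kc) from Ki and RAND."""
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]

ki = os.urandom(16)            # secret key stored in the SIM and at the operator
rand = os.urandom(16)          # random challenge issued by the network

# SIM side and network side run the same computation on the shared Ki.
sres_sim, kc_sim = a3_a8_stand_in(ki, rand)
sres_net, kc_net = a3_a8_stand_in(ki, rand)

# The network compares the signed responses; only Kc (never Ki itself) is then
# used to key the over-the-air stream cipher for the call.
assert sres_sim == sres_net and kc_sim == kc_net
```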

Design of 2-D Filters using a Parallel Processor Architecture

Two-dimensional filters are usually part of the implementation of digital image processing applications. These filters process recursive sets of instructions and require high computational speed. Optimized implementations of these filters depend on the use of Application Specific Integrated Circuits (ASICs). A system with multiple parallel processing units is a feasible design option able to achieve the required computational performance. In this paper, a loop transformation algorithm, which allows the efficient utilization of a parallel multiprocessor system, is presented. Uniform nested loops representing the image filters and the available processors are modeled as multi-dimensional data flow graphs. A new loop structure is generated so that an arbitrary number of processors available in the system can run in parallel.
INTRODUCTION
Image enhancement and edge detection are well known digital image signal processing applications that may require two-dimensional (2-D) filter-like computational solutions. These applications usually depend on computation-intensive code sections, consisting of the repetition of sequences of operations. They are also characterized by the multi-dimensionality of the data involved. An effective technique for improving the computing performance of such applications has been the design and use of Application Specific Integrated Circuits (ASICs). This paper presents a new technique applicable to the design of a 2-D filter system using multiple parallel processors. A multi-dimensional retiming algorithm embedded in this new technique provides the fully parallel utilization of the available processors, thus reducing the overall execution time of the filter function. Parallel architectures are an important tool in ASIC design. However, these architectures require a careful partitioning of the problem in order to improve the utilization of the parallel processors [2, 17, 24]. During the circuit design phase, nested loop structures can be coded using hardware description languages, such as VHDL constructs, in order to reduce the design time. However, in VHDL, the loop control indices will represent the number of times a section of the circuit will be replicated in the final synthesis, under the assumption that there are no physical or cost constraints in the circuit implementation. In this paper, a multi-dimensional retiming technique is used to transform the loop in such a way as to produce the parallel solution for the problem for a given number of processing units. Such a solution can then be implemented on a standard multiprocessor architecture. Retiming was originally proposed by Leiserson and Saxe, focusing on improving the cycle time of one-dimensional problems [13]. Most work done in this area is subject to limitations imposed by the number of delays (memory elements) existing in a cycle of a data flow graph representing the problem [3, 6, 10, 11, 12, 16, 22, 25]. Other methods focus on multi-processor scheduling and are also applicable to one-dimensional problems [7, 8, 14, 16, 18]. This study focuses on the parallelism inherent to multi-dimensional applications, which is ignored by the one-dimensional methods. Retiming and other loop transformations have since been applied in areas such as scheduling and parallel processing, with the main goal of exploiting fine-grain parallelism in the loop body [4, 15]. Due to the different focus in obtaining parallelism, those techniques are not aimed at improving the execution of parallel iterations in multiprocessor systems.
Research by Passos and Sha extended the retiming concept to multi-dimensional (MD) applications [19]. The multi-dimensional retiming concept is used in this paper to model the partitioning of the loop among the available processors.

Multi-dimensional retiming brings some advantages to the process, since it is readily applicable to the multi-dimensional fields considered, eliminating the need for a loop transformation that converts the original problem to one dimension. Another significant advantage of MD retiming is that there are no restrictions on its applicability, not being constrained by the characteristics of the one-dimensional methods.
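To make the target of these loop transformations concrete, the sketch below shows the uniform nested loop of a direct-form 2-D FIR filter. The outer row/column iterations are the kind of loop body that an MD retiming or loop transformation would redistribute across parallel processors; the code is only an illustration of the loop structure, not the algorithm presented in the paper.

    import numpy as np

    def fir_2d(image, kernel):
        # Direct-form 2-D FIR filter written as a uniform nested loop
        H, W = image.shape
        K, L = kernel.shape
        out = np.zeros((H - K + 1, W - L + 1))
        for i in range(out.shape[0]):          # candidate loop for distribution
            for j in range(out.shape[1]):      # across parallel processing units
                acc = 0.0
                for k in range(K):
                    for m in range(L):
                        acc += kernel[k, m] * image[i + k, j + m]
                out[i, j] = acc
        return out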

Software-Defined Radio (SDR)

Introduction
The Software Defined Radio (SDR) Forum defines SDR technology as "radios that provide software control of a variety of modulation techniques, wide-band or narrow-band operation, communications security functions (such as hopping), and waveform requirements of current & evolving standards over a broad frequency range." In a nutshell, Software-Defined Radio (SDR) refers to the technology wherein software modules running on a generic hardware platform consisting of DSPs and general-purpose microprocessors are used to implement radio functions such as generation of the transmitted signal (modulation) at the transmitter and tuning/detection of the received radio signal (demodulation) at the receiver.

Software-Defined Radio (SDR) is a rapidly evolving technology that is receiving enormous recognition and generating widespread interest in the telecommunication industry. Over the last few years, analog radio systems have been replaced by digital radio systems for various radio applications in military, civilian and commercial spaces. In addition to this, programmable hardware modules are increasingly being used in digital radio systems at different functional levels. SDR technology aims to take advantage of these programmable hardware modules to build open-architecture based radio system software. SDR technology facilitates the implementation of some of the functional modules in a radio system, such as modulation/demodulation, signal generation, coding and link-layer protocols, in software. SDR technology can be used to implement military, commercial and civilian radio applications. A wide range of radio applications like Bluetooth, WLAN, GPS, Radar, WCDMA, GPRS, etc. can be implemented using SDR technology. This whitepaper provides an overview of generic SDR features and its architecture, with a special focus on the benefits it offers in the commercial wireless communication domain.

This section gives a brief overview of a basic conventional digital radio system and then explains how SDR technology can be used to implement radio functions in software. It then explains the software architecture of SDR. The digital radio system consists of three main functional blocks: the RF section, the IF section and the baseband section. The RF section consists essentially of analog hardware modules, while the IF and baseband sections contain digital hardware modules.

SDR has generated tremendous interest in the wireless communication industry for the wide-ranging economic and deployment benefits it offers. Following are some of the problems faced by the wireless communication industry due to implementation of wireless networking infrastructure equipment and terminals completely in hardware: Commercial wireless network standards are continuously evolving from 2G to 2.5G/3G and then further on to 4G. Each generation of networks differs significantly in link-layer protocol standards, causing problems to subscribers, wireless network operators and equipment vendors. Subscribers are forced to buy new handsets whenever a new generation of network standards is deployed. Wireless network operators face problems during migration of the network from one generation to the next due to the presence of a large number of subscribers using legacy handsets that may be incompatible with the newer generation network.

The network operators also need to incur high equipment costs when migrating from one generation to the next. Equipment vendors face problems in rolling out newer generation equipment due to short time-to-market requirements.
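As a concrete illustration of what "implementing demodulation in software" means, the sketch below demodulates AM and FM from complex baseband (I/Q) samples, which is what a digitized IF/RF front end typically hands to the software in an SDR. It is a minimal teaching example under those assumptions, not a production SDR waveform.

    import numpy as np

    def am_demod(iq):
        # Envelope detector: the magnitude of the complex baseband signal
        env = np.abs(iq)
        return env - env.mean()          # remove the DC (carrier) component

    def fm_demod(iq, fs):
        # Quadrature discriminator: the phase difference between successive
        # samples is proportional to the instantaneous frequency
        dphi = np.angle(iq[1:] * np.conj(iq[:-1]))
        return dphi * fs / (2 * np.pi)   # instantaneous frequency in Hz

Swapping fm_demod for am_demod (or for a GMSK or QPSK detector) is a pure software change, which is exactly the flexibility the SDR architecture is after.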

Smart Dust

Definition
The current ultramodern technologies are focusing on automation and miniaturization. Decreasing computing device size, increased connectivity and enhanced interaction with the physical world have characterized computing's history. Recently, the popularity of small computing devices such as handheld computers and cell phones, the rapidly flourishing Internet, and the diminishing size and cost of sensors, and especially transistors, have accelerated these trends. The emergence of small computing elements, with sporadic connectivity and increased interaction with the environment, provides enriched opportunities to reshape interactions between people and computers and to spur ubiquitous computing research.

Smart dust consists of tiny electronic devices designed to capture mountains of information about their surroundings while literally floating on air. Nowadays, sensors, computers and communicators are shrinking down to ridiculously small sizes. If all of these are packed into a single tiny device, it can open up new dimensions in the field of communications. The idea behind 'smart dust' is to pack sophisticated sensors, tiny computers and wireless communicators into a cubic-millimeter mote to form the basis of integrated, massively distributed sensor networks. The motes will be light enough to remain suspended in air for hours. As the motes drift on the wind, they can monitor the environment for light, sound, temperature, chemical composition and a wide range of other information, and beam that data back to a base station miles away.

MAJOR COMPONENTS AND REQUIREMENTS
Smart Dust requires both evolutionary and revolutionary advances in miniaturization, integration, and energy management. Designers can use microelectromechanical systems to build small sensors, optical communication components, and power supplies, whereas microelectronics provides increasing functionality in smaller areas, with lower energy consumption. The power system consists of a thick-film battery, a solar cell with a charge-integrating capacitor for periods of darkness, or both. Depending on its objective, the design integrates various sensors, including light, temperature, vibration, magnetic field, acoustic, and wind shear, onto the mote. An integrated circuit provides sensor-signal processing, communication, control, data storage, and energy management. A photodiode allows optical data reception. There are presently two transmission schemes: passive transmission using a corner-cube retroreflector, and active transmission using a laser diode and steerable mirrors. The mote's minuscule size makes energy management a key component. The integrated circuit will contain sensor signal conditioning circuits, a temperature sensor, an A/D converter, a microprocessor, SRAM, communications circuits, and power control circuits. The IC, together with the sensors, will operate from a power source integrated with the platform. The MEMS industry has major markets in automotive pressure sensors and accelerometers, medical sensors, and process control sensors. Recent advances in technology have put many of these sensor processes on exponentially decreasing size, power, and cost curves. In addition, variations of MEMS sensor technology are used to build micromotors.

WORKING OF SMART DUST
The smart dust mote is run by a microcontroller that not only determines the tasks performed by the mote, but also controls the power to the various components of the system to conserve energy.
Periodically the microcontroller gets a reading from one of the sensors, which measure one of a number of physical or chemical stimuli such as temperature, ambient light, vibration, acceleration, or air pressure, processes the data, and stores it in memory. It also turns on the optical receiver to see if anyone is trying to communicate with it. This communication may include new programs or messages from other motes. In response to a message or upon its own initiative, the microcontroller will use the corner-cube retroreflector or laser to transmit sensor data or a message to a base station or another mote. The primary constraint in the design of the Smart Dust motes is volume, which in turn puts a severe constraint on energy, since we do not have much room for batteries or large solar cells. Thus, the motes must operate efficiently and conserve energy whenever possible. Most of the time, the majority of the mote is powered off, with only a clock and a few timers running. When a timer expires, it powers up a part of the mote to carry out a job, then powers off. A few of the timers control the sensors that measure one of a number of physical or chemical stimuli such as temperature, ambient light, vibration, acceleration, or air pressure. When one of these timers expires, it powers up the corresponding sensor, takes a sample, and converts it to a digital word. If the data is interesting, it may either be stored directly in the SRAM or the microcontroller is powered up to perform more complex operations with it. When this task is complete, everything is again powered down and the timer begins counting again.
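The timer-driven duty cycle described above can be captured in a few lines of simulation code. The sketch below is a toy model, with made-up wake periods and an arbitrary "interesting data" threshold, intended only to show the power-gating pattern: everything sleeps until a timer fires, one sensor wakes, samples, and goes back to sleep.

    import random

    WAKE_PERIOD_S = {"temperature": 60, "light": 10, "vibration": 5}   # illustrative values

    def run_mote(duration_s=300):
        timers = dict(WAKE_PERIOD_S)       # seconds until each sensor next wakes
        sram_log = []                      # samples judged worth keeping
        for t in range(duration_s):        # 1-second clock tick; everything else is "off"
            for name in timers:
                timers[name] -= 1
                if timers[name] <= 0:
                    sample = random.random()            # power up sensor, take one reading
                    if sample > 0.8:                    # keep only "interesting" data
                        sram_log.append((t, name, sample))
                    timers[name] = WAKE_PERIOD_S[name]  # power down, restart timer
        return sram_log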

Adaptive Blind Noise Suppression in some Speech Processing Applications


In many applications of speech processing the noise reveals some specific features. Although the noise could be quite broadband, there are a limited number of dominant frequencies which carry most of its energy. This fact implies the usage of narrow-band notch filters that must be adaptive in order to track the changes in noise characteristics. In the present contribution, a method and a system for noise suppression are developed. The method uses adaptive notch filters based on a second-order Gray-Markel lattice structure. The main advantages of the proposed system are that it has very low computational complexity, is stable in the process of adaptation, and has a short time of adaptation. Under comparable SNR improvement, the proposed method adjusts only 3 coefficients, against 250-450 for conventional adaptive noise cancellation systems. A framework for a speech recognition system that uses the proposed method is suggested.
INTRODUCTION
The existence of noise is inevitable in real applications of speech processing. It is well known that additive noise negatively affects the performance of speech codecs designed to work with noise-free speech, especially codecs based on linear prediction coefficients (LPC). Another application strongly influenced by noise is hands-free phones, where the background noise reduces the signal-to-noise ratio (S/N) and the speech intelligibility. Last but not least is the problem of speech recognition in a noisy environment. A system that works well in noise-free conditions usually shows considerable degradation in performance when background noise is present. It is clear that a strong demand exists for reliable noise cancellation methods that efficiently separate the noise from the speech signal. The endeavors in designing such systems can be traced back some 20 years. The core of the problem is that in most situations the characteristics of the noise are not known a priori and, moreover, they may change in time. This implies the use of adaptive systems capable of identifying and tracking the noise characteristics. This is why the application of adaptive filtering for noise cancellation is widely used. The classical systems for noise suppression rely on the usage of adaptive linear filtering and the application of digital filters with finite impulse response (FIR). The strong points of this approach are the simple analysis of the linear systems in the process of adaptation and the guaranteed stability of FIR structures. It is worth mentioning the existence of relatively simple and well-investigated adaptive algorithms for such systems, like the least mean squares (LMS) and recursive least squares (RLS) algorithms. The investigations in the area of noise cancellation reveal that in some applications nonlinear filters outperform their linear counterparts. That fact is a good motivation for a shift towards the usage of nonlinear systems in noise reduction. Another approach is based on a microphone array instead of the two microphones, reference and primary, that are used in the classical noise cancellation scheme. A brief analysis of all mentioned approaches leads to the conclusion that they try to model the noise path either by a linear or by a nonlinear system. Each of these methods has its strengths and weaknesses.
For example, for the classical noise cancellation with two microphones it is the need for a reference signal; for the neural filters, it is the fact that as a rule they are slower than classic adaptive filters and are efficient only for noise suppression on relatively short data sequences, which is not the case for speech processing; and finally, for microphone arrays, it is the need for precise spatial alignment. In the present contribution the approach is slightly different.

The basic idea is that in many applications, for instance hands-free cellular phones in a car environment, howling control in hands-free phones, or noise reduction in an office environment, the noise reveals specific features that can be exploited. In most instances, although the noise might be quite wide-band, there are always, as a rule, no more than two or three regions of its frequency spectrum that carry most of the noise energy, and the removal of these dominant frequencies results in a considerable improvement of the S/N ratio. This suggests the idea of using adaptive notch filters capable of tracking the noise characteristics. In this paper a modification of all-pass structures is used. They are recursive and, at the same time, stable during the adaptive process. The approach is called "blind" because there is no need for a reference signal.
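The following sketch illustrates the underlying idea of blind notch filtering in a deliberately simplified form: for each block of the signal, find the dominant spectral peak (assumed to be noise) and place a second-order IIR notch on it. It is not the adaptive Gray-Markel lattice of the paper, and the block length and pole radius are arbitrary choices, but it shows why suppressing two or three dominant frequencies can buy a large S/N improvement with very few coefficients.

    import numpy as np
    from scipy.signal import lfilter

    def notch_coeffs(f0, fs, r=0.98):
        # Second-order IIR notch: zeros on the unit circle at f0, poles at radius r
        w0 = 2 * np.pi * f0 / fs
        b = np.array([1.0, -2 * np.cos(w0), 1.0])
        a = np.array([1.0, -2 * r * np.cos(w0), r ** 2])
        return b, a

    def blockwise_blind_notch(x, fs, block=4096):
        y = np.empty(len(x))
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
            spec[0] = 0.0                            # ignore DC
            f0 = np.argmax(spec) * fs / len(seg)     # dominant (noise) frequency
            b, a = notch_coeffs(f0, fs)
            y[start:start + block] = lfilter(b, a, seg)
        return y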

An Efficient Algorithm for iris pattern Recognition using 2D Gabor Wavelet Transformation in Matlab
Wavelet analysis has received significant attention because its multi-resolution decomposition allows efficient image analysis. It is widely used for varied applications such as noise reduction, data compression, etc. In this paper we have introduced and applied the concept of the two-dimensional Gabor wavelet transform to a biometric iris recognition system. The application of this transform in encoding the iris image for pattern recognition proves to achieve increased accuracy and processing speed compared to other methods. With a strong scientific approach and mathematical background we have developed an algorithm to facilitate the implementation of this method on the MATLAB platform.

IMAGES - An introduction:
A dictionary defines an image as a "reproduction or representation of the form of a person or thing". The inherent association of a human with the visual senses predisposes one to conceive an image as a stimulus on the retina of the eye, in which case the mechanisms of optics govern the image formation, resulting in continuous-range, multi-tone images. A digital image can be defined as a numerical representation of an object or, more strictly, a sampled, quantized function of two dimensions which has been generated by optical means, sampled in an equally spaced rectangular grid pattern, and quantized in equal intervals of gray level. The world is crying out for simpler access controls to personal authentication systems, and it looks like biometrics may be the answer. Instead of carrying a bunch of keys and all those access cards or passwords you carry around with you, your body can be used to uniquely identify you. Furthermore, when biometric measures are applied in combination with other controls, such as access cards or passwords, the reliability of authentication controls takes a giant step forward.

BIOMETRICS - AN OVERVIEW:
Biometrics is best defined as measurable physiological and/or behavioral characteristics that can be utilized to verify the identity of an individual. They include the following:
o Iris scanning
o Facial recognition
o Fingerprint verification
o Hand geometry
o Retinal scanning
o Signature verification
o Voice verification

ADVANTAGES OF THE IRIS IDENTIFICATION:
o Highly protected internal organ of the eye.
o Iris patterns possess a high degree of randomness.
o Variability: 244 degrees of freedom.
o Entropy: 3.2 bits per square millimetre.
o Uniqueness: set by combinatorial complexity.
o Patterns apparently stable throughout life.

IRIS - An introduction:
The iris is a colored ring that surrounds the pupil and contains easily visible yet complex and distinct combinations of corona, pits, filaments, crypts, striations, radial furrows and more. The iris is called the "living password" because of its unique, random features. It's always with you and can't be stolen or faked. As such it makes an excellent biometric identifier.
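As an illustration of the encoding step, the sketch below builds a complex 2-D Gabor kernel and quantizes the phase of its response into a binary iris code, in the spirit of Daugman-style encoding. The paper's implementation is in MATLAB; this is an equivalent Python/NumPy sketch, the kernel size, wavelength and orientation are arbitrary example values, and the iris is assumed to have already been segmented and unwrapped into a rectangular image.

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
        # Complex 2-D Gabor: Gaussian envelope times a complex sinusoidal carrier
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        carrier = np.exp(2j * np.pi * xr / wavelength)
        return envelope * carrier

    def iris_code(unwrapped_iris, kernel):
        # Two bits per location: the signs of the real and imaginary responses
        resp = fftconvolve(unwrapped_iris, kernel, mode="same")
        return np.stack([resp.real > 0, resp.imag > 0]).astype(np.uint8)

Two codes are then compared by their Hamming distance, the fraction of bits that disagree.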

Significance of real-time transport Protocol in VOIP (RTP)

Definition
The advent of Voice over IP (VoIP) has given a new dimension to the Internet and opened a host of new possibilities and opportunities for both corporate and public network planners. More and more companies are seeing the value of transporting voice over IP networks to reduce telephone and facsimile costs. Adding voice to packet networks requires an understanding of how to deal with system-level challenges such as interoperability, packet loss, delay, density, scalability, and reliability. This is because of the real-time constraints that come into the picture. But the basic protocols used at the network and transport layers have remained unchanged. This calls for the definition of new protocols, which can be used in addition to the existing protocols. Such a protocol should provide the application using it with enough information to conform to the real-time constraints. This paper discusses the significance of the Real-time Transport Protocol (RTP) in VoIP applications. A brief introduction to VoIP and then a description of the RTP header are given in sections 1-5. The actual realisation of the RTP header, packetisation and processing of an RTP packet is discussed in section six. Section 7, called 'Realising RTP functionalities', discusses a few problems that occur in a real-time environment and how RTP provides information to counter them. Finally, sample codes that we wrote for realising RTP packetisation, processing and RTP functionalities, written in 'C' for a Linux platform, are presented. Please note that RTP is incomplete without the companion RTP Control Protocol (RTCP), but a detailed description of RTCP is beyond the scope of this paper.
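The paper's own packetisation samples are in C; as a language-neutral illustration of the same idea, the sketch below packs the 12-byte fixed RTP header defined in RFC 3550 (version, padding, extension, CSRC count, marker, payload type, sequence number, timestamp, SSRC) and prepends it to a payload. The payload-type value 0 (PCMU) and the 20 ms / 160-sample timestamp step are conventional example choices, not something taken from this paper.

    import struct

    def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0,
                   version=2, padding=0, extension=0, csrc_count=0):
        # First byte: V(2) P(1) X(1) CC(4); second byte: M(1) PT(7); then seq, ts, SSRC
        byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
        byte1 = (marker << 7) | payload_type
        return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                           timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

    # Example: one G.711 (PCMU) packet carrying 160 samples (20 ms of 8 kHz audio)
    packet = rtp_header(seq=1, timestamp=160, ssrc=0x12345678) + b"\x00" * 160

The sequence number lets the receiver detect loss and reordering, while the timestamp drives the playout buffer; these are exactly the 'RTP functionalities' discussed in the later sections.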

Storage Area Networks

Definition
A storage area network (SAN) is defined as a set of interconnected devices (for example, disks and tapes) and servers that are connected to a common communication and data transfer infrastructure such as Fibre Channel. The common communication and data transfer mechanism for a given deployment is commonly known as the storage fabric. The purpose of the SAN is to allow multiple servers access to a pool of storage in which any server can potentially access any storage unit. Clearly, in this environment, management plays a large role in providing security guarantees (who is authorized to access which devices) and sequencing or serialization guarantees (who can access which devices at what point in time). SANs evolved to address the increasingly difficult job of managing storage at a time when storage usage is growing explosively. With devices locally attached to a given server or in the server enclosure itself, performing day-to-day management tasks becomes extremely complex; backing up the data in the datacenter requires complex procedures, as the data is distributed amongst the nodes and is accessible only through the server it is attached to. As a given server outgrows its current storage pool, storage specific to that server has to be acquired and attached, even if there are other servers with plenty of storage space available. Other benefits can also be gained: multiple servers can share data (sequentially or, in some cases, in parallel), and backups can be performed by transferring data directly from device to device without first transferring it to a backup server. So why use yet another set of interconnect technologies? A storage area network is a network like any other (for example, a LAN infrastructure). A SAN is used to connect many different devices and hosts to provide access to any device from anywhere. Existing storage technologies such as SCSI are tuned to the specific requirements of connecting mass storage devices to host computers. In particular, they are low-latency, high-bandwidth connections with extremely high data integrity semantics. Network technology, on the other hand, is tuned more to providing application-to-application connectivity in increasingly complex and large-scale environments. Typical network infrastructures have high connectivity, can route data across many independent network segments, potentially over very large distances (consider the internet), and have many network management and troubleshooting tools.

Quantum Information Technology

Definition
The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This document aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the paper begins with an introduction to classical information theory. The principles of quantum mechanics are then outlined. The EPR-Bell correlation, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource. Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, superconducting cavities, nuclear magnetic resonance based techniques, and quantum dots. "Where a calculator on the ENIAC is equipped with 18000 vacuum tubes and weighs 30 tons, computers in the future may have only 1000 tubes and weigh only 1 1/2 tons" (Popular Mechanics, March 1949). Now, if this seems like a joke, wait a second. "Tomorrow's computer might well resemble a jug of water." This for sure is no joke. Quantum computing is here. What was science fiction two decades back is a reality today and is the future of computing. The history of computer technology has involved a sequence of changes from one type of physical realization to another: from gears to relays to valves to transistors to integrated circuits and so on. Quantum computing is the next logical advancement. Today's advanced lithographic techniques can squeeze fraction-of-a-micron-wide logic gates and wires onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new, quantum technology must replace or supplement what we have now. Quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!
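To make the notion of entanglement slightly more concrete, here is a tiny NumPy illustration (a textbook example, not anything specific to this paper) of the Bell state used in teleportation and key distribution: measuring the two qubits in the computational basis only ever yields the perfectly correlated outcomes 00 and 11.

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # Bell state |Phi+> = (|00> + |11>) / sqrt(2)
    phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

    # Measurement probabilities in the computational basis |00>, |01>, |10>, |11>
    probs = np.abs(phi_plus) ** 2
    print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
    # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}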

Future Wallet

Definition
"Money in the 21st century will surely prove to be as different from the money of the current century as our money is from that of the previous century. Just as fiat money replaced specie-backed paper currencies, electronically initiated debits and credits will become the dominant payment modes, creating the potential for private money to compete with government-issued currencies." Just as every thing is getting under the shadow of "e" today we have paper currency being replaced by electronic money or e-cash. Hardly a day goes by without some mention in the financial press of new developments in "electronic money". In the emerging field of electronic commerce, novel buzzwords like smartcards, online banking, digital cash, and electronic checks are being used to discuss money. But how are these brand-new forms of payment secure? And most importantly, which of these emerging secure electronic money technologies will survive into the next century? These are some of the tough questions to answer but here's a solution, which provides a form of security to these modes of currency exchange using the "Biometrics Technology". The Money Pad introduced here uses the biometrics technology for Finger Print recognition. Money Pad is a form of credit card or smartcard, which we name so. Every time the user wants to access the Money Pad he has to make an impression of his fingers which will be scanned and matched with the one in the hard disk of data base server. If the finger print matches with the user's he will be allowed to access and use the Pad other wise the Money Pad is not accessible. Thus providing a form of security to the ever-lasting transaction currency of the future "e-cash". Money Pad - A form of credit card or smart card similar to floppy disk, which is introduced to provide, secure e-cash transactions. To Download Full Report Click Here

Buffer Overflow Attack

Introduction
By combining the C programming language's liberal approach to memory handling with specific Linux filesystem permissions, this operating system can be manipulated to grant unrestricted privilege to unprivileged accounts or users. The variety of exploit that relies upon these two factors is commonly known as a buffer overflow, or stack smashing, vulnerability. Stack smashing plays an important role in high-profile computer security incidents. In order to secure modern Linux systems, it is necessary to understand why stack smashing occurs and what one can do to prevent it.
Pre-requisites
To understand what goes on, some C and assembly knowledge is required. Virtual memory and some operating systems essentials, like, for example, how a process is laid out in memory, will be helpful. You MUST know what a setuid binary is, and of course you need to be able to -at least- use Linux systems. If you have experience with gdb/cc, that is something really good. This document is Linux/ix86 specific. The details differ depending on the operating system or architecture you are using. Here, I have tried out some small buffer overflows that can be easily grasped. The pre-requisites described above are explained in some detail below.
Linux File System Permissions
In order to better understand stack smashing vulnerabilities, it is first necessary to understand certain features of filesystem permissions in the Linux operating system. Privileges in the Linux operating system are invested solely in the user root, sometimes called the superuser; root's infallibility is expected under every condition, including program execution. The superuser is the main security weakness in the Linux operating system. Because the superuser can do anything, after a person gains superuser privileges, for example by learning the root password and logging in as root, that person can do virtually anything to the system. This explains why most attackers who break into Linux systems try to become superusers. Each program (process) started by the root user inherits the root user's all-inclusive privilege. In most cases the inherited privilege is subsequently passed to other programs spawned by root's running processes. Set UID (SUID) permissions in the Linux operating system grant a user privilege to run programs or shell scripts as another user. In the Linux operating system, the process in memory that handles the program execution is usually owned by the user who executed the program. Using a unique permission bit to indicate SUID, the filesystem indicates to the operating system that the program will run under the file owner's ID rather than the ID of the user who executed the program. Often SUID programs are owned by root; while these programs may be executable by an underprivileged user on the system, they run in memory with unrestricted access to the system. As one can see, SUID root permissions are used to grant an unprivileged user temporary, and necessary, use of privileged resources. Many Linux programs need to run with superuser privileges. These programs are run as SUID root programs, when the system boots, or as network servers. A single bug in any of these complicated programs can compromise the safety of your entire system. This characteristic is probably a design flaw, but it is basic to the design of Linux, and it is not likely to change. Exploitation of this "feature turned design flaw" is critical in constructing buffer overflow exploits.
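Since SUID binaries are the payoff target in the attack described above, the following small Python sketch shows how to enumerate them by checking the set-UID mode bit. The starting directory /usr/bin is only an example; the script is an auditing aid, not an exploit.

    import os
    import stat

    def find_suid_binaries(root="/usr/bin"):
        # Files with the set-UID bit run with the *file owner's* privileges,
        # regardless of who executes them.
        hits = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue
                if st.st_mode & stat.S_ISUID:
                    hits.append((path, st.st_uid))
        return hits

    if __name__ == "__main__":
        for path, uid in find_suid_binaries():
            print(f"owner uid={uid:<6} {path}")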

Robotic Surgery

Definition
The field of surgery is entering a time of great change, spurred on by remarkable recent advances in surgical and computer technology. Computer-controlled diagnostic instruments have been used in the operating room for years to help provide vital information through ultrasound, computer-aided tomography (CAT), and other imaging technologies. Only recently have robotic systems made their way into the operating room as dexterity-enhancing surgical assistants and surgical planners, in answer to surgeons' demands for ways to overcome the surgical limitations of minimally invasive laparoscopic surgery. The robotic surgical system enables surgeons to remove gallbladders and perform other general surgical procedures while seated at a computer console and 3-D video imaging system across the room from the patient. The surgeons operate controls with their hands and fingers to direct a robotically controlled laparoscope. At the end of the laparoscope are advanced, articulating surgical instruments and miniature cameras that allow surgeons to peer into the body and perform the procedures. Now imagine: an army ranger is riddled with shrapnel deep behind enemy lines. Diagnostics from wearable sensors signal a physician at a nearby mobile army surgical hospital that his services are needed urgently. The ranger is loaded into an armored vehicle outfitted with a robotic surgery system. Within minutes, he is undergoing surgery performed by the physician, who is seated at a control console 100 kilometers out of harm's way. The patient is saved. This is the power that the amalgamation of technology and the surgical sciences is offering doctors. Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to equally alter how we live in the 21st century. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line. We even have robotic lawn mowers and robotic pets now. And robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades, we will see robots that have artificial intelligence, coming to resemble the humans that create them. They will eventually become self-aware and conscious, and be able to do anything that a human can. When we talk about robots doing the tasks of humans, we often talk about the future, but the future of robotic surgery is already here.

Smart card

Definition
This seminar gives some basic concepts about smart cards. The physical and logical structure of the smart card and the corresponding security access control are discussed in this seminar. It is believed that smart cards offer more security and confidentiality than other kinds of information or transaction storage. Moreover, applications built with smart card technologies are illustrated, which demonstrate that the smart card is one of the best solutions to provide and enhance a system with security and integrity. The seminar also covers the contactless type of smart card briefly. Different kinds of schemes to organise and access multiple-application smart cards are discussed. The first and second schemes are practical and workable today, and there are real applications developed using those models. For the third one, multiple independent applications in a single card, there is still a long way to go to make it feasible, for several reasons. At the end of the paper, an overview of attack techniques on the smart card is discussed as well. The existence of those attacks does not mean that the smart card is insecure. It is important to realise that attacks against any secure system are nothing new or unique. Any system or technology claiming to be 100% secure is irresponsible. The main consideration in determining whether a system is secure or not depends on whether the level of security can meet the requirements of the system. The smart card is one of the latest additions to the world of information technology. Similar in size to today's plastic payment card, the smart card has a microprocessor or memory chip embedded in it that, when coupled with a reader, has the processing power to serve many different applications. As an access-control device, smart cards make personal and business data available only to the appropriate users. Another application provides users with the ability to make a purchase or exchange value. Smart cards provide data portability, security and convenience. Smart cards come in two varieties: memory and microprocessor. Memory cards simply store data and can be viewed as a small floppy disk with optional security. A microprocessor card, on the other hand, can add, delete and manipulate information in its memory on the card. Similar to a miniature computer, a microprocessor card has an input/output port, operating system and hard disk with built-in security features. On a fundamental level, microprocessor cards are similar to desktop computers. They have operating systems, they store data and applications, they compute and process information and they can be protected with sophisticated security tools. The self-containment of the smart card makes it resistant to attack, as it does not need to depend upon potentially vulnerable external resources. Because of this characteristic, smart cards are often used in applications which require strong security protection and authentication.

Cellular through remote control switch

Definition
Cellular through remote control switch implies control of devices at a remote location via a circuit interfaced to the remote telephone line/device by dialing specific DTMF (dual tone multi frequency) digits from a local telephone. This project, Cellular through remote control switch, has the following features:
1. It can control multiple loads (on/off/status for each load).
2. It provides feedback when the circuit is in the energized state and also sends an acknowledgement indicating the action taken with respect to the switching on of each load and the switching off of all loads (together).
3. It can selectively switch on any one or more loads one after the other and switch off all loads simultaneously.
OPERATION

1. Dial the phone number - an OK tone is produced
2. Password - 4321
3. Load number - 1, 2, 3, 4
4. Control number - 9/on, 0/off, #/status

When the phone number is dialed, the ring detector senses the ring and the auto lifter operates after some time. When the auto lifter operates, an OK tone is produced. Then the password (4321) is entered. To check the status of the corresponding load, enter # and the load number. To switch on a load, enter 9 and the load number. To switch off a load, enter 0 and the load number. The whole operation must be completed within 3 minutes; after 3 minutes the operation times out.
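On the receiving side, the circuit has to recognise which DTMF digit was pressed. The sketch below shows one standard way this is done in software, using the Goertzel algorithm to measure the energy at the eight standard DTMF row/column frequencies. An 8 kHz sampling rate is assumed, and the code is an illustration of the decoding principle rather than a description of the project's actual decoder hardware.

    import numpy as np

    ROW = [697, 770, 852, 941]          # standard DTMF row frequencies (Hz)
    COL = [1209, 1336, 1477, 1633]      # standard DTMF column frequencies (Hz)
    KEYS = [["1", "2", "3", "A"],
            ["4", "5", "6", "B"],
            ["7", "8", "9", "C"],
            ["*", "0", "#", "D"]]

    def goertzel_power(x, f, fs):
        # Power of signal block x at a single frequency f (Goertzel recurrence)
        k = round(len(x) * f / fs)
        coeff = 2 * np.cos(2 * np.pi * k / len(x))
        s1 = s2 = 0.0
        for sample in x:
            s0 = sample + coeff * s1 - s2
            s2, s1 = s1, s0
        return s1 ** 2 + s2 ** 2 - coeff * s1 * s2

    def detect_dtmf(x, fs=8000):
        r = max(range(4), key=lambda i: goertzel_power(x, ROW[i], fs))
        c = max(range(4), key=lambda i: goertzel_power(x, COL[i], fs))
        return KEYS[r][c]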

Terrestrial Trunked Radio (TETRA)

Definition
The TErrestrial Trunked RAdio (TETRA) standard was designed to meet some common requirements and objectives of the PMR and PAMR markets alike. One of the last strongholds of analog technology in a digital world has been the area of trunked mobile radio. Although digital cellular technology has made great strides with broad support from a relatively large number of manufacturers, digital trunked mobile radio systems for the Private Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) market have lagged behind. Few manufacturers currently offer digital systems, all of which are based on proprietary technology. However, the transition to digital is gaining momentum with the emergence of an open standard: TETRA. TETRA is a digital PMR standard developed by ETSI. It is an open standard that offers interoperability of equipment and networks from different manufacturers. It is a potential replacement for analog and proprietary digital systems. The standard originated in 1989 as the Mobile Digital Trunked Radio System (MDTRS), was later renamed Trans European Trunked Radio, and has been called TETRA since 1997. TETRA is the agreed standard for a new generation of digital land mobile radio communications designed to meet the needs of the most demanding Professional Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) users. TETRA is the only existing digital PMR standard defined by the European Telecommunications Standards Institute (ETSI). Among the standard's many features are voice and extensive data communications services. Networks based on the TETRA standard will provide cost-effective, spectrum-efficient and secure communications with advanced capabilities for the mobile and fixed elements of companies and organizations. As a standard, TETRA should be regarded as complementary to GSM and DECT. In comparison with GSM as currently implemented, TETRA provides faster call set-up, higher data rates, group calls and direct mode. TETRA manufacturers have been developing their products for ten years. The investments have resulted in highly sophisticated products. A number of important orders have already been placed. According to estimates, TETRA-based networks will have 5-10 million users by the year 2010.

HVAC

Definition
Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals such as cordless, wireless and cellular telephone signals, pager signals, two-way radio signals, video conferencing signals and LAN signals indoors. Indoor wireless transmission has the advantage that the building in which transmission is taking place does not have to be filled with wires or cables that are equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed. Traditional indoor wireless communication systems transmit and receive signals through the use of a network of transmitters, receivers and antennas that are placed throughout the interior of a building. Devices must be located so that signals are not lost and signal strength does not become too attenuated. Again, a change in the existing architecture also affects the wireless transmission. Another challenge related to the installation of wireless networks in buildings is the need to predict the RF propagation and coverage in the presence of complex combinations of shapes and materials in the buildings. In general, the attenuation in buildings is larger than that in free space, requiring more cells and higher power to obtain wider coverage. Despite all this, placement of transmitters, receivers and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the buildings. This paper suggests an alternative method of distributing electromagnetic signals in buildings, based on the recognition that every building is already equipped with an RF waveguide distribution system: the HVAC ducts. The use of HVAC ducts is also amenable to a systematic design procedure, but should be significantly less expensive than other approaches, since existing infrastructure is used and RF is distributed more efficiently.
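A rectangular metal duct behaves as a waveguide, so a quick way to judge whether a given duct can carry a particular RF signal is its TE10 cutoff frequency. The sketch below evaluates the standard rectangular-waveguide cutoff formula; the 0.30 m x 0.15 m cross-section is an assumed, merely illustrative duct size.

    from math import sqrt

    C = 299_792_458.0   # speed of light in air, m/s (approximately)

    def cutoff_hz(a, b, m=1, n=0):
        # TE_mn cutoff of an air-filled rectangular waveguide with cross-section a x b (metres)
        return (C / 2.0) * sqrt((m / a) ** 2 + (n / b) ** 2)

    a, b = 0.30, 0.15                      # example duct cross-section
    print(f"TE10 cutoff: {cutoff_hz(a, b) / 1e6:.0f} MHz")   # about 500 MHz

Signals well above cutoff, such as 2.4 GHz WLAN carriers, therefore propagate along such a duct with relatively low loss, which is what makes the existing HVAC infrastructure usable as an RF distribution system.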

Electronics Meet Animal Brains

Definition
Until recently, neurobiologists have used computers for simulation, data collection, and data analysis, but not to interact directly with nerve tissue in live, behaving animals. Although digital computers and nerve tissue both use voltage waveforms to transmit and process information, engineers and neurobiologists have yet to cohesively link the electronic signaling of digital computers with the electrical signaling of nerve tissue in freely behaving animals. Recent advances in microelectromechanical systems (MEMS), CMOS electronics, and embedded computer systems will finally let us link computer circuitry to neural cells in live animals and, in particular, to reidentifiable cells with specific, known neural functions. The key components of such a brain-computer system include neural probes, analog electronics, and a miniature microcomputer. Researchers developing neural probes such as sub-micron MEMS probes, microclamps, microprobe arrays, and similar structures can now penetrate and make electrical contact with nerve cells without causing significant or long-term damage to probes or cells. Researchers developing analog electronics such as low-power amplifiers and analog-to-digital converters can now integrate these devices with microcontrollers on a single low-power CMOS die. Further, researchers developing embedded computer systems can now incorporate all the core circuitry of a modern computer on a single silicon chip that can run on minuscule power from a tiny watch battery. In short, engineers have all the pieces they need to build truly autonomous implantable computer systems. Until now, high signal-to-noise recording as well as digital processing of real-time neuronal signals have been possible only in constrained laboratory experiments. By combining MEMS probes with analog electronics and modern CMOS computing into self-contained, implantable microsystems, implantable computers will free neuroscientists from the lab bench. Neurons and neuronal networks decide, remember, modulate, and control an animal's every sensation, thought, movement, and act. The intimate details of this network, including the dynamic properties of individual neurons and neuron populations, give a nervous system the power to control a wide array of behavioral functions. The goal of understanding these details motivates many workers in modern neurobiology. To make significant progress, these neurobiologists need methods for recording the activity of single neurons or neuron assemblies, for long timescales, at high fidelity, in animals that can interact freely with their sensory world and express normal behavioral responses.

Satellite Radio

Definition
We all have our favorite radio stations that we preset into our car radios, flipping between them as we drive to and from work, on errands and around town. But when you travel too far away from the source station, the signal breaks up and fades into static. Most radio signals can only travel about 30 or 40 miles from their source. On long trips that find you passing through different cities, you might have to change radio stations every hour or so as the signals fade in and out. Now, imagine a radio station that can broadcast its signal from more than 22,000 miles (35,000 km) away and then come through on your car radio with complete clarity, without your ever having to change the radio station. Satellite Radio, or Digital Audio Radio Service (DARS), is a subscriber-based radio service that is broadcast directly from satellites. Subscribers are able to receive up to 100 radio channels featuring compact-disc-quality music, news, weather, sports, talk radio and other entertainment channels. Satellite radio is an idea nearly 10 years in the making. In 1992, the U.S. Federal Communications Commission (FCC) allocated a spectrum in the "S" band (2.3 GHz) for nationwide broadcasting of satellite-based Digital Audio Radio Service (DARS). In 1997, the FCC awarded 8-year radio broadcast licenses to two companies, Sirius Satellite Radio (formerly CD Radio) and XM Satellite Radio (formerly American Mobile Radio). Both companies worked aggressively to be prepared to offer their radio services to the public by the end of 2000. It is expected that automotive radios will be the largest application of Satellite Radio. The satellite era began in September 2001 when XM launched in selected markets, followed by full nationwide service in November. Sirius lagged slightly, with a gradual rollout beginning in February, including a quiet launch in the Bay Area on June 15; the nationwide launch came on July 1.

Search For Extraterrestrial Intelligence

Definition
THE PRINCIPLE OF SEARCH
DRAKE EQUATION

N = R* x fp x ne x fl x fi x fc x L

The terms in the equation can be explained as follows:
o N - Number of communicative civilizations
o R* - Average rate of formation of stars over the lifetime of the galaxy (10 to 40 per year)
o fp - Fraction of those stars with planets (0 < fp < 1, estimated at 0.5 or 50 percent)
o ne - Average number of earth-type planets per planetary system (0 < ne < 1, estimated at 0.5 or 50 percent)
o fl - Fraction of those planets where life develops (0 < fl < 1, estimated at 1 or 100 percent)
o fi - Fraction of life that develops intelligence (0 < fi < 1, estimated at 0.1 or 10 percent)
o fc - Fraction of planets where intelligent life develops technology such as radio (0 < fc < 1, estimated at 0.1 or 10 percent)
o L - Lifetime of the communicative civilization in years (estimates are highly variable, from 100s to 1000s of years, approximately 500 years for example purposes)
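Plugging the example estimates listed above into the equation gives a feel for the numbers; the small sketch below takes R* = 20 per year (a value inside the quoted 10 to 40 range) and the other figures exactly as stated.

    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        # Expected number of communicative civilizations in the galaxy
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    N = drake(R_star=20, f_p=0.5, n_e=0.5, f_l=1.0, f_i=0.1, f_c=0.1, L=500)
    print(N)   # 25.0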
BASIC PROBLEMS IN SEARCH

" How to search such a large area of sky " Where to look on the radio dial for ET " How to make the best use of the limited radio-telescope resources available for SETI Large vs. Small Areas of Sky Wide-field search - In this method large chunks of the sky are searched at a low resolution within a short period of time. Targeted search - In this method, intensive investigations of a limited number of sun-like stars are made for ET signals. What's the Frequency? In the 1- to 10-gigahertz (GHz) range of frequencies, there are two frequencies: 1.42 GHz, caused by hydrogen atoms, and 1.65 GHz, caused by hydroxyl ions. This area is called as the water hole. Limited Radio-telescope Resources " Conduct limited observing runs on existing radio telescopes " Conduct SETI analyses of radio data acquired by other radio astronomers (piggyback or parasite searches) " Build new radio telescopes that are entirely dedicated to SETI research

Line-Reflect-Reflect Technique

Definition
LRR (Line-Reflect-Reflect) is a new self-calibration procedure for the calibration of vector network analyzers (VNAs). VNAs measure the complex transmission and reflection characteristics of microwave devices. The analyzers have to be calibrated in order to eliminate systematic errors from the measurement results. The LRR calibration circuits consist of partly unknown standards, where L symbolizes a line element and R represents a symmetrical reflection standard. The calibration circuits are all of equal mechanical length. The obstacle, a symmetrical-reciprocal network, is placed at three consecutive positions. The network consists of reflections, which might show a transmission. The calibration structures can be realized very easily as etched structures in microstrip technology. During the calibration, the error networks [G] and [H], which represent the systematic errors of the VNA, are eliminated in order to determine the unknown line and obstacle parameters.
MICROWAVE DEVICES

Microwave devices are devices operating in a signal frequency range of 1-300 GHz. A microwave circuit ordinarily consists of several microwave devices connected in some way to achieve the desired transmission of a microwave signal. The various microwave solid state devices are:
* Tunnel diodes: These are also known as Esaki diodes. A tunnel diode is a specially made PN junction device which exhibits negative resistance over part of the forward bias characteristic. Both the P and the N regions are heavily doped. The tunneling effect is a majority carrier effect and is very fast. It is useful for oscillation and amplification purposes. Because of the thin junction and short transit time, it is useful for microwave applications in fast switching circuits.
* Transferred electron devices: These are two-terminal negative resistance solid state devices which have no PN junction. The Gunn diode is one such transferred electron device; it works on the principle that there are periodic fluctuations in the current passing through an n-type GaAs substrate when the applied electric field exceeds a critical value of about 2-4 kV/cm.

Low Power UART Design for Serial Data Communication

Definition
With the proliferation of portable electronic devices, power-efficient data transmission has become increasingly important. For serial data transfer, universal asynchronous receiver/transmitter (UART) circuits are often implemented because of their inherent design simplicity and application-specific versatility. Components such as laptop keyboards, palm pilot organizers and modems are a few examples of devices that employ UART circuits. In this work, design and analysis of a robust UART architecture has been carried out to minimize power consumption during both idle and continuous modes of operation.
UART
A UART (universal asynchronous receiver/transmitter) is responsible for performing the main task in serial communications with computers. The device changes incoming parallel information to serial data which can be sent on a communication line. A second UART can be used to receive the information. The UART performs all the tasks (timing, parity checking, etc.) needed for the communication. The only extra devices attached are line driver chips capable of transforming the TTL-level signals to line voltages and vice versa. To use the device in different environments, registers are accessible to set or review the communication parameters. Settable parameters are, for example, the communication speed, the type of parity check, and the way incoming information is signaled to the running software.
UART types
Serial communication on PC compatibles started with the 8250 UART in the XT. In the years after, new family members were introduced, like the 8250A and 8250B revisions and the 16450. The last one was first implemented in the AT. The higher bus speed in this computer could not be reached by the 8250 series. The differences between these first UART series were rather minor. The most important property changed with each new release was the maximum allowed speed at the processor bus side. The 16450 was capable of handling a communication speed of 38.4 kbit/s without problems. The demand for higher speeds led to the development of newer series which would be able to relieve the main processor of some of its tasks. The main problem with the original series was the need to perform a software action for each single byte to transmit or receive. To overcome this problem, the 16550 was released, which contained two on-board FIFO buffers, each capable of storing 16 bytes: one buffer for incoming bytes and one buffer for outgoing bytes.
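As a reminder of what the transmitter side of a UART actually puts on the line, the sketch below builds the bit sequence for one character frame: a start bit, eight data bits sent LSB first, an optional parity bit and one stop bit. The framing choices (8 data bits, single stop bit) are just a common default, not something mandated by the paper.

    def uart_frame(byte, parity="even"):
        # Idle line is 1; a frame is: start bit 0, data LSB first, parity, stop bit 1
        data = [(byte >> i) & 1 for i in range(8)]
        bits = [0] + data
        if parity is not None:
            ones = sum(data)
            bits.append(ones % 2 if parity == "even" else 1 - ones % 2)
        bits.append(1)
        return bits

    print(uart_frame(0x41))   # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]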

Light Emitting Polymers (LEP)

Definition
Light emitting polymers, or polymer-based light emitting diodes, discovered by Friend et al. in 1990, have been found superior to other displays like liquid crystal displays (LCDs), vacuum fluorescence displays and electroluminescence displays. Though not commercialised yet, these have proved to be a milestone in the field of flat panel displays. Research in LEP is underway at Cambridge Display Technology Ltd (CDT) in the UK. In the last decade, several other display contenders such as plasma and field emission displays were hailed as the solution to the pervasive display. Like the LCD, they suited certain niche applications but failed to meet the broad demands of the computer industry. Today the trend is towards non-CRT flat panel displays. As LEDs are inexpensive devices, they can be extremely handy in constructing flat panel displays. The idea was to combine the characteristics of a CRT with the performance of an LCD and the added design benefits of formability and low power. Cambridge Display Technology Ltd is developing a display medium with exactly these characteristics. The technology uses a light-emitting polymer (LEP) that costs much less to manufacture and run than CRTs because the active material used is plastic. LEP is a polymer that emits light when a voltage is applied to it. The structure comprises a thin-film semiconducting polymer sandwiched between two electrodes, namely the anode and cathode. When electrons and holes are injected from the electrodes, recombination of these charge carriers takes place, which leads to emission of light that escapes through the glass substrate.

Cruise Control Devices

Definition
Every day the media brings us horrible news of road accidents. One report estimated that damaged property and other costs may equal 3% of the world's gross domestic product. The concept of assisting the driver in longitudinal vehicle control to avoid collisions has been a major focal point of research at many automobile companies and research organizations. The idea of driver assistance started with the 'cruise control devices' that first appeared in the 1970s in the USA. When switched on, this device takes up the task of accelerating or braking to maintain a constant speed, but it could not consider the other vehicles on the road. An 'Adaptive Cruise Control' (ACC) system, developed as the next generation, assisted the driver in keeping a safe distance from the vehicle in front. This system is now available only in some luxury cars like the Mercedes S-class, Jaguar and Volvo trucks. The U.S. Department of Transportation and Japan's ACAHSR have started developing 'Intelligent Vehicles' that can communicate with each other with the help of a system called 'Cooperative Adaptive Cruise Control'. This paper addresses the concept of Adaptive Cruise Control and its improved versions. ACC works by detecting the distance and speed of the vehicles ahead using either a Lidar system or a Radar system [1, 2]. The time taken between transmission and reception is the key to the distance measurement, while the shift in frequency of the reflected beam due to the Doppler effect is measured to determine the speed. According to this, the brake and throttle controls are operated to keep the vehicle in a safe position with respect to the other. These systems are characterized by a moderately low level of brake and throttle authority. They are predominantly designed for highway applications with rather homogeneous traffic behavior. The second generation of ACC is the Stop and Go Cruise Control (SACC) [2], whose objective is to offer the customer longitudinal support on cruise control at lower speeds, down to zero velocity [3]. The SACC can help a driver in situations where all lanes are occupied by vehicles, where it is not possible to set a constant speed, or in frequently stopped and congested traffic [2]. There is a clear distinction between ACC and SACC with respect to stationary targets. The ACC philosophy is that it will be operated on well-structured roads with an orderly traffic flow with vehicle speeds around 40 km/h [3], while the SACC system should be able to deal with stationary targets because within its area of operation the system will encounter such objects very frequently.
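
To make the two measurement principles concrete, the following Python sketch computes range from the echo's round-trip time and closing speed from the Doppler shift. The 77 GHz carrier and the other numbers in the example are illustrative values, not taken from a specific ACC product.

```python
# Illustrative calculation of the two quantities an ACC radar derives:
# range from the round-trip time of the pulse, relative speed from the Doppler shift.

C = 3.0e8  # speed of light in m/s

def target_range(round_trip_time_s: float) -> float:
    """Distance to the vehicle ahead from the echo's round-trip time."""
    return C * round_trip_time_s / 2.0

def relative_speed(doppler_shift_hz: float, carrier_freq_hz: float) -> float:
    """Closing speed (m/s) from the Doppler shift of the reflected carrier."""
    return doppler_shift_hz * C / (2.0 * carrier_freq_hz)

if __name__ == "__main__":
    # Example: echo returns after 0.4 microseconds, 5 kHz Doppler shift on a 77 GHz carrier.
    print(f"range: {target_range(0.4e-6):.1f} m")              # ~60 m
    print(f"closing speed: {relative_speed(5e3, 77e9):.2f} m/s")
```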

Boiler Instrumentation and Controls

Definition
Instrumentation and controls in a boiler plant encompass an enormous range of equipment, from the simple industrial plant to the complex large utility station. The boiler control system is the means by which the balance of energy and mass into and out of the boiler is achieved. Inputs are fuel, combustion air, atomizing air or steam, and feed water. Of these, fuel is the major energy input and combustion air is the major mass input. Outputs are steam, flue gas, blowdown, radiation and soot blowing.
CONTROL LOOPS

Boiler control systems contain several variables, with interaction occurring among the control loops for fuel, combustion air and feedwater. The overall system generally can be treated as a series of basic control loops connected together. For safety purposes, fuel addition should be limited by the amount of combustion air, and it may need minimum limiting for flame stability.

Combustion controls

Amounts of fuel and air must be carefully regulated to keep excess air within close tolerances, especially over the load range. This is critical to efficient boiler operation no matter what the unit size, type of fuel fired or control system used.

Feedwater control

Industrial boilers are subject to wide load variations and require quick-responding control to maintain a constant drum level. Multiple-element feedwater control can provide faster and more accurate control response.
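
As a rough illustration of multiple-element feedwater control, the sketch below computes a feedwater demand that follows steam flow and is trimmed by a PI term on drum level, with an inner loop nudging the valve toward that demand. The gains, units and function names are illustrative assumptions, not values from any real boiler control system.

```python
# Rough sketch of a three-element feedwater control step (illustrative gains and units):
# the drum-level controller trims a feedwater demand that follows steam flow,
# and an inner loop corrects the feedwater valve toward that demand.

def feedwater_demand(level_sp, level, steam_flow, integral, kp=2.0, ki=0.1, dt=1.0):
    """Return (demand in kg/s, updated integral term)."""
    error = level_sp - level                     # drum level error
    integral += error * dt
    trim = kp * error + ki * integral            # PI trim on the level
    return steam_flow + trim, integral           # feedforward from steam flow + trim

def valve_correction(demand, feedwater_flow, k_flow=0.05):
    """Fractional valve position change from the inner flow loop."""
    return k_flow * (demand - feedwater_flow)

if __name__ == "__main__":
    integral = 0.0
    demand, integral = feedwater_demand(level_sp=0.0, level=-0.02,
                                        steam_flow=40.0, integral=integral)
    print(demand, valve_correction(demand, feedwater_flow=38.5))
```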

Single Photon Emission Computed Tomography (SPECT)

Definition
Emission Computed Tomography is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single Photon Emission Computed Tomography involves the use of a single gamma ray emitted per nuclear disintegration. Positron Emission Tomography makes use of radioisotopes such as gallium-68, where two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue. SPECT, the acronym of Single Photon Emission Computed Tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature rather than purely anatomical, such as ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides administered to the patient's body. SPECT dates from the early 1960s, when the idea of emission transverse section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first commercial single-photon ECT or SPECT imaging device was developed by Edwards and Kuhl, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were also developed in the 1980s. SPECT is short for Single Photon Emission Computed Tomography. As its name suggests (single photon emission), gamma rays are the source of the information rather than the X-ray emission in a conventional CT scan. Similar to X-ray CT, MRI, etc., SPECT allows us to visualize functional information about a patient's specific organ or body system. Internal radiation is administered by means of a pharmaceutical which is labeled with a radioactive isotope. This pharmaceutical isotope decays, resulting in the emission of gamma rays. These gamma rays give us a picture of what is happening inside the patient's body using the most essential tool in nuclear medicine: the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image or in SPECT imaging to acquire a 3-D image.
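
One simple way a computer can turn the projections recorded by a rotating gamma camera into a cross-sectional image is unfiltered back-projection, sketched below with NumPy. This is only a toy illustration of the tomographic idea; clinical SPECT reconstruction adds filtering, attenuation and scatter correction.

```python
# Toy unfiltered back-projection: smear each 1-D projection back across the image
# plane along its view angle and sum. Real SPECT reconstruction adds filtering and
# attenuation correction; this only illustrates the basic tomographic idea.
import numpy as np

def backproject(sinogram: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
    n_angles, n_bins = sinogram.shape
    # image grid of detector-bin coordinates centred on the rotation axis
    coords = np.arange(n_bins) - n_bins / 2.0
    x, y = np.meshgrid(coords, coords)
    image = np.zeros((n_bins, n_bins))
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # detector coordinate of every pixel for this view angle
        s = x * np.cos(theta) + y * np.sin(theta) + n_bins / 2.0
        image += np.interp(s, np.arange(n_bins), proj, left=0.0, right=0.0)
    return image / n_angles

if __name__ == "__main__":
    angles = np.arange(0, 180, 3)
    # fake sinogram of a point source at the centre: a spike in every projection
    sino = np.zeros((len(angles), 64))
    sino[:, 32] = 1.0
    print(backproject(sino, angles).max())  # brightest at the image centre
```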

Sensors on 3D Digitization

Introduction
Digital 3D imaging can benefit from advances in VLSI technology in order to accelerate its deployment in many fields like visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Intelligent digitizers will be capable of measuring colour and 3D accurately and simultaneously.

Colour 3D Imaging Technology

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1]. Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed. Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of the active vision technique. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

Sensors For 3D Imaging

The sensors used in the autosynchronized scanner include:
1. Synchronization circuit based upon dual photocells. This sensor ensures the stability and the repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuits have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems. [1]
2. Laser spot position measurement sensors. These allow high-resolution 3D images to be acquired with laser-based vision systems, keeping the 3D information relatively insensitive to background illumination and surface texture. [1]
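
For one common triangulation geometry (the laser beam parallel to the camera's optical axis, offset by a known baseline), the range to the surface follows directly from where the laser spot lands on the sensor. The sketch below assumes that geometry; the numbers and names are illustrative.

```python
# Single-point laser triangulation for one simple geometry (illustrative):
# the laser beam is parallel to the camera's optical axis, offset by a baseline b.
# A surface at range z images the laser spot at u = f * b / z on the sensor,
# so z can be recovered from the measured spot position.

def range_from_spot(spot_offset_mm: float, focal_length_mm: float, baseline_mm: float) -> float:
    """Range to the surface (mm) from the laser spot's image offset."""
    if spot_offset_mm <= 0:
        raise ValueError("spot offset must be positive for a finite range")
    return focal_length_mm * baseline_mm / spot_offset_mm

if __name__ == "__main__":
    # 16 mm lens, 100 mm baseline, spot imaged 2 mm off-axis -> 800 mm range
    print(range_from_spot(spot_offset_mm=2.0, focal_length_mm=16.0, baseline_mm=100.0))
```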

Asynchronous Chips

Definition
Computer chips of today are synchronous: they contain a main clock which controls the timing of the entire chip. There are problems, however, involved with these clocked designs that are common today. One problem is speed. A chip can only work as fast as its slowest component. Therefore, if one part of the chip is especially slow, the other parts of the chip are forced to sit idle. This wasted computation time is obviously detrimental to the speed of the chip. New problems with speeding up a clocked chip are just around the corner. Clock frequencies are getting so fast that signals can barely cross the chip in one clock cycle. When we get to the point where the clock cannot drive the entire chip, we will be forced to come up with a solution. One possible solution is a second clock, but this will incur overhead and power consumption, so it is a poor solution. It is also important to note that doubling the frequency of the clock does not double the chip speed; therefore, blindly trying to increase chip speed by increasing frequency without considering other options is foolish. The other major problem with clocked design is power consumption. The clock consumes more power than any other component of the chip. The most disturbing thing about this is that the clock serves no direct computational use: a clock does not perform operations on information; it simply orchestrates the computational parts of the computer. New problems with power consumption are arising. As the number of transistors on a chip increases, so does the power used by the clock. Therefore, as we design more complicated chips, power consumption becomes an even more crucial topic. Mobile electronics are the target for many chips. These chips need to be even more conservative with power consumption in order to have a reasonable battery lifetime. The natural solution to the above problems, as you may have guessed, is to eliminate the source of these headaches: the clock.

Optical packet switch architectures

Definition
The space switch fabric architecture is shown in the figure. The switch consists of N incoming and N outgoing fiber links, with n wavelengths running on each fiber link. The switch is slotted, and the length of the slot is such that an optical packet can be transmitted and propagated from an input port to an output optical buffer. The switch fabric consists of three parts: optical packet encoder, space switch, and optical packet buffer. The optical packet encoder works as follows. For each incoming fiber link, there is an optical demultiplexer which divides the incoming optical signal into its different wavelengths. Each wavelength is fed to a different tunable wavelength converter (TWC), which converts the wavelength of the optical packet to a wavelength that is free at the destination optical output fiber. Then, through the space switch fabric, the optical packet can be switched to any of the N output optical buffers. Specifically, the output of a TWC is fed to a splitter, which distributes the same signal to N different output fibers, one per output buffer. The signal on each of these output fibers goes through another splitter, which distributes it into d+1 different output fibers, and each output is connected through an optical gate to one of the ODLs of the destination output buffer. The optical packet is forwarded to an ODL by keeping the appropriate optical gate open and closing the rest. The information regarding the wavelength to which an incoming packet is converted, and the decision as to which ODL of the destination output buffer the packet will be switched to, is provided by the control unit, which has knowledge of the state of the entire switch. Each output buffer is an optical buffer implemented as follows. It consists of d+1 ODLs, numbered from 0 to d. ODL i delays an optical packet for a fixed delay equal to i slots; ODL 0 provides zero delay, and a packet arriving at this ODL is simply transmitted out of the output port.
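
The control unit's per-packet decision can be illustrated with a small sketch: for a packet arriving in the current slot and destined for a given output fiber, choose the smallest delay line whose departure slot on that output is still free, otherwise drop the packet. The data structures below are an illustrative simplification of that bookkeeping, not an implementation of a real switch controller.

```python
# Sketch of the scheduling decision described above: for a packet arriving in the current
# slot and destined to a given output fiber, pick the smallest ODL index i (0..d) whose
# departure slot (current_slot + i) on that output is still free; otherwise drop the packet.
# Data structures and names are illustrative.

def choose_odl(reserved_slots: set[int], current_slot: int, d: int):
    """Return the ODL index to use, or None if the packet must be dropped."""
    for i in range(d + 1):
        departure = current_slot + i
        if departure not in reserved_slots:
            reserved_slots.add(departure)   # reserve the output for that departure slot
            return i
    return None

if __name__ == "__main__":
    output_reservations = {5, 6}            # slots already taken on this output fiber
    print(choose_odl(output_reservations, current_slot=5, d=3))  # -> 2 (slot 7 is free)
```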

Digital Audio Broadcasting

Definition
Digital audio broadcasting, DAB, is the most fundamental advancement in radio technology since the introduction of FM stereo radio. It gives listeners interference-free reception of CD-quality sound, easy-to-use radios, and the potential for wider listening choice through many additional stations and services. DAB is a reliable multi-service digital broadcasting system for reception by mobile, portable and fixed receivers with a simple, non-directional antenna. It can be operated at any frequency from 30 MHz to 3 GHz for mobile reception (higher for fixed reception) and may be used on terrestrial, satellite, hybrid (satellite with complementary terrestrial) and cable broadcast networks. The DAB system is a rugged, highly spectrum- and power-efficient sound and data broadcasting system. It uses advanced digital audio compression techniques (MPEG-1 Audio Layer II and MPEG-2 Audio Layer II) to achieve a spectrum efficiency equivalent to or higher than that of conventional FM radio. The efficiency of spectrum use is further increased by a special feature called the Single Frequency Network (SFN): a broadcast network can be extended virtually without limit by operating all transmitters on the same radio frequency.
EVOLUTION OF DAB

DAB has been under development since 1981 at the Institut für Rundfunktechnik (IRT) and since 1987 as part of a European research project (EUREKA-147).
In 1987 the Eureka-147 consortium was founded. Its aim was to develop and define the digital broadcast system which later became known as DAB.
In 1988 the first equipment was assembled for a mobile demonstration at the Geneva WARC conference.
By 1990, a small number of test receivers had been manufactured. They had a size of 120 dm3.
In 1992, frequencies in the L- and S-bands were allocated to DAB on a worldwide basis.
From mid-1993, third-generation receivers, widely used for test purposes and with a size of about 25 dm3, were developed.
The fourth-generation JESSI DAB based test receivers had a size of about 3 dm3.
In 1995 the first consumer-type DAB receivers, developed for use in pilot projects, were presented at the IFA in Berlin.

Cellular Neural Network (CNN)


Abstract of Cellular Neural Network

Cellular Neural Network (CNN) is a revolutionary concept and an experimentally proven new computing paradigm for analog computers. Looking at the technological advancement of the last 50 years, we see a first revolution that led to the PC industry in the 1980s and a second revolution that led to the Internet industry in the 1990s. Cheap sensors and MEMS arrays, in the desired forms of artificial eyes, noses, ears, etc., are driving a third revolution, and this third revolution owes much to CNN. This technology is implemented using the CNN Universal Machine (CNN-UM) and is also used in image processing. It can also implement any Boolean function.

ARCHITECTURE OF CNN

A standard CNN architecture consists of an M x N rectangular array of cells C(i,j) with Cartesian coordinates (i,j), i = 1, 2, ..., M, j = 1, 2, ..., N. A class-1 M x N standard CNN is defined mathematically by the cell state equation

dx_ij/dt = -x_ij + sum over C(k,l) in N(i,j) of A(i,j;k,l) y_kl + sum over C(k,l) in N(i,j) of B(i,j;k,l) u_kl + z_ij

where x_ij is the state, y_kl the output and u_kl the input of a cell, z_ij is the bias, and the sums run over the cells C(k,l) in the neighbourhood N(i,j) of cell C(i,j).

Simplicial CNN

Recently a novel structure has been introduced to implement any Boolean or gray-level function of any number of variables. The output is no longer restricted to be binary, so CNNs with gray-scale outputs are obtained. Simplicial CNNs are implemented using resonant tunneling diodes (RTDs). RTDs are nanoelectronic quantum devices featuring high-speed regimes and small integration sizes, and they can be designed to operate at nano/femto scales, leading to extremely low-power designs. In addition they exhibit an intrinsic nonlinear behaviour which can be exploited in many diverse applications, for instance in frequency multipliers and parity generators, threshold logic gates, multi-gigahertz A/D converters, and multivalued logic applications including multivalued memory design, among others. A simplicial partition is used to subdivide the domain into convex regions called simplices, which are the natural extension of the 2-D triangle into an n-D space. The corners of these simplices are called vertices and, for the particular domain chosen, are points of the form (+1, -1, +1, -1). It has been proven that the set of all PWL functions is a linear vector space, so every PWL function can be expressed as a linear combination F = sum of c_i A_i(u), where the basis functions satisfy A_i(v_j) = 1 if i = j and 0 otherwise.

C.N.N. Universal Machine

The first spatio-temporal analogic array computer is the CNN Universal Machine. Its two kinds of operation are continuous-time, continuous-valued spatio-temporal nonlinear array dynamics (2-D and 3-D arrays) and local and global logic; hence analog and logic operations are mixed and embedded in the array computer. Therefore we call this type of computing analogic. The CNN-UM architecture has a minimum number of component types, provides stored-programmable spatio-temporal array computing, and is universal in two senses: as spatial logic it is equivalent to a Turing machine, and as local logic it may compute any Boolean function. As a nonlinear dynamic operator, it can realize any local operator of fading memory. The CNN-UM is a common computational paradigm for fields of spatio-temporal computing as diverse as, for example, retinal models, reaction-diffusion equations and mathematical morphology. The stored program, as a sequence of templates, is considered the genetic code of the CNN-UM; the elementary genes are the templates.
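
The cell dynamics above can be simulated with a simple Euler integration, as in the NumPy sketch below. The output nonlinearity is the standard piecewise-linear one; the 3x3 template values are illustrative, not a particular published template.

```python
# Numerical sketch of the standard CNN state equation given above:
#   dx_ij/dt = -x_ij + sum_kl A(i,j;k,l) y_kl + sum_kl B(i,j;k,l) u_kl + z_ij
# with y = 0.5 * (|x + 1| - |x - 1|), integrated with a simple Euler step.
# The 3x3 template values below are illustrative, not a specific published template.
import numpy as np

def conv3x3(field: np.ndarray, template: np.ndarray) -> np.ndarray:
    padded = np.pad(field, 1)
    out = np.zeros_like(field)
    for di in range(3):
        for dj in range(3):
            out += template[di, dj] * padded[di:di + field.shape[0], dj:dj + field.shape[1]]
    return out

def cnn_run(u: np.ndarray, A: np.ndarray, B: np.ndarray, z: float,
            steps: int = 200, dt: float = 0.05) -> np.ndarray:
    x = np.zeros_like(u)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))        # standard output nonlinearity
        x = x + dt * (-x + conv3x3(y, A) + conv3x3(u, B) + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

if __name__ == "__main__":
    u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0              # a small bright square as input
    A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])    # illustrative feedback template
    B = np.array([[0, 0, 0], [0, 1.0, 0], [0, 0, 0]])    # illustrative control template
    print(cnn_run(u, A, B, z=-0.5).round(1))             # thresholds the input square to +/-1
```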

FRAM

Definition
Before the 1950s, ferromagnetic cores were the only type of random-access, nonvolatile memory available. A core memory is a regular array of tiny magnetic cores that can be magnetized in one of two opposite directions, making it possible to store binary data in the form of a magnetic field. The success of the core memory was due to a simple architecture that resulted in a relatively dense array of cells. This approach was emulated in the semiconductor memories of today (DRAMs, EEPROMs, and FRAMs). Ferromagnetic cores, however, were too bulky and expensive compared to the smaller, low-power semiconductor memories. In place of ferromagnetic cores, ferroelectric memories are a good substitute. The term 'ferroelectric' indicates the similarity, despite the lack of iron in the materials themselves. Ferroelectric memories exhibit short programming times, low power consumption and nonvolatile storage, making them highly suitable for applications like contactless smart cards and digital cameras, which demand many memory write operations. In other words, FRAM has the features of both RAM and ROM. A ferroelectric memory technology consists of a complementary metal-oxide-semiconductor (CMOS) technology with added layers on top for the ferroelectric capacitors. A ferroelectric memory cell has at least one ferroelectric capacitor to store the binary data, and one or two transistors that provide access to the capacitor or amplify its content for a read operation. A ferroelectric capacitor is different from a regular capacitor in that it substitutes the dielectric with a ferroelectric material (lead zirconate titanate (PZT) is a common material used). When an electric field is applied, the charges displace from their original positions, spontaneous polarization occurs, and the displacement becomes evident in the crystal structure of the material. Importantly, the displacement does not disappear in the absence of the electric field. Moreover, the direction of polarization can be reversed or reoriented by applying an appropriate electric field. A hysteresis loop for a ferroelectric capacitor displays the total charge on the capacitor as a function of the applied voltage. It behaves similarly to that of a magnetic core, except that it lacks the sharp transitions around its coercive points, which implies that even a moderate voltage can disturb the state of the capacitor. One remedy for this is to modify the ferroelectric memory cell by including a transistor in series with the ferroelectric capacitor. Called an access transistor, it controls access to the capacitor and eliminates the need for a square-like hysteresis loop, compensating for the softness of the hysteresis loop characteristics and blocking unwanted disturb signals from neighboring memory cells.

Wireless Fidelity

Definition
Wi-Fi, or Wireless Fidelity, is freedom: it allows you to connect to the Internet from your couch at home, in a hotel room or in a conference room at work, without wires. Wi-Fi is a wireless technology like a cell phone. Wi-Fi enabled computers send and receive data indoors and out, anywhere within the range of a base station. And the best thing of all, it is fast. However, you only have true freedom to be connected anywhere if your computer is configured with a Wi-Fi CERTIFIED radio (a PC card or similar device). Wi-Fi certification means that you will be able to connect anywhere there are other Wi-Fi CERTIFIED products, whether you are at home, the office, airports, coffee shops or other public areas equipped with Wi-Fi access. Wi-Fi will be a major force behind hotspots, to a much greater extent; more than 400 airports and hotels in the US are targeted as Wi-Fi hotspots. The Wi-Fi CERTIFIED logo is your only assurance that the product has met rigorous interoperability testing requirements to ensure that products from different vendors will work together; the Wi-Fi CERTIFIED logo means that it is a "safe" buy. Wi-Fi certification comes from the Wi-Fi Alliance, a non-profit international trade organisation that tests 802.11-based wireless equipment to make sure that it meets the Wi-Fi standard and works with all other manufacturers' Wi-Fi equipment on the market. The Wi-Fi Alliance (formerly WECA) also has a Wi-Fi certification program for Wi-Fi products that meet interoperability standards. It is an international organisation devoted to certifying the interoperability of 802.11 products and to promoting 802.11 as the global wireless LAN standard across all market segments.
IEEE 802.11 ARCHITECTURES

In IEEE's proposed standard for wireless LANs (IEEE 802.11), there are two different ways to configure a network: ad-hoc and infrastructure. In the ad-hoc network, computers are brought together to form a network "on the fly." As shown in Figure 1, there is no structure to the network; there are no fixed points; and usually every node is able to communicate with every other node. A good example of this is the aforementioned meeting where employees bring laptop computers together to communicate and share design or financial information. Although it seems that order would be difficult to maintain in this type of network, algorithms such as the spokesman election algorithm (SEA) [4] have been designed to "elect" one machine as the base station (master) of the network, with the others being slaves. Another approach used in ad-hoc network architectures is a broadcast-and-flooding method in which each node announces itself to all other nodes to establish who's who.

Synthetic Aperture Radar System

Introduction
When a disaster occurs it is very important to grasp the situation as soon as possible, but it is very difficult to get this information from the ground because there are many things, such as clouds and volcanic eruptions, which prevent us from obtaining such important data. When using an optical sensor, a large amount of data is shut out by such barriers. In such cases, Synthetic Aperture Radar, or SAR, is a very useful means to collect data even if the observation area is covered with obstacles or the observation is made at night, because SAR uses microwaves and these are radiated by the sensor itself. The SAR sensor can be installed on a satellite and the surface of the earth can be observed. To support the scientific applications utilizing space-borne imaging radar systems, a set of radar technologies has been developed which can dramatically lower the weight, volume, power and data rates of the radar systems. These smaller and lighter SAR systems can be readily accommodated in small spacecraft and launch vehicles, enabling significantly reduced total mission cost. Specific areas of radar technology development include the antenna, RF electronics, digital electronics and data processing. A radar technology development plan is recommended to develop and demonstrate these technologies and integrate them into the radar missions in a timely manner. It is envisioned that these technology advances can revolutionize the approach to SAR missions, leading to higher-performance systems at significantly reduced mission costs. The SAR systems are placed on satellites for the imaging process. Microwave satellites register images in the microwave region of the electromagnetic spectrum. Two modes of microwave sensors exist: the active and the passive modes. SAR is an active sensor which carries on board an instrument that sends a microwave pulse to the surface of the earth and registers the reflections from the surface. One way of collecting images from space under darkness or cloud cover is to install the SAR on a satellite. As the satellite moves along its orbit, the SAR looks out sideways from the direction of travel, acquiring and storing the radar echoes which return from the strip of earth's surface under observation. The raw data collected by SAR are severely unfocused and considerable processing is required to generate a focused image. The processing has traditionally been done on the ground, and a downlink with a high data rate is required. This is a time-consuming process as well. The high data rate of the downlink can be reduced by using a SAR instrument with on-board processing.

X-Band SAR Instrument Demonstrator

The X-band SAR instrument demonstrator forms the standardized part, or basis, for a future Synthetic Aperture Radar (SAR) instrument with an active front-end. SAR is an active sensor. Active sensors carry on board an instrument that sends a microwave pulse to the surface of the earth and registers the reflections from the surface. Different sensors use different bands in the microwave region of the electromagnetic spectrum for collecting data; in the X-band SAR instrument, the X-band is used.

Touch Screens

Introduction
A touch screen is a type of display screen that has a touch-sensitive transparent panel covering the screen. Instead of using a pointing device such as a mouse or light pen, you can use your finger to point directly to objects on the screen. Although touch screens provide a natural interface for computer novices, they are unsatisfactory for most applications because the finger is a relatively large object: it is impossible to point accurately to small areas of the screen. In addition, most users find touch screens tiring to the arms after long use. Touch screens are typically found on larger displays and in phones with integrated PDA features. Most are designed to work with either your finger or a special stylus. Tapping a specific point on the display will activate the virtual button or feature displayed at that location. Some phones with this feature can also recognize handwriting written on the screen using a stylus, as a way to quickly input lengthy or complex information. A touchscreen is an input device that allows users to operate a PC by simply touching the display screen. Touch input is suitable for a wide variety of computing applications, and a touchscreen can be used with most PC systems as easily as other input devices such as trackballs or touch pads.

History Of Touch Screen Technology

A touch screen is a special type of visual display unit with a screen which is sensitive to pressure or touching. The screen can detect the position of the point of touch. The design of touch screens is best for inputting simple choices, and the choices are programmable. The device is very user-friendly since it 'talks' with the user while the user is picking choices on the screen. Touch technology turns a CRT, flat panel display or flat surface into a dynamic data entry device that replaces both the keyboard and the mouse. In addition to eliminating these separate data entry devices, touch offers an "intuitive" interface. In public kiosks, for example, users receive no more instruction than 'touch your selection'. Specific areas of the screen are defined as "buttons" that the operator selects simply by touching them. One significant advantage of touch screen applications is that each screen can be customized to reflect only the valid options for each phase of an operation, greatly reducing the frustration of hunting for the right key or function. Pen-based systems, such as the Palm Pilot and signature capture systems, also use touch technology but are not included in this article; the essential difference is that the pressure levels are set higher for pen-based systems than for touch. Touch screens come in a wide range of options, from full-colour VGA and SVGA monitors designed for highly graphic Windows or Macintosh applications to small monochrome displays designed for keypad replacement and enhancement. Specific figures on the growth of touch screen technology are hard to come by, but a 1995 study by Venture Development Corporation predicted overall growth of 17%, with at least 10% in the industrial sector. Other vendors agree that touch screen technology is becoming more popular because of its ease of use, proven reliability, expanded functionality, and decreasing cost.

A touch screen sensor is a clear glass panel with a touch responsive surface. The touch sensor/panel is placed over a display screen so that the responsive area of the panel covers the viewable area of the video screen. There are several different touch sensor technologies on the market today, each using a different method to detect touch input. The sensor generally has an electrical current or signal going through it and touching the screen causes a voltage or signal change. This voltage change is used to determine the location of the touch to the screen.
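
For a 4-wire resistive sensor, for example, the location along the driven axis follows from the ratio of the measured voltage to the drive voltage, as in the small sketch below; the voltages and screen size are illustrative.

```python
# Illustrative 4-wire resistive touch read-out: drive one layer with Vcc across an axis,
# read the voltage picked up by the other layer; the position along that axis is simply
# the measured fraction of the drive voltage times the screen dimension.

def touch_coordinate(v_measured: float, v_drive: float, screen_size_px: int) -> int:
    """Map a measured voltage to a pixel coordinate along the driven axis."""
    fraction = max(0.0, min(1.0, v_measured / v_drive))
    return round(fraction * (screen_size_px - 1))

if __name__ == "__main__":
    # 3.3 V drive, 1.1 V read on the X layer of an 800-pixel-wide screen
    print(touch_coordinate(1.1, 3.3, 800))   # ~266
```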

Tempest and Echelon

Introduction
The notion of spying has become a very sensitive topic after the September 11 terrorist attacks in New York. In the novel 1984, George Orwell foretold a future where individuals had no expectation of privacy because the state monopolized the technology of spying. Now the National Security Agency (NSA) of the USA has developed a secret project, named Echelon, to spy on people by tracing their messages: technology-enabled interception used to find out about terrorist activities across the globe. This leaves the technology far ahead of any traditional method of interception. The secret project developed by the NSA and its allies traces every single transmission, even a single keystroke. The allies of the USA in this project are the UK, Australia, New Zealand and Canada. Echelon is built on the enormous computing power of computers connected through satellites all over the world. In this project the NSA left the remarkable methods of Tempest and Carnivore behind. Echelon is the technology for sniffing through messages sent over a network or any transmission medium, even wireless messages. Tempest is the technology for intercepting electromagnetic waves over the air. It simply sniffs through the electromagnetic waves propagated from any device, even from the monitor of a computer screen. Tempest can capture the signals through walls, from computer screens and keyboard keystrokes, even when the computer is not connected to a network. Thus the traditional way of hacking has little advantage in spying. For ordinary people it is hard to believe that the contents of their monitor can be reproduced from anywhere within a one-kilometre range without any transmission medium between the equipment and their computer. So we have to accept that the technology enables anything to be reproduced, from the monitor of a computer to the hard disks, including the memory (RAM), of a distant computer without any physical or visual contact. It is done with the electromagnetic waves propagated from that device. The main theory behind Tempest (Transient Electromagnetic Pulse Emanation Standard) is that any electronic or electrical device emits electromagnetic radiation of a specific kind when it is operated. For example, the picture tube of a computer monitor emits radiation as the beam is scanned over its vertical and horizontal range to draw the screen. This causes no harm to a human and is very weak, but it has a specific frequency range. These electromagnetic waves can be reproduced by capturing them with powerful equipment and applying powerful filtering methods to correct the errors introduced during their propagation from the equipment. These emanations are not intended for anyone, since they do not come from a transmitter, yet a receiver can trace and reconstruct them. For the project named Echelon the NSA is using supercomputers to sniff through packets and any messages sent as electromagnetic waves, taking advantage of distributed computing. First, the messages are intercepted with the technology named Tempest and also with Carnivore. Every packet is sniffed by the USA's NSA for what it regards as security reasons.

VoCable

Introduction
Voice (and fax) service over cable networks is known as cable-based Internet Protocol (IP) telephony. Cable-based IP telephony holds the promise of simplified and consolidated communication services provided by a single carrier at a lower cost than consumers currently pay to separate Internet, television and telephony service providers. Cable operators have already worked through the technical challenges of providing Internet service and optimizing the existing bandwidth in their cable plants to deliver high-speed Internet access. Now, cable operators have turned their efforts to the delivery of integrated Internet and voice service using that same cable spectrum. Cable-based IP telephony falls under the broad umbrella of voice over IP (VoIP), meaning that many of the challenges facing cable operators are the same challenges that telecom carriers face as they work to deliver voice over ATM (VoATM) and frame-relay networks. However, ATM and frame-relay services are targeted primarily at the enterprise, a decision driven by economics and the need for service providers to recoup their initial investments in a reasonable amount of time. Cable, on the other hand, is targeted primarily at the home. Unlike most businesses, the overwhelming majority of homes in the United States are passed by cable, reducing the required upfront infrastructure investment significantly. Cable is not without competition in the consumer market, for digital subscriber line (xDSL) has emerged as the leading alternative to broadband cable. However, cable operators are well positioned to capitalize on the convergence trend if they are able to overcome the remaining technical hurdles and deliver telephony service that is comparable to the public switched telephone system. In the case of cable TV, each television signal is given a 6-megahertz (MHz, millions of cycles per second) channel on the cable. The coaxial cable used to carry cable television can carry hundreds of megahertz of signals -- all the channels we could want to watch and more. In a cable TV system, signals from the various channels are each given a 6-MHz slice of the cable's available bandwidth and then sent down the cable to your house. In some systems, coaxial cable is the only medium used for distributing signals. In other systems, fibre-optic cable goes from the cable company to different neighborhoods or areas. Then the fiber is terminated and the signals move onto coaxial cable for distribution to individual houses. When a cable company offers Internet access over the cable, Internet information can use the same cables because the cable modem system puts downstream data -- data sent from the Internet to an individual computer -- into a 6-MHz channel. On the cable, the data looks just like a TV channel, so Internet downstream data takes up the same amount of cable space as any single channel of programming. Upstream data -- information sent from an individual back to the Internet -- requires even less of the cable's bandwidth, just 2 MHz, since the assumption is that most people download far more information than they upload. Putting both upstream and downstream data on the cable television system requires two types of equipment: a cable modem on the customer end and a cable modem termination system (CMTS) at the cable provider's end. Between these two types of equipment, all the computer networking, security and management of Internet access over cable television is put into place.
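
A back-of-the-envelope calculation shows why a single 6-MHz downstream channel can carry a respectable data rate. The symbol rate and modulation order assumed below are illustrative, not values quoted from a particular cable specification.

```python
# Back-of-the-envelope raw bit rate of a single downstream cable channel.
# The 6 MHz channel width comes from the text; the symbol rate and modulation
# order below are illustrative assumptions, not values from a particular spec.
import math

def raw_bit_rate(symbol_rate_sps: float, constellation_points: int) -> float:
    bits_per_symbol = math.log2(constellation_points)
    return symbol_rate_sps * bits_per_symbol

if __name__ == "__main__":
    # assume ~5 Msym/s fits in a 6 MHz channel and 64-QAM modulation
    print(f"{raw_bit_rate(5e6, 64) / 1e6:.0f} Mbit/s")   # ~30 Mbit/s before overhead
```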

Data Compression Techniques

Introduction
Data compression is the process of converting an input data stream (the source stream, or the original raw data) into another data stream that has a smaller size. Data compression is popular for two reasons: 1) People like to accumulate data and hate to throw anything away. No matter how large a storage device may be, sooner or later it is going to overflow. Data compression seems useful because it delays this inevitability. 2) People hate to wait a long time for data transfers. There are many known methods of data compression. They are based on different ideas, are suitable for different types of data, and produce different results, but they are all based on the same basic principle: they compress data by removing the redundancy from the original data in the source file. The idea of compression by reducing redundancy suggests the general law of data compression, which is to "assign short codes to common events and long codes to rare events". Data compression is done by changing the data's representation from an inefficient to an efficient form. The main aim of the field of data compression is, of course, to develop methods for better and better compression. Experience shows that fine-tuning an algorithm to squeeze out the last remaining bits of redundancy from the data gives diminishing returns. Data compression has become so important that some researchers have proposed the "simplicity and power theory". Specifically, it says that data compression may be interpreted as a process of removing unnecessary complexity in information, thus maximizing simplicity while preserving as much as possible of its non-redundant descriptive power.

Basic Types Of Data Compression

There are two basic types of data compression: 1. Lossy compression 2. Lossless compression

Lossy Compression

In lossy compression some information is lost during processing: the image data is sorted into important and unimportant data, and the system then discards the unimportant data. It provides much higher compression ratios, but there will be some loss of information compared to the original source file. The main advantage is that the loss may not be visible to the eye, i.e. it is visually lossless. Visually lossless compression is based on knowledge about colour images and human perception.

Lossless Compression

In this type of compression no information is lost during the compression and decompression process. Here the reconstructed image is mathematically and visually identical to the original one. It achieves only about a 2:1 compression ratio. This type of compression technique looks for patterns in strings of bits and then expresses them more concisely.
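
A tiny example of a lossless scheme that exploits exactly this kind of redundancy is run-length encoding: runs of a repeated symbol (a common event) are replaced by a short (symbol, count) pair, and decoding restores the input exactly. The Python sketch below is purely illustrative.

```python
# Tiny lossless compression example: run-length encoding replaces runs of a repeated
# symbol with a (symbol, count) pair, and decoding restores the input exactly --
# the defining property of lossless compression.

def rle_encode(data: str) -> list[tuple[str, int]]:
    if not data:
        return []
    runs, current, count = [], data[0], 1
    for ch in data[1:]:
        if ch == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = ch, 1
    runs.append((current, count))
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    return "".join(ch * count for ch, count in runs)

if __name__ == "__main__":
    text = "AAAAABBBCCCCCCCCD"
    encoded = rle_encode(text)
    print(encoded)                       # [('A', 5), ('B', 3), ('C', 8), ('D', 1)]
    assert rle_decode(encoded) == text   # reconstruction is exact
```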

Fractal Image Compression

Introduction
The subject of this work is image compression with fractals. Today JPEG has become an industrial standard in image compression. Further research is being carried out in two areas: wavelet-based compression and fractal image compression. The fractal scheme was introduced by Michael F. Barnsley in the 1980s. His idea was that images could be compactly stored as iterated functions, which led to the development of the IFS scheme that forms the basis of fractal image compression. Further work in this area was conducted by A. Jacquin, a student of Barnsley, who published several papers on this subject. He was the first to publish an efficient algorithm based on local fractal systems. Fractal image compression has the following features: compression has high complexity; image decoding is fast; and high compression ratios can be achieved. These features enable applications such as computer encyclopedias, like the Microsoft atlas that came with this technology. The technology is relatively new.

Overview Of Image Compression

Images are stored as pixels, or picture-forming elements. Each pixel requires a certain amount of computer memory to be stored. Suppose we had to store a family album with, say, a hundred photos; storing this uncompressed in computer memory would cost, say, a few thousand dollars. This problem can be solved by image compression. The amount of data involved in a picture can be drastically reduced by employing image compression techniques. The human eye is insensitive to a wide variety of information loss. The redundancies in images cannot be easily detected, and certain minute details in pictures can also be eliminated while storing so as to reduce the amount of data; these can be further incorporated while reconstructing the image for minimum error. This is the basic idea behind image compression. Most image compression techniques are said to be lossy as they reduce the information being stored. The method presently employed consists of storing the image by eliminating the high-frequency Fourier coefficients and storing only the low-frequency coefficients. This is the principle behind the DCT transform, which forms the basis of the JPEG scheme of image compression. Barnsley suggested that storing images as iterated functions of parts of themselves leads to efficient image compression. In the middle of the 1980s this concept of IFS became popular. Barnsley and his colleagues at the Georgia Institute of Technology first observed the use of IFS in computerized graphics applications. They found that IFS was able to compress colour images up to 10,000 times. The compression contained two phases: first the image was partitioned into segments that were as self-similar as possible; then each part was described as an IFS with probabilities.
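
To see how compactly an IFS can describe a detailed, self-similar picture, the sketch below renders the attractor of the classic three-map Sierpinski IFS with the 'chaos game'. It is a textbook illustration of the IFS idea, not Barnsley's or Jacquin's actual image codec.

```python
# An iterated function system (IFS) stores an image as a few contractive affine maps.
# The 'chaos game' below renders the attractor of the classic three-map Sierpinski
# IFS -- a textbook illustration of how compactly an IFS describes a detailed,
# self-similar picture (it is not a real fractal image codec).
import random

MAPS = [  # each map: (x, y) -> (0.5 x + tx, 0.5 y + ty)
    (0.0, 0.0),
    (0.5, 0.0),
    (0.25, 0.5),
]

def chaos_game(n_points: int = 20000, size: int = 48) -> list[str]:
    grid = [[" "] * size for _ in range(size)]
    x, y = 0.0, 0.0
    for i in range(n_points):
        tx, ty = random.choice(MAPS)
        x, y = 0.5 * x + tx, 0.5 * y + ty
        if i > 20:                      # skip the first few transient points
            grid[int((1 - y) * (size - 1))][int(x * (size - 1))] = "*"
    return ["".join(row) for row in grid]

if __name__ == "__main__":
    print("\n".join(chaos_game()))      # prints a Sierpinski triangle in ASCII
```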

Computer Aided Process Planning

Introduction
Technological advances are reshaping the face of manufacturing, creating paperless manufacturing environments in which computer-automated process planning (CAPP) will play a preeminent role. There are two reasons for this effect: costs are declining, which encourages partnerships between CAD and CAPP developers, and access to manufacturing data is becoming easier to accomplish in multivendor environments. This is primarily because the increasing use of LANs, IGES and the like is facilitating transfer of data from one point to another on the network, while relational databases (RDBs) and the associated structured query language (SQL) allow distributed data processing and data access. With the introduction of computers in design and manufacturing, the process planning part needed to be automated. The shop-trained people who were familiar with the details of machining and other processes were gradually retiring, and these people would be unavailable in the future to do process planning. An alternative way of accomplishing this function was needed, and Computer Aided Process Planning (CAPP) was the alternative. Computer aided process planning was usually considered to be a part of computer aided manufacturing; however, computer aided manufacturing was a stand-alone system. In fact, a synergy results when CAM is combined with CAD to create CAD/CAM. In such a system CAPP becomes the direct connection between design and manufacturing. Moreover, the reliable knowledge-based computer-aided process planning application MetCAPP looks for the least costly plan capable of producing the design and continuously generates and evaluates plans until it is evident that none of the remaining plans will be any better than the best one seen so far. The goal is to find a useful, reliable solution to a real manufacturing problem in a safer environment. If alternative plans exist, a rating that includes safety conditions is used to select the best plan.

COMPUTER AIDED DESIGN (CAD)

A product must be defined before it can be manufactured. Computer Aided Design involves any type of design activity that makes use of the computer to develop, analyze or modify an engineering design. There are a number of fundamental reasons for implementing a computer aided design system.
a. To increase the productivity of the designer: This is accomplished by helping the designer to visualize the product and its component subassemblies and parts, and by reducing the time required in synthesizing, analyzing and documenting the design. This productivity improvement translates not only into lower design cost but also into shorter project completion times.
b. To improve the quality of the design: A CAD system permits a more thorough engineering analysis, and a larger number of design alternatives can be investigated. Design errors are also reduced through the greater accuracy provided by the system. These factors lead to a better design.
c. To improve communications: Use of a CAD system provides better engineering drawings, more standardization in the drawings, better documentation of the design, fewer drawing errors, and greater legibility.
d. To create a database for manufacturing: In the process of creating the documentation for the product design (geometries and dimensions of the product and its components, material specifications for components, bill of materials, etc.), much of the database required to manufacture the product is also created.
Design usually involves both creative and repetitive tasks. The repetitive tasks within design are very appropriate for computerization.

Space Shuttles and its Advancements

Definition
The successful exploration of space requires a system that will reliably transport payloads into space and return them back to earth without subjecting them to an uncomfortable or hazardous environment. In other words, the spacecraft and its payloads have to be recovered safely to the earth. The launch vehicles used in earlier times were not reusable, so NASA developed a reusable space shuttle that could launch like a rocket but deliver payloads and land like an aeroplane. Now NASA is planning to launch a series of air-breathing planes that would replace the space shuttle.

A Brief History Of The Space Shuttle

Near the end of the Apollo space program, NASA officials were looking at the future of the American space program. At that time, the rockets used to place astronauts and equipment in outer space were one-shot disposable rockets. What they needed was a reliable, but less expensive, rocket, perhaps one that was reusable. The idea of a reusable "space shuttle" that could launch like a rocket but deliver and land like an airplane was appealing and would be a great technical achievement. NASA began design, cost and engineering studies on a space shuttle, and many aerospace companies also explored the concepts. In 1972 NASA announced that it would develop a reusable space shuttle, or space transportation system (STS). NASA decided that the shuttle would consist of an orbiter attached to solid rocket boosters and an external fuel tank because this design was considered safer and more cost effective. At that time, spacecraft used ablative heat shields that would burn away as the spacecraft re-entered the Earth's atmosphere. However, to be reusable, a different strategy would have to be used. The designers of the space shuttle came up with the idea of covering the space shuttle with many insulating ceramic tiles that could absorb the heat of re-entry without harming the astronauts. Finally, after many years of construction and testing (of the orbiter, main engines, external fuel tank and solid rocket boosters), the shuttle was ready to fly. Four shuttles were made (Columbia, Discovery, Atlantis, Challenger). The first flight was in 1981 with the space shuttle Columbia, piloted by astronauts John Young and Robert Crippen. Columbia performed well and the other shuttles soon made several successful flights. The space shuttle consists of the following major components: two solid rocket boosters (SRB), critical for the launch; the external fuel tank (ET), which carries fuel for the launch; and the orbiter, which carries the astronauts and payload.

The Space Shuttle Mission

A typical shuttle mission lasts seven to eight days, but can extend to as much as 14 days depending upon the objectives of the mission. A typical shuttle mission is as follows:
1. Getting into orbit: launch (the shuttle lifts off the launching pad), ascent, and the orbital maneuvering burn.
2. Orbit: life in space.
3. Re-entry.
4. Landing.

Space Robotics

Definition
A robot is a system with a mechanical body, using a computer as its brain. Integrating the sensors and actuators built into the mechanical body, the motions are realised with computer software to execute the desired task. Robots are more flexible in terms of their ability to perform new tasks or to carry out complex sequences of motion than other categories of automated manufacturing equipment. Today there is a lot of interest in this field and a separate branch of technology, 'robotics', has emerged. It is concerned with all problems of robot design, development and applications. The technology to substitute for or support manned activities in space is called space robotics. Various applications of space robots are the inspection of a defective satellite, its repair, the construction of a space station, the supply of goods to this station, its retrieval, and so on. With the overlap of knowledge of kinematics, dynamics and control, and progress in fundamental technologies, it is about to become possible to design and develop advanced robotic systems. And this will throw open the doors to explore and experience the universe and bring countless changes for the better in the ways we live.

Areas Of Application

Space robot applications can be classified into the following four categories:
1. In-orbit positioning and assembly: for deployment of satellites and for assembly of modules to a satellite or space station.
2. Operation: for conducting experiments in a space lab.
3. Maintenance: for removal and replacement of faulty modules or packages.
4. Resupply: for supply of equipment and materials for experimentation in a space lab and for the resupply of fuel.

The following examples give specific applications under the above categories.
Scientific experimentation: conduct experiments in space labs, which may include metallurgical experiments that may be hazardous, astronomical observations, and biological experiments.
Assist crew in space station assembly: assist in deployment and assembly outside the station; assist the crew inside the space station with routine crew functions and maintaining the life support system.
Space servicing functions: refueling; replacement of faulty modules; assisting a jammed mechanism, say a solar panel or antenna.
Spacecraft enhancements: replace payloads with an upgraded module; attach extra modules in space.
Space tug: grab a satellite and effect orbital transfer; efficient transfer of satellites from low earth orbit to geostationary orbit.

Welding Robots

Definition
Welding technology has obtained access to virtually every branch of manufacturing; to name a few: bridges, ships, railroad equipment, building construction, boilers, pressure vessels, pipelines, automobiles, aircraft, launch vehicles, and nuclear power plants. Especially in India, welding technology needs constant upgrading, particularly in the field of industrial and power generation boilers, high-voltage generation equipment and transformers, and in the nuclear and aerospace industries. Computers have already entered the field of welding, and the situation today is that the welding engineer who has little or no computer skills will soon be hard-pressed to meet the welding challenges of our technological times. In order for the computer solution to be implemented, educational institutions cannot escape their share of the responsibility. Automation and robotics are two closely related technologies. In an industrial context, we can define automation as a technology that is concerned with the use of mechanical, electronic and computer-based systems in the operation and control of production. Examples of this technology include transfer lines, mechanized assembly machines, feedback control systems, numerically controlled machine tools, and robots. Accordingly, robotics is a form of industrial automation. There are three broad classes of industrial automation: fixed automation, programmable automation, and flexible automation. Fixed automation is used when the volume of production is very high and it is therefore appropriate to design specialized equipment to process the product very efficiently and at high production rates. A good example of fixed automation can be found in the automobile industry, where highly integrated transfer lines consisting of several dozen work stations are used to perform machining operations on engine and transmission components. The economics of fixed automation are such that the cost of the special equipment can be divided over a large number of units, and the resulting unit costs are low relative to alternative methods of production. The risk encountered with fixed automation is this: since the initial investment cost is high, if the volume of production turns out to be lower than anticipated, then the unit costs become greater than anticipated. Another problem with fixed automation is that the equipment is specially designed to produce one product, and after that product's life cycle is finished, the equipment is likely to become obsolete. For products with short life cycles, the use of fixed automation represents a big gamble. Programmable automation is used when the volume of production is relatively low and there is a variety of products to be made. In this case, the production equipment is designed to be adaptable to variations in product configuration. This adaptability feature is accomplished by operating the equipment under the control of a "program" of instructions which has been prepared especially for the given product. The program is read into the production equipment, and the equipment performs the particular sequence of processing operations to make that product. In terms of economics, the cost of the programmable equipment can be spread over a large number of products even though the products are different. Because of the programming feature, and the resulting adaptability of the equipment, many different and unique products can be made economically in small batches.

Sensotronic Brake Control

Definition
Sensotronic Brake Control (SBC) works electronically, and thus faster and more precisely, than a conventional hydraulic braking system. As soon as you press the brake pedal and the sensors identify the driving situation in hand, the computer makes an exact calculation of the brake force necessary and distributes it between the wheels as required. This allows SBC to critically reduce stopping distances. SBC also helps to optimise safety functions such as ESP, ASR, ABS and BAS. With Sensotronic Brake Control, electric impulses are used to pass the driver's braking commands onto a microcomputer which processes various sensor signals simultaneously and, depending on the particular driving situation, calculates the optimum brake pressure for each wheel. As a result, SBC offers even greater active safety than conventional brake systems when braking in a corner or on a slippery surface. A high-pressure reservoir and electronically controllable valves ensure that maximum brake pressure is available much sooner. Moreover, the system offers innovative additional functions to reduce the driver's workload. These include Traffic Jam Assist, which brakes the vehicle automatically in stop-and-go traffic once the driver takes his or her foot off the accelerator. The Soft-Stop function - another first - allows particularly soft and smooth stopping in town traffic. When drivers hit the brake pedal today, their foot moves a piston rod which is linked to the brake booster and the master brake cylinder. Depending on the pedal force, the master brake cylinder builds up the appropriate amount of pressure in the brake lines which - in a tried and tested interaction of mechanics and hydraulics - then presses the brake pads against the brake discs via the wheel cylinders. By contrast, in the Mercedes-Benz Sensotronic Brake Control, a large number of mechanical components are simply replaced by electronics. The brake booster will not be needed in future either. Instead, sensors gauge the pressure inside the master brake cylinder as well as the speed with which the brake pedal is operated, and pass these data to the SBC computer in the form of electric impulses. To provide the driver with the familiar brake feel, engineers have developed a special simulator which is linked to the tandem master cylinder and which moves the pedal using spring force and hydraulics. In other words, during braking the actuation unit is completely disconnected from the rest of the system and serves the sole purpose of recording any given brake command. Only in the event of a major fault or power failure does SBC automatically fall back on the tandem master cylinder and instantly establish a direct hydraulic link between the brake pedal and the front wheel brakes in order to decelerate the car safely. The central control unit under the bonnet is the centrepiece of the electrohydraulic brake. This is where the interdisciplinary interaction of mechanics and electronics provides its greatest benefits: the microcomputer, software, sensors, valves and electric pump work together and allow totally novel, highly dynamic brake management. In addition to the data relating to the brake pedal actuation, the SBC computer also receives the sensor signals from the other electronic assistance systems.

For example, the anti-lock braking system (ABS) provides information about wheel speed, while the Electronic Stability Program (ESP) makes available the data from its steering angle, turning rate and transverse acceleration sensors. The transmission control unit, finally, uses the data highway to communicate the currently selected driving range. The result of these highly complex calculations is rapid brake commands which ensure optimum deceleration and driving stability as appropriate to the particular driving scenario. What makes the system even more sophisticated is the fact that SBC calculates the brake force separately for each wheel.
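To make the idea of a per-wheel brake-force calculation concrete, the sketch below shows one minimal, hypothetical control step in Python. The sensor names, gains, front/rear split and cornering correction are invented for illustration; they are not Mercedes-Benz's actual SBC algorithm.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    pedal_pressure_bar: float   # pressure measured in the master cylinder
    pedal_speed: float          # how quickly the pedal is being pressed (0..1)
    steering_angle_deg: float   # assumed convention: negative = turning left

def brake_pressures(frame: SensorFrame, max_pressure_bar: float = 140.0) -> list:
    """Toy per-wheel pressure calculation (illustrative gains, not the real SBC logic)."""
    # Base demand: pedal pressure boosted by how quickly the driver braked.
    demand = min(frame.pedal_pressure_bar * (1.0 + 0.5 * frame.pedal_speed), max_pressure_bar)

    # Simple front/rear split: the front wheels carry more of the braking load.
    split = [0.7, 0.7, 0.3, 0.3]                       # FL, FR, RL, RR

    # Cornering correction (toy): slightly reduce pressure on the inside wheels.
    left_turn = frame.steering_angle_deg < 0
    inner = [left_turn, not left_turn, left_turn, not left_turn]
    corner = [0.9 if i else 1.0 for i in inner]

    return [demand * s * c for s, c in zip(split, corner)]

# Example: moderate braking while turning slightly left.
print(brake_pressures(SensorFrame(40.0, 0.6, -15.0)))
```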

Mobile IP

Introduction
While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate the increasing use of mobile computers. A promising technology for eliminating this barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community. 3G networks will provide sufficient bandwidth to run most business computer applications while still providing a reasonable user experience. However, 3G networks are not based on a single standard but on a set of radio technology standards such as cdma2000, EDGE and WCDMA. It is easy to foresee that the mobile user will from time to time also want to connect to fixed broadband networks, wireless LANs and mixtures of new technologies such as Bluetooth attached to, for example, cable TV and DSL access points. In this light, a common macro-mobility management framework is required in order to allow mobile users to roam between different access networks with little or no manual intervention. (Micro-mobility issues, such as radio-specific mobility enhancements, are assumed to be handled within the specific radio technology.) The IETF has created the Mobile IP standard for this purpose. Mobile IP differs from other mobility-management efforts in that it is not tied to one specific access technology. In earlier mobile cellular standards, such as GSM, radio resource and mobility management were integrated vertically into one system. The same is true for mobile packet data standards such as CDPD (Cellular Digital Packet Data) and the internal packet data mobility protocol (GTP/MAP) of GPRS/UMTS networks. This vertical mobility-management property is also inherent in the increasingly popular 802.11 wireless LAN standard. Mobile IP can be seen as the least common mobility denominator, providing seamless macro-mobility solutions across this diversity of accesses. Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network the mobile client is visiting. Depending on which network the mobile client is currently visiting, its point of attachment (Foreign Agent) may change. At each point of attachment, Mobile IP requires either the availability of a standalone Foreign Agent or the use of a co-located care-of address in the mobile client itself. The concept of "mobility", or "packet data mobility", means different things depending on the context in which the word is used. In a wireless or fixed environment, there are many different ways of implementing partial or full mobility and roaming services. The most common ways of implementing mobility (discrete mobility or an IP roaming service) in today's IP networking environments include simple PPP dial-up as well as company-internal mobility solutions implemented by renewing the IP address at each new point of attachment. The most commonly deployed way of supporting remote access users in today's Internet is to use the public telephone network (fixed or mobile) together with PPP dial-up.
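As a rough illustration of the Home Agent / Foreign Agent split described above, the Python sketch below models the bookkeeping a home agent might do: it keeps a binding from the mobile node's home address to its current care-of address and "tunnels" packets accordingly. This is a conceptual toy, not the actual Mobile IP message formats or encapsulation defined by the IETF.

```python
class HomeAgent:
    """Toy anchor point: maps a mobile node's home address to its current care-of address."""
    def __init__(self):
        self.bindings = {}  # home_address -> care_of_address

    def register(self, home_address, care_of_address):
        # In real Mobile IP this is a Registration Request/Reply exchange with lifetimes.
        self.bindings[home_address] = care_of_address

    def forward(self, packet):
        care_of = self.bindings.get(packet["dst"])
        if care_of is None:
            return packet                      # node is at home: deliver normally
        # Encapsulate ("tunnel") the packet towards the foreign network.
        return {"outer_dst": care_of, "inner": packet}

ha = HomeAgent()
ha.register("10.0.0.7", "192.0.2.33")          # mobile node roams to a foreign network
print(ha.forward({"src": "198.51.100.1", "dst": "10.0.0.7", "data": "hello"}))
```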

Power System Contingencies

Introduction
Power system voltage control has a hierarchical structure with three levels: primary, secondary and tertiary voltage control. Over the past 20 years, one of the most successful measures proposed to improve power system voltage regulation has been the application of secondary voltage control, initiated by the French electricity company EDF and followed by several other electricity utilities in European countries. Secondary voltage control closes the control loop around the reference-value settings of the controllers at the primary level. Its primary objective is to achieve better voltage regulation in power systems. In addition, it brings the extra benefit of improved power system voltage stability, and several methods for designing secondary voltage controllers have been proposed for this application. Here the concept of secondary voltage control is explored for a new application: the elimination of voltage violations during power system contingencies. For this particular application, the coordination of the various secondary voltage controllers is proposed to be based on a multi-agent, request-and-answer type of protocol between any two agents; without such coordination, secondary voltage control can only cover the locations where voltage controllers are installed. This paper presents the results of significant progress in investigating this new application of secondary voltage control to eliminate voltage violations in power system contingencies. A collaboration protocol, expressed graphically as a finite state machine, is proposed for the coordination among multiple FACTS voltage controllers. The coordinated secondary voltage control is intended to cover multiple locations so as to eliminate voltage violations at locations adjacent to a voltage controller. A novel scheme of learning fuzzy logic control is proposed for the design of the secondary voltage controller, and a key parameter of the learning fuzzy logic controller is trained through off-line simulation with the injection of artificial loads at the controller's adjacent locations. FACTS (Flexible AC Transmission Systems): sudden changes in power demand or in system conditions are often followed by prolonged electromechanical oscillations leading to power system instability. AC transmission lines are dominantly reactive networks, characterized by their per-mile series inductance and shunt capacitance. Suitably changing the line impedance, and thus the real and reactive power flow through the transmission line, is an effective measure for controlling power system oscillations and thereby improving system stability. Advances in high-power semiconductors and sophisticated electronic control technologies have led to the development of FACTS. Through FACTS, the effective line impedance can be controlled within a few milliseconds. Damping of power system oscillations is possible through effective changes in the line impedance by employing FACTS members (SVC, STATCOM, UPFC, etc.).
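The sketch below is a highly simplified Python illustration of the statement that secondary voltage control "closes the control loop" around the primary-level setpoints: an integral-type secondary controller nudges the setpoints of the primary (AVR/FACTS) controllers whenever a monitored pilot-bus voltage leaves its limits. The gains, limits and the single-bus view are assumptions for illustration only, not the coordination protocol or fuzzy controller proposed in the text.

```python
def secondary_voltage_control(v_pilot, v_ref, setpoints, k_i=0.05, v_min=0.95, v_max=1.05):
    """One coordination step: adjust primary-controller setpoints (per unit)
    when the pilot-bus voltage violates its limits. Purely illustrative."""
    if v_min <= v_pilot <= v_max:
        return setpoints                       # no violation: leave the primary level alone
    error = v_ref - v_pilot
    # Distribute the same integral correction to every participating controller,
    # clamped to an assumed setpoint range of 0.9..1.1 pu.
    return [min(1.1, max(0.9, s + k_i * error)) for s in setpoints]

# A contingency drags the pilot bus down to 0.93 pu; three controllers respond.
setpoints = [1.00, 1.02, 1.01]
for _ in range(5):      # a few slow secondary-control cycles (network response not modelled)
    setpoints = secondary_voltage_control(0.93, 1.00, setpoints)
print(setpoints)
```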

Lightning Protection Using LFA-M

Introduction
A new, simple, effective and inexpensive method for lightning protection of medium-voltage overhead distribution lines is the use of long flashover arresters (LFA). A new long flashover arrester model, designated LFA-M, has been developed. It offers a great number of technical and economic advantages. The important feature of this modular long flashover arrester (LFA-M) is that it can be applied for lightning protection of overhead distribution lines against both induced overvoltages and direct lightning strokes. Induced overvoltages can be counteracted by installing a single arrester on an overhead line support (pole). For the protection of lines against direct lightning strokes, the arresters are connected between the poles and all of the phase conductors, in parallel with the insulators. Lightning is an electrical discharge between a cloud and the earth, between clouds, or between the charge centres of the same cloud. Lightning is a huge spark that takes place when a cloud is charged to such a high potential with respect to earthed objects (e.g. overhead lines) or a neighbouring cloud that the dielectric strength of the intervening medium (air) breaks down. Types of lightning strokes: there are two main ways in which lightning may strike the power system, namely 1. the direct stroke and 2. the indirect stroke. Direct stroke: in a direct stroke, the lightning discharge passes directly from the cloud to the overhead line. From the line, the current path may be over the insulators and down the pole to the ground. The overvoltage set up by the stroke may be large enough to flash over this path directly to ground. A direct stroke can be of two types: stroke A and stroke B. In stroke A, the lightning discharge is from the cloud to the subject equipment (e.g. the overhead line). The cloud induces a charge of opposite sign on the tall object. When the potential between the cloud and the line exceeds the breakdown value of air, a lightning discharge occurs between the cloud and the line. In stroke B, the lightning discharge occurs on the overhead line as the result of a stroke of type A between clouds. Consider three clouds P, Q and R carrying positive, negative and positive charge respectively.

The charge on cloud Q is bound by cloud R. If cloud P drifts too near to cloud Q, a lightning discharge will occur between them and the charges on both these clouds disappear quickly. The result is that the charge on cloud R suddenly becomes free and then discharges rapidly to earth, ignoring tall objects.

Wideband Sigma Delta PLL Modulator

Introduction
The proliferation of wireless products over the past few years has been rapid. New wireless standards such as GPRS and HSCSD have brought new challenges to wireless transceiver design. One pivotal component of the transceiver is the frequency synthesizer. Two major requirements in mobile applications are efficient utilization of the frequency spectrum, by narrowing the channel spacing, and fast switching for high data rates. This can be achieved by using a fractional-N PLL architecture. Fractional-N synthesizers are capable of synthesizing frequencies at channel spacings smaller than the reference frequency. This allows the reference frequency to be increased, which also reduces the PLL's lock time. A fractional-N PLL has the disadvantage that it generates spurious tones at multiples of the channel spacing. Using digital sigma-delta modulation techniques, the frequency division ratio can be randomized so that the quantization noise of the divider is transferred to high frequencies, thereby eliminating the spurs. Conventional PLL: the advantages of the conventional sigma-delta PLL modulator are that it offers fine frequency resolution, wide tuning bandwidth and fast switching speed. But it has insufficient loop bandwidth for current wireless standards such as GSM, so it cannot be used as a closed-loop modulator for the Digital Enhanced Cordless Telecommunications (DECT) standard: it efficiently filters out quantization noise and reference feedthrough only for sufficiently small loop bandwidths. Wide-band PLL: for wider-bandwidth applications the loop bandwidth is increased, but this causes residual spurs to appear. This is because the requirement that the quantization noise be uniformly distributed is violated: since the modulator is used for frequency synthesis, its input is a DC value, which produces tones even when higher-order modulators are used. With a single-bit output the level of quantization noise is low, but with multi-bit outputs the quantization noise increases. The stable range of the modulator is therefore reduced, which results in a reduction of the tuning range. Moreover, the hardware complexity of this modulator is higher than that of a MASH modulator. In this feedback/feedforward modulator the loop bandwidth was limited to nearly three orders of magnitude less than the reference frequency, so if it is to be used as a closed-loop modulator the power dissipation will increase. In order to widen the loop bandwidth, the close-in phase noise must be kept within tolerable levels and the rise of the quantization noise must also be limited to meet the phase noise requirements at high frequency offsets. At low frequencies, or DC, the modulator transfer function has a zero, which results in added phase noise. For that reason the zero is moved away from DC to a frequency equal to some multiple of the fractional division ratio. This introduces a notch at that frequency, which reduces the total quantization noise. The quantization noise of the modified modulator is then 1.7 times and 4.25 times smaller than that of the MASH modulator. At higher frequencies the quantization noise causes distortion in the response. This is because the step size of the multi-bit modulator is the same as that of the single-bit modulator, so more phase distortion occurs in multi-bit PLLs. To reduce the quantization noise at high frequencies, the step size is reduced by producing fractional division ratios. This is achieved by using a phase-selection divider instead of the control logic of the conventional modulator. This divider produces phase-shifted versions of the VCO signal and changes the division ratio by selecting different phases of the VCO. This type of divider can produce quarter division ratios.
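To illustrate how the divider ratio is dithered to reach a fractional average, the Python sketch below implements a first-order modulator (a plain accumulator) whose carry output switches an integer divider between N and N+1 so that the average division ratio equals the desired fraction. The accumulator's pattern is periodic, which is precisely why it produces the spurious tones mentioned above and why practical synthesizers use the higher-order sigma-delta modulators discussed in this section to spread and shape the quantization noise.

```python
def fractional_n_divide_ratios(n_int, frac, cycles):
    """First-order accumulator dithering of a PLL feedback divider.
    n_int: integer part of the division ratio, frac: fractional part in [0, 1)."""
    acc = 0.0
    ratios = []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:          # accumulator overflow -> divide by N+1 this cycle
            acc -= 1.0
            ratios.append(n_int + 1)
        else:                   # otherwise divide by N
            ratios.append(n_int)
    return ratios

ratios = fractional_n_divide_ratios(n_int=64, frac=0.25, cycles=1000)
print(sum(ratios) / len(ratios))   # average division ratio -> approximately 64.25
```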

Bioinformatics

Introduction
Rapid advances in bioinformatics are providing new hope to patients with life-threatening diseases. Gene chips will be able to screen for heart attack and diabetes years before patients develop symptoms. In the near future, patients will go to a doctor's clinic with lab-on-a-chip devices. Such a device will inform the doctor in real time whether the patient's ailment will respond to a drug, based on his DNA. These tools will help doctors diagnose life-threatening illness faster, eliminating expensive, time-consuming ordeals like biopsies and sigmoidoscopies. Gene chips reclassify diseases based on their underlying molecular signals rather than misleading surface symptoms. The chip could also confirm the patient's identity and even establish paternity. Bioinformatics is an interdisciplinary research area: a fusion of computing, biotechnology and the biological sciences. Bioinformatics is poised to be one of the most prodigious growth areas of the next two decades. Being the interface between the most rapidly advancing fields of biological and computational sciences, it is immense in scope and vast in applications. Bioinformatics is the study of biological information as it passes from its storage site in the genome to the various gene products in the cell. It involves the creation and application of computational technologies for problems in molecular biology. As such, it deals with methods for storing, retrieving and analyzing biological data, such as nucleic acid (DNA/RNA) and protein sequences, structures, functions, pathways and interactions. The science of bioinformatics, the melding of molecular biology with computer science, is essential to the use of genomic information in understanding human diseases and in the identification of new molecular targets for drug discovery. New discoveries are being made in the field of genomics, an area of study which looks at the DNA sequence of an organism in order to determine which genes code for beneficial traits and which genes are involved in inherited diseases. If you are not tall enough, your stature could be altered accordingly; if you are weak and not strong enough, your physique could be improved. If you think this is the script for a science fiction movie, you are mistaken: it is the future reality. Evolution of bioinformatics: DNA is the genetic material of organisms. It contains all the information needed for the development and existence of an organism. The DNA molecule is formed of two long polynucleotide chains which are spirally coiled around each other, forming a double helix; it thus has the form of a spirally twisted ladder. DNA is a molecule made from sugar, phosphate and bases. The bases are guanine (G), cytosine (C), adenine (A) and thymine (T). Adenine pairs only with thymine, and guanine pairs only with cytosine. Various combinations of these bases make up DNA, for example AAGCT, CCAGT, TACGGT and so on; a vast number of combinations is possible. A gene is a sequence of DNA that represents a fundamental unit of heredity. The human genome consists of approximately 30,000 genes, containing approximately 3 billion base pairs.
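Since the paragraph above spells out the base-pairing rules (A with T, G with C), here is a small Python example of one of the most basic bioinformatics operations built on those rules: computing the reverse complement of a DNA sequence. The example sequence is arbitrary.

```python
# Base-pairing rules from the text: adenine-thymine, guanine-cytosine.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (the opposite strand, read 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

print(reverse_complement("AAGCT"))   # -> AGCTT
```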

Extreme Ultraviolet Lithography

Introduction
Silicon has been at the heart of the world's technology boom for nearly half a century, but microprocessor manufacturers have all but squeezed the life out of it. The current technology used to make microprocessors will begin to reach its limits around 2005. At that time, chipmakers will have to look to other technologies to cram more transistors onto silicon and create more powerful chips. Many are already looking at extreme-ultraviolet lithography (EUVL) as a way to extend the life of silicon at least until the end of the decade. Potential successors to optical projection lithography are being aggressively developed; these are known as "Next-Generation Lithographies" (NGLs). EUV lithography (EUVL) is one of the leading NGL technologies; others include x-ray lithography, ion-beam projection lithography, and electron-beam projection lithography. Using extreme-ultraviolet (EUV) light to carve transistors in silicon wafers will lead to microprocessors that are up to 100 times faster than today's most powerful chips, and to memory chips with similar increases in storage capacity. Extreme ultraviolet lithography is an advanced technology for making microprocessors a hundred times more powerful than those made today, and it is one of the technologies vying to replace the optical lithography used to make today's microcircuits. It works by projecting intense beams of extreme-ultraviolet light, reflected from a mask carrying the circuit design pattern, onto a silicon wafer. EUVL is similar to optical lithography, in which light is refracted through camera lenses onto the wafer. However, extreme-ultraviolet light, operating at a much shorter wavelength, has different properties and must be reflected from mirrors rather than refracted through lenses. The challenge is to build mirrors perfect enough to reflect the light with sufficient precision. EUV radiation: ultraviolet radiation is short-wavelength, high-energy radiation; if the wavelength is reduced further, it becomes extreme-ultraviolet radiation. Current lithography techniques have been pushed just about as far as they can go. They use light in the deep-ultraviolet range, at about 248-nanometer wavelengths, to print 150- to 120-nanometer-size features on a chip (a nanometer is a billionth of a meter). In the next half-dozen years, manufacturers plan to make chips with features measuring from 100 to 70 nanometers, using deep-ultraviolet light of 193- and 157-nanometer wavelengths. Beyond that point, smaller features require wavelengths in the extreme-ultraviolet (EUV) range; light at these wavelengths is absorbed, instead of transmitted, by conventional lenses. Lithography: computers have become much more compact and increasingly powerful largely because of lithography, a basically photographic process that allows more and more features to be crammed onto a computer chip. Lithography is akin to photography in that it uses light to transfer images onto a substrate. Light is directed onto a mask - a sort of stencil of an integrated circuit pattern - and the image of that pattern is then projected onto a semiconductor wafer covered with light-sensitive photoresist. Creating circuits with smaller and smaller features has required using shorter and shorter wavelengths of light.
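The scaling argument in this section (shorter wavelength, smaller printable feature) is usually summarized by the Rayleigh resolution criterion. The relation below is standard optics rather than something stated in this text, and the values of the process factor k1 and the numerical aperture NA are assumptions chosen only to give a representative number for 193 nm light.

```latex
R \;=\; k_1\,\frac{\lambda}{\mathrm{NA}}
\qquad\text{e.g.}\quad R \approx 0.4 \times \frac{193\ \mathrm{nm}}{0.6} \approx 130\ \mathrm{nm}
```

At the commonly cited EUV wavelength of roughly 13 nm, the same relation yields printable features far below the 70-nanometer figure quoted above, which is the motivation for moving to EUV despite the need for all-reflective optics.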

Animatronics

Introduction
The first use of Audio-Animatronics was for Walt Disney's Enchanted Tiki Room in Disneyland, which opened in June 1963. The Tiki birds were operated using digital controls; that is, something that is either on or off. Tones were recorded onto tape, which on playback would cause a metal reed to vibrate. The vibrating reed would close a circuit and thus operate a relay. The relay sent a pulse of energy (electricity) to the figure's mechanism, which would cause a pneumatic valve to operate, resulting in an action such as the opening of a bird's beak. Each action (e.g., opening of the mouth) had a neutral position, otherwise known as the "natural resting position" (e.g., in the case of the Tiki bird, the mouth being closed). When there was no pulse of energy forthcoming, the action would be in, or return to, the natural resting position. This digital/tone-reed system used pneumatic valves exclusively; that is, everything was operated by air pressure. Audio-Animatronics movements operated with this system had two limitations. First, the movement had to be simple - on or off (e.g., the open and shut beak of a Tiki bird or the blink of an eye, as compared to the many different positions involved in raising and lowering an arm). Second, the movements couldn't require much force or power (e.g., the energy needed to open a Tiki bird's beak could easily be obtained using air pressure, but in the case of lifting an arm, the pneumatic system didn't provide enough power to accomplish the lift). Walt and WED knew that this pneumatic system could not sufficiently handle the more complicated shows of the World's Fair, so a new system was devised. In addition to the digital programming of the Tiki show, the Fair shows required analog programming. This new "analog system" involved the use of voltage regulation. The tone would be on constantly throughout the show, and the voltage would be varied to create the movement of the figure. This "varied voltage" signal was sent to what was referred to as the "black box." The black boxes contained the electronic equipment that would receive the signal and then activate the pneumatic and hydraulic valves that moved the performing figures. The use of hydraulics allowed for a substantial increase in power, which was needed for the more unwieldy and demanding movements. (Hydraulics were used exclusively with the analog system, and pneumatics were used only with the tone-reed/digital system.) There were two basic ways of programming a figure. The first used two different methods of controlling the voltage regulation. One was a joystick-like device called a transducer, and the other was a potentiometer (an instrument for measuring an unknown voltage or potential difference by comparison to a standard voltage - like the volume control knob on a radio or television receiver). If this method was used, when a figure was ready to be programmed, each individual action - one at a time - would be refined, rehearsed, and then recorded. For instance, the programmer, through the use of the potentiometer or transducer, would repeatedly rehearse the gesture of lifting the arm until it was ready for a "take."
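As a rough illustration of the difference the text draws between the digital (on/off tone-reed) control and the later analog (varied-voltage) control, the short Python sketch below maps a recorded control signal to an actuator command under both schemes. The voltage range and the command names are invented for the example.

```python
def digital_command(tone_present: bool) -> str:
    """Tone-reed style control: an action is either fully on or in its natural resting position."""
    return "OPEN_BEAK" if tone_present else "REST"

def analog_command(control_voltage: float, v_min=0.0, v_max=10.0) -> float:
    """Varied-voltage style control: the actuator position follows the voltage proportionally."""
    v = max(v_min, min(v_max, control_voltage))
    return (v - v_min) / (v_max - v_min)        # 0.0 = resting, 1.0 = fully raised

print(digital_command(True))      # -> OPEN_BEAK
print(analog_command(6.5))        # -> 0.65 (arm lifted to 65% of its travel)
```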

Molecular Electronics

Introduction
Will silicon technology become obsolete in the future, as vacuum-valve technology did about 50 years ago? Scientists and technologists are working on a new field of electronics known as molecular electronics, which emerged as an important area of research only in the 1980s. It was through the efforts of the late Professor Carter of the U.S.A. that the field was born. Conventional electronics technology is much indebted to integrated circuit (IC) technology, one of the important developments that brought about a revolution in electronics. With the gradually increasing scale of integration, the electronics age has passed through SSI (small scale integration), MSI (medium scale integration), LSI (large scale integration), VLSI (very large scale integration) and ULSI (ultra large scale integration). These may be respectively classified as integration technologies with 1-12 gates, 12-30 gates, 30-300 gates, 300-10,000 gates, and beyond 10,000 gates on a single chip. The density of IC technology has been increasing in pace with Moore's famous law of 1965. To date, Moore's law - the doubling of the number of components on an IC every year - has held good. He wrote in his original paper, entitled 'Cramming More Components Onto Integrated Circuits', that "the complexity for minimum component costs has increased at a rate of roughly a factor of 2 per year. Certainly, over the short term, this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe that it will not remain constant for at least ten more years." It is now over 30 years since Moore stated this so-called technology mantra. ICs have been found to follow his law, and there is a prediction that Moore's law will remain valid until 2010; the prediction was based on a survey of industries and is believed to be correct in the light of research into the properties of semiconductors and production processes. But beyond ULSI, a new technology may become competitive with semiconductor technology. This new technology is known as molecular electronics. Semiconductor integration beyond ULSI through conventional electronic technology faces fundamental physical limitations such as quantum effects. Molecular-based electronics can overcome the fundamental physical and economic issues limiting silicon technology. For a scaling technology beyond ULSI, Prof. Forrest Carter put forward a novel idea. In digital electronics, the 'YES' and 'NO' states are usually implemented, respectively, by the 'ON' and 'OFF' conditions of a switching transistor. Prof. Carter postulated that instead of using a transistor, a molecule (a single molecule or a small aggregate of molecules) might be used to represent the two states, YES and NO, of digital electronics. For example, one could use the positive spin and negative spin of a molecule to represent the 'YES' and 'NO' states of binary logic respectively. As in this new concept a molecule rather than a transistor is proposed to be used, the scaling technology may go down to the molecular scale; it is therefore called MSE (molecular scale electronics). MSE is far beyond ULSI technology in terms of scaling. In order to support his postulate, Prof. Carter organized a number of international conferences on the subject, and the outcome of these conferences has been to establish the field of molecular electronics.
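The "factor of 2 per year" claim quoted above is easy to turn into a concrete projection. The tiny Python sketch below simply compounds that doubling rate from an assumed starting count; the starting value and time spans are arbitrary illustrations, not historical data.

```python
def projected_components(start_count: int, years: int, doublings_per_year: float = 1.0) -> int:
    """Compound Moore's-law growth: the count doubles 'doublings_per_year' times each year."""
    return int(start_count * 2 ** (years * doublings_per_year))

# Starting from an assumed 1,000 components, a doubling every year gives:
for y in (1, 5, 10):
    print(y, "years ->", projected_components(1000, y))
# 1 year -> 2,000   5 years -> 32,000   10 years -> 1,024,000
```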

Cellonics Technology

Introduction
In digital communication, Cellonics offers a fundamental change to the way modem solutions have traditionally been designed and built. Cellonics technology introduces a simple and swift Carrier-Rate Decoding solution for receiving and decoding a modulated signal. It encodes and decodes signals at one symbol per carrier cycle, a feature not found elsewhere. Its simplicity could make obsolete the superheterodyne receiver design that has been in use since its invention by Major Edwin Armstrong in 1918; in fact, according to one estimate, 98% of the world's radio systems are still based on this superhet design. Cellonics Inc. has invented and patented a number of circuits that mimic the biological cell behaviour described below. The Cellonics circuits are remarkably simple, with the advantages of low cost, low power consumption and small size. When applied to communication, the Cellonics technology is a fundamental modulation and demodulation technique. Cellonics receivers are used as devices that generate pulses from the received analog signal and perform demodulation based on pulse counting. Birth of Cellonics: for the last 60 years, the way radio receivers are designed and built has undergone amazingly little change. Much of the current approach can be attributed to E. H. Armstrong, the oft-credited father of FM, who invented the superheterodyne method in 1918 and further developed it into a complete commercial FM system in 1933 for use in public radio broadcasting. Today, more than 98% of receivers in radios, televisions and mobile phones use this method. The subsystems used in the superhet design consist of radio-frequency (RF) amplifiers, mixers, phase-locked loops, filters, oscillators and other components, which are complex, noisy and power-hungry. Capturing a communications signal from the air to retrieve its modulated information is not easy, and a system often needs to spend thousands of carrier cycles to recover just one bit of information. This process of demodulation is inefficient, and newly emerging schemes result in complex chips that are difficult and expensive to manufacture. So it was necessary to invent a new demodulation circuit which does the job of the conventional superheterodyne receiver but with a far lower component count, faster operation, lower power consumption and greater signal robustness. These requirements were met by designing a circuit which models biological cell behaviour. The technology, named Cellonics, was invented by scientists from the CWC (Centre for Wireless Communications) and the Computational Science Department in Singapore. Principles of the technology: the Cellonics technology is a revolutionary and unconventional approach based on the theory of nonlinear dynamical systems and modelled after biological cell behaviour. When used in the field of communication, the technology has the ability to encode, transmit and decode digital information powerfully over a variety of physical channels, be they cables or wireless links through the air.
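The description above of receivers that "generate pulses from the received analog signal and perform demodulation based on pulse counting" can be illustrated with a toy Python decoder: it counts pulses inside each symbol interval and maps the count back to a symbol. The pulse-count-to-symbol mapping here is invented purely for illustration and is not Cellonics' proprietary scheme.

```python
def demodulate_by_pulse_count(pulse_times, symbol_period, counts_to_symbol):
    """Group pulse timestamps into symbol intervals and decode each interval by its pulse count."""
    if not pulse_times:
        return []
    n_symbols = int(max(pulse_times) // symbol_period) + 1
    counts = [0] * n_symbols
    for t in pulse_times:
        counts[int(t // symbol_period)] += 1
    return [counts_to_symbol.get(c, "?") for c in counts]

# Toy mapping: 1 pulse per interval -> bit 0, 2 pulses -> bit 1.
mapping = {1: 0, 2: 1}
pulses = [0.2, 1.1, 1.6, 2.3, 3.4, 3.8]       # pulse timestamps in symbol-period units
print(demodulate_by_pulse_count(pulses, symbol_period=1.0, counts_to_symbol=mapping))
# -> [0, 1, 0, 1]
```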

Cellular Digital Packet Data

Introduction
Cellular Digital Packet Data (CDPD) systems offer what is currently one of the most advanced means of wireless data transmission. Generally used as a tool for business, CDPD holds promise for improving law enforcement communications and operations, and as the technology improves, CDPD may represent a major step toward making our nation a wireless information society. While CDPD technology is more complex than most of us care to understand, its potential benefits are obvious even to technological novices. In this so-called age of information, no one needs to be reminded of the importance of both speed and accuracy in the storage, retrieval and transmission of data. The CDPD network is little more than a year old and is already proving to be a hot digital enhancement to the existing phone network. CDPD transmits digital packet data at 19.2 kbps, using idle times between cellular voice calls on the cellular telephone network. CDPD technology represents a way for law enforcement agencies to improve how they manage their communications and information systems. For over a decade, agencies around the world have been experimenting with placing Mobile Data Terminals (MDTs) in their vehicles to enhance officer safety and efficiency. Early MDTs transmitted their information using radio modems; in this case data could be lost in transmission during bad weather or when mobile units were not properly located in relation to transmission towers. More recently, MDTs have transmitted data using analog cellular telephone modems. This shift represented an improvement in mobile data communications, but such systems still had flaws which limited their utility. Since the mid-1990s, computer manufacturers and the telecommunications industry have been experimenting with the use of digital cellular telecommunications as a wireless means to transmit data. The result of their efforts is the CDPD system. These systems allow users to transmit data with a higher degree of accuracy, fewer service interruptions and strong security, and they give mobile users almost instantaneous access to information. What is CDPD? CDPD is a specification for supporting wireless access to the Internet and other public packet-switched networks. Data transmitted on CDPD systems travel several times faster than data sent using analog networks. Cellular telephone and modem providers that offer CDPD support make it possible for mobile users to access the Internet at up to 19.2 kbps. Because CDPD is an open specification that adheres to the layered structure of the Open Systems Interconnection (OSI) model, it can be extended in the future. CDPD supports both the Internet Protocol (IP) and the ISO Connectionless Network Protocol (CLNP).

CDPD also supports IP multicast (one-to-many) service. With multicast, a company can periodically broadcast updates to sales and service people on the road, or a news subscription service can transmit its issues as they are published. CDPD will also support the next version of IP, IPv6. With CDPD we are assigned our very own address, and with this address we are virtually always connected to our host without having to keep a constant connection. There are currently two methods for sending data over cellular networks: cellular digital packet data (CDPD) and cellular switched-circuit data (CSCD). Each has distinct advantages depending on the type of application, the amount of data to send or receive, and the geographic coverage needs.
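To put the quoted 19.2 kbps channel rate in perspective, the small Python calculation below estimates how long a given message would take to send at that raw rate; protocol overhead is ignored, and the example message size is arbitrary.

```python
def airtime_seconds(payload_bytes: int, rate_bps: int = 19_200) -> float:
    """Ideal transmission time of a payload at the given raw bit rate (no protocol overhead)."""
    return payload_bytes * 8 / rate_bps

# A 2 KB query result over a 19.2 kbps CDPD channel:
print(round(airtime_seconds(2048), 2), "seconds")   # about 0.85 s
```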

CT Scanning

Introduction
There are two main limitations to using conventional x-rays to examine the internal structures of the body. Firstly, the superimposition of three-dimensional information onto a single plane makes diagnosis confusing and often difficult. Secondly, the photographic film usually used for making radiographs has a limited dynamic range, so only objects that have a large variation in x-ray absorption relative to their surroundings will cause sufficient contrast differences on the film to be distinguished by the eye. Thus, although the details of bony structures can be seen, it is difficult to discern the shape and composition of soft-tissue organs accurately. CT uses special x-ray equipment to obtain image data from different angles around the body and then shows a cross-section of body tissues and organs; that is, it can show several types of tissue - lung, bone, soft tissue and blood vessels - with great clarity. CT of the body is a patient-friendly exam that involves little radiation exposure. Basic principle: in CT scanning, the image is reconstructed from a large number of absorption profiles taken at regular angular intervals around a slice, each profile being made up of a parallel set of absorption values through the object. CT also passes x-rays through the body of the patient, but the detection method is electronic in nature, and the data are converted from an analog signal to digital values in an A/D converter. This digital representation of the x-ray intensity is fed into a computer, which then reconstructs an image. The original method of doing tomography used an x-ray detector which translated linearly on a track across the x-ray beam; when the end of the scan was reached, the x-ray tube and the detector were rotated to a new angle and the linear motion was repeated. The latest generation of CT machines uses a 'fan-beam' geometry with an array of detectors which simultaneously detect x-rays on a number of different paths through the patient. CT scanner: a CT scanner is a large, square machine with a hole in the centre, something like a doughnut. The patient lies still on a table that can move up and down and slide into and out of the centre of the hole. Within the machine, an x-ray tube on a rotating gantry moves around the patient's body to produce the images. Procedure: in CT the film is replaced by an array of detectors which measure the x-ray profile. Inside the scanner is a rotating gantry with an x-ray tube mounted on one side and an arc-shaped detector mounted on the opposite side. An x-ray beam is emitted in a fan shape as the rotating frame spins the x-ray tube and detector around the patient. Each time the x-ray tube and detector make a 360-degree rotation and x-rays pass through the patient's body, the image of a thin section is acquired. During each rotation the detector records about 1000 profiles of the expanded x-ray beam. Each profile is then reconstructed by a dedicated computer into a two-dimensional image of the section that was scanned.
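The reconstruction step described above ("the image is reconstructed from a large number of absorption profiles taken at regular angular intervals") can be illustrated with the simplest possible algorithm, unfiltered backprojection, sketched below in Python with NumPy. Real scanners use filtered backprojection or iterative methods; this toy only shows the idea of smearing each profile back across the image along its acquisition angle.

```python
import numpy as np

def backproject(profiles, angles_deg, size):
    """Unfiltered backprojection of parallel-beam absorption profiles onto a size x size grid."""
    image = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - centre           # pixel coordinates about the centre
    for profile, angle in zip(profiles, np.deg2rad(angles_deg)):
        # Signed distance of each pixel from the projection axis at this angle.
        t = xs * np.cos(angle) + ys * np.sin(angle) + centre
        idx = np.clip(np.round(t).astype(int), 0, len(profile) - 1)
        image += profile[idx]                             # smear the profile back over the image
    return image / len(angles_deg)

# Toy example: two orthogonal profiles of a single bright spot at the image centre.
size = 7
point_profile = np.zeros(size)
point_profile[3] = 1.0
recon = backproject([point_profile, point_profile], angles_deg=[0, 90], size=size)
print(np.round(recon, 2))
```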

Continuously variable transmission (CVT)

Definition
After more than a century of research and development, the internal combustion (IC) engine is nearing both perfection and obsolescence: engineers continue to explore the outer limits of IC efficiency and performance, but advancements in fuel economy and emissions have effectively stalled. While many IC vehicles meet Low Emissions Vehicle standards, these will give way to new, stricter government regulations in the very near future. With limited room for improvement, automobile manufacturers have begun full-scale development of alternative power vehicles. Still, manufacturers are loath to scrap a century of development and billions or possibly even trillions of dollars in IC infrastructure, especially for technologies with no history of commercial success. Thus, the ideal interim solution is to further optimize the overall efficiency of IC vehicles. One potential solution to this fuel economy dilemma is the continuously variable transmission (CVT), an old idea that has only recently become a bastion of hope to automakers. CVTs could potentially allow IC vehicles to meet the first wave of new fuel regulations while development of hybrid electric and fuel cell vehicles continues. Rather than selecting one of four or five gears, a CVT constantly changes its gear ratio to optimize engine efficiency, with a perfectly smooth torque-speed curve. This improves both gas mileage and acceleration compared to traditional transmissions. The fundamental theory behind CVTs has undeniable potential, but lax fuel regulations and booming sales in recent years have given manufacturers a sense of complacency: if consumers are buying millions of cars with conventional transmissions, why spend billions to develop and manufacture CVTs? Although CVTs have been used in automobiles for decades, limited torque capabilities and questionable reliability have inhibited their growth. Today, however, ongoing CVT research has led to ever more robust transmissions, and thus ever more diverse automotive applications. As CVT development continues, manufacturing costs will be further reduced and performance will continue to increase, which will in turn increase the demand for further development. This cycle of improvement will ultimately give CVTs a solid foundation in the world's automotive infrastructure. CVT theory and design: today's automobiles almost exclusively use either a conventional manual or automatic transmission with "multiple planetary gear sets that use integral clutches and bands to achieve discrete gear ratios". A typical automatic uses four or five such gears, while a manual normally employs five or six. The continuously variable transmission replaces discrete gear ratios with infinitely adjustable gearing through one of several basic CVT designs.
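To make the contrast between discrete gears and a continuously variable ratio concrete, the Python sketch below computes the ratio needed to hold an assumed "most efficient" engine speed at a given road speed and compares it with the nearest available ratio in a hypothetical five-speed gearbox. All numbers (target RPM, gear ratios, final drive, tyre circumference) are invented for the example.

```python
TARGET_RPM = 2200                 # assumed most-efficient engine speed
FINAL_DRIVE = 3.9                 # assumed final-drive ratio
WHEEL_CIRCUMFERENCE_M = 1.95      # assumed tyre rolling circumference
FIVE_SPEED = [3.5, 2.1, 1.4, 1.0, 0.8]

def required_ratio(road_speed_kmh: float) -> float:
    """Gearbox ratio that keeps the engine exactly at TARGET_RPM for this road speed."""
    wheel_rpm = road_speed_kmh * 1000 / 60 / WHEEL_CIRCUMFERENCE_M
    return TARGET_RPM / (wheel_rpm * FINAL_DRIVE)

for speed in (40, 70, 110):
    cvt = required_ratio(speed)
    gear = min(FIVE_SPEED, key=lambda g: abs(g - cvt))   # best a discrete gearbox can do
    print(f"{speed} km/h: ideal CVT ratio {cvt:.2f}, nearest fixed gear {gear}")
```

A CVT can hold the ideal ratio at every speed, whereas the fixed gearbox must accept whichever discrete ratio is closest, pulling the engine away from its most efficient operating point.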

High-availability power systems: Redundancy options

Introduction
In major applications such as large computer installations, process control in chemical plants, safety monitors and intensive care units of hospitals, even a temporary power failure may lead to large economic losses. For such critical loads it is of paramount importance to use UPS systems. But all UPS equipment should be completely de-energized for preventive maintenance at least once per year, which limits the availability of the power system. There are now new UPS systems on the market that permit concurrent maintenance. High-availability power systems: the computing industry talks in terms of "nines" of availability. This refers to the percentage of time in a year that a system is functional and available to do productive work. A system with four nines is 99.99 percent available, meaning that downtime is less than 53 minutes in a standard 365-day year. Five nines (99.999 percent available) equates to less than 5.3 minutes of downtime per year. Six nines (99.9999 percent available) equates to just 32 seconds of downtime per year. These same numbers apply when we speak of the availability of conditioned power. The goal is to maximize the availability of conditioned power and minimize exposure to unconditioned utility power. The concept of continuous availability of conditioned power takes this one step further: after all, 100 percent is greater than 99.99999 percent. The road to continuous availability: we determine availability by studying four key elements.
o Reliability: the individual UPS modules, static transfer switches and other power distribution equipment must be extremely reliable, as measured by field-documented MTBF (Mean Time Between Failures). In addition, the system elements must be designed and assembled in a way that minimizes complexity and single points of failure.
o Functionality: the UPS must be able to protect the critical load from the full range of power disturbances, and only a true double-conversion UPS can do this. Some vendors offer single-conversion (line-interactive) three-phase UPS products as a lower-cost alternative. However, these alternative UPSs do not protect against all disturbances, including power system short circuits, frequency variations, harmonics and common-mode noise. If your critical facility is truly critical, only a true double-conversion UPS is suitable.
o Maintainability: the system design must permit concurrent maintenance of all power system components, supporting the load with part of the UPS system while other parts are being serviced. As we shall see, single-bus solutions do not completely support concurrent maintenance.
o Fault tolerance: the system must have fault resiliency to cope with a failure of any power system component without affecting the operation of the critical load equipment. Furthermore, the power distribution system must have fault resiliency to survive the inevitable load faults and human error.
The two factors of field-proven critical-bus MTBF in excess of one million hours and double-conversion technology ensure reliability and functionality. With reliability and functionality assured, let us look at how different UPS system configurations compare for maintainability and fault tolerance.
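The "nines" figures quoted above follow directly from the definition of availability; the short Python check below reproduces them (downtime = (1 - availability) x one year).

```python
def downtime_per_year(availability_percent: float) -> float:
    """Minutes of downtime per 365-day year for a given availability percentage."""
    minutes_per_year = 365 * 24 * 60
    return (1 - availability_percent / 100) * minutes_per_year

for a in (99.99, 99.999, 99.9999):
    print(f"{a}% available -> {downtime_per_year(a):.1f} minutes of downtime per year")
# 99.99% -> ~52.6 min, 99.999% -> ~5.3 min, 99.9999% -> ~0.5 min (about 32 seconds)
```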

IGCT

Introduction
Thyristor technology is inherently superior to the transistor for blocking voltages above 2.5 kV, with plasma distributions equal to those of diodes offering the best trade-off between on-state and blocking voltages. Until the introduction of newer power switches, the only serious contenders for high-power transportation systems and other applications were the GTO (a thyristor), with its cumbersome snubbers, and the IGBT (a transistor), with its inherently high losses. Until now, adding the gate turn-off feature has resulted in the GTO being constrained by a variety of unsatisfactory compromises. The widely used standard GTO drive technology results in inhomogeneous turn-on and turn-off, which calls for costly dv/dt and di/dt snubber circuits combined with bulky gate drive units. Rooted in the GTO is one of the newest power switches, the Gate-Commutated Thyristor (GCT). It successfully combines the best of the thyristor and transistor characteristics while fulfilling the additional requirements of manufacturability and high reliability. The GCT is a semiconductor based on the GTO structure whose cathode emitter can be shut off "instantaneously", thereby converting the device from a low-conduction-drop thyristor to a low-switching-loss, high-dv/dt bipolar transistor at turn-off. The IGCT (Integrated GCT) is the combination of the GCT device and a low-inductance gate unit. This technology extends transistor switching performance to well above the megawatt range, with 4.5 kV devices capable of turning off 4 kA, and 6 kV devices capable of turning off 3 kA, without snubbers. The IGCT represents the optimum combination of low-loss thyristor technology and snubberless, gate-effected turn-off for demanding medium- and high-voltage power electronics applications. In the turn-off waveforms, the thick line shows the variation of the anode voltage during turn-off, while the lighter line shows the variation of the anode current during the turn-off process of the IGCT. GTOs and thyristors are four-layer (npnp) devices. As such, they have only two stable points in their characteristics: 'on' and 'off'. Every state in between is unstable and results in current filamentation. This inherent instability is worsened by processing imperfections, which has led to the widely accepted myth that a GTO cannot be operated without a snubber. Essentially, the GTO has to be reduced to a stable pnp device, i.e. a transistor, for the few critical microseconds during turn-off. To stop the cathode (n) from taking part in the process, the bias of the cathode n-p junction has to be reversed before voltage starts to build up at the main junction. This calls for commutation of the full load current from the cathode (n) to the gate (p) within one microsecond. Thanks to a new housing design, 4000 A/us can be achieved with a low-cost 20 V gate unit. Current filamentation is totally suppressed, and the turn-off waveforms and safe operating area are identical to those of a transistor. IGCT technology brings together the power-handling device (GCT) and the device control circuitry (freewheeling diode and gate drive) in an integrated package. By offering four levels of component packaging and integration, it permits simultaneous improvement in four interrelated areas: low switching and conduction losses at medium voltage, simplified circuitry for operating the power semiconductor, reduced power system cost, and enhanced reliability and availability. Also, by providing pre-engineered switch modules, IGCT enables medium-voltage equipment designers to develop their products faster.

Iris Scanning

Introduction
In today's information age it is not difficult to collect data about an individual and use that information to exercise control over the individual. Individuals generally do not want others to have personal information about them unless they decide to reveal it. With the rapid development of technology, it is more difficult to maintain the levels of privacy citizens knew in the past. In this context, data security has become an inevitable feature. Conventional methods of identification based on the possession of ID cards, or on exclusive knowledge such as a social security number or a password, are not altogether reliable: ID cards can be lost, forged or misplaced, and passwords can be forgotten, so an unauthorized user may be able to break into an account with little effort. It is therefore necessary to ensure that access to classified data is denied to unauthorized persons. Biometric technology has now become a viable alternative to traditional identification systems because of its tremendous accuracy and speed. A biometric system automatically verifies or recognizes the identity of a living person based on physiological or behavioural characteristics. Since the person to be identified must be physically present at the point of identification, biometric techniques give high security for sensitive information stored in mainframes and help avoid fraudulent use of ATMs. This paper explores the concept of iris recognition, which is one of the most popular biometric techniques and finds applications in diverse fields. Biometrics - the future of identity: biometrics dates back to the ancient Egyptians, who measured people to identify them. Biometric devices have three primary components: 1. an automated mechanism that scans and captures a digital or analog image of a living personal characteristic; 2. compression, processing, storage and comparison of the image with stored data; 3. interfaces with application systems. A biometric system can be divided into two stages: the enrolment module and the identification module. The enrolment module is responsible for training the system to identify a given person. During the enrolment stage, a biometric sensor scans the person's physiognomy to create a digital representation. A feature extractor processes the representation to generate a more compact and expressive representation called a template. For an iris image these features include the various visible characteristics of the iris, such as contraction furrows, pits, rings and so on. The template for each user is stored in a biometric system database. The identification module is responsible for recognizing the person. During the identification stage, the biometric sensor captures the characteristics of the person to be identified and converts them into the same digital format as the template. The resulting template is fed to the feature matcher, which compares it against the stored template to determine whether the two templates match. The identification can take the form of verification, authenticating a claimed identity, or recognition, determining the identity of a person from a database of known persons. In a verification system, when the captured characteristic and the stored template of the claimed identity are the same, the system concludes that the claimed identity is correct. In a recognition system, when the captured characteristic and one of the stored templates are the same, the system identifies the person with the matching template.
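A common way to implement the "feature matcher" step described above is to represent each iris template as a fixed-length binary code and compare codes by their normalized Hamming distance (the fraction of bits that differ), accepting a match below some threshold. The Python sketch below shows that comparison; the code length, threshold and example templates are illustrative assumptions, not values from this text.

```python
def hamming_distance(template_a: str, template_b: str) -> float:
    """Fraction of differing bits between two equal-length binary iris templates."""
    assert len(template_a) == len(template_b)
    differing = sum(a != b for a, b in zip(template_a, template_b))
    return differing / len(template_a)

def matches(probe: str, enrolled: str, threshold: float = 0.32) -> bool:
    """Declare a match when the templates differ in fewer than `threshold` of their bits."""
    return hamming_distance(probe, enrolled) < threshold

enrolled = "1011001110001011"        # toy 16-bit template stored at enrolment
probe    = "1011001010001111"        # fresh capture of (possibly) the same iris
print(hamming_distance(probe, enrolled), matches(probe, enrolled))
```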

Isoloop magnetic couplers

Introduction
Couplers, also known as "isolators" because they electrically isolate as well as transmit data, are widely used in industrial and factory networks, instruments, and telecommunications. Everyone knows the problems with optocouplers: they take up a lot of space, they are slow, they age, and their temperature range is quite limited. For years, optical couplers were the only option. Over the years most of the components used to build instrumentation circuits have become ever smaller; optocoupler technology, however, has not kept up, and existing coupler technologies look like dinosaurs on modern circuit boards. Magnetic couplers are analogous to optocouplers in a number of ways. Design engineers, especially in instrumentation technology, will welcome a galvanically isolated data coupler with integrated signal conversion in a single IC. This report gives a detailed study of Isoloop magnetic couplers. Ground loops: when equipment using different power supplies is tied together (with a common ground connection) there is a potential for ground-loop currents to exist. This is an induced current in the common ground line resulting from a difference in ground potential at each piece of equipment. Normally, not all grounds are at the same potential. Widespread electrical and communications networks often have nodes with different ground domains. The potential difference between these grounds can be AC or DC, and can contain various noise components. Grounds connected by cable shielding or a logic-line ground can create a ground loop: unwanted current flow in the cable. Ground-loop currents can degrade data signals, produce excessive EMI, damage components and, if the current is large enough, present a shock hazard. Galvanic isolation between circuits or nodes in different ground domains eliminates these problems, seamlessly passing signal information while isolating ground potential differences and common-mode transients. Adding isolation components to a circuit or network is considered good design practice and is often mandated by industry standards. Isolation is frequently used in modems, LAN and industrial network interfaces (e.g., network hubs, routers, and switches), telephones, printers, fax machines, and switched-mode power supplies. Giant Magnetoresistance (GMR): large magnetic-field-dependent changes in resistance are possible in thin-film ferromagnet/non-magnetic metallic multilayers. The phenomenon was first observed in France in 1988, when changes in resistance with magnetic field of up to 70% were seen. Compared to the small percentage change in resistance observed in anisotropic magnetoresistance, this phenomenon was truly 'giant' magnetoresistance. The spins of the electrons in a magnet are aligned to produce a magnetic moment. Magnetic layers with opposing spins (magnetic moments) impede the progress of electrons (higher scattering) through a sandwiched conductive layer. This arrangement causes the conductor to have a higher resistance to current flow. An external magnetic field can realign all of the layers into a single magnetic moment. When this happens, the electron flow is less affected (lower scattering) by the uniform spins of the adjacent ferromagnetic layers, and the conduction layer has a lower resistance to current flow.

Note that this phenomenon takes place only when the conduction layer is thin enough (less than 5 nm) for the electron spins of the ferromagnetic layers to affect the path of the electrons in the conductive layer.

LWIP

Introduction
Over the last few years, the interest in connecting computers and computer-supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment, and prices are dropping. At the same time, wireless networking technologies such as Bluetooth and IEEE 802.11b WLAN are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and the processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere. Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low-speed networks such as the ARPANET, Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow, since a large number of applications using this technology have already been developed; the large connectivity of the global Internet is also a strong incentive. Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols has to cope with limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems. Overview: as in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the lwIP implementation. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made, as discussed above, in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module. Instead of passing these addresses to TCP by means of a function call, the TCP module is aware of the structure of the IP header and can therefore extract this information by itself. lwIP consists of several modules. Apart from the modules implementing the TCP/IP protocols (IP, ICMP, UDP, and TCP), a number of support modules are implemented. The support modules consist of:
- the operating system emulation layer (described in Chapter 3)
- the buffer and memory management subsystems (described in Chapter 4)
- network interface functions (described in Chapter 5)
- functions for computing the Internet checksum (described in Chapter 6)
- an abstract API (described in Chapter 8)

Image Authentication Techniques

Introduction
This paper explores the various techniques used to authenticate the visual data recorded by an automatic video surveillance (VS) system. Automatic video surveillance systems are used for continuous and effective monitoring and reliable control of remote and dangerous sites. Some practical issues must be taken into account in order to take full advantage of the potential of a VS system. The validity of the visual data acquired, processed and possibly stored by the VS system, as proof in front of a court of law, is one such issue. Visual data can be modified using sophisticated processing tools without leaving any visible trace of the modification, so digital image or video data have little value as legal proof, since doubt would always exist that they had been intentionally tampered with to incriminate or exculpate the defendant. Besides, video data can be created artificially by computerized techniques such as morphing. Therefore the true origin of the data must be indicated before they can be used as legal proof. By data authentication we mean here a procedure capable of ensuring that data have not been tampered with and of indicating their true origin.
Automatic Visual Surveillance System
An automatic visual surveillance system is a self-monitoring system consisting of a video camera unit, a central unit and transmission networks. A pool of digital cameras is in charge of framing the scene of interest and sending the corresponding video sequences to the central unit. The central unit is in charge of analyzing the sequences and generating an alarm whenever a suspicious situation is detected. The central unit also transmits the video sequences to an intervention centre such as a security service provider, the police department or a security guard unit. Somewhere in the system the video sequence, or some part of it, may be stored, and when needed the stored sequence can be used as proof in front of a court of law. If the stored digital video sequences are to be legally credible, some means must be envisaged to detect content tampering and to reliably trace back to the data origin.
Authentication Techniques
Authentication techniques are performed on visual data to indicate that the data are not a forgery; they should not damage the visual quality of the video data. At the same time, these techniques must reveal malicious modifications, including removal or insertion of certain frames, changes to the faces of individuals, the time stamp, the background and so on. Only properly authenticated video data have value as legal proof. There are two major techniques for authenticating video data. They are as follows.
1. Cryptographic Data Authentication
This is a straightforward way to provide video authentication, namely through the joint use of asymmetric-key encryption and a digital hash function. Cameras calculate a digital summary (digest) of the video by means of the hash function. They then encrypt the digest with their private key, thus obtaining a signed digest which is transmitted to the central unit together with the acquired sequences. This digest is used to prove data integrity or to trace back to the data's origin. The signed digest can only be read by using the public key of the camera.
2. Watermarking-based Authentication
Watermarking data authentication is the modern approach of authenticating visual data by imperceptibly embedding a digital watermark signal in the data. Digital watermarking is the art and science of embedding copyright information in the original files. The information embedded is called a 'watermark'. Digital watermarks are difficult to remove without noticeably degrading the content and are a covert means of protection in situations where copyright enforcement alone fails to provide robustness.
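As an illustration of the cryptographic approach, the sketch below computes a per-sequence SHA-256 digest and "signs" it. To stay self-contained and runnable it uses an HMAC with a camera-specific secret key as a stand-in for the asymmetric private-key signature described above; a real deployment would use a public-key signature scheme (e.g. RSA or ECDSA), and the key and function names here are hypothetical.

import hashlib
import hmac

CAMERA_SECRET_KEY = b"camera-07-secret"   # hypothetical; stands in for a private key

def sign_sequence(frames):
    """Return (digest, signed_digest) for a sequence of encoded video frames."""
    h = hashlib.sha256()
    for frame in frames:                  # hash every frame in acquisition order
        h.update(frame)
    digest = h.digest()
    # Stand-in for encrypting the digest with the camera's private key.
    signed = hmac.new(CAMERA_SECRET_KEY, digest, hashlib.sha256).digest()
    return digest, signed

def verify_sequence(frames, signed):
    """Recompute the digest at the central unit and check the signature."""
    _, expected = sign_sequence(frames)
    return hmac.compare_digest(expected, signed)

if __name__ == "__main__":
    frames = [b"frame-0-data", b"frame-1-data"]
    _, sig = sign_sequence(frames)
    print(verify_sequence(frames, sig))                          # True
    print(verify_sequence([b"frame-0-data", b"TAMPERED"], sig))  # False: tampering detected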

Seasonal Influence on Safety of Substation Grounding

Introduction
With the development of modern power systems toward extra-high voltage, large capacity and long-distance transmission, and with the application of advanced technologies, the demands on the safety, stability and economic operation of the power system have become higher. A good grounding system is the fundamental insurance for the safe operation of the power system. A good grounding system should ensure the following:
- To provide safety to personnel during normal and fault conditions by limiting step and touch potentials.
- To assure correct operation of electrical devices.
- To prevent damage to electrical apparatus.
- To dissipate lightning strokes.
- To stabilize voltage during transient conditions and therefore to minimize the probability of flashover during transients.
As stated in ANSI/IEEE Standard 80-1986, "IEEE Guide for Safety in AC Substation Grounding," a safe grounding design has two objectives:
- To provide means to carry electric currents into the earth under normal and fault conditions without exceeding any operational or equipment limit or adversely affecting continuity of service.
- To assure that a person in the vicinity of grounded facilities is not exposed to the danger of critical electric shock.
A practical approach to safe grounding considers the interaction of two grounding systems: the intentional ground, consisting of ground electrodes buried at some depth below the earth's surface, and the accidental ground, temporarily established by a person exposed to a potential gradient at a grounded facility. An ideal ground should provide a near-zero resistance to remote earth. In practice, the ground potential rise at the facility site increases proportionally to the fault current; the higher the current, the lower the value of total system resistance which must be obtained. For most large substations the ground resistance should be less than 1 ohm. For smaller distribution substations the usually acceptable range is 1-5 ohms, depending on local conditions. When a grounding system is designed, the fundamental method of ensuring the safety of human beings and power apparatus is to keep the step and touch voltages within their respective safe regions. Step and touch voltage can be defined as follows.
Step Voltage
It is defined as the voltage between the feet of a person standing near an energized object. It is equal to the difference in voltage, given by the voltage distribution curve, between two points at different distances from the electrode.
Touch Voltage
It is defined as the voltage between the energized object and the feet of a person in contact with the object. It is equal to the difference in voltage between the object and a point some distance away from it. In different seasons, the resistivity of the surface soil layer changes, and this affects the safety of grounding systems. Whether the step and touch voltages move towards the safe region or towards the hazardous side is the main question of concern.
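To make the safety criterion concrete, the sketch below evaluates tolerable step and touch voltages with the commonly cited IEEE Std 80 formulas for a 50 kg body, E = (1000 + k*Cs*rho_s) * 0.116 / sqrt(ts), with k = 6 for step voltage and k = 1.5 for touch voltage. The surface-layer derating factor Cs, the seasonal resistivity values and the fault duration used here are assumptions chosen only to illustrate the seasonal effect.

import math

def tolerable_voltages_50kg(rho_s, c_s, t_s):
    """IEEE Std 80 tolerable (step, touch) voltages in volts for a 50 kg person.

    rho_s : surface layer resistivity in ohm-metres (season dependent)
    c_s   : surface layer derating factor (1.0 if no crushed-rock layer)
    t_s   : shock/fault duration in seconds
    """
    e_step = (1000.0 + 6.0 * c_s * rho_s) * 0.116 / math.sqrt(t_s)
    e_touch = (1000.0 + 1.5 * c_s * rho_s) * 0.116 / math.sqrt(t_s)
    return e_step, e_touch

if __name__ == "__main__":
    # Illustrative only: a wet season (low surface resistivity) vs. a dry season (high resistivity).
    for season, rho in [("wet", 100.0), ("dry", 3000.0)]:
        step, touch = tolerable_voltages_50kg(rho_s=rho, c_s=1.0, t_s=0.5)
        print(f"{season} season: E_step = {step:.0f} V, E_touch = {touch:.0f} V")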

Wavelet transforms

Introduction
Wavelet transforms have been one of the important signal processing developments of the last decade, especially for applications such as time-frequency analysis, data compression, segmentation and vision. During the past decade, several efficient implementations of wavelet transforms have been derived. The theory of wavelets has roots in quantum mechanics and the theory of functions, though a unifying framework is a recent occurrence. Wavelet analysis is performed using a prototype function called a wavelet. Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent an arbitrary function f(t) as a superposition of a set of such wavelets or basis functions. These basis functions, or baby wavelets, are obtained from a single prototype wavelet called the mother wavelet by dilations or contractions (scaling) and translations (shifts). Efficient implementations of the wavelet transform have been derived based on the fast Fourier transform and short-length 'fast-running FIR algorithms' in order to reduce the computational complexity per computed coefficient.
First of all, why do we need a transform, and what is a transform anyway? Mathematical transformations are applied to signals to obtain information that is not readily available in the raw signal. A time-domain signal is regarded here as a raw signal, and a signal that has been transformed by any of the available transformations as a processed signal. There are a number of transformations that can be applied, such as the Hilbert transform, the short-time Fourier transform, the Wigner transform and the Radon transform, among which the Fourier transform is probably the most popular. These constitute only a small portion of the huge list of transforms at engineers' and mathematicians' disposal. Each transformation technique has its own area of application, with advantages and disadvantages.
Importance Of The Frequency Information
Often, information that cannot be readily seen in the time domain can be seen in the frequency domain. Most signals in practice are time-domain signals in their raw format; that is, whatever the signal is measuring is a function of time. In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude. When we plot time-domain signals, we obtain a time-amplitude representation of the signal. This representation is not always the best representation of the signal for most signal processing applications. In many cases, the most distinguishing information is hidden in the frequency content of the signal. The frequency spectrum of a signal is basically the set of frequency components (spectral components) of that signal; it shows what frequencies exist in the signal.
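As a concrete example of the scaling-and-translation idea, the sketch below performs one level of the discrete wavelet transform with the simplest mother wavelet, the Haar wavelet, splitting a signal into coarse approximation and detail coefficients and reconstructing it exactly. It is an illustrative NumPy implementation, not code from any particular wavelet library.

import numpy as np

def haar_dwt_level(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                          # pad to even length if necessary
        x = np.append(x, x[-1])
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)    # low-pass: scaled local averages
    detail = (even - odd) / np.sqrt(2.0)    # high-pass: scaled local differences
    return approx, detail

def haar_idwt_level(approx, detail):
    """Invert one Haar DWT level (perfect reconstruction)."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 8)
    sig = np.sin(2 * np.pi * 2 * t)
    a, d = haar_dwt_level(sig)
    print(np.allclose(haar_idwt_level(a, d), sig))   # True: lossless reconstruction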

Cyberterrorism

Definition
Cyberterrorism is a new terrorist tactic that makes use of information systems or digital technology, especially the Internet, as either an instrument or a target. As the Internet becomes more a way of life with us, it is becoming easier for its users to become targets of cyberterrorists. The number of areas in which cyberterrorists could strike is frightening, to say the least. The difference between the conventional approaches to terrorism and the new methods is primarily that it is possible to affect a large multitude of people with minimum resources on the terrorist's side and with no danger to the terrorist at all. We also glimpse the reasons that caused terrorists to look towards the Web, and why the Internet is such an attractive alternative to them. The growth of information technology has led to the development of this dangerous web of terror, for cyberterrorists could wreak maximum havoc within a small time span. Various situations that can be viewed as acts of cyberterrorism are also covered. Banks are the most likely places to receive threats, but it cannot be said that any establishment is beyond attack. Tips by which we can protect ourselves from cyberterrorism are also covered, which can reduce the problems created by the cyberterrorist. We, as the information technology people of tomorrow, need to study and understand the weaknesses of existing systems and figure out ways of ensuring the world's safety from cyberterrorists. A number of the issues here are ethical, in the sense that computing technology is now available to the whole world, but if this gift is used wrongly, the consequences could be disastrous. It is important that we understand and mitigate cyberterrorism for the benefit of society, and try to curtail its growth, so that we can heal the present and live the future.

Ipv6 - The Next Generation Protocol

Definition
The Internet is one of the greatest revolutionary innovations of the twentieth century. It made the 'global village' utopia a reality in a rather short span of time. It is changing the way we interact with each other, the way we do business, the way we educate ourselves and even the way we entertain ourselves. Perhaps even the architects of the Internet could not have foreseen the tremendous growth rate of the network being witnessed today. With the advent of the Web and multimedia services, the technology underlying the Internet has been under stress. It cannot adequately support many services being envisaged, such as real-time video conferencing, interconnection of gigabit networks with lower-bandwidth networks, high-security applications such as electronic commerce, and interactive virtual reality applications. A more serious problem with today's Internet is that it can interconnect a maximum of about four billion systems, which is a small number compared to the projected number of systems on the Internet in the twenty-first century. Each machine on the net is given a 32-bit address, and with 32 bits a maximum of about four billion addresses is possible. Though this is a large number, soon the Internet will have TV sets and even pizza machines connected to it, and since each of them must have an IP address, this number becomes too small. The revision of IPv4 was taken up mainly to resolve the address problem, but in the course of refinement several other features were also added to make it suitable for the next-generation Internet. This version was initially named IPng (IP next generation) and is now officially known as IPv6. IPv6 supports 128-bit addresses, the source address and the destination address each being 128 bits long. (The version number 5 had already been assigned to the experimental Internet Stream Protocol, which is why the new version is numbered 6.) Presently, most routers run software that supports only IPv4. To switch over to IPv6 overnight is an impossible task and the transition is likely to take a very long time. However, to speed up the transition, an IPv4-compatible IPv6 addressing scheme has been worked out. Major vendors are now writing software for various computing environments to support IPv6 functionality. Incidentally, software development for different operating systems and router platforms will offer major job opportunities in the coming years.
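A quick illustration of the 128-bit address space and of carrying existing 32-bit addresses inside the new format, using Python's standard ipaddress module (the specific addresses shown are arbitrary documentation examples):

import ipaddress

# A full 128-bit IPv6 address and its compressed textual form.
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr)                                          # 2001:db8::1
print(addr.exploded)                                 # fully written-out 128-bit form
print(ipaddress.IPv6Network("::/0").num_addresses)   # 2**128 possible addresses

# An IPv4-mapped IPv6 address, one way of representing a 32-bit IPv4 address
# within the 128-bit IPv6 format during the transition period.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)                            # 192.0.2.1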


Driving Optical Network Evolution

Definition
Over the years, advancements in technology have improved transmission limits, the number of wavelengths we can send down a piece of fiber, performance, amplification techniques, and the protection and redundancy of the network. When people have described and spoken at length about optical networks, they have typically limited the discussion of optical network technology to providing physical-layer connectivity. When actual network services are discussed, optical transport is augmented through the addition of several protocol layers, each with its own set of unique requirements, to make up a service-enabling network. Until recently, transport was provided by specific companies that concentrated on the core of the network and provided only point-to-point transport services. A strong shift in revenue opportunities from a service provider and vendor perspective, changing traffic patterns from enterprise customers, and the capability to drive optical fiber into metropolitan (metro) areas have opened up the next emerging frontier of networking. Providers are now considering the lucrative opportunities emerging in the metro space. Whereas traditional or incumbent vendors have been installing optical equipment in this space for some time, little attention has been paid to the opportunity available through the introduction of new technology advancements and the economic implications these technologies will have. Specifically, the new technologies in the metro space provide better and more profitable economics, scale, new services and new business models. The current metro infrastructure comprises equipment that emphasizes voice traffic, is limited in scalability, and was not designed to take advantage of new technologies, topologies, and changing traffic conditions. Next-generation equipment such as next-generation Synchronous Optical Network (SONET), metro core dense wavelength division multiplexing (DWDM), metro-edge DWDM, and advancements in the optical core have addressed these limitations: they are scalable and data optimized, they include integrated DWDM functionality and new amplification techniques, and they improve the operational and provisioning cycles. This tutorial provides technical information that can help engineers address numerous Cisco innovations and technologies for Cisco Complete Optical Multiservice Edge and Transport (Cisco COMET). They can be broken down into five key areas: photonics, protection, protocols, packets, and provisioning.

Radio Network Controller

Definition
A Radio Network Controller (RNC) provides the interface between the wireless devices communicating through Node B transceivers and the network edge. This includes controlling and managing the radio transceivers in the Node B equipment, as well as management tasks such as soft handoff. The RNC performs tasks in a 3G wireless network analogous to those of the Base Station Controller (BSC) in a 2G or 2.5G network. It interfaces with Serving GPRS Support Nodes (SGSNs) and Gateway GPRS Support Nodes (GGSNs) to mediate with the network service providers. A radio network controller manages hundreds of Node B transceiver stations while switching and provisioning services off the Mobile Switching Center and the 3G data network interfaces. The connection from the RNC to a Node B (the Iub interface) carries the user-plane traffic and uses T1/E1 transport to the RNC. Due to the large number of Node B transceivers, a T1/E1 aggregator is used to deliver the Node B data over channelized OC-3 optical transport to the RNC. The OC-3 pipe can be a direct connection to the RNC or can pass through traditional SONET/SDH transmission networks. A typical Radio Network Controller may be built on a PICMG or Advanced TCA chassis. It contains several different kinds of cards specialized for performing the functions and interacting with the various interfaces of the RNC.

Wireless Networked Digital Devices

Definition
The proliferation of mobile computing devices, including laptops, personal digital assistants (PDAs) and wearable computers, has created a demand for wireless personal area networks (PANs). PANs allow proximal devices to share information and resources. The mobile nature of these devices places unique requirements on PANs, such as low power consumption, frequent make-and-break connections, resource discovery and utilization, and international regulations. This paper examines wireless technologies appropriate for PANs and reviews promising research in resource discovery and service utilization. We recognize the need for PDAs to be as manageable as mobile phones, and also the restricted screen area and input area of the mobile phone; hence the need for a new breed of computing devices to fit the bill for a PAN. These devices become especially relevant for mobile users such as surgeons and jet-plane mechanics who need both hands free and thus would need "wearable" computers.
This paper first examines the technology used for wireless communication. Putting a radio in a digital device provides physical connectivity; however, to make the device useful in a larger context a networking infrastructure is required. The infrastructure allows devices to share data, applications, and resources such as printers, mass storage, and computation power. Defining a radio standard is a tractable problem, as demonstrated by the solutions presented in this paper. Designing a network infrastructure is much more complex. The second half of the paper describes several research projects that try to address components of the networking infrastructure. Finally, there are questions that go beyond the scope of this paper, yet will have the greatest effect on the direction, capabilities, and future of this paradigm. Will these networking strategies be incompatible, like the various cellular phone systems in the United States, or will there be a standard upon which manufacturers and developers agree, like the GSM (Global System for Mobile communication) cellular phones in Europe? Communication demands compatibility, which is challenging in a heterogeneous marketplace. Yet by establishing and implementing compatible systems, manufacturers can offer more powerful and useful devices to their customers. Since these are, after all, digital devices living in a programmed digital world, compatibility and interoperation are possible.
Technologies explored:
1. Electric field - uses the human body as a current conduit.
2. Magnetic field - uses base station technology for picocells of space.
3. Infrared - basic issues include obstruction by opaque bodies.
4. Wireless radio frequency - the best technology option, but it has to deal with the finite resource of the electromagnetic spectrum and must meet international standards through a compatible protocol.
   a. UHF radio.
   b. Super-regenerative receiver.
   c. SAW/ASH receiver.

3-D ICs

Introduction
There is a saying in real estate: when land gets expensive, multi-storied buildings are the alternative solution. We have a similar situation in the chip industry. For the past thirty years, chip designers have considered whether building integrated circuits in multiple layers might create cheaper, more powerful chips. The performance of deep-submicrometre very large scale integrated (VLSI) circuits is being increasingly dominated by the interconnects, due to shrinking wire pitch and increasing die size. Additionally, heterogeneous integration of different technologies on one single chip is becoming increasingly desirable, for which planar (2-D) ICs may not be suitable. The three-dimensional (3-D) chip design strategy exploits the vertical dimension to alleviate the interconnect-related problems and to facilitate heterogeneous integration of technologies to realize a system-on-a-chip (SoC) design. By simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short, vertical inter-layer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved. In the 3-D design architecture, an entire chip is divided into a number of blocks, and each block is placed on a separate layer of silicon; the layers are stacked on top of each other.
Motivation For 3-D ICs
The unprecedented growth of the computer and information technology industry is demanding very large scale integrated (VLSI) circuits with increasing functionality and performance at minimum cost and power dissipation. Continuous scaling of VLSI circuits is reducing gate delays but rapidly increasing interconnect delays. A significant fraction of the total power consumption can be due to the wiring network used for clock distribution, which is usually realized using long global wires. Furthermore, the increasing drive for the integration of disparate signals (digital, analog, RF) and technologies (SOI, SiGe, GaAs, and so on) is introducing various SoC design concepts, for which existing planar (2-D) IC design may not be suitable.
3-D Architecture
Three-dimensional integration to create multilayer silicon ICs is a concept that can significantly improve interconnect performance, increase transistor packing density, and reduce chip area and power dissipation. Additionally, 3-D ICs can be very effective for large-scale on-chip integration of different systems. In the 3-D design architecture, an entire 2-D chip is divided into a number of blocks, and each block is placed on a separate layer of silicon; the layers are stacked on top of each other. Each silicon layer in the 3-D structure can have multiple layers of interconnect, and the layers are connected by vertical inter-layer interconnects (VILICs) and common global interconnects.
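To give a rough feel for why stacking helps the interconnect problem, the toy calculation below uses the frequently quoted rule of thumb that, when a die of area A is partitioned over N strata, the footprint per layer shrinks to A/N, the longest lateral wires shrink roughly by a factor of the square root of N, and the RC delay of a global wire (which grows with the square of its length) therefore drops roughly by a factor of N. The numbers are purely illustrative assumptions, not measurements of any real process.

import math

def global_wire_estimates(die_area_mm2, n_layers):
    """Return (longest lateral wire length in mm, relative RC delay vs. a 2-D die)."""
    footprint = die_area_mm2 / n_layers   # area per stacked layer
    edge = math.sqrt(footprint)           # side length of one square layer
    max_wire = 2.0 * edge                 # corner-to-corner Manhattan length
    relative_rc = 1.0 / n_layers          # RC ~ length^2, length ~ 1/sqrt(N)
    return max_wire, relative_rc

if __name__ == "__main__":
    for layers in (1, 2, 4):
        wire, rc = global_wire_estimates(die_area_mm2=400.0, n_layers=layers)
        print(f"{layers} layer(s): longest global wire ~ {wire:.1f} mm, relative RC delay ~ {rc:.2f}")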

Sensors on 3D Digitization

Introduction
Digital 3D imaging can benefit from advances in VLSI technology in order to accelerate its deployment in many fields, such as visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or to a video camera can be generated. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.
Colour 3D Imaging Technology
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1]. Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique: two or more digital images are taken from known locations and then processed to find the correlations between them; as soon as matching points are identified, the geometry can be computed. Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of the active vision technique. One digital 3D imaging system based on optical triangulation was developed and demonstrated.
Sensors For 3D Imaging
The sensors used in the autosynchronized scanner include:
1. Synchronization Circuit Based Upon Dual Photocells
This sensor ensures the stability and repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuits have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems [1].
2. Laser Spot Position Measurement Sensors
High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or to a video camera can be generated [1].
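For the triangulation-based range cameras mentioned above, depth follows from simple geometry: with a baseline b between the laser projector and the detector, an effective focal length f, and a measured laser-spot offset d on the position sensor, the range is approximately z = f*b/d. The sketch below applies this relation; the numerical parameters are made-up examples, not values from the system described in the paper.

def triangulation_range(focal_length_mm, baseline_mm, spot_offset_mm):
    """Estimate range (mm) from a simple laser-triangulation geometry: z = f*b/d."""
    if spot_offset_mm <= 0:
        raise ValueError("spot offset must be positive")
    return focal_length_mm * baseline_mm / spot_offset_mm

if __name__ == "__main__":
    # Illustrative parameters: a 25 mm lens and a 100 mm projector-detector baseline.
    for offset in (0.5, 1.0, 2.0):      # measured laser-spot position on the sensor, in mm
        z = triangulation_range(25.0, 100.0, offset)
        print(f"spot offset {offset:.1f} mm -> range ~ {z:.0f} mm")  # larger offset = closer surface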

Simputer

Introduction
Simputer is a multilingual, mass-access, low-cost handheld device currently being developed. The Information Markup Language (IML) is the primary format of the content accessed by the Simputer; it has been created to provide a uniform experience to users and to allow rapid development of solutions on any platform. The Simputer proves that illiteracy is no longer a barrier to handling a computer. Through its smart card feature, the Simputer allows for personal information management at the individual level for an unlimited number of users. Applications in diverse sectors can be made possible at an affordable price. A rapid growth of knowledge can only happen in an environment which admits free exchange of thought and information; indeed, nothing else can explain the astounding progress of science in the last three hundred years. Technology has unfortunately not seen this freedom too often. Several rounds of intense discussions among the trustees convinced them that the only way to break out of the current absurdities is to foster a spirit of co-operation in inventing new technologies. The common mistake of treating co-operation as a synonym for charity poses its own challenges. The Simputer Licensing Framework is the Trust's response to these challenges.
What is Simputer?
A Simputer is a multilingual, mass-access, low-cost, portable alternative to PCs, by which the benefits of IT can reach the common man. It has a special role in the third world because it ensures that illiteracy is no longer a barrier to handling a computer. The key to bridging the digital divide is to have shared devices that permit truly simple and natural user interfaces based on sight, touch and audio. The Simputer meets these demands through a browser for the Information Markup Language (IML). IML has been created to provide a uniform experience to users and to allow rapid development of solutions on any platform.
Features
Simputer is a handheld device with the following features:
- It is portable
- A 320 x 240 LCD panel which is touch enabled
- A speaker, microphone and a few keys
- A soft keyboard
- A stylus as a pointing device
- A smart card reader
- Extensive use of audio in the form of text-to-speech and audio snippets
The display resolution is much smaller than that of the usual desktop monitor but much higher than that of the usual wireless devices (cell phones, pagers etc.). The operating system for the Simputer is Linux. It is designed so that Linux is started up infrequently, and the Simputer stays in a low-power mode during the times it is not in use. When the Simputer is powered on, the user is presented with a screen having several icons.
What Makes Simputer Different From Regular PCs?
Simputer is not a personal computer. It could, however, be a pocket computer. It is much more powerful than a Palm, with a 320 x 240 screen and 32 MB of RAM.

The Wintel (Windows + Intel) architecture of the de facto standard PC is quite unsuitable for deployment in the low-cost mass market. The entry barrier due to software licensing is just too high. While the Wintel PC provides a de facto level of standardization, it is not an open architecture. The Simputer, meanwhile, is centered around Linux, which is freely available, open and modular.

Wavelet Video Processing Technology

Introduction
Uncompressed multimedia data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass-storage density, processor speeds and digital communication system performance, demand for data storage capacity and data transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to storage and communication technology. For still image compression, the Joint Photographic Experts Group (JPEG) standard has been established. The performance of these coders generally degrades at low bit rates, mainly because of the underlying block-based discrete cosine transform (DCT) scheme. More recently, the wavelet transform has emerged as a cutting-edge technology within the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. Over the past few years, a variety of powerful and sophisticated wavelet-based schemes for image compression have been developed and implemented. Because of these many advantages, the top contenders in the JPEG-2000 standard are all wavelet-based compression algorithms.
Image Compression
Image compression is a technique for processing images. It is the compression of graphics for storage or transmission. Compressing an image is significantly different from compressing raw binary data. Some general-purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some finer details in the image can be sacrificed to save storage space. Compression is basically of two types:
1. Lossy compression
2. Lossless compression
Lossy compression of data concedes a certain loss of accuracy in exchange for greatly increased compression. An image reconstructed following lossy compression contains degradation relative to the original. Often this is because the compression scheme completely discards redundant information. Under normal viewing conditions, however, no visible loss is perceived. Lossy compression proves effective when applied to graphics images and digitized voice. Lossless compression consists of those techniques guaranteed to generate an exact duplicate of the input data stream after a compress/expand cycle. Here the reconstructed image after compression is numerically identical to the original image. Lossless compression can only achieve a modest amount of compression. This is the type of compression used when storing database records, spreadsheets or word-processing files.
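The sketch below contrasts the two approaches on a tiny signal: lossless run-length encoding reproduces the input exactly, while a "lossy" variant first quantizes the values (discarding fine detail) and therefore compresses better at the cost of an approximate reconstruction. It is an illustrative toy under those assumptions, not any standardized codec.

def rle_encode(values):
    """Lossless run-length encoding: a list of (value, run_length) pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

def lossy_encode(values, step=8):
    """Quantize to multiples of `step` (information is discarded), then run-length encode."""
    return rle_encode([step * round(v / step) for v in values])

if __name__ == "__main__":
    data = [10, 11, 12, 12, 13, 120, 121, 119, 10, 10]
    lossless = rle_encode(data)
    lossy = lossy_encode(data, step=8)
    print(rle_decode(lossless) == data)   # True: exact duplicate of the input
    print(len(lossless), len(lossy))      # the lossy form needs fewer runs
    print(rle_decode(lossy))              # only an approximation of the original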

IP Telephony

Introduction
If you've never heard of Internet Telephony, get ready to change the way you think about long-distance phone calls. Internet Telephony, or Voice over Internet Protocol, is a method for taking analog audio signals, like the kind you hear when you talk on the phone, and turning them into digital data that can be transmitted over the Internet. How is this useful? Internet Telephony can turn a standard Internet connection into a way to place free phone calls. The practical upshot of this is that by using some of the free Internet Telephony software that is available to make Internet phone calls, you are bypassing the phone company (and its charges) entirely. Internet Telephony is a revolutionary technology that has the potential to completely rework the world's phone systems. Internet Telephony providers like Vonage have already been around for a little while and are growing steadily. Major carriers like AT&T are already setting up Internet Telephony calling plans in several markets around the United States, and the FCC is looking seriously at the potential ramifications of Internet Telephony service. Above all else, Internet Telephony is basically a clever "reinvention of the wheel." In this article, we'll explore the principles behind Internet Telephony, its applications and the potential of this emerging technology, which will more than likely one day replace the traditional phone system entirely. The interesting thing about Internet Telephony is that there is not just one way to place a call. There are three different "flavors" of Internet Telephony service in common use today:
ATA - The simplest and most common way is through the use of a device called an ATA (analog telephone adaptor). The ATA allows you to connect a standard phone to your computer or your Internet connection for use with Internet Telephony. The ATA is an analog-to-digital converter. It takes the analog signal from your traditional phone and converts it into digital data for transmission over the Internet. Providers like Vonage and AT&T CallVantage are bundling ATAs free with their service. You simply take the ATA out of the box, plug the cable from your phone that would normally go into the wall socket into the ATA, and you're ready to make Internet Telephony calls. Some ATAs may ship with additional software that is loaded onto the host computer to configure it, but in any case it is a very straightforward setup.
IP Phones - These specialized phones look just like normal phones, with a handset, cradle and buttons. But instead of having the standard RJ-11 phone connectors, IP phones have an RJ-45 Ethernet connector. IP phones connect directly to your router and have all the hardware and software necessary right onboard to handle the IP call. Wi-Fi phones allow subscribing callers to make Internet Telephony calls from any Wi-Fi hot spot.
Computer-to-computer - This is certainly the easiest way to use Internet Telephony. You don't even have to pay for long-distance calls. There are several companies offering free or very low-cost software that you can use for this type of Internet Telephony. All you need is the software, a microphone, speakers, a sound card and an Internet connection, preferably a fast one like you would get through a cable or DSL modem. Except for your normal monthly ISP fee, there is usually no charge for computer-to-computer calls, no matter the distance.

If you're interested in trying Internet Telephony, then you should check out some of the free Internet Telephony software available on the Internet. You should be able to download and set it up in about three to five minutes. Get a friend to download the software, too, and you can start tinkering with Internet Telephony to get a feel for how it works.
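Underneath all three flavors, the core operation is the same: digitize short chunks of audio and push them across the network as packets. The sketch below sends made-up 20 ms "audio frames" over UDP to a hypothetical receiver on the local machine; it is a bare-bones illustration of packetized voice, not a real VoIP or RTP implementation (which would add standard headers, a voice codec, and jitter buffering). The destination address, port and frame size are assumptions for the example.

import socket
import struct
import time

DEST = ("127.0.0.1", 5004)        # hypothetical receiver address and port
FRAME_MS = 20                     # one packet per 20 ms of speech, a common choice
SAMPLES_PER_FRAME = 8000 * FRAME_MS // 1000   # 8 kHz narrowband audio

def send_fake_call(seconds=1):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    for _ in range(seconds * 1000 // FRAME_MS):
        # Placeholder payload: silence. A real ATA or IP phone would put encoded voice here.
        payload = bytes(SAMPLES_PER_FRAME)
        header = struct.pack("!HI", seq, int(time.time() * 1000) & 0xFFFFFFFF)
        sock.sendto(header + payload, DEST)   # sequence number + timestamp + audio
        seq = (seq + 1) & 0xFFFF
        time.sleep(FRAME_MS / 1000)
    sock.close()

if __name__ == "__main__":
    send_fake_call(seconds=1)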

RPR

Introduction
The nature of the public network has changed. Demand for Internet Protocol (IP) data is growing at a compound annual rate of between 100% and 800%, while voice demand remains stable. What was once a predominantly circuit-switched network handling mainly circuit-switched voice traffic has become a circuit-switched network handling mainly IP data. Because the nature of the traffic is not well matched to the underlying technology, this network is proving very costly to scale. User spending has not increased proportionally to the rate of bandwidth increase, and carrier revenue growth is stuck at the lower end of 10% to 20% per year. The result is that carriers are building themselves out of business. Over the last 10 years, as data traffic has grown both in importance and in volume, technologies such as frame relay, ATM, and the Point-to-Point Protocol (PPP) have been developed to force-fit data onto the circuit network. While these protocols provided virtual connections - a useful approach for many services - they have proven too inefficient, costly and complex to scale to the levels necessary to satisfy the insatiable demand for data services. More recently, Gigabit Ethernet (GigE) has been adopted by many network service providers as a way to network user data without the burden of SONET/SDH and ATM. However, GigE's shortcomings when applied in carrier networks were soon recognized, and to address these problems a technology called Resilient Packet Ring (RPR) was developed. RPR retains the best attributes of SONET/SDH, ATM, and Gigabit Ethernet. It is optimized for differentiated IP and other packet data services, while providing uncompromised quality for circuit voice and private-line services. It works in point-to-point, linear, ring, or mesh networks, providing ring survivability in less than 50 milliseconds. RPR dynamically and statistically multiplexes all services into the entire available bandwidth in both directions on the ring while preserving bandwidth and service-quality guarantees on a per-customer, per-service basis. And it does all this at a fraction of the cost of legacy SONET/SDH and ATM solutions. Data, rather than voice circuits, dominates today's bandwidth requirements. New services such as IP VPN, voice over IP (VoIP), and digital video are no longer confined within the corporate local-area network (LAN). These applications are placing new requirements on metropolitan-area network (MAN) and wide-area network (WAN) transport. RPR is uniquely positioned to fulfill these bandwidth and feature requirements as networks transition from circuit-dominated to packet-optimized infrastructures. RPR technology uses a dual counter-rotating fiber ring topology. Both rings (inner and outer) are used to transport working traffic between nodes. By utilizing both fibers, instead of keeping a spare fiber for protection, RPR utilizes the total available ring bandwidth. These fibers, or ringlets, are also used to carry control messages (topology updates, protection, and bandwidth control). Control messages flow in the opposite direction to the traffic that they represent; for instance, outer-ring traffic-control information is carried on the inner ring to upstream nodes.
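A small illustration of the dual-ringlet idea: given the ring size and the positions of a source and a destination node, a sender can pick whichever ringlet (outer taken as clockwise and inner as counter-clockwise, by assumption here) reaches the destination in fewer hops, so both fibers carry working traffic. This is a toy model of ringlet selection, not the actual IEEE 802.17 algorithm.

def choose_ringlet(src, dst, n_nodes):
    """Return (ringlet, hop_count) for the shorter direction around the ring."""
    clockwise = (dst - src) % n_nodes      # hops on the outer ringlet
    counter = (src - dst) % n_nodes        # hops on the inner ringlet
    if clockwise <= counter:
        return "outer", clockwise
    return "inner", counter

if __name__ == "__main__":
    N = 8                                  # an 8-node ring, purely illustrative
    for src, dst in [(0, 2), (0, 6), (3, 7)]:
        ringlet, hops = choose_ringlet(src, dst, N)
        print(f"node {src} -> node {dst}: use {ringlet} ringlet, {hops} hop(s)")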

pH Control Technique using Fuzzy Logic

Introduction
Fuzzy control is a practical alternative for a variety of challenging control applications, since it provides a convenient method for constructing non-linear controllers through the use of heuristic information. Such heuristic information may come from an operator who has acted as a "human in the loop" controller for a process. In the fuzzy control design methodology, a set of rules on how to control the process is written down and then incorporated into a fuzzy controller that emulates the decision-making process of the human. In other cases, the heuristic information may come from a control engineer who has performed extensive mathematical modelling, analysis and development of control algorithms for a particular process; the rest of the procedure is the same as in the earlier case. The ultimate objective of using fuzzy control is to provide a user-friendly formalism for representing and implementing the ideas we have about how to achieve high-performance control. Apart from being a heavily used technology these days, fuzzy logic control is simple, effective and efficient. In this paper, the structure, working and design of a fuzzy controller are discussed in detail through an in-depth analysis of the development and functioning of a fuzzy logic pH controller.
pH Control
To illustrate the application of fuzzy logic, the remaining sections of the paper are directed towards the design and working of a pH control system using fuzzy logic. pH is an important variable in the field of production, especially in chemical plants, sugar industries, etc. The pH of a solution is defined as the negative of the logarithm, to the base 10, of the hydrogen ion concentration, i.e. pH = -log10[H+]. Let us consider the stages of operation of a sugar industry where pH control is required. The main area of concern is the clarification of the raw juice of sugarcane. The raw juice has a pH of 5.1 to 5.5, while the clarified juice should ideally be neutral; i.e., the set point is a pH of 7. The process involves the addition of lime and SO2 gas for clarifying the raw juice; the addition of these two is called liming and sulphitation respectively. The process involves continuous addition of lime and SO2, and lime has the property of increasing the pH of the clarified juice. This is the principle used for pH control in sugar industries. The pH of the raw juice is measured, this value is compared to the set point, and the result is used to change the diameter of the lime flow pipe opening as required. The whole process can be summarised as follows. The pH sensor measures the pH. This reading is amplified and recorded. The output of the amplifier is also fed to the pH indicator and interface. The output of this block is fed to the fuzzy controller, whose output is given to the stepper motor drive. This in turn adjusts the diameter of the lime flow pipe opening as required. Thus, the input to the fuzzy controller is the pH reading of the raw juice, and the output of the fuzzy controller is the diameter of the lime flow pipe valve, or a quantity that controls that diameter, such as a DC current or voltage.

The output obtained from the fuzzy controller is used to drive a stepper motor, which in turn controls the diameter of the valve opening of the lime flow pipe. This output tends to maintain the pH value of the sugar juice at the target value. A detailed description of the design and functioning of the fuzzy controller is given in the following section.
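As a hedged sketch of the kind of controller described above, the code below implements a one-input, one-output fuzzy controller: triangular membership functions fuzzify the pH error, three hand-written rules map the error to a change in valve opening, and a weighted-average (centroid-style) step defuzzifies the result. The membership breakpoints, rule table and output values are illustrative assumptions, not the paper's actual design.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_lime_controller(ph, setpoint=7.0):
    """Return a suggested change in lime-valve opening (arbitrary units, -1 to +1)."""
    error = setpoint - ph                        # positive error -> juice too acidic
    # Fuzzification: degree to which the error is Negative, Zero, or Positive.
    mu = {
        "negative": tri(error, -4.0, -2.0, 0.0),
        "zero":     tri(error, -0.5,  0.0, 0.5),
        "positive": tri(error,  0.0,  2.0, 4.0),
    }
    # Rule consequents (singletons): close the valve, hold, open the valve.
    action = {"negative": -1.0, "zero": 0.0, "positive": 1.0}
    num = sum(mu[k] * action[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0         # weighted-average defuzzification

if __name__ == "__main__":
    for ph in (5.2, 6.8, 7.0, 7.6):
        print(f"pH {ph:.1f} -> valve change {fuzzy_lime_controller(ph):+.2f}")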

Multisensor Fusion and Integration

Introduction
A sensor is a device that detects or senses the value, or changes in the value, of the variable being measured. The term sensor is sometimes used instead of the terms detector, primary element or transducer. The fusion of information from sensors with different physical characteristics, such as light, sound, etc., enhances the understanding of our surroundings and provides the basis for planning, decision making, and control of autonomous and intelligent machines.
Sensors Evolution
A sensor is a device that responds to some external stimulus and then provides some useful output. With the concept of input and output, one can begin to understand how sensors play a critical role in both closed and open loops. One problem is that sensors are often not specific: they tend to respond to a variety of stimuli applied to them without being able to differentiate one from another. Nevertheless, sensors and sensor technology are necessary ingredients in any control-type application. Without the feedback from the environment that sensors provide, the system has no data or reference points, and thus no way of understanding what is right or wrong with its various elements. Sensors are especially important in automated manufacturing, particularly in robotics. Automated manufacturing is essentially the procedure of removing the human element as far as possible from the manufacturing process. Sensors in the condition-measurement category sense various types of inputs, conditions, or properties to help monitor and predict the performance of a machine or system.
Multisensor Fusion And Integration
Multisensor integration is the synergistic use of the information provided by multiple sensory devices to assist in the accomplishment of a task by a system. Multisensor fusion refers to any stage in the integration process where there is an actual combination of different sources of sensory information into one representational format.
Multisensor Integration
The diagram represents multisensor integration as being a composite of basic functions. A group of n sensors provides input to the integration process. In order for the data from each sensor to be used for integration, it must first be effectively modelled. A sensor model represents the uncertainty and error in the data from each sensor and provides a measure of its quality that can be used by the subsequent integration functions.
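One common fusion rule, consistent with the sensor-model idea above (each sensor reporting a value plus an uncertainty), is the inverse-variance weighted average: less noisy sensors get more weight, and the fused estimate has lower variance than any single input. The sketch below applies this rule to two hypothetical range sensors; the numbers are invented purely for illustration.

def fuse_measurements(readings):
    """Fuse (value, variance) pairs with inverse-variance weighting.

    Returns the fused value and its variance.
    """
    weights = [1.0 / var for _, var in readings]
    fused_value = sum(w * val for (val, _), w in zip(readings, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused_value, fused_var

if __name__ == "__main__":
    # Hypothetical example: a noisy ultrasonic sensor and a precise laser ranger
    # both measuring the same distance in metres.
    ultrasonic = (2.30, 0.04)   # (value, variance)
    laser = (2.21, 0.01)
    value, var = fuse_measurements([ultrasonic, laser])
    print(f"fused distance = {value:.3f} m, variance = {var:.4f}")  # variance below either input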

Integrated Power Electronics Module

Introduction
In power electronics, solid-state electronics is used for the control and conversion of electric power. The goal of power electronics is to realize power conversion from an electrical source to an electrical load in a highly efficient, highly reliable and cost-effective way. Power electronics modules are key units in a power electronics system. These modules integrate power switches and the associated electronic circuitry for drive, control and protection, together with other passive components. During the past decades, power devices underwent generation-by-generation improvements and can now handle significant power density. On the other hand, power electronics packaging has not kept pace with the development of semiconductor devices. This is due to the limitations of power electronics circuits. The integration of power electronics circuits is quite different from that of other electronic circuits: the objective of power electronics circuits is electronic energy processing, and they hence require high power-handling capability and proper thermal management. Most of the currently used power electronic modules are made using wire-bonding technology [1,2]. In these packages, power semiconductor dies are mounted on a common substrate and interconnected with wire bonds. Other associated electronic circuitry is mounted on a multilayer PCB and connected to the power devices by vertical pins. These wire bonds are prone to resistance, parasitics and fatigue failure. Due to its two-dimensional structure, the package has a large size. Another disadvantage is the ringing produced by the parasitics associated with the wire bonds. To improve the performance and reliability of power electronics packages, wire bonds must be replaced. Research in power electronics packaging has resulted in the development of an advanced packaging technique that can replace wire bonds. This new-generation package is termed an 'Integrated Power Electronics Module' (IPEM) [1]. In it, planar metalization is used instead of conventional wire bonds. It uses a three-dimensional integration technique that can provide low-profile, high-density systems. It offers high-frequency operation and improved performance. It also reduces the size, weight and cost of the power modules.
Features Of IPEMs
The basic structure of an IPEM contains power semiconductor devices, control/drive/protection electronics and passive components. The power devices together with their drive and protection circuit are called the active IPEM, and the remaining part is called the passive IPEM. The drive and protection circuits are realized in the form of a hybrid integrated circuit and packaged together with the power devices. Passive components include inductors, capacitors, transformers, etc. The commonly used power switching devices are MOSFETs and IGBTs [3], mainly because of their high-frequency operation and low on-time losses. Another advantage is their inherent vertical structure, in which the metalization electrode pads are on two sides. Usually the gate and source pads are on the top surface with a non-solderable thin-film Al contact, while the drain metalization, using Ag or Au, is deposited on the bottom of the chip and is solderable. This vertical structure of the power chips offers an advantage for building sandwich-type 3-D integrated constructions.

GMPLS

Introduction
The emergence of optical transport systems has dramatically increased the raw capacity of optical networks and has enabled new, sophisticated applications. For example, network-based storage, bandwidth leasing, data mirroring, add/drop multiplexing (ADM), dense wavelength division multiplexing (DWDM), optical cross-connects (OXC), photonic cross-connects (PXC), and multiservice switching platforms are some of the devices that may make up an optical network and are expected to be the main carriers for the growth in data traffic.
Multiple Types of Switching and Forwarding Hierarchies
Generalized MPLS (GMPLS) differs from traditional MPLS in that it supports multiple types of switching, i.e. the addition of support for TDM, lambda, and fiber (port) switching. The support for the additional types of switching has driven GMPLS to extend certain base functions of traditional MPLS and, in some cases, to add functionality. These changes and additions impact basic LSP properties, how labels are requested and communicated, the unidirectional nature of LSPs, how errors are propagated, and the information provided for synchronizing the ingress and egress LSRs.
1. Packet Switch Capable (PSC) interfaces: interfaces that recognize packet boundaries and can forward data based on the content of the packet header. Examples include interfaces on routers that forward data based on the content of the IP header and interfaces on routers that forward data based on the content of the MPLS "shim" header.
2. Time-Division Multiplex Capable (TDM) interfaces: interfaces that forward data based on the data's time slot in a repeating cycle. An example of such an interface is that of an SDH/SONET cross-connect (XC), terminal multiplexer (TM), or add/drop multiplexer (ADM).
3. Lambda Switch Capable (LSC) interfaces: interfaces that forward data based on the wavelength on which the data is received. An example of such an interface is that of a photonic cross-connect (PXC) or optical cross-connect (OXC) that can operate at the level of an individual wavelength. Additional examples include PXC interfaces that can operate at the level of a group of wavelengths, i.e. a waveband.
4. Fiber-Switch Capable (FSC) interfaces: interfaces that forward data based on the position of the data in real-world physical space. An example of such an interface is that of a PXC or OXC that can operate at the level of a single fiber or multiple fibers.
The diversity and complexity of managing these devices have been the main driving factors in the evolution and enhancement of the MPLS suite of protocols to provide control not only for packet-based domains, but also for the time, wavelength, and space domains. GMPLS further extends the suite of IP-based protocols that manage and control the establishment and release of label switched paths (LSPs) that traverse any combination of packet, TDM, and optical networks. GMPLS adopts all the technology in MPLS.
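To make the forwarding hierarchy concrete, the sketch below models the four interface classes as an ordered enumeration and checks whether a finer-grained LSP (e.g. a packet LSP) can be nested inside a coarser-grained one (e.g. a lambda LSP), following the usual GMPLS ordering PSC < TDM < LSC < FSC. It is an illustrative data-structure sketch, not code from any GMPLS control-plane implementation.

from enum import IntEnum

class SwitchingCapability(IntEnum):
    """GMPLS interface switching capabilities, ordered from finest to coarsest."""
    PSC = 1   # Packet Switch Capable
    TDM = 2   # Time-Division Multiplex Capable
    LSC = 3   # Lambda Switch Capable
    FSC = 4   # Fiber-Switch Capable

def can_nest(inner, outer):
    """A finer-grained LSP can be carried inside a coarser-grained LSP."""
    return inner < outer

if __name__ == "__main__":
    print(can_nest(SwitchingCapability.PSC, SwitchingCapability.LSC))  # True: packets ride a lambda
    print(can_nest(SwitchingCapability.FSC, SwitchingCapability.TDM))  # False: a fiber cannot ride a time slot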
