
'1xEV-DO' Architecture for Wireless Internet

Millions of people around the world use the Internet every day -- to communicate with others, follow the stock market, keep up with the news, check the weather, shop, entertain themselves and learn. Staying connected has become so important that it is hard to get away from our computers and our Internet connections, because we might miss an e-mail message or news we need to know. With the world's business and people's personal lives growing more dependent on communication over the Internet, we must be ready to take the next step and adopt devices that let us access the Internet on the go. That is where the wireless Internet comes in. This revolution has coincided with the cellular boom of the last few years. The expanding network of digital cellular and personal communication services (PCS) has created a solid foundation for wireless Internet services.

A novel method of compressing speech with higher bandwidth efficiency

This uses a novel method of speech compression and transmission that saves a considerable amount of the transmission bandwidth required for the speech signal. The scheme exploits the low-pass nature of the speech signal. The method applies equally well to any signal that is low-pass in nature; speech, being the most widely used in real-time communication, is highlighted here. As per this method, the low-pass signal (speech) at the transmitter is divided into a set of packets, each containing, say, N samples. Of the N samples per packet, only a smaller number of samples, say αN, are transmitted. Here α is less than unity, so compression is achieved. The N samples per packet are subjected to an N-point DFT. Since only low-pass signals are considered here, the number of significant values in the set of DFT samples is very limited, and transmitting these significant samples alone suffices for reliable transmission. The number of samples that are transmitted is determined by the parameter α, which is almost independent of the source of the speech signal. In other methods of speech compression, specific characteristics of the source, such as pitch, are important for the algorithm to work. An exact reverse process at the receiver reconstructs the samples: the N-point IDFT of the received signal is performed after the necessary zero padding. Zero padding is necessary because, of the N samples at the transmitter, only αN samples are transmitted, but at the receiver N samples are again needed to faithfully reconstruct the signal.
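A minimal sketch of this transmit/receive chain is given below. It is an illustration of the idea, not the authors' implementation; the packet length N = 256 and compression factor α = 0.25 are example values chosen here, and conjugate symmetry is restored at the receiver because the speech samples are real.

```python
# Illustrative sketch of the DFT-based compression described above
# (not the authors' code). N and alpha are example values.
import numpy as np

N = 256          # samples per packet
alpha = 0.25     # fraction of DFT samples actually transmitted (alpha < 1)
K = int(alpha * N)

def transmit(packet):
    """Keep only the first K DFT coefficients of a low-pass packet."""
    X = np.fft.fft(packet, N)
    return X[:K]                      # significant (low-frequency) samples only

def receive(X_partial):
    """Zero-pad to N coefficients, restore conjugate symmetry, invert the DFT."""
    X = np.zeros(N, dtype=complex)
    X[:K] = X_partial
    X[-(K - 1):] = np.conj(X_partial[1:][::-1])   # mirror terms for a real signal
    return np.real(np.fft.ifft(X, N))

# Example: a low-pass test signal survives the round trip essentially unchanged.
t = np.arange(N)
speech_like = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 7 * t / N)
reconstructed = receive(transmit(speech_like))
print(np.max(np.abs(speech_like - reconstructed)))   # ~1e-15
```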

An ATM with an eye

The rise of technology has brought into use many types of equipment aimed at greater customer satisfaction. The ATM is one such machine, which has made monetary transactions easy for bank customers. The other side of this improvement is the greater opportunity for a culprit to get his 'unauthentic' share. Traditionally, security is handled by requiring the combination of a physical access card and a PIN or other password in order to access a customer's account. This model invites fraudulent attempts through stolen cards, badly chosen or automatically assigned PINs, cards with little or no encryption, employees with access to non-encrypted customer account information, and other points of failure. Our paper proposes an automatic teller machine security model that combines a physical access card, a PIN, and electronic facial recognition. By forcing the ATM to match a live image of a customer's face with an image stored in a bank database and associated with the account number, the damage caused by stolen cards and PINs is effectively neutralized. Only when the PIN matches the account and the live image matches the stored image would a user be considered fully verified.
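The decision rule at the heart of this model can be sketched as follows. This is an illustrative sketch only, not a production banking system; the similarity threshold and the limited-withdrawal cap are hypothetical example values.

```python
# Illustrative decision logic for the proposed PIN-plus-face verification.
# The threshold and the limited-withdrawal figure are hypothetical examples.
FACE_MATCH_THRESHOLD = 0.80     # assumed minimum similarity score
VISUALLY_UNVERIFIED_LIMIT = 50  # assumed per-transaction cap agreed with the customer

def authorize(pin_ok: bool, face_similarity: float, requested_amount: float):
    """Return (allowed, amount, note) for one withdrawal attempt."""
    if not pin_ok:
        return False, 0, "PIN rejected - no transaction"
    if face_similarity >= FACE_MATCH_THRESHOLD:
        return True, requested_amount, "Fully verified: PIN and live image both match"
    # PIN matched but the live image did not: allow only the limited amount
    # and flag the live image of the user for later examination.
    allowed_amount = min(requested_amount, VISUALLY_UNVERIFIED_LIMIT)
    return True, allowed_amount, "Visually unverified: limited transaction, image logged"

print(authorize(pin_ok=True, face_similarity=0.93, requested_amount=200))
print(authorize(pin_ok=True, face_similarity=0.40, requested_amount=200))
```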

The main issues faced in developing such a model are keeping the time taken by the verification process negligible, allowing for an appropriate level of variation in a customer's face when compared with the database image, and the fact that credit cards which can be used at ATMs to withdraw funds are generally issued by institutions that have no in-person contact with the customer, and hence no opportunity to acquire a photo. Because the system would only attempt to match two (and later, a few) discrete images, searching through a large database of possible matching candidates would be unnecessary. The process would effectively become an exercise in pattern matching, which would not require a great deal of time. With appropriate lighting and robust learning software, slight variations could be accounted for in most cases. Further, a positive visual match would cause the live image to be stored in the database, so that future transactions would have a broader base from which to compare if the original account image fails to provide a match, thereby decreasing false negatives. When a match is made with the PIN but not the images, the bank could limit transactions in a manner agreed upon by the customer when the account was opened, and could store the image of the user for later examination by bank officials. With regard to bank employees gaining access to customer PINs for use in fraudulent transactions, this system would likewise reduce that threat to the low limit imposed by the bank, and agreed to by the customer, on visually unverifiable transactions. In the case of credit card use at ATMs, such a verification system would not currently be feasible without an overhaul of the entire credit card issuing industry, but it is possible that positive results (read: significant fraud reduction) achieved by this system might motivate such an overhaul. The last consideration is that consumers may be wary of the privacy concerns raised by maintaining images of customers in a bank database, encrypted or otherwise, due to possible hacking attempts or employee misuse. However, one could argue that having the image compromised by a third party would have far less dire consequences than the account information itself. Furthermore, since nearly all ATMs videotape customers engaging in transactions, it is no broad leap to realize that banks already build an archive of their customer images, even if they are not necessarily grouped with account information.

Analog-Digital Hybrid Modulation for improved efficiency over Broadband Wireless Systems

This paper seeks to present ways to eliminate the inherent quantization noise component in digital communications, instead of conventionally making it minimal. It deals with a new concept of signaling called the Signal Code Modulation (SCM) technique. The primary analog signal is represented by two parts: a sample which is quantized and encoded digitally, and an analog component which is a function of the quantization error of the digital sample. The advantages of such a system are two-fold, offering the benefits of both analog and digital signaling. The presence of the analog residual allows the system performance to improve when excess channel SNR is available, while the digital component provides increased SNR and makes it possible for coding to be employed to achieve near error-free transmission. We introduce the concept of Signal Code Modulation (SCM), which utilizes both analog and digital modulation techniques.
The primary analog input signal is sampled at the appropriate rate and quantized. The digital samples are denoted by symbols D. The resulting D symbols are then transmitted using digital transmission techniques (such as QAM) optimized for that channel. These D symbols represent N bits per analog input sample. The quantization residual, which would otherwise be discarded, is transmitted over the noisy channel as an analog symbol A corresponding to the digital symbol D. To produce the quantization error A, the quantized data is converted back into analog form and subtracted from the original analog input signal. This symbol A is, for noise immunity, amplified by a gain of 2^N (or any proportional factor that matches the voltage swing of the signal to that of the channel).
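A minimal numeric sketch of this split into a digital symbol and an amplified analog residual is given below; it is illustrative only, and the input range of [-1, 1) and N = 4 bits are assumed example values.

```python
# Minimal numeric sketch of the SCM idea described above (illustrative only).
# Assumption: a full-scale input range of [-1, 1) and N = 4 bits per sample.
N_BITS = 4
LEVELS = 2 ** N_BITS
STEP = 2.0 / LEVELS            # quantizer step size for the [-1, 1) range

def scm_transmit(x):
    """Split one analog sample into a digital symbol D and an analog residual A."""
    d = int((x + 1.0) / STEP)              # digital component (N-bit code)
    d = max(0, min(LEVELS - 1, d))
    x_hat = -1.0 + (d + 0.5) * STEP        # quantized value converted back to analog
    a = (x - x_hat) * LEVELS               # residual amplified by 2**N for noise immunity
    return d, a

def scm_receive(d, a):
    """Recombine the digital symbol and the (re-attenuated) analog residual."""
    x_hat = -1.0 + (d + 0.5) * STEP
    return x_hat + a / LEVELS

x = 0.3721
d, a = scm_transmit(x)
print(d, a, scm_receive(d, a))   # recovers 0.3721 when the channel is noiseless
```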

Architectural requirements for a DSP processor

The best way to understand the requirements is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors. Let us consider one of the most common processing tasks, the finite impulse response (FIR) filter. For each tap of the filter, a data sample is multiplied by a filter coefficient and the result is added to a running sum over all of the taps. Hence the main component of the FIR filter algorithm is the dot product: multiply and add. These operations are not unique to the FIR filter algorithm; in fact, multiplication is one of the most common operations

performed in signal processing; convolution, IIR filtering and the Fourier transform also involve heavy use of the multiply-accumulate operation. Originally, microprocessors implemented multiplication as a series of shift and add operations, each of which consumed one or more clock cycles. So the first requirement of a DSP processor is hardware that can multiply in a single cycle; most DSP algorithms in fact require a multiply-and-accumulate (MAC) unit. In comparison with other types of computing tasks, DSP applications typically have very high computational requirements, since they often must execute DSP algorithms in real time on lengthy segments of data; therefore parallel operation of several independent execution units is a must. For example, in addition to the MAC unit, an ALU and a shifter are also required. Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth, higher than that of general-purpose microprocessors, which had a single bus connection to memory and could make only one access per cycle. The most common approach was to use two or more separate banks of memory, each of which was accessed by its own bus and could be written or read in a single cycle; programs are stored in one memory and data in another. With this arrangement, the processor can fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches, thus enabling the processor to execute a MAC in a single cycle.
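The kernel being described is simply a dot product per output sample. The sketch below writes it out explicitly so the per-tap multiply-accumulate is visible; the coefficients and input values are arbitrary examples.

```python
# A plain FIR filter written to expose the multiply-accumulate (MAC) structure
# discussed above; coefficients and input are arbitrary example values.
def fir_filter(samples, coeffs):
    """For each output sample, every tap is one multiply plus one add."""
    num_taps = len(coeffs)
    out = []
    for n in range(len(samples)):
        acc = 0.0                                   # running sum (the accumulator)
        for k in range(num_taps):                   # one MAC per tap
            if n - k >= 0:
                acc += coeffs[k] * samples[n - k]   # multiply, then accumulate
        out.append(acc)
    return out

print(fir_filter([1.0, 2.0, 3.0, 4.0], [0.5, 0.25, 0.25]))
```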

Augmented reality

Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general it superimposes graphics over a real-world environment in real time. Getting the right information at the right time and the right place is key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. This kind of interface minimizes the extra mental effort that a user has to expend when switching his or her attention back and forth between real-world tasks and a computer screen. In augmented reality, the user's view of the world and the computer interface literally become one.

Artificial intelligence for speech recognition

Artificial intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (computers, robots, etc.). AI is behaviour of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence. Natural language processing (NLP) refers to artificial intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action. The input words are scanned and matched against internally stored known words; identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language.

No special commands or computer language are required, and there is no need to enter programs in a special language for creating software. VoiceXML takes speech recognition even further: instead of talking to your computer, you're essentially talking to a web site, and you're doing this over the phone. OK, you say, but what exactly is speech recognition? Simply put, it is the process of converting spoken input to text; speech recognition is thus sometimes referred to as speech-to-text. Speech recognition allows you to provide input to an application with your voice. Just as clicking with your mouse, typing on your keyboard, or pressing a key on the phone keypad provides input to an application, speech recognition allows you to provide input by talking. In the desktop world, you need a microphone to be able to do this. In the VoiceXML world, all you need is a telephone.

Artificial neural networks

Just as life attempts to understand itself better by modeling it, and in the process creates something new, so neural computing is an attempt at modeling the workings of a brain, and this presentation is an attempt to understand the basic concept of artificial neural networks. In this paper, a small but effective overall account of artificial neural networks is presented. First, the history of neural networks, which reviews how far they have developed over the years, is presented. Next, having established what exactly a neural network is with the help of an MLP model, we proceed to the next section, resemblance with the brain, wherein the comparison between the brain and neural networks, as well as between neurons and perceptrons, is made with the help of figures. The perceptron, also called the artificial neuron, is the most basic component of a neural network; it is studied and depicted in the Structure of a Neural Network section, which is followed by the architecture. Among the most important aspects of neural networks is their wide range of applications, a few of which are dealt with in the subsequent sections, followed by their limitations. The main question of interest to us is "What will be the future of neural networks: will it survive, or will it rule us?" This section leads to a brief conclusion, and we end the paper with the references.

Artificial Eye

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of these cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, there are more than 10 million people worldwide affected by retinal diseases that lead to loss of vision. The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system. At present, two general strategies have been pursued. The "epiretinal" approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer and retinal ganglion cells. The information in this approach must be captured by a camera system before transmitting data and energy to the implant.
The "subretinal" approach involves the electrical stimulation of the inner retina from the subretinal space by implantation of a semiconductor-based micro-photodiode array (MPA) into this location. The concept of the subretinal approach is that the electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner that produces formed images. Some researchers have developed an implant system in which a video camera captures images, a chip processes the images, and an electrode array transmits the images to the brain; this approach is called a cortical implant.

Asymmetric digital subscriber line

ADSL technology is asymmetric. It allows more bandwidth downstream (from an NSP's central office to the customer site) than upstream (from the subscriber to the central office). This asymmetry, combined with always-on access (which eliminates call setup), makes ADSL ideal for Internet/intranet surfing, video-on-demand, and remote LAN access. Users of these applications typically download much more information than they send.

ADSL transmits more than 6 Mbps to a subscriber, and as much as 640 Kbps more in both directions. Such rates expand existing access capacity by a factor of 50 or more without new cabling. ADSL can literally transform the existing public information network from one limited to voice, text, and low-resolution graphics to a powerful, ubiquitous system capable of bringing multimedia, including full-motion video, to every home this century.
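As a rough check of the "factor of 50" figure, the comparison below assumes a 128 kbps ISDN line as the baseline for existing access; the baseline is an assumption of ours, not stated in the text.

```python
# Rough check of the "factor of 50" claim, assuming a 128 kbps baseline
# for existing digital access (the baseline is an assumed example value).
adsl_downstream_kbps = 6000      # "more than 6 Mbps to a subscriber"
adsl_upstream_kbps = 640         # "as much as 640 Kbps"
baseline_kbps = 128              # assumed existing access rate

print(adsl_downstream_kbps / baseline_kbps)   # about 47x, i.e. roughly a factor of 50
```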

ADSL will play a crucial role over the next decade or more as telephone companies enter new markets for delivering information in video and multimedia formats. New broadband cabling will take decades to reach all prospective subscribers, and the success of these new services will depend on reaching as many subscribers as possible during the first few years. By bringing movies, television, video catalogs, remote CD-ROMs, corporate LANs and the Internet into homes and small businesses, ADSL will make these markets viable and profitable for telephone companies and application suppliers.

Imagine yourself in a world where humans interact with computers. You are sitting in front of your personal computer, which can listen, talk, or even scream aloud. It has the ability to gather information about you and interact with you through special techniques like facial recognition, speech recognition, etc. It can even understand your emotions at the touch of the mouse. It verifies your identity, feels your presence, and starts interacting with you. You ask the computer to dial your friend at his office; it realizes the urgency of the situation through the mouse, dials your friend at his office, and establishes a connection.

ATM

ATM is simply a Data Link Layer protocol. It is asynchronous in the sense that the recurrence of the cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbits/sec. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place. Synchronous Transfer Mode (STM) was the first technique to be considered, due to its compatibility with most existing systems and the desire to preserve the investment in existing equipment while evolving to a more flexible network. ATM has been proposed to overcome the limitations of STM and the large delay incurred by conventional packet switching. ATM is one of the general class of digital packet technologies that relay and route traffic by means of an address contained within the packet. What makes packet technologies attractive for data traffic is that they exploit communication channels much more efficiently than the STM technologies commonly used to transmit digitized voice.

Blue Eyes

Human cognition depends primarily on the ability to perceive, interpret, and integrate audio-visual and sensory information. Adding extraordinary perceptual abilities to computers would enable them to work together with human beings as intimate partners. Researchers are attempting to add more capabilities to computers that will allow them to interact like humans: recognize human presence, talk, listen, or even guess their feelings.

The BLUE EYES technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings. It uses a non-obtrusive sensing method, employing modern video cameras and microphones to identify the user's actions through these imparted sensory abilities. The machine can understand what a user wants, where he is looking, and even his physical or emotional state.

Cellular Digital Packet Data (CDPD)

CDPD is a secure, proven, and reliable protocol that has been used for several years by law enforcement, public safety, and mobile professionals to securely access critical, private information. CDPD has several features to enhance the security of the mobile end user's data, and these are discussed below.

OPERATION OF CDPD

A brief overview of the operation of the CDPD network is as follows: a wireless modem (or Mobile End System, M-ES) communicates by radio with the Mobile Data Base Station (MDBS). The MDBS transfers this data by landline and microwave to the Mobile Data Intermediate System (MD-IS), which processes the information and sends it, via Intermediate System gateways (routers), to the appropriate destination.

Over the years semiconductor devices have set a new trend in the field of electronics and created waves in the existing technology of memory storage. Flash memory, which has been in use for a long time now, is being replaced by new advances in memory technology such as ferroelectric memory, magnetoresistive memory and ovonic unified memory. This paper contrasts the working of the flash kind of memory used in dynamic RAMs with that of the other types mentioned above. In brief, advantages over the flash kind of memory are evaluated and justified. In a world where 'nanoseconds' matter a lot, 'the new technology' awaits recognition and exposure.

Class-D Amplifiers

Class-D amplifiers present us with a revolutionary solution that would help eliminate the loss and distortion caused by converting digital signals to analog while amplifying them before sending them to the speakers. This inchoate piece of knowledge could prove instrumental in improving and redefining the essence of sound and taking it to a different realm. This type of amplifier does not require D-A conversion and hence reduces the cost incurred in developing state-of-the-art output technology. The digital output from sources such as CDs, DVDs and computers can now be sent directly for amplification without the need for any conversion. Another important feature of this unique and novel kind of amplifier is that it gives a typical efficiency of 90%, compared with the 65-70% of conventional amplifiers. This obviously means less dissipation, which in turn means lower-rated heat sinks and less wasted energy, and it makes the use of Class-D amplifiers in miniature and portable devices all the more apt. All these years Class-D amplifiers have been used where efficiency was the key, whereas developments in this technology have now made their entry possible into other, less hi-fi domains: they are showing up in MP3 players, portable CD players, laptop computers, cell phones, even personal digital assistants.

Cluster computing

A cluster of computers shares common network characteristics, like the same namespace, and is available to other computers on the network as a single resource. These computers are linked together using high-speed network interfaces, and the actual binding together of all the individual computers in the cluster is performed by the operating system and the software used. It is a kind of high-performance massively parallel computer built primarily out of commodity hardware components, running a free-software operating system like Linux or FreeBSD, interconnected by a private high-speed network.

The high cost of 'traditional' high-performance computing is one motivation: clustering using commercial off-the-shelf (COTS) hardware is far cheaper than buying specialized machines for computing. Cluster computing has emerged as a result of the convergence of several trends, including the availability of inexpensive high-performance microprocessors and high-speed networks, and the development of standard software tools for high-performance distributed computing. There is also an increased need for high-performance computing: as processing power becomes available, applications which require enormous amounts of processing, like weather modeling, are becoming more commonplace, and they require the high-performance computing provided by clusters. Thus the viable alternative to this problem is "building your own cluster", which is what cluster computing is all about.

Computer forensics

The proliferation of computer use in today's networked society is creating some complex side effects in the application of the age-old motives of greed, jealousy, and revenge. Criminals are becoming much more sophisticated in committing crimes, and computers are being encountered in almost every type of criminal activity. Gangs use computers to clone mobile telephones and to re-encode credit cards. Drug dealers use computers to store their transaction ledgers. Child pornography distributors use the Internet to peddle and trade their wares. Fraud schemes have been advertised on the Internet. Counterfeiters and forgers use computers to make passable copies of paper currency or counterfeit cashier's checks, and to create realistic-looking false identification. In addition, information stored in computers has become the target of criminal activity: information such as social security and credit card numbers, intellectual property, proprietary information, contract information, classified documents, etc., has been targeted. Further, the threat of malicious destruction of software, employee sabotage, identity theft, blackmail, sexual harassment, and commercial and government espionage is on the rise. Personnel problems are manifesting themselves in the automated environment, with inappropriate or unauthorized use complaints resulting in lawsuits against employers as well as losses of proprietary information costing millions of dollars. All of this has led to an explosion in the number and complexity of computers and computer systems encountered in the course of criminal or internal investigations and the subsequent seizure of computer systems and stored electronic communications. Computer evidence has become a 'fact of life' for essentially all law enforcement agencies, and many are just beginning to explore their options in dealing with this new venue. Almost overnight, personal computers have changed the way the world does business. They have also changed the world's view of evidence, because computers are used more and more as tools in the commission of 'traditional' crimes. Evidence relating to embezzlement, theft, extortion and even murder has been discovered on personal computers. This new technological twist in crime patterns has brought computer evidence to the forefront in law enforcement circles.

Computer memory based on the protein bacterio-rhodopsin

With existing methods fast approaching their limits, it is no wonder that a number of new storage technologies are developing. Currently, researchers are looking at protein-based memory to compete with the speed of electronic memory, the reliability of magnetic hard disks, and the capacities of optical/magnetic storage. We contend that three-dimensional optical memory devices made from bacteriorhodopsin, utilizing the two-photon read-and-write method, are such a technology, and with them the future of memory lies. In a prototype memory system, bacteriorhodopsin stores data in a 3-D matrix. The matrix can be built by placing the protein into a cuvette (a transparent vessel) filled with a polyacrylamide gel. The protein, which is in the bR state, is fixed in place by the polymerization of the gel. A battery of krypton lasers and a charge-injection device (CID) array surround the cuvette and are used to write and read data. While a molecule changes states within microseconds, the combined steps of a read or write operation take about 10 milliseconds. However, like holographic storage, this device obtains data pages in parallel, so a speed of 10 Mbps is possible. This speed is similar to that of slow semiconductor memory.
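A quick back-of-the-envelope check (ours, not from the text) shows what "pages in parallel" has to mean for the quoted figures to be consistent: at roughly 10 ms per page operation, each page must carry on the order of 100,000 bits to sustain 10 Mbps.

```python
# Back-of-the-envelope consistency check of the quoted figures (illustrative).
page_time_s = 10e-3          # combined read/write steps per page
throughput_bps = 10e6        # quoted speed
bits_per_page = throughput_bps * page_time_s
print(bits_per_page)         # 100000.0 bits, e.g. roughly a 316 x 316 binary page
```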

Computerized Paper Evaluation Using Neural Networks: A New Proposal

Computers have revolutionized the field of education. The rise of the Internet has made computers a real knowledge bank providing distance education, corporate access, etc. But the role of computers in education can be comprehensive only when the evaluation system is also computerized. The real assessment of students lies in the proper evaluation of their papers. Conventional paper evaluation leaves the student at the mercy of the teachers; Lady Luck plays a major role in this current system of evaluation. Also, students don't get sufficient opportunities to express their knowledge; instead they are made to regurgitate the material they have learnt from their respective textbooks. This hinders their creativity to a great extent, and a great deal of money and time is wasted. The progress of distance education has also been hampered by the non-availability of a computerized evaluation system. This paper addresses how these striking deficiencies in the educational system can be removed.

Data Security Definition

Security is defined as "a guarantee that an obligation will be met". In its simplest form it is concerned with people trying to access remote services that they are not authorized to use, or with making sure that nosy people cannot read, or worse yet modify, messages intended for other recipients. Security is a broad topic and covers a multitude of sins. Most security problems are intentionally caused by malicious people trying to gain some benefit or harm someone. A few of the most common perpetrators are students, hackers, sales representatives, businessmen, ex-employees, accountants, stock brokers, conmen and spies. Intruders would first take a panoramic view of the victim's network and then start digging the holes. Today the illicit activities of hackers are growing by leaps and bounds. Data security problems can be divided roughly into four intertwined areas: secrecy, authentication, non-repudiation and integrity control. Solutions for the various types of security attacks are provided by cryptography, firewalls, etc.

a) Secrecy - has to do with keeping information out of the hands of unauthorized users.
b) Authentication - deals with determining whom you are talking to before revealing sensitive information or entering into a business deal.
c) Non-repudiation - deals with signatures.
d) Data integrity - ensures that the information exchanged in an electronic transaction is not alterable without detection, typically provided by digital signatures.

Data Security in Local Network using Distributed Firewalls

Distributed firewalls are host-resident security software applications that protect the enterprise network's servers and end-user machines against unwanted intrusion. They offer the advantage of filtering traffic from both the Internet and the internal network, which enables them to prevent hacking attacks that originate from either. This is important because the most costly and destructive attacks still originate from within the organization. They are like personal firewalls except that they offer several important advantages such as central management, logging, and in some cases access-control granularity. These features are necessary to implement corporate security policies in larger enterprises; policies can be defined and pushed out on an enterprise-wide basis. A key feature of distributed firewalls is centralized management. The ability to populate servers and end-user machines, and to configure and "push out" consistent security policies, helps to maximize limited resources. The ability to gather reports and maintain updates centrally makes distributed security practical.

Distributed firewalls help in two ways. First, remote end-user machines can be secured. Secondly, they secure critical servers on the network, preventing intrusion by malicious code and "jailing" other such code by not letting the protected server be used as a launch pad for expanded attacks.

Dynamic virtual private network

The benefits promised by intranets lead to an important challenge for businesses using this technology: how to establish and maintain trust in an environment which was designed originally for free and open access to information. The Internet was not designed with business security in mind. It was designed by universities as an open network where users could access, share and add to information as easily as possible. A way has to be found to secure an intranet for businesses without impinging on the intranet's inherent benefits of flexibility, interoperability and ease of use. Indeed, an ideal solution must provide not only the highest levels of security but security in such a way that users can easily access, modify and share more information, not less, under carefully controlled and maintained conditions. The most appropriate and successful answer to this challenge is a DYNAMIC VIRTUAL PRIVATE NETWORK. Unlike traditional VPNs that offer limited or inflexible security, a dynamic VPN provides both extremely high levels of security and, equally important, the flexibility to accommodate dynamically changing groups of users and information needs. A dynamic VPN is actually an intranet enabler: it enables an intranet to offer more resources and services than it could otherwise, thereby allowing the business to make more use of its information resources.

Efficient implementation of cryptographically useful "large" Boolean functions

Cryptography provides the necessary tools for accomplishing private and authenticated communication and for performing secure and authenticated transactions over the Internet as well as other networks. It is highly probable that every single bit of information flowing through our networks will have to be either encrypted or signed and authenticated a few years from now. Historically, four groups of people have used and contributed to the art of cryptography: the military, the diplomatic corps, diarists, and lovers. Of these, the military has had the most important role and has shaped the field. Within military organizations, the messages to be encrypted have traditionally been given to poorly paid clerks for encryption and transmission. Until the advent of computers, one of the main constraints on cryptography had been the ability of the code clerk to perform the necessary transformations, often on a battlefield with little equipment. An additional constraint has been the difficulty in switching over quickly from one cryptographic method to another, since this entails retraining a large number of people. Modern cryptography is based on a key, denoted by K. This key might be any one of a large number of values; the range of possible values of the key is called the keyspace. Both the encryption and decryption operations use this key, so the functions are given as Ek(M) = C and Dk(C) = M, where M is the message and C is the ciphertext. A message is nothing but plaintext. The process of disguising a message in such a way as to hide its substance is encryption; an encrypted message is ciphertext, and the process of turning ciphertext back into plaintext is decryption.
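The relations Ek(M) = C and Dk(C) = M can be illustrated with a toy symmetric cipher. This is a sketch for exposition only; the XOR-keystream construction below is not a secure algorithm and is not drawn from the paper.

```python
# Toy illustration of Ek(M) = C and Dk(C) = M using a simple XOR keystream.
# For exposition only - NOT a secure cipher.
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an unbounded byte stream from the key K (illustrative construction)."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def E(key: bytes, message: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(message, keystream(key)))

def D(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, keystream(key)))   # same operation

K = b"shared secret key"
M = b"plaintext message"
C = E(K, M)
assert D(K, C) == M    # Dk(Ek(M)) = M
print(C.hex())
```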
Home Networking

The latest advances in Internet access technologies, falling PC prices, and the proliferation of smart devices in the house have dramatically increased the number of intelligent devices on the consumer's premises. Consumer electronics manufacturers are building more and more intelligence into their products, enabling those devices to be networked into clusters that can be controlled remotely. Advances in wireless communication technologies have introduced a variety of wireless devices, like PDAs and Web pads, into the house. The advent of multiple PCs and smart devices into the house, and the availability of high-speed broadband Internet access, have resulted in in-house networking needs to meet the following requirements of the consumers:

- Simultaneous Internet access for multiple home users
- Sharing of peripherals and files
- Home control/automation
- Multi-player gaming
- Connecting to/from the workplace
- Remote monitoring/security
- Distributed video

The home networking requirement introduces into the market a new breed of products called Residential Gateways. A Residential Gateway (RG) will provide the connectivity features necessary for the consumer to exploit the advantages of a networked home. The RG will also provide the framework for residential connectivity-based services to reach the home. Examples of such services include: video on demand, IP telephony, home security and surveillance, remote home appliance repair and troubleshooting, utility/meter reading, virtual private network connectivity and innovative e-commerce solutions.

Embryonic approach towards integrated circuits

The growth and operation of all living beings are directed by the interpretation, in each of their cells, of a chemical program, the DNA string or genome. This process is the source of inspiration for Embryonics (embryonic electronics), whose final objective is the design of highly robust integrated circuits, endowed with properties usually associated with the living world: self-repair (cicatrisation) and self-replication. The Embryonics architecture is based on four hierarchical levels of organization.

1. The basic primitive of our system is the molecule, a multiplexer-based element of a novel programmable circuit.

2. A finite set of molecules makes up a cell, essentially a small processor with an associated memory.

3. A finite set of cells makes up an organism, an application-specific multiprocessor system.

4. The organism can itself replicate, giving rise to a population of identical organisms, capable of self-replication and repair.
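A purely structural sketch of these four levels is given below; the class names and fields are illustrative stand-ins, not taken from an actual Embryonics design.

```python
# Structural sketch of the four organizational levels listed above
# (names and fields are illustrative, not from an actual Embryonics design).
from dataclasses import dataclass, field
from typing import List
import copy

@dataclass
class Molecule:                 # level 1: multiplexer-based programmable element
    config_bits: int = 0

@dataclass
class Cell:                     # level 2: small processor with associated memory
    molecules: List[Molecule] = field(default_factory=list)
    memory: List[int] = field(default_factory=list)

@dataclass
class Organism:                 # level 3: application-specific multiprocessor
    cells: List[Cell] = field(default_factory=list)

    def replicate(self) -> "Organism":
        """Level 4: an organism gives rise to an identical copy of itself."""
        return copy.deepcopy(self)

organism = Organism(cells=[Cell(molecules=[Molecule() for _ in range(4)]) for _ in range(3)])
population = [organism, organism.replicate()]
print(len(population), len(population[1].cells))
```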

Each artificial cell is characterized by a fixed architecture. Multicellular arrays can realize a variety of different organisms, all capable of self-replication and self-repair. In order to allow for a wide range of applications we then introduce a flexible architecture, realized using a new type of fine-grained field-programmable gate array whose basic element, our molecule, is essentially a programmable multiplexer.

FUZZY ARTMAP ALGORITHM FOR DATA MINING IN BIOINFORMATICS

Bioinformatics is the application of information technology to the management of biological data. It is an interdisciplinary area of science where mathematics, statistics and computer science are applied to data produced by experimental work in biochemistry, cell biology and genetics. The need for merging the biological sciences with the world of IT and computer science has mainly arisen due to the huge amount of information being produced by the study of genetic material. Mapping is the process of splitting each chromosome into smaller fragments, which can be propagated and characterized and placed back in the correct order on each chromosome. Sequencing is the process of determining the order of the nucleotides (base sequences) in a DNA or RNA molecule, or the order of amino acids in a protein. Genomics refers to the number of genes, the function of genes, and the location and regulation of genes. In data acquisition, first the DNA sample of the person has to be retrieved; a DNA sample can be obtained from any tissue, including blood. Then the given DNA sample is ionized using ESI. Electrospray ionization (ESI) allows the production of molecular ions directly from samples in solution.

It can be used for small and large molecular-weight biopolymers (peptides, proteins, carbohydrates, and DNA fragments), and lipids. It is a continuous ionization method that is suitable for use as an interface with HPLC or capillary electrophoresis. Multiply charged ions are usually produced. ESI should be considered a complement to MALDI. The sample must be soluble, stable in solution, polar, and relatively clean (free of nonvolatile buffers, detergents, salts, etc.). Laboratories preprocess the genetic data obtained from the ionization method using a series of modular programs. These are most commonly written in Perl; other languages that are often used include Python, XML and Java. This preprocessing basically involves organizing the sequence data and checking data integrity; after these steps are carried out, the data can be imported into a database. Sequence data comes in the form of base strings.
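A small sketch of the kind of preprocessing step described above is shown below: checking the integrity of a base string before it is imported into a database. The text mentions Perl as the usual choice; Python is used here purely for illustration, and the example records are invented.

```python
# Illustrative integrity check for base-string sequence data before DB import.
VALID_BASES = set("ACGT")

def check_sequence(seq: str) -> str:
    """Normalize a raw base string and reject anything that is not A, C, G or T."""
    cleaned = seq.strip().upper().replace(" ", "")
    bad = set(cleaned) - VALID_BASES
    if bad:
        raise ValueError(f"invalid symbols in sequence: {sorted(bad)}")
    return cleaned

records = ["acgtt gacca", "ACGTNNN"]          # example raw inputs
for raw in records:
    try:
        print("import:", check_sequence(raw))
    except ValueError as err:
        print("reject:", err)
```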

General packet radio system

Wireless phone use is taking off around the world. Many of us would no longer know how to cope without our cell phones. Always being connected offers us flexibility in our lifestyles, makes us more productive in our jobs, and makes us feel more secure. So far, voice has been the primary wireless application. But with the Internet continuing to influence an increasing proportion of our daily lives, and more of our work being done away from the office, it is inevitable that the demand for wireless data is going to ignite. Already, in those countries that have cellular-data services readily available, the number of cellular subscribers taking advantage of data has reached significant proportions. But to move forward, the question is whether current cellular-data services are sufficient, or whether the networks need to deliver greater capabilities. The fact is that with proper application configuration, use of middleware, and new wireless-optimized protocols, today's cellular data can offer tremendous productivity enhancements. But for those potential users who have stood on the sidelines, subsequent generations of cellular data should overcome all of their objections. These new services will roll out both as enhancements to existing second-generation cellular networks, and as an entirely new third generation of cellular technology.

Graphics processing unit

There are various applications that require a 3D world to be simulated as realistically as possible on a computer screen. These include 3D animations in games, movies and other real-world simulations. It takes a lot of computing power to represent a 3D world, due to the great amount of information that must be used to generate a realistic scene and the complex mathematical operations that must be used to project this 3D world onto a computer screen. In this situation, processing time and bandwidth are at a premium due to the large amounts of both computation and data. The functional purpose of a GPU, then, is to provide separate, dedicated graphics resources, including a graphics processor and memory, to relieve some of the burden on the main system resources, namely the Central Processing Unit, main memory, and the system bus, which would otherwise get saturated with graphical operations and I/O requests. The abstract goal of a GPU, however, is to enable a representation of a 3D world as realistically as possible. So these GPUs are designed to provide additional computational power that is customized specifically to perform these 3D tasks.
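The simplest of these projection operations can be written in a couple of lines; the sketch below is a generic perspective projection, and the focal length and example point are arbitrary illustrative values, not from the text.

```python
# A minimal perspective projection of a 3D point onto a 2D screen,
# as an example of the math a GPU performs at scale (illustrative values).
def project(point3d, focal_length=1.0):
    """Project a camera-space point (x, y, z), z > 0, onto the image plane."""
    x, y, z = point3d
    return (focal_length * x / z, focal_length * y / z)

print(project((2.0, 1.0, 4.0)))   # (0.5, 0.25): farther points map closer to the centre
```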

High altitude aeronautical platforms

High Altitude Aeronautical Platform Stations (HAAPS) is the name of a technology for providing wireless narrowband and broadband telecommunication services, as well as broadcasting services, with either airships or aircraft.

HAAPS operate at altitudes between 3 and 22 km. A HAPS should be able to cover a service area of up to 1,000 km in diameter, depending on the minimum elevation angle accepted from the user's location. The platforms may be airplanes or airships (essentially balloons) and may be manned or unmanned, with autonomous operation coupled with remote control from the ground. While the term HAP may not have a rigid definition, we take it to mean a solar-powered and unmanned airplane or airship capable of long endurance on station, possibly several years. Various platform options exist: SkyStation, the Japanese Stratospheric Platform Project, the European Space Agency (ESA) and others suggest the use of airships/blimps/dirigibles. These will be stationed at 21 km and are expected to remain aloft for about 5 years. Angel Technologies (HALO), AeroVironment/NASA (Helios) and the European Union (Heliplat) propose the use of high-altitude long-endurance aircraft. The aircraft are either engine or solar powered and are stationed at 16 km (HALO) or 21 km (Helios). Helios is expected to stay aloft for a minimum of 6 months, whereas HALO will have 3 aircraft flying in 8-hour shifts. Platforms Wireless International is implementing a tethered aerostat situated at about 6 km. A high-altitude telecommunication system comprises an airborne platform, typically at high atmospheric or stratospheric altitudes, with a telecommunications payload, and associated ground station telecommunications equipment. The combination of altitude, payload capability, and power supply capability makes it ideal for serving new and metropolitan areas with advanced telecommunications services such as broadband access and regional broadcasting. The opportunities for applications are virtually unlimited, ranging from narrowband services such as paging and mobile voice to interactive broadband services such as multimedia and video conferencing. For future telecommunications operators such a platform could provide blanket coverage from day one, with the added advantage of not being limited to a single service. Where little or unreliable infrastructure exists, traffic could be switched through the air via the HAPS platform.

Image Authentication: A Few Approaches Using Digital Watermarking

A digital Watermark is a digital signal or pattern inserted into a digital image. Since this signal or pattern is present in each unaltered copy of the original image, the digital Watermark may also serve as a digital signature for the copies. The desirable characteristics of a Watermark are that it should be resilient to standard manipulations of any nature and that it should be statistically irremovable. Every Watermarking system consists of at least two different parts:

1. Watermark Embedding Unit

2. Watermark Detection and Extraction Unit

Consider a Watermark in a still image. A robust, secure, invisible Watermark is imprinted on the image I, and the Watermarked image WI is distributed. The author keeps the original image I. To prove that an image WI' or a portion of it has been pirated, the author shows that WI' contains his Watermark (for this purpose, he can but does not have to use his original image I). The best a pirate can do is to try to remove the original Watermark (which is impossible if the Watermark is secure). There can be another way out for the pirate: to embed his own signature in the image.
But this does not help him much, because both his "original" and his Watermarked image will contain the author's Watermark (due to the robustness property), while the author can present an image without the pirate's Watermark. Thus, the ownership of the image can be resolved in a court of law. We have done the implementation in MATLAB and are doing the simulation in C++.

InfiniBand

Amdahl's Law is one of the fundamental principles of computer science; it basically states that efficient systems must provide a balance between CPU performance, memory bandwidth, and I/O performance. At odds with this is Moore's Law, which has accurately predicted that semiconductors double their performance

roughly every 18 months. Since I/O interconnects are governed by mechanical and electrical limitations more severe than the scaling capabilities of semiconductors, these two laws lead to an eventual imbalance and limit system performance. This would suggest that I/O interconnects need to change radically every few years in order to maintain system performance. In fact, there is another practical law which prevents I/O interconnects from changing frequently: if it ain't broke, don't fix it. Bus architectures have a tremendous amount of inertia because they dictate the bus interface architecture of semiconductor devices. For this reason, successful bus architectures typically enjoy a dominant position for ten years or more. The PCI bus was introduced to the standard PC architecture in the early 90's and has maintained its dominance with only one major upgrade during that period: from 32-bit/33 MHz to 64-bit/66 MHz. The PCI-X initiative takes this one step further, to 133 MHz, and seemingly should provide the PCI architecture with a few more years of life. But there is a divergence between what personal computers and servers require. Personal computers, or PCs, are not pushing the bandwidth capabilities of PCI 64/66: PCI slots offer a great way for home or business users to purchase networking, video decode, advanced sound, or other cards and upgrade the capabilities of their PC. On the other hand, servers today often include clustering, networking (Gigabit Ethernet) and storage (Fibre Channel) cards in a single system, and these push the 1 GB/s bandwidth limit of PCI-X.
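The imbalance argument can be made concrete with a small calculation in the spirit of Amdahl's law; the 30% I/O-bound fraction below is an assumed example figure, not a number from the text.

```python
# Numeric illustration of the imbalance argument: if a fraction of the work
# is I/O-bound and the interconnect does not improve, speedup saturates no
# matter how fast CPUs become. The 30% fraction is an assumed example.
def overall_speedup(cpu_speedup, io_fraction):
    return 1.0 / (io_fraction + (1.0 - io_fraction) / cpu_speedup)

io_fraction = 0.30
for cpu_speedup in (2, 4, 16, 1000):
    print(cpu_speedup, round(overall_speedup(cpu_speedup, io_fraction), 2))
# Even with a 1000x faster CPU, the system tops out near 1 / 0.30 = 3.3x.
```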

Intrusion detection system

In the last three years, the networking revolution has finally come of age. More than ever before, we see that the Internet is changing computing as we know it. The possibilities and opportunities are limitless; unfortunately, so too are the risks and chances of malicious intrusions.

It is very important that the security mechanisms of a system are designed so as to prevent unauthorized access to system resources and data. However, completely preventing breaches of security appears, at present, unrealistic. We can, however, try to detect these intrusion attempts so that action may be taken to repair the damage later. This field of research is called intrusion detection.

Anderson, while introducing the concept of intrusion detection in 1980, defined an intrusion attempt or a threat to be the potential possibility of a deliberate unauthorized attempt to access information, manipulate information, or render a system unreliable or unusable. Since then, several techniques for detecting intrusions have been studied. This paper discusses why intrusion detection systems are needed, the main techniques, present research in the field, and possible future directions of research.

Laser communication systems

Lasers have been considered for space communications since their realization in 1960. Specific advancements were needed in component performance and system engineering, particularly for space-qualified hardware. Advances in system architecture, data formatting and component technology over the past three decades have made laser communications in space not only viable but also an attractive approach to inter-satellite link applications. Information transfer is driving the requirements to higher data rates, laser cross-link technology explosion, global development activity, increased hardware, and design maturity. Most important in space laser communications has been the development of a reliable, high-power, single-mode laser diode as a directly modulated laser source. This technology advance offers the space laser communication system designer the flexibility to design very lightweight, high-bandwidth, low-cost communication payloads for satellites whose launch costs are a very strong function of launch weight. This feature substantially reduces blockage of the fields of view of the most desirable areas on satellites.

The smaller antennas with diameter typically less than 30 centimeters create less momentum disturbance to any sensitive satellite sensors. Fewer on board consumables are required over the long lifetime because there are fewer disturbances to the satellite compared with heavier and larger RF systems. The narrow beam divergence affords interference free and secure operation.

Magnetic RAM

In 1984 Drs. Arthur Pohm and Jim Daughton, both employed at that time by Honeywell, conceived of a new class of magnetoresistance memory devices which offered promise for high density, random access, nonvolatile memory. In 1989 Dr. Daughton left Honeywell to form Nonvolatile Electronics, Inc., having entered into a license agreement allowing him to sublicense Honeywell MRAM technology for commercial applications. Dr. Pohm, Dr. Daughton, and others at NVE continued to improve basic MRAM technology, and innovated new techniques which take advantage of revolutionary advances in magnetoresistive devices, namely giant magnetoresistance and spin-dependent tunneling. Today there is a tremendous potential for MRAM as a nonvolatile, solid-state memory to replace flash memory and EEPROM where fast writing or high write endurance is required, and in the longer term as a general-purpose read/write random access memory. NVE has a substantial patent portfolio containing 10 MRAM patents, and is willing to license these, along with 12 Honeywell MRAM patents, to companies interested in manufacturing MRAM. In addition, NVE is considering internal production of certain niche MRAM products over the next several years.

Multimedia Messaging Service

"A picture says more than a thousand words and is more fun to look at!!!" Everyone in this world believes in this quote, and it is also one of the main ideas that inspired the mobile developers who gave us this hot technology, MMS. MMS, the Multimedia Messaging Service, is a standardized messaging service. It traces its roots to SMS (Short Messaging Service) and EMS (Enhanced Messaging Service). MMS allows users to send and receive messages exploiting the whole array of media types available today, e.g. text, images, audio, video, graphics, data and animations, while also making it possible to support new content types as they become popular. With MMS, for example, users could send each other personal pictures together with a voice message, such as a greeting card with a picture, a handwritten message, and a personal song or sound clip recorded by the user. Video conferencing, which is expected to make a great impact in the future, is also possible with this technology. Using the Wireless Application Protocol (WAP) as the bearer technology and powered by the high-speed transmission technologies EDGE, GPRS and UMTS (WCDMA), Multimedia Messaging allows users to send and receive messages that look like PowerPoint-style presentations. MMS supports standard image formats such as GIF and JPEG, video formats such as MPEG-4, and audio formats such as MP3, MIDI and WAV, as well as the newer AMR format. The greatest advantage of MMS is its ability to interact with mobile-to-mobile terminals as well as mobile to PDA, laptop, Internet and other data devices. MMS can also act as a virtual email client. Greatly anticipated by young users in particular, MMS is projected to fuel the growth of related market segments by as much as forty percent.

Multiple Domain Orientation - A Theoretical Proposal for Storage Media

In today's cyber world we are largely dependent on computers, and with the advancement of technology and the complexity of computers, massive storage has become mandatory. Hard disks have been the major storage medium for the past several years. Hard disks continue to shrink in size while gaining storage capacity and transfer speed, and the focus of development has been on increasing the density. But this scaling must one day saturate as feature sizes approach atomic levels. Hence, in this paper, on the basis of domain theory, an individual bit field is given multiple distinct states, making it possible to store more information in a single bit field without increasing the density.
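As a rough illustration of the idea (not the paper's actual domain-orientation scheme), the sketch below treats each "bit field" as a cell with 2**k distinguishable states, so that k bits fit where one used to; the choice of k is a hypothetical example.

```python
# Toy multi-level encoding: pack a byte string into base-(2**k) "domain"
# states and unpack it again. Assumes k divides 8 to keep the loop simple.
def pack_domains(data: bytes, bits_per_domain: int) -> list[int]:
    """Split each byte into 8 // bits_per_domain multi-level domain states."""
    assert 8 % bits_per_domain == 0, "example keeps k a divisor of 8"
    mask = (1 << bits_per_domain) - 1
    states = []
    for byte in data:
        for shift in range(8 - bits_per_domain, -1, -bits_per_domain):
            states.append((byte >> shift) & mask)
    return states

def unpack_domains(states: list[int], bits_per_domain: int) -> bytes:
    """Inverse of pack_domains."""
    per_byte = 8 // bits_per_domain
    out = bytearray()
    for i in range(0, len(states), per_byte):
        byte = 0
        for s in states[i:i + per_byte]:
            byte = (byte << bits_per_domain) | s
        out.append(byte)
    return bytes(out)

# 2 bits per domain -> half as many physical cells for the same payload.
assert unpack_domains(pack_domains(b"disk", 2), 2) == b"disk"
```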

Nanotechnological proposal of RBC

Molecular manufacturing promises precise control of matter at the atomic and molecular level. One major implication of this is that in the next 10-30 years it may become possible to construct machines on the micron scale, composed of parts on the nanometer scale. Subassemblies of such devices may include useful robotic components such as 100-nm manipulator arms, 400-nm mechanical GHz-clock computers, 10-nm sorting rotors for molecule-by-molecule reagent purification, and smooth superhard surfaces made of atomically flawless diamond. Such technology has clear medical implications. It would allow physicians to perform precise interventions at the cellular and molecular level. Medical nanorobots have been proposed for gerontological applications, in pharmaceutical research, and to diagnose diseases, mechanically reverse atherosclerosis, supplement the immune system, rewrite DNA sequences in vivo, repair brain damage, and reverse cellular insults caused by "irreversible" processes or by cryogenic storage of biological tissues. The goal of the present paper is to present one such preliminary design for a specific medical nanodevice that would achieve a useful result: an artificial mechanical erythrocyte (red blood cell, RBC), or "respirocyte."

Neural Networks

Neural networks have the capacity to map the complex and highly non-linear relationship between the load levels of each zone and the system topology, which is exactly what feeder reconfiguration in distribution systems requires. This study proposes strategies for reconfiguring feeders by using artificial neural networks with this mapping ability. The artificial neural networks determine the appropriate system topology that reduces the power loss as the load pattern varies. The control strategy can then easily be obtained from the system topology provided by the networks. The artificial neural networks determine the most appropriate system topology for a given load pattern on the basis of the knowledge acquired from the training set. This is in contrast to the repetitive process of transferring load and estimating power loss in the conventional algorithm.

The ANNs are designed in two groups: 1) the first group estimates the proper load data of each zone; 2) the second determines the appropriate system topology from the input load level.
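A minimal sketch of the second group, assuming a small multilayer perceptron and entirely hypothetical load patterns and topology labels (these are placeholders, not values from this study):

```python
# Given a vector of zone load levels, pick the switching topology: an index
# into a table of pre-computed, loss-minimizing configurations.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training set: each row is the load level of 4 zones; each
# label is the topology known (offline) to minimize loss for that pattern.
X_train = np.array([
    [0.9, 0.2, 0.3, 0.1],
    [0.2, 0.8, 0.7, 0.1],
    [0.1, 0.2, 0.2, 0.9],
    [0.8, 0.7, 0.2, 0.2],
])
y_train = np.array([0, 1, 2, 0])          # topology indices

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

# On-line use: measured zone loads in, recommended topology out.
print(net.predict([[0.85, 0.25, 0.30, 0.15]]))
```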

In addition, several programs with a training-set builder are developed for the design, training and accuracy testing of the ANNs. This paper presents a strategy for feeder reconfiguration to reduce power loss by using ANNs. The approach developed here differs fundamentally from the methods reviewed above in that load-flow solutions during the search process are not required. The training set of the ANN consists of the optimal system topologies corresponding to various load patterns, i.e. the topologies which minimize the loss under the given conditions.

Open RAN

The vision of the OpenRAN architecture is to design a radio access network architecture with the following characteristics: open, flexible, distributed and scalable. Such an architecture would be open because it defines open, standardized interfaces at key points that in past architectures were closed and proprietary. It would be flexible because it admits of several implementations, depending on the wired network resources available in the deployment situation. It would be distributed because the monolithic network elements of past architectures would be broken down into their respective functional entities, and the functional entities would be grouped into network elements that can be realized as a distributed system. The architecture would define an interface with the core network that allows the core network to be designed independently from the RAN, preserving access network independence in the core. Finally, the architecture would not require changes in radio link protocols; in particular, a radio link protocol based on IP would not be necessary. This document presents the first steps in developing the OpenRAN vision. In its first phase, the subject of this document, the OpenRAN architecture is purely concerned with distributing RAN functions to facilitate achieving open interfaces and flexible deployment. The transport substrate for implementing the architecture is assumed to be IP, but no attempt is made to optimize the use of IP protocols, nor are specific interfaces designated as open. The architecture could equally well be implemented on top of existing functional architectures that maintain a strict isolation between the transport layer and the radio network layer, by splitting an existing radio network layer into control and bearer parts. In addition, interoperation with existing core and RAN networks is supported via interworking functions. Chapters 7 through 11 in this report are exclusively concerned with this first phase of the architecture, and it is possible that the architecture may change as the actual implementation of the OpenRAN is considered and For Further Study items are resolved.

Optical networking

Here we explain SONET, the Synchronous Optical Network, which has been greeted with unparalleled enthusiasm throughout the world. We also explain how it came into existence and in which ways it differs from other transmission standards. What does synchronous mean? "Bits from one telephone call are always in the same location inside a digital transmission frame." The reader is assumed to be comfortable with the basic concepts of a public telecommunications network, with its separate functions of transmission and switching, and to be aware of the context for the growth of broadband traffic. In the early 1970s digital transmission systems began to appear, utilizing a method known as Pulse Code Modulation (PCM), first proposed by STC in 1937. As demand for voice telephony increased, and levels of traffic in the network grew ever higher, it became clear that the standard 2 Mbit/s signal was not sufficient to cope with the traffic loads occurring in the trunk network. As the need arose, further levels of multiplexing were added to the standard at much higher speeds, and thus SONET came into existence. For the first time in telecommunications history there is a worldwide, uniform and seamless transmission standard for service delivery. SONET provides the capability to send data at multi-gigabit rates over today's single-mode fiber-optic links. As end-users become ever more dependent on effective communications, there has been an explosion in the demand for sophisticated telecom services. Services such as videoconferencing, remote database access and multimedia file transfer require a flexible network with the availability of virtually unlimited bandwidth. The complexity of existing networks means that network operators have been unable to meet this demand. At present SONET is being implemented for long-haul traffic, but there is no reason it cannot be used for short distances as well.
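Because every SONET line rate is an exact multiple of the 51.84 Mbit/s STS-1 base rate, the multiplexing hierarchy the text refers to can be illustrated with a few lines of arithmetic; the OC-n levels shown are the standard ones, not figures taken from this paper.

```python
# Every SONET line rate is an integer multiple of the STS-1 base rate, which
# is what makes byte-interleaved multiplexing (and dropping/inserting a single
# call) straightforward.
STS1_MBPS = 51.84

def oc_rate(n: int) -> float:
    """Line rate of an OC-n signal in Mbit/s."""
    return n * STS1_MBPS

for n in (1, 3, 12, 48, 192):
    print(f"OC-{n:<3} {oc_rate(n):10.2f} Mbit/s")
# OC-48 (~2.5 Gbit/s) and OC-192 (~10 Gbit/s) are the multi-gigabit rates
# mentioned in the text.
```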

Palladium cryptography

As we tend towards a more and more computer-centric world, the concept of data security has attained paramount importance. Though present-day security systems offer a good level of protection, they are incapable of providing a "trustworthy" environment and are vulnerable to unexpected attacks. Palladium is a content protection concept that has spawned from the belief that the PC, as it currently stands, is not architecturally equipped to protect a user from the pitfalls and challenges that an all-pervasive network such as the Internet poses. As a drastic change in PC hardware is not feasible, largely for economic reasons, Palladium hopes to introduce only a minimal change on this front. A paradigm shift is awaited in this scenario with the advent of Palladium, making content protection a shared concern of both software and hardware. In the course of this paper the revolutionary aspects of Palladium are discussed in detail. A case study to restructure the present data security system of the JNTU examination system using Palladium is put forward.

Quantum cryptography

Quantum cryptography is an effort to allow two users of a common communication channel to create a body of shared and secret information. This information, which generally takes the form of a random string of bits, can then be used as a conventional secret key for secure communication. It is useful to assume that the communicating parties initially share a small amount of secret information, which is used up and then renewed in the exchange process, but even without this assumption exchanges are possible. The advantage of quantum cryptography over traditional key exchange methods is that the exchange of information can be shown to be secure in a very strong sense, without making assumptions about the intractability of certain mathematical problems. Even when assuming hypothetical eavesdroppers with unlimited computing power, the laws of physics guarantee (probabilistically) that the secret key exchange will be secure, given a few other assumptions.
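The abstract does not name a specific protocol; as a purely illustrative stand-in, the toy simulation below follows the basis-sifting step of the well-known BB84 scheme, in which the two parties keep only the bits for which their randomly chosen bases happened to agree.

```python
# Toy BB84-style sifting: Alice encodes random bits in random bases, Bob
# measures in random bases, and both keep only the positions where the bases
# matched. Only the bases are discussed publicly, never the bits themselves.
import random

def bb84_sift(n_qubits: int = 32, seed: int = 1):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.choice("+x") for _ in range(n_qubits)]
    bob_bases   = [rng.choice("+x") for _ in range(n_qubits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            bob_bits.append(bit)                 # same basis: outcomes agree
        else:
            bob_bits.append(rng.randint(0, 1))   # wrong basis: random outcome

    key_alice = [b for b, a, c in zip(alice_bits, alice_bases, bob_bases) if a == c]
    key_bob   = [b for b, a, c in zip(bob_bits,  alice_bases, bob_bases) if a == c]
    assert key_alice == key_bob                  # the sifted shared secret
    return key_alice

print(bb84_sift())
```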

Quantum dot lasers

The infrastructure of the Information Age has to date relied upon advances in microelectronics to produce integrated circuits that continually become smaller, better, and less expensive. The emergence of photonics, where light rather than electricity is manipulated, is poised to further advance the Information Age. Central to the photonic revolution is the development of miniature light sources such as quantum dots (QDs). Today, quantum dot manufacturing has been established to serve new datacom and telecom markets. Recent progress in microcavity physics, new materials, and fabrication technologies has enabled a new generation of high-performance QDs. This presentation will review commercial QDs and their applications as well as discuss recent research, including new device structures such as composite resonators and photonic crystals. Semiconductor lasers are key components in a host of widely used technological products, including compact disc players and laser printers, and they will play critical roles in optical communication schemes. The basis of laser operation depends on the creation of non-equilibrium populations of electrons and holes, and the coupling of electrons and holes to an optical field, which stimulates radiative emission. Other benefits of quantum dot active layers include further reduction in threshold currents and an increase in differential gain, that is, more efficient laser operation.

SCSI

SCSI is an acronym for Small Computer System Interface, and it is pronounced "skuzzy". It is the second-most popular hard disk interface used in PCs today. It is a high-speed, intelligent peripheral I/O bus with a device-independent protocol for transferring data between different types of peripheral devices. The SCSI bus connects the parts of a computer system so that they can communicate with each other, and it frees the host processor from the responsibility of internal I/O tasks. A SCSI bus can be internal, external, or cross the boundary from internal to external. The SCSI protocol is a peer-to-peer relationship: one device does not have to be subordinated to another device in order to perform I/O activities. Only two of these devices can communicate on the bus at any given time.

Each SCSI bus can connect up to 8 or up to 16 peripherals; one of those devices will always be the computer or the SCSI card, because they too are devices on the SCSI bus. SCSI devices are designated as either initiators (drivers) or targets (receivers), and the interface to the host computer is called the host adapter. On a narrow bus, every device connected has a different SCSI ID, ranging from 0 to 7. The host adapter takes up one ID, leaving seven IDs for other hardware. SCSI hardware typically consists of hard drives, tape drives, CD-ROMs, printers and scanners.
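A small illustrative model of these bus rules, assuming the common convention that arbitration is won by the highest requesting ID (which is why host adapters usually sit at ID 7); this is a teaching toy, not a driver.

```python
# Toy model of an 8-bit (narrow) SCSI bus: every device needs a unique ID in
# 0..7, and when several devices want the bus at once, the highest ID wins.
class ScsiBus:
    MAX_IDS = 8                        # narrow SCSI: IDs 0..7

    def __init__(self):
        self.devices = {}              # id -> device name

    def attach(self, scsi_id: int, name: str):
        if not 0 <= scsi_id < self.MAX_IDS:
            raise ValueError("ID must be 0..7 on a narrow bus")
        if scsi_id in self.devices:
            raise ValueError(f"ID {scsi_id} already taken by {self.devices[scsi_id]}")
        self.devices[scsi_id] = name

    def arbitrate(self, requesting_ids):
        """Return the ID that wins the bus: highest ID has highest priority."""
        return max(requesting_ids)

bus = ScsiBus()
bus.attach(7, "host adapter")
bus.attach(0, "hard drive")
bus.attach(5, "CD-ROM")
print(bus.arbitrate([0, 5]))           # -> 5
```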

Semiconductor Devices (A New Revolution in Memories)

Billions of chips in today's computers, automobiles, cell phones, media cards and those clever key-chain memories are powerless when idle, yet they dispense immense amounts of data and instructions at the flick of a switch. They are flash memory chips, a type of electrically erasable and programmable read-only memory; unlike DRAM (dynamic random access memory), flash keeps its contents when power is removed. Non-volatility, flash's defining property, is crucial for electronic systems such as cell phones, which must hold the instructions and data needed to send and receive calls and store phone numbers. Electronic products of all types, from microwave ovens to industrial machinery, make use of flash memory. Its programmability is the main feature that lets users add addresses, calendar entries and memos to personal digital assistants, and erase and reuse the media cards that store pictures taken with a digital camera. But flash technology is being challenged by new memory technologies intent on proving their dominance, and these new RAMs have little in common with it.

Smart Cameras in Embedded Systems

A smart camera performs real-time analysis to recognize scenic elements. Smart cameras are useful in a variety of scenarios: surveillance, medicine, etc. We have built a real-time system for recognizing gestures. Our smart camera uses novel algorithms to recognize gestures based on low-level analysis of body parts as well as hidden Markov models for the moves that comprise the gestures. These algorithms run on a Trimedia processor. Our system can recognize gestures at the rate of 20 frames/second, and the camera can also fuse the results of multiple cameras. Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification. Video processing has an insatiable demand for real-time performance. Fortunately, Moore's law provides an increasing pool of available computing power to apply to real-time analysis. Smart cameras leverage very large-scale integration (VLSI) to provide such analysis in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these systems run a wide range of algorithms to extract meaning from streaming video. Because they push the design space in so many dimensions, smart cameras are a leading-edge application for embedded system research.
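As a rough sketch of the HMM stage mentioned above, a gesture can be chosen by scoring the observed sequence of low-level "move" symbols under each candidate model with the forward algorithm; all model parameters below are made-up placeholders, not the camera's trained values.

```python
# Each gesture has its own discrete HMM; the gesture whose model gives the
# observation sequence the highest likelihood wins.
def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) via the forward algorithm (plain probabilities, since
    the toy sequences here are short)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two 2-state models over the symbols {0: "hand up", 1: "hand down"}.
wave_model  = ([0.5, 0.5], [[0.1, 0.9], [0.9, 0.1]], [[0.9, 0.1], [0.1, 0.9]])
raise_model = ([0.9, 0.1], [[0.8, 0.2], [0.2, 0.8]], [[0.95, 0.05], [0.6, 0.4]])

sequence = [0, 1, 0, 1, 0, 1]              # alternating up/down, i.e. a wave
scores = {name: forward_likelihood(sequence, *m)
          for name, m in {"wave": wave_model, "raise": raise_model}.items()}
print(max(scores, key=scores.get))         # -> "wave"
```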

Space-time adaptive processing

Space-Time Adaptive Processing (STAP) refers to a class of signal processing techniques used to process the returns of an antenna array radar system. It enhances the ability of radars to detect targets that might otherwise be obscured by clutter or jamming.

The output of STAP is a linear combination, or weighted sum, of the input signal samples. The "adaptive" in STAP refers to the fact that the STAP weights are computed to reflect the actual noise, clutter and jamming environment in which the radar finds itself. The "space" in STAP refers to the fact that the STAP weights (applied to the signal samples at each of the elements of the antenna array) at one instant of time define an antenna pattern in space. If there are jammers in the field of view, STAP will adapt the radar antenna pattern by placing nulls in the directions of those jammers, thus rejecting jammer power. The "time" in STAP refers to the fact that the STAP weights applied to the signal samples at one antenna element over the entire dwell define a system impulse response and hence a system frequency response.
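A compact numerical sketch of this weighted sum, assuming the common sample-matrix-inversion form in which the weights are proportional to inv(R) times the space-time steering vector; array size, pulse count and training data below are arbitrary illustrative values, not parameters from the text.

```python
# Sample-matrix-inversion STAP sketch: estimate the space-time covariance R
# from target-free training snapshots, then form w ~ inv(R) v for the
# steering vector v of the looked-for target.
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 8                              # antenna elements, pulses per dwell
NM = N * M

# Secondary (target-free) snapshots used to estimate the covariance matrix.
training = rng.standard_normal((NM, 200)) + 1j * rng.standard_normal((NM, 200))
R = training @ training.conj().T / training.shape[1]

# Space-time steering vector: Doppler steering x spatial steering.
fs, fd = 0.1, 0.25
v = np.kron(np.exp(2j * np.pi * fd * np.arange(M)),
            np.exp(2j * np.pi * fs * np.arange(N)))

w = np.linalg.solve(R, v)                # w proportional to inv(R) v
w /= (v.conj() @ w)                      # MVDR-style normalisation

snapshot = rng.standard_normal(NM) + 1j * rng.standard_normal(NM)
output = w.conj() @ snapshot             # the adaptive weighted sum of samples
```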

Surround sound system

We are now entering the Third Age of reproduced sound. The monophonic era was the First Age, which lasted from Edison's invention of the phonograph in 1877 until the 1950s. During that time, the goal was simply to reproduce the timbre of the original sound; no attempt was made to reproduce directional properties or spatial realism. The stereo era was the Second Age. It was based on inventions from the 1930s, reached the public in the mid-'50s, and has provided great listening pleasure for four decades. Stereo improved the reproduction of timbre and added two dimensions of space: the left-right spread of performers across a stage and a set of acoustic cues that allow listeners to perceive a front-to-back dimension. In two-channel stereo, this realism is based on fragile sonic cues. In most ordinary two-speaker stereo systems, these subtle cues can easily be lost, causing the playback to sound flat and uninvolving. Multichannel surround systems, on the other hand, can provide this involving presence in a way that is robust, reliable and consistent. The purpose of this seminar is to explore the advances and technologies of surround sound in the consumer market.

The Architecture of a Moletronics Computer

Some novel designs for simple molecular electronic digital logic circuits have been demonstrated: a complete set of three fundamental logic gates (AND, OR, and XOR), plus an adder function built up from those gates via well-known combinational logic. This means that in the coming future this technology could be a replacement for VLSI. Currently, however, the technology is only available under laboratory conditions, and how to mass-produce moletronic chips is still a big problem. Today, integrated circuits are made by etching silicon wafers with a beam of light; it is this lithography-based VLSI technology that makes mass production of the Pentium III processor possible. But as the size of a logic block shrinks to the nano-scale, this technology is no longer available: as wavelengths get too short, they tend to become X-rays and can damage the microstructure of molecules. On the other hand, the lithography mask of the Pentium III is extremely complex, and the shape and dimensions of its logic blocks vary greatly. Looking at currently available integrated circuits, the transistor density of memory chips is much higher than that of processor chips; the reason is that a memory cell is much simpler than processor circuitry, because, apart from the decoding logic, most of the memory bit cells are identical. Could we find a way to fabricate a logic circuit as complex as a Pentium processor using millions of identical logic units? The PLD (Programmable Logic Device) is the answer. The paper is organized as follows: Section II presents some basics of moletronic gate circuits; Section III uses PLD technology to build more complex blocks; Section IV shows that nanotubes can be used for interconnection wires.
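The "adder built up from the gates" mentioned above is ordinary combinational logic; here is a brief sketch, independent of whether the gates are realized with molecules or transistors.

```python
# Once AND, OR and XOR exist, a full adder -- and from it a ripple-carry
# adder of any width -- follows directly from combinational logic.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    s1 = XOR(a, b)
    total = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return total, carry_out

def ripple_add(x_bits, y_bits):
    """Add two little-endian bit lists of equal length."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 6 + 3 = 9 -> little-endian bits [1, 0, 0, 1]
print(ripple_add([0, 1, 1], [1, 1, 0]))
```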

The TIGER SHARC Processor

The TigerSHARC processor is the newest and most powerful member of the SHARC family; it incorporates mechanisms such as SIMD, VLIW and short-vector memory access in a single processor. This is the first time that all of these techniques have been combined in a real-time processor. The TigerSHARC DSP is an ultra-high-performance static superscalar architecture that is optimized for telecommunications infrastructure and other computationally demanding applications. This unique architecture combines elements of RISC, VLIW, and standard DSP processors to provide native support for 8-, 16-, and 32-bit fixed-point as well as floating-point data types on a single chip. Large on-chip memory, extremely high internal and external bandwidths and dual compute blocks provide the necessary capabilities to handle a vast array of computationally demanding, large signal processing tasks.

Tracking and positioning of mobiles in telecommunication

Mobile positioning technology has become an important area of research, for emergency as well as commercial services. Mobile positioning in cellular networks will provide several services such as locating stolen mobiles, emergency calls, different billing tariffs depending on where a call originates, and methods to predict user movement inside a region. The evolution to location-dependent services and applications in wireless systems continues to require the development of more accurate and reliable mobile positioning technologies. The major challenge to accurate location estimation is in creating techniques that yield acceptable performance when the direct path from the transmitter to the receiver is intermittently blocked. This is the Non-Line-Of-Sight (NLOS) problem, and it is known to be a major source of error, since it systematically causes the mobile to appear farther away from the base station (BS) than it actually is, thereby increasing the positioning error.

In this paper, we present a simple method for mobile telephone tracking and positioning with high accuracy. Our paper locates a mobile telephone by drawing a plurality of circles, with the radii being the distances between the mobile telephone and several base stations (found using the Time Of Arrival, TOA) and with the base stations at their centers, and by using location tracking curves connecting the intersection points between each circle pair instead of the common chords defined by the circles. We use location tracking curves connecting the intersection points of the two circles drawn by the ordinary TOA method, instead of the common chord as in TDOA.
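For contrast, here is a minimal sketch of the ordinary least-squares TOA positioning that the paper improves upon; the base-station coordinates and distances are hypothetical, and the paper's tracking-curve refinement is not reproduced here.

```python
# Each base station i at (xi, yi) reports a distance di to the handset,
# giving the circle (x - xi)^2 + (y - yi)^2 = di^2. Subtracting the first
# circle's equation from the others linearizes the system, which is then
# solved in the least-squares sense.
import numpy as np

def toa_position(stations, distances):
    stations = np.asarray(stations, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = stations[0]
    A = 2 * (stations[1:] - stations[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(stations[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Three hypothetical base stations and noiseless distances to (3, 4).
bs = [(0, 0), (10, 0), (0, 10)]
true = np.array([3.0, 4.0])
d = [np.linalg.norm(true - np.array(s)) for s in bs]
print(toa_position(bs, d))               # approximately [3. 4.]
```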

Tunable lasers

Tunable lasers are still a relatively young technology, but as the number of wavelengths in networks increases, so will their importance. Each wavelength in an optical network is separated from the next by a multiple of 0.8 nanometers (sometimes referred to as 100 GHz spacing). Current commercial products can cover perhaps four of these wavelengths at a time. While not the ideal solution, this still cuts the required number of spare lasers down. More advanced solutions hope to be able to cover a larger number of wavelengths, and should cut the cost of spares even further. The devices themselves are still semiconductor-based lasers that operate on principles similar to the basic non-tunable versions. Most designs incorporate some form of grating like those in a distributed feedback laser. These gratings can be altered in order to change the wavelengths they reflect in the laser cavity, usually by running electric current through them, thereby altering their refractive index. The tuning range of such devices can be as high as 40 nm, which would cover any of 50 different wavelengths in a system with 0.8 nm wavelength spacing. Technologies based on vertical cavity surface emitting lasers (VCSELs) incorporate movable cavity ends that change the length of the cavity and hence the wavelength emitted. Current designs of tunable VCSELs have similar tuning ranges.

Voice over internet protocol

Voice over Internet Protocol refers to sending voice and fax phone calls over data networks, particularly the Internet. This technology offers cost savings by making more efficient use of the existing network. Traditionally, voice and data were carried over separate networks optimized to suit the differing characteristics of voice and data traffic. With advances in technology, it is now possible to carry voice and data over the same networks whilst still catering for the different characteristics required by voice and data.

Voice-over-Internet-Protocol (VoIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network. The IP network could be a local area network in an office, a wide area network linking the sites of a large international organization, a corporate intranet, the Internet, or any combination of the above.

Wearable computers

Wearable computing facilitates a new form of human-computer interaction based on a small body-worn computer system that is always on, always ready and always accessible. In this regard, the new computational framework differs from that of handheld devices, laptop computers and Personal Digital Assistants (PDAs). The "always ready" capability leads to a new form of synergy between human and computer, characterized by long-term adaptation through constancy of the user interface. This new technology has a lot in store for you. You can do amazing things like typing a document while jogging, or shooting a video from horseback or while riding your mountain bike over railroad ties. Quite amazingly, you can even recall scenes that have ceased to exist. The hardware of a wearable computer is spread over the body, with the main unit situated in front of the user's eye. Wearable computers find a variety of applications, for example by providing the user with mediated augmented reality and by helping people with poor eyesight. MediWear and ENGwear are two models that highlight the applications of wearable computers. However, some disadvantages do exist. With the introduction of "underwearable computers" by Covert Systems, you can surely look at the future of wearable computers in an optimistic way.

Wireless communication

The hottest technology in personal computing is wireless networking, which allows users to share a high-speed Internet connection among multiple computers without being tethered to wires or wall jacks. Known as Wi-Fi, this wireless system is commonly used to connect computers in various rooms of a home, but it is also popping up in public places like airports, hotel lobbies and coffee shops. The Mobile Internet relies on a new set of standards, known as the Wireless Application Protocol (WAP). This technology will provide efficient access to information and services from a wide range of mobile devices. In particular, WAP extends the Internet by addressing the unique requirements of the wireless network environment and the unique characteristics of small handheld devices.
