
CHAPTER I
INTRODUCTION

1.1 CAN

A Controller Area Network (CAN or CAN bus) is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other within a vehicle without a host computer. CAN is a message-based protocol, designed specifically for automotive applications but now also used in other areas such as industrial automation and medical equipment.

Development of the CAN bus started in 1983 at Robert Bosch GmbH. The protocol was officially released in 1986 at the Society of Automotive Engineers (SAE) congress in Detroit, Michigan. The first CAN controller chips, produced by Intel and Philips, came on the market in 1987. Bosch published the CAN 2.0 specification in 1991. CAN is one of five protocols used in the OBD-II vehicle diagnostics standard. The OBD standard has been mandatory for all cars and light trucks sold in the United States since 1996; the EOBD standard has been mandatory for all petrol vehicles sold in the European Union since 2001 and for all diesel vehicles since 2004.

Applications of CAN

A modern automobile may have as many as 70 electronic control units (ECUs) for various subsystems. Typically the biggest processor is the engine control unit (also called the engine control module, ECM); others are used for the transmission, airbags, antilock braking, cruise control, electric power steering, audio systems, windows, doors,

mirror adjustment, and so on. Some of these form independent subsystems, but communication among the others is essential. A subsystem may need to control actuators or receive feedback from sensors. The CAN standard was devised to fill this need. The CAN bus may be used in vehicles to connect the engine control unit and transmission, or (on a different bus) to connect the door locks, climate control, seat control, etc. Today the CAN bus is also used as a fieldbus in general automation environments, primarily because of the low cost of some CAN controllers and processors. Bosch holds patents on the technology, and manufacturers of CAN-compatible microprocessors pay license fees to Bosch, which are normally passed on to the customer in the price of the chip. Manufacturers of products with custom ASICs or FPGAs containing CAN-compatible modules may need to pay a fee for the CAN protocol license.

Technology in CAN

CAN is a multi-master broadcast serial bus standard for connecting electronic control units (ECUs). Each node is able to send and receive messages, but not simultaneously. A message consists primarily of an ID, which represents the priority of the message, and up to eight data bytes. It is transmitted serially onto the bus. This signal pattern is encoded in non-return-to-zero (NRZ) form and is sensed by all nodes. The devices connected by a CAN network are typically sensors, actuators, and other control devices. These devices are not connected directly to the bus, but through a host processor and a CAN controller. If the bus is free, any node may begin to transmit. If two or more nodes begin sending messages at

the same time, the message with the more dominant ID (the one with more dominant bits, i.e., zeroes, in its leading positions) overwrites the other nodes' less dominant IDs, so that eventually (after this arbitration on the ID) only the dominant message remains and is received by all nodes. This mechanism is referred to as priority-based bus arbitration. Messages with numerically smaller ID values have higher priority and are transmitted first.

Each node requires:

Host processor: the host processor decides what received messages mean and which messages it wants to transmit itself. Sensors, actuators and control devices can be connected to the host processor.

CAN controller (hardware with a synchronous clock):
Receiving: the CAN controller stores received bits serially from the bus until an entire message is available, which can then be fetched by the host processor (usually after the CAN controller has triggered an interrupt).
Sending: the host processor stores its transmit messages in the CAN controller, which transmits the bits serially onto the bus.

Transceiver (possibly integrated into the CAN controller):
Receiving: it adapts signal levels from the bus to levels that the CAN controller expects, and has protective circuitry that protects the CAN controller.
Sending:

It converts the transmit-bit signal received from the CAN controller into a signal that is sent onto the bus.

Bit rates up to 1 Mbit/s are possible at network lengths below 40 m. Decreasing the bit rate allows longer network distances (e.g., 500 m at 125 kbit/s). The CAN data link layer protocol is standardized in ISO 11898-1 (2003). This standard describes mainly the data link layer (composed of the logical link control (LLC) sublayer and the media access control (MAC) sublayer) and some aspects of the physical layer of the OSI reference model. All the other protocol layers are left to the network designer's choice.

Layers of CAN

Based on levels of abstraction, the structure of the CAN protocol can be described in terms of the following layers:
* Application layer
* Object layer
  1. Message filtering
  2. Message and status handling

1.2 PIC MICROCONTROLLER

PIC is a family of Harvard architecture microcontrollers made by Microchip Technology, derived from the PIC1640 originally developed by General Instrument's Microelectronics Division. The name PIC initially referred to "Peripheral Interface Controller". PICs are popular with both industrial developers and hobbyists alike due to their low cost, wide availability, large user base, extensive collection of

application notes, availability of low-cost or free development tools, and serial programming (and re-programming, with flash memory) capability. In February 2008, Microchip announced the shipment of its six billionth PIC processor.

Core architecture

The PIC architecture is characterized by the following attributes:
* Separate code and data spaces (Harvard architecture) for devices other than PIC32, which has a von Neumann architecture
* A small number of fixed-length instructions
* Most instructions are single-cycle execution (2 clock cycles), with one delay cycle on branches and skips
* One accumulator (W0), the use of which (as source operand) is implied (i.e. not encoded in the opcode)
* All RAM locations function as registers, as both source and/or destination of math and other functions
* A hardware stack for storing return addresses
* A fairly small amount of addressable data space (typically 256 bytes), extended through banking
* Data-space-mapped CPU, port, and peripheral registers
* The program counter is also mapped into the data space and writable (this is used to implement indirect jumps)
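As a rough illustration of the banking scheme mentioned above, the sketch below models how a mid-range part might form a data-memory address from bank-select bits and a 7-bit in-bank offset. The 128-byte bank size and two bank-select bits follow the common mid-range (PIC16) convention; treat the details as an assumption rather than a datasheet fact.

```python
def banked_address(bank, offset):
    """Form a data-memory address from a bank number (0-3, the two
    bank-select bits) and a 7-bit offset within a 128-byte bank."""
    assert 0 <= bank < 4 and 0 <= offset < 128
    return (bank << 7) | offset
```

For example, offset 0x0C in bank 1 maps to address 0x8C, and the last byte of bank 3 maps to 0x1FF, the top of a 512-byte banked data space.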

There is no distinction between memory space and register space, because the RAM serves the role of both memory and registers; the RAM is usually just referred to as the register file, or simply as the registers.

Data space (RAM)

PICs have a set of registers that function as general-purpose RAM. Special-purpose control registers for on-chip hardware resources are also mapped into the data space. The addressability of memory varies depending on the device series, and all PIC devices have some banking mechanism to extend addressing to additional memory. Later series of devices feature move instructions which can cover the whole addressable space, independent of the selected bank. In earlier devices, any register move had to be achieved via the accumulator. To implement indirect addressing, a "file select register" (FSR) and an "indirect register" (INDF) are used. A register number is written to the FSR, after which reads from or writes to INDF will actually be from or to the register pointed to by the FSR. Later devices extended this concept with post- and pre-increment/decrement for greater efficiency in accessing sequentially stored data. This also allows the FSR to be treated almost like a stack pointer (SP). External data memory is not directly addressable except in some high-pin-count PIC18 devices.
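The FSR/INDF mechanism just described can be sketched in a few lines. This is a toy model, not device-accurate: the addresses 0x00 (INDF) and 0x04 (FSR) follow the mid-range convention, and a flat 256-byte array stands in for the banked register file.

```python
class PicRam:
    """Toy model of PIC indirect addressing: writing a register
    number to FSR makes any access to INDF act on the register
    that FSR points to."""
    INDF, FSR = 0x00, 0x04  # conventional mid-range addresses

    def __init__(self):
        self.mem = [0] * 256  # flat stand-in for the register file

    def write(self, addr, value):
        if addr == self.INDF:
            self.mem[self.mem[self.FSR]] = value & 0xFF
        else:
            self.mem[addr] = value & 0xFF

    def read(self, addr):
        if addr == self.INDF:
            return self.mem[self.mem[self.FSR]]
        return self.mem[addr]

# The classic use: clear a block of registers by stepping FSR and
# writing through INDF, as a PIC loop would.
ram = PicRam()
for addr in range(0x20, 0x28):
    ram.write(addr, 0xFF)        # pretend the block holds stale data
for addr in range(0x20, 0x28):
    ram.write(PicRam.FSR, addr)  # point FSR at the next register
    ram.write(PicRam.INDF, 0)    # write through the pointer
```

The second loop is the Python analogue of the sequential access that post-increment of FSR makes cheap on later devices.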

Code space

The code space is generally implemented as ROM, EPROM or flash ROM. In general, external code memory is not directly addressable due to the lack of an external memory interface. The exceptions are PIC17 and select high-pin-count PIC18 devices.

Word size

All PICs handle (and address) data in 8-bit chunks. However, the unit of addressability of the code space is not generally the same as that of the data space. For example, PICs in the baseline and mid-range families have program memory addressable in the same word size as the instruction width, i.e. 12 or 14 bits respectively. In contrast, in the PIC18 series, the program memory is addressed in 8-bit increments (bytes), which differs from the instruction width of 16 bits. To be clear, the program memory capacity is usually stated in the number of (single-word) instructions, rather than in bytes.

Stacks

PICs have a hardware call stack, which is used to save return addresses. The hardware stack is not software-accessible on earlier devices, but this changed with the 18 series devices. Hardware support for a general-purpose parameter stack was lacking in early series, but this greatly improved in the 18 series, making the 18 series architecture more friendly to high-level language compilers.
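Because capacity is quoted in instructions, converting a part's flash size between bytes and instructions depends on the family, as the following sketch shows (the sizes used in the checks are illustrative, not tied to a particular device):

```python
def pic18_instruction_capacity(flash_bytes):
    """PIC18 program memory is addressed in bytes, but each
    single-word instruction is 16 bits wide, so the capacity in
    instructions is half the byte count."""
    return flash_bytes // 2

def midrange_instruction_capacity(flash_words):
    """Baseline/mid-range parts address program memory in
    instruction-sized words (12 or 14 bits), so a capacity quoted
    in words already is the instruction count."""
    return flash_words
```

So a hypothetical PIC18 part with 32768 bytes of flash holds 16384 instructions, while a mid-range part quoted at 4096 words holds 4096 instructions.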

Instruction set

A PIC's instructions vary from about 35 instructions for the low-end PICs to over 80 instructions for the high-end PICs. The instruction

set includes instructions to perform a variety of operations on registers directly, on the accumulator and a literal constant, or on the accumulator and a register, as well as for conditional execution and program branching. Some operations, such as bit setting and testing, can be performed on any numbered register, but bi-operand arithmetic operations always involve W (the accumulator), writing the result back to either W or the other operand register. To load a constant, it is necessary to load it into W before it can be moved into another register. On the older cores, all register moves needed to pass through W, but this changed on the "high-end" cores. PIC cores have skip instructions, which are used for conditional execution and branching. The skip instructions are "skip if bit set" and "skip if bit not set". Because cores before PIC18 had only unconditional branch instructions, conditional jumps are implemented by a conditional skip (with the opposite condition) followed by an unconditional branch. Skips are also useful for conditional execution of any immediately following single instruction. The 18 series implemented shadow registers, which save several important registers during an interrupt, providing hardware support for automatically saving processor state when servicing interrupts. In general, PIC instructions fall into five classes:

1. Operation on the working register (WREG) with an 8-bit immediate ("literal") operand, e.g. movlw (move literal to WREG), andlw (AND literal with WREG). One instruction peculiar to the PIC is retlw, load immediate into WREG and return, which is used with computed branches to produce lookup tables.

2. Operation with WREG and an indexed register. The result can be written to either the working register (e.g. addwf reg,w) or the selected register (e.g. addwf reg,f).
3. Bit operations. These take a register number and a bit number, and perform one of four actions: set or clear a bit, or test and skip on set/clear. The latter are used to perform conditional branches. The usual ALU status flags are available in a numbered register, so operations such as "branch on carry clear" are possible.
4. Control transfers. Other than the skip instructions previously mentioned, there are only two: goto and call.
5. A few miscellaneous zero-operand instructions, such as return from subroutine, and sleep to enter low-power mode.

Performance

The architectural decisions are directed at the maximization of the speed-to-cost ratio. The PIC architecture was among the first scalar CPU designs, and is still among the simplest and cheapest. The Harvard architecture, in which instructions and data come from separate sources, simplifies timing and microcircuit design greatly, and this benefits clock speed, price, and power consumption. The PIC instruction set is suited to the implementation of fast lookup tables in the program space. Such lookups take one instruction and two instruction cycles. Many functions can be modeled in this way. Optimization is facilitated by the relatively large program space of the PIC (e.g. 4096 x 14-bit words on the 16F690) and by the design of the instruction set, which allows for embedded constants. For example, a

branch instruction's target may be indexed by W, and execute a "RETLW", which does as its name says: return with the literal in W. Execution time can be accurately estimated by multiplying the number of instructions by two cycles; this simplifies the design of real-time code. Similarly, interrupt latency is constant at three instruction cycles. External interrupts have to be synchronized with the four-clock instruction cycle, otherwise there can be a one-instruction-cycle jitter. Internal interrupts are already synchronized. The constant interrupt latency allows PICs to achieve interrupt-driven low-jitter timing sequences. An example of this is a video sync pulse generator. This is no longer true in the newest PIC models, because they have a synchronous interrupt latency of three or four cycles.

Advantages

The PIC architectures have these advantages:
* Small instruction set to learn
* RISC architecture
* Built-in oscillator with selectable speeds
* Easy entry level: in-circuit programming plus in-circuit debugging PICkit units available from Microchip for less than $50
* Inexpensive microcontrollers
* Wide range of interfaces including I2C, SPI, USB, USART, A/D, programmable comparators, PWM, LIN, CAN, PSP, and Ethernet

Limitations

The PIC architectures have these limitations:
* One accumulator
* Register-bank switching is required to access the entire RAM of many devices
* Operations and registers are not orthogonal; some instructions can address RAM and/or immediate constants, while others can use only the accumulator

1.3 Bluetooth

Bluetooth is a proprietary open wireless technology standard for exchanging data over short distances (using short-wavelength radio transmissions) from fixed and mobile devices, creating personal area networks (PANs) with high levels of security. Created by the telecoms vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 14,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The SIG oversees the development of the specification, manages the qualification program, and protects the trademarks. To be marketed as a Bluetooth device, a product must be qualified to standards defined by the SIG. A network of patents is required to implement the technology, and these are licensed only to qualifying devices; thus the protocol, whilst open, may be regarded as proprietary.

Implementation


Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands (1 MHz each; centered from 2402 to 2480 MHz) in the range 2400-2483.5 MHz (allowing for guard bands). This range is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band. Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available; since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous data rate of 1 Mbit/s is possible. The term enhanced data rate (EDR) is used to describe the π/4-DQPSK and 8DPSK schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a "BR/EDR radio".

Bluetooth is a packet-based protocol with a master-slave structure. One master may communicate with up to 7 slaves in a piconet; all devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs; two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots; the slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long, but in all cases the master's transmission begins in even slots and the slave's in odd slots.
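The slot arithmetic above is easy to check numerically; a minimal sketch of the single-slot timing:

```python
TICK_US = 312.5        # basic clock tick, defined by the master
SLOT_US = 2 * TICK_US  # one slot: 625 microseconds
PAIR_US = 2 * SLOT_US  # one slot pair: 1250 microseconds

def transmitter(slot):
    """For single-slot packets: the master transmits in even-numbered
    slots, the slave in odd-numbered slots."""
    return "master" if slot % 2 == 0 else "slave"
```

So, for instance, slot 0 belongs to the master and slot 7 to the slave; multi-slot (3- or 5-slot) packets simply occupy the following slots while keeping the same even/odd starting rule.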


Bluetooth provides a secure way to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles.

Communication and connection

A master Bluetooth device can communicate with up to seven devices in a piconet (an ad hoc computer network using Bluetooth technology). The devices can switch roles by agreement, and a slave can become the master at any time. At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in round-robin fashion. The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices serve as bridges, simultaneously playing the master role in one piconet and the slave role in another. Many USB Bluetooth adapters or "dongles" are available, some of which also include an IrDA adapter. Older (pre-2003) Bluetooth dongles, however, have limited capabilities, offering only the Bluetooth Enumerator and a less powerful Bluetooth radio incarnation. Such devices can link computers over a distance of up to 100 meters, but they do not offer as many services as modern adapters do.

Bluetooth profile

A Bluetooth profile is a wireless interface specification for Bluetooth-based communication between devices. In order to use Bluetooth

technology, a device must be compatible with the subset of Bluetooth profiles necessary to use the desired services. A Bluetooth profile resides on top of the Bluetooth Core Specification and (optionally) additional protocols. While a profile may use certain features of the core specification, specific versions of profiles are rarely tied to specific versions of the core specification. For example, there are HFP 1.5 implementations using both the Bluetooth 2.0 and Bluetooth 1.2 core specifications. The way a device uses Bluetooth technology depends on its profile capabilities. The profiles provide standards which manufacturers follow to allow devices to use Bluetooth in the intended manner. For the Bluetooth low energy stack, according to Bluetooth V4.0, a special set of profiles applies.

At most, each profile specification contains information on the following topics:
* Dependencies on other formats
* Suggested user interface formats
* Specific parts of the Bluetooth protocol stack used by the profile. To perform its task, each profile uses particular options and parameters at each layer of the stack. This may include an outline of the required service record, if appropriate.

The Serial Port Profile Group

The Serial Port Profile is based on Radio Frequency Communications (RFCOMM). RFCOMM provides serial port emulation, enabling Bluetooth support for serial data connections. This profile provides useful functionality on its own because it allows

applications to treat Bluetooth links as virtual COM ports. It also supports four other Bluetooth profiles and the Generic Object Exchange Profile group. The Serial Port Profile group defines two roles: a gateway that provides access to a service, and a terminal that uses that service. In the headset profile, the terminal is the headset itself; the gateway is a device, such as a phone, supplying an audio call to the headset. The signaling for the audio call uses AT commands (the format used by modems); this is the part that relies on the Serial Port Profile; the audio itself simply uses an SCO (audio) link. The LAN Access Profile has a gateway providing a link to a local area network (LAN). The terminal is anything that you might connect to a LAN; this is typically a laptop PC, but a PDA or even a smartphone might be a terminal. The Dial-up Networking Profile (DUN) provides modem services. The gateway gives a link to a telephone network via a cellular or landline connection. The FAX profile similarly provides a link to a telephone network, but this time specifically for faxes rather than for general data transfer.

1.4 Vector Quantization

Introduction

Vector quantization (VQ) is a lossy data compression method based on the principle of block coding. It is a fixed-to-fixed length

algorithm. In the early days, the design of a vector quantizer (VQ) was considered a challenging problem because of the need for multidimensional integration. In 1980, Linde, Buzo, and Gray (LBG) proposed a VQ design algorithm based on a training sequence. The use of a training sequence bypasses the need for multi-dimensional integration. A VQ designed using this algorithm is referred to in the literature as an LBG-VQ.
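The training-sequence idea behind the LBG design can be sketched in one dimension: alternate a nearest-neighbour assignment of training samples to codevectors with a centroid update of each codevector. This is the Lloyd iteration at the core of LBG; the codebook-splitting initialisation of the full algorithm is omitted here for brevity, so treat this as an illustrative sketch rather than the complete LBG procedure.

```python
def lbg_codebook(training, k, iters=20):
    """Design a k-entry codebook from a 1-D training sequence by
    alternating nearest-neighbour assignment and centroid update."""
    # Initialise codevectors from evenly spaced sorted samples.
    srt = sorted(training)
    code = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in training:
            nearest = min(range(k), key=lambda i: (x - code[i]) ** 2)
            cells[nearest].append(x)
        # Each codevector moves to the centroid of its cell;
        # an empty cell keeps its previous codevector.
        code = [sum(c) / len(c) if c else code[i]
                for i, c in enumerate(cells)]
    return sorted(code)
```

On a training sequence clustered around 1.0 and 5.0, a two-entry codebook converges to those two centroids, which is exactly the behaviour the training sequence is meant to induce.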

Preliminaries

A VQ is nothing more than an approximator. The idea is similar to that of "rounding off" (say, to the nearest integer). An example of a 1-dimensional VQ is shown below:

Here, every number less than -2 is approximated by -3, every number between -2 and 0 is approximated by -1, every number between 0 and 2 is approximated by +1, and every number greater than 2 is approximated by +3. Note that the approximate values are uniquely represented by 2 bits: this is a 1-dimensional, 2-bit VQ, with a rate of 2 bits/dimension. An example of a 2-dimensional VQ is shown below:


Here, every pair of numbers falling in a particular region is approximated by the red star associated with that region. Note that there are 16 regions and 16 red stars, each of which can be uniquely represented by 4 bits. Thus, this is a 2-dimensional, 4-bit VQ. Its rate is also 2 bits/dimension. In the above two examples, the red stars are called codevectors, and the regions defined by the blue borders are called encoding regions. The set of all codevectors is called the codebook, and the set of all encoding regions is called the partition of the space.
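Both quantizers can be sketched directly. The 1-D codebook and boundaries are the ones given in the text; the 2-D codebook from the figure is not available, so the usage check below substitutes an illustrative 4 x 4 grid of 16 codevectors (still yielding a 4-bit index).

```python
def vq_1d(x):
    """The 1-dimensional, 2-bit quantizer described above: codebook
    {-3, -1, +1, +3} with decision boundaries at -2, 0 and +2."""
    codebook = [-3.0, -1.0, 1.0, 3.0]
    index = sum(x > b for b in (-2.0, 0.0, 2.0))  # 2-bit index, 0..3
    return index, codebook[index]

def vq_2d(point, codebook):
    """Nearest-neighbour encoding for a 2-D VQ: the point is mapped
    to the index of the closest codevector (its 'red star')."""
    return min(range(len(codebook)),
               key=lambda i: (point[0] - codebook[i][0]) ** 2
                           + (point[1] - codebook[i][1]) ** 2)
```

Encoding keeps only the index (2 or 4 bits here); the decoder recovers the approximation by looking the index up in the same codebook, which is the source of the compression and of the loss.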

Signal-to-noise ratio
Signal-to-noise ratio (often abbreviated SNR or S/N) is a measure used in science and engineering to quantify how much a signal has been corrupted by noise. It is defined as the ratio of signal power to the noise power corrupting the signal. A ratio higher than 1:1 indicates more signal than noise. While SNR is commonly quoted for electrical signals, it can


be applied to any form of signal (such as isotope levels in an ice core or biochemical signaling between cells). In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as music) to the level of background noise. The higher the ratio, the less obtrusive the background noise is. "Signal-to-noise ratio" is sometimes used informally to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as "noise" that interferes with the "signal" of appropriate discussion.

Definition

Signal-to-noise ratio is defined as the power ratio between a signal (meaningful information) and the background noise (unwanted signal):

SNR = P_signal / P_noise

where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. If the signal and the noise are measured across the same impedance, the SNR can be obtained by calculating the square of the amplitude ratio:

SNR = (A_signal / A_noise)^2

where A is root mean square (RMS) amplitude (for example, RMS voltage). Because many signals have a very wide dynamic range, SNRs are often expressed using the logarithmic decibel scale. In decibels, the SNR is defined as

SNR(dB) = 10 log10(P_signal / P_noise)


which may equivalently be written using amplitude ratios as

SNR(dB) = 20 log10(A_signal / A_noise)

The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range measures the ratio between the strongest undistorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 V_RMS). SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near) instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal "stands out".

Alternative definition

An alternative definition of SNR is the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement:

SNR = μ / σ

where μ is the signal mean or expected value and σ is the standard deviation of the noise, or an estimate thereof. Note that such an alternative definition is only useful for variables that are always positive (such as photon counts and luminance). It is therefore commonly used in image processing, where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values


over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above. The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to distinguish image features with 100% certainty; an SNR below 5 means less than 100% certainty in identifying image details. Yet another alternative, very specific and distinct, definition of SNR is employed to characterize the sensitivity of imaging systems; see signal-to-noise ratio (imaging). Related measures are the "contrast ratio" and the "contrast-to-noise ratio".

Improving SNR in practice

[Figure: recording of the noise of a thermogravimetric analysis device that is poorly isolated mechanically; the middle of the curve shows lower noise, due to reduced surrounding human activity at night.]

All real measurements are disturbed by noise. This includes electronic noise, but can also include external events that affect the measured phenomenon: wind, vibrations, gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Otherwise, when the characteristics of the noise are known and are different from those of the signal, it is possible to filter it or to process the signal. When the signal is


constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurement.

Digital signals

When a measurement is digitised, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise"). This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither. Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/N0, the energy per bit per noise power spectral density. The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal.

Fixed point

For n-bit integers with equal distance between quantization levels (uniform quantization), the dynamic range (DR) is also determined. Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio 2^n : 1. The formula is then:

DR(dB) = SNR(dB) = 20 log10(2^n) ≈ 6.02 · n


This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB. Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level and uniform distribution. In this case, the SNR is approximately

SNR(dB) ≈ 6.02 · n + 1.76

Floating point

Floating-point numbers provide a way to trade signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n − m bits in the mantissa and m bits in the exponent:

DR(dB) ≈ 6.02 · 2^m
SNR(dB) ≈ 6.02 · (n − m)

Note that the dynamic range is much larger than for fixed point, but at the cost of a worse signal-to-noise ratio. This makes floating point preferable in situations where the dynamic range is large or unpredictable. Fixed point's simpler implementations can be used with no signal-quality disadvantage in systems where the dynamic range is less than 6.02 · 2^m dB. The very large dynamic range of floating point can be a disadvantage, since it requires more forethought in designing algorithms.

DISCUSSION OF FILTERS

Butterworth filter

The Butterworth filter is a type of signal processing filter designed to have as flat a frequency response as possible in the passband, and for this reason it is also termed a maximally flat magnitude filter. It was first described by the British engineer Stephen Butterworth in his paper "On the Theory of Filter Amplifiers". Butterworth had a reputation for solving "impossible" mathematical problems. At the time, filter design was largely by trial and error because of its mathematical complexity. His paper was far ahead of its time: the filter was not in common use for over 30 years after its publication. Butterworth stated that: "An ideal electrical filter should not only completely reject the unwanted frequencies but should also have uniform sensitivity for the wanted frequencies." At the time, filters generated substantial ripple in the passband, and the choice of component values was highly interactive. Butterworth showed that low-pass filters could be designed whose frequency response (gain) was

G(ω) = 1 / sqrt(1 + ω^(2n))

where ω is the angular frequency in radians per second and n is the number of reactive elements (poles) in the filter. Butterworth only dealt with filters with an even number of poles in his paper; he may have been unaware that such filters could be designed with an odd number of poles. His plot of the frequency response of 2-, 4-, 6-, 8-, and 10-pole filters is shown as A, B, C, D, and E in his original graph. Butterworth solved the equations for two- and four-pole filters, showing how the latter could be cascaded when separated by vacuum tube amplifiers, thus enabling the construction of higher-order filters despite

inductor losses. In 1930, low-loss core materials such as molypermalloy had not been discovered, and air-cored audio inductors were rather lossy. Butterworth discovered that it was possible to adjust the component values of the filter to compensate for the winding resistance of the inductors. He also showed that his basic low-pass filter could be modified to give low-pass, high-pass, band-pass and band-stop functionality. The frequency response of the Butterworth filter is maximally flat (has no ripples) in the passband and rolls off towards zero in the stopband. When viewed on a logarithmic Bode plot, the response slopes off linearly towards negative infinity. A first-order filter's response rolls off at 6 dB per octave (20 dB per decade); all first-order low-pass filters have the same normalized frequency response. A second-order filter decreases at 12 dB per octave, a third-order at 18 dB, and so on. Butterworth filters have a monotonically changing magnitude function with ω, unlike other filter types that have non-monotonic ripple in the passband and/or the stopband. Compared with a Chebyshev Type I/Type II filter or an elliptic filter, the Butterworth filter has a slower roll-off and thus requires a higher order to implement a particular stopband specification, but Butterworth filters have a more linear phase response in the passband than Chebyshev Type I/Type II and elliptic filters can achieve.

Chebyshev filter

Chebyshev filters are analog or digital filters having a steeper roll-off and more passband ripple (type I) or stopband ripple (type II) than Butterworth filters. Chebyshev filters have the property that they minimize the error between the idealized and the actual filter characteristic over the range of the filter, but with ripples in the passband.


This type of filter is named in honor of Pafnuty Chebyshev because its mathematical characteristics are derived from Chebyshev polynomials. Because of the passband ripple inherent in Chebyshev filters, filters which have a smoother response in the passband but a more irregular response in the stopband are preferred for some applications. Type I filters are the most common Chebyshev filters. The gain (or amplitude) response as a function of angular frequency ω of the nth-order low-pass filter is

Gn(ω) = 1 / √(1 + ε² Tn²(ω/ω0))

where ε is the ripple factor, ω0 is the cutoff frequency, and Tn(·) is a Chebyshev polynomial of the nth order. The passband exhibits equiripple behavior, with the ripple determined by the ripple factor ε. In the passband, the Chebyshev polynomial alternates between −1 and 1, so the filter gain alternates between maxima at G = 1 and minima at G = 1/√(1+ε²). At the cutoff frequency ω0 the gain again has the value 1/√(1+ε²) but continues to drop into the stop band as the frequency increases. This behavior is shown in the diagram on the right. (Note that the common definition of the cutoff frequency as the −3 dB point does not hold for Chebyshev filters.) The order of a Chebyshev filter is equal to the number of reactive components (for example, inductors) needed to realize the filter using analog electronics. The ripple is often given in dB:

Ripple in dB = 20 log10 √(1 + ε²)

so that a ripple amplitude of 3 dB results from ε = 1. An even steeper roll-off can be obtained by allowing ripple in the stop band as well, by allowing zeros on the jω-axis in the complex plane. This will, however, result in less suppression in the stop band. The result is called an elliptic filter, also known as a Cauer filter.
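The two gain formulas above can be checked numerically. The following is a minimal Python sketch (cutoff normalized to ω0 = 1; the orders and sample frequencies are arbitrary choices for illustration) contrasting the monotonic Butterworth response with the equiripple Chebyshev type I response and the ε-ripple relation:

```python
import math

def butterworth_gain(w, n):
    """Butterworth magnitude 1/sqrt(1 + w^(2n)), cutoff normalized to w = 1."""
    return 1.0 / math.sqrt(1.0 + w ** (2 * n))

def cheb_T(n, x):
    """Chebyshev polynomial of the first kind T_n(x), by recurrence."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

def chebyshev_gain(w, n, ripple_db):
    """Chebyshev type I magnitude 1/sqrt(1 + eps^2 * T_n(w)^2), w0 = 1."""
    eps = math.sqrt(10.0 ** (ripple_db / 10.0) - 1.0)  # ripple factor
    return 1.0 / math.sqrt(1.0 + (eps * cheb_T(n, w)) ** 2)

# Butterworth: -3.01 dB at the cutoff for any order, monotonic beyond it.
print(round(20 * math.log10(butterworth_gain(1.0, 4)), 2))   # -3.01
# Chebyshev: a 3 dB ripple corresponds to eps = 1, so the gain at the
# cutoff is 1/sqrt(2), while in the passband it never falls below that.
print(round(chebyshev_gain(1.0, 5, 10 * math.log10(2.0)), 4))   # 0.7071
```

Note how the −3 dB cutoff convention holds for the Butterworth response but, as stated above, not for the Chebyshev response unless ε happens to equal 1.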

The group delay

The group delay is defined as the negative derivative of the phase with respect to angular frequency, τg(ω) = −dφ/dω, and is a measure of the distortion introduced in the signal by phase differences for different frequencies.

The gain and the group delay for a fifth-order type I Chebyshev filter with ε = 0.5 are plotted in the graph on the left. It can be seen that there are ripples in the gain and in the group delay in the passband, but not in the stop band.
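As a small illustration of the group delay definition (not the fifth-order example above, whose full transfer function is not reproduced here), the group delay of a simple first-order low-pass section H(jω) = 1/(1 + jω) can be estimated numerically from its phase:

```python
import math

def phase_first_order(w):
    """Phase of H(jw) = 1/(1 + jw): phi(w) = -atan(w)."""
    return -math.atan(w)

def group_delay(phase, w, dw=1e-6):
    """Group delay tau(w) = -d(phi)/dw, estimated by a central difference."""
    return -(phase(w + dw) - phase(w - dw)) / (2.0 * dw)

# Analytically tau(w) = 1/(1 + w^2); the numeric estimate agrees:
print(round(group_delay(phase_first_order, 1.0), 6))   # 0.5
```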

Least mean squares filter:


Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method, in that the filter is only adapted based on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff.

Problem formulation

Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt the filter ĥ(n) to make it as close as possible to h(n), while using only the observable signals x(n), d(n) and e(n); y(n), v(n) and h(n) are not directly observable. Its solution is closely related to the Wiener filter.

Definition of symbols

d(n) = y(n) + v(n)


Idea

The idea behind LMS filters is to use steepest descent to find filter weights ĥ(n) which minimize a cost function. We start by defining the cost function as

C(n) = E{ |e(n)|² }

where e(n) is the error at the current sample n and E{·} denotes the expected value. This cost function C(n) is the mean square error, and it is minimized by the LMS; this is where the LMS gets its name. Applying steepest descent means taking the partial derivatives with respect to the individual entries of the filter coefficient (weight) vector. With e(n) = d(n) − ĥᴴ(n) x(n), this gives

∇C(n) = −2 E{ x(n) e*(n) }

where ∇ is the gradient operator. Now, ∇C(n) is a vector which points towards the steepest ascent of the cost function. To find the minimum of the cost function we need to take a step in the opposite direction of ∇C(n). To express that in mathematical terms:

ĥ(n+1) = ĥ(n) − (μ/2) ∇C(n) = ĥ(n) + μ E{ x(n) e*(n) }

where μ/2 is the step size (adaptation constant). That means we have found a sequential update algorithm which minimizes the cost function.


Unfortunately, this algorithm is not realizable until we know E{ x(n) e*(n) }. Generally, the expectation above is not computed. Instead, to run the LMS in an online environment (updating after each new sample is received), we use an instantaneous estimate of that expectation, as shown below.

Simplifications

For most systems the expectation E{ x(n) e*(n) } must be approximated. This can be done with the following unbiased estimator

Ê{ x(n) e*(n) } = (1/N) Σ_{i=0}^{N−1} x(n−i) e*(n−i)

where N indicates the number of samples we use for that estimate. The simplest case is N = 1:

Ê{ x(n) e*(n) } = x(n) e*(n)

For that simple case the update algorithm follows as

ĥ(n+1) = ĥ(n) + μ x(n) e*(n)

Indeed, this constitutes the update algorithm for the LMS filter.
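The N = 1 update above can be sketched in a few lines of Python. This is an illustrative system-identification run with real-valued signals; the "unknown" coefficients, step size μ and signal lengths are made-up values, not parameters from the project:

```python
import random

def lms_identify(x, d, num_taps, mu):
    """Adapt FIR weights with the LMS rule h(n+1) = h(n) + mu * e(n) * x(n)."""
    h_hat = [0.0] * num_taps
    for n in range(num_taps, len(x)):
        window = x[n - num_taps + 1:n + 1][::-1]          # x(n), x(n-1), ...
        y = sum(w * xi for w, xi in zip(h_hat, window))   # filter output
        e = d[n] - y                                      # error vs. desired
        h_hat = [w + mu * e * xi for w, xi in zip(h_hat, window)]
    return h_hat

random.seed(0)
h_true = [0.6, -0.3, 0.1]                 # hypothetical unknown system
x = [random.gauss(0, 1) for _ in range(5000)]
# desired signal d(n) = true system output (noise-free, for clarity)
d = [sum(h * x[n - k] for k, h in enumerate(h_true)) if n >= 2 else 0.0
     for n in range(len(x))]
h_hat = lms_identify(x, d, num_taps=3, mu=0.01)
print([round(w, 2) for w in h_hat])       # converges close to h_true
```

With a noise-free desired signal the error shrinks towards zero and the adapted weights approach the true coefficients, matching the steepest-descent argument above.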

1.4 EXISTING METHOD

The object of the existing system is to present an interaction system implementing vehicle-to-driver and vehicle-to-environment communication based on a smartphone core and a wireless Bluetooth medium. The system is targeted at increasing the safety level of a motorcycle. In the last decade, the research interest in two-wheeled vehicles has been driven by two main features. On one hand, motorcycles are a means of personal mobility with a low environmental impact due to their relatively low weight, and they seem extremely promising for the development of electric vehicles. On the other hand, they are responsible for about 20% of road incidents due to a relatively low level of safety. To ensure improved safety in a motorcycle, this letter proposes to increase the informative interaction between the vehicle and the driver, and to provide remote monitoring for maintenance purposes. Hands-free audio interaction seems to be the best compromise between safety and the level of information to deliver. In the literature, the problems of audio and video vehicle-to-driver interaction and remote maintenance have been tackled by several papers mainly focused on automobiles. Yet, to the best of our knowledge, these problems seem to be less explored for motorcycles, although the interaction between the vehicle and the driver is different. The objective of this letter is to study a system capable of implementing a bidirectional audio interaction with the driver and of providing remote monitoring of the vehicle parameters. To this aim, the system should include the following features: the acquisition, elaboration, and request of the vehicle data; wireless audio communication to and from the driver (via the helmet headset); wireless bidirectional communication with the environment; the computational capability to implement audio synthesis and recognition for the driver interaction; and a human-machine interface for setup by the user. Another objective of the system is to use off-the-shelf pieces of hardware, to concentrate the development complexity mainly on the software layer. Under this perspective, a common smartphone can be used. In fact, it usually includes a 3G connection; a wireless

communication (e.g., based on the Bluetooth and Wi-Fi standards); a relatively large computational capability (supported also by the presence of an operating system); sensing (accelerometers, magnetometers, cameras, etc.); positioning (GPS); and a human-machine interface (HMI). The architecture is completed by a CAN-Bluetooth converter (specifically developed for this application) and an off-the-shelf audio helmet with a Bluetooth interface. An electronic system fully developed for this specific application might be adopted; however, it would duplicate the functionalities of a smartphone, which has the clear advantage of being a common, widespread device usable in several applications. Hence, the smartphone may be viewed intrinsically as a plug-and-play smart gateway. This kind of use of a mobile device may be extremely intriguing, although the literature usually puts the focus on the HMI capability of smartphones. The system so defined presents three end-points: an audio system integrated in a helmet with Bluetooth communication; the VCU, which collects the data and controls the vehicle; and a web server for remote monitoring. The gateway between these points is a smartphone. The letter is outlined as follows: Section II is devoted to the hardware architecture, the software architecture is presented in Section III, and Section IV presents some case studies and related evaluations.

1.4.1 SYSTEM ARCHITECTURE

The prototype VEDE system consists of the following elements (see Fig. 1). A motorcycle natively equipped with a vehicle control unit (VCU), which collects the data from the vehicle, controls the engine, and displays the relevant information on the dashboard; the information is broadcast over a standard CAN bus between the electronic subsystems in the vehicle. An embedded electronic unit interfaced to the CAN bus of the vehicle, capable of collecting optional analog and digital signals.

The electronic board includes a Bluetooth transceiver. A helmet with audio speakers and a microphone; the audio signals may be transmitted over a Bluetooth channel by the embedded transceiver of the helmet. A smartphone equipped with a Bluetooth transceiver and a 3G communication system; the Bluetooth channel is used to establish communication between the helmet and the CAN-Bluetooth converter, while the 3G communication is exploited to transmit and receive data from the web. The mobile device also includes a processor which can handle the audio and


vocal synthesis, and the information management.

[Figure: block diagram linking the helmet (HFP/HSP), the smartphone (GPRS/UMTS), the CAN-Bluetooth converter (Bluetooth SPP), and the different sensors connected to the vehicle and the vehicle status indicators (wired connection)]

FIGURE 1: Existing System Architecture

1.4.2 HARDWARE ARCHITECTURE


The in-vehicle embedded electronic system acts as a gateway between the CAN bus and the Bluetooth channel. It may also acquire some additional analog and digital data from the vehicle. Besides the above specifications, one of the main requirements for a motorcycle application is the compactness of the electronic board, which also has to be compliant with automotive standards (e.g., vibrations, electromagnetic compatibility, temperature, etc.). To this aim, the electronic system has been designed according to a modular approach, consisting of two modules: one two-layered main board and one Bluetooth add-on. In particular, the embedded electronic system comprises the following components.

o Electronic conditioning for the optional analog and digital acquisition; the analog signals are filtered with a second-order Bessel low-pass filter.

o A CAN controller set to the appropriate baud rate.

o A Bluetooth transceiver with hands-free profile (HFP) and serial-port profile (SPP).
o A dsPIC microcontroller to implement the primary functionalities for acquisition (12-bit ADC) and the CAN-Bluetooth gateway. The presence of a dsPIC makes the embedded electronic system a smart device.

1.4.3 SOFTWARE ARCHITECTURE


FIGURE 2: Existing System Software Architecture

In order to implement the system, each layer of hardware, as presented in the previous section, needs to be complemented by a corresponding layer of software. The software architecture is represented


in Fig. 2, which depicts each of the hardware subsystems, and the software interface between them. For the sake of conciseness, Fig. 2 focuses on the driver-to-motorcycle interaction. Notice that this kind of interaction is the most critical due to the presence of audio commands.

A. Motorcycle Layer

The motorcycle is natively equipped with a VCU, which manages the vehicle and collects the data. The information interface is constituted by a bus, according to the CAN protocol. The VCU may send messages through it (vehicle sensing), and may act on the basis of the messages that it reads (actuation).

B. CAN-Bluetooth Gateway Layer

The CAN-Bluetooth gateway filters and sends the information over the CAN bus according to the database of accessible information. The gateway also implements an SPP to communicate according to the Bluetooth protocol. The measurements of the vehicle are translated into notifications, and the commands are turned into commands to the VCU. At the gateway level, a critical alert may also be recognized on the CAN bus and then sent to the upper level. Note that the interface between the smartphone and the gateway is a critical aspect in terms of throughput, since there is a clear tradeoff between the general load of information and the load capacity of the Bluetooth channel.

C. The Smartphone Layer


From a computational point of view, the smartphone represents the core of the system. The mobile device is characterized by the presence of an operating system that allows multitasking of processes (e.g., Windows Mobile, Symbian, and Google Android). The software for the VEDE system may be developed according to the appropriate platform and operating system. The interface between the mobile device and the gateway is implemented with the SPP over Bluetooth. The interface between the smartphone and the helmet is provided by the HFP or, alternatively, by the HSP over Bluetooth. The software on the smartphone translates the notifications from the gateway into a stream of audio data (voice synthesis). It also turns the speech data from the helmet into a command to the gateway (speech recognition). The smartphone also manages the communication to and from the web server (remote point) according to the HTTP protocol (natively embedded in the mobile).

D. The Helmet Layer

The helmet provides the audio interface to the driver by means of an integrated headset. The helmet also includes a Bluetooth module implementing the HFP or HSP layer for the interface to the smartphone. The headset natively records and codes the driver's audio commands through the Bluetooth channel. It also converts the smartphone audio stream into an audio signal for the driver. Note that this part of the software is natively included in a helmet provided with a Bluetooth channel.

1.5 DRAWBACKS OF THE EXISTING SYSTEM

The existing work involves voice interaction, so proper filtering of speech is necessary. In the hardware architecture a second-order Bessel low-pass filter is proposed, but in the software architecture no filter has been introduced, and the recognition performance is up to 95% as mentioned in the paper. The remaining 5% may lead to problems in a noisy environment, and it is not possible for a user to always operate in a noise-free environment. So an additional filter is required to increase the recognition of voice commands in the system.

CHAPTER-II

2. PROPOSED SYSTEM:

The proposed system increases the performance of the existing system by introducing a filter in the software architecture. To select the filter which most improves the performance of the system, MATLAB is used: different types of filters have been compared against the existing system, and the Chebyshev filter has been chosen as the best, because the distances produced by vector quantization are reduced the most compared to the other filters, as discussed in the experimental results below. In the proposed system, for the estimation of the filter operation, three sensors are used in the hardware architecture in the vehicle layer, and their values are transmitted to the system by means of a Bluetooth transmitter. For interfacing the hardware with the software architecture of the proposed system, the Java Development Kit and the Java comm (serial communications) software are used. The comm software interfaces the serial port of the Bluetooth device with the system, thereby achieving the SPP (Serial Port Profile) mentioned in the existing system. The voice interaction is performed by introducing the vector quantization algorithm in the software architecture. The voice command for each sensor has been recorded and saved in advance in wave format. The incoming voice from the external user, captured by a microphone, is first recorded and then compared with the existing recorded speech signals by the vector quantization technique; the stored command giving the least difference in the comparison is taken as the spoken command, and the corresponding instruction is executed. The architectural design of the proposed system is discussed below.

2.2 SYSTEM ARCHITECTURE:
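A minimal sketch of the nearest-codeword matching used for command detection follows, in Python for illustration. The commands and three-element feature vectors are hypothetical stand-ins; a real implementation would extract features (such as MFCCs) from the recorded wave files:

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_command(codebook, features):
    """Return the stored command whose codeword is nearest to the input."""
    return min(codebook, key=lambda cmd: distance(codebook[cmd], features))

# Hypothetical codewords for the three sensor commands:
codebook = {
    "temperature": [0.9, 0.1, 0.2],
    "pressure":    [0.2, 0.8, 0.1],
    "speed":       [0.1, 0.2, 0.9],
}
print(match_command(codebook, [0.15, 0.75, 0.2]))   # pressure
```

The "least difference" rule described above is exactly this minimum-distance search over the stored codewords.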


[Figure: user interaction over HFP/HSP, a Bluetooth SPP link to the system, and a wired connection to the different sensors connected to the vehicle and the vehicle status indicators]

FIGURE 3: Proposed System Architecture

The system architecture is as shown in the figure above. It is divided into a hardware block and a software block.

2.3 HARDWARE ARCHITECTURE

The hardware block contains two circuit boards: one acts as the ECU (electronic control unit) and the other contains a Bluetooth transceiver for transmission of data to the system. The pin configuration and working of the hardware block are shown in the figure below.


Figure 4: VCU Circuit Board

The three sensors collect information from the vehicle; the three parameters used here are temperature, pressure and speed. The sensors are interfaced to the PIC16F877, which collects their readings; their outputs are connected to pins 8, 9 and 10 respectively. The received sensor data are transmitted to the controller area network by means of 4 wires; the output from the PIC is taken from pins 37 to 40. The controller area network section is built around two ICs, the MCP2515 (a CAN controller) and the PCA82C250 (a CAN transceiver), one handling reception and operation and the other the transmission of data. The output from the PIC is connected to the MCP2515 through pins 13 to 16, as shown in the figure above. The output is then passed to the Bluetooth transceiver kit via the PCA82C250, and the data are transmitted over two wires, as shown in the pin diagram above. The variation in the readings is monitored on an LCD display, which is interfaced to the PIC through pins 20 to 30 as shown in the figure. The display unit is used to check the flow of data through the consecutive layers.


Figure 5: Bluetooth Transmitter Circuit Board

The operation of the Bluetooth circuit board is shown in the figure above. The controller area network node implemented on this board acts as the


slave, which receives the data from the master controller area network node in the vehicle control unit. The Bluetooth and controller area network ICs are interconnected to the microcontroller, and the data are transferred between them by means of this microcontroller. The data received from the VCU are transmitted to the system by means of the Bluetooth transceiver. The data are interfaced to MATLAB for evaluation by means of the Java Development Kit, and the received data are stored in the form of a text file. MATLAB reads the text file stored by the system, and the data flow continues for evaluation. The evaluation of performance is discussed in the experimental results below.

2.4 SOFTWARE ARCHITECTURE:

The figure below describes the operation of the software architecture of the system. The system uses a microphone for user interaction. The input from the microphone is passed into the sampling and noise-addition block, where the incoming voice signal is sampled and a reference noise is added to it; the reference noise is added in order to estimate the system performance. After the addition of the reference noise, the filtering of the incoming signal is performed by the existing Bessel low-pass filter and by the proposed Chebyshev filter.
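The sampling and noise-addition step can be sketched as follows: a minimal Python illustration that scales white noise so the degraded signal has a chosen reference SNR. The tone, sample rate and 10 dB target are illustrative assumptions, not values taken from the original system:

```python
import math
import random

def add_noise_at_snr(signal, snr_db, seed=1):
    """Add white Gaussian noise scaled so the result has the given SNR (dB)."""
    rng = random.Random(seed)
    p_signal = sum(s * s for s in signal) / len(signal)   # mean signal power
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))        # required noise power
    scale = math.sqrt(p_noise)
    return [s + scale * rng.gauss(0, 1) for s in signal]

# A sampled 50 Hz tone at 8 kHz, degraded to a 10 dB reference SNR:
fs = 8000
clean = [math.sin(2 * math.pi * 50 * n / fs) for n in range(fs)]
noisy = add_noise_at_snr(clean, snr_db=10.0)
```

Degrading the input at a known, repeatable SNR is what allows the two candidate filters to be compared fairly on the same noisy command.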


FIGURE 6: Proposed System Software Architecture

The filtered voice signal is passed into the vector quantization block, where the vector quantization algorithm is used for voice detection. The system performance is compared with that of the existing system, and the improved performance in voice detection is shown in the results. After detection of the voice command, the parameter analysis is executed, the corresponding parameter is read from the data received via the Bluetooth transceiver, the voice conversion is carried out, and the corresponding voice output is produced on the speaker.

2.5 IMPLEMENTATION TOOLS

MATLAB 7.5

MATLAB, which stands for MATrix LABoratory, is a software package developed by MathWorks, Inc. to facilitate numerical computation as well as some symbolic manipulation, and it offers great flexibility. MATLAB has evolved over a period of years with input from many users. In university

environments it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is a tool of choice for high-productivity research, development, and analysis. MATLAB features a family of add-on, application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized techniques. Toolboxes are comprehensive collections of MATLAB functions. Areas in which toolboxes are available include signal and image processing, control systems, neural networks, fuzzy logic, and many others. The image processing toolbox supports a wide range of processing operations, including spatial image transformations, morphological operations, neighborhood and block operations, filter design, and image analysis and enhancement.

MPLAB
MPLAB Integrated Development Environment (IDE) is a free, integrated toolset for the development of embedded applications on Microchip's PIC and dsPIC microcontrollers.

MPLAB IDE v8

The current version of MPLAB IDE is version 8. It is a 32-bit application on Microsoft Windows and includes several free software components for application development, hardware emulation and debugging. MPLAB IDE also serves as a single, unified graphical user interface for additional Microchip and third-party software and hardware development tools. Both Assembly and C programming languages can be used with MPLAB IDE v8; others may be supported through the use of third-party programs.


Support for MPLAB IDE, along with sample code, tutorials, and drivers, can be found on Microchip's website. MPLAB IDE v8 does not support the Linux, Unix or Macintosh operating systems.

MPLAB X IDE

MPLAB X is not a new version of the current MPLAB IDE v8 framework but is instead based on Oracle's open-source NetBeans platform. In addition to its predecessor's functionality and compatibility with Microchip's existing development tools, the new IDE utilises many NetBeans features, allowing for user-interface improvements and performance upgrades. This also includes highly anticipated cross-platform support in MPLAB IDE, allowing development for PIC microcontrollers on Mac OS X and Linux operating systems, in addition to Windows. MPLAB X is currently in beta.

CHAPTER-III

3.1 EXPERIMENTAL RESULTS


Figure 7: error calculation using SNR for Butterworth filter


Figure 8: error calculation using SNR for Adaptive LMS filter


Figure 9: error calculation using SNR for Chebyshev filter

Error analysis:


For evaluation of the error in the speech input, the signal-to-noise ratio is used to calculate the amount of error occurring during the filtering of noise by the filters. The error rate for the Butterworth filter is higher when compared to the other two filters, and the Chebyshev filter is noted to be the lowest-error filter after repeating the comparison for different voice commands under different noisy environments.

Filter implementation:

For the implementation of a filter in the software architecture we prefer the Chebyshev filter, as it produces the least error, as discussed in the error analysis above. The performance of the newly implemented filter is evaluated by passing the noisy voice command through the two filters; the net result seems to be the best, with maximum noise reduction for all types of command, and the command detection is improved over the existing system. The proposed system's error ratio analysis is shown in the figure below.


Figure 10: error calculation using SNR for Chebyshev filter and Bessel low-pass filter
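The SNR-based error figure used in these comparisons can be computed as below. This is a sketch with placeholder signals (a scaled copy of the clean signal stands in for a filter output); it is not the project's MATLAB code:

```python
import math

def snr_db(reference, test):
    """SNR of `test` against `reference`, treating the difference as error."""
    p_sig = sum(r * r for r in reference)
    p_err = sum((r - t) ** 2 for r, t in zip(reference, test))
    return 10.0 * math.log10(p_sig / p_err)

clean    = [math.sin(0.01 * n) for n in range(1000)]
filtered = [0.95 * s for s in clean]          # placeholder filter output
print(round(snr_db(clean, filtered), 1))      # 26.0 -- higher is better
```

Comparing candidate filters then amounts to computing this figure for each filter's output against the clean reference: the filter whose output yields the higher SNR has left less residual error.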

3.2 ADVANTAGES OF PROPOSED SYSTEM


o Mileage level indication with respect to speed
o Temperature level indication in voice
o Speed level indication in voice
o GPS indication to the driver in voice
o Voice-based locking system
o Extended range of applications, up to 70 vocal interactions with the vehicle

3.3 INTERFACING HARDWARE AND SOFTWARE:

For interfacing hardware and software we use the Java Development Kit. The data received from the serial port are stored in the system by means of a Java program. A text file is used for interfacing between the hardware and software layers: the values received from the different sensors are stored in the text file, and the text file is then read and executed in MATLAB, in text as well as in voice form, as shown in the figures below.
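The text-file handoff between the serial-reading program and the analysis layer can be sketched as follows (in Python for illustration, although the project uses Java and MATLAB; the file name and the comma-separated temperature/pressure/speed record format are assumptions):

```python
import os
import tempfile

def write_readings(path, readings):
    """Append sensor records as 'temperature,pressure,speed' lines."""
    with open(path, "a") as f:
        for temp, pressure, speed in readings:
            f.write(f"{temp},{pressure},{speed}\n")

def read_readings(path):
    """Parse the records back for the analysis layer."""
    with open(path) as f:
        return [tuple(float(v) for v in line.strip().split(","))
                for line in f if line.strip()]

# Hypothetical log file and a single sensor record:
path = os.path.join(tempfile.gettempdir(), "sensor_log.txt")
open(path, "w").close()                      # start with an empty log
write_readings(path, [(31.5, 2.1, 40.0)])
print(read_readings(path))                   # [(31.5, 2.1, 40.0)]
```

A plain text file keeps the two sides decoupled: the Java side only appends lines, and the analysis side only needs to parse them.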

3.4 INTERFACING EVALUATION


Figure 11: Interfacing the Hardware and Software by Means of a Text File


Figure 12: Reading the Data from the Text File into the Software Layer

3.5 CONCLUSION


The performance of the proposed system has been improved over the existing system by adding a filter in the software architecture. The addition of the filter reduces the distances computed for the voice by the vector quantization algorithm used in the software block, thereby improving command execution compared to the existing system.


3.6 FUTURE WORK

The security level of the vehicle can be improved by designing a separate embedded system instead of using a smartphone. Designing a unique, separate embedded system for each vehicle can further extend the applications, such as a voice-based vehicle locking system, a more secure remote monitoring system, etc. The applications can be extended depending upon the requirements of the vehicle.


REFERENCES

[1] C. Spelta, V. Manzoni, A. Corti, A. Goggi, and S. M. Savaresi, "Smartphone-based vehicle-to-driver/environment interaction system for motorcycles," IEEE Embedded Systems Letters, vol. 2, no. 2, p. 39, June 2010.
[2] M. Pieve, F. Tesauri, and A. Spadoni, "Mitigation accident risk in powered two wheelers domain: Improving effectiveness of human machine interface collision avoidance system in two wheelers," in Proc. 2nd Conf. Human Syst. Interact., Catania, Italy, May 21-23, 2009, pp. 603-607.
[3] E. D. Bekiaris, A. Spadoni, and S. I. Nikolaou, "SAFERIDER project: New safety and comfort in powered two wheelers," in Proc. 2nd Conf. Human Syst. Interact., Catania, Italy, May 21-23, 2009, pp. 600-602.
[4] A. Barn and P. Green, "Safety and Usability of Speech Interfaces for In-Vehicle Tasks While Driving: A Brief Literature Review," Tech. Rep. UMTRI-2006-5, 2006.
[5] M. Cellario, "Human-centered intelligent vehicles: Toward multimodal interface integration," IEEE Intell. Syst., vol. 16, no. 4, p. 78, Jul. 2001.
[6] F. Bellotti, "COMUNICAR: Designing a multimedia, context-aware human-machine interface for cars," Cogn., Technol. Work, vol. 7, p. 36, 2005.
[7] G. Costagliola, S. D. Martino, F. Ferrucci, G. Oliviero, U. Montemurro, and A. Paliotti, "Handy: A new interaction device for vehicular information systems," in Proc. Mobile Human-Comput. Interact., Glasgow, U.K., 2004, pp. 545-547.


[8] A. W. Gellatly, "The Use of Speech Recognition Technology in Automotive Applications," Ph.D. dissertation, Virginia Polytech. Inst., Blacksburg, VA, 1997.
[9] K. Komiya, "Guidance of a Wheelchair by Voice," Human Interface, vol. 1999, pp. 277-280, 2000.

[10] J. D. Lee, B. Caven, S. Haake, and T. L. Brown, "Speech-based interaction with in-vehicle computers: The effect of speech-based e-mail on drivers' attention to the roadway," Human Factors: J. Human Factors Ergonomics Soc., vol. 43, pp. 631-640, Jan. 2001.
[11] C. Little, "The intelligent vehicle initiative: Advancing human-centered smart vehicles," Public Roads, vol. 61, p. 18, 1997.
[12] S. M. Savaresi, "The role of real-time communication for distributed or centralized architectures in vehicle dynamics control systems," in Proc. 6th IEEE Int. Workshop Factory Commun. Syst., Torino, Italy, 2006, pp. 1-6 (plenary presentation).
[13] Bluetooth SIG, Specification of the Bluetooth System, Core, Version 1:2005-10, 2005.
[14] Road Vehicles - Interchange of Digital Information - Controller Area Network (CAN) for High Speed Communication, Standard ISO 11898, 1993.
[15] A. Hart, "Developing market growth strategies for Advanced Driver Assistance Systems," 2007.
[16] ARE Database, Traffic Safety Basic Facts, November 2006.

[17] Andreone, M. Provera, "Intervehicle communication and cooperative systems: local dynamic safety information distributed among the infrastructure and the vehicle as virtual sensors to enhance road safety," June 2005.
[18] DOT HS 809 724, NCSA, Traffic Safety Facts - Crash Stats, June 2004.
[19] Benso, D3.2 WATCHOVER Application and Protocols, January 2008.
[20] K. Meinken, F. Visintainer, R. Montanari, and J. Moore, D3.3 HMI Concepts and Prototypes, May 2008.
[21] M. Cho, K. Ku, Y. Shi, and Kanagawa, "A Human Interface Design of Multiple Collision Warning System," 2005.

[22] SAFERIDER Description of Work, copyright SAFERIDER Consortium, October 2007.
[23] SAFETYNET, Traffic Safety Basic Facts 2005 for Motorcycles & Mopeds, European Commission, October 2005.
[24] DOT HS 809 724, NCSA, Traffic Safety Facts Crash Stats, June 2004.
[25] MAIDS project Final Report 1.2, In-depth investigations of accidents involving powered two-wheelers, September 2004.
[26] Traffic Safety Facts 2003, A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, NHTSA, DOT HS 809 775, January 2005.
[27] Available: http://www-nrd.nhtsa.dot.gov/Pubs/809775.PDF
[28] European Commission e-Safety strategic plan 2002-2010, DG Information Society Technologies, September 2001.

61

APPENDIX

HARDWARE BLOCK: VCU

#include <pic.h>
#include "lcd_porte.h"
#include "delay.c"
#include "can_config.h"
//#include "adc.h"
//__CONFIG(HS & UNPROTECT & WRTEN & DUNPROT);

//SPI pin definitions
#define CS   RC0
#define SCLK RC3
#define SDI  RC4
#define SD0  RC5

//#define pulse RB0

//SPI function prototypes
void SPI_Init();
void SPI_Write(char);
void SPI_Read();
char SPI_Read_Status();
void SPI_CAN_Write(char,char);
void peripherals();
void ser_out(unsigned char);
char read_temp();
char read_sensor();
char read_temp1();

char SPI_dummy,data,temp,filter,temp1;
char ad_result;
unsigned char sw1,sensor=0,a,sensor1,msec,sec;
unsigned int speed;

void interrupt ext()
{
    if(TMR1IF==1)
    {
        TMR1ON=0;
        TMR1IF=0;
        msec++;
        if(msec>=20)
        {
            sec++;
            msec=0;
        }
        TMR1H=0x3c; TMR1L=0xaf;
        TMR1ON=1;
    }
    if(INTF)
    {
        INTF=0;
        sensor++;
    }
}

void main()
{
    //input & output configuration
    ADCON1=0x0E;        //configure all pins as digital except RA0
    TRISA=0X03;         //RA0->input
    TRISE=0X00;
    TRISB=0X01;         //PORTB=0X00;
    TRISC=0XD0;         //RC4-->i/p & RC3,RC5-->o/p
    TMR1IE=PEIE=GIE=1;
    TMR1H=0x3c; TMR1L=0xaf;
    TMR1ON=1;
    INTE=1;
    INTEDG=1;
    lcd_init();
    // peripherals();
    command(0x80);
    lcd_dis("Smart Phone Bsd ",16);
    command(0xc0);
    lcd_dis("Interaction Sys ",16);
    DelayUs(5000);DelayUs(5000);DelayUs(5000);DelayUs(5000);
    command(0x80);
    lcd_dis("Temp:     Lvl:  ",16);
    command(0xc0);
    lcd_dis("Speed:          ",16);

    SPI_Init();         //initialize spi

    //select spi write and reset mcp2515
    CS=0;                           //CS enable
    SPI_Write(CAN_WRITE);
    SPI_Write(CAN_RESET);
    DelayUs(250);
    CS=1;                           //CS disable

    //set configuration mode and Fclkout=sys_clk/1
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CANCTRL,0x80);
    CS=1;

    //disable all error interrupts
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CANINTE,0x00);
    SPI_Write(0x00);
    CS=1;

    //clear TXB0 txmn status flags and set TXB0=highest msg priority
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(TXB0CTRL,0x03);
    CS=1;

    /* Set the bit rate of the data transmission to 125 Kbps */
    //SJW=1*TQ
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CNF1,0x07);
    CS=1;

    //PHSEG2 len dtmn by CNF3, sample point-1, PHSEG1=3*TQ
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CNF2,0x90);
    CS=1;

    //wake up filter disabled, PHSEG2=3*TQ
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CNF3,0x02);
    CS=1;

    //set sensor node msg id
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(TXB0SIDH,0xb0);   //1011 0000
    SPI_Write(0X00);    //lower 3 bits of std id=000, ext id=00, disable ext id
    SPI_Write(0X00);    //ext id=0000 0000
    SPI_Write(0X00);    //ext id=0000 0000
    SPI_Write(0X03);    //set data length=3 bytes
    SPI_Write(0X00);    //initial data buffer=0000 0000
    CS=1;

    //receive valid msg with std id, no RTR, rollover disabled & filhit=rxf0
    //RXB0 SPECIFIC
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(RXB0CTRL,0x20);
    CS=1;

    //acceptance filter mask on std high id
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(RXM0SIDH,0xFF);
    SPI_Write(0Xe0);
    SPI_Write(0X00);
    SPI_Write(0X00);
    CS=1;

    //set msg id
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(RXF0SIDH,0xb0);   //accept temp value
    SPI_Write(0X00);
    SPI_Write(0X00);
    SPI_Write(0X00);
    SPI_Write(0X01);
    CS=1;

    //set mcp2515 to normal operating mode
    DelayMs(10);
    while(1)
    {
        CS=0;
        SPI_Write(CAN_WRITE);
        SPI_CAN_Write(CANCTRL,OPMODE_NORMAL);
        CS=1;

        //count speed-sensor pulses for one second
        TMR1ON=1;
        while(sec<1)
        {
            sensor1=sensor;
        }
        TMR1ON=0;
        sec=0; sensor=0;

        temp=read_temp();
        temp1=read_temp1();
        // command(0xCb); hex_dec(sensor1);

        //write transmit data to TXB0 data registers
        CS=0;
        SPI_Write(CAN_WRITE);
        SPI_CAN_Write(TXB0DB0,temp);
        SPI_Write(temp1);
        SPI_Write(sensor1);
        CS=1;

        speed=sensor1*30;
        command(0xC6); hex_dec1(speed);
        command(0x85); hex_dec(temp);
        command(0x8c); hex_dec2(temp1);

        CS=0;
        SPI_Write(CAN_WRITE);
        SPI_CAN_Write(TXB0CTRL,0x0b);   //enable data transmission
        CS=1;

        CS=0;
        SPI_Write(CAN_BIT_MODIFY);
        SPI_Write(CANINTF);             //clear Transmit Buffer 0 INT flag
        SPI_Write(0xff);
        SPI_Write(0x00);
        CS=1;

        DelayMs(250);
    }
}

void SPI_Init()
{
    SSPCON=0X30;        //SSPEN=1, CKP=1, SPI master mode, clk=Fosc/16
    SSPSTAT=0X00;       //CKE=0, SMP=0
}

void SPI_CAN_Write(char addr,char data)
{
    SPI_Write(addr);
    SPI_Write(data);
}

void SPI_Write(char data)
{
    SSPBUF=data;
    do{
    }while(!SSPIF);
    SPI_dummy=SSPBUF;
    SSPIF=0;
}

void SPI_Read()
{
    CKP=0;
    data=0;
    SSPBUF=data;
    do{
    }while(!SSPIF);
    SPI_dummy=SSPBUF;
    CS=0x1;
    CKP=1;
}

char read_temp()
{
    ADCON0=0X01;
    DelayMs(10);
    ADCON0=0X05;
    while(ADCON0!=0X01);
    ad_result=ADRESH;
    return ad_result;
}

char read_temp1()
{
    ADCON0=0X09;
    DelayMs(10);
    ADCON0=0X05;
    while(ADCON0!=0X01);
    ad_result=ADRESH;
    return ad_result;
}

char SPI_Read_Status()
{
    CS=0;               //CS enable
    SPI_Write(CAN_READ_STAT);
    SPI_Read();
    CS=1;               //CS disable
    return SPI_dummy;
}

/*void peripherals()
{
    GIE=PEIE=1;
    SPBRG=25;           //for 9600 baud rate, 4Mhz
    BRGH=1;             //baud rate high
    SYNC=0;             //asynchronous mode
    SPEN=1;             //serial port enable
    TXEN=1;             //tx enable
}*/

void ser_out(unsigned char ss)
{
    TXREG=ss;
    while(!TXIF);
    TXIF=0;
}

/*char read_sensor()
{
    unsigned char sen=0;
    TMR1H=0x3c; TMR1L=0xaf;
    TMR1ON=1;
    while(!TMR1IF)
    {
        if(!pulse && !a)a=1;
        else if(pulse && a){sen++;a=0;}
    }
    TMR1IF=0; TMR1ON=0;
    return sen;
}*/

CAN

#include <pic.h>
#include "pic_lcd8.h"
#include "delay.c"
#include "can_config.h"
#include "pic_serial.h"
//__CONFIG(HS & UNPROTECT & WRTEN & DUNPROT);

//SPI pin definitions
#define CS   RC0
#define SCLK RC3
#define SDI  RC4
#define SD0  RC5

void SPI_Init();
void SPI_Write(char);
void SPI_Read();
char SPI_Read_Status();
void SPI_CAN_Write(char,char);
void bluetooth_init();
void Command_Send(const unsigned char *,unsigned char);
void ack();
void del();

unsigned char SPI_dummy,data,temp0,filter=0,temp2,temp3,i,j,v[60],f,g,re,p,sec,count1;
unsigned int temp1;

void main()
{
    ADCON1=0x06;
    TRISD=0x00;
    TRISE=0x00;
    TRISB=0X00;
    TRISC=0xD0;     //1101 0000: RC4-->i/p & RC3,RC5-->o/p
    lcd8_init();
    lcd8_display(0xc0,"Smart Phone Bsd ",16);
    lcd8_display(0x80,"Interaction Sys ",16);
    DelayUs(5000);DelayUs(5000);
    Serial_Init(9600);
    bluetooth_init();
    Serial_Out('!');
    Serial_Out('@');
    Serial_Out('#');
    Receive(0);
    lcd8_display(0x80,"Smart Phone Bsd ",16);
    lcd8_display(0xc0,"Interaction Sys ",16);
    DelayUs(5000);DelayUs(5000);DelayUs(5000);DelayUs(5000);
    lcd8_display(0x80,"Temp:     Lvl:  ",16);
    lcd8_display(0xc0,"Speed:          ",16);
    SPI_Init();         //initialize spi

    //select spi write and reset mcp2515
    CS=0;               //CS enable
    SPI_Write(CAN_WRITE);
    SPI_Write(CAN_RESET);
    DelayUs(250);
    CS=1;               //CS disable

    //set configuration mode and Fclkout=sys_clk/1
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CANCTRL,0x80);
    CS=1;

    //disable all error interrupts
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CANINTE,0x00);
    SPI_Write(0x00);
    CS=1;

    /*//clear TXB0 txmn status flags and set TXB0=lowest msg priority
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(TXB0CTRL,0x03);
    CS=1; */

    //SJW=1*TQ
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CNF1,0x07);
    CS=1;

    //PHSEG2 len dtmn by CNF3, sample point-1, PHSEG1=3*TQ
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CNF2,0x90);
    CS=1;

    //wake up filter disabled, PHSEG2=3*TQ
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(CNF3,0x02);
    CS=1;

    /*//set sensor node msg id
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(TXB0SIDH,0xb0);   //transmit speed value
    SPI_Write(0X00);
    SPI_Write(0X00);
    SPI_Write(0X00);
    SPI_Write(0X03);
    SPI_Write(0X00);
    CS=1; */

    //receive valid msg with std id, no RTR, rollover disabled & filhit=rxf0
    //RXB0 SPECIFIC
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(RXB0CTRL,0x20);
    CS=1;

    //acceptance filter mask on std high id
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(RXM0SIDH,0xFF);
    SPI_Write(0Xe0);
    SPI_Write(0X00);
    SPI_Write(0X00);
    CS=1;

    //set msg id
    CS=0;
    SPI_Write(CAN_WRITE);
    SPI_CAN_Write(RXF0SIDH,0xb0);   //accept temp value
    SPI_Write(0X00);
    SPI_Write(0X00);
    SPI_Write(0X00);
    //SPI_Write(0X01);
    CS=1;               //CS disable

    //set mcp2515 to normal operating mode
    DelayMs(10);
    GIE=1; PEIE=1; TMR1IE=1;
    TMR1H=0x3C; TMR1L=0xAF;
    T1CON=0X10;
    TMR1ON=0;

    while(1)
    {
        TMR1ON=1;
        CS=0;
        SPI_Write(CAN_WRITE);
        SPI_CAN_Write(CANCTRL,OPMODE_NORMAL);
        CS=1;
        // lcd8_decimal3(0xcc,sec);

        //receive data: check whether any data was received into RX Buffer 0
        if(SPI_Read_Status() & 0x01)
        {
            CS=0;
            SPI_CAN_Write(CAN_READ,RXB0CTRL);   //read filter hit bit for RX Buffer 0
            SPI_Read();
            CS=1;
            filter=SPI_dummy & 0x01;
            if(filter==0)
            {
                //display temperature value
                CS=0;
                SPI_CAN_Write(CAN_READ,0x66);
                SPI_Read();
                CS=1;
                temp0=SPI_dummy;
                lcd8_decimal3(0x85,temp0);

                CS=0;
                SPI_CAN_Write(CAN_READ,0x67);
                SPI_Read();
                CS=1;
                temp2=SPI_dummy;
                lcd8_decimal33(0x8c,temp2);

                CS=0;
                SPI_CAN_Write(CAN_READ,0x68);
                SPI_Read();
                CS=1;
                temp3=SPI_dummy;
                // lcd8_decimal3(0xcb,temp3);
                temp1=temp3*30;
                lcd8_decimal4(0xc6,temp1);

                if(sec>=2)
                {
                    sec=0; TMR1ON=0;
                    Serial_Out('*');
                    Serial_Out(temp0%1000/100+0x30);
                    Serial_Out(temp0%100/10+0x30);
                    Serial_Out(temp0%10/1+0x30);
                    Serial_Out(temp2%1000/100+0x30);
                    Serial_Out(temp2%100/10+0x30);
                    Serial_Out('.');
                    Serial_Out(temp2%10/1+0x30);
                    Serial_Out(temp1%10000/1000+0x30);
                    Serial_Out(temp1%1000/100+0x30);
                    Serial_Out(temp1%100/10+0x30);
                    Serial_Out(temp1%10/1+0x30);
                    Serial_Out('#');
                    Serial_Out(0X0D);
                    Serial_Out(0X0A);
                    delay(65000);delay(65000);
                    sec=0; TMR1ON=1;
                }
            }
            CS=0;
            SPI_Write(CAN_BIT_MODIFY);
            SPI_Write(CANINTF);     //clear CANINTF interrupt flags
            SPI_Write(0xff);
            SPI_Write(0x00);
            CS=1;
            DelayMs(200);
        }
    }
}

void SPI_Init()
{
    SSPCON=0X30;        //SSPEN=1, CKP=1, SPI master mode, clk=Fosc/16
    SSPSTAT=0X00;       //CKE=0, SMP=0
}

void SPI_CAN_Write(char addr,char data)
{
    SPI_Write(addr);
    SPI_Write(data);
}

void SPI_Write(char data)
{
    SSPBUF=data;
    do{
    }while(!SSPIF);
    SPI_dummy=SSPBUF;
    SSPIF=0;
}
void SPI_Read()
{
    CKP=0;
    data=0;
    SSPBUF=data;
    do{
    }while(!SSPIF);
    SPI_dummy=SSPBUF;
    CS=0x1;
    CKP=1;
}

char SPI_Read_Status()
{
    CS=0;               //CS enable
    SPI_Write(CAN_READ_STAT);
    SPI_Read();
    CS=1;               //CS disable
    return SPI_dummy;
}

void bluetooth_init()
{
    lcd8_display(0xc0,"   Initialize   ",16);
    Command_Send("LLL",3);
    ack();del();
    lcd8_display(0x80,"      LLL       ",16);
    lcd8_display(0xc0,"                ",16);
    ack();del();
    lcd8_display(0xc0,"   AT+MODE=0    ",16);
    Command_Send("AT+MODE=0",9);
    ack();del();
    lcd8_display(0xc0,"      AT+F      ",16);
    Command_Send("AT+F",4);
    ack();del();
    lcd8_display(0x80,"                ",16);
    lcd8_display(0xc0,"      LLL       ",16);
    Command_Send("LLL",3);
    ack();del();
    lcd8_display(0x80,"                ",16);
    lcd8_display(0xc0,"   AT+MODE=1    ",16);
    Command_Send("AT+MODE=1",9);
    ack();del();
    lcd8_display(0x80," Init Complete  ",16);
    lcd8_display(0xc0,"                ",16);
    lcd8_display(0x80,"                ",16);
    lcd8_display(0xc0,"                ",16);
    lcd8_display(0xc0,"     AT+INQ     ",16);
    Command_Send("AT+INQ",6);
    ack();del();while(RCIF==0);ack();
    delay(65000);delay(65000);
    lcd8_display(0x80,"                ",16);
    lcd8_display(0xc0,"                ",16);
    lcd8_display(0xc0,"     AT+CON     ",16);
    Command_Send("AT+CON=00:1F:81:00:02:50,2",26);
    ack();while(!RCIF);ack();del();
    lcd8_display(0x80,"                ",16);
    lcd8_display(0xc0,"                ",16);
}
void interrupt ser_int(void)
{
    if(RCIF)
    {
        RCIF=0;
        v[i]=RCREG; i++;
    }
    if(TMR1IF==1)
    {
        TMR1ON=0;
        TMR1IF=0;
        count1++;
        if(count1>=20){sec++;count1=0;}
        TMR1H=0x3C; TMR1L=0xAF;
        TMR1ON=1;
    }
}

void Command_Send(const unsigned char *dat,unsigned char n)
{
    unsigned char ser_j;
    Serial_Out(0x0d);
    Serial_Out(0x0a);
    for(ser_j=0;ser_j<n;ser_j++)
    {
        Serial_Out(dat[ser_j]);
    }
    Serial_Out(0x0d);
    Serial_Out(0x0a);
}

void ack()
{
    delay(55000);
    re=i; i=0;
    for(j=0;j<re;j++)
    {
        lcd8_write(0x80+j,v[j]);
    }
    if(re>15)
    {
        for(g=0,j=16;j<re;g++,j++)
        {
            lcd8_write(0xc0+g,v[j]);
        }
    }
    for(j=0;j<=re;j++)v[j]=0;
}

void del()
{
    delay(65000);delay(40000);
}

INTERFACING JAVA CODE

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Enumeration;
import javax.comm.CommPortIdentifier;
import javax.comm.SerialPort;
import javax.comm.SerialPortEvent;
import javax.comm.SerialPortEventListener;

public class comcheck implements SerialPortEventListener {
    int i=1;
    SerialPort serialport;
    String str1,str2;
    String test,test1,test3;
    int count=0;
    String d1;
    String button;
    String status="n";
    String ccc;
    Enumeration<CommPortIdentifier> listofport;
    CommPortIdentifier identifier;
    CommPortIdentifier identifier1;
    InputStream inputstream;
    OutputStream outputstream;
    String Out;
    String d,f1;
    ResultSet rs,rs2;
    String str5;
    Statement st;
    ArrayList<String> arr=new ArrayList<String>();
    String SplitArray[]=null;
    static public String Path;
    int flag=0;

    @SuppressWarnings("unchecked")
    public comcheck(){
        listofport=CommPortIdentifier.getPortIdentifiers();
        while(listofport.hasMoreElements()){
            identifier=(CommPortIdentifier)listofport.nextElement();
            if(identifier.getPortType()==CommPortIdentifier.PORT_SERIAL)
            {
                if(identifier.getName().equals("COM4")){
                    System.out.println("Enter into port");
                    try{
                        serialport=(SerialPort)identifier.open("SimpleApp",2000);
                        serialport.addEventListener(this);
                        serialport.notifyOnDataAvailable(true);
                        serialport.setSerialPortParams(9600,SerialPort.DATABITS_8,
                                SerialPort.STOPBITS_1,SerialPort.PARITY_NONE);
                        inputstream=serialport.getInputStream();
                        outputstream=serialport.getOutputStream();
                    }
                    catch(Exception e){
                        System.out.println("Exception"+e);
                    }
                }
                else {
                    // System.out.println("Else");
                }
            }
        }
    }
    public void serialEvent(SerialPortEvent serialevent) {
        switch(serialevent.getEventType()) {
        case SerialPortEvent.BI:
        case SerialPortEvent.OE:
        case SerialPortEvent.FE:
        case SerialPortEvent.PE:
        case SerialPortEvent.CD:
        case SerialPortEvent.CTS:
        case SerialPortEvent.DSR:
        case SerialPortEvent.RI:
        case SerialPortEvent.OUTPUT_BUFFER_EMPTY:
            break;
        case SerialPortEvent.DATA_AVAILABLE:
            try{
                String temp=null;
                while(true)
                {
                    Thread.sleep(1000);
                    int len=inputstream.available();
                    byte[] b=new byte[inputstream.available()];
                    while(inputstream.available()>0)
                    {
                        inputstream.read(b, 0, len);
                        String str=new String(b);
                        System.out.println(str);
                        writing(str);
                    }
                    break;
                }
            }
            catch (Exception e) {
                //System.out.println("the system is not working");
            }
        }
    }

    public void writing(String str)
    {
        try{
            File iu=new File("can.txt");
            FileWriter fstream = new FileWriter(iu);
            BufferedWriter out = new BufferedWriter(fstream);
            out.write(str);
            out.close();
        }
        catch (Exception e) {
            System.err.println("Error: " + e.getMessage());
        }
    }

    public static void main(String args[])
    {
        new comcheck();
    }
}

SOFTWARE BLOCK: MATLAB CODE

clc;
clear all;
close all;

% Training Phase ---------------------------------------------------------
disp('Training Speech Signals ...');
disp(' ');
ck = input('Have You Already Record the Voices (Press Y or N) : ','s');
disp(' ');
% number of parameters
no_p = str2num(input('Enter the Number of Parameters : ','s'));
disp(' ');
if ck == 'N'
    for i=1:no_p
        Fs = 22050;
        if i==1
            str = '''Temperature''';
            st = ['Press Enter and Say ',str];
            input(st);
            y = wavrecord(1*Fs,Fs,'double');
            pause(2);
        elseif i==2
            str = '''Pressure''';
            st = ['Press Enter and Say ',str];
            input(st);
            y = wavrecord(1*Fs,Fs,'double');
            pause(2);
        elseif i==3
            str = '''Speed''';
            st = ['Press Enter and Say ',str];
            input(st);
            y = wavrecord(1*Fs,Fs,'double');
            pause(2);
        end
        st = ['wave',num2str(i),'.wav'];
        wavwrite(y,Fs,st);
        wavplay(y,Fs);
    end
end

% Training of Speech Signals using GMM
k = 16;                          % number of centroids required
for i = 1:no_p                   % train a VQ codebook for each speaker
    st = ['wave',num2str(i),'.wav'];
    disp(st)
    [s, Fs] = wavread(st);
    v = mfcc(s, Fs);             % Compute MFCC's
    code{i} = vqlbg(v, k);       % Train VQ codebook
end
disp('Enter the Parameter Values for Test ');
disp(' ');
Temp = str2num(input('Enter the Temperature Value : ','s'));
Pressure = str2num(input('Enter the Pressure Value : ','s'));
Speed = str2num(input('Enter the Speed Value : ','s'));
disp(' ');

% Test the Speech for Getting a Parameter
disp('Speech Test ...');
disp(' ');
input('Press Enter and Say a Parameter you want to know');
y = wavrecord(1*Fs,Fs,'double');
pause(2);
wavplay(y,Fs);
noise = wavrecord(1*Fs,Fs,'double');
pause(1);
wavplay(noise,Fs);
disp('Bessels Low Pass Filter . . .');
[s] = bessel(y,noise);
disp('Butterworth Filter . . .');
[e w_err] = butter(s, noise);
v = mfcc(s, Fs);                 % Compute MFCC's

distmin = 5;
k1 = 0;
d_vq = [];
for l = 1:length(code)           % for each trained codebook, compute distortion
    d = disteu(v, code{l});
    dist = sum(min(d,[],2)) / size(d,1)
    d_vq = [d_vq dist];
    if dist < distmin
        distmin = dist;
        k1 = l;
    end
end
[value loc] = min(d_vq);

% read the datas
obj = fopen('can.txt','r');
data = fread(obj,'*char')';
Temp = str2num(data(2:4))
Pressure = round(str2num(data(5:8)))
Speed = round(str2num(data(9:12))/100)

% Get the Speech
w = cd;
cd('data');
selective = [10 20 30 40 50 60 70 80 90];
if loc == 1
    fn = find(Temp == selective)
    if (Temp>=10 && Temp<=20 || fn)
        f = [num2str(Temp) '.wav'];
        w1 = y; w2 = wavread(f);
        wavplay(w1,Fs); wavplay(w2,Fs);
    else
        f1 = [num2str(floor(Temp/10)) '0.wav'];
        f2 = [num2str(mod(Temp,10)) '.wav'];
        w1 = y; w2 = wavread(f1); w3 = wavread(f2);
        wavplay(w1,Fs); wavplay(w2,Fs); wavplay(w3,Fs);
    end
elseif loc == 2
    fp = find(Pressure == selective)
    if (Pressure>=10 && Pressure<=20 || fp)
        f = [num2str(Pressure) '.wav'];
        w1 = y; w2 = wavread(f);
        wavplay(w1,Fs); wavplay(w2,Fs);
    else
        f1 = [num2str(floor(Pressure/10)) '0.wav'];
        f2 = [num2str(mod(Pressure,10)) '.wav'];
        w1 = y; w2 = wavread(f1); w3 = wavread(f2);
        wavplay(w1,Fs); wavplay(w2,Fs); wavplay(w3,Fs);
    end
elseif loc == 3
    f1 = [num2str(floor(Speed/10)) '0.wav'];
    f2 = [num2str(mod(Speed,10)) '.wav'];
    w1 = y; w2 = wavread(f1); w3 = wavread(f2);
    wavplay(w1,Fs); wavplay(w2,Fs); wavplay(w3,Fs);
end
cd(w);
