
The VISNAV system uses a Position Sensitive Diode (PSD) sensor for 6 DOF estimation.

Output currents from the PSD sensor determine the azimuth and elevation of the light source with respect to the sensor. With four or more light sources, called beacons, placed at known positions in the target frame, the six-degree-of-freedom data associated with the sensor can be calculated. Beacon channel separation and demodulation are done on a fixed-point digital signal processor (DSP), the Texas Instruments TMS320C55x [2], using digital down-conversion, synchronous detection, and multirate signal processing techniques. The demodulated sensor currents due to each beacon are communicated to a floating-point DSP, the Texas Instruments TMS320VC33 [2], for the subsequent navigation solution using the collinearity equations. Among competing systems [3], a differential global positioning system (GPS) is limited to mid-range accuracies and lower bandwidth, and requires complex infrastructure. Sensor systems based on differential GPS are also limited by geometric dilution of precision, multipath errors, receiver errors, etc. These limitations can be overcome by using the DSP-embedded VISNAV system.
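For illustration, the sketch below shows how line-of-sight angles might be recovered from the four PSD electrode currents. The centroid formulas assume one common tetra-lateral PSD electrode arrangement (polarity conventions vary between devices), and the sensor constants (active-area half-width, focal length) are hypothetical placeholders, not values from the VISNAV design.

```python
import numpy as np

# Hypothetical sensor constants (not from the paper).
L_HALF = 5.0e-3   # m, half-width of the PSD active area
FOCAL  = 20.0e-3  # m, effective focal length of the optics

def psd_centroid(i1, i2, i3, i4):
    """Light-spot centroid from the four electrode currents of a
    tetra-lateral PSD (one common electrode arrangement assumed)."""
    total = i1 + i2 + i3 + i4
    x = L_HALF * ((i2 + i3) - (i1 + i4)) / total
    y = L_HALF * ((i3 + i4) - (i1 + i2)) / total
    return x, y

def azimuth_elevation(x, y, f=FOCAL):
    """Line-of-sight angles to the beacon under a pinhole camera model."""
    azimuth   = np.arctan2(x, f)
    elevation = np.arctan2(y, np.hypot(x, f))
    return azimuth, elevation
```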

Signal Processing

This is the general block diagram of the VISNAV system. A sinusoidal carrier of approximately 40 kHz is applied to modulate each beacon LED drive current. The resulting induced PSD signal currents then vary sinusoidally at approximately the same frequency and are demodulated to recover the currents that are proportional to the beacon light centroid. Because the output of the PSD is very weak, these signals are first amplified by a preamplifier. The amplified signals are fed to a four-channel analog-to-digital converter, which converts the four channels of analog data into digital form. The digital data is then fed to the TMS320C55x DSP [2], which demodulates the signal. After demodulation, the four-channel data is fed to the six-degree-of-freedom estimator, which uses the DSP for estimation; from this point we obtain the sensor coordinates. As discussed earlier, the beacons are controlled to avoid the problem of saturation, using beacon control data generated by the TMS320VC33 DSP [2]. This control data is in digital form, and a radio link communicates it from the sensor electronics module to the beacon controller module.
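As a rough illustration of the demodulation step, the following NumPy sketch applies synchronous (lock-in) detection to one PSD channel carrying several frequency-division-multiplexed beacon carriers at once. The sample rate and the set of carrier frequencies are assumptions for the example; the actual system implements this with digital down-conversion and multirate processing on the fixed-point TMS320C55x rather than floating-point NumPy.

```python
import numpy as np

FS = 200_000.0                            # Hz, hypothetical ADC sample rate
BEACON_FREQS = [38e3, 40e3, 42e3, 44e3]   # Hz, assumed FDM carriers near 40 kHz

def demodulate(channel, freqs=BEACON_FREQS, fs=FS):
    """Synchronous (lock-in) detection: estimate the amplitude that each
    beacon carrier contributes to one PSD electrode current."""
    t = np.arange(len(channel)) / fs
    amplitudes = []
    for f in freqs:
        # Mix down with quadrature references, then average (a crude
        # low-pass filter) to reject the other carriers and the 2f terms.
        i = np.mean(channel * np.cos(2 * np.pi * f * t))
        q = np.mean(channel * np.sin(2 * np.pi * f * t))
        amplitudes.append(2.0 * np.hypot(i, q))
    return np.array(amplitudes)

# Example: one electrode seeing all four beacons simultaneously.
t = np.arange(4096) / FS
composite = sum(a * np.sin(2 * np.pi * f * t)
                for a, f in zip([1.0, 0.5, 0.8, 0.3], BEACON_FREQS))
print(demodulate(composite))  # approximately [1.0, 0.5, 0.8, 0.3]
```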
Spacecraft missions such as spacecraft docking and formation flying require high-precision relative position and attitude data. Although a global positioning system (GPS) can provide this capability near the Earth, deep space missions require the use of alternative technologies. One such technology is the vision-based navigation (VISNAV) sensor system developed at Texas A&M University. It comprises an electro-optical sensor combined with light sources or beacons. This patented sensor has an analog detector in the focal plane with a rise time of a few microseconds. Accuracies better than one part in 2000 of the field of view have been obtained. This paper presents a new approach involving simultaneous activation of beacons with frequency division multiplexing as part of the VISNAV sensor system.

Visual-Based Navigation Could Replace Satellite GPS


February 21, 2012 by PSFK. PSFK is a Mashable publishing partner that reports on ideas and trends in creative business, design, gadgets, and technology. This article is reprinted with the publisher's permission.

Dr. Michael Milford from Queensland University of Technology is researching visual-based navigation, which could replace capital-intensive satellite GPS systems by using camera technology and simple mathematical algorithms to uniquely identify locations. This decentralized approach could widen the technology's scope and improve its accuracy, while also making navigation a much cheaper and simpler task. The new approach to visual navigation algorithms has been dubbed SeqSLAM (Sequence Simultaneous Localisation and Mapping), and it uses local best-match and sequence recognition components to lock in locations. Dr. Milford explains how it works:

"SeqSLAM uses the assumption that you are already in a specific location and tests that assumption over and over again. For example, if I am in a kitchen in an office block, the algorithm makes the assumption I'm in the office block, looks around and identifies signs that match a kitchen. Then if I stepped out into the corridor it would test to see if the corridor matches the corridor in the existing data of the office block layout. If you keep moving around and repeat the sequence for long enough, you are able to uniquely identify where in the world you are using those images and simple mathematical algorithms."

Dr. Milford is going to present his paper "SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights" at the International Conference on Robotics and Automation in America later this year. This article was originally published at PSFK.
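A heavily simplified sketch of the sequence-matching idea behind SeqSLAM is given below: instead of trusting the single best image match, it scores short sequences of matches through a pairwise image-difference matrix. The descriptor (contrast-normalized thumbnails), the fixed slope, and all parameter values are assumptions for illustration; the published algorithm also applies local contrast enhancement to the difference matrix and searches over a range of trajectory slopes.

```python
import numpy as np

def normalize(img):
    """Contrast-normalize a small grayscale thumbnail."""
    img = np.asarray(img, dtype=float)
    return (img - img.mean()) / (img.std() + 1e-9)

def difference_matrix(ref_imgs, query_imgs):
    """Pairwise sum-of-absolute-differences between normalized thumbnails,
    the whole-image comparison that SeqSLAM-style matching relies on."""
    R = [normalize(im) for im in ref_imgs]
    Q = [normalize(im) for im in query_imgs]
    return np.array([[np.abs(q - r).sum() for r in R] for q in Q])

def best_match(D, seq_len=10):
    """Score straight-line sequences through the difference matrix and
    return the starting reference index of the best-matching sequence,
    rather than trusting any single-image match."""
    n_q, n_r = D.shape
    assert n_q >= seq_len
    scores = np.full(n_r, np.inf)
    for r0 in range(n_r - seq_len + 1):
        # Assume equal reference and query speed (slope 1); the full
        # algorithm searches over a range of slopes.
        scores[r0] = sum(D[q, r0 + q] for q in range(seq_len))
    return int(np.argmin(scores))
```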

ABSTRACT:
We are developing techniques for estimating the full three-dimensional motion of a vehicle from an image sequence acquired with an onboard video camera. The process makes no assumptions about either the type of motion or the shape of the environment. The outcome is the complete vehicle trajectory and the three-dimensional structure of the scene. [ICCV95]

MOTIVATION & AIMS:


The purpose of this project is to build a system for estimating the full three-dimensional trajectory of a moving vehicle from the image sequence acquired by an onboard video camera. The main applications are assisted and autonomous navigation. There are situations (outdoor navigation, navigation in towns) where a model is either outright impossible or impractical to build; therefore, navigation techniques must be developed that work well in unstructured environments. We present here experiments on a long sequence that is representative of typical indoor and city navigation.

RESEARCH:
We are experimenting with different algorithms for motion and structure estimation from visual input (known as structure-from-motion algorithms in the vision community). This page collects experimental results achieved on a long navigation sequence (4000 images). It includes: feature tracking results, 3D motion and trajectory estimates, and 3D structure reconstructions.
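As a modern, minimal sketch of the two-view building block of such a structure-from-motion pipeline (the original work predates today's libraries), the following Python/OpenCV fragment tracks corner features between consecutive grayscale frames and recovers the relative camera motion. The camera matrix K is a hypothetical calibration, and all parameter values are illustrative.

```python
import cv2
import numpy as np

# Hypothetical intrinsics; a real run needs the calibrated camera matrix.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def relative_motion(img_prev, img_next):
    """Track corners between two frames (8-bit grayscale) and recover the
    relative camera rotation and unit-scale translation, the building
    block of a structure-from-motion pipeline."""
    pts_prev = cv2.goodFeaturesToTrack(img_prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_next,
                                                   pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_next[good]
    # Two-view epipolar geometry with RANSAC outlier rejection.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t  # t has unit norm: monocular scale is unobservable
```

Chaining these relative motions across the sequence yields the camera trajectory up to a global scale factor, and triangulating the tracked features yields the 3D structure.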

Initial sequence (3985 images): mpeg movie (250 frames)

Feature tracking / tracked features: mpeg movies (250 frames)

Reconstructed 3D structure and camera trajectory: mpeg movies (250 frames) - Top view and Side view

ACHIEVEMENTS
We have demonstrated that automatically reconstructing the three-dimensional trajectory as well as the structure of the scene is feasible from purely visual (monocular) input, without using any prior information about the type of motion or the structure of the environment. However, further constraints may be enforced (for example, planarity of the motion). The next step is to use the computed vehicle motion and scene structure for autonomous navigation.

Online Reports:

Visual navigation using a single camera. ICCV'95 proceedings, Boston, USA. Download postscript (8 pages - 382K gzipped).

Motion from Points, Lines and P-Lines on N views. Technical Report, submitted to ECCV'96. Download postscript (31 pages - 208K gzipped).
