
Modern self-driving cars generally use Bayesian simultaneous localization and mapping
(SLAM) algorithms, which fuse data from multiple sensors and an offline map into current
location estimates and map updates. SLAM with detection and tracking of moving objects
(DATMO) is a variant, developed by a researcher now at Google, which also handles the
detection and tracking of other moving objects such as cars and pedestrians. Simpler systems
may use roadside Real-Time Locating System (RTLS) beacons to aid localization. Typical
sensors include lidar, stereo vision, GPS, and an Inertial Measurement Unit (IMU). Visual
object recognition uses machine vision, including neural networks.
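The fusion step described above can be sketched with a minimal one-dimensional Kalman filter that blends an IMU-derived velocity (prediction) with noisy GPS position fixes (update). All numbers, noise variances, and function names here are illustrative assumptions, not a real vehicle stack:

```python
# Minimal 1-D Kalman filter sketch: fuse an IMU velocity estimate with
# noisy GPS position measurements. Values are assumed for illustration.

def kalman_step(x, p, u, z, q=0.5, r=4.0, dt=1.0):
    """One predict/update cycle.
    x, p : prior position estimate and its variance
    u    : velocity from the IMU (control input)
    z    : position measurement from GPS
    q, r : assumed process and measurement noise variances
    """
    # Predict: move the estimate by the IMU velocity, inflate uncertainty
    x_pred = x + u * dt
    p_pred = p + q
    # Update: blend in the GPS fix, weighted by the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Vehicle moving at roughly 1 m/s; GPS readings are noisy
x, p = 0.0, 1.0
for gps in [1.2, 1.8, 3.1, 4.0]:
    x, p = kalman_step(x, p, u=1.0, z=gps)
print(round(x, 2), round(p, 2))
```

Real SLAM systems extend this idea to full vehicle pose and a map, typically with extended Kalman filters or particle filters, but the predict/update structure is the same.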
More and more vehicles already boast features that assist drivers and reduce errors, including
blind-spot and lane-departure warnings, automatic emergency braking, lane-change
assistance, drowsiness alerts, and bird's-eye-view displays.

Laser Retro Reflector (LRR) - a passive optical device (an array of corner-cube reflectors) for
accurate satellite tracking from the ground (by laser ranging stations of the SLR network) to
support instrument data evaluation.

http://web.stanford.edu/class/cs224n/
http://www.nltk.org/

I have 12 years of proven analytical and strategic experience in AI, especially in the Defense &
Aerospace sector.

Developed an automated system for behaviour control and AI reasoning.

Programmed and executed code and algorithms for games.
Suggested new practices to enhance software development quality.
Implemented AI features and modifications in an existing codebase.
Participated in incorporating AI software for finding patterns.
Assisted with the integration of tools with ontologies.
Created data and simulation models using hub-and-spoke designs.
Formulated specifications for arbitrary schemas to support model consensus.
When combined with human cognitive abilities, machines become more agile and effective.
Take freestyle chess, for example: a computer runs through the possibilities and presents them
to the human, who then decides how to use them. What this shows is that about 70 percent of
the time, a human and a machine working together will routinely beat humans and machines
operating alone. That is what human-machine developers are shooting for. Human-machine
collaboration allows us to implement entirely new concepts of operation that are as yet
unknown, though the goal will always be the same: human-centered autonomy.

Deep learning, especially static image recognition, classification, and tagging

Path Planning and Obstacle Avoidance: if the UGV comes across an obstacle, the IR sensors
detect it from a distance.
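A hypothetical sketch of that IR-based obstacle check: the sensor names, the two-sensor layout, and the 30 cm threshold below are all assumptions for illustration, not details from the original system:

```python
# Hypothetical obstacle-avoidance decision from two forward-facing IR
# range sensors. The 30 cm safety threshold is an assumed value.

STOP_DISTANCE_CM = 30

def choose_action(left_ir_cm, right_ir_cm):
    """Pick a steering action from left/right IR range readings (cm)."""
    if min(left_ir_cm, right_ir_cm) > STOP_DISTANCE_CM:
        return "forward"       # path is clear on both sides
    if left_ir_cm > right_ir_cm:
        return "turn_left"     # obstacle is closer on the right
    return "turn_right"        # obstacle is closer on the left (or equal)

print(choose_action(100, 100))  # clear path
print(choose_action(20, 80))    # obstacle close on the left, steer right
```

A real path planner would replan a route around the obstacle rather than react greedily, but this shows the basic sense-then-decide loop.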
Speech Recognition: iOS Speech Recognition

PYTHON
One of the friendly things about Python is that it allows you to type directly into the interactive
interpreter — the program that will be running your Python programs. You can access the
Python interpreter using a simple graphical interface called the Interactive DeveLopment
Environment (IDLE).
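A first session at the interactive interpreter might look like the commented transcript below; the `>>>` prompt is printed by the interpreter itself, not typed by you. The same statements also run unchanged as a script:

```python
# At the interactive prompt you type an expression and the interpreter
# evaluates and prints it immediately:
#
# >>> 1 + 5 * 2 - 3
# 8
# >>> print("Monty Python")
# Monty Python

# The equivalent statements in a script (here print() is needed to see
# the value, since scripts do not echo expressions automatically):
print(1 + 5 * 2 - 3)       # arithmetic follows the usual precedence
print("Monty Python")
```
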
