
Dear Dr. Cottrell,

This summer, I worked with Yufei and Kevin to program our TurtleBot to move until it spotted an object and then classify that object as new or old, and we accomplished that goal by the summer's end. Although I hoped at the beginning of the summer that we would do more than this one task, we encountered many problems along the way, stemming from errors in the robot's operating system, poor documentation, and our lack of knowledge of ROS. There was a time during the summer when I wanted to scrap the TurtleBot project and instead work on the model or a more theory-intensive project, and I probably would have learned more by going that route. With that said, I do feel that I learned a lesson in persistence by working with the TurtleBot this summer.

I also feel that I have become a much faster and less error-prone programmer this summer with the copious amount of coding I did. I learned to work with the TurtleBot and ROS, which I may find myself using again, as ROS is the most popular robot operating system. This was the first time I worked with a robot, and although it was not the most enjoyable experience, I am still interested in robotics, and I don't think the TurtleBot will be the last robot I work with.

What I did this summer was capture the depth and RGB data from the Kinect and create a binary mask of the depth data, where one corresponds to a point on an object within some range of distances from the TurtleBot and zero corresponds to a background point. We then set up Caffe's convolutional neural network to accept images multiplied by the mask and trained an sklearn SVM to classify the eighth-hidden-layer feature data from Caffe. I coded up a second SVM whose input is the difference between the maximum of the probabilities from the first SVM and the average of the other probabilities, and which classifies the object as new or old: a new object likely has a small difference, while an old object likely has a large difference. After training our program on several classes, it seemed to classify new and old objects quite well; however, we did not formally test our program.
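The pipeline above can be sketched in Python. This is a minimal sketch, not our actual code: the depth map and the "CNN features" below are random stand-ins for the real Kinect frames and Caffe activations, the distance range and class counts are invented for illustration, and a fixed threshold stands in for the second SVM's decision.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Step 1: binary mask from a (stand-in) depth map, in meters.
depth = rng.uniform(0.2, 4.0, size=(48, 64))
near, far = 0.5, 1.5                                   # assumed object range
mask = ((depth > near) & (depth < far)).astype(float)  # 1 = object, 0 = background

# Step 2: per-class SVM over (stand-in) CNN feature vectors.
# The real pipeline multiplied RGB images by the mask, ran them through
# Caffe, and used the eighth hidden layer's activations as features.
n_classes, n_per_class, dim = 5, 30, 16
X = rng.normal(size=(n_classes * n_per_class, dim))
y = np.repeat(np.arange(n_classes), n_per_class)
X += y[:, None] * 2.0                                  # make the fake classes separable
clf = SVC(probability=True).fit(X, y)

# Step 3: new-vs-old score = max probability minus the mean of the others.
def prob_gap(probs):
    top = probs.max(axis=1)
    rest = (probs.sum(axis=1) - top) / (probs.shape[1] - 1)
    return top - rest

gaps = prob_gap(clf.predict_proba(X))
# The second SVM consumed this gap; a fixed threshold stands in here.
is_old = gaps > 0.5   # large gap -> confidently one known class -> "old"
```

A large gap means the first SVM put most of its probability mass on a single known class, which is exactly the signature of an already-seen object.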

Yufei was very nice and helpful, and she helped us get through some of our most difficult problems. She taught us about convolutional neural networks, SVMs, AdaBoost, and a little about Bayesian decision theory. She also gave me some advice about research and graduate school.

I do have an idea for how the program could be improved. I feel that there should be several smaller projects that last one to two weeks and grow in complexity, rather than one large project, because I think this would encourage interest and participation. Each week the interns could learn about one or two new machine learning algorithms and then implement them on an interesting problem without a programming library. For example, the first week could be spent learning about k-means and implementing it to classify something, then logistic regression; the second week could be spent using gradient descent to estimate some function, and then AdaBoost, neural networks, Bayesian networks, convolution, and so on. Finally, the last three weeks could be dedicated to a project of the intern's choice, to be presented in the last week. I think that a schedule such as this would be more conducive to learning and creativity than the one we followed.
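To illustrate the kind of library-free exercise suggested above, here is one possible plain-Python k-means, using only lists and loops; the toy data and the farthest-point initialization are my own choices for the sketch, not part of any assigned curriculum.

```python
def d2(p, q):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    # Deterministic init: first point, then repeatedly the farthest point.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: d2(p, centers[i]))].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two obvious blobs, around (0, 0) and (10, 10).
pts = [(0.1, 0.2), (0.2, 0.0), (-0.1, 0.1),
       (10.1, 9.9), (9.8, 10.2), (10.0, 10.0)]
centers = sorted(kmeans(pts, 2))
```

An exercise at this scale fits comfortably in a week and still exercises the full assign-then-update loop that the library versions hide.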

I appreciate the opportunity to work in your office this summer. I am fascinated by artificial intelligence, and the conversations and research topics here have assured me that this is the field I want to pursue in college and beyond.

Thank you very much,

Evan Phibbs
