UPDATE 27 Jun 2012:
The first one is of our new application for Google+ Hangouts.
It allows you to control a robot directly from a hangout while seeing and hearing as if you were where the robot is.
This way, you can not only talk to your friends but also follow them around their home, or visit and explore a new place together, anywhere in the world.
The second is of an event called Human 2.0, hosted by Ar, “the brainchild of a group of neuroscience students and researchers from the Champalimaud Neuroscience Programme”. The event was opened by our robot, which then let Adam Kampff speak from Harvard to the audience in Lisbon, Portugal, as you can see.
You can see more photos of the event here.
UPDATE 16 Apr 2012:
This time we did something different: we teamed up with the “Dona Estefânia” hospital and the “Pavilhão do Conhecimento” museum to let kids who were in the hospital visit and explore the museum with Magabots.
We used Google Chat on Gmail not only to transmit the video and sound between the computers but also to send the control messages.
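To give an idea of what driving a robot over a chat channel involves, here is a minimal sketch of mapping incoming chat text to wheel speeds. The command names, speed values, and `ChatControl` class are illustrative assumptions, not the actual Magabot protocol:

```java
import java.util.HashMap;
import java.util.Map;

public class ChatControl {
    // Left/right wheel speeds, -1.0 (full reverse) to 1.0 (full forward).
    // These particular commands and values are assumptions for illustration.
    static final Map<String, double[]> COMMANDS = new HashMap<>();
    static {
        COMMANDS.put("forward", new double[] {  1.0,  1.0 });
        COMMANDS.put("back",    new double[] { -1.0, -1.0 });
        COMMANDS.put("left",    new double[] { -0.5,  0.5 });
        COMMANDS.put("right",   new double[] {  0.5, -0.5 });
        COMMANDS.put("stop",    new double[] {  0.0,  0.0 });
    }

    /** Translate one incoming chat line into wheel speeds; null if it is not a command. */
    static double[] parse(String chatLine) {
        return COMMANDS.get(chatLine.trim().toLowerCase());
    }

    public static void main(String[] args) {
        double[] speeds = parse("FORWARD ");
        System.out.println(speeds[0] + "," + speeds[1]); // prints "1.0,1.0"
    }
}
```

Anything that is not a recognized command (ordinary conversation, for instance) simply returns null and is ignored, so the same chat channel can carry both the talking and the driving.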
The experiment was amazing! While they were somewhat reluctant and confused at first, once they started exploring the remote museum their faces were filled with wonder and joy. All of them, without exception, had a lot of fun, and we could see how much more at ease they were than before they tried it.
On the other hand, the kids who were at the museum seemed intrigued by the robots at first, but when they saw that there was a kid on the other side, they flocked around the robot and started talking to them.
Watch the video ;)
UPDATE 03 Feb 2012:
Uploaded a new video showing Speech Recognition and Skeletal Tracking.
I know that Magabot has already been posted here, but since this one has some modifications and is what I'll be using over the next few months, I thought it deserved its own Robot Page.
The main differences are the elevated structure and the 14 V, 5000 mAh battery that lets me power a Kinect on the robot.
After testing the robot with telepresence using Skype, I started developing what you can see in the video. The computer runs two different applications: one processes the Kinect data and controls the robot base, and the other is the talking head that appears on the screen. The Kinect application was written in C# using Microsoft's SDK, and it talks to the face, which was done in Processing, via TCP messages.
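The TCP link between the two applications can be sketched like this, in plain Java (Processing's own net library offers similar Server/Client classes). The message format, the `FaceLink` class, and the `"look"` command are assumptions for illustration, not the actual protocol on the robot; here a small thread stands in for the C# Kinect application and connects over loopback:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class FaceLink {
    /** Parse one newline-delimited message into its command word. */
    static String command(String message) {
        return message.trim().split("\\s+")[0].toUpperCase();
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port for the demo
            int port = server.getLocalPort();
            // Stand-in for the C# Kinect app: connect and send one message.
            Thread sender = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("look 0.3 -0.1"); // hypothetical message format
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            sender.start();
            // The face side accepts the connection and reads commands line by line.
            try (Socket kinectApp = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(kinectApp.getInputStream()))) {
                String line = in.readLine();
                System.out.println("face received: " + command(line)); // prints "face received: LOOK"
            }
            sender.join();
        }
    }
}
```

Keeping the two programs decoupled like this means the C# tracker and the Processing face only need to agree on a small text protocol, and either side can be swapped out or tested on its own.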
Hope you enjoy it! :D