Let's Make Robots!

Object tracking / chasing robot

Uses OpenCV to track and chase a coloured object

Using OpenCV running on a completely underpowered laptop to identify and track an object. For this proof of concept, a blue ball is tracked. I send calculated servo positions to the Arduino via serial comms over USB for servo movement. When the object is lost from the camera frame, the robot will begin a search pattern using the pan/tilt camera mount (this is not yet implemented, and probably won't be until after uni exams). Upon finding the object, it will move towards it.

Robot movement/obstacle detection will be handled by Arduino as well. The laptop will determine object position relative to pan/tilt position and then move robot towards that location by sending commands to Arduino via serial. I have ordered tracks and chassis. I already have a Tamiya dual motor gearbox and motor shield for Arduino ready to be hooked up.

I have plans to maybe in the future change the object tracking from a blue ball to face / back-of-head tracking and have it follow me (or chase the kids or dog) around the house. That is a bit of a way off though. I want to get the movement happening first.

I created this object-tracking portion of the project as a proof of concept using the completely underpowered and slightly damaged, but mostly working, Celeron 900 netbook I had lying around with Linux on it, and it performs reasonably well. I have plans to replace the laptop with a dual-core ARM Pandaboard and a suitable power source to increase framerates, but funds are very limited at the moment. I'm hoping to source an old Core 2 Duo laptop from a work colleague, but I fear it may be too large for the chassis to tow around.

 

Update 09/03/14

Implemented camera search functions during holidays a couple of weeks ago.

The most exciting thing was testing the Ardumoto shield this evening. The test was extremely basic and simply involved manually typing the control string to the Arduino, where it was interpreted. In reality, the computer will construct this string and write it to the Arduino over USB. This does work, but there is no movement logic at the moment, and it isn't exactly mobile.
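The "type a string, Arduino interprets it" idea might look something like the sketch below. The command layout here ("P&lt;pan&gt;T&lt;tilt&gt;M&lt;left&gt;,&lt;right&gt;") is entirely made up for illustration, not the project's actual protocol; on the Arduino the same parse would run over characters accumulated from Serial.read():

```cpp
#include <cstdio>

// Hypothetical control-string layout: "P<pan>T<tilt>M<left>,<right>"
// e.g. "P090T045M100,100" -> pan 90 deg, tilt 45 deg, both motors at 100.
// This format is invented for the sketch; the real protocol isn't shown here.
struct Command {
    int pan, tilt, left, right;
};

bool parseCommand(const char *s, Command &cmd) {
    // sscanf returns the number of fields successfully converted.
    return std::sscanf(s, "P%dT%dM%d,%d",
                       &cmd.pan, &cmd.tilt, &cmd.left, &cmd.right) == 4;
}
```

A fixed, easily parsed text format like this is handy early on precisely because it can be typed by hand into a serial monitor for testing.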

Movement logic will follow, and I've got a couple of IR distance sensors to add into the mix as well. I'll probably complete this stage after this semester, in a few months (if I want to pass everything).

I'm just about at the stage where I really need to start getting the cash together to buy a power supply and, most importantly, the Pandaboard for this project. This will enable mobility.

Then a bit of time to learn how to cross-compile what I've done so far, then throw everything into the chassis and see how it goes.

 

Update 17/04/14

Completely redesigned the chassis; it is now 3 levels. This was necessary to make more room for the battery pack, voltage regulators and everything else.

I gave up on the Pandaboard because apparently they don't make them any more. They tell me this after I waited a month for my order, which was delayed 3 times... so I ordered an ODROID-U3 and it should arrive next week!

 

Update 08/05/14

oDroid arrived and installed. Built my own kernel based on Linaro. Seems to work pretty well.

I am having trouble getting NetBeans to build the project properly, so I am reduced to building some of the project in NetBeans and then finishing it off and linking on the oDroid itself. Fortunately this is not too hard. The problems appeared as soon as I decided to implement multithreading in its own class: NetBeans does not build the dependencies in the correct order, which produces a linker error.

The oDroid is fast. Previously, movement commands were pulsing to the Arduino, and I could tell this by the flashing of the I/O LED. Now it is processing so quickly that the LED appears to stay lit! I also reduced the resolution to 320x240 and it is performing very well.

The dramatic speed increase is attributed to a few things:

  • Each oDroid core is almost twice as fast as the single-core Celeron laptop I was previously using.
  • Reduced image resolution. (I could reduce it further, but there doesn't seem to be a need at this stage.)
  • No waitKey - there is no need for the main thread to wait for input to exit the program. The program is designed to run automatically from oDroid boot until power is removed.
  • Multiple threads are used for the image processing pipeline.

I have two worker threads, soon to expand to three or four. Currently one grabs frames from the camera, and the other processes them and writes the result. The main thread picks this up and sends it to the Arduino. I want to move that sending into a third thread, and add a fourth to listen for feedback from the Arduino. Currently there is no feedback, but for sensor integration there will need to be.
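The grab/process split described above, reduced to its bones with POSIX threads: one thread produces items, another consumes them through a mutex-guarded slot. Frame grabbing and OpenCV processing are replaced with trivial stand-ins (ints and a running sum) so the structure is self-contained; this is a sketch of the pattern, not the project's actual code:

```cpp
#include <pthread.h>

// Minimal single-slot hand-off between a producer ("grab") thread and a
// consumer ("process") thread, guarded by a mutex and condition variable.
struct Pipeline {
    pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
    int slot = -1;                 // stands in for the "current frame"
    bool full = false, done = false;
    long sum = 0;                  // stands in for the "processed result"
};

void *grabber(void *arg) {
    Pipeline *p = static_cast<Pipeline *>(arg);
    for (int frame = 1; frame <= 10; ++frame) {
        pthread_mutex_lock(&p->mtx);
        while (p->full) pthread_cond_wait(&p->cv, &p->mtx);
        p->slot = frame;           // "grab a frame"
        p->full = true;
        pthread_cond_signal(&p->cv);
        pthread_mutex_unlock(&p->mtx);
    }
    pthread_mutex_lock(&p->mtx);
    p->done = true;                // tell the consumer we're finished
    pthread_cond_signal(&p->cv);
    pthread_mutex_unlock(&p->mtx);
    return nullptr;
}

void *processor(void *arg) {
    Pipeline *p = static_cast<Pipeline *>(arg);
    for (;;) {
        pthread_mutex_lock(&p->mtx);
        while (!p->full && !p->done) pthread_cond_wait(&p->cv, &p->mtx);
        if (!p->full && p->done) { pthread_mutex_unlock(&p->mtx); return nullptr; }
        p->sum += p->slot;         // "process the frame"
        p->full = false;
        pthread_cond_signal(&p->cv);
        pthread_mutex_unlock(&p->mtx);
    }
}
```

A third thread writing to the Arduino, and a fourth listening for feedback, would each hang off the same kind of guarded slot or a small queue.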

Next step is working on the power system. I have sourced a battery and ordered voltage regulators for the electronics, motors and sensors. I am keen to connect the servos again and see how fast the camera movement is now.

I will integrate the sensors into the code as well, and work out some kind of feedback system, now that I have multithreading in C++ with POSIX threads pretty much worked out. This is planned to be done soon, but uni exams are around the corner so I will probably have to study my arse off. Working out the multithreading issues took a couple of days...

 

Update 19/05/14

My voltage converters arrived, so I took some time in between maths assignments to connect one up. I had a bit of trouble finding a power supply with enough current to drive the thing, but I eventually found a 3.4 A laptop power supply, which was enough to perform some basic testing with one servo connected.

The circuit has the potential to draw 10.5A, but it will generally be in the range of 4-6A if my rough calculations are anywhere close.

I am quite pleased with the result. The tracking jerkiness is a result of the tolerance I have built in - the ball can move almost to the edge of the frame before the servos are signalled to reposition the camera. The code itself tracks the ball close to realtime; I must be getting close to 30fps and a very stable image. I did need to reduce the resolution to 160x120 though, and I adjusted the "roundness" parameter of the HoughCircles function to be a bit more forgiving. The trade-off is more false positives, but I'll see how it goes once the battery arrives and it finally becomes mobile!
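The "move only when the ball nears the edge" tolerance amounts to a dead-zone check like the one below. The margin fraction is a guess at a plausible value, not the tolerance actually used here:

```cpp
// Dead-zone check: only reposition the camera when the target drifts
// within `margin` (a fraction of frame size) of the frame edge.
// The 0.15 default is illustrative, not the project's actual tolerance.
bool needsReposition(int x, int y, int frameW, int frameH, double margin = 0.15) {
    int mx = static_cast<int>(frameW * margin);
    int my = static_cast<int>(frameH * margin);
    return x < mx || x > frameW - mx || y < my || y > frameH - my;
}
```

A wider margin gives smoother-looking servos at the cost of letting the ball sit further off-centre before the camera follows.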


Hi,

It's quite impressive!
I am trying to build a bot similar to that only using Intel Galileo board.
Would you mind sharing some code with me?

Thanks,
Ziv 

What would you like to know?

There are three main components:

1) the OpenCV library performs the image processing and object detection, using Hough circles on masked video frames

2) serial comms send the commands to the Arduino, using libserial

3) the Arduino reads the command string and moves the servos and runs the motors accordingly.

Soon I will add obstacle detection and avoidance, which will involve feedback from the Arduino. At the moment the Arduino is not doing a great deal.

Since I am fairly new to OpenCV, I would like to see how you wrote the object detection and translated it into commands to move.
I guess numbers 1 & 2 are my choices.
Thanks! 

For us, having the servos move smoothly was a challenge, since we can only get 16fps using OpenCV at a minimum resolution of 176x144. At that refresh rate it was not possible to track the object using the camera thread, so we decided to use the microcontroller thread instead to call the PID every 10ms. The COG is calculated every 60ms at the end of each frame, so there are 6 PID cycles per frame to update the servos, which also means less blur on the image. Here is the result:

http://youtu.be/IpGD7KBM4zA
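The fixed-tick PID idea described above boils down to an update like the following, called every 10ms while the setpoint (the detected COG) only refreshes once per camera frame. The gains are placeholders, not tuned values:

```cpp
// Incremental PID update, intended to run on a fixed microcontroller tick
// independent of the camera frame rate. Gains below are placeholders.
struct Pid {
    double kp, ki, kd;
    double integral = 0.0, prevErr = 0.0;

    double step(double setpoint, double measured, double dt) {
        double err = setpoint - measured;
        integral += err * dt;                 // accumulate for the I term
        double deriv = (err - prevErr) / dt;  // rate of change for the D term
        prevErr = err;
        return kp * err + ki * integral + kd * deriv;
    }
};
```

Running several PID ticks per frame lets the servos ease toward the last known target instead of jumping once per frame, which is where the reduced motion blur comes from.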

I would be happy with 15fps; at the moment I get about 3-5 with considerable lag on my setup, which is a Celeron 900 running Ubuntu 12.04 desktop.

The way I compensate for the lag so far is that instead of attempting to track in realtime, when the object is spotted, the servos are told to move straight to the position where it was last seen, making that the centre of the frame. This is done by calculating an offset angle relative to the camera's angle of view; I roughly calculated mine to be about +/- 11 and 8 degrees for horizontal and vertical. If the object has moved away by the time the camera gets there, hopefully it is still within the frame; if not, it goes looking.
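The offset-angle calculation mentioned above amounts to linearly scaling the pixel offset from the frame centre by the camera's half field of view. A minimal sketch, using the rough +/- 11 and 8 degree figures quoted:

```cpp
// Convert a detected pixel coordinate into a pan or tilt offset angle,
// using a linear approximation over the camera's field of view:
// frame centre -> 0 degrees, frame edge -> +/- halfFovDeg.
double offsetAngle(int pixel, int frameSize, double halfFovDeg) {
    double frac = (pixel - frameSize / 2.0) / (frameSize / 2.0);
    return frac * halfFovDeg;
}
```

At a 160x120 resolution, the horizontal offset would be `offsetAngle(x, 160, 11.0)` and the vertical `offsetAngle(y, 120, 8.0)`; the results are added to the current pan/tilt servo angles.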

The result is fairly jerky, but it matches what I see happening on the camera.

I'm hoping that the headless odroid running hardfloat ubuntu server will achieve better fps which should improve it a fair bit. (I compiled the kernel last week, odroid should arrive this week I hope.)

The other thing I could do is set up two threads: one simply tracks the object and populates the result, which is read by a second thread that calculates the servo positions.

It wouldn't be much faster, but it may free up some cycles to be used for the opencv side of things.

Why don't you create dynamic threads, where your second thread reads through each thread taking the highest priority (latest value), while your first thread reads the second thread?

At the moment I have 2 worker threads, but they are not dynamic. I have overloaded virtual functions for each thread, which gets around the problems of using classes quite nicely.

I have two classes: one performs the serial functions and the other the image processing. Each has members that need to be accessible from the threads. Static functions won't cut it in this situation apparently, so I have to use function pointers in my threads.
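One common pattern for giving a pthread access to class members (which sounds close to what's described here, though the project's exact approach isn't shown) is a trampoline that smuggles `this` through pthread's `void*` argument and forwards into a member function:

```cpp
#include <pthread.h>

// pthread_create takes a plain function pointer, so a static trampoline
// receives `this` via the void* argument and forwards to a member
// function that has full access to the object's state.
class Worker {
public:
    int result = 0;

    bool start() {
        return pthread_create(&tid_, nullptr, &Worker::trampoline, this) == 0;
    }
    void join() { pthread_join(tid_, nullptr); }

private:
    pthread_t tid_;

    static void *trampoline(void *self) {
        static_cast<Worker *>(self)->run();
        return nullptr;
    }
    void run() { result = 42; }  // the actual thread body (placeholder work)
};
```

The serial class and the image-processing class can each carry their own trampoline like this, so the loop-with-exit-condition lives inside the class it belongs to.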

It is fairly basic threading: a loop with an exit condition. I have worked with threads before in C#/.NET (I'm a software developer irl), but POSIX threading is something I learned in 2-3 days.

I'll take a look at dynamic threads and see whether they can help.

What's the baud of your serial?

Are you reading (Data into array) or (ReadLine)?

How many CPU cores in your laptop? (Threading Question)

What Dev board are you using?

Do you have experience with multi-pipeline core hardware bandwidth & error crunching?

Serial is 9600 baud. There is probably a limit to how much can be sent, but I haven't reached it yet, or at least I can't tell if I have at this stage.

Data is being read into a string; I use libserial for the comms, which seems to like to use strings. On the Arduino I use a char array to manipulate the data received.

4 cores in the oDroid-U3, 1 core in the laptop.

Err, no... maybe soon I will need to learn about it.

You're also using an Arduino. An mbed LPC1768 is much faster at processing; I suggest you check that out: https://mbed.org/platforms/mbed-LPC1768/

Use the highest baud rate possible with the mbed.

The single core in the laptop is your biggest bottleneck if that is what is tracking the camera.

Multi-pipeline: like a Y pattern, you have 2 inputs (or more) and 1 output (or fewer). If you had two mbeds getting the data from your laptop, and one output mbed determining which has the latest data chunk and using that chunk (for the highest refresh rate the laptop software can provide), the output mbed would discard any data that is behind the timeline (not the latest data) - that's the error crunching.

 

I hope this helps, although it's only a suggestion for improvement. Your bot is pretty good :)