OpenCV on Raspberry Pi
No longer afraid of frying my Pi, I've moved on to trying to implement some of my bot goals. Like many, I want my bot to be able to interact with people, but I didn't expect to stumble on this ability so soon.
I've looked at many visual-processing boards, like the CMUcam v4, but I'm not paying $100 for any board. I looked into making one; it looks possible, but not much cheaper. So I got curious about the alternatives and stumbled on Hack-a-Day's recommended article: OpenCV on Raspberry Pi.
Anyway, he provided instructions for setting up OpenCV (open source computer vision) on the Raspberry Pi. About 20 minutes later, I had the code working on my Pi.
I had been skeptical of the Pi's ability to run any computer vision software, and moreover, of its usefulness given the Pi's processing constraints. But once I had it up and running, I noticed it actually ran smoother than I had hoped. Don't get me wrong, I think it is less than 10 FPS, but I could tell it would work for many robot applications. More than that, if the Raspberry Pi were used only for the computer vision, it would still be cheaper than many other hardware-driven CV boards.
Basic Raspberry Pi and WiFi Dongle
- WiFi Dongle: $6.17
- Raspberry Pi: $35.00
- SD Card (4g): $2.50
- Web cam: $8.00
- Total for Basic RPi: $51.67
So, I went to work on hacking his code.
Many hours later, I ended up with a very crude Raspberry Pi, Ardy, Camera, and Servo orchestration to track my face. Mind you, this is a proof of concept, nothing more at this point. But I hope to eventually have my bot wandering around looking for faces.
Image of Pi VNC. The box outline is being written through i2c.
Pulling apart a little $8 eBay camera.
To Set Up the Raspberry Pi:
If you're setting it up from scratch, start with these instructions.
But if you're already set up, I think all you need is OpenCV.
$ sudo apt-get install python-opencv
The Arduino code reads bytes from the i2c bus, converts them to characters, then places the characters into an integer array. The Pi sends 4 numbers, making up 2 coordinates: x1, y1, x2, y2.
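To make the conversion concrete, here's the receiving side's logic sketched in Python (the actual receiver is an Arduino i2c callback; this just illustrates the bytes-to-characters-to-integers steps, and the comma-separated framing is my assumption):

```python
# Illustration (in Python) of the parsing the Arduino performs.
# Assumes the Pi frames the four numbers as a comma-separated ASCII string.

def parse_coords(raw_bytes):
    """Convert received bytes to characters, then to a 4-integer list."""
    text = "".join(chr(b) for b in raw_bytes)   # bytes -> characters
    return [int(n) for n in text.split(",")]    # characters -> integers

# Example: the Pi sends "120,80,200,160" for x1, y1, x2, y2
coords = parse_coords(b"120,80,200,160")
# coords -> [120, 80, 200, 160]
```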
The Python code is "facetracker.py" by Roman Stanchak and James Bowman. I've merely added lines 101-105, which load the coordinates of the box around your face into a string and convert that to a string array. I also added the function txrx_i2c(), which converts the string array into bytes and sends them over the i2c bus.
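A minimal sketch of what a txrx_i2c() like this might look like, using the smbus module. The slave address (0x04) and bus number (1; early rev-1 boards use 0) are my assumptions, not values from the original code:

```python
# Sketch of sending the face-box coordinates over i2c with smbus.
# addr=0x04 and bus 1 are assumptions; adjust for your wiring and Pi revision.

def coords_to_bytes(x1, y1, x2, y2):
    """Pack the four corner values into a list of ASCII byte values."""
    return [ord(c) for c in "%d,%d,%d,%d" % (x1, y1, x2, y2)]

def txrx_i2c(x1, y1, x2, y2, addr=0x04):
    import smbus                      # imported here so packing is testable off-Pi
    bus = smbus.SMBus(1)              # i2c bus 1 on later Raspberry Pi models
    data = coords_to_bytes(x1, y1, x2, y2)
    # First byte serves as the "command" byte the smbus API requires
    bus.write_i2c_block_data(addr, data[0], data[1:])
```

Note that write_i2c_block_data() is limited to 32 data bytes, which is plenty for four small integers.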
To change this setup from i2c to UART, focus on txrx_i2c() in the Python code and onRead() in the Arduino code. I assure you, UART would be much easier.
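For instance, the UART version could be sketched with pyserial like this. The port name and baud rate are assumptions (on the Pi, the on-board UART shows up as /dev/ttyAMA0):

```python
# Hedged sketch of a UART replacement for txrx_i2c(), using pyserial.
# Port and baud are assumptions; match them to your Arduino sketch.

def coords_message(x1, y1, x2, y2):
    """Frame the four coordinates as a newline-terminated ASCII line."""
    return ("%d,%d,%d,%d\n" % (x1, y1, x2, y2)).encode("ascii")

def tx_uart(x1, y1, x2, y2, port="/dev/ttyAMA0", baud=9600):
    import serial                     # pyserial; imported here so framing is testable off-Pi
    with serial.Serial(port, baud, timeout=1) as ser:
        ser.write(coords_message(x1, y1, x2, y2))
```

On the Arduino side you'd then just read up to the newline and split on commas, with no i2c callback juggling.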
If anyone has any questions, holler at me. Oh! And if someone can tell me ways I could optimize this code, I'm all ears.
The Arduino Code:
Now for the Python Code: