Let's Make Robots!

Using Multiple Similar Distance Sensors for Obstacle Avoidance and Approach

Related to my post this weekend about Robot Size and Methods of Obstacle Avoidance, but not limited to larger robots: does anyone's experience with multiple distance sensors (either US or IR) support concurrent use, or does it suggest that the complications of installation, mutual interference, and programming are not worth the addition? If the detection threshold is greater than the distance from the front extremity to the wheelbase center on a differential drive, how often does a robot get "boxed in" to vestibules or peninsulas of obstacles? What about when the sensors are used for approach, i.e., when you want the machine to stop at a given distance from a target for a reason other than avoiding it (like delivering a beer or confronting an intruder)? Is your answer different when the sensors are for ledge detection?

What about the orientation and number of multiple sensors? If you have two sensors each offset 45° from the direction of travel, do you gain anything with a third facing forward? Is the proper orientation for three at 90° intervals? What works best for various footprint shapes?

@Enigmerald, I fail to see how a 90-degree orientation of sonars on a bot eliminates the possibility of interference. I don't believe it does. You can still get various types of single- or multi-wall bounce if you have more than one sonar listening at the same moment. Interference can be eliminated by simply not firing the sonars at close to the same time, and having enough delay before the next one starts. This is what Maxbotix suggests when using multiple sonars. Without knowing whether your opponent was triggering their sonars at the same time or in sequence, and with what delay, I don't think you can know what "interference" they did or did not have to contend with. I suspect they fired them off sequentially and didn't have trouble because of that. However, they could still drive that bot straight into an outside corner (without seeing it at all) or into many different walls, depending on the angle of those walls to the sonar. I don't believe there is any magic to 90 degrees; it has more to do with the triggering and the shapes of the environment around the bot. I think your opponent was fortunate to be navigating a maze with 90-degree turns, and fortunate to keep their bot aligned with those turns so their sonar could function. The real world doesn't decide to line up with the direction from which your bot is approaching or your sonar is facing. I would bet the key to their success was software.
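
For illustration, a minimal Arduino-style sketch of that sequential triggering. The pin numbers, the 30 ms echo timeout, and the 50 ms settle delay are my assumptions, not anything Maxbotix specifies; the delay between pings is the whole trick, since it gives stray echoes from the previous sonar time to die out.

```
// Sequential triggering of three HC-SR04 sonars.
const int TRIG[3] = {2, 4, 6};
const int ECHO[3] = {3, 5, 7};
long rangeCm[3];

long pingCm(int trig, int echo) {
  digitalWrite(trig, LOW);  delayMicroseconds(2);
  digitalWrite(trig, HIGH); delayMicroseconds(10);
  digitalWrite(trig, LOW);
  long us = pulseIn(echo, HIGH, 30000UL);  // 30 ms timeout; 0 = no echo
  return us / 58;                          // HC-SR04: roughly 58 us per cm
}

void setup() {
  for (int i = 0; i < 3; i++) {
    pinMode(TRIG[i], OUTPUT);
    pinMode(ECHO[i], INPUT);
  }
}

void loop() {
  for (int i = 0; i < 3; i++) {
    rangeCm[i] = pingCm(TRIG[i], ECHO[i]);
    delay(50);  // let stray echoes die out before the next sonar fires
  }
}
```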

I will hypothesize a solution... I think a robot with only 1-3 fixed sonars that paused from time to time and took and REMEMBERED several readings as it rotated in place (say 22.5 degrees apart) could have a much more accurate view of the world and make it through the maze quite well by simulating a really good array. The key is software again... remembering a set of readings at different angles and forming a view of the world. If it were me, I would take several readings (say 5) at each given angle, throw out the outliers, and average the rest for each bearing. A compass or encoders would be helpful.
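
A minimal sketch of that filtering step. The takeReading() helper is a hypothetical stand-in for a single sonar ping, and the sort is hand-rolled so it runs on small MCUs without the STL; store the result per bearing and you have the remembered view of the world.

```
// Take n readings at one bearing, drop the min and max, average the rest.
long filteredRange(long (*takeReading)(), int n) {
  long r[9];
  if (n > 9) n = 9;
  for (int i = 0; i < n; i++) r[i] = takeReading();
  // simple insertion sort
  for (int i = 1; i < n; i++) {
    long v = r[i]; int j = i - 1;
    while (j >= 0 && r[j] > v) { r[j + 1] = r[j]; j--; }
    r[j + 1] = v;
  }
  long sum = 0;
  for (int i = 1; i < n - 1; i++) sum += r[i];  // outliers dropped
  return sum / (n - 2);
}
```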

Another failed experiment I tried... I put two sonars on the front of a bot facing directly forward (one on the extreme right, one on the extreme left), so that the bot could perceive a wall it was approaching and the angle of that wall to the robot. This worked up to a point, but there were several issues: 1. once again, sequential rather than simultaneous triggering is a must, and 2. it broke down as the angle got too great, when both signals would bounce away off the approaching wall and the robot would be blind. The solution... put sonars at different angles (or rotate a single sonar or sonar set), so that at least one sonar faces the wall more or less head-on and thus sees it. If I got lucky and 2 sonars saw it, some trig could be done to predict the wall's orientation. I found this "software wall prediction" not to be a good idea in home use, as my house is not all flat walls and right angles. When are 2 readings a single wall, and when are they a wall and a plant, or a plant and a chair? The robot does not know. That's why I added it to my list of "failed experiments".
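
For what it's worth, the trig is just one atan2, assuming both beams hit the same flat wall, which is exactly the assumption that fails in a cluttered house:

```
#include <math.h>

// Estimate the wall's angle from two forward-facing sonars a known
// baseline apart. Returns degrees: 0 = wall square to the robot,
// positive = wall angled toward the left sonar's side.
float wallAngleDeg(float leftCm, float rightCm, float baselineCm) {
  return atan2(leftCm - rightCm, baselineCm) * 180.0 / M_PI;
}
```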

The following is another list of best guesses based on past tinkering:  If 0 degrees is the centerline of the bot, I would do the following with cheap sonars.  I haven't tried the newer Maxbotix ones, so these numbers would change depending on the sonars used.

1)  If I had one, obviously it would go at 0, and hopefully be panned

2)  If I had 2 fixed sonars, I would put them at -22 and +22.  Not really great.  Better to move slowly and pan.

3)  If I had 3, I'd put 1 at 0, and the others at -30 and +30.

4)  If I had 4 and they were fixed, I'd put them at -15, -45, 15, 45 ish.  4 on a panning base could be quite interesting.

5)  I believe 12 is a good setup at 30 degrees apart. As a bot rotates or turns, it is still "balanced", as the same obstacles that disappear off some sonars will tend to show up on others. This works well for software algorithms (like the force field one) to sort through and keep a consistent picture of what to do. As sonars are only a few bucks, this added intelligence per dollar spent is quite effective to me. The fixed 12 setup also has the advantage that a bot can move through a cluttered space fluidly without ever having to stop, pan, or think. If anyone is contemplating doing anything like that, I'd be happy to donate code or help as I have others.
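
A sketch of one way to keep a fixed ring like that "balanced" in software: bin each reading by world bearing (sonar angle plus compass heading) rather than by sonar index, so obstacles stay put in the map as the bot turns. The compass, names, and sizes are my assumptions:

```
const int N = 12;
long worldRange[N];  // ranges binned by world bearing, 30-degree bins

void storeReading(int sonarIndex, long rangeCm, float headingDeg) {
  float bearing = headingDeg + sonarIndex * 30.0;  // sonar 0 faces forward
  while (bearing < 0)    bearing += 360.0;         // normalize to [0, 360)
  while (bearing >= 360) bearing -= 360.0;
  int bin = ((int)(bearing / 30.0 + 0.5)) % N;     // nearest 30-degree bin
  worldRange[bin] = rangeCm;
}
```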

I have wondered about putting 3 or 4 Sharp IR sensors on a turntable, 90 degrees apart, and panning it 90 degrees as fast as the sensors would allow, in 1-5 degree increments. It seems like a very detailed map could be built and maintained in a reasonable time. Anybody tried that?
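
Untested, but the loop seems straightforward, something like this sketch. The pins, the 2-degree step, the 40 ms settle time, and the rough GP2Y0A21 linearization are all assumptions:

```
#include <Servo.h>

// Four Sharp IR rangers 90 degrees apart on one servo, panned 0-90
// in small steps to cover a full 360.
Servo pan;
const int IR_PIN[4] = {A0, A1, A2, A3};
byte mapCm[360];                 // one range cell per degree, max 80 cm

int sharpToCm(int adc) {
  if (adc <= 80) return 80;      // beyond ~80 cm, clamp
  return 4800 / (adc - 20);      // rough GP2Y0A21 approximation
}

void setup() { pan.attach(9); }

void loop() {
  for (int a = 0; a <= 90; a += 2) {   // snaps back and repeats; a real
    pan.write(a);                      // sketch would sweep both ways
    delay(40);                         // servo travel + sensor update time
    for (int s = 0; s < 4; s++) {
      int bearing = (a + 90 * s) % 360;
      mapCm[bearing] = sharpToCm(analogRead(IR_PIN[s]));
    }
  }
}
```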

Regards,

Martin

Thanks for pointing that out, Martin. True, using sensors in that fashion doesn't completely preclude the possibility of interference unless the waves are fired at specific intervals, just as you mentioned. I totally agree with you. And yes, the surrounding course structure also wasn't too much of a problem. So they just programmed course-specific routines (I guess) to achieve favourable results. But I can't knock the way they oriented the sensors, because it achieved perfect results on the course. And maybe the software could simply be tuned well enough to make the robot perform in real-world situations too.

However, if we added some kind of barrier to keep each sensor's waves from "interfering" with the others', the readings could get a lot better and clearer. The other ideas you have presented are very interesting too. I need to try and experiment with a few.

Thanks for opening up this interesting discussion, Max. I have had some experience with ultrasonic sensors (cheap ones like the HC-SR04).

I did use multiple ultrasonic sensors on an explorer-type robot, with 3 HC-SR04s mounted at the front at 45 degrees from one another. It worked quite well in that case; however, the robot occasionally executed random routines. Once the interference got going, just as you mentioned in your post, there was not much I could change on the programming side to counter the strange behaviour.

I was working on this particular rover as part of a national contest that involved some hefty wall-following and obstacle avoidance. I didn't manage to do very well in the competition, as the motor drivers burnt out during the event, but that's another story. The slanted orientation also made it likely for the waves to bounce off sharp edges, deflecting the incident beam in a totally random direction. In other words, there were times when the robot would receive no reading at all, until it was too late to turn away from crashing into an obstacle.

The winning robot, on the other hand, taught me a lot. It aced the entire obstacle course (although it was as slow as a snail, oh wait, that's even faster) and not a single robot came close to giving it a run for its money. That particular robot used the same 3 ultrasonic sensors, but mounted at 90 degrees from each other at the front, ruling out the chances of interference between the ultrasonic waves. Here's a quick picture of that robot:

The 3 sensors, while not exactly at 90-degree intervals, aced the entire course. It didn't have to deal with interference or bizarre readings (straight and direct).

Here's an even better view of it wall following. It was a case of PID-based wall following. Haha! It went dead straight.
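
For anyone curious, a PID wall-follower of that sort can be surprisingly small. A hedged sketch: readSideCm() and setMotors() are hypothetical helpers for your own hardware, and the gains are placeholders to tune.

```
// Hypothetical helpers -- implement for your own hardware:
float readSideCm();                    // side-facing sonar distance, cm
void  setMotors(int left, int right);  // differential drive power
const int BASE_SPEED = 150;

float Kp = 2.0, Ki = 0.0, Kd = 8.0;    // placeholder gains
const float TARGET_CM = 20.0;          // desired distance from the wall
float integral = 0, lastErr = 0;

void followWallStep(float dtSec) {
  float err = readSideCm() - TARGET_CM;  // positive: drifting off the wall
  integral += err * dtSec;
  float deriv = (err - lastErr) / dtSec;
  lastErr = err;
  float steer = Kp * err + Ki * integral + Kd * deriv;
  setMotors(BASE_SPEED + (int)steer, BASE_SPEED - (int)steer);
}
```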

Here's a close-up view of the runner-up robot. It went at an even better speed, as it had better motors, and their sensor orientation wasn't bad at all.

They would have come close to beating the other robot. I don't exactly remember why they didn't, but perhaps they had ambient-lighting problems. (They had used some extra IR distance sensors, one with photodiodes, not shown in the picture above.)

To sum up, I think the best orientation would be those 3 HC-SR04s at 90-degree intervals, but the runner-up looked really sharp at times. It indeed boils down to the programming once the orientation part is done, doesn't it? (Especially in robot contests. You don't have competitors at home, but there's nothing wrong with having a bring-the-beer-from-the-fridge-without-hitting-anything contest :D )

Those photos belong to the respective participants and teams. The top one is from NCIT college and the bottom one is from Realtime Solutions. (I don't want to run into copyright issues, but hey, they should be thanking me for featuring them in a discussion on LMR :D )

This is why I love you guys. This thread started a year ago, but just today two of our most thoughtful contributors added meaningful, useful content to it! (Thanks Ashim & mtriplett!) LMR rules!

The responses so far are interesting, but I'm just a simple chap and have given up trying to use multiple US sensors, due to the problems of processing "rogue" echoes in enclosed areas with an Arduino board.

I've tried various sensor angles with 2- and 3-sensor arrays and a range of drive delays between sensors. I've not been able to find any "universally" satisfactory solution.

So… for me it is back to one US sensor (+/- one IR sensor) on a "pan" table driven by a servo.

The swivelling sensors also give the bot a satisfyingly geeky look, and for all but the fastest robots you can use a relatively slow scan, which looks great.

The other advantage of this approach is the very simple and small code compared to using multiple sensors.
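
To show how small: here's roughly all there is to it (pins, sweep limits, and timing are my assumptions).

```
#include <Servo.h>

// One HC-SR04 on a pan servo, swept in 30-degree steps.
Servo pan;
const int TRIG = 2, ECHO = 3;
long sweepCm[7];                 // readings at 0, 30, ..., 180 degrees

long pingCm() {
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  return pulseIn(ECHO, HIGH, 30000UL) / 58;  // 0 = no echo
}

void setup() {
  pan.attach(9);
  pinMode(TRIG, OUTPUT);
  pinMode(ECHO, INPUT);
}

void loop() {
  for (int i = 0; i < 7; i++) {  // slow, geeky-looking sweep
    pan.write(i * 30);
    delay(150);                  // servo travel + settle
    sweepCm[i] = pingCm();
  }
}
```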

A previous robot had multiple (6 or 8) 10-80cm rangers roughly equidistant around the perimeter. I used basic behavioral programming and was dipping my toe into force field algorithms.

I also had many sonars, both 05s and 08s.

I liked the 08s because they had an I2C interface with a command that would set them all off simultaneously and returned 32 bytes per sonar. This was meant to be fed into an ANN to get some sort of better reading. I never got that far, but if one actually knew how it worked, that mode might be good for beamforming.

And yes, I occasionally got stuck in places, but rarely.

I wish my brain could comprehend the beamforming page right now.

Here are a few of my experiences. I am a trial-and-error newbie on this, so take everything I say with a grain of salt. I try to highlight anything that is speculation.

Indoors, using cheap HC-SR04 sensors in congested spaces with a lot of walls, I found I could not operate multiple sonars triggered at the same time without interference ruining my data. (It is speculation, but in outdoor environments that have few obstacles, and where the bot is programmed to steer well clear of them, I think triggering the sonars concurrently could work.)

The rest of my observations involve running multiple sonars sequentially. Sonar "bounce" off walls was a huge problem: walls got stealthy at too great an angle. I spent many hours driving bots into walls at various angles (with older Maxbotix units and the SR04s). After much testing, I found that I could rarely if ever run into a wall if I angled the sonars 30 degrees apart when using the SR04s. I arranged them with one directly forward and the others 30 degrees apart to each side. Cloth obstacles like beds with comforters can usually be detected, but mistakes happen.

More speculation, but I question this choice of one directly forward. I wonder if having one at 15 degrees off centerline and another at -15, with others 30 degrees from those (45, -45, 75, -75), would have been better. The reason: if you get a reading directly ahead that is not seen by any of the others, which way do you turn?

The more sonars you have (Sonic Overkill!), the slower you can cycle through all of them. Speed and time of approach at various angles are important to think about. If a bot is driving straight forward, the approach speed to a wall directly in front and perpendicular to the movement will be much faster than to one nearly parallel to the direction of motion (like a wall the bot is driving next to that is angled 10 degrees towards it). Because of these varying approach speeds, one can play games with the triggering, firing forward-facing sonars more often than side-facing ones.
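
One simple way to play that game is a fixed schedule in which the forward sonar's index just appears more often. A sketch, with an illustrative index layout (0 = forward, 1/2 = 30 degrees, 3/4 = 60 degrees):

```
// Weighted round-robin: sonar 0 gets pinged three times as often.
const int SCHEDULE[] = {0, 1, 2, 0, 3, 4, 0, 1, 2};
const int SCHED_LEN  = sizeof(SCHEDULE) / sizeof(SCHEDULE[0]);
int slot = 0;

int nextSonar() {
  int s = SCHEDULE[slot];
  slot = (slot + 1) % SCHED_LEN;
  return s;  // ping this sonar next, then delay before the next slot
}
```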

I personally really like the force-field algorithm when using this setup. Once it is set up, and the attractive and repulsive forces are tuned to the bot's speed and turning ability, it is uncanny how effectively a robot can "see" obstacles at longer ranges and make gradual, graceful turns around multiple obstacles in congested environments. In this setup the bot is always driving curves, never totally straight lines, whenever an obstacle is within range. Note: I think you pretty much have to have a good compass on board to make this work, though.
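
A hedged sketch of the force-field computation, assuming the 12-sonar/30-degree layout from earlier; the constants are placeholders you would tune to the bot's speed and turning ability, and the compass supplies goalDeg:

```
#include <math.h>

const int   N_SONAR   = 12;
const float REPULSE_K = 400.0;   // tune to speed/turning ability
const float MAX_CM    = 150.0;   // ignore obstacles beyond this

// rangeCm[i] comes from sonar i at bearing i*30 degrees (robot frame);
// goalDeg is the compass-derived heading to the target, robot frame.
float steeringDeg(const float rangeCm[], float goalDeg) {
  float fx = cos(goalDeg * M_PI / 180.0);  // unit attractive force
  float fy = sin(goalDeg * M_PI / 180.0);
  for (int i = 0; i < N_SONAR; i++) {
    if (rangeCm[i] <= 0 || rangeCm[i] > MAX_CM) continue;
    float mag = REPULSE_K / (rangeCm[i] * rangeCm[i]);  // closer = stronger
    float b = i * 30.0 * M_PI / 180.0;
    fx -= mag * cos(b);                    // push away from the obstacle
    fy -= mag * sin(b);
  }
  return atan2(fy, fx) * 180.0 / M_PI;     // heading to steer toward
}
```

Steering toward the returned heading each cycle is what produces those continuous curved paths: the goal pulls, every in-range obstacle pushes, and the bot follows the sum.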

I also found the cheap sonars to be noisy and prone to bogus readings. There are probably better approaches, but the simple one that worked for me was to think of three "ellipses" of defense around the bot (inner, middle, and outer). I wrote code to tally the number of times in a row each sonar returned a reading that broke each ring. If any sonar broke any ring 3 times in a row, that ring was considered broken, and the max speed was reduced or the robot was stopped altogether (inner ring broken). This tallying process filtered out bogus data, and reducing the speed gave the sonars more time to run (increasing the tallies) before running into something. Conceptually, these ellipses would be made longer in front of the bot and narrower to the sides. This also provides time to cycle the front sonars. I spoke about "tuning the forces" with the FF algo; these ellipses apply there as well. It's a good idea to make the bot more sensitive to obstacles directly in front (duh) and to make the repulsive forces greater on sonars towards the front, especially if you are having trouble cycling the sonars fast enough to 1. get the tally and 2. steer or stop the bot in time to avoid a collision. My apologies if this sounds confusing, but I think it will make sense to someone trying it firsthand.
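
A sketch of that tallying scheme. The thresholds are illustrative stand-ins for the ellipses (a proper ellipse would vary the threshold smoothly with each sonar's bearing, longer ahead and narrower to the sides), and the 5-sonar layout matches the one-forward/30-degrees-apart arrangement above:

```
const int N_SONAR = 5;
const int N_RING  = 3;                   // 0 = inner, 1 = middle, 2 = outer
// Per-sonar ring radii in cm; front sonars get longer thresholds.
const int RING_CM[N_SONAR][N_RING] = {
  {25, 50, 90},   // forward
  {20, 40, 70},   // +30 degrees
  {20, 40, 70},   // -30 degrees
  {15, 30, 50},   // +60 degrees
  {15, 30, 50},   // -60 degrees
};
int tally[N_SONAR][N_RING];

// Returns the innermost ring currently broken, or -1 for all clear.
int updateRings(int sonar, long rangeCm) {
  int broken = -1;
  for (int r = 0; r < N_RING; r++) {
    if (rangeCm > 0 && rangeCm < RING_CM[sonar][r]) {
      if (++tally[sonar][r] >= 3 && broken < 0) broken = r;  // 3 in a row
    } else {
      tally[sonar][r] = 0;               // streak broken, reset the tally
    }
  }
  return broken;  // caller slows for ring 1 or 2, stops for ring 0
}
```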

In the end, I think cheap sonars can do a good job.  If you're not trying to pan a sonar around continuously, and all you want to do is avoid hitting stuff and don't mind the bot driving in stupid paths, then 1-3 sonars might do.  If you want to move in congested environments on curved paths that don't look stupid, then larger arrays (4-12) deserve consideration, unless you know how to do beamforming I suppose.

Cheers,

Martin

This is a technique used in large systems, like ships and submarines, as well as other places.  Not easy, not for the faint of heart, but if you want to push the limits of hobby robotics...

http://en.wikipedia.org/wiki/Beamforming

Essentially, beamforming creates a 3D image from two or more signals, or a single signal picked up from different places, much like your eyes produce a 3D image from two 2D images. Conceptually, it is pretty simple. In the real world, be prepared for some serious number crunching and fancy signal-processing algorithms. I suspect a lot of the work could be done using something like Octave http://www.gnu.org/software/octave/ but it still wouldn't be easy. If I ever get the time I plan to do some work on this myself. Maybe in about 50 years. Oh well.
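
To make the idea concrete, a toy delay-and-sum beamformer for just two receivers: shift one sampled signal by the delay a given steering angle implies, sum, and measure power. Every number here is illustrative, and real systems add windowing, many elements, and far fancier processing.

```
#include <math.h>

const float SPEED_OF_SOUND = 343.0;   // m/s
const float SPACING_M      = 0.05;    // element spacing
const float SAMPLE_HZ      = 40000.0;

// x0, x1: n samples from the two receivers.
float beamPower(const float* x0, const float* x1, int n, float steerDeg) {
  float delaySec = SPACING_M * sin(steerDeg * M_PI / 180.0) / SPEED_OF_SOUND;
  int shift = (int)lround(delaySec * SAMPLE_HZ);  // delay in whole samples
  float power = 0;
  for (int i = 0; i < n; i++) {
    int j = i + shift;
    if (j < 0 || j >= n) continue;
    float y = x0[i] + x1[j];          // delay-and-sum
    power += y * y;
  }
  return power;  // peaks when steerDeg matches the source direction
}
```

Sweep steerDeg across the arc and the angle with the highest power is your best guess at where the echo came from.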

I can't even conceptualize that at the "for jocks" level.  I want it to be what bats and Daredevil do, but I'm not sure that's what I read.  It has already been a humbling enough day...

Thanks for the pointer, Proto! Looks like good theory.