SparkFun Forums 

By gallamine
#113665
Howdy folks,

I was inspired by Ladyada's Open Kinect hacking bounty and decided to start one of my own :) I got to thinking and decided that the scanning lidar system on the Neato Robotics XV-11 vacuum needed to be hacked. So $300 (maybe $400 at this point) is up for grabs to the first person to post open source documentation and code for using the scanning lidar on a robotic system.

More details here:
http://robotbox.net/blog/gallamine/open ... 200-bounty

Tell your friends!
Happy hacking.
By SecretSquirrel
#113767
Hmm. So we have a single laser module and a single "camera". I'll assume it's a standard CMOS sensor, probably QCIF format, maybe VGA. How would I do it? Use an IR laser and remove the IR filter on the camera. Now, the trick: set the laser optics to have a fairly high beam divergence. The higher the divergence, the more accurate the distance measurement, but the shorter the maximum measurable distance. On the camera side, you would want the optics to give a very narrow field of view. Align the laser and the camera so they are on the same axis.

Now, to measure distance, you simply calculate how much of the frame is taken up by the laser dot. At its simplest: pick an intensity value high enough that only the very bright laser dot exceeds it. Anything above that is Vmax and anything below is 0. Average all the pixels across the frame and you get 0 <= V <= Vmax. V = 0 is an error condition; for V > 0 the relative distance is V/Vmax. The optics on the camera and laser set the maximum distance, represented by V/Vmax = 1.
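A minimal sketch of that averaging step, assuming an 8-bit grayscale frame as a NumPy array (the threshold value and the function name are illustrative, not from any actual device):

```python
import numpy as np

def relative_distance(frame, dot_threshold=200):
    # frame: 2-D array of 8-bit grayscale intensities (an assumption).
    # dot_threshold: cutoff above which a pixel counts as the bright
    # laser dot -- it would need tuning against the real sensor.
    vmax = 255
    # Binarize: pixels above the threshold become Vmax, the rest 0.
    binary = np.where(frame >= dot_threshold, vmax, 0)
    v = binary.mean()          # average over the whole frame: 0 <= V <= Vmax
    if v == 0:
        return None            # no dot found -- error condition
    # With diverging optics the dot grows with range, so a larger
    # fraction of bright pixels means a farther target;
    # V/Vmax = 1 corresponds to the maximum measurable distance.
    return v / vmax
```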

I don't have access to one of the devices, but looking at the parts in it, that's the first thing that came to mind. If you have a bit more processing power, you could account for the elongation of the laser spot when the angle of incidence is not 90 degrees. Likewise, you could look for a "circle-ish" spot in the center of the frame and adjust your intensity trigger point to account for different surface types and the drop in intensity as the beam diameter increases.
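A rough sketch of that shape check, under the same assumptions as above (grayscale frame, hand-tuned threshold; the names are illustrative):

```python
import numpy as np

def spot_metrics(frame, dot_threshold=200):
    # Coordinates of every pixel bright enough to be the laser dot.
    ys, xs = np.nonzero(frame >= dot_threshold)
    if xs.size == 0:
        return None                        # no spot in the frame
    cx, cy = xs.mean(), ys.mean()          # spot centroid
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    # width/height is ~1.0 for a circle-ish spot; it grows as the
    # angle of incidence moves away from 90 degrees.
    return cx, cy, width / height
```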

I will be curious to hear the eventual solution.
By SecretSquirrel
#113769
You could also, with a tight beam divergence, measure the distance from how far off center the beam dot sits: the farther off center, the closer the object. You only have to process one line of video, but the trade-off is that distance resolution is set by the resolution of the camera and by the distance of the object. The farther away the object, the larger the minimum distance step you can measure.
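In Python, the per-scanline step might look like this (treating the brightest pixel in the row as the dot center, which is a simplifying assumption):

```python
import numpy as np

def dot_offset(scanline):
    # scanline: one row of grayscale pixels from the camera.
    peak = int(np.argmax(scanline))    # brightest pixel = dot center
    center = len(scanline) // 2
    # Offset from image center in pixels; a larger offset means a
    # closer object when the laser is mounted beside the camera.
    return peak - center
```

One pixel of offset spans a bigger slice of range at long distances, which is exactly the resolution trade-off described above.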
By gallamine
#113783
SecretSquirrel,

You can find more about the sensor's operation in this talk:
http://cmusv.na6.acrobat.com/p69585299/ ... ode=normal

Or on this blog post:
http://www.hizook.com/blog/2009/12/20/u ... o-robotics

Or you can read the paper here:
http://ieeexplore.ieee.org/Xplore/login ... ision=-203

They are functionally doing the same thing as the Sharp IR rangefinders: a linear CCD array triangulating the returned light.
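For reference, that triangulation reduces to similar triangles: range falls off as the inverse of the dot's offset on the array. A sketch with made-up numbers (the focal length in pixels and the laser-to-detector baseline are assumptions, not the XV-11's actual geometry):

```python
def triangulate(px_offset, focal_px=700.0, baseline_m=0.05):
    # Classic 1/x triangulation used by Sharp-style rangefinders:
    # distance = focal_length_in_pixels * baseline / pixel_offset.
    if px_offset <= 0:
        return None    # dot not found, or beyond maximum range
    return focal_px * baseline_m / px_offset

# Example: a dot 10 pixels off center reads as
# 700 * 0.05 / 10 = 3.5 m under these assumed parameters.
```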
By SecretSquirrel
#113785
Ah, so they used my second guess: triangulation in 2D space. That explains the significant angular offset of the laser from the camera; at maximum range the laser dot needs to be at the far left of the frame for best resolution. Reading the paper, I would think that spending a little extra on the laser module, with optics that set the beam divergence so the spot size stays fixed in the image regardless of distance, would significantly reduce errors in locating the spot center at large distances. It probably doesn't matter for the use they had for it, though.
By gallamine
#114121
There have been some nice advances in the project:

1) RoboDynamics has offered $200 to the first person who uses the XV-11 sensor in a SLAM application, navigating and mapping between two points on a map.

2) There's a nice teardown writeup of the device on the SparkFun blog. Lots of people have been analyzing the data posted there: http://www.sparkfun.com/news/490

3) User Xevel on the Trossen Robotics forum has made a visualization of the data from the SparkFun post. http://forums.trossenrobotics.com/showt ... 470&page=6

We're now waiting to see if someone with the actual robot can plot the data and verify that we've interpreted it correctly.