Hough transform experiment part 1

There are times when you post an image like this

[image: color-coded edge angles – see the filter below]

to twitter, and everyone is like «wtf is that shit», and then you know it requires some explanation, but you just can't be arsed. Well, it's a «long time no post» situation tonight, so here goes that explanation.

An experiment

This post is about a true experiment, as in: I have no idea if I am doing things right, or whether it will end up the way I want. When you write a piece of code knowing perfectly well what you're doing, that may be a coding exercise or – if you like that more – an act of artistic self-expression, but not an experiment. This time it is, and that's why this «part 1» thing is up there in the title. So be warned: here be dragons.

Hough transform in flash?

Yeah. Hough transform. If you did not already know what it is, and did not click the wiki link, stop reading right here – I'm not going to explain it. If you did, you should see that the picture above doesn't look like a Hough transform at all, so what does it have to do with it? Let's see: if you want to do a Hough transform (in fact, any image transform) in flash, you have two options:

  1. Write a big-ass AS3 loop, optimize the hell out of it, rewrite it with turbo diesel, azoth or, finally, alchemy, only to find that it still runs too slow;
  2. Write a bunch of PBKs (Pixel Bender kernels) that may work a bit better, but will burn your CPU while doing so.

This experiment takes route number 2. But how can we calculate a Hough transform in pixel bender? A kernel loops over destination image pixels; to calculate one such pixel from scratch you'd have to sample tons of pixels along the corresponding line in the original – and that's just stupid. The only thing we can realistically calculate and save to the destination image is the best line parameters for each original image pixel. In this case, however, we still need to loop over the destination image to find the aggregated values. What? Loop? Sigh, if only there was some native method to do the aggregation step for us…
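Just so we are on the same page, that aggregation loop we want to avoid would be something like this in AS3 (a sketch; decodeBin () is a made-up helper, not real code from the demo):

const BINS:int = 360; // whatever resolution we are after
var counts:Vector.<Number> = new Vector.<Number> (BINS, true);
for (var y:int = 0; y < result.height; y++)
    for (var x:int = 0; x < result.width; x++)
        // decodeBin () is hypothetical: maps a pixel color back to a bin index
        counts [decodeBin (result.getPixel (x, y))]++;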

Well, guess what? There is! Meet BitmapData's histogram(). This method gives you a (not so) obvious option to aggregate numbers in the image, as long as only one primary color is used at a time. In other words, whatever information you store, you have only 768 possible values for every pixel (three color channels × 256 histogram bins each). That's not a lot at all; encoding both angle and distance together in that range would give us a terrible resolution of only about √768 ≈ 28 values for each of angle and distance. On the other hand, if we deal with angles alone, we have potentially sub-degree resolution right away (768 bins over 180°). Then, in «part 2» of this experiment, we could try and extract distance information separately for the particular angles we want.
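To see what that buys us, here is a minimal AS3 sketch (all names are mine) that turns the filtered image – produced by the kernel below – into 768 angle bins:

// «angles» is assumed to hold the output of the first pass filter below, where
// each edge pixel stores its angle in exactly one of the r/g/b channels
var hist:Vector.<Vector.<Number>> = angles.histogram ();
var bins:Vector.<Number> = new Vector.<Number> (768, true);
for (var c:int = 0; c < 3; c++) {          // 0 = red, 1 = green, 2 = blue
    for (var i:int = 0; i < 256; i++) {
        bins [c * 256 + i] = hist [c][i];  // combined bin index
    }
}
// bin (c * 256 + i) counts pixels with edge angle of roughly (c + i / 255) * 60°;
// bins 0, 256 and 512 also count black below-threshold pixels, so ignore them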

So, finding angles we are…

Fortunately, you don't have to invent anything here – Scharr did it for us. With his kernel, our first pass filter could look like this:

<languageVersion : 1.0;>

kernel HoughAngles
<
    namespace : "makc3d"; // metadata values are arbitrary, put your own here
    vendor : "makc3d";
    version : 1;
>
{
    parameter float threshold
    <
        minValue: 0.0; maxValue: 9.0;
        defaultValue: 0.3;
    >;

    input image4 src;
    output pixel4 dst;

    void evaluatePixel ()
    {
        float2 at = outCoord ();
        pixel4 res = pixel4 (0.0, 0.0, 0.0, 1.0);
        // 3x3 Scharr operator, weights 3 : 10 : 3, scaled here to 0.3 : 1.0 : 0.3
        pixel4 ne = sampleNearest (src, float2 (at.x + 1.0, at.y - 1.0));
        pixel4 se = sampleNearest (src, float2 (at.x + 1.0, at.y + 1.0));
        pixel4 sw = sampleNearest (src, float2 (at.x - 1.0, at.y + 1.0));
        pixel4 nw = sampleNearest (src, float2 (at.x - 1.0, at.y - 1.0));
        pixel4 dx =
            sampleNearest (src, float2 (at.x + 1.0, at.y)) - // e
            sampleNearest (src, float2 (at.x - 1.0, at.y)) + // w
            0.3 * (ne + se - nw - sw);
        pixel4 dy =
            sampleNearest (src, float2 (at.x, at.y + 1.0)) - // s
            sampleNearest (src, float2 (at.x, at.y - 1.0)) + // n
            0.3 * (se + sw - ne - nw);
        // combined gradient
        float2 d = float2 (
            dx.r + dx.g + dx.b,
            dy.r + dy.g + dy.b
        );
        // threshold
        if (length (d) > threshold) {
            // 3/PI: map the atan () range -PI..PI to -3..3...
            float a = 0.954929658551372 * atan (d.y, d.x);
            // ...then fold opposite directions together, so a is in (0, 3]
            if (a <= 0.0) a += 3.0;
            if (a < 1.0) {
                // 0 to PI/3 go to red
                res.r = a;
            } else {
                if (a < 2.0) {
                    // PI/3 to 2PI/3 go to green
                    res.g = a - 1.0;
                } else {
                    // 2PI/3 to PI go to blue
                    res.b = a - 2.0;
                }
            }
        }

        dst = res;
    }
}
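For completeness, running the compiled kernel from AS3 could look something like this (a sketch, assuming the kernel is compiled to hough.pbj – none of this is the actual demo code):

// needs flash.display.*, flash.filters.ShaderFilter,
// flash.geom.Point and flash.utils.ByteArray
[Embed (source = "hough.pbj", mimeType = "application/octet-stream")]
private var HoughKernel:Class; // the kernel above, compiled to .pbj

private function findAngles (src:BitmapData):BitmapData {
    var shader:Shader = new Shader (new HoughKernel () as ByteArray);
    shader.data.threshold.value = [0.3];
    var angles:BitmapData = new BitmapData (src.width, src.height, false, 0);
    angles.applyFilter (src, src.rect, new Point (), new ShaderFilter (shader));
    return angles;
}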

And this is exactly what you see in the picture above – color-coded edge angles. What remains is proper clustering of the histogram() output. I'm too lazy to write k-means just for that, so for now I just pick local maxima. Overall, this toy seems to pick up dominant angles correctly, so there are good chances for «part 2» to follow soon – stay tuned.
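In case you wonder, «pick local maxima» is as dumb as it sounds – something like this sketch (the minimum count cut-off is made up):

// pick bins that beat both neighbors and a minimum count;
// indices wrap around, because the angle itself does
function pickPeaks (bins:Vector.<Number>, minCount:Number):Vector.<int> {
    var peaks:Vector.<int> = new Vector.<int> ();
    for (var b:int = 0; b < 768; b++) {
        if (bins [b] > minCount &&
            bins [b] > bins [(b + 767) % 768] &&
            bins [b] >= bins [(b + 1) % 768]) {
            // angle in degrees: (int (b / 256) + (b % 256) / 255) * 60
            peaks.push (b);
        }
    }
    return peaks;
}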

To be continued…

4 Responses to “Hough transform experiment part 1”


  1. Thy – October 5, 2010 at 20:21

    Cool! I guess..
    im confused, trying the demo :p

    but thats cool man.
    I guess.

    • makc3d – October 6, 2010 at 00:28

      Ha, thanks. To give you an example of what this is about, “part 1” alone could be immediately used for making a virtual steering wheel without FLARToolKit (you save $1300 :), but this is not something I am looking to do with this experiment, so the demo is bare bones, just to make sure it works.

  2. nicoptere – October 23, 2010 at 02:27

    why is there no sound?


  1. Trackback: Hough transform thing continues… « ¿noʎ ǝɹɐ ʍoɥ 'ʎǝɥ – October 8, 2010 at 01:51
