Previously on «Hough transform»: a brave lone coder decides to explore a random line detection algorithm; he splits the experiment into parts roughly corresponding to the Pixel Bender kernels he needs to write; he writes a post with many words that no one reads, and shares a demo that no one finds impressive.
Well, just so that you know, the experiment continues, and the demo is a bit better now:
As you can see, I added a second kernel, so we can now calculate and overlay complete lines, not just angles. There’s not much to comment on; all of this is pretty straightforward. The demo algorithm is PoC quality, yet it already runs in near real time.
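For reference, going from a per-pixel angle to a complete line only takes the normal-form parametrization ρ = x·cos θ + y·sin θ. A minimal CPU-side sketch of that conversion (plain Python for clarity; the function name is mine, this is not kernel code):

```python
import math

def line_from_pixel(x, y, theta):
    """Normal-form parameters (rho, theta) of the line through
    pixel (x, y) whose normal makes angle theta with the x-axis:
    rho = x*cos(theta) + y*sin(theta)."""
    rho = x * math.cos(theta) + y * math.sin(theta)
    return rho, theta
```

For example, `line_from_pixel(3, 4, 0)` gives `(3.0, 0)`, the vertical line x = 3 passing through the pixel.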
There are many small things I have learned, and continue to learn, while playing with this, but a detailed discussion of the results right now would be incomplete (and boring), so I will just leave this update notice here and move on to «part 3».
Soon after publishing this post, I looked into the difference between my approach and the actual Hough transform. The thing is, in HT every edge pixel votes for all possible lines passing through itself, represented by a single black curve in this image:
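In other words, classic HT traces a full sinusoid in the (θ, ρ) accumulator for every edge pixel. A rough sketch of that voting loop, again plain Python rather than anything resembling the actual kernels:

```python
import math

def hough_votes(edge_pixels, width, height, n_theta=180):
    """Classic Hough voting: each edge pixel votes for ALL
    (theta, rho) lines through it, tracing a sinusoidal curve
    in the accumulator -- one curve per pixel."""
    rho_max = math.ceil(math.hypot(width, height))  # largest possible |rho|
    # acc[theta_bin][rho_bin]; rho is shifted by rho_max to stay non-negative
    acc = [[0] * (2 * rho_max + 1) for _ in range(n_theta)]
    for x, y in edge_pixels:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[t][rho + rho_max] += 1
    return acc
```

Pixels lying on one real line produce sinusoids that all cross at one accumulator cell, which is exactly the intersection point visible in the image above.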
In the approach discussed here, however, every pixel votes only for its “best” line (the red dots), as calculated by the 1st kernel, which may not coincide with the actual line (the intersection point in the image). For this reason I tried decreasing the angular resolution of the 1st kernel, by as much as a factor of 9, thinking this would improve the chances of the actual line falling within a pixel’s voting range. This, however, didn’t improve the test swf at all. I am clearly missing something important; there’s more thinking to do.
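For contrast, my reading of the single-vote scheme looks roughly like this, with `n_theta` standing in for the 1st kernel’s angular resolution (a hypothetical CPU sketch, not the actual kernel):

```python
import math

def best_line_votes(pixels_with_angle, width, height, n_theta=180):
    """Single-vote variant: each edge pixel casts ONE vote, for the
    (theta, rho) of the "best" line its angle estimate implies -- a
    single red dot in the accumulator instead of a full sinusoid.
    Lowering n_theta widens the angular bins, which is the
    resolution experiment described above."""
    rho_max = math.ceil(math.hypot(width, height))
    acc = [[0] * (2 * rho_max + 1) for _ in range(n_theta)]
    for x, y, theta in pixels_with_angle:
        t = int(theta / math.pi * n_theta) % n_theta  # angular bin
        rho = round(x * math.cos(theta) + y * math.sin(theta))
        acc[t][rho + rho_max] += 1
    return acc
```

The catch is visible in the code: if the per-pixel angle estimate is off, the vote lands in the wrong cell entirely, and no amount of bin widening recovers the sinusoid’s guaranteed intersection.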
To be continued…