Archive for the '3D' Category

Sampling equirectangular textures

Suppose you want to do a 360 panorama demo using an equirectangular texture. The easy way would be to use a sphere mesh: with enough triangles the distortions are invisible, and the client is happy. The few of us who are dealing with “make me another matterport” bullshit are not so lucky: we have to project images onto complex meshes and, therefore, write the shaders.
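The core of such a shader is the standard direction-to-equirectangular mapping. Here is a minimal sketch of that math in plain JS (the same thing the fragment shader would do per pixel; the function name is mine, and the atan2 argument order depends on your engine's axis conventions):

```javascript
// Map a 3D view direction to equirectangular texture UVs in [0, 1].
// Standard longitude/latitude mapping; axis conventions vary between
// engines, so treat the atan2 argument order as an assumption.
function dirToEquirectUV(d) {
  const len = Math.hypot(d[0], d[1], d[2]);
  const x = d[0] / len, y = d[1] / len, z = d[2] / len;
  const u = 0.5 + Math.atan2(z, x) / (2 * Math.PI); // longitude -> [0, 1]
  const v = 0.5 + Math.asin(y) / Math.PI;           // latitude  -> [0, 1]
  return [u, v];
}
```

In GLSL this is the same two lines with `atan` and `asin`; the only real gotcha is the texture seam where `u` wraps from 1 back to 0.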


What are UV coordinates?

Isn’t it amazing how many people in the field do not understand UV coordinates? I thought so too, so back in 2011 I took two images from a Senocular article and made this animated GIF:


Well, I am just reposting it here now in case the Alternativa3D wiki currently hosting it goes down, like their forum did.

How to set object.matrix in three.js?

While all the top google hits for this problem revolve around messing with the matrixAutoUpdate property, I thought I’d share this simple trick:

object.matrix.copy (...); // or fromArray (...), etc
object.matrix.decompose (object.position, object.quaternion, object.scale);

This only really works with matrices that can be represented with three.js’ position/quaternion/scale model, but in practice that is the vast majority of cases. You’re welcome.

Using VideoTexture with Camera in AIR

These days I am trying to write a simple iOS app that takes the live camera stream and applies a custom shader to it, as my pet project. And, surprise, I am doing this with AIR. That’s right: even after I have finally come to terms with the fact that Flash is dead, and despite how superior GLSL is to AGAL as a shader language, I am still using AIR to build this app. Why? Because it works :) Unlike HTML5, a.k.a. the Flash killer: WebRTC is still not supported on iOS, and community polyfills are simply not enough to plug the gap. So the only other option would be ObjC or Swift, but I know neither yet, while with Flash I am at home (and can potentially port the app to Android later with little effort).

VideoTexture?

Ok, so we have simple access to the live camera feed, but how do we bring it to Stage3D (the Flash version of WebGL) to apply the shader? Traditionally, we had to draw the display list video object into a bitmap and re-upload it to the GPU every frame, which was costly. But quite recently Adobe gave us a more straightforward option: VideoTexture. A whole new runtime feature with an amusing history; however, Flash is still dead, so its documentation is crap. Simply following the NetStream-based snippets and changing attachNetStream to attachCamera like this

var videoTexture:VideoTexture = stage3D.context3D.createVideoTexture ();
videoTexture.attachCamera (camera);
videoTexture.addEventListener (Event.TEXTURE_READY, renderFrame);

did not work.

The missing piece?

There is a RENDER_STATE event that can tell you whether video decoding is GPU-accelerated (in the case of a live camera stream it is not, btw). It turns out you have to subscribe to this event even if you don’t care, or else the magic does not happen: the camera stream data will never reach the texture and, correspondingly, TEXTURE_READY events will never fire. You’d think this is stupid and can’t possibly be true, but then adding event listeners is exactly how you get the webcam activityLevel property to work, for example, so I am not surprised. With this, the final code that makes it work is

var videoTexture:VideoTexture = stage3D.context3D.createVideoTexture ();
videoTexture.attachCamera (camera);
videoTexture.addEventListener (VideoTextureEvent.RENDER_STATE, function (e:*):void {
	// yes, we want TEXTURE_READY events
});
videoTexture.addEventListener (Event.TEXTURE_READY, renderFrame);

On the alternative to gl_FrontFacing

Ok, this post is still about three.js and GLSL shaders, but unlike raytracing loads of cubes or relativity simulations, this one is something more down-to-earth. In this post, we shall discuss some possible alternatives to gl_FrontFacing in your WebGL apps…

Relativistic lattice

How would space look if you were going really fast? You can find some videos if you know the right keywords, but there is not enough interactive stuff, so here goes one:

development screenshot

Infinite cubes

Have you ever heard about raymarching distance fields? That’s how all those awesome shaders were made. A very simple and beautiful concept, but it comes with a price to pay: many, many iterations. Which means these shaders are slow. And even if your hardware is from 2015 and they are not slow for you, you will still notice. Here is a screenshot of this infinite cubes demo (3x magnification):
Exhibit 1: Gaps
Do you notice the gaps between the cubes? Maybe allow me to reduce the number of iterations (from 64 to 20) to make them more apparent (this time at 2x reduction):

Exhibit 2: Gaps

What happens here? I shall call this the “black hole effect” :) The rays that pass near the surface are slowed down to a crawl, and need more iterations to escape:

Exhibit 3: Iterations

See, there are so many iterations that I could not even be bothered to draw them all in this GIF :) The same thing happens in the shader: the ray never reaches its target.
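The effect is easy to reproduce outside the shader. Here is a minimal sphere tracing loop in plain JS (the same loop a fragment shader runs, just on the CPU so we can count iterations; the SDF here is a unit sphere, and the function names are mine, not the demo's):

```javascript
// Signed distance to a unit sphere at the origin.
function sdSphere(p) {
  return Math.hypot(p[0], p[1], p[2]) - 1.0;
}

// Classic sphere tracing: step along the ray by the distance-field value.
function raymarch(origin, dir, maxSteps = 64, eps = 1e-4, maxDist = 20) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
    const d = sdSphere(p);
    if (d < eps) return { hit: true, t: t, steps: i };          // close enough: hit
    t += d; // safe step: we can advance at most d without piercing the surface
    if (t > maxDist) return { hit: false, t: t, steps: i + 1 }; // ray escaped
  }
  return { hit: false, t: t, steps: maxSteps };                 // iterations ran out
}
```

A ray aimed straight at the sphere hits in a couple of steps, while a ray grazing the surface at distance 0.001 crawls through dozens of tiny steps and runs out of iterations, which is exactly the gaps you see in the exhibits above.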

Is there anything else we can do?

I was thinking that maybe the distance estimation should take the ray direction into account. If you can do that, your shader will get quite a boost. But for this particular shader, I could simply solve for intersections with the cube faces instead. More iterations bring more of them in, and I just keep the closest face the ray hits:
Exhibit 4: Iterations of my shader

As you can see, there is some “overdraw”, and some cubes are never complete, but it works faster than the classic method. Which makes me happy enough to write this post, because now I have enough performance to do the next step… to be continued ;)
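For an axis-aligned cube, those face intersections can be solved in closed form with the classic slab method. A plain-JS sketch of the idea (my reconstruction, not the demo's actual shader code):

```javascript
// Slab test: intersect a ray with an axis-aligned box and return the exact
// hit distance, so no marching toward the surface is needed.
function rayBox(origin, dir, boxMin, boxMax) {
  let tNear = -Infinity, tFar = Infinity;
  for (let axis = 0; axis < 3; axis++) {
    const inv = 1 / dir[axis]; // +-Infinity for axis-parallel rays is fine,
                               // as long as the origin is not on a slab plane
    const t1 = (boxMin[axis] - origin[axis]) * inv;
    const t2 = (boxMax[axis] - origin[axis]) * inv;
    tNear = Math.max(tNear, Math.min(t1, t2)); // latest entry across slabs
    tFar = Math.min(tFar, Math.max(t1, t2));   // earliest exit across slabs
  }
  // hit if the entry point is before the exit point and in front of the ray
  return tFar >= Math.max(tNear, 0) ? tNear : Infinity;
}
```

Keeping the smallest finite tNear over all candidate cubes gives the “closest face the ray hits” from above, with no iteration-count cutoff to produce gaps.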
