Nov 30, 2017

#codevember 2017

I tried to do #codevember this year, where you are supposed to do one “code sketch” every day for the whole month. I guess the idea is, as with all kinds of “everydays”, that by doing something often you get better at it.

I didn’t really like that format; it was a bit too stressful having it hanging over me every night. I think I’m more of a “make one sketch every month and then polish/iterate for 30 days” person.

Anyway, it was fun getting back to WebGL and Three.js a bit; I’ve been into Unity for over a year now.

Here’s a page with all sketches collected.

There is also a youtube video here.

May 16, 2013

Md5 to json converter


Here’s a tool to convert *.md5mesh and *.md5anim files to the JSON format (3.1) used by three.js. The MD5 format is from id Tech 4 (Doom 3, Quake 4, Wolfenstein and more).

It’s skeletal animation (skinning); the earlier MD2 format, for example, is vertex animation (morphTargets).
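As a quick side note on how the skinning data works: an MD5 file stores, per joint, a position and only the x/y/z of a unit orientation quaternion (w is reconstructed, assuming a non-positive scalar part), and each vertex is a weighted sum of weight positions transformed by their joints. A rough sketch of that math in JS (an illustration of the format, not the converter’s actual code):

```javascript
// MD5 stores only x, y, z of each joint's unit quaternion;
// w is reconstructed with a non-positive sign.
function computeW(x, y, z) {
  var t = 1.0 - x * x - y * y - z * z;
  return t < 0 ? 0 : -Math.sqrt(t);
}

// Rotate a point by a unit quaternion q: p' = p + w*t + q.xyz × t,
// where t = 2 * (q.xyz × p). (Standard optimized quaternion rotation.)
function rotate(q, p) {
  var tx = 2 * (q.y * p.z - q.z * p.y);
  var ty = 2 * (q.z * p.x - q.x * p.z);
  var tz = 2 * (q.x * p.y - q.y * p.x);
  return {
    x: p.x + q.w * tx + (q.y * tz - q.z * ty),
    y: p.y + q.w * ty + (q.z * tx - q.x * tz),
    z: p.z + q.w * tz + (q.x * ty - q.y * tx)
  };
}

// A skinned vertex is the bias-weighted sum of its weight positions,
// each transformed by the bind pose of its joint.
function skinVertex(weights, joints) {
  var out = { x: 0, y: 0, z: 0 };
  weights.forEach(function (wt) {
    var j = joints[wt.joint];
    var p = rotate(j.orient, wt.pos);
    out.x += (j.pos.x + p.x) * wt.bias;
    out.y += (j.pos.y + p.y) * wt.bias;
    out.z += (j.pos.z + p.z) * wt.bias;
  });
  return out;
}
```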

Here’s how you use it:
1. Drag and drop a *.md5mesh file.
2. Drag and drop a *.md5anim file.
3. Drag and drop a jpg/png for texture.
Any image whose filename contains ‘normal’ will be used as a normalMap, ‘bump’ as a bumpMap and ‘specular’ as a specularMap. If it contains none of these, it will be used as the diffuseMap. (This is for preview only and is not saved in the JSON.)

Then save the file.
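The filename rule above can be sketched roughly like this (an illustration of the behavior, not the tool’s actual code):

```javascript
// Decide which material slot a dropped image previews in, based on
// its filename: "normal" → normalMap, "bump" → bumpMap,
// "specular" → specularMap, anything else → the diffuse map.
function mapSlotForFilename(name) {
  var n = name.toLowerCase();
  if (n.indexOf("normal") !== -1) return "normalMap";
  if (n.indexOf("bump") !== -1) return "bumpMap";
  if (n.indexOf("specular") !== -1) return "specularMap";
  return "map"; // diffuse
}
```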

You can also tick “Lock rootbone”, which locks the root bone position to 0,0,0 and the rotation to 0,0,0.
There is also an option to only export the animation data.

I also posted a fairly simple example of how to load a model and some animations, see here. It also shows how you can position other objects relative to the bones.

A small note about materials. I made it so every “mesh” in the *.md5mesh gets its own material index. The idea is that you override the materials in the JSON file with your own materials, as the example above does.

Three.js does not currently support blending of animations (like interpolating between animations for smooth transitions); hopefully someone will add that in the future. It would be a great feature to have. :)
Edit (Feb 13 2014): Someone added basic support for blending to three.js a few months ago. Did a first test here, using the troll from the Hobbit project.

Jul 9, 2012

Build with Chrome

I worked on a really fun project earlier this spring. Thought I’d try to give some brief insight into part of the development process here, and some of the fun we’ve been having.

It can be described as a mash-up between Google Maps and LEGO®, where you can build LEGO on top of Maps. It’s called “Build with Chrome”. Here’s a YouTube video explaining it more visually.

We start out with what we call the “Discovery phase”. This is basically where each discipline tries to spot the biggest challenges in the project and starts exploring different options/paths/etc. Anything goes…
I saw two big challenges here. The first one was the builder: how do you make a 3D tool simple enough that normal people can use it? The second was how to display people’s builds, several at a time, without hitting too big performance issues.
We started with the second challenge.

Having recently worked with particles, my mindset was set to that… So my initial idea was to treat the LEGO bricks as particles, or to have several particles make up a brick, etc. The idea is to use a sprite/particle for the smallest brick and so on: 1 vertex = 1 sprite. So this is not “real” 3D, but more isometric/orthographic, with no perspective. Here’s the first test, so you can see what I mean:

(Small note. There is a bug with gl_PointCoord in certain older ATI drivers (like OS X uses, for example) that flips the sprites vertically. I never bothered with a workaround, so if it looks weird for you, it’s probably that, and you should use the flipped versions.)
Isometric 1 (flipped)
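The sprite idea above boils down to a rotation, a tilt and a scale with no perspective divide; depth is only needed for sorting the sprites. A minimal sketch of that kind of projection (my own illustration, not the actual project code):

```javascript
// Orthographic/isometric-style projection for a point sprite:
// rotate around Y by the view angle, tilt around X, then drop the
// depth axis. No perspective divide — 1 vertex = 1 sprite.
function projectIso(p, angleY, tiltX, scale) {
  var ca = Math.cos(angleY), sa = Math.sin(angleY);
  var x = p.x * ca + p.z * sa;      // rotate around Y
  var z = -p.x * sa + p.z * ca;
  var ct = Math.cos(tiltX), st = Math.sin(tiltX);
  var y = p.y * ct - z * st;        // tilt around X
  var depth = p.y * st + z * ct;    // kept only for sprite sorting
  return { x: x * scale, y: y * scale, depth: depth };
}
```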

I wanted to get rotation in there. So we rendered out a PNG sequence of a 90-degree spin, and then update the sprite texture based on the viewing angle. Here’s an example of that:
(Mousedown and drag left/right)
Isometric 2 (flipped)

Tested snapping to predefined angles, as it looks a bit weird viewed from the front when there is no perspective.
(Mousedown and drag left/right)
Isometric 3 (flipped)

A first test using an actual dummy model.
(Mousedown and drag left/right to rotate, drag up/down to move camera)
Isometric 4 (flipped)

One of the early ideas was to have some sort of terrain variation, here’s a test of that:
(Mousedown and drag left/right to rotate, arrowkeys to pan)
Isometric 5 (flipped)

More dummy models testing:
(Mousedown and drag left/right to rotate)
Teapot (flipped)
Torusknot (flipped)
NK Logo (flipped)
NK Logo zoomed out

For comparing techniques
(Mousedown and drag left/right to rotate)
Isometric 8 (flipped)

A test to mix in real geometry and to not have sorting issues against the sprites.
(Mousedown and drag left/right to rotate)
Isometric 9 (flipped)

This was sort of a long-shot track. So at the same time Mikael Emtinger was working on the more classical, real 3D approach. But with a twist. He also saw the challenge in drawing potentially lots of geometry in a really optimized way, with as few draw calls as possible.
The idea is basically to construct the geometry in the shader. So the “geometry” is basically a cube that is defined as a uniform. The bricks are then “built up” in the shader from an attribute stream, with sizes, offsets and colors being sent in.
He started out with a texture stream, but converted to an attribute stream because of compatibility issues, keeping the idea of storing models in a bitmap: using the RGB channels’ pixel values for width, height, depth, x, y, z, color index, etc.
The pegs in this approach are done with “camera mapping”: basically a high-poly peg is rendered to an offscreen buffer at the current camera angle, and that is then used as a repeated texture. This is basically what is used in the “explore” mode to display models.
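To make the bitmap/attribute-stream idea concrete, here is a rough sketch of decoding bricks from such a byte stream and flattening them into a per-instance attribute stream for the shader. The 7-value layout here is made up for illustration; the real project’s layout may well have differed:

```javascript
// Hypothetical layout: 7 byte values per brick,
// [x, y, z, width, height, depth, colorIndex].
function decodeBricks(pixels) {
  var bricks = [];
  for (var i = 0; i + 6 < pixels.length; i += 7) {
    bricks.push({
      x: pixels[i], y: pixels[i + 1], z: pixels[i + 2],
      width: pixels[i + 3], height: pixels[i + 4], depth: pixels[i + 5],
      colorIndex: pixels[i + 6]
    });
  }
  return bricks;
}

// Flatten the bricks into one flat attribute stream: per instance an
// offset (x, y, z), a scale (w, h, d) and a color index. The vertex
// shader then scales/offsets a unit cube per brick.
function toAttributeStream(bricks) {
  var stream = new Float32Array(bricks.length * 7);
  bricks.forEach(function (b, i) {
    stream.set(
      [b.x, b.y, b.z, b.width, b.height, b.depth, b.colorIndex],
      i * 7
    );
  });
  return stream;
}
```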

There are drawbacks to each approach of course, but it’s always good to have different directions. The client eventually wanted the real 3D approach.
Kind of a relief in retrospect; not sure how we would have solved certain things with the isometric approach… ;)

So the next thing was to decide on a framework. Since we had Micke onboard and he has developed Glow, we decided to go with that, to be able to have an optimized render pipeline, since this is very much a single-purpose thing: rendering LEGO bricks.
Still, it needed a lot of behind-the-scenes development to get to a point where we front-end folks could make something nice with it.
So I started prototyping with my usual weapon of choice, Three.js, and by this time the other big challenge had been pushed onto my desk (#¤#!ers): the builder. I was a bit scared of this… but it was just to “bite the bull” or whatever you say… ;)
So we entered what we call the “Definition phase”, where the goal is to define what it is we should produce: functions, visuals, etc.

Started by trying to find inspiration in other similar tools, like voxel editors, etc. I wanted to find something really simple, as the target audience here was normal people, not tech-savvy geeks. I saw something interesting in 3d-paint, a voxel tool where you had a grid plane you could move around, so it basically became like drawing in 2D.
That was the first thing I tried.
Builder 1

The feeling of being able to “draw” bricks was kinda nice, but building something became quite weird.

So I tried the more classical approach, more on par with LEGO Digital Designer, which is really nice, but more advanced and more “CAD” than what we were aiming for here.
Builder 2

How it works is that when you point the mouse at a top surface, an invisible plane gets positioned on top of the brick, sized and positioned depending on the target brick and the “in hand” brick + rotation. The ray then hits that instead, so you can position at any peg. I also made it auto-stack upwards.
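The math behind that invisible plane is a standard ray/plane intersection followed by snapping to the peg grid. A minimal sketch (the grid size and return shape are my own choices here, not the project’s actual code):

```javascript
// Intersect a ray (origin + t * dir) with a horizontal plane at
// y = planeY, then snap the hit point to the peg grid.
function pickPeg(origin, dir, planeY, pegSize) {
  if (Math.abs(dir.y) < 1e-8) return null; // ray parallel to plane
  var t = (planeY - origin.y) / dir.y;
  if (t < 0) return null;                  // plane is behind the ray
  var hx = origin.x + dir.x * t;
  var hz = origin.z + dir.z * t;
  return {
    col: Math.floor(hx / pegSize),         // snapped peg column
    row: Math.floor(hz / pegSize)          // snapped peg row
  };
}
```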

An incremental version. Added preview pegs to the ghost bricks so you can more easily see where the brick will go.
Builder 3

One that tested autosaving with local storage.
Builder 4

A version with more bricks, an updated dummy GUI and also thin bricks.
Builder 5

A version with real pegs added. Also trying to figure out how to deal with tall builds, hence the y-slider in the top-right corner.
Builder 6

Added the sloped bricks.
Builder 7

Test with a maps texture. In-animation and more…
Builder 8

Real pegs on the baseplate.
Builder 9 (Also experimented with different thickness of the baseplate, 1 and 2)

Visual tweaks, trying to get the ADs/gfx people happy… I saw Mission Impossible 4 that night btw… just sayin’…
Builder 10

More functional things, like the predefined angles for example…
Builder 11
Also tested different sizes of the baseplate, like this. You can change the querystring there.

So now Micke had gotten to a point with his LEGO version of Glow, dubbed GLego, where it was time for me to stop prototyping and build it “for real”. The production phase. Which basically meant tearing everything down and starting from scratch, using GLego instead of Three.
I might add that I had a hard time explaining this to PMs, ADs, clients, etc… “But it’s working… we’re almost there!” Anyway…

I got up to speed pretty quickly, Micke had done a great job, and I constantly bugged him for new features/fixes, etc.
And here’s the first GLego version that I showed:
GLego Builder 0

An incremental one with loads of fixes after feedback, etc.
GLego Builder 1

And the last one using my horrible dummy GUI. Added environment mapping and more.
GLego Builder 2

From there it went into the App Engine version, where a real GUI was applied, along with bug fixes, small tweaks, changes and so on. Eventually it became what is on the live site.
Live Builder

This was LOADS of fun to work on and I’m quite alright with how it turned out, for once.
LEGO and WebGL in one project is a real nerd fiesta…

You can visit the full thing at: Build with Chrome
(Official Google blog post)

As always it’s a collaboration between many parties. We at North Kingdom. Agigen helped with both the backend and the HTML/CSS frontend. Mikael Emtinger did the WebGL framework, shaders, etc. Mark in Australia was the agency client, together with Google Chrome and LEGO. The clients were really cool btw (which is freaking rare…). ;)

And if you’ve read this far I can give you some pro tips for the Builder. You can change colors with the 0–9 keys on the keyboard. There is also a “special” hidden “my-little-pony-inspired” color (easter egg) available if you do a certain key combination (or go via the console). Rotating bricks is quickest done using the left/right arrow keys… And that’s about it!

Also, here are some various screenshots of WIP/test builds/failures/etc. (Click to enlarge them.)


Jun 28, 2012


Got a chance to play with a Kinect recently, I know… about 2 years after everyone else. :)
Anyway, I wanted to get started prototyping/testing really quickly, and lately I have come to think that the most accessible and quickest way is with JavaScript and 2D canvas or WebGL.
So I found KinectJS, which makes it really simple to get started. Just run the server and through a websocket you get access to the nodes, etc.
There are others of course, like as3NUI if you want to go with AS3. Or obviously the C# and C++ examples that come with the SDK.

Since this is really hard to show online, I have tried to screen-record some of the tests.

The first thing was just to try to get the nodes showing.
So this is just a canvas plotting the nodes as a skeleton:

The next thing I tried was some head tracking, moving a camera around accordingly to get some sort of “holographic” effect.
It’s filmed with a crappy mobile camera, sorry about that. But you should get the idea:

Then the most obvious geek thing: controlling a light saber with your arm:

Test to emit particles from your hands, feels good:

Draw some trails:

Always wondered what it feels like to be a flower :D

Then a test connecting some Box2D stuff to the nodes:

Jumped around a bit too much while testing that, broke my fucking lamp…

Then an attempt at making something more installation-like. The idea is to project this onto one wall of a room. When it detects a person, it lights up like a long corridor, extending the room. It then uses head tracking to change the camera position so the perspective is correct from that person’s point of view. That’s the theory anyway.
Also, to have some sort of interactivity and some reference points in the extended room, you can “throw” balls by moving your hands in Z above a certain velocity; the balls then “inherit” the velocity of the hand.
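The throw detection can be sketched as tracking the hand’s Z velocity between frames and spawning a ball once it passes a threshold; the ball inherits that velocity. The threshold and data shapes here are made up for illustration:

```javascript
// Returns an update function that, given a hand position and a
// timestamp (seconds), spawns a ball when the hand's Z velocity
// exceeds the threshold. Otherwise returns null.
function makeThrowDetector(threshold) {
  var prev = null;
  return function update(hand, time) {
    var ball = null;
    if (prev) {
      var dt = time - prev.time;
      var vz = dt > 0 ? (hand.z - prev.z) / dt : 0;
      if (vz > threshold) {
        // the ball "inherits" the hand's velocity at release
        ball = { x: hand.x, y: hand.y, z: hand.z, vz: vz };
      }
    }
    prev = { z: hand.z, time: time };
    return ball;
  };
}
```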

And a variation with an object you can spin around by moving your hands.

So I have mostly tested the Kinect as an input device (which is what it is… duh), but I haven’t touched the video and depth-map stuff.

And as an input device it certainly has some pros and cons. Doing precision stuff, like controlling something with lots of accuracy, is really hard. I guess that’s partly because there is no tactile feedback (a bit of the same problem touchscreens have, imo). As a result, control interfaces have to be quite forgiving (like this, for example)…
But on the other hand there is stuff that can feel really nice and responsive, like emitting particles or drawing and things like that. I guess you could say that creating or influencing something with motion in a void feels nicer than trying to control something with motion in a void. :)

In our business it certainly has potential for installations and similar stuff; I have obviously just scratched the surface here. It was lots of fun though!

Jan 30, 2012

Md2 to json converter

Tried to make a simple little “webapp” that converts MD2 models (the old Quake 2 model format) into the JSON format (version 3) used by three.js.

You simply open the page, then drag and drop a *.md2 file; it will be converted and a preview of the mesh should appear. You can then drag and drop jpg/png images for the texture to be previewed as well.
Left-click and drag to rotate, right-click and drag to move. Scroll wheel to zoom.
If it looks OK, enter a filename and hit “Save”.
It should now be downloaded (in Chrome at least; Firefox opens it in a new window).

You can then load the JSON files into your three.js projects and use them as morphTargets.
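As a reminder of what morphTargets means here: each animation frame is a full set of vertex positions, and playback just interpolates linearly between two frames. A minimal sketch of that interpolation:

```javascript
// Vertex animation (morph targets): frameA and frameB are flat arrays
// of vertex coordinates; t in [0, 1] blends between them.
function lerpMorph(frameA, frameB, t) {
  var out = new Array(frameA.length);
  for (var i = 0; i < frameA.length; i++) {
    out[i] = frameA[i] + (frameB[i] - frameA[i]) * t;
  }
  return out;
}
```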

MD2 to JSON Converter (a model to test with, if you don’t have one)

Example of converted model as morphTarget



I have only tested this in Chrome and Firefox, so I’m not sure if it works anywhere else. It was mostly done as an exercise for myself, to understand the format, drag and drop, file handling, etc. But it might be useful to someone else out there.

Edit (Mar 08, 2012): Looks like I had the exporter output a horizontally flipped model, should be fixed now. Thanks to @alteredq for letting me know.

Edit (May 11 2013): Updated it to export to the latest format(3.1).

Edit (Feb 10 2014): Fixed it, was broken due to some webgl changes in recent browsers.

Dec 29, 2010

Bye 2010

Another busy year has (almost) passed, and more projects are behind me. We have one site nominated at the FWA’s People’s Choice Awards 2010 this time; it’s July.
Things slowed down a bit towards the end of the year though. Great for me; I had time to play with some mobile/Android stuff, and later also try a bit of JS + canvas and some WebGL. I can really see the potential. Really fun to play with from a developer’s point of view.
I also actually ended up using Twitter; it’s a good work tool/channel to see trends and what people are up to, etc. You can “follow” me @oosmoxiecode.
This site is nearing 10 years without a remake/update… lol. I would really like to convert it to a WordPress solution… some day.

Have a great 2011 everyone!

