Dan Newcome on technology

I'm bringing cyber back

Archive for September 2010

Observations about Javascript performance

leave a comment »

I just posted my js1k entry, and while I was doing the final rain dance of getting things down to 1024 bytes and testing in all of the major browsers (with the notable exception of IE), I noticed a few things about performance in the different browsers.

1) Javascript object allocation in Firefox is slower than in the other browsers. My fractal rendering code uses my own little complex number class, which is just a tiny object with two numbers as members. Here is the code:

function Complex( re, im ) {
	this.re = re;
	this.im = im;
}

This looks like no big deal, and in Google Chrome, Opera, and Safari it wasn’t. In Firefox, however, creating a new instance of this type was a big performance hit. Operations like complex multiplication create and return a new instance of the complex number type, and since we perform these operations hundreds of thousands of times in the course of rendering a single image, you can imagine this can be a major drag. Here is a sample of the calling code:

function ComplexMult( a, b ) {
	return new Complex(
		( a.re * b.re ) - ( a.im * b.im ),
		( a.re * b.im ) + ( a.im * b.re )
	);
}

Performance is much better if we rewrite the code to reuse one of the arguments as the return value, but I couldn’t do this in every case, since sometimes I needed the unmodified argument later in the calling function. I’m curious whether preallocating a few temp instances and passing them in as out-parameters would help. I haven’t had the time to experiment with it, but something like this could work:

function ComplexMult( a, b, ret ) {
    ret.re = ( a.re * b.re ) - ( a.im * b.im );
    ret.im = ( a.re * b.im ) + ( a.im * b.re );
    return ret;
}

This makes the calling code kind of ugly but maybe it would be worth the performance gain in Firefox.
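One way the calling code might look with a preallocated scratch value — a sketch only, and the names here are mine rather than from the actual entry:

```javascript
// Sketch of the out-parameter idea: the caller owns one scratch
// Complex and reuses it, so the hot loop allocates nothing.
function Complex( re, im ) {
	this.re = re;
	this.im = im;
}

function ComplexMult( a, b, ret ) {
	// note: ret must not alias a or b, or a.re gets clobbered
	// before it is read for the imaginary part
	ret.re = ( a.re * b.re ) - ( a.im * b.im );
	ret.im = ( a.re * b.im ) + ( a.im * b.re );
	return ret;
}

var tmp = new Complex( 0, 0 ); // preallocated once, outside the loop

// (1 + 2i) * (3 + 4i) = -5 + 10i
ComplexMult( new Complex( 1, 2 ), new Complex( 3, 4 ), tmp );
```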

2) Reducing object member access really does make an impact if it happens enough times in a loop. The color cycling code in my js1k submission accesses the image data and increments the color values. Here is what the code looked like before:

function cycle() {
	for( var i=0; i < imgd.data.length; i+=4 ) {
		imgd.data[ i ] = ( imgd.data[ i ] + colorstep ) % 256;
		imgd.data[ i+1 ] = ( imgd.data[ i+1 ] - colorstep ) % 256;
	}
}

Removing all of the ‘.data’ lookups gave me a pretty nice speed boost when rendering the next color-cycled frame:

function cycle() {
	// save the result of the property lookup in 'idd'
	var idd = imgd.data;
	for( var i=0; i < idd.length; i+=4 ) {
		idd[ i ] = ( idd[ i ] + colorstep ) % 256;
		idd[ i+1 ] = ( idd[ i+1 ] - colorstep ) % 256;
	}
}

I’ll confess that I optimized this accidentally while trying to get my code down to 1024 bytes. ‘idd’ is quite a bit shorter than ‘imgd.data’!

3) Use the canvas ImageData API when working with pixel data. This should seem obvious, but when I was first rendering images, I was using the drawing API to render single-pixel rectangles with context.strokeRect():

// slow
function drawPoint( x, y, context ) {
	context.strokeStyle = "black";
	context.strokeRect( x, y, 1, 1 );
}

I don’t have the profiler stats handy, but filling up an ImageData object and writing it to the canvas was much faster:

// fast
function drawPoint( x, y, imgdata ) {
	var index = ( x + y * 500 ) * 4; // 500 is the canvas width
	imgdata[ index ] = 0;
	imgdata[ index+1 ] = 0;
	imgdata[ index+2 ] = 0;
	imgdata[ index+3 ] = 255;
}
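For reference, the same fast path can be sketched against a plain Uint8ClampedArray standing in for imgd.data, so the indexing can be checked without a canvas. In the browser the buffer comes from context.getImageData() and is flushed once per frame with context.putImageData():

```javascript
var WIDTH = 500; // the code above hardcodes a 500-pixel-wide canvas
var data = new Uint8ClampedArray( WIDTH * WIDTH * 4 );

function drawPoint( x, y, data ) {
	var index = ( x + y * WIDTH ) * 4;
	data[ index ]   = 0;   // R
	data[ index+1 ] = 0;   // G
	data[ index+2 ] = 0;   // B
	data[ index+3 ] = 255; // A -- fully opaque
}

drawPoint( 10, 2, data );
// browser-side: context.putImageData( imgd, 0, 0 ) once per frame
```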

I just wanted to put a few thoughts out there while they were fresh in my mind. Hopefully this helps out any would-be Javascript demo writers.

Written by newcome

September 7, 2010 at 8:25 am

Posted in Uncategorized

Javascript minimization

leave a comment »

I recently had to cram nearly 3k of Javascript code down into 1024 bytes for submission to the js1k competition. Initially I didn’t think that I’d get there, but I did. Once I stripped out all comments and log statements I still had over 2k of code. Here are a few things that helped me out:

1) Run your code through Douglas Crockford’s JSMin first and see how far you have to go. JSMin is not particularly aggressive, but it is fairly strict and shouldn’t break your code. I kept this tool at the ready to run as I progressed to see how my changes affected the output size. The first run got me to about 1.5k, so I knew I was getting into the ballpark of feasibility. I still had nearly half a kilobyte to go though.

2) Inline functions that are only called once. Getting rid of the word ‘function’ and the associated braces saves 10 characters, and at the call site we save at least 3 more, assuming a single-character function name. The savings grow with the argument count, since keeping the function means the argument list appears twice in the code: once at the call site and once in the function definition.
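A sketch of the savings, with a hypothetical once-called helper (‘mag’ is my example, not from the actual entry):

```javascript
var z = { re: 3, im: 4 };

// Before: 'mag' is defined and then called exactly once.
function mag( c ) { return c.re * c.re + c.im * c.im; }
var before = mag( z );

// After: inlined at the single call site -- the 'function' keyword,
// the braces, and the call itself all disappear.
var after = z.re * z.re + z.im * z.im;
```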

3) Remove unused branches. This seems obvious, but in many cases I’m reusing code that I have written for other projects, and it isn’t always evident that there is unused code in the form of an ‘else’ statement that is never reached. For a 1k compo, it is worth it to comment out variables and statements to make sure they have the effect that you think they do. I didn’t even have to run the code in most cases. Just thinking about getting rid of a line was enough to get me to think through whether it was necessary.

4) Shorten variable names and member accessors. Library functions like document.getElementById can be shortened considerably by creating a short alias like ‘gid’, for example. User-defined functions and variable names can be shortened to single characters.
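The aliasing trick might look like this — ‘gid’ mirrors the alias named above, and the Math alias is there so the same idea can be seen running outside a browser:

```javascript
// 'gid' wraps document.getElementById in a function rather than
// aliasing it directly, since a bare reference would lose its
// 'document' receiver when called.
var gid = function ( i ) { return document.getElementById( i ); };

// Same trick on Math: pay for the alias once, then spend a couple
// of characters per use instead of ten or more.
var M = Math, f = M.floor;
var n = f( 3.7 ); // 3
```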

5) Create global variables. Sometimes this means just getting rid of the ‘var’ keyword when you know that your names won’t conflict or just reusing a variable name. In larger programs this is a recipe for disaster, but in a smaller program it isn’t so bad. Also, there is the case where we look something up in each function, a DOM element perhaps, and it makes sense to move it to the top of the code and make it a global with a short name.

6) Remove local variables. This is an obvious one. Local variables are typically introduced for readability’s sake, so jockeying several expressions into a larger, more complex expression can save a lot of space.
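A before/after sketch of folding a local away (the names ‘scale’, ‘px’, ‘w’, ‘cx’ are mine, chosen for illustration):

```javascript
var w = 8, cx = 2;

// Before: a local introduced purely for readability.
var scale = w / 4;
var px = cx * scale + w / 2;

// After: folded into the expression -- the whole 'var scale'
// statement disappears. (Only a win when the local is used once;
// repeating a long subexpression can cost more than the 'var' saves.)
var px2 = cx * w / 4 + w / 2;
```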

Wrapping things up, I’ll mention that extremely aggressive minimization is kind of like running a source-to-source optimizing compiler against your code. Once you have run out of tricks like removing all whitespace and shortening and aliasing variable names, the only real way to make gains is to write less code to accomplish the task, which means assuming the role of the compiler’s optimizer, looking for shortcuts and ways to avoid including redundant instructions in the output.

Written by newcome

September 7, 2010 at 7:33 am

Posted in Uncategorized