Dan Newcome, blog

I'm bringing cyber back

Archive for June 2010

Code reading: Google Chrome Speed Dial


I’m thinking about starting a new series of posts where I write a few interesting details about some source code that I’ve read recently. From time to time I come across something unexpected or noteworthy, and in any case I put it in my notes — so why not post it up here?

On a lark, I did a ‘view source’ on the Google Chrome Speed Dial page that comes up when you first start up the browser. I figured that it was a web app, but I didn’t realize that they had so much custom code in there. For those of you who don’t use Chrome, the speed dial page is a (shameless) rip-off of the Opera speed dial page and looks something like the following:

I’m not going to go into a ton of detail here, but two interesting things jumped out within a minute of looking over the code. The first was that they have their own template engine, which the source comments say was inspired by “JsTemplates”; I can only assume that means the venerable Trimpath template engine.

/**
 * @fileoverview This is a simple template engine inspired by JsTemplates
 * optimized for i18n.
 *
 * It currently supports two handlers:
 *
 *   * i18n-content which sets the textContent of the element
 *
 *     <span i18n-content="myContent"></span>
 *     i18nTemplate.process(element, {'myContent': 'Content'});
 *
 *   * i18n-values is a list of attribute-value or property-value pairs.
 *     Properties are prefixed with a '.' and can contain nested properties.
 *
 *     <span i18n-values="title:myTitle;.style.fontSize:fontSize"></span>
 *     i18nTemplate.process(element, {
 *       'myTitle': 'Title',
 *       'fontSize': '13px'
 *     });
 */
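From the comment block alone it’s easy to imagine how the processor works. Here’s my own rough sketch of what such a handler might do; I’m modeling the element as a plain object with an `attributes` map so it runs outside the browser, and the real Chrome implementation certainly differs in its details:

```javascript
// Sketch of an i18n template processor in the spirit of the comment
// above. Elements are plain objects { attributes: {...}, children: [...] }
// rather than real DOM nodes -- an assumption made so this runs anywhere.
function processI18n(element, data) {
  // i18n-content: set the textContent of the element
  var content = element.attributes['i18n-content'];
  if (content && content in data) {
    element.textContent = data[content];
  }
  // i18n-values: semicolon-separated attribute:key or .prop.path:key pairs
  var values = element.attributes['i18n-values'];
  if (values) {
    values.split(';').forEach(function(pair) {
      var idx = pair.indexOf(':');
      var name = pair.slice(0, idx);
      var value = data[pair.slice(idx + 1)];
      if (name.charAt(0) === '.') {
        // Property path, e.g. ".style.fontSize" -- walk nested objects
        var path = name.slice(1).split('.');
        var obj = element;
        for (var i = 0; i < path.length - 1; i++) {
          obj = obj[path[i]] = obj[path[i]] || {};
        }
        obj[path[path.length - 1]] = value;
      } else {
        element.attributes[name] = value;
      }
    });
  }
  // Recurse into children
  (element.children || []).forEach(function(child) {
    processI18n(child, data);
  });
}
```

With a stand-in element like `{ attributes: { 'i18n-content': 'myContent' }, children: [] }`, calling `processI18n(el, { myContent: 'Content' })` fills in `el.textContent`, mirroring the documented examples.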

The second curious bit was a tiny HTML whitelist that walks the DOM client-side and strips everything but a select few elements and attributes. Whitelisting is usually done server-side for obvious security reasons, but this really illustrates how simple it can be when you have direct access to the DOM.

The list of allowed tags is specified as an array:

var allowedTags = ['A', 'B', 'STRONG'];

They walk the DOM using a simple recursive function:

 function walk(n, f) {
    f(n);
    for (var i = 0; i < n.childNodes.length; i++) {
      walk(n.childNodes[i], f);
    }
  }

Which is called like this:

// 'df' is defined elsewhere in the page; presumably a document fragment
walk(df, function(node) {
    switch (node.nodeType) {
      case Node.ELEMENT_NODE:
        assertElement(node);
        var attrs = node.attributes;
        for (var i = 0; i < attrs.length; i++) {
          assertAttribute(attrs[i], node);
        }
        break;
 
      case Node.COMMENT_NODE:
      case Node.DOCUMENT_FRAGMENT_NODE:
      case Node.TEXT_NODE:
        break;
 
      default:
        throw Error('Node type ' + node.nodeType + ' is not supported');
    }
  });

There is some supporting code such as the ‘assertElement()’ and ‘assertAttribute()’ functions, which I won’t repeat here, but overall it’s very simple.
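Just for illustration, here’s a guess at what those assertions might look like. To be clear, this is my own invention, not Chrome’s code; the `allowedAttributes` list and the error messages are assumptions:

```javascript
// Hypothetical sketch of assertElement()/assertAttribute(). The tag
// whitelist matches the array quoted above; the attribute whitelist
// is an assumption on my part, not taken from the Chrome source.
var allowedTags = ['A', 'B', 'STRONG'];
var allowedAttributes = ['href', 'title'];  // assumption

function assertElement(node) {
  // Reject any element whose tag isn't explicitly whitelisted
  if (allowedTags.indexOf(node.tagName) === -1) {
    throw Error('Tag ' + node.tagName + ' is not supported');
  }
}

function assertAttribute(attr, node) {
  // Reject any attribute that isn't explicitly whitelisted
  if (allowedAttributes.indexOf(attr.name) === -1) {
    throw Error('Attribute ' + attr.name + ' is not supported on ' + node.tagName);
  }
}
```

Paired with the `walk()` function above, this is the whole sanitizer: walk every node, throw on anything unexpected.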

They go on to implement their own drag-and-drop functionality that looks a lot like jQuery, but isn’t. Actually there are a few jQuery-looking constructs here but I don’t see any evidence of jQuery itself. Maybe Google doesn’t want to use it now that it is shipping with .NET?

As a bonus, I’ll note that the thumbnails are generated in a manual way from a chunk of HTML in the page. The template looks like this:

 <div id="most-visited"> 
    <a class="thumbnail-container filler" tabindex="1" id="t0"> 
      <div class="edit-mode-border"> 
        <div class="edit-bar"> 
          <div class="pin"></div> 
          <div class="spacer"></div> 
          <div class="remove"></div> 
        </div> 
        <span class="thumbnail-wrapper"> 
          <span class="thumbnail"></span> 
        </span> 
      </div> 
      <div class="title"> 
        <div></div> 
      </div> 
    </a> 
  </div> 

And the code looks like this:

 <script> 
    (function() {
      var el = $('most-visited');
      if (shownSections & Section.LIST) {
        el.className += ' list';
      } else if (!(shownSections & Section.THUMB)) {
        el.className += ' collapsed';
      }
 
      for (var i = 1; i < 8; i++) {
        el.appendChild(el.firstElementChild.cloneNode(true)).id = 't' + i;
      }
 
      applyMostVisitedRects();
    })();
  </script> 

As you can see from the line

        el.appendChild(el.firstElementChild.cloneNode(true)).id = 't' + i;

this is a pretty bare-metal approach to generating the page content. More power to them, but then, they have the luxury of knowing that they are only running in the Chrome browser environment.
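The clone-and-append trick is easy to pull out into a reusable helper. This is my own sketch, not Chrome’s code; `makeNode` is a minimal stand-in for a DOM element so the sketch can run outside the browser, but the helper itself would work unchanged on real nodes:

```javascript
// Generalized version of the clone loop from the Chrome page: clone
// the first child of a container N-1 times, giving each clone an id
// of the form 't<i>', just like the thumbnail code above.
function repeatTemplate(container, count) {
  for (var i = 1; i < count; i++) {
    var clone = container.firstElementChild.cloneNode(true);
    container.appendChild(clone).id = 't' + i;
  }
}

// Minimal stand-in for a DOM node (assumption: enough surface area for
// the helper above -- real DOM elements provide the same methods).
function makeNode(id) {
  var node = {
    id: id,
    childNodes: [],
    cloneNode: function(deep) { return makeNode(node.id); },
    appendChild: function(child) {
      node.childNodes.push(child);
      return child;  // appendChild returns the appended node, as in the DOM
    }
  };
  return node;
}
```

Running `repeatTemplate` on a container whose first child is the `t0` template produces clones `t1` through `t7`, exactly what the inline script does.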

Written by newcome

June 28, 2010 at 5:24 pm

Posted in Uncategorized

Declarative dynamic ASP.NET forms


A few days ago I wrote a post about literal data structures in .NET, with a vague allusion to using them for declarative configuration duties. To follow up, I’m going to show a little of what I had in mind. I’ve been spoiled over the last year or so working in Javascript, and I want that code-as-data feeling in my other work (.NET and MS CRM), so I looked at some routine chores like pulling together a contact-info form from declarative data. The code here is just a start, but it should illustrate the basic idea.

For starters, we want to describe our ASP.NET form in a general way using the C# object literal notation that I described last time. Ideally we want to describe the underlying database/CRM field that corresponds with the form input, the label text, the field type, and so on. This makes things easy to change and also allows us to generate the configuration automatically from CRM for very flexible deployments. A field specification might look something like the following:

var field = new { 
	attribute = "db_firstname",
	formfield = "txtFirst",
	formlabel = "lblFirst",
	formlabeltext = "First Name",
	fieldtype = "text" 
};

Looking at the naming conventions, just about any Winforms programmer should see where we are going with this: we are specifying the field labels and names along with some information about the underlying data source. We won’t dig into the data-source side until next time, but I’m putting it in here as a reminder of where we intend to go.

Now that we know what a single field specification looks like, let’s extend things a bit and create a data structure that describes the entity:

private dynamic spec = new { 
    entity = "contact", 
    fields = new object[] { 
        new { 
            attribute = "db_firstname",
            formfield = "txtFirst",
            formlabel = "lblFirst",
            formlabeltext = "First Name",
            fieldtype = "text" 
        },
        new { 
            attribute = "db_lastname",
            formfield = "txtLast",
            formlabel = "lblLast",
            formlabeltext = "Last Name",
            fieldtype = "text" 
        }
    }
};

One major thing to notice here is the use of the new ‘dynamic’ keyword. This makes life much easier; otherwise we would have to use reflection to access the members of these anonymous types. If the 4.0 framework isn’t available, we can fall back to reflection, but I promise you it isn’t pretty. Note also that we aren’t using the ‘var’ keyword here. We want to pass the data structure around to the methods that will generate the web form, and with var that is awkward: we would have to cast to object to pass it, and we’d end up using dynamic anyway, since otherwise we’d have to cast back to the anonymous type. Not for the faint of heart.

In the interest of brevity, we’ll only cover text input fields here, and only a single form layout. The crux of what we want to do is to create the Webforms controls and add them to the page using the specification shown above. We can do just that as follows:

private void AddFields( Control in_control, dynamic in_spec ) {
	foreach( dynamic item in in_spec.fields ) {
		// Wrap each label/textbox pair in a div to get a field-per-line layout
		HtmlGenericControl div = new HtmlGenericControl( "div" );
		div.Controls.Add( CreateLabel(
			item.formlabeltext,
			item.formlabel
		) );
		div.Controls.Add( CreateTextBox( item.formfield ) );
		in_control.Controls.Add( div );
	}
}

The code could be much simpler if we were content to have our form fields laid out inline, but the bare minimum here is a standard field-per-line layout, so we take on the overhead of creating wrapper elements with HtmlGenericControl.

Using the above code can be as simple as calling it from Page_Load as such:

protected void Page_Load( object sender, EventArgs e ) {
	AddFields( this.Controls[3], spec );
}

In order to round things out, the following listing shows the convenience methods used in the main function that create the control and label objects themselves:

private Label CreateLabel( string in_label, string in_id ) {
	Label label = new Label();
	label.Text = in_label;
	label.ID = in_id;
	return label;
}

private TextBox CreateTextBox( string in_id ) {
	TextBox textbox = new TextBox();
	textbox.ID = in_id;
	return textbox;
}

I added a few more fields by simply adding items to the ‘spec’ data structure and ended up with something like the following:

Metaprogramming in C# is not nearly as easy as it is in Ruby or Javascript, but with recent additions to the framework, it is getting better. We’ll still have to rely on a bit of reflection from time to time, but hopefully this shows that it can be relatively straightforward.

Written by newcome

June 28, 2010 at 4:19 pm

Posted in Uncategorized

IE6/compatibility view testing


On a recent client project, I was working on a few simple contact info forms for donations and dues renewals. I put together some nice, clean css layouts for the form fields and proceeded to the meat of the project, which was MS CRM integration.

Things were going well until it came time to do a client demo, and the page layout was broken. I had tested in all of the major browsers (IE/FF/Chrome/Safari/Opera) so I thought that all of the bases were covered.

After the demo we looked more closely at things and discovered that a default setting in IE causes things to display in compatibility view when the page is an ‘intranet’ site (which appears to include localhost). Here is a screenshot:

The layout bug is also apparent in IE6, so ‘compatibility view’ must be a similar rendering engine to the old IE6 (I don’t know this for certain, however).

Since the client is also running some instances of IE6, and I wasn’t sure that compatibility mode was the same thing as IE6, I wanted to test on IE6 itself. It has been a while since I looked around for testing solutions, and in the past I just had a virtual machine running Windows XP lying around to test on. However, the virtual machine was on a remote backup and would have taken me too long to set up again for quick testing.

Fortunately, I found something that worked — running IE6 under Spoon. Spoon is some kind of application virtualization environment that lets you run an application as a plugin. I don’t know exactly how it works, but it appears to get around the limitations that other solutions have in getting the actual binaries for IE6 that are responsible for layout to load up.

As icing on the cake, my old development plugins were installed just like they were before the new IE development tools were available. See the above screenshot showing the layout bug, along with Nikhil’s Web Development Helper loaded at the bottom of the screen. I wonder if IEDocMon will work as well?

Written by newcome

June 28, 2010 at 12:17 pm

Posted in Uncategorized

C# object literal notation

with 5 comments

I have a project coming up where I’d like to be able to write some declarative configuration in an object literal notation similar to JSON. The project is going to be written in C#, so I could embed a Javascript implementation like IronJS and just use JSON, or I could try using IronPython or some other .NET language that supports object literals. However, now that C# supports anonymous types, I wondered how far I could go staying within the C# language.

For starters, a simple “hash” style object can be created like so:

var obj = new { name = "value" };

Fortunately, anonymous types may be nested, so the following is valid:

var obj = new { 
    name = "value", 
    obj = new { 
        name = "value" 
    }
};

So far, things are looking pretty good. With the exception of the ‘new’ keyword, there are only slight syntactic differences between what we’d see in a JSON literal and what we have here in C#. The other major feature of JSON is the array literal. C# supports array initialization lists, so we can create a new unnamed array as such:

new object[]{ "one", "two", 3 }

Now it isn’t too much of a stretch to see that we can create arrays as values in our anonymous types, like this:

var obj = new { 
    name = "value", 
    arr = new object[] { "one", "two", 3 } 
};

Now what happens if we want to go the other way around and create an array of anonymous types? We can do this:

var obj = new object[] {
    new { name = "value" },
    new { name = "value2" }
};

There is one problem with this, however. Since we declared the array type as object, trying to access the ‘name’ member like this:

obj[1].name

Results in the following error:

error CS1061: 'object' does not contain a definition for 'name' and no extension method 'name' accepting
   a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?)

Fortunately we can make use of the ‘dynamic’ keyword (it is actually a type too!) in C# to defer static type checking on our objects by declaring the array like this:

var obj = new dynamic[] {
    new { name = "value" },
    new { name = "value2" }
};

Things are looking pretty rosy right now — sure, the added syntax is a bit of a drag compared to JSON, but so far we’ve done much less work than we would had we embedded another language.

There is one last piece to the puzzle: functions. C# supports anonymous functions in the form of either anonymous delegates or lambda expressions. Either will work for our purposes here, although lambdas weren’t introduced until C# 3.0, so it is possible that your version of the .NET framework won’t support them.

// using anonymous delegate
new Action( delegate() { 
    Console.WriteLine( "called function" ); 
})

// using lambda expression
new Action( () => { 
    Console.WriteLine( "called function" ); 
})

There is one trick here that I haven’t talked about yet: the Action delegate type. It turns out that an anonymous delegate or lambda expression can’t be directly assigned to a member of our anonymous types, so we have to wrap it in a proper delegate instance. We could have created our own delegate type for this purpose, but .NET has a few built-ins that make our lives somewhat simpler. ‘Action’ is a delegate type that describes a function taking no parameters and returning void, perfect for our sample here, which only produces the side effect of console output.

If we put everything together we can create complex objects like the following:

var obj = new { 
	name = "value", 
	function = new Action(() => { 
		Console.WriteLine( "called function" ); 
		string foo = "bar";
		var nested = new Action( () => { 
			Console.WriteLine( "inner function " + foo ); 
		});
		nested();
	}),
	array = new dynamic[] { 
		"one", 
		"two",  
		new Action( () => { Console.WriteLine( "array function" ); } ),
		3
	}
};

Notice that I threw in an extra treat: a function called ‘nested’ nested inside of another function. The local variable ‘foo’ is visible to the nested function. Speaking of variables, this leads us to the biggest shortcoming that I’ve found with expressing object literals in C#: we have no access to a ‘this’ reference in our functions. In Javascript we would be able to refer to the object instance on which the function is defined, but in C# we cannot. It may be possible to retrieve a reference to ‘this’ somehow using Reflection, so if some astute reader finds a method for doing so, I’d love to know about it.
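For contrast, this is the Javascript behavior being missed: a function defined in an object literal can reach its owning object through ‘this’ when invoked as a method (the object and member names here are just illustrative):

```javascript
// In Javascript, a method invoked as obj.describe() gets obj bound to
// `this`, so the function can see its sibling members -- exactly what
// the C# anonymous types above cannot do.
var obj = {
  name: 'value',
  describe: function() {
    return 'name is ' + this.name;
  }
};
```

Calling `obj.describe()` yields `'name is value'`; the function found `name` on its enclosing object with no extra plumbing.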

Written by newcome

June 11, 2010 at 5:05 pm

Posted in Uncategorized

Dynamic objects in C#

with one comment

Microsoft’s efforts toward making its .NET languages more dynamic get plenty of hype. So when I dug into dynamic (aka expando) objects, hoping to solve a few simple problems involving data binding against data known only at runtime, I was optimistic that the new features would be just what I needed.

The Dynamic Language Runtime (DLR) defines a type called ExpandoObject that allows us to create new members on an instance on the fly, much like we can in languages such as Javascript. For example, in Javascript, we can add a member to an object by simply assigning a value:

var obj = {};
obj.member = "hello";

One very important feature of expando objects in Javascript is that we can create a member using runtime data. In the example above, the ‘member’ property name was hard-coded into the script. Fortunately, we can also use the indexer syntax:

obj["member"] = "hello";

Astute readers will observe that, in place of a literal string, we can use a variable whose value can be set at runtime:

var fieldname = "member";
obj[ fieldname ] = "hello";

So now that we’ve seen how things work in Javascript, let’s take a look at some C# code. In order to compile the code that follows, you’ll need Microsoft.Dynamic.dll from the DLR as well as Microsoft.CSharp.dll. Also, you’ll need to be running on the 4.0 framework, since we’ll be needing support for the ‘dynamic’ keyword.

dynamic obj = new System.Dynamic.ExpandoObject();
obj.member = "hello";

Brilliant. Now members can be created on the fly just like in Javascript. However, support for defining member names from runtime data doesn’t seem to have shipped yet in the DLR. Fortunately, there is a workaround. Unfortunately, it is ugly. ExpandoObject implements IDictionary&lt;string, object&gt;, so we can cast to that interface and add the member as a dictionary entry:

( ( IDictionary<string, object> )obj ).Add( "member", "hello" );

Note that the use of the ‘dynamic’ keyword signals to the compiler that type-checking should be deferred until runtime. It is also a type, so we can perform a cast. For example:

((dynamic) new System.Dynamic.ExpandoObject()).member = "woo";

It is a pointless example, since we don’t return a reference to the new object, but it serves to illustrate how ‘dynamic’ fits into the language.

Now the real reason that I wanted to create dynamic objects was so that I could bind them to an ASP.NET GridView. As an example, we can use anonymous types to create a data source like this:

MyGridView.DataSource = new object[] {
	new { name = "Dan", occupation = "programmer" }
};

The preceding code is great if we know exactly which fields we need at compile time, but to build the items at runtime we’d need something like the dynamic objects shown in the previous examples. Unfortunately, it doesn’t work: dynamic objects cannot be used in data binding. This leads me to wonder how useful ExpandoObject really is, since it is effectively just syntactic sugar around a dictionary if it doesn’t look like a normal .NET type (i.e., one with defined class members). Maybe I’m missing something, but this seems like an elaborate scheme that doesn’t fully solve the problem of creating dynamic objects in .NET.
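For comparison, the Javascript version of what I was after, building records whose field names are known only at runtime, is trivial (the helper name here is mine, just for illustration):

```javascript
// Build a record from field names that arrive as runtime data --
// the scenario the GridView binding above can't handle with
// ExpandoObject. In Javascript every object is an expando.
function buildRecord(fieldNames, values) {
  var record = {};
  for (var i = 0; i < fieldNames.length; i++) {
    record[fieldNames[i]] = values[i];
  }
  return record;
}
```

So `buildRecord(['name', 'occupation'], ['Dan', 'programmer'])` gives an object indistinguishable from one written as a literal, which is exactly the property that makes runtime data binding painless there.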

Written by newcome

June 5, 2010 at 5:30 pm

Posted in Uncategorized

Is science fiction driving science reality?


I recently watched the TED talk in which John Underkoffler demonstrated the three-dimensional operating system interface designed for the movie Minority Report. I find it fascinating that Hollywood now has such high production standards, necessitated by ever-increasing audience sophistication, that it is easier to commission real research than to fake it.

Video games are another example of entertainment driving the state of the art in graphics technology. In our never-ending drive to be entertained, we keep expanding the capabilities of computers, which in turn enables advances in genetics research and other disciplines that rely on high-performance data visualization.

In recent months I have been reading more fiction. Science fiction, to be precise. In the past, while having an appreciation for works of literary art, I found myself unable to justify the time required to consume and appreciate such a work. In a practical sense, it was more appropriate to spend my time reading about technical realities rather than fantasies. In time however, I began to realize that when we think too much in everyday corporeal terms, the limitations of reality become the limitations of our imaginations. In the case of Minority Report, the traditional limitations of previous user interface design and operating systems research could be ignored, allowing the pursuit of dangerous and frightening ideas which may break the current rules.

Only by beginning in the abstract can we achieve discontinuous jumps in technology, which brings us back to fiction, in which any idea that can be written is possible. Even fiction seems unlikely to capture the breadth of what we are likely to see in our lifetimes. I believe we don’t yet have words for some of the concepts being toyed with today, let alone those of tomorrow.

Written by newcome

June 5, 2010 at 12:42 pm

Posted in Uncategorized