Dan Newcome, blog

I'm bringing cyber back

Archive for December 2011

Basics of CSS 3D transforms

with one comment

CSS is getting more and more powerful these days, and now that CSS3 supports 3D transforms, we can do quite a lot without resorting to WebGL or some complex 3D drawing library. However, visualizing how 3D works in CSS can still be tricky, and it isn’t as flexible or rigorous as something like OpenGL. I’ve just started digging into 3D for possible extensions to the Donatello drawing library, so I thought I’d jot down a few initial notes here.

We’ll be using the Firefox nightly for the examples. They should also work in WebKit if you change the vendor prefixes on the CSS properties, but I won’t cover that here. I’m going to start with a very basic code example, since there don’t seem to be many explanations on the Web of the bare minimum needed to get things working.

The way CSS 3D transforms work is that one DOM element sets the perspective reference, and the elements to be rendered with 3D transforms are contained within it. The reference element is not a viewport in the sense of typical 3D applications; think of it more along the lines of the world scene graph. Elements may end up outside of this container, just as they can with ordinary CSS positioning using, say, a negative offset, and the 3D transforms themselves can also push content outside of the parent element.

Ok, let’s cut to the most basic example: a perspective view of a square div element that has been positioned perpendicular to the view reference plane.

To start with, let’s create a green square centered inside of a black-bordered “viewport” using standard CSS positioning and nothing fancy:


<div class="view">
    <div class="square">
    </div>
</div>

.view {
    border: 1px solid black;
    width: 300px;
    height: 300px;
}
.square {
    background-color: green;
    position: absolute;
    top: 100px;
    left: 100px;
    width: 100px;
    height: 100px;
}   

Firefox renders this like so:

Now in order to establish the 3D viewport and flip the square up onto its top edge, we need to do a few things. We’ll add the following CSS to the “view” class in order to put the observer over the center of the viewport element, 200 pixels overhead:

-moz-perspective: 200px;
-moz-transform-style: preserve-3d;
-moz-perspective-origin: 50% 50%;

Note that we need to use the vendor-specific “-moz” prefixes for the new CSS properties. The W3C spec defines them without vendor prefixes, but for now we need the prefixes if we want our code to work in any current production Web browser. Also keep in mind that transform-style must be set to “preserve-3d”: the default is “flat”, which effectively turns the 3D scene into a flat 2D projection, and that is very confusing to debug if you don’t know what is going on. Without this setting, for example, our green square would just end up squashed into a shorter rectangle.

In order to stand the green square up on end, we use the following CSS:

-moz-transform-origin: 0% 0%;  
-moz-transform: rotateX( 90deg ); 

Here we set the origin to the top left of the square and rotate it 90 degrees along the X axis (which is now aligned along the top edge of the green square courtesy of transform-origin).
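Putting it all together, here is the complete stylesheet for this example — just the rules shown above, combined into the two classes:

```css
.view {
    border: 1px solid black;
    width: 300px;
    height: 300px;
    /* establish the 3D scene: observer 200px above the center */
    -moz-perspective: 200px;
    -moz-transform-style: preserve-3d;
    -moz-perspective-origin: 50% 50%;
}
.square {
    background-color: green;
    position: absolute;
    top: 100px;
    left: 100px;
    width: 100px;
    height: 100px;
    /* rotate about the top edge to stand the square up */
    -moz-transform-origin: 0% 0%;
    -moz-transform: rotateX( 90deg );
}
```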

The result looks like this:

Something to note if things don’t work as you’d expect: the entire object must lie beyond the viewer’s perspective distance. That is, if we stand a 100-pixel object up as we have done, and the perspective is set to 99 pixels, the object will disappear from view entirely. This caused me some pain early in this exploration. Also, the position of the object in the scene doesn’t seem to affect its perspective rendering, but moving the perspective origin does. I’m not sure whether this is intentional, but it is pretty confusing: you’d expect moving either the camera or the object to have the same effect, but it doesn’t seem to. I could be mistaken on this, so I’d love to hear from you if you have more detail on how it works.

For more complex 3D examples, check out the Mozilla Developer Network. Be warned that they aren’t very well explained, so if you didn’t quite follow what I showed above, figure out the simple case first before tackling the Mozilla examples.


Written by newcome

December 25, 2011 at 3:59 am

Posted in Uncategorized

Static method patterns

with 3 comments

When programming with statically-typed OO languages we all have the temptation sooner or later to implement some functionality using static methods. Static methods are “class” methods, which are called on the data types themselves rather than on an instance of a class.

In Microsoft .NET, static state is shared among all code running within a given application domain (AppDomain). Static methods are very convenient for code that doesn’t need to deal with any external state, since we don’t have to create an instance of any class in order to call them.

However, static methods do not participate in inheritance. They exist largely outside of the mechanisms that make object-oriented programming attractive. In light of this fact, I think that most of the time, static methods are an anti-pattern.

Static methods can make code hard to test, since there is no seam for replacing them with alternative implementations such as mocks or stubs.

There is one workaround I’ve found, which I call the Static Adapter pattern. I have seen some references to this pattern elsewhere, so I don’t claim to have invented it, but it isn’t listed in most of the classic design-patterns books.

The static adapter is just like the classic proxy pattern, except that the methods on it are static. For comparison, in a traditional proxy pattern we’d have something like this:

// proxied class
public class Service 
{
    public void DoSomething() {
        // something happens here
    }
}

public class ServiceProxy 
{
    // internally we have an instance of proxied class
    private Service m_service = new Service();

    // proxy the call to DoSomething()
    public void DoSomething() {
        m_service.DoSomething();
    }
}

In order to use the proxy, we do something like the following:

ServiceProxy proxy = new ServiceProxy();
proxy.DoSomething();

Notice that we have to create an instance of the proxy in order to work with it. Now there are two things that I want to extend this code to do. One is to allow me to avoid creating an instance of the proxy, and the other is to allow me to give the proxy a different internal service:

public class ServiceProxy 
{
    // internally we have an instance of the proxied class;
    // the field is public so callers can swap in another implementation
    public static Service m_service;

    // static constructor supplies a default service instance
    static ServiceProxy() {
        m_service = new Service();
    }

    // proxy the call to DoSomething()
    public static void DoSomething() {
        m_service.DoSomething();
    }
}

So now we have a static constructor that supplies a default service implementation, and a public static field that lets us give the proxy a different one. The method DoSomething() can now be called directly as:

ServiceProxy.m_service = new Service();
ServiceProxy.DoSomething();

We still have to specify the internal service for this to work. If we can get away with a default instantiation of the service, we can simply write:

ServiceProxy.DoSomething();

These static classes cannot implement an interface, but their internal services can. In order to provide a test implementation of some class we can plug in a dummy or test double as the internal service.
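To make that concrete, here is a sketch (my illustration, not code from the pattern books) where the internal service is typed against an interface, so a test double can be plugged into the static adapter:

```csharp
public interface IService
{
    void DoSomething();
}

public class Service : IService
{
    public void DoSomething() {
        // real work happens here
    }
}

// test double that records whether it was called
public class FakeService : IService
{
    public bool WasCalled;
    public void DoSomething() {
        WasCalled = true;
    }
}

public static class ServiceProxy
{
    // typed against the interface so any implementation can be swapped in
    public static IService m_service = new Service();

    public static void DoSomething() {
        m_service.DoSomething();
    }
}
```

In a test we can assign ServiceProxy.m_service = new FakeService(), call ServiceProxy.DoSomething(), and assert against the fake, all without ever touching the real Service.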

That’s enough for this post. I’ll have more to say about static classes and methods later.

Written by newcome

December 14, 2011 at 2:32 am

Posted in Uncategorized

When to write a library

with 4 comments

I’ve been meaning to write this post for a while. In my work, the debate about when to start abstracting code comes up a lot. There inevitably comes a time when a bit of code has been copied and tweaked enough times that it screams out to be put into a common library. However, it has been my experience that programmers are still reluctant to take on the task of creating a common code base. I’ve also noticed that many programmers think “code reuse” means copying some of the files out of a previous project when starting a new one.

Some of these quick and dirty techniques are expedient initially, but we need to move past them to make any real progress on the problems that we work on. In my experience, creating modular code allows the code to be reasoned about more easily.

However, there are some downsides to creating libraries. It takes a non-trivial amount of work to create and maintain a piece of shared code. Once a piece of code is used in many places, changes to it can’t be verified without dedicated tests that cover all intended use cases. This means that even if the code is already written as a de facto library, a lot of supporting code has to be written to make the library maintainable.

Existing client code also needs to be refactored to use the new library going forward. This step is not strictly necessary: older code bases can often be maintained separately as legacy code and upgraded only if other changes are needed later on.

So this leads us to the question: when does it make sense to write a library? I’d say when the pain of not having the code centrally located becomes significant. In the past I was a purist and would have created a library right away, but it takes some time to figure out what the API should even be, so now I think it makes sense to intentionally live with the pain of having three or four different versions of the code before consolidating it.

Unfortunately, I think some languages make the unit of code deployment too large and cumbersome. In the .NET Framework, I’d argue that the assembly is too heavyweight for many things. There are many times when I want a single interface to be available independently, but I’d have to put it in its own assembly in order to reference it separately. It’s easy to fall into DLL hell with tens or even hundreds of assemblies, and each assembly carries the overhead of a separate build file or Visual Studio project file. I think there needs to be a lighter-weight way of sharing code in .NET.

This post rambled on a bit, but I’m getting started on a series of posts on software engineering, so I wanted to kick things off with some problems that I have faced in the past, of which this is one of the most difficult.

Written by newcome

December 13, 2011 at 10:30 pm

Posted in Uncategorized

Full-stack programming case study

leave a comment »

There has been a lot of talk about “full-stack” programmers in the tech press recently. Along with the concept of “devops,” this is an important distinction to make between pure coders and people who are capable of systems thinking. Generally speaking, in addition to deep core programming skills, full-stack programmers understand other parts of the IT stack, like hardware, networking, and operations.

I’m going to recount a story from a previous job I had where I was a programmer. We had a network team that was proficient in maintaining the existing network, but as the data center grew, things started to get unmanageable. The team argued about how to extend the network from the current single-subnet design to a multi-subnet network implemented with vlan-capable switches.

After many meetings and delays the network team had finally procured new equipment and thrown the stuff into the racks in the data center. However, no real thought had been given to how the migration would take place. Everyone was afraid to unplug anything and the data center couldn’t be down for more than a few hours, so any cut over would have to be prepared well in advance.

The network team wasn’t making any progress on the problem, so the CTO came to me to figure out if there was some solution to the migration problem. I’ve designed some pretty big networks in the past and I’ve dealt with some seriously big datacenter iron, so I figured that this network wouldn’t be too complex.

Since I had done some low-level network programming in the past, I knew that multiple subnets could be subsumed within a larger subnet, and that asymmetric routes could be created, even though doing so is not a best practice. The solution I came up with allowed each machine on the network to be moved one at a time by setting up asymmetric routes ahead of time. Once the routes were in place, machines could be moved without any of the other clients on the network noticing that they were actually on a new network. This let the network team spread the migration out over several days without any outage at all. Once all the machines were moved, the temporary routes were removed and everything was in order with networking best practices.

I found a diagram that I created for this project and redacted the IP addresses so that you can see the full solution.

Written by newcome

December 4, 2011 at 10:51 pm

Posted in Uncategorized

Why compiled regular expressions are awesome

with 3 comments

I was doing a little code spelunking recently and found this gem of a macro in the University of Washington IMAP server code. It does some validation of the various date formats found in email “From ” headers. The code was written this way for performance, but I wonder whether regexes in modern languages would be up to the task. Anyway, I have posted the code here for your enjoyment.


/* ========================================================================
 * Copyright 1988-2006 University of Washington
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * 
 * ========================================================================
 */

/*
 * Program:	UNIX mail routines
 *
 * Author:	Mark Crispin
 *		Networks and Distributed Computing
 *		Computing & Communications
 *		University of Washington
 *		Administration Building, AG-44
 *		Seattle, WA  98195
 *		Internet: MRC@CAC.Washington.EDU
 *
 * Date:	20 December 1989
 * Last Edited:	30 August 2006
 */


/*				DEDICATION
 *
 *  This file is dedicated to my dog, Unix, also known as Yun-chan and
 * Unix J. Terwilliker Jehosophat Aloysius Monstrosity Animal Beast.  Unix
 * passed away at the age of 11 1/2 on September 14, 1996, 12:18 PM PDT, after
 * a two-month bout with cirrhosis of the liver.
 *
 *  He was a dear friend, and I miss him terribly.
 *
 *  Lift a leg, Yunie.  Luv ya forever!!!!
 */

/* Validate line
 * Accepts: pointer to candidate string to validate as a From header
 *	    return pointer to end of date/time field
 *	    return pointer to offset from t of time (hours of ``mmm dd hh:mm'')
 *	    return pointer to offset from t of time zone (if non-zero)
 * Returns: t,ti,zn set if valid From string, else ti is NIL
 */

#define VALID(s,x,ti,zn) {						\
  ti = 0;								\
  if ((*s == 'F') && (s[1] == 'r') && (s[2] == 'o') && (s[3] == 'm') &&	\
      (s[4] == ' ')) {							\
    for (x = s + 5; *x && *x != '\012'; x++);				\
    if (*x) {								\
      if (x[-1] == '\015') --x;						\
      if (x - s >= 41) {						\
	for (zn = -1; x[zn] != ' '; zn--);				\
	if ((x[zn-1] == 'm') && (x[zn-2] == 'o') && (x[zn-3] == 'r') &&	\
	    (x[zn-4] == 'f') && (x[zn-5] == ' ') && (x[zn-6] == 'e') &&	\
	    (x[zn-7] == 't') && (x[zn-8] == 'o') && (x[zn-9] == 'm') &&	\
	    (x[zn-10] == 'e') && (x[zn-11] == 'r') && (x[zn-12] == ' '))\
	  x += zn - 12;							\
      }									\
      if (x - s >= 27) {						\
	if (x[-5] == ' ') {						\
	  if (x[-8] == ':') zn = 0,ti = -5;				\
	  else if (x[-9] == ' ') ti = zn = -9;				\
	  else if ((x[-11] == ' ') && ((x[-10]=='+') || (x[-10]=='-')))	\
	    ti = zn = -11;						\
	}								\
	else if (x[-4] == ' ') {					\
	  if (x[-9] == ' ') zn = -4,ti = -9;				\
	}								\
	else if (x[-6] == ' ') {					\
	  if ((x[-11] == ' ') && ((x[-5] == '+') || (x[-5] == '-')))	\
	    zn = -6,ti = -11;						\
	}								\
	if (ti && !((x[ti - 3] == ':') &&				\
		    (x[ti -= ((x[ti - 6] == ':') ? 9 : 6)] == ' ') &&	\
		    (x[ti - 3] == ' ') && (x[ti - 7] == ' ') &&		\
		    (x[ti - 11] == ' '))) ti = 0;			\
      }									\
    }									\
  }									\
}

/* You are not expected to understand this macro, but read the next page if
 * you are not faint of heart.
 *
 * Known formats to the VALID macro are:
 *		From user Wed Dec  2 05:53 1992
 * BSD		From user Wed Dec  2 05:53:22 1992
 * SysV		From user Wed Dec  2 05:53 PST 1992
 * rn		From user Wed Dec  2 05:53:22 PST 1992
 *		From user Wed Dec  2 05:53 -0700 1992
 * emacs	From user Wed Dec  2 05:53:22 -0700 1992
 *		From user Wed Dec  2 05:53 1992 PST
 *		From user Wed Dec  2 05:53:22 1992 PST
 *		From user Wed Dec  2 05:53 1992 -0700
 * Solaris	From user Wed Dec  2 05:53:22 1992 -0700
 *
 * Plus all of the above with `` remote from xxx'' after it. Thank you very
 * much, smail and Solaris, for making my life considerably more complicated.
 */

/*
 * What?  You want to understand the VALID macro anyway?  Alright, since you
 * insist.  Actually, it isn't really all that difficult, provided that you
 * take it step by step.
 *
 * Line 1	Initializes the return ti value to failure (0);
 * Lines 2-3	Validates that the 1st-5th characters are ``From ''.
 * Lines 4-6	Validates that there is an end of line and points x at it.
 * Lines 7-14	First checks to see if the line is at least 41 characters long.
 *		If so, it scans backwards to find the rightmost space.  From
 *		that point, it scans backwards to see if the string matches
 *		`` remote from''.  If so, it sets x to point to the space at
 *		the start of the string.
 * Line 15	Makes sure that there are at least 27 characters in the line.
 * Lines 16-21	Checks if the date/time ends with the year (there is a space
 *		five characters back).  If there is a colon three characters
 *		further back, there is no timezone field, so zn is set to 0
 *		and ti is set in front of the year.  Otherwise, there must
 *		either to be a space four characters back for a three-letter
 *		timezone, or a space six characters back followed by a + or -
 *		for a numeric timezone; in either case, zn and ti become the
 *		offset of the space immediately before it.
 * Lines 22-24	Are the failure case for line 14.  If there is a space four
 *		characters back, it is a three-letter timezone; there must be a
 *		space for the year nine characters back.  zn is the zone
 *		offset; ti is the offset of the space.
 * Lines 25-28	Are the failure case for line 20.  If there is a space six
 *		characters back, it is a numeric timezone; there must be a
 *		space eleven characters back and a + or - five characters back.
 *		zn is the zone offset; ti is the offset of the space.
 * Line 29-32	If ti is valid, make sure that the string before ti is of the
 *		form www mmm dd hh:mm or www mmm dd hh:mm:ss, otherwise
 *		invalidate ti.  There must be a colon three characters back
 *		and a space six or nine	characters back (depending upon
 *		whether or not the character six characters back is a colon).
 *		There must be a space three characters further back (in front
 *		of the day), one seven characters back (in front of the month),
 *		and one eleven characters back (in front of the day of week).
 *		ti is set to be the offset of the space before the time.
 *
 * Why a macro?  It gets invoked a *lot* in a tight loop.  On some of the
 * newer pipelined machines it is faster being open-coded than it would be if
 * subroutines are called.
 *
 * Why does it scan backwards from the end of the line, instead of doing the
 * much easier forward scan?  There is no deterministic way to parse the
 * ``user'' field, because it may contain unquoted spaces!  Yes, I tested it to
 * see if unquoted spaces were possible.  They are, and I've encountered enough
 * evil mail to be totally unwilling to trust that ``it will never happen''.
 */

/* Build parameters */

#define KODRETRY 15		/* kiss-of-death retry in seconds */
#define LOCKTIMEOUT 5		/* lock timeout in minutes */
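Out of curiosity about my own question above, here is a rough modern-regex sketch (mine, not part of the IMAP sources) of the date/time check the VALID macro performs. It covers the formats listed in the comment block, but it is illustrative only: it ignores the ``remote from xxx`` suffix and does none of the offset bookkeeping the macro does.

```python
import re

# Illustrative only: match the date/time tail of a mbox "From " line.
# The user field may contain unquoted spaces, so the pattern relies on
# backtracking from the end of the line, much as the macro scans backwards.
FROM_LINE = re.compile(
    r"^From .* "                      # "From " plus user field (may have spaces)
    r"\w{3} \w{3} [ \d]\d "           # www mmm dd (day may be space-padded)
    r"\d\d:\d\d(?::\d\d)?"            # hh:mm or hh:mm:ss
    r"(?: (?:[A-Z]{3}|[+-]\d{4}))?"   # optional zone before the year
    r" \d{4}"                         # year
    r"(?: (?:[A-Z]{3}|[+-]\d{4}))?$"  # optional zone after the year (Solaris)
)

# a few of the known formats from the comment block above
print(bool(FROM_LINE.match("From user Wed Dec  2 05:53 1992")))
print(bool(FROM_LINE.match("From user Wed Dec  2 05:53:22 PST 1992")))
print(bool(FROM_LINE.match("From two words Wed Dec  2 05:53:22 1992 -0700")))
```

Whether this is faster than the open-coded macro in a tight loop is exactly the question; a compiled pattern object at least avoids re-parsing the regex on every line.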

Written by newcome

December 4, 2011 at 10:07 pm

Posted in Uncategorized