Dan Newcome on technology

I'm bringing cyber back

Process stability and Software Engineering

leave a comment »

I’m reading some of W. Edwards Deming’s writing and I can’t help but try to draw parallels between manufacturing fundamentals, statistical process control in particular, and the software development lifecycle.

Deming was able to show scientific ways of managing hidden variables without trying to make wild guesses about systems up-front. I have seen management systems come and go in Software Engineering and seen backlash against things like the Capability Maturity Model and various ISO certification systems. We ended up coming back to simple things like Agile. Why?

I think agile has become a sort of cargo cult, but some parts of it fundamentally address the right questions about process viewed through the lens of statistical process control. The concept of measuring velocity is a key part, as is tracking estimation errors. I think where the cargo cult comes in is the assumption that the goal is to have the maximum velocity for a given sprint. I’d argue that the more valuable goal is to achieve a stable velocity over time, even if it seems slow.
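As a sketch of what a “stable velocity” could mean in statistical process control terms, here is a hypothetical calculation. The sprint numbers and the three-sigma control limits are illustrative assumptions, not a prescribed method:

```javascript
// Sketch: judge sprint velocity the way statistical process control
// would, with control limits around the mean rather than a
// "maximize velocity" target. The velocities are made up.
function velocityStats(velocities) {
  const n = velocities.length;
  const mean = velocities.reduce((a, b) => a + b, 0) / n;
  const variance = velocities.reduce((a, v) => a + (v - mean) ** 2, 0) / n;
  const sd = Math.sqrt(variance);
  return {
    mean,
    sd,
    upperControlLimit: mean + 3 * sd,
    lowerControlLimit: mean - 3 * sd,
  };
}

// A team whose velocity stays inside the control limits is "stable",
// even if its average is modest.
const sprints = [21, 19, 22, 20, 18, 21];
const stats = velocityStats(sprints);
const stable = sprints.every(
  (v) => v >= stats.lowerControlLimit && v <= stats.upperControlLimit
);
```

The point is that the interesting question is whether a sprint falls outside the limits (a special cause worth investigating), not whether the mean is as high as possible.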

Complicating matters a bit is that software development is a double-production process: artifacts are produced which in turn produce other behaviors and further artifacts. Now we are talking about another concept, reliability. How does reliability relate to quality and to process stability? I think that there are statistical answers to this question.

I was listening to the CyberWire Daily podcast where one of the hosts was discussing cybersecurity insurance. The thesis is that threat models are so complex now that they can’t be calculated classically/deterministically. However, statistical models are very powerful and are able to find very good solutions to otherwise intractable problems.

This is going to be a short post, but hopefully it sows some seeds for me to return to in depth in a later series of posts.

Written by newcome

March 23, 2022 at 8:14 am

Posted in Uncategorized

All code is legacy code

I was doing some one-on-one meetings with some folks on my dev team and my colleague showed me an informal document that he was keeping. Sprinkled throughout were references to legacy code and proposals to move to different technologies.

I wanted to dig into some specifics but I casually mentioned that all of our code is legacy code. Code is kind of like copyright. It’s legacy once it hits the disk (“fixed form of expression”). I can’t recall exactly where I picked this up but it’s a distillation of an opinion that I’ve formed during my career working with codebases at varying levels of maturity. I’ve seen cheeky references to legacy code as “any code that works”.

I’m part-way through reading an excellent book about legacy systems entitled “Kill It with Fire”: https://www.amazon.com/Kill-Fire-Manage-Computer-Systems/dp/1718501188

I will have to read back and see if I can trace the idea to this book. Either way, you should read it (disclaimer: I have not finished it).

Responses to my comment in the one-on-one have been trickling in from the rest of the team. Everyone seems to have interpreted it a little differently but I think that each interpretation actually embodies a facet of what I’m trying to convey. The context of my utterance was a system that we needed to keep up-to-date so it makes sense to treat the system like a flowing stream. It’s going to have some dynamics all on its own, and to make sure we can guide it we need to manage it. (Software is performance? Software development is a generative system? Too much for this post).

Managing the entire lifecycle of code from the start means treating new code as legacy. Sounds crazy, but it makes you think about your dependencies: are they well maintained? Is the code using a build system that is different from the rest of the system? What about testing plans (not the unit tests themselves, but the strategy for maintaining quality over time)?

The thing I didn’t expect was that it triggered a revelation in one of the other Senior Engineers that the decisions and technologies are always up for discussion. A codebase at rest can be an archaeological dig at worst and a collection of Just-So stories at best. It’s encouraging that this is getting people to ask why. I think that’s pretty amazing.

Written by newcome

March 22, 2022 at 7:51 am

Posted in Uncategorized


leave a comment »

The last few years I have been going out to a festival in the Nevada desert. You can probably guess which one but it’s not really important for this discussion. One of the core tenets or “principles” of that gathering is embracing immediacy. In the context of the festival that generally manifests itself in a kind of a fractally recursive series of pleasant distractions. One of our rules is if someone asks “what is that?” we have to go find out.

Naturally in the real world this lack of prioritization can lead to trouble. However, I have begun to explore the idea of optimizing for immediate action. There is rarely a better time to do a thing than the current moment. Unfortunately this comes at the cost of potentially missing high priority tasks that just don’t happen to be immediately in front of us.

There are several “triage”/inbox zero type of strategies that rely on one pass over potential tasks to see what can be knocked out in 5 minutes or less. My approach here is to try to optimize for speed so some things can be approached in smaller chunks of time.

Digressing a bit here – a useful thing done in a small amount of time is also dependent on there being a nice stopping point for the task. So really there are two things to optimize for: quick to start and quick to stop. Stopping is important because the gains can be lost if the exploration is forgotten (if it is an info-task) or the intermediate result is misplaced, only to be done again next time it’s immediately before us.

Before I do any more explaining I will give an example. Say I am looking at a broken power jack on a circuit board I’m holding. I think, OK, I just need to unsolder this thing. I have more barrel plugs in a little drawer in my organizer. The soldering stuff is in a plastic bin in a stack of bins. What do I have to do to accomplish this task? There are more moving parts than one would think. I have to put the board down somewhere. That might be a challenge if my bench isn’t clear. I may need to pull the soldering bin out from under some other stacked bins and figure out where to put it while I pull out the desoldering iron. I have to find a place to plug it in.

Immediacy can come into play in many places here. One is realizing that a good place for a power outlet is right in front of the bench, maybe mounted to the leg of the bench. Having a ready supply of double-sided tape is something that could enable this. The ability to slap a power strip right there and use it right away for this task is what I’m talking about. It should take less than 5 minutes to evaluate whether it’s a good idea, and if it works out it’s much easier to remember, and maybe I’ll use screws or something more permanent. Kind of like paving the worn paths across the grass.

Another potential thing in this example is “where do I put the hot iron?” I have a little soldering iron holder on a clip now. The thing that led to this was my cat knocking something heavy onto my soldering iron holder, breaking it off of the base. I delayed throwing it out since I didn’t have anything else and figured I would fix it eventually. However, the break was in the metal, and I kept thinking I could weld it so fast, but I had to remember to take it to my shop, which never happened. I had some steel wire and started looking for somewhere to lash it; I had a binder clip lying around, and the idea worked and stuck. Pretty contrived, but how do we enable more immediate actions like this?

One touch systems. I did the clip-on soldering iron holder while the iron was hot in my hand. This is what I would call “one touch”. I was already holding the problem. Another example of this would be washing a pan before letting it go after serving out the food it contained. It’s probably not going to get any easier to clean after it’s cool.

Written by newcome

December 5, 2021 at 2:09 pm

Posted in Uncategorized

Go to the H1 of the class

I’m refactoring some HTML right now and it occurs to me that headings are problematic for non-frontend devs and inconsistently used at best among frontenders.

When I was at Yahoo a big part of the styling philosophy revolved around Atomic CSS. These were a kind of predecessor to React’s JS-like inline style declarations. Styles were generated by a post-processor based on descriptive short class names, e.g. display: inline-block becomes D-ib (not the real syntax anymore, but a similar idea). The compiler would generate the stylesheet to make the class name work, e.g.

.D-ib {display: inline-block}
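A toy version of that post-processing step might look something like this. The atom table and regex-based scanning are simplified stand-ins for illustration, not Yahoo’s actual Atomizer:

```javascript
// Sketch of the idea behind an Atomic CSS post-processor: scan markup
// for short descriptive class names and emit only the matching rules.
const atoms = {
  "D-ib": "display: inline-block",
  "D-b": "display: block",
  "Ta-c": "text-align: center",
};

function generateStylesheet(markup) {
  const used = new Set();
  // Collect every known atom class name that appears in the markup.
  for (const match of markup.matchAll(/class="([^"]*)"/g)) {
    match[1].split(/\s+/).forEach((c) => atoms[c] && used.add(c));
  }
  // Emit one rule per atom actually used.
  return [...used].map((c) => `.${c} {${atoms[c]}}`).join("\n");
}

const css = generateStylesheet('<div class="D-ib Ta-c">hi</div>');
// css is ".D-ib {display: inline-block}\n.Ta-c {text-align: center}"
```

The appeal is that the stylesheet only ever contains rules that are actually referenced, and the class names document the styling inline.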

The reason I’m bringing this up is that headings typically have some styling associated with them. The base stylesheets of browsers define them, and they are block-level at the very least. Most CSS template systems further define them and/or do some kind of “reset” that redefines them.

At a basic level, heading tags are semantic, telling an agent at what level in the hierarchy of information this markup resides. That’s basically it. I’m coming to think that in the age of components we need to treat headings as relative and purely semantic. On one extreme I see developers using just div everywhere. This gets around the problem of default stylings but throws away any attempt at semantic hierarchy. Maybe they sprinkle in some non-standard class name scheme for that.

Ideally H tags would have no styling different from div. Components would always start with H1 at their top level, and headings would be rewritten during page composition to the appropriate depth based on the “root” heading level. Styles are applied separately, possibly inverting the “additional semantics” pattern by supplying “additional styling”, even using .h1 class names. This seems like an anti-pattern at first, but then some common trends in React styling started as anti-patterns as well.
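A minimal sketch of that rewriting step, using plain strings in place of a real component tree (the function name and approach here are my own illustration, not an existing library):

```javascript
// Sketch: components author headings starting at h1; at composition
// time they are re-rooted to the right depth for where they land.
function rebaseHeadings(html, rootLevel) {
  return html.replace(/<(\/?)h([1-6])>/gi, (_, slash, n) => {
    // h1 in the component becomes h<rootLevel> in the page, capped at h6.
    const level = Math.min(6, Number(n) + rootLevel - 1);
    return `<${slash}h${level}>`;
  });
}

// A component whose own top level is h1, composed under a page
// section whose root heading level is 3:
const out = rebaseHeadings("<h1>Title</h1><h2>Sub</h2>", 3);
// out is "<h3>Title</h3><h4>Sub</h4>"
```

The semantics stay relative to the component, and the absolute depth becomes a composition-time concern.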

Written by newcome

July 3, 2019 at 9:39 am

Posted in Uncategorized

Simple Ubuntu Desktop with Vagrant

I wanted to spin up a Linux development environment to hack on some code that needed epoll. I could have run everything in a Docker container, but I kinda wanted to be in that environment for total hackage.

I thought maybe I’d just do it in a Virtualbox instance. Then I didn’t want to install Ubuntu or anything. Then I realized that Vagrant is supposed to solve this problem. Installed Vagrant and used Ubuntu Trusty – https://app.vagrantup.com/ubuntu

$ brew cask install vagrant
$ vagrant init ubuntu/trusty64
$ vagrant up

Then I realized I wanted a minimal desktop.

So googling.

Yep, XFCE FTW. https://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment

Then install Chrome. It’s easiest to SSH into the instance and paste this stuff in through the native shell.

$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
$ echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' | sudo tee /etc/apt/sources.list.d/google-chrome.list
$ sudo apt-get update
$ sudo apt-get install google-chrome-stable


In VirtualBox

$ startxfce4 &

Written by newcome

June 25, 2019 at 3:27 pm

Posted in Uncategorized

Forgotten Javascript

Every once in a while I get reminded of some dark corner of Javascript even though I’m developing with it every day. I give a lot of JS coding interviews and every once in a while something comes up that I hadn’t anticipated in the questions that I usually ask.

Recently someone tried to solve one of my coding problems using arguments.callee. I hadn’t thought about this property in a while, and I realized why once I looked it up on MDN.

The short answer is that it doesn’t work in strict mode, which of course everyone is using, right?

The long answer is that you are effectively looking at the call stack, which in most languages is sketchy due to compiler optimizations (function inlining, etc). In C# I have seen some code that relied on applying the attribute [MethodImplAttribute(MethodImplOptions.NoInlining)] to the function to keep the compiler from inlining it.

This got me thinking about today’s highly-optimized Javascript runtimes. Surely they can do inlining when generating target code right? It turns out that yes, they can, and even better, in the case of V8 we can trace how the optimizations are applied.

I took the example I found here and ran it myself to verify that my version of V8 installed via homebrew on OSX was performing the same optimizations.



Written by newcome

March 27, 2019 at 12:58 am

Posted in Uncategorized

Micro thing-uses

Hello friends. I’m still writing code every day, believe it or not. I’m back in startup land. Away from big tech land mostly. Seeing both sides of the coin has been helping me have some insights I didn’t have before on the open-source adoption of technologies that trickle down from big organizations.

In the past I have mostly held the position that You’re Not Gonna Need It with most things involving scale or performance, or (insert x thingy). Big companies need to deal with multiple facets of scale that you don’t need when your entire engineering department fits at one desk pod.

However, there are some success stories around the adoption of some technologies, and some failures. I’m not sure if I can draw conclusions yet on what tips the scales one way or another. I think the elephant in the room right now is microservices. Leading up to microservices was containerization.

What are the technologies? Docker. Kubernetes. Lambda. Graphql. Apollo.

These don’t seem like they have that much in common but they are all attempts at granularization of the solution space in some way.

The monolithic app has been derided since I started in the software industry. My entire career we have been splitting things into libraries, creating dependencies and fighting the ugly monolith. He’s an easy enemy. A strawman. Kill the monolith!

Companies like Microsoft, Google and Facebook have been known to keep all their code in one big repository. A monolith? Not necessarily, but in some cases yes. Death to the monolith, long live the monorepo?

The distinction between units of code reuse and units of deployment is still something that confuses the issue. Remember n-tier apps? Of course we will handle x in the n tier. In reality there was one server with all the tiers on it. Was the database a tier? How many cycles of splitting and joining database schemas did the app go through?

It was an imperfect abstraction at best. What is a tier?

Now I ask, “what is a service”? What makes a service “micro”?

This post is not for answering questions but for asking them.

When we wrote .NET remoting objects, were those microservices? Why was DCOM so hated? Were they microservices? Remember registries for COM components – service locators in Java? Is this the old new thing, microservices?

Two ideas in computer science that have endured like viruses are functional programming and the UNIX philosophy. Why are these good ideas? Maybe they are bad ideas but somehow they keep popping up like bad pennies. Functional means never having to remember that you said you were sorry. UNIX means dance to your own drum beat, but only one.

Are UNIX commands proto-microservices? Is Bash the proto-orchestration framework? We are giving way too much credit to the shell here. It was UNIX pipes that were the stroke of genius. Does UNIX have side effects? Does it ever. However it’s interesting that there are some general patterns that hold. Standard I/O, return codes, and file handles cover almost everything. Pipes are special files. Sockets can be as well. So two basic subsystems cover almost everything in UNIX.

This brings us back to making some generalizations about service models. Why didn’t CORBA deliver on its promises? .NET remoting? I’m going to put out a strawman here and say – state and side effects.

Sounds like a familiar refrain. Functional programming vilifies these two things as its raison d’être. What if, rather than fighting state and side effects, we use them as the very definition of a service? Monads are designed to encapsulate state. Could the same be said of a microservice? What about side effects? Any time in a system that there is a state boundary, is that a microservice? What about a specific side effect?
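As a thought experiment, a “service” defined as a state boundary might be sketched like this in plain Javascript. The message shapes are invented; this is an analogy, not a framework:

```javascript
// Sketch: the service's state lives behind a narrow interface, and
// every interaction with it goes through messages. The state boundary
// *is* the service boundary.
function counterService() {
  let state = 0; // the only mutable state, fully encapsulated
  return {
    handle(message) {
      switch (message.type) {
        case "increment":
          state += message.by === undefined ? 1 : message.by;
          return { ok: true };
        case "read":
          return { ok: true, value: state };
        default:
          return { ok: false, error: "unknown message" };
      }
    },
  };
}

const svc = counterService();
svc.handle({ type: "increment", by: 2 });
const read = svc.handle({ type: "read" }); // { ok: true, value: 2 }
```

Swap the function call for a network hop and the closure for a database, and the shape is recognizably a microservice: the state cannot be reached except through the message protocol.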

I was going to philosophize on containers and connectivity here too, but I think I’ll save it for next time.

Written by newcome

March 26, 2019 at 2:38 pm

Posted in Uncategorized

Editing Javascript

One of the things I heard over and over again when I first started doing large applications with Javascript was “which IDE do you use?”. At the time I was using Visual Studio. Believe it or not, ancient Visual Studio had pretty good debugging capabilities with Internet Explorer. Everyone knows now that duo did not last.

There was a long stretch of time where I was using the editor of the month, Sublime Text or something along those lines. Extensible editors were all the rage, but it didn’t feel like Javascript was a first-class citizen. I used to tell people that I didn’t use an IDE, which was not really true looking back. I had (and still have to some extent) a collection of VIM scripts that did things like open my editor windows and manage files on the left side. Searching open files and switching buffers. Compared even to the old Visual Studio it was a little crude. But it was fast and it worked.

Later on I worked for a startup doing Python development. IntelliJ’s PyCharm was the IDE that we used there and it was amazing. Just every little thing catered to the language, and I realized that the shortcuts from their Java IDE were the same. Later when I moved back to doing JS I tried out WebStorm and was hooked.

WebStorm is an amazing IDE. JetBrains did a great job at making it a seamless and consistent experience. Over time I think I kept buying it and extending the license. I ended up using Atom on a team at Yahoo along with the other devs and forgot about WebStorm for a while. Atom was a good editor. It was cool because it was written in Javascript. It was a JS app. While this had a certain appeal, there were some limitations that irked me like speed and opening large files. Coming from VIM this is kind of hard to compete with but it bothered me nonetheless.

I joined a new startup and loaded up WebStorm for the first time in a few months and realized my license had expired again. Not wanting to expense this on my first day of a new job, I opened up MS Visual Studio Code (VSCode). I had it installed out of curiosity that MS was doing an open cross-platform editor written in JS like Atom of yore.

I installed some plugins and started using it. Some people at work immediately asked me what I was using, and they started using it too. It was catching on for some reason. One guy found that it had a plugin for Emmet HTML generation and he was hooked. It seemed like overnight everyone was there. I never even advocated for it, it just happened.

Whatever the case, JS is a first-class citizen and there are tools out there that rock for dealing with it. The days when I tell people I don’t use an IDE for Javascript are over.

Written by newcome

January 15, 2018 at 1:25 pm

Posted in Uncategorized

Front-end trends

The last few years have seen an incredible acceleration in the JS and general front-end ecosystem. However, the more things change, the more that they stay the same. The general problem that the Web is solving is still the promise of write-once run most places in some compatible way and without pre-installation. The paradigm is so strong that even native app development is trying to get in on the action.

In the last few months I’ve been involved with a production React Native iOS app and a Chrome extension and a “regular” React JS app. Javascript is everywhere and the tooling has become so fluid that we end up with new tools and best practices between projects only months apart.

When I was working on the Yahoo homepage, the decision to use React had just been made. The Media organization did a full turn to embrace it. The web framework being used was called Touchdown. The “old” Touchdown used Jade templates, believe it or not. This had been pretty cutting edge only a year or two earlier. Since we had control over this framework we went full steam into converting everything to use React.

However, there were a few old-timers around still that would just open up debugging tools or run things through a proxy and point out all of the issues we were ignoring about page weight, multiple requests, slow rendering, web fonts, and a litany of other issues. This is the old school web. The browser makes requests. CSS comes down the wire. Things block. JS is parsed. Rendering happens. Repaints, reflows, DOM thrashing.

React was created to solve a problem. The problem was how to make rendering UI based on data mutations predictable and easy. That’s really it. It didn’t tackle any of these other issues of content delivery or page performance, other than to make disposing and updating DOM very fast and efficient. Effectively re-render for (almost) free. There were some leaks in this abstraction that could be patched manually with things like shouldComponentUpdate.
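The manual patch amounts to a shallow comparison of props. Sketched here as standalone functions rather than a real React class, so it can run anywhere (the names mirror React’s convention but this is an illustration, not React itself):

```javascript
// Shallow comparison of two props objects: same keys, same values
// by reference/strict equality.
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => a[k] === b[k]);
}

// The escape hatch: skip the (cheap but not free) re-render when
// nothing observable changed.
function shouldComponentUpdate(prevProps, nextProps) {
  return !shallowEqual(prevProps, nextProps);
}

shouldComponentUpdate({ count: 1 }, { count: 1 }); // false: skip render
shouldComponentUpdate({ count: 1 }, { count: 2 }); // true: re-render
```

This is exactly the kind of leak the post is talking about: “almost free” re-rendering still sometimes needs a hand-written short-circuit.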

Already there are serious competitors to React like Vue and Marko. Web Components have been on the horizon for some time now. We will likely learn the lessons of React and move on.

The JS ecosystem can be likened to learning how to learn, or designing a process to produce an artifact faster rather than producing the actual artifact. Babel is ubiquitous, so we are transpiling our code everywhere. I wonder if JS is even going to remain. Once browsers support a more general virtual machine and we transpile our JS to that VM, nothing is stopping us from replacing JS wholesale in this ecosystem.

Which brings me back to my initial point. The more things change the more they stay the same. Web front end has nothing to do with JS specifically. It’s HTTP requests and assets. DOM and styling. Browser rendering engines.

Written by newcome

January 15, 2018 at 11:48 am

Posted in Uncategorized


leave a comment »

I’m trying to distill some design wisdom I’ve come across over the years doing front-end work. There’s always this phase of any project where designers have a system of styles and components in Sketch or Illustrator. Sometimes this system is ad-hoc and sometimes it’s quite premeditated and referential. Either way the system must be translated to code by developers.

In the code world, we strive to define things as few times as possible (ideally once) and reuse things where we can while still allowing us to easily break out of the defaults where we need to without much effort.

Getting this all set up is easier said than done. Many times the system is quite clean until partway through the project, when it suddenly gets messy because some assumption made earlier doesn’t apply after all.

I’m being deliberately general here since what I want to do is put forth a few simple rules to derive the specifics from. These are:

Avoid semantic styles

When I was first introduced to atomic CSS I started to realize how problematic commonly created custom CSS rules like `button` and `top-navigation` were. This is the root of unmaintainable, impossible-to-reuse styling. Atomic CSS espouses composition of small rules. If `button` is a rounded-corner box with a border and gradient, it’s better to have rules for the border style and background applied separately to form a component called `button`, rather than describing the styling directly in a one-off CSS rule.
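Sketching the composition idea in Javascript (the atom class names here are invented for illustration; real systems like Atomic CSS or Tachyons have their own vocabularies):

```javascript
// Sketch: a "button" is a named composition of small single-purpose
// atoms rather than a hand-written semantic CSS rule.
const atoms = {
  roundedCorners: "Bdrs-4px",
  border: "Bd-1px",
  gradientBackground: "Bg-grad",
};

// Composition happens at the component level, in code, where it can
// be reused and varied without touching the stylesheet.
function compose(...atomNames) {
  return atomNames.map((name) => atoms[name]).join(" ");
}

const button = compose("roundedCorners", "border", "gradientBackground");
// button is "Bdrs-4px Bd-1px Bg-grad"
```

The semantic name lives in the component, and the stylesheet stays a flat dictionary of tiny reusable rules.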

Themes are pulled instead of pushed

It’s tempting to create page-level rules that define things like font sizes and colors. This is dangerous because it’s hard to prevent unwanted side effects in the components to which the rules are applied. It’s also tempting to create top-level rules that affect only certain things on the page but not others by pure coincidence such that later changes will cause undesirable effects that need to be locally overridden. Theme elements should be available globally but applied locally. I’m referring to this as “pulling” values from a global context rather than having the CSS engine “pushing” things down automatically from the stylesheet.
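A minimal sketch of pulling: the theme is a plain object with well-known keys, and each component reads exactly the values it needs. The theme shape here is an assumption for illustration, not a specific library’s API:

```javascript
// A theme is just data, available globally but applied locally.
const theme = {
  color: { primary: "#336699", text: "#222222" },
  fontSize: { body: "14px", heading: "24px" },
};

// A component "pulls" the values it needs, explicitly. Nothing is
// pushed down on it by a page-level rule it never asked for.
function headingStyle(t) {
  return {
    color: t.color.primary,
    fontSize: t.fontSize.heading,
  };
}

const style = headingStyle(theme);
// style is { color: "#336699", fontSize: "24px" }
```

Because every use site names what it takes from the theme, re-theming means swapping the object, and there are no coincidental page-level selectors to override.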

Themes consist of a fixed set of attributes

Decide on a fixed set of things that are part of a theme. This includes colors, sizes, border radii, etc. Provide default values in the theme. Having well-known theme elements will avoid confusion and allow re-theming to be done smoothly without errors.

Use sequences/functions (scales) for sizes

I learned a secret about layout from a CSS library called Tachyons. The idea here is that you don’t need that many pixel sizes to make a well-designed layout. In fact the sizes/proportions that aren’t used are just as important as the ones that are. Make omission part of the design. An example here (I never used this, and it probably looks bad but it illustrates the point) is to use a Fibonacci sequence for margin sizes [1,1,2,3,5,8,13…]. I think of these like a musical scale. There are certain notes (sizes) in the scale but some are omitted.
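The Fibonacci example from above, sketched as code (purely illustrative; as noted, it probably looks bad in practice, but it shows a scale being derived rather than hand-picked):

```javascript
// Sketch: derive a spacing scale from a sequence instead of choosing
// arbitrary pixel values. Sizes not on the scale simply don't exist.
function fibonacciScale(steps) {
  const scale = [1, 1];
  while (scale.length < steps) {
    scale.push(scale[scale.length - 1] + scale[scale.length - 2]);
  }
  return scale.slice(0, steps).map((n) => `${n}px`);
}

const margins = fibonacciScale(7);
// margins is ["1px", "1px", "2px", "3px", "5px", "8px", "13px"]
```

Like notes in a musical scale, the omitted sizes are part of the design: a layout can only use values the generator emits.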

Distill intent

It’s important to specify the meaning of layout and positioning. What I mean by this is that sometimes designers will nudge spacing around elements so that they line up in some way that isn’t explicit by looking at the design. For example an element might be two-thirds of the way down its parent, but looking at the Sketch layout we see that it’s 235 pixels or something like that. It’s easy to see centering, and sometimes the tools support notions like that, but often implicit layouts are hard for developers to pick out.

The only answer here is to design with these intent rules in mind and call them out or reverse-engineer them with the help of design by asking “what did you mean here?”. Which brings us to the last point:

Specify exceptions

Not everything fits into a neat box and sometimes design is akin to kerning text where we need to take visual weight or color or any one of a hundred different subtle factors into account when doing layout. Exceptional cases should be called out and reused if possible. This is kind of an anti-theme. The things that apply once instead of the things that apply always.

I’m thinking of calling this CSStem (CSS System). I’m not sure what form it will take. Probably a document and some examples along with tooling for generating a style guide/gallery/storybook type of web page as output.

Written by newcome

August 23, 2017 at 5:33 pm

Posted in Uncategorized