
Javascript frontend build tooling

Javascript is an essential tool for building great user interfaces these days, and it has some really mature tooling for setting up a great development environment. One problem I’ve had is that there are many different ways to hook these tools together.

After a lot of trial and error, I found a set that seems to work pretty well for me across different javascript frameworks.


  • write my app with multiple files, but deploy as one js file
  • have a machine handle mechanical boilerplate like IIFEs, strict mode, etc
  • automated tests (with code coverage), that run whenever I change a file
  • static analysis of my code to catch problems
  • local webserver to use with my app, that automatically reloads the page whenever I change a file
  • use ecmascript 6 (ES6) or other “compiles down to js” systems (e.g. typescript, react)
  • use other open source javascript libraries
  • use a small number of tools so it’s easier to understand
  • cross-platform development (sometimes I’m on windows, sometimes on linux)

This feels like a pretty high bar, but javascript has some crazy capabilities, and it’d be a shame not to use them.


npm for tasks and dependencies

I use npm as a task runner and for dependencies. It also seems like everything published to bower is also published to npm, so I’m happy to skip other package managers. Just don’t look too closely in the node_modules folder, it’s madness in there.

The npm scripts are effective for defining tasks without needing to install things globally nor edit my $PATH. Any node-based CLI tools installed to the local node_modules folder via npm install are automatically on the $PATH when using npm scripts. I’d rather spend a little disk space than manage the unintended consequences of shared global state. It also helps reduce setup time for other developers. We can get away with just git clone && npm install && npm start.

I generally make a few custom tasks:

  • npm start – start a local webserver and spin up a bunch of filesystem watchers to run tests and refresh my browser
  • npm run build – create a final bundle for distribution
  • npm run ci – run tests and static analysis for continuous integration bots

I also tend to make subtasks, and chain them together using the limited syntax that is shared between sh and cmd.exe.
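Those chained subtasks end up looking something like this — a hypothetical package.json scripts section, shown as a JavaScript object so it can carry comments (the task names are made up):

```javascript
// Hypothetical package.json "scripts" entries. "&&" is one of the few
// chaining operators that sh and cmd.exe both understand, so portable
// chains stick to it.
const scripts = {
  "lint": "eslint src/",
  "test": "karma start --single-run",
  // "ci" chains the two subtasks; works on both linux and windows
  "ci": "npm run lint && npm run test"
};
console.log(scripts.ci); // prints "npm run lint && npm run test"
```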

exorcist + browserify + plugins for compilation

I use browserify for an awful lot of things. It’s my compiler, translator, minifier, and bundler. I tend to think of it all as compilation, but that’s a simplification.

The most basic feature is using nodejs modules to write my code in many files and specify the dependencies between them using require statements at the top of each file. This nicely mirrors imports in every other programming environment (e.g. using in C#, import in python, require, require_once, include in php, etc).

exorcist creates a separate source map file from the browserify output. This is really useful for testing, since most browsers will load the source map automatically and show you your actual source code in the debugger instead of whatever unreadable output actually gets executed.

Most of the browserify functionality comes through plugins:

eslint + karma + jasmine + phantomjs + plugins for testing

This is probably the first bit of javascript build tooling I ever figured out, and I’ve been using it ever since. karma has a “watch” option where it will keep re-running tests as code changes.

watchify + live-server + npm-run-all for active development

The “reload-as-you-edit” features are part of what makes javascript such a productive environment for user interfaces, especially when you have multiple monitors. I love having a browser / terminal on one monitor showing the UI and test output, and my editor on another monitor. It’s really great to be able to simply save a file, glance over, and see whether my tests pass and the UI looks alright.


All in all that’s 29 direct dependencies, and probably hundreds or thousands of indirect dependencies. That still feels crazy to me, but I think that’s mostly because I have to hook it all up myself. Something like eclipse or visual studio has a ton of moving parts, but it’s one installer so I tend not to think about it. You can see some examples on some of my hobby projects: c4-lab, chore-cat, and kpi-me.

There’s room for improvement (e.g. debugging tests, speed), but this setup has worked out pretty well for me. In my last post I decided against it, but then found some more tools that really brought it all together.

Most of these kinds of setups include grunt or gulp, but I haven’t really needed them. Between npm scripts, browserify, endless plugins, and shell redirection I can accomplish the same results with one less dependency and config file. I feel like if I did adopt grunt or gulp, I’d basically have the same setup, but with their plugins instead of browserify plugins.

AngularJS with ONLY npm and browserify

I’ve been working with javascript and AngularJS a lot recently, both at work and in hobby projects. I’m a big fan of the framework, but like most non-trivial javascript frameworks, it really wants to have a build/compile step. There are a lot of options for javascript build tools. I identified some main contenders: npm, grunt, gulp, browserify, webpack, bower, requirejs.

Recently I’ve been experimenting with just using npm and browserify, and wanted to summarize my results.

TL;DR: works well for small projects, but I think I need to add something like grunt or gulp (g(runt|ulp)).


The benefits I’m looking to get from my build tools:

  • can use many small files: angularjs code is easier for me to write and test if I have many small js/html files. Serving many small files to users is bad for performance (lots of HTTP requests), and I’m bad at maintaining a curated list of script tags for what to serve. Ideally I’m serving one or two files that contain everything my app needs
  • can use third-party libs: there are lots of good open source libraries that I want to use. I’m bad at managing those dependencies (and their dependencies) by hand, and I feel bad committing a minified library into source control

My test was a small web app to track when the last time I did a house chore: when-did-i. The source is up at ryepup/when-did-i. I’m actually experimenting with a bunch of different stuff, but I’m only going to consider npm and browserify here.

Using npm for third-party dependencies

Most libraries are published to npm; I never had an issue with a missing library, and I was able to keep all external libs out of my repo.

The only weird thing is the version specifier in package.json. By default, if you install package X (npm install --save X), it’ll find the current version (say 1.2.3), and then add it to package.json with a version specifier like ^1.2.3. This basically means “1.2.3 or anything newer with a 1.x.y version”. This can cause some surprises, especially if you have a continuous integration setup: your CI robot might be testing different versions than what you are developing against.
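To make the caret behavior concrete, here’s a rough sketch of the matching rule for plain x.y.z versions with a major version of 1 or more (the real logic lives in node’s semver package, which also handles pre-releases and 0.x versions specially):

```javascript
// Simplified caret-range check: "^1.2.3" allows anything >= 1.2.3 but < 2.0.0.
function satisfiesCaret(version, base) {
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  const [bMaj, bMin, bPat] = base.split('.').map(Number);
  if (vMaj !== bMaj) return false;        // a major bump is never allowed
  if (vMin !== bMin) return vMin > bMin;  // a newer minor is allowed
  return vPat >= bPat;                    // same minor: newer patch is allowed
}

console.log(satisfiesCaret('1.4.0', '1.2.3')); // true: your CI bot may get this
console.log(satisfiesCaret('2.0.0', '1.2.3')); // false
```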

The solution to this is npm shrinkwrap, to specify precisely every version of every piece of software you want. It’s basically the equivalent of python’s pip freeze and requirements.txt.

Using npm scripts for build actions

This worked out well… up to a point. Using npm scripts gave me easy access to a lot of npm installed command line programs, without needing to install them globally on my system or muck about with my path. I like keeping the project’s needs self-contained. There are a ton of small tools available on npm to do just about anything.

I like the simplicity; there’s no explicit “target” like other build tools, you just have a name and the command you want to run, with node’s path all set up. Then you can say npm run $NAME and it’ll go. You can add a “pre” or “post” prefix to the name to run other commands before/after. If you want to call your other scripts, you just use npm run my-other-script as part of your command. Pretty easy, pretty basic.
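For example, a hypothetical scripts section using the pre/post convention and one script calling another (again shown as a JS object so it can carry comments):

```javascript
// npm runs "prebuild" automatically before "build", and "postbuild" after it.
const scripts = {
  "prebuild": "eslint src/",                          // runs first
  "build": "browserify src/main.js -o build/app.js",  // invoked as: npm run build
  "postbuild": "echo build finished",                 // runs last
  "check": "npm run build"                            // scripts calling scripts
};
console.log(Object.keys(scripts).join(' -> '));
// prints "prebuild -> build -> postbuild -> check"
```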

The problem arises when you want to do something more complex. The worst one I had was start:

"prestart": "npm install",
"start": "watch 'npm run build' src/ & live-reload --port 9091 ./build/* & ws -d ./build",

Let’s break it down:

  1. a “pre” script to ensure packages are installed first if someone runs npm run start
  2. start the npm-installed watch command to look for file changes in src/, and run npm run build when something changes (this is an example of one command calling another); the trailing & runs watch in the background, which means our “start” script doesn’t work on windows
  3. start the npm-installed live-reload to run a LiveReload server on localhost:9091 to refresh my browser when something changes in build/, and another & to run this in the background
  4. start the npm-installed ws web server to serve the files in my build folder at localhost:8000

With that combination, as I edit my files they get rebuilt and my browser refreshes.

This is a pretty standard frontend development workflow, and I feel like it’s too much to squeeze into a one liner. I could make some short nodejs scripts that launch these services, but at that point I feel like I’m reinventing a wheel and I should just pull in g(runt|ulp).

Using browserify to combine files

I think this worked out pretty well, but also had some quirks. By using require statements, I was able to centralize all my angularjs registrations into one file, which felt really nice and reduced some boilerplate. Each of my javascript files was basically defining one function. I really liked not having to manually specify an IIFE in each file. It also generates all the source maps, so debugging in the browser is referring to my small files, not whatever the bundle produces.

browserify has a pretty rich plugin system, and I used browserify-ng-html2js to support keeping my templates in separate html files. This is another place where npm scripts broke down a little. By default browserify-ng-html2js puts each html file into its own angular module, and then I need to make my main angular module depend on each individual template. This is back to a manually curated list that I’m going to screw up. browserify-ng-html2js has an option to put all the templates into one module, but that only seems to be available if you use g(runt|ulp).

Pulling everything in via npm means I could have one bundled file that contains my code and all its dependencies. This gets to be kind of a big file. I added some machinery to reference some angularjs libs from a CDN, but the easiest path with browserify is to have everything included. I guess if you’re using cache headers well and versioning in the URL this might be alright. Right now I’m at 264KB (73.4KB over the wire), which does include some dependencies. Letting browserify combine ALL my dependencies would more than double the file size. I’m really not sure if that matters, but it makes me nervous.

In the past I’ve used some grunt machinery to maintain the list of scripts to load; I liked this a little better because what I was developing with was closer to what I’m deploying.


I like the browserify and npm combination, but npm scripts are too limited, and another build tool is required. I think npm scripts are good enough as a task runner for simple dev or CI, but build steps just need more configuration. It’s possible that the specific build libraries could better support looking at package.json for configuration, but there’s just a lot more momentum behind using g(runt|ulp).

install pygit2 + libgit2 w/ SSH support on ubuntu

I wanted to use Saltstack‘s gitfs to easily reference salt formulas from my gitolite repository over ssh using key-based authentication. In order for salt-master to do this, you need pygit2, which needs libgit2, which isn’t yet packaged by ubuntu. You can download a libgit2 deb file, but that doesn’t have libgit2 compiled with SSH support, so I had to compile from source:

writing saltstack formulas

SaltStack is a great open-source, cross-platform automation system. It lets you configure servers using declarative yaml files and python. You can create custom “states” using yaml/python, and then say “make server X have state Y”. There’s a lot of plumbing involved which I’m not going to delve into, but it’s pretty neat stuff and doesn’t require punching holes in firewalls.

SaltStack provides a lot of different mechanisms to re-use configuration, and recently I’ve been working a lot with salt formulas as a mechanism to create and re-use custom states.

From the docs:

Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring, and starting a service, setting up users or permissions, and many other common tasks.

These are implemented as a git repository with one formula in it, and then there are a variety of ways to reference a formula. It’s common to customize the configuration using salt’s pillar mechanism (yaml again).

Some thoughts on writing formulas:

  • the pillar.example file should show an in-depth example of what pillar customizations are available
  • this community has settled on restructured text over markdown
  • map.jinja files are used to set defaults and switch options based on OS
    • these defaults should match what’s in pillar.example, which is annoying to maintain by hand
  • there are a million formulas on github, but the nature of salt states lends to writing really opinionated states that are super-specific to one’s infrastructure. It’s really hard to make a formula generic enough or sufficiently customizable, unless you’re doing something so trivial that it’s likely not worth a formula. There’s just too many choices that need to be made. For example, a formula to install cacti has a ton of choices:
    • install from source, or from a package manager?
    • apache or nginx?
    • any cacti plugins?
    • which poller, spine or cmd.php?
    • install the non-free snmp MIBs, or stick with the free (as-in-everything) option?
  • if you find a repo on github, you should fork it and reference your fork, even if it’s perfect the way it is. If you reference someone else’s formula directly, then you might have problems if they make breaking changes, or security issues if someone slips a backdoor into the formula. Each formula is third-party code running (usually as root!) on your systems, so you need to be careful to read and understand every formula
  • I found myself forking an existing formula that’s close to what I want, and then thoroughly gutting it until it’s customized for my specific infrastructure. This is due to insufficient configuration options, not sharing my opinions on how to set something up, or being too specific to the creator’s infrastructure
  • after writing a formula, it seems like it should be subdivided into many smaller formulas so they each do one thing, but managing dependencies between formulas is an open issue. If you split it up in a way that feels right, your install instructions get way longer. Instead of “fork and install my formula and salt-call state.sls my-formula“, it’s “fork and install these N formulas, then salt-call …”
  • if you start a formula from scratch, fork the template-formula
  • jinja has some powerful abstraction mechanisms via macros and imports. It’s tempting to write a custom salt module in python, but a lot can be done just right in jinja. This is a double-edged sword
  • there are a ton of helpful examples in the official saltstack-formulas

Happy formulating!

Things I learned when re-learning ASP.NET

ASP.NET has changed dramatically in the past five years. I’ve had the privilege to work on some projects using the newer web stacks, as well as modernize an old project.

I’d been away from the Microsoft ecosystem from around 2006 until 2011, and after working in dynamic languages (javascript, python, lisp), it took me some time to figure out what kind of code the Microsoft ecosystem was happiest with.

I threw a lot of pasta at the wall, and here are a few combinations that seemed to stick.


I had the smallest friction with this combination of libraries:

  • Entity Framework (EF): for database access, schema evolution, and ORM
  • Ninject: dependency injection (DI) library
  • ASP.NET MVC: fully featured web framework for serving user interfaces, with a filter/DI system that provides a lot of flexibility
  • ASP.NET WebApi: fully featured web framework for serving JSON/XML APIs, with a filter/DI system that provides a lot of flexibility. Makes a great backend for javascript frameworks
  • Moq: mock object library; makes it easy to fake any interface to minimize test setup/teardown
  • LocalDb: Microsoft’s answer to sqlite, a file-based database that’s SQL Server compatible but does not require any server or licensing
  • AutoMapper: quickly copy data from one type to another without a lot of repetitive code
  • log4net: very flexible logging; I bet it’s one of the oldest active C# open source libraries, I’ve been using it for years
  • FluentValidation: input validation library; lets you specify complex validation rules for a class

Everything is available via nuget, except for the LocalDb installer. Some of these warrant further discussion.

Entity Framework (EF)

Very fully featured data access library. There’s a ton of depth here.


  • define db schemas using C# classes mostly through properties and attributes, able to use inheritance to reduce duplication
  • specify db schema evolution using tool-assisted explicit migrations; add a new field to one of the DB classes, call Add-Migration NewField-ticket42, and most of the work is done, including a natural place to add data fixes
  • generate SQL scripts of pending migrations (EF stores some metadata in your database) for deployment
  • LocalDb support
  • linq support that builds crazy but reasonably efficient SQL queries that make it easy to select the minimal amount of data you need. You can select a few properties from your db object without fetching a whole row, do joins, etc
  • provides a database cache; repeated requests to fetch a db object against the same db context are free
  • transaction support; any operations (including inserting complex relationships) on a db context are pending until you call SaveChanges
  • can work against existing databases, with some care


  • the linq support can be surprising; some things just aren’t allowed and it isn’t always obvious. Thankfully the exceptions are thrown early and have good messages
  • it’s easy to accidentally load way more data than you want
  • exception messages for some db errors can be obtuse, or require debugging to examine a property on the exception
  • really, really, really wants you to run all your DB operations against one instance of the DB context (i.e. per HTTP request). Things get really weird if you try to use a db object between two db context instances
  • is happier with updates if you load a db object in full, change the properties, then save
  • it can be tricky to sort out how to add a graph of db objects without fetching the whole DB or calling SaveChanges multiple times to get autoincrement ids. Totally doable, but easy to screw up
  • EF’s migrations require your db classes to be free of compiler errors, which leads to putting your db classes in a different DLL from the rest of your application. If you change a db class in a way that breaks the rest of your application, then unless the db classes are in a different assembly, you have to update your entire application before you can figure out the migration. This leads to other weirdness and tough questions like “which assembly should this DTO/interface go in?”


ASP.NET MVC and ASP.NET WebApi

ASP.NET WebApi and ASP.NET MVC are very similar, and the two are being combined in ASP.NET vNext. They can also work together in one web project, albeit with different namespaces. It’ll be nice when ASP.NET vNext unifies these namespaces.


  • explicitly maps URLs and HTTP verbs to an action method on a controller class
  • filter system that lets you run code before and after your action
  • naming convention-based system to choose templates for controller actions, with helpful error messages telling you what names the framework expected
  • automatically serialize/deserialize between HTTP GET/POST/PUT/DELETE data into C# objects, with hooks to customize the process. Ends up acting a lot like method injection
  • hooks for how the framework instantiates controllers to let you use a DI library for controller creation
  • really likes view models; define all the data you want in a template as a class, create it in your controller action, and pass it to the template. Viewmodels are easy to test
  • the ASP.NET MVC template system has nice helpers like EditorFor and DisplayFor to render UI for view models
  • Since controllers are plain classes, you can new them up in tests and pass in different input without running a web server
  • lots of plugins and helper libraries on nuget


  • MVC wants you to group your files by type, not by feature. This puts your template far away from the (usually) single controller that uses it
  • no easy way to share templates between different projects
  • ASP.NET WebApi and ASP.NET MVC have a lot of the same classes in different namespaces. If you’re using them together it can get confusing if you want a System.Web.Http.Filters.IAuthenticationFilter and accidentally autocomplete the wrong using statement and end up with a System.Web.Mvc.Filters.IAuthenticationFilter
  • visual studio refactoring tools like “Rename” do NOT change your templates
  • any code in your templates is technically in a different assembly, so anything in your viewmodel you want to use in a template needs to be public
  • some built-in MVC helpers look at attributes on your viewmodel for how to render, validate, etc. If you follow that approach, then changing a <label> requires a recompile and a bunch of “go to definition”. Writing HTML directly in your templates seems easier than using helpers that scatter your UI text across your viewmodels. The helpers are probably fine if you need internationalization or localization, but if you don’t then they just feel like extra hoops


Ninject

Pretty straightforward dependency injection library. You tell it what implementation you want for each interface at application start, and then ask it to create all your objects.

class Foo : IFoo { public Foo(IBar b){} }
class Bar : IBar { }
// on app start:
var kernel = new Ninject.StandardKernel();
kernel.Bind<IFoo>().To<Foo>();
kernel.Bind<IBar>().To<Bar>();
// later, anywhere:
var foo = kernel.Get<IFoo>(); // new Foo(new Bar())


  • easy to use
  • makes it practical to write many small, easily testable classes and not have to wire them up by hand
  • good error messages
  • supports contexts so you can say “make one DBContext per HTTP request”
  • tons of options
  • lots of plugins on nuget:


  • depending on what version of ASP.NET MVC/WebApi you’re using, there are different nuget packages and install instructions, take care you’re using the right approach
  • “go to definition” gets less useful, since it’ll lead you to the interface, not to what ninject is actually instantiating at runtime
  • it encourages many small classes, so you end up following a long chain of “go to definition” to find the class actually doing the work


LocalDb

This is the dev/test database I’ve always wanted.


  • can run tests against a full DB. At the beginning of your test run create a new LocalDb file, run all your EF migrations, then run each test case in a DB transaction
  • great for running dev sites on your workstation without a ton of setup


  • sometimes the files get in a weird state and you have to change your connection string to get a new file
  • I expect some subtle difference between this and a full SQL Server that will bite you if you use proprietary SQL Server features


AutoMapper

I kept going back and forth on this one; sometimes it’s a great time saver, sometimes it’s a huge hassle. Overall I think it’s a win.

Uses reflection to convert code like:

class Whatever {
  public Foo Copy(Bar b){
    return new Foo{
      Name = b.Name,
      Title = b.Title,
      Message = b.Message
    };
  }
}
into something like:

class Whatever{

  static Whatever(){
    Mapper.CreateMap<Bar, Foo>();
  }

  public Foo Copy(Bar b){ return Mapper.Map<Foo>(b); }
}

When you have a lot of data transfer objects (DTOs), it can be really common to want to copy fields from one type to another.


  • reduces annoying boilerplate
  • can specify explicit mappings when names don’t match
  • can map collections and nested objects
  • works with EF and linq to map from your db objects and minimize how much you fetch from the db
  • you create mappings at application start, and they are cached from then on so you don’t pay a big reflection penalty


  • is really much happier when the property names match exactly
  • gets really finicky about mapping collections from linq
  • error messages could be better
  • refactorings like “rename” could introduce runtime errors if you don’t have good test coverage
  • copying data a lot isn’t a great idea; using more interfaces might eliminate the need
  • complex mappings that apply to multiple classes are difficult to reuse


FluentValidation

Specify validation rules with a testable, fluent interface:

class FooValidator : FluentValidation.AbstractValidator<Foo> {
  public FooValidator(){
    RuleFor(x => x.Name).NotEmpty();
  }
}

// validate
new FooValidator().Validate(new Foo());


  • easy to test
  • plugins to automatically validate deserialized MVC or WebApi requests
  • lots of validation rules
  • custom validations are straightforward to implement


  • need to pass the right type into AbstractValidator<T>, which can lead to messy generic type signatures if you want to re-use validation rules between parent/child classes. Using extension methods to re-use rules is sometimes easier
  • custom error messages are defined in your validator class, which can be far away from the UI that displays them

Different kinds of angularjs directives

Angularjs directives are a powerful tool. Like many powerful tools, they can take some time to figure out, and it’s easy to create a working solution that you’ll regret later. I’ve been using angular in small projects for a few years now, and have come up with a few different ways to think about and classify simple directives.

Component Directive

A logical user-interface element: the trio of directive / template / controller. Usually has an isolated scope, and is self-contained. These are like .NET user or server controls. Much of john papa’s style guide advice applies to the controller and views used in a component directive.

Use attributes to pass data in; data gets out via two-way data binding, events, or calling services.

Component directives are not composable.

Usage looks like <my-component ng-model="vm.thing"/> (or <div my-component> depending on IE support).


Mixin Directive

Add behavior to an existing element. Doesn’t isolate scope, but may read attributes directly. Rarely has a controller, mostly is just a link function.

Similar to bootstrap data- annotations or the approach of “unobtrusive” jquery plugins that function based on custom attributes.

Mixin directives are meant to be composable.

Usage looks like <input ng-model="vm.thing" my-mixin/>.

I haven’t found many cases to write these, but many of the built-in directives fall into this category: ngDisabled, ngClass, etc.

DOM Macro Directive

Adds DOM to or around an existing element. Uses transclusion when possible, direct angular.element manipulation if not. Doesn’t have a scope, but may read attributes directly in the link function. Rarely has a controller; mostly it is just a link function.

Many of the angular bootstrap directives fall into this category; they add a bunch of HTML so you don’t have to type it again.

DOM macro directives are composable, but you’ll likely need to mess with priority to get the results you want.


Meta Directive

Directive that adds other directives. Doesn’t isolate scope, rarely has a controller, reads configuration from attributes or services. Has some complicated boilerplate. The meta directive is similar to a DOM macro directive, and helps reduce repetitive angular directives.

Meant to be composable, which is the primary reason to use these.

Usage looks like <button ng-click="vm.remove()" my-remove-button/>.

Most meta directives could be implemented as component directives or DOM macro directives, but with less flexibility. For example, we could implement a remove button as a component, and use it like <my-remove-button ng-click="vm.remove()"/>.

But then we want to disable the button sometimes. In the macro directive, we can just add ngDisabled:

<button ng-click="vm.remove()" my-remove-button ng-disabled="vm.canRemove"/>

For the component directive, we can do the same thing, but we need to teach the component about ng-disabled, and its template needs to render as <button ng-disabled />. This is straightforward work, but still work.

The remove button isn’t a great example since it’s so simple, but it shows the weird boilerplate needed. Consider wanting common settings throughout your application for a more complicated directive like a datepicker.


Other references

installing wiringPi on openelec

I’ve been having fun with Raspberry Pis recently. I’ve got one set up with openelec that I bring on camping trips, with some movies on a USB hard drive. By default, the pi doesn’t provide enough power on its USB ports, so I had to use an external USB hub. This meant two AC adapters, and a lot more cabling. After some research, I found it’s possible to increase the power output on the USB ports (lock in auxiliary power?) from 600mA to 1.2A. Testing & Setting the USB current limiter on the Raspberry Pi B+ has the details; you can use gpio to temporarily bump the power, and then edit config.txt to retain that setting on reboot.

gpio is part of wiringPi, which has a pretty straightforward “git clone then build” installation story, but openelec is pretty locked down; no apt-get. To get a working copy of gpio, I followed the install instructions on another pi, then copied the compiled output over. But gpio relies on some shared C libraries, and didn’t run for me:

    ./gpio: error while loading shared libraries: libwiringPi.so: cannot open shared object file: No such file or directory

To see which shared libraries gpio needs, we can run ldd:

    /lib/ld-linux-armhf.so.3 (0xb6ef2000)
    libwiringPi.so => not found
    libwiringPiDev.so => not found
    libm.so.6 => /lib/libm.so.6 (0xb6ed2000)
    libpthread.so.0 => /lib/libpthread.so.0 (0xb6e5f000)
    libc.so.6 => /lib/libc.so.6 (0xb6d39000)
    /lib/ld-linux-armhf.so.3 (0xb6efc000)

So it needs to find libwiringPi.so and libwiringPiDev.so. The wiringPi compiled output has wiringPi/libwiringPi.so and devLib/libwiringPiDev.so, so we make a directory with those symlinked in, and then tell C to look for shared libraries there.

    mkdir /storage/lib
    ln -s /storage/wiringPi/wiringPi/libwiringPi.so /storage/lib/
    ln -s /storage/wiringPi/devLib/libwiringPiDev.so /storage/lib/
    export LD_LIBRARY_PATH=/storage/lib:$LD_LIBRARY_PATH

Now gpio can find its libraries. I was able to test my usb hard drive, and it works with 1.2A!

How do I know my code works?

I know the code for my new feature works because:

  1. I wrote it
  2. my IDE doesn’t underline anything in red
  3. my linting program doesn’t report any issues
  4. my runtime / compiler reports no syntax/compile errors
  5. the runtime / compiler on my build server reports no syntax/compile errors from a fresh checkout
  6. automated tests for my feature pass
  7. all automated tests pass
  8. all automated tests pass when run by my build server from a fresh checkout
  9. I peer-reviewed the code for my new feature
  10. I manually tested the new feature
  11. I manually tested other features and they still work
  12. Someone else manually tested the new feature
  13. Someone else manually tested other features and they still work
  14. I deployed it to production
  15. it’s been in production for an hour
  16. it’s been in production for a day
  17. it’s been in production for a week
  18. it’s been in production for a month
  19. it’s been in production for a year

Deploying .NET COM objects with MSBuild

At work I’ve been working with a very old classic ASP website, running on very old hardware. After a few weeks of hacking vbscript, I’m sorely missing C#, and especially unit tests. I feel so exposed when there are no tests proving my code does what I think it does. For reasons I’ll not go into, deploying aspx pages is not an option, so I spent some time exploring creating COM objects in .NET and consuming them from vbscript using Server.CreateObject. This seems like it will work really well for my purposes. This morning I sorted out deployment, which didn’t have any easy google answers.

My goal is to deploy via msbuild with a few commands:

  • MSBuild.exe /t:DeployCOM /p:env=test to deploy to my test site
  • MSBuild.exe /t:DeployCOM /p:env=prod to deploy to production

TL;DR: see deploy-com-dll.targets


Before getting into XML, to deploy a COM DLL, there are a few things to do:

  • unregister the old version of the DLL. I’m trying to avoid DLL Hell by only having one version of my DLL installed on the server, and explicitly not maintaining backward compatibility
  • restart IIS; if my COM object has been loaded in IIS, then the DLL file on disk is locked
  • copy new DLL to the server
  • register the new DLL
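Done by hand with psexec, those four steps would look roughly like the following; the server name, paths, and assembly name here are placeholders for illustration, not values from the actual deployment:

```
:: unregister the old version (may fail harmlessly on first deploy)
psexec \\webserver C:\Windows\Microsoft.NET\Framework\v4.0.30319\regasm.exe /u /codebase /tlb:C:\com\My.Interop.tlb C:\com\My.Interop.dll

:: restart IIS so the loaded DLL is unlocked
psexec \\webserver iisreset /restart

:: copy the new build to the server
copy bin\Release\*.* \\webserver\c$\com\

:: register the new DLL
psexec \\webserver C:\Windows\Microsoft.NET\Framework\v4.0.30319\regasm.exe /codebase /tlb:C:\com\My.Interop.tlb C:\com\My.Interop.dll
```

The MSBuild target below automates exactly this sequence.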

MSBuild design

MSBuild conveniently includes the RegisterAssembly and UnregisterAssembly tasks to handle the COM registration, but unfortunately these don’t work with remote machines, so we have to do things by hand.

MSBuild implementation

The XML is available as deploy-com-dll.targets.

First up, our conditional PropertyGroups:

<PropertyGroup Condition="'$(Env)' == 'test'">
  <!-- UNC path where we publish code -->
  <!-- the remote path to where our code is deployed -->
  <!-- the computer name for psexec -->
</PropertyGroup>

<PropertyGroup Condition="'$(Env)' == 'prod'">
  <!-- same properties, with production values -->
</PropertyGroup>

Pretty straightforward: this tells us where to copy our files to, and gives the remote path so that when we call regasm we can tell it where to look.
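As a sketch, the test group might look something like this. The server name, share path, and the PublishShare/RemoteFolder property names are invented for illustration; Computer is the property the DeployCOM target actually uses:

```xml
<PropertyGroup Condition="'$(Env)' == 'test'">
  <!-- UNC path where we publish code (hypothetical value) -->
  <PublishShare>\\test-web01\deploy</PublishShare>
  <!-- the remote path to where our code is deployed (hypothetical value) -->
  <RemoteFolder>C:\inetpub\deploy</RemoteFolder>
  <!-- the computer name for psexec -->
  <Computer>\\test-web01</Computer>
</PropertyGroup>
```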

Next, our unconditional PropertyGroup:

  <PropertyGroup>
    <!-- COMPublishFolder: where to copy our COM assembly -->
    <!-- COMFolder: remote path to where we published our COM assembly -->
    <!-- COMAssembly: remote path to our COM assembly -->
    <!-- COMTypeLib: remote path to our typelib -->
    <!-- PsExecBinary: local path to psexec -->
    <!-- RegAsm: remote path to regasm.exe -->
    <!-- iisreset: remote path to iisreset.exe -->
  </PropertyGroup>

Again, pretty straightforward: a lot of derived properties, specifying where to publish the COM code, and more remote paths so psexec can find the regasm and iisreset executables and our COM assembly.
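A hedged sketch of how those derived properties might be defined; the folder and file values are invented, while the property names are the ones the DeployCOM target references:

```xml
<PropertyGroup>
  <!-- where to copy our COM assembly (share path is hypothetical) -->
  <COMPublishFolder>\\test-web01\deploy\com</COMPublishFolder>
  <!-- remote path to where we published our COM assembly -->
  <COMFolder>C:\inetpub\deploy\com</COMFolder>
  <!-- remote path to our COM assembly -->
  <COMAssembly>$(COMFolder)\My.Interop.dll</COMAssembly>
  <!-- remote path to our typelib -->
  <COMTypeLib>$(COMFolder)\My.Interop.tlb</COMTypeLib>
  <!-- local path to psexec -->
  <PsExecBinary>$(MSBuildProjectDirectory)\tools\psexec.exe</PsExecBinary>
  <!-- remote path to regasm.exe -->
  <RegAsm>C:\Windows\Microsoft.NET\Framework\v4.0.30319\regasm.exe</RegAsm>
  <!-- remote path to iisreset.exe -->
  <iisreset>C:\Windows\System32\iisreset.exe</iisreset>
</PropertyGroup>
```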

Lastly, our Target:

<Target Name="DeployCOM">
  <ItemGroup>
    <!-- local path to our compiled, strongly-named COM assembly and its dependencies -->
    <FilesToCopy Include="$(MSBuildProjectDirectory)\src\My.Interop\bin\Release\*.*"/>
  </ItemGroup>
  <MakeDir Directories="$(COMPublishFolder)"/>
  <!-- unregister the old version of the DLL -->
  <Exec Command="$(PsExecBinary) $(Computer) $(RegAsm) /u /codebase /tlb:$(COMTypeLib) $(COMAssembly)"
        IgnoreExitCode="true"/>
  <!-- restart IIS -->
  <Exec Command="$(PsExecBinary) $(Computer) $(iisreset) /restart" IgnoreExitCode="true"/>
  <!-- copy new DLL to the server -->
  <Copy SourceFiles="@(FilesToCopy)"
        DestinationFolder="$(COMPublishFolder)"/>
  <!-- register the new DLL -->
  <Exec Command="$(PsExecBinary) $(Computer) $(RegAsm) /codebase /tlb:$(COMTypeLib) $(COMAssembly)"/>
</Target>

There are some comments inline there breaking out the steps. A few important notes:

  • we’re using the /codebase flag to regasm; this means we are NOT installing our .NET assembly in the global assembly cache, so the COM system needs to know to look in our COMFolder to resolve .NET references. Without this, we would also need to call gacutil to install our assembly, and any third-party libraries we’ve referenced, into the GAC
  • the COM assembly needs to be strongly named, and therefore all its dependencies need to be strongly named. Compiling and signing the assembly is out of scope here, but there are plenty of good tutorials out there.
  • in a few places we ignore the exit code of the exec call; on first deployment those tasks might fail
  • this is a really heavy-handed approach with IIS. Using more of the versioning features in COM and .NET assemblies would let you install multiple versions of everything, so you can publish a new version and then update your vbscript code to use it. In my case, I have huge maintenance windows every night for IIS restarts, but that option isn’t available to everyone.


It’s ugly, but it works, and it’s a hell of a lot prettier than writing vbscript; I can write tests and get some confidence that my code works before it’s in production.

Visualizing call graphs in lisp using swank and graphviz

Last week I was doing some cleanup work (short holiday weeks are great for paying off technical debt), and was deleting some supposedly unused code. This was a pretty tedious process of running functions like slime-who-calls and slime-who-references, running git grep -i on the command line, and undefining functions in just the right order.

I’ve seen a lot of articles recently on static analysis of code, and spent some time playing with the introspection features of slime to identify unused code (short holiday weeks are also great for following tangents). I ended up with a slow mudball of code that worked pretty well.

Warning, large images coming up.

The code itself is up on github, but there’s no ASDF system yet, so you have to load it manually:

(require :iterate)
(require :alexandria)
(require :swank)
(load "~/lisp/static-analysis/static-analysis.lisp")
(in-package :static-analysis)

A truncated example:

STATIC-ANALYSIS> (call-graph->dot :alexandria )
digraph g{
subgraph clusterG982{
subgraph clusterG949{
G983 -> G995
G951 -> G950

Here’s what it actually looks like:

The code currently scans all loaded code, and puts the functions from each package in its own graphviz subgraph. The graph of an entire package for all loaded code isn’t really that useful, so I made another function to narrow it down. Here I’m specifying the list of packages to render, and the list of functions to show.

STATIC-ANALYSIS> (->dot (function-call-graph '(:alexandria) '(alexandria:rotate)))
digraph g{
subgraph clusterG1109{
G1040 [label="ROTATE-HEAD-TO-TAIL"]
G1049 [label="SAFE-ENDP"]
G1051 [label="PROPER-LIST-LENGTH"]
G1042 [label="ROTATE-TAIL-TO-HEAD"]
G1041 [label="ROTATE"]
G1040 -> G1051
G1051 -> G1049
G1051 -> G1054
G1042 -> G1051
G1041 -> G1040
G1041 -> G1042
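To turn that dot output into an image, save it to a file and run it through Graphviz (the filename here is just an example):

```
dot -Tpng rotate.dot -o rotate.png
```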

Some systems have very complicated call graphs. At work we do a lot with clsql, and the overall call graph even from one function can get complicated quick:

So I added a depth param to keep the graph smaller, let’s say 3:

 (function-call-graph '(:clsql-sys :clsql-sqlite3) ... 3)

Anyhoo, a fun toy, and I had a fun time writing it.