
AngularJS with ONLY npm and browserify

I’ve been working with javascript and AngularJS a lot recently, both at work and in hobby projects. I’m a big fan of the framework, but like most non-trivial javascript frameworks, it really wants to have a build/compile step. There are a lot of options for javascript build tools. I identified some main contenders: npm, grunt, gulp, browserify, webpack, bower, requirejs.

Recently I’ve been experimenting with just using npm and browserify, and wanted to summarize my results.

TL;DR: works well for small projects, but I think I need to add something like grunt or gulp (g(runt|ulp)).

Goals

The benefits I’m looking to get from my build tools:

  • can use many small files: angularjs code is easier for me to write and test if I have many small js/html files. Serving many small files to users is bad for performance (lots of HTTP requests), and I’m bad at maintaining a curated list of script tags for what to serve. Ideally I’m serving one or two files that contain everything my app needs
  • can use third-party libs: there are lots of good open source libraries that I want to use. I’m bad at managing those dependencies (and their dependencies) by hand, and I feel bad committing a minified library into source control

My test was a small web app to track the last time I did a house chore: when-did-i. The source is up at ryepup/when-did-i. I’m actually experimenting with a bunch of different stuff, but I’m only going to consider npm and browserify here.

Using npm for third-party dependencies

Most libraries are published to npm; I never hit a missing library, and was able to keep all external libs out of my repo.

The only weird thing is the version specifier in package.json. By default, if you install package X (npm install --save X), it’ll find the current version (say 1.2.3), and then add it to package.json with a version specifier like ^1.2.3. This basically means “1.2.3” or anything newer with a 1.x.y version. This can cause some surprises, especially if you have a continuous integration setup. Your CI robot might be testing different versions than what you are developing against.

The solution to this is npm shrinkwrap, which specifies precisely every version of every piece of software you want. It’s basically the equivalent of python’s pip freeze and requirements.txt.
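For example (version numbers illustrative, not from the actual project), where npm install --save might record "angular": "^1.3.15" in package.json, running npm shrinkwrap writes an npm-shrinkwrap.json that pins the exact resolved versions, roughly:

```json
{
  "name": "when-did-i",
  "version": "0.0.1",
  "dependencies": {
    "angular": {
      "version": "1.3.15",
      "from": "angular@^1.3.15",
      "resolved": "https://registry.npmjs.org/angular/-/angular-1.3.15.tgz"
    }
  }
}
```

Commit that file, and your CI robot installs exactly what you developed against.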

Using npm scripts for build actions

This worked out well… up to a point. Using npm scripts gave me easy access to a lot of npm installed command line programs, without needing to install them globally on my system or muck about with my path. I like keeping the project’s needs self-contained. There are a ton of small tools available on npm to do just about anything.

I like the simplicity; there’s no explicit “target” like other build tools, you just have a name and the command you want to run, with node’s path all set up. Then you can say npm run $NAME and it’ll go. You can add a “pre” or “post” prefix to the name to run other commands before/after. If you want to call your other scripts, you just use npm run my-other-script as part of your command. Pretty easy, pretty basic.
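As a sketch (script bodies are illustrative, not copied from my project), the scripts section of package.json might look like:

```json
{
  "scripts": {
    "prebuild": "mkdir -p build",
    "build": "browserify src/app.js -o build/bundle.js",
    "test": "npm run build && mocha"
  }
}
```

Running npm run build automatically runs prebuild first, and test shows one script calling another via npm run.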

The problem arises when you want to do something more complex. The worst one I had was start:

"prestart": "npm install",
"start": "watch 'npm run build' src/ & live-reload --port 9091 ./build/* & ws -d ./build",

Let’s break it down:

  1. a “pre” script to ensure packages are installed first if someone runs npm run start
  2. start the npm-installed watch command to look for file changes in src/, and run npm run build when something changes (this is an example of one command calling another); then we have an & to run watch in the background – this means our “start” script doesn’t work on windows
  3. start the npm-installed live-reload to run a LiveReload server on localhost:9091 to refresh my browser when something changes in build/, and another & to run this in the background
  4. start the npm-installed ws web server to serve the files in my build folder at localhost:8000

With that combination, as I edit my files they get rebuilt and my browser refreshes.

This is a pretty standard frontend development workflow, and I feel like it’s too much to squeeze into a one liner. I could make some short nodejs scripts that launch these services, but at that point I feel like I’m reinventing a wheel and I should just pull in g(runt|ulp).

Using browserify to combine files

I think this worked out pretty well, but also had some quirks. By using require statements, I was able to centralize all my angularjs registrations into one file, which felt really nice and reduced some boilerplate. Each of my javascript files was basically defining one function. I really liked not having to manually specify an IIFE in each file. It also generates all the source maps, so debugging in the browser is referring to my small files, not whatever the bundle produces.

browserify has a pretty rich plugin system, and I used browserify-ng-html2js to support keeping my templates in separate html files. This is another place where npm scripts broke down a little. By default browserify-ng-html2js puts each html file into its own angular module, and then I need to make my main angular module depend on each individual template. This is back to a manually curated list that I’m going to screw up. browserify-ng-html2js has an option to put all the templates into one module, but that only seems to be available if you use g(runt|ulp).

Pulling everything in via npm means I could have one bundled file that contains my code and all its dependencies. This gets to be kind of a big file. I added some machinery to reference some angularjs libs from a CDN, but the easiest path with browserify is to have everything included. I guess if you’re using cache headers well and versioning in the URL this might be alright. Right now I’m at 264KB (73.4KB over the wire), which does include some dependencies. Letting browserify combine ALL my dependencies would more than double the file size. I’m really not sure if that matters, but it makes me nervous.

In the past I’ve used some grunt machinery to maintain the list of scripts to load; I liked this a little better because what I was developing with was closer to what I’m deploying.

Conclusion

I like the browserify and npm combination, but npm scripts are too limited, and another build tool is required. I think npm scripts are good enough as a task runner for simple dev or CI, but build steps just need more configuration. It’s possible that the specific build libraries could better support looking at package.json for configuration, but there’s just a lot more momentum behind using g(runt|ulp).

install pygit2 + libgit2 w/ SSH support on ubuntu

I wanted to use Saltstack‘s gitfs to easily reference salt formulas from my gitolite repository over ssh using key-based authentication. In order for salt-master to do this, you need pygit2, which needs libgit2, which isn’t yet packaged by ubuntu. You can download a libgit2 deb file, but that doesn’t have libgit2 compiled with SSH support, so I had to compile from source:

writing saltstack formulas

SaltStack is a great open-source, cross-platform automation system. It lets you configure servers using declarative yaml files and python. You can create custom “states” using yaml/python, and then say “make server X have state Y”. There’s a lot of plumbing involved which I’m not going to delve into, but it’s pretty neat stuff and doesn’t require punching holes in firewalls.

SaltStack provides a lot of different mechanisms to re-use configuration, and recently I’ve been working a lot with salt formulas as a mechanism to create and re-use custom states.

From the docs:

Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring, and starting a service, setting up users or permissions, and many other common tasks.

These are implemented as a git repository with one formula in it, and then there are a variety of ways to reference a formula. It’s common to customize the configuration using salt’s pillar mechanism (yaml again).

Some thoughts on writing formulas:

  • the pillar.example file should show an in-depth example of what pillar customizations are available
  • this community has settled on reStructuredText over markdown
  • map.jinja files are used to set defaults and switch options based on OS
    • these defaults should match what’s in pillar.example, which is annoying to maintain by hand
  • there are a million formulas on github, but the nature of salt states lends to writing really opinionated states that are super-specific to one’s infrastructure. It’s really hard to make a formula generic enough or sufficiently customizable, unless you’re doing something so trivial that it’s likely not worth a formula. There’s just too many choices that need to be made. For example, a formula to install cacti has a ton of choices:
    • install from source, or from a package manager?
    • apache or nginx?
    • any cacti plugins?
    • which poller, spine or cmd.php?
    • install the non-free snmp MIBs, or stick with the free (as-in-everything) option?
  • if you find a repo on github, you should fork it and reference your fork, even if it’s perfect the way it is. If you reference someone else’s formula directly, then you might have problems if they make breaking changes, or security issues if someone slips a backdoor into the formula. Each formula is third-party code running (usually as root!) on your systems, so you need to be careful to read and understand every formula
  • I found myself forking an existing formula that’s close to what I want, and then thoroughly gutting it until it’s customized for my specific infrastructure. This is due to insufficient configuration options, not sharing my opinions on how to set something up, or being too specific to the creator’s infrastructure
  • after writing a formula, it seems like it should be subdivided into many smaller formulas so they each do one thing, but managing dependencies between formulas is an open issue. If you split it up in a way that feels right, your install instructions get way longer. Instead of “fork and install my formula and salt-call state.sls my-formula“, it’s “fork and install these N formulas, then salt-call each one”
  • if you start a formula from scratch, fork the template-formula
  • jinja has some powerful abstraction mechanisms via macros and imports. It’s tempting to write a custom salt module in python, but a lot can be done just right in jinja. This is a double-edged sword
  • there are a ton of helpful examples in the official saltstack-formulas

Happy formulating!

Things I learned when re-learning ASP.NET

ASP.NET has changed dramatically in the past five years. I’ve had the privilege to work on some projects using the newer web stacks, as well as modernize an old project.

I’d been away from the Microsoft ecosystem from around 2006 until 2011, and after working in dynamic languages (javascript, python, lisp), it took me some time to figure out what kind of code the Microsoft ecosystem was happiest with.

I threw a lot of pasta on the wall, and here are a few that seemed to stick.

Libraries

I had the smallest friction with this combination of libraries:

  • Entity Framework (EF): for database access, schema evolution, and ORM
  • Ninject: dependency injection (DI) library
  • ASP.NET MVC: fully featured web framework for serving user interfaces, with a filter/DI system that provides a lot of flexibility
  • ASP.NET WebApi: fully featured web framework for serving JSON/XML APIs, with a filter/DI system that provides a lot of flexibility. Makes a great backend for javascript frameworks
  • Moq: mock object library, makes it easy to fake any interface to minimize test setup/teardown
  • LocalDb: Microsoft’s answer to sqlite, a file-based database that’s SQL Server compatible but does not require any server or licensing
  • AutoMapper: quickly copy data from one type to another without a lot of repetitive code
  • log4net: very flexible logging; I bet it’s one of the oldest active C# open source libraries, I’ve been using it for years
  • FluentValidation: input validation library; lets you specify complex validation rules for a class

Everything is available via nuget, except for the LocalDb installer. Some of these warrant further discussion.

Entity Framework (EF)

Very fully featured data access library. There’s a ton of depth here.

Good

  • define db schemas using C# classes mostly through properties and attributes, able to use inheritance to reduce duplication
  • specify db schema evolution using tool-assisted explicit migrations; add a new field to one of the DB classes, call Add-Migration NewField-ticket42, and most of the work is done, including a natural place to add data fixes
  • generate SQL scripts of pending migrations (EF stores some metadata in your database) for deployment
  • LocalDb support
  • linq support that builds crazy but reasonably efficient SQL queries that make it easy to select the minimal amount of data you need. You can select a few properties from your db object without fetching a whole row, do joins, etc
  • provides a database cache; repeated requests to fetch a db object against the same db context are free
  • transaction support; any operations (including inserting complex relationships) on a db context are pending until you call SaveChanges
  • can work against existing databases, with some care

Bad

  • the linq support can be surprising; some things just aren’t allowed and it isn’t always obvious. Thankfully the exceptions are thrown early and have good messages
  • it’s easy to accidentally load way more data than you want
  • exception messages for some db errors can be obtuse or require debugging to examine a property on the exception
  • really, really, really wants you to run all your DB operations against one instance of the DB context (i.e. per HTTP request). Things get really weird if you try to use a db object between two db context instances
  • is happier with updates if you load a db object in full, change the properties, then save
  • it can be tricky to sort out how to add a graph of db objects without fetching the whole DB or calling SaveChanges multiple times to get autoincrement ids. Totally doable, but easy to screw up
  • EF’s migrations require your db classes to be free of compiler errors, which leads to putting your db classes in a different DLL from the rest of your application. If you change a db class in a way that breaks the rest of your application, unless the db classes are in a different assembly, you have to update your entire application before you can figure out the migration. This leads to other weirdness and tough questions like “which assembly should this DTO/interface go in?”

ASP.NET MVC and ASP.NET WebApi

ASP.NET WebApi and ASP.NET MVC are very similar, and the two are being combined in ASP.NET vNext. They can also work together in one web project, albeit with different namespaces. It’ll be nice when ASP.NET vNext unifies these namespaces.

Good

  • explicitly maps URLs and HTTP verbs to an action method on a controller class
  • filter system that lets you run code before and after your action
  • naming convention-based system to choose templates for controller actions, with helpful error messages telling you what names the framework expected
  • automatically serialize/deserialize between HTTP GET/POST/PUT/DELETE data into C# objects, with hooks to customize the process. Ends up acting a lot like method injection
  • hooks for how the framework instantiates controllers to let you use a DI library for controller creation
  • really likes view models; define all the data you want in a template as a class, create it in your controller action, and pass it to the template. Viewmodels are easy to test
  • the ASP.NET MVC template system has nice helpers like EditorFor and DisplayFor to render UI for view models
  • Since controllers are plain classes, you can new them up in tests and pass in different input without running a web server
  • lots of plugins and helper libraries on nuget

Bad

  • MVC wants you to group your files by type, not by feature. This makes your template far away from the (usually) single controller that uses it
  • no easy way to share templates between different projects
  • ASP.NET WebApi and ASP.NET MVC have a lot of the same classes in different namespaces. If you’re using them together it can get confusing if you want a System.Web.Http.Filters.IAuthenticationFilter and accidentally autocomplete the wrong using statement and end up with a System.Web.Mvc.Filters.IAuthenticationFilter
  • visual studio refactoring tools like “Rename” do NOT change your templates
  • any code in your templates is technically in a different assembly, so anything in your viewmodel you want to use in a template needs to be public
  • some built-in MVC helpers look at attributes on your viewmodel for how to render, validate, etc. If you follow that approach, then changing a <label> requires a recompile and a bunch of “go to definition”, and your UI text gets scattered across your viewmodels. Writing HTML directly in your templates seems easier than using the helpers. The helper approach is probably fine if you need internationalization or localization, but if you don’t then it just feels like extra hoops

Ninject

Pretty straightforward dependency injection library. You tell it what implementation you want for what interface at application start, and then ask it to create all your objects.

interface IFoo { } interface IBar { }
class Bar : IBar { }
class Foo : IFoo { public Foo(IBar b){} }
// on app start:
var kernel = new Ninject.StandardKernel();
kernel.Bind<IBar>().To<Bar>();
kernel.Bind<IFoo>().To<Foo>();
var foo = kernel.Get<IFoo>(); // new Foo(new Bar())

Good

  • easy to use
  • makes it practical to write many small, easily testable classes and not have to wire them up by hand
  • good error messages
  • supports contexts so you can say “make one DBContext per HTTP request”
  • tons of options
  • lots of plugins on nuget

Bad

  • depending on what version of ASP.NET MVC/WebApi you’re using, there are different nuget packages and install instructions, take care you’re using the right approach
  • “go to definition” gets less useful, since it’ll lead you to the interface, not to what ninject is actually instantiating at runtime
  • lowers the cost of having many small classes, so you can end up with a long chain of “go to definition” hops before you find the class actually doing the work

LocalDb

This is the dev/test database I’ve always wanted.

Good

  • can run tests against a full DB. At the beginning of your test run create a new LocalDb file, run all your EF migrations, then run each test case in a DB transaction
  • great for running dev sites on your workstation without a ton of setup

Bad

  • sometimes the files get in a weird state and you have to change your connection string to get a new file
  • I expect some subtle difference between this and a full SQL Server that will bite you if you use proprietary SQL Server features

AutoMapper

I kept going back and forth on this one; sometimes it’s a great time saver, sometimes it’s a huge hassle. Overall I think it’s a win.

Uses reflection to convert code like:

class Whatever {
  public Foo Copy(Bar b){
    return new Foo{
      Name = b.Name,
      Title = b.Title,
      Message = b.Message
    };
  }
}

into something like:

class Whatever{

  static Whatever(){
    Mapper.CreateMap<Bar, Foo>();
  }

  public Foo Copy(Bar b){ return Mapper.Map<Foo>(b); }
}

When you have a lot of data transfer objects (DTOs), it can be really common to want to copy fields from one type to another.

Good

  • reduces annoying boilerplate
  • can specify explicit mappings when names don’t match
  • can map collections and nested objects
  • works with EF and linq to map from your db objects and minimize how much you fetch from the db
  • you create mappings at application start, and they are cached from then on so you don’t pay a big reflection penalty

Bad

  • is really much happier when the property names match exactly
  • gets really finicky about mapping collections from linq
  • error messages could be better
  • refactorings like “rename” could introduce runtime errors if you don’t have good test coverage
  • copying data a lot isn’t a great idea; using more interfaces might eliminate the need
  • complex mappings that apply to multiple classes are difficult to reuse

FluentValidation

Specify validation rules with a testable, fluent interface:

class FooValidator : FluentValidation.AbstractValidator<Foo> {
  public FooValidator(){
    RuleFor(x => x.Name).NotEmpty();
  }
}
// validate
new FooValidator().Validate(new Foo());

Good

  • easy to test
  • plugins to automatically validate deserialized MVC or WebApi requests
  • lots of validation rules
  • custom validations are straightforward to implement

Bad

  • need to pass the right type into AbstractValidator<T>, which can lead to messy generic type signatures if you want to re-use validation rules between parent/child classes. Using extension methods to re-use rules is sometimes easier
  • custom error messages are defined in your validator class, which can be far away from the UI that displays them

Different kinds of angularjs directives

Angularjs directives are a powerful tool. Like many powerful tools, they can take some time to figure out, and it’s easy to create a working solution that you’ll regret later. I’ve been using angular in small projects for a few years now, and have come up with a few different ways to think about and classify simple directives.

Component Directive

A logical user-interface element: the trio of directive / template / controller. Usually has an isolated scope, and is self-contained. These are like .NET user or server controls. Much of John Papa’s style guide advice applies to the controller and views used in a component directive.

Use attributes to pass data in, and data gets out via two-way data binding, events, or calling services.

Component directives are not composable.

Usage looks like <my-component ng-model="vm.thing"/> (or <div my-component> depending on IE support).

Example
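A minimal sketch of what I mean (names and template are hypothetical):

```javascript
// myComponent: the directive / template / controller trio, with an
// isolated scope so it is self-contained.
function myComponent() {
  return {
    restrict: 'E',                    // used as an element: <my-component>
    scope: { thing: '=ngModel' },     // data in via attributes, two-way bound
    controller: function () {},       // real logic would live here
    controllerAs: 'vm',
    bindToController: true,
    template: '<span>{{vm.thing.name}}</span>'
  };
}
// registered with: angular.module('app').directive('myComponent', myComponent);
```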

Mixin Directive

Add behavior to an existing element. Doesn’t isolate scope, but may read attributes directly. Rarely has a controller; mostly it is just a link function.

Similar to bootstrap data- annotations or the approach of “unobtrusive” jquery plugins that function based on custom attributes.

Mixin directives are meant to be composable.

Usage looks like <input ng-model="vm.thing" my-mixin/>.

I haven’t found many cases to write these, but many of the built-in directives fall into this category: ngDisabled, ngClass, etc.
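A sketch of the shape (the behavior here is hypothetical):

```javascript
// myMixin: adds behavior to whatever element it is placed on.
// No isolated scope, no template; just a link function.
function myMixin() {
  return {
    restrict: 'A',                    // attribute only: <input my-mixin>
    link: function (scope, element, attrs) {
      // read configuration straight off the attribute
      var cssClass = attrs.myMixin || 'highlight';
      element.on('focus', function () { element.addClass(cssClass); });
      element.on('blur', function () { element.removeClass(cssClass); });
    }
  };
}
```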

DOM Macro Directive

Adds DOM to or around an existing element. Uses transclusion when possible, direct angular.element manipulation if not. Doesn’t have a scope, but may read attributes directly in the link function. Rarely has a controller; mostly it is just a link function.

Many of the angular bootstrap directives fall into this category; they add a bunch of HTML so you don’t have to type it again.

DOM macro directives are composable, but you’ll likely need to mess with priority to get the results you want.

Example
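A sketch of a DOM macro using transclusion (the markup is hypothetical):

```javascript
// myPanel: wraps the element's existing content in extra markup so you
// don't have to type the panel boilerplate every time.
function myPanel() {
  return {
    restrict: 'E',
    transclude: true,   // the original content re-appears at ng-transclude
    template:
      '<div class="panel">' +
        '<div class="panel-body" ng-transclude></div>' +
      '</div>'
  };
}
```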

Meta Directive

Directive that adds other directives. Doesn’t isolate scope, rarely has a controller, reads configuration from attributes or services. Has some complicated boilerplate. The meta directive is similar to a DOM macro directive, and helps reduce repetitive angular directives.

Meant to be composable, which is the primary reason to use these.

Usage looks like <button ng-click="vm.remove()" my-remove-button/>.

Most meta directives could be implemented as component directives or DOM macro directives, but with less flexibility. For example, we could implement a remove button as a component, and use it like <my-remove-button ng-click="vm.remove()"/>.

But then we want to disable the button sometimes. In the macro directive, we can just add ngDisabled:

<button ng-click="vm.remove()" my-remove-button ng-disabled="vm.canRemove"/>

For the component directive, we can do the same thing, but we need to teach the component about ng-disabled, and its template needs to render as <button ng-disabled />. This is straightforward work, but still work.

The remove button isn’t a great example since it’s so simple, but it shows the weird boilerplate needed. Consider wanting common settings throughout your application for a more complicated directive like a datepicker.

Example
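The boilerplate I mean looks roughly like this (a hypothetical sketch of the terminal/priority/recompile pattern):

```javascript
// myRemoveButton: a meta directive that adds other directives.
// terminal + high priority stop the first compile pass so we can add
// attributes and then compile the element ourselves exactly once.
function myRemoveButton($compile) {
  return {
    restrict: 'A',
    terminal: true,
    priority: 1001,
    compile: function (element) {
      element.removeAttr('my-remove-button'); // avoid an infinite compile loop
      element.attr('ng-class', "'btn btn-danger'");
      element.attr('title', 'Remove');
      return function link(scope, el) {
        $compile(el)(scope);
      };
    }
  };
}
```

Other directives added to the same element (ngDisabled, ngClick, etc.) still compose normally, which is the whole point.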


installing wiringPi on openelec

I’ve been having fun with Raspberry Pis recently. I’ve got one set up with openelec that I bring on camping trips, with some movies on a USB hard drive. By default, the pi doesn’t provide enough power on its USB ports, so I had to use an external USB hub. This meant two AC adapters, and a lot more cabling. After some research, I found it’s possible to increase the power output on the USB ports (lock in auxiliary power?) from 600mA to 1.2A. Testing & Setting the USB current limiter on the Raspberry Pi B+ has the details; you can use gpio to temporarily bump the power, and then edit config.txt to retain that setting on reboot.
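If I’m remembering the setting correctly, the config.txt change is one line (double-check the value against the linked article):

```
# config.txt on the boot partition (/flash/config.txt on openelec)
max_usb_current=1
```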

gpio is part of wiringPi, which has a pretty straightforward “git clone then build” installation story, but openelec is pretty locked down; no apt-get. To get a working copy of gpio, I followed the install instructions on another pi, then copied the compiled output over. But gpio relies on some shared C libraries, and didn’t run for me:

    ./gpio: error while loading shared libraries: libwiringPi.so: cannot open shared object file: No such file or directory

To see which shared libraries gpio needs, we can run ldd ./gpio:

    /lib/libarmmem.so (0xb6ef2000)
    libwiringPi.so => not found
    libwiringPiDev.so => not found
    libpthread.so.0 => /lib/libpthread.so.0 (0xb6ed2000)
    libm.so.6 => /lib/libm.so.6 (0xb6e5f000)
    libc.so.6 => /lib/libc.so.6 (0xb6d39000)
    /lib/ld-linux-armhf.so.3 (0xb6efc000)

So it needs to find libwiringPi.so and libwiringPiDev.so. The wiringPi compiled output has wiringPi/libwiringPi.so.2.0 and devLib/libwiringPiDev.so.2.0, so we make a directory with those symlinked in, and then tell the dynamic linker to look for shared libraries there.

    mkdir /storage/lib
    ln -s /storage/wiringPi/wiringPi/libwiringPi.so.2.0 /storage/lib/libwiringPi.so
    ln -s /storage/wiringPi/devLib/libwiringPiDev.so.2.0 /storage/lib/libwiringPiDev.so
    export LD_LIBRARY_PATH=/storage/lib:$LD_LIBRARY_PATH

Now gpio can find its libraries. I was able to test my USB hard drive, and it works with 1.2A!

How do I know my code works?

I know the code for my new feature works because:

  1. I wrote it
  2. my IDE doesn’t underline anything in red
  3. my linting program doesn’t report any issues
  4. my runtime / compiler reports no syntax/compile errors
  5. the runtime / compiler on my build server reports no syntax/compile errors from a fresh checkout
  6. automated tests for my feature pass
  7. all automated tests pass
  8. all automated tests pass when run by my build server from a fresh checkout
  9. I peer-reviewed the code for my new feature
  10. I manually tested the new feature
  11. I manually tested other features and they still work
  12. Someone else manually tested the new feature
  13. Someone else manually tested other features and they still work
  14. I deployed it to production
  15. it’s been in production for an hour
  16. it’s been in production for a day
  17. it’s been in production for a week
  18. it’s been in production for a month
  19. it’s been in production for a year

Deploying .NET COM objects with MSBuild

At work I’ve been working with a very old classic ASP website, running on very old hardware. After a few weeks of hacking vbscript, I’m sorely missing C#, and especially unit tests. I feel so exposed when there are no tests proving my code does what I think it does. For reasons I’ll not go into, deploying aspx pages is not an option, so I spent some time exploring creating COM objects using .NET, and consuming them from vbscript using Server.CreateObject. This seems like it will work really well for my purpose. This morning I sorted out deployment, which didn’t have any easy google answers.

My goal is to deploy via msbuild with a few commands:

  • MSBuild.exe /t:DeployCOM /p:env=test to deploy to my test site
  • MSBuild.exe /t:DeployCOM /p:env=prod to deploy to production

TL;DR: see deploy-com-dll.targets

Requirements

Before getting into XML, to deploy a COM DLL, there are a few things to do:

  • unregister the old version of the DLL. I’m trying to avoid DLL Hell by only having one version of my DLL installed on the server, and explicitly not maintaining backward compatibility
  • restart IIS; if my COM object has been loaded in IIS, then the DLL file on disk is locked
  • copy new DLL to the server
  • register the new DLL

MSBuild design

MSBuild conveniently includes the RegisterAssembly and UnregisterAssembly tasks to handle the COM registration, but unfortunately these don’t work with remote machines, so we have to do things by hand.

MSBuild implementation

The XML is available as deploy-com-dll.targets.

First up, our conditional PropertyGroups:

<PropertyGroup Condition="'$(Env)' == 'test'">
  <!-- UNC path where we publish code-->
  <PublishFolder>\\dev-server\Sites\my-classic-asp-site</PublishFolder>
  <!-- the remote path to where our code is deployed -->
  <LocalPublishFolder>D:\WebRoot\my-classic-asp-site</LocalPublishFolder>
  <!-- the computer name for psexec -->
  <Computer>\\dev-server</Computer>
</PropertyGroup>

<PropertyGroup Condition="'$(Env)' == 'prod'">
  <PublishFolder>\\prod-server\WebRoot\my-classic-asp-site</PublishFolder>
  <LocalPublishFolder>D:\sites\my-classic-asp-site</LocalPublishFolder>
  <Computer>\\prod-server</Computer>
</PropertyGroup>

Pretty straightforward: tells us where to copy our files to, and gives the remote path so when we call regasm we can tell it where to look.

Next, our unconditional PropertyGroup:

<PropertyGroup>
  <!-- where to copy our COM assembly -->
  <COMPublishFolder>$(PublishFolder)\COM</COMPublishFolder>
  <!-- remote path to where we published our COM assembly -->
  <COMFolder>$(LocalPublishFolder)\COM</COMFolder>
  <!-- remote path to our COM assembly -->
  <COMAssembly>$(COMFolder)\My.Interop.dll</COMAssembly>
  <!-- remote path to our typelib -->
  <COMTypeLib>$(COMFolder)\My.Interop.tlb</COMTypeLib>
  <!-- local path to psexec-->
  <PsExec>$(MSBuildProjectDirectory)\tools\PsExec.exe</PsExec>
  <!-- remote path to regasm.exe -->
  <RegAsm>C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe</RegAsm>
  <!-- remote path to iisreset.exe -->
  <iisreset>c:\WINDOWS\system32\iisreset.exe</iisreset>
</PropertyGroup>

Again, pretty straightforward. A lot of derived properties, specifying where to publish COM code, more remote paths so psexec can find the regasm and iisreset executables and our COM assembly.

Lastly, our Target:

<Target Name="DeployCOM">
  <!-- local path to our compiled, strongly-named COM assembly and its dependencies -->
  <ItemGroup>
    <FilesToCopy Include="$(MSBuildProjectDirectory)\src\My.Interop\bin\Release\*.*"/>
  </ItemGroup>
  <MakeDir Directories="$(COMPublishFolder)"/>
  <!-- unregister the old version of the DLL -->
  <Exec Command="$(PsExec) $(Computer) $(RegAsm) /u /codebase /tlb:$(COMTypeLib) $(COMAssembly)"
        IgnoreExitCode="true"/>
  <!-- restart IIS -->
  <Exec Command="$(PsExec) $(Computer) $(iisreset) /restart" IgnoreExitCode="true"/>
  <!-- copy new DLL to the server -->
  <Copy SourceFiles="@(FilesToCopy)"
        DestinationFiles="@(FilesToCopy->'$(COMPublishFolder)\%(RecursiveDir)%(Filename)%(Extension)')"
        SkipUnchangedFiles="true"
        />
  <!-- register the new DLL -->
  <Exec Command="$(PsExecBinary) $(Computer) $(RegAsm) /codebase /tlb:$(COMTypeLib) $(COMAssembly)"/>
</Target>

There are some comments inline there breaking out the steps. A few important notes:

  • we’re using the /codebase flag to regasm; this means we are NOT installing our .NET assembly in the global assembly cache, so the COM system needs to know to look in our COMFolder to resolve .NET references. Without this, we would also need to call gacutil to install our .NET assembly (and any third-party libraries we’ve referenced) into the system
  • the COM assembly needs to be strongly named, and therefore all its dependencies need to be strongly named. Compiling and signing the assembly is out of scope here, but there are plenty of good tutorials out there.
  • in a few places we ignore the exit code of the exec call; on first deployment those tasks might fail
  • this is a really heavy-handed approach with IIS. Using more of the versioning features in COM and .NET assemblies would let you install multiple versions of everything, so you could publish a new version and then update your vbscript code to use it. In my case, I have huge maintenance windows every night to do IIS restarts, but that option is not available to everyone.

Summary

It’s ugly, but it works, and it’s a hell of a lot prettier than writing vbscript; I can write tests and get some confidence that my code works before it’s in production.

Visualizing call graphs in lisp using swank and graphviz

Last week I was doing some cleanup work (short holiday weeks are great for paying off technical debt), and was deleting some supposedly unused code. This was a pretty tedious process of running functions like slime-who-calls and slime-who-references, running git grep -i on the command line, and undefining functions in just the right order.

I’ve seen a lot of articles recently on static analysis of code, and spent some time playing with the introspection features of slime to identify unused code (short holiday weeks are also great for following tangents). I ended up with a slow mudball of code that worked pretty well.

Warning, large images coming up.

The code itself is up on github, but there’s no ASDF system yet, so you have to load it manually:

(require :iterate)
(require :alexandria)
(require :swank)
(load "~/lisp/static-analysis/static-analysis.lisp")
(in-package :static-analysis)

A truncated example:

STATIC-ANALYSIS> (call-graph->dot :alexandria )
digraph g{
subgraph clusterG982{
label="PACKAGE STATIC-ANALYSIS"
G983 [label="ENSURE-PACKAGE-LIST"]
}
subgraph clusterG949{
label="PACKAGE ALEXANDRIA.0.DEV"
...
}
G983 -> G995
...
G951 -> G950
}
NIL

Here’s what it actually looks like:

http://ryepup.unwashedmeme.com/blog/wp-content/uploads/2012/01/wpid-alexandria-graph.png

The code currently scans all loaded code, and puts the functions from each package in its own graphviz subgraph. The graph of an entire package, for all loaded code, isn’t really that useful, so I made another function to narrow it down. Here I’m specifying the list of packages to render, and the list of functions to show:

STATIC-ANALYSIS> (->dot (function-call-graph '(:alexandria) '(alexandria:rotate)))
digraph g{
subgraph clusterG1109{
label="PACKAGE ALEXANDRIA.0.DEV"
G1040 [label="ROTATE-HEAD-TO-TAIL"]
G1049 [label="SAFE-ENDP"]
G1054 [label="CIRCULAR-LIST-ERROR"]
G1051 [label="PROPER-LIST-LENGTH"]
G1042 [label="ROTATE-TAIL-TO-HEAD"]
G1041 [label="ROTATE"]
}
G1040 -> G1051
G1051 -> G1049
G1051 -> G1054
G1042 -> G1051
G1041 -> G1040
G1041 -> G1042
}
NIL

http://ryepup.unwashedmeme.com/blog/wp-content/uploads/2012/01/wpid-alexandria-rotate.png
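If you want to render these images yourself, one option (a sketch; it assumes Graphviz’s dot binary is on your PATH, and that ->dot prints to *standard-output*, as the REPL transcripts above suggest) is to capture the dot text into a file and shell out:

```lisp
;; Sketch: capture the dot output to a file, then render it with Graphviz.
;; Assumes the `dot` binary is installed and on the PATH.
(with-open-file (s "rotate.dot" :direction :output :if-exists :supersede)
  (let ((*standard-output* s))
    (->dot (function-call-graph '(:alexandria) '(alexandria:rotate)))))
(uiop:run-program '("dot" "-Tpng" "rotate.dot" "-o" "rotate.png"))
```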

Some systems have very complicated call graphs. At work we do a lot with clsql, and the overall call graph, even from one function, can get complicated quickly:

http://ryepup.unwashedmeme.com/blog/wp-content/uploads/2012/01/wpid-clsql.png

So I added a depth param to keep the graph smaller, let’s say 2:

STATIC-ANALYSIS> (->dot
 (function-call-graph '(:clsql-sys :clsql-sqlite3)
                      '(clsql:map-query)
                      2))

http://ryepup.unwashedmeme.com/blog/wp-content/uploads/2012/01/wpid-clsql-limited.png

Anyhoo, a fun toy, and I had a fun time writing it.

Getting started with Hunchentoot and Talcl websites

This is a short guide to setting up a lisp-powered website with Hunchentoot and Talcl/Buildnode.  Hunchentoot is a web server, Talcl is a templating system, and Buildnode is a CXML helper library Talcl uses.  These are from notes I made while writing an app to help my wife record attendance and student progress for dance classes.

My high-level approach on my hobby projects is to write the user interfaces using mostly pure HTML/Javascript/CSS and jQuery, and then make a RESTful (mostly) API with Hunchentoot’s Easy Handlers that the Javascript front-end calls to perform some operations.  For some reason things like parenscript and cl-who never felt right to me.  Anyhoo, let’s get started.

Foundation

I’m calling this project “Alice”, so time to make the foundation:

(quickproject:make-project "~/lisp/alice/" :depends-on '(:iterate :alexandria :talcl :hunchentoot :buildnode))

I almost always include iterate and alexandria, and we’ll want a few things from buildnode directly, so we’re depending on it separately from talcl.  Quickproject makes all my files, and I’m good to start.

Here’s the basic goal:

[flowchart image omitted: a request is either handled by lisp code rendering a tal template (left branch) or falls through to static files served from www/ (right branch)]

I want to have my templates stored in .tal files, and hunchentoot will need a place to look for static files, so we start with a few new directories: “www” for hunchentoot and “templates” for tal.  To easily get paths to these, I added a helper function:

(defun resource-path (path)
  "looks up path relative to wherever this asdf system is installed. Returns a truename"
  (truename (asdf:system-relative-pathname :alice path)))
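For example (hypothetical REPL session; the exact truename depends on where the :alice system actually lives on disk):

```lisp
;; Hypothetical usage; paths shown are illustrative only.
(resource-path "www")       ; => #P"/home/me/lisp/alice/www/"
(resource-path "templates") ; => #P"/home/me/lisp/alice/templates/"
```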

Hunchentoot

Now the fun begins. Next is a function to start the hunchentoot acceptor (which will handle listening on a port and dispatching requests) AND configure the static file handling I wanted.

(defvar *acceptor* nil "the hunchentoot acceptor")
(defun start-server (&optional (port 8888))
  (setf *acceptor* (make-instance 'hunchentoot:acceptor :port port))  
  ;; make a folder dispatcher the last item of the dispatch table
  ;; if a request doesn't match anything else, try it from the filesystem
  (setf (alexandria:last-elt hunchentoot:*dispatch-table*)
	(hunchentoot:create-folder-dispatcher-and-handler "/" (resource-path "www")))
  (hunchentoot:start *acceptor*))

By having the folder-dispatcher-and-handler as the last item in hunchentoot’s *dispatch-table*, it will only bail to the filesystem if no other handlers match. Hunchentoot has a *default-handler* mechanism, but it is limited; default-handlers do not have access to any request information.

Now I toss a stub style.css into my www/ directory, call start-server, then can load http://localhost:8888/style.css in my browser.  Great, the right half of my desired flowchart is done.  Now the Talcl part.

Talcl

Talcl reads template files and compiles them into lisp functions that accept a tal environment.  The tal environment is a set of key/value pairs that will fill in the templates.  Talcl has a bunch of features for writing to streams, but for now I’ll just generate strings and pass them to hunchentoot.

(defvar *tal-generator*
  (make-instance 'talcl:caching-file-system-generator
		 :root-directories (list (resource-path "templates"))))

The tal generator maps template names to template files, compiling the templates if needed. There are a few different classes that can be used here, but this one checks file dates and recompiles only if the file is newer.

(defun render-page (template &optional tal-env)
  "renders the given template" 
  (talcl:document-to-string
   (buildnode:with-html-document
     (talcl:tal-processing-instruction *tal-generator* template tal-env))))

This helper function takes the template name and the optional tal environment, and returns a string of the final output. Talcl deals in XML, but HTML is not XML, so I use the buildnode:with-html-document macro to resolve the mismatches (e.g. <script src=…></script> instead of <script/>). According to the Talcl examples, there are several ways to get your tal content into an XML document, and tal-processing-instruction is the fastest.

(hunchentoot:define-easy-handler (home :uri "/") ()
  (render-page "home.tal"
	       (talcl:tal-env 'course (current-course))))

This adds a handler to hunchentoot’s table, and should get us going down the left branch of my flowchart.  The tal-env call is creating the tal environment where the compiled template function will look for substitutions.  I think of these like keyword arguments for the template.  In this case, I’m pulling some course information and passing it to home.tal.
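current-course itself isn’t shown here; a purely hypothetical stub, just enough to satisfy the template (which only calls (classes course) and (name c)), might look like:

```lisp
;; Hypothetical stand-in: the real app presumably reads this from storage.
(defclass dance-class ()
  ((name :initarg :name :accessor name)))

(defclass course ()
  ((name :initarg :name :accessor name)
   (classes :initarg :classes :accessor classes)))

(defun current-course ()
  (make-instance 'course
                 :name "Spring Session"
                 :classes (list (make-instance 'dance-class :name "Ballet")
                                (make-instance 'dance-class :name "Tap"))))
```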

Tal Templates

The last complicated bit is the tal templates themselves. There are some good examples in the talcl repository.   I want one tal file to be the main site template, a frame around whatever content I’m trying to show with all the html,head,body tags.  Then I’ll have one tal file for each major UI element.

The overall site template will be in templates/template.tal:

<html lang="en" 
 xmlns:tal="http://common-lisp.net/project/bese/tal/core"
 tal:in-package=":alice">
 <head>
 <meta charset="utf8"/>
 <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js"/>
 <script src="/script/alice.js"/>
 <link rel="stylesheet" href="/css/style.css" type="text/css" />
 </head>
 <body>
 <span id="body">$body</span>
 </body>
</html>

Since this is XML, we need some xmlns noise at the top, but we can use XMLisms like “<script/>”.  The key things to note here:

  • tal:in-package=":alice" – need to tell Tal where it should be evaluating
  • $body – this is one way to substitute values into the template. Talcl will look for the symbol 'alice::body in its tal environment
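To check that substitution in isolation (a sketch reusing the render-page helper from earlier), you can fill in body by hand at the REPL:

```lisp
;; Hypothetical REPL check: supply a value for 'alice::body directly.
(render-page "template.tal" (talcl:tal-env 'body "Hello, world"))
;; returns the full page as a string, with "Hello, world" inside <span id="body">
```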

So that’s our main template file, now for the home.tal file:

<tal:tal xmlns:tal="http://common-lisp.net/project/bese/tal/core"
 xmlns:param="http://common-lisp.net/project/bese/tal/params"
 tal:in-package=":alice">
 <tal:include tal:name="template.tal">
  <param:body>
   <button>Start Jam Class</button><br/>
   <tal:loop tal:var="c" tal:list="(classes course)">
    <a href="class?name=$(name c)">Start $(name c) Class</a>
   </tal:loop>
  </param:body>
 </tal:include>
</tal:tal>

I have the same xmlns noise, but there’s one new namespace: param.  This is the xml namespace tal uses to pass information from one template to another. The top-level XML node is a “tal:tal” node, which does not render any output.  I include template.tal to get our main template, passing it the UI for this page in a param:body.  This adds 'alice::body to the tal environment, with the XML contents as the value, then template.tal is called.  I use some fancier tal statements to loop over all the dance classes in the given course and make a link to each one.

Performance

Happy hacking!