shibboleth attribute “scope () not accepted” and “value () could not be validated by policy, rejecting it”

I have a client who acts as a Shibboleth Service Provider (SP), and the corresponding Identity Provider (IdP) needed to update some of their information, so I had to spend a few hours debugging shibboleth again this morning.

The punchline: in the metadata for an IdP, there are TWO places you need to specify the scope of the IdP, once in the <IDPSSODescriptor> to cover authentication, and once later in the <AttributeAuthorityDescriptor> to cover any attributes.

Shibboleth is all about trust, and the lack thereof. The SP and IdP share a few keys, and then each maintains its own configuration files specifying how much information to trust. When a user logs in, they get bounced from SP -> IdP to authenticate, then again from IdP -> SP with an auth token. From there, the SP can assume an authenticated user. If the SP wants any more information about the user (say, a username), it can make SOAP calls to the IdP requesting more attributes. The IdP checks its config files to see how many attributes to release to that particular SP, and sends back the attributes. The SP doesn't take the IdP at its word, so the SP checks its own config files to validate that it can accept these attributes from this IdP. This is where I ran into a problem, getting an error like:

attribute (username) scope (foo.bar.edu) not accepted

This brings up another aspect of attributes: the scope.  An attribute's scope seems somewhat akin to a namespace to me; it's a way of grouping related attributes and preventing name conflicts between them.  In my case, my IdP was "bar.edu", and my SP was configured (in my AAP.xml) to accept the username attribute from any site with any value.  However, the IdP was returning the username scoped under a different domain, so the SP, being a paranoid creature, assumed the IdP was trying to falsify information and refused the attribute.  To convince the SP to accept the different scope, you have to change the metadata for the IdP and specify that it is allowed to provide attributes in the new scope.  Unfortunately for me, there are 2 kinds of scopes you can change:

  1. Scope of the initial authentication
  2. Scope of the attributes

These are both children of <EntityDescriptor>.  This led to a frustrating morning of restarting IIS (because you need to restart everything under the sun for the configuration to get reloaded) and not seeing any change.  There are many ways to alter your attribute policy, and along the way to discovering the second place to specify scope I ran across another error message:

attribute (username) value (my-user) could not be validated by policy, rejecting it

This seemed to happen when my SP config tried to treat the username as an unscoped attribute.  You can’t just ignore the scope of an attribute if the IdP is sending one.
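For reference, the accepting side of this lives in AAP.xml; the rule I ended up with looked roughly like the sketch below (written from memory of Shibboleth 1.3, with the attribute name as a placeholder, so check your version's docs):

```xml
<!-- Sketch: accept this scoped attribute from any site with any value;
     the scope itself is still checked against the IdP's metadata -->
<AttributeRule Name="urn:mace:dir:attribute-def:username"
               Scoped="true" Header="REMOTE_USER">
  <AnySite>
    <AnyValue/>
  </AnySite>
</AttributeRule>
```

Even with AnySite/AnyValue, a scoped attribute is only accepted if the scope appears in the IdP's metadata, which is why the metadata fix below is the real solution.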

The end solution is to add another <shibmeta:Scope> specification to the attributes configuration section in the IdP’s metadata:

  <AttributeAuthorityDescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:1.1:protocol">
    <Extensions>
      <shibmeta:Scope xmlns:shibmeta="urn:mace:shibboleth:metadata:1.0" 
                      regexp="false">bar.edu</shibmeta:Scope>
      <shibmeta:Scope xmlns:shibmeta="urn:mace:shibboleth:metadata:1.0" 
                      regexp="false">foo.bar.edu</shibmeta:Scope>
    </Extensions>
    ...
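Putting it all together, the skeleton of the metadata has the scope declared twice, once per role descriptor (the entityID here is made up):

```xml
<!-- Skeleton: the same shibmeta:Scope appears under both descriptors -->
<EntityDescriptor entityID="urn:mace:bar.edu:idp">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:1.1:protocol">
    <Extensions>
      <shibmeta:Scope xmlns:shibmeta="urn:mace:shibboleth:metadata:1.0"
                      regexp="false">foo.bar.edu</shibmeta:Scope>
    </Extensions>
    ...
  </IDPSSODescriptor>
  <AttributeAuthorityDescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:1.1:protocol">
    <Extensions>
      <shibmeta:Scope xmlns:shibmeta="urn:mace:shibboleth:metadata:1.0"
                      regexp="false">foo.bar.edu</shibmeta:Scope>
    </Extensions>
    ...
  </AttributeAuthorityDescriptor>
</EntityDescriptor>
```

Miss the second one and authentication works fine while every scoped attribute gets rejected, which is exactly the morning I had.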

simple job scheduling with a threaded lisp

Yesterday I had a need to do some batch processing, making many HTTP calls to crawl a government website to pull down some public data.  I didn’t want to run this during normal hours, because I didn’t want to put a strain on their server.  I had the web crawling sorted out as a function using drakma and cl-ppcre to get the data I wanted.

I wasn’t sure how best to schedule this function to run later that night, and was about to dig into anacron and figure out the right command-line flags to sbcl, when Nathan suggested the obvious:  why not have lisp schedule it?  It was very easy to throw something together using sleep and get-universal-time.  This morning I abstracted that into a helper function using SBCL Threading:

(defun run-later (utime-to-run function)
  (sb-thread:make-thread
   #'(lambda ()
       (sleep (- utime-to-run (get-universal-time)))
       (funcall function))))

Very handy, doesn’t lock my repl, scratches my itch, and easy to write.
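A hypothetical call, kicking the crawler off six hours from now (crawl-site standing in for the real drakma/cl-ppcre function):

```lisp
;; Schedule the (hypothetical) crawler to start six hours from now;
;; make-thread returns immediately, so the REPL stays free.
(run-later (+ (get-universal-time) (* 6 60 60))
           #'crawl-site)
```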

logo for wedding favors

I’m getting married in a few weeks, and one of the final tasks was to make some favors for the guests. Heather and I decided on coffee mugs with a cute logo. We quickly came up with the concept of a heart with gears inside, as we’re having a steampunk/victorian themed wedding.  After one very frustrating evening with the gimp (which ended in a crash before I saved the file), I decided attempt two would be procedural, so I fired up my trusty REPL and vecto:

final version

I seriously considered how to calculate the gear size, teeth, and rotation so they mesh perfectly, then gave up and specified them manually via guess-and-check.  The code is at http://paste.lisp.org/display/71465.

I was pretty happy; it ended up being one evening of hacking, with much of that taken up rendering test images to get gear placement, rotation, and teeth correct.  I was able to get everything drawing in terms of one variable *r*, so I could scale the size of everything simply.  In retrospect, that could probably have been done just as easily using vecto:scale.  I used vecto:with-graphics-state more than I have before, and that made the drawing operations very straightforward.
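That alternative would look something like this sketch (a plain circle standing in for the gear drawing):

```lisp
;; Sketch: vecto:scale multiplies all later coordinates, so the whole
;; drawing can be resized without threading *r* through every call.
(vecto:with-canvas (:width 400 :height 400)
  (vecto:scale 2.0 2.0)                 ; double everything that follows
  (vecto:centered-circle-path 100 100 50)
  (vecto:fill-path)
  (vecto:save-png "gear-test.png"))
```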

I got a little optimistic with the gear-internal generic function; I thought I'd make many different styles for the inside of the gear, but ended up with only solid and spoked varieties, so it had only one specialization.

The internal shape of the gear (the number of spokes, or whether it's solid) was determined randomly, and I wrote a quick make-samples function to generate a batch of images.  Then I reviewed those with the fiancée to find a random arrangement we liked.

To pick a suitable font I found all the .ttf files on my computer, then looped through those to make a test logo for each font (the test-font function).  This proved pointless, as Heather knows her fonts and picked one from memory while I was scrolling through hundreds of png files.

The logo is off at the mug printers, and I should be getting a test mug in the mail this week.  Hopefully it will work out and I can offer secret alien technology to all my wedding guests.

some simple cl-smtp examples

The docs on cl-smtp are a little, um, terse, so I figured I’d post a few snippets for google to find:

Sending html email with cl-smtp:

(cl-smtp:send-email
 +mail-server+
 "from-email@example.com"
 "to-email@example.com"
 "Subject"   
 "<html><body>
<p>
Shiny <strong>h</strong><em>t</em><small>m</small>l.
</p>
</body></html>"
 :extra-headers '(("Content-type" "text/html; charset=\"iso-8859-1\"")))

Sending attachments with cl-smtp:

(cl-smtp:send-email
 +mail-server+
 "from-email@example.com"
 "to-email@example.com"
 "Subject"   
 "see attachment"
 :attachments '("/path/to/attachment"))

Ok, back to my regularly scheduled slog.

adw-charting now has at least one other user

I’ve gotten some nice emails from Erik Winkels giving feedback on adw-charting, and today (which has already had its share of exe-related posts), I got one more email with a patch for supporting clisp executables.

I was loading a font file using a path relative to the (asdf:component-pathname …), and that was tripping up the exe.

Building standalone EXEs is still pretty low on my list, as I mostly make web apps and have servers available.  I think the growing desire for good EXE support is a sign of lisp getting some more traction, as folks get past the “wanting to get their feet wet” stage and into the “wanting other people to use their software” stage.

Erik was kind enough to send a screenshot along with this amusing note:

I’ve attached a screenshot of the utility (it’s a trading tool for EVE-Online) and I’ve deleted the working title since it can be somewhat offensive for some people.

The mind reels.  What was this name that was deemed too offensive?  It looks pretty nice, maybe he’s using ltk?

screenshot of Erik's app

adw-charting progress and plans

It’s been awhile since I posted any updates on adw-charting, but some progress has been made.  After deciding a few months ago to use google charts for rendering, I’m re-thinking the decision.  Google charts look very nice, but they weren’t as automatic as I’d hoped, and there was still a lot of processing needed before they could display a reasonable chart.  Using google is certainly less CPU/memory intensive than using vecto, but there are a few problems:

  • charts are limited to 300,000 pixels, which is smaller than it sounds
  • because all the data needs to be passed on a URL, it chokes on large datasets.  This can be helped by using alternate encodings for values, but that looks like a pain in the ass (basically have to map the arbitrary lisp data evenly onto 0-64 or 0-4096, then encode as ASCII, I predict rounding troubles)
  • it doesn’t automatically scale to fit your size.  For example, a bar chart uses 20px wide bars by default.  If you have too many bars they just wander off the right of the canvas, and you need to manually adjust the bar width to get things to fit
  • some of the labels / titles get cut off
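For a concrete feel, here's the sort of URL involved (made-up data): chs is the canvas size that has to stay under the pixel cap, chd carries every data point, and chbh is the manual bar-width knob you end up fiddling with:

```
http://chart.apis.google.com/chart?cht=bvs&chs=600x400&chd=t:10,40,25&chbh=10
```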

The google backend certainly has its uses, but I’m going to revive the vecto backend, which gives complete control.  As a start, I’ve split the project into 3 asdf systems:

  • adw-charting: has some utils and common code, declares the :adw-charting package
  • adw-charting-vecto: adds/exports functions for making vecto charts in the :adw-charting package
  • adw-charting-google: adds/exports functions for making google chart URLs into in the :adw-charting package

In this setup, you’ll load adw-charting-google.asd or adw-charting-vecto.asd, then all the dependencies will sort themselves out, and you don’t have anything loaded that you don’t need.
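The .asd files themselves stay tiny; adw-charting-vecto.asd ends up looking something like this sketch (the component name is invented):

```lisp
;;; Sketch of adw-charting-vecto.asd: depending on the base system
;;; pulls in the shared :adw-charting package and common utils.
(asdf:defsystem #:adw-charting-vecto
  :depends-on (#:adw-charting #:vecto)
  :components ((:file "vecto-chart")))
```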

I looked at asdf-system-connections but didn’t see how it would help, so I copied cl-xmpp's multiple-asd-file scheme.  This is currently just in the darcs repo, and may have some bugs.

I’ve also started an actual adw-charting todo list, as part of a greater effort to be more organized in my life, comments welcome here.

lispvan presentation

In a couple of hours I’ll be giving a presentation to lispvan about web applications with UCW and lisp.  I’ve mostly been a user of UCW, and spent time over the past week reading through the internals so I can give better explanations.  It’s a bit easier on me because UCW has been forked and modified by so many people that anything too in-depth would be inaccurate, so I have a good excuse for staying high level.

I’ve got a very small real-world example to go through, with everything running on my little eeePC, and source for a more complicated example that’s running back at home.

For reference, here’s the chart of a UCW request / response I’ll be going over:

It’s been awhile since I did any public speaking, and I’m excited about getting to sit down with other lispers.

New term: Stealthy-patch

By now we’ve all heard of monkey-patching, and I propose a new variant: stealthy-patching.  This is the act of patching a currently running process with zero downtime.  This is similar to monkey patching, but doesn’t have the negative connotations.

I just connected to the Gainesville-Green app and evaluated a few forms to add intelligent handling of the enter button on search pages, along with some new parenscript for more IE compatibility.  All told I redefined 1 UCW component and 2 defmethods, all without any active users getting interrupted.

On the other side of the coin, I also compiled and uploaded a new SBCL core containing those fixes, so next time the application restarts (in a surprise reboot, for example), my patches will still be in place.
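Saving the patched core is a one-liner in SBCL (a sketch; the real build loads the whole app into the image before saving, and the filename here is made up):

```lisp
;; Dump the currently running (patched) image to disk so restarts
;; come back up with the fixes already in place.  Note this call
;; exits the process once the core is written.
(sb-ext:save-lisp-and-die "app.core")
```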

Ahhh… so much flexibility.  It’s like getting into a hot-tub after a long day of frigid C#.

Prepping for Load

A week ago, our UCW-based website (Gainesville-Green) got some local news coverage (see “Gainesville-Green.com Segment Aired on WCJB TV20” on the project’s blog), and we spent the day trying to prep for an increased load.

Our setup is fairly straightforward:

  1. apache and mod_proxy to serve static content and get dynamic content from…
  2. a lisp (sbcl) http server listening on 127.0.0.1:3xxx talking via CL-SQL to…
  3. a postgresql database on a separate, more powerful server
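The Apache side of steps 1-2 boils down to a couple of mod_proxy directives, roughly like this sketch (the static path and port are placeholders for our real 3xxx port):

```
# Serve /static/ straight from disk, proxy everything else to lisp
ProxyPass        /static/ !
ProxyPass        / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
```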

When 5pm rolled around, we had:

  • long-lived caches of some expensive queries in lisp
  • lisp generating proper Cache-Control and Last-Modified headers to let our mostly static content be cached aggressively (using a new UCW dispatcher)
  • some visual / usability tweaks
  • cron job to automatically restart the lisp application server if it fails to respond

Over the next couple of days we added some more:

  • used mod_disk_cache on Apache’s end to put a cache server directly in front of our lisp server
  • reworked some entry points to work nicer with the cache
  • referenced our javascript libraries from Google’s AJAX Libraries API, which will serve the big js files off their content distribution network, gzipped and minified

So, not too much optimization was done inside lisp itself besides some basic memoization of database queries.  Hans Hübner made a post on Building Resilient Web Servers last week after our sprint, but our needs were much simpler.   We have basically static content that we want built on-demand.  From what I’ve read we could probably be faster by replacing Apache with nginx or some such, but for now it meets our needs nicely.

For a few days our cron job was reporting the site as regularly down and restarting it, and we finally tracked that down to a bug in the restarting script.  Turns out the HTTP “Not Modified” response code is 304, and we were looking for something in the 2xx range, so restarting needlessly.
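The fixed check boils down to treating 304 as healthy; a minimal sketch of that logic (the curl line is illustrative, and the URL, port, and restart action are whatever your setup uses):

```shell
# Decide whether a status code counts as "up": any 2xx, or 304 Not Modified.
is_healthy() {
  case "$1" in
    2??|304) return 0 ;;
    *)       return 1 ;;
  esac
}

# e.g. code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:3000/)
is_healthy 200 && echo "200: leave it alone"
is_healthy 304 && echo "304: leave it alone"
is_healthy 500 || echo "500: restart the lisp"
```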

The burst of traffic that Friday night was the most we’ve seen on any of our lisp projects, we got over 4000 hits that day on a weaker test server.  We’re serving 500-1000 pages per day now, and most of the bandwidth is coming from google’s CDN, so our load is nice and light, averaging around 15K per request.

I LOVE how straightforward and composable all these tools are.  Now that our “stay-alive” cron job isn’t randomly killing our lisp, we should have a nice long-lived process and better uptime.

lisp geocoding, more library possibilities

I’m finally working on a project that has a public component (besides a login screen), and the lisp has been flowing. I’ve been able to put in probably around 10 hours a week on this thing, and I can’t wait until it’s more presentable to show off. As part of this, we’re building up our work lisp codebase, and I’ve been keeping my eye out for opportunities to open source different library components, trying to avoid creating a big ball of mud.

The first real candidate so far was adw-charting, but I think I found the next potentials: adw-yahoo and adw-google. They will be libraries for interfacing with various Yahoo! and Google services. So far I’ve only got code for one service apiece, but there’s plenty of growth potential.

Here’s a sample:

(defun geocode-fight ()
  (let ((adw-yahoo:*api-key* "YAHOO APPID")
	(adw-google:*api-key* "GOOGLE API-KEY")
	(address "5308 SW 75th ter, Gainesville FL, 32608"))
    (list
     (adw-yahoo:latlong address :cache-p nil)
     (adw-google:latlong address :cache-p nil))))

(geocode-fight)
=> (("address" 29.604265 -82.42349 "5308 SW 75th Ter")
(8 29.604923 -82.42338 "5308 SW 75th Terrace, Gainesville, FL 32608, USA"))

Yahoo and Google give back somewhat different data, but so far Yahoo’s coordinates seem more accurate. One thing I noticed was that Google’s geocoding service gives a different lat/long than what maps.google.com displays, which I thought was a little sneaky. I’ve also got some code to help construct google map widgets, using the homegrown html template system and parenscript, but I think much of that will end up getting stripped out, as it depends on the in-house libraries that are likely completely useless to anyone but us.

Still lots to do before either of those libs would be ready for a cl-net request, but it’s on the plate. After building so much on top of so many great free libraries, the need to contribute something back is very strong.