Here are recent entries. Page anchors are on ISO 8601 dates (e.g. 2011-05-27), so you can access the oldest entry like: blog.html#2010-11-13. I’m still figuring out how to do permalinks and individual pages.



As I continue my leisurely foray into the world of Haskell, I have come upon a bivouac of Haskellers near Chicago this weekend. A co-worker from the office and I will be attending the first-ever ChiHack. Maybe I can get someone to explain Backpack to me.



Just wanted to make a quick note that I have published my first gem, reactive_record, on RubyGems. It is a utility that lets you “export” constraints from your existing Postgres database, creating Rails models.

It is pretty early, but I have put in a lot of work and I hope that it is useful for your projects.

named parameters in haskell


I was watching Rich Hickey’s keynote for Rails Conf 2012. I’ve been watching a lot of talks lately because I’ve organized a series of conference talk screenings at my work. And that’s probably a different blog post…

Anyhow, one thing that he mentioned that I don’t often think about is the complexity introduced with positional parameters. Let me explain:

function foo(x, y, z) {...}

This requires that x, y, and z all be present, in that exact order, even if that ordering isn’t important:

function make_person(first, last, phone) {...}
function make_person(last, first, phone) {...}
function make_person(phone, first, last) {...}

If you encountered a wild make_person, you’d have to know which definition was in use. The point is that it really doesn’t matter that a person’s attributes are listed in that order; any order is fine. But you’ve implicitly introduced a strict order-dependence here. Passing in a map/object/hash fixes this issue:

function make_person(opts) {
  var first = opts['first'];
  var last  = opts['last'];
  var phone = opts['phone'];
  // ...
}

Haskell is pretty tied to positional, unnamed arguments, so I was looking into how to do named and/or non-positional arguments.


The first thing that occurred to me is to do something like this:

import Text.Printf

data Person = Person { firstName :: String
                     , lastName :: String
                     , email :: String
                     }

formatAddress :: Person -> String
formatAddress p = printf "\"%s %s\" <%s>" f l e
  where
    f = firstName p
    l = lastName p
    e = email p

And then I call it like so:

formatAddress Person { firstName = "Chris"
                     , lastName = "Wilson"
                     , email = ""
                     }

A slightly better tweak is to create a default that provides values for anything that’s missing (assuming I have a function that only needs one of the fields):

formatEmail p = printf "<%s>" (email p)

defaultPerson = Person {firstName = "", lastName = "", email = ""}

But that “infects” the call site with the defaultPerson value:

formatEmail defaultPerson {email=""}

All the other, irrelevant fields are filled in by defaultPerson.
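Putting the pieces together, here’s a minimal runnable version of the default-record trick (the example address is made up):

```haskell
import Text.Printf (printf)

data Person = Person { firstName :: String
                     , lastName  :: String
                     , email     :: String
                     }

-- A base record; callers override only the fields they care about.
defaultPerson :: Person
defaultPerson = Person { firstName = "", lastName = "", email = "" }

formatEmail :: Person -> String
formatEmail p = printf "<%s>" (email p)

main :: IO ()
main = putStrLn (formatEmail defaultPerson { email = "chris@example.com" })
```

Record update binds tighter than function application, which is why the call site needs no parentheses.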

Named Records

This pulls in the big guns of Template Haskell to abstract machinery like what was spelled out above:

{-# LANGUAGE TemplateHaskell #-}

import Data.NamedRecord
import Text.Printf
import Data.Name

name "firstName"
name "lastName"
name "email"

record "Person"
    `has` "firstName" := ''String
    `has` "lastName"  := ''String
    `has` "email"     := ''String

formatEmail :: Person -> String
formatEmail p = printf "<%s>" e
  where
    e = p `get` email

But it has a rather pleasant usage:

formatEmail (newPerson `set` email := "")

There is also a nice way to do default arguments. Go check it out, the docs are good.


Okay, I have to admit that I’m less sure about this one. Metaphor-weary Haskellers, please forgive me, but lenses seem to be the space-based laser (SBL) of the Haskell world right now. The idea is simple: you’d like a way to pinpoint part of a structure for observation or (destructive) modification. The actual infrastructure surrounding it, though, is rather elaborate. On the Haskell Cast #1, Edward Kmett goes into the details of the lens library. A few “lenses are the coalgebras for the costate comonad”s are thrown around, and generally a lot sails over my head.

One iota of wisdom that I did pull down from the stratosphere was that lenses are a kind of “getter” and “setter”, albeit ones with a firm FP grounding. These can be used to create flexible parameters to functions, where “flexible” just means:

  • a pool of parameters to draw from
  • that are named rather than positional
  • and not all of them have to be present

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Text.Printf

data Person = Person { _firstName :: String
                     , _lastName :: String
                     , _email :: String
                     }

makeLenses ''Person

-- use the "email" getter
formatEmail p = printf "<%s>" (p^.email)

-- issues a warning, but works.
main = putStrLn $ formatEmail (Person{_email=""})

Whew. It feels like cheating, but I really like how this works. Lenses let me “focus” on each field in my data structure by name (or position).
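If it helps to see the moving parts, here’s a stripped-down sketch of the same idea with plain getter/setter pairs instead of the lens library; the Lens type and names here are my own toy versions, not the real machinery:

```haskell
-- A toy "lens": a getter paired with a setter (the real lens library
-- derives these, and much more, with makeLenses).
data Lens s a = Lens { view :: s -> a, set :: a -> s -> s }

data Person = Person { _firstName :: String, _email :: String }

firstName :: Lens Person String
firstName = Lens _firstName (\x p -> p { _firstName = x })

email :: Lens Person String
email = Lens _email (\x p -> p { _email = x })

-- Named, non-positional, optional: start from a default, set what you need.
defaultPerson :: Person
defaultPerson = Person "" ""

formatEmail :: Person -> String
formatEmail p = "<" ++ view email p ++ ">"

main :: IO ()
main = putStrLn (formatEmail (set email "chris@example.com" defaultPerson))
```

The real library gets you this, plus composition and a great deal more, for free.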

Lenses seem to fulfill the three bullets that I listed, and they do so in the most “natural” way. I say that because, as Edward Kmett goes into, lenses are useful for a bunch of other stuff and they compose really nicely:

data Person = Person { -- ...the fields from above...
                       _phone :: Phone
                     }

data Phone = Phone { _number :: String, _kind :: String }

chris ^. phone . kind -- equals "mobile"

So, in summary, I feel that lenses provide a credible solution to the named-record/non-positional/keyword arguments problem. Go forth and hack.

code budget


At almost the moment a new project begins, we start the ritual of estimation. Guess how long this will take. Guess how much this will cost. How about if we change that feature to this feature? The goal of all this prognostication is to try to fit a potentially infinite product within the finite means we have available to us in terms of money, people, and (sometimes overlooked) will.

These are all precious and limited resources that we must spend; we trade what we can afford for the software we want. Among these precious and limited resources, I think we must also include lines of code. As Dijkstra noted, lines of code are recorded on the wrong side of the ledger. This means that as code is added to a codebase, the difficulty of making changes or future additions keeps going up. This has a self-limiting effect in the same way that a dollar budget is self-limiting: when you exceed the budget, you should examine the goals and outcomes of the project.

That’s my proposal. At the start of a project estimate the quantity of code needed to solve the business problem. This is hard in the same ways that other estimation is hard, but with time and practice, should be doable. When this budget is exceeded it should trigger some tough questions about the state that the project is in. Why have we used more lines of code than we thought? Could this be refactored? A benefit of the code budget is that it will force the team to examine the codebase at exactly the time that projects tend to be in maximum crunch mode. This hard check is a good thing that keeps everyone grounded in quality when all other external signals are screaming everything but.

nand2tetris: low-level love


I’ve been working through The Elements of Computing Systems, or as it is sometimes called nand2tetris. This is a fun course where you start out by being given (say: from God) the humble nand gate (file photo below):

Schematic of a nand gate

And here’s a nude photo (showing how one would be implemented):

Electrical schematic of a nand gate

But as far as nand2tetris is concerned, you can just assume that the above is a fact of life. You have a nand gate. So what do you do? Well, you start on an adventure where you define Not, then And, then Nor and, well, you get the idea. Soon you find that you have a working ALU.
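The early chapters really are that simple. Here’s a sketch of the first few steps in Haskell rather than the course’s HDL, with Prelude’s Bool standing in for wires:

```haskell
-- The one primitive we're given (modeled here with Prelude's Bool ops).
nand :: Bool -> Bool -> Bool
nand a b = not (a && b)

-- Everything below is wired up from nand alone.
not' :: Bool -> Bool
not' a = nand a a

and' :: Bool -> Bool -> Bool
and' a b = not' (nand a b)

or' :: Bool -> Bool -> Bool
or' a b = nand (not' a) (not' b)

-- Print the truth tables for the derived gates.
main :: IO ()
main = print [ (a, b, and' a b, or' a b)
             | a <- [False, True], b <- [False, True] ]
```

Keep composing like this and an adder, and eventually an ALU, falls out.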

I’m currently on the cusp of making the hardware to software jump. All along I’ve been having a blast wiring up these little beasties in HDL.

The course finishes up with implementing cool software in a high-level language (HLL). I haven’t peeked that far ahead, but it is supposed to be akin to Java. Oh yeah, by the time you get to this point, you’ll have implemented the compiler for this language, the VM that it runs in, the raw machine code the VM is written in, the CPU that runs that machine code, and so on. It goes all the way back to that humble nand gate that you started out with.

I hope to post some updates as I progress through the course, but the quickest way is to check my commits over on my n2t github project.

As a slight digression, but surely belonging here, this has been something of a summer-o-hardware for me. I started reading CODE by Charles Petzold a while back and the bottom-up description of computing that he laid out was intoxicating. I had to learn more and get my hands dirty with bits and bytes. And that’s when my hardware voyage began.

In keeping with this theme, here’s a short reading list if you want to blast your brain with computing. In fact, I bet that if you were to go through all these books it’d be like getting a degree in computer science – with a minor in cool-nerd history:

There you have it. Go off and hack hardware!

Lists out of lambdas and boxes out of functions


There’s a cool article by Steve Losh called List out of Lambda that reminded me, in a really good way, of a section in SICP. If you want to read the boiled-down Scheme version that’s in SICP, here it is (my paraphrasing from SICP section 2.1.3):

“cons” makes a list by putting an element onto the front of an existing list.

(cons 1 '()) ; '(1)

That’s the empty list, '(), and a list with just 1 in it, '(1). There are two other functions that deconstruct a list: car and cdr, or head and tail (the names aren’t really important):

(car '(1 2 3)) ; 1
(cdr '(1 2 3)) ; '(2 3)

Car returns the head of the list and cdr returns the rest of the list (without the head). You’d think that car, cdr, and cons would pretty much have to be built-in functions, but actually they don’t have to be!

(define (cons x y)
  (define (dispatch m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 -- CONS" m))))
  dispatch)

This is the trickiest thing to grok, but then you’re in the clear. Calling cons returns a function (called dispatch) which “closes” over its two arguments. That means the function implicitly stores x and y wherever function arguments are stored. Dispatch takes a single argument, m, which acts as a kind of selector: if m is 0, dispatch returns the first argument to cons; if m is 1, it returns the second.

((cons 1 '(2 3)) 0) ; 1
((cons 1 '(2 3)) 1) ; '(2 3)

Now we just define car and cdr to do exactly this:

(define (car lst)
  (lst 0))

(define (cdr lst)
  (lst 1))

Remember that the way that this works is that the list is being stored as a function so the only thing we can do is to call it!

(car (cons 1 '(2 3))) ; 1
(cdr (cons 1 '(2 3))) ; '(2 3)

Cool! We just built lists out of “nothing”. If you want to be even more mind-bending, you can make the dispatch function anonymous:

(define (cons x y)
  (lambda (m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 -- CONS" m)))))

It works the same.
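The same construction carries over to Haskell, though the types nudge you toward passing a selector function rather than a 0/1 flag (this is my sketch, not from the article):

```haskell
-- A pair is a function that hands its two values to a selector.
cons :: a -> b -> ((a -> b -> c) -> c)
cons x y = \f -> f x y

-- car selects the first value, cdr the second.
car :: ((a -> b -> a) -> a) -> a
car p = p (\x _ -> x)

cdr :: ((a -> b -> b) -> b) -> b
cdr p = p (\_ y -> y)

main :: IO ()
main = do
  print (car (cons (1 :: Int) "rest"))  -- 1
  print (cdr (cons (1 :: Int) "rest"))  -- "rest"
```

One bonus of the selector style: the two slots can even hold different types.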

If you view functions as little boxes that basically just contain their return values, this makes sense. A function is like a box that, when given its argument, barfs up the result. In fact, don’t think of a function as doing something; think of it as being something. If it is first class, then you should be able to treat it this way in all respects. You can pass around these little boxes that have some value “in” them, and the only way to get it out is to “call” it (or force it, or… whatever). But, and we’re starting to tread into heavy functional land here, what if you weren’t so hung up on the idea of getting the value “out” of the box?

(define (box-it-up x) (lambda () x))

This puts a value, x, in a box. You can do whatever you want with the box. You can store it, you can pass it around etc. And, of course, you can open it up by doing this:

((box-it-up 10)) ; 10

If that’s a bit hard to read, just remember that whatever is the first thing in a lisp list is called. Javascript would be:

box_it_up(10)(); // 10

But let’s say that we don’t really care to open the box (we labeled our boxes really well). We just want to make sure that, whenever it is opened, that we obey special handling instructions. Let’s write “double this” on the box.

(define (double-this box)
  (lambda () (* (box) 2)))

Maybe I bent the rules a bit. I used a magic pen: when I wrote “double-this” on the box, it performed an old mover’s optimization trick. Instead of just keeping our original box, I magically duplicated it, with the twist that the new box now contains double whatever was in the old one ;) Got that? (Hey, metaphors are hard.)

Maybe you can see where I’m going here. I don’t want to have to make a new kind of “double-this” function every time I want to do something. How about I just give you the magic pen?

(define (magic-pen box func)
  (lambda () (func (box))))

That means you can write “double-this” like so:

(define (double-this box)
  (magic-pen box (lambda (x) (* 2 x))))

Here’s how this all looks now:

(define twenty-box (double-this (box-it-up 10)))

(twenty-box) ; 20
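Here’s the box-and-pen toy transcribed into Haskell, where the “magic pen” turns out to be ordinary function composition (a sketch; the names mirror the Scheme above):

```haskell
-- A "box" is just a thunk: a function from () to a value.
boxItUp :: a -> (() -> a)
boxItUp x = \() -> x

-- The magic pen: write an instruction on the box without opening it.
magicPen :: (() -> a) -> (a -> b) -> (() -> b)
magicPen box f = \() -> f (box ())

doubleThis :: Num a => (() -> a) -> (() -> a)
doubleThis box = magicPen box (* 2)

main :: IO ()
main = print (doubleThis (boxItUp (10 :: Int)) ())  -- 20
```

Squint and magicPen is just flipped composition: stack up instructions now, open the box later.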

Cool! So this is how I’ve been thinking about Promises in JavaScript. We have a kind of box that, unlike functions, we really can’t open up, for the simple reason that the value may not have happened yet. But it is no biggie, because we can do whatever we like to the values inside the box!

If you’ve got all that, then I’m happy, because I’ve also kinda sorta tricked you into understanding monads. Did you notice how I was just able to handwave at the end and say, “yeah, but instead of a function the ‘box’ is some as-yet-unreceived network packet”? Monads are just the idea that you can compute all day long with these sorts of “unopened boxes”. Well, not just that, but the devil is in the details, and that means I’ll probably write another blog post about it.

the dipert problem


Recently, Alan Dipert dropped a bomb on the twittersphere by posing this question (warning: there are spoilers in the replies):

“pop quiz: solve point-free. answer must be a function value! #clojure”

In case your office has banned 4clojure for being a huge distraction, I’ll post the problem here:

(= 256 ((__ 2) 16),
       ((__ 8) 2))

(= [1 8 27 64] (map (__ 3) [1 2 3 4]))

(= [1 2 4 8 16] (map #((__ %) 2) [0 1 2 3 4]))

In problem 107, your challenge is to write a function that satisfies all of these (it could be dropped in place of the __s above). I will let you go take a crack at solving it. Because up next is some serious spoiler action.

Got your solution? I came up with this:

(fn [x] (fn [y] (reduce * (repeat x y))))

or (what I was really doing) in Haskell:

f :: Int -> Int -> Int
f x y = foldl1 (*) (replicate x y)

We are doing manual exponentiation: “make a list of ys that is x in length” (e.g. replicate 8 2 == [2, 2, 2, 2, 2, 2, 2, 2]). Then you just run multiplication through the list:

foldl1 (*) [2,2,2,2,2,2,2,2] == 2 * 2 * 2 * ... 2 == 256
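One wrinkle worth noting: foldl1 has no starting value, so f 0 2 crashes on the empty list, whereas Clojure’s (reduce * ...) returns 1 there. Swapping in foldl (*) 1 covers that case; here’s a quick check against the 4clojure tests:

```haskell
-- Same idea as foldl1 (*), but foldl's seed of 1 handles x == 0.
f :: Int -> Int -> Int
f x y = foldl (*) 1 (replicate x y)

main :: IO ()
main = print ( f 2 16 == 256
            && f 8 2  == 256
            && map (f 3) [1, 2, 3, 4] == [1, 8, 27, 64]
            && map (\x -> f x 2) [0, 1, 2, 3, 4] == [1, 2, 4, 8, 16] )
-- True
```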

Now comes the “Dipert Problem.” He has told us that we have to rewrite the solution (or any solution) using so-called point-free style. I’m sure there’s more to it, but essentially it means that we are not allowed to mention any variables! When I first heard about this style, it sounded impossible! The cool thing is that it isn’t, and it leads to some massively simple code. Let’s try it out.

I’m going to start with my solution above, called f, and then write successive versions of it; each time, I’ll remove a variable and call the result the “next” version: f1, f2. Okay? Cool.

f, f1, f2 :: Int -> Int -> Int
f x y = foldl1 (*) (replicate x y)

For the first transformation, we need to get rid of the y that’s hanging off the end of both sides of our equation. We’ll need to juggle the innards a bit because here is what the types look like so far:

foldl1 (*) :: [Int] -> Int
replicate  :: Int -> a -> [a]

replicate takes two arguments and then produces a list, which foldl1 (*) wants to consume. The trouble, and what tripped me up a bunch, is that I can’t just do this:

foldl1 (*) . replicate

Wah, wah (sad trombone). GHCi tells me:

Expected type: Int -> [c0]
  Actual type: Int -> a0 -> [a0]

Okay, that makes sense: for the fold and replicate to “line up” for composition, replicate has to take one argument and then produce a list. The crux is that composition (the “dot” or period in the code) only works for single-argument functions:

(.) :: (b -> c) -> (a -> b) -> a -> c

This is a little pipeline, but reversed, because that’s how mathematics does it. It says: “the right-side function takes an a and gives a b, and the left-side function expects a b and gives a c; now you can stitch them together and have a function that skips the b and takes you right from a to c.” But we have a function that looks like:

(a -> b -> c)

on the right-hand side; it won’t work. How do we convert an (a -> b -> c) to an (a -> (b -> c))? This way:

-- f x y =  foldl1 (*) ((replicate x) y)
-- f x y = (foldl1 (*) . (replicate x)) y
f1 x  =  foldl1 (*) . (replicate x)

Note: the first two lines are commented out in case you are cut-n-pasting along. The first line just puts parentheses in where they implicitly are in Haskell. Each time you see a function of two arguments, it is really a function that takes one argument and returns a function expecting the second argument! This weird but remarkable fact of Haskell is called currying.

Now, on to the second line: we see that we have the right types! (I am cheating a bit on the types; if you like, you can define a rep that just uses Ints.)

replicate x :: Int -> [Int]  -- cheating: where 'x' is a specific int
foldl1 (*)  :: [Int] -> Int

foldl1 (*) . replicate x :: Int -> Int

And that brings us to f1! We used grouping and composition to move the y outside the computation and then we dropped it from both sides.

Next we’ll tackle the x:

f x =  (foldl1 (*) .) (replicate x)
f x = ((foldl1 (*) .) . replicate) x
f2 =   (foldl1 (*) .) . replicate

It may look different, but the same thing is going on. We can group the composition with the fold without changing anything. This is just like doing:

3 + 4 == (3 +) 4

Next we do that same trick again where we can now compose the inner functions because the types line up (again, I’m simplifying types a bit):

((foldl1 (*) .) .) :: (a -> b -> [c]) -> a -> b -> c

It looks a bit hairy, but in our case, it is just what we want! If I fill in the actual types we’ll be using, it becomes clearer:

((foldl1 (*) .) .) :: (Int -> Int -> [Int]) -> Int -> Int -> Int

Booyah! This contraption takes a function of two Ints that produces a list of Ints, [Int]. Well, that’s just what replicate is! So if we then feed in replicate:

(foldl1 (*) .) . replicate :: Int -> Int -> Int

And that’s it: we have a point-free function that takes two Ints and returns an Int. So here is our final function:

f2 = (foldl1 (*) .) . replicate
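As a sanity check, the pointed and point-free versions agree on a handful of inputs:

```haskell
-- The original, pointed version.
f :: Int -> Int -> Int
f x y = foldl1 (*) (replicate x y)

-- The point-free version derived above.
f2 :: Int -> Int -> Int
f2 = (foldl1 (*) .) . replicate

main :: IO ()
main = print (and [ f x y == f2 x y | x <- [1 .. 5], y <- [1 .. 5] ])
-- True
```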

In general, and I don’t know a term for this, successive function composition lets us compose higher- and higher-arity functions together. Here’s a dumb example using my little point-free succ function:

g :: Int -> Int
g = (+1)
(g .)         :: (a -> Int) -> a -> Int
((g .) .)     :: (a -> b -> Int) -> a -> b -> Int
(((g .) .) .) :: (a -> b -> c -> Int) -> a -> b -> c -> Int

Clear pattern. I kinda think of this as saying something like “please give me a function which eventually promises to give me what I want.” The eventually part is essentially “after you’ve collected all the stuff you need.” It would be trivially satisfied by some function that ignores its args and returns a constant:

(((g .) .) .) (\x y z -> 1) 4 5 6 == 2

Remembering that g just increments, the x y z are totally ignored. The function supplied to the multiply-composed g is like some kind of integer “pre-processor”; the x, y and z can be whatever you need to do to figure out how to give g an integer. Or at least that’s how I’m thinking of it.
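A quick check of that pattern:

```haskell
g :: Int -> Int
g = (+ 1)

main :: IO ()
main = print ( (g .) (* 10) 4 == 41               -- g applied after (* 10)
            && (((g .) .) .) (\_ _ _ -> 1) 4 5 6 == 2 )  -- args ignored, g 1
-- True
```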

I had a lot of fun trying to figure this out!

my “transparent web” talk


The Transparent Web: Bridging the Chasm in Web Development from twopoint718

Without much ado at all, here’s my talk. I’m covering the real basics of using both Ur/Web and Opa. I create a basic “Hello world” page in each and then I go on to write a little “comments” system.

my gpl talk


Note to the reader: my speaker notes are reproduced below, all run together. Also, the editing isn’t top-notch.


First, a warning: if you thought I was going to talk about software licenses, I’m not. Not really. I’m going to talk mostly about people. Ideas are one thing; they are the compiled results of processes that people go through. I want to decompile some ideas. I want to talk about that.

To that end, let me tell you a story. Let’s go back to 1980, Richard Stallman is employed as a staff hacker at the MIT AI lab. This is the stuff of legend.

The lab had recently been given a new prototype printer from Xerox PARC. This was ten times faster than the previous printer, finishing a 20-minute job in 2 minutes, and with more precise shapes to boot. This was the same sort of tech that, a decade hence, would touch off the desktop publishing revolution. But back at the AI lab, the printer was becoming the source of more headaches than anything else. Stallman and others would send jobs to the printer only to show up later to find that the job had jammed four pages in. This was a minor annoyance, but it was multiplied across everyone at the lab. Stallman thought, “why should I have to babysit this machine when I can code?”

Stallman knew a way to attack this sort of problem. On a previous printer he had modified the source to insert some monitoring code in the printer driver. Periodically, Stallman’s code would check to see that the printer was proceeding in its assigned job, if it had stalled, the program would alert whoever’s print job was affected. You’d get a message like: “The printer is jammed. Please fix it.” It wasn’t perfect, but it informed those most interested in the problem.

The solution this time around would be similar. Stallman could grab his old code, tweak it for the new printer and voila: jam notifications. So Stallman rolled up his sleeves, grabbed a coffee, and opened up the Xerox source code.

If you see where I’m going here, you’ll probably see what’s coming next. There was no source code. Stallman even spoke with the programmer who had worked on it, and that programmer wasn’t allowed to reveal the code to Stallman.

This is the moment where something happens. This is where an insight strikes, the apple falls on your head, the disparate pieces line up, and you need to jump out of the tub and tell the world. Stallman decided, at that moment, that some fundamental wrong had been done: the wrong of not being allowed to help your neighbor by telling him how code works. This brings me to the main point of this talk. This is the thing that, even if everything else you hear is mangled or forgotten, I want to come through unchanged: RMS believes that software has moral implications; the choice of what kind of SOFTWARE you want is a choice about what kind of WORLD you want.

Note all the things that I didn’t say. It isn’t about what is technically superior. It isn’t about what is good for being able to sell. It isn’t about what the legal department says. It has no bearing on what various companies will tell you to be worried about. It isn’t about being good for playing games, or having flash support. It is nothing more and nothing less than a philosophical stance. You can agree with it, or disagree with it in exactly the same way as you would argue about Plato’s Forms.

I feel like this is the key misunderstanding in discussions surrounding the GPL and Free software. I’m taking a philosophical, an ethical, and maybe a moral stance. I haven’t brought anything else into it. Often, when I see discussions about software licenses, I feel like people are talking past one another from the very first sentence.

It is profoundly nonsensical to compare something like “justice” to something like a wrench. Philosophers begin by defining words because, if they don’t, we’ll get so mired in the muck of argument that no points are made and no progress is made.

The word “free” is a good place to start. Free can be taken to mean “no cost” but it can also be taken to mean “freedom”. This is sort of a fine point to make, but I think it could lead to lots of confusion.

Free software has to do with the “freedom” part. There are lots of really good objections at this point. The one that I have anticipated is “freedom for whom?” And that’s the core of the so-called permissive divide in the broader category of “open” software. The permissive people would respond to the “freedom for whom” question with something like “certainly not for me, you say I must share changes, that’s pretty restrictive.” And the answer to “freedom for whom?” that I want to present here is…

Well, that’s the rest of my talk. The GPL is really a more general case of the Emacs license.

Now Emacs has a pretty storied history, wikipedia dates it back to the mid seventies, well before GNU or Emacs-as-GNU-project. But by the time of the release of Emacs 15, there was a sort of proto-GPL license attached. It served to give “users the right to make and distribute copies” and “the right to make modified versions, but not the right to claim sole ownership of those modified versions”. It was moving in a similar direction, but it was not as legalistically formal as the eventual GNU project would need it to be.

Stallman’s intellectual property attorney at the time viewed the GNU Emacs License pretty much as a simple contract, although one that stipulated a rather odd price. Rather than money, the license cost access to any changes: users would have to share modified versions of the software. The attorney remarked: “I think asking other people to accept the price was, if not unique, highly unusual at that time.”

In 1989, a 1.0 version of the GPL emerged. The preamble read:

The General Public License is designed to make sure that you have the freedom to give away or sell copies of free software, that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

One notable change was that users were no longer required to share changes. You could make private, in-house tweaks to the software without being forced to share those changes back to the community. License agreements are not usually characterized by what they give you; rather, as we scan ever-longer End User License Agreements, or plow through revision 271 of Facebook’s new much-better-we-assure-you privacy policy, we are looking for the things they are taking from us.

The GPL, in essence, tries to codify a very idealistic hacker ethic. It is for tinkering, changing, breaking, reassembling, and passing it on to your friend. It is software as mix-tape.

The main things that the GPL gives you are broken down into four parts, aka the “four freedoms”. Zero: you’re allowed to do what you want with the software. An author can’t proscribe the software’s use for something that they don’t approve of. This is pretty profound, I think.

One: if you are going to be able to do this, you’ll need the source code. You’ll also need whatever is required to actually end up with a working program. This can be a point of contention. A corner case is in embedded systems such as set-top boxes, where the code may be GPL (busybox is a common example) but you can’t actually change it due to things like code signing. Two: I think it is interesting that freedom two emphasizes the goal of the redistribution. It isn’t just for fun or for copying’s sake; it is because we view software as something that can help people.

Freedom three, the final freedom. You’ll also need access to the source code to realize this one. You are allowed to make public changes to the code. The difference from freedom one is that you’re allowed to do this out in the open, rather than just in private and for your own reasons. You can fork. You can contribute back.

Lurking in freedom three is also the core of my argument, which I promise I’m getting to really soon. First, I’m going to try to dispel a common myth about the GPL, one I’ve heard a lot. The general gist is that “the GPL is to copyright as anarchy is to government”: something opposed to the very notion of it. This is where people get the idea that any business built on such shifting sand of self-destruction must be flawed in some way.

Opposition to copyright is an interesting subject, there’s lots of good debate. But it doesn’t really have anything to do with the GPL. The GPL has staked its efficacy IN copyright.

Far from being some sort of anti-copyright construct, the GPL’s EXISTENCE depends on copyright, if you didn’t have copyright, you couldn’t have the GPL (or lots of other stuff). You wouldn’t get any say in what people do with your stuff… but that’s another discussion entirely!

So for the rest of the talk, consider copyright to be a constant underpinning, a foundational necessity for everything else we’re talking about. It’s just that we’re going to use it for something it wasn’t intended for: we’re hacking it. BAM: flip it.

copyleft: pay it forward.

As Stallman said: “see [the GPL] as a form of intellectual jujitsu, using the legal system that software hoarders have set up against them”. He actually lifted this from a similar sticker at a sci-fi convention, which read: “Copyleft (L), All Rights Reversed.” And this brings me around to my thinking on the GPL. I guess I’m kinda surprised by all the emphasis on virulence these days. The metaphor is broken. Metaphors are broken, but that’s another talk.

Casting aside any metaphors, the GPL is an inductive license. This is a term that I made up but I think it describes the nature of the GPL much better than saying that it is viral.

An initial case is established. You have the four freedoms: the freedom to run, freedom to change, freedom to redistribute, and the freedom to share those changes.

But for it to really be Free software, the person receiving the software must have these freedoms. So it is not good enough to leave it here. We only have the base case for a software license. We have to prove the general case: not me or you, but person N+1. So the person whose freedom we’re talking about is person N+1, the inductive person.

This idea is the essential difference between permissive software and free software. Free software describes the case of that person N+1, inductively. It raises the “freedom for whom?” question and answers it with “the inductive person”.

So I’ll close, approximately, where I started: with a definition.

“Induction” is the practice of deriving general laws from specific cases; the word arises from a root meaning “leading to”. Free software asks us to consider this hypothetical person on the assumption that they could someday be anyone, indeed everyone.

secret santa


While I was sitting around eating a ton of Christmas food, I got to thinking about the Secret Santa problem. In its most basic form, this is the same as something called a derangement. I mention it just because I think the name is cool. The concept is super simple: a derangement is a permutation of the elements of a list such that no element stays in the same place:

[1, 2, 3] would have a derangement:
[2, 3, 1]

Notice that each element has moved. This pertains to Secret Santas because if the only rule is that you can’t choose yourself, then a derangement (like this one) is all you need; it’s a valid Secret Santa!

> zip [1, 2, 3] (derangement [1, 2, 3])
[(1,2),(2,3),(3,1)]

cool! person 1 gives to person 2, person 2 gives to person 3, and person 3 gives to person 1.

As my family could tell you, I thought I could do better (in keeping with my motto: “if it ain’t broke, fix it until it is”). Wouldn’t it be cool if, in addition to forbidding the case where you pick your own name (the reflexive case), you could also provide two more lists? One is a list of pairings which are disallowed, and the second is a list of pairings which are to be discouraged (made less likely).

I’ve implemented almost what I just described. In my code, I don’t actually make a selection from a distribution where discouraged selections are less likely. Instead, I’ve added a bestSantas function that lets you limit yourself to selections under a certain amount of badness (a selection gets 1 point of badness for each discouraged pairing it includes). I haven’t decided yet how I want to select from among the differing levels of badness. But anyway, enjoy!
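To make the scheme concrete, here is a brute-force sketch in Python. This is my own illustration, not the original implementation (which, given the bestSantas name, was presumably Haskell): derangements enumerates self-free assignments, and best_santas filters out disallowed pairs and caps the badness score.

```python
import itertools

def derangements(people):
    # Yield assignments as (giver, receiver) pairs where nobody draws themselves.
    for perm in itertools.permutations(people):
        pairing = list(zip(people, perm))
        if all(giver != receiver for giver, receiver in pairing):
            yield pairing

def badness(pairing, discouraged):
    # One point of badness per discouraged (giver, receiver) pair included.
    return sum(1 for pair in pairing if pair in discouraged)

def best_santas(people, disallowed, discouraged, max_badness=0):
    # Keep only pairings with no disallowed pairs and badness under the cap.
    return [p for p in derangements(people)
            if not any(pair in disallowed for pair in p)
            and badness(p, discouraged) <= max_badness]
```

For three people, best_santas([1, 2, 3], disallowed={(1, 2)}, discouraged=set()) rules out one of the two possible derangements and leaves only the assignment where 1 gives to 3.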

sopa hearings


These SOPA hearings are beyond awful. The hearings, at least as I found them at 6pm CST, are about nothing more than who has standing to sue in the US. This feels like a group of people standing around trying to decide whether to roast hot dogs or marshmallows as the Library of Alexandria burns.

This letter, which carries the signatures of many of the designers and builders of the Internet (TCP/IP, BIND, DNS, HTTP, MIME, etc.), clearly lays out the damage that this bill would cause.

What I find infuriating about the discussions that are happening right now regarding SOPA, is that the committee members are so glaringly ignorant of the Internet. There is no discussion of how it would work with DNS, what burden that would put on hosting companies, schools, organizations, etc. In short, they don’t appear to know how the Internet works.

I cannot help but worry that there are influencers promoting this bill who have a very narrow interest in the Internet. They see it as an economic threat to an old business model (movies, music), and so it must be turned into a safe and effective mechanism for securing a future income stream.

strangeloop 2011 notes


I got back from Strangeloop 2011 just this week and wanted to cover some of the interesting points from this really fascinating conference (it is on my must-go list from now on)!

It was incredibly difficult to get to all the talks that I wanted to see because the conference was “seven talks wide” at most points. A common theme emerged where, as I finished up a talk in one room, I would see the stream of tweets start rolling in about some incredible talk that I had just missed; I can’t wait for those videos.

Here’s my recap of the stuff that I went to:

Sunday (workshop day)

  • Haskell: Functional Programming, Solid Code, Big Data with Bryan O’Sullivan - this was a really nice intro to Haskell for someone who hadn’t ever seen it before. I’ve worked through about half of the “Real World Haskell” book, so a lot of this was not new, but it was great to see one of the authors explain some points himself. There were also some interesting comments from Gerald Sussman about how Haskell is the “most advanced of the obsolete languages” (more on that later).

Monday (first day of conference)

  • Category Theory, Monads, and Duality in (Big) Data with Erik Meijer - This was a really cool opening keynote where Erik Meijer launched the new term CoSQL instead of NoSQL by showing how the two concepts are duals of one another (in the mathematical, category theory sense). This proved to be something of an overarching theme of the conference, things being different but mirrored versions of the same thing. see: A co-Relational Model of Data for Large Shared Data Banks.

  • [I skipped this timeslot because I was on the hallway track listening to Erik Meijer talk about static typing with some scala folks; very interesting!]

  • An Introduction to Doctor Who (and Neo4j) with Ian Robinson - I have to admit, I got sucked in because I’m a huge Doctor Who fan, but I had heard of graph databases before and Neo4j looked to be a really interesting one. In particular, I wanted to see if it could be used from Clojure (yes: borneo and clojure-neo4j). The talk concerned building a very complicated network of the relationships between several Doctor Who props (Daleks!) over time. It was pretty easy to see how these mapped nicely to nodes with arcs between them.

  • Skynet: A Scalable, Distributed Service Mesh in Go with Brian Ketelsen - this was a cool talk about a lightweight framework written in go for writing distributed applications that are highly resilient. It uses Doozer for data storage (though it didn’t in this talk).

  • Parser Combinators: How to Parse (nearly) Anything with Nate Young - This talk gave examples of writing parser combinators (where a parser here means a function that can consume a little input, and then returns another function that consumes input after it). The idea is to chain these parsers together with combinators (higher-order functions which take parsers and operate on them, like “oneOrMore” etc.). This talk reminded me of Bryan O’Sullivan’s funny phrase about how haskell’s “>>=” operator (read “bind”) is written in “moon language”.
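    The core idea from this talk fits in a few lines. Here is a sketch of my own in Python rather than Haskell (not code from the talk): a parser is a function from an input string to a (value, remaining input) pair, or None on failure, and a combinator like one_or_more builds bigger parsers out of smaller ones.

    ```python
    # A parser: input string -> (parsed value, remaining input) or None.
    def satisfy(pred):
        def parse(s):
            return (s[0], s[1:]) if s and pred(s[0]) else None
        return parse

    # A combinator: takes a parser, returns a new parser.
    def one_or_more(p):
        def parse(s):
            values = []
            while (r := p(s)) is not None:
                value, s = r
                values.append(value)
            return (values, s) if values else None
        return parse

    digits = one_or_more(satisfy(str.isdigit))
    ```

    Running digits on "123ab" consumes the leading digits and hands back the unconsumed "ab", ready for the next parser in the chain.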

  • Getting Truth Out of the DOM with Yehuda Katz - This was a talk about the SproutCore framework. Katz had a lot of insight about how to keep the browser interaction abstract and event-based rather than mucking about (and then being mired) in the DOM.

  • We Don’t Really Know How to Compute! with Gerald Sussman - This was a mind-blowing keynote. In fact, I had to develop a new unit of measure, the Eureka, which denotes having one’s mind blown once per minute. I think that in the 50-some minute talk that Sussman gave, I may have had more than about 50 mind-blowing thoughts. At one point Sussman asked how much time he had left and someone from the audience yelled out “who cares?”, which was pretty much the feeling in the room.

    Sussman started out the talk with a picture of a Kanizsa Triangle and mentioned that the brain can infer that there is a hidden triangle in about 100 ms, which is a few tens of “cycles” for the brain. With a computer, we don’t know how to even begin to solve this recognition problem in so few cycles; we don’t really know how to compute. Sussman’s idea (which I can’t do justice to here) was that computing as we know it has to and will change in the near future. Computing will become massively distributed (“ambient”, but this term is from a later talk), with disparate nodes that must collaborate to arrive at answers.

    His example, a Propagator, was a program that can integrate more and more data while keeping track of the provenance of that data. Or, put another way, an “independent stateless machine connecting stateful cells”. Amazing!
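    The propagator idea can be sketched very roughly in Python. This is my simplification, not Sussman’s actual system (which also merges partial information and tracks provenance): stateful cells hold content, and stateless propagators fire whenever their input cells gain values.

    ```python
    class Cell:
        def __init__(self):
            self.value = None
            self.watchers = []

        def add_content(self, value):
            # Accept new content and wake every propagator watching this cell.
            if value is not None and value != self.value:
                self.value = value
                for fire in self.watchers:
                    fire()

    def propagator(inputs, output, fn):
        # Connect input cells to an output cell through a pure function;
        # the network recomputes whenever any input changes.
        def fire():
            if all(c.value is not None for c in inputs):
                output.add_content(fn(*(c.value for c in inputs)))
        for c in inputs:
            c.watchers.append(fire)
        fire()
    ```

    Wiring propagator([a, b], total, lambda x, y: x + y) makes total update on its own as a and b receive content, with no central loop in charge.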

Tuesday (second day of conference)

  • Embedding Ruby and RubyGems Over RedBridge with Yoko Harada - This didn’t make much sense to me until a coworker (@devn) started doing some cool stuff using Ruby gems from Clojure.

  • Event Driven Programming in Clojure with Zach Tellman - This was a really cool talk. It looked to me to be an implementation of Go-style concurrency (channels) in Clojure. There was also a macro that would analyze data dependencies and make the correct async calls. The projects are called Lamina and Aleph, and they’re the kind of thing I want to find a project to use on.

  • Teaching Code Literacy with Sarah Allen - This was a talk about giving kids the opportunity to learn programming at an early age (Allen says that programming is one of those things where you don’t know if you’ll like it until you’ve tried it). She has also found that the age at which programming should be introduced is 5th-6th grade; earlier than I thought!

  • Post-PC Computing is not a Vision with Allen Wirfs-Brock - This talk started with a breakdown of the eras of computing. First was a “corporate” era, then a “personal” era, and now we are entering the “ambient” era. Each era is defined by what ends computing resources are put toward. In the corporate era computing was used to solve problems that businesses had, then computing became more available generally, and finally it is becoming ubiquitous. This talk also covered the history of the browser and how it is, and will be, the platform for the foreseeable future.

  • Simple Made Easy with Rich Hickey - Rich’s talk was an argument for disentangling computing. It started with separating the notions of “simple”, “complex”, and “easy”. Easy is a subjective thing, things that I find easy you may not. Simple is objective, it derives from the notion of “a single fold”. Complex is just the opposite, it is “woven or braided”. We must avoid adding complexity to our software, or as Rich put it, we must not “complect” it (“to interweave or entwine”). Humans have a finite (and very limited) ability to handle many factors simultaneously, and so to have any hope of working with difficult problems, we must be rigorous in working toward simplicity.

Rich had a few words for TDD in his talk, and I think these were widely misinterpreted. His point was simply that tests have a cost and a thoughtless devotion to them will risk underestimating that cost. I think a lot of people took that to mean “you shouldn’t test” or that “tests are worthless”, but I think he was just pointing out that they’re not free. He introduced the term “guardrail programming” for a style that just bounces between the guardrails rather than proceeds to a destination by steering.

This talk drew a standing ovation from the crowd, including, I hear, Gerald Sussman. I’ll be looking for it on video when it comes out.

Strangeloop 2011+N is definitely on my must-attend list. The people that I met (which could be another couple of blog posts) were worth the admission all by themselves. The talks were fascinating and gave me a ton to read up on. The conference felt like it was well-run and organized. St. Louis was a cool city to hang out in (I wish we had the same open-container law in Madison!). I can’t wait for next year.

Madison Ruby Conference


I’m going to be attending the Madison Ruby Conference this Friday and Saturday. I must confess that I’m not a Ruby-ista by training (does this reveal my Pythonista heritage?), but I’m really excited to go and see who attends, chat with smart tech-folk, and generally have a good nerd-time yakking about code. Plus, it couldn’t hurt to see a little Ruby code and find out what all the fuss is about. I’m not going to confine myself to just C, Java, Python, Lisp, Haskell, and shell. Conference ahoy!

Social Networking


Google+ is another Facebook. It may be nice but we should all remember that it is basically the same thing. That may be a positive or a negative for you, but the same conversation that we’ve been having about Facebook applies to Google+. If we want to network with people, perhaps it is worthwhile to consider existing tools. Blogs for general sharing (blogs that live on your own server!), email for 1-to-1 correspondence, IRC for chat and so on. You don’t need a third party to mediate your relationships.

Enjoy the XKCD on the topic.

Code Crawl


A code crawl is an event that’s a mix of a hackathon and a pub crawl. Here’s how it works:

Interested parties grab their laptops or, I suppose, lug their desktops, out to the bars. Everybody orders a drink or two and hacks on code for around an hour. The group gathers around a big table if the bar has it, or keeps in quasi-remote contact from the bar’s far-flung corners via IRC or the like. If the atmosphere allows it, it’s fun for participants to just yell “try it now” or “pull from my repo” across the room. Maybe they’ll get puzzled stares from other patrons. They should just explain that it’s a code crawl and everyone is programming! When they reach a good break point (ha), or just after an hour or two, everybody closes out their tabs (or the single mondo-tab!) and heads off to the next stop on the crawl.

Groups can do something fun when they think they’ve collectively hit the Ballmer Peak. Of course, it’s probably then a good time to wrap up the crawl!

First Post


First post. That’s the best thing about reformatting a blog: it always seems to generate at least one extra post. It’s weird how blogging software does that.

Hackerspace Show on Wisconsin Public Radio


Larry Meiller did a show about (maker|hacker)spaces in Wisconsin. Sector67 is mentioned.

Learning to type on the Twiddler keyboard


I recently got a Twiddler chorded keyboard (I love it). I have two main goals for it. The first is to be able to use it while doing things like giving talks, because it is like having one of those presentation clickers while still being able to competently type. The second is to use it on my smartphone as a better alternative to the on-screen keyboard. One aside on the second goal: it’s not totally clear to me how to use an external USB keyboard with Android (though I have some leads), but things look generally promising.

Either use, of course, assumes that I can type on the crazy thing. I’m one of those people who find it fun to try to re-wire my brain to do new things, and I figure that if I could switch to Dvorak (I’ve been using it for about 10 years), then I can tackle this thing! I decided to do some drills with the Twiddler to get to the point where I can use it for day-to-day stuff; from then on it’ll bootstrap itself through frequent use. That, by the way, was roughly my technique for learning Dvorak back in the day:

  1. Print out the layout and tape it up at eye level; this breaks you of the habit of looking at the keys (they won’t help you if you remap the keyboard in software)
  2. Do simple drills of the home row (this is great on Dvorak because you can form TONS of words)
  3. Expand the drills to less frequently used letters and characters
  4. Now that you can type all words, even if you are slow, get on IM or IRC in a low-traffic channel that you would like to participate in, and just converse. This will provide both motivation and practice without the feeling of banging your head against the wall.
  5. Do this daily or almost-daily for about 4-8 weeks (that’s about how long it took me to match and then exceed my QWERTY speed)

To deal with steps 2 and 3 on the Twiddler, I wrote the short Python script below. It pulls words out of /usr/share/dict/words that can be typed without any chord key (open), using the first chord key (1st or “L”), and using the second chord key (2nd or “M”). I don’t handle the third chord key here because the first and second are sufficient for all letters. Here’s the script:

import random

def all_from(target_list, input_list):
    for c in input_list:
        if not c in target_list:
            return False
    return True

def first_set(input_word):
    return all_from("abcdefghABCDEFGH", input_word)

def second_set(input_word):
    return all_from("ijklmnopqIJKLMNOPQ", input_word)

def third_set(input_word):
    return all_from("rstuvwxyzRSTUVWXYZ", input_word)

def fourth_set(input_word):
    return all_from(".,;'\"?!-", input_word)

def search_words(words, key_set=first_set):
    out = []
    for word in words:
        if key_set(word) and len(word) > 1:
            out.append(word)
    return out

if __name__ == "__main__":
    get_words = 10
    fname = "/usr/share/dict/words"
    wordlist = open(fname, "r").read().split("\n")

    first = search_words(wordlist, first_set)
    second = search_words(wordlist, second_set)
    third = search_words(wordlist, third_set)
    #fourth = search_words(wordlist, fourth_set) # need wordlist w/ punct.

    print "open: ", " ".join(random.sample(first, get_words))
    print "1st:  ", " ".join(random.sample(second, get_words))
    print "2nd:  ", " ".join(random.sample(third, get_words))
    #print "3rd:  ", " ".join(random.sample(fourth, get_words))

Don’t zip, bundle(1)


I’ve been playing around with Plan 9 from Bell Labs a bunch lately. It takes the Unix idea that everything should have a file-like interface and runs with it to its logical (and surprisingly useful) extreme. It seems like a system that hangs together really well and has the feeling that it was actually designed, rather than grown. But that’s enough about that. What I wanted to mention was a script, bundle(1), that I found to be really useful. So, use the source, Luke, and then I’ll say something about it:

echo '# To unbundle, run this file'
for i
do
    echo "echo $i"
    echo "sed 's/.//' >$i <<'//GO.SYSIN DD $i'"
    sed "s/^/-/" $i
    echo "//GO.SYSIN DD $i"
done

Cool huh? It’s kinda meta, but here’s what it does: when you run it like bundle file1 file2, redirecting the output to a file, you get a script that, when run, recreates file1 and file2. It does this with a few applications of sed. Using file1 and file2 as examples, say file1’s contents are:

hello

and file2 is:

world

after running bundle with these two files, the output will look like this:

# To unbundle, run this file
echo file1
sed 's/.//' > file1 <<'//GO.SYSIN DD file1'
-hello
//GO.SYSIN DD file1
echo file2
sed 's/.//' > file2 <<'//GO.SYSIN DD file2'
-world
//GO.SYSIN DD file2

so, if you run this file, it prints “file1” and “file2” to the terminal, but then writes “hello” and “world” to file1 and file2, respectively (after having stripped the “-” off of each line using sed). So this is a neat little way to package up a bunch of text-ish files into a single “self-expanding” package.

unfill-paragraph (only Emacs nerds need apply)


I usually keep plain text at a nice and tidy 72 columns (give or take, but certainly under 80!). But there are times when you need text that will be folded (word-wrapped) on the end user’s side. Think of those text boxes where what you type is going to be displayed on some website in a totally unformatted way. In this case you want the text to be one long line per paragraph, with a blank line separating paragraphs. That way, the text is as wide as the browser window or otherwise follows user preferences (there is a way to do this in Firefox, too). Since I’m always hitting M-q in Emacs, my text is always formatted at 72-ish columns. The following bit of Emacs Lisp lets you unfill paragraphs, that is, it strips out the newline characters within a paragraph.

;; From Stefan Monnier <foo at>. It is the opposite of fill-paragraph:
;; it takes a multi-line paragraph and makes it into a single line of text.

(defun unfill-paragraph ()
  (interactive)
  (let ((fill-column (point-max)))
    (fill-paragraph nil)))

(global-set-key (kbd "C-c M-q") 'unfill-paragraph)



Remembrance of Blogs Past



Yikes, it always seems to come to this. I find myself with a blank directory or database (it depends on the blog) that needs to be filled up with stuff that I write. And as is the requirement, I go and start filling it up with data.

I can pardon you for missing the blog that previously inhabited this domain name. Its short run was punctuated by frequent DNS issues, the blog equivalent of being deprived of oxygen. It was also overly fussy in its implementation so there was really no chance that Sarah would ever put words on the site (the same is true for me).

I switched over to WordPress after seeing some pretty impressive examples of what it does nowadays.

So, as has become my custom, I’ll link back to the other sites where I’ve also written.