
Pushing to Google App Engine

In part 1 we created a Google App Engine application and got it running on a local server. Maven did all the work for us; in fact, we didn’t look at the code or configure the server. In this post we’ll get the code running in the cloud, on Google’s App Engine PaaS.

We saw in the last post that running the app on a local server requires a simple maven command.

$ mvn appengine:devserver

Uploading the project to the Google App Engine platform is just as easy:

$ mvn appengine:update

Unfortunately, that won’t work yet, because we haven’t created a project to upload to. You’ll get the helpful “for an unspecified reason” error.

A Project In The Cloud

Before we deal with the code, let’s create a project in the cloud. If you don’t have a google account (what have you been doing?) then go make one.

Go to the Google Developers Console.

Click on the ‘Select a project’ dropdown, and choose ‘Create a project’.

On the create project screen, enter a project name. I’ve gone with ‘Devjoy Endpoints Demo’. You can also enter a Project Id, but I’ve clicked on the arrow to generate a random one.

I don’t like random project IDs, but for a blog post, I want something that will stand out when we encounter it in code.

After saving the project, the dashboard will show the new project.

We still have some housekeeping to do before we get to the code. Click on the hamburger menu, select API Manager, and click OAuth consent screen.

Enter a project name, and hit save. Then select the Credentials tab, and click on ‘New credentials’. Select OAuth Client Id.

For application type, choose Web Application. Enter a name for the web client. I’ve chosen ‘Devjoy Endpoints Demo Web Client’.

Now you need to specify the addresses that can call your API via JavaScript. Pay attention here because this can bite you. We’re going to set up two authorised JavaScript Origins. One for the app when it’s running in the cloud, and one for when it’s running on localhost.

Note that the URL for the app running in the cloud consists of the App Id followed by .appspot.com, and that we use https. For localhost we use http.

When you save your new credentials, you generate a client Id. You can get the Client Id at any time from the API Manager in the developer console.

Into the IDE

Now, it’s time to get the code into IntelliJ IDEA and get it ready to push to Google App Engine. Select Import Project.

Browse to the pom.xml file for the project, and select ‘OK’.

On the ‘Import Project from Maven’ screen, click the ‘Environment settings’ button. Select the version of Maven you installed with Homebrew, rather than the bundled version. I’m not sure how necessary this is, but I had problems that went away when I did it.

When you complete the import, you can begin exploring the code in the IDE. Before you do that, take a look at the Terminal Window. You can deploy the project to localhost, or to the cloud without leaving the IDE.

Instead of the Maven command we used in the last post, here’s a slight variation.

$ mvn appengine:devserver_start

That will start the app on localhost, but return to the command line.

You can use the following to stop the app.

$ mvn appengine:devserver_stop

Some necessary changes

There are some changes we need to make to the code to get it ready to run. Start with the class that defines WEB_CLIENT_ID: you need to paste the Client Id from your App Engine project into that constant.

You also need to paste the Client Id into base.js.

In pom.xml, under the configuration of the appengine-maven-plugin, you need to add an ${} field. This is a problem with the Maven archetype.

Also in pom.xml, you need to paste your project Id (as distinct from the client Id).

This is the project Id that I allowed the console to generate at random.

Pushing To The Cloud

With those changes made, you can go to the Terminal and deploy your app.

$ mvn appengine:update

Maven will deploy the app; it may take a little while.

If you’ve followed the instructions above, the deploy should go without a hitch. If you get the dreaded “for an unspecified reason” error, make sure your Client Id and Project Id are correct. Even an upper/lower case issue is enough to cause problems.

Go to the browser and load your app. Remember, it’s at your App Id followed by .appspot.com.

Don’t forget the https, or the JavaScript won’t be able to run. You only authorised the https origin.

You can also use the App Engine API Explorer to try out the back end API for the project.

Obviously, replace the Project Id with your own.

Go to ‘Services’ and select ‘helloworld API’.

Next Time…

We’ve covered a lot of ground in two posts. Next time we’ll take a breather. We won’t create anything new, or make any changes, but we’ll look at the code Maven generated. We’ll dig into the structure of the project, and we’ll get to know ‘endpoints’.

First steps with Google App Engine

At the moment I’m getting to grips with a whole new set of tools. Java, IntelliJ IDEA and Google App Engine. So, when it comes to blogging in 2016…there may be Java ahead. Consider yourself warned.

Google App Engine
I won’t waste time explaining what GAE is, beyond saying it’s a PaaS offering from Google. I trust you’ll be able to find overviews of it on the interwebs. In this post I’ll show you how easy it is to get up and running with
a GAE app, using Maven.

As I said above, this is all new to me. If it comes across as a noob writing about Java, that’s because it is. I’m also doing all this on a Mac. Things should be similar on Windows and Linux, but I haven’t tried it out.

Generate The Code
We’ll use Maven to generate a project. On a Mac, installing maven with homebrew couldn’t be simpler.

$ brew install maven

After you install Maven, you can check the version, and that it’s installed as follows:

$ mvn --version

If that’s all working, you’re ready to generate some code. In your terminal, change to your projects directory. Maven will create a specific folder for the new project, so don’t bother creating that.

$ mvn archetype:generate

Maven will display a list of templates (archetypes) with an option to filter the list. Type the word ‘endpoints’ to narrow things down a bit.

There should be two matching archetypes, pick number 2. The first archetype is a skeleton project. We’ll use the second because it includes a small sample application.

Choose the default version and enter a groupId. I use ‘com.devjoy.endpointsdemo’ so use something along those lines.

Enter an artifactId. Mine is ‘endpointsdemo’. Accept the defaults for the remaining properties. Enter ‘Y’ to confirm the values entered.

Maven will then generate your code.

Run on the local DevServer
App Engine provides a local dev server that will allow you to run your app. Go to the directory that maven created for your project. One of the files it contains is the maven pom.xml.

Use maven to spin up your app on the dev server.

$ mvn appengine:devserver

Go to http://localhost:8080 in your browser to see it in all its glory.

The sign-in and authenticated greeting won’t work yet, but the rest will.

Next Time…
In the next post I’ll import the project into IntelliJ IDEA. We’ll also get the project running in the cloud on Google App Engine.

Azure WTF#

This is my contribution to the 2015 F# Advent Calendar.

I recently set myself a challenge. Build a web-service in F#, and make it run on Azure. Nothing too fancy, just serve up some resources as Json, and allow me to add, update and delete them.

How hard could that be?

Well, as it turns out, hard.

I scavenged body parts from a few blog posts and talks, beginning with Tamizhvendan Sembiyan’s excellent and detailed post on building web-services with Suave.

Suave is a really neat web framework that lets you work with HTTP requests and responses using idiomatic functional code. Which, as Tomas Petricek points out, is how it should be. The web is essentially a functional model. It is ideally a stateless function that accepts a request as input and produces a response as output.
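To make the idea concrete, here's a toy version of that model. The record types below are made up purely for illustration; they have nothing to do with Suave’s real types, which are richer.

// Illustrative only: the web as a function from a request to a response.
type HttpRequest  = { Path: string; Method: string }
type HttpResponse = { Status: int; Body: string }
type WebService   = HttpRequest -> HttpResponse

let hello : WebService =
    fun req -> { Status = 200; Body = sprintf "You %s'd %s" req.Method req.Path }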

Tamizhvendan showed some nice handlers for serialising to and from Json, and for mapping urls to functions. I’m not going to go into all the details that he has covered. Seriously, read his post.

I do want to mention one place where I took a slightly different approach to his. He represented a resource as a record containing functions for the typical CRUD methods. The signatures are nice, like the ‘a -> ‘a option for Update, signifying that the resource in question may not exist to be updated.

type RestResource<'a> = {
  GetAll : unit -> 'a seq
  Create : 'a -> 'a
  Update : 'a -> 'a option
  Delete : int -> unit
}

I’m not a fan of the ‘record of functions’ approach to implementing interfaces. But the point of the post was to demo Suave and it did that. I was up and running and quickly had a basic webservice working locally. That word, ‘locally’ is about to get really significant.

The First Problem
In line with Tamizhvendan’s post, I built my first web service as a console app, ran it locally and it worked. Then, I tried to get it running on Azure.

I’m sure this is possible, but every example I could find for F#/Suave on Azure involved using Scripts instead of an App. Chief among them this one by Scott Hanselman.

Ok, I’ll just convert my Console app to a script for now.

How hard could that be?

Well, as it turns out, hard.

Hello Scripts
Before I actually try to convert my existing code, let’s see if I can get a simple hello world web service running as a script on an Azure Website. Let’s start with Scott’s post, mentioned above.

Ok, there’s a lot of new ground for me to cover here. Suave, Fake, Kudu, Paket, heck even F# scripting was pretty new to me, I don’t think I’d ever used it for anything that required more than one fsx file.

I have deployed a C# ASP.Net MVC app to Azure from github and each push triggers a new deploy. Setting that up was simplicity itself. Getting an F# script to deploy and run on Azure was starting to look like an ordeal by comparison.

And keep in mind, a Script isn’t even what I want. That’s a temporary compromise because I couldn’t get a console app to work. I thought F# was supposed to make life simpler.

The Second Problem
Now, what I should have done is just deploy Scott’s code directly from his repo, or forked his code and deployed it. But where’s the fun in that. No, I tried to recreate his work from scratch, to really understand it. And that cost me in the short term. But I think it was worth it.

I’ll do a separate post on the things I managed to get wrong just trying to recreate and deploy his simple example. Basically if you have any extra files in your repo like .sln or .fsproj files, you’ll get some confusing errors. And if you’re missing any, like oh say the .deployment file then weird stuff happens or doesn’t happen.

Sometimes nothing happens and it’s not your fault. You try to deploy and everything just hangs. I’ve noticed that on three separate nights, from about 11pm until after 1am (Irish time). And then it works itself out. I need to investigate that more, but it’s really annoying when you’re already unsure of the code you’re deploying.

Now that I had something running, it was time to expand it to host my full Web Service.

How hard could that be?

Well, as it turns out, hard.

The Third Problem
Remember, I have to build my full Web Service as a script. The thing is, F# scripts are fine for small jobs. If you’re building an app, with types and modules, you really want to be using the compiler and making real apps and libraries.

The reason for this is that when scripts reference each other, you can end up loading the same file more than once, because it’s a dependency of more than one other script. When this happens FSharp loads the file into two different namespaces. See the FSI_0006 in the following example.

[Loading /Users/richarddalton/Projects/EloRateApi/site/Model.fsx]
namespace FSI_0006.EloRateApi.Model

What this means is that the same ‘Type’ can exist in memory as two different incompatible types.

To put it bluntly, you can’t build complex apps as F# scripts. And on the off chance that you manage to, you probably shouldn’t. But, for now I wanted to get something running, so scripts would have to do.

I have good news, and bad news. The good news is I did manage to build a Web Service that serves up Json and lets me Create, Update and Delete resources.

The bad news is it only has two resources, Players and Games, and because it’s fsx files I don’t see it as a viable approach for a real web service with any more resources or any more logic.

So, I need to get back to the drawing board and either get console apps running on Azure, or put the bulk of the code in a Library, and use a single, simple Script to launch the Web Service. Which means I have to learn a little more about Fake so that I can build the Library and Deploy it.

The Code
The code in full is available here

Forgive me, it doesn’t yet have a nice welcome readme, but it will. The code also isn’t quite the way I like it. Partly because of the limitations of scripts, and mostly because I haven’t finished simplifying it yet.

The code does run locally, just clone the repo and run webserver.fsx using fsharp interactive. The Azure deployment is still a little hit and miss.

If you read Scott’s post mentioned above, you’ll understand how the deployment works, but for now, let me mention one or two pieces of interesting code.

webserver.fsx defines the routes and runs the server. It’s the starting point for the project.

restful.fsx is basically a stolen and slightly modified version of Tamizhvendan’s code, which allows me to be a little more flexible in the methods I define for a resource, because I don’t use an ‘interface’ as such.

model.fsx contains my types representing Players, Games and the State of the service. It also relies on a script called CQAgent that I’ve written and which I reference directly from Github using Paket.

CQAgent uses F#’s MailboxProcessor to provide a generic agent for managing state, querying it with query methods and modifying it with commands. It’s worth a blog post in its own right, and I’m indebted to Tomas Petricek for getting me over the line in writing it.
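I won’t reproduce CQAgent here, but to give a flavour of the idea, here’s a minimal sketch of the kind of command/query agent you can build on a MailboxProcessor. The names and signatures are my own simplification for illustration, not the actual CQAgent code.

// A state-holding agent: queries read the state, commands produce a result and a new state.
type private Msg<'state> =
    | Query of ('state -> obj) * AsyncReplyChannel<obj>
    | Command of ('state -> obj * 'state) * AsyncReplyChannel<obj>

type CQAgent<'state>(initialState: 'state) =
    let agent = MailboxProcessor.Start(fun inbox ->
        let rec loop state = async {
            let! msg = inbox.Receive()
            match msg with
            | Query (f, reply) ->
                reply.Reply (f state)
                return! loop state
            | Command (f, reply) ->
                let result, newState = f state
                reply.Reply result
                return! loop newState }
        loop initialState)

    // Run a read-only function against the current state.
    member __.Query (f: 'state -> 'result) : 'result =
        agent.PostAndReply (fun ch -> Query ((fun s -> box (f s)), ch)) |> unbox

    // Run a function that returns a result along with the replacement state.
    member __.Command (f: 'state -> 'result * 'state) : 'result =
        agent.PostAndReply (fun ch -> Command ((fun s -> let r, s' = f s in (box r, s')), ch)) |> unbox

The Player.DeleteItem function further down shows the Command side in use: it returns a result (just unit in that case) together with the new state.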

Here we see the power of Suave. We compose handlers for incoming requests, so if a request is an HTTP DELETE to the “games” resource, it will fail to match on any of the Player routes, and it will fail on all but the last Game route.

let PlayerRoutes = choose [
                        Get "players" (Player.GetAll model)
                        GetById "players" (Player.GetItem model)
                        Post "players" (Player.Create model)
                        Delete "players" (Player.DeleteItem model)
                        Put "players" (Player.Update model)
                        PutById "players" (Player.UpdateById model)
                        ]

let GameRoutes = choose [
                        Get "games" (Player.GetAll model)
                        GetById "games" (Player.GetItem model)
                        Post "games" (Player.Create model)
                        Delete "games" (Player.DeleteItem model)
                        ]

let routes = choose [
              PlayerRoutes
              GameRoutes
              (NOT_FOUND "Huh?")
              ]

Those choices aren’t just a way of matching; they are calls to functions in restful.fsx that will get the work done. If we look at the Delete function in restful.fsx, it’s a bit intimidating.

let Delete resourceName delete =
    let resourcePath = "/" + resourceName
    let resourceIdPath =
        new PrintfFormat<(int -> string),unit,string,string,int>(resourcePath + "/%d")

    let deleteResourceById id =
        delete id

    DELETE >>= pathScan resourceIdPath deleteResourceById

Don’t worry about how the function works, let’s just look at the signature. It takes a string resourceName. As you can see, we pass "players" or "games" to that. The second argument, ‘delete’, is actually a function that will do the work of deleting the resource.

Its signature isn’t annotated but it is ‘a -> unit, which is exactly the signature Tamizhvendan used.

We pass the following to that argument

(Player.DeleteItem model)

Let’s look at the Player module.

let public DeleteItem (model: CQAgent<State>) id =
    model.Command (fun (s: State) ->
        ((), { s with Players = removePlayer id s.Players }))

We can see now that what actually happens is we partially apply the DeleteItem method, passing it just the model,
producing an int -> unit function that matches the required signature of a Delete method.
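To spell that out, the binding below is purely for illustration (it doesn’t appear in the repo):

// Fixing the model argument leaves a function that only needs the id.
let deletePlayer : int -> unit = Player.DeleteItem model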

Json etc.

All of the serializing to and from Json, and the parsing of urls, is handled by restful.fsx. All of the handling of system state is handled by CQAgent.fsx.

The Ghost of Christmas Future

Where to from here?

This has been a really frustrating project because with Suave and with some of the other code, it really feels like F# is a great language for building web services. But the tooling and infrastructure either isn’t there or my ambition is writing cheques my knowledge can’t cash.

I’ll stick with it. But I’m really hoping I get a comment saying, I’m doing it all wrong, and it’s really simple when you know how.

Unit Testing Events and Callbacks in C#

Events and callbacks offer excellent opportunities to simplify your code, but the need for tests is, if anything, greater. This post demonstrates unit testing strategies for common scenarios.

Download Sample Code

The Problem
When you want to unit test a method it’s usually pretty simple. Call the method, pass it its parameters and assert against its return value, or some other property of the object that may have changed.

What happens when a method doesn’t return a value, or update some property? What happens when it leads (perhaps after some delay) to an event firing, or a callback getting called?

Events firing and callbacks getting called are very similar, but subtly different, so I’ll cover them both.

A Simple Callback

Here’s the simplest scenario. You register a callback with a class when you create it, and you have a method that communicates back via that callback. Here’s our class.

public class AClass {

    private Action<string> _callback;

    public AClass(Action<string> callback) { _callback = callback; }

    public void DoSomethingThatCallsBack(string str) {
      _callback(str + str);
    }
}
To test this, we want to check that the method passes the right result to the callback. We could use an anonymous function with an assert inside, but that test would pass even if the callback was never called.

Here’s how we do it.

    public void TestCallback() {
      var actual = string.Empty;

      var aClass = new AClass((s) => { actual = s; });

      aClass.DoSomethingThatCallsBack("A");

      Assert.AreEqual("AA", actual);
    }

The anonymous function (s) => { actual = s; } has visibility of the variable ‘actual’, so we can set it inside the callback and assert on it when we’re back in the scope of the test. This is a closure, a very common and useful feature of programming with higher order functions.

A Delayed Callback

A more useful arrangement (and more difficult to test) is a method that returns control immediately and does its work in the background, eventually calling back when it’s done.

    public async void DoSomethingThatCallsBackEventually(string str, Action<string> callback) {
      var s = await LongRunningOperation(str);
      callback(s);
    }

    private Task<string> LongRunningOperation(string s) {
      return Task.Run(() => {
          Thread.Sleep(2000); // stands in for the 2 second operation described below
          return "Delayed" + s;
      });
    }
We want to assert that ‘DoSomethingThatCallsBackEventually’ returned immediately, but we also want to ensure that the callback was eventually called with the correct value. I know my long running operation takes 2 seconds so I’m going to accept the method returning in less than half a second.

    public void TestEventualCallback() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);
      var actual = string.Empty;

      var sw = new Stopwatch();
      sw.Start();

      var aClass = new AClass();
      aClass.DoSomethingThatCallsBackEventually("A", (s) => { actual = s; _autoResetEvent.Set(); });

      Assert.Less(sw.ElapsedMilliseconds, 500);

      _autoResetEvent.WaitOne();
      Assert.AreEqual("DelayedA", actual);
    }

We can assert immediately after calling the method to check that it returned quickly enough. We then need to hold off on any further asserts until the callback fires. The trick is to use AutoResetEvent. It will wait until it receives a signal to continue. We can set it inside the callback and then continue on with our asserts.

This idea was written up by Anuraj P here.

Success or Failure

What if our long running method fails? It would be nice to provide Success and Failure callbacks and have the appropriate one fire.

This is how we do it. We’ll use a timeout as the way of succeeding or failing.

    public async void DoSomethingThatCallsbackEventuallyOrTimesOut(string str, Action<string> success, Action<string> failure, int timeout) {
      var task = LongRunningOperation(str);
      if (await Task.WhenAny(task, Task.Delay(timeout)) == task) {
        success(await task);
      } else {
        failure("Timed Out");
      }
    }
We pass two callbacks, and a timeout duration. If ‘Task.Delay(timeout)’ completes before our long running task then we’ve lost the race, and the else part of the if will fire the failure callback. If our task completes first, the success callback will fire.

Andrew Arnott wrote up this elegant solution here.

We can test the success scenario like this

    public void TestEventualCallbackSuccessWithTimeout() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);

      var actual = string.Empty;

      var aClass = new AClass();

      Action<string> onSuccess = (s) => { actual = s; _autoResetEvent.Set(); };
      Action<string> onFailure = (s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatCallsbackEventuallyOrTimesOut("A", onSuccess, onFailure, 2500);

      _autoResetEvent.WaitOne();
      Assert.AreEqual("DelayedA", actual);
    }

and the failure like this

    public void TestEventualCallbackFailureWithTimeout() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);

      var actual = string.Empty;

      var aClass = new AClass();

      Action<string> onSuccess = (s) => { actual = s; _autoResetEvent.Set(); };
      Action<string> onFailure = (s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatCallsbackEventuallyOrTimesOut("A", onSuccess, onFailure, 1500);

      _autoResetEvent.WaitOne();
      Assert.AreEqual("Timed Out", actual);
    }


Events are very similar to delegates, the big difference being the ability to add multiple handlers. First I’ll declare the event. Its payload will be a string.

  public class AClass {

    public event EventHandler<string> SomethingHappened;



Just as with the callbacks we’ll start with an example that fires immediately

    public void DoSomethingThatFiresAnEvent(string str) {
      if (SomethingHappened != null)
        SomethingHappened(this, str + str);
    }

Since we can’t know if anything is watching the SomethingHappened event we have to check for null, and the EventHandler also requires the sender to be passed with the event.

Testing this event is very similar to testing the simple callback that we looked at above.

    public void TestEvent() {
      var actual = string.Empty;

      var aClass = new AClass();
      aClass.SomethingHappened += (_, s) => { actual = s; };

      aClass.DoSomethingThatFiresAnEvent("A");

      Assert.AreEqual("AA", actual);
    }

Note that our anonymous function takes two arguments because the sender object is included. We use ‘_’ to indicate we’re not interested in it.

Just like with the delayed callback, we want to be able to test events that don’t fire immediately.

Here’s an example of such a method

    public async void DoSomethingThatFiresAnEventEventually(string str) {
      var s = await LongRunningOperation(str);

      if (SomethingHappened != null)
        SomethingHappened(this, s);
    }

And here’s how we test it.

    public void TestEventualEvent() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);
      var actual = string.Empty;

      var sw = new Stopwatch();
      sw.Start();

      var aClass = new AClass();
      aClass.SomethingHappened += (_, s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatFiresAnEventEventually("A");

      Assert.Less(sw.ElapsedMilliseconds, 500);

      _autoResetEvent.WaitOne();
      Assert.AreEqual("DelayedA", actual);
    }

Here’s a slight variation on the callback that times out. Here we want to catch an event that doesn’t fire as quickly as we’d expect.

    public void TestEventualEventTimesOut() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);
      var actual = string.Empty;

      var sw = new Stopwatch();
      sw.Start();

      var aClass = new AClass();
      aClass.SomethingHappened += (_, s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatFiresAnEventEventually("A");

      Assert.Less(sw.ElapsedMilliseconds, 500);

      // Wait for less time than the operation takes (a guess at the original value)
      _autoResetEvent.WaitOne(1000);
      Assert.AreEqual("", actual);
    }

Pride and Prejudice

My last post was in support of functional programmers. This post tries to get to the nub of the ‘Arrogance’ issue. Where does it come from? What sustains it? Why Functional Programmers in Particular?

There are real problems. There are enthusiasts who are hostile to newcomers. There are newcomers who are hostile to functional programming. I’m going to ignore these extremes. I’m interested in people who engage in good faith, but don’t manage to connect.

There seems to be a kind of dual reality happening. Newcomers and enthusiasts can have different perspectives on the same interaction. Some of this confusion plays out in a world of anonymous forums and 140 character tweets. We are not designed to deal with complex issues through these new media. Things escalate, fast.

Here are a few of the more common disagreements.

Functional Programmers moralise about side effects and purity.

Functional programmers are cast as the vegetarians of programming. Side effects and mutable variables are off the menu. Some go full vegan and learn Haskell.

To a newcomer this can seem a bit puritanical. “Thou Shalt Not” lectures don’t sit well with programmers.

This is where we touch on the difference between easy and simple. Easy is a close cousin of familiar. Adding a lock around mutable state is easy if that’s what you know how to do.

Simplicity is the absence of complexity. Learning new skills to eliminate mutable state is hard, but the resulting solution can be simpler. As familiarity increases, difficulty decreases and you get simple AND easy.

If you learn a new paradigm, you are by definition ok with the unfamiliar. You have a different perspective than some of the newcomers that you are trying to convince.

That difference may lead newcomers to see you as puritanical, and may lead you to see newcomers as lazy. In that situation, everybody loses.

Functional Programmers Dismiss OO, which has served us well.

Every functional programming enthusiast I know has a background in Object Oriented programming. Many will admit to having struggled to do OO well.

The deeper they get into functional programming, the simpler it seems by comparison. That is one of its big attractions.

It’s not uncommon to equate OO with the familiar complex solutions I describe above. And to think of functional programming as the unfamiliar but simpler alternative.

I’ve been guilty of thinking in those terms. It’s understandable. You’ve struggled with OO. You find a paradigm that seems to address your specific issues. It’s all too easy to think you’ve found ‘a better paradigm’.

Presenting functional as a better alternative to OO doesn’t sit well with people who know and like OO. It’s hostile, and it’s wrong in any case. Just as it’s wrong for newcomers to claim that OO is better or more intuitive.

I think of OO and functional as the longitude and latitude of software development. When you learn both you don’t explore two different paths, you explore an entire area mapped out by two axes.

Functional programmers obsess over clever unreadable code

Some programmers enjoy exercises like ‘Code Golf’. They cram entire algorithms into a single tweet. These recreational programming challenges are popular with both functional and imperative/OO programmers.

Functional code does tend to be terse. If it’s unfamiliar to you, it can look a lot like a code golf exercise. The gleeful reactions of programmers to such code don’t help.

Idiomatic functional code is at a level of abstraction above imperative code. It tends to avoid loops, conditionals and temporary variables. It has unfamiliar concepts like pattern matching, sum types and partial application.

It can be a shock to see actual abstraction done well. Most of what we call abstraction is just indirection.

Object Oriented code done well should also offer abstraction over imperative code. That’s a story for another day.

Yes, code golf happens, there’s nothing wrong with it. But, not all programmers cackling over some terse code are playing code golf. They are not all smug and obsessed with their own cleverness.

Sometimes they’re writing good production code. They’re delighted to see abstraction working like it should. Every programmer, regardless of paradigm should celebrate that.

I just don’t get it / Functional Programmers are smug because they “get it”

Most programmers with a bit of experience can pick up a new language in a few days. It’ll take longer to understand its peculiarities. But you should see progress in days, if not hours.

We can do this because languages have common roots. This won’t happen if you try to learn a new paradigm.

Learning a new paradigm is akin to learning programming all over again. You’ll feel lost. For some, that is evidence that functional programming is “not intuitive”.

There is debate about whether OO is ‘intuitive’ because it models the real world, or whether it’s just ‘familiar’ after years as the dominant paradigm.

For me, the lost feeling is the best part. I’m back to the days when new knowledge came in BIG jumps. When unconnected ideas joined together and unlocked completely new ideas.

It can be hard to articulate “how” you learned functional programming. It’s easy to give newcomers the impression that it is something that you just “get”. Try not to do that. There are eureka moments, but they come after working though that lost feeling.

Functional Programmers ramble on about maths and lambda calculus

I missed two years of school which left me with some big gaps in my maths knowledge.

Trust me when I say, you don’t need maths to grasp functional programming. But, functional programming can be a gateway drug to maths.

If functional programmers seem enthusiastic about maths it’s because maths is cool. When mathematical ideas start to make sense, it’s the same rush that we get from programming.

To work in a new paradigm and see it not only make sense, but tie back to mathematical underpinnings. It’s exciting.

Getting Involved

If you are interested in learning functional programming, the one piece of advice I would offer above all others is make contact with real people. Stay away from forums.

Watch talks, read blogs, follow people on twitter, see who they follow in turn. Get a sense of the personalities, the sense of humour etc.

People who are enthusiastic about functional programming will be glad to help anyone who shows a genuine interest. They will also be able to suggest others who can also help you.

We all tend to mirror the people we interact with. If you are enthusiastic and genuine in your enquiries you will find people who are enthusiastic and genuine in their efforts to help.

Some Functional Programmers Are Arrogant

On Sunday, I was preparing to fly home after Progressive .Net Tutorials in London. The conference had seen a big chunk of F# content. I’d spent a couple of days chatting with amazing people. Recharging the enthusiasm battery pack.

Apart from some heavy breathing on the mic for the first few minutes of my talk, and my inexplicably saying immutable when I meant mutable (multiple times), I was ok with how my talk went, feedback had been good.

The F# sessions were well attended, spirits were high, and to top it all, it emerged that Don Syme was to be awarded the prestigious Silver Medal by the Royal Academy of Engineering, for his work on C# Generics and the creation of F#.

It was a great week.

Then, a tweet.

Not this again. Functional Programmers are arrogant. Functional programmers drive newcomers away from functional programming.

It reminded me of Bill Hicks’ story about turning on CNN and seeing WAR, FAMINE, DEATH then going outside and hearing birds chirping. Where is all this stuff happening? Where are these arrogant functional programmers attacking people eager to learn, driving them away?


I’ve been programming since 1984 and professionally since 1996. So, yes, I’m a noob compared to Bob Martin, but dammit I’ve been around a while and nothing I’ve experienced in programming comes close to Functional Programming for sheer fun and a supportive community.

We started the Functional Kats monthly meetup in Dublin and specifically kept it non language specific to make it as welcoming as possible, and to allow users of different languages to learn from each other. Sceptical of Functional Programming? Then come along and use C#, Java or Javascript, or whatever. All welcome here.

Here’s the thing about people who are enthusiastic about functional programming. Most of them are just so relieved to find someone showing a genuine interest that they will bend over backwards to help you. To build you up. To make YOU feel smart.

I know this because I was on the receiving end of this generosity from the moment I showed an interest. This is why it’s frankly baffling when this charge of arrogance is levelled. Why functional programming? Of all the areas of software development, why is the arrogance of functional programmers a thing?

If it’s just a random comment on the internet, a tweet with no specific details, then I just ignore it. I really don’t have the time or energy to deal with all the people that I think are wrong on the internet.

But this wasn’t some random tweet. This was Robert C. Martin. This was someone with a following, a reputation. This was someone speaking with authority. Worse than that, this was someone with a foot at least partially inside the world of FP. He has given talks on Clojure and even (I hope jokingly) suggested that we should all switch to Clojure and call programming languages done.

Worse than that, this was someone who has been on the receiving end of exactly these kinds of remarks. Craftsmanship is elitist. TDD’ers are arrogant. And not even random unsubstantiated claims. When Ted Neward (rightly in my opinion) raised questions about the link between Craftsmanship and the treatment of Heather Arthur, Bob took to his blog to defend Craftsmanship and swat back Ted.

At least Ted cited a specific incident. At least he called out specific bad behaviour, and it was bad.


Bob issued a general tweet saying that arrogance is a “significant barrier to entry”. And I couldn’t believe it. He was legitimising random nonspecific comments about arrogance, with a random nonspecific comment about arrogance. (Yay..Recursion!)

Bob is better than this I thought. Bob should be better than this. Bob has to know that this tweet can be called on any time someone wants to lazily accuse functional programmers of being especially arrogant, without giving an example. More arrogant than OO programmers, more arrogant than DBA’s or DevOps advocates. More arrogant than C++ programmers or Agile Coaches.

Bob didn’t say some programmers are arrogant. He said some “functional” programmers are. He didn’t say some software craftsmen (or women) are arrogant. Actually he said they’re a great bunch of helpful, welcoming people. The Manifesto says they are.

Uncle Bob has jumped on the “Functional Programming/Arrogance” bandwagon, rather than use his considerable knowledge of computing history, and his considerable reputation within the industry to say, hold on, let’s all take a second and see if there really is a special arrogance problem in this particular field, or is this just received wisdom that people repeat without question.

Bob knows that arrogance and ego have been a feature of virtually every discussion in programming, from getting rid of GOTO to Static vs Dynamic typing. From how or whether to TDD, to how or whether to automate deployments. From Waterfall to Agile. From Pragmatism to Craftsmanship to Just Ship It. Hell, the Javascript community almost imploded into an argument about semi-colons.

So, THAT is why I tweeted a reply

And that is why Bob asked me to elaborate, and that is why this post exists.

Is there arrogance in Functional Programming? Sure there is. Why? Because it’s programming.

Fred Brooks nailed it in the year I was born.

We work “only slightly removed from pure thought-stuff”

We build “castles in the air, from air, creating by the exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures….”

Of course all of us are arrogant some of the time, and some are arrogant all of the time. Of course. How could it be otherwise?

Google ‘Functional Programming Arrogant’. You’ll see some examples. Nothing really bad, I have to say. I’d even posit that a lot of people dismissing FP as a fad, or a bandwagon, or a bad idea without ever looking into it, are showing as much or more arrogance.

Google ‘My Little Pony Arrogant’ there are far more results.

I’ll write another (much shorter) post about why I think this arrogance issue really exists, and what we can do about it, but tweets like that one from Uncle Bob are not part of the solution. They are part of the problem.

That’s for another day.

Fun with Charts and the REPL

PUBLIC SERVICE ANNOUNCEMENT: This post uses WinForms to display charts. It’s fine, but might pose some cross platform issues. You might like to consider XPlot as an alternative.

Apart from immutable by default, expressions instead of statements, easier parallel code, less noise, higher order functions, expressive type systems and borderline magic type inference what has functional programming ever done for us?

My new favourite part of the whole story is the REPL, and the freedom it (and the features above) gives us to experiment with code.

Power up an FSharp script file and use NuGet to get FSharp.Charting and FSharp.Collections.ParallelSeq. Once you’ve installed the packages, load them in the script file, and open the necessary namespaces.

#load "packages/FSharp.Charting.0.90.10/FSharp.Charting.fsx"
#r "packages/FSharp.Collections.ParallelSeq.1.0.2/lib/net40/FSharp.Collections.ParallelSeq.dll"

open System
open System.IO
open FSharp.Charting
open FSharp.Collections.ParallelSeq

Now we’re ready to go. We’re going to throw various numbers of dice and then chart the results. Let’s focus first on throwing one dice.

let random = System.Random()
let randomDice = Seq.initInfinite(fun _ -> random.Next(1,7))

This is an infinite sequence of individual dice rolls. For Monopoly we’d roll two dice at a time, for Yahtzee we’d roll 5. So our next job is to roll a number of dice at the same time. We can use Seq.take for this.

let throwAtATime n = Seq.initInfinite(fun _ -> (Seq.take n randomDice) |> Array.ofSeq)

We’ll be adding the ability to save dice rolls to file and load them again, so here are a few helper functions that’ll make things a little easier later.

let NumsToText (nums: int[]) =  nums |> Array.map (fun d -> d.ToString())
let Join (separator: string) (array: string[]) = String.Join(separator, array)
let TextToNums (text: string) =  text.Split(',') |> Array.map Int32.Parse

NumsToText converts the throws (integers) into an array of strings. Join wraps the String.Join function to allow us to partially apply it (and use it with pipelining). TextToNums splits a comma separated string of throws (read from a file) and converts it back into an array of ints. I’ve also wrapped the File.WriteAllLines and File.ReadLines so that we can use them in pipelined code.

let WriteToFile file lines = File.WriteAllLines(file, lines)

let ReadFromFile file =
    File.ReadLines file
    |> Seq.map TextToNums
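As a quick aside, this is what partially applying Join buys us; a throwaway illustration you can try in the REPL:

// Join "," is a string[] -> string function, ready to drop into a pipeline.
[| "1"; "2"; "3" |] |> Join ","   // "1,2,3"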

Whether we generate throws randomly or load them from a file, we have them in the same format, a sequence of arrays of integers. Let’s write a function that takes that data structure and plots it as a chart. Here’s a function to plot the throws using FSharp Charting. Note that we use a ParallelSeq to speed things up a little.

// Plot To Screen
let Plot (throws: seq<int[]>) =
    throws
    |> PSeq.map Array.sum
    |> PSeq.countBy id
    |> PSeq.sortBy fst
    |> Chart.Line

We sum the dice in a given throw. Then we group by the total, sort by total and Chart the results.

As expected the result is a bell-curve, although this little experiment shows how easy it is to visualise data in situations where you might not necessarily know the pattern in advance. We’ll see some more charting shortly, but lets quickly look at how the same data structure can be persisted to a file.

// Save To File
let Save file (throws: seq<int[]>): Unit =
    throws
    |> Seq.map NumsToText
    |> Seq.map (Join ",")
    |> Seq.toArray
    |> WriteToFile file

And that’s it, we now have a suite of functions that will allow us to do some cool things with very little effort. I love being able to use a simple data structure and a few helper functions and then use the REPL to “play” with the data. It allows us to answer questions about the data more quickly, but more importantly than that, the experimentation suggests questions we might not have otherwise thought to ask. Here are a few examples of getting stuff done in the REPL in just a few lines of code.

//Generate 1000000 random throws of 6 dice and save to file
throwAtATime 6
|> Seq.take 1000000
|> Save @"C:\Temp\dice.txt"

// Generate 1000000 random throws of 6 dice and plot a graph
throwAtATime 6
|> Seq.take 1000000
|> Plot

// Read From file and plot a graph
ReadFromFile @"C:\Temp\dice.txt"
|> Plot

// Read From file and save to another file (No, I don't know why you would either).
ReadFromFile @"C:\Temp\dice.txt"
|> Save @"C:\Temp\dice2.txt"

As you play with data, questions come to mind. Like, can we visually see what more dice does to the bell curve? Of course we can.

let threeDice = throwAtATime 3 |> Seq.take 100000
let tenDice = throwAtATime 10 |> Seq.take 100000
Chart.Combine [Plot threeDice; Plot tenDice]

Are you pondering what I’m pondering? Could we map over a range of dice counts to generate a whole slew of samples and plot them all together? Let’s see.

// Generate samples for 1 to 10 dice and plot them all
[1..10]
|> List.map (fun n -> throwAtATime n |> Seq.take 1000000)
|> List.map Plot
|> Chart.Combine

Here we’ve seen a simple use of FSharp Charting. We’ve covered reading from and writing to a file. We’ve also seen how a couple of helper functions, and paying attention to function signatures, allow for some very readable code.

Minimize mental overhead for code readers

This week at our FunctionalKats meetup in Dublin, we tackled the Luhn checksum algorithm.

This is a simple programming task; the challenge was to try and make the code readable.

I rattled off some code that worked, but I wasn’t at all happy with it. It implemented the problem as described, partitioning the numbers into two groups and dealing with each group in turn.

let Luhn s =
    let double x = x*2

    let sumDigits = function
        | 18 -> 9
        | n when n > 9 -> n%9
        | n -> n

    let odds, evens =
        s
        |> Seq.map (string >> int)
        |> Array.ofSeq
        |> Array.rev
        |> Array.partitioni (fun i -> i%2 = 0)

    let oddSum = Array.sum odds
    let evenSum =
        evens
        |> Array.map double
        |> Array.map sumDigits
        |> Array.sum
    (oddSum + evenSum) % 10 = 0

The Array.partitioni function doesn’t exist, I had to write that too.

module Array =
    let partitioni f a =
        let first, second =
            a
            |> Array.mapi (fun i x -> f i, x)
            |> Array.partition (fun (i, _) -> i)
        (Array.map snd first, Array.map snd second)

I didn’t like the idea of partitioning the numbers. That may match the problem description, but I think we can do better. I like functions that take in something and transform it, perhaps through many transformations, but without splitting or branching the logic.

If a function splits the data and does separate transformations, there’s mental overhead involved. Every variable in a function is an extra concept for the reader to understand.

My second attempt tried to rectify this

let Luhn s =
    let double x = x*2

    let sumDigits = function
        | 18 -> 9
        | n when n > 9 -> n%9
        | n -> n

    let mapOdd = id
    let mapEven = (double >> sumDigits)
    let mapNumber i = if i%2=0 then mapOdd else mapEven

    let total =
        s
        |> Seq.map (string >> int)
        |> Array.ofSeq
        |> Array.rev
        |> Array.mapi mapNumber
        |> Array.sum

    total % 10 = 0

Instead of partitioning the data we map the entire array. The function used in the mapping is different for odd and even positions. The mapNumber function handles the toggling of the two functions.

This is better, but it’s still not right. Those mapOdd and mapEven functions, that IF expression selecting between them. It still feels like two trains of thought. We can do better.

let sumDigits n = Math.DivRem(n, 10) ||> (+)
let isMultipleOf x y = y % x = 0

let Luhn: string->bool = Seq.map (string >> int)
    >> Seq.rev
    >> Seq.map2 (*) (Seq.cycle (seq[1;2]))
    >> Seq.map sumDigits
    >> Seq.sum
    >> isMultipleOf 10

Now we’re getting much closer to the ideal. The function accepts a string, produces a bool. It gets from string to bool through a series of simple transformations. No IF expressions, no splitting of the numbers into groups, no toggling of functions.

The trick lies in generating a cycle of 1’s and 2’s. When you multiply a sequence by 1,2,1,2,1,2…, you double every second item in the sequence. With that done, the rest of the function falls into place.
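If you want to see the trick in isolation, here’s the sort of thing you can try in FSI (it uses the Seq.cycle helper defined just below):

// Every second element gets doubled.
seq [4; 5; 6; 7]
|> Seq.map2 (*) (Seq.cycle (seq [1; 2]))
|> List.ofSeq
// [4; 10; 6; 14]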

There are a few other subtle improvements. The sumDigits function is simpler thanks to the use of Math.DivRem and ||>. The Luhn function itself is a composition of transformations. I also created the Seq.cycle and Seq.rev functions.

module Seq =
    let rec cycle s = seq { yield! s; yield! cycle s}

    let rev s = s |> Array.ofSeq |> Array.rev |> Seq.ofArray

I’m happy with the function as it stands. The ideas I wanted to get across with this post are:

  • Don’t let the description of a problem constrain your thinking. It’s often possible to find solutions that are simpler than the problem would imply.
  • Try to reduce the moving parts, the “concepts” in your functions. Aim for functions that read straight through. A reader should have to push as little as possible onto their mental stack.
  • Try to avoid splitting your data and working on different subsections of it. A function should only accept the data it needs to work on. Keeping that data together as you transform it isn’t always possible. If you can achieve it, your code will usually be simpler.

No More Ad-Hoc Dev Environments

I’ve noticed over the past year that I waste a lot of time on environmental issues. With my machine, with configuration, with things that screw me up and then take forever to reverse or work around.

I should be getting to the root causes of those issues, and I am. But yet, despite our best efforts there will always be new issues with never before seen causes.

Our development environment is the foundation for everything we do. It is insane to focus on automated tests, builds, and deployment, while setting up our dev machines in an ad-hoc fashion.

I’m a strong believer that a new hire should be able to get from a blank hard drive to a working machine with minimal fuss.

Why put the new hire through this? Why not have machines ready to go?

A few reasons.

  1. If a new hire can set everything up then your procedure is solid. I don’t want this job done by someone with a head full of undocumented (or even documented) knowledge.
  2. Why waste the time of an existing employee who could be doing something more useful? Leave the setup of machines to the people who will use them; that way they are productive from the moment they sit down.
  3. Machines are out of date as soon as you configure them. A stash of machines waiting for new hires is just an inventory of machines that will need updating by the new hire anyway. The setup procedure should create an up to date machine. Think Lean, think JIT. A clean, up to date machine, pulled into existence any time, by anyone. No waiting for the “New Machine Expert” to get around to it. No having to figure out what’s out of date when you finally get it.
  4. Most new hires just want to get some code, build it and run it. They should be able to do that by the end of day one. In fact they should be there by lunch time. Nobody wants to spend their entire first day or first few days following written instructions to set up their machine.
  5. When you solve this problem for new hires, you solve it for everyone. Our dev boxes should be disposable, interchangeable. Knowing that you can get up and running, or back to a known state quickly is liberating. It frees us all from the fear of screwing something up.

These are not new ideas. Vagrant, Docker and others are solving this exact problem. The installation of software is increasingly moving to package managers. Things tend to be a little easier if you are running Linux than, say, Windows.

Oh, did I mention that all this has to work on Windows?

My dream is to plug a USB stick into a new machine, turn it on, and leave it to create a working development machine. Over the next few posts I’ll document some of the things I’m learning along the way to that ideal.

Mastermind, The Code Breaking Game

Download Code

Mastermind is a code breaking game for two players.

A “Code Maker” creates a secret sequence of colour pegs. A “Code Breaker” must break the code by taking guesses and working with the feedback from the Code Maker. Feedback is given using Black and White Pegs.

  • A correct colour in the correct position is acknowledged with a Black Peg
  • A correct colour in the wrong position is acknowledged with a White Peg
  • The position of these pegs is not significant. So, a black peg indicates that one of the guessed pegs is correct, but not which one.

See here for a detailed description of the game.

For a programmer both the Code Maker and Code Breaker roles are interesting challenges.

The Code Maker role must compare a guess and a secret code and generate the correct feedback.

The Code Breaker role must issue guesses and figure out the secret code based on the feedback.

Before we look at either role, let’s create some types that we can work with.

type Colour = Red|Orange|Yellow|Green|Blue
type Answer = {Black: int; White: int}

The Colours are a discriminated union of the 5 colours that can be used in the code. We also have an Answer that contains a certain number of Black and White pegs.

Each code consists of 4 pegs and colours can be duplicated. We don’t enforce that with types, in fact it’s quite easy to make the code work for arbitrary code lengths. There is one function in the current code that locks us into a code length of 4. I’ll discuss that towards the end of this post.

The Code Maker
Let’s start with a function that can compare a secret code with a guess and return the black/white pegs answer. I’ve put a worked example in the comments so you can follow what’s happening.

// E.g.
// code     = Red, Orange, Yellow, Blue
// guess    = Yellow, Yellow, Green, Blue
// expected = {Black = 1 (Blue); White = 1 (Yellow)}
let check code guess =
    let IsMatch t = fst t = snd t

    // right = [(Blue, Blue)]
    // wrong = [(Red, Yellow); (Orange, Yellow); (Yellow, Green)]
    let right, wrong =
        List.zip code guess
        |> List.partition IsMatch

    // Number of Black Pegs
    // 1 (Blue, Blue)
    let rightColourRightPosition =
        List.length right

    // Number of White Pegs
    // wrongCode  = [Red; Orange; Yellow]
    // wrongGuess = [Yellow; Yellow; Green]
    let wrongCode, wrongGuess = List.unzip wrong 

    // E.g. when colour = Yellow, result = 2
    let howManyOfThisColourOutOfPlace colour =
        wrongGuess
        |> List.filter(fun c -> c = colour)
        |> List.length

    // Number of White Pegs
    // 1 (Yellow) Although Yellow is guessed twice, there is only one Yellow in the code, so result is 1
    let rightColourWrongPosition =
        wrongCode                                                                          // [Red; Orange; Yellow]
        |> Seq.countBy(id)                                                                 // seq [(Red, 1); (Orange, 1); (Yellow, 1)]
        |> Seq.map (fun group -> (snd group, howManyOfThisColourOutOfPlace (fst group)))   // seq [(1, 0); (1, 0); (1, 2)] (fst is occurrences in code, snd is occurrences in guess)
        |> Seq.sumBy Math.Min                                                              // For each colour, sum the lesser of occurences in code and in guess

    {Black = rightColourRightPosition; White = rightColourWrongPosition}

The Code Maker has visibility of the Secret Code (because it created it) and the guess (because the Code Breaker asks to have a guess checked). This will be more interesting when we look at the signature for the solve function.

Let’s not get ahead of ourselves. How do we check a guess against a secret code and calculate the number of Black and White pegs?

We zip the secret code and the guess (which are both lists of colours); this gives us a list of tuples, where each tuple represents the colours of the two pegs at the same position in the code and the guess.

We then partition that list into tuples where the fst and snd match (same colour, same position) and where fst and snd don’t match.

The number of Black pegs is simply the length of the list of tuples that matched.

The number of white pegs is calculated by taking the colours from the list of non-matching tuples and figuring out how many of the colours in the code are also in the guess. Since we partitioned the list, none of the pegs that were in the right place will get in the way of this calculation.

Follow along with the comments in the code and it should be clear.

The Code Breaker
On the face of it writing the code for the Code Breaker seems like it would be harder than for the Code Maker. It has to come up with guesses and then correlate the feedback from all the guesses to crack the code. We also can’t pass our secret code to the Code Breaker because that’s not information that it should have.

What should we pass to the solve function? It should be able to come up with guesses by itself. The only thing it needs is a way of asking the Code Maker to check a guess against the code. So, it needs a way of calling the check function. But it can’t pass the secret code to the check function, it can only pass the guess.

Enter Partial Application.

We can wrap a secret code up in a closure by partially applying the check function. This gives us a function that just accepts a guess and returns an answer. From the Code Breaker’s point of view that’s exactly how the Code Maker should work.

let secretCodeChecker = check [Red; Orange; Yellow; Blue]

Armed with that function the logic of the code breaker is pretty simple.

let solve checkFunction =
    let filterPossibilities possibilities guess =
        let answer = checkFunction guess
        possibilities
        |> List.filter (fun potential -> (check guess potential) = answer)

    let rec solve_iter possible =
        match possible with
        | head::[] -> head
        | head::_ -> solve_iter (filterPossibilities possible head)
        | _ -> raise CanNotBeSolved

    solve_iter possibleCodes

Start with all possibleCodes, take the first of them and guess it. Use the response from the Code Maker to filter out codes that are no longer possible. Repeat until there’s only one possibility left.

There’s a quick and easy way to generate all possible codes

let possibleCodes =
    seq {
            for i in ColourList do
                for j in ColourList do
                    for k in ColourList do
                        for l in ColourList do
                            yield [i;j;k;l]
    }
    |> List.ofSeq

The problem with this is that it locks us into codes of length 4 even though nothing else in our solution has that restriction. A more general recursive function would be nice here, but let’s not waste time on that right now. Let’s get our Code Breaker working.

The most interesting part of breaking the code is filtering out potential codes based on feedback from a guess.

This is ridiculously simple to do. We supply a guess to the Code Maker and get back an answer. We then take all of the remaining possible secret codes and we check our guess against each (using the non-partially applied check function), and we keep only the codes that result in the same answer that the Code Maker gave us.

To figure out the secret code that’s wrapped up in the secretCodeChecker function, we just pass it to solve

> solve secretCodeChecker;;

val it : Colour list = [Red; Orange; Yellow; Blue]

And what about those nested loops that produce all possible codes? How can we turn that into a more general recursive function?

I’ll leave that for you as an exercise. If you look up the ‘Making Change’ example in Structure and Interpretation of Computer Programs here, you’ll be in the right area; however, instead of simply counting the number of possibilities, you need to actually capture and return them.