
Reading F#

I haven’t blogged for a loooong time. But, a tweet about some C# code rewritten in F# got me interested yesterday.

If you know a little F# it’s easy to get sucked into thinking that having far fewer lines of code, and less noise generally, makes F# code automatically better, cleaner, easier to read than C#. But, of course, that’s only true for people who know enough F# to read it.

When I see very smart C# devs unable to decipher what the F# code is doing, that gets me very interested.

Pete Smith very kindly did his own rewrite of the C# version.

Which in turn got translated back into F# again by Jason Imison.

The F# Code

The point of this post is to explain the original F# code a little bit, for C# devs who are curious, but find it hard to follow. It’s no reflection on either C# developers or the F# language that there’s confusion. This is a new paradigm. There are concepts in F# that simply don’t exist in C#. There are also concepts that look like C#, but behave differently.

I’ll take it step by step. Please post comments below if you need me to dig deeper into any part of it.

The Types

First we’ll look at the type definitions.

module Discount =
    type Year = int
    type [<Measure>] percent

Nothing exciting here. The code we’re writing is in a module called Discount. We’ll be able to import or ‘open’ that module when we want to use it later.

We create an alias (or Type Abbreviation) of int called Year. This lets us use ‘Year’ when defining other types, rather than the primitive int.

We also define percent as a unit of measure. F# can do amazing things with Units of Measure. This percentage example doesn’t necessarily show it off to full effect.

    type Customer = Simple | Valuable | MostValuable

Customer looks enough like an enum that most devs let it slide, but it’s more than that. Customer is a type with three possible values: Simple, Valuable, and MostValuable. These are not mere labels. This isn’t some layer of text over a numeric data type like an enum. They represent the full range of values for the Customer Type. In a sense, they are to Customer what ‘true’ and ‘false’ are to bool.

Let me repeat that: Customer can hold no other value than Simple, Valuable, or MostValuable. If you have a Customer, it cannot hold an invalid value. It cannot hold Null, or Nothing, or any out-of-range numeric value.

What we’re trying to do here is model the domain with types that make it impossible to represent invalid states.

AccountStatus is the first type that’s likely to completely throw a C# developer.

    type AccountStatus = 
        | Registered of Customer * since:Year
        | UnRegistered

This is actually similar to Customer, although the layout may suggest a difference that isn’t really there. An AccountStatus can be Registered or UnRegistered, just like a Customer can be Simple, Valuable, or MostValuable. For Registered accounts there’s extra information: the Customer, and the Year they registered. For UnRegistered accounts there is no additional data, just the token UnRegistered.

This means that some valid values for the DataType AccountStatus include

Registered(MostValuable, 1)
Registered(Valuable, 6)
Registered(Simple, 1)
UnRegistered

The following are invalid; they won’t compile.

UnRegistered(MostValuable, 1)
Registered
Registered(1, Simple)

Now, there’s a problem here. The Type definition makes Year look like a Year that an account has been active ‘since’. But later in the code it looks more like the number of years an account has been active ‘for’.

I’m not here to review or improve the code, just explain it. So, I’m mentioning that confusion here.

An Account Status can be UnRegistered. Or, it can be Registered with a Tuple of Customer and Year. Customer as we’ve seen can be Simple, Valuable or MostValuable. And Year is an integer.

That’s Algebraic Data Types. Define your types, compose them, and use them. It’s no different from what you’ve always done in C#, except that you haven’t had Sum Types, and you can’t define a tuple as simply as ‘Customer * Year’.

Some C# developers may like to think of AccountStatus as a base class, and Registered and UnRegistered as subclasses, with Registered having additional fields. That is what’s going on under the hood, but I personally don’t give it a lot of thought.
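If that subclass mental model helps, here’s a loose Python analogue of the same shape (the class and value names are mine; unlike F#, Python won’t stop you constructing invalid states, so treat this as a sketch of the idea rather than an equivalent):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Union

class Customer(Enum):
    Simple = auto()
    Valuable = auto()
    MostValuable = auto()

@dataclass(frozen=True)
class Registered:
    customer: Customer
    since: int          # the Year payload

@dataclass(frozen=True)
class UnRegistered:
    pass

# AccountStatus = Registered of Customer * Year | UnRegistered
AccountStatus = Union[Registered, UnRegistered]

status = Registered(Customer.MostValuable, 1)
```

In F# the union and its cases come in one declaration, and the compiler knows the case list is closed; here that closed-ness is only a comment.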

By the way, you can delete ‘since:’ in the definition of a Registered account; it serves only to tell you what the Year indicates. It doesn’t change the type in any way.

Those are the types we have to work with. Functional Programmers tend to lean quite heavily on types. I can definitely understand C# devs wondering whether ‘Year’ and ‘percent’ are really worth the effort in this case. Year in particular is troublesome because int isn’t necessarily the most robust way of representing a Year. If you’re going to introduce a new Type, maybe you should go all the way or don’t go at all. The confusion over what ‘Year’ actually means is troublesome.

That’s a debate for another day. But in this example at least the concept of Year is called out. The specific implementation, and underlying type may change later.

Let’s move on.

The Functions

    let customerDiscount = function
        | Simple    -> 1<percent>
        | Valuable  -> 3<percent>
        | MostValuable  -> 5<percent>

customerDiscount is just a function that maps a Customer to an integer percent. How do I know? Well that’s the signature of the function.

Customer -> int<percent>

So, the valid inputs to this function are Simple, Valuable, and MostValuable. And the outputs you can see.

The way this function is written probably throws C# devs more than what it actually does. Let me rewrite it slightly.

    let customerDiscount customer = 
        match customer with 
        | Simple    -> 1<percent>
        | Valuable  -> 3<percent>
        | MostValuable  -> 5<percent>

This is exactly the same function, it just gives the argument a name, and then pattern matches on it. Because this kind of function is so common, the alternative syntax is possible.

    let yearsDiscount = function
        | years when years > 5  -> 5<percent>
        | years                 -> 1<percent> * years

yearsDiscount is a function that maps an int to an int percent. That Year alias is getting more troubling. It seems to have vanished here in the code where it’s actually used. F# isn’t perfect, and it doesn’t write itself for you. Ambiguities can creep in.

Let’s stick to what this function is doing. The function is in the same simplified pattern matching syntax as customerDiscount. The first clause matches when the value passed to the function is greater than 5, and returns 5<percent>. The second clause matches any other integer value, and calculates the result. The end result: a 1% discount per year, capped at 5%.
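If the clause syntax is still in the way, the same decision reads as a plain conditional. A rough Python sketch, ignoring the units of measure (the function name is mine):

```python
def years_discount(years: int) -> int:
    # First clause: anything over 5 years hits the 5% cap.
    if years > 5:
        return 5
    # Second clause: 1% per year.
    return 1 * years
```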

Notice that the entire body of the function is an expression. There’s no ‘return’ statement. A clause in the match maps to a value, and that is the value of the function.

    let accountDiscount = function
        | Registered(customer, years) -> customerDiscount customer, yearsDiscount years
        | UnRegistered                -> 0<percent>               , 0<percent>

Oh, now things are getting interesting. The signature of accountDiscount is

AccountStatus -> int<percent> * int<percent>

What does that mean?

We can pass in either a Registered account, with a Customer and number of years,
OR
We can pass in UnRegistered.

Those are the two possibilities for AccountStatus.

What do we get back?

int<percent> * int<percent>

A tuple, containing two int percents.

The tuple contains the results of calling the customerDiscount function, and the yearsDiscount function.

Look again at the accountDiscount function. How does it know the types of the input and the output values?

    let accountDiscount = function
        | Registered(customer, years) -> customerDiscount customer, yearsDiscount years
        | UnRegistered                -> 0<percent>               , 0<percent>

It pattern matches on Registered and UnRegistered, so the input must be an AccountStatus. Both match clauses evaluate to an int<percent> * int<percent> tuple. So, the function as a whole must always evaluate to that too.

If the input to the function is UnRegistered, then the result is 0, 0. So, no discount.
But look at the match on Registered. Remember the Registered AccountStatus has a payload of sorts. A Customer type and number of years in the form of a Customer * Year tuple.

In the match clause we destructure that tuple into two variables customer and years.

    Registered(customer, years)

And then pass those variables to the relevant discount function to produce an output tuple.

    -> customerDiscount customer, yearsDiscount years

There’s a lot going on there that isn’t familiar. It takes a little time to adjust. The on-the-fly translation that a C# dev needs to do in their head is quite a burden when you start reading (and even harder when you start writing) F#. But it does get easier very quickly, and after a while the inherent consistency starts to shine through.

    let asPercent p = 
        decimal(p) / 100.0m

Ok, let me have another little moan here. This function takes an int<percent>, in other words a percentage in this nice format: 5, and returns it as a decimal: 0.05m.

So, asDecimal might be a better name. Having to convert back to a decimal like this leads me to think a decimal might have done the job just as well all along. But I’m not here to judge, just explain.

    let reducePriceBy discount price = 
        price - price * (asPercent discount)

This looks pretty straightforward, surely there’s no functional voodoo going on here? Well, actually there is some very cool functional voodoo going on.

In C# land, the order of arguments for a function isn’t, strictly speaking, important. Some developers come up with standards and best practices, but basically, as long as you’re consistent, it doesn’t really matter.

In languages like F# it matters a great deal.

On the face of it, the reducePriceBy function accepts 2 arguments, discount, and price.

Partial Application was one of the big things that popped my C# tinted eyes, when I first encountered F#. In simple terms it means that if you pass some of the arguments to a function, you get back a function that accepts the rest of the arguments.

So, if we pass a discount to reducePriceBy, we get back a function that accepts a price, and reduces it by that locked in discount.

That’s why discount is the first argument and price is the second. It’s hard to see a use for a function that accepts various discounts and applies them to some locked in price.
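To see partial application in a more familiar setting, here’s a hedged Python sketch using functools.partial (the function names are mine):

```python
from functools import partial

def reduce_price_by(discount: int, price: float) -> float:
    # Knock `discount` percent off `price`, like the F# reducePriceBy.
    return price - price * (discount / 100)

# Supplying only the discount gives back a function that still wants a price.
reduce_by_customer_discount = partial(reduce_price_by, 5)

reduce_by_customer_discount(100.0)   # 95.0
```

In F# no helper like partial is needed; applying a function to fewer arguments than it takes does this automatically.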

Through the wonder of partial application, and all the types and functions above, we get to the centrepiece of the program. If you had known F# from the start, your eye would have headed straight for this function to see what was going on.

    let calculateDiscountedPrice account price = 
        let customerDiscount, yearsDiscount = accountDiscount account
        price
        |> reducePriceBy customerDiscount
        |> reducePriceBy yearsDiscount

calculateDiscountedPrice takes an AccountStatus. Which is either Registered for a particular Customer, and number of years, or UnRegistered.

calculateDiscountedPrice also takes a price which is a decimal.

The accountDiscount function takes the AccountStatus and returns a tuple containing the customer discount and the years discount (both as int<percent>). These are stored in customerDiscount and yearsDiscount respectively. There’s that destructuring again.

So, what’s the logic of discounting a price? Here’s the important bit.

        price
        |> reducePriceBy customerDiscount
        |> reducePriceBy yearsDiscount

Let me rewrite that slightly.

        price
        |> (reducePriceBy customerDiscount)
        |> (reducePriceBy yearsDiscount)

Remember I said that passing a discount value to reducePriceBy would return a new function that accepts a price. Well, that’s what we’re doing. The parentheses above aren’t necessary, but they show that partial application is going to produce two functions: one that reduces a price by the customer discount amount, and a second that reduces a price by the years discount amount.

Or, to make a short story long…

        let reducePriceByCustomerDiscount = reducePriceBy customerDiscount
        let reducePriceByYearsDiscount = reducePriceBy yearsDiscount

        price
        |> reducePriceByCustomerDiscount
        |> reducePriceByYearsDiscount

The pipe forward operator |> simply takes the value on its left and passes it to the function on its right. You will occasionally hear people (like me, a long time ago) say that the pipe forward operator passes the value on the left in as the ‘last argument’ to the function on the right.

As you can hopefully see that’s a bad way to think about it. The expression on the right of the pipe forward operator is evaluated and should produce a function. That expression might just be a function, or it might be a function that needs to be partially applied. It might in fact be any expression that evaluates to a function capable of accepting the value on the left of the operator.

And, when that value is piped into the function, the result of that can be piped on in the same way to the next function. As we see here.
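Python has no built-in |>, but you can mimic the idea well enough to watch the value flow through. A small sketch (the helper and function names are mine):

```python
from functools import partial, reduce

def pipe(value, *funcs):
    # Feed value into the first function, its result into the next, and so on.
    return reduce(lambda acc, f: f(acc), funcs, value)

def reduce_price_by(discount, price):
    return price - price * (discount / 100)

# price |> reducePriceBy customerDiscount |> reducePriceBy yearsDiscount
discounted = pipe(100.0,
                  partial(reduce_price_by, 5),   # customer discount
                  partial(reduce_price_by, 1))   # years discount
# discounted is (approximately) 94.05, matching the MostValuable test value
```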

The final little gimmick in this program is the test. There’s no test framework, or assert. Just a value, tests, that will either be true or false.

let tests =
    [
        calculateDiscountedPrice (Registered(MostValuable, 1))  100.0m
        calculateDiscountedPrice (Registered(Valuable, 6))      100.0m
        calculateDiscountedPrice (Registered(Simple, 1))        100.0m
        calculateDiscountedPrice UnRegistered                   100.0m
    ] = [94.05000M; 92.15000M; 98.01000M; 100.0M]

Here are the two lists.

    [
        calculateDiscountedPrice (Registered(MostValuable, 1))  100.0m
        calculateDiscountedPrice (Registered(Valuable, 6))      100.0m
        calculateDiscountedPrice (Registered(Simple, 1))        100.0m
        calculateDiscountedPrice UnRegistered                   100.0m
    ]

    [94.05000M; 92.15000M; 98.01000M; 100.0M]

In all cases the price being used is 100.0m.

The AccountStatus values are as discussed earlier, either UnRegistered, or Registered along with a tuple of a Customer and an int.

Each of those calls to calculateDiscountedPrice will evaluate to a decimal, so we’ll end up with a list of decimals. If that list happens to match the list of decimals provided then ‘tests’ will be true, otherwise it will be false.

As it happens, it’s true:

val tests : bool = true

Unit Testing Events and Callbacks in C#

TL/DR
Events and Callbacks offer excellent opportunities to simplify your code, but the need for tests is, if anything, greater. This post demonstrates unit testing strategies for common scenarios.

Download Sample Code

The Problem
When you want to unit test a method it’s usually pretty simple. Call the method, pass it its parameters, and assert against its return value, or some other property of the object that may have changed.

What happens when a method doesn’t return a value, or update some property? What happens when it leads (perhaps after some delay) to an event firing, or a callback getting called?

Events firing and callbacks getting called are very similar, but subtly different, so I’ll cover them both.

A Simple Callback

Here’s the simplest scenario. You register a callback with a class when you create it, and you have a method that communicates back via that callback. Here’s our class.

public class AClass {

    private Action<string> _callback;

    public AClass(Action<string> callback) { _callback = callback; }

    public void DoSomethingThatCallsBack(string str) {
      _callback(str + str);
    }
}

To test this we want to test that the method passes the right result to the callback. We could use an anonymous function with an assert inside, but that test would pass even if the callback was never called.

Here’s how we do it.

    [Test]
    public void TestCallback() {
      var actual = string.Empty;

      var aClass = new AClass((s) => { actual = s; });
      aClass.DoSomethingThatCallsBack("A");

      Assert.AreEqual("AA", actual);
    }

The anonymous function (s) => { actual = s; } has visibility of the variable ‘actual’, so we can set it inside the callback and assert on it when we’re back in the scope of the test. This is a closure, a very common and useful feature of programming with higher order functions.

A Delayed Callback

A more useful arrangement (and more difficult to test) is a method that returns control immediately and does its work in the background, eventually calling back when it’s done.

    public async void DoSomethingThatCallsBackEventually(string str, Action<string> callback) {
      var s = await LongRunningOperation(str);      
      callback(s);
    }

    private Task<string> LongRunningOperation(string s) {
      return Task.Run(() => {
          Thread.Sleep(2000);
          return "Delayed" + s;
        });
    }

We want to assert that ‘DoSomethingThatCallsBackEventually’ returned immediately, but we also want to ensure that the callback was eventually called with the correct value. I know my long running operation takes 2 seconds so I’m going to accept the method returning in less than half a second.

    [Test]
    public void TestEventualCallback() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);
      var actual = string.Empty;

      var sw = new Stopwatch();
      sw.Start();

      var aClass = new AClass();
      aClass.DoSomethingThatCallsBackEventually("A", (s) => { actual = s; _autoResetEvent.Set(); });
      sw.Stop();

      Assert.Less(sw.ElapsedMilliseconds, 500);
      Assert.IsTrue(_autoResetEvent.WaitOne());
      Assert.AreEqual("DelayedA", actual);
    }

We can assert immediately after calling the method to check that it returned quickly enough. We then need to hold off on any further asserts until the callback fires. The trick is to use an AutoResetEvent. It will wait until it receives a signal to continue. We can set it inside the callback and then continue on with our asserts.

This idea was written up by Anuraj P here.
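This wait-for-a-signal pattern exists outside C# too. Here’s a rough Python equivalent, with threading.Event standing in for AutoResetEvent (the names and timings are mine):

```python
import threading
import time

def do_something_that_calls_back_eventually(s, callback):
    # Return immediately; run the slow work on a background thread.
    def work():
        time.sleep(0.2)          # stand-in for the long-running operation
        callback("Delayed" + s)
    threading.Thread(target=work).start()

done = threading.Event()         # plays the role of AutoResetEvent
received = []

do_something_that_calls_back_eventually("A", lambda s: (received.append(s), done.set()))

assert done.wait(timeout=2)      # block here until the callback signals, like WaitOne()
assert received == ["DelayedA"]
```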

Success or Failure

What if our long running method fails? It would be nice to provide Success and Failure callbacks and have the appropriate one fire.

This is how we do it. We’ll use a timeout as the way of succeeding or failing.

    public async void DoSomethingThatCallsbackEventuallyOrTimesOut(string str, Action<string> success, Action<string> failure, int timeout) {
      var task = LongRunningOperation(str);  
      if (await Task.WhenAny(task, Task.Delay(timeout)) == task) {
        success(await task);
      } else {
        failure("Timed Out");
      }
    }

We pass two callbacks, and a timeout duration. If ‘Task.Delay(timeout)’ completes before our long running task then we’ve lost the race, and the else branch fires the failure callback. If our task completes first, the success callback fires.

Andrew Arnott wrote up this elegant solution here.
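The race-against-a-delay idea translates directly to other stacks. A Python asyncio sketch of the same success/failure shape (function names are mine; asyncio.wait_for does the job of Task.WhenAny plus Task.Delay here):

```python
import asyncio

async def long_running_operation(s):
    await asyncio.sleep(0.2)     # stand-in for the 2 second operation
    return "Delayed" + s

async def do_with_timeout(s, on_success, on_failure, timeout):
    # If the work finishes before the timeout, call success; otherwise failure.
    try:
        on_success(await asyncio.wait_for(long_running_operation(s), timeout))
    except asyncio.TimeoutError:
        on_failure("Timed Out")

results = []
asyncio.run(do_with_timeout("A", results.append, results.append, 1.0))   # succeeds
asyncio.run(do_with_timeout("A", results.append, results.append, 0.05))  # times out
# results == ["DelayedA", "Timed Out"]
```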

We can test the success scenario like this

    [Test]
    public void TestEventualCallbackSuccessWithTimeout() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);

      var actual = string.Empty;

      var aClass = new AClass();

      Action<string> onSuccess = (s) => { actual = s; _autoResetEvent.Set(); };
      Action<string> onFailure = (s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatCallsbackEventuallyOrTimesOut("A", onSuccess, onFailure, 2500);

      Assert.IsTrue(_autoResetEvent.WaitOne());
      Assert.AreEqual("DelayedA", actual);
    }

and the failure like this

    [Test]
    public void TestEventualCallbackFailureWithTimeout() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);

      var actual = string.Empty;

      var aClass = new AClass();

      Action<string> onSuccess = (s) => { actual = s; _autoResetEvent.Set(); };
      Action<string> onFailure = (s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatCallsbackEventuallyOrTimesOut("A", onSuccess, onFailure, 1500);

      Assert.IsTrue(_autoResetEvent.WaitOne());
      Assert.AreEqual("Timed Out", actual);
    }

Events

Events are very similar to delegates, the big difference being the ability to add multiple handlers. First I’ll declare the event. Its payload will be a string.

  public class AClass {

    public event EventHandler<string> SomethingHappened;

    ...

}

Just as with the callbacks we’ll start with an example that fires immediately

    public void DoSomethingThatFiresAnEvent(string str) {
      if (SomethingHappened != null)
        SomethingHappened(this, str + str);
    }

Since we can’t know if anything is watching the SomethingHappened event we have to check for null, and the EventHandler also requires the sender to be passed with the event.

Testing this event is very similar to testing the simple callback that we looked at above.

    [Test]
    public void TestEvent() {
      var actual = string.Empty;

      var aClass = new AClass();
      aClass.SomethingHappened += (_, s) => { actual = s; };

      aClass.DoSomethingThatFiresAnEvent("A");

      Assert.AreEqual("AA", actual);
    }

Note our anonymous function takes two arguments because the sender object is included. We use ‘_’ to indicate we’re not interested in it.

Just like with the delayed callback, we want to be able to test events that don’t fire immediately.

Here’s an example of such a method

    public async void DoSomethingThatFiresAnEventEventually(string str) {
      var s = await LongRunningOperation(str);

      if (SomethingHappened != null)
        SomethingHappened(this, s);
    }

And here’s how we test it.

    [Test]
    public void TestEventualEvent() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);
      var actual = string.Empty;

      var sw = new Stopwatch();
      sw.Start();

      var aClass = new AClass();
      aClass.SomethingHappened += (_, s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatFiresAnEventEventually("A");

      sw.Stop();

      Assert.Less(sw.ElapsedMilliseconds, 500);
      Assert.IsTrue(_autoResetEvent.WaitOne());
      Assert.AreEqual("DelayedA", actual);
    }

Here’s a slight variation on the callback that times out. Here we want to catch an event that doesn’t fire as quickly as we’d expect.

    [Test]
    public void TestEventualEventTimesOut() {
      AutoResetEvent _autoResetEvent = new AutoResetEvent(false);
      var actual = string.Empty;

      var sw = new Stopwatch();
      sw.Start();

      var aClass = new AClass();
      aClass.SomethingHappened += (_, s) => { actual = s; _autoResetEvent.Set(); };

      aClass.DoSomethingThatFiresAnEventEventually("A");

      sw.Stop();

      Assert.Less(sw.ElapsedMilliseconds, 500);
      Assert.IsFalse(_autoResetEvent.WaitOne(1500));
      Assert.AreEqual("", actual);
    }

Mastermind, The Code Breaking Game

Download Code

Mastermind is a code breaking game for two players.

A “Code Maker” creates a secret sequence of colour pegs. A “Code Breaker” must break the code by taking guesses and working with the feedback from the Code Maker. Feedback is given using Black and White Pegs.

  • A correct colour in the correct position is acknowledged with a Black Peg
  • A correct colour in the wrong position is acknowledged with a White Peg
  • The position of these pegs is not significant. So, a black peg indicates that one of the guessed pegs is correct, but not which one.

See here for a detailed description of the game.

For a programmer both the Code Maker and Code Breaker roles are interesting challenges.

The Code Maker role must compare a guess and a secret code and generate the correct feedback.

The Code Breaker role must issue guesses and figure out the secret code based on the feedback.

Before we look at either role, let’s create some types that we can work with.

type Colour = Red|Orange|Yellow|Green|Blue
type Answer = {Black: int; White: int}

Colour is a discriminated union of the 5 colours that can be used in the code. We also have an Answer that contains a certain number of Black and White pegs.

Each code consists of 4 pegs and colours can be duplicated. We don’t enforce that with types, in fact it’s quite easy to make the code work for arbitrary code lengths. There is one function in the current code that locks us into a code length of 4. I’ll discuss that towards the end of this post.

The Code Maker
Let’s start with a function that can compare a secret code with a guess and return the black/white pegs answer. I’ve put a worked example in the comments so you can follow what’s happening.

// E.g.
// code     = Red, Orange, Yellow, Blue
// guess    = Yellow, Yellow, Green, Blue
// expected = {Black = 1 (Blue); White = 1 (Yellow)}
let check code guess =
    let IsMatch t = fst t = snd t

    // right = [(Blue, Blue)]
    // wrong = [(Red, Yellow); (Orange, Yellow); (Yellow, Green)]
    let right, wrong =
        List.zip code guess
        |> List.partition IsMatch

    // Number of Black Pegs
    // 1 (Blue, Blue)
    let rightColourRightPosition =
        List.length right

    // Number of White Pegs
    // wrongCode  = [Red; Orange; Yellow]
    // wrongGuess = [Yellow; Yellow; Green]
    let wrongCode, wrongGuess = List.unzip wrong 

    // E.g. when colour = Yellow, result = 2
    let howManyOfThisColourOutOfPlace colour =
        wrongGuess
        |> List.filter(fun c -> c = colour)
        |> List.length

    // Number of White Pegs
    // 1 (Yellow) Although Yellow is guessed twice, there is only one Yellow in the code, so result is 1
    let rightColourWrongPosition =
        wrongCode                                                                          // [Red; Orange; Yellow]
        |> Seq.countBy(id)                                                                 // seq [(Red, 1); (Orange, 1); (Yellow, 1)]
        |> Seq.map (fun group -> (snd group, howManyOfThisColourOutOfPlace (fst group)))   // seq [(1, 1); (1, 0); (1, 2)] (fst is occurrences in code, snd is occurrences in guess)
        |> Seq.sumBy Math.Min                                                              // For each colour, sum the lesser of occurrences in code and in guess

    {Black = rightColourRightPosition; White = rightColourWrongPosition}

The Code Maker has visibility of the secret code (because it created it) and the guess (because the Code Breaker asks to have a guess checked). This will be more interesting when we look at the signature of the solve function.

Let’s not get ahead of ourselves. How do we check a guess against a secret code and calculate the number of Black and White pegs?

We zip the secret code and the guess (which are both lists of colours); this gives us a list of tuples, where each tuple holds the colours of the two pegs at the same position in the code and the guess.

We then partition that list into tuples where the fst and snd match (same colour, same position) and where fst and snd don’t match.

The number of Black pegs is simply the length of the list of tuples that matched.

The number of white pegs is calculated by taking the colours from the list of non-matching tuples and figuring out how many of the colours in the code are also in the guess. Since we partitioned the list, none of the pegs that were in the right place will get in the way of this calculation.

Follow along with the comments in the code and it should be clear.

The Code Breaker
On the face of it, writing the code for the Code Breaker seems harder than writing the Code Maker. It has to come up with guesses and then correlate the feedback from all the guesses to crack the code. We also can’t pass our secret code to the Code Breaker, because that’s not information it should have.

What should we pass to the solve function? It should be able to come up with guesses by itself. The only thing it needs is a way of asking the Code Maker to check a guess against the code. So, it needs a way of calling the check function. But it can’t pass the secret code to the check function, it can only pass the guess.

Enter Partial Application.

We can wrap a secret code up in a closure by partially applying the check function. This gives us a function that just accepts a guess and returns an answer. From the Code Breaker’s point of view that’s exactly how the Code Maker should work.

let secretCodeChecker = check [Red; Orange; Yellow; Blue]

Armed with that function the logic of the code breaker is pretty simple.

let solve checkFunction =
    let filterPossibilities possibilities guess =
        let answer = checkFunction guess
        possibilities
        |> List.filter (fun potential -> (check guess potential) = answer)

    let rec solve_iter possible =
        match possible with
        | head::[] -> head
        | head::_ -> solve_iter (filterPossibilities possible head)
        | _ -> raise CanNotBeSolved

    solve_iter possibleCodes

Start with all possibleCodes, take the first of them and guess it. Use the response from the Code Maker to filter out codes that are no longer possible. Repeat until there’s only one possibility left.

There’s a quick and easy way to generate all possible codes

let possibleCodes =
    seq {
            for i in ColourList do
                for j in ColourList do
                    for k in ColourList do
                        for l in ColourList do
                            yield [i;j;k;l]
    }
    |> List.ofSeq

The problem with this is that it locks us into codes of length 4 even though nothing else in our solution has that restriction. A more general recursive function would be nice here, but let’s not waste time on that right now. Let’s get our Code Breaker working.

The most interesting part of breaking the code is filtering out potential codes based on the feedback from a guess.

This is ridiculously simple to do. We supply a guess to the Code Maker and get back an answer. We then take all of the remaining possible secret codes, check our guess against each (using the non-partially-applied check function), and keep only the codes that produce the same answer the Code Maker gave us.
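To make the whole loop concrete, here’s a rough Python sketch of both roles (the names are mine, and itertools.product stands in for the nested loops, which also removes the length-4 restriction):

```python
from collections import Counter
from itertools import product

COLOURS = ["Red", "Orange", "Yellow", "Green", "Blue"]

def check(code, guess):
    # Split positions into exact matches and the rest.
    wrong_code  = [c for c, g in zip(code, guess) if c != g]
    wrong_guess = [g for c, g in zip(code, guess) if c != g]
    black = len(code) - len(wrong_code)
    # White pegs: per colour, the lesser of its mismatched occurrences
    # in the code and in the guess.
    guess_counts = Counter(wrong_guess)
    white = sum(min(n, guess_counts[colour])
                for colour, n in Counter(wrong_code).items())
    return black, white

# product generalises the four nested loops to any code length.
possible_codes = [list(p) for p in product(COLOURS, repeat=4)]

def solve(check_fn):
    # Guess the first remaining possibility, filter with the answer, repeat.
    possible = possible_codes
    while len(possible) > 1:
        guess = possible[0]
        answer = check_fn(guess)
        possible = [p for p in possible if check(guess, p) == answer]
    return possible[0]

secret_checker = lambda guess: check(["Red", "Orange", "Yellow", "Blue"], guess)
solve(secret_checker)   # ['Red', 'Orange', 'Yellow', 'Blue']
```

As in the F# version, the secret lives only inside the closure passed to solve; the solver never sees it directly.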

To figure out the secret code that’s wrapped up in the secretCodeChecker function, we just pass it to solve

> solve secretCodeChecker;;

val it : Colour list = [Red; Orange; Yellow; Blue]

And what about those nested loops that produce all possible codes? How can we turn them into a more general recursive function?

I’ll leave that for you as an exercise. If you look up the ‘Making Change’ example in Structure and Interpretation of Computer Programs here, you’ll be in the right area; however, instead of simply counting the number of possibilities, you need to actually capture and return them.

 

Playing for a Draw

It’s September, so time for a small diversion from the technical stuff to something far more important, Hurling.

Prior to 2012, the last All-Ireland final to end in a draw was the 1959 Kilkenny-Waterford final. Since 2012, all three finals have had to be replayed. This many draws in a row is wildly unusual, but that in itself isn’t evidence of funny business. Anomalies happen; that’s why we call them anomalies.

Hurlers are operating at levels of strength, skill and fitness unlike anything we have ever seen. If we assume that the top teams are reaching roughly the same heights, aren’t close games more likely? And consequently, shouldn’t we see more draws?

Or is there more to it?

This year alone, Kilkenny and Tipperary ended level after normal time in the National Hurling League Final, and Kilkenny were only a point ahead after extra time. Kilkenny and Galway tied the Leinster Semi-Final, Clare and Wexford couldn’t be separated in Normal or Extra time in the Qualifiers, and their replay also went to extra time. And of course Sunday’s Epic also ended all square.

For as long as I can remember there have been suspicions that the GAA hope for draws and the revenue boost that replays provide. It’s quite possible that’s true, but it’s a leap from that to suggest that the GAA actively influence referees to blow up level games, or allow close games to run on a little longer in the hopes of the trailing team equalizing.

I’m sure referees would be extremely unhappy and would absolutely refuse to be influenced in such a way, and I don’t believe it happens.

I do however think referees are doing exactly what I described above, albeit at their own volition rather than in response to pressure. I believe they are stopping tied games as quickly as possible, but allowing close games to run on. Giving the trailing team “A chance”.

Exhibit A – All-Ireland Final – Kilkenny vs Galway – 9 September 2012
3 minutes of injury time were added, with Kilkenny leading by a point.
At 71:50 a free was given to Galway, and Jackie Tyrrell was booked moments after a trip on Kilkenny’s Tommy Walsh was ignored. Galway tied the score from the resulting free, and the referee ended the game a second or two short of the end of injury time.

Exhibit B – All-Ireland Final – Cork vs Clare – 8 September 2013
2 minutes of injury time were added. When the two minutes have elapsed, Cork are up by a point. The referee allows play to continue for 27 seconds. Clare equalize and play is then stopped.

Exhibit C – League Final – Kilkenny vs Tipperary – 4 May 2014
3 minutes of injury time added. Kilkenny a point up when the 3 minutes have elapsed. Play is allowed to continue for 38 seconds, including a missed free to Tipperary. Tipperary equalize from play, the referee doesn’t allow the puckout. Normal time ends level.

In Extra-Time 1 minute of injury time is added. Kilkenny lead by a point when the 1 minute elapses. Referee allows the puckout, Kilkenny win the ball and the game is ended. We of course can’t know if Tipperary would have been allowed a few seconds to equalise if they had won the ball, but the pattern from other games suggests they might.

Exhibit D – Leinster Semi-Final – Kilkenny vs Galway – 22 June 2014
2 minutes of injury time added. Kilkenny score with 2 seconds of injury time remaining to lead by a point.
Play is allowed to run on for 21 seconds; the commentator remarks, “He has to give them a chance”. Galway equalize and the game ends.

Exhibit E – All-Ireland Final – Kilkenny vs Tipperary – 9 September 2014
1 minute of injury time added. Sides are level, a questionable free is awarded to Tipperary. The incident is similar to the free awarded in the 2012 final between Kilkenny and Galway, but this time the free is given to the defending player rather than the player with the ball. Had the free been given to Kilkenny it would almost certainly have been scored.

The entire 1 minute of injury time is used up taking the free and waiting for a decision from Hawkeye, play resumes with 71 minutes 23 seconds on the clock yet no additional time is played after the puckout. The referee ends the game immediately, with the sides level.

What’s going on?
I don’t believe there is a conspiracy to create draws for the GAA, but I do believe the referees are taking the easy way out by giving teams “a chance” to equalize, or blowing up if the sides are level. The pattern of this happening in the past makes it harder for referees in the future.

Solution
There has apparently already been a decision to implement a public clock for hurling. That should happen urgently.

Tied championship games should always go to extra time. At the moment there’s a bizarre situation where a League game or a Qualifier game can go to extra time, but other championship games, including the All-Ireland final, go to an immediate replay if the sides are level at the end of normal time.

Of course, hurling games need never go to a replay. Simply allowing tied teams to play on until someone scores would end most games within minutes. Perhaps requiring a team to lead by two points to clinch a game would be fairer, and should still end the game with very little extra time.

I don’t think the GAA try to engineer draws, but I suspect the likelihood of them ever eliminating them is fairly remote.

Active Patterns: Single Total (|A|)

This entry is part 2 of 8 in the series Active Patterns

Part 1 of this series was mainly sharpening the axe by covering some basics like Pattern matching. I also gave a general sense of what active patterns are (functions that can be used when pattern matching, such as in match expressions). Now it’s time to dig into the details.

As I mentioned previously there are arguably 5 variations of active patterns. This post will cover the first of those, the Single Total Active Pattern.

When we looked at plain old pattern matching we extracted and matched against values that were already there. By which I mean, matching against a tuple like (21, 8, 2014) allowed us to match on the values 21, 8 and 2014, or any combination of them, but we couldn’t match against values like ‘August’ or ‘Leap Year’.

Active Patterns allow us to do just that: we can take an input, transform it in some way, and then match against the result of that transformation.

Let’s try a simple example.

let (|UpperCaseCount|) (str: string) =
    str.ToCharArray()
    |> Array.filter (fun c -> c.ToString() = c.ToString().ToUpper())
    |> Array.length

let UseThePatternLuke (str: string) =
    match str with
    | UpperCaseCount 4 -> "Bingo 4 is the magic number"
    | UpperCaseCount u -> sprintf "Nah %d upper case characters is no good" u

Active Recognizers
The first thing to note is the first line of the definition, in particular those funny parentheses around (|UpperCaseCount|). Those (|…|) are “banana clips”, and they denote a special kind of function known as an ‘Active Recognizer’. These functions do the heavy lifting for Active Patterns. They accept the source data, break it up, transform it and output it in a form that can be matched against.

From the perspective of match expressions and assignments the pattern matching works exactly like the plain old pattern matching we saw in the last post. The difference with Active Patterns is that the Active Recognizer function has gotten in and transformed the data before the matching happens.

The “banana clips” above enclose only one value, so this is a Single Total Active Pattern.

The significance of this will become more apparent when we look at the remaining kinds in subsequent posts.

We’re matching against a string, but the property of the string we’re interested in is the number of upper case characters. So, we define an active recognizer that takes a string, and returns the number of upper case characters.

Apart from those Banana Clips it looks like an ordinary function. I’ve mentioned in previous posts that the Single Total Active Pattern can be a little hard to explain because simply using a function almost always seems like a better idea. If you’re skeptical, stay with me (I’ve been there).

For simple pattern matching, there’s just the “match x with” code, or the destructuring assignment. For Active Patterns you define the active recognizer separately from the pattern match. Basically you pull some logic out into its own function. Nothing magical.
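To make that concrete, here’s the UpperCaseCount logic from above pulled out as a plain function and used with an ordinary match, behaviourally the same as the active-pattern version:

```fsharp
// The same logic as an ordinary function rather than an active recognizer.
// Note a quirk of the original comparison: characters with no lower-case
// form (digits, punctuation) equal their own upper case, so they count too.
let upperCaseCount (str: string) =
    str.ToCharArray()
    |> Array.filter (fun c -> c.ToString() = c.ToString().ToUpper())
    |> Array.length

let useTheFunction (str: string) =
    match upperCaseCount str with
    | 4 -> "Bingo 4 is the magic number"
    | u -> sprintf "Nah %d upper case characters is no good" u
```

The match shapes are identical; the only difference is that the transformation happens explicitly before the match, rather than inside the pattern.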

Here’s the same code with some scribbles to try and convey the relationship between the active recognizer and the pattern matching.

And here, after much head-scratching, is an attempt at an example that is simple enough for anyone to understand, but where just using functions might not have been as clean.

let (|IsPalindrome|) (str: string) =
    str = System.String(str.ToCharArray() |> Array.rev)

let (|UpperCaseCount|) (str: string) =
    str.ToCharArray()
    |> Array.filter (fun c -> c.ToString() = c.ToString().ToUpper())
    |> Array.length

let (|LowerCaseCount|) (str: string) =
    str.ToCharArray()
    |> Array.filter (fun c -> c.ToString() = c.ToString().ToLower())
    |> Array.length

let (|SpecialCharacterCount|) (str: string) =
    let specialCharacters = "!£$%^"
    str.ToCharArray()
    |> Array.filter (fun c -> specialCharacters.Contains(c.ToString()))
    |> Array.length


let (|IsValid|) (str: string) =
    match str with
    | UpperCaseCount 0 -> (false, "Must have at least 1 upper case character")
    | LowerCaseCount 0 -> (false, "Must have at least 1 lower case character")
    | SpecialCharacterCount 0 -> (false, "Must have at least 1 of !£$%^")
    | IsPalindrome true -> (false, "A palindrome for a password? What are you thinking?")
    | UpperCaseCount u & LowerCaseCount l & SpecialCharacterCount s -> (true, sprintf "Not a Palindrome, %d upper case, %d lower case and %d special characters. You're good to go!!!" u l s)

I’ve actually defined a few different Active Recognizers, each of which transforms the string in a different way. The pattern match can then use any combination of the four patterns.

We can match against literal values like true and 0, or we can match against variables as in the last case.

The highlighted line shows one of the real advantages of active patterns over simple functions. We call three functions, store the returned values and use them, all in one line.

Actually, I was a little cheeky: even my IsValid “function” is itself an Active Recognizer. I can use it as follows:

let checkPassword (password: string) =
    match password with
    | IsValid (true, _) -> "OK"
    | IsValid (false, reason) -> reason

This would also work

let checkPassword (password: string) =
    match password with
    | IsValid (false, reason) -> reason
    | _ -> "OK"

One final quirk of the Single Total Active Pattern is that you can use it like this.

> let (IsValid result) = "$TArAT$";;

val result : bool * string =
  (false, "A palindrome for a password? What are you thinking?")

The value on the right of what looks like an assignment is sent to the Active Recognizer, and the result of the Active Recognizer is then bound to the variable ‘result’.

What’s going on here is simply the same Destructuring Assignment we saw in the first post, but using an Active Pattern instead of simple Pattern Matching. For more on this, and on the Single Total Active Pattern I strongly recommend Luke Sandell’s excellent (and concise) blog post.

Don’t get too hung up on the Single Total Active Pattern. In many cases a simple function will work and be as clear as, or maybe even clearer than, the Active Pattern equivalent.

That said, understanding what’s going on with this type of Active Pattern will make it very easy to grasp the rest, and once you know how to use a new tool, it becomes easier to see places where it can work.

Stop focusing on Agile, fly the damn plane

Constant Learning
Being a software developer means constant learning. The technical landscape is always shifting. We have to run to stand still. We know this. We accept it. For some it’s the very thing that attracts them to the profession.

I’ve learned lots about software development in the last few years.

  • How to automate builds
  • How to automate tests
  • Object Oriented Programming/Design
  • Functional Programming/Design
  • Operating Systems
  • Programming Languages
  • Frameworks
  • Version Control Systems

I’ve tried to embrace Agile, hell I’m even a certified Scrum Master. I attend conferences, speak at conferences, read lots and blog a little.

Despite all this I feel I am a worse “Software Developer” than I used to be, which can be partly explained by this quote from John Archibald Wheeler:

“We live on an island surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance.”

In other words, the more you learn, the more stupid you feel.

This, combined with the Dunning-Kruger effect, suggests that feeling like we’re getting worse, even as we get better, might be understandable.

But that’s not it. It would be great to explain this all away, pretend it’s all in the mind, but I don’t think it is. I believe I am actually a worse developer now than I used to be. Less productive, less focused, less comfortable.

And, I think I know why.

Fly The Plane
In an emergency pilots are trained to remember that their first priority is to “fly the plane”. It may seem odd that they need to be reminded of that fact, but it’s incredibly easy to become focused on an instrument that doesn’t work and forget to keep the plane in the air. On December 29th 1972 Eastern Air Lines Flight 401 crashed into the Florida Everglades with 101 fatalities. The flight crew were all focused on a burned out landing gear bulb and failed to notice that the autopilot wasn’t maintaining altitude.

They weren’t bad pilots, or bad people. They made a mistake, a mistake that we all make, all the time. The consequences of focusing on an immediate issue, forgetting to fly the plane, and trusting that the autopilot had their back were catastrophic. They paid the ultimate price.

The consequences to the rest of us of making a similar mistake are far more benign, but there are consequences. I don’t think I’m alone in letting a focus on “building software the right way” distract from the real job of “building software”.

For me, it all started to go wrong when I started learning TDD.

Devouring everything I could read about TDD gave me a glimpse of an Agile world, of continuous integration, continuous deployment, executable specifications, distributed version control systems and feature branches. A world where the software development process worked. A magical place where you could pick requirements off the trees and working software flowed like a mighty stream.

Knowing that such a magical place existed became a curse. It made me resent the daily frustrations that have always blighted software developers. It made me feel bad every time I wrote code that didn’t have tests. It made me obsess over design and clean code to the point that sometimes I froze, unable to move forward. It made me waste hours trying to automate things that, yes, absolutely needed to be automated, but not at the cost of shipping software.

Tools Tools Tools
The Agile manifesto proposes “Individuals and interactions over processes and tools”. And yet, processes and tools are deployed in ever growing numbers in an attempt to “be agile”. I’ve spent a huge chunk of my time on tools like Team City, Jenkins, Git, Subversion, Testrail, Rally, Jira, FitNesse, RSpec, NUnit, Vagrant, Puppet, VirtualBox, and all that before we even get to a programming language.

You can study all of those tools, learn about 20% of each of them and still not know a damn thing about delivering software other than it’s really hard to get tools to talk to each other.

A Call to Action
Here’s what I’ve started doing, and if my sorry tale sounds familiar you might like to join me.

Stop.

Go and build a product. Any product, but make it a complete product, it can be small but it must actually do something. I’ve started with a tool to rank the pool players in our office using Elo Ratings.
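For the curious, the heart of an Elo rater really is tiny, which is part of why it makes a good starter product. This is just a sketch of the rating maths, not my actual tool; the K-factor of 32 and the names are one reasonable choice, nothing more.

```fsharp
// Expected score for a player rated ra against a player rated rb
let expected (ra: float) (rb: float) =
    1.0 / (1.0 + 10.0 ** ((rb - ra) / 400.0))

// New ratings after a game; scoreA is 1.0 for a win, 0.5 for a draw, 0.0 for a loss
let update k (ra, rb) scoreA =
    let ea = expected ra rb
    (ra + k * (scoreA - ea), rb + k * ((1.0 - scoreA) - (1.0 - ea)))
```

Two evenly matched players at 1200, one win: `update 32.0 (1200.0, 1200.0) 1.0` gives `(1216.0, 1184.0)`. Everything else in the product is plumbing around those two functions.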

Don’t use any tools other than your IDE.

Open up Visual Studio or RubyMine or whatever and build a product.

Don’t write unit tests, don’t automate the build or the deployment. Don’t try to be agile.

When you have a fully working product, build another, and another, or build the same product again.

Forget the Red-Green-Refactor Rhythm, get into the rhythm of building working products, not functions, not programs, “Products”. Don’t just throw any old crap up and call it a product, apply a little polish. Pretend you are delivering for a client. Start by delivering the core value of the product and then improve it.

Do as much as possible manually so that you get your mind back to the bare bones of building software. What do you actually need to do?

Get faster at delivering. You should be able to build a small app in a few hours. Build the same app multiple times; Katas don’t have to be about Test Driven Development of tiny functions. Do a “Ship A Product” Kata: build a product in an hour, by hand, then throw it away and build it again.

Once you’ve got that rhythm going then, AND ONLY THEN, add in an automated build. When you’ve got that working then AND ONLY THEN add in automated tests.

For my first stab at this I didn’t even use version control, I put the code in dropbox.

Don’t do anything because “It’s the right thing to do, or it’s agile”, only do things because you can see that it makes sense, makes you faster, makes life easier, solves an ACTUAL problem.

Minimum Viable Product
Working Software is the minimum viable product of your software development process. An automated build that delivers nothing is just wasted time. You can spend a long time creating an all-encompassing Walking Skeleton and never start on the product.

Focus on actually completing some small products. Figure out what you really need. Evolve your Walking Skeleton from first principles.

This isn’t an anti-agile or anti-tdd post. Quite the opposite. We need to take an agile approach to being agile. Working software is our green light, that’s the baseline. If adopting any agile practice hinders your ability to deliver working software then revert, get back to green and try again, or try something else.

Why I Study Functional Programming

For a while now I’ve been slowly writing a book about learning functional programming. I don’t know if it will ever see the light of day, that isn’t really the point. It’s more of an exercise to help me learn. I thought I might share the introduction here.

The Introduction
I became aware of computers and computer programming around the same time I became aware of the Rubik’s Cube. I was nine years old. The Cube awakened a love of puzzles that has stayed with me to this day. The computer was then and still is the ultimate puzzle.

The Rubik’s cube can be solved, there is an obviously right solution. That right solution is elegant, it looks beautiful. Thirty years on from the cube’s heyday someone solving it still attracts attention and it’s still thrilling to see the colours fall into place with the last few turns.

With a computer there is no right solution. There are endless problems to be solved and many solutions to every problem.

For the first few years of playing with computers the challenge was to get it to do anything at all. I wrote simple childish games. Some were fun to play, all were fun to write.

Then there was a Eureka moment. People do this for a living. I could do this for a living. I could get paid to write computer programs.

Studying programming in college was a joy. It was amazing to discuss programming with people who loved it as much as I did, and were often much better at it than I was.

I loved my first job. I wrote code on my first day, put code into production in my first week. I stayed late almost every night. It wasn’t work. It was as much fun as I had always hoped.

There have been many high points in my career, projects that I’m proud of, they are not surprisingly also the projects I most enjoyed working on.

There have also been low points. Instead of getting to work at my hobby, I turned my hobby into work. I rarely get the same thrill from coding that I did as a 9 or 10 year old. I rarely get to delight in elegant solutions. I rarely get to build something just for the fun of building it.

I continue to write code in my spare time but the relentless emergence of new technologies, tools, frameworks and platforms has meant that even my play time has become work time, study time. I don’t study for fun, I study to keep up, and I mostly fail at that.

Functional Programming was something I added to the list of things I must learn about. It sat there dormant while other “more important” things bubbled to the top and stole my attention.

In 2012, at the Norwegian Developer Conference Vagif Abilov gave a beautiful presentation on a Functional approach to Conway’s Game of Life using F#.

Seeing his solution fall into place was like seeing the last few turns of a Rubik’s Cube. It was elegant, it was simple, it was right. For someone who was becoming increasingly disillusioned with programming, it was frankly moving.

You didn’t need to be a brilliant programmer with encyclopaedic knowledge of frameworks to understand what he was doing. My 10 year old self could have been dropped into that room and understood it all.

The voting system at NDC involves putting Green, Yellow or Red cards into a box indicating whether you thought the talk was Good, Average or Bad. As I registered my vote I noticed that the box was full of Green. I wasn’t the only one in the room who was impressed by what they saw.

I started learning F# and Functional Programming that evening and I haven’t stopped since.

One of those clever guys I talked programming with back in college recently described learning functional programming as “like learning to program all over again”.

He’s right, it is, but it’s even more fun second time around.

Richard Dalton
Kildare, Ireland
May 2014

Why I still don’t understand Single Case Active Patterns

This entry is part 9 of 11 in the series Learning to think Functionally

I won’t lie to you, my burgeoning relationship with F# hit a bit of a rough patch recently.

While I’ve understood pattern matching from the outset, I’ve only had a vague idea about Active Patterns. Nothing I’ve read has really made them click for me. So, I decided to focus and try and get a better understanding.

What I managed to grasp almost immediately is that Active Patterns are functions, and there are a few different types:

  • Single Case
  • Multi-Case
  • Partial

This is where the trouble starts. Most explanations seem to begin with the Single Case Active Pattern on the basis that it is the “simplest”.

Here’s a typical example

let (|ToColor|) x =
    match x with
    | "red"   -> System.Drawing.Color.Red
    | "blue"  -> System.Drawing.Color.Blue
    | "white" -> System.Drawing.Color.White
    | _       -> failwith "Unknown Color"

This converts the value x to a color.

And here’s how we would use it

let (ToColor col) = "red"

These kinds of examples are everywhere and my overwhelming feeling on seeing them is WHY? Why not simply use a function?

let ToColor x =
    match x with
    | "red"   -> System.Drawing.Color.Red
    | "blue"  -> System.Drawing.Color.Blue
    | "white" -> System.Drawing.Color.White
    | _       -> failwith "Unknown Color"

The only difference seems to me to be the way the function is called.

Instead of

let (ToColor col) = "red"

we have

let col = ToColor "red"

Call me old fashioned, but the regular function call looks better and more understandable to me. What on earth is going on in the Active Pattern? A function with an out parameter, that accepts another parameter using assignment?

It just seems daft compared to the more straightforward function call that we know and love. There has to be some scenario that makes the Active Pattern useful, but I’m pretty sure simple conversions like this aren’t it.

I did send out a cry for help on Twitter and I got a few replies showing cases where the Single Active Pattern is necessary, however the examples were quite complicated (to my novice eyes), leading me to think that far from being the “simplest” Active Pattern, the Single Case actually fulfils quite a niche purpose which is not straightforward at all. The Simplistic examples published on various blogs do nothing to illustrate where this feature is actually useful.

So, let’s park the Single Case; I’ll return to it when I’m able to explain it properly. For now, I still don’t understand it. In the next post I’ll explain the Multiple Case Active Pattern, which I do understand.

Scrum Master != Project Manager

I like a lot of what I see in Scrum, but I’m not 100% bowled over by it all. For starters I dislike the term Scrum Master, and how Scrum Masters are “created”.

Let’s start with the name, “Scrum Master”. It implies leadership, authority and some sort of responsibility for ensuring that the team delivers. In short, it makes “Scrum Master” sound like a new fangled term for Project Manager.

Combine this with the fact that Scrum Master Certification can be achieved with two days training and a very very simple multiple choice exam and you’ve got real problems.

As a simple experiment, type Scrum or Scrum Master or even Agile into any Jobs Site. The results are kind of depressing. They paint a picture of an entire industry eager to appear agile without actually “getting it” at all.

Here’s the first result from running this experiment myself:

SCRUM MASTER

The Client is a major multinational organisation and a world leader in its field. They have a great reputation as an employer and are expanding at present due to new product development. They are actively looking for a software industry professional to take on the role of Scrum Master/ Project Manager within a software product development environment.

Scrum Master / Project Manager. Already I’m worried.

The Scrum Master will be responsible for the development of new features and enhancements on a number of customer facing applications and platforms.

No. The Scrum Master is responsible for helping you do Scrum correctly. The Scrum TEAM is responsible for developing new features. The Scrum Master does not LEAD that effort, or take responsibility for it.

This is a hands on role where you will act as scrum Master and work through the full software development lifecycle, carefully monitoring and controlling each of the phases from initiation to closure. Experience of project management/project leadership experience would be advantageous.

You will be responsible for all aspects of the Scrum Master role including Daily Scrums and Sprint Planning. You will conduct lessons learned sessions and ensure feedback is used for subsequent sprints. Arrange Sprint Demo and provide inputs and action process changes as/where required.

OK, good, stop there, don’t spoil it.

Ensure project objectives are met within the constraints of project scope, time, cost and required quality. Coordinate and communicate with project resources, internal team, stakeholders and vendors on all aspects of project progression/status. Measure progress and monitor performance (overall, scope, schedule, costs, quality) Risk Management including issue escalation and resolution. Post Project Implementation review to document lessons learnt.

You had to go and spoil it.

We would like to hear from candidates with 6 years + experience in a similar project management role, preferably in a fast paced service delivery environment. You should have exposure to Agile Development Tools proficiency such as Greenhopper and have experience of managing multiple projects concurrently. In addition you should have extensive hands-on experience in producing and grooming backlog, creating release plans etc.

The veil slips. A Job with the Title “Scrum Master”, turns out in the first paragraph to be Scrum Master/Project Manager, and by the end of the Job Spec we’re left with just Project Manager, for multiple concurrent projects.

There are two possibilities here.

1) This company wants a Project Manager and somebody has thrown in some Agile/Scrummy terms to attract more interest.
2) This is a company that is humouring some of its developers by letting them “do” Scrum, but in name only. As long as the Scrum Master fulfils the job that used to be called “Project Manager”, the developers can call what they are doing anything they want.

If I were looking for a job as a Scrum Master (I’m not) I would run a million miles from this Job Spec.

Everything I know about creative writing, I learned from programmers.

  • Sentences should be terminated with a semi-colon;
  • At the start of every chapter, list all of the characters who will be appearing in that chapter;
  • All Characters should be killed off before the end of a chapter; Long living characters should be avoided as they tie multiple chapters together making it harder to swap out a chapter for a better one;
  • Deliver the most important stuff early; In a Murder Mystery the identity of the murderer is the most important piece of information to the reader; Deliver that in the first few pages; Defer less important details to later;
  • When you write a second draft, don’t delete anything; Leave old text there, perhaps with a strikethrough, in case you change your mind;
  • When scenes are repeated e.g. Someone having a drink; Don’t write it multiple times; Write it once and then for subsequent uses of that scene, refer the reader to the appropriate page;
  • If you can find a way to cram entire sentences into one or two words you should absolutely do that, it’s clever and will impress your readers; Using words from other languages can help with this;
  • Don’t create lots of fully fleshed out characters; Create a handful of basic characters and then describe other characters in terms of how they differ from these;