
FitNesse with .Net – What is FitNesse?

This entry is part 1 of 5 in the series FitNesse

Why another tutorial on installing and using FitNesse with .Net? Well, because I had a lot of trouble installing and using FitNesse with .Net. That may be an issue with the existing tutorials, or it may be an issue with me. Hopefully with these posts I can spare you some of the pain I encountered.

First things first. The source for all things FitNesse is here. There you’ll find lots of tutorials, samples and other useful info. In fact, if you stick at it you’ll get up and running using just the information on that site, but if you’re in any way like me you may find you slip off the happy path a few times. That’s where the tutorial you’re reading comes in.

History
Before you wrap your head around FitNesse, it helps to have an understanding of where it came from. The Framework for Integrated Test (FIT) was developed by Ward Cunningham. The origins of FIT go back to a project where Cunningham discovered that he could allow users to enter test scenarios into a spreadsheet, and with a bit of coding he could make those test scenarios execute against the project code.

FIT is a publicly distributable version of that idea. Requirements can be written in a standard Word document. Specific “examples” or “Test Cases” are created as tables, and FIT enables us to hook that document up to executable code, run the tests and actually insert the results back into the Word document.

FitNesse takes this notion a step further and provides a self-contained Wiki complete with its own web server. Requirements can be entered into the Wiki or edited by anyone with access. A number of styles of tables give a means of demonstrating requirements using sample data. Like FIT, the FitNesse wiki can be “run” against our code base, and the results of the tests inserted back into the Wiki.

Fit and Slim
Originally FitNesse used Fit as a means of hooking the tests (in the Wiki) up to executable code. Later the developers of FitNesse set about replacing Fit. The result was Slim. While you can still use Fit with FitNesse, I will focus on Slim; I think it's an improvement on Fit, and it works better with .Net. There is a discussion of Fit and Slim here.

FitNesse in Action
So, enough waffle, what does FitNesse look like in practice? The following screenshot shows a Decision Table, just waiting to be run. A Decision Table is one of a number of different types of tables that Slim provides as a means of illustrating requirements through tests.


We’ll get to what all of this means in due course, but for now, it’s a table with three inputs and an expected Result. The eagle eyed among you will notice that one of those examples looks like it’s expecting the wrong result. When we run our tests that should give us something worthwhile to look at.
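Since the screenshot doesn't reproduce in text, here's a rough sketch of what a Slim Decision Table looks like in FitNesse wiki markup. The table name and values here are illustrative, not the exact ones from the screenshot; input columns are plain names, and the expected-output column ends with a question mark:

```
!|Division|
|numerator|denominator|quotient?|
|10|2|5|
|12|3|4|
|6|3|3|
```

The last row is the kind of deliberately wrong expectation mentioned above: 6 divided by 3 is 2, not 3, so that cell will fail when the tests run.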

Take a look at the buttons; they all do interesting things, but for now we’re just interested in the ‘Test’ button. If we click it, our FitNesse tests spring to life, execute themselves against our code base, fill in the results in our table, and highlight the passes and fails in green and red. Or to say it more simply with a picture…this happens…


Notice that one of the tests turns red. Dividing 6 by 3 doesn’t give 3. This isn’t highlighting a bug in the code, it’s highlighting a bug in the test, which can and does also happen from time to time. The point here is that when our expectations aren’t met, we’re told about it and can figure out whether it’s the code or the expectation that’s wrong.

What just happened?
How did the table know where to find our code and execute it?
How does it communicate with our code? How does it pass parameters and get back results? What’s that text above the test table?

These and many more questions will be answered during this series of posts. I hope you’ll stay with me. Next up, we install FitNesse and a few more bits and pieces that we need to make it play nice with .Net languages like C#.

Fit Tables and Slim Tables
Before I close this post let me return to that issue of Fit vs Slim once more. A big practical difference between the two is the types of tables we can use to define tests. In the example above we used a Decision Table, which is a Slim table. The equivalent Fit table is known as a ColumnFixture.

We’ll be using the Slim tables in these posts, but it’s worth being aware of Fit in case you see any examples of Fit tables.

GitHub Collaborative Learning


I’m embarking on something of an experiment in an effort to better understand Git and GitHub.

I’m recruiting a few fellow Git Noobs in the hopes that a little bit of collaborative learning will get us to where we want to be a little quicker. Think of it as a distributed study group.

The distributed and collaborative nature of GitHub means that for me at least, this seems like the most logical approach to understanding and getting comfortable with it.

The plan is simple.

Setup

  1. Sign up a team of GitHub Noobs.
  2. Make sure everyone has a GitHub account
  3. Make sure everyone has Client software
  4. Provide counselling for those traumatised by step 3

Project

  1. Each team member creates a simple project
  2. Each team member creates a repository for their project
  3. Each team member grabs a clone of each of the other projects.
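For anyone following along, the project and repository steps above might look something like this on the command line. All names here are made up, and the clone uses a local path where on GitHub you'd use a https://github.com/... URL:

```shell
# Create a simple project and a local repository for it
git init -q my-project
cd my-project
git config user.email "noob@example.com"   # placeholder identity
git config user.name "Git Noob"
echo "# My Project" > README.md
git add README.md
git commit -q -m "Initial commit"
cd ..

# Grab a clone of a teammate's project (local path here;
# on GitHub this would be a repository URL)
git clone -q my-project teammate-clone
```

After the clone, each member has a full copy of the other's history and can start making local changes.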

Play Time

  1. Team members play with the various projects.
  2. Make local changes to the projects.
  3. Issue pull requests to the other team members.
  4. Deal with Pull Requests from other team members
  5. Try to mess things up by stepping on each others toes

At each step in the process we’ll have a number of perspectives on the same activities. Ideally members would blog about what they are doing/learning, but that isn’t essential.

The group will obviously be there to answer specific questions for each other, and since everyone is at the same stage in the process we should be able to help each other through any problems that arise.

This is an experiment, it should take no more than a few days to a week to get through the exercise. If it turns out to be a useful way of studying a new technology then it can serve as a template for some future study groups.

If you are interested in participating, leave a comment here, or find me on twitter (@richardadalton). The more Noobs in our hive the better.

If you’re not a Noob but would like to get involved in some sort of mentoring capacity, get in touch.

Mind Mapping

I’ve been playing with mind mapping recently as a way of organizing my thinking on sometimes very broad subjects. As an example I’ve uploaded a Software Engineering mind map to Mindmeister.com.

This mindmap is partly a way of remembering the various technologies that I’ve worked with or know about, and partly a todo list of technologies that I’d like to look at.

I was going to create two maps where I would migrate items from one map to the other as I learned about them, but in truth that would be pointless. If you haven’t used a technology in the last few months, it will be as good as new to you should you try to use it again.

With that in mind I would need to constantly review the “things I know” map, to move items back to the “things I’d like to know” map.

Something I would like to do is split this map into 4 or 5 maps, one representing each of the major branches. It’s getting a bit unwieldy at the moment. If I do that I would need simple drill down and up linking between the maps.

The greatest benefit for me right now is that when I hear about a new product or technology, I can access the map from any machine and add a reminder to myself to look into it in the future, and by adding it to the appropriate place on the map, it will be there should I ever need to use something in that space.

For example, at DDD South West I was introduced to NCrunch. So, NCrunch gets added to the map in the Testing area. Now, whenever I glance at the testing area of the Map, NCrunch is there, along with all the other related tools.

Subversion Permissions

Warning
This tip involves editing your Subversion repository .access file.
My repository is hosted by Dreamhost, and there seems to be a problem with this.

Whenever I use the Dreamhost Panel to create a new user, it wipes my work from the .access file, and overwrites it with a default file that gives full permission to everyone.

If you need to edit your .access file, then keep a backup so that when you create new users, you can restore your version of the file.

I’ll look into this further to confirm, but you have been warned, there may be issues.

Subversion Permissions
If you are like me, then you may have a subversion repository called Clients that contains projects for various … Clients. The alternative is to have a separate repository for each client.

For a repository like this, you are presumably going to want to lock things down so that each client can only see their own projects.

Here’s how I do it.

The root directory of my repository contains a folder for each Client, within which there’s a folder for each of that client’s projects.

root
  ├ Client1
  │    ├ Project1
  │    └ Project2
  ├ Client2
  │    └ Project1
  └  ...

These Client folders will effectively be the root folder from the perspective of each client. They’ll see only their own list of projects.

Edit your .access file. It might look something like this:

[/]
tom = rw
dick = rw
harry = rw
mary = rw

We need to change this. The first thing we’ll do is create groups to identify which companies people work for.

[groups]
myCompany = dick
client1 = tom, harry
client2 = mary

Then we need to ensure that only people in the myCompany group can see the root directory.

Note the section name [/] is the folder we’re assigning access to, in this case root. We use the ‘@’ symbol to indicate that we’re assigning access to a group.

[/]
@myCompany = rw
@client1 =
@client2 =

We do this because if we don’t, clients will be able to see the root directory and see the list of other clients. Yes, they’ll only have access to their own projects, and they can’t go snooping in the folders of other clients, but still, I’d prefer they never even see the names of the other Clients.

The next step is to grant each group access to its own Client directory.

[/Client1]
@client1 = rw
 
[/Client2]
@client2 = rw

And that’s it. Job done. The following is the complete .access file.

[groups]
myCompany = dick
client1 = tom, harry
client2 = mary

[/]
@myCompany = rw
@client1 =
@client2 = 

[/Client1]
@client1 = rw
 
[/Client2]
@client2 = rw

Of course, now that you understand the principle, you can make this as complicated as you like. Grant Read access for every Client project to anyone who works for that Client, but grant Read/Write access only to the individuals who actually work on each project.

Create projects that some or all clients have shared read access to, in addition to their own projects. E.g. Shared Libraries that you use on multiple projects.
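As a sketch of those two ideas (the user, group and folder names here are made up for illustration), per-project write access plus a shared read-only area might look like this:

```
[/Client1/Project1]
@client1 = r
tom = rw

[/Shared]
@myCompany = rw
@client1 = r
@client2 = r
```

Here everyone at Client1 can read Project1, but only tom can commit to it, and both client groups can read (but not write to) the Shared folder.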

Note that this works exactly as described for me, no messing about with Apache modules, etc. But then, my repositories are hosted by Dreamhost, so you might want to check with your host that they have everything set up to work.

Or, before you bother with them, just try it. It might just work.

In case it doesn’t, you might need some of this information.

Fluent Mocking

In which our hero shouts “Hold on thar pilgrims” at those who would hate Mocking Frameworks.

Here’s a scenario (for once not a hypothetical scenario, this is a real system I worked on). You’re building a wizard based app. To be more accurate, you’re building lots of wizard based apps, so you’ve extracted all the wizardy stuff up into its own classes. Your apps can now focus on actual bread and butter functionality. When I say ‘You’ I mean ‘I’, but this is a writing device that sucks you into the narrative. I bet deep down you already care a little bit about how these Wizards turned out.

Two of the objects in your Wizardy Stuff code are a ‘Wizard’ and a ‘WizardStep’. A Wizard obviously contains a number of WizardSteps. You get the idea.

There are quite a few tests that we can write to make sure that navigation works. A Wizard has a current step, Moving Next or Previous should increment or decrement the current step. Moving next on the last step causes the Wizard to finish. It’s all very simple, and what you’d expect from a wizard.

Let’s look at an example of the kind of tests we might have to write, and what it means in terms of mocks and stubs. We have a requirement that says that when the user clicks ‘Move Next’ the current step gets to validate itself and decide whether it will allow the Move Next to happen. If it returns false, the Wizard will refuse to allow the user to move on.

To test a feature like this we can do the following:

  • Create a wizard with one step
  • Use a stub for step1 that returns false when the OKToMoveNext method is called
  • Start the wizard
  • Assert that we’re on the first step
  • Attempt to move next
  • Assert that we’re still on the first step

After the attempt to MoveNext we should still be on step1 (because step1 returns false when asked to validate itself).

We can implement the test in various ways. A key issue is how to implement the stub for step1 that simulates a step failing validation. Here’s one example, using the Moq Framework:

	// Listing 1
        [Test()]
        public void Validation_CanPrevent_MoveNext()
        {
            Mock<IWizardStep> step1 = new Mock<IWizardStep>();
            step1.Setup(s => s.OKToMoveNext()).Returns(false);

            Wizard wizard = new Wizard()
                                    .AddStep(step1.Object)
                                    .Start();

            Assert.AreEqual(step1.Object, wizard.CurrentStep);

            wizard.MoveNext();

            Assert.AreEqual(step1.Object, wizard.CurrentStep);
        }

I don’t like this code. It’s too busy. There’s too much “stuff” that’s related to the mocking framework. The intent of the test might be discernible, but only just. The shaded lines in particular need a second or third glance to make sure you’re reading them right. Our intent is to create a stub wizard step that can’t move next. Our test should be screaming that intent so clearly that it can’t be missed by someone reading the code.

I suspect that scenarios like this are one of the reasons why some developers find themselves edging back towards hand-rolled mocks and stubs. The equivalent code using hand-coded classes is much simpler and the intent of the test is clearer:

	// Listing 2
        [Test()]
        public void HM_Validation_CanPrevent_MoveNext()
        {
            IWizardStep step1 = new WizardStepThatIsNotOKToMoveNext();

            Wizard wizard = new Wizard()
                                    .AddStep(step1)
                                    .Start();

            Assert.That(wizard.CurrentStep == step1);

            wizard.MoveNext();

            Assert.That(wizard.CurrentStep == step1);
        }

The shaded code tells most of the story. Because we’re creating a simple class for a specific purpose, we can be very explicit with our naming.

Although Listing 2 is an improvement over the code we produced using the Moq mocking framework, it’s not without its troubles. Our suite of tests is going to need a lot of different mocked WizardSteps to cover the various scenarios. Many will be very similar, or will have parts that are identical to parts of others. For example, we might have half a dozen versions of the class that need to prevent a user Moving Next, but each may need to do that in conjunction with some other different behaviour.

We could try to make our Handmade mocks more intelligent, but that’s a slippery slope. Once you start adding in “one more little tweak, to facilitate one more test”, you quickly find yourself with a mock that’s more complex than the code you’re trying to test.

One interesting option is to go back to using our Mocking Framework, but hide the messiness of it behind a slightly nicer abstraction. Imagine being able to write a test like the one in Listing 3:

	// Listing 3
        [Test()]
        public void step_can_stop_move_next()
        {
            IWizardStep step1 = new MockWizardStep()
                                    .ThatCannot.MoveNext
                                    .Object();

            Wizard wizard = new Wizard()
                                    .AddStep(step1)
                                    .Start();

            Assert.AreEqual(step1, wizard.CurrentStep);

            wizard.MoveNext();

            Assert.AreEqual(step1, wizard.CurrentStep);
        }

This is a fluent style interface, but behind the scenes it’s doing all the same stuff that our first test did. The beauty of a mocking framework is that you can assemble the functionality you desire at runtime, rather than needing to code a specific class for the specific case you want to test. Once you’ve written the Factory that spits out mocks, you can use it to spit out other variations of the MockWizardStep:

	// Listing 4
            IWizardStep step1 = new MockWizardStep()
                                    .ThatCan.MoveNext
                                    .Object();

            IWizardStep step2 = new MockWizardStep()
                                    .ThatCan.MoveNext
                                    .ThatCannot.MovePrevious
                                    .Object();

Once you have the fluent interface in place it gets a lot easier to create exactly the right mock for the scenario you want to test. The test becomes clearer, and to a certain extent you’ve abstracted your tests away from the specific mocking framework that you are using.

It’s not all ribbons and bows. One problem is that you have to actually build the fluent interface. You can’t really make a generic one of these. A fluent interface by its nature is a Domain Specific Language. You implement a language based on the properties of the objects you’ll be mocking.

Creating the Fluent Interface isn’t a particularly complicated task (see Listing 5), but it’s enough work that you need to think carefully about whether it will pay for itself. The example here is also artificially simple, showing only a few stub methods. When you get into creating a fluent interface that allows you to configure Mock behaviour, like verifying method calls etc, things could get a little hairy.

It’s worth looking more closely at the shaded code in Listing 5, where we see what appears to be a readonly property modifying fields within the class. What madness is this? It’s nothing really, just a trick used in constructing fluent grammars to avoid having parentheses after every term in a statement.

    // Listing 5
    public class MockWizardStep
    {
        private Mock<IWizardStep> _step;
        private bool _thatCan = true;

        public MockWizardStep()
        {
            _step = new Mock<IWizardStep>();
        }

        public MockWizardStep ThatCan
        {
            get
            {
                _thatCan = true;
                return this;
            }
        }

        public MockWizardStep ThatCannot
        {
            get
            {
                _thatCan = false;
                return this;
            }
        }


        public MockWizardStep MoveNext
        {
            get
            {
                _step.Setup(v => v.OKToMoveNext()).Returns(_thatCan);
                return this;
            }
        }

        public MockWizardStep MovePrevious
        {
            get
            {
                _step.Setup(v => v.OKToMovePrevious()).Returns(_thatCan);
                return this;
            }
        }

        public IWizardStep Object()
        {
            return _step.Object;
        }

        public Mock<IWizardStep> Mock()
        {
            return _step;
        }
    }

So, where does that leave us? Are mocking frameworks saved from those who would hate them? Well, probably not, but for everyone else here’s one more tool in your toolbox for that day when the perfect scenario presents itself.

So Over Mocking Frameworks

In which our hero peddles more of the stuff that wasn’t good enough to make it into his TDD Session, and passes it off as an interesting blog post. Welcome to the blog equivalent of the “extras” section of a DVD.

During the course of preparing and presenting my session on Test Driven Development, I had quite a few interesting conversations with quite a few interesting people. Some were the face to face kind of chats that you inevitably have after presenting a session, some were via email.

One theme that popped up more than once was the notion of being “over” mocking frameworks. And when I say “over” I mean it in the “girlfriend from college that you’d rather not see any more” sense of the word.

It appears that when it comes to mocking, TDD practitioners go through a number of phases.

  • Ignorance – How do I test this object? It has all these dependencies.
  • Interest – Hmmm, I can create classes that implement the same interface as a dependency.
  • Excitement – Holy crap, mocking frameworks are da shizzle, I want to use them everywhere.
  • Hmmm – Why is it so complicated to do this with a mocking framework? A hand coded class would be so much simpler.
  • Disillusionment – Holy crap, mocking frameworks are a pain, I don’t want to use them anywhere.
  • Righteous Indignation – Dude! You’re using Mocking Frameworks? That’s like so 2009.
  • Perspective – Mocking frameworks, handrolled? Meh! Both have their place.

As with virtually everything else in life, extreme views for or against Mocking frameworks are unlikely to be helpful to anyone. They’re a tool. Ignoring them completely (or consciously not using them) is no more helpful than surrendering to them and using them blindly.

In a future post I’ll look at how to mitigate some of the issues that cause people to “get over” Mocking Frameworks. I’ll also look at some of the issues that need to be addressed when using hand rolled mocks and stubs.

Test Driven Development – Pushing Through The Pain

Sample Code
TDD Sample Code for DDD South West

Last weekend I presented a session on Test Driven Development at DDD South West in Bristol. I actually presented the session twice thanks to being voted onto the repeat track, and in total I had 120 brave souls who came along to see the session.

I’ve taken the past week to go back over the sample code that accompanies the session and tidy it up a bit, add some ‘discussion’ comments about various issues, and also factor in a few suggestions that I received on the day.

The code can be downloaded from the link at the top and bottom of this post. The slides can be viewed here:



Thanks to everyone who came along, and especially to everyone who left feedback, which was virtually everyone. This session started with a 90 minute running time. I cut it to 1 hour for DDD South West, but it still felt a little rushed. My hope is to cut it dramatically in time for DDD North (if I’m fortunate enough to be invited) and cover less ground, but in more detail and with more time for interaction and questions from the floor.

If you have any problems with the code, or if you’d like to point out some issues with it then leave a comment or find me on twitter (@richardadalton).

Thanks for stopping by.

Sample Code
TDD Sample Code for DDD South West

Retro Fitting Unit Tests to Legacy Code

This post references a StackOverflow thread which you can read here.

One of the problems with TDD is that those who try it often begin by booting up NUnit or something similar and writing a few trivial tests to get the lie of the land. Then, instead of doing TDD (i.e. writing a test and then writing some code to make the test pass), they do something much, much harder, under the mistaken impression that it will actually be a way to ease themselves into TDD.

They do what I did, they start out by trying to write some unit tests for existing code. Either a past project, or more likely the project they are currently working on. They quickly discover that these projects are (from their perspective) virtually impossible to test.

The reason why is obvious: 1) they are new to writing automated tests, which is a tricky enough skill to master, but more importantly 2) they are new to writing testable code, which is actually a significantly bigger challenge than writing tests. The chances that their old projects contain testable code are virtually nil.

So, our protagonist starts their journey to TDD with an obstacle that is actually beyond the ability of some seasoned TDD practitioners. Think of a video game with the ultimate Boss level as the very first thing you do, before you learn the controls or understand much of anything about the game. That right there is a game that you’ll probably throw in the bin after about an hour, if you last that long.

In my research for my TDD session at #dddsw I found lots of interesting questions, blogs and articles, most of which I didn’t have time to fully discuss during the session, but I’ll point you in the direction of some of them over the next few days/weeks.

The first is a Stack Overflow discussion on how to write automated tests for a complicated function with lots of dependencies.

It’s notable because it’s not an uncommon problem, and the answers cover the usual spectrum from practical solutions to pragmatic advice to questioning the validity of the original question etc.

It’s also notable because Uncle Bob contributes an interesting Step By Step on how to add tests to existing code (in a kludgy way) and then how to refactor both the original code and the kludgy test handling.

Original Thread Here

Enjoy.

DDD South West 3

It’s 11.30, I’ve got work in the morning, I’m knackered and I’m still sitting in front of a computer, having just tried out some things that I picked up at DDD South West over the weekend. That’s the effect that participating in the DDD community can have on you. It recharges those tech/geek batteries, and for a few hours or a few days programming feels a little like it did when I was 15.

If there’s a downside to being a speaker it’s that the preparation for speaking sucks up a lot of time (at least it does if you need to prepare like I do). So things slide by un-researched or not studied properly. Having spoken at DDD Scotland last month and now DDD South West, with two different sessions, I’ve been heavily focused on DDD for about 4 months, while things I’d like to look at more closely like Threading and Rx for example haven’t gotten the attention they deserve.

The flip-side of this little Faustian pact is that when I get to DDD I make up for lost time, either in sessions or more often in the informal chats and demos (un-sessions as Paul Stack calls them) that spring up around the conference itself.

As usual on the day I didn’t get to see many of the other sessions. In fact I only saw Colin Mackay’s session on Parrallellization (I, like Colin hope I spelled that right, probably didn’t, but don’t really care.).

My latest project leans heavily on Threading but uses the older syntax. I haven’t had a chance to really look at the new (not so new any more) threading syntax, so this was a session I didn’t want to miss. It didn’t disappoint.

Within a few minutes of flopping into the recliner after the long trip home the laptop was open, and I was trying out some of the stuff Colin had shown us. I’m really blown away. I had touched briefly on the issue of Testing Threaded code in my TDD session and this is an area that I’m going to be spending a lot of time on over the next year or two.

I reworked my sample code using the new syntax and not only did the intent of the code become clearer, I removed all of the locks I was using (probably incorrectly) and at the same time removed an intermittent race condition bug that was in the code.

Apart from Colin’s session the only others that I attended were my own. I was on deck twice thanks to my TDD talk getting voted onto the repeat track. In a quirk of scheduling I presented on the Repeat Track before I presented in my originally scheduled spot.

The repeat track was presented in quite a small room. It got the day off to a nice start to be presenting to a full room, we had to turn a few people away and ask them to come to the main session later in the day.

The main session was in the Track 1 room, which is a pretty big room. That also felt pretty full, which suggests that there’s a real appetite for TDD. That’s odd, because in many ways things have moved beyond TDD and the focus has switched to BDD and so on. I wonder if the opinion shapers in the industry have perhaps gotten bored of something and moved on just when the mass of developers are only just catching up and really getting interested. I think I might try and stick to my philosophy of speaking about out of date topics and see if there continues to be a market for it.

I did have a slight fear that a TDD session could end in tears (if not mine then someone else’s). At an event like DDD there’s a strong chance that the audience will include a significant number of people who fall into one of the following categories.

1. People who know more about TDD than I do, but come along for a look.
2. People who hold diametrically opposed views to mine, regardless of my views.
3. People who haven’t really gotten into TDD but have already been turned off by the hype.

It can be hard to include anything in such a session that will see any of these people come away with anything of use.

I’m sure there were some in the audience who got very little from the session, there always will be, and I hope they left feedback so I can include some things for them. But I was delighted to receive some very kind comments from a number of people who I really didn’t expect to have benefited from the session.

Funnily enough I speak at these conferences in the hopes that I’ll energize a few developers to try out things like TDD, and I come away energized myself. Strange how that works.

The un-sessions were as always brilliant. An informal demo of TeamCity by Paul Stack addressed a few issues I’ve been having, and I got to show him my favourite simple trick – the miracle of creating a text file with a UDL extension and then double clicking on the icon.

A few people approached me with questions and suggestions about my session which I’m going to incorporate before I release the slides and code.

At the Geek Dinner on Saturday night I had endless fascinating conversations, from a chat about recruitment with Tim Gaunt who’s responsible for the brilliant http://borninthebarn.co.uk/ to picking Colin Mackay’s brain a little further on threading, to the usual round table moaning about how you can never have enough monitors to the best ways of encouraging community participation and on and on.

Retiring to the pub led to a fantastic chat with Graeme Foster about where technology will be in our kids’ lifetimes, and their kids’ lifetimes, and where we’ve come from (ZX81s and the BBC Micro). We talked for hours and never got around to talking about the thing we met up to talk about – Domain Driven Design.

From pub back to hotel, where an odd circle of chairs led to a long discussion on topics as wide ranging as The Rapture and Creationism, 80’s Rock Legends and how to impersonate them, Restaurants with strange bathrooms, Oldest, Youngest and most interesting birthdays, most famous namesakes, Sci-Fi TV shows, and various attempts at Pun inspired humour that resulted in threats of physical violence, all leading to an analysis of the lyrics of the Spitting Image Chicken Song, with particular emphasis on the best order in which to accomplish the tasks outlined in said song.

The party broke up after 1am, and I went back to the room pondering the notion of an 8am taxi to the airport.

This morning (Sunday) as I slinked down to the lobby with a few minutes to spare before the Taxi arrived, I found a handful of DDD’ers getting through breakfast.

If you are not already part of this Community then sort that out at DDD North in Sunderland on October 8th. If you’re a software developer and you are not participating in some way in the wider development community, you really are missing all the best bits.

Thanks to everyone who grafted to make DDD South West go smoothly. I hope to be back in Bristol this time next year with an even better session.