First steps into Docker & Jenkinsfiles

When we decided a year ago to automate our continuous deployment process (previously releases were quick but manual), I had the chance to look at Docker. It was more accessible on Linux: not all features were working on Windows and the documentation was sparse, but we gave it a go.

I started to implement a continuous deployment solution based on Jenkins and Docker. Our applications are coded in C# and we wanted to limit the number of code changes. The first task was to find a way to “Dockerize” our console applications run by the Task Scheduler. There is more than one way to do so, and if you are interested I have written a post with more details about it on my blog.

We also moved from standard Jenkins projects to Pipelines. This allowed us to keep all the build steps in the repository. We found that using the Shared Library was a big time saver: our build steps can be centralised in a specific repository that all the projects use. Once this main Jenkinsfile is created, we just have to set our project-related parameters in a small Jenkinsfile in each repository.
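To sketch what this looks like in practice, a per-repository Jenkinsfile can shrink to little more than a call into the Shared Library. The library name, the global step and its parameters below are hypothetical, not our actual setup:

```groovy
// Hypothetical per-repository Jenkinsfile: 'deployment-shared-library' and
// the 'buildDotNetApp' global step are illustrative names only.
@Library('deployment-shared-library') _

buildDotNetApp(
    projectName: 'MyConsoleApp',          // project-related parameters live here;
    solutionFile: 'MyConsoleApp.sln',     // all build steps live in the library
    deployEnvironments: [develop: 'Development', master: 'Production']
)
```

The shared library resolves which environment to deploy to from the branch name, so each repository only declares what is unique to it.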

You can find more details on how we implemented this parameterized Jenkinsfile on my blog.

With this we have a full Continuous Deployment solution. A merge in our Git repository will trigger the build on Jenkins, and the project will be built, tested and pushed to either a Development or a Production environment depending on the branch.


A look into Ping-Pong Programming

I have been doing solo .NET programming for a bit over four years, and most of the unit testing I’ve done was post-code unit testing.

I’m quite fresh when it comes to full TDD, as I only started experimenting with it in the last year or so, and I can’t say I’d had much successful experience with pair-programming before either.

But recently I have had the opportunity to try both of them full on, both at once! And because of the overall positive experience I’ve had with it, I decided to write a little about it.

Ping-pong programming, also known as P3, is a crossbreed between two extreme programming practices: pair-programming and test-driven development. It is most certainly not new, as one can find by asking Google about it. Personally, I was lucky enough to experience it first hand, as the method is employed occasionally at my workplace.

How to do it?

Optional: at the start of the session define a task to do, no more than an hour or two long.

The P3 iteration 


  1. Jane becomes the driver and writes a test. It is essential for the test to be as fine-grained as possible. Meanwhile Jim is the observer. Ping!
  2. After Jane has written her test, Jim takes the keyboard and writes an implementation that passes that test, but nothing more. Jane becomes the observer, and the roles are switched.
  3. Both programmers look into ways to refactor the code / clean it up / remove duplication.
  4. Jim continues by writing the next test, as small as possible. Pong!
  5. After Jim has written his test, Jane takes the keyboard and writes the minimum implementation that passes the test and nothing more. The roles are switched again.
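To make the rhythm concrete, here is a minimal sketch of one ping-pong round in C# with NUnit. The kata and all names are invented for illustration:

```csharp
using NUnit.Framework;

// Ping! Jane, as driver, writes the smallest failing test she can think of.
[TestFixture]
public class GreeterTests
{
    [Test]
    public void Greet_WithName_ReturnsHelloName()
    {
        Assert.AreEqual("Hello, Jane!", new Greeter().Greet("Jane"));
    }
}

// Pong! Jim takes the keyboard and writes just enough code to make the test
// pass, and nothing more. Both then look for refactoring opportunities
// before Jim writes the next test.
public class Greeter
{
    public string Greet(string name)
    {
        return "Hello, " + name + "!";
    }
}
```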


The whole process can be regarded as a game in which the programmers compete in writing the smallest unit tests and cooperate in refactoring the code, but be wary of competing in refactoring the code.

Of course the process can be tweaked to suit your own needs and preferences, or to put an emphasis on the factors you consider most important.

What flavour does TDD add to the mix?

TDD has lots of benefits, but the ones I have found most valuable so far are:

  • When I have a sizeable and/or complex problem to solve (plan, implement, test), I usually try to break down the problem into smaller units, then tackle those based on priority/dependencies.
    • The TDD automaton has the intrinsic property of automatically breaking down large problems into the smallest sub-problems you are comfortable with. The programmer can decide how fine-grained the sub-problems are.
  • Imagine you’re trying to implement a non-trivially sized problem. You might find yourself coding various bits of the problem as they cross your mind; but what about when you’re in the middle of a very important part of the implementation and suddenly realise that some small collateral aspect of the initial problem is also quite important, while not being directly linked to what you’re doing right now? Since you don’t want to interrupt the coding sprint you’re currently in, you might make a quick note about the collateral aspect to implement later, or you might rely on your memory, or you might just forget about it until long past the perfect time to implement it.
    • TDD focuses your attention on one micro-problem at a time, so every small piece of code you write gets your full attention sooner or later. If you come across that small collateral aspect, just write an empty failing test for it, and you make sure you won’t forget to do it.
  • When coding non-TDD, I sometimes found myself working on an implementation only to realise that I had gone ahead and written more code than was actually needed, sometimes because I just went with the flow, at other times because I didn’t have a sharply tuned perspective on what the requirements were.
    • TDD asks you to write the tests first, and the tests you write follow the specifications and requirements of the public behaviour you’re trying to implement. This helps tremendously with implementing only what is needed.
  • When refactoring a piece of code, how do you make sure your changes won’t break current behaviour while also adding new behaviour?
    • TDD is a safety net that supports refactoring by making sure that your previous tests covering existing behaviour pass, while the new tests covering your new behaviour also pass.
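The “empty failing test” trick mentioned above can be as simple as this sketch (the class and requirement are hypothetical):

```csharp
using NUnit.Framework;

[TestFixture]
public class BasketTests
{
    // A collateral requirement that surfaced mid-implementation: park it as
    // a deliberately failing test so the red bar won't let us forget it.
    [Test]
    public void Total_WithDiscountCode_ReducesTotal()
    {
        Assert.Fail("Not implemented yet: discount codes must reduce the total.");
    }
}
```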

What about pair-programming?

  • It can increase the amount of ideas / scenarios about things that should happen or that can go wrong with an implementation – directly improving code quality.
  • It can increase the chance of spotting very subtle problems in the code.
  • It is a means to share implementation knowledge about the task at hand, and general knowledge that is being passed between the two developers (of course this is highly dependent on the difference in experience between the developers).

Having done a lot of reading around the web, here are some collected things to consider when doing P3:

The driver

  • the driver is the master of the keyboard and mouse.
  • should prioritise minimising the time spent writing the implementation. In the short term this will prevent the observer from becoming bored and losing focus (as an extreme example, it may be very boring to watch someone write code for 30 minutes straight), so this has a positive impact on both the potential quality of the solution and on the time it takes to implement it.
  • can ask the observer for implementation ideas, better ways to solve a problem, alternative approaches, possible inputs that the code doesn’t cover, or clearer names for classes, members, methods or variables.
  • can write code to make his point, because sometimes… “5 lines of code are worth more than a thousand words”.
  • can do exploratory “see if it works” coding for ~1 minute max.

The observer

  • should not attempt to bully his way into taking control over the keyboard/mouse.
  • is the safety net. He should be on the lookout for potential bugs / larger issues / potential for simplifications and improvements.
  • should immediately bring up errors or unreadable code.
  • should bring up larger issues preferably after the driver finishes his code writing round.
  • should ideally, as needed, tell the driver the missing bit of information or API they need at the moment they need it.
  • when pointing out a problem in the code, should do so diplomatically to avoid offending the driver. Examples of good verbal interaction: “do you think this is a valid test?”, “does this look correct to you?”, “what’s next?”.

Solving arguments

  • It is natural for two programmers not to have the same ideas; this is the whole point of pair programming.
  • Avoid giving subjective reasons to support your arguments, like: “I’m right! Just because.” / “I’m always right” / “My seniority is higher so I don’t have to give reasons why I’m right”.
  • Always give objective reasons to support your arguments.
  • Decide what the priorities should be when writing code, is writing Clean Code a priority? Then the cleaner code wins. Is following Object Oriented Design a priority? Then the code that breaks OOD loses.
  • As a general rule of thumb, avoid making personal remarks about your pair-programming mate as a way to win arguments.

Why mix TDD and pair-programming after all?

Taking a look at the two practices, the first thing I notice is that some of the benefits of TDD can be further enhanced by pair programming, or some of its shortcomings alleviated. For example:

  • The finer-grained the breakdown of a problem, the more rigorous the testing performed on it, and the better the quality of the code.
    • If during pair programming the developers compete to write tests as small as possible, the effect is better code quality.
  • TDD focuses your attention on one small piece of code at a time (because you write a test for that small piece of code, then you write the implementation itself).
    • During pair programming, both developers focus their attention on each small piece of code, so fewer hard-to-spot problems get missed.
  • TDD includes a refactoring step in each cycle.
    • During pair programming, more ideas for a better refactoring might surface from the two developers.
  • In TDD, the more scenarios you test, the greater the benefits:
    • With pair programming, more ideas will surface for scenarios that might be tested.
  • In TDD there is still a risk that one has a predefined, incomplete opinion about how to solve a problem.
    • With pair programming, the resulting implementation is an interweaving of both programmers’ perspectives on the problem, thanks to the alternating test writing.
  • In TDD the test/implementation routine might sometimes become too… routine.
    • With pair programming, the routine is broken, because you might not expect the test your partner writes for you to code.


If done right, P3 has the potential to be a very useful technique to enhance code quality and spread implementation knowledge throughout a team, while being fun at the same time.

It may not be a technique for everyone; people who dislike pair-programming may be predisposed to dislike this technique too. That said, P3 does solve some of the annoying aspects sometimes encountered with pair-programming.

I could see how people that dislike pair programming might still enjoy P3.

Is P3 worth it? In my experience the human aspect is harder than the process aspect; sometimes this has worked, sometimes it hasn’t. I’m going to use my research and see if next time can go smoother. It’s certainly worth a try – see how it goes for you!

A Multiple Browser Testing Framework


This discussion will revolve around two of the main testing packages in use today: NUnit and Selenium. Both are fantastic tools, but they expect you to write tests against a single browser driver, for example a ChromeDriver. Multiple browser testing usually requires repeating the test for each browser you target. Here we solve this issue by describing in one place which browsers we would like to test, and each scenario is then automatically run in every browser specified.


For this demo I’ll be using a dummy MVC application which returns “Hello World!” when the homepage is called.

And the result:

Really we should have written the test before the content, but let’s let that one slide for now. We should have a test which confirms that when we reach the homepage of our application, “Hello World!” is returned.

The Automation Test

Here you can see we have something similar to a standard unit test format. We are opting for one class per page; this is not always suitable for automation tests, so consider your application beforehand.

We like to follow two other conventions when writing any tests. The method name should follow:


This has the advantages of A) being consistent and flexible and B) making it very easy to determine from the method name which use case is failing. The second convention is to structure all code into Arrange, Act and Assert blocks – the verbosity here is just for demo purposes, but following this structure allows any developer to clearly see which code is actually being tested.
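As a sketch of both conventions (our exact naming template isn’t reproduced here, and the URL and names are illustrative), the test might look like this:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class HomeIndexTests
{
    // Method name spells out the use case: Method_Scenario_ExpectedResult.
    [Test]
    public void Index_WhenHomepageIsRequested_DisplaysHelloWorld()
    {
        // Arrange
        IWebDriver driver = new ChromeDriver();

        try
        {
            // Act
            driver.Navigate().GoToUrl("http://localhost:5000/");
            var bodyText = driver.FindElement(By.TagName("body")).Text;

            // Assert
            Assert.That(bodyText, Does.Contain("Hello World!"));
        }
        finally
        {
            driver.Quit();
        }
    }
}
```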

The last thing to do is check our test passes.

Enhancing the Automation Test

Now that the easy part is out of the way, we can focus on bootstrapping multi-browser functionality. We’ll do this by using the magic of inheritance and method attributes.

Here we’ve made a MultiBrowserFixtureBase class which has three TestFixture attributes. The switch is needed simply because TestFixture attributes don’t currently let you pass in objects, and the cleanest solution is to switch on a string instead. This class on its own will generate a run of every test for each parameter, and by inheriting from it we also inherit the TestFixture attributes.
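A sketch of what such a base class can look like; the three browser choices and member names are assumptions, as the original listing isn’t reproduced here:

```csharp
using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Edge;
using OpenQA.Selenium.Firefox;

// Each [TestFixture] parameter produces a full run of every inherited test.
[TestFixture("Chrome")]
[TestFixture("Firefox")]
[TestFixture("Edge")]
public abstract class MultiBrowserFixtureBase
{
    private readonly string _browser;

    public IWebDriver Driver { get; private set; }

    // Derived test classes need a matching constructor that forwards the
    // parameter, e.g. public HomeIndexTests(string b) : base(b) { }
    protected MultiBrowserFixtureBase(string browser)
    {
        _browser = browser;
    }

    [SetUp]
    public void CreateDriver()
    {
        // TestFixture arguments must be compile-time constants, so we
        // switch on a string rather than passing in a driver object.
        switch (_browser)
        {
            case "Chrome":  Driver = new ChromeDriver();  break;
            case "Firefox": Driver = new FirefoxDriver(); break;
            case "Edge":    Driver = new EdgeDriver();    break;
            default: throw new ArgumentOutOfRangeException(nameof(_browser));
        }
    }

    [TearDown]
    public void QuitDriver()
    {
        Driver?.Quit();
    }
}
```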

In our HomeIndexTests we have inherited MultiBrowserFixtureBase, and NUnit now recognises three versions of each individual test, each passing in a different browser parameter. The last thing to do is change the hardcoded ChromeDriver in the SetUp to a more generic version. Let’s do that by using a custom driver which employs the decorator pattern.

Here we add the IWebDriver contract to the class and also take in an IWebDriver, delegating nearly all of the responsibility to the passed-in instance. However, we have added some extra functionality: we specify our base URL in one place and append the relativeUrl to it, and we wait for elements to load before trying to query the page.
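A minimal sketch of such a decorator. The class and member names are assumptions, and only a few members are shown; a full implementation would delegate every member of IWebDriver to the wrapped instance:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Decorator over IWebDriver: the base URL is specified once, relative URLs
// are appended to it, and element lookups wait for the page to load.
public class SiteWebDriver
{
    private readonly IWebDriver _inner;
    private readonly Uri _baseUrl;

    public SiteWebDriver(IWebDriver inner, string baseUrl)
    {
        _inner = inner;
        _baseUrl = new Uri(baseUrl);
    }

    public void GoTo(string relativeUrl)
    {
        _inner.Navigate().GoToUrl(new Uri(_baseUrl, relativeUrl));
    }

    public IWebElement FindElement(By by)
    {
        // Wait for the element to exist before querying the page.
        var wait = new WebDriverWait(_inner, TimeSpan.FromSeconds(10));
        return wait.Until(driver => driver.FindElement(by));
    }

    public void Quit()
    {
        _inner.Quit();
    }
}
```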

The final step is to replace our ChromeDriver in our main testing class.

Now we have a flow whereby each TestFixture on the base class populates the Driver property, and each test class passes that property into our decorator class, which delegates the responsibility.

Our code is now complete, and if you run your test suite you should see that every test you write runs in each browser we have specified on the base class.

Enhancing the Enhanced Automation Test

This feature is very powerful and can also be used for other abstractions like multi regional support.

Here we have simply added another parameter, countryCode, and added the additional test fixtures for GB, EU and US. Now each test we write produces 9 tests in total. This is somewhat extreme, but it is a great idea in places that are mission critical, such as a checkout page, and it demonstrates the power of good abstractions.
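Sketching the idea (the browser names are assumptions), the fixtures become browser/region pairs and the base constructor grows a second parameter:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;

// 3 browsers x 3 regions = 9 runs of every inherited test.
[TestFixture("Chrome", "GB")]
[TestFixture("Chrome", "EU")]
[TestFixture("Chrome", "US")]
[TestFixture("Firefox", "GB")]
// ...and so on for the remaining browser/region pairs.
public abstract class MultiBrowserFixtureBase
{
    public IWebDriver Driver { get; protected set; }

    protected MultiBrowserFixtureBase(string browser, string countryCode)
    {
        // Select the driver from 'browser' (as before) and point the
        // base URL at the regional site selected by 'countryCode'.
    }
}
```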

Implementing generic controllers in ASP.NET MVC and WebAPI – Part 2: binding to derived classes

In the first post of this series I covered the customisation of default MVC framework behaviour needed to reuse controllers for the Competition website. Apart from MVC controllers we need ApiControllers, as most of the front-end functionality of the website is implemented in React.js. And again, most of the basic functionality can be made reusable – for example, saving an Entry view model to the database.

When rendering the Competition Entry page, our generic MVC controller will pass an implementation of the BaseEntryViewModel base class. Derived view model classes share some basic properties (like Id, Email, etc.), but also have some specific ones – ImageUrl, Location, etc. Instead of writing an ApiController or action to save entries for each competition, we want to make this POST method reusable.
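For illustration, the view model hierarchy might look like this; the derived class and any property not mentioned in the text are hypothetical, and ViewModelClassName is the base member used later in this post to pick the concrete type during binding:

```csharp
public abstract class BaseEntryViewModel
{
    // Shared basic properties.
    public int Id { get; set; }
    public string Email { get; set; }

    // Carries the name of the concrete type so the custom parameter
    // binding can instantiate it via reflection.
    public string ViewModelClassName { get; set; }
}

// A competition-specific entry adds its own members on top.
public class PhotoEntryViewModel : BaseEntryViewModel
{
    public string ImageUrl { get; set; }
    public string Location { get; set; }
}
```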

The solution with inherited controllers that hold a custom implementation of the competition service works here as well. The biggest challenge, though, was parameter binding. If we just tell the action to expect a BaseEntryViewModel as a parameter, members of derived classes that are not part of the base class will simply be cut off during default parameter binding. So we need to customise this process.

Custom parameter binding for ViewModel classes in API Controllers

To accomplish this we need to do two things:

  • first, to write a custom implementation of HttpParameterBinding
  • second, to create a parameter binding attribute that hooks up our custom binding to a parameter

To extract the ViewModel data from the request body, we need to extend the HttpParameterBinding class, overriding the ExecuteBindingAsync method, which is responsible for parsing the request content. Using reflection we can create an instance of the required ViewModel, but in order to do that we need to know its name. A perhaps not the most elegant, but simple and reliable, solution was to pass the name as one of the base class members. Knowing the class name, we can extract the ViewModel type like this:

  .FirstOrDefault(t => t.Name == viewModelClassName.ToString())

Here is the full implementation of the EntryParameterBinding:

public class EntryParameterBinding : HttpParameterBinding
{
    public EntryParameterBinding(HttpParameterDescriptor descriptor)
        : base(descriptor)
    {
    }

    public override async Task ExecuteBindingAsync(
         ModelMetadataProvider metadataProvider,
         HttpActionContext actionContext,
         CancellationToken cancellationToken)
    {
        var binding = actionContext.ActionDescriptor.ActionBinding.ParameterBindings
             .FirstOrDefault(t => t is EntryParameterBinding);

        if (binding != null)
        {
            var contents = await ParseRequestContent(actionContext);

            var viewModelType = GetViewModelType(binding, contents);

            if (viewModelType != null)
            {
                var viewModel = Activator.CreateInstance(viewModelType);

                var properties = viewModelType.GetProperties()
                     .Where(t => t.CanWrite);

                foreach (var property in properties)
                {
                    if (contents.TryGetValue(property.Name, out var value))
                    {
                        var propType = property.PropertyType;
                        var parse = propType
                             .GetMethod("Parse", new[] { typeof(string) });
                        if (parse == null)
                        {
                            // No static Parse method: assign the raw string.
                            property.SetValue(viewModel, value);
                        }
                        else
                        {
                            var parsedItem = parse.Invoke(null, new object[] { value });
                            property.SetValue(viewModel, parsedItem);
                        }
                    }
                }

                SetValue(actionContext, viewModel);
            }
        }
    }

    private async Task<Dictionary<string, string>> ParseRequestContent(HttpActionContext actionContext)
    {
        var contentString = await actionContext.Request.Content.ReadAsStringAsync();

        return contentString.Split('&')
                .Select(parameter => parameter.Split('='))
                .ToDictionary(keyValue => keyValue[0].Split('.').Last(), keyValue => keyValue[1]);
    }

    private Type GetViewModelType(HttpParameterBinding binding, Dictionary<string, string> contents)
    {
        if (contents.TryGetValue("ViewModelClassName", out var viewModelClassName))
        {
            // Look the concrete view model type up by name in the assembly
            // that contains the BaseEntryViewModel parameter type.
            return binding.Descriptor.ParameterType.Assembly.GetTypes()
                .FirstOrDefault(t => t.Name == viewModelClassName.ToString());
        }

        throw new NotSupportedException("Not supported competition entry view model");
    }
}

In order to attach our custom binding to parameters that are expected to be implementations of BaseEntryViewModel, we need an attribute that tells the framework to use EntryParameterBinding instead of the default binding. In the attribute we just need to override the GetBinding method to return an EntryParameterBinding:

public sealed class EntryViewModelAttribute : ParameterBindingAttribute
{
    public override HttpParameterBinding GetBinding(HttpParameterDescriptor parameter)
    {
        if (parameter == null)
            throw new ArgumentException("Invalid parameter");

        return new EntryParameterBinding(parameter);
    }
}

Now all we need to do for the magic to happen is to apply the attribute like this:

public virtual async Task Post(int competitionId, [EntryViewModel] BaseEntryViewModel entryViewModel)

That concludes this small post series about my experience creating generic MVC and Web API controllers. You can see that with a little bit of extra work we made a website that can serve as a basis for our future competitions, and will save us from the biggest software evil: repetition. Hopefully the evolution of ASP.NET will bring these useful options out of the box, but for now we’re lucky to have enough points of extensibility to tailor the framework to our needs.

Implementing generic controllers in ASP.NET MVC and WebAPI – Part 1: Attribute-based routing with inheritance

Once in a while Mountain Warehouse runs competition campaigns like Britain’s Best Post-walk Pint or Lights, Camera, Backpack. Most of the time the principle is the same – users submit entries on the website (usually a picture with some description and basic info about themselves), then people vote and a winner is selected (sometimes it takes several rounds of voting). After building several independent websites that were very similar in functionality (but yet a bit different), we decided it would be worth creating a reusable framework for all future competition websites.

The look of each competition would be quite different, so each one has to have its own set of views. But the controller code can and should be reused. In the first part of this post series I will talk about MVC controllers, while the second one will be about the Web API part.

Essentially, reusing common functionality in the Competition framework comes down to creating a base controller that takes an ICompetitionService in the constructor. Each of the derived controllers (each competition has its own controller) passes in a specific implementation of the service. Pretty seamless, apart from a few tweaks that had to be made to the MVC framework configuration.

Inherited route attributes in MVC controllers

The tricky bit in inheriting controllers was inheriting the routing. Using attribute routing, how can we make sure the derived controllers use action routes from the base controller with custom route prefixes? To accomplish that you need to do two things – one is pretty straightforward, the other not so much.

First, we need to override DefaultDirectRouteProvider. The reason is that, if you look inside the implementation of this class, you will see that in the method that gets route attributes (GetActionRouteFactories), the GetCustomAttributes function of the actionDescriptor is called with the inherit parameter set to false. So all we need to do is override this method, like this:

public class InheritedDirectRouteProvider : DefaultDirectRouteProvider
{
    protected override IReadOnlyList<IDirectRouteFactory> GetActionRouteFactories(ActionDescriptor actionDescriptor)
    {
        // Same as the default implementation, but with 'inherit' set to true.
        return actionDescriptor.GetCustomAttributes(typeof(IDirectRouteFactory), true)
            .Cast<IDirectRouteFactory>()
            .ToArray();
    }
}

Now you need to register the extended route provider in the RouteConfig:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.MapMvcAttributeRoutes(new InheritedDirectRouteProvider());

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

You would think that would work, but here is the tricky bit, which I discovered by looking at the decompiled code as well. Apparently the standard RouteAttribute from System.Web.Mvc has its Inherited parameter set to false:

“Place on a controller or action to expose it directly via a route. When placed on a controller, it applies to actions that do not have any System.Web.Mvc.RouteAttribute’s on them.”

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true, Inherited = false)]
public sealed class RouteAttribute : Attribute, IDirectRouteFactory, IRouteInfoProvider

So I created my own Route attribute, the only difference being that the Inherited parameter is set to true. Here is how it looks:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true, Inherited = true)]
public class InheritedRouteAttribute : Attribute, IDirectRouteFactory, IRouteInfoProvider
{
    public string Name { get; set; }
    public int Order { get; set; }
    public string Template { get; private set; }

    public InheritedRouteAttribute() : this(string.Empty)
    {
    }

    public InheritedRouteAttribute(string template)
    {
        Template = template;
    }

    RouteEntry IDirectRouteFactory.CreateRoute(DirectRouteFactoryContext context)
    {
        var builder = context.CreateBuilder(this.Template);
        builder.Name = this.Name;
        builder.Order = this.Order;
        return builder.Build();
    }
}

The next step is applying the InheritedRoute attribute to actions in the base controller, like this:

public abstract class BaseCompetitionController : Controller
{
    private readonly ICompetitionService _competitionService;

    protected BaseCompetitionController(ICompetitionService competitionService)
    {
        _competitionService = competitionService;
    }

    // Route templates here are illustrative.
    [InheritedRoute]
    public virtual ActionResult Index()
    {
        var viewModel = _competitionService.GetLandingPageViewModel();
        return View(viewModel);
    }

    [InheritedRoute("enter")]
    public virtual ActionResult Enter()
    {
        var viewModel = _competitionService.GetEnterPageViewModel();
        return View(viewModel);
    }
}

And add RoutePrefix to derived controllers:

[RoutePrefix("camera")]
public class CameraController : BaseCompetitionController
{
    public CameraController(ICameraCompetitionService competitionService)
        : base(competitionService)
    {
    }

    // This Route attribute overrides the inherited route (template illustrative).
    [Route("index")]
    public override ActionResult Index()
    {
        return base.Index();
    }
}

You can also override the default base controller routes by setting a Route attribute on the child controller action – see the Index action in CameraController above.

In summary, two things you need to do to make attribute routing inheritable are:

  • Customise DefaultDirectRouteProvider, overriding GetActionRouteFactories method.
  • Create your own InheritedRouteAttribute with its Inherited parameter set to true.

In Part 2 I cover how to make reusable ApiControllers.

Technology @ Mountain Warehouse

My name is Rob Church and I’m the head of the development team at Mountain Warehouse. Over the last couple of years we’ve built up a great team here, and we’d all like to give something back to the developer community. In that time we’ve run hackathons, contributed to open source projects and researched new technologies. This year we’ll have dedicated time for these activities, and to practise our writing skills we’ll be sharing what we do on this blog.

We hope you’ll find our posts interesting or informative, and we’d love to hear anything you have to say back to us.