Specialisation is killing you

I once worked on a project for a large company that aimed to implement an A/B testing solution on their website. The software they selected made life really simple: we just needed to add a link to a JavaScript library on each webpage. In my digital agency days in the UK this would have been a straightforward change, and I imagine we could have turned it around in as little as a week. In this case, however, the project took nearly nine months, involved dozens of different stakeholders, and cost hundreds of thousands of dollars.

Sound familiar? It should. This scenario repeats itself in large organisations all over the world, and one of the main causes, in my opinion, is our trend towards specialisation. In hindsight, the major reason that project was so much more complicated than it needed to be was the sheer number of different specialists who needed to be involved to get it over the line.


Most large companies have implemented practice-based management structures within their technology organisation with the aim of ensuring that individual competencies (e.g. analysis, testing, architecture, development, etc.) perform to a high standard. Practice managers are generally responsible for: hiring the right people; managing the performance of those people; managing career paths; and quality-assuring deliverables related to the practice. While specialisation can help to quality-assure individual technology functions, these structures also present some challenges.

In this post I’m going to describe some of the disadvantages of highly specialised workforces for technology organisations, and some of the things that you can do to make agile methods work in environments of that nature.


Why did we specialise?

When people started building software, we didn’t have a large set of specialist roles. A team of programmers would work, potentially under the direction of a project manager, to build and test software that predominantly helped to automate internal processes or perform complex calculations.

Over time, as software increased in complexity and we moved towards complex, integrated environments, the need for additional roles began to emerge, and we saw roles like ‘architect’, ‘tester’, and ‘systems analyst’ appear. These roles were necessary as we moved from simple, software-focused methods towards plan-based methodologies such as SDLC/waterfall, where different phases of projects required different skills.

We then began to see the emergence of internet-based software, which allowed users to interact directly with what used to be back-office processes, increasing the complexity of software development further. Nowadays, to develop, operate and maintain enterprise software, technology initiatives need to be cognisant of a massive breadth of knowledge:

  • Program and project management;
  • Facilitation and requirements decomposition;
  • Business process and software modeling;
  • Change management;
  • Software platforms and/or programming languages including proprietary APIs;
  • Middleware, integration patterns, and messaging queues;
  • Configuration and build management frameworks and practices;
  • Hardware, operating systems and systems/network services;
  • Network infrastructure including firewalls and load balancing, XaaS cloud based infrastructure and services;
  • Accessibility and usability;
  • Security standards (e.g. OWASP), advisories, patches and updates;
  • Quality assurance;
  • Test automation frameworks and practices;
  • IT Service Management.

With such a broad knowledge base, and phase-based development methodologies, it made sense to specialise. Our original set of roles split further, and we saw the emergence of a huge range of technology titles. In large organisations, it’s not uncommon to have more than fifty different technology roles, each with responsibility for a narrow domain of knowledge.

Things have actually reached the ironic point where some developers are starting to refer to themselves as “full-stack developers” if they can work on more than one part of the code base. Once upon a time, a “full-stack developer” was just a developer.

In my experience specialisation has emerged to solve four key problems:

  1. To create career paths for people who want to specialise in a technology function;
  2. To handle functions that are highly technical and for which the knowledge base is constantly changing;
  3. To differentiate between the skills required in different stages of a phase based project;
  4. To fit in with HR practices that map roles to salary bands.

However, in a classic systems thinking paradox, by solving those four problems we have optimised components of the value stream while actually making the value stream as a whole far less efficient.


Why can specialisation be bad?

Another role I once held involved ensuring that dev, test and production environments were ready on a major financial services program. At one stage I was assigned the task of orchestrating the implementation of a messaging queue between two testing platforms.

Provisioning a messaging queue should be a relatively simple task. It requires a hole to be punched in the internal firewall to allow connectivity, a queue to be provisioned on the server, and authentication to be established between the two systems. It’s a bit fiddly, but not a massive job; I thought at the time we’d have it done in a week or two.

After two weeks of elapsed time and a sore head from butting it against walls, all I had achieved was documenting the process for provisioning a queue and identifying all the people that were involved. One and a half weeks after that I had arranged for work codes so that I could actually get those people to do work, and finally, after an elapsed time of about six weeks, work commenced.

To get a simple queue built I ended up needing to orchestrate an outcome between:

  • The infrastructure, security and integration architects;
  • The solution designer for both systems;
  • Two application managers;
  • An MQ configuration specialist; and,
  • A firewall specialist.

It took over two months of elapsed time and a lot of orchestration effort, but the actual value-add work involved was probably less than a day in total. These kinds of situations plague large organisations and are the major reason why business stakeholders get frustrated with technology:

  1. Simple tasks take weeks or months and are exorbitantly expensive;
  2. More time is spent in coordination and hand-offs than in value added work;
  3. The heroes in the organisation become project managers who can orchestrate a path through the mess, usually by bending organisational policy to get things done;
  4. Technologists who break rules to deliver value, no matter how illogical those rules are in context, are referred to as ‘cowboys’.

Specialisation is bad because it narrows people’s context and forces them to think of themselves as a step in a process rather than as a participant in a value stream. It is based on the misguided premise that a group of individuals working on separate aspects of a solution and bringing them together will be more effective than a team of people working together on an outcome.

Specialist roles also build a misperception that the role in itself equals competency. It is not uncommon that there are people in a team who have significantly more competency in a knowledge area than the specialist who has been assigned that responsibility via their role.


When does specialisation make sense?

Specialisation is not ALWAYS a bad thing and there are some functions that genuinely require it. There are some technology tasks that are so complicated and messy that only someone with a lot of experience can work with them in a safe manner. Some regulatory and security tasks require specialisation because the knowledge base is always changing and the specialist must keep one eye on the future and one on the task.

However, you need to be cognisant of the costs of specialisation. Whenever you mandate that a specialist perform a task, you are necessarily building hand-offs (and hence queues) into your delivery process. Queues inhibit the delivery of value and force us down the orchestration path.


How do I leverage specialisation in agile environments?

Most agile teams gradually move towards a generalised model where team members are as cross-functional as possible. For example, all members of an agile team should be able to help out with simple analysis, development and testing tasks if the team’s work queue dictates it. However, there are still parts of technology development that are more suited to specialisation (see above), and it would be inefficient to build that capability into every team.

Don Reinertsen, in his seminal tome “The Principles of Product Development Flow”, describes a couple of great methods for efficiently working with specialists. F21, The Principle of Work Matching, describes the use of sequencing to match jobs to appropriate resources, while F29, The Principle of Resource Centralisation, argues that correctly managed pools of centralised resources can actually reduce queues.

Don argues that where specialised skills are required for your agile teams, it might make more sense to build a centralised pool of specialists providing ‘air support’ for the teams on the ground. The specialist teams can still work on more generalist tasks if capacity allows, but work for those teams is sequenced to ensure that their primary focus is on the tasks for which their specialisation is required.
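Reinertsen’s centralisation argument can be sketched with a toy queueing simulation: the same stream of specialist tasks is served either by two dedicated specialists (each task pre-assigned to one of them) or by a shared pool of the same two people. The arrival and service rates below are invented purely for illustration, not taken from the book:

```python
import random

def mean_wait(arrivals, services, servers, assign=None):
    """Mean time tasks spend waiting, FCFS.

    If `assign` is given, task i may only be served by server assign[i]
    (dedicated specialists); otherwise any free server may take it (pool).
    """
    free_at = [0.0] * servers
    waits = []
    for i, (arrive, service) in enumerate(zip(arrivals, services)):
        if assign is None:
            k = min(range(servers), key=lambda j: free_at[j])  # pooled: earliest-free server
        else:
            k = assign[i]                                      # dedicated: pre-assigned server
        start = max(arrive, free_at[k])
        waits.append(start - arrive)
        free_at[k] = start + service
    return sum(waits) / len(waits)

random.seed(7)
n = 20000
t, arrivals = 0.0, []
for _ in range(n):
    t += random.expovariate(1.6)   # tasks arrive at ~1.6 per time unit (both servers ~80% busy)
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in range(n)]  # each specialist completes ~1 task per unit
routing = [random.randrange(2) for _ in range(n)]       # dedicated case: random pre-assignment

dedicated = mean_wait(arrivals, services, 2, routing)
pooled = mean_wait(arrivals, services, 2)
print(f"mean wait, dedicated specialists: {dedicated:.2f}")
print(f"mean wait, shared pool:           {pooled:.2f}")
```

With identical workloads, the pooled arrangement consistently produces a lower mean wait, because a task only queues when *both* specialists are busy rather than when ‘its’ specialist is busy. That is the intuition behind centralised specialist pools providing ‘air support’.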

Spotify also has a nice approach to specialisation. Spotify’s original engineering model (which has changed a lot over the years) implemented a guild-based approach: team members focus on collaborating to deliver value, but may also join guilds that enable them to specialise in certain technologies, frameworks or skillsets.

I’d recommend a mix of the two approaches. Create centralised resource pools where a collection of functions is large enough to form a role, while using a guild approach to ensure competency in practices, patterns, frameworks and platforms.


How do I cure my organisation of specialisation?

If you’ve read this far and you’re thinking that all is lost in your organisation, don’t fret. There are several small tweaks that you can make that will help you to overcome the challenges that specialisation introduces.

#1 Refocus your teams

If you’ve got a practice-based management structure (e.g. PMO Manager, BA Practice Manager, QA Practice Manager, etc.) then the first place to start is with your practice leads. They’ll need to work with teams to refocus everyone on value delivery rather than local excellence. Encourage your staff to recognise when someone in a different role needs their help, and conversely not to bristle when someone who doesn’t have the right title tries to help them out with what they’re doing.

#2 Identify tasks that require genuine specialisation

You’ll find that each competency-based practice is generally only responsible for a limited set of tasks that require true specialisation. For instance, in the analysis competency, facilitation is a specialist function, as is requirements decomposition, but some tasks (such as requirements management and traceability) are more administrative in nature. Make sure your practice leads aren’t reluctant to open those tasks up to people outside the practice if they’re becoming a bottleneck.

#3 Increase collaboration, reduce hand-offs

Focus on trying to work more collaboratively with other specialist competencies. The User Story artifact uses the “Card, Conversation, Confirmation” convention because User Stories are less about capturing a requirement and more about ensuring that knowledge is effectively transferred. Use visual techniques and descriptive models to increase collaboration between your specialist teams and try to reduce the amount of context lost when a document is ‘handed over the fence’.

#4 Reduce batch sizes

Reduce the size of the batches that you’re pushing through your delivery organisation. If you have to do project based delivery, make your projects smaller and deliver value incrementally rather than leaving it all to the end. Large batches have a tendency to institutionalise specialisation and hand-offs.

#5 Teach someone to fish

Agile techniques such as pairing are all about knowledge transfer and trying to reduce the need for specialists. They are proven to be effective, and don’t solely apply to software development. If you’ve got junior people in your team, or people in other competencies who are interested in learning new skills, let them pair with you on tasks.

For instance, let an interested developer/tester facilitate a requirements workshop with your support – it will not only increase their skillset and enable them to give you a chop out when you’re under the pump, it will also give them a better understanding of the challenges of your job.



  • David McCormack

    Excellent article. Two additional reasons for the move towards specialisation may be:
    1. To address the proliferation of technology stacks within the organisation.
    2. To allow different areas to be outsourced independently.

    • Andrew Blain

      Thanks for the feedback David, agree with those additional points.
