
Category: Leadership

We need to talk about salary

A massive pet peeve of mine is seeing roles that don’t advertise their salary clearly, and I’m aware there are plenty of others out there who share my annoyance. So why aren’t we as employers better about being open about pay?

I think ‘growing up’ in the civil service spoiled me when it comes to salaries: I always knew what grade a job was and what pay-scale that grade came with. When looking for new roles I could easily find out whether the pay-scale in a different department was higher or lower than in my current home department, and there was never a need for any awkward salary conversations as it was all out in the open.

While there were still some awkward gender pay disparities in parts of the civil service, for the most part it didn’t seem like much of an issue (*to me) as everyone working within the same role was generally on the same pay scale, and the slight differences in pay were usually down to length of time in role etc.

Interestingly, when I became a Deputy Director and was more involved in recruitment and job offers, I saw how complex the issue of salaries could be. While it was relatively easy to benchmark salaries against other departments (and competing with the private sector was never really going to happen), one thing I did spot was the difference in how we treated external vs. internal hires.

While internal promotions went automatically to the bottom of the pay band, external hires could negotiate higher salaries. This was based on the historic view that the private sector paid more, so we had to be willing to offer them more money to join the civil service. This obviously did not give parity to people and suggested we prioritised external experience over internal experience. Given that government was forging the path of user-centred design and product management, and being recognised around the world as the expert in digital innovation (at least in terms of service design and UCD etc.), it felt ridiculous to me that we weren’t being seen to value that internal expertise when it came to salary.

Thankfully I was able to get HR to agree to trial equal pay flexibility for internal and external hires, so that I could negotiate pay equally with all candidates no matter what industry they came from, and base pay decisions solely on their experience and performance during recruitment. This seemed to work really well; our staff satisfaction went up on the staff survey questions about pay and remuneration (almost unheard of) and it decreased the trend of civil servants constantly leaving for the private sector.

I did however notice quickly that male-presenting candidates were far more comfortable negotiating than female-presenting ones. To combat this, whenever I prepared to offer anyone a role I had a small table, prepared and agreed with HR, that showed the candidate’s score and where that put them in terms of the salary scale we had for the role. This meant that if the candidate wasn’t comfortable talking about salary, I could pitch them at the level I thought fair to ensure we were still giving parity to all hires.

Moving to the ‘private’ sector I now try to keep an eye on competitors’ salaries etc. to ensure I’m still offering a fair salary when hiring; but it’s actually really hard. It’s nigh on impossible to see salaries for other organisations without spending a lot of time doing detective work. Glassdoor and LinkedIn both try to show average salaries via job titles, but there’s so much variety in job roles, titles and responsibilities that it’s almost impossible to ensure parity.

Lots of companies don’t publicise the salary on their job adverts, and instead want candidates to apply and then discuss salary expectations as part of the early recruitment process. There are lots of conversations out there on /AskAManager, LinkedIn etc. with people asking for advice on how and when they should bring up salary in the recruitment process. We shouldn’t be making it this hard for people to get a fair wage. There was a thread on Twitter last week that highlighted how hard many women find it to know what salary they should be asking for when negotiating pay. This cloud of secrecy has been shown to widen pay disparity and discrimination. Women of colour in particular are shown to be hardest hit by the pay gap.

There are plenty of studies out there showing that by not talking about salaries openly we are widening the pay gap; and it’s not just hurting our drive for equality, it’s hitting productivity too. Elena Belogolovsky stated in a study for the Journal of Business and Psychology: “If I don’t know my co-worker’s pay, I assume that I might not be getting paid as much, and I decrease my performance. When people don’t know each other’s pay, they assume they are underpaid.”

So as an employer what can we do to improve pay transparency and parity?

  • Publicise the salary on all your job adverts. Ideally publicise a pay band to show the scale available to all candidates. Hell, if you want a gold star, publicise your pay scales on your company website, whether you’re hiring or not, and publicise them again on all your job adverts.
  • When you’re offering a candidate a role, don’t wait for the candidate to bring up salary, and don’t only negotiate if the candidate asks to; proactively discuss with them what salary you believe is fair and why.
  • If you’re hiring for multiple roles, keep track of what salary you have offered to each candidate and ensure all offers are fair and in line with people’s experience. I have previously gone back to a candidate who had already accepted a role to offer them a slightly higher salary, because when I reviewed all the offers at the end of the recruitment campaign I felt that, based on experience, she deserved more than we had initially agreed. The candidate was astonished, as she’d never had anyone feed back to her before that she was worth more than the minimum.

We all need to do better to ensure pay parity. We need to be open about pay and be willing to talk about salaries and what ‘good’ and ‘equal’ looks like.

Agile Delivery in a Waterfall procurement world

One of the things that has really become apparent since moving ‘supplier side’ is how much the procurement processes used by the public sector to tender work don’t facilitate agile delivery.

The process of bidding for work, certainly as an SME, is an industry in itself.

This month alone we’ve seen multiple Invitations to Tender (ITTs) on the Digital Marketplace for Discoveries etc., as many departments try to spend their budget before the end of the financial year.

The ITTs will mention user research and ask how suppliers will work to understand user needs or hire proper user researchers. But they will then state they only have 4 weeks or £60K to carry out the Discovery. While they will specify the need for user research, no user recruitment has been carried out to let the supplier hit the ground running, and it’s not possible for it to be carried out before the project starts (unless as a supplier you’re willing to do that for free; and even if you are, you’ve got less than a week to onboard your team, do any reading you need to do and complete user recruitment, which just isn’t feasible). And we regularly see requests for prototypes within that time as well.

This isn’t to say that short Discoveries etc. are impossible; if anything, COVID-19 has proved they are possible. However, in those cases the outcomes we were trying to deliver were understood by all, the problems we were trying to solve were very clear, and there was a fairly clear understanding of the user groups we’d need to be working with to carry out any research; all of this enabled the teams to move at pace.

But we all know the normal commercial rules were relaxed to support delivery of the urgent COVID-19 related services. Generally it’s rare for an ITT to clarify the problem the organisation is trying to solve, or the outcomes they are looking to achieve. Instead they tend to solely focus on delivering a Discovery or Alpha etc. The outcome is stated as completing the work in the timeframe in order to move to the next stage; not as a problem to solve with clear goals and scope.

We spend a lot of time submitting questions trying to get clarity on what outcomes the organisations are looking for, and sometimes it certainly feels like organisations are looking for someone to deliver them a Discovery solely because the GDS/Digital Service Standard says they need to do one. This means, if we’re not careful, halfway through the Discovery phase we’re still struggling to get stakeholders to agree the scope of the work and why we really do need to talk to that group of users over there that they’ve never spoken to before.

The GDS lifecycle

The GDS lifecycle and how it currently ties into procurement and funding (badly) means that organisations are reluctant to go back into Discovery or Alpha when they need to, because of how they have procured suppliers. If as a supplier you deliver a Discovery that finds there is no need to move into Alpha (because there are no user needs etc.), or midway through an Alpha you find the option you prioritised for your MVP no longer meets the needs as anticipated, clients still tend to view that money as ‘lost’ or ‘wasted’ rather than accepting the value in failing fast and stopping, or changing to do something that can add value. Even when clients do accept that, sometimes the procurement rules that brought you on to deliver a specific outcome mean your team now can’t pivot onto another piece of work, as that needs to be a new contract; either scenario could mean that as a supplier you lose the contract you spent so much time getting, because you did ‘the right thing’.

We regularly pick up work midway through the lifecycle; sometimes that’s because the previous supplier didn’t work out; sometimes it’s because they were only brought in to complete the Discovery or Alpha etc. and when it comes to re-tender, another supplier is now cheaper. That’s part and parcel of being a supplier; but I know from being ‘client side’ for so long how that can make it hard to manage corporate knowledge.

Equally, as a supplier, we rarely see things come out for procurement in Live, because there is the assumption that by Live most of the work is done; and yet if you follow the intent of the GDS lifecycle, rather than how it’s often interpreted, there should still be plenty of feature development, research etc. happening in Live.

This in turn is part of the reason we see so many services stuck in Public Beta. Services have been developed by or with suppliers who were only contracted to provide support until Beta. There is rarely funding available for further development in Live, but the knowledge and experience the suppliers provided has exited stage left, so it’s tricky for internal teams to pick up the work to move it into Live and continue development.

Most contracts specify ‘knowledge transfer’ (although sometimes it’s classed as a value add, when it really should be a fundamental requirement), but few are clear on what they are looking for. When we talk to clients about how they would like to manage that, or how we can get the balance right between delivering tangible outcomes and transferring knowledge, knowledge transfer is regularly de-scoped or de-prioritised. It ends up being seen as less important than getting a product or service ‘out there’; but once the service is out there, the funding for the supplier stops and the time to do any proper knowledge transfer is minimal at best. If not carefully managed, suppliers can end up handing over a load of documentation and code without completing the peer working, lunch and learns, and co-working workshops we’d wanted to happen.

Some departments and organisations have got much better at getting their commercial teams working hand in hand with their delivery teams, and we can always see those ITTs a mile off. It’s a pleasure to see them, as it makes it much easier for us as suppliers to provide a good response.

None of this is insurmountable, but we (both suppliers and commercial/procuring managers and delivery leads) need to get better at working together to look at how we procure and bid for work; ensuring we are clear on the outcomes we’re trying to achieve, and properly valuing ‘the value add’.

Agile at scale

What do we even mean when we talk about agile at scale and what are the most important elements to consider when trying to run agile at scale?

This is definitely one of those topics of conversation that goes around and around and never seems to get resolved or go away. What do we even mean when we talk about agile at scale? Do we mean scaling agile within a programme setting across multiple teams? Do we mean scaling it across multiple programmes? Or do we mean using it at scale within a whole organisation?

Whenever I’m asked about what I believe to be the most important elements in enabling successful delivery using agile, or using agile at scale, the number one thing I will always talk about isn’t the technology; it isn’t digital capability, or experience with the latest agile ways of working (although all those things are important and do obviously help); it’s the culture.

I’ve blogged before on how to change a culture and why it’s important to remember cultural change alongside business transformation; but more and more, especially when we’re talking about agile at scale, I’ve come to the conclusion that the culture of an organisation, and most especially the buy-in and support for agile ways of working at a leadership level, is the most fundamental element of being able to successfully scale agile.

Agile itself is sadly still one of those terms that is very marmite for some, especially in the senior leadership layers. They’ve seen agile projects fail; it seems like too much change for too little return; or it’s just something their digital/tech teams ‘do’ that they don’t feel the need to really engage with. GDS tells them they have to use it, so they do.

Which is where I think many of the agile at scale conversations stumble; it’s seen as a digital/tech problem, not an organisational one. This means that time and again, Service Owners, Programme Directors and agile delivery teams get stuck when trying to develop and get support for business cases that are trying to deliver holistic and meaningful change. We see it again and again. Agile delivery runs into waterfall funding and governance and gets stuck.

As a Service Owner or Programme Director trying to deliver a holistic service, how do you quantify in your business case the value this service and this approach to delivery will add? The obvious answer, hopefully, is using data and evidence to show the potential areas for investment and value it would add to both users and the business. But how do you get that data? Where from? How do you get senior leaders to understand it?

In organisations where agile at scale is a new concept, supporting senior leaders to understand why this matters isn’t easy. I often recommend new CDOs, CEOs or Chief Execs ‘buddy up’ with or shadow other senior folks who have been through this journey; folks like Darren Curry, Janet Hughes, Tom Read and Neil Couling, who understand why it matters, have been through (or are going through) this journey themselves in their organisations, and are able to share their experiences for both good and bad.

I will always give full praise to Alan Eccles CBE, who was previously The Public Guardian and chief exec of the Office of the Public Guardian, without whom the first Digital Exemplar, the LPA online, would never have gone live. Alan was always very honest that he wasn’t experienced or knowledgeable about agile or digital, but he was fully committed to making the OPG the first true Digital Exemplar agency, and to utilising everything digital, and agile ways of working, had to offer to transform the culture of the OPG and the services they delivered. If you want an example of what a true digital culture looks like, and how vocal and committed Alan was to making the OPG digital, just take a look at their blog, which goes all the way back to 2015 and maps the OPG’s digital journey.

Obviously, culture isn’t the only important factor when wanting to scale agile; the technology we use, the infrastructure and architecture we design and have in place, the skills of our people, the size of our teams and their capacity to deliver are also all important. But without the culture that encompasses and supports the teams, the ability to deliver at scale will always be a struggle.

The commitment at senior leadership level to change, and to embracing the possibilities and options that a digital culture and using agile at scale bring, permeates through the rest of the organisation. It encourages teams to work in the open, fostering collaboration and identifying common components and dependencies. It acknowledges that failure is ok, as long as we’re sharing the lessons we’ve learned and are constantly improving. It supports true multidisciplinary working and enables holistic service design by encouraging policy, operations and finance colleagues etc. to be part of the delivery teams. All of this in turn improves decision making and increases the speed and success of transformation programmes. Ultimately it empowers teams to work together to deliver; and that is how we scale agile.

Buy out, or buying in.


So, we’re ten days into Difrent being ‘bought’ by The Panoply group; people keep saying ‘congratulations’, ‘how’s it working for a new boss/company?’, ‘how do you feel about the buy out?’ So I thought it was a good opportunity to reflect on my thoughts about the acquisition.

And the answer is, I’m feeling pretty good actually. Honestly, so far there hasn’t really been much difference, other than the feeling that we’re part of a larger group of likeminded people.

Difrent is still Difrent, my boss is still my boss, my teams are still my teams and my peers are still my peers. What it does mean is that I now have more peers to talk to, share lessons learned with and bounce ideas off of. It means there are potentially more opportunities for our people to get involved in, bigger communities of practice to be part of and more slack channels to share pictures of my dog on.

A picture of a black dog
The Dog.

Chatting to some of my team yesterday, the best analogy I could think of for the Panoply group, and my understanding of how it works, is actually the Civil Service.

Within the Civil Service you ‘belong’ to a certain Government Department. I was at DWP for ten years, and even years after leaving there’s still a part of my brain that thinks of me as a DWP person, even though I worked in 5 different departments in my tenure in the public sector. But as a Civil Servant, although I was in DWP, I had opportunities within and across the Civil Service that others outside it didn’t. If you were at risk of redundancy in the CS, you got first dibs on other job opportunities, not just in your own department but across the Civil Service, and secondments and training opportunities across government departments were possible to further your career development.

Within the Group we have the opportunity for our people to go on loan to another company in the group, to further their career development, or because the project they have been working on has ended and we don’t have something else for them to immediately move onto, but someone else in the group does. This is a massive bonus for our people. It gives them so many more opportunities, and takes away some of the fear you get in agencies about ‘what happens when this contract ends?’ We’re already sorting out access to the communities of practice within the group and discussing opportunities for our people to do secondments in the future, and vice versa for others in the group to come work with us.

These options to be part of something bigger, to open up and share more opportunities for our people, and to work together with likeminded folks were among the reasons I voted for joining the group when I was asked my opinion. And it certainly doesn’t hurt on a selfish level that so many people I know, have worked with before and respect are also in the group; within the first 10 days I’ve already had fantastic welcome meetings with folks across the whole of the group.

My first ‘catch up/welcome to the party’ call with Ben Holliday felt like we were picking up right where we left off the last time we worked together, and Carolyn Manuel-Barkin and I have already put the world to rights and discussed all things Health related; all definitely good signs for me. And being part of the group is already paying off for us, with some joint opportunities with Not Binary and the fantastic folks there already looking very positive (honestly, David Carboni has not only the most relaxing voice, but is also really interesting, and if you get the chance to hear him talk tech and good team dynamics you should definitely take it). [EDIT: since posting this blog this morning, we have now won our first piece of work with Not Binary!]

Difrent is all about delivering outcomes that matter, about adding value and making a difference; and we’ve always been vocal about working better in partnership, both with our clients and other suppliers. Panoply will help us do that.

Do Civil Servants dream of woolly sheep?

The frustration of job descriptions and their lack of clarity.

One of the biggest and most regularly occurring complaints about the Civil Service (and the public sector as a whole) is its mismanagement of commercial contracts.

There are regularly headlines in the papers accusing Government Departments and the Civil Servants working in them of wasting public money, and there has been a drive over the last few years to improve commercial experience, especially within the Senior Civil Service.

When, a few years ago, my mentor at the time suggested leaving the public sector for a short while to gain some more commercial experience before going for any Director level roles, this seemed like a very smart idea. I would obviously need to provide evidence of my commercial experience to get any further promotions, and surely managing a couple of £500K or £1M contracts would not be enough, right?

Recently I’ve been working with my new mentor, focusing specifically on gaining more commercial knowledge, and last month he set me an exercise to look at some Director and above roles within the Digital and Transformation arena to see what level of commercial experience they were asking for, so that I could measure my current level of experience against what is being asked for.

You can therefore imagine my surprise when this month we got together to compare 4 senior level roles (2 at Director level and 2 at Director General level) and found that the amount of commercial experience requested in the job descriptions was decidedly woolly.

I really shouldn’t have been surprised; the Civil Service is famous for its woolly language, and policy and strategy documents are rarely written in simple English after all.

But rather than job specifications with specific language asking for “experience of successfully managing multiple multi-million pound contracts”, what is instead called for (if mentioned specifically at all) is “commercial acumen” or “a commercial mindset”, with no real definition of what level of acumen or experience is needed.

The Digital Infrastructure Director role at DCMS does mention commercial knowledge as part of the person specification, which it defines as “a commercial mindset, with experience in complex programmes and market facing delivery”.

And this one from MoD, for an Executive Director Service Delivery and Operations, calls for “Excellent commercial acumen with the ability to navigate complex governance arrangements in a highly scrutinised and regulated environment”.

Finally we have the recently published Government CDO role, which clearly mentions commercial responsibilities in the role description, but doesn’t actually demand any commercial experience in the person specification.

At which point, my question is, what level of Commercial acumen or experience do you actually want? What is a commercial mindset and how do you know if you have it? Why are we being so woolly at defining what is a fundamentally critical part of these roles?

How much is enough?

Recent DoS framework opportunities we have bid for or considered at Difrent have required suppliers to have experience of things like “a minimum of 2 contracts at the two million pound plus level” (as an example) to be able to bid for them.

That’s helpful: as Delivery Director I know exactly how many multi-million pound contracts we’ve delivered successfully and can immediately decide whether, as a company, it’s worth us putting time or effort into the bid submission. But as a person, I don’t have the same level of information needed to make a similar decision at a personal level.

The flip side of the argument is that the data suggests women especially won’t apply for roles that are “too specific” or have a long shopping list of demands, because women feel like they need to meet 75% of the person specification to apply. I agree with that wholeheartedly, but there’s a big difference between being far too specific and listing 12+ essential criteria for a role, and being so unspecific you’ve become decidedly generic.

Especially when, as multiple studies have shown, job titles in the public digital sector are often meaningless. Very rarely in the public sector does a job actually do what it says on the tin. What a Service Manager is in one Department can be very different in another.

If I’m applying for an Infrastructure role I would expect the person specification to ask for Infrastructure experience. If I’m applying for a comms role, I expect to be asked for some level of comms experience; and I would expect some hint as to how much experience is enough.

So why, when we are looking at Senior/Director level roles in the Civil Service, are we not helping candidates understand what level of commercial experience is ‘enough’? Private sector job adverts for similar level roles tend to be much more specific about the amount of contract level experience and knowledge needed, so why is the public sector being so woolly in its language?

Woolly enough for you?

*If you don’t get the blog title, I’m sorry, it is very geeky, and a terrible Philip K. Dick reference. But it amused me.

Notes from some Digital Service Standard Assessors on the Beta Assessment

The Beta Assessment is probably the one I get the most questions about; primarily, “when do we actually go for our Beta Assessment and what does it involve?”

Firstly what is an Assessment? Why do we assess products and services?

If you’ve never been to a Digital Service Standard Assessment it can be daunting; so I thought it might be useful to pull together some notes from a group of assessors, to show what we are looking for when we assess a service. 

Claire Harrison (Chief Architect at Homes England and leading Tech Assessor) and Gavin Elliot (Head of Design at DWP and a leading Design Assessor, you can find his blog here) helped me pull together some thoughts about what a good assessment looks like, and what we are specifically looking for when it comes to a Beta Assessment. 

I always describe a good assessment as the team telling the assessment panel a story. So, what we want to hear is:

  • What was the problem you were trying to solve?
  • Who are you solving this problem for? (who are your users?)
  • Why do you think this is a problem that needs solving? (What research have you done? Tell us about the users’ journey)
  • How did you decide to solve it and what options did you consider? (What analysis have you done?) 
  • How did you prove the option you chose was the right one? (How did you test this?)

One of the great things about the Service Manual is that it explains what each delivery phase should look like, and what the assessment team are considering at each assessment.

So what are we looking for at a Beta Assessment?

By the time it comes to your Beta Assessment, you should have been running your service for a little while now with a restricted number of users in a Private Beta. You should have real data you’ve gathered from real users who were invited to use your service, and your service should have iterated several times by now given all the things you have learnt. 

Before you are ready to move into Public Beta and roll your service out Nationally there are several things we want to check during an assessment. 

You need to prove you have considered the whole service for your users and have provided a joined up experience across all channels.

  • We don’t want to just hear about the ‘digital’ experience; we want to understand how you have/will provide a consistent and joined up experience across all channels.
  • Are there any paper or telephony elements to your service? How have you ensured that those users have received a consistent experience?
  • What changes have you made to the back end processes and how has this changed the user experience for any staff using the service?
  • Were there any policy or legislative constraints you had to deal with to ensure a joined up experience?
  • Has the scope of your MVP changed at all so far in Beta given the feedback you have received from users? 
  • Are there any changes you plan to implement in Public Beta?

As a Lead Assessor, this is where I always find that teams who have suffered from a lack of empowerment or from organisational silos may struggle.

If the team are only empowered to look at the Digital service, and have struggled to make any changes to the paper/ telephony or face to face channels due to siloed working in their Department between Digital and Ops (as an example) the Digital product will offer a very different experience to the rest of the service. 

As part of that discussion we will also want to understand how you have supported users who need help getting online; and what assisted digital support you are providing.

At previous assessments you should have had a plan for the support you intended to provide; you should now be able to talk through how you are putting that into action. This could be telephony support or a web chat function, but we want to ensure the service being offered is (or will be) consistent with the wider service experience and meets your users’ needs. We also want to understand how it’s being funded and how you plan to publish the accessibility information for your service.

We also expect by this point that you have run an accessibility audit and have carried out regular accessibility testing. It’s worth noting that if you don’t have anyone in house who is trained in running accessibility audits (we’re lucky in Difrent as we have a DAC assessor in house), then you will need to have factored in the time it takes to get an audit booked in and run, well before you think about your Beta Assessment.

Similarly, by the time you go for your Beta Assessment we would generally expect a Welsh language version of your service to be available; again, this needs to be planned well in advance as it can take time to get, and is not (or shouldn’t be) a last minute job! In my experience this is something a lot of teams forget to prioritise and plan for.

And finally, assuming you are planning to put your service on GOV.UK, there are a few things you’ll need to have agreed with the GOV.UK team at GDS before going into public beta.

Again, while it shouldn’t take long to get these things sorted with the GOV.UK team, they can sometimes have backlogs and as such it’s worth making sure you’ve planned in enough time to get this sorted. 

The other things we will want to hear about are how you’ve ensured your service is scalable and secure. How have you dealt with any technical constraints? 

The architecture and technology – Claire

From an architecture perspective, at the Beta phases I’m still interested in the design of the service, but I also have a focus on its implementation, and the provisions in place to support the sustainability of the service. My mantra is ‘end-to-end, top-to-bottom service architecture’!

 An obvious consideration in both the design and deployment of a service is that of security – how the solution conforms to industry, Government and legal standards, and how security is baked into a good technical design. With data, I want to understand the characteristics and lifecycle of data, are data identifiable, how is it collected, where is it stored, hosted, who has access to it, is it encrypted, if so when, where and how? I find it encouraging that in recent years there has been a shift in thinking not only about how to prevent security breaches but also how to recover from them.

Security is sometimes cited as a reason not to code in the open but in actual fact this is hardly ever the case. As services are assessed on this there needs to be a very good reason why code can’t be open. After all a key principle of GDS is reuse – in both directions – for example making use of common government platforms, and also publishing code for it to be used by others.

Government services such as Pay and Notify can help with some of a Technologist’s decisions and should be used as the default, as should open standards and open source technologies. When this isn’t the case I’m really interested in the selection and evaluation of the tools, systems, products and technologies that form part of the service design. This might include integration and interoperability, constraints in the technology space, vendor lock-in, route to procurement, total cost of ownership, alignment with internal and external skills, etc.
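To make the reuse point concrete, here is a minimal sketch (not taken from any real service) of what defaulting to a common platform can look like, using the GOV.UK Notify Python client to send an email rather than building bespoke email infrastructure. The API key, template ID, recipient and personalisation fields are placeholders made up for illustration.

    # Minimal sketch: reusing GOV.UK Notify instead of building email handling from scratch.
    # The API key, template ID, recipient and personalisation below are placeholders only.
    from notifications_python_client.notifications import NotificationsAPIClient

    notify_client = NotificationsAPIClient("replace-with-your-notify-api-key")

    response = notify_client.send_email_notification(
        email_address="applicant@example.com",                # placeholder recipient
        template_id="11111111-1111-1111-1111-111111111111",   # placeholder template set up in Notify
        personalisation={"reference": "ABC123"},               # fields your template expects
    )
    print(response["id"])  # Notify returns a notification id you can use to check delivery status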

Some useful advice would be to think about the technology choices as a collective – rather than piecemeal, as and when a particular tool or technology is needed. Yesterday I gave a peer review of a solution under development where one tool had been deployed but in isolation, and not as part of an evaluation of the full technology stack. This meant that there were integration problems as new technologies were added to the stack. 

The way that a service evolves is really important too along with the measures in place to support its growth. Cloud based solutions help take care of some of the more traditional scalability and capacity issues and I’m interested in understanding the designs around these, as well as any other mitigations in place to help assure availability of a service. As part of the Beta assessment, the team will need to show the plan to deal with the event of the service being taken temporarily offline – detail such as strategies for dealing with incidents that impact availability, and the strategy to recover from downtime and how these have been tested.
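As one small, concrete example of the kind of mitigation a panel might ask about, the sketch below shows a health check endpoint that a load balancer or monitoring tool can poll so that unhealthy instances are spotted and replaced automatically. This is illustrative only; the route name and the dependency check are assumptions, not details of any particular service.

    # Illustrative sketch: a health check endpoint that monitoring or a load balancer
    # can poll to detect unhealthy instances and take them out of rotation.
    # The route name and the database check are assumptions made for the example.
    from flask import Flask, jsonify

    app = Flask(__name__)

    def database_is_reachable() -> bool:
        # In a real service this would ping the database or another critical dependency.
        return True

    @app.route("/healthcheck")
    def healthcheck():
        if database_is_reachable():
            return jsonify(status="ok"), 200
        # A 503 tells the load balancer to stop sending traffic to this instance.
        return jsonify(status="degraded"), 503

    if __name__ == "__main__":
        app.run()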

Although a GDS Beta assessment focuses on a specific service, it goes without saying that a good Technologist will be mindful of how the service they’ve architected impacts the enterprise architecture and vice versa. For example, if a new service is built with microservices and also introduces an increased volume and velocity of data, does the network need to be strengthened to meet the increase in communications traversing it?

Legacy technology (as well as legacy ‘Commercials’ and ways of working) is always on my mind. Obviously during an assessment a team can show how they address legacy in the scope of that particular service, be it some form of integration with legacy or applying the strangler pattern, but organisations really need to put as much effort into dealing with legacy as they put into new digital services. Furthermore, they need to think about how to avoid creating the ‘legacy systems of the future’ by ensuring the sustainability of their service, be it from a technical, financial or resource perspective. I appreciate this isn’t always easy! However, I do believe that GDS should and will put much more scrutiny on organisations’ plans to address legacy issues.
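For anyone unfamiliar with the strangler pattern mentioned above, the hypothetical sketch below shows the basic idea: a thin routing layer sends requests for journeys that have been rebuilt to the new service, and everything else falls through to the legacy system, so legacy can be retired one route at a time. The hostnames and the list of migrated routes are made up purely for illustration.

    # Hypothetical sketch of the strangler pattern: route migrated paths to the new
    # service and let everything else fall through to legacy, retiring it gradually.
    # The hostnames and the set of migrated routes are invented for illustration.
    MIGRATED_ROUTES = {"/apply", "/check-status"}

    NEW_SERVICE = "https://new-service.example.internal"
    LEGACY_SERVICE = "https://legacy.example.internal"

    def choose_backend(path: str) -> str:
        """Return the backend that should handle this request path."""
        # As more journeys are rebuilt, their paths move into MIGRATED_ROUTES,
        # until the legacy system handles nothing and can be switched off.
        if any(path == route or path.startswith(route + "/") for route in MIGRATED_ROUTES):
            return NEW_SERVICE
        return LEGACY_SERVICE

    assert choose_backend("/apply") == NEW_SERVICE
    assert choose_backend("/contact-us") == LEGACY_SERVICE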

One final point from me is that teams should embrace an assessment. Clearly the focus is on passing an assessment but regardless of the outcome there’s lots of value in gaining that feedback. It’s far better to get constructive feedback during the assessment stages rather than having to deal with disappointed stakeholders further down the line, and probably having to spend more time and money to strengthen or redesign the technical architecture.

How do you decide when to go for your Beta Assessment?

Many services (for both good and bad reasons) have struggled with the MVP concept; as such, the journey to get their MVP rolled out nationally has taken a long time, and contained more features and functionality than teams might initially have imagined.

This can make it very hard to decide when you should go for an Assessment to move from Private to Public Beta. If your service is going to be rolled out to millions of people; or has a large number of user groups with very different needs; it can be hard to decide what functionality is needed in Private Beta vs. Public Beta or what can be saved until Live and rolled out as additional functionality. 

The other thing to consider is: what does your rollout plan actually look like? Are you able to go national with the service once you’ve tested with a few hundred people from each user group? Or, as is more common with large services like NHS Jobs, where you are replacing an older service, does the service need to be rolled out in a very set way? If so, you might need to keep inviting users in until full rollout is almost complete, making it hard to judge when the right time for your Beta Assessment is.

There is no right or wrong answer here, the main thing to consider is that you will need to understand all of the above before you can roll your service out nationally, and be able to tell that story to the panel successfully. 

This is because theoretically most of the heavy lifting is done in Private Beta, and once you have rolled out your service into Public Beta, the main things left to test are whether your service scaled and worked as you anticipated. Admittedly this (combined with a confusion about the scope of an MVP) is why most Services never actually bother with their Live Assessment. For most Services, once you’re in Public Beta the hard work has been done; there’s nothing more to do, so why bother with a Live Assessment? But that’s an entirely different blog! 

Reviewing the service together.

 

So, what is a Service Owner?

Before I discuss what (in my view) a Service Owner is, a brief history lesson into the role might be useful.

The role of the ‘Service Manager’ was seen as critically important to the success of a product, and it was defined as a G6 (Grade 6) manager who had responsibility for the end to end service AND was the person who led the team through their Service Standard assessments.

Now let’s think about this a bit. Back when the GDS Service Standard and the Service Manual first came into creation, they were specifically created for, and with, GOV.UK in mind. As such, this definition of the role made some sense. GOV.UK was (relatively) small and simple, and one person could ‘own’ the end to end service.

The problem came about when the Service Standards were rolled out wider than GDS itself. DWP is a good example of where this role didn’t work.

The Service Manual describes a service as the holistic experience for a user; so it’s not just a Digital Product, it’s the telephony Service that sits alongside it, the back end systems that support it, the Operational processes that staff use to deliver the service daily, along with the budget that pays for it all. Universal Credit is a service, State Pension is a service; and both of these services are, to put it bluntly, HUGE.

Neil Couling is a lovely bloke, who works really hard, and has the unenviable task of having overarching responsibility for Universal Credit. He’s also a Director General. While he knows A LOT about the service, it is very unlikely that he would know the full history of every design iteration and user research session the Service went through, or be able to talk in detail about the tech stack and its resilience etc.; and even if he did, he certainly would be very unlikely to have the 4 hours spare to sit in the various GDS assessments UC went through.

This led to us (in DWP) phasing out the role and splitting the responsibilities in two, between the newly created role of Product Lead and the Service Owner. The Product Lead did most of the work of the Service Manager (in terms of GDS assessments etc.), but they didn’t have responsibility for the end to end service; that sat with the Service Owner. The Service Owner was generally a Director General (and also the SRO), whose responsibilities we clarified when it came to Digital Services.

A few years ago, Ross (the then Head of Product and Service Management at GDS) and I, along with a few others, had a lot of conversations about the role of the Service Manager; and why in departments like DWP, the role did not work, and what we were doing instead.

At the time there was agreement in many of the Departments outside of GDS that the Service Manager role wasn’t working how it had been intended, and was instead causing confusion and, in some cases, creating additional unnecessary hierarchy. The main problem was, as it was in DWP, that the breadth of the role was too big for anyone below SCS, which meant we were instead ending up with Service Managers who were only responsible for the digital elements of the service (and often reported to a Digital Director), with all non-digital elements of the service sitting under a Director outside of Digital, which was creating more division and confusion.

As such, the Service Manual and the newly created DDaT framework were changed to incorporate the role of the Service Owner instead of the Service Manager, with the suggestion that this role should be an SCS level role. However, because the SCS was outside of the DDaT framework, the extent to which the role could be defined or specified was rather limited, and it instead became more of a suggestion rather than a clearly defined requirement.

The latest version of the DDaT framework has interestingly removed the suggestion that the role should be an SCS role, and any reference to the crossover with the responsibilities of the SRO, and now makes the role sound much more ‘middle management’ again, although it does still specify ownership of the end to end service.

Ok, so what should a Service Owner be?

When we talked about the role a few years ago, the intention was very much to define how the traditional role of the SRO joined up closer to the agile/digital/user centred design world; in order to create holistic joined up services.

Below is (at least my understanding of) what we intended the role to be:

  • They should have end to end responsibility for the holistic service.
  • They should understand and have overall responsibility for the scope of all products within the service.
  • They should have responsibility for agreeing the overall metrics for their service and ensuring they are met.
  • They should have responsibility for the overall budget for their service (and the products within it).
  • They should understand the high level needs of their users, and what teams are doing to meet their needs.
  • They should have an understanding of (and have agreed) the high level priorities within the service. (Which Product needs to be delivered first? Which has the most urgent resource needs? etc.)
  • They should be working with the Product/Delivery/Design leads within their Products as much as the Operational leads etc., to empower them to make decisions, and to understand the decisions that have been made.
  • They should be encouraging and supporting cross functional working to ensure all elements of a service work together holistically.
  • They should be fully aware of any political/strategy decisions or issues that may impact their users and the service, and be working with their teams to ensure those are understood, to minimise risks.
  • They should understand how Agile/Waterfall and any other change methodologies work to deliver change. And how to best support their teams no matter which methodology is being used.

In this way the role of the Service Owner would add clear value to the Product teams, without adding in unnecessary hierarchy. They would support and enable the development of a holistic service, bringing together all the functions a service would need to be able to deliver and meet user needs.

Whether they are an SCS person or not is irrelevant; the important thing is that they have the knowledge and ability to make decisions that affect the whole service, that they have overall responsibility for ensuring users’ needs are met, that they can ensure all the products within the service work together, and that their teams are empowered to deliver the right outcomes.

Doing your best vs. achieving the goal

The Agile Prime Directive states “Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”

This is a wonderful principle to have during Retrospectives, in order to avoid getting stuck in the blame game, and to instead focus on results.

However, let’s be very clear: the Agile Prime Directive isn’t an excuse for not delivering. If you miss your sprint goals every sprint, or your team constantly suffers from scope creep, then you need to look a bit deeper to understand what is going wrong.

Even if you agree every individual did the best job they could, as a team are you working well together? Are you understanding your team’s velocity as best you can? Do you all understand and agree the scope of the project or your sprint goals? Have you got the right mix of individuals and roles in the team to deliver? Are your team and the individuals in it empowered to make decisions?

If the answer to any of these questions is no, this could be impacting your ability to deliver.

The Agile Prime Directive is a good mindset to start conversations in, as we want to create safe and supportive environments for our teams in order to help them achieve their full potential, and recognising that everyone has room to improve is an important part of that. Nowhere in the Agile Prime Directive does it state everyone is perfect, just that they did their best given the skills/ ability and knowledge they had at the time.

However, while it is a good mindset to start with, unfortunately we all know it’s not 100% true. The Agile Prime Directive itself has issues; while it’s a lovely philosophy, and its intent is good, as a manager, and as a human, I have to admit even to myself that I haven’t ‘done my best’ every single day.

While most of the time we do all try our best and do our best, everyone has bad days. Occasionally on a team there will be someone who isn’t (for whatever reason) doing their best, or whose focus is elsewhere. External life will sometimes affect people’s work: the kids are ill, they have money worries, their relationship has just ended; these things happen. There will be people who don’t work well together; they can be cordial to each other, but don’t deliver their best when working together; personality clashes happen. We need to be able to spot and call out all these things, but we obviously need to do so in a positive and supportive way as much as possible.

Open and honest communication is the key to delivery; and having a culture of trust and empowerment is a critical part of that. We need to create environments where people feel supported and able to discuss issues and concerns, and we need to acknowledge that sometimes, for whatever reason, those issues do come down to an individual; and while I’m not suggesting we should ever name and shame in a retrospective, we need to be able to deal with that in an appropriate way.

We need to not only know and understand that even if everyone ‘is doing their best’, they can still do better; but that sometimes we need to be able to recognise and support those individuals and those teams who for whatever reason are not doing or achieving their best.

These issues can’t always just be ‘left to the retro’. While the retro is a great space to start to air and uncover issues, and to learn from what has gone well and what needs to improve, part of leading and managing teams is understanding which conversations need to come out of the retro and be dealt with alongside it.

If we are constantly missing sprint goals or suffering scope creep, we cannot simply say ‘but we are all doing our best’; that isn’t good enough. In this instance the participation award is not enough. We are here to deliver outcomes, not just do the best we can.

How to change a culture

When delivering digital or business transformation, one of the things that often gets overlooked is the cultural change needed to embed the transformation successfully.

There can be many reasons why this happens: because it hasn’t been considered at all, because it hasn’t been considered a priority, or simply because the people leading the transformation work don’t know how to do it.

In my experience the culture of an organisation can be the thing that makes or breaks a successful transformation programme or change initiative; if the culture doesn’t match or support the changes you are trying to make, then it’s unlikely that those changes will stick.

Below are some common causes of failure in my experience:

  • The scope of transformation programmes has been considered and set in silos, without considering how they fit within the wider strategy.
  • Decisions have been made at ‘the top’ and time hasn’t been spent getting staff engagement, feelings and feedback to ensure they understand why changes are being made.
  • Decisions have been made to change processes without validating why the existing processes exist or how the changes will impact people or processes.
  • Changes have been introduced without ensuring the organisation has the capability or capacity to cope.
  • Lack of empowerment to the transformation teams to make decisions.
  • When introducing agile or digital ways of working, corresponding changes to finance/ governance/ commercials haven’t been considered; increasing siloed working and inconsistencies.

Walk the talk:

Within Difrent we use tools like the Rich Picture and Wardley Mapping to help senior leaders understand their strategic priorities and clearly define the vision and strategy in a transparent and visual way. These help them agree the strategy and ‘sell it’ to the wider organisation and teams, in order to get engagement and understanding from everyone.

The Rich picture Difrent developed for the NHSBSA
The NHSBSA rich picture

In my experience this works especially well when the assumptions made by the SLT in the strategy and vision are tested with staff and teams before final versions are agreed, helping people understand why changes are being made and how they and their role fit into the picture.

This is especially important when it comes to the next step, which is developing things like your transformation roadmap and target operating model. These things can not be developed in isolation if you want your transformation to succeed.

People always have different views when it comes to priorities, and ways to solve problems. It is vitally important to engage people when setting priorities for work, so they understand why changes to a data warehouse or telephony service are being prioritised before the new email service or website they feel they have been waiting months for. Feedback is key to getting buy in.

A whiteboard with the word 'feedback' written in the middle with written notes around it
‘Feedback’

Equally, assumptions are often made at the top level about something being a priority based on process issues etc., without understanding why those processes existed in the first place, which can miss the complexity or impact of any potential changes. This then means that after changes have been delivered, people find the transformation hasn’t delivered what they needed, and workarounds and old ways of working return.

One thing I hear often within organisations is they want ‘an open and transparent culture’ but they don’t embody those principles when setting strategic or transformation priorities; as such people struggle to buy into the new culture as they don’t understand or agree with how decisions have been made.

Think wider:

While people are the most important thing when thinking about transformation and business change, and changing a culture; they are not the only thing we have to consider. The next step is processes.

Whatever has inspired an organisation to transform, transformation cannot be delivered within a silo; it is important to consider what changes may need to be made to things like finances, commercials and governance.

While these aren’t always obvious things to consider when delivering digital transformation, for example, they are vitally important in ensuring its success. One thing many organisations have found when changing their culture and introducing things like agile ways of working is that traditional governance and funding processes don’t easily support empowered teams or iterative working.

As such, it’s vitally important if you want transformation to succeed to not get trapped in siloed thinking, but instead take a holistic service approach to change; ensuring you understand the end to end implications to the changes you are looking to make.

Taking a leap:

Equally, when making changes to governance or culture, one thing I have found in my experience is that senior leaders, while they want to empower teams and bring in new ways of working, then struggle with how to ‘trust’ teams. Often as Senior Responsible Owners etc. they don’t want to be seen to be wasting money. As such they can enter a loop of needing changes ‘proving’ before they can fully embrace them; but by not being able to fully embrace the changes they aren’t demonstrating the culture they want, and teams then struggle themselves to embrace the changes, meaning the real value of the transformation is never realised.

A woman standing in front of a project wall
A project board full of post it notes

There is no easy answer to this, sometimes you just have to take that leap and trust your teams. If you have invested in building capability (be that through training or recruitment of external experts) then you have to trust them to know what they are doing. Not easy when talking about multi-million pound delivery programmes, but this is where having an iterative approach really can help. By introducing small changes to begin with, this can help build the ‘proof’ needed to be able to invest in bigger changes.

There is no one ‘thing’

When delivering transformation, and especially when trying to change culture, there is no quick answer, or no one single thing you can do to guarantee success. But by considering the changes you will be making holistically, getting input and feedback from staff and stakeholders, engaging them in the process and challenging yourselves to demonstrate the cultural changes you want to see, it is much more likely the transformation you are trying to deliver will succeed.

The word 'change'
Change.

Delivering in a crisis

One of the key personal aims I had when I joined Difrent, just over six months ago, was to work somewhere that would let me deliver stuff that matters, because I am passionate about people, and about delivery.

After 15 years right in the thick of some pioneering public sector work, combining high profile product delivery with developing digital capability for organisations like the Government Digital Service (GDS), the Department for Work and Pensions (DWP), the Care Quality Commission (CQC), and the Ministry of Defence (MoD), I was chafing at the speed (or lack thereof) of delivery in the public sector.

Parcel delivery

I hoped going agency side would remove some of that red tape, and let me get on and actually deliver; my aim when I started was to get a project delivered (to public beta at the very least) within my first year. Might seem like a simple ask, but in the 10 years I spent working in Digital, I’d only seen half a dozen services get into Live.

This is not because the projects failed (they are all still out there being used by people), but because once projects got into Beta, and real people could start using them, the impetus to go live got lost somewhat.

Six months into the job and things looked to be on track, with one service in Private Beta, another we are working on in Public Beta, plus a few Discoveries etc. underway; things were definitely moving quickly and my decision to move agency side felt justified. Delivery was happening.

And then Covid-19 hit.

Gov.uk COVID-19 website
A tablet displaying the Gov.uk COVID-19 guidance

With COVID-19, the old normal and old ways of working have had to change rapidly, if for no other reason than we couldn’t all be co-located anymore. We all had to turn to fully remote working quickly, not just as a company but as an industry.

Thankfully within Difrent we’ve always had the ability to work remotely, so things like laptops and collaborative software were already in place internally; but the move to being fully remote has still been a big challenge. Things like setting up regular online collaboration and communication sessions throughout our week, our twice-daily coffee catchups and weekly Difrent Talks, created for people to drop in on with no pressure attached, have helped people stay connected.

The main challenge has been how we work with our clients to ensure we are still delivering: reviewing our ways of working to ensure we are still working inclusively, and aren’t accidentally excluding someone from a conversation when everyone is working from their own home; maintaining velocity; and ensuring everyone is engaged and able to contribute.

This is trickier to navigate when you’re all working virtually, and needs a bit more planning and forethought, but it’s not impossible. One of the positives (for me at least) about the current crisis is how well people have come together to get things delivered.

Some of the work that we have been involved in, which would generally have taken months to develop; has been done in weeks. User research, analysis and development happening in a fraction of the time it took before.

Graffiti saying ‘Made in Crisis’

So how are we now able to move at such a fast pace? Are standards being dropped or ignored? Are corners being cut? Or have we iterated and adapted our approach?

Once this is all over I think those will be the questions a lot of people are asking; but my observation is that, if nothing else, this current crisis has made us really embrace what agility means.

We seem to have the right people ‘in the room’ signing off decisions when they are needed, with proper multidisciplinary teams, made up of people not just from digital but also from policy and operations etc., that are empowered to get on and do things. Research is still happening, but possibly at a much smaller scale, as and when it is needed. We’re truly embracing the Minimum Viable Product: getting things out there that aren’t perfect, but that real people can use; testing and improving the service as we go.

Once this is all over I certainly don’t want to have to continue the trend of onboarding and embedding teams with 24 hours’ notice; and while getting things live in under 2 weeks is an amazing accomplishment, achieving it comes at a high price – not just in terms of resources but in terms of people, because that is where burnout will occur for all involved. But I believe a happy medium can be found.

My hope, once this is all over, is that we can find the time to consider what we in digital have learnt, and focus on which elements we can iterate and take forward to help us keep delivering faster and better, but in the right way, with fewer delays; so we can get services out there for people to use, because really, that is what we are all here to do.

Stay home, stay safe, save lives
Sign saying ‘stay home, stay safe, save lives’