We’re punishing those who are less experienced, and we need to stop.
In the last few weeks I've had multiple conversations with clients (both existing and new) who are preparing for, or have recently not passed, their Digital Service Standard assessments, and who are really struggling to understand what is needed from them in order to pass.
These teams have tried to engage with the service standards teams, but those teams are extremely busy; most teams can't get any time with their 'link' person until six weeks before their assessment, by which time they are quite far down the track, potentially leaving them a lot of (re)work to try to do before their assessment.
Having sat in on a few of those calls recently, I've been surprised how little time is set aside to help the teams prepare, and to give them advice and guidance on what to expect at an assessment if they haven't been through one before. There's no time or support for mock assessments for new teams. There may be the offer of one or two of the team observing someone else's assessment if the stars align, but it's not proactively planned in; it's viewed as a nice to have. There seems to be an assumption that project teams should know all of this already, and no recognition that a large number of teams don't; this is still all new to them.
"In the old days" we, as assessors and transformation leads, used to set aside time regularly to meet with teams: talk through the problems they were trying to fix, understand any issues they might be facing, and provide clarity and guidance before the assessment, so that teams could be confident they were ready to move on to the next phase. But when I talk to teams now, so few of them are getting this support. Many teams reach out because the rare bits of guidance they have received haven't been clear, and in some cases have been contradictory, and they don't know who to talk to to get that clarity.
Instead, more and more of my time at the moment, as a supplier, is being set aside to support teams through their assessment: to provide advice and guidance on what to expect, how to prepare and what approach the team needs to take. What an MVP actually is; how to decide when you need an assessment; what elements of the service you need to have ready to 'show' at each stage; and what the difference is between Alpha, Beta and Live assessments and why it matters. For so many teams this is still new, almost like a foreign language.
So, how can we better support teams through this journey?
Stop treating it like this is all old hat and that everyone should know everything about it already.
Digital transformation has been 'a thing' for one generation (if you count from the internet becoming a tool for the masses in 1995). Within the public sector, GDS, the Digital Service Standards and the Digital Academy have existed for far less than one generation; less than 10 years, in fact.
By treating it as a thing everyone should know, we make it exclusionary. We make people feel less than us for the simple act of not having the same experience we do.
We talk about working in the open, and many teams do still strive to do that; but digital transformation is still seen by many as almost a magical art, and how to pass what should be a simple thing like a service standard assessment is still viewed as arcane knowledge held by the few. As a community we need to get better at supporting each other along this path, especially those new to the experience.
This isn't just a nice thing to do, it's the fiscally responsible thing to do; by assuming teams already have all this knowledge, we're just increasing the likelihood they will fail, and that failure comes with a cost.
We need to set aside more time to help and guide each other on this journey, so that we can all succeed. That is how we truly add value, and how we ensure that digital transformation delivers and is around to stay for generations to come.
One of the things that has really become apparent since moving 'supplier side' is how poorly the procurement processes used by the public sector to tender work facilitate agile delivery.
The process of bidding for work, certainly as an SME, is an industry in itself.
This month alone we've seen multiple Invitations to Tender (ITTs) on the Digital Marketplace for Discoveries etc., as many departments try to spend their budget before the end of the financial year.
The ITTs will mention user research and ask how suppliers will work to understand user needs or hire proper user researchers. But they will then state they only have four weeks or £60K to carry out the Discovery. While they specify the need for user research, no user recruitment has been carried out to let the supplier hit the ground running, and it's not possible for it to be carried out before the project starts (unless as a supplier you're willing to do that for free; and even if you are, you've got less than a week to onboard your team, do any reading you need to do and complete user recruitment, which just isn't feasible). On top of that, we regularly see requests for prototypes within that time as well.
This isn't to say that short Discoveries etc. are impossible; if anything, COVID-19 has proved they are possible. But in those cases the outcomes we were trying to deliver were understood by all, the problems we were trying to solve were very clear, and there was a fairly clear understanding of the user groups we'd need to work with to carry out any research; all of this enabled the teams to move at pace.
But we all know the normal commercial rules were relaxed to support delivery of the urgent COVID-19 related services. Generally it's rare for an ITT to clarify the problem the organisation is trying to solve, or the outcomes it is looking to achieve. Instead they tend to focus solely on delivering a Discovery or Alpha etc. The outcome is stated as completing the work in the timeframe in order to move to the next stage, not as a problem to solve with clear goals and scope.
We spend a lot of time submitting questions trying to get clarity on what outcomes organisations are looking for, and sometimes it certainly feels like organisations are looking for someone to deliver them a Discovery solely because the GDS/Digital Service Standard says they need to do one. This means, if we're not careful, halfway through the Discovery phase we're still struggling to get stakeholders to agree the scope of the work and why we really do need to talk to that group of users over there that they've never spoken to before.
The GDS lifecycle and how it currently ties into procurement and funding (badly) means that organisations are reluctant to go back into Discovery or Alpha when they need to, because of how they have procured suppliers. If as a supplier you deliver a Discovery that finds there is no need to move into Alpha (because there are no user needs, for example), or midway through an Alpha you find the option you prioritised for your MVP no longer meets the needs as anticipated, clients still tend to view that money as 'lost' or 'wasted', rather than accepting the value in failing fast and stopping or changing to do something that can add value. Even when clients do accept that, sometimes the procurement rules that brought you on to deliver a specific outcome mean your team now can't pivot onto another piece of work, as that needs to be a new contract. Either scenario could mean that, as a supplier, you lose the contract you spent so much time winning, because you did 'the right thing'.
We regularly pick up work midway through the lifecycle; sometimes that's because the previous supplier didn't work out, sometimes it's because they were only brought in to complete the Discovery or Alpha etc. and, when it comes to re-tender, another supplier is now cheaper. That's part and parcel of being a supplier; but I know from being 'client side' for so long how hard that can make it to manage corporate knowledge.
Equally, as a supplier, we rarely see things come out for procurement in Live, because there is an assumption that by Live most of the work is done; and yet if you follow the intent of the GDS lifecycle, rather than how it's often interpreted, there should still be plenty of feature development, research etc. happening in Live.
This in turn is part of the reason we see so many services stuck in Public Beta. Services have been developed by or with suppliers who were only contracted to provide support until Beta. There is rarely funding available for further development in Live, and the knowledge and experience the suppliers provided has exited stage left, so it's tricky for internal teams to pick up the work, move it into Live and continue development.
Most contracts specify 'knowledge transfer' (although sometimes it's classed as a value add, when it really should be a fundamental requirement), but few are clear on what they are looking for. When we talk to clients about how they would like to manage that, or how we can get the balance right between delivering tangible outcomes and transferring knowledge, knowledge transfer is regularly de-scoped or de-prioritised. It ends up being seen as less important than getting a product or service 'out there'; but once the service is out there, the funding for the supplier stops and the time to do any proper knowledge transfer is minimal at best. If not carefully managed, suppliers can end up handing over a load of documentation and code without completing the peer working, lunch and learns, or co-working workshops we'd wanted to happen.
Some departments and organisations have got much better at getting their commercial teams working hand in hand with their delivery teams. We can always spot those ITTs a mile off, and it's a pleasure to see them, as they make it much easier for us as suppliers to provide a good response.
None of this is insurmountable, but we (suppliers, commercial/procuring managers and delivery leads alike) need to get better at working together to look at how we procure and bid for work, ensuring we are clear on the outcomes we're trying to achieve and properly valuing 'the value add'.
What do we even mean when we talk about agile at scale and what are the most important elements to consider when trying to run agile at scale?
This is definitely one of those topics of conversation that goes around and around and never seems to get resolved or go away. What do we even mean when we talk about agile at scale? Do we mean scaling agile within a programme, across multiple teams? Do we mean scaling it across multiple programmes? Or do we mean using it at scale across a whole organisation?
Whenever I'm asked what I believe to be the most important elements in enabling successful delivery using agile, or using agile at scale, the number one thing I will always talk about isn't the technology. It isn't digital capability, or experience with the latest agile ways of working (although all those things are important and obviously help); it's the culture.
I've blogged before on how to change a culture and why it's important to remember cultural change alongside business transformation; but more and more, especially when we're talking about agile at scale, I've come to the conclusion that the culture of an organisation, and most especially the buy-in and support for agile ways of working at a leadership level, is the most fundamental element of being able to successfully scale agile.
Agile itself is sadly still one of those terms that is very marmite for some, especially in the senior leadership layers. They've seen agile projects fail; it seems like too much change for too little return; or it's just something their digital/tech teams 'do' that they don't feel the need to really engage with. GDS tells them they have to use it, so they do.
Which is where I think many of the agile at scale conversations stumble: it's seen as a digital/tech problem, not an organisational one. This means that time and again, Service Owners, Programme Directors and agile delivery teams get stuck when trying to develop and get support for business cases that are trying to deliver holistic and meaningful change. We see it again and again: agile delivery runs into waterfall funding and governance and gets stuck.
As a Service Owner or Programme Director trying to deliver a holistic service, how do you quantify in your business case the value this service and this approach to delivery will add? The obvious answer, hopefully, is using data and evidence to show the potential areas for investment and the value they would add to both users and the business. But how do you get that data? Where from? How do you get senior leaders to understand it?
In organisations where agile at scale is a new concept, supporting senior leaders to understand why this matters isn't easy. I often recommend that new CDOs, CEOs or Chief Execs 'buddy up' with or shadow other senior folks who have been through this journey; folks like Darren Curry, Janet Hughes, Tom Read and Neil Couling, who understand why it matters, have been through (or are going through) this journey themselves in their organisations, and are able to share their experiences, both good and bad.
I will always give full praise to Alan Eccles CBE, who was previously the Public Guardian and Chief Executive of the Office of the Public Guardian, without whom the first Digital Exemplar, the LPA online, would never have gone live. Alan was always very honest that he wasn't experienced or knowledgeable about agile or digital, but he was fully committed to making the OPG the first true Digital Exemplar agency, and to utilising everything that digital, and agile ways of working, had to offer to transform the culture of the OPG and the services it delivered. If you want an example of what a true digital culture looks like, and how vocal and committed Alan was to making the OPG digital, just take a look at their blog, which goes all the way back to 2015 and maps the OPG's digital journey.
Obviously, culture isn't the only important factor when wanting to scale agile; the technology we use, the infrastructure and architecture we design and have in place, the skills of our people, and the size of our teams and their capacity to deliver are all important too. But without a culture that encompasses and supports the teams, the ability to deliver at scale will always be a struggle.
The commitment at senior leadership level to change, and to embracing the possibilities and options that a digital culture and agile at scale bring, permeates through the rest of the organisation. It encourages teams to work in the open, fostering collaboration and identifying common components and dependencies. It acknowledges that failure is OK, as long as we're sharing the lessons we've learned and constantly improving. It supports true multidisciplinary working and enables holistic service design by encouraging policy, operations and finance colleagues to be part of the delivery teams. All of this in turn improves decision making and increases the speed and success of transformation programmes. Ultimately it empowers teams to work together to deliver; and that is how we scale agile.
The frustration of job descriptions and their lack of clarity.
One of the biggest and most regularly occurring complaints about the Civil Service (and the public sector as a whole) is the mismanagement of commercial contracts.
There are regularly headlines in the papers accusing Government Departments and the Civil Servants working in them of wasting public money, and over the last few years in particular there has been a drive to improve commercial experience, especially within the Senior Civil Service.
When, a few years ago, my mentor at the time suggested leaving the public sector for a short while to gain some more commercial experience before going for any Director level roles, it seemed like a very smart idea. I would obviously need to provide evidence of my commercial experience to get any further promotions, and surely managing a couple of £500K or £1M contracts would not be enough, right?
Recently I've been working with my new mentor, focusing specifically on gaining more commercial knowledge, and last month he set me an exercise: look at some Director and above roles within the Digital and Transformation arena to see what level of commercial experience they were asking for, so that I could measure my current experience against what is being asked for.
You can therefore imagine my surprise when this month we got together to compare four senior-level roles (two at Director level and two at Director General level) and found that the amount of commercial experience requested in the job descriptions was decidedly woolly.
I really shouldn't have been surprised; the Civil Service is famous for its woolly language, and policy and strategy documents are rarely written in simple English, after all.
But rather than job specifications with specific language asking for "experience of successfully managing multiple multi-million pound contracts", what is instead called for (if commercial experience is mentioned specifically at all) is "commercial acumen" or "a commercial mindset", with no real definition of what level of acumen or experience is needed.
The Digital Infrastructure Director role at DCMS does mention commercial knowledge as part of the person specification, which it defines as "a commercial mindset, with experience in complex programmes and market facing delivery."
Finally we have the recently published Government CDO role, which clearly mentions commercial responsibilities in the role description, but doesn’t actually demand any commercial experience in the person specification.
At which point my question is: what level of commercial acumen or experience do you actually want? What is a commercial mindset, and how do you know if you have it? Why are we being so woolly at defining what is a fundamentally critical part of these roles?
Recent DoS framework opportunities we have bid for or considered at Difrent have required suppliers to have experience of things like "a minimum of 2 two million pound plus level contracts" (as an example) to be able to bid for them.
That's helpful: as Delivery Director I know exactly how many multi-million pound contracts we've delivered successfully and can immediately decide whether, as a company, it's worth us putting time or effort into the bid submission. But as a person, I don't have the same level of information needed to make a similar decision at an individual level.
The flip side of the argument is that the data suggests women especially won't apply for roles that are "too specific" or have a long shopping list of demands, because women feel they need to meet 75% of the person specification before applying. I agree with that wholeheartedly, but there's a big difference between being far too specific and listing 12+ essential criteria for a role, and being so unspecific you've become decidedly generic.
Especially when, as multiple studies have shown, job titles in the public digital sector are often meaningless. Very rarely in the public sector does a job actually do what it says on the tin. What a Service Manager is in one Department can be very different in another.
If I'm applying for an infrastructure role, I would expect the person specification to ask for infrastructure experience. If I'm applying for a comms role, I expect to be asked for some level of comms experience, and I would expect some hint as to how much experience is enough.
So why, when we are looking at senior/Director level roles in the Civil Service, are we not helping candidates understand what level of commercial experience is 'enough'? Private sector job adverts for similar level roles tend to be much more specific about the amount of contract-level experience and knowledge needed, so why is the public sector being so woolly in its language?
*If you don't get the blog title, I'm sorry, it is very geeky, and a terrible Philip K. Dick reference. But it amused me.
The Beta Assessment is probably the one I get the most questions about; primarily, "when do we actually go for our Beta Assessment, and what does it involve?"
Firstly what is an Assessment? Why do we assess products and services?
If you've never been to a Digital Service Standard assessment it can be daunting, so I thought it might be useful to pull together some notes from a group of assessors to show what we are looking for when we assess a service.
Claire Harrison (Chief Architect at Homes England and a leading Tech Assessor) and Gavin Elliot (Head of Design at DWP and a leading Design Assessor; you can find his blog here) helped me pull together some thoughts about what a good assessment looks like, and what we are specifically looking for when it comes to a Beta Assessment.
I always describe a good assessment as the team telling the assessment panel a story. So, what we want to hear is:
What was the problem you were trying to solve?
Who are you solving this problem for? (who are your users?)
Why do you think this is a problem that needs solving? (What research have you done? Tell us about the user journey.)
How did you decide to solve it and what options did you consider? (What analysis have you done?)
How did you prove the option you chose was the right one? (How did you test this?)
One of the great things about the Service Manual is that it explains what each delivery phase should look like, and what the assessment team are considering at each assessment.
So what are we looking for at a Beta Assessment?
By the time it comes to your Beta Assessment, you should have been running your service for a little while with a restricted number of users in a Private Beta. You should have real data gathered from real users who were invited to use your service, and your service should have been iterated several times by now, given all the things you have learnt.
Before you are ready to move into Public Beta and roll your service out nationally, there are several things we want to check during an assessment.
We don't want to just hear about the 'digital' experience; we want to understand how you have provided (or will provide) a consistent and joined-up experience across all channels.
Are there any paper or telephony elements to your service? How have you ensured that those users have received a consistent experience?
What changes have you made to the back end processes and how has this changed the user experience for any staff using the service?
Were there any policy or legislative constraints you had to deal with to ensure a joined up experience?
Has the scope of your MVP changed at all so far in Beta given the feedback you have received from users?
Are there any changes you plan to implement in Public Beta?
As a Lead Assessor, this is where I always find that teams who have struggled with empowerment or organisational silos may struggle.
If the team are only empowered to look at the digital service, and have struggled to make any changes to the paper, telephony or face-to-face channels due to siloed working in their Department between Digital and Ops (as an example), the digital product will offer a very different experience to the rest of the service.
As part of that discussion we will also want to understand how you have supported users who need help getting online; and what assisted digital support you are providing.
At previous assessments you should have had a plan for the support you intended to provide; you should now be able to talk through how you are putting that into action. This could be telephony support or a web chat function, but we want to ensure the support being offered is (or will be) consistent with the wider service experience and meets your users' needs. We also want to understand how it's being funded, and how you plan to publish the accessibility information for your service.
We also expect that by this point you have run an accessibility audit and have been carrying out regular accessibility testing. It's worth noting that if you don't have anyone in-house who is trained in running accessibility audits (we're lucky at Difrent, as we have a DAC assessor in-house), you will need to factor in the time it takes to get an audit booked and run, well before you think about your Beta Assessment.
Similarly, by the time you go for your Beta Assessment we would generally expect a Welsh language version of your service to be available; again, this needs to be planned well in advance, as it can take time to arrange and is not (or shouldn't be) a last minute job! In my experience it's something a lot of teams forget to prioritise and plan for.
And finally assuming you are planning to put your service on GOV.UK, you’ll need to have agreed the following things with the GOV.UK team at GDS before going into public beta:
Again, while it shouldn't take long to get these things sorted with the GOV.UK team, they can sometimes have backlogs, so it's worth making sure you've planned in enough time to get this done.
The other things we will want to hear about are how you've ensured your service is scalable and secure, and how you have dealt with any technical constraints.
The architecture and technology – Claire
From an architecture perspective, at the Beta phase I'm still interested in the design of the service, but I also focus on its implementation and the provisions in place to support the sustainability of the service. My mantra is 'end-to-end, top-to-bottom service architecture'!
An obvious consideration in both the design and deployment of a service is security: how the solution conforms to industry, government and legal standards, and how security is baked into a good technical design. With data, I want to understand the characteristics and lifecycle of the data: is it identifiable, how is it collected, where is it stored and hosted, who has access to it, is it encrypted, and if so when, where and how? I find it encouraging that in recent years there has been a shift in thinking, not only about how to prevent security breaches but also about how to recover from them.
Security is sometimes cited as a reason not to code in the open, but in actual fact this is hardly ever the case. As services are assessed on this, there needs to be a very good reason why code can't be open. After all, a key principle of GDS is reuse, in both directions; for example making use of common government platforms, and also publishing code so it can be used by others.
Government services such as Pay and Notify can help with some of a technologist's decisions and should be used as the default, as should open standards and open source technologies. When this isn't the case, I'm really interested in the selection and evaluation of the tools, systems, products and technologies that form part of the service design. This might include integration and interoperability, constraints in the technology space, vendor lock-in, route to procurement, total cost of ownership, alignment with internal and external skills, etc.
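To make the reuse point a little more tangible, here's a minimal sketch (not from any particular service) of what leaning on a common platform like GOV.UK Notify can look like in practice, using its published Python client. The API key, template ID and personalisation fields shown are placeholders for illustration only; a real service would agree its templates with the service team and keep credentials out of the code.

```python
# Minimal sketch: reusing a common government platform (GOV.UK Notify)
# instead of building email sending from scratch.
# The API key and template ID below are placeholders, not real values.
from notifications_python_client.notifications import NotificationsAPIClient

notify_client = NotificationsAPIClient("your-notify-api-key")  # placeholder key


def send_application_confirmation(email_address: str, reference_number: str):
    """Send a confirmation email using a pre-agreed Notify template."""
    return notify_client.send_email_notification(
        email_address=email_address,
        template_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  # placeholder template ID
        personalisation={"reference_number": reference_number},  # illustrative field
    )
```

The point isn't the specific code; it's that the team can show an assessor they evaluated the common platform first, and only built something bespoke where there was a genuine reason to.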
Some useful advice would be to think about the technology choices as a collective, rather than piecemeal, as and when a particular tool or technology is needed. Yesterday I gave a peer review of a solution under development where one tool had been deployed in isolation, and not as part of an evaluation of the full technology stack. This meant there were integration problems as new technologies were added to the stack.
The way that a service evolves is really important too, along with the measures in place to support its growth. Cloud based solutions help take care of some of the more traditional scalability and capacity issues, and I'm interested in understanding the designs around these, as well as any other mitigations in place to help assure the availability of the service. As part of the Beta assessment, the team will need to show their plan for dealing with the service being taken temporarily offline: strategies for dealing with incidents that impact availability, the strategy for recovering from downtime, and how these have been tested.
Although a GDS Beta assessment focuses on a specific service, it goes without saying that a good technologist will be mindful of how the service they've architected impacts the enterprise architecture, and vice versa. For example, if a new service is built with microservices and also introduces an increased volume and velocity of data, does the network need to be strengthened to cope with the increase in traffic traversing it?
Legacy technology (as well as legacy 'commercials' and ways of working) is always on my mind. Obviously during an assessment a team can show how they address legacy within the scope of that particular service, be it some form of integration with legacy or applying the strangler pattern, but organisations really need to put as much effort into dealing with legacy as they do into new digital services. Furthermore, they need to think about how to avoid creating the 'legacy systems of the future' by ensuring the sustainability of their service, be it from a technical, financial or resourcing perspective. I appreciate this isn't always easy! However, I do believe that GDS should, and will, put much more scrutiny on organisations' plans to address legacy issues.
One final point from me: teams should embrace an assessment. Clearly the focus is on passing, but regardless of the outcome there's lots of value in the feedback. It's far better to get constructive feedback during the assessment stages than to have to deal with disappointed stakeholders further down the line, and probably have to spend more time and money strengthening or redesigning the technical architecture.
How do you decide when to go for your Beta Assessment?
Many services (for both good and bad reasons) have struggled with the MVP concept, and as such the journey to get their MVP rolled out nationally has taken a long time and contained more features and functionality than teams might have initially imagined.
This can make it very hard to decide when you should go for the assessment to move from Private to Public Beta. If your service is going to be rolled out to millions of people, or has a large number of user groups with very different needs, it can be hard to decide what functionality is needed in Private Beta vs. Public Beta, or what can be saved until Live and rolled out as additional functionality.
The other thing to consider is what your rollout plan actually looks like. Are you able to go national with the service once you've tested with a few hundred people from each user group? Or, as is more common with large services like NHS Jobs where you are replacing an older service, does the service need to be rolled out in a very set way? If so, you might need to keep inviting users in until full rollout is almost complete, making it hard to judge when the right time for your Beta assessment is.
There is no right or wrong answer here; the main thing to consider is that you will need to understand all of the above before you can roll your service out nationally, and be able to tell that story to the panel successfully.
This is because, theoretically, most of the heavy lifting is done in Private Beta; once you have rolled your service out into Public Beta, the main things left to test are whether your service scales and works as you anticipated. Admittedly this (combined with confusion about the scope of an MVP) is why most services never actually bother with their Live Assessment. For most services, once you're in Public Beta the hard work has been done; there's nothing more to do, so why bother with a Live Assessment? But that's an entirely different blog!
Before I discuss what (in my view) a Service Owner is, a brief history lesson into the role might be useful.
The role of the 'Service Manager' was seen as critically important to the success of a product; they were defined as a G6 (Manager) who had responsibility for the end-to-end service AND who led the team through their Service Standard assessments.
Now let's think about this a bit. Back when the GDS Service Standard and the Service Manual first came into creation, they were specifically created for, and with, GOV.UK in mind. As such, this definition of the role makes some sense: GOV.UK was (relatively) small and simple, and one person could 'own' the end-to-end service.
The problem came about when the Service Standards were rolled out wider than GDS itself. DWP is a good example of where this role didn't work.
The Service Manual describes a service as the holistic experience for a user; so it's not just a digital product, it's the telephony service that sits alongside it, the back-end systems that support it, the operational processes that staff use to deliver the service daily, and the budget that pays for it all. Universal Credit is a service, State Pension is a service; and both of these services are, to put it bluntly, HUGE.
Neil Couling is a lovely bloke who works really hard and has the unenviable task of having overarching responsibility for Universal Credit. He's also a Director General. While he knows A LOT about the service, it is very unlikely that he would know the full history of every design iteration and user research session the service went through, or be able to talk in detail about the tech stack and its resilience; and even if he did, he certainly would be very unlikely to have the four hours spare to sit in the various GDS assessments UC went through.
This led to us (in DWP) phasing out the role and splitting the responsibilities in two: the newly created role of Product Lead, and the Service Owner. The Product Lead did most of the work of the Service Manager (in terms of GDS assessments etc.), but didn't have responsibility for the end-to-end service; that sat with the Service Owner. The Service Owner was generally a Director General (and also the SRO), whose responsibilities we clarified when it came to digital services.
A few years ago Ross (the then Head of Product and Service Management at GDS) and I, along with a few others, had a lot of conversations about the role of the Service Manager: why, in departments like DWP, the role did not work, and what we were doing instead.
At the time there was agreement in many of the departments outside GDS that the Service Manager role wasn't working as intended, and was instead causing confusion and, in some cases, creating additional unnecessary hierarchy. The main problem, as it was in DWP, was that the breadth of the role was too big for anyone below SCS, which meant we were ending up with Service Managers who were only responsible for the digital elements of the service (and often reported to a Digital Director), with all non-digital elements of the service sitting under a Director outside of Digital; this was creating more division and confusion.
As such, the Service Manual and the newly created DDaT framework were changed to incorporate the role of the Service Owner instead of the Service Manager, with the suggestion that this should be an SCS level role. However, because the SCS sat outside the DDaT framework, the extent to which the role could be defined or specified was rather limited, and it became more of a suggestion than a clearly defined requirement.
The latest version of the DDaT framework has, interestingly, removed the suggestion that the role should be an SCS role, and any reference to the crossover with the responsibilities of the SRO, and now makes the role sound much more 'middle management' again, although it does still specify ownership of the end-to-end service.
Ok, so what should a Service Owner be?
When we talked about the role a few years ago, the intention was very much to define how the traditional role of the SRO joined up more closely with the agile/digital/user-centred design world, in order to create holistic, joined-up services.
Below is (at least my understanding of) what we intended the role to be:
They should have end to end responsibility for the holistic service.
They should understand and have overall responsibility for the scope of all products within the service.
They should have responsibility for agreeing the overall metrics for their service and ensuring they are met.
They should have responsibility for the overall budget for their service (and the products within it).
They should understand the high level needs of their users, and what teams are doing to meet their needs.
They should understand (and have agreed) the high level priorities within the service. (Which product needs to be delivered first? Which has the most urgent resource needs? etc.)
They should be working with the Product/Delivery/Design leads within their products as much as the Operational leads etc., empowering them to make decisions and understanding the decisions that have been made.
They should be encouraging and supporting cross functional working to ensure all elements of a service work together holistically.
They should be fully aware of any political/strategic decisions or issues that may impact their users and the service, and be working with their teams to ensure those are understood, to minimise risks.
They should understand how Agile, Waterfall and any other change methodologies work to deliver change, and how best to support their teams no matter which methodology is being used.
In this way the role of the Service Owner would add clear value to the Product teams, without adding in unnecessary hierarchy. They would support and enable the development of a holistic service, bringing together all the functions a service would need to be able to deliver and meet user needs.
Whether they are an SCS person or not is irrelevant; the important thing is that they have the knowledge and ability to make decisions that affect the whole service, that they have overall responsibility for ensuring user needs are met, that they can ensure all the products within the service work together, and that their teams are empowered to deliver the right outcomes.
One of the most common questions that comes up in Bid opportunities is usually some variant of “how do you transfer your knowledge to us before you leave?”
This is a completely valid question, and really important both to ask and to understand, but it's also hard to answer well in 100 words without risking making it look like knowledge transfer is only a nice to have!
Having been on the other side of the commercial table, I know that making sure you get a supplier who will work with you and up-skill your own people, so you are not reliant on the supplier forever, is generally vital both to making the project successful and to keeping it cost effective.
I’ve written Invitations to Tender that ask for examples of how suppliers would go about transferring knowledge and up-skilling my teams. I’ve sat through bid tender presentations as the buyer and listened to suppliers try to persuade me that they know best, and that they have the expertise my organisation needs to deliver a project or programme.
I was generally able to spot very quickly those organisations that took this more seriously than others: those that would work collaboratively with us vs. those more likely to just come in, do a sales job, and leave us none the wiser and reliant on their services.
But, if I'm honest, I never really made that judgement based on the words they said, but on the words they didn't say, and more importantly HOW they said, or didn't say, them.
Everyone can say the words ‘show and tell’, but how are you doing them? How are you getting stakeholders engaged? How are you making sure you have the right people turning up to engage with the project?
You can say you use Trello, JIRA, or Confluence etc. to create shared digital spaces to run your backlogs or share information; but how do you make sure the right people have access to them and know how to use them? How do you agree what information is going on there and when? How do you determine what information the team can see vs. your stakeholders, and make sure the information is understandable to everyone who needs it?
As long as suppliers are putting in the key buzzwords, that nuance is hard to judge within 100 words, but it is so key to understand. And it's not only important for the buying organisation to understand how the supplier would transfer knowledge; it's actually really important for the supplier to understand how receptive an organisation is as well.
I always assumed ‘knowledge transfer’ was something that was easy for suppliers to do as long as they put in some effort.
Now I sit on the other side of the table, I've realised there is a real art to it: not just writing a bid response that gets the message across, but doing it once you hit the ground. I'd always assumed that, as long as the team and the buying organisation were keen and engaged, knowledge transfer would be easy to do.
Eight months later I’ve realised it’s not as easy as it looks, as a supplier there’s a very fine line to walk between supporting an organisation, and looking patronising. Just as every organisation is somewhere different on their agile/digital journey, so is every individual.
A one-size-fits-all approach to transferring knowledge will never work. You can't assume that because an organisation is new to agile or digital, every individual within it is. Some organisations and people want more in the way of coaching and mentoring; others want less. Some will say they are open to changing their ways of working but will resist anything new; others will bite your hand off for every new tool or technique. Some want to be walked through everything you are doing so they can learn from it; others want you to just get on and deliver, and tell them at the end how you did it.
And as suppliers, there is often as much for us to learn from the organisation as there is to 'teach'; while we might be the experts in agile, digital or delivering transformation, we need to learn about and understand how their organisation works, and why.
There is no 'one answer' to how to do knowledge transfer, and it's not a one-way street. It's how you approach the question that is important. Are you open to working with an organisation (either as the buyer or the supplier) to understand how you can work together and learn from each other? As long as you are open to having those conversations and learning from each other, the knowledge transfer will happen.
The Agile Prime Directive states: "Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand."
This is a wonderful principle to have during Retrospectives, in order to avoid getting stuck in the blame game, and to instead focus on results.
However, let's be very clear: the Agile Prime Directive isn't an excuse for not delivering. If every sprint you miss your sprint goals, or your team constantly suffers from scope creep, then you need to look a bit deeper to understand what is going wrong.
Even if you agree every individual did the best job they could, as a team are you working well together? Do you understand your team's velocity as well as you can? Do you all understand and agree the scope of the project and your sprint goals? Have you got the right mix of individuals and roles in the team to deliver? Are your team, and the individuals in it, empowered to make decisions?
If the answer to any of these questions is no, this could be impacting your ability to deliver.
The Agile Prime Directive is a good mindset to start conversations with, as we want to create safe and supportive environments for our teams in order to help them achieve their full potential, and recognising that everyone has room to improve is an important part of that. Nowhere in the Agile Prime Directive does it state that everyone is perfect; just that they did their best given the skills, ability and knowledge they had at the time.
However, while it is a good mindset to start with, unfortunately we all know it's not 100% true. The Agile Prime Directive itself has issues; while it's a lovely philosophy and its intent is good, as a manager, and as a human, I have to admit that even I haven't 'done my best' every single day.
While most of the time we do all try our best and do our best, everyone has bad days. Occasionally there will be someone on a team who isn't (for whatever reason) doing their best; their focus is elsewhere. External life will sometimes affect people's work: the kids are ill, they have money worries, their relationship has just ended; these things happen. There will be people who don't work well together; they can be cordial to each other, but don't deliver their best when working together; personality clashes happen. We need to be able to spot and call out all of these things, but we obviously need to do so in a positive and supportive way as much as possible.
Open and honest communication is the key to delivery; and having a culture of trust and empowerment is a critical part of that. We need to create environments where people feel supported and able to discuss issues and concerns, and we need to acknowledge that sometimes, for whatever reason, those issues do come down to an individual; and while I’m not suggesting we should ever name and shame in a retrospective, we need to be able to deal with that in an appropriate way.
We need not only to know and understand that even if everyone 'is doing their best' they can still do better, but also to recognise and support those individuals and teams who, for whatever reason, are not doing or achieving their best.
These issues can't always just be 'left to the retro'. While the retro is a great space to start to air and uncover issues, and to learn from what has gone well and what needs to improve, part of leading and managing teams is understanding which conversations need to come out of the retro and be dealt with alongside it.
If we are constantly missing sprint goals or suffering scope creep, we cannot simply say 'but we are all doing our best'; that isn't good enough. In this instance the participation award is not enough. We are here to deliver outcomes, not just do the best we can.
Changing how we work, to ensure we can still deliver.
One of the big tenets of agile working has always been the importance of colocation, and there are a million blogs out there on why colocation makes a big difference.
The first value of the Agile Manifesto states: individuals and interactions over processes and tools; and one of the 12 principles is to enable face-to-face interactions. This is because it is generally understood that colocation allows a better 'osmosis' of knowledge within the team, allowing better and faster sharing of information and discussion.
But colocation has always had its downsides, the main one being that constant colocation doesn't allow people time to process information and work without interruption or distraction. There's also a large time and cost implication, with team members, and especially Subject Matter Experts, often having to travel a lot to remain engaged. The most common excuse I have heard from senior leaders in organisations for why they can't attend user research sessions or show and tells is the time and effort it takes not only to attend the event, but to travel to it as well.
As we get better at recognising that not everyone works in the same way; recognising the limits of colocation is also important.
For the last few years, most of the teams I’ve worked on or managed have used a mix of colocation and remote working; usually a minimum of 3 days (ideally 4) in the office working together and only one or two days working from home.
This allows the colocated days to be best utilised for design workshops, user research, sprint ceremonies etc.; days where we can make the most of being face to face.
That meant the 'remote working' days could be used to reflect, to review notes, to 'do work'. They were also the days best used for meetings etc.
Obviously COVID-19 turned all of those ways of working on their head, with everything that could be done remotely moving to fully remote. Within Difrent in that time we have onboarded new staff, stood up brand new teams, completed Discoveries, and delivered critical services to help with the nation's response to the pandemic. Now, as we consider how we move to a world post-pandemic, is the time to pause and consider whether we need to (or even want to) return to old ways of working.
A conversation at the virtual #OneTeamGov breakfast meet last week highlighted that Lockdown has meant we have all had to find more inclusive ways of working. It used to be the case that people ‘in the office’ would often make most of the decisions, and then replay those decisions to us few remote workers. Nowadays, with no one in the office, it forces us all to think about who needs to be involved in conversations and decisions. It might take a bit more planning, but it allows us to be more considerate of people’s time and involvement.
Within Difrent we have recognised that a return back to full colocation is actually not necessary in order for us to keep delivering services that matter. Working remotely has not impacted our ability to deliver at all. Rather than having remote working be the exception, we are now planning how we can make that the norm.
Thinking about how we put people before processes, we are ensuring we use the days when we do all get together face to face to their best advantage: making sure we get value from people's time and the effort they have put in to travel, and that we are adding value to them (and the project) in return.
The discussions we had at the time focused on "how do we actually define the role, and what makes a good Product Manager?", and there have been plenty of blogs written on those questions over the years. It definitely feels like the role has matured and progressed over the last few years, and is now generally pretty well recognised.
However, chatting yesterday to Si Wilson about SMEs and Product Managers, and why they are different roles, I realised this may be one area that hasn't been touched on much, and is actually a pretty key difference that it's important to understand.
In the private sector, the Product Manager is often "the voice of the business"; they are equally seen as the "voice of the customer", but when developing products to take to market and make a profit, it's less about what the users need and more about what the business can sell to them.
In the public sector, the role of the Product Manager is a bit different. The Product Manager is NOT the voice of the business; instead they are the voice of the vision. The Product Manager is responsible for 'what could be': they ensure the team are delivering quality and value, weighing up the evidence from everyone else in the team and making the decisions on where to focus next in order to meet the desired outcomes.
This slight change in focus is where the role of the Subject Matter Expert (SME) comes in. The Scrum Dictionary describes the SME as the person with specialised knowledge; in my experience the SME provides the voice of the business, and of what 'is' rather than what will be. They understand the ins and outs of an existing product or service, and any sacred cows that need to be avoided (or understood) within an organisation. They usually work closely with the Business Analyst to map out business processes, and with User Researchers to understand staff experiences.
Back when our merry band of Heads of Product were trying to understand the role, the decision not to have Product Managers 'be the voice of the business' was a very deliberate move, as we felt it hampered the move to user-centred design; it's hard to step back and be agnostic about the solution if you've had years in the business and know every pain point and workaround going.
Some of the dangers of having a Product Manager who is also an SME are:
They feel they know everything already because of their experience, so they see user research or testing as a waste of time.
They become a single point of failure for both knowledge and decision making, with too many people needing their attention at the same time.
They can get lost in the weeds of detail, which can lead to micromanaging or a lack of pace.
That is not at all to say that Product Managers can't 'come from the business'; obviously having some knowledge about the organisation and the service is helpful. But having a clear delineation between the roles of the Product Manager and the SME is important; so if you do have someone covering both roles, it's important to understand which hat is being worn when decisions are made, and for that individual to be able to draw a line between when they are acting as the PM and when they are the SME.
As a Product person, a good SME is worth their weight in gold; good ones bring pace and stretch the team's thinking. They can help identify pain points, and help user researchers and business analysts find the right people to talk to when asking questions about processes. They give the Product Manager room to manoeuvre, and make sure things keep moving. Equally, the best SMEs are pragmatic; they understand that what the business wants doesn't always match what users want, and work with the team to find the best way forward.
Where the role of the SME hasn't worked well, in my experience, it tends to be because the individual hasn't been properly empowered to make decisions by their organisation or line manager, or doesn't actually have the knowledge required and is instead there to capture questions or decisions and feed them back to their team or manager. Another common issue is that the SME can't be pragmatic or understand the difference between user needs and business needs, and won't get involved in user research or understand its importance. Rather than helping the team move work forward, they slow things down: wanting every decision justified to their satisfaction, and wanting to make decisions themselves rather than working with the Product Manager.
Rarely have I found SMEs who could be dedicated full time to one project; they tend to be Policy or Ops experts etc., and so there are a lot of demands on their time. I suspect this is one of the reasons the roles of the SME and the Product Manager are sometimes blended together. However, while they 'can' be filled by the same person, in my experience having those roles filled by separate people works much better, and allows the team to deliver value more quickly.