
Tag: Delivery

Assessing Innovation

(co-written with Matt Knight)

Some background, for context

Just over a month ago I was approached to see if I could provide some advice on assessments to support phase two of the GovTech Catalyst (GTC) scheme. For those who aren’t aware of the GovTech Catalyst Scheme, there’s a blog here that explains how the scheme was designed to connect private sector innovators with public sector sponsors, using Innovate UK’s Small Business Research Initiative (SBRI) to help find promising solutions to some of the hardest public sector challenges.

Person in a lab coat with a stethoscope around their neck looking through a Virtual Reality head set.
Looking for innovation

The Sponsor we were working with (one of the public sector sponsors of the scheme) had put two suppliers through to the next phase and allocated funding to see how and where tech innovation could help drive societal improvements in Wales. As part of their spend approval for the next phase, the teams had to pass the equivalent of a Digital Service Standard assessment at the 6 month point in order to get funding to proceed.

For those who aren’t aware, there used to be a lovely team in GDS who would work with the GTC teams to provide advice and run the Digital Service Standard assessments for the projects. Unfortunately this team was stood down last year, after the recent GTC initiatives had started, leaving the teams with no one to talk to about assessments and no one in place to assess them.

The sponsor had reached out to both GDS and NHS Digital to see if they would be willing to run the assessments or provide advice to the teams, but had no luck, which left them a bit stuck; which is where I came in. I’ve blogged before about the Digital Service Standards, which led to the Sponsor reaching out to ask whether I’d be willing and able to help them out, or whether I knew any other assessors who might be.

Preparing for the Assessments

As there were two services to assess, one of the first things I did was talk to the wonderful Matt Knight to see if he’d be willing and able to lead one of the assessments. Matt’s done even more assessments than me, and I knew he would be able to give some really good advice to the product teams to get the best out of them and their work.

Matt and I sat and had a discussion on how to ensure we were approaching our assessments consistently; how to honour and adhere to the core tenets of the Digital Standards whilst also trying to assess the teams’ innovation and the value for money their services could deliver, in line with the criteria for the GovTech scheme.

What quickly became apparent was that, because this was to support the GTC scheme, the teams doing the work were fully private sector, with little experience of the Digital Service Standards. A normal assessment, with the standard ‘bar’ we’d expect teams to be able to meet, wouldn’t necessarily work well; we’d need to be a little flexible in our approach.

Obviously, no matter what type of assessment you’re doing, the basic framework stays the same (start with user needs, then think about the end-to-end service, then you can talk about the team and design and tech, and along the way you need to ask about the awkward stuff like sustainability and open source and accessibility and metrics); it can be applied to almost anything and come up with a useful result, regardless of sector, background or approach.
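
That framework is simple enough to write down as an ordered agenda. Here’s a minimal sketch in Python (the topics come from the list above; the prompts and the service name are my own illustration, not an official GDS artefact):

```python
# The assessment framework as an ordered agenda: always start with user needs.
ASSESSMENT_AGENDA = [
    ("User needs", "Who are the users, and what evidence do you have of their needs?"),
    ("End-to-end service", "Do you understand the whole journey, across every channel?"),
    ("Team", "Is the team multidisciplinary, empowered and sustainable?"),
    ("Design and tech", "How has the service been designed, built, tested and iterated?"),
    ("The awkward stuff", "Sustainability, open source, accessibility, metrics."),
]

def run_assessment(service_name: str) -> None:
    """Walk the agenda in order; starting with needs rather than tech changes the conversation."""
    print(f"Assessing: {service_name}")
    for topic, prompt in ASSESSMENT_AGENDA:
        print(f"- {topic}: {prompt}")

run_assessment("GTC phase-two service")  # hypothetical service name
```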

As the services were tasked with trying to improve public services in Wales, we also wanted to take account of the newly agreed Welsh Digital Standards, using them alongside the original Digital Standards. The main difference was the parts of the Welsh Standards covering the well-being of people in Wales and promoting the Welsh language (standards 8 & 9); you can read more about the Well-being of Future Generations Act here.

The assessments themselves 

An image of a team mapping out a user journey
User Journey Mapping

The assessments themselves ran well (with thanks to Sam Hall, Coca Rivas and Claire Harrison, my co-assessors). While the service teams were new to the process, they were both fully open and willing to talk about their work: what went well and not so well, and what they had learnt along the way. There was some great work done by both the teams we assessed, and it’s clearly a process that everyone involved, service teams and sponsor team alike, learned a lot from; it was great to hear about how they’d collaborated to support user research activities. Both panels went away to write up their notes, at which point Matt and I exchanged them to see if there were any common themes or issues; interestingly, both assessments had flagged the need for a Service Owner from the sponsor to be more involved, to help the teams identify their success measures.

When we played the recommendations and findings back to the Sponsor, this led to an interesting discussion. The sponsor had nominated someone to act as the link for the teams, to answer their questions and to provide guidance and steers where they could; but because of the terms of the GTC scheme, the rules on what steers they could and couldn’t give were quite strict, to avoid violating the terms of the competition. Originally the GTC team within GDS would have helped sponsors navigate these slightly confusing waters of competition rules and processes. Without an experienced team to turn to for advice, sponsors are left in a somewhat uncomfortable and unfamiliar position, although this sponsor had clearly done their best (and the recommendations in this blog are general comments on how we can improve how we assess innovation across the board, not specifically aimed at them).

Frustratingly this meant that even when teams were potentially heading into known dead-ends, while the sponsor could try to provide some guidance and steer them in a different direction, they couldn’t force the teams to pivot or change; the only option would be to pull the funding. While this makes sense from a competition point of view, it makes little to no sense from a public purse point of view, or from a Digital Standards point of view. It leaves sponsors stuck when things might have gone a little off track: rather than being able to get teams to pivot, they are left choosing between potentially throwing away some great work, or investing money in projects that may not be able to deliver.

Which then raises the question; how should we be assessing and supporting innovation initiatives? How do we ensure they’re delivering value for the public purse whilst also remaining fair and competitive? How do we ensure we’re not missing out on innovative opportunities because of government bureaucracy and processes? 

In this process, what is the point of a Digital Service Standard assessment? 

If it’s like most other assessment protocols (do not start Matt on his gateway rant), then it’s only to assess work that has already happened. If so, then it’s not much good here, when teams are so new to the standards and need flexible advice and support on what they could do next etc.   

If it’s to assess whether a service should be released to end users, then it’s useful in central government when looking to roll out and test a larger service; but not so much use when it’s a small service, one with mainly internal users, or a service that’s earlier in the process and aiming to test a proof of concept.

If it’s to look at all of the constituent areas of a service, and provide help and guidance to a multidisciplinary team in how to make it better and what gaps there are (and a bit of clarity from people who haven’t got too close to see clearly), then it’s a lot of use here, and in other places; but we need to ensure the panel has the right mix of experts to be able to assess this. 

While my panel was all fantastic, and we were able to assess the levels of user research the team had done, their understanding of the problems they were seeking to solve, their ability to integrate with legacy tech solutions and how their team was working together, none of us had any experience in assessing innovation business cases or understanding if teams had done the right due diligence on their financial funding models. The standards specify that teams should have their budget sorted for the next phase and a roadmap for future development; in my experience this has generally been a fairly easy yes or no; I certainly wouldn’t know a good business accelerator if it came and bopped me on the nose. So while we could take a very high level call on whether we thought a service could deliver some value to users, and whether a roadmap or budget looked reasonable, a complex discussion on funding models and investment options was a little outside our wheelhouse, so was not an area we could offer any useful advice or recommendations on.

How can we deliver and assess innovation better going forward? 

If we’re continuing to use schemes like the GTC scheme to sponsor and encourage private sector innovators to work with the public sector to solve important problems affecting our society, then we obviously need a clear way to assess their success. But we also need to ensure we’re setting up these schemes in such a way that the private sector is working with the public sector; and that means we need to be working in partnership; able to advise and guide them where appropriate in order to ensure we’re spending public money wisely. 

There is a lot of great potential out there to use innovative tech to help solve societal issues; but we can’t just throw those problems at the private sector and expect them to do all the hard work. While the private sector can bring innovative and different approaches and expertise, we shouldn’t ignore the wealth of experience and knowledge within the public sector either. We need people within the public sector with the right digital skills, who are able to prioritise and understand the services that are being developed, in order to ensure that the public purse doesn’t pay for stuff that already exists to be endlessly remade.

Assessment can have a role in supporting innovation, as long as we take a generous rather than nitpicking (or macro rather than micro) approach to the service standard. Assessments (and the Standards themselves) are a useful format for structuring conversations about services that involve users (hint: that’s most of them); just the act of starting with user needs (point 1) rather than tech changes the whole conversation.

However, to make this work and add real value, solving a whole problem for users (point 2 of the new UK government standard) is critical, and that involves having someone who can see the entire end-to-end process for any new service and devise and own success measures for it. The best answer to both delivering innovation, and assessing it, is bringing the private and public sector together to deliver real value; creating a process that builds capacity, maturity and genuine collaboration within the wider public sector. A space to innovate and grow solutions. True multidisciplinary collaboration, working together to deliver real value.

“Together, We Create”

Big thanks to Matt for collaborating on this; if you want to find his blog (well worth a read), you can do so here:

Cost vs. Quality

A debate as old as time, and a loop that goes around and around; or so it seems in the Public Sector commercial space.

Every few years, often every couple of spend control cycles, the debate of cost vs. quality rears its head again, with commercial weighting flip-flopping between quality as the most important factor and cost (or lowest cost) as the highest priority.

When quality is the most important factor in the commercial space, Government Departments will prioritise the outputs they want to achieve, weighting their commercial scores towards the areas that indicate quality – things like ‘Value Add’, ‘Delivering Quality’, ‘Culture’ and ‘Delivering in Partnership’. We will see more output-focused contracts coming onto the market, with organisations clear on the vision they want to achieve and the problems they need to solve, looking for the supplier that can best help them achieve that.

When reducing costs becomes the highest priority, the commercial weighting moves to ‘Value for Money’. Contracts are more likely to be fixed price and are often thinly veiled requests for suppliers to act as body shops rather than partners, with commercial tenders scoring day rate cards rather than requesting the cost for overall delivery of outcomes.
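
To make the flip-flop concrete, here’s a small sketch of how the same two bids can produce opposite winners purely because of the weighting. The suppliers and scores are invented for illustration:

```python
# Two hypothetical bids, scored out of 100 on quality and cost (higher = better/cheaper).
bids = {
    "Supplier A (quality-led)": {"quality": 85, "cost": 60},
    "Supplier B (cheapest)":    {"quality": 55, "cost": 90},
}

def total_score(bid: dict, quality_weight: float) -> float:
    """Weighted total; the cost weighting is simply whatever is left over."""
    return bid["quality"] * quality_weight + bid["cost"] * (1.0 - quality_weight)

for label, quality_weight in [("Quality-first (70/30)", 0.7), ("Cost-first (30/70)", 0.3)]:
    scores = {name: total_score(bid, quality_weight) for name, bid in bids.items()}
    print(label, "->", max(scores, key=scores.get))
# Quality-first picks Supplier A; cost-first picks Supplier B, with no change to the bids.
```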

Unfortunately, a lot of the time when the priority switches to cost over quality, we end up with a lot of projects not being delivered, outcomes being missed, and user needs not being met. In order to cut more and more costs, offshoring resource can become the only way to deliver the results cheaply, with the departmental project teams working out of sync with their offshore delivery partners, making co-design and delivery much harder to do, and making it almost impossible to achieve the required quality. This goes in a cycle, with Departments toing and froing between “offshore as much as possible to cut costs” and “the only way to deliver quality is for everyone to be co-located in the office 100% of the time”; full co-location of the teams inevitably drives the costs up again.

So, does that mean in order to get quality we have to have high costs? Surely there is a sweet spot we’re all looking for, where cost and quality align; so why does it seem so hard to achieve within the Public Sector, and what do we need to be looking at to achieve it?

When the government commercial function (and GDS) shook up the public sector digital world nearly a decade ago, they introduced things like the Digital Marketplace and implemented the Spend Control pipeline, with the aim of moving departments away from the large SIs that won 90% of government contracts. These suppliers often charged a fortune and rarely seemed to deliver what was actually needed. (This blog gives the details on what they intended, back in 2014.)

Lots of SME suppliers began to enter the market, win contracts and change how contracts were delivered. As competition increased, costs decreased, with quality partnerships forming between new suppliers and government departments; and the quality of delivery increased as new options, solutions and ways of working were explored.

However, this left Departments managing lots of individual contracts, which grew increasingly complex and time consuming to manage. In order to try and reduce the number of contracts they had to manage, the scale of the contracts began to increase, with more and more multimillion-pound contracts emerging.

As the size and value of the contracts increased, SMEs began to struggle to win them, as they couldn’t stand up the teams needed quickly, nor could they demonstrate they had the experience of delivering contracts at that scale. This became a bit of a self-fulfilling prophecy: the larger SIs continued to win the larger contracts, as they were the only ones able to provide the evidence they could staff and deliver them, and their costs remained high.

This left the SMEs facing three options:

  • Decide not to try for the larger contracts, reducing the amount of competition (potentially increasing costs and decreasing quality in the long run);
  • Form partnership agreements with a number of other SMEs or a larger supplier (again reducing the amount of competition) in order to be able to stand up the teams needed and enable delivery of larger contracts. However, a consortium of suppliers not used to working together could complicate delivery, which could in turn decrease the quality or speed of delivery if not carefully managed; as such, not all contracts allowed consortium or partnership bids, due to the perceived complexity they could bring.
  • Or aim to grow, to allow them to win and deliver the larger contracts. As SMEs grew, however, they would often have to either increase their costs in order to run a larger organisation that could still deliver the same quality as before, or keep their costs low and accept that their quality would likely decrease.

Throughout the pandemic, the focus has been on delivery, and there’s been a healthy mix of both small and large contracts coming out, meaning lots of competition. While costs have always been a factor, the pandemic allowed both departments and suppliers to remove much of the costly admin and bureaucratic approval processes in favour of lightweight approaches to bringing on suppliers and managing teams’ outputs, encouraging innovation in delivery and cost. With lockdowns ensuring co-location was now out of the question, many suppliers were able to reduce their rates to support the pandemic response, with both departments and suppliers agreeing that the priority was delivering quality products and services to meet organisations’ and users’ urgent needs. The removal of co-location as a prerequisite also opened up the market, with more suppliers bidding for work and more individuals applying for more roles, which increased competition and inevitably improved the quality of the outputs being produced. This led to a lot of innovation being delivered throughout the pandemic, which has benefited us all.

As we move out of the pandemic and into the next spending review round, the signs are that the focus is about to swing back to cost as the highest priority, with larger contracts coming out looking for cheaper day rates in order to allow departments to balance their own budgets. But as the economy bounces back and departments begin to insist that teams return to the office, most suppliers will want to increase their costs to pre-pandemic levels. If we’re not careful, the focus on cost reduction will decrease the quality and innovation that has been delivered throughout the pandemic, and could cost taxpayers more in the long run. Look at DWP’s first attempt to deliver Universal Credit for how badly things can go wrong when cost is the highest priority and the Commercial team runs the procurement process with minimal input from Delivery, cost driving the commercial and delivery decisions more than quality.

To find the sweet spot between cost and quality we need to create the best environment for innovation and competition. Allowing flexibility on where teams can be based will support this, as will supporting and encouraging SMEs and medium-sized suppliers to bid for and win contracts by varying contract sizes and values; focusing on outputs over body-shopping; and looking for the value suppliers can add in terms of knowledge transfer and partnership, rather than simply prioritising whoever is cheapest.

It’s important we all work together to get the balance between cost and quality right, and ensure we remain focused on delivering the right things in the right way.

Seesaw

How to be a Product Advocate

Why you need a Product Person in your team.

Since joining Kainos a few weeks ago, I’ve had a number of conversations internally and with clients about the relationship between Delivery and Product, and why I, as a Product Person, moved over to Delivery.

‘Products at the heart of delivery’ image

My answer to that question was that, having spent over 10 years as a Product Person, and having seen the growth of Product as a ‘thing’ within the Public Sector – helping Product grow and mature, developing the community, ways of working, career pathway etc. – I realised that what was missing was Product thinking at a senior level. Most senior leaders within the Programme delivery or Transformation space come from a traditional delivery background (if not an operational one), and while many of them do now understand the value of user-centred design and user needs, they don’t understand the benefit of a product-centric approach or what value Product thinking brings.

The expansion of Product people in the Public Sector has predominantly been driven by GDS and the Digital Service Standards, with most organisations now knowing they need a ‘Product Manager’ in order to pass their Service Standard Assessment. However, almost 10 years later, most organisations are still not prioritising the hiring and capability development of their Product people. In May I worked with four different teams, each working to the Digital Standards and needing to pass an assessment; in none of those teams was the role of the Product Manager working in the way we intended when we created the DDaT Product Management capability framework.

Most organisations (understandably) feel the role of the Product Manager should be an internal one, rather than one provided by a supplier; but 9 times out of 10 the person they have allocated to the role has no experience in it, has never worked on a product or service developed to the digital standards, never mind having been through an assessment; and they are regularly not budgeted for or allocated to the project full time, often being split across too many teams, or split between the Product Manager role and their previous role in Ops or Policy or wherever they have come from. More often than not they’re actually a Subject Matter Expert, not a Product Manager (which I’ve blogged about before).

As a supplier; this makes delivery so much harder. When the right Product person isn’t allocated to a project, we can quickly see a whole crop of issues emerge.

So what are the signs that Product isn’t being properly represented within a team?

  • Overall vision and strategy are unclear or not shared widely; teams aren’t clear on what they’re trying to achieve or why. This can be because the Product person is not able to clearly articulate the problem the team are there to solve, or because the outcomes the team are there to deliver aren’t clearly defined.
  • The roadmap doesn’t exist, is unstable or doesn’t go beyond the immediate future, or the scope of the project keeps expanding; often a sign that prioritisation isn’t happening regularly, or is happening behind closed doors, making planning hard to do.
  • Success measures are unclear or undefined, because the team doesn’t understand what they’re trying to achieve; this often leads to the wrong work being prioritised, outcomes not being delivered, or user needs not being met.
  • Work regularly comes in over budget or doesn’t meet the business case; or the team keeps completing Discoveries and then going back to the start or struggling to get funding to progress. This can be a sign the team aren’t clear what problem they are trying to solve or that the value that the work delivers cannot be/ isn’t clearly articulated by the Product person.
  • Delivery is late/ velocity is slow. This can be a sign the team aren’t getting access to their Product person in a timely manner causing bottlenecks in stories being agreed or signed off; or that the Product person is not empowered to make decisions and is constantly waiting for sign off from more senior stakeholders.
  • Roll-out is delayed or messy, with operational teams frustrated or unclear on project progress; a sign that the team doesn’t have someone owning the roadmap who understands what functionality will be available when, and who ensures any dependencies are clearly understood and being monitored, or a sign that there isn’t someone engaging with or communicating progress to wider stakeholders.

More often than not as a supplier I’ve had to argue that we need to provide a Product person to work alongside teams, to coach and support their internal Product people in the skills and responsibilities a Product person needs to have to enable successful delivery. Where clients have been adamant they don’t want Product people from a supplier (often for budgetary reasons), we’ve then had to look at how we sneak someone in the door, usually by adding a Business Analyst or Delivery Manager to the team who also has Product skills, because otherwise our ability to deliver will be negatively impacted.

When budgets are tight, the role of Product person is often the first thing project managers try to cut or reduce; prioritising the technical or project delivery skills over Product ones. As such, teams (and organisations) need to understand the skills a good product person brings; and the cost of not having someone within a team who has those skills.

  • Their role is to focus on and clarify to the team (and business) the problem the team are trying to fix.
  • Ensure a balance between user needs, business requirements and technical constraints/options.
  • Quantifying and understanding the ROI/value a project will deliver, and ensuring that can be tracked and measured through clear success measures and metrics (there’s a small sketch of this after the list).
  • Being able to translate complex problems into roadmaps for delivery. Prioritising work and controlling the scope of a product or service to ensure it can be delivered in a timely and cost-effective manner, with a proper roll-out plan that can be clearly communicated to the wider organisation.
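
On the measurement point, here’s a minimal sketch of what ‘trackable’ success measures look like; the measures, baselines and targets are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SuccessMeasure:
    """A success measure only counts if it has a baseline, a target and a current value."""
    name: str
    baseline: float
    target: float
    current: float

    def on_track(self) -> bool:
        # Crude rule of thumb: have we moved at least halfway from baseline to target?
        return abs(self.current - self.baseline) >= 0.5 * abs(self.target - self.baseline)

measures = [
    SuccessMeasure("digital take-up (%)", baseline=40, target=80, current=65),
    SuccessMeasure("average processing time (days)", baseline=20, target=5, current=14),
]
for m in measures:
    print(m.name, "-", "on track" if m.on_track() else "at risk")
```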

As an assessor, I have seen more projects fail their assessments at Alpha (or even occasionally Beta) because they lack that clear understanding of the problem they’re trying to solve or of their success measures, than because they’ve used the wrong technical stack. This can be very costly, often meaning hundreds of thousands (if not millions) of pounds being written off or wasted due to delays and rework; much more costly than investing in having properly qualified and experienced Product people working within teams.

While Product and Delivery are often seen as very different skill sets, I recognised a few years ago the value in having more people who understand and can advocate for both the value Product thinking brings to delivery and the ways delivery can work better with Product. People who can not only understand but also champion both, in order to ensure we’re delivering the right things in the right ways to meet our clients’ and their users’ needs.

Which is why I made the active decision to hop the fence: to try to bring the professions closer together, and to build understanding in both teams and senior leaders of the need for Product and Delivery skills to be invested in and present within teams in order to support and enable good delivery. I was really glad to see when I joined Kainos that we’re already talking about how to bring our Product and Delivery communities closer together and act as advocates for each other; it was in fact a chat with the Kainos Head of Product, Charlene McDonald, that inspired this blog.

Having someone with the title of Product Manager or Owner isn’t enough; we need people who are experienced in Product thinking and skilled in Product Management; but that isn’t all we need. We need to stop seeing the role of Product person as a label you can give to anyone in the team in order to pass an assessment, and start understanding why the role and the skills it brings are important. We need senior leaders, project managers and delivery teams who understand what value Product brings; who understand why Product is important and what it could cost the team and their organisation if those Product skills are not included and budgeted for properly right from the start. We need senior leaders to understand why it’s important to invest in their Product people, giving them the time and support they need to do their job properly, rather than spreading them thin across teams with minimal training or empowerment.

We need more Product advocates.

Delivering Value for yourself and others

Just short of two years ago I accepted the role of Director of Delivery at Difrent, a big move for me as I’d only worked in the Public Sector, but a good opportunity to see how things worked on the other side of the commercial table; and a great opportunity to work with some fantastic people (Honestly, Rach Murphy herself is a powerhouse who can teach the world a thing or two and always worth making time for) outside of the public sector, and learn new skills.

The services we were delivering at Difrent were very similar to those I’d been working on before, and I worked with many familiar faces; but still the challenges were new. Working at a start-up that was beginning to scale up was a very different environment to working in a large established Government Department: not just delivering great services that meet user needs, but also building up business processes, scaling up teams, winning new business.

And then there was the pandemic.

Because of its strong background in Health, Difrent was on the front line when it came to stepping up and supporting the COVID-19 response, working with the NHSBSA, NHSX and DHSC. I always thought that my time on Universal Credit was the most fast-paced and demanding time of my life; it turns out that was nothing compared to being asked to stand up 6 teams of experts within 72 hours at the start of the first wave to support various urgent pandemic-related services.

Alongside supporting and delivering high priority COVID-19 related services in unprecedented timescales (we successfully helped delivery of the Home Testing service in under a month) we also had to keep delivering our existing products and services; helping Skills for Care go Live with their Adult Workforce Data Set service, continuing delivery of NHS Jobs, helping the Planning Inspectorate pass their Beta Assessment for their Appeals service and delivering the wholesale business transformation for the British Psychological Society; whilst also picking up and delivering a whole host of other projects and services that we continued to win.

Because of the pandemic, a lot of new teams were beginning to work with the Digital Service Standards and having to go through Service Standard Assessments for the first time, and an increasing amount of my time was taken up by clients wanting support to understand and adhere to the service standards. I’ve always joked about my perfect record for passing assessments (while being clear that not passing the first time isn’t failure, it just means you have more to learn!); working with one client to turn their service around in under 3 weeks, from complete non-adherence to the standards to passing a Beta assessment, has got to be a personal best!

The last year has been full on, with long weeks and even longer days. I’m so proud of everything Difrent has achieved in the last 18 months; but I also recognised the time is right for me to move on and focus more on the bits of my role I am most passionate about.

And what is that? Being hands on and working with clients to solve problems. Having the time to work with teams to understand the issues they’re facing and how to go about fixing them. Seeing the positive changes being made and thinking of ways to keep iterating and improving on what we’ve done. Investing in and building that cultural and organisation change up over time. Whilst at the same time having a proper work life balance again; having time to give attention to my family and friends; rediscovering the things I enjoy doing outside of work and having time and energy to do them. As lockdown begins to end, it’s time for me to have a new start.

And so, from next week I’m moving on to work with Kainos, and I’m really excited about this new opportunity. Going into a larger organisation means there will be more peers to share the load, bigger problems to solve for clients and bigger teams to work with, all with the benefit of the organisational processes we need to deliver large projects already being in place; which will allow me to focus on working with clients fully, ensuring I’m delivering real value to them, and getting real value myself from my work.

Partnership

The good and the bad.

At Difrent we always talk about our desire to deliver in partnership with our clients. To move beyond the pure supplier and client relationship to enable proper collaboration.

One of my main frustrations when I was ‘client side’ was the number of suppliers we’d work with who said they would partner with us, but then, once the contract had started and the first few weeks had passed and the new-relationship glow had faded, the teams and the account managers reverted to type. I can’t recall how many times I had to have conversations at supplier governance meetings where I was practically begging them to challenge us; to be a critical friend and push for the right thing; to feed back to us about any issues and suggest improvements. It always felt like we were reaching across a gap and never quite making full contact.

As such, that’s one of the areas at Difrent that I (and others) are very keen to embody. We try to be true partners: feeding back proactively where there are issues, concerns or suggestions, and trying to foster collaborative ‘one team’ working.

We’ve obviously had more success with this on some contracts than others, and there’s always more we can learn about how to better partner with our clients. However, given we see a lot of complaints about strained partnerships between clients and suppliers, I thought I’d do a bit of a case study: a reflection on, and praise of, one partnership we’ve been working on recently.

Difrent won a contract with the Planning Inspectorate last year, and it was the first completely remote pitch and award we’d been involved with on a multi-million-pound contract.

From the start of the procurement it became really clear that the Planning Inspectorate wanted a partner; that this wasn’t just lip service, but something they truly believed in. As part of the procurement process they opened up their GitHub so we could see their code; they opened up their Miro so we could see their service roadmap; they proactively shared their assessment reports with suppliers.

For us this not only made a good impression, but enabled us to develop a more informed and valuable pitch.

Since we put virtual feet in the virtual door, that dedication to partnership has remained as true 6 months later as it was then. Outside of our weekly governance calls we’ve had multiple workshops to discuss collaboration and ways of working, and multiple discussions on knowledge transfer, reflecting on progress and ways to iterate and improve.

Where there have been challenges we’ve all worked hard to be proactive, open and honest in talking things through. They’ve welcomed our suggestions and feedback (and proactively encouraged them) and been equally proactive in giving us feedback and suggestions.

This has helped us adapt and really think about how we do things like knowledge transfer – always challenging (especially remotely), but something we’re passionate about getting right. We’ve all worked so hard on this, so much so that it’s become one of the core parts of our balanced scorecard, ensuring they as a client can measure the value they’re getting from our partnership not just through our outputs on the projects we’re working on, but through our contributions to the organisation as a whole. That’s also really helpful for us, as it helps us analyse and iterate our ‘value add’ to our partners, and ensure we’re delivering on our promises.

I think there is a lot of learning here for other Departments/ALBs out there looking to procure digital services or capability, on how a good partnership with a supplier needs to start before the contract is signed.

Thanks to Paul Moffat and Stephen Read at the Planning Inspectorate for helping with this blog – demonstrating that partnership in action!

Talking Digital Transformation

It’s something that has come up a lot in conversations at the moment: what is Digital Transformation? What does Digital Transformation mean to me? I always joke that it’s my TED talk subject, if I had one; so I thought, why not write a blog about it?

What is Digital Transformation?

According to Wikipedia, Digital Transformation “is the adoption of digital technology to transform services or businesses, through replacing non-digital or manual processes with digital processes or replacing older digital technology with newer digital technology.”

The Wikipedia definition focuses on 3 of the main areas of Digital Transformation – technology, data and process – which are the areas most people quote; but it doesn’t reference organisational change, which is often recognised as the 4th pillar needed for successful transformation.

If we’re being specific, then I agree with the Wikipedia definition at the project or service level, but when someone says Digital Transformation to me; I automatically start thinking about what that means at the organisational level, before moving onto the other areas.

I’ve written plenty of blogs previously on the importance of considering your organisational culture when trying to implement change, and how likely it is that your transformation will fail if you don’t consider your culture as part of it; but, as we see from the Wikipedia definition, the people side of Digital Transformation is often forgotten.

There’s a good blog here that sets out the 4 main challenges organisations face when looking to implement Digital Transformation, which it defines as:

  • Culture.
  • Digital Strategy and Vision.
  • IT infrastructure and digital expertise.
  • Organisational Structure.

Here we see culture is the first and largest challenge many organisations face, which is why it’s important it’s not treated as an afterthought. So why does that keep happening? Is our methodology wrong?

So how do we go about delivering Digital Transformation?

The Enterprise project has a good article here on what it views as the 3 important approaches leaders should take when implementing Digital Transformation.

  • Solve the biggest problem first.
  • Collaborate to gain influence.
  • Keep up with information flows.

There’s (hopefully) nothing revolutionary here; this is (in my opinion) common sense in terms of approach. But so often, when we start talking about Digital Transformation, we can quickly fall into the trap of talking about frameworks and methodology, rather than the how and why of our approach to solving problems. So, are there any particular frameworks we should be using? Does the right framework guarantee success?

There are lots of different frameworks out there; and I can’t document them all; but below are some examples…

This article sums up what it deems the top Digital Transformation frameworks – the big ones, including MIT, DXC, Capgemini, McKinsey, Gartner, Cognizant and PwC. It’s a good summary and I won’t repeat what it says about each, but it looks at them in the following terms, which I think are key for successful Digital Transformation:

  • customer-centricity
  • opportunity and constraints
  • company culture
  • simplicity

There are obviously a few others out there; and I thought I’d mention a couple:

The first one is this AIMultiple framework; interestingly, it has culture as the final step, which for me makes it feel like you are ‘doing transformation to’ the teams rather than engaging teams and bringing them into the transformation; which doesn’t work well for me.

AIMultiple Digital Transformation Framework
https://research.aimultiple.com/what-is-digital-transformation/#what-is-a-digital-transformation-framework

This second one, from Ionology, has Digital Culture and Strategy as its first building block, with user engagement as its second, given equal weighting to Processes, Technology and Data. It recognises that all of these elements together are needed to deliver Digital Transformation successfully. This one feels much more user-centric to me.

https://www.ionology.com/wp-new/wp-content/uploads/2020/03/Digital-Transformation-Blocks-Equation.jpg

So where do you start?

Each of these frameworks has key elements it considers, in a particular order its authors feel works best. But before panicking about which (if any) framework you need to pick, it’s worth remembering that no single framework will work for every business, and any business will need to tailor a framework to fit its specific needs.

How you plan to approach your transformation is more important than the framework you pick; which is why, for me, the Enterprise article above about good leadership is spot on. We should always be asking:

  • What is the problem you’re trying to solve within your organisation by transforming it, and why?
  • Who do you need to engage and collaborate with to enable successful transformation?
  • What is the data you need to understand how best to transform your organisation?

Once you know what you’re trying to achieve and why, you can understand the options open to you; you can then start looking at how you can transform your processes, technology, data and organisational structure; at which point you can then define your strategy and roadmap to deliver. All of the above should be developed in conjunction with your teams and stakeholders so that they are engaged with the changes that are/will be happening.

Any framework you pick should be flexible enough to work with you to support you and your organisation; they are a tool to enable successful Digital Transformation; not the answer to what is Digital Transformation.

So, for me; what does Digital Transformation mean?

As the Enterprise Project states; Digital transformation “is the integration of digital technology into all areas of a business, fundamentally changing how you operate and deliver value to customers. It’s also a cultural change that requires organisations to continually challenge the status quo, experiment, and get comfortable with failure.” Which I wholeheartedly agree with.

Agile Delivery in a Waterfall procurement world

One of the things that has really become apparent when moving ‘supplier side’ is how much the procurement processes used by the public sector to tender work don’t facilitate agile delivery.

The process of bidding for work, certainly as an SME, is an industry in itself.

This month alone we’ve seen multiple Invitations to Tender (ITTs) on the Digital Marketplace for Discoveries and the like, as many departments try to spend their budget before the end of the financial year.

The ITTs will mention user research and ask how suppliers will work to understand user needs or hire proper user researchers, but they will then state they only have 4 weeks or £60K to carry out the Discovery. While they specify the need for user research, no user recruitment has been carried out to let the supplier hit the ground running, and it’s not possible for it to be carried out before the project starts (unless as a supplier you’re willing to do that for free; and even if you are, you’ve got less than a week to onboard your team, do any reading you need to do and complete user recruitment, which just isn’t feasible); and we regularly see requests for prototypes within that time as well.

This isn’t to say that short Discoveries are impossible; if anything, COVID-19 has proved they are possible. However, there the outcomes we were trying to deliver were understood by all, the problems we were trying to solve were very clear, and there was a fairly clear understanding of the user groups we’d need to work with to carry out any research; all of this enabled the teams to move at pace.

But we all know the normal commercial rules were relaxed to support delivery of the urgent COVID-19 related services. Generally it’s rare for an ITT to clarify the problem the organisation is trying to solve, or the outcomes they are looking to achieve. Instead they tend to solely focus on delivering a Discovery or Alpha etc. The outcome is stated as completing the work in the timeframe in order to move to the next stage; not as a problem to solve with clear goals and scope.

We spend a lot of time submitting questions trying to get clarity on what outcomes the organisations are looking for, and sometimes it certainly feels like organisations are looking for someone to deliver them a Discovery solely because the GDS/Digital Service Standard says they need to do one. This means, if we’re not careful, halfway through the Discovery phase we’re still struggling to get stakeholders to agree the scope of the work and why we really do need to talk to that group of users over there that they’ve never spoken to before.

The GDS lifecycle

The GDS lifecycle and how it currently ties into procurement and funding (badly) means that organisations are reluctant to go back into Discovery or Alpha when they need to, because of how they have procured suppliers. If as a supplier you deliver a Discovery that finds there is no need to move into Alpha (because there are no user needs, for example), or midway through an Alpha you find the option you prioritised for your MVP no longer meets the needs as anticipated, clients still tend to view that money as ‘lost’ or ‘wasted’ rather than accepting the value in failing fast and stopping, or changing to do something that can add value. Even when clients do accept that, sometimes the procurement rules that brought you on to deliver a specific outcome mean your team now can’t pivot onto another piece of work, as that needs to be a new contract; either scenario could mean that as a supplier you lose the contract you spent so much time getting, because you did ‘the right thing’.

We regularly pick up work midway through the lifecycle; sometimes that’s because the previous supplier didn’t work out; sometimes it’s because they were only brought in to complete the Discovery or Alpha, and when it comes to re-tender, another supplier is now cheaper. That’s part and parcel of being a supplier; but I know from being ‘client side’ for so long how that can make it hard to manage corporate knowledge.

Equally, as a supplier, we rarely see things come out for procurement in Live, because there is an assumption that by Live most of the work is done; and yet if you follow the intent of the GDS lifecycle, rather than how it’s often interpreted, there should still be plenty of feature development, research and so on happening in Live.

This in turn is part of the reason we see so many services stuck in Public Beta. Services have been developed by or with suppliers who were only contracted to provide support until Beta. There is rarely funding available for further development in Live, but the knowledge and experience the suppliers provided has exited stage left, so it’s tricky for internal teams to pick up the work to move it into Live and continue development.

Most contracts specify ‘knowledge transfer’ (although sometimes it’s classed as a value add; when it really should be a fundamental requirement) but few are clear on what they are looking for. When we talk to clients about how they would like to manage that, or how we can ensure we can get the balance right between delivery of tangible outcomes and transferring knowledge, knowledge transfer is regularly de-scoped or de-prioritised. It ends up being seen as not as important as getting a product or service ‘out there’; but once the service is out there, the funding for the supplier stops and the time to do any proper knowledge transfer is minimal at best; and if not carefully managed suppliers can end up handing over a load of documentation and code without completing the peer working/ lunch and learns/ co-working workshops we’d wanted to happen.

Some departments and organisations have got much better at getting their commercial teams working hand in hand with their delivery teams; we can always see those ITTs a mile off, and it’s a pleasure to see them, as it makes it much easier for us as suppliers to provide a good response.

None of this is insurmountable, but we (both suppliers and commercial/procuring managers and delivery leads) need to get better at working together to look at how we procure/bid for work; ensuring we are clear on what the outcomes we’re trying to achieve are, and properly valuing ‘the value add’.

Agile at scale

What do we even mean when we talk about agile at scale and what are the most important elements to consider when trying to run agile at scale?

This is definitely one of those topics of conversation that goes around and around and never seems to get resolved or go away. What do we even mean when we talk about agile at scale? Do we mean scaling agile within a programme setting across multiple teams? Do we mean scaling it across multiple programmes? Or do we mean using it at scale within a whole organisation?

Whenever I’m asked what I believe to be the most important elements in enabling successful delivery using agile, or using agile at scale, the number one thing I will always talk about isn’t the technology; it isn’t digital capability or experience with the latest agile ways of working (although all those things are important and do obviously help); it’s the culture.

I’ve blogged before on how to change a culture and why it’s important to remember cultural change alongside business transformation; but more and more, especially when we’re talking about agile at scale, I’ve come to the conclusion that the culture of an organisation – and most especially the buy-in and support for agile ways of working at a leadership level – is the most fundamental element of being able to successfully scale agile.

Agile itself is sadly still one of those terms that is very marmite for some, especially in the senior leadership layers. They’ve seen agile projects fail; it seems like too much change for too little return; or it’s just something their digital/tech teams ‘do’ that they don’t feel the need to really engage with. GDS tells them they have to use it, so they do.

Which is where I think many of the agile at scale conversations stumble; it’s seen as a digital/tech problem, not an organisational one. This means that time and again, Service Owners, Programme Directors and agile delivery teams get stuck when trying to develop and get support for business cases that are trying to deliver holistic and meaningful change. We see it again and again. Agile delivery runs into waterfall funding and governance and gets stuck.

As a Service Owner or Programme Director trying to deliver a holistic service, how do you quantify in your business case the value this service and this approach to delivery will add? The obvious answer, hopefully, is using data and evidence to show the potential areas for investment and value it would add to both users and the business. But how do you get that data? Where from? How do you get senior leaders to understand it?

In organisations where agile at scale is a new concept, supporting senior leaders to understand why this matters isn’t easy. I often recommend that new CDOs, CEOs or Chief Execs ‘buddy up’ with or shadow other senior folks who have been through this journey; folks like Darren Curry, Janet Hughes, Tom Read and Neil Couling, who understand why it matters, have been through (or are going through) this journey themselves in their organisations, and are able to share their experiences, both good and bad.

I will always give full praise to Alan Eccles CBE, who was previously The Public Guardian and chief exec of the Office of the Public Guardian, without whom the first Digital Exemplar, the LPA online service, would never have gone live. Alan was always very honest that he wasn’t experienced or knowledgeable about agile or digital, but he was fully committed to making the OPG the first true Digital Exemplar agency, and to utilising everything digital, and agile ways of working, had to offer to transform the culture of the OPG and the services they delivered. If you want an example of what a true digital culture looks like, and how vocal and committed Alan was to making the OPG digital, just take a look at their blog, which goes all the way back to 2015 and maps the OPG’s digital journey.

Obviously, culture isn’t the only important factor when wanting to scale agile; the technology we use, the infrastructure and architecture we design and have in place, the skills of our people, the size of our teams and their capacity to deliver are also all important. But without the culture that encompasses and supports the teams, the ability to deliver at scale will always be a struggle.

The commitment at senior leadership level to change, to embracing the possibilities and options that a digital culture and using agile at scale bring, permeates through the rest of the organisation. It encourages teams to work in the open, fostering collaboration and identifying common components and dependencies. It acknowledges that failure is ok, as long as we’re sharing the lessons we’ve learned and are constantly improving. It supports true multidisciplinary working and enables holistic service design by encouraging policy, operations and finance colleagues to be part of the delivery teams. All of this in turn improves decision making and increases the speed and success of transformation programmes. Ultimately it empowers teams to work together to deliver; and that is how we scale agile.

Notes from some Digital Service Standard Assessors on the Beta Assessment

The Beta Assessment is probably the one I get the most questions about; primarily, “when do we actually go for our Beta Assessment and what does it involve?”

Firstly, what is an Assessment? Why do we assess products and services?

If you’ve never been to a Digital Service Standard Assessment it can be daunting; so I thought it might be useful to pull together some notes from a group of assessors, to show what we are looking for when we assess a service. 

Claire Harrison (Chief Architect at Homes England and a leading Tech Assessor) and Gavin Elliot (Head of Design at DWP and a leading Design Assessor; you can find his blog here) helped me pull together some thoughts about what a good assessment looks like, and what we are specifically looking for when it comes to a Beta Assessment.

I always describe a good assessment as the team telling the assessment panel a story. So, what we want to hear is:

  • What was the problem you were trying to solve?
  • Who are you solving this problem for? (who are your users?)
  • Why do you think this is a problem that needs solving? (What research have you done? Tell us about the users’ journey)
  • How did you decide to solve it and what options did you consider? (What analysis have you done?) 
  • How did you prove the option you chose was the right one? (How did you test this?)

One of the great things about the Service Manual is that it explains what each delivery phase should look like, and what the assessment team are considering at each assessment.

So what are we looking for at a Beta Assessment?

By the time it comes to your Beta Assessment, you should have been running your service for a little while with a restricted number of users in a Private Beta. You should have real data gathered from real users who were invited to use your service, and you should have iterated your service several times by now, given all the things you have learnt.

Before you are ready to move into Public Beta and roll your service out nationally, there are several things we want to check during an assessment.

You need to prove you have considered the whole service for your users and have provided a joined up experience across all channels.

  • We don’t want to just hear about the ‘digital’ experience; we want to understand how you have/will provide a consistent and joined up experience across all channels.
  • Are there any paper or telephony elements to your service? How have you ensured that those users have received a consistent experience?
  • What changes have you made to the back end processes and how has this changed the user experience for any staff using the service?
  • Were there any policy or legislative constraints you had to deal with to ensure a joined up experience?
  • Has the scope of your MVP changed at all so far in Beta given the feedback you have received from users? 
  • Are there any changes you plan to implement in Public Beta?

As a Lead Assessor, this is where I always find that teams who have suffered from a lack of empowerment, or from organisational silos, may struggle.

If the team are only empowered to look at the digital service, and have struggled to make any changes to the paper, telephony or face-to-face channels due to siloed working between Digital and Ops in their Department (as an example), the digital product will offer a very different experience to the rest of the service.

As part of that discussion we will also want to understand how you have supported users who need help getting online; and what assisted digital support you are providing.

At previous assessments you should have had a plan for the support you intended to provide; you should now be able to talk through how you are putting that into action. This could be telephony support or a web chat function, but we want to ensure the support being offered is (or will be) consistent with the wider service experience and is meeting your users’ needs. We also want to understand how it’s being funded, and how you plan to publish the accessibility information for your service.

We also expect by this point that you have run an accessibility audit and have carried out regular accessibility testing. It’s worth noting that if you don’t have anyone in house who is trained in running accessibility audits (we’re lucky in Difrent, as we have a DAC assessor in house), you will need to factor in the time it takes to get an audit booked in and run, well before you think about your Beta Assessment.
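If you do have developers or testers who can help, automated checks can run regularly alongside (never instead of) a full audit. The sketch below is purely illustrative: it assumes the axe-selenium-python package and a local Firefox/geckodriver setup, and the URL is a placeholder.

```python
# Purely illustrative: an automated accessibility check with axe-core,
# run regularly (e.g. in CI) alongside, never instead of, a full audit
# and testing with users of assistive technology.
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Firefox()                # assumes geckodriver is installed
driver.get("https://your-service.example")  # placeholder URL

axe = Axe(driver)
axe.inject()         # injects the axe-core script into the page
results = axe.run()  # runs the WCAG rule checks
axe.write_results(results, "a11y-results.json")
driver.quit()

# Fail the build if any violations were found.
assert len(results["violations"]) == 0, axe.report(results["violations"])
```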

Similarly, by the time you go for your Beta Assessment we would generally expect a Welsh language version of your service to be available. Again, this needs to be planned well in advance, as it can take time to get and is not (or shouldn’t be) a last minute job; something that, in my experience, a lot of teams forget to prioritise and plan for.

And finally, assuming you are planning to put your service on GOV.UK, you’ll need to have agreed things like your service’s name, its start page and its service.gov.uk domain name with the GOV.UK team at GDS before going into public beta.

Again, while it shouldn’t take long to get these things agreed with the GOV.UK team, they can sometimes have backlogs, so it’s worth making sure you’ve planned in enough time to get this sorted.

The other things we will want to hear about are how you’ve ensured your service is scalable and secure. How have you dealt with any technical constraints? 

The architecture and technology – Claire

From an architecture perspective, at the Beta phases I’m still interested in the design of the service, but I also focus on its implementation and the provisions in place to support the sustainability of the service. My mantra is ‘end-to-end, top-to-bottom service architecture’!

An obvious consideration in both the design and deployment of a service is security – how the solution conforms to industry, Government and legal standards, and how security is baked into a good technical design. With data, I want to understand its characteristics and lifecycle: is it identifiable? How is it collected? Where is it stored and hosted? Who has access to it? Is it encrypted, and if so when, where and how? I find it encouraging that in recent years there has been a shift towards thinking not only about how to prevent security breaches but also about how to recover from them.
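To give a flavour of the “is it encrypted, and how?” conversation, here’s a minimal, hypothetical sketch of encrypting identifiable fields before they’re stored, using the Python cryptography library; the record fields and the key handling are illustrative assumptions, not from any real service.

```python
# A minimal, hypothetical sketch: encrypting identifiable data before it
# is stored, using the 'cryptography' library's Fernet (authenticated,
# AES-based) symmetric encryption.
from cryptography.fernet import Fernet

# In a real service the key would come from a managed key store (e.g. a
# cloud KMS), never from source code; generated here purely to illustrate.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"name": "Jane Example", "postcode": "AA1 1AA"}  # illustrative data

# Encrypt each identifiable field before it is written to storage.
encrypted = {field: fernet.encrypt(value.encode()) for field, value in record.items()}

# Decrypt only at the point of use.
name = fernet.decrypt(encrypted["name"]).decode()
```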

Security is sometimes cited as a reason not to code in the open, but in actual fact this is hardly ever the case. As services are assessed on this, there needs to be a very good reason why code can’t be open. After all, a key principle of GDS is reuse – in both directions – for example making use of common government platforms, and publishing your own code so it can be used by others.

Government services such as Pay and Notify can help with some of a Technologist’s decisions and should be used as the default, as should open standards and open source technologies. When this isn’t the case, I’m really interested in the selection and evaluation of the tools, systems, products and technologies that form part of the service design. This might include integration and interoperability, constraints in the technology space, vendor lock-in, route to procurement, total cost of ownership, alignment with internal and external skills, and so on.
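To make the reuse point concrete, here’s a minimal sketch of sending an email through GOV.UK Notify using its published Python client; the API key, template ID and personalisation fields are placeholders you’d set up in your own Notify account, not values from any real service.

```python
# Illustrative sketch: reusing a common government platform (GOV.UK Notify)
# rather than building an email capability from scratch. The key and
# template ID below are placeholders.
from notifications_python_client.notifications import NotificationsAPIClient

client = NotificationsAPIClient("your-notify-api-key")  # placeholder

response = client.send_email_notification(
    email_address="user@example.com",         # the recipient
    template_id="your-template-id",           # created in Notify beforehand
    personalisation={"reference": "ABC123"},  # fills placeholders in the template
)
```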

Some useful advice would be to think about your technology choices as a collective, rather than piecemeal as and when a particular tool or technology is needed. Yesterday I gave a peer review of a solution under development where one tool had been deployed in isolation, rather than as part of an evaluation of the full technology stack; this meant there were integration problems as new technologies were added.

The way that a service evolves is really important too, along with the measures in place to support its growth. Cloud based solutions help take care of some of the more traditional scalability and capacity issues, and I’m interested in understanding the designs around these, as well as any other mitigations in place to help assure the availability of the service. As part of the Beta assessment, the team will need to show their plan for dealing with the service being taken temporarily offline: strategies for handling incidents that impact availability, the strategy for recovering from downtime, and how these have been tested.
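As one small, concrete building block for that availability story, a health-check endpoint gives your monitoring tools and load balancers something to poll, so incidents are spotted (and routed around) quickly. The sketch below uses Flask purely as an illustration, not a prescribed pattern, and the dependency check is a placeholder.

```python
# Illustrative only: a health-check endpoint that monitoring tools and
# load balancers can poll to detect, and route around, downtime.
from flask import Flask, jsonify

app = Flask(__name__)

def database_is_reachable():
    # Placeholder for a real dependency check (e.g. a cheap SELECT 1).
    return True

@app.route("/healthcheck")
def healthcheck():
    healthy = database_is_reachable()
    # 503 tells a load balancer to stop sending traffic to this instance.
    return jsonify({"status": "ok" if healthy else "degraded"}), (200 if healthy else 503)
```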

Although a GDS Beta assessment focuses on a specific service, it goes without saying that a good Technologist will be mindful of how the service they’ve architected impacts the enterprise architecture, and vice versa. For example, if a new service is built with microservices and introduces an increased volume and velocity of data, does the network need to be strengthened to cope with the increase in communications traversing it?

Legacy technology (as well as legacy ‘Commercials’ and ways of working) is always on my mind. Obviously during an assessment a team can show how they address legacy within the scope of that particular service, be it some form of integration with legacy or applying the strangler pattern; but organisations really need to put as much effort into dealing with legacy as they put into new digital services. Furthermore, they need to think about how to avoid creating the ‘legacy systems of the future’, by ensuring the sustainability of their service from a technical, financial and resource perspective. I appreciate this isn’t always easy! However, I do believe that GDS should, and will, put much more scrutiny on organisations’ plans to address legacy issues.
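For anyone unfamiliar with the strangler pattern mentioned above, the idea is to put a routing façade in front of the legacy system and move journeys across to the new service one at a time, until the legacy system can be retired. Below is a deliberately simplified sketch, not a production-ready proxy; all the URLs and paths are hypothetical.

```python
# A deliberately simplified sketch of the strangler pattern: a façade
# that routes migrated journeys to the new service and everything else
# to the legacy system. All URLs and paths are hypothetical.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

NEW_SERVICE = "https://new-service.internal"  # hypothetical
LEGACY = "https://legacy-system.internal"     # hypothetical

# Journeys migrated to the new service so far; this list grows over time
# until the legacy system is fully 'strangled'.
MIGRATED_PREFIXES = ("/apply", "/check-status")

@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def route(path):
    upstream = NEW_SERVICE if request.path.startswith(MIGRATED_PREFIXES) else LEGACY
    resp = requests.request(
        method=request.method,
        url=f"{upstream}{request.full_path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        allow_redirects=False,
    )
    return Response(resp.content, status=resp.status_code)
```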

One final point from me is that teams should embrace an assessment. Clearly the focus is on passing, but regardless of the outcome there’s a lot of value in the feedback you gain. It’s far better to get constructive feedback during the assessment stages than to have to deal with disappointed stakeholders further down the line, and probably have to spend more time and money strengthening or redesigning the technical architecture.

How do you decide when to go for your Beta Assessment?

Many services (for both good and bad reasons) have struggled with the MVP concept, and as such the journey to get their MVP rolled out nationally has taken a long time, and contained more features and functionality than teams might have initially imagined.

This can make it very hard to decide when you should go for an Assessment to move from Private to Public Beta. If your service is going to be rolled out to millions of people; or has a large number of user groups with very different needs; it can be hard to decide what functionality is needed in Private Beta vs. Public Beta or what can be saved until Live and rolled out as additional functionality. 

The other thing to consider is: what does your rollout plan actually look like? Are you able to go national with the service once you’ve tested with a few hundred people from each user group? Or, as is more common with large services like NHS Jobs where you are replacing an older service, does the service need to be rolled out in a very set way? If so, you might need to keep inviting users in until full rollout is almost complete, making it hard to judge when the right time for your Beta Assessment is.

There is no right or wrong answer here; the main thing is that you will need to understand all of the above before you can roll your service out nationally, and be able to tell that story to the panel successfully.

This is because, theoretically, most of the heavy lifting is done in Private Beta; once you have rolled your service out into Public Beta, the main things left to test are whether your service scales and works as you anticipated. Admittedly this (combined with confusion about the scope of an MVP) is why most services never actually bother with their Live Assessment. For most services, once you’re in Public Beta the hard work has been done; there’s nothing more to do, so why bother with a Live Assessment? But that’s an entirely different blog!

Reviewing the service together.


So, what is a Service Owner?

Before I discuss what (in my view) a Service Owner is, a brief history lesson into the role might be useful.

The role of the ‘Service Manager’ was seen as critically important to the success of a product; they were defined as a G6 (Manager) who had responsibility for the end to end service AND was the person who led the team through their Service Standard assessments.

Now let’s think about this a bit; back when the GDS Service Standard and the Service Manual were first created, they were designed specifically for (and with) GOV.UK in mind. As such, this definition of the role made some sense: GOV.UK was (relatively) small and simple, and one person could ‘own’ the end to end service.

The problem came about when the Service Standards were rolled out wider than GDS itself. DWP is a good example of where the role didn’t work.

The Service Manual describes a service as the holistic experience for a user; so it’s not just a Digital Product, it’s the telephony service that sits alongside it, the back end systems that support it, the Operational processes that staff use to deliver the service daily, along with the budget that pays for it all. Universal Credit is a service, State Pension is a service; and both of these services are, to put it bluntly, HUGE.

Neil Couling is a lovely bloke, who works really hard, and has the unenviable task of having overarching responsibility for Universal Credit. He’s also a Director General. While he knows A LOT about the service, it is very unlikely that he would know the full history of every design iteration and user research session the Service went through, or be able to talk in detail about the tech stack and its resilience; and even if he did, he would be very unlikely to have the 4 hours spare to sit in the various GDS assessments UC went through.

This led to us (in DWP) phasing out the role and splitting the responsibilities in two: the newly created role of Product Lead, and the Service Owner. The Product Lead did most of the work of the Service Manager (in terms of GDS assessments etc.), but they didn’t have responsibility for the end to end service; that sat with the Service Owner. The Service Owner was generally a Director General (and also the SRO), whose responsibilities for Digital Services we clarified.

A few years ago, Ross (the then Head of Product and Service Management at GDS) and I, along with a few others, had a lot of conversations about the role of the Service Manager: why, in departments like DWP, the role did not work, and what we were doing instead.

At the time there was agreement in many of the Departments outside GDS that the Service Manager role wasn’t working as intended, and was instead causing confusion and, in some cases, creating additional unnecessary hierarchy. The main problem, as it had been in DWP, was that the breadth of the role was too big for anyone below SCS; which meant we ended up with Service Managers who were only responsible for the digital elements of the service (and often reported to a Digital Director), while all the non digital elements sat under a Director outside of Digital, creating more division and confusion.

As such, the Service Manual and the newly created DDaT framework were changed to incorporate the role of the Service Owner instead of the Service Manager, with the suggestion that this should be an SCS level role. However, because the SCS sat outside the DDaT framework, the extent to which the role could be defined or specified was rather limited, and it ended up as more of a suggestion than a clearly defined requirement.

Interestingly, the latest version of the DDaT framework has removed both the suggestion that the role should be an SCS role and any reference to the crossover with the responsibilities of the SRO, and now makes the role sound much more ‘middle management’ again, although it does still specify ownership of the end to end service; re-adding the confusion we tried to remove a few years ago.

Ok, so what should a Service Owner be?

When we talked about the role a few years ago, the intention was very much to define how the traditional role of the SRO joined up closer to the agile/digital/user centred design world; in order to create holistic joined up services.

Below is (at least my understanding of) what we intended the role to be:

  • They should have end to end responsibility for the holistic service.
  • They should understand and have overall responsibility for the scope of all products within the service.
  • They should have responsibility for agreeing the overall metrics for their service and ensuring they are met.
  • They should have responsibility for the overall budget for their service (and the products within it).
  • They should understand the high level needs of their users, and what teams are doing to meet their needs.
  • They should have an understanding of (and have agreed) the high level priorities within the service. (Which Product needs to be delivered first? Which has the most urgent resource needs? etc.)
  • They should be working with the Product/Delivery/Design leads within their Products as much as the Operational leads etc., to empower them to make decisions, and to understand the decisions that have been made.
  • They should be encouraging and supporting cross functional working to ensure all elements of a service work together holistically.
  • They should be fully aware of any political/strategy decisions or issues that may impact their users and the service, and be working with their teams to ensure those are understood, to minimise risks.
  • They should understand how Agile, Waterfall and any other change methodologies work to deliver change, and how best to support their teams no matter which methodology is being used.

In this way the role of the Service Owner would add clear value to the Product teams, without adding in unnecessary hierarchy. They would support and enable the development of a holistic service, bringing together all the functions a service would need to be able to deliver and meet user needs.

Whether they are an SCS person or not is irrelevant; the important thing is that they have the knowledge and ability to make decisions that affect the whole service, that they have overall responsibility for ensuring users’ needs are met, that they can ensure all the products within the service work together, and that their teams are empowered to deliver the right outcomes.