
How do we make legacy transformation cool again?

Guest blog first published as part of #TechUK’s Public Sector week on 24 June 2022.

Legacy transformation is one of those phrases; you hear it and just… sigh. It conjures up images of creaking tech stacks, and migration plans that are more complex, and last longer, than your last relationship.

Within the Public Sector, over 45% of IT spend is on legacy tech. Departments have been trying to tackle legacy transformation for more than 20 years, but it remains the number one blocker to digital transformation.

[Image: black and white servers covered in wires]

So why is it so hard and what can we do about it?   

The fundamental problem with legacy transformation is that, as an approach, it’s outdated.

The problem companies are trying to solve is that their technology systems need modernising or replacing. Usually (at least in the Public Sector) these programmes come about because a contract is coming to an end, and/or the platform the company’s technology was built upon is effectively burning and can no longer be maintained.

The problems with this approach are:  

  • It so often ends in a big bang transition, driven by the desire to avoid hybrid running of services because of the complexity of migration.
  • The architecture of the new system is constrained by the need to remain consistent with the technical architecture used across the organisation.
  • Transformation programmes can easily fall into the trap of delivering a ‘like for like’ solution that misses out on opportunities for innovation; often because a cliff-edge contract leaves them in a rush to find a replacement quickly.
  • The programmes are developed in silos, considering only the technical changes needed, not the wider business change needed to make transformation stick.
  • Value is only delivered once the new service goes live and the old system is turned off. This leaves many organisations needing to run both systems at once, despite the large cost implications.

Due to these issues, the big bang delivery often ends up much later than planned, costing significantly more while meeting neither user nor business needs; and it quickly becomes outdated.

Don’t forget, the latest thing you’ve just updated will itself be considered legacy in five years. So do we need to start thinking about legacy transformation differently? Is there an iterative approach to legacy transformation that works, and how should we approach it?

Within Kainos we’ve worked hard to bring the user-centred design principles we’ve used to successfully deliver digital services to high-impact legacy transformation programmes. By understanding user needs and business requirements, we can plan early for ‘just enough’ legacy change to support the transformation. We prioritise and identify where and when value can be added; build scalable and extensible services that maximise automation opportunities; and carefully evaluate transition options and data migration dependencies, so that we meet user needs and add value at each stage without risking business disruption.

[Image: a whiteboard covered in post-it notes and a user journey, demonstrating user-centred design]

This incremental, user-centred approach allows us to identify opportunities for innovation and truly enable digital transformation that focuses on business benefits, reducing overall costs whilst realising value early and often.

By thinking about business change and taking this iterative approach to realise value early and often, we’ve been able to stop assuming that every element of the old legacy service needs throwing out and replacing. Instead, we identify the elements that can be kept, with just a bit of love and care to update them and make them work, and the elements where we need to deliver something new. By prioritising where we focus our effort, we can meet those critical user and business needs with something old, something new, or a combination of the two.

Up-cycling doesn’t just work for vintage furniture and clothes, after all; maybe it’s time we took the same mindset to technical transformation, reinventing something old and making it into something better and new. Tech changes faster than ever, so if we don’t change our mindset and approach, we won’t just go out of fashion; we’ll be outdated.

By adapting our approach to legacy transformation, Kainos are able to build excellent services that are secure and that users want to use; transform business processes to fully embrace digital channels; deliver microservices architectures that reduce future legacy risk; and optimise costs to benefit from public cloud platforms.

Maximising the Lean Agility approach in the Public Sector

First published on 26 June 2022 as part of #TechUK’s Public Sector week; co-authored by Matt Thomas.

We are living in a time of change, characterised by uncertainty. Adapting quickly has never been more important than today, and for organisations, this often means embracing and fully leveraging the potential of digital tools.

A lot has been said about Lean Agility but for an organisation in the Public Sector facing the prospect of a digital transformation, it is still difficult to understand what to do and how.

In our mind, while lean helps to solve the right problems, agility supports quick adaptability and the ability to change course whenever necessary.

[Image: a ‘Build, Measure, Learn’ poster, with a pencil eraser removing the ‘Learn’]

At Kainos, working in the Digital Advisory team, the one problem we hear about repeatedly from clients is the difficulty of delivering the right thing at pace, and how they struggle to maximise their efficiency. Some of the typical red flags we see when we begin to understand why clients are struggling to deliver effectively are:

  • evergreen delivery projects that never end, with no end product in sight, or a product nobody uses being constantly tweaked, as opposed to teams delivering units of quantifiable value,
  • lack of prioritisation: everything is a priority, so everything is in flight at the same time,
  • stalled or slow development, with poor delivery confidence and large gaps between releases,
  • traditional long-term funding cycles requiring a level of detail which doesn’t match near-term agile planning and responsive delivery,
  • ineffective communication and a lack of experienced delivery leadership, so decisions are made on gut feel and who shouts loudest rather than being firmly tied to desired business outcomes,
  • siloed pockets at various stages of Agile adoption, maturity and effectiveness, making coordinated planning and collaboration difficult.

Within Kainos, our belief was that by introducing Lean-Agility Management we could scientifically remove waste and inefficiency whilst increasing delivery confidence, employee job satisfaction and visibility of the work being undertaken. As such, we introduced a lightweight and straightforward Lean-Agility approach that could be adopted across multiple portfolios.

Our approach does not just focus on Agile coaching (although that’s part of it) or other isolated elements of a transformation, but on 4 distinct pillars: Lean-Agility Management, Lean-Analytics & Dashboarding, Product & Design Coaching and Agile Coaching & Architecture.  This gives us the opportunity to build sustainability and in-house expertise to continue this journey. 

Recently we’ve been working with an integrated energy super-major to help them improve in several of these key areas.  We were asked to help, whilst contributing to the wider Agility transformation by bringing consistent high standards in delivery culture and ways of working through Lean and Agility. 

The results have delighted the client; we have managed to improve delivery speed by over 70%, delivery confidence by more than 50% and job satisfaction by over 20%.

This approach is one we’re using with several other clients in the commercial sector, all with similar positive effects; but it’s not something we often encounter within the Public Sector, either from us or from other consultancies.

How can this approach help the public sector and what is needed to make this a success?

From our experience, we have found the key elements to getting this right are:  

  • Starting with a Proof of Value (POV) – we tend to pick two volunteer squads to test with and prove this approach can work and add value.
  • Senior buy-in and time – Agility transformation lives and dies by the clarity and direction of its leaders; teams need clear leadership, and the support and empowerment to innovate and improve.
  • A pod structure that connects the transformation from exec to squads.
  • A multi-disciplined Agility team with knowledge of Product, Design and DevSecOps as well as Agility.
  • A desire to change culture – we don’t just mean continuous improvement, everybody does that; the difference is evolving to a resolute passion to rigorously improve everything.
  • Data at the core – clear metrics give teams a direction of travel and an idea of where targeted improvements could add real value.
  • Considering the people – we track job satisfaction because it’s important. Improvements come from your people, and if you keep losing them you’ll constantly be hiring and retraining, which is costly in time and money. Happy people innovate and perform better.
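To make the ‘data at the core’ point a little more concrete: a metric like cycle time can be computed from nothing more than the start and finish dates of completed work items. The sketch below is a minimal, hypothetical example (the ticket dates are invented, and real tooling would pull these from a work-tracking system):

```python
from datetime import date

# Hypothetical completed work items: (started, finished) dates.
tickets = [
    (date(2022, 6, 1), date(2022, 6, 8)),
    (date(2022, 6, 3), date(2022, 6, 6)),
    (date(2022, 6, 7), date(2022, 6, 21)),
]

# Cycle time per item, in days.
cycle_times = [(done - start).days for start, done in tickets]

avg_cycle_time = sum(cycle_times) / len(cycle_times)
print(f"Average cycle time: {avg_cycle_time:.1f} days")  # prints "Average cycle time: 8.0 days"
```

Tracked over time, even a simple number like this gives a team a direction of travel and shows whether targeted improvements are actually working.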

Our Lean-Agility approach is very much an Agile approach to an Agile transformation: we start small, prove the value, learn your business, customise and adapt. Lean-Agility is something we mould to you rather than a theory we try to plug and play; in that sense, Lean-Agility for you will look and feel different to Lean-Agility for a different client, and so it should!

Becoming Product Led

Recently I was asked how I would go about moving an organisation to being product led when agile and user-centric design are equally new to the company, or when agile has not delivered in the way that was expected.

Before diving into the how, I think it’s worth first considering the what and the why.

What do we mean by being ‘product led’?

A product led approach is one where your product experience is the central focus of your organisation. Within the public sector we incorporate user-centric design into our products to ensure that we deliver real value by:

  • taking an outside-in perspective (starting with user needs);
  • rapid, early validation of ideas (testing early and often);
  • maturing through iteration (based on user feedback); and
  • disciplined prioritisation (using quantitative and qualitative data) to deliver value.
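To illustrate the ‘disciplined prioritisation’ point, one common quantitative technique (my illustration, not something specific to any framework mentioned here) is RICE scoring, where each backlog item is scored on Reach, Impact, Confidence and Effort. A minimal sketch, with entirely invented item names and numbers:

```python
# Hypothetical backlog items with RICE inputs:
# reach = users affected per quarter, impact = relative benefit,
# confidence = 0..1, effort = person-weeks.
backlog = {
    "simplify sign-in": {"reach": 8000, "impact": 2.0, "confidence": 0.8, "effort": 3},
    "rebuild reporting": {"reach": 500, "impact": 3.0, "confidence": 0.5, "effort": 8},
    "fix address lookup": {"reach": 12000, "impact": 1.0, "confidence": 1.0, "effort": 2},
}

def rice_score(item):
    # Classic RICE: (Reach x Impact x Confidence) / Effort.
    return item["reach"] * item["impact"] * item["confidence"] / item["effort"]

# Rank items from highest to lowest score.
ranked = sorted(backlog, key=lambda name: rice_score(backlog[name]), reverse=True)
for name in ranked:
    print(name, round(rice_score(backlog[name])))
```

The scores are only as good as the data behind them, which is exactly why the qualitative side (user research, feedback) matters just as much as the arithmetic.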

Is this not just another name for agile?

This is a question that comes up regularly; and in my opinion, no, it’s not. Agile is a delivery methodology; being product led is wider than that. It’s the wrapper that sits above and surrounds the delivery approach you use. It comes before you decide which delivery methodology you will use, and continues long after. It’s your culture and ways of working. The two can often go hand in hand; but if agile is the how, product is the what and the why.

Why is being product led important?

Well, by moving to a product led approach we allow the organisation to link their outputs to their customers’ needs and ensure they align to their organisational capabilities and strategy. It also allows organisations to focus on their customers’ needs and understand their users’ perspectives. By understanding and focusing on user needs, organisations can deliver value faster, making it quicker and easier to learn from what has gone well (and what hasn’t), which in turn makes it cheaper and faster to address any issues or risks. It also makes it easier for organisations to spot opportunities for innovation and growth.

How do you move your organisation to being product led?

First things first: a culture that empowers the asking of questions and the testing of hypotheses is essential for innovation. But to allow that to happen, organisations need senior leaders who understand and support their teams to work in this way. Appropriate, lightweight and adaptable governance and funding approval processes are critical to enable product innovation and empower delivery teams.

The second key element is having the right data. Good product orientation depends on having access to quality data: what are our current metrics? Where are our current pain points? Do we understand our current costs? Which products and services have the highest demand? This data enables us to make quality decisions and measure our progress and successes.

Thirdly, we need a clearly articulated strategy and vision for the organisation: what is our USP (Unique Selling Proposition)? What do we want to achieve? What are our goals? What value are we looking to add? What do we want to be different in 5/10 years from now?

To develop that strategy/vision, we need to have a clear understanding about our users and stakeholders. Who are we developing these products for? Who are our stakeholders? How are we engaging with them? What do they need from us?

Finally, once we’ve got the strategy, the vision, an understanding of our user needs and a set of hypotheses we want to test, we need a healthy delivery approach, with skilled teams in place to enable us to test our ideas and deliver that value. As we’ve said previously, to be product centric we need to be able to design services that are based on user needs, so that we can test regularly with our users to ensure we understand, and are meeting, those needs.

What are the signs of a good product led culture?

  • You are regularly engaging with the users; working to understand their needs and iterating your approach and services based on their feedback.
  • Your culture empowers and encourages people to ask questions: “Why are we doing this?”; “Who are we doing this for?”; “Is anyone else already doing this?”; “What will happen if we don’t do this (now)?”; “What have we learnt from our previous failures/successes?”
  • Your teams are working collaboratively, policy and operations teams working hand in hand with tech/digital teams; to ensure you’re delivering value.
  • You’re considering and testing multiple options at each stage; looking for innovative solutions, and working to understand which options will best meet your users’ needs and add the most value.
  • Linked to the above; You’re testing regularly, being willing to ‘throw away’ what doesn’t work and refine your ideas based on what does work.
  • You’re delivering value early and often.

[Image: prioritising the backlog]

Which comes first, the Product Manager, or the product culture?

If you don’t have any trained product people, can you begin to move to a product led culture, or must you hire the product people first? This is the chicken and egg question. Many organisations, especially those already using agile delivery methodologies or engaged in digital transformation, have already sunk a lot of time and money into delivery, and pausing their work whilst they change their culture and hire a load of skilled product folk just isn’t going to work. But you can begin to move towards a product led approach without hiring a load of Product Managers. Whilst having experienced product folk can definitely help, you probably have lots of people in the organisation who are already over halfway there and just need some help on that road.

One stumbling block many organisations trip over on their move to a product led approach is the difference between focusing on outcomes, rather than outputs or features.

An output is a product or service that you create; an outcome is the problem that you solve with that product. A feature is something a product or service does, whereas a benefit is what customers actually need. If we go straight to developing features, we could be making decisions based on untested assumptions. 

There are five steps to ensure you’re delivering outcomes that add value and deliver benefits, rather than focusing on features that simply deliver an output:

  • State the Problem – what are we trying to solve/change?
  • Gather User Data – have we understood the problem correctly?
  • Set Concrete Goals and Define Success Criteria – what would success look like?
  • Develop Hypotheses – how could we best solve this problem?
  • Test Multiple Ideas – does this actually solve the problem?
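The last three steps can be sketched as a tiny check of observed results against success criteria agreed up front. This is a minimal, hypothetical example (the metric names and thresholds are invented for illustration):

```python
# Hypothetical success criteria, agreed before testing the idea.
success_criteria = {
    "task_completion_rate": 0.85,  # at least 85% of users complete the task
    "avg_time_seconds": 120,       # in at most two minutes on average
}

# Hypothetical results observed when testing a prototype with users.
observed = {"task_completion_rate": 0.91, "avg_time_seconds": 104}

def criteria_met(criteria, observed):
    # The hypothesis is supported only if every threshold is met.
    return (observed["task_completion_rate"] >= criteria["task_completion_rate"]
            and observed["avg_time_seconds"] <= criteria["avg_time_seconds"])

print(criteria_met(success_criteria, observed))  # prints "True"
```

The point is not the code but the discipline: success is defined before the test, so the result is a fact about the idea rather than a matter of opinion.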

When you’re trying to identify the right problem to fix, look at existing data from previous field studies, competitive analysis, analytics, and feedback from customer support. Use a mix of quantitative and qualitative data to ensure you have understood your users’ needs and their behaviours. Then analyse the information, spot any gaps, and perform any additional research required to help you verify the hypothesis you have developed when trying to decide how you could solve the problem your users are facing.

The key element to being product led is understanding the problem you are trying to fix and focusing on the value you will deliver for your users by fixing it. It’s about not assuming you know what your users want, but engaging with your users to understand what they need. It’s about spotting gaps and opportunities to innovate and add value, rather than simply rebuilding or replacing what already exists. It’s about focusing on delivering that value early and often.

Assessing Innovation

(co-written with Matt Knight)

Some background, for context

Just over a month ago I was approached and asked if I could provide some advice on assessments to support phase two of the GovTech Catalyst (GTC) scheme. For those who aren’t aware of the GovTech Catalyst scheme, there’s a blog here that explains how the scheme was designed to connect private sector innovators with public sector sponsors, using Innovate UK’s Small Business Research Initiative (SBRI) to help find promising solutions to some of the hardest public sector challenges.

[Image: person in a lab coat with a stethoscope around their neck, looking through a virtual reality headset]

The sponsor we were working with (one of the public sector sponsors of the scheme) had put two suppliers through to the next phase and allocated funding to see how and where tech innovation could help drive societal improvements in Wales. As part of their spend approval for the next phase, the teams had to pass the equivalent of a Digital Service Standard assessment at the six-month point in order to get funding to proceed.

For those who aren’t aware, there used to be a lovely team in GDS who would work with the GTC teams to provide advice and run the Digital Service Standard assessments for the projects. Unfortunately, this team was stood down last year, after the recent GTC initiatives started, leaving the teams with no one to talk to about assessments, and no one in place to assess them.

The sponsor had reached out to both GDS and NHS Digital to see if they would be willing to run the assessments or provide advice to the teams, but had no luck, which left them a bit stuck; and that’s where I came in. I’ve blogged before about the Digital Service Standards, which led to the sponsor reaching out to ask whether I’d be willing and able to help them out, or whether I knew any other assessors who might be willing to help.

Preparing for the Assessments

As there were two services to assess, one of the first things I did was talk to the wonderful Matt Knight to see if he’d be willing and able to lead one of the assessments. Matt’s done even more assessments than me, and I knew he would be able to give some really good advice to the product teams to get the best out of them and their work.

Matt and I sat down and discussed how to ensure we were approaching our assessments consistently: how to honour and adhere to the core tenets of the Digital Standards whilst also assessing the teams’ innovation and the value for money their services could deliver, in line with the criteria for the GovTech scheme.

What quickly became apparent was that, because this was to support the GTC scheme, the teams doing the work were fully private sector, with little experience of the Digital Service Standards. A normal assessment, with the standard ‘bar’ we’d expect teams to meet, wouldn’t necessarily work well; we’d need to be a little flexible in our approach.

Obviously, no matter what type of assessment you’re doing, the basic framework stays the same: start with user needs, then think about the end-to-end service, then you can talk about the team and design and tech, and along the way you need to ask about the awkward stuff like sustainability and open source and accessibility and metrics. That framework can be applied to almost anything and come up with a useful result, regardless of sector, background or approach.

As the services were tasked with trying to improve public services in Wales, we also wanted to take account of the newly agreed Welsh Digital Standards, using them alongside the original Digital Standards. The main difference was the parts of the Welsh Standards covering the well-being of people in Wales and promoting the Welsh language (standards 8 & 9); you can read more about the Well-being of Future Generations Act here.

The assessments themselves 

[Image: a team mapping out a user journey]

The assessments themselves ran well (with thanks to Sam Hall, Coca Rivas and Claire Harrison, my co-assessors). While the service teams were new to the process, they were both fully open and willing to talk about their work: what went well, what didn’t, and what they had learnt along the way. There was some great work done by both the teams we assessed, and it’s clearly a process that everyone involved, service teams and sponsor team alike, learned a lot from; it was great to hear how they’d collaborated to support user research activities. Both panels went away to write up their notes, at which point Matt and I exchanged notes to see if there were any common themes or issues. Interestingly, both assessments had flagged the need for a Service Owner from the sponsor to be more involved, in order to help the teams identify their success measures.

When we played the recommendations and findings back to the sponsor, this led to an interesting discussion. The sponsor had nominated someone to act as the link for the teams, to answer their questions and to try to provide some guidance and steer where they could. But because of the terms of the GTC scheme, the rules on what steers they could and couldn’t give were quite strict, to avoid violating the terms of the competition. Originally the GTC team within GDS would have helped sponsors navigate these slightly confusing waters in terms of competition rules and processes. Without an experienced team to turn to for advice, sponsors are left in a somewhat uncomfortable and unfamiliar position, although they had clearly done their best (and the recommendations in this blog are general comments on how we can improve how we assess innovation across the board, not specifically aimed at them).

Frustratingly, this meant that even when teams were potentially heading into known dead-ends, while the sponsor could try to provide some guidance and steer them in a different direction, they couldn’t force the teams to pivot or change; the only option would be to pull the funding. While this makes sense from a competition point of view, it makes little to no sense from a public purse point of view, or from a Digital Standards point of view. It leaves sponsors stuck when things go a little off track: rather than being able to get teams to pivot, they are left choosing between potentially throwing away some great work, or investing money in projects that may not be able to deliver.

Which then raises the question; how should we be assessing and supporting innovation initiatives? How do we ensure they’re delivering value for the public purse whilst also remaining fair and competitive? How do we ensure we’re not missing out on innovative opportunities because of government bureaucracy and processes? 

In this process, what is the point of a Digital Service Standard assessment? 

If it’s like most other assessment protocols (do not start Matt on his gateway rant), then it’s only there to assess work that has already happened. If so, then it’s not much good here, where teams are so new to the standards and need flexible advice and support on what they could do next.

If it’s to assess whether a service should be released to end users, then it’s useful in central government when looking to roll out and test a larger service; but not so much use for a small service, one with mainly internal users, or a service that’s earlier in the process and aiming to test a proof of concept.

If it’s to look at all of the constituent areas of a service, and provide help and guidance to a multidisciplinary team on how to make it better and what gaps there are (and a bit of clarity from people who haven’t got too close to see clearly), then it’s a lot of use here, and in other places; but we need to ensure the panel has the right mix of experts to be able to assess this.

While my panel was fantastic, and we were able to assess the levels of user research the team had done, their understanding of the problems they were seeking to solve, their ability to integrate with legacy tech solutions and how their team was working together, none of us had any experience in assessing innovation business cases or understanding whether teams had done the right due diligence on their financial funding models. The standards specify that teams should have their budget sorted for the next phase and a roadmap for future development; in my experience this has generally been a fairly easy yes or no, and I certainly wouldn’t know a good business accelerator if it came and bopped me on the nose. So while we could take a very high-level call on whether we thought a service could deliver some value to users, and whether a roadmap or budget looked reasonable, a complex discussion on funding models and investment options was a little outside our wheelhouse, and not an area we could offer any useful advice or recommendations on.

How can we deliver and assess innovation better going forward? 

If we’re continuing to use schemes like the GTC scheme to sponsor and encourage private sector innovators to work with the public sector to solve important problems affecting our society, then we obviously need a clear way to assess their success. But we also need to set these schemes up in such a way that the private sector is genuinely working with the public sector; that means working in partnership, able to advise and guide them where appropriate, to ensure we’re spending public money wisely.

There is a lot of great potential out there to use innovative tech to help solve societal issues, but we can’t just throw those problems at the private sector and expect them to do all the hard work. While the private sector can bring innovative and different approaches and expertise, we shouldn’t ignore the wealth of experience and knowledge within the public sector either. We need people within the public sector with the right digital skills, who are able to prioritise and understand the services that are being developed, in order to ensure that the public purse doesn’t pay for stuff that already exists to be endlessly remade.

Assessment can have a role in supporting innovation, as long as we take a generous rather than nitpicking (or macro rather than micro) approach to the service standard. Assessments (and the Standards themselves) are a useful format for structuring conversations about services that involve users (hint: that’s most of them); just the act of starting with user needs (point 1) rather than tech changes the whole conversation.

However, to make this work and add real value, ‘solve a whole problem for users’ (point 2 of the new UK government standard) is critical, and that involves having someone who can see the entire end-to-end process for any new service and devise and own success measures for it. The best answer to both delivering innovation and assessing it is bringing the private and public sector together to deliver real value: creating a process that builds capacity, maturity and genuine collaboration within the wider public sector. A space to innovate and grow solutions. True multidisciplinary collaboration, working together to deliver real value.

“Together, We Create”

Big thanks to Matt for helping collaborate on this, if you want to find his blog (well worth a read) you can do so here:

Service Owner vs. Programme Manager vs. Product Lead

What’s the difference? Does the name matter?

Over a year ago, following an interesting chat with David Roberts at NHSBSA, I got to thinking about the role of the Service Owner, and why the role wasn’t working in the way we intended back in the dawn of the Service Manual. This in turn (as most things do for me) led to a blog, in order to try and capture my thoughts in the vague hope they might be useful or interesting to anyone reading them.

Ironically, for what was a random think-piece, it has consistently been my most popular blog, getting at least a dozen reads every day since I wrote it. Which got me thinking again: what is it about that blog that resonates with people? The fact is, the role of the Service Owner is no better or more consistently understood today than it was then. Confusion over the Service Owner’s role and responsibilities is still one of the most common things I get asked about. What’s the difference between a Service Owner and a Service Manager (is there one)? How and why is the role different to that of the Product Lead? What is the difference between a Service Manager and a Programme Manager? Is the Service Owner different to the SRO? What do all these different role titles mean?

[Image: What’s in a name?]

Every department and agency within the Public Sector seems to have implemented the role of the Service Owner differently, which makes it very hard for those in the role (or considering applying for it) to understand what they should be doing or what their responsibilities are. This is probably why, as a community of practice within DDaT, it certainly used to be one of the hardest communities to bring together, as everyone in it was doing such different roles.

Some clients I’ve been working with use the roles of Service Owner and Lead Product Manager interchangeably; some have Service Owners who sit in Ops and Service Managers who sit in Digital (or vice versa); some have Service Managers sitting alongside Programme Managers, or Service Owners alongside Programme Directors, all desperately trying not to stand on each other’s toes.

So what is the difference?

The obvious place to look for clarity is surely the Service Manual, or the DDaT capability framework. The Service Manual specifies that the Service Owner is: “the decision-making authority to deliver on all aspects of a project. Who also:

  • has overall responsibility for developing, operating and continually improving your service
  • represents the service during service assessments
  • makes sure the necessary project and approval processes are followed
  • identifies and mitigates risks to your project
  • encourages the maximum possible take-up of your digital service
  • has responsibility for your service’s assisted digital support”

When the DDaT Capability Framework was first written, the Service Manager was more akin to a Product person, and originally sat as a senior role within that framework; yet they were also responsible for the end-to-end service (which was a very big ask for anyone below the SCS working as an SRO). But the role often got confused with that of the IT Service Manager, and (as previously discussed in last year’s blog) the responsibilities and titles were changed to create the role of Service Owner instead.

Interestingly, in the Service Manual the reference to the Service Owner being the person who has responsibility for the end-to-end service has now been removed; instead it focuses on them being the person responsible for delivering the project. I imagine this is because it’s very hard for any one person (below SCS level) to have responsibility for an end-to-end service in the Public Sector, given the size of the products and services the Public Sector delivers; it does, however, mean the new role description in the Service Manual seems to bring the role of Service Owner closer to that of the Programme Manager.

However, in contrast to the description in the Service Manual, the DDaT Capability Framework does still specify that the Service Owner is “accountable for the quality of their service, and you will be expected to adopt a portfolio view, managing end-to-end services that include multiple products and channels.” The onus here has changed from being responsible for the end-to-end service to managing it; but even that is clearly different to being responsible for delivering a project, as the Manual describes it.

Some elements of the new Service Owner role description in the Manual do still align to the traditional responsibilities of Product people (mainly things like assisted digital support and maximising take-up of your service); but the Service Manual has now removed those responsibilities from the Product Manager. The Product Manager now seems intended to be much more focused solely on user needs and user stories, rather than the longer-term uptake and running of the service. But again, confusingly, the Capability Framework for Product Management still expects Product people to be responsible for ensuring maximum take-up of the service.

It seems that, in trying to clarify the role of the Service Owner, the Service Manual and the Capability Framework disagree on exactly what the responsibilities of the role are; and rather than clarifying the difference between Product people and Service Owners, the waters have instead been muddied even more. Nor have they made it any clearer what, if any, the difference is between the Service Owner and the Programme Manager.

The Project Delivery Capability Framework states that “there are many other roles that are needed to successfully deliver projects. These roles are not included in our framework but you will find information on them within the frameworks of other professions, such as, Digital, Data & Technology framework”. Frustratingly, it doesn’t give any clarity on how and when roles like SRO or Programme Manager might overlap with roles within the DDaT framework, nor how these roles could best work together. Both the Service Owner role and the Programme Manager role state responsibility for things like stakeholder management, business case development and alignment, risk management, and governance adherence. Admittedly the language is slightly different; but the core themes are the same.

So is the assumption that you don’t need both a Programme Manager and a Service Owner? Is it an either/or that has never been clearly specified? If you’re using PRINCE2 you get a Programme Manager; if Agile, it’s a Service Owner? I would hope not, mainly because we all know that in reality most Public Sector digital programmes are a blend of methodologies and never that clear cut. So are we not being clear enough about what the role of the Service Owner is? Does it really matter if we don’t have that clarity?

Evidence has shown that when teams aren’t clear on the roles and responsibilities of their teammates, and especially of those responsible for making key decisions, bottlenecks begin to occur. Teams struggle to know who should be signing off what. Hierarchy and governance become essential to achieving any progress; but inevitably delays occur while approvals are sought, which simply slows down delivery.

So can we get some clarity?

At the start of the year DEFRA advertised a role for a Service Owner which (I thought) clearly articulated the responsibilities of the role, and made it clear how that role would sit alongside and support the Product team and work with Programme professionals to ensure effective delivery of services that met user needs. Sadly, such clarity of role seems to be few and far between.

I would love, when travel etc. allows, to see a workshop happen mapping out the roles of Service Owner, SRO, Programme Manager, Product Lead and so on: looking at what their responsibilities are, providing clarity on where there is any overlap, and working out how it could be managed better, so that we can get to the point where we have consistency in these roles and a better understanding of how they can work together without duplication or confusion over the value they all add.

For now, at least, it’s each organisation’s responsibility to ensure they are being clear on what the responsibilities of these roles, and of the people working in them, are. We need to stop pretending the confusion doesn’t exist and do our best to provide clarity to our teams and our people; otherwise we’re only muddying the waters, and it’s that kind of confusion that inevitably impacts teams and their ability to deliver.

Let’s be clear, say what you mean

How to be a Product Advocate

Why you need a Product Person in your team.

Since joining Kainos a few weeks ago, I’ve had a number of conversations internally and with clients about the relationship between Delivery and Product; and why I as a Product Person moved over to Delivery.

‘Products at the heart of delivery’ image

My answer to that question was that, having spent over 10 years as a Product person, seen the growth of Product as a ‘thing’ within the Public Sector, and helped Product grow and mature (developing the community, ways of working, career pathway etc.), I realised that what was missing was Product thinking at a senior level. Most senior leaders within the Programme delivery or Transformation space come from a traditional delivery background (if not an operational one), and while many of them do now understand the value of user-centred design and user needs, they don’t understand the benefit of a product-centric approach or what value Product thinking brings.

The expansion of Product people in the Public Sector has predominantly been driven by GDS and the Digital Service Standards, with most organisations now knowing they need a ‘Product Manager’ in order to pass their Service Standard Assessment. However, almost 10 years later, most organisations are still not prioritising the hiring and capability development of their Product people. In May I worked with four different teams, each working to the Digital Standards and needing to pass an assessment; in none of those teams was the role of the Product Manager working in the way we intended when we created the DDaT Product Management capability framework.

Most organisations (understandably) feel the role of the Product Manager should be an internal one, rather than one provided by a supplier; but 9 times out of 10 the person they have allocated to the role has no experience in it, has never worked on a product or service developed to the digital standards (never mind having been through an assessment), and is regularly not budgeted for or allocated to the project full time; often being split across too many teams, or split between the Product Manager role and their previous job in Ops or Policy or wherever they have come from. More often than not they’re actually a Subject Matter Expert, not a Product Manager (which I’ve blogged about before).

As a supplier; this makes delivery so much harder. When the right Product person isn’t allocated to a project, we can quickly see a whole crop of issues emerge.

So what are the signs that Product isn’t being properly represented within a team:

  • Overall vision and strategy are unclear or not shared widely; teams aren’t clear on what they’re trying to achieve or why. This can be because the Product person is unable to clearly articulate the problem the team is there to solve, or because the outcomes the team is there to deliver aren’t clearly defined.
  • The roadmap doesn’t exist, is unstable, or doesn’t go beyond the immediate future; or the scope of the project keeps expanding. Often a sign that prioritisation isn’t happening regularly, or is happening behind closed doors, making planning hard to do.
  • Success measures are unclear or undefined, because the team doesn’t understand what it’s trying to achieve. This often leads to the wrong work getting prioritised, outcomes not being delivered, or user needs not being met.
  • Work regularly comes in over budget or doesn’t meet the business case; or the team keeps completing Discoveries and then going back to the start, or struggling to get funding to progress. This can be a sign the team aren’t clear what problem they are trying to solve, or that the value the work delivers isn’t clearly articulated by the Product person.
  • Delivery is late or velocity is slow. This can be a sign the team aren’t getting access to their Product person in a timely manner, causing bottlenecks in stories being agreed or signed off; or that the Product person is not empowered to make decisions and is constantly waiting for sign-off from more senior stakeholders.
  • Roll-out is delayed or messy, with operational teams frustrated or unclear on project progress; a sign that the team doesn’t have someone owning the roadmap who understands what functionality will be available when and ensures any dependencies are clearly understood and monitored, or that there isn’t someone engaging with or communicating progress to wider stakeholders.

More often than not, as a supplier, I’ve had to argue that we need to provide a Product person to work alongside teams, to coach and support their internal Product people in the skills and responsibilities a Product person needs to enable successful delivery. Where clients have been adamant they don’t want Product people from a supplier (often for budgetary reasons), we’ve then had to look at how we sneak someone in the door, usually by adding a Business Analyst or Delivery Manager to the team who also has Product skills, because otherwise our ability to deliver will be negatively impacted.

When budgets are tight, the role of Product person is often the first thing project managers try to cut or reduce; prioritising the technical or project delivery skills over Product ones. As such, teams (and organisations) need to understand the skills a good product person brings; and the cost of not having someone within a team who has those skills.

  • Their role is to focus on and clarify to the team (and business) the problem the team are trying to fix.
  • Ensure a balance between user needs; business requirements and technical constraints/options.
  • Quantifying and understanding the ROI/ value a project will deliver; and ensuring that can be tracked and measured through clear success measures and metrics.
  • Being able to translate complex problems into roadmaps for delivery; prioritising work and controlling the scope of a product or service to ensure it can be delivered in a timely and cost-effective manner, with a proper roll-out plan that can be clearly communicated to the wider organisation.

As an assessor, I have seen more projects fail their assessments at Alpha (or even occasionally Beta) because they lack a clear understanding of the problem they’re trying to solve or of their success measures, than because they’ve used the wrong technical stack. This can be very costly, often meaning hundreds of thousands (if not millions) of pounds being written off or wasted due to delays and rework; much more costly than investing in properly qualified and experienced Product people working within teams.

While Product and Delivery are often seen as very different skill sets, I recognised a few years ago the value in having more people who understand and can advocate for both: the value Product thinking brings to delivery, and how Delivery can work better with Product. People who can not only understand but also champion both, in order to ensure we’re delivering the right things in the right ways to meet our clients’ and their users’ needs.

Which is why I made the active decision to hop the fence and try to bring the professions closer together; to build understanding, in both teams and senior leaders, of the need for Product and Delivery skills to be invested in and present within teams in order to support and enable good delivery. I was really glad to see, when I joined Kainos, that we’re already talking about how to bring our Product and Delivery communities closer together and act as advocates to support each other; it was in fact a chat with the Kainos Head of Product, Charlene McDonald, that inspired this blog.

Having someone with the title of Product Manager or Owner isn’t enough; we need people who are experienced in Product thinking and skilled in Product Management, but that isn’t all we need. We need to stop seeing the role of Product person as a label you can give to anyone in the team in order to pass an assessment, and understand why the role and the skills it brings are important. We need senior leaders, project managers and delivery teams who understand what value Product brings; who understand why Product is important and what it could cost the team and their organisation if those Product skills are not included and budgeted for properly right from the start. We need senior leaders to understand why it’s important to invest in their Product people, giving them the time and support they need to do their job properly, rather than spreading them thin across teams with minimal training or empowerment.

We need more Product advocates.

Digital Transformation is still new

We’re punishing those who are less experienced, and we need to stop.

The timeline of Digital Transformation. Courtesy of Rachelle @ https://www.strangedigital.org/

In the last few weeks I’ve had multiple conversations with clients (both existing and new) who are preparing for, or have recently not passed, their Digital Service Standard assessments, and who are really struggling to understand what is needed from them in order to pass.

These teams have tried to engage with the service standards teams, but given those teams are extremely busy, most can’t get any time with their ‘link’ person until 6 weeks before their assessment; by which time most teams are quite far down their track, which potentially leaves them a lot of (re)work to try and do before their assessment.

Having sat in on a few of those calls recently, I’ve been surprised how little time is set aside to help the teams prep, and to give them advice and guidance on what to expect at an assessment if they haven’t been through one before. There’s no time or support for mock assessments for new teams. There may be the offer of one or two of the team getting to observe someone else’s assessment if the stars align, but it’s not proactively planned in; it’s viewed as a nice to have. There seems to be an assumption that project teams should know all of this already, and no recognition that a large number of teams don’t: this is still all new to them.

“In the old days” we as assessors and transformation leads used to set aside time regularly to meet with teams: talk through the problems they were trying to fix, understand any issues they might be facing, and provide clarity and guidance before the assessment, so that teams could be confident they were ready to move on to the next phase. But when I talk to teams now, so few of them are getting this support. Many teams reach out because the rare bits of guidance they have received haven’t been clear, and in some cases have been contradictory, and they don’t know who to talk to to get that clarity.

Instead, more and more of my time at the moment, as a supplier, is being set aside to support teams through their assessment: to provide advice and guidance on what to expect, how to prepare and what approach the team needs to take. What an MVP actually is; how to decide when you need an assessment; what elements of the service you need to have ready to ‘show’ at each stage; what the difference is between Alpha, Beta and Live assessments and why it matters. For so many teams this is still almost a foreign language.

So, how can we better support teams through this journey?

Stop treating it like this is all old hat and that everyone should know everything about it already.

Digital Transformation has been ‘a thing’ for one generation (if you count from the arrival of the internet as a tool for the masses in 1995). Within the public sector, GDS, the Digital Service Standards and the Digital Academy have existed for less than one generation; less than 10 years, in fact.

By treating it as a thing everyone should know, we make it exclusionary. We make people feel less than us for the simple act of not having the same experience we do.

We talk about working in the open, and many teams do still strive to do that; but digital transformation is still almost seen as a magical art by many, and how to pass what should be a simple thing like a service standard assessment is still viewed as arcane knowledge held by the few. As a community we need to get better at supporting each other, and especially those new to this experience, along this path.

This isn’t just a nice thing to do, it’s the fiscally responsible thing to do; by assuming teams already have all this knowledge we’re just increasing the likelihood they will fail, and that comes with a cost.

We need to set aside more time to help and guide each other on this journey; so that we can all succeed; that is how we truly add value, and ensure that Digital Transformation delivers and is around to stay for generations to come.

Agile Delivery in a Waterfall procurement world

One of the things that has really become apparent since moving ‘supplier side’ is how much the procurement processes used by the public sector to tender work don’t facilitate agile delivery.

The process of bidding for work, certainly as an SME, is an industry in itself.

This month alone we’ve seen multiple Invitations to Tender on the Digital Marketplace for Discoveries etc., as many departments try to spend their budget before the end of the financial year.

The ITTs will mention user research and ask how suppliers will work to understand user needs or hire proper user researchers. But they will then state they only have 4 weeks or £60K to carry out the Discovery. While they specify the need for user research, no user recruitment has been carried out to let the supplier hit the ground running, and it’s not possible for it to be carried out before the project starts (unless as a supplier you’re willing to do that for free; and even if you are, you’ve got less than a week to onboard your team, do any reading you need to do and complete user recruitment, which just isn’t feasible). We regularly see requests for prototypes within that time as well.

This isn’t to say that short Discoveries are impossible; if anything, COVID-19 proved they are possible. But there, the outcomes we were trying to deliver were understood by all; the problems we were trying to solve were very clear; and there was a fairly clear understanding of the user groups we’d need to work with to carry out any research. All of this enabled the teams to move at pace.

But we all know the normal commercial rules were relaxed to support delivery of the urgent COVID-19 related services. Generally it’s rare for an ITT to clarify the problem the organisation is trying to solve, or the outcomes they are looking to achieve. Instead they tend to solely focus on delivering a Discovery or Alpha etc. The outcome is stated as completing the work in the timeframe in order to move to the next stage; not as a problem to solve with clear goals and scope.

We spend a lot of time submitting questions trying to get clarity on what outcomes the organisations are looking for, and sometimes it certainly feels like organisations are looking for someone to deliver them a Discovery solely because the GDS/Digital Service Standard says they need to do one. This means, if we’re not careful, halfway through the Discovery phase we’re still struggling to get stakeholders to agree the scope of the work, and why we really do need to talk to that group of users over there that they’ve never spoken to before.

The GDS lifecycle

The GDS lifecycle, and how (badly) it currently ties into procurement and funding, means that organisations are reluctant to go back into Discovery or Alpha when they need to, because of how they have procured suppliers. If as a supplier you deliver a Discovery that finds there is no need to move into Alpha (because there are no user needs), or midway through an Alpha you find the option you prioritised for your MVP no longer meets the needs as anticipated, clients still tend to view that money as ‘lost’ or ‘wasted’ rather than accepting the value in failing fast and stopping, or changing to do something that can add value. Even when clients do accept that, sometimes the procurement rules that brought you on to deliver a specific outcome mean your team can’t pivot onto another piece of work, as that needs to be a new contract. Either scenario could mean that as a supplier you lose the contract you spent so much time winning, because you did ‘the right thing’.

We regularly pick up work midway through the lifecycle; sometimes that’s because the previous supplier didn’t work out; sometimes it’s because they were only brought in to complete the Discovery or Alpha, and when it comes to re-tender, another supplier is now cheaper. That’s part and parcel of being a supplier; but I know from being ‘client side’ for so long how that can make it hard to manage corporate knowledge.

Equally, as a supplier, we rarely see things come out for procurement in Live, because there is an assumption that by Live most of the work is done; and yet, if you follow the intent of the GDS lifecycle rather than how it’s often interpreted, there should still be plenty of feature development, research and so on happening in Live.

This in turn is part of the reason we see so many services stuck in Public Beta. Services have been developed by or with suppliers who were only contracted to provide support until Beta. There is rarely funding available for further development in Live, and the knowledge and experience the suppliers provided has exited stage left, so it’s tricky for internal teams to pick up the work, move it into Live and continue development.

Most contracts specify ‘knowledge transfer’ (although sometimes it’s classed as a value add, when it really should be a fundamental requirement), but few are clear on what they are looking for. When we talk to clients about how they would like to manage that, or how we can get the balance right between delivering tangible outcomes and transferring knowledge, knowledge transfer is regularly de-scoped or de-prioritised. It ends up being seen as less important than getting a product or service ‘out there’; but once the service is out there, the funding for the supplier stops and the time to do any proper knowledge transfer is minimal at best. If not carefully managed, suppliers can end up handing over a load of documentation and code without completing the peer working, lunch and learns, or co-working workshops we’d wanted to happen.

Some departments and organisations have got much better at getting their commercial teams working hand in hand with their delivery teams; we can always see those ITTs a mile off, and it’s a pleasure to see them, as it makes it much easier for us as suppliers to provide a good response.

None of this is insurmountable, but we (suppliers, commercial and procuring managers, and delivery leads) need to get better at working together to look at how we procure and bid for work; ensuring we are clear on the outcomes we’re trying to achieve, and properly valuing ‘the value add’.

Agile at scale

What do we even mean when we talk about agile at scale and what are the most important elements to consider when trying to run agile at scale?

This is definitely one of those topics of conversation that goes around and around and never seems to get resolved or go away. What do we even mean when we talk about agile at scale? Do we mean scaling agile across multiple teams within a programme? Do we mean scaling it across multiple programmes? Or do we mean using it at scale within a whole organisation?

Whenever I’m asked what I believe to be the most important elements in enabling successful delivery using agile, or using agile at scale, the number one thing I will always talk about isn’t the technology; it isn’t digital capability, or experience with the latest agile ways of working (although all those things are important and do obviously help): it’s the culture.

I’ve blogged before on how to change a culture and why it’s important to remember cultural change alongside business transformation; but more and more, especially when we’re talking about agile at scale, I’ve come to the conclusion that the culture of an organisation, and especially the buy-in and support for agile ways of working at a leadership level, is the most fundamental element of being able to successfully scale agile.

Agile itself is sadly still one of those terms that is very marmite for some, especially in the senior leadership layers. They’ve seen agile projects fail; it seems like too much change for too little return; or it’s just something their digital/tech teams ‘do’ that they don’t feel the need to really engage with. GDS tells them they have to use it, so they do.

Which is where I think many of the agile at scale conversations stumble; it’s seen as a digital/tech problem, not an organisational one. This means that time and again, Service Owners, Programme Directors and agile delivery teams get stuck when trying to develop and get support for business cases that are trying to deliver holistic and meaningful change. We see it again and again. Agile delivery runs into waterfall funding and governance and gets stuck.

As a Service Owner or Programme Director trying to deliver a holistic service, how do you quantify in your business case the value this service and this approach to delivery will add? The obvious answer, hopefully, is using data and evidence to show the potential areas for investment and value it would add to both users and the business. But how do you get that data? Where from? How do you get senior leaders to understand it?

In organisations where agile at scale is a new concept, supporting senior leaders to understand why this matters isn’t easy. I often recommend new CDOs, CEOs or Chief Execs ‘buddy up’ with or shadow other senior folks who have been through this journey; folks like Darren Curry, Janet Hughes, Tom Read and Neil Couling, who understand why it matters, have been through (or are going through) this journey in their own organisations, and are able to share their experiences, both good and bad.

I will always give full praise to Alan Eccles CBE, previously The Public Guardian and Chief Exec of the Office of the Public Guardian, without whom the first Digital Exemplar, the LPA online, would never have gone live. Alan was always very honest that he wasn’t experienced or knowledgeable about agile or digital, but he was fully committed to making the OPG the first true Digital Exemplar Agency, and to utilising everything digital, and agile ways of working, had to offer to transform the culture of the OPG and the services they delivered. If you want an example of what a true digital culture looks like, and how vocal and committed Alan was to making the OPG digital, just take a look at their blog, which goes all the way back to 2015 and maps the OPG’s digital journey.

Obviously, culture isn’t the only important factor when wanting to scale agile; the technology we use, the infrastructure and architecture we design and have in place, the skills of our people, the size of our teams and their capacity to deliver are also all important. But without the culture that encompasses and supports the teams, the ability to deliver at scale will always be a struggle.

The commitment to change at the senior leadership level, to embracing the possibilities and options that a digital culture and using agile at scale bring, permeates through the rest of the organisation. It encourages teams to work in the open, fostering collaboration and identifying common components and dependencies. It acknowledges that failure is OK, as long as we’re sharing the lessons we’ve learned and constantly improving. It supports true multidisciplinary working, and enables holistic service design by encouraging policy, operations and finance colleagues to be part of the delivery teams. All of this in turn improves decision making and increases the speed and success of transformation programmes. Ultimately it empowers teams to work together to deliver; and that is how we scale agile.

Do Civil Servants dream of woolly sheep?

The frustration of job descriptions and their lack of clarity.

One of the biggest and most regularly recurring complaints about the Civil Service (and the public sector as a whole) is its mismanagement of commercial contracts.

There are regularly headlines in the papers accusing Government Departments, and the Civil Servants working in them, of wasting public money, and there has been a drive over the last few years to improve commercial experience, especially within the Senior Civil Service.

When, a few years ago, my mentor at the time suggested leaving the public sector for a short while to gain some more commercial experience before going for any Director-level roles, this seemed like a very smart idea. I would obviously need to provide evidence of my commercial experience to get any further promotions, and surely managing a couple of 500K or 1M contracts would not be enough, right?

Recently I’ve been working with my new mentor, focusing specifically on gaining more commercial knowledge, and last month he set me an exercise: to look at some Director-and-above roles within the Digital and Transformation arena and see what level of commercial experience they were asking for, so that I could measure my current level of experience against it.

You can therefore imagine my surprise when this month we got together to compare four senior-level roles (two at Director level and two at Director General) and found that the amount of commercial experience requested in the job descriptions was decidedly woolly.

I really shouldn’t have been surprised; the Civil Service is famous for its woolly language. Policy and strategy documents are rarely written in simple English, after all.

But rather than job specifications with specific language asking for “experience of successfully managing multiple multi-million pound contracts”, what is instead called for (if mentioned specifically at all) is “commercial acumen” or “a commercial mindset”, with no real definition of what level of acumen or experience is needed.

The Digital Infrastructure Director role at DCMS does mention commercial knowledge as part of the person specification, which it defines as “a commercial mindset, with experience in complex programmes and market facing delivery”.

And this one from the MoD, for an Executive Director Service Delivery and Operations, calls for “Excellent commercial acumen with the ability to navigate complex governance arrangements in a highly scrutinised and regulated environment”.

Finally, we have the recently published Government CDO role, which clearly mentions commercial responsibilities in the role description but doesn’t actually demand any commercial experience in the person specification.

At which point, my question is: what level of commercial acumen or experience do you actually want? What is a commercial mindset, and how do you know if you have it? Why are we being so woolly about defining what is a fundamentally critical part of these roles?

How much is enough?

Recent DoS framework opportunities we have bid for or considered at Difrent have required suppliers to have experience of things like “a minimum of 2 two million pound plus level contracts” (as an example) to be able to bid for them.

That’s helpful: as Delivery Director I know exactly how many multimillion-pound contracts we’ve delivered successfully and can immediately decide whether, as a company, it’s worth us putting time or effort into the bid submission. But as an individual candidate, I don’t have the same level of information to make a similar decision about my own experience.

The flip side of the argument is that data suggests women especially won’t apply for roles that are “too specific” or have a long shopping list of demands, because women feel they need to meet 75% of the person specification before applying. I agree with that wholeheartedly, but there’s a big difference between being far too specific and listing 12+ essential criteria for a role, and being so unspecific you’ve become decidedly generic.

Especially when, as multiple studies have shown, job titles in the public digital sector are often meaningless. Very rarely in the public sector does a job actually do what it says on the tin; what a Service Manager is in one Department can be very different in another.

If I’m applying for an Infrastructure role, I would expect the person specification to ask for Infrastructure experience. If I’m applying for a comms role, I expect to be asked for some level of comms experience; and I would expect some hint as to how much experience is enough.

So why, when we are looking at Senior/Director level roles in the Civil Service, are we not helping candidates understand what level of commercial experience is ‘enough’? Private sector job adverts for similar level roles tend to be much more specific about the amount of contract-level experience and knowledge needed, so why is the public sector being so woolly in its language?

Woolly enough for you?

*If you don’t get the blog title, I’m sorry, it is very geeky and a terrible Philip K. Dick reference. But it amused me.