
Tag: Agile

Notes from some Digital Service Standard Assessors on the Beta Assessment

The Beta Assessment is probably the one I get the most questions about; primarily, “when do we actually go for our Beta Assessment, and what does it involve?”

Firstly what is an Assessment? Why do we assess products and services?

If you’ve never been to a Digital Service Standard Assessment it can be daunting, so I thought it might be useful to pull together some notes from a group of assessors to show what we are looking for when we assess a service.

Claire Harrison (Chief Architect at Homes England and a leading Tech Assessor) and Gavin Elliot (Head of Design at DWP and a leading Design Assessor; you can find his blog here) helped me pull together some thoughts about what a good assessment looks like, and what we are specifically looking for when it comes to a Beta Assessment.

I always describe a good assessment as the team telling the assessment panel a story. So, what we want to hear is:

  • What was the problem you were trying to solve?
  • Who are you solving this problem for? (who are your users?)
  • Why do you think this is a problem that needs solving? (What research have you done? Tell us about the users’ journey)
  • How did you decide to solve it and what options did you consider? (What analysis have you done?) 
  • How did you prove the option you chose was the right one? (How did you test this?)

One of the great things about the Service Manual is that it explains what each delivery phase should look like, and what the assessment team are considering at each assessment.

So what are we looking for at a Beta Assessment?

By the time it comes to your Beta Assessment, you should have been running your service for a little while with a restricted number of users in a Private Beta. You should have real data gathered from real users who were invited to use your service, and you should have iterated your service several times by now, given all the things you have learnt.

Before you are ready to move into Public Beta and roll your service out nationally, there are several things we want to check during an assessment.

You need to prove you have considered the whole service for your users and have provided a joined up experience across all channels.

  • We don’t want to just hear about the ‘digital’ experience; we want to understand how you have/will provide a consistent and joined up experience across all channels.
  • Are there any paper or telephony elements to your service? How have you ensured that those users have received a consistent experience?
  • What changes have you made to the back end processes and how has this changed the user experience for any staff using the service?
  • Were there any policy or legislative constraints you had to deal with to ensure a joined up experience?
  • Has the scope of your MVP changed at all so far in Beta given the feedback you have received from users? 
  • Are there any changes you plan to implement in Public Beta?

As a Lead Assessor, this is where I always find that teams who have suffered from a lack of empowerment, or from organisational silos, may struggle.

If the team are only empowered to look at the digital service, and have struggled to make any changes to the paper, telephony or face-to-face channels due to siloed working between Digital and Ops in their Department (as an example), the digital product will offer a very different experience to the rest of the service.

As part of that discussion we will also want to understand how you have supported users who need help getting online, and what assisted digital support you are providing.

At previous assessments you should have had a plan for the support you intended to provide; you should now be able to talk through how you are putting that into action. This could be telephony support or a web chat function, but we want to ensure the support being offered is (or will be) consistent with the wider service experience, and is meeting your users’ needs. We also want to understand how it’s being funded and how you plan to publish the accessibility information for your service.

We also expect by this point that you have run an accessibility audit and have carried out regular accessibility testing. It’s worth noting that if you don’t have anyone in house who is trained in running accessibility audits (we’re lucky in Difrent as we have a DAC assessor in house), then you will need to have factored in the time it takes to get an audit booked in and run, well before you think about your Beta Assessment.

Similarly, by the time you go for your Beta Assessment we would generally expect a Welsh language version of your service to be available; again, this needs to be planned well in advance as it can take time, and is not (or shouldn’t be) a last minute job! In my experience it is something a lot of teams forget to prioritise and plan for.

And finally, assuming you are planning to put your service on GOV.UK, you’ll need to have agreed several things with the GOV.UK team at GDS before going into public beta.

Again, while it shouldn’t take long to get these things sorted with the GOV.UK team, they can sometimes have backlogs and as such it’s worth making sure you’ve planned in enough time to get this sorted. 

The other things we will want to hear about are how you’ve ensured your service is scalable and secure. How have you dealt with any technical constraints? 

The architecture and technology – Claire

From an architecture perspective, at the Beta phase I’m still interested in the design of the service, but I also have a focus on its implementation, and the provisions in place to support the sustainability of the service. My mantra is ‘end-to-end, top-to-bottom service architecture’!

An obvious consideration in both the design and deployment of a service is that of security – how the solution conforms to industry, Government and legal standards, and how security is baked into a good technical design. With data, I want to understand its characteristics and lifecycle: are the data identifiable, how are they collected, where are they stored and hosted, who has access to them, are they encrypted, and if so when, where and how? I find it encouraging that in recent years there has been a shift in thinking not only about how to prevent security breaches but also how to recover from them.

Security is sometimes cited as a reason not to code in the open, but in actual fact it is hardly ever a valid one. As services are assessed on this, there needs to be a very good reason why code can’t be open. After all, a key principle of GDS is reuse – in both directions – for example making use of common government platforms, and also publishing code for it to be used by others.

Government services such as Pay and Notify can help with some of a Technologist’s decisions and should be used as the default, as should open standards and open source technologies. When this isn’t the case I’m really interested in the selection and evaluation of the tools, systems, products and technologies that form part of the service design. This might include integration and interoperability, constraints in the technology space, vendor lock-in, route to procurement, total cost of ownership, alignment with internal and external skills, and so on.
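
To make the “reuse common platforms” point a little more concrete, here is a minimal, illustrative sketch of sending a confirmation email through GOV.UK Notify using its Python client (notifications-python-client). The API key, template ID and personalisation fields are placeholders you would get from your own Notify service, not values from this post.

```python
# Illustrative sketch: sending a confirmation email via GOV.UK Notify,
# one of the common platforms mentioned above. The API key and template
# ID below are placeholders for values issued by your own Notify service.
from notifications_python_client.notifications import NotificationsAPIClient

notify_client = NotificationsAPIClient("your-notify-api-key")  # placeholder key

response = notify_client.send_email_notification(
    email_address="applicant@example.com",                # placeholder recipient
    template_id="f33517ff-2a88-4f6e-b855-c550268ce08a",   # placeholder template
    personalisation={
        "first_name": "Sam",
        "application_reference": "ABC-1234",
    },
)

# Notify returns a notification id you can use to check delivery status later.
print(response["id"])
```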

Some useful advice would be to think about the technology choices as a collective – rather than piecemeal, as and when a particular tool or technology is needed. Yesterday I gave a peer review of a solution under development where one tool had been deployed but in isolation, and not as part of an evaluation of the full technology stack. This meant that there were integration problems as new technologies were added to the stack. 

The way that a service evolves is really important too, along with the measures in place to support its growth. Cloud based solutions help take care of some of the more traditional scalability and capacity issues, and I’m interested in understanding the designs around these, as well as any other mitigations in place to help assure the availability of a service. As part of the Beta assessment, the team will need to show their plan for the event of the service being taken temporarily offline – detail such as strategies for dealing with incidents that impact availability, the strategy to recover from downtime, and how these have been tested.

Although a GDS Beta assessment focuses on a specific service, it goes without saying that a good Technologist will be mindful of how the service they’ve architected impacts the enterprise architecture and vice-versa. For example, if a new service is built with microservices and also introduces an increased volume and velocity of data, does the network need to be strengthened to cope with the increase in communications traversing it?

Legacy technology (as well as legacy ‘Commercials’ and ways of working) is always on my mind. Obviously during an assessment a team can show how they address legacy in the scope of that particular service, be it some form of integration with legacy or applying the strangler pattern, but organisations really need to put as much effort into dealing with legacy as they do into new digital services. Furthermore, they need to think about how to avoid creating ‘legacy systems of the future’ by ensuring the sustainability of their service – be it from a technical, financial or resource perspective. I appreciate this isn’t always easy! However, I do believe that GDS should and will put much more scrutiny on organisations’ plans to address legacy issues.
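
For anyone unfamiliar with the strangler pattern mentioned above, the idea is simply a routing layer in front of the legacy system: journeys you have already rebuilt go to the new service, everything else still goes to the old one, so legacy can be retired route by route. A minimal, illustrative sketch follows; the hostnames, paths and the Flask/requests choice are my own assumptions, not anything from the assessment guidance.

```python
# A minimal sketch of a strangler facade: a thin routing layer that sends
# migrated journeys to the new service and everything else to the legacy
# system. Hostnames and route prefixes here are illustrative only.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

LEGACY_BASE = "https://legacy.internal.example"    # assumed legacy system
NEW_BASE = "https://new-service.internal.example"  # assumed new service

# Routes already rebuilt in the new service; this list grows as migration proceeds.
MIGRATED_PREFIXES = ("/apply", "/check-status")


@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def route(path):
    # Decide which backend should handle this request.
    target = NEW_BASE if request.path.startswith(MIGRATED_PREFIXES) else LEGACY_BASE

    # Forward the request largely as-is and relay the response back.
    upstream = requests.request(
        method=request.method,
        url=f"{target}{request.full_path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)
```

The useful property is that each newly rebuilt journey only needs a new entry in the routing rules, so the legacy estate shrinks incrementally rather than in one risky cut-over.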

One final point from me is that teams should embrace an assessment. Clearly the focus is on passing an assessment but regardless of the outcome there’s lots of value in gaining that feedback. It’s far better to get constructive feedback during the assessment stages rather than having to deal with disappointed stakeholders further down the line, and probably having to spend more time and money to strengthen or redesign the technical architecture.

How do you decide when to go for your Beta Assessment?

Many services (for both good and bad reasons) have struggled with the MVP concept; as such, the journey to get their MVP rolled out nationally has taken a long time, and contained more features and functionality than teams might have initially imagined.

This can make it very hard to decide when you should go for an Assessment to move from Private to Public Beta. If your service is going to be rolled out to millions of people, or has a large number of user groups with very different needs, it can be hard to decide what functionality is needed in Private Beta vs. Public Beta, or what can be saved until Live and rolled out as additional functionality.

The other thing to consider is: what does your rollout plan actually look like? Are you able to go national with the service once you’ve tested with a few hundred people from each user group? Or, as is more common with large services like NHS Jobs, where you are replacing an older service, does the service need to be rolled out in a very set way? If so, you might need to keep inviting users in until full rollout is almost complete, making it hard to judge when the right time for your Beta Assessment is.

There is no right or wrong answer here; the main thing to consider is that you will need to understand all of the above before you can roll your service out nationally, and be able to tell that story to the panel successfully.

This is because, theoretically, most of the heavy lifting is done in Private Beta, and once you have rolled your service out into Public Beta, the main things left to test are whether your service scales and works as you anticipated. Admittedly this (combined with confusion about the scope of an MVP) is why most services never actually bother with their Live Assessment. For most services, once you’re in Public Beta the hard work has been done; there’s nothing more to do, so why bother with a Live Assessment? But that’s an entirely different blog!

Reviewing the service together.

 

The art of Transferring Knowledge

One of the most common questions that comes up in Bid opportunities is usually some variant of “how do you transfer your knowledge to us before you leave?”

This is a completely valid question, and really important both to ask and to understand, but it is also hard to answer well in 100 words without making it look like knowledge transfer is only a nice to have!

Having been on the other side of the commercial table, making sure you get a supplier who will want to work with you and up-skill your own people, so you are not reliant on the supplier forever, is generally vital to making sure the project is both successful and cost effective.

Developers comparing code together

I’ve written Invitations to Tender that ask for examples of how suppliers would go about transferring knowledge and up-skilling my teams. I’ve sat through bid tender presentations as the buyer and listened to suppliers try to persuade me that they know best, and that they have the expertise my organisation needs to deliver a project or programme.

I was generally able to spot very quickly those organisations that took this more seriously than others: those that would work collaboratively with us vs. those more likely to just come in, do a sales job and leave us none the wiser and reliant on their services.

But, if I’m honest, I never really based that judgement on the words they said, but more on the words they didn’t say, and more importantly HOW they said (or didn’t say) it.

Everyone can say the words ‘show and tell’, but how are you doing them? How are you getting stakeholders engaged? How are you making sure you have the right people turning up to engage with the project?

A person standing in front of a whiteboard moving a post it note in a team meeting

You can say you use Trello, JIRA, or Confluence etc. to create shared digital spaces to run your backlogs or share information; but how do you make sure the right people have access to them and know how to use them? How do you agree what information is going on there and when? How do you determine what information the team can see vs. your stakeholders, and make sure the information is understandable to everyone who needs it?

As long as suppliers are putting in the key buzzwords, that nuance is hard to judge within 100 words, but it is so important to understand. And it’s not only important for the buying organisation to understand how the supplier would transfer knowledge; it’s actually really important for the supplier to understand how receptive an organisation is as well.

I always assumed ‘knowledge transfer’ was something that was easy for suppliers to do as long as they put in some effort.

Now that I sit on the other side of the table, it’s something I’ve realised there is a real art to. Not just writing a bid response that gets the message across, but doing it once you hit the ground. I’d always assumed that, as long as the team/buying organisation was keen and engaged, knowledge transfer would be easy to do.

Two people talking in front of a white board that shows flow charts and prototypes.

Eight months later I’ve realised it’s not as easy as it looks; as a supplier there’s a very fine line to walk between supporting an organisation and looking patronising. Just as every organisation is somewhere different on their agile/digital journey, so is every individual.

A one size fits all approach to transferring knowledge will never work. You can’t assume that because an organisation is new to agile or digital, every individual within the organisation is. Some organisations/people want more in the way of ‘coaching and mentoring’, others want less. Some organisations/people will say they are open to changing their ways of working, but will resist anything new; others will bite your hand off for every new tool or technique. Some want to be walked through everything you are doing so they can learn from it; others want you to just get on and deliver, and tell them at the end how you did it.

And as suppliers, there is often as much for us to learn from the organisation as there is to ‘teach’; while we might be the experts in agile or digital or delivering transformation, we need to learn about and understand how their organisation works and why.

Two people having a conversation

There is no ‘one answer’ on how to do knowledge transfer, and it’s not a one way street. It’s how you approach the question that is important. Are you open to working with an organisation (either as the buyer or the supplier) to understand how you can work together and learn from each other? As long as you are open to having those conversations and learning from each other, then the knowledge transfer will happen.

Why SMEs are important, but shouldn’t be the Product Manager

A long time ago in a land far away (well, four years ago, sat in a very cold office in Trafalgar Square), Ross Ferguson, Alex Kean, Scot Colfer and I, plus a few others, sat discussing the DDaT capability framework for Product Management.

The discussions we had at the time focused on “how do we actually define the role? And what makes a good product manager?” And there have been plenty of blogs written on those questions over the years. It definitely feels like the role has matured and progressed over the last few years, and now is generally pretty well recognised.

However, chatting yesterday to Si Wilson about SMEs and Product Managers, and why they are different roles, I realised this may be one area that hasn’t been touched on much, and is actually a pretty key difference that’s important to understand.

In the private sector, the Product Manager is often “the voice of the business”; they are equally seen as the “voice of the customer”, but when developing products to take to market and make a profit, it’s less about what the users need and more about what the business can sell to them.

In the Public Sector, the role of the Product Manager is a bit different. The Product Manager is NOT the voice of the business; instead they are the voice of the vision. The Product Manager is responsible for ‘what could be’: they ensure the team are delivering quality and value, weighing up the evidence from everyone else in the team and making the decisions on where to focus next in order to meet the desired outcomes.

This slight change in focus is where the role of the Subject Matter Expert (SME) comes in. The Scrum Dictionary states the SME is the person with specialised knowledge; in my experience the SME provides the voice of the business, and of what ‘is’ rather than what will be. They understand the ins and outs of an existing product or service, and any sacred cows that need to be avoided (or understood) within an organisation. They usually work closely with the Business Analyst to map out business processes, and with User Researchers to understand staff experiences.

Back when our merry band of Heads of Product were trying to understand the role, the decision not to have Product Managers ‘be the voice of the business’ was a very deliberate move, as we felt it hampered the move to user centred design: it is hard to step back and be agnostic about the solution if you’ve had years in the business and know every pain point and workaround going.

Some of the dangers of having a Product Manager who is also an SME are:

  • They feel they know everything already because of their experience, so feel that user research or testing is a waste of time.
  • They become a single point of failure for both knowledge and decision making, with too many people needing their attention at the same time
  • They can get lost in the weeds of details, which can lead to micromanaging or a lack of pace

That is not at all to say that Product Managers can’t ‘come from the business’ because obviously having some knowledge about the organisation and the service is helpful. But equally, having a clear delineation between the roles of the Product Manager and the SME is important; so if you do have someone covering both roles, it’s important to understand which hat is being worn when decisions are made; and for that individual to be able to draw a line between when they are acting as the PM and when they are the SME.

A person presenting at a whitewall to a team

As a Product person, a good SME is worth their weight in gold. Good ones bring pace and stretch the team’s thinking. They can help identify pain points, and help user researchers and business analysts find the right people to talk to when asking questions about processes. They give the Product Manager room to manoeuvre, and make sure things keep moving. Equally, the best SMEs can be pragmatic; they understand that what the business wants doesn’t always match what users want, and work with the team to find the best way forward.

Where the role of the SME hasn’t worked well, in my experience, it tends to be because the individual hasn’t been properly empowered to make decisions by their organisation or line manager, or doesn’t actually have the knowledge required, and is instead there to capture questions or decisions and feed them back to their team/manager. Another common issue is that the SME can’t be pragmatic or understand the difference between user needs and business needs, and won’t get involved in user research or understand its importance. Rather than helping the team move work forward, they slow things down: wanting every decision justified to their satisfaction, and wanting to make decisions themselves rather than working with the Product Manager.

Rarely have I found SMEs who could be dedicated full time to one project; they tend to be Policy or Ops experts etc. and so there are a lot of demands on their time. I suspect this is one of the reasons the roles of the SME and Product Manager are sometimes blended together. However, while they ‘can’ be filled by the same person, in my experience having those roles filled by separate people works much better, and allows the team to deliver value more quickly.

Delivering in a crisis

One of the key personal aims I had when I joined Difrent, just over six months ago, was to work somewhere that would let me deliver stuff that matters; because I am passionate about people, and about delivery.

After 15 years right in the thick of some pioneering public sector work, combining high profile product delivery with developing digital capability, working for organisations like the Government Digital Service (GDS), the Department for Work and Pensions (DWP), the Care Quality Commission (CQC) and the Ministry of Defence (MoD), I was chafing at the speed (or lack thereof) of delivery in the public sector.

Parcel delivery

I hoped going agency side would remove some of that red tape, and let me get on and actually deliver; my aim when I started was to get a project delivered (to public beta at the very least) within my first year. It might seem like a simple ask, but in the 10 years I spent working in Digital, I’d only seen half a dozen services get into Live.

This is not because the projects failed (they are all still out there being used by people), but because once projects got into Beta, and real people could start using them, the impetus to go live got lost somewhat.

Six months into the job and things looked to be on track, with one service in Private Beta, another we are working on in Public Beta, plus a few Discoveries underway; things were definitely moving quickly and my decision to move agency side felt justified. Delivery was happening.

And then Covid-19 hit.

Gov.uk COVID-19 website
A tablet displaying the Gov.uk COVID-19 guidance

With COVID-19, the old normal and old ways of working have had to change rapidly, if for no other reason than we couldn’t all be co-located anymore. We all had to turn to fully remote working quickly, not just as a company but as an industry.

Thankfully within Difrent we’ve always had the ability to work remotely, so things like laptops and collaborative software were already in place internally; but the move to being fully remote has still been a big challenge. Things like regular online collaboration and communication sessions throughout the week, our twice-daily coffee catch-ups and weekly Difrent Talks were created for people to drop in on with no pressure attached, and have helped people stay connected.

The main challenge has been how we work with our clients to ensure we are still delivering. Reviewing our ways of working to ensure we are still working inclusively, and aren’t accidentally excluding someone from a conversation when everyone is working from their own home. Maintaining velocity and ensuring everyone is engaged and able to contribute.

This is trickier to navigate when you’re all working virtually, and needs a bit more planning and forethought, but it’s not impossible. One of the positives (for me at least) about the current crisis is how well people have come together to get things delivered.

Some of the work that we have been involved in, which would generally have taken months to develop, has been done in weeks. User research, analysis and development are happening in a fraction of the time they took before.

Graffiti saying ‘Made in Crisis’

So how are we now able to move at such a fast pace? Are standards being dropped or ignored? Are corners being cut? Or have we iterated and adapted our approach?

Once this is all over I think those will be the questions a lot of people are asking; but my observation is that, if nothing else, this current crisis has made us really embrace what agility means.

We seem to have the right people ‘in the room’ signing off decisions when they are needed, with proper multidisciplinary teams, made up of people from not just digital but also policy and operations etc., that are empowered to get on and do things. Research is still happening, but possibly at a much smaller scale, as and when it is needed. We’re truly embracing the Minimum Viable Product: getting things out there that aren’t perfect, but that real people can use, and testing and improving the service as we go.

Once this is all over I certainly don’t want to have to continue the trend of on-boarding and embedding teams with 24 hours’ notice; and while getting things live in under 2 weeks is an amazing accomplishment, it comes at a high price – not just in terms of resources but in terms of people, because that is where burnout will occur for all involved. But I believe a happy medium can be found.

My hope, once this is all over, is that we can find the time to consider what we in digital have learnt, and focus on which elements we can iterate and take forward to help us keep delivering faster and better, but in the right way, with fewer delays; so we can get services out there for people to use, because really, that is what we are all here to do.

Stay home, stay safe, save lives
Sign saying ‘stay home, stay safe, save lives’



How do we determine value?

And how do we make sure we are delivering it?

In a previous blog I discussed the importance of understanding the value you are trying to add, and how you measure cost vs value. How we measure value, and ensure we are delivering a valuable return on investment, is one of the ‘big’ questions at the moment that never seems to go away.

Scott Colfer has equally blogged before on the complexity of measuring value when there is no profit to measure against. When working in the public sector it’s not an easy problem to solve. There are a lot of conversations about making sure we don’t waste public money, but how do we actually make sure public money is being spent in a valuable way?

A jar of coins
A jar of coins being spilt

The first principle of the Agile Manifesto is “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” But what is valuable?

At a kick off session this week, for a new project we’re shortly going to begin, a client said one of their hopes was that all code deployed would work first time; and someone else stated that they ‘didn’t want rework’. When we broke these thoughts down to understand where these fears were coming from, it was the need to add value and not waste money, which itself came from previous issues caused by a long time to deploy, and the cost of making changes.

There was equally the fear that by swapping out suppliers mid project we (as the new supplier) would want to redesign and rework everything to make it our own, which would slow down delivery and drive up cost even more.

There is obviously no value for anyone in doing that. The value comes from having a short feedback loop, co-designing and constantly testing, learning and iterating, and working together in short weekly or fortnightly sprints to get things delivered. Making sure there is as little time as possible between designing something and getting it tested and used by real users, ensuring it meets their needs as quickly as possible.

By examining what has already been delivered against the user needs and the outcomes the organisation is looking to achieve, by identifying gaps and pain points to reduce waste, and by prioritising the areas where improvements can be made, we ensure that reworking only happens when there is actual value in doing so.

A parcel being delivered
Parcel delivery

At a talk this week I was asked how we prioritise the work that needs doing and ensure that we do deliver. The important thing is to deliver something; but ideally not just any old thing, we want to be delivering the right thing. Sometimes we won’t know what that is, and it’s only by doing something that we can establish whether it was the right thing or not. That’s why short feedback loops are important. Checking back regularly, iterating and testing frequently, allows you to recognise when there is value in carrying on vs. value in stopping and doing something different.

When I’m trying to decide where the value is, and where the best place to start is, I consider things like the questions below (there’s a rough scoring sketch after the list):

  1. Why are we doing this?
  2. Why are we doing it now?
  3. What happens if we don’t do this now?
  4. Who will this affect?
  5. How many people will it impact?
  6. How long could this take?
  7. Any indicative costs?
  8. Any key milestones/ deadlines?
  9. Any critical dependencies that could affect our ability to deliver?
  10. Will this help us deliver our strategy? Or is it a tactical fix?
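
Purely as an illustration (and not a formal method from this post), here is one rough way of turning questions like these into a comparable score, so candidate pieces of work can be stacked against each other. The questions chosen, the weights and the 1–5 scale are all assumptions made for the sake of the example.

```python
# A rough, illustrative way of turning the questions above into a comparable
# score for candidate pieces of work. The questions, weights and scale are
# assumptions for the sake of the example, not a formal prioritisation method.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    impact: int         # how many people it affects (1 low - 5 high)
    urgency: int        # what happens if we don't do it now (1-5)
    strategic_fit: int  # helps the strategy, or just a tactical fix (1-5)
    effort: int         # rough cost/time to deliver (1 small - 5 large)


def score(c: Candidate) -> float:
    # Value delivered, weighted towards impact, relative to the effort of delivering it.
    value = (2 * c.impact) + c.urgency + c.strategic_fit
    return round(value / c.effort, 2)


backlog = [
    Candidate("Online application form", impact=5, urgency=4, strategic_fit=5, effort=3),
    Candidate("Internal reporting tweak", impact=2, urgency=2, strategic_fit=1, effort=1),
]

# Highest scoring candidates first.
for c in sorted(backlog, key=score, reverse=True):
    print(f"{c.name}: {score(c)}")
```

The numbers themselves matter far less than the conversation they force: agreeing, as a team, what counts as impact, urgency and effort before arguing about the ordering.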

Once we have started work, it’s important to agree measures of success (be they financial, time saved, staffing numbers, or things like improved uptake or a better customer experience) and keep measuring what is being delivered against those targets.

At Difrent a key part of the value we add is about the people, not just the technology or processes; there is value in us working in the open and being transparent. By running lunch and learn sessions or talks, blogging, or speaking at events, we can add wider value outside of a specific project or service.

A person presenting at a whitewall to a team
People listening to someone speaking/ sharing

When we are considering what adds value, the other thing it’s important to consider is the culture we are delivering in. Are there communities of practice in place already, or any design patterns we should be adhering to? There is value in building in consistency, as this helps us ensure we are delivering quality.

There are many different ways to determine what adds value, and many different kinds of value; but the important thing is to focus on making positive improvements, constantly learning from mistakes and ensuring they don’t get repeated, so no time is wasted and real value can be delivered.

What even is agile anyway?

So you’re a leader in your organisation and Agile is ‘the thing’ that everyone is talking about. Your organisation has possibly trialled one or two Agile projects within the Digital or Tech department, but they haven’t really delivered like you thought they would, and you think you can ‘do more’ with it; but honestly, what even is it in the first place?

It’s a question that comes up fairly regularly, and if you are asking it, you are not alone! This blog actually started from such a conversation last week.

Tweet https://twitter.com/NeilTamplin/status/1220608708452999170

First and foremost there is Agile with a capital A: the project methodology, predominantly designed for software development, as defined here. It “denotes a method of project management, used especially for software development, that is characterized by the division of tasks into short phases of work and frequent reassessment and adaptation of plans.”

However nowadays, especially in the public sector, agile doesn’t only apply to software. More and more of the conversations happening in communities like #OneTeamGov are about the culture of agility. How you create the environment for Agile to succeed, and this is where many people, especially leaders, are getting lost.

So how do you ‘be agile?’

Being agile is borrowing the concepts used in agile development, to develop that culture. As Tom Loosemore says when talking about Digital, it’s about “applying the culture, processes, business models & technologies of the internet-era to respond to people’s raised expectations.”

But it’s more than what you transform, it’s how you do it.

The Agile manifesto says that Agile is about:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

When you consider individuals and interactions over processes and tools, you remove unnecessary hierarchy and empower people to make decisions. You don’t enforce rigid processes for the sake of it, but iterate your governance based on the feedback of your users (in this instance your staff!). By being agile you focus on communicating directly with human beings, looking at how you can accommodate more actual conversations and time together, rather than relying on emails and papers as your only way to communicate.

By prioritising working software over comprehensive documentation you are constantly testing and iterating what works based on what is meeting your user needs, rather than deciding upfront what the answer is before knowing if it will actually work. You involve user research in your policy and strategy discussions. You analyse and test your new processes before you implement them. You change your funding and governance models to allow more innovation and exploration, and base your decisions on data and evidence, not theory. By being agile you are able to demonstrate a working product or tangible services to stakeholders and customers, rather than just talking about what will be done.

Customer collaboration rather than contract negotiation is about bringing people along with you and working in partnership, achieving results together. Embracing and managing change to be innovative and deliver value whilst still being competitive and minimising unproductive churn and waste.

When thinking about responding to change over following a plan, it’s about being able to innovate and iterate. Prioritising and working on the most important work first. Building in short feedback loops and taking on board feedback.

Post it notes on a wall

Why is ‘being agile’ important?

Because as the market changes, and users’ expectations change, companies that cannot take on board feedback and iterate their products and services lose out. This is also true of companies themselves in terms of what they offer their staff: fewer people now go to work just for the money, and people want more job satisfaction. Empowering staff to make decisions and cutting bureaucracy are not only ways to cut costs, but also ways to increase the value to your users, your stakeholders and your staff.

Resources to help:

  • Scrum.org have a decent blog on Agile Leaders which can be found here
  • For Leaders in the Public Sector, the Digital Academy has an Agile for Leaders course, details of which can be found here
  • The Centre for Agile Leadership has a blog on business agility here (and for those in the US they run courses)
  • And the Agile Business Consortium have a white-paper describing the role of culture and leadership within Agile which can be found here

Welcome to the Dark Side.

Last week I started working for @BeDifrent, a business transformation agency working with both Public and Private sector clients to help them deliver #TechForGood.

This is a massive change for me. I spent almost 15 years in the Public Sector, and I always said I was a public servant for life; in my heart I still am. When people have asked me this week what I do, it’s been very odd not to reply “I work in the public sector”.

But the thing is, I still am: Difrent’s clients are predominantly public sector at the moment (at least the ones I’ve been dealing with in my first two weeks). The challenges our clients are facing are so similar to those I’m used to facing, but the opportunities are so much bigger.

At my interview I got asked why I was interested in this role, and my answer was very honest and in two parts.

One, for my career development. I’ve spent three years working at Deputy Director level as a Head of Product in the Public Sector, and I loved my role. Product and Service design are things that I am passionate about, and designing and delivering services to users that really matter, that improve things for them, is the thing that drives me.

But I’d also realised what I did was wider than the label “Head of Product” really allowed for. So much of my effort and time was spent on the cultural and organisational changes organisations needed to make to enable them to deliver, and to change into a Product and User led organisation.

Which is what led me to consider Difrent. When I saw the job advertised I did my homework on the company and the people. Who were they? What made Difrent different? Why did they care?

For years my mentor had been recommending I consider doing a stint outside of the public sector to gain experience from the other side of the table, but the thought had always made me twitch. What I saw from Difrent’s information, from reading up on the amazing Rachel Murphy, and from talking to colleagues who had recently made the jump to the dark side (to both Difrent and other like-minded agencies), made me feel that maybe this was the time to take that leap into the dark.

My focus will be on working with our clients to ensure we can deliver. Supporting our teams and building our capability to ensure we keep doing the right things in the right ways.

So yes, this will give me experience on the other side of the contracting table, and the opportunity to see how the other side live. But the public sector still needs us suppliers: there will always be short term projects and pieces of work where it makes sense to use suppliers to help, rather than massively increase headcounts; and more importantly (for me) we sometimes have more flexibility, and the chance to quickly bring in different perspectives and points of view.

Difrent describe themselves as being activists for change and doing the right thing. They are passionate about delivering things that matter, and only working with clients who meet their #TechForGood ethos.

And for me that is Difrent’s main attraction, they want to help bring about that change, to ensure we are delivering the right things in the right way for the right reasons. Advocating and agitating for that change and real transformation.

As someone who talks a lot about finding their tribe, I look around the company and see a lot of great people passionate about delivering real change. It was especially great to see and hear the diversity and inclusion stats for the company being proudly discussed at events. One of the things that attracted me to Difrent is how much they talk about their people, and how important their people are to them; it feels like a real community of people who care. As stated by Dan Leakey, whatever our makeup, Difrent are 100% awesome.

With credit to @RachelleMoose for the infographic

And while it’s only midway through week two, what I’ve seen so far has already made me feel like the dark side is full of bright lights. I’ve spent time in both Newcastle and Blackpool with some of our delivery teams, getting to understand the outcomes we are trying to deliver and why, and how we can best support our clients to meet their user’s needs.

Darth Vader with wings and a halo

So while I do intend to return to the public sector in the future with lots of new great experience under my belt, for now I feel like the message is “welcome to the dark side, we’re not all bad.”

Building a case based on assumptions

Why you shouldn’t start with the business case.

I’ve been working within Digital transformation for almost ten years now, working on some of the largest projects and programmes within the public sector. From front line services to backend systems, from simple forms to complex benefit processing applications.

One thing that has been a feature of every product or service I’ve been a part of has been the business case. Over the years I’ve worked to challenge and transform the business case itself, making it more agile and less cumbersome, in multiple organisations.

Traditionally business cases have been built on the preconception that you know exactly what solution you want, with the costs and timings estimated accordingly. These behemoth business cases usually clock in at over 25 pages long, with very little room for flexibility or change. The milestones in them are clearly laid out, and everyone sits around clapping themselves on the back for delivering the business case, and then wondering why the Product itself never gets delivered.

A laptop with a document on next to a notebook and smartphone

In the last decade, as more agile methodologies and user centric ways of working have spread, the traditional business case, and the role of those individuals focused on its development, has struggled to keep pace with the changes happening within the projects and programmes themselves.

The traditional method of drafting business cases that map out your roadmap and spend in full is now antiquated, and holds teams back from delivering. New business cases need to instead focus on agreeing design principles and the problem the business is trying to fix, rather than bottoming out the minutiae of the roadmap. Explaining the assumptions that have helped define the scope of the Product or programme, backed up by evidence, is worth more than a cost estimate hammered down to the pounds and pence.

Before doing Product evaluations it is vitally important to ensure all senior stakeholders agree on the assumptions the team is working to (regarding the scope, business needs, user needs etc.). These are the things new business cases should be focused on, not jumping straight to a solution based on product comparisons that have been carried out before everyone has agreed what is in scope.

One anecdote in particular has always stuck with me, in terms of why it’s important to agree your scope, before you start comparing products.

A few years ago, back when I was working with the Office of the Public Guardian on their CRM replacement, the team at the time did some research and analysis into the best options for the business and whether they should be looking to build, buy or configure a new system.

As the business wanted to be a digital by default exemplar, there was an early assumption that the new system would only need to ingest data received via digital channels, or call data for the minimal cases that couldn’t be dealt with digitally. This led to some early product comparisons being done into Products that would meet the business’ requirements.

However, some research and conversations with legal SMEs during the Discovery period highlighted that, as the OPG had responsibilities as a safeguarding body, they needed to be able to accept and analyse data received via any source. This meant they actually needed a system that could ingest and understand faxed data, call data, digital data and handwritten data. The need to ingest and assign metadata to handwritten data meant some products that had been under consideration now had to be ruled out.

Thankfully the business case for the CRM system had been developed with enough flexibility, and enough empowerment and trust within the programme team, that this did not dramatically slow down or derail delivery, as the team were still working within the agreed scope and cost envelope; but the Product comparisons had to be reconsidered and the scope and cost estimates changed accordingly.

While this was a relatively small example, it highlights the importance of validating scope assumptions before pinning down your business case.

Many organisations embracing Digital and agile ways of working have struggled with how they can fit the need for traditional governance structures, and especially the business case, into the culture and ways of working that Digital brings with it. My honest opinion is that you can’t.   

Instead, there has been a movement in some areas, led by the likes of GDS and MoJ, which I have been a part of and have helped lead conversations on for some years, to change the role and format of the business case: to encourage the business case itself to be developed and iterated alongside the Product and Programme it supports. This approach of iterating the business case alongside the agile project lifecycle was first laid out by GDS back in 2014 for digital transformation programmes. The Institute for Government did a report back in October 2018 on how business cases were used, and what could be improved to enable better delivery.

Rather than a business case written almost in isolation by a Programme Manager before going round and round for comments, there is value in treating your business case like any other output from a multidisciplinary team.

A blank notebook next to a laptop

Instead of a 25+ page tome that aims to spell everything out upfront, before the project even commences properly, there is much more value in simply having a couple of pages explaining the problem the project is seeking to fix and why, along with estimated timings and costs for some exploratory work to define key assumptions and answer key questions (like what happens if we don’t fix this? How many people will it affect? Are there any legal requirements we need to be aware of?) that will help your project start on the right foot.

Once you can answer those questions, then you can iterate the business case; taking a stab at estimating how you think you might go about fixing the problem(s), how long it will take to fix the important key problem(s) you identified as needing fixing first, whether there are any products out there in the market that could do this for you, and how much this might roughly cost.

You can then iterate the business case again once you’ve started developing the Product or implementing the identified solution, and once you have validated the assumptions you made previously about the solution to the problem you’re fixing.

This means the business case is a living document, kept up to date with the costs and timetable you’re working to. It means your board are able to discharge their accountabilities much more accurately, ensuring money is being spent in line with the scope of the programme or project.

Empty chairs around a table

Whatever methodology you are using, what matters is being able to explain why you are doing something, and what problem you are trying to fix, before leaping into which software product to buy and how much it’ll cost you. If it’s done right, the business case helps you evidence that you are doing the right thing and spending money in the right way.

Delivering Digital Government 2019

This week Claire Harrison (Head of Architecture from CQC) and I had the opportunity to attend the Delivering Digital Government event run by Worth Systems in The Hague.

The event was focused on how digital has transformed governments across the world, sharing best practices and lessons learned. With speakers from the founding of GDS, like Lord Maude, as well as speakers from the Netherlands, it was a great opportunity to meet others working on solving problems for users in the Government space beyond the UK.

A lot of the talks, especially by the GDS alumni, were things I had heard before, but I actually found that reassuring: over 5 years later I am still doing the right things, and approaching problems in the right way.

It was especially interesting to hear from both Lord Maude and others about the work they have been doing with foreign governments, for example in Canada, Peru and Hawaii. The map Andrew Greenway, previously of GDS and now of Public Digital, shared of the digital government movement was fantastic to see, and really made me realise how big what we are trying to achieve around the world really is.

@ad_greenway sharing a map of the Digital Government transformations happening around the world

The talks from some of the Dutch speakers were really interesting. I loved hearing about the approach the council in The Hague are taking to digital innovation, and their soon to be published digital strategy. One of the pilots the city are running particularly intrigued me: in an effort to reduce traffic, they put sensors onto parking spaces in key shopping streets and all disabled parking bays in the city. This gave them real time information on the use of the parking spaces, and where available spaces were, which successfully decreased traffic from people driving around searching for spaces. They are now looking at how to scale the pilot and manage the infrastructure and sensor data for a ‘smart’ city, working with local businesses to enable new services to be offered.

The draft digital strategy for the city of The Hague

We also heard about the work the Netherlands has been doing to pilot other innovative digital services, like a new service that allows residents in an area to submit planning ideas to improve their neighbourhoods, with the first trial receiving over 50 suggestions, of which 4 have been chosen to take forward. We heard about the support that was given to enable everyone to take part, and it was nice to hear about the 78 year old resident whose suggestion came 5th.

It was also great to hear from Matthij from Novum, a digital innovation lab in the Netherlands, who talked about his own personal journey into Digital transformation, learning from failures and ensuring that you prepare for failure from the start. He also told us about some fascinating research they have been doing into the use of smart speakers, especially with the elderly, to enable better engagement with and use of government services by those that need assistive technologies.

An image of an older lady talking to an AI robot, courtesy of Novum

Realising that 30% of eligible claimants for the Dutch state pension supplement were not claiming it, they believed that this was potentially down to the complexity of the form, and hypothesised that smart speakers might be one way to solve this problem. However, recognising that it was no good to make assumptions and design a solution without properly understanding the problem their users were facing, they ran a small sample test with elderly users to see whether they could use smart speakers to check the date of their next pension payment (one of the largest contributors to inbound calls to the Sociale Verzekeringsbank). They found that not only could elderly users use the smart speakers, but that the introduction of smart speakers into their homes decreased loneliness dramatically.

There were other good sessions with James Stewart from GDS & Public Digital on technology within digital, and an interesting panel session at the end. Every session was good, and I learnt something new at each one. My only grumble from the day was the lack of diversity in the speakers, which the organisers themselves put their hands up and admitted before they were called out on it. A quick call on twitter and the ever amazing Joanne Rewcaslte from DWP shared a list of amazing female speakers, so hopefully that will help with the next event.

One key thing I took away from the day is that the challenges are the same everywhere, but the message is also the same: involve users from the start. In terms of practical steps everyone could start tomorrow, Matthij talked about making sure you interview 5 end users, and some simple prototypes you could develop to engage your users.

This slide from Lord Maude summed up three of the main things any organisation needs to succeed in delivering Digital Transformation

Lord Maude talked about the importance of a strong mandate, Novum talked about having a good understanding of the problem you are trying to fix at the start, and the digital strategy from The Hague highlights the fact they want everyone to be able to participate and to deliver a personal service to their citizens. As Andrew Greenway said, the key thing is to “start with user needs”.

The second key message from the day was that, as Lord Maude put it… “Just do it!” A digital strategy delivers nothing; the strategy should be delivery. Instead of spending months on developing a digital strategy, “you just have to start” by doing something. This in turn will help you develop your strategy once you understand the problems you are trying to solve, the people you will need, and the set up and way of doing things that works best in your organisation. This was a message reinforced by every speaker throughout the day.

@jystewart sharing a statement from Ivana Osores from Interbank… “You have to just start”

The third key message was the importance of good leadership, good teams and good people. Talk in the open about the failures you’ve made and what you have learned. Build strong multidisciplinary and diverse teams. As Andrew Greenway said, Start with teams, not apps or documents. In the round table discussion on building capability we spent a lot of time discussing the best ways to build capability, and the fact that in order to get good people and be able to keep them, and to go on to develop good things, you need strong leadership that is bought in to the culture you need to deliver.

I left the day with a number of good contacts, had some great conversations, and felt reinvigorated and reassured. Speaking to Worth I know they are aiming to run another event next year, with both an even more diverse international cohort and an equal number of female speakers, and I for one will definitely be signing up again for the next event.

Lord Maude, myself and Claire Harrison at the social gathering after the event

Service Standards for the whole service

How the service standards have evolved over time….

Gov.uk has recently published the new Service Standards for government and public sector agencies to use when developing public facing transactional services.

I’ve previously blogged about why the Service Standards are important in helping us develop services that meet user needs, as such I’ve been following their iteration with interest.

The service standards are a labour of love that have been changed and iterated a couple of times over the last 6 years. The initial digital by default service standard, developed in 2013 by the Government Digital Service, came fully into force in April 2014 for use by all transactional Digital Products being developed within Government; it was a list of 26 standards all Product teams had to meet to be able to deliver digital products to the public. The focus was on creating digital services so good that people preferred to use them, driving up digital completion rates and decreasing costs by moving to digital services. It included making plans for the phasing out of alternative channels, and encouraged teams to keep non-digital sections of the service only where legally required.

A number of fantastic products and services were developed during this time, leading the digital revolution in government and vastly improving users’ experience of interacting with government. However, these Products and Services were predominantly dubbed ‘shiny front ends’. They had to integrate with clunky back end services, and often featured drop out points from the digital service (like the need for wet signatures) that were difficult to change. This meant the ‘cost per transaction’ was actually very difficult to calculate; and yet standard 23 insisted all services must publish their cost per transaction as one of the 4 minimum key performance indicators required for the performance platform.

The second iteration of the digital service standard was developed in 2015. It reduced the number of standards services had to meet to 18, and was intended to be more Service focused rather than Product focused, with standard number 10 giving some clarity on how to ‘test the service end to end’. It grouped the standards together into themes to help the flow of the service standard assessments, and it clarified and emphasised a number of the points to help teams develop services that met user needs. While standard 16 still specified you needed a plan for reducing your cost per transaction, it also advised you to calculate how cost effective your non transactional user journeys were, and to include the ‘total cost’, covering things like printing, staff costs and fixtures and fittings.
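
To make the ‘total cost’ point concrete: cost per transaction is broadly the full cost of running the service (including the non-digital costs listed above) divided by the number of completed transactions. The figures in this quick sketch are invented purely to illustrate the arithmetic.

```python
# Illustrative only: the figures are invented, but this is broadly the sum
# behind a 'cost per transaction' KPI once non-digital costs are included.
costs = {
    "digital_running_costs": 250_000,   # hosting, support, licences
    "staff_costs": 400_000,             # caseworkers, contact centre
    "print_and_post": 60_000,           # paper channel
    "estates_and_fittings": 40_000,     # share of offices, equipment
}
completed_transactions = 180_000

total_cost = sum(costs.values())
cost_per_transaction = total_cost / completed_transactions

print(f"Total cost: £{total_cost:,}")                         # £750,000
print(f"Cost per transaction: £{cost_per_transaction:.2f}")   # roughly £4.17
```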

However, as Service design as a methodology began to evolve, the standards were criticised for still being too focused on the digital element of the service. Standard 14 still stated that ‘everyone must be encouraged to use the digital service’. There were also a lot of questions about how the non digital elements of a service could be assessed, and a feeling that the standards didn’t cover how large or complicated some services could be.

Paper and Digital

The newest version of the Service standard has been in development since 2017, a lot of thought and work has gone into the new standard, and a number of good blogs have been written about the process the team have gone through to update them. As a member of some of the early conversations and workshops about the new standards I’ve been eagerly awaiting their arrival.

While the standards still specifically focus on public facing transactional services, they have specifically been designed for full end to end services, covering all channels users might use to engage with a service. There are now 14 standards, but the focus is now much wider than ‘Digital’, as is highlighted by the fact the word Digital has been removed from the title!

Standard number 2 highlights this new holistic focus, acknowledging the problems users face with fragmented services. It is now complemented by Standard number 3, which specifies that you must provide a joined up experience that meets all user needs across all channels. While the requirement to measure your cost per transaction and digital take up is still there for central government departments, it’s no longer the focus; instead the focus of standard 10 is now on identifying metrics that will indicate how well the service is solving the problem it’s meant to solve.

For all the changes, one thing has remained the same throughout – the first standard, upon which the principles of transformation in the public sector are built: understand the needs of your users.

Apparently the new standards are being rolled out for Products and Services entering Discovery after the 30th of June 2019, and I for one am looking forward to using them.

Launch!