

Notes from some Digital Service Standard Assessors on the Beta Assessment

The Beta Assessment is probably the one I get the most questions about, primarily: “when do we actually go for our Beta Assessment, and what does it involve?”

Firstly, what is an Assessment? Why do we assess products and services?

If you’ve never been to a Digital Service Standard Assessment it can be daunting, so I thought it might be useful to pull together some notes from a group of assessors to show what we are looking for when we assess a service.

Claire Harrison (Chief Architect at Homes England and a leading Tech Assessor) and Gavin Elliot (Head of Design at DWP and a leading Design Assessor; you can find his blog here) helped me pull together some thoughts about what a good assessment looks like, and what we are specifically looking for when it comes to a Beta Assessment.

I always describe a good assessment as the team telling the assessment panel a story. So, what we want to hear is:

  • What was the problem you were trying to solve?
  • Who are you solving this problem for? (who are your users?)
  • Why do you think this is a problem that needs solving? (What research have you done? Tell us about the users’ journey)
  • How did you decide to solve it and what options did you consider? (What analysis have you done?) 
  • How did you prove the option you chose was the right one? (How did you test this?)

One of the great things about the Service Manual is that it explains what each delivery phase should look like, and what the assessment team are considering at each assessment.

So what are we looking for at a Beta Assessment?

By the time it comes to your Beta Assessment, you should have been running your service for a little while with a restricted number of users in a Private Beta. You should have real data gathered from real users who were invited to use your service, and you should have iterated the service several times based on everything you have learnt.

Before you are ready to move into Public Beta and roll your service out nationally, there are several things we want to check during an assessment.

You need to prove you have considered the whole service for your users and have provided a joined up experience across all channels.

  • We don’t want to hear about just the ‘digital’ experience; we want to understand how you have provided (or will provide) a consistent and joined up experience across all channels.
  • Are there any paper or telephony elements to your service? How have you ensured that those users have received a consistent experience?
  • What changes have you made to the back end processes and how has this changed the user experience for any staff using the service?
  • Were there any policy or legislative constraints you had to deal with to ensure a joined up experience?
  • Has the scope of your MVP changed at all so far in Beta given the feedback you have received from users? 
  • Are there any changes you plan to implement in Public Beta?

As a Lead Assessor, this is where I always find that teams who have suffered from a lack of empowerment, or from organisational silos, may struggle.

If the team are only empowered to look at the digital service, and have struggled to make any changes to the paper, telephony or face-to-face channels due to siloed working between Digital and Ops in their Department (as an example), the digital product will offer a very different experience to the rest of the service.

As part of that discussion we will also want to understand how you have supported users who need help getting online, and what assisted digital support you are providing.

At previous assessments you should have had a plan for the support you intended to provide; you should now be able to talk through how you are putting that plan into action. This could be telephony support or a web chat function, but we want to ensure the support being offered is (or will be) consistent with the wider service experience and meets your users’ needs. We also want to understand how it’s being funded and how you plan to publish the accessibility information for your service.

We also expect by this point that you have run an accessibility audit and have carried out regular accessibility testing. It’s worth noting that if you don’t have anyone in house who is trained in running accessibility audits (we’re lucky in Difrent as we have a DAC assessor in house), you will need to factor in the time it takes to get an audit booked in and run, well before you think about your Beta Assessment.

Similarly, by the time you go for your Beta Assessment we would generally expect a Welsh language version of your service to be available. Again, this needs to be planned well in advance as it can take time, and is not (or shouldn’t be) a last minute job! In my experience it’s something a lot of teams forget to prioritise and plan for.

And finally, assuming you are planning to put your service on GOV.UK, you’ll need to have agreed a few things with the GOV.UK team at GDS before going into public beta.

Again, while it shouldn’t take long to get these things sorted with the GOV.UK team, they can sometimes have backlogs, so it’s worth making sure you’ve planned in enough time.

The other things we will want to hear about are how you’ve ensured your service is scalable and secure. How have you dealt with any technical constraints? 

The architecture and technology – Claire

From an architecture perspective, at the Beta phase I’m still interested in the design of the service, but I also focus on its implementation and on the provisions in place to support the sustainability of the service. My mantra is ‘end-to-end, top-to-bottom service architecture’!

An obvious consideration in both the design and deployment of a service is security – how the solution conforms to industry, government and legal standards, and how security is baked into a good technical design. With data, I want to understand its characteristics and lifecycle: is the data identifiable, how is it collected, where is it stored and hosted, who has access to it, is it encrypted, and if so when, where and how? I find it encouraging that in recent years there has been a shift towards thinking not only about how to prevent security breaches but also about how to recover from them.
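To make that last question concrete, here is a minimal sketch (in Python, using the widely used cryptography library) of one possible answer to ‘is it encrypted, and if so when, where and how?’: an identifiable field encrypted before it is written to storage. The field and the key handling are illustrative assumptions, not a prescribed design; in a real service the key would come from a managed key store, not sit alongside the data.

```python
# Illustrative only: encrypt an identifiable field before it is stored.
# Key management is deliberately simplified; in practice the key would be
# fetched from a KMS/HSM, never generated and held next to the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # assumption: stands in for a managed key store
cipher = Fernet(key)

nino = b"QQ123456C"                  # example identifiable data item
stored_value = cipher.encrypt(nino)  # this ciphertext is what lands in the database

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(stored_value) == nino
```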

Security is sometimes cited as a reason not to code in the open, but in actual fact it is hardly ever a valid one. As services are assessed on this, there needs to be a very good reason why code can’t be open. After all, a key principle of GDS is reuse – in both directions – for example making use of common government platforms, and publishing your own code so it can be used by others.

Common government platforms such as GOV.UK Pay and Notify can help with some of a technologist’s decisions and should be used as the default, as should open standards and open source technologies. Where this isn’t the case, I’m really interested in the selection and evaluation of the tools, systems, products and technologies that form part of the service design. This might include integration and interoperability, constraints in the technology space, vendor lock-in, route to procurement, total cost of ownership, alignment with internal and external skills, and so on.
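To illustrate the ‘reuse by default’ point: sending an email through GOV.UK Notify’s published Python client takes only a few lines, compared with building and securing an email pipeline in-house. The API key, template ID and personalisation values below are placeholders.

```python
# A sketch of reusing a common platform (GOV.UK Notify) rather than
# building notification plumbing in-house. All values are placeholders.
from notifications_python_client.notifications import NotificationsAPIClient

client = NotificationsAPIClient("your-api-key-here")  # placeholder API key

client.send_email_notification(
    email_address="applicant@example.com",               # placeholder recipient
    template_id="f33517ff-2a88-4f6e-b855-c550268ce08a",  # placeholder template UUID
    personalisation={"first_name": "Sam", "reference": "ABC123"},
)
```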

My advice would be to think about your technology choices as a collective, rather than piecemeal as and when a particular tool or technology is needed. Yesterday I gave a peer review of a solution under development where one tool had been deployed in isolation, not as part of an evaluation of the full technology stack. This meant there were integration problems as new technologies were added to the stack.

The way that a service evolves is really important too, along with the measures in place to support its growth. Cloud based solutions help take care of some of the more traditional scalability and capacity issues, and I’m interested in understanding the designs around these, as well as any other mitigations in place to help assure the availability of the service. As part of the Beta assessment, the team will need to show their plan for the event of the service being taken temporarily offline – details such as strategies for dealing with incidents that impact availability, the strategy for recovering from downtime, and how these have been tested.
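Part of that evidence can be as simple as an automated availability probe feeding the incident process. The sketch below is a hypothetical example rather than prescribed tooling; the health-check URL, timeout and retry budget are all assumptions.

```python
# Hypothetical availability probe: the URL, retries and thresholds are
# assumptions for illustration, not a mandated approach.
import time
import requests

HEALTHCHECK_URL = "https://www.example-service.gov.uk/healthcheck"  # hypothetical

def service_is_healthy(retries: int = 3, base_backoff: float = 2.0) -> bool:
    """Return True if the service answers healthily within the retry budget."""
    for attempt in range(retries):
        try:
            if requests.get(HEALTHCHECK_URL, timeout=5).status_code == 200:
                return True
        except requests.RequestException:
            pass  # a network error counts the same as an unhealthy response
        time.sleep(base_backoff * (2 ** attempt))  # exponential backoff between tries
    return False

if __name__ == "__main__":
    if not service_is_healthy():
        # In a real service this would page on-call and start the documented
        # incident process rather than just printing.
        print("Service unhealthy: trigger the incident response plan")
```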

Although a GDS Beta assessment focuses on a specific service, it goes without saying that a good technologist will be mindful of how the service they’ve architected impacts the enterprise architecture, and vice-versa. For example, if a new service is built with microservices and introduces an increased volume and velocity of data, does the network need to be strengthened to meet the increase in traffic traversing it?

Legacy technology (as well as legacy ‘commercials’ and ways of working) is always on my mind. Obviously during an assessment a team can show how they address legacy within the scope of that particular service, be it some form of integration with legacy or applying the strangler pattern, but organisations really need to put as much effort into dealing with legacy as they do into new digital services. Furthermore, they need to think about how to avoid creating the ‘legacy systems of the future’ by ensuring the sustainability of their service, from a technical, financial and resourcing perspective. I appreciate this isn’t always easy! However, I do believe that GDS should, and will, put much more scrutiny on organisations’ plans to address legacy issues.
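For readers unfamiliar with the strangler pattern mentioned above: the idea is a thin routing layer that sends already-rebuilt journeys to the new service and everything else to the legacy system, so you can migrate route by route and retire legacy gradually. A minimal sketch follows; the hostnames, paths and the use of Flask are all illustrative assumptions.

```python
# A minimal strangler-pattern router: migrated paths go to the new
# service, everything else still hits legacy. Hostnames are hypothetical.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

NEW_SERVICE = "https://new.example-service.internal"        # hypothetical
LEGACY_SERVICE = "https://legacy.example-service.internal"  # hypothetical
MIGRATED_PREFIXES = ("/apply", "/track")                    # journeys already rebuilt

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
def route(path: str) -> Response:
    # Choose the upstream based on whether this journey has been migrated yet.
    upstream = NEW_SERVICE if f"/{path}".startswith(MIGRATED_PREFIXES) else LEGACY_SERVICE
    resp = requests.request(
        method=request.method,
        url=f"{upstream}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        timeout=30,
    )
    return Response(resp.content, status=resp.status_code)
```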

One final point from me: teams should embrace an assessment. Clearly the focus is on passing, but regardless of the outcome there’s lots of value in the feedback you gain. It’s far better to get constructive feedback during the assessment stages than to deal with disappointed stakeholders further down the line, and probably to spend more time and money strengthening or redesigning the technical architecture.

How do you decide when to go for your Beta Assessment?

Many services (for both good and bad reasons) have struggled with the MVP concept, and as such the journey to get their MVP rolled out nationally has taken a long time and included more features and functionality than teams might have initially imagined.

This can make it very hard to decide when you should go for an Assessment to move from Private to Public Beta. If your service is going to be rolled out to millions of people, or has a large number of user groups with very different needs, it can be hard to decide what functionality is needed in Private Beta vs. Public Beta, and what can be saved until Live and rolled out as additional functionality.

The other thing to consider is: what does your rollout plan actually look like? Are you able to go national with the service once you’ve tested with a few hundred people from each user group? Or, as is more common with large services like NHS Jobs, where you are replacing an older service, does the service need to be rolled out in a very set way? If so, you might need to keep inviting users in until full rollout is almost complete, making it hard to judge when the right time for your Beta Assessment is.

There is no right or wrong answer here; the main thing to consider is that you will need to understand all of the above before you can roll your service out nationally, and be able to tell that story to the panel successfully.

This is because, theoretically, most of the heavy lifting is done in Private Beta; once you have rolled your service out into Public Beta, the main things left to test are whether your service scales and works as you anticipated. Admittedly this (combined with confusion about the scope of an MVP) is why most services never actually bother with their Live Assessment. For most services, once you’re in Public Beta the hard work has been done; there’s nothing more to do, so why bother with a Live Assessment? But that’s an entirely different blog!

Reviewing the service together.

 

The people getting left behind

Why, in the era of remote working, we need to stop thinking about ‘digital services’ as a separate thing, and just think about ‘services’.

Last night, when chatting to @RachelleMoose about whether digital is a privilege (which she’s blogged about here), I was reminded of a conversation from a few weeks ago with @JanetHughes about the work DEFRA were doing, and their remit as part of the response to the current pandemic (which, it turns out, covers not just the obvious things like food and water supplies, but also questions like: what do we do about zoos and aquariums during a lockdown?!)

A giraffe

This in turn got me thinking about the consequences of lockdown that we might never really have considered before the COVID-19 pandemic hit, and the impact a lack of digital access has on people’s ability to access public services.

There are many critical services we offer every day, vital to people’s lives, that we never previously imagined as ‘digital’ services but which are now being forced to rely on digital as a means of delivery. Not only are those services struggling to adapt, but we are also at risk of forgetting the people for whom digital isn’t an easy option.

All ‘digital’ services have to prove they have considered digital inclusion. Back in 2014 it was found that approximately 20% of Britons lacked basic digital literacy skills, and the Digital Literacy Strategy aimed to have everyone who could be digitally literate digitally able by 2020. However, it was believed that 10% of the population would never be able to get online, and the Assisted Digital paper published in 2013 set out how government would enable equal access to ensure digitally excluded people were still able to use services. A report by the ONS last year backs this assumption up, showing that in 2019, 10% of the population were still digitally excluded.

However, as the effects of lockdown begin to be felt, we need to think about whether our assisted digital support goes far enough, whether we are really approaching the development of public services holistically, how we ensure those services are future proof, and whether we are truly including everyone.

There have been lots of really interesting articles and blogs about the impact of digital (or the lack of access to digital) on children’s education, with bodies like Ofsted expressing concerns that the lockdown will widen the education gap between children from disadvantaged backgrounds and children from more affluent homes; only 5% of the children classified as ‘in need’ who were expected to still be attending school have been turning up.

An empty school room

According to the IPPR, around a million children do not have access to a device suitable for online lessons. The DfE announced last month that they were offering free laptops and routers to families in need; however, a recent survey showed that while over a quarter of teachers in private schools were interacting with their pupils online daily, fewer than 5% of those in state schools were. One academy chain in the North West is still having to print home learning packs and arrange for families to physically pick up and drop off school work.

The Good Things Foundation has similarly shared its concerns about the isolating effects of lockdown and the digital divide being created, not just for families with children, but for people with disabilities, elderly or vulnerable people, and households in poverty. Almost 2 million homes have no internet access, and 26 million rely on pay as you go data to get online. There has been a lot of concern raised about people in homes affected by domestic violence who have no access to phones or the internet to get help. Many companies are doing what they can to help vulnerable people stay connected or receive support, but the situation has highlighted that our current approach to designing services is possibly not as fit for the future as we thought.

The current pandemic has highlighted how vital it is for those of us working in or with the public sector to understand users and their needs, and to ensure everyone can access services. The Digital Service Standards were designed with ‘digital’ services in mind; six months ago, no one considered that children’s education or people’s healthcare would need to be assessed against those same standards.

The standards themselves say that the criteria for assessing products or services apply if either of the following is true:

  • getting assessed is a condition of your Cabinet Office spend approval
  • it’s a transactional service that’s new or being rebuilt – your spend approval will say whether what you’re doing counts as a rebuild

The key phrase here for me is ‘transactional service’, i.e. a service that allows:

  • an exchange of information, money, permission, goods or services
  • the submission of personal information that results in a change to a government record

While we may never have considered education as a transactional service before now, as we consider ‘the new normal’ we as service designers and leaders in the transformation space need to consider which of our key services are transactional, how we are providing a joined up experience across all channels, and what holistic service design really means. We need to move away from thinking about ‘digital and non-digital services’, and we can no longer ‘wait’ to assess new services; instead we need to step back and consider how we can offer ANY critical service remotely, should we need to do so.

A child using a tablet

Digital can no longer be the thing that defines those with privilege. COVID-19 has proved that, now more than ever, it is an everyday essential, and we must adapt our policies and approach to service design to reflect that. As such, I think it’s time we reassess whether the Digital Service Standards should be applied to more services than they currently are, which services we consider to be ‘digital’, and whether that should even be a differentiator anymore. In a world where all services need to be able to operate remotely, we need to approach how we offer our services differently if we don’t want to keep leaving people behind.

Matt Knight has also recently blogged on the same subject, so I’m linking to his blog here as it is spot on!

The Day Data went Viral

This week the UK Government and Parliament petitions website has been getting a lot of attention, both good and not so good. This site has been a great example of how the Digital Service Standards work to ensure what we deliver in the public sector meets user needs.

On the 20th of February a petition was created on the petitions website to Revoke Article 50 and remain within the EU; on the 21st of March the petition went viral, and as of writing this blog it has 5,714,965 signatures (up from 5,536,580 and then 5,608,428 just while I’ve been writing). It is the biggest petition started since the site’s launch. Not only that, it is now the most supported petition in the world, ever.

Screenshot of the petitions website

The first version of the site was developed in 2010, after the election, and was originally intended to replace the Number 10 petition site, which had a subtly different purpose. The new version of the Parliamentary petitions site was then launched in 2015 as an easy way for users to make sure their concerns were heard by government and parliament. The original version of the service was developed by Pete Herlihy and Mark O’Neill back in the very early days of Digital Government, before the Digital Service Standard was born.

The site was built using open source code, meaning anyone can access the source code used to build it, which makes it easy to interrogate the data. A number of organisations, like unboxed, have developed tools to help map the signatories of petitions based on the data available.

Screenshot of the unboxed website

Within the Government’s Digital Service Standards, using open source code has always been one of the standards some departments really struggle with; it’s digital standard number 8, and is often a bit contentious. But looking at the accusations being levelled at the Revoke Article 50 petition – that people outside of the UK are unfairly signing it, that people are creating fake emails to sign it, and so on – it shows why open source code is so important. While the petitions committee won’t comment in detail on the security measures they use, by examining the code you can see the validation the designers built into the site to try to ensure it is being used securely and fairly.

britorbot data analysis

Speaking of security measures (that’s digital service standard number 7, making sure the service has the right level of security), the petitions site apparently uses both automated and manual techniques to spot bots, disposable email addresses and other fraudulent activity. This works alongside digital standard number 15, using tools for analysis that collect performance data, to monitor signing patterns and the like. Analysing the data, 96% of signatories have been within the UK (what the committee would expect from a petition like this).

tweet from the Petitions Committee from 22nd March
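If you want to see how approachable the data is, each petition is exposed as JSON on the petitions site. Here’s a minimal sketch of reproducing that UK percentage; it assumes the public endpoint, the petition ID and the signatures_by_country field are still as they were at the time of writing.

```python
# Sketch: check what share of signatures came from the UK, assuming the
# public JSON endpoint and its field names are unchanged.
import requests

PETITION_ID = 241584  # assumption: the Revoke Article 50 petition

resp = requests.get(
    f"https://petition.parliament.uk/petitions/{PETITION_ID}.json", timeout=30
)
resp.raise_for_status()
attrs = resp.json()["data"]["attributes"]

total = attrs["signature_count"]
uk = next(
    country["signature_count"]
    for country in attrs["signatures_by_country"]
    if country["code"] == "GB"
)

print(f"Total signatures: {total:,}")
print(f"Signed from the UK: {uk / total:.1%}")
```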

Another key service standard is building a service that can be iterated and improved on a frequent basis (digital standard number 5), which meant that when the petition went viral, the team were able to spot that the site wasn’t coping with the frankly huge amount of traffic headed its way, and quickly doubled the capacity of the service within a handful of hours.

tweet from Pete Herlihy (product manager – petitions website)
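The team haven’t published exactly how they scaled, so treat the following purely as a hypothetical sketch of what ‘doubling capacity in a handful of hours’ can look like on a cloud platform, assuming an AWS auto scaling group managed with boto3; the group name is made up.

```python
# Hypothetical: double the desired instance count of an auto scaling
# group to absorb a viral traffic spike. The group name is made up, and
# the group's MaxSize would also need to allow the new capacity.
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "petitions-web"  # hypothetical auto scaling group name

current = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[GROUP]
)["AutoScalingGroups"][0]["DesiredCapacity"]

autoscaling.set_desired_capacity(
    AutoScalingGroupName=GROUP,
    DesiredCapacity=current * 2,
    HonorCooldown=False,  # scale immediately rather than waiting out cooldown
)
```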

This also calls out the importance of testing your service end to end (standard number 10) and ensuring it’s scalable; and if and when it goes down (as the petitions website did a number of times, given the large amount of traffic that hit it), you need to have a plan for what to do (standard number 11), which for the poor Petitions team meant some very polite, apologetic messages being shared over social media while the team worked hard and fast to get the service back online.

tweet from the Petitions Committee from 21st March

The staggering volume of traffic to the site, and the meteoric speed at which the petition went viral, show that at its heart, the team who developed the service had followed Digital Service Standard number 1: understand your users’ needs.

In today’s culture of social media, people have high expectations of their online interactions with services and departments; we live in a time of near instant news, entertainment and information, and an expectation of having the world available at our fingertips at the click of a button. People want and need to feel that their voice is being heard, and the petitions website tapped into that need, delivering effectively under unprecedented conditions.

Interestingly, when the site was first developed, Mark himself admitted they didn’t know if anyone would use it. There was a lot of concern that 100,000 signatures was too high a figure to trigger a debate, but within the first 100 days six petitions had already reached the threshold and become eligible for a debate in the Commons. Pete wrote a great blog back in 2011 summing up what those first 100 days looked like.

It’s an example of great form design and, following digital service standard number 12, it is simple and intuitive to use. This has not been recognised or celebrated enough over the last few days: both the hard work of the team who developed the service and that of those maintaining and iterating it today. In my opinion this service has proven over the last few days that it is a success, and that the principles behind the Digital Service Standards that provided the design foundations for the site are still relevant and adding value today.

tweet from Mark O’Neill (part of the original team)

Round and round we go.

In other words: Agile isn’t linear, so stop making it look like it is.

Most people within the public sector who work in digital transformation have seen the GDS version of the Agile delivery lifecycle, which aims to demonstrate that developing services starts with user needs, and that projects will move from Discovery to Live, with iterations at each stage of the lifecycle.

The problem with this image of Agile is that it still makes the development of Products and Services seem linear, which it very rarely is. Most Products and Services I know of, certainly the big complex ones, will need several cracks at a Discovery. They move into Alpha and then back to Discovery. They may get to Beta, stop, and then start again. The more we move to a Service Design mentality and approach problems holistically, the more complex we realise they are, which means that developing Products and Services that meet user needs is very rarely as simple and straightforward as the GDS lifecycle makes it appear.

And this is fine, one of the core principles of Agile is failing fast. Stopping things rather than carrying on regardless. We iterate our Products and Services because we realise there is more to learn. More to Discover.

The problem is that organisations new to Agile and the GDS way of working see the above image, and its more linear portrayal seems familiar and understandable to them, because they are generally used to Waterfall projects, which are linear. So when something doesn’t move from Alpha to Beta, when it needs to go back into Discovery, they see that as a failure of the team, of the Project. Sometimes it is, but not always; sometimes the team have done exactly what they were meant to do. They realised the problem identified at the start wasn’t the right problem to fix, because they tested their assumptions and learned from their research. This is what we want them to do.

The second problem with the image put forward in the GDS lifecycle is that it doesn’t demonstrate how additional features are added. The principle of Agile is getting the smallest usable bit of your Product or Service out and being used by users as soon as you can, the minimum viable product (MVP), and this is right. But once you have your MVP live, what then? The Service Manual talks about continuing to iterate in Live, but if your Product or Service is large or complex, your MVP might serve just one of your user groups, and now you need to develop the rest. So what do you do? Do you go back into Discovery for the next user segment? (Ideally, if you need to, yes.) But the GDS lifecycle doesn’t show that.

As such, organisations new to Agile don’t factor that into their business cases, and it’s not within the expectations of their stakeholders. This is where Projects end up with bloated scopes and get stuck forever in Discovery or Alpha, because the Project is too big to deliver.

With Public Services being developed to the Digital Service Standards set by GDS, we need a version of the lifecycle that breaks that linear mindset and helps everyone understand that within an Agile project you will go around and around the lifecycle, and backwards and forwards, several times before you are done.

Agile is not a sprint, a race, or a marathon, it’s a game of snakes and ladders. You can get off, go back to the start or go back a phase or two if you need to. You win when all your user needs are met, but as user needs can change over time, you have to keep your eye on the board, and you only really stop playing once you decommission your Product or Service!