

The Day Data Went Viral

This week the UK Government and Parliament petitions website has been getting a lot of attention, both good and not so good. The site is a great example of how the Digital Service Standards work to ensure that what we deliver in the public sector meets user needs.

On the 20th of February a petition was created on the petitions website to Revoke Article 50 and remain within the EU. On the 21st of March the petition went viral, and as of writing this blog it has 5,714,965 signatures. It is the biggest petition the site has seen since its launch. Not only that, it is now the most supported petition in the world, ever.

[Screenshot of the petitions website]

The first version of the site was developed in 2010, after the election, originally intended to replace the Number 10 petitions site, which had a subtly different purpose. The new version of the Parliamentary petitions site was then launched in 2015, as an easy way for users to make sure their concerns were heard by government and parliament. The original version of the service was developed by Pete Herlihy and Mark O’Neill back in the very early days of Digital Government, before the Digital Service Standard was born.

The site was built using open source code, meaning anyone can access the source code used to build the site, which makes it easy to interrogate the data. A number of organisations, like unboxed, have developed tools based on the available data to help map signatories of petitions.
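The live data behind each petition is published as JSON alongside its web page, which is what makes this kind of analysis possible. As a rough sketch (the petition ID 241584 and the exact field names here are assumptions based on the public site, and may change):

```python
import requests

# Sketch only: 241584 is assumed here to be the Revoke Article 50
# petition, and the field names below reflect the public JSON at the
# time of writing.
URL = "https://petition.parliament.uk/petitions/241584.json"

response = requests.get(URL, timeout=10)
response.raise_for_status()
attributes = response.json()["data"]["attributes"]

print(attributes["action"])           # the petition title
print(attributes["signature_count"])  # total signatures so far

# Signatures broken down by country, which is the data the
# mapping tools build on.
for country in attributes["signatures_by_country"][:5]:
    print(country["name"], country["signature_count"])
```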

[Screenshot of the unboxed website]

Within the Government’s Digital Service Standards, using open source code has always been one of the standards some departments have really struggled with; it’s digital standard number 8, and is often a bit contentious. But looking at the accusations being levelled at the Revoke Article 50 petition (that people outside the UK are unfairly signing it, that people are creating fake email addresses to sign it, and so on), it shows why open source is so important. While the Petitions Committee won’t comment in detail about the security measures they use, by examining the code you can see the validation the designers built into the site to try and ensure it was being used securely and fairly.
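As an illustration of the kind of validation that becomes visible when code is open, here is a minimal sketch; it is not the actual e-petitions code, and the regular expression, the blocklist and the rules are all invented for the example:

```python
import re

# A tiny blocklist standing in for the real thing; the actual
# service's rules live in its open source repository and are more
# sophisticated than this.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_valid(email: str) -> bool:
    """Basic checks a signing form might run before sending a
    confirmation email: plausible format, no known throwaway domain."""
    if not EMAIL_PATTERN.match(email):
        return False
    domain = email.rsplit("@", 1)[1].lower()
    return domain not in DISPOSABLE_DOMAINS

print(looks_valid("someone@example.co.uk"))  # True
print(looks_valid("bot@mailinator.com"))     # False
```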

[britorbot data analysis]

Speaking of security measures, that’s digital service standard number 7: making sure the service has the right level of security. The petitions site apparently uses both automated and manual techniques to spot bots, disposable email addresses and other fraudulent activity. This works alongside digital standard number 15, using tools for analysis that collect performance data, to monitor signing patterns and the like. Analysing the data, 96% of signatories have been within the UK (what the committee would expect from a petition like this).
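The committee’s real techniques aren’t public, but here is a crude sketch of what monitoring signing patterns might look like (the per-minute counts and the threshold are invented for the example):

```python
from statistics import mean

# Invented per-minute signature counts; a real monitor would read
# these from the service's own metrics.
signatures_per_minute = [310, 295, 330, 4200, 305, 315]

def flag_spikes(counts, factor=5):
    """Flag any minute whose count is more than `factor` times the
    average of the other minutes - a very rough bot heuristic."""
    flagged = []
    for i, count in enumerate(counts):
        others = counts[:i] + counts[i + 1:]
        if count > factor * mean(others):
            flagged.append((i, count))
    return flagged

print(flag_spikes(signatures_per_minute))  # [(3, 4200)]
```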

[Tweet from the Petitions Committee, 22nd March]

Another key service standard is building a service that can be iterated and improved on a frequent basis (digital standard number 5), which meant that when the petition went viral, the team were able to spot that the site wasn’t coping with the frankly huge amount of traffic headed its way and quickly doubled the capacity of the service within a handful of hours.
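We don’t know the petitions team’s actual infrastructure or tooling, but as a sketch of how “double the capacity” can be a few lines of code rather than a procurement exercise, assuming a hypothetical service running in an AWS Auto Scaling group:

```python
import boto3

# Hypothetical group name; the petitions team's real infrastructure
# is not public, so this is illustrative only.
GROUP = "petitions-web"

autoscaling = boto3.client("autoscaling")

# Read the current capacity...
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[GROUP]
)["AutoScalingGroups"][0]

# ...and ask for twice as many instances. (In practice the group's
# MaxSize may also need raising first.)
autoscaling.set_desired_capacity(
    AutoScalingGroupName=GROUP,
    DesiredCapacity=group["DesiredCapacity"] * 2,
    HonorCooldown=False,
)
```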

[Tweet from Pete Herlihy, product manager, petitions website]

This also calls out the importance of testing your service end to end (standard number 10) and ensuring it’s scalable; and if and when it goes down (as the petitions website did a number of times, given the large amount of traffic that hit it), you need to have a plan for what to do (standard number 11). For the poor Petitions team, that meant some very polite, apologetic messages being shared over social media while they worked hard and fast to get the service back online.
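An end-to-end check doesn’t have to be elaborate. Here is a sketch of the kind of smoke test that walks the most important user journey; the base URL is the real site, but the endpoint and everything else are assumptions for the example:

```python
import requests

def smoke_test(base_url="https://petition.parliament.uk"):
    """Walk the core user journey end to end: can a user reach the
    site and load the petitions list? Returns True on success."""
    home = requests.get(base_url, timeout=5)
    if home.status_code != 200:
        return False
    petitions = requests.get(f"{base_url}/petitions.json", timeout=5)
    return petitions.status_code == 200

if not smoke_test():
    # Standard 11: have a plan for when it goes down - here, just
    # a stand-in for alerting the team and posting a status update.
    print("Service degraded: alert the team and post a status update.")
```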

[Tweet from the Petitions Committee, 21st March]

The staggering volume of traffic to the site, and the meteoric speed with which the petition went viral, show that at its heart, the team who developed the service had followed Digital Service Standard number 1: understand your users’ needs.

In today’s culture of social media, people have high expectations of services and departments in their interactions online. We live in a time of near-instant news, entertainment and information, and an expectation of having the world available at our fingertips at the click of a button. People want and need to feel that their voice is being heard, and the petitions website tapped into that need, delivering effectively under unprecedented conditions.

Interestingly, when the site was first developed, Mark himself admitted they didn’t know if anyone would use it. There was a lot of concern that 100,000 signatures was too high a figure to trigger a debate, but within the first 100 days six petitions had already reached the threshold and become eligible for a debate in the Commons. Pete wrote a great blog back in 2011 summing up what those first 100 days looked like.

It’s an example of great form design, following digital service standard number 12: it is simple and intuitive to use. This has not been recognised or celebrated enough over the last few days: both the hard work of the team who developed the service and that of those maintaining and iterating it today. In my opinion this service has proven over the last few days that it is a success, and that the principles behind the Digital Service Standards that provided the design foundations for the site are still relevant and adding value today.

[Tweet from Mark O’Neill, part of the original team]

Round and round we go.

In other words, Agile isn’t linear, so stop making it look like it is.

Most people within the public sector who work in Digital transformation have seen the GDS version of the Agile lifecycle:

The diagram aims to demonstrate that developing services starts with user needs, and that projects will move from Discovery to Live, with iterations at each stage of the lifecycle.

The problem with this image of Agile is that it still makes the development of Products and Services seem linear, which it very rarely is. Most Products and Services I know, certainly the big complex ones, will need several cracks at a Discovery. They move into Alpha and then back to Discovery. They may get to Beta, stop, and then start again. The more we move to a Service Design mentality and approach problems holistically, the more complex we realise they are, and this means developing Products and Services that meet user needs is very rarely as simple and straightforward as the GDS lifecycle makes it appear.
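One way to make the non-linearity concrete: if you sketched the lifecycle as a set of allowed transitions (a hypothetical model for illustration, not anything GDS publishes), the backward moves would be just as legitimate as the forward ones:

```python
from enum import Enum

class Phase(Enum):
    DISCOVERY = "discovery"
    ALPHA = "alpha"
    BETA = "beta"
    LIVE = "live"
    RETIRED = "retired"

# Allowed moves: forward when the team has learned enough, backward
# when testing assumptions shows the problem wasn't the right one.
TRANSITIONS = {
    Phase.DISCOVERY: {Phase.ALPHA, Phase.RETIRED},
    Phase.ALPHA: {Phase.BETA, Phase.DISCOVERY, Phase.RETIRED},
    Phase.BETA: {Phase.LIVE, Phase.DISCOVERY, Phase.ALPHA},
    Phase.LIVE: {Phase.DISCOVERY, Phase.RETIRED},
}

def move(current: Phase, target: Phase) -> Phase:
    """Move between phases, allowing only the transitions above."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"{current.value} -> {target.value} not allowed")
    return target

# A perfectly healthy project: two Discoveries before anything ships.
phase = Phase.DISCOVERY
for step in [Phase.ALPHA, Phase.DISCOVERY, Phase.ALPHA, Phase.BETA, Phase.LIVE]:
    phase = move(phase, step)
    print(phase.value)
```

A project that prints alpha, discovery, alpha, beta, live has not failed twice; it has learned twice.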

And this is fine: one of the core principles of Agile is failing fast, stopping things rather than carrying on regardless. We iterate our Products and Services because we realise there is more to learn. More to Discover.

The problem, especially in organisations new to Agile and the GDS way of working, is that the above image’s more linear portrayal seems familiar and understandable, because those organisations are generally used to Waterfall projects, which are linear. So when something doesn’t move from Alpha to Beta, when it needs to go back into Discovery, they see that as a failure of the team, of the Project. Sometimes it is, but not always; sometimes the team have done exactly what they were meant to do: they realised the problem identified at the start wasn’t the right problem to fix, because they tested their assumptions and learned from their research. This is what we want them to do.

The second problem with the image put forward in the GDS lifecycle is that it doesn’t demonstrate how additional features are added. The principle of Agile is getting the smallest usable bit of your Product or Service out and being used by users as soon as you can, the minimum viable product (MVP), and this is right. But once you have your MVP live, what then? The Service Manual talks about continuing to iterate in Live, but if your Product or Service is large or complex, then your MVP might serve just one of your user groups, and now you need to develop the rest. So what do you do? Do you go back into Discovery for the next user segment? Ideally, if you need to, yes, but the GDS lifecycle doesn’t show that.

As such, again for those organisations new to Agile, they don’t factor that into their business cases; it’s not within the expectations of the stakeholders, and this is where Projects end up with bloated scopes and get stuck forever in Discovery or Alpha, because the Project is too big to deliver.

With Public Services being developed to the Digital Service Standards set by GDS, we need a version of the lifecycle that breaks that linear mindset and helps everyone understand that within an Agile project you will go around and around the lifecycle, and backwards and forwards, several times before you are done.

Agile is not a sprint, a race or a marathon; it’s a game of snakes and ladders. You can get off, go back to the start, or go back a phase or two if you need to. You win when all your user needs are met, but as user needs can change over time, you have to keep your eye on the board, and you only really stop playing once you decommission your Product or Service!