We’re punishing those who are less experienced, and we need to stop.
In the last few weeks I’ve had multiple conversations with clients (both existing and new) who are preparing for, or have recently not passed, their Digital Service Standard assessments, and who are really struggling to understand what is needed from them in order to pass.
These teams have tried to engage with the service standards teams, but those teams are extremely busy; most project teams can’t get any time with their ‘link’ person until six weeks before their assessment, by which point they are quite far down their track, potentially leaving them a lot of (re)work to do before the assessment.
Having sat in on a few of those calls recently, I’ve been surprised by how little time is set aside to help teams prepare, and to give them advice and guidance on what to expect at an assessment if they haven’t been through one before. There’s no time or support for mock assessments for new teams. There may be the offer of one or two of the team observing someone else’s assessment if the stars align, but it’s not proactively planned in; it’s viewed as a nice-to-have. There seems to be an assumption that project teams should know all of this already, and no recognition that for a large number of teams this is still all new.
“In the old days”, we as assessors and transformation leads used to set aside time regularly to meet with teams: talk through the problems they were trying to fix, understand any issues they might be facing, and provide clarity and guidance before the assessment, so that teams could be confident they were ready to move on to the next phase. But when I talk to teams now, so few of them are getting this support. Many teams reach out because the rare bits of guidance they have received haven’t been clear, and in some cases have been contradictory, and they don’t know who to talk to to get that clarity.
Instead, more and more of my time at the moment, as a supplier, is being set aside to support teams through their assessments: providing advice and guidance on what to expect, how to prepare and what approach the team needs to take; what an MVP actually is; how to decide when you need an assessment; which elements of the service you need to have ready to ‘show’ at each stage; and what the difference is between Alpha, Beta and Live assessments, and why it matters. For so many teams this is still almost a foreign language.
So, how can we better support teams through this journey?
Stop treating this as old hat that everyone should already know everything about.
Digital transformation has been ‘a thing’ for one generation (if you count from the arrival of the internet as a tool for the masses in 1995). Within the public sector, GDS, the Digital Service Standards and the Digital Academy have existed for less than one generation; less than ten years, in fact.
By treating it as something everyone should know, we make it exclusionary. We make people feel less than us for the simple act of not having the same experience we do.
We talk about working in the open, and many teams do still strive to do that; but digital transformation is still almost seen as a magical art by many, and how to pass what should be a simple thing like a service standard assessment is still viewed as arcane knowledge held by the few. As a community we need to get better at supporting each other along this path, especially those new to the experience.
This isn’t just a nice thing to do, it’s the fiscally responsible thing to do; by assuming teams already have all this knowledge, we’re just increasing the likelihood they will fail, and that failure comes with a cost.
We need to set aside more time to help and guide each other on this journey so that we can all succeed; that is how we truly add value, and how we ensure that digital transformation delivers and is around to stay for generations to come.
One of the things that has really become apparent since moving ‘supplier side’ is how little the procurement processes used by the public sector to tender work facilitate agile delivery.
The process of bidding for work, certainly as an SME, is an industry in itself.
This month alone we’ve seen multiple Invitations to Tender (ITTs) on the Digital Marketplace for Discoveries and the like, as many departments try to spend their budget before the end of the financial year.
The ITTs will mention user research and ask how suppliers will work to understand user needs, or hire proper user researchers. But they will then state they have only four weeks or £60K to carry out the Discovery. While they specify the need for user research, no user recruitment has been carried out to let the supplier hit the ground running, and it’s not possible for it to be carried out before the project starts (unless, as a supplier, you’re willing to do that for free; and even if you are, you’ve got less than a week to onboard your team, do any reading you need to do and complete user recruitment, which just isn’t feasible). We regularly see requests for prototypes within that time as well.
This isn’t to say that short Discoveries are impossible; if anything, COVID-19 has proved they are possible. But there, the outcomes we were trying to deliver were understood by all; the problems we were trying to solve were very clear; and there was a fairly clear understanding of the user groups we’d need to work with to carry out any research. All of this enabled the teams to move at pace.
But we all know the normal commercial rules were relaxed to support delivery of the urgent COVID-19-related services. Generally, it’s rare for an ITT to clarify the problem the organisation is trying to solve, or the outcomes it is looking to achieve. Instead they tend to focus solely on delivering a Discovery or Alpha. The outcome is stated as completing the work in the timeframe in order to move to the next stage, not as a problem to solve with clear goals and scope.
We spend a lot of time submitting questions to get clarity on what outcomes organisations are looking for, and sometimes it certainly feels like organisations are looking for someone to deliver them a Discovery solely because the GDS/Digital Service Standard says they need to do one. This means, if we’re not careful, halfway through the Discovery phase we’re still struggling to get stakeholders to agree the scope of the work and why we really do need to talk to that group of users over there that they’ve never spoken to before.
The GDS lifecycle and how it currently ties into procurement and funding (badly) mean that organisations are reluctant to go back into Discovery or Alpha when they need to, because of how they have procured suppliers. If, as a supplier, you deliver a Discovery that finds there is no need to move into Alpha (because there are no user needs, for example), or midway through an Alpha you find the option you prioritised for your MVP no longer meets the needs as anticipated, clients still tend to view that money as ‘lost’ or ‘wasted’, rather than accepting the value in failing fast and stopping, or changing course to do something that can add value. Even when clients do accept that, sometimes the procurement rules that brought you in to deliver a specific outcome mean your team now can’t pivot onto another piece of work, as that needs to be a new contract. Either scenario could mean that, as a supplier, you lose the contract you spent so much time winning, because you did ‘the right thing’.
We regularly pick up work midway through the lifecycle; sometimes that’s because the previous supplier didn’t work out; sometimes it’s because they were only brought in to complete the Discovery or Alpha, and when it comes to re-tender, another supplier is now cheaper. That’s part and parcel of being a supplier; but I know from being ‘client side’ for so long how that can make it hard to manage corporate knowledge.
Equally, as a supplier, we rarely see things come out for procurement in Live, because there is the assumption that by Live most of the work is done; and yet, if you follow the intent of the GDS lifecycle rather than how it’s often interpreted, there should still be plenty of feature development, research and so on happening in Live.
This in turn is part of the reason we see so many services stuck in Public Beta. Services have been developed by or with suppliers who were only contracted to provide support until Beta. There is rarely funding available for further development in Live, and the knowledge and experience the suppliers provided has exited stage left, so it’s tricky for internal teams to pick up the work, move it into Live and continue development.
Most contracts specify ‘knowledge transfer’ (although sometimes it’s classed as a value-add, when it really should be a fundamental requirement), but few are clear on what they are looking for. When we talk to clients about how they would like to manage that, or how we can get the balance right between delivering tangible outcomes and transferring knowledge, knowledge transfer is regularly de-scoped or de-prioritised. It ends up being seen as less important than getting a product or service ‘out there’; but once the service is out there, the funding for the supplier stops and the time to do any proper knowledge transfer is minimal at best. If not carefully managed, suppliers can end up handing over a load of documentation and code without completing the peer working, lunch-and-learns or co-working workshops we’d wanted to happen.
Some departments and organisations have got much better at getting their commercial teams working hand in hand with their delivery teams. We can always spot those ITTs a mile off, and it’s a pleasure to see them, as it makes it much easier for us as suppliers to provide a good response.
None of this is insurmountable, but we (suppliers, commercial/procurement managers and delivery leads alike) need to get better at working together to look at how we procure and bid for work, ensuring we are clear on the outcomes we’re trying to achieve and properly valuing ‘the value-add’.
A blog on the new National Careers ‘Discover your skills and careers’ Service
As I sit here at ten past ten on a Wednesday night, watching social media have a field day with the new National Careers Service, I’m yet again reminded of the importance of the Digital Service Standard, especially standard number one: understand users and their needs. And of why we need to get Ministers and senior leaders to understand its importance.
The first role of any good user-centred designer or Product Manager within the public sector is understanding the problem you’re trying to solve.
In this case, the problem we’re facing is not a small one. Because of COVID-19 we currently have approximately 1.4 million people unemployed, with many more still facing redundancy due to the ongoing pandemic. ONS data shows that between March and August the number of people claiming benefits rose 120% to 2.7 million.
The entertainment, leisure and hospitality sectors have been decimated, amongst many others. Just this week Cineworld announced 45,000 job losses, and Odeon may soon follow suit. Theatres and live event venues across the country are reporting they are on the brink of collapse.
So, as part of the Summer Statement, the Chancellor announced a whole host of support for people to retrain; it included advice to use the new careers and skills advice service to get ideas on new career options.
A service to help people understand new career options right now is a great idea; it absolutely should meet a user need.
Unfortunately, you only have to look at the headlines to see how well the new service has been received. It is currently such a laughing stock that no one is taking it seriously, which is a massive shame, because it’s trying to solve a very real problem.
A number of my friends and acquaintances have now taken the quiz (as has half of Twitter, apparently) and it was suggested I have a look. So I did. (As an aside, it recommended I retrain in the hospitality industry; all who know me know how terrible this would be for all involved. Last week I managed to forget to cook 50% of our dinner, and I am clinically unable to make a good cup of coffee, never mind clean or tidy anything!)
It has good intentions, and in a number of cases it may not be too far off the mark; the team behind the service did a write-up here* of how they developed it and what they set out to achieve. Unfortunately, while the service seems simple to understand and accessible to use, what it’s missing is any level of context or practicality that would help it address the problem it’s being used for.
*EDIT: This has sadly now been taken down, which is a massive shame, because they did good work; I suspect they were under political pressure to get something out there quickly. We’ve all been there; it’s a horrid position to be in.
While they have tested with users with accessibility needs, the focus seems to have been on whether those users can use the digital service, not on whether the service actually meets their needs.
My friend with severe mobility and hearing issues was advised to retrain as a builder. Another friend with physical impairments (and a profound phobia of blood) was advised they were best suited to a role as a paramedic. A friend with ASD, who also has severe anxiety and an aversion to people they don’t know, was advised to become a beautician. Another friend who is a single parent was given three career options that all required evening and weekend work. At no point does the service ask whether you have any medical conditions or caring responsibilities that would limit the work you could do. While you can argue that that level of detail falls under the remit of a jobs coach, it can understandably be seen as insensitive and demoralising to recommend careers to people that they are physically unable to do.
Equally unhelpful is the fact that the service, which has been specifically recommended to people made redundant from the worst-hit industries, is recommending those same decimated industries to work in, with no recognition of the current jobs market.
My partner, who was actually made redundant from her creative role due to COVID-19 (and is the target audience for this service, according to the Chancellor), was advised to seek a role in the creative industries: an industry that doesn’t currently exist. A quick look on social media proves she isn’t alone.
The service doesn’t actually collect enough (well, any) data about the career someone is currently in, nor does it seem to have any interface to the current jobs market to understand whether the careers it’s recommending are actually viable.
Unfortunately, the service is too generic. While it might help school and college students trying to choose their future career paths in a ‘normal’ jobs market (and I honestly suspect that’s who it was actually developed for!), it’s not addressing the fundamental problem we face at the moment: helping people understand their career options in the current market.
If you’ve worked within digital in the public sector, you’ve had to deal with Ministers and Directors who don’t really understand the value of user research, or why we need to test things properly before we roll them out nationally. The current debacle with the careers website is possibly a perfect example of why you need to test your service with a wide range of users regularly, not just rely on assumptions and user personas, and why it’s important to test and iterate the service with real users multiple times before it launches. It highlights the need to get Ministers to understand that rushing a service out quickly isn’t always the right answer.
We all need to understand users and their needs. Just because a service is accessible doesn’t mean it solves the problem users are facing.
The Beta Assessment is probably the one I get the most questions about; primarily, “When do we actually go for our Beta Assessment, and what does it involve?”
Firstly, what is an assessment? Why do we assess products and services?
If you’ve never been to a Digital Service Standard Assessment it can be daunting; so I thought it might be useful to pull together some notes from a group of assessors, to show what we are looking for when we assess a service.
Claire Harrison (Chief Architect at Homes England and leading Tech Assessor) and Gavin Elliot (Head of Design at DWP and a leading Design Assessor, you can find his blog here) helped me pull together some thoughts about what a good assessment looks like, and what we are specifically looking for when it comes to a Beta Assessment.
I always describe a good assessment as the team telling the assessment panel a story. So, what we want to hear is:
What was the problem you were trying to solve?
Who are you solving this problem for? (who are your users?)
Why do you think this is a problem that needs solving? (What research have you done? Tell us about the users journey)
How did you decide to solve it and what options did you consider? (What analysis have you done?)
How did you prove the option you chose was the right one? (How did you test this?)
One of the great things about the Service Manual is that it explains what each delivery phase should look like, and what the assessment team are considering at each assessment.
So what are we looking for at a Beta Assessment?
By the time it comes to your Beta Assessment, you should have been running your service for a little while with a restricted number of users in a Private Beta. You should have real data gathered from real users who were invited to use your service, and you should have iterated your service several times by now, given everything you have learnt.
Before you are ready to move into Public Beta and roll your service out nationally, there are several things we want to check during an assessment.
We don’t want to just hear about the ‘digital’ experience; we want to understand how you have provided, or will provide, a consistent and joined-up experience across all channels.
Are there any paper or telephony elements to your service? How have you ensured that those users have received a consistent experience?
What changes have you made to the back end processes and how has this changed the user experience for any staff using the service?
Were there any policy or legislative constraints you had to deal with to ensure a joined up experience?
Has the scope of your MVP changed at all so far in Beta given the feedback you have received from users?
Are there any changes you plan to implement in Public Beta?
As a Lead Assessor this is where I always find that teams who have suffered with empowerment or organisational silos may struggle.
If the team is only empowered to look at the digital service, and has struggled to make any changes to the paper, telephony or face-to-face channels due to siloed working between Digital and Operations in their department (as an example), the digital product will offer a very different experience from the rest of the service.
As part of that discussion we will also want to understand how you have supported users who need help getting online; and what assisted digital support you are providing.
At previous assessments you should have had a plan for the support you intended to provide; you should now be able to talk through how you are putting that into action. This could be telephony support or a web chat function, but we want to ensure the support being offered is (or will be) consistent with the wider service experience and is meeting your users’ needs. We also want to understand how it’s being funded, and how you plan to publish the accessibility information for your service.
We also expect by this point that you have run an accessibility audit and have carried out regular accessibility testing. It’s worth noting that if you don’t have anyone in house who is trained in running accessibility audits (we’re lucky in Difrent, as we have a DAC assessor in house), you will need to have factored in the time it takes to get an audit booked in and run, well before you think about your Beta Assessment.
Similarly, by the time you go for your Beta Assessment we would generally expect a Welsh-language version of your service to be available. Again, this needs to be planned well in advance, as it can take time and is not (or shouldn’t be) a last-minute job; in my experience it’s something a lot of teams forget to prioritise and plan for.
And finally, assuming you are planning to put your service on GOV.UK, you’ll need to have agreed a number of things with the GOV.UK team at GDS before going into Public Beta.
Again, while it shouldn’t take long to get these things sorted with the GOV.UK team, they can sometimes have backlogs, so it’s worth making sure you’ve planned in enough time.
The other things we will want to hear about are how you’ve ensured your service is scalable and secure. How have you dealt with any technical constraints?
The architecture and technology – Claire
From an architecture perspective, at the Beta phase I’m still interested in the design of the service, but I also focus on its implementation, and the provisions in place to support the sustainability of the service. My mantra is ‘end-to-end, top-to-bottom service architecture’!
An obvious consideration in both the design and deployment of a service is security: how the solution conforms to industry, government and legal standards, and how security is baked into a good technical design. With data, I want to understand its characteristics and lifecycle: is it identifiable, how is it collected, where is it stored and hosted, who has access to it, is it encrypted, and if so, when, where and how? I find it encouraging that in recent years there has been a shift in thinking, not only about how to prevent security breaches but also about how to recover from them.
Security is sometimes cited as a reason not to code in the open, but in actual fact it is hardly ever a valid one. As services are assessed on this, there needs to be a very good reason why code can’t be open. After all, a key GDS principle is reuse, in both directions: making use of common government platforms, and publishing your code so it can be used by others.
Government platforms such as GOV.UK Pay and Notify can help with some of a technologist’s decisions and should be used as the default, as should open standards and open-source technologies. When this isn’t the case, I’m really interested in the selection and evaluation of the tools, systems, products and technologies that form part of the service design. This might include integration and interoperability, constraints in the technology space, vendor lock-in, route to procurement, total cost of ownership, alignment with internal and external skills, and so on.
Some useful advice: think about your technology choices as a collective, rather than piecemeal, as and when a particular tool or technology is needed. Yesterday I gave a peer review of a solution under development where one tool had been deployed in isolation, not as part of an evaluation of the full technology stack. This meant there were integration problems as new technologies were added to the stack.
The way that a service evolves is really important too, along with the measures in place to support its growth. Cloud-based solutions help take care of some of the more traditional scalability and capacity issues, and I’m interested in understanding the designs around these, as well as any other mitigations in place to help assure the availability of a service. As part of the Beta Assessment, the team will need to show their plan for the event of the service being taken temporarily offline: strategies for dealing with incidents that impact availability, the strategy for recovering from downtime, and how these have been tested.
Although a GDS Beta Assessment focuses on a specific service, it goes without saying that a good technologist will be mindful of how the service they’ve architected impacts the enterprise architecture, and vice versa. For example, if a new service is built with microservices and introduces an increased volume and velocity of data, does the network need to be strengthened to cope with the increase in communications traversing it?
Legacy technology (as well as legacy ‘commercials’ and ways of working) is always on my mind. Obviously during an assessment a team can show how they address legacy in the scope of that particular service, be it some form of integration with legacy or applying the strangler pattern, but organisations really need to put as much effort into dealing with legacy as they do into new digital services. Furthermore, they need to think about how to avoid creating the ‘legacy systems of the future’ by ensuring the sustainability of their service, from a technical, financial and resourcing perspective. I appreciate this isn’t always easy! However, I do believe that GDS should, and will, put much more scrutiny on organisations’ plans to address legacy issues.
One final point from me is that teams should embrace an assessment. Clearly the focus is on passing an assessment but regardless of the outcome there’s lots of value in gaining that feedback. It’s far better to get constructive feedback during the assessment stages rather than having to deal with disappointed stakeholders further down the line, and probably having to spend more time and money to strengthen or redesign the technical architecture.
How do you decide when to go for your Beta Assessment?
Many services (for both good and bad reasons) have struggled with the MVP concept, and as such the journey to get their MVP rolled out nationally has taken a long time and contained more features and functionality than teams might initially have imagined.
This can make it very hard to decide when you should go for an Assessment to move from Private to Public Beta. If your service is going to be rolled out to millions of people; or has a large number of user groups with very different needs; it can be hard to decide what functionality is needed in Private Beta vs. Public Beta or what can be saved until Live and rolled out as additional functionality.
The other thing to consider is what your rollout plan actually looks like. Are you able to go national with the service once you’ve tested with a few hundred people from each user group? Or, as is more common with large services like NHS Jobs, where you are replacing an older service, does the service need to be rolled out in a very set way? If so, you might need to keep inviting users in until full rollout is almost complete, making it hard to judge when the right time for your Beta Assessment is.
There is no right or wrong answer here, the main thing to consider is that you will need to understand all of the above before you can roll your service out nationally, and be able to tell that story to the panel successfully.
This is because, theoretically, most of the heavy lifting is done in Private Beta; once you have rolled your service out into Public Beta, the main things left to test are whether it scaled and worked as you anticipated. Admittedly this (combined with confusion about the scope of an MVP) is why most services never actually bother with their Live Assessment. For most services, once you’re in Public Beta the hard work has been done; there’s nothing more to do, so why bother with a Live Assessment? But that’s an entirely different blog!
Before I discuss what (in my view) a Service Owner is, a brief history lesson into the role might be useful.
The role of the ‘Service Manager’ was seen as critically important to the success of a product; it was defined as a G6 (manager) who had responsibility for the end-to-end service AND was the person who led the team through their Service Standard assessments.
Now let’s think about this a bit; Back when the GDS Service Standard and the Service Manual first came into creation, they were specifically created for/with GOV.UK in mind. As such, this definition of the role makes some sense. GOV.UK was (relatively) small and simple; and one person could ‘own’ the end to end service.
The problem came about when the Service Standards were rolled out wider than GDS itself. DWP is a good example of where this role didn’t work.
The Service Manual describes a service as the holistic experience for a user; so it’s not just a Digital Product, it’s the telephony Service that sits alongside it, the back end systems that support it, the Operational processes that staff use to deliver the service daily, along with the budget that pays for it all. Universal Credit is a service, State Pension is a service; and both of these services are, to put it bluntly, HUGE.
Neil Couling is a lovely bloke, who works really hard, and has the unenviable task of having overarching responsibility for Universal Credit. He’s also a Director General. While he knows A LOT about the service, it is very unlikely that he would know the full history of every design iteration and user research session the service went through, or be able to talk in detail about the tech stack and its resilience; and even if he did, he certainly would be very unlikely to have the four hours spare to sit in the various GDS assessments UC went through.
This led to us (in DWP) phasing out the role and splitting its responsibilities in two: the newly created role of Product Lead, and the Service Owner. The Product Lead did most of the work of the Service Manager (in terms of GDS assessments etc.), but didn’t have responsibility for the end-to-end service; that sat with the Service Owner. The Service Owner was generally a Director General (and also the SRO), whose responsibilities we clarified when it came to digital services.
A few years ago, Ross (the then Head of Product and Service Management at GDS) and I, along with a few others, had a lot of conversations about the role of the Service Manager; and why in departments like DWP, the role did not work, and what we were doing instead.
At the time there was agreement in many of the departments outside of GDS that the Service Manager role wasn’t working as intended, and was instead causing confusion and, in some cases, creating additional unnecessary hierarchy. The main problem, as it was in DWP, was that the breadth of the role was too big for anyone below SCS. This meant we were instead ending up with Service Managers who were only responsible for the digital elements of the service (and often reported to a Digital Director), with all non-digital elements of the service sitting under a Director outside of Digital, which was creating more division and confusion.
As such, the Service Manual and the newly created DDaT framework were changed to incorporate the role of the Service Owner instead of the Service Manager, with the suggestion that this should be an SCS-level role. However, because the SCS sat outside the DDaT framework, the extent to which the role could be defined and specified was rather limited, and it became more of a suggestion than a clearly defined requirement.
Interestingly, the latest version of the DDaT framework has removed the suggestion that the role should be an SCS role, and any reference to the crossover with the responsibilities of the SRO, and now makes the role sound much more ‘middle management’ again, although it does still specify ownership of the end-to-end service.
Ok, so what should a Service Owner be?
When we talked about the role a few years ago, the intention was very much to define how the traditional role of the SRO joined up closer to the agile/digital/user centred design world; in order to create holistic joined up services.
Below is (at least my understanding of) what we intended the role to be:
They should have end to end responsibility for the holistic service.
They should understand and have overall responsibility for the scope of all products within the service.
They should have responsibility for agreeing the overall metrics for their service and ensuring they are met.
They should have responsibility for the overall budget for their service (and the products within it).
They should understand the high level needs of their users, and what teams are doing to meet their needs.
They should understand, and have agreed, the high level priorities within the service (which Product needs to be delivered first, which has the most urgent resource needs, etc.).
They should be working with the Product/Delivery/Design leads within their Products as much as the Operational leads etc., empowering them to make decisions and understanding the decisions that have been made.
They should be encouraging and supporting cross functional working to ensure all elements of a service work together holistically.
They should be fully aware of any political or strategic decisions or issues that may impact their users and the service, and be working with their teams to ensure those are understood, to minimise risks.
They should understand how Agile/Waterfall and any other change methodologies work to deliver change. And how to best support their teams no matter which methodology is being used.
In this way the role of the Service Owner would add clear value to the Product teams, without adding in unnecessary hierarchy. They would support and enable the development of a holistic service, bringing together all the functions a service would need to be able to deliver and meet user needs.
Whether they are SCS or not is irrelevant; the important thing is that they have the knowledge and ability to make decisions that affect the whole service, that they have overall responsibility for ensuring users’ needs are met, that they can ensure all the products within the service work together, and that their teams are empowered to deliver the right outcomes.
Why, in the era of remote working, we need to stop thinking about ‘digital services’ as a separate thing, and just think about ‘services’.
Last night, when chatting to @RachelleMoose about whether digital is a privilege (which she’s blogged about here), I was reminded of a conversation from a few weeks ago with @JanetHughes about the work DEFRA were doing, and their remit as part of the response to the current pandemic (which it turns out is not just the obvious things like food and water supplies, but also what to do about zoos and aquariums during a lockdown!)
This in turn got me thinking about the consequences of lockdown that we might never really have considered before the COVID-19 pandemic hit, and the impact a lack of digital access has on people’s ability to access public services.
There are many critical services we offer every day that are vital to people’s lives, and which we never previously imagined as ‘digital’ services, that are now being forced to rely on digital as a means of delivery. Not only are those services themselves struggling to adapt, but we are also at risk of forgetting those people for whom digital isn’t an easy option.
All ‘digital’ services have to prove they have considered digital inclusion. Back in 2014 it was found that approximately 20% of Britons lacked basic digital literacy skills, and the Digital Inclusion Strategy aimed to have everyone who could be digitally literate, digitally able by 2020. However, it was believed that 10% of the population would never be able to get online, and the Assisted Digital paper published in 2013 set out how government would enable equal access to ensure digitally excluded people were still able to access services. A report by the ONS last year backs this assumption up, showing that in 2019, 10% of the population were still digitally excluded.
However, as the effects of lockdown begin to be considered, we need to think about whether our assisted digital support goes far enough; and whether we are really approaching how we develop public services holistically, how we ensure they are future proof and whether we are truly including everyone.
There have been lots of really interesting articles and blogs about the impact of digital (or the lack of access to it) on children’s education, with bodies like Ofsted expressing concerns that the lockdown will widen the education gap between children from disadvantaged backgrounds and children from more affluent homes; only 5% of the children classified as ‘in need’, who were expected to still be attending school, were turning up.
According to the IPPR, around a million children do not have access to a device suitable for online lessons. The DfE announced last month that they were offering free laptops and routers to families in need; however, a recent survey showed that while over a quarter of teachers in private schools were having daily online interaction with their pupils, less than 5% of those in state schools were. One Academy chain in the North West is still having to print home learning packs and arrange for families to physically pick up and drop off school work.
The Good Things Foundation has similarly shared its concerns about the isolating effects of lockdown and the digital divide being created, not just for families with children, but for people with disabilities, elderly or vulnerable people, and households in poverty. Almost 2 million homes have no internet access, and 26 million people rely on pay as you go data to get online. There has been a lot of concern raised about people in homes with domestic violence who have no access to phones or the internet to get help. Many companies are doing what they can to help vulnerable people stay connected or receive support, but it has highlighted that our current approach to designing services is possibly not as fit for the future as we thought.
The current pandemic has highlighted the vital importance, for those of us working in or with the public sector, of understanding users and their needs, but also of ensuring everyone can access services. The Digital Service Standards were designed with ‘digital’ services in mind; it was never considered, six months ago, that children’s education or people’s health care needed to be assessed against those same standards.
The standards themselves say that the criteria for assessing products or services is applicable if either of the following apply:
getting assessed is a condition of your Cabinet Office spend approval
it’s a transactional service that’s new or being rebuilt – your spend approval will say whether what you’re doing counts as a rebuild
The key phrase here for me is ‘transactional service’, i.e. the service allows:
an exchange of information, money, permission, goods or services
submitting of personal information that results in a change to a government record
While we may never have considered education as a transactional service before now, as we consider ‘the new normal’, we as service designers and leaders in the transformation space need to consider which of our key services are transactional, how we are providing a joined up experience across all channels, and what holistic service design really means. We need to move away from thinking about ‘digital and non-digital services’, and can no longer ‘wait’ to assess new services; instead we need to step back and consider how we can offer ANY critical service remotely, should we need to do so.
Digital can no longer be the thing that defines those with privilege. COVID-19 has proved that now more than ever it is an everyday essential, and we must adapt our policies and approach to service design to reflect that. As such, I think it’s time we reassess whether the Digital Service Standards should be applied to more services than they currently are, which services we consider to be ‘digital’, and whether that should even be a differentiator anymore. In a world where all services need to be able to operate remotely, we need to approach how we offer our services differently if we don’t want to keep leaving people behind.
Matt Knight has also recently blogged on the same subject, so linking to his blog here as it is spot on!
One of the key personal aims I had when I joined Difrent, just over six months ago, was to work somewhere that would let me deliver stuff that matters, because I am passionate about people, and about delivery.
After 15 years right in the thick of some pioneering public sector work, combining high profile product delivery with developing digital capability for organisations like the Government Digital Service (GDS), the Department for Work and Pensions (DWP), the Care Quality Commission (CQC) and the Ministry of Defence (MoD), I was chafing at the speed (or lack thereof) of delivery in the public sector.
I hoped going agency side would remove some of that red tape, and let me get on and actually deliver; my aim when I started was to get a project delivered (to public beta at the very least) within my first year. Might seem like a simple ask, but in the 10 years I spent working in Digital, I’d only seen half a dozen services get into Live.
This is not because the projects failed, they are all still out there being used by people; but because once projects got into Beta, and real people could start using them, the impetus to go-live got lost somewhat.
Six months into the job and things looked to be on track: one service in private beta, another we are working on in public beta, plus a few Discoveries underway. Things were definitely moving quickly, and my decision to move agency side felt justified. Delivery was happening.
And then Covid-19 hit.
With COVID-19, the old normal and old ways of working have had to change rapidly, if for no other reason than we couldn’t all be co-located anymore. We all had to move to fully remote working quickly, not just as a company but as an industry.
Thankfully, within Difrent we’ve always had the ability to work remotely, so things like laptops and collaborative software were already in place; but the move to being fully remote has still been a big challenge. Setting up regular online collaboration and communication sessions throughout the week, such as our twice-daily coffee catchups and weekly Difrent Talks, which people can drop in on with no pressure attached, has helped people stay connected.
The main challenge has been how we work with our clients to ensure we are still delivering: reviewing our ways of working to make sure we are still working inclusively, and aren’t accidentally excluding someone from a conversation when everyone is working from their own home; maintaining velocity; and ensuring everyone is engaged and able to contribute.
This is trickier to navigate when you’re all working virtually, and needs a bit more planning and forethought, but it’s not impossible. One of the positives (for me at least) about the current crisis is how well people have come together to get things delivered.
Some of the work that we have been involved in, which would generally have taken months, has been done in weeks, with user research, analysis and development happening in a fraction of the time it took before.
So how are we now able to move at such a fast pace? Are standards being dropped or ignored? Are corners being cut? Or have we iterated and adapted our approach?
Once this is all over, I think those will be the questions a lot of people are asking; but my observation is that, if nothing else, this current crisis has made us really embrace what agility means.
We seem to have the right people ‘in the room’ signing off decisions when they are needed, with proper multidisciplinary teams, made up of people from digital but also policy, operations etc., that are empowered to get on and do things. Research is still happening, though possibly at a much smaller scale, as and when it is needed. We’re truly embracing the Minimum Viable Product: getting things out there that aren’t perfect, but that real people can use, testing and improving the service as we go.
Once this is all over, I certainly don’t want to continue the trend of on-boarding and embedding teams with 24 hours’ notice; and while getting things live in under two weeks is an amazing accomplishment, achieving it comes at a high price, not just in terms of resources but in terms of people, because that is where burnout will occur for all involved. But I believe a happy medium can be found.
My hope, once this is all over, is that we can find the time to consider what we in digital have learnt, and focus on what elements we can iterate and take forward to help us keep delivering faster and better, but in the right way, with less delays; so we can get services out there for people to use; because really, that is what we are all here to do.
Back when I started working in digital as a Product Owner in 2011, and did my agile training course, one of the first ‘principles’ discussed was ‘there is no such thing as a stupid question’. As a newbie in the agile/digital world that was great to hear, because I felt like I knew literally nothing.
This concept has always been something I’ve repeated to the teams and people I’ve worked with. There will always be something you don’t know; it is impossible to know everything. Therefore we have to be able to ask questions and find out information without fear of being made to feel stupid.
However, as digital transformation and agile have spread, that acceptance of ‘not knowing’ seems to have become less common. I hear a lot from colleagues outside of digital that ‘agile is a cult’ or ‘digital is a clique’ with its own language that doesn’t welcome those who don’t know the lingo.
A friend of mine had a scrum coach in to speak to their team and deliver some training to their organisation (if you don’t know what scrum is, that’s ok, here’s a link), and she said the way that he spoke to them was as if they were all idiots who knew nothing, and that he made scrum sound like a religion for zealots. There was no opportunity to question, only to agree. This isn’t what should be happening. There’s no better way to foster feelings of exclusion and frustration than by treating people who don’t know something as ‘lesser’.
The public sector has always struggled with acronyms, and while we regularly hear about the drive to reduce their use, with the greatest will in the world everyone will find themselves slipping up and using them sometimes, because they are everywhere and we assume that everyone knows them. But we have to remember that they don’t.
At a global digital conference last year in The Hague I was happily chatting away to someone working for the Dutch Pensions service and kept referencing several Government Departments by their acronyms without thinking, leaving the poor person I was speaking to rather lost.
Similarly, in the interview for my current role, I was too embarrassed to check an acronym (PnL) and just assumed I knew exactly what I was being asked about. It was only after 10 minutes of waffle that I was politely corrected: I was not being asked about Procurement frameworks, but about my experience of managing Profit and Loss. Obvious in retrospect, but never an acronym I’d heard before, and who wants to look ignorant in an interview?
Clare made the point that often we’re not actually saving time by using acronyms; we are gatekeeping and reinforcing that siloed attitude, which is counterproductive to the work we’re doing. This is especially important given, as Rachelle pointed out, how inaccessible acronyms often are, and that they are not actually unique: one random set of letters may mean something completely different to someone working in a different organisation or sector, or with completely different experiences. We are actually increasing the chance of confusion and misunderstanding while not saving time or effort.
There is a lot of great work happening in the public sector, using the Digital Service Standards (primarily standard 4, make the service simple to use, and standard 5, make sure everyone can use the service) and the principles of the Plain English Campaign, to simplify the content we provide to users: to make it clear, concise and easy to comprehend. However, when it comes to how we talk to each other, we are forgetting those same standards.
My conversation this week has reminded me how important it is, as a senior leader, to:
firstly, try not to use acronyms or digital/agile jargon, and not make assumptions about other people’s knowledge without first checking their experience and understanding;
secondly, speak up and ask more questions when I don’t know things, to show by doing that it is ok not to know everything.
After all, there are no stupid questions, just opportunities to learn and share knowledge.
How the service standards have evolved over time….
Gov.uk has recently published the new Service Standards for government and public sector agencies to use when developing public facing transactional services.
I’ve previously blogged about why the Service Standards are important in helping us develop services that meet user needs, as such I’ve been following their iteration with interest.
The service standards are a labour of love that have been changed and iterated a couple of times over the last six years. The initial Digital by Default Service Standard, developed in 2013 by the Government Digital Service, came fully into force in April 2014 for all transactional digital products being developed within government; it was a list of 26 standards all product teams had to meet to be able to deliver digital products to the public. The focus was on creating digital services so good that people preferred to use them, driving up digital completion rates and decreasing costs by moving to digital services. It included making plans for phasing out alternative channels, and encouraged keeping non-digital sections of the service only where legally required.
A number of fantastic products and services were developed during this time, leading the digital revolution in government and vastly improving users’ experience of interacting with government. However, these products and services were predominantly dubbed ‘shiny front ends’: they had to integrate with clunky back end services, and often featured drop out points from the digital service (like the need for wet signatures) that were difficult to change. This meant the ‘cost per transaction’ was actually very difficult to calculate; and yet standard 23 insisted all services must publish their cost per transaction as one of the 4 minimum key performance indicators required for the performance platform.
The second iteration of the Digital Service Standard was developed in 2015. It reduced the number of standards services had to meet to 18, and was intended to be more service focused rather than product focused, with standard 10 giving some clarity on how to ‘test the service end to end’. It grouped the standards into themes to help the flow of service standard assessments, and clarified and emphasised a number of points to help teams develop services that met user needs. While standard 16 still specified that you needed a plan for reducing your cost per transaction, it also advised you to calculate how cost effective your non-transactional user journeys were, and to include the ‘total cost’, covering things like printing, staff costs and fixtures and fittings.
However, as service design as a methodology began to evolve, the standards were criticised for still being too focused on the digital element of the service. Standard 14 still stated that ‘everyone must be encouraged to use the digital service’. There were also a lot of questions about how the non-digital elements of a service could be assessed, and a feeling that the standards didn’t reflect how large or complicated some services could be.
The newest version of the Service standard has been in development since 2017, a lot of thought and work has gone into the new standard, and a number of good blogs have been written about the process the team have gone through to update them. As a member of some of the early conversations and workshops about the new standards I’ve been eagerly awaiting their arrival.
While the standards still specifically focus on public facing transactional services, they have now been designed for full end to end services, covering all channels users might use to engage with a service. There are now 14 standards, but the focus is much wider than ‘digital’, as highlighted by the fact that the word Digital has been removed from the title!
Standard 2 highlights this new holistic focus, acknowledging the problems users face with fragmented services. It is complemented by standard 3, which specifies that you must provide a joined up experience that meets all user needs across all channels. While the requirement to measure your cost per transaction and digital take-up is still there for central government departments, it’s no longer the focus; instead, standard 10 now focuses on identifying metrics that will indicate how well the service is solving the problem it’s meant to solve.
For all the changes, one thing has remained the same throughout: the first standard, upon which the principles of transformation in the public sector are built; understand the needs of your users.
Apparently the new standards are being rolled out for products and services entering Discovery after the 30th of June 2019, and I for one am looking forward to using them.
On the 20th of February a petition was created on the Petitions website to revoke Article 50 and remain within the EU. On the 21st of March the petition went viral, and as of writing this blog it has 5,714,965 signatures. This is the biggest petition since the site’s launch. Not only that, it is now the most supported petition in the world, ever.
The first version of the site was developed in 2010, after the election, originally intended to replace the Number 10 petitions site, which had a subtly different purpose. The new version of the Parliamentary Petitions site was then launched in 2015, as an easy way for users to make sure their concerns were heard by government and parliament. The original version of the service was developed by Pete Herlihy and Mark O’Neill back in the very early days of digital government, before the Digital Service Standard was born.
The site was built using open source code, meaning anyone can access the source code used to build it, and making it easy to interrogate the data; a number of organisations, like Unboxed, have developed tools to help map signatories of petitions based off the data available.
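As an illustration of how open that data is, the site publishes each petition’s details as JSON, so this kind of analysis takes only a few lines. Here is a minimal Python sketch; the field names (`signature_count`, `signatures_by_country`) are modelled on the public JSON format, so treat the exact structure as an assumption rather than a guaranteed contract:

```python
import json

# A small sample payload mimicking (as an assumption) the shape of the
# petitions site's JSON data, scaled down for illustration.
sample = json.loads("""
{
  "data": {
    "attributes": {
      "signature_count": 100,
      "signatures_by_country": [
        {"name": "United Kingdom", "code": "GB", "signature_count": 96},
        {"name": "France", "code": "FR", "signature_count": 4}
      ]
    }
  }
}
""")

def uk_share(payload):
    """Return the percentage of signatures that came from the UK."""
    attrs = payload["data"]["attributes"]
    total = attrs["signature_count"]
    uk = next(c["signature_count"]
              for c in attrs["signatures_by_country"]
              if c["code"] == "GB")
    return 100 * uk / total

print(f"{uk_share(sample):.0f}% of signatures from the UK")  # prints: 96% of signatures from the UK
```

This is exactly the sort of check the tools built on the open data perform, such as working out what proportion of signatories are within the UK.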
That openness sits alongside strong security measures: Digital Service Standard number 7, making sure the service has the right level of security. The petitions site apparently uses both automated and manual techniques to spot bots, disposable email addresses and other fraudulent activity. This works with standard number 15, using tools for analysis that collect performance data, to monitor signing patterns etc. Analysing the data, 96% of signatories have been within the UK (what the committee would expect from a petition like this).
Another key service standard is building a service that can be iterated and improved frequently (standard number 5), which meant that when the petition went viral, the team were able to spot that the site wasn’t coping with the frankly huge amount of traffic headed its way, and quickly doubled the capacity of the service within a handful of hours.
This also calls out the importance of testing your service end to end (standard number 10) and ensuring it’s scalable; and if and when it goes down (as the petitions website did a number of times, given the large amount of traffic that hit it), you need to have a plan for what to do (standard number 11), which for the poor Petitions team meant some very polite, apologetic messages over social media while they worked hard and fast to get the service back online.
The staggering volume of traffic to the site, and the meteoric speed with which the petition went viral, show that at its heart, the team who developed the service followed Digital Service Standard number 1: understand your users’ needs.
In today’s culture of social media, people have high expectations of their online interactions with services and departments; we live in a time of near instant news, entertainment and information, and an expectation of having the world available at our fingertips at the click of a button. People want and need to feel that their voice is being heard, and the petitions website tapped into that need, delivering it effectively under unprecedented conditions.
Interestingly when the site was first developed, Mark himself admitted they didn’t know if anyone would use it. There was a lot of concern from people that 100,000 signatures was too high a figure to trigger a debate; but within the first 100 days six petitions had already reached the threshold and become eligible for a debate in the Commons. Pete wrote a great blog back in 2011 summing up what those first 100 days looked like.
It’s an example of great form design, following digital service standard number 12: it is simple and intuitive to use. This has not been recognised or celebrated enough over the last few days, both the hard work of the team who developed the service and of those maintaining and iterating it today. In my opinion this service has proven over the last few days that it is a success, and that the principles behind the Digital Service Standards that provided its design foundations are still relevant and adding value today.