AI Ethics: send money, guns & lawyers

Photo of Westminster Abbey: Iulian Ursu (cc)

My speech notes, for a talk given to the Westminster Abbey Institute on 31 May 2018

This evening I’d like to present a problem within what I believe to be the most transformative technology of our lives: artificial intelligence. I’ll suggest why I think that problem will involve some colossal rows involving money, guns, and lawyers. And as well as explaining the problem, I’d like your help to find the right way for us to respond professionally and personally, so I look forward to the discussion afterwards.

  • Who am I, and why am I here?
    • I’m a married father of two – living in south west London. Missing the family during half term holiday.
    • Like you, I’m an alumnus of the Fellows’ programme at the Institute. I’ve always sought an inflated sense of purpose in what I do, which is easy to find in public life. But it is difficult to hang on to that sense of purpose without moral and spiritual reflection, which I think the Institute catalyses.
    • I’m someone who has slipped between the public and private sector, technology and policy, institutions and startups. I’ve worked in HM Treasury, and in Africa; in the Dept. Energy & Climate Change, and in a small Engineering charity; in the Cabinet Office and in an AI firm; because I really value the perspective that one can get by moving to a different position.
    • Right now, I work at ASI Data Science. We help people create artificial intelligence to solve business and policy problems. I’d say we’re now the leading specialist AI agency in Europe. My job is to make sure we don’t go bust.
      • We have worked with more than 150 organisations as diverse as Amnesty International, on their marketing; Easyjet, on their staff scheduling; Isaac Physics, to help people learn faster; and Siemens, on predictive maintenance for their trains.
      • In Government, we’ve used AI in the Home Office to help spot and remove terrorist propaganda on social media platforms; in the Department for Education to help better forecast the number of teachers required in each local authority; we’ve used it in the NHS to work towards a more effective way to predict the recurrence of cancer; and in local government to help better identify houses in multiple occupation.
      • We believe that AI is for Everyone. It is a remarkable technology that has the potential to solve hard problems and automate hard work. It isn’t just for the Internet Advertising giants. It isn’t even just for companies and governments. It should benefit workers, and customers, and citizens: everyone.
  • What is AI?
    • I think it is helpful to define AI in the context of science and the scientific method:
      • Science, as described by Karl Popper, is the process of collecting, analysing, and presenting data from the world, and testing theories that describe that data.
      • That second step – analysis and deduction – is the most mysterious: it requires human intelligence, which to Popper was a black box.
      • Data Science is the Scientific method using software. Artificial Intelligence is the set of techniques that replicate that analytical, deductive step that is done by human intelligence in the scientific method.
    • So AI isn’t magic, it’s science. And it isn’t just advanced statistics. AI is different because:
      • More complex functional approximation – e.g. Image recognition
      • Continuously learns – e.g. AlphaGo Zero
      • The goal is usually automation not human ‘insights’ e.g. trading algorithms
    • AI started in the 1950s, and has progressed in fits and starts, or ‘winters’ and ‘summers’ as they are called in the field. For simplicity there are two types of AI to be aware of:
      • Symbolic AI (1950s-1980s): Pre-defined models; Hand-coded rules; Small data sets
      • Statistical AI or machine learning (1990s-present): Learning algorithms; Unspecified rules; Large data sets
    • Symbolic AI failed because it was always constrained to working on very small problems – it couldn’t deal with the complexity of the real world. Statistical AI has advanced because of the accessibility of vast sources of data; and the exponential increase in computing power.
    • Money, Guns & Lawyers are all intricately bound up in the future of AI:
      • Enormous amounts of money are now pouring into the field. Like the historical investment in electrification or the Internet, I don’t expect it to slow down any time soon.
      • Military at the forefront of experimentation – autonomous weapons particularly controversial. Quick show of hands – who would be comfortable with the British Military using autonomous weapons?
        • We have said we won’t – but missile defence systems are largely autonomous already, as are cyber attacks.
        • Russian and Israeli firms (e.g. Kalashnikov and Airobotics) are developing autonomous combat drones using AI, and more will follow.
        • South Korean guns on the sentry posts along the border with North Korea are partly autonomous.
      • And if you thought the conversations provoked by GDPR were tricky – the lawyers are going to have an absolute field day with AI – who is liable for the decisions that AI models make? What counts as discrimination? What rationale has to be provided for decisions made by machines? The list goes on and on.
  • Why does AI matter as a technology?
    • General Purpose Technology – like electricity, or the shipping container, or machine tools – there are so many secondary uses and implications – we can’t even imagine all of them.
    • Let’s take self-driving cars. They are coming: one example of social and economic transformation driven by AI, but one among many. I suspect our children may never need to learn to drive. What are the implications of AI in cars? (h/t Benedict Evans):
      • Reduce c.1 million road deaths per year globally. Something over 90% of all accidents are caused by driver error, and a third of fatal accidents in the USA involve alcohol. These accidents carry a huge economic cost: property damage, medical and emergency services, legal costs, lost work and congestion. UK analysis found a cost of £30bn every year.
      • Fewer people employed as drivers. There are something over 230,000 taxi and private car drivers in the USA and around 1.5m long-haul truck-drivers.
      • Taxis become 75% cheaper, because the wage of the person driving the vehicle accounts for three quarters of the cost.
      • Higher road capacity & faster journeys. No lanes, no separation, no stopping distances, and no signals mean profoundly different traffic patterns. Accidents themselves cause as much as a third of congestion.
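The “75% cheaper” taxi claim above is just arithmetic, and worth making explicit. A minimal sketch, using an illustrative fare of 100 and assuming, as in the talk, that the driver’s wage is three quarters of the cost:

```python
# Illustrative arithmetic behind the "taxis become 75% cheaper" claim.
# Assumption (from the talk): the driver's wage is three quarters of the fare.
fare = 100.0               # index today's taxi fare at 100
driver_wage_share = 0.75   # assumed wage share of the total cost

driverless_fare = fare * (1 - driver_wage_share)   # remove the wage component
saving = (fare - driverless_fare) / fare           # fraction saved per journey

print(driverless_fare)  # 25.0
print(saving)           # 0.75, i.e. a journey 75% cheaper
```

The numbers are indicative only; the real saving depends on how much autonomy adds to the vehicle’s capital and maintenance costs.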

Slightly more speculatively, we might project:

      • No more looking for parking spaces & no more need for on-street parking
      • Collapsed distinction between cars & buses
      • Renaissance of Rural Pubs
      • Media Consumption time rises
      • Falling crime from car camera footage
    • But AI isn’t attracting so much interest just because of its economic potential – or even because of the potential for apocalypse (we don’t worry about asteroids in the same way) – but because AI is entangled in stories about ourselves – about our identities.
  • AI and what it means to be human
    • A lot of people, from Alan Turing in the 1950s to Stuart Russell today, along with most of Silicon Valley, and influential philosophers like Peter Singer and Julian Savulescu all appear to fix on cognition and analytical intelligence as the principal way to define a person, and a person’s value.
    • It is interesting to me that even the critics of AI – like Yuval Harari, the author of Homo Deus, are often just as reductionist in their comparison between human and machine intelligence.
    • I think that is an impoverished definition: I think character trumps cognition, but we’ll come back to that.
    • Defined in this way, it sets machine intelligence up in direct competition with human intelligence, and therefore with us as people. Stories spread and anxiety grows about how machines will judge us, and how they may replace us.
    • And in terms of cognition – the machines are getting brighter. It may be helpful to think about intelligence in some qualitative categories rather than just as a general scale. In ascending order of difficulty for machines we have:
      • Calculation – this is simple
      • Prediction – this can be complex, but is now common
      • Recognition – this is new
      • Understanding – contextual intuition is easy for us but very hard for machines
    • Computers are getting better at this at the same time that we’re increasingly aware of our own cognitive limitations. We aren’t often straightforwardly rational. We can be and usually are biased, emotional, lazy, or distracted in our thinking – that’s me anyway – I’m less sure about you.
    • Our cognitive biases are widely exploited. Quite a few methods of psychological manipulation are now used as reliable business models – advertising in particular.
    • And it is easy to think that emotional or subjective thinking is less good than the super rational kind. That we should only ever aspire to the life scientific.
    • But what if it takes more than intelligence to understand the world? What if intelligence is insufficient? What if, as Martin Heidegger suggested, the world is more than a set of facts?
  • AI Ethics: how will AI be used?
    • Like any technology or tool, AI is a capability that opens up a series of possibilities. It isn’t inherently good or bad, but it will be used in positive and less positive ways, and I’d love to explore not only what that might look like, but how we can understand the current framework and biases that are likely to characterise the ethos of applied AI.
    • The most common way of thinking about AI ethics is to imagine all the things that could go wrong, why they might go wrong, and start to construct ways of avoiding those failures.
      • Autonomous vehicles crashing
      • AI Chatbots spitting obscenities into the Internet
      • More scarily – Deep Fakes
      • Less visibly – algorithms used to triage applications discriminating on ethnicity
    • But I think an equally fruitful way to explore AI ethics is to focus on what ‘going right’ looks like:
      • Cost efficiency or customer satisfaction?
      • Efficient allocation of labour, or full employment?
      • And in an era where even the quality of our sleep is now quantified – what should happen to those things that count, but cannot be counted? Like mercy? Or grace?
      • Most counter-intuitively, might a purely maximalist approach to optimising outcomes overlook the value within their opposites: the power within weakness, the trap of independence, or the good that can emerge from suffering?
    • AI as a technology lends itself to an ethic of rationality where the world really is just a set of facts, and where a Western Cartesian, objective distant representation of the world is thought sufficient to make sense of it.
    • One reason for the uptake of AI across government is because it is entirely in tune with the gospel of New Public Management:
      • The philosophy that ‘you can’t manage what you can’t measure’. It started in the private sector: Tom Peters – In Search of Excellence (1982).
      • Then came to the Public Sector: Osborne and Gaebler – Reinventing Government (1993)
      • It was (excellently) embodied by Michael Barber – who set up the PM’s Delivery Unit – and has recently written – How to Run a Government
      • A remarkably resilient management philosophy, reflected in a centrist political consensus for the last 25 years. Almost unobjectionable – who could be against efficiency?
      • Strengths:
        • It broke open the closed shop of the professions – teaching, medicine, law, and recognised that accountability is necessary, even for experts.
        • Offers a freedom from rules – by specifying outcomes and outputs, and encouraging innovation in process terms.
        • Not partisan but technocratic – so can be embraced by left and right – you can target equality or performance – as long as you’ve got measurable KPIs it works.
    • However AI within a philosophy of managerialism looks rather threatening (unless you’re a manager). ‘Good’ looks like taking people out of the loop, with the goal of greater efficiency.
  • And managerialism has recently run aground:
    • The bureaucracy of DWP dramatised in I, Daniel Blake. The bureaucracy of the Home Office in the Windrush controversy. The bureaucracy of HMRC when it comes to charitable donations and reclaiming tax. In the NHS. Yes even the NHS: the unsubtle hints that terminating a fetus with Down’s syndrome is probably for the best. And most prosaically – how many times have you been kept on hold? How many times have you been funnelled into a ‘process’ for the convenience of the company or organisation, rather than the citizen?
    • All five are examples of managerialism deployed in the interests of the organisation rather than the citizen.
    • Moreover, everywhere we are faced with ‘wicked problems’ that can’t be solved analytically, or with more operational efficiency.
      • Debt counselling → relational imperative, not a transactional one.
      • Adult Skills Retraining → motivation of students and providers and employers, changing culture as well as services.
      • Brexit → not an economic argument, but a political clash of identities
    • I suspect they can only be solved relationally – not transactionally, because they involve a clash of values rather than an analytical mystery.
  • An ethics of AI that just delivers efficiency could do more harm than good:
    • In Healthcare → automating diagnostics → what about the patient-doctor relationship?
    • In Education → personalised pedagogy through MOOCs → what about the inspiration of human tutorials?
  • Out of the box, AI works to get tasks and transactions done; AI doesn’t help the formation of character, or the provision of kindness. In the language of David Brooks – AI is about the resume virtues, not the eulogy virtues.
  • Unless we recognise the essence of public services and duties within relationships and character, we will use AI to optimise our way to an impersonal and unresponsive state.
  • What can we do?
    • Well, I’d love to hear your suggestions. But here are three that occur to me:
    • First – let’s not fall prey to CP Snow’s Two Cultures. In his 1959 lecture Snow declared: “If the scientists have the future in their bones,…then the traditional culture responds by wishing the future did not exist.” Arguably that division continues today. We can stand against it by learning how to commission, sponsor and assure the use of AI where we are working – for example, by taking an ‘AI for Executives’-style training course.
    • Second, we should lobby for the use of AI, but also for it to be transparent and accountable – don’t blindly trust the machines, and don’t just rely on the lawyers.
    • Last, and perhaps most critically, we should use the efficiency savings from AI to invest in relationships, community, and social cohesion – for ourselves, as well as for the public that we serve.

In conclusion

I believe AI is a general purpose technology, much like electricity or computers, that has such broad application that in 50 years time there will scarcely be a corner of life untouched by its impact.

But there are two possible futures that involve artificial intelligence. Not the apocalyptic vs. the ecstatic visions of commentators such as Elon Musk and Ray Kurzweil that capture the headlines – that is merely a speculative debate about what the technology will be able to do in 20 years’ time. More important for us right now are the two ways in which the technology will be used today.

We stand today between a future that is increasingly quantified, optimised, and managed; and a future that emphasises the relational heart of work and services. These two futures will co-exist, but they will be felt very differently. Because of the power of AI, I think it will tip the balance, one way or another. And there will be rows. Those rows will involve money, guns and lawyers. The way those rows are resolved will define our politics, our experience of work, and our social integration for a generation to come.


I’m grateful to a number of people for their thoughts and suggestions for this talk – including Adrian Brown, Will Davies, Sally Phillips….


Policymaker’s guide to Digital

Quite a few colleagues in the Civil Service have asked me recently about how policy professionals can learn more about ‘digital’. This collection of web resources obviously isn’t definitive or authoritative, and I don’t necessarily agree with everything that the authors say, but everything here has helped me personally. If you have other suggestions, please do leave them in the comments below.

Jeremy Heywood – Government as a Platform
Mike Bracken – The Strategy is Delivery
Tom Coates – Is the pace of change really such a shock?
Evgeny Morozov – The rise of data and death of politics
Francis Irving – History of version control

Mark Foden – The Gubbins of Government
Jonathan Zittrain – Future of the Internet
Cory Doctorow – How to break the internet
Clay Shirky – Here comes everybody
Tim O’Reilly – Government as a Platform

The Government’s Digital Service Standard, and the Service Design Manual
Governance for Service Delivery
People & skills in a digital development team
Who are the Digital Leaders in the UK Government
Overview of web tools for civil servants (and I would add Feedly)

From Gutenberg to Zuckerberg – John Naughton (2012)
Small Pieces Loosely Joined – David Weinberger (2003)
More useful book suggestions here from Mike, and John.

Python for beginners – Code Academy
Data Science for beginners – Coursera
Other MOOC providers: EdX, Udacity, Udemy, Khan Academy, FutureLearn

Government platforms:
Publishing: GOV.UK
Monitoring & Evaluating: Performance Platform
Verifying users: GOV.UK Verify
Purchasing: Digital Marketplace

Digital Government in the next Parliament

Yesterday I gave a speech to the High Potential Development Scheme cohort of Civil Service Directors and Directors General. I’ve blogged it here in case it is of wider interest.


What does digital mean for you?

Getting a decent laptop for work?

Maybe getting Google alerts of all the relevant news on your brief, straight to your phone?

Maybe a transformation programme for better digital services to meet the needs of our users.

Perhaps, it’s using Twitter to improve your engagement, and strengthen your influence.

Maybe just the wistful memory of a familiar departmental website that you knew your way around, and had some sense of control over?

Whatever digital means, I know that you’ll want to keep it in perspective. My most important asset is not my shiny new laptop, or my knowledge of data science, but time and attention – mine, and that of those around me. The internet in my pocket is useful, but also a liability: an endless source of distraction. It makes time move faster, and disappear more quickly.

I’m doing digital delivery now after a spell at Google, but really I’m a recovering policy wonk who spent 10 years in the Prime Minister’s Strategy Unit, the Treasury, and the Department of Energy and Climate Change working in a way that most of you are familiar with.

Nevertheless, I have drunk the Kool-Aid, and I do think digital will transform the way in which policy is made; the way in which ministers engage; and the way in which Government is done.

In this parliament, digital has largely been in the wings. Noises off. I will argue that it is about to come centre stage. In the next parliament, it will inescapably affect how we play our parts.

Enormous challenges loom over us:

  • Welfare Reform – pensions, troubled families, and social care
  • Britain in the World – from Europe, to terrorism, to Russia & Ukraine
  • Our long term security – our climate, our energy security
  • Our political Union – whether devolution, or disintegration
  • Funding and financing – the costs of the NHS to banking regulation
  • Public trust and engagement in politics and the political process
  • Inequality – wage stagnation, migration, productivity, the squeezed middle
  • Our civil service – competition for talent, capability, the risk of another Snowden

Alongside these challenges we face a fiscal backdrop just as sombre as in the last parliament, with cuts to the budget and staffing of Whitehall set to continue as the new normal.

Digital has potential

From the sidelines, I believe that the Government Digital Service has demonstrated what digital transformation looks like in the context of public service delivery:

  • government-wide platforms like GOV.UK, the Performance Platform, and Verify, a new platform to validate users’ identity when they engage with government. These are high-quality common components for the whole of government to rely on, and they will continue to evolve and improve;
  • rapidly transformed departmental Digital Services like Register to Vote, Lasting Power of Attorney, or Carers Allowance that give citizens and businesses a simpler, clearer, faster service, and which have seen delighted users. They’ve even had to install a new button in the Ministry of Justice call centre to register the positive feedback they’ve been getting – they had never had any before.
  • Digital has enabled savings of over £1bn since 2011, according to HM Treasury.

In the three years since GDS was founded, New Zealand and Hawaii have both taken the source code for GOV.UK to run for themselves; we’ve been described as the best startup in Europe by Saul Klein, a leading UK Tech Venture Capitalist, and the ‘Gold standard of global digital government’ by the Washington Post. We beat the Shard and the Olympic Cauldron in a design competition, and the United States government has just created a Digital Service modelled on us.

I’m not here to brag – we’re far from perfect, and it is tough to scale an organisation from a dozen to 500+ people without a few growing pains. I make the point only because it is interesting to contrast the external news of GDS, with the less positive stories you sometimes hear inside Whitehall.

In any case, there is so much more to do:

  • a more concerted effort to replace Government’s worn out tech – like the laptops which take 10 minutes to start, or the email attachments with twelve layers of tracked changes – with the quality of software and hardware that we’re becoming used to using outside of work (and by freeing ourselves from eye-wateringly expensive supplier contracts we would slash the £8bn annual spend on IT in the process);
  • we need systematic use of performance information and data science techniques to understand how we’re doing in real-time, and get better evidence for our operational and policy decisions; and
  • we need more shared cross-government components for service delivery, like licensing, fraud detection, making payments, identity matching, and case management software.

That is what I believe the future of digital transformation looks like in the context of public service delivery.

But what about in the more traditional and strategically central context of policy making?

The way we’re making policy isn’t working

I think there are five big problems with the classical policy making process.

First, it takes too long. In 2005 I came into the Treasury to work on productivity. I made the case for an independent review of intellectual property policy, which was commissioned, and completed by the end of 2006. Since then there has been a gradual implementation, with thousands of pages of consultation, explanation, and legislation. One of the 54 recommendations made 10 years ago, to legalise copying CDs to your computer, finally comes into effect on the first of October this year. Do people even buy CDs any more?

Second, it has become too rigid in its details. The cycle of green paper, then white paper, then draft bill, and a slew of secondary legislation quickly leaves the realm of principles, and becomes service design. Fixing operational service design in legislation is a disaster, because the best ways of meeting the needs of users are hard for anyone to predict from first principles, and they are likely to change over time. But we’re stuck with a statute book that demands wet signatures – which has been a problem for services like Universal Credit and Lasting Power of Attorney – and a paper tax disc for car registration. On top of that, because it is difficult to change, new policy often gets layered on to existing policy, breeding complexity and confusion. Just look at the sedimentary layers in the design of our energy market, tax code, or policies like Carers Allowance.

Third, it learns too slowly from experience. Aside from professional lobbying groups, people are generally not good at giving feedback on abstract propositions. We are much better at giving feedback on our experience of a service. And as civil servants, I’d argue that we’re also much better at responding to this kind of feedback. Testing a small prototype service with real users gives invaluable insight, and also forces us to think about the service interface early rather than at the last minute. Services like the Green Deal that launch with a ‘big bang’ run enormous implementation risks.

Fourth, fear of technology removes a vital design tool. The benefits of losing the paper counterpart to the driving licence have been manifest for a decade, but fear of the IT change required put policy colleagues off. We need better education as to what is hard, and what isn’t.

Lastly, policy possibilities are limited by a technical archipelago. There has long been a policy desire to be able to set up a business with ‘one click’. Policy colleagues across departments are able to agree that this would be a boon for business. But technically, this is genuinely hard, because these systems, stationed in different departments and agencies, haven’t been designed to talk to one another. Have you heard of ‘Tell Us Once’? The ambition is laudable – but this is a policy pipedream until we interconnect our systems properly.

Where big data systems have grown up in one department as a national asset – like national insurance numbers, cars, or patients – legally and technically they tend to serve the needs of that one department first, with others struggling to get access. I saw this recently in the Illegal Working review, where the Home Office was requesting access to national insurance numbers from HMRC and DWP, to create a service for employers to check on the status of potential employees.

Digital will be needed to solve the upcoming big policy challenges

The way we solve problems needs to change.

Classically, I’ve tended to understand policy as flowing from strategy. But I think I’ve got things upside down. I’ve begun to think that policy should flow bottom up, rather than top-down: from a service, and engagement with users, not from a strategy.

Digital services are quick to build, easy to iterate, and a constant source of rich, objective feedback. They also open new opportunities for policy that may be able to break open the stale debates that have stymied solutions to problems as diverse as road charging, energy efficiency, media regulation, and child protection.

My default mode of operation was to take a problem and analytically describe a solution, with as many supporting stakeholders, case studies, and cross-national parallels as I could muster. Our bosses, ministers, and the Centre would prod and poke it, and if it stood to reason, was affordable, and had politics on its side, then it had a good chance of being accepted and announced.

We don’t know if it will work, and it usually takes a couple of years between announcement and implementation to find out.

But the public policy problems we have now are too difficult to solve once. We need adaptive solutions that can be experimentally applied, extended or cancelled, without loss of career, or years of paperwork.

The tools with which we solve problems may need to change as well. As a policy analyst, I was taught that there were always basically three options. Regulation. Taxation. Spending.

Service delivery came nowhere.

What if the operational design of a service was our first concern rather than our last?

Let’s build something, and then test our assumptions for real. Lobby for ministerial announcements that describe delivery, rather than just intent.

The sums of money involved are relatively small to build prototypes and alphas – often smaller than the amount we would spend on policy development. The Treasury has even adapted their business case methods in the Green Book to enable this to happen.

We need more Engineers, not more lawyers. We have a dearth of experimentation, not a shortage of rational process.

Digital is, of course, not the only answer – the themes of design thinking, open policy making, What Works centres, pilots and RCTs, behavioural insights and cognitive biases, all of them link back to experimentation, and digital is a golden thread that links them together.

It is now just a few weeks after the latest government IT disaster – the Home Office’s eBorders programme. So I understand if it feels premature to sound the trumpets for Digital. But as William Gibson once said, the future is already here – it’s just unevenly distributed.

The mechanics of Government in other countries are becoming Digital

This is not a story that is native to Britain. Other countries are fellow-travellers.

Estonia – started from scratch in the aftermath of their independence. They have passed a law allowing any citizen to refuse to provide government with a piece of information if they have already provided it. This requires radical interconnection between their agencies and departments. They don’t need many people to run their digital services. They said that seven people run their benefits service.

China, Mexico, Singapore, Finland – all these countries are taking a fresh approach to public policy, by bringing digital delivery to the centre, and breaking the old habits of government.

After the trauma of the HealthCare.gov launch, the US Government is also paying much more attention to service delivery.

You will be central to this – what do you need to know to succeed?

There are a bunch of things that might help you understand digital skills in more detail, which I will circulate later today.

But as well as skills, I would emphasise culture. Our design principles are a straightforward list, but to make them happen requires a big change in culture.

As senior leaders, this is particularly a job for you. You’re the guardians of culture for the civil service, and you’ll have a big say in what the future looks like.

Change doesn’t come easily. In the 16th century, Japanese samurai didn’t readily embrace firearms, because they distrusted the new technology, and because the sword was deeply embedded as a part of their art, culture, and honour. They went to extraordinary lengths to avoid using guns, but eventually they did, because they found that without them they were losing.

I would pick three of those design principles as particular cultural challenges:

Make things Open – despite Snowden, despite civil service salary transparency – making things open makes them better.

Do less – in particular, write less. On the tour, did you pass the publication ticker for government? It is astonishing to watch the volume and variety of words the government publishes.

Understand context – get to know your users. Visit the user testing lab downstairs, or sit with your departmental digital team doing some guerrilla user testing.

And without wanting to sound like a counsellor, you may want to have a conversation with someone who can advise you on the particular challenges you’re facing, and the ideas you’ve got for developing some of these themes. Perhaps the digital leader in your department, or the Chief Digital Officer, if there is one. If you’re stuck, then come back to us and we’ll be able to help or put you in touch with someone who can.

And now, after so many words, surely it’s time for us to go and build something.

Digital delivery

I’ve recently published two posts about the work we’re doing in the Government Digital Service to track our progress, and to measure the performance of public services:

Digital marches on: rising take-up, falling costs
The Performance Platform: open for business

We have also recently launched the Digital Service Standard, finalised guidance for Agile business cases with HM Treasury, and the long awaited user research lab is nearly complete. It has been a good couple of months!

Digital up 9% in just over a year