On March 30th we at Capita published an article on Covid-19 and trust. It opened with one pretty startling fact.

At that point Singapore, close to the regional epicentre of the outbreak, was reporting only 96 cases and no deaths.

That would change, but in the first few weeks of the crisis Singapore deployed a state data system that allowed it to use the private data of its population to deliver real-time services and interventions that appeared to stop the virus in its tracks.

They merged health and travel databases — a seemingly complex task achieved within a day — and then made that information widely available to help identify cases.

Then they launched a text- and mobile web-based service for those placed under home quarantine to report their location and health status to the government. They followed this up by using the early infections to establish an advanced contact-tracing system and deploying that data to make crucial decisions about isolation, quarantine and health provision.

And they did it fast.

And when that changed – when the numbers of cases and, sadly, deaths began to creep up in April and May – it was in a hidden population: migrant workers who weren’t part of the data systems. They weren’t accounted for in healthcare provision and support, they sat largely outside the official systems, and the impact was profound.

The truth is that an algorithm is only as good as the information that goes into it. The Canadian firm BlueDot tracked the spread of the virus and detected the outbreak days before the initial reports from the Centers for Disease Control and Prevention (CDC) and the World Health Organization. BlueDot’s algorithm picked up early warning signals by applying natural language processing and machine learning to data sets including news coverage, global flight patterns and government reports.
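BlueDot’s models and data feeds are proprietary, but the basic idea of mining news text for early warning signals can be illustrated with a deliberately simplified sketch. Everything below – the headlines, the watch-list of terms and the threshold – is invented for illustration and is not BlueDot’s method.

```python
from collections import Counter
from datetime import date

# Toy headlines dataset: (date, headline). Entirely illustrative; a real
# system ingests licensed news feeds, flight itineraries and official
# reports in many languages, and uses trained language models.
HEADLINES = [
    (date(2019, 12, 28), "Local markets prepare for new year celebrations"),
    (date(2019, 12, 29), "Transport authority announces holiday schedule"),
    (date(2019, 12, 30), "Hospital reports cluster of unexplained pneumonia cases"),
    (date(2019, 12, 31), "Officials investigate pneumonia outbreak linked to market"),
    (date(2019, 12, 31), "Pneumonia cases of unknown cause under investigation"),
]

# Hypothetical watch-list of symptom and disease terms.
WATCH_TERMS = {"pneumonia", "outbreak", "respiratory", "fever", "cluster"}

def daily_signal(headlines):
    """Count watch-term mentions per day as a crude outbreak signal."""
    counts = Counter()
    for day, text in headlines:
        tokens = {t.strip(".,!?").lower() for t in text.split()}
        counts[day] += len(tokens & WATCH_TERMS)
    return dict(sorted(counts.items()))

def flag_anomalies(signal, threshold=2):
    """Flag days where the signal jumps above a simple fixed threshold."""
    return [day for day, score in signal.items() if score >= threshold]

if __name__ == "__main__":
    signal = daily_signal(HEADLINES)
    for day, score in signal.items():
        print(day, score)
    print("Days worth escalating to a human analyst:", flag_anomalies(signal))
```

Even in this toy form, the point stands: the signal is only as good as the text and the flight and health data feeding it.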

But most systems rely on the data coming out of healthcare systems. The Singapore example stands out because it was collaborative: the population has proved far more willing to supply personal data for the greater good than the populations of most developed economies.

Many debates around acceptable data use miss the point. The question isn’t really how data is used; it’s whether a clear and shared objective is being met.

Loyalty cards are a very simple, very effective example. Research shows people accept their data being used if the value proposition is clear, i.e. it fulfils the purpose they are told it’s for.

Twenty years ago, research told us that consumers trusted brands like Heinz, Marks & Spencer and Mothercare more than they trusted the government – to tell them the truth, to treat them with respect, to deliver on promises. Today, those that can do the same with our data – use it appropriately, keep it safe, and deliver on those promises – will become like the John Lewis Christmas advert in December: trusted, reliable, beloved.

Because when we trust, we trust “big time”. Innovations like Disney’s MagicBands require us to trust someone not just with our banking data, but with the safety and privacy of where we sleep and the names of our children.

And when companies get it wrong – ask Facebook – the damage to the brand may be immeasurable.

Getting it wrong isn’t always as deliberate as cross-selling or a cyber-security breach. Technological and methodological failures also distract from the proper debate on the ethics of surveillance and the protection of data. And that matters, because data is at the core of how we ask better questions and come up with better answers.

Digital transformation in the UK would be much easier with ID cards, but the repeated mistakes of the past and a lack of trust make this politically difficult.

So, unlike many other countries, the UK has no register of citizens and there is no requirement for an individual to register with their Local Authority.

NHS numbers and even National Insurance numbers are not reliable enough to match a person across different services and regions. If known at all, a citizen can be viewed very differently by each silo, each using different data fields. But if Covid-19 has shown us anything, it is that when it comes to a large-scale response for people, access to data that provides a holistic view of a citizen is essential.

Critical.
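To make that matching problem concrete, here is a toy sketch of linking records across two silos when no single identifier can be relied on. It is not how NHS or government systems actually work; the field names, records, weights and threshold are all invented, and real record linkage demands far more care around accuracy, consent and governance.

```python
from difflib import SequenceMatcher

# Illustrative records from two hypothetical service silos.
health_records = [
    {"id": "H1", "name": "Jane Smith", "dob": "1954-03-02", "postcode": "LS1 4AP"},
    {"id": "H2", "name": "Jon Brown",  "dob": "1978-11-19", "postcode": "M1 2AB"},
]
benefits_records = [
    {"id": "B1", "name": "J. Smith",    "dob": "1954-03-02", "postcode": "LS1 4AP"},
    {"id": "B2", "name": "Joan Browne", "dob": "1981-07-05", "postcode": "M3 9XY"},
]

def similarity(a, b):
    """Crude string similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a, rec_b):
    """Weighted score across fields: exact matches on date of birth and
    postcode count for more than a fuzzy name match."""
    score = 0.0
    score += 0.4 if rec_a["dob"] == rec_b["dob"] else 0.0
    score += 0.3 if rec_a["postcode"] == rec_b["postcode"] else 0.0
    score += 0.3 * similarity(rec_a["name"], rec_b["name"])
    return score

def link(records_a, records_b, threshold=0.75):
    """Pair up records whose combined score clears the threshold."""
    links = []
    for a in records_a:
        for b in records_b:
            score = match_score(a, b)
            if score >= threshold:
                links.append((a["id"], b["id"], round(score, 2)))
    return links

if __name__ == "__main__":
    # "Jane Smith" and "J. Smith" link on shared DOB and postcode;
    # "Jon Brown" and "Joan Browne" do not, despite similar names.
    print(link(health_records, benefits_records))
```

The sketch shows both the promise and the risk: a holistic view is possible without a single identifier, but the thresholds and weights embody judgements that have to be debated openly, not buried in code.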

There has been a lot of debate around the ethics and morality of the data revolution – a widespread sense that our information would be used against us, or at least without our consent.

But what could we offer citizens if their data was used to provide better services, both in a crisis and out of one? Joined-up social and health care; education services linked to benefit applications; even something as simple as making sure that delivery shopping slots for the vulnerable were up to date and correct.

The improvement in service delivery and planning would be a step change. We often discuss the ethics of using data and how it is accessed. But what are the ethics of not doing this?

You’ve all heard the line about the optimist and the pessimist? The pessimist says, “It can’t get any worse…” and the optimist replies, “Oh, but it can!”

But in this area I am the optimist: a crisis can change a lot of things.

Last week YouGov found that 78% of UK adults would be fine with providing location and tracking data if it reduced the length of time they had to spend in lockdown.

As we come out of lockdown, perhaps there is an opportunity for a properly mature and informed debate around data, consent, application and trust.

And just what we gain by trusting a little more.

Written by


Kevin Nicholas

Government Market Leader

Kevin Nicholas is an expert in digital innovation and transformation and its use in the public sector to create safer, healthier and more prosperous nations. He leads Capita’s government work, with a focus spanning healthcare, defence, policing and justice, and central government, to deliver better outcomes for the citizens of the UK.
