
The evolution of data ethics in the public sector

September 22, 2020 | 3 min read

The public sector is used to having to comply with stringent regulations around data. But as bigger volumes of data are processed automatically by artificial intelligence (AI) algorithms, complex ethical issues are being raised. So, how are public sector organizations responding to this challenge and how can Unit4 help?

Public sector organizations have a lot of experience in complying with data regulation, perhaps more so than many other sectors. Following policy, ensuring good governance and keeping data secure are part of everyday business.

But things are getting harder all the time. The sheer volume of data, the many new types of data, and the fact that they’re being processed automatically by intelligent machines all raise new issues.

What are the main issues?

According to the Deloitte Insights’ Government Trends 2020 report, public sector organizations are having to move quickly to address the following main ethical issues:

  • Privacy – law enforcement agencies, transit authorities and retailers are implementing facial recognition technology, often without the subjects’ consent.
  • Transparency – the way AI algorithms learn and evolve is not always fully understood, even by their creators.
  • Bias – unconscious discrimination can be introduced into decisions through bias in either the algorithm or the underlying data (a simple illustration follows this list).
  • Accountability – the biggest issue is a lack of clarity about who’s in charge: Who authorizes the collection and processing of data? Who sets the standards, and who enforces them?
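
To make the bias point concrete, here is a minimal sketch of the kind of check that could be run on historical decision data before it is used to train an algorithm. It is written in Python with hypothetical data, field names and a 0.8 ("four-fifths") rule-of-thumb threshold, all of which are assumptions for the example rather than a prescribed method: if one group's approval rate is far below another's, an algorithm trained on that data is likely to learn and reproduce the gap.

  # Illustrative only: compare approval rates across groups in historical
  # decision records and flag a large gap. The data, field names and the
  # 0.8 threshold are assumptions for this example, not a legal test.
  from collections import defaultdict

  def approval_rates(records, group_key, outcome_key):
      """Share of positive outcomes per group."""
      totals, positives = defaultdict(int), defaultdict(int)
      for row in records:
          totals[row[group_key]] += 1
          positives[row[group_key]] += 1 if row[outcome_key] else 0
      return {g: positives[g] / totals[g] for g in totals}

  def disparate_impact_ratio(rates):
      """Lowest group rate divided by highest group rate (1.0 = parity)."""
      return min(rates.values()) / max(rates.values())

  # Hypothetical historical decisions an algorithm might be trained on.
  decisions = [
      {"group": "A", "approved": True},
      {"group": "A", "approved": True},
      {"group": "A", "approved": False},
      {"group": "B", "approved": True},
      {"group": "B", "approved": False},
      {"group": "B", "approved": False},
  ]
  rates = approval_rates(decisions, "group", "approved")
  ratio = disparate_impact_ratio(rates)
  print("Approval rates by group:", rates)
  print(f"Disparate impact ratio: {ratio:.2f}")
  if ratio < 0.8:  # common rule of thumb, not a legal standard
      print("Warning: large gap between groups in the historical data.")

A check like this cannot prove or disprove discrimination, but it shows why reviewing the underlying data matters as much as reviewing the algorithm itself.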

There are no clear-cut answers to these issues yet, and a lot of effort is being made around the world to address them.

What progress has been made?

One of the biggest developments in recent years was the European Union’s General Data Protection Regulation (GDPR), which came into effect in 2018.

Shortly afterwards, Serbia’s National Assembly enacted a new data protection law modeled after GDPR; Hong Kong published an ethical accountability framework that includes guidelines for protecting the privacy of citizens; and Bahrain passed a data protection law, making it the first Middle Eastern country to adopt a comprehensive privacy law.

In spite of these initiatives, there are still many problems. Since 2013, 9.7 billion data records have been lost or stolen globally, according to the Deloitte report. And since GDPR came into force, Data Protection Authorities have received more than 95,000 complaints under the regulation.

What needs to happen next?

A number of ethical frameworks are emerging. The UK Government, for example, has published a data ethics framework to clarify how public sector entities should treat data.

Canada has developed an open-source Algorithmic Impact Assessment to help ensure decisions made by automated systems are interpretable and transparent.

The city of New York has established an automated decision systems task force to explore how the city uses algorithms to make decisions. The Netherlands’ Utrecht Data School has developed a data ethics decision aid tool that various Dutch municipalities now use to guide ethical decisions about data.

But national legislation is struggling to keep pace with the ethical issues raised by advances in AI. So, the technology industry is taking a responsible lead. The Partnership on AI, for example, which brings together some of the biggest technology firms, including Apple, Amazon, Google, Facebook, IBM, and Microsoft, was set up to advance understanding of AI technologies.

How is Unit4 contributing?

Unit4 is deeply involved in this issue. For many years our systems have held and processed citizen data on behalf of our public sector customers. Like every organization that handles personal data, we’ve developed comprehensive GDPR compliance measures covering both our own activities and those of our customers.

For some time, we’ve been building AI into our software to spot usage patterns, predict user needs and automate routine tasks. This not only improves efficiency but also enhances the People Experience for the users and citizens who interact with our software.

Our approach is to design processes and systems that work the way people want to work, and that includes molding them to the requirements of regulation and the organization’s own internal policies.

Our software is particularly easy to configure, which makes it straightforward for public sector organizations to create processes, reports and audit trails that demonstrate compliance with law, regulation and best practice, tailored to different stakeholder requirements.

And our open, collaborative and people-centered culture guides us through the data ethics issues that the age of AI brings.

We will work alongside our public sector customers to deliver transparency, accountability and responsibility – and ultimately citizen trust – as we move further into the age of AI-driven public services.

For more on how our software can help you practice good data ethics in the age of AI, go to the Unit4 Public Sector webpage or get in touch with us for a chat.
