In 2030, what we currently call ‘government’ will – or should – look radically different.
Although the public sector has traditionally been more cautious in its IT investment than the private sector, the breakneck speed at which citizens continue to adopt the latest technology has done much to force its hand.
Technology is now central to the everyday life of people in the UK, and the government has been compelled to recognise the widespread societal changes this has created.
Of course, as with any sudden change, there are those who worry that technology is having a detrimental effect on public services. The distinguished psychologist Sir Cary Cooper, for example, argued that the sector’s emphasis on digital solutions is weakening the relationship between citizens and services. In other words: the public sector is at risk of losing its human touch.
However, despite these misgivings, it’s impossible to argue against the cost-effectiveness of digital efficiencies, most of which are made with the earnest intention of widening the range of public services and deepening their impact across the country.
And while you could argue either way about the success of the government’s initial forays into a digital-first world, the very fact that these technologies are being embraced should be viewed as a positive step forward, with the public sector showing a willingness to engage with citizens on the latter’s preferred terms.
So far, while digital investment has been significant, the impact has been on a relatively small scale – enabling people to pay bills or file their tax returns via an online portal, for example. But this is just the start, and the potential application of modern technologies within the public sector is tremendously exciting.
Here are three ways technology could change the world of government:
1) Bringing artificial intelligence to the Civil Service
The evolution in public sector technology will be due, in part, to the advent of artificial intelligence (AI) and automation that we are likely to see over the next ten years.
According to BT’s The Future Workplace report, more than one-third of IT decision-makers are already using some form of AI or automation technology, and just under one-third are planning to implement AI and automation tools in the next two years. Of those who plan to invest, 62 per cent are optimistic that this technology will make their organisation more effective.
The UK public sector has already embraced technologies such as AI at some level: 95 per cent of public sector organisations claim to be using at least one form of disruptive technology, compared with 85 per cent of those operating in the private sector. The legal, policy and public-facing decisions that were once solely human – such as those in courtrooms or hospital theatres – could soon become the domain of decentralised computation, based on the harvesting of trillions of data points.
In late 2016, University College London (UCL) computer scientists developed a prototype AI ‘judge’. This software analysed data patterns to predict courtroom outcomes. In 79 per cent of assessed cases, the AI verdict was the same as that delivered by humans.
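The general idea behind such outcome prediction can be illustrated at toy scale. The sketch below is purely hypothetical – it is not the UCL team’s method – and uses invented keyword weights to score a case summary towards one of two verdicts:

```python
# Hypothetical illustration only: a toy 'judge' that scores case
# summaries against keyword weights (as if learned from past outcomes).
# Keywords and weights are invented for the example.

# Positive weights push towards a 'violation' finding.
WEIGHTS = {
    "detention": 0.9,
    "torture": 1.2,
    "fair trial": 0.7,
    "lawful": -0.8,
    "proportionate": -0.6,
}

def predict_verdict(case_summary: str) -> str:
    """Return 'violation' or 'no violation' from a simple keyword score."""
    text = case_summary.lower()
    score = sum(w for kw, w in WEIGHTS.items() if kw in text)
    return "violation" if score > 0 else "no violation"

print(predict_verdict("Applicant held in prolonged detention and denied a fair trial"))
# -> violation
```

A real system would learn such weights from thousands of judgments rather than hand-coding them, but the principle – mapping patterns in case data to a predicted outcome – is the same.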
Although much more testing and learning remains to be done, this is surely a sign of our future legal system. The ethical and social implications of such a revolution necessitate widespread debate and legislative adaptation, with legal, scientific and technology experts weighing in.
2) Removing politics from policy
In real-world terms, having access to thousands upon thousands of data points – and the ability to crunch and draw insight from them – provides an unprecedented level of understanding.
From running a police force to developing agricultural legislation, everything will radically change.
Take the former as an example: millions of computational units combining historical data with real-time sensors would mean a police force that could be largely managed by predictive algorithms, heralding a new era in operational efficiency.
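The core of such an approach can be sketched very simply. The example below is illustrative only – area names, incident counts and weights are all invented – and shows how historical data might be blended with a live sensor signal to rank patrol priorities:

```python
# Illustrative sketch only: ranking patrol areas by blending
# historical incident counts with a live sensor signal.
# All names, counts, and weights here are invented.

historical_incidents = {"north": 42, "centre": 87, "east": 15}
live_sensor_alerts = {"north": 3, "centre": 1, "east": 6}

def priority_scores(history, live, live_weight=10):
    """Score each area as past incidents plus weighted current alerts."""
    return {area: history[area] + live_weight * live[area]
            for area in history}

scores = priority_scores(historical_incidents, live_sensor_alerts)
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # highest-priority area first
```

A production system would of course use far richer models, but the design choice – continuously re-ranking resources as live data arrives – is what distinguishes this from static, schedule-based deployment.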
In many ways, this is already happening. BT, for example, has worked with the Metropolitan Police to increase productivity and streamline the force’s organisation, and being able to identify patterns within data could only aid this.
Inevitably, there are fears that the advent of machines and new disruptive technologies will de-skill and decrease the workforce, as well as engendering a lack of empathy and human reasoning in sensitive situations. Such a vast quantity of data means that decisions will become increasingly computational, which could signal an end to the place of emotion in decision-making, from local planning to a doctor’s advice.
Conversely, using machines to interpret data could help separate patterns of correlation from those of causation – offering more effective methods of crime prevention for police or better risk detection for the NHS, and enabling public sector organisations to combat the causes of problems rather than just the symptoms.
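The correlation-versus-causation point can be made concrete with a toy example, using invented numbers: two signals can track each other closely without one causing the other, because both are driven by a shared factor.

```python
# Illustrative sketch only: two signals can correlate strongly
# without one causing the other. Here, invented 'street crime' and
# 'A&E admissions' figures both rise with a shared driver (temperature).

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

temperature = [12, 15, 19, 24, 28, 31]        # shared driver
street_crime = [20, 24, 30, 41, 50, 55]       # rises with temperature
ae_admissions = [110, 118, 131, 155, 172, 180]

# Crime and admissions correlate strongly with each other...
print(round(pearson(street_crime, ae_admissions), 2))
# ...but the causal lever is the confounder, temperature - cracking
# down on crime would not reduce A&E admissions.
```

Machine analysis at scale can flag exactly this kind of confounding, directing interventions at the underlying driver rather than the co-moving symptom.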
3) Educating the Digital Generation
While AI and machine learning will undoubtedly take prominent roles in the government of the future, we still need the human touch to ensure everything runs smoothly. Future authorities need to make provisions to ensure that humans are available when needed and that these machines integrate rather than replace. Appropriate delegation between bots and humans, mobility, collaboration and efficient information-sharing will be key.
A successful government will emphasise the need for educating and preparing humans for their changing roles in this impending automation age. Implemented correctly under careful legislation, the best possible course would be a collaborative, future-proof strategy that capitalises upon technological creativity and innovation to play to the strengths of both people and machines.
Despite public fears, IT decision-makers are increasingly optimistic: as many as a third predict that these new technologies will actually create more jobs, and 97 per cent say they’re already seeing the benefits of new disruptive tech.
Government support is needed. As well as encouraging tech literacy for all – for example, by integrating tech-focused teaching into the school curriculum – authorities will be called on to take the development and preservation of uniquely human traits, such as respect and empathy, equally seriously. Government financial backing for business ventures that prioritise these human traits is also widely advocated as a first step.
The next steps
While the benefits are obvious, how we make this happen is less straightforward. Education and legislation will be crucial in this preparation, and authorities must act to ensure the government’s digital revolution takes place in the right conditions. The recent creation of the UK Select Committee for AI is a promising step in the right direction.
As the automation age inevitably draws closer and we see human roles evolve with the advent of machines, it is high time for authorities to prepare for the changes to come.