For all of human history, politics has been driven by human activity and by the interactions of humans within, and between, networks. Now, advances in artificial intelligence (AI) hold out the prospect of a fundamental change in this arrangement. Non-human entities are increasingly being used in international affairs, and their rise could radically change our understanding of politics at the highest levels. Here are six things the world can do to prepare.
(1) Governments worldwide should invest in developing — and keeping — home-grown talent and expertise in AI.
AI expertise must not reside in only a small number of countries, or solely within narrow segments of the population. Otherwise, there is a danger that countries will become dependent on expertise that is currently concentrated in the US and China.
(2) Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals.
The humanitarian sector could benefit from such systems, which might, for example, improve response times in emergencies. Since such systems are unlikely to be immediately profitable for the private sector, a concerted effort needs to be made to develop them on a not-for-profit basis.
(3) It should not be left to technical experts to understand the benefits — and limitations — of AI.
Better education and training on what AI is — and what it is not — should be made as broadly available as possible. Conversely, those developing the technologies would benefit from a greater understanding of the ethical goals underlying their work.
(4) Developing strong working relationships between public and private AI developers, particularly in the defence sector, is critical.
Since much of the innovation is taking place in the commercial sector, ensuring that intelligent systems charged with critical tasks can carry them out safely — and ethically — will require openness between different types of institutions.
(5) Clear codes of practice are necessary to ensure that the benefits of AI are shared widely while the risks are well managed.
In developing these codes of practice, policymakers and technology experts should understand the ways in which regulating artificially intelligent systems may be different from regulating arms or trade flows while also drawing relevant lessons from those models.
(6) Developers and regulators should pay particular attention to the question of human–machine interfaces.
Artificial and human intelligence are fundamentally different. Interfaces between the two must be designed carefully and reviewed constantly, to avoid misunderstandings that, in many applications, could have serious consequences.