Six Things the World Can Do to Prepare

For all of human history, politics has been driven by human activity and by the interactions of humans within, and between, networks. Now, advances in artificial intelligence (AI) hold out the prospect of a fundamental change in this arrangement. Non-human entities, increasingly used in international affairs, could radically alter our understanding of politics at the highest levels. Here are six things the world can do to prepare.

Self-driving cars during a road test on 22 March 2018 in Beijing, China. Beijing’s traffic authority has issued temporary number plates to self-driving cars developed by the Chinese search engine Baidu. Image: VCG/VCG/Getty Images.

(1) Governments worldwide should invest in developing — and keeping — home-grown talent and expertise in AI.

AI expertise must not reside in only a small number of countries — or solely within narrow segments of the population — as there is a danger that countries could become dependent on the expertise currently concentrated in the US and China.

Robots are presented at the Beijing International Consumer Electronics Expo in China. Image: Zhang Peng/LightRocket/Getty Images.

(2) Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals.

The humanitarian sector could benefit from such systems, which might, for example, improve response times in emergencies. Since such systems are unlikely to be immediately profitable for the private sector, a concerted effort needs to be made to develop them on a not-for-profit basis.

A Red Cross employee in Mexico works at the collection centre in the city of Toluca, Mexico on 8 June 2018. The Mexican Red Cross is sending more than 130 tons of humanitarian aid to the people affected by the recent eruption of the Fuego Volcano in Guatemala. Image: Mario Vazquez/AFP/Getty Images.

(3) It should not be left to technical experts to understand the benefits — and limitations — of AI.

Better education and training on what AI is — and what it is not — should be made as broadly available as possible. Those developing the technologies would benefit from a greater understanding of the underlying ethical goals.

A student attends a lesson in robotics at the IT Lyceum at the Kazan Federal University. Image: Yegor Aleyev/TASS/Getty Images.

(4) Developing strong working relationships between public and private AI developers, particularly in the defence sector, is critical.

Since much of the innovation is taking place in the commercial sector, ensuring that intelligent systems charged with critical tasks can carry them out safely — and ethically — will require openness between different types of institutions.

U1208 Lab at Inserm studies cognitive sciences in robot-human communication. Image: BSIP/UIG/Getty Images.

(5) Clear codes of practice are necessary to ensure that the benefits of AI can be shared widely while at the same time the risks are well-managed.

In developing these codes of practice, policymakers and technology experts should understand the ways in which regulating artificially intelligent systems may be different from regulating arms or trade flows while also drawing relevant lessons from those models.

Sophia, a humanoid robot granted Saudi Arabian citizenship, is seen during the Discovery exhibition on 30 April 2018 in Toronto, Canada. Image: Photo by Yu Ruidong/China News Service/VCG/Getty Images.

(6) Developers and regulators should pay particular attention to the question of human–machine interfaces.

Artificial and human intelligence are fundamentally different, and interfaces between the two must be designed carefully and reviewed constantly to avoid misunderstandings that, in many applications, could have serious consequences.

U1208 Lab at Inserm in France studies robot-human communication. Image: BSIP/UIG/Getty Images.
