Thinking Aloud: what does AI mean for security and governance development?

Thinking Aloud is an occasional series in which TAG staff members offer their views on longer-range strategic issues that relate to our work securing the lives and livelihoods of the world's most vulnerable people.


Asked last week what keeps him up at night, Sundar Pichai, the chief executive of Google and its parent company Alphabet, said that his number one worry is that he doesn't really understand how AI chatbots work. Coming from the person in charge of a company racing to compete with Microsoft and others to roll out AI to the rest of us, this is a striking admission. If Mr Pichai doesn't really know how AI works, what hope is there for the rest of us? Meanwhile one of his vice-presidents of engineering – the British computer pioneer Geoffrey Hinton, who developed the rudiments of modern machine learning and is known as the 'godfather of AI' – quit his role at Google to warn that the point when computers become smarter than people is far closer than he had previously predicted.

It is axiomatic these days to say that the pace of change in AI, and its lack of any real regulation, have implications across every area of our lives and societies.

At the heart of these implications is a mismatch between the speed at which AI is developing and the speed at which we humans can adapt our thinking and societal institutions around it. Moore's law, used since the 1960s to project a doubling of computing processing power every two years, looks quaint by comparison with current estimates of a similar twofold increase every three months. And set against the speed of human evolution (fishhooks to fish fingers in a little under 60,000 years), our chances of keeping up look pretty slim.
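For a sense of scale, here is a back-of-envelope sketch in Python, taking both doubling cadences at face value as illustrative assumptions rather than established fact:

```python
# Illustrative arithmetic only: compounds the two doubling cadences cited above.
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months`, doubling every `doubling_period_months`."""
    return 2 ** (months / doubling_period_months)

for months in (12, 24, 60):
    moore = growth_factor(months, 24)  # Moore's law: doubling every two years
    rapid = growth_factor(months, 3)   # the three-month cadence cited above
    print(f"{months:>2} months: x{moore:,.1f} (Moore) vs x{rapid:,.0f} (3-month doubling)")

# 24 months: x2.0 vs x256; 60 months: x5.7 vs x1,048,576.
```

On these assumptions the gap is already two orders of magnitude within a single Moore's-law doubling period, and it only widens from there.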

This mismatch has given rise to more than a few dystopian visions: unchecked AI turning our weapons systems against us; a world of disgruntled idlers made redundant by computers doing their jobs better than they ever could; governments bankrupted as income tax evaporates in the face of a computerisation of the labour market; the need for human intimacy eroded as more and more people fall in love with their perfect chatbot.

Offsetting this dystopia is a world of new possibilities: a human civilisation free from the drudgery of repetitive, low-income labour; precise prediction and avoidance of natural disasters; accurate early diagnosis and treatment of disease. But when Elon Musk joins more than 1,000 AI experts in calling for a pause on developing systems more powerful than GPT-4, and with Italy moving to effectively ban ChatGPT, the cost-benefit calculation – or rather our inability to even begin to calculate it – clearly has a lot of people worried.

So what could this mean for international security and governance development?

We in the fragile states development community may not be at the vanguard of global computing, but we're certainly subject to it. So in the spirit not of prediction but of speculation, a few thoughts.

  1. Growing global inequality. The pace of AI development, and the critical role of AI itself in generating greater and greater AI functionality (computers making computers better), means that the ‘haves’ are going to have, and be able to generate, exponentially more AI technologies; while the ‘have nots’ are, well, not.
  2. Semiconductors as the 'new oil'. US export controls aimed at cutting China off from the advanced chips and chipmaking equipment essential to AI-level processing power are an early shot in a global race to secure, and monopolise, the raw materials of AI. In addition to being a central part of geopolitical conflict, as the world's most prized resources shift away from fossil fuels towards superconducting and rare-earth minerals like niobium and yttrium, new parts of the world will become prizes to be competed – and in some cases fought – for.
  3. Shifts in labour needs further impoverish the already poor. It's probably going to take a while before AI replaces the need for the high-level, high-earning people who invent, instantiate, own and manage the world's intellectual property. But it won't be long before much skilled labour becomes redundant, laying off large parts of the world's industrial labour base. Major job losses look set to further damage the ability of developing economies to provide sustainable livelihoods for their rapidly expanding populations.
  4. Middle class narrowing redefines what 'development' means. Countries like Indonesia have moved from low-income to middle-income status by generating both the demand for and the supply of a burgeoning middle class, which has itself generated powerful pressures for more representative government. But in a world which needs far fewer doctors, accountants and lawyers – large elements of whose jobs will be performed by AI in the foreseeable future – what will the social development pathway look like?
  5. Less money for overseas aid. As people work less and machines work more, the tax base in the developed world will change. Government revenues, even if those governments tax the machines themselves, will fall away, and governments will have to spend more money keeping less-occupied populations busy in social, educational or leisure activities. This could accelerate the already major and growing pressure to spend what money there is at home first.
  6. Analysis and planning done for us. A recent research programme in central Africa used AI to crunch mass data and assess population vulnerability to conflict. One finding it threw up was that people's vulnerability to conflict correlated with how close they lived to standing pools of water. The relationship was, of course, correlative rather than causal – likely because standing water brings malaria, vulnerability to malaria is tied to poverty, and poverty is in turn closely linked to conflict vulnerability (a confounding chain illustrated in the sketch after this list). But human analysis alone would have struggled to spot the correlation, and it won't be long before AI not only identifies but makes sense of such factors and determines the most effective and/or efficient response options. How long, in other words, before we development experts can simply rely on technology to tell us what to do?
  7. Another regulatory jungle. Like the internet, AI is set to be the subject of a massively complex regulatory effort, with major variations in how different jurisdictions see the issue and propose to address it. The British Government's AI White Paper indicates a light-touch approach that delegates much of the work to existing sector regulators, while the EU proposes a more directive and hands-on regulatory approach. If there is already clear blue water between the UK and EU approaches, divergence with the US and other allies, let alone Russia and China, is going to be oceans wide.
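As a footnote to point 6, here is a minimal, self-contained sketch – with invented numbers, not the central African programme's data – of how a hidden confounder such as poverty can make two causally unrelated variables, like water proximity and conflict vulnerability, correlate strongly:

```python
# Hypothetical illustration of a confounder producing a spurious correlation.
# None of these figures come from the study described above.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 10_000

# Poverty is the hidden driver (the confounder).
poverty = rng.normal(size=n)

# Poorer households are likelier to live near standing water...
water_proximity = 0.8 * poverty + rng.normal(scale=0.6, size=n)

# ...and poverty also drives vulnerability to conflict.
# Water proximity plays no causal role here at all.
conflict_vulnerability = 0.8 * poverty + rng.normal(scale=0.6, size=n)

# Yet the two are strongly correlated in the raw data (r is about 0.6):
r = np.corrcoef(water_proximity, conflict_vulnerability)[0, 1]
print(f"correlation(water proximity, conflict vulnerability) = {r:.2f}")

# Conditioning on the confounder makes the association vanish (r near 0):
residual_w = water_proximity - 0.8 * poverty
residual_c = conflict_vulnerability - 0.8 * poverty
print(f"after removing poverty's effect: {np.corrcoef(residual_w, residual_c)[0, 1]:.2f}")
```

The point is not the numbers but the shape of the mistake: exactly the kind of pattern an AI will surface instantly, and exactly the kind a human still has to interpret.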

In common with the many AI brain-dumps doing the rounds, this list is far from exhaustive, and certainly not predictive. All it indicates is that we probably need a conversation on how the defining issue of the 21st Century, and potentially many centuries thereafter, is relevant to securing the lives and livelihoods of vulnerable communities around the globe. We at TAG look forward to being part of that conversation.

Finally, having considered – in the traditional human way – the question of what AI means for security and governance development, we put the same question to a ChatGPT-powered app. The bot came up with a strikingly similar list, leaving us wondering exactly how much value we humans had contributed to the debate. But at the same time we found it rather reassuring. If it had told us everything's going to be fine – now that really would have been worrying.