Poison or panacea? How Kenneth Cukier views the 'threat' posed by AI

Climate change. Pandemics. These are just two potential existential threats where AI could offer a solution, according to Kenneth Cukier. A senior editor at The Economist, Cukier claims AI is not just a promising development; it's a necessary one. That's how his guest lecture at Imperial College began, and there were many interesting points thereafter, including his take on the level of fear that plagues discussions around this technology.

Here are the highlights.

1) The threat to jobs is overblown

We've all seen news stories about how AI is posing a threat to our jobs, such as this BBC article from 20 June, which claims half of all jobs will be lost to AI within 45 years. Cukier argues that the number of jobs on the line is exaggerated. For example, he pointed out that some of the more pessimistic reports classified salespeople and baristas as particularly vulnerable, but rightly highlighted the importance of a human touch in these roles. There is no doubt roles will change, but Cukier claims this could allow staff to direct more of their efforts towards human-facing activities.

2) If we can handle nuclear weapons, we can handle AI

Cukier also drew parallels between AI and nuclear weapons, which have existed for some 70 years. Nuclear technology obviously has a catastrophic destructive capacity, an argument also levelled against AI. Yet, at the time of writing, global nuclear war remains unlikely. Cukier argued that, just as we have built institutions around nuclear weapons – such as non-proliferation bodies and treaties – we can build them around AI to temper any negative effects.

3) Exploit personal data

The privacy concerns cited by AI sceptics are also a bone of contention for Cukier. This is primarily because he envisages a world where personal data could be used as a currency in a marketplace. Organisations such as Amazon, Facebook and the NHS already hold our personal data, so Cukier questions why we don't allow wider access to this information. He went on to compare keeping our personal data private to shoving cash into a mattress.

But the potential for financial gain wasn't the only reason Cukier criticised the sceptics on personal data. He believes current privacy laws stifle our ability to use these technologies for good, and potentially to reduce human suffering. For example, he pointed out that a hospital in the US which used a heuristic to predict the likelihood of patients contracting sepsis was 10 per cent more effective at making those predictions. Yet privacy laws prevent the heuristic from being shared, because the variables it takes into account could identify individuals.

To say that Cukier spoke with conviction about privacy is an understatement. He ended by railing against a "Trump-like" aversion to empirical evidence when it comes to the use of personal data. Resisting data usage is not quite the same as denying humanity's role in climate change, but by lumping the two positions together Cukier made his criticism explicit.

So could AI be the threat some make it out to be?

I'm unsure whether AI is as innocuous as Cukier makes it out to be, but his unbridled enthusiasm and somewhat controversial position on privacy made for a riveting talk. What is clear is that AI will have a significant effect on our lives in the years to come, whether we are sceptical or overwhelmingly positive about it, so building the institutions needed to harness the best of AI and mitigate the worst seems sensible.

The author

Danny is a senior account manager based in the Manchester office.
