I hope everyone is enjoying the latest breakthrough in artificial intelligence (AI) as much as I am.
In one of the latest AI developments, a new computer program – DALL-E 2 – generates images from a text prompt. Give it the phrase “Club Penguin Bin Laden”, and it will go off and draw Osama as a cartoon penguin. For some, this was more than a bit of fun: it was further evidence that we shall soon be ruled by machines.
Sam Altman, chief executive of the now for-profit OpenAI, the company behind the model that underpins DALL-E, suggested that artificial general intelligence (AGI) was close at hand. So too did Elon Musk, who co-founded the venture with Altman. Musk even gave a year for when this would happen: 2029.
Yet when we look more closely, we see that DALL-E really isn’t very clever at all. It’s a crude collage maker, which only works if the instructions are simple and clear, such as “Easter Island Statue giving a TED Talk”. It struggles with more subtle prompts and fails to render everyday objects: fingers are drawn as grotesque tubers, for example, and it can’t draw a hexagon.
DALL-E is actually a lovely example of what psychologists call priming: because we’re expecting to see a penguin Bin Laden, that’s what we shall see – even if it looks like neither Osama nor a penguin.
“Impressive at first glance. Less impressive at second. Often, an utterly pointless exercise at the third,” is how Filip Piekniewski, a scientist at Accel Robotics, describes such demonstrations, and DALL-E very much conforms to this general rule.
Today’s AI hyperbole has got completely out of hand, and it would be careless not to contrast the absurdity of the claims with reality, for the two are now seriously diverging. Three years ago, Google chief executive Sundar Pichai told us that AI would be “more profound than fire or electricity”. However, driverless cars are further away than ever, and AI has yet to replace a single radiologist.
There have been some small improvements to software processes, such as the wonderful way that old movie footage can be brought back to life by being upscaled to 4K resolution and 60 frames per second. Your smartphone camera now takes slightly better photos than it did five years ago. But as the years go by, the confident predictions that vast swathes of white collar jobs in finance, media and law would disappear look like fantasy.
Any economist who confidently extrapolates profound structural economic changes – of a magnitude that affects GDP – from AI ventures such as DALL-E should keep those shower thoughts to themselves. This wild extrapolation was given a name by the philosopher Hubert Dreyfus, who brilliantly debunked the first great AI hype of the 1960s. He called it the “first step fallacy”.
His brother, Stuart, a true AI pioneer, explained it like this: “It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon.”
Today’s misleadingly named “deep learning” is simply brute-force statistical approximation, made possible by computers being able to crunch far more data than they once could in order to find statistical regularities or patterns.
AI has become good at mimicry and pastiche, but it has no idea what it is drawing or saying. It is brittle, and breaks easily. And over the past decade it has got bigger but not much smarter, meaning the fundamental problems remain unsolved.
Earlier this year the neuroscientist, entrepreneur and serial critic of AI, Gary Marcus, had had enough. Taking Musk up on his 2029 prediction, Marcus challenged the founder of Tesla to a bet. By 2029, he posited, AI models like GPT – which uses deep learning to produce human-like text – should be able to pass five tests. For example, they should be able to read a book and reliably answer questions about its plot, characters and their motivations.
A foundation agreed to host the wager, and the stake rose to $500,000 (£409,000). Musk didn’t take up the bet. For his pains, Marcus has found himself labelled as what the Scientologists call a “suppressive”. This is not a sector that responds well to criticism: when GPT was launched, Marcus and similarly sceptical researchers were promised access to the system. He never got it.
“We need much tighter regulation around AI and even claims about AI,” Marcus told me last week. But that’s only half the picture.
I think the reason we’re so easily fooled by the output of AI models is that, like Agent Mulder in The X-Files, we want to believe. The Google engineer who became convinced his chatbot had developed a soul was one such example, but it is journalists who seem to want to believe in magic more than anyone.
The Economist devoted an extensive 4,000-word feature last week to the claim that “huge foundation models are turbo-charging AI progress”, but ensured the magic spell wasn’t broken by quoting only the faithful, and not critics like Marcus.
In addition, a lot of people are doing rather well as things are – waffling about a hypothetical future that may never arrive. Quangos abound: the UK’s research funding body, for example, recently threw £3.5m of taxpayers’ money at a programme called Enabling a Responsible AI Ecosystem.
It doesn’t pay to say the emperor has no clothes: the courtiers might be out of a job.