Why AI Feels So Big (Sold With AI - Edition 15)
Most people seem to think we have reached a 'this is it' moment in AI. I don't think we have. In this post, I share (some of) my reasoning.
(Here and there, I take a break from talking about the intersection of Sales and AI to talk about AI topics that are on everyone’s mind. In this edition too, I take up a question which I believe is on everyone’s mind. Including mine. And yours.)
It’s been more than 2 years since ChatGPT took the world by storm, but the AI enthusiasm shows no signs of abating. From my founder friends to VCs to politicians to even the common man on the street, everyone has AI on their lips.
So why exactly does AI feel so big right now?
If you read my introduction in the first edition of this newsletter, you'd know that I have been an AI aficionado for 25+ years now. There are arguably only a hundred or two people out there who have walked the AI walk longer than I have. Why then, in room after room, do I come across as the AI pessimist?
(Hint: I am not. Far from it. Even fairly recently, I have called it a bigger change than even the internet.)
So is it really big or is it not? Well, as always, the answer is nuanced. I believe it is indeed a very big change (just like I said above), but it is not as big as everyone seems to think it is.
To help you understand what I mean, let me tell you a small story.
Some time ago, humans did not know the distance from the earth to the moon. Several groups of scientists from a really famed university (of its time) thought that it was possible to land an object on the moon and decided to take it up as a challenge.
The first group launched an object that went up around 100m. Everyone laughed at them.
The next group managed to launch an object that went up around 1 km. People still laughed, but they were surprised, as they hadn't seen anything go up that high before.
The next group actually managed to get an object as high as 10 km. It disappeared from people's view. For a moment, it seemed to the villagers around that this group had really managed to make it to the moon. But soon enough, unfortunately, they came to know that it hadn't.
Now the next group of scientists boasted some serious calibre. When they launched their object, it just went and went. It reached a height of 3,000 km, 300X higher than any earlier object had ever managed to reach!
Everyone lost their minds. They thought they had done it. Even some of the scientists felt that they had. The leader of the scientists started going around the block, blowing his trumpet and beating his chest.
Just then, one of the other scientists managed to calculate the distance to the moon. And they realized they were staring at a strange reality - they had reached a height that had been deemed almost impossible, one that truly felt like reaching the moon to all the villagers (and even to many of the scientists).
But in reality, how much of the way had they really gotten?
Not even 1%. (The distance to the moon is around 384,000 km, so 3,000 km is less than 0.8% of the way.)
Now this is a made-up story. And I am not saying that AI today is only 1% of the way to becoming AGI. Quite possibly, it has made more progress. I definitely think AGI is now possible. (Very likely, even inevitable.)
Then why does it feel to so many people as if we have 'done it'? Even to many who know more than the proverbial villagers in the story above?
I think it is a combination of two factors: almost everyone's lack of a reference frame, and 99.999% of people's lack of detailed knowledge about the internal architecture (and limitations) of LLMs.
We thought the Turing Test provided a reference framework to determine what can be called AGI, but we blew past it and realized it was not even a credible benchmark.
We then thought the ARC prize could act as a benchmark. In all probability, we are about to blow past that too.
On the other hand, there are also very few people who understand LLMs' architectural limitations. Francois Chollet, one of the leading voices on AI and the creator of the ARC Prize, goes so far as to categorically state that LLMs will not lead to AGI.
Even the Nobel Laureate Demis Hassabis has put the odds of AGI developing by 2035 at a cautious "reasonable chance". Which, to me, sounds, well, reasonable. And also a far cry from proclamations like this one by Sam Altman and others who are more in the business of selling AI.
From history to philosophy, we can look at this through many other lenses, and a smart person will likely reach the same conclusion. History provides enough examples to show that this is not the first time a technological discovery has made people go 'this is it'. And philosophy provides enough reasons to show why any utopian state or utopian technology is practically impossible.
However, those are topics for another day. The world is changing very fast and it is hard for anyone to make reliable predictions at this time. Myself included.