There’s a lot of talk and fear about AI replacing humans, but on one key aspect – “thinking” – it still has a lot of catching up to do. Some may argue that using the term “Intelligence” to describe what it does, with that key attribute missing, is wrong. At best it is still “Automation”, or perhaps “Intelligent Automation”/“Smart Automation”.
I woke up this morning, like most other days, with lots of thoughts. I am sure most of you did too. What to eat, what to do, where to go, and on and on. Some of them appeared very focused, while others were a bit random. Most likely the random thoughts were not truly random either, but responses to some triggers or shaped by your mood. But the fact of the matter is – I had thoughts; we all had thoughts.
Can AI have thoughts? Does AI have thoughts?
Will AI have the capability to have random thoughts “play in its head”? How do you define these random thoughts in terms of a mathematical probabilistic model? They are linked neither to intuition nor to reason, and are not directly tied to a specific goal.
Just this morning I had a number of random thoughts about what I should do, wear, or eat, and they will form the basis of my future course of action. These random thoughts are often driven by “mood”. What is the equivalent of that for AI? Will a “happy” AI deliver different outcomes than a “sad” AI? Will those moods be driven by the AI’s own experiences, or be a human-specified “setting”?
Does AI have that sort of capability yet? Will it ever?
Is that what will distinguish AI from human intelligence – AI being directed towards a specific effort, while human intelligence is more random, unpredictable, and hence more creative?
Przemek had an interesting take on this:
I think about random thoughts as noise and it can be modelled in this way. Random thoughts we’re having are related to things we have seen and are not really random – they are just a noisy variant of our experiences, influenced also by how we feel.
With AI it makes sense to talk about noise, while “mood” can be thought of as “recent data you were trained on + state of your architecture” (i.e. = “recent experiences + state of your body”). This is the analogy I see, but artificial and human intelligence develop in totally different ways, so it is really hard to compare.
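Przemek’s analogy – random thoughts as noisy variants of past experiences, with mood scaling the noise – can be sketched in a few lines of code. This is purely a toy illustration of the idea, not any real AI architecture; the function name, the vector representation of “experiences”, and the mood scale are all assumptions made up for this sketch.

```python
import random

def random_thought(experiences, mood=0.5, rng=None):
    """Toy model: a 'random thought' as a noisy variant of a past experience.

    experiences: list of feature vectors (lists of floats) standing in for
                 things the agent has seen.
    mood: illustrative scalar from 0.0 (calm, little noise) to 1.0
          (agitated, very noisy) that scales the perturbation.
    """
    rng = rng or random.Random()
    base = rng.choice(experiences)  # thoughts start from something experienced
    # Gaussian noise with mood-dependent spread produces the "noisy variant"
    return [x + rng.gauss(0.0, mood) for x in base]

# Three made-up "experiences" as 2-D feature vectors
experiences = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]

# Same seed, different mood: the "thought" drifts further from its source
calm = random_thought(experiences, mood=0.1, rng=random.Random(42))
agitated = random_thought(experiences, mood=1.0, rng=random.Random(42))
```

With the same seed, both calls start from the same experience and draw the same underlying noise, so the “agitated” thought is simply a more heavily perturbed version of the “calm” one – which is the whole of Przemek’s point: the randomness is noise on top of experience, modulated by internal state.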
Especially since AI currently works only on narrow tasks, and it’s not clear when we will see a general-purpose AI.
So while the term “Intelligence” has been appropriated for Artificial Intelligence, creating some kind of equivalence with Human Intelligence, it still has a lot of ground to cover. Yes, it now has intuition and reasoning – it can beat humans (and even other computers) at chess (AlphaZero AI beats champion chess program after teaching itself in four hours) and even the more complex Go (‘Like A God,’ Google A.I. Beats Human Champ Of Notoriously Complex Go Game). But it still cannot “think”, at least not in the way humans do.
Food for “thought”? For humans, at least. For AI, not yet.