Over the years, artificial intelligence has had its ups and downs. The downturns are called A.I. winters, and at their root lies nothing other than fragile human nature: overly inflated expectations and naivety.
But first things first: everything starts with the simple computer. The idea of thinking machines appeared soon after its invention. One of its first appearances was in Alan Turing's 1950 essay “Computing Machinery and Intelligence,” which includes a description of the famous Turing test.
In those days, scientists predicted that the emergence of genuinely thinking machines was only a matter of time. The thesis rested on the assumption that since machines are capable of performing complex algebraic operations, and this ability is commonly understood as a sign of intelligence, then “mastering” simpler operations should not be a problem for them.
In 1958, the first perceptron (the simplest building block of a neural network) was designed. A.I. seemed near. As it turned out, it was rather nearly A.I.
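To give a feel for how simple that first building block was, here is a minimal sketch of a Rosenblatt-style perceptron in plain Python. The function names and the AND-gate training data are illustrative choices of mine, not anything from the article; the point is that a single “neuron” with a handful of weights can only learn linearly separable rules.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single perceptron via the classic error-correction rule."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Step activation: fire (1) if the weighted sum exceeds 0.
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - output
            # Nudge each weight in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    """Apply the learned linear decision boundary to one input vector."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Logical AND is linearly separable, so a single perceptron can learn it.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
```

That this trivial learner looked like a step toward thinking machines explains some of the era's optimism; famously, a single perceptron cannot even learn XOR, one of the limitations that fed the first A.I. winter.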
As we all know, it didn’t happen, and by all indications, it won’t happen anytime soon. The explanation is relatively simple, yet not intuitive.
Tasks that are easy for humans are challenging for machines, and vice versa. It seems a bit illogical, doesn’t it? This phenomenon has a name: Moravec’s paradox.
Why have we failed?
As mentioned above, the main problem was the intuitive perception of artificial intelligence. In this mode of reasoning, intelligence is somehow equated with “wisdom,” identified with the extraordinary feats the human brain can perform. To some extent that’s true: the most intelligent people can perform complicated, sometimes abstract, reasoning and calculations.
Indeed, the most intelligent people can be proud of the high computational power of their brains. Yet it doesn’t mean that those who can’t are somehow not human and not “thinking,” right? And yet this is what we assume when imagining thinking machines: robots capable of conducting a conversation, walking, or giving advice, acting like living beings.
So what’s the matter with humans?
The leading explanation, though not the only one, is the long stretch of evolution behind basic human skills. We underestimate the fact that every single person who has lived on earth since the dawn of mankind has contributed to the development of these basic brain skills. At the same time, we underestimate how critical they were for humanity’s progress, eventually leading us to a state where we no longer use our brains just to survive and have time for inventions like math.
These skills are:
- Face recognition (including facial expressions and their interpretation)
- Object recognition
- Motor skills
- Understanding natural language
The last one is particularly interesting because, to me, it is pure artificial intelligence implemented in our minds. Sometimes we don’t know why, but we feel that something is, for example, a bad idea, or that one option is better or worse than another. Why do we sense that someone is not trustworthy?
Why do we know that?
Well, we don’t know, because it’s just intuition. It can often be tricked and lead us astray; however, intuition has a basis, which is simply our life experience.
That’s why, in general, it is easier to trick children than adults. After years of living in society, we have unconsciously learned the signals that nudge our brains with subtle alerts.
Disclaimer: a general lack of computing power and wrong assumptions in designing A.I. were also reasons for the failure. Moreover, just like humans, computers need data to develop their “intelligence.” Our brains readily absorb the information that surrounds us, but computers must be fed digital data, and this has only been available in sufficient quantities for a relatively short time.
Why did I mention that it’s pure A.I. installed on our biological hardware? Because this is how neural networks, especially deep neural networks, work. They work, yet we don’t really know why they make the decisions they do. Their inner workings are too complex, and each decision is based on a tremendous number of interacting signals.
Implementing these skills in a computer requires a lot of computing power. We, on the other hand, perform them unconsciously. Over the millennia, these skills have developed and become so entrenched in our “operating system” that we see them as trivial, basic, not worthy of attention. They are like a muscle we’ve been training for ages.
Still not convinced?
A perfect yet tricky example of such a skill is facial recognition. Consider this: recognizing faces comes easily to us… until we have to distinguish the faces of people from an unfamiliar ethnic group. Our brains are well trained, but on a given set of data: the faces, or types of faces, we’ve seen in our lives. This skill was crucial for telling your mates apart; other patterns weren’t that important.
We are damn good at recognizing faces, but as you can see, once the environment changes, this skill is no longer as obvious and natural as we thought.
Back to the high-level skills
“High-level” skills, on the other hand, are relatively new to our brains. They are processed in the higher regions, usually the cortex, which are evolutionarily younger. In terms of raw computational power, these tasks are not really demanding; we simply process them on weaker, less-trained hardware, and that’s why we perceive them as challenging.
These high-level skills include:
- Quantum physics
- Solving abstract problems
- Playing games like Go or chess
- Processing tons of data at the same time, etc.
Intelligence is something much more complicated and vague than we might think at first. And indeed, we should treat human and machine intelligence differently.
It’s worth mentioning here that computers are tools, and so should A.I. be: a device that helps us expand our abilities. Usually, the abbreviation A.I. is expanded as Artificial Intelligence. Still, I prefer the expansion Augmented Intelligence, and that’s how we should think about every A.I.-fueled device, piece of software, or machine that surrounds us.
What we are really trying to develop when we think about “thinking machines,” and what scientists were imagining in the ’50s, is nothing more than “conscious machines.”
I’m not sure if this idea excites or scares me more.
Digital marketer and copywriter, experienced and specialized in AI, design, and digital marketing. Science and holistic-approach enthusiast, after-hours musician, and sometimes actor. LinkedIn