In an earlier blog article I wrote about how human intelligence differs from artificial intelligence: human intelligence is general, while artificial intelligence is specialized. The article provides “food for thought” for those who fear technology evolution, and specifically AI. In today’s article I offer more reflections on the evolution of AI.
Milestones in Artificial Intelligence Evolution
Put simply, AI is about Thinking Machines. The English computer scientist Alan Turing was the first academic to propose considering the question “Can machines think?”, in 1950. An important aspect of making computers think is enabling them to understand and interact in natural (human) language. Joseph Weizenbaum’s ELIZA (1966), developed at MIT, was an early successful implementation of such software: it allowed users to undergo a psychiatric interview conducted by ELIZA itself. In the 1980s interest in AI grew, and progress was made in what were called “knowledge-based systems”, “fuzzy logic” and “neural networks”. The latter is, simplified, the machine equivalent of how human brains are wired. By the end of the 1990s IBM’s Deep Blue computer defeated the world chess champion Garry Kasparov at what people consider a game that requires human intelligence: chess. IBM’s effort in AI continued, and another milestone was reached in 2011 when IBM’s cognitive computing platform Watson won the Jeopardy! quiz show, playing against its best human winners.
These examples (ELIZA, Deep Blue and Watson) are not random. I consider them to represent three phases in the evolution of Artificial Intelligence. ELIZA was the first chatbot, allowing users to have a seemingly intelligent conversation with a machine, by understanding semantics and basic sentence structure (short sentences) within a specific domain. Deep Blue represented a breakthrough in computers being able to assess a large number of scenarios within a structured problem domain. Watson was a breakthrough in the ability of a computer to process, understand and analyze large quantities of unstructured text.
Factors Contributing To The Progress in AI
But how was this progress achieved? Did computers (or in fact: software) really become more intelligent? To a certain extent one can argue so, because today’s algorithms are more complex than in the past. But in order to understand what happened, let’s look more closely at a couple of developments.
Introducing Moore’s Law
Imagine that within two years you could double the amount of work that you do per day. Two years later, you double it again. Within 10 years you would be able to do 32 times the amount of work you did initially. Not possible? Moore’s law is the observation (an empirical fact, not a theory) that the number of transistors on integrated circuit chips doubles about every two years, which in turn roughly doubles computer processing power every two years. This rate of growth in computational power has held steady since Gordon Moore first published an article with this observation in 1965.
Moore’s Law Fuels AI Progress
The computational power of integrated chips, and thus of IT systems, increased by a factor of 2^26 (67,108,864) between 1965 and 2017. If you think it unfair to compare with the 1960s (too early in IT development), compare 2017 to 1991: computational power increased by a factor of 2^13 (8,192). Imagine that in 2017 you could do 8,192 times the amount of work you could do in 1991. IT systems can perform so many calculations per second that the speed by itself may lead humans to consider computers intelligent. Going back to the Deep Blue example: Deep Blue could evaluate so many potential “what if” scenarios (i.e. chess moves) so quickly that it could determine which moves were better than others. Does that mean that Deep Blue was “intelligent”, or just very fast?
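The growth factors above follow directly from the doubling rule. A minimal sketch in Python, assuming a two-year doubling period (the function name and parameters are illustrative, not from any standard library):

```python
def moores_law_factor(start_year: int, end_year: int, doubling_period: int = 2) -> int:
    """Growth factor implied by doubling every `doubling_period` years."""
    doublings = (end_year - start_year) // doubling_period
    return 2 ** doublings

# The factors quoted in the article:
print(moores_law_factor(1965, 2017))  # 2^26 = 67108864
print(moores_law_factor(1991, 2017))  # 2^13 = 8192
print(moores_law_factor(2007, 2017))  # 2^5  = 32 (the "10 years" example)
```

Exponential growth is what makes these numbers so striking: each doubling period multiplies everything that came before.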
Data Proliferation Enables AI
Data is the petrol on which a software system runs. As computers can do more calculations per unit of time, they need data to fuel those calculations. With the proliferation of automation, sensors and connectivity, information today is broadly available. Software nowadays has much more data with which to perform “intelligent” calculations, allowing it to draw conclusions that were not possible before: not just because there was insufficient computational power, but because there was no data, insufficient data, or data that was not available in a timely, computer-interpretable way.
Thus it is the combination of data and processing power that forms a catalyst for AI.
If you had access to much more information than you currently do (data proliferation), and you could read, interpret and analyze the data much faster than you currently do (Moore’s Law), you might think of yourself as a “super human”. Would it make you more intelligent? This might be a philosophical question.
The evolution of AI of course goes further than what I discussed in this article. The article does not aim to provide a full picture of AI; it makes certain simplifications, and aims to make readers think about what Artificial Intelligence is, and about what we consider to be “intelligent” IT systems.
Suggested Reading: How Human Intelligence Differs From Artificial Intelligence