Welcome to the second installment of our section dedicated to Artificial Intelligence.
As you know, in the first article we focused on defining this discipline and on the impact its spread is having in economic and innovation terms.
Today, before diving into the topic and outlining its applications, we should step back about sixty years and try to understand how Artificial Intelligence took its first steps, before it went on to shape our world and our way of life.
What are the origins of Artificial Intelligence?
Looking back, we need to return to the 1950s to find the decisive moments in the birth of Artificial Intelligence.
In 1950, the famous mathematician Alan Turing championed the idea that a computer could solve problems autonomously, much as a human being does. The term “Artificial Intelligence” only appeared in 1955, coined by John McCarthy, who proposed forming a group to analyze and study the topic during a conference planned for the summer of 1956 at Dartmouth College in Hanover, New Hampshire.
“We propose that a two-month, ten-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Nevertheless, several experts consider the American psychologist and computer scientist Frank Rosenblatt the father of cybernetics and Artificial Intelligence.
Rosenblatt’s studies led to the first neural network: the Perceptron, a network model with an input layer, an output layer, and an error-correction learning rule in between. By adjusting the synaptic weights of the network whenever the output was wrong, this mathematical rule progressively steered the output values toward the desired ones.
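The learning rule described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the Perceptron error-correction rule, not Rosenblatt's original implementation; the training data (the logical AND function, which is linearly separable) and the learning-rate and epoch values are chosen here purely for the example.

```python
# Minimal sketch of the Perceptron error-correction rule (illustrative).
# A single neuron with weights w and bias b learns the logical AND function.

def predict(w, b, x):
    # Step activation: output 1 if the weighted sum exceeds the threshold, else 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Error-correction: nudge the weights only when the prediction is wrong.
            error = target - predict(w, b, x)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])  # [0, 0, 0, 1]
```

For linearly separable data like AND, the Perceptron convergence theorem guarantees this procedure finds a correct set of weights in a finite number of updates.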
In the following years, however, two mathematicians, Marvin Minsky and Seymour Papert, showed that Rosenblatt’s model had serious limitations, owing to the limited computational capabilities of a single Perceptron. They understood that, to overcome these limitations and let a neural network tackle complex problems, it would be necessary to build a network composed of several Perceptrons. Although the intuition was correct, the two scholars ran into a hardware problem: at the time it was almost impossible to find an infrastructure capable of supporting such operations.
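The classic example of such a limitation is XOR, which Minsky and Papert showed no single Perceptron can compute, because its outputs are not linearly separable. Combining several Perceptrons in two layers solves it. The sketch below uses hand-chosen weights for illustration; they are not learned.

```python
# Sketch: XOR is impossible for one Perceptron, but a two-layer
# combination of Perceptron-style neurons computes it.
# Weights and biases are hand-chosen for illustration, not trained.

def neuron(w, b, x):
    # A Perceptron-style unit: step activation over a weighted sum.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def xor(x1, x2):
    h1 = neuron([1, 1], -0.5, (x1, x2))    # hidden unit computing OR
    h2 = neuron([-1, -1], 1.5, (x1, x2))   # hidden unit computing NAND
    return neuron([1, 1], -1.5, (h1, h2))  # output unit: AND of the two

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The catch, as the article notes, is that while stacking Perceptrons is easy on paper, training such multi-layer networks at useful scale was far beyond the hardware of the era.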
The real technological turning point came decades later, with the development of GPUs (Graphics Processing Units), which made far greater computing power available and cut network training times by roughly 10 to 20 times.
Thanks to GPUs, over the years it has been possible to keep gaining computing power, to manage many operations simultaneously, and to support the evolution of Artificial Intelligence up to the present day.
See you in the next article on our Blog!