Artificial Neural Networks (ANNs) were inspired by the architecture of the biological brain. They are the core of Deep Learning: powerful, versatile, and scalable, making them ideal for tackling large and highly complex machine-learning tasks.
Surprisingly, ANNs have been around for quite a while: they were first introduced back in 1943 by the neurophysiologist Warren McCulloch and the mathematician Walter Pitts, who presented a simplified computational model of how biological neurons might work together to perform computations using propositional logic. It was the first artificial neural network architecture.
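To make the idea concrete, here is a minimal sketch in Python (an illustration of the general threshold idea, not code from the 1943 paper) of such a unit: a neuron that fires when enough of its binary inputs are active, which is sufficient to implement logical operations like AND and OR:

```python
def mcp_neuron(inputs, threshold):
    """McCulloch-Pitts-style unit: outputs 1 if the number of
    active binary inputs meets the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# A 2-input unit with threshold 2 behaves like logical AND;
# with threshold 1 it behaves like logical OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", mcp_neuron([a, b], 2),
                    "OR:",  mcp_neuron([a, b], 1))
```

Chaining such units together yields networks that can compute more elaborate logical expressions, which is the sense in which McCulloch and Pitts connected neurons to propositional logic.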
ANNs' early success led to the widespread belief that we would soon be conversing with truly intelligent machines. But in a twist of fate, when it became clear that this promise would not be fulfilled, funding went elsewhere and ANNs entered a long winter.
In the early 1980s there was a revival of interest in neural networks, as new architectures were invented and better training techniques were developed. Still, progress was slow, and by the 1990s other powerful machine-learning techniques had been invented, such as Support Vector Machines, which offered better results and stronger theoretical foundations than ANNs. Once again, the study of neural networks entered a long winter.
Today we are witnessing yet another wave of interest in ANNs. Will this wave die out as the previous ones did? We will find out in the next part of this topic.