
By Wolfgang Arnold
Introducing the hype…
Gartner placed Machine Learning right at the "Peak of Inflated Expectations" in its Emerging Technologies Hype Cycle, while Natural-Language Question Answering is sliding down the slope towards the "Trough of Disillusionment". In 2003, Tim Menzies already took a longer-term view and located general Artificial Intelligence's (AI) "Peak of Inflated Expectations" in the mid-80s; according to that picture, we are now on the "Plateau of Productivity".
This may sound confusing, so let's try to reconcile these views and shed some light on a key AI technology used today: Artificial Neural Networks (ANNs). Starting with the hype cycle: there is probably a grain of truth in all pictures. On the one hand, various pattern recognition problems (speech, handwriting, images) are by now solved pretty successfully. Solutions are in wide commercial use, like Alexa or Siri for speech recognition, or face recognition in almost all recent photography applications; those tools are rather on the upward-sloping "Plateau of Productivity".
On the other hand, AI has always been prone to hype ("Significant AI breakthroughs have been promised 'in 10 years' for the past 60 years." [1]). So, there is not one single peak, but recurring peaks of inflated expectations, and after each peak some wheat is separated from the chaff. AI technology evolves step by step, or rather peak by peak.
As a DIY example: speech recognition, translating sound data into words, works surprisingly well. But have you ever received a really "intelligent" answer from Siri or Alexa (you can try Alexa in your browser, see the link list below)?
A closer look at ANNs: Did you know they're "old stuff"? The roots go back to the 1940s
Initial ideas of ANNs date back to the 1940s, when McCulloch and Pitts created the first algorithms to model neural networks. About 10 years later, Frank Rosenblatt developed the "perceptron", an early pattern recognition network. Then, for a few decades, progress was rather slow because no good training algorithm was available and computers were slow. In the 1980s, the backpropagation algorithm was introduced to train neural networks, and with increasing computing power (cf. Google's TPU) the doors were open for successful applications to pattern recognition problems like speech and image recognition.
How do they work? What makes them special?
While the details of ANNs go far beyond the scope of this article, it's worth at least scratching the surface a bit. The beauty of ANNs lies in the fact that the basic principles are quite simple, and that they are a completely different approach to problem solving compared to 'normal' computer programming and data processing.
Basically, an artificial neural network is a graph where "neurons" are the nodes and the weighted edges are the connections between the neurons. Each neuron sums the weighted signals arriving over its incoming edges, applies an activation function, and passes the result along its outgoing edges.
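To make this concrete, here is a minimal sketch of such a "neuron" in Python. The sigmoid activation and all numeric values are arbitrary illustrative choices, not prescriptions:

```python
import numpy as np

# One toy neuron: a weighted sum of its inputs, passed through an
# activation function (here: sigmoid). All numbers are made up.
def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) + bias   # sum over the weighted edges
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid squashes z into (0, 1)

x = np.array([0.5, 0.8])          # two input signals
w = np.array([0.4, -0.6])         # one weight per incoming edge
print(neuron(x, w, bias=0.1))     # the neuron's output signal
```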
Now, how do we get a meaningful output? Here, the weights and the learning algorithm play essential roles. A blank network can't do anything; just like a baby, it has to learn. Training a network requires a set of training data, e.g. a set of photos of cats and dogs with the right captions. Then the tough work starts: an image is fed into the network and the output is checked. Based on the error of the output, the weights are adjusted (typically using the above-mentioned backpropagation algorithm). This is done over and over again, until the error rate drops below the desired value.
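As a rough illustration of this training loop, here is a sketch using scikit-learn (listed in the tools below) and its built-in handwritten digit images; the layer size and iteration count are arbitrary choices:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale images of handwritten digits with the "right caption" (0-9).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 neurons; fit() feeds the training images through
# the network over and over, adjusting the weights via backpropagation.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_train, y_train)

print("accuracy on unseen digits:", net.score(X_test, y_test))
```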

So, while in a traditional approach to programming, knowledge about the problem and its solution is coded into an algorithm, with neural networks you don't need to worry about the details of how, e.g., image recognition works: you let the machine do the (training) work. Interestingly, artificial neural networks used for image recognition mimic to some extent the human (or mammal) visual processing (see e.g. the work by Margaret Livingstone [5]).
So far the dry theory. You can try neural networks in your browser: this example by Gene Kogan demonstrates the recognition of handwritten digits. When playing around a bit (just press the space bar) you may notice that some of the digits are super difficult to read (we humans make errors, too) and that the network sometimes classifies them wrongly. So, an ANN is not perfect either. You might also enjoy this tool for 'creating' cats from a few sketched lines.
We've seen that the basics are simple; yet 'growing' these simple building blocks into a useful network is an art in itself. There are many variables to consider and adjust, ranging from the network architecture to the learning approach, the types of neurons and, last but not least, the training data.
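To give a feeling for how these decisions become concrete knobs, here is a sketch of a small hyperparameter search, again with scikit-learn; the candidate values are purely illustrative:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Each knob below corresponds to one of the design decisions named above.
search = GridSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(32,), (64,), (64, 32)],  # network architecture
        "activation": ["relu", "logistic"],              # type of neuron
        "learning_rate_init": [0.001, 0.01],             # learning approach
    },
)
search.fit(X, y)
print("best configuration found:", search.best_params_)
```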
A lot can go wrong. Here are two examples where badly chosen training data turned the 'neutral' neural networks into not quite so benign bots:
- Microsoft's Tay adopted a very questionable political 'opinion' after learning from inappropriate tweets [10]
- Google's photo app was trained with too limited a set of examples and really badly misclassified some photos [11]
Note that the networks simply played back what they learned from the training data. If our own bias is reflected in the data, the network will "learn" it.
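This effect is easy to reproduce in a toy setting: the sketch below (reusing scikit-learn's digit images, an illustrative choice) starves the training set of examples of the digit 9, and the resulting network will typically fail to recognize most 9s:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate biased training data: keep only 3 examples of the digit 9.
nines = np.where(y_train == 9)[0]
keep = np.ones(len(y_train), dtype=bool)
keep[nines[3:]] = False

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_train[keep], y_train[keep])

pred = net.predict(X_test)
print("fraction of 9s recognized:", (pred[y_test == 9] == 9).mean())
```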
While all these accomplishments are astonishing, it's important to note that ANNs are trained for specific use cases, like classifying handwritten digits. Used in a slightly different context, they quickly fail, e.g. when they are fed negative images instead of positives (see [4]). The result of this experiment is actually no big surprise: the network was trained with positive images only, and readers who experienced the chemical age of photography will probably agree that it is indeed much harder to recognize a negative image than a positive one.
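A small-scale version of the experiment in [4] can be reproduced with the digit classifier sketched earlier. The pixel values range from 0 to 16, so subtracting them from 16 inverts the images; exact numbers will vary, but the accuracy on the negatives typically drops sharply:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_train, y_train)              # trained on "positives" only

X_negative = 16.0 - X_test             # invert the grayscale images
print("accuracy on positives:", net.score(X_test, y_test))
print("accuracy on negatives:", net.score(X_negative, y_test))
```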
Conclusion
So, with this short overview of AI in general and ANNs in particular: is AI (again) over-hyped? How realistic are the warning voices of today's thought leaders (Stephen Hawking and Bill Gates joining Elon Musk)?
The technology of ANNs has definitely taken a leap forward in recent years and will continue to shape technology and how we interact with it. Yet, with all the buzz and excitement around ANNs, the expectations are certainly inflated to some extent at the moment. Do we fail to recognize the threat amid all the hype? Probably not: just as the expectations are inflated, the worries are probably equally inflated. The challenge is rather to understand not only the technology but also the consequences of its use and application in our businesses and societies. So, stay human while machine learning!
Would you like to join the discussion? Join the AI Open Ecosystem Network project.
Need more inspiration?
Papers and further reading:
[1] History of AI: http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf
[2] 21st-century AI: Proud, Not Smug: http://www.menzies.us/pdf/03aipride.pdf
[3] Gartner Hype Cycle for Emerging Technologies: https://www.gartner.com/newsroom/id/3412017
[4] Deep Neural Networks Do Not Recognize Negative Images: https://arxiv.org/pdf/1703.06857.pdf
[5] Human visual processing (Livingstone Lab): https://livingstone.hms.harvard.edu
[6] Bill Gates on the dangers of AI: https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/?utm_term=.7241c53ae910
[7] Artificial neural network: https://en.m.wikipedia.org/wiki/Artificial_neural_network
[8] Backpropagation: https://en.m.wikipedia.org/wiki/Backpropagation
[9] A good summary on ANNs: http://www.turingfinance.com/misconceptions-about-neural-networks/
What can go wrong:
Online books:
Some libraries and tools - it has never been easier to use or even create your own ANN:
- TensorFlow, Google: https://www.tensorflow.org
- Object Detection API, Google: https://github.com/tensorflow/models/tree/master/object_detection
- Tensor Processing Unit (TPU), Google: https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
- scikit-learn, Machine Learning in Python: http://scikit-learn.org/stable/index.html
- BNNS, Apple: https://developer.apple.com/documentation/accelerate/bnns
- Machine Learning, Amazon: https://console.aws.amazon.com/machinelearning/home?region=us-east-1#/
- MetaMind, Salesforce: https://metamind.io/
- Abstractive summarization research, Salesforce: https://metamind.io/research/your-tldr-by-an-ai-a-deep-reinforced-model-for-abstractive-summarization
- Sensei, Adobe: http://www.adobe.com/de/sensei.html
- Excire: https://www.excire.com/
Demos / examples and fun stuff:
- Gene Kogan's demos: http://ml4a.github.io/demos/ (e.g. http://ml4a.github.io/dev/demos/mnist_forwardpass.html)
- Smile Vector (Demo): https://twitter.com/smilevector?lang=de
- Get a new haircut: https://github.com/ajbrock/Neural-Photo-Editor
- edges2cats, edges2shoes: https://affinelayer.com/pixsrv/
- Inceptionism: https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
- Alexa in a browser: https://echosim.io/
- Reddit: https://www.reddit.com/r/learnmachinelearning/