Simple Explanations

How Do Computers Learn? (And what it means for business, medicine and beyond)

Big Data is all the rage these days. It’s used in medicine to identify disease-causing genes. It’s used by companies like Facebook to extract useful information from user data. We hear it so much that it has become a buzzword at corporate parties. “I am a big data specialist at my firm.” “Really? What do you do?” “I input data into Excel.”

It can be confusing wading through the hype, so I’ll attempt to give a very high-level and simplified explanation of a model called Artificial Neural Networks.

Imagine you’re trying to teach a baby what a motorcycle is, but there’s a catch: the baby has never seen ANYTHING in his life except basic shapes. His mind is, as John Locke would call it, a “tabula rasa”, a completely blank slate. How would you do it? Maybe you’ll describe it to him. “It has two round shapes at the base. It has a triangular body. It has two cylinders on either side that you can hold on to.” Feeling adventurous, you ask him to point out a motorcycle. “Here!” He proudly shows you:


You lament at failing this deceptively simple task. Maybe you’ll try another approach, an inductive method. You show the baby pictures of motorcycles. One or two wouldn’t do. You’ll have to show the poor wretch hundreds of motorcycle pictures before he gets it. By the end of it, both of you are exhausted, and the little twerp still confuses motorcycles and bicycles half the time. You give up.

Fortunately, computers can process thousands of images before you have the chance to say “I quit”, and teaching a computer to recognize images is very much like using the inductive approach to teach a baby. This is where neural networks come in. Various pieces of the algorithm came about in the mid-1900s but fell out of favour due to limitations in computing power. When backpropagation, an important piece of the puzzle, was popularized in the 1980s, neural networks became hip again. Today, variations of the same model are used to accomplish tasks from speech recognition to predicting consumer tastes.

A Biologically Inspired Algorithm

Neural networks are a drastically simplified imitation of the biological nervous system. In our bodies, specialized cells called neurons make up the nervous system. Each neuron is connected to many others, and communication between neurons via chemical signals shapes our thoughts and feelings. If you think this description is simplistic, it’s because it is. However, to understand neural networks, it is sufficient.

Biological neurons

In a neural network, we have nodes as “neurons”. These “neurons” are simply logical units that do something math-y to a piece of data. For example, if you fed a network images, a node might take one pixel in that image and apply a mathematical function to it. The transformed pixel then becomes the input (analogous to a biological signal) to another node, and so on until we get our desired output.
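To make “something math-y” concrete, here is a minimal sketch of one such node. The pixel values, the weights, and the choice of a sigmoid squashing function are illustrative assumptions, not the workings of any particular network:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming values, plus a bias term...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...then a "squashing" function (here, a sigmoid) maps the result
    # into the range (0, 1) before it is passed on to the next node.
    return 1.0 / (1.0 + math.exp(-total))

# Three made-up pixel intensities with made-up weights:
signal = neuron([0.2, 0.9, 0.5], [1.5, -0.8, 0.3], bias=0.1)
print(signal)  # a value strictly between 0 and 1
```

In a real network the weights are not hand-picked like this; they are learned from examples, which is what the rest of this article describes.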

A neural network is arranged in several layers of nodes. I’m going to take you through each layer to help you understand what it does. Let’s use our motorcycle recognition problem as an example. Our goal is to show the network tens of thousands of motorcycle images so that it is eventually able to recognize what a motorcycle is.

An artificial neural network with one hidden layer

The first layer is called the input layer. It simply takes in the data in a discrete format. For us, this means each motorcycle image is broken into pixels. The next layer is called the hidden layer, and a network may have several of these (multiple hidden layers are what make a network “deep”, as in “deep learning”). Each hidden layer extracts some characteristic or feature from the image; the first hidden layer may extract lines, the next one basic shapes, and so on. By the time the data reaches the last layer, called the output layer, we can ask the network to do several things: we can ask it to reproduce the original image (in which case the network is called an Autoencoder), or we can ask it to tell us whether the image is a motorcycle or not (the network is then called a Classifier).

That was dense! Let’s work through one image of a motorcycle. Say we have a tiny greyscale image with 784 pixels (a 28-by-28 square). We use 784 nodes in our input layer to take in those pixels. We apply a mathematical function to each pixel and pass the results to a hidden layer. This layer extracts the lines in our image; for example, a motorcycle may have 10 lines lying at a 13.5-degree angle. After this is a second hidden layer. This one extracts objects: a wheel, an exhaust pipe, a windscreen. The transformed result from this layer then goes into our output layer, and the network interprets it as…not a motorcycle.
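As a toy illustration of that walkthrough, the code below pushes a made-up 784-pixel image through one hidden layer of 16 nodes to a single output score. The layer sizes and the random, untrained weights are arbitrary stand-ins for everything a real network would learn:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    # Every node in the layer takes a weighted sum of ALL the
    # incoming values, then squashes it with a sigmoid.
    return [1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
            for ws, b in zip(weights, biases)]

# A 784-pixel "image" feeding 16 hidden nodes, then 1 output node.
pixels = [random.random() for _ in range(784)]  # input layer
w1 = [[random.uniform(-0.1, 0.1) for _ in range(784)] for _ in range(16)]
w2 = [[random.uniform(-0.1, 0.1) for _ in range(16)]]

hidden = layer(pixels, w1, [0.0] * 16)  # hidden layer: 16 extracted "features"
score = layer(hidden, w2, [0.0])[0]     # output layer: "is it a motorcycle?" score
print(score)  # with untrained weights, this is essentially a coin flip
```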

A neural network that detects faces, showing what characteristic each layer extracts.

Why not? If you show a baby just one picture, you don’t give him much of a chance to learn. Similarly, we need to show our network more examples and give it feedback each time. This feedback is given via a cost function, a number indicating the level of error the network is making. The goal of “teaching” the network is to minimize this error.

So our network compares what it got (“not a motorcycle”) with what we gave it (“a motorcycle”), and it realizes it has made a mistake. It “backpropagates” this error to find out which layers were responsible for it, and makes adjustments accordingly. When the next image of the motorcycle comes, it does the same thing and makes a slight adjustment again. By the ten thousandth example, our network is fairly accurate at detecting motorcycles.
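That adjust-and-repeat cycle can be sketched for a single node with one input. Everything here (the four labelled examples, the learning rate, the number of passes) is made up for illustration, but the error-driven nudging is the same idea that backpropagation applies across a whole network:

```python
import math

def predict(x, w, b):
    # One node: weighted input plus bias, squashed by a sigmoid.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Made-up training set: a single feature x and a label y
# (1 = "motorcycle", 0 = "not a motorcycle").
examples = [(0.9, 1), (0.8, 1), (0.1, 0), (0.2, 0)]

w, b = 0.0, 0.0
learning_rate = 0.5
for _ in range(2000):                    # show the examples again and again
    for x, y in examples:
        error = predict(x, w, b) - y     # the feedback: how wrong were we?
        w -= learning_rate * error * x   # nudge each parameter to shrink the error
        b -= learning_rate * error

print(predict(0.9, w, b))  # now close to 1
print(predict(0.1, w, b))  # now close to 0
```

After enough passes, the node's guesses line up with the labels; in a full network, backpropagation works out how much each weight in each layer contributed to the error so that every weight can be nudged the same way.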

Neural Networks in Business and Medicine

The Artificial Neural Network is one model within the vast field of Machine Learning, and several Machine Learning techniques together make a powerful combination. Skype’s new translate function processes spoken words into other languages in real time. Microsoft’s Cortana can tell the difference between a Pembroke Welsh Corgi and a Cardigan Welsh Corgi, a feat a layperson finds challenging. The handwriting recognition software in Apple products, Amazon’s product recommendation system, and email spam filtering all use machine learning to generate insight out of vast quantities of data.

A Pembroke Welsh Corgi (left) and a Cardigan Welsh Corgi (right)

As an example, here’s an explanation posted on Gizmodo of how Google Voice transcribes spoken language: “The neural network chops up the speech it’s hearing into short segments, then identifies vowel sounds. The subsequent layers can work out how the different vowel sounds fit together to form words, words fit together to form phrases, and finally infer meaning from what was just mumbled.”

IBM’s Watson analyzes the tone of an email.

Similar advancements are happening in medicine. In a paper published in The Lancet, researchers compared prognostic predictions for colorectal cancer patients using a neural network against those made by surgeons. After training the network with 334 patients using 42 variables each (logically equivalent to 334 images with 42 pixels each), they found that the network had a 90% accuracy, exceeding the surgeons’ accuracy of 79%. On the diagnostic side, researchers are using neural networks to improve X-ray, ECG, and even EEG accuracy.

Despite their success, neural networks are not perfect. Just months ago, Google Photos accidentally identified a photo of two black individuals as “gorillas”. (“We’re appalled and genuinely sorry that this happened,” said a Google spokesperson.) This debacle gives us some insight into the limitations of neural networks. Like a baby, these networks tend to make snap judgments based on observable features. In a way, neural networks always judge a book by its cover.

Sometimes, we can exploit a network’s bluntness for fun. For example, if you train a network on pictures of animals and then ask it to enhance features in another picture, it will attempt to find animal faces where none exist. Google tried this, and the result was eerie:


In another instance, Andrej Karpathy trained a neural network on Shakespearean texts, and asked it to create its own play. The result was surprisingly meaningful:


At the end of the day, neural networks are still a far cry from an actual human brain. They may do some tasks better, ones that involve finding patterns in large amounts of data. Yet for them to understand the meaning and context behind what they see is an entirely different matter. Perhaps, one day, machines will be able to create sonnets and explain love better than we can. But until that day, we can take comfort in our ever-shrinking sense of humanity.

