
Learn neural networks in four steps: a simple guide


This time I decided to study neural networks. I picked up the basics over the summer and autumn of 2015. By basics, I mean that I can now build a simple neural network from scratch. In this article I will share a few explanations and the resources you might need in order to learn.

Step 1. Neurons and forward propagation

So what is a neural network? Let’s hold off on that question and first deal with a single neuron.


A neuron is like a function: it takes several input values and returns one output.

The circle below denotes an artificial neuron. It receives 5 and returns 1. Its input is the sum over the three synapses connected to the neuron (the three arrows on the left).

On the left side of the picture, we see two input values (green) and a bias (highlighted in brown).

The input values can be numeric encodings of two different properties. For example, in a spam filter they could indicate the presence of more than one word written in CAPITAL LETTERS, and the presence of the word “Viagra”.

The input values are multiplied by their so-called “weights”, 7 and 3 (highlighted in blue).

Now we add the weighted values to the bias and get a number, in our case 5 (highlighted in red). This is the input of our artificial neuron.

The neuron then performs a computation and produces an output value. We got 1, because the rounded value of the sigmoid at point 5 equals 1 (we’ll talk more about this function later).

If this were a spam filter, an output of 1 would mean that the neuron marked the text as spam.
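The neuron described above can be sketched in a few lines of Python. The weights 7 and 3 come from the article’s picture; since the picture is not reproduced here, the concrete inputs and bias below are hypothetical values chosen so the weighted sum equals 5:

```python
import math

def sigmoid(x):
    # Squashes any real number into the interval (0, 1)
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias, passed through the sigmoid
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Weights 7 and 3 are from the article; inputs [1, 0] and bias -2
# are hypothetical, picked so that 7*1 + 3*0 + (-2) = 5
output = neuron(inputs=[1, 0], weights=[7, 3], bias=-2)
print(round(output))  # sigmoid(5) ≈ 0.993, which rounds to 1
```

Rounding the sigmoid’s output turns the neuron into a yes/no classifier, which is exactly the spam/not-spam decision described above.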

If you connect such neurons together, you get a feedforward neural network: the process flows from input to output, through neurons joined by synapses, as in the picture on the left.
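A feedforward pass is just the single-neuron computation applied layer by layer. Here is a minimal sketch; the layer sizes, weights, and inputs are all hypothetical, since the article’s picture is not reproduced here:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output neuron applies the sigmoid to its own
    # weighted sum of all the layer's inputs
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical 2-3-1 network: 2 inputs, a hidden layer of 3 neurons, 1 output
hidden = layer([0.5, 0.9],
               weights=[[0.8, 0.2], [0.4, 0.9], [0.3, 0.5]],
               biases=[0.1, 0.1, 0.1])
output = layer(hidden, weights=[[0.3, 0.5, 0.9]], biases=[0.1])
print(output)  # a single value in (0, 1)
```

The data flows strictly forward: each layer’s outputs become the next layer’s inputs, which is what makes the network “feedforward”.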

I highly recommend watching the video series from Welch Labs to deepen your understanding of the process.

Step 2. Sigmoid

After watching the Welch Labs lessons, it would be a good idea to work through the fourth week of the Coursera machine learning course on neural networks; it will help you understand how they work. The course goes deep into the mathematics and is based on Octave, while I prefer Python. Because of this, I skipped the exercises and took all the necessary knowledge from the videos.

My first priority was studying the sigmoid, since it comes up in many aspects of neural networks. I already knew something about it from the third week of the above-mentioned course, so I rewatched the video from there.

But videos alone will only take you so far. For a complete understanding, I decided to code it myself. So I started writing an implementation of the logistic regression algorithm (which uses a sigmoid).

It took a whole day, and the result was hardly satisfactory. But that doesn’t matter, because I figured out how everything works.

You don’t have to do this yourself, since it requires some specialized knowledge; the main thing is that you understand how the sigmoid works.
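For the curious, here is a minimal sketch of what such a logistic regression looks like. This is my own toy version, not the author’s code; the data set is a hypothetical single feature whose label is 1 whenever the feature exceeds 2:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Toy data (hypothetical): one feature, label 1 when the feature > 2
xs = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
ys = [0, 0, 0, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)
        # Gradient of the log loss: (p - y) * x for the weight, (p - y) for the bias
        w -= lr * (p - y) * x
        b -= lr * (p - y)

print([round(sigmoid(w * x + b)) for x in xs])  # matches ys after training
```

The sigmoid turns the linear score `w * x + b` into a probability, which is the same role it plays inside each neuron of a network.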

Step 3. Backpropagation

Understanding how a neural network goes from input to output is not that difficult. It is much harder to understand how a neural network is trained on data sets. The principle I use is called backpropagation of error.

In short: you estimate how wrong the network was and adjust the weights of the input values (the blue numbers in the first picture).

The process runs from the end to the beginning: we start at the network’s output (seeing how far the network’s guess deviates from the truth) and move backward, adjusting the weights along the way until we reach the input. Calculating all of this by hand requires a knowledge of calculus. Khan Academy offers good calculus courses, but I studied it at university. You can also skip the hand calculations and use libraries that do all the calculus for you.
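The core of backpropagation is the chain rule. A sketch of one gradient step for a single sigmoid neuron makes it concrete; all the numbers here are hypothetical, and the handy identity is that the sigmoid’s derivative equals `output * (1 - output)`:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# One neuron, one training example (all numbers hypothetical)
x, target = 1.5, 0.0
w, b, lr = 0.8, 0.2, 1.0

pred = sigmoid(w * x + b)            # forward pass
error = pred - target                # how far the guess deviates from the truth
# Chain rule: d(loss)/dw = error * sigmoid'(z) * x, with sigmoid'(z) = pred*(1-pred)
grad_w = error * pred * (1 - pred) * x
grad_b = error * pred * (1 - pred)
w -= lr * grad_w                     # push the error back into the weight
b -= lr * grad_b

new_pred = sigmoid(w * x + b)
print(abs(new_pred - target) < abs(pred - target))  # True: the error shrank
```

In a multi-layer network the same chain-rule step is repeated layer by layer, from the output back toward the input, which is where the name “backpropagation” comes from.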

Here are three sources that helped me understand this method:

While reading the first two articles, you should code along; it will help you later. In general, neural networks cannot be properly understood if you neglect practice. The third article is also great, but it is more like an encyclopedia, since it is the size of an entire book. It contains detailed explanations of all the important principles of how neural networks operate. These articles will also help you learn concepts such as the cost function and gradient descent.

Step 4. Creating your own neural network

While reading various articles and tutorials, you will inevitably write small neural networks. I recommend doing exactly that, because it is a very effective way to learn.

Another useful article was A Neural Network in 11 lines of Python from IAmTrask. It packs an amazing amount of knowledge into 11 lines of code.

After reading this article, you should implement all the examples yourself. This will help you fill the gaps in your knowledge, and when you succeed, you will feel as if you have gained a superpower.
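To show what such a compact network looks like, here is my own sketch in the spirit of that article (not its exact code): a single-layer network trained with backpropagation on a tiny data set where the label simply equals the first input column.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy data set: the label is simply the first input column
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T

np.random.seed(1)
weights = 2 * np.random.random((3, 1)) - 1  # random weights in [-1, 1)

for _ in range(10000):
    pred = sigmoid(X @ weights)              # forward pass over all examples
    error = y - pred
    # Backpropagation: scale the error by the sigmoid's slope,
    # then push it back through the inputs to update the weights
    weights += X.T @ (error * pred * (1 - pred))

print(np.round(pred.ravel()))  # converges to [0. 0. 1. 1.]
```

Notice how vectorization with NumPy handles all four training examples at once: the forward pass, the error, and the weight update are each a single matrix expression.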

Since the examples often use vectorized computations, I recommend taking the linear algebra course on Coursera.

After that, you can look at the Wild ML tutorial from Denny Britz, which covers more complex neural networks.

Now you can try writing your own neural network or experimenting with ones that are already written. It is a lot of fun to find a data set that interests you and test various hypotheses with your networks.

To find good data sets, you can visit my site and choose a suitable one there.

In any case, you are better off starting your own experiments now than listening to my advice. Personally, I am currently studying Python libraries for building neural networks.

Good luck!