MullOverThings

Useful tips for everyday

How to improve the accuracy of neural networks?

During training, we want to start with a poorly performing neural network and end up with a network of high accuracy. In terms of the loss function, we want its value to be much lower at the end of training. Improving the network is possible because we can change the function it computes by adjusting its weights.
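As a minimal sketch of that idea, the single-weight "network" below starts badly, and repeatedly adjusts its weight to lower a squared-error loss. The dataset, learning rate, and use of a numeric gradient are illustrative assumptions, not a full training loop.

```python
def loss(w):
    # Mean squared error of the toy model y = w * x on a tiny dataset.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0  # start with a badly performing "network"
for _ in range(100):
    grad = (loss(w + 1e-6) - loss(w - 1e-6)) / 2e-6  # numeric gradient
    w -= 0.1 * grad  # adjust the weight to reduce the loss

print(round(w, 3))  # the weight approaches 2.0, and the loss approaches 0
```

Each step moves the weight in the direction that decreases the loss, which is exactly the "change its function by adjusting weights" idea stated above.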

Why do we separate training and inference phases of neural networks?

To motivate why we separate the training and inference phases of neural networks, it is useful to analyse their computational complexity. This essay assumes familiarity with asymptotic complexity analysis of algorithms, including big-O notation.

What should be the output of a neural network?

The last thing to note is that we usually want a number between 0 and 1 as the output of our neural network, so that we can treat it as a probability. For example, in dogs-vs-cats we could treat a number close to zero as a cat, and a number close to one as a dog.
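One common way to get such an output is the sigmoid function, which squashes any real-valued score into the (0, 1) range; the example scores below are illustrative assumptions.

```python
import math

def sigmoid(z):
    # Squash any real-valued network output into the (0, 1) range.
    return 1.0 / (1.0 + math.exp(-z))

# A strongly negative score maps near 0 ("cat"), a strongly positive one near 1 ("dog").
print(round(sigmoid(-4.0), 3))  # → 0.018
print(round(sigmoid(4.0), 3))   # → 0.982
```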

Why do we need non-linearity in a neural network?

The need for non-linearity comes from two facts: we connect neurons together, and a linear function of a linear function is itself a linear function. So, if we didn't apply a non-linear function in each neuron, the whole neural network would be a linear function, and thus no more powerful than a single neuron.
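The collapse of stacked linear layers can be checked directly: two linear layers applied in sequence equal a single linear layer with combined weights and bias. The specific matrices and input below are illustrative assumptions.

```python
import numpy as np

x = np.array([1.0, 2.0])
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, 0.2])
W2, b2 = np.array([[2.0, 0.0], [1.0, 1.0]]), np.array([0.0, -0.5])

# Two stacked linear layers (no activation in between)...
two_layers = W2 @ (W1 @ x + b1) + b2
# ...equal one linear layer with merged weight matrix and bias vector.
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)

print(np.allclose(two_layers, one_layer))  # → True
```

Inserting a non-linearity (e.g. applying a ReLU or sigmoid between the two layers) breaks this equality, which is what gives depth its power.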

How is loss function used in neural network training?

Many of the conventional approaches to this optimization problem are directly applicable to training neural networks. Although the loss function depends on many parameters, one-dimensional optimization methods are of great importance here; indeed, they are very often used in the training process of a neural network.
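One classic one-dimensional method is golden-section search, which can be used, for example, to choose a step size along a fixed descent direction. The sketch below is an assumption for illustration: the bracketing interval and the quadratic "loss along the direction" are made up, and real training loops typically use cruder step-size rules.

```python
import math

def line_search(f, lo=0.0, hi=1.0, tol=1e-6):
    # Golden-section search: minimise f(step) for step in [lo, hi],
    # assuming f has a single minimum inside the interval.
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

# Hypothetical loss restricted to one search direction, minimal at step = 0.3.
loss_along_direction = lambda step: (step - 0.3) ** 2
print(round(line_search(loss_along_direction), 3))  # → 0.3
```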

What’s the best way to train a neural network?

The intuitive way to do it is this: take each training example, pass it through the network to get a number, subtract it from the actual number we wanted to get, and square the result (because negative numbers are just as bad as positive ones).
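The procedure described above can be written in a few lines. The stand-in "network" and the tiny dataset are assumptions for illustration; any function from input to output would fit the same pattern.

```python
def squared_error(network, examples):
    # Pass each example through the network, subtract the target, square, and sum.
    return sum((network(x) - target) ** 2 for x, target in examples)

# Toy stand-in for a network: y = 2x (a made-up model for this example).
net = lambda x: 2 * x
data = [(1, 2.5), (2, 4.0), (3, 5.5)]
print(squared_error(net, data))  # → 0.5
```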

How is the learning problem for neural networks formulated?

The learning problem for neural networks is formulated as the search for a parameter vector w* at which the loss function f takes a minimum value. The necessary condition states that if the neural network is at a minimum of the loss function, then the gradient is the zero vector.
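The necessary condition can be checked numerically: at the minimiser of a loss, finite-difference gradients come out (essentially) zero in every coordinate. The two-parameter quadratic loss below, with its known minimum, is an assumption for illustration.

```python
def f(w):
    # Toy loss with a known minimum at w* = (1, -2).
    return (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2

def grad(f, w, eps=1e-6):
    # Central finite-difference gradient, one coordinate at a time.
    g = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        g.append((f(wp) - f(wm)) / (2 * eps))
    return g

w_star = [1.0, -2.0]
print(all(abs(g) < 1e-8 for g in grad(f, w_star)))  # → True: zero gradient at the minimum
```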