Neural Networks - Forward Propagation


z^{(j)} = \Theta^{(j-1)} a^{(j-1)}
a^{(j)} = g(z^{(j)})
h_\Theta(x) = a^{(j+1)} = g(z^{(j+1)})
Note the importance of adding a bias unit (a_0 = 1) to each layer before forward-propagating to the next.

If we look at two adjacent layers j and j+1 at a time, the computation is just like logistic regression, except that the input features are themselves the activations produced by forward propagation from layer j-1. This is what allows a neural network to learn its own features.
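The equations above can be sketched in NumPy. This is a minimal illustration, not the course's own code; the function and variable names (forward_propagate, thetas) are my own, and the weight matrices in the usage example are assumed values that implement XNOR, the example network discussed in the lecture.

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, thetas):
    """Forward-propagate input vector x through the network.

    thetas is a list of weight matrices; Theta^(j) maps layer j
    (with bias unit prepended) to layer j+1.
    """
    a = np.asarray(x, dtype=float)
    for theta in thetas:
        a = np.insert(a, 0, 1.0)   # add bias unit a_0 = 1
        z = theta @ a              # z^(j+1) = Theta^(j) a^(j)
        a = sigmoid(z)             # a^(j+1) = g(z^(j+1))
    return a                       # final activation = h_Theta(x)

# Assumed hand-picked weights computing XNOR(x1, x2):
theta1 = np.array([[-30.0, 20.0, 20.0],    # hidden unit: x1 AND x2
                   [ 10.0, -20.0, -20.0]]) # hidden unit: (NOT x1) AND (NOT x2)
theta2 = np.array([[-10.0, 20.0, 20.0]])   # output unit: OR of the hidden units

print(forward_propagate([0, 0], [theta1, theta2]))  # close to 1
print(forward_propagate([0, 1], [theta1, theta2]))  # close to 0
```

Note how each layer's output simply becomes the next layer's feature vector, which is the point made above: the hidden layer learns (or here, hand-encodes) features that the output layer then uses like logistic-regression inputs.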

Resources:
https://www.coursera.org/learn/machine-learning/lecture/Hw3VK/model-representation-ii
