Neural Networks - Forward Propagation
z^{(j)} = \Theta^{(j-1)} a^{(j-1)}
a^{(j)} = g(z^{(j)})
h_\Theta(x) = a^{(j+1)} = g(z^{(j+1)})
Note the importance of adding a bias unit (a_0^{(j)} = 1) to each layer before forward propagating from that layer to the next.
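The following is a minimal sketch of these equations in NumPy (the layer sizes, weight values, and function names are assumed for illustration, not taken from the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, thetas):
    """x: input features, thetas: list of weight matrices Theta^(1)..Theta^(L-1).
    Each Theta^(j) maps layer j (plus its bias unit) to layer j+1."""
    a = x
    for theta in thetas:
        a = np.insert(a, 0, 1.0)   # add bias unit a_0 = 1 before propagating
        z = theta @ a              # z^(j+1) = Theta^(j) a^(j)
        a = sigmoid(z)             # a^(j+1) = g(z^(j+1))
    return a                       # h_Theta(x) = a^(L)

# Hypothetical example: 3 inputs -> 4 hidden units -> 1 output, random weights
rng = np.random.default_rng(0)
thetas = [rng.standard_normal((4, 4)), rng.standard_normal((1, 5))]
print(forward_propagate(np.array([1.0, 0.5, -2.0]), thetas))
```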
If we look at two adjacent layers j and j+1 at a time, the computation is just like logistic regression, except that the input features are the activations a^{(j)} produced by forward propagation from layer j−1. In this way the neural network learns its own features rather than being limited to the raw inputs.
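To make the analogy concrete, here is a small assumed example (values are made up) showing that once the previous layer's activations are computed, the output unit is exactly a logistic regression hypothesis over those learned features:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical values: a_hidden are the activations a^(j) from the previous
# layer (bias unit a_0 = 1 already included), theta_out is the final row of
# weights Theta^(j) for a single output unit.
a_hidden = np.array([1.0, 0.7, 0.2, 0.9])
theta_out = np.array([-1.2, 0.8, 0.5, 1.1])

# Same hypothesis as logistic regression, but the "features" a_1..a_3 were
# learned by the earlier layers instead of being the raw inputs.
h = sigmoid(theta_out @ a_hidden)
print(h)
```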
Resources:
- https://www.coursera.org/learn/machine-learning/lecture/Hw3VK/model-representation-ii