Neural Networks - Model Representation

Terminologies:

Neuron = logistic unit
\(x_0 = 1\) = bias unit
\(\Theta\) = parameters, also called weights
\(h_{\theta} (x) = \frac{1}{1 + e^{-\theta^T x}}\) = sigmoid (logistic) activation function
Layer 1 (input layer) contains input units
The final layer, which outputs \(h_{\theta} (x)\), is the output layer
The rest are hidden layers
\(a_i^{(j)}\) = "activation" of unit i in layer j, i.e. the value that unit computes from the previous layer's outputs and passes on to the next layer
\(\Theta^{(j)}\) = matrix of weights controlling function mapping from layer j to layer j + 1
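The definitions above can be sketched as a forward pass in NumPy. The network size (3 input units, 4 hidden units, 1 output unit) and the random weights are illustrative assumptions, not values from the course:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid (logistic) activation: g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical dimensions: 3 inputs, 4 hidden units, 1 output.
# Theta^{(j)} maps layer j to layer j+1; the extra column multiplies the bias unit.
rng = np.random.default_rng(0)
Theta1 = rng.standard_normal((4, 3 + 1))  # layer 1 -> layer 2
Theta2 = rng.standard_normal((1, 4 + 1))  # layer 2 -> layer 3

x = np.array([0.5, -1.2, 0.3])            # one training example

a1 = np.concatenate(([1.0], x))           # prepend bias unit x_0 = 1
a2 = sigmoid(Theta1 @ a1)                 # activations a^{(2)} of the hidden layer
a2 = np.concatenate(([1.0], a2))          # prepend bias unit a_0^{(2)} = 1
h = sigmoid(Theta2 @ a2)                  # h_Theta(x), the output layer

print(h)
```

Note that each \(\Theta^{(j)}\) has shape (units in layer j+1) × (units in layer j + 1), the "+ 1" accounting for the bias unit, which is why a 1 is prepended to the activations before each matrix multiplication.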

Resources:

https://www.coursera.org/learn/machine-learning/supplement/Bln5m/model-representation-i