Neural Networks - Model Representation
Terminology:
Neuron = logistic unit
\(x_0 = 1\) = bias unit
\(\Theta\) = parameters, also called weights
\(h_{\theta} (x) = \frac{1}{1 + e^{-\theta^T x}}\) = sigmoid (logistic) activation function
Layer 1 (input layer) contains input units
The final layer, which outputs \(h_{\theta} (x)\), is the output layer
The rest are hidden layers
\(a_i^{(j)}\) = "activation" of unit i in layer j (the value that unit computes and outputs to the next layer)
\(\Theta^{(j)}\) = matrix of weights controlling the function mapping from layer j to layer j + 1 (see the worked example and sketch below)
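
As a worked instance of these definitions, consider a network with three inputs \(x_1, x_2, x_3\) (the specific layout here is just illustrative). The first unit of the hidden layer computes

\[ a_1^{(2)} = g\left(\Theta_{10}^{(1)} x_0 + \Theta_{11}^{(1)} x_1 + \Theta_{12}^{(1)} x_2 + \Theta_{13}^{(1)} x_3\right) \]

where \(g\) is the sigmoid activation function defined above. Repeating this computation layer by layer, with each layer's activations (plus a bias unit) as the next layer's inputs, eventually yields \(h_{\theta}(x)\) at the output layer.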
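Below is a minimal NumPy sketch of that forward propagation, assuming a network with 3 input units, one hidden layer of 4 units, and 1 output unit; the function name `forward_prop` and the random stand-in weights are illustrative, not part of the notes.

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)), the sigmoid (logistic) activation function
    return 1.0 / (1.0 + np.exp(-z))

def forward_prop(x, thetas):
    """Compute h_theta(x) by propagating activations layer by layer.

    x      : input vector (without the bias unit)
    thetas : list of weight matrices; thetas[j] plays the role of Theta^(j+1),
             mapping layer j+1 to layer j+2, with shape
             (units in next layer, units in this layer + 1)
    """
    a = x
    for theta in thetas:
        a = np.insert(a, 0, 1.0)   # prepend the bias unit a_0 = 1
        a = sigmoid(theta @ a)     # activations of the next layer
    return a

# Example: random stand-in weights, not trained parameters.
rng = np.random.default_rng(0)
theta1 = rng.standard_normal((4, 4))  # layer 1 (3 units + bias) -> layer 2
theta2 = rng.standard_normal((1, 5))  # layer 2 (4 units + bias) -> layer 3
x = np.array([0.5, -1.2, 3.0])
print(forward_prop(x, [theta1, theta2]))  # h_theta(x), a value in (0, 1)
```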