Neural Networks - Model Representation
Terminology:
Neuron = logistic unit
x_0 = 1 = bias unit
Θ = parameter, weight
h_θ(x) = 1 / (1 + e^(−θ^T x)) = sigmoid (logistic) activation function
Layer 1 (input layer) contains input units
The final layer, which outputs h_θ(x), is the output layer
The rest are hidden layers
a(j)i = "activation" (value that's computed by and as output by previous layer) of unit i in layer j
Θ^(j) = matrix of weights controlling the function mapping from layer j to layer j + 1 (see the sketch below)
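To see how these pieces fit together, here is a minimal forward-propagation sketch in Python/NumPy. The layer sizes, variable names, and random weights are assumptions for illustration only, not part of the course notes.

import numpy as np

def sigmoid(z):
    # h_θ(x) = 1 / (1 + e^(−θ^T x)): the logistic activation function
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, thetas):
    """Compute the activations layer by layer.

    x      : input vector (without the bias unit)
    thetas : list of weight matrices; thetas[j] maps layer j+1 to layer j+2
             and has shape (units in next layer, units in current layer + 1)
    """
    a = x
    for Theta in thetas:
        a = np.insert(a, 0, 1.0)   # prepend the bias unit a_0 = 1
        a = sigmoid(Theta @ a)     # a^(j+1) = g(Θ^(j) a^(j))
    return a                       # output-layer activations = h_θ(x)

# Hypothetical network: 3 input units, one hidden layer of 4 units, 1 output unit
Theta1 = np.random.randn(4, 4)     # layer 1 (3 units + bias) -> layer 2 (4 units)
Theta2 = np.random.randn(1, 5)     # layer 2 (4 units + bias) -> layer 3 (1 unit)
x = np.array([0.5, -1.2, 2.0])
print(forward_propagate(x, [Theta1, Theta2]))

Each Θ^(j) row holds the weights feeding one unit in layer j + 1, which is why its column count is the number of units in layer j plus one for the bias.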