Simplified Cost Function and Gradient Descent
Vectorized implementation:
\(\begin{align*} & h = g(X\theta)\newline & J(\theta) = \frac{1}{m} \cdot \left(-y^{T}\log(h)-(1-y)^{T}\log(1-h)\right) \end{align*}\)
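As a concrete reference, here is a minimal NumPy sketch of this vectorized cost. The function names `sigmoid` and `cost`, and the assumed shapes (X is the m-by-n design matrix, y and theta are 1-D arrays), are illustrative choices, not from the course materials.

```python
import numpy as np

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # J(theta) = (1/m) * (-y' log(h) - (1 - y)' log(1 - h)), with h = g(X theta)
    m = y.shape[0]
    h = sigmoid(X @ theta)
    return (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
```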
Gradient Descent:
Vectorized implementation:
\(\theta := \theta - \frac{\alpha}{m} X^{T} (g(X \theta ) - \vec{y})\)
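A matching sketch of the update loop, reusing the `sigmoid` helper above; `alpha` and `num_iters` are illustrative hyperparameters, not values prescribed by the course.

```python
def gradient_descent(theta, X, y, alpha=0.1, num_iters=1000):
    # Repeated update: theta := theta - (alpha/m) * X' (g(X theta) - y)
    m = y.shape[0]
    for _ in range(num_iters):
        theta = theta - (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))
    return theta
```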
Notice that the algorithm "looks" identical to the one for linear regression; under the hood, however, it uses a different hypothesis function.
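To make the contrast concrete, compare the two hypotheses:
\(\begin{align*} & \text{linear regression: } h_\theta(x) = \theta^T x \newline & \text{logistic regression: } h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}} \end{align*}\)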
Resources:
- https://www.coursera.org/learn/machine-learning/lecture/MtEaZ/simplified-cost-function-and-gradient-descent
- https://www.coursera.org/learn/machine-learning/supplement/0hpMl/simplified-cost-function-and-gradient-descent