Simplified Cost Function and Gradient Descent

Simplified Cost Function

Vectorized implementation:
$h = g(X\theta)$

$J(\theta) = \frac{1}{m}\left(-y^{T}\log(h) - (1-y)^{T}\log(1-h)\right)$
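As a minimal NumPy sketch, the vectorized cost can be computed in a couple of lines (here `g` is the sigmoid function, and the example data below is invented for illustration):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Vectorized logistic regression cost J(theta)."""
    m = len(y)
    h = sigmoid(X @ theta)          # h = g(X*theta), shape (m,)
    return (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m

# Tiny illustrative example: with theta = 0, every h is 0.5,
# so J(theta) = log(2) regardless of the labels.
X = np.array([[1.0, 0.0], [1.0, 1.0]])   # first column is the bias feature
y = np.array([0.0, 1.0])
print(cost(np.zeros(2), X, y))
```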

Gradient Descent:

Vectorized implementation:
$\theta := \theta - \frac{\alpha}{m} X^{T}\left(g(X\theta) - y\right)$
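The update rule above translates directly into a short NumPy loop. This is a sketch with an invented toy dataset; the learning rate and iteration count are arbitrary choices for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(theta, X, y, alpha, iters):
    """Repeatedly apply theta := theta - (alpha/m) * X^T (g(X*theta) - y)."""
    m = len(y)
    for _ in range(iters):
        theta = theta - (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))
    return theta

# Toy separable data: bias column plus one feature.
X = np.array([[1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0])
theta = gradient_descent(np.zeros(2), X, y, alpha=0.5, iters=2000)
print(sigmoid(X @ theta))   # predictions move toward [0, 1]
```

Note that only `sigmoid(X @ theta)` distinguishes this from the linear regression update, which uses `X @ theta` directly.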


Notice that the update rule "looks" identical to the one for linear regression; under the hood, however, it uses a different hypothesis function, since $g(X\theta)$ is the sigmoid of the linear combination rather than the linear combination itself.

Resources:
https://www.coursera.org/learn/machine-learning/lecture/MtEaZ/simplified-cost-function-and-gradient-descent
https://www.coursera.org/learn/machine-learning/supplement/0hpMl/simplified-cost-function-and-gradient-descent
