Simplified Cost Function and Gradient Descent
Vectorized implementation:
$h = g(X\theta)$
$J(\theta) = \frac{1}{m}\cdot\left(-y^{T}\log(h) - (1-y)^{T}\log(1-h)\right)$
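As a rough sketch of how this vectorized cost could be computed in NumPy (the function names `sigmoid` and `cost` are illustrative, not from the course materials):

```python
import numpy as np

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # Vectorized cost J(theta) = (1/m) * (-y^T log(h) - (1-y)^T log(1-h)),
    # where h = g(X @ theta). X is (m, n), y is (m,) of 0/1 labels, theta is (n,).
    m = y.size
    h = sigmoid(X @ theta)
    return (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
```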
Gradient Descent:
Vectorized implementation:
$\theta := \theta - \frac{\alpha}{m} X^{T}\left(g(X\theta) - \vec{y}\right)$
Notice that the algorithm "looks" identical to gradient descent for linear regression; under the hood, however, it uses a different hypothesis function: $h_\theta(x) = g(\theta^{T}x)$ (the sigmoid applied to the linear combination) rather than $h_\theta(x) = \theta^{T}x$. A sketch of the update loop follows.
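A minimal sketch of the update loop, reusing the `sigmoid` helper from the snippet above (`alpha` and `num_iters` are illustrative choices, not values prescribed by the course):

```python
def gradient_descent(theta, X, y, alpha, num_iters):
    # Repeat the vectorized update:
    #   theta := theta - (alpha / m) * X^T (g(X theta) - y)
    m = y.size
    for _ in range(num_iters):
        theta = theta - (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))
    return theta

# Example usage on toy data: X has a leading column of ones for the
# intercept term, y holds binary labels.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(np.zeros(2), X, y, alpha=0.1, num_iters=5000)
```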
Resources:
- https://www.coursera.org/learn/machine-learning/lecture/MtEaZ/simplified-cost-function-and-gradient-descent
- https://www.coursera.org/learn/machine-learning/supplement/0hpMl/simplified-cost-function-and-gradient-descent