Coursera Lecture Notes / Machine Learning - Andrew Ng (3)

Week 3: Logistic Regression (feat. Regularization)
Keywords and a brief summary of Week 3 of Professor Andrew Ng's Machine Learning course.
Logistic Regression: Hypothesis, Cost Function, Gradient
Hypothesis: $h_\theta(x) = \frac{1}{1+e^{-\theta^{T}x}}$
Cost Function (cross-entropy): $J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y\log(h_\theta(x)) + (1-y)\log(1-h_\theta(x))\right]$
Gradient descent update: $\theta_j := \theta_j - \alpha\frac{\partial}{\partial \theta_j}J(\theta) := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x) - y)\,x_j$ ...

Week 2
Keywords and main points from Professor Andrew Ng's Machine Learning course.
Linear Regression with Multiple Variables: Hypothesis, Cost Function, and Gradient
Hypothesis: $h_\theta(x) = \theta^{T}x = \theta_0 + \theta_1x_1 + \theta_2x_2 + ... + \theta_nx_n$
Cost Function: $J(\theta) = \frac{1}{2m} \sum_{i=1}^{m}(h_\theta(x)-y)^{2}$
Gradient descent update: $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j}J(\theta) = \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x)-y)\,x_j$ ...

Week 1
Keywords and main points from Professor Andrew Ng's Machine Learning course.
Introduction
Definition of Machine Learning: the ability to learn without being explicitly programmed; for a task T, performance as measured by P improves with experience E.
Categorization of Machine Learning: Supervised Learning and Unsupervised Learning; Regression and Classification; the Cocktail Party Problem.
Linear Regression with One Variable: Univariate Linear Regression ...
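As a supplement to the Week 3 entry above, here is a minimal NumPy sketch (not from the original posts) of the logistic-regression formulas: the sigmoid hypothesis, the cross-entropy cost, and the gradient-descent update. The function names, toy data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Hypothesis h_theta(x) = 1 / (1 + e^(-theta^T x))
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy_cost(theta, X, y):
    # J(theta) = -(1/m) * sum[ y*log(h) + (1 - y)*log(1 - h) ]
    m = len(y)
    h = sigmoid(X @ theta)
    return -(1.0 / m) * np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient_descent(theta, X, y, alpha=0.1, iters=1000):
    # theta_j := theta_j - alpha * (1/m) * sum( (h - y) * x_j ), vectorized over all j
    m = len(y)
    for _ in range(iters):
        h = sigmoid(X @ theta)
        theta = theta - alpha * (1.0 / m) * (X.T @ (h - y))
    return theta

# Toy usage: the first column of ones corresponds to the intercept theta_0
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 3.0], [1.0, 4.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(np.zeros(2), X, y)
print(theta, cross_entropy_cost(theta, X, y))
```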
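Similarly, for the Week 2 entry, a short sketch of multivariate linear regression under the same assumptions (bias column of ones in X, illustrative hyperparameters); only the hypothesis and cost change, while the gradient-descent update has the same form.

```python
import numpy as np

def predict(theta, X):
    # Hypothesis h_theta(x) = theta^T x = theta_0 + theta_1*x_1 + ... + theta_n*x_n
    return X @ theta

def squared_error_cost(theta, X, y):
    # J(theta) = (1/2m) * sum( (h_theta(x) - y)^2 )
    m = len(y)
    return (1.0 / (2 * m)) * np.sum((predict(theta, X) - y) ** 2)

def gradient_descent(theta, X, y, alpha=0.01, iters=2000):
    # theta_j := theta_j - alpha * (1/m) * sum( (h - y) * x_j ), vectorized over all j
    m = len(y)
    for _ in range(iters):
        theta = theta - alpha * (1.0 / m) * (X.T @ (predict(theta, X) - y))
    return theta

# Toy usage: fit y ≈ 2x with an intercept term
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])
theta = gradient_descent(np.zeros(2), X, y)
print(theta, squared_error_cost(theta, X, y))
```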