Key keywords and notes from Professor Andrew Ng's Machine Learning course.
Linear Regression with Multiple Variables
Hypothesis, Cost Function, and Gradient
- Hypothesis: $h_\theta(x) = \theta^T x = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n$
- Cost Function: $J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$
- Gradient Descent: $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) = \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$, updated simultaneously for all $j$ (a NumPy sketch follows this list)
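A minimal NumPy sketch of the update rule above, assuming a design matrix `X` whose first column is all ones ($x_0 = 1$) and a target vector `y` (the names and defaults here are mine, not from the lecture):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1500):
    """Batch gradient descent for multivariate linear regression.

    X : (m, n+1) design matrix whose first column is all ones (x0 = 1)
    y : (m,) target vector
    """
    m = len(y)
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        # theta_j := theta_j - (alpha/m) * sum_i (h(x_i) - y_i) * x_ij,
        # done simultaneously for all j via one matrix-vector product.
        theta -= alpha / m * (X.T @ (X @ theta - y))
    return theta
```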
Feature Scaling/Mean Normalization
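A sketch of mean normalization as taught in the course: subtract the per-feature mean and divide by a measure of spread (standard deviation here; the range max − min also works). Apply it before adding the $x_0 = 1$ column:

```python
import numpy as np

def mean_normalize(X):
    """x_j := (x_j - mu_j) / s_j for every feature column."""
    mu = X.mean(axis=0)   # per-feature mean
    s = X.std(axis=0)     # per-feature spread (range max - min also works)
    return (X - mu) / s, mu, s

# Keep mu and s: the same scaling must be applied to new examples
# at prediction time.
```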
Learning Rate
- Too low → slow convergence
- Too high → $J(\theta)$ may fail to decrease on every iteration and can diverge (see the sketch below)
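To pick $\alpha$, the lecture suggests trying values on a roughly 3× grid (0.001, 0.003, 0.01, …) and watching $J(\theta)$ decrease. A self-contained sketch; the synthetic dataset and iteration count are illustrative assumptions:

```python
import numpy as np

# Synthetic data just for illustration; substitute your own X, y.
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=(100, 2))]  # first column is x0 = 1
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=100)

def cost(theta):
    r = X @ theta - y
    return r @ r / (2 * len(y))

# Too small an alpha -> J decreases slowly; too large -> J oscillates or grows.
for alpha in (0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0):
    theta = np.zeros(3)
    for _ in range(100):
        theta -= alpha * (X.T @ (X @ theta - y)) / len(y)
    print(f"alpha={alpha}: J={cost(theta):.4f}")
```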
Linear Regression with Polynomial Features (Polynomial Regression)
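Polynomial regression reuses the same linear-regression machinery by treating powers of $x$ as additional features. A sketch (the `degree` parameter is my own choice):

```python
import numpy as np

def polynomial_features(x, degree=3):
    """Expand a single feature x into columns [1, x, x^2, ..., x^degree]."""
    return np.column_stack([x ** d for d in range(degree + 1)])

# The powers of x have wildly different ranges, so feature scaling
# becomes especially important before running gradient descent.
```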
Normal Equation
- Optimal parameters in closed form: $\theta = (X^T X)^{-1} X^T y$
- Non-invertible (singular) $X^T X$ problem → remove linearly dependent (redundant) features, or reduce the number of features when there are too few examples ($m \le n$); see the sketch below
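A sketch of the closed-form solution in NumPy; `np.linalg.pinv` plays the role of Octave's `pinv`, so it still returns a usable $\theta$ even when $X^T X$ is singular:

```python
import numpy as np

def normal_equation(X, y):
    """theta = (X^T X)^{-1} X^T y, computed via the pseudo-inverse."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y
```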
Gradient Descent vs Normal Equation
- Gradient Descent → scales to a large number of features $n$, but requires choosing $\alpha$ and iterating
- Normal Equation → suited to a small number of features; computing $(X^T X)^{-1}$ is roughly $O(n^3)$, which becomes computationally expensive as $n$ grows
Octave/MATLAB Tutorial
Vectorized Computation
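The course demonstrates this in Octave/MATLAB; here is an equivalent NumPy sketch contrasting an explicit loop with the vectorized form (the data is synthetic):

```python
import numpy as np

X = np.random.rand(10000, 3)        # synthetic design matrix
theta = np.array([1.0, 2.0, 3.0])

# Unvectorized: one prediction at a time, one feature at a time.
h_loop = np.array([sum(theta[j] * X[i, j] for j in range(3))
                   for i in range(len(X))])

# Vectorized: a single matrix-vector product, h = X @ theta.
h_vec = X @ theta

assert np.allclose(h_loop, h_vec)   # identical results, far faster
```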