# Coursera Machine Learning: Univariate Linear Regression and Gradient Descent

## Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.

## Unsupervised Learning

Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don’t necessarily know the effect of the variables.

## Model Representation

• $x$ : input variable

• $y$ : output variable

• $m$ : size of the training set

• $(x^{(i)}, y^{(i)})$ : the $i$-th example in the training set

• $h: X \to Y$ : the prediction function, also called the hypothesis
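For univariate linear regression, the hypothesis takes the form $h_\theta(x) = \theta_0 + \theta_1 x$. A minimal sketch, with illustrative parameter values:

```python
# Hypothesis for univariate linear regression:
# h_theta(x) = theta0 + theta1 * x
def hypothesis(theta0, theta1, x):
    """Predict y for input x under parameters theta0, theta1."""
    return theta0 + theta1 * x

# Example values chosen for illustration:
print(hypothesis(1.0, 2.0, 3.0))  # 1 + 2*3 = 7.0
```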

## Cost Function

Squared error function:

$$J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

where $h_\theta(x) = \theta_0 + \theta_1 x$. The factor $\frac{1}{2}$ is included only to simplify the derivative taken in gradient descent.
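The squared error cost can be sketched in plain Python; the data values below are made up for illustration:

```python
# Squared error cost:
# J(theta0, theta1) = (1 / (2m)) * sum_i (h(x_i) - y_i)^2
def compute_cost(theta0, theta1, xs, ys):
    m = len(xs)
    total = sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys))
    return total / (2 * m)

# A perfect fit (y = 2x) yields zero cost:
print(compute_cost(0.0, 2.0, [1, 2, 3], [2, 4, 6]))  # 0.0
```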

## Gradient Descent

Repeat until convergence:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$$

• $\alpha$ (learning rate): if it is too small, descent is very slow; if it is too large, the algorithm may fail to converge (or even diverge).
• In the algorithm, $\theta_0$ and $\theta_1$ must be updated simultaneously!
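The points above can be sketched as a small batch gradient descent loop. Note that both gradients are computed before either parameter is changed, which is exactly the simultaneous update the notes warn about; the learning rate, iteration count, and data are illustrative choices:

```python
# Batch gradient descent for univariate linear regression,
# fitting h(x) = theta0 + theta1 * x to the data.
def gradient_descent(xs, ys, alpha=0.1, iters=1000):
    m = len(xs)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        # Compute BOTH partial derivatives first (simultaneous update),
        # then apply them; updating theta0 before computing grad1
        # would be the incorrect sequential update.
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Data generated from y = 2x, so we expect theta0 ≈ 0, theta1 ≈ 2:
theta0, theta1 = gradient_descent([1, 2, 3], [2, 4, 6])
print(theta0, theta1)
```

Too large an `alpha` here (say, 1.0) makes the parameters oscillate and blow up, illustrating the divergence warning above.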