[Machine learning][Part4] Implementing a Gradient-Descent Linear Prediction Model with Multiple Features

Contents

Model initialization:
Model implementation:
Multi-variable cost function:
Multi-variable gradient descent implementation:
Multi-variable gradient computation:
Multi-variable gradient descent loop:


In the previous part, each training example of the gradient-descent linear prediction model had only one feature: house size. That is clearly unrealistic, so here we increase the number of features and implement the gradient-descent linear prediction model again.

First, a quick recap of how the gradient-descent linear model is built:

  1. Implement the linear model f = w*x + b, with the parameters w and b to be determined.
  2. Find the optimal combination of w and b (the formulas are written out after this list for the multi-feature case):

     (1) Introduce a cost function J(w,b) that measures how good the model is (also called the loss function).

     (2) When the cost function reaches its minimum, the model is closest to the real data; gradient descent is used to search for that optimal w, b combination.
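For the multi-feature case, the model and the update rules that the code below implements can be written as:

$$
f_{\mathbf{w},b}(\mathbf{x}) = \mathbf{w}\cdot\mathbf{x} + b = w_1x_1 + w_2x_2 + \dots + w_nx_n + b
$$

and gradient descent repeats, until convergence,

$$
w_j := w_j - \alpha\,\frac{\partial J(\mathbf{w},b)}{\partial w_j}\quad (j = 1,\dots,n), \qquad
b := b - \alpha\,\frac{\partial J(\mathbf{w},b)}{\partial b}
$$

where \alpha is the learning rate.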

Model initialization:

  • Each house now has 4 features: size, number of bedrooms, number of floors, and age of the home.
Size (sqft)   Number of Bedrooms   Number of floors   Age of Home   Price (1000s dollars)
2104          5                    1                  45            460
1416          3                    2                  40            232
852           2                    1                  35            178

The table above contains 3 training examples; stacking them gives the input feature matrix:
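$$
\mathbf{X} = \begin{bmatrix} 2104 & 5 & 1 & 45 \\ 1416 & 3 & 2 & 40 \\ 852 & 2 & 1 & 35 \end{bmatrix}, \qquad
\mathbf{y} = \begin{bmatrix} 460 \\ 232 \\ 178 \end{bmatrix}
$$

Each row of X is one training example (m = 3) and each column is one feature (n = 4).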

In code, X_train is the input matrix and y_train is the vector of target values:

X_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40],[852, 2, 1, 35]])
y_train = np.array([460, 232, 178])

Model parameters w and b:
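$$
\mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix}, \qquad b \in \mathbb{R}
$$

w has one entry per feature (n = 4 here) and b is a single scalar.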

In code (each element of w corresponds to one feature of the house):

b_init = 785.1811367994083
w_init = np.array([ 0.39133535, 18.75376741, -53.36032453, -26.42131618])

Model implementation:

def predict(x, w, b):
    """
    single predict using linear regression
    Args:
      x (ndarray): Shape (n,) example with multiple features
      w (ndarray): Shape (n,) model parameters
      b (scalar):             model parameter
    Returns:
      p (scalar):  prediction
    """
    p = np.dot(x, w) + b
    return p
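As a quick usage sketch (not part of the original code), predict can be called on the first training example with the w_init and b_init given above; with these particular values the prediction comes out very close to the first target (460):

# make a single prediction on the first training example
x_vec = X_train[0, :]                     # shape (4,)
f_wb = predict(x_vec, w_init, b_init)
print(f"prediction: {f_wb:0.2f}, target value: {y_train[0]}")   # roughly 460.00 vs 460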

Multi-variable cost function:

The multi-variable cost function J(w,b), as computed by compute_cost below, is:
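$$
J(\mathbf{w},b) = \frac{1}{2m}\sum_{i=1}^{m}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)^2, \qquad
f_{\mathbf{w},b}(\mathbf{x}^{(i)}) = \mathbf{w}\cdot\mathbf{x}^{(i)} + b
$$

where m is the number of training examples and (x^{(i)}, y^{(i)}) is the i-th example with its target value.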

In code:

def compute_cost(X, y, w, b):
    """
    compute cost
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      cost (scalar): cost
    """
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        f_wb_i = np.dot(X[i], w) + b        # (n,)(n,) = scalar (see np.dot)
        cost = cost + (f_wb_i - y[i])**2    # scalar
    cost = cost / (2 * m)                   # scalar
    return cost
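A quick sanity check (again just a usage sketch): evaluating the cost at the w_init and b_init given above should produce a value close to zero, since those parameters already reproduce the three targets almost exactly:

cost = compute_cost(X_train, y_train, w_init, b_init)
print(f"Cost at the given w_init,b_init: {cost}")   # expected to be near zero for these parameters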

Multi-variable gradient descent implementation:

Multi-variable gradient computation:
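The partial derivatives that compute_gradient evaluates (summing over the m training examples) are:

$$
\frac{\partial J(\mathbf{w},b)}{\partial w_j} = \frac{1}{m}\sum_{i=1}^{m}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)x_j^{(i)}, \qquad
\frac{\partial J(\mathbf{w},b)}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)
$$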

def compute_gradient(X, y, w, b):
    """
    Computes the gradient for linear regression
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      dj_dw (ndarray (n,)): The gradient of the cost w.r.t. the parameters w.
      dj_db (scalar):       The gradient of the cost w.r.t. the parameter b.
    """
    m, n = X.shape           # (number of examples, number of features)
    dj_dw = np.zeros((n,))
    dj_db = 0.
    for i in range(m):
        err = (np.dot(X[i], w) + b) - y[i]
        for j in range(n):
            dj_dw[j] = dj_dw[j] + err * X[i, j]
        dj_db = dj_db + err
    dj_dw = dj_dw / m
    dj_db = dj_db / m
    return dj_db, dj_dw
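For reference, the same gradient can be computed without the explicit Python loops. This is a minimal vectorized sketch, equivalent in result to compute_gradient above but not part of the original code:

def compute_gradient_vectorized(X, y, w, b):
    # err has shape (m,): the residual of every example at once
    m = X.shape[0]
    err = X @ w + b - y          # (m,n)·(n,) + scalar - (m,) -> (m,)
    dj_dw = X.T @ err / m        # (n,m)·(m,) -> (n,)
    dj_db = np.sum(err) / m      # scalar
    return dj_db, dj_dw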

Multi-variable gradient descent loop:

def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters):
    """
    Performs batch gradient descent to learn w and b. Updates w and b by taking
    num_iters gradient steps with learning rate alpha
    Args:
      X (ndarray (m,n))   : Data, m examples with n features
      y (ndarray (m,))    : target values
      w_in (ndarray (n,)) : initial model parameters
      b_in (scalar)       : initial model parameter
      cost_function       : function to compute cost
      gradient_function   : function to compute the gradient
      alpha (float)       : Learning rate
      num_iters (int)     : number of iterations to run gradient descent
    Returns:
      w (ndarray (n,)) : Updated values of parameters
      b (scalar)       : Updated value of parameter
    """
    # An array to store cost J at each iteration, primarily for graphing later
    J_history = []
    w = copy.deepcopy(w_in)  # avoid modifying global w within function
    b = b_in

    for i in range(num_iters):
        # Calculate the gradient and update the parameters
        dj_db, dj_dw = gradient_function(X, y, w, b)

        # Update parameters using w, b, alpha and gradient
        w = w - alpha * dj_dw
        b = b - alpha * dj_db

        # Save cost J at each iteration
        if i < 100000:      # prevent resource exhaustion
            J_history.append(cost_function(X, y, w, b))

        # Print cost at intervals 10 times or as many iterations if < 10
        if i % math.ceil(num_iters / 10) == 0:
            print(f"Iteration {i:4d}: Cost {J_history[-1]:8.2f}   ")

    return w, b, J_history  # return final w,b and J history for graphing

Testing the gradient descent algorithm:

# initialize parameters
initial_w = np.zeros_like(w_init)
initial_b = 0.
# some gradient descent settings
iterations = 1000
alpha = 5.0e-7
# run gradient descent 
w_final, b_final, J_hist = gradient_descent(X_train, y_train, initial_w, initial_b,
                                            compute_cost, compute_gradient, alpha, iterations)
print(f"b,w found by gradient descent: {b_final:0.2f},{w_final} ")
m, _ = X_train.shape
for i in range(m):
    print(f"prediction: {np.dot(X_train[i], w_final) + b_final:0.2f}, target value: {y_train[i]}")

# plot cost versus iteration
fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True, figsize=(12, 4))
ax1.plot(J_hist)
ax2.plot(100 + np.arange(len(J_hist[100:])), J_hist[100:])
ax1.set_title("Cost vs. iteration");  ax2.set_title("Cost vs. iteration (tail)")
ax1.set_ylabel('Cost')             ;  ax2.set_ylabel('Cost') 
ax1.set_xlabel('iteration step')   ;  ax2.set_xlabel('iteration step') 
plt.show()

The result (two cost-vs-iteration plots: the full run on the left, the tail after iteration 100 on the right):

As the right-hand plot shows, the cost is still decreasing when the training iterations end, so the optimal w, b combination has not been found yet. A concrete fix will be covered in a later update.

The complete code:

import copy, math
import numpy as np
import matplotlib.pyplot as plt

np.set_printoptions(precision=2)  # reduced display precision on numpy arrays

X_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]])
y_train = np.array([460, 232, 178])

b_init = 785.1811367994083
w_init = np.array([ 0.39133535, 18.75376741, -53.36032453, -26.42131618])


def predict(x, w, b):
    """
    single predict using linear regression
    Args:
      x (ndarray): Shape (n,) example with multiple features
      w (ndarray): Shape (n,) model parameters
      b (scalar):             model parameter
    Returns:
      p (scalar):  prediction
    """
    p = np.dot(x, w) + b
    return p


def compute_cost(X, y, w, b):
    """
    compute cost
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      cost (scalar): cost
    """
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        f_wb_i = np.dot(X[i], w) + b         # (n,)(n,) = scalar (see np.dot)
        cost = cost + (f_wb_i - y[i]) ** 2   # scalar
    cost = cost / (2 * m)                    # scalar
    return cost


def compute_gradient(X, y, w, b):
    """
    Computes the gradient for linear regression
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      dj_dw (ndarray (n,)): The gradient of the cost w.r.t. the parameters w.
      dj_db (scalar):       The gradient of the cost w.r.t. the parameter b.
    """
    m, n = X.shape  # (number of examples, number of features)
    dj_dw = np.zeros((n,))
    dj_db = 0.
    for i in range(m):
        err = (np.dot(X[i], w) + b) - y[i]
        for j in range(n):
            dj_dw[j] = dj_dw[j] + err * X[i, j]
        dj_db = dj_db + err
    dj_dw = dj_dw / m
    dj_db = dj_db / m
    return dj_db, dj_dw


def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters):
    """
    Performs batch gradient descent to learn w and b. Updates w and b by taking
    num_iters gradient steps with learning rate alpha
    Args:
      X (ndarray (m,n))   : Data, m examples with n features
      y (ndarray (m,))    : target values
      w_in (ndarray (n,)) : initial model parameters
      b_in (scalar)       : initial model parameter
      cost_function       : function to compute cost
      gradient_function   : function to compute the gradient
      alpha (float)       : Learning rate
      num_iters (int)     : number of iterations to run gradient descent
    Returns:
      w (ndarray (n,)) : Updated values of parameters
      b (scalar)       : Updated value of parameter
    """
    # An array to store cost J at each iteration, primarily for graphing later
    J_history = []
    w = copy.deepcopy(w_in)  # avoid modifying global w within function
    b = b_in

    for i in range(num_iters):
        # Calculate the gradient and update the parameters
        dj_db, dj_dw = gradient_function(X, y, w, b)

        # Update parameters using w, b, alpha and gradient
        w = w - alpha * dj_dw
        b = b - alpha * dj_db

        # Save cost J at each iteration
        if i < 100000:  # prevent resource exhaustion
            J_history.append(cost_function(X, y, w, b))

        # Print cost at intervals 10 times or as many iterations if < 10
        if i % math.ceil(num_iters / 10) == 0:
            print(f"Iteration {i:4d}: Cost {J_history[-1]:8.2f}   ")

    return w, b, J_history  # return final w,b and J history for graphing


# initialize parameters
initial_w = np.zeros_like(w_init)
initial_b = 0.
# some gradient descent settings
iterations = 1000
alpha = 5.0e-7
# run gradient descent
w_final, b_final, J_hist = gradient_descent(X_train, y_train, initial_w, initial_b,
                                            compute_cost, compute_gradient,
                                            alpha, iterations)
print(f"b,w found by gradient descent: {b_final:0.2f},{w_final} ")
m, _ = X_train.shape
for i in range(m):
    print(f"prediction: {np.dot(X_train[i], w_final) + b_final:0.2f}, target value: {y_train[i]}")

# plot cost versus iteration
fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True, figsize=(12, 4))
ax1.plot(J_hist)
ax2.plot(100 + np.arange(len(J_hist[100:])), J_hist[100:])
ax1.set_title("Cost vs. iteration");  ax2.set_title("Cost vs. iteration (tail)")
ax1.set_ylabel('Cost')             ;  ax2.set_ylabel('Cost')
ax1.set_xlabel('iteration step')   ;  ax2.set_xlabel('iteration step')
plt.show()
