Deep Learning Example (1): Training with an Optimizer - Building a Linear Model


import tensorflow as tf
import numpy as np

# Generate 100 random points with numpy
x_data = np.random.rand(100)
y_data = x_data * 0.5 + 0.6

# Build a linear model: b is the slope, k is the intercept
b = tf.Variable(0.)
k = tf.Variable(0.)
y = x_data * b + k

# Quadratic (mean squared error) cost function
loss = tf.reduce_mean(tf.square(y_data - y))
# Define a gradient descent optimizer for training, with learning rate 0.2
optimizer = tf.train.GradientDescentOptimizer(0.2)
# Minimize the cost function
train = optimizer.minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(300):
        sess.run(train)
        if i % 20 == 0:  # print the current estimates of b and k every 20 steps
            print(i, [b.eval(), k.eval()])

Output (b converges to the true slope 0.5, k to the true intercept 0.6):

0 [0.18192242, 0.33778045]
20 [0.41750893, 0.6432164]
40 [0.45200694, 0.6251434]
60 [0.47207776, 0.6146284]
80 [0.48375493, 0.60851073]
100 [0.49054867, 0.6049515]
120 [0.49450126, 0.6028808]
140 [0.49680084, 0.60167605]
160 [0.49813876, 0.6009751]
180 [0.49891713, 0.60056734]
200 [0.49936998, 0.60033005]
220 [0.49963346, 0.60019207]
240 [0.49978673, 0.6001117]
260 [0.4998759, 0.600065]
280 [0.49992776, 0.6000379]
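The code above uses the TensorFlow 1.x API: tf.Session and tf.train.GradientDescentOptimizer were removed in TensorFlow 2.x. For readers on a newer version, here is a minimal sketch of the same example ported to TF2 (this port is my own, not from the original post; it assumes tf.GradientTape for differentiation and tf.optimizers.SGD in place of the old optimizer):

import tensorflow as tf
import numpy as np

# Same data as above; cast to float32 to match the variables' dtype
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.5 + 0.6

b = tf.Variable(0.)  # slope, should converge to 0.5
k = tf.Variable(0.)  # intercept, should converge to 0.6
optimizer = tf.optimizers.SGD(learning_rate=0.2)

for i in range(300):
    # Record the forward pass so TF can compute gradients of the loss
    with tf.GradientTape() as tape:
        y = x_data * b + k
        loss = tf.reduce_mean(tf.square(y_data - y))
    # Compute d(loss)/db and d(loss)/dk, then apply one gradient descent step
    grads = tape.gradient(loss, [b, k])
    optimizer.apply_gradients(zip(grads, [b, k]))
    if i % 20 == 0:
        print(i, [b.numpy(), k.numpy()])

Either way, gradient descent updates each parameter by subtracting the learning rate times the gradient of the loss on every step, which is why b and k approach 0.5 and 0.6 as the loss shrinks.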
