TensorBoard:
How can we observe more intuitively how data changes inside a neural network, or inspect the structure of a network we have already built? As the previous article mentioned, the third-party library matplotlib can provide some degree of visualization. TensorFlow, however, also ships with its own visualization module, TensorBoard, which gives a much more intuitive view of the whole network structure.
The structure graph can even be expanded to reveal the nodes inside each scope.
How to add the data you want to visualize to the event file:
a. Define summary operations:
tf.scalar_summary: records scalars such as learning rate, loss, accuracy, etc. (renamed tf.summary.scalar in r1.x)
tf.image_summary: records input images fed into the graph (tf.summary.image in r1.x)
tf.histogram_summary: records distributions of activations, gradients, and weights (tf.summary.histogram in r1.x)
tf.audio_summary: records audio data (tf.summary.audio in r1.x)
For example:
tf.scalar_summary('tag', variable_to_record)
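As a fuller illustration, here is a minimal sketch of all four summary ops using the r1.2 tf.summary.* names; the tensors below (a toy loss, dummy image and audio batches) are invented purely for this example:

import tensorflow as tf

loss = tf.constant(0.5, name='loss')                    # illustrative scalar
images = tf.zeros([8, 28, 28, 1])                       # dummy batch of grayscale images
weights = tf.Variable(tf.random_normal([784, 10]))      # dummy weight matrix
audio = tf.zeros([2, 16000])                            # two 1-second mono clips

tf.summary.scalar('loss', loss)                         # was tf.scalar_summary
tf.summary.image('inputs', images)                      # was tf.image_summary
tf.summary.histogram('weights', weights)                # was tf.histogram_summary
tf.summary.audio('waveform', audio, sample_rate=16000)  # was tf.audio_summary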
b. Define an op that merges all the summary operations:
merged = tf.merge_all_summaries()  # tf.summary.merge_all in r1.x
c. Initialize a summary writer with the session's graph:
train_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/train', sess.graph)  # tf.summary.FileWriter in r1.x
d. Write the summaries every n steps:
summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
train_writer.add_summary(summary, i)
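Putting steps a through d together, a minimal end-to-end sketch with the r1.2 API; the log directory 'logs/train' and the toy graph are assumptions for illustration:

import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
loss = tf.square(x)                          # stand-in for a real loss
tf.summary.scalar('loss', loss)              # a. define summary ops

merged = tf.summary.merge_all()              # b. merge them into one op
sess = tf.Session()
writer = tf.summary.FileWriter('logs/train', sess.graph)  # c. writer + graph

for i in range(100):                         # d. write every n steps
    if i % 10 == 0:
        summary = sess.run(merged, feed_dict={x: float(i)})
        writer.add_summary(summary, i)
writer.close()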
Usage:
Structure graph:
with tf.name_scope(layer_name):
The line above creates a collapsible, expandable scope in the graph, and scopes can be nested:
with tf.name_scope(layer_name):
    with tf.name_scope('weights'):
Nodes are generally variables or constants; give them a name='' argument so they are displayed with a name, e.g.:
with tf.name_scope('weights'):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
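A quick way to see what nesting does is to print a node's full name; the scope and variable names here are made up for illustration:

import tensorflow as tf

with tf.name_scope('layer1'):
    with tf.name_scope('weights'):
        W = tf.Variable(tf.random_normal([3, 4]), name='W')

print(W.name)  # "layer1/weights/W:0" -- shown nested under layer1/weights in the Graphs tab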
Graph symbols and their meanings:
Variables:
Variables can be tracked with the tf.histogram_summary() method:
tf.histogram_summary(layer_name + "/weights", Weights)
Constants:
Scalar values such as the loss can be tracked with the tf.scalar_summary() method:
tf.scalar_summary('loss', loss)
Display:
Finally, everything has to be merged and handed to a SummaryWriter:
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter("/path/to/logdir", sess.graph)
merged itself also needs to be run, so we additionally need:
result = sess.run(merged)
writer.add_summary(result, i)
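Note that if the graph contains placeholders (as the project code below does), merged must be run with the same feed_dict as the training op; a sketch of the periodic-write pattern, using the every-50-steps interval from this article's own code:

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, i)  # i becomes the step shown on the x-axis
writer.close()  # flush pending events to disk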
Execution:
After the script runs, an event file is generated in that directory; then execute:
tensorboard --logdir="/path/to/logdir"
This prints a URL (by default http://localhost:6006):
Open that URL in a browser. Because of compatibility issues, Firefox does not render TensorBoard well, so Chrome is recommended.
Scalars appear under the Events tab, the structure graph under Graphs, and the variable histograms under the last two tabs (Distributions and Histograms).
Appendix: project code:
The project continues from the previous article (updated to the latest TensorFlow API, r1.2):
import tensorflow as tf
import numpy as np
def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    # Build one fully connected layer and attach histogram summaries.
    layer_name = "layer%s" % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]))
            tf.summary.histogram(layer_name + "/weights", Weights)
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
            tf.summary.histogram(layer_name + "/biases", biases)
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
            tf.summary.histogram(layer_name + "/Wx_plus_b", Wx_plus_b)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        tf.summary.histogram(layer_name + "/outputs", outputs)
        return outputs

# Toy data: y = x^2 - 0.5 plus Gaussian noise.
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

l1 = add_layer(xs, 1, 10, n_layer=1, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, n_layer=2, activation_function=None)

with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
    tf.summary.scalar('loss', loss)

with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated in r1.2
sess = tf.Session()
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("Desktop/", sess.graph)
sess.run(init)

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, i)