tf.train is part of TensorFlow's Training API and is generally used for computing gradients. Since I needed it in practice, this post explores the basic usage of tf.train.
tf.train.GradientDescentOptimizer is an optimizer that implements the gradient descent algorithm.
tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')

Construct a new gradient descent optimizer.
Args:
learning_rate: A Tensor or a floating point value. The learning rate to use.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. Defaults to "GradientDescent".
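As a quick illustration (not taken from the docs above), here is a minimal sketch of constructing the optimizer with these arguments under the TensorFlow 1.x API; the learning-rate values and the name 'SGD' are arbitrary placeholders:

```python
import tensorflow as tf

# Only learning_rate is required; it can be a Python float or a Tensor.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

# use_locking and name are optional and default to False and 'GradientDescent';
# they are passed explicitly here just to show the full constructor.
sgd = tf.train.GradientDescentOptimizer(0.1, use_locking=False, name='SGD')
```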
The minimize() method adds operations to minimize 'loss' by updating 'var_list'.
This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function.
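To make that relationship concrete, here is a hedged sketch (TensorFlow 1.x; the toy least-squares problem, variable names, and learning rate are placeholders of my own, not from the original post) showing minimize() next to the equivalent compute_gradients()/apply_gradients() pair:

```python
import tensorflow as tf

# Toy least-squares problem: fit w so that w * x matches y (true answer: w = 2).
x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([2.0, 4.0, 6.0])
w = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x - y))

global_step = tf.Variable(0, trainable=False)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

# One-call form: minimize() bundles compute_gradients() and apply_gradients().
train_op = optimizer.minimize(loss, global_step=global_step)

# Equivalent two-step form, useful when the gradients need processing first.
grads_and_vars = optimizer.compute_gradients(loss)
train_op_explicit = optimizer.apply_gradients(grads_and_vars, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    # w approaches 2.0; global_step counts the applied updates.
    print(sess.run([w, global_step]))
```

The full argument reference for minimize() follows.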
Args:
loss: A Tensor containing the value to minimize.
global_step: Optional Variable to increment by one after the variables have been updated.
var_list: Optional list of Variable objects to update to minimize 'loss'. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
name: Optional name for the returned operation.

Returns:
An Operation that updates the variables in 'var_list'. If 'global_step' was not None, that operation also increments global_step.

Result