RMSProp optimizer
optimizer_rmsprop(
learning_rate = 0.001,
rho = 0.9,
epsilon = NULL,
decay = 0,
clipnorm = NULL,
clipvalue = NULL,
...
)
learning_rate: float >= 0. Learning rate.
rho: float >= 0. Decay factor.
epsilon: float >= 0. Fuzz factor. If NULL, defaults to k_epsilon().
decay: float >= 0. Learning rate decay over each update.
clipnorm: Gradients will be clipped when their L2 norm exceeds this value (as shown in the example below).
clipvalue: Gradients will be clipped when their absolute value exceeds this value.
...: Unused, present only for backwards compatibility.
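A minimal sketch of constructing the optimizer with gradient clipping enabled, assuming the keras R package is attached; the clipnorm value of 1 is illustrative, not a recommendation:

library(keras)

# Keep the default rho and epsilon, but clip gradients whose L2 norm exceeds 1
opt <- optimizer_rmsprop(
  learning_rate = 0.001,
  clipnorm = 1
)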
It is recommended to leave the parameters of this optimizer at their default values (except the learning rate, which can be freely tuned).
This optimizer is usually a good choice for recurrent neural networks.
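For example, a sketch of compiling a small recurrent model with this optimizer; the layer sizes, input shape, and loss are illustrative assumptions:

library(keras)

model <- keras_model_sequential() %>%
  layer_lstm(units = 32, input_shape = c(10, 16)) %>%
  layer_dense(units = 1, activation = "sigmoid")

# Only the learning rate is tuned; the other optimizer parameters keep their defaults
model %>% compile(
  optimizer = optimizer_rmsprop(learning_rate = 0.001),
  loss = "binary_crossentropy",
  metrics = "accuracy"
)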
Other optimizers: optimizer_adadelta(), optimizer_adagrad(), optimizer_adamax(), optimizer_adam(), optimizer_nadam(), optimizer_sgd()