Summary

In this guide you learn how to customize what happens when you call fit(model). Doing so requires defining your own custom model: you subclass keras$Model and define a custom train_step() method that does what you need.

Introduction

When you’re doing supervised learning, you can use fit() and everything works smoothly.

When you need to write your own training loop from scratch, you can use tf$GradientTape and take control of every little detail.

But what if you need a custom training algorithm, but you still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing?

A core principle of Keras is progressive disclosure of complexity. You should always be able to get into lower-level workflows in a gradual way. You shouldn’t fall off a cliff if the high-level functionality doesn’t exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience.

When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. You will then be able to call fit() as usual – and it will be running your own learning algorithm.

Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you’re building Sequential models, Functional API models, or subclassed models.

Let’s see how that works.

Setup

library(tensorflow)
library(tfdatasets)
library(keras)
library(magrittr, include.only = "%<>%")

# -- we start by defining some helpers we'll use later --
# xyz: a zip helper, like zip() in Python
xyz <- function(...) purrr::transpose(list(...))

# map_and_name() applies a two-sided formula to each element of .x:
# the right-hand side computes the values, the left-hand side the names.
map_and_name <- function(.x, .f, ...) {
  out <- purrr::map(.x, .f[-2L], ...)             # .f[-2L] drops the LHS, leaving ~ rhs
  names(out) <- purrr::map_chr(.x, .f[-3L], ...)  # .f[-3L] drops the RHS, leaving ~ lhs
  out
}

stopifnot(tf_version() >= "2.2") # Requires TensorFlow 2.2 or later.
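
To make these helpers concrete, here is a quick illustration with toy values (only for demonstration; these calls are not used in the examples below):

# xyz() pairs up elements, like Python's zip()
xyz(list("a", "b"), list(1, 2))
# -> list(list("a", 1), list("b", 2))

# map_and_name() evaluates the formula's right-hand side for each element
# and uses its left-hand side to name the result
map_and_name(c("x", "yy"), toupper(.x) ~ nchar(.x))
# -> list(X = 1, YY = 2)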

A first simple example

Let’s start from a simple example:

  • We create a new class that subclasses keras$Model.
  • We just override the method train_step(self, data).
  • We return a named list mapping metric names (including the loss) to their current value.

The input argument data is what gets passed to fit as training data:

  • If you pass arrays, by calling fit(x, y, ...), then data will be the tuple (x, y)
  • If you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch.

In the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self$compiled_loss, which wraps the loss(es) function(s) that were passed to compile().

Similarly, we call self$compiled_metrics$update_state(y, y_pred) to update the state of the metrics that were passed in compile(), and we query results from self$metrics at the end to retrieve their current value.

CustomModel(keras$Model) %py_class% {
  train_step <- function(self, data) {
    # Unpack the data. Its structure depends on your model and
    # on what you pass to `fit()`.
    c(x, y) %<-% data

    with(tf$GradientTape() %as% tape, {
      y_pred <- self(x, training = TRUE)  # Forward pass
      # Compute the loss value
      # (the loss function is configured in `compile()`)
      loss <- self$compiled_loss(y, y_pred, regularization_losses = self$losses)
    })

    # Compute gradients
    trainable_vars <- self$trainable_variables
    gradients <- tape$gradient(loss, trainable_vars)
    # Update weights
    self$optimizer$apply_gradients(xyz(gradients, trainable_vars))
    # Update metrics (includes the metric that tracks the loss)
    self$compiled_metrics$update_state(y, y_pred)

    # Return a named list mapping metric names to current value
    map_and_name(self$metrics, .x$name ~ .x$result())
  }
}

Let’s try this out:

# Construct and compile an instance of CustomModel
inputs <- layer_input(shape(32))
outputs <- layer_dense(inputs, units = 1)
model <- CustomModel(inputs, outputs)
model %>% compile(optimizer = "adam",
                  loss = "mse",
                  metrics = "mae")

# Just use `fit` as usual
x <- k_random_uniform(c(1000, 32))
y <- k_random_uniform(c(1000, 1))
model %>% fit(x, y, epochs = 3, verbose = 1)
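
As noted earlier, you can also pass a TF Dataset to fit(); each batch it yields becomes the data argument of train_step(). Here is a quick sketch, reusing the model, x and y created above:

# A sketch: train the same model from a TF Dataset instead of in-memory arrays.
# Each element yielded by the dataset (a batch of (x, y)) arrives as `data`
# in our custom train_step().
dataset <- tensor_slices_dataset(list(x, y)) %>%
  dataset_shuffle(buffer_size = 1000) %>%
  dataset_batch(32)

model %>% fit(dataset, epochs = 3)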

Going lower-level

Naturally, you could just skip passing a loss function in compile(), and instead do everything manually in train_step. Likewise for metrics.

Here’s a lower-level example that only uses compile() to configure the optimizer:

  • We start by creating Metric instances to track our loss and a MAE score.
  • We implement a custom train_step() that updates the state of these metrics (by calling update_state() on them), then queries them (via result()) to return their current average value, to be displayed by the progress bar and passed to any callbacks.
  • Note that we would need to call reset_states() on our metrics between each epoch! Otherwise calling result() would return an average since the start of training, whereas we usually work with per-epoch averages. Thankfully, the framework can do that for us: just list any metric you want to reset in the metrics property of the model. The model will call reset_states() on any object listed here at the beginning of each fit() epoch or at the beginning of a call to evaluate().

loss_tracker <- keras$metrics$Mean(name="loss")
mae_metric <- keras$metrics$MeanAbsoluteError(name="mae")

CustomModel(keras$Model) %py_class% {
  train_step <- function(data) {
    c(x, y) %<-% data

    with(tf$GradientTape() %as% tape, {
      y_pred <- self(x, training = TRUE)  # Forward pass
      # Compute our own loss
      loss <- keras$losses$mean_squared_error(y, y_pred)
    })
    # Compute gradients
    trainable_vars <- self$trainable_variables
    gradients <- tape$gradient(loss, trainable_vars)

    # Update weights
    self$optimizer$apply_gradients(xyz(gradients, trainable_vars))

    # Compute our own metrics
    loss_tracker$update_state(loss)
    mae_metric$update_state(y, y_pred)
    list(loss = loss_tracker$result(),
         mae = mae_metric$result())
  }

  metrics %<-active% function() {
    # We list our `Metric` objects here so that `reset_states()` can be
    # called automatically at the start of each epoch
    # or at the start of `evaluate()`.
    # If you don't implement this property, you have to call
    # `reset_states()` yourself at the time of your choosing.
    list(loss_tracker, mae_metric)
  }
}

# Construct an instance of CustomModel
inputs <- layer_input(shape(32))
outputs <- inputs %>% layer_dense(units = 1)
model <- CustomModel(inputs, outputs)

# We don't pass a loss or metrics here
model %>% compile(optimizer = "adam")

# Just use `fit` as usual -- you can use callbacks, etc.
x <- k_random_uniform(c(1000, 32))
y <- k_random_uniform(c(1000, 1))
model %>% fit(x, y, epochs=5)

Supporting sample_weight and class_weight

You may have noticed that our first basic example didn’t make any mention of sample weighting. If you want to support the fit() arguments sample_weight and class_weight, you’d simply do the following:

  • Unpack sample_weight from the data argument (fit() converts class_weight into per-sample weights before they reach train_step, so handling sample_weight covers both)
  • Pass it to compiled_loss and compiled_metrics (of course, you could also just apply it manually if you don’t rely on compile() for losses and metrics)
  • That’s it. That’s the list.

CustomModel(keras$Model) %py_class% {
  train_step <- function(data) {
    # Unpack the data. A third element in `data` is optional, but if present
    # it's assigned to sample_weight. Its structure depends on your model and on
    # what you pass to `fit()`.
    c(x, y, sample_weight = NULL) %<-% data

    with(tf$GradientTape() %as% tape, {
      y_pred <- self(x, training = TRUE)  # Forward pass
      # Compute the loss value.
      # The loss function is configured in `compile()`.
      loss <- self$compiled_loss(y, y_pred,
                                 sample_weight = sample_weight,
                                 regularization_losses = self$losses)
    })

    # Compute gradients
    trainable_vars <- self$trainable_variables
    gradients <- tape$gradient(loss, trainable_vars)

    # Update weights
    self$optimizer$apply_gradients(xyz(gradients, trainable_vars))

    # Update the metrics.
    # Metrics are configured in `compile()`.
    self$compiled_metrics$update_state(y, y_pred, sample_weight = sample_weight)

    # Return a named list mapping metric names to current value.
    # Note that it will include the loss (tracked in self$metrics).
    map_and_name(self$metrics, .x$name ~ .x$result())
  }
}

# Construct and compile an instance of CustomModel
inputs <- layer_input(shape(32))
outputs <- inputs %>% layer_dense(units = 1)
model <- CustomModel(inputs, outputs)
model %>% compile(optimizer = "adam", loss = "mse", metrics = "mae")

# You can now use the `sample_weight` argument
x <- k_random_uniform(c(1000, 32))
y <- k_random_uniform(c(1000, 1))
sw <- k_random_uniform(c(1000, 1))
model %>% fit(x, y, sample_weight = sw, epochs = 5)

Providing your own evaluation step

What if you want to do the same for calls to model %>% evaluate()? Then you would override test_step in exactly the same way. Here’s what it looks like:

CustomModel(keras$Model) %py_class% {
  test_step <- function(data) {
    # Unpack the data
    c(x, y) %<-% data
    # Compute predictions
    y_pred <- self(x, training = FALSE)
    # Update the metrics tracking the loss
    self$compiled_loss(y, y_pred, regularization_losses = self$losses)
    # Update the metrics.
    self$compiled_metrics$update_state(y, y_pred)
    # Return a named list mapping metric names to current value.
    # Note that it will include the loss (tracked in self$metrics).
    map_and_name(self$metrics, .x$name ~ .x$result())
  }
}


# Construct an instance of CustomModel
inputs <- layer_input(shape(32))
outputs <- inputs %>% layer_dense(units = 1)
model <- CustomModel(inputs, outputs)
model %>% compile(loss = "mse", metrics = "mae")

# Evaluate with our custom test_step
x <- k_random_uniform(c(1000, 32))
y <- k_random_uniform(c(1000, 1))
model %>% evaluate(x, y)
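
Prediction can be customized the same way: predict() calls a predict_step() method for every batch, and you can override it just like train_step() and test_step(). Here is a minimal sketch (an extension of the pattern above, not part of the original example):

CustomModel(keras$Model) %py_class% {
  predict_step <- function(data) {
    # When you call `predict(x)`, `data` is a batch of inputs
    self(data, training = FALSE)
  }
}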

Wrapping up: an end-to-end GAN example

Let’s walk through an end-to-end example that leverages everything you just learned.

Let’s consider:

  • A generator network meant to generate 28x28x1 images.
  • A discriminator network meant to classify 28x28x1 images into two classes (“fake” and “real”).
  • One optimizer for each.
  • A loss function to train the discriminator.

# Create the discriminator
discriminator <-
  keras_model_sequential(name = "discriminator", input_shape = c(28, 28, 1)) %>%
  layer_conv_2d(64, c(3, 3), strides = c(2, 2), padding = "same") %>%
  layer_activation_leaky_relu(alpha = 0.2) %>%
  layer_conv_2d(128, c(3, 3), strides = c(2, 2), padding = "same") %>%
  layer_activation_leaky_relu(alpha = 0.2) %>%
  layer_global_max_pooling_2d() %>%
  layer_dense(1)


# Create the generator
latent_dim <- 128L
generator <- keras_model_sequential(name = "generator", input_shape = c(latent_dim)) %>%
  # We want to generate 128 coefficients to reshape into a 7x7x128 map
  layer_dense(7 * 7 * 128) %>%
  layer_activation_leaky_relu(alpha = 0.2) %>%
  layer_reshape(c(7, 7, 128)) %>%
  layer_conv_2d_transpose(128, c(4, 4), strides = c(2, 2), padding = "same") %>%
  layer_activation_leaky_relu(alpha = 0.2) %>%
  layer_conv_2d_transpose(128, c(4, 4), strides = c(2, 2), padding = "same") %>%
  layer_activation_leaky_relu(alpha = 0.2) %>%
  layer_conv_2d(1, c(7, 7), padding = "same", activation = "sigmoid")
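
As a quick optional sanity check (not part of the original walkthrough), you can confirm that the generator maps latent vectors to 28x28x1 images:

# Feed a small batch of random latent vectors through the (untrained) generator
generator(tf$random$normal(shape = list(3L, latent_dim)))$shape
# -> (3, 28, 28, 1)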

Here’s a feature-complete GAN class, overriding compile() to use its own signature, and implementing the entire GAN algorithm in just a few lines in train_step:

GAN(keras$Model) %py_class% {
  `__init__` <- function(discriminator, generator, latent_dim) {
    super()$`__init__`()
    self$discriminator <- discriminator
    self$generator <- generator
    self$latent_dim <- latent_dim
  }

  compile <- function(d_optimizer, g_optimizer, loss_fn) {
    super()$compile()
    self$d_optimizer <- d_optimizer
    self$g_optimizer <- g_optimizer
    self$loss_fn <- loss_fn
  }

  train_step <- function(real_images) {

    # Sample random points in the latent space
    batch_size <- tf$shape(real_images)[1]

    random_latent_vectors <- tf$random$normal(list(batch_size, self$latent_dim))

    # Decode them to fake images
    generated_images <- self$generator(random_latent_vectors)

    # Combine them with real images
    combined_images <- tf$concat(list(generated_images, real_images),
                                 axis = 0L)

    # Assemble labels discriminating real from fake images
    labels <- tf$concat(list(tf$ones(c(batch_size, 1L)),
                             tf$zeros(c(batch_size, 1L))), axis = 0L)

    # Add random noise to the labels - important trick!
    labels %<>% `+`(0.05 * tf$random$uniform(tf$shape(labels)))

    # Train the discriminator
    with(tf$GradientTape() %as% tape, {
      predictions <- self$discriminator(combined_images)
      d_loss <- self$loss_fn(labels, predictions)
    })
    grads <- tape$gradient(d_loss, self$discriminator$trainable_weights)
    self$d_optimizer$apply_gradients(xyz(grads, self$discriminator$trainable_weights))

    # Sample random points in the latent space
    random_latent_vectors <- tf$random$normal(shape = list(batch_size, self$latent_dim))

    # Assemble labels that say "all real images"
    misleading_labels <- tf$zeros(list(batch_size, 1L))

    # Train the generator (note that we should *not* update the weights
    # of the discriminator)!
    with(tf$GradientTape() %as% tape, {
      predictions <- self$discriminator(self$generator(random_latent_vectors))
      g_loss <- self$loss_fn(misleading_labels, predictions)
    })
    grads <- tape$gradient(g_loss, self$generator$trainable_weights)
    self$g_optimizer$apply_gradients(xyz(grads, self$generator$trainable_weights))
    list(d_loss = d_loss, g_loss = g_loss)
  }
}

Let’s test-drive it:

# Prepare the dataset. We use both the training and test MNIST digits.

batch_size <- 64

ds <- dataset_mnist()
all_digits <- k_concatenate(list(ds$train$x, ds$test$x), axis=1)
all_digits <- k_cast(all_digits, "float32") / 255.0
all_digits <- k_reshape(all_digits, c(-1, 28, 28, 1))

dataset <- all_digits %>%
  tensor_slices_dataset() %>%
  dataset_shuffle(buffer_size = 1024) %>%
  dataset_batch(batch_size)

gan <- GAN(discriminator = discriminator,
           generator = generator,
           latent_dim = latent_dim)

gan %>% compile(
  d_optimizer = keras$optimizers$Adam(learning_rate = 0.0003),
  g_optimizer = keras$optimizers$Adam(learning_rate = 0.0003),
  loss_fn = keras$losses$BinaryCrossentropy(from_logits = TRUE)
)

# To limit the execution time, we only train on 100 batches. You can train on
# the entire dataset. You will need about 20 epochs to get nice results.
dataset %<>% dataset_take(100)

gan %>%
  fit(dataset, epochs = 1)
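
Once training has finished, you can sample new digits from the generator and bring them back into R as an array. A small sketch, not part of the original example:

random_latent_vectors <- tf$random$normal(shape = list(16L, latent_dim))
generated_images <- generator(random_latent_vectors)
fake_digits <- as.array(generated_images)  # 16 x 28 x 28 x 1, values in [0, 1]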

Happy training!