tf.RNNEncoderBlock module

Base encoder block from which the RNNEncoder models are built.

class tf.RNNEncoderBlock.RNNEncoderBlock(*args: Any, **kwargs: Any)


The RNN encoder block from which the RNNEncoder models for language modeling and sentiment analysis are built. The constructor takes a nested params dictionary; each of the entries below is one of the keys it expects, with its own value.

  • hidden_size (int) – Number of units inside each RNN layer.

  • encoder_depth (int) – Number of RNN layers. Depth of the model.

  • dropout_rate (float) – Fraction of units to drop during training.

  • use_residuals (bool) – Whether skip connections are used.

  • residual_type (str) –

    Options are 'short' or 'long'. If 'long', the block input is added back to the output at the end of the block. If 'short', it is added back in the middle of the block, so only the first part of the layers sits inside the skip connection.
  • enable_layer_norm_before_dropout (bool) – Mutually exclusive with enable_layer_norm_after_dropout. Inserts a layer norm before the dropout in each RNNEncoderBlock layer.

  • enable_layer_norm_after_dropout (bool) – Mutually exclusive with enable_layer_norm_before_dropout. Inserts a layer norm after the dropout in each RNNEncoderBlock layer.

  • initializer (str) –

    See the TensorFlow documentation for the corresponding cell class (tf.keras.layers.SimpleRNNCell, GRUCell, or LSTMCell, depending on rnn_cell) for details on this and the following parameters:

    • bias_initializer (str)

    • rnn_use_bias (bool)

    • rnn_activation (str)

    • rnn_recurrent_activation (str)

    • rnn_unit_forget_bias (bool)

  • rnn_cell (str) – Can be 'lstm', 'gru', or 'rnn'.
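The difference between 'short' and 'long' residual placement can be illustrated with a minimal NumPy sketch. The stand-in layer function and the exact layer ordering inside the block are assumptions for illustration, not the actual implementation:

```python
import numpy as np

def layer(x):
    # Stand-in for one layer inside the block; a real block would
    # apply an LSTM/GRU/RNN layer, dropout, and optional layer norm.
    return 2.0 * x

def block_long_residual(x):
    # 'long': the block input is added back at the end of the block.
    h = layer(layer(x))
    return h + x

def block_short_residual(x):
    # 'short': the input is added back in the middle of the block,
    # so only the first part of the layers sits inside the skip.
    h = layer(x) + x
    return layer(h)

x = np.ones(3)
long_out = block_long_residual(x)    # 2*(2*x) + x = 5*x
short_out = block_short_residual(x)  # 2*(2*x + x) = 6*x
```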

call(inputs, mask=None, is_training=True)

Forward pass of the block. inputs is the input tensor, typically the output of an embedding layer; mask is an optional padding mask; is_training toggles training-time behavior such as dropout.
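Putting the keys together, a hypothetical params dictionary for this block might look as follows. The key names mirror the documented parameters, but the values and any outer nesting required by a concrete model config are assumptions; the construction line is commented out because it assumes the package import path:

```python
# Hypothetical configuration; values are illustrative only.
encoder_params = {
    "hidden_size": 256,
    "encoder_depth": 2,
    "dropout_rate": 0.1,
    "use_residuals": True,
    "residual_type": "long",                   # 'short' or 'long'
    "enable_layer_norm_before_dropout": True,
    "enable_layer_norm_after_dropout": False,  # mutually exclusive with the key above
    "initializer": "glorot_uniform",
    "bias_initializer": "zeros",
    "rnn_use_bias": True,
    "rnn_activation": "tanh",
    "rnn_recurrent_activation": "sigmoid",
    "rnn_unit_forget_bias": True,
    "rnn_cell": "lstm",                        # 'lstm', 'gru', or 'rnn'
}

# Basic consistency checks implied by the documentation.
assert encoder_params["rnn_cell"] in ("lstm", "gru", "rnn")
assert not (encoder_params["enable_layer_norm_before_dropout"]
            and encoder_params["enable_layer_norm_after_dropout"])

# Assumed construction and call, shown for shape only:
# block = tf.RNNEncoderBlock.RNNEncoderBlock(params=encoder_params)
# outputs = block(inputs, mask=mask, is_training=True)
```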