tf.GraphAttentionLayer module

class tf.GraphAttentionLayer.GraphAttentionLayer(*args: Any, **kwargs: Any)

Bases: modelzoo.common.layers.tf.BaseLayer.BaseLayer

Implementation of the Cerebras layer GraphAttention.

Reference:

Parameters
  • channels (int) – Number of output channels (output feature dimension) produced by the layer.

  • activation (callable) – Keras Activation to use.

  • normalize_output (bool) – Specifies whether to normalize outputs of graph attention. Defaults to True.

  • use_bias (bool) – Specifies whether to add bias in training. Defaults to True.

  • num_heads (int) – Number of attention heads to use. Defaults to 1.

  • concat_heads (bool) – Specifies whether to concatenate the output of the attention heads instead of averaging. Defaults to True.

  • dropout_rate (float) – Internal dropout rate for attention coefficients.

  • layer_norm_epsilon (float) – Epsilon value used for layer normalization.

  • neg_inf (float) – Value used in place of negative infinity when masking attention scores. Defaults to -1e4.

  • leaky_relu_alpha (float) – Negative slope coefficient of the LeakyReLU activation. Defaults to 0.2.

  • kernel_initializer (str) – Keras kernel initializer to use. Defaults to "glorot_uniform".

  • bias_initializer (str) – Keras bias initializer to use. Defaults to "zeros".

  • attn_kernel_initializer (str) – Keras kernel initializer to use for attention. Defaults to "glorot_uniform".

  • boundary_casting (bool) – See documentation for BaseLayer.

  • tf_summary – See documentation for BaseLayer.

  • **kwargs – Additional keyword arguments for BaseLayer.
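A minimal construction sketch is shown below. The import path is an assumption (the class is documented here as tf.GraphAttentionLayer.GraphAttentionLayer; adjust the import to match your Model Zoo checkout), and the argument values are illustrative. Every keyword corresponds to a documented constructor parameter.

    import tensorflow as tf

    # Assumed import path -- adjust to match your Model Zoo checkout.
    from modelzoo.graphs.tf.GraphAttentionLayer import GraphAttentionLayer

    # Every keyword below is a documented constructor parameter; the values
    # are illustrative, not recommended settings.
    gat = GraphAttentionLayer(
        channels=64,
        activation=tf.nn.relu,
        normalize_output=True,
        use_bias=True,
        num_heads=4,
        concat_heads=True,
        dropout_rate=0.1,
        layer_norm_epsilon=1e-5,
        neg_inf=-1e4,
        leaky_relu_alpha=0.2,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
        attn_kernel_initializer="glorot_uniform",
    )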

build(input_shape)
call(inputs, training=True, **kwargs)

Apply graph attention to inputs.

Parameters
  • inputs (tuple) – Contains the adjacency matrix of shape [batch_size, num_nodes, num_nodes] and the feature matrix of shape [batch_size, num_nodes, in_dim].

  • **kwargs – Additional keyword arguments for the layer call.

Returns

Graph Attention layer output of shape [batch_size, num_nodes, channels].
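For reference, the sketch below shows a single-head version of the masked graph-attention computation that call performs, following the standard GAT formulation. It is illustrative only and not the Cerebras implementation: the function name simple_graph_attention and the exact weight layout are assumptions, and dropout, bias, multiple heads, and output normalization are omitted.

    import tensorflow as tf

    def simple_graph_attention(adjacency, features, channels,
                               neg_inf=-1e4, leaky_relu_alpha=0.2):
        # Project node features: [batch, nodes, in_dim] -> [batch, nodes, channels].
        h = tf.keras.layers.Dense(
            channels, use_bias=False, kernel_initializer="glorot_uniform"
        )(features)
        # Per-node attention scores for the source and destination roles.
        src_scores = tf.keras.layers.Dense(1, use_bias=False)(h)   # [batch, nodes, 1]
        dst_scores = tf.keras.layers.Dense(1, use_bias=False)(h)   # [batch, nodes, 1]
        # Pairwise logits e_ij = LeakyReLU(src_i + dst_j): [batch, nodes, nodes].
        logits = tf.nn.leaky_relu(
            src_scores + tf.transpose(dst_scores, [0, 2, 1]),
            alpha=leaky_relu_alpha,
        )
        # Mask non-edges with a large negative value, then normalize per node.
        coefs = tf.nn.softmax(logits + (1.0 - adjacency) * neg_inf, axis=-1)
        # Aggregate neighbor features weighted by the attention coefficients.
        return tf.matmul(coefs, h)                                  # [batch, nodes, channels]

    adjacency = tf.eye(5, batch_shape=[2])       # [batch=2, num_nodes=5, num_nodes=5]
    features = tf.random.normal([2, 5, 16])      # [batch=2, num_nodes=5, in_dim=16]
    out = simple_graph_attention(adjacency, features, channels=8)
    print(out.shape)                             # (2, 5, 8)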