NN

Layers

Layers are the building blocks of a neural network. Layer classes are available under nn.layers and are also aliased under the top-level nn namespace.

Usage

To use a layer, create a layer instance and call it directly:

layer = nn.Dense(units=64, activation='relu')
outputs = layer(inputs)

Or in a single line:

outputs = nn.Dense(units=64, activation='relu')(inputs)
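
Layers can be chained by passing the output of one layer as the input of the next. A minimal sketch (the layer sizes here are illustrative):

def model(x):
    x = nn.Dense(units=128, activation='relu')(x)
    x = nn.Dense(units=10)(x)
    return x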

Core Layers

  • Dense
  • Dropout
  • Flatten
  • InputSpec
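
A hedged sketch combining the core layers; the Dropout rate argument follows the common convention and is an assumption, and the layer sizes are illustrative:

def classifier(x):
    x = nn.Flatten()(x)                            # collapse to (batch, features)
    x = nn.Dense(units=256, activation='relu')(x)
    x = nn.Dropout(rate=0.5)(x)                    # rate parameter assumed
    x = nn.Dense(units=10)(x)
    return x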

Convolutional Layers

  • Conv1D
  • Conv2D
  • Conv2DTranspose
  • Conv3D
  • Conv3DTranspose
  • SeparableConv1D
  • SeparableConv2D
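
A hedged example of a small convolutional stack, assuming a Keras-style signature of filters and kernel_size (not confirmed by this page):

x = nn.Conv2D(filters=32, kernel_size=3, activation='relu')(x)   # signature assumed
x = nn.SeparableConv2D(filters=64, kernel_size=3)(x)             # signature assumed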

Pooling Layers

  • AveragePooling1D
  • AveragePooling2D
  • AveragePooling3D
  • MaxPooling1D
  • MaxPooling2D
  • MaxPooling3D
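
Pooling layers downsample their input; a sketch assuming a Keras-style pool_size argument:

x = nn.MaxPooling2D(pool_size=2)(x)       # halves the spatial dimensions; argument assumed
x = nn.AveragePooling2D(pool_size=2)(x)   # argument assumed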

Recurrent Layers

  • RNN
  • BasicLSTMCell
  • BasicRNNCell
  • DeviceWrapper
  • DropoutWrapper
  • GRUCell
  • LSTMCell
  • LSTMStateTuple
  • MultiRNNCell
  • RNNCell
  • ResidualWrapper
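
These cell and wrapper names match TensorFlow's rnn_cell classes; assuming they share those signatures, cells can be composed before being handed to nn.RNN:

cell = nn.DropoutWrapper(nn.LSTMCell(128), output_keep_prob=0.9)   # signature assumed
stacked = nn.MultiRNNCell([nn.LSTMCell(128), nn.LSTMCell(128)])    # list-of-cells argument assumed
rnn = nn.RNN(stacked)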

RNN

nn.RNN(cell, cell_b=None, return_sequences=False, return_state=False, merge_mode='concat', **kwargs)

Example:

def model(x):
    # Create layers
    embedding = nn.Embedding(10000, 300)
    cell = nn.LSTMCell(128)
    rnn = nn.RNN(cell)
    # Connect layers
    sequence_length = nn.sequence.length(x)  # required for variable-length sequences
    x = embedding(x)
    x = rnn(x, sequence_length)
    ...

To create a bidirectional RNN, pass a second cell to be used for the backward direction:

cell_b = nn.LSTMCell(128)
bidirectional = nn.RNN(cell, cell_b)
outputs = bidirectional(inputs, sequence_length)

If merge_mode is None, outputs will be a tuple of forward and backward outputs.
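
For example, a sketch with merge_mode=None, reusing the cells from above:

outputs_fw, outputs_bw = nn.RNN(cell, cell_b, merge_mode=None)(inputs, sequence_length)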

Sparse Layers

  • Embedding

Embedding

nn.Embedding(input_dim, output_dim, embeddings_initializer=None, embeddings_regularizer=None, embeddings_constraint=None, dtype=tf.float32, **kwargs)
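
For example, reusing the sizes from the RNN example above:

embedding = nn.Embedding(10000, 300)    # 10000 ids mapped to 300-dimensional vectors
vectors = embedding(token_ids)          # token_ids is an integer tensor of ids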

Normalization Layers

  • BatchNormalization
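
A minimal sketch, assuming BatchNormalization can be constructed with default arguments:

x = nn.BatchNormalization()(x)   # constructor defaults assumed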
