In this tutorial, we’ll see how to initialize biases in a Keras model and how to read them back out. Let’s create a small neural network with one convolutional layer, one Dense layer containing 10 nodes, and an output layer with 1 node.

import tensorflow as tf

bias_initializer = tf.keras.initializers.HeNormal()
input_shape = (28, 28, 1)

model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation="relu",
                               use_bias=True, bias_initializer=bias_initializer),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="relu",
                              use_bias=True, bias_initializer="zeros"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # sigmoid: a 1-node softmax would always output 1
    ]
)
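
As an optional check (not part of the original snippet), model.summary() shows the parameter count per layer, which already includes the bias terms:

model.summary()
# Conv2D:    3*3*1*32 weights + 32 biases = 320 params
# Dense(10): 21632*10 weights + 10 biases = 216,330 params
# Dense(1):  10*1 weights + 1 bias        = 11 params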

Everything is pretty simple, and you should be familiar with almost all of it. The only new things are the use_bias and bias_initializer parameters in the hidden layers.

Use Bias

It specifies whether or not you want to include biases in the layer for all of its neurons. If we want to include biases, we set the parameter value to True. Otherwise, we set it to False.

The default value is True, so if we don’t specify this parameter, the layer will include bias terms by default.
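
For example, here is a minimal sketch (using the same TensorFlow import as above) of what happens when we turn biases off:

# With use_bias=False the layer creates no bias vector at all.
no_bias_layer = tf.keras.layers.Dense(10, activation="relu", use_bias=False)
no_bias_layer.build(input_shape=(None, 784))   # create the layer's weights
print(len(no_bias_layer.get_weights()))        # 1 -> kernel only, no bias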

Bias Initializer

It determines how the biases are initialized, i.e., the values they are set to before we start training the model.

‘zeros’ is the default value for the bias_initializer parameter. If we instead want the biases to start at some other values, such as all ones or random numbers, we can change it: Keras supports a whole list of built-in initializers.
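
For instance, a few of the built-in initializers we could pass as bias_initializer (a sketch, not an exhaustive list):

# All-ones biases
ones_init = tf.keras.initializers.Ones()
# Random biases drawn from a normal distribution
random_init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.05)
layer = tf.keras.layers.Dense(10, bias_initializer=ones_init)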

bias_initializer = tf.keras.initializers.HeNormal()
tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation="relu",
                       use_bias=True, bias_initializer=bias_initializer)

Here we set the parameter’s value to HeNormal. This means that all 32 biases in this layer will be initialized with values drawn from a truncated normal distribution (the He normal initializer) before the model starts training, rather than starting at zero.
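
Initializers are callable, so as a quick sanity check we can sample values directly from one (a sketch using the same initializer as above):

# Draw 32 values the same way the layer's bias vector would be initialized.
sample = tf.keras.initializers.HeNormal()(shape=(32,))
print(sample.numpy())  # 32 small values from a truncated normal distribution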

Access Biases

After we initialize these biases, we can get them out and inspect their values by calling model.layers[0].get_weights().

This gives us the weights and biases of the first layer as a list with 2 elements: the kernel and the bias vector. For a Dense layer the shapes are (input_dim, output_dim) and (output_dim,); for the Conv2D layer here the kernel has shape (3, 3, 1, 32) and the bias has shape (32,).
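
A minimal sketch of inspecting the first layer (assuming the model defined above):

# The first entry is the kernel, the second entry is the bias vector.
weights, biases = model.layers[0].get_weights()
print(weights.shape)  # (3, 3, 1, 32) -- Conv2D kernel
print(biases.shape)   # (32,)         -- one bias per filter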


We have a randomly initialized bias vector containing 32 elements, corresponding to the bias terms of the first layer, for which we specified HeNormal() as the bias_initializer.


Similarly, for the hidden Dense layer the weight matrix is followed by a bias vector containing 10 zeros, since we used the ‘zeros’ initializer for it.
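
Again as a sketch (the layer indices assume the model above, where Flatten is layers[1]):

# Dense(10) hidden layer: biases were initialized with 'zeros'.
dense_weights, dense_biases = model.layers[2].get_weights()
print(dense_biases)  # [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]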

Output Layer Bias

Remember, we didn’t set any bias parameters for the output layer, but because Keras uses biases and initializes the bias terms to zeros by default, we get this for free.
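
A quick sketch of checking that default-initialized bias (same model as above):

# Output Dense(1) layer: use_bias=True and bias_initializer='zeros' by default.
out_weights, out_bias = model.layers[3].get_weights()
print(out_bias)  # [0.]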

After being initialized, during training, these biases will be updated as the model learns the optimized values for them. If we were to train this model and then call the get_weights() function again, then the values for the weights and biases would likely be very different.
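
For example, a rough sketch of that comparison (the data here is made-up random data, only to illustrate that training changes the biases):

import numpy as np

x = np.random.rand(64, 28, 28, 1).astype("float32")
y = np.random.randint(0, 2, size=(64, 1)).astype("float32")

model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=1, verbose=0)

print(model.layers[0].get_weights()[1][:5])  # conv biases are no longer their initial values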
