Keras: Deep Learning for humans

Lately, I have been following the course Introduction to Deep Learning. This course aims to:

  • give learners a basic understanding of modern neural networks and their applications in computer vision and natural language understanding
  • introduce the popular building blocks of neural networks, including fully connected, convolutional, and recurrent layers
  • show how to use these building blocks to define complex modern architectures in the TensorFlow and Keras frameworks.

This is not simply a matter of learning new methods; it also means learning the libraries needed to work with them.
Fortunately, there are several high-level libraries that help with this work. One of the most important is Keras.

From the website: Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
To install it, I suggest reading the installation guide carefully.

In this post, we will reuse the first exercise with Keras to create a simple multilayer neural network for recognizing the handwritten digits from 0 to 9. For this example, we will use the MNIST dataset provided by Keras. MNIST is a dataset of 60,000 28×28 images of the 10 digits, along with a test set of 10,000 images.
Here is an example of one of these images:

The first issue with this dataset is the absence of a validation set for preventing overfitting. Fortunately, the course provides us with a helper that splits the data into train, validation, and test sets.
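
The course helper is not available outside the assignment, but a similar split is easy to do by hand. Below is a minimal sketch, assuming we load MNIST through keras.datasets, hold out the last 10,000 training images for validation, scale the pixels to [0, 1], and one-hot encode the labels (which the categorical_crossentropy loss used later expects):

from keras.datasets import mnist
from keras.utils import to_categorical

# load the MNIST data shipped with Keras: 60,000 training and 10,000 test images
(X_train_full, y_train_full), (X_test, y_test) = mnist.load_data()

# scale pixel values from [0, 255] to [0, 1]
X_train_full = X_train_full.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

# hold out the last 10,000 training images as a validation set
X_train, X_val = X_train_full[:-10000], X_train_full[-10000:]
y_train, y_val = y_train_full[:-10000], y_train_full[-10000:]

# one-hot encode the labels for categorical_crossentropy
y_train = to_categorical(y_train, 10)
y_val = to_categorical(y_val, 10)
y_test = to_categorical(y_test, 10)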

Now, let’s start by creating a simple multilayer perceptron. After a few lines importing the libraries, we can create the neural network:

import tensorflow as tf
s = tf.InteractiveSession()           # TensorFlow 1.x session that Keras will run on
import keras
from keras.models import Sequential   # linear stack-of-layers model
import keras.layers as ll             # short alias for the layers module

Then, we have to create the model container, which in my case is called mlp:

model = Sequential(name="mlp")

After that, we can add the layers. The first one is the input layer; its shape is 28×28 because that is the size of the images. Then, the input is flattened from a 2D matrix into a 1D vector of 784 values so it can feed the dense layers:

model.add(ll.InputLayer([28, 28]))
model.add(ll.Flatten())

Now, we can add the network body. In this example, it is composed of two dense layers, each with 100 neurons and a ReLU activation:

model.add(ll.Dense(100))
model.add(ll.Activation('relu'))

model.add(ll.Dense(100))
model.add(ll.Activation('relu'))

The last layer is the output layer; since there are 10 output classes, 10 neurons with a softmax activation are necessary:

model.add(ll.Dense(10, activation='softmax'))
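
As a side note, the same network can be defined more compactly by passing the input shape to Flatten and the activation directly to Dense. The sketch below is equivalent to the layer-by-layer version above, not an additional model:

model = Sequential(name="mlp")
model.add(ll.Flatten(input_shape=(28, 28)))    # flatten the 28x28 images to 784 values
model.add(ll.Dense(100, activation='relu'))    # first hidden layer
model.add(ll.Dense(100, activation='relu'))    # second hidden layer
model.add(ll.Dense(10, activation='softmax'))  # output layer: one neuron per digit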

After the design step, Keras requires compiling the model using the compile method. In this step, it is possible to specify the optimization algorithm, the loss function, and the evaluation metric:

model.compile("adam", "categorical_crossentropy", metrics=["accuracy"])
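
The string "adam" uses the optimizer with its default settings. If you want control over the learning rate, you can pass an optimizer object instead; a sketch, assuming the Keras 2 Adam class and its default learning rate of 0.001:

from keras.optimizers import Adam

# equivalent compile call with an explicit optimizer object
model.compile(optimizer=Adam(lr=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])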

Now we can see a summary of our network using the summary() method:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         (None, 28, 28)            0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 784)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 100)               78500     
_________________________________________________________________
activation_3 (Activation)    (None, 100)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 100)               10100     
_________________________________________________________________
activation_4 (Activation)    (None, 100)               0         
_________________________________________________________________
dense_6 (Dense)              (None, 10)                1010      
=================================================================
Total params: 89,610
Trainable params: 89,610
Non-trainable params: 0
_________________________________________________________________
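
The parameter counts follow directly from the layer sizes: the first dense layer has 784 × 100 weights plus 100 biases, i.e. 78,500 parameters; the second has 100 × 100 + 100 = 10,100; and the output layer has 100 × 10 + 10 = 1,010, which gives the total of 89,610 trainable parameters.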

After the design, let’s see how to train the model. In Keras, it is very simple using the fit method, passing the training data, the validation data, and the number of epochs:

model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)

For each epoch, I will have an output like this:

Epoch 1/10
50000/50000 [==============================] - 3s - loss: 0.4069 - acc: 0.8810 - val_loss: 0.2233 - val_acc: 0.9351

The loss and the accuracy are displayed for both the training set and the validation set.
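
Since the validation metrics are already being tracked, they can also be used to stop training before the network starts to overfit. A minimal sketch, assuming Keras’s EarlyStopping callback monitoring the validation loss:

from keras.callbacks import EarlyStopping

# stop training when the validation loss has not improved for 2 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=2)

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=10,
          callbacks=[early_stop])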

Finally, we can evaluate how good our network is on the test set. In this case too, Keras is very simple to use:

print("Loss, Accuracy = ", model.evaluate(X_test, y_test))
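
Once evaluated, the same model can also be used to classify new images. A short sketch, assuming we take the arg-max over the 10 class probabilities returned by model.predict:

import numpy as np

# predict class probabilities for the first 5 test images
probabilities = model.predict(X_test[:5])

# the predicted digit is the class with the highest probability
predicted_digits = np.argmax(probabilities, axis=1)
print(predicted_digits)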

In my assignment, I reached an accuracy of 97.56%.

That is enough for this first post. As you can see, Keras allows you to create a complex neural network with just a few lines of code. Hopefully, I will build some more complex networks in the coming weeks.

Sharing is caring!
