## Handwritten Digit Recognition

Objective: Classify handwritten digits from the MNIST dataset by training a convolutional neural network (CNN) using the Keras deep learning library.

### Data Preparation

We begin by downloading the data, which comes pre-split into training and test sets. Keras has built-in helper functions to do this.

In :
``````from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
``````
```Using TensorFlow backend. ```

Each sample is a 28 px x 28 px grayscale image, i.e. a 28x28 matrix of pixel intensities in the range 0-255.

In :
``train_images[0].shape, train_labels[0]``
Out:
``((28, 28), 5)``

Let's take a look at some sample images from the training set, by plotting them in a grid.

In :
``````%matplotlib inline
import matplotlib.pyplot as plt

grid_size = 6
f, axarr = plt.subplots(grid_size, grid_size)
for i in range(grid_size):
    for j in range(grid_size):
        ax = axarr[i, j]
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
        ax.imshow(train_images[i * grid_size + j], cmap='gray')
``````

We're going to apply the following preprocessing steps:

1. Reshape the images to add a channel dimension (28x28x1) and scale the pixel values to the range [0, 1]
2. Set aside the last 15,000 training samples (25%) to create a validation set
3. Convert the integer labels into one-hot vectors
In :
``````train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255

from keras.utils import to_categorical

partial_train_images = train_images[:45000]
partial_train_labels = train_labels[:45000]

validation_images = train_images[45000:]
validation_labels = train_labels[45000:]

partial_train_labels = to_categorical(partial_train_labels)
validation_labels = to_categorical(validation_labels)
test_labels = to_categorical(test_labels)
``````
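`to_categorical` turns each integer label into a one-hot vector of length 10. The equivalent with plain NumPy (a sketch of the idea, not the Keras implementation) is indexing into an identity matrix:

```python
import numpy as np

labels = np.array([5, 0, 4])      # example integer labels
one_hot = np.eye(10)[labels]      # one row of the 10x10 identity matrix per label

# Each row has a single 1 at the index of its class, e.g. for label 5:
# [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```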

### Model & Training

Now we're ready to define a simple CNN model.

In :
``````input_shape = (28,28,1)
num_classes = 10``````
In :
``````from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28,28,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

model.summary()
``````
```
WARNING:tensorflow:From /usr/local/anaconda3/envs/keras-mnist-jovian/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 5408)              0
_________________________________________________________________
dense_1 (Dense)              (None, 10)                54090
=================================================================
Total params: 54,410
Trainable params: 54,410
Non-trainable params: 0
_________________________________________________________________
```
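The parameter counts in the summary can be verified by hand: a `Conv2D` layer has kernel_height x kernel_width x input_channels weights per filter plus one bias per filter, and a `Dense` layer has one weight per input-output pair plus one bias per output:

```python
# Conv2D: a 3x3 kernel over 1 input channel, for each of the 32 filters, plus 32 biases
conv_params = (3 * 3 * 1) * 32 + 32      # 320

# A 3x3 "valid" convolution shrinks 28 -> 26; 2x2 max pooling halves 26 -> 13,
# so Flatten produces 13 * 13 * 32 values
flattened = 13 * 13 * 32                 # 5408

# Dense: 5408 inputs x 10 outputs, plus 10 biases
dense_params = flattened * 10 + 10       # 54090

print(conv_params + dense_params)        # → 54410
```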
In :
``````import jovian

jovian.log_hyperparams({
'arch': 'Conv+Dense',
'opt': 'rmsprop',
'epochs': 2,
'bs': 128
})
``````
```[jovian] Hyperparams logged. ```
In :
``model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])``
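The `categorical_crossentropy` loss compares the predicted probability distribution with the one-hot label; for a single sample it reduces to the negative log of the probability assigned to the true class. A minimal NumPy sketch (the probabilities here are made up for illustration):

```python
import numpy as np

# Hypothetical softmax output for one sample: fairly confident the digit is a 3
y_pred = np.full(10, 0.01)
y_pred[3] = 0.91
y_true = np.eye(10)[3]                     # one-hot label for class 3

# Categorical cross-entropy: -sum(y_true * log(y_pred)) = -log(p_true_class)
loss = -np.sum(y_true * np.log(y_pred))
print(round(loss, 4))                      # → 0.0943
```

The loss approaches 0 as the model assigns probability 1 to the correct class, and grows without bound as that probability shrinks.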
In :
``````history = model.fit(
partial_train_images,
partial_train_labels,
epochs=2,
batch_size=128,
validation_data=(validation_images, validation_labels))
``````
```
WARNING:tensorflow:From /usr/local/anaconda3/envs/keras-mnist-jovian/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 45000 samples, validate on 15000 samples
Epoch 1/2
45000/45000 [==============================] - 18s 395us/step - loss: 0.3758 - acc: 0.8985 - val_loss: 0.1835 - val_acc: 0.9499
Epoch 2/2
45000/45000 [==============================] - 15s 343us/step - loss: 0.1493 - acc: 0.9579 - val_loss: 0.1223 - val_acc: 0.9656
```
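As a sanity check on the progress bars above: with 45,000 training samples and a batch size of 128, each epoch performs one gradient update per batch:

```python
import math

samples = 45000
batch_size = 128

# Number of gradient updates (batches) per epoch; the final batch is smaller
steps_per_epoch = math.ceil(samples / batch_size)
print(steps_per_epoch)  # → 352
```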
In :
``````jovian.log_metrics({
'loss': 0.1493, 'acc': 0.9579
})
``````
```[jovian] Metrics logged. ```
In :
``````from utils import plot_history

plot_history(history)
``````

### Model Evaluation

In :
``test_loss, test_acc = model.evaluate(test_images, test_labels)``
```10000/10000 [==============================] - 1s 114us/step ```
In :
``````print('Test loss:', test_loss)
print('Test acc:', test_acc)``````
```
Test loss: 0.11089804765433073
Test acc: 0.9691
```

We can also save the trained model's weights to disk, so we won't need to train it again.

In :
``model.save('mnist-cnn.h5')``

### Save & Commit

In :
``import jovian``
In [ ]:
``jovian.commit(artifacts=['mnist-cnn.h5'], files=['utils.py'])``
```[jovian] Saving notebook.. ```