Jovian
Note: This is a mirror of a project from leadingindia interns to showcase jovian.

Intro to the project:

Video Introduction:

Forest Fire UAV Video

Video Link, Poster Link
Project Description

Wildfires are natural disasters that cause irreparable damage to local ecosystems, and sudden, uncontrollable wildfires can be a real threat to residents' lives. Statistics from the National Interagency Fire Center (NIFC) show that the burned area in the USA doubled from 1990 to 2015. Recent wildfires in northern California (as reported by CNN) resulted in more than 40 deaths and 50 people missing.

More than 200,000 local residents were evacuated under emergency orders. Globally, wildfires occur roughly 220,000 times per year and burn over 6 million hectares annually, so accurate and early detection of wildfires is of great importance. Fire detection is crucial for public safety, and several fire detection systems have been developed to prevent fire damage. Different technical solutions exist, but most are sensor-based and generally limited to indoor use.

These sensors detect the particles generated by smoke and fire through ionization, which requires close proximity to the fire; consequently, they cannot cover large areas. Moreover, they provide no information about the initial fire location, the direction of smoke propagation, or the size and growth rate of the fire. Video-based fire detection systems are used to overcome these limitations.

This project automates this process using a CNN model built with the Keras framework and trained on UAV images.

Benefits of using Jovian:
  • Requirements: The original code repository doesn't list the requirements needed to run the code. Setting up Keras and the other libraries can take several iterations of guesswork and considerable effort.

  • Setup issues: Different framework versions can conflict with the existing server setup. In this case, Keras required CUDA 9.x while the system had the latest CUDA 10.x installed, which required a lot of debugging and eventually a complete re-installation. Once the code is pushed to Jovian, all dependencies are handled by Jovian and the complete installation is a one-click process.

  • Experiments: The GitHub repository shows several different notebooks, which can be confusing, and the authors' multiple experiments are not well documented. Jovian allows hosting multiple versions of the experiments and comparing the best results, which also helps communicate the effort behind the project. (A single simple notebook, for example, may not convey the work that went into a 1-month project.)

We can also compare all experiments. See here

  • Replication: Jovian also lets us host the dataset along with the output pickle files from the experiments. This saves the time required to re-train the model: one can simply run the notebook and perform inference.
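For example, once the dataset and checkpoints are downloaded, inference can be run without retraining. The sketch below is a hypothetical illustration: it assumes the firstimplementation.h5 checkpoint produced later in this notebook and a 200×200 RGB input, and the predict_fire helper is our own name, not part of the project code.

```python
import numpy as np

def preprocess(img_array):
    """Rescale an HxWx3 uint8 image to [0, 1] floats and add a batch axis,
    matching the rescale=1./255 used by the data generators below."""
    x = img_array.astype("float32") / 255.0
    return np.expand_dims(x, axis=0)  # shape (1, H, W, 3)

def predict_fire(model, img_array, threshold=0.5):
    """Return True if the model's sigmoid output exceeds the threshold."""
    prob = float(model.predict(preprocess(img_array))[0][0])
    return prob > threshold

# Usage (requires Keras and the saved checkpoint):
# from keras.models import load_model
# model = load_model('firstimplementation.h5')
# print(predict_fire(model, some_200x200x3_image))
```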

Setup and usage instructions are given below:

System setup

Jovian makes it easy to share Jupyter notebooks on the cloud by running a single command directly within Jupyter. It also captures the Python environment and libraries required to run your notebook, so anyone (including you) can reproduce your work.

Option 1: Run Online:
  • At the top of the notebook you can find one-click "run online" buttons for:
    • Run on MyBinder
    • Run on Colab
    • Run on Kaggle Kernels
Option 2: Run on Local Machine:

Here's what you need to do to get started:

Install Anaconda by following the instructions given here. You might also need to add the Anaconda binaries to your system PATH to be able to run the conda command-line tool. Then install the jovian Python library by running the following command (without the $) in your Mac/Linux terminal or Windows command prompt:

pip install jovian --upgrade

Download the notebook for this tutorial using the jovian clone command:

$ jovian clone <notebook_id>

(You can get the notebook_id by clicking the 'Clone' button at the top of this page on https://jvn.io)

Running the clone command creates a directory forest-fire-detection-uav-images containing a Jupyter notebook and an Anaconda environment file.

$ ls forest-fire-detection-uav-images

Now we can enter the directory and install the required Python libraries (Jupyter, Keras, etc.) with a single command using jovian:

$ cd forest-fire-detection-uav-images
$ jovian install

Jovian reads the environment.yml file, identifies the right dependencies for your operating system, creates a virtual environment with the given name (nine in this project) and installs all the required libraries inside that environment, to avoid modifying your system-wide installation of Python. It uses conda internally. If you face issues with jovian install, try running conda env update instead.
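For reference, the environment file jovian reads is a standard conda spec. The snippet below is a hypothetical sketch of what such a file might contain for this project; the exact package pins in the real environment.yml may differ.

```yaml
# hypothetical environment.yml sketch; real package pins may differ
name: nine
channels:
  - defaults
dependencies:
  - python=3.6
  - jupyter
  - pip
  - pip:
      - keras
      - tensorflow
      - jovian
```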

We can activate the virtual environment by running

$ conda activate nine

For older installations of conda, you might need to run the command: source activate nine

Once the virtual environment is active, we can start Jupyter by running

$ jupyter notebook

You can now access Jupyter's web interface by clicking the link that shows up on the terminal or by visiting http://localhost:8888 on your browser.

Experiments:

This project ran multiple experiments, and the one currently displayed gave the best results. To compare the results from previous experiments, please click here

In [2]:
#If you're running this notebook for the first time, please uncomment the following lines to install jovian
#!pip install jovian -q --upgrade
In [3]:
import jovian

Getting the Dataset:

Option A: Download from DropBox
In [4]:
#Uncomment these commands to download the dataset:
#!wget https://www.dropbox.com/s/cf3e09xinznbgrb/data.zip?dl=0
#!mv data.zip?dl=0 data.zip
#!unzip data.zip
Option B: Clone from Jovian

If you have cloned this notebook from Jovian, the data and checkpoints are downloaded for you automatically; simply run the following command:

In [5]:
##Uncomment the following line
#!unzip data.zip
In [6]:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TensorBoard, EarlyStopping
Using TensorFlow backend.
In [7]:
# dimensions of our images.
img_width, img_height = 200, 200
In [8]:
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 1914
nb_validation_samples = 182
epochs = 50
batch_size = 16
In [9]:
hyperparams = {
    'img_width': img_width,          # use the actual values defined above,
    'img_height': img_height,        # so the logged hyperparameters match the run
    'nb_train_samples': nb_train_samples,
    'nb_validation_samples': nb_validation_samples,
    'epochs': epochs,
    'batch_size': batch_size
}

jovian.log_hyperparams(hyperparams)
[jovian] Please enter your API key (from https://jvn.io ): ········ [jovian] Hyperparams logged.
In [10]:
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
In [11]:
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('sigmoid'))                # changed from relu to sigmoid in this experiment
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

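As a sanity check on this architecture, the spatial size of the feature maps can be traced by hand: each 3×3 convolution with Keras's default 'valid' padding trims 2 pixels per dimension, and each 2×2 max-pool halves it (rounding down). A small sketch for the 200×200 inputs used here:

```python
def feature_map_sizes(size, n_blocks=4):
    """Trace the spatial size through conv(3x3, 'valid') + maxpool(2x2) blocks."""
    sizes = []
    for _ in range(n_blocks):
        size = size - 2   # 3x3 'valid' convolution trims 2 pixels per dimension
        size = size // 2  # 2x2 max-pooling halves it (floor division)
        sizes.append(size)
    return sizes

print(feature_map_sizes(200))  # → [99, 48, 23, 10]
# The last block's 10x10x64 output gives Flatten() 10 * 10 * 64 = 6400 values.
```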
In [12]:
# training-data augmentation
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# validation data: only rescaling, no augmentation
test_datagen = ImageDataGenerator(rescale=1. / 255)
In [13]:
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
Found 1914 images belonging to 2 classes.
In [14]:
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
Found 182 images belonging to 2 classes.
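flow_from_directory infers the two classes from the subfolder names under each directory. The exact folder names used in this dataset aren't shown in the notebook; a hypothetical layout would look like:

```
data/
├── train/          # 1914 images across 2 class subfolders
│   ├── fire/       # hypothetical class folder names
│   └── no_fire/
└── validation/     # 182 images
    ├── fire/
    └── no_fire/
```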
In [15]:
checkpoint = ModelCheckpoint("first1_cp.h5", monitor='val_acc', verbose=1,
                             save_best_only=True, save_weights_only=False,
                             mode='auto', period=5)
In [16]:
history = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    callbacks=[checkpoint])  # pass the checkpoint so the best model is actually saved
Epoch 1/50 119/119 [==============================] - 21s 173ms/step - loss: 0.6030 - acc: 0.6534 - val_loss: 0.4529 - val_acc: 0.8352
Epoch 2/50 119/119 [==============================] - 16s 135ms/step - loss: 0.3905 - acc: 0.8426 - val_loss: 0.3078 - val_acc: 0.9157
Epoch 3/50 119/119 [==============================] - 16s 134ms/step - loss: 0.3658 - acc: 0.8363 - val_loss: 0.3694 - val_acc: 0.8373
Epoch 4/50 119/119 [==============================] - 16s 134ms/step - loss: 0.3134 - acc: 0.8574 - val_loss: 0.2963 - val_acc: 0.8916
Epoch 5/50 119/119 [==============================] - 16s 138ms/step - loss: 0.3133 - acc: 0.8687 - val_loss: 0.2597 - val_acc: 0.9217
Epoch 6/50 119/119 [==============================] - 16s 138ms/step - loss: 0.2986 - acc: 0.8694 - val_loss: 0.2836 - val_acc: 0.8614
Epoch 7/50 119/119 [==============================] - 16s 135ms/step - loss: 0.3037 - acc: 0.8729 - val_loss: 0.3158 - val_acc: 0.9217
Epoch 8/50 119/119 [==============================] - 16s 133ms/step - loss: 0.2906 - acc: 0.8756 - val_loss: 0.2849 - val_acc: 0.8494
Epoch 9/50 119/119 [==============================] - 16s 133ms/step - loss: 0.2936 - acc: 0.8786 - val_loss: 0.3370 - val_acc: 0.8373
Epoch 10/50 119/119 [==============================] - 16s 134ms/step - loss: 0.3140 - acc: 0.8621 - val_loss: 0.3183 - val_acc: 0.8554
Epoch 11/50 119/119 [==============================] - 16s 133ms/step - loss: 0.2854 - acc: 0.8707 - val_loss: 0.3118 - val_acc: 0.8735
Epoch 12/50 119/119 [==============================] - 16s 134ms/step - loss: 0.2830 - acc: 0.8810 - val_loss: 0.2761 - val_acc: 0.8916
Epoch 13/50 119/119 [==============================] - 16s 132ms/step - loss: 0.2972 - acc: 0.8865 - val_loss: 0.2764 - val_acc: 0.8977
Epoch 14/50 119/119 [==============================] - 16s 133ms/step - loss: 0.2727 - acc: 0.8812 - val_loss: 0.2413 - val_acc: 0.8976
Epoch 15/50 119/119 [==============================] - 16s 133ms/step - loss: 0.2630 - acc: 0.8862 - val_loss: 0.2859 - val_acc: 0.8976
Epoch 16/50 119/119 [==============================] - 16s 132ms/step - loss: 0.2823 - acc: 0.8906 - val_loss: 0.3407 - val_acc: 0.8976
Epoch 17/50 119/119 [==============================] - 16s 134ms/step - loss: 0.2643 - acc: 0.8868 - val_loss: 0.2488 - val_acc: 0.8976
Epoch 18/50 119/119 [==============================] - 16s 132ms/step - loss: 0.2399 - acc: 0.9001 - val_loss: 0.2000 - val_acc: 0.9096
Epoch 19/50 119/119 [==============================] - 16s 132ms/step - loss: 0.2405 - acc: 0.8988 - val_loss: 0.3498 - val_acc: 0.9036
Epoch 20/50 119/119 [==============================] - 16s 133ms/step - loss: 0.2534 - acc: 0.8899 - val_loss: 0.3250 - val_acc: 0.8855
Epoch 21/50 119/119 [==============================] - 16s 134ms/step - loss: 0.2478 - acc: 0.8988 - val_loss: 0.3567 - val_acc: 0.8916
Epoch 22/50 119/119 [==============================] - 16s 134ms/step - loss: 0.2318 - acc: 0.8928 - val_loss: 0.2520 - val_acc: 0.9096
Epoch 23/50 119/119 [==============================] - 16s 134ms/step - loss: 0.2520 - acc: 0.9044 - val_loss: 0.2856 - val_acc: 0.9217
Epoch 24/50 119/119 [==============================] - 16s 137ms/step - loss: 0.2311 - acc: 0.9111 - val_loss: 0.2451 - val_acc: 0.9036
Epoch 25/50 119/119 [==============================] - 16s 135ms/step - loss: 0.2010 - acc: 0.9151 - val_loss: 0.2300 - val_acc: 0.9034
Epoch 26/50 119/119 [==============================] - 16s 135ms/step - loss: 0.2447 - acc: 0.9001 - val_loss: 0.4456 - val_acc: 0.8494
Epoch 27/50 119/119 [==============================] - 16s 135ms/step - loss: 0.2236 - acc: 0.9118 - val_loss: 0.3572 - val_acc: 0.8614
Epoch 28/50 119/119 [==============================] - 16s 133ms/step - loss: 0.2110 - acc: 0.9096 - val_loss: 0.2609 - val_acc: 0.9096
Epoch 29/50 119/119 [==============================] - 16s 132ms/step - loss: 0.2006 - acc: 0.9223 - val_loss: 0.2387 - val_acc: 0.9036
Epoch 30/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1989 - acc: 0.9211 - val_loss: 0.2041 - val_acc: 0.9277
Epoch 31/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1955 - acc: 0.9157 - val_loss: 0.2532 - val_acc: 0.9036
Epoch 32/50 119/119 [==============================] - 16s 133ms/step - loss: 0.1956 - acc: 0.9291 - val_loss: 0.2349 - val_acc: 0.8976
Epoch 33/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1958 - acc: 0.9165 - val_loss: 0.3062 - val_acc: 0.8614
Epoch 34/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1779 - acc: 0.9335 - val_loss: 0.2043 - val_acc: 0.9458
Epoch 35/50 119/119 [==============================] - 16s 132ms/step - loss: 0.2079 - acc: 0.9230 - val_loss: 0.2371 - val_acc: 0.9217
Epoch 36/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1885 - acc: 0.9235 - val_loss: 0.3251 - val_acc: 0.8976
Epoch 37/50 119/119 [==============================] - 16s 133ms/step - loss: 0.1876 - acc: 0.9303 - val_loss: 0.1961 - val_acc: 0.9375
Epoch 38/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1896 - acc: 0.9256 - val_loss: 0.4117 - val_acc: 0.8675
Epoch 39/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1844 - acc: 0.9293 - val_loss: 0.2101 - val_acc: 0.9036
Epoch 40/50 119/119 [==============================] - 16s 131ms/step - loss: 0.1841 - acc: 0.9272 - val_loss: 0.1996 - val_acc: 0.9217
Epoch 41/50 119/119 [==============================] - 16s 131ms/step - loss: 0.1665 - acc: 0.9321 - val_loss: 0.2503 - val_acc: 0.8976
Epoch 42/50 119/119 [==============================] - 16s 131ms/step - loss: 0.1781 - acc: 0.9322 - val_loss: 0.3518 - val_acc: 0.8976
Epoch 43/50 119/119 [==============================] - 16s 133ms/step - loss: 0.1759 - acc: 0.9319 - val_loss: 0.2078 - val_acc: 0.8976
Epoch 44/50 119/119 [==============================] - 16s 135ms/step - loss: 0.1462 - acc: 0.9440 - val_loss: 0.3039 - val_acc: 0.9096
Epoch 45/50 119/119 [==============================] - 16s 133ms/step - loss: 0.1629 - acc: 0.9359 - val_loss: 0.1823 - val_acc: 0.9157
Epoch 46/50 119/119 [==============================] - 16s 131ms/step - loss: 0.1476 - acc: 0.9361 - val_loss: 0.2323 - val_acc: 0.9096
Epoch 47/50 119/119 [==============================] - 16s 130ms/step - loss: 0.1474 - acc: 0.9466 - val_loss: 0.2135 - val_acc: 0.9277
Epoch 48/50 119/119 [==============================] - 16s 132ms/step - loss: 0.1635 - acc: 0.9254 - val_loss: 0.2173 - val_acc: 0.9337
Epoch 49/50 119/119 [==============================] - 15s 130ms/step - loss: 0.1730 - acc: 0.9318 - val_loss: 0.2320 - val_acc: 0.9034
Epoch 50/50 119/119 [==============================] - 16s 130ms/step - loss: 0.1509 - acc: 0.9375 - val_loss: 0.2012 - val_acc: 0.9277
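The 119 steps per epoch shown in the training log follow directly from the hyperparameters; as a quick sanity check:

```python
nb_train_samples, nb_validation_samples, batch_size = 1914, 182, 16

steps_per_epoch = nb_train_samples // batch_size        # batches per training epoch
validation_steps = nb_validation_samples // batch_size  # batches per validation pass

print(steps_per_epoch, validation_steps)  # → 119 11
```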
In [17]:
model.save('firstimplementation.h5') 
In [18]:
jovian.log_metrics({
    'loss': 0.1509,
    'acc': 0.9375,
    'val_loss': 0.2012,
    'val_acc': 0.9277
})
[jovian] Metrics logged.
In [19]:
jovian.log("Final Model")
[jovian] Final Model
In [20]:
import matplotlib.pyplot as plt
%matplotlib inline
# list all data in history
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
Notebook Image
Notebook Image
In [ ]:
jovian.commit(artifacts=['firstimplementation.h5'])
[jovian] Saving notebook..