
70 Years of Formula 1 - An Exploratory Data Analysis

Formula 1, the pinnacle of motorsport, celebrates its 70th anniversary this year. Since the British Grand Prix in 1950, the sport has not only enthralled racing fans around the world but also served as a platform for innovation and cutting-edge technological development. Being an F1 fan myself, I thought I would devote my first data analysis project to the 70-year history of the sport, made possible by the Ergast Developer API and its well-maintained database images. Here, we will use their data up to the 2020 Russian GP held on 27th September. This project is part of the course Data Analysis with Python: Zero to Pandas hosted by Jovian. I will use the NumPy and Pandas libraries in Python for some primary data exploration of the driver and constructor records and a few other aspects of F1, and the Matplotlib and Seaborn libraries for plotting and visualisation.

How to run the code

This is an executable Jupyter notebook hosted on Jovian.ml, a platform for sharing data science projects. You can run and experiment with the code in a couple of ways: using free online resources (recommended) or on your own computer.

Option 1: Running using free online resources (1-click, recommended)

The easiest way to start executing this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks. You can also select "Run on Colab" or "Run on Kaggle".

Option 2: Running on your computer locally
  1. Install Conda by following these instructions. Add Conda binaries to your system PATH, so you can use the conda command on your terminal.

  2. Create a Conda environment and install the required libraries by running these commands on the terminal:

conda create -n zerotopandas -y python=3.8 
conda activate zerotopandas
pip install jovian jupyter numpy pandas matplotlib seaborn opendatasets --upgrade
  3. Press the "Clone" button above to copy the command for downloading the notebook, and run it on the terminal. This will create a new directory and download the notebook. The command will look something like this:
jovian clone notebook-owner/notebook-id
  4. Enter the newly created directory using cd directory-name and start the Jupyter notebook.
jupyter notebook

You can now access Jupyter's web interface by clicking the link that shows up on the terminal or by visiting http://localhost:8888 on your browser. Click on the notebook file (it has a .ipynb extension) to open it.

Importing Libraries

We will first import the Python libraries required in this notebook.

In [1]:
# Importing python libraries
from urllib.request import urlretrieve
import os
import pandas as pd
import numpy as np
In [2]:
# Importing plotting and visualisation libraries
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
%matplotlib inline

sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'

Downloading the Dataset

For easy access, the CSV files from Ergast and their schema (updated up to 27 September 2020) are hosted as GitHub gists. The following variables hold the path to each file.

In [3]:
# URLs to fetch data
url_circuits = 'https://gist.githubusercontent.com/harikanth-m/f15b47aeab0d75c128824d0f72821db1/raw/circuits.csv'
url_const_results = 'https://gist.githubusercontent.com/harikanth-m/f41f068dca8a309c81a04c6a45fecf58/raw/constructor_results.csv'
url_const_standings = 'https://gist.githubusercontent.com/harikanth-m/ccf0bd917f013d34928ae8c9b695059c/raw/constructor_standings.csv'
url_const = 'https://gist.githubusercontent.com/harikanth-m/f128413f4bff5c3c901b9a8c599a78b9/raw/constructors.csv'
url_driver_standings = 'https://gist.githubusercontent.com/harikanth-m/fcdb3e6650972ee6f0db07288074b0b7/raw/driver_standings.csv'
url_drivers = 'https://gist.githubusercontent.com/harikanth-m/d37be6c8c871300822bf8676e2206d9b/raw/drivers.csv'
url_lap_times = 'https://gist.githubusercontent.com/harikanth-m/61770032a57dd013651163142589b2bc/raw/lap_times.csv'
url_pit_stops = 'https://gist.githubusercontent.com/harikanth-m/45d94ae5a8b8e7f0cace662cc7942d04/raw/pit_stops.csv'
url_qualifying = 'https://gist.githubusercontent.com/harikanth-m/08f55f6687874c5a0bf01790c477e4bc/raw/qualifying.csv'
url_races = 'https://gist.githubusercontent.com/harikanth-m/9382cffceb1264e246fae76e4989cc31/raw/races.csv'
url_results = 'https://gist.githubusercontent.com/harikanth-m/6b1b6e36d3709abf3b97fc703303471b/raw/results.csv'
url_status = 'https://gist.githubusercontent.com/harikanth-m/c1bb17485523243929c57f4f13d4fcba/raw/status.csv'
url_schema = 'https://gist.githubusercontent.com/harikanth-m/71f7fd19c8afba09ad99664ece517268/raw/f1db_schema.txt'

Let's make a new directory to download the data files.

In [4]:
# Creating a new directory for the data files
os.makedirs('./data', exist_ok=True)

We will now download the files to this directory using the urlretrieve function.

In [5]:
# Downloading the csv files
urlretrieve(url_circuits, './data/circuits.csv')
urlretrieve(url_const_results, './data/constructor_results.csv')
urlretrieve(url_const_standings, './data/constructor_standings.csv')
urlretrieve(url_const, './data/constructors.csv')
urlretrieve(url_driver_standings,'./data/driver_standings.csv')
urlretrieve(url_drivers, './data/drivers.csv')
urlretrieve(url_lap_times, './data/lap_times.csv')
urlretrieve(url_pit_stops, './data/pit_stops.csv')
urlretrieve(url_qualifying, './data/qualifying.csv')
urlretrieve(url_races, './data/races.csv')
urlretrieve(url_results, './data/results.csv')
urlretrieve(url_status, './data/status.csv')
urlretrieve(url_schema, './data/f1db_schema.txt');
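The twelve urlretrieve calls above repeat one pattern; as a sketch, they could instead be driven by a single mapping of filenames to URLs. The demo_source.csv file and the file:// URL below are hypothetical stand-ins so the loop runs offline; in the notebook the dictionary values would be the url_* gist variables defined earlier.

```python
from urllib.request import urlretrieve
import os
import pathlib

# A local stand-in file; in the notebook this would be a remote gist URL
src = pathlib.Path('demo_source.csv')
src.write_text('a,b\n1,2\n')

# Hypothetical mapping; real entries would look like 'circuits.csv': url_circuits
data_urls = {'demo.csv': src.resolve().as_uri()}

os.makedirs('./data', exist_ok=True)
for fname, url in data_urls.items():
    urlretrieve(url, os.path.join('./data', fname))

print(sorted(os.listdir('./data')))
```

This keeps the filename/URL pairs in one place, so adding or removing a file means editing a single dictionary entry.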

The following code checks whether the files have been downloaded to the directory.

In [3]:
os.listdir('data')
Out[3]:
['constructor_results.csv',
 'drivers.csv',
 'constructors.csv',
 'status.csv',
 'lap_times.csv',
 'qualifying.csv',
 'f1db_schema.txt',
 'driver_standings.csv',
 'races.csv',
 'pit_stops.csv',
 'circuits.csv',
 'results.csv',
 'constructor_standings.csv']

We can look at the schema text file to get information about the database and the contents of each of the CSV files.

In [4]:
schema = open('./data/f1db_schema.txt', 'r')
print(schema.read())
Tables in f1db: circuits, constructorResults, constructorStandings, constructors, driverStandings, drivers, lapTimes, pitStops, qualifying, races, results, seasons, status

circuits.csv: circuitId (int, PK), circuitRef (varchar), name (varchar), location (varchar), country (varchar), lat (float), lng (float), alt (int), url (varchar, unique)
constructor_results.csv: constructorResultsId (int, PK), raceId (int), constructorId (int), points (float), status (varchar)
constructor_standings.csv: constructorStandingsId (int, PK), raceId (int), constructorId (int), points (float), position (int), positionText (varchar), wins (int)
constructors.csv: constructorId (int, PK), constructorRef (varchar), name (varchar, unique), nationality (varchar), url (varchar)
driver_standings.csv: driverStandingsId (int, PK), raceId (int), driverId (int), points (float), position (int), positionText (varchar), wins (int)
drivers.csv: driverId (int, PK), driverRef (varchar), number (int), code (varchar), forename (varchar), surname (varchar), dob (date), nationality (varchar), url (varchar, unique)
lap_times.csv: raceId, driverId, lap (composite int PK), position (int), time (varchar), milliseconds (int)
pit_stops.csv: raceId, driverId, stop (composite int PK), lap (int), time (time), duration (varchar), milliseconds (int)
qualifying.csv: qualifyId (int, PK), raceId (int), driverId (int), constructorId (int), number (int), position (int), q1 (varchar), q2 (varchar), q3 (varchar)
races.csv: raceId (int, PK), year (int), round (int), circuitId (int), name (varchar), date (date), time (time), url (varchar, unique)
results.csv: resultId (int, PK), raceId (int), driverId (int), constructorId (int), number (int), grid (int), position (int), positionText (varchar), positionOrder (int), points (float), laps (int), time (varchar), milliseconds (int), fastestLap (int), rank (int), fastestLapTime (varchar), fastestLapSpeed (varchar), statusId (int)
seasons.csv: year (int, PK), url (varchar, unique)
status.csv: statusId (int, PK), status (varchar)

There are 13 files in the dataset, connected by various Ids. We can use these Ids to join files into the data frames we need. The data seems pretty well organised, and there is an active community reporting bugs in the dataset, so we will skip data cleaning for the time being; any necessary processing will be done along the way. Now, let's dive right in!
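As a minimal sketch of how these Ids link the files (toy rows mimicking the schema, not real data), joining results to races and drivers looks like this:

```python
import pandas as pd

# Toy rows mimicking the schema; the values are illustrative only
races = pd.DataFrame({'raceId': [1, 2],
                      'year': [1950, 1950],
                      'name': ['British Grand Prix', 'Monaco Grand Prix']})
drivers = pd.DataFrame({'driverId': [10, 11],
                        'surname': ['Farina', 'Fagioli']})
results = pd.DataFrame({'resultId': [100, 101],
                        'raceId': [1, 1],
                        'driverId': [10, 11],
                        'positionOrder': [1, 3]})

# Chain merges on the shared Id columns to build one combined frame
combined = (results.merge(races, on='raceId', how='left')
                   .merge(drivers, on='driverId', how='left'))
print(combined[['year', 'name', 'surname', 'positionOrder']])
```

The same chained-merge pattern applies to the real CSVs, with raceId, driverId, constructorId and the other Ids as join keys.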

Chequered Flag!

Let us start by looking at the Formula 1 races so far. For this, we will load the races.csv file as a data frame.

In [5]:
races_raw_df = pd.read_csv('data/races.csv')
races_raw_df
Out[5]:

The first few rows are from the 2009 season, so the data is not in chronological order. Since the date column has type date (see schema), we can sort by it as follows.

In [6]:
races_raw_df = races_raw_df.sort_values('date')
races_raw_df
Out[6]:

The file lists all the races held so far, but it also includes upcoming races scheduled for this year's championship. Let's remove those rows, as these races are yet to happen. The last completed race is the Russian GP held on 27th September 2020, so we will filter out the future races using the date column again.

In [7]:
races_df = races_raw_df[races_raw_df.date <= '2020-09-27']
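The string comparison above works only because the date column is in ISO YYYY-MM-DD format, which sorts lexicographically. A sketch of the more explicit alternative (illustrative rows, column names as in races.csv) parses the column first:

```python
import pandas as pd

# Illustrative rows with ISO-formatted date strings, as in races.csv
races = pd.DataFrame({'name': ['Russian Grand Prix', 'Eifel Grand Prix'],
                      'date': ['2020-09-27', '2020-10-11']})

# Parsing to datetime makes the cutoff comparison type-safe
races['date'] = pd.to_datetime(races['date'])
completed = races[races['date'] <= pd.Timestamp('2020-09-27')]
print(completed['name'].tolist())  # ['Russian Grand Prix']
```

Parsed dates also unlock `.dt` accessors (year, month, weekday) for later analysis.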
Question: How many chequered flags have been waved in Formula 1 so far?

Let's see how many F1 championship races have been completed until the Russian GP.

In [8]:
races_completed = len(races_df)
print('The number of chequered flags waved in F1 so far (until 2020 Russian GP) is {}.'.format(races_completed))
The number of chequered flags waved in F1 so far (until 2020 Russian GP) is 1028.

Now, let us visualise the trend in the number of championship races held per season over the 70 years.

In [9]:
idx = races_df.groupby('year')['date'].transform('max') == races_df['date'] # True where the race is the last of its season
season_finale = races_df[idx].rename(columns = {'round' : 'tot_races'})
season_finale = season_finale[season_finale.year != 2020] # Season-ending races only, excluding the incomplete 2020 season

# Plotting
plt.figure(figsize=(16, 6))

plt.plot(season_finale.year, season_finale.tot_races, 's-b')
plt.xlabel('Year')
plt.ylabel('Number of races')
plt.title("Championship races per season");
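The groupby/transform trick used to flag season finales can be seen on toy data (made-up rows, same idea): transform broadcasts each group's maximum back onto every row of that group, so a row equal to its own season's maximum date is the finale.

```python
import pandas as pd

# Made-up rows: two races in 1950, one in 1951
df = pd.DataFrame({'year': [1950, 1950, 1951],
                   'date': ['1950-05-13', '1950-09-03', '1951-05-27'],
                   'round': [1, 7, 1]})

# Each row is compared against its own season's latest date;
# only the season-ending race matches
is_finale = df.groupby('year')['date'].transform('max') == df['date']
print(is_finale.tolist())  # [False, True, True]
```

Unlike an aggregation, which would return one row per year, transform preserves the original index, which is what makes the elementwise comparison possible.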

There were only seven races in the inaugural 1950 Formula 1 season; compare that to 21 championship races last year! Although most races happen in Europe, Formula 1 currently races on every continent except Africa and Antarctica. Travelling is a heavy burden on both the drivers and the teams, not to mention the amount of cargo they transport around the world.

Driving in Circles!

Let us look at the circuits around the world that have hosted Formula 1 races. For this, we will load the circuits.csv file as a data frame.

In [10]:
circuits_raw_df = pd.read_csv('data/circuits.csv')
circuits_raw_df
Out[10]:

If you look at the data, you can find names of some circuits that haven't hosted any races yet. For example, Hanoi in Vietnam was supposed to hold a race this season. But due to the pandemic, like many other races, it had to be cancelled. To find the actual number of tracks to have hosted an F1 race, we will merge the above data frame with the one containing data of all the races.

In [11]:
host_circuits = races_df.drop(['date', 'time', 'url', 'round'], axis=1).rename(columns = {'name':'gp_name'}).merge(
    circuits_raw_df.drop(['lat', 'lng', 'alt', 'url'], axis=1).rename(columns = {'name':'cir_name'}), how = 'left')
Question: How many circuits have hosted a Formula 1 race on their tarmac?
In [12]:
tot_host_cir = host_circuits['circuitId'].nunique()
print('{} different circuits around the world have hosted a Formula 1 race on their tarmac.'.format(tot_host_cir))
73 different circuits around the world have hosted a Formula 1 race on their tarmac.

So far, 73 circuits have served as venues for F1 races. Autodromo Internazionale del Mugello is the latest addition to the list, having hosted the penultimate Grand Prix so far, the 2020 Tuscan GP. We will now look at the top 10 circuits that have hosted the most GPs.

In [13]:
# Sorting top 10 after taking count of occurrences
top_hosts = host_circuits['cir_name'].value_counts().head(10).rename_axis(
           'name').reset_index(name='counts')

# Plotting
plt.figure(figsize = (12,6))

sns.barplot(x=top_hosts.counts, y=top_hosts.name)
plt.title('Top 10 F1 host circuits')
plt.xlabel('Number of races hosted')
plt.ylabel('');

With 70 races, Monza has hosted more GPs than any other circuit. This high-speed Italian racetrack has been part of every Formula 1 season except 1980. In second spot is the legendary Circuit de Monaco, laid out on the streets of Monte Carlo and famous for its glitz and glamour as much as its racing. Silverstone and Spa are other iconic circuits that can boast more than 50 races each. All these historic tracks still feature in modern F1 calendars.

We can now look at the countries to have hosted Formula 1 over the years.

In [14]:
host_countries = host_circuits[['raceId', 'year', 'country']].groupby(
    ['year','country']).size().reset_index().rename(columns = {0:'Races'})

# Plotting
plt.figure(figsize = (15,10))

sns.scatterplot(data = host_countries, x = 'year', y = 'country', s = 100, hue = 'Races', palette="deep")
plt.title('F1 host countries')
plt.xlabel('Year')
plt.ylabel('');

Formula 1 has deep roots in European soil, and it shows: all the countries that have consistently hosted F1 races are in Europe. Italy and the UK have been part of every Formula 1 season so far; Belgium and Germany are other top host nations. The USA is the country outside Europe with the most races. F1 has looked for new grounds ever since its inception: since the 1970s and 1980s, countries like Canada, Brazil, Japan, Australia and Hungary have been visited frequently, and some of the regular modern F1 circuits are in Bahrain, China, Singapore and the UAE. Countries like Italy, Germany and the USA have hosted races at multiple circuits in the same calendar year; the United States hosted three Grands Prix in 1982, one each in Long Beach, Detroit and Las Vegas. In 2020, when the organisers had to reschedule the season due to the pandemic, Silverstone in the UK and the Red Bull Ring in Austria hosted two races each. Bahrain is also set to do the same, but its races will be on different layouts.


Let us save and upload our work to Jovian before continuing.

In [53]:
project_name = "zerotopandas-course-project-70-years-of-f1" # change this (use lowercase letters and hyphens only)
In [2]:
!pip install jovian --upgrade -q
In [54]:
import jovian
In [55]:
jovian.commit(project=project_name)
