
70 Years of Formula 1 - An Exploratory Data Analysis

Formula 1, the pinnacle of motorsport, celebrates its 70th anniversary this year. Since the British Grand Prix in 1950, the sport has not only enthralled racing fans around the world but has also been a platform for innovation and cutting-edge technological development. Being an F1 fan myself, I thought I would make the 70-year history of the sport the subject of my first data analysis project. Thanks to the Ergast Developer API and its well-maintained database images for making this possible. Here, we will be using their data up to the 2020 Russian GP, held on 27th September. This project is part of the course Data Analysis with Python: Zero to Pandas hosted by Jovian. I will be using the NumPy and Pandas libraries in Python for some primary data exploration of the driver and constructor records, along with a few other aspects of F1. The Matplotlib and Seaborn libraries are used for plotting and visualisation.

How to run the code

This is an executable Jupyter notebook hosted on Jovian.ml, a platform for sharing data science projects. You can run and experiment with the code in a couple of ways: using free online resources (recommended) or on your own computer.

Option 1: Running using free online resources (1-click, recommended)

The easiest way to start executing this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks. You can also select "Run on Colab" or "Run on Kaggle".

Option 2: Running on your computer locally
  1. Install Conda by following these instructions. Add Conda binaries to your system PATH, so you can use the conda command on your terminal.

  2. Create a Conda environment and install the required libraries by running these commands on the terminal:

conda create -n zerotopandas -y python=3.8 
conda activate zerotopandas
pip install jovian jupyter numpy pandas matplotlib seaborn opendatasets --upgrade
  3. Press the "Clone" button above to copy the command for downloading the notebook, and run it on the terminal. This will create a new directory and download the notebook. The command will look something like this:
jovian clone notebook-owner/notebook-id
  4. Enter the newly created directory using cd directory-name and start the Jupyter notebook.
jupyter notebook

You can now access Jupyter's web interface by clicking the link that shows up on the terminal or by visiting http://localhost:8888 on your browser. Click on the notebook file (it has a .ipynb extension) to open it.

Importing Libraries

In [1]:
# Importing python libraries
from urllib.request import urlretrieve
import os
import pandas as pd
import numpy as np
In [2]:
# Importing plotting and visualisation libraries
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
%matplotlib inline

sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'

Downloading the Dataset

For easy access to the CSV files and their schema from Ergast (updated up to 27/09/2020), they are hosted as GitHub gists. The following variables give the URL of each file.

In [3]:
# URLs to fetch data
url_circuits = 'https://gist.githubusercontent.com/harikanth-m/f15b47aeab0d75c128824d0f72821db1/raw/circuits.csv'
url_const_results = 'https://gist.githubusercontent.com/harikanth-m/f41f068dca8a309c81a04c6a45fecf58/raw/constructor_results.csv'
url_const_standings = 'https://gist.githubusercontent.com/harikanth-m/ccf0bd917f013d34928ae8c9b695059c/raw/constructor_standings.csv'
url_const = 'https://gist.githubusercontent.com/harikanth-m/f128413f4bff5c3c901b9a8c599a78b9/raw/constructors.csv'
url_driver_standings = 'https://gist.githubusercontent.com/harikanth-m/fcdb3e6650972ee6f0db07288074b0b7/raw/driver_standings.csv'
url_drivers = 'https://gist.githubusercontent.com/harikanth-m/d37be6c8c871300822bf8676e2206d9b/raw/drivers.csv'
url_lap_times = 'https://gist.githubusercontent.com/harikanth-m/61770032a57dd013651163142589b2bc/raw/lap_times.csv'
url_pit_stops = 'https://gist.githubusercontent.com/harikanth-m/45d94ae5a8b8e7f0cace662cc7942d04/raw/pit_stops.csv'
url_qualifying = 'https://gist.githubusercontent.com/harikanth-m/08f55f6687874c5a0bf01790c477e4bc/raw/qualifying.csv'
url_races = 'https://gist.githubusercontent.com/harikanth-m/9382cffceb1264e246fae76e4989cc31/raw/races.csv'
url_results = 'https://gist.githubusercontent.com/harikanth-m/6b1b6e36d3709abf3b97fc703303471b/raw/results.csv'
url_status = 'https://gist.githubusercontent.com/harikanth-m/c1bb17485523243929c57f4f13d4fcba/raw/status.csv'
url_schema = 'https://gist.githubusercontent.com/harikanth-m/71f7fd19c8afba09ad99664ece517268/raw/f1db_schema.txt'

Let's make a new directory to download the data files.

In [4]:
# Creating a new directory for the data files
os.makedirs('./data', exist_ok=True)

We will now download the files to this directory using the urlretrieve function.

In [5]:
# Downloading the csv files
urlretrieve(url_circuits, './data/circuits.csv')
urlretrieve(url_const_results, './data/constructor_results.csv')
urlretrieve(url_const_standings, './data/constructor_standings.csv')
urlretrieve(url_const, './data/constructors.csv')
urlretrieve(url_driver_standings,'./data/driver_standings.csv')
urlretrieve(url_drivers, './data/drivers.csv')
urlretrieve(url_lap_times, './data/lap_times.csv')
urlretrieve(url_pit_stops, './data/pit_stops.csv')
urlretrieve(url_qualifying, './data/qualifying.csv')
urlretrieve(url_races, './data/races.csv')
urlretrieve(url_results, './data/results.csv')
urlretrieve(url_status, './data/status.csv')
urlretrieve(url_schema, './data/f1db_schema.txt');

The following code checks whether the files have been downloaded to the directory.

In [13]:
os.listdir('data')
Out[13]:
['constructor_results.csv',
 'drivers.csv',
 'constructors.csv',
 'status.csv',
 'lap_times.csv',
 'qualifying.csv',
 'f1db_schema.txt',
 'driver_standings.csv',
 'races.csv',
 'pit_stops.csv',
 'circuits.csv',
 'results.csv',
 'constructor_standings.csv']

We can look at the schema text file to get information about the database and the contents of each of the CSV files.

In [14]:
# Reading the schema file; the with-block closes the file automatically
with open('./data/f1db_schema.txt', 'r') as schema:
    print(schema.read())
Output (condensed): the schema lists the tables in f1db — circuits, constructorResults, constructorStandings, constructors, driverStandings, drivers, lapTimes, pitStops, qualifying, races, results, seasons, status — and the fields of each file (primary keys marked PK):

circuits.csv: circuitId (PK), circuitRef, name, location, country, lat, lng, alt, url
constructor_results.csv: constructorResultsId (PK), raceId, constructorId, points, status
constructor_standings.csv: constructorStandingsId (PK), raceId, constructorId, points, position, positionText, wins
constructors.csv: constructorId (PK), constructorRef, name, nationality, url
driver_standings.csv: driverStandingsId (PK), raceId, driverId, points, position, positionText, wins
drivers.csv: driverId (PK), driverRef, number, code, forename, surname, dob, nationality, url
lap_times.csv: raceId, driverId, lap (composite PK), position, time, milliseconds
pit_stops.csv: raceId, driverId, stop (composite PK), lap, time, duration, milliseconds
qualifying.csv: qualifyId (PK), raceId, driverId, constructorId, number, position, q1, q2, q3
races.csv: raceId (PK), year, round, circuitId, name, date, time, url
results.csv: resultId (PK), raceId, driverId, constructorId, number, grid, position, positionText, positionOrder, points, laps, time, milliseconds, fastestLap, rank, fastestLapTime, fastestLapSpeed, statusId
seasons.csv: year (PK), url
status.csv: statusId (PK), status

There are 13 files in the dataset, connected by various Id columns. We can join files on these Ids to build the data frames we need. The data looks well organised, and there is an active community reporting bugs in the dataset, so we will skip data cleaning for now; any necessary processing will be done along the way.
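As a sketch of how these Id columns link the files, here is a join of results-style rows to driver details on the shared driverId key. The sample rows below are made up for illustration (they mirror the results.csv and drivers.csv schemas, but are not real F1 data):

```python
import pandas as pd

# Tiny illustrative frames mirroring the results.csv and drivers.csv schemas
# (sample rows are made up for demonstration, not real F1 data)
results = pd.DataFrame({
    'resultId': [1, 2],
    'raceId': [841, 841],
    'driverId': [1, 2],
    'points': [25.0, 18.0],
})
drivers = pd.DataFrame({
    'driverId': [1, 2],
    'forename': ['Lewis', 'Sebastian'],
    'surname': ['Hamilton', 'Vettel'],
})

# The shared driverId key lets us attach driver details to each result
merged = results.merge(drivers, on='driverId', how='left')
print(merged[['raceId', 'surname', 'points']])
```

The same pattern applies to the real files: read two CSVs from ./data and merge on the appropriate Id column (driverId, constructorId, raceId, circuitId, or statusId).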

Let us save and upload our work to Jovian before continuing.

In [15]:
project_name = "zerotopandas-course-project-70-years-of-f1" # change this (use lowercase letters and hyphens only)
In [2]:
!pip install jovian --upgrade -q
In [16]:
import jovian
In [ ]:
jovian.commit(project=project_name)
[jovian] Attempting to save notebook..

Data Preparation and Cleaning

TODO - Write some explanation here.

Instructions (delete this cell):

  • Load the dataset into a data frame using Pandas
  • Explore the number of rows & columns, ranges of values etc.
  • Handle missing, incorrect and invalid data
  • Perform any additional steps (parsing dates, creating additional columns, merging multiple dataset etc.)
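The steps above can be sketched as follows. To keep the example self-contained, it reads a minimal races.csv-style sample from a string (the two rows are made up for illustration) rather than the downloaded file:

```python
import io
import pandas as pd

# A minimal races.csv-style sample (made-up rows) to illustrate the steps above;
# in the notebook, read './data/races.csv' instead
csv_text = """raceId,year,round,name,date
1,2009,1,Australian Grand Prix,2009-03-29
2,2009,2,Malaysian Grand Prix,2009-04-05
"""
races = pd.read_csv(io.StringIO(csv_text), parse_dates=['date'])

print(races.shape)         # number of rows and columns
print(races.isna().sum())  # missing values per column

# An additional derived column, e.g. the decade each race belongs to
races['decade'] = (races['year'] // 10) * 10
```

parse_dates converts the date column to a proper datetime type, which makes filtering and grouping by year or decade straightforward later on.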
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [22]:
import jovian
In [23]:
jovian.commit()
[jovian] Attempting to save notebook.. [jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/ [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter

Exploratory Analysis and Visualization

TODO - write some explanation here.

Instructions (delete this cell)

  • Compute the mean, sum, range and other interesting statistics for numeric columns
  • Explore distributions of numeric columns using histograms etc.
  • Explore relationship between columns using scatter plots, bar charts etc.
  • Make a note of interesting insights from the exploratory analysis

Let's begin by importing matplotlib.pyplot and seaborn.

In [24]:
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
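As a sketch of the kind of plot to build in this section, here is a simple bar chart. The points totals below are illustrative values, not real F1 data:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; unnecessary inside the notebook
import matplotlib.pyplot as plt

# Made-up points totals, purely to illustrate the plotting pattern
seasons = [2016, 2017, 2018, 2019, 2020]
points = [380, 363, 408, 413, 347]

fig, ax = plt.subplots()
ax.bar(seasons, points)
ax.set_xlabel('Season')
ax.set_ylabel('Points')
ax.set_title('Champion points by season (illustrative values)')
```

With the real data, the values would come from a data frame built by joining driver_standings.csv with races.csv and taking the final round of each season.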

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

Let us save and upload our work to Jovian before continuing.

In [25]:
import jovian
In [26]:
jovian.commit()
[jovian] Attempting to save notebook.. [jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/ [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter

Asking and Answering Questions

TODO - write some explanation here.

Instructions (delete this cell)

  • Ask at least 5 interesting questions about your dataset
  • Answer the questions either by computing the results using Numpy/Pandas or by plotting graphs using Matplotlib/Seaborn
  • Create new columns, merge multiple dataset and perform grouping/aggregation wherever necessary
  • Wherever you're using a library function from Pandas/Numpy/Matplotlib etc. explain briefly what it does
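A typical question here ("which driver has the most race wins?") reduces to a filter plus a groupby aggregation. The sketch below uses a few made-up results-style rows so it stands alone; with the real data, the same pattern runs on results.csv:

```python
import pandas as pd

# Illustrative results-style rows (made up) to show the groupby pattern
results = pd.DataFrame({
    'driverId': [1, 1, 2, 1, 2],
    'position': [1, 1, 1, 2, 3],
})

# Keep only first-place finishes, count them per driver, largest first
wins = (results[results['position'] == 1]
        .groupby('driverId')
        .size()
        .sort_values(ascending=False))
print(wins)
```

groupby('driverId').size() counts rows per driver, and sort_values orders the drivers by win count; merging the result with drivers.csv would attach readable names.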
Q1: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q2: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q3: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q4: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q5: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 

Let us save and upload our work to Jovian before continuing.

In [28]:
import jovian
In [29]:
jovian.commit()
[jovian] Attempting to save notebook.. [jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/ [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter

Inferences and Conclusion

TODO - Write some explanation here: a summary of all the inferences drawn from the analysis, and any conclusions you may have drawn by answering various questions.

In [30]:
import jovian
In [31]:
jovian.commit()
[jovian] Attempting to save notebook.. [jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/ [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter

References and Future Work

TODO - Write some explanation here: ideas for future projects using this dataset, and links to resources you found useful.

Submission Instructions (delete this cell)

(Optional) Write a blog post

In [32]:
import jovian
In [35]:
jovian.commit()
[jovian] Attempting to save notebook.. [jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/ [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter
In [ ]: