Formula 1, the pinnacle of motorsport, celebrates its 70th anniversary this year. Since the first World Championship race, the 1950 British Grand Prix, the sport has not only enthralled racing fans around the world but has also been a platform for innovation and cutting-edge technological development. Being an F1 fan myself, I thought I would make my first data analysis project about the 70-year history of the sport. Thanks to the Ergast Developer API and its well-maintained database images for making this possible; here we use their data up to the 2020 Russian GP, held on 27th September.
This project is part of the course Data Analysis with Python: Zero to Pandas hosted by Jovian. I will be using the NumPy and Pandas libraries in Python for some primary data exploration of the driver and constructor records and a few other aspects of F1. The Matplotlib and Seaborn libraries are used for plotting and visualisation.
This is an executable Jupyter notebook hosted on Jovian.ml, a platform for sharing data science projects. You can run and experiment with the code in a couple of ways: using free online resources (recommended) or on your own computer.
The easiest way to start executing this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks. You can also select "Run on Colab" or "Run on Kaggle".
To run the notebook on your own computer:
Install Conda by following these instructions, and add the Conda binaries to your system PATH so that you can use the conda command on your terminal.
Create a Conda environment and install the required libraries by running these commands on the terminal:
conda create -n zerotopandas -y python=3.8
conda activate zerotopandas
pip install jovian jupyter numpy pandas matplotlib seaborn opendatasets --upgrade
Clone this notebook and enter the newly created directory:
jovian clone notebook-owner/notebook-id
cd directory-name
Start the Jupyter notebook:
jupyter notebook
You can now access Jupyter's web interface by clicking the link that shows up on the terminal or by visiting http://localhost:8888 in your browser. Click on the notebook file (it has a .ipynb extension) to open it.
# Importing python libraries
from urllib.request import urlretrieve
import os
import pandas as pd
import numpy as np
# Importing plotting and visualisation libraries
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
%matplotlib inline
sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
For easy access, the CSV files and their schema from Ergast (updated till 27/09/2020) are hosted as GitHub gists. The following variables store the URL of each file.
# URLs to fetch data
url_circuits = 'https://gist.githubusercontent.com/harikanth-m/f15b47aeab0d75c128824d0f72821db1/raw/circuits.csv'
url_const_results = 'https://gist.githubusercontent.com/harikanth-m/f41f068dca8a309c81a04c6a45fecf58/raw/constructor_results.csv'
url_const_standings = 'https://gist.githubusercontent.com/harikanth-m/ccf0bd917f013d34928ae8c9b695059c/raw/constructor_standings.csv'
url_const = 'https://gist.githubusercontent.com/harikanth-m/f128413f4bff5c3c901b9a8c599a78b9/raw/constructors.csv'
url_driver_standings = 'https://gist.githubusercontent.com/harikanth-m/fcdb3e6650972ee6f0db07288074b0b7/raw/driver_standings.csv'
url_drivers = 'https://gist.githubusercontent.com/harikanth-m/d37be6c8c871300822bf8676e2206d9b/raw/drivers.csv'
url_lap_times = 'https://gist.githubusercontent.com/harikanth-m/61770032a57dd013651163142589b2bc/raw/lap_times.csv'
url_pit_stops = 'https://gist.githubusercontent.com/harikanth-m/45d94ae5a8b8e7f0cace662cc7942d04/raw/pit_stops.csv'
url_qualifying = 'https://gist.githubusercontent.com/harikanth-m/08f55f6687874c5a0bf01790c477e4bc/raw/qualifying.csv'
url_races = 'https://gist.githubusercontent.com/harikanth-m/9382cffceb1264e246fae76e4989cc31/raw/races.csv'
url_results = 'https://gist.githubusercontent.com/harikanth-m/6b1b6e36d3709abf3b97fc703303471b/raw/results.csv'
url_status = 'https://gist.githubusercontent.com/harikanth-m/c1bb17485523243929c57f4f13d4fcba/raw/status.csv'
url_schema = 'https://gist.githubusercontent.com/harikanth-m/71f7fd19c8afba09ad99664ece517268/raw/f1db_schema.txt'
Let's make a new directory to download the data files.
# Creating a new directory for the data files
os.makedirs('./data', exist_ok=True)
We will now download the files to this directory using the urlretrieve function.
# Downloading the csv files
urlretrieve(url_circuits, './data/circuits.csv')
urlretrieve(url_const_results, './data/constructor_results.csv')
urlretrieve(url_const_standings, './data/constructor_standings.csv')
urlretrieve(url_const, './data/constructors.csv')
urlretrieve(url_driver_standings,'./data/driver_standings.csv')
urlretrieve(url_drivers, './data/drivers.csv')
urlretrieve(url_lap_times, './data/lap_times.csv')
urlretrieve(url_pit_stops, './data/pit_stops.csv')
urlretrieve(url_qualifying, './data/qualifying.csv')
urlretrieve(url_races, './data/races.csv')
urlretrieve(url_results, './data/results.csv')
urlretrieve(url_status, './data/status.csv')
urlretrieve(url_schema, './data/f1db_schema.txt');
The following code checks whether the files have been downloaded to the directory.
os.listdir('data')
['constructor_results.csv',
'drivers.csv',
'constructors.csv',
'status.csv',
'lap_times.csv',
'qualifying.csv',
'f1db_schema.txt',
'driver_standings.csv',
'races.csv',
'pit_stops.csv',
'circuits.csv',
'results.csv',
'constructor_standings.csv']
We can look at the schema text file to get information about the database and the contents of each of the CSV files.
with open('./data/f1db_schema.txt', 'r') as schema:
    print(schema.read())
+----------------------+
| Tables_in_f1db |
+----------------------+
| circuits |
| constructorResults |
| constructorStandings |
| constructors |
| driverStandings |
| drivers |
| lapTimes |
| pitStops |
| qualifying |
| races |
| results |
| seasons |
| status |
+----------------------+
circuits.csv
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| circuitId | int(11) | NO | PRI | NULL | auto_increment |
| circuitRef | varchar(255) | NO | | | |
| name | varchar(255) | NO | | | |
| location | varchar(255) | YES | | NULL | |
| country | varchar(255) | YES | | NULL | |
| lat | float | YES | | NULL | |
| lng | float | YES | | NULL | |
| alt | int(11) | YES | | NULL | |
| url | varchar(255) | NO | UNI | | |
+------------+--------------+------+-----+---------+----------------+
constructor_results.csv
+----------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+--------------+------+-----+---------+----------------+
| constructorResultsId | int(11) | NO | PRI | NULL | auto_increment |
| raceId | int(11) | NO | | 0 | |
| constructorId | int(11) | NO | | 0 | |
| points | float | YES | | NULL | |
| status | varchar(255) | YES | | NULL | |
+----------------------+--------------+------+-----+---------+----------------+
constructor_standings.csv
+------------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------------+--------------+------+-----+---------+----------------+
| constructorStandingsId | int(11) | NO | PRI | NULL | auto_increment |
| raceId | int(11) | NO | | 0 | |
| constructorId | int(11) | NO | | 0 | |
| points | float | NO | | 0 | |
| position | int(11) | YES | | NULL | |
| positionText | varchar(255) | YES | | NULL | |
| wins | int(11) | NO | | 0 | |
+------------------------+--------------+------+-----+---------+----------------+
constructors.csv
+----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+--------------+------+-----+---------+----------------+
| constructorId | int(11) | NO | PRI | NULL | auto_increment |
| constructorRef | varchar(255) | NO | | | |
| name | varchar(255) | NO | UNI | | |
| nationality | varchar(255) | YES | | NULL | |
| url | varchar(255) | NO | | | |
+----------------+--------------+------+-----+---------+----------------+
driver_standings.csv
+-------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+----------------+
| driverStandingsId | int(11) | NO | PRI | NULL | auto_increment |
| raceId | int(11) | NO | | 0 | |
| driverId | int(11) | NO | | 0 | |
| points | float | NO | | 0 | |
| position | int(11) | YES | | NULL | |
| positionText | varchar(255) | YES | | NULL | |
| wins | int(11) | NO | | 0 | |
+-------------------+--------------+------+-----+---------+----------------+
drivers.csv
+-------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+----------------+
| driverId | int(11) | NO | PRI | NULL | auto_increment |
| driverRef | varchar(255) | NO | | | |
| number | int(11) | YES | | NULL | |
| code | varchar(3) | YES | | NULL | |
| forename | varchar(255) | NO | | | |
| surname | varchar(255) | NO | | | |
| dob | date | YES | | NULL | |
| nationality | varchar(255) | YES | | NULL | |
| url | varchar(255) | NO | UNI | | |
+-------------+--------------+------+-----+---------+----------------+
lap_times.csv
+--------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+-------+
| raceId | int(11) | NO | PRI | NULL | |
| driverId | int(11) | NO | PRI | NULL | |
| lap | int(11) | NO | PRI | NULL | |
| position | int(11) | YES | | NULL | |
| time | varchar(255) | YES | | NULL | |
| milliseconds | int(11) | YES | | NULL | |
+--------------+--------------+------+-----+---------+-------+
pit_stops.csv
+--------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+-------+
| raceId | int(11) | NO | PRI | NULL | |
| driverId | int(11) | NO | PRI | NULL | |
| stop | int(11) | NO | PRI | NULL | |
| lap | int(11) | NO | | NULL | |
| time | time | NO | | NULL | |
| duration | varchar(255) | YES | | NULL | |
| milliseconds | int(11) | YES | | NULL | |
+--------------+--------------+------+-----+---------+-------+
qualifying.csv
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| qualifyId | int(11) | NO | PRI | NULL | auto_increment |
| raceId | int(11) | NO | | 0 | |
| driverId | int(11) | NO | | 0 | |
| constructorId | int(11) | NO | | 0 | |
| number | int(11) | NO | | 0 | |
| position | int(11) | YES | | NULL | |
| q1 | varchar(255) | YES | | NULL | |
| q2 | varchar(255) | YES | | NULL | |
| q3 | varchar(255) | YES | | NULL | |
+---------------+--------------+------+-----+---------+----------------+
races.csv
+-----------+--------------+------+-----+------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+------------+----------------+
| raceId | int(11) | NO | PRI | NULL | auto_increment |
| year | int(11) | NO | | 0 | |
| round | int(11) | NO | | 0 | |
| circuitId | int(11) | NO | | 0 | |
| name | varchar(255) | NO | | | |
| date | date | NO | | 0000-00-00 | |
| time | time | YES | | NULL | |
| url | varchar(255) | YES | UNI | NULL | |
+-----------+--------------+------+-----+------------+----------------+
results.csv
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
| resultId | int(11) | NO | PRI | NULL | auto_increment |
| raceId | int(11) | NO | | 0 | |
| driverId | int(11) | NO | | 0 | |
| constructorId | int(11) | NO | | 0 | |
| number | int(11) | YES | | NULL | |
| grid | int(11) | NO | | 0 | |
| position | int(11) | YES | | NULL | |
| positionText | varchar(255) | NO | | | |
| positionOrder | int(11) | NO | | 0 | |
| points | float | NO | | 0 | |
| laps | int(11) | NO | | 0 | |
| time | varchar(255) | YES | | NULL | |
| milliseconds | int(11) | YES | | NULL | |
| fastestLap | int(11) | YES | | NULL | |
| rank | int(11) | YES | | 0 | |
| fastestLapTime | varchar(255) | YES | | NULL | |
| fastestLapSpeed | varchar(255) | YES | | NULL | |
| statusId | int(11) | NO | | 0 | |
+-----------------+--------------+------+-----+---------+----------------+
seasons.csv
+-------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| year | int(11) | NO | PRI | 0 | |
| url | varchar(255) | NO | UNI | | |
+-------+--------------+------+-----+---------+-------+
status.csv
+----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+----------------+
| statusId | int(11) | NO | PRI | NULL | auto_increment |
| status | varchar(255) | NO | | | |
+----------+--------------+------+-----+---------+----------------+
The dataset contains 12 CSV files plus the schema file, and various IDs connect them; we can use these IDs to join files into the data frames we need (a minimal join sketch follows below). The data seems pretty well organised, and there is an active community reporting bugs in the dataset, so we will skip data cleaning for the time being and carry out any necessary processing along the way.
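As an illustration of how these ID columns tie the files together, here is a minimal sketch that merges results.csv with races.csv and drivers.csv so that each race result carries a readable race name, season and driver name. It assumes the CSVs downloaded above contain header rows matching the schema printed earlier.
# Joining results with races and drivers on their shared ID columns
import pandas as pd

results_df = pd.read_csv('./data/results.csv')
races_df = pd.read_csv('./data/races.csv')
drivers_df = pd.read_csv('./data/drivers.csv')

race_results = (results_df
                .merge(races_df[['raceId', 'year', 'name']], on='raceId')
                .merge(drivers_df[['driverId', 'forename', 'surname']], on='driverId'))
race_results.head()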
Let us save and upload our work to Jovian before continuing.
project_name = "zerotopandas-course-project-70-years-of-f1" # change this (use lowercase letters and hyphens only)
!pip install jovian --upgrade -q
import jovian
jovian.commit(project=project_name)
[jovian] Attempting to save notebook..
TODO - Write some explanation here.
Instructions (delete this cell):
- Load the dataset into a data frame using Pandas
- Explore the number of rows & columns, ranges of values etc.
- Handle missing, incorrect and invalid data
- Perform any additional steps (parsing dates, creating additional columns, merging multiple datasets, etc.); a minimal loading sketch follows after this list
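As a starting point for the steps listed above, here is a minimal sketch that loads a few of the downloaded CSV files into Pandas data frames and checks their size. It assumes the files contain header rows matching the schema printed earlier.
# Loading a few of the CSV files into data frames and checking their shape
import pandas as pd

races_df = pd.read_csv('./data/races.csv')
drivers_df = pd.read_csv('./data/drivers.csv')
results_df = pd.read_csv('./data/results.csv')

for name, df in [('races', races_df), ('drivers', drivers_df), ('results', results_df)]:
    print(name, df.shape)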
import jovian
jovian.commit()
[jovian] Attempting to save notebook..
[jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/
[jovian] Uploading notebook..
[jovian] Capturing environment..
[jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter
TODO - write some explanation here.
Instructions (delete this cell)
- Compute the mean, sum, range and other interesting statistics for numeric columns
- Explore distributions of numeric columns using histograms etc.
- Explore relationships between columns using scatter plots, bar charts etc.
- Make a note of interesting insights from the exploratory analysis (a minimal statistics sketch follows after this list)
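As a starting point for the summary-statistics step above, here is a minimal sketch that describes some numeric columns of results.csv. It assumes the file contains a header row matching the schema printed earlier.
# Summary statistics for some numeric columns of the results file
import pandas as pd

results_df = pd.read_csv('./data/results.csv')
results_df[['grid', 'positionOrder', 'points', 'laps']].describe()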
Let's begin by importing matplotlib.pyplot and seaborn.
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
TODO - Explore one or more columns by plotting a graph below, and add some explanation about it
TODO - Explore one or more columns by plotting a graph below, and add some explanation about it
TODO - Explore one or more columns by plotting a graph below, and add some explanation about it
TODO - Explore one or more columns by plotting a graph below, and add some explanation about it
TODO - Explore one or more columns by plotting a graph below, and add some explanation about it
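As one example of the kind of plot these cells ask for, here is a minimal sketch that counts the number of races held in each season using races.csv (again assuming the file has a header row matching the schema printed earlier).
# Line plot of the number of races held in each season
import pandas as pd
import matplotlib.pyplot as plt

races_df = pd.read_csv('./data/races.csv')
races_per_year = races_df.groupby('year')['raceId'].count()

plt.plot(races_per_year.index, races_per_year.values)
plt.xlabel('Season')
plt.ylabel('Number of races')
plt.title('Races per season');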
Let us save and upload our work to Jovian before continuing.
import jovian
jovian.commit()
[jovian] Attempting to save notebook..
[jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/
[jovian] Uploading notebook..
[jovian] Capturing environment..
[jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter
TODO - write some explanation here.
Instructions (delete this cell)
- Ask at least 5 interesting questions about your dataset
- Answer the questions either by computing the results using Numpy/Pandas or by plotting graphs using Matplotlib/Seaborn
- Create new columns, merge multiple datasets and perform grouping/aggregation wherever necessary (a minimal grouping sketch follows after this list)
- Wherever you're using a library function from Pandas/Numpy/Matplotlib etc., explain briefly what it does
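As one example of the kind of question this section asks for, here is a minimal sketch that counts race wins per driver by filtering results.csv, merging in drivers.csv and grouping by driver. As before, it assumes the CSVs have header rows matching the schema printed earlier.
# Which drivers have the most race wins? Filter, merge and group
import pandas as pd

results_df = pd.read_csv('./data/results.csv')
drivers_df = pd.read_csv('./data/drivers.csv')

wins = (results_df[results_df['positionOrder'] == 1]
        .merge(drivers_df[['driverId', 'forename', 'surname']], on='driverId')
        .groupby(['forename', 'surname'])
        .size()
        .sort_values(ascending=False)
        .head(10))
print(wins)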
Let us save and upload our work to Jovian before continuing.
import jovian
jovian.commit()
[jovian] Attempting to save notebook..
[jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/
[jovian] Uploading notebook..
[jovian] Capturing environment..
[jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter
TODO - Write some explanation here: a summary of all the inferences drawn from the analysis, and any conclusions you may have drawn by answering various questions.
import jovian
jovian.commit()
[jovian] Attempting to save notebook..
[jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/
[jovian] Uploading notebook..
[jovian] Capturing environment..
[jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter
TODO - Write some explanation here: ideas for future projects using this dataset, and links to resources you found useful.
Submission Instructions (delete this cell)
- Upload your notebook to your Jovian.ml profile using jovian.commit.
- Make a submission here: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/course-project
- Share your work on the forum: https://jovian.ml/forum/t/course-project-on-exploratory-data-analysis-discuss-and-share-your-work/11684
- Share your work on social media (Twitter, LinkedIn, Telegram etc.) and tag @JovianML
(Optional) Write a blog post
- A blog post is a great way to present and showcase your work.
- Sign up on Medium.com to write a blog post for your project.
- Copy over the explanations from your Jupyter notebook into your blog post, and embed code cells & outputs
- Check out the Jovian.ml Medium publication for inspiration: https://medium.com/jovianml
import jovian
jovian.commit()
[jovian] Attempting to save notebook..
[jovian] Updating notebook "aakashns/zerotopandas-course-project-starter" on https://jovian.ml/
[jovian] Uploading notebook..
[jovian] Capturing environment..
[jovian] Committed successfully! https://jovian.ml/aakashns/zerotopandas-course-project-starter