
Data Analysis with Python: Zero to Pandas - Course Project Guidelines

(remove this cell before submission)

Important links:

This is the starter notebook for the course project for Data Analysis with Python: Zero to Pandas. You will pick a real-world dataset of your choice and apply the concepts learned in this course to perform exploratory data analysis. Use this starter notebook as an outline for your project. Focus on documentation and presentation - this Jupyter notebook will also serve as a project report, so make sure to include detailed explanations wherever possible using Markdown cells.

Evaluation Criteria

Your submission will be evaluated using the following criteria:

  • Dataset must contain at least 3 columns and 150 rows of data
  • You must ask and answer at least 4 questions about the dataset
  • Your submission must include at least 4 visualizations (graphs)
  • Your submission must include explanations using markdown cells, apart from the code.
  • Your work must not be plagiarized, i.e., copy-pasted from somewhere else.

Follow this step-by-step guide to work on your project.

Step 1: Select a real-world dataset

Here's some sample code for downloading the US Elections Dataset:

import opendatasets as od
dataset_url = 'https://www.kaggle.com/tunguz/us-elections-dataset'
od.download(dataset_url)

You can find a list of recommended datasets here: https://jovian.ml/forum/t/recommended-datasets-for-course-project/11711

Step 2: Perform data preparation & cleaning

  • Load the dataset into a data frame using Pandas
  • Explore the number of rows & columns, ranges of values etc.
  • Handle missing, incorrect and invalid data
  • Perform any additional steps (parsing dates, creating additional columns, merging multiple datasets, etc.) - a short sketch follows below
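For example, a minimal data-preparation sketch might look like the following. The file path and the column names ('some_column', 'date') are placeholders for whatever your chosen dataset actually contains:

import pandas as pd

# Load the dataset into a data frame (replace the path with your own CSV file)
raw_df = pd.read_csv('my-dataset/data.csv')

# Explore the number of rows & columns and the column data types
print(raw_df.shape)
raw_df.info()

# Handle missing values, e.g. drop rows where a key column is empty
clean_df = raw_df.dropna(subset=['some_column'])

# Parse dates and create additional columns if required
clean_df['date'] = pd.to_datetime(clean_df['date'], errors='coerce')
clean_df['year'] = clean_df['date'].dt.year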

Step 3: Perform exploratory analysis & visualization

  • Compute the mean, sum, range and other interesting statistics for numeric columns
  • Explore distributions of numeric columns using histograms etc.
  • Explore relationship between columns using scatter plots, bar charts etc.
  • Make a note of interesting insights from the exploratory analysis (see the example sketch below)
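As an illustration, the exploratory step could look like this, continuing with the clean_df data frame from the Step 2 sketch (the column names are again placeholders):

import matplotlib.pyplot as plt
import seaborn as sns

# Summary statistics (count, mean, min, max etc.) for all numeric columns
print(clean_df.describe())

# Distribution of a single numeric column using a histogram
plt.hist(clean_df['some_numeric_column'].dropna(), bins=20)
plt.title('Distribution of some_numeric_column')
plt.show()

# Relationship between two columns using a scatter plot
sns.scatterplot(x='column_a', y='column_b', data=clean_df)
plt.show()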

Step 4: Ask & answer questions about the data

  • Ask at least 4 interesting questions about your dataset
  • Answer the questions either by computing the results using Numpy/Pandas or by plotting graphs using Matplotlib/Seaborn
  • Create new columns, merge multiple datasets and perform grouping/aggregation wherever necessary
  • Wherever you're using a library function from Pandas/Numpy/Matplotlib etc., explain briefly what it does - a minimal example follows below
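For instance, a question like "Which category has the highest average value?" can be answered with a grouping and aggregation. This sketch continues with the clean_df data frame from Step 2, and the column names are placeholders for your own data:

# Group the rows by a categorical column, compute the mean of a numeric column,
# then sort the result to find the highest values
answer = clean_df.groupby('category_column')['value_column'].mean().sort_values(ascending=False)
print(answer.head())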

Step 5: Summarize your inferences & write a conclusion

  • Write a summary of what you've learned from the analysis
  • Include interesting insights and graphs from previous sections
  • Share ideas for future work on the same topic using other relevant datasets
  • Share links to resources you found useful during your analysis

Step 6: Make a submission & share your work

(Optional) Step 7: Write a blog post

Example Projects

Refer to these projects for inspiration:

NOTE: Remove this cell containing the instructions before making your submission. You can do this using the "Edit > Delete Cells" menu option.

Digital Payment in India

Digital Payment India is a one-of-its-kind payment marketplace in India that offers a one-stop solution for all kinds of payments. It empowers merchants, traders and small businesses to collect payments from customers in a hassle-free way.

With the power of Mobile and Aadhaar, we have made it our mission to transform our nation's retail stores into Digital Marts. Our Retail store is the future 'Digital Sevak' and will lead the movement to digitize cash by offering services like Domestic Money Transfer (DMT), Aadhaar Enabled Payment Service (AEPS), Recharge and Bill Payments.

"Data Analysis with Python: Zero to Pandas" is a practical, beginner-friendly and coding-focused introduction to data analysis covering the basics of Python, Numpy, Pandas, data visualization and exploratory data analysis.

How to run the code

This is an executable Jupyter notebook hosted on Jovian.ml, a platform for sharing data science projects. You can run and experiment with the code in a couple of ways: using free online resources (recommended) or on your own computer.

Option 1: Running using free online resources (1-click, recommended)

The easiest way to start executing this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks. You can also select "Run on Colab" or "Run on Kaggle".

Option 2: Running on your computer locally
  1. Install Conda by following these instructions. Add Conda binaries to your system PATH, so you can use the conda command on your terminal.

  2. Create a Conda environment and install the required libraries by running these commands on the terminal:

conda create -n zerotopandas -y python=3.8 
conda activate zerotopandas
pip install jovian jupyter numpy pandas matplotlib seaborn opendatasets --upgrade
  3. Press the "Clone" button above to copy the command for downloading the notebook, and run it on the terminal. This will create a new directory and download the notebook. The command will look something like this:
jovian clone notebook-owner/notebook-id
  4. Enter the newly created directory using cd directory-name and start the Jupyter notebook.
jupyter notebook

You can now access Jupyter's web interface by clicking the link that shows up on the terminal or by visiting http://localhost:8888 on your browser. Click on the notebook file (it has a .ipynb extension) to open it.

Downloading the Dataset

Instructions for downloading the dataset (delete this cell)

In [1]:
!pip install jovian opendatasets --upgrade --quiet
In [2]:
dataset_url = 'https://www.kaggle.com/lazycipher/upi-usage-statistics-aug16-to-feb20' 
In [3]:
import opendatasets as od
od.download(dataset_url)
Please provide your Kaggle credentials to download this dataset. Learn more: http://bit.ly/kaggle-creds Your Kaggle username: akanshasaini Your Kaggle Key: ········
100%|██████████| 1.17k/1.17k [00:00<00:00, 606kB/s]
Downloading upi-usage-statistics-aug16-to-feb20.zip to ./upi-usage-statistics-aug16-to-feb20

The dataset has been downloaded and extracted.

In [5]:
data_dir = './upi-usage-statistics-aug16-to-feb20'
In [6]:
import os
os.listdir(data_dir)
Out[6]:
['UPIData.csv']

Let us save and upload our work to Jovian before continuing.

In [7]:
project_name = "zerotopandas-course-project-starter-digital payment"
In [8]:
!pip install jovian --upgrade -q
In [9]:
import jovian
In [11]:
jovian.commit(project="digital payment")
[jovian] Attempting to save notebook.. [jovian] Please enter your API key ( from https://jovian.ml/ ): API KEY: ········ [jovian] Creating a new project "akanshasaini888/digital payment" [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Committed successfully! https://jovian.ml/akanshasaini888/digital-payment

Data Preparation and Cleaning

TODO - Write some explanation here.

Instructions (delete this cell):

  • Load the dataset into a data frame using Pandas
  • Explore the number of rows & columns, ranges of values etc.
  • Handle missing, incorrect and invalid data
  • Perform any additional steps (parsing dates, creating additional columns, merging multiple datasets, etc.)
In [13]:
!pip install jovian --upgrade --quiet


In [14]:
!pip install numpy seaborn pandas matplotlib
Requirement already satisfied: numpy in /srv/conda/envs/notebook/lib/python3.8/site-packages (1.19.1) Requirement already satisfied: seaborn in /srv/conda/envs/notebook/lib/python3.8/site-packages (0.10.1) Requirement already satisfied: pandas in /srv/conda/envs/notebook/lib/python3.8/site-packages (1.1.2) Requirement already satisfied: matplotlib in /srv/conda/envs/notebook/lib/python3.8/site-packages (3.3.0) Requirement already satisfied: scipy>=1.0.1 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from seaborn) (1.5.2) Requirement already satisfied: pytz>=2017.2 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from pandas) (2020.1) Requirement already satisfied: python-dateutil>=2.7.3 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from pandas) (2.8.1) Requirement already satisfied: pillow>=6.2.0 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from matplotlib) (7.2.0) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from matplotlib) (2.4.7) Requirement already satisfied: kiwisolver>=1.0.1 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from matplotlib) (1.2.0) Requirement already satisfied: cycler>=0.10 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from matplotlib) (0.10.0) Requirement already satisfied: six>=1.5 in /srv/conda/envs/notebook/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)
In [15]:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import jovian
%matplotlib inline
In [16]:
jovian.commit()

[jovian] Attempting to save notebook.. [jovian] Updating notebook "akanshasaini888/digital-payment" on https://jovian.ml/ [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Committed successfully! https://jovian.ml/akanshasaini888/digital-payment
In [18]:
jovian.commit(files=[data_dir + '/UPIData.csv'])
[jovian] Attempting to save notebook.. [jovian] Updating notebook "akanshasaini888/digital-payment" on https://jovian.ml/ [jovian] Uploading notebook.. [jovian] Capturing environment.. [jovian] Uploading additional files...
[jovian] Committed successfully! https://jovian.ml/akanshasaini888/digital-payment

We now load the CSV file into a Pandas data frame. Note that the file lives inside the data_dir folder downloaded above, so we include the directory in the path.

In [19]:
# Read the UPI usage CSV (inside data_dir) into a Pandas data frame
upi_df = pd.read_csv(data_dir + '/UPIData.csv')
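A few quick checks help confirm that the file was read correctly before any cleaning is done. This is only a sketch - the actual column names, types and missing-value patterns depend on the UPI dataset itself:

# Number of rows & columns, column names and data types
print(upi_df.shape)
upi_df.info()

# Count missing values in each column
print(upi_df.isna().sum())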
In [ ]:
import jovian
In [ ]:
jovian.commit()

Exploratory Analysis and Visualization

TODO - write some explanation here.

Instructions (delete this cell)

  • Compute the mean, sum, range and other interesting statistics for numeric columns
  • Explore distributions of numeric columns using histograms etc.
  • Explore relationship between columns using scatter plots, bar charts etc.
  • Make a note of interesting insights from the exploratory analysis

Let's begin by importing matplotlib.pyplot and seaborn.

In [ ]:
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
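As an illustration, a first exploratory plot could look like the sketch below; 'some_numeric_column' is a placeholder and should be replaced with an actual column from the upi_df data frame:

# Histogram showing the distribution of one numeric column
plt.hist(upi_df['some_numeric_column'].dropna(), bins=20)
plt.title('Distribution of some_numeric_column')
plt.show()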

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

TODO - Explore one or more columns by plotting a graph below, and add some explanation about it

In [ ]:
 
In [ ]:
 

Let us save and upload our work to Jovian before continuing.

In [ ]:
import jovian
In [ ]:
jovian.commit()

Asking and Answering Questions

TODO - write some explanation here.

Instructions (delete this cell)

  • Ask at least 5 interesting questions about your dataset
  • Answer the questions either by computing the results using Numpy/Pandas or by plotting graphs using Matplotlib/Seaborn
  • Create new columns, merge multiple datasets and perform grouping/aggregation wherever necessary
  • Wherever you're using a library function from Pandas/Numpy/Matplotlib etc., explain briefly what it does - a worked example follows below
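For example, one question could be answered with an aggregation followed by a bar chart. The 'Month' and 'Volume' column names below are assumptions about the UPI dataset and may need to be adjusted to the actual columns:

# Example question: Which months saw the highest total UPI transaction volume?
# ('Month' and 'Volume' are assumed column names)
monthly_volume = upi_df.groupby('Month')['Volume'].sum().sort_values(ascending=False)
print(monthly_volume.head())

# Bar chart of the top 10 months by transaction volume
monthly_volume.head(10).plot(kind='bar')
plt.title('Top 10 months by UPI transaction volume')
plt.ylabel('Total volume')
plt.show()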
Q1: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q2: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q3: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q4: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 
Q5: TODO - ask a question here and answer it below
In [ ]:
 
In [ ]:
 
In [ ]:
 

Let us save and upload our work to Jovian before continuing.

In [ ]:
import jovian
In [ ]:
jovian.commit()

Inferences and Conclusion

TODO - Write some explanation here: a summary of all the inferences drawn from the analysis, and any conclusions you may have drawn by answering various questions.

In [ ]:
import jovian
In [ ]:
jovian.commit()

References and Future Work

TODO - Write some explanation here: ideas for future projects using this dataset, and links to resources you found useful.

Submission Instructions (delete this cell)

(Optional) Write a blog post

In [ ]:
import jovian
In [ ]:
jovian.commit()
In [ ]: