Sign In

Project Whatsapp Message analysis

Write some introduction about your project here: describe the dataset, where you got it from, what you're trying to do with it, and which tools & techniques you're using. You can also mention about the course, and what you've learned from it.

As a first step, let's upload our Jupyter notebook to

In [1]:
!pip install jovian --upgrade --quiet
!pip install numpy --upgrade --quiet
!pip install pandas --upgrade --quiet
!pip install matplotlib --upgrade --quiet
!pip install seaborn --upgrade --quiet
!pip install wordcloud --upgrade --quiet
!pip install emoji  --upgrade --quiet

In [19]:
project_name = "whatsapp-chat-analysis-course-project-try"

In [20]:
import jovian
In [ ]:
[jovian] Attempting to save notebook..

Data Preparation and Cleaning

In [2]:
import os
import pandas as pd
import re
import datetime as time
import jovian
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import emoji
import re
In [3]:
whatsapp_df = pd.read_fwf('Chat.txt', header = None)

In [4]:
<class 'pandas.core.frame.DataFrame'> RangeIndex: 23330 entries, 0 to 23329 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 0 23177 non-null object 1 1 23087 non-null object 2 2 788 non-null object dtypes: object(3) memory usage: 546.9+ KB

After that we will use the info() that provided by the pandas to understand the datatype in the dataframe. As you can see we need to do some clearning such as the date and the Media omitted. (re-explain)

In [5]:
(23330, 3)
In [13]:
def txtTodf(txt_file):
    '''Convert WhatsApp chat log text file to a Pandas dataframe.'''
    # some regex to account for messages taking up multiple lines
    pat = re.compile(r'^(\d\d\/\d\d\/\d\d\d\d.*?)(?=^^\d\d\/\d\d\/\d\d\d\d|\Z)', re.S | re.M)
    with open(txt_file) as file:
        data = ['\n', ' ') for m in pat.finditer(]

    user     = []; 
    message  = []; 
    datetime = []
    for row in data:

        # timestamp is before the first dash
        datetime.append(row.split(' - ')[0])

        # sender is between am/pm, dash and colon
            s ='m - (.*?):', row).group(1)

        # message content is after the first colon
            message.append(row.split(': ', 1)[1])

    df = pd.DataFrame(zip(datetime, user, message), columns=['datetime', 'user', 'message'])
    df['datetime'] = pd.to_datetime(df.datetime, format='%d/%m/%Y, %I:%M %p')

    # remove events not associated with a sender
    df = df[df.user != ''].reset_index(drop=True)
    return df

whatsapp_df = txtTodf('Chat.txt')
In [14]:
In [15]:
<class 'pandas.core.frame.DataFrame'> RangeIndex: 22701 entries, 0 to 22700 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 datetime 22701 non-null datetime64[ns] 1 user 22701 non-null object 2 message 22701 non-null object dtypes: datetime64[ns](1), object(2) memory usage: 532.2+ KB
In [16]:
--------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-16-70bbcde94589> in <module> ----> 1 jovian.commit(project=project_name) NameError: name 'project_name' is not defined

Clearning the image data

In [11]:
# To understand the number od the image data
img = whatsapp_df[whatsapp_df['message'] == "<Media omitted>" ]

(1182, 3)

Since we want to do analysis on the text rather than image so we have to clean the image data in the text file. In here we have 11k of image in the three row

In [12]:
# We will drop all the image file by using the Drop functions
whatsapp_df.drop(img.index, inplace=True)

So now we will drop all the img to make the dataset more clean. Moreover, we want to make sure it will not copy a new dataset that why we will use "inplace == True"

In [13]:

As you can see now the dataset is clean from the media format.But we have a problem because after we did the clearning the index of the dataset had been off-order. So now we have to clean the data by using the reset_index().

In [14]:
whatsapp_df.reset_index(inplace=True, drop=True)
(21519, 3)

So after the data is clean we have left 21519 data in our dataset. So now we are able to perform the data driven decision making!

In [28]:
In [ ]:
In [ ]:
def split_count(text):

    emoji_list = []
    data = regex.findall(r'\X', text)
    for word in data:
        if any(char in emoji.UNICODE_EMOJI for char in word):

    return emoji_list

total_messages = df.shape[0]
media_messages = df[df['Message'] == '<Media omitted>'].shape[0]
df["emoji"] = df["Message"].apply(split_count)
emojis = sum(df['emoji'].str.len())
URLPATTERN = r'(https?://\S+)'
df['urlcount'] = df.Message.apply(lambda x: re.findall(URLPATTERN, x)).str.len()
links = np.sum(df.urlcount)

Let get started on the data we are going analysis

  1. Which users have the most Chat/messages in the group?
  2. Which emojis use the most by which users?
  3. The most usage of whatsapp during the time and day?
  4. Which month have the highest messages and also the busiest month?
  5. What time did the users usually start chatting and sleep?
In [ ]:
[jovian] Attempting to save notebook..

1. Which users have the most Chat/messages in the group?

In [23]:
#Understand how many user and messages in this chat first
totalNumberofMessage = whatsapp_df.message.count()
username   = whatsapp_df["user"].unique()

print('The total of the number of message:',totalNumberofMessage)
print('User name that involve in the chat:',username)
The total of the number of message: 21519 User name that involve in the chat: ['Ed' 'Rohit' 'Pei Yin']
In [25]:
# We want to know each user send how many message and Emojis
amountOfChat = whatsapp_df.groupby("user")["message"].count()

length = whatsapp_df.user.unique()

for i in range(len(length)):
    #Get one particular user name
    user_df = whatsapp_df[whatsapp_df['user'] == length[i]]
    #user_df will show the user message 
    #Get the total number of each user send
     print('Messages', user_df.shape[0])
    # emojis conists of total emojis
    emojis = sum(req_df['emoji'].str.len())
    print('Emojis Sent', emojis)
    #links consist of total links
    links = sum(req_df["urlcount"])   
    print('Links Sent', links)   

Ed          6991
Pei Yin     4260
Rohit      10268
Name: message, dtype: int64
In [ ]:
l = messages_df.Author.unique()

for i in range(len(l)):
  # Filtering out messages of particular user
  req_df= messages_df[messages_df["Author"] == l[i]]
  # req_df will contain messages of only one particular user
  print(f'Stats of {l[i]} -')
  # shape will print number of rows which indirectly means the number of messages
  print('Messages Sent', req_df.shape[0])
  #Word_Count contains of total words in one message. Sum of all words/ Total Messages will yield words per message
  words_per_message = (np.sum(req_df['Word_Count']))/req_df.shape[0]
  print('Words per message', words_per_message)
  #media conists of media messages
  media = media_messages_df[media_messages_df['Author'] == l[i]].shape[0]
  print('Media Messages Sent', media)
  # emojis conists of total emojis
  emojis = sum(req_df['emoji'].str.len())
  print('Emojis Sent', emojis)
  #links consist of total links
  links = sum(req_df["urlcount"])   
  print('Links Sent', links)   
In [ ]:
In [ ]: