Pandas Tutorial
Pandas is a Python library.
Pandas is used to analyze data.
Learning by Reading
We have created 14 tutorial pages for you to learn more about Pandas.
Starting with a basic introduction and ending with cleaning and plotting data:
Learning by Quiz Test
Test your Pandas skills with a quiz test.
Learning by Exercises
Learning by Examples
In our “Try it Yourself” editor, you can use the Pandas module, and modify the code to see the result.
Example
Load a CSV file into a Pandas DataFrame:
import pandas as pd
df = pd.read_csv('data.csv')
print(df.to_string())
Pandas Introduction
What is Pandas?
Pandas is a Python library used for working with data sets.
It has functions for analyzing, cleaning, exploring, and manipulating data.
The name “Pandas” refers to both “Panel Data” and “Python Data Analysis”. The library was created by Wes McKinney in 2008.
Why Use Pandas?
Pandas allows us to analyze big data and draw conclusions based on statistical theories.
Pandas can clean messy data sets, and make them readable and relevant.
Relevant data is very important in data science.
Data Science is a branch of computer science that studies how to store, use, and analyze data to derive information from it.
What Can Pandas Do?
Pandas gives you answers about the data, such as (a short sketch follows this list):
- Is there a correlation between two or more columns?
- What is the average value?
- What is the max value?
- What is the min value?
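As a rough illustration of how such questions can be asked in code, here is a minimal sketch; the DataFrame, column names, and values are made up for this example, and the methods used (corr(), mean(), max(), min()) are covered in later chapters:
import pandas as pd
# hypothetical example data, just for illustration
df = pd.DataFrame({"Duration": [60, 45, 60, 30], "Calories": [409.1, 282.4, 374.0, 195.1]})
print(df.corr())               # correlation between the columns
print(df["Calories"].mean())   # average value
print(df["Calories"].max())    # max value
print(df["Calories"].min())    # min value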
Pandas can also delete rows that are not relevant, or that contain wrong values, such as empty or NULL values. This is called cleaning the data.
Where is the Pandas Codebase?
The source code for Pandas is located in this GitHub repository: https://github.com/pandas-dev/pandas
GitHub: a platform that enables many people to work on the same codebase.
Pandas Getting Started
Installation of Pandas
If you have Python and pip already installed on a system, then installation of Pandas is very easy.
Install it using this command:
C:\Users\Your Name>pip install pandas
If this command fails, use a Python distribution that already has Pandas installed, such as Anaconda or Spyder.
Import Pandas
Once Pandas is installed, import it in your applications by adding the import keyword:
import pandas
Now Pandas is imported and ready to use.
Example
import pandas
mydataset = {
  'cars': ["BMW", "Volvo", "Ford"],
  'passings': [3, 7, 2]
}
myvar = pandas.DataFrame(mydataset)
print(myvar)
Pandas as pd
Pandas is usually imported under the pd alias.
alias: In Python, an alias is an alternate name for referring to the same thing.
Create an alias with the as keyword while importing:
import pandas as pd
Now the Pandas package can be referred to as pd instead of pandas.
Example
import pandas as pd
mydataset = {
  'cars': ["BMW", "Volvo", "Ford"],
  'passings': [3, 7, 2]
}
myvar = pd.DataFrame(mydataset)
print(myvar)
Checking Pandas Version
The version string is stored under the __version__ attribute.
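Example
Check the Pandas version (a minimal sketch):
import pandas as pd
print(pd.__version__)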
Pandas Series
What is a Series?
A Pandas Series is like a column in a table.
It is a one-dimensional array holding data of any type.
Example
Create a simple Pandas Series from a list:
import pandas as pd
a = [1, 7, 2]
myvar = pd.Series(a)
print(myvar)
Labels
If nothing else is specified, the values are labeled with their index number. The first value has index 0, the second value has index 1, etc.
This label can be used to access a specified value.
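For example, building on the Series above, the first value can be accessed by its index number (a minimal sketch):
print(myvar[0])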
Create Labels
With the index argument, you can name your own labels.
Example
Create your own labels:
import pandas as pd
a = [1, 7, 2]
myvar = pd.Series(a, index = ["x", "y", "z"])
print(myvar)
When you have created labels, you can access an item by referring to the label.
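For example, with the labels created above (a minimal sketch):
print(myvar["y"])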
Key/Value Objects as Series
You can also use a key/value object, like a dictionary, when creating a Series.
Example
Create a simple Pandas Series from a dictionary:
import pandas as pd
calories = {"day1": 420, "day2": 380, "day3": 390}
myvar = pd.Series(calories)
print(myvar)
Note: The keys of the dictionary become the labels.
To select only some of the items in the dictionary, use the index argument and specify only the items you want to include in the Series.
Example
Create a Series using only data from “day1” and “day2”:
import pandas as pd
calories = {"day1": 420, "day2": 380, "day3": 390}
myvar = pd.Series(calories, index = ["day1", "day2"])
print(myvar)
DataFrames
Data sets in Pandas are usually multi-dimensional tables, called DataFrames.
A Series is like a column; a DataFrame is the whole table.
Example
Create a DataFrame from two Series:
import pandas as pd
data = {
  "calories": [420, 380, 390],
  "duration": [50, 40, 45]
}
myvar = pd.DataFrame(data)
print(myvar)
You will learn about DataFrames in the next chapter.
Pandas DataFrames
What is a DataFrame?
A Pandas DataFrame is a 2-dimensional data structure, like a 2-dimensional array, or a table with rows and columns.
Example
Create a simple Pandas DataFrame:
import pandas as pd
data = {
  "calories": [420, 380, 390],
  "duration": [50, 40, 45]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
print(df)
Result
   calories  duration
0       420        50
1       380        40
2       390        45
Locate Row
As you can see from the result above, the DataFrame is like a table with rows and columns.
Pandas uses the loc attribute to return one or more specified rows.
Example
Return row 0:
#refer to the row index:
print(df.loc[0])
Result
calories    420
duration     50
Name: 0, dtype: int64
Note: This example returns a Pandas Series.
Example
Return row 0 and 1:
#use a list of indexes:
print(df.loc[[0, 1]])
Result
   calories  duration
0       420        50
1       380        40
Note: When using [], the result is a Pandas DataFrame.
Named Indexes
With the index argument, you can name your own indexes.
Example
Add a list of names to give each row a name:
import pandas as pd
data = {
  "calories": [420, 380, 390],
  "duration": [50, 40, 45]
}
df = pd.DataFrame(data, index = ["day1", "day2", "day3"])
print(df)
Result
      calories  duration
day1       420        50
day2       380        40
day3       390        45
Locate Named Indexes
Use the named index in the loc attribute to return the specified row(s).
Example
Return “day2”:
#refer to the named index:
print(df.loc["day2"])
Result
calories    380
duration     40
Name: day2, dtype: int64
Load Files Into a DataFrame
If your data sets are stored in a file, Pandas can load them into a DataFrame.
Example
Load a comma separated file (CSV file) into a DataFrame:
import pandas as pd
df = pd.read_csv('data.csv')
print(df)
You will learn more about importing files in the next chapters.
Pandas Read CSV
Read CSV Files
A simple way to store big data sets is to use CSV files (comma separated files).
CSV files contain plain text and are a well-known format that can be read by everyone, including Pandas.
In our examples we will be using a CSV file called ‘data.csv’.
Download data.csv, or open data.csv in your browser.
Example
Load the CSV into a DataFrame:
import pandas as pd
df = pd.read_csv('data.csv')
print(df.to_string())
Tip: use to_string() to print the entire DataFrame.
If you have a large DataFrame with many rows, Pandas will only return the first 5 rows, and the last 5 rows:
Example
Print the DataFrame without the to_string() method:
import pandas as pd
df = pd.read_csv('data.csv')
print(df)
max_rows
The number of rows returned is defined in Pandas option settings.
You can check your system's maximum rows with the pd.options.display.max_rows statement.
Example
Check the number of maximum returned rows:
import pandas as pd
print(pd.options.display.max_rows)
In my system the number is 60, which means that if the DataFrame contains more than 60 rows, the print(df) statement will return only the headers and the first and last 5 rows.
You can change the maximum rows number with the same statement.
Example
Increase the maximum number of rows to display the entire DataFrame:
import pandas as pd
pd.options.display.max_rows = 9999
df = pd.read_csv('data.csv')
print(df)
Pandas Read JSON
Read JSON
Big data sets are often stored, or extracted as JSON.
JSON is plain text, but has the format of an object, and is well known in the world of programming, including Pandas.
In our examples we will be using a JSON file called ‘data.json’.
Example
Load the JSON file into a DataFrame:
import pandas as pd
df = pd.read_json('data.json')
print(df.to_string())
Tip: use to_string() to print the entire DataFrame.
Dictionary as JSON
JSON = Python Dictionary
JSON objects have the same format as Python dictionaries.
If your JSON code is not in a file, but in a Python Dictionary, you can load it into a DataFrame directly:
Example
Load a Python Dictionary into a DataFrame:
import pandas as pd
data = {
  "Duration": {
    "0": 60,
    "1": 60,
    "2": 60,
    "3": 45,
    "4": 45,
    "5": 60
  },
  "Pulse": {
    "0": 110,
    "1": 117,
    "2": 103,
    "3": 109,
    "4": 117,
    "5": 102
  },
  "Maxpulse": {
    "0": 130,
    "1": 145,
    "2": 135,
    "3": 175,
    "4": 148,
    "5": 127
  },
  "Calories": {
    "0": 409,
    "1": 479,
    "2": 340,
    "3": 282,
    "4": 406,
    "5": 300
  }
}
df = pd.DataFrame(data)
print(df)
Pandas – Analyzing DataFrames
Viewing the Data
One of the most used methods for getting a quick overview of the DataFrame is the head() method.
The head() method returns the headers and a specified number of rows, starting from the top.
Example
Get a quick overview by printing the first 10 rows of the DataFrame:
import pandas as pd
df = pd.read_csv('data.csv')
print(df.head(10))
In our examples we will be using a CSV file called 'data.csv'.
Download data.csv, or open data.csv in your browser.
Note: if the number of rows is not specified, the head() method will return the top 5 rows.
Example
Print the first 5 rows of the DataFrame:
import pandas as pd
df = pd.read_csv('data.csv')
print(df.head())
There is also a tail() method for viewing the last rows of the DataFrame.
The tail() method returns the headers and a specified number of rows, starting from the bottom.
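For example, a minimal sketch that prints the last 5 rows:
import pandas as pd
df = pd.read_csv('data.csv')
print(df.tail())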
Info About the Data
The DataFrame object has a method called info(), which gives you more information about the data set.
Example
Print information about the data:
print(df.info())
Result
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 169 entries, 0 to 168
Data columns (total 4 columns):
 #   Column    Non-Null Count  Dtype
---  ------    --------------  -----
 0   Duration  169 non-null    int64
 1   Pulse     169 non-null    int64
 2   Maxpulse  169 non-null    int64
 3   Calories  164 non-null    float64
dtypes: float64(1), int64(3)
memory usage: 5.4 KB
None
Result Explained
The result tells us there are 169 rows and 4 columns:
RangeIndex: 169 entries, 0 to 168
Data columns (total 4 columns):
And the name of each column, with the data type:
 #   Column    Non-Null Count  Dtype
---  ------    --------------  -----
 0   Duration  169 non-null    int64
 1   Pulse     169 non-null    int64
 2   Maxpulse  169 non-null    int64
 3   Calories  164 non-null    float64
Null Values
The info() method also tells us how many non-null values are present in each column; in our data set there are 164 of 169 non-null values in the “Calories” column.
This means that there are 5 rows with no value at all in the “Calories” column, for whatever reason.
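If you want to count the empty cells yourself, one way is to combine isnull() with sum(); this is a minimal sketch, not part of the original example:
import pandas as pd
df = pd.read_csv('data.csv')
# count the empty (NaN) cells in the "Calories" column
print(df["Calories"].isnull().sum())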
Empty values, or Null values, can be bad when analyzing data, and you should consider removing rows with empty values. This is a step towards what is called cleaning data, and you will learn more about that in the next chapters.
Cleaning Data
Pandas – Cleaning Data
Data Cleaning
Data cleaning means fixing bad data in your data set.
Bad data could be:
- Empty cells
- Data in wrong format
- Wrong data
- Duplicates
In this tutorial you will learn how to deal with all of them.
Our Data Set
In the next chapters we will use this data set:
    Duration          Date  Pulse  Maxpulse  Calories
0         60  '2020/12/01'    110       130     409.1
1         60  '2020/12/02'    117       145     479.0
2         60  '2020/12/03'    103       135     340.0
3         45  '2020/12/04'    109       175     282.4
4         45  '2020/12/05'    117       148     406.0
5         60  '2020/12/06'    102       127     300.0
6         60  '2020/12/07'    110       136     374.0
7        450  '2020/12/08'    104       134     253.3
8         30  '2020/12/09'    109       133     195.1
9         60  '2020/12/10'     98       124     269.0
10        60  '2020/12/11'    103       147     329.3
11        60  '2020/12/12'    100       120     250.7
12        60  '2020/12/12'    100       120     250.7
13        60  '2020/12/13'    106       128     345.3
14        60  '2020/12/14'    104       132     379.3
15        60  '2020/12/15'     98       123     275.0
16        60  '2020/12/16'     98       120     215.2
17        60  '2020/12/17'    100       120     300.0
18        45  '2020/12/18'     90       112       NaN
19        60  '2020/12/19'    103       123     323.0
20        45  '2020/12/20'     97       125     243.0
21        60  '2020/12/21'    108       131     364.2
22        45           NaN    100       119     282.0
23        60  '2020/12/23'    130       101     300.0
24        45  '2020/12/24'    105       132     246.0
25        60  '2020/12/25'    102       126     334.5
26        60    2020/12/26    100       120     250.0
27        60  '2020/12/27'     92       118     241.0
28        60  '2020/12/28'    103       132       NaN
29        60  '2020/12/29'    100       132     280.0
30        60  '2020/12/30'    102       129     380.3
31        60  '2020/12/31'     92       115     243.0
The data set contains some empty cells (“Date” in row 22, and “Calories” in row 18 and 28).
The data set contains data in a wrong format (“Date” in row 26).
The data set contains wrong data (“Duration” in row 7).
The data set contains duplicates (row 11 and 12).
Pandas – Cleaning Empty Cells
Empty Cells
Empty cells can potentially give you a wrong result when you analyze data.
Remove Rows
One way to deal with empty cells is to remove rows that contain empty cells.
This is usually OK, since data sets can be very big, and removing a few rows will not have a big impact on the result.
Example
Return a new Data Frame with no empty cells:
import pandas as pd
df = pd.read_csv('data.csv')
new_df = df.dropna()
print(new_df.to_string())
In our cleaning examples we will be using a CSV file called 'dirtydata.csv'.
Download dirtydata.csv, or open dirtydata.csv in your browser.
Note: By default, the dropna() method returns a new DataFrame, and will not change the original.
If you want to change the original DataFrame, use the inplace = True argument:
Example
Remove all rows with NULL values:
import pandas as pd
df = pd.read_csv('data.csv')
df.dropna(inplace = True)
print(df.to_string())
Note: Now, dropna(inplace = True) will NOT return a new DataFrame, but it will remove all rows containing NULL values from the original DataFrame.
Replace Empty Values
Another way of dealing with empty cells is to insert a new value instead.
This way you do not have to delete entire rows just because of some empty cells.
The fillna() method allows us to replace empty cells with a value:
Example
Replace NULL values with the number 130:
import pandas as pd
df = pd.read_csv('data.csv')
df.fillna(130, inplace = True)
Replace Only For Specified Columns
The example above replaces all empty cells in the whole Data Frame.
To only replace empty values for one column, specify the column name for the DataFrame:
Example
Replace NULL values in the “Calories” column with the number 130:
import pandas as pd
df = pd.read_csv('data.csv')
df["Calories"].fillna(130, inplace = True)
Replace Using Mean, Median, or Mode
A common way to replace empty cells is to calculate the mean, median or mode value of the column.
Pandas uses the mean(), median(), and mode() methods to calculate the respective values for a specified column:
Example
Calculate the MEAN, and replace any empty values with it:
import pandas as pd
df = pd.read_csv('data.csv')
x = df["Calories"].mean()
df["Calories"].fillna(x, inplace = True)
Mean = the average value (the sum of all values divided by the number of values).
Example
Calculate the MEDIAN, and replace any empty values with it:
import pandas as pd
df = pd.read_csv('data.csv')
x = df["Calories"].median()
df["Calories"].fillna(x, inplace = True)
Median = the value in the middle, after you have sorted all values ascending.
Example
Calculate the MODE, and replace any empty values with it:
import pandas as pd
df = pd.read_csv('data.csv')
x = df["Calories"].mode()[0]
df["Calories"].fillna(x, inplace = True)
Mode = the value that appears most frequently.
Pandas – Cleaning Data of Wrong Format
Data of Wrong Format
Cells with data in the wrong format can make it difficult, or even impossible, to analyze the data.
To fix it, you have two options: remove the rows, or convert all cells in the columns into the same format.
Convert Into a Correct Format
In our DataFrame, we have two cells with the wrong format. Check out rows 22 and 26: the ‘Date’ column should be a string that represents a date:
    Duration          Date  Pulse  Maxpulse  Calories
0         60  '2020/12/01'    110       130     409.1
1         60  '2020/12/02'    117       145     479.0
2         60  '2020/12/03'    103       135     340.0
3         45  '2020/12/04'    109       175     282.4
4         45  '2020/12/05'    117       148     406.0
5         60  '2020/12/06'    102       127     300.0
6         60  '2020/12/07'    110       136     374.0
7        450  '2020/12/08'    104       134     253.3
8         30  '2020/12/09'    109       133     195.1
9         60  '2020/12/10'     98       124     269.0
10        60  '2020/12/11'    103       147     329.3
11        60  '2020/12/12'    100       120     250.7
12        60  '2020/12/12'    100       120     250.7
13        60  '2020/12/13'    106       128     345.3
14        60  '2020/12/14'    104       132     379.3
15        60  '2020/12/15'     98       123     275.0
16        60  '2020/12/16'     98       120     215.2
17        60  '2020/12/17'    100       120     300.0
18        45  '2020/12/18'     90       112       NaN
19        60  '2020/12/19'    103       123     323.0
20        45  '2020/12/20'     97       125     243.0
21        60  '2020/12/21'    108       131     364.2
22        45           NaN    100       119     282.0
23        60  '2020/12/23'    130       101     300.0
24        45  '2020/12/24'    105       132     246.0
25        60  '2020/12/25'    102       126     334.5
26        60      20201226    100       120     250.0
27        60  '2020/12/27'     92       118     241.0
28        60  '2020/12/28'    103       132       NaN
29        60  '2020/12/29'    100       132     280.0
30        60  '2020/12/30'    102       129     380.3
31        60  '2020/12/31'     92       115     243.0
Let’s try to convert all cells in the ‘Date’ column into dates.
Pandas has a to_datetime() method for this:
Example
Convert to date:
import pandas as pd
df = pd.read_csv('data.csv')
df['Date'] = pd.to_datetime(df['Date'])
print(df.to_string())
Result:
    Duration          Date  Pulse  Maxpulse  Calories
0         60  '2020/12/01'    110       130     409.1
1         60  '2020/12/02'    117       145     479.0
2         60  '2020/12/03'    103       135     340.0
3         45  '2020/12/04'    109       175     282.4
4         45  '2020/12/05'    117       148     406.0
5         60  '2020/12/06'    102       127     300.0
6         60  '2020/12/07'    110       136     374.0
7        450  '2020/12/08'    104       134     253.3
8         30  '2020/12/09'    109       133     195.1
9         60  '2020/12/10'     98       124     269.0
10        60  '2020/12/11'    103       147     329.3
11        60  '2020/12/12'    100       120     250.7
12        60  '2020/12/12'    100       120     250.7
13        60  '2020/12/13'    106       128     345.3
14        60  '2020/12/14'    104       132     379.3
15        60  '2020/12/15'     98       123     275.0
16        60  '2020/12/16'     98       120     215.2
17        60  '2020/12/17'    100       120     300.0
18        45  '2020/12/18'     90       112       NaN
19        60  '2020/12/19'    103       123     323.0
20        45  '2020/12/20'     97       125     243.0
21        60  '2020/12/21'    108       131     364.2
22        45           NaT    100       119     282.0
23        60  '2020/12/23'    130       101     300.0
24        45  '2020/12/24'    105       132     246.0
25        60  '2020/12/25'    102       126     334.5
26        60  '2020/12/26'    100       120     250.0
27        60  '2020/12/27'     92       118     241.0
28        60  '2020/12/28'    103       132       NaN
29        60  '2020/12/29'    100       132     280.0
30        60  '2020/12/30'    102       129     380.3
31        60  '2020/12/31'     92       115     243.0
As you can see from the result, the date in row 26 was fixed, but the empty date in row 22 got a NaT (Not a Time) value, in other words an empty value. One way to deal with empty values is simply removing the entire row.
Removing Rows
The conversion in the example above gave us a NaT value, which can be handled as a NULL value, and we can remove the row by using the dropna() method.
Example
Remove rows with a NULL value in the “Date” column:
df.dropna(subset=['Date'], inplace = True)
Pandas – Fixing Wrong Data
Wrong Data
“Wrong data” does not have to be “empty cells” or “wrong format”, it can just be wrong, like if someone registered “199” instead of “1.99”.
Sometimes you can spot wrong data by looking at the data set, because you have an expectation of what it should be.
If you take a look at our data set, you can see that in row 7, the duration is 450, but for all the other rows the duration is between 30 and 60.
It doesn't have to be wrong, but taking into consideration that this is the data set of someone's workout sessions, we can conclude that this person did not work out for 450 minutes.
    Duration          Date  Pulse  Maxpulse  Calories
0         60  '2020/12/01'    110       130     409.1
1         60  '2020/12/02'    117       145     479.0
2         60  '2020/12/03'    103       135     340.0
3         45  '2020/12/04'    109       175     282.4
4         45  '2020/12/05'    117       148     406.0
5         60  '2020/12/06'    102       127     300.0
6         60  '2020/12/07'    110       136     374.0
7        450  '2020/12/08'    104       134     253.3
8         30  '2020/12/09'    109       133     195.1
9         60  '2020/12/10'     98       124     269.0
10        60  '2020/12/11'    103       147     329.3
11        60  '2020/12/12'    100       120     250.7
12        60  '2020/12/12'    100       120     250.7
13        60  '2020/12/13'    106       128     345.3
14        60  '2020/12/14'    104       132     379.3
15        60  '2020/12/15'     98       123     275.0
16        60  '2020/12/16'     98       120     215.2
17        60  '2020/12/17'    100       120     300.0
18        45  '2020/12/18'     90       112       NaN
19        60  '2020/12/19'    103       123     323.0
20        45  '2020/12/20'     97       125     243.0
21        60  '2020/12/21'    108       131     364.2
22        45           NaN    100       119     282.0
23        60  '2020/12/23'    130       101     300.0
24        45  '2020/12/24'    105       132     246.0
25        60  '2020/12/25'    102       126     334.5
26        60      20201226    100       120     250.0
27        60  '2020/12/27'     92       118     241.0
28        60  '2020/12/28'    103       132       NaN
29        60  '2020/12/29'    100       132     280.0
30        60  '2020/12/30'    102       129     380.3
31        60  '2020/12/31'     92       115     243.0
How can we fix wrong values, like the one for “Duration” in row 7?
Replacing Values
One way to fix wrong values is to replace them with something else.
In our example, it is most likely a typo, and the value should be “45” instead of “450”, and we could just insert “45” in row 7:
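A minimal sketch of that single replacement, using the loc indexer on the DataFrame df from the examples above:
# set the "Duration" value in row 7 to 45
df.loc[7, "Duration"] = 45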
For small data sets you might be able to replace the wrong data one by one, but not for big data sets.
To replace wrong data for larger data sets you can create some rules, e.g. set some boundaries for legal values, and replace any values that are outside of the boundaries.
Example
Loop through all values in the “Duration” column.
If the value is higher than 120, set it to 120:
for x in df.index:
  if df.loc[x, "Duration"] > 120:
    df.loc[x, "Duration"] = 120
Removing Rows
Another way of handling wrong data is to remove the rows that contain wrong data.
This way you do not have to find out what to replace them with, and there is a good chance you do not need them to do your analyses.
Example
Delete rows where “Duration” is higher than 120:
for x in df.index:
  if df.loc[x, "Duration"] > 120:
    df.drop(x, inplace = True)
Pandas – Removing Duplicates
Discovering Duplicates
Duplicate rows are rows that have been registered more than one time.
    Duration          Date  Pulse  Maxpulse  Calories
0         60  '2020/12/01'    110       130     409.1
1         60  '2020/12/02'    117       145     479.0
2         60  '2020/12/03'    103       135     340.0
3         45  '2020/12/04'    109       175     282.4
4         45  '2020/12/05'    117       148     406.0
5         60  '2020/12/06'    102       127     300.0
6         60  '2020/12/07'    110       136     374.0
7        450  '2020/12/08'    104       134     253.3
8         30  '2020/12/09'    109       133     195.1
9         60  '2020/12/10'     98       124     269.0
10        60  '2020/12/11'    103       147     329.3
11        60  '2020/12/12'    100       120     250.7
12        60  '2020/12/12'    100       120     250.7
13        60  '2020/12/13'    106       128     345.3
14        60  '2020/12/14'    104       132     379.3
15        60  '2020/12/15'     98       123     275.0
16        60  '2020/12/16'     98       120     215.2
17        60  '2020/12/17'    100       120     300.0
18        45  '2020/12/18'     90       112       NaN
19        60  '2020/12/19'    103       123     323.0
20        45  '2020/12/20'     97       125     243.0
21        60  '2020/12/21'    108       131     364.2
22        45           NaN    100       119     282.0
23        60  '2020/12/23'    130       101     300.0
24        45  '2020/12/24'    105       132     246.0
25        60  '2020/12/25'    102       126     334.5
26        60      20201226    100       120     250.0
27        60  '2020/12/27'     92       118     241.0
28        60  '2020/12/28'    103       132       NaN
29        60  '2020/12/29'    100       132     280.0
30        60  '2020/12/30'    102       129     380.3
31        60  '2020/12/31'     92       115     243.0
By taking a look at our test data set, we can assume that row 11 and 12 are duplicates.
To discover duplicates, we can use the duplicated() method.
The duplicated() method returns a Boolean value for each row:
Example
Returns True for every row that is a duplicate, otherwise False:
print(df.duplicated())
Removing Duplicates
To remove duplicates, use the drop_duplicates() method.
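A minimal sketch, removing all duplicate rows from the DataFrame used above:
df.drop_duplicates(inplace = True)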
Remember: (inplace = True) will make sure that the method does NOT return a new DataFrame, but it will remove all duplicates from the original DataFrame.
Correlations
Pandas – Data Correlations
Finding Relationships
A great aspect of the Pandas module is the corr() method.
The corr() method calculates the relationship between each column in your data set.
The examples on this page use a CSV file called 'data.csv'.
Download data.csv, or open data.csv in your browser.
Example
Show the relationship between the columns:
df.corr()
Result
          Duration     Pulse  Maxpulse  Calories
Duration  1.000000 -0.155408  0.009403  0.922721
Pulse    -0.155408  1.000000  0.786535  0.025120
Maxpulse  0.009403  0.786535  1.000000  0.203814
Calories  0.922721  0.025120  0.203814  1.000000
Note: The corr() method ignores “not numeric” columns.
Result Explained
The result of the corr() method is a table with a lot of numbers that represent how strong the relationship is between two columns.
The number varies from -1 to 1.
1 means that there is a 1 to 1 relationship (a perfect correlation), and for this data set, each time a value went up in the first column, the other one went up as well.
0.9 is also a good relationship, and if you increase one value, the other will probably increase as well.
-0.9 would be just as good relationship as 0.9, but if you increase one value, the other will probably go down.
0.2 means NOT a good relationship, meaning that if one value goes up, it does not mean that the other will.
What is a good correlation? It depends on the use, but I think it is safe to say you have to have at least 0.6 (or -0.6) to call it a good correlation.
Perfect Correlation:
We can see that “Duration” and “Duration” got the number 1.000000, which makes sense; each column always has a perfect relationship with itself.
Good Correlation:
“Duration” and “Calories” got a 0.922721 correlation, which is a very good correlation, and we can predict that the longer you work out, the more calories you burn, and the other way around: if you burned a lot of calories, you probably had a long work out.
Bad Correlation:
“Duration” and “Maxpulse” got a 0.009403 correlation, which is a very bad correlation, meaning that we can not predict the max pulse by just looking at the duration of the work out, and vice versa.
Plotting
Pandas – Plotting
Plotting
Pandas uses the plot() method to create diagrams.
We can use Pyplot, a submodule of the Matplotlib library, to visualize the diagram on the screen.
Read more about Matplotlib in our Matplotlib Tutorial.
Example
Import pyplot from Matplotlib and visualize our DataFrame:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df.plot()
plt.show()
The examples on this page use a CSV file called 'data.csv'.
Download data.csv, or open data.csv in your browser.
Scatter Plot
Specify that you want a scatter plot with the kind argument:
kind = 'scatter'
A scatter plot needs an x- and a y-axis.
In the example below we will use “Duration” for the x-axis and “Calories” for the y-axis.
Include the x and y arguments like this:
x = 'Duration', y = 'Calories'
Example
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df.plot(kind = 'scatter', x = 'Duration', y = 'Calories')
plt.show()
Result
Remember: In the previous example, we learned that the correlation between “Duration” and “Calories” was 0.922721, and we concluded that a higher duration means more calories burned.
By looking at the scatterplot, I will agree.
Let's create another scatterplot, where there is a bad relationship between the columns, like “Duration” and “Maxpulse”, with the correlation 0.009403:
Example
A scatterplot where there is no relationship between the columns:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df.plot(kind = 'scatter', x = 'Duration', y = 'Maxpulse')
plt.show()
Result
Histogram
Use the kind argument to specify that you want a histogram:
kind = 'hist'
A histogram needs only one column.
A histogram shows us the frequency of each interval, e.g. how many workouts lasted between 50 and 60 minutes?
In the example below we will use the “Duration” column to create the histogram:
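A minimal sketch, assuming the same 'data.csv' as in the scatter plot examples above:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df["Duration"].plot(kind = 'hist')
plt.show()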
Note: The histogram tells us that there were over 100 workouts that lasted between 50 and 60 minutes.
Quiz/Exercises
Pandas Quiz
You can test your Pandas skills with W3Schools’ Quiz.
The Test
The test contains 25 questions and there is no time limit.
The test is not official, it’s just a nice way to see how much you know, or don’t know, about Pandas.
Count Your Score
You will get 1 point for each correct answer. At the end of the Quiz, your total score will be displayed. Maximum score is 25 points.
If you don’t know Pandas, we suggest that you read our Pandas Tutorial from scratch.
Pandas Exercises
You can test your Pandas skills with W3Schools’ Exercises.
Exercises
We have gathered a variety of Pandas exercises (with answers) for each Pandas Chapter.
Try to solve an exercise by filling in the missing parts of the code. If you're stuck, hit the “Show Answer” button to see what you've done wrong.
Count Your Score
You will get 1 point for each correct answer. Your score and total score will always be displayed.
If you don’t know Pandas, we suggest that you read our Pandas Tutorial from scratch.
References
Pandas – DataFrame Reference
All properties and methods of the DataFrame object, with explanations and examples:
Property/Method | Description |
---|---|
abs() | Return a DataFrame with the absolute value of each value |
add() | Adds the values of a DataFrame with the specified value(s) |
add_prefix() | Prefix all labels |
add_suffix() | Suffix all labels |
agg() | Apply a function or a function name to one of the axis of the DataFrame |
aggregate() | Apply a function or a function name to one of the axis of the DataFrame |
align() | Aligns two DataFrames with a specified join method |
all() | Return True if all values in the DataFrame are True, otherwise False |
any() | Returns True if any of the values in the DataFrame are True, otherwise False |
append() | Append rows of another DataFrame to the end |
applymap() | Execute a function for each element in the DataFrame |
apply() | Apply a function to one of the axis of the DataFrame |
assign() | Assign new columns |
astype() | Convert the DataFrame into a specified dtype |
at | Get or set the value of the item with the specified label |
axes | Returns the labels of the rows and the columns of the DataFrame |
bfill() | Replaces NULL values with the value from the next row |
bool() | Returns the Boolean value of the DataFrame |
columns | Returns the column labels of the DataFrame |
combine() | Compare the values in two DataFrames, and let a function decide which values to keep |
combine_first() | Compare two DataFrames, and if the first DataFrame has a NULL value, it will be filled with the respective value from the second DataFrame |
compare() | Compare two DataFrames and return the differences |
convert_dtypes() | Converts the columns in the DataFrame into new dtypes |
corr() | Find the correlation (relationship) between each column |
count() | Returns the number of not empty cells for each column/row |
cov() | Find the covariance of the columns |
copy() | Returns a copy of the DataFrame |
cummax() | Calculate the cumulative maximum values of the DataFrame |
cummin() | Calculate the cumulative minimum values of the DataFrame |
cumprod() | Calculate the cumulative product over the DataFrame |
cumsum() | Calculate the cumulative sum over the DataFrame |
describe() | Returns a description summary for each column in the DataFrame |
diff() | Calculate the difference between a value and the value of the same column in the previous row |
div() | Divides the values of a DataFrame with the specified value(s) |
dot() | Multiplies the values of a DataFrame with values from another array-like object, and add the result |
drop() | Drops the specified rows/columns from the DataFrame |
drop_duplicates() | Drops duplicate values from the DataFrame |
droplevel() | Drops the specified index/column(s) |
dropna() | Drops all rows that contains NULL values |
dtypes | Returns the dtypes of the columns of the DataFrame |
duplicated() | Returns True for duplicated rows, otherwise False |
empty | Returns True if the DataFrame is empty, otherwise False |
eq() | Returns True for values that are equal to the specified value(s), otherwise False |
equals() | Returns True if two DataFrames are equal, otherwise False |
eval | Evaluate a specified string |
explode() | Converts each element into a row |
ffill() | Replaces NULL values with the value from the previous row |
fillna() | Replaces NULL values with the specified value |
filter() | Filter the DataFrame according to the specified filter |
first() | Returns the first rows of a specified date selection |
floordiv() | Divides the values of a DataFrame with the specified value(s), and floor the values |
ge() | Returns True for values greater than, or equal to the specified value(s), otherwise False |
get() | Returns the item of the specified key |
groupby() | Groups the rows/columns into specified groups |
gt() | Returns True for values greater than the specified value(s), otherwise False |
head() | Returns the header row and the first 5 rows, or the specified number of rows |
iat | Get or set the value of the item in the specified position |
idxmax() | Returns the label of the max value in the specified axis |
idxmin() | Returns the label of the min value in the specified axis |
iloc | Get or set the values of a group of elements in the specified positions |
index | Returns the row labels of the DataFrame |
infer_objects() | Change the dtype of the columns in the DataFrame |
info() | Prints information about the DataFrame |
insert() | Insert a column in the DataFrame |
interpolate() | Replaces not-a-number values with the interpolated method |
isin() | Returns True if each elements in the DataFrame is in the specified value |
isna() | Finds not-a-number values |
isnull() | Finds NULL values |
items() | Iterate over the columns of the DataFrame |
iteritems() | Iterate over the columns of the DataFrame |
iterrows() | Iterate over the rows of the DataFrame |
itertuples() | Iterate over the rows as named tuples |
join() | Join columns of another DataFrame |
last() | Returns the last rows of a specified date selection |
le() | Returns True for values less than, or equal to the specified value(s), otherwise False |
loc | Get or set the value of a group of elements specified using their labels |
lt() | Returns True for values less than the specified value(s), otherwise False |
keys() | Returns the keys of the info axis |
kurtosis() | Returns the kurtosis of the values in the specified axis |
mask() | Replace all values where the specified condition is True |
max() | Return the max of the values in the specified axis |
mean() | Return the mean of the values in the specified axis |
median() | Return the median of the values in the specified axis |
melt() | Reshape the DataFrame from a wide table to a long table |
memory_usage() | Returns the memory usage of each column |
merge() | Merge DataFrame objects |
min() | Returns the min of the values in the specified axis |
mod() | Modulo (finds the remainder) of the values of a DataFrame |
mode() | Returns the mode of the values in the specified axis |
mul() | Multiplies the values of a DataFrame with the specified value(s) |
ndim | Returns the number of dimensions of the DataFrame |
ne() | Returns True for values that are not equal to the specified value(s), otherwise False |
nlargest() | Sort the DataFrame by the specified columns, descending, and return the specified number of rows |
notna() | Finds values that are not not-a-number |
notnull() | Finds values that are not NULL |
nsmallest() | Sort the DataFrame by the specified columns, ascending, and return the specified number of rows |
nunique() | Returns the number of unique values in the specified axis |
pct_change() | Returns the percentage change between the previous and the current value |
pipe() | Apply a function to the DataFrame |
pivot() | Re-shape the DataFrame |
pivot_table() | Create a spreadsheet pivot table as a DataFrame |
pop() | Removes an element from the DataFrame |
pow() | Raise the values of one DataFrame to the values of another DataFrame |
prod() | Returns the product of all values in the specified axis |
product() | Returns the product of the values in the specified axis |
quantile() | Returns the values at the specified quantile of the specified axis |
query() | Query the DataFrame |
radd() | Reverse-adds the values of one DataFrame with the values of another DataFrame |
rdiv() | Reverse-divides the values of one DataFrame with the values of another DataFrame |
reindex() | Change the labels of the DataFrame |
reindex_like() | Returns a DataFrame reindexed to match the labels of another DataFrame |
rename() | Change the labels of the axes |
rename_axis() | Change the name of the axis |
reorder_levels() | Re-order the index levels |
replace() | Replace the specified values |
reset_index() | Reset the index |
rfloordiv() | Reverse-divides the values of one DataFrame with the values of another DataFrame |
rmod() | Reverse-modulo of the values of one DataFrame by the values of another DataFrame |
rmul() | Reverse-multiplies the values of one DataFrame with the values of another DataFrame |
round() | Returns a DataFrame with all values rounded into the specified format |
rpow() | Reverse-raises the values of one DataFrame up to the values of another DataFrame |
rsub() | Reverse-subtracts the values of one DataFrame to the values of another DataFrame |
rtruediv() | Reverse-divides the values of one DataFrame with the values of another DataFrame |
sample() | Returns a random selection of elements |
sem() | Returns the standard error of the mean in the specified axis |
select_dtypes() | Returns a DataFrame with columns of selected data types |
shape | Returns the number of rows and columns of the DataFrame |
set_axis() | Sets the index of the specified axis |
set_flags() | Returns a new DataFrame with the specified flags |
set_index() | Set the Index of the DataFrame |
size | Returns the number of elements in the DataFrame |
skew() | Returns the skew of the values in the specified axis |
sort_index() | Sorts the DataFrame according to the labels |
sort_values() | Sorts the DataFrame according to the values |
squeeze() | Converts a single column DataFrame into a Series |
stack() | Reshape the DataFrame from a wide table to a long table |
std() | Returns the standard deviation of the values in the specified axis |
sum() | Returns the sum of the values in the specified axis |
sub() | Subtracts the values of a DataFrame with the specified value(s) |
swaplevel() | Swaps the two specified levels |
T | Turns rows into columns and columns into rows |
tail() | Returns the headers and the last rows |
take() | Returns the specified elements |
to_xarray() | Returns an xarray object |
transform() | Execute a function for each value in the DataFrame |
transpose() | Turns rows into columns and columns into rows |
truediv() | Divides the values of a DataFrame with the specified value(s) |
truncate() | Removes elements outside of a specified set of values |
update() | Update one DataFrame with the values from another DataFrame |
value_counts() | Returns the number of unique rows |
values | Returns the DataFrame as a NumPy array |
var() | Returns the variance of the values in the specified axis |
where() | Replace all values where the specified condition is False |
xs() | Returns the cross-section of the DataFrame |
__iter__() | Returns an iterator of the info axes |