By Mark Cinali on November 11, 2022

# Tutorial: Filtering with Pandas

When performing data analysis, it is essential to be able to filter data based on certain criteria. For instance, you might want to explore the GDP of the wealthiest countries or aggregate the rainfall of certain states from 2001-2010. Luckily for us, we can leverage the Python Pandas library to filter data for a very wide range of use cases.

We will walk through a number of examples that use Pandas to filter data. Although we have embedded code snippets and results, please feel free to follow along and run the code yourself using this Deepnote workbook.

### A quick intro to Pandas

Pandas is a powerful data analysis and manipulation tool built on top of the Python programming language. Pandas makes it easier to explore, clean, and process data using two core data structures: Series and DataFrames.

**Series** — 1-dimensional labeled homogeneous arrays, with fixed size and mutable data

**DataFrames** — 2-dimensional labeled arrays of potentially heterogeneous tabular data, with mutable size and mutable data. DataFrames can also be thought of as dict-like containers for Series objects.

DataFrames and Series are both built on top of numpy arrays. To understand how to filter with Pandas, let’s first dive into how filtering works in numpy.

### Filtering with Numpy arrays

Numpy arrays operate like indexed lists. Array indexes enable us to select certain elements based on their ordinal position. For instance, `indexed_array` in the example below has 5 elements, each corresponding to an index value from 0 to 4.

If we want to select the first, third, and fifth scalar elements, we can do the following.

```
import numpy as np

indexed_array = np.array(['a','b','c','d','e'])
indexed_array[0]  # returns first element ('a')
indexed_array[2]  # returns third element ('c')
indexed_array[4]  # returns last element ('e')
```

Numpy also makes it possible to select multiple elements from an array by passing in a list of indexes, the result of which is a numpy array.
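For instance, reusing `indexed_array` from above:

```python
import numpy as np

indexed_array = np.array(['a', 'b', 'c', 'd', 'e'])

# Passing a list of index positions selects multiple elements at once;
# the result is itself a numpy array
subset = indexed_array[[0, 2, 4]]
print(subset)  # ['a' 'c' 'e']
```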

It is also possible to filter numpy arrays by passing in a list of Boolean values corresponding to the array indexes. Note that the length of the Boolean list must match the length of the numpy array. For instance, if we want to select the first, third, and last elements of `indexed_array` using Booleans, we can do the following:
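A minimal sketch of Boolean masking on the same array:

```python
import numpy as np

indexed_array = np.array(['a', 'b', 'c', 'd', 'e'])

# The Boolean list lines up 1:1 with the array's indexes;
# True keeps an element, False drops it
mask = [True, False, True, False, True]
subset = indexed_array[mask]
print(subset)  # ['a' 'c' 'e']
```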

Using Python’s list comprehension, we can also apply logical arguments to translate nearly any list into a list of Booleans. We can then use that list of Booleans to filter the indexes of the numpy array.
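As an illustrative sketch (the `scores` data here is made up):

```python
import numpy as np

scores = np.array([55, 82, 67, 91, 43])

# Translate the values into Booleans with a list comprehension...
passing = [score >= 60 for score in scores]

# ...then use that Boolean list to filter the array itself
print(scores[passing])  # [82 67 91]
```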

Numpy arrays use a mechanism called broadcasting to apply element-wise operations on each element in an array. This allows us to compute Boolean values using simple logical arguments, as can be seen in the following example.
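A sketch of the same kind of filter written with broadcasting (made-up data):

```python
import numpy as np

scores = np.array([55, 82, 67, 91, 43])

# The comparison broadcasts across every element, producing a
# Boolean array in a single expression -- no explicit loop needed
mask = scores >= 60
print(scores[mask])  # [82 67 91]
```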

Putting this all together, we can combine Boolean indexing, list comprehension, and broadcasting to filter numpy arrays in a variety of ways.
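For example, combining all three techniques on a made-up array of temperature readings:

```python
import numpy as np

temperatures = np.array([12, 25, 31, 8, 19, 28])

# Broadcasting: Boolean array marking readings above 20
warm = temperatures > 20

# List comprehension: Boolean list marking even readings
even = [t % 2 == 0 for t in temperatures]

# Combine the two masks element-wise, then use Boolean indexing
combined = warm & np.array(even)
print(temperatures[combined])  # [28]
```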

### Filtering Pandas rows

Because Pandas data structures are built on numpy arrays, we can use many of the techniques covered in the previous section to filter Pandas rows. Pandas also offers additional functionality, like the `.query()` function, that provides powerful filtering tools not available with vanilla numpy arrays.

##### Row indexing

Similar to numpy arrays, we can use indexing to filter Pandas rows. For instance, we can return all rows corresponding to the `'Apple'` and `'Amazon'` indexes using the Pandas native `.loc[row_indexes, columns]` function.
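A sketch with a hypothetical DataFrame indexed by company name (the column values are illustrative):

```python
import pandas as pd

# Hypothetical DataFrame indexed by company name
companies = pd.DataFrame(
    {'sector': ['Tech', 'Retail', 'Tech', 'Auto'],
     'founded': [1976, 1994, 1975, 2003]},
    index=['Apple', 'Amazon', 'Microsoft', 'Tesla'])

# .loc selects rows by label; ':' keeps every column
subset = companies.loc[['Apple', 'Amazon'], :]
print(subset)
```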

Or we can return all rows corresponding to numerical indexes `[0:2]` using the Pandas native `.iloc[numerical_row_indexes, columns]` function.
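Continuing with the same hypothetical company DataFrame:

```python
import pandas as pd

companies = pd.DataFrame(
    {'sector': ['Tech', 'Retail', 'Tech', 'Auto'],
     'founded': [1976, 1994, 1975, 2003]},
    index=['Apple', 'Amazon', 'Microsoft', 'Tesla'])

# .iloc selects rows by integer position; the slice end is
# exclusive, so [0:2] returns the first two rows
subset = companies.iloc[0:2, :]
print(subset)
```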

##### Conditionals

Similarly to numpy arrays, we can filter rows in Pandas data structures by passing in a list of Boolean values that correspond 1:1 with the indexes of each row. Let’s explore some ways to filter rows in Pandas using Boolean lists below.

**Broadcasting**

We can create Pandas filters by applying element-wise operations on each element in a Series.
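A sketch using a hypothetical Series of state populations (approximate figures in millions):

```python
import pandas as pd

# Hypothetical Series of state populations in millions
populations = pd.Series([39.0, 29.5, 21.8, 19.8],
                        index=['California', 'Texas', 'Florida', 'New York'])

# The comparison broadcasts over every element, yielding a Boolean
# Series that can filter the original Series directly
filtered = populations[populations > 20]
print(filtered)
```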

**List comprehension**

List comprehensions also enable us to create a list of Booleans to filter Pandas rows by iterating over a list and applying logic to each element.
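A sketch with made-up temperature readings:

```python
import pandas as pd

# Hypothetical temperature readings
df = pd.DataFrame({'city': ['Boston', 'Austin', 'Denver', 'Dallas'],
                   'temp_f': [58, 91, 73, 88]})

# Iterate over the column's values, applying logic to each element
mask = [temp > 80 for temp in df['temp_f']]
hot = df[mask]
print(hot)
```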

**Compound Logical Statements**

We can also apply compound logical statements to describe more complex criteria across multiple Series.
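For instance, with the same kind of made-up data:

```python
import pandas as pd

df = pd.DataFrame({'city': ['Boston', 'Austin', 'Denver', 'Dallas'],
                   'state': ['MA', 'TX', 'CO', 'TX'],
                   'temp_f': [58, 91, 73, 88]})

# Combine conditions on multiple Series with & (and) / | (or);
# each condition needs its own parentheses
hot_texas = df[(df['state'] == 'TX') & (df['temp_f'] > 85)]
print(hot_texas)
```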

##### Querying

As previously mentioned, Pandas provides functionality beyond what is possible with numpy arrays. For instance, Pandas has a native `.query()` function that lets us write SQL-like statements to filter DataFrames.
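The same hypothetical filter as above, expressed as a query string:

```python
import pandas as pd

df = pd.DataFrame({'city': ['Boston', 'Austin', 'Denver', 'Dallas'],
                   'state': ['MA', 'TX', 'CO', 'TX'],
                   'temp_f': [58, 91, 73, 88]})

# .query() takes a SQL-like string that references columns by name
hot_texas = df.query("state == 'TX' and temp_f > 85")
print(hot_texas)
```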

##### Indexing vs Conditionals vs Querying

With so many ways to filter Pandas rows, you may be wondering how to choose which technique to apply. Indexing and conditionals are generally the fastest filtering methods. Querying usually results in the cleanest code, especially for more complex filtering statements, but it generally takes the longest to execute. Although these heuristics are a useful starting point, understanding which technique best fits which scenario will come with practice.

### Selecting Pandas Columns

Selecting columns in Pandas DataFrames is generally more straightforward than filtering rows. Below we will examine how to filter DataFrame columns based on index, name, and data type.

##### Selecting using Indexes

Similar to Pandas rows, DataFrame columns can be filtered on their labels using the `.loc` function and on numerical indexes using the `.iloc` function.
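A sketch of both accessors on a small made-up frame:

```python
import pandas as pd

df = pd.DataFrame({'city': ['Boston', 'Austin'],
                   'state': ['MA', 'TX'],
                   'temp_f': [58, 91]})

# Select columns by name with .loc (':' keeps every row)...
by_name = df.loc[:, ['city', 'temp_f']]

# ...or by integer position with .iloc
by_position = df.iloc[:, 0:2]
print(by_name.columns.tolist())      # ['city', 'temp_f']
print(by_position.columns.tolist())  # ['city', 'state']
```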

##### Selecting using Column Names

Pandas DataFrames also make it easy to select columns by passing in a list of column names.
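For instance (made-up data):

```python
import pandas as pd

df = pd.DataFrame({'city': ['Boston', 'Austin'],
                   'state': ['MA', 'TX'],
                   'temp_f': [58, 91]})

# Passing a list of column names returns a DataFrame
# containing just those columns
subset = df[['city', 'temp_f']]
print(subset.columns.tolist())  # ['city', 'temp_f']
```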

##### Selecting using Data Types

Each column in a Pandas DataFrame has an assigned data type (dtype). DataFrame columns can also be filtered by selecting only the columns that match certain dtypes.
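A sketch using the `select_dtypes` function (made-up data):

```python
import pandas as pd

df = pd.DataFrame({'city': ['Boston', 'Austin'],
                   'temp_f': [58, 91],
                   'rainfall_cm': [43.6, 34.2]})

# select_dtypes keeps only columns matching (or excluding) the given dtypes
numeric_cols = df.select_dtypes(include='number')
other_cols = df.select_dtypes(exclude='number')
print(numeric_cols.columns.tolist())  # ['temp_f', 'rainfall_cm']
print(other_cols.columns.tolist())    # ['city']
```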

### Common Scenarios to Use Pandas Filters

There are limitless ways we can apply these techniques to filter Pandas data structures. To help reinforce some of these concepts, we will go through some of the most common scenarios for filtering Pandas rows.

Below we have a DataFrame called `blizzards` containing the names, dates, snowfall (in cm), and locations of the biggest blizzards in US history. We will use this DataFrame to walk through some common questions Pandas users have about filtering.
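Since the original workbook's data isn't embedded here, the sketch below builds a small hypothetical stand-in for `blizzards` with the same shape (the event names, dates, snowfall amounts, and locations are illustrative, not historical records):

```python
import pandas as pd

# Hypothetical stand-in for the blizzards DataFrame;
# all values below are illustrative only
blizzards = pd.DataFrame(
    {'date': ['1888-03-11', '1993-03-12', '1967-01-26', '2016-01-22'],
     'snowfall_cm': [127, 142, 58, 91],
     'location': ['New York', 'Texas', 'Illinois', 'New Jersey']},
    index=['Great Blizzard', 'Storm of the Century', 'Chicago Snowstorm',
           'Winter Storm Jonas'])
print(blizzards)
```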

##### Filtering by Substring Criteria

Filtering on substring criteria can be useful when you want to select rows based on partial string matches for a subset of the column values. For example, we could do the following if we wanted to filter our `blizzards` DataFrame down to just the events with `Snow` in the name (index).

##### Filtering on Dates

Filtering on dates is also a very common use case. Let’s say we wanted to filter `blizzards` rows down to just the events in the 20th century (1900-1999). Pandas enables us to do this without explicitly casting the date values into datetime objects.
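One way this works, sketched with the hypothetical stand-in data: when dates are stored as ISO-formatted strings, lexicographic comparison matches chronological order, so `.between()` filters correctly with no datetime conversion.

```python
import pandas as pd

# Illustrative stand-in data for the blizzards DataFrame
blizzards = pd.DataFrame(
    {'date': ['1888-03-11', '1993-03-12', '1967-01-26', '2016-01-22'],
     'snowfall_cm': [127, 142, 58, 91],
     'location': ['New York', 'Texas', 'Illinois', 'New Jersey']},
    index=['Great Blizzard', 'Storm of the Century', 'Chicago Snowstorm',
           'Winter Storm Jonas'])

# ISO-formatted date strings sort chronologically, so string
# comparison suffices -- no casting to datetime required
twentieth = blizzards[blizzards['date'].between('1900-01-01', '1999-12-31')]
print(twentieth.index.tolist())
```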

##### Filtering Using ‘in’ and ‘not in’

Oftentimes we will want to select only rows with values that are part of (or not part of) a pre-determined set. For instance, we can look at only the major snowfall events that took place in `New York` and `Texas`.
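A sketch with the hypothetical stand-in data:

```python
import pandas as pd

# Illustrative stand-in data for the blizzards DataFrame
blizzards = pd.DataFrame(
    {'date': ['1888-03-11', '1993-03-12', '1967-01-26', '2016-01-22'],
     'snowfall_cm': [127, 142, 58, 91],
     'location': ['New York', 'Texas', 'Illinois', 'New Jersey']},
    index=['Great Blizzard', 'Storm of the Century', 'Chicago Snowstorm',
           'Winter Storm Jonas'])

# .isin() keeps rows whose value is part of the given set
in_ny_tx = blizzards[blizzards['location'].isin(['New York', 'Texas'])]
print(in_ny_tx.index.tolist())
```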

We can also use similar techniques to filter rows to **exclude** events that took place in `New York` and `Texas`.
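The inverted version of the same sketch:

```python
import pandas as pd

# Illustrative stand-in data for the blizzards DataFrame
blizzards = pd.DataFrame(
    {'date': ['1888-03-11', '1993-03-12', '1967-01-26', '2016-01-22'],
     'snowfall_cm': [127, 142, 58, 91],
     'location': ['New York', 'Texas', 'Illinois', 'New Jersey']},
    index=['Great Blizzard', 'Storm of the Century', 'Chicago Snowstorm',
           'Winter Storm Jonas'])

# The ~ operator inverts the mask, excluding the listed locations
outside = blizzards[~blizzards['location'].isin(['New York', 'Texas'])]
print(outside.index.tolist())
```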

##### Operator Chaining

Operator chaining enables us to perform complex filtering logic. This is also where the DataFrame `.query()` function becomes especially useful. For instance, let’s filter our `blizzards` DataFrame down to only rows that had greater than 100 cm of snowfall AND occurred after 1900, OR took place in a region whose name starts with `'New'`.
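Both styles sketched against the hypothetical stand-in data (note that using the `.str` accessor inside a query string requires `engine='python'`):

```python
import pandas as pd

# Illustrative stand-in data for the blizzards DataFrame
blizzards = pd.DataFrame(
    {'date': ['1888-03-11', '1993-03-12', '1967-01-26', '2016-01-22'],
     'snowfall_cm': [127, 142, 58, 91],
     'location': ['New York', 'Texas', 'Illinois', 'New Jersey']},
    index=['Great Blizzard', 'Storm of the Century', 'Chicago Snowstorm',
           'Winter Storm Jonas'])

# Chained conditions: parentheses control precedence between & and |
mask = (
    ((blizzards['snowfall_cm'] > 100) & (blizzards['date'] > '1900'))
    | blizzards['location'].str.startswith('New')
)
chained = blizzards[mask]
print(chained.index.tolist())

# The same filter as a .query() string
queried = blizzards.query(
    "(snowfall_cm > 100 and date > '1900') or location.str.startswith('New')",
    engine='python')
print(queried.index.tolist())
```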

### Conclusion & further reading

We should now have a better understanding of how to filter with numpy arrays, how to apply and build off those techniques to filter Pandas rows, how to select Pandas columns, and when to use different Pandas filtering techniques. Although we covered a lot in this post, we have only scratched the surface of using Pandas to perform data analysis and data engineering.
