How to Drop Duplicate Rows Across Multiple Columns in a DataFrame

Aporia Team · 2 min read · Sep 06, 2022

Duplicate rows in a DataFrame make the results of our analysis unreliable or simply wrong, and they also waste memory and computation.

In this short how-to article, we will learn how to drop duplicate rows in Pandas and PySpark DataFrames.

How to Drop Duplicate Rows Across Multiple Columns in a DataFrame?

Pandas

We can use the drop_duplicates function for this task. By default, it drops rows that are identical, which means the values in all the columns are the same.

df = df.drop_duplicates()
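As a quick sketch of the default behavior (the column names and values below are made up for illustration):

import pandas as pd

# hypothetical sample data: the second row fully duplicates the first
df = pd.DataFrame({"f1": [1, 1, 2], "f2": ["a", "a", "b"]})

df = df.drop_duplicates()
# rows remaining: (1, "a") and (2, "b")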

In some cases, matching values in certain columns are enough for rows to be considered duplicates. The subset parameter selects which columns to check when detecting duplicates.

df = df.drop_duplicates(subset=["f1", "f2"])
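A minimal sketch of the subset behavior, again with made-up data; rows that match on f1 and f2 count as duplicates even if other columns differ:

import pandas as pd

# hypothetical data: rows 0 and 1 match on f1 and f2 but differ in f3
df = pd.DataFrame({
    "f1": [1, 1, 2],
    "f2": ["a", "a", "b"],
    "f3": [10, 20, 30],
})

df = df.drop_duplicates(subset=["f1", "f2"])
# row 1 is dropped even though its f3 value is different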

By default, the first occurrence of a set of duplicate rows is kept and the others are dropped. We also have the option to keep the last occurrence instead.

# keep the last occurrence
df = df.drop_duplicates(subset=["f1", "f2"], keep="last")
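Continuing the same made-up example, keep="last" reverses which occurrence survives:

import pandas as pd

df = pd.DataFrame({
    "f1": [1, 1, 2],
    "f2": ["a", "a", "b"],
    "f3": [10, 20, 30],
})

df = df.drop_duplicates(subset=["f1", "f2"], keep="last")
# row 0 is dropped; row 1 is kept, so its f3 value (20) survives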

PySpark

The dropDuplicates function can be used for removing duplicate rows.

df = df.dropDuplicates()
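A self-contained sketch, assuming a local SparkSession and the same made-up data as above:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# hypothetical sample data with one fully duplicated row
df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["f1", "f2"])

df = df.dropDuplicates()
# one of the (1, "a") rows is removed; (1, "a") and (2, "b") remain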

It can also check only some of the columns when determining duplicate rows.

df = df.dropDuplicates(["f1","f2"])
