How to display a DataFrame in Python
Discover ways to display a Python DataFrame. Get tips, see real-world uses, and learn how to debug common errors in your code.

A clear view of your pandas DataFrame is essential for data analysis. Python provides multiple methods to display data, from basic console outputs to styled presentations perfect for reports and dashboards.
In this article, you'll explore various techniques to show your data. You will find practical tips, real-world applications, and advice on debugging common issues for effective DataFrame visualization.
Basic display of a DataFrame
```python
import pandas as pd

data = {'Name': ['Alice', 'Bob', 'Charlie'], 'Age': [25, 30, 35]}
df = pd.DataFrame(data)
print(df)
```

Output:

```
      Name  Age
0    Alice   25
1      Bob   30
2  Charlie   35
```
The simplest way to view your DataFrame is with the print() function. This command outputs a clean, text-based table directly to your console. It’s an essential tool for quick checks during development, allowing you to instantly verify the contents and structure of your df.
The output automatically includes the index, column headers, and data aligned in a readable format. This default representation is a core feature of pandas, providing a quick and effective way to understand your dataset without any extra configuration.
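If you need the same text table as a string, for example to write it to a log file or drop the row index, DataFrame.to_string() produces it directly. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Charlie'], 'Age': [25, 30, 35]})

# to_string() returns the text table as a string instead of printing it,
# and accepts options such as index=False to hide the row labels
table = df.to_string(index=False)
print(table)
```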
Basic display techniques
Moving beyond a simple print(), you can inspect specific parts of your data with head() and tail() or enhance its presentation with display() and set_option().
Using head() and tail() to view portions of data
```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
print(df.head(3))  # Show first 3 rows
```

Output:

```
          A         B         C
0  0.469112 -0.282863 -1.509059
1 -1.135632  1.212112 -0.173215
2  0.119209 -1.044236 -0.861849
```
When dealing with large datasets, you don't always need to see everything at once. The head() and tail() methods are perfect for this, letting you preview a small section of your DataFrame without overwhelming your console.
- head() shows the first few rows.
- tail() shows the last few rows.
Both default to showing five rows, but you can pass a number, like in df.head(3), to see a specific amount.
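tail() works the same way from the other end of the DataFrame; a quick sketch:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])

# tail() mirrors head(): no argument shows the last 5 rows,
# an integer argument limits it further
last_two = df.tail(2)
print(last_two)

# Row labels are preserved, so the preview still ends at index 9
print(last_two.index.tolist())
```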
Using display() in Jupyter notebooks
```python
import pandas as pd
from IPython.display import display

data = {'Name': ['Alice', 'Bob', 'Charlie'], 'Score': [85, 92, 78]}
df = pd.DataFrame(data)
display(df)
```

Output:

```
      Name  Score
0    Alice     85
1      Bob     92
2  Charlie     78
```
In environments like Jupyter Notebook, the display() function from IPython.display provides a more polished output than a standard print(). It renders your DataFrame as a clean HTML table, which is much easier on the eyes and better for presentations.
- The key advantage is this enhanced visual formatting—perfect for analysis and reports inside a notebook.
While Jupyter often calls display() implicitly on the last line of a cell, explicitly using it ensures your DataFrame is shown exactly where you want it, not just at the end.
Customizing display options with set_option()
```python
import pandas as pd

pd.set_option('display.max_rows', 10)
pd.set_option('display.max_columns', 5)
pd.set_option('display.width', 80)
df = pd.DataFrame({"A": range(20), "B": range(20, 40)})
print(df)
```

Output:

```
     A   B
0    0  20
1    1  21
2    2  22
3    3  23
4    4  24
..  ..  ..
15  15  35
16  16  36
17  17  37
18  18  38
19  19  39

[20 rows x 2 columns]
```
The set_option() function lets you customize the default display settings for all your DataFrames. It's a great way to manage large datasets that might otherwise clutter your console. These settings are global, so they'll apply to every DataFrame you show.
- display.max_rows: Controls how many rows are displayed before the output is truncated.
- display.max_columns: Sets the maximum number of columns to show.
- display.width: Adjusts the line width before wrapping.
Notice how setting max_rows to 10 makes pandas display a summarized view of the larger DataFrame.
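Because these settings are global, it helps to know how to undo them. pandas provides pd.reset_option() to restore a setting's library default; a small sketch:

```python
import pandas as pd

pd.set_option('display.max_rows', 10)
print(pd.get_option('display.max_rows'))  # 10

# reset_option() restores the library default (60 for display.max_rows)
pd.reset_option('display.max_rows')
print(pd.get_option('display.max_rows'))  # 60
```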
Advanced display techniques
Beyond simple viewing, you can uncover deeper insights by highlighting values, transposing your layout, or generating statistical summaries with functions like describe().
Highlighting data with conditional formatting
```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(3, 3), columns=['A', 'B', 'C'])
# Create a boolean mask for positive values
positive_mask = df > 0
print(df[positive_mask])  # Show only positive values, others as NaN
```

Output:

```
          A         B         C
0       NaN  0.124121  0.215798
1  0.358537       NaN       NaN
2       NaN  0.344426  0.411865
```
Conditional formatting lets you zero in on specific data by creating a boolean mask. The expression df > 0 generates a new DataFrame, positive_mask, filled with True or False values based on whether each number is positive. This mask acts as a filter for your data.
- When you apply the mask with df[positive_mask], you're telling pandas to show only the values that meet your condition.
- Values that don't meet the condition, where the mask is False, are replaced with NaN.
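The same masked view can be produced with DataFrame.where(), which also lets you choose a replacement value other than NaN; a sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': [-1.0, 2.0], 'B': [3.0, -4.0]})

# where() keeps values for which the condition is True
# and replaces the rest, NaN by default
masked = df.where(df > 0)

# An explicit replacement value can be supplied instead of NaN
zero_filled = df.where(df > 0, other=0)
print(zero_filled)
```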
Transposing DataFrames for different views
```python
import pandas as pd

data = {'Name': ['Alice', 'Bob'], 'Age': [25, 30], 'City': ['New York', 'Boston']}
df = pd.DataFrame(data)
transposed_df = df.T  # Transpose rows and columns
print(transposed_df)
```

Output:

```
             0       1
Name     Alice     Bob
Age         25      30
City  New York  Boston
```
Transposing your DataFrame with the .T attribute flips its orientation, turning rows into columns and columns into rows. This simple operation provides a completely different perspective on your data's structure.
- It's especially useful for "wide" DataFrames where you have many columns and few rows.
- This new layout makes it easier to compare all the features for a single record at a glance.
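After transposing, each original row becomes a column keyed by its old index label, so you can pull every attribute of one record as a single Series; a sketch:

```python
import pandas as pd

data = {'Name': ['Alice', 'Bob'], 'Age': [25, 30], 'City': ['New York', 'Boston']}
df = pd.DataFrame(data)

# Column 0 of the transposed frame holds all fields of the first record
record = df.T[0]
print(record)
```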
Creating summary views with describe()
```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
summary = df.describe()
print(summary)
```

Output:

```
               A         B          C          D
count  10.000000  10.00000  10.000000  10.000000
mean    0.016014  -0.05144  -0.167298  -0.077999
std     0.941140   1.02735   0.961496   0.923334
min    -1.290465  -1.62432  -2.089087  -1.541663
25%    -0.603954  -0.66045  -0.762050  -0.553171
50%    -0.171243  -0.23232  -0.090741  -0.211805
75%     0.712183   0.65318   0.569064   0.463263
max     1.513906   1.71824   1.326228   1.261158
```
The describe() method is your go-to for a quick statistical snapshot of your data. It automatically calculates key descriptive statistics for all numerical columns, giving you a high-level understanding of your dataset's distribution and scale with a single command.
- It provides essential metrics like the count of non-empty values, the mean (average), and the std (standard deviation).
- You also get the min and max values, along with quartiles, which help you see the data's range and spread at a glance.
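By default, describe() summarizes only numeric columns. Passing include='all' brings text columns into the report, with counts of unique values and the most frequent entry; a sketch:

```python
import pandas as pd

df = pd.DataFrame({
    'City': ['NY', 'NY', 'LA'],
    'Temp': [20.0, 22.0, 25.0]
})

# include='all' adds object columns, reporting unique, top (mode), and freq
summary = df.describe(include='all')
print(summary)
```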
Move faster with Replit
Replit is an AI-powered development platform that transforms natural language into working applications. Describe what you want to build, and Replit Agent creates it—complete with databases, APIs, and deployment.
For the DataFrame display techniques we've explored, Replit Agent can turn them into production tools:
- Build a data profiling tool that automatically generates and displays statistical summaries using describe() for any uploaded CSV.
- Create a financial dashboard that highlights transactions above a certain threshold, using conditional formatting to draw attention to key figures.
- Deploy a feature comparison utility that takes a dataset and transposes it with .T to easily compare attributes for a single item.
Try building your next data tool with Replit Agent. Describe your app idea, and it will write the code, test it, and fix issues automatically, all in your browser.
Common errors and challenges
Even with the right tools, you might encounter issues like truncated views, incorrect data types, or confusing warnings when displaying your data.
Fixing truncated output with set_option()
If your DataFrame appears cut off with ellipses, it’s because pandas truncates large outputs by default. You can easily override this behavior using pd.set_option(). To see all your data, set display.max_rows and display.max_columns to None.
Converting data types with to_numeric()
When a column of numbers is treated as text (an object dtype), methods like describe() won't work as expected. This often happens when data is imported with non-numeric characters. Use the pd.to_numeric() function to convert the column, which enables proper calculations and analysis.
Resolving SettingWithCopyWarning with the copy() method
The SettingWithCopyWarning is a common flag that appears when you modify a slice of a DataFrame. It warns that your change might not be saved to the original data. The best practice is to create an explicit, independent version of the slice using the .copy() method before making any modifications.
Fixing truncated output with set_option()
When your DataFrame is large, pandas automatically shortens the output with ellipses to keep your console tidy. While helpful, this can hide data you need to see. The code below shows what this truncated view looks like in practice.
```python
import pandas as pd

large_df = pd.DataFrame({'A': range(100), 'B': range(100, 200)})
print(large_df)  # Output will be truncated with ...
```
Creating a DataFrame with 100 rows exceeds the default display limit, which is why pandas summarizes the output. The code below demonstrates how to adjust this setting to see every row.
```python
import pandas as pd

large_df = pd.DataFrame({'A': range(100), 'B': range(100, 200)})
pd.set_option('display.max_rows', None)
print(large_df)  # Shows all rows
```
By calling pd.set_option('display.max_rows', None), you instruct pandas to remove its row display limit entirely. This is a global setting, so all DataFrames you print afterward will show in full.
- This is your go-to solution when you need to inspect every single row for detailed analysis or debugging, rather than seeing a summarized view. You can apply the same logic to columns with display.max_columns.
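If you'd rather not change the setting globally, pd.option_context() applies display options only inside a with block and restores the previous values on exit; a sketch:

```python
import pandas as pd

large_df = pd.DataFrame({'A': range(100), 'B': range(100, 200)})

# Inside the block, the row limit is lifted and every row is rendered
with pd.option_context('display.max_rows', None):
    full_text = repr(large_df)

# Outside the block, the previous limit applies and long output is truncated
truncated_text = repr(large_df)
print(truncated_text)
```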
Converting data types with to_numeric()
It's a common issue: a column that looks numeric is actually stored as text, with an object dtype. This often happens with mixed data types, preventing you from performing mathematical operations like sum(). The code below demonstrates this exact problem.
```python
import pandas as pd

df = pd.DataFrame({
    'ID': ['001', '002', '003'],
    'Value': ['10', 20, 30]  # Mixed strings and integers
})
print(df['Value'].sum())  # Fails due to string values
```
The sum() operation fails because the first value, '10', is a string, not a number. You can't add text to integers. The code below shows how to properly prepare the column for calculation.
```python
import pandas as pd

df = pd.DataFrame({
    'ID': ['001', '002', '003'],
    'Value': ['10', 20, 30]  # Mixed strings and integers
})
df['Value'] = pd.to_numeric(df['Value'])
print(df['Value'].sum())  # Now correctly returns 60
```
The fix is to apply pd.to_numeric() to the entire column. This function attempts to change every value into a number, making it possible to perform mathematical operations like sum(). Once converted, the calculation succeeds because you're adding numbers, not text.
- Keep an eye out for this issue when importing data, especially from CSVs, as numeric values are often read as strings. Checking your column dtypes after loading data is a good habit.
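If a column also contains genuinely non-numeric entries, pd.to_numeric() will raise an error by default. Its errors='coerce' parameter converts what it can and turns unparseable values into NaN; a sketch:

```python
import pandas as pd

messy = pd.Series(['10', '20', 'N/A', '40'])

# errors='coerce' replaces values that cannot be parsed with NaN
clean = pd.to_numeric(messy, errors='coerce')

# sum() skips NaN by default, so the valid values still add up
print(clean.sum())  # 70.0
```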
Resolving SettingWithCopyWarning with the copy() method
The SettingWithCopyWarning is a heads-up from pandas that you might be changing a copy of your data, not the original DataFrame. This often happens when you filter and then assign new values. The code below shows a common scenario.
```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]})
filtered = df[df['A'] > 2]
filtered['B'] = 0  # Triggers SettingWithCopyWarning
print(filtered)
```
The warning is triggered by so-called "chained assignment." You're modifying a slice created with df[df['A'] > 2], and pandas can't guarantee whether the change will propagate back to the original DataFrame. The code below shows how to resolve this ambiguity.
```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]})
filtered = df[df['A'] > 2].copy()
filtered['B'] = 0  # No warning with explicit copy
print(filtered)
```
The fix is to create an explicit, independent DataFrame using the .copy() method. This tells pandas you're intentionally working on a separate slice, not the original data, which resolves the ambiguity and silences the warning.
- This is the recommended approach whenever you filter a DataFrame and then need to modify the resulting subset. It ensures your changes are predictable and contained within the new copy.
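If your intent is the opposite, changing the original DataFrame rather than a copy, a single .loc assignment does it in one step and raises no warning; a sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]})

# One indexing operation: select the rows where A > 2 and set B there
df.loc[df['A'] > 2, 'B'] = 0
print(df)
```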
Real-world applications
Beyond just viewing data, these techniques are essential for practical tasks like summarizing sales reports or reshaping sensor data with unstack().
Displaying sales reports with totals
Adding total rows and columns to a sales report provides a quick, high-level summary of performance across different categories and time periods.
```python
import pandas as pd

# Monthly sales data for a retail company
sales_data = {
    'Month': ['Jan', 'Feb', 'Mar', 'Apr'],
    'Laptops': [120000, 100000, 130000, 115000],
    'Phones': [95000, 110000, 105000, 120000]
}
sales_df = pd.DataFrame(sales_data).set_index('Month')

# Display original sales data
print("Monthly Sales Report:")
print(sales_df)

# Add a summary row and column for better analysis
sales_df['Total'] = sales_df.sum(axis=1)
sales_df.loc['Total'] = sales_df.sum()
print("\nSales Report with Totals:")
print(sales_df)
```
This code first structures monthly sales data into a DataFrame, setting the 'Month' column as the index for clear organization. It then systematically adds summary calculations to create a more comprehensive report.
- First, a 'Total' column is created using sales_df.sum(axis=1). The axis=1 argument sums the values horizontally, giving you the total sales for each month.
- Next, a 'Total' row is added using sales_df.loc['Total'] = sales_df.sum(). This sums vertically down each column, calculating the total for each product category and a final grand total.
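The same column-wise and row-wise sums extend naturally to derived views, such as each category's share of total revenue. A sketch, reusing the figures from the report above:

```python
import pandas as pd

sales_df = pd.DataFrame(
    {'Laptops': [120000, 100000, 130000, 115000],
     'Phones': [95000, 110000, 105000, 120000]},
    index=['Jan', 'Feb', 'Mar', 'Apr'])

# Grand total across every month and category
grand_total = sales_df.to_numpy().sum()

# Percentage contribution of each product line to overall revenue
share = sales_df.sum() / grand_total * 100
print(share.round(1))
```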
Analyzing sensor data with unstack()
The unstack() method reshapes long-format sensor data by pivoting an index level into columns, which makes it much easier to compare readings from different sensors side-by-side.
```python
import pandas as pd
import numpy as np

# Create sample IoT sensor data with multi-level indexing
dates = pd.date_range('2023-01-01', periods=3, freq='D')
sensors = ['Temp1', 'Temp2', 'Humid1']
index = pd.MultiIndex.from_product([dates, sensors],
                                   names=['Date', 'Sensor'])

# Generate random readings
np.random.seed(0)
readings = np.round(np.random.uniform(10, 50, len(index)), 1)
sensor_df = pd.DataFrame({'Reading': readings}, index=index)

# Display with multi-level index
print("Raw Sensor Readings:")
print(sensor_df)

# Reshape for better analysis using unstack
reshaped_df = sensor_df.unstack(level='Sensor')
print("\nReshaped Sensor Readings:")
print(reshaped_df)
```
This code demonstrates how to reshape data using a MultiIndex. It starts by creating a long-format DataFrame where each row is a single sensor reading, indexed by both date and sensor type.
- The MultiIndex.from_product() function is used to create this two-level index from all combinations of dates and sensors.
- The unstack() method then reorganizes the data. It takes the 'Sensor' index level and converts its unique values into separate columns.
This transforms the table into a wide format, with one row per date showing all sensor readings for that day.
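The operation is reversible: stack() moves column labels back into the index, so you can round-trip between long and wide layouts. A small sketch with simplified labels:

```python
import pandas as pd

index = pd.MultiIndex.from_product(
    [['2023-01-01', '2023-01-02'], ['Temp1', 'Temp2']],
    names=['Date', 'Sensor'])
long_df = pd.DataFrame({'Reading': [20.1, 21.5, 19.8, 22.0]}, index=index)

# Wide layout: one column per sensor
wide_df = long_df.unstack(level='Sensor')

# stack() moves the 'Sensor' column level back into the row index
back = wide_df.stack()
print(back)
```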
Get started with Replit
Turn these display techniques into a real application. Give Replit Agent a prompt like "a tool that previews CSVs with head() and describe()" or "a dashboard that highlights sales above a target".
The agent writes the code, tests for errors, and handles deployment for you. Start building with Replit.
Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.

