Convert Unix Timestamps in Pandas — Bulk DataFrame Methods
Pandas converts Unix timestamps faster than a plain Python loop because it vectorizes the operation: where calling datetime.fromtimestamp row by row pays interpreter overhead on every value, pd.to_datetime handles millions of rows in a single call. The trade-off is that you have to know which function to use; Pandas gives you several ways to do the same conversion, and only the vectorized one is fast.
This guide covers the right method (and the wrong ones) for converting Unix timestamps in DataFrames.
pd.to_datetime — The Right Way for Bulk Conversion
For converting a column of Unix timestamps in a DataFrame, pd.to_datetime with the unit argument is the answer. It is vectorized, handles all four precision levels, and is the only method you should use for more than a few rows.
import pandas as pd
df = pd.DataFrame({'ts': [1711000000, 1711000060, 1711000120]})
# Seconds
df['datetime'] = pd.to_datetime(df['ts'], unit='s', utc=True)
# Milliseconds
df['datetime'] = pd.to_datetime(df['ts'], unit='ms', utc=True)
# Microseconds
df['datetime'] = pd.to_datetime(df['ts'], unit='us', utc=True)
# Nanoseconds (Pandas internal default)
df['datetime'] = pd.to_datetime(df['ts'], unit='ns', utc=True)
The utc=True argument is the most important detail in this entire guide. Without it, Pandas returns naive datetimes that you cannot safely combine with timezone-aware data later. Forgetting this argument is one of the most common sources of timezone bugs in Pandas code.
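A quick way to see what utc=True changes is to compare the resulting dtypes. A minimal sketch, using an arbitrary sample value:

```python
import pandas as pd

df = pd.DataFrame({'ts': [1711000000]})

naive = pd.to_datetime(df['ts'], unit='s')            # naive: no timezone in the dtype
aware = pd.to_datetime(df['ts'], unit='s', utc=True)  # aware: dtype carries UTC

print(naive.dtype)  # e.g. datetime64[ns]
print(aware.dtype)  # e.g. datetime64[ns, UTC]
```

The naive column and the aware column are different dtypes, which is exactly why they cannot be mixed or compared later.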
DataFrame Datetime to Unix Timestamp
# From a datetime column to Unix seconds (vectorized)
df['unix_seconds'] = df['datetime'].astype('int64') // 10**9
# Milliseconds
df['unix_ms'] = df['datetime'].astype('int64') // 10**6
# Microseconds
df['unix_us'] = df['datetime'].astype('int64') // 10**3
# Nanoseconds (Pandas native unit, no division needed)
df['unix_ns'] = df['datetime'].astype('int64')
Pandas internally stores datetimes as datetime64[ns] — nanoseconds since the Unix epoch as int64. Casting to int64 exposes the raw nanosecond value, then you divide to get the unit you actually want.
Use floor division (//), not regular division (/), so the result stays an integer. Regular division returns a float, and float64 carries only about 15-16 significant digits, which is not enough for nanosecond-scale integers.
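To see why floor division matters, round-trip a nanosecond-precision value (the timestamp below is an arbitrary example):

```python
import pandas as pd

ts_ns = 1711000000123456789                       # arbitrary nanosecond timestamp
s = pd.to_datetime(pd.Series([ts_ns]), unit='ns', utc=True)

exact = s.astype('int64') // 10**9                # floor division: exact integer seconds
lossy = s.astype('int64') / 10**9                 # float64: ~15-16 significant digits

print(exact.iloc[0])                              # 1711000000
# The float path has already rounded away the trailing digits:
print(int(lossy.iloc[0] * 10**9) == ts_ns)        # False
```

The integer path is exact at every unit; the float path silently corrupts the low-order digits of any 19-digit nanosecond count.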
Timezone Localization in Pandas
Pandas has a quirky rule: a column can be either entirely naive or entirely timezone-aware. You cannot mix.
# If you forgot utc=True at conversion time, localize after the fact
df['datetime'] = pd.to_datetime(df['ts'], unit='s') # naive
df['datetime'] = df['datetime'].dt.tz_localize('UTC') # now aware
# Convert from one zone to another
df['ny_time'] = df['datetime'].dt.tz_convert('America/New_York')
df['tokyo_time'] = df['datetime'].dt.tz_convert('Asia/Tokyo')
# Strip timezone for display (rarely the right choice)
df['display'] = df['datetime'].dt.tz_localize(None)
The distinction between tz_localize and tz_convert trips up everyone:
- tz_localize: attaches a timezone to a naive datetime without changing the displayed value
- tz_convert: converts a timezone-aware datetime to a different zone, changing the displayed value
Get this backwards and you shift your data by hours without realizing it. The rule: localize once at the start (after pd.to_datetime), then only convert from there.
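A minimal sketch of the difference, using an arbitrary naive value:

```python
import pandas as pd

s = pd.Series(pd.to_datetime(['2024-03-21 07:06:40']))        # naive

localized = s.dt.tz_localize('UTC')                           # same wall clock, now aware
converted = localized.dt.tz_convert('America/New_York')       # same instant, new wall clock

print(localized.iloc[0])   # 2024-03-21 07:06:40+00:00
print(converted.iloc[0])   # 2024-03-21 03:06:40-04:00
assert localized.iloc[0] == converted.iloc[0]                 # same moment in time
```

Localizing kept the displayed time and added an offset; converting changed the displayed time while preserving the instant, which is why the final assertion holds.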
The DST gotcha
Localizing to a timezone with DST can fail on ambiguous or nonexistent times. The 2 AM hour during a spring-forward transition does not exist; the 2 AM hour during fall-back happens twice. Pandas raises an error by default — pass ambiguous='infer' or nonexistent='shift_forward' to handle these explicitly.
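For example, the fall-back hour in New York in 2024 occurs twice, and tz_localize has to be told which copy you mean (the date below is chosen purely to illustrate):

```python
import pandas as pd

# 2024-11-03 01:30 happened twice in New York (clocks fell back at 02:00).
s = pd.Series(pd.to_datetime(['2024-11-03 01:30:00']))

# Default behaviour: raise on the ambiguous hour.
try:
    s.dt.tz_localize('America/New_York')
except Exception as exc:
    print(type(exc).__name__)                     # AmbiguousTimeError

# A boolean array flags each value: True = the DST copy, False = standard time.
first = s.dt.tz_localize('America/New_York', ambiguous=[True])    # EDT, -04:00
second = s.dt.tz_localize('America/New_York', ambiguous=[False])  # EST, -05:00
print(first.iloc[0], second.iloc[0])
```

ambiguous='infer' works too when the series is ordered, letting Pandas deduce the DST flag from the surrounding values.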
For one-off value verification, the free Unix timestamp converter auto-detects seconds vs milliseconds.
Try It Free — No Signup Required
Runs 100% in your browser. No data is collected, stored, or sent anywhere.
Open Free Unix Timestamp Converter
Frequently Asked Questions
How do I convert a column of Unix timestamps in Pandas?
pd.to_datetime(df["col"], unit="s", utc=True) for seconds, or unit="ms" for milliseconds. The utc=True argument is critical — without it the result is naive and you will hit timezone bugs later. This is the only method you should use for more than a few rows.
How do I convert Pandas datetime back to Unix timestamp?
df["col"].astype("int64") // 10**9 for seconds. Pandas internally stores datetimes as nanoseconds since epoch, so casting to int64 gives raw nanoseconds, then divide to get the unit you want. Use floor division for integer results.
Why do I get a TypeError comparing Pandas datetimes?
Pandas does not let you compare a naive datetime with a timezone-aware one — the result would be ambiguous. Either pass utc=True at conversion time so everything is aware, or call tz_localize on the naive column to add timezone info before comparison.
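A minimal reproduction of the error and its fix (the timestamp is arbitrary):

```python
import pandas as pd

aware = pd.to_datetime(pd.Series([1711000000]), unit='s', utc=True)

try:
    aware > pd.Timestamp('2024-03-21')            # naive cutoff: raises TypeError
except TypeError as exc:
    print(exc)

# Fix: make the comparison value timezone-aware too
mask = aware > pd.Timestamp('2024-03-21', tz='UTC')
print(mask.iloc[0])   # True (1711000000 is 2024-03-21 05:46:40 UTC)
```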
What is the difference between tz_localize and tz_convert in Pandas?
tz_localize attaches a timezone to a naive datetime without changing the displayed value. tz_convert changes a timezone-aware datetime to a different zone, shifting the displayed value. Localize once at the start, then only convert from there.
Why is my Pandas datetime column slow to filter?
You may be filtering with string comparisons (df[df["dt"] > "2024-01-01"]) which forces conversion on every row. Convert the comparison value to a Timestamp once: df[df["dt"] > pd.Timestamp("2024-01-01", tz="UTC")]. Make sure both sides are timezone-aware or both are naive.
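The pattern, sketched with a tiny frame (timestamps arbitrary):

```python
import pandas as pd

df = pd.DataFrame({'dt': pd.to_datetime([1711000000, 1711100000],
                                        unit='s', utc=True)})

cutoff = pd.Timestamp('2024-03-21', tz='UTC')     # build the Timestamp once
recent = df[df['dt'] > cutoff]                    # fast vectorized comparison
print(len(recent))   # 2
```

Building the Timestamp once, outside the filter expression, avoids repeated string parsing and keeps both sides of the comparison timezone-aware.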
How does Pandas handle nanosecond precision Unix timestamps?
Pandas stores datetime64[ns] as int64 nanoseconds since 1970, so nanosecond timestamps are the native format. Pass them directly with unit="ns" and you get full precision with no floating point loss. This is one place Pandas is better than the standard datetime module.
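A sketch of the contrast with the standard library (the timestamp value is arbitrary):

```python
import pandas as pd
from datetime import datetime, timezone

ts_ns = 1711000000123456789                        # arbitrary ns-precision timestamp

# Pandas: int64 nanoseconds in, int64 nanoseconds out, lossless
s = pd.to_datetime(pd.Series([ts_ns]), unit='ns', utc=True)
print(s.astype('int64').iloc[0] == ts_ns)          # True

# stdlib: must go through float seconds, so sub-microsecond digits are lost
dt = datetime.fromtimestamp(ts_ns / 1e9, tz=timezone.utc)
print(dt.microsecond)                              # microsecond resolution at best
```

The stdlib datetime type has no nanosecond field at all, so even a perfect float conversion could not carry the trailing digits.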

