
Decode Log Timestamps — dmesg, Syslog, Wireshark, and Splunk

Last updated: April 2026 · 7 min read

Table of Contents

  1. dmesg Timestamps
  2. Syslog and Journalctl
  3. Wireshark and tcpdump
  4. Splunk and Kafka
  5. Frequently Asked Questions

Every Linux log format encodes time differently. dmesg uses seconds since boot. syslog uses an RFC 3339-style ISO 8601 timestamp in newer (RFC 5424) deployments and a year-less custom format in older ones. Wireshark capture files store epoch seconds with microsecond precision. Splunk normalizes everything but exposes the raw _time field as a Unix timestamp.

When you are correlating events across multiple log sources during an incident, you need to convert all of them to a common format fast. This is the cheat sheet.

dmesg — Seconds Since Boot

dmesg shows kernel messages with timestamps that look like [1234567.890123]. This is seconds since the system booted, not seconds since the Unix epoch. The two are completely different and the confusion has wasted countless debugging hours.

$ dmesg | head -3
[    0.000000] Linux version 5.15.0-91-generic
[    1.234567] ACPI: Early table checksum verification disabled
[12345.678901] usb 1-2: new high-speed USB device

To convert dmesg timestamps to wall clock time, you need the boot time of the machine. The formula is: boot_time + dmesg_timestamp = wall_clock_time.

# Get boot time as a Unix timestamp (the btime field in /proc/stat)
$ awk '/^btime/ {print $2}' /proc/stat
1711000000

# Convert a dmesg timestamp by adding it to boot time
$ echo "1711000000 + 12345.678901" | bc
1711012345.678901

The easy way: use dmesg -T instead of plain dmesg. The -T flag converts to wall clock time automatically, and util-linux dmesg has supported it for years. One caveat: the conversion derives boot time from the current clock and uptime, so the printed times can be inaccurate if the system has been suspended.
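The same arithmetic takes a few lines of Python. A minimal sketch, assuming you already have the machine's boot time (the btime value above) as a Unix timestamp:

```python
from datetime import datetime, timezone

def dmesg_to_wallclock(boot_time: float, dmesg_ts: float) -> datetime:
    """Convert a dmesg since-boot timestamp to an absolute UTC datetime."""
    return datetime.fromtimestamp(boot_time + dmesg_ts, tz=timezone.utc)

# boot time 1711000000 plus the [12345.678901] dmesg stamp from above
print(dmesg_to_wallclock(1711000000, 12345.678901).isoformat())
```

This is exactly boot_time + dmesg_timestamp, just with the epoch-to-datetime conversion done for you.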

Syslog and journalctl — RFC 3339 and Custom Formats

Traditional syslog (rsyslog, syslog-ng) writes timestamps in a custom format like Mar 21 07:46:40. Note that this format has no year — it is implied to be the current year. Sort syslog by date across a year boundary and you get garbage.

Modern syslog with RFC 5424 uses ISO 8601: 2024-03-21T07:46:40.123456+00:00. Much better, but old log lines still exist with the legacy format.

# Convert traditional syslog format with date(1)
# date assumes the current year; -u interprets the time as UTC
$ date -u -d "Mar 21 07:46:40" +%s
1711007200

# journalctl supports Unix timestamps directly with --since/--until
$ journalctl --since "@1711000000"
$ journalctl --until "@1711086400"

# Display journalctl with epoch timestamps instead of dates
$ journalctl -o short-unix

If you are correlating events across servers, switch journalctl to -o short-unix output. Every line gets the actual Unix timestamp, which is comparable across machines without timezone confusion.
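The year-boundary problem above can be handled with a heuristic. A sketch in Python, assuming a log line cannot legitimately be in the future (clock skew aside):

```python
from datetime import datetime, timezone

def parse_legacy_syslog(ts: str, now: datetime) -> datetime:
    """Parse 'Mar 21 07:46:40' (no year), inferring the year from `now`.

    If the naive parse lands in the future, assume the line belongs to last
    year -- the classic case of a December entry read in January.
    """
    dt = datetime.strptime(ts, "%b %d %H:%M:%S").replace(
        year=now.year, tzinfo=timezone.utc)
    if dt > now:
        dt = dt.replace(year=now.year - 1)
    return dt

now = datetime(2025, 1, 2, tzinfo=timezone.utc)
print(parse_legacy_syslog("Dec 31 23:59:59", now))  # resolves to 2024
```

This is only a heuristic: it misattributes lines from clocks that are ahead of yours, which is one more reason to prefer -o short-unix when you can.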


Wireshark — Epoch Microseconds in Capture Files

PCAP files store timestamps as epoch seconds plus microseconds (the newer pcapng format can also record nanosecond resolution). Wireshark displays them in human-readable format by default, but you can switch to seconds since epoch in the View menu (View → Time Display Format → Seconds Since 1970-01-01).

When you export packets as JSON or CSV from Wireshark, the timestamp comes out as a floating-point Unix value like 1711000000.123456. To convert in code:

// JavaScript
const ts = 1711000000.123456;
const date = new Date(ts * 1000);

# Python
import datetime
ts = 1711000000.123456
dt = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)

For tcpdump, the -tttt flag prints absolute date and time, -ttttt prints timestamps relative to the first packet. -tt shows raw Unix epoch seconds with microseconds.
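To see where those seconds and microseconds actually live on disk, here is a minimal sketch that unpacks a classic pcap per-packet record header from synthetic bytes, assuming the common little-endian layout (four unsigned 32-bit fields: ts_sec, ts_usec, captured length, original length):

```python
import struct

# Build a synthetic classic-pcap packet record header:
# seconds, microseconds, captured length, original length.
record_header = struct.pack("<IIII", 1711000000, 123456, 60, 60)

# Parsing it back recovers the epoch timestamp with fractional seconds.
ts_sec, ts_usec, incl_len, orig_len = struct.unpack("<IIII", record_header)
timestamp = ts_sec + ts_usec / 1_000_000
```

Real files may be big-endian instead; the global header's magic number tells you which byte order to use.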

Splunk and Kafka — Internal Epoch Storage

Splunk _time field

Splunk stores all events with a _time field that is a Unix timestamp in seconds (with millisecond precision in newer versions). Even though the search UI displays formatted dates, the underlying value is always epoch.

To work with raw timestamps in SPL:

| eval epoch_ts = strftime(_time, "%s")
| eval unix_ms = strftime(_time, "%s%3N")

# Filter by epoch range
index=main earliest=1711000000 latest=1711086400

Splunk's earliest/latest accept raw Unix timestamps directly whenever the value is not a relative time modifier like -24h. This is useful for scripts that pass exact timestamps to the Splunk REST API.
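For such scripts, a small helper (the function name here is my own, not a Splunk API) that turns a UTC datetime window into the earliest/latest pair might look like this:

```python
from datetime import datetime, timezone

def epoch_window(start: datetime, end: datetime) -> str:
    """Format a datetime range as Splunk earliest/latest epoch modifiers."""
    return f"earliest={int(start.timestamp())} latest={int(end.timestamp())}"

start = datetime(2024, 3, 21, 5, 46, 40, tzinfo=timezone.utc)
end = datetime(2024, 3, 22, 5, 46, 40, tzinfo=timezone.utc)
print(f"index=main {epoch_window(start, end)}")
```

Using timezone-aware datetimes matters here: a naive datetime would be interpreted in the script's local timezone and silently shift the window.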

Kafka message timestamps

Kafka attaches a timestamp to every message — milliseconds since epoch by default. When you consume with kafka-console-consumer, add --property print.timestamp=true to see the value:

$ kafka-console-consumer --bootstrap-server localhost:9092 \
    --topic events --property print.timestamp=true
CreateTime:1711000000123  {"event":"login"}

The 13-digit value is milliseconds. Divide by 1000 for seconds, or paste straight into the free Unix timestamp converter which auto-detects the format.
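The divide-by-1000 step, plus a naive magnitude check to tell milliseconds from seconds, can be sketched in Python. The heuristic assumes present-day timestamps (13 digits for milliseconds, 10 for seconds), which holds until well past the year 2100:

```python
from datetime import datetime, timezone

def from_kafka_timestamp(value: int) -> datetime:
    """Convert a Kafka CreateTime value to UTC, detecting ms vs s by magnitude."""
    # 13-digit values are milliseconds since epoch; 10-digit values are seconds.
    if value >= 1_000_000_000_000:
        value = value / 1000
    return datetime.fromtimestamp(value, tz=timezone.utc)

print(from_kafka_timestamp(1711000000123).isoformat())
```

Kafka's default is CreateTime (set by the producer); brokers can instead stamp LogAppendTime, but either way the value is milliseconds.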

For more on debugging API timestamps see the JWT timestamp decoder guide.


Frequently Asked Questions

Are dmesg timestamps Unix timestamps?

No. dmesg timestamps are seconds since the system booted, not seconds since 1970. To convert to wall clock time you need to add the boot time of the machine. The easier solution is to use dmesg -T which converts automatically.

How do I show wall clock time in dmesg?

Use dmesg -T (uppercase T). It converts the relative-since-boot timestamps to absolute dates using your system's current uptime and clock. The flag is provided by util-linux dmesg and has been available for over a decade, so it is present on essentially everything in production today. Because it relies on the current clock and uptime, the output can be inaccurate after suspend/resume.

What format does syslog use for timestamps?

Traditional rsyslog and syslog-ng use a Mar 21 07:46:40 format with no year. Modern RFC 5424 syslog uses ISO 8601 with full timezone info. journalctl exposes both, plus a Unix epoch format via -o short-unix.

How do I convert a Wireshark timestamp to a date?

Wireshark stores timestamps as epoch seconds with microsecond precision. In the View menu choose Time Display Format → Seconds Since 1970-01-01 to see the raw epoch value. Or paste any exported timestamp into a Unix timestamp converter — it handles fractional seconds correctly.

What is the timestamp format in Splunk _time?

Splunk _time is a Unix timestamp in seconds, stored with millisecond precision. The search UI shows formatted dates, but the underlying value is epoch. Use strftime(_time, "%s") in SPL to extract the integer value.

How do I see the Kafka message timestamp from kafka-console-consumer?

Add --property print.timestamp=true to your kafka-console-consumer command. Each message will be prefixed with CreateTime: followed by the millisecond Unix timestamp. Divide by 1000 for seconds.
