Milliseconds vs Seconds Unix Timestamp — How to Tell Them Apart
You get a JSON response from an API and one field is 1711000000. Another field is 1711000000123. Both are clearly Unix timestamps. One is 10 digits, the other is 13. Same moment? Different moment? Different precision? It is the most common confusion in working with Unix time.
The short answer: 10 digits is seconds, 13 digits is milliseconds, both represent the same moment. The longer answer is more interesting because it explains why JavaScript and Java both ended up using milliseconds while Unix and Linux still use seconds.
Two Conventions for the Same Concept
A Unix timestamp counts how much time has passed since January 1, 1970 UTC. The unit you measure that time in is a separate decision.
| Unit | Digits | Used by |
|---|---|---|
| Seconds | 10 (until 2286) | Unix, Linux, C time_t, MySQL UNIX_TIMESTAMP, most APIs |
| Milliseconds | 13 | JavaScript, Java, Kotlin, .NET, Firebase |
| Microseconds | 16 | Some Linux logs, Python time.time_ns() // 1000 |
| Nanoseconds | 19 | Go, Java Instant, eBPF, high-frequency trading systems |
The same instant in time has four different numeric representations depending on the unit. Multiplying by 1000 moves one step finer (seconds → milliseconds); dividing by 1000 moves one step coarser (milliseconds → seconds). Use integer arithmetic whenever possible to avoid floating-point precision loss.
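In Python, the conversions look like this (the sample value is arbitrary; any epoch timestamp behaves the same way):

```python
# One instant, four integer representations.
seconds = 1_711_000_000
millis = seconds * 1000
micros = seconds * 1_000_000
nanos = seconds * 1_000_000_000

# Coming back, integer floor division avoids any float rounding.
assert millis // 1000 == seconds
assert nanos // 1_000_000_000 == seconds
```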
Why JavaScript and Java Picked Milliseconds
The reason is historical and slightly arbitrary. When James Gosling designed Java in 1995, he wanted dates with sub-second precision because Java was meant for distributed systems where tracking event order mattered. He picked milliseconds as the unit for java.util.Date and System.currentTimeMillis().
JavaScript, designed by Brendan Eich in 1995 to be Java-compatible-ish, copied the same convention. JavaScript Date stores time as "milliseconds since epoch" and that has propagated everywhere JavaScript has gone — into JSON APIs, into MongoDB drivers, into anything that touches the web.
The Unix world stuck with seconds because in 1970, when Unix time was designed, sub-second precision was not really useful for what computers did. By the time it was useful, the convention was set.
So we live with both
This means if you work in a stack with both JavaScript and Linux components — which is basically every modern web application — you have to convert between seconds and milliseconds at the boundary. Everywhere. It is the source of more silent unit-mismatch bugs than anything else.
How to Detect Which Unit You Have
Three reliable ways to tell whether a Unix timestamp is in seconds or milliseconds.
1. Count the digits (works until year 2286)
- 9-10 digits → seconds (9 digits covers 1973 to 2001, 10 digits covers 2001 to 2286)
- 12-13 digits → milliseconds (same year range)
- 15-16 digits → microseconds
- 18-19 digits → nanoseconds
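The digit-count rule can be sketched in a few lines of Python. Here `detect_unit` is a hypothetical helper name, and the thresholds assume dates between roughly 2001 and 2286:

```python
def detect_unit(ts: int) -> str:
    """Guess the unit of an epoch timestamp from its digit count."""
    digits = len(str(abs(ts)))
    if digits <= 10:
        return "seconds"
    if digits <= 13:
        return "milliseconds"
    if digits <= 16:
        return "microseconds"
    return "nanoseconds"

detect_unit(1711000000)      # 'seconds'
detect_unit(1711000000123)   # 'milliseconds'
```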
2. Compare to a known reference
The current Unix timestamp in seconds is around 1.7 billion. In milliseconds it is around 1.7 trillion. If your value is closer to a billion, it is seconds. If it is closer to a trillion, it is milliseconds. This rule of thumb works through about the year 2100.
3. Sanity check the converted result
Convert the value and see whether the date is plausible. A seconds value interpreted as milliseconds lands in January 1970 (the number is 1000× too small). A milliseconds value interpreted as seconds lands around the year 56000 (1000× too large). Either absurd result tells you which unit you actually have.
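A sketch of the sanity check in Python. Note that `datetime` tops out at year 9999, so the year-56000 case surfaces as an exception rather than a far-future date:

```python
from datetime import datetime, timezone

ts = 1711000000123  # actually milliseconds

try:
    # Interpreted as seconds this would be roughly the year 56000,
    # which is past datetime's year-9999 ceiling, so it raises.
    when = datetime.fromtimestamp(ts, tz=timezone.utc)
except (ValueError, OverflowError, OSError):
    # Out of range: the value was milliseconds. Divide and retry.
    when = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)

print(when.isoformat())  # a date in March 2024
```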
The free Unix timestamp converter auto-detects based on digit count — paste either format and it returns the right date without you having to think about which unit it is.
Common Mixing Bugs and How to Avoid Them
Bug 1: JavaScript Date with seconds-based input
You pass a 10-digit timestamp directly to new Date(ts). JavaScript expects milliseconds, so it interprets your value as milliseconds — which puts the date in January 1970. Multiply by 1000 first.
Bug 2: API returns milliseconds, frontend stores as seconds
Backend (Java) returns timestamps as milliseconds in JSON. Frontend reads them, divides by 1000 to "convert to seconds for storage in our table". Now timestamps are off by a factor of 1000 for half the rows. Pick one unit and use it everywhere.
Bug 3: SQL function expects seconds, app passes milliseconds
FROM_UNIXTIME(1711000000123) in MySQL returns NULL: the function expects seconds, and a millisecond value is far outside its supported range. Convert at the boundary: FROM_UNIXTIME(ms DIV 1000), using integer division so no fractional seconds sneak in.
Bug 4: Float division loses precision
Converting nanoseconds to seconds with ns / 1_000_000_000 in Python (or float division in any language) loses precision: a 64-bit float carries about 15-16 significant digits, while a current nanosecond timestamp needs 19. For exact results, stay in integers: divmod(ns, 10**9) gives whole seconds plus a nanosecond remainder, with nothing lost.
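A quick Python demonstration of the precision loss and the integer alternative that avoids it:

```python
ns = 1_711_000_000_123_456_789  # a 19-digit nanosecond timestamp

# Float division: a double holds ~15-16 significant digits,
# so the low-order nanoseconds are rounded away.
approx = ns / 1_000_000_000

# Integer divmod keeps every digit: whole seconds + nanosecond remainder.
sec, rem = divmod(ns, 1_000_000_000)
assert sec * 1_000_000_000 + rem == ns  # lossless round trip
```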
Bug 5: Mixing units in comparisons
Comparing two timestamps where one is in seconds and the other in milliseconds returns nonsense. Always normalize to the same unit before comparing.
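A normalize-then-compare sketch in Python. `to_millis` is a hypothetical helper built on the digit-count heuristic from earlier, so it assumes dates between roughly 2001 and 2286:

```python
def to_millis(ts: int) -> int:
    """Normalize an epoch timestamp of any common unit to milliseconds."""
    digits = len(str(abs(ts)))
    if digits <= 10:
        return ts * 1000          # seconds -> ms
    if digits <= 13:
        return ts                 # already ms
    if digits <= 16:
        return ts // 1000         # microseconds -> ms
    return ts // 1_000_000        # nanoseconds -> ms

a = 1_711_000_500          # seconds (the later instant)
b = 1_711_000_000_123      # milliseconds (the earlier instant)

raw_order = a < b                          # True, but meaningless
real_order = to_millis(a) < to_millis(b)   # False: a is actually later
```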
For deeper coverage of all the language-specific gotchas see JavaScript timestamp methods or the complete Unix timestamp reference.
Frequently Asked Questions
What is the difference between Unix timestamp in seconds and milliseconds?
They represent the same moment, just with different units. Seconds-based Unix time is the original Unix convention. Milliseconds-based is what JavaScript and Java use because both languages were designed to support sub-second precision out of the box. To convert: seconds × 1000 = milliseconds.
How do I tell if a timestamp is in seconds or milliseconds?
Count the digits. A 10-digit number is seconds, a 13-digit number is milliseconds, 16 digits is microseconds, 19 digits is nanoseconds. This works for any date between roughly 2001 and 2286. Outside that range, use a reference value: current time is about 1.7 billion seconds or 1.7 trillion milliseconds.
Why does JavaScript use milliseconds instead of seconds?
Java picked milliseconds as the unit for its Date class in 1995 because James Gosling wanted sub-second precision. JavaScript was designed to be Java-compatible-ish around the same time and copied the convention. Both languages still use it 30 years later.
Why does Python sometimes use seconds and sometimes nanoseconds?
Python's time.time() returns seconds as a float (with microsecond precision). time.time_ns() returns nanoseconds as an integer. The float version is convenient but loses precision past about microseconds. Use the nanosecond version when exact precision matters.
Should I store timestamps as seconds or milliseconds in my database?
Whichever your application code uses natively, to avoid conversion at every boundary. JavaScript/Java/.NET shops should store milliseconds. C/Linux/Python shops should store seconds. Mixed shops should pick one and convert at the entry/exit points only.
Will my code break in the year 2286?
A 10-digit timestamp in seconds runs out of digit room in 2286 (it becomes 11 digits then). This is not actually a bug — it just means the digit-count detection rule no longer works. The timestamps themselves are fine until year 292 billion (the limit of 64-bit signed integers).

