
JSON to CSV in Python Without Pandas — Standard Library Only

Last updated: February 2026 · 6 min read
Quick Answer

Table of Contents

  1. Method 1: Standard Library Only (json + csv)
  2. Method 2: Flatten Nested Objects Without Pandas
  3. Method 3: When to Use Pandas (It's Not Always Overkill)
  4. When to Skip the Script and Use the Browser Tool
  5. Frequently Asked Questions

You don't need Pandas to convert JSON to CSV in Python. The standard library's json and csv modules handle most cases without any pip install. Here are three lightweight approaches — plus a reminder that the browser converter handles this in seconds when you don't want to write code at all.

Method 1: Standard Library Only (json + csv)

This works for flat JSON arrays — no dependencies, no install:

import json
import csv

with open('data.json', 'r', encoding='utf-8') as f:
    data = json.load(f)

# Infer headers from all keys in the array
headers = list(data[0].keys()) if data else []

with open('output.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=headers)
    writer.writeheader()
    writer.writerows(data)

This handles flat JSON arrays (no nested objects). If a row contains a key that isn't in fieldnames, DictWriter raises ValueError — pass extrasaction='ignore' to skip the extra keys silently.

To collect headers from ALL rows (in case different rows have different keys):

headers = list(dict.fromkeys(k for row in data for k in row.keys()))

This preserves insertion order (Python 3.7+) and collects the union of all keys.
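Putting those pieces together, here is a self-contained sketch (the sample rows are made up) that combines the union-of-keys headers with restval='' so rows missing a key get a blank cell instead of an error:

```python
import csv
import io

data = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "email": "x@example.com"},  # different keys per row
]

# Union of keys across every row, preserving first-seen order.
headers = list(dict.fromkeys(k for row in data for k in row))

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=headers, restval="")  # blank cell for missing keys
writer.writeheader()
writer.writerows(data)
print(buf.getvalue())
```

Writing to io.StringIO here just makes the result easy to inspect; in a script you would pass the open file handle instead.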

Method 2: Flatten Nested Objects Without Pandas

For nested JSON, write a simple recursive flatten function:

import json
import csv

def flatten(obj, parent_key='', sep='.'):
    items = {}
    for k, v in obj.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            items.update(flatten(v, new_key, sep))
        else:
            items[new_key] = v
    return items

with open('data.json', 'r', encoding='utf-8') as f:
    data = json.load(f)

flat_data = [flatten(row) for row in data]
headers = list(dict.fromkeys(k for row in flat_data for k in row.keys()))

with open('output.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=headers, extrasaction='ignore')
    writer.writeheader()
    writer.writerows(flat_data)

This replicates what the browser tool does with dot notation — address.city becomes its own column. No Pandas required. It doesn't handle nested arrays (lists of objects inside a row), but for nested object trees it works well.
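One pragmatic way to cope with nested arrays without Pandas is to serialize any list value to a JSON string, so it occupies a single CSV cell instead of being dropped. This sketch extends the flatten function above; stringifying (rather than expanding lists into extra rows) is an assumption about what you want:

```python
import json

def flatten(obj, parent_key='', sep='.'):
    # Recursively flatten nested dicts; serialize lists to JSON strings
    # so they survive the trip to CSV as a single cell.
    items = {}
    for k, v in obj.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            items.update(flatten(v, new_key, sep))
        elif isinstance(v, list):
            items[new_key] = json.dumps(v)
        else:
            items[new_key] = v
    return items

row = {"orderId": 1, "address": {"city": "Oslo"}, "tags": ["a", "b"]}
print(flatten(row))
# {'orderId': 1, 'address.city': 'Oslo', 'tags': '["a", "b"]'}
```

If you need each list element as its own row (a true one-to-many expansion), that is exactly the case where Method 3 earns its keep.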


Method 3: When to Use Pandas (It's Not Always Overkill)

Pandas json_normalize() is worth installing when your JSON has nested arrays (one-to-many relationships) that the standard library can't flatten cleanly:

import json
import pandas as pd

with open('data.json', 'r', encoding='utf-8') as f:
    data = json.load(f)

df = pd.json_normalize(data, record_path='items', meta=['orderId'])
df.to_csv('output.csv', index=False)

The record_path parameter tells Pandas which nested array to explode into rows, and meta carries top-level fields alongside each expanded row. This produces proper relational CSV output from one-to-many JSON.
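For a concrete picture, here is a tiny made-up orders array run through json_normalize (the field names orderId, items, sku, and qty are purely illustrative):

```python
import pandas as pd

# Hypothetical one-to-many data: each order carries a nested items array.
data = [
    {"orderId": 1, "items": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]},
    {"orderId": 2, "items": [{"sku": "C", "qty": 5}]},
]

# Explode each item into its own row; carry orderId along as metadata.
df = pd.json_normalize(data, record_path="items", meta=["orderId"])
# Result: three rows (A, B, C), with orderId repeated per item.
print(df)
```

The two items of order 1 become two rows that both carry orderId 1, which is the relational shape you want in a CSV.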

For everything else (flat arrays, simple nesting), the standard library approach is smaller, faster to deploy, and has no dependency management overhead.

When to Skip the Script and Use the Browser Tool

Writing and running a Python script has setup cost: opening an editor, getting the file paths right, running the script, debugging any issues. For a one-time conversion of a JSON file you have on hand, the browser converter takes 30 seconds vs 5–10 minutes for the script approach.

Use Python when: the conversion runs repeatedly or as part of a pipeline, the file is too large to comfortably paste into a browser, or you need custom flattening logic.

Use the browser tool when: you have a JSON array and need a CSV file right now, and the thought of writing even 10 lines of Python sounds like too much work for this task.

No Python Environment? Convert JSON to CSV in the Browser

Paste your JSON array, get CSV in seconds. No pip install, no virtual environment, no script to write.

Open Free JSON to CSV Converter

Frequently Asked Questions

Why not just use Pandas for everything?

Pandas is a 30+ MB library with its own dependencies. For a script that only converts JSON to CSV, adding Pandas is unnecessary bloat. The standard library json + csv approach is about a dozen lines of code, zero dependencies, and works anywhere Python runs — including serverless functions with strict package size limits.

How do I convert JSON from a URL to CSV in Python without Pandas?

Use the requests library (or urllib.request from the standard library) to fetch the JSON, then process it with the csv module: data = json.loads(urllib.request.urlopen("URL").read()), followed by the same DictWriter pattern shown above.

Does the standard library approach handle Unicode characters in JSON values?

Yes — open your files with encoding="utf-8" (shown in the examples). This handles accented characters, Asian scripts, and emoji correctly; Python's json module works with Unicode strings natively.

What if my JSON is a single object, not an array?

Wrap it in a list before processing. Note that calling json.load(f) twice fails, because the second call reads from an already-exhausted file — so load once, then wrap: parsed = json.load(f); data = [parsed] if isinstance(parsed, dict) else parsed.
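A minimal sketch of that normalization step (the helper name ensure_list is invented for this example):

```python
import json

def ensure_list(parsed):
    # A single top-level object becomes a one-element list;
    # an array passes through unchanged.
    return [parsed] if isinstance(parsed, dict) else parsed

print(ensure_list(json.loads('{"id": 1}')))    # [{'id': 1}]
print(ensure_list(json.loads('[{"id": 1}]')))  # [{'id': 1}]
```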

Alicia Grant
Frontend Engineer

Alicia leads image and PDF tool development at WildandFree, specializing in high-performance client-side browser tools.

More articles by Alicia →