A browser-based JSON to CSV converter handles one file at a time. When you have 50 files, 500 files, or a single file that is too large for a browser tab, you need a different approach. Here are the actual scripts and commands for batch conversion — ready to copy and run.
| Scenario | Browser Tool | Script / CLI |
|---|---|---|
| 1 file, under 50MB | ✓ Fastest option | Overkill |
| 1 file, 50-100MB | ✓ Works on most devices | ~Not needed unless slow |
| 1 file, 100MB+ | ✗ May crash browser | ✓ Use jq or Python |
| 5-10 files | ~Tedious but doable | ✓ Worth scripting |
| 50+ files | ✗ Not practical | ✓ Definitely script it |
| Daily automated conversion | ✗ Cannot automate | ✓ Cron job or CI/CD |
| Custom field extraction | ✗ Converts everything | ✓ Full control with jq/pandas |
| Sensitive data check | ✓ Data stays local | ✓ Also local |
Requirements: bash (Mac/Linux/WSL), jq installed (`brew install jq` or `apt install jq`).
Convert a single file — flat JSON array:
If your JSON is an array of objects with the same keys:
```bash
jq -r '(.[0] | keys_unsorted) as $keys | $keys, map([.[ $keys[] ]])[] | @csv' input.json > output.csv
```

This extracts column headers from the first object, then maps every object to CSV rows. It handles any number of fields automatically.
Batch convert all JSON files in a directory:
```bash
for f in *.json; do
  jq -r '(.[0] | keys_unsorted) as $keys | $keys, map([.[ $keys[] ]])[] | @csv' "$f" > "${f%.json}.csv"
done
```

This loops through every .json file, converts it, and saves a .csv with the same name. A directory of 100 files converts in seconds.
Extract specific fields only:
```bash
echo '"name","email","city"' > output.csv
jq -r '.[] | [.name, .email, .address.city] | @csv' input.json >> output.csv
```

This picks only name, email, and city (flattening the address object manually). The `echo` writes the header row first; jq's output is then appended with `>>`.
Requirements: Python 3.x, pandas (pip install pandas).
Single file conversion:
```python
import json
import pandas as pd

with open("input.json") as f:
    data = json.load(f)

df = pd.json_normalize(data)
df.to_csv("output.csv", index=False)
```

`json_normalize` automatically flattens nested objects into dot-notation column names. This is the most reliable method for complex nested JSON.
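To see what that flattening produces, here is a small demonstration with hypothetical sample records (the names and fields are illustrative, not from any real dataset):

```python
import pandas as pd

# Hypothetical sample records to show the dot-notation flattening.
records = [
    {"name": "Ada", "address": {"city": "London", "zip": "N1"}},
    {"name": "Lin", "address": {"city": "Taipei", "zip": "100"}},
]
df = pd.json_normalize(records)
print(list(df.columns))  # the nested "address" keys become "address.city" and "address.zip"
```

Each nested object becomes a set of flat columns, so the resulting CSV needs no further restructuring.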
Batch convert a directory:
- Use `glob.glob("*.json")` to find all JSON files
- Name each output with `.replace(".json", ".csv")`
- Add a `source_file` column to track which file each row came from

Merge all JSON files into one CSV:
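A minimal sketch combining the batch conversion and the merge in one function; the file names (`merged.csv`) and the `source_file` column name are illustrative choices, not fixed conventions:

```python
import glob
import json
import os
import pandas as pd

def convert_directory(pattern="*.json", merged_path="merged.csv"):
    """Convert every matching JSON file to CSV, then merge all rows.

    Hypothetical helper: names and defaults are illustrative.
    """
    frames = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            df = pd.json_normalize(json.load(f))
        df["source_file"] = os.path.basename(path)  # track which file each row came from
        df.to_csv(path.replace(".json", ".csv"), index=False)  # per-file CSV
        frames.append(df)
    if not frames:
        return None
    merged = pd.concat(frames, ignore_index=True)  # combine all rows
    merged.to_csv(merged_path, index=False)
    return merged
```

Because every frame passes through `json_normalize`, files with different nesting still merge cleanly; columns missing from a file simply come out empty in the merged CSV.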
- Normalize each file into a DataFrame and collect them in a list
- `pd.concat(frames, ignore_index=True)` to merge

Requirements: Node.js, `npm install json2csv`.
```javascript
const fs = require("fs");
const { Parser, transforms: { flatten } } = require("json2csv");

const data = JSON.parse(fs.readFileSync("input.json"));
const parser = new Parser({ transforms: [flatten({ objects: true })] });
const csv = parser.parse(data);
fs.writeFileSync("output.csv", csv);
```

The json2csv library handles flattening, escaping, and header generation. The `flatten` transform (with `objects: true`) creates dot-notation columns for nested objects — the same behavior as the browser converter and pandas.
Large JSON files crash browser tabs and exhaust Node.js memory. Streaming solutions:
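The write-as-you-go pattern is the same regardless of parser. A minimal sketch using only the stdlib `csv` module — a plain iterator of dicts stands in for a streaming parser, and the function name is an illustrative choice:

```python
import csv

def stream_to_csv(rows, out_path):
    """Write an iterable of dicts to CSV one row at a time,
    holding only a single record in memory."""
    writer = None
    with open(out_path, "w", newline="") as dst:
        for obj in rows:
            if writer is None:  # the first record determines the header
                writer = csv.DictWriter(dst, fieldnames=list(obj))
                writer.writeheader()
            writer.writerow(obj)
```

With ijson, `rows` would be `ijson.items(src, "item")` for a file containing a top-level JSON array; memory stays constant no matter how large the input is.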
Python's ijson (`pip install ijson`) parses JSON incrementally: `ijson.items(open("big.json", "rb"), "item")` yields one object at a time, so you can write each to CSV as you go.

| File Size | Recommended Tool | Memory Usage | Notes |
|---|---|---|---|
| Under 50MB | Browser converter | ~100-200MB (browser tab) | Fastest for one-off work |
| 50-500MB | jq or Python pandas | ~Same as file size | pandas loads into memory; jq streams |
| 500MB-5GB | jq (streaming) | ~Constant (few MB) | jq processes incrementally |
| 500MB-5GB | Python ijson | ~Constant (few MB) | Streaming parser, process row by row |
| 5GB+ | jq or custom streaming | ~Constant | Split file first if possible |
Even when you are doing batch work, the browser converter has a role:
Start with one file to validate the structure, then scale to batch.
Open JSON to CSV Converter