
Flatten Nested JSON Without PySpark, Snowflake, or Databricks — Free Browser Tool

Last updated: April 2026 · 7 min read

Table of Contents

  1. Warehouse flatten patterns
  2. When warehouse wins
  3. When browser wins
  4. Hybrid workflow
  5. What browser tool cannot do
  6. Frequently Asked Questions

Data engineers reflexively open a Databricks notebook the moment someone asks "can you flatten this JSON?" For a one-off — a sample payload, a webhook debug, a quick column prep before a manual import — that is way more infrastructure than the task needs. Our JSON Flattener does the same structural transformation in a browser tab, no cluster, no warehouse compute, no notebook.

This post is not a suggestion to replace your production pipelines — when the data lives in a Snowflake table, you flatten with LATERAL FLATTEN. When you are processing terabytes, Spark is the tool. But when the data is a JSON blob in your clipboard and you need it flat right now, a browser tool is the shortest path between problem and answer.

What Warehouse Flattening Looks Like

Every major data warehouse has its own flatten syntax:

Snowflake:

SELECT parse_json(raw_json):user.name::STRING AS user_name,
       f.value:id::STRING AS order_id
FROM my_table,
     LATERAL FLATTEN(input => parse_json(raw_json):user.orders) f;

PySpark:

from pyspark.sql.functions import explode, col
df = df.select(col("user.name"), col("user.address.city"))
# or explode arrays:
df = df.withColumn("order", explode("orders"))

BigQuery:

SELECT user.name, user.address.city FROM `project.dataset.table`;
-- or with JSON_EXTRACT for stringified JSON columns

Databricks SQL: Same as Spark SQL — explode(), lateral view, dot-notation column access.

AWS Glue: relationalize() transform, which does a full flatten-and-split-arrays operation.

All of these are the right choice when the data lives in the warehouse. They are overkill when the data is a JSON string you want to look at.
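To make the structural transformation itself concrete, here is a minimal Python sketch of dot-notation flattening. This is illustrative, not the tool's actual implementation; note that arrays are kept as leaves rather than exploded into rows.

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts into dot-notation keys; arrays stay as leaves."""
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

payload = {"user": {"name": "Ada",
                    "address": {"city": "London"},
                    "orders": [{"id": 1}, {"id": 2}]}}
print(flatten(payload))
# {'user.name': 'Ada', 'user.address.city': 'London', 'user.orders': [{'id': 1}, {'id': 2}]}
```

Ten lines of recursion is the whole idea; the warehouse versions above add scale, scheduling, and type enforcement on top of it.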

When You Absolutely Should Use Your Warehouse

The data is already in a table. If a VARIANT column in Snowflake contains your nested JSON, LATERAL FLATTEN runs where the data lives. Pulling it out to flatten in a browser is backwards.

Scale. Millions of rows with nested JSON per row. Spark and Snowflake are designed for this. A browser processes one payload at a time.

Scheduled jobs. Nightly pipelines, Airflow DAGs, dbt models — anything that needs to run without a human pasting JSON into a tool.

Array explosion. When you need "one row per order" out of nested {"user":{"orders":[...]}}, warehouse functions explode cleanly. Our browser tool preserves arrays as leaves.

Joins downstream. If the flat output needs to join with another table in the warehouse, keeping the whole operation in SQL is faster than round-tripping through a local tool.


When the Browser Tool Is Faster

Debugging a webhook or API payload. A Stripe webhook arrives with five levels of nesting. You need to see every field flat to understand what is going on. Paste → Flatten → scan. 10 seconds total.

Preparing a sample for a product team member. Someone non-technical asks "what fields does this API return?" Flatten the response, paste it into a Google Doc, done. No warehouse access required.

Writing a data mapping document. You are documenting which API fields map to which downstream columns. The flat view is what your stakeholders read, not the nested blob.

Testing a flatten strategy before building the pipeline. Before you write the Snowflake LATERAL FLATTEN, flatten a sample in the browser to confirm the structure matches what downstream expects. Catches bugs before they reach production.

Converting config files, not table data. The JSON lives in a file, not a warehouse. No reason to ETL it into a table just to flatten.

A Hybrid Workflow for Data Engineers

Here is how data engineers typically use both:

Step 1 — Design phase (browser). Grab a sample payload. Flatten it in the JSON Flattener. Review the flat keys. This tells you what columns your final schema needs.

Step 2 — Validate assumptions (browser + spreadsheet). Export a few flat samples to CSV via our JSON to CSV converter and look at the data in Excel. Spot nulls, inconsistencies, unexpected field names.

Step 3 — Build the pipeline (warehouse). Translate the browser-verified structure into LATERAL FLATTEN or PySpark logic. You already know what keys to expect and how nested the data gets.

Step 4 — Monitor in production (warehouse + browser for anomalies). Pipeline runs. When something looks off, grab a failing record, paste it into the browser, flatten, and compare against expected structure. The browser tool becomes your triage tool.
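The step-4 triage can even be scripted: flatten the failing record and diff its key paths against what the pipeline expects. A minimal sketch (the expected-key set and record are hypothetical examples):

```python
def flat_keys(obj, prefix=""):
    """Collect dot-notation paths of a nested payload (arrays treated as leaves)."""
    keys = set()
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            keys |= flat_keys(value, path)
        else:
            keys.add(path)
    return keys

expected = {"user.name", "user.address.city", "user.orders"}
failing_record = {"user": {"name": "Ada", "address": {}, "orders": []}}

found = flat_keys(failing_record)
print("missing:", expected - found)  # fields the pipeline expected but did not get
print("extra:  ", found - expected)  # fields the payload grew that the schema lacks
```

Here the empty address object means user.address.city never materializes, which is exactly the kind of structural surprise that breaks a downstream SELECT.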

The browser and the warehouse are not competing — they solve different phases of the same problem.

What the Browser Tool Cannot Do (And What to Use Instead)

Array explosion into rows. Use Snowflake LATERAL FLATTEN or PySpark explode().

Flatten millions of records at once. Use Spark or a warehouse. A browser processes one JSON at a time.

Integrate with a data catalog. Flat column names need to land in a catalog like Glue Data Catalog or Unity Catalog. That requires warehouse-level operations.

Preserve strict column types across rows. A warehouse knows that user.address.zip is a STRING. The browser tool preserves types within a single payload but does not enforce schema across many payloads.

Cost is the flip side. Snowflake charges warehouse credits and Databricks charges cluster time; the browser is free. For one-off work that economic difference matters — a 5-minute LATERAL FLATTEN exploration on an XS warehouse costs real money, while the browser costs nothing.
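The cross-payload type enforcement mentioned above is easy to picture: a warehouse column pins one type, while a per-payload tool cannot see drift across records. A small Python sketch of such a check (the zip-code example is hypothetical):

```python
from collections import defaultdict

def type_drift(flat_payloads):
    """Report flat keys whose Python type varies across payloads."""
    seen = defaultdict(set)
    for payload in flat_payloads:
        for key, value in payload.items():
            seen[key].add(type(value).__name__)
    return {key: types for key, types in seen.items() if len(types) > 1}

samples = [
    {"user.address.zip": "02139"},  # zip arrives as a string
    {"user.address.zip": 2139},     # zip arrives as an int: type drift
]
print(type_drift(samples))  # flags user.address.zip as taking both str and int
```

A STRING column in Snowflake would force this decision at load time; a browser tool inspecting one payload at a time never sees the conflict.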

Skip the Cluster for One-Off Flattening

Browser tool — paste, click Flatten, copy. Warehouse-free.

Open Free JSON Flattener

Frequently Asked Questions

Can I use the browser tool to design my Snowflake LATERAL FLATTEN query?

Yes — this is actually one of its best uses. Flatten a sample payload in the browser to see exactly what dot-notation paths exist. Then translate those paths into SELECT columns in your Snowflake query. The browser gives you a cheap preview before you spin up warehouse compute.
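That translation is mechanical enough to script. A trivial, purely illustrative helper (real queries need per-field casts and identifier quoting, and the function name is invented):

```python
def to_select_columns(flat_paths, json_col="raw_json"):
    """Turn browser-flattened dot paths into Snowflake-style column expressions."""
    cols = []
    for path in flat_paths:
        alias = path.replace(".", "_")
        cols.append(f"parse_json({json_col}):{path}::STRING AS {alias}")
    return ",\n  ".join(cols)

paths = ["user.name", "user.address.city"]
print("SELECT\n  " + to_select_columns(paths) + "\nFROM my_table;")
```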

How does this compare to AWS Glue relationalize()?

Glue relationalize() is more aggressive — it splits nested arrays into separate tables and creates foreign keys. Our browser tool does simple structural flattening without splitting. Use Glue for full relational ETL; use the browser for single-payload structure inspection.

Does the browser tool handle Snowflake VARIANT type?

Not directly — browser input is plain JSON text. Export the VARIANT column as a string (SELECT raw_json::STRING FROM ...), paste the result, and flatten. The tool treats it the same as any other JSON.

What about BigQuery JSON functions?

BigQuery JSON_EXTRACT_SCALAR and JSON_QUERY operate on stringified JSON columns. Same pattern as Snowflake — flatten in the warehouse when the data is there, flatten in the browser when the data is a string on your clipboard.

Jake Morrison
Security & Systems Engineer

Jake's conviction that files should never touch a third-party server is the foundation of WildandFree's zero-upload design.

More articles by Jake →