CSV Viewer & Converter

Processed locally

View CSV as an interactive table, filter rows, and convert between CSV and JSON. Delimiter-aware parsing.

CSV Input
Runs entirely in your browser — nothing is uploaded

CSV Viewer & Converter

CSV (Comma-Separated Values) is the oldest and most ubiquitous tabular data format - the spec everyone agrees to disagree about. RFC 4180 codified one common dialect in 2005, but in practice every database, spreadsheet, and BI tool emits a slightly different variant. This viewer parses the most-common variants, renders them as a filterable table, and converts losslessly to and from JSON.
The parser is a hand-written state machine you can read in the source above (parseCSV) - small enough to audit, but full enough to handle the real edge cases: quoted fields containing the delimiter, doubled-up quotes ("") to escape a literal quote inside a quoted field, and CRLF or LF line endings that mix freely in files originating from Windows and Unix. For very large files or pathological dialects, swap to PapaParse, but for inspection workflows the inline parser keeps the bundle small and the privacy story simple.
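A state machine of that shape can be sketched in a few dozen lines. This is an illustrative reconstruction, not the tool's actual `parseCSV`; the function name and structure here are assumptions:

```javascript
// Minimal RFC 4180-style CSV parser as a character-by-character state machine.
// Handles quoted fields containing the delimiter, doubled quotes ("") as an
// escaped literal quote, and CRLF/LF/CR line endings mixed in one file.
function parseCSV(text, delimiter = ",") {
  const rows = [];
  let row = [];
  let field = "";
  let inQuotes = false;

  for (let i = 0; i < text.length; i++) {
    const ch = text[i];
    if (inQuotes) {
      if (ch === '"') {
        // "" inside a quoted field decodes to one literal quote.
        if (text[i + 1] === '"') { field += '"'; i++; }
        else { inQuotes = false; }
      } else {
        field += ch; // delimiters and newlines are literal inside quotes
      }
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === delimiter) {
      row.push(field); field = "";
    } else if (ch === "\n" || ch === "\r") {
      if (ch === "\r" && text[i + 1] === "\n") i++; // treat CRLF as one break
      row.push(field); field = "";
      rows.push(row); row = [];
    } else {
      field += ch;
    }
  }
  // Flush the last field/row when the input doesn't end with a newline.
  if (field.length > 0 || row.length > 0) { row.push(field); rows.push(row); }
  return rows;
}
```

Note how little state is needed: one boolean (`inQuotes`) plus the field and row accumulators, which is what makes a parser like this auditable at a glance.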
Delimiter choice matters more than people expect. The default is a comma, but European spreadsheets default to semicolon (because comma is a decimal separator in most non-English locales), database exports often use tab (giving you TSV), and pipe-delimited ("|") files appear in older mainframe and ETL pipelines. The dropdown switches the parser and round-trip exporter at the same time, so converting a semicolon CSV to JSON and back gives you semicolon CSV again.
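The round-trip guarantee can be seen in miniature below. This sketch uses a naive `split` (real quoted fields need the full parser) just to show that one `delimiter` value drives both directions:

```javascript
// One delimiter setting feeds both the parse and the export, so a semicolon
// CSV converted to JSON and back comes out as semicolon CSV again.
const delimiter = ";";
const input = "name;city\nAda;London";

// Parse: naive split for illustration only; quoted fields need a real parser.
const [header, ...data] = input.split("\n").map(line => line.split(delimiter));
const objects = data.map(cells =>
  Object.fromEntries(header.map((h, i) => [h, cells[i]]))
);

// Export: the same delimiter is reused, preserving the dialect.
const output = [
  header.join(delimiter),
  ...objects.map(o => header.map(h => o[h]).join(delimiter)),
].join("\n");
```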
The JSON conversion treats the first row as headers and produces an array of plain objects, one per data row. Going the other way, the JSON-to-CSV exporter takes the keys of the first object as headers - if later objects have extra keys, those columns are dropped. The escape function quotes any field containing the delimiter, a double quote, or a newline, exactly as RFC 4180 specifies, so the output round-trips through Excel and Numbers cleanly.
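The JSON-to-CSV direction described above can be sketched as follows. The function names are illustrative, but the quoting rule is exactly the RFC 4180 one: quote any field containing the delimiter, a double quote, or a newline, and double any embedded quotes:

```javascript
// Quote a field per RFC 4180 when it contains the delimiter, a quote, or a newline.
function escapeField(value, delimiter = ",") {
  const s = String(value ?? "");
  if (s.includes(delimiter) || s.includes('"') || s.includes("\n")) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}

// Headers come from the first object's keys; later objects' extra keys are
// dropped, and missing keys become empty fields.
function jsonToCSV(objects, delimiter = ",") {
  const headers = Object.keys(objects[0]);
  const lines = [headers.map(h => escapeField(h, delimiter)).join(delimiter)];
  for (const obj of objects) {
    lines.push(headers.map(h => escapeField(obj[h] ?? "", delimiter)).join(delimiter));
  }
  return lines.join("\n");
}
```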
Known CSV pitfalls the parser handles: fields wrapped in double quotes that contain commas ("Smith, John"), embedded newlines inside quoted fields, and the doubled-quote escape ("He said ""hi""" decodes to He said "hi"). What it doesn't handle: BOM markers (Excel sometimes prepends 0xEF 0xBB 0xBF to UTF-8 CSV - strip the first three bytes if your headers look mangled) and MS-specific quirks like leading apostrophes that prevent Excel from auto-converting strings to numbers. One deliberate behavior to be aware of: trailing whitespace inside fields is trimmed by design.
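If you'd rather strip the BOM in code before pasting, note that once a UTF-8 file is decoded into a JavaScript string the three bytes become a single U+FEFF character, so the check is one line (a hypothetical helper, not part of this tool):

```javascript
// In a decoded string the UTF-8 BOM (0xEF 0xBB 0xBF) appears as one U+FEFF
// character at index 0; drop it if present.
function stripBOM(text) {
  return text.charCodeAt(0) === 0xfeff ? text.slice(1) : text;
}
```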
Live row filtering does a case-insensitive substring search across every cell. It's linear and runs on each keystroke - fine up to about 50,000 rows on a modern laptop. Beyond that you'll feel the lag because re-rendering the table dominates. The fastest way to slim a huge CSV in this tool is to filter to your subset, copy the JSON output, and reload that subset for further work.
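The filter itself is the simple part; it's the table re-render that dominates. A sketch of the linear, case-insensitive match described above (names are illustrative):

```javascript
// Keep a row if any cell contains the query as a case-insensitive substring.
// O(rows × cells) per keystroke, which is why ~50,000 rows is the comfort limit.
function filterRows(rows, query) {
  const q = query.toLowerCase();
  if (!q) return rows;
  return rows.filter(row =>
    row.some(cell => String(cell).toLowerCase().includes(q))
  );
}
```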
Privacy: the parser, table renderer, and JSON conversion all run in JavaScript inside your tab. There is no upload, no analytics on the data, and no server round-trip during conversion. The CSV/JSON you paste stays in your browser memory and is gone when the tab closes - which matters when the data is a customer export, a salary roster, or anything else you don't want sitting in a third-party log file.

Common Use Cases

01

Inspecting database dumps

Paste a SELECT export from Postgres or MySQL and read the rows in a real table without launching Excel.

02

Spreadsheet to API conversion

Convert a CSV from a Google Sheet into a JSON array ready to import into a Postman collection or a fixture file.

03

JSON-to-CSV for stakeholders

Turn a JSON API response into CSV so a non-technical teammate can open it in their spreadsheet of choice.

04

Filtering test fixtures

Use the live row filter to pull only matching rows out of a 10,000-row test fixture, then export just that slice.

Frequently Asked Questions

Which delimiters are supported?

Comma (the default and what RFC 4180 specifies), semicolon (common in European exports), tab (TSV), and pipe (some legacy systems and ETL tools). Pick the right one from the dropdown - the tool will use the same delimiter when it re-exports JSON to CSV.

Does the parser handle quoted fields?

Yes. Fields wrapped in double quotes can contain the delimiter, embedded newlines, and doubled-up quotes ("") to represent a literal quote. This matches the RFC 4180 quoting rules and what Excel emits by default.

Is there a limit on file size or row count?

No hard limit, but the live table rendering and substring filter are linear in row count. Up to ~50,000 rows feels fine on a modern laptop; beyond that, filtering and scrolling start to lag because the table re-renders on every keystroke.

Why do my headers start with strange characters?

Excel often prepends a UTF-8 BOM (the bytes 0xEF 0xBB 0xBF) to files it saves as "CSV UTF-8". The parser does not strip that automatically, so the first header column will appear with three weird leading characters. Open the file in a text editor first and remove the BOM, or save without it.

How are commas inside quoted fields treated?

Inside a double-quoted field, commas are treated as literal characters. "Smith, John",30,Boston parses as exactly three columns. Without the surrounding quotes, the same comma would split the field into two columns - this is the most common reason a CSV looks wrong.

Can I convert nested JSON to CSV?

The CSV-to-JSON converter produces flat objects (one per row). The JSON-to-CSV converter expects a flat array of flat objects - nested objects and arrays are stringified or dropped. For nested data, flatten in code first, or use a dedicated JSON-to-CSV tool with explicit dotted-path handling.
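The "flatten in code first" step can be as small as this. A hypothetical dotted-path flattener (not part of this tool) that stringifies arrays and recurses into plain objects:

```javascript
// Flatten nested objects into dotted paths: {user: {name: "Ada"}} becomes
// {"user.name": "Ada"}. Arrays are JSON-stringified into a single cell.
function flatten(obj, prefix = "", out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      flatten(value, path, out); // recurse into nested plain objects
    } else {
      out[path] = Array.isArray(value) ? JSON.stringify(value) : value;
    }
  }
  return out;
}
```

Run every object through `flatten` first and the resulting flat array converts cleanly.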
Why does Excel mangle my dates?

Excel auto-detects strings that look like dates and reformats them on import - 09/03 might become a 2003 date or a 2009 date depending on locale. The CSV itself is fine; the issue is Excel's import. Prefix dates with an apostrophe to force text, or import via the Data > From Text/CSV wizard with explicit column types.

What happens if my JSON objects have different keys?

The exporter takes column headers from the keys of the first object. Subsequent objects with extra keys lose those columns; objects missing a key get an empty string in that column. If your JSON has heterogeneous shapes, normalize them before converting.

Is the original row order preserved?

Yes - both the table view and the conversions preserve input order. The filter narrows the visible set without re-sorting. There is no built-in sort because relying on the source order is the safer default for round-tripping.

Is my data uploaded anywhere?

No. The parser, table view, and conversions all run in JavaScript inside your browser. No network request leaves the page during parsing or export, so customer lists, payroll exports, and other sensitive data stay in your tab and never touch a server.
