CSVs Are Kinda Bad. DSVs Are Kinda Good
The article highlights CSV limitations and proposes DSV as a better alternative, using non-printing ASCII characters to simplify data handling while noting potential compatibility issues with existing tools.
The article discusses the limitations of CSV (Comma-Separated Values) files and proposes using DSV (Delimiter-Separated Values) as a more effective alternative. The author highlights various issues encountered when working with CSVs, such as problems with escaping characters, handling different newline conventions across platforms, and the complexity of parsing due to conflicting data values. These challenges arise from the use of common characters like commas and quotes, which can appear in the data itself, leading to malformed entries. The author suggests using non-printing ASCII characters as delimiters, which would eliminate many of these issues. By implementing a DSV format with specific control characters for fields and records, the author demonstrates how data can be written and read without the complications associated with CSVs. However, the article also notes potential downsides, such as reduced readability in text editors and compatibility issues with existing tools like Google Sheets. Ultimately, the author advocates for DSV as a more robust solution for data interchange, especially in scenarios where data integrity is critical.
- CSV files have significant limitations due to character conflicts and formatting issues.
- DSV format using non-printing ASCII characters can simplify data handling.
- The implementation of DSV can prevent common data parsing errors.
- Existing tools and software may not support DSV, posing compatibility challenges.
- The author encourages considering DSV for more reliable data interchange.
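A minimal sketch of the general idea in Python, purely illustrative and not the article's actual code: use the ASCII unit separator (0x1F) between fields and the record separator (0x1E) between records, and the usual CSV troublemakers no longer need any escaping.

```python
FIELD_SEP = "\x1f"   # ASCII unit separator
RECORD_SEP = "\x1e"  # ASCII record separator

def dsv_dumps(rows):
    """Serialize rows (lists of strings) into a DSV string."""
    return RECORD_SEP.join(FIELD_SEP.join(row) for row in rows)

def dsv_loads(text):
    """Parse a DSV string back into rows."""
    return [record.split(FIELD_SEP) for record in text.split(RECORD_SEP)] if text else []

rows = [["name", "notes"], ["Alice", 'says "hi", sometimes\non two lines']]
assert dsv_loads(dsv_dumps(rows)) == rows  # commas, quotes and newlines survive intact
```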
And if you are in control of producing data, just produce strict RFC 4180-compliant CSV data and everybody will be able to read it just fine. There is no need to make your readers' lives difficult by using yet another non-standard data format.
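For what it's worth, Python's standard csv module produces RFC 4180-style output if you ask for CRLF record terminators and let it quote as needed; a small sketch (the file name is illustrative):

```python
import csv

rows = [
    ["id", "comment"],
    [1, 'contains, a comma and a "quote"'],
    [2, "contains a\nnewline"],
]

# RFC 4180 wants CRLF between records and "" to escape quotes inside quoted
# fields; csv.writer does both with these settings (doublequote is the default).
with open("out.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, lineterminator="\r\n", quoting=csv.QUOTE_MINIMAL)
    writer.writerows(rows)
```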
See also: <https://news.ycombinator.com/item?id=39679753>
Not really.
What's bad is when people keep insisting on coming up with new and amazing CSV dialects.
https://www.ietf.org/rfc/rfc4180.txt is very clear about what CSV files are supposed to look like, and the fact that people keep ignoring this, for whatever reason, is not the format's problem.
And no, "using another format" is not a solution to this. Because: I can just invent a new DSV dialect. Or a JSON dialect. Or a dialect where the field separator is "0xFF00FF00" and the row separator is the string `DECAFCOFFEE` encoded in EBCDIC, all other characters have to be UTF-32, except for a, b and f, which also need to be EBCDIC encoded.
> For starters, it’s rather unreadable when opened in a text editor. But I bet you don’t really do that with your CSVs all that often anyway!
Wrong. I do that with csv files all the time. In fact I even have an amazing vim plugin just for them [0]. That's pretty much the point of having a plaintext tabular data storage format: That I can view and edit it using standard text wrangling utilities.
---
There is a much simpler solution to this problem: Don't accept broken CSV. If people keep ignoring standards, that's their problem.
The intent of the format is to be human readable and editable. Sure, tab characters can be used instead of commas (TSV files). Yes, there's the "" rule for escaping a quote. Oh, and quoted values are optional; unquoted strings are fine as long as they contain no newline or record separator characters.
Sure, you could make another CSV-inspired format which uses the old mainframe control characters; except, as keeps getting pointed out, even programmers often don't know how to enter raw control characters on their systems. Who even bashes those out these days? I know I have to look it up every time.
Rejoice that the format is so simple, it's all just text which software might convert to numbers or other values as it might desire.
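A quick reminder that emitting those characters from code is trivial; it's entering them interactively that's awkward (the vim key sequence in the comment below is from memory, so verify it on your setup):

```python
# ASCII unit separator and record separator, easy to emit programmatically.
US = "\x1f"  # chr(31); in vim insert mode, Ctrl-V x 1 f should produce it
RS = "\x1e"  # chr(30); in vim insert mode, Ctrl-V x 1 e

print(repr(US), repr(RS))  # '\x1f' '\x1e'
print(US.isprintable())    # False, which is why editors show nothing useful
```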
Also, I'm on mobile right now, so I can't verify this, but it seems the format is flawed: the reader decodes UTF-8 strings after splitting the binary buffer by the delimiter, and I believe the delimiter could be part of a UTF-8 character.
Edit: just checked, and there's actually no chance that the delimiter the author chose would be part of the UTF-8 encoding of any character other than the delimiter itself.
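That follows from the design of UTF-8: every byte of a multi-byte sequence has its high bit set (0x80 or above), so a single-byte ASCII delimiter like 0x1F can only ever appear as itself. A quick check:

```python
# In UTF-8, every byte of a multi-byte character is >= 0x80, so an ASCII
# control byte like 0x1F never appears inside another character's encoding.
data = "héllo\x1fwörld 👋".encode("utf-8")
fields = [part.decode("utf-8") for part in data.split(b"\x1f")]
print(fields)  # ['héllo', 'wörld 👋']; splitting on the raw byte is safe
```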
Some nitpicks; maybe someone finds them useful. Could we talk about code design a little bit?
```python
class DSV:
    @property
    def delimiter(cls) -> bytes:
        return b'\x1F'

    @property
    def record_separator(cls) -> bytes:
        return b'\x1E'

    @property
    def encoding(cls) -> str:
        return 'utf-8'
```
It's Python; don't make premature properties for static values.

```python
class DSV:
    delimiter = b'\x1F'
    record_separator = b'\x1E'
    encoding = 'utf-8'
```
Also, it's a false inheritance relationship: a Writer is not related to configuration. You can't make any other useful subclass of DSV (OK, maybe DSVReader, but that's it). At the very least it should be the other way around: an abstract Writer operating on instance configuration attributes, and a DSVWriter defining those attributes.

Also, `self._buffer += chunk` is O(N^2); it starts to bite even for buffers as small as 100 bytes. It's OK for an example, but it's an issue for real code. The example at least buffers the incomplete record rather than the whole read chunk (good!), but it does only one split at a time (bad).
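A rough sketch of the kind of reader that critique points toward (hypothetical names, not the article's actual classes): keep only the unterminated tail in the buffer and split out every complete record per chunk.

```python
RECORD_SEP = b"\x1e"
FIELD_SEP = b"\x1f"

def read_dsv_records(stream, chunk_size=64 * 1024):
    """Yield complete records (lists of str) from a binary stream,
    buffering only the unfinished tail between chunks."""
    buffer = b""
    while chunk := stream.read(chunk_size):
        buffer += chunk                                # tail stays short, so this is cheap
        *complete, buffer = buffer.split(RECORD_SEP)   # all finished records at once
        for record in complete:
            yield [f.decode("utf-8") for f in record.split(FIELD_SEP)]
    if buffer:                                         # trailing record with no separator
        yield [f.decode("utf-8") for f in buffer.split(FIELD_SEP)]
```

Feeding it `io.BytesIO(b"a\x1fb\x1ec\x1fd")` yields `['a', 'b']` and then `['c', 'd']`.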
[Grumpy mode end]
Nevertheless, the article is very valuable and interesting to read; the CSV gotchas are well described.
I can't tell you how many times I've downloaded a CSV that didn't escape quotes or newlines correctly, or how many times Excel has failed to correctly parse a perfectly valid CSV due to some decades-old quirk.
I know that there are better formats that make these types of situations pop up less, but is a little bit of quality control too much to ask for? If you've tested your software to make sure that it can handle CSVs with line breaks, tabs, and both types of quotes, then you've seemingly done more testing than 90% of the software out there.
On that note, the LibreOffice Calc team deserves major credit for how many different CSV formats it can handle. It's saved my bacon so many times when Excel wasn't up to the task.
Any file format needs a well-specified escape strategy, because every file format is binary and may contain binary data. CSV is kinda bad not only because, in practice, there's no consensus on escaping, but also because we don't communicate what the chosen escaping is!
I think a standard meta header like the following would do wonders to improve interchangeability, without having to communicate the serialization format out-of-band.
```
#csv delim=";" encoding=utf8 quote=double locale="pt-BR" header=true
```
(RFC-4180 does specify that charset and header may be specified in the MIME type)
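A sketch of how a reader might honor such a header; the `#csv` marker and key names follow the example above (which is the commenter's invention, not any standard), and only the delimiter and header flag are actually applied here:

```python
import csv
import shlex

def read_with_meta_header(path):
    """Read a CSV whose first line may be a '#csv key=value ...' meta header."""
    with open(path, "r", newline="", encoding="utf-8") as f:
        first = f.readline()
        opts = {}
        if first.startswith("#csv"):
            # shlex strips the quoting around values like delim=";" or locale="pt-BR"
            opts = dict(tok.split("=", 1) for tok in shlex.split(first)[1:])
        else:
            f.seek(0)  # no meta header; re-read from the start
        rows = list(csv.reader(f, delimiter=opts.get("delim", ",")))
        # encoding/quote/locale handling is omitted in this sketch
        if opts.get("header") == "true" and rows:
            return rows[0], rows[1:]
        return None, rows
```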
In my experience, the best way to handle this is:
1) Use TSV (tab-separated) instead of CSV (most things that export CSV also export TSV). Strip CR characters while reading and assume newlines are LF.
2) If you have a stubborn data source that insists on CSV, convert it to TSV in a pre-process. This could be a separate step or part of your reader as you're reading in the file. That means there's a single place to handle the escaping nonsense, and you can tailor that to each data source.
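A small sketch of that pre-processing step with the standard csv module; the choice to flatten embedded tabs and newlines into spaces is exactly the per-source escaping decision the comment mentions:

```python
import csv

def csv_to_tsv(src_path, dst_path):
    """Normalize a CSV file into plain TSV: one line per record, tab-separated."""
    with open(src_path, "r", newline="", encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for row in csv.reader(src):
            # Anything that would break "one line = one record" gets flattened.
            cleaned = [f.replace("\t", " ").replace("\r", " ").replace("\n", " ")
                       for f in row]
            dst.write("\t".join(cleaned) + "\n")
```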
I have in fact seen CSV files used as an interchange format for things that include non-plaintext fields. And I've seen nested CSV files.
I really wish that were true.
Many times they just couldn't seem to find the comma. Other times there were commas in the item description (unescaped). My favourite, though, was when files were edited collaboratively using Mac, Windows, and Linux machines - multiple line-end types FTW! Like I said, a long and somewhat inglorious career.
DSVs didn't work with Google Sheets, vim, or Python – and I assume that's the exhaustive list of software the author would have needed support from. The question, then: if no software understands the format, what's the point?
> I first learned about these ASCII delimiters while working with .fec [Federal Election Commission] files.
And then the author instantly chose a different delimiter. Two parties and already two standards. That should have been the final red flag for this proposal.
---
Aside: CSVs have so many problems with their data format that you always have to verify them anyway.
Germans write 5.000,24€ where an American would write $5,000.24. Date strings. URL-encoded strings. Numbers as strings. Floating-point numbers.
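Which is exactly why the values need locale-aware parsing no matter how clean the delimiters are; a quick sketch with Python's locale module (this assumes a German locale is installed on the machine):

```python
import locale

# Parsing "5.000,24" is a locale question, not a delimiter question.
locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")  # assumes this locale exists
print(locale.atof("5.000,24"))  # 5000.24
```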
Solving the delimiter problem accomplishes nothing.
There's also a more modern USV (Unicode Separated Values) which has visible separators.
Every time I have to do any major work with CSVs, I re-lament this exact thing.
I think the only way this could ever become more widespread is to fix all the open source tooling so that it's eventually just supported everywhere - then keep evangelizing for... ~30 yrs.
You should probably also register a new MIME type and extension and make it a new thing - don't overload .csv any further - but make the same tooling support it.
I too have wondered, ever since I discovered their existence, why the hell we aren't already using those special characters.
You still have the issue described by "" with '' if I read the examples correctly.
Am I alone in this?
It's less code for you and you can do neat things like zstd compression on the columns.
Bonus, it also doesn't require that you load and unload everything in memory.
I can only speculate on this, but in Perl, for fake multidimensional arrays à la `$foo{$x,$y,$z}`[^1], Perl uses ASCII character 28 (U+001C INFORMATION SEPARATOR FOUR) as its default subscript separator. Perl borrowed this feature from AWK, which uses the same character by default for the same purpose.
Based on Perl, I initially used that same character for that same purpose in a project or two. I cannot speculate on why Aho, Weinberger, and/or Kernighan chose that character. (On or before 1977.)
[^1]: Not to be confused with nested array (or hash) references in Perl, a truer form of multidimensional arrays: `$foo->[$x]->{$y}->[$z]`
Good luck with that.
The problem isn't "CSV". The problems come from:
- "excel doesn't like this CSV therefore it's not valid"
- "what do you mean the CSV I sent you is wrong? I used excel"
- "standard? What's a standard? I put info then a comma. That should be good enough for anyone"
CSV, when done right (i.e. following a standard), is a great format. It's human readable, less verbose than, say, JSON, and everybody understands it.
Just have to make sure business people (and less technical technical people) understand that CSV != Excel and vice-versa.
The purpose is to statistically analyze the numbers for anomalies or any signs of deviation from expected randomness. I do this all in python3 with various libraries. It seems to be working, but...
What is a more efficient format than csv for this kind of operation?
Edit: I have also preserved all leading zeros by conversion to strings -- csv readers don't care much for leading zeros and simply disappear them, but quotes fix that.
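If pandas happens to be one of those libraries (an assumption on my part, and the file name below is made up), the same effect is available without hand-quoting: read every column as a string so leading zeros survive.

```python
import pandas as pd

# dtype=str stops type inference, so "007" stays "007" instead of becoming 7.
df = pd.read_csv("numbers.csv", dtype=str)
```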
It's something I've never understood: why not use something more standard, like SIP, or even a more structured message format? Having to parse CSV across N different packet boundaries is a royal PITA.
As others said, most of the time, if you are producing them, just produce them right, or choose another format.
If you don't then pray for the best.
Related
Why *not* parse `ls` (and what to do instead)
Parsing 'ls' output in Unix systems is discouraged due to challenges with special characters. Shell globs and for loops are recommended for reliable file handling, criticizing 'ls' parsing for its limitations.
Lessons from Ancient File Systems
The article examines the evolution of Atari 8-bit file systems, focusing on Atari DOS versions, their limitations, and the emergence of alternatives like MyDOS and SpartaDOS, emphasizing the need for future-oriented design.
Why CSV is still king
CSV remains a dominant file format in data processing due to its simplicity and widespread adoption, despite challenges like lack of standardization and text encoding issues. Its relevance continues.
Thoughts on Canonical S-Expressions (2019)
Canonical S-Expressions (csexp) efficiently handle binary data without base64 encoding but lack associative array support, complicating complex data serialization. Alternatives like Bencoding and MessagePack may offer better solutions.
Another variable-length integer encoding
The article presents two encoding schemes for small integers in binary formats: metric varint and imperial varint, highlighting their efficiency, advantages, and the use of zig-zag encoding for signed integers.