"There are an enormous number of tools out there that only exist because people don't know how to chain together basic 1970s Unix text-processing tools in a pipeline."
Arguably that is why the original implementation of Perl was written. If I remember the story correctly, we can never know for sure whether, e.g., AWK would have sufficed, because the particular job the author wrote Perl for as a contractor was confidential.
Are people using jq most concerned about speed, or are they more concerned about syntax?
JSON suffers from a problem to which line-oriented utilities are generally immune: a large enough, deeply nested JSON structure will choke or crash a program that tries to read all the data into memory at once, or even in large chunks. The process becomes resource-constrained as the data grows, and the JSON specification places no limit on the size or depth of a document.
I use sed and tr for most simple JSON files. It is possible to overflow sed's buffer, but that rarely happens. sed is found everywhere and it is resource-friendly. Others might choose a program for speed or syntax, but reliability matters even more to me. jq alone is not a reliable solution for any and all JSON: it can be overkill for simple JSON and resource-constrained on large, complex JSON.
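For what it's worth, this is the kind of one-liner I mean. A hedged sketch only: it assumes a small, flat object with no escaped quotes, and it is not a general JSON parser.

    # Pull the "name" field out of a flat JSON object.
    # Assumes one key:value pair per comma-separated chunk.
    printf '{"name": "alice", "id": 7}\n' |
      tr ',' '\n' |
      sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
    # prints: alice

Because tr and sed work a line (or a byte) at a time, memory use stays flat no matter how big the input file is.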
https://stackoverflow.com/questions/59806699/json-to-csv-usi...
netstrings (https://cr.yp.to/proto/netstrings.txt) do not suffer from the same problem as JSON: the length prefix tells a reader exactly how many bytes to expect before it has to allocate or buffer anything.
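As a rough illustration (a minimal bash sketch, not the spec's reference code), a netstring like 12:hello world!, can be consumed byte-exactly without slurping the whole input:

    #!/bin/bash
    # Read one netstring ("<len>:<data>,") from stdin.
    read -r -d ':' len                        # decimal length, up to the colon
    data=$(dd bs=1 count="$len" 2>/dev/null)  # read exactly $len bytes
    read -r -d ',' _                          # consume the trailing comma
    # caveat: $( ) strips trailing newlines from the data
    printf '%s\n' "$data"

Running printf '12:hello world!,' | bash reader.sh prints "hello world!". The reader never has to scan ahead for a closing delimiter, which is exactly the property JSON lacks.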