Sure, there are the 0.000001% edge cases where that MIGHT be the next big bottleneck.
I see the same thing repeated in various front-end tooling too. They all claim to be _much_ faster than their counterparts.
9/10 whatever tooling you are using now will be perfectly fine. Example: I use grep a lot in an ad hoc manner; on really large files I switch to rg. But that's only in a handful of cases.
The difference between 2ms and 0.2ms might sound unneeded, or even silly to you. But somebody, somewhere, is doing stream processing of TB-sized JSON objects, and they will care. This news is for them.
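To make that concrete, a back-of-the-envelope calculation (the record count and size are made up, just plausible for the TB-scale case):

```python
# A billion ~1 KB records is roughly a terabyte of line-delimited JSON.
records = 1_000_000_000

# At 2 ms per record vs 0.2 ms per record, total wall time differs by days.
hours_at_2ms = records * 0.002 / 3600
hours_at_02ms = records * 0.0002 / 3600
print(round(hours_at_2ms, 1), round(hours_at_02ms, 1))  # ~555.6 vs ~55.6 hours
```

A "silly" 10x per-record speedup is the difference between a weekend job and a three-week one.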
Also, performance improvements on heavily used systems unlock:
Cost savings
Stability
Higher reliability
Higher throughput
Fewer incidents
Lower scale-out requirements.
So why not compare that case directly? We'd also want to see the performance of the assumed overheads, i.e. how it scales.
It’s the same sentiment as “Individuals don’t matter, look at how tiny my contribution is.” Society is made up of individuals, so everybody has to do their part.
> 9/10 whatever tooling you are using now will be perfectly fine.
It is not, though. Software is getting slower faster than hardware is getting quicker. We have computers that are easily 3–4+ orders of magnitude faster than what we had 40 years ago, yet everything has somehow gotten slower.
Out of curiosity, have you read the jq manpage? The first 500 words explain more or less the entire language and how it works. Not the syntax or the functions, but what the language itself is/does. The rest follows fairly easily from that.
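What the manpage describes boils down to: a jq filter maps a stream of JSON values to a stream of JSON values, and `|` composes filters. A toy Python sketch of that model (an illustration of the idea, not jq itself):

```python
import json

def field(name):
    """Filter like jq's .name: map each value to its field."""
    def f(values):
        for v in values:
            yield v[name]
    return f

def each(values):
    """Filter like jq's .[]: explode each array into its elements."""
    for v in values:
        yield from v

def pipe(*filters):
    """Compose filters left to right, like jq's |."""
    def f(values):
        for flt in filters:
            values = flt(values)
        return values
    return f

doc = json.loads('{"users": [{"name": "ada"}, {"name": "bob"}]}')
# Equivalent in spirit to: jq '.users[].name'
print(list(pipe(field("users"), each, field("name"))([doc])))  # ['ada', 'bob']
```

Once you see filters as stream-to-stream functions, the rest of the language really does follow from it.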
Say a number; make a real argument. Don't just wave your hand and say "just imagine how right I could be about this vague notion if we only knew the facts."
I don't think I remember one case where jq wasn't fast enough
Now what I'd really want is a jq that's more intuitive and easier to understand
> 9/10 whatever tooling you are using now will be perfectly fine
Are you working in frontend? On non-trivial webapps? Because this is entirely wrong in my experience. Performance issues are the #1 complaint of everyone on the frontend team. Be that in compiling, testing or (to a lesser extent) the actual app.
Either the team I worked at was horrible, or you are from Google/Meta/Walmart, where either everyone is smart or frontend performance is directly related to $$.
From that, I completely agree with your statement - however, you're not addressing the point he makes, which kinda makes your statement unrelated to his point.
99.99% of all performance issues in the frontend are caused by devs doing dumb shit at this point
The frameworks' performance benefits are not going to meaningfully impact this issue anymore; hence, no matter how performant yours is, that's still going to be their primary complaint across almost all complex rwcs.
And the other issue is that we've decided that complex transpiling is the way to go on the frontend (TypeScript) - without that, all build-time issues would magically go away too. But I guess that's another story.
It was a different story back when eg meteorjs was the default, but nowadays they're all fast enough to not be the source of the performance issues
Opencode, ClaudeCode, etc, feel slow. Whatever makes them faster is a win :)
> The query language is deliberately less expressive than jq's. jsongrep is a search tool, not a transformation tool-- it finds values but doesn't compute new ones. There are no filters, no arithmetic, no string interpolation.
Mind if I ask what sorts of TB-sized JSON files you work with? Seems excessively immense.
You could probably do something similar for a faster jq.
I'm sure there are reasons against switching to something more efficient - we've all been there - I'm just surprised.
- Someone likes tool X
- Figures that they can vibe-code an alternative
- They take Rust for performance or FAVORITE_LANG for credentials
- Claude implements small subset of features
- Benchmark subset
- Claim win, profit on showcase
Note: this particular project doesn't have many visible tells, but there's a pattern of overdocumentation (17% comment-to-code ratio, >1000 words in README, Claude-like comment patterns), so it might be a guided process.
I still think that the project follows the "subset is faster than set" trend.
If you work at a hyperscaler, service log volume borders on the insane, and while there is a whole pile of tooling around logs, often there's no real substitute for pulling a couple of terabytes locally and going to town on them.
Usually, a perceptive user/technical mind is able to tweak their usage of the tools around their limitations, but if you can find a tool that doesn't have those limitations, it feels far superior.
The only place where ripgrep hasn't seeped into my workflow, for example, is after the pipe, and that's just out of (bad?) habit. So much so that sometimes I'll foolishly do rg "<term>" | grep <second filter>, then proceed to do a metaphorical facepalm in my mind. Let's see if jg can make me go jg <term> | jq <transformation> :)
If jq is something you run a few times by hand, a "faster jq" is about as compelling as a faster toaster. A lot of these tools still get traction because speed is an easy pitch, and because some team hit one ugly bottleneck in CI or a data pipeline and decided the old tool was now unacceptable.
Prioritizing speed as a marketing pitch over supporting the same features/syntax (especially without an immediately prominent disclosure of these deficiencies) = marketing bullshit
A faster jq except it can't do what jq does... maybe I can use this as a pre-filter when necessary.
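The pre-filter idea in miniature: a cheap textual check narrows the stream before the expensive full parse, which is the role a faster but less expressive tool can play in front of jq. A Python sketch (the field names are invented):

```python
import json

# Hypothetical NDJSON stream: one object per line.
lines = [
    '{"event": "login", "user": "ada"}',
    '{"event": "click", "user": "bob"}',
    '{"event": "login", "user": "eve"}',
]

# Stage 1: cheap substring pre-filter skips most lines without parsing.
# Stage 2: full JSON parse only on the survivors.
hits = [json.loads(l) for l in lines if '"login"' in l]
print([h["user"] for h in hits])  # ['ada', 'eve']
```

Same shape as rg "<term>" | jq '…': the fast tool discards, the slow tool transforms.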
But every now and then a well-optimised tool/page comes along with instant feedback and is a real pleasure to use.
I think some people are more affected by that than others.
Obligatory https://m.xkcd.com/1205
However, as someone who has always loved faster software and is an optimisation nerd, hats off!
If you don't mind me asking, which yq? There's a Go variant and a Python pass-through variant, the latter also including xq and tomlq.
Also, there are lots of charts without comparison so the numbers mean nothing...
Just added this new tool to arkade, along with the existing jq/yq.
No Arm64 for Darwin.. seriously? (Only x86_64 darwin.. it's a "choice")
No Arm64 for Linux?
For Rust tools it's trivial to add these. Do you think you can do that for the next release?
Everything that can be rewritten in Rust will be rewritten in Rust.
Second, some comments on the presentation: the horizontal violin plots are nice, but all tools have the same colours, so it's hard to even spot where jsongrep is. I'd recommend grouping by tool and colour-coding it. Besides, jq itself isn't in the graphs at all (though the title of the post made me think it would be!).
Last, xLarge is a 190 MiB file. I was surprised by that; it seems too small for xLarge. I check 400 MiB JSON documents daily, and sometimes GiB-sized ones.
Basically, the double jump to find values on the heap is what slows these tools down most.
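If I understand the point, a parsed JSON tree stores each value behind a pointer, so every access is an extra indirection, whereas fast parsers scan a flat "tape" of (path, value) entries laid out contiguously (simdjson-style in spirit). A conceptual Python contrast, not a benchmark:

```python
# Pointer-style tree: each step follows a reference (the "double jump").
tree = {"a": {"b": {"c": 1}}}
v_tree = tree["a"]["b"]["c"]

# Flat tape: path/value pairs in one contiguous sequence; lookup is a
# single linear scan with no nested dereferencing.
tape = [("a.b.c", 1), ("a.b.d", 2)]
v_tape = next(val for path, val in tape if path == "a.b.c")

print(v_tree, v_tape)  # 1 1
```

In a systems language the tape version also wins on cache locality, which Python of course can't show.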
Had to spend some effort to set up completions, and there are some small rough edges around command discoverability, but anyway, much better than the previous oh-my-zsh setup.
Ideally, I wish it also had a flag to force users to write type annotations, plus compiling scripts to static binaries and a TUI library; then I'd seriously consider it for writing small apps. But I like and appreciate it in its current state already.
The whole tool would be a few dozen lines of C++ and would most likely be faster than this.
> Jq is a powerful tool, but its imperative filter syntax can be verbose for common path-matching tasks. jsongrep is declarative: you describe the shape of the paths you want, and the engine finds them.
IMO, this isn't a common use case. The comparison here is essentially like Java vs Python. Jq is perfectly fine for quick peeking. If you actually need better performance, there are always faster ways to parse JSON than using a CLI.
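For what it's worth, a tiny sketch of what "describe the shape of the paths you want" could mean - the glob syntax here is invented for illustration, not jsongrep's actual grammar:

```python
from fnmatch import fnmatch

def walk(value, path=""):
    """Yield (dotted_path, leaf_value) pairs for a JSON-like value."""
    if isinstance(value, dict):
        for k, v in value.items():
            yield from walk(v, f"{path}.{k}" if path else k)
    elif isinstance(value, list):
        for i, v in enumerate(value):
            yield from walk(v, f"{path}.{i}" if path else str(i))
    else:
        yield path, value

doc = {"users": [{"name": "ada"}, {"name": "bob"}], "count": 2}
# Declarative: match every path shaped like users.*.name.
# Search, not transformation -- values are found, never computed.
print([v for p, v in walk(doc) if fnmatch(p, "users.*.name")])  # ['ada', 'bob']
```

(Note fnmatch's `*` isn't separator-aware, so this is looser than a real path matcher would be.)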
It does some kind of stack forking which is what allows its funky syntax
jq is supposed to fit into other bash scripts as a one-liner. That's its superpower. I know very few people who write regexes on the fly either (unless they use them every day); they check the documentation and flesh it out when they need it.
Just use Claude to generate the jq expression you need and test it.
https://news.ycombinator.com/item?id=47542182
The reason I was interested was adding the new tool to arkade (similar to Brew, but more developer/devops-focused - downloads binaries).
The agent found no Arm binaries.. and it seemed like an odd miss for a core tool
If the arm64 version was on homebrew (didn’t check if it is but assume not because it’s not mentioned on the page), I’d install it from there rather than from cargo.
I don’t really manually install binaries from GitHub, but it’s nice that the author provides binaries for several platforms for people that do like to install it that way.
To address the concern anyway: I'm sure it will soon be available in brew as an arm binary.
Some bits of the site are hard to read: in "takes a query and a JSON input", "query" is rendered in white, and the background of the site is very light.