It is too bad, though. People who are bad at English will be reading this output forever now and come to think this is the way real people write and speak, or are supposed to.
It's many things. The relentless enthusiasm about everything. Prefacing any answer to a question with an affirmation that it was a good question. And yes, sorry, pedants of the web who feel witch-hunted because you knew your keyboard shortcuts and used em-dashes in 2015 and have the receipts to prove it -- you never used 17 of them in the span of a single page. I think that's the first one I can remember ever using, and I had to contrive a way to work it in where a semicolon wouldn't clearly work better.
The purpose of language is to communicate meaning and intent, not to sound or feel a particular way, unless you're reading for entertainment or enjoyment.
This is the second post I'm commenting on within a span of like 30 minutes where someone did some really good work and shared it, but the top comments are complaining about AI usage.
Either LLM-assisted content needs to be banned entirely (maybe it should be), or complaining about it should be considered a breach of etiquette on tech-centric sites like HN.
Calling it a spellchecker is simply wrong if you give an LLM some bullet points and then instruct it to write an article. I find it more insincere because it's an extra layer between the author and the reader which substantially affects every aspect of the piece of writing, not just the spelling of individual words, or Microsoft Word nagging you to avoid passive voice.
If OP is not a native English speaker and is using an LLM to produce reasonable prose, then it might be the best way for them to try to communicate their ideas. It's probably better than Google Translate. It still affects how the reader interprets the writing, though.
My other point, which I also stand by, is that I find the default writing style of current LLMs exhausting to read. It feels like a college student has submitted an assignment on engaging writing and decided to use every technique they could find in their textbook, because they want to get top marks. It just feels forced to me.
--------------------------------
As an example, I asked Claude to make my argument more "clear". Here's what it wrote:
Style isn't separate from content — it is content. The way something is written shapes how a reader interprets its meaning, and that's always been true. Calling an LLM a "spellchecker" only holds if it's catching typos. The moment you hand it bullet points and ask it to produce an article, it's not correcting your writing — it's replacing it. That's a fundamentally different thing.
I'll grant one exception: if someone isn't a fluent English speaker and uses an LLM to bridge that gap, that's a legitimate trade-off, even if it still changes how the reader experiences the piece.
But my broader complaint stands independent of that debate: current LLMs produce a recognizable, exhausting prose style. Every sentence is engineered to be "engaging." Every paragraph hits the expected beats. It reads like someone who learned to write from a listicle about writing — technically compliant, but hollow. The effort to sound compelling ends up undercutting any sense that a real person with a real perspective is behind it.
I think the idea of sharing the raw prompt traces is good. Then I can feed that to an LLM and get the original information prior to expansion.
> What’s needed is something different:
> Requirement | ptrace | seccomp | eBPF | Binary rewrite
> Low overhead per syscall | No (~10-20µs) | Yes | Yes | Yes
> [...]
Even if you disallow executing anything outside of the .text section, you still need the syscall trap to protect against adversarial code which hides the instruction inside an immediate value:
foo: mov eax, 0xc3050f ;return a perfectly harmless constant
ret
...
call foo+1
(This could be detected if the tracing went by control flow instead of linearly from the top, but what if it's called through a function pointer?)

I first assumed it was redirecting syscalls to a library in user mode somehow, but actually the syscall instruction is replaced with "int3", which also traps into the kernel. The whole reason the "syscall" instruction was introduced in the first place was that it's faster than the old software-interrupt mechanism, which has to load segment descriptors.
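To make the overlapping-instruction trick concrete, here's a quick byte-level sanity check in Python. `mov eax, imm32` encodes as opcode 0xB8 followed by the little-endian immediate, so the `0f 05 c3` sequence (`syscall; ret`) sits one byte into the instruction:

```python
# mov eax, 0xc3050f encodes as: b8 0f 05 c3 00
mov_eax = bytes([0xB8]) + (0xC3050F).to_bytes(4, "little")
assert mov_eax == b"\xb8\x0f\x05\xc3\x00"

SYSCALL = b"\x0f\x05"  # encoding of the syscall instruction
RET = b"\xc3"          # encoding of ret

# A linear scan starting at foo sees one harmless mov, but
# `call foo+1` starts execution at offset 1: syscall; ret.
assert mov_eax[1:3] == SYSCALL
assert mov_eax[3:4] == RET
```

Which is exactly why a rewriter that disassembles linearly from the top of .text misses it, while one that follows actual control flow at least has a chance.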
So why not simply use KVM to intercept syscall (as well as int 80h), and then emulate its effect directly, instead of replacing the opcode with something else? Should be both faster and also less obviously detectable.
It is possible to restrict the control-flow graph to avoid the case you described; the canonical references here are the CFI and XFI papers by Úlfar Erlingsson et al. In XFI they/we did have a binary rewriter that tried to handle all the corner cases, but I wouldn't recommend going that deep; instead you should just patch the compiler (which, funnily enough, we couldn't do, because the MSVC source code was kept secret even inside MSFT, and GCC source code was strictly off-limits due to being GPL-radioactive...)
[1] <https://github.com/google/gvisor/blob/master/pkg/sentry/plat...>
Also gVisor (aka runsc) is a container runtime as well. And it doesn't gatekeep syscalls but chooses to re-implement them in userland.
Inside the guest, there's no kernel to attach strace to — the shim IS the syscall handler. But we do have full observability: every syscall that hits the shim is logged to a trace ring buffer with the syscall number, arguments, and TSC timestamp. It's more complete than strace in some ways — you see denied calls too, with the policy verdict, and there's no observer overhead because the logging is part of the dispatch path.
So existing tools don't work, but you get something arguably better: a complete, tamper-proof record of every syscall the process attempted, including the ones that were denied before they could execute. I'll publish a follow-on tomorrow that details how we load and execute this rewritten binary and what the VMM architecture looks like.
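For readers wondering what that trace ring amounts to, here's a minimal Python sketch of the idea. Everything here is my own illustration, not the project's actual record layout: the field names (`nr`, `args`, `verdict`), the capacity, and the clock are assumptions (the real shim stamps entries with the TSC via RDTSC, not a library timer):

```python
import time
from collections import deque

class SyscallTraceRing:
    """Fixed-capacity trace ring: once full, the oldest
    entries are silently overwritten by new ones."""
    def __init__(self, capacity=4096):
        self.buf = deque(maxlen=capacity)

    def log(self, nr, args, verdict):
        # monotonic_ns() stands in for an RDTSC timestamp here.
        self.buf.append((time.monotonic_ns(), nr, args, verdict))

ring = SyscallTraceRing(capacity=2)
ring.log(1, (4, 0x7F00, 128), "allow")        # write
ring.log(257, (-100, "/etc/passwd"), "deny")  # openat, denied
ring.log(60, (0,), "allow")                   # exit_group, evicts oldest
assert len(ring.buf) == 2
assert ring.buf[0][1] == 257  # the denied call is still recorded
```

The point of the deque-with-maxlen shape is that logging stays O(1) on the dispatch path; denied calls cost the same to record as allowed ones.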
An example of how we used it in the early 2000s to implement pre-Linux-namespace containerization:
https://www.usenix.org/legacy/publications/library/proceedin... (note the shepherd and where kubernetes arguably got the pod name from).
And security policies on top of it:
https://www.usenix.org/legacy/event/lisa07/tech/full_papers/...
How secure does this make a binary? For example would you be able to run untrusted binary code inside a browser using a method like this?
Could websites then just use C++ instead of JavaScript, for example?
What's stopping the process from reading its own memory and seeing that the syscall was patched?
This is the kind of foundation I would feel comfortable running agents on. It's not the whole solution, of course ("yes agent, you're allowed to delete this email but not that one" can't be solved at this level)... let me know when you tackle that next :-)