In general, treating LLM outputs (no matter where they come from) as untrusted, and enforcing classic cybersecurity guardrails (sandboxing, data permissioning, logging), is the current SOTA for mitigation. It'll be interesting to see how approaches evolve as we figure out more.
[...]It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable,[...]
If only we had a way to tell a computer precisely what we want it to do...
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
That's not sufficient. If a user copies customer data into a public Google Sheet, I can reprimand and otherwise restrict the user. An LLM cannot be held accountable, and cannot learn from mistakes.
The _only_ way to create a reasonably secure system that incorporates an LLM is to treat the LLM output as completely untrustworthy in all situations. All interactions must be validated against a security layer and any calls out of the system must be seen as potential data leaks - including web searches, GET requests, emails, anything.
You can still do useful things under that restriction but a lot of LLM tooling doesn't seem to grasp the fundamental security issues at play.
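One concrete shape that security layer can take is a deny-by-default egress validator sitting between the model and every side-effecting call. A minimal sketch (the `validate_tool_call` function, `ALLOWED_TOOLS`, and `ALLOWED_HOSTS` names here are hypothetical, not from any particular framework):

```python
from urllib.parse import urlparse

# Hypothetical deny-by-default policy: every outbound action an LLM
# proposes (web search, GET request, email) is checked against an
# explicit allowlist before anything executes.
ALLOWED_TOOLS = {"http_get", "web_search"}
ALLOWED_HOSTS = {"api.internal.example.com"}

def validate_tool_call(tool: str, args: dict) -> bool:
    """Return True only if the LLM-proposed call passes the security layer."""
    if tool not in ALLOWED_TOOLS:
        return False
    if tool == "http_get":
        host = urlparse(args.get("url", "")).hostname or ""
        # A GET to an attacker-controlled host can exfiltrate data
        # through the query string, so the hostname itself must be
        # on the allowlist -- not just the tool.
        return host in ALLOWED_HOSTS
    return True
```

The key design choice is that the LLM's output never reaches the network directly; the validator treats every proposed call as hostile until proven otherwise.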
1 - https://alignment.anthropic.com/2025/subliminal-learning/
We already have another actor in the threat model that behaves equivalently as far as determinism/threat risk is concerned: human users.
Issue is, a lot of LLM security work assumes they function like programs. They don’t. They function like humans, but run where programs run.
[1] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
I would caution against using "white hidden text" within PDF resumes: all an ATS[0] needs to do to make hidden text the same as any other text is preprocess the file with the poppler[1] project's `pdftotext`. Sophisticated ATS[0] offerings could also use `pdftotext` in a fraud-detection role with other document formats.
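For illustration, a minimal sketch of that pipeline, assuming poppler's `pdftotext` is on the PATH (the `keyword_stuffing_score` heuristic is a hypothetical fraud signal, not a claim about how any real ATS scores resumes):

```python
import subprocess

def extract_text(pdf_path: str) -> str:
    """Dump every text layer with poppler's pdftotext; white-on-white
    text comes out exactly like visible text."""
    # `pdftotext file.pdf -` writes the extracted text to stdout.
    result = subprocess.run(
        ["pdftotext", pdf_path, "-"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def keyword_stuffing_score(text: str, keyword: str) -> float:
    """Crude hypothetical fraud signal: the fraction of the extracted
    document that is a single repeated keyword."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)
```

Because `pdftotext` ignores color and rendering entirely, hidden keyword stuffing is just as visible to this kind of check as it would be typed in plain black text.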
https://nyudatascience.medium.com/language-models-often-favo...
I work on a plugin that makes Obsidian real-time collaborative (relay.md), so if the migration is smooth I wonder how close we are to Obsidian being a suitable Notion replacement for small teams.
1) Is it possible to use Obsidian like Logseq, with a primarily block-based system? (The block-based system, which allows building documents like Lego bricks and easily cross-referencing sections of other documents, is key for me.)
2) Don't you expect to be sherlocked by the Obsidian team?
Bring back desktop software.
I wonder when there will be an awakening to not using SaaS for everything you do. And the sad thing is that this is the behavior of supposedly tech-savvy people in places like the Bay Area.
I think the next wave is going to be native apps, with a single purchase model - the way things used to be. AI is going to enable devs, even indie devs, to make such products.
elaborate please?
You're getting downvoted because "stop giving your resources to the bad actors" is not even remotely close to a viable solution. There is no opting out in a meaningful way.
NOW, that being said. People like you and me should absolutely opt out to the extent that we can, but with the understanding that this is "for show," in a good way.