Now the tables have turned, and legitimate software has to become polymorphic itself to thwart attacks by malware.
I'm curious because, years ago, academia strongly pushed the FG-ASLR story, and then OpenBSD did kernel relinking, but I haven't heard any industry story about how effective this is.
Not a rhetorical question.
Maybe take a known-vulnerable executable, create a fuzzing attacker, and run it against both versions, measuring how long it takes to get lucky a few times. The more secure version should take longer.
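As a toy simulation of that comparison (the slot counts are made-up stand-ins for entropy; a real run would point an actual fuzzer at a real binary):

```python
import random

def attempts_until_hits(slots, hits_needed, rng):
    """Count attack attempts until `hits_needed` lucky guesses.

    Each attempt guesses one slot; the gadget's actual slot is
    re-randomized (re-linked) every attempt, so the per-attempt
    success probability is 1/slots."""
    attempts = 0
    hits = 0
    while hits < hits_needed:
        attempts += 1
        if rng.randrange(slots) == rng.randrange(slots):
            hits += 1
    return attempts

rng = random.Random(1)
print("no randomization:", attempts_until_hits(1, 5, rng))
print("~12 bits of entropy:", attempts_until_hits(4096, 5, rng))
```

With a single slot the attacker "gets lucky" on every try; with 4096 slots the expected cost is roughly 4096 attempts per hit, which is the gap the experiment would measure.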
(FG-)ASLR is more of a "targeting the exploit instead of the vulnerability" style of mitigation.
FG-ASLR helps because, even when you know where .text is, there are now N possible randomized locations for the piece of code your exploit leverages. So if you pick one and exploit M machines that way, only about M/N of the exploits will succeed (the machines where you got lucky).
Ultimately it is obfuscation, but with enough entropy it is very effective. It can't remove the underlying vulnerability, but it makes it much more work to turn an exploit into code execution consistently enough to be useful.
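A back-of-the-envelope simulation of that M/N claim (the fleet size and entropy here are assumed numbers, purely for illustration):

```python
import random

# Toy model of "one exploit, many machines": each target machine has its
# gadget in one of `slots` independently randomized locations; the exploit
# hard-codes a single guess, so each machine falls with probability 1/slots.
def campaign(machines, slots, seed=0):
    rng = random.Random(seed)
    guess = 0  # the attacker's single hard-coded location
    return sum(1 for _ in range(machines) if rng.randrange(slots) == guess)

M, N = 100_000, 4096  # assumed fleet size and entropy
print(campaign(M, N), "machines compromised; naive M/N prediction:", M / N)
```

The simulated yield lands near M/N, i.e. a few dozen machines out of a hundred thousand, which is why the exploit stops being "consistent enough to be useful".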
I wonder if it is possible to make a relinker that only requires the binary output, so it could be easily incorporated into existing systems.
One way I can think of is to keep relocation/original-object information in the debug sections, so that one can reconstruct the original object files and re-link them. But I am guessing this will not work with LTO... Or maybe we could just add debug sections that store the input object/library files verbatim; this would at least double the binary size, but would allow for easier relinking.
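As a toy sketch of the "store the inputs verbatim, relink later" idea (not real ELF handling; a binary is modeled as a plain concatenation of its object blobs, and a real tool would stash real .o files in extra sections and invoke the real linker):

```python
import random

def link(objects, seed):
    """'Link' by concatenating object blobs in a seed-dependent random order."""
    order = list(objects)
    random.Random(seed).shuffle(order)  # the randomized link order
    return b"".join(order)

# Hypothetical object blobs standing in for real .o files.
objects = [b"<auth.o>", b"<kex.o>", b"<net.o>"]

# Ship the linked output *and* the verbatim inputs together.
shipped = {"binary": link(objects, seed=1), "stored_inputs": objects}

# Later, on the installed system: a fresh layout from the same inputs.
relinked = link(shipped["stored_inputs"], seed=2)
print(len(relinked) == len(shipped["binary"]))
```

The doubling of size falls out directly: the package carries both the linked output and every input needed to reproduce it.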
https://research.facebook.com/publications/bolt-a-practical-...
https://groups.google.com/g/llvm-dev/c/ef3mKzAdJ7U/m/1shV64B...
At least someone finally understands that static, fully predictable, reproducible builds are also a convenience feature for the attacker's side.
Fully reproducible builds provide great assurance against supply-chain attacks. But 100% reproducibility is in some cases a bit too much. What matters is whether the artifact can easily be proven to be functionally identical to the canonical one.
So I am 100% for a fully predictable sshd random-relink kit producing unpredictable sshd binaries, but only as long as there are instructions for how to check that an sshd binary that allegedly came from it could indeed have come from it, and was not quietly replaced by some malicious entity.
You can easily verify the integrity of the object files that are used in the random relinking: they are included in the binary distribution and are necessary to perform the relinking.
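As a toy model of what such a check could look like (byte-blob concatenation standing in for real linking; note the brute force over orderings grows factorially, so a real kit would record the link order or the random seed instead):

```python
import itertools

def could_have_come_from(binary, objects):
    """Accept `binary` only if it is *some* permutation of the shipped
    object blobs glued together; anything else is treated as tampered."""
    return any(b"".join(p) == binary for p in itertools.permutations(objects))

# Hypothetical object blobs standing in for real .o files.
objs = [b"<auth.o>", b"<kex.o>", b"<net.o>"]
legit = objs[2] + objs[0] + objs[1]              # a random but honest relink
trojan = b"<evil>" + objs[0] + objs[1] + objs[2]  # tampered binary
print(could_have_come_from(legit, objs), could_have_come_from(trojan, objs))
# prints "True False"
```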
The debate of static vs. dynamic linking is still going on, and a very strong argument against static linking has always been that upgrading vulnerable libraries is made difficult. But think about it: package managers already hold the metadata of what links to what; object files can be distributed just as easily as shared objects; the last necessary step is to move the actual linking from the dynamic loader to the package manager.
In theory all functions, or more realistically groups of functions spanning page-size increments, could be dynamically located. The obvious way to achieve that would be to have multiple .text sections within a main executable or library. But offhand I don't know whether that's actually supported by ELF, or, if so, whether the standard toolchains and environments could easily support it.
I don't understand the full logic here. Yes, I can authenticate the object files. But how would you distinguish, after a possible intrusion, an "sshd" binary that is indeed a random combination of these objects from a trojaned "sshd" binary?
I guess the holy grail would be to combine this with hot patching (https://en.wikipedia.org/wiki/Patch_(computing)#HOT-PATCHING) and relink the kernel periodically while it is running (currently, a system under attack would have to be rebooted every now and then, and that's undesirable). That would face 'a few' technical hurdles, though.
I have to admit I am guilty of this as well, but any maintained OpenBSD setup should have an uptime of no more than six months, and a well-maintained OpenBSD setup will have less than that, as security patches are applied.
Having said that, one of the things I like about OpenBSD is that if you want to go dark and have an ultra-stable system (no updates ever), all the pieces are there for you. (You will want to have the source; I would also make sure I have the ports tree for that release and a copy of the ports dist files.)
So it looks like they are going to move on from CVS eventually.
AFAIK, they still interface with CVS directly, but I assume the expectation is to eventually transition to got.
[0]: https://marc.info/?l=openbsd-tech&m=167388832715992&w=2
Makefile.relink:

    cc -o sshd `echo ${OBJS} | tr ' ' '\n' | sort -R` ${LDADD}
    ./sshd -V && install -o root -g wheel -m ${BINMODE} sshd /usr/sbin/sshd
https://github.com/openbsd/src/commit/898412097f87ba70d4012f...
This does not replace classic ASLR: OpenBSD 5.7 activated position-independent static binaries (Static-PIE) by default.
https://en.wikipedia.org/wiki/Address_space_layout_randomiza...
Sometimes it can be beneficial to optimize the link order so that most of the main thread's hot code stays in cache. Obviously this only really matters for CPU-intensive programs.