The more pedestrian 5950X or the now-bargain 3950X are great for anyone doing a lot of compiling. With the right motherboard they even support ECC RAM. A game changer for workstations in the $1000–$2000 range.
The more expensive Threadripper parts really shine when memory bandwidth becomes a bottleneck. In my experience, compiling code hasn’t been very memory bandwidth limited. However, some of my simulation tools don’t benefit much going from 8 to 16 cores with regular Ryzen CPUs because they’re memory constrained. Threadripper has much higher memory bandwidth.
edit: to be clearer, I'm not thinking of dedicated build machines here (hence the RAID comment) but the overall impact on dev time from making local builds a lot faster.
Source code files are relatively small and modern OSes are very good at caching. I ran out of SSD on my build server a while ago and had to use a mechanical HDD. To my surprise, it didn’t impact build times as much as I thought it would.
Threadripper can be useful for IO, especially for C++ (which is famously quite IO-intensive): with its 128 PCIe lanes you can RAID0 a bunch of drives and get absolutely ridiculous IO.
wait, seems you can get one; just pay 2x list price.
I am trying to build a system for Reinforcement Learning research, and since so much of it depends on Python, I am not certain how best to optimise the system.
It's much quieter under load as well.
You're claiming this plugin has deeper IDE integration than `make`? I find that really, really difficult to believe. And if it's true, it seems like the solution is to either use a better IDE, or improve IDE support for the de facto standard tools that already exist, as opposed to writing a plugin for the -j flag.
TwineCompile is not a plugin wrapping the -j flag. It is a separate thing entirely, unique to C++Builder. It does offer integration with MSBuild though.
The second part of that was the fall-off. With the 1 million size files it only ever used half of the cores, and each successive round of compiles used even fewer. TwineCompile didn't seem to have that problem, but this post was not about TwineCompile vs. MAKE -j, so I did not investigate this further.
I was expecting MAKE/GCC to blow me away and use all 64 cores full bore until completion, and it did not do that.
https://www.gnu.org/software/make/manual/html_node/Job-Slots...
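Worth noting that the job-slots mechanism only spans recursive invocations when sub-makes are launched via the `$(MAKE)` variable; a recipe that runs a literal `make` doesn't inherit the parent's jobserver. A minimal sketch (the subdirectory names are made up):

```make
# Sub-makes started via $(MAKE) inherit the parent's jobserver,
# so a single top-level "make -j64" governs total parallelism.
# A literal "make -C lib" here would NOT share job slots.
all:
	$(MAKE) -C lib
	$(MAKE) -C app
```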
Both of those problems seemed solvable if he were willing to chunk his application up into libraries, maybe 1024 files per library, then link those into the main application.
MinGW's linker supports passing the list of objects as a file for this reason and CMake will use that by default.
Alas, that was not to be. Modern languages are fun and all, but not Delphi-back-in-the-day level fun :-).
The "mold" linker:
https://github.com/rui314/mold
>"Concretely speaking, I wanted to use the linker to link a Chromium executable with full debug info (~2 GiB in size) just in 1 second. LLVM's lld, the fastest open-source linker which I originally created a few years ago, takes about 12 seconds to link Chromium on my machine. So the goal is 12x performance bump over lld. Compared to GNU gold, it's more than 50x."
Or perhaps the code wasn’t modified to spread the work across all processor core groups (a Windows thing to support more than 64 logical cores).
https://bitsum.com/general/the-64-core-threshold-processor-g...
But alas, I have said for some time that a fast compiler should be able to compile about 1 MLOC/s with some basic optimization work.
make with -j greater than 3 just locks the process and fails.
Is it the same with g++? I have 4 GB, so I should be able to compile with 4 cores, but the processes only fill 2-3 cores even when I try make -j8 on an 8-core machine, and then it locks the entire OS until it craps out?!
Something is fishy...
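A guess, not a diagnosis: with 4 GB the binding constraint is probably RAM per compiler process (a g++ job on heavy C++ can easily take on the order of a gigabyte), so once the box starts swapping everything appears to lock up. Worth trying a lower job count combined with GNU make's load-average cap:

```shell
# -j caps concurrent jobs; -l tells make not to start new jobs
# while the system load average exceeds the given value, which
# helps keep a RAM-starved machine from thrashing into a lockup.
make -j3 -l3
```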
Seems like our code is inflating quite rapidly. I remember when 1M was the biggest project. /snark
This all seems kind of pointless since distributed C++ compilation has been a thing for decades, so they could have used a cluster of Ryzens instead of "zowie look at our huge expensive single box".
int
main
()
{
/*
_______ _ _ _ _
|__ __| | (_) (_) | |
| | | |__ _ ___ _ ___ __ _ | | ___ _ __ __ _ _ __ _ __ ___ __ _ _ __ __ _ _ __ ___
| | | '_ \| / __| | / __| / _` | | |/ _ \| '_ \ / _` | | '_ \| '__/ _ \ / _` | '__/ _` | '_ ` _ \
| | | | | | \__ \ | \__ \ | (_| | | | (_) | | | | (_| | | |_) | | | (_) | (_| | | | (_| | | | | | |
|_| |_| |_|_|___/ |_|___/ \__,_| |_|\___/|_| |_|\__, | | .__/|_| \___/ \__, |_| \__,_|_| |_| |_|
__/ | | | __/ |
|___/ |_| |___/
*/
return 0;
}