And many desktop motherboards manage to screw up even the basic fan curve, offering users control of only two points within strict bounds, no piecewise linear curves or hysteresis settings.
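For reference, the piecewise-linear-plus-hysteresis logic those boards fail to offer fits in a few lines. A minimal Python sketch (the curve points and the 5 °C hysteresis band are made-up example values, not recommendations):

```python
# Minimal piecewise-linear fan curve with hysteresis.
# Curve points and the hysteresis band are arbitrary example values.
CURVE = [(30, 20), (50, 35), (70, 70), (85, 100)]  # (temp °C, duty %)
HYSTERESIS = 5.0  # only slow down once temp drops this far below the last peak

def duty_for(temp):
    """Linear interpolation between the curve points."""
    if temp <= CURVE[0][0]:
        return CURVE[0][1]
    if temp >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp <= t1:
            return d0 + (d1 - d0) * (temp - t0) / (t1 - t0)

class HysteresisCurve:
    """Ramp up immediately, but only ramp down after the temperature falls
    HYSTERESIS degrees below the highest value seen since the last ramp-down."""
    def __init__(self):
        self.peak = 0.0
    def update(self, temp):
        if temp >= self.peak:
            self.peak = temp      # rising: track the peak, respond right away
        elif self.peak - temp > HYSTERESIS:
            self.peak = temp      # fell far enough: allow the fan to slow
        return duty_for(self.peak)
```

The hysteresis is what stops the fan from audibly hunting up and down when the temperature wobbles around a curve knee.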
I started a fan controller project some 4 years ago and it's now sadly in limbo, waiting for me to solve power filtering issues for the makeshift power supply it grew into. Maybe I should just limit myself to 4-pin fans...
CPU temperatures can swing from 40C to 90C and back in a matter of seconds as loads come and go. Modern fan control algorithms have delays and smoothing for this reason.
If you had a steady-state load so stable that you could tune around it, setting a fan curve is rather easy and PID is overkill. For normal use, where you're going between idle and the occasional spike and back, trying to PID-target a specific temperature doesn't really give you anything useful, and it can introduce unnecessary delays in cooling if tuned incorrectly.
Something that would help is multiple fans being adjusted by the same PID controller. It's not a problem if they're spinning too fast, but if you need more cooling the controller needs to increase air flow at the fastest rate which means turning up multiple fans.
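A rough sketch of that idea: one PID loop on the temperature sensor, with its output applied to every fan at once, so total airflow ramps as fast as possible. The gains and setpoint below are arbitrary example numbers, not tuned values:

```python
# Sketch: a single PID loop whose output drives several fans together.
# Gains (kp, ki, kd) and the 70 °C setpoint are made-up example values.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, temp, dt):
        err = temp - self.setpoint          # positive when too hot
        # clamp the integral so it can't wind up past full duty
        self.integral = max(0.0, min(100.0, self.integral + err * dt))
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(100.0, out))    # clamp to 0-100 % duty

def apply_to_fans(duty, fans):
    # every fan gets the same duty, so airflow increases everywhere at once
    for fan in fans:
        fan.set_duty(duty)
```

`fans` here is any collection of objects with a `set_duty` method (a hypothetical interface, not a real library).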
I suppose the best way would be to try, but my current computers have no (or very unstable) interfaces for fan control.
They actually exist (such as the ARCTIC F12 TC) but they're not very common. Separate controllers such as the Adafruit EMC2101 are also available.
You can verify that in the EMC2101 datasheet, and in the case of the F12 TC in the product description ("...the fan responds to the rise in temperature swiftly: within a critical 6°C range, the rpm of the ARCTIC F Pro TC fan increases from around 500 to its maximum of 2,000 rpm (the steep curve in the chart)").
There's pretty much a 1:1 link between CPU load and CPU temperature, so relying on temperature probes is good enough for almost all situations.
The chip comes to consumers overclocked by default with a TDP of 105W. I suspect this is the case so that it can beat Intel on benchmarks on "default" settings.
You can set it to "eco mode" and have it run at 65W or 45W TDP. Under load, this only results in like a 5% reduction in performance for a dramatic reduction in electricity consumption, fan speed, heat, etc.
Not sure if the 5500x series chips are overclocked but using eco mode could be a good approach.
When you first launch it you have to scroll down the disclaimer/license to check off the "I agree to terms and conditions" box (which is obviously unchecked by default).
When you do, the "install" button appears, but the checked box now sits beside text about sending AMD information, and if you aren't looking you may assume it's still the same text as when you checked the box.
The end effect is to get the user to agree to send data without them noticing.
After you agree to the disclaimer the checkbox disappears and there is no changed text that appears.
In fact, chips have gotten so fast these days, I am seriously considering running my desktop on a laptop-class CPU. Should be able to run it near silently on air cooling alone with low power consumption.
But also, now, the deps for $WORK are in pre-built Docker images, so I'm not rebuilding nodejs every other weekend.
Depending on your cooling, having a 100 MHz higher max clock could be worth $5.
AMD have reduced this in their 7000 series.
On the Intel side of things, I've seen my 14700K idle at around 5W (which shoots up to 300W under load, I find it greatly amusing) and my 12700H at 0.5W or less.
[0] https://github.com/LibreHardwareMonitor/LibreHardwareMonitor
In my 15 years of PC building, this fan software tops them all. Huge amount of customisation, and it actually lets you control fan speeds based on both CPU and GPU loads at once.
This software can do what this article is looking to do, but I am not sure if there is a non-Windows version.
- one you might expect to be open source, since there is no monetary interest behind it besides an option to donate, and
- one that would potentially lend itself quite well to being open source, since it would offer a great base for other people to tinker around with their own cooling setups.
You're commenting on a discussion on how someone leveraged this sort of open source software to improve cooling.
Their linked GitHub repo, which only has compiled zips uploaded as release artifacts, not the code:
> https://github.com/Rem0o/FanControl.Releases
From the repo:
"Sources for this software are closed."
I wonder why the author hasn't open sourced it - there doesn't appear to be any commercial aspect to the tool, and the author makes a point of it being "free" on the site.
https://github.com/kelvie/esphome-config/blob/master/pc-fan-...
The main reason for doing this was so that I didn't have to connect the controller to my main PC via USB to program it (I can change the target points via MQTT/wifi).
Playing around with this stuff on my laptop, I've also noticed that you have to be careful which calls you make when querying system status in a loop. Some things (like, weirdly, `powerprofilesctl get`) drain a surprising amount of battery even when called only every 5 seconds. So in a sense, your tool may start to affect the "idle" power consumption somewhat, and you need to test for that.
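One mitigation, when the value you're polling is exposed in sysfs, is to read the file directly instead of forking a CLI tool every interval. A sketch (the path shown is the ACPI platform_profile node, which is related to what power-profiles-daemon reports but may not exist on every machine):

```python
# Cheap polling: read the value straight from sysfs rather than spawning
# a subprocess every interval. The path below is an assumption; it varies
# by platform and may be absent entirely.
PROFILE_PATH = "/sys/firmware/acpi/platform_profile"

def read_profile(path=PROFILE_PATH):
    try:
        with open(path) as f:   # one small read, no fork/exec overhead
            return f.read().strip()
    except OSError:
        return None             # node absent on this platform
```

A plain file read avoids the fork/exec and D-Bus round trips that make repeated CLI invocations surprisingly expensive.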
I switched from years-old Arctic Silver 5 to Noctua NT-H1. It made a dramatic difference: 64 °C loaded vs 84 °C. I now suspect I had an air bubble, which may invalidate the initial motivation for the work in the first place :-)
Most AIOs need servicing after a few years: find instructions on how to disassemble yours, clean the water block, flush the radiator, and refill with a deionized-water/glycol mix.
It's just a compound that helps with heat transfer. Neither the CPU's IHS nor the radiator is a perfectly flat surface, so if you mounted one on the other without TP, bubbles of air would get trapped in the imperfections of the surfaces. They aren't a problem per se, but they do make heat transfer worse, so the IHS runs hotter than it could and is a bit less effective at transferring heat to the radiator.
You have two options to improve the heat transfer:
a) polish both the IHS and the radiator to a perfect surface, i.e. they should be mirror-like; and then use a torque screwdriver to ensure that the cooler is tightened and leveled exactly right;
b) or you can just slap some thermal paste between them and call it done.
And in the second case you need 'just enough' TP to smooth out the imperfections; if you slap on too much, then again you're making heat transfer between the IHS and the radiator worse.
So if you ever find yourself wondering how much TP you should use, just start with a pea-sized amount, place the radiator (you can even screw it down if that's your jam), then remove it and look at how the TP spread across the IHS. If it covers 90% with a thin film (and your radiator isn't the lowest shit-tier, polished with an old rusty rasp), then you're fine.
The only difference between 20 years ago and now is that IHSes are much larger, so you may find you need... a larger pea as a reference.
Most modern coolers provide sufficient mounting pressure to spread a pea-sized amount of thermal paste over the whole contact surface.
There are also thermal pads, both reusable and single-use, that perform well and don't require any guesswork.
If you want to use paste, Noctua has the best paste in terms of thermal resistance, but MX-4 or MX-5 both perform well, as do Cryorig and a bunch of others.
Here's one of the test videos you can find on Youtube about the subject: https://youtu.be/aaxBYrZFJZM?t=199
https://www.sciencedirect.com/science/article/abs/pii/S01676...
Link to full paper via sci-hub:
https://sci-hub.st/10.1016/j.scico.2021.102609
This appears to be a web page where the authors have posted links to their research, data, updates, etc:
https://sites.google.com/view/energy-efficiency-languages/
For transparency, I've posted these here recently:
It's not really surprising to see C and C++ doing so well, and it's also positive to see Rust up there as one of the most energy-efficient languages. The one language that keeps surprising me is Pascal. It's often in the top 5-10 in terms of speed, and it also does really well on energy consumption. While I haven't read the article, I could also imagine that it's good in terms of "power spent compiling" due to its one-pass compiler. What I'm not sure of is whether that's all a result of the language design, or whether it's because a lot of work has been put into it over the years by some really smart people. I presume that the tested implementation is Free Pascal.
E.g. assembly would have very low energy use by itself, but would require an inordinate amount of human energy (~8.7 MJ/day) invested to get that end result, making it very inefficient when the whole picture is considered. Unless that code runs everywhere constantly for years of course.
Either way, there seems to be a serious problem with unfiltered air being sucked into the case here. That radiator isn't going to radiate if it's wearing a fur coat.
Undervolting with PBO2 should not decrease performance unless you have done something very wrong.
Ryzen CPUs have limits: max temperature, frequency, power, and voltage. The voltage curve follows the frequency curve, so a higher clock speed requires a higher voltage. A negative offset in PBO reduces the voltage required for a given frequency; it shifts the voltage curve down. Lower voltage typically means less heat and power draw, so you can achieve a higher clock speed without hitting the temperature, power, or voltage maximums.
If your system is stable when undervolting, you don't see a loss in performance; generally it improves, because you're able to reach higher clock speeds before running into voltage or power limits. The exception is if you induce a rare issue called clock stretching in extreme cases, which I'm not even sure you can do with PBO2.
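The power argument can be sanity-checked with a back-of-envelope model: dynamic power scales roughly with C·V²·f, so lowering the voltage at a given frequency frees headroom under a fixed power limit, which the boost algorithm can spend on more frequency. All numbers below are invented for illustration, not real Ryzen figures:

```python
# Toy model of why a negative voltage offset can RAISE sustained clocks.
# The V/f curve, capacitance constant, and power limit are all made up.
def required_voltage(freq_ghz, offset_v=0.0):
    # fake stock V/f curve: 0.9 V at 3 GHz rising linearly to 1.35 V at 5 GHz
    stock = 0.9 + (freq_ghz - 3.0) * (1.35 - 0.9) / 2.0
    return stock + offset_v          # PBO-style negative offset shifts it down

def power_w(freq_ghz, offset_v=0.0, c=35.0):
    v = required_voltage(freq_ghz, offset_v)
    return c * v * v * freq_ghz      # dynamic power, roughly C * V^2 * f

def max_freq_under(power_limit_w, offset_v=0.0):
    # step up the clock until the next step would exceed the power limit
    f = 3.0
    while f < 5.0 and power_w(f + 0.025, offset_v) <= power_limit_w:
        f += 0.025
    return round(f, 3)
```

Running `max_freq_under` with and without a small negative offset shows the undervolted curve sustaining a higher clock inside the same power budget.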
https://github.com/dak180/TrueNAS-Scripts/blob/master/FanCon...
(That's not mine. I think I wrote a variation in Python.)
Then I realized: my server is in the unfinished part of my basement where I can't hear it anyway. Let's just run the fans at 80% speed all the time since that's sufficient to keep the drives cool.
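For anyone wanting to do the same on Linux, pinning fans at a fixed duty is just a couple of hwmon sysfs writes. A sketch (the hwmon path and pwm index are assumptions; check /sys/class/hwmon on your machine, and note the writes need root):

```python
# Pin a fan at a fixed duty via the Linux hwmon sysfs interface.
# HWMON and PWM are assumptions: the right hwmonN and pwmN depend on
# your motherboard's sensor chip and driver.
HWMON = "/sys/class/hwmon/hwmon2"
PWM = 1

def percent_to_pwm(percent):
    # hwmon pwm registers take a raw 0-255 value, not a percentage
    return round(percent / 100 * 255)

def set_fixed_duty(percent, hwmon=HWMON, pwm=PWM):
    with open(f"{hwmon}/pwm{pwm}_enable", "w") as f:
        f.write("1")                 # 1 = manual pwm control per hwmon ABI
    with open(f"{hwmon}/pwm{pwm}", "w") as f:
        f.write(str(percent_to_pwm(percent)))

# set_fixed_duty(80)  # ~80 % duty on pwm1 (requires root)
```

Writing "1" to pwmN_enable switches that channel to manual control before the raw duty value is written, per the kernel's hwmon sysfs conventions.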
It's because the CPU is designed to push itself to a thermal limit, with its output performance determined by how well you keep it at that limit. It essentially goes full throttle up to 90 °C, then slows down if your cooling can't keep up, which causes the fan spikes.
So I’m told from the research I did.
My latest machine had the same issue, but just updating all drivers, setting some auto curves, and adding easing to the fan spin-up time completely solved it.
No, Linux is as valid technologically as the other offerings, fragmented or not. It's just that it doesn't have a mega-corp behind it to push, to make deals with businesses, schools, governments. As soon as Google stepped in with Android and ChromeOS, suddenly it was everywhere.
I switched to a Fractal Celsius and its default setting is to control pump and fan speed by water temp. Problem solved.
This seems extremely over-engineered and sounds like it could have been solved by using Noctua or similar quiet fans.
Most of the work is in stress testing the CPU to see what you can get away with without thermal throttling. Also helps to do this in the summer if you're in a no-air-conditioning-by-default country.
Setting your AIO pump to static full power will greatly increase the idle power consumption of the machine.
My only worry is that rapid changes in pump speed might cause extra mechanical stress or wear on the pump, but I have no data to back that up. I've just heard that water pumps sometimes behave in counter-intuitive ways, e.g. sometimes running at a higher speed is better for longevity than a lower one.
Not dissing the author's efforts, quite the contrary! But they demonstrate the rabbit hole of second-order effects (like multi-fan beat frequency) and the number of parameters to take into account (like "...A solution to [when to enable Passive Mode] may be to detect if the computer is in use (mouse movements)").