Environment: I am currently playing with a PID control function for my GPU fan. That is, instead of saying "map temp x to fan speed y" (a fan curve), I say "set the fan to whatever speed is needed to hold temp z" (PID control).
Question: is there a reason PID-type control is never a thermal option? Or put another way, is there something about the desired thermal characteristics of a computer that makes PID control undesirable?
As a final thought, I have halfway convinced myself that in a predictable thermal system a curve would end up matching some set of PID parameters anyway.
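For reference, the kind of thing I mean is roughly this sketch (a toy controller, not any real fan-control tool's API; the class name and gains are made up for illustration):

```python
# Minimal PID controller sketch for fan duty: drive the fan toward a
# temperature setpoint instead of mapping temperature -> speed directly.
class PID:
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        # For cooling, error is positive when we are hotter than the target,
        # so a positive output means "spin the fan faster".
        error = measured - setpoint
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to a valid duty-cycle range; a real controller would also
        # want anti-windup on the integral term.
        return max(self.out_min, min(self.out_max, out))
```

Each loop iteration you'd feed it the current GPU temperature and write the clamped output back as the fan duty cycle.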
Why, though? I generally don't care about the specific temperatures of my CPU and GPU, just that they don't get too warm. So for the CPU (AIO) I basically have "0% up until 45C, then ramp up to 100% when it hits 90C", and the same for the GPU except its minimum is 10%.
I guess I could figure out target temperatures and do it the other way around, but I'm not sure what the added complexity buys me. The end result (at least the one I need) remains the same: cool down the hardware when it gets hotter. And for me, the simpler the better.
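The whole curve fits in a few lines anyway. A minimal sketch of the mapping I described (function name and parameters are my own, not from any fan-control software):

```python
# Piecewise-linear fan curve: `floor` percent up to t_low, then a linear
# ramp that reaches 100% at t_high (set floor=10.0 for the GPU variant).
def fan_curve(temp_c, t_low=45.0, t_high=90.0, floor=0.0):
    if temp_c <= t_low:
        return floor
    if temp_c >= t_high:
        return 100.0
    frac = (temp_c - t_low) / (t_high - t_low)
    return max(floor, frac * 100.0)
```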
I also have two ambient temperature sensors in the chassis itself, right at the intake and the exhaust. The intake one just monitors whether my room is getting so warm that the computer can't cool itself effectively (the summers here get really warm), and the exhaust one checks overall temperature and controls the intake fans. In reality, I don't think I even need this: CPU+GPU temperature plus a fan speed set from that feels simple enough to solve 99% of what you'd want to do here.
And now I'm about halfway through building PID fan control software and a janky GPU temp simulator, so I can get some intuition on tuning the PID parameters before I set it loose on my actual GPU. You know, the fun part of computing. But now I'm worried that perhaps there is a real reason nobody does it this way.
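For the simulator, a first-order thermal model is plenty janky enough for tuning. Something like this sketch (all the coefficients are made-up numbers, chosen for intuition rather than accuracy):

```python
# Toy GPU thermal model: heating is proportional to dissipated power,
# cooling is proportional to the delta above ambient and improves with
# fan speed. Step it once per simulated second while the PID loop runs.
def simulate_step(temp, power_w, fan_pct, ambient=25.0, dt=1.0,
                  heat_coeff=0.05, base_cool=0.01, fan_cool=0.002):
    heating = heat_coeff * power_w
    cooling = (base_cool + fan_cool * fan_pct) * (temp - ambient)
    return temp + (heating - cooling) * dt
```

Feed the PID output in as `fan_pct` each step and you can watch the temperature settle (or oscillate) under different gains without risking real hardware.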
I think no one is doing it that way because there is simply no need for it. Sure, when I'm 3D printing, some materials need the heatbed to be at exactly 45C or whatever, but why would I care about the specific temperature of my GPU? As long as it's not throttling when GPU utilization is at 100%, I'm good to go.
KISS :)
Again, though, I could be totally off. I just remember that being spread around as "conventional wisdom."
Yeah, I'd understand not wanting to cycle between 0C and 90C over and over. But my GPU idles at around 35C and maxes out at 85C or so, and going back and forth will surely be preferable to holding a single temperature by capping the card's voltage and clocks, especially considering performance.
But again, I'm using my card for ML, number crunching, simulations, and VFX; you might be right that the cryptocurrency-mining use case prefers a different thermal profile.
I limit power consumption profiles and clock speeds unless higher power is required, and combine that with an oversized cooling system, which keeps regular temps consistent.
I've only ever seen something like this in really high-reliability equipment, where they worry about repeated thermal expansion causing cracks in the boards and solder joints. There are often heaters available in case the temperature gets too low. For most equipment, I think the juice just isn't worth the squeeze, so it isn't done.
For most people a fan curve is more obvious to work with, and it's largely good enough without the irritating behaviors that insufficiently tuned control loops can exhibit.