You guys are talking about probably one of the few fields where an ML takeover isn’t very feasible. (Partly because for a vast portion of control problems, we’re already about as good as you can get).
Adding a black box to your flight home for Christmas, with no mathematical guarantee of robustness and no insight into what it thinks is actually going on, just to go from 98% to 99% efficiency is... not a strong use case for LLMs, to say the least.
I'm certainly not intelligent enough to solve these problems, but I don't think any intelligent people out there can either. Not alone, at least. Maybe I'm too dumb to realize that it's not as complicated as I think, though. I have no idea.
I programmed a flight controller for a quadcopter and that was plenty of suffering in itself. I can't imagine doing limbs attached to a torso or something. A single limb using inverse kinematics, sure – it can be mounted to a 400 lb table that never moves. Beyond that is hard.
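Even the "easy" single-limb case still rests on closed-form math rather than learned guesswork. As a toy sketch (the link lengths, target point, and function names here are all made up for illustration), inverse kinematics for a planar two-link arm can be solved exactly with the law of cosines:

```python
import numpy as np

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form IK for a planar two-link arm (one elbow branch)."""
    r2 = x ** 2 + y ** 2
    # Law of cosines gives the elbow angle directly.
    c2 = (r2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = np.arccos(c2)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics, to sanity-check the IK solution."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Round-trip check: IK then FK should land back on the target.
th1, th2 = two_link_ik(1.2, 0.5)
print(forward(th1, th2))
```

The point is that you can verify the answer exactly (FK of the IK output returns the target, to floating-point precision) – there's no analogous round-trip check for a black-box policy driving a torso full of coupled limbs.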
You need to do all of these things you're talking about and then be able to quantify stability, robustness, and performance in a way that satisfies human requirements. A black-box neural network isn't going to do that, and you're throwing away 300 years of Enlightenment physics by making some data-engorged LLM spit out something that "sort of works" while giving us no idea why, or for how long.
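"Quantify stability" isn't hand-waving, either – for a linear system it's a theorem you can state and check. A minimal sketch (the plant is a double integrator and the gains are hand-picked for illustration, not from any real design):

```python
import numpy as np

# Double integrator: x1' = x2, x2' = u (position/velocity of a unit mass).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# State-feedback law u = -K x with hand-picked gains.
K = np.array([[2.0, 3.0]])

# Closed-loop dynamics x' = (A - B K) x. The system is asymptotically
# stable iff every eigenvalue of (A - B K) has negative real part --
# a mathematical guarantee, not a vibe.
eig = np.linalg.eigvals(A - B @ K)
print(sorted(e.real for e in eig))  # both real parts negative
```

For this choice of K the closed-loop poles land at -1 and -2, so stability is provable before anything flies. That's the kind of certificate a black box simply doesn't give you.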
Control theory is a deeply studied and rich field outside of computer science and ML. There’s a reason we use it and a reason we study it.
Using anything remotely similar to an LLM for this task is just absolutely naive (and in any sort of crucial application would never be approved anyways).
It’s actually a matter of human safety here. And no, ChatGPT spitting out a nice-sounding explanation of why some controller will work is not enough. There needs to be a mathematical model that we can understand and a solid justification for the control decisions. Which, uh... at the point where you’re reviewing all of this stuff for safety, you’re just doing the job anyway...
First there was a comment that GPT wasn't intelligent yet, because if you give it a few servos, it can't make them walk.
But that's something we can't do yet either.