What I mean, of course, is that we're using these LLMs for a lot of tasks they're inappropriate for, where a clever hand-coded algorithm could do the job better and far more efficiently.
Sure, but how long would it take to implement this algorithm, and would that be worth it for one-off cases?
Just today I asked Claude to create a jq query that looks for objects with a certain value for one field, but which lack a certain other field. I could have spent a long time trying to make sense of jq's man page, but instead I spent 30 seconds writing a short description of what I'm looking for in natural language, and the AI returned the correct jq invocation within seconds.
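For the record, a query of that shape is a one-liner once you know about `select` and `has`. The field names and data below are hypothetical, just to illustrate the pattern the comment describes (match one field's value, require another field to be absent):

```shell
# Sample input: keep objects where .type == "user" but which lack an "email" field.
# select() filters objects; has("email") tests whether the key exists.
echo '[{"type":"user","name":"a"},{"type":"user","name":"b","email":"b@x"},{"type":"admin","name":"c"}]' \
  | jq '[.[] | select(.type == "user" and (has("email") | not))]'
```

This prints only the first object, since it is the only `"user"` without an `"email"` key. Which rather proves the point: the filter is trivial to read after the fact, but finding `has` in the man page takes longer than asking.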
How so? I'd imagine a robot connected to the data center embodying its mind, connected via low-latency links, would have to walk pretty far to get into trouble when it comes to interacting with the environment.
The speed of light is about three orders of magnitude faster than the speed of signal propagation in biological neurons, after all.
Recent research from NVIDIA suggests such an efficiency gain is quite possible in the physical realm as well. They trained a tiny model, in simulation, to control the full body of a robot.
---
"We trained a 1.5M-parameter neural network to control the body of a humanoid robot. It takes a lot of subconscious processing for us humans to walk, maintain balance, and maneuver our arms and legs into desired positions. We capture this “subconsciousness” in HOVER, a single model that learns how to coordinate the motors of a humanoid robot to support locomotion and manipulation."
...
"HOVER supports any humanoid that can be simulated in Isaac. Bring your own robot, and watch it come to life!"
More here: https://x.com/DrJimFan/status/1851643431803830551
---
This demonstrates that with proper training, small models can perform at a high level in both cognitive and physical domains.
Hmm... my intuition is that humans' capabilities are learned during early childhood (walking, running, speaking, etc.)... what are examples of capabilities pretrained by evolution, and how does that work?
This is a great milestone, but OpenAI will not be successful charging 10x the cost of a human to perform a task.
True, but they might be successful charging 20x for 2x the skill of a human.
If it can be spun up with Terraform, I bet you they could.
Right now when I ask an LLM, I have to sit there and verify everything. It may have done some helpful reasoning for me, but the whole point of asking someone else (or something else) was that I wouldn't have to do anything at all.
I’m not sure you can reliably fulfill the first scenario without achieving AGI. Maybe you can, but we're not at that point, so we don't know yet.