That's the frustrating thing. LLMs don't materially reduce the set of problems where I'm running against a wall or have trouble finding information.
* To catch passive voice and nominalizations in my writing.
* To convert Linux kernel subsystems into Python so I can quickly understand them (I'm a C programmer but everyone reads Python faster).
* To write dumb programs using languages and libraries I haven't used much before; for instance, I'm an ActiveRecord person and needed to do some SQLAlchemy stuff today, and GPT 4o (and o1) kept me away from the SQLAlchemy documentation.
OpenAI talks about o1 going head to head with PhDs. I couldn't care less. But for the specific problem we're talking about on this subthread: o1 seems materially better.
Do you have an example chat of this output? Sounds interesting. Do you just dump the C source code into the prompt and ask it to convert to Python?
def linear_ctr(target, argc, argv):
    print("Constructor called with args:", argc, argv)
    # Initialize target-specific data here
    return 0

def linear_dtr(target):
    print("Destructor called")
    # Clean up target-specific data here

def linear_map(target, bio):
    print("Mapping I/O request")
    # Perform mapping here
    return 0

linear_target = DmTarget(name="linear", version=(1, 0, 0), module="dm_mod")
linear_target.set_ctr(linear_ctr)
linear_target.set_dtr(linear_dtr)
linear_target.set_map(linear_map)

info = linear_target.get_info()
print(info)
(A bunch of stuff elided). I don't care at all about the correctness of this code, because I'm just using it as a roadmap for the real Linux kernel code. The use-case code is an example of something GPT 4o provides that I didn't even know I wanted.

I think of it as making the top bar of the T thicker, but yes, you're right, it also spreads it much wider.
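(If you want to actually run the generated snippet, here's a guess at a minimal `DmTarget` stub; the class and its methods are hypothetical, inferred from how the generated code calls them, and have nothing to do with the real kernel API.)

```python
class DmTarget:
    """Hypothetical stub matching the calls in the snippet above."""

    def __init__(self, name, version, module):
        self.name = name
        self.version = version  # e.g. (1, 0, 0)
        self.module = module
        self.ctr = None
        self.dtr = None
        self.map = None

    def set_ctr(self, fn):
        self.ctr = fn

    def set_dtr(self, fn):
        self.dtr = fn

    def set_map(self, fn):
        self.map = fn

    def get_info(self):
        # Summarize the target as a plain dict
        return {
            "name": self.name,
            "version": ".".join(map(str, self.version)),
            "module": self.module,
        }

linear_target = DmTarget(name="linear", version=(1, 0, 0), module="dm_mod")
info = linear_target.get_info()
print(info)
```

Again, the value here isn't correctness; it's having a small runnable map of the shape of the real thing.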
I can't think of many situations where I would use them for a problem that I had already tried to solve and failed at: not only because they would probably fail too, but because in many cases it would be hard even to know that they had failed.
I use it for things that are not hard, things that could be solved by anyone without a specialized degree who has put in the effort to learn some knowledge or skill, but that would take too much work to do myself. And there are a lot of those, even in my highly specialized job.
Tangentially related, a comic on them: https://existentialcomics.com/comic/557
Once you step outside routine Stack Overflow questions for the top three languages, you run into the limitations of these predictive models.
There's no "reasoning" behind them. They are still, largely, bullshit machines.
It often fails even for those questions.
If I need to babysit it for every line of code, it's not a productivity boost.