I mean no, no it isn't.
I'm giving it info on how to construct data models with a custom library (so interacting with that isn't using anything previously stored), and then giving it businesses/tasks to model as simple human descriptions.
If you tell me that something which
* Takes a human description of a problem
* Describes back to me the overall structure and components required to solve it, as a hierarchy
* Converts that into code, correctly identifying where it makes sense for an address to be contained within a model or distinct and referenced
* Correctly reuses previously created classes that are obviously not in its original dataset
has no understanding or reasoning and is just regurgitating things it's seen before, simply mashed together, then I don't know what to say.
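To make the embedded-vs-referenced point concrete, here's a minimal sketch of the kind of distinction I mean. The names (`Address`, `Customer`, `Shipment`) and the plain-dataclass style are hypothetical stand-ins, not the actual custom library from my sessions:

```python
from dataclasses import dataclass

# Hypothetical sketch: an address can be embedded in a model that owns it,
# or referenced by ID when many records share one.

@dataclass
class Address:
    street: str
    city: str

@dataclass
class Customer:
    name: str
    address: Address      # embedded: this address belongs to this customer alone

@dataclass
class Shipment:
    contents: str
    warehouse_id: int     # referenced: many shipments point at one shared warehouse

# Shared lookup table for referenced addresses
warehouses = {1: Address("1 Depot Rd", "Springfield")}

c = Customer("Ada", Address("42 Main St", "Shelbyville"))
s = Shipment("widgets", warehouse_id=1)

print(c.address.city)                   # Shelbyville
print(warehouses[s.warehouse_id].city)  # Springfield
```

Deciding which of those two shapes fits a given business description is exactly the kind of judgment call the model gets right.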
Frankly,
> it's taking several based on probability and merging them into what it thinks we're looking for.
sounds pretty much like understanding and reasoning to me.
> but there's no need to exaggerate it as magic.
I'm absolutely not saying there's magic. Humans aren't magic, and they can reason. I'm saying it's not just looking up text and regurgitating it.
I think this is supported by things like Othello-GPT, which builds an internal model of the board state and produces its outputs based on that model.