> if it exceeds the context, the agent does random stuff that often goes against simplicity and coherent logical structure.
That's a current technical limitation. Are you so sure it won't be overcome in the near to mid-term?
> an LLM has zero intention, and relies on you to decide what to build and, more importantly, what not to build
But work is being done to automate or even remove this layer, right? Sure, that claim is hyperbole (in fact, it is), but isn't that exactly what Anthropic et al are predicting? Why wouldn't your boss, or your boss's boss, do this instead of you? If they lack the judgment today, are you so sure they can't gain it once they no longer have to spend time learning how to code? If not now, what about soon-ish?
> As of today, the AI does not automate me in any way
Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?