*I should qualify that "using" CC in the strict sense has no learning curve; really getting the most out of it may take some time as you run into its limitations. But it's not learning tech in the traditional sense.
Even projects as simple as "set up a tmux/vim binding so I can write prompts in one pane and run claude in the other" fail.
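For reference, the setup I was asking for is only a couple of lines by hand. A minimal sketch, assuming a `claude` CLI on the PATH; the key choices and pane index are arbitrary and your layout may differ:

```
# ~/.tmux.conf -- two-pane prompt-writing setup
# prefix + C: split the window and start claude interactively in the new
# right-hand pane, leaving vim in the left pane for writing prompts
bind-key C split-window -h "claude"

# prefix + P: paste tmux's most recent buffer into pane 1 (assumed to be the
# claude pane) and press Enter to submit it as a prompt
bind-key P paste-buffer -t 1 \; send-keys -t 1 Enter
```

From vim, something like `:w !tmux load-buffer -` gets the current buffer into a tmux paste buffer for the second binding to pick up.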
I've been coding for over 20 years.
If there is no learning curve, why doesn't it work for me? You can't say I'm not using it right, because if that were true, all I'd need to do is climb the learning curve to fix it, the very curve you say doesn't exist.
That's what's been asked of me at my last two jobs. Vibe code it; if it's bad, just throw it away and regenerate it because it's "cheap". The only thing that matters is that you can quickly generate visible changes and ship them to market.
Out of frustration I asked upper management (at my current job): if you want me to use AI like that, then I'll do it. But when it inevitably fails, who is responsible? If there's no risk to me, I'll AI-generate everything starting today, but if I have to take on the risk, I won't be able to work this way.
Their response was that AI generates the code and I'm responsible for reviewing it and making sure it's risk-free. I can see that they're already looking for contractors (with no skin in the game) who are more than willing to run the AI agents and ship vibe code, so I'm at a loss as to what to do.
I'm not sure why it isn't working for you. Maybe your expectation is that it either one-shots the task perfectly or it has zero value, with nothing in between?
But my advice is to switch gears and treat the "plan file" as the deliverable you're polishing, rather than the implementation. It's planning, research, and specification that tend to be the hard part, not yoloing solutions live to see if they'll work -- we do the latter all the time to avoid 10 minutes of planning.
So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file.
From there you can read, review, and tweak the plan file. Or have it implement it. Or implement it yourself. But the idea is that an LLM is useful at this intermediate planning stage without tacking on additional responsibilities.
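To make that concrete, here's a rough skeleton of what a plan file can end up looking like. The section names are just an illustration of the shape, not a template Claude Code prescribes:

```
# Plan: <feature or fix being specified>

## Problem
What's broken or missing, and why it matters.

## Research
Relevant docs, prior art, and codebase constraints turned up while brainstorming.

## Approach
The chosen solution and why it won out over the alternatives.

## Steps
1. Concrete, ordered changes (files, functions, migrations).
2. ...

## Verification
How to test and confirm the change before calling it done.
```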
I think by "no learning curve" they are referring to how you can get value from it without doing the research you'd need to use a conventional tool. But there is a learning curve to getting better results.
I learned my plan file workflow just from Claude Code having a "Plan Mode" that spits out a plan file, and it was obvious to me from there, but there are people who don't know it exists or what the value of it is, yet it's the centerpiece of my workflow. I also think it's the right way to use AI: the plan/prompt is the thing you're building and polishing, rather than skipping past it to an underspecified implementation. Because once you're done with the plan, the impl is trivial and repeatable from that plan, even if you want to do the impl yourself.
I'm way past the point of arguing anything here, just trying to help.
This is exactly the workflow that works very well for me in Cursor (although I don't use their Plan Mode - I do my own version of it). If you know the codebase well, this can increase your speed/productivity quite a bit. I'm not trying to convince naysayers of this; their minds are already made up. Just wanted to chime in that this workflow does actually work very well (I've been using it for over 6 months).
I've been reading a book about the history of math, and early on the author points out how some fields undergo a radical change from within due to some discovery (e.g. quantum theory in physics), and the practitioners in that field inevitably go through a transformation where the generations before and after can't really relate to each other anymore. I'm paraphrasing quite a bit, so I'll just recommend people check out the book if they're interested: The History of Mathematics by Jacqueline Stedall.
And the aforementioned VS Code video, if I remember correctly: https://youtu.be/dutyOc_cAEU?si=ulK3MaYN7_CPO76k
Because LLMs are not actually good at programming, despite the hype.
I think a decent place to start is: given a small web app, give it a bug report and ask it what causes the bug.
There isn't? Then why is it that whenever devs have tried it and not achieved useful results, they're told that they just haven't learned how to use it right?
If people really counted all the time they spend coddling the AI, trying again, then trying again and again to get a useful output, then having to clean up that output, they would see that the supposed efficiency gains are near zero, if not negative. The only people it really helps are people who were not good at coding to begin with, and they will be the ones producing the absolute worst slop because they don't know the difference between good and bad code. AI is constantly trying to introduce bugs into my codebase, and I see it happening in real time with AI code completion. So no, you aren't "holding it wrong"; the other people are no different from the crypto bros who were pushing blockchain into everything and hoping it would stick.
I use LLMs pretty regularly, so I'm familiar with the kinds of tasks they work well on and where they fall flat. I'm sure I could get at least some utility from Claude Code if I had an unlimited budget, but the voracious appetite for tokens even on a trivially small project -- combined with a worse answer than a curated-context chatbot prompt -- makes its value proposition very dubious. For now, at least.
* I considered trying Opus, but the fundamental issue of it eating through tokens meant, for me, that even if it worked much better, the cost would dramatically outweigh the benefit.
If you have ever tried teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did.