I'm sorry to hear this. I have encouraged the developers I manage to try out the tools, but we're nowhere close to "forcing" anyone to use them. It hasn't come up yet, but I'll be pushing back hard on any code that is clearly LLM-generated, especially if the developer who "wrote" it can't explain what's happening. Understanding and _owning_ the code the LLMs generate is part of it; "ChatGPT said..." or "Cursor wrote..." are not valid answers to questions like "Why did you do it this way?". LLM-washing (or whatever you want to call it) will not be tolerated: if you commit it, you are responsible for it.
> It's been very stressful and overall an extremely negative experience for me, which is made worse when I read the constant cheerleading online, and the "You're just using it wrong" criticisms of my negative experience
I hate hearing this because there are plenty of people writing blog posts or making YouTube videos about how they are 10000x-ing their workflow. I think most of those people are completely full of it. I do believe it can be done (managing multiple Claude Code or similar instances running at once), but it turns you into a code reviewer, and because you've already ceded so much control to the LLM, it's easy to fall into the trap of thinking "One more back-and-forth and the LLM will get it" (cut to 10+ back-and-forths later, when you need to pull the ripcord and reset back to the start).
Copilot and short suggestions (no prompting from me, just it suggesting the next few lines inline) are the sweet spot for me. I fear many people are incorrectly extrapolating LLM capability: "Because I prompted my way to a POC, clearly an LLM would have no problem adding a simple feature to my existing code base." Not so, not by a long shot.