This statement worries me for a number of reasons.
First, I work on a massive codebase, with a large engineering organization, and I have seen plenty of AI-generated unit tests. I have not seen a single example of an LLM-generated unit test that didn't contain multiple test anti-patterns, to the point where I would argue they are creating a self-fulfilling prophecy. You said you think that unit tests are a waste of time. I would argue that they CAN be even worse than that.
The benefit of unit tests is that, at their best, they give you a safety net for refactoring existing code. If you change the implementation of a system under test, and the tests pass, you know you didn't introduce a breaking change.
But a lot of tests couple themselves tightly to the implementation details. Every single LLM-generated unit test I have observed in the wild introduces this anti-pattern. If you have a system under test, and changing the implementation of that system without breaking its behaviour causes a test to fail... that's called the "Fragile Test" problem. Now your unit test is not only failing to encourage you to refactor code... it's actively DISCOURAGING you from refactoring code. In this case, the unit test is providing DISVALUE rather than value.
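To make the "Fragile Test" problem concrete, here's a minimal sketch (all names hypothetical, not from any real codebase). Both tests pass today; only one survives a refactor:

```python
# Hypothetical example: a tiny cache class and two ways to test it.

class Cache:
    """Stores key/value pairs (kept deliberately trivial)."""
    def __init__(self):
        self._store = {}  # implementation detail, subject to change

    def put(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)


# FRAGILE: couples the test to the private dict. Swap the dict for a
# list of pairs or an LRU structure and this test breaks even though
# the observable behaviour is unchanged.
def test_put_fragile():
    c = Cache()
    c.put("a", 1)
    assert c._store == {"a": 1}


# BEHAVIOURAL: exercises only the public contract. Any correct
# reimplementation keeps this test green, so it supports refactoring
# instead of punishing it.
def test_put_behavioural():
    c = Cache()
    c.put("a", 1)
    assert c.get("a") == 1
    assert c.get("missing", 42) == 42
```

The LLM-generated tests I've seen look like the first one: they assert on internals, mocks, and call sequences rather than on the contract.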
So the fact that a) you think unit tests are a waste of time and b) you look at AI as a way to save you from a chore ... tells me that you have no business ever writing unit tests, with or without AI. Please stop. You are making the world worse by leveraging an LLM to do these things for you.
I have NEVER looked at writing a unit test by hand as a "chore" or as a "waste of time." I often write my tests before even writing my implementation code, because doing so helps me think through both the design and the requirements of my code... and gives me a little sandbox where I can make sure that the brand new code I am writing is doing what I want it to. It's a problem-solving tool. Not something to be done after the fact as a chore.
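For a flavour of that test-first sandbox, here's a sketch (the `slugify` function is hypothetical, purely for illustration). The test comes first and forces me to pin down the contract, including edge cases, before a line of implementation exists:

```python
import re

# Step 1: write the test. This IS the design work: what are the inputs,
# the outputs, the edge cases? Deciding that up front is the point.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  everywhere ") == "spaces-everywhere"
    assert slugify("") == ""

# Step 2: only then write an implementation that satisfies the contract.
def slugify(text: str) -> str:
    """Lowercase, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

Note that the test only cares about inputs and outputs; I could later rewrite `slugify` around `str.translate` or a manual loop and the test would keep passing.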
"Write Documentation" is not writing code. And if you don't read the documentation you're generating, no one else will. So what's the point of having it in the first place if no one reads it? Food for thought.
"Find problems" I see as being akin to a spell check, or the red squigglies when you have a syntax error. I do see the value of having ML tech built into the IDE itself. We've been using ML for email spam filters for decades, and there are interesting problems to be solved there. But this is an area where I want my IDE to flag something for me. It's not me actively prompting an LLM to generate code for me.