Maybe! So far what I’ve seen are AI devs that are too afraid to say “No, you’re wrong, this will never work” and can’t ask the awkward questions like “OK, A sounds great, but you already promised customer B that not-A will always be true. What do we do about that?”
To gather requirements effectively, you need a working mental model of the system, and AI struggles mightily in that area.