From speaking to my friends in the industry, it seems like uptake of AI for code is happening slowly and unevenly, and the results depend largely on the level of documentation, which is often lacking. (I know of a few people using AI for (high-quality!) work on Godot, and their AIs struggle with many of the implicit conventions in that codebase.)
With that said, I would say that LLMs have generally been quite the boon for the (limited) gameplay work I have done recently. Because generation is so cheap [0], it is trivial to try something out, experiment with variations, and then polish it up or discard it entirely.
This also applies to performance work: if it's a metric that the AI can see and autonomously work on, it can be optimised. This is, of course, not always possible - it's hard to tell your AI to optimise arbitrary content - but it's often more possible than not, especially if you get creative. (Asking it to extract a particularly hot loop from the surrounding code and then optimise that in isolation, for example: entirely feasible.)
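To make that concrete, here is a minimal hypothetical sketch (the function name and workload are made up, not taken from any real project): the hot loop is pulled out into a free function, and a tiny timing harness prints a number that the agent can see and iterate against.

    // Hypothetical sketch: a hot loop extracted from its surrounding gameplay
    // code into a standalone function, plus a trivial timing harness that
    // produces a metric an agent can optimise against.
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // The extracted hot loop: sum of squared differences (stand-in workload).
    float accumulate_distances(const std::vector<float>& xs,
                               const std::vector<float>& ys) {
        float total = 0.0f;
        for (std::size_t i = 0; i < xs.size(); ++i) {
            float dx = xs[i] - ys[i];
            total += dx * dx;
        }
        return total;
    }

    int main() {
        std::vector<float> xs(1000000, 1.0f), ys(1000000, 0.5f);

        auto start = std::chrono::steady_clock::now();
        float result = accumulate_distances(xs, ys);
        std::chrono::duration<double, std::milli> elapsed =
            std::chrono::steady_clock::now() - start;

        // The printed timing is the visible metric the agent works to reduce.
        std::printf("result=%f elapsed=%.3f ms\n", result, elapsed.count());
        return 0;
    }

The point isn't the loop itself; it's that once the work lives in an isolated function with a printed number attached, the agent has something it can measure, change, and re-measure without you in the loop.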
I think there are still growing pains, but I'm confident that LLMs will rock the world of gamedev, just as they're doing in other, better-attested fields of programming.
[0]: https://simonwillison.net/guides/agentic-engineering-pattern...