Seems risky to run commands on your machine that a hallucinating AI spit out at you without understanding them, but I guess they can at least tell you what to look up so you can double-check what the command will do.
It really depends on what you're doing, right? Any command-line work I use an LLM to help with is non-destructive. I'll use jq, ffmpeg, etc., but I never modify the original source files.
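A minimal sketch of that non-destructive pattern, assuming hypothetical filenames: every command reads the source and writes to a *new* file, so even a wrong LLM-suggested filter can't damage the original.

```shell
set -eu

# Hypothetical input file standing in for the "initial source".
printf '{"name": "demo", "count": 3}\n' > source.json
cp source.json source.json.bak   # keep a copy to verify nothing changed

# jq reads source.json and writes a NEW file; the input is untouched.
jq '.count += 1' source.json > updated.json

# ffmpeg follows the same shape: input stays, output is a separate file.
# ffmpeg -i input.mp4 -vf scale=640:-1 output.mp4

# Confirm the original is byte-for-byte identical to the backup.
cmp -s source.json source.json.bak && echo "original unchanged"
```

The key habit is simply never using in-place flags (like `sed -i` or overwriting the input path), so an unverified command's blast radius is limited to a throwaway output file.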