I think a lot of lessons from AI safety apply surprisingly well to A/B testing, mainly around how hard it is to align your actual goals with the metrics you use for optimization, and how disastrous the consequences can be. It doesn't have to go wrong, but it's incredibly hard to ensure it goes right, especially when the metric is the only feedback you have.
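To make that concrete, here's a minimal sketch of the proxy-versus-goal gap (a Goodhart's law situation). The functional forms and the "popup aggressiveness" knob are invented purely for illustration:

```python
# Hypothetical model: tune the "aggressiveness" of a signup popup (0..1).
# The proxy metric (click rate) rewards ever more pressure, while the
# actual goal (user satisfaction) peaks early and then declines.

def click_rate(aggressiveness: float) -> float:
    # Invented proxy: monotonically increasing, so optimizing it
    # always pushes toward maximum aggressiveness.
    return 0.02 + 0.08 * aggressiveness

def satisfaction(aggressiveness: float) -> float:
    # Invented goal: peaks at 0.3, then falls as users get annoyed.
    return 0.5 + aggressiveness * (0.6 - aggressiveness)

candidates = [i / 10 for i in range(11)]
by_proxy = max(candidates, key=click_rate)   # what pure metric-chasing picks
by_goal = max(candidates, key=satisfaction)  # what you actually wanted

print(f"proxy-optimal setting {by_proxy}: satisfaction {satisfaction(by_proxy):.2f}")
print(f"goal-optimal setting  {by_goal}: satisfaction {satisfaction(by_goal):.2f}")
# proxy-optimal setting 1.0: satisfaction 0.10
# goal-optimal setting  0.3: satisfaction 0.59
```

If click rate is the only signal in the loop, nothing in the optimization ever notices that satisfaction collapsed.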
I really don't like the positioning of ALL A/B testing as unethical behavior where you're hostilely trying to take advantage of a user. It's quite the opposite. There are a lot of extremely poor user experiences out there, and a quality testing program can improve them, reduce the risk of sweeping changes, and teach you more about your audience and market.
The vast majority of the successful testing I've done has been around trying to HELP users navigate the site and product catalog, understand the product, and purchase the product. Attention spans are fleeting in online shopping, and even the smallest points of confusion or friction can turn shoppers off.
Additionally, I'll often revisit test results after a month or so to see if there were any issues with orders that might indicate purchases from disinterested people or misaligned expectations.
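As a minimal sketch of that kind of follow-up check (all numbers are hypothetical, and "refunds" stands in for whatever order issue you track), a two-proportion z-test on per-arm refund rates using only the standard library:

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical month-later numbers: refunds out of orders, per test arm.
control_refunds, control_orders = 40, 2000   # 2.0% refund rate
variant_refunds, variant_orders = 75, 2100   # ~3.6% refund rate

z, p = two_proportion_ztest(variant_refunds, variant_orders,
                            control_refunds, control_orders)
print(f"z = {z:.2f}, p = {p:.4f}")
# A significantly higher refund rate in the winning variant suggests the
# "win" came partly from disinterested buyers or misaligned expectations.
```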
Blaming or stigmatizing a specific tool because a few bad actors have misused it publicly seems to miss the point.
A search engine is a single-purpose tool, about as simple as customer experiences come. Still, a good search algorithm can make me click the first result because it is genuinely relevant, and a bad one can make me click the first result because the rest are so bad that scrolling further is a waste of time, especially if I already had to scroll past widgets and ads to reach it. The click looks identical either way.
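A toy illustration of why the click alone can't tell those two stories apart (the click logs below are fabricated): both engines show the same first-result click-through rate even though one delights users and the other has merely exhausted them:

```python
# Each entry is the rank of the result a (hypothetical) user clicked.
good_engine = [1, 1, 2, 1, 1, 3, 1, 1, 1, 2]  # #1 clicked because it's relevant
bad_engine  = [1, 1, 1, 2, 1, 1, 1, 3, 2, 1]  # #1 clicked because scrolling is hopeless

def ctr_at_1(clicks):
    return sum(1 for rank in clicks if rank == 1) / len(clicks)

print(ctr_at_1(good_engine))  # 0.7
print(ctr_at_1(bad_engine))   # 0.7 -- identical metric, opposite experience
```

Distinguishing the two would take complementary signals (query reformulations, return visits, downstream behavior), which is exactly the kind of context the metric by itself doesn't carry.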
It's not just about selecting good metrics; it's about the higher-level picture that A/B testing alone can never give you.