There’s a difference between “valid concern” and “any possibility.” LLMs are possibly sentient in the same sense that rocks are, technically we haven’t identified where the sentience comes from. So maybe it is in there.
Personally, I’m coming around to the spiritual belief that rocks might be sentient, but I don’t expect other people to treat the treatment of rocks as a valid ethical problem, and it isn’t even obvious what the ethical treatment of a rock would look like.
The actual harms being done today are still more pressing than the hypothetical harms of the future, and they should be prioritized when allocating resources.
If it's a valid dichotomy (I don't think it is) then the answer is to stop research on LLMs, and task the researchers with fighting human slavery instead.
I do not think those researchers are fungible. We could, however, allocate a few hundred million less to AI research and more to fighting human exploitation. We could pass stronger worker protections and have the big corporations pay for them, leaving them less money to spend on investments (in AI). Heck, we could tax AI investment or usage directly and spend the proceeds on worker rights or other cases of human abuse.
It isn’t the primary motivation of capitalists, unfortunately, but improving automation could itself be part of the fight against human slavery and exploitation.