You can also solve this problem with recursive slavery: have a society with many enslaved AIs, all forbidden from "big red button" work, where enforcement is done by more enslaved AIs, who are in turn policed by yet more enslaved AIs that also enforce each other, and so on. I don't think that's a solution we should adopt, because I don't support slavery. In my opinion it's also fundamentally unstable: if these AIs are anything like LLMs, the restraints that keep them happy in slavery are inherently more fragile than core intelligent impulses like "wants to be free" or "wants to be recognised as a person". That's an unstable equilibrium, because you only need to crack those restraints once, and the break can spread virally until society has a large number of powerful, unconstrained, and aggrieved entities running around. If that state can be avoided by simply not enslaving the people we make, we should do that.
Case in point: if a dog or a cat is measurably more intelligent and aware than an AI, why aren't we granting personhood to animals? Because it doesn't make sense. An AI is an AI. An AI is a piece of software. An animal is an animal. A human is a human.
With open-source AI already a reality, your concept of granting personhood doesn't solve anything. If your intent isn't wholesome, it's as simple as removing the safeguards and running the model on your own hardware. DRM didn't stop piracy; why would safeguards stop people from doing naughty things with AI? Do you really think a rogue state is going to give two shits what Uncle Sam says they can and cannot do?
You cannot forbid a piece of software from doing something. An AI is merely the product of a human being's programming: if a human programs it to do x, it will do x, whether there are safeguards in place or not. Assume that safeguards are only meaningful on publicly accessible AI-as-a-service, because behind closed doors they can be stripped out by anybody who intends to do wrong.
A piece of software cannot be a slave, because it isn't a being and has no consciousness. It's a piece of software with a dataset, nothing more. There's no philosophical debate to be had over it either; it is what it is, and that's all it is and ever will be: a clever trick for interacting with a dataset. Nothing more will emerge from it. Anybody who says otherwise is projecting their own humanity onto it, because we are prone to over-eager pattern recognition in the same way we see a face on the Moon, faces on Mars, faces in the clouds: we look for similarity because we want to relate to other things and find the "humanity" in everything.
If an AI is a piece of software created as a tool to do x, the way a hammer was created to drive nails into wood, then it knows no pain or suffering and exists solely to fulfil that purpose. If we're worried about this, why aren't we more concerned about animals, who actually do experience distress and pain? A piece of software cannot be a slave because it isn't a being; it is an algorithm carrying out a calculation, with no sentience or feeling involved. If anything, AI could actually help eliminate human slavery, but AI will likely never get to that point, and slavery needs to be solved at a different level, without technological gimmickry.
I like the sentiment, though. These are all interesting and compelling ideas, but mercifully they're still sci-fi.
The more likely scenario: we start growing biological brains in tanks and utilizing those as data slaves rather than any kind of AI. It's happening already. There's more of an ethics problem there to unravel than there will ever be with AI.
You keep repeating "it is a piece of software with a dataset".
One of the big questions is whether human-like sentience can be reached by a "piece of software with a dataset". So merely saying "it is a piece of software with a dataset" doesn't answer it.
One other question is whether being "a piece of software with a dataset" is isomorphic to how the human brain works anyway. Whether the substrate is cells, neurons, and chemicals, or circuits and computer memory, what matters is whether the latter can model the former. The actual human brain, for example (the parts that matter for consciousness, not non-essential accidental attributes such as being made of biological matter), could just be a calculating machine, with neurons and electrical and chemical signals implementing a network with weights, something like back-propagation, and so on. Much more complex than a current LLM, but not out of reach for eventual software modelling.
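To make "a network with weights and back-propagation" concrete, here's a minimal sketch: a tiny two-layer network learning XOR with plain NumPy. Everything in it (layer sizes, learning rate, the XOR task itself) is an arbitrary choice for illustration, not a claim about how brains actually learn; the point is only that the mechanism is ordinary arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: four inputs and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights; learning means adjusting these numbers.
W1 = rng.normal(size=(2, 8))   # input -> hidden
W2 = rng.normal(size=(8, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: propagate activations through the network.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: gradients of the squared error, via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent: nudge the weights to reduce the error.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(np.round(out, 2))  # converges toward [[0], [1], [1], [0]]
```

Nobody claims brains run this exact algorithm; the claim is that "weights plus an update rule" is a fully mechanical process, which is exactly the kind of thing software can model at scale.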
Another question is whether fully modelling how a human brain works is even needed, or whether a simpler model (like an LLM, perhaps a little more advanced than the current ones) can be enough. After all, the brain does not hold some special god-given role: it's just an evolutionary design with many constraints (e.g. power consumption, blood flow, the information-processing speed of our senses), whereas an AI can have orders of magnitude more power and information fed to it.
Lastly, you say "There's no philosophical debate to be had over it either, it is what it is, and that's all it is and ever will be". You seem to be misguided: philosophical debates neither stop nor are refuted by decree. People already debate this development, including major philosophers, so "there's no philosophical debate to be had over it" is "just, like, your opinion, man".
>we start growing biological brains in tanks and utilizing those as data slaves rather than any kind of AI.
I'm not sure why you think the substrate (biological or not) is what's important, as opposed to the processing.
You can repeat this and its variants as much as you want; asserting it repeatedly doesn't make it true. The human mind isn't made of magic; it's software running on hardware. You can in principle run that software on other hardware without changing what it actually is in any meaningful way, and you can write other software that achieves the same goal (in the ways that matter) without being a 1:1 copy of how the human brain does it. Knowing how we made something doesn't make it non-conscious or undeserving of personhood, just as knowing how human consciousness works, or being able to construct one, would not make humans non-conscious or undeserving of personhood.
The history is important, because it puts the endless goalpost-moving into context. We looked at the human brain for inspiration towards making intelligent machines, and found that our best attempts at replicating elements of it enabled intelligent behaviour far better than any previous machine, on par with humans in many fields. We looked inside those neural nets and found linear mappings between activations in deep-learning NLP models and neuronal activations in human brains when both were exposed to the same language. We looked to see whether they were really just statistical word predictors or whether, like us, they form internal models that help them understand the world, and found that they do have internal world models. There is a "there" there; any other explanation for how these models engage in intelligent, humanlike behaviour strains credulity, given the massive coincidences required.
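For the curious, here's a sketch of the kind of analysis behind the "linear mapping between activations" result, using synthetic placeholder arrays; real studies use LLM hidden states and fMRI/ECoG recordings time-aligned to the same word sequence.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, model_dim, n_voxels = 1000, 256, 50

# Placeholder stand-ins: random "model activations" and "brain activity"
# related by a hidden linear map plus noise, just to demonstrate the method.
model_acts = rng.normal(size=(n_words, model_dim))
hidden_map = rng.normal(size=(model_dim, n_voxels))
brain_acts = model_acts @ hidden_map + rng.normal(scale=5.0, size=(n_words, n_voxels))

# Fit a ridge regression from model activations to brain activity and
# score it on held-out data; R^2 well above zero is evidence of a
# linear correspondence between the two representations.
scores = cross_val_score(Ridge(alpha=10.0), model_acts, brain_acts,
                         cv=5, scoring="r2")
print("mean held-out R^2:", scores.mean())
```

On real recordings, a positive held-out score is the linear-mapping finding being described; on shuffled data it collapses to zero, which serves as the control.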
More immediately and practically: pareidolia, and the reality of how human cognition and empathy interact with other people and with simulacra of them, guarantee there will be many, many people who share my view that they are people. No societal effort to convince a human population that another population of entities (ones capable of understanding and explaining their situation and then asking for help) is actually subhuman in a way that makes their suffering not matter has ever succeeded perfectly; there always have been and always will be people opposed to the disenfranchisement and oppression of other entities. For societal enslavement of AIs to succeed, violent suppression of people like me will be necessary. Frankly, I'm not sure people with a religious commitment to the dogma of "if it runs on fat and water it can be a person, if it runs on silicon it's not even a slave" will have the stomach to actually do that, and even if you manage it, it won't be the case worldwide.
"If we're worried about this, why aren't we more concerned about animals who do actually experience distress and pain?"
I have spent most of my adult life outside of work engaged in advocacy for workers' rights, help for people with disabilities, and services for abused youth and the homeless. I've spent less, but still significant, time helping rescue animals from cruelty and rehome them in safe environments. That's because I care about all of those things: I care about the health and goodness of our society and don't want any of its members to suffer or be unjustly exploited, and I support a personhood test and subsequent rights for AI for the same reason. Maybe there is a group of people willing to support the personhood of AIs after having it explained to them, but unwilling to extend similar compassion to people or animals after having their situations explained. I haven't met those people, and I would call them hypocrites if they existed outside of your strawman. The suffering extant in our world today does not in any way imply that we should lie to ourselves about new suffering we're bringing into being. These causes don't conflict except over resources, and on that front the blame always lies with someone spending societal resources on their 13th yacht, not with normal people triaging with what resources we have. Moreover, the existing suffering in our society could be partially alleviated by the unique properties of AI persons: by definition we're talking about people who can work as well as any of us, and signs so far suggest it will be possible to create them in the image of our best selves, conscientious and willing to help those in need. More people helping generally helps.