That's not what I meant. What you describe is deliberate engineering, not something that falls out accidentally from training on larger and larger datasets, which some people think will produce digital consciousness or something similar through "emergence".
It is almost certain that the next stage of intelligence will be digital. But it is very foolish and unnecessary to try to speed that along.
It is likely that we have a century or two max left in control of the planet, regardless of what we do. On some level I agree that totally suppressing it indefinitely would be a shame.
When I said "living" I meant digital life: the traits you describe, plus others like control-seeking, self-preservation, and reproduction, all of which are central to living beings.
The problem is that AI will soon think 100 or more times faster than humans. This expectation is based on the history of gains in computing efficiency and the fact that we are now optimizing a very specific class of system (LLMs). Humans will not be able to keep up in any way.
This is not luddism. I run a service that connects GPT-4 to internet-connected Linux VMs so it can install or write software. I think this technology is great and has a lot of positive potential.
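For concreteness, the core of such a service is a loop that executes model-suggested shell commands and feeds the output back. This is a minimal, hypothetical sketch: the function names are mine, the model call is stubbed out, and a real service would dispatch commands to an isolated VM rather than running them locally.

```python
import subprocess

def run_model_command(command: str, timeout: int = 30) -> str:
    """Run a model-suggested shell command and capture its output.

    For illustration this runs locally; a real service would execute
    inside a sandboxed VM (e.g. over SSH) to contain side effects.
    """
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout

def agent_step(history: list[str], next_command: str) -> list[str]:
    """One turn of the loop: execute the command the model proposed,
    then append the transcript so it can be fed back to the model.
    (The actual model call is omitted here.)"""
    output = run_model_command(next_command)
    history.append(f"$ {next_command}\n{output}")
    return history
```

The point of the sandbox boundary is exactly the concern above: the system acts in the world only through a channel you control.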
But when you deliberately try to emulate animals (like humans) and combine that with hyperspeed and other superintelligent characteristics, you are essentially approaching suicide, or at least abdicating all responsibility for your environment. There is no way to prevent such a thing from making all of your decisions for you.
The speed difference will be incredible. Imagine a bullet-time scene where everyone seems to be moving in extreme slow motion. Now multiply that slowdown by ten, so that they seem completely frozen.
This level of performance is coming in five years or less.
While I don't want to suppress the evolution of intelligent life in our corner of the universe, I am also not ready to join a death cult. Especially not accidentally.