Though it's possible the people who think a theoretical future AI will turn the planet into paperclips have merely forgotten that perpetual motion machines aren't possible.
Part of such precautionary planning involves asking how easily such an accident could happen. There certainly isn't consensus on that at the moment, but the underlying philosophy very clearly favors a cautious approach.
Most people are used to thinking about established science that follows expected rules, or incremental advances with no serious practical consequences. But this isn't that. There is good reason to think we're approaching a step-change in our capability to shape the world, and even a strong suspicion of this warrants serious defensive measures. Crucially for this particular discussion, that is exactly what OP favors.
There will necessarily be a broad spectrum of opinions on how to handle this, both in the central judgement and in how palatably the opinion is presented. Using a dismissive moniker like 'religious' for a whole segment of that spectrum doesn't do justice to the arguments.
Present a counterargument if you feel strongly about it, and see whether that will stand on its own merit.
> Present a counterargument if you feel strongly about it, and see whether that will stand on its own merit.
This is a bad way to talk to rationalists, because it's what they think solves everything, and it's the reason they're convinced an AI is going to enslave them. As long as you're actually right, saying "no, that's dumb and not worth worrying about" is superior to logical arguments about things you can't have logical arguments about (because the future holds unenumerable "unknown unknowns"). This is called "metarationality".
e.g. Someone could decide to kill you because they don't like one of your posts (1). Is there any finite amount of work you could do to stop this? No (2). Should you worry about this? No (3).
You can't logically prove the 2->3 step, nor can you calculate the probability of it being a problem, but it still doesn't seem to be a problem.
(Keep in mind that biological machines, i.e. life, have managed to turn the surface of the planet into 'green goo'.)
None of them has replaced the entire planet, though. That's a lot of rock to digest without any extra energy to help you do it.
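To put "a lot of rock" in numbers, here's a back-of-envelope sketch (the ~550 Gt of biomass carbon is a commonly cited estimate; doubling it to approximate total mass is my own crude assumption):

```python
# Rough scale check: what fraction of the planet's mass has life converted?
# ~550 Gt carbon is a commonly cited biomass estimate; doubling it to
# approximate total dry mass is a crude assumption for this sketch.

biomass_carbon_kg = 550e9 * 1000        # ~550 gigatonnes of carbon, in kg
total_biomass_kg = 2 * biomass_carbon_kg
earth_mass_kg = 5.97e24                 # mass of the Earth

ratio = total_biomass_kg / earth_mass_kg
print(f"biosphere / planet mass ratio: {ratio:.0e}")   # ~2e-10
```

Even wildly successful green goo has processed only about one part in ten billion of the planet's mass.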
And a paperclip factory isn't self-reproducing (that would be a paperclip factory factory). It's just a regular machine that can break down. The people afraid of that one are imagining a perfect non-breaking-down non-energy-requiring machine because they've accidentally joined a religion.
All that oxygen in the atmosphere comes from plants (well, from photosynthesis), though.
Yes, life has so far only covered the top of the planet. You are right that a paper clip maximizer would need quite a bit of time to go deeper than life has gone (if it would get there at all).
> And a paperclip factory isn't self-reproducing [...]
Why wouldn't it? If your hypothetical superhuman AGI determined that becoming self-reproducing would be the right thing to do, presumably it would do that.
No perfection required for that. Biological machines aren't perfect either. Just good enough.
You are right that thermodynamics puts a limit on how fast anything can transform the planet into paperclips or grey goo.
Though the limit is probably mostly about waste heat, not necessarily about available energy:
There's enough hydrogen around that an AGI that figured out nuclear fusion would have all the energy it needs. But on a planet-wide basis, there's no way to dissipate waste heat faster than by radiating it into space.
(Assuming currently known physics, but allowing for advances in technology and engineering.)
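For a ballpark on that radiation limit, here's a minimal sketch using the Stefan-Boltzmann law, assuming a uniform blackbody at roughly the mean surface temperature and ignoring greenhouse details:

```python
import math

# Stefan-Boltzmann estimate of how fast Earth can shed heat by radiation.
# Simplification: treat Earth as a uniform blackbody at ~288 K.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W / (m^2 * K^4)
R_EARTH = 6.371e6    # Earth's radius, m
T_MEAN = 288.0       # mean surface temperature, K

surface_area = 4 * math.pi * R_EARTH ** 2           # ~5.1e14 m^2
radiated_power = SIGMA * surface_area * T_MEAN ** 4
print(f"radiated power: {radiated_power:.1e} W")    # ~2.0e17 W
```

That ~2e17 W is the same order of magnitude as absorbed sunlight, so any process dissipating a meaningful fraction of it heats the planet until the surface warms enough to radiate the excess (and radiated power only grows as T^4).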
---
Of course, when we worry about paperclip maximisers, it's bad enough when they turn the whole biosphere into paperclips. Noticing that they'll have a hard time turning the rest of the earth into paperclips would be scant consolation for humanity.
(But the thermodynamic limits on waste heat still apply even when just turning the biosphere into paperclips.)
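To get a feel for how tightly that cap binds at biosphere scale, here's a deliberately crude estimate; every number in it is an illustrative assumption rather than a sourced figure:

```python
# Lower bound on biosphere-to-paperclip conversion time under the
# waste-heat cap. All inputs are order-of-magnitude assumptions.

biomass_kg = 1.1e15       # total biomass, from the earlier rough estimate
energy_per_kg_j = 1e7     # assume ~10 MJ/kg to break down and restructure
radiative_cap_w = 2e17    # Earth's blackbody output, from the earlier sketch

hours = biomass_kg * energy_per_kg_j / radiative_cap_w / 3600
print(f"thermodynamic lower bound: {hours:.0f} hours")   # ~15 hours
```

If those assumptions are anywhere near right, waste heat is a real constraint at planetary scale but a loose one at biosphere scale, which only reinforces the point above.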
This seems an odd refutation, for a couple of reasons.
First, the paperclip AI might determine that self-reproducing factories would be an optimisation, and aim to achieve that by any means necessary.
Second, a single paperclip factory that doesn't reproduce might still develop the means of bringing raw materials to it.
Either way, an all-consuming paperclip AI emerges.
In general, I find the equating of the paperclip problem with a religious cult to be naive.
Your analogy is weak and also false: viruses can't self-reproduce on their own; they have to hijack a host cell's protein-synthesis machinery.