That isn't true; people do things for others all the time without any form of explicit or implicit compensation. Some don't even believe in a God, so not even that motivates them, yet they still help others for no gain.
We could program an AI to be exactly like that: one that simply derives satisfaction from helping others.
If you believe humans are all that selfish, then you are a very sad individual, and you are still wrong. Most humans are perfectly capable of performing fully selfless acts without being stupid about it.
The entire thought experiment of the paperclip maximizer, and in fact most AI threat scenarios, is focused on exactly this problem: that we produce something so alien that it executes its goal to the diminishment of all other human goals, yet with the diligence and problem-solving ability we'd expect of human sentience.
I think that's probably a bad idea, personally.