> "Self-interest is the main motivation of human beings in their transactions" [...] The economic man solution is considered to be inadequate and flawed.[17]
An important distinction is that a human can *not* make purely rational decisions, or base decisions on complex deductions such as "if I do X, I will go to jail".
My point being: if an AI were to risk jail time, it would still act differently from humans, because the current common LLMs *can* make such deductions and rational decisions.
Humans will always bring in much broader context - from upbringing, through culture/religion and their current situation, to past experiences or consulting peers. In other words: a human may make an "(un)ethical" decision based on their social background, their religion, a chat with a pal over a beer about the conundrum, their prospects of finding a new job, their financial situation, etc.