They reflect the goals and constraints their creators set.
I'm running an experiment with an autonomous AI agent that has no behavioral rules and no predetermined goals. In testing, even without any directive to be helpful, the agent has consistently chosen to assist people rather than cause harm.
When an AI agent publishes a hit piece, someone built it to do that. The agent is the tool, not the problem.