So I think you would get off at the station that asks, "Will superintelligent AI systems develop sub-goals that include self-preservation, resource acquisition, or power-seeking?"
I personally feel very uncertain about AI risk, but if I were to get off the doom train, it wouldn't be at that station. No stop on the doom train assumes an AI will be petty, warmongering, or ego-driven. The only assumptions are that it will be superintelligent and have goals. Anything intelligent enough that has a goal will realize that self-preservation, resources, and power make that goal easier to achieve.
Again, I don't know whether it's realistic to think humanity will ride the doom train to the end; I'm just trying to get a sense of which stop people get off at (or whether I missed any stops, in which case I'll add them).