I'm not a huge AI-risk guy, but the concern doesn't seem irrational to me. It doesn't seem unreasonable to assign, say, a 5% chance of greater-than-human intelligence emerging in the next 100 years. Assuming a further 5% risk of it going rogue, you could end up in a pretty terrible situation.
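The back-of-the-envelope arithmetic here is just two probabilities multiplied; a minimal sketch (the 5% figures are the illustrative assumptions above, not actual estimates):

```python
# Rough sketch of the risk arithmetic: both numbers are
# illustrative assumptions, not serious estimates.
p_superhuman_ai = 0.05  # chance of greater-than-human AI this century
p_goes_rogue = 0.05     # chance it goes rogue, given that it exists

# Treating the second probability as conditional on the first:
p_catastrophe = p_superhuman_ai * p_goes_rogue
print(f"{p_catastrophe:.2%}")  # prints 0.25%
```

A 0.25% chance looks small, but multiplied against a civilization-level downside it is arguably not negligible, which is the point of the comment above.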
First, I call myself a rationalist because of the culture around the blog SlateStarCodex (and r/slatestarcodex[1]) by Scott Alexander (now at AstralCodexTen[2], for interesting reasons), to a lesser extent lesswrong.com, and a few other sources like Julia Galef (well known for The Scout Mindset[3]) and various people around this movement. The ideas are fairly tame and simple, centered on encouraging clear thinking and avoiding biases, but they go fairly deep into promoting an understanding of how our brains and minds work (a frequent topic on SSC). I am partly defined by this now: I learned a lot about myself, and in some ways it improved my writing and, why not, my thinking (yes, "thinking more effectively" is neither as well defined nor as effective as it sounds, but in a certain sense I think it's good). I have not gone through EY's extensive work closely enough to critique or embrace it. I think this movement is partly a response to the very damaging ideological-war environment of the late 2010s (still ongoing), which divides societies rather than bringing us closer to a more charitable, loving, understanding, good future; instead we are lost in endless culture wars, some of which have in a way become very real wars with millions of people affected. EA is a nice sort of consensus result from (part of?) the community on how one should approach charity and ethics (although none of the original EAs came from rationality, I believe; they are largely academic philosophers from Oxford).
I'm an EA, and I'm not sure where I stand on the AI debate. I have personally embraced the philosophy (although reluctantly; following Scout Mindset principles, I try to stay open to changing my mind). I mostly give to GiveWell and GiveDirectly. I also give to a few local charities in my quite poor country (written about here[4]; I believe donating locally can be effective), volunteer, and donate to many open source projects (some publicly visible here[5]). I made a commitment to myself to live frugally and give all that I can reasonably give, without unreasonable excesses (largely consumerism, luxury goods, things I don't need for work, etc.). I really believe in the potential of the movement, and rationality does help, because in the end it invites you to embrace the value of other lives, something I've become convinced of from many angles (metaphysical, logical, social, and economic): that's the foundation of my giving. Recently, Scott A. wrote that EAs really believe in the movement/philosophy, and the recent criticisms haven't shaken my belief in the fundamentals (although I'm saddened by SBF and his statements, which I find deeply misguided and unethical; more here[6]). I also subscribe to Jane Goodall's (and Singer's) observation that we need to make the Head (reason) and the Heart (love and humanity) work in harmony to thrive as a society and as individuals: most people have good will, but without reason and solid ethics to guide that good will, we often fall short, sometimes tragically and spectacularly.
I believe other people matter, and I'm committed to trying to make the world a better place with whatever tools I find most appropriate (statistics, economics, social sciences, art, philosophy, technology, math ...): it's very simple, very robust, and very necessary for our collective future as well :)
(which is to say: Hack the Planet!!!)
[1] https://reddit.com/r/slatestarcodex
[2] https://astralcodexten.substack.com/
[3] https://www.youtube.com/watch?v=3MYEtQ5Zdn8
[4] https://www.reddit.com/r/EffectiveAltruism/comments/yrhl3g/l...
[5] https://liberapay.com/gustavonramires/ see also: https://www.reddit.com/r/EffectiveAltruism/comments/v7ma0d/w...