FWIW I have many years of experience developing in RStudio full-time, and while I've definitely had a few hard-to-explain crashes, I'd characterize it as very reliable overall. When problems do arise, they tend to come from community-contributed packages, especially packages that call out to C++. (My name is on the bug fix log for some major packages.)
Unintentional and unnecessary creation of huge, memory-hogging objects is a closely related footgun. Packages are often not built with large data in mind and make choices that scale terribly, such as storing multiple copies of the data inside the model object, or materializing enormous dense matrices to represent the model's term structure. It's a legacy of the academic statistics culture R grew out of: most researchers test their fancy new method on a tiny dataset, write a paper, and call it a day.
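A rough sketch of what I mean, in base R plus the Matrix package (this illustrates the general pattern, not any one package's internals; the sizes in the comments are approximate, and the dense matrix below needs close to a gigabyte of free RAM to allocate):

    # lm() keeps several n-sized copies of the data in the fitted object:
    # the model frame, residuals, fitted values, effects, and a QR
    # decomposition of the design matrix.
    n  <- 1e6
    df <- data.frame(y = rnorm(n), x = rnorm(n))
    fit <- lm(y ~ x, data = df)
    print(object.size(df),  units = "MB")  # input data: ~15 MB
    print(object.size(fit), units = "MB")  # several times the input

    # A dense design matrix for a 1000-level factor stores mostly zeros
    # (~0.8 GB here); the sparse representation of the same term
    # structure is a few MB. Assumes the Matrix package is installed.
    f <- factor(sample(1000, 1e5, replace = TRUE))
    dense  <- model.matrix(~ f)
    sparse <- Matrix::sparse.model.matrix(~ f)
    print(object.size(dense),  units = "MB")
    print(object.size(sparse), units = "MB")

Most of the time none of this matters, which is exactly why it ships: on a thousand-row dataset the waste is invisible, and it only bites once you scale up.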
No argument about the debugging experience. I find it very slow, especially with large datasets, and try to avoid it. I don't have much experience with reproducible R builds, but I wouldn't be surprised if they were a pain.