If you're building a financial ledger, you absolutely must use serializable isolation. If you're building a Twitter clone, sure, use a weaker level and take the performance win.
I'd make the case that serializable should be the recommended default, dropping to something weaker only when you have a concrete reason to believe it's safe, rather than defaulting to better-performing-but-unsafe. The concurrency anomalies you hit when you needed Serializable but ran under Read Committed are really, really hard to reproduce, debug, and diagnose.
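For what it's worth, in PostgreSQL (just one example; other databases have their own knobs) you can actually make serializable the default rather than opting into it per transaction:

    -- Per session:
    SET default_transaction_isolation TO 'serializable';

    -- Cluster-wide (requires superuser; picked up on config reload):
    ALTER SYSTEM SET default_transaction_isolation = 'serializable';
    SELECT pg_reload_conf();

    -- Individual hot-path transactions can still opt back down where
    -- you've convinced yourself weaker guarantees are acceptable:
    BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
    -- ... queries that tolerate weaker isolation ...
    COMMIT;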
Serializable (or serializable snapshot isolation) is stronger: it rules out anomalies such as write skew. But it's also a lot more expensive, because to prevent phantom reads the database has to track the predicates your queries matched, so it can detect a concurrent write to a row that would have matched, rather than just tracking the specific rows the query actually returned in this particular transaction.
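To make write skew concrete, here's the classic on-call-doctors example. The table and names are made up; the syntax is PostgreSQL, where REPEATABLE READ is implemented as snapshot isolation:

    -- Assumed schema: doctors(name text primary key, on_call boolean).
    -- Invariant the application wants: at least one doctor stays on call.
    -- Current state: alice and bob are both on call.

    -- Session A:
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- snapshot isolation
    SELECT count(*) FROM doctors WHERE on_call;  -- sees 2, so leaving looks safe
    UPDATE doctors SET on_call = false WHERE name = 'alice';
    COMMIT;

    -- Session B, interleaved, runs the same check and removes 'bob'.
    -- Neither UPDATE touches a row the other wrote, so snapshot isolation
    -- happily commits both, and now nobody is on call. Under SERIALIZABLE,
    -- PostgreSQL's SSI would abort one of them with a serialization_failure.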
Also worth noting that some DBs actually lie about implementing serializable: Oracle's SERIALIZABLE level, for instance, only gives you snapshot isolation. Worth keeping that in mind and taking explicit locks where necessary.
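If you're stuck on such a database, the standard workaround is to materialize the conflict with an explicit lock, e.g. SELECT ... FOR UPDATE. A sketch, reusing the hypothetical doctors table from above:

    BEGIN;
    -- Lock every row the invariant depends on, not just the one we change.
    SELECT * FROM doctors WHERE on_call FOR UPDATE;
    -- A concurrent transaction issuing the same FOR UPDATE now blocks (or,
    -- on Oracle's "serializable", fails with a can't-serialize error) until
    -- we commit, so it can no longer decide based on stale data.
    UPDATE doctors SET on_call = false WHERE name = 'alice';
    COMMIT;

One caveat: row locks don't stop a concurrent INSERT of a new matching row, so if your invariant is phantom-sensitive you may need a coarser lock on top.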