(2) True, it works better if you add continuations.
My personal favorite concurrency formalism is the Chemical Abstract Machine[1]. The γ-calculus is a neat extension that preserves the spirit of the lambda calculus while also permitting (purely non-deterministic) concurrency.
[0]: http://en.wikipedia.org/wiki/Fully_abstract#Abstraction
[1]: http://www.lix.polytechnique.fr/~fvalenci/papers/cham.pdf
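The CHAM idea is easy to play with: a "solution" is a multiset of molecules, and reaction rules non-deterministically rewrite pairs of molecules until the solution is inert. Here's a minimal sketch (my own toy encoding, not from the paper) with a single rule that fuses any two numbers into their sum; note the result is the same no matter which reactions fire in which order:

```python
import random

def cham_run(solution, rules, rng=random.Random(0)):
    """Fire randomly chosen applicable reactions until the solution is inert.

    solution: a list used as a multiset of molecules.
    rules: list of (match, react) pairs; match tests whether two molecules
    can react, react returns the list of product molecules.
    """
    solution = list(solution)
    while True:
        # Collect every reacting pair for every rule.
        candidates = []
        for ri, (match, _) in enumerate(rules):
            for i in range(len(solution)):
                for j in range(len(solution)):
                    if i != j and match(solution[i], solution[j]):
                        candidates.append((ri, i, j))
        if not candidates:
            return solution  # inert: no rule applies anywhere
        ri, i, j = rng.choice(candidates)  # non-deterministic scheduling
        _, react = rules[ri]
        a, b = solution[i], solution[j]
        # Multiset rewriting: remove reactants, add products.
        for k in sorted((i, j), reverse=True):
            del solution[k]
        solution.extend(react(a, b))

# One rule: any two numbers react into their sum.
rules = [(lambda a, b: True, lambda a, b: [a + b])]
print(cham_run([1, 2, 3, 4], rules))  # [10], whatever the reaction order
```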
Adding timing is straightforward for any model of computation with an operational semantics. The addition generally changes the semantics dramatically (e.g. which equalities hold). There are many timed versions of process calculi, e.g. timed CCS, timed CSP, and the timed pi-calculus.
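A tiny illustration of how timing changes which equalities hold (using trace equivalence as a simple stand-in for the finer equivalences in those calculi; the encoding is my own sketch): two sequential processes with identical untimed traces are separated once delays become observable.

```python
# A toy sequential process: a list of actions, where ("tick", d) is a
# time delay of d units and ("act", name) is a visible action.
def timed_trace(proc):
    """Timed trace: each visible action paired with its absolute time."""
    now, trace = 0, []
    for kind, x in proc:
        if kind == "tick":
            now += x          # time passes silently
        else:
            trace.append((x, now))
    return trace

def untimed_trace(proc):
    """Forget the times: the classical, untimed observation."""
    return [name for name, _ in timed_trace(proc)]

p = [("act", "a"), ("act", "b")]
q = [("act", "a"), ("tick", 1), ("act", "b")]

print(untimed_trace(p) == untimed_trace(q))  # True: equal without time
print(timed_trace(p) == timed_trace(q))      # False: timing separates them
```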
Regarding (2), I'm afraid I also disagree. While continuations can express classical logic in some sense, each such calculus generally captures only certain cut-elimination strategies (e.g. call-by-value, as in the Ong-Stewart lambda-mu-calculus); I don't think we yet know how to do all of classical logic. Moreover, adding jumps (which is what continuations are) to the lambda-calculus yields an extremely complicated and hard-to-understand system (e.g. the reduction semantics of the lambda-mu-calculus is usually only handwaved), whereas writing the very same semantics in the pi-calculus is trivial and extremely transparent (see "Control in the Pi-Calculus" by Honda et al.). Finally, I think it's conceptually deeply confused to 'add' continuations to the lambda-calculus. A much cleaner picture emerges once one sees that the call-return jumping discipline engendered by the function calls of the lambda-calculus is a special case of the general jumping that continuations can do. So one should really start with a continuation calculus, and then view the lambda-calculus as a strict sub-calculus of especially disciplined jumps (call-return with affine usage of the return jump).
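The "call-return is just a disciplined jump" point can be made concrete in continuation-passing style (a sketch of mine, with call/cc written out by hand rather than any library primitive): every call carries its return continuation explicitly, and call/cc merely hands that continuation to the program as a first-class jump target.

```python
def call_cc(f, k):
    """call/cc in continuation-passing style: give f the current
    continuation k, reified as a function that *jumps* by invoking k
    and discarding whatever continuation it was handed."""
    return f(lambda v, _ignored_k: k(v), k)

def product(xs, k):
    """Multiply xs in CPS; on a 0, jump straight past all pending
    multiplications instead of returning through each call frame."""
    def body(escape, k):
        def go(xs, k):
            if not xs:
                return k(1)
            if xs[0] == 0:
                return escape(0, k)  # the jump: abandon the pending work
            return go(xs[1:], lambda r: k(xs[0] * r))
        return go(xs, k)
    return call_cc(body, k)

print(product([1, 2, 3], lambda v: v))  # 6, via ordinary call-return
print(product([1, 0, 3], lambda v: v))  # 0, via the escape jump
```

Plain functions use each return continuation exactly once and in order (the affine call-return discipline); `escape` is the undisciplined general jump that the lambda-calculus fragment forbids.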