That is, even if we follow the current trend of adopting more ideas from functional programming into mainstream programming languages, I doubt we will ever completely remove mutable state (which is what I understand Rich to mean by “place-oriented programming”) or events that must happen in a certain order.
Instead, I think we will learn to control these aspects of our programs better. When we model time-dependent things, we want well-specified behaviour based on a clean underlying model, so we can easily understand what our code will do. Today, we have functions and variables, and we have type systems that can stop us passing the colour orange into a function eat(food). Tomorrow, I think we’ll promote some of these time-related ideas to first-class entities in our programming languages too, and we’ll have rules to stop us doing time-dependent things without specifying valid relationships to other time-dependent things. Some of the ideas in that second talk you linked to, like recognising that we’re often modelling a process, are very much what I’m talking about here.
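To make that concrete, here’s a tiny sketch in Haskell (only because Haskell comes up below; the types Food, Colour, Closed and Open are all invented for illustration). The first half is the eat(food) example we already have today; the second half applies the same trick to an ordering constraint:

```haskell
-- Today: the type system rejects a Colour where Food is expected.
data Food   = Apple | Toast deriving Show
data Colour = Orange | Blue deriving Show

eat :: Food -> String
eat f = "ate " ++ show f
-- eat Orange  -- compile error: Orange is a Colour, not a Food

-- Tomorrow, perhaps: the same trick applied to ordering. A connection
-- must be opened before you can send on it, and the types enforce it.
newtype Closed = Closed String
newtype Open   = Open String

openConn :: Closed -> IO Open
openConn (Closed host) = pure (Open host)  -- stand-in for real I/O

send :: Open -> String -> IO ()
send (Open host) msg = putStrLn (host ++ " <- " ++ msg)

main :: IO ()
main = do
  conn <- openConn (Closed "example.com")
  send conn "hello"
  -- send (Closed "example.com") "hi"  -- rejected: use before open
```

Of course, plain types like these won’t stop you holding on to a stale Open value; enforcing that fully needs something like linear types, which is exactly the kind of first-class support I have in mind.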
As an aside, it’s possible that instead of adding first-class entities for things like effects, we will develop some really flexible first-class concepts that let us implement effects in library code, as just another second-class citizen. However, given the experience to date with monads in Haskell and with Lisps in general, I’m doubtful that anything short of first-class language support is going to cut it for a mainstream audience. It seems that for new programming styles to achieve mainstream acceptance, some concepts have to be special.
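For what it’s worth, this is roughly what the effects-as-a-library route looks like in Haskell today: a toy logging effect (Logger is my own invented example, though the pattern is the standard one) built entirely in user code, with the monad machinery as the only language-level hook:

```haskell
-- A user-defined "effect": a computation that also accumulates log lines.
newtype Logger a = Logger { runLogger :: (a, [String]) }

instance Functor Logger where
  fmap f (Logger (a, w)) = Logger (f a, w)

instance Applicative Logger where
  pure a = Logger (a, [])
  Logger (f, w1) <*> Logger (a, w2) = Logger (f a, w1 ++ w2)

instance Monad Logger where
  Logger (a, w) >>= k = let Logger (b, w') = k a
                        in Logger (b, w ++ w')

logMsg :: String -> Logger ()
logMsg s = Logger ((), [s])

demo :: Logger Int
demo = do
  logMsg "starting"
  logMsg "done"
  pure (21 * 2)

main :: IO ()
main = print (runLogger demo)  -- (42,["starting","done"])
```

It works, but asking a mainstream audience to assemble their effects out of instance declarations like these is a hard sell, which is the experience I’m pointing at.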
In any case, my hope is that if we make time-related ideas explicit when we care about them, then when we don’t need to keep track of time, we needn’t clutter our designs/code with unnecessary details. That contrasts with typical imperative programming today, where you’re always effectively specifying timing and order of execution whether you actually care about them or not. Meanwhile, when it comes to things like concurrency and resource management, the underlying models of how things interact usually aren’t very powerful, and they let many classes of timing/synchronisation bug get into production.
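Here’s a small sketch of that contrast (again a toy example of my own): the imperative version pins down an execution order between two reads that don’t depend on each other, while the pure version leaves ordering entirely unspecified:

```haskell
import Data.IORef

-- Imperative: the do-block sequences the two reads even though neither
-- depends on the other; order is specified whether we care or not.
sumRefs :: IORef Int -> IORef Int -> IO Int
sumRefs rx ry = do
  x <- readIORef rx
  y <- readIORef ry
  pure (x + y)

-- Pure: no order is specified at all; how the arguments get
-- evaluated is entirely the compiler's business.
sumPure :: Int -> Int -> Int
sumPure x y = x + y

main :: IO ()
main = do
  rx <- newIORef 1
  ry <- newIORef 2
  print =<< sumRefs rx ry
  print (sumPure 1 2)
```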