I'm not sure if we're actually debating anything at this point...
A cellular automaton or other grid simulation is modular, which means the state space for any one cell is relatively small. So, you might admit that the whole thing is doing computation without caring how the cell transition function is implemented?
But lookup tables are at one extreme, and there are plenty of other design techniques for arbitrary functional computation that avoid conditional branching. At the digital design level, you can think of a continuum of logic where ROMs, mux/demux, or even ALU operations all have elements that can be seen as either lookup tables or computation. The difference lies in the observer's mental model more than in the actual gate structures.
At the software level, many instructions can be used for demultiplexing. So, you can compute two different potential values, e.g. in separate registers, and select between the results without branching. You could use a conditional-move instruction or some kind of swizzle/pack instruction, but even without any of that you could compute a binary test result, sign-extend it to full register width, and use bitwise logical operations to combine both registers while effectively selecting one or the other. Whether that is computation or conditional selection is in the eye of the beholder.
This is a common way to transform some code paths for vectorization, e.g. to generate one common SIMD instruction stream that has the effect of performing different optional computations in each lane, while actually running the same instructions on all lanes.