let me preface this with the disclaimer that i am far from an expert on the topic
probably numerous eddies in natural turbulent fluid flows have been, in effect, digital turing-complete computers, given what we now know about the complexity of turbulence and about how simple turing-complete behavior can be. but is there an objective, rather than subjective, way to define this? how complicated are our input-preparation and output-interpretation procedures allowed to be? if there is no limit, then any stone or grain of sand will appear to be turing-complete
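to make that worry concrete, here's a toy python sketch (the names `interpret_stone` and `stone` are mine, purely illustrative): if the output-interpretation procedure may be arbitrarily complicated, it can simply perform the computation itself and attribute the result to the object

```python
# if there is no complexity bound on output interpretation, the
# "decoding" step can do all the computational work itself, and any
# inert object will appear to compute every computable function

def interpret_stone(stone_state, f, x):
    # nothing about stone_state is consulted; the interpretation
    # smuggles the entire computation into itself
    return f(x)

stone = object()  # a stateless stand-in for a stone or grain of sand
print(interpret_stone(stone, lambda n: n * n, 12))  # 144, "computed" by the stone
```

presumably any objective definition has to bound the interpretation step somehow, e.g. to something computationally weaker than the system being credited with the computation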
a quibble: the eniac was eventually converted to stored-program operation, but not, as i understand it, until 1948, a couple of years into its service life; and the ias machine (of which rand's johnniac was a later copy) didn't come online until 1951
another interesting question there is how much human intervention we permit; the ias machine and the eniac were constantly breaking down and needing repairs, after all, and wouldn't have been capable of much computation without constant human attention. suppose there is a traditional card game in which players can use arbitrarily large numbers. if the players decide to simulate minsky's two-counter machine (sketched below), surely the players are turing-complete; is the game? are the previous games also turing-complete, the ones in which they did not make that decision? does it matter if there happens to be a particular state of the cards that obligates them to simulate a two-counter machine?
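for concreteness, here is a minimal sketch of the machine the players would be simulating, assuming the standard minsky instruction set (increment; decrement-or-branch-on-zero); the tuple encoding of instructions is my own. two counters suffice for universality given a suitably encoded input, which is why arbitrarily large numbers in the game are enough

```python
# minimal two-counter (minsky) machine interpreter; a counter is just an
# unbounded non-negative integer, like a pile of cards of unbounded size

def run(program, c0=0, c1=0, max_steps=1_000_000):
    """program is a list indexed by pc; each instruction is one of:
       ('inc', r, nxt)            -- counters[r] += 1, jump to nxt
       ('decjz', r, nxt, on_zero) -- if counters[r] == 0 jump to on_zero,
                                     else counters[r] -= 1 and jump to nxt
       ('halt',)"""
    counters = [c0, c1]
    pc = 0
    for _ in range(max_steps):  # step bound so the sketch always terminates
        op = program[pc]
        if op[0] == 'halt':
            return counters
        if op[0] == 'inc':
            counters[op[1]] += 1
            pc = op[2]
        else:  # 'decjz'
            if counters[op[1]] == 0:
                pc = op[3]
            else:
                counters[op[1]] -= 1
                pc = op[2]
    raise RuntimeError('step budget exhausted')

# example: drain counter 0 into counter 1, computing c0 + c1
add = [
    ('decjz', 0, 1, 2),  # 0: if c0 == 0 halt, else c0 -= 1
    ('inc', 1, 0),       # 1: c1 += 1, loop back
    ('halt',),           # 2
]
print(run(add, c0=3, c1=4))  # [0, 7]
```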
if, instead of attempting to measure the historical internal computational capability of systems whose universality nobody at the time could perceive, such as thunderstorms and the z3, we use the subjective standard of what people actually programmed to perform universal computation, then the ias machine or one of its contemporaries was the first turing-complete computer (given enough memory); that's when universal computation first made its effects felt on human society