There's no mention of biological inspiration whatsoever. At the same time, one can point to a long and rich history of convolutional filters being used in signal processing. And then there's the name itself: Convolutional Neural Network. The entire concept of a CNN is framed as a series of learned filters.
Regardless, Le Cun is not the first to describe CNNs, merely one of the first to use them for OCR (specifically for handwritten text).
The first neural network architecture to use convolutions instead of matmuls was this[2], from the year of our lord 1988. It in turn is based on Fukushima's "neocognitron"[3] (1980), which is based on the visual cortex of felines (from work done by Hubel and Wiesel in the 1950s/60s).
I guess it is not super surprising that you might be confused – Le Cun seems a bit more reticent than average about citing the work he's building on top of, and when he does cite, it is frequently his own prior work. So if that is where you're getting your picture of artificial neural network history, your skewed perception makes sense.
[1] https://ieeexplore.ieee.org/abstract/document/41400
[2] https://proceedings.neurips.cc/paper/1987/file/98f1370821019...
[3] https://www.cs.princeton.edu/courses/archive/spr08/cos598B/R...
"The most influential of these early discussions was probably the 1943 paper of Warren McCulloch and Walter Pitts in which activity in neuronal networks was identified with the operations of the propositional calculus. Actual simulations of recognition automata based on networks were carried out by Frank Rosenblatt before 1958 but the theoretical limitations of his "perceptrons" were soon pointed out by Marvin Minsky and Seymour Papert"
excerpt from a 1998 paper, "Real Brains and Artificial Intelligence" (https://www.jstor.org/stable/20025142)
"Walter Harry Pitts, Jr. (23 April 1923 – 14 May 1969) was an American logician who worked in the field of computational neuroscience."