CISC machines always decoded instructions into simple, more-or-less single-cycle operations internally; that's how microcode works. The RISC shtick was to get rid of that decoding step and expose the simple ops directly. Originally that didn't make sense, because fetching those simple ops from memory would have competed with data for bus bandwidth. But notice how RISC popped up at the same time as ubiquitous instruction caches? They solved the same problem in a more general way: with an I$, your instruction fetch isn't competing with data on hot paths. You can also see this in how the early CISC archs had single-instruction versions of memset/memcpy/etc. (x86's rep stosb/rep movsb, the VAX's MOVC3). The goal was to keep the cycle-by-cycle instruction stream off the main bus data path by sticking it in microcode.