The point is that this is not feasible when the design layout is randomized. Reverse engineering FPGA bitstreams is a notoriously hard problem. You might detect your own synthesis of RISC-V, but there is no algorithm that can detect any RISC-V, and certainly no such algorithm that fits in a tiny, low-power embedded processor that could pass unnoticed.
> You could make SERDESes do various kinds of naughty things when certain patterns go by-- dump some scan chain info, so you can kick some special packets and read out state remotely.
Yes, backdooring I/O is still possible. But it significantly raises the bar for the backdoor, since now you're relying on significant back and forth to probe deeper into the system. This isn't an absolute defense; it's just better than hard silicon, because you can make backdooring the core logic impractical, especially as a self-contained attack.
> You could make naughty patterns of bits crossing places do bad stuff. Think of dynamic effects like rowhammer being deliberately included, so if you know the design you can figure out what outside data will trigger bits to flip and state to leak. (Yes, I know that block rams are SRAMs, but that doesn't mean you can't deliberately add capacitive coupling or screw up synchronizers in various ways. And it looks like we may have block NVRAMs soon, so that opens the possibility for various evil even more).
With design randomization, you can make it hard to detect patterns like that. Think things like randomizing the polarity of each bit line going in/out of RAM. Again, the point is that the backdoor has to work against any design, and this opens up a wide range of mitigations you can implement at that stage that make it a lot less practical.
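To make the bit-line polarity idea concrete, here's a minimal Python sketch (a toy model of my own, not anything from Precursor; all names are hypothetical). Each design instance picks a random polarity mask at "synthesis time"; logically the RAM behaves identically, but the physical stored bits differ per instance, so a pattern-triggered backdoor baked into the silicon can't know which raw bit pattern to watch for:

```python
import secrets

class MaskedRAM:
    """Toy model of a RAM with per-instance randomized bit-line polarity.

    Hypothetical illustration only. A per-design random mask decides
    which bit lines are inverted; the XOR on write and read cancels out,
    so the logical contents are unchanged, but the raw cells differ
    between two instances with different masks.
    """

    def __init__(self, words, width=32, mask=None):
        self.width = width
        # Per-design polarity mask, fixed when the design is generated.
        self.mask = secrets.randbits(width) if mask is None else mask
        self.cells = [0] * words  # raw ("physical") storage

    def write(self, addr, value):
        self.cells[addr] = value ^ self.mask  # invert the masked bit lines

    def read(self, addr):
        return self.cells[addr] ^ self.mask   # invert again on the way out


# Two instances with different masks store the same logical word
# as different physical bit patterns.
ram_a = MaskedRAM(16, mask=0xDEADBEEF)
ram_b = MaskedRAM(16, mask=0x00000000)
ram_a.write(3, 0x12345678)
ram_b.write(3, 0x12345678)
assert ram_a.read(3) == ram_b.read(3) == 0x12345678  # same logical contents
assert ram_a.cells[3] != ram_b.cells[3]              # different raw bits
```

A real implementation would randomize this (and routing, placement, etc.) in the FPGA toolchain per bitstream, which is what makes a one-size-fits-all hardware trigger impractical.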
Keep in mind I'm bringing this up in the context of people believing that some ThinkPad the FSF rubber-stamped respects your freedom (and security) when it contains microcontrollers hooked up to LPC running secret blobs. Yes, if we want to go deeper down the rabbit hole of hardware trustability, there is definitely more to be done after Precursor, but it's a particularly clever example of how one might begin to address the silicon trust problem.