I'm completely naive here, but even so I'm tentatively confident saying yes, it's a halting-problem thing.
My understanding of formal verification is (a) that it doesn't eliminate bugs outright, it only proves properties against a formal spec, so bugs in the spec itself (or anything outside the model) survive; and (b) that it's academically very attractive (which is why HN hears about it constantly) but practically very expensive in investment/return terms, because formal verification requires a formal model to exist in the first place, that model requires actual design lockdown, and suddenly everything gets Complicated™ :)
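To make point (a) concrete, here's a toy sketch in Lean (entirely hypothetical names, not from any real project): the theorem below genuinely verifies, but the spec is so weak it never says the function should compute absolute value, so the obvious bug sails straight through "verification":

```lean
-- Hypothetical sketch: a "verified" function with a too-weak spec.
-- badAbs is plainly not absolute value, yet the proof goes through,
-- because verification only rules out bugs relative to what the
-- spec actually states.
def badAbs (n : Int) : Int := 0

-- The spec only demands non-negativity, which badAbs satisfies:
theorem badAbs_nonneg (n : Int) : 0 ≤ badAbs n :=
  Int.le_refl 0
```

The point being: the proof is airtight, the spec is where the bug lives now, and writing a spec strong enough to matter is exactly the design-lockdown cost in (b).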
You could possibly get away with an FPGA-targeting compiler set up in a stripped-down form (low-relevance related conversation: https://news.ycombinator.com/item?id=17151024), but you'd need to lock it down rather tighter than the situation described in https://www.corsix.org/content/malicious-luajit-bytecode (a bit old, but illustrative of the point), and then the question becomes whether all that lockdown negates the performance benefit of the FPGA in the first place.
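For flavor, here's the cheapest form of the gatekeeping that article implies, as a hedged Python sketch (hypothetical embedder code, not anything from LuaJIT itself). Both PUC Lua and LuaJIT begin a precompiled chunk with the ESC byte 0x1B (the "\x1bLua" / "\x1bLJ" dump signatures), and ESC can never begin valid Lua source, so an embedder can refuse bytecode outright before a chunk ever reaches the VM:

```python
# Hypothetical embedder-side check: refuse precompiled chunks before
# they reach the Lua/LuaJIT VM. Both PUC Lua ("\x1bLua") and LuaJIT
# ("\x1bLJ") bytecode dumps start with the ESC byte 0x1B, which cannot
# begin valid Lua source text, so one byte is enough to tell them apart.
BYTECODE_ESC = 0x1B

def is_source_chunk(chunk: bytes) -> bool:
    """True if the chunk looks like plain source, False for bytecode."""
    return not chunk.startswith(bytes([BYTECODE_ESC]))

def load_untrusted(chunk: bytes) -> bytes:
    """Gate untrusted input: only plain-source chunks get through."""
    if not is_source_chunk(chunk):
        raise ValueError("refusing precompiled bytecode from untrusted input")
    return chunk  # would be handed to the VM as text-mode source here
```

Lua 5.2+ exposes the same idea natively: pass mode "t" to load() (or luaL_loadbufferx() on the C side) and binary chunks are rejected for you, and IIRC LuaJIT's 5.2-compat load() accepts the same mode argument. That's the easy part, though; the hard lockdown is everything the sandboxed source can still reach after it's loaded.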
The thread I linked that comment from had some interesting info on FPGA vs GPU (e.g. https://news.ycombinator.com/item?id=17150985).