We have been told by cloud providers that the FPGA cannot be directly connected to the network due to security concerns: there is no easy way to control how arbitrary hardware programmed by users onto the FPGA will interact with their network. Microsoft has been using FPGAs directly connected to the network (a "bump-in-the-wire" architecture) in its datacenters (see Project Catapult for details), but those FPGAs are not programmable by Azure users yet.
Would that even be possible, or does it boil down to the halting problem (how can you know whether a sequence of transactions is malicious)?
My understanding of formal verification is a) that it doesn't eliminate bugs, it just reduces their likelihood, and b) that it's academically very attractive (which is why HN is constantly hearing about it) but very expensive in practice relative to the return: formal verification requires a formal model to exist in the first place, and that requires actual design lockdown, and suddenly everything gets Complicated™ :)
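To make the "formal modeling requires design lockdown" point concrete, here's a toy sketch (my own illustration, not from the thread) in a proof assistant like Lean: the property you care about is pinned down as a theorem statement, and any later change to the implementation has to re-satisfy the proof.

```lean
-- Toy formal verification: the spec is the theorem statement itself.
-- Once stated, the design is "locked down" against it: an implementation
-- change that breaks the property makes the proof fail to compile.
theorem reverse_preserves_length (l : List Nat) :
    l.reverse.length = l.length := by
  simp  -- discharged by the standard simp lemma for List.reverse/length
```

The cost in the real world is that "the spec" for anything non-trivial is enormous compared to this one-liner, which is where the investment/return problem above comes from.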
You could possibly get away with an FPGA-targeting compiler (low-relevance related conversation: https://news.ycombinator.com/item?id=17151024) set up in a stripped-down form, but you'd need to lock it down rather tighter than https://www.corsix.org/content/malicious-luajit-bytecode (a bit old, but illustrative of the point), and then the question would be whether that would negate the performance benefit of the FPGA in the first place.
The thread I linked that comment from had some interesting info on FPGAs vs GPUs (e.g., https://news.ycombinator.com/item?id=17150985).