In the Pascal-F verifier [1], we treated array cells as if each had a "defined" flag. If you have a flag indicating whether an array cell has been initialized, you can handle partially initialized arrays. Then you prove that such a flag is unnecessary.
We had a predicate
DEFINED(a,i,j)
which means the entries of the array a from i to j are initialized.
Theorems about DEFINED are:

    j < i => DEFINED(a,i,j)                                 // empty case is defined
    DEFINED(a,i,j) AND DEFINED(a,j+1,k) => DEFINED(a,i,k)   // extension
    DEFINED(a[i]) => DEFINED(a,i,i)                         // single element defined
    DEFINED(a,i,j) AND k >= i AND k <= j => DEFINED(a[k])   // elt in range is defined
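A minimal executable model of these rules, just to make them concrete (Python; the set-of-indices representation and names are mine, not the verifier's):

```python
# Toy model: a partially initialized array is represented by the set of
# indices known to hold values. defined_range(d, i, j) mirrors the
# verifier's DEFINED(a,i,j); defined_elt(d, i) mirrors DEFINED(a[i]).

def defined_range(d: set, i: int, j: int) -> bool:
    """DEFINED(a,i,j): entries i..j are all initialized (vacuous if j < i)."""
    return all(k in d for k in range(i, j + 1))

def defined_elt(d: set, i: int) -> bool:
    """DEFINED(a[i]): the single entry i is initialized."""
    return i in d

d = {3, 4, 5, 6}                       # slots 3..6 are initialized

assert defined_range(d, 5, 2)          # j < i => DEFINED(a,i,j)   (empty case)
assert defined_range(d, 3, 4) and defined_range(d, 5, 6)
assert defined_range(d, 3, 6)          # extension: adjacent ranges concatenate
assert defined_elt(d, 4)
assert defined_range(d, 4, 4)          # single element defined
assert defined_range(d, 3, 6) and defined_elt(d, 5)   # elt in range is defined
```

The point of the last theorem is the payoff: once you've proved DEFINED over a range, every individual read inside that range is known safe.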
Given this, you can do inductive proofs of definedness as a program uses more slots in an array that started out uninitialized. To do proofs like this, you have to be able to express, in terms of program variables, how much of the array is defined. Then you have to prove that each change preserves the "defined" property. That's not hard when you're just growing an array. You can construct data structures where it is hard, such as some kinds of hash tables, but if you can't prove them safe, they probably have a bug.
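A sketch of that inductive argument, for the easy "growing an array" case: the loop invariant is DEFINED(a,0,n-1), i.e. exactly the first n slots are initialized. A real verifier discharges the invariant statically; this Python stand-in checks it at run time with a ghost set of defined indices (all names mine):

```python
# Fill an uninitialized array left to right, maintaining the invariant
# DEFINED(a, 0, n-1): the first n slots are initialized.

N = 8
a = [None] * N           # None stands for "uninitialized"
defined = set()          # ghost state: indices known to be initialized

for n in range(N):
    # Invariant on loop entry: DEFINED(a, 0, n-1)
    assert all(k in defined for k in range(n))
    a[n] = n * n         # initialize the next slot...
    defined.add(n)       # ...extending the defined range by one element
    # DEFINED(a,0,n-1) AND DEFINED(a,n,n) => DEFINED(a,0,n)  (extension rule)

# On exit: DEFINED(a, 0, N-1), so every subsequent read a[k] is safe.
assert all(k in defined for k in range(N))
```

The extension theorem is exactly what carries the induction step: the old defined prefix plus the one newly written slot gives a defined prefix one longer.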
This was new 40 years ago, but many others have been down this road since. Rice's theorem isn't a problem. It just says that you can construct some undecidable programs, not that all programs are undecidable. If your program is undecidable, or the computation required to decide it is enormous, it's broken. Microsoft puts a time limit on their Static Driver Verifier - if it can't prove safety after some number of minutes, your driver needs revision. Really, if you're writing kernel code and you're anywhere near undecidability, you're doing it wrong.
[1] http://www.animats.com/papers/verifier/verifiermanual.pdf