I'd argue that in a real-time system, the GC should be tuned so that it never needs to 'catch up' (i.e., in each round of collection, the collector always collects all garbage produced since the last round). If it ever does need to, that should be treated as a non-fatal error condition. But I digress.
Keep in mind that real-time systems are unique in that -- unlike most software -- they have well-understood requirements and limits. There shouldn't be anything 'irregular'. If there is, then you don't fully understand your system and need to do some analysis.
But that said, it's not up to you or me to determine what is 'reasonable'. That's up to the certification body, and they're notoriously conservative about what they will certify (with good reason, I might add). If something causes non-deterministic behaviour and is not necessary for the function of the system, they will almost certainly ask 'why is that there?' -- and you'd better have a good answer.
Anecdote: I once had a similar thing happen. As a rookie, I had to implement a search algorithm for one reason or another, and I decided to use a recursive implementation of binary search. This routine was flagged during certification. The problem with recursion is that, unlike an iterative solution, a recursive algorithm grows in memory as well as time as the problem size grows (we couldn't assume the compiler would be smart enough to apply tail-call elimination), and it's hard to prove the maximum stack usage statically. I know because I tried, and I ended up replacing it with an iterative implementation of binary search.