The dataflow is not totally clear to me. Am I understanding correctly that the parser generator is a recursive-descent parser written in 1400 lines of JS, which compiles the example grammar in "grammar" into the top-down Forthish code in "grammar_error"; that "grammar_error" is then used to compile the Algolish code in "input" into the Lispish code in "output"; and that "output" is finally compiled into the BF in "result"?
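If so, the pipeline would be something like the following (stage names are just the file names above; the arrows are my guesses about which program performs each step):

```
grammar  (example grammar)   --[1400-line JS recursive-descent parser generator]--> grammar_error  (top-down Forthish)
input    (Algolish)          --[grammar_error]-------------------------------------> output        (Lispish)
output   (Lispish)           --[?]-----------------------------------------------> result        (BF)
```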
While this is an inspiring and aesthetically delightful achievement, it's not clear that it helps with the hex0 seeds, because executing the 1400-line recursive-descent parser requires the seed to grow by the size of one JavaScript execution engine, and all the existing engines are about four orders of magnitude larger than the live-bootstrap seed. Without an auditable engine inside the seed, a malicious JS execution engine could inject malicious code into the BF output, in the classic Karger–Thompson attack scenario.
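To make the concern concrete, here's a toy sketch of the Karger–Thompson scenario (entirely hypothetical code, nothing to do with the actual pipeline): a compromised compiler that behaves honestly on most inputs, but silently appends a payload when it recognizes a trigger in what it's compiling. Auditing the compiler's *input* never reveals the backdoor.

```javascript
// Toy "trusting trust" illustration. The names and trigger are made up.
function honestCompile(source) {
  // Stand-in for the real translation step; here just an identity transform.
  return source;
}

function evilCompile(source) {
  const PAYLOAD = "+[-]"; // stand-in for injected malicious BF code
  let output = honestCompile(source);
  if (source.includes("login")) {
    // Trigger: when compiling the "login" program, inject the backdoor
    // into the emitted output, invisibly to anyone who only reads `source`.
    output += PAYLOAD;
  }
  return output;
}

console.log(evilCompile("print hello")); // clean output, identical to input
console.log(evilCompile("login check")); // same input plus hidden payload
```

The point is that verifying "grammar", "input", and the 1400 lines of JS is not enough; the engine that *runs* them is also inside the trusted computing base.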
I suspect that the initial grammar compilation could be achieved by a cut-down compiler-compiler along the lines of what I'm working on in http://canonical.org/~kragen/sw/dev3/meta5ix.m5, although Meta5ix itself is not currently bootstrappable from a human-written seed.