I learned a bit of this way back in college! The grammar itself is unambiguous (e.g. indefinite articles are implemented as Skolem constants, and the whole thing is LR-parseable), but the semantics are not. There's a category of six-letter compound words (e.g. jbopre = lojbo + prenu = Lojban learner) where the semantics of combining them isn't (and probably can't be) defined. Remarkably, even how the words themselves combine isn't fully defined!
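For anyone unfamiliar with the Skolem-constant trick: instead of translating "a dog barks" into an existential quantifier (∃x. barks(x)), you mint a fresh constant that names that particular dog, which keeps the logical form quantifier-free and unambiguous. A toy sketch of that idea (the representation and names are mine, not from any actual Lojban tooling):

```python
import itertools

# Toy formulas as tuples:
#   ("exists", var, body)  -- existential quantifier
#   ("pred", name, args)   -- predicate applied to a tuple of terms
_counter = itertools.count()

def substitute(formula, var, const):
    """Replace free occurrences of var with const."""
    if formula[0] == "pred":
        _, name, args = formula
        return ("pred", name, tuple(const if a == var else a for a in args))
    if formula[0] == "exists":
        _, v, body = formula
        if v == var:
            return formula  # inner quantifier shadows var
        return ("exists", v, substitute(body, var, const))
    return formula

def skolemize(formula):
    """Replace a leading existential with a fresh Skolem constant."""
    if formula[0] == "exists":
        _, var, body = formula
        return substitute(body, var, f"sk{next(_counter)}")
    return formula

# "a dog barks" (simplified to a single predicate): exists x. barks(x)
result = skolemize(("exists", "x", ("pred", "barks", ("x",))))
print(result)  # ("pred", "barks", ("sk0",))
```

The payoff is that the parser never has to decide scope for the indefinite: the constant just denotes "that dog we're talking about."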
I wonder if any neural net-based translation services learn to translate through an intermediate language (Source->IL->Target), and if so, whether that IL would have fairly unambiguous semantics, enough to resist differences in meaning across languages.