Populating these ontologies is very manual and time-consuming right now. LLMs without any additional training (I'm currently using a mix of the various-size Llama 2 models, along with GPT-3.5 and GPT-4 for this) are capable of few-shot generation of ontological classification, and extending that classification with fine-tuning is doing REALLY well.
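For a concrete flavor, here's a minimal sketch of what the few-shot classification step can look like. It assumes the OpenAI Python client; the ontology classes and example entities are hypothetical placeholders, not my actual domains:

```python
# Minimal few-shot ontological classification sketch.
# Assumes the OpenAI Python client (>= 1.0); classes and examples
# below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = """Classify each entity into one ontology class.

Entity: "aspirin" -> Class: PharmacologicSubstance
Entity: "myocardial infarction" -> Class: DiseaseOrSyndrome
Entity: "left ventricle" -> Class: AnatomicalStructure

Entity: "{entity}" -> Class:"""

def classify(entity: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep the classification output stable
        messages=[{"role": "user", "content": FEW_SHOT.format(entity=entity)}],
    )
    return resp.choices[0].message.content.strip()

print(classify("warfarin"))  # expected: PharmacologicSubstance
```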
I'm also seeing a lot of value in using LLMs to query and interpret proofs from deductive reasoners against these knowledge graphs. I've limited the scope of my research here to two domains that require an eccentric mix of formal practices, explicit "correct" knowledge, and common-sense rules of thumb to be successful. Queries can be quite onerous to build, which a fine-tuned model can help with, and LLMs can both assist in interpreting the resulting logic chains and perform knowledge maintenance: adding in missing common-sense rules and removing bad or outdated ones. Even selecting among the possible solutions produced by the reasoners works really well when you include the task, desires, and constraints of what you're trying to accomplish in the selection prompt.
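To illustrate the query-building side, here's a rough sketch assuming rdflib for SPARQL execution; generate_sparql() stands in for the fine-tuned model, and the graph file and schema are hypothetical:

```python
# Sketch of LLM-assisted query building against a knowledge graph.
# Assumes rdflib; generate_sparql() is a hypothetical stand-in for a
# fine-tuned model, hard-coded here for illustration.
from rdflib import Graph

def generate_sparql(question: str) -> str:
    """Placeholder for a fine-tuned model that turns a natural-language
    question into SPARQL."""
    return """
        PREFIX ex: <http://example.org/>
        SELECT ?treatment WHERE {
            ?treatment ex:indicatedFor ex:Hypertension .
            FILTER NOT EXISTS { ?treatment ex:contraindicatedWith ex:Asthma }
        }"""

g = Graph()
g.parse("clinical_kg.ttl")  # hypothetical knowledge graph file

question = "What treats hypertension but is safe with asthma?"
for row in g.query(generate_sparql(question)):
    print(row.treatment)
```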
Knowledge graphs paired with a deductive reasoning engine handle the formal process and knowledge very well, but they produce long-winded logic chains to reach a positive or negative conclusion where a simpler chain might have sufficed (usually due to missing rules or a lack of common-sense rules), and they are generally incapable of "leaps" in deduction.
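A toy forward-chainer makes the long-chain problem visible; the facts and rules here are made-up examples:

```python
# Toy forward-chaining reasoner showing how deductive engines reach
# conclusions through long rule chains. Facts and rules are
# hypothetical examples.
facts = {("bird", "tweety")}
rules = [
    # (premise predicates, conclusion): if all premises hold for an
    # entity, assert the conclusion for that entity.
    ({"bird"}, "has_feathers"),
    ({"has_feathers"}, "is_animal"),
    ({"is_animal"}, "is_living"),
    ({"is_living"}, "needs_energy"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts appear (a fixed point)."""
    changed = True
    while changed:
        changed = False
        entities = {e for _, e in facts}
        for premises, conclusion in rules:
            for entity in entities:
                if (all((p, entity) in facts for p in premises)
                        and (conclusion, entity) not in facts):
                    facts.add((conclusion, entity))
                    changed = True
    return facts

# Four separate rule firings are needed to conclude needs_energy(tweety);
# a common-sense shortcut ("animals need energy") would collapse the chain.
print(forward_chain(facts, rules))
```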
LLMs on their own are capable of (currently largely low-level) common-sense reasoning and some formal reasoning, but they are still very prone to hallucinations. A 20% failure rate when building rules that human lives may depend on is a non-starter. This will improve, but I don't think a probabilistic approach will ever fully eliminate hallucinations. By using all of our tools together in various blends, we can augment and verify knowledge fully automatically and build more capable systems.
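A minimal sketch of that blend, with propose_rule() as a hypothetical stand-in for an LLM call and a naive contradiction check as the deterministic verifier:

```python
# Sketch of blending the tools: an LLM proposes a candidate rule to
# fill a gap, and a deterministic check gates it before it enters the
# rule base. propose_rule() is a hypothetical stand-in for an LLM call;
# the contradiction test treats "not_<pred>" as the negation of <pred>.

facts = {("is_animal", "tweety"), ("not_needs_energy", "rock1")}
rules: list[tuple[set[str], str]] = []

def propose_rule(gap: str) -> tuple[set[str], str]:
    """Placeholder for an LLM drafting a rule for a described gap."""
    return ({"is_animal"}, "needs_energy")  # hypothetical model output

def derives_contradiction(rule, facts) -> bool:
    """Apply the candidate rule once and look for p and not_p together."""
    premises, conclusion = rule
    derived = set(facts)
    for _, entity in facts:
        if all((p, entity) in facts for p in premises):
            derived.add((conclusion, entity))
    return any(("not_" + p, e) in derived for p, e in derived)

candidate = propose_rule("animals require energy")
if not derives_contradiction(candidate, facts):
    rules.append(candidate)  # verified automatically; promote into the rule base
else:
    pass  # reject, or route to a human reviewer
```

In a real system the verifier would be the full reasoner plus held-out test queries rather than a one-step check, but the shape is the same: probabilistic proposal, deterministic gate.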