Contextual reasoning matters in many situations, and time is one of the most important contexts. Take a fact like "the iPhone is the best selling phone": it is true or false depending on the context. It is definitely false in 2006 and earlier, because the first iPhone had not been released yet. And while the fact may be true right now in a global context, it may be false for a specific country or region. For example, I would expect satellite-capable phones to be more popular in a highly remote (non-urban) country or region.
I've been refreshing myself on ontology design and knowledge representation lately. I'm kind of surprised that it hasn't moved very much in the last two decades. I'm guessing we're about to hit an inflection point as it becomes a cool topic with the current AI trends.
It occurs to me that if we want LLMs to be able to reason better, we may need to write texts that explicitly embody the reasoning we desire, and not just use a bunch of books or whatever scraped from the Internet.
Populating these ontologies is very manual and time-consuming right now. LLMs without any additional training (I'm currently using a mix of the various sizes of Llama 2 models, along with GPT-3.5 and 4 for this) are capable of few-shot generation of ontological classification. Extending this classification using fine-tuning is doing REALLY well.
I'm also seeing a lot of value in using LLMs to query and interpret proofs from deductive reasoners against these knowledge graphs. I have been limiting the scope of my research to two domains that require an eccentric mix of formal practices, explicit "correct" knowledge, and common sense rules of thumb to be successful. Queries can be quite onerous to build, which a fine-tuned model can help with, and LLMs can both assist in interpreting those logic chains and do knowledge maintenance: adding missing common sense rules or removing bad or outdated ones. Even selecting among possible solutions produced by the reasoners works really well when you include the task, desires, and constraints of what you're trying to accomplish in the prompt performing the selection.
The formal process and knowledge are handled very well by knowledge graphs along with a deductive reasoning engine, but they produce very long-winded logic chains to reach a positive or negative conclusion where a simpler chain might have sufficed (usually due to missing rules, or a lack of common sense rules), and they are generally incapable of "leaps" in deduction.
LLMs on their own are capable of (currently largely low-level) common sense reasoning, and some formal reasoning, but are still very prone to hallucinations. A 20% failure rate when building rules that human lives may depend on is a non-starter. This will improve, but I don't think a probabilistic approach will ever fully eliminate hallucinations. We can use all of our tools together in various blends to augment and verify knowledge, fully automatically, to make more capable systems.
the tutorial page https://www.idp-z3.be/tutorial.html is a bit more enlightening
as is https://interactive-idp.gitlab.io/ learning material
but I didn't find those first, I found the docs first (via the four "learn more" links on the landing page)
and I don't think they are the best starting point
https://docs.idp-z3.be/en/latest/introduction.html
> IDP-Z3 can be installed using the python package ecosystem.
> install python 3, with pip3, making sure that python3 is in the PATH.
> use git to clone https://gitlab.com/krr/IDP-Z3 to a directory on your machine
> (For Linux and MacOS) open a terminal in that directory and run the following command
First thought is... why isn't it published to PyPI?
But then elsewhere https://docs.idp-z3.be/en/latest/IDP-Z3.html
> the idp_engine package available on Pypi.
And looking in the git repo it seems like this is the same package published as https://pypi.org/project/idp-engine/
Why not just say "pip install idp-engine"...?
Quick response: with pypi, you can only install the reasoning engine. By cloning the repository, you get the full suite of tools, including the "interactive consultant".
I think it could probably be evolved so that they can be pip installed too.
e.g. instead of git clone and `poetry run python3 main.py`, maybe your pip package could provide its own CLI frontend for running the server.
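For what it's worth, the standard mechanism for that is a console-script entry point in the package metadata. A minimal sketch of what that could look like in pyproject.toml (the module path `idp_engine.server:main` and the command name are hypothetical, not IDP-Z3's actual layout):

```toml
[project]
name = "idp-engine"

# A [project.scripts] table makes pip install a command onto the user's PATH.
# Here, `idp-server` would call the main() function in idp_engine/server.py.
[project.scripts]
idp-server = "idp_engine.server:main"
```

With that in place, `pip install idp-engine` followed by `idp-server` would replace the git clone + poetry invocation for plain users.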
(I have nothing to do with IDP-Z3) This is the general pattern:
* If you are a developer: get the sources and work on the sources, because you probably want to add to or modify them. Part of this experience may be creating a package and installing it with pip or some other package installation program, but that's not usually what other developers (contributors) spend their time on.
* If you are a user: use the supported package manager to obtain the package and use it through documented interface.
I'm honestly not sure if this is why or how these instructions came to be, but this is a fairly common thing to do, so, w/o looking, I'd guess this is what it is.
For example, you give IDP-Z3 the formula that links a tax-free amount, a tax rate, and a tax-included amount, plus the values of any two of those parameters, and it will compute the missing one. You do not need to write three different formulas, one for each case. If you give it only one parameter, it will tell you that the other two parameters are relevant.
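To make the contrast concrete, here is a plain-Python sketch of what you have to write by hand without a solver: one rearrangement of the formula per unknown. The point of IDP-Z3 is that you declare the single constraint `gross = net * (1 + rate)` once and it derives all three directions for you (the function and variable names here are mine for illustration, not IDP-Z3 syntax):

```python
def solve_tax(net=None, rate=None, gross=None):
    """Given any two of the three parameters, compute the third.

    Without a solver, each direction needs its own hand-derived
    rearrangement of the single formula gross = net * (1 + rate).
    """
    unknowns = [n for n, v in (("net", net), ("rate", rate), ("gross", gross))
                if v is None]
    if len(unknowns) != 1:
        raise ValueError("exactly one parameter must be left unknown")
    if gross is None:
        return net * (1 + rate)    # forward: add the tax
    if net is None:
        return gross / (1 + rate)  # backward: strip the tax
    return gross / net - 1         # infer the rate

solve_tax(net=100, rate=0.21)    # computes the tax-included amount
solve_tax(gross=121, rate=0.21)  # computes the tax-free amount
solve_tax(net=100, gross=121)    # computes the tax rate
```

The same shape shows up everywhere constraints are encoded as one-way functions; a declarative engine collapses the three functions back into the one relation they all express.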