If you're willing to drop CoreNLP there's also https://demos.explosion.ai/displacy/ that's worth checking out.
Amusingly, a year or so ago I took the Stanford Dependency parser and fed its output tree into a Prolog system to try to pull out the semantics. It was used to analyze business news (getting at the whos, whats, and whys).
The easiest approach was to wrap a very simple DSL around Prolog (which, BTW, is something Prolog is great at). Then in the DSL (which still retained logical variables and backtracking) you could write things like:
%% Simple statements -- root is an announcement word whose subject and object tell the story.
%% 'IBM announced a new computer today'
announce(Who, About, What) ==>
    s+root(['announc', 'releas', 'introduc', 'launch', 'unveil', 'reveal', 'agre']),
    #Dep1,
    subject(Who),
    Dep1 >> object(About, What).

%% 'IBM has announced a partnership ...' is caught by the above. But 'IBM has entered into a partnership ...'
%% needs a little more work
announce(Who, About, announcement) ==>
    s+root(['enter']),
    #Obj,
    subject(Who),
    Obj >> prep_pobj_chain(PPC),
    {PPC = [Prep|About]}.
I think a Prolog-based query planner as a front end to Sparql on Wikidata could be quite interesting.
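To make the idea concrete, here is the kind of SPARQL such a planner might emit against the Wikidata query service for a question like "What is the capital of France?" (Q142 is France's Wikidata item and P36 is the "capital" property; the query shape follows the standard Wikidata label-service idiom):

```sparql
# "What is the capital of France?" compiled to Wikidata SPARQL
SELECT ?capitalLabel WHERE {
  wd:Q142 wdt:P36 ?capital .                      # France -> capital -> ?capital
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en" .      # resolve ?capitalLabel in English
  }
}
```

A Prolog front end would mainly be doing the planning step: mapping parsed predicates like capital_of(france, X) onto item/property IDs before emitting the query.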
Alanl
Frankly speaking, I am a bit skeptical about pattern matching algorithms for answering questions. It would help if you showed some kind of stats about your algorithm's performance on a diverse question set. For example, you could scrape simple quiz questions (and answers) from quiz sites [1] and report back on the performance.
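The scoring loop for such an evaluation is simple; here's a minimal sketch (the `answer_fn` stub and normalized exact-match metric are my assumptions, not anything from the project):

```python
def normalize(s):
    """Lowercase and collapse whitespace so near-identical answers match."""
    return " ".join(s.lower().split())

def accuracy(qa_pairs, answer_fn):
    """Fraction of (question, expected_answer) pairs the system gets right
    under normalized exact match. `answer_fn` stands in for the QA system."""
    correct = sum(normalize(answer_fn(q)) == normalize(expected)
                  for q, expected in qa_pairs)
    return correct / len(qa_pairs)

# Toy run with a stubbed-out QA system:
pairs = [("what is the boiling point of water?", "100 degrees Celsius"),
         ("what grain is most often used to make beer?", "barley")]
stub = {"what is the boiling point of water?": "100 Degrees Celsius",
        "what grain is most often used to make beer?": "hops"}.get
print(accuracy(pairs, stub))  # 0.5
```

Exact match is a harsh metric for open-ended answers; for a real report you'd likely want token overlap or manual grading on a sample as well.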
[1] http://www.quiz-zone.co.uk/questionsbydifficulty/1/0/answers...
Q: "What is purpose" A: "Justin Bieber album"
Q: "What is a car?" A: "country in Africa"
Q: "What is a male?" A: "capital of Maldives"
Q: "What is a female?" A: "human who is female (use with Property:P21 sex or gender). For groups of females use with ''subclass of (P279)''"
My point in this comment is just that when it does give an odd answer, it can be funny; it's not to say that it often gives odd answers.
how many lines of resolution are there in an ntsc television signal?
what is the melting point of tin/lead eutectic solder?
what species of whale was moby dick?
what grain is most often used to make beer?
what is the boiling point of water?
how many chromosomes does a normal human have?
what animal is known as "man's best friend"?
what fps did id software release in 1993?
what is the largest known prime number?
what is the clock rate of the arduino uno?
As a comparison, Google gives 8 correct answers directly (either as a special info box, or as a highlighted part of a web page), 1 correct answer as the 2nd search result (Doom), and 1 incorrect answer (largest known prime).
Also, it seems to have issues formatting dates before 1900 (for the bday one, the answer it returns is more of an error message than an answer: "year=1732 is before 1900; the datetime strftime() methods require year >= 1900")
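That error message comes from older Python, where `datetime.strftime()` rejects years before 1900 (newer versions relaxed this). A minimal workaround sketch, assuming the site formats dates in Python, is to build the string by hand instead of going through `strftime`:

```python
from datetime import date

MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def format_date(d):
    """Format a date without strftime, sidestepping the pre-1900
    restriction of older Python versions."""
    return f"{MONTHS[d.month - 1]} {d.day}, {d.year}"

print(format_date(date(1732, 2, 22)))  # February 22, 1732
```

The `date` constructor itself has no such restriction (it accepts years from 1 onward), so only the formatting step needs replacing.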
Tip: The link to the source is pointing to github pages, which hasn't been set up.