At least that is what I managed to find out in the brief time I toyed with it. This can save time compared to hunting down loads of citation trails.
What I really fail to see is what is wrong with having this buggy tool.
(Also, if you think that published papers contain true information, you should invest in my bridge)
A typical scientific review paper or perspective contains tons of such speculation. But right now the process of hunting down citations is excruciating and most often done lazily. Even the best review papers contain erroneous citations to irrelevant papers, improperly cited results, and so on. Peer review can only do so much. This is why the tool is useful: it accelerates that process. It's not like anyone will cite Galactica.org.
There is an infinity of ideas we could test. The large majority of them are either obviously wrong or completely useless. The reason researchers spend so much time embedded in a field is so they can come up with ideas that are more likely to be worth investigating than others.
And again, if the goal is to generate hypotheses, then the output should be a hypothesis, not a paper that presents the hypothesis and claims to evaluate it.
Is it really a surprise that people react differently to machine-generated scientific papers containing large amounts of plain nonsense than they do to a machine-generated piece of art?