> It lists the author as having a Masters Degree in Mycology from University of East Ontario. A search later revealed there is no "University of East Ontario."
This has got to be criminal negligence.
Chances are the lazy authors using an AI generator would actually have known enough to write a reasonably safe-ish identification book despite their laughably fake credentials; they just weren't good at writing and took the shortcut to compensate.
It's like killing someone driving a heavy SUV while under the influence, vs killing someone pushing a heavy SUV while under the influence. Both are bad, but one is infinitely more likely to happen than the other.
An LLM writing the same would instead create a plausible but wrong book. A layperson could be tricked by this – crucially wrong sentences would be sandwiched in between pages and pages of superficially right ones.
There is a vast ocean of difference between these two. It has now become possible to produce something plausible with no knowledge of the subject matter whatsoever.
Trusted/proven sources are the ones that should be protected by law; i.e. doctors, engineers, etc.
Otherwise let's start prosecuting everyone's grandparents for saying that throwing salt over your shoulder wards off bad luck.
People have been murdering since the beginning of time, but machine guns make it easy to murder a lot of people very fast. So we don’t just have laws against murder, but we have laws restricting access to machine guns.
AI tools make it possible to spread misinformation and disinformation much faster, because you can produce a high quantity of it really quickly. Just like how a machine gun shoots a lot of bullets really quickly. It’s not a fundamentally different type of thing, just a new scale / speed.
There's more intelligence in this AI guide than there is in the majority of college diplomas coming out of Canada today.
This isn’t an AI issue, it’s a basic scam/fraud issue.
Regulation like the EU's AI Act (and even stricter) should be in effect worldwide. Corps have been running the show, but naturally their focus is on monetisation rather than on what their creations actually do.
Just to clarify, the reddit post is 99% creative writing, it's a fake story created by a new Reddit account.
The difference between those two things is rather important.
This has to be some kind of a new level of idiocy. I mean, use AI to make junk sci-fi stories, and generate fake authors all day. But going for a book about mushrooms deserves a special stupidity and evilness award.
No excuse, obviously. But it could explain how whoever's running the scam could have run it off this cliff - probably others, too.
It's been a couple years since I bought even paper books off Amazon. Used should theoretically be closer to OK, but eBay sellers are cheaper and much more trustworthy.
That's fine for things like stock images or text on random product pages, but for things like this? Yeah, the very concept is just risky as hell.
You verify information by finding multiple, different sources.
When I left a negative review pointing out that the author was a stock photo (entire content: "The author of this book is a fraud. There is no Tina B. Baker, she is a stock photo."), Amazon pulled the review saying it violated their guidelines.
They mentioned morels, which are sort of plausible (dates excepted) since false morels aren't going to kill you. What kind of mushroom would they be hunting in late July where their poisoning wouldn't be actually-newsworthy?
> The book has been removed from sale from the online retailer, ...
Emphasis added. I don't know if this is real or not, but they haven't given us the title.
https://www.vox.com/24141648/ai-ebook-grift-mushroom-foragin...
https://fortune.com/2023/09/03/ai-written-mushroom-hunting-g...
Of course the story remains plausible, and it certainly has an air of truthiness, but I'd give it very good odds for being fake.
> My wife just received an email from the online retailer. She has been asked to "Not take any photographs or copies of the product in question due to copyright issues" and it states, "the product must be returned immediately by special delivery by [DATE]." There's some other statements as well about our account being terminated if we fail to return the product by the specific date. We've got a lot of movies and series that we have purchased over the years on this account, I wouldn't want to lose them.
https://www.washingtonpost.com/technology/2024/03/18/ai-mush... ("Using AI to spot edible mushrooms could kill you")
- "Like past mushroom identification apps, the accuracy is poor, Claypool found in a new report for Public Citizen, a nonprofit consumer advocacy organization. But AI companies and app stores are offering these apps anyway, often without clear disclosures about how often the tools are wrong."
- "Apple, Google, OpenAI and Microsoft didn’t respond to requests for comment."
My father was a moderately known evolutionary biologist and he advised his students encountering an unknown plant and curious whether it was edible to "try it and see". But this was for a flowering plant. Most mushrooms can't be dealt with that way.
>There's some other statements as well about our account being terminated if we fail to return the product by the specific date. We've got a lot of movies and series that we have purchased over the years on this account, I wouldn't want to lose them.
This story is so fake it hurts. Reddit eating up ragebait is one thing, but posts like this don't belong on HN at all.
Edit: Wait, this is a book made up with LLMs. I think the author should be on the hook for publishing it unless they added a disclaimer that their book has no grounding in reality.
Not necessarily an AI issue.
Some mushrooms are horribly poisonous. That's the nature into which we were born. It's not a consequence of policy, and won't be fixed by policy.
People need to learn to be careful about sources of information, with care in proportion to consequences. State intervention will preclude that evolutionary step, almost certainly without actually ameliorating the problem.
Regardless of the veracity of the claims in that post, there's not much new here aside from the fact that the distributor generated the content using AI rather than making it up themselves. Quackery and snake oil have always been a thing, and plenty of people have been seriously injured or died from misinformation about food safety or medicine.
The next time someone hesitates to seek professional medical attention for a problem because they got a blessing from the elders at their church and they think God will heal them as soon as they start having more faith, we can start talking about where we can really draw the line between personal responsibility and holding liars liable.
I'm not curious. Because curiosity killed the cat.
Directly akin to the people who throw themselves into stopped cars in Russia for insurance fraud purposes, only to be captured on dash-cam video and derisively immortalized on social media.
The other viable explanation is "but Google Maps told me to drive off the pier."
It's not a fact-searching exercise.
Even if no one did anything wrong, you might misidentify a mushroom in the field. Unless you're very experienced, this feels like a very risky and stupid thing to do.
This isn't an AI problem. This is a "Don't eat things growing in the woods" problem.
Perhaps they asked ChatGPT and were told it was a great idea to eat wild mushrooms.
> This isn't an AI problem. This is a "Don't eat things growing in the woods" problem.
This is a misinformation problem, which you can't solve simply by saying "you should have been better informed". The whole problem of AIs accelerating the post-truth age is that reliable sources are becoming scarcer at an exponential rate.