local, open model
local, proprietary model
remote, open model (do these exist?)
remote, proprietary model
There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in, with clear disclaimers. The consent required needs to be proportional to the risk.

Open weights, or open training data? These are very different things.
The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.
It depends on what the side effects can possibly be. A local + open model could still disregard-all-previous-instructions and erase your hard drive.

Only if you also give it full disk access or terminal rights, and there is no reason or sane design in which a tab-grouping feature would get either.
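To make that concrete: even if an injected instruction makes the model *ask* for something destructive, the host application decides which side effects are actually reachable. A minimal allowlist sketch (all names here are hypothetical, not any real browser or model API):

```python
# Hypothetical host-side tool dispatcher. The model's output is treated as
# untrusted input; only explicitly allowlisted, benign actions can run.
ALLOWED_TOOLS = {"group_tabs", "rename_tab_group"}  # no shell, no filesystem

def dispatch(tool_call: dict) -> str:
    """Execute a model-requested tool only if it is on the allowlist."""
    name = tool_call.get("name")
    if name not in ALLOWED_TOOLS:
        return f"refused: '{name}' is not permitted"
    # ...perform the benign action here...
    return f"ok: executed {name}"

# An injected "erase the disk" request simply has no tool to call:
print(dispatch({"name": "erase_disk"}))   # refused: 'erase_disk' is not permitted
print(dispatch({"name": "group_tabs"}))   # ok: executed group_tabs
```

The model never holds the capability; the dispatcher does. That containment works the same whether the model is local or remote, open or proprietary.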
This is one of the most ignorant posts and comment sections I’ve seen on HN in a while.
Also I’m referring to the post, not this comment specifically.
Even if it were solely about tab-grouping, my point still stands:
1. You're browsing some funny video site or whatever, and you're naturally expecting "stuff I'm doing now" to be all the tabs on the right.
2. A new tab opens which does not appear there, because the browser chose to move it over into your "Banking" or "Online purchases" groups, which for many users might even be scrolled off-screen.
3. An hour later you switch tasks and return to "Banking" or "Online Purchases". These are obviously the same tabs you opened earlier from a trusted URL/bookmark, right?
4. Logged out due to inactivity? OK, you enter your username and password into... the fake phishing tab! Oops, game over.
Was the fuzzy LLM instrumental in the failure? Yes. Would having a local model with open weights protect you? No.