You gave it capability to delete emails. Why did you expect it not to do that at least some of the time? And with enough user some of the time will most likely happen...
Because of the I in AI of course. Would you call it false advertisement and go after the providers?
But wait, hold my beer, now we've got people turning openclaw type tools loose in their systems to do things as sudo or install software packages from supply-chain-attack vulnerable repositories with no human intervention whatsoever!
1) Despite what people say about security and privacy, most are willing sacrifice both for the sake of potential convenience
2) Our priorities for the past decades have been wrong, or the times have changed and we should reevaluate them all
The root cause was and a complete lack of effort to even attempt to secure things because no one had thought to do so, and now we're starting all over again at a new computing layer. Cloud was somewhat similar, but not nearly as bad.
It's bizarre to me since presumably someone who learned the lessons before is still working, but also great for my job security.
Well duh
I’ll believe in AI agent’s abilities the day two criteria can be met.
1. A killer app is made with it.
2. That app doesn’t rely on heavily subsidized models that are burning a dollar to make 20 cents.
FWIW I agree with your criteria for AI agent success, and I haven't seen it happen yet.
It’s funny that this technology only admits in-band signaling. Given that, any foreign content is risky. It’s actually quite interesting that the current technological ecosystem is built around a high trust situation: npm, pip, cargo all run foreign code in the developer context and communities have norms of downloading random people’s modules.
And so I suppose it’s no surprise that we use LLMs - another tech that is high-trust: since it has no out of band signaling ability.
But it seems like we’re very close to the end of the era where someone will use (in a sensitive system) arbitrary web content carrying the equivalent of merged code/data.
Or will one day some obscure “Unicode homograph” library end up pwning half the world because it was a dependency 10 layers deep for an optional but default-enabled feature that nobody cares about.
Things like Visual Studio’s extension marketplace really acare me. It’s too easy to install Jim Bob’s “starter pack” of extensions that bundles many well known ones with an unheard of one… Or install the wrong “Python” extension because there are 20 with the same icon…
Untrusted data sources can provide data that causes bad things to occur. If that's a vulnerability, then any application that ingests data is riddled with vulnerabilities.
I agree that the behavior should change from a default of allowing external network requests to denying them, but this "report" reads like overly dramatic marketing BS.
There's an important difference between "the import had bad numbers so the report is wrong" versus "the import had a virus and now our network is compromised."
They are not the same kind of failure, they don't have the same impacts, and they don't involve the same mechanisms for prevention, detection, or remediation.
It's not all that different from people realizing that several popular model servers didn't support access control and could execute commands. It's an inherent part of the design that was rather naive from a security perspective, not something that requires coordinated disclosure or the rest of the security theater described in this marketing release.
The other is that an attacker can sneak something in that arbitrarily rewrites your spreadsheet. Triggers could be on content, or on a pre-planned attack time across many instances. Impacts could be subtly-flawed conclusions, or coarser "it stopped working and the deadline is looming" sabotage.
"Yeah boss, I sent out the checks to every vendor listed in the spreadsheet, what's wrong?"
For example https://en.wikipedia.org/wiki/Melissa_(computer_virus)