So I would assume that three-letter agencies would love to take something like GPT-4 and fine-tune it on all the data they have about existing terrorists.
On the flip side, LLMs must give the NSA a new challenge: a flood of garbage text generated by no-one in particular. Perhaps there will be more effort to put surveillance directly on-device as tapping networks yields more noise.
It's possible that LLMs will suddenly make a leap in reliability and usability (e.g. a much larger context window without a corresponding massive increase in memory usage). But I have yet to see it.
So far it's great at some specific use cases: interacting with humans, rewriting or making up text, summarising. Hit and miss at everything else.
Don't get me wrong, I love AI tech and I'm heavily experimenting with it (both at work and at home with local models). But as with most hyped technologies I find the benefits far overblown in marketing stories.
Our leadership jumped on Microsoft Copilot (the one for Office 365, because they have dozens of different copilots :) ) like a pack of hungry wolves afraid to miss the boat. And the result was... kinda meh. It's kinda promising and impresses with simple play-school stuff ("make me a presentation about home safety"), but it totally and utterly fails when you try to do any serious work-related task. Sooo many times I get "Sorry, I can't do this right now", "Sorry, I need more training for this", "I can't do this for you, but here's how you can do it yourself!", or it does something, but totally wrong.
Meanwhile we have a bunch of MS training people running around evangelising and telling us how great everything is and making excuses for everything that goes wrong :) You can almost see them breathe a sigh of relief every time something works as it should. That's not what we were promised.
Maybe it will get there, but to be honest I don't see it happening tomorrow. LLMs were an impressive leap, but their Achilles' heels have become clear, and they're proving difficult to overcome.
I'm really enjoying surfing the knife's edge of technology (as I was, and still am, with the metaverse), but I don't yet see this as a game changer except in a few specific industries. People who edit text for a living certainly have reason to worry.
I also wonder what will happen with future AI training. Now that more and more websites are filled with AI-generated content that is often at best "mediocre", and considering future AI models will be trained on that, will they be able to improve their accuracy or struggle to maintain it?
These are tasks that would have taken months of development or millions of dollars in manual effort before. It's not just hype.
Put another way: most people only get charged with a crime if it's worth a law-enforcement officer's time to catch them; many small violations are ignored in favor of higher priorities. We may have to contemplate a future where AI is clever enough to notice everything that can be construed as a violation of some law and put it on a prosecutor's backlog.
Schneier talks about this as well: https://www.schneier.com/blog/archives/2023/12/ai-and-mass-s...
That's what any good stalker, or anyone experienced with social engineering, can do right now, but it takes a lot of time and energy. Using LLMs would considerably reduce both. And it gets easier the more people you have information about.
This then raises the question of what level of censorship reduction to apply. Should government employees be allowed to, say, war-game a mass murder with an AI? What about discussing how to erode civil rights?
Not only is Anthropic anti-open-source, they're also anti-open-output.
Saying "Hey, try our product! It can do everything!" while ALSO saying, "Sorry, you're not allowed to use our general intelligence product to compete with general intelligence..." just evidences no upper IQ bound on Dunning-Kruger
That does sound exhausting.
You're defending a company that makes "safe" usage part of its brand, whose every press release mentions how much they care about safety. Then one day they announce they're making some compromises to their safety policy so they can land large new customers (government), but don't worry, all they care about is safety. It's comical how predictable this was.
They can call themselves "sonnet", "bard", "open", and a whole plethora of other positive things. What remains is that they're heading in the direction of Palantir, and the rest is just marketing.
https://support.anthropic.com/en/articles/9528712-exceptions...
The things you're allowing yourself to imagine don't exist in the reality of the information we're discussing here.
> For example, we have crafted a set of contractual exceptions to our general Usage Policy that are carefully calibrated to enable beneficial uses by carefully selected government agencies. These allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them.
Sometimes I wonder if this is cynicism or if they actually drank their own Kool-Aid.
Firstly, Anthropic made an LLM, exposed it to the internet, and provided these terms of acceptable use:
https://www.anthropic.com/legal/archive/4903a61b-037c-4293-9...
There was no need for cynicism or Kool-Aid at this stage.
Later on, presumably now-ish, Anthropic changed the usage policy to add an exception:
https://support.anthropic.com/en/articles/9528712-exceptions...
> Exceptions to our Usage Policy
> Updated today
The exception is that, starting from now,
> Anthropic may enter into contracts with government customers that tailor use restrictions to that customer’s public mission and legal authorities if, in Anthropic’s judgment, the contractual use restrictions and applicable safeguards are adequate to mitigate the potential harms addressed by this Usage Policy.
I don't think any Kool-Aid or cynicism is needed.
The change is that, if Anthropic thinks the client's use case meets the listed humanitarian goals, then the client may use the LLM.
What are the security implications if American corpos like Google DeepMind, Microsoft GitHub, Anthropic and "Open"AI adopt explicitly anticompetitive / noncommercial licenses out of greed or fear, so that the only models people can use without fear of legal repercussions are Chinese?
Surely, Capitalism wouldn’t lead us to make a tremendous unforced error at societal scale?
Every AI is a sleeper-agent risk if nobody has the balls and/or capacity to verify its inputs. Guess who wrote about that? https://arxiv.org/abs/2401.05566
Perhaps (optimistically) this is just a credibility-grab from Anthropic, with no basis in fact.
> Government agencies can use Claude to provide improved citizen services, streamline document review and preparation, enhance policymaking with data-driven insights, and create realistic training scenarios. In the near future, AI could assist in disaster response coordination, enhance public health initiatives, or optimize energy grids for sustainability.
Listen to Edward Snowden. This guy is not fucking around.
very optimistic of you :-)