> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.
It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.
But alas, we don't live in that world. We live in a world where there will be firings, and civil and even criminal liability, for those who make wrong judgments. If the AI says "possible gun", the human running things sees all upside and no downside in alerting a SWAT team.
Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.
My thought when posting was: if the schools already have surveillance cameras that human security guards are watching, adding an AI to alert them to items of interest didn't seem bad on its own. But maybe you've changed my mind. The AI pays more invasive attention to every stream, whereas a guard may be watching 16 feeds at once and barely paying attention, and no one may ever view a feed at all unless a crime occurs and they go looking for evidence.
Regardless, this setup was far worse! The article said the AI:
> ... scans existing surveillance footage and alerts police in real time when it detects what it believes to be a weapon.
Wow, the system was designed with no human in the loop - it automatically summons armed police!
That "we could" is doing some very heavy lifting. But even if we pretend it's remotely feasible, do we want to take an institution that already trains submission to authority and use it to normalize ubiquitous "for your own good" surveillance?
In fact, one might say that what the communist parties did starting in the 1910s was pretty much that. Ubiquitous surveillance is the problem here, not AI. Communist states used tens of thousands of "agents" who would just walk around, listen in on random conversations, and arrest (and later torture and deport) people. Of course, communist states that still exist, like China, have started using AI to do this, but it is nothing new for China and its people.
And, of course, what these communist states are doing is protecting the rich and powerful in society, and enforcing their "vision", using far more oppressive means than even the GOP dares to dream about. That includes suppressing "socialist causes" like LGBTQ rights, starting with using state violence against people for merely talking about problems.
But can we at least talk about also holding the school accountable for the absolutely insane response?
You talk about not selling to schools that have "zero tolerance" policies as if those policies were an immutable fact of nature, but they are a human invention with very obvious negative effects. There is no reason we actually have to have "zero tolerance" policies that traumatize children who genuinely did nothing wrong.
"Zero tolerance" for bringing deadly weapons to school, I can understand. So long as what's being checked for is actual deadly weapons, and not just "anything vaguely gun-shaped", or "anything that one could in theory use as a deadly weapon" (I mean, that would include things like "pens" and "textbooks", so...).
"Zero tolerance" for particular kinds of language is much less acceptable. And I say this as someone who is fully in favor of eliminating things like hate speech or threats of violence—you don't do it by coming down like the wrath of God on children for a single instance of such speech, whether it was actually hate speech or not. They are in school; that's the perfect place to be teaching them a) why such speech is not OK, b) who it hurts, and c) how to express themselves without it, rather than just treating them like terrorists.