It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.
Another false positive by one of these leading content filters schools use: the kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, which claims it never intended its system to be used that way.
These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where humans review all the alerts before they are forwarded to the school or authorities. This is a paid addon, though.
> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.
It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.
Yeah, there's a shop near me that sells bongs "intended" for use with tobacco only.
All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.
Is there some legal way to sue both actors (Gaggle and the school), then let them sue each other over who has to pay what percentage?
Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.
Oh look, a corporation refusing to take responsibility for literally anything. How passe.
Human car crash? Human punishment. Corporate-owned car crash? A fine which reduces salaries some negligible percent.
Versus all the natural people at the highest echelons of our political economic system valiantly taking responsibility for fuckall?
Someone nearby: well what if they use it to replace human thinking instead of augment it?
Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.
Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.
Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…
::6 months later—some kid is being held at gunpoint over snacks.::
The authorities are just itching to have their brains replaced by dumb computer logic, without regard for community safety and wellbeing.
For this civilian use case, the next step is AR goggles worn by police, with the AI projecting onto the goggles where that teenager supposedly has his gun (kind of Black Mirror style), and the next step after that is obviously excluding the humans even from the execution step.
But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
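To make the tradeoff concrete, here's a back-of-envelope sketch in Python; every rate and dollar figure below is invented for illustration, nothing comes from the vendor. The point is just that mitigations like human review don't change the model, they change the probability and cost terms:

    # Expected cost per alert from the two failure modes.
    # All rates and costs below are made-up illustrative numbers.

    def expected_cost(p_fp: float, cost_fp: float,
                      p_fn: float, cost_fn: float) -> float:
        """Expected cost contributed by false positives and false negatives."""
        return p_fp * cost_fp + p_fn * cost_fn

    # Armed response to every raw alert: false positives are very expensive.
    print(expected_cost(p_fp=0.05, cost_fp=1_000_000,
                        p_fn=0.001, cost_fn=50_000_000))  # 100000.0

    # Add a human review step (assume it cuts effective false positives 10x
    # without changing the miss rate).
    print(expected_cost(p_fp=0.005, cost_fp=1_000_000,
                        p_fn=0.001, cost_fn=50_000_000))  # 55000.0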
Given the probability of police officers in the USA treating any action as hostile and then ending up shooting him, a false positive here is the same as swatting someone.
The system here sent the police off to kill someone.
We answered the screams at the door to guns pointed at our faces, and countless cops.
It was explained to us that this was the restrained version. We got a knock.
Unfortunately, I understand why these responses can't be neutered too much. You just never know.
Reality is there are guns in schools every day. “Solutions” like this aren’t making anyone safer. School shooters don’t fit this profile: they are planners, not impulsive people hanging out at a social event.
More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.
Um. That's not really the danger here.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
My read of the "Um" and the quoting was that you thought I missed that first danger, and so were disagreeing in a dismissive way.
When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.
I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.
Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.
[0] Even though no other free society has to pay that price, but whatever.
This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.
So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
Make them pay money for false positives, not just provide support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.
Decision-maker accountability is the only thing that halts bad decision-making.
This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.
It already cost money: paying for the time and resources that were misappropriated.
There need to be resignations, or jail time.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).
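The base rate matters enormously here. A minimal Bayes sketch (all numbers assumed for illustration, not the vendor's real figures) shows why even a seemingly accurate detector produces mostly false alarms when actual guns-on-camera are vanishingly rare:

    # P(actual gun | alert) via Bayes' rule. All inputs are assumptions.
    base_rate = 1e-6            # assumed fraction of frames that truly show a gun
    sensitivity = 0.99          # assumed true-positive rate
    false_positive_rate = 1e-4  # assumed: 1 in 10,000 innocent frames flagged

    p_alert = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    precision = sensitivity * base_rate / p_alert
    print(f"P(actual gun | alert) = {precision:.2%}")  # ~0.98% with these inputs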
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
But the fact that the police showed the photo does suggest that maybe they did manually review it before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there was no AI involved, and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume that they let the fact of the AI flagging it override their own judgment to some degree.
Prioritize your own safety by not attending any location fitted with such a system, or any location deemed so dangerous that such a system is desired.
the AI "swatted" someone.
How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?
First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.
This is a really bad idea right now. The technology is just not there yet.
Why do you believe this? In the US, cops will cower outside of a school with an armed gunman actively murdering children, forcibly detain parents who wish to go in if the cops won't, and voters will then re-elect everyone involved.
In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.
Over 50% of the country blamed the protesting students at Kent state for daring to be murdered by the national guard.
Cops can shoot people in broad daylight, in the back, with no justification or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes. And as long as the people who die are mostly black, half the country will spout crap like "they died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong," while the cops take selfies reenacting the murder for laughs and talk about how terrified they are, thanks to BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still heart disease, of course.
Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.
The technology literally can NEVER be there. It is completely impossible to positively identify a bulge in clothing as a handgun. But that doesn’t stop irresponsible salesmen from making the claim anyway.
* Even hundreds of cops in full body armor and armed with automatic guns will not dare to engage a single "lone wolf" shooter doing a killing spree in a school; the heartless cowards may even prevent the parents from going inside to rescue their kids: Uvalde school shooting incident
* Cop on an ego trip will shoot down a clearly harmless kid calmly eating a burger in his own car (not a stolen car): Erik Cantu incident
* Cops are not there to serve the society, they are not there to ensure safety and peace for the neighborhood, they are merely armed militia to protect the rich and powerful elites: https://www.alternet.org/2022/06/supreme-court-cops-protect-...
> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.
What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?
1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
How does this not spiral out of control?
A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...
Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.
Email your state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.
Since coordinating this with a bunch of strangers (i.e., the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease the effectiveness, but even past that point it's better than where we are, which is silent public apathy.
Note: PreCheck is incredibly quick and easy to get; GE is time-consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.
Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."
Sure, can't argue with that. But doesn't it bug you just a little that (paying a fee to avoid harassment) doesn't look all that dissimilar from a protection racket? As to whether it's a few bucks or many, now you're just a mark negotiating the price.
Also my partner has told me that apparently my armpits sometimes smell of weed or beer, despite me not coming in contact with either of those for a very long time, and now I definitely don't want to get taken into a small room by a TSA person (After some googling, apparently those smells can be associated with high stress)
(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)
I started looking at people trying to decide who looked juicy to the security folks and getting in line behind them. They can’t harass two people in rapid succession. Or at least not back then.
The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don’t know why I thought they would tag her, but they did. I don’t fly well, and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn’t rude, I’m going to do. But probably her day would have been better for not getting searched than mine was.
In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.
Also, no need to escalate this into a race issue.
That would probably eliminate the need for the TSA security theater, so it will never happen.
Instead of:
1. AI detects gun on surveillance
2. Dispatch armed police to location
It should be:
1. AI detects gun on surveillance
2. Human reviews the pictures and verifies the threat
3. Dispatch armed police to location
I think the latter version is likely what already took place in this incident, and it was actually a human that also mistook a bag of Doritos for a gun. But that version of the story is not as interesting, I guess.
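For what it's worth, the gating in the second version is only a few lines of orchestration. A minimal sketch, with entirely hypothetical names (request_review, dispatch.send, etc. are stand-ins, not any vendor's real API):

    from dataclasses import dataclass

    @dataclass
    class Detection:
        frame_id: str
        label: str
        confidence: float

    def handle_detection(det: Detection, review_queue, dispatch) -> None:
        # Step 1 happened upstream: the model produced `det`.
        # Step 2: never dispatch on the raw detection; a trained human confirms first.
        verdict = review_queue.request_review(det)
        if verdict.confirmed_weapon:
            # Step 3: only now involve armed response, with the reviewer's notes attached.
            dispatch.send(det.frame_id, context=verdict.notes)
        else:
            review_queue.log_false_positive(det)  # feed back into model evaluation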
1. https://www.nottinghammd.com/2025/10/22/student-robbed-outsi...
2. https://www.si.com/high-school/maryland/baltimore-county-hig...
3. https://www.wbaltv.com/article/knife-assault-rossville-juven...
4. https://www.wbal.com/stabbing-incident-near-kenwood-high-sch...
5. https://www.cbsnews.com/baltimore/news/teen-injured-after-re...
The crime stats seem fine to me. In a city like Baltimore, the numbers you've presented are shockingly low. When I was going through school, it was quite common for bullies to rob kids... even on campus. Teachers pretty much never did anything about it.
[0] Maybe the guy is a rapist, and maybe he isn't. If he is, that's godawful and I hope he goes to jail and gets his shit straight.
Behold - a real life example of a "Not a hotdog" system, except this one is gun / not-a-gun.
Except the fictional one from the series was more accurate...
I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”
> Baltimore County Public Schools echoed the company’s statement in a letter to parents, offering counseling services to students impacted by the incident.
(Emphasis mine)
We already went through this years ago with all those terrorism databases, and we (humanity) have learned nothing: any database will have a percentage of erroneous data; it is impossible to eliminate erroneous data completely. Therefore, any database used to identify <fill in the blank> will produce erroneous conclusions. It's been observed over and over again, and governments can't help telling themselves "this time it will be different because <fill in the blank>," e.g., AI.
In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.
This case was a horrifying failure of the entire system that up until that point had fairly decent results for children who end up having to be taken away from their parents and later returned once the Mom/Dad clean up their act.
I wonder how effective an apology and explanation would have been? Just some respect.
That is the real issue.
Police force anywhere else in the world that know how to behave would have approched the student, have had a small chat with him, found out all he had in hands was a bag of doritos, maybe would have asked politely to see the content of his bag, explaining the search has been triggered by an autodetection system that may lead to occasional errors and wished him a good day.
No. Trusting AI is clearly the issue.
If there was a 9-1-1 call to the police that there was an active shooter at your kids school, how would you want the police to show up?
There are two ways to deploy technology like this:

1. To enhance human productivity; or
2. To replace humans.
Companies, particularly in the US, very much want to go with (2) and part of the reason they can is because there are zero consequences for incidents like this.
A couple of examples spring to mind:
1. the UK Post Office (Horizon) scandal, where a bad system accused postmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false, and it was the system's fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and
2. the Hertz case, where people who had returned cars were erroneously flagged as car thieves and a report was made to police. This created hell for people who would often end up with warrants they had no idea about and would be detained on random traffic stops over a car that was never stolen.
Now these aren't AI but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check to see if the car is actually stolen.
In the Post Office situation, the system needs to show its work. Deployment should be run against the existing accounting system, and discrepancies between the two need to be investigated for bugs until the system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before filing a criminal complaint.
And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themslves be criminally charged.
We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.
This problem doesn't exist in Europe or Japan because guns aren't that ubiquitous, which means that the police have the time to think before they act, which makes them less likely to escalate and start shooting. Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
>Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
...and you are correct.
But I really wouldn't want to send my kid to a school that surveils students all the time, and uses garbage software like this that directly puts kids into dangerous situations. I feel like with a private school, I'd have more choice and ability to influence that sort of thing.
No. If you're investigating someone and have existing reason to believe they are armed then this kind of false positive might be prioritizing safety. But in a general surveillance of a public place, IMHO you need to prioritize accuracy since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope, you should be catching the blatantly obvious ones at scale though.
It's pretty clearly documented how it works here:
https://www.omnilert.com/solutions/gun-detection-system
https://www.omnilert.com/solutions/ai-gun-detection
https://www.omnilert.com/solutions/professional-monitoring
This exact scenario is discussed in [1]. The "human in the loop" failed, but we're supposed to blame the human, not the AI (or the way it was implemented). The humans serve as "moral crumple zones".
""" The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight. """
I suspect, though, that the AI flagging that image heavily influenced the cop doing the manual review. Without the AI, I'd expect that a cop manually watching a surveillance feed would have found nothing out of the ordinary, and this wouldn't have happened.
So I agree that it's weird to just blame the human in the loop here. Certainly they share blame, but the fact of an AI model flagging this sort of thing (and doing an objectively terrible job of it) in the first place should take most of the blame here.
"Omnilert" .. "You Have 10 Seconds To Comply"
-now targeting Black children!
Q: What was the name of the Google AI Ethicist who was fired by Google for raising the concern that AI overwhelmingly negatively framed non-white humans as threats .. Timnit Gebru
https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google
We, as technologists, ARE NOT DOING BETTER. We must do better, and we are not on the "DOING BETTER" trajectory.
We talk about these "incidents" with breathless, "Wwwwellll if we just train our AI better ..." and the tragedies keep rolling.
Q2: Which of you has had a half dozen Squad Cars with Armed Police roll up on you, and treat you like you were a School Shooter? Not me, and I may reasonably assume it's because I am white, however I do eat Doritos.
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
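The multi-image idea is cheap to approximate: debounce across frames so that a single odd frame (a shadow, a crumpled foil bag) can't trigger a dispatch on its own. A minimal sketch, with made-up window sizes and thresholds:

    from collections import deque

    WINDOW = 10        # assumed: consider the last 10 frames
    REQUIRED_HITS = 8  # assumed: 8 of 10 must agree before alerting
    THRESHOLD = 0.9    # assumed per-frame confidence cutoff

    recent = deque(maxlen=WINDOW)

    def should_alert(frame_confidence: float) -> bool:
        """Only alert when most of the recent frames independently look like a gun."""
        recent.append(frame_confidence >= THRESHOLD)
        return len(recent) == WINDOW and sum(recent) >= REQUIRED_HITS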
Just put the full footage in front of an unbiased third party for a multi-stage verification first. The problem space isn't "is that weird shadow in the picture a gun or not?" it's "does the kid in the video have a gun?". It's not hard to figure out the difference between a bag of chips and a gun based on body language. Presumably the kid ate chips out of the bag? Using certain motions that one makes when doing that? Presumably the kids around him all saw the object in his hands and somehow did not react as if it was a gun? Jeez.
Not sure I agree. The AI flagging it certainly biased the person doing the manual review toward agreeing with the AI's assessment. I can imagine a scenario where there was no AI involved, just a human watching that same surveillance feed, and (correctly) not seeing anything alarming in it.
Also I expect the AI completely failed at context. I wouldn't be surprised if the full video feed, a few minutes (or even seconds) before the flagged frame, shows the kid crumpling up the empty Doritos bag and stuffing it in his pocket. The AI probably doesn't keep all that context around to use when making a later decision, and giving just the flagged frame of video to the human may have caused them to miss out on important context.
e.g. Not "this student has a gun" but "this model says the student has a gun with a probability of 60%".
If an AI can't quantify its degree of confidence, it shouldn't be used for this sort of thing.
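Concretely, the alert could carry the number instead of burying it, and route on it. A minimal sketch with invented thresholds, not any vendor's actual behavior:

    def format_alert(label: str, confidence: float) -> str:
        return f"Model reports '{label}' with probability {confidence:.0%}"

    def route(label: str, confidence: float) -> str:
        # Invented thresholds: high confidence pages a human reviewer,
        # mid confidence queues for review, low confidence is logged only.
        if confidence >= 0.95:
            return "page human reviewer now: " + format_alert(label, confidence)
        if confidence >= 0.60:
            return "queue for human review: " + format_alert(label, confidence)
        return "log only: " + format_alert(label, confidence)

    print(route("handgun", 0.60))
    # queue for human review: Model reports 'handgun' with probability 60%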
I wanna see the frames too.
And police do this kind of stuff all the time (or in the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I’m not sure what the answer is but I definitely feel that “security” system like this that are purchased and rolled out need to be highly regulated and be coupled with extreme accountability and consequences for false positives.
Abolish SWAT teams. Do away with the idea that the state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
I thought those two things were impossible?
"Sorry, that's Nacho gun"
Then, just sit back and enjoy as the lawsuit unfolds.
It is about the question. The answer will become very clear once you understand what question was presented to the inference model, and of course what data and context were fed to it.
I hope they sue the police department over this.
Imagine the head scratching going on with execs who are surprised things don't work when probabilistic software is being used for deterministic purposes, without realizing there's a gap between the two by nature.
I can't. The execs won't care and probably in their sadist ways, cheer.
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like? What does a man in a bulky sweatshirt with a pistol on his back walk like? What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
[1]: https://www.omnilert.com/blog/what-is-visual-gun-detection-t...
Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that's true. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.
So I'm sure that analysis was either deliberately never performed, or was and then was ignored and not publicized. So, yes, it's a fraud.
(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools—shooters would just avoid any scheme that involved loitering around with a firearm where the cameras can see them, and count on starting things very soon after arriving—like, once you factor in second order effects, too, there's just no hope for these standing up to real scrutiny)
Gait analysis is really good these days, but normal, small objects in a bag don't impact your gait.
Can someone outline a more pragmatic, if not likely, course of what happens next after this? Is it swept under the rug and we move on?
* the student was black
Is that really a coincidence?
It's just a matter of time before this or something worse happens.
ED-209 mistakenly identifies a young man as armed and blows him away in the corporate boardroom.
The article even included an homage to:
“Dick, I’m very disappointed in you.”
“It’s just a small glitch.”
Edit: And racism. Just watched the video.
The real question is: would this have happened in an upper/middle-class school?
The student has dark skin. And is attending a school in a crime ridden neighborhood.
Were it a white student in a low crime neighborhood, would they have approached him with guns drawn?
The AI failure is masking the real problem - bad police behavior.
That would have been bold
But really this is typical of cop overreaction with escalation and ego rather than calm, legal, and reasonable investigation. Karens may SWAT people they don't like, but it's police officers who must use reasonableness and restraint to defend the vestiges of their impartiality and community confidence based on asking questions and gathering evidence in a legal and appropriate manner rather than rushing to conclusions. Case in point: The NYC rough false arrest of a father in front of his kid to retrieve his mis-delivered package where the egomaniacal bully cop aggressively lectures a guy for his own mistake to cover his own ego while blaming the victim: https://youtu.be/LXd-4HueHYE
The proofs are there.
Philosophers mulled this over long ago and made clear statements as to why AI can't work.
Though not for a second do I misunderstand that it is "all in" for AI, and we all get to go for the 100 trillion dollar ride to hell.
Can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? Fuck ya we can!
But anything that requires understanding is forever out of reach, which unfortunately is also lacking in the people pushing this thing now.
I hope this kid gets the compensation he deserves.
What a tragedy. I'm sure racial profiling on behalf of the AI and the police had absolutely nothing to do with it.
Because that's not what slander is.
What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?
I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?
(* see also "how to lie with statistics").
Fuck you.
It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".
Or am I kidding? AI is only as good as its training and humans are...not bastions of integrity...
But…
Doritos should definitely use this as an advertisement: "Doritos - the only weapon of mass deliciousness," or something like that.
And of course pay the kid, so something positive can come out of the experience for him.