Whether it was AI that flagged her, a witness who saw her, or her IP address appearing in the logs. Did anybody bother to ask her "where were you the morning of July 10th between 3 and 4pm?" But that's not what happened; they saw the data and said "we got her".
But this is the worst part of the story:
> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”
That's the lesson? Never go back to North Dakota. No, challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then it was a man dragged away while his family cried [1]. Unless we fight back, we are all guilty until cleared.
[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/
The incentive is to prosecute and prove the charges.
Speaking from the experience of being falsely accused after calling 911 to stop a drunk woman from driving.
The narrative they "investigated" was so obviously false that bodycam evidence directly contradicted multiple key facts. Officials are interested only in proving the case. Thankfully the jury came to the right verdict.
To me the scariest part of this as a process is how many times (I’d casually estimate at least 75%) it is blindingly obvious that the prosecutor has not read the statement of charges or officer statements until everyone is in front of the judge. I get that, on one hand, this judge seems to often be handling probable cause hearings, but so many of these should never have resulted in any paperwork being turned in to the prosecution, let alone anyone having to show up in court.
Long story short: my emotionally abusive partner got drunk and verbally combative, despite my attempts to de-escalate. When nothing worked I went into my bedroom and locked the door. She started pounding on the door and demanded her things. I gave them to her and told her she needed a ride home and was no longer welcome. She verbally abused and provoked me for 10 minutes before getting in her car. I took the keys and called 911. A few minutes before the cops arrived, she grabbed me, causing us both to fall, and then told them I threw her to the ground. We both had a couple of scrapes, so they arrested us both.
She interfered with the 911 call, filed a false police report, assaulted me, and caused property damage. She got charged with class C assault only, and got a dismissal. I felt like I was seen as guilty until proven innocent.
Fortunately I recorded all her verbal abuse (the prosecution tried to use it against me and brought in a DV expert to explain both her conduct and my 911 call as typical in IPV cases).
Fortunately the jury didn't buy it. I was literally being threatened with violence in my own home for telling her to leave. Between that and the bodycam statements contradicting her testimony I was shocked that they didn't drop the charges or offer a favorable plea deal.
The judge was absolutely fair, prosecutor bent on punishment, alleged victim was attempting to ruin my life (as captured in my audio)
Whew!
In the end the claims were so obviously fabricated that my attorney put on no defense. It was clear that the accuser was not credible.
Perjury was provable. No consequences for her. This happened in Brazoria County.
Minimum 1 year of jail time for grossly wrongful arrests that could be avoided with standard procedure or investigation tactics that were not applied.
What we really need is a change in police culture.
The police today have zero incentive to serve the public, they have zero skin in the game and can literally get away with murder.
Any time you hear the call for "law and order", that is the audience that supports the current system, because they like it like this.
The truth is much more complicated and involves politics. For example Seattle (and possibly other cities?) enacted a law that involves paying damages for being wrong in the event of bringing certain types of charges. But that has resulted in some widely publicized examples where the prosecutor erred by being overly cautious.
And to nobody’s surprise, failure to pay this bill is in itself a Class B felony…
I don't get it, if they only care about prosecuting and proving the case, wouldn't they go by the bodycam evidence? They didn't prove the case. Maybe if their incentive was to prosecute and prove the charges, they'd go by the obvious evidence. Or am I missing something here?
"He took my phone and it was dead" -> bodycam showed her using the phone when police arrived
I provided a recording of my accuser clearly being drunk, aggressive, and threatening me while I was de-escalating. I was the one who called 911 to stop her from drunk driving. Her speech was clearly slurred.
Instead of realizing her story doesn't add up, the prosecutor brought in a DV expert to explain how it's typical for abusers to call 911 and that her behavior was a normal reaction to being assaulted.
Thankfully, the jury knew better.
That seems to be in the realm of possibility here, if I am understanding things correctly (imo).
A month or so ago, people on HN discussed facial recognition for identifying victims and perpetrators in child exploitation material, and people were complaining that Meta did not allow this fast enough. Neither the article nor the people in that discussion drew any connection to the possibility that the issues in this article could happen. People seemingly want to think that the lesson is "Never go back to North Dakota", as that is a much easier lesson than considering false positives in detection algorithms and their impact on a legal system that is constrained in budget, time, training and incentives.
We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.
You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool which could be used to destroy someone's life.
They don't validate the results of their fellow officers, or the validity of warrants, or anything else that predicates an arrest. Why would they start with this?
To the extent people trust AI to be infallible, it's just laziness and rapport (AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking as many humans would, it's the quintessential enabler[1]) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.
The models all have disclaimers that state the inverse. People just gradually lose sight of that.
[1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.
Why it happens is secondary to the fact that it does.
> The models all have disclaimers that state the inverse. People just gradually lose sight of that.
Those disclaimers are barely effective (if at all), and everyone knows that. Including the ones putting them there.
I see all kinds of people being told that AI-based detection software used to flag AI writing is infallible!
You want to make sure people aren't using fallible AI? Use our AI to detect AI! What could possibly go wrong.
"The trauma, loss of liberty, and reputational damage cannot be easily fixed," Lipps' lawyers told CNN in an email.
That sounds a LOT like a statement you make before suing for damages, not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."
This lady probably just wants to go back to normal life and get some money for the hell they put her through. She had never been on an airplane before; I doubt she is going to take on the entire system like you suggest. "Challenge the entire system" is easier said than done. What does that even mean, exactly?
...Unable to pay her bills from jail, she lost her home, her car and even her dog.
There is not a jury in the country that will side against the woman. I am not even sure who will make the best pop culture mashup - John Wick or a country song writer?(Also, what happened to journalism - no Oxford comma?)
Where your home was lost to foreclosure because one JUDGE did not look at the paperwork.
There should be a way to personally sue somebody when they don't do their job. Protecting the innocent. The JUDGE failed badly here.
Flimsy evidence would mean no warrant. Do your basic investigation please... Rubberstamping JUDGE caused this.
Why are they not named? As if they were a spectator. In fact they are the cause.
Effectively it just raises taxes to cover the cost of these failed prosecutions.
Everytime one of these cases happens, a cop and a prosecutor should be out of a job permanently. Possibly even jailed. The false arrest should lose the cop their job and get them blacklisted, the prosecution should lose the prosecutor's right to practice law.
And if the police union doesn't like that and decides to strike, every one of those cops should simply be fired. Much like we did to the ATC. We'd be better off hiring untrained civilians as cops than to keep propping up this system of warrior cops abusing the citizens.
There is actually a federal register for LEOs that have been terminated for cause or resigned to avoid termination.
The police unions that operate in the jurisdictions that employ 70% of US police have negotiated into their CBAs that the register “cannot be used for hiring or promotional decisions”. Read into that what you will.
It absolutely was. There's no question of this. Now we need to ask how was the system marketed, what did the police pay for it, how were they trained to use it?
> anybody bothered to ask her "where were you the morning of july 10th between 3 and 4pm.
Legally that amounts to "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered in first.
> we are all guilty until cleared.
This is not a phenomenon that started with AI. If you scratch the surface, even slightly, you'll find that this is a common strategy used against defendants who are perceived as not being financially or logistically capable of defending themselves.
We have a private prison industry. The line between these two outcomes is very short.
I just want to understand your argument: you believe that any alibi provided is hearsay, and has no legal value, and that they can't even take the statement in order to validate it? That's your position?
You can offer your story to the police but the fact that you did or what you said to them will not come into evidence in court. You cannot call the officer to the stand and then ask them to repeat in court what you said. That would be "hearsay." So, for a lot of reasons, if you're already arrested, you probably don't even want to tell them any of that. It can only be used against you and never for you. Get your lawyer and have them ready the case to prove that alibi for you.
How is that hearsay if she's directly testifying to her own whereabouts?
Hearsay would be if someone else was testifying "she was in X location on july 10th between 3 and 4pm", without the accused being available for cross
"I was at the library" is firsthand testimony.
"I saw her at the library" is firsthand testimony.
"I saw her library card in her pocket" is firsthand testimony.
"She was at the library - Bob told me so" is hearsay. Just look at the word - "hear say". Hearsay is testifying about events where your knowledge does not come from your own firsthand observations of the event itself.
Agreed in principle. But people like her do not have the resources, financial or emotional, to go through the legal system again. Unless there are charitable lawyers who are willing to do it on her behalf for free.
Better just to apply Musk or Altman software to the problem and avoid it entirely.
https://www.clearview.ai/privacy-and-requests
I have suddenly become very interested in New York's S1422 Biometric Privacy Act.
A judge and the warrant process are supposed to be the safeguard against police doing shady stuff (like relying on an AI hit to decide who committed a crime). But if the judges can't be bothered...
First, the detective used the FaceSketchID system, which has been in use since around 2014. It is not new or uniquely tied to modern AI.
Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to the court to issue the warrant.
The real question is why she was held in jail for four months. That is the part that I do not understand. My understanding is that there is a 30-day limit (the requesting state must pick up the defendant within 30 days). Regarding the individual involved, Angela Lipps, she has reportedly been arrested before, so it is possible she was on parole. Maybe they were holding her because of that?
Can someone clarify how that process works?
They probably did an "identity challenge," arguing that she is not the right person. But from Tennessee's perspective, she was considered the correct person to arrest, so there was no "mistaken identity" in their system. In other words: North Dakota wanted person X, and here is person X.
Once a judge in North Dakota reviewed the full evidence (and found that the person they issued the arrest warrant for was not the one they wanted), the case was dismissed.
Cops did not do a proper investigation and the judge green-lighted it.
It is all on the JUDGE or possibly a magistrate who approved a faulty warrant.
The judge failed the poor woman. FIRE him.
Then sue Clearview for big bucks.
Actually most criminal defense attorneys recommend not waiving your speedy trial rights. Yes, the defense goes in blind. But so does the prosecution, and they're the ones that have to make a case.
The usual result for defendants that don't waive their speedy trial rights is an acquittal if the case goes to trial (between 50-60%), which doesn't sound like a lot but prosecutors are expected to win >90% of their trials. Additionally, in many counties they don't have sufficient courtrooms to handle all the criminal trials within the speedy trial timeframe, so if the trial date comes and a courtroom is not available the case is dismissed with prejudice. Nonviolent misdemeanors are the lowest priority for a courtroom (and by that I mean even family law cases have priority over nonviolent misdos in most counties), so those cases are frequently dismissed a day or two before the trial date. Consequently, most prosecutors will offer better and better plea bargains as the trial date approaches.
This is even more true for murders, which is why murder suspects don't usually get charged for a year or two after the crime.
The timer starts from when you invoke it, though.
The 2 issues, which she may be caught in, are that it’s “speedy” from the perspective of a court, and that it really means “free from undue delays”.
There is no general definition of a speedy trial, but I think the shortest period any state defines is a month (with some states considering several months to still be “speedy”).
A trial can still be speedy even past that window if the prosecution can make a case that they genuinely need more time (like waiting for lab tests to come back).
It’s basically only ever not speedy if the prosecution is just not doing anything.
As the article gestures towards, challenging the extradition can greatly extend the timeline, from 30 days after the arrest to 90 days after a formal identity hearing. Which isn't fair and isn't intuitive, but is unfortunately a long-standing part of the system. (Even worse, this kind of mistaken identity can't be challenged in an extradition hearing; the question isn't whether she's the person who committed the crime but whether she's the person identified in the warrant.)
This is how it should work, but I still think it is important to discuss these failures in the context of AI risks.
One of the largest real-world dangers of AI (as we define that now) is that it is often confidently wrong and this is a terrible situation when it comes to human factors.
A lot of people are wired in such a way that perceived confidence hacks right through their amygdala and they immediately default to trust, no matter how unwarranted.
They picked her up in TN and held her for 4 months, even after:
The ND police knew the ID was fake and the person using it was not her. The ND police knew she had been in TN before, during, and after the crime.
She is still technically a suspect, even after all of this has come out.
What I still do not understand is why she spent nearly six months in a Tennessee jail. That part remains unclear and needs further explanation.
Source: I live in Fargo and have been following this story closely. Everyone here is pissed.
I wonder who is slandering her more... WOW
Maybe the city's insurance carrier hired a FIRM...
They will be taking a hit.
Maybe she objected to the extradition order without good counsel.
"I ain't never been to N. Dakota". She found out the hard way how the law works.
What about the banks being hit? Surely they have good cameras. This was bad mojo. I would think a Wells Fargo/BoA has a unit for this stuff.
Financial crimes get handled like this. The banks will be sued too, I suspect. Deep pockets settle out.
The fundamental problem is that among the 350 million people living in the United States, there are a lot of pairs of people who look pretty darn similar. It used to be impractical to ask a question like "who in the US looks like the person in this security footage", and so as a matter of practicality, once you found someone who looks like the suspect, you probably also have other evidence, even if it's pretty weak, linking them to the crime.
But with AI, you can ask "who in the US looks like this person", and so we need to re-calibrate what it means if all you know is that someone looks like a suspect. I am of the opinion that "looks like someone," in the absence of any other evidence, is reasonable suspicion, but not probable cause, that you are the person you look like. Reasonable suspicion is enough for the police to stop you on the street and ask for your ID, but not enough to arrest you. There are other data points that alone might not even be reasonable suspicion, but could be combined with "looks like someone" to make probable cause, such as "was near the place at the time the crime happened".
AI isn't really the problem, even whether or not the AI's determination that two people look alike is valid or reviewed by a human isn't the problem. The problem is assuming that because two people look alike they must be the same person, even if you have no other evidence of them being the same person.
"[I]t’s not just a technology problem, it’s a technology and people problem."
I can't. I just can't.
If you look at examples of people quoting on the internet, lots are out of context, paraphrased, or made up.
AI is just mimicking what it has seen.
However, the system uses a dragnet approach, and is checking against millions of people. If you are checking 300 million people, that 99.999% accuracy check is going to find 3,000 people, and AT LEAST 99.96% of those people are going to be innocent.
This is why we can’t have wide, automated surveillance.
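The dragnet arithmetic above can be sketched in a few lines. This is a minimal base-rate calculation, assuming (as the comment does) 300 million people scanned, a 99.999% accuracy rate, and (my assumption, not from the article) exactly one person in the set who actually matches the footage:

```python
# Base-rate sketch: a 99.999%-"accurate" face match run as a dragnet.
# Assumed numbers: 300 million people scanned, 1 true match in the set.

population = 300_000_000
false_positive_rate = 1 - 0.99999   # 0.001% of non-matches flagged anyway
true_matches = 1

false_positives = round(population * false_positive_rate)  # innocent hits
total_flagged = false_positives + true_matches
innocent_share = false_positives / total_flagged

print(f"{false_positives} false hits out of {total_flagged} flagged")
print(f"{innocent_share:.2%} of flagged people are innocent")
# → 3000 false hits out of 3001 flagged
# → 99.97% of flagged people are innocent
```

Even a per-comparison error rate that sounds vanishingly small produces thousands of innocent hits once you multiply it by the whole population, which is the point being made.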
https://pub.towardsai.net/the-air-gapped-chronicles-the-cour...
The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.
The problem here is incidental to the tool; it was done by the cops and therefore nobody will be held accountable.
Only one small little problem --- there is no way to tell if you are using it "correctly".
The only way to be sure is to not use it.
Using it basically boils down to, "Do you feel lucky?".
The Fargo police didn't get lucky in this case. And now the liability kicks in.
But...
> there is no way to tell if you are using it "correctly".
This simply isn't true, at least in cases like this.
I know common sense isn't really all that common, but why would you give more credence to an untested tool than an untested crack-addled human informant?
The entire point of the informant, or the AI in this instance, is to generate leads. Which subsequently need to be checked.
Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.
Otherwise, I'd say it's an extremely lazy argument
I wonder if AI / shadow IT will change that.
I doubt it.
Computing has traditionally been all about math and logic. This is really all that a binary logic computer is capable of. When applied to this purpose, it can offer highly accurate results at very low cost.
Current AI is an attempt to branch out from simply calculating into decision making. But it does so in the worst possible way --- using probability and statistics (aka guesswork) instead of logic and reasoning. In other words, AI offers questionable results at high cost.
As this article shows, relying on guesswork is a legal liability issue waiting to happen in many (if not most) operating environments.
I fully agree, this seems like a legal liability issue waiting to happen.