IHL actually prohibits the killing of persons who are not combatants or "fighters" of an armed group. Only those who have the "continuous function" to "directly participate in hostilities"[1] may be targeted for attack at any time. Everyone else is a civilian who can only be directly targeted when, and for as long as, they directly participate in hostilities, such as by taking up arms, planning military operations, laying mines, etc.
That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack - all the others must be arrested and/or tried. Otherwise, the list of permissible civilian targets gets so wide that in any regular war pretty much any civilian could be targeted, such as the bank employee whose company has provided loans to the armed forces.
Lavender is so scary because it enables Israel's mass targeting of people who are protected against attack by international law, providing a flimsy (political, but not legal) justification based on their alleged association with terrorists.
[1]: https://www.icrc.org/en/doc/assets/files/other/icrc-002-0990...
There is some incredible magic that often happens: as soon as anyone is targeted and killed, they immediately transform from civilians to "collaborators", "terrorists", "militants" etc. Of course everything is classified and restricted to avoid anyone snooping around and asking questions.
In the Guardian article, an IDF spokesperson says it exists and is only used as the former, and I'm sure that's what was intended and maybe even what the higher-ups think, but I suspect it's become the latter.
I think the loophole here is that a weapons manufacturing facility is almost certainly a strategic military target, and international law allows you to target the infrastructure provided the military advantage gained is proportional to the civilian deaths.
So you can't target the individuals, but according to international law it's fine to target the building they are in while they are still inside, provided it's militarily worth it.
It seems wrong that you can't target weapon manufacturers; can you cite a source? Weapon manufacturers contribute to the military action, and destroying them contributes to military advantage.
https://www.abc.net.au/news/2024-04-03/world-central-kitchen...
https://www.abc.net.au/news/2024-04-02/israeli-strike-that-k...
Pretty disgraceful (which itself feels like a disgracefully unimpactful thing to say about people losing their lives while doing charity work).
The problem with Hamas is that they don't shy away from hiding combatants in civilian clothing or from using women and children as suicide bombers. There is more than enough evidence of this tactic, dating back many, many years [1].
By not just failing to prevent such war crimes but actively ordering them, Hamas's leadership has stripped its civilian population of the protections of international law.
> Otherwise, the list of permissible civilian targets gets so wide that in any regular war pretty much any civilian could be targeted, such as the bank employee whose company has provided loans to the armed forces.
In regular wars, it's uniformed soldiers against uniformed soldiers, away from civilian infrastructure (hospitals, schools, residential areas). The rules of war make deviating from that a war crime on its own, simply because it places the other party in the dilemma of either having no chance to wage the war or committing war crimes of its own.
[1] https://en.wikipedia.org/wiki/Use_of_child_suicide_bombers_b...
In theory, yes. In practice: in which make-believe world is this true?
> Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets.
Obviously any judgement is probabilistic.
My understanding is that AI in its current form is not an applicable technology to be anywhere near this type of use.
Again, my understanding: inference models are by their very nature largely non-deterministic, in the sense that it is hard to evaluate them accurately against specific desired outcomes. They need large-scale training data to reach even low levels of accuracy, and that kind of training data just isn't available; it's all likely to be based on one big hallucination, is my take. I'd be surprised if this AI model was even 10% accurate. It wouldn't surprise me if it was less than 1% accurate. Not that accuracy appears to be critical, from what I've read.
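One way to see why a headline accuracy figure can mislead at this scale is the base-rate problem. Here is a minimal sketch; every number in it (population size, prevalence, error rates) is a hypothetical assumption for illustration, not a figure from the article or the reporting:

    # Base-rate sketch: when true targets are rare, even a classifier with
    # seemingly good error rates flags mostly innocent people.
    # ALL numbers are hypothetical assumptions, not reported figures.

    population = 2_000_000   # people scanned (assumed)
    prevalence = 0.01        # assume 1% are actual operatives
    tpr = 0.90               # true positive rate (recall), assumed
    fpr = 0.10               # false positive rate, assumed

    operatives = population * prevalence
    true_flags = operatives * tpr                  # operatives correctly flagged
    false_flags = (population - operatives) * fpr  # civilians wrongly flagged

    precision = true_flags / (true_flags + false_flags)
    print(f"flagged: {true_flags + false_flags:,.0f}")  # -> 216,000
    print(f"precision: {precision:.1%}")                # -> 8.3%

Under those assumed numbers, over 90% of the people flagged would be false positives, which is consistent with the skepticism above about real-world accuracy.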
The Guardian article (https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...) makes me wonder whether AI development should be allowed at all. I didn't even have that thought before today.
This specific application and the claimed rationale is as close as I have come to seeing what I consider true and deliberate "Evil application" of technology out in the open.
Is this a naive take?
The “AI” exists to retcon the justification for any particular genocidal act, but this is really just an old school mindless slaughter driven by anger and racism.
Someone will double down and include AI into the execution phase via AI controlled drones, tanks, etc. Then they will claim no responsibility and blame the ghost-in-the-shell.
I really doubt that's the case; it seems more like a "fire first on any suspicion at all and ask questions later" policy. If there were an intentional policy to kill journalists, aid workers, and medical staff, you'd see a lot more dead.
And you have to be extremely naive or one-sided not to realize that Hamas does use those types of roles as cover for their operations.
Not trying to justify Israel's actions because they are fucked up, but based on all the evidence we have you are clearly wrong.
I wonder what the alternative is in a case like this. I know very little about military strategy - without the AI, would Israel have been picking fewer targets, or picking them more haphazardly? I think there may be some misreading of this article where people imagine that if Israel weren't using an AI they wouldn't drop any bombs at all; that's clearly unlikely given that there's a war on. Obviously people, including innocents, are killed in war, which is why we all loathe war and pray for the current one to end as quickly as possible.
> “Everything was statistical, everything was neat — it was very dry,” B. said. He noted that this lack of supervision was permitted despite internal checks showing that Lavender’s calculations were considered accurate only 90 percent of the time; in other words, it was known in advance that 10 percent of the human targets slated for assassination were not members of the Hamas military wing at all.
So, there was no human sign-off. I guess the policy itself was ordered by someone, but all the ongoing targets that were selected for assassination were solely authorized by the AI system's predictions.
This sentence is horrifically dystopian... "in order to save time and enable the mass production of human targets without hindrances"
It seems obvious to me that the alternative would be a slower process for picking targets leading to fewer overall targets picked and the guarantee that a human conscience is involved in the process.
And you're probably right that the alternatives may be worse; the folks behind Lavender could probably even prove it with data. But there should be a moral impetus to always have a human in the loop regardless, and any such attempt at justification won't capture the public's attention like a Skynet doomsday happening over the civilians in Gaza.
Don't Create The Torment Nexus
I think that once you start from the viewpoint that you're not going to create the Torment Nexus, it becomes a lot easier to avoid creating the Torment Nexus.
The IDF only read the first half of the classic IBM slide!
A lot of news around the bombing called out the uniquely large scale and rapidity of the campaign.
This was a preview of future conflicts.
We're entering the WWI phase of new technology being brought without rules to conflicts where the abuses will be horrific until rules are finally put in place.
Another system would signal that the target was at home and that it was time to bomb. This system used phones to geo-locate, and due to the nature of living in Gaza, phones change hands often.
Without Lavender they would have dropped fewer bombs, IMO.
At least AI pretends to look at some data instead of just defaulting to tribal bloodlust... who's to say it can't be more ethical? It doesn't take much to beat our track record.
This is the second paragraph:
"In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict."
>processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.
This is really no different from how the world was working in 2001, choosing whom to send to Gitmo and other more secretive prisons, or bombing their location.
More than anything else, it feels like, just as in the corporate world, the engineers in the army are overselling the AI buzzword to do exactly what they were doing before it existed.
If you use your paypal account to send money to an account identified as ISIS, you're going to get a visit from a 3 letter organization really quick. This sounds exactly like that from what the users are testifying to. Any decision to bomb or not bomb a location wasn't up to the AI, but to humans
By the world you mean the US, but yes you are correct.
"NSA targets SIM cards for drone strikes, ‘Death by unreliable metadata’"
https://www.computerworld.com/article/2475921/whistleblower-...
"Gitmo" didn't open until 2002
Okay, how is this not a war crime?
There are ~2M civilians who live in Gaza, and many of them don't have access to food, water, medicine, or safe shelter. Some of those unfortunates live above, or below, Hamas operatives and their families.
"Oh, sorry, lol." "It was unintentional, lmao, seriously." "Our doctrine states that we can kill X civilians for every hostile operative, so don't worry about it."
The war in Gaza is unlike Ukraine -- where Ukrainian and Russian villagers can move away from the front, either towards Russia or westwards into Galicia -- and where nobody's flattening major population centers. In Gaza, anybody can evidently be killed at any time, for any reason or for no reason at all. The Israeli "strategy" makes the Ukrainians and Russians look like paragons of restraint and civility.
I was working in Urs's Google Technical Infrastructure division. I read about the project in the news. Urs had a meeting about it where he lied to us, saying the contract was only $9M. It had already been expanded to $18M and was on track for $270M. He and Jeff Dean tried to downplay the impact of their work. Jeff Dean blinked constantly (lying?) while downplaying the impact. He suddenly stopped blinking when he began to talk about the technical aspects. I instantly lost all respect for him and the company's leadership.
Strong abilities in engineering and business often do not come with well-developed morals. Sadly, our society is not structured to ensure that leaders have necessary moral education, or remove them when they fail so completely at moral decisions.
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...
And, personally, I think that stories like this are of public interest - while I won’t ask for it directly, I hope the flag is removed and the discussion can happen.
I would hope they can be unflagged and merged, this appears to be an important story about a novel use of technology.
> "The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list."
It's one thing to use these systems to mine data on human populations for who might be in the market for a new laptop, so they can be targeted with advertisements - it's quite different to target people with bombs and drones based on this technology.
Both use personal metadata, and both can horribly get it wrong.
I would argue that the only outcome it has had that directly relates to IDF objectives has likely been negative (i.e., the unintended killing of hostages).
Sadly, I think that the continued use of this AI is supported because it is helping to provide cover for individuals involved in war crimes. I wouldn't be surprised if the AI really weren't very sophisticated at all and that to serve the purpose of cover that doesn't matter.
They say the objective is to destroy Hamas and save the hostages.
I think the actual objective is to murder as many Palestinians as possible. At the very least, that is the actual objective of some IDF soldiers. They've said as much publicly.
Whether or not that's the actual objective intentionally or unintentionally is just arguing semantics at this point.
Their invasion of Gaza City went way better than most analysts expected, with minimal casualties among Israelis. So probably? Hard to compare with the alternative reality where they select the targets the old way.
That their stated objectives are likely unachievable is a different issue.
Hamas has been considerably diminished. It's not accurate to say the war has been a "total failure".
The world should not forget this.
The legal question is whether the civilian casualties are proportional to the concrete military value of the target.
A question that's worth considering is whether, when considering proportionality, all civilians (as defined by law) are made equal in a moral sense.
For example, the category "civilian" includes munitions workers or those otherwise offering support to combatants on the one hand, and young children on the other. It also includes members of the civil population who are actually involved in hostilities without being a formal part of an armed force.
The law of armed conflict doesn't distinguish these, though I think people might well distinguish, on a moral level, between casualties among young children, munitions workers, and informal combatants.
Sure would be convenient if Hamas is 6% of the population
For 100 targets, 90 are 'correct'; add 20 civilians per target and you get 90/2100, or about 4.3% real accuracy.
Say you use a model that's only 50% accurate and limit yourself to 10 civilians per target: you're at 50/1100, or about 4.5% accuracy!
I guess my point is that no self-respecting data scientist would release a 50% accurate model, let alone one used to make life-or-death decisions; and yet, in the application of this model, the decisions humans have made about its use leave it no better than doing exactly that.
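A minimal sketch of the arithmetic above (the target counts, accuracy figures, and collateral ratios are the ones from this comment, not verified data):

    def real_accuracy(targets: int, model_accuracy: float,
                      civilians_per_strike: int) -> float:
        # Fraction of total deaths that are correctly identified militants.
        correct_kills = targets * model_accuracy
        total_deaths = targets * (1 + civilians_per_strike)
        return correct_kills / total_deaths

    # 90%-accurate model, 20 civilians per strike: 90 / 2100
    print(f"{real_accuracy(100, 0.90, 20):.1%}")  # -> 4.3%
    # 50%-accurate model, 10 civilians per strike: 50 / 1100
    print(f"{real_accuracy(100, 0.50, 10):.1%}")  # -> 4.5%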
The crux of the story is the automated target acquisition and the policy of engaging targets in civilian homes - there are intelligence errors and collateral damage.
The questions are: is the intelligence gathering and decision making ethical, and is the accepted collateral damage ratio reasonable given the scale?
This is different from for example Russian strategy to target whole neighborhoods to inflict terror in the civilian population by indiscriminate killings.
Instead the West keeps supplying Israel with weapons and munitions.
Edit: We sometimes turn off flags when an article contains significant new information and also has at least some chance of providing a substantive basis for discussion. I haven't read the current article yet but it seems like a reasonable candidate for this, so I turned off the flags.
For anyone who wants more information about how we approach doing that, in the context of the current topic, here are some past explanations:
https://news.ycombinator.com/item?id=39618973 (March 2024)
https://news.ycombinator.com/item?id=39435324 (Feb 2024)
https://news.ycombinator.com/item?id=39435024 (Feb 2024)
https://news.ycombinator.com/item?id=39237176 (Feb 2024)
FWIW, I found this to be a really interesting story that I didn't previously know about, so I hope it stays up, and this is a story I'd be willing to vouch for.
Edit: Flagged after less than 9 minutes, I overestimated!
This is in contrast to how I feel about a statistical model flagging people to be murdered. That's not even remotely OK, even if the decision to actually carry out the murder ultimately goes through a person. Using a statistical model to choose targets is incredibly naive, and practically guarantees that perverse incentives will drive decision-making.
This does seem to be a big step more “AI” than previous systems I’ve heard described though.
No weapons are nice, but if the good guys don't develop AI weapons, the bad guys will.
From what I gather, many US engineers are morally opposed to them. But if China develops them and gets into a war with the US, will Americans be happy to lose knowing that they have the moral high ground?
The World Central Kitchen attack appears to have used smart munitions (missiles from a drone) on a mobile truck.
It's mostly business as usual. The technology makes the brutality more efficient, though:
Describing human personnel as a “bottleneck” that limits the army’s capacity during a military operation, the commander laments: “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”
...
By adding a name from the Lavender-generated lists to the Where’s Daddy? home tracking system, A. explained, the marked person would be placed under ongoing surveillance, and could be attacked as soon as they set foot in their home, collapsing the house on everyone inside.
“Let’s say you calculate [that there is one] Hamas [operative] plus 10 [civilians in the house],” A. said. “Usually, these 10 will be women and children. So absurdly, it turns out that most of the people you killed were women and children.”
Using Google search, you can search news articles from previous years. You'll find older articles about Israel killing aid workers, for example. This is from 2018: https://www.theguardian.com/global-development/2018/aug/24/i...
The interesting thing about how this conflict is developing is that this story is full of quotes from Israeli intelligence. Most plainly say what they're doing. Western outlets may put a positive spin on it (because our governments generally support Israel), but the Israeli military themselves are making their intentions clear: https://news.yahoo.com/israeli-minister-admits-military-carr...
How far does the AI system go… is it behind the AI decision to starve the population of Gaza?
And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?
How far does the AI system go?
Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” Or “I was just following AI’s orders!”
There’s so much about this death machine AI I would like to know.
No, the point of this program seems to be to find targets for assassination, removing the human bottleneck. I don't think bigger strategic decisions like starving the population of Gaza was bottlenecked in the same way as finding/deciding on bombing targets is.
> is it also behind the decision to kill the aid workers who are trying to feed the starving?
It would seem like this program gives whoever is responsible for the actual bombing a list of targets to chose from, so supposedly a human was behind that decision but aided by a computer. Then it turns out (according to the article at least) that the responsible parties mostly rubberstamped those lists without further verification.
> can an AI commit a war crime?
No, war crimes are about making individuals responsible for their choices, not about making programs responsible for their output. At least currently.
The users/makers of the AI surely could be held in violation of laws of war though, depending on what they are doing/did.
It's not that the "AI" described here is an autonomous actor.
> During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing
Obviously all this is to be taken with a grain of salt, who knows if it's even true.
"An AI" doesn't exist. What is being labeled "AI" here is a statistical model. A model can't do anything; it can only be used to sift data.
No matter where in the chain of actions you put a model, you can't offset human responsibility to that model. If you try, reasonable people will (hopefully) call you out on your bullshit.
> There’s so much about this death machine AI I would like to know.
The death machine here is Israel's military. That's a group of people who don't get to hide behind the facade of "an AI told me". It's a group of people who need to be held responsible for naively using a statistical model to choose who they murder next.
[0] https://www.reuters.com/world/middle-east/what-we-know-so-fa...
> It is also because Mr. Obama embraced a disputed method for counting civilian casualties that did little to box him in. It in effect counts all military-age males in a strike zone as combatants, according to several administration officials, unless there is explicit intelligence posthumously proving them innocent.
> Counterterrorism officials insist this approach is one of simple logic: people in an area of known terrorist activity, or found with a top Qaeda operative, are probably up to no good. “Al Qaeda is an insular, paranoid organization — innocent neighbors don’t hitchhike rides in the back of trucks headed for the border with guns and bombs,” said one official, who requested anonymity to speak about what is still a classified program.
[0] https://www.france24.com/en/middle-east/20240403-gaza-aid-wo...
I can't even imagine what it would be like to just like the idea of AI, study, get a job writing some Python, then one day wake up and learn you have quite a lot of blood (indirectly) on your hands.
Like either you need to become the kind of person that doesn't care, or one that learns to live with a lot of ambient guilt hanging around. Not sure which is worse.
Honestly feel so much for the ten thousand bright eyed, intelligent nerds eager for technology and the future. I know they will be compensated well, but that won't ever balance out what will happen to their minds one way or another.
But this is an old story at this point I guess.
Brings the Ironies of Automation paper to mind: https://en.m.wikipedia.org/wiki/Ironies_of_Automation
Specifically: if _most_ of a task is automated, human oversight becomes near useless. People get bored, are under time pressure, don't find enough mistakes, etc., and just don't do the review job they're supposed to do anymore.
A dystopian travesty.
But blaming AI is just easier than acknowledging that at every step of this there's a human being OK'ing it: the war is OK'd by a human, the target list is OK'd by a human, the missile launch/bomb drop is OK'd by a human, the fucking trigger is pulled by a human.
But sure, because the target list was vetted by an AI, it's the AI's fault.
https://www.bloomberg.com/news/articles/2024-01-10/palantir-...
https://www.cnbc.com/2024/03/13/palantir-ceo-says-outspoken-...
“Saw you blowing up the children…”
“It wasn’t me.”
https://edition.cnn.com/2024/03/08/middleeast/gaza-israelis-...
If so it's worth noting that we have much better data on that campaign. We know exactly how many Hezbollah members have died because that organization actually releases that information. We have good numbers on civilian casualties. Naturally there are many different factors but I think Israel has done a much better job over there in terms of minimizing civilian casualties. There have been some notable incidents like IIRC journalists getting hit, but the overall numbers I think are significantly weighed towards military targets.
I wouldn't give them credit. It's a very different environment and that alone is enough to explain fewer civilian deaths. Even if they cared exactly as much as they do about Gazan civilians they would be killing fewer civilians as a proportion.
...
Khodr reported: “This is not the first time a health centre has been hit in the ongoing confrontations along the border. We’ve seen numerous attacks against health centres especially in front-line villages and we have seen paramedics killed.”
https://www.aljazeera.com/news/2024/3/27/hezbollah-launches-...
I'm sorry. This is so terrible that humor is the only recourse left to me. We were once afraid of AI drones with guns murdering the wrong people, but now we have an AI that is being used to plan a systematic bombing campaign. Human pilots and all the associated support personnel are its tools, and liberal quotas have been set on how many of the wrong people each strike is permitted to hit. Yet again, reality has surpassed the science fiction nightmare.
The future is now.
It's a vanishingly small list. Virtually everyone wearing a (D) or (R) hat is extremely interested in sending our tax money towards this purpose, which fund programs exactly like in this article.
How do you think they process millions of call records, intercepted messages, sim swaps, etc?
People thought this way about the machine gun, the armored tank, the atom bomb. But once the genie is out there's no putting it back in.
As an aside, I think this is a good example of how humans and AI will work together to bring efficiency to whatever tasks need to be accomplished. There's a lot of fear of AI taking jobs, but I think it was Peter Thiel who said years ago that future AI would work side by side humans to accomplish tasks. Here we are.
"A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION"
it's sort of irrelevant if some shitty computer system is killing people - the people who need to be arrested are the people who allowed the shitty computer system to do that. we obviously cannot allow "oh, not my fault I chose to allow a computer to kill people" to be an excuse or a defence for murder or manslaughter or ... anything.

Over a thousand years ago, gunpowder was invented. This technology enabled humans to finally break through mountains and build tunnels. It enabled the beautiful display of fireworks. But the misuse of this technology ultimately led to the destruction of cultures and civilizations.
This latest development with AI, as implemented in Lavender, is exceptionally dangerous. This latest misuse of technology should concern us all.
We must not allow the proliferation of this brilliant technology to be used for the purpose of destruction. It concerns me greatly.
I hope that we could resolve conflicts and differences in ways that are civil.
The USA didn't exactly have much stricter conditions or much better accuracy in their intelligence. They did nothing qualitatively different; they just labeled anyone in the blast radius as unknown enemy combatants in the reports. And the USA never had to operate at this volume. I guess that's just how modern war looks from a position of superior firepower.
We can kill more. Feed us targets. We can do it cheaply and fast. 10-20 civilians per one speculative target is acceptable for us.
Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.
this means they are actually targeting the children's phones at night, presupposing their father is in their proximity. they are doing this because Hamas operatives probably don't take their phones to their houses.

The AI system says person X in location Y needs to be taken out due to "terrorist association". Check if location Y is cleared for operations. Command has given general authority for operations in this region.
An autonomous drone is deployed like a Patriot missile shooting out from some array into the night sky, quietly flies to location Y, identifies precise GPS coordinates and sends itself including a sizeable warhead into the target. Later, some office dude sits down at his desk at 8:30am, opens some reporting program.
"Ah, 36 kills last night." Takes a sip of coffee.
Israel used AI to identify 37,000 Hamas targets
In the past there was all this talk of non-lethal weaponry, but nowadays it seems to be used at best "in the small", by police rather than the military.
Killing will only ever get easier, faster, and more remote from human action, oversight, and consequence for the perpetrator. Too fast for humans to understand, too remote to feel.
https://twitter.com/Aryan_warlord/status/1774859594747273711
Perfect match for a targeting AI, the AI could even customize each missile as it's being built according to the target it selected.
Let's face it, in any war, civilians are really screwed. It's true here, it was true in Afghanistan or Vietnam or WWII. They get shot at, they get bombed, by accident or not, they get displaced. Milosevic in Serbia didn't need an AI to commit genocide.
The real issue to me is what the belligerents are OK with. If they are ok killing people on flimsy intelligence, I don't see much difference between perfunctory human analysis and a crappy AI. Are we saying that somehow Hamas gets some brownie points for not using an AI?
It seems like the whole cell phone infrastructure need to be torn down.
Minimizing deaths is the humane approach to war. So we move away from broad killing mechanisms (shelling, crude explosives, carpet bombing), in favor of precise killing machines. Drones, targeted missiles and now AI allow you to be ruthlessly efficient in killing an enemy.
The question is - How cold and not-human-like can these methods be, if they are in fact reducing overall deaths ?
I won't pretend an answer is obvious.
The West hasn't seen a real war in a long time. Their impression of war is either WW1-style mass deaths on both sides or overnight annihilation like America's attempts in the Middle East. So our vocabulary limits us to words like Genocide, Overthrow, Insurgency, etc. This is war. It might not map onto our intuitions from recent memory, but this is exactly what it looks like.
When you're in a long, drawn-out war with a technological upper hand, you leverage all technology to help you win. At the same time, once Pandora's box is open, it tends to stay open for your adversaries as well. We did well to maintain global consensus on chemical and nuclear warfare. I don't see any such consensus coming out of the AI era just yet.
All I'll say is that I won't be quick to make judgements on the morality of such tech in war. What do you think happened to the spies who were caught due to the decoding of the Enigma?
So overfitting or hallucinations as a feature. Scary.
It may be worth noting that there is at least one notification service out there to draw attention to such posts. Joel Spolsky even mentioned such a service that existed back when Stack Overflow was first being built.
Human coordination is arguably the most powerful force in existence, especially when coordinating to do certain things.
Also interesting: it would seem(!) that once an article is flagged, it isn't taken down but simply disappears from the articles list. This is quite interesting in a wide variety of ways if you think about it from a global cause and effect perspective, and other perspectives[1]!
Luckily, we can rest assured that all is probably well.
https://youtube.com/watch?v=dub8fBuXK_w&pp=ygUZaXRzIGxhdmVuZ...
This statement means little without knowing the accuracy of a human doing the same job.
Without that information this is an indictment of military operational procedures, not of AI.
So they had daily quotas for killings. Literally a killing machine with an input capacity of 1,200 targets per day that has to be fed. Just like the Nazis during WW2.
1. Hebrew University’s Faculty of Repressive Science
2. The spiraling absurdity of Germany’s pro-Israel fanaticism
3. The first step toward disintegrating Israel’s settler machine
As such, their view is not at all balanced or even-handed. Objective truth obviously matters very little to them since they exhibit such open bias and loathing towards Israel and the Jewish people.
If the markers, a la features, discussed in the article are anything to go by, it is a very disturbing method of classifying a target. If human evaluators use the same approach to target bombings, then there is no defending how this war is being fought.
For at least 15 years we've had personalized newsfeeds in social media. For even longer we've had search engine ranking, which is also personalized. Whenever criticism is levelled against Meta or Twitter or Google or whoever for the results on that ranking, it's simply blamed on "the algorithm". That serves the same purpose: to provide moral cover for human actions.
We've seen the effects of direct human intervention in cases like Google Panda [1]. We also know that search engines and newsfeeds filter out and/or downrank objectionable content. That includes obvious categories (eg CSAM, anything else illegal) but it also includes value-based judgements on perfectly legitimate content (eg [2]).
Lavender is Israel saying "the algorithm" decided what to strike.
I want to put this in context. In ~20 years of the Vietnam War, 63 journalists were killed or lost (presumed dead) [3]. In the 6 months since October 7, at least 95 journalists have been killed in Gaza [4]. In the years prior there were still a large number killed [5], famously including an American citizen, Shireen Abu Akleh [6].
None of this is an accident.
My point here is that anyone who blames "the algorithm" or deflects to some ML system is purposely deflecting responsibility from the human actions that led to that and for that to continue to exist.
[1]: https://en.wikipedia.org/wiki/Google_Panda
[2]: https://www.hrw.org/report/2023/12/21/metas-broken-promises/...
[3]: https://en.wikipedia.org/wiki/List_of_journalists_killed_and...
[4]: https://cpj.org/2024/04/journalist-casualties-in-the-israel-...
[5]: https://en.wikipedia.org/wiki/List_of_journalists_killed_dur...
[6]: https://en.wikipedia.org/wiki/Killing_of_Shireen_Abu_Akleh
Oh, very noble of you to take on that risk, from that side of the bomb sight.
> Second, we reveal the “Where’s Daddy?” system, which tracked these targets and signaled to the army when they entered their family homes.
This sounds immoral at first, but if proportionality is taken into consideration, the long term effects of this might be positive, ie fewer deaths long term due to the elimination of Hamas staff. The devil is in the details however, as there is clearly a point beyond which this becomes unacceptable. Sadly collective punishment is unavoidable in war, and one could argue that between future Israeli victims and current Palestinian ones, the IDF has a moral obligation to choose the latter.
> Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target.
The article below states that the civilian-to-militant death ratio in Gaza is 1:1; for comparison, the usual figure in modern war is 9:1, such as during the Battle of Mosul against ISIS. They may still be within the realm of moral action here, but the fog of war makes it very difficult to assess.
https://www.newsweek.com/israel-has-created-new-standard-urb...
I’m unsure why the UN + Arab Nations don’t take control of the situation, get rid of Hamas, provide peacekeeping, integrate Palestine into Israel, and enforce property rights. All this bloodshed is revolting.
Ugh.
if ( contact.image.ocr().find( 'relief' ) ) contact.bomb()

ETA: I wonder if this is going to ruin their SEO... it might be worth a rebrand.
> Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target. Fifth, we note how automated software inaccurately calculated the amount of non-combatants in each household. And sixth, we show how on several occasions, when a home was struck, usually at night, the individual target was sometimes not inside at all, because military officers did not verify the information in real time.
Tbh this feels like making a machine that points at a random point on the map by rolling two sets of dice, and then yelling "more blood for the blood god" before throwing a cluster bomb
Be ready to be targeted by AI, from another state, in another war.
972mag is a left-wing media outlet, and what they say should be viewed with skepticism because they follow a pro-Palestine narrative.
One silver lining for those who lost their lives to this particular holocaust: these technologies in particular have a tendency of ending up used against the very people who created them or authorized their use.
AI
Yeah, yeah guidelines and all.
Just watched someone get their post deleted for criticizing Israel's online PR/astroturfing.
Israel's ability to shape online discussion has left a bad taste in my mouth. Trust is insanely low, I think the US should get a real military base in Israel in exchange for our effort. If the US gets nothing for their support, I'd be disgusted.
The second is this: why is a western ally allowed to have apartheid, allowed to kill thousands of women and children with or without AI, to besiege (medieval style) 2.3 million civilians, to starve and dehydrate them to death, all the while comparing a tiny area without warplanes, without a standing military, without statehood, to Nazi Germany, and Gaza to Dresden, in order to justify completely leveling Gaza? To a Nazi Germany that had the most advanced technology of its time and threatened the whole world? Dehumanising Palestinians by declaring them all "terrorists", mocking their dead, mutilated bodies in Telegram groups with 125k Israelis (imagine 4 million US citizens in a group mocking another nation's dead children). Why do we allow this to happen? Why is a western ally allowed to do this while almost all our western governments fund and support it and silence protest against it?
How is this even possible to do without having the system make a lot of mistakes? With as much AI talk as there is on HN these days, I would have expected to recall an article about this kind of military-grade capability.
Are there any resources I can look at? Maybe someone here can talk about it from experience.
I'm not sure what is wrong with this technology. They barely mention the achievements this technology has produced, and speak only about the bad side.
This article tries to make you think that, behind the scenes, Israel is a technologically advanced, strong country, and the people of Gaza are poor people who did nothing.
It didn't even mention the big October 7 massacre, where tens or even hundreds of innocent women were raped because they were Israelis. I'm not sure when this kind of behavior became accepted in any way, and it makes you think that Hamas is not a legitimate organization, but just barbaric monsters.
Be sure that Gaza civilians support the massacre: a survey reports that 72% of Palestinians support it [1]; spoiler: the real number is much higher.
[1] https://edition.cnn.com/2023/12/21/middleeast/palestinians-b...
It seems like Israel is already bombing indiscriminately, with 35,000 killed (the majority of whom are women and children). Was AI used for these targets?
History is going to show a story similar to when IBM helped facilitate the Holocaust: this genocide also has people working on the tools that enable it, people "just doing their job."
Did AI target World Central Kitchen or the 200+ humanitarians, journalists, hostages and medics? This is just one aspect of Apartheid Israel's war crimes.
Apartheid Israel seems to be a pariah state, if it's not with their hacking or bombing consulates, it's with the military industrial complex relationship with the US. Do they think their actions are conducive to their well-being?
US supporting Israel makes very little sense.
That being said, Trump signed a bill removing reporting of drone strikes by the US military, and he approved more strikes than Obama.
So US likely has amplified systems compared to Lavender and Gospel. We'd have no idea.
This season of Daily Show about AI comes to mind: https://www.youtube.com/watch?v=20TAkcy3aBY
Everyone claiming AI is going to do great good, solve climate change, yada yada, is deeply deluded.
AI will only amplify what corporations and state powers already do.
Every. Single. Time.
(...) at some point we relied on the automatic system, and we only checked that [the target] was a man — that was enough. It doesn’t take a long time to tell if someone has a male or a female voice (...)
...sounds fake as shit. Any dumb system can make a male/female decision automatically; no fucking way a human needs to verify it by listening to recordings while a sophisticated AI system is involved in the filtering.

Why would half a dozen active military officers brag about careless use of tech and about bombing families with children while they sleep, risking accusations of treason?
Feels like well done propaganda more than anything else to me.
It's plausible they use AI. It's also plausible they don't that much.
It's plausible it has a high false positive rate. It's also plausible it has multiple layers of crosschecks and very high accuracy - better than human personnel.
It's plausible it is used in rush without any doublechecks at all. It's also plausible it's used with or after other intelligence. It's plausible it's used as final verification only.
It's plausible that targets are easier to locate home. It's plausible it's not, ie. it may be easier to locate them around listed, known operation buildings, tracked vehicles, while known, tracked mobile phone is used etc.
It's plausible that half a dozen active officers want to share this information. It's also plausible that only a narrow group of people have access to this information. It's plausible they would not engage in activity that could be classified as treason. It's also plausible most personnel simply don't know the origin of orders up the chain, just the immediate one.
It's plausible it's real information. It's also plausible it's fake or even AI generated, good quality, possibly intelligence produced fake.
Frankly looking at AI advances I'd be surprised if propaganda quality would lag behind operational, on the ground use.
- Weaponized financial trojan horses like crypto
- Weaponized chemical warfare through addictions
- Drone swarm attacks in Ukraine
- AI social-media-engineered outrage to change the public's perception
- Impartial, jingoistic mainstream war propaganda
- Censorship and manipulation of neutral views as immoral
- Weaponized AI software
Looks like a major escalation towards a total war of sorts.
But what is even sadder is that the supposedly morally superior western world is entirely bribed and blackmailed to stand behind Israel. And then you have countries like Germany where you get thrown in jail for being upset at Israel.
That is appalling.
If we can’t trust AI to drive a car, how the hell can we trust it to pick who lives and who dies?
At that point I had to scroll back up to check whether this was just a really twisted April Fools' joke.
Hamas combatants like fried chicken, beer, and women. I also like these things. I can't possibly see anything wrong with this system...
Our premier AI geniuses were all squawking to Congress about the dangers of AI, and here we see that they essentially treated the outputs of the AI machine "as if it were a human decision."
Sounds like you want to censor information that could hurt your bottomline.
This should be advertised. The true price of AI is people using computers to make decisions no decent person would. It's not a feature, it's a war crime.
How are those "acceptable" collateral deaths not war crimes?
We have no idea whether this story itself is relaying anything of value. For all we know, stories like this could be a part of the war effort.
> Underlining everything +972 does is a dedication to promoting a progressive worldview of Israeli politics, advocating an end to the Israeli occupation of the West Bank, and protecting human and civil rights in Israel and Palestine.
> And while the magazine’s reported pieces—roughly half of its content—adhere to sound journalistic practices of news gathering and unbiased reporting, its op-eds and critical essays support specific causes and are aimed at social and political change.
1: https://www.tabletmag.com/sections/israel-middle-east/articl...