IHL actually prohibits the killing of persons who are not combatants or "fighters" of an armed group. Only those who have the "continuous function" to "directly participate in hostilities"[1] may be targeted for attack at any time. Everyone else is a civilian who can only be directly targeted when, and for as long as, they directly participate in hostilities, such as by taking up arms, planning military operations, laying mines, etc.
That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack - all the others must be arrested and/or tried. Otherwise, the list of permissible civilian targets gets so wide that in any regular war, pretty much any civilian could be targeted, such as the bank employee whose company has provided loans to the armed forces.
Lavender is so scary because it enables Israel's mass targeting of people who are protected against attack by international law, providing a flimsy (political but not legal) justification for their association with terrorists.
[1]: https://www.icrc.org/en/doc/assets/files/other/icrc-002-0990...
Its success was so marked that it was immediately decided in 1893 to move a Tabulator to Ellis Island, to count the ethnics from the source with Hollerith's new technology. Herman Hollerith had great success in his own lifetime, the technology eventually becoming the core of the Computing-Tabulating-Recording Company, otherwise known, a decade later, as International Business Machines.
The establishment of this clear process surrounding race - actual race law - was, believe it or not, pretty novel in Western history. A lot of old-timey race policy - like the relationship between a monarch and the Jews, or what exactly a visiting Muslim could or couldn't do (like sell and buy slaves cough Venice cough) - this race stuff was almost always very, ah, what we'd call "tribal knowledge". A Jew in the Middle Ages could have far greater rights and lifestyle than in later periods, but those rights were completely unpredictable; this was true to greater or lesser extent for many "outsiders" in the early European era. Even in 1900 American innovation in race law - based on "Science!" - was a new thing, and extremely exciting to the enthusiasts of folk movements[2] crisscrossing our entire civilization[3] at the time. One of those was Willy Heidinger, who established Deutsche Hollerith Maschinen Gesellschaft to produce license-built Hollerith machines. World events interceded, however, and the German civil service infrastructure to run a census would not be present until much later... 1933, in fact, when things would get very spicy indeed in the world of race "science".
And then, of course, cataclysm: the end of the European Order.
On the European continent, a debt to truth was paid. A hundred million dead or maimed, nations wrecked, a whole world - a weltanschauung - burnt down to the foundations - below the foundations. But elsewhere - like in the New World - the lesson was not as stark. And in yet other places the inverse lesson was learned: once you determine a person is not a person, you must brutalize yourself and your population immediately, before the soon-to-be-unpeople realizes that the struggle is existential.
Let's wrap this up.
What 20th century Race Science/Race Law was trying to do was make sense of something as complicated as human culture using the sciences they understood: 19th century statistics, the physics of iron and steam. Those were the sciences with the capital backing, so - of course! - those were the only sciences that mattered. Today, we're looking at another complex element of the human experience - human language, human consciousness - and again, we're looking at it through the science that's got the most capital backing it: computation. That's how "text" somehow, incredibly, came to contain "language". Or how "scarcity" was represented by "money" - as if there were any N-dimensional descriptions that could adequately vectorize either of those concepts.
Ultimately, when you really dig yourself into these sorts of artificial - if not downright dishonest - "science-y" establishments, when you start imposing them on the world, you don't break out of them easily. Or without damage. The people making use of your LLM widget do not understand the math - all they know, like the race science of previous centuries - is that it's Science-y. It might as well be wearing a Mitre and Crosier.
[1] What those actions were, is a subject for another post. Probably inside a soon-to-be-flagged topic.
[2] The American example in race law was also very exciting to a certain Mr. Adolf Hitler, as well. You can read all about it in Mein Kampf. Hitler's attitude towards America is really fascinating stuff, but an entirely other subject.
[3] And beyond! Ethnonationalism spread like fire, as colonized peoples realized this could be their big ticket towards peerage in the European age.
There is some incredible magic that often happens: as soon as anyone is targeted and killed, they immediately transform from civilians to "collaborators", "terrorists", "militants" etc. Of course everything is classified and restricted to avoid anyone snooping around and asking questions.
In the Guardian article, an IDF spokesperson says it exists and is only used as the former, and I'm sure that's what was intended and maybe even what the higher-ups think, but I suspect it's become the latter.
The Guardian article makes it clear, prior to those denials, that the higher-ups appear not to care how accurate it is and appear to be making a conscious choice to accept that it is highly flawed, on the basis that it might kill some of whom they would legitimately claim as valid targets.
It's clear from the operational details discussed in the article that the critical target number is largely the number of kills, regardless of whether they pose any actual material threat.
The overall impression I got of the Israeli strategy: cull predominantly the male population and their family members, rather than assassinate active threats.
I must add that anyone claiming the use of AI and inference models in this way is in any way justifiable needs to seek help. The claim of 90% accuracy is almost certainly overclaiming by over 100%.
Intended by who? You don't kill 13,000 children by accident.
I think the loophole here is that a weapon manufacturing facility is almost certainly a strategic military target, and international law allows you to target the infrastructure provided the military advantage gained is proportional to the civilian deaths.
So you can't target the individuals, but according to international law it's fine to target the building they are in while the individuals are still inside, provided it's militarily worth it.
It seems wrong that you can't target weapon manufacturers, can you cite a source? Weapon manufacturers contribute to the military action, and destroying weapon manufacturers contributes to military advantage.
https://www.abc.net.au/news/2024-04-03/world-central-kitchen...
https://www.abc.net.au/news/2024-04-02/israeli-strike-that-k...
Pretty disgraceful (which itself feels a disgracefully unimpactful thing to say regarding people losing their lives whilst doing charity work).
The problem with Hamas is that they don't shy away from hiding combatants in civilian clothing or using women and children as suicide bombers. There is more than enough evidence of this tactic, dating back many, many years [1].
By not just failing to prevent, but actively ordering such war crimes, Hamas leadership has stripped its civilian population of the protections of international law.
> Otherwise, the allowed list of targets of civilians gets so wide than in any regular war, pretty much any civilian could get targeted, such as the bank employee whose company has provided loans to the armed forces.
In regular wars, it's uniformed soldiers against uniformed soldiers, away from civilian infrastructure (hospitals, schools, residential areas). The rules of war make deviating from that a war crime on its own, simply because it places the other party in the conflict of either having no chance to wage the war or to commit war crimes on their own.
[1] https://en.wikipedia.org/wiki/Use_of_child_suicide_bombers_b...
In theory, yes. In practice - in which make-believe world is this true?
> Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets.
Obviously any judgement is probabilistic.
My understanding is that AI in its current form is not an applicable technology to be anywhere near this type of use.
Again, my understanding: inference models are by their very nature largely non-deterministic, in terms of being able to evaluate accurately against specific desired outcomes. They need large-scale training data to provide even low levels of accuracy. That type of training data just isn't available; it's all likely to be based on one big hallucination, is my take. I'd be surprised if this AI model was even 10% accurate. It wouldn't surprise me if it was less than 1% accurate. Not that accuracy appears to be critical, from what I've read.
The Guardian article: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai..., makes me wonder whether AI development should be allowed at all. Didn't even have that thought before today.
This specific application and the claimed rationale is as close as I have come to seeing what I consider true and deliberate "Evil application" of technology out in the open.
Is this a naive take?
I really doubt that's the case; it seems more like a "fire first on any suspicion at all and ask questions later" policy. If there were an intentional policy to kill journalists, aid workers and medical staff, you'd see a lot more dead.
And you have to be extremely naive or one-sided not to realize that Hamas does use those types of roles as cover for their operations.
Not trying to justify Israel's actions because they are fucked up, but based on all the evidence we have you are clearly wrong.
Obviously, nobody in an international court will be able to say "... but the AI did it!" - this is just far too easy a way out. There are rules to AI usage, and one of them rules out usage like this - as already said somewhere else: the AI is only as ethical/moral as the humans behind it.
I wonder what the alternative is in a case like this. I know very little about military strategy-- without the AI would Israel have been picking targets less, or more haphazardly? I think there may be some mis-reading of this article where people imagine that if Israel weren't using an AI they wouldn't drop any bombs at all, that's clearly unlikely given that there's a war on. Obviously people, including innocents, are killed in war, which is why we all loathe war and pray for the current one to end as quickly as possible.
> “Everything was statistical, everything was neat — it was very dry,” B. said. He noted that this lack of supervision was permitted despite internal checks showing that Lavender’s calculations were considered accurate only 90 percent of the time; in other words, it was known in advance that 10 percent of the human targets slated for assassination were not members of the Hamas military wing at all.
So, there was no human sign-off. I guess the policy itself was ordered by someone, but all the ongoing targets that were selected for assassination were solely authorized by the AI system's predictions.
This sentence is horrifically dystopian... "in order to save time and enable the mass production of human targets without hindrances"
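For scale, a back-of-the-envelope calculation (the 90 percent figure is from the quote above; the ~37,000 marked individuals figure is reported elsewhere in the article; this is purely illustrative):

```python
claimed_accuracy = 0.90       # internal-check figure cited in the article
marked_individuals = 37_000   # size of one Lavender list reported in the article

# Expected number of marked people who were not Hamas military-wing
# members at all, if the internal accuracy figure is taken at face value.
misidentified = marked_individuals * (1 - claimed_accuracy)
print(round(misidentified))  # → 3700
```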
It seems obvious to me that the alternative would be a slower process for picking targets leading to fewer overall targets picked and the guarantee that a human conscience is involved in the process.
And you're probably right that the alternatives may be worse; the folks behind Lavender could probably even prove it with data. But there should be a moral impetus to always have a human in the loop regardless. And any such attempt to justify it won't capture the public's attention like a Skynet doomsday happening over the civilians in Gaza.
Don't Create The Torment Nexus
I think that once you start from the viewpoint that you're not going to create the Torment Nexus, it becomes a lot easier to avoid creating the Torment Nexus.
The IDF only read the first half of the classic IBM slide!
A lot of news around the bombing called out the uniquely large scale and rapidity of the campaign.
This was a preview of future conflicts.
We're entering the WWI phase of new technology being brought without rules to conflicts where the abuses will be horrific until rules are finally put in place.
Another system would signal that the target is at home and it's time to bomb. This system used phones to geolocate, and due to the nature of living in Gaza, phones change hands often.
Without Lavender they would have dropped fewer bombs, IMO.
At least AI pretends to look at some data instead of just defaulting to tribal bloodlust... who's to say it can't be more ethical? It doesn't take much to beat our track record.
This is the second paragraph:
"In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict."
>processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.
This is really no different from how the world was working in 2001, choosing who to send to Gitmo and other more secretive prisons, or bombing their location.
More than anything else, it feels like, just as in the corporate world, the engineers in the army are overselling the AI buzzword to do exactly what they were doing before it existed.
If you use your PayPal account to send money to an account identified as ISIS, you're going to get a visit from a three-letter organization really quick. This sounds exactly like that, from what the users are testifying to. Any decision to bomb or not bomb a location wasn't up to the AI, but to humans.
By the world you mean the US, but yes you are correct.
"NSA targets SIM cards for drone strikes, ‘Death by unreliable metadata’"
https://www.computerworld.com/article/2475921/whistleblower-...
"Gitmo" didn't open until 2002
Okay, how is this not a war crime?
There are ~2M civilians who live in Gaza, and many of them don't have access to food, water, medicine, or safe shelter. Some of those unfortunates live above, or below, Hamas operatives and their families.
"Oh, sorry, lol." "It was unintentional, lmao, seriously." "Our doctrine states that we can kill X civilians for every hostile operative, so don't worry about it."
The war in Gaza is unlike Ukraine -- where Ukrainian and Russian villagers can move away from the front, either towards Russia or westwards into Galicia -- and where nobody's flattening major population centers. In Gaza, anybody can evidently be killed at any time, for any reason or for no reason at all. The Israeli "strategy" makes the Ukrainians and Russians look like paragons of restraint and civility.
When the US was in Afghanistan, Al Qaeda learned that the US (generally) won't shoot ambulances. So what became the most valuable vehicle to Al Qaeda? Hamas took notes, but Israel doesn't seem to care as much as the US.
Also, besides all that, once something is used for military operations, it is fair game as a military target. Regardless of civilians. When the law was written it was assumed that governments wouldn't intentionally use their civilians as protection.
The only thing that made this time a bit different is the crazy, almost hard to believe switch from the Ukrainian conflict and how it was seen and portrayed... to Western countries staying completely silent when, again, it's our side doing it. Well, it wasn't hard to believe, but it just made it a lot more blatant.
Israel doesn't really care, though, since Israeli officers routinely go on public tirades that amount to mask-off allusions to genocide ("wipe Gaza", "level the city to the ground", "make it unliveable"), again with zero consequences at all. Even Russia at least tries not to have its military officers say the quiet part out loud.
Maybe it is. Maybe it isn't.
Some questions worth asking: what is international law? How is international order maintained?
I agree that images and footage from Gaza are disturbing. But I encourage you to think systematically about what it is we are seeing.
https://archive.is/2024.04.02-205352/https://www.haaretz.com...
> With unintended strikes, there's "we work hard to avoid this, but based on bad intel made a rare, tragic error," and "we've encouraged RoE that foreseeably makes tragic errors frequent, but this looks bad and in hindsight wish we hadn't done it."
> Israel's strike on WCK food aid workers is the latter
Israel has long had pretty plain issues with its rules of engagement. Recall that earlier in this conflict, the IDF shot three of the hostages whose recovery is one of the main goals of the operation!
Yet still, even though that is the most important goal, friendly fire still happens.
I was working in Urs's Google Technical Infrastructure division. I read about the project in the news. Urs had a meeting about it where he lied to us, saying the contract was only $9M. It had already been expanded to $18M and was on track for $270M. He and Jeff Dean tried to downplay the impact of their work. Jeff Dean blinked constantly (lying?) while downplaying the impact. He suddenly stopped blinking when he began to talk about the technical aspects. I instantly lost all respect for him and the company's leadership.
Strong abilities in engineering and business often do not come with well-developed morals. Sadly, our society is not structured to ensure that leaders have necessary moral education, or remove them when they fail so completely at moral decisions.
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...
And, personally, I think that stories like this are of public interest - while I won’t ask for it directly, I hope the flag is removed and the discussion can happen.
I would hope they can be unflagged and merged, this appears to be an important story about a novel use of technology.
Readers might still find it helpful to read both pieces, of course.
On the other hand, of course, there are also those who jump on any claim that makes Israel look bad - claims of which there are many, of which far too many have become pretty evident, and which far too many people do not want to be true and will ignore.
So what can one do? I guess keep an open mind and give claims a couple of days to be proven or disproven. Only then judge.
> "The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list."
It's one thing to use these systems to mine data on human populations for who might be in the market for a new laptop, so they can be targeted with advertisements - it's quite different to target people with bombs and drones based on this technology.
Both use personal metadata, and both can horribly get it wrong.
I would argue that the only outcome it has had that directly relates to IDF objectives has likely been negative (i.e. the unintended killing of hostages).
Sadly, I think that the continued use of this AI is supported because it is helping to provide cover for individuals involved in war crimes. I wouldn't be surprised if the AI really weren't very sophisticated at all and that to serve the purpose of cover that doesn't matter.
They say the objective is to destroy Hamas and save the hostages.
I think the actual objective is to murder as many palestinians as possible. At the very least that is the actual objective of some IDF soldiers. They've said as much publicly.
Whether or not that's the actual objective intentionally or unintentionally is just arguing semantics at this point.
Their invasion of Gaza City went way better than most analysts expected, with minimal casualties among Israelis. So probably? Hard to compare with the alternative reality where they select the targets the old way.
That their stated objectives are likely unachievable is a different issue.
Hamas has been considerably diminished. It's not accurate to say the war has been a "total failure".
The world should not forget this.
The legal question is whether the civilian casualties are proportional to the concrete military value of the target.
A question that's worth considering is whether, when considering proportionality, all civilians (as defined by law) are made equal in a moral sense.
For example, the category "civilian" includes munitions workers or those otherwise offering support to combatants on the one hand, and young children on the other. It also includes members of the civil population who are actually involved in hostilities without being a formal part of an armed force.
The law of armed conflict doesn't distinguish these; albeit that I think people might well distinguish, on a moral level, between casualties amongst young children, munitions workers, and informal combatants.
I wonder if you would say the same about the other side, where every male or female above 18 years is required to serve in the military and in the reserves afterwards? [1]
By your argument would you say that all of these are legitimate targets?
Sure would be convenient if Hamas is 6% of the population
For 100 targets, 90 are 'correct', plus 20 civilians per target: that's 90/2100, or about 4% real accuracy.
Say you use a model that's only 50% accurate and limit yourself to 10 civilians per target: you're at 50/1100, or about 4.5% accuracy!
I guess my point is that no self-respecting data scientist would release a 50%-accurate model, let alone one used to make life-or-death decisions, and yet, in the application of this model, the decisions humans have made about its use have made it no better than doing exactly that.
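The arithmetic above can be sanity-checked with a quick sketch (the 90%/20-civilian and 50%/10-civilian figures come from the preceding comments; `real_accuracy` is just an illustrative helper name):

```python
def real_accuracy(n_targets, model_accuracy, civs_per_target):
    # Fraction of all deaths that are correctly identified targets.
    correct_hits = n_targets * model_accuracy
    total_deaths = n_targets * (1 + civs_per_target)
    return correct_hits / total_deaths

# 90%-accurate model, 20 civilians accepted per target: 90/2100
print(f"{real_accuracy(100, 0.90, 20):.1%}")  # → 4.3%

# 50%-accurate model, 10 civilians accepted per target: 50/1100
print(f"{real_accuracy(100, 0.50, 10):.1%}")  # → 4.5%
```

The counterintuitive result: halving the model's accuracy while also halving the accepted collateral ratio leaves the "real" accuracy essentially unchanged, because civilian deaths dominate both totals.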
"we really need to missile this guy or he will kill more" vs "well we got 37 baddies and also Kim and Yashonda, damn I really liked Yashonda"
Actually, after writing this my mind went further: "since Yashonda was a good person, we actually have a whole bunch of hard facts about how good a person she actually was - she did a lot of help for her community and was a real pillar of helping the next generation of kids be less violent... too bad we didn't add any of that info into the kill-algorithm"
The crux of the story is the automated target acquisition and the policy of engaging targets in civilian homes - there are intelligence errors and collateral damage.
The questions are: is the intelligence gathering and decision making ethical, and is the accepted collateral-damage ratio reasonable given the scale?
This is different from for example Russian strategy to target whole neighborhoods to inflict terror in the civilian population by indiscriminate killings.
Instead the West keeps supplying Israel with weapons and munitions.
Edit: We sometimes turn off flags when an article contains significant new information and also has at least some chance of providing a substantive basis for discussion. I haven't read the current article yet but it seems like a reasonable candidate for this, so I turned off the flags.
For anyone who wants more information about how we approach doing that, in the context of the current topic, here are some past explanations:
https://news.ycombinator.com/item?id=39618973 (March 2024)
https://news.ycombinator.com/item?id=39435324 (Feb 2024)
https://news.ycombinator.com/item?id=39435024 (Feb 2024)
https://news.ycombinator.com/item?id=39237176 (Feb 2024)
Having a human in the loop prevents bad-faith actors from abusing the system to suppress information and discussions.
Kind of related thought - is there a topic you think is more divisive? And also, is there some way that this is measured officially or unofficially?
> According to six Israeli intelligence officers
Not 1 reservist, or 2 retired officers, or 3 contractors, but 6 active servicemen - whose day-to-day job is to figure out how to hide secrets.
Few statements are more statistically impossible.
FWIW, I found this to be a really interesting story that I didn't previously know about, so I hope it stays up, and this is a story I'd be willing to vouch for.
I guess all you have to do, if you want to suppress information about something, is to ensure that its comments always devolve into unproductive discussions. Funny, I once read about this as a tactic for controlling information flow in online communities...
I'm not sure what other tools exist, other than a block button like X has.
Admins can, and do, prune entire branches of comments off of posts.
These two methods would take a bit more work than just banishing the topic entirely, but with topics like the first time that "AI" kill lists are publicized, maybe exceptions should be made.
I wish people would let people decide for themselves what is productive or not...
For a high quality piece of tech-related investigative journalism like this, flagging is simply censorship.
Considering what regularly doesn't get flagged on this site related to AI, conflict, etc., this topic seems to fit in.
It's also currently dropping rank on the front page, despite being heavily upvoted.
Edit: Flagged after less than 9 minutes, I overestimated!
This is in contrast to how I feel about a statistical model flagging people to be murdered. That's not even remotely OK, even if the decision to actually carry out the murder ultimately goes through a person. Using a statistical model to choose targets is incredibly naive, and practically guarantees that perverse incentives will drive decision-making.
I also don't think there is a way to complain about abusing flags other than emailing the mods; I have no clue about the effectiveness of this complaint.
Edit: And the humans who approved the list should be held accountable, of course.
This does seem to be a big step more “AI” than previous systems I’ve heard described though.
No weapons would be nice, but if the good guys don't develop AI weapons, the bad guys will.
From what I gather, many US engineers are morally opposed to them. But if China develops them and gets into a war with the US, will Americans be happy to lose knowing that they have the moral high ground?
Development of tools of death is not a good guy/bad guy thing. The "bad guys" think the "good guys" are bad.
I think "killing" is bad, no matter who develops the tools.
The World Central Kitchen attack appears to have used smart munitions (missiles from a drone) on a mobile truck.
It's mostly business as usual. The technology makes the brutality more efficient, though:
Describing human personnel as a “bottleneck” that limits the army’s capacity during a military operation, the commander laments: “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”
...
By adding a name from the Lavender-generated lists to the Where’s Daddy? home tracking system, A. explained, the marked person would be placed under ongoing surveillance, and could be attacked as soon as they set foot in their home, collapsing the house on everyone inside.
“Let’s say you calculate [that there is one] Hamas [operative] plus 10 [civilians in the house],” A. said. “Usually, these 10 will be women and children. So absurdly, it turns out that most of the people you killed were women and children.”
Using Google search, you can search new articles in previous years. You'll find older articles about Israel killing aid workers, for example. This is from 2018: https://www.theguardian.com/global-development/2018/aug/24/i...
The interesting thing about how this conflict is developing is that this story is full of quotes from Israeli intelligence. Most plainly say what they're doing. Western outlets may put a positive spin on it (because our governments generally support Israel), but the Israeli military themselves are making their intentions clear: https://news.yahoo.com/israeli-minister-admits-military-carr...
How far does the AI system go… is it behind the AI decision to starve the population of Gaza?
And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?
How far does the AI system go?
Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” Or “I was just following AI’s orders!”
There’s so much about this death machine AI I would like to know.
No, the point of this program seems to be to find targets for assassination, removing the human bottleneck. I don't think bigger strategic decisions like starving the population of Gaza was bottlenecked in the same way as finding/deciding on bombing targets is.
> is it also behind the decision to kill the aid workers who are trying to feed the starving?
It would seem like this program gives whoever is responsible for the actual bombing a list of targets to choose from, so supposedly a human was behind that decision, but aided by a computer. Then it turns out (according to the article, at least) that the responsible parties mostly rubber-stamped those lists without further verification.
> can an AI commit a war crime?
No, war crimes are about making individuals responsible for their choices, not about making programs responsible for their output. At least currently.
The users/makers of the AI surely could be held in violation of laws of war though, depending on what they are doing/did.
There is also another AI system that tracks when these targets get home.
Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.
I think "assassination" colloquially means to pinpoint and kill one individual target. I don't mean to say you are implying this, but I do want to make it clear to other readers that according to the article, they are going for max collateral damage, in terms of human life and infrastructure.
“The only question was, is it possible to attack the building in terms of collateral damage? Because we usually carried out the attacks with dumb bombs, and that meant literally destroying the whole house on top of its occupants. But even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
It's not that the "AI" described here is an autonomous actor.
> During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing
Obviously all this is to be taken with a grain of salt, who knows if it's even true.
"An AI" doesn't exist. What is being labeled "AI" here is a statistical model. A model can't do anything; it can only be used to sift data.
No matter where in the chain of actions you put a model, you can't offset human responsibility to that model. If you try, reasonable people will (hopefully) call you out on your bullshit.
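The point that a model "can only be used to sift data" can be made concrete. In the hypothetical sketch below (feature names, weights, and threshold are all invented for illustration, not drawn from any reported system), the model is nothing but a scoring function that ranks records; every step that turns a score into an action is a separate, human-authored choice:

```python
# A "model" here is just a scoring function over feature dicts. It ranks;
# it never acts.
WEIGHTS = {"feature_a": 0.7, "feature_b": 0.3}  # invented, illustrative

def score(record, weights):
    """Weighted sum of feature values -> a single number."""
    return sum(weights.get(k, 0.0) * v for k, v in record.items())

def rank(records, weights):
    """Sift data: return records sorted by score, highest first."""
    return sorted(records, key=lambda r: score(r, weights), reverse=True)

def human_decides(ranked, threshold, act):
    """Everything in this function is a human decision, not model output:
    a person chose the threshold, and a person chose the action."""
    for r in ranked:
        if score(r, WEIGHTS) >= threshold:
            act(r)

records = [{"feature_a": 1.0}, {"feature_b": 1.0}, {"feature_a": 0.2}]
top = rank(records, WEIGHTS)
```

Responsibility lives in `human_decides`: someone wrote the threshold, someone wired up `act`, and someone chose to run it at all. Nothing about putting `rank` in front of those choices transfers accountability to the ranking step.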
> There’s so much about this death machine AI I would like to know.
The death machine here is Israel's military. That's a group of people who don't get to hide behind the facade of "an AI told me". It's a group of people who need to be held responsible for naively using a statistical model to choose who they murder next.
Turning off comments makes as much sense as just posting the heading and no link or attribution.
[0] https://www.reuters.com/world/middle-east/what-we-know-so-fa...
> It is also because Mr. Obama embraced a disputed method for counting civilian casualties that did little to box him in. It in effect counts all military-age males in a strike zone as combatants, according to several administration officials, unless there is explicit intelligence posthumously proving them innocent.
> Counterterrorism officials insist this approach is one of simple logic: people in an area of known terrorist activity, or found with a top Qaeda operative, are probably up to no good. “Al Qaeda is an insular, paranoid organization — innocent neighbors don’t hitchhike rides in the back of trucks headed for the border with guns and bombs,” said one official, who requested anonymity to speak about what is still a classified program.
[0] https://www.france24.com/en/middle-east/20240403-gaza-aid-wo...
Brings the Ironies of Automation paper to mind: https://en.m.wikipedia.org/wiki/Ironies_of_Automation
Specifically: If _most_ of a task is automated, human oversight becomes near useless. People get bored, are under time pressure, don't find enough mistakes, etc., and just don't do the review job they're supposed to do anymore.
A dystopian travesty.
https://www.bloomberg.com/news/articles/2024-01-10/palantir-...
https://www.cnbc.com/2024/03/13/palantir-ceo-says-outspoken-...
“Saw you blowing up the children…”
“It wasn’t me.”
https://edition.cnn.com/2024/03/08/middleeast/gaza-israelis-...
If so it's worth noting that we have much better data on that campaign. We know exactly how many Hezbollah members have died because that organization actually releases that information. We have good numbers on civilian casualties. Naturally there are many different factors but I think Israel has done a much better job over there in terms of minimizing civilian casualties. There have been some notable incidents like IIRC journalists getting hit, but the overall numbers I think are significantly weighed towards military targets.
I'm sorry. This is so terrible that humor is the only recourse left to me. We were once afraid of AI drones with guns murdering the wrong people, but now we have an AI that is being used to plan a systematic bombing campaign. Human pilots and all the associated support personnel are its tools, and liberal quotas have been set on how many of the wrong people are permissible for each strike to hit. Yet again, reality has surpassed the science fiction nightmare.
The future is now.
How do you think they process millions of call records, intercepted messages, sim swaps, etc?
People thought this way about the machine gun, the armored tank, the atom bomb. But once the genie is out there's no putting it back in.
As an aside, I think this is a good example of how humans and AI will work together to bring efficiency to whatever tasks need to be accomplished. There's a lot of fear of AI taking jobs, but I think it was Peter Thiel who said years ago that future AI would work side by side humans to accomplish tasks. Here we are.
"A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION"
it's sort of irrelevant if some shitty computer system is killing people - the people who need to be arrested are the people who allowed the shitty computer system to do that. we obviously cannot allow "oh, not my fault, I chose to allow a computer to kill people" to be an excuse or a defence for murder or manslaughter or ... anything.
Thousands of years ago, gunpowder was invented. This technology enabled humans to finally break through mountains and build tunnels. It enabled the beautiful display of fireworks. But the misuse of this technology has ultimately led to the destruction of cultures and civilizations.
This latest development, AI as implemented in Lavender, is one that's exceptionally dangerous. This latest misuse of technology should concern us all.
We must not allow the proliferation of this brilliant technology to be used for the purpose of destruction. It concerns me greatly.
I hope that we could resolve conflicts and differences in ways that are civil.
The USA didn't exactly have much stricter conditions or much better accuracy in their intelligence. They did nothing qualitatively different. They just labeled anyone in the blast radius as an unknown enemy combatant in the reports. And the USA never had to operate at this volume. I guess that's just how modern war looks from the position of superior firepower.
We can kill more. Feed us targets. We can do it cheaply and fast. 10-20 civilians per one speculative target is acceptable for us.
Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.
this means they are actually targeting the children's phones at night, presupposing their father is in their proximity. they are doing this because Hamas operatives probably don't take their own phones to their houses.
AI system says person X in location Y needs to be taken out due to "terrorist association". Check if location Y is cleared for operations. Command has given general authority for operations in this region.
An autonomous drone is deployed like a Patriot missile shooting out from some array into the night sky, quietly flies to location Y, identifies precise GPS coordinates and sends itself including a sizeable warhead into the target. Later, some office dude sits down at his desk at 8:30am, opens some reporting program.
"Ah, 36 kills last night." Takes a sip of coffee.
This needs to be applied to nation-states & so much more we're engineering.
I'd love to see a design methodology grounded in accounting for all nondual needs of humans. This idea usually comes with complaints of that being an impossible task, without really understanding the issue.
There are also numerous other organisations with such standards.
[1]: https://en.wikipedia.org/wiki/Total_Information_Awareness [2]: https://blog.eutopian.io/tags/strategic-software/
Israel used AI to identify 37,000 Hamas targets
In the past there was all this talk of nonlethal weaponry, but nowadays it seems to be used at best "in the small", by police and not the military
Killing will only ever get easier, faster, and more remote from human action, oversight, and consequence for the perpetrator. Too fast for humans to understand, too remote to feel.
https://twitter.com/Aryan_warlord/status/1774859594747273711
Perfect match for a targeting AI, the AI could even customize each missile as it's being built according to the target it selected.
Let's face it, in any war, civilians are really screwed. It's true here, it was true in Afghanistan or Vietnam or WWII. They get shot at, they get bombed, by accident or not, they get displaced. Milosevic in Serbia didn't need an AI to commit genocide.
The real issue to me is what the belligerents are OK with. If they are ok killing people on flimsy intelligence, I don't see much difference between perfunctory human analysis and a crappy AI. Are we saying that somehow Hamas gets some brownie points for not using an AI?
Instead of the Milosevic example I'd say it's analogous to Dehomag machines during the Holocaust. The Nazis didn't need advanced database systems to attempt a genocide, but having access to them made it far, far easier to turn the whole process into a factory line: something predictable and constant that allowed it to achieve a pace and scope far beyond what they would have been able to do otherwise. Similar here, or in other cases where advanced technology is brought to bear in war. Anything that makes human death more automated is, IMO, abhorrent and worthy of criticism in its own right.
It seems like the whole cell phone infrastructure need to be torn down.
The social media input is terrifying: show any Palestinian sympathies (sentiment analysis) in your posts and you're on the list.
That's my understanding. That the whole of the Gaza strip is essentially watched under the equivalent of stingrays and all traffic out is monitored with room 641a style taps.
Minimizing deaths is the humane approach to war. So we move away from broad killing mechanisms (shelling, crude explosives, carpet bombing), in favor of precise killing machines. Drones, targeted missiles and now AI allow you to be ruthlessly efficient in killing an enemy.
The question is - How cold and not-human-like can these methods be, if they are in fact reducing overall deaths ?
I won't pretend an answer is obvious.
The west hasn't seen a real war in a long time. Their impression of war is either ww1 style mass deaths on both sides or overnight annihilation like America's attempts in the middle east. So our vocabulary limits us to words like Genocide, Overthrow, Insurgency, etc. This is war. It might not map onto our intuitions from recent memory, but this is exactly what it looks like.
When you're in a long drawn-out war with a technological upper hand... you leverage all technology to help you win. At the same time, once Pandora's box is open, it tends to stay open for your adversaries as well. We did well to maintain global consensus on chemical and nuclear warfare. I don't see any such consensus coming out of the AI era just yet.
All I'll say is that I won't be quick to make judgements on the morality of such tech in war. What do you think happened to the spies that were caught due to decoding of the enigma ?
So overfitting or hallucinations as a feature. Scary.
It maybe worth noting that there is at least one notification service out there to draw attention to such posts. Joel spolsky even mentioned such a service that existed back when stackoverflow was first being built.
Human coordination is arguably the most powerful force in existence, especially when coordinating to do certain things.
Also interesting: it would seem(!) that once an article is flagged, it isn't taken down but simply disappears from the articles list. This is quite interesting in a wide variety of ways if you think about it from a global cause and effect perspective, and other perspectives[1]!
Luckily, we can rest assured that all is probably well.
https://youtube.com/watch?v=dub8fBuXK_w&pp=ygUZaXRzIGxhdmVuZ...
This statement means little without knowing the accuracy of a human doing the same job.
Without that information this is an indictment of military operational procedures, not of AI.
So they were having daily quotas for killings. Literally a killing machine with an input capacity of 1,200 targets per day that has to be fed. Just like the Nazis during WW2.
1. Hebrew University’s Faculty of Repressive Science
2. The spiraling absurdity of Germany’s pro-Israel fanaticism
3. The first step toward disintegrating Israel’s settler machine
As such, their view is not at all balanced or even-handed. Objective truth obviously matters very little to them since they exhibit such open bias and loathing towards Israel and the Jewish people.
Is this any different?
if the markers, a la features, discussed in the article are anything to go by, it is a very disturbing method of classifying a target. if human evaluators use the same approach to target bombings, then there is no defending how this war is being fought.
For at least 15 years we've had personalized newsfeeds in social media. For even longer we've had search engine ranking, which is also personalized. Whenever criticism is levelled against Meta or Twitter or Google or whoever for the results on that ranking, it's simply blamed on "the algorithm". That serves the same purpose: to provide moral cover for human actions.
We've seen the effects of direct human intervention in cases like Google Panda [1]. We also know that search engines and newsfeeds filter out and/or downrank objectionable content. That includes obvious categories (eg CSAM, anything else illegal) but it also includes value-based judgements on perfectly legitimate content (eg [2]).
Lavender is Israel saying "the algorithm" decided what to strike.
I want to put this in context. In ~20 years of the Vietnam War, 63 journalists were killed or lost (presumed dead) [3]. In the 6 months since October 7, at least 95 journalists have been killed in Gaza [4]. In the years prior there were still a large number killed [5], famously including an American citizen, Shireen Abu Akleh [6].
None of this is an accident.
My point here is that anyone who blames "the algorithm" or deflects to some ML system is purposely deflecting responsibility from the human actions that led to that and for that to continue to exist.
[1]: https://en.wikipedia.org/wiki/Google_Panda
[2]: https://www.hrw.org/report/2023/12/21/metas-broken-promises/...
[3]: https://en.wikipedia.org/wiki/List_of_journalists_killed_and...
[4]: https://cpj.org/2024/04/journalist-casualties-in-the-israel-...
[5]: https://en.wikipedia.org/wiki/List_of_journalists_killed_dur...
[6]: https://en.wikipedia.org/wiki/Killing_of_Shireen_Abu_Akleh
Oh, very noble of you to take on that risk, from that side of the bomb sight.
> Second, we reveal the “Where’s Daddy?” system, which tracked these targets and signaled to the army when they entered their family homes.
This sounds immoral at first, but if proportionality is taken into consideration, the long term effects of this might be positive, ie fewer deaths long term due to the elimination of Hamas staff. The devil is in the details however, as there is clearly a point beyond which this becomes unacceptable. Sadly collective punishment is unavoidable in war, and one could argue that between future Israeli victims and current Palestinian ones, the IDF has a moral obligation to choose the latter.
> Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target.
This article below states the civilian to militant death ratio in Gaza is 1:1, and for comparison the usual figure in modern war is 9:1, such as during the Battle of Mosul against ISIS. They may still be within the realm of moral action here, but the fog of war makes it very difficult to assess.
https://www.newsweek.com/israel-has-created-new-standard-urb...
I’m unsure why the UN + Arab Nations don’t take control of the situation, get rid of Hamas, provide peacekeeping, integrate Palestine into Israel, and enforce property rights. All this bloodshed is revolting.
Ugh.
if ( contact.image.ocr().find( 'relief' ) ) contact.bomb()
Wouldn't be surprised if this has already been the case in Israel-Palestine. AI targeting of Palestinians long before October 7th, in other words.
ETA:
I wonder if this is going to ruin their SEO...it might be worth a rebrand.
> Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target. Fifth, we note how automated software inaccurately calculated the amount of non-combatants in each household. And sixth, we show how on several occasions, when a home was struck, usually at night, the individual target was sometimes not inside at all, because military officers did not verify the information in real time.
Tbh this feels like making a machine that points at a random point on the map by rolling two sets of dice, and then yelling "more blood for the blood god" before throwing a cluster bomb
Ultimately, it's a calculus of "us vs them" and which lives are valued or devalued.
Relatedly, are police justified when they shoot at a house with 500 rounds, killing the suspect and their entire family that happened to be in the general vicinity? Is the math "one law enforcement > n lives as long as one was a (potential) badguy"?
If you wanted to do this with minimal civilian casualties, then you bring the ground forces in, block by block, and you clear things the old-fashioned way. You take casualties, but those are casualties who signed up to be "warfighters".
Now this IS inflammatory: I think we have a lot of warfighters and cops who are just plain cowards, that's the mentality. Why have a class of trained and armed people who are so afraid of dying that they'd rather kill anything and everything in their path than potentially be injured or killed?
I thought the ethos of the warfighter and law enforcement was "act as a shield, act as a bulwark, save lives, give my life so that others may be free, etc etc". Nowadays its "nah I'm not going in that school, there's badguys with guns and I might die, just stay outside".
That leads to a failure of imagination where somehow "blow up a building with innocent people as long as you got your target" seems somehow justified because you didn't risk a 'good guy' life. Cowardice.
No. It's just a tool. People still configure the parameters and ultimately make decisions. Likewise modern missile do not make conflicts more or less ethical just because they require advanced physics.
I doubt an artillery system using machine learning to correct its trajectory and get better accuracy would be controversial, since the AI in that case is just controlling the path of a shell that an operator has determined needs to hit a target decided upon by humans.
The AI did something, but the IDF used it to justify effectively committing a genocide.
be ready to be targeted by AI, from another state, within another war
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...
972mag is a left-wing media and what they say should be viewed with skepticism because they follow a pro-Palestine narrative.
One silver lining for those who lost their lives to this particular holocaust: these technologies in particular have a tendency of ending up used against the very people who created them or authorized their use.
AI
Yeah, yeah guidelines and all.
Just watched someone get their post deleted for criticizing Israel's online PR/astroturfing.
Israel's ability to shape online discussion has left a bad taste in my mouth. Trust is insanely low, I think the US should get a real military base in Israel in exchange for our effort. If the US gets nothing for their support, I'd be disgusted.
Posts do get flagged and/or killed, whether by user flags, software, or mods, but you can always see all of those if you turn 'showdead' on in your profile. This is in the FAQ: https://news.ycombinator.com/newsfaq.html.
If you notice a post getting flagged and/or killed that shouldn't have been, you can let us know and we'll take a look. You can also use the 'vouch' feature, also described in https://news.ycombinator.com/newsfaq.html.
https://news.ycombinator.com/threads?id=Sporktacular&next=39...
Can you explain why would the USA support one country instead of appeasing 300 million in the area?
What are the benefits out of being so pro israel?
The second is this: Why is a western ally allowed to have Apartheid, allowed to kill thousands of women and children with or without AI, besiege (medieval style) 2.3mil civilians, starve and dehydrate them to death, all the while comparing a tiny area without war planes, without a standing military, without statehood to Nazi Germany and Gaza to Dresden to completely level Gaza? To Nazi Germany that had the most advanced technology of their time, threatening the whole world? Dehumanising Palestinians by declaring them all „terrorists“, mocking their dead, mutilated bodies in Telegram groups with 125k Israelis (imagine 4mil US citizens in a group mocking other nations dead children). Why do we allow this to happen? Why is a western ally allowed to do this while almost all our western governments fund and support this and silence protest against it?
How is it even possible to do this without the system making a lot of mistakes? With as much AI talk as there is on HN these days, I would have recalled an article that talks about this kind of military-grade capability.
Are there any resources I can look at, and maybe someone here can talk about it from experience.
I'm not sure what is wrong with this technology. They barely look at what this technology has achieved, and only speak about the bad side.
This article tries to make you think that Israel is a technologically advanced, strong country, and Gazans are poor people who did nothing.
It didn't even speak about the big October 7 massacre, where tens or even hundreds of innocent women were raped because they were Israelis. I'm not sure when this kind of behavior became accepted in any way, and it makes you think that Hamas is not a legitimate organization, but just barbaric monsters.
Be sure that Gaza civilians support the massacre; a survey reports that 72% of Palestinians support the massacre[1]. Spoiler: it's much higher.
[1] https://edition.cnn.com/2023/12/21/middleeast/palestinians-b...
It seems like Israel is already bombing indiscriminately, with 35 000 killed (the majority of whom are women and children). Was AI used for these targets?
History is going show a similar story to when IBM helped facilitate the Holocaust, this genocide also has people working on tools that enable it; people "just doing their job."
Did AI target World Central Kitchen or the 200+ humanitarians, journalists, hostages and medics? This is just one aspect of Apartheid Israel's war crimes.
Apartheid Israel seems to be a pariah state, if it's not with their hacking or bombing consulates, it's with the military industrial complex relationship with the US. Do they think their actions are conducive to their well-being?
US supporting Israel makes very little sense.
That being said, Trump signed an order to remove reporting of drone strikes by the US military, and he approved more strikes than Obama.
So US likely has amplified systems compared to Lavender and Gospel. We'd have no idea.
This season of Daily Show about AI comes to mind: https://www.youtube.com/watch?v=20TAkcy3aBY
Everyone claiming AI is going to do great good, solve climate change yada yada is deeply in an illusion.
AI will only amplify what corporations and state powers already do.
Every. Single. Time.
(...) at some point we relied on the automatic system, and we only checked that [the target] was a man — that was enough. It doesn’t take a long time to tell if someone has a male or a female voice (...)
...sounds fake as shit. Any dumb system can make the male/female decision automatically; no fucking way a human needs to verify it by listening to recordings while a sophisticated AI system is involved in filtering.
Why would half a dozen active military officers brag about careless use of tech and bombing families with children while they sleep, risking accusations of treason?
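To illustrate the commenter's point that male/female voice classification is trivial: a crude pitch estimate is enough, since typical male fundamental frequencies (~85-180 Hz) and female ones (~165-255 Hz) barely overlap. The sketch below is purely illustrative (the zero-crossing method, the 165 Hz cutoff, and the synthetic sine-wave "voice" are all my assumptions, not anything any real system is known to use):

```python
import math

def synth_tone(freq_hz, seconds=1.0, rate=8000):
    """Generate a pure sine wave as a toy stand-in for a voiced segment."""
    n = int(seconds * rate)
    samples = [math.sin(2 * math.pi * freq_hz * t / rate) for t in range(n)]
    return samples, rate

def estimate_f0(samples, rate):
    """Crude pitch estimate: a sine at f Hz crosses zero ~2f times/second."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings * rate / (2 * len(samples))

def crude_sex_guess(samples, rate, cutoff_hz=165.0):
    """Threshold the pitch estimate; cutoff is an illustrative assumption."""
    return "male-range" if estimate_f0(samples, rate) < cutoff_hz else "female-range"

low, r = synth_tone(120)    # in the typical male f0 range
high, _ = synth_tone(220)   # in the typical female f0 range
```

The point isn't that this is accurate on real audio (it isn't); it's that even a few lines of arithmetic separate the two ranges on clean input, so "a human listens to verify the voice is male" as the main check is a strangely low bar next to the sophistication claimed for the rest of the system.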
Feels like well done propaganda more than anything else to me.
It's plausible they use AI. It's also plausible they don't that much.
It's plausible it has a high false positive rate. It's also plausible it has multiple layers of crosschecks and very high accuracy - better than human personnel.
It's plausible it is used in rush without any doublechecks at all. It's also plausible it's used with or after other intelligence. It's plausible it's used as final verification only.
It's plausible that targets are easier to locate home. It's plausible it's not, ie. it may be easier to locate them around listed, known operation buildings, tracked vehicles, while known, tracked mobile phone is used etc.
It's plausible that half a dozen active officers want to share this information. It's also plausible that a narrow group of people have access to this information. It's plausible they would not engage in activity that could be classified as treason. It's also plausible most personnel simply don't know the origin of orders up the chain, just the immediate link.
It's plausible it's real information. It's also plausible it's fake or even AI generated, good quality, possibly intelligence produced fake.
Frankly looking at AI advances I'd be surprised if propaganda quality would lag behind operational, on the ground use.
- Weaponized financial trojan horses like crypto
- Weaponized chemical warfare through addictions
- Drone swarm attacks in Ukraine
- AI social-media engineered outrage to change the public's perception
- Partisan, jingoistic mainstream war propaganda
- Censorship and manipulation of neutral views as immoral
- Weaponized AI software
Looks like a major escalation towards a total war of sorts.
There has been no mass self-correction to my knowledge that would avert this kind of destructive behavior.
But in saying that, I am fully aware that most of such behavior stems from people who are in charge of the world at a political level.
Is it implausible to think that this is something that will have to change in order for the world to change?
The war doesn’t serve anyone but a few rotten minds who are trying to make decisions on behalf of millions if not billions of people.
And we share a similar nudge. I do think that what is happening in the world today is a mere preparation (of society) for a massive power struggle in various parts of the world that will inevitably lead to a full-blown war. But this is only my personal feeling/interpretation.
I realize this seems almost unrealistically upbeat, and most people don’t want to believe it given what we see in the media every day. Note that I’m not arguing against increasing global instability, which will become worse if Russia triumphs in Ukraine (whatever form that could take) or the US continues to turn its back on its allies.
Disinformation and AI fakery via social media are probably the scariest things to me on your list. Twitter is now a garbage dump for this stuff, but the good news is that it is hemorrhaging both users and money.
War is terrible. War has always been terrible. It was almost certainly worse in the past, but it still sucks now. Most of the things you mention were way worse 100 years ago.
Sure, AI didn't write the propaganda, humans did. The effect was the same.
But what is even sadder is that the supposedly morally superior western world is entirely bribed and blackmailed to stand behind Israel. And then you have countries like Germany where you get thrown in jail for being upset at Israel.
Back in 2002 or so, a friend of mine swore blind that an American had been arrested for wearing a "give whirled peas a chance" T-shirt — which is an anecdotal way of saying: are you sure you've got the full story?
I'm learning German by listening to „Langsam Gesprochene Nachrichten“ by Deutsche Welle, and it definitely looks like a lot of people are less than enthusiastic about how Israel's forces are conducting themselves in war despite the constant note that Hamas is (1) a terror organisation that (2) started this particular round by killing 1000 civilians: https://www.dw.com/en/israel-withdraws-from-gazas-devastated...
Germany is also extremely sensitive to every aspect of this due to the events of 80 years ago.
Reports I've seen from the BBC show that there are significant protests in Israel, by those who consider the war to be justified, against their own government, not only for dropping the ball by failing to prevent the initial attack, but also for driving a wedge between them and their closest allies with the conduct of the war: https://www.bbc.com/news/world-middle-east-68722308
Add religious indoctrination to that. A huge number of Americans are evangelical Christians who unconditionally support Israel because they are utterly convinced that the continued existence of Israel is a necessary prerequisite for the reincarnation of their god.
That said, there's also something noticeably different about this conflict. For the first time, the reporting I've seen in the mainstream press has generally been trending negative towards Israel. For example, the Washington Post has had a recent article on a press tour the IDF led of the burned-out remains of the hospital it attacked, clearly part of a campaign to justify why it was necessary, and the entire article was dripping with subtext of "we don't buy what the IDF is saying". And even the political headlines are generally framed in a way to keep you asking "should the US even be supporting Israel?"
Israel has already squandered all the sympathy it got from the terrorist attacks last October, and it's well on the way to squandering all residual sympathy from the Holocaust. And the Israeli political and military establishment seems to have zero clue that this is going on.
At times, Israel allowed for a two-state solution but Hamas wanted every Jew there dead or gone. They’d push them into the ocean itself if allowed. People called for Israel reducing their presence in Gaza for peace. Doing that led to more attacks instead of more peace.
Recently, Hamas killed and kidnapped civilians on purpose. Whereas, Israel warned people to leave before the invasion where they then focused on military targets. If people stayed and were connected to those, they’ll likely die during the invasion. The OP is about people who stayed that are mostly connected to militants. OP writer pities their families but not all the non-militant families Hamas killed.
While both sides are plenty guilty, one is actually aiming for peace, focusing on military targets, and reducing civilian casualties. The other broke peace, attacked civilians, and called for more genocide. The difference between these two strategies shows that anyone wanting long-term stability with less murder in the area should support Israel.
Also, Israel is allied more with us while their opponents keep funding terrorist groups, including our own enemies. They’re also strong, economic partners. Why on earth would we ditch our friends to back people who do little for us and support our enemies?
1. “How the Israel lobby moved to quash rising dissent in Congress against Israel’s apartheid regime”
2. “Top Pro-Israel Group Offered Ocasio-Cortez $100,000 Campaign Cash”
3. “Senate Candidate in Michigan Says He Was Offered $20 Million to Challenge Tlaib”
[1]: https://theintercept.com/2023/11/27/israel-democrats-aipac-b...
[2]: https://www.huffpost.com/entry/ocasio-cortez-aipac-offer-con...
[3]: https://www.nytimes.com/2023/11/22/us/politics/hill-harper-r...
That is appalling.
If we can’t trust AI to drive a car, how the hell can we trust it to pick who lives and who dies?
It is obvious that Israel has loosened their targeting requirements, this story points to their internal justifications. The first step in ending this conflict must be to reimpose these standards of self restraint.
At that point I had to scroll back up to check whether this was just a really twisted April Fools' joke.
There's often a criticism of the US military doctrine that our weapons are great but are often way more expensive than the thing we shoot them at (as exemplified in our engagement with the Houthis in the Red Sea.)
If anything, the quote you pulled sounds like its talking about highly precise weaponry, and it seems to me that the way to minimize the overall death in a war is to use your precise weapons to take out the most impactful enemy.
Which part of this is different than how you see the world so that reading this quote threw you?
expensive relative to what? a single rifle bullet? jdam kits are not expensive, easy to manufacture, and there's plenty of 500lb dumb bombs lying around. If a country has access to precision-guided bomb tech then I'd say they should be obligated to use it for bombing exclusively.
Hamas combatants like fried chicken, beer, and women. I also like these things. I can't possibly see anything wrong with this system...
Our premier AI geniuses were all squawking to Congress about the dangers of AI, and here we see that "they essentially treated the outputs of the AI machine 'as if it were a human decision.'"
Sounds like you want to censor information that could hurt your bottom line.
I am pro Palestine and not simping for Israel. I think visibility on Israel's actions matter, but HN is also very clearly not the appropriate website for a lot of politically involved news.
Just as an example, the EU is setting a lot of law and policy surrounding technology right now, affecting how companies like Apple operate or putting policy into place to regulate emerging technologies like AI. The people who make the technology should be aware of those policies, how it affects what they build, and society's view on the products of their development more broadly.
I realize Israel and Palestine is a charged topic, but in my view, the high stakes of that conflict and the threat to human life on both sides means it's more important to have conversations about technology in that context, not less. Those conversations are probably going to hurt somebody's feelings, but we ought to talk about issues like how freedom of speech online and terrorism are connected and how AI systems and the military are mixing because it's important to maintaining the ethical fabric of our profession.
There could hardly be a more pertinent issue for tech right now. Just sweepingly wild shit that we should be grappling with.
This should be advertised. The true price of AI is people using computers to make decisions no decent person would. It's not a feature, it's a war crime.
We are subject to the whims and political views of those who are aligned with, run, manage, or hold a stake in YC, and to their policies and values.
I think it takes a tiny number of flags to nuke a post, independent of its upvotes, so strong negative community opinions are always quick to kill things.
To restore it, mods have to step in, get involved, pick a "side".
I think the flagging criteria need overhauling so that popular, flagged posts only get taken down at the behest of a moderator. But that does mean divisive topics stay up longer.
For the nothing it's worth, I don't see this post as divisive. It's uncovering something ugly and partisan in nature, but a debate about whether or not an AI should be allowed to make these decisions needn't be partisan at all.
How are those "acceptable" collateral deaths not war crimes?
You can think that what they are doing is bad, but thats unrelated to the highly specific claim of genocide, which requires specific intent.
If Israel wasn't able to use tools like this, then it probably wouldn't be viable for them to identify much of Hamas (that's kind of the point of guerilla warfare). Since that would make it difficult to fight a war efficiently, they would be more likely to engage in diplomacy.
To put it bluntly, using AI to decide on targets for lethal operations is unconscionable given the current and foreseeable state of the technology.
Come back to me when it can be trusted to make mortgage eligibility decisions without engaging in what would be blatantly illegal discrimination if not laundered through a computer algorithm.
We have no idea whether this story itself is relaying anything of value. For all we know, stories like this could be a part of the war effort.
> Underlining everything +972 does is a dedication to promoting a progressive worldview of Israeli politics, advocating an end to the Israeli occupation of the West Bank, and protecting human and civil rights in Israel and Palestine.
> And while the magazine’s reported pieces—roughly half of its content—adhere to sound journalistic practices of news gathering and unbiased reporting, its op-eds and critical essays support specific causes and are aimed at social and political change.
[1]: https://www.tabletmag.com/sections/israel-middle-east/articl...