My understanding from the reading I've done is that this project is to analyze drone video after it's collected, to automate tasks like picking out when people enter and exit buildings and reading the license plates off cars. If that understanding is correct, then this quote seems disingenuous. Google's project is no more designed to "Target and kill at a distance" than the designers of the drone's camera, or its engine. Arguably even less so, since those components are in use when the drone actually launches strikes. Google's project only comes into play after drones have returned and there is time to crunch the data.
Of course it is! It's an intentional mischaracterization of what the software does.
> Uses like this for technology are a perfect example of things we...all promised we would never do.
I didn't make any such promise. When did you?
The cause of human rights, freedom, dignity, and survival would have been actively harmed if British and American scientists and engineers took the same moral stance that you're taking and refused to participate in the war effort. Do you think it was wrong of Turing to help break German encryption?
The letter that I read[1] states:
"Recently, Googlers voiced concerns about Maven internally. Diane Greene responded, assuring them that the technology will not “operate or fly drones” and “will not be used to launch weapons.” While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it’s delivered it could easily be used to assist in these tasks."
and:
"Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable."
So the concern seems to be that the military can easily repurpose Google's technology to lethal ends.
[1] - https://static01.nyt.com/files/2018/technology/googleletter....
I don't see why their promises mean anything, and I don't see why some people never learn that there are no good corporations. They all exist to make money. Except the non-profits. And there is tons of money in war, as you know.
Building tech that kills people is therefore either playing dumb or not caring.
> While a Google spokesperson says the program is "scoped for non-offensive purposes," a letter signed by almost 4,000 Google employees took issue with this assurance, saying, "The technology is being built for the military, and once it's delivered, it could easily be used to assist in [lethal] tasks."
If the object and intent recognition is made fast enough, and the data can be sent to and from a drone in flight, then the technology can be repurposed offensively, regardless of its initial purpose.
That's certainly easier for me to believe than someone who sincerely thinks that a military spending millions of dollars on targeting systems isn't planning to use the targeting systems it spent millions of dollars to develop.
This is likely a result of massive corporate/government entanglement. Google can't say no. Their stock could crash, their negotiating ability could go down significantly, all the work they've done on lobbying could be in danger. Who knows what other back room deals are happening.
I mean, I'd be hard-pressed to think of any such product, but maybe I lack imagination.
Granted. There's getting it to a demoable state, and then there's getting it to work under all conditions, getting it stable, getting it tested, and so on and so forth.
But still, this is not exactly state of the art anymore. This ship has sailed. Over and done. The genie cannot be put back in the bottle. The US army has this capability now, and very soon essentially any professional military will have it. A quick course on AI will enable you to do this, and I assume that the US military has enough such people available.
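For a sense of how commodity this has become: detecting people and vehicles in aerial footage is a few lines of off-the-shelf code these days. A minimal sketch using the open-source Ultralytics YOLO package (the video path is a hypothetical placeholder, and the pretrained model is nowhere near military-grade):

```python
# Minimal sketch: commodity object detection on drone-style footage
# with a pretrained Ultralytics YOLO model. Not military-grade; just
# an illustration of how low the barrier to entry is.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained model, downloaded on first use

# stream=True yields results frame by frame instead of buffering the video.
for result in model("drone_footage.mp4", stream=True):
    for box in result.boxes:
        label = model.names[int(box.cls)]
        if label in ("person", "car", "truck"):
            print(label, box.xyxy.tolist())  # bounding-box coordinates
```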
Same with tracking specific people in (high-res) cams. There's a computational cost, but this has been done and described so many times. If anybody wants to build a network of cameras that can track specific people by their faces, there's nothing stopping them at this point.
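And matching a specific face against camera frames is just as accessible. A minimal sketch using the open-source face_recognition library (the image file names are hypothetical placeholders):

```python
# Minimal sketch: matching one known face against frames from any
# camera in a network, using the face_recognition library (dlib-based).
import face_recognition

# Build a "watchlist" encoding from a single reference photo.
# (face_encodings returns one 128-d vector per detected face.)
target = face_recognition.load_image_file("target.jpg")
target_encoding = face_recognition.face_encodings(target)[0]

# Scan a frame captured from some camera in the network.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    # compare_faces returns True when faces match within a tolerance.
    if face_recognition.compare_faces([target_encoding], encoding)[0]:
        print("Target spotted on this camera.")
```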
So why get all worked up about this? What's the big deal?
Robustness is harder, but for this problem, not very hard.
There is one section on resigning:
> If we discover misuse of data that we consider illegal or unethical in our organizations:
> ...
> If we do not have such authority, and our organizations force us to engage in such misuse, we will resign from our positions rather than comply.
From what I have seen, Google's help with the Pentagon seems limited to software consulting. If Google had shared any data with the Pentagon for military purposes then it would have crossed a line for the pledge.
The pledge says to try to fix it through: working with colleagues/leaders; then whistleblowing; then legal defenses if they have the authority; then resignation.
I'm guessing not many, which is one reason why it is important for firms such as Google to prioritize the hiring of combat vets (besides the fact that they risked their lives to serve our country).
For those who have not served in combat, or lost a friend or relative who served in combat, saving lives with drone technology is too abstract.
The drones are very effective at killing terrorists and technology which improves the effectiveness of killing terrorists and enemy combatants (and thus saving American lives) is a good thing.
In Israel, both men and women are drafted, and women can serve in combat positions if they desire. Men serve one-month stints of reserve duty until they are 40. Some of these men have been educated as engineers, and they understand first-hand the importance of developing technology to save the lives of combat soldiers.
This is something firms like Google are missing: there seems to be little empathy for US military soldiers who are risking their lives defending our nation, and little interest in developing technologies to save their lives.
There are more than American lives at stake. I suppose what you say would be true if the U.S. used the technology to kill only those universally bad terrorists, and if this somehow significantly decreased the damage terrorists do around the world.
But this is fantasy. The U.S. will most probably use this technology to detect more and to kill more, with much less regard for foreign lives than for American lives. If the technology flags a building that most probably contains a terrorist, the non-terrorist people present won't matter much to a drone whose job is killing terrorists. The building, with all the people near it, will be gone. I do not believe the U.S. government or U.S. military cares about the lives of poor people on the other side of the world.
Drone attacks also probably don't serve the U.S. well in the long term. They generate strong opposition worldwide and probably also create new terrorists. Killing one terrorist this way now may mean creating ten terrorists ten years from now.
And then you have the argument that if you reduce the human cost of wars, you make wars more likely.
Very good point! Drone attacks make the operators into drones themselves, detached from feeling the consequences of their actions.
Even though a program like this might save some lives in the short term, training machines that have no empathy and can be programmed to kill for whatever ends seems like a Pandora's box that we should be damn sure we want to open before actually doing so.
Also, my apologies for the less-than-respectful replies that you are getting. You should not be chastised for respectfully sharing an important opinion, and their behavior is unbecoming of what we should expect of American citizens.
On another note, what about everyone who works for Intel, or ARM, or any of the EDA companies? Or on open source? Should they all quit unless they're comfortable being accomplices to the creation of machines of mayhem? Or what about everyone who works for BoA or Wells Fargo - how can you in good faith work for companies that have again and again been shown to engage in questionable business practices? And oil companies have hardly been innocent of questionable activities in Africa and elsewhere - how can people work for them? Then there are the medical, forestry, and cosmetics industries, and, well, pretty much every other major industry - full of people who choose to put their moral obligations aside. Never mind that the clothes we wear are the result of child labor.
In other words, which question is more appealing:
* Given that we're in this conflict, what should we do?
* How can we prevent these kinds of conflicts in the future?
Those are very different conversations. Personally I'm more focused on the second one. But we have to take care of the first one too. And it's messy.
If these types of projects make war machines more precise overall, they may actually decrease overall collateral damage and reduce the total time war is waged, which could mean fewer lives lost during war.
Until we humans can collectively overcome the various problems that cause war, it might be worth it for the best minds to help make war machines as precise as possible.
Say a government only has nuclear bombs in its arsenal and they really want an enemy of the state dead. Do you think they're willing to nuke an entire city to kill one person?
Now imagine a government has electronic kill switches. Imagine it being almost like The Matrix: they can just flip your life off at the flick of a button. Do you think they'd be willing to just flick off the lives of anyone they don't like?
You're effectively arguing that the latter is better than the former. Societies use more of a technology the further down the learning curve its development goes and the cheaper it becomes. If there is no cost to violence, violence will be endless.
Unfortunately, historically speaking, war is assured.
We've tried non-intervention before, and places like Czechoslovakia, Poland, China, and Rwanda have paid the price. And the adversaries we have faced, from Hitler to Daesh, would not hesitate, as we would, to use weapons of mass destruction. The reason asymmetric warfare works is that the terrorist is willing to stoop to levels that we are not. The only counter to that is precision warfare.
> If these types of projects make war machines more precise overall, they may actually decrease overall collateral damage and reduce the total time war is waged, which could mean fewer lives lost during war.
"I'm staunchly opposed to beating my children, but understand it's unfortunate necessity under certain extreme circumstances.
If these types of projects make child beatings more precise overall, they may actually decrease overall harm to children...."