For example:
There is a blind spot in AI research | Autonomous systems are already ubiquitous, but there are no agreed methods to assess their effects
If it's interesting, I'll still click.
While I never post a comment on an article before reading it, I almost always read the comments before reading the article to avoid clickbait. A brief one-line summary that serves the same role as an abstract would eliminate the need to read the comments first.
It might even help eliminate clickbaity titles in technical articles altogether.
BuzzFeed/Upworthy had such an impact on the world of journalism that, after an eon of only ever using original headlines, Techmeme was forced to start rewriting headlines.
Here was their post outlining that decision: http://news.techmeme.com/130906/headlines
It actually does discourage that; I remember seeing a post from dang saying so.
In almost every domain where there is data and a simple prediction task, even really crude statistical methods outperform "experts". This has been known for decades. Yet in almost every domain, algorithms are resisted, because people distrust them, fear losing their jobs, or both.
But humans are vastly more biased. Unattractive people get sentences twice as long. People heavily discriminate based on political affiliation, not to mention race or gender. Judges hand down far harsher sentences when they are hungry. Interview ratings correlate negatively with job performance.
Humans are The Worst. Anywhere they can be replaced with an algorithm, they should be.
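For what it's worth, the "crude statistical methods" claim is easy to demonstrate. Below is a minimal sketch in the spirit of Dawes' improper linear models, using entirely synthetic data and a made-up noisy "expert"; the only point is that a unit-weighted sum of standardized cues is a hard baseline to beat.

    # Synthetic sketch of "crude statistical methods beat experts".
    # All data and the "expert" here are hypothetical; this only
    # illustrates why a unit-weighted linear model is such a strong baseline.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # A prediction task: the outcome is a noisy function of three cues.
    cues = rng.normal(size=(n, 3))
    outcome = cues @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)

    # "Crude" model: standardize each cue and just add them up.
    z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
    crude = z.sum(axis=1)

    # An "expert" who weighs the same cues but inconsistently from case
    # to case (a standard explanation for why experts underperform).
    noisy_w = np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.4, size=(n, 3))
    expert = (cues * noisy_w).sum(axis=1)

    print("crude model r:", np.corrcoef(crude, outcome)[0, 1])   # ~0.49
    print("expert r:     ", np.corrcoef(expert, outcome)[0, 1])  # ~0.35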
The referenced ProPublica result has been criticized here: https://www.chrisstucchio.com/blog/2016/propublica_is_lying.... ("almost statistically significant")
A recent example: Facebook replaced human-curated trending news with machine curation, and it started trending fake news [2].
Another example is the algorithms that try to help you during automated phone calls; people always try to get through to a human, because the speech-to-concept parsing/mapping is flawed or because the system isn't programmed to perform some specific tasks.
Another example is self-driving cars: Google cars have been involved in more accidents per mile than the average human [1].
In general, people's intuition is built through repeated encounters; that's why it works so well.
[1] https://www.bloomberg.com/news/articles/2015-12-18/humans-ar...
[2] http://www.theverge.com/2016/8/30/12702478/facebook-trending...
> Driverless vehicles have never been at fault, the study found: They’re usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.
Which completely negates the point you are trying to make.
In fairness though, it sounds like the point you and others are often making is this: humans are now considered dumb, biased, and unreliable, so we need to invest in some kind of external policing system (AI) to run our world for us and make sure we're doing it right. Basically, establish reliance on something external to ourselves?
This is sad, because it sounds like we're losing faith in our ability to evolve for the better and hoping the machines can do a better job of self-improvement?
I'm genuinely curious about your point of view; sometimes I'm confused by the enthusiasm people have for this aspect of AI. Is it a form of distrust and dislike of society that makes us want to put faith in robots? A kind of adult angst?
I worry because we could be barking up the wrong tree if this is the case.
I see fear about algorithms everywhere. Previous articles insist that algorithms could be unfair or racist, and this article suggests things along those lines as well. The EU recently banned perhaps the majority of applications of machine learning anywhere they might be used to rank individuals. This fear is hugely setting back society and technological progress, and almost every one of these places will have to revert to human judgement, which by every measure is far worse and far less fair.
Nanotech has a similar problem at hand. There are indications that nanoparticles could have serious health effects, yet researchers are pushing ahead full steam with bringing nanotech to market; the money going into development far exceeds what's going into safety testing. In an AMA with a nanomaterials researcher, I asked if he ever has concerns about the safety of what he is making. His response was along the lines of "Sure I do, but it's not my job to deal with that. I just get paid to develop the tech."
Tech development has always had a shoot-first, ask-questions-later approach.
Then naming it Autopilot is deceptive marketing at best. I love Elon as much as anyone else, but I find this marketing practice downright dangerous.
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
-- Pedro Domingos, in The Master Algorithm (2015)
Until we overcome such issues in humans, they probably are not solvable in AI.
There should be an addition to the statement though, which is "Unfortunately, Humans en masse are even stupider."
Society is ill right now and maybe people are hoping artificial intelligence will take them to a better place?
The authors are asking: in a hypothetical world where many decisions are AI-assisted, what is the risk that AI systems slow social change because they are too dumb to understand exceptions, peculiarities, and positive externalities? What can we do to establish parameters that will allow us to know when a given AI system is trained well enough to be used in the real world, with minimal risk of undesired social and cultural implications?
"As another example, a 2015 study9 showed that a machine-learning technique used to predict which hospital patients would develop pneumonia complications worked well in most situations. But it made one serious error: it instructed doctors to send patients with asthma home even though such people are in a high-risk category. Because the hospital automatically sent patients with asthma to intensive care, these people were rarely on the ‘required further care’ records on which the system was trained."
Basically if you can drive a manual car, it is easy to drive an automatic one, but the opposite is not true.
When I was a mover, GPS units, old and new, got the address wrong about 15% of the time. And when the GPS fails, what do you do if you have no usable maps?
Well, we have fired the people who make the maps; they are hardly updated at the pace at which mayors and real-estate promoters reshape the territory. If you have an awesome GPS with outdated maps, your GPS is useless, no?
We are skipping the heavy, costly underlying work of maintaining maps and directions, and of training drivers to read signs, figuring GPS made all that obsolete. Now we have to maintain maps, satellites, and computers, and live with people who can't use a map and compass, who are distracted while driving by potentially wrong information, and who are too dumb to read the sign telling them they are entering a one-way street the wrong way, because they are relying on their GPS.
Then too, the automation in Airbus, Tesla, and Boeing systems has proven to be less valuable than pilots' experience when computers fail due to false negatives (frozen Pitot probes) or false positives (sun-blinded cameras). I think civil and military accident records are a good source of information about the "right level of automation".
The problem is that keeping workers up to date requires constant, heavy practice without too much automation. And human time nowadays is expensive.
That is one of the reasons France (unlike Japan) kept automation in its nuclear plants rudimentary: when a system is critical, you really prefer a human who can handle things 99.999% of the time over a computer that does great 100% of the time if and only if its sensors work and nothing too catastrophic happens (flood, tsunami, earthquake).
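As a toy illustration of that comparison (illustrative numbers only, not real plant data), the unconditional reliability of the automated system is bounded by the reliability of the conditions it depends on:

    # Illustrative numbers only, not real plant data.
    p_human = 0.99999            # operator correct 99.999% of the time

    p_sensors_ok = 0.999         # assumed sensor availability
    p_no_extreme_event = 0.9999  # assumed absence of flood/tsunami/earthquake

    # The automation is perfect *conditional on* those assumptions holding,
    # so unconditionally it can do no better than their joint probability.
    p_computer = p_sensors_ok * p_no_extreme_event * 1.0

    print(f"human:    {p_human:.5f}")     # 0.99999
    print(f"computer: {p_computer:.5f}")  # 0.99890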
The problem is that industry wants to skimp on costly training and education (not the university kind; I mean the kind that is actually useful). But knowledge you have not yet built up because circumstances changed (I will be delighted to see how self-driving cars behave in massive congestion with deadlocks) will be hard to program if we lose the common sense that comes from doing things ourselves. How do you correct a malfunctioning machine doing something you have forgotten how to do correctly yourself? You may not even know when it will fail; not because of the machine, but because of your own lack of a frame of reference.
"A practical and broadly applicable social-systems analysis thinks through all the possible effects of AI systems on all parties."
which seems incredibly difficult to do completely. Hopefully the authors will further describe their approach in future publications.
Also, somewhat of a nitpick, but the article states:
"The company has also proposed introducing a ‘red button’ into its AI systems that researchers could press should the system seem to be getting out of control."
in reference to Google, but cites a paper which discusses mitigating the effects of interrupting reinforcement learning [0]. The paper makes a passing reference to a "big red button" as this is a common method for interrupting physically situated agents, but that is certainly not the contribution or focus of the work.