At least with a human president, there is the chance that they will grow up and shrug off the orders they are given. The power actually lies with the person that was elected, not the invisible people who paid to get them the job.
[1] http://www.theatlantic.com/technology/archive/2013/01/ibms-w...
But how do they know Watson would find the expected value of these things positive? Maybe Watson would be a Republican. And this all points down a huge rabbit hole of ethical and political philosophy and stuff.
Like, should Watson take into consideration his likelihood of being elected? In that case, much of his neural network should be dedicated to predicting voter outcome. And that seems pretty obviously problematic.
The problem is that Watson has been talked up as almost a strong AI, when it's actually a really good classifier, annotator and summarizer. While there's a great role for Watson-style systems in policy development and practice, they are only one in a battery of ML and analytic techniques, none of which can stand alone without a fully human point of view.
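To make concrete what "a really good classifier" means at its simplest, here's a toy bag-of-words nearest-neighbour text classifier (purely illustrative, with no relation to Watson's actual internals):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words representation: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify(text, labeled_examples):
    """Assign the label of the most similar training example."""
    return max(labeled_examples, key=lambda ex: cosine(bow(text), bow(ex[0])))[1]

examples = [("cut taxes and reduce spending", "fiscal"),
            ("expand healthcare coverage for all", "health")]
```

Useful as one tool among many, but obviously nothing like a "fully human point of view".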
Easy enough. If Watson turned out to be a Republican they would start tweaking the parameters until it became a Democrat.
That’s very unlikely.
Any logical system built on the premise that human life has intrinsic worth (no matter what the person has contributed to society) necessarily arrives at the conclusion that things like subsidized healthcare are mandatory.
Obviously, one could give the program the basic assumption that human life is not worth anything and that it should instead focus purely on profit, and it might end up with a more Republican ideology.
But giving an AI with access to nuclear weapons the assumption that human life isn’t the most important factor is... a bad idea.
Not just facts: the designers of the AI also choose the underlying assumptions and models. Even the very idea of using an AI implies a certain set of biases and intentions.
Characteristics that are truly common enough in humans that they can safely be extracted as a shared factor are rare. Most of the time we try to compromise so we can call our differences "close enough". The process of finding and/or creating those compromises is what we call "politics", and while computers can certainly help as a tool, the process needs human involvement by definition.
Attempts to turn over any kind of political or social decision-making to an algorithm are simply a way to disguise the concentration of power. The algorithm's designers ultimately end up with the power, while others are denied it.
The racist tactic known as "redlining"[1] is a pre-AI example. Black people aren't denied housing directly, they simply "don't qualify" for a loan, with the real reasons obscured behind a proprietary "credit worthiness" equation.
[1] http://www.theatlantic.com/magazine/archive/2014/06/the-case...
edit:
Instead of using AI as a decision-maker, a place where AI (and other technology) might actually be useful is as the facilitator and/or part of the "panel of experts" used in Delphi methods[2]. While the people tend to jump on bandwagons and make stupid decisions when unorganized, we have a lot of examples where a general crowd of people make very good decisions when they are focused on a specific goal and have enough structure to allow for iterative refining of ideas.
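As an illustrative sketch (not any real Delphi tooling), the "re-run it until everybody stops changing their mind" loop might look like this: each round, participants see the group median and partially revise toward it, and the process stops once no one's position changes meaningfully:

```python
import statistics

def delphi_rounds(opinions, pull=0.3, tol=0.01, max_rounds=50):
    """Toy Delphi loop: each round, participants see the group median
    and move a fraction `pull` of the way toward it; stop once nobody
    changes by more than `tol` (i.e. everybody stops changing their mind)."""
    for round_no in range(1, max_rounds + 1):
        med = statistics.median(opinions)
        revised = [x + pull * (med - x) for x in opinions]
        if max(abs(a - b) for a, b in zip(opinions, revised)) < tol:
            return revised, round_no
        opinions = revised
    return opinions, max_rounds

final, rounds = delphi_rounds([2.0, 5.0, 9.0])
```

The `pull` and `tol` parameters are invented for the sketch; a real Delphi process would involve structured feedback and argumentation, not a single number per participant.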
A few of his comments that are relevant to the use of technology with politics and society:
We have these extraordinarily limiting constraints from a past in which we did not have
the tools to have anything other than extraordinarily limiting constraints. But, now we
do have the tools, and the tools are running away with us faster than the social
institutions can keep up.
...
I think countries ought to set up Departments of the Future. [...] We are on the edge of
having the technology to be able to say, let us run a constant, dynamic, updated review
of everything that science and technology is thinking about [...] then let us use the same
techniques to ask the public in general, not politicians, whether they like that idea,
whether they feel that they could live with that idea. And then, like a Delphi technique,
re-run it until everybody stops changing their mind.
...
Collate all [research laboratories and business R&D] together and process them using stuff
like big data to see what the pattern looks like becoming, and then layering on top of that
social media analytics to say, if this was coming, would you like it, and if not, why not?
In other words, to have a sort of 24 hour a day referendum.
The other parts of the interview are very interesting as well:
... it’s no longer important to teach people to be chemists or physicists or anything ‘ists
because those jobs are gone, and if they’re not gone today they’re gone tomorrow. And unless
we know the old tools of critical thinking and logic and such, we will not be able to handle
what follows. So, we’re wasting our time training people to be things that will no longer
exist in 10, 15, 20 years time.
...
Every single value structure is meaningless [...] commercial society will be destroyed
at a stroke. The trouble is the transition period [...] how we get from here to there.
The vested interests, I mean, we’re going to have to shoot every one of them – nobody,
nobody is going to give way to this. [...] All cultural values relate to scarcity, ultimately.
[1] http://youarenotsosmart.com/transcripts/transcript-interview...
It is absolutely normal to have an establishment and an elite that influences the government's and the president's decisions.
Only in the most dictatorial and absolutist types of governments would you see a lone person at the top deciding on issues without consulting anyone.
But I'm not sure that such a thing has ever existed. Even maniacs with a cult following behind them like Hitler had to balance the interests of different factions within the system.
What we have to get rid of is certain categories of influence that have a negative impact on the majority (bribery, for example).
The game's backstory is that the Cold War turned hot, nukes were launched, and one (or more) underground cities were built in the US, administered by a paranoid computer that hates and fears communists.
My current Paranoia character works in "TechServices" and sometimes makes his opinion known (something that is actually illegal to do) that "The Friend Computer" isn't the real ruler; its designers and programmers are.
It is fun to see the implications play out in the game, especially as the mindset of the players affects their characters' behaviour: some are loyal to the computer, some consider the computer the enemy, and some consider the computer only a tool and are loyal (or hostile) to the "High Programmers" who have access to the administration computer code.
The end result is a rather byzantine situation where everyone (including The Computer) is plotting against everyone else, all being sufficiently paranoid.
Granted, it may not work out so rosily in real life (the reasoning log would likely be liberally redacted due to factors classified from public knowledge, just as some of Obama's more puzzling stances may, charitably, be explained by things he's not allowed to tell us), but that's its own problem.
Having AIs as advisers would be totally different. In the end we could still hold responsible the people who listened to that advice, as it's their responsibility to check whether the advice from their AI advisers is sound.
That doesn't change the AI, though: it is still physical hardware somewhere, so even if you could verify the software you could still exploit the hardware.
This has so much potential for a Philip K. Dick-ian rabbit hole of paranoia and insanity that it made me chuckle. I think you might be hand-waving away a lot of complexity here. :)
This was literally the best usage of AI as comedy ever made.
"Anyone smart enough to be a good president wouldn't want to be president."
This, because of the pressure and stress involved...
Watson, with human advisers, should be relatively stress-proof...a possible plus...
I guess then the joke would evolve to: "Anyone smart enough to be a good adviser wouldn't want to be."
... unless there's leverage against the elected, something that would subjectively seem to compromise integrity more than the corruption we've slowly gotten used to.
No other area of human activity is so unscientific and so replete with falsehoods and lies as politics. We don't tolerate physicists or doctors lying, but lying in politics is a norm. Add to this a mix of dogmas, ideologies (often based on backward religious ideas), wishful thinking, and yes, men's testosterone power plays, and you have a recipe for non-progress, wars, subjugations, geopolitical games, etc etc etc. Even ideas that look like a noble cause often backfire and result in death and destruction.
Scientifically-based politics should be the norm in the 21st century.
This statement is already a normative one. I don't even know where to start in terms of breaking down this incredibly feeble argument but one place to start would be the fact that a move to "Scientifically-based politics" would be a political move in and of itself. Then you have to consider that you're asking people to get rid of ideologies, "backward religious ideas" and replace them with "science." Science done by whom? A bunch of California tech companies? I wonder what that world would look like.
This comment reads like something from /r/juststemthings or /r/justneckbeardthings and I honestly can't even tell if this is a joke or not.
Merkel’s government, despite being criticized for never having an opinion of its own, did this quite well and handled most things competently.
https://en.wikipedia.org/wiki/Technocracy
Technocracy was also a theme among many communists. It has reappeared in the form of the Futurist Party and some other small movements like the Venus Project.
For an interesting review of these ideas, seriously read this: http://slatestarcodex.com/2014/09/24/book-review-red-plenty/
>This book was the first time that I, as a person who considers himself rationally/technically minded, realized that I was super attracted to Communism.
>Here were people who had a clear view of the problems of human civilization – all the greed, all the waste, all the zero-sum games. Who had the entire population united around a vision of a better future, whose backers could direct the entire state to better serve the goal. All they needed was to solve the engineering challenges, to solve the equations, and there they were, at the golden future. And they were smart enough to be worthy of the problem – Glushkov invented cybernetics, Kantorovich won a Nobel Prize in Economics.
Project Cybersyn was a really cool idea that tried to actually implement these ideas in the real world just as computers were becoming advanced enough to do these things. But unfortunately it didn't last very long:
http://www.newyorker.com/magazine/2014/10/13/planning-machin...
If my company violates ITAR rules when we do aerospace work we can go to jail. A Secretary of State handles classified email without regard for security and on servers she controls, and she could be the next President. A President lies and manipulates facts and he suffers no consequences. A Senator makes up shit, lies, cheats and makes promises he will never fulfill and is not held accountable, ever. A Mayor changes a vote and nothing happens to him. A government organization spends lavishly and goes so far as to use its might to punish people who do not align with its politics, and nobody is fired or goes to jail. Politicians launch us into bullshit wars and are not held accountable for any of it. They give money lavishly to other countries (and brutal dictators) while our own kids suffer and schools have to lay off teachers.
No, the elephant in the room isn't the lack of science (although some would be good), it's the lack of consequences. It's the lack of honesty. It's the lack of accountability and restraint. It's that and more. And it won't change until people wake up and demand it, which is unlikely until things become FUBAR.
The system is utterly broken and needs to be adjusted if it will be effective in the next 200 years.
It's no different than what happens when religion gets mixed up in politics. The separation of church and state protects the church as much or more than it does the state. Politics and power corrupt whatever they touch.
There are ways to improve the impact of science (and religion and education, etc) on the political world, but mixing them together too much will just poison the positive traits, rather than lifting up politics.
We have a democratic republic in the United States so popular / populist majorities can't inflict their absolute will on unpopular people / regions.
Populists today, at least in the US, aren't as dangerous as they are / were in countries where changing the constitution is much easier for a powerful / popular leader.
What if the computer decides there is no way to feed the population at the current (or a future) growth rate and implements a mandatory two-children-only policy?
A computer wouldn't understand tact, politics, or empathy; and if it did, its solution wouldn't be much better than what we'd get now.
Not really, judging from the size of aisles filled with homeopathic remedies. Some people practically have to be dragged away, kicking and screaming, by government agencies so that they stop harming themselves.
It's always people. Stupid people ruining beautiful ideals. (And that includes me, of course.) Until we can fix people, it's futile to wish for "scientific" politics.
I'm kind of an optimist, so I believe better education will improve the situation, but who knows.
More realistic is to get politics out of as many areas of life as possible.
There is a big problem: how do you do that? I thought about it recently on another forum. Every scientific statement has the form of an implication: if you do A, you will get B. So there is no self-evident starting point. You might say you start from some axiom system, but then you have to agree on that. Even in classical propositional logic, even if you consider the same resulting formal system, you can get a large (infinite?) variety of axiomatic systems that lead to it.
So, what I would like to see, would be - equip everybody with Watson and let them vote, direct democracy style.
More than anything, this ad makes me think of how long of a way we have to go before this type of AI will be useful to humans, much less able to run a country. Why doesn't President Obama consult Watson before making decisions? Because every decision would require a massive data collection and processing project, training and tuning of delicate models and rigorous testing. And the net result would be what? Processing of factual information that any human could get by reading a brief prepared by an aide?
We're worried about AI representing an existential threat or creating widespread unemployment, but right now the best and brightest computer in the world can't provide any more practical political value than Monica Lewinsky. This is a good ad campaign, timely and provocative. But it also highlights how the path ahead is as long and arduous as a trip to Mordor. glhf, IBM!
[1] http://www.ibm.com/connect/federal/us/en/technology_solution...
From the policy platform this could be Bernie's VP running mate.
FTA:
- Single-payer national health care.
- Free university level education.
- Ending homelessness.
- Legalizing and regulating personal recreational drug use.
- Shift bulk of electrical generation to solar, wind, hydroelectric, and wave farm.
- Review/Repair/Replace/Remove highways, bridges, dams.
- Upgrade and subsidize metropolitan public transit solutions for the next century.
- Build and subsidize metropolitan high-speed communication networks.
- Ensure a minimum-wage that meets a reasonable cost of living.
- Ensure fair and safe working conditions.
- Ensure global environmental commons protections.
This doesn't inspire my confidence in their system's ability to run the nation.
- No idea, the president's database server is down...
- And where are the autonomous tanks flying to ?
- Access denied...
- Oh crap, not again..
[1] http://www.ft.com/intl/cms/s/2/dced8150-b300-11e5-8358-9a82b...
I assume they're relying on non-ML people who make the money decisions to get caught up in the hype and force it on their organizations, but in my (very limited) experience that hasn't worked out. The people in charge at the organizations I've worked at have been just as skeptical of the hype.
Except this isn't anything to do with Watson-the-Jeopardy-winner. It's a set of web services which are useful for doing analysis on various things.
The question answering service was withdrawn last year: https://developer.ibm.com/watson/blog/2015/11/11/watson-ques...
We already have someone running for president with this agenda.
Are you honestly and seriously suggesting that the US tax system is absolutely at the limit and there's no room to pay for any of these policies? For example, we shouldn't have any more tax brackets over the $400,000 one? Have you looked at the proposed tax brackets that actually pay for these plans? How do you explain the idea that 3 new tax brackets ($500k-2m 43%, $2m-10m 48%, and $10m+ 52%) would destroy the economy? How exactly do people who make $10m+ single-handedly sustain our economy, why would a 10% tax increase on their over-$10m income end that, and why do the thousands of dollars of reduced overall expenses (including taxes) for almost everyone else not matter at all in your math?
http://www.bbc.co.uk/programmes/p02np2dg
I believe not: Watson is a solution to the big-data problem of finding relevant information in a massive corpus, but I am not convinced that that's what enables human intelligence. I prefer the view that intelligence is necessarily context-specific (there is no objective "intelligent" act or being) and is enacted between an environment, a body, and a culture, rather than being the processing of patterns of simulation in the brain.
However, I can't shake the feeling that Watson is just telling us what we want to hear.
Time and time again, Hollywood (and Dr. Stephen Hawking) warn us that once given enough power and control, the end-game policy of an AI with ambitions of omnipotence is Kill All Humans.
But it reads awfully seriously. I don't see any of the telltale signs of satire.
So I think a better explanation is this is a clever marketing stunt for IBM.
What if Watson says vegetables should be banned and smoking should be mandatory? What if he's right?
I think this is one of the most important statements on the entire page.
All in all, this is a terrible idea for all kinds of reasons that have no connection to the false idea that Watson is intelligent. ;)
"If you are interested in the intersection between technology and politics we invite you to donate to the Electronic Frontier Foundation ( https://supporters.eff.org/donate ). For 25 years the EFF has been a champion for civil liberties, privacy, and education on politics around emerging technologies. With your support they will continue to aid in technological progression with humanity in mind."
//Uncomment for production
//#define kill_all_humans 0
Watson, the Watson logo, Power7, DeepQA, and the IBM logo are
copyright IBM. The Watson 2016 Foundation has no affiliation
with IBM. The views and opinions expressed here in no way
represent the views, positions or opinions - expressed or
implied - by IBM or anyone else.
According to WHOIS the domain is owned by: Registrant Name: Aaron Siegel
Registrant City: Los Angeles
Registrant State/Province: California
One Corporation, under God!
Since Google/FB/Twitter/Amazon etc. are all being run algorithmically, there's no reason to indefinitely keep paying the execs 300x what the engineers make. Let's just pay Watson. Watson as a VC sounds much more interesting.
If you are not just being cynical (which I wouldn't blame you for), I would revisit this thinking: executive orders[1] and vetoes[2] alone seem like reason enough to have the office. Even if you don't agree with all orders or vetoes, this kind of stuff certainly seems like more than entertainment to me.
> Watson as a VC sounds much more interesting.
That actually seems like a really cool idea. There is a developer API; someone should start one!
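Purely as a hypothetical sketch of what a "Watson VC" might do with such an API (the scoring heuristic below is invented for illustration and is a stand-in for whatever NLP service would actually read the pitch):

```python
def score_pitch(text):
    """Toy heuristic scorer: reward concrete business terms, penalize
    buzzwords. A real system would call an NLP service instead."""
    buzzwords = {"disrupt", "synergy", "revolutionary", "paradigm"}
    concrete_terms = {"revenue", "users", "margin", "churn"}
    words = [w.strip(".,!").lower() for w in text.split()]
    return sum(w in concrete_terms for w in words) - \
           sum(w in buzzwords for w in words)

def rank_pitches(pitches):
    """Rank pitches best-first by the toy score."""
    return sorted(pitches, key=score_pitch, reverse=True)
```

The interesting (and hard) part, of course, is the scoring function, not the ranking.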
[1]: https://www.whitehouse.gov/briefing-room/presidential-action... [2]: http://www.senate.gov/reference/Legislation/Vetoes/vetoCount...
Even if an AI president were in place, providing policy direction to maximize some agreeable set of end goals, the humans around the president would not understand the nuances, nor be willing or able to implement the policy effectively, given their own agendas. I feel that system is bound to fail.
Now if only the AI president were able to do behind-closed-doors deals with politicians and special interest groups, some of the policies might actually get implemented. But the end goals could shift pretty radically in that scenario.
I am having a hard time visualizing a scenario where an AI president would be effective / useful.
Maybe a human president could use an AI advisor to get data-driven arguments to support his/her policies.
IBM
now that would be some serious dogfooding!
[1] https://www.linkedin.com/pulse/2015-technology-7-predictions...
EDIT:
I don't think we can trust an AI just yet. For example, I've had arguments with people who wanted me to feed motivation (cover) letters from job applicants into Watson to determine "cultural fit" (I'm in the tech recruitment business atm). IMO these technologies are way over-hyped for now and we are walking down a very dangerous path, because marketing pushes in this direction and the technology is far from ready.
To prove my point I fed writings by Josef Mengele, Stalin, and Bin Laden into Watson to see how it evaluated the data. As expected, Watson had some "great things" to say about these characters. Another feeling I get is that reading info about ourselves in this context is like reading a horoscope. People read two things that are true (but vague), and when the third thing may not be true they shrug it off as "oh, I didn't know this about myself yet... I'll have to monitor myself in future to see if this is right". We are prone to be "open" to such statements as long as they sound like a positive trait. But is it true? In that sense machine learning might fool us into thinking we've removed bias (but we cannot remove bias like this). I honestly think this technology should come with a warning label, because people who have no idea how the data is prepared or analysed will interpret the output verbatim and take it at face value.
Here is the link http://blog.valbonne-consulting.com/2015/06/13/using-big-dat...
Every politician seems incompetent because there is no way any human being can gather and analyse the wants and needs of every single constituent and form a strategy that benefits as many people as possible. There's just not enough time in the day or brainpower available, no matter how you divide the workload.
AI can solve this. Maybe not as a candidate but at least as a raw information parser.
If someone gathered and open sourced the information on what everyone wanted in relation to some policy we could even have competing AIs that parse it in different ways to figure out the best way to tackle a problem.
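A minimal sketch of what "competing AIs parsing the same open data" could mean (the data format and both aggregators are invented for illustration): two aggregators read the same preference records and can legitimately disagree, depending on whether they count heads or weight by stated intensity.

```python
from collections import Counter

# Invented record format: (policy_option, stated_intensity 1-5).
def majority_winner(prefs):
    """Aggregator A: the option mentioned by the most people wins."""
    return Counter(opt for opt, _ in prefs).most_common(1)[0][0]

def intensity_winner(prefs):
    """Aggregator B: each mention is weighted by its stated intensity."""
    totals = Counter()
    for opt, weight in prefs:
        totals[opt] += weight
    return totals.most_common(1)[0][0]

# Three lukewarm transit supporters vs. two passionate healthcare supporters:
prefs = [("transit", 2), ("transit", 2), ("transit", 2),
         ("healthcare", 5), ("healthcare", 5)]
```

Which answer is "the best way to tackle the problem" is itself a political choice, which is rather the point.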
But we don't want electronic voting, so electronic politicians would be a very bad idea.
https://www.ibm.com/blogs/watson/2016/02/decoding-the-debate...
https://treasurytoday.com/2014/05/algorithm-appointed-to-inv...
https://en.wikipedia.org/wiki/Category:Multivac_short_storie...
For anyone interested, here is a recent presentation I gave on Watson and a summary of what it can do.
IBM Watson: Building a Cognitive App with Concept Insights
http://www.primaryobjects.com/2016/02/01/ibm-watson-building...
I cannot imagine how it can possibly get any worse.
Your imagination is severely lacking then.
The executive branch should do the same thing, but the legislature should try it first. That's what I was saying, just the order of operations.