Nowadays nobody thinks of them as geniuses.
Also, people used to be considered geniuses for knowing a lot of things.
Nowadays information is just a Google search away, so knowing a lot doesn't mean as much as it used to. What matters more now is your ability to learn and synthesize the things you know to come up with creative solutions to problems.
Basically the "memory" part of the human brain has become commoditized without us even realizing it.
It's still very early, but I do think there have been some subtle but significant steps forward in the last couple of years. The most important being: machines are capable of doing certain things better, in ways humans can't easily comprehend. I think this is a glimpse into the future where the "creativity" aspect of our brains will become commoditized, also without us realizing.
This doesn't mean machines will take over, just like machines didn't take over the world because they have better memory. But I think this will result in many humans taking advantage of this aspect to exert influence on the rest of humanity.
This is still true, and in the eagerness to dismiss "memorization" as a thing of the past you overlook the obvious. For example, anything you care to know about, say, C++ programming or quantum field theory is available to you on the internet. But does that mean you can write a C++ program as if you had already learned it? What if you want to write a C++ program and you have to look up everything? You will do a very poor job, if you manage at all, and it will take a lot of time.
So yeah, until looking stuff up on the internet is as quick and effective as looking it up in your brain (the quick may happen, but the effective I don't think so), it is still a very worthy skill.
There are some other things that I think are less valued these days:
- Informed speculation counts for less when you could do a search instead. Maybe a good thing?
- Cleverness counts less when there are memes everywhere. Jokes are ever more cheap and disposable.
On the other hand, good judgement of what you find still counts.
That's a good point, because I think this "ability to speak multiple languages" will become commoditized too through technology. You already see pieces of technology that enable you to communicate across languages in real time (although they're clunky and not accessible enough at the moment).
I agree that the "realtime" aspect would be the last wall standing to distinguish humans from machines, that is, until humans find ways to inject circuits into the brain (which multiple entrepreneurs and scientists are already exploring).
You give Google search too much credit. What is a click away is still largely superficial information on any topic, and the popular narrative (or a specific data set) is often extremely biased or downright incorrect.
I don't see the problem if these problems can also be solved without relying exclusively on knowledge.
As an outsider, I'd say that Google's hiring strategy is pretty successful.
Knowing what is credible and what is not on the internet is a skill in itself. If you don't have that skill, you'd likely be telling people all about how Bush did 9/11 or how Hillary killed a DNC employee.
What if technology similar to AlphaGo can be generalized to domains with imperfect information (Libratus from CMU recently beat top Poker players. DeepStack which is NN-based achieved a similar feat.) and to other domains (DeepMind is working on Starcraft.)? What are our remaining competitive advantages against machines?
What are future geniuses supposed to be like or to do (assuming your presupposition)?
While it's a lot easier to get superficial knowledge, the ability to do something deeply is an incredibly valuable skill that is powered by memory. Heck, even Euler, one of the greatest mathematicians of all time, had a phenomenal memory and could recite any verse from the Aeneid at will. I don't doubt that memory was a critical component of his success.
So, explain this to me like I'm 5. If Google search means you don't need to remember any knowledge any more, why is it that possession of an English-Greek dictionary does not render one capable of speaking both languages fluently?
You can look up all the words you want. Assume you have a grammar of each language at hand, also. Do you think you would be able to speak fluently or understand a fluent speaker of a language you don't know?
I think the answer lies less in "I will be the absolute best", and more along the lines of "I will do it better than anyone before me". And sometimes, even "I will do my best" is an excellent reason for doing things.
I don't think Go players were in it due to a need for expertise that machines could not fulfill until now. And if people nowadays keep practicing with swords [1] several centuries after the invention of firearms, Go players will do just fine.
[1] https://en.wikipedia.org/wiki/Historical_European_martial_ar...
Man, I think this is absolutely it. I feel very sorry for someone who would pin their happiness on being the best in the world. It may not be within your ability! I simply cannot run faster than Usain Bolt. So that's not the game! The game is just to do my best.
It's kind of like what happened to painting when the era of photography began.
Machines taking over human disciplines still kills the disciplines, some may just not be aware how this type of death works.
That said, I disagree with most here in that I don't think machines are anywhere close to taking over creative human disciplines that don't follow incredibly specific rules, like boardgames.
Look into what happened to the popularity of most bullshido martial arts after the UFC (MMA) blew up.
Whereas in Go, everyone's playing the same game by the same rules, the machines are just way better at it.
What I've seen is a move away from attempting to understand why a given position was faulty in the first place, instead running a game through an engine and using that as an arbiter of correct/inadvisable moves. While effective, it just feels sterile. Give me romantic play or burn my board.
I could see myself losing the passion for software engineering and design if an AI can do it better. That would have to be a general AI, and hopefully another couple of decades away.
I wonder if I could enjoy movies or books written by an AI. Scary to think about the psychological manipulation it would be capable of, especially if it lives inside a Google or Facebook datacenter.
EDIT: this is not the same article I saw before, but same topic: https://futurism.com/googles-new-ai-is-better-at-creating-ai...
You are expressing a highly misguided viewpoint about what it means to be in touch with beauty.
Nobody has the capacity to build the Pyramids or the Taj Mahal today. So what? Architects haven't shut shop.
Indeed, much of the current excitement around AI playing programs lies in the fact that computers are too slow to do the exhaustive brute force tree search either; they need a lot of very clever valuation and pruning techniques to explore more of the tree in less time. It's just a different form of cleverness than what humans do, and there is a lot of feedback between the two communities, with human players helping programmers identify good heuristics, and then computer players uncovering new possibilities for humans to incorporate into their play.
Go is a lot harder than you think for machines to play.
This goes for every turn based game and a lot of card games as well.
If you just had perfect memory you'd be a formidable chess and cards player.
Anybody paying attention to top-level chess knows that it has turned into mind-bogglingly boring forced draw lines due to engine analysis. I've seen super GMs argue that analysis is so deep now that "e4" openings for white may be unplayable due to how rapidly black can equalize. Romantic play has been all but squeezed out of chess, which is why there has been renewed demand for blitz chess and murmurs that it may one day supplant standard chess as the main World Championship.
The replies miss that improvements in, say, automobile speed don't impact how marathon runners run their races. But improvements in AI do modify how cognitive (rather than physical) games are played. There is a trade-off that is unavoidable. I imagine Go will now become over-analyzed just like chess, where the top players memorize spreadsheets full of opening moves.
Fortunately, there are variants of chess like Zhouse, which still appear too complex for engines to dominate every position (although they will defeat any human), and for which nearly every move is still romantic.
I also doubt that pros will try to memorize openings like chess players would; they're very different games and this sort of opening memorization is way more important and efficient in chess than in go.
Yes, we do not completely understand the workings of current advanced neural networks either, but the effects are still contained, as they are not general enough to cause unintended impact outside their domains.
This could have started to change: a recent Google paper, AutoML, allows the machines to design themselves to suit each task. [1] A future advance could allow the machines to pick and learn to do new tasks that are helpful to accomplish a given high level mission. Therefore, chances of unintended consequences become much greater.
With human involvement only at the meta level, deep understanding of the generated implementations becomes more challenging and, in highly complex domains, perhaps impossible.
The major issue is, without a moral core that closely aligns with humanity's evolved morality, there will be moves that advanced AIs come up with that we deem abhorrent, and sometimes unforeseeable, yet they perform them innocently and we only find out the consequences once it is too late.
[1] https://research.googleblog.com/2017/05/using-machine-learni...
The games are apparently very interesting to study.
From the self-play games they've released, it looks at times like a different game entirely; Game 2 in the current crop, for example.
However, it is disappointing that the code and model will not be released publicly after Alphago finishes competitive play. It's one thing to say that an apple, once dropped, will fall to the ground, but another to describe its motion as 1/2at^2 + vt.
Not only do you have the principle and the formula behind it, but also a little physics simulator tool! At this point, it is hard to complain.
Actually, it's very easy to complain. If they released the model, people could generate arbitrarily many self-play games instead of depending on DM to release 50, could create arbitrarily many tools using the model instead of depending on DM to create and maintain a single tool, and could verify the results of training a clone based on even sketchy descriptions of the methods instead of depending on DM releasing a detailed enough whitepaper and then guessing at whether a reimplementation is competitive or not. DM is only being 'generous' if you ignore how releasing the model is easier for them and superior for us in every way.
Should be enough, no?
I have always liked playing through great games, both Chess (using the book The Golden Dozen) and Go (modern games and the ancient Shogun Castle games).
I have some history with computer Go. In the late 1970s I wrote a Go playing program in UCSD Pascal that I sold for the Apple II, and also for a lot more money I sold the source code to a few people who wanted to experiment with it. DeepMind's AlphaGo is a great intellectual and technological triumph and I agree that it is an example of future AIs teaching us and working with us.
A little off topic, but Peter Norvig gave a nice talk a few weeks ago at the NYC Lisp Users Group where he talked about the future of collaboration with AIs and also that the ability to work effectively with AIs, adding human insights, will be an important future job skill.
What can be done to prepare for the end of human supremacy, and quite likely human civilization? For instance, as a software developer it feels almost pointless to continue improving at my craft if AI systems will surpass me within 2-4 years (even if the pessimists are right about it taking 5-10 years, that's still an awfully small timeframe).
Likewise, it feels a little pointless to work on any endeavor - technical or otherwise - including but not limited to AI research itself. From a purely practical standpoint just getting up to speed on AI research will take a solid 5+ years, and from a moral vantage point I'm not sure that's even a defensible career given the obvious and hugely negative implications that field will have for human civilization.
Even in artistic endeavors, humans will soon be second fiddle to our own creations - so it's not like there's any "point" to starting down that path either.
Is it time to just engage in a hedonistic, nihilistic fest of gluttony and "fun" while that's still possible? Honestly, news like this just makes me consider ending it all: it feels like none of us will have much of a future before long.
Walk into any real world business today. There's a huge amount of need for humans, because fundamentally business is about trust not productivity.
Check out: UC Berkeley's Center for Human-Compatible AI, led by Prof. Stuart Russell, a co-author of the field's standard textbook. [1] He just gave a TED talk on the issue [2].
Several other noted researchers in AI are working on the issue as well.
For a short primer: https://futureoflife.org/background/aimyths/
[1] http://www.openphilanthropy.org/focus/global-catastrophic-ri...
[2] https://www.ted.com/talks/stuart_russell_how_ai_might_make_u...
Oh lord. Four years ago was 2013. Was there such a jump from 2013 to today that makes you or anyone claim that within just 24 months machines will actually program better than developers, when there isn't so much as a proof of concept of that yet? Barring a major and unexpected breakthrough, you can sleep soundly: no machine will take your job just yet.
(Perhaps applying machine learning to code review might be useful, to spot bugs? The problem would be getting good data to train it.)
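As a toy illustration of that parenthetical idea (heavily simplified, and with made-up training examples standing in for the hard-to-get labeled data), bug-spotting can be framed as text classification over code tokens. This sketch uses a tiny naive Bayes classifier with Laplace smoothing; nothing here resembles a production tool.

```python
import math
import re
from collections import Counter

def tokens(code):
    """Crude lexer: identifiers plus comparison/assignment operators."""
    return re.findall(r"[A-Za-z_]\w*|[=<>!]+", code)

class NaiveBayes:
    def __init__(self):
        self.counts = {"buggy": Counter(), "clean": Counter()}
        self.totals = {"buggy": 0, "clean": 0}

    def train(self, code, label):
        for t in tokens(code):
            self.counts[label][t] += 1
            self.totals[label] += 1

    def classify(self, code):
        def log_prob(label):
            # Laplace-smoothed log-likelihood of the token sequence.
            n, v = self.totals[label], len(self.counts[label]) + 1
            return sum(math.log((self.counts[label][t] + 1) / (n + v))
                       for t in tokens(code))
        return max(self.counts, key=log_prob)

nb = NaiveBayes()
nb.train("if x = None: return", "buggy")     # assignment where == was meant
nb.train("if x == None: return x", "clean")
print(nb.classify("if y = None: pass"))      # prints buggy
```

With two training examples this is obviously a parlor trick; the point is the framing. The genuine obstacle, as said above, is a large corpus of diffs reliably labeled buggy/clean.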
Even 10 years seems impossibly soon.
What happened was new mathematical tools and new hardware were developed, and suddenly it was all too possible.
It's clear that with our current tools, general AI is out of reach, and new tools must first be developed. But because nobody has any idea what those new tools are, it could happen overnight or over 100 years.
Secondly, you're looking mostly at the negatives of AI advancing, not the positives. Some of those:
We will likely use AI to enhance ourselves rather than just have it take over.
Such merging may lead to the end of death. At the moment, sure, you can develop away and then age and die; the AI thing may be jollier.
Robots at some stage will be able to do the work, so you should be able to have a hedonistic gluttony fest if that's your thing.
- AI that builds an understanding of a large legacy codebase, and is able to diagram & explain it
- using that to refactor convoluted logic and reduce complexity
- using that to train something that can write code from scratch or rewrite an existing codebase in a different language
Seems like a billion dollar business, as the world develops more and more large and shitty codebases with high maintenance costs.
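The first bullet above doesn't even need much AI to get started: classic static analysis recovers a lot of structure. As a small sketch, Python's own `ast` module can map which function calls which, a crude seed for the kind of "diagram & explain" tool described. The snippet being analyzed is a made-up example.

```python
import ast

SOURCE = """
def load(path):
    return open(path).read()

def parse(path):
    text = load(path)
    return text.split()

def main():
    print(parse("data.txt"))
"""

def call_graph(source):
    """Return {function_name: set of directly-called names} for a module."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for sub in ast.walk(node):
                # Only simple `name(...)` calls; method calls are skipped.
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    calls.add(sub.func.id)
            graph[node.name] = calls
    return graph

print(call_graph(SOURCE))
# e.g. 'main' calls 'parse' and 'print'; 'parse' calls 'load'
```

Feeding a graph like this to a renderer (or a model that explains it in prose) is the easy 10%; handling dynamic dispatch, reflection, and cross-language boundaries in real legacy code is where the billion-dollar difficulty lives.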
That's the question you're asking.
I'm fascinated to see what the next step for this AI is. Anyone care to speculate what a system like this could most readily be applied to?
I'm not sure how true this is. It's pretty rare to find someone who is a genius in more than one domain. Einstein was famously offered the presidency of Israel. Sure, he could probably do well in other sciences, but he was smart enough to know he could not apply his genius to unrelated domains.
Meanwhile, Antarctica may crumble. How about putting effort into solving THAT problem, with all your technology & knowhow Google?