EDIT: just wanted to add, I'd love to ask the marketing team behind this site how they came up with "quest". What the fuck is a quest, and how is it different from all of these researchers just doing whatever they were already doing, anyway? MIT isn't funding these labs directly in most cases and the work has been ongoing; it's just a branding exercise.
It is still decent in bio, some of the graduate institutes, etc., but the core engineering school isn't the MIT of engineering anymore. My friends who work there largely agree.
They do what they do exceptionally well but aren't THAT different.
It's all branding - and I say that having taught there for three years.
Love wandering those halls.
(current undergrad, I'm sure you have more insight into what's going on)
Someone has to grab the steering wheel, and it can be MIT and other responsible adults, or it could be a bunch of hare-brained [insert rude word] folks of the sort that hijacked neuroscience (HBP, anyone?) or, God protect us, the European semantic web community. They are in the offices of the great and the good as we speak, extracting your tax dollars.
Buckle up, Serious People are going to rediscover the fundamental Hard Problems and relocate the current Hot Topics into their appropriate ontologies.
I'm not aware of any novel abstraction that led to a solution to these intractable problems. The problems became tractable because of silicon and incremental algorithm improvements.
This is another way of saying: yes, intractable problems are being made tractable, but these problems aren't stepping stones to AGI.
https://www.theatlantic.com/technology/archive/2018/01/the-s...
We are far away from understanding intelligence in all directions: top-down, bottom-up, neurologically, psychologically, logically, mathematically and, last but not least, philosophically.
There's optimism at the moment because we are doing more stuff with more annotated data (the annotations providing the semantic grounding, as in "Not hot dog" vs. "Not in category 339492-883764-399274"). The key difference this time is access to (and processing power for) sufficiently large "training sets" (read "samples") for deep-learning algorithms (read "statistical models"). From an AGI point of view, this is nothing but an expensive parlor trick, because the "intelligent" part is the annotation, not the categorization after the fact.
Which goes to show that a) our machine learning models are dumb as bricks and b) they are as far from AGI as worms are from building a rocket to go to the moon, where their god lives (see all those holes up there?).
I think they are saying... intellectuals are going to change the way they guide their research or formulate their hypotheses based on this work at MIT. This person believes that this project signifies a paradigm shift in the fundamental origins of thought; hereafter nothing will be the same.
I think this person is trying way too hard to sound smart or poetic.
So all the current excitement around super-intelligent AIs, and whatnot, will go the same way as it has the previous times we all got excited.
> A key to the success of MIT IQ will be identifying industry allies who share our passion for tackling big, real-world problems. That work is already underway: we have forged a number of collaborative projects with industry, such as the MIT–IBM Watson AI Lab.
However, the biggest incentive to go into industry is income. Sure, your research and your department can do more with more funding, but wouldn't you personally still make about the same as a grad student/research scientist/professor?
UC Berkeley launches Center for Human-Compatible Artificial Intelligence
http://news.berkeley.edu/2016/08/29/center-for-human-compati...
Ugh, this kind of corporate-speak is nauseating. Can anyone understand what this "quest" actually entails?
If we want to solve this problem we're going to have to reverse engineer intelligence. Otherwise we're just going to continue to run into walls, either by trying to brute-force our way from the ground up while ignoring lessons from biological intelligence, or by philosophizing from the top down.
At least not with decades of manual supervision.
I've posted this before but here is my proposal: https://scrollto.com/life-a-universe-simulation/
What I propose is a minimum viable digital environment that can support the creation of self-organized Turing machines that feed off their environment. What this really means is coming up with a digital environment that can support the evolutionary process. Evolution requires vast space, vast time (in this case -- clock cycles), and principles that allow for both storage and movement of information. The storage and movement of information is accomplished most simply by roughly emulating the mass/energy conservation/conversion laws that we have in our universe. With just collisions that form stationary quasiparticles, which can also annihilate to re-form the moving fundamental particles, universal computation is enabled.
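To make the collision rule concrete, here is a minimal toy sketch of the fusion half of such an environment. This is my own hypothetical illustration, not the proposal's actual implementation and not Toffoli/Fredkin's billiard-ball model: particles with velocity +1 or -1 move on a periodic 1-D lattice, and a head-on pair fuses into a stationary quasiparticle (two mass units at rest), so total "mass" is conserved.

```python
def step(particles, size):
    """Advance the toy collision gas one tick.

    particles: list of (position, velocity) pairs, velocity in {-1, 0, +1}.
    size: number of cells on the periodic 1-D lattice.
    """
    # Move every particle one cell (stationary quasiparticles stay put).
    moved = [((p + v) % size, v) for p, v in particles]

    # Bucket particles by cell so collisions can be detected locally.
    by_cell = {}
    for p, v in moved:
        by_cell.setdefault(p, []).append(v)

    out = []
    for cell, vels in sorted(by_cell.items()):
        # Head-on collision: each (+1, -1) pair fuses into a stationary
        # quasiparticle, stored as two mass units at rest -- mass conserved.
        while +1 in vels and -1 in vels:
            vels.remove(+1)
            vels.remove(-1)
            vels.extend([0, 0])
        out.extend((cell, v) for v in vels)
    return out


# Two movers launched toward each other fuse after two ticks.
state = [(0, +1), (4, -1)]
for _ in range(2):
    state = step(state, size=10)
print(state)  # -> [(2, 0), (2, 0)]: a stationary quasiparticle at cell 2
```

The annihilation rule (a quasiparticle breaking back into movers) would be a second local rewrite in the same loop; the point of the sketch is only that local, mass-conserving collision rules are easy to state and iterate.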
Toffoli and Fredkin discovered the power of collision-based computing decades ago. There is a lot of literature and good results they derived on the power of these types of systems.
Let's create life the only way we know it formed -- evolution. It's far more elegant, and less over-engineered, than trying to unravel how chaos produced competitive results across a million-deep evolutionary ancestor tree.