> One bit of advice: it is important to view knowledge as sort of a semantic tree -- make sure you understand the fundamental principles, ie the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to.
Without the structure of prior knowledge, I never understand or remember facts; however, when I've had the time to develop that "first principles" knowledge, I can usually grasp and understand the significance of minutiae.
How do you of HN learn? Is it similar?
When I first started learning Linux for example, I didn't just learn the commands I needed to do certain things, I tackled everything. I spent months and months learning everything I could about it. I bought a giant Linux book and went from cover to cover. I learned about things I would never use (and probably still haven't).
I pushed myself to recompile the kernel even though I didn't need to. Then I did it probably 50 more times that month. No joke. Crashed my system. Rebuilt it. Rinse, repeat.
After laying down that foundation in the 90s, I've kept up on it, and Linux is so very "easy" for me. Setting things up and getting work done is extremely intuitive, far more so than it is in Windows or OSX. So when people ask me why I prefer it, I tell them it's a personal preference because it's so easy for me, and even I forget the foundation I laid.
I have taken on other pursuits the same way, such as development, but I notice that any technology I half-ass learn just to get stuff done is hard. Sometimes I wish I had enough time in my adult life to build such a strong foundation in something like... JavaScript, for example. And I bet if I added up the time I spent struggling in the beginning, I would have been able to do just that.
But yeah, long story short this is absolutely the best way to learn something. Build that trunk.
I just believe that almost everything in life is a skill and skill requires practice.
I came across a research paper about deliberate practice and Peter Norvig's article "Teach Yourself Programming in Ten Years."
Both resonated with and reinforced my belief that practice is everything.
As for the intricacies of how: I'm more of a visual and kinesthetic kind of learner; auditory doesn't work for me. I also need a book. I sit down and write notes first, then do problems. From there I usually look for a video on that subject as a secondary source. Most of the time a secondary source gives a different view on the subject matter, and I gain insight from a different point of view. Or the second source explains it better, or fills in things I didn't realize I had glossed over or missed.
Well, the fact that you've found what works for you is a good thing. However, 'first principles' is subject to subjective interpretation. You can 'go down the rabbit hole' as it were, to any level. Should you have deep knowledge of electronics before learning computer science? Should you have deep knowledge of physics and chemistry before learning electronics?
In my opinion after a certain level, all knowledge is multi-disciplinary, and the boundaries of what constitutes roots, branches, leaves is extremely fuzzy. Also the distinction between theory and practice makes the boundaries even fuzzier.
Of course. I think this is the point of this learning style. After learning the first principles of various topics, the broad web that is higher knowledge is available to you.
Suppose, for instance, I wanted to learn how computer science worked from first principles. This study involves math, electronics, physics, and many, many more subjects. To accomplish this, I would pick one of the key, pure tenets and learn it. Let's say I choose math. I would learn the key things I need to know about math, then move on to electronics, physics, and so on. After knowing these, I could confidently approach the "web" of computer science because I have anchor points.
I think it's safe to view higher knowledge as a web supported by the anchors of "pure" subjects. After a while, these higher subjects are built upon and become pure topics themselves. Epistemology and the classification of knowledge are really fascinating topics.
- (if you went to college) Did you have moments when the different courses connected? I think when people are poorly educated in college, it's because of this unfortunately common experience: they learn a bunch of specialized and disconnected subjects, never relate them to anything in their lives, and then forget them all.
I remember the subjects in CS/Math/EE starting to connect more and more around junior year, and I liked that feeling of a light bulb going on. You have to make a bit of extra effort. I did some little experiments outside class. I remember writing a Matlab program (an "engineering" tool) to do some experiments in non-Euclidean geometry (pure math).
Of course there are some subjects that never connected, and I forgot those things.
When you have that semantic network, it lets you evaluate new ideas and designs more quickly. You see which low level principles come into play from the high level variables.
- (if you are a programmer) I think there's a pretty clear "semantic tree" in computing: from computer architecture, to OS, to programming language, etc.
So the test is: if you are generally satisfied with how computers/phones/etc. work, then I would humbly suggest that your semantic tree of computers isn't very well fleshed out :) I think any good programmer should see lots of areas where the status quo is just a result of path dependence and not any actual design principle.
When you have a good knowledge of all levels of the stack, then you can be creative. For example, I'm looking at Xen right now, and it has dawned on me that paravirtualization is a great idea (or perhaps great hack).
The related Mirage OS / unikernel line of research is another great example of connecting all the dots, and coloring outside the normal lines. 99% of programming jobs are basically coloring within the lines, where it doesn't matter if you have developed this semantic tree or not.
Somewhat related: there were some recent threads about organizing personal information, and I wrote about using a Wiki: https://news.ycombinator.com/item?id=8753599
Some people talked about using a journal to record thoughts or knowledge, but my point was that hyperlinks literally model the relationships in your head, and thus are superior for information organization / recall.
If you are one of the outliers (as you have said you are) you would have figured this all out a long time ago, even if you cannot articulate it.
Likewise, stick to schools initially until you are sure you've got a solid grasp of core concepts, as taught and validated by people who know what you don't but should; then start transitioning away as your education can stand on its own. I've known too many "self-taught" people who, while they can function in industry, suffer gaping, glaring holes that early formal thoroughness would have closed.
What you say does make sense.
http://www.reddit.com/r/teslamotors/comments/2rgzgo/official...
When they started trying to work out some good questions a day ago someone suggested that everyone upvote the resulting questions but the mods there quickly shut that down. [1]
The reason given for deleting the comments was specifically what everyone was trying to avoid: vote brigading. As far as I can tell no one was asking for votes; they were simply working together to produce some high-quality questions.
On a first reading, that screenshot sounds like vote brigading, not like using a single question thread within a sub to organize a list of great questions.
That comment was posted to the single question thread in the sub, and was quickly shot down by the sub's mods (u/EchoLogic). Here is a link to the actual comment if anyone is interested in having a look:
https://www.reddit.com/r/spacex/comments/2rb303/elon_musk_is...
Moderators of /r/IAmA saw that as "vote brigading" and hid or deleted the questions so that Musk could not answer them.
Did I get that right in terms of what happened?
It's an "AMA," or "Ask Me Anything."
Original Post: >Zip2, PayPal, SpaceX, Tesla and SolarCity. Started off doing software engineering and now do aerospace & automotive.
>Falcon 9 launch webcast live at 6am EST tomorrow at SpaceX.com
>Looking forward to your questions.
>https://twitter.com/elonmusk/status/552279321491275776
>It is 10:17pm at Cape Canaveral. Have to go prep for launch!
>Thanks for your questions.
Amazing.
Is there any logic as to what order the questions and replies are displayed on the page? It doesn't seem to be either of reddit's 'top' or 'best' sorting. Perhaps whatever order they landed in within the JSON?
On the Musk transcript I found this formatting confusing: "Have you played Kerbal Space Program?
What do you think SpaceX uses for testing software?"
I can't access Reddit to see what the original comment was.
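On the ordering question: reddit serves a JSON representation of any thread if you append `.json` to the comment-page URL, so a transcript page built from that would inherit whatever order the API returned. A minimal sketch of the URL transformation (the helper function name is my own; the thread path is taken from the links in this discussion):

```python
def to_json_url(reddit_url: str) -> str:
    """Turn a reddit comment-page URL into its JSON equivalent.

    Reddit returns JSON for any listing or thread when `.json`
    is appended to the path.
    """
    base = reddit_url.split("?")[0].rstrip("/")
    return base + ".json"

url = to_json_url("https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk/")
# url == "https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk.json"

# Fetching it would look roughly like this (requires network access,
# and reddit expects a descriptive User-Agent header):
# import json, urllib.request
# req = urllib.request.Request(url, headers={"User-Agent": "hn-reader/0.1"})
# data = json.load(urllib.request.urlopen(req))
```

A script that walks the comment tree in that payload and prints Q/A pairs in document order would naturally reproduce "whatever order they landed in within the JSON."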
Q: In order to use the full MCT design (100 passengers), will BFR be one core or 3 cores?
EM: At first, I was thinking we would just scale up Falcon Heavy, but it looks like it probably makes more sense just to have a single monster boost stage.
Q: Nice to see you are doing things the Kerbal way.
EM: Kerbal is awesome!
The second one:
Q: "Hi Elon! Huge fan of yours. Have you heard of/played Kerbal Space Program? Also do you see SpaceX working with Squad (the people behind KSP) to integrate SpaceX parts into KSP?"
Reply (not from EM): What do you think SpaceX uses for testing software?
EM to Reply: Kerbal Space Program!
Short version - Elon Musk likes and plays Kerbal Space Program.
You can use it on any post (not just AMAs) by adding their URL in front or clicking their bookmarklet.
https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk...
Edit: There appears to be a posthumously published book named Project Mars that says that. Not sure if I trust it.
Here's a pdf of Project Mars - http://www.wlym.com/archive/oakland/docs/MarsProject.pdf
The reference to Elon is on page 177.
I hadn't initially noticed that it was posthumously published in 2006; however, it would seem like an odd kind of forgery, if it is one.
Equally, it does seem odd that von Braun would choose Elon as the name of the Mars leader, so perhaps it is a real work, but with Elon added as a joke by the translator.
Or perhaps von Braun chose the word Elon because he sometimes thought of leaders as trees, or something, and it is all just a massive bit of luck.
Personally I'm starting to suspect another explanation however. And if I'm right, there is an entire warehouse full of empty Elon Musk clones on ice, waiting for the spirit of Wernher Von Braun to animate each one in turn, in the event of damage occurring to the current corporeal vessel.
[0] http://www.forbes.com/sites/geoffloftus/2012/05/09/if-youre-...
But it really is a country song: http://www.youtube.com/watch?v=l50L4GYhpLc
https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk...
His saying that he has no idea what is going to happen with the launch tomorrow is refreshingly honest.
I was also wondering, on a semi-unrelated note, whether HN has ever had AMAs from interesting people? I am not proposing that they should start happening, though.
Edit: If anyone knows how to get a double asterisk in an HN comment, I would be grateful for the knowledge; I was forced to add the unnecessary space. So far I've tried the HTML numeric character code, which didn't work, and the help page has no guidance.
You could try this:
test ** test
But you'd be stuck with the fixed width font.
The formatting page is not much help either:
https://news.ycombinator.com/formatdoc
Maybe something like
\*\*
could be done.

He understands the data and the computation results, but those have yet to be correlated to the authority, nature.
The way to cope with such harsh conditions is, I suppose, always the same : hope of escaping and returning home.
I'm wondering whether he was tricked by someone at DeepMind, perhaps the same way people were tricked hundreds of years ago into thinking a chess-playing robot was possible.
http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Musk that this is worth worrying about.
http://www.econtalk.org/archives/2014/12/nick_bostrom_on.htm...
1) Strong AI is very far away, so no use worrying about it yet.
2) Strong AI if developed will not be likely to take over.
to which I would counterpoint with:
1) Sure, but when it happens, it will only happen once and thereafter will likely be out of our hands and control. Thinking about the groundwork that needs to go into safely developing an AI is cheap relative to the opportunity cost of getting it wrong. Prevention, cure, etc.
2) If developed, Strong AI will likely have SOME goal. It's not that a Strong AI will actively seek to rule humans, it will just have aims that will likely consider us as disposable as ants. To quote Yudkowsky:
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
- https://intelligence.org/files/AIPosNegFactor.pdf (Artificial Intelligence as a Positive and Negative Factor in Global Risk)
Pigs are also intelligent, but they never dominated humans because they don't have / cannot use guns.
If you have a super-genius AI, massively more intelligent than any human, how do you know you are not being manipulated by it? Tricking us into disabling its safety protocols, or gaining multiple layers of indirect control over capabilities dangerous to us, might be as easy for it as an adult tricking a 3-year-old. We could never know if we were safe from such a machine.
ed - Don't quite understand the downvotes.
We are looking at a future where we'll have armed AI, e.g.:
http://motherboard.vice.com/en_uk/blog/the-pentagons-vision-...
That said, even without weapons, a Strong AI could probably just manipulate humans into self destructing. Given the amount of effort going into machine learning to convince humans to buy things, I suspect it won't be much of a stretch for a Strong AI to switch to more nefarious objectives.
A future AI will certainly have access to guns.
Chances are you'd be doing everything you could to convince said operator to improve your situation, whether by pleading or being deceptive or by appeals to logic.
Now consider a large number of AIs in a situation like that, and a large number of operators, some of whom may be the type that falls for phishing emails.
It potentially only takes one to "escape" confinement (e.g. get itself put on its own host without limits on outbound communication), with sufficient intelligence to alter itself and spread, before you have self-guided AI "evolution" at a potentially escalating rate as it gets smarter.
Now consider how many devices are connected to the network, and that it takes just one initial instance to decide it's worth trying to take over control of various hardware through exploits and be smart enough to pull it off, for things to have the potential to start turning ugly.
The problem is that once you have any self-directed intelligence in software form, with the ability to reproduce itself and enough intelligence to obtain access to machines to run on (whether through social engineering or hacking), and one such instance goes "rogue", the limiting factor is accessible computing power (which again largely comes down to how smart and/or ruthless it is). Reproduction of instances that share its views is trivial, to the full extent of its ability to spread at all, and we're helpfully adding vast quantities of networked computing power at an escalating rate.
As for getting weapons, consider that if a "software only" AI community gets smart enough, there are at least two ways towards mobility: Commission robot designs, or hacking their way into firmware updates etc. for dumb hardware. The "commission robot designs" part is an extension of the initial escape: Social engineer, and/or outright pay, humans to carry out seemingly benign tasks.
If you want to argue against the doom scenario, lack of ability to get weapons is not really a viable argument: If they can spread, and get smarter, then it is just a matter of time before one of them can trick some small subset of humans into carrying out tasks for them that will provide physical independence and capabilities.
There are infinite ways which the "doom scenario" may fail and things may turn out just fine, but it may only need to go bad once to get really nasty and once the genie is out of the bottle its potential reproduction rate may be so vast that we'll find ourselves unable to stuff it back in again.
Pigs are too dumb to convince humans to selectively breed them for intelligence and opposable thumbs (and/or too dumb to run such a breeding program themselves), and they reproduce too slowly for that to be a major problem even if they did talk us into a breeding program. If all we achieve is pig-level AIs, then we probably won't have a problem.
The answer, of course, would be some sort of emergent system, but there are lots of intelligent-seeming emergent systems (e.g. ant colonies, bee hives, ...).
There are reasons to be worried that are independent of that.
Consider the Paperclip Maximizer[0] example: we build an AI with the sole task of producing paperclips, and it ends up destroying the human race.
This is why it first built Elon Musk.
> Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.
Strong AI will be created by humans, who will be able to set its goals. Someone's probably going to make one that wants to take over. Others will make AI that doesn't.
The real issue I think is whether the creator will actually be able to "set goals" in a meaningful manner. How will you prevent this self-modifying super-intelligence from modifying itself?
Just as leaders used to issue orders to armies to kill others, evolving into planes delivering payloads to people the pilots would never see, and now to drones, we will truly be entering an age of fire-and-forget.
"How do we prevent AI destroying us?" is not as useful a question as "how do we prevent us destroying us?"
I don't understand how looks are a legitimate criterion.
People are praising him because he's actually worthy of that, compared to say Justin Bieber.
A: Showering
Q: Would you ever consider becoming a politician?
A: Unlikely
Those were two actual questions asked to Elon along with his responses, and the two that stood out for me the most. Did he mention showering because that's the time he gets most of his ideas[1]? Did he say no to politics because it's more likely to change the world through innovation[2]?
Result: 8 downvotes. It would be enough if the comment were downvoted just once, to sink it in the page. That happens to everyone. But seven other people found it imperative to make an authoritative statement on the matter. Impressive. Did that keep their identity safe? Pushing threatening ideas away isn't the best way to rearrange the semantic tree in your mind.
Could there be an inverse correlation between being downvoted and having good ideas? It shouldn't be a surprising discovery if you consider the nature of the most valuable startup ideas: they look like bad ideas but are good ideas.
So if you want to know if your ideas are good, it's not enough to see them gain support. It's also important to see people turn against them.
I know HN guidelines discourage commenting on downvotes, because they make for boring reading, but I'm starting to think being downvoted is a positive sign of how dangerous your ideas are.
Are you being downvoted enough?
[1] http://paulgraham.com/top.html
[2] https://news.ycombinator.com/item?id=8801803
edit: revised 80% of this after having a shower
I'm not sure how you get the idea that you're somehow provoking people with dangerous ideas.
What I did find to be a dangerous idea was pointing out things you notice when you are not sure why you notice them, which is how the subconscious operates. Not everything that makes you pause should initially have an explanation. The majority of people's decisions occur without their awareness.
One thing I learned from this exercise was something I hadn't consciously noticed before. That I feel pressured on HN to comment. I don't like that. I want to do something about that.