There are lots of fields where fundamental theories are relatively weak or have poor predictive power. Macroeconomics, climatology, nutrition, etc. Basically, we don't have real Knowledge. We have a bunch of data and a bunch of theories. Some of the theories are predictive but relatively useless, because they aren't very general or don't apply to real-life scenarios. We know that certain nutrients have some importance. We know that restricting caloric intake leads to weight loss. We know that money supply, inflation and other things are linked together in various ways. But the theories don't answer the questions we want them to with any kind of certainty. Still, we sometimes need to make decisions, and some knowledge is better than none, so we go and find experts anyway. There are people who are experts. They're experts in the study and they are aware of our knowledge in the field, such as it is. But they don't have real answers, because there just aren't useful answers to be had at this point. All of medicine was like this until pretty recently.
When Darwin published "The Origin of Species", evolutionary biology came into being as a different kind of field. One where the knowledge was real and the theory predictive. The theory was fundamental and strong. Darwin could make claims & predictions with a lot of confidence. Subsequent biologists could keep making predictions, and when new discoveries (like genetics) were made, they were found to be consistent with the predictions of evolutionary biology. In fact, if they hadn't been, a careful researcher would probably assume that the mistake was in their own conclusions, not in Darwin's. So if chimpanzees are closer to humans than to gorillas, we share common ancestors not shared with gorillas, and the distinction between humans and apes (if we want to keep apes as a category) is morphological (which is allowed by Darwin) rather than one of proximity on a family tree.
Darwin was careful. He didn't publish until he was sure. If he hadn't been sure, he wouldn't have published. There are lots of Darwins in every one of the former type of field. They haven't discovered real Knowledge that can tell them what to do in an economic recession or what people should eat, so they shut their mouths and keep looking. They are still experts, but their expert opinion is "I dunno." That doesn't register as expertise, so we go on to find someone who will explain about aggregate demand, anti-inflammatory diets, carbohydrates or something like that.
Development methodologies, executive compensation, distributed companies etc. are in the category of things that we don't have real "scientific" knowledge about. Most business-ey knowledge is like that.
Now it sounds like I'm bashing people who talk about this stuff, and I don't mean to be (hence my disclaimer at the start). I'm just pointing out that our knowledge in different fields is different. Joel Spolsky is very insightful in his essays about the software industry, for example. But there are certain people who are comfortable with anecdote, generalities, and assertions that may turn out to be untrue. There are certain people (like Darwin) who are not. If we're talking about development methodologies, the people we hear from are self-selected. They are comfortable making grand statements, manifestos and such, even though they may be wrong.
That's still useful and certainly interesting, but there is a big category of people we aren't hearing from and they are relevant to the discussion.
The next time you roll through Austin make sure to look me up.
And thanks for the compliment :)
Somewhat relatedly - there's another phenomenon where people may be quite aware of the limits of their knowledge, but deliberately choose to hide it to achieve some objective (usually personal gain, but it could also be in service of some organization or mission), because they know that their audience is more likely to believe confident people. You pretty much have to do this to found a startup, because all startups are inherently risky and uncertain, and yet few people will follow you if you seem uncertain.
likewise, there's no guarantee that a doctor can heal /you/. but you go to the doctor because he is aware of the various means of achieving health. this is different from evolutionary biology, where firm predictions can be relied on.
same goes for lawyers; they can't guarantee a victory, but they are aware of all the methods in the courtroom. These are professions and fields of study whose variables are humans. i agree with the parent commenter that the distinction is worth investigating.
(The point of knowing about things like publication biases in science is that they are systematic: once you know about publication bias, you know that estimates are on net, higher than they should be, and this is something you can apply to evaluating science that you read.)
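The correction that publication bias implies can be seen in a toy simulation (hypothetical numbers, not real data): if only studies that cross some threshold of "significance" get published, the average published estimate overshoots the true effect, even though each individual study is unbiased.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # the real underlying effect size (made up for illustration)
NOISE_SD = 1.0      # per-study sampling noise
N_STUDIES = 10000

# Each "study" measures the true effect plus random noise.
all_estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# Publication bias: suppose only studies with a large positive estimate
# (a crude stand-in for "statistically significant") get published.
published = [e for e in all_estimates if e > 1.0]

print(f"true effect:       {TRUE_EFFECT}")
print(f"mean of all:       {statistics.mean(all_estimates):.2f}")
print(f"mean of published: {statistics.mean(published):.2f}")
```

The mean over all studies lands near the true effect, while the mean over the published subset is much higher - which is exactly the "estimates are on net higher than they should be" adjustment you can make once you know the filter exists.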
There are real, pragmatic ways to correct for this bias and get more useful information out of online forums, too. For example:
1.) Look for people who have a solid, independently-verifiable track record who are now just starting ventures that need publicity. For example, Marc Andreessen had an absolutely awesome weblog in the ~1 year prior to founding Andreessen Horowitz, but now his comments are largely limited to snarky one-liners and occasionally insightful one-paragraphers. Why? There's no incentive for him to spread his knowledge around the general public; his firm already has enough of a reputation to draw the top potential founders. The entrepreneurs he funds get his advice, but everybody else has to make do with occasional soundbites.
2.) Look for people who post only brief, offhand comments, but then follow up on those comments and do the research yourself. Many "silent experts" may have time in between compile breaks to throw in a throwaway comment or correction, but they don't have time to write a long missive. However, if you follow-up yourself and do a bit of Googling, you can take their clues and learn what they were talking about. This is how I found out about Haskell, it's how I found out about writing scalable event-driven servers, and it's how I found out about writing multi-language systems where a scripting language is embedded inside a larger program.
3.) Look for people who can see & acknowledge both sides of an issue. Practical experience teaches you about trade-offs, it teaches you about alternatives, and it teaches you that there are often multiple solutions and oftentimes you need to give up some desirable properties to get others. Blog posts by Internet Fanboyz teach you that there is One True Way Of Doing Everything that will solve all your problems, because that is the only way they've ever encountered.
4.) Similarly, look for people who stay out of flamewars. Folks with real jobs who care about their craft don't have time for that shit, because becoming an expert takes a lot of time. So the folks who do have time for that shit are generally either folks without jobs or folks who blow off their jobs to score points on the Internet.
5.) And perhaps most effectively - work directly with an expert. Start contributing to open-source and understand why the maintainers make the choices they do. Take a job at a well-respected company. Work with the gruff neckbeard at your employer. When experienced programmers have to clean up the messes you make, they have a very strong vested interest in not letting you make any messes.
Anecdotally, this is absolutely wrong. I can think of almost no one who participates online who isn't better than most people who don't participate online.
Maybe it's an issue of averages, and at the edges this is true - that the average "superstar" spends less time online, but that the average "OK" person does spend time online.
But I can certainly say, the silent majority of programmers, the ones who don't take part in anything except just focusing on their work, are almost always worse. I've seen this time and time and time again.
I disagree. Many people (including myself) have observed that qui docet discit ("he who teaches, learns"), and there is little better way to teach yourself than to discuss and debate and work with other people. This eliminates your claimed strong inverse correlation, and the rest of your suggestions simply become ways of finding additional information, not corrections for any systematic bias in available experts.
I think his target audience here is the {PHP,Java,C++,etc}-sucks crowd. Don't believe you know the global consensus on any technology just because the vocal minority (places like HN) seem to have a consensus. He even closes the article with the following:
> Your time may better spent getting in there and trying things rather than reading about what other people think.
But collectively, those people have spent a lot more time on the technologies, in many more situations, than I have used them or am likely to use them. How am I better off ignoring them and throwing away data? How does listening to them make me worse off?
When I was looking at statistics languages, I didn't spend a year trying Stata, a year trying SAS, a year trying Julia, a year trying Matlab, a year trying Pandas+Python, a year trying R; I just looked at what people were using and blogging about and opinions on them, and picked R. How would I have been better off ignoring all of the community discussions and picking on my own? What systematic tendency causes the discussions to be literally worse than random noise?
Because it tells you that the more vocal people are usually full of BS, and you should take notice of what the silent experts and doers-of-stuff practice. Who said there's no way of "correcting for this silent majority"?
How's that? All the more vocal people tell you is that they're... more vocal. Where is the evidence that the more vocal people are correlated with systematically being wrong in a predictable and correctable way?
> you should take notice of what the silent experts and doers-of-stuff practice.
And how do you do that when they are silent?
> Who said there's no way of "correcting for this silent majority"?
You still haven't given any way.
On the other hand, you can't make decisions without data - without discernable activity, it's reasonable to assume a community is dead.
Despite this, there were clearly experts who could "finely craft" PHP applications. Although Facebook may not be the best example, it is the first one that comes to mind.
On top of that, the latest developments in the language and community have seen some big changes for the better around tooling, best practices, and the like. So I'm firmly of the belief that with the RFC, PSR and Composer/Packagist trio, as well as things like HHVM, we will see things begin to change for the better on this front in the future. Call me Mr Optimistic ;)
There has to be an incentive / prioritization by the person to do something about it. Also, some people don't even think they're experienced enough.
Maybe useful knowledge transfer via teaching or at least reviewing teaching material.
remember to judge others on their past behaviors, and nothing else.
Or: Those who can, do. Those who can't, talk about it.
If you take a group X, a certain percentage Y hang out in newsgroups and blog. He himself was part of that Y for Forth.
Why would the percentage Y be significantly different for any given language?
The number of bloggers/newsgroup users is like a survey, it's not complete, but it's a very good indicator of how popular a language is. The popularity of a language increases its ease of use by a considerable factor (libraries, help, documentation, etc.).
Hence Forth being a language no one uses today, which actually contradicts his point rather than illustrating it, as he seems to think. Elizabeth Rather was right: there were people doing interesting things with Forth, but there weren't many.
And for most of us, that's an important thing to consider when using any tool.
That there is a huge number of bloggers looking down their nose at Perl & C++ simply shows there's something wrong with both those languages. It doesn't mean that they're not still useful though, nor does it mean something else solved the problem until a lot of 'I used Go/Rust/Haskell/Whatever to make non-trivial program' posts appear.
There are lots of reasons I can imagine: people trying out newer languages have more reason to talk publicly about what they're doing, since they usually have a vested interest in the growth of that language. Organizations with tight rules on secrecy are often risk-averse in general and less likely to try out new languages. So new languages wind up with users that are more open, and they appear even more active. (I don't know if this is true -- it's just a guess. But your assertion that the percentages would likely be uniform across languages feels unlikely to me.)
I'm not sure the argument is that the percentage who hang out in newsgroups and blog is different, anyway, but rather that looking at newsgroup activity and blogs specifically selects out many of the actual experts. This has absolutely been my experience, and one of the most damaging assumptions I see is that surveying blogs and twitter feeds is enough to get a complete understanding of how people use a piece of software and what they think of it.