> 16. Russell Conjugation: Journalists often change the meaning of a sentence by replacing one word with a synonym that implies a different meaning. For example, the same person can support an estate tax but oppose a death tax
Why did they choose “journalists” when the “death tax” narrative was created by politicians, a much better example group for this conjugation?
> 18. Overton Window: You can control thought without limiting speech. You can do it by defining the limits of acceptable thought while allowing for lively debate within these barriers. For example, Fox News and MSNBC set limits on what political thoughts they consider acceptable, but in the grand scheme of things, they’re both fairly conventional.
While this is true of MSNBC, Fox News is clearly swinging to the right more and more.
> 41. The Invisible Hand: Markets aggregate knowledge. Rising prices signal falling supply or increased demand, which incentivizes an increase in production. The opposite is true for falling prices. Prices are a signal wrapped in an incentive.
More recently, the invisible hand has been shown to have its thumb on the scale.
The actual definition of the Overton window exactly predicts what you both describe. See Wikipedia for a better description.
I think for the same reason people blame web developers for how websites are built, ignoring that the web developer is not the product manager.
The only way to change is through habit.
Carol Dweck wrote a book called Mindset, which talks about two mindsets: a fixed mindset and a growth mindset. The idea is super simple, and you don't even need to read the book to know what it is. But just knowing that idea changed my entire motivation to do things, and now I believe that even if I'm not talented at something, I can get surprisingly far. (For instance, I'm a deep introvert, but I'm now able to socialize for long periods of time and talk to strangers easily -- though only after practicing for 2 years. I discovered it's possible to "bend" your introversion if you don't put yourself in a box and are willing to make an effort.)
https://en.wikipedia.org/wiki/Mindset#Fixed_and_growth_minds...
Arguably the science around fixed vs growth mindsets is fuzzy, but here's the other insight: sometimes you can blunder into the right actions by holding some vague ideas loosely (the rationalist crowd thinks that avoiding biases leads you to correct actions -- in real life I have found this to be false).
There are two kinds of rationality: epistemic rationality (believing the right things) and instrumental rationality (believing in ideas that work to get you the ends desired, even if the ideas themselves are not 100% rational).
In business and life, instrumental rationality is much more useful and works more of the time (this is why the LessWrong crowd isn't good at stochastic domains like business: their models, though logical, are insufficient -- this is a realm where instrumental rationality leads to success). That's another idea that changed my life.
From https://en.wikipedia.org/wiki/Carol_Dweck#Criticism
> Timothy Bates, a psychology professor at the University of Edinburgh, has been trying for several years to replicate Dweck's findings, each time without success, and his colleagues haven't been able to either.[23]
The statistics in Dweck's papers also fail the GRIM test, which is a potential indicator of fraud (fabricated statistical values).
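For anyone curious, the GRIM (Granularity-Related Inconsistency of Means) test is simple enough to sketch in a few lines of Python: for integer-valued responses, the reported mean times the sample size must land on (or round from) a whole number. This is an illustrative sketch, not any standard library's implementation; the function name and defaults are mine:

```python
def grim_consistent(reported_mean, n, decimals=2):
    """GRIM test: could a mean reported to `decimals` places arise
    from n integer-valued responses?"""
    target = round(reported_mean, decimals)
    # The only plausible integer totals are those near mean * n.
    total = round(reported_mean * n)
    return any(round(k / n, decimals) == target
               for k in (total - 1, total, total + 1))

# A mean of 3.48 from 25 integer responses is possible (87 / 25 = 3.48),
# but 3.49 from 25 responses is not: no integer total produces it.
print(grim_consistent(3.48, 25))  # True
print(grim_consistent(3.49, 25))  # False
```

A failed check doesn't prove fraud on its own -- it flags a mean that could not have come from the reported sample of whole-number responses.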
One of the ideas that helped me there actually comes from them: "Rationality wins". Which is the old: "If you're so smart, why aren't you rich?".
Not easy for anybody to apply; it's much easier to be a smart loser.
How did you practice?
Before reading about FIRE, I thought the idea of retiring was silly because I was 100% sure I'd never stop working, even after turning 65. So saving money never made sense to me.
The blog about FIRE explained it from a totally different point of view: it's not really about retiring, but about having the choice to do only the work you really want to do, and being independent enough to make your own choices regardless of needing a paycheck.
That idea completely changed (part of) my life.
I love sites full of wisdom like this. Sometimes there’s a tool buried deep in my toolbox that my brain forgot I had and this brings it back to the surface. And sometimes there’s a new tool I can grok/incorporate in seconds. And inevitably there will be a tool I don’t care about or agree with, and I simply discard it. Big win for very minimal investment of time.
That’s a compelling idea. Maybe I’ll adopt it and get into a habit.
(The other is that graphy mind-mappy tools on the computer are often pointless, because this is the thing that human brains are WAY BETTER AT than the computer.
What the computer is good at is "perfect recall of specifics in the form of words")
Remarkable in a time when we study "one-shot learning" for machines.
Edit: incidentally, this human detail reveals precisely what seems to be crucially missing in machine learning at its current stage: ideas propagate change through internal elaboration, in an organic process -- ideas fine-tune models through internal work.
The idea is the trigger. Doing it, to achieve lasting change, requires habit forming.
Habit change is itself just an idea until someone chooses to actually do it.
Likewise a friend once pointed out that swinging a closed sauce bottle to get the sauce to the top is way more effective than shaking it linearly. That was 15 years ago and I still think of him every time I use this trick.
> 23. Gall’s Law: A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
Meanwhile, a much more limited (initially) machine arrived on the scene: the Linotype machine. It was successful and ran the world's news and magazine press for the next 100 years.
It's really a fascinating read: https://www.todayifoundout.com/index.php/2023/03/the-machine...
I see the obvious examples of complex systems failing, but we also have so many other complex systems that have their share of issues, yet I'm not sure we can mark them as failed.
For instance, government constitutions are arguably complex, and formed from scratch.
Or, to stay in the tech sphere and look at the problem from the other end: is it possible to build anything that hasn't effectively evolved from something simpler? If tomorrow I wanted to build a whole new rocket from scratch, I'd need to build the parts piece by piece and test them before assembly.
Do we consider these tests to be simpler systems that are proved to work, so that if my rocket design happens to work, it evolved from simpler systems that got combined? Does my design come from the many other designs I've seen and learned from, and so is an evolution of those past designs? Etc.
My core question would be, does that law have any practical impact in our field ?
This is, at best, expensive.
Incremental approaches to designing a control system with many if statements will never evolve into a PID controller. Nor will a PID controller be evolved into being able to handle non-linearities.
Scrum may be effective as an organisational process at a team level, but will fail utterly if it is applied to the entire organisation.
Incremental approaches optimise, but that’s all they can do.
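To make the contrast concrete, here is a minimal sketch of the two kinds of controller the comment describes (the gains and numbers are invented for illustration, not from any real system): a threshold rule built from an if statement, and a textbook PID controller whose integral and derivative terms have no incremental path from the former.

```python
def bang_bang(error):
    # The "many if statements" approach: full output when below target,
    # nothing when above. Adding more thresholds never turns this into
    # smooth proportional control.
    return 1.0 if error > 0 else 0.0

class PID:
    """Textbook PID controller; the gains used below are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        # No derivative contribution on the first sample.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1)
print(pid.step(1.0, dt=1.0))  # 2.5 (proportional plus integral terms)
print(pid.step(0.5, dt=1.0))  # ~1.7 (derivative term now damps the response)
```

The structural point: the PID output is a weighted sum of history (the integral) and trend (the derivative), state that the threshold version simply doesn't carry, so no sequence of small patches to `bang_bang` gets you there.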
It's a function of where you start on the graph. Experience and understanding can let you start in a different place on the graph that may be less efficient but have a higher local maximum.
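That framing maps directly onto local search. A toy sketch (the landscape and numbers are invented for illustration): greedy hill climbing from two different starting points ends on two different peaks, and only knowing where to start gets you the higher one.

```python
def f(x):
    # A landscape with a low peak at x = 20 and a high peak at x = 70.
    return max(400 - (x - 20) ** 2, 900 - (x - 70) ** 2)

def hill_climb(f, x, step=1):
    # Greedy local search: move to the better neighbour until stuck.
    while True:
        best = max(x - step, x + step, key=f)
        if f(best) <= f(x):
            return x
        x = best

print(hill_climb(f, 10))  # 20 -- stuck on the low peak (f = 400)
print(hill_climb(f, 60))  # 70 -- reaches the high peak (f = 900)
```

Both runs "succeed" locally; only the starting point, chosen before any incremental step, determines which maximum you can reach.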
This is a great, but absolutely not necessarily the only, way of writing software. As mentioned by other replies, this is maybe not even possible in some fields. And as Dawkins points out, it is absolutely not the most efficient way, and you don't explore the full efficiency landscape by a long shot. But it works, and it is predictable. I've had great success in software by following this rule.
It is also highly resilient to unpredictable organizational politics and inefficiencies because from the moment of having an MVP, you always have a functioning product.
This is not entirely true, because there is horizontal gene transfer, and serendipitous combinations, even of "junk" DNA. For example, eyes started off as ordinary brain cells that just happened to have a light-sensitive chemical in them: melatonin. The melatonin could have had another non-photosensing use prior to this, or it could even have been a mere byproduct of some other process.
The above article sounds like it was written by GPT. Also, the author's tweets look like this:
"If you really wanna learn about somebody, skip the 60 minute interview and have them drive you through New York City for 15 minutes."
And then this person calls himself "The Writing Guy" on his twitter bio and unironically sells a writing course.
I'm struggling to see the connection you are seeing, can you clarify? Are the ideas mentioned worth less because you found a potential way to discredit the person listing them?
I pasted the first half of the article into it.
2 of the top 5 match the handwritten article.
Edit: in fact, some of the points presented are practically stubs and, on the contrary, require more elaboration.
Edit2: not to mention that Perell also expresses falsities just to play on words (like quantifiers).
~~~ Monty Python, The Meaning of Life
> 2. Doublespeak: People often say the opposite of what they mean
> 4. Preference Falsification: People lie about their true opinions
Are we supposed to apply these to the 48 other ideas that he pitches as his guiding lights?
Otherwise, I think more than the ideas themselves, it's a good reminder that anyone who has seen enough paradigms will have a prism of often contradicting ideas through which to look at any specific issue. None of these ideas would make sense on their own, nor have much value beyond being another perspective to complete the others.
This is often lost when trying to pigeonhole a real situation into a single well-known pattern or single idea.
Indirection (and proxies!!) are cornerstones of human communication. I've found it reasonably helpful to test all strong statements through those 3 lenses.
This doesn't feel right at all from my culture's perspective. We are not indirect at all.
It's a self-help blog post, they had the ChatGPT prose style down before ChatGPT existed. For all the people who are convinced that humans are just running on autocomplete, with this particular genre they may actually have a point.
*Everyone belongs to a tribe and underestimates how influential that tribe is on their thinking.* There is [little correlation between](http://climatescience.oxfordre.com/view/10.1093/acrefore/978...) climate change denial and scientific literacy. But there is a strong correlation between climate change denial and political affiliation. That’s an extreme example, but everyone has views persuaded by identity over pure analysis. There’s four parts to this:
• Tribes are everywhere: Countries, states, parties, companies, industries, departments, investment styles, economic philosophies, religions, families, schools, majors, credentials, Twitter communities.
• People are drawn to tribes because there’s comfort in knowing others understand your background and goals.
• Tribes reduce the ability to challenge ideas or diversify your views because no one wants to lose support of the tribe.
• Tribes are as self-interested as people, encouraging ideas and narratives that promote their survival. But they’re exponentially more influential than any single person. So tribes are very effective at promoting views that aren’t analytical or rational, and people loyal to their tribes are very poor at realizing it.
I just don't agree with this at all. I think it's folly to assume the world will always make sense. As humans we seek to make sense of things, but not everything fits into our cognitive little boxes.
The same thing applies to anything that emerges from a system of multiple people making multiple decisions, or from any observation about nature. The only thing that changes is the complexity of what you’re trying to understand.
The very idea of "making sense" is a human construct. The universe doesn't care about making sense - to us or otherwise, except to the extent that we are part of it, and some of us care. There's no magical rule that says "the world" should, will, or even can "make sense".
(IMO :)
I tend to think the only reasonable and scientific viewpoint is that it’s possible to make sense of everything. Anything else is defeatist.
Freakonomics argues it can have the opposite effect, of "licensing" undesired behavior by those who can afford to pay the cost. Something to think about when it comes to carbon credits, etc..
Here's an idea which will hopefully change someone's life:
If you take the best parts of in silico computing and the best parts of synthetic biology, you can make transformative hybrid devices much faster than by just making tinier and tinier computers or smarter and smarter cells.
Example:
Mimee, M., N. Nadeau, T. J. Hayward, S. Carim, C. A. Flanagan, J. Jerger, S. Collins, T. R. McDonnell, R. N. Swartwout, W. C. Citorik, S. H. Bulovic, R. Langer, and G. Traverso. 2018. An ingestible bacterial-electronic system to monitor gastrointestinal health. Science 360: 915-918. doi: 10.1126/science.aas9315. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6430580/
They were sick of parents showing up late to pick their kids up so they instituted a penalty of $x for every minute a parent was late. More parents ended up being late because they felt less guilty knowing they could just pay the penalty. The social pressure to be on time was replaced by a financial one.
Pray tell: what was the 2nd-order consequence? The childcare center started to make more money from late parents. It sounds like a good trade. Ideally, raise the price little by little until behaviour hits an equilibrium. Then you have extracted maximum value for both sides! If parents don't like it, they can go to a different daycare.
This one is weird. First of all, competition can be very healthy and good for everyone. Sure, there are toxic games you can play in life, but anyone who's ever actually competed at something knows the difference.
Is competition the best way, in all those cases we think it is? Or is it just the easiest, low-friction way for an individual to gain advantage? And not so much benefiting the collective. Are coopetition, collaboration, cross-pollination more advanced and sophisticated mechanisms of evolution? Maybe requiring higher levels of civilization than mankind can bring to the table right now.
On the other hand, I have tried to "play to win" with stuff I do, which is giving it everything to succeed. Just not with the intent to make someone else lose. I've never really enjoyed pvp games.
Can someone explain healthy competition in their words?
This is terrible political ignorance disguised as "advice". Violence is the most fundamental form of influence, and is not owned by any particular political party or position, nor does the willingness to use violence communicate anything except that group's lack of commitment to pacifism.
This is akin to saying that serial killers and carpenters are basically the same because they both use sharp objects. In fact, you can equate any political position to any other, or literally any sovereign group of people, when you account for the fact that all of them have gone to war or committed some other act of violence to further their interests.
In fact, this is the Fox News/neocon Republican talking point: "anti-fascists are the same as fascists, because they harm fascists"
Does the author really think in such simplistic terms?
This succinct nugget of a concept which appears at the end of David Graeber’s magnum opus, Debt: The First 5000 Years, has completely transformed the way I look at and understand the world. It’s as if I was waiting all my life for that exact book.
Not that we can't stand on the shoulders of giants, but come on. Do you really need to include your unoriginal "ideas" in the same list and claim you came up with it?
"The Never-Ending Now" - Also just known as novelty bias or "shiny object syndrome". Or simply put, neomania. Life has gone through 24-hour news cycles for millennia; this isn't a unique idea, although the author has obviously claimed the phrase on the internet already.
In my case, learning Korean in my 20s helped me understand group dynamics and hierarchies (they are built into the verbs themselves) in a way my English-language background never could. Classical Chinese gave me conceptual gifts, such as "principle" 理 vs. "material force" 氣, and Japanese (very similar to Korean, but with important differences in nuance) the power of understatement and indirection.
https://www.sciencedirect.com/science/article/abs/pii/S03032...
See also
Not quite. We have mirror neurons; we imitate to learn (in fact, just watching is sufficient). This behavior can be overridden (thanks to the prefrontal cortex), but apparently most don't override it.
"Complexity is a subsidy."--Jonah Goldberg
Especially when your livelihood is on the line.
This underpins a lot of what we call DevOps (I mean the actually useful interpretation of DevOps, not all the shit that gets a DevOps label in an attempt to sell something).
Despite working with this idea every day for over a decade, the idea itself still blows my mind. Despite being theoretically quite succinct, it has so much practical depth that I struggle to see myself getting bored of applying the learnings from it.
Anecdote: I landed in a role in a part of a big org that was on fire, but the fire wasn’t due to stupidity. The team was huge, with genuinely no lemons on it (I later found out this was no accident - the head had been given permission to cherry-pick staff from across the org, and he took a lot of flak for causing brain drain in other parts of the org). They had a software component that everyone relied on in production but that technically no one really owned. Everyone was maxed out; growth wasn’t the problem, but an externally driven change in how the business worked was. The pace was non-negotiable.
This component “worked” as far as we could tell. The volumes, and the fact that some theoretical failure modes would be hard to detect in practice at that time, meant it was not possible to be confident that it was fully working correctly, but it was at least mostly working.
The problem: no one could release changes to this component reliably but changes were often needed. Over 80% of releases were rolled back. On average it took 2-point-something releases to successfully get a change out to this component that didn’t need to be rolled back.
Lots of optimisations had been applied to this component. This was not a stupid team, and it did not suffer this pain willingly. There were software optimisations - mostly tools to abstract changes so they were simpler to deal with. For example, one source of complexity was a bunch of rules that had to be dealt with, but these could be handled in software, allowing the human to specify mostly just the desired behaviours. That improved the situation, but only slightly; also, the continuously changing landscape meant this tooling itself became a moving target and a source of bugs. There were special review processes for changes: 3 experts in this huge org reviewed everyone else’s changes - this review process was excruciating to perform and involved examining a gargantuan model representation in Excel.
Still, the failed releases ticked up. No other part of the system suffered in this way.
The popular thinking at the time was that this system just needed an owner. Of course, no one wanted that thankless poisoned chalice.
Applying ToC (Theory of Constraints) to this resulted in a system that needed no owner long term; the tools that had been created were all retired, and the review board was too.
The result was that newbies to the team pairing on a deliverable would be given responsibility to change that component as a way of flexing their solo skills.