With that said, it's always good to keep in mind just how fragile civilization as a whole can be in the face of global catastrophic events. As this article highlights, these are events that could have happened but, thankfully, did not; if they had, the world would be very different today.
I write Haskell, I make computer games, I make web apps. Mostly because it's fun and satisfying. I'm also quite good at it and it comes easily to me.
I remember when I decided to go into engineering, a peer of mine from high school said "whateveracct, you're top of your class. Why aren't you going into medicine in order to do something more Worthwhile with your life?"
Stuck with me ever since. I was repulsed by the mindset but I couldn't articulate why at the time. I later realized that it smelled of a deeply nihilistic (in Nietzsche's sense) view of the world. Ressentiment comes to mind.
Spending my conscious hours working with computers is a less nihilistic use of my time. I am not deferring this life's happiness and agency in order to "have made an impact" when my life is over and useless to me.
If you want more philosophy, consider Plato's Republic. An ideal society doesn't necessarily have everyone doing Most Important and Dire Work. It doesn't even have them doing what they're most "skilled" at! Instead, it has everyone living in alignment with their souls' desires and preferences. (e.g. A frail person with a Warrior's Soul should be a soldier before a strong person with an Artisan's Soul.)
You're looking for a balance between "saving the world" (i.e. our responsibility to civilization and all lifeforms) and "enjoying your life". Clearly if everyone is concerned exclusively with enjoying their lives (i.e. hyper-hedonism), society collapses; that's deeply irresponsible. If everyone is instead hyper-focused on self-propagation of our species with zero regard for our actual experience as conscious beings with rich inner lives, then clearly there's the risk of indeed making our inner lives much worse than they could be.
A framework I've seen recommended here for thinking about this (I've seen it related to Ikigai, a Japanese concept) is: find a balance between your needs and experience, your skills and potential, and what's good for society at large (in a soft max-min).
I think overall, however, if we give it a little thought, it's easy to find something aligned with our interests and potential that can really make a good impact. If you're interested, I recommend the Effective Altruism community for a take on this (they're largely focused on more tangible things like earning to give) and 80,000 Hours. In all likelihood, just by being a functional member of our society (and giving what you can), as long as you don't work for some obviously evil enterprise (idk, making hyper-addictive things, oil field discovery, or something like that), you're probably helping society.
I encourage a different path as well: if you can program (or develop technology) and you're entrepreneurial (many people around here?) you can most likely make something that will make a good impact on society and even civilization at large. Furthering education with online tools, making educational games (or otherwise that promote growth and reflection), making tools more accessible, ... , improving the robustness and reliability of our systems, ..., the list goes on -- why not fulfill your potential to the best you can? Invent the future, Hack the planet.
> A nihilist is a man who judges of the world as it is that it ought not to be, and of the world as it ought to be that it does not exist. According to this view, our existence (action, suffering, willing, feeling) has no meaning: the pathos of 'in vain' is the nihilists' pathos – at the same time, as pathos, an inconsistency on the part of the nihilists.
Where is Nietzsche's nihilism in what you describe? How is your peer's worldview nihilistic when they place a high value on the effect of your potential actions on the world?
Surely for a nihilist, "to have made an impact" is a non-goal.
That does not mean I don't sometimes feel the guilt of not being a part of something that drives humanity forward, though, even if all I could contribute was working on the website that raises awareness! I just know what I'm capable of, but the reality vs. the ideal is the philosophical struggle here for me.
I instead try to make conscious investments to move things forward, and to make personal choices about issues like this as consciously as I reasonably can.
I still feel guilty sometimes that I'm not working on something like improving the public perception of the safety of nuclear energy, ya know.
Who says you can't enjoy work in a different field, e.g. medicine? Perhaps you can even combine those skills with computer programming. E.g. build an exoskeleton that makes partially paralyzed people walk again.
In terms of what to _do_ about it as a software developer, I'm still trying to figure that one out. I currently work at a BCorp which tends to make me feel better about the work I'm doing which at a minimum isn't doing harm to the world. You could try looking for a meaningful job at https://techjobsforgood.com/
What would you rather be working on?
In what capacity, I don't know. I'm pretty convinced we are overlooking geothermal energy, in the USA at least. I often wonder if we could both relieve potentially dangerous pressure beneath Yellowstone while simultaneously harnessing its geothermal energy, for example.
But alas, I think the reality is I lack the expertise to even talk about this with any authority
SPAs and code are tools. You can use them for all sorts of endeavours.
Or the latest Future of Life Award laureates: https://futureoflife.org/future-of-life-award
The Cuban Missile Crisis, on the other hand, was no joke. We were amazingly lucky that Murphy was on vacation that month. Our reliance on luck to avoid destroying humanity is much, much scarier than our reliance on reason.
- Something with the spread rate of COVID-delta, and a high lethality rate after a long incubation period.
- Enriching uranium in a rather small facility. This may already have happened and been kept quiet. Laser enrichment was talked about a lot in the early 1990s, and then suddenly, after some announcements from Lawrence Livermore, things got much quieter.[1][2] As high-powered lasers get better, this gets easier. There's now a startup in Australia working on this process again.
- Long term, a birth rate that's below replacement rate. That's the current normal in the developed world.
[1] https://www.nrc.gov/docs/ML1204/ML12045A051.pdf
[2] https://en.wikipedia.org/wiki/Separation_of_isotopes_by_lase...
As a twist, "The Giving Plague" by David Brin is an interesting (short) take on it.
I wonder if anyone's ever calculated the optimal parameters for a disease to end humanity
If it goes on long enough there will be opportunities and a baby boom
I sometimes think how "lucky" (obviously, luck is relative) humanity is that HIV is an STD and not an airborne virus with the transmissibility of Delta.
I wonder how many years away we are from home hackers having the tools necessary to create a horror. Say, something with the spread rate of Covid Delta and which acts as an airborne prion disease.
I figured that it would be hard to contain on earth because it would fall to the center of the earth. So the trick is to build it in orbit.
It seems far-fetched, but once you decide to work in space it opens plenty of engineering shortcuts to scale up the LHC. Space is very big, and it's already cold and a good enough vacuum, so you just need to hold a few superconducting electromagnets in position.
You collide a few high-energy particles to form one, and you nurture it to make it grow.
Initially you move it by shining light on it or throwing things into it; while the black hole is less than 1 kg, momentum conservation makes this as easy as playing marbles.
Once it is in position you feed it anything you want, and you build your space station and desk around the black hole. The more you shred things into it, the more mass it gains and the harder it is to move around, but the greater the space-time distortion.
Funds, you ask? Price per kg to orbit has gone down tremendously. And there are plenty of rich people ready to use cryonics to attain immortality, so it didn't take much to convince one of them to hedge with a safer alternative for buying time. Because you see, time passes slower near a black hole; thanks to Einstein's General Relativity, that has been known for more than a century. So instead of dying, you get close to your personal black hole and you fast-forward to the future, until the tech is ready to save you.
How could I have predicted that another stealth start-up (sponsored by the same guy, as I later discovered!) would have exactly the same idea? Now there are two black holes orbiting Earth and no way to divert them. Once they collide, in exactly 1337 days, their combined momentum won't allow them to orbit Earth anymore...
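The story's time-dilation premise is real physics: a clock hovering at radius r outside a non-rotating black hole runs slow relative to far-away clocks by the Schwarzschild factor sqrt(1 - r_s/r). A minimal sketch, with an Earth-mass hole chosen purely for illustration (the story's sub-kilogram hole would have a vanishingly small r_s):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def dilation_factor(mass_kg: float, r_m: float) -> float:
    """Proper time per unit far-away time for a clock held at radius r_m
    outside a non-rotating black hole of the given mass (requires r > r_s)."""
    r_s = 2 * G * mass_kg / c**2  # Schwarzschild radius
    if r_m <= r_s:
        raise ValueError("at or inside the horizon")
    return math.sqrt(1 - r_s / r_m)

# An Earth-mass hole has a Schwarzschild radius of about 9 mm;
# hovering at 1 cm, your clock runs at roughly a third the outside rate.
factor = dilation_factor(5.97e24, 0.01)
```

Hovering that close is of course its own engineering problem; the point is just that the "fast-forward the future" mechanism is standard General Relativity.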
In the other story (whose title I don't recall, but it was probably published in Analog), a bare something (perhaps a wormhole whose other end is in open space) is created, and begins to suck the Earth's air away. The hero builds a dome around it, but leaves a valve in the side to sell vacuum.
Thinking about this, I am curious what the original procedure would've been. How did they plan on retrieving the astronauts, in a capsule on the ocean, without allowing the air inside to escape?
[edit]: Targeting humans: a viral ideological, philosophical, or religious meme that causes us to self-destruct. A technological gift with insidious Trojan Horse functionality (a biotechnological machine with subtle side effects; a physics device that touches physics we haven't discovered). Irresistible instructions on how to join the friendly galactic internet -- which actually route to a paranoid deviant ETI that destroys anything that transmits (à la the Dark Forest hypothesis).
Targeting machines: exploiting a buffer overflow in the Allen Telescope Array's signal processing pipeline to upload a self-replicating superintelligent AI onto this planet.
Targeting the superior chthonic race living in the Earth's mantle that we don't know about: friendly instructions on how to terraform the terrestrial surface and become a spacefaring species.
I don't quite buy it, but it feels like the most plausible of all the alternative explanations for the Fermi paradox.
Because of phenomena like simultaneous discovery[0], I feel that we are the first of the many civilizations in the Type I to Type II transition.
I personally believe we are on the precipice of finally colonizing another planet (or planets) long term. Given the distinct possibility that life all basically started around the same time (simultaneous discovery), we may just be among the first civilizations ever. I don't currently believe that other galactic civilizations existed, or exist, at a different pace than us. I suspect (and I admit I have little evidence here) that simultaneous discovery applies beyond just ideas; there may be some analogue of this phenomenon in natural evolution. Making the (albeit big) assumption that this is true, and that evolution elsewhere follows paths like our own, we just happen to be among the first civilizations at our level, and other alien life is likely to be similarly or less advanced than us.
I’m either very right or very very very wrong, I figure
[0]: https://www.newyorker.com/magazine/2008/05/12/in-the-air#ixz...
In essence, the idea of us being among the first in our galaxy is compatible with the notion that intelligent life forming is so scarce that there will only ever be a couple of civilizations in our galaxy, or that we're alone. Because if life is so common that our galaxy would have (for example) a hundred civilization-spawning events, then the expectation is that half of them would have come before us, and it would be really, really unlikely that all of them are on Earth-like planets younger than us and none of them are on the multitude of Earth-like planets orbiting Sun-like stars older than us.
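The "half of them would have come before us" expectation can be checked with a quick simulation, assuming (purely for illustration) that spawn times are uniformly distributed over the galaxy's habitable window:

```python
import random

random.seed(0)  # reproducible sketch

N = 100        # hypothetical number of civilization-spawning events
TRIALS = 10_000

earlier_total = 0
for _ in range(TRIALS):
    # Spawn times drawn uniformly over the habitable window,
    # normalized to [0, 1]; treat the first draw as "us".
    times = [random.random() for _ in range(N)]
    ours = times[0]
    earlier_total += sum(t < ours for t in times[1:])

mean_earlier = earlier_total / TRIALS
# On average about half of the other 99 events precede ours (~49.5),
# which is why "we are among the first" requires life to be rare.
```

The uniform-window assumption is the weakest part; weighting spawn times toward later epochs (when heavy elements are more abundant) would shift the expectation, but not to "essentially nobody before us" unless events are very few.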
Also, given all the time-consuming steps required for life to form, "around the same time" would optimistically mean something on the scale of +/- a million years. Like, if the Proterozoic eon took 0.1% more or less time, that would be a difference of two million years; so if some civilization's planet were much older or much younger, the difference would be far larger than that. If we encountered a civilization within +/- a hundred thousand years of our progress, that would mean we progressed from a remarkably coincidentally equal starting point and at an equal pace; and if we encountered a civilization just a thousand years of technological development ahead of or behind us, that would be so unbelievable a coincidence that I'd consider some kind of intelligent designer required to explain it.
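The scale argument above is simple arithmetic; the ~2-billion-year duration of the Proterozoic is an approximation:

```python
# The Proterozoic eon lasted roughly 2 billion years. Even a 0.1%
# difference in how long such a stage took shifts a civilization's
# timeline by about two million years -- dwarfing any "thousand years
# ahead or behind" coincidence.
proterozoic_years = 2_000_000_000    # approximate duration
shift_years = proterozoic_years // 1000  # 0.1% of the eon
```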
The concept of simultaneous discovery happens because the discoverers share an environment where the prerequisites for that discovery appear at the same time; this would not be plausible for civilizations forming naturally through evolution without any contact or influence between them. That would work only if they were, e.g., intentionally designed and "planted" on planets by some previous civilization.
Well, maybe one could say "one of":
We recall that the big bang was ~14 billion years ago.
Our solar system was made ~5 billion years ago (very rough arithmetic) out of the results of an exploded star. The star may have exploded ~7 billion years ago. In that case it may have been a first generation, hydrogen star and took ~6 billion years to form, make heavy elements, and explode (and make more heavy elements).
So, that makes our star a second generation star and our solar system, from an exploded first generation star, one of the first.
If all that is true, then, okay, "we are just among the first of civilizations ever".
An interesting variation on this theme: a future Elon Musk figure dons his black hat and launches a satellite capable of stationkeeping at a point in line with a nearby star. The satellite emulates such an ETI.
"An Outside Context Problem was the sort of thing most civilizations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop."
Deciding not to launch a "counterattack" with nuclear weapons based on a hunch that the alert you're getting is false; THAT'S a world-changing decision.
Beyond pathogens, perhaps when life (inevitably) learns to change the code that makes up life, the recursion leads to implosion soon after
We already had moon rocks on Earth, blasted off of the moon by bolide strikes. Likewise Mars rocks, and bits of asteroids and of other planets' moons. Maybe even Venus rocks.
Relatedly, the energy released in certain bolide strikes on Earth far exceeds anything achieved even in Tsar Bomba, itself thousands of times more powerful than Fat Man.
I have not seen any analysis of whether a bolide strike might incidentally produce substantial fusion activity. At the pressure and temperature produced, it is hard to imagine it not occurring.
It seems like there ought to be long-lived products of such fusion detectable in the K-T layer, alongside whatever the bolide carried. Some might be weakly radioactive and thus detectable at very tiny concentration.
And in our universe, the LHC failed several times when they were first starting it up (the Quench Incident, for example), causing a year's delay--which seemed so unlikely at the time that I really wondered...
But the corporations would want to spread to all of them. Just like in big cities all over the globe, which look the same, you can buy the same stuff. Trade is good, and I love corporations (I never understood why anyone on the left would be against the concept - it's ideally suited to create a "container" for cooperating individuals), but even on our own main planet we can see that we have strong forces of equalization and winner-takes-all.
You would have to somewhat isolate the places if you want diversity. Same thing that helps with biological diversity. Otherwise it will all just be near-copies of sameness.
In that sense, it would just be like distributed storage of the same content. It helps when one place gets wiped out by accident, but it does not provide true robustness against the "unknown unknowns" (to quote Rumsfeld) that the universe occasionally throws at us.
Space is very good at that. Easy to end up in places that are not practical to reach. Over time things change.
Modern examples, IMO, are:
- Kessler Syndrome
- Trying to prevent an asteroid hitting the earth
- Nuclear war making the entire earth uninhabitable
Now I read on HN that the danger of nuclear war is hyped out of proportion to its true severity, and should be given only a little consideration. Sorry, maybe I misunderstand. I have read similar things on HN a few times, though, from people who seem to think nuclear war really would be no big deal at all.
But it always seems weird to be how some people are so worried to the point of obsession about global warming without apparently ever giving a thought to the ever-present risk of full-scale nuclear war—something infinitely worse. (Well, hardly "war", just a flurry of button-pressing for a few minutes.)
Like, if there's a scale of catastrophic events that goes from 0 to 10, where 0 is no big deal and 10 is human extinction, then the worst events humanity has ever seen are somewhere below 1 on that scale, and absolutely horrific mass death is something like 2/10 - because the gap between the damage required for that and the damage required for extinction is so much larger than the gap between no big deal and worse mass death than we have ever seen. Arguably the worst damage that life on Earth has seen is the dinosaur-ending asteroid, and IMHO a fraction of homo sapiens (though perhaps not our civilization) could survive even that. A full-scale USSR-USA exchange in the 1960s might perhaps have killed most people in the northern hemisphere and caused a nuclear winter, decreasing crop yields with an associated famine - but if just a fraction of people in South Asia and Africa and South America survive the famine while the North nukes itself to radioactive glasslands, that's very, very far from extinction.
Killing half of humanity would literally be an unprecedented level of horror, but it would not end our civilization; killing 90% of humanity would likely end our civilization-as-we-know-it but would not end our species - it would bring us back to the population level that Earth had in the 1700s; and killing 99.99% of humanity would definitely destroy our civilization, but it would "just" push our population back to the numbers we had ~70,000 years ago - horrific for every individual, but still not an extinction event.
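The arithmetic behind those survival fractions, assuming a present-day population of about 8 billion (a round-number assumption for scale):

```python
population = 8_000_000_000  # assumed present-day population, for scale

survivors_after_50pct = population // 2        # 4 billion survivors
survivors_after_90pct = population // 10       # 800 million - roughly Earth's population in the 1700s
survivors_after_9999pct = population // 10_000 # 800 thousand - on the order of ~70,000 years ago
```

Even the worst case here leaves hundreds of thousands of people, which is why the scale from "worst event ever" to "extinction" is so stretched at the top.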
If that "tail risk" though is "complete destruction of the planet", you only get to be wrong once.
If your tail risk is the end of civilization then it doesn't matter how small the probability. You'd be fucked with certainty on any long enough timeframe.
Some tail risks are too large to take. Eventually your number comes up.
Some other hyped risks:
- Artificial General Intelligence
- CRISPR gene editing
- Gain-of-function work with viruses
I have no way of assessing the risks, and there is a lot of hyperventilation in some circles.