For example, let's say you get a brain scan out of curiosity only. A small aneurysm is detected with a confidence of 95%, and said aneurysm, if real, has an estimated annual risk of rupture of 0.5%. The operation to clip said aneurysm, you are told, has a mortality rate of 10%. Do you have the op?
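To put rough numbers on it (a toy calculation of my own, ignoring age, rupture survival rates, and everything else a real decision would need): compound the 0.5% annual rupture risk, discounted by the 95% detection confidence, over some horizon, and compare it with the one-off 10% operative mortality.

    # Toy model using only the numbers above: is "watch and wait"
    # riskier than the op over a given horizon? Crudely treats any
    # rupture as the bad outcome.
    p_real = 0.95               # confidence the aneurysm exists
    p_rupture_per_year = 0.005  # annual rupture risk, if real
    p_surgery_death = 0.10      # one-off operative mortality

    annual = p_real * p_rupture_per_year
    for years in (10, 20, 40):
        cumulative = 1 - (1 - annual) ** years
        print(f"{years} years: rupture risk {cumulative:.1%} "
              f"vs. surgery {p_surgery_death:.0%}")

On this crude model the cumulative rupture risk only reaches 10% after roughly two decades, and since not every rupture is fatal, the odds favor leaving it be even more than that suggests.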
In the books I've read covering such medical ethics topics, a disproportionate number of patients do/would have the op, because the knowledge of the aneurysm will "play on their mind" even if the odds are hugely in favor of leaving it be. For this reason, amongst others, unnecessary/preventative testing is discouraged by many medical professionals. (A similar dilemma is faced by folks in affected families who have to choose whether or not to have genetic testing for fatal familial insomnia risk – would you want to know if you're likely to face this usually inherited condition?)
The way I like to short-circuit this problem is to ask a bunch of experts what they would do if they had the same information, except applied to their own life. As long as you preface it with a promise not to hold them responsible for their answer, they'll often give better advice than the "official recommendation" that they were giving moments before. I've used this with doctors, realtors, home inspectors, plumbers, programmers, managers, retail employees...
"Hey Dr. Neurosurgeon, if you received this scan and you had the possible aneurysm, and you wanted to live as long as possible, what would you do?" If the official recommendation is to not get the clip, but a bunch of neurosurgeons say they personally would get the clip, then I'd probably get the clip.
It feels as if these really are "information hazards": I act exactly as the people in your aneurysm example do; I wash my hands for 10+ minutes when the risk is probably negligible, and so on. That 0.5% risk from your example absolutely "plays on my mind".
>"you'll never win a Nobel prize if you don't decontaminate your hands because the pollutant will absorb through your skin, into your blood, and then into your brain, and probably lower your IQ!"
In that kind of anxious state, your imagination runs wild.
It probably makes sense to move to less polluted areas and such, but I doubt excessive hand washing and those smaller habits would do much. If they did, we'd expect to see a clear pattern of successful people being compulsive hand-washers. There are impurities in plenty of things we can't control anyway (air, water, food), so minimizing what enters via your hands makes only a very small difference.

Besides, brilliant people live in polluted places like LA (Terence Tao comes to mind!), various places in China, etc., so pollution doesn't seem to determine your outcome -- in fact, research shows the impact is small (though it does exist). Success is determined by the big factors and by knowledge/wisdom-based choices, not by being a few percent faster or more effective at solving puzzles -- unless you're in a particularly competitive field (maths competitions, professional athletics, professional chess, etc.). Most mathematicians choose to explore different directions and different fields instead of competing head to head on single problems[1].

Our bodies and brains are also remarkably robust to small loads of impurities... in fact, one of the techniques in AI is to knock out a few neurons here and there during training (DIY not advised; see the sketch after the footnote). Hope this alleviates some concerns :)
[1] I got this from Maryam Mirzakhani interviews: https://news.stanford.edu/2017/07/15/maryam-mirzakhani-stanf...
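The neuron trick I'm thinking of is dropout -- randomly switching off units while training a network, which tends to make it more robust. A rough sketch in plain numpy; the layer, rate, and names are just illustrative, not any particular system:

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(x, p_drop=0.1):
        # Inverted dropout: zero a random fraction p_drop of units,
        # then rescale the survivors so the expected output is unchanged.
        mask = rng.random(x.shape) >= p_drop
        return x * mask / (1.0 - p_drop)

    print(dropout(np.ones(10)))  # ~1 in 10 entries zeroed, the rest ~1.11

The point of the analogy: networks trained with units randomly knocked out don't just survive the loss, they often generalize better.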
Maybe to add to what might be the rationale for having an op like this: I was recently on a call where someone brought up a metric called quality-adjusted life years (something like: "Would I rather live another year relatively pain-free, or five more years, but in constant excruciating pain?"). As far as I know, it's mostly used in insurance calculations (which struck me as a bit distasteful, but I haven't looked into it deeply enough to make a fair judgement), but it's a good mental model for what people might use in their reasoning in such a situation. Maybe for the patients who decided to have the op, the knowledge of their condition was bad enough to substantially reduce their quality of life and hence led them to accept far higher risks to alleviate it.
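The arithmetic itself is trivial: years lived times a quality weight between 0 (dead) and 1 (perfect health). The weights below are made up for illustration, not clinical values:

    # Quality-adjusted life years: years lived x quality weight in [0, 1].
    # The weights here are invented purely for illustration.
    def qalys(years, quality):
        return years * quality

    print(qalys(1, 0.9))    # one mostly pain-free year -> 0.9 QALYs
    print(qalys(5, 0.15))   # five years in severe pain -> 0.75 QALYs

With these invented weights, the single good year comes out ahead of the five bad ones, which is exactly the trade-off the metric is meant to capture.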
For the second dilemma: Maybe simply knowing about the condition running in your family is an information hazard too. For someone very prone to worrying, knowing for sure that they have the condition might also come as a strange kind of relief.
If you look at it in terms of outcomes beyond life/death, the calculations become much fuzzier. Is "fixed or dead" possibly preferable to risking negative outcomes with life-long impact? (If you live in the land of capitalism worship, a huge part of the decision is "can I afford this medical care now, and do I think I still could in a few years?")
It's certainly a difficult decision, but it's one that should be up to individuals. We shouldn't have doctors make that decision on our behalf, as if they somehow knew better. We should educate doctors on how to better communicate risk, and how to talk about possible impacts. (Most medical professionals turn into deer in headlights when you want to talk about probabilities of outcomes, and certainly can't explain them well even if you do get them talking.)
Most fascinatingly, because it's possible that an excessively cautious attitude towards dangerous information can cause harm, this paper is a potential example of dangerous information. I hope the authors considered that before publication!
Other than that, it's kind of interesting to see terminology being constructed for a type of “higher order analysis” I've been thinking about lately.
Yes, it needs a [2011] tag on it. Because:
>There is, however, another way for robot- and AI-related risks to enter our information hazard taxonomy. They can enter it in the same ways as any risk relating to potentially dangerous technological development
is already happening. With autonomous decisions made without humans in the loop (e.g., flagging some file as copyrighted), it isn't a potential risk anymore; it's reality.
> Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.
I'm getting the sense there's an underlying loop in the reasoning used to support this concept of hazard, where the direction between truth and information seems confused. There's also the question of how durable any given state of truth is.
What is the difference between information and fear? (e.g. data about potential information) When and how is information realized into truth, and in turn, consequences? Given we know truth is not a necessary or sufficient condition to have consequences, what are the qualities of information that does? If there were no truth, and in turn, only narratives and struggles for power, is an information hazard just something that hinders your will to power?
Admittedly, I find Bostrom's writing mostly impenetrable because he seems to publish things he's still sounding out, and I don't quite detect the hand of an editor. But while this idea of a hazard seems interesting, I don't see that he's testing for or discovering its existence so much as coining and supporting a meme. Perhaps I've just missed it.
"In this paper, we will not be concerned with postmodernist critiques of the idea of objective truth nor with skeptical doubts about the possibility of knowledge. I shall assume some broad commonsensical understanding according to which there are truths and we humans sometimes manage to know some of these truths."
the bostromian creed wails.
To feel empowered, wagging their sheepdog tails.
To please their masters, by protecting the herd,
from the ever lurking, black sheep nerd.
Fuck OFF ye lousy boffins!
edit: /me howls Total transparency, or BUST!