This reminds me of a talk by Halvar Flake in which he says that people in intelligence agencies are incentivized to try to know everything, in order to minimize the chance of being surprised by anything.
> A technological breakthrough in battery tech has a big potential for "economic disruption"; is that a national security risk?
I think it's kind of an abuse of language to call it that, but I wouldn't be surprised if many people intuitively thought that yes, this should count as a national security risk.
> The standard "Bayesian false-positive" puzzle applies here: the base-rate of humans committing actual terrorist acts is extremely low. [...] Do we think the person making the decision about whether to make Joe disappear is aware of these subtle nuances in Bayesian statistics?
I think that people at, say, the CIA are somewhat aware of them, because they have a notion of confidence levels for their assessments, and they're also used to the idea that they have a lot of noise to deal with. When they were trying to find Osama bin Laden's house, for instance, they presumably had a great many candidate locations that could have been his compound for one reason or another, but then they actively tried to disprove those hypotheses (like scientists!).
They will also literally say that they assess certain things with low confidence. I don't know whether some of the analysts writing those assessments actually have an explicit Bayesian probability in mind and are mapping it onto particular confidence ranges.
On the other hand, I think that military personnel in, say, the initial U.S. occupation of Afghanistan did not have this perspective and were happy to detain people for minimal and highly speculative reasons. (Also, I guess the base rate of people in Afghanistan during the war who were violently opposed to the U.S. was much higher than the usual base rate worldwide.) They got a number of tragic false positives, which led to enormous injustices.
I think part of what you're getting at here is that when governments act to actually influence the world, the stakes are, well, life and death. So things like the base rate fallacy are just as big a deal there as they are in cancer treatment, or, if you believe there's a relevant moral asymmetry between killing and letting die, an even bigger deal. Also, many kinds of agents of many governments are rarely held very accountable for their decisions or their decisionmaking processes.
I think this is partly an argument against surveillance and espionage (think of the scene in Good Will Hunting where the main character says he doesn't want to work for a spy agency because he can vividly imagine a false-positive scenario in which a foreigner gets killed because of his work), but overall I think it's much more an argument for transparency and accountability in uses of force. You don't want to give lots of information to the CIA if they're violent maniacs, because they might use that information recklessly. But while giving them that information might also be highly questionable on moral, legal, and political grounds, the biggest problem in that scenario is that the CIA is composed of violent maniacs, or that it's incentivized to cultivate rather than restrain people's tendencies toward violent, maniacal behavior.
As an analogy on the base rate point: some have argued that performing fewer tests could actually be better for patients' health, in cases where false positives often lead to wasteful or dangerous medical interventions. But if there's nothing harmful about the tests themselves, in theory it would make more sense to give doctors more information rather than less, while trying to improve their decisionmaking.
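To make the base-rate point concrete, here's a minimal sketch (with made-up, purely illustrative numbers) of how a seemingly accurate test or watchlist still produces mostly false positives when the underlying condition is rare:

```python
# Bayes' rule with a rare condition: even an accurate test yields mostly false positives.
# All numbers below are hypothetical, chosen only to illustrate the effect.

base_rate = 1e-5        # P(condition): 1 in 100,000 people actually have it
sensitivity = 0.99      # P(positive | condition)
false_positive_rate = 0.01  # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(f"P(condition | positive) = {p_condition_given_positive:.4%}")
# ~0.0989%: more than 99.9% of positives are false, despite a "99% accurate" test.
```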