I enjoyed the discussion of the objective vs subjective probability distinction! It's a helpful one, but often neglected.
One short comment: as it stands, the argument seems to overreach. As far as empirical knowledge is concerned, there's always a non-zero probability, whether objective or subjective, that something is possible. In other words, since empirical knowledge is defeasible, at the limit it's impossible to say that anything is impossible. What seems to matter more in this case isn't the uncertainty itself but the stakes: igniting the atmosphere is a very bad outcome indeed!
The stakes alone are not sufficient to conclude that the bomb should not have been detonated on the grounds that it was possible that doing so would ignite the atmosphere. For it was also possible for the atmosphere to be ignited due to *not* detonating the bomb. A model-agnostic reason is needed to rate one probability much higher than the other.
I didn't mean to suggest that the stakes "alone" are sufficient (I'm not sure how that could work?). I wanted to point out that what was mostly driving the prudence recommendation wasn't the probability of the calculations/theory being wrong, but simply that the stakes were very high. There are lots of scenarios with similar probabilities that are inconsequential: perhaps the makers of my toaster made a calculation error and my toast will burn as a result. Should I refrain from making myself toast? I would argue not. :)
I only skimmed through the Orb et al. paper, but as far as I understand their analysis assumes high stakes (e.g. "The stakes must also be very high to warrant this additional analysis of the risk, for the adjustment to the estimated probability will typically be very small in absolute terms." [p. 194]).
See this paper for some relevant discussion: https://www.tandfonline.com/doi/abs/10.1080/13669870903126267
Thanks for the paper reference, didn't know it!