Accuracy

Considerations of accuracy, or closeness to the truth, have been used to motivate various requirements of epistemic rationality. I have been thinking about how these apply in some unusual settings:

Risk-aware settings

In risk-weighted decision theory one has, in addition to one's utilities and credences, a risk profile governing how much weight worst-case scenarios receive. Lara Buchak has recently promoted this as an account of rational decision-making that goes beyond standard expected utility theory. We argue that what counts as an appropriate measure of accuracy for a risk-aware agent differs from what counts as appropriate under standard expected utility theory. In particular, the notion of strict propriety should be replaced by risk-weighted strict propriety.
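As a rough sketch of the contrast (the notation here is mine; the precise formulation is part of the work in progress): an inaccuracy measure I is strictly proper when each probability function uniquely expects itself to be least inaccurate, and the risk-weighted analogue replaces that expectation with Buchak's risk-weighted expectation.

\[
\text{(strict propriety)}\qquad \sum_{w} p(w)\, I(p,w) \;<\; \sum_{w} p(w)\, I(q,w) \quad \text{for all probabilities } p \text{ and all } q \neq p,
\]
\[
\text{(risk-weighted strict propriety, roughly)}\qquad \mathrm{REU}_{p,r}\big[I(p,\cdot)\big] \;<\; \mathrm{REU}_{p,r}\big[I(q,\cdot)\big] \quad \text{for all } p \text{ and all } q \neq p,
\]

where \(\mathrm{REU}_{p,r}\) is the risk-weighted expectation taken with respect to \(p\) and the agent's risk function \(r\). Since inaccuracy is a loss rather than a gain, exactly how the risk-weighting should be applied to it is itself one of the details to be settled.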

There are then many questions to answer about this framework. For example:

  • What do these measures look like?
    (Work in progress with Ben Levinstein. Email me for details.)
  • Is a risk-aware agent epistemically required to gather free evidence?
    Answer: no.
  • Is a risk-aware agent still epistemically required to update by conditionalization?
    Answer: yes. (Both requirements are sketched briefly after this list.)
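For reference, the two requirements at issue are standard ones, stated here in my notation. Conditionalization says that on learning a piece of evidence E one's new credence in any proposition A should be one's old credence in A conditional on E; the free-evidence requirement, in the spirit of Good's theorem, says that one should never turn down the chance to look at cost-free evidence before acting, since doing so never looks worse in expectation.

\[
b_{\mathrm{new}}(A) \;=\; b_{\mathrm{old}}(A \mid E) \;=\; \frac{b_{\mathrm{old}}(A \wedge E)}{b_{\mathrm{old}}(E)} \qquad \text{(provided } b_{\mathrm{old}}(E) > 0\text{)}.
\]

The claims above are that the free-evidence requirement can fail for a risk-aware agent, while the requirement to conditionalize survives.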

Non-classical logics

Robbie Williams has shown that the accuracy arguments apply quite generally when the world is governed by a non-classical logic. However, his arguments require that we assign a particular numerical value to each non-classical truth value, e.g. “neither”, and that our degrees of belief are still given by numerical functions. I put some pressure on these assumptions and offer some alternative models. A further question is: what should the model of belief look like if I am uncertain about which logic is correct? For such non-standard models of belief the accuracy arguments need to be reconsidered. This relates to the consideration of accuracy arguments for the imprecise probability framework.

(Work in progress. Email me to get a copy of my working-paper.)
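To illustrate the kind of model the pressure is directed at (my example, in the spirit of such approaches rather than quoted from the paper): in a three-valued setting one assigns numbers to the truth statuses, say 1 to “true”, 0 to “false” and 1/2 to “neither”, and scores a numerical credence function b at a world w by its distance from these values, for instance with a Brier-style measure:

\[
v_w(A) \in \{0, \tfrac{1}{2}, 1\}, \qquad I(b, w) \;=\; \sum_{A} \big(v_w(A) - b(A)\big)^{2}.
\]

With such an assignment in place, analogues of the classical accuracy arguments go through; the pressure above is on whether the numerical assignment, and numerically-valued belief functions, are compulsory.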

Imprecise probabilities

In imprecise probability models one's beliefs are represented by a set of probability functions. I suggest that in such settings a measure of accuracy should apply directly only to the precise probabilities. To get accuracy-theoretic advice for an imprecise agent, we then need to ask how such an agent should evaluate the various imprecise options open to it. The suggestion I make is that it does so by asking the precise members of the set what they recommend. This yields very natural results.

(Work in progress. Email me to get a copy of my working-paper.)
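One natural way to spell out the “ask the precise members” idea (my sketch; the working paper has the details): the accuracy measure attaches only to precise probability functions, and an imprecise state, represented by a set \(\mathcal{P}\) of probability functions, recommends one option over another when its members unanimously do:

\[
\mathcal{P} \text{ recommends } O \text{ over } O' \quad\text{if}\quad \text{every } p \in \mathcal{P} \text{ recommends } O \text{ over } O',
\]

where each precise member \(p\) evaluates the options in the usual accuracy-theoretic way, e.g. by expected inaccuracy where that is well defined.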
