Epistemic Rationality

Weakening the Assumptions

A key way to think about epistemic rationality is to identify epistemic value with the closeness-to-truth of one’s beliefs (so-called accuracy) and to consider how best to pursue this epistemic value. In the traditional framework this style of argument has been very successful. What happens when we move to settings that reject assumptions built into the traditional framework?
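To fix ideas, here is a minimal toy illustration of the standard picture (my own sketch, not drawn from any of the papers discussed): the Brier score as a measure of inaccuracy, together with a numerical check of strict propriety, the key property appealed to below.

    # Toy illustration: the Brier score as a measure of inaccuracy, and a
    # numerical check of strict propriety -- a credence p expects its own
    # Brier score to be strictly better than any rival q's.

    def brier_inaccuracy(credence: float, truth: int) -> float:
        """Squared distance from the truth value (1 = true, 0 = false)."""
        return (credence - truth) ** 2

    def expected_inaccuracy(p: float, q: float) -> float:
        """Expected Brier inaccuracy of credence q, by the lights of credence p."""
        return p * brier_inaccuracy(q, 1) + (1 - p) * brier_inaccuracy(q, 0)

    p = 0.7
    for q in [0.5, 0.6, 0.7, 0.8, 0.9]:
        print(f"credence {q}: expected inaccuracy {expected_inaccuracy(p, q):.4f}")
    # The minimum falls at q = p = 0.7, as strict propriety requires.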

Risk-aware settings

In risk-weighted decision theory, an agent additionally has a risk profile that governs how heavily worst-case scenarios figure in her reasoning. Lara Buchak has recently promoted this as an account of rationality that goes beyond standard expected utility theory. We argue that the measures of accuracy appropriate for risk-aware agents differ from those appropriate for standard expected utility theory. In particular, the notion of strict propriety should be replaced by risk-weighted strict propriety.
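For orientation, here is a minimal sketch of risk-weighted expected utility in Buchak’s style; the particular risk profile used (r(x) = x^2, a risk-avoidant choice) is illustrative, not anything drawn from her work.

    # Sketch of risk-weighted expected utility (REU). With outcomes sorted
    # from worst to best, REU = u_1 + sum_i r(prob of getting at least u_i)
    # * (u_i - u_{i-1}). Taking r(x) = x recovers ordinary expected utility.

    def reu(outcomes, r):
        """outcomes: list of (probability, utility) pairs; r: risk profile."""
        outcomes = sorted(outcomes, key=lambda pu: pu[1])  # worst first
        probs = [p for p, _ in outcomes]
        utils = [u for _, u in outcomes]
        total = utils[0]
        for i in range(1, len(utils)):
            prob_at_least = sum(probs[i:])  # chance of doing at least this well
            total += r(prob_at_least) * (utils[i] - utils[i - 1])
        return total

    gamble = [(0.5, 0.0), (0.5, 100.0)]      # fair coin: win 0 or 100
    print(reu(gamble, r=lambda x: x))        # 50.0, as in expected utility theory
    print(reu(gamble, r=lambda x: x ** 2))   # 25.0, for a risk-avoidant agent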

There are then many questions to answer about this framework. For example:

  • What do these measures look like?
    (Work in progress with Ben Levinstein.)
  • Is a risk-aware agent epistemically required to gather free evidence?
    Answer: no.
  • Is a risk-aware agent still epistemically required to update by conditionalization (the rule is stated below)?
    Answer: yes.
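For reference, conditionalization is the standard Bayesian updating rule: upon learning evidence E (and nothing more), one’s new credence in any proposition A should be one’s old conditional credence, P_new(A) = P(A | E) = P(A ∧ E) / P(E), provided P(E) > 0.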

Non-classical logics

Suppose you’re looking at a vaguely red wall and adopt the view that it is neither true nor false that the wall is red. How should you then represent your uncertainty about the redness of the wall? Or, more generally, what should your beliefs be like if the world is governed by some non-classical logic?

Robbie Williams has noted that the accuracy arguments apply in these settings if we assign particular numerical values to the non-classical truth values, e.g. to “neither”, and degrees of belief are still given by numerical functions. I put some pressure on this assumption and offer some alternative models of belief. For such non-standard models, then, the accuracy arguments need to be reconsidered.
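As a minimal sketch of this style of move (the value 0.5 assigned to “neither” is my illustrative choice, not a claim about what any particular paper uses):

    # Assign a number to each truth value and score a credence by its
    # squared distance from that number (a Brier-style inaccuracy score).

    TRUTH_VALUE = {"true": 1.0, "false": 0.0, "neither": 0.5}  # illustrative

    def inaccuracy(credence: float, status: str) -> float:
        """Brier-style inaccuracy of a credence, given a possibly
        non-classical truth status for the proposition."""
        return (credence - TRUTH_VALUE[status]) ** 2

    # A credence of 0.5 in "the wall is red" counts as perfectly accurate
    # at a world where that proposition is neither true nor false:
    print(inaccuracy(0.5, "neither"))  # 0.0
    print(inaccuracy(0.9, "neither"))  # ~0.16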

Imprecise probabilities

In imprecise probability models we represent one’s beliefs with a set of probability functions. I suggest that in such settings a measure of accuracy should apply directly only to the precise probabilities. To get accuracy-theoretic advice for imprecise agents, we then need to ask how an imprecise agent should evaluate the various imprecise options. My suggestion is that she does so by asking the precise members of her set what they recommend and taking the resulting set of verdicts. This then delivers very natural results.
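Here is a minimal sketch of that evaluation rule for the simplest case, a single proposition and a precise rival credence as the option being assessed; the names and details are mine.

    # A credal state is a set of precise probabilities; it evaluates an
    # option by collecting the verdicts of its precise members.

    def expected_inaccuracy(p: float, q: float) -> float:
        """Expected Brier inaccuracy of credence q, by the lights of credence p."""
        return p * (q - 1) ** 2 + (1 - p) * q ** 2

    def imprecise_evaluation(credal_set, option):
        """Ask each precise member for its verdict; return the set of verdicts."""
        return {expected_inaccuracy(p, option) for p in credal_set}

    credal_set = {0.3, 0.4, 0.5}   # imprecise credence in a single proposition
    print(imprecise_evaluation(credal_set, 0.4))
    # The credal state's evaluation of the option is the set of its
    # members' expected-inaccuracy verdicts, not a single number.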