What does it even mean to "believe" something?



I've done three posts recently that dealt with the issue of "beliefs". First I talked about "derp", which I defined as the tedious repetition of beliefs too strong to be swayed by evidence. Then I jumped back into the blogosphere discussion of whether bets reveal beliefs. Finally, I asked whether inflationistas really believe their warnings of inflation.

But in all these discussions, there's been a more subtle and fundamental issue nagging at me: What does it even mean to "believe" something in the first place? This question seems like a trivial semantic issue, but it's deeply important for all kinds of questions in economics, from Bayesian inference to behavioral economics to the axioms of standard consumer choice theory and game theory.

And it's a question to which I don't really know a satisfactory answer. I am not sure what it means to "believe" something.

One idea of belief is a feeling of certitude. I may have a strong emotional reaction of "Yeah!" to one statement, and a reaction of "No way!" to another. For example, if you say "The sun rises in the east" I feel a feeling of "Yep!", but if you say "The sun rises in the west", I feel a feeling of "No way!" So is this a good definition of "belief"?

Not necessarily. First off, it can't be measured very precisely. Suppose I'm trying to decide whether I believe there's a 50% chance of rain tomorrow, or a 60% chance. My feeling of certitude might be about the same for both those propositions, and there's no way to tell which I "believe" more.

Also, certitude may not be invariant to the situation in which the question is posed. When I am actually required to act, my feeling of certitude may vanish. A great example is this recent study on partisan differences in survey responses:
But when there was money on the line, the size of the [partisan answer] gaps [on factual questions] shrank by 55 percent. The researchers ran another experiment, in which they increased the odds of winning for those who answered the questions correctly but also offered a smaller reward to those who answered “don’t know” rather than answering falsely. The partisan gaps narrowed by 80 percent.
This illustrates the conflict between what I call "Tribal Reality" and "Extant Reality". In response to a statement like "Global warming is a myth", conservatives may feel an upwelling of emotional certitude, due to their tribal affinity with a movement that has long sought to deny or downplay global warming. When there's nothing on the line, that feeling of certitude will determine the response to survey questions. But when there are actual consequences for getting the question right or wrong - when Extant Reality comes into the picture, in other words - emotional certitude may take a back seat.

OK, then how about the idea of a belief as "the degree to which you're willing to bet on something"? That seems like a reasonable definition, but it has big problems too. First of all, single bets can be hedged by outside bets, as I pointed out in the discussion on whether bets reveal beliefs. In that case, bets are not informative. Second of all, even if they are not hedged, bets depend on personal psychological characteristics like risk aversion and loss aversion and ambiguity aversion. In other words, bets will always depend on preferences. Since preferences depend on many outside things, a definition of beliefs that includes preferences will again result in "beliefs" changing depending on totally unrelated things, like whether I lose my job.
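To see why hedging drains a bet of its information content, here's a minimal sketch with made-up numbers: a loud public bet on high inflation, quietly offset by an opposite position, pays the same no matter what happens, so the bet by itself tells you nothing about what I "believe".

```python
# Minimal sketch with made-up numbers: a public bet on high inflation,
# hedged by an offsetting position, yields the same net payoff either way,
# so the public bet reveals nothing about what I actually "believe".

def net_payoff(high_inflation: bool) -> float:
    # Public bet: I win $100 if inflation comes in high, lose $100 otherwise.
    public_bet = 100 if high_inflation else -100
    # Quiet hedge: an offsetting position that pays off when inflation is low.
    hedge = -100 if high_inflation else 100
    return public_bet + hedge

print(net_payoff(high_inflation=True))   # 0
print(net_payoff(high_inflation=False))  # 0
```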

OK, well how about the notion of "probability" from Bayesian inference? In Bayesian probability theory, a probability and a belief are the same thing. I used this concept in my definition of "derp" (a "prior" and a "posterior" are both "probabilities"), but I have to admit that here too, I was working with a term without being sure of its usefulness.

In Bayesian probability theory, you assign a number to an event. That number is a "probability", and there are rules for how to update it in response to new data. But suppose you ask me to assign a probability to the event of the Republicans winning the election, and I say "I think there's a 120 percent chance!" Obviously I'm just saying words that I heard somewhere, and obviously my notion of what a "percent chance" means is very different from that of, say, most statisticians. I can feed a probability of 1.2 into Bayes' Rule, sure, but does the output of that exercise deserve to be called a "belief"?
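Just to make that concrete, here's a minimal sketch (the likelihoods are made-up numbers): the arithmetic of Bayes' Rule goes through perfectly well with a "prior" of 1.2, but the number that comes out isn't a probability, let alone a belief.

```python
# Minimal sketch: mechanically plugging a "prior" of 1.2 into Bayes' Rule.
# The likelihoods (0.2 and 0.8) are made up purely for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    # Posterior P(H | data) = P(data | H) P(H) / P(data)
    numerator = likelihood_if_true * prior
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return numerator / evidence

print(bayes_update(0.6, 0.2, 0.8))  # 0.27... -- a legitimate probability
print(bayes_update(1.2, 0.2, 0.8))  # 3.0    -- not a probability at all
```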

OK, so suppose you tell me "No, silly, you have to give a number between 0% and 100%. That's how percents work!" So I think carefully for a second, and say "OK, I think there's a 99.999% chance that the Republicans will win the election." But obviously I am just repeating another popular catchphrase here. My number comes from my emotional feeling of certitude, not from any sort of internal engagement with Extant Reality.

Now of course this example is of a silly survey respondent, but in a subtler way it applies to mathematically sophisticated people too, even Bayesian statisticians! As Larry Wasserman points out, statisticians conducting a Bayesian inference often choose the prior based on attractive formal properties, like "uninformativeness" with respect to some function of the parameters. If I choose my "prior" based on some consideration that has nothing to do with the question at hand, can the "prior" really be said to constitute a "belief"? This sort of "belief" is just as unstable as the others. (Also note that it lacks any emotional certitude, and you probably wouldn't bet on it either.)
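Here's a minimal sketch of that point, using the textbook Beta-binomial setup with hypothetical data: two standard "uninformative" priors, each chosen for a formal property rather than for anything anyone believes about the coin, lead to different posteriors.

```python
# Minimal sketch, with hypothetical data: the prior for a coin's bias is
# chosen for a formal property (Jeffreys' invariance, or flatness), not
# because it expresses any actual belief -- yet the posterior depends on it.

def posterior_mean(a, b, heads, tails):
    # Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails) posterior
    return (a + heads) / (a + b + heads + tails)

heads, tails = 3, 1  # hypothetical coin flips

print(posterior_mean(0.5, 0.5, heads, tails))  # Jeffreys prior: 0.7
print(posterior_mean(1.0, 1.0, heads, tails))  # flat prior:     0.666...
```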

So all of our intuitive definitions of "belief" will sometimes rely on external conditions that have nothing to do with the statement about which we are trying to determine our "belief". It seems to me that whatever my true "belief" about statement X is, it should (for most types of X) have nothing to do with whether I'm in a good or bad mood that day, or whether the question is framed using politically incendiary language, or whether my financial portfolio is net long inflation, or whether I am a risk-averse person, or whether I'm trying to use that "belief" to publish an empirical paper.

And yet I cannot think of any definition of belief that satisfies those invariance criteria. Furthermore, all of our intuitive definitions of "belief" seem to conflict with each other pretty severely in certain situations.

So I'm still not really sure what it means to "believe" something.
