|
17 | 17 | "language": "python",
|
18 | 18 | "metadata": {},
|
19 | 19 | "outputs": [],
|
20 |
| - "prompt_number": 1 |
| 20 | + "prompt_number": 2 |
21 | 21 | },
|
22 | 22 | {
|
23 | 23 | "cell_type": "markdown",
|
|
35 | 35 | "The Philosophy of Bayesian Inference\n",
|
36 | 36 | "------\n",
|
37 | 37 | "\n",
|
| 38 | + "Before we explore Bayesian solutions, it would help to ask a question first: what is the inference problem? This is a very broad question, so I'll provide a broad answer. Inference is the task of discovering non-observed variables from information that is *somehow* related. One can think of it as the reverse-engineering problem: given output, infer input. This broad, definition provides enough room for many candidate solutions to the inference problem. One such solution is Bayesian methods. Consider the following vignette:\n", |
| 39 | + "\n", |
| 40 | + "\n", |
38 | 41 | " \n",
|
39 | 42 | "> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there are may be no bugs present...\n",
|
40 | 43 | "\n",
|
|
57 | 60 | "\n",
|
58 | 61 | "Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occuring, because they possess different *information* about the world.\n",
|
59 | 62 | "\n",
|
60 |
| - "Think about how we can extend this definition of probability to events that are not *really* random. That is, we can extend this to anything that is fixed, but we are unsure about: \n", |
| 63 | + "- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of heads if 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either heads or tails. Now what is *your* belief that the coin is heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n", |
61 | 64 | "\n",
|
62 | 65 | "- Your code either has a bug in it or not, but we do not know for certain which is true. Though we have a belief about the presence or absence of a bug. \n",
|
63 | 66 | "\n",
|
|
71 | 74 | "\n",
|
72 | 75 | "John Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even --especially-- if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the *prior probability*. For example, consider the posterior probabilities (read: posterior belief) of the above examples, after observing some evidence $X$.:\n",
|
73 | 76 | "\n",
|
74 |
| - "1\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n", |
| 77 | + "1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being heads. $P(A | X):\\;\\;$ You look at the coin, observe a heads, denote this information $X$, and trivially assign probability 1.0 to heads and 0.0 to tails.\n", |
| 78 | + "\n", |
| 79 | + "2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n", |
75 | 80 | "\n",
|
76 |
| - "2\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n", |
| 81 | + "3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n", |
77 | 82 | "\n",
|
78 |
| - "3\\. $P(A):\\;\\;$ That beautiful girl in your class probably doesn't have a crush on you. $P(A | X): \\;\\;$ She sent you an SMS message about this Friday night. Interesting... \n", |
| 83 | + "4\\. $P(A):\\;\\;$ You believe that the probability that the lovely girl in your class likes you is low. $P(A | X): \\;\\;$ She sent you an SMS message about this Friday night. Interesting... \n", |
79 | 84 | "\n",
|
80 | 85 | "It's clear that in each example we did not completely discard the prior belief after seeing new evidence, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n",
|
81 | 86 | "\n",
|
|