Commit 0688fcd

adding autocorrelation to chapter 3
1 parent 807d15d commit 0688fcd

5 files changed: +394 additions, -120 deletions


Chapter1_Introduction/Chapter1_Introduction.ipynb

Lines changed: 10 additions & 5 deletions
@@ -17,7 +17,7 @@
 "language": "python",
 "metadata": {},
 "outputs": [],
-"prompt_number": 1
+"prompt_number": 2
 },
 {
 "cell_type": "markdown",
@@ -35,6 +35,9 @@
 "The Philosophy of Bayesian Inference\n",
 "------\n",
 "\n",
+"Before we explore Bayesian solutions, it would help to ask a question first: what is the inference problem? This is a very broad question, so I'll provide a broad answer. Inference is the task of discovering non-observed variables from information that is *somehow* related. One can think of it as a reverse-engineering problem: given output, infer input. This broad definition provides enough room for many candidate solutions to the inference problem. One such solution is Bayesian methods. Consider the following vignette:\n",
+"\n",
+"\n",
 " \n",
 "> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs present...\n",
 "\n",
@@ -57,7 +60,7 @@
 "\n",
 "Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world.\n",
 "\n",
-"Think about how we can extend this definition of probability to events that are not *really* random. That is, we can extend this to anything that is fixed, but we are unsure about: \n",
+"- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either heads or tails. Now what is *your* belief that the coin is heads? My knowledge of the outcome has not changed the coin's result. Thus we assign different probabilities to the result. \n",
 "\n",
 "- Your code either has a bug in it or not, but we do not know for certain which is true. Though we have a belief about the presence or absence of a bug. \n",
 "\n",
@@ -71,11 +74,13 @@
 "\n",
 "John Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even -- especially -- if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the *prior probability*. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n",
 "\n",
-"1\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n",
+"1\\. $P(A): \\;\\;$ The coin has a 50 percent chance of being heads. $P(A | X):\\;\\;$ You look at the coin, observe a heads, denote this information $X$, and trivially assign probability 1.0 to heads and 0.0 to tails.\n",
+"\n",
+"2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n",
 "\n",
-"2\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n",
+"3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n",
 "\n",
-"3\\. $P(A):\\;\\;$ That beautiful girl in your class probably doesn't have a crush on you. $P(A | X): \\;\\;$ She sent you an SMS message about this Friday night. Interesting... \n",
+"4\\. $P(A):\\;\\;$ You believe that the probability that the lovely girl in your class likes you is low. $P(A | X): \\;\\;$ She sent you an SMS message about this Friday night. Interesting... \n",
 "\n",
 "It's clear that in each example we did not completely discard the prior belief after seeing new evidence, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n",
 "\n",

Chapter3_MCMC/IntroMCMC.ipynb

Lines changed: 174 additions & 83 deletions
Large diffs are not rendered by default.
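GitHub hides this diff, but per the commit message the notebook gains material on autocorrelation. As a rough illustration of what sample autocorrelation measures (an independent sketch assuming NumPy, not the notebook's actual code):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a 1-D series at lags 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
white = rng.normal(size=10_000)   # independent draws: no memory
walk = np.cumsum(white)           # a random walk: strong memory
print(autocorr(white, 1)[1])      # near 0
print(autocorr(walk, 1)[1])       # near 1
```

A chain of independent samples has near-zero autocorrelation at positive lags, while a slowly-mixing series (like the random walk) stays highly correlated with its recent past.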

Chapter4_TheGreatestTheoremNeverTold/LawOfLargeNumbers.ipynb

Lines changed: 204 additions & 29 deletions
Large diffs are not rendered by default.
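This notebook's diff is also hidden, but its subject, the Law of Large Numbers, is easy to illustrate numerically. A minimal sketch (the Poisson rate 4.5 is an arbitrary choice for illustration, not taken from the notebook):

```python
import numpy as np

# Law of Large Numbers: the average of many i.i.d. draws converges
# to the true expected value.
rng = np.random.default_rng(42)
draws = rng.poisson(lam=4.5, size=100_000)
running_mean = np.cumsum(draws) / np.arange(1, draws.size + 1)

print(running_mean[100])   # still noisy after ~100 draws
print(running_mean[-1])    # very close to 4.5 after 100,000 draws
```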

Chapter5_LossFunctions/LossFunctions.ipynb

Lines changed: 2 additions & 2 deletions
@@ -31,13 +31,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Statisticians can be a sour bunch. Instead of considering their winnings, they only measure how much they are losing. In fact, they consider their wins to be negative loses. But what's interesting is *how they measure their losses.*\n",
+"Statisticians can be a sour bunch. Instead of considering their winnings, they only measure how much they are losing. In fact, they consider their wins as *negative losses*. But what's interesting is *how they measure their losses.*\n",
 "\n",
 "The author Nassim Taleb of *The Black Swan* and *Antifragile* stresses the importance of the *payoffs* of decisions, *not the accuracy*. For example, consider the following vignette:\n",
 "\n",
 "> A meteorologist is predicting the probability of a possible hurricane striking his city. He estimates, with 95% confidence, that the probability of it *not* striking is between 99% - 100%. He is very happy with his precision and advises the city that a major evacuation is unnecessary. Unfortunately, the hurricane does strike and the city is flooded. \n",
 "\n",
-"The stylized example shows the flaw in using a pure accuracy metric to measure how good your estimate is, while an appealing and *objective* thing to do, misses the point: results of decision making. Taleb goes on to say \"I would rather be vaugely right than very wrong.\" "
+"This stylized example shows the flaw in using a pure accuracy metric to measure outcomes. Using a measure that emphasizes estimation accuracy, while an appealing and *objective* thing to do, misses the point of why you are even performing the statistical inference in the first place: the results of decision making. Taleb distills this quite succinctly: \"I would rather be vaguely right than very wrong.\" "
 ]
 },
 {
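The payoffs-over-accuracy point in this cell is often formalized with an asymmetric loss function. A hypothetical sketch (the penalty factor and function names are invented for illustration, not from the chapter):

```python
def squared_loss(estimate, truth):
    # Symmetric: over- and under-estimates are penalized equally.
    return (estimate - truth) ** 2

def storm_loss(estimate, truth, under_penalty=100.0):
    # Asymmetric (hypothetical): under-estimating the danger is far
    # more costly than over-estimating it.
    err = estimate - truth
    return under_penalty * err ** 2 if err < 0 else err ** 2

# The meteorologist's "accurate" 1% estimate vs. the realized strike (truth = 1):
print(squared_loss(0.01, 1.0))  # treats the miss like any other error
print(storm_loss(0.01, 1.0))    # the asymmetric loss makes it catastrophic
```

Under such a loss, the estimate that minimizes expected loss is no longer the most "accurate" one; it is pulled toward the side where errors hurt less.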

README.md

Lines changed: 4 additions & 1 deletion
@@ -53,7 +53,10 @@ Interactive notebooks + examples can be downloaded by cloning! )
 We explore the gritty details of PyMC through code and examples. Examples include:
 - Analysis on real-time GitHub repo stars and forks.
 
-
+**More questions about PyMC?**
+Please post your modeling, convergence, or any other PyMC question on [cross-validated](http://stats.stackexchange.com/), the statistics stack-exchange.
+
+
 Using the book
 -------
 
0 commit comments
