
Commit 2511eb3

Merge pull request CamDavidsonPilon#53 from sash-ko/master
Chapter 2 fixes
2 parents 55abdff + ec5fd0f commit 2511eb3

File tree

1 file changed: +18 -14 lines changed


Chapter2_MorePyMC/MorePyMC.ipynb

@@ -574,13 +574,13 @@
 "\n",
 "3. Do we know $\\lambda$? No. In fact, we have a suspicion that there are *two* $\\lambda$ values, one for the earlier behaviour and one for the latter behaviour. We don't know when the behaviour switches though, but call the switchpoint $\\tau$.\n",
 "\n",
-"4. What is a good distribution for the two $\\lambda$s? The exponential is good, as it assigns probabilites to positive real numbers. Well the exponential distribution has a parameter too, call it $\\alpha$.\n",
+"4. What is a good distribution for the two $\\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\\alpha$.\n",
 "\n",
 "5. Do we know what the parameter $\\alpha$ might be? No. At this point, we could continue and assign a distribution to $\\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\\lambda$, (\"it probably changes over time\", \"it's likely between 10 and 30\", etc.), we don't really have any strong beliefs about $\\alpha$. So it's best to stop here. \n",
 "\n",
 " What is a good value for $\\alpha$ then? We think that the $\\lambda$s are between 10-30, so if we set $\\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similar, a too-high alpha misses our prior belief as well. A good idea for $\\alpha$ as to reflect our belief is to set the value so that the mean of $\\lambda$, given $\\alpha$, is equal to our observed mean. This was shown in the last chapter.\n",
 "\n",
-"6. We have no expert opinion of when $\\tau$ might have occured. So we will suppose $\\tau$ is from a discrete uniform distribution over the entire timespan.\n",
+"6. We have no expert opinion of when $\\tau$ might have occurred. So we will suppose $\\tau$ is from a discrete uniform distribution over the entire timespan.\n",
 "\n",
 "\n",
 "Below we give a graphical visualization of this, where arrows denote `parent-child` relationships. (provided by the [Daft Python library](http://daft-pgm.org/) )\n",
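The heuristic in step 5 of the quoted text, choosing $\alpha$ so that the prior mean of $\lambda$ matches the observed mean, can be sketched in a few lines. This is a minimal sketch, not the book's code: the `count_data` values below are made-up stand-ins for the notebook's text-message counts.

```python
import numpy as np

# Hypothetical daily counts standing in for the notebook's text-message data.
count_data = np.array([13, 24, 8, 24, 7, 35, 14, 11, 15, 37, 20])

# An Exponential(alpha) prior on lambda has mean 1/alpha, so matching the
# prior mean to the observed mean gives alpha = 1 / mean(count_data).
alpha = 1.0 / count_data.mean()

# Sanity check: draws from this prior average close to the observed mean.
rng = np.random.default_rng(0)
prior_draws = rng.exponential(scale=1.0 / alpha, size=200_000)
```

Setting the prior mean this way injects only the single observed summary statistic, which keeps the prior weakly informative rather than opinionated.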
@@ -786,7 +786,7 @@
 "binomial = stats.binom\n",
 "\n",
 "parameters = [ (10, .4) , (10, .9) ] \n",
-"colors = [\"#348ABD\", \"#A60628\", \"#7A68A6\"]\n",
+"colors = [\"#348ABD\", \"#A60628\"]\n",
 "\n",
 "for i in range(2):\n",
 " N, p = parameters[i]\n",
@@ -797,7 +797,6 @@
 " label = \"$N$: %d, $p$: %.1f\"%(N,p), \n",
 " linewidth=3)\n",
 " \n",
-"\n",
 "plt.legend(loc=\"upper left\")\n",
 "plt.xlim(0, 10.5)\n",
 "plt.xlabel(\"$k$\")\n",
@@ -823,7 +822,7 @@
 "\n",
 "There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \\sim \\text{Binomial}(N, p )$.\n",
 "\n",
-"The expected value of a Bernoulli random variable is $p$. This can be seen by noting the the more general Binomial random variable has expected value $Np$ and setting $N=1$."
+"The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$."
 ]
 },
 {
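The identity in the hunk above, a sum of $N$ Bernoulli($p$) variables is Binomial($N, p$) with expected value $Np$, is easy to verify numerically. A small sketch (not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 10, 0.4

# Each row holds N Bernoulli(p) draws; each row sum is one Binomial(N, p) draw.
row_sums = rng.binomial(1, p, size=(50_000, N)).sum(axis=1)

# The sample mean should sit near the theoretical expectation N * p = 4.
empirical_mean = row_sums.mean()
```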
@@ -1341,6 +1340,7 @@
 "\n",
 "def logistic( x, beta):\n",
 " return 1.0/( 1.0 + np.exp( beta*x) )\n",
+"\n",
 "x = np.linspace( -4, 4, 100 )\n",
 "plt.plot(x, logistic( x, 1), label = r\"$\\beta = 1$\")\n",
 "plt.plot(x, logistic( x, 3), label = r\"$\\beta = 3$\")\n",
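The `logistic` function in the hunk above is small enough to check directly. Note that in this parameterization a positive $\beta$ gives a curve that *decreases* in $x$; a sketch mirroring the notebook's definition:

```python
import numpy as np

def logistic(x, beta):
    # Same form as the notebook cell: 1 / (1 + exp(beta * x)).
    # Positive beta makes the curve fall from 1 toward 0 as x grows.
    return 1.0 / (1.0 + np.exp(beta * x))

x = np.linspace(-4, 4, 100)
y = logistic(x, 3)  # steeper transition than beta = 1
```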
@@ -1430,6 +1430,7 @@
 "collapsed": false,
 "input": [
 "import scipy.stats as stats\n",
+"\n",
 "nor = stats.norm\n",
 "x = np.linspace( -8, 7, 150 )\n",
 "mu = (-2, 0, 3)\n",
@@ -1442,8 +1443,6 @@
 " label =\"$\\mu = %d,\\;\\\\tau = %.1f$\"%(_mu, _tau), color = _color )\n",
 " plt.fill_between( x, nor.pdf( x, _mu, scale =1./_tau ), color = _color, \\\n",
 " alpha = .33)\n",
-" \n",
-"\n",
 "\n",
 "plt.legend(loc = \"upper right\")\n",
 "plt.xlabel(\"$x$\")\n",
@@ -1474,7 +1473,7 @@
 "\n",
 "\n",
 "\n",
-"Below we continue our modeling of the the Challenger space craft:"
+"Below we continue our modeling of the Challenger space craft:"
 ]
 },
 {
@@ -1634,6 +1633,7 @@
 "collapsed": false,
 "input": [
 "figsize( 12.5, 4)\n",
+"\n",
 "plt.plot( t, mean_prob_t, lw = 3, label = \"average posterior \\nprobability \\\n",
 "of defect\")\n",
 "plt.plot( t, p_t[0, :], ls=\"--\",label=\"realization from posterior\" )\n",
@@ -1671,6 +1671,7 @@
 "collapsed": false,
 "input": [
 "from scipy.stats.mstats import mquantiles\n",
+"\n",
 "# vectorized bottom and top 5% quantiles for \"confidence interval\"\n",
 "qs = mquantiles(p_t,[0.05,0.95],axis=0)\n",
 "plt.fill_between(t[:,0],*qs,alpha = 0.7,\n",
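The `mquantiles` call in the hunk above computes the per-column quantile band in one vectorized shot. A self-contained sketch, with a made-up `p_t` standing in for the notebook's matrix of posterior draws:

```python
import numpy as np
from scipy.stats.mstats import mquantiles

rng = np.random.default_rng(1)
# Made-up stand-in for p_t: 500 posterior draws at each of 40 time points.
p_t = rng.beta(2, 5, size=(500, 40))

# Bottom and top 5% quantiles computed down each column (axis=0), giving
# one (low, high) pair per time point, as in the notebook's credible band.
qs = mquantiles(p_t, [0.05, 0.95], axis=0)
```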
@@ -1723,6 +1724,7 @@
 "collapsed": false,
 "input": [
 "figsize(12.5, 2.5)\n",
+"\n",
 "prob_31 = logistic( 31, beta_samples, alpha_samples )\n",
 "\n",
 "plt.xlim( 0.995, 1)\n",
@@ -1746,7 +1748,7 @@
 "source": [
 "### Is our model appropriate?\n",
 "\n",
-"The skeptical reader will say \"You delibrately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?\" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\\; \\forall t$, which guarantees a defect always occuring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specificed all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's **goodness of fit**.\n",
+"The skeptical reader will say \"You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?\" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\\; \\forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specificed all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's **goodness of fit**.\n",
 "\n",
 "We can think: *how can we test whether our model is a bad fit?* An idea is to compare observed data (which if we recall is a *fixed* stochastic variable) with artifical dataset which we can simulate. The rational is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then likely our model is not accurately represented the observed data. \n",
 "\n",
@@ -1768,7 +1770,6 @@
 "simulated = mc.Bernoulli( \"bernoulli_sim\", p)\n",
 "N = 10000\n",
 "\n",
-"\n",
 "mcmc = mc.MCMC( [simulated, alpha, beta, observed ] )\n",
 "mcmc.sample( N )"
 ],
@@ -1798,6 +1799,7 @@
 "collapsed": false,
 "input": [
 "figsize(12.5, 5)\n",
+"\n",
 "simulations = mcmc.trace(\"bernoulli_sim\")[:]\n",
 "print simulations.shape\n",
 "\n",
@@ -1833,7 +1835,7 @@
 "\n",
 "We wish to assess how good our model is. \"Good\" is a subjective term of course, so results must be relative to other models. \n",
 "\n",
-"We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use *Bayesian p-values*. These are still subjective, as the proper cutoff between good and bad is artibitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.\n",
+"We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use *Bayesian p-values*. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.\n",
 "\n",
 "The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf), but I'll summarize their use here.\n",
 "\n",
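The goodness-of-fit idea quoted above, simulating artificial datasets from the fitted model and comparing them to the observed data, can be sketched without PyMC. The names `posterior_probability` and `D` echo the notebook, but the values below are made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up stand-ins: posterior defect probabilities and 0/1 outcomes for
# 23 launches (the notebook's posterior_probability and D).
posterior_probability = rng.uniform(0.05, 0.6, size=23)
D = rng.binomial(1, posterior_probability)

# Posterior-predictive style check: simulate many artificial datasets from
# the model's probabilities and compare their defect totals to the observed.
simulated = rng.binomial(1, posterior_probability, size=(10_000, 23))
sim_totals = simulated.sum(axis=1)
observed_total = D.sum()

# If the observed total sits deep in a tail of sim_totals, the model is a
# poor statistical match for the data it was fit to.
tail_prob = (sim_totals >= observed_total).mean()
```

A separation plot conveys the same comparison graphically, per observation rather than via a single tail probability.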
@@ -1962,9 +1964,10 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"figsize( 11., 1.5 )\n",
 "from separation_plot import separation_plot\n",
 "\n",
+"figsize( 11., 1.5 )\n",
+"\n",
 "separation_plot(posterior_probability, D )"
 ],
 "language": "python",
@@ -2046,7 +2049,7 @@
 "source": [
 "In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.\n",
 "\n",
-"The the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot infer any scientific inference from it."
+"The perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot infer any scientific inference from it."
 ]
 },
 {
@@ -2071,6 +2074,7 @@
 "input": [
 "#type your code here.\n",
 "figsize(12.5, 4 )\n",
+"\n",
 "plt.scatter( alpha_samples, beta_samples, alpha = 0.1 )\n",
 "plt.title( \"Why does the plot look like this?\" )\n",
 "plt.xlabel( r\"$\\alpha$\")\n",
@@ -2200,4 +2204,4 @@
 "metadata": {}
 }
 ]
-}
+}
