
Commit 86bc243

Merge pull request CamDavidsonPilon#214 from mksenzov/master
A few typos and corrections
2 parents d8faef1 + d235577 commit 86bc243

File tree

4 files changed: +7 -6 lines changed

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -2,3 +2,4 @@
 *.pyc
 *~
 *.png
+**/.ipynb_checkpoints

Chapter1_Introduction/Chapter1_Introduction.ipynb

Lines changed: 1 addition & 1 deletion
@@ -702,7 +702,7 @@
 "source": [
 "This code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n",
 "\n",
-"`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. "
+"`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. Deterministic functions will be covered in Chapter 2. "
 ]
 },
 {
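The cell above refers to the chapter's switchpoint model. As a minimal sketch of the decorator in PyMC 2's API (the unit-rate priors and `n_count_data` below are hypothetical stand-ins, not the notebook's exact code):

    import numpy as np
    import pymc as pm

    n_count_data = 70  # hypothetical number of observations

    lambda_1 = pm.Exponential("lambda_1", 1.0)
    lambda_2 = pm.Exponential("lambda_2", 1.0)
    tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data)

    @pm.deterministic
    def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
        # Deterministic given its arguments: the same (tau, lambda_1,
        # lambda_2) always yield the same output array.
        out = np.zeros(n_count_data)
        out[:tau] = lambda_1  # rate before the switchpoint
        out[tau:] = lambda_2  # rate after it
        return out

Because `tau`, `lambda_1` and `lambda_2` are random, `lambda_` is random too; the decorator only records that the mapping itself is deterministic.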

Chapter2_MorePyMC/MorePyMC.ipynb

Lines changed: 4 additions & 4 deletions
@@ -150,7 +150,7 @@
 "\n",
 "`some_variable = pm.DiscreteUniform(\"discrete_uni_var\", 0, 4)`\n",
 "\n",
-"where 0, 4 are the `DiscreteUniform`-specific lower and upper bound on the random variable. The [PyMC docs](http://pymc-devs.github.com/pymc/distributions.html) contain the specific parameters for stochastic variables. (Or use `??` if you are using IPython!)\n",
+"where 0, 4 are the `DiscreteUniform`-specific lower and upper bounds on the random variable. The [PyMC docs](http://pymc-devs.github.com/pymc/distributions.html) contain the specific parameters for stochastic variables. (Or use `object??`, for example `pm.DiscreteUniform??`, if you are using IPython!)\n",
 "\n",
 "The `name` attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the `name`.\n",
 "\n",
@@ -253,7 +253,7 @@
 "\n",
 "For all purposes, we can treat the object `some_deterministic_var` as a variable and not a Python function. \n",
 "\n",
-"Prepending with the wrapper is the easiest way, but not the only way, to create deterministic variables. This is not completely true: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:"
+"Prepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:"
 ]
 },
 {
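The implicit creation mentioned in the revised cell can be sketched like this (PyMC 2 assumed):

    import pymc as pm

    lambda_1 = pm.Exponential("lambda_1", 1.0)
    lambda_2 = pm.Exponential("lambda_2", 1.0)

    # Adding two stochastic variables implicitly creates a deterministic
    # variable; no @pm.deterministic wrapper is required.
    new_deterministic_variable = lambda_1 + lambda_2
    print(type(new_deterministic_variable))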
@@ -1082,7 +1082,7 @@
 "\n",
 "$$P( X = k ) = {{N}\\choose{k}} p^k(1-p)^{N-k}$$\n",
 "\n",
-"If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \\sim \\text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \\le X \\le N$). The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters. \n"
+"If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \\sim \\text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \\le X \\le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the probability mass distribution for varying parameters. "
 ]
 },
 {
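The plot referenced at the end of the cell can be sketched with `scipy.stats`; the `N` and `p` values here are illustrative choices, not the book's exact figure:

    import numpy as np
    import scipy.stats as stats
    import matplotlib.pyplot as plt

    N = 10
    k = np.arange(N + 1)
    for p in [0.4, 0.9]:
        pmf = stats.binom.pmf(k, N, p)
        plt.bar(k, pmf, alpha=0.6, label="$N=%d, p=%.1f$" % (N, p))
        # Sanity check: the expected value of Bin(N, p) is Np.
        assert abs(np.sum(k * pmf) - N * p) < 1e-9

    plt.xlabel("$k$")
    plt.ylabel("$P(X = k)$")
    plt.legend()
    plt.show()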
@@ -2613,4 +2613,4 @@
 "metadata": {}
 }
 ]
-}
+}

Chapter3_MCMC/IntroMCMC.ipynb

Lines changed: 1 addition & 1 deletion
@@ -145,7 +145,7 @@
 "\n",
 "If these surfaces describe our *prior distributions* on the unknowns, what happens to our space after we incorporate our observed data $X$? The data $X$ does not change the space, but it changes the surface of the space by *pulling and stretching the fabric of the prior surface* to reflect where the true parameters likely live. More data means more pulling and stretching, and our original shape becomes mangled or insignificant compared to the newly formed shape. Less data, and our original shape is more present. Regardless, the resulting surface describes the *posterior distribution*. \n",
 "\n",
-"Again I must stress that it is, unfortunately, impossible to visualize this in large dimensions. For two dimensions, the data essentially *pushes up* the original surface to make *tall mountains*. The tendency of the observed data to *push up* the posterior probability in certain areas is checked by the prior probability distribution, so that less prior probability means more resistance. Thus in the double-exponential prior case above, a mountain (or multiple mountains) that might erupt near the (0,0) corner would be much higher than mountains that erupt closer to (5,5), since there is more resistance (low prior probability) near (5,5). The peak reflects the posterior probability of where the true parameters are likely to be found. Importantly, if the prior has assigned a probability of 0, then no posterior probability will be assigned there. \n",
+"Again I must stress that it is, unfortunately, impossible to visualize this in large dimensions. For two dimensions, the data essentially *pushes up* the original surface to make *tall mountains*. The tendency of the observed data to *push up* the posterior probability in certain areas is checked by the prior probability distribution, so that lower prior probability means more resistance. Thus in the double-exponential prior case above, a mountain (or multiple mountains) that might erupt near the (0,0) corner would be much higher than mountains that erupt closer to (5,5), since there is more resistance (low prior probability) near (5,5). The peak reflects the posterior probability of where the true parameters are likely to be found. Importantly, if the prior has assigned a probability of 0, then no posterior probability will be assigned there. \n",
 "\n",
 "Suppose the priors mentioned above represent different parameters $\\lambda$ of two Poisson distributions. We observe a few data points and visualize the new landscape: "
 ]
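The "pulling and stretching" this cell describes can be made concrete on a grid: multiply the prior surface by the Poisson likelihood of the observations to get the unnormalized posterior surface. A sketch, assuming unit-rate exponential priors and small hypothetical data sets:

    import numpy as np
    import scipy.stats as stats

    # Hypothetical observations from the two Poisson distributions.
    data_1 = np.array([3, 6, 4])
    data_2 = np.array([1, 2, 1])

    lam = np.linspace(0.01, 5, 100)

    # Exponential prior surface over (lambda_1, lambda_2).
    prior = np.outer(stats.expon.pdf(lam, scale=1.0),
                     stats.expon.pdf(lam, scale=1.0))

    # Poisson likelihood of each data set along the grid.
    like_1 = np.array([stats.poisson.pmf(data_1, v).prod() for v in lam])
    like_2 = np.array([stats.poisson.pmf(data_2, v).prod() for v in lam])

    # The data "push up" the prior: the unnormalized posterior surface.
    posterior = prior * np.outer(like_1, like_2)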
