Chapter6_Priorities/Priors.ipynb (36 additions, 11 deletions)
2. There typically exist conjugate priors for simple, one-dimensional problems. For larger problems involving more complicated structures, there is little hope of finding a conjugate prior. For smaller models, Wikipedia has a nice [table of conjugate priors](http://en.wikipedia.org/wiki/Conjugate_prior#Table_of_conjugate_distributions).
Really, conjugate priors are only useful for their mathematical convenience: it is simple to go from prior to posterior. I personally see conjugate priors as only a neat mathematical trick that offers little insight into the problem at hand.
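That convenience is easy to see with the Beta-Bernoulli pair (the first row of the Wikipedia table above). The following sketch is not from the notebook; the function name is illustrative. With a Beta prior on a Bernoulli parameter, moving from prior to posterior is just adding the observed head and tail counts to the Beta's parameters:

```python
import numpy as np

def beta_bernoulli_update(a, b, data):
    """Posterior of a Beta(a, b) prior after observing Bernoulli draws.

    Conjugacy means no integration is needed: the posterior is
    Beta(a + heads, b + tails).
    """
    heads = int(np.sum(data))
    tails = len(data) - heads
    return a + heads, b + tails

data = np.array([1, 0, 1, 1, 0, 1])                 # 4 heads, 2 tails
a_post, b_post = beta_bernoulli_update(1, 1, data)  # flat Beta(1, 1) prior
print(a_post, b_post)                               # -> 5 3
print(a_post / (a_post + b_post))                   # posterior mean -> 0.625
```

The same update applied to a model with any real structure (hierarchies, transformed parameters) has no such closed form, which is why the text dismisses conjugacy for larger problems.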
## Jeffreys Priors

Earlier, we talked about objective priors rarely being *objective*. Partly what we mean by this is that we want a prior that doesn't bias our posterior estimates. The flat prior seems like a reasonable choice, as it assigns equal probability to all values.
But the flat prior is not transformation invariant. What does this mean? Suppose we have a random variable $\bf X$ from Bernoulli($\theta$). We define the prior as $p(\theta) = 1$.
PUT PLOT OF THETA HERE
Now, let's transform $\theta$ with the function $\psi = \log \frac{\theta}{1-\theta}$. This is just a function to stretch $\theta$ across the real line. Now, how likely are different values of $\psi$ under our transformation?
PUT PLOT OF PSI HERE
Oh no! Our function is no longer flat. It turns out flat priors do carry information in them after all. The point of Jeffreys priors is to create priors that don't accidentally become informative when you transform the variables they were originally placed on.
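The effect is easy to check numerically. In the sketch below (my own, not the notebook's missing plots), we sample $\theta$ from the flat prior, push the samples through $\psi = \log \frac{\theta}{1-\theta}$, and bin the results: the implied density of $\psi$ is the logistic density, peaked at 0 rather than flat.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 1, size=100_000)   # samples from the flat prior on theta
psi = np.log(theta / (1 - theta))         # the log-odds transform

# Bin the transformed samples: if the implied prior on psi were flat,
# all bins of equal width would have roughly equal density.
hist, edges = np.histogram(psi, bins=[-6.0, -2.0, 2.0, 6.0], density=True)
print(hist)  # the middle bin is several times taller than the outer two
```

About 76% of the mass lands in $[-2, 2]$, so a "non-informative" flat prior on $\theta$ is strongly informative about $\psi$.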
0 commit comments