
Commit 5832322

update docs

1 parent ad9173f

File tree

6 files changed: +67 -40 lines changed


Localization/Kalmanfilter_basics.ipynb

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 2,
+  "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
@@ -288,7 +288,7 @@
    "$$ \n",
    "f(x, \\mu, \\sigma) = \\frac{1}{\\sigma\\sqrt{2\\pi}} \\exp\\big [{-\\frac{(x-\\mu)^2}{2\\sigma^2} }\\big ]\n",
    "$$\n",
-   "Range is [$-\\inf,\\inf $]\n",
+   "Range is $$[-\\inf,\\inf] $$\n",
    "\n",
    "\n",
    "This is just a function of mean($\\mu$) and standard deviation ($\\sigma$) and what gives the normal distribution the charecteristic **bell curve**. "

Localization/Kalmanfilter_basics_2.ipynb

Lines changed: 4 additions & 4 deletions
@@ -13,7 +13,7 @@
   "source": [
    " ### Probabilistic Generative Laws\n",
    " \n",
-   "**1st Law**:\n",
+   "#### 1st Law:\n",
    "The belief representing the state $x_{t}$, is conditioned on all past states, measurements and controls. This can be shown mathematically by the conditional probability shown below:\n",
    "\n",
    "$$p(x_{t} | x_{0:t-1},z_{1:t-1},u_{1:t})$$\n",
@@ -31,7 +31,7 @@
    "\n",
    "$$p(x_{t} | x_{0:t-1},z_{1:t-1},u_{1:t})=p(x_{t} | x_{t-1},u_{t})$$\n",
    "\n",
-   "**2nd Law**:\n",
+   "#### 2nd Law:\n",
    "\n",
    "If $x_{t}$ is complete, then:\n",
    "\n",
@@ -84,7 +84,7 @@
    "### Bayes Rule:\n",
    "\n",
    "\n",
-   "Posterior = $\\frac{Likelihood*Prior}{Marginal} $\n",
+   "Posterior = $$\\frac{Likelihood*Prior}{Marginal} $$\n",
    "\n",
    "Here,\n",
    "\n",
@@ -274,7 +274,7 @@
    "\\end{aligned}$$\n",
    "\n",
    "\n",
-   "**$K$ is the *Kalman gain*. It's the crux of the Kalman filter. It is a scaling term that chooses a value partway between $\\mu_z$ and $\\bar\\mu$.**"
+   "$K$ is the *Kalman gain*. It's the crux of the Kalman filter. It is a scaling term that chooses a value partway between $\\mu_z$ and $\\bar\\mu$."
   ]
  },
 {

docs/jupyternotebook2rst.py

Lines changed: 23 additions & 4 deletions
@@ -1,6 +1,6 @@
-"""
+"""
 
-Jupyter notebook converter to rst file
+Jupyter notebook converter to rst file
 
 author: Atsushi Sakai
 
@@ -19,8 +19,22 @@ def get_notebook_path_list(ndir):
     return path
 
 
+def convert_rst(rstpath):
+    with open(rstpath, "r") as file:
+        filedata = file.read()
+
+    # convert from code directive to code-block
+    # because showing code in Sphinx
+    before = ".. code:: ipython3"
+    after = ".. code-block:: ipython3"
+    filedata = filedata.replace(before, after)
+
+    with open(rstpath, "w") as file:
+        file.write(filedata)
+
+
 def generate_rst(npath):
-    # print(npath)
+    print("====Start generating rst======")
 
     # generate dir
     dirpath = os.path.dirname(npath)
@@ -36,13 +50,18 @@ def generate_rst(npath):
     print(cmd)
     subprocess.call(cmd, shell=True)
 
+    rstpath = dirpath + "/" + basename
+    convert_rst(rstpath)
+
+    # clean up old files
     cmd = "rm -rf "
     cmd += "./modules/"
     cmd += basename[:-4]
     cmd += "*"
-    print(cmd)
+    # print(cmd)
     subprocess.call(cmd, shell=True)
 
+    # move files to module dir
     cmd = "mv "
     cmd += dirpath
     cmd += "/*.rst ./modules/"

docs/modules/Kalmanfilter_basics.rst

Lines changed: 18 additions & 16 deletions
@@ -27,7 +27,7 @@ In the continous form,
 
 .. math:: \mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     import numpy as np
     import random
@@ -61,7 +61,7 @@ data meaning the spread of the data.
 
 .. math:: \mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     x=np.random.randn(10)
     np.var(x)
@@ -123,7 +123,7 @@ normal distribution:
    \begin{aligned}VAR(X) = \sigma_x^2 &= \frac{1}{n}\sum_{i=1}^n(X - \mu)^2\\
    COV(X, Y) = \sigma_{xy} &= \frac{1}{n}\sum_{i=1}^n[(X-\mu_x)(Y-\mu_y)\big]\end{aligned}
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     x=np.random.random((3,3))
     np.cov(x)
@@ -141,7 +141,7 @@ normal distribution:
 
 Covariance taking the data as **sample** with :math:`\frac{1}{N-1}`
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     x_cor=np.random.rand(1,10)
     y_cor=np.random.rand(1,10)
@@ -159,7 +159,7 @@ Covariance taking the data as **sample** with :math:`\frac{1}{N-1}`
 
 Covariance taking the data as **population** with :math:`\frac{1}{N}`
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     np.cov(x_cor,y_cor,bias=1)
 
@@ -183,7 +183,7 @@ According to this theorem, the average of n samples of random and
 independant variables tends to follow a normal distribution as we
 increase the sample size.(Generally, for n>=30)
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     import matplotlib.pyplot as plt
     import random
@@ -222,13 +222,15 @@ described with two parameters, the mean (:math:`\mu`) and the variance
 
    f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]
 
-Range is [$-:raw-latex:`\inf`,:raw-latex:`\inf `$]
+Range is
+
+.. math:: [-\inf,\inf]
 
 This is just a function of mean(\ :math:`\mu`) and standard deviation
 (:math:`\sigma`) and what gives the normal distribution the
 charecteristic **bell curve**.
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     import matplotlib.mlab as mlab
     import math
@@ -284,7 +286,7 @@ New mean is
 
    \sigma_\mathtt{new} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     import matplotlib.mlab as mlab
     import math
@@ -336,7 +338,7 @@ of the two.
    \begin{gathered}\mu_x = \mu_p + \mu_z \\
    \sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     import matplotlib.mlab as mlab
     import math
@@ -375,7 +377,7 @@ of the two.
 .. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_21_1.png
 
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     #Example from:
     #https://scipython.com/blog/visualizing-the-bivariate-gaussian-distribution/
@@ -448,7 +450,7 @@ a given (X,Y) value.
 
 \*\* numpy einsum examples \*\*
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     a = np.arange(25).reshape(5,5)
     b = np.arange(5)
@@ -471,7 +473,7 @@ a given (X,Y) value.
     [3 4 5]]
 
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     #this is the diagonal sum, i repeated means the diagonal
     np.einsum('ij', a)
@@ -490,7 +492,7 @@ a given (X,Y) value.
 
 
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     A = np.arange(3).reshape(3,1)
     B = np.array([[ 0, 1, 2, 3],
@@ -508,7 +510,7 @@ a given (X,Y) value.
 
 
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     D = np.array([0,1,2])
     E = np.array([[ 0, 1, 2, 3],
@@ -526,7 +528,7 @@ a given (X,Y) value.
 
 
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     from scipy.stats import multivariate_normal
     x, y = np.mgrid[-5:5:.1, -5:5:.1]

docs/modules/Kalmanfilter_basics_2.rst

Lines changed: 14 additions & 8 deletions
@@ -4,9 +4,12 @@ KF Basics - Part 2
 
 ### Probabilistic Generative Laws
 
-**1st Law**: The belief representing the state :math:`x_{t}`, is
-conditioned on all past states, measurements and controls. This can be
-shown mathematically by the conditional probability shown below:
+1st Law:
+^^^^^^^^
+
+The belief representing the state :math:`x_{t}`, is conditioned on all
+past states, measurements and controls. This can be shown mathematically
+by the conditional probability shown below:
 
 .. math:: p(x_{t} | x_{0:t-1},z_{1:t-1},u_{1:t})
 
@@ -27,7 +30,8 @@ Therefore the law now holds as:
 
 .. math:: p(x_{t} | x_{0:t-1},z_{1:t-1},u_{1:t})=p(x_{t} | x_{t-1},u_{t})
 
-**2nd Law**:
+2nd Law:
+^^^^^^^^
 
 If :math:`x_{t}` is complete, then:
 
@@ -84,7 +88,9 @@ hand, given C (Coin 1 is selected), A and B are independent.
 Bayes Rule:
 ~~~~~~~~~~~
 
-Posterior = $:raw-latex:`\frac{Likelihood*Prior}{Marginal}` $
+Posterior =
+
+.. math:: \frac{Likelihood*Prior}{Marginal}
 
 Here,
 
@@ -158,7 +164,7 @@ and the resultant covariance is smaller.
 Bayes filter localization example:
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     from IPython.display import Image
     Image(filename="bayes_filter.png",width=400)
@@ -281,9 +287,9 @@ The variance in terms of the Kalman gain:
    &= (1-K)\bar\sigma^2
    \end{aligned}
 
-**:math:`K` is the Kalman gain. It’s the crux of the Kalman filter. It
+:math:`K` is the *Kalman gain*. It’s the crux of the Kalman filter. It
 is a scaling term that chooses a value partway between :math:`\mu_z` and
-:math:`\bar\mu`.**
+:math:`\bar\mu`.
 
 Kalman Filter - Univariate and Multivariate
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

docs/modules/Planar_Two_Link_IK.rst

Lines changed: 6 additions & 6 deletions
@@ -31,7 +31,7 @@ https://robotacademy.net.au/lesson/inverse-kinematics-for-a-2-joint-robot-arm-us
 
 First, let’s define a class to make plotting our arm easier.
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     %matplotlib inline
     from math import cos, sin
@@ -70,7 +70,7 @@ First, let’s define a class to make plotting our arm easier.
 Let’s also define a function to make it easier to draw an angle on our
 diagram.
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     from math import sqrt
 
@@ -103,7 +103,7 @@ called forward_kinematics - forward kinematics specifies the
 end-effector position given the joint angles and link lengths. Forward
 kinematics is easier than inverse kinematics.
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     arm = TwoLinkArm()
 
@@ -172,7 +172,7 @@ kinematics for :math:`\theta_0` and :math:`\theta_1`, but that would be
 the wrong move. An easier path involves going back to the geometry of
 the arm.
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     from math import pi
 
@@ -245,7 +245,7 @@ to the “arm-down” configuration of the arm, which is what’s shown in the
 diagram. Now we’ll derive an equation for :math:`\theta_0` that depends
 on this value of :math:`\theta_1`.
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     from math import atan2
 
@@ -294,7 +294,7 @@ We now have an expression for this angle :math:`\beta` in terms of one
 of our arm’s joint angles. Now, can we relate :math:`\beta` to
 :math:`\theta_0`? Yes!
 
-.. code:: ipython3
+.. code-block:: ipython3
 
     arm.plot()
     label_diagram()
