
Commit 7daf98d

finish linear-model
1 parent 0a8751c commit 7daf98d

File tree

25 files changed: +1659 -294 lines

README.md

Lines changed: 19 additions & 17 deletions
@@ -19,45 +19,47 @@ Learn Deep Learning with PyTorch
 - Chapter 2: PyTorch Basics
     - [Tensor and Variable](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter2_PyTorch-Basics/Tensor-and-Variable.ipynb)
     - [Automatic differentiation](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter2_PyTorch-Basics/autograd.ipynb)
+    - [Dynamic graphs and static graphs](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter2_PyTorch-Basics/dynamic-graph.ipynb)
     - Data loading
-    - Introduction to autograd.function
+
+- Chapter 3: Neural Networks
+    - [Linear models and gradient descent](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/linear-regression-gradient-descend.ipynb)
+    - Logistic regression and optimizers
     - Module and Sequential
-    - Custom parameter initialization
     - Saving and loading models
-
-- Chapter 3: Advanced PyTorch
-    - tensorboard visualization
-    - Optimizers
-    - Custom losses and non-standard layers
-    - Data parallelism and multi-GPU
-    - Distributed PyTorch
-    - Converting models to Caffe2 with ONNX
-
-- Chapter 4: Multilayer Perceptrons
-    - Linear models
-    - Logistic regression
+    - Custom parameter initialization
+    - Optimization algorithms
     - Multi-layer neural networks

-- Chapter 5: Convolutional Neural Networks
+- Chapter 4: Convolutional Neural Networks
     - Building a convolutional network by hand from scratch
     - Batch normalization
     - Deep networks with repeating elements: VGG
     - Networks with a richer structure: GoogLeNet
     - Deep residual networks: ResNet
     - Densely connected convolutional networks: DenseNet

-- Chapter 6: Recurrent Neural Networks
+- Chapter 5: Recurrent Neural Networks
     - LSTM and GRU
     - Time-series analysis with RNNs
     - Image classification with RNNs
     - Word embeddings and N-gram models
     - Part-of-speech prediction with a Seq-LSTM

+- Chapter 6: Advanced PyTorch
+    - tensorboard visualization
+    - Various optimization algorithms
+    - Introduction to autograd.function
+    - Data parallelism and multi-GPU
+    - Distributed PyTorch
+    - Converting models to Caffe2 with ONNX
+
 ### Part 2: Applications of Deep Learning
 - Chapter 7: Computer Vision
     - Image augmentation methods
     - Fine-tuning: transfer learning via fine-tuning
-    - Semantic segmentation: pixel-level classification with convolutions
+    - Semantic segmentation: pixel-level classification with an FCN
     - Object detection with convolutional networks
     - Face recognition with triplet loss
     - Neural Transfer: style transfer with convolutional networks

chapter2_PyTorch-Basics/dynamic-graph.ipynb

Lines changed: 186 additions & 0 deletions
@@ -0,0 +1,186 @@
# Dynamic graphs and static graphs

The biggest difference between PyTorch and frameworks such as TensorFlow or Caffe is how they represent the computation graph. TensorFlow uses a static graph: the graph is defined once and then reused run after run. PyTorch instead builds a brand-new computation graph on every run.

For the user the two styles feel very different, and each has its own advantages. A dynamic graph is convenient to debug, since you can inspect it in whatever way you like and the code stays very intuitive. A static graph follows a define-then-run model: once the graph is defined, later runs do not have to rebuild it, so it can execute faster than a dynamic graph.
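Because the graph is recorded as the code runs, control flow in PyTorch can depend on runtime values. A minimal sketch of this point (assuming the current tensor API with `requires_grad`, rather than the `Variable` wrapper of this notebook's era):

```python
import torch

def grow(x):
    # The number of doubling steps depends on the value of x itself,
    # so the graph recorded for autograd differs from call to call.
    while x.norm() < 100:
        x = x * 2
    return x

x = torch.randn(3, requires_grad=True)
y = grow(x).sum()
y.backward()          # gradients follow whichever path was actually executed
print(x.grad)
```

A static-graph framework has to express such a data-dependent loop through dedicated graph ops, which is exactly what the TensorFlow example below does with `tf.while_loop`.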
![](https://ws3.sinaimg.cn/large/006tNc79ly1fmai482qumg30rs0fmq6e.gif)

Below we compare how a while loop is written in TensorFlow and in PyTorch.
```python
# tensorflow
import tensorflow as tf

first_counter = tf.constant(0)
second_counter = tf.constant(10)

def cond(first_counter, second_counter, *args):
    return first_counter < second_counter

def body(first_counter, second_counter):
    first_counter = tf.add(first_counter, 2)
    second_counter = tf.add(second_counter, 1)
    return first_counter, second_counter

# cond and body are called once to add ops to the graph;
# the actual looping happens inside the graph
c1, c2 = tf.while_loop(cond, body, [first_counter, second_counter])
```

```python
# nothing has run yet: the loop only executes when the graph is run in a session
with tf.Session() as sess:
    counter_1_res, counter_2_res = sess.run([c1, c2])

print(counter_1_res)
print(counter_2_res)
```

Output:

```
20
20
```
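The code above targets the TensorFlow 1.x API; `tf.Session` is gone from the default TensorFlow 2.x namespace. Purely as a hedged sketch (assuming TensorFlow 2.x, which is not what this commit uses), the same loop can be written as ordinary Python under eager execution, or traced into a graph with `tf.function`, where AutoGraph rewrites the tensor-dependent `while` into a `tf.while_loop`:

```python
# Sketch only: assumes TensorFlow 2.x, where eager execution is the default.
import tensorflow as tf

@tf.function
def count():
    first_counter = tf.constant(0)
    second_counter = tf.constant(10)
    # AutoGraph turns this Python while into a tf.while_loop during tracing
    while first_counter < second_counter:
        first_counter += 2
        second_counter += 1
    return first_counter, second_counter

c1, c2 = count()
print(int(c1), int(c2))  # 20 20
```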
```python
# pytorch
import torch

first_counter = torch.Tensor([0])
second_counter = torch.Tensor([10])

# an ordinary Python while loop: the comparison is evaluated eagerly,
# and [0] pulls the single result out as a plain value
while (first_counter < second_counter)[0]:
    first_counter += 2
    second_counter += 1

print(first_counter)
print(second_counter)
```

Output:

```
 20
[torch.FloatTensor of size 1]

 20
[torch.FloatTensor of size 1]
```
The example above shows how to build a while loop with a static graph and with a dynamic graph. The dynamic-graph version looks simpler and more intuitive, don't you think?
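The PyTorch cells above use the 0.3-era API (`torch.Tensor([0])` and the old `FloatTensor` printout). As a sketch against a recent PyTorch release (an assumption on my part, not part of this commit), the same loop reads slightly cleaner, since a one-element comparison can be used directly as the loop condition and `.item()` extracts the value:

```python
# Sketch only: assumes a recent PyTorch where 0-d/1-element tensors act as bools.
import torch

first_counter = torch.tensor(0)
second_counter = torch.tensor(10)

while first_counter < second_counter:   # no [0] indexing needed
    first_counter += 2
    second_counter += 1

print(first_counter.item(), second_counter.item())  # 20 20
```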
