Commit 69cd301

change to version 2
1 parent 964c09a commit 69cd301

File tree

62 files changed: +268, -135848 lines

README.md

Lines changed: 75 additions & 4 deletions
@@ -4,17 +4,88 @@ Learn Deep Learning with PyTorch
 
 Thank you very much for purchasing this book. This GitHub repository contains the example code for [深度学习入门之PyTorch](https://item.jd.com/17915495606.html). My own ability is limited and I consulted some online material while writing the book, so I would like to express my respect to its authors here. Because deep learning is advancing rapidly, PyTorch keeps being updated, and there were many areas I had not covered when the book was finished, this repository will be updated continuously as a follow-up service for readers of the book. I hope it can be of some small help on your way into deep learning.
 
+**Note: as PyTorch versions change, the code in the book may develop bugs, so the code in this GitHub repository is the authoritative version.**
+
 ![image.png](http://upload-images.jianshu.io/upload_images/3623720-7cc3a383f486d157.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
 
 ## Environment setup
 
 The book explains in detail how to set up a Python environment with Anaconda and how to install PyTorch. If you work on your own machine and it has an Nvidia GPU, you can happily enter the world of deep learning; if you do not have an Nvidia GPU, you will need a cloud computing platform for the journey. [How to set up an AWS compute platform](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/aws.md)
 
 
-
+**The course contents below differ from the book's table of contents because the material is still being updated; once everything is finished it will be rolled into the second edition of the book!**
 ## Course contents
-
-
+### Part 1: Deep learning fundamentals
+- Chapter 2: PyTorch basics
+  - PyTorch and NumPy
+  - The autograd mechanism
+  - Loading data
+  - Module and Sequential
+  - Custom parameter initialization
+  - Saving and loading models
+
+- Chapter 3: Advanced PyTorch
+  - Visualization with TensorBoard
+  - Optimizers
+  - Custom losses and non-standard layers
+  - Data parallelism and multiple GPUs
+  - Distributed PyTorch
+  - Converting to Caffe2 models with ONNX
+
+- Chapter 4: Multilayer perceptrons
+  - Linear models
+  - Logistic regression
+  - Multi-layer neural networks
+
+- Chapter 5: Convolutional neural networks
+  - Building a convolutional network by hand from scratch
+  - Batch normalization
+  - Deep networks built from repeated blocks: VGG
+  - Networks with richer structure: GoogLeNet
+  - Deep residual networks: ResNet
+  - Densely connected convolutional networks: DenseNet
+
+- Chapter 6: Recurrent neural networks
+  - LSTM and GRU
+  - Time-series analysis with RNNs
+  - Image classification with RNNs
+  - Word embeddings and N-gram models
+  - Part-of-speech tagging with a Seq-LSTM
+
+### Part 2: Applications of deep learning
+- Chapter 7: Computer vision
+  - Image augmentation methods
+  - Fine-tuning: transfer learning via fine-tuning
+  - Semantic segmentation: pixel-level classification with convolutions
+  - Object detection with convolutional networks
+  - Face recognition with triplet loss
+  - Neural Transfer: style transfer with convolutional networks
+  - Deep Dream: exploring what a convolutional network sees
+
+- Chapter 8: Natural language processing
+  - Text generation with a char-RNN
+  - Image captioning with a jointly trained convolutional network
+  - Sentiment analysis with RNNs
+  - Machine translation with seq2seq
+  - Text recognition with CNN + RNN + attention
+  - Semantic relatedness with a Tree-LSTM
+
+### Part 3: Advanced topics
+- Chapter 9: Generative adversarial networks
+  - Autoencoders
+  - Variational autoencoders
+  - Introduction to generative adversarial networks
+  - Deep convolutional GANs (DCGANs)
+  - Wasserstein GANs
+  - Conditional GANs
+  - Pix2Pix
+
+- Chapter 10: Deep reinforcement learning
+  - Introduction to deep reinforcement learning
+  - Policy gradients
+  - Actor-critic gradients
+  - Deep Q-networks
 
 ## Other resources
 
@@ -24,7 +95,7 @@ Learn Deep Learning with PyTorch
 
 PyTorch resources
 
-My GitHub repository [pytorch-beginner](https://github.com/SherlockLiao/pytorch-beginner)
+My GitHub repo [pytorch-beginner](https://github.com/SherlockLiao/pytorch-beginner)
 
 [pytorch-tutorial](https://github.com/yunjey/pytorch-tutorial)
 
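The "Environment setup" section of the README above assumes a working Anaconda environment with PyTorch installed, optionally with an Nvidia GPU. A minimal sketch for checking that setup from Python (an illustration, not code from this commit):

```python
# Quick sanity check of the environment described in the README:
# an Anaconda Python with PyTorch installed, optionally with CUDA support.
import torch

print(torch.__version__)           # PyTorch version installed in the conda environment
print(torch.cuda.is_available())   # True only with an Nvidia GPU and a CUDA build of PyTorch

x = torch.randn(3, 2)              # small random tensor on the CPU
if torch.cuda.is_available():
    x = x.cuda()                   # move it onto the GPU (0.x-style API, as used in the book)
print(x.type())                    # torch.FloatTensor or torch.cuda.FloatTensor
```

If `torch.cuda.is_available()` prints `False` on a machine without an Nvidia GPU, that is expected; the aws.md guide below describes renting a GPU instance instead.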
aws.md

Lines changed: 1 addition & 2 deletions
@@ -19,12 +19,11 @@
 ![2.png](http://upload-images.jianshu.io/upload_images/3623720-0fca7afcf3c0508e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
 
 
-On this page you only need to pay attention to three things. The first is the region in the top-right corner: choose a region close to you; in the Asia-Pacific area you can pick Korea, Japan, Singapore or Mumbai. The second is the "Limits" box on the left: if you are requesting a CPU instance you can ignore it, but if you want a GPU instance you need to click "Limits" and file a request, because GPU instances incur charges and Amazon has to confirm this with you, which usually takes two to three business days.
+On this page you only need to pay attention to three things. The first is the region in the top-right corner: choose a region close to you; in the Asia-Pacific area you can pick Korea, Japan, Singapore or Mumbai. Note that instance prices differ from region to region; if you have a VPN, Oregon is recommended because it is the cheapest region, four to five times cheaper than Asia-Pacific. The second is the "Limits" box on the left: if you are requesting a CPU instance you can ignore it, but if you want a GPU instance you need to click "Limits" and file a request, because GPU instances incur charges and Amazon has to confirm this with you, which usually takes two to three business days.
 
 Next you can launch an instance; click the red box in the middle to begin.
 
 
-
 ### Requesting and launching an instance
 
 
Lines changed: 192 additions & 0 deletions
@@ -0,0 +1,192 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# PyTorch Basics\n",
+    "In this notebook we will introduce the basic PyTorch knowledge you need to get into the deep learning world."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Use PyTorch as NumPy\n",
+    "PyTorch is a library of tensors and dynamic neural networks in Python with strong GPU acceleration. We can use it much like NumPy, and on a GPU it can be much faster than NumPy."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# create a numpy ndarray\n",
+    "numpy_tensor = np.random.randn(10, 20)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can convert a NumPy ndarray to a PyTorch Tensor in two ways."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "from torch import nn\n",
+    "import numpy as np\n",
+    "from torch.autograd import Variable\n",
+    "from torch.utils.data import Dataset, DataLoader\n",
+    "from torchvision.datasets.folder import ImageFolder, default_loader\n",
+    "import pandas as pd"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "# =============================Tensor================================\n",
+    "# Define a 3x2 matrix with given values\n",
+    "a = torch.Tensor([[2, 3], [4, 8], [7, 9]])\n",
+    "print('a is: {}'.format(a))\n",
+    "print('a size is {}'.format(a.size()))  # a.size() = 3, 2\n",
+    "\n",
+    "b = torch.LongTensor([[2, 3], [4, 8], [7, 9]])\n",
+    "print('b is: {}'.format(b))\n",
+    "\n",
+    "c = torch.zeros((3, 2))\n",
+    "print('zero tensor: {}'.format(c))\n",
+    "\n",
+    "d = torch.randn((3, 2))\n",
+    "print('normal random is: {}'.format(d))\n",
+    "\n",
+    "a[0, 1] = 100\n",
+    "print('changed a is: {}'.format(a))\n",
+    "\n",
+    "numpy_b = b.numpy()\n",
+    "print('convert to numpy is \\n {}'.format(numpy_b))\n",
+    "\n",
+    "e = np.array([[2, 3], [4, 5]])\n",
+    "torch_e = torch.from_numpy(e)\n",
+    "print('from numpy to torch.Tensor is {}'.format(torch_e))\n",
+    "f_torch_e = torch_e.float()\n",
+    "print('change data type to float tensor: {}'.format(f_torch_e))\n",
+    "\n",
+    "if torch.cuda.is_available():\n",
+    "    a_cuda = a.cuda()\n",
+    "    print(a_cuda)\n",
+    "\n",
+    "# =============================Variable===================================\n",
+    "\n",
+    "# Create Variables\n",
+    "x = Variable(torch.Tensor([1]), requires_grad=True)\n",
+    "w = Variable(torch.Tensor([2]), requires_grad=True)\n",
+    "b = Variable(torch.Tensor([3]), requires_grad=True)\n",
+    "\n",
+    "# Build a computational graph.\n",
+    "y = w * x + b  # y = 2 * x + 3\n",
+    "\n",
+    "# Compute gradients\n",
+    "y.backward()  # same as y.backward(torch.FloatTensor([1]))\n",
+    "# Print out the gradients.\n",
+    "print(x.grad)  # x.grad = 2\n",
+    "print(w.grad)  # w.grad = 1\n",
+    "print(b.grad)  # b.grad = 1\n",
+    "\n",
+    "x = torch.randn(3)\n",
+    "x = Variable(x, requires_grad=True)\n",
+    "\n",
+    "y = x * 2\n",
+    "print(y)\n",
+    "\n",
+    "y.backward(torch.FloatTensor([1, 0.1, 0.01]))\n",
+    "print(x.grad)\n",
+    "\n",
+    "\n",
+    "# ==============================nn.Module=================================\n",
+    "class net_name(nn.Module):\n",
+    "    def __init__(self, other_arguments):\n",
+    "        super(net_name, self).__init__()\n",
+    "        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size)\n",
+    "        # other network layers\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        x = self.conv1(x)\n",
+    "        return x\n",
+    "\n",
+    "# ============================Dataset====================================\n",
+    "\n",
+    "\n",
+    "class myDataset(Dataset):\n",
+    "    def __init__(self, csv_file, txt_file, root_dir, other_file):\n",
+    "        self.csv_data = pd.read_csv(csv_file)\n",
+    "        with open(txt_file, 'r') as f:\n",
+    "            data_list = f.readlines()\n",
+    "        self.txt_data = data_list\n",
+    "        self.root_dir = root_dir\n",
+    "\n",
+    "    def __len__(self):\n",
+    "        return len(self.csv_data)\n",
+    "\n",
+    "    def __getitem__(self, idx):\n",
+    "        data = (self.csv_data[idx], self.txt_data[idx])\n",
+    "        return data\n",
+    "\n",
+    "\n",
+    "# note: DataLoader expects a Dataset instance, e.g. myDataset(csv_file, txt_file, root_dir, other_file)\n",
+    "dataiter = DataLoader(myDataset, batch_size=32, shuffle=True)\n",
+    "\n",
+    "dset = ImageFolder(root='root_path', transform=None,\n",
+    "                   loader=default_loader)\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "mx",
+   "language": "python",
+   "name": "mx"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.0"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
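The markdown cell above says a NumPy ndarray can be converted to a PyTorch Tensor in two ways, but the cell that follows it only sets up imports. A minimal sketch of the two routes (an illustration, not part of the committed notebook; `numpy_tensor` mirrors the array created earlier in the notebook):

```python
import numpy as np
import torch

numpy_tensor = np.random.randn(10, 20)   # float64 ndarray, as in the notebook

# Route 1: torch.Tensor(...) copies the data and casts to the default FloatTensor (float32)
pytorch_tensor1 = torch.Tensor(numpy_tensor)

# Route 2: torch.from_numpy(...) shares memory with the ndarray and keeps its dtype (float64)
pytorch_tensor2 = torch.from_numpy(numpy_tensor)

print(pytorch_tensor1.type())   # torch.FloatTensor
print(pytorch_tensor2.type())   # torch.DoubleTensor

# Converting back: .numpy() returns an ndarray that views the tensor's data
back_to_numpy = pytorch_tensor2.numpy()
print(back_to_numpy.dtype)      # float64
```

Because `torch.from_numpy` shares memory with the source array, in-place changes to the ndarray are visible in the tensor and vice versa, while `torch.Tensor(...)` makes an independent copy.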
File renamed without changes.
File renamed without changes.
