
Commit ec10002

ultralytics 8.0.58 new SimpleClass, fixes and updates (ultralytics#1636)

Authored by glenn-jocher, pre-commit-ci[bot], and Laughing-q
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <[email protected]>

1 parent ef03e67 · commit ec10002

File tree

30 files changed: +354 −317 lines changed


.github/workflows/greetings.yml

Lines changed: 3 additions & 2 deletions
````diff
@@ -18,8 +18,9 @@ jobs:
 pr-message: |
 👋 Hello @${{ github.actor }}, thank you for submitting a YOLOv8 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
 
-- ✅ Verify your PR is **up-to-date** with `ultralytics/ultralytics` `main` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally.
+- ✅ Verify your PR is **up-to-date** with `ultralytics/ultralytics` `main` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge main` locally.
 - ✅ Verify all YOLOv8 Continuous Integration (CI) **checks are passing**.
+- ✅ Update YOLOv8 [Docs](https://docs.ultralytics.com) for any new or updated features.
 - ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
 
 See our [Contributing Guide](https://github.com/ultralytics/ultralytics/blob/main/CONTRIBUTING.md) for details and let us know if you have any questions!
@@ -33,7 +34,7 @@ jobs:
 
 ## Install
 
-Pip install the `ultralytics` package including all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
+Pip install the `ultralytics` package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
 
 ```bash
 pip install ultralytics
````

.github/workflows/links.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -23,7 +23,7 @@ jobs:
 with:
   fail: true
   # accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
-  args: --accept 429,999 --exclude twitter.com --verbose --no-progress './**/*.md' './**/*.html'
+  args: --accept 429,999 --exclude-loopback --exclude twitter.com --verbose --no-progress './**/*.md' './**/*.html'
 env:
   GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
@@ -33,6 +33,6 @@ jobs:
 with:
   fail: true
   # accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
-  args: --accept 429,999 --exclude twitter.com,url.com --verbose --no-progress './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb'
+  args: --accept 429,999 --exclude-loopback --exclude twitter.com,url.com --verbose --no-progress './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb'
 env:
   GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
```
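The new `--exclude-loopback` flag tells the link checker to skip localhost/loopback URLs. For context, such an `args` line typically sits inside a link-checker step like the sketch below; the action name and version here are assumptions for illustration, not taken from this diff:

```yaml
# Hypothetical workflow step showing where the updated `args` line lives.
# lycheeverse/lychee-action and its version are assumed, not from this commit.
- name: Check broken links
  uses: lycheeverse/lychee-action@v1
  with:
    fail: true
    # accept 429 (Instagram, 'too many requests') and 999 (LinkedIn); skip loopback URLs
    args: --accept 429,999 --exclude-loopback --exclude twitter.com --verbose --no-progress './**/*.md' './**/*.html'
  env:
    GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
```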

README.md

Lines changed: 5 additions & 22 deletions
```diff
@@ -58,7 +58,7 @@ full documentation on training, validation, prediction and deployment.
 <summary>Install</summary>
 
 Pip install the ultralytics package including
-all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a
+all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a
 [**Python>=3.7**](https://www.python.org/) environment with
 [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
 
@@ -105,28 +105,11 @@ success = model.export(format="onnx")  # export the model to ONNX format
 Ultralytics [release](https://github.com/ultralytics/assets/releases). See
 YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
 
-#### Model Architectures
-
-**NEW** YOLOv5u anchor free models are now available.
-
-All supported model architectures can be found in the [Models](./ultralytics/models/) section.
-
-#### Known Issues / TODOs
-
-We are still working on several parts of YOLOv8! We aim to have these completed soon to bring the YOLOv8 feature set up
-to par with YOLOv5, including export and inference to all the same formats. We are also writing a YOLOv8 paper which we
-will submit to [arxiv.org](https://arxiv.org) once complete.
-
-- [x] TensorFlow exports
-- [x] DDP resume
-- [ ] [arxiv.org](https://arxiv.org) paper
-
 </details>
 
 ## <div align="center">Models</div>
 
-All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset,
-while Classification models are pretrained on the ImageNet dataset.
+All YOLOv8 pretrained models are available here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.
 
 [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
 Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
@@ -147,7 +130,7 @@ See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examp
 <br>Reproduce by `yolo val detect data=coco.yaml device=0`
 - **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
 instance.
-<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0/cpu`
+<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
 
 </details>
 
@@ -167,7 +150,7 @@ See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage e
 <br>Reproduce by `yolo val segment data=coco.yaml device=0`
 - **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
 instance.
-<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0/cpu`
+<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
 
 </details>
 
@@ -187,7 +170,7 @@ See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usag
 <br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
 - **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
 instance.
-<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0/cpu`
+<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
 
 </details>
```
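The repeated `device=0/cpu` → `device=0|cpu` edits above change the notation used in the "Reproduce by" commands to mean "GPU 0 or CPU". As a purely hypothetical illustration of that reading (this helper is not part of the ultralytics codebase), one could resolve such a spec like this:

```python
def pick_device(spec: str, cuda_available: bool = False) -> str:
    """Return the first usable device from a spec like '0|cpu'.

    Hypothetical helper for illustration only: numeric entries are GPU
    indices (usable when cuda_available is True), 'cpu' always works.
    """
    for option in spec.split("|"):
        option = option.strip()
        if option == "cpu":
            return "cpu"
        if option.isdigit() and cuda_available:
            return option  # GPU index as a string, e.g. "0"
    return "cpu"  # safe fallback


print(pick_device("0|cpu", cuda_available=True))   # -> 0
print(pick_device("0|cpu", cuda_available=False))  # -> cpu
```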
README.zh-CN.md

Lines changed: 4 additions & 13 deletions
(Chinese README; diff content translated to English below, URLs and commands kept verbatim.)

```diff
@@ -52,7 +52,7 @@ SOTA model. It builds on previous successful YOLO versions and introduces new features
 <details open>
 <summary>Install</summary>
 
-Pip install the ultralytics package including all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt); requires [**Python>=3.7**](https://www.python.org/) and [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/)
+Pip install the ultralytics package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt); requires [**Python>=3.7**](https://www.python.org/) and [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/)
@@ -100,15 +100,6 @@ success = model.export(format="onnx")  # export the model to ONNX format
 [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the
 Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases).
 
-### Known Issues / TODOs
-
-We are still working on several parts of YOLOv8! We aim to finish these soon and bring the YOLOv8 feature set up to par with YOLOv5, including export and inference in all the same formats. We are also writing a YOLOv8 paper, which we will submit to [arxiv.org](https://arxiv.org) once complete.
-
-- [x] TensorFlow export
-- [x] DDP resume
-- [ ] [arxiv.org](https://arxiv.org) paper
-
 </details>
 
 ## <div align="center">Models</div>
@@ -132,7 +123,7 @@ Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases)
 <br>Reproduce with `yolo val detect data=coco.yaml device=0`
 - **Inference speed** averaged over COCO val images on an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
-<br>Reproduce with `yolo val detect data=coco128.yaml batch=1 device=0/cpu`
+<br>Reproduce with `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
 
 </details>
 
@@ -150,7 +141,7 @@ Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases)
 <br>Reproduce with `yolo val segment data=coco.yaml device=0`
 - **Inference speed** averaged over COCO val images on an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
-<br>Reproduce with `yolo val segment data=coco128-seg.yaml batch=1 device=0/cpu`
+<br>Reproduce with `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
 
 </details>
 
@@ -168,7 +159,7 @@ Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases)
 <br>Reproduce with `yolo val classify data=path/to/ImageNet device=0`
 - **Inference speed** averaged over ImageNet val images on an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
-<br>Reproduce with `yolo val classify data=path/to/ImageNet batch=1 device=0/cpu`
+<br>Reproduce with `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
 
 </details>
```

docs/tasks/classify.md

Lines changed: 25 additions & 3 deletions
```diff
@@ -9,9 +9,31 @@ of that class are located or what their exact shape is.
 
 !!! tip "Tip"
 
-    YOLOv8 _classification_ models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on ImageNet.
-
-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
+    YOLOv8 Classify models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml).
+
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)
+
+YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose models are pretrained on
+the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
+models are pretrained on
+the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.
+
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
+Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
+
+| Model                                                                                        | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
+|----------------------------------------------------------------------------------------------|-----------------------|------------------|------------------|--------------------------------|-------------------------------------|--------------------|--------------------------|
+| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224                   | 66.6             | 87.0             | 12.9                           | 0.31                                | 2.7                | 4.3                      |
+| [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224                   | 72.3             | 91.1             | 23.4                           | 0.35                                | 6.4                | 13.5                     |
+| [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224                   | 76.4             | 93.2             | 85.4                           | 0.62                                | 17.0               | 42.7                     |
+| [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224                   | 78.0             | 94.1             | 163.0                          | 0.87                                | 37.5               | 99.7                     |
+| [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224                   | 78.4             | 94.3             | 232.0                          | 1.01                                | 57.4               | 154.8                    |
+
+- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
+<br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
+- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
+instance.
+<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
 
 ## Train
```
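For readers of the Classify table above: `acc top1` counts a prediction correct only when the highest-scoring class equals the label, while `acc top5` counts it correct when the label appears among the five highest scores. A minimal self-contained sketch of the metric (illustrative only, not taken from the ultralytics codebase):

```python
def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores.

    scores: list of per-class score rows (one row per sample)
    labels: list of true class indices
    Illustrative helper only; not the ultralytics implementation.
    """
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores in this row
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)


scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]  # second sample's true class only ranks 3rd
print(topk_accuracy(scores, labels, k=1))  # -> 0.5
print(topk_accuracy(scores, labels, k=3))  # -> 1.0
```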
docs/tasks/detect.md

Lines changed: 25 additions & 3 deletions
```diff
@@ -9,9 +9,31 @@ scene, but don't need to know exactly where the object is or its exact shape.
 
 !!! tip "Tip"
 
-    YOLOv8 _detection_ models have no suffix and are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on COCO.
-
-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
+    YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).
+
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)
+
+YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models are pretrained on
+the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
+models are pretrained on
+the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.
+
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
+Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
+
+| Model                                                                                | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+|--------------------------------------------------------------------------------------|-----------------------|----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640                   | 37.3                 | 80.4                           | 0.99                                | 3.2                | 8.7               |
+| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640                   | 44.9                 | 128.4                          | 1.20                                | 11.2               | 28.6              |
+| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640                   | 50.2                 | 234.7                          | 1.83                                | 25.9               | 78.9              |
+| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640                   | 52.9                 | 375.2                          | 2.39                                | 43.7               | 165.2             |
+| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640                   | 53.9                 | 479.1                          | 3.53                                | 68.2               | 257.8             |
+
+- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
+<br>Reproduce by `yolo val detect data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
+instance.
+<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
 
 ## Train
```
docs/tasks/segment.md

Lines changed: 25 additions & 3 deletions
```diff
@@ -9,9 +9,31 @@ segmentation is useful when you need to know not only where objects are in an im
 
 !!! tip "Tip"
 
-    YOLOv8 _segmentation_ models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on COCO.
-
-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
+    YOLOv8 Segment models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).
+
+## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)
+
+YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models are pretrained on
+the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
+models are pretrained on
+the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.
+
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
+Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
+
+| Model                                                                                        | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+|----------------------------------------------------------------------------------------------|-----------------------|----------------------|-----------------------|--------------------------------|-------------------------------------|--------------------|-------------------|
+| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640                   | 36.7                 | 30.5                  | 96.1                           | 1.21                                | 3.4                | 12.6              |
+| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640                   | 44.6                 | 36.8                  | 155.7                          | 1.47                                | 11.8               | 42.6              |
+| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640                   | 49.9                 | 40.8                  | 317.0                          | 2.18                                | 27.3               | 110.2             |
+| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640                   | 52.3                 | 42.6                  | 572.4                          | 2.79                                | 46.0               | 220.5             |
+| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640                   | 53.4                 | 43.4                  | 712.1                          | 4.02                                | 71.8               | 344.1             |
+
+- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
+<br>Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
+instance.
+<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
 
 ## Train
```
