Merged
Changes from 1 commit
Rename the multi-GPU loading file for easier understanding
AreChen committed Jul 28, 2023
commit 16e23800ab0b9aa787be47f615b8a26fd8d2b1e7
4 changes: 3 additions & 1 deletion README.md
@@ -76,10 +76,12 @@ python ./demo/run_demo.py
model = model.eval()
```
Replace with

```python
 def get_model():
     tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
-    from utils import load_model_on_gpus
+    from gpus import load_model_on_gpus
+    # the gpus file is in the demo folder
     model = load_model_on_gpus("THUDM/codegeex2-6b", num_gpus=2)
     model = model.eval()
     return tokenizer, model
```
File renamed without changes.
2 changes: 1 addition & 1 deletion demo/run_demo.py
@@ -11,7 +11,7 @@ def get_model():
     tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
     model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True).to('cuda:0')
     # To load the model on multiple GPUs, comment out the line above and enable the two lines below, setting "num_gpus" to the number of GPUs you need
-    # from utils import load_model_on_gpus
+    # from gpus import load_model_on_gpus
     # model = load_model_on_gpus("THUDM/codegeex2-6b", num_gpus=2)
     model = model.eval()
     return tokenizer, model
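The body of `load_model_on_gpus` (now imported from `gpus` rather than `utils`) is not part of this diff. As a rough illustration of what such a helper typically does, here is a minimal sketch that builds a balanced device map over the model's transformer layers; the module names (`embedding`, `layers.N`, `output_layer`) and the 28-layer default are assumptions for illustration, not taken from the actual `demo/gpus.py`:

```python
def auto_configure_device_map(num_gpus: int, num_layers: int = 28) -> dict:
    """Assign each transformer layer to a GPU as evenly as possible.

    Hypothetical sketch: module names and the layer count are
    illustrative, not the actual layout used by demo/gpus.py.
    """
    # Ceiling division so no GPU gets more than its fair share plus one.
    per_gpu = (num_layers + num_gpus - 1) // num_gpus
    # Pin the embedding and output head to GPU 0 with the first layers.
    device_map = {"embedding": 0, "output_layer": 0}
    for i in range(num_layers):
        device_map[f"layers.{i}"] = min(i // per_gpu, num_gpus - 1)
    return device_map
```

Given a map like this, a loader can hand it to the `accelerate` library's `dispatch_model(model, device_map=...)` to place each submodule on its assigned GPU before calling `model.eval()`.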