Add conditional_generation example xpu support #2684
BenjaminBossan merged 10 commits into huggingface:main from kaixuanliu:conditional_generation_xpu
Conversation
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB.
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
BenjaminBossan left a comment
Thanks for updating the examples. I found a handful of issues, otherwise LGTM.
@@ -179,7 +180,979 @@
 "metadata": {
  "tags": []
 },
-"outputs": [],
+"outputs": [
Let's clear the cells here. Normally, I prefer to see the outputs but this notebook already has cleared cells and having the given output in this case is not very helpful.
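(For anyone reproducing this cleanup: outputs can be cleared for a whole notebook with `jupyter nbconvert --clear-output --inplace <notebook>`, or programmatically with nbformat as in this sketch; the file name below is a placeholder, not the actual notebook path from this PR.)

```python
import nbformat

# Placeholder path; substitute the notebook touched in this PR.
path = "peft_lora_seq2seq.ipynb"

nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []            # drop stored cell outputs
        cell.execution_count = None  # reset the In[n] counters
nbformat.write(nb, path)
```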
 lora_dropout=0.1,
 task_type=TaskType.SEQ_2_SEQ_LM,
 inference_mode=False,
+total_step=len(dataset['train']) * num_epochs,
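(Side note: `total_step` is a field of PEFT's AdaLoraConfig, so this cell presumably builds an AdaLoRA config along the following lines. This is only a sketch; the rank/alpha values and sizes are illustrative placeholders, not taken from the notebook.)

```python
from peft import AdaLoraConfig, TaskType

# Placeholders standing in for values defined earlier in the notebook.
num_epochs = 3
train_set_size = 1000  # i.e. len(dataset["train"])

peft_config = AdaLoraConfig(
    init_r=12,         # illustrative placeholder
    target_r=8,        # illustrative placeholder
    lora_alpha=32,
    lora_dropout=0.1,
    task_type=TaskType.SEQ_2_SEQ_LM,
    inference_mode=False,
    # AdaLoRA's rank-allocation schedule needs the full training budget.
    total_step=train_set_size * num_epochs,
)
```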
| "outputs": [], | ||
| "outputs": [ | ||
| { | ||
| "ename": "ValueError", |
It looks like this checkpoint has been deleted and I don't even know which model was used or how it was trained. I'd say, let's delete this notebook, as it is non-functional. The same is true for examples/causal_language_modeling/peft_lora_clm_accelerate_big_model_inference.ipynb.
| "\n", | ||
| "set_seed(42)\n", | ||
| "\n", | ||
| "device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n", |
How about we use this instead? device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
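(A sketch of the suggested device-agnostic selection. Unlike the one-liner above, this variant also falls back to CPU when no accelerator is present; torch.accelerator only exists in recent PyTorch releases, hence the hasattr guard.)

```python
import torch

# Prefer the generic accelerator API where available ("cuda", "xpu", ...),
# otherwise fall back to CUDA, then CPU.
if hasattr(torch, "accelerator") and torch.accelerator.is_available():
    device = torch.accelerator.current_accelerator().type
else:
    device = "cuda" if torch.cuda.is_available() else "cpu"

print(f"using device: {device}")
```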
 os.environ["TOKENIZERS_PARALLELISM"] = "false"

-device = "cuda"
+device = "xpu" if torch.xpu.is_available() else "cuda"

 "from datasets import load_dataset\n",
 "\n",
-"device = \"cuda\"\n",
+"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

 "from datasets import load_dataset\n",
 "\n",
-"device = \"cuda\"\n",
+"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

 "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
 "\n",
-"device = \"cuda\"\n",
+"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

 "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
 "\n",
-"device = \"cuda\"\n",
+"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

 "from datasets import load_dataset\n",
 "\n",
-"device = \"cuda\"\n",
+"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
@kaixuanliu Could you please run the code formatting command?
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@BenjaminBossan Oops, my fault; I have fixed the formatting issue.
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Thanks, but there still seem to be issues with the formatting. Maybe check that you have a matching ruff version (0.9.10) and that the repository's ruff settings are being picked up.
BenjaminBossan left a comment
Thanks for making the examples XPU-compatible and also updating them where necessary. The PR LGTM. @yao-matrix anything else from your side?
Looks good to me, thx @BenjaminBossan
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
No description provided.