Merged
Changes from 1 commit (53 commits total)
fc57993
refactor: cleanup unused import
DraconicDragon Jan 14, 2026
db43b1b
refactor: cleanup unused import
DraconicDragon Jan 14, 2026
612a02f
refactor(MainUI): remove seemingly redundant Line element
DraconicDragon Jan 14, 2026
e4e9612
refactor(BaseUI): various changes, see commit description
DraconicDragon Jan 14, 2026
9d97270
chore(UI): add TODO.md for UI changes
DraconicDragon Jan 14, 2026
3baeeb0
refactor(BaseUI): improve tooltip consistency
DraconicDragon Jan 14, 2026
e36834c
chore(UI): add more TODOs
DraconicDragon Jan 14, 2026
6ea4a6f
chore(BaseUI): capitalization
DraconicDragon Jan 14, 2026
339acba
refactor(MainUI): remove margins from BaseWIdget for more horizontal …
DraconicDragon Jan 15, 2026
26a9ad7
refactor(BaseUI): use QGridLayout instead of 3 separate QHBoxLayouts,…
DraconicDragon Jan 15, 2026
2ac101c
chore: update todo
DraconicDragon Jan 15, 2026
4f7fc19
feat: Protected Tags File (Global) input/selector UI Logic
DraconicDragon Jan 15, 2026
494756e
chore(BaseUI): rename label Global Protected Tags File -> Protected T…
DraconicDragon Jan 15, 2026
e97b00d
fix: Keep Tokens/Tags Separator was passing on it's value even when d…
DraconicDragon Jan 15, 2026
444d5e5
fix: infer values from UI for widget setup instead of hardcoded args …
DraconicDragon Jan 15, 2026
f18de8e
chore: update todo
DraconicDragon Jan 15, 2026
3dbf885
chore(BaseUI): captilization; also update todo
DraconicDragon Jan 15, 2026
acfa9c6
refactor(MainWindow): increase launch size width by 161
DraconicDragon Jan 16, 2026
b847271
refactor(BaseUI): minor adjustments
DraconicDragon Jan 16, 2026
47cab07
feat(ExperimentalArgsUI): initial addition, no UI logic yet
DraconicDragon Jan 16, 2026
8fd1fb3
refactor(ExperimentalArgsUI): mainly advanced vae settings ui + some …
DraconicDragon Jan 19, 2026
3525a37
refactor: move debiased estim. loss from BaseUI/GeneralUI to Experime…
DraconicDragon Jan 19, 2026
0602c87
refactor: disable train flux groupbox when v_param_enable is checked
DraconicDragon Jan 19, 2026
35842e3
feat(ExperimentalArgsUI): ui logic, saving etc
DraconicDragon Jan 19, 2026
45f35b8
refactor(GeneralUI): update defaults
DraconicDragon Jan 19, 2026
7f78d16
refactor(ExperimentalArgsUI): do not save/pass on vae batch size arg …
DraconicDragon Jan 19, 2026
6e801b9
refactor(ExperimentalArgsUI): update Min/Max/Stepping for all DoubleS…
DraconicDragon Jan 19, 2026
d1f3af5
fix(ExperimentalArgsUI): actually make vae batch size not save at val…
DraconicDragon Jan 19, 2026
3c7bc06
fix: floating point precision errors when saved on DoubleSpinBox
DraconicDragon Jan 19, 2026
9efc228
refactor(GeneralUI): add more explicit type casts to make seed actual…
DraconicDragon Jan 19, 2026
b124e5c
refactor(ExperimentalArgsUI): disable CFM until flow_model or v_param…
DraconicDragon Jan 19, 2026
27a01fc
chore: update todo, move to /ui_dev and add run_ui_dev.sh
DraconicDragon Jan 19, 2026
fd9351d
refactor(ExperimentalArgsUI): use real default value
DraconicDragon Jan 19, 2026
5f3d3d3
fix(dev): make run_ui_dev.sh execute python in project root
DraconicDragon Jan 23, 2026
32fab3b
refactor(ExperimentalArgsUI): change saving logic to not uncheck chec…
DraconicDragon Jan 23, 2026
1ffa4d6
formatting
DraconicDragon Jan 23, 2026
3690a62
fix(BaseUI): typo compatable -> compatible
DraconicDragon Jan 23, 2026
b1dfe96
feat(ExperimentalArgsUI): first set of tooltips
DraconicDragon Jan 23, 2026
8d0f9b8
feat(ExperimentalArgsUI): last set of tooltips (VAE)
DraconicDragon Jan 23, 2026
89582ad
fix: stupid designer
DraconicDragon Jan 23, 2026
d3643bc
fix the fix
DraconicDragon Jan 23, 2026
c7b6941
fix: vae batch size not saving cuz dumdum logic
DraconicDragon Jan 25, 2026
cd14d25
fix: lingering args after saving
DraconicDragon Jan 28, 2026
511fed1
refactor: subset input ui
DraconicDragon Jan 28, 2026
387828a
refactor: increase vae batch size maximum to 1000
DraconicDragon Jan 28, 2026
9f5e183
refactor: subset extra input ui
DraconicDragon Jan 28, 2026
87d122a
fix: bad reference to dragdroplineedit custom widget
DraconicDragon Jan 28, 2026
a42b755
refactor: update protected tags file tooltips everywhere
DraconicDragon Jan 28, 2026
449ecfa
chore: update todo
DraconicDragon Jan 28, 2026
ee8356b
feat: make dropdown expand if value loaded by toml
DraconicDragon Jan 28, 2026
5a888dd
Merge branch 'refresh' into ui/misc-baseui
DraconicDragon Jan 28, 2026
512edd6
refactor: change resolution defaults of bucket ui
DraconicDragon Jan 28, 2026
88e3520
Merge branch 'ui/misc-baseui' of https://github.com/draconicdragon/Lo…
DraconicDragon Jan 28, 2026
refactor: move debiased estim. loss from BaseUI/GeneralUI to ExperimentalArgsUI misc section
DraconicDragon committed Jan 19, 2026
commit 3525a3793fd6aec8a9a7728281db31cde0389cf9
5 changes: 5 additions & 0 deletions main_ui_files/ExperimentalArgsUI.py
@@ -46,6 +46,9 @@ def setup_connections(self) -> None:
         self.widget.cfm_enable.clicked.connect(
             lambda x: self.edit_args("contrastive_flow_matching", x, True)
         )
+        self.widget.debiased_estimation_loss_enable.clicked.connect(
+            lambda x: self.edit_args("debiased_estimation_loss", x, True)
+        )
 
     def edit_args(self, name: str, value: object, optional: bool = False) -> None:
         """Update args dict, handling optional values."""
@@ -80,6 +83,7 @@ def load_args(self, args: dict) -> bool:

         # misc args
         self.widget.cfm_enable.setChecked(args.get("contrastive_flow_matching", False))
+        self.widget.debiased_estimation_loss_enable.setChecked(args.get("debiased_estimation_loss", False))
 
 
         # sync args from UI
@@ -104,6 +108,7 @@ def load_args(self, args: dict) -> bool:

         # misc args
         self.edit_args("contrastive_flow_matching", self.widget.cfm_enable.isChecked(), True)
+        self.edit_args("debiased_estimation_loss", self.widget.debiased_estimation_loss_enable.isChecked(), True)
 
         return True
 
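The three added lines above all follow the same wiring pattern: a checkbox's clicked signal is connected to `edit_args` with `optional=True`. The body of `edit_args` is not shown in this hunk, so the following is a Qt-free sketch of what its docstring ("handling optional values") suggests — the `ArgsSection` class and the drop-when-falsy behavior are assumptions for illustration, not code from this PR.

```python
# Sketch of the edit_args pattern, minus the Qt widgets.
# Assumption: when `optional` is True and the value is falsy,
# the key is removed so the flag is not passed to the trainer at all.
class ArgsSection:
    def __init__(self):
        self.args = {}

    def edit_args(self, name, value, optional=False):
        if optional and not value:
            # Unchecked optional flag: omit the key entirely rather
            # than writing an explicit False into the saved config.
            self.args.pop(name, None)
        else:
            self.args[name] = value


section = ArgsSection()
section.edit_args("debiased_estimation_loss", True, True)
assert section.args == {"debiased_estimation_loss": True}
section.edit_args("debiased_estimation_loss", False, True)
assert "debiased_estimation_loss" not in section.args
```

Dropping the key, rather than storing `False`, would explain why several commits in this branch fix "lingering args after saving": an optional flag should disappear from the emitted config when its box is unchecked.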
6 changes: 0 additions & 6 deletions main_ui_files/GeneralUI.py
@@ -124,9 +124,6 @@ def setup_connections(self) -> None:
         self.widget.v_pred_enable.clicked.connect(
             lambda x: self.edit_args("scale_v_pred_loss_like_noise_pred", x, True)
         )
-        self.widget.debiased_estimation_loss_enable.clicked.connect(
-            lambda x: self.edit_args("debiased_estimation_loss", x, True)
-        )
         self.widget.FP16_enable.clicked.connect(lambda x: self.change_full_type(x, False))
         self.widget.BF16_enable.clicked.connect(lambda x: self.change_full_type(False, x))
         self.widget.FP8_enable.clicked.connect(lambda x: self.edit_args("fp8_base", x, True))
@@ -340,7 +337,6 @@ def load_args(self, args: dict) -> bool:
         self.widget.high_vram_enable.setChecked(args.get("highvram", False))
         self.widget.v_param_enable.setChecked(args.get("v_parameterization", False))
         self.widget.v_pred_enable.setChecked(args.get("scale_v_pred_loss_like_noise_pred", False))
-        self.widget.debiased_estimation_loss_enable.setChecked(args.get("debiased_estimation_loss", False))
         self.widget.FP16_enable.setChecked(args.get("full_fp16", False))
         self.widget.BF16_enable.setChecked(args.get("full_bf16", False))
         self.widget.FP8_enable.setChecked(args.get("fp8_base", False))
@@ -382,8 +378,6 @@ def load_args(self, args: dict) -> bool:
         self.edit_args("lowram", self.widget.low_ram_enable.isChecked(), True)
         self.edit_args("highvram", self.widget.high_vram_enable.isChecked(), True)
         self.enable_disable_v_param(self.widget.v_param_enable.isChecked())
-        # Update args for the new debiased estimation loss checkbox
-        self.edit_args("debiased_estimation_loss", self.widget.debiased_estimation_loss_enable.isChecked(), True)
         self.change_full_type(self.widget.FP16_enable.isChecked(), self.widget.BF16_enable.isChecked())
         self.edit_args("fp8_base", self.widget.FP8_enable.isChecked(), True)
         self.edit_args(
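The two precision handlers retained above — `FP16_enable` calling `change_full_type(x, False)` and `BF16_enable` calling `change_full_type(False, x)` — imply that checking one format always passes `False` for the other. The body of `change_full_type` is not in this diff, so this is a hypothetical sketch of that mutual-exclusivity contract; the standalone function shape and the key names' drop-when-disabled handling are assumptions.

```python
# Hypothetical sketch of change_full_type's contract: full_fp16 and
# full_bf16 are mutually exclusive, and a disabled format is removed
# from the args dict rather than stored as False.
def change_full_type(args, fp16, bf16):
    for key, enabled in (("full_fp16", fp16), ("full_bf16", bf16)):
        if enabled:
            args[key] = True
        else:
            args.pop(key, None)
    return args


args = change_full_type({}, True, False)    # user checks the FP16 box
assert args == {"full_fp16": True}
args = change_full_type(args, False, True)  # user then checks the BF16 box
assert args == {"full_bf16": True}
```

This matches the tooltips in the BaseUI diff below stating that full BF16 and full FP16 are not compatible with each other.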
73 changes: 31 additions & 42 deletions ui_files/BaseUI.py
@@ -27,7 +27,7 @@ class Ui_base_args_ui(object):
def setupUi(self, base_args_ui):
if not base_args_ui.objectName():
base_args_ui.setObjectName(u"base_args_ui")
base_args_ui.resize(675, 573)
base_args_ui.resize(675, 563)
self.gridLayout_3 = QGridLayout(base_args_ui)
self.gridLayout_3.setObjectName(u"gridLayout_3")
self.formLayout_5 = QFormLayout()
@@ -376,59 +376,52 @@ def setupUi(self, base_args_ui):
self.model_checkbox_gridLayout = QGridLayout()
self.model_checkbox_gridLayout.setObjectName(u"model_checkbox_gridLayout")
self.model_checkbox_gridLayout.setVerticalSpacing(1)
self.low_ram_enable = QCheckBox(self.base_model_box)
self.low_ram_enable.setObjectName(u"low_ram_enable")
self.v_param_enable = QCheckBox(self.base_model_box)
self.v_param_enable.setObjectName(u"v_param_enable")
self.v_param_enable.setEnabled(True)

self.model_checkbox_gridLayout.addWidget(self.low_ram_enable, 0, 3, 1, 1)
self.model_checkbox_gridLayout.addWidget(self.v_param_enable, 1, 0, 2, 1)

self.v_pred_enable = QCheckBox(self.base_model_box)
self.v_pred_enable.setObjectName(u"v_pred_enable")
self.v_pred_enable.setEnabled(False)

self.model_checkbox_gridLayout.addWidget(self.v_pred_enable, 1, 1, 2, 1)

self.BF16_enable = QCheckBox(self.base_model_box)
self.BF16_enable.setObjectName(u"BF16_enable")
self.v2_enable = QCheckBox(self.base_model_box)
self.v2_enable.setObjectName(u"v2_enable")

self.model_checkbox_gridLayout.addWidget(self.BF16_enable, 1, 3, 2, 1)
self.model_checkbox_gridLayout.addWidget(self.v2_enable, 0, 0, 1, 1)

self.no_half_vae_enable = QCheckBox(self.base_model_box)
self.no_half_vae_enable.setObjectName(u"no_half_vae_enable")

self.model_checkbox_gridLayout.addWidget(self.no_half_vae_enable, 0, 2, 1, 1)

self.debiased_estimation_loss_enable = QCheckBox(self.base_model_box)
self.debiased_estimation_loss_enable.setObjectName(u"debiased_estimation_loss_enable")
sizePolicy1.setHeightForWidth(self.debiased_estimation_loss_enable.sizePolicy().hasHeightForWidth())
self.debiased_estimation_loss_enable.setSizePolicy(sizePolicy1)

self.model_checkbox_gridLayout.addWidget(self.debiased_estimation_loss_enable, 3, 0, 1, 2)

self.FP16_enable = QCheckBox(self.base_model_box)
self.FP16_enable.setObjectName(u"FP16_enable")

self.model_checkbox_gridLayout.addWidget(self.FP16_enable, 1, 2, 2, 1)

self.v2_enable = QCheckBox(self.base_model_box)
self.v2_enable.setObjectName(u"v2_enable")
self.sdxl_enable = QCheckBox(self.base_model_box)
self.sdxl_enable.setObjectName(u"sdxl_enable")

self.model_checkbox_gridLayout.addWidget(self.v2_enable, 0, 0, 1, 1)
self.model_checkbox_gridLayout.addWidget(self.sdxl_enable, 0, 1, 1, 1)

self.low_ram_enable = QCheckBox(self.base_model_box)
self.low_ram_enable.setObjectName(u"low_ram_enable")

self.model_checkbox_gridLayout.addWidget(self.low_ram_enable, 0, 3, 1, 1)

self.FP8_enable = QCheckBox(self.base_model_box)
self.FP8_enable.setObjectName(u"FP8_enable")

self.model_checkbox_gridLayout.addWidget(self.FP8_enable, 1, 4, 2, 1)

self.sdxl_enable = QCheckBox(self.base_model_box)
self.sdxl_enable.setObjectName(u"sdxl_enable")

self.model_checkbox_gridLayout.addWidget(self.sdxl_enable, 0, 1, 1, 1)

self.v_param_enable = QCheckBox(self.base_model_box)
self.v_param_enable.setObjectName(u"v_param_enable")
self.v_param_enable.setEnabled(True)
self.BF16_enable = QCheckBox(self.base_model_box)
self.BF16_enable.setObjectName(u"BF16_enable")

self.model_checkbox_gridLayout.addWidget(self.v_param_enable, 1, 0, 2, 1)
self.model_checkbox_gridLayout.addWidget(self.BF16_enable, 1, 3, 2, 1)

self.high_vram_enable = QCheckBox(self.base_model_box)
self.high_vram_enable.setObjectName(u"high_vram_enable")
@@ -611,45 +604,41 @@ def retranslateUi(self, base_args_ui):
#endif // QT_CONFIG(tooltip)
self.vae_selector.setText("")
#if QT_CONFIG(tooltip)
self.low_ram_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Enable this if the trainer is crashing due to running out of system RAM. Typically, this would only be used when interfacing with Google Colab</p></body></html>", None))
self.v_param_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>V param, short for V-Paramatarization or commonly knows as 'v-pred', is a noise schedule that some models use. You can set this to train with this noise schedule versus the EDM version of typical SD1.X and SDXL models</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.low_ram_enable.setText(QCoreApplication.translate("base_args_ui", u"Low RAM", None))
self.v_param_enable.setText(QCoreApplication.translate("base_args_ui", u"V Param", None))
#if QT_CONFIG(tooltip)
self.v_pred_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Scales the loss to be in line with EDM</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.v_pred_enable.setText(QCoreApplication.translate("base_args_ui", u"Scale V pred loss", None))
#if QT_CONFIG(tooltip)
self.BF16_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Train in full BF16. Not compatable with full FP16 or training precision</p></body></html>", None))
self.v2_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Select this if you are using an SD2.X based model</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.BF16_enable.setText(QCoreApplication.translate("base_args_ui", u"Full BF16", None))
self.v2_enable.setText(QCoreApplication.translate("base_args_ui", u"SD2.X Based", None))
#if QT_CONFIG(tooltip)
self.no_half_vae_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>This loads the VAE in FP32 or full precision. Roughly doubles VRAM usage for VAE workloads like latent caching, but is sometimes required on older graphics cards</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.no_half_vae_enable.setText(QCoreApplication.translate("base_args_ui", u"No Half Vae", None))
#if QT_CONFIG(tooltip)
self.debiased_estimation_loss_enable.setToolTip("")
#endif // QT_CONFIG(tooltip)
self.debiased_estimation_loss_enable.setText(QCoreApplication.translate("base_args_ui", u"Debiased Estimation Loss", None))
#if QT_CONFIG(tooltip)
self.FP16_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Allows training on full FP16. Not compatable with full BF16 or training precision</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.FP16_enable.setText(QCoreApplication.translate("base_args_ui", u"Full FP16", None))
#if QT_CONFIG(tooltip)
self.v2_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Select this if you are using an SD2.X based model</p></body></html>", None))
self.sdxl_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Select this if you are using an SDXL based model</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.v2_enable.setText(QCoreApplication.translate("base_args_ui", u"SD2.X Based", None))
self.sdxl_enable.setText(QCoreApplication.translate("base_args_ui", u"SDXL Based", None))
#if QT_CONFIG(tooltip)
self.FP8_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Loads the base model in FP8, which should reduce VRAM usage by roughly half of FP16. Training Precision must be one of FP16 or BF16</p></body></html>", None))
self.low_ram_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Enable this if the trainer is crashing due to running out of system RAM. Typically, this would only be used when interfacing with Google Colab</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.FP8_enable.setText(QCoreApplication.translate("base_args_ui", u"FP8 Base", None))
self.low_ram_enable.setText(QCoreApplication.translate("base_args_ui", u"Low RAM", None))
#if QT_CONFIG(tooltip)
self.sdxl_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Select this if you are using an SDXL based model</p></body></html>", None))
self.FP8_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Loads the base model in FP8, which should reduce VRAM usage by roughly half of FP16. Training Precision must be one of FP16 or BF16</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.sdxl_enable.setText(QCoreApplication.translate("base_args_ui", u"SDXL Based", None))
self.FP8_enable.setText(QCoreApplication.translate("base_args_ui", u"FP8 Base", None))
#if QT_CONFIG(tooltip)
self.v_param_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>V param, short for V-Paramatarization or commonly knows as 'v-pred', is a noise schedule that some models use. You can set this to train with this noise schedule versus the EDM version of typical SD1.X and SDXL models</p></body></html>", None))
self.BF16_enable.setToolTip(QCoreApplication.translate("base_args_ui", u"<html><head/><body><p>Train in full BF16. Not compatable with full FP16 or training precision</p></body></html>", None))
#endif // QT_CONFIG(tooltip)
self.v_param_enable.setText(QCoreApplication.translate("base_args_ui", u"V Param", None))
self.BF16_enable.setText(QCoreApplication.translate("base_args_ui", u"Full BF16", None))
self.high_vram_enable.setText(QCoreApplication.translate("base_args_ui", u"High VRAM", None))
# retranslateUi
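The `addWidget` calls in the setupUi hunk above use Qt's `(widget, row, column, rowSpan, columnSpan)` signature. The cell assignments below are copied from this commit's new layout (the `high_vram_enable` placement falls outside the visible hunk and is omitted); a small pure-Python check, no Qt required, confirms the rearranged checkboxes claim non-overlapping cells.

```python
# Grid placements (row, column, rowSpan, columnSpan) taken from the
# addWidget calls in this commit's BaseUI.py hunk.
placements = {
    "v2_enable":          (0, 0, 1, 1),
    "sdxl_enable":        (0, 1, 1, 1),
    "no_half_vae_enable": (0, 2, 1, 1),
    "low_ram_enable":     (0, 3, 1, 1),
    "v_param_enable":     (1, 0, 2, 1),
    "v_pred_enable":      (1, 1, 2, 1),
    "FP16_enable":        (1, 2, 2, 1),
    "BF16_enable":        (1, 3, 2, 1),
    "FP8_enable":         (1, 4, 2, 1),
}

# Expand each span into individual cells and verify no two widgets share one.
occupied = {}
for name, (row, col, rowspan, colspan) in placements.items():
    for r in range(row, row + rowspan):
        for c in range(col, col + colspan):
            assert (r, c) not in occupied, f"{name} overlaps {occupied[r, c]}"
            occupied[r, c] = name

print(len(occupied))  # 4 cells in row 0 + 5 two-row spans = 14
```

The two-row spans on row 1 explain why `debiased_estimation_loss_enable` previously needed its own row 3 with `colspan=2`; with that checkbox moved to ExperimentalArgsUI, the grid collapses and the window can shrink by the 10 pixels seen in the `resize` change.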

72 changes: 28 additions & 44 deletions ui_files/BaseUI.ui
@@ -7,7 +7,7 @@
<x>0</x>
<y>0</y>
<width>675</width>
<height>573</height>
<height>563</height>
</rect>
</property>
<property name="windowTitle">
@@ -662,13 +662,16 @@
<property name="verticalSpacing">
<number>1</number>
</property>
<item row="0" column="3">
<widget class="QCheckBox" name="low_ram_enable">
<item row="1" column="0" rowspan="2">
<widget class="QCheckBox" name="v_param_enable">
<property name="enabled">
<bool>true</bool>
</property>
<property name="toolTip">
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Enable this if the trainer is crashing due to running out of system RAM. Typically, this would only be used when interfacing with Google Colab&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;V param, short for V-Paramatarization or commonly knows as 'v-pred', is a noise schedule that some models use. You can set this to train with this noise schedule versus the EDM version of typical SD1.X and SDXL models&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
</property>
<property name="text">
<string>Low RAM</string>
<string>V Param</string>
</property>
</widget>
</item>
@@ -685,13 +688,13 @@
</property>
</widget>
</item>
<item row="1" column="3" rowspan="2">
<widget class="QCheckBox" name="BF16_enable">
<item row="0" column="0">
<widget class="QCheckBox" name="v2_enable">
<property name="toolTip">
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Train in full BF16. Not compatable with full FP16 or training precision&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Select this if you are using an SD2.X based model&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
</property>
<property name="text">
<string>Full BF16</string>
<string>SD2.X Based</string>
</property>
</widget>
</item>
@@ -705,22 +708,6 @@
</property>
</widget>
</item>
<item row="3" column="0" colspan="2">
<widget class="QCheckBox" name="debiased_estimation_loss_enable">
<property name="sizePolicy">
<sizepolicy hsizetype="Preferred" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="toolTip">
<string/>
</property>
<property name="text">
<string>Debiased Estimation Loss</string>
</property>
</widget>
</item>
<item row="1" column="2" rowspan="2">
<widget class="QCheckBox" name="FP16_enable">
<property name="toolTip">
Expand All @@ -731,46 +718,43 @@
</property>
</widget>
</item>
<item row="0" column="0">
<widget class="QCheckBox" name="v2_enable">
<item row="0" column="1">
<widget class="QCheckBox" name="sdxl_enable">
<property name="toolTip">
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Select this if you are using an SD2.X based model&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Select this if you are using an SDXL based model&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
</property>
<property name="text">
<string>SD2.X Based</string>
<string>SDXL Based</string>
</property>
</widget>
</item>
<item row="1" column="4" rowspan="2">
<widget class="QCheckBox" name="FP8_enable">
<item row="0" column="3">
<widget class="QCheckBox" name="low_ram_enable">
<property name="toolTip">
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Loads the base model in FP8, which should reduce VRAM usage by roughly half of FP16. Training Precision must be one of FP16 or BF16&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Enable this if the trainer is crashing due to running out of system RAM. Typically, this would only be used when interfacing with Google Colab&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
</property>
<property name="text">
<string>FP8 Base</string>
<string>Low RAM</string>
</property>
</widget>
</item>
<item row="0" column="1">
<widget class="QCheckBox" name="sdxl_enable">
<item row="1" column="4" rowspan="2">
<widget class="QCheckBox" name="FP8_enable">
<property name="toolTip">
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Select this if you are using an SDXL based model&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Loads the base model in FP8, which should reduce VRAM usage by roughly half of FP16. Training Precision must be one of FP16 or BF16&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
</property>
<property name="text">
<string>SDXL Based</string>
<string>FP8 Base</string>
</property>
</widget>
</item>
<item row="1" column="0" rowspan="2">
<widget class="QCheckBox" name="v_param_enable">
<property name="enabled">
<bool>true</bool>
</property>
<item row="1" column="3" rowspan="2">
<widget class="QCheckBox" name="BF16_enable">
<property name="toolTip">
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;V param, short for V-Paramatarization or commonly knows as 'v-pred', is a noise schedule that some models use. You can set this to train with this noise schedule versus the EDM version of typical SD1.X and SDXL models&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
<string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p&gt;Train in full BF16. Not compatable with full FP16 or training precision&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
</property>
<property name="text">
<string>V Param</string>
<string>Full BF16</string>
</property>
</widget>
</item>