This repository was archived by the owner on Jul 7, 2023. It is now read-only.
Merged
add _use_locking attribute to optimizer
vinhngx committed Jul 23, 2019
commit 8687315f65c4f3d6c67c1b609eac40a6b2e4c99f
1 change: 1 addition & 0 deletions tensor2tensor/utils/optimize.py
@@ -72,6 +72,7 @@ def optimize(loss, learning_rate, hparams, use_tpu=False, variables=None, gpu_au
   elif _mixed_precision_is_enabled(hparams):
     raise(RuntimeError("GPU auto mixed precision cannot be used with manual mixed precision"))
   else:
+    setattr(opt, '_use_locking', 'True')
     opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

   opt_summaries = []
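The single added line attaches a `_use_locking` attribute to the optimizer object before it is handed to the mixed-precision graph-rewrite wrapper. A minimal sketch of that pattern follows, using a hypothetical stand-in class so it runs without TensorFlow; note that the commit assigns the string `'True'`, not the boolean `True`, which still satisfies any truthiness check such as `if opt._use_locking:`.

```python
# Hypothetical stand-in for a tf.train optimizer instance
# (TensorFlow itself is not imported here).
class StubOptimizer:
    """Placeholder object standing in for a tf.train.Optimizer."""
    pass

opt = StubOptimizer()

# The commit injects the attribute before wrapping the optimizer,
# exactly as in the diff above: the value is the STRING 'True'.
setattr(opt, '_use_locking', 'True')

# The string 'True' is not the boolean True, but both are truthy,
# so a simple `if opt._use_locking:` guard behaves the same way.
locking_enabled = bool(opt._use_locking)
```

If a caller ever compares the attribute with `is True` rather than testing truthiness, the string value would not match, so the distinction is worth keeping in mind when reading this commit.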