Commit 493228a

Fix AutoencoderTiny with use_slicing (huggingface#6850)
* Fix `AutoencoderTiny` with `use_slicing`

  When using slicing with AutoencoderTiny, the encoder mistakenly encodes the entire batch for every image in the batch: each slice re-encoded the full input `x` instead of its own `x_slice`.

* Fixed formatting issue
1 parent: 8bf046b

File tree

1 file changed: +3 −1 lines


src/diffusers/models/autoencoders/autoencoder_tiny.py

Lines changed: 3 additions & 1 deletion
@@ -292,7 +292,9 @@ def encode(
         self, x: torch.FloatTensor, return_dict: bool = True
     ) -> Union[AutoencoderTinyOutput, Tuple[torch.FloatTensor]]:
         if self.use_slicing and x.shape[0] > 1:
-            output = [self._tiled_encode(x_slice) if self.use_tiling else self.encoder(x) for x_slice in x.split(1)]
+            output = [
+                self._tiled_encode(x_slice) if self.use_tiling else self.encoder(x_slice) for x_slice in x.split(1)
+            ]
             output = torch.cat(output)
         else:
             output = self._tiled_encode(x) if self.use_tiling else self.encoder(x)
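
For context, a minimal usage sketch (not part of the commit): with slicing enabled, `encode` now processes the batch one image at a time, so the latent batch size matches the input batch size. The checkpoint id `madebyollin/taesd` and the 512×512 input shape are assumptions for illustration.

import torch
from diffusers import AutoencoderTiny

# Assumed checkpoint for illustration; any AutoencoderTiny weights work.
vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")
vae.enable_slicing()  # sets use_slicing=True: encode one image at a time

# A batch of two images; with this fix, each slice is encoded on its own,
# so the output latent batch matches the input batch.
images = torch.randn(2, 3, 512, 512)
with torch.no_grad():
    latents = vae.encode(images).latents
print(latents.shape)  # expected: torch.Size([2, 4, 64, 64])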
