Commit e4288bf

Enable pytorch attention by default on AMD gfx1200/gfx1201
It is significantly faster: 8 it/s vs 12 it/s on a 9070 XT with ROCm 6.4.1.
1 parent c667197 commit e4288bf

1 file changed: +1 −1 lines changed

comfy/model_management.py

Lines changed: 1 addition & 1 deletion
@@ -301,7 +301,7 @@ def is_amd():
         logging.info("AMD arch: {}".format(arch))
         if args.use_split_cross_attention == False and args.use_quad_cross_attention == False:
             if torch_version_numeric[0] >= 2 and torch_version_numeric[1] >= 7: # works on 2.6 but doesn't actually seem to improve much
-                if any((a in arch) for a in ["gfx1100", "gfx1101", "gfx1151"]): # TODO: more arches
+                if any((a in arch) for a in ["gfx1100", "gfx1101", "gfx1151", "gfx1200", "gfx1201"]): # TODO: more arches
                     ENABLE_PYTORCH_ATTENTION = True
 except:
     pass
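
For context, `arch` is read a few lines above this hunk from the ROCm device properties (torch.cuda.get_device_properties(...).gcnArchName). A minimal standalone sketch of the gate this commit extends, using the hypothetical helper name pytorch_attention_supported_amd (not ComfyUI's actual API), could look like this:

import torch

def pytorch_attention_supported_amd(device_index: int = 0) -> bool:
    # Hypothetical helper (not ComfyUI's API) mirroring the gate above.
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(device_index)
    # gcnArchName is only exposed by ROCm builds of PyTorch; fall back
    # to an empty string elsewhere so the arch match simply fails.
    arch = getattr(props, "gcnArchName", "")
    # The commit gates on torch >= 2.7; per the comment in the diff it
    # works on 2.6 but doesn't seem to improve much there.
    major, minor = (int(x) for x in torch.__version__.split(".")[:2])
    if (major, minor) < (2, 7):
        return False
    # RDNA3 arches already on the list, plus the RDNA4 gfx1200/gfx1201
    # added by this commit.
    return any(a in arch for a in ["gfx1100", "gfx1101", "gfx1151",
                                   "gfx1200", "gfx1201"])

On a 9070 XT (gfx1201) with a recent enough PyTorch this check passes, so the module-level code sets ENABLE_PYTORCH_ATTENTION and attention runs through PyTorch's scaled_dot_product_attention instead of the split/quad cross-attention fallbacks.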
