Pull requests: pytorch/xla

Fixes pytorch/xla#7398
#9047 opened Apr 26, 2025 by qianminj123

Update pin 04/25
#9045 opened Apr 26, 2025 by bhavya01

reduce redundant graph in collective op [labels: distributed SPMD and other distributed things]
#9044 opened Apr 25, 2025 by zpcore

Update bazel.md to replace Tensorflow with Openxla
#9042 opened Apr 25, 2025 by bhavya01

Use new tuned table
#9041 opened Apr 25, 2025 by bythew3i

Update test_export_fx_passes.py
#8933 opened Apr 3, 2025 by avikchaudhuri

test on debian12 [label: DO_NOT_MERGE, "Not for merging"]
#8928 opened Apr 2, 2025 by zpcore (Draft)

Make GPU CUDA plugin require JAX
#8919 opened Apr 1, 2025 by tengyifei (Draft)

[DRAFT/WIP] Add top-p masking
#8871 opened Mar 21, 2025 by hyeygit (Draft)

[1/N] Initial implementation of local SPMD support [labels: distributed SPMD and other distributed things]
#8810 opened Mar 9, 2025 by lsy323

Showcase jax.grad in torch_xla
#8800 opened Mar 5, 2025 by zpcore

Repro ragged paged attn kernel
#8752 opened Feb 26, 2025 by vanbasten23 (Draft)

Replace setup.py with pyproject.toml
#8744 opened Feb 26, 2025 by ManfeiBai

Follow up on ragged kernel wrapper
#8737 opened Feb 24, 2025 by vanbasten23 (Draft)

Document how to debug the dispatcher
#8712 opened Feb 15, 2025 by tengyifei

Add instruction for exporting inlined constant
#8707 opened Feb 13, 2025 by qihqi

Transition to Hermetic CUDA
#8665 opened Feb 3, 2025 by ysiraichi (Draft)