Issues: pytorch/pytorch
Unable to quantize resnet50 model with post training static quantization
#127391 · opened May 29, 2024 by nkhlS141

Backwards pass through Beta distribution rsample gives inf for 4 < alpha - 2**16 < 1040, beta = 3/2
#127387 · opened May 29, 2024 by douglas-boubert

Different tensor strides can result in surprisingly large discrepancies in Conv2d outputs
#127375 · opened May 29, 2024 by dragonmeteor

Torch.compile produces Exception: Please convert all Tensors to FakeTensors first or instantiate
Labels: oncall: pt2
#127374 · opened May 29, 2024 by melvinebenezer

DISABLED test_dtensor_op_db_bmm_cpu_float32 (__main__.TestDTensorOpsCPU)
Labels: module: flaky-tests (Problem is a flaky test in CI), oncall: distributed (Add this issue/PR to distributed oncall triage queue), skipped (Denotes a (flaky) test currently skipped in CI)
#127373 · opened May 29, 2024 by pytorch-bot (bot)

[Feature Request] switch amx isa detection in onednn to cpuinfo
Labels: module: mkldnn (Related to Intel IDEEP or oneDNN (a.k.a. mkldnn) integration), module: third_party
#127368 · opened May 29, 2024 by mingfeima

torch.export.export() throws an error when dealing with a weight-tying model
Labels: oncall: export
#127357 · opened May 28, 2024 by Hejiyu98

flake8: noqa disables flake8 linter for the whole file and it's not obvious
Labels: actionable, module: lint
#127352 · opened May 28, 2024 by kit1980

Loading Old Checkpoints with DTensor
Labels: oncall: distributed
#127351 · opened May 28, 2024 by mvpatel2000

Dynamo should prune non-live captured variables
Labels: module: dynamo, oncall: pt2
#127350 · opened May 28, 2024 by zou3519

~PyTorch Docathon H1 2024!~
Labels: docathon-h1-2024, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
#127345 · opened May 28, 2024 by svekars

DISABLED test_comprehensive_fft_ifft_cuda_float64 (__main__.TestInductorOpInfoCUDA)
Labels: module: flaky-tests, module: inductor, module: rocm (AMD GPU support for PyTorch), oncall: pt2, skipped, triaged
#127344 · opened May 28, 2024 by pytorch-bot (bot)

DISABLED test_arange2_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
Labels: module: flaky-tests, module: inductor, module: rocm, oncall: pt2, skipped, triaged
#127343 · opened May 28, 2024 by pytorch-bot (bot)

Add option for custom ops to automatically get a FakeTensor kernel (during static shapes)
Labels: module: pt2-dispatcher (PT2 dispatcher-related issues, e.g. aotdispatch, functionalization, faketensor, custom-op, ...), oncall: pt2
#127337 · opened May 28, 2024 by zou3519

Inductor fails at assert self.symbol_to_source.get(expr)
Labels: module: dynamic shapes, module: guards, module: inductor, oncall: pt2
#127328 · opened May 28, 2024 by laithsakka

torch.compile reorder_for_compute_comm_overlap sink_waits pass does not work
Labels: oncall: pt2
#127324 · opened May 28, 2024 by tombousso

[While_loop] How to use layer like torch.cond and similar torch.nn.BatchNorm2d with while_loop?
Labels: module: xla (Related to XLA support), triaged, module: higher order operators
#127320 · opened May 28, 2024 by ManfeiBai

torch.compile (inductor) bug: random signed number generation
Labels: oncall: cpu inductor (CPU Inductor issues for Intel team to triage), oncall: pt2
#127310 · opened May 28, 2024 by hgreving2304

I don’t know if it’s a problem with cuda or pytorch
Labels: module: cuda (Related to torch.cuda, and CUDA support in general), module: cudnn (Related to torch.backends.cudnn, and CuDNN support), needs reproduction (Someone else needs to try reproducing the issue given the instructions; no action needed from user), triaged
#127299 · opened May 28, 2024 by Ylinyuan

UNSTABLE linux-binary-manywheel / manywheel-py3_8-cuda12_4-test / test
Labels: module: ci (Related to continuous integration), triaged, unstable
#127289 · opened May 28, 2024 by atalman

UNSTABLE linux-binary-manywheel / manywheel-py3_8-cuda12_1-test / test
Labels: module: ci, triaged, unstable
#127288 · opened May 28, 2024 by atalman

Fused AdamW maybe should accept lr_dict directly?
Labels: module: optimizer (Related to torch.optim), needs research (We need to decide whether or not this merits inclusion, based on research world), triaged
#127284 · opened May 28, 2024 by Wongboo