The core challenge for PyTorch is combining Python’s flexibility with execution efficiency. To handle this, PyTorch converts the dynamic graph into a static graph on which it can apply optimization passes. The whole procedure is divided into 2 parts:

  • Frontend: capture the dynamic Python program as a static graph
  • Backend: apply optimization passes on the static graph

TorchScript

The legacy (PyTorch 1.x) frontend: torch.jit.script compiles a restricted Python subset into TorchScript IR, while torch.jit.trace records the operations from one example run.
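A minimal sketch of scripting (the clamp_relu function and its limit parameter are made up for this example); unlike tracing, scripting preserves data-dependent control flow in the IR:

```python
import torch

# torch.jit.script compiles the function's Python source into TorchScript IR,
# so the if-branch below survives as control flow in the graph.
@torch.jit.script
def clamp_relu(x: torch.Tensor, limit: float) -> torch.Tensor:
    y = torch.relu(x)
    if bool(y.max() > limit):  # data-dependent branch, kept by scripting
        y = y.clamp(max=limit)
    return y

x = torch.tensor([-1.0, 0.5, 3.0])
out = clamp_relu(x, 1.0)
print(out)  # tensor([0.0000, 0.5000, 1.0000])
```

Tracing the same function with torch.jit.trace would instead bake in whichever branch the example input happened to take.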

torch.fx

A Python-to-Python graph transformation toolkit: torch.fx.symbolic_trace produces a GraphModule whose node graph can be inspected, rewritten, and recompiled.
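A small sketch of the trace-transform-recompile cycle (the function f and the relu-to-sigmoid swap are invented for illustration):

```python
import torch
import torch.fx

def f(x):
    return torch.relu(x) + 1.0

# Symbolic tracing records ops on a fake "proxy" input into a Graph.
gm = torch.fx.symbolic_trace(f)
print(gm.graph)  # placeholder -> relu -> add -> output

# A toy transformation pass: swap relu for sigmoid node by node.
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.sigmoid
gm.recompile()  # regenerate the GraphModule's Python code

# Original: relu(0) + 1 = 1.0; transformed: sigmoid(0) + 1 = 1.5
print(gm(torch.tensor([0.0])))
```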

torch.compile

The PyTorch 2.x entry point that ties the pieces together: TorchDynamo captures the graph, AOTAutograd handles autograd, and a backend (TorchInductor by default) compiles it.
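Basic usage is one call; a sketch (the function f is made up, and the "eager" backend is chosen here so no compiler toolchain is needed — it runs the captured graph uncompiled, which is handy for debugging):

```python
import torch

def f(x):
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

# backend="eager": TorchDynamo still captures the graph, but the graph is
# executed as-is instead of being handed to TorchInductor.
compiled = torch.compile(f, backend="eager")

x = torch.randn(8)
print(torch.allclose(compiled(x), f(x)))  # same numerics as eager mode
```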

TorchDynamo

Interpreter-level tracing framework: it hooks CPython’s frame evaluation API (PEP 523) to translate Python bytecode into FX graphs just before a frame runs.

Guard

Each captured graph is cached behind guards: runtime checks on input properties such as type, shape, and dtype. While the guards hold, the cached graph is reused; when one fails, TorchDynamo recompiles.
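Recompilation can be observed by counting backend invocations. A sketch (counting_backend is invented; dynamic=False forces static shapes so that a shape change is guaranteed to fail a guard):

```python
import torch

compiles = []

def counting_backend(gm, example_inputs):
    compiles.append(gm)  # called once per (re)compilation
    return gm.forward

@torch.compile(backend=counting_backend, dynamic=False)
def f(x):
    return x + 1

f(torch.ones(3))  # first call: compile, install guards on the input
f(torch.ones(3))  # guards pass -> cached graph reused, no recompile
f(torch.ones(5))  # shape guard fails -> TorchDynamo recompiles
print(len(compiles))  # 2
```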

Graph Break

When TorchDynamo meets code it cannot trace (e.g. data-dependent Python control flow, print calls, unsupported builtins), it splits the function: the captured graph ends, the unsupported code runs in eager mode, and tracing resumes afterwards.

AOTAutograd

Transforms autograd into part of the static graph: the backward pass is traced ahead of time, using the __torch_dispatch__ mechanism to record the operations the autograd engine would run.

partition_fn decides how the traced joint forward/backward graph is split, i.e. which intermediate tensors must be saved from the forward pass for Autograd to use in the backward pass.
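AOTAutograd itself is internal to torch.compile, but the __torch_dispatch__ mechanism it builds on can be observed directly. A sketch (the OpLogger class is hypothetical, written for this example): a TorchDispatchMode sees every ATen op — including the backward ops — which is how AOTAutograd can record the backward pass into a graph ahead of time:

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class OpLogger(TorchDispatchMode):
    """Record every ATen op that reaches the dispatcher."""
    def __init__(self):
        super().__init__()
        self.ops = []

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        self.ops.append(func)  # e.g. aten.mul.Tensor, aten.sum.default
        return func(*args, **(kwargs or {}))

x = torch.ones(3, requires_grad=True)
with OpLogger() as log:
    y = (x * 2).sum()
    y.backward()  # the backward's ops are dispatched (and logged) too

print([str(op) for op in log.ops])
```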

TorchInductor

Backend optimizer: lowers the FX graph produced by the frontend into Triton kernels on GPU and C++/OpenMP code on CPU, applying fusion and scheduling passes along the way.
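TorchInductor is the default backend of torch.compile; the registered alternatives can be listed by name:

```python
import torch
import torch._dynamo

# list_backends() returns the names usable as torch.compile(backend=...);
# "inductor" (the default) appears alongside debug backends like "eager".
backends = torch._dynamo.list_backends()
print(backends)
```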