.. _supported-pytorch-ops:

Cerebras PyTorch Layer API
==========================

The Cerebras PyTorch Layer API implements a subset of PyTorch APIs with our own implementations, which take advantage of our high-performance kernels and provide extra functionality compared to the native PyTorch versions. The extra functionality is optional and opt-in; if you do not use it, the Layer API is equivalent to the native PyTorch version.

- :ref:`pytorch-ops-torch.nn.multihead-attention` is the replacement for ``torch.nn.MultiheadAttention``
- :ref:`pytorch-ops-torch.nn.transformer-decoder-layer` is the replacement for ``torch.nn.TransformerDecoderLayer``
- :ref:`pytorch-ops-torch.nn.transformer-decoder` is the replacement for ``torch.nn.TransformerDecoder``
- :ref:`pytorch-ops-torch.nn.transformer-encoder-layer` is the replacement for ``torch.nn.TransformerEncoderLayer``
- :ref:`pytorch-ops-torch.nn.transformer-encoder` is the replacement for ``torch.nn.TransformerEncoder``

.. note::
   Cerebras has moved away from Hugging Face model implementations in favor of our own PyTorch Layer API. One of the many benefits of the Layer API is that it is designed to be (near) drop-in compatible with the transformer layers included in PyTorch. Note, however, that it is not possible (at least for T5 and Transformer) to maintain the same naming scheme in the migrated model as in the original.

Supported PyTorch Optimizers
----------------------------

Cerebras PyTorch Optimizers implement most of the PyTorch optimizers in the ``torch.optim`` namespace as drop-in replacements with the exact same semantics. Our implementations take advantage of our hardware capabilities and fall back to GPU or CPU depending on the target device.

Supported optimizers:

1. SGD
2. SGDM
3. RMSprop
4. Adadelta
5. Lamb
6. RAdam
7. Adamax
8. Adafactor
9. Adagrad
10. Adam
11. AdamW
12. ASGD
13. NAdam
14. Rprop

Supported PyTorch Ops
---------------------

If your model implementation requires additional PyTorch ops beyond the Layer APIs above, Cerebras also supports the following PyTorch operations. A short usage sketch follows the lists below.

.. attention::
   The following list of supported PyTorch ops is very preliminary. We cannot guarantee that mixing and matching them in your models will work. Support is only provided for the way they are used in the Cerebras Model Zoo.

nn
~~

- ``torch.nn.BCEWithLogitsLoss``
- ``torch.nn.CrossEntropyLoss`` (known limitation: ``ignore_index`` can only be -100)
- ``torch.nn.Dropout``
- ``torch.nn.Embedding`` (known limitation: ``num_embeddings < 65536``)
- ``torch.nn.functional.dropout``
- ``torch.nn.functional.gelu`` (known limitation: may have precision issues when the approximation is not ``tanh``)
- ``torch.nn.functional.pad``
- ``torch.nn.functional.softmax``
- ``torch.nn.LayerNorm``
- ``torch.nn.Linear``
- ``torch.nn.MSELoss``
- ``torch.nn.NLLLoss`` (known limitation: ``ignore_index`` can only be -100)
- ``torch.nn.ReLU``
- ``torch.nn.Softmax``
- ``torch.nn.Tanh``

Functional
~~~~~~~~~~

- ``torch.nn.functional.log_softmax``
- ``torch.nn.functional.relu``
- ``torch.nn.functional.silu``

Other ops
~~~~~~~~~

- ``torch.abs``
- ``torch.all``
- ``torch.arange``
- ``torch.broadcast_to``
- ``torch.cat``
- ``torch.einsum``
- ``torch.flatten``
- ``torch.full``
- ``torch.full_like``
- ``torch.gather``
- ``torch.log``
- ``torch.matmul``
- ``torch.min``
- ``torch.ones``
- ``torch.rsqrt``
- ``torch.sigmoid``
- ``torch.sum``
- ``torch.tanh``
- ``torch.where``
- ``torch.zeros``
- ``torch.zeros_like``
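The following is a minimal sketch, not a Cerebras reference implementation, showing how ops from the lists above can be combined in an ordinary PyTorch model and trained with one of the supported optimizers. The model, sizes, and hyperparameters are illustrative assumptions; the sketch simply respects the limitations noted above (``num_embeddings`` below 65536, the default ``ignore_index`` of -100, and the ``tanh`` GELU approximation).

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative toy model built only from ops in the lists above.
    class ToyClassifier(nn.Module):
        def __init__(self, vocab_size=32000, hidden=256, num_classes=10):
            super().__init__()
            # vocab_size stays below the documented 65536 limit for nn.Embedding.
            self.embed = nn.Embedding(vocab_size, hidden)
            self.norm = nn.LayerNorm(hidden)
            self.proj = nn.Linear(hidden, num_classes)

        def forward(self, token_ids):
            x = self.embed(token_ids)
            # The tanh approximation avoids the precision caveat noted for F.gelu.
            x = F.gelu(x, approximate="tanh")
            x = self.norm(x)
            x = torch.sum(x, dim=1)  # pool over the sequence dimension
            return self.proj(x)

    model = ToyClassifier()
    # ignore_index is left at the documented value of -100.
    loss_fn = nn.CrossEntropyLoss(ignore_index=-100)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    tokens = torch.randint(0, 32000, (8, 16))  # batch of 8 sequences of length 16
    labels = torch.randint(0, 10, (8,))

    loss = loss_fn(model(tokens), labels)
    loss.backward()
    optimizer.step()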
Layers
------

.. toctree::
   :maxdepth: 2

   supported-pytorch-optimizers.rst
   supported-pt-learning-rate-schedulers.rst
   pytorch-ops-torch.nn.multihead-attention.rst
   pytorch-ops-torch.nn.transformer-decoder-layer.rst
   pytorch-ops-torch.nn.transformer-decoder.rst
   pytorch-ops-torch.nn.transformer-encoder-layer.rst
   pytorch-ops-torch.nn.transformer-encoder.rst
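As a rough illustration of the (near) drop-in compatibility described at the top of this page, the sketch below swaps ``torch.nn.MultiheadAttention`` for its Layer API counterpart. The import path used here is an assumption based on the Cerebras Model Zoo layout and may differ between releases; see the pages listed above for the exact constructor arguments and tensor layout each layer expects.

.. code-block:: python

    import torch

    # Native PyTorch layer, for comparison:
    #   attn = torch.nn.MultiheadAttention(embed_dim=256, num_heads=8)

    # Assumed import path; the actual module path may differ between
    # Model Zoo releases.
    from modelzoo.common.pytorch.layers import MultiheadAttention

    attn = MultiheadAttention(embed_dim=256, num_heads=8)

    # Layout follows the torch.nn default (sequence, batch, embed_dim);
    # check the MultiheadAttention page above for the layout this layer expects.
    x = torch.randn(16, 4, 256)
    out = attn(x, x, x)  # same (query, key, value) call pattern as torch.nn.MultiheadAttention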