Cerebras Wafer-Scale cluster (R2.1.0)
Get started
Get started by setting up your Cerebras virtual environment, trying out the state-of-the-art deep learning models in the Cerebras Model Zoo, and launching your first job on the Cerebras Wafer-Scale cluster:
- Set up a Cerebras virtual environment
- Clone the Cerebras Model Zoo
- Launch your first job on the Cerebras Wafer-Scale cluster
Familiarize yourself with the Cerebras PyTorch API, a package of functions and data structures tailored to running PyTorch models on the Cerebras Wafer-Scale cluster. Once you are comfortable with this package, you will be equipped to write your own training loop for your PyTorch models, as sketched below.
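This minimal sketch shows the overall pattern: a backend, cstorch.compile, a traced step function, and a DataExecutor driving the loop. It follows the general cerebras.pytorch workflow, but the toy model, loss function, and input_fn are placeholder assumptions rather than a definitive implementation.

```python
# Minimal custom-training-loop sketch with the Cerebras PyTorch API.
# The model, loss, and input_fn below are placeholder assumptions.
import torch
import cerebras.pytorch as cstorch

backend = cstorch.backend("CSX")  # target the Wafer-Scale cluster

model = torch.nn.Linear(784, 10)  # toy model for illustration
compiled_model = cstorch.compile(model, backend)

optimizer = cstorch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

@cstorch.trace  # capture the step so it can be compiled for the wafer
def training_step(inputs, targets):
    loss = loss_fn(compiled_model(inputs), targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss

def input_fn():
    # Real jobs stream fixed-shape batches from input workers; this stands in.
    data = torch.utils.data.TensorDataset(
        torch.randn(1024, 784), torch.randint(0, 10, (1024,))
    )
    return torch.utils.data.DataLoader(data, batch_size=64, drop_last=True)

dataloader = cstorch.utils.data.DataLoader(input_fn)
executor = cstorch.utils.data.DataExecutor(dataloader, num_steps=100)

for inputs, targets in executor:
    loss = training_step(inputs, targets)
```

The traced step is what gets compiled and executed on the cluster; everything outside it runs on the host.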
Beginner Tutorials
Get accustomed to the Cerebras Wafer-Scale cluster by preprocessing data, loading data, and training and fine-tuning deep neural networks with our step-by-step beginner tutorials.
How-to Guides
Take advantage of our advanced how-to guides for the Cerebras PyTorch API and other Cerebras-specific features:
- Write a custom training loop using the Cerebras PyTorch API
- Train a GPT model using Maximum Update Parametrization
- Train a model with a large or small context window
- Instruction fine-tune an LLM
- Train a model with weight sparsity
- Restart a dataloader
- Port a trained and fine-tuned model to Hugging Face
- Control numerical precision level (see the precision sketch after this list)
- Deploy Cerebras models
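As a taste of the precision guide, here is a hedged sketch built on the cstorch.amp module. The dtype choice and the dynamic loss-scaling setting are illustrative assumptions, not recommended defaults.

```python
# Sketch of numerical precision control via cerebras.pytorch's amp module.
# "bfloat16" and dynamic loss scaling are illustrative assumptions.
import cerebras.pytorch as cstorch

# Select the 16-bit dtype used for mixed-precision compute on the wafer.
cstorch.amp.set_half_dtype("bfloat16")

# If training in float16 instead of bfloat16, dynamic loss scaling guards
# the backward pass against underflow.
scaler = cstorch.amp.GradScaler(loss_scale="dynamic")

def scaled_step(loss, optimizer):
    # Standard GradScaler pattern: scale the loss, step, then update the scale.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```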
Port your model
If your model is one of the supported models in the Cerebras Model Zoo, port it by:
- Preprocessing your data using the Cerebras Model Zoo, including Hugging Face datasets
- Creating your own data processors (see the sketch after this list)
- Modifying configuration files
- Converting configurations and checkpoints from Hugging Face
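To show what a custom data processor amounts to in practice, here is a sketch: a plain PyTorch Dataset that serves fixed-shape tensors, which is the key constraint for compiling on the cluster. The class name, vocabulary size, and sequence length are illustrative assumptions.

```python
# A custom data processor sketch: a Dataset that yields fixed-shape tensors.
# Names, vocabulary size, and shapes below are illustrative assumptions.
import torch
from torch.utils.data import Dataset, DataLoader

class MyTextDataset(Dataset):
    """Serves pre-tokenized samples padded to a fixed sequence length."""

    def __init__(self, token_ids: torch.Tensor, max_seq_len: int = 128):
        # token_ids: (num_samples, max_seq_len) int tensor, padded up front
        assert token_ids.shape[1] == max_seq_len
        self.token_ids = token_ids

    def __len__(self):
        return self.token_ids.shape[0]

    def __getitem__(self, idx):
        x = self.token_ids[idx]
        # Next-token prediction: inputs are tokens[:-1], labels are tokens[1:].
        return x[:-1], x[1:]

def input_fn(batch_size: int = 32) -> DataLoader:
    data = torch.randint(0, 50257, (1024, 128))  # placeholder tokens
    # drop_last=True keeps every batch at the same static shape.
    return DataLoader(MyTextDataset(data), batch_size=batch_size, drop_last=True)
```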
Cerebras Features
When developing your model:
- Understand kernel autogeneration with Autogen
- Define environment variables for input workers
- Import user-specific dependencies to Cerebras environments
- Follow best practices for CV dataloaders
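As a sketch of those CV dataloader practices under assumed settings (the dataset, transform, and worker counts are placeholders), the pattern is: produce a static output shape, parallelize host-side preprocessing, and drop ragged final batches.

```python
# Sketch of common CV dataloader practices: fixed output shapes, multiple
# worker processes, and dropped ragged batches. Dataset and transform
# choices are placeholder assumptions.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def input_fn(batch_size: int = 64) -> DataLoader:
    transform = transforms.Compose([
        transforms.Resize((224, 224)),  # static spatial shape for compilation
        transforms.ToTensor(),
    ])
    dataset = datasets.FakeData(size=1024, image_size=(3, 256, 256),
                                transform=transform)
    return DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=4,           # parallel input workers keep the wafer fed
        prefetch_factor=2,       # overlap host preprocessing with compute
        persistent_workers=True,
        drop_last=True,          # every batch keeps the same static shape
    )
```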