Explore Cerebras Documentation
The Cerebras Wafer-Scale Cluster provides a framework for training and evaluating neural networks with near-perfect linear scaling across millions of cores, without the complexity of conventional distributed computing. Bring your own PyTorch model and dataset, or take advantage of the pre-configured Large Language Models (LLMs) and Computer Vision (CV) models in the Cerebras Model Zoo.
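To make "bring your own PyTorch model and dataset" concrete, the sketch below defines an ordinary PyTorch dataset, model, and training step of the kind you would then adapt to run on a Cerebras system. Everything here (model architecture, dataset, names, and hyperparameters) is hypothetical and purely illustrative; the Cerebras-specific steps for compiling and running on a cluster are covered later in this documentation and in the Model Zoo.

```python
# A minimal, self-contained PyTorch model and dataset of the kind you could
# bring to the Cerebras stack. All names, sizes, and hyperparameters are
# illustrative only; Cerebras-specific porting steps are documented elsewhere
# in this portal.
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset


class RandomTokenDataset(Dataset):
    """Synthetic token sequences standing in for a real training corpus."""

    def __init__(self, num_samples=1024, seq_len=128, vocab_size=50257):
        self.sequences = torch.randint(vocab_size, (num_samples, seq_len))

    def __len__(self):
        return len(self.sequences)

    def __getitem__(self, idx):
        # Next-token prediction: inputs are the sequence shifted by one.
        seq = self.sequences[idx]
        return seq[:-1], seq[1:]


class TinyLM(nn.Module):
    """A deliberately small language model used only for illustration."""

    def __init__(self, vocab_size=50257, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        x, _ = self.lstm(x)
        return self.head(x)


if __name__ == "__main__":
    model = TinyLM()
    loader = DataLoader(RandomTokenDataset(), batch_size=8)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for inputs, targets in loader:
        optimizer.zero_grad()
        logits = model(inputs)                       # [batch, seq, vocab]
        loss = loss_fn(logits.flatten(0, 1), targets.flatten())
        loss.backward()
        optimizer.step()
        break  # one step is enough for the illustration
```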
Browse this portal to learn Cerebras core concepts and features, the Cerebras PyTorch API, and the Model Zoo. It includes setup guides for creating a Cerebras virtual environment and installing the Model Zoo, along with step-by-step tutorials and how-to guides that cover each topic in detail. Begin by choosing your desired workflow:
Cerebras Wafer-Scale Cluster
Learn about the Cerebras framework that supports large models, from 1 billion parameters to well beyond, and large inputs such as sequences of 50,000+ tokens and 7k x 7k images. A cluster can consist of one or more CS-2 systems, and jobs can be distributed across all of the CS-2 systems in the cluster or a subset of them. A supporting CPU cluster in this installation consists of MemoryX, SwarmX, input worker, and management nodes.
Original Cerebras Installation
Learn about the original Cerebras installation, a single CS-2 deployment that supports models below 1 billion parameters using pipelined execution. It comprises one CS-2 system and a CPU cluster whose nodes act as coordinators and input workers.
Cerebras AI Model Studio
Learn about the Cerebras AI Model Studio, a computing service powered by Cerebras Wafer-Scale Clusters comprising 4+ CS-2 systems and hosted by Cirrascale Cloud Services. It is a purpose-built platform optimized for training and fine-tuning large language models on dedicated clusters of millions of cores.