Performance Optimization Practices¶
This document describes optimization practices to obtain the best performance from the Cerebras system. It is written for TensorFlow developers targeting the Cerebras wafer-scale engine.
Cerebras ML workflow¶
When you are targeting the Cerebras system for your neural network jobs, the ML workflow you will follow consists of the following steps, in the order specified:
Port your code to CS.
Prepare input data.
Compile on CPU.
Run on the CS system.
Prepare input data¶
Because the CS system can train a neural network at a very high speed, your input pipeline must be very fast to feed the input data to the CS system. A low input data throughput will become a bottleneck, preventing high CS system utilization. The following best practices will help optimize the input pipeline throughput.
Determine optimal input workers¶
Achieving high input data throughput often requires running your input pipeline as multiple worker processes across many CPU nodes at once.
Use The cs_input_analyzer Script. This script generates a report containing the recommended values for the following Slurm environment variables, which determine the number of CPU nodes to use in the cluster, the number of tasks per node, and the number of CPUs in a node assigned to each task.
nodes: $DEF_NODES
tasks_per_node: $DEF_TASKS
cpus_per_task: $DEF_CPUS
For optimal input data throughput, use the above recommended values with The csrun_wse Script when you train your model on the CS system.
Shard your input data¶
While it is necessary to use multiple CPU nodes as workers to stream the input data into the CS system, it is also critical that you configure the input pipeline to properly cycle through these multiple worker processes.
Optimal input data throughput can be achieved when you shard the input dataset across the N input worker nodes. Sharding is recommended when your dataset is very large or the local memory available on your CPU nodes is limited.
In this approach, you split your dataset into multiple, shorter, distinct files and allocate them across the worker nodes so that different workers have different slices of the dataset. Sharding can make managing the dataset easier. Sharding also improves the capacity to shuffle the dataset, because the shuffle buffer only needs enough memory to shuffle a shard, not the entire dataset. See Sharding For the CS system.
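As a minimal sketch of this idea in a tf.data input function: the worker_id and num_workers values below are illustrative placeholders for however your launcher supplies the worker index and count, and the file pattern is hypothetical.

import tensorflow as tf

def input_fn(params):
    # Illustrative assumption: the worker index and worker count are made
    # available to the input function, e.g. through params.
    num_workers = params.get("num_workers", 1)
    worker_id = params.get("worker_id", 0)

    # Dataset stored as many small TFRecord files (pattern is hypothetical).
    files = tf.data.Dataset.list_files("data/train-*.tfrecord", shuffle=False)

    # Shard by file so each worker streams a distinct slice of the dataset.
    files = files.shard(num_shards=num_workers, index=worker_id)

    dataset = tf.data.TFRecordDataset(files)
    # Shuffle within the shard; the buffer only needs to cover a shard,
    # not the entire dataset.
    dataset = dataset.shuffle(buffer_size=10_000).repeat()
    dataset = dataset.batch(params["batch_size"], drop_remainder=True)
    return dataset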
Enhance your input function code¶
In addition to sharding your input data, as described above, the input function code itself can be optimized for better performance. The compiler analyzes your input function and produces a detailed log that identifies any missing functions and recommends parameter values to enhance training performance on the CS system.
Note
Make sure you modify your input function code following the recommendations of the Input Function Report.
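The specific changes come from the Input Function Report, but they typically correspond to standard tf.data optimizations such as parallel record parsing and prefetching. A minimal sketch, with an illustrative file name and feature spec:

import tensorflow as tf

def parse_example(record):
    # Illustrative feature spec; replace with your model's features.
    features = {"x": tf.io.FixedLenFeature([784], tf.float32),
                "y": tf.io.FixedLenFeature([], tf.int64)}
    return tf.io.parse_single_example(record, features)

def input_fn(params):
    dataset = tf.data.TFRecordDataset("data/train-0000.tfrecord")

    # Parse records in parallel instead of one at a time.
    dataset = dataset.map(parse_example,
                          num_parallel_calls=tf.data.experimental.AUTOTUNE)

    dataset = dataset.shuffle(10_000).repeat()
    dataset = dataset.batch(params["batch_size"], drop_remainder=True)

    # Overlap input preprocessing with training on the CS system.
    dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
    return dataset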
Always compile on CPU first¶
Before you run anything on the CS system, we recommend that you first compile your model on a support cluster CPU node. Doing so has several benefits:
Validate only¶
You can run in validate_only mode, which runs a fast, light-weight verification. See Validate only.
Compile¶
After a successful validate_only run, you can run full compilation with compile_only mode. See compile-only.
Using compiled artifacts¶
Before running on a CS system, you can run compile_only on a support cluster CPU node and save the compiled artifacts. You can then reuse these artifacts when you run this network on the CS system, saving time in your workflow. To do so, run compile_only mode with the --cs_ip flag, passing in the IP address of the CS system. This ensures that csrun_cpu optimizes the compile for your specific CS system. See using-compiled-artifacts.
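If you drive compilation from Python rather than from the command line, the flow might look roughly like the sketch below. The CerebrasEstimator import path, constructor arguments, and compile() method with its validate_only argument are assumptions here, not confirmed by this document; the documented path is csrun_cpu with the appropriate --mode flag.

# Sketch only: import path and compile() signature are assumptions and may
# differ in your Cerebras release.
from cerebras.tf.cs_estimator import CerebrasEstimator  # assumed import path

estimator = CerebrasEstimator(
    model_fn,            # your existing Estimator model function
    params=params,       # your existing params dict
    model_dir="model_dir",
)

# 1. Fast, light-weight verification of the model graph (validate_only).
estimator.compile(input_fn, validate_only=True)

# 2. Full compilation (compile_only); the resulting artifacts can be reused
#    when the same network later runs on the CS system.
estimator.compile(input_fn)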
Use mixed-precision with dynamic loss scaling¶
If you are starting with a Keras model, then you will use the Keras Model to CerebrasEstimator wrapper to convert your model to use the CerebrasEstimator.
Important
Make sure that you use mixed precision by specifying mixed_float16 while using the KerasModelToCerebrasEstimator wrapper.
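A minimal sketch of the mixed_float16 setup using the standard Keras mixed-precision policy API; how the KerasModelToCerebrasEstimator wrapper itself receives the policy is not shown, since its exact signature is not specified in this document.

import tensorflow as tf

# Compute in float16 while keeping float32 master weights.
# (Older TF releases expose this as tf.keras.mixed_precision.experimental.set_policy.)
tf.keras.mixed_precision.set_global_policy("mixed_float16")

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
        # Keep the final layer's outputs in float32 for numeric stability.
        tf.keras.layers.Dense(10, dtype="float32"),
    ])
    return model

# Pass the model-building function to the KerasModelToCerebrasEstimator
# wrapper as described in the Cerebras documentation (usage not shown here).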
Use dynamic loss scaling¶
When you use mixed precision to train a neural network, you can obtain a considerable speedup in the training computation while requiring less memory bandwidth. However, simply converting data from FP32 to FP16 can result in increased training loss and eventually training divergence. This is because during the backward-pass computation of the loss gradients, many small gradient values underflow to 0 when converted from FP32 to FP16. To avoid such increased training loss, use dynamic loss scaling.
The CS system supports dynamic loss scaling (DLS) during training. DLS will automatically determine an appropriate loss scale for your network, making it easy for you to enable mixed precision training with just a few lines of code. See more in TensorFlow Dynamic Loss Scaling.
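On the CS system, DLS is enabled through the mechanism described in TensorFlow Dynamic Loss Scaling. As a general illustration of the technique (not the CS-specific API), stock TensorFlow wraps the optimizer so the loss is scaled up before the backward pass and the gradients are unscaled before the weight update:

import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# Dynamic loss scaling: the scale grows until gradients would overflow,
# then backs off, keeping small FP16 gradients from underflowing to zero.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)

# Estimator-based (TF1-style) code can instead use
# tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer(opt, loss_scale="dynamic").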
Use cbfloat16¶
Wherever it is supported, use the cbfloat16 data format. This is CB16 half-precision, a Cerebras-specific 16-bit format. With one more exponent bit than FP16, CB16 provides a bigger range, with the following benefits:
Denormals are far less frequent.
Dynamic loss scaling is not necessary on many networks.
Recommended models for cbfloat16¶
BERT for better throughput.
Monitor kernel performance metrics¶
When you compile your model with csrun_cpu, for example:
csrun_cpu python run.py --mode=compile_only
the compiler writes out an estimated performance for your neural network into a text file compile_report.txt in the directory where you compiled. These estimates are generated immediately after the network is compiled and give you valuable insight into how your network might perform on the CS system, without actually running it there.
See more at Compile Report.
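For example, after a compile_only run you can surface the estimates directly from your compile directory. This is a trivial sketch; compile_report.txt is the file named above, and the path assumed here is the directory where you compiled.

from pathlib import Path

# The compile report is written to the directory where you compiled.
report = Path("compile_report.txt")
if report.exists():
    print(report.read_text())
else:
    print("No compile_report.txt found; run csrun_cpu with --mode=compile_only first.")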