Workflow for TensorFlow on CS
When you use TensorFlow to target the Cerebras CS system, start with the high-level workflow described here.
Port to Cerebras
Prepare your model and the input function by using CerebrasEstimator in place of the TensorFlow Estimator. To run on the CS system you must use CerebrasEstimator; however, when you use CerebrasEstimator you can also run on a CPU or GPU with minimal code change. See Port TensorFlow to Cerebras to accomplish this step.
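The key point of the port is that your model_fn and input_fn stay the same and only the estimator class changes. The sketch below illustrates that pattern with a stdlib-only stand-in class; StubEstimator, its constructor arguments, and the placeholder functions are all illustrative assumptions, not the CerebrasEstimator API (see Port TensorFlow to Cerebras for the real interface).

```python
# Illustrative sketch only: StubEstimator stands in for an Estimator-style
# class. The real CerebrasEstimator follows the same pattern: keep your
# model_fn and input_fn, swap the estimator class.

class StubEstimator:
    """Stand-in with an Estimator-like interface (not the Cerebras API)."""

    def __init__(self, model_fn, model_dir, params=None):
        self.model_fn = model_fn
        self.model_dir = model_dir
        self.params = params or {}

    def train(self, input_fn, steps):
        # A real estimator would build the graph from model_fn and drive the
        # input pipeline; here we just record what would be executed.
        batches = [input_fn() for _ in range(steps)]
        return {"loss": self.model_fn(batches[-1], self.params)}

def input_fn():
    # Placeholder for a tf.data-based input pipeline.
    return [1.0, 2.0, 3.0]

def model_fn(features, params):
    # Placeholder for model construction; returns a dummy "loss".
    return sum(features) * params.get("scale", 1.0)

# Only this constructor line changes when you move between the TensorFlow
# Estimator and CerebrasEstimator: model_fn and input_fn are reused as-is.
estimator = StubEstimator(model_fn, model_dir="./model_out", params={"scale": 0.5})
result = estimator.train(input_fn, steps=2)
print(result)  # {'loss': 3.0}
```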
Prepare input
Achieving high input data throughput often requires running your input pipeline on multiple worker processes across many CPU nodes at once. Use the Prepare Input documentation to organize your input data pipeline.
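When an input pipeline runs on multiple workers, each worker typically reads a disjoint shard of the dataset so the workers together cover it exactly once. This plain-Python sketch models the round-robin sharding idea (the same idea behind tf.data's Dataset.shard); it is an illustration of the arithmetic, not Cerebras input-pipeline code.

```python
# Plain-Python model of round-robin sharding: worker w out of n workers
# takes every sample whose index i satisfies i % n == w.

def shard(samples, num_workers, worker_index):
    """Return the slice of `samples` that `worker_index` should process."""
    return [s for i, s in enumerate(samples) if i % num_workers == worker_index]

samples = list(range(10))
shards = [shard(samples, num_workers=3, worker_index=w) for w in range(3)]
print(shards)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]

# No sample is dropped or duplicated across the workers.
assert sorted(s for sh in shards for s in sh) == samples
```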
Attention
Wherever you place your model and the associated scripts in your CS system support cluster, ensure that they are accessible from every node in the cluster.
Compile on CPU
Before you run your model on the CS system, we recommend that you iterate until it first compiles successfully on a CPU node that has the Cerebras Singularity container client software installed. To accomplish this step, proceed as follows:
First run in the validate_only mode. Then run a full compile with compile_only.
See Compile on CPU to perform this step.
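The two compile modes form a cheap-to-expensive pre-flight sequence: validate_only is a fast check that the model can be lowered at all, while compile_only performs the full compile without running training. The mode names come from this documentation; run_compile below is a hypothetical stand-in that only models that ordering, not the Cerebras API.

```python
# Sketch of the two-phase pre-flight flow described above. validate_only /
# compile_only are the documented mode names; this function is a stand-in.

def run_compile(model_ok, validate_only=False, compile_only=False):
    """Model the compile phases as increasingly expensive checks."""
    if validate_only:
        # Fast pass: check that the graph can be lowered at all.
        return "validated" if model_ok else "validation failed"
    if compile_only:
        # Full compile: produces an executable artifact, no training run.
        return "compiled" if model_ok else "compile failed"
    return "ran"

# Iterate on the cheap check first, then do the full compile.
assert run_compile(model_ok=True, validate_only=True) == "validated"
assert run_compile(model_ok=True, compile_only=True) == "compiled"
```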
Note
The run.py Template documents an example template that can help you organize your code so these steps become easy to manage. However, the run.py described in The run.py Template is an example template only. You can organize your code in whichever way best suits you, as long as you use CerebrasEstimator.
Hardware requirements
Also make sure that the hardware you use for this step satisfies the minimum requirements stated in Hardware resource recommendations.
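One common way to organize the steps above is a driver script that dispatches on a mode flag. The sketch below shows one possible shape for such a dispatcher; it is an illustration only, not the actual run.py template, and the flag names beyond validate_only/compile_only (such as --model_dir) are assumptions.

```python
# Sketch of a run.py-style driver dispatching on --mode. Illustrative only.

import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Driver sketch")
    parser.add_argument("--mode",
                        choices=["validate_only", "compile_only",
                                 "train", "eval", "predict"],
                        required=True)
    parser.add_argument("--model_dir", default="./model_out")
    return parser

def main(argv):
    args = build_parser().parse_args(argv)
    # Each branch would construct the estimator and call the matching
    # method; here we just report the dispatch decision.
    return f"dispatching {args.mode} with model_dir={args.model_dir}"

print(main(["--mode", "train"]))  # dispatching train with model_dir=./model_out
```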
Run the job on CS system
Finally, train, evaluate, or predict on the CS system. Use the checkpoints to run evaluation on a GPU or CPU. See Train, Eval and Predict for documentation on this step.
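The checkpoint handoff is what makes it possible to train on the CS system and evaluate elsewhere: training periodically writes checkpoints to the model directory, and evaluation restores from the latest one. This stdlib-only sketch models that handoff; the JSON file layout and names are illustrative, not the TensorFlow checkpoint format.

```python
# Minimal model of the train-then-evaluate checkpoint handoff. Training (on
# the CS system) writes checkpoints; evaluation (on a CPU or GPU) restores
# from the latest checkpoint in the same model directory.

import json
import os
import tempfile

def train(model_dir, steps):
    weights = {"w": 0}
    for step in range(1, steps + 1):
        weights["w"] += 1  # stand-in for a real weight update
        path = os.path.join(model_dir, f"ckpt-{step}.json")
        with open(path, "w") as f:
            json.dump(weights, f)
    return weights

def latest_checkpoint(model_dir):
    # Pick the checkpoint with the highest step number.
    ckpts = sorted(os.listdir(model_dir),
                   key=lambda n: int(n.split("-")[1].split(".")[0]))
    return os.path.join(model_dir, ckpts[-1])

def evaluate(model_dir):
    # Restore the latest weights and run (here: report) evaluation.
    with open(latest_checkpoint(model_dir)) as f:
        return json.load(f)

model_dir = tempfile.mkdtemp()
train(model_dir, steps=3)
print(evaluate(model_dir))  # {'w': 3}
```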