Workflow for TensorFlow on CS¶
When using TensorFlow to target the Cerebras CS system, start with the high-level workflow described here.
Port to Cerebras¶
Prepare your model and input function by using CerebrasEstimator in place of the TensorFlow Estimator. To run on the CS system you must use CerebrasEstimator; however, with CerebrasEstimator you can also run on a CPU or GPU with minimal code change. See Port TensorFlow to Cerebras to accomplish this step.
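The value of the drop-in replacement is that one model_fn/input_fn pair drives CPU, GPU, and CS runs. The following is a pure-Python sketch of that calling contract; EstimatorLike is an illustrative stand-in, not the real CerebrasEstimator or tf.estimator.Estimator API.

```python
# Pure-Python sketch (no TensorFlow required) of the Estimator call pattern
# this workflow relies on. EstimatorLike is an illustrative stand-in only.

class EstimatorLike:
    def __init__(self, model_fn, params=None):
        self.model_fn = model_fn
        self.params = params or {}

    def train(self, input_fn, steps):
        # A real Estimator would build the tf.data pipeline and graph here;
        # the stand-in only exercises the same calling contract.
        features, labels = input_fn(self.params)
        spec = self.model_fn(features, labels, "train", self.params)
        return {"steps": steps, "loss": spec["loss"]}


def input_fn(params):
    # In real code this would return a tf.data.Dataset of (features, labels).
    return [1.0, 2.0, 3.0], [1, 0, 1]


def model_fn(features, labels, mode, params):
    # In real code this would return a tf.estimator.EstimatorSpec.
    return {"mode": mode, "loss": sum(features)}


# Because the interface is identical, swapping EstimatorLike for
# CerebrasEstimator (or tf.estimator.Estimator) requires no change to
# model_fn or input_fn.
result = EstimatorLike(model_fn).train(input_fn, steps=10)
```

This is why the same scripts can target a CPU, GPU, or the CS system with minimal change: only the estimator construction differs, never the model or input code.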
Achieving high input data throughput often requires running your input pipeline on multiple worker processes across many CPU nodes at once. Use the Prepare Input documentation to organize your input data pipeline.
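When the input pipeline runs on many workers, each worker should read a disjoint slice of the input files; in tf.data this is what Dataset.shard(num_shards, index) does. A pure-Python sketch of the same round-robin split (file names are illustrative):

```python
# Round-robin sharding of input files across worker processes, mirroring
# what tf.data.Dataset.shard(num_shards, index) does in a real pipeline.

def shard(files, num_workers, worker_index):
    """Return the slice of `files` that a given worker should read."""
    if not 0 <= worker_index < num_workers:
        raise ValueError("worker_index must be in [0, num_workers)")
    return files[worker_index::num_workers]


files = [f"train-{i:05d}.tfrecord" for i in range(10)]  # illustrative names
per_worker = [shard(files, 4, w) for w in range(4)]
# Every file lands on exactly one of the 4 workers, so no two workers
# read the same data.
```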
Wherever you place your model and the associated scripts in your Original Cerebras Support-Cluster, ensure that they are accessible from every node in the cluster.
Compile on CPU¶
Before you run your model on the CS system, we recommend that you iterate until it compiles successfully on a CPU node that has the Cerebras Singularity container client software installed. To accomplish this step, proceed as follows:
First, run in validate_only mode for a fast check of your model code.
Then, run a full compile with compile_only mode.
See Compile on CPU to perform this step.
The run.py Template documents an example template that can help you organize your code so these steps are easy to manage. However, the run.py described in The run.py Template is an example only. You can organize your code in whichever way best suits you, as long as you use CerebrasEstimator.
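Whatever layout you choose, a run.py-style script typically exposes the compile and execution steps as a mode flag. The sketch below shows one way to structure that dispatch with argparse; the mode names follow Cerebras documentation conventions, and the flag names and action mapping are illustrative, not the exact template.

```python
# Minimal sketch of the mode dispatch a run.py-style script might expose.
# Mode names follow Cerebras documentation conventions; the flag names and
# the action mapping here are illustrative only.

import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Drive CerebrasEstimator runs")
    parser.add_argument(
        "--mode",
        choices=["validate_only", "compile_only", "train", "eval"],
        default="train",
        help="validate_only: fast model-code check; compile_only: full "
             "Cerebras compile; train/eval: execute on the chosen target",
    )
    return parser.parse_args(argv)


def select_action(mode):
    # Map the CLI mode to the action the script would perform.
    return {
        "validate_only": "validate",
        "compile_only": "compile",
        "train": "train",
        "eval": "evaluate",
    }[mode]


args = parse_args(["--mode", "compile_only"])
```

Keeping the mode switch in one place means the same script serves the compile-on-CPU iterations and the eventual CS system run.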
Also make sure that the hardware you use for this step satisfies the minimum requirements stated in Hardware resource recommendations.