TensorFlow Quickstart

If you are new to Cerebras, then begin with this quickstart. Before you get into in-depth development, follow this quickstart to familiarize yourself at a high level with the Cerebras system workflow.

If you already explored this quickstart

If you already explored this quickstart, then skip to the Cerebras Command Line Pattern, which you will need to know to compile on a CPU and execute on the Cerebras system.

If you are ready to develop

For an in-depth development guide using TensorFlow for Cerebras, skip to: Workflow for TensorFlow on CS.

This quickstart provides step-by-step instructions to:

  1. Clone the reference samples GitHub repository. This repository contains the neural network models that are validated on the Cerebras system.

  2. Compile a simple, fully-connected MNIST (FC-MNIST) model on a CPU node in your Cerebras cluster. This step is recommended before you run the model on the Cerebras system.

  3. Train and evaluate the model on a CPU node. This approach is recommended for your development workflow, as it gives you better control when debugging your model before you run it on the Cerebras system.

  4. Finally, run the model training job directly on your CS system. In this approach, compilation is also done directly on the CS system.

Prerequisites

Attention

Go over this Checklist Before You Quickstart before you proceed.

Clone the reference samples repository

  1. Log in to your CS system cluster.

  2. Clone the reference samples repository to your preferred location in your home directory.

    git clone https://github.com/Cerebras/cerebras_reference_implementations.git
    

In the reference samples directory you will see a few models for PyTorch and TensorFlow. In this quickstart we will use the FC-MNIST model.

  3. Navigate to the fc_mnist model directory.

    cd cerebras_reference_implementations/fc_mnist/tf/
    

Compile on CPU

We recommend that you first compile your model successfully on a CPU node in your support cluster before running it on the CS system.

  • You can run in validate_only mode, which performs a fast, lightweight verification. In this mode, the compilation will only run through the first few stages, up until kernel library matching.

  • After a successful validate_only run, you can run full compilation with compile_only mode.

This section of the quickstart shows how to execute these steps on a CPU node.

Tip

The validate_only step is very fast, enabling you to rapidly iterate on your model code. Without needing access to the CS system wafer scale engine, you can determine in this validate_only step if you are using any TensorFlow layer or functionality that is unsupported by either XLA or CGC.
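Conceptually, kernel library matching checks each operation in the extracted graph against the set of supported kernels. The sketch below illustrates only the idea; it is not the Cerebras compiler, and both the op names and the supported set are made up for illustration.

```python
# Conceptual sketch ONLY -- not Cerebras compiler internals.
# Illustrates the idea behind validate_only: flag any op in the model
# that has no match in a known-supported kernel set, without building
# a full executable. All names here are hypothetical.

SUPPORTED_OPS = {"Dense", "Relu", "Softmax", "MatMul", "BiasAdd"}

def validate_ops(model_ops):
    """Return the ops that would fail kernel library matching."""
    return [op for op in model_ops if op not in SUPPORTED_OPS]

unsupported = validate_ops(["Dense", "Relu", "FancyCustomOp", "Softmax"])
if unsupported:
    print("validation failed, unsupported ops:", unsupported)
else:
    print("validation passed")
```

Because no executable is generated, a check of this kind finishes in seconds, which is why iterating with validate_only first is recommended.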

Follow these steps:

  1. Navigate to the model directory.

    cd cerebras_reference_implementations/fc_mnist/tf/
    
  2. Run the compilation in validate_only mode.

    csrun_cpu python run.py --mode train --validate_only
    ...
    XLA Extraction Complete
    =============== Starting Cerebras Compilation ===============
    Cerebras compilation completed: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02s,  1.23s/stages]
    =============== Cerebras Compilation Completed ===============

Note

The validate_only mode checks the kernel compatibility of your model. When your model passes this mode, run the full compilation with compile_only to generate the CS system executable.

  3. Run the full compilation process in compile_only mode.

    This step runs the full compilation through all stages of the Cerebras software stack to generate a CS system executable.

    csrun_cpu python run.py --mode train --compile_only --cs_ip <specify your CS_IP>
    ...
    XLA Extraction Complete
    =============== Starting Cerebras Compilation ===============
    Cerebras compilation completed: |                    | 17/? [00:18s,  1.09s/stages]
    =============== Cerebras Compilation Completed ===============
    

When the above compilation is successful, the model is guaranteed to run on the CS system. You can also use this mode to run pre-compilations of many different model configurations offline, so that you can more fully utilize allotted CS system cluster time.
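The pre-compilation idea above can be sketched as a simple shell loop. The `params_*.yaml` file names and the `--params` flag are assumptions for illustration only; the commands are echoed here so you can review them before running them for real.

```shell
# Sketch: queue full compiles for several hypothetical model configs
# ahead of CS system time. File names and the --params flag are
# illustrative assumptions, not confirmed repo conventions; echo each
# command instead of executing it.
for cfg in params_small.yaml params_medium.yaml params_large.yaml; do
  echo csrun_cpu python run.py --mode train --compile_only --params "$cfg"
done
```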

Note

The compiler will detect if a binary already exists for a particular model config and will skip compiling on-the-fly during training if it detects one.
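One common way to implement this kind of artifact reuse is to key the compiled binary on a hash of the model configuration. The sketch below is illustrative only; it is not how the Cerebras compiler actually stores artifacts, and all paths and names are hypothetical.

```python
# Illustrative sketch, NOT Cerebras internals: skip recompiling when a
# binary for the same model config already exists, keyed by a stable
# hash of the config. Paths and names are hypothetical.
import hashlib
import os

def artifact_path(config: dict, cache_dir: str = "compile_cache") -> str:
    # Hash a canonical string form of the config for a stable cache key.
    key = hashlib.sha256(repr(sorted(config.items())).encode()).hexdigest()[:16]
    return os.path.join(cache_dir, f"{key}.bin")

def needs_compile(config: dict) -> bool:
    # Compile only if no binary exists for this exact config.
    return not os.path.exists(artifact_path(config))

cfg = {"hidden_size": 50, "depth": 2, "lr": 0.001}
print("compile needed:", needs_compile(cfg))
```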

Train and evaluate on CPU

To train and evaluate on the CPU, follow these steps:

  1. Navigate to the model directory.

    cd cerebras_reference_implementations/fc_mnist/tf/
    
  2. Train and evaluate the model on the CPU.

    # train on CPU
    csrun_cpu python run.py --mode train
    
    # run eval on CPU
    csrun_cpu python run.py --mode eval  --eval_steps 1000
    
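The `--mode` flag drives which path the script takes. As a rough sketch of how a run.py-style entry point might dispatch on its flags (the actual Cerebras reference script differs; the training and evaluation bodies here are stubs):

```python
# Hedged sketch of a run.py-style entry point dispatching on --mode.
# The real reference implementation differs; train/evaluate are stubs.
import argparse

def train(max_steps):
    return f"trained for {max_steps} steps"

def evaluate(eval_steps):
    return f"evaluated for {eval_steps} steps"

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", choices=["train", "eval"], required=True)
    parser.add_argument("--max_steps", type=int, default=100000)
    parser.add_argument("--eval_steps", type=int, default=1000)
    args = parser.parse_args(argv)
    if args.mode == "train":
        return train(args.max_steps)
    return evaluate(args.eval_steps)

print(main(["--mode", "eval", "--eval_steps", "1000"]))
```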

Run the model on the CS system

  1. Run the model on the CS system.

    The csrun_wse command below compiles the code if no existing compile artifacts are found, and then runs the compiled executable on the CS system.

    csrun_wse python run.py --mode train \
        --cs_ip <YOUR-CS1-IP-ADDRESS> \
        --max_steps 100000
    

    Note

    The max_steps and other parameters such as save_checkpoints_steps can also be set in the params.yaml file.
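As an illustration, a params.yaml entry for these settings might look like the fragment below. Only the max_steps and save_checkpoints_steps keys are named in this guide; the surrounding structure is an assumption and may differ in the reference repository.

```yaml
# Illustrative params.yaml fragment. Only max_steps and
# save_checkpoints_steps are named in this guide; the runconfig
# grouping is an assumption, not a confirmed repo convention.
runconfig:
    max_steps: 100000
    save_checkpoints_steps: 10000
```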

    The above command trains the FC-MNIST model for 100,000 steps by executing on the CS system at the IP address specified in the --cs_ip flag. When the command executes, you will see an output similar to the following:

    srun: job 5834 queued and waiting for resources
    srun: job 5834 has been allocated resources
    ...
    INFO:tensorflow:Graph was finalized.
    INFO:tensorflow:Running local_init_op.
    INFO:tensorflow:Done running local_init_op.
    INFO:tensorflow:Saving checkpoints for 0 into model_dir/model.ckpt.
    INFO:tensorflow:Programming CS system fabric. This may take a couple of minutes - please do not interrupt.
    INFO:tensorflow:Fabric programmed
    INFO:tensorflow:Coordinator fully up. Waiting for Streaming (using 0.97% out of 301600 cores on the fabric)
    INFO:tensorflow:Graph was finalized.
    INFO:tensorflow:Running local_init_op.
    INFO:tensorflow:Done running local_init_op.
    ...
    INFO:tensorflow:Training finished with 25600000 samples in 187.465 seconds, 136558.69 samples / second
    INFO:tensorflow:Saving checkpoints for 100000 into model_dir/model.ckpt.
    INFO:tensorflow:global step 100000: loss = 1.901388168334961e-05 (532.0 steps/sec)
    INFO:tensorflow:global step 100000: loss = 1.901388168334961e-05 (532.0 steps/sec)
    INFO:tensorflow:Loss for final step: 1.9e-05.