modelzoo.transformers.pytorch.bert.fine_tuning.classifier.input.BertClassifierDataProcessor.DataProcessor

class modelzoo.transformers.pytorch.bert.fine_tuning.classifier.input.BertClassifierDataProcessor.DataProcessor[source]

Bases: abc.ABC

Base class for processors that load their raw data from TFDS. Child classes must provide map_fn and name.

Parameters
  • data_params (dict) –

    Input parameters for creating dataset. Expects the following fields:

    • "vocab_file" (str): Path to the vocabulary file.

    • "data_dir" (str): Path to the directory containing the TF Records.

    • "batch_size" (int): Batch size.

    • "max_sequence_length" (int): Maximum length of the sequence.

    • "shuffle" (bool): Flag to enable data shuffling.

    • "shuffle_seed" (int): Shuffle seed.

    • "shuffle_buffer" (int): Shuffle buffer size.

    • "do_lower" (bool): Flag to lowercase the text.

    • "num_workers" (int): Number of subprocesses to use for data loading.

    • "drop_last" (bool): If True and the dataset size is not divisible by the batch size, the last incomplete batch is dropped.

    • "prefetch_factor" (int): Number of samples loaded in advance by each worker.

    • "persistent_workers" (bool): If True, the data loader does not shut down the worker processes after the dataset has been consumed once.

  • model_params (dict, optional) – Model parameters for creating the dataset, unused.
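The fields above are passed as a plain dictionary. A minimal sketch of such a dictionary, where all paths and values are hypothetical placeholders chosen for illustration:

```python
# Hypothetical data_params dict covering the fields listed above.
# Paths and values are placeholders, not recommended settings.
data_params = {
    "vocab_file": "/path/to/vocab.txt",          # vocabulary file
    "data_dir": "/path/to/tfrecords",            # directory with TF Records
    "batch_size": 32,
    "max_sequence_length": 128,
    "shuffle": True,
    "shuffle_seed": 1337,
    "shuffle_buffer": 10000,
    "do_lower": True,                            # lowercase input text
    "num_workers": 4,                            # data-loading subprocesses
    "drop_last": True,                           # drop last incomplete batch
    "prefetch_factor": 2,
    "persistent_workers": True,                  # keep workers alive between epochs
}
```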

Methods

create_dataloader

create_dataset

__init__(data_params, model_params) → None[source]
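Since the class is abstract, it is used by subclassing and supplying the map_fn and name that the docstring requires. A standalone sketch of that pattern, where the base class body, the SentimentProcessor subclass, and the map_fn behavior are all illustrative stand-ins rather than the modelzoo implementation:

```python
import abc

class DataProcessor(abc.ABC):
    """Stand-in sketch of the base class; not the modelzoo implementation."""
    def __init__(self, data_params, model_params=None):
        # Pull a couple of fields from data_params for illustration.
        self.batch_size = data_params["batch_size"]
        self.shuffle = data_params.get("shuffle", True)

class SentimentProcessor(DataProcessor):
    # Child classes provide a dataset name and a per-example map function.
    name = "sentiment"

    @staticmethod
    def map_fn(example):
        # Hypothetical mapping from a raw record to model features.
        return {"text": example["text"].lower(), "label": example["label"]}

proc = SentimentProcessor({"batch_size": 8})
features = proc.map_fn({"text": "Great Movie", "label": 1})
```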