modelzoo.transformers.pytorch.gpt2.input.GptHDF5DataProcessor.GptHDF5DataProcessor#

class modelzoo.transformers.pytorch.gpt2.input.GptHDF5DataProcessor.GptHDF5DataProcessor[source]#

Bases: modelzoo.transformers.data_processing.HDF5IterableDataProcessor.HDF5IterableDataProcessor

An HDF5 dataset processor for GPT pre-training. Loads data from HDF5 files.

:param dict params: dict containing training input parameters for creating the dataset.

Expects the following fields (an example params dict is sketched after this list):

  • “data_dir” (str or list of str): Path to dataset HDF5 files.

  • “batch_size” (int): Batch size.

  • “shuffle” (bool): Flag to enable data shuffling.

  • “shuffle_buffer” (int): Size of shuffle buffer in samples.

  • “shuffle_seed” (int): Shuffle seed.

  • “num_workers” (int): How many subprocesses to use for data loading.

  • “drop_last” (bool): If True and the dataset size is not divisible by the batch size, the last incomplete batch will be dropped.

  • “prefetch_factor” (int): Number of batches loaded in advance by each worker.

  • “persistent_workers” (bool): If True, the data loader will not shut down the worker processes after a dataset has been consumed once.

  • “use_vsl” (bool): Flag to enable variable sequence length training. It requires the dataset to have two extra features: the attention_span of keys and the position_ids of tokens. Defaults to False.
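
The following is a minimal, illustrative sketch of such a params dict and of constructing the processor; the concrete values (path, batch size, buffer and worker counts) are placeholder assumptions, not recommended settings.

    # Hypothetical example; field values are placeholders.
    from modelzoo.transformers.pytorch.gpt2.input.GptHDF5DataProcessor import (
        GptHDF5DataProcessor,
    )

    params = {
        "data_dir": "/path/to/gpt2_hdf5_dataset",  # str or list of str
        "batch_size": 64,
        "shuffle": True,
        "shuffle_buffer": 16384,    # shuffle buffer size in samples
        "shuffle_seed": 1,
        "num_workers": 4,
        "drop_last": True,
        "prefetch_factor": 10,
        "persistent_workers": True,
        "use_vsl": False,           # if True, dataset needs attention_span and position_ids
    }

    dataprocessor = GptHDF5DataProcessor(params)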

Methods

collate_fn

create_dataloader
    Method to create the dataloader object.

__init__(params)[source]#
create_dataloader()#

Method to create the dataloader object.
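
As a minimal usage sketch, continuing the params example above: the method is called on a constructed processor instance to obtain the dataloader object. The batch structure noted in the comment is an assumption and depends on how the HDF5 dataset was written.

    # Hypothetical usage; `params` is the dict from the example above.
    dataprocessor = GptHDF5DataProcessor(params)
    dataloader = dataprocessor.create_dataloader()

    for batch in dataloader:
        # Batch layout depends on the dataset; commonly a dict of tensors
        # such as input_ids / attention_mask / labels (assumption).
        print(type(batch))
        break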